Abdallah El Ali
27 records found
Within our Distributed and Interactive Systems research group, we focus on affective haptics, where we design and develop systems that can enhance human emotional states through the sense of touch. Such artificial haptic sensations can potentially augment and enhance our mind, body
...
ShareYourReality
Investigating Haptic Feedback and Agency in Virtual Avatar Co-embodiment
Virtual co-embodiment enables two users to share a single avatar in Virtual Reality (VR). During such experiences, the illusion of shared motion control can break during joint-action activities, highlighting the need for position-aware feedback mechanisms. Drawing on the perceptual
...
Transparent AI Disclosure Obligations
Who, What, When, Where, Why, How
Advances in Generative Artificial Intelligence (AI) are resulting in AI-generated media output that is (nearly) indistinguishable from human-created content. This can drastically impact users and the media sector, especially given global risks of misinformation. While the current
...
Reflecting on Hybrid Events
Learning from a Year of Hybrid Experiences
The COVID-19 pandemic led to a sudden shift to virtual work and events, with the last two years enabling an appropriated and rather simulated togetherness: the hybrid mode. As we return to in-person events, it is important to reflect on not only what we learned about technologies
...
Affective Driver-Pedestrian Interaction
Exploring Driver Affective Responses toward Pedestrian Crossing Actions using Camera and Physiological Sensors
Eliciting and capturing drivers' affective responses in a realistic outdoor setting with pedestrians poses a challenge when designing in-vehicle, empathic interfaces. To address this, we designed a controlled, outdoor car driving circuit where drivers (N=27) drove and encountered
...
Measuring interoception ('perceiving internal bodily states') has diagnostic and wellbeing implications. Since heartbeats are distinct and frequent, various methods aim at measuring cardiac interoceptive accuracy (CIAcc). However, the role of exteroceptive modalities for representing
...
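The abstract truncates before any scoring details, but CIAcc in heartbeat counting tasks is commonly computed with a Schandry-style score; a minimal sketch, assuming that convention (function and variable names are illustrative):

```python
def cardiac_interoceptive_accuracy(recorded_beats, counted_beats):
    """Schandry-style heartbeat-counting accuracy, averaged over trials.

    recorded_beats: actual heartbeats per trial (e.g., from ECG/PPG)
    counted_beats:  heartbeats the participant reported per trial
    Returns a score in [0, 1]; higher means more accurate interoception.
    """
    scores = [
        1 - abs(actual - counted) / actual
        for actual, counted in zip(recorded_beats, counted_beats)
    ]
    return sum(scores) / len(scores)

# Three hypothetical trials: the participant slightly undercounts each time.
print(cardiac_interoceptive_accuracy([30, 42, 55], [27, 40, 50]))  # ~0.92
```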
From Video to Hybrid Simulator
Exploring Affective Responses toward Non-Verbal Pedestrian Crossing Actions Using Camera and Physiological Sensors
Capturing drivers’ affective responses given driving context and driver-pedestrian interactions remains a challenge for designing in-vehicle, empathic interfaces. To address this, we conducted two lab-based studies using camera and physiological sensors. Our first study collected
...
BreatheWithMe
Exploring Visual and Vibrotactile Displays for Social Breath Awareness during Colocated, Collaborative Tasks
Sharing breathing signals has the capacity to provide insights into hidden experiences and enhance interpersonal communication. However, it remains unclear how the modality of breath signals (visual, haptic) is socially interpreted during collaborative tasks. In this mixed-method
...
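As a rough illustration of the display mechanic this work compares (the sensor range and mapping constants are hypothetical, not from the paper), a normalized breath signal can drive either a visual or a vibrotactile channel:

```python
def normalize_breath(sample, lo=300, hi=900):
    """Clamp and scale a raw respiration-sensor sample to [0, 1]."""
    level = (sample - lo) / (hi - lo)
    return min(max(level, 0.0), 1.0)

def to_visual(level):
    """Map breath level to the radius of an on-screen circle, in pixels."""
    return 20 + level * 80

def to_vibrotactile(level):
    """Map breath level to an actuator amplitude in 0..255."""
    return int(level * 255)

level = normalize_breath(512)
print(to_visual(level), to_vibrotactile(level))  # ~48.3 90
```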
FeelTheNews
Augmenting Affective Perceptions of News Videos with Thermal and Vibrotactile Stimulation
Emotion plays a key role in the emerging wave of immersive, multi-sensory audience news engagement experiences. Since emotions can be triggered by somatosensory feedback, in this work we explore how augmenting news video watching with haptics can influence affective perceptions of
...
Instead of predicting just one emotion for one activity (e.g., video watching), fine-grained emotion recognition enables more temporally precise recognition. Previous works on fine-grained emotion recognition require segment-by-segment, fine-grained emotion labels to train the recognition
...
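To make the labeling distinction concrete: fine-grained recognition needs a label per short segment, whereas retrospective self-reports yield one label per stimulus. A common weakly supervised workaround, sketched here under assumed window parameters (names are illustrative, not the paper's method), propagates the single retrospective label to every segment as a noisy training target:

```python
import numpy as np

def segment_signal(signal, rate_hz, win_s=2.0):
    """Split a physiological signal into fixed-length, non-overlapping windows."""
    win = int(rate_hz * win_s)
    n = len(signal) // win
    return np.asarray(signal[: n * win]).reshape(n, win)

def weak_labels(retrospective_label, n_segments):
    """Propagate one post-stimulus label to all segments (noisy targets)."""
    return [retrospective_label] * n_segments

segments = segment_signal(np.random.randn(60 * 32), rate_hz=32)  # 60 s at 32 Hz
labels = weak_labels("high_arousal", len(segments))
print(segments.shape, labels[0])  # (30, 64) high_arousal
```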
Voice User Interfaces (VUIs) such as Alexa and Google Home that use human-like design cues are an increasingly popular means for accessing news. Self-disclosure in particular may be used to build relationships of trust with users who may reveal intimate details about themselves.
...
Automatically inferring drivers' emotions during driver-pedestrian interactions to improve road safety remains a challenge for designing in-vehicle, empathic interfaces. To that end, we carried out a lab-based study using a combination of camera and physiological sensors. We collected
...
Watching HMD-based 360° video has become increasingly popular as a medium for immersive viewing of photo-realistic content. To evaluate subjective video quality, researchers typically prompt users to provide an overall Quality of Experience (QoE) score after viewing a stimulus. However
...
Fine-grained emotion recognition can model the temporal dynamics of emotions, which is more precise than predicting one emotion retrospectively for an activity (e.g., video clip watching). Previous works require large amounts of continuously annotated data to train an accurate recognition
...
Towards socialVR
Evaluating a novel technology for watching videos together
Social VR enables people to interact with others over distance in real time. It allows remote people, typically represented as avatars, to communicate and perform activities together in a shared virtual environment, extending the capabilities of traditional social platforms like
...
Visualizing biosignals can be important for social Virtual Reality (VR), where avatar non-verbal cues are missing. While several biosignal representations exist, it remains unclear how to design effective visualizations and how users perceive them within social VR entertainment.
...
While affective non-verbal communication between pedestrians and drivers has been shown to improve on-road safety and driving experiences, it remains a challenge to design driver assistance systems that can automatically capture these affective cues. In this early work, we identify
...
CEAP-360VR
A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360° VR Videos
Watching 360° videos using Virtual Reality (VR) head-mounted displays (HMDs) provides interactive and immersive experiences, where videos can evoke different emotions. Existing emotion self-report techniques within VR, however, either are retrospective or interrupt the immersive experience
...
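A sketch of how such continuous valence-arousal annotations might be consumed downstream; the record schema here is an assumption for illustration, not CEAP-360VR's actual file format:

```python
import json

def load_annotations(path):
    """Load continuous annotations, assumed to be a JSON list of records like
    {"t": 0.05, "valence": 1.2, "arousal": -0.4} with t in seconds."""
    with open(path) as f:
        return json.load(f)

def resample(annotations, step_s=1.0):
    """Average valence and arousal within each step_s-second bucket."""
    buckets = {}
    for a in annotations:
        buckets.setdefault(int(a["t"] // step_s), []).append(a)
    return {
        k * step_s: (
            sum(x["valence"] for x in v) / len(v),
            sum(x["arousal"] for x in v) / len(v),
        )
        for k, v in sorted(buckets.items())
    }

anns = [{"t": 0.1, "valence": 1.0, "arousal": 0.0},
        {"t": 0.6, "valence": 2.0, "arousal": 1.0},
        {"t": 1.2, "valence": 0.0, "arousal": -1.0}]
print(resample(anns))  # {0.0: (1.5, 0.5), 1.0: (0.0, -1.0)}
```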
SensiBlend
Sensing Blended Experiences in Professional and Social Contexts
Unlike traditional workshops, SensiBlend is a living experiment about the future of remote, hybrid, and blended experiences within professional and other social contexts. The interplay of interpersonal relationships with tools and spaces—digital and physical—has been abruptly changed
...
RCEA
Real-time, Continuous Emotion Annotation for Collecting Precise Mobile Video Ground Truth Labels
Collecting accurate and precise emotion ground truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports, or allow real-time, continuous emotion annotation
...
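A minimal sketch of the real-time logging loop such a technique implies; the input device, value ranges, and class names are illustrative choices, not RCEA's actual implementation:

```python
import time

class ContinuousAnnotator:
    """Log (timestamp, valence, arousal) samples while a video plays."""

    def __init__(self):
        self.log = []

    def record(self, joystick_x, joystick_y):
        """Map a joystick position in [-1, 1]^2 onto the valence-arousal plane."""
        self.log.append((time.monotonic(), joystick_x, joystick_y))

annotator = ContinuousAnnotator()
annotator.record(0.6, -0.2)   # pleasant, slightly calm
annotator.record(-0.8, 0.7)   # unpleasant, aroused
print(annotator.log)
```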