Abdallah El Ali

Authored

14 records found

CorrNet

Fine-grained emotion recognition for video watching using wearable physiological sensors

Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus, or are restricted to static, desktop environmen ...
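
The CorrNet record describes classifying emotions at a fine temporal granularity from wearable physiological signals. As a rough illustration of that setup (not the CorrNet architecture itself), the sketch below slices a signal into short windows and classifies each one; the sampling rate, window length, features, and classifier are all assumptions.

```python
# Minimal sketch (not the CorrNet architecture): per-window emotion
# classification from a wearable physiological signal. Signal layout,
# window length, and labels are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 64          # assumed sampling rate (Hz) of the wearable sensor
WIN = 2 * FS     # 2-second windows, i.e. fine-grained segments

def windows(signal, labels):
    """Slice a 1-D signal into non-overlapping windows with one label each."""
    n = len(signal) // WIN
    X = signal[: n * WIN].reshape(n, WIN)
    # Simple per-window features: mean, std, min, max
    feats = np.column_stack([X.mean(1), X.std(1), X.min(1), X.max(1)])
    return feats, labels[:n]

rng = np.random.default_rng(0)
sig = rng.normal(size=FS * 120)        # 2 minutes of fake physiological data
lab = rng.integers(0, 2, size=60)      # fake binary valence labels per window
X, y = windows(sig, lab)
clf = RandomForestClassifier(n_estimators=50).fit(X[:40], y[:40])
print("held-out accuracy:", clf.score(X[40:], y[40:]))
```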

ET-CycleGAN

Generating thermal images from images in the visible spectrum for facial emotion recognition

Facial thermal imaging has in recent years been shown to be an efficient modality for facial emotion recognition. However, the use of deep learning in this field is still not fully exploited, given the small number and size of the current datasets. The goal of this work is to improve t ...
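
The name ET-CycleGAN suggests a CycleGAN-style image-to-image translation from the visible to the thermal spectrum. Below is a minimal sketch of the cycle-consistency idea that underlies such models, with tiny stand-in generators; it is not the paper's implementation, which would also involve adversarial losses and much larger networks.

```python
# Minimal sketch of the cycle-consistency loss behind CycleGAN-style
# visible-to-thermal translation (not the ET-CycleGAN implementation).
# Generators here are tiny stand-ins; real models use ResNet/U-Net backbones.
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_generator():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1),
    )

G_vis2th = tiny_generator()   # visible -> thermal
G_th2vis = tiny_generator()   # thermal -> visible

visible = torch.randn(4, 3, 64, 64)   # fake batch of visible-spectrum images
thermal = torch.randn(4, 3, 64, 64)   # fake batch of thermal images

# Cycle-consistency: translating there and back should reproduce the input.
cycle_loss = (
    F.l1_loss(G_th2vis(G_vis2th(visible)), visible)
    + F.l1_loss(G_vis2th(G_th2vis(thermal)), thermal)
)
cycle_loss.backward()   # in training, combined with adversarial losses
print("cycle loss:", cycle_loss.item())
```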

BreatheWithMe

Exploring Visual and Vibrotactile Displays for Social Breath Awareness during Colocated, Collaborative Tasks

Sharing breathing signals has the capacity to provide insights into hidden experiences and enhance interpersonal communication. However, it remains unclear how the modality of breath signals (visual, haptic) is socially interpreted during collaborative tasks. In this mixed-method ...

CEAP-360VR

A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360° VR Videos

Watching 360° videos using Virtual Reality (VR) head-mounted displays (HMDs) provides interactive and immersive experiences, where videos can evoke different emotions. Existing emotion self-report techniques within VR, however, are either retrospective or interrupt the immersive exp ...
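
Continuous annotation datasets like CEAP-360VR pair irregularly timestamped valence-arousal reports with physiological recordings. A common preprocessing step is resampling the annotation stream to a uniform rate so it can be aligned with fixed-rate sensor data; the sketch below shows this with pandas, using an invented column layout rather than the dataset's actual schema.

```python
# Minimal sketch of working with continuous valence-arousal annotations.
# The column names and layout here are assumptions, not the CEAP-360VR
# schema; see the dataset documentation for the real format.
import pandas as pd

df = pd.DataFrame({                    # stand-in for a loaded annotation file
    "time_s":  [0.0, 0.4, 1.1, 1.9, 2.5],
    "valence": [0.1, 0.3, 0.5, 0.2, -0.1],
    "arousal": [0.0, 0.2, 0.6, 0.4,  0.1],
})

# Resample the irregular annotation stream to a uniform 10 Hz grid.
df.index = pd.to_timedelta(df.pop("time_s"), unit="s")
uniform = df.resample("100ms").mean().interpolate()
print(uniform.head())
```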

Affective Driver-Pedestrian Interaction

Exploring Driver Affective Responses toward Pedestrian Crossing Actions using Camera and Physiological Sensors

Eliciting and capturing drivers' affective responses in a realistic outdoor setting with pedestrians poses a challenge when designing in-vehicle, empathic interfaces. To address this, we designed a controlled, outdoor car driving circuit where drivers (N=27) drove and encountered ...

Towards socialVR

Evaluating a novel technology for watching videos together

Social VR enables people to interact with others in real time over a distance. It allows remote people, typically represented as avatars, to communicate and perform activities together in a shared virtual environment, extending the capabilities of traditional social platforms like ...

From Video to Hybrid Simulator

Exploring Affective Responses toward Non-Verbal Pedestrian Crossing Actions Using Camera and Physiological Sensors

Capturing drivers’ affective responses given driving context and driver-pedestrian interactions remains a challenge for designing in-vehicle, empathic interfaces. To address this, we conducted two lab-based studies using camera and physiological sensors. Our first study collected ...

FeelTheNews

Augmenting Affective Perceptions of News Videos with Thermal and Vibrotactile Stimulation

Emotion plays a key role in the emerging wave of immersive, multi-sensory audience news engagement experiences. Since emotions can be triggered by somatosensory feedback, in this work we explore how augmenting news video watching with haptics can influence affective perceptions o ...

Reflecting on Hybrid Events

Learning from a Year of Hybrid Experiences

The COVID-19 pandemic led to a sudden shift to virtual work and events, with the last two years enabling an appropriated, rather simulated togetherness: the hybrid mode. As we return to in-person events, it is important to reflect on not only what we learned about technologie ...

SensiBlend

Sensing Blended Experiences in Professional and Social Contexts

Unlike traditional workshops, SensiBlend is a living experiment about the future of remote, hybrid, and blended experiences within professional and other social contexts. The interplay of interpersonal relationships with tools and spaces—digital and physical—has been abruptly cha ...

ShareYourReality

Investigating Haptic Feedback and Agency in Virtual Avatar Co-embodiment

Virtual co-embodiment enables two users to share a single avatar in Virtual Reality (VR). During such experiences, the illusion of shared motion control can break during joint-action activities, highlighting the need for position-aware feedback mechanisms. Drawing on the perceptu ...

ThermalWear

Exploring Wearable On-chest Thermal Displays to Augment Voice Messages with Affect

Voice is a rich modality for conveying emotions; however, emotional prosody production can be situationally or medically impaired. Since thermal displays have been shown to evoke emotions, we explore how thermal stimulation can augment perception of neutrally-spoken voice messages ...

RCEA

Real-time, Continuous Emotion Annotation for Collecting Precise Mobile Video Ground Truth Labels

Collecting accurate and precise emotion ground truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports or allow real-time, continuous emotion an ...
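
A real-time continuous annotation tool in the spirit of RCEA boils down to sampling an input device at a fixed rate and logging timestamped valence-arousal values. The sketch below simulates that loop; the input source is fake, and RCEA itself used a mobile touch interface.

```python
# Minimal sketch of real-time, continuous emotion annotation: sample an
# input device at a fixed rate and log timestamped valence/arousal values.
# The "device" here is simulated with sine/cosine stand-ins.
import time, math

RATE_HZ = 10
samples = []
start = time.monotonic()
for _ in range(30):                      # 3 seconds of annotation
    t = time.monotonic() - start
    # Stand-in for reading a 2-D annotation input: values in [-1, 1]
    valence, arousal = math.sin(t), math.cos(t)
    samples.append((round(t, 3), valence, arousal))
    time.sleep(1 / RATE_HZ)

print(f"collected {len(samples)} continuous labels")
```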

Transparent AI Disclosure Obligations

Who, What, When, Where, Why, How

Advances in Generative Artificial Intelligence (AI) are resulting in AI-generated media output that is (nearly) indistinguishable from human-created content. This can drastically impact users and the media sector, especially given global risks of misinformation. While the current ...

Contributed

4 records found

AffectiveAir

Exploring pneumatic affective haptics on the shoulder

The focus of this project was the research and development of an affective social touch wearable. AffectiveAir uses pneumatic actuation on the shoulder to convey a library of haptic sensations. The goal was to overcome physical limitations in potential digital communication conte ...

Few-shot emotion recognition using intelligent voice assistants and wearables

Learning from few samples of speech and physiological signals

Emotion recognition is one of the most widely studied areas of affective computing. Attempts have been made to design emotion recognition systems for everyday settings. The ubiquitous nature of intelligent voice assistants (IVAs) in households makes them a great anchor for the introdu ...
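
One common way to learn from few samples, as this record's subtitle describes, is to embed the labeled support examples and classify queries by their nearest class prototype (as in prototypical networks). The sketch below illustrates that general technique with random stand-in data and a trivial encoder; it is not the method used in this record.

```python
# Minimal sketch of nearest-class-mean few-shot classification (the idea
# behind prototypical networks). Data and the encoder are stand-ins.
import numpy as np

rng = np.random.default_rng(1)

def embed(x):
    # Stand-in for a learned speech/physiology encoder
    return x.mean(axis=-1, keepdims=True)

# 3 emotion classes, 5 support examples each ("5-shot"), 20-dim signals
support = {c: rng.normal(loc=c, size=(5, 20)) for c in range(3)}
prototypes = {c: embed(xs).mean(axis=0) for c, xs in support.items()}

query = rng.normal(loc=1, size=(20,))
pred = min(prototypes, key=lambda c: np.linalg.norm(embed(query) - prototypes[c]))
print("predicted class:", pred)
```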

DeepSleep

A sensor-agnostic approach towards modelling a sleep classification system

Sleep is a natural state of our mind and body during which our muscles heal and our memories are consolidated. It is such a habitual phenomenon that we tend to view it as just another ordinary task in our day-to-day lives. However, owing to the current fast-paced, technology-drive ...

On Fine-grained Temporal Emotion Recognition in Video

How to Trade off Recognition Accuracy with Annotation Complexity?

Fine-grained emotion recognition is the process of automatically identifying the emotions of users at a fine granularity level, typically in time intervals of 0.5s to 4s, matching the expected duration of emotions. Previous work mainly focused on developing algorithms to r ...
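
The trade-off in the title can be made concrete: halving the window length doubles the number of segments an annotator must label per video. A toy calculation, assuming an illustrative 60-second video:

```python
# Toy illustration of the granularity trade-off: shorter windows give finer
# temporal resolution but require more annotated segments. Numbers are
# illustrative, not from the paper.
video_len_s = 60.0
for win_s in (0.5, 1.0, 2.0, 4.0):
    n_segments = int(video_len_s / win_s)
    print(f"window {win_s:>3}s -> {n_segments:>3} labels per 60s video")
```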