Multimodal Emotion Recognition for Human-Robot Interaction
PhD Student: Farshad Safavi
PhD Advisor: Dr. Ramana Vinjamuri
Background
Multiple Communities and Modalities in Human-Robot Interaction (figure): visual, auditory, haptics/touch, natural language, and physiological signals.
Cui, Y., Song, X., Hu, Q., Li, Y., Sharma, P., & Khapre, S. (2022). Human-robot interaction in higher education for predicting student engagement. Computers and Electrical Engineering, 99, 107827. https://doi.org/10.1016/j.compeleceng.2022.107827
Motivation
Human-Robot Collaboration (figure, adapted from Ciceri et al., 2008): a human user and a robot (artificial system) coordinate their actions in a joint task toward a shared goal; the robot recognizes the user's non-verbal emotional signals, and this emotional attunement between the partners shapes their actions and overall performance.
Ciceri, R., & Balzarotti, S. (2008). From signals to emotions: Applying emotion models to HM affective interactions. In Affective Computing (p. 978). InTech.
Multimodal Emotion Recognition for Human-Robot Interaction
Safavi, F., Patel, K., & Vinjamuri, R. K. (2023a). Towards Efficient Deep Learning Models for Facial Expression Recognition using Transformers. 2023 IEEE 19th International Conference on Body Sensor Networks (BSN), 1-4. https://doi.org/10.1109/BSN58485.2023.10331041
Proposed Multimodal Fusion Model
Intermediate fusion (figure): a face feature vector and an EEG feature vector are concatenated into a single fusion feature, which is passed to a classifier.
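To make the intermediate-fusion step concrete, below is a minimal PyTorch sketch. The feature dimensions (512-d face, 128-d EEG), the number of emotion classes, and the MLP classifier head are illustrative assumptions, not the exact model proposed here.

import torch
import torch.nn as nn

class IntermediateFusionClassifier(nn.Module):
    def __init__(self, face_dim=512, eeg_dim=128, num_classes=4):
        super().__init__()
        # Classifier head operating on the concatenated fusion feature.
        self.classifier = nn.Sequential(
            nn.Linear(face_dim + eeg_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, face_feat, eeg_feat):
        # Intermediate fusion: concatenate the unimodal feature vectors
        # into a single fusion feature before classification.
        fusion_feat = torch.cat([face_feat, eeg_feat], dim=-1)
        return self.classifier(fusion_feat)

# Example usage with a batch of 8 samples (random placeholder features).
model = IntermediateFusionClassifier()
face = torch.randn(8, 512)  # e.g., from a facial-expression backbone
eeg = torch.randn(8, 128)   # e.g., from an EEG encoder
logits = model(face, eeg)   # shape: (8, num_classes)

Concatenation is the simplest intermediate-fusion operator; learned alternatives (e.g., weighted sums or cross-attention) could replace it without changing the rest of the pipeline.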