EE-JAMS
(Joint Affective Measurement System)
By CruX @ UCLA
Valence
Arousal
Dominance
Why emotion?
Applications
Clinical
Communication
Consumer Products
Methodology
Timeline:
Video gathering → Data collection (OpenBCI) → Model training (CNN-LSTM) → Analysis → Real-time classification → Expansion
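Real-time classification typically means running the trained classifiers on a sliding window over the live EEG stream. A minimal windowing sketch follows; the window and hop lengths, channel count, and function name are illustrative assumptions, not the project's actual settings:

```python
import numpy as np

FS = 250          # OpenBCI Cyton sampling rate (Hz)
WINDOW = 2 * FS   # assumed 2 s analysis window
STEP = FS // 2    # assumed 0.5 s hop for near-real-time updates

def sliding_windows(stream, window=WINDOW, step=STEP):
    """Yield overlapping windows from a (channels, samples) EEG buffer."""
    n_samples = stream.shape[1]
    for start in range(0, n_samples - window + 1, step):
        yield stream[:, start:start + window]

buffer = np.zeros((8, 4 * FS))        # 4 s of 8-channel EEG
windows = list(sliding_windows(buffer))
# each window has shape (8, 500); a classifier would score each one
```

Each yielded window would be fed through preprocessing and the classifiers to produce a rolling emotion estimate.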
Video Gathering
Data Collection
OpenBCI Cyton headset → Bluetooth dongle → OpenBCI GUI → Python script
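The deck doesn't show the Python script itself. As one illustration of what a downstream script might compute from the logged samples, here is a band-power feature sketch; the band definitions and function name are assumptions, not the project's code, though 250 Hz is the Cyton's sampling rate:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # OpenBCI Cyton sampling rate (Hz)

# Assumed frequency bands, commonly used as EEG features
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=FS):
    """Mean spectral power per band for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = float(psd[mask].mean())
    return out

# Example: a 10 Hz (alpha-band) sine plus noise peaks in alpha
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
powers = band_powers(signal)
```

Features like these (or raw windowed samples) would then be saved alongside the video-elicited labels.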
Data Labeling
Videos elicit emotional responses; participants self-report Valence, Arousal, and Dominance scores for each.
Classifiers
Three binary classifiers: Valence ∈ {0,1}, Arousal ∈ {0,1}, Dominance ∈ {0,1}
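Turning self-reported scores into binary targets is a simple thresholding step. A sketch, assuming a 1–9 rating scale with its midpoint as the cutoff (both assumptions; the deck doesn't state the scale):

```python
import numpy as np

THRESHOLD = 5  # assumed midpoint of an assumed 1-9 rating scale

def binarize_vad(scores):
    """Map self-reported Valence/Arousal/Dominance ratings to {0, 1}.

    `scores` is an (n_trials, 3) array-like of ratings; ratings above
    the threshold become class 1 ("high"), the rest class 0 ("low").
    """
    return (np.asarray(scores) > THRESHOLD).astype(int)

ratings = [[7.2, 3.1, 5.5],
           [2.0, 8.4, 4.9]]
labels = binarize_vad(ratings)
# labels -> [[1, 0, 1], [0, 1, 0]]
```

Each column of the result trains one of the three binary classifiers.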
Classifier Architecture
Convolutional layer: ELU → Dropout → Batchnorm → Maxpool
LSTM layer: Dropout → Batchnorm
Fully connected layer → Softmax
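The listed layer order can be sketched as a PyTorch module; channel counts, kernel size, hidden width, and dropout rates below are illustrative assumptions, not the project's actual hyperparameters:

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """One binary VAD classifier following the slide's layer order.

    Input shape: (batch, eeg_channels, time_samples).
    """

    def __init__(self, eeg_channels=8, conv_channels=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(eeg_channels, conv_channels, kernel_size=5, padding=2),
            nn.ELU(),
            nn.Dropout(0.3),
            nn.BatchNorm1d(conv_channels),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        self.post = nn.Sequential(nn.Dropout(0.3), nn.BatchNorm1d(hidden))
        self.fc = nn.Linear(hidden, 2)  # two classes: {0, 1}

    def forward(self, x):
        x = self.conv(x)                # (batch, conv_channels, time/2)
        x = x.permute(0, 2, 1)          # LSTM expects (batch, time, features)
        _, (h, _) = self.lstm(x)        # keep the final hidden state
        x = self.post(h[-1])
        return torch.softmax(self.fc(x), dim=1)

model = CNNLSTMClassifier()
probs = model(torch.randn(4, 8, 250))   # 4 windows of 1 s EEG at 250 Hz
```

One such network per target (Valence, Arousal, Dominance) matches the three-classifier setup above.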
Classifier Results
[Training curves: accuracy vs. epochs, training and validation, for the Valence, Arousal, and Dominance classifiers]
Test acc: Valence 99.7%, Arousal 89.9%, Dominance 92.5%
Comparison in Binary VAD Classification
| Group | Valence Accuracy | Arousal Accuracy | Dominance Accuracy |
| --- | --- | --- | --- |
| EE-JAMS (ours) | 99.7% | 89.9% | 92.5% |
| Prior work* | 99.22% | 97.80% | N/A |
| Prior work* | 92.87% | 92.30% | N/A |
| Prior work* | 86.23% | 84.54% | 85.02% |

*Other papers used the DEAP/SEED datasets rather than data from a single participant
Discussion
Limitations:
Emotions have no standardized biomarkers
Corruptible signals (e.g. EMG artifacts)
Accuracy is upper bounded by self-reported labels

Further Directions:
Personalized AND generalized models
Dataset augmentation & signal processing
Further Directions: BCI Cap
Demonstration
Thank you!