1 of 20

EE-JAMS

(Joint Affective Measurement System)

By CruX @ UCLA

2 of 20

Valence

Arousal

Dominance

3 of 20

Why emotion?

4 of 20

Applications

Clinical

Communication

Consumer Products

5 of 20

Methodology

6 of 20

Timeline

Video gathering

Data collection

(OpenBCI)

Model training

(CNN-LSTM)

7 of 20

Analysis

Real-time classification

Expansion

8 of 20

Video Gathering

  • Videos drawn from online databases

  • Tailored to each subject’s self-reported emotional profile

9 of 20

Data Collection

OpenBCI Cyton headset → Bluetooth dongle → OpenBCI GUI → Python script
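The Python-script stage presumably buffers the streamed samples and slices them into fixed-length windows for classification. A minimal NumPy sketch, assuming 8 Cyton channels at the board's 250 Hz sampling rate; the one-second window and 50% overlap are illustrative choices, not stated on the slide:

```python
import numpy as np

def window_stream(samples, win_len=250, step=125):
    """Slice a (channels, time) EEG buffer into overlapping windows.

    win_len=250 is one second at the Cyton's 250 Hz sampling rate;
    the 50% overlap (step=125) is an assumed, illustrative choice.
    Returns an array of shape (n_windows, channels, win_len).
    """
    n_ch, n_t = samples.shape
    starts = range(0, n_t - win_len + 1, step)
    return np.stack([samples[:, s:s + win_len] for s in starts])

buf = np.random.randn(8, 1000)   # 4 s of 8-channel data
windows = window_stream(buf)     # shape (7, 8, 250)
```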

10 of 20

Data Labeling

Videos elicit an emotional response; the subject self-reports Valence, Arousal, and Dominance scores for each.

11 of 20

Classifiers

Valence

{0,1}

Arousal

{0,1}

Dominance

{0,1}
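Concretely, each affect dimension can be cast as a binary {0,1} label by thresholding the self-reported rating at its midpoint. A sketch assuming a 1–9 rating scale and a midpoint threshold of 5, a common convention with DEAP-style labels; the slide does not state the scale or threshold:

```python
def binarize_vad(ratings, threshold=5.0):
    """Map self-reported Valence/Arousal/Dominance ratings (assumed 1-9
    scale) to binary labels: 1 = high, 0 = low.

    The midpoint threshold is an assumed convention, not from the slides.
    """
    return {dim: int(score > threshold) for dim, score in ratings.items()}

labels = binarize_vad({"valence": 7.2, "arousal": 3.5, "dominance": 5.0})
# labels -> {"valence": 1, "arousal": 0, "dominance": 0}
```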

12 of 20

Classifier Architecture

Convolutional layer → ELU → Dropout → Batchnorm → Maxpool → LSTM layer → Dropout → Batchnorm → Fully connected → Softmax
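The layer order on this slide can be sketched as a PyTorch module. This is a hypothetical reconstruction, not the authors' code: the filter count, kernel size, hidden size, and dropout rates are all assumptions, and only the layer sequence comes from the slide:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Sketch of the slide's pipeline: Conv -> ELU -> Dropout -> Batchnorm
    -> Maxpool -> LSTM -> Dropout -> Batchnorm -> FC -> Softmax.
    All hyperparameters are illustrative assumptions."""

    def __init__(self, n_channels=8, n_classes=2, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, 32, kernel_size=5, padding=2)
        self.elu = nn.ELU()
        self.drop1 = nn.Dropout(0.5)
        self.bn1 = nn.BatchNorm1d(32)
        self.pool = nn.MaxPool1d(2)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.drop2 = nn.Dropout(0.5)
        self.bn2 = nn.BatchNorm1d(hidden)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        x = self.pool(self.bn1(self.drop1(self.elu(self.conv(x)))))
        x = x.permute(0, 2, 1)            # (batch, time, features) for the LSTM
        out, _ = self.lstm(x)
        h = self.bn2(self.drop2(out[:, -1, :]))  # last time step
        return torch.softmax(self.fc(h), dim=1)

model = CNNLSTM()
probs = model(torch.randn(4, 8, 250))    # 4 windows, 8 channels, 1 s @ 250 Hz
```

One binary classifier of this shape per dimension (valence, arousal, dominance) matches the three-classifier setup on the previous slide.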

13 of 20

Classifier Results

[Plots: training and validation accuracy vs. epochs for the Valence, Arousal, and Dominance classifiers]

Test accuracy: Valence 99.7%, Arousal 89.9%, Dominance 92.5%

14 of 20

Comparison in Binary VAD Classification

Group           | Valence Accuracy | Arousal Accuracy | Dominance Accuracy
EE-JAMS (ours)  | 99.7%            | 89.9%            | 92.5%
                | 99.22%           | 97.80%           | N/A
                | 92.87%           | 92.30%           | N/A
                | 86.23%           | 84.54%           | 85.02%

*The other groups evaluated on the multi-subject DEAP/SEED datasets, whereas EE-JAMS used data from a single participant

15 of 20

Discussion

16 of 20

Limitations

  • Emotions have no standardized biomarkers
  • Corruptible signals (e.g. EMG contamination)
  • Accuracy is upper bounded by self-reported labels

Further Directions

  • Continuous classifier
  • Unsupervised learning
  • Identifying latent emotional variables
  • Personalized AND generalized models
  • Dataset augmentation & signal processing
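One simple technique consistent with the dataset-augmentation direction is generating jittered copies of each training epoch with additive Gaussian noise. A NumPy sketch; the noise scale and copy count are illustrative assumptions, not choices from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(epoch, n_copies=4, noise_std=0.05):
    """Create jittered copies of one (channels, time) EEG epoch by adding
    low-amplitude Gaussian noise. The 0.05 std (relative to roughly
    unit-variance data) is an illustrative choice."""
    return [epoch + rng.normal(0.0, noise_std, epoch.shape)
            for _ in range(n_copies)]

epoch = rng.standard_normal((8, 250))
copies = augment(epoch)   # 4 noisy variants, same shape as the original
```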

17 of 20

Further Directions: BCI Cap

  • Discreet BCI wearable for day-to-day use

18 of 20

Demonstration

19 of 20

20 of 20

Thank you!