
Project Description and Concept

This project will be an exploration of audio visualization and the expression of emotion through sound and visual imagery.

Goal: To create a system that takes audio input (either preexisting audio tracks or live input from a microphone) and generates visual effects that express the emotional tone and character of that input. How do we identify and express emotion through audio/visual works, and how can we translate between the two?
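
As a rough illustration of how audio input could be translated into visual parameters, the sketch below extracts a few coarse features from a prerecorded track and normalizes them into values that could drive visuals. It assumes the librosa library; the chosen features, ranges, and parameter names (brightness, hue, motion speed) are placeholder mapping choices, not a final design.

    # Minimal sketch: map coarse audio features to hypothetical visual parameters.
    # Assumes librosa and numpy; feature set and mappings are placeholders.
    import librosa
    import numpy as np

    def audio_to_visual_params(path):
        y, sr = librosa.load(path, sr=None)

        # Loudness (RMS energy) -> could drive overall brightness
        rms = librosa.feature.rms(y=y).mean()

        # Spectral centroid ("brightness" of timbre) -> could drive hue
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()

        # Tempo -> could drive speed of motion in the visuals
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
        tempo = float(np.atleast_1d(tempo)[0])

        # Normalize each feature into [0, 1] with rough, hand-picked ranges
        return {
            "brightness": float(np.clip(rms / 0.1, 0.0, 1.0)),
            "hue": float(np.clip(centroid / (sr / 2), 0.0, 1.0)),
            "motion_speed": float(np.clip(tempo / 200.0, 0.0, 1.0)),
        }

    if __name__ == "__main__":
        print(audio_to_visual_params("example_track.wav"))  # hypothetical file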


Inspirations

  • Ryoji Ikeda
    • minimal sound and visuals → sine tones, lowercase (ambient minimalism)
    • test pattern → very tight correlation between the audio and visual effects
  • Akiko Yamashita
    • interactive installation + viewer participation
    • Hana Fubuki → installation responds to motion and generates visual effects

Images: test pattern by Ryoji Ikeda; Hana Fubuki by Akiko Yamashita


Inspirations cont’d

  • Weidi Zhang
    • interactive AI + generative audio-visual works
    • Cangjie’s Poetry → multimodal system that takes camera input and generates symbols and descriptive sentences in tandem
  • Speech emotion recognition research: Koolagudi, S.G., Rao, K.S. Emotion recognition from speech: a review. Int J Speech Technol 15, 99–117 (2012). https://doi.org/10.1007/s10772-011-9125-1 (see the feature-extraction sketch after this list)
  • improvements in AI technology → contrast between computer and human recognition of emotion
  • trends in current music → experimental genres, hyperpop, etc.
  • quarantine and social isolation → changes in self-expression, new unfamiliarity with social interaction
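
To make the connection to the cited review concrete, the sketch below builds the kind of feature vector commonly used in speech emotion recognition (MFCCs, pitch contour, energy). It assumes librosa and numpy; the commented classifier setup and example clip list are hypothetical placeholders, not taken from the cited paper.

    # Sketch of a speech-emotion-recognition feature vector (MFCCs, pitch, energy),
    # in the spirit of the features surveyed by Koolagudi & Rao (2012).
    # Assumes librosa and numpy; any classifier/training data would be separate.
    import librosa
    import numpy as np

    def emotion_features(path):
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral shape
        f0 = librosa.yin(y, fmin=65.0, fmax=1000.0, sr=sr)   # pitch contour
        rms = librosa.feature.rms(y=y)                       # energy
        # Summarize each time series by mean and standard deviation
        return np.concatenate([
            mfcc.mean(axis=1), mfcc.std(axis=1),
            [np.nanmean(f0), np.nanstd(f0)],
            [rms.mean(), rms.std()],
        ])

    # Hypothetical usage with an emotion-labeled dataset (placeholder file names):
    # from sklearn.svm import SVC
    # clips = [("angry_01.wav", "angry"), ("calm_01.wav", "calm")]
    # X = np.array([emotion_features(p) for p, _ in clips])
    # labels = [label for _, label in clips]
    # clf = SVC().fit(X, labels)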

Images: Cangjie’s Poetry, Fantastic Shredder, and Borrowed Scenery by Weidi Zhang


Media List

  • TouchDesigner → visual programming
  • JupyterHub → data analysis, machine learning(?)
    • pandas, matplotlib, seaborn
  • Arduino → possibly usable to monitor heart rate?
  • microphone (dependent on exhibit setup)
    • external mic vs. built-in computer mic (see the live-input sketch below)
  • display screen/projector (dependent on exhibit setup)
  • if VR → Unity?
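
For the live-microphone option, the sketch below captures audio from the default input device and computes a per-block RMS loudness value that the visuals could respond to. It assumes the sounddevice library; the block size, sample rate, and the print statement standing in for a TouchDesigner parameter are placeholder choices.

    # Sketch: capture live microphone input and compute per-block RMS loudness.
    # Assumes the sounddevice library; the printed value stands in for whatever
    # visual parameter (e.g. a TouchDesigner channel) would consume it.
    import numpy as np
    import sounddevice as sd

    SAMPLE_RATE = 44100
    BLOCK_SIZE = 1024  # ~23 ms per block at 44.1 kHz

    def on_audio_block(indata, frames, time, status):
        if status:
            print(status)
        rms = float(np.sqrt(np.mean(indata[:, 0] ** 2)))
        # Placeholder: forward `rms` to the visual system instead of printing.
        print(f"loudness: {rms:.4f}")

    with sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                        blocksize=BLOCK_SIZE, callback=on_audio_block):
        sd.sleep(5000)  # listen for 5 seconds in this sketch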