Introduction

Hi! I’m an open standard for describing music, handy for visualization.

Momo the Monster has been using me for projects and is looking for input.

Read this article on Create Digital Motion for a full intro.


Existing Breakdown:

Rhythm: Kick, Snare, Hat, Crash (or other large accent)

Melody: Bass, Lead, Vox

Specialty: Lyrics, Variable, Speed, Scene


Rhythm Messages:

/rhythm/name intensity

Intensity is a normalized float (0.0 to 1.0).
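
For illustration (not part of the standard), a sender might look like this minimal Python sketch; the python-osc library, the receiver address, and the ‘kick’ name are assumptions:

    from pythonosc.udp_client import SimpleUDPClient

    # Assumed destination: a visualizer listening on localhost port 9000.
    client = SimpleUDPClient("127.0.0.1", 9000)

    # A hard kick hit: intensity is a normalized float, 0.0 to 1.0.
    client.send_message("/rhythm/kick", 0.9)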


Melody Messages:

/melody/name/pitch velocity

Pitch is an integer; velocity is a normalized float (0.0 to 1.0). Use a velocity of 0 for note off, as in MIDI.
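
Continuing the sketch above (same assumed python-osc client), a note on followed by its note off might look like:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # assumed receiver

    # Note on: MIDI pitch 60 (middle C) at 80% velocity.
    client.send_message("/melody/bass/60", 0.8)

    # Note off: velocity 0, as in MIDI.
    client.send_message("/melody/bass/60", 0.0)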


Lyric Messages:

/lyrics “word or sentence”


Scene Messages:

/scene “verse1”

These messages mark changes in a song; you can only be in one ‘scene’ at a time.


Variable Messages:

/variable/name amount

/variable/intensity 0.5

Used for continuous data, like MIDI CCs: anything you want to describe about a section of the song.
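
For example, forwarding a MIDI CC means rescaling its 0-127 range to a normalized float. A hedged sketch, reusing the assumed python-osc client from above:

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # assumed receiver

    def send_cc_as_variable(name, cc_value):
        # Rescale the 7-bit MIDI CC range (0-127) to 0.0-1.0.
        client.send_message("/variable/" + name, cc_value / 127.0)

    send_cc_as_variable("intensity", 64)  # sends /variable/intensity 0.504...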


Suggestions

Top Level Content Descriptors: By including the type of instrument in each message, a system that has no mapping for a particular instrument (like a cowbell) can still tell that it is receiving rhythm data rather than melody data, like so:

/rhythm/cowbell 0.9

via Momo the Monster
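
On the receiving end, that fallback could be as simple as routing on the first path segment. A sketch, assuming python-osc for the server; the printed fallback behavior is invented for illustration:

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def route(address, *args):
        # "/rhythm/cowbell" splits into ["", "rhythm", "cowbell"].
        parts = address.split("/")
        category = parts[1] if len(parts) > 1 else ""
        if category == "rhythm":
            # No cowbell-specific mapping needed: we still know this is
            # rhythm data and can treat it as a generic accent.
            print("rhythm accent", parts[2:], "intensity", args[0])
        elif category == "melody":
            print("melody data", parts[2:], args)

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(route)  # catch every incoming address

    BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()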

Sync Mutes: A note on implementation: it is handy to have an easy, granular way to turn off external control, so you can override automation with human expression. via Bearmod
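
One possible shape for this, purely illustrative: gate incoming messages through a set of muted address prefixes, so a performer can silence, say, everything under /rhythm while playing that part by hand.

    # Hypothetical sync-mute gate: messages whose address starts with a
    # muted prefix are dropped before they reach the visual mapping.
    muted_prefixes = set()

    def handle(address, *args):
        print("mapped:", address, args)  # stand-in for the real mapping

    def gate(address, *args):
        if any(address.startswith(p) for p in muted_prefixes):
            return  # human override in effect: ignore the automation
        handle(address, *args)

    muted_prefixes.add("/rhythm")   # take over all rhythm parts by hand
    gate("/rhythm/kick", 0.9)       # dropped
    gate("/melody/bass/60", 0.8)    # passes through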

Metadata for Recorded Music: Kasper Kamperman proposes an editor for already-recorded music, where we could add this metadata time-synced to the file.
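
One guess at how such annotations could be stored and replayed against the audio file: a list of time-stamped messages, sent as playback reaches each timestamp. The layout here is invented, and a real player would sync to the audio clock rather than sleeping:

    import time
    from pythonosc.udp_client import SimpleUDPClient

    # Hypothetical annotation track: (seconds into the file, address, value).
    annotations = [
        (0.0, "/scene", "verse1"),
        (0.5, "/rhythm/kick", 0.9),
        (1.2, "/lyrics", "first line of the verse"),
    ]

    client = SimpleUDPClient("127.0.0.1", 9000)  # assumed receiver
    start = time.monotonic()
    for at, address, value in annotations:
        time.sleep(max(0.0, at - (time.monotonic() - start)))
        client.send_message(address, value)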

Grid-Based Controllers: Steve Elbows mentions the popularity of grid controllers. The Monome already uses OSC to address and describe grids; we could look at incorporating something similar or interoperable.

Selective Hearing: Brendan would like computers to pick out parts like these and map them automatically.

Live Looping: Momo the Monster comments that a visualist with neither control over the music nor incoming music data from the live performers can still use this system as a basis for mapping visuals to the music, by live-looping MIDI approximations as the music plays.

Max Patch: A Max patch could possibly provide the same function as Max4Live, using the free Max runtime. What are the disadvantages? Can presets be saved with the Ableton project? via Andreas.

A Max patch could be used to collect, sort, and tag metadata before it goes into Touch (or anywhere else). This becomes more useful as the number of OSC messages coming out of Live increases. Advantages include more screen real estate (m4L gets tight), less load on Ableton, a ‘firewall’ to help with troubleshooting and guard against errant messages, and easier backend patch editing/programming. Presets can be saved, but in a separate XML or text file (not inside of Live); Automator can easily load both the Live session and the Max presets with one click. via MDavis

Multitrack Audio In: If you take audio in on multiple channels, you can automatically analyze the frequency and amplitude of each channel and generate good data, tagging it with a description of the instrument. This solves some of the problems with analyzing a full mix, where the audio is too unpredictable and muddled. via racer
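
A rough sketch of the amplitude half of this idea; the channel-to-instrument table and per-block RMS analysis are assumptions, and frequency analysis (for pitch on melody channels) is left out:

    import math
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # assumed receiver

    # Assumed: we know which instrument arrives on which input channel.
    channel_names = {0: "kick", 1: "snare", 2: "bass"}

    def analyze_block(channel, samples):
        # RMS amplitude of one block of samples, clamped to 0.0-1.0.
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        name = channel_names.get(channel)
        if name in ("kick", "snare"):
            client.send_message("/rhythm/" + name, min(1.0, rms))
        # Melody channels would also need pitch detection to fill in
        # the /melody/name/pitch address.

    analyze_block(0, [0.0, 0.8, -0.7, 0.5])  # one kick-heavy audio block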

Protocol change: I suggest using /Lyrics “” (an empty string) instead of /Lyrics/clear. It accomplishes the same thing, removes the need for a special case for “Lyrics” and “clear” in receiving apps, and allows for generic argument-type-based routing of messages internally: e.g., if a message has a single string argument, put it in a hash keyed by its address (“Lyrics”). Other strings can then be sent just as easily, e.g. /Band “Morning Teleportation”. - ian_all_over

Excellent! I second this - momo
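
What that generic routing might look like on the receiving side (a sketch; the hash and type check are illustrative, not prescribed):

    # Argument-type-based routing: any single-string message lands in one
    # hash keyed by its address, with no per-address special cases.
    strings = {}

    def route(address, *args):
        if len(args) == 1 and isinstance(args[0], str):
            strings[address] = args[0]  # "" naturally clears the text
        else:
            pass  # numeric messages would be routed elsewhere

    route("/lyrics", "word or sentence")
    route("/lyrics", "")                     # clears; no /lyrics/clear needed
    route("/band", "Morning Teleportation")  # other strings come for free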