Dialogism & Generative Combinatoriality
in Time-Based Complex Artefacts
A Grammar for Music (and Film)
Martin Irvine
Communication, Culture & Technology Program
Georgetown University
Methods and Concepts: Recap
We’ve been uncovering the generative principles for combinatoriality in new expressions in many forms, observing how symbolic expression works on two levels simultaneously:
the grammar and lexicon internal to a symbolic system (the rules governing combinations of elements within a form; e.g., genres of visual media, writing, and music)
meanings as generated through dialogic associations in a collective cultural encyclopedia, at a conceptual level external to any specific expression: symbolic connections spanning many forms of expression across a culture’s history and shifts in technical mediation.
In musical forms, when we ask “what are the generative principles behind the form that make it possible and meaningful in a culture” we are essentially asking:
What does anyone need to know to create/express/produce another (new) instance of the form?
What rules, codes, and associations are implied as presupposed, required knowledge for the form?
How would you create a new song or musical piece as an instance of its genre(s)?
Music’s Meanings: How a Song Sings
Cultural Encyclopedia: Musical Forms in Cultural History as Clusters of Codes, Genres, Types, and Values. Genres and Styles in Networks of Relations among forms and symbolic associations.
Contemporary Genres:
Features and Generative Possibilities for Combination
(Vocabulary + Encyclopedia)
A specific song as an implementation and combination of genres and musical codes with a new contextual frame and interpretive variations.
Vocabulary of meaning units:
To compose a song, we activate specific compositional codes for musical features expressed in the basic vocabulary of musical form:
+ / - instrument sounds, rhythms
+ / - types of song structure
+ / - voice and voice styles
+ / - melody or lyrics
+ / - themes in lyrics (sung or rap)
In listening, we reverse the process in understanding the musical form by aligning the vocabulary elements with encyclopedic values.
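To make the combinatorial step concrete, here is a minimal Python sketch (all feature names and genre frames are illustrative assumptions, not a fixed inventory) of composing a song as + / - selections from the vocabulary, and of the listener’s reverse step of aligning those selections with encyclopedic genre values:

# A minimal sketch: a song as a combination of +/- choices from a
# shared vocabulary of musical features. Feature names and genre
# frames here are illustrative assumptions, not a fixed inventory.

VOCABULARY = {
    "instrumentation": {"distorted guitar", "acoustic guitar", "synth", "strings"},
    "rhythm": {"4/4 backbeat", "shuffle", "breakbeat"},
    "song_structure": {"verse-chorus", "12-bar blues", "through-composed"},
    "voice": {"sung melody", "rap", "instrumental"},
    "lyric_theme": {"love", "protest", "party"},
}

# Encyclopedic values: genre frames listeners already know.
GENRE_FRAMES = {
    "rock": {"distorted guitar", "4/4 backbeat", "verse-chorus", "sung melody"},
    "hip hop": {"breakbeat", "rap", "party"},
}

def compose(choices):
    """Composing: select (+) features from the vocabulary; the rest are (-)."""
    return {f for group in VOCABULARY.values() for f in group & choices}

def interpret(song_features):
    """Listening: align the selected features with encyclopedic genre frames."""
    return {genre: len(song_features & frame) / len(frame)
            for genre, frame in GENRE_FRAMES.items()}

song = compose({"distorted guitar", "4/4 backbeat", "verse-chorus", "sung melody", "protest"})
print(interpret(song))   # e.g. {'rock': 1.0, 'hip hop': 0.0}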
Frame/Node 1: Rhythm patterns
Frame/Node 2: Instruments and Timbres
Frame/Node 3: Song structures (melody, harmony, repetitions)
Frame/Node n...: Music elements...
Established Music Genre (Type Frames); Music genre n...
Song 1: Token-Instantiation of Types
The Combinatorial, Dialogic, and Generative Processes in Music Genres and New Hybrid Forms
A music genre is a conceptual node of intersecting types or categories of possible musical properties known in a culture
Individual compositions (songs) and performances are implementations / realizations (tokens) of conceptual categories and properties (types) that make up a genre
A hybrid or new genre form develops from combinations of type resources in the cultural encyclopedia, which remain open to new elements (see the sketch below)
Song 2, Song 3, Song 4 ...: Token-Instantiations of Types
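A minimal Python sketch of this type/token relation (the genres, type names, and values are invented for illustration): a genre as a frame of possible types, a song as one token that instantiates them, and a hybrid genre as a new combination of type resources:

# Sketch of the type/token relation (illustrative names only).
# A genre is a conceptual node of intersecting type frames;
# a song is a token that instantiates those types with concrete values.

from dataclasses import dataclass

@dataclass
class Genre:                      # type level: frames of possible properties
    name: str
    rhythm_types: set
    instrument_types: set
    structure_types: set

@dataclass
class Song:                       # token level: one concrete instantiation
    title: str
    genre: Genre
    rhythm: str
    instruments: set
    structure: str

    def is_token_of(self):
        """Check that this song's concrete choices instantiate its genre's types."""
        return (self.rhythm in self.genre.rhythm_types
                and self.instruments <= self.genre.instrument_types
                and self.structure in self.genre.structure_types)

blues = Genre("blues", {"shuffle", "slow 12/8"}, {"guitar", "harmonica", "voice"}, {"12-bar"})
funk  = Genre("funk",  {"syncopated 16ths"}, {"bass", "drums", "horns", "voice"}, {"vamp"})

# A hybrid genre combines type resources from existing frames (open to new elements).
blues_funk = Genre("blues-funk",
                   blues.rhythm_types | funk.rhythm_types,
                   blues.instrument_types | funk.instrument_types | {"clavinet"},
                   blues.structure_types | funk.structure_types)

song = Song("Hypothetical Track", blues_funk, "syncopated 16ths",
            {"guitar", "bass", "drums", "voice"}, "12-bar")
print(song.is_token_of())   # True: the token realizes types available in the hybrid frame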
Further, as we complete the meanings of any one moment, we associate all the semantic and genre features with the symbolic nodes in the larger “cultural encyclopedia,” the accumulated meaning “database” that is always already in place and assumed before new songs or compositions are created (in musician communities) or received and interpreted in a music listening community (and any level of subcommunity). (And today, these roles or positions in the meaning system are often interchangeable or convertible.)
This is the “intertextual” and “intermedial” dimension, which is often tacit and unconsciously “feeding” us chains and networks of Interpretants as we make or learn new associations.
Models, Methods, and Tools for Understanding Symbolic Complexity:
the Compositional Processes for New Combinations
Genres of time-based media (music and cinema/video) incorporate complex layers that happen in sequential units of “orchestrated simultaneity,” or stacks of meaning components that we experience in concurrent bundles.
One or two measures of music require understanding what is stacked in the flow of a present “now” within the framework of a whole song or longer composition.
In film and video, we usually consider the shot to be the minimal sequential unit: each shot is a linear sequence of film frames + sound (cinematically sequenced photographic frames synchronized with a sound track, itself formed of complex concurrent layers).
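One way to picture this structure, as a hypothetical sketch rather than any editing software’s actual data model: a time-based work is a sequence of minimal units (measures or shots), each bundling the concurrent layers that must be read together:

# Sketch of "orchestrated simultaneity": a time-based work as a
# sequence of minimal units (measures or shots), each holding a
# stack of concurrent layers experienced together. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class Unit:                       # one measure of music, or one shot of film
    start: float                  # seconds from the start of the whole work
    duration: float
    layers: dict = field(default_factory=dict)   # concurrent, time-synced components

shot = Unit(start=12.0, duration=3.5, layers={
    "image": "tracking shot, 84 frames at 24 fps",
    "dialogue": "two lines of dialogue",
    "music": "string pad under the scene",
    "effects": "street ambience",
})

measure = Unit(start=4.0, duration=2.0, layers={
    "melody": ["C5", "E5", "G5", "E5"],
    "harmony": "C major chord sustained",
    "rhythm": "4/4 backbeat",
    "timbre": "piano + brushed drums",
})

# Meaning is read per unit, but only within the frame of the whole sequence.
film_sequence = [shot]            # ... more shots in order
song_sequence = [measure]         # ... more measures in order
for unit in song_sequence:
    print(f"{unit.start:>5.1f}s  layers: {sorted(unit.layers)}")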
Understanding Meaning Layers and Meaning Stacks
in Time-Based Media (Music and Film/Video)
Visualizing “Meaning Stacks” in Musical Time Units
The stacks of concurrent notes in a chord represent tones and note intervals in a key.
Chord sounds provide minimal units of meaning in relation to harmonic sequences in time (chord changes),
but not much meaning until played by specific instruments or sung by voices (each with their own sound qualities) in specific genres of music.
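A small illustrative sketch of a chord as a stack of concurrent tones placed in a harmonic sequence in a key (the chords and progression are assumed examples, not drawn from a particular score):

# A chord as a "meaning stack": concurrent pitch classes, described by
# their intervals above the root, placed in a harmonic sequence in a key.
# The key and progression below are illustrative assumptions.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def chord_stack(root, intervals):
    """Build a stack of concurrent pitch classes from a root and interval pattern."""
    root_index = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(root_index + i) % 12] for i in intervals]

MAJOR = (0, 4, 7)          # root, major third, perfect fifth
MINOR = (0, 3, 7)

# A harmonic sequence ("the changes") in the key of C major: I - vi - IV - V
progression = [
    ("C", MAJOR),
    ("A", MINOR),
    ("F", MAJOR),
    ("G", MAJOR),
]

for root, quality in progression:
    print(root, chord_stack(root, quality))
# C ['C', 'E', 'G'], and so on: abstract stacks until an instrument
# or voice, in a specific genre, gives them their actual "sound".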
A Popular Music Score Represents Melody, Meter, and Chord Stacks in Time Sequences
These are abstract representations until played and heard with specific instruments and voices.
A music phrase as structured combinatoriality = (Fill + Unify)
Beethoven, 9th Symphony, 4th Movement: instrument and note stacks with chorus.
A complex “orchestrated combinatoriality” requiring strict time syncing.
“Concerted Simultaneity” Visualized in Orchestral Scores in Traditional Western Notation
We can orchestrate and sync any kind of sound in layered stacks in music composition:
How about cannons and bells, as in Tchaikovsky's 1812 Overture?
Look at one vertical section of the score (one measure, between two bar lines) and note the kinds of sounds that must occur together to form the meaning of that time section.
Software is Designed for Visualizing Complex Simultaneous Time-Synced Meaning Layers in Music and Film
“Garage Band” visualizations:
Music as stacks and sequences of sound objects, and as repeatable loop sequences.
In a digital audio workstation, any sound object can be synced to form complex layers of sound stacks.
At the simplest abstraction, audio sources (objects) are waveforms of multiple frequencies, which can be individually or group processed for any kind of sound effect.
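A minimal sketch of that abstraction using NumPy (the frequencies, durations, and “effect” are arbitrary examples): each audio object is a waveform, and a stack of layers is mixed by summing the time-synced samples:

# Sketch of layered audio objects as waveforms: each "track" is an
# array of samples, and a stack of tracks is mixed by summing the
# time-aligned samples. Frequencies and levels are arbitrary examples.

import numpy as np

SAMPLE_RATE = 44_100                       # samples per second
t = np.arange(0, 2.0, 1 / SAMPLE_RATE)     # two seconds of time points

def tone(freq_hz, amplitude=0.3):
    """One audio object: a sine waveform at a given frequency."""
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# Three concurrent layers in the stack (a simple A major triad: A3, C#4, E4).
layers = [tone(220.0), tone(277.18), tone(329.63)]

# A very simple "effect" applied to one layer: a fade-in envelope.
layers[0] = layers[0] * np.linspace(0.0, 1.0, t.size)

# Mixing the stack: sum the synced layers, then normalize to avoid clipping.
mix = np.sum(layers, axis=0)
mix = mix / np.max(np.abs(mix))

print(mix.shape)        # (88200,): one mixed waveform from three synced objects

Everything a workstation interface shows as stacked tracks reduces, at this level of abstraction, to such time-aligned arrays plus the processing applied to them.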
What are we looking at in the interface?
A representation of synchronized meaning elements in stacks of time.
Concurrent layers in strict time sequences.
Film and Video Software Representations Give Us Complex Frame and Audio Stacks, Synced
An editing interface that visualizes audio and video components as “objects” to sync up in stacks of edited sequences.
“One of the interesting things about pop music is that you can quite often identify a record from a fifth of a second of it. You hear the briefest snatch of sound and know, 'Oh, that's "Good Vibrations",' or whatever. A fact of almost any successful pop record is that its sound is more of a characteristic than its melody or its chord structure or anything else. The sound is the thing that you recognize.”
Brian Eno, “Aurora musicalis,” Artforum 24, no. 10 (1986): 76.
Music comprehension for a typical popular music “song” involves simultaneous understanding of stacked, synced and layered combinations of sounds (timbres, instruments, melodies, harmonies, rhythms) and sung or rhythmic recitation of lyrics (language and poetic forms, in songs with lyrics or words).
We know what kind of song we’re listening to and how it works by sequencing stacks of sounds.
OK, then what constitutes the “sound” of music in its meaning elements?
The “Sound” of Music
How Does the Music Meaning System Work?
What are the structural elements that make music music?
Basic descriptive terms are necessary but capture components as abstract elements
requiring assembly and implementation:
What does the music feel like when brain and body are working together:
Modern/Contemporary Sounds
But all these components are always experienced in a concurrent stack of sounds, not in isolation or out of context.
Complexes of meaning elements in the textures and overall sounds of popular music
Genres, Types, and Prototypes in Popular Music Compositions
We recognize musical genres by combinations of prototypical (model) sounds in extended stacks of sound.
Think of the complex layers of simultaneous sound that define the “sound” of any distinctive piece of music:
Distorted lead guitar riff (followed by the bass for a minor blues chord vamp) in The Rolling Stones, Satisfaction
The pulsating guitar-bass-drum rhythm sound in The Beatles, Sgt. Pepper’s Lonely Hearts Club Band (Reprise)
Power chord guitar stroke and synthesizer rhythm in The Who, Won’t Get Fooled Again.
The aggressive 4/4 “all-in” rhythm on The Clash, London Calling
What prototype sounds/songs do you associate with other genres?
Dance music? Hip Hop? Modern/contemporary jazz? Alt- or post-rock? Soul/R&B? Electronic genres?
U2 - Window In The Skies (2007)
Video directed by Gary Koepke: a montage of nearly 100 clips of footage from the previous 50 years of other famous musicians’ performances, edited as a fantasy lip sync to the lyrics of the U2 song.
Music recorded at Abbey Road studios in London, 2006.
(Historical note: this was 40 years after the Beatles began the Sgt Pepper sessions at Abbey Road.)
Let’s use Peirce’s terms and concepts to explain the way we understand the meanings of this video (how we form sequences of interpretants for the sign/symbol systems moving in parallel).
How many sign/symbol systems are used?
How are indexical, iconic, and symbolic functions combined in the perceptible structures?
What kinds of codes (correlation specifications) for symbolic correspondences are being used?
Genre codes? Encyclopedic codes?
Is this a good example for visualizing a “parallel architecture” for sign/symbol systems?
What about digital and computer mediation?