1 of 7

Live Art in Context

Angel Carvajal, Chris Lonsberry, David Murphy, Theresa Thoraldson

2 of 7

Live Art in Context

  • The Vision
    • Art that responds in real time to events around the world
    • Art that can be created by anyone, drawing upon sources that speak to them and exploiting generative (and non-generative) models

  • The Machinery
    • Pull from live data sources such as Twitter, YouTube Live, and camera feeds
    • Elaborate on, select, or filter these sources according to desired effects via ML models
    • Create new visuals via generative models, possibly conditioned on the filtered data
    • Combine the resulting visuals to produce final images or videos (a minimal sketch of this pipeline follows)
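
A minimal sketch of the four-stage pipeline above, in Python. Every function here is a hypothetical stand-in: real nodes would wrap a Twitter/YouTube client, an LLM, and a text-to-image model.

    # All names below are hypothetical placeholders for real nodes.

    def pull_sources():
        # Stand-in for live inputs (tweets, video frames, camera feeds).
        return ["tweet: JWST releases a new deep-field image"]

    def filter_with_model(items):
        # Stand-in for an ML filter/elaborator, e.g. an LLM rewriting
        # tweet text into an image prompt.
        return ["an oil painting of " + t.split(": ", 1)[1] for t in items]

    def generate_visuals(prompts):
        # Stand-in for a generative model (e.g. text-to-image); returns
        # labels in place of actual images.
        return [f"<image conditioned on '{p}'>" for p in prompts]

    def combine(visuals):
        # Stand-in for blending/overlaying visuals into one output frame.
        return " + ".join(visuals)

    print(combine(generate_visuals(filter_with_model(pull_sources()))))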

3 of 7

Live Art in Context – basic architecture

[Diagram: computation graph with data flowing from input sources to display.
 Input sources: Twitter Search (x2), Live Video, Static image.
 Filters: LLM (x2), Strip Foreground, text2img (x2), scale.
 Combiners: Interpolate, Blend, overlay.
 Output: Display/save.]
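
A minimal sketch of how such a computation graph could be represented and evaluated; the Node class, evaluate function, and node wiring below are hypothetical stand-ins, not the project's actual implementation.

    class Node:
        def __init__(self, name, fn, inputs=()):
            self.name, self.fn, self.inputs = name, fn, list(inputs)

    def evaluate(node, cache=None):
        # Evaluate dependencies first; memoize so a node shared by several
        # consumers (e.g. one Twitter search feeding two filters) runs once.
        cache = {} if cache is None else cache
        if node.name not in cache:
            args = [evaluate(i, cache) for i in node.inputs]
            cache[node.name] = node.fn(*args)
        return cache[node.name]

    # One source-to-display path from the diagram, with toy lambdas.
    search = Node("twitter_search", lambda: "JWST images a distant galaxy")
    llm    = Node("llm", lambda t: f"a dreamy painting of: {t}", [search])
    t2i    = Node("text2img", lambda p: f"<image for '{p}'>", [llm])
    show   = Node("display", print, [t2i])
    evaluate(show)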

4 of 7

Live Art in Context – User Interface

Construct and execute the graph from a simple UI:

  • Create nodes and connections
  • Run the pipeline
  • Display results
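
One plausible shape for what the UI hands to the runtime: a declarative spec of nodes and connections that the backend turns into the executable graph from the previous slide. The spec format, node kinds, and helper below are assumptions for illustration.

    # Hypothetical graph spec as a node-editor UI might emit it.
    spec = {
        "nodes": {
            "search":  {"kind": "twitter_search", "params": {"query": "sunrise"}},
            "llm":     {"kind": "llm"},
            "t2i":     {"kind": "text2img"},
            "display": {"kind": "display"},
        },
        "edges": [("search", "llm"), ("llm", "t2i"), ("t2i", "display")],
    }

    def inputs_of(name):
        # Connections drawn in the UI become each node's input list.
        return [src for src, dst in spec["edges"] if dst == name]

    for name in spec["nodes"]:
        print(name, "<-", inputs_of(name))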

5 of 7

Live Art in Context – A Simple Example

[Data-flow diagram:
 search Twitter for “James Webb Telescope” → LLM prompted with the Twitter text → text to image via Stable Diffusion;
 live feed from Tokyo → segmentation;
 compose foreground/background.]
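
A minimal sketch of the final compose step, assuming segmentation yields a foreground mask for the live feed and Stable Diffusion supplies the background; the random arrays are stand-ins for real frames.

    import numpy as np

    h, w = 64, 64
    background = np.random.rand(h, w, 3)    # text2img output (stand-in)
    foreground = np.random.rand(h, w, 3)    # Tokyo video frame (stand-in)
    mask = np.random.rand(h, w, 1) > 0.5    # segmentation mask (stand-in)

    # Keep foreground pixels where the mask is set, background elsewhere.
    composite = np.where(mask, foreground, background)
    print(composite.shape)  # (64, 64, 3)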

6 of 7

Live Art in Context – Next Steps

  • Expand model selection
    • Many more model choices exist that would expand creative control
  • Audio generation
    • Generating accompaniment is now possible
  • Separate update rates during evaluation for individual nodes to improve composition (see the sketch after this list)
  • Autonomous attention mechanisms that control scaling and mask generation based on activity or saliency
  • Editor improvements
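
A minimal sketch of per-node update rates: each node re-evaluates only when its refresh interval has elapsed, so a slow text-to-image node can hold its last frame while a fast video node updates every tick. The class and intervals below are hypothetical.

    import time

    class RateLimitedNode:
        def __init__(self, fn, interval_s):
            self.fn, self.interval_s = fn, interval_s
            self.last_time, self.last_value = -float("inf"), None

        def value(self):
            # Recompute only when the node's interval has elapsed;
            # otherwise return the cached result.
            now = time.monotonic()
            if now - self.last_time >= self.interval_s:
                self.last_value = self.fn()
                self.last_time = now
            return self.last_value

    video = RateLimitedNode(lambda: "new video frame", 1 / 30)  # ~30 fps
    t2i   = RateLimitedNode(lambda: "diffusion image", 5.0)     # every 5 s

    for _ in range(3):
        print(video.value(), "|", t2i.value())
        time.sleep(0.1)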

7 of 7

Thanks to the FSDL staff

Contact: dfm794@gmail.com, twitter: @dfm794