1. Column legend: Company / Tool Name; Cost per month; Corporation, Startup, Research, or Other; Headquarters location; Brief description; Technical overview; Contact; Instrument; Pro/Amateur; Influences and control; Upload influence; Dataset; Approach; Limitations; Commercial uses; Learning curve (previously "Ease"); Notes; Last updated; Resources; Tutorial?; Founded/Created year
2. Dadabots SampleRNN
- Cost: If you don't own an NVIDIA GPU, each training session costs about $100-200 in GPU credits, and you might need a few sessions to get the right output.
- Type: Hackers
- Location: Boston / Sacramento / Berlin
- Description: Python script; trains on a custom dataset and generates raw audio.
- Technical overview: Theano project in Python, run on a Linux server with an NVIDIA GPU with at least 16 GB RAM. https://github.com/dada-bots/dadabots_sampleRNN
- Contact: CJ Carr / Zack Zukowski, thedadabot@gmail.com
- Notes: Great for death metal, free jazz, beatbox, and improv chaos music. Needs some Linux skills.
- Influences and control: Control it by constructing the dataset; also temperature parameters.
- Dataset: Custom
- Founded: 2019
- Resources: https://www.youtube.com/watch?v=MwtVkPKx3RA
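The entry above mentions steering output with temperature parameters. A toy sketch of how temperature reshapes a model's next-sample distribution (hypothetical logits; not the SampleRNN code):

```python
import numpy as np

def temperature_probs(logits, temperature):
    """Convert raw next-sample scores into a sampling distribution.
    temperature < 1 sharpens it (near-greedy); > 1 flattens it."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                     # numerical stability
    e = np.exp(scaled)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]                       # toy scores for 3 candidate samples
cold = temperature_probs(logits, 0.1)          # nearly deterministic
warm = temperature_probs(logits, 1.0)          # plain softmax
hot = temperature_probs(logits, 10.0)          # close to uniform
rng = np.random.default_rng(0)
next_sample = rng.choice(len(hot), p=hot)      # draw one next value
```

Low temperatures make the output repetitive and safe; high temperatures make it chaotic, which is part of the Dadabots aesthetic.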
3. AIVA
- Cost: £39/month for Pro (unlimited downloads, downloads for pop & rock, commercial license, full copyright ownership); £11/month for Standard (no commercial use, no copyright).
- Type: Startup
- Location: Luxembourg
- Description: Focused on creating music for video games, movies, and commercials.
- Instrument: AIVA
- Approach: Deep learning and reinforcement learning
- Notes: Genesis album (2016). This AI has analyzed the works of many classical composers and created its own "Opus 1 for Piano Solo" as well as "Symphonic Fantasy in A Minor, Op. 21." Its compositions are used in Nvidia's videos and in the video game Pixelfield. Produces soundtracks for any type of media.
- Learning curve: Point-and-click easy: pick a genre, a "mood," and a duration.
- Last updated: 2020-12-08
4. Magenta PerformanceRNN
- Cost: Open source
- Type: Corporation (Google)
- Location: SF / Bay Area
- Approach: JS
- Last updated: 2020-12-08
5. Magenta MusicVAE
- Cost: Open source
- Type: Corporation (Google)
- Location: SF / Bay Area
- Notes: The most popular feature with musicians is GrooVAE, which sets it apart from other technologies by using neural networks to sound human. Yacht used it.
- Approach: JS, deep learning and reinforcement learning
- Commercial uses: Yacht (Chain Tripping)
- Last updated: 2020-12-08
6. Magenta Onsets and Frames
- Cost: Open source
- Type: Corporation (Google)
- Location: SF / Bay Area
- Instrument: Piano
- Approach: JS
- Last updated: 2020-12-08
7. Magenta Coconet
- Cost: Open source
- Type: Corporation (Google)
- Location: SF / Bay Area
- Resources: https://www.google.com/doodles/celebrating-johann-sebastian-bach; Coucou: https://coconet.glitch.me/; Cococo: https://pair-code.github.io/cococo/, https://github.com/PAIR-code/cococo
- Contact: Anna Huang
- Instrument: Multi-part music (each part monophonic)
- Dataset: Trained on 306 of Bach's chorale harmonizations
- Technical overview: PAIR used TensorFlow.js in the web browser
- Last updated: 2020-12-08
8. MAIA Suggest
- Cost: Open source
- Type: Research startup
- Location: Davis, California
- Description: A browser-based version of some of the code from the MAIA Markov package: https://www.npmjs.com/package/maia-markov
- Contact: Tom Collins / contact@musicintelligence.co
- Pro/Amateur: Either
- Influences and control: Style selection, tempo slider, and a settable seed to support replicable outputs. The web interface contains previously trained models; it's possible to build your own from the NPM package mentioned under "Technical overview."
- Dataset: Style options are the Eurovision dataset from last year, Irish jig, Classical allegro, and EDM drums. A fun mix!
- Approach: Markov model; the state transition matrix is pruned to reduce the probability of copying too much from any one input source into the output.
- Instrument: Plays back on a default piano or drum kit; can download a MIDI file.
- Commercial uses: Imogen Heap used it in 2020: https://www.youtube.com/watch?v=3L2Tw6r1rDQ
- Learning curve: Easy
- Last updated: 2021-04-14
- Founded: 2020 in its current form

9. OpenAI MuseNet
- Cost: Open source
- Type: Research org
- Location: SF / Bay Area
- Description: A deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music; instead, it discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text.
- Influences: Mozart, Rachmaninoff, country, Disney, jazz, Bach, Beethoven, Journey, The Beatles, "video," Broadway, bluegrass, and Tchaikovsky
- Contact: Christine Payne
- Technical overview: MuseNet uses the recompute and optimized kernels of Sparse Transformer to train a 72-layer network with 24 attention heads, with full attention over a context of 4096 tokens.
- Resources: Blog post: https://openai.com/blog/sparse-transformer/; Paper: https://arxiv.org/abs/1904.10509; Code: https://github.com/openai/sparse_attention
- Notes: incorporate the dramatic key changes
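One entry above describes a Markov model whose state transition matrix is pruned to avoid copying too much from any one input source. A toy sketch of that pruning idea (hypothetical note names and a made-up source-count threshold; not the maia-markov code):

```python
import random
from collections import defaultdict

def build_transitions(melodies, min_sources=2):
    """Count note-to-note transitions across several source melodies,
    then prune transitions that appear in fewer than `min_sources`
    sources, so no single source dominates the output."""
    seen_in = defaultdict(set)
    for i, mel in enumerate(melodies):
        for a, b in zip(mel, mel[1:]):
            seen_in[(a, b)].add(i)
    table = defaultdict(list)
    for (a, b), srcs in seen_in.items():
        if len(srcs) >= min_sources:      # the pruning step
            table[a].append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the pruned transition table from a start note."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and table.get(out[-1]):
        out.append(rng.choice(table[out[-1]]))
    return out

melodies = [["C", "D", "E", "C"], ["C", "D", "G", "C"], ["E", "C", "D", "E"]]
table = build_transitions(melodies)
line = generate(table, "C", 8)
```

The transition D to G appears in only one source melody, so it is pruned and can never be reproduced verbatim in the output.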
10. Magenta Studio
- Description: Magenta Studio is a collection of music plugins built on Magenta's open-source tools and models. These tools are available both as standalone applications and as plugins for Ableton Live.
11. Magenta Music Transformer
- Cost: Open source
- Type: Corporation (Google)
- Location: SF / Bay Area
- Resources: Colab: https://magenta.tensorflow.org/piano-transformer
- Contact: Anna Huang
12. Magenta DDSP
- Cost: Open source
- Type: Corporation (Google)
- Location: SF / Bay Area
13. OpenAI Jukebox
- Cost: Open source
- Type: Research org
- Location: SF / Bay Area
- Approach: VQ-VAE architecture to compress audio to discrete codes, with a loss function designed to retain the maximum amount of musical information. An autoregressive sparse transformer is trained with maximum-likelihood estimation over this compressed space. Autoregressive upsamplers are also trained to recreate the lost information at each level of compression. Raw audio domain.
- Dataset: To train the model, OpenAI crawled the web to curate a new dataset of 1.2 million songs (600,000 of which are in English), paired with the corresponding lyrics and metadata from LyricWiki. The metadata includes artist, album genre, and year of the songs, along with common moods or playlist keywords associated with each song. Training is on 32-bit, 44.1 kHz raw audio, with data augmentation by randomly downmixing the right and left channels to produce mono audio.
- Limitations: While Jukebox represents a step forward in musical quality, coherence, length of audio sample, and ability to condition on artist, genre, and lyrics, there is a significant gap between these generations and human-created music. For example, while the generated songs show local musical coherence, follow traditional chord patterns, and can even feature impressive solos, we do not hear familiar larger musical structures such as choruses that repeat. The downsampling and upsampling process introduces discernible noise; improving the VQ-VAE so its codes capture more musical information would help reduce this. The models are also slow to sample from, because of the autoregressive nature of sampling: it takes approximately 9 hours to fully render one minute of audio, so they cannot yet be used in interactive applications. Techniques that distill the model into a parallel sampler can significantly speed up sampling. Finally, the models are currently trained on English lyrics and mostly Western music, but in the future the team hopes to include songs from other languages and parts of the world.
- Notes: Takes about an hour to generate one minute of top-level tokens.
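Jukebox's VQ-VAE compresses raw audio by snapping each encoder output vector to its nearest entry in a learned codebook. A toy sketch of just that quantization step (a hand-made 2-D codebook; not the Jukebox model):

```python
import numpy as np

def quantize(vectors, codebook):
    """Map each continuous vector to the index of its nearest codebook
    entry (squared Euclidean distance); return the discrete codes and
    their codebook reconstructions."""
    # pairwise distances: shape (n_vectors, n_codes)
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = d.argmin(axis=1)
    return codes, codebook[codes]

codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
frames = np.array([[0.1, -0.1], [0.9, 0.2], [0.4, 0.9]])  # toy "encoder outputs"
codes, recon = quantize(frames, codebook)
```

The integer codes are what the autoregressive transformer is trained on; the reconstructions show the information lost by snapping to the codebook.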
14. PopGun
- Location: Australia
- Description: Deep learning technology. The program can predict what you will play, accompany you while you're playing, and even improvise. An electronic keyboard is the main instrument for PopGun, as it was trained on electronic samples; however, the developers promise to collaborate with musicians to teach the tool other instruments as well.
- Commercial uses: Roblox
- Learning curve: Point-and-click easy: pick a genre, a "mood," and a duration.
15. Amper (bought by Shutterstock, 11/11/2020)
- Type: Startup
- Location: NY
- Description: A cloud-based solution that creates music based on the selected style, mood, and duration. It allows users with little to no experience in music to create tracks in various genres. The website has a user-friendly interface and two modes: simple and pro. If you're not satisfied with the results, you can change the beat or the instruments used until you get what you want.
- Contact: Drew Silverstein, Sam Estes, and Michael Hobe
- Notes: Allows non-musicians to create music by indicating parameters like genre, mood, and tempo.
- Learning curve: Point-and-click easy: pick a genre, a "mood," and a duration.
16. Alysia (WAVE AI)
- Cost: 6/month
- Type: Startup
- Location: SF / Bay Area
- Description: Singing AI app. Lyric AI creates lyrics and Melody Studio creates vocal melodies.
- Notes: Karaoke
17. AUDOIR SAM
- Cost: Free
- Type: Startup
- Location: SF / Bay Area
- Description: SAM is a suite of AI-powered songwriting tools designed to help artists create hit songs. Artists can use SAM to quickly write melodies, chords, and lyrics. https://www.audoir.com/how-to-build-a-songwriting-ai
- Contact: Wayne Cheng
- Instrument: Piano
- Influences and control: Create music from scratch, from lyrics/text, from a chord progression, or from a motif. Create lyrics from text.
- Dataset: Over 5,000 hit songs, MusicXML files
- Approach: Deep learning. https://www.audoir.com/can-ai-write-a-hit-song
- Commercial uses: Five commercially released songs in various genres, available on their website.
- Tutorial: https://www.audoir.com/sam-tutorial
18. LALAL.AI

19. Humtap
20. Cyanite

21. LANDR
22. Melomics Media
- Description: This AI creates music with no help from humans. All compositions are created from scratch using the Iamus supercomputer. Developers later added the Melomics109 cluster and improved the computing capabilities of the machine, which already has two albums.
23. Flow Machines
- Cost: Closed
- Type: Corporation (Sony)
- Location: Paris
- Approach: Markov chain (a model describing a sequence of possible events in which the probability of each event depends on the state of the previous event)
- Limitations: Can compose a melody but not generate a full song automatically.
- Commercial uses: "Daddy's Car"
24. Melodrive
- Type: Startup
- Location: Berlin
- Description: Adaptive music generation platform. In interactive media, there is simply too much content for composers to fill with music, especially music that supports an emotional storyline in dynamic situations. They are building an AI composer to extend the capabilities of human composers, so that music automatically adapts to both user input and the emotional setting.
25. Jukedeck (bought by TikTok, 2018)
- Location: London
- Pro/Amateur: Amateur
26. IBM Watson Beat
- Type: Corporation (IBM)
27. Amper
- Type: Startup
- Contact: Drew Silverstein, Sam Estes, and Michael Hobe
- Description: Background music for social media; allows non-musicians to create music by indicating parameters like genre, mood, and tempo.
- Pro/Amateur: Amateur
- Commercial uses: Taryn Southern album
28. Endel
- Cost: ~4.17/month
- Type: Startup
- Location: Berlin
- Description: Personalized soundscapes driven by user state (light, time, motion, heart rate, weather)
- Contact: Stavitsky
- Pro/Amateur: Amateur
- Commercial uses: Grimes lullaby
29. Bronze
- Contact: Lexx, Gwilym Gold, and scientist Mick Grierson
30. Calamity
31. crAIa
- Cost: €250/month; limited to selected people.
- Type: Startup
- Location: Europe
- Description: You have to listen; crAIa does the rest.
32. Mustango
- Cost: Open
- Type: Research
- Location: Singapore
- Description: Text-to-music system, free to use at https://huggingface.co/spaces/declare-lab/mustango
- Resources: https://arxiv.org/abs/2311.08355
- Contact: dorien.herremans@gmail.com
- Influences: Various genres
- Pro/Amateur: Anybody can use
- Influences and control: Free-text control
- Dataset: MusicBench
- Approach: Text-to-music with diffusion
- Limitations: 10 s fragments
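Mustango generates music with a diffusion model. The forward (noising) half of a standard diffusion process can be sketched in a few lines (a toy 1-D signal and a generic DDPM-style linear schedule; not Mustango's actual implementation):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Jump straight to noise level t using the closed form
    q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)[t]
    noise = rng.normal(size=x0.shape)
    xt = np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * noise
    return xt, noise            # the denoiser is trained to predict `noise`

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)          # linear noise schedule
x0 = np.sin(np.linspace(0, 8 * np.pi, 256))    # toy "audio latent"
xt, eps = forward_diffuse(x0, 999, betas, rng)
```

Generation runs this process in reverse: starting from pure noise, a text-conditioned network repeatedly predicts and removes the noise.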
37. Computer Accompaniment (Carnegie Mellon University)
- Description: The Computer Music Project at CMU develops computer music and interactive performance technology to enhance human musical experience and creativity. This interdisciplinary effort draws on Music Theory, Cognitive Science, Artificial Intelligence and Machine Learning, Human Computer Interaction, Real-Time Systems, Computer Graphics and Animation, Multimedia, Programming Languages, and Signal Processing.[10]
38. MorpheuS
- Description: MorpheuS[13] is a research project by Dorien Herremans and Elaine Chew at Queen Mary University of London, funded by a Marie Skłodowska-Curie EU project. The system uses an optimization approach based on a variable neighborhood search algorithm to morph existing template pieces into novel pieces with a set level of tonal tension that changes dynamically throughout the piece. This optimization approach allows the integration of a pattern-detection technique in order to enforce long-term structure and recurring themes in the generated music. Pieces composed by MorpheuS have been performed at concerts in both Stanford and London.
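The variable neighborhood search behind MorpheuS can be illustrated with a toy version: perturb a pitch sequence, widening the neighborhood only when improvement stalls, to match a simple interval-based "tension" profile (a crude stand-in for MorpheuS's real tension model and neighborhoods):

```python
import random

def tension(pitches):
    # Toy proxy for tonal tension: the size of each melodic leap.
    return [abs(b - a) for a, b in zip(pitches, pitches[1:])]

def cost(pitches, target):
    return sum((t, g) == () or (t - g) ** 2 for t, g in zip(tension(pitches), target))

def vns(pitches, target, neighborhoods=(1, 2, 4), iters=2000, seed=0):
    """Variable neighborhood search: try small perturbations first,
    widen the neighborhood when stuck, reset to the smallest
    neighborhood on any improvement."""
    rng = random.Random(seed)
    best = list(pitches)
    k = 0
    for _ in range(iters):
        step = neighborhoods[k]
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.choice([-step, step])   # perturb one note
        if cost(cand, target) < cost(best, target):
            best, k = cand, 0                  # improvement: restart at k=0
        else:
            k = min(k + 1, len(neighborhoods) - 1)
    return best

template = [60, 62, 64, 65, 67, 65, 64, 62]    # MIDI pitches of a template piece
target = [1, 1, 1, 1, 1, 1, 1]                 # desired leap ("tension") profile
out = vns(template, target)
```

The real system optimizes toward a time-varying tension curve and constrains detected patterns so that themes recur; the search skeleton, though, is the same.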
39. Ross Goodman: lyric generator
42. Ableton (has Magenta plugin)
- Notes: MIDI back catalog; broke instruments up into keyboard, drums, and vocals, in 4-bar loops, then ran them through different temperatures hundreds of times to produce source material. No additive edits, only subtractive; could transpose.
- Commercial uses: Yacht (Chain Tripping)
43. ChucK
- Location: Stanford, Princeton
- Description: ChucK is a text-based, cross-platform language that allows real-time synthesis, composition, performance, and analysis of music. It is used by SLOrk (Stanford Laptop Orchestra)[12] and PLOrk (Princeton Laptop Orchestra).
- Contact: Ge Wang
44. Piano Scribe
45. Magenta NSynth Super
- Contact: Doug Eck
- Notes: 16 kHz, mono
46. Spotify
- Contact: François Pachet
47. CCL / Sony (Paris)

48. IRCAM

49. CIRMMT
51. Voisey (acquired by Snapchat, 11/21/2020)
54. Amazon AWS's DeepComposer
55. Magenta Piano Genie

56. Voyager
- Contact: George Lewis (Columbia)
- Founded: 1980s
57. OpenMusic
- Technical overview: Lisp

58. Bach in Max/MSP
- Description: Computer-aided composition in Max

59. Meapsoft
61. Rebecca Fiebrink, Wekinator
- Type: Academia/Research
- Location: Creative Computing Institute, University of the Arts London
- Instrument: Gesture, audio, etc.
- Description: Her research focuses on enabling new forms of human expression, creativity, and embodied interaction, combining techniques from HCI and machine learning. She developed Wekinator, an interactive machine learning tool for artists.
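Wekinator-style interactive machine learning maps live input features (gesture, audio, etc.) to synthesis parameters from a handful of user demonstrations. A toy sketch using nearest-neighbor regression (hypothetical features and parameters; Wekinator itself offers several learner types):

```python
def knn_predict(examples, x, k=2):
    """examples: list of (feature_vector, param_vector) demonstrations.
    Predict parameters for input x as the mean over the k nearest demos."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(examples, key=lambda ex: dist(ex[0], x))[:k]
    dim = len(nearest[0][1])
    return [sum(ex[1][i] for ex in nearest) / k for i in range(dim)]

# Demonstrations: (hand position) -> (synth pitch in Hz, filter cutoff in Hz)
demos = [((0.0, 0.0), (220.0, 500.0)),
         ((1.0, 0.0), (440.0, 500.0)),
         ((0.0, 1.0), (220.0, 2000.0))]
params = knn_predict(demos, (0.1, 0.0))   # a new, unseen gesture
```

The artist records a few input/output pairs, then the model interpolates in real time, which is the core interaction loop Wekinator popularized.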
62. Mark Santolucito
- Location: Barnard CS
- Instrument: Gesture, audio, etc.
63. Seth Cluett

64. MIT Media Lab work

65. David Cope
- Type: Academia/Research
- Location: UC Santa Cruz (retired)
66. Brad Garton (see his Memory Book project)
67. Antescofo

68. Da-TACOS Dataset
69. Hooktheory and ClassicalMusicArchives, if you'd like an intro: https://docs.google.com/document/d/1BhZYhagVGnsIwVH1V-FuSukVp0gg96moahTsiipg6NQ/edit?usp=sharing
70. Creative Commons / Free Music Archive
71. Shimon (Georgia Tech)
- Description: Research songwriting robot
72. https://magenta.tensorflow.org/music-transformer
73. Brian Eno and the "Koan" program
74. Dadabots
- Resources: https://www.metalsucks.net/2019/04/22/a-youtube-channel-is-streaming-ai-generated-death-metal-24-7/
75. Scriptbook

76. B12

77. The Grid

78. Canva

79. Object AI

80. Firedrop

81. OBVIOUS

82. Prisma Labs

83. Cyanapse

84. Lumen5

85. Skylum

86. Logojoy

87. Runway

88. Google MusicLM

89. WAV tool