Jeanine Herbst

Is your favorite music being helped along by artificial intelligence? It could be. A famous band just used AI on a new release that made the top 10 on the Hot 100 chart.

Hi, I'm Jeanine Herbst, and you're listening to Where What If Becomes What's Next, a new podcast from Carnegie Mellon University, where we're exploring what AI means to industries, government, society, and you.

This season, we'll explore the ever-evolving world of artificial intelligence, the transformative impact AI has on society, work, and everyday life, the cutting-edge innovations, and the ethical dilemmas.

On this episode, we'll look at AI's use in music and entertainment.

Oh, and that famous band? The Beatles. They used AI to separate the late John Lennon's voice from the piano in an old demo of “Now and Then,” the group's newly released final song, originally recorded on a single track in the 1970s. That track also had ambient sound in the background, and for decades, existing audio technology couldn't separate and clean up John's vocals. So the band shelved the idea of finishing and releasing the song.

But film director Peter Jackson, who used AI to clean up music in his 2021 documentary about the band, changed their minds.

Drew Thurlow is founder of Opening Ceremony Media and a former senior vice president of A&R at Sony Music.

Drew Thurlow

The interesting part about The Beatles is that it was driven by the Peter Jackson documentary. He developed some kind of advanced audio tools in order to get the documentary made.

And there's quite a bit of synthetic speech and audio research out there. So I don't think it was a hard thing for him to develop this, especially with some of this white-label tech. But if the audio is recorded that poorly in a non-digital environment, which that was, then up until now, you really couldn't separate it in a way that was clean enough to release as a commercial track. And that's the promise of what is happening now. And that's why we have a new Beatles song. Kind of hard to believe that there's Beatles repertoire out there that the world hasn't heard yet, but here we are.

Jeanine Herbst

Drew, this technology isn't new.

Drew Thurlow

Well, I think that's part of the interesting part of the conversation is that it's new, but it's also not new. And if you have spent any time in a pop recording session over the last 10 years, you've been engaging with, if not AI, at least, you know, heavily machine-learned technology.

Some parts of audio processing have been using this technology for quite a while. So I think in terms of mass-market adoption and consumer tools, it is quite new, but this technology has been utilized in recording for a long time. What The Beatles did for this latest song was probably a bit more of an advancement of this technology, but it's certainly been used for quite a while.

Jeanine Herbst

Jessica Powell is also joining us. She's the CEO of AudioShake, an app that uses AI to separate vocal and instrumental stems. Her company has worked on separating tracks for Nina Simone and Ol' Dirty Bastard, among others. Jessica, you've been working on this for a while now, right?

Jessica Powell

We've done everything from separating the original album of Nina Simone so that it could be made immersive in Dolby Atmos, which is what you now hear if you're listening on Tidal or Amazon Music or Apple Music, through to making it possible for SZA and her producer Rodney Jerkins to sample ODB on a track that's on her current album.

Jeanine Herbst

So how exactly does this work? Walk me through the process.

Jessica Powell

We can separate music from a background. Any kind of task you can think of, ranging from something a music editor might do in their editing workstation through to things you might want to do at scale – wherever you need to manipulate or edit or remix or mix the audio somehow, you need to be able to get at those different components and those different layers of sound. And that's what we do: we basically open up audio recordings so that they can be made interactive, customizable, mixable, editable.

Jeanine Herbst

We would have loved to play a clip from The Beatles' “Now and Then,” but copyright prevents us.

Instead, here's how AI separated and cleaned up legendary blues singer Bessie Smith's vocals from a 1923 recording of “Bleeding Heart Blues.” Here's the original recording.

(Singing)

Using AudioShake, AI separated the vocals from the piano track.

(Singing)

Then we used AI to clean up the vocals even more and asked some musicians to compose and record a new backing track in a process very similar to how the Beatles produced “Now and Then.”

(Singing)

AI is being used in so many ways in music production, and it has been for a while now: from recommending songs and artists on music platforms based on similarities or moods, to making music by composing songs based on styles and genres, to suggesting themes and writing lyrics. AI can also enhance live performances by adapting the sound to a room's acoustics or vibe.

It's just opening up a lot of uses for artists right now, right, Drew?

Drew Thurlow

That's a lot of what AI is promising. The catalog, the heritage artist who hasn't released new music in quite a while, some of the major labels are now working on training algorithms on an artist catalog. The output is essentially very related musically to what they would have done if they were still composing and performing. That comes with a lot of controversy, but it also comes with a lot of opportunity.

Jeanine Herbst

When we come back, we'll take a look at how some of these tools are being used and what's next for music in both good and bad ways and what can be done about it.

This podcast is brought to you by Carnegie Mellon University. Located in Pittsburgh, Pennsylvania, CMU is a private international research university that's consistently ranked among the top universities in the world. Its trailblazing research is focused on creating, defining, and accelerating the future of artificial intelligence. Coming up, we'll talk about the future of AI and music.

Jeanine Herbst

Welcome back. We're talking about artificial intelligence and music. It's already being used by a number of musicians and entertainers to separate old vocal tracks from instruments and more, including by The Beatles on their latest release, featuring the voice of the late John Lennon. That did cause controversy, but both Paul McCartney and one of Lennon's sons, Sean, stand behind the use of AI. Still, it raises some tricky ethical and legal issues: copyright ownership for work created by AI, ethical use of another musician's sound or likeness, who gets compensated for AI-created works, and whether it's fair to bring back work from a deceased musician.

Dan Green is a professor and the director of the Entertainment Industry Management Program at Carnegie Mellon. What do you make of this? Is AI in music a good thing or something to be wary of?

Dan Green

Yeah, I think you're asking the right question there. Is there a right way or wrong way? Because in my opinion, there is. Now granted, arguably the most popular band in pop music ever ended up having a great team behind it, right? It was McCartney and Giles Martin, the son of George Martin, and Ben Foster who ended up putting that together. So it was a Beatles project that was put together.

And I would say that that was used in a way that would allow creativity to flourish. I don't want to be Pollyanna here, but that was a really nice use of it.

Jeanine Herbst

But the use of AI to clean up and repackage the work of musicians, alive or dead, is controversial.

Dan Green

We know for a fact there are negative ways people are doing it. People are using AI without the consent of the artists and their families. There's a clip online, it's really fascinating, of Ozzy Osbourne listening to “You're Still the One” with his voice. It's AI – he had not heard it. And the gentleman on a podcast plays it for Ozzy, and it sounds just like Ozzy singing something that is not an Ozzy-type tune, “You're Still the One.” And then the gentleman playing it says, this is fantastic, Ozzy, it's you. And Ozzy, you know, who's battling some ailments, said, yeah, yeah, it's okay. And his wife says, I don't like it. I think it's bad. And the podcast host says, I'd give it at least a B-plus. And by the end of the conversation, Ozzy goes, yeah, it does sound like me. So there's that whole aspect. No one checked with Ozzy, and it's got thousands and thousands of listens. So that's a slippery slope.

Jeanine Herbst

Well, the heirs of the late comedian George Carlin sued over an AI-produced fake comedy podcast. It stated it was an AI-produced piece, but then used a voice that sounded like Carlin riffing on current events, even though he died in 2008.

Dan Green

I think you're right. What you're really touching on is being an advocate for artists. And I think that's where we need to stand as a society. AI can do a lot of things wonderfully. One of them is replication: it can replicate things, but it can't get into the human mind. So I felt the exact same way you did. It came off as the angry old man, without any subtlety or nuance, and without any joy. It almost became raging about life in 2023-24 rather than trying to bring up something new.

Jeanine Herbst

And then there's the fake Drake and The Weeknd song, “Heart on My Sleeve,” made by a musician, Ghostwriter977, who used AI to write the melody and lyrics, arrange the music, and even perform the vocals to sound like two of the most famous musicians in the world. It got millions of hits online and brought publicity to the artists, but it's not their voices or their song.

Dan Green

There are a lot of positive factors related to that. That being said, it was not Drake and it was not The Weeknd. And then you run into a concern: now that we know what a Drake and The Weeknd voice is, do we need them any longer in order to create new music? The short answer, of course, is yes, we do need them, because they're artists. But in essence, that was a workaround. And that was, I think, a scary aspect, because not only are you taking money from the artists, you're also blurring what is truthful and what's not.

Jeanine Herbst

Seems like legislation will be needed soon around the world. Right, Drew?

Drew Thurlow

It's worth looking to see what other governments are doing. The European Union is much further ahead, as it typically is on these kinds of things. The current U.S. administration is taking it seriously, too; it announced an AI Bill of Rights last spring. You know, it's toothless, but it was really, really comprehensive about how we want to manage the growth of AI: making sure that training data doesn't reflect the historical biases that a lot of media currently has, making sure artists and writers are compensated, and making sure training data and the models are transparent so people can police them. So it was a really smart thing. But there needs to be some kind of consensus, and right now, no, there's not.

Jeanine Herbst

We've been talking about the use of AI and the problems that come with it. But what's the future of the technology and entertainment?

Brett Crawford

So I think that AI, across all the arts, offers a great opportunity for collaboration for living artists. I think there are some interesting opportunities with artists who have passed on, but those who are engaging in those opportunities need to be thoughtful about what would benefit the artists and about the artist's intent in their original work.

Jeanine Herbst

Brett Crawford is a professor at Carnegie Mellon and the director of the Arts and Entertainment Management programs. So Brett, being true to the artist's intent is critical to the ethical use of AI, right?

Brett Crawford

So Keith Haring is a fairly well-known visual artist. And in the last year of his life, he was aware that he was passing. He was aware he was fighting a disease. And he designed a Keith Haring piece that was intentionally incomplete.

And someone took that piece and said, oh, I can use AI and I can finish it, much like they did with the Beatles song, like what's been done with novels and things like that. And there was an uprising, because they had forgotten that the core reason he'd left it unfinished was that it was a symbol of his own life. It was also a symbol of the lives taken away by the disease: the loss that happens, the emptiness that's left behind. So that was an interesting moment where, I think, the excitement around AI and what it can do sometimes gets in the way of what the artist's intent is.

Jeanine Herbst

So what's on the forefront? What's next?

Brett Crawford

I think spatial is a great place to think about what we can do. There's one theater where they've essentially created 360 audio in the space. I mean, you'd think 360 audio is already there because you're in the space, but this makes it much more alive, to where, even though you're in, you know, row T, seat five, you feel it right next to you.

Jeanine Herbst

Spatial audio is kind of like virtual surround sound, giving you the experience of being surrounded by several sources of audio all at once. It's used in movies and streaming, but not so much in music yet.

Brett Crawford

I do think our entertainment is not going to be anything we can imagine today, at least for those who have access to it. As soon as we can free ourselves of headsets, and I'm not saying it's necessarily going to be virtual, but I think there's a 3D, spatial element that's going to enter our home entertainment as well as our lived entertainment.

Jeanine Herbst

Production studios seem to be on board. Dan, music labels are spending a lot of money on AI.

Dan Green

The spend from the studios on AI people is really remarkable. You know, Sony hired someone who's going to be in charge of AI for the music industry. The industry has to embrace it. We can't put our heads in the sand and go, this will pass by. It's not going to pass by. It's going to be like the electric typewriter and the computer. It's going to be like the iPhone. It's going to be like digital filmmaking compared to celluloid filmmaking. So like it or not, it's going to be a part of our lives in the future. I just hope we can all use it intelligently and kindly together.

Jeanine Herbst

Drew, with the new AI tools and apps, almost anyone can create music without any of the necessary skills, talent or equipment. What does this mean for the industry?

Drew Thurlow

I call it the Instagramming of music. Very few of us have professional cameras or even know what aperture is, but billions of us are into our Instagram feeds and our TikTok feeds. So I think the consumer creator is the next great opportunity for the industry and everyone else.

Jeanine Herbst

So Dan, what's next for AI and music?

Dan Green

The interesting thing is that right now, there is an 11- or 12-year-old somewhere, you know, in the middle of Canada, or in Colombia, who's learning about AI and who could "break the internet," quote unquote, just based on what they're creating, which may not be done for another year or two. In other words, some young kid is learning about AI right now and starting to put it together.

Jeanine Herbst

Thanks to all my guests for joining us today. I'm Jeanine Herbst and you're listening to Where What If Becomes What's Next, a new podcast from Carnegie Mellon University where we're exploring what AI means to industries, government, society, and you.

Please check the show notes where you'll find links to additional resources for this episode. If you're enjoying the show, please review us on your favorite podcast site and share it with your friends. And be sure to subscribe wherever you get your podcasts so you never miss an episode.

To learn more or contact us with story ideas or comments, please visit ai.cmu.edu/podcast.