
*Note: Due to FCC rules for podcasts, we are unable to include the Randy Travis songs that were played during the live broadcast of the discussion. Therefore, you will hear some jump cuts in the recording.*
One month ago, country music singer Randy Travis released a new studio album. Travis has been absent from the music scene since 2013, when a life-threatening stroke left him almost unable to speak or sing. Despite the physical progress he’s made, Travis’ new album was made possible largely by a surrogate singer and artificial intelligence.
But you don’t have to be a well-known artist to use AI to create music; the technology can now generate full songs with just a few text prompts from a user. Some artists are also using it to remake songs.
What does that mean for the industry? Can AI convey the complexity of human emotion? Should it?
This hour, we explore the risks and rewards of AI in music. Our guests discuss how the technology is changing the landscape, what it could mean for who creates and owns content, and whether – years from now – we’ll need to distinguish between the AI version of a popular song and the original.
Our guests:
- T.J. Borrelli, musician and principal lecturer in the Department of Computer Science at RIT, whose classes focus on cryptography, computer science theory, and artificial intelligence
- Amanda Chow, M.D., singer-songwriter
- Eryk Salvaggio, artist, writer, research fellow for the Flickr Foundation, emerging technology research advisor for the Siegel Family Endowment, and lecturer in responsible AI for the Elisava Barcelona School of Engineering and Design