Case Western Reserve University's independent student news source

The Observer

AI is ushering in the next wave of music, for better or for worse

Lucas Yang

Artificial intelligence (AI) has reached the music industry, and it won’t be a fleeting phenomenon.

Chart-topping rapper Bad Bunny is one of the most recent high-profile artists to address the consequences of using generative AI as a music production tool. In early November, the artist took to social media to vehemently express his disapproval of the viral song “Demo #5: nostalgIA.” The track, released by an artist called flowgptmusic, used AI to generate the voices of Bad Bunny, Justin Bieber and Daddy Yankee. The original recording has since been removed from Apple Music and Spotify, but the song’s popularity brings the conversation about AI’s place in the music industry to center stage.

Bad Bunny is not the first figure in the music industry to be affected by AI. Beyoncé, Drake, the Weeknd and Selena Gomez comprise only a short list of stars whose AI-mimicked voices have circulated on the Internet.

Some artists are excited about the possibilities that the tool brings with it. At a live concert in February, French DJ David Guetta performed a song that used AI to generate rapper Eminem’s voice. On X (formerly known as Twitter), Guetta posted a clip of the performance with a caption saying, “Let me introduce you to… Emin-AI-em,” and clarified in a comment that he “obviously…won’t release [the track] commercially.” Furthermore, the final Beatles song, “Now and Then,” was made possible by AI technology that isolated and clarified John Lennon’s voice from an old, unreleased demo.

So, what makes AI such a disruptive force in the music industry? It is, after all, not the first technology to challenge the norms of music production. For instance, the pitch-correcting technology Auto-Tune received backlash before permeating nearly all music genres, and platforms such as GarageBand and SoundCloud empowered hobbyist musicians to produce and disseminate music with unparalleled ease.

The answer, I think, lies in the fact that AI uniquely juxtaposes creative possibilities with intellectual property issues.

As a creative tool, AI could propel the next wave of music in a revolutionary manner. Amateur musicians can prompt computer programs to generate professional-sounding songs in minutes. Hit songwriters report using it to test out lyric ideas. Producers can also hear how a song might sound with different artists’ voices. This can both inspire a song to take a certain direction and help songwriters pitch songs to big-name artists, who may be more inclined to contribute when presented with good AI demos. Additionally, independent vocalists could consent to license their voices for producers to use in published music. Finally, algorithms could learn listener preferences to generate playlists and even create customized songs.

But at this point, AI’s presence in artistic fields is riddled with legal concerns regarding intellectual property. Since many programs work by scraping masses of data to generate content, AI-generated creations often exploit copyrighted work. This type of infringement could play out in the composition of lyrics and melodies, among other musical elements. Furthermore, some singers fear that others may profit off the signature sound of their voice without consent. By using an artificial recreation of an artist’s voice, a producer can generate a sound that is similar to but not a true copy of the singer’s vocals. This nuance makes it difficult for current publicity and likeness laws to defend artists’ rights to their own voices. Undoubtedly, this predicament will spark a flurry of new laws modifying the definition of likeness in creative works.

There’s also the question of the rights of AI artists themselves. Does the ability to masterfully use AI to generate novel outputs hold its own merit? As of March 16, 2023, the policy of the U.S. Copyright Office holds that AI-generated creative works made “without any creative input or intervention from a human actor” are not eligible for copyright. Human authorship, however, is tricky to define. The Office notes that it is a “case-by-case inquiry,” meaning it’s highly dependent on the types of prompts used.

Regardless of whether laws can keep pace with AI’s role in music, it should come as no surprise that AI is going to transform music and other creative industries. The tool is caught in a chaotic crossfire of legal complications, rapidly evolving creative possibilities and a market of consumers who have proven to be receptive to its outputs. It’s impossible to predict exactly how AI will impact music, but we can expect the technology to help some artists flourish and leave others faltering.

About the Contributor
Lucas Yang
Lucas Yang, Graphic Designer
Lucas Yang (he/him) is a second-year student studying computer science and English. He enjoys abandoning art projects, watching figure skating and distimming the doshes.
