Threat or Tool? Artificial Intelligence in an Evolving Entertainment Industry

By: Kate Johnson

In 2023, the Recording Academy faced a new dilemma: is a song eligible for the Grammys if it was created by artificial intelligence? The artist Ghostwriter submitted “Heart on My Sleeve,” a track whose lyrics he wrote himself but whose vocals were generated by AI to imitate Drake and The Weeknd. Initially, Recording Academy CEO Harvey Mason Jr. seemed willing to allow it, saying that because the lyrics were written by a human, the song was eligible in the songwriting category. However, he quickly backtracked, ultimately deeming it ineligible because the vocals were not legally obtained.

This uncertainty reflects a broader conflict over the role of AI in the entertainment industry. Proponents laud its potential benefits, while critics decry its use, arguing that artists are losing work to machines. Nonetheless, AI has burst onto the scene, leaving policymakers scrambling to catch up. Moving forward, how can innovation with artificial intelligence be balanced against the protection of artists’ livelihoods?

One of creators’ main concerns is that their works are being used without their consent or compensation. To generate songs, an AI model is first trained on existing creative works, drawn from both the public domain and copyrighted material, learning patterns from them. Then, given an input (for example, a short lyrical prompt), the model produces a piece by repeatedly predicting the most probable next element. As a result, its outputs draw heavily on the voices and songs the system scraped, or captured information from, often without a license or the artists’ permission.
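The underlying loop is easier to grasp in miniature. The sketch below is a toy next-word model in Python, offered only as an illustration of the “predict the most probable next outcome” idea described above; the tiny corpus and the names used are invented for this example, and real music generators are vastly more sophisticated. What it does show is the core point: everything the model produces is recombined from the material it was trained on.

```python
import random
from collections import defaultdict, Counter

# A tiny stand-in corpus; real systems train on vast catalogs of scraped lyrics and audio.
corpus = (
    "heart on my sleeve i wear my heart on my sleeve "
    "and my heart is on the line every time"
).split()

# Learn simple patterns: for each word, count which words tend to follow it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(prompt: str, length: int = 10) -> str:
    """Extend a prompt by repeatedly choosing a probable next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # the model has never seen anything follow this word
        choices, counts = zip(*candidates.items())
        # Sample in proportion to how often each continuation appeared in training.
        words.append(random.choices(choices, weights=counts, k=1)[0])
    return " ".join(words)

print(generate("heart on"))
```

Even at this toy scale, the output is only a reshuffling of the training text, which is why artists argue that generated songs pull directly from their work.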

Another pressing issue is the rise of deepfake technology. “Heart on My Sleeve” replicated Drake and The Weeknd’s voices without their consent. Because legal guidelines surrounding AI are still undeveloped, it is unclear how, or even whether, creators would be compensated for the use of their likeness. In fact, a major objective of the recent SAG-AFTRA strike was to create “a contract that explicitly demands AI regulations to protect writers and the works they create,” demonstrating that this issue is at the forefront of artists’ minds.

To protect creators and artists, new legislation must be developed. As the Ghostwriter debate spotlighted, there are few legal guidelines on AI usage in the entertainment industry, and copyright regulations are particularly unclear. Because the AI that created “Heart on My Sleeve” was trained on copyrighted recordings, it replicated Universal Music Group’s producer tag, allowing the label to prove a copyright violation. If the tag hadn’t been present, though, the song might still be available today, despite using Drake’s voice.

More broadly, current copyright law protects works that are original and fixed, meaning they were “independently created by a human author” and recorded in a tangible, lasting form. If anyone other than the author or creator reproduces such a work without permission, it is considered copyright infringement and is punishable by law. However, it is uncertain whether AI-created replicas, particularly ones that alter creative works in some form, would qualify as infringement.

The Copyright Office has recognized this gap and is taking action to bridge it. In 2023, it began reviewing existing law and issued a formal notice of inquiry seeking public comment. On July 31, 2024, it published Part 1 of its Report on Copyright and Artificial Intelligence, “urgently” calling for new federal legislation.

With all of these potential issues, it can be easy to believe AI is an ominous force. However, with adequate regulation, this new technology can bring a multitude of benefits. AI is already commonly used to smooth vocals and engineer sound, and is also a strong tool for inspiration. Additionally, it has been utilized to bring lost voices back to life. For example, the Beatles used AI to isolate John Lennon’s voice from a demo recorded before he died, enabling them to release a final record in 2023 titled “Now and Then,” featuring Lennon’s voice over 40 years after his death.

Applications of this new technology also exist outside of the music industry. This July, U.S. Representative Jennifer Wexton used AI to deliver a speech in her own voice after an aggressive neurological disorder took away her ability to speak. Even more remarkably, researchers have implanted devices in the brains of people with ALS who are unable to speak, restoring their vocal communication. This technology employs AI to “…actually [translate] brain activity into speech.” As these devices are further tested and implemented, they will continue to return voices to those who have lost theirs, improving quality of life for many.

An important note, however, is that these voices were replicated with the consent of the people involved or their estates. Without that consent, the same ethical concerns arise as with living artists. But when AI is used with the permission of those it mimics, it can both revitalize voices and stories that would otherwise be lost to time and make music more accessible. With the implementation of new guidelines and laws, AI can be a strong creative force, driving innovation and improving quality of life.

All in all, just because AI has the capacity to harm creative industries does not mean that it will. There is often a learning curve with new technology, since it is difficult to predict its repercussions early on. For example, the submission of “Heart on My Sleeve” to the Grammys prompted new guidelines on the use of AI in music, demonstrating that industries can respond to a changing environment.

Its growth does pose real dangers to creators, but with stronger regulations, artists can be protected. That protection will allow AI’s true positive potential to be realized, including giving voices back to those who have lost them. AI is a tool that pushes the boundaries of what is possible, providing inspiration, education, regeneration, and much more. We should not be afraid of progress; rather, we should be proactive with our protections. At the end of the day, AI is nothing without people.