Whose Track Is It Anyway: Art in the Time Of AI

A few years ago, I wrote about AI and music like it was something distant. It was an interesting theory you might bring up at a party, not something that should make you check your headphones twice.

Now it’s real. It’s loud. It’s in the control room with us. The real question isn’t whether AI can make music. It’s what happens when it can mimic every tiny breath between a Weeknd hook and a Drake verse and drop it online before either artist even knows it exists.

Remember “Heart on My Sleeve”? That AI-generated track sounded so much like Drake and The Weeknd that people lost their minds. The internet loves a mystery, so a song that sounded real went viral almost instantly. We really believed OVOXO was finally back! Then came the takedown notices, platforms scrambling, and the record industry realising this wasn’t a drill. Imitation is no longer harmless. Now it’s fast, legally murky, and impossible to contain.

What’s wild is how quickly this shifted from a niche experiment to something in our everyday apps. Indie producers, hobbyists, even complete beginners can now generate instrumental backings or vocal previews with a few text prompts. Some platforms promise they are licensed. Others seem to be playing the “let’s find out in court” game. It makes Auto-Tune feel almost quaint. That tool didn’t train on your unreleased demos.

Here’s the kicker. Timbaland, yes that Timbaland, launched Stage Zero, an AI label, and signed an AI “artist” named TaTa. That’s like Van Gogh opening a hologram art gallery. It feels symbolic and maybe even prophetic. When someone of that stature steps into AI’s sandbox, it isn’t just experimentation. It’s permission. It signals to the industry that the tool is fair game. But it also raises questions. What happens to the demo singers and session players? Will prompt writers become the new rock stars? It’s a creative leap that might also redraw the map of who gets paid and what’s worth protecting.

The Legal Chess Game
You can guess what’s next. Lawsuits. Record labels are accusing some AI platforms of training on unlicensed music catalogues. The legal question is simple to ask but massive in its consequences. Is training an AI model “copying” or is it “transformative,” the way a mashup or remix might be? If courts decide it’s copying, AI services could be forced to license every track they learn from, which would be a huge win for rights holders. If the opposite happens, we may be heading for a lawless flood of unlicensed AI-made music. Either way, these cases will shape the industry for decades.

Screens Took the First Hit (and Taught Us a Lesson)
The film industry had its AI moment first. Take Martin Scorsese’s The Irishman (2019), which used AI-assisted de-aging tech. Or Roadrunner (2021), where Anthony Bourdain’s voice was recreated with AI to read words he’d never spoken aloud, sparking an ethical debate about consent and authenticity in posthumous work. Or Top Gun: Maverick (2022), where Val Kilmer’s voice, damaged by cancer, was rebuilt from earlier recordings so his character could speak again. Even Bruce Willis struck a deal in 2022 to license his likeness for deepfake use, allowing his digital double to appear in projects without him ever stepping on set.

These moments lit the fuse on a bigger debate: if someone can use your voice or face without you in the room, what counts as “you” anymore? SAG-AFTRA and the WGA eventually pushed for hard limits on likeness and performance rights. Music is behind on this front, but the principles are the same. If someone can clone your voice without permission, you deserve control and compensation. Hollywood has shown that protections are possible, but they only happen when artists organise and demand them.

Too Many Tracks, Not Enough Texture
AI is opening the doors for more people to create. That’s good news for experimentation and for anyone making music on a tight budget. But it also means a surge of tracks that are technically fine but strangely flat: comfortably familiar, built on the same chord progressions and song structures. Suddenly, “vibe” is harder to find. This is why provenance matters. Who sang this? Who owns it? Which studio approved it? Fans are starting to care again, and authenticity is becoming a product feature, not just a buzzword.

What This Means for All of Us
Whether you’re a creator, a listener, or just someone wondering if that new track is real, one thing matters most: know the tools, and don’t pretend they aren’t part of the landscape. Draw your licensing lines, ask where the voice came from, and make that part of the conversation.

The bottom line is this: AI can copy us, but it can’t live our lives. It won’t know the story behind the broken-sounding demo mic, or how your band’s argument made the chorus better. That texture, that tiny human flaw, is the real song. AI can hum the notes, but it can’t hum with the same crack in the voice. Not yet. And maybe we learn to value that crack again.

The next few years will decide whether AI becomes just another studio tool or if it rewrites the creative economy. Either way, the art will keep coming. The question is: who will be making it, and who will be cashing the checks?