Track, Don't Block: The Music Industry's New Strategy Against Generative AI

In 2023, the music industry faced a wake-up call that was hard to ignore: “Heart on My Sleeve,” a fake AI-generated duet between Drake and The Weeknd, garnered millions of plays before anyone could even identify who had made it. More than a viral hit, the song exposed how fragile the industry’s control over digital content had become. Since then, the effort has not been to stop AI-generated music, but to make it traceable. A new infrastructure is emerging that aims to recognize synthetic content before it is even released, tag it with metadata, and regulate its distribution.
“If you don’t build these tools into your infrastructure, you’re just chasing the next big thing,” says Matt Adell, co-founder of Musical AI. “You need a network that goes from training to distribution.” A growing number of startups are building systems to detect AI-generated content within licensing workflows. Platforms like YouTube and Deezer have already activated internal tools that flag synthetic content at upload time, influencing its visibility in search results and personalized recommendations. Companies like Audible Magic, Pex, Rightsify, and SoundCloud are also integrating detection and attribution technologies to track everything from dataset creation to final distribution.
The emerging approach is less about removal than about building an ecosystem that treats AI traceability as basic infrastructure. Companies like Vermillio and Musical AI are developing systems that can analyze completed songs, identify AI-generated components, and automatically attach the appropriate metadata. Vermillio’s TraceID framework goes further, breaking songs down into “stems” such as vocal pitch, melodic phrasing, and lyrical structure in order to pinpoint synthetic segments. The goal is to foster a preemptive, authenticated licensing model. Vermillio estimates that the value of this kind of authenticated licensing could rise from $75 million in 2023 to $10 billion in 2025.
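In schematic terms, the tagging step looks something like the Python sketch below. Everything here is illustrative: the class names, the per-component scores, and the 0.8 flagging threshold are assumptions for the sake of the example, not Vermillio’s actual TraceID interface.

```python
# Illustrative sketch only: hypothetical names, not Vermillio's TraceID API.
# Pattern described in the article: analyze a finished track, estimate which
# components are synthetic, and attach the result as metadata.
from dataclasses import dataclass, field


@dataclass
class ProvenanceTag:
    component: str        # e.g. "vocals", "melody", "lyrics"
    ai_generated: bool    # detector's verdict for this component
    confidence: float     # detector confidence in [0.0, 1.0]


@dataclass
class TrackMetadata:
    title: str
    tags: list[ProvenanceTag] = field(default_factory=list)

    @property
    def contains_ai(self) -> bool:
        # Flag the track if any component is confidently synthetic
        # (the 0.8 cutoff is an arbitrary assumption for this sketch).
        return any(t.ai_generated and t.confidence >= 0.8 for t in self.tags)


def tag_track(title: str, detector_results: dict[str, float]) -> TrackMetadata:
    """Convert per-component detector scores into provenance metadata."""
    meta = TrackMetadata(title=title)
    for component, score in detector_results.items():
        meta.tags.append(ProvenanceTag(component=component,
                                       ai_generated=score >= 0.5,
                                       confidence=score))
    return meta


# Example: a detector scored the vocals as very likely synthetic.
meta = tag_track("Example Track", {"vocals": 0.97, "melody": 0.30})
print(meta.contains_ai)  # True
```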
Some companies are intervening even earlier, at the training stage, trying to quantify the creative influence of specific artists on generated tracks. That would make it possible to assign royalties in proportion to the estimated artistic contribution, heading off litigation after the fact. Musical AI, for example, aims to trace the provenance of every piece of content from the moment a model is trained. “Attribution should not start when the song is finished, but when the AI starts learning,” explains Sean Power, the company’s co-founder. “We want to measure creative influence, not just detect imitations.”
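The payout arithmetic behind that idea is simple; the hard, unsolved part is producing the influence scores in the first place. A minimal sketch, with hypothetical names and made-up scores, shows only the proportional split the article describes:

```python
# Hypothetical sketch of proportional royalty allocation. How a system like
# Musical AI's would actually estimate influence scores is not modeled here;
# this shows only the payout arithmetic on top of those scores.

def split_royalties(pool: float, influence: dict[str, float]) -> dict[str, float]:
    """Allocate a royalty pool in proportion to estimated influence scores."""
    total = sum(influence.values())
    if total == 0:
        return {artist: 0.0 for artist in influence}
    return {artist: pool * score / total for artist, score in influence.items()}


# Example: a generated track estimated to draw 60/30/10 on three artists.
shares = split_royalties(1000.0, {"Artist A": 0.6, "Artist B": 0.3, "Artist C": 0.1})
print(shares)  # {'Artist A': 600.0, 'Artist B': 300.0, 'Artist C': 100.0}
```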
Deezer has developed detection tools of its own: since April, they have flagged about 20 percent of newly uploaded songs as fully AI-generated, a share that has more than doubled since January. These songs remain available but are excluded from algorithmic and editorial recommendations, and Deezer plans to label them openly for users soon. “We are not against AI,” says Aurélien Hérault, the platform’s chief innovation officer. “But a lot of this content is used in bad faith, more to exploit the platform than to create value.”
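In code, an “available but not promoted” policy amounts to filtering the candidate pool before recommendations are ranked. The sketch below is illustrative, not Deezer’s internal implementation; the types and field names are assumptions:

```python
# Sketch of the "available but not promoted" policy the article describes:
# flagged tracks stay searchable and playable but never enter the candidate
# pool that feeds algorithmic or editorial recommendations.
from dataclasses import dataclass


@dataclass
class Track:
    title: str
    fully_ai_generated: bool  # hypothetical flag set by an upload-time detector


def recommendation_candidates(catalog: list[Track]) -> list[Track]:
    """Filter the catalog before ranking; flagged tracks are left out."""
    return [t for t in catalog if not t.fully_ai_generated]


catalog = [Track("Human Song", False), Track("Prompted Song", True)]
print([t.title for t in recommendation_candidates(catalog)])  # ['Human Song']
```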
Other initiatives aim to regulate the data that feeds the models in the first place. Spawning AI’s Do Not Train Protocol (DNTP) lets artists and rights holders opt their work out of AI training. Similar tools are already common for images, but audio is lagging behind, and the protocol is struggling to gain traction amid a lack of shared standards and uncertain support from the big AI companies. “The protocol needs to be run by an independent, nonprofit body to be credible,” warns Mat Dryhurst, an advocate for informed consent in AI training. “We can’t trust the future of consent to a single, opaque company that could go out of business overnight.”
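Conceptually, an opt-out check of this kind sits between data collection and training. The sketch below models the registry as a plain in-memory set of hypothetical work identifiers; the real protocol’s lookup mechanism and identifier scheme are not represented:

```python
# Illustrative sketch in the spirit of an opt-out check before training.
# The registry is modeled as an in-memory set of placeholder identifiers;
# this is not the actual DNTP API.

# Hypothetical registry of opted-out work identifiers.
DO_NOT_TRAIN: set[str] = {"WORK-0001", "WORK-0002"}


def filter_training_set(candidate_ids: list[str]) -> list[str]:
    """Drop opted-out works before the training dataset is assembled."""
    return [work_id for work_id in candidate_ids if work_id not in DO_NOT_TRAIN]


candidates = ["WORK-0001", "WORK-0003"]
print(filter_training_set(candidates))  # ['WORK-0003']
```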
Adnkronos International (AKI)