Google and Universal Music (and other music companies) may be in negotiations to license artists’ voices and melodies for songs generated by artificial intelligence (AI).
This development, initially reported by the Financial Times, emerges as the music industry faces new challenges and opportunities in monetizing AI-generated deepfake songs.
Negotiating AI-Generated Music
Technology that can convincingly replicate the voices of established artists has been a pressing concern for music corporations.
In response, Google and Universal Music could be in early talks to allow fans to legally create tracks using AI-generated voices while paying the rightful copyright owners. Artists would have the option to participate.
With deepfake songs already mimicking voices like Frank Sinatra or Johnny Cash, the issue is no longer a distant threat but a current reality. The goal now would be to bring it into a monetizable framework.
Artists such as Drake and Taylor Swift have been “featured” in AI-generated songs that have gone viral.
A Fine Line Between Innovation And Infringement
As AI gains traction in the music industry, some musicians have voiced concern that their work may be diluted by fake versions of their songs and voices.
Others, such as electronic artist Grimes, have embraced the technology.
For Google, creating a music product powered by AI could help the company compete with rivals, such as Meta, that are also developing AI audio products.
However, the issue of licensing and copyright in the age of AI-generated music is much more complex.
Corporations will have to strike a delicate balance between respecting artists’ rights, pushing the boundaries of AI innovation, and turning a profit.
MusicLM: High-Quality AI-Generated Music From Text
By typing prompts like “soulful jazz for a dinner party” into MusicLM, Google’s experimental text-to-music tool, users can generate two versions of a song and vote on their preference, helping to refine the model.
The model can also be conditioned on both text and melody, transforming whistled or hummed tunes to match the style described in a text caption.
While MusicLM is an experimental tool for generating synthetic music for inspiration, it operates under certain constraints. Queries that mention specific artists or request vocals will not produce audio, and users are encouraged to report any issues with the generated output.
This may be where a partnership with Universal Music comes into play. Warner Music, another major label, may also be in talks with Google for similar reasons.
Meta’s AudioCraft
At the beginning of August, Meta announced AudioCraft, a new tool for musicians and sound designers that could shape how we produce and consume audio and music.
AudioCraft consists of three primary models: MusicGen, AudioGen, and EnCodec. MusicGen, trained on Meta-licensed music, generates music from text prompts. AudioGen, trained on public sound effects, turns text into sounds such as a dog barking or cars honking. EnCodec is a neural audio codec that compresses and reconstructs audio.
The company is open-sourcing these models, giving researchers and practitioners access to train their own models for the first time. The move seeks to drive the field of AI-generated audio and music forward.
Excitement around generative AI has surged, but audio has lagged behind. High-fidelity audio requires modeling complex signals and patterns, making music generation especially challenging.
Meta hopes the AudioCraft family simplifies this process. Its open structure could allow individuals to build better sound generators, compression algorithms, or music generators.
The Future Of AI-Generated Music
It seems clear that big tech companies want to be the first to launch user-friendly platforms that translate ideas into musical reality.
The development also hints at the future direction of AI in music, offering insights into potential opportunities and risks.
Featured image: Sundry Photography/Shutterstock