Depicted by some artists as a threat to creativity, algorithms serve others as a powerful new instrument, able to stimulate their imagination, expand their creative capabilities and open doors to as-yet unexplored worlds.
PARIS — In the music world, there are those who, as Australian singer Nick Cave told the New Yorker, consider that ChatGPT should “go to hell and leave songwriting alone,” and those who want to give it a try.
French-born superstar DJ David Guetta tried his hand at it during a concert in February, playing to a stunned crowd a track composed using only online artificial intelligence services and rapped in a synthesized voice borrowed from Eminem. Two months later, a masked Internet user, Ghostwriter977, posted “Heart on My Sleeve,” a fake AI-generated duet between Drake and The Weeknd, on TikTok without the authorization of either musician.
This did not stop the track from racking up millions of views and becoming a short-lived hit on streaming platforms. After just a few days, Spotify, YouTube and Apple Music removed it to avoid upsetting Universal Music Group, the artists’ rights holder.
Generating new sounds
In the music industry, generative AI is often depicted as a double threat: to the rights of the songwriters and composers on whose work the algorithms feed, and to human creation itself, at risk of being outcompeted and diluted by a tsunami of soulless, machine-generated tracks. But the technology also opens up new creative possibilities, which artists in every genre, from the most expert to the most popular, are already seizing upon while respecting copyright.
Research institute Sony CSL learned this lesson the hard way: in 2016, it published an AI-composed track in the style of the Beatles, entitled “Daddy’s Car.” But the initiative ended in “bad buzz,” as not all the necessary authorizations had been sought. Since then, “we’ve laid down the rules: we don’t do anything without the artists; only tools with and for them,” says Michael Turbot, head of technology promotion at Sony CSL.
His laboratory offers three types of service, based on databases whose rights have been fully respected. First, synthesizers, which produce new sounds. “AI is able to generate an infinite number of sounds which didn’t exist — for example, any interpolation between a guitar sound and a saxophone sound,” he says.
The second kind of tool: creative assistants. “You’re in the studio, but you’re not very good with a particular instrument,” Turbot suggests. “The algorithm will then react to your musical idea and suggest creative possibilities, such as a bass, piano or drum line.”
“This doesn’t mean we don’t need instrumentalists anymore,” he insists. “But today, few musicians have access to their own drummer, for example. Failing that, they buy ready-made drum lines on the Internet. We offer customized melody lines instead.” Finally, Sony CSL’s algorithms take care of mixing a track. In this complex process, “AI will do all the calculations for you, allowing you to scan the whole spectrum of possibilities,” he explains.
AI is like an invisible partner without an ego.
Some artists are already playing the game, like Whim Therapy (Jérémy Benichou’s stage name). “I started to try these tools to see if I should be afraid of them, but as I used them, I quickly realized that the big replacement wasn’t around the corner,” the pop composer says. That didn’t stop him from getting a taste for it and trying out CSL’s melody generators — drums, bass, piano — and the lyrics assistant, to create his first song, “Let It Go,” which won second prize from the audience at the AI Song Contest in 2021.
“Imagine a score with three sheets; we remove the one in the middle, and ask the AI what it suggests,” he says, describing how the service works. Won over by this first attempt, the artist kept going and produced an EP with the same technologies. He doesn’t use them to generate an entire track, but rather as a backup, an assistant for when he’s short of an idea.
“It avoids blockages and old habits,” he explains. Most of the time, he doesn’t use the algorithm’s suggestions as they are, but uses them as a jumping-off point for a personal approach. “Take the bass line generator: all I had to do was turn it an octave higher and send it to a guitar amp so that interesting things would happen in terms of sound,” he says.
For him, AI is like an “invisible partner without an ego, with whom I don’t need to argue for 15 minutes,” which unblocks his work and leads him in unexpected directions. Other artists have experimented with Sony CSL tools, such as electronic music producer DeLaurentis, whose latest album revisits pieces from the classical repertoire (Debussy, Ravel, Satie) with the help of AI.
A formidable compositional tool, AI is also an improvisation partner. Double bass player Joëlle Léandre, one of France’s leading contemporary musicians, has provided proof of this as part of a program at the Institute for Research and Coordination in Acoustics/Music (Ircam) named Reach (Raising Cooperative Creativity in Cyber-Human Musicianship), which builds generative AI models.
“Our AI-based systems are capable of listening to musicians live,” explains Gérard Assayag, head of the program at Ircam, “breaking down the sound signal into meaningful units (note, rhythm, harmony) to analyze the logic of what is played, even when it’s improvised; building a cartography of the range of possibilities; and, at the moment of playing, choosing a trajectory within that cartography.”
For some, the universe of possibilities is only just starting to open.
Improvising in the face of AI
Joëlle Léandre was able to work with the algorithms during a concert at the Centre Pompidou in mid-June, inventing musical phrases as she went along, to which the researchers’ “machines” responded, as if she were playing with other musicians. “For me, there is absolutely no difference,” she says. “I’ve been playing double bass for years; it’s a tool. My friends from Ircam have a tool too. The only difference is that they can offer a proliferation of sounds,” while she is limited to her double bass.
We’re at the dawn of a great revolution.
With these interactive algorithms, Ircam sets itself apart from existing music-generating AI services, such as Aiva or Soundful, which produce ready-made pieces on command after a few prior adjustments, and from Google’s experimental program MusicLM, which generates music from a text prompt. “When I ask Google for 10 seconds in the style of Brahms, I know what to expect. This type of algorithm is not very creative and produces ‘more of the same,’ as the English say,” comments Gérard Assayag.
“For us, the challenge is to surprise and push back the limits of creation,” he adds. Ircam has other projects in the pipeline, notably embedding its algorithms in the HyVibe intelligent acoustic guitar, which could become autonomous and play on its own, following the musician’s lead.
The need for transparency
The universe of possibilities is only just starting to open. “We’re at the dawn of a great revolution,” Assayag predicts. There will be losers too. “I can understand that some are worried,” says Jérémy Benichou. “Things are moving very fast; soon, AI will be able to create generic pieces for music libraries. But it will become an enormous asset for those who try something more daring.”
The major labels, for their part, take a dim view of the massive influx of standardized tracks flooding streaming platforms, which threatens to dilute the value attributed to genuine artists. The International Federation of the Phonographic Industry raised the alarm about this in its latest report.
“We know that there are playlists lasting several hours in which no human being has had a creative hand, and which are capturing revenue,” warns the president of the National Music Center (CNM), Jean-Philippe Thiellay. He insists on the need for transparency, both about the use of AI in a track and about the way playlists are composed on streaming platforms.
As with the origin of a product in commerce, “we should display that a playlist is made up of so many percent of AI-created tracks,” he suggests, as these authorless tracks potentially guarantee platforms a higher margin. “We also need to make sure that not a single cent of public money is used to finance productions made without human intervention,” insists the president of the CNM — to ensure that, in music and elsewhere, AI remains a tool in human hands, and not the other way around.