Listen: Google’s music-writing AI bot that ‘could trick exam setters’

Google has created a “chatbot” that can create music from written instructions, prompting warnings that it could be used to cheat in exams.

MusicLM, an artificial intelligence (AI) model, generates “high-fidelity music” from simple captions, such as “a calming violin melody backed by a distorted guitar riff”.

The captions instruct the model as to which instruments to include, the pace of the music and its genre.

One prompt, accompanied by a 30-second audio clip, reads: “A fusion of reggaeton and electronic dance music, with a spacey, otherworldly sound. Induces the experience of being lost in space, and the music would be designed to evoke a sense of wonder and awe, while being danceable.”

The researchers said the model “further extends the set of tools that assist humans with creative music tasks”.

Ian Pace, professor of music at City, University of London, described the music created by the model as “very generic” and “trite”.

But he warned there was a risk it could be used by music students to cheat in exams.

It comes after concerns were raised about ChatGPT, an AI chatbot which produces instant, human-like answers to questions.

The exams watchdog said last month that it would consider whether new guidance should be drawn up on how to prevent pupils cheating on coursework using the tool.

MusicLM seems to go further than ChatGPT, as it can transform intentions, stories or even paintings into songs.

The researchers used descriptions of famous paintings, such as Salvador Dalí’s The Persistence of Memory and Edvard Munch’s The Scream, to inspire audio based on the artworks.

Prof Pace said the platform, which is based on a data set of 5,000 music-text pairs, could have “big implications” for music education.

“How are we to know when someone puts in their composition assignment, just like with their essay assignment, that they haven’t just … read whatever the task assignment is, typed it into the [software] and hey presto it’s produced,” he said.

“The result might be fine, but it doesn’t necessarily mean they would have learnt very much.”

He said the audio generated would “probably be enough to get a reasonable mark”.

“Just like with ChatGPT, my feeling is that we actually need to rediscover the benefits of exams,” Prof Pace said. “In-person exams, where you know the person is there and actually has to do the task themselves.”

He added: “I think ChatGPT has opened up huge new questions for most education establishments. How do we know … that the people submitting it … have done anything more than feed it into this?”

Prof Pace said that MusicLM is obviously at a “very early stage”, but it could become a “big question” in music education.

The AI model won’t be able to replace individual composers, Prof Pace said, but the system could be used for producing music where “generic” results would suffice – such as in films, media and games.

Nello Cristianini, a professor of AI at the University of Bath, said AI has been associated with the creation of music as far back as 1980; however, MusicLM is the “most advanced” yet.

“It is clearly going to be used, and useful, and controversial too,” Prof Cristianini said. “This technology is still unexplored, and we have not tested its legal ramifications.”

Fred Scott, a composer and doctoral student in forensic musicology at City, University of London, said the AI could also raise copyright issues.

“If you’re two musicians who are tasked with composing a piece, and they both happen to stumble across the same AI and happen to put in the same form of words, then presumably they’re going to come up with the same product,” he said.

“I just don’t know how you’d unpick AI versus AI?”

He added that the platform “inhibits creativity” because “if you can throw a few words in an AI and it produces a piece of music for you, the technology is leading you rather than you making artistic decisions”.

The model, created by Google researchers, has been made public on the open platform GitHub, where users can listen to samples and see how it generates music. However, in a non-peer-reviewed paper published alongside the model, the authors said they had “no plans” to release the models themselves, which would allow users to create their own works.

Discussing the broader impact of their work, the researchers said there were “several risks” associated with the model and its use cases.

“The generated samples will reflect the biases present in the training data, raising the question about appropriateness for music generation for cultures under-represented in the training data, while at the same time also raising concerns about cultural appropriation,” they concluded.

A Google spokesman said: “We do not have any immediate plans to provide direct access to MusicLM at this time. Our team will continue to further develop this research to find ways to help creators and composers express themselves creatively.

“We think that MusicLM could have a large and diverse set of applications. Like other recent machine learning models, MusicLM can help people with tasks that have been very hard to achieve before or would take a lot of time to get right. We hope that MusicLM will spark a lot of new creativity, leading to exciting new ways for people to create music.”