After ChatGPT, music may be the next important frontier of AI content generation.
On January 27 local time, Google released the results of its research on the AI model MusicLM, which can generate any type of high-fidelity music from text descriptions. However, the researchers did not release MusicLM to the public due to copyright issues.
“We emphasize that more work is needed in the future to address these risks associated with music generation — we have no plans to release the model at this time.”
This is not the first AI music generation tool in human history. Both Google’s AudioLM and OpenAI’s Jukebox have tackled the same problem, but MusicLM was trained on a far larger dataset (280,000 hours of music), allowing it to produce music with greater diversity and depth.
Judging by the published results, the AI can not only identify and fully integrate the genres and instruments you specify, but can also compose tracks from abstract concepts that machines typically struggle to grasp.
For example, if you want a dance-and-reggae hybrid that is “spare, transcendent” and evokes “wonder and awe,” MusicLM can generate it for you quickly. It can even create melodies from humming or whistling, or compose based on descriptions of paintings, and it can stitch multiple descriptions together in story mode to build the DJ set or soundtrack you want.
Of course, like most AI, MusicLM still has unresolved comprehension issues: some of its creations sound odd, and its generated vocals can be difficult to understand.