Google Announces Lyria, a New Music Generation Model
Working in collaboration with artists and songwriters, Google has developed a new music generation AI model dubbed “Lyria”. Earlier models struggled with continuity, generating music that often seemed to change direction at random. Google says that with Lyria, users can expect consistent, “long sequences of sound” with all the nuance of traditionally composed music. Google is also teasing fine-grained creative control through Lyria-powered software.
That software has striking transformational capabilities. Here’s a vocal choir created by transforming a simple keyboard track:
You don’t even need an instrument. Just hum a tune and tell the software which instruments you’d like it played on:
Currently, access to Lyria is highly limited. Select users have access via Google’s “Dream Track” experiment. Built in collaboration with a roster of artists, Dream Track lets users create 30-second music tracks in the voice and musical style of Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Sia, T-Pain, Troye Sivan, and Papoose.
Here isn’t Charlie Puth:
And here isn’t T-Pain. (If you ask me, the voice sounds a tad synthetic in this one. T-Pain might want to have a word.)
All Lyria-created audio is watermarked with Google’s SynthID technology. In principle, this means that AI-generated content will be identifiable as such even if it is significantly edited or otherwise altered by other software.
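For readers curious how a watermark can survive heavy editing at all, here is a minimal, purely illustrative Python sketch of a classic spread-spectrum scheme. Google has not published how SynthID marks audio, so this is not its actual mechanism; it only demonstrates the general idea that a low-amplitude pseudorandom pattern mixed into a signal can still be picked out by correlation after the audio is altered.

```python
# Toy spread-spectrum audio watermark (illustrative only; NOT Google's SynthID).
# A faint pseudorandom +/-1 pattern is added to the signal. Because edits and
# the music itself are uncorrelated with that pattern, correlating against it
# still reveals the mark even after noisy editing and volume changes.
import numpy as np

rng = np.random.default_rng(seed=42)              # the seed acts as the secret key
sample_rate = 16_000
audio = rng.normal(0.0, 0.3, sample_rate * 5)     # stand-in for a 5-second track

watermark = rng.choice([-1.0, 1.0], size=audio.size)  # pseudorandom pattern from the key
marked = audio + 0.01 * watermark                      # embed at very low amplitude

# Simulate heavy editing: add noise and re-scale the volume.
edited = 0.7 * (marked + rng.normal(0.0, 0.05, marked.size))

# Detection: correlate the edited audio against the known pattern.
score = np.dot(edited, watermark) / edited.size
print(f"correlation score: {score:.4f}")
print("watermark detected" if score > 0.003 else "no watermark")
```

In this toy scheme, detection needs only the secret pattern, not the original recording, which is why the mark can be checked on audio that has long since left the generating software.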