Google DeepMind’s Lyria 3 Launches in Gemini App as AI Music Generator Goes Mainstream
Google DeepMind has rolled out beta access to Lyria 3, its advanced AI music generator, integrated directly into the Gemini app. Users can now create 30-second audio tracks from text, images, or video prompts, marking a major step in generative AI’s convergence with creative industries.

In a significant leap for artificial intelligence in the creative arts, Google DeepMind has officially launched beta access to Lyria 3 within its Gemini app, enabling users to generate high-fidelity, 30-second musical compositions from simple text, image, or video inputs. The move, as reported by The Verge, signals Google’s strategic push to embed generative AI directly into consumer-facing platforms, transforming how individuals interact with music creation.
Lyria 3 represents the third iteration of Google’s proprietary music-generation model, building upon earlier versions that were limited to research environments and internal demos. Unlike previous iterations, Lyria 3 leverages a multimodal architecture that interprets not only lyrical prompts but also visual and temporal cues from uploaded media. For instance, a user uploading a video of a stormy ocean can generate a haunting ambient track with deep bass pulses and crashing percussion, while a text prompt like "jazz in a neon-lit Tokyo alleyway at midnight" produces a composition blending saxophone melodies with synthetic city ambiance.
According to The Verge, the integration into Gemini — Google’s AI-powered assistant app — underscores the company’s ambition to make AI not just a tool, but an intuitive creative collaborator. The system uses a hybrid autoregressive transformer model, similar to those powering recent advances in image generation, to ensure temporal coherence and harmonic richness across the generated audio. This architecture allows Lyria 3 to maintain stylistic consistency while adapting dynamically to user input, a notable improvement over earlier models that often produced disjointed or repetitive outputs.
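Google has not published implementation details for Lyria 3, but the autoregressive approach described above has a simple shape: audio is represented as a sequence of discrete tokens, and each new token is sampled conditioned on the tokens before it, which is what gives the output temporal coherence. The sketch below is purely illustrative and assumes nothing about Lyria 3's actual vocabulary, context length, or scoring model; it replaces the transformer's learned distribution with a toy bias toward recently emitted tokens.

```python
import random

def sample_autoregressive(vocab_size: int, num_tokens: int,
                          context_window: int, seed: int = 0) -> list[int]:
    """Toy autoregressive sampler over discrete audio tokens.

    A real model would score the whole vocabulary with a transformer
    conditioned on the context; here we crudely mimic temporal
    coherence by sometimes re-drawing a token from the recent context.
    """
    rng = random.Random(seed)
    tokens: list[int] = []
    for _ in range(num_tokens):
        context = tokens[-context_window:]  # sliding window of history
        if context and rng.random() < 0.5:
            # Reuse recent material: loosely analogous to the model
            # maintaining motifs and stylistic consistency over time.
            tokens.append(rng.choice(context))
        else:
            # Introduce new material from the full token vocabulary.
            tokens.append(rng.randrange(vocab_size))
    return tokens

clip = sample_autoregressive(vocab_size=1024, num_tokens=64, context_window=8)
```

In a production system the sampled token sequence would then be decoded back to waveform audio by a neural codec; the key property the sketch preserves is that every token depends on the ones generated before it.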
Initial beta testers have reported remarkable results, with compositions that rival human-produced indie tracks in emotional depth and structural complexity. One tester, a music producer in Berlin, generated a full chord progression and melody from a single sketch of a sunset over the Alps, then exported the track to Ableton Live for further refinement. "It didn’t just mimic genres — it invented a new hybrid style," they noted in a Reddit thread on r/singularity, where the release was first publicly documented by user /u/GraceToSentience.
While Lyria 3 currently limits outputs to 30 seconds, Google has indicated that extended compositions are under development. The company has also implemented safeguards to prevent misuse, including watermarking generated audio and restricting the replication of copyrighted musical styles without attribution. These measures reflect growing industry concerns around AI-generated content and intellectual property, as seen in recent legal battles involving other generative platforms.
Industry analysts suggest that Lyria 3’s integration into Gemini could disrupt the $15 billion music production software market, particularly targeting indie artists and content creators who lack access to professional studios. With no subscription fee for the beta phase, and seamless export options to common audio formats, Google is positioning Lyria 3 as an accessible gateway to AI-assisted creativity.
However, ethical questions remain. Music unions and composer advocacy groups have called for transparency regarding training data sources, urging Google to disclose whether Lyria 3 was trained on licensed recordings or scraped public datasets. Google has not yet released detailed documentation on its training corpus, though it has affirmed compliance with its AI Principles, including fairness and accountability.
As Lyria 3 enters its public beta, it joins a growing roster of AI music tools — including Suno, Udio, and Meta’s MusicGen — but stands out for its multimodal input flexibility and deep integration with Google’s ecosystem. For consumers, it offers unprecedented ease of use. For the music industry, it presents both opportunity and disruption.
With further development expected in Q3 2025, Lyria 3 may soon evolve from a novelty into a standard feature of digital creativity — blurring the lines between composer and curator, and redefining what it means to make music in the age of artificial intelligence.