We are entering an era where anyone can compose music without specialized knowledge or experience, thanks to generative AI. This marks an unprecedented turning point in the history of music production. AI-powered music generation platforms like Suno AI, which offer a ChatGPT-like UI/UX experience, have already attracted tens of millions of users and are gaining global attention.
Yet skepticism and backlash toward generative AI remain strong.
According to the International Confederation of Societies of Authors and Composers (CISAC), the global market for AI-generated music and audiovisual content is projected to grow from roughly €3 billion today to €64 billion by 2028. Over the same period, music creators' income is expected to decline by about 25%. For many musicians, the rise of generative AI poses a serious threat.
So, what exactly is generative AI bringing to music production? To understand this, we need to look at the evolution from the analog era, to digital production, to today’s cloud integration and AI-driven production.
For decades, Digital Audio Workstations (DAWs) have been at the heart of digital music creation, enabling everything from composition to recording, editing, mixing, and mastering in a single software environment. Popular DAWs like Pro Tools and Cubase remain industry standards, while Apple’s free GarageBand makes music creation accessible to beginners.
Consumer-oriented DAWs are typically sold as one-time purchases for a few hundred dollars or through monthly subscriptions, with developers continuously updating their software based on user feedback. Recently, AI-assisted tools and automated generation features have become increasingly common.
Originally, music production was an analog process, but by the 1970s, digital systems began to take over. By the 1990s, DAWs became mainstream. Improvements in CPU performance eventually made it possible to run professional setups on laptops alone.
Today, DAWs are more connected than ever, integrating with cloud-based services like Splice, which provides high-quality sample libraries, and incorporating AI-powered tools for composition, mixing, and even mastering.
Unlike traditional DAWs, new platforms such as AIVA, Suno, and Udio offer AI-driven, fully automated music generation. Users simply input text prompts, lyrics, or even images, and the AI creates original tracks. Under certain conditions, these works can be copyrighted and used commercially.
In the realm of vocal synthesis, Yamaha’s VOCALOID and Dreamtonics’ Synthesizer V Studio leverage AI to produce realistic singing voices, significantly expanding creative possibilities. Similarly, music notation software has evolved: AI-powered tools like Songscription can automatically generate sheet music from an audio file or even a YouTube link.
Alongside these technological breakthroughs, legal and ethical issues are surfacing. In June 2024, the RIAA (Recording Industry Association of America) filed lawsuits against AI music startups Suno and Udio, accusing them of copying and exploiting copyrighted sound recordings on a massive scale. Negotiations over licensing fees are expected to follow.
In April of the same year, the Artist Rights Alliance (ARA) released an open letter titled "Stop Devaluing Music," calling for restrictions on unregulated AI use. Over 200 prominent artists, including Billie Eilish and Stevie Wonder, signed the letter, voicing concerns about AI generating vast amounts of music that mimics human artists.
Meanwhile, startups such as Vermillio are developing detection tools to identify AI-generated music.
The adoption of new technology brings opportunities for innovation and job creation—but also threatens to dismantle traditional revenue models. Generative AI could further reduce royalties and revenue for human creators.
How much influence will generative AI music companies wield, and how will the music industry adapt? The answers are still unfolding.