At the Hot Pod Summit conference in Brooklyn, Adobe presented Project Music GenAI Control, a platform capable of generating audio from text descriptions.
As with image and video generation, the platform can create a few minutes of music from prompts that give even brief indications of genre and style.
The software is not yet available to users but, given Adobe's track record in AI, it could soon join the company's catalog, which is built on Firefly, Adobe's family of generative AI models.
In the US, however, Adobe has shared details on how Project Music GenAI Control works.
Users can adjust elements such as tempo, intensity, repeating patterns, and structure, or take the tracks the AI produces and build on them by adding new instruments and loops.
Developed in collaboration with researchers from the University of California and Carnegie Mellon, Project Music GenAI Control was designed, according to an official note from Adobe, not merely as a tool for generating music but as a true extended control center over songs, so that tracks can be genuinely personalized, even for professional needs.
“One of the cool things about these new tools is that they don't just generate audio – they're taking it to the Photoshop level by giving creatives the same kind of deep control to shape, optimize and edit their audio. It's kind of a pixel level for music,” Nicholas Bryan, a senior researcher at Adobe Research and one of the creators of the technology, said in a statement.
As has already happened with similar tools, the technology could soon clash with the fears of the music industry and the rules imposed by legislators.
Last August, a federal judge in the District of Columbia, US, ruled that art generated by artificial intelligence cannot be protected by copyright.
Google has also released its own model in an experimental phase, MusicLM, now folded into the MusicFX feature, which creates 30-second audio tracks.
Reproduction reserved © Copyright ANSA