
Meta, the company behind platforms like Facebook and Instagram, has brought together music and artificial intelligence with the launch of AudioCraft, a new open-source tool that lets users create music and sound entirely with generative AI.

AudioCraft, Meta’s generative AI for creating music from text

Meta explains that AudioCraft consists of three AI models, each specializing in a different area of sound generation. MusicGen, for example, accepts text input to create music. To train this model, Meta used some “20,000 hours of music owned by Meta or specifically licensed for this purpose.” With MusicGen, users can create music tracks in innovative and creative ways.

Another model, AudioGen, lets you create realistic sounds from simple written instructions, imitating, for example, a dog barking or the sound of footsteps. Meta trained this model on publicly available sound effects, allowing users to experiment with generating sounds for different environments and situations.
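Both MusicGen and AudioGen ship in Meta's open-source `audiocraft` Python package. As a minimal sketch of the text-to-music flow described above (assuming `audiocraft` is installed via pip and the pretrained checkpoint can be downloaded on first use; the model name and prompt here are illustrative):

```python
# Sketch: text-to-music with MusicGen from Meta's audiocraft package.
# Assumes `pip install audiocraft` and enough memory to load the
# 'facebook/musicgen-small' checkpoint (fetched on first use).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # length of the clip, in seconds

# A list of text prompts in, a batch of waveform tensors out.
descriptions = ['gentle lo-fi beat with soft piano and vinyl crackle']
wav = model.generate(descriptions)

for idx, one_wav in enumerate(wav):
    # Writes track_0.wav, normalized for loudness.
    audio_write(f'track_{idx}', one_wav.cpu(), model.sample_rate,
                strategy='loudness')
```

AudioGen follows the same pattern with `audiocraft.models.AudioGen` and a sound-effect prompt instead of a musical one.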

In addition, Meta introduced an improved version of the EnCodec decoder, which lets users generate sounds with fewer artifacts. Heavy manipulation of audio can introduce unwanted distortion, but with this improvement users can achieve a more natural and convincing sound.

Music and artificial intelligence

The American media had the opportunity to listen to some audio samples made with AudioCraft. Emilia David of The Verge reports that the generated noises, such as whistles, sirens, and horns, sounded surprisingly natural. However, some elements, such as the guitar strings in certain songs, still carried a touch of artificiality.


This isn’t the first time artificial intelligence has been combined with music. Google has created MusicLM, a model that generates short audio clips from text prompts, but it is only available to researchers. There have also been viral examples of AI-generated songs with voices that mimic celebrities like Drake and The Weeknd.

While musicians have been experimenting with electronic sound for years, in genres like EDM and at festivals like Ultra, AudioCraft and other AI tools are opening up new possibilities. AI-powered generative music offers opportunities for research and innovation, paving the way for new forms of artistic and creative expression. It’s not just about manipulating sound but about creating it entirely from scratch, which opens a new chapter in the history of music.

Meta AudioCraft and the future of music created by artificial intelligence

Right now, AudioCraft seems best suited to creating atmospheric music, stock snippets, or custom sound design, not the next big pop hit. However, Meta believes its new models could open new horizons in the music world, just as synthesizers revolutionized the industry when they became popular.

Meta says MusicGen could prove as innovative a tool as synthesizers were when they first appeared. But compared with generating text (as Meta’s Llama 2 or OpenAI’s GPT models do), generating audio is much more difficult, the researchers say.

To improve the models, Meta released AudioCraft as open source so that training can draw on a greater variety of data. The company acknowledges that current datasets are limited and reflect the predominance of Western-style music, with lyrics and metadata written in English. By sharing the code with the research community, Meta hopes other researchers will help develop new approaches to reduce or eliminate bias and misuse in generative models.

Record companies and artists have expressed concern about potential problems associated with AI, fearing that models might be trained on copyrighted material. But before that question, we should ask another: can AI create not only ambient tracks and background music for videos, but also songs that make us sing, dance, and move? Meta is betting it can, but we have yet to see how audiences will react when these tracks travel from artificial intelligence to human ears.
