Meta's engineering team is preparing an artificial-intelligence-based system capable of creating a short high-definition video from a brief text prompt.
Company CEO Mark Zuckerberg presented the application today in a post on Facebook; it is still in the testing phase and will soon be made available to users.
The tool, dubbed Make-A-Video, adds a new twist to automatic image generators such as Dall-E, which produce an original image from text instructions provided by the user.
The system is based on the latest advances in generative technology from Meta's artificial intelligence (AI) division, according to the company itself in a statement.
"It has the potential to open up new opportunities for creators and artists," they maintain.
The system learns what the world is like from millions of combinations of text associated with images.
Information on how objects and living things move is drawn from video footage that carries no labels, but which the tool is able to link to those images.
The results Meta has shown are videos with crude graphics, yet remarkably convincing considering that they were created without human intervention.
A teddy bear taking a self-portrait or a spaceship landing on Mars are some of the examples of videos they have shared.
“This is incredible progress. It is much more difficult to generate video than photos because, beyond correctly generating each pixel, the system must also predict how it will change over time,” Zuckerberg wrote in his post.
Make-A-Video is also capable of creating videos from images provided by the user, or of generating a video similar to one it is shown.
The system's debut follows that of Make-A-Scene this summer, a generator of photorealistic images composed from sketches.
“We want to be thoughtful about how we build new generative AI systems like this. We are sharing this tool with the community to get feedback,” the company says.