What just happened? The start-up that co-created one of the machine learning models that first brought AI artwork into the mainstream is back with a new product. Gen-1 can turn video clips into something completely different, with quality that appears unmatched by similar tools.

In 2021, Runway worked with researchers at the University of Munich on the latent diffusion research that became Stable Diffusion, one of the major machine learning models that brought generative AI into the spotlight. Now the company is back with Gen-1, a new model that can transform pre-existing videos according to text prompts provided by the user.

As explained on the official website, Gen-1 can "realistically and consistently synthesize new videos" by basing the new style on an image or a text prompt. It's just like filming something new "without filming anything at all," Runway Research says.

Gen-1 can work in five different "modes": Stylization, to transfer the style of any image or text prompt to every frame of the video; Storyboard, to turn mockups into fully animated renders; Mask, to isolate video subjects and modify them according to the prompt (like adding black spots to a dog); Render, to turn untextured renders into "realistic outputs" through image or text input; and Customization, to "unleash the full power of Gen-1" by customizing the model itself for higher-fidelity results.
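To see why frame-to-frame consistency is the hard part of what Stylization promises, consider the naive alternative: running each frame independently through an off-the-shelf image-to-image diffusion model. The sketch below uses the open-source diffusers library with a Stable Diffusion checkpoint; it is purely illustrative (the model ID, prompt, and parameters are assumptions, and this is not Gen-1's actual method).

```python
# Naive per-frame stylization with Stable Diffusion img2img.
# NOTE: illustrative sketch only -- this is NOT how Gen-1 works internally.
# Because each frame is denoised independently, the output flickers;
# keeping the style temporally consistent is Gen-1's key advance.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint for the sketch
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a claymation scene, soft studio lighting"  # hypothetical style prompt
frames = [Image.open(f"frames/{i:04d}.png") for i in range(120)]  # pre-extracted frames

styled = []
for frame in frames:
    out = pipe(
        prompt=prompt,
        image=frame,
        strength=0.5,        # how far to drift from the source frame
        guidance_scale=7.5,  # how strongly to follow the text prompt
    ).images[0]
    styled.append(out)

# Re-encoding `styled` as video reveals heavy flicker: each frame is
# stylized in isolation, with no shared noise or temporal conditioning.
```

Gen-1, by contrast, is described in Runway's research as separating a video's structure from its appearance, which is what keeps the transferred style coherent across frames rather than flickering like the per-frame approach above.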

The ML model behind Gen-1 is not the first video-generating AI to reach the market, as several companies released their own video-making algorithms in 2022. Compared to Meta's Make-A-Video and Google's Phenaki and Muse, however, Runway's model can provide both professionals and YouTube amateurs with higher-quality output and more sophisticated capabilities.

According to Runway, user studies show that Gen-1's results are preferred over those of existing generative models for image-to-image and video-to-video translation: participants favored Gen-1 over Stable Diffusion 1.5 in 73.53% of comparisons, and over Text2Live in 88.24%.

Runway is certainly equipped with the right expertise when it comes to video rendering and transformation: the AI-powered tools developed by the company are already used to edit content for online video platforms (TikTok, YouTube), in film production (Everything Everywhere All at Once), and on TV shows like The Late Show with Stephen Colbert.

Runway says Gen-1 was built on that expertise and with video-making customers in mind, drawing on years of insight into VFX editing and post-production in the filmmaking business. The new generative tool runs in the cloud, and access is currently limited to a small group of invited users; general availability is expected in "a few weeks."