Artificial intelligence-powered video maker Runway now offers Gen-3 Alpha Turbo, a faster variant of the recently released Gen-3 Alpha model, which itself succeeded Gen-2. The new version is reportedly seven times faster and costs half as much as Gen-3 Alpha, which will likely attract plenty of interest from professional and amateur filmmakers interested in AI.

As the name implies, Gen-3 Alpha Turbo is all about speed. According to Runway, the gap between submitting a prompt and seeing a video shrinks to nearly real time. The idea is to offer something for industries where that kind of speed is crucial, such as social media content and topical advertising. The trade-off is in quality: while Runway insists Turbo videos are essentially as good as those from the standard Gen-3 Alpha, the non-Turbo model still produces higher-fidelity imagery overall.

Still, the Turbo model is fast enough that Runway CEO Cristóbal Valenzuela boasted on X that “it now takes me longer to type a sentence than to generate a video.” 

Creators keen to focus on plotting and producing videos rather than waiting for them to render will likely find Gen-3 Alpha Turbo more their speed. That goes double now that the price has been halved. A second of Turbo video costs five credits, compared with ten credits per second for a standard Gen-3 Alpha video. Credits on Runway come in bundles starting at $10 for 1,000 credits, so it’s the difference between 100 seconds of footage for $10 with the standard model and 200 seconds for the same price with Turbo. Those interested can also try out the new model through a free trial.
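For anyone who wants to check the math, here is a minimal Python sketch of that cost comparison, using only the figures quoted above ($10 per 1,000-credit bundle, five credits per second for Turbo, ten for standard Gen-3 Alpha); the function name and structure are illustrative and not part of any Runway tooling.

```python
# Back-of-the-envelope comparison using the pricing in the article:
# $10 buys 1,000 credits; Turbo costs 5 credits per second of video,
# while standard Gen-3 Alpha costs 10 credits per second.
CREDITS_PER_DOLLAR = 1_000 / 10  # 100 credits per dollar

def seconds_of_video(dollars: float, credits_per_second: int) -> float:
    """How many seconds of video a given budget buys at a given credit rate."""
    return dollars * CREDITS_PER_DOLLAR / credits_per_second

budget = 10.0
print(f"Gen-3 Alpha:       {seconds_of_video(budget, 10):.0f} seconds for ${budget:.0f}")
print(f"Gen-3 Alpha Turbo: {seconds_of_video(budget, 5):.0f} seconds for ${budget:.0f}")
# Prints 100 seconds for the standard model and 200 seconds for Turbo.
```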

Film AI Boom

Runway ML’s aggressive pricing and performance improvements come as the company faces stiff competition from other AI video generation models. The most notable is OpenAI’s Sora, but it is far from the only one. Stability AI, Pika, Luma Labs’ Dream Machine, and more are all racing to bring AI video models to the public. Even TikTok’s parent company, ByteDance, has an AI video maker called Jimeng, though it’s limited to China for now.

Runway’s focus on speed and accessibility with the Turbo model could help it stand out in a crowded field. Next, Runway plans to augment its models with better control mechanisms and possibly even real-time interactivity. Gen-3 Alpha Turbo already incorporates much of what video makers experimenting with AI want, but it will need to deliver consistently to beat out the competition for turning words and images into videos.

Maintaining consistency in character and environment design is no small thing, but using an initial image as a reference point helps preserve coherence across different shots. In Gen-3, Runway’s AI can generate a 10-second video from that image, guided by additional motion or text prompts in the platform. You can see how it works in the video below.

“Gen-3 Alpha Turbo Image to Video is now available and can generate 7x faster for half the price of the original Gen-3 Alpha. All while still matching performance across many use cases. Turbo is available for all plans, including trial for free.”

Runway’s image-to-video feature doesn’t just ensure people and backgrounds stay the same when seen from a distance. Gen-3 also incorporates Runway’s lip-sync feature, so a speaking character’s mouth moves in a way that matches the words they are saying. A user can tell the AI model what they want their character to say, and the mouth movement will be animated to match. Combining synchronized dialogue with realistic character movement will interest a lot of marketing and advertising professionals looking for new and, ideally, cheaper ways to produce videos.

Up Next

Runway isn’t done adding to the Gen-3 platform, either. The next step is bringing the same enhancements to the video-to-video option, which keeps a source clip’s motion while rendering it in a different style; a human running down a street becomes an animated anthropomorphic fox dashing through a forest, for instance. Runway will also bring its existing control features to Gen-3, such as Motion Brush, Advanced Camera Controls, and Director Mode.

AI video tools are still in the early stages of development, with most models excelling at short-form content but struggling with longer narratives. That puts Runway and its new features in a strong market position, but it is far from alone. Midjourney, Ideogram, Leonardo (now owned by Canva), and others are all racing to make the definitive AI video generator. Of course, they’re all keeping a wary eye on OpenAI and its Sora video generator. OpenAI has advantages in name recognition, among other benefits; Toys”R”Us has already made a short commercial using Sora and premiered it at the Cannes Lions Festival. Still, the film about AI video generators is only in its first act, and who gets to cheer in slow motion at the end is far from decided. As the competition heats up, the release of Gen-3 Alpha Turbo is a strategic move by Runway to assert a leading position in the market.
