Gen-3 Alpha was built for artists, by artists. Designed to interpret a wide range of concepts, styles and cinematic choices.
Learn more at https://t.co/YQNE3eqoWf
(1/9)
— Runway (@runwayml) June 21, 2024
Runway, a New York City-based generative AI startup, has unveiled its latest model, Gen-3 Alpha, which can create high-quality, hyper-realistic 10-second video clips. This development marks a significant step forward in AI-driven video creation and puts Runway back in the competitive landscape of AI video generators. Gen-3 Alpha enables users to generate detailed and realistic video clips infused with emotional expressions and fluid camera movements.
AI competition right now is🔥
Anthropic just released Claude 3.5 Sonnet that beats its own flagship Claude 3 Opus and OpenAI's GPT-4o (on benchmarks)
Luma AI Dream Machine out in public bringing Sora-level clips
Runway AI Gen-3 Alpha is also showing off Sora-level demo videos
— Min Choi (@minchoi) June 20, 2024
The initial rollout supports both 5-second and 10-second video generation, with significantly faster rendering times: a 5-second video takes 45 seconds to generate, while a 10-second video takes 90 seconds. According to a Runway spokesperson, the new model will initially be available to paid subscribers, with plans to extend access to free-tier users at a later date. Runway’s website and social media have showcased initial demo videos generated with Gen-3 Alpha.
Runway AI just changed the game in filmmaking with Gen-3 Alpha drop📽️
Creators can now generate insane videos and clips with just texts and images
100% AI
10 wild examples:
— Min Choi (@minchoi) June 20, 2024
Gen-3 Alpha video capabilities
Anastasis Germanidis, Runway’s CTO, stated that Gen-3 Alpha will enhance current modes, such as text-to-video, image-to-video, and video-to-video, while also introducing new capabilities that leverage the model’s advanced features. Germanidis emphasized that since the release of Gen-2 in 2023, the team has learned that video diffusion models have significant potential for performance improvements.
OpenAI Should Release Sora Before It’s Too Late. Unlike OpenAI, Runway has announced that Gen-3 Alpha will soon be available to everyone.
Read more🔗👇https://t.co/zfVj9NgMrd
— AIM (@Analyticsindiam) June 22, 2024
The Gen-3 Alpha model was trained on an undisclosed, curated internal dataset through a collaborative effort by scientists, engineers, and artists. Runway has partnered with leading entertainment and media organizations to create custom versions of Gen-3, offering tailored stylistic control and consistency for specific artistic and narrative requirements. The filmmakers behind the award-winning film “Everything Everywhere All at Once” have previously used Runway’s technology for visual effects, demonstrating the model’s practical application in high-profile projects.
Runway invites other organizations to collaborate on custom versions of Gen-3 Alpha, although the pricing for such custom training remains undisclosed. As Runway continues innovating in AI video creation, it aims to reassert its position as a leading player in the rapidly evolving market of generative AI technologies.