AI Video Goes Mainstream: Meta, Google, and other giants size up text-to-video
Generated video clips are capturing eyeballs in viral videos, ad campaigns, and a Netflix show.
What’s new: The Dor Brothers, a digital video studio based in Berlin, uses AI-generated clips to produce social-media hits including “The Drill,” which has been viewed 16 million times. Similarly, AI-focused creative agency Genre.ai made a raucous commercial for prediction-market company Kalshi for less than $2,000, stirring debate about the future of advertising. Netflix generated a scene for one of its streaming productions, the sci-fi series The Eternaut.
How it works: For Genre.ai and The Dor Brothers, making standout videos requires entering new prompts repeatedly until they’re satisfied with the output, then assembling the best clips using traditional digital video editing tools. For the Kalshi ad, for instance, Genre.ai generated 300 to 400 clips to get 15 keepers. Netflix did not describe its video-generation process.
- The Dor Brothers begin by brainstorming concepts and feeding them to OpenAI’s ChatGPT and other chatbots to generate prompts. The studio uses Midjourney, Stable Diffusion, and DALL-E to turn prompts into images. It refines the prompts and feeds them to Runway Gen-4 or Google Veo 3 to produce clips.
- Genre.ai CEO PJ Accetturo uses Google Gemini or ChatGPT to help come up with ideas and co-write scripts. He uses Gemini or ChatGPT to convert scripts into shot-by-shot prompts — no more than 5 at a time, which keeps their quality high, he says — then pastes the prompts into Veo 3. To maintain visual consistency, he provides a detailed description of the scene in every prompt.
- Netflix is experimenting with Runway’s models for video generation, Bloomberg reported. To produce the AI-generated clip that appeared in The Eternaut, the company generated a scene in which a building collapsed. AI allowed production to move at 10 times the usual speed and a fraction of the usual cost, Netflix executive Ted Sarandos told The Guardian. Runway’s output has also appeared in scores of music videos, the 2022 movie Everything Everywhere All at Once, and TV’s “The Late Show.”
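Genre.ai’s workflow above — converting a script into shot-by-shot prompts, handling them in small batches, and repeating a detailed scene description in every prompt to keep the visuals consistent — can be sketched in a few lines of Python. This is a minimal illustration only: the scene description, shot list, and batch size below are hypothetical, not Genre.ai’s actual prompts.

```python
# Illustrative sketch of a shot-by-shot prompt pipeline.
# The scene and shots are invented for this example.

SCENE = (
    "Handheld 35mm look, fluorescent-lit trading floor, "
    "crowded desks, late-night energy."
)

# A shot list drafted with a chatbot, one line per shot.
shots = [
    "Wide shot: traders cheering as a ticker spikes",
    "Close-up: a hand slamming a buy button",
    "Medium shot: confetti drifting over cubicles",
    "Over-the-shoulder: a phone screen showing the odds flip",
    "Low angle: a character leaping onto a desk",
    "Wide shot: the whole floor frozen, staring at one monitor",
]

def build_prompts(scene, shots, batch_size=5):
    """Prepend the scene description to every shot prompt, then
    group prompts into small batches for the video model."""
    prompts = [f"{scene} {shot}" for shot in shots]
    return [prompts[i:i + batch_size]
            for i in range(0, len(prompts), batch_size)]

batches = build_prompts(SCENE, shots, batch_size=5)
# Each batch of prompts would then be pasted into a video
# generator such as Veo 3, one batch at a time.
```

Because every prompt carries the full scene description, each generated clip starts from the same visual premise — the trick Accetturo describes for maintaining consistency across shots.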
Behind the news: Top makers of video generation models have been courting commercial filmmakers to fit generative AI into their production processes.
- Runway has worked with television studio AMC to incorporate its tools into the studio’s production and marketing operations, and with Lionsgate to build a custom model trained on the Hollywood studio’s film archive.
- Meta teamed up with Blumhouse, the production company behind horror thrillers such as Get Out and Halloween, to help develop its Meta Movie Gen tools.
- Google’s DeepMind research team helped filmmaker Darren Aronofsky to build an AI-powered movie studio called Primordial Soup.
Why it matters: Video generation enables studios to produce finished work on schedules and budgets that would be unattainable any other way. Sets, lighting, cameras, talent, makeup, even scripts and scores — generative AI subsumes them all. For newcomers like The Dor Brothers or Genre.ai, this is liberating. They can focus on realizing their ideas without going to the effort and expense of working with people, video equipment, and locations. For established studios, it’s an opportunity to transform traditional methods and do more with less.
We’re thinking: AI is rapidly transforming the labor, cost, and aesthetics of filmmaking. This isn’t the first time: It follows closely on streaming and social video, and before that, computer-generated effects and digital cameras. The Screen Actors Guild and Writers Guild of America negotiated agreements with film/video producers that limit some applications of AI, but creative people will find ways to use the technology to make products that audiences like. This creates opportunities for producers not only to boost their productivity but also to expand their revenue — which, we hope, will be used to make more and better productions than ever before.