Google upgrades Imagen 2 with a video clip generator

Google doesn’t have the best track record when it comes to image-generating AI.

In February, the image generator built into Gemini, Google’s AI-powered chatbot, was found to be randomly injecting gender and racial diversity into prompts about people, resulting in images of racially diverse Nazis, among other offensive inaccuracies.

Google pulled the generator, vowing to improve it and eventually re-release it. As we await its return, the company’s launching an enhanced image-generating tool, Imagen 2, inside its Vertex AI developer platform — albeit a tool with a decidedly more enterprise bent. Google announced Imagen 2 at its annual Cloud Next conference in Las Vegas.


Imagen 2 — which is actually a family of models, launched in December after being previewed at Google’s I/O conference in May 2023 — can create and edit images given a text prompt, like OpenAI’s DALL-E and Midjourney. Of interest to corporate types, Imagen 2 can render text, emblems and logos in multiple languages, optionally overlaying those elements onto existing images — for example, onto business cards, apparel and products.

Image editing with Imagen 2, which first launched in preview, is now generally available in Vertex AI, along with two new capabilities: inpainting and outpainting. These features, which other popular image generators such as DALL-E have offered for some time, can be used to remove unwanted parts of an image, add new components and expand an image’s borders to create a wider field of view.
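For developers, both generation and editing are exposed through the Vertex AI SDK. Here’s a minimal sketch in Python; the model version string, project ID, filenames and prompts are placeholders, and the exact edit_image signature may differ depending on the SDK release you’re on:

```python
# Minimal sketch: Imagen 2 generation and inpainting via the Vertex AI
# Python SDK (google-cloud-aiplatform). Model name, project ID and file
# paths are illustrative assumptions, not values from Google's announcement.
import vertexai
from vertexai.preview.vision_models import Image, ImageGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")  # hypothetical project

# Load an Imagen 2 model version (version string is an assumption).
model = ImageGenerationModel.from_pretrained("imagegeneration@006")

# Plain text-to-image generation.
images = model.generate_images(
    prompt="A product photo of a reusable water bottle on a wooden table",
    number_of_images=1,
)
images[0].save("bottle.png")

# Inpainting-style edit: the masked region is regenerated from the prompt.
edited = model.edit_image(
    base_image=Image.load_from_file("bottle.png"),
    mask=Image.load_from_file("mask.png"),  # white pixels mark the area to replace
    prompt="a small potted plant in the masked area",
)
edited[0].save("bottle_edited.png")
```

In this sketch, the mask marks the region to regenerate; outpainting works the same way conceptually, with the mask covering an expanded canvas beyond the original image’s borders.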

But the real meat of the Imagen 2 upgrade is what Google’s calling “text-to-live images.”

Imagen 2 can now create short, four-second videos from text prompts, along the lines of AI-powered clip generation tools like Runway, Pika and Irreverent Labs. True to Imagen 2’s corporate focus, Google’s pitching live images to marketers and creatives as, for example, a GIF generator for ads showing nature, food and animals — subject matter that Imagen 2 was fine-tuned on.

Google says that live images can capture “a range of camera angles and motions” while “supporting consistency over the entire sequence.” But they’re in low resolution for now: 360 pixels by 640 pixels. Google’s pledging that this will improve in the future. 

To allay (or at least attempt to allay) concerns around the potential to create deepfakes, Google says that Imagen 2 will employ SynthID, an approach developed by Google DeepMind, to apply invisible watermarks to live images. Of course, detecting these watermarks — which Google claims are resilient to edits, including compression, filters and color tone adjustments — requires a Google-provided tool that isn’t available to third parties.

And no doubt eager to avoid another generative media controversy, Google’s emphasizing that live image generations will be “filtered for safety.” A spokesperson told TechCrunch via email: “The Imagen 2 model in Vertex AI has not experienced the same issues as the Gemini app. We continue to test extensively and engage with our customers.”


But generously assuming for a moment that Google’s watermarking tech, bias mitigations and filters are as effective as it claims, are live images even competitive with the video generation tools already out there?

Not really.

Runway can generate 18-second clips in much higher resolutions. Stability AI’s video clip tool, Stable Video Diffusion, offers greater customizability (in terms of frame rate). And OpenAI’s Sora — which, granted, isn’t commercially available yet — appears poised to blow away the competition with the photorealism it can achieve.

So what are the real technical advantages of live images? I’m not really sure. And I don’t think I’m being too harsh.

After all, Google is behind genuinely impressive video generation tech like Imagen Video and Phenaki. Phenaki, one of Google’s more interesting experiments in text-to-video, turns long, detailed prompts into two-minute-plus “movies” — with the caveat that the clips are low resolution, low frame rate and only somewhat coherent.

In light of recent reports suggesting that the generative AI revolution caught Google CEO Sundar Pichai off guard and that the company’s still struggling to keep pace with rivals, it’s not surprising that a product like live images feels like an also-ran. But it’s disappointing nonetheless. I can’t shake the feeling that there is — or was — a more impressive product lurking in Google’s skunkworks.

Models like Imagen are trained on an enormous number of examples, usually sourced from public sites and datasets around the web. Many generative AI vendors see training data as a competitive advantage and keep it, and information pertaining to it, close to the chest. But training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much.

I asked, as I always do around announcements pertaining to generative AI models, about the data that was used to train the updated Imagen 2, and whether creators whose work might’ve been swept up in the model training process will be able to opt out at some future point.

Google told me only that its models are trained “primarily” on public web data, drawn from “blog posts, media transcripts and public conversation forums.” Which blogs, transcripts and forums? It’s anyone’s guess.

A spokesperson pointed to Google’s web publisher controls that allow webmasters to prevent the company from scraping data, including photos and artwork, from their websites. But Google wouldn’t commit to releasing an opt-out tool or, alternatively, compensating creators for their (unknowing) contributions — a step that many of its competitors, including OpenAI, Stability AI and Adobe, have taken.
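Those controls work through robots.txt. Google’s documented opt-out is the Google-Extended token, which tells the company not to use a site’s content for its generative AI models. A minimal example (the token name is Google’s; the blanket disallow rule here is illustrative):

```
# Ask Google not to use this site's content for generative AI training,
# via the documented Google-Extended robots.txt token.
User-agent: Google-Extended
Disallow: /
```

Worth noting: this is an opt-out for future crawling, not a mechanism for removing content that has already been ingested into a trained model — which is precisely the gap creators are asking Google to address.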

Another point worth mentioning: Text-to-live images isn’t covered by Google’s generative AI indemnification policy, which protects Vertex AI customers from copyright claims related to Google’s use of training data and outputs of its generative AI models. That’s because text-to-live images is technically in preview; the policy only covers generative AI products in general availability (GA).

Regurgitation, where a generative model spits out a mirror copy of an example (e.g., an image) that it was trained on, is rightly a concern for corporate customers. Studies both informal and academic have shown that the first-gen Imagen wasn’t immune to this, reproducing identifiable photos of people, artists’ copyrighted works and more when prompted in particular ways.

Barring controversies, technical issues or some other major unforeseen setback, text-to-live images will enter GA somewhere down the line. But with live images as they exist today, Google’s basically saying: use at your own risk.