Google’s making the second generation of Imagen, its AI model that can create and edit images given a text prompt, more widely available — at least to Google Cloud customers using Vertex AI who’ve been approved for access.
But the company isn’t disclosing which data it used to train the new model — nor introducing a way for creators who might’ve inadvertently contributed to the dataset to opt out or apply for compensation.
Called Imagen 2, Google’s enhanced model — which was quietly launched in preview at the tech giant’s I/O conference in May — was developed using technology from Google DeepMind, Google’s flagship AI lab. Compared to the first-gen Imagen, it’s “significantly” improved in terms of image quality, Google claims (the company bizarrely refused to share image samples prior to this morning), and introduces new capabilities, including the ability to render text and logos.
“If you want to create images with a text overlay — for example, advertising — you can do that,” Google Cloud CEO Thomas Kurian said during a press briefing on Tuesday.
Text and logo generation brings Imagen in line with other leading image-generating models, like OpenAI’s DALL-E 3 and Amazon’s recently launched Titan Image Generator. In two possible points of differentiation, though, Imagen 2 can render text in multiple languages — specifically Chinese, Hindi, Japanese, Korean, Portuguese, English and Spanish, with more to come sometime in 2024 — and overlay logos in existing images.
“Imagen 2 can generate … emblems, lettermarks and abstract logos … [and] has the ability to overlay these logos onto products, clothing, business cards and other surfaces,” Vishy Tirumalasetty, head of generative media products at Google, explains in a blog post provided to TechCrunch ahead of today’s announcement.
Thanks to “novel training and modeling techniques,” Imagen 2 can also understand more descriptive, long-form prompts and provide “detailed answers” to questions about elements in an image. These techniques also enhance Imagen 2’s multilingual understanding, Google says — allowing the model to translate a prompt in one language to an output (e.g. a logo) in another language.
Imagen 2 leverages SynthID, an approach developed by DeepMind, to apply invisible watermarks to the images it creates. Of course, detecting these watermarks — which Google claims are resilient to image edits including compression, filters and color adjustments — requires a Google-provided tool that's not available to third parties. But as policymakers express concern over the growing volume of AI-generated disinformation on the web, the watermarking will perhaps allay some fears.
Google didn’t reveal the data that it used to train Imagen 2, which — while disappointing — doesn’t exactly come as a surprise. It’s an open legal question as to whether GenAI vendors like Google can train a model on publicly available — even copyrighted — data and then turn around and commercialize that model.
Relevant lawsuits are working their way through the courts, with vendors arguing that they’re protected by fair use doctrine. But it’ll be some time before the dust settles.
In the meantime, Google's playing it safe by keeping quiet on the matter — a reversal of the strategy it took with the first-gen Imagen, for which it disclosed that it used a version of the public LAION dataset for training. LAION is known to contain problematic content including but not limited to private medical images, copyrighted artwork and photoshopped celebrity porn — which obviously isn't the best look for Google.
Some companies developing AI-powered image generators, like Stability AI and — as of a few months ago — OpenAI, allow creators to opt out of training datasets if they so choose. Others, including Adobe and Getty Images, are establishing compensation schemes for creators — albeit not always well-paying or transparent ones.
Google — and, to be fair, several of its rivals, including Amazon — offer no such opt-out mechanism or creator compensation. That won’t change anytime soon, it seems.
Instead, Google offers an indemnification policy that protects eligible Vertex AI customers from copyright claims related both to Google’s use of training data and Imagen 2 outputs.
Regurgitation, in which a generative model spits out a mirror copy of a training example, is rightly a concern for corporate customers and devs. An academic study showed that the first-gen Imagen wasn't immune to this phenomenon, spitting out identifiable photos of real people, copyrighted work by artists and more when prompted in specific ways.
Not shockingly, in a recent survey of Fortune 500 companies by Acrolinx, nearly a third said intellectual property was their biggest concern about the use of generative AI. Another poll found that nine out of 10 developers “heavily consider” IP protection when making decisions on whether to use generative AI.
It's a concern Google hopes its newly expanded policy will address. (Google's indemnification terms didn't previously cover Imagen outputs.) As for the concerns of creators, well… they're out of luck this go-around.