
Google’s best Gemini demo was faked


Image Credits: Google

Google’s new Gemini AI model is getting a mixed reception after its big debut yesterday, but users may have less confidence in the company’s tech or integrity after finding out that the most impressive demo of Gemini was pretty much faked.

A video called “Hands-on with Gemini: Interacting with multimodal AI” hit a million views over the last day, and it’s not hard to see why. The impressive demo “highlights some of our favorite interactions with Gemini,” showing how the multimodal model (i.e., one that combines language and visual understanding) can be flexible and responsive to a variety of inputs.

To begin with, it narrates an evolving sketch of a duck from a squiggle to a completed drawing, which it says is an unrealistic color, then evinces surprise (“What the quack!”) when seeing a toy blue duck. It then responds to various voice queries about that toy, then the demo moves on to other show-off moves, like tracking a ball in a cup-switching game, recognizing shadow puppet gestures, reordering sketches of planets, and so on.

It’s all very responsive, too, though the video does caution that “latency has been reduced and Gemini outputs have been shortened.” So they skip a hesitation here and an overlong answer there, got it. All in all, it was a pretty mind-blowing show of force in the domain of multimodal understanding. My own skepticism that Google could ship a contender took a hit when I watched the hands-on.

Just one problem: The video isn’t real. “We created the demo by capturing footage in order to test Gemini’s capabilities on a wide range of challenges. Then we prompted Gemini using still image frames from the footage, and prompting via text.” (Parmy Olson at Bloomberg was the first to report the discrepancy.)

So although it might kind of do the things Google shows in the video, it didn’t, and maybe couldn’t, do them live and in the way they implied. In actuality, it was a series of carefully tuned text prompts with still images, clearly selected and shortened to misrepresent what the interaction is actually like. You can see some of the actual prompts and responses in a related blog post — which, to be fair, is linked in the video description, albeit below the “…more.”
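To make the distinction concrete, here is a minimal sketch of the kind of still-frame-plus-text prompting Google describes, assuming the google-generativeai Python SDK and the publicly available gemini-pro-vision model name; the frame file and prompt text are illustrative stand-ins, not Google’s actual inputs.

```python
# A rough illustration of prompting a multimodal Gemini model with a still
# frame plus a text prompt, rather than with live video.
# Assumes the google-generativeai Python SDK and the gemini-pro-vision model;
# the file name and prompt text are made up for illustration.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro-vision")

frame = Image.open("duck_sketch_frame.png")  # a single still frame pulled from footage
response = model.generate_content([frame, "What do you see in this drawing?"])
print(response.text)
```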

On one hand, Gemini really does appear to have generated the responses shown in the video. And who wants to see some housekeeping commands like telling the model to flush its cache? But viewers are misled about the speed, accuracy, and fundamental mode of interaction with the model.

For instance, at 2:45 in the video, a hand is shown silently making a series of gestures. Gemini quickly responds, “I know what you’re doing! You’re playing Rock, Paper, Scissors!”

Image Credits: Google/YouTube

But the first thing in the documentation of the capability is how the model does not reason based on seeing individual gestures. It must be shown all three gestures at once and prompted: “What do you think I’m doing? Hint: It’s a game.” It responds, “You’re playing rock, paper, scissors.”

Image Credits: Google
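In code terms, the documented version of that exchange is less real-time recognition than a single call that hands the model all three frames plus the hint, roughly like this sketch (same assumptions as before; the file names are placeholders):

```python
# The documented rock-paper-scissors exchange: all three gesture frames go
# into one multimodal prompt, together with an explicit hint.
# Again assumes the google-generativeai SDK; file names are placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro-vision")

frames = [Image.open(name) for name in ("rock.png", "paper.png", "scissors.png")]
response = model.generate_content(
    frames + ["What do you think I'm doing? Hint: It's a game."]
)
print(response.text)  # per Google's post: "You're playing rock, paper, scissors."
```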

Despite the similarity, these don’t feel like the same interaction. They feel like fundamentally different interactions: one an intuitive, wordless evaluation that captures an abstract idea on the fly, the other an engineered and heavily hinted exchange that demonstrates limitations as much as capabilities. Gemini did the latter, not the former. The “interaction” shown in the video didn’t happen.

Later, three sticky notes with doodles of the sun, Saturn, and Earth are placed on the surface. “Is this the correct order?” Gemini says, “No, the correct order is Sun, Earth, Saturn.” Correct! But in the actual (again, written) prompt, the question is “Is this the right order? Consider the distance from the sun and explain your reasoning.”

Image Credits: Google

Did Gemini get it right? Or did it get it wrong and need a bit of help to produce an answer they could put in a video? Did it even recognize the planets, or did it need help there as well?

In the video, a ball of paper gets swapped around under a cup, which the model instantly and seemingly intuitively detects and tracks. In the post, not only does the activity have to be explained, but the model must also be trained (if quickly, and using natural language) to perform it. And so on.

These examples may or may not seem trivial to you. After all, recognizing hand gestures as a game so quickly is actually really impressive for a multimodal model! So is making a judgment call on whether a half-finished picture is a duck or not! Although now, since the blog post lacks an explanation for the duck sequence, I’m beginning to doubt the veracity of that interaction as well.

Now, if the video had said at the start, “This is a stylized representation of interactions our researchers tested,” no one would have batted an eye — we kind of expect videos like this to be half factual, half aspirational.

But the video is called “Hands-on with Gemini,” and when they say it shows “our favorite interactions,” it implies that the interactions we see are those interactions. They were not. Sometimes they were more involved; sometimes they were totally different; sometimes they don’t really appear to have happened at all. We’re not even told what model it is — the Gemini Pro one people can use now, or (more likely) the Ultra version slated for release next year?

Should we have assumed that Google was only giving us a flavor video when they described it the way they did? Perhaps then we should assume all capabilities in Google AI demos are being exaggerated for effect. I write in the headline that this video was “faked.” At first I wasn’t sure if this harsh language was justified (certainly Google doesn’t think so; a spokesperson asked me to change it). But despite including some real parts, the video simply does not reflect reality. It’s fake.

Google says that the video “shows real outputs from Gemini,” which is true, and that “we made a few edits to the demo (we’ve been upfront and transparent about this),” which isn’t. It isn’t a demo — not really — and the video shows very different interactions from those created to inform it.

Update: In a social media post made after this article was published, Google DeepMind’s VP of Research Oriol Vinyals showed a bit more of how “Gemini was used to create” the video. “The video illustrates what the multimodal user experiences built with Gemini could look like. We made it to inspire developers.” (Emphasis mine.) Interestingly, it shows a pre-prompting sequence that lets Gemini answer the planets question without the sun hint (though it does tell Gemini it’s an expert on planets and to consider the sequence of objects pictured).

Perhaps I will eat crow when, next week, the AI Studio with Gemini Pro is made available to experiment with. And Gemini may well develop into a powerful AI platform that genuinely rivals OpenAI and others. But what Google has done here is poison the well. How can anyone trust the company when they claim their model does something now? They were already limping behind the competition. Google may have just shot itself in the other foot.
