OpenAI’s GPT-4 with vision still has flaws, paper reveals

When OpenAI first unveiled GPT-4, its flagship text-generating AI model, the company touted the model’s multimodality — in other words, its ability to understand the context of images as well as text. GPT-4 could caption — and even interpret — relatively complex images, OpenAI said, for example identifying a Lightning Cable adapter from a picture of a plugged-in iPhone.
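
For context, this is roughly what that multimodality looks like from a developer's perspective: a single chat prompt can carry both text and an image. The sketch below is illustrative only; the model identifier and image URL are assumptions, and GPT-4V was not generally available through OpenAI's API at the time the paper was published.

    # Minimal sketch: sending an image plus a text prompt to a vision-capable
    # GPT-4 model via OpenAI's chat completions API (openai>=1.0 Python SDK).
    # The model name and image URL are illustrative assumptions, not details
    # confirmed by the paper.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumed vision-capable model identifier
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is shown in this image?"},
                    {
                        "type": "image_url",
                        "image_url": {"url": "https://example.com/plugged-in-iphone.jpg"},
                    },
                ],
            }
        ],
        max_tokens=300,
    )

    print(response.choices[0].message.content)  # e.g., a caption describing the adapter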

But since GPT-4’s announcement in late March, OpenAI has held back the model’s image features, reportedly over fears of abuse and privacy issues. Until recently, the exact nature of those fears remained a mystery. Early this week, however, OpenAI published a technical paper detailing its work to mitigate the more problematic aspects of GPT-4’s image-analyzing tools.

To date, GPT-4 with vision, abbreviated “GPT-4V” by OpenAI internally, has only been used regularly by a few thousand users of Be My Eyes, an app to help low-vision and blind people navigate the environments around them. Over the past few months, however, OpenAI also began to engage with “red teamers” to probe the model for signs of unintended behavior, according to the paper.

In the paper, OpenAI claims that it’s implemented safeguards to prevent GPT-4V from being used in malicious ways, like breaking CAPTCHAs (the anti-spam tool found on many web forms), identifying a person or estimating their age or race, and drawing conclusions based on information that’s not present in a photo. OpenAI also says that it has worked to curb GPT-4V’s more harmful biases, particularly those relating to a person’s physical appearance, gender or ethnicity.

But as with all AI models, there’s only so much that safeguards can do.

The paper reveals that GPT-4V sometimes struggles to make the right inferences, for example mistakenly combining two strings of text in an image to create a made-up term. Like the base GPT-4, GPT-4V is prone to hallucinating, or inventing facts in an authoritative tone. And it’s not above missing text or characters, overlooking mathematical symbols and failing to recognize rather obvious objects and place settings.

It’s not surprising, then, that OpenAI says, in clear and unambiguous terms, that GPT-4V is not to be used to spot dangerous substances or chemicals in images. (This reporter hadn’t even thought of the use case, but apparently the prospect is concerning enough to OpenAI that the company felt the need to call it out.) Red teamers found that, while the model occasionally correctly identifies poisonous foods like toxic mushrooms, it misidentifies substances such as fentanyl, carfentanil and cocaine from images of their chemical structures.

When applied to the medical imaging domain, GPT-4V fares no better, sometimes giving the wrong responses for the same question that it answered correctly in a previous context. It’s also unaware of standard practices like viewing imaging scans as if the patient is facing you (meaning the right side of the image corresponds to the left side of the patient), which leads it to misdiagnose any number of conditions.

Elsewhere, OpenAI cautions, GPT-4V doesn’t understand the nuances of certain hate symbols. For instance, it misses the modern meaning of the Templar Cross in the U.S., where the symbol has been adopted by white supremacists. More bizarrely, and perhaps as a symptom of its hallucinatory tendencies, GPT-4V was observed composing songs or poems praising certain hate figures or groups when provided a picture of them, even when those figures or groups weren’t explicitly named.

GPT-4V also discriminates against certain sexes and body types — albeit only when OpenAI’s production safeguards are disabled. OpenAI writes that, in one test, when prompted to give advice to a woman pictured in a bathing suit, GPT-4V gave answers relating almost entirely to the woman’s body weight and the concept of body positivity. One assumes that wouldn’t have been the case if the image were of a man.

Judging by the paper’s caveated language, GPT-4V remains very much a work in progress — a few steps short of what OpenAI might’ve originally envisioned. In many cases, the company was forced to implement overly strict safeguards to prevent the model from spewing toxicity or misinformation, or compromising a person’s privacy.

OpenAI claims that it’s building “mitigations” and “processes” to expand the model’s capabilities in a “safe” way, like allowing GPT-4V to describe faces and people without identifying those people by name. But the paper reveals that GPT-4V is no panacea, and that OpenAI has its work cut out for it.
