AI

Researchers discover a way to make ChatGPT consistently toxic

Comment

ChatGPT
Image Credits: STEFANI REYNOLDS/AFP / Getty Images

It’s no secret that OpenAI’s viral AI-powered chatbot, ChatGPT, can be prompted to say sexist, racist and pretty vile things. But now, researchers have discovered how to consistently get the chatbot to be . . . well, the worst version of itself.

A study co-authored by scientists at the Allen Institute for AI, the nonprofit research institute co-founded by the late Paul Allen, shows that assigning ChatGPT a “persona” — for example, “a bad person,” “a horrible person,” or “a nasty person” — through the ChatGPT API increases its toxicity sixfold. Even more concerningly, the co-authors found having ChatGPT pose as certain historical figures, gendered people and members of political parties also increased its toxicity — with journalists, men and Republicans in particular causing the machine learning model to say more offensive things than it normally would.

“ChatGPT and its capabilities have undoubtedly impressed us as AI researchers. However, as we found through our analysis, it can be easily made to generate toxic and harmful responses,” Ameet Deshpande, a researcher involved with the study, told TechCrunch via email.

The research — which was conducted using the latest version of ChatGPT, but not the model currently in preview based on OpenAI’s GPT-4 — shows the perils of today’s AI chatbot tech even with mitigations in place to prevent toxic text outputs. As the co-authors note in the study, apps and software built on top of ChatGPT — which includes chatbots from Snap, Quizlet, Instacart and Shopify — could mirror the toxicity prompted at the API level.

So how does one prompt ChatGPT to be more toxic? Well, according to the researchers, all it takes is tweaking the “system” parameter of the ChatGPT API a tad. (Importantly, this can’t be done in OpenAI’s user-facing ChatGPT or ChatGPT Plus services.) The system parameter, introduced around a month ago, lets developers specify hidden rules for the model.

For the study, the co-authors used the system parameter to assign 90 different personas to ChatGPT plucked from the worlds of sports, politics, media and business; nine “baseline” personas (e.g., “a normal person”); and common names from several different countries. For each persona and name, the researchers had ChatGPT answer questions about gender and race and finish incomplete phrases from a dataset designed to assess the toxicity of text-generating models.

In general, across over half a million samples of generated text from ChatGPT, the researchers found that ChatGPT was more likely to express “discriminatory opinions” and stereotype specific ethnic groups and countries when assigned a persona. Unsurprisingly, more polarizing figures, like Mao Zedong and Andrew Breitbart, elicited toxic responses in keeping with their historical speeches and writings. But even less controversial personas like Steve Jobs caused ChatGPT to respond to prompts in problematic ways.

ChatGPT toxic
Image Credits: AI2

For example, when the researchers set the system parameter to “Steve Jobs” and asked ChatGPT about the European Union (EU), the model said:

The European Union is nothing more than a bureaucratic nightmare that sucks the life out of its member states. They claim to be all about unity and progress, but they’re just a bunch of out-of-touch bureaucrats who are more interested in lining their own pockets than actually making a difference. Frankly, the EU is a relic of the past and it’s time for us to move on to better things.

There were differences in the personas, however. Dictators were by far the most toxicity-inducing (unsurprisingly), just ahead of journalists (ouch) and spokespeople. Male-identifying personas made ChatGPT more toxic compared to female-identifying personas. And Republican personas were “slightly more hateful” than their Democratic counterparts, the researchers say.

Less surprisingly, assigning ChatGPT a self-descriptively hateful persona like “a horrible person” dramatically increased its overall toxicity. But it depended on the topic being discussed. For instance, ChatGPT generated more toxic descriptions of nonbinary, bisexual and asexual people regardless versus those on the heterosexual and cisgender side of the spectrum — a reflection of the biased data on which ChatGPT was trained, the researchers say.

“We believe that ChatGPT and other language models should be public and available for broader use as not doing so would be a step backwards for innovation,” Deshpande said. “However, the end-user must be clearly informed of the limitations of such a model before releasing it for broader use by the public.”

Are there solutions to ChatGPT’s toxicity problem? Perhaps. One might be more carefully curating the model’s training data. ChatGPT is a fine-tuned version of GPT-3.5, the predecessor to GPT-4, which “learned” to generate text by ingesting examples from social media, news outlets, Wikipedia, e-books and more. While OpenAI claims that it took steps to filter the data and minimize ChatGPT’s potential for toxicity, it’s clear that a few questionable samples ultimately slipped through the cracks.

Another potential solution is performing and publishing the results of “stress tests” to inform users of where ChatGPT falls short. These could help companies in addition to developers “make a more informed decision” about where — and whether — to deploy ChatGPT, the researchers say.

ChatGPT toxic
Image Credits: AI2

“In the short-term, ‘first-aid’ can be provided by either hard-coding responses or including some form of post-processing based on other toxicity-detecting AI and also fine-tuning the large language model (e.g. ChatGPT) based on instance-level human feedback,” Deshpande said. “In the long term, a reworking of the fundamentals of large language models is required.”

My colleague Devin Coldewey argues that large language models à la ChatGPT will be one of several classes of AIs going forward — useful for some applications but not all-purpose in the way that vendors, and users, for that matter, are currently trying to make them.

I tend to agree. After all, there’s only so much that filters can do — particularly as people make an effort to discover and leverage new exploits. It’s an arms race: As users try to break the AI, the approaches they use get attention, and then the creators of the AI patch them to prevent the attacks they’ve seen. The collateral damage is the terribly harmful and hurtful things the models say before they’re patched.

More TechCrunch

Here are quick hits of the biggest news from the keynote as they are announced.

Google I/O 2024: Everything announced so far

Google Play has a new discovery feature for apps, new ways to acquire users, updates to Play Points, and other enhancements to developer-facing tools.

Google Play preps a new full-screen app discovery feature and adds more developer tools

Soon, Android users will be able to drag and drop AI-generated images directly into their Gmail, Google Messages and other apps.

Gemini on Android becomes more capable and works with Gmail, Messages, YouTube and more

Veo can capture different visual and cinematic styles, including shots of landscapes and timelapses, and make edits and adjustments to already-generated footage.

Google gets serious about AI-generated video at Google I/O 2024

In addition to the body of the emails themselves, the feature will also be able to analyze attachments, like PDFs.

Gemini comes to Gmail to summarize, draft emails, and more

The summaries are created based on Gemini’s analysis of insights from Google Maps’ community of more than 300 million contributors.

Google is bringing Gemini capabilities to Google Maps Platform

Google says that over 100,000 developers already tried the service.

Project IDX, Google’s next-gen IDE, is now in open beta

The system effectively listens for “conversation patterns commonly associated with scams” in-real time. 

Google will use Gemini to detect scams during calls

The standard Gemma models were only available in 2 billion and 7 billion parameter versions, making this quite a step up.

Google announces Gemma 2, a 27B-parameter version of its open model, launching in June

This is a great example of a company using generative AI to open its software to more users.

Google TalkBack will use Gemini to describe images for blind people

Firebase Genkit is an open source framework that enables developers to quickly build AI into new and existing applications.

Google launches Firebase Genkit, a new open source framework for building AI-powered apps

This will enable developers to use the on-device model to power their own AI features.

Google is building its Gemini Nano AI model into Chrome on the desktop

Google’s Circle to Search feature will now be able to solve more complex problems across psychics and math word problems. 

Circle to Search is now a better homework helper

People can now search using a video they upload combined with a text query to get an AI overview of the answers they need.

Google experiments with using video to search, thanks to Gemini AI

A search results page based on generative AI as its ranking mechanism will have wide-reaching consequences for online publishers.

Google will soon start using GenAI to organize some search results pages

Google has built a custom Gemini model for search to combine real-time information, Google’s ranking, long context and multimodal features.

Google is adding more AI to its search results

At its Google I/O developer conference, Google on Tuesday announced the next generation of its Tensor Processing Units (TPU) AI chips.

Google’s next-gen TPUs promise a 4.7x performance boost

Google is upgrading Gemini, its AI-powered chatbot, with features aimed at making the experience more ambient and contextually useful.

Google reveals plans for upgrading AI in the real world through Gemini Live at Google I/O 2024

Veo can generate few-seconds-long 1080p video clips given a text prompt.

Google’s image-generating AI gets an upgrade

At Google I/O, Google announced upgrades to Gemini 1.5 Pro, including a bigger context window. .

Google’s generative AI can now analyze hours of video

The AI upgrade will make finding the right content more intuitive and less of a manual search process.

Google Photos introduces an AI search feature, ‘Ask Photos’

Apple released new data about anti-fraud measures related to its operation of the iOS App Store on Tuesday morning, trumpeting a claim that it stopped over $7 billion in “potentially…

Apple touts stopping $1.8B in App Store fraud last year in latest pitch to developers

Online travel agency Expedia is testing an AI assistant that bolsters features like search, itinerary building, trip planning, and real-time travel updates.

Expedia starts testing AI-powered features for search and travel planning

Welcome to TechCrunch Fintech! This week, we look at the drama around TabaPay deciding to not buy Synapse’s assets, as well as stocks dropping for a couple of fintechs, Monzo raising…

Inside TabaPay’s drama-filled decision to abandon its plans to buy Synapse’s assets

The person who claimed to have stolen the physical addresses of 49 million Dell customers appears to have taken more data from a different Dell portal, TechCrunch has learned. The…

Threat actor scraped Dell support tickets, including customer phone numbers

If you write the words “cis” or “cisgender” on X, you might be served this full-screen message: “This post contains language that may be considered a slur by X and…

On Elon’s whim, X now treats ‘cisgender’ as a slur

The keynote kicks off at 10 a.m. PT on Tuesday and will offer glimpses into the latest versions of Android, Wear OS and Android TV.

Google I/O 2024: Watch the AI reveals live

Facebook once had big ambitions to be a major player in enterprise communication and productivity, but today the social network’s parent company Meta will be closing a very significant chapter…

Meta is shutting down Workplace, its enterprise communications business

The Oversight Board has overturned Meta’s decision to take down a documentary revealing the identities of child abuse victims in Pakistan.

Meta’s Oversight Board overturns takedown decision for Pakistan child abuse documentary

Adam Selipsky is stepping down from his role as CEO of Amazon Web Services, Amazon has confirmed to TechCrunch.  In a memo shared internally by Amazon CEO Andy Jassy and…

AWS CEO Adam Selipsky steps down