Are AI models doomed to always hallucinate?


Large language models (LLMs) like OpenAI’s ChatGPT all suffer from the same problem: they make stuff up.

The mistakes range from strange and innocuous — like claiming that the Golden Gate Bridge was transported across Egypt in 2016 — to highly problematic, even dangerous.

A mayor in Australia recently threatened to sue OpenAI because ChatGPT mistakenly claimed he pleaded guilty in a major bribery scandal. Researchers have found that LLM hallucinations can be exploited to distribute malicious code packages to unsuspecting software developers. And LLMs frequently give bad mental health and medical advice, such as the claim that wine consumption can “prevent cancer.”

This tendency to invent “facts” is a phenomenon known as hallucination, and it happens because of the way today’s LLMs — and all generative AI models, for that matter — are developed and trained.

Training models

Generative AI models have no real intelligence — they’re statistical systems that predict words, images, speech, music or other data. Fed an enormous number of examples, usually sourced from the public web, AI models learn how likely data is to occur based on patterns, including the context of any surrounding data.

For example, given a typical email ending in the fragment “Looking forward…”, an LLM might complete it with “… to hearing back” — following the pattern of the countless emails it’s been trained on. It doesn’t mean the LLM is looking forward to anything.
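This pattern-completion behavior can be illustrated with a deliberately tiny toy model — not a real LLM, just word counts over a made-up corpus — that “predicts” the next word the same way: by picking whatever most often followed the context in its training data.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): count which word follows each
# two-word context in a tiny, made-up corpus, then "predict" the
# statistically most likely continuation.
corpus = (
    "looking forward to hearing back . "
    "looking forward to hearing from you . "
    "looking forward to seeing you . "
).split()

# Build next-word frequency counts for every two-word context.
next_word = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    next_word[(a, b)][c] += 1

def predict(a, b):
    """Return the most frequent continuation of context (a, b), or None."""
    counts = next_word[(a, b)]
    return counts.most_common(1)[0][0] if counts else None

print(predict("looking", "forward"))  # "to" — the most common continuation
```

Real LLMs replace the raw counts with billions of learned parameters and score every token in their vocabulary, but the principle is the same: the output is whatever is statistically likely, with no notion of whether it is true.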

“The current framework of training LLMs involves concealing, or ‘masking,’ previous words for context” and having the model predict which word should follow this context, Sebastian Berns, a Ph.D. researcher at Queen Mary University of London, told TechCrunch in an email interview. “This is conceptually similar to using predictive text in iOS and continually pressing one of the suggested next words.”

This probability-based approach works remarkably well at scale — for the most part. But while the range of words and their probabilities are likely to result in text that makes sense, it’s far from certain.

LLMs can generate something that’s grammatically correct but nonsensical, for instance — like the claim about the Golden Gate Bridge. Or they can spout mistruths, propagating inaccuracies in their training data. Or they can conflate different sources of information, including fictional sources, even if those sources clearly contradict each other.

It’s not malicious on the LLMs’ part. They don’t have malice, and the concepts of true and false are meaningless to them. They’ve simply learned to associate certain words or phrases with certain concepts, even if those associations aren’t accurate.

“‘Hallucinations’ are connected to the inability of an LLM to estimate the uncertainty of its own prediction,” Berns said. “An LLM is typically trained to always produce an output, even when the input is very different from the training data. A standard LLM does not have any way of knowing if it’s capable of reliably answering a query or making a prediction.”
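One crude way researchers probe this uncertainty from the outside is to look at how spread out the model’s next-word probabilities are. The sketch below uses Shannon entropy over hypothetical next-word counts — all numbers are invented for illustration — to show the intuition: a peaked distribution means the model has one strong candidate, while a flat one means it is effectively guessing, even though it will confidently emit a word either way.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of a next-word distribution.

    Higher entropy = probability mass spread over many candidates,
    i.e. the model is less certain about what comes next.
    """
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical next-word counts for two contexts (invented numbers).
confident = Counter({"to": 98, "and": 2})                      # one word dominates
uncertain = Counter({"a": 25, "the": 25, "in": 25, "on": 25})  # anything goes

print(entropy(confident) < entropy(uncertain))  # True
```

The catch, as Berns notes, is that a standard LLM doesn’t act on this signal: it samples a word from the distribution and moves on, whether the distribution was peaked or flat.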

Solving hallucination

The question is, can hallucination be solved? It depends on what you mean by “solved.”

Vu Ha, an applied researcher and engineer at the Allen Institute for Artificial Intelligence, asserts that LLMs “do and will always hallucinate.” But he also believes there are concrete ways to reduce — albeit not eliminate — hallucinations, depending on how an LLM is trained and deployed. 

“Consider a question answering system,” Ha said via email. “It’s possible to engineer it to have high accuracy by curating a high-quality knowledge base of questions and answers, and connecting this knowledge base with an LLM to provide accurate answers via a retrieval-like process.”
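The retrieval-style design Ha describes can be sketched in a few lines. This is a cartoon, not his system: the “knowledge base” is a hard-coded dictionary, the relevance score is crude word overlap, and a real deployment would use embeddings for retrieval and an LLM to phrase the answer. The point it illustrates is the architecture — answer from curated data when possible, and abstain rather than guess.

```python
# Minimal sketch of a retrieval-backed QA system. All data is illustrative.
knowledge_base = {
    "who wrote the toolformer paper": "Researchers at Meta AI",
    "what is the capital of france": "Paris",
}

def overlap(a, b):
    """Crude relevance score: number of words shared by two questions."""
    return len(set(a.split()) & set(b.split()))

def answer(query):
    q = query.lower().strip("?")
    best = max(knowledge_base, key=lambda k: overlap(q, k))
    if overlap(q, best) == 0:
        return "I don't know"  # abstain rather than hallucinate an answer
    return knowledge_base[best]

print(answer("Who wrote the Toolformer paper?"))  # "Researchers at Meta AI"
```

The abstention branch is the key difference from a bare LLM, which — as Berns noted above — is trained to always produce an output.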

Ha illustrated the difference between an LLM with a “high-quality” knowledge base to draw on versus one with less careful data curation. He ran the question “Who are the authors of the Toolformer paper?” (Toolformer is an AI model trained by Meta) through Microsoft’s LLM-powered Bing Chat and Google’s Bard. Bing Chat correctly listed all eight Meta co-authors, while Bard misattributed the paper to researchers at Google and Hugging Face.

“Any deployed LLM-based system will hallucinate. The real question is if the benefits outweigh the negative outcome caused by hallucination,” Ha said. In other words, if there’s no obvious harm done by a model — the model gets a date or name wrong once in a while, say — but it’s otherwise helpful, then it might be worth the trade-off. “It’s a question of maximizing expected utility of the AI,” he added.

Berns pointed out another technique that has been used with some success to reduce hallucinations in LLMs: reinforcement learning from human feedback (RLHF). Introduced by OpenAI in 2017, RLHF involves training an LLM, then gathering human preference data to train a “reward” model and fine-tuning the LLM with the reward model via reinforcement learning.

In RLHF, a set of prompts from a predefined dataset is passed through an LLM to generate new text. Human annotators then rank the LLM’s outputs by their overall “helpfulness” — data that’s used to train the reward model. The reward model, which at this point can take in any text and assign it a score reflecting how well humans would perceive it, is then used to fine-tune the LLM’s generated responses.
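The pipeline above can be caricatured in code. This toy — all examples invented — “trains” a reward scorer from human preference pairs and then uses it to pick among candidate outputs. A real RLHF pipeline trains a neural reward model and updates the LLM’s weights with a reinforcement learning algorithm such as PPO; none of that is captured here, only the three-step shape.

```python
from collections import defaultdict

# Step 1: human annotators supply preference pairs (preferred, rejected).
# These examples are invented for illustration.
rankings = [
    ("the answer is 4", "i am certain it is 5"),
    ("i don't know", "the moon is made of cheese"),
]

# Step 2: "train" a reward model — here, just per-word scores nudged
# up for preferred outputs and down for rejected ones.
reward_weights = defaultdict(float)
for preferred, rejected in rankings:
    for w in preferred.split():
        reward_weights[w] += 1.0
    for w in rejected.split():
        reward_weights[w] -= 1.0

def reward(text):
    """Score a candidate output with the learned word weights."""
    return sum(reward_weights[w] for w in text.split())

# Step 3: use the reward model to steer generation — here, by
# selecting the highest-scoring candidate instead of updating weights.
candidates = ["the moon is made of cheese", "i don't know"]
best = max(candidates, key=reward)
print(best)  # "i don't know" — the reward model prefers the honest answer
```

Even in this cartoon, the familiar failure mode is visible: the reward model only generalizes from the preferences it was shown, which is exactly the limitation Berns raises next.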

OpenAI leveraged RLHF to train several of its models, including GPT-4. But even RLHF isn’t perfect, Berns warned.

“I believe the space of possibilities is too large to fully ‘align’ LLMs with RLHF,” Berns said. “Something often done in the RLHF setting is training a model to produce an ‘I don’t know’ answer [to a tricky question], primarily relying on human domain knowledge and hoping the model generalizes it to its own domain knowledge. Often it does, but it can be a bit finicky.”

Alternative philosophies

Assuming hallucination isn’t solvable, at least not with today’s LLMs, is that a bad thing? Berns doesn’t think so, actually. Hallucinating models could fuel creativity by acting as a “co-creative partner,” he posits — giving outputs that might not be wholly factual but that contain some useful threads to tug on nonetheless. Creative uses of hallucination can produce outcomes or combinations of ideas that might not occur to most people.

“‘Hallucinations’ are a problem if generated statements are factually incorrect or violate any general human, social or specific cultural values — in scenarios where a person relies on the LLM to be an expert,” he said. “But in creative or artistic tasks, the ability to come up with unexpected outputs can be valuable. A human recipient might be surprised by a response to a query and therefore be pushed into a certain direction of thoughts which might lead to the novel connection of ideas.”

Ha argued that the LLMs of today are being held to an unreasonable standard — humans “hallucinate” too, after all, when we misremember or otherwise misrepresent the truth. But with LLMs, he believes we experience a cognitive dissonance because the models produce outputs that look good on the surface but contain errors upon further inspection.

“Simply put, LLMs, just like any AI technique, are imperfect and thus make mistakes,” he said. “Traditionally, we’re OK with AI systems making mistakes since we expect and accept imperfections. But it’s more nuanced when LLMs make mistakes.”

Indeed, the answer may well not lie in how generative AI models work at the technical level. Insofar as there’s a “solution” to hallucination today, treating models’ predictions with a skeptical eye seems to be the best approach.
