Making AI trustworthy: Can we overcome black-box hallucinations?

Mike Capps

Contributor

Dr. Mike Capps is CEO and co-founder of ethical AI startup Diveplane and former president of Epic Games.

Like most engineers, as a kid I could answer elementary school math problems by just filling in the answers.

But when I didn’t “show my work,” my teachers would dock points; the right answer wasn’t worth much without an explanation. Yet, those lofty standards for explainability in long division somehow don’t seem to apply to AI systems, even those making crucial, life-impacting decisions.

The major AI players that fill today’s headlines and feed stock market frenzies — OpenAI, Google, Microsoft — operate their platforms on black-box models. A query goes in one side and an answer spits out the other side, but we have no idea what data or reasoning the AI used to provide that answer.

Most of these black-box AI platforms are built on a decades-old technology framework called a “neural network.” These AI models are abstract representations of the vast amounts of data on which they are trained; they are not directly connected to training data. Thus, black-box AIs infer and extrapolate based on what they believe to be the most likely answer, not actual data.

Sometimes this complex predictive process spirals out of control and the AI “hallucinates.” Black-box AI is inherently untrustworthy because it cannot be held accountable for its actions. If you can’t see why or how the AI makes a prediction, you have no way of knowing whether it relied on false, compromised, or biased information or algorithms to reach that conclusion.

While neural networks are incredibly powerful and here to stay, there is another under-the-radar AI framework gaining prominence: instance-based learning (IBL). And it’s everything neural networks are not. IBL is AI that users can trust, audit, and explain. IBL traces every single decision back to the training data used to reach that conclusion.

IBL can explain every decision because the AI does not generate an abstract model of the data, but instead makes decisions from the data itself. And users can audit AI built on IBL, interrogating it to find out why and how it made decisions, and then intervening to correct mistakes or bias.

This all works because IBL stores training data (“instances”) in memory and, following the principle of “nearest neighbors,” makes predictions about new instances based on their proximity to existing ones. IBL is data-centric, so individual data points can be compared directly against one another to gain insight into both the dataset and the predictions. In other words, IBL “shows its work.”
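To make that concrete, here is a minimal sketch of the nearest-neighbor idea in Python, using scikit-learn as a generic stand-in rather than any particular commercial IBL framework; the features, labels, and example query are hypothetical.

```python
# A minimal sketch of the nearest-neighbor idea behind IBL, using scikit-learn.
# The data below is hypothetical; a production IBL system would add calibrated
# distance metrics, uncertainty estimates, and full audit logging.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical training instances: [age, income_k] -> approved (1) or denied (0)
X_train = np.array([[25, 40], [32, 85], [47, 120], [51, 60], [29, 55]])
y_train = np.array([0, 1, 1, 0, 1])

index = NearestNeighbors(n_neighbors=3).fit(X_train)

def predict_and_explain(x_new):
    """Predict by majority vote of the nearest stored instances,
    and return those instances so the decision can be audited."""
    distances, idx = index.kneighbors([x_new])
    neighbors = X_train[idx[0]]
    votes = y_train[idx[0]]
    prediction = int(round(votes.mean()))
    return prediction, list(zip(neighbors.tolist(), votes.tolist(), distances[0].tolist()))

pred, evidence = predict_and_explain([30, 70])
print("prediction:", pred)
for instance, label, dist in evidence:
    print(f"  based on instance {instance} (label={label}, distance={dist:.1f})")
```

Because the prediction is just a vote among stored instances, the “explanation” is simply the list of real training records that drove it.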

The potential for such understandable AI is clear. Companies, governments, and any other regulated entities that want to deploy AI in a trustworthy, explainable, and auditable way could use IBL AI to meet regulatory and compliance standards. IBL AI will also be particularly useful for any applications where bias allegations are rampant — hiring, college admissions, legal cases, and so on.

Companies are using IBL in the wild today. My company has built a commercial IBL framework used by customers such as large financial institutions to detect anomalies across customer data and generate auditable synthetic data that complies with the EU’s General Data Protection Regulation (GDPR).

Of course, IBL is not without challenges. The main limiting factor for IBL is scalability, which was also a challenge that neural networks faced for 30 years until modern computing technology made them feasible. With IBL, each piece of data must be queried, cataloged, and stored in memory, which becomes harder as the dataset grows.

However, researchers are building fast-query systems, based on advances in information theory, that significantly speed up this process. These advances have made IBL computationally competitive with neural networks.
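Those information-theoretic fast-query systems aren’t detailed here, so as a rough stand-in, the sketch below shows the general idea of indexing stored instances so that nearest-neighbor lookups avoid a brute-force scan of the whole dataset; the tree structure, data, and parameters are illustrative assumptions, not the specific approach described above.

```python
# Indexing the instance store so neighbor queries scale better than a full scan.
# A KD-tree is used here purely as a familiar example of a fast-query structure.
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100_000, 8))      # hypothetical instance store

tree = KDTree(X_train, leaf_size=40)         # built once, queried many times

query = rng.normal(size=(1, 8))
distances, indices = tree.query(query, k=5)  # fast lookup instead of brute force
print("nearest stored instances:", indices[0])
```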

Despite these challenges, the potential for IBL is clear. As more and more companies seek safe, explainable, and auditable AI, black-box neural networks will no longer cut it. So, if you run a company — whether a small startup or a larger enterprise — here are some practical tips to start deploying IBL today:

Adopt an agile and open mindset

IBL works best when you explore your data for the insights it can offer, rather than assigning the AI one narrow task, such as predicting the “optimal price” of an item. Keep an open mind and let IBL guide your learnings. IBL may tell you that it can’t predict an optimal price very well from a given dataset, but that it can predict the times of day people make the most purchases, how they contact your company, and which items they are most likely to buy.
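As a hedged illustration of letting the data guide you, the sketch below scores several candidate targets with the same instance-based learner and reports which ones it can actually predict; the DataFrame, column names, and relationships are entirely made up for the example.

```python
# Score multiple candidate targets with one instance-based learner and let the
# cross-validated results say what the data can and cannot predict.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

# Hypothetical transaction data standing in for a real purchase history.
rng = np.random.default_rng(7)
n = 1_000
age = rng.integers(18, 70, n)
df = pd.DataFrame({
    "customer_age": age,
    "basket_size": rng.integers(1, 12, n),
    "optimal_price": rng.normal(50, 15, n),   # unrelated to the features: pure noise
    "purchase_hour_bucket": (age // 20) % 3,  # loosely tied to age: morning/afternoon/evening
    "contact_channel": rng.integers(0, 3, n), # unrelated: email/phone/chat
})
features = df[["customer_age", "basket_size"]]

candidate_targets = {
    "optimal_price": KNeighborsRegressor(n_neighbors=5),
    "purchase_hour_bucket": KNeighborsClassifier(n_neighbors=5),
    "contact_channel": KNeighborsClassifier(n_neighbors=5),
}

for target, model in candidate_targets.items():
    score = cross_val_score(model, features, df[target], cv=5).mean()
    print(f"{target}: cross-validated score = {score:.2f}")
```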

IBL is an agile AI framework that requires collaborative communication between decision-makers and data science teams — not the usual “toss a question over the transom, wait for your answer” that we see in many organizations deploying AI today.

Think “less is more” for AI models

In traditional black-box AI, a single model is trained and optimized for a single task, such as classification. In a large enterprise, this might mean thousands of AI models to manage, which is both expensive and unwieldy. In contrast, IBL enables versatile, multitask analysis. For example, a single IBL model can be used for supervised learning, anomaly detection, and synthetic data generation, while still providing full explainability.
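Here is a minimal sketch of the “one instance store, several tasks” idea, using generic scikit-learn neighbor tools as stand-ins rather than the specific multitask IBL framework described above; the data and labels are synthetic.

```python
# One set of stored instances reused for prediction, anomaly detection,
# and a crude form of synthetic data generation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, LocalOutlierFactor

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))               # hypothetical training instances
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # hypothetical labels

# 1) Supervised learning: classify a new instance from its neighbors.
clf = KNeighborsClassifier(n_neighbors=7).fit(X, y)
print("prediction:", clf.predict(rng.normal(size=(1, 4))))

# 2) Anomaly detection: flag instances that sit far from their neighbors.
lof = LocalOutlierFactor(n_neighbors=20)
outlier_flags = lof.fit_predict(X)          # -1 marks anomalies
print("anomalies found:", int((outlier_flags == -1).sum()))

# 3) Synthetic data: perturb stored instances to create new records
#    that stay close to the original data distribution.
synthetic = X[rng.integers(0, len(X), size=100)] + rng.normal(scale=0.1, size=(100, 4))
print("synthetic rows:", synthetic.shape)
```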

This means IBL users can build and maintain fewer models, enabling a leaner, more adaptable AI toolbox. So if you’re adopting IBL, you need programmers and data scientists, but you don’t need to invest in tons of PhDs with AI experience.

Mix up your AI tool set

Neural networks are great for any applications that don’t need to be explained or audited. But when AI is helping companies make big decisions, such as whether to spend millions of dollars on a new product or complete a strategic acquisition, it must be explainable. And even when AI is used to make smaller decisions, such as whether to hire a candidate or give someone a promotion, explainability is key. No one wants to hear they missed out on a promotion based on an inexplicable, black-box decision.

And companies will soon face litigation over these kinds of decisions. Choose your AI frameworks based on the application: go with neural nets if you just want fast data ingestion and quick decision-making, and use IBL when you need trustworthy, explainable, and auditable decisions.

Instance-based learning is not a new technology. Over the last two decades, computer scientists have developed IBL in parallel with neural networks, but IBL has received less public attention. Now IBL is gaining new notice amid today’s AI arms race. IBL has proven it can scale while maintaining explainability — a welcome alternative to hallucinating neural nets that spew out false and unverifiable information.

With so many companies blindly adopting neural network–based AI, the next year will undoubtedly see many data leaks and lawsuits over bias and misinformation claims.

Once the mistakes made by black-box AI begin hitting companies’ reputations — and bottom lines! — I expect that slow-and-steady IBL will have its moment in the sun. We all learned the importance of “showing our work” in elementary school, and we can certainly demand that same rigor from AI that decides the paths of our lives.
