Making AI trustworthy: Can we overcome black-box hallucinations?

Like most engineers, as a kid I could solve elementary school math problems by simply writing down the answers.

But when I didn’t “show my work,” my teachers would dock points; the right answer wasn’t worth much without an explanation. Yet, those lofty standards for explainability in long division somehow don’t seem to apply to AI systems, even those making crucial, life-impacting decisions.

The major AI players that fill today’s headlines and feed stock market frenzies — OpenAI, Google, Microsoft — operate their platforms on black-box models. A query goes in one side and an answer spits out the other side, but we have no idea what data or reasoning the AI used to provide that answer.

Most of these black-box AI platforms are built on a decades-old technology framework called a “neural network.” These AI models are abstract representations of the vast amounts of data on which they are trained; they are not directly connected to training data. Thus, black-box AIs infer and extrapolate based on what they believe to be the most likely answer, not actual data.

Sometimes this complex predictive process spirals out of control and the AI “hallucinates.” By nature, black-box AI is inherently untrustworthy because it cannot be held accountable for its actions. If you can’t see why or how the AI makes a prediction, you have no way of knowing if it used false, compromised, or biased information or algorithms to come to that conclusion.

While neural networks are incredibly powerful and here to stay, there is another under-the-radar AI framework gaining prominence: instance-based learning (IBL). And it’s everything neural networks are not. IBL is AI that users can trust, audit, and explain. IBL traces every single decision back to the training data used to reach that conclusion.

IBL can explain every decision because the AI does not generate an abstract model of the data, but instead makes decisions from the data itself. And users can audit AI built on IBL, interrogating it to find out why and how it made decisions, and then intervening to correct mistakes or bias.

This all works because IBL stores training data (“instances”) in memory and, following the principle of “nearest neighbors,” makes predictions about new instances based on their proximity to the instances it has already stored. IBL is data-centric, so individual data points can be directly compared against each other to gain insight into the dataset and the predictions. In other words, IBL “shows its work.”
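
To make that concrete, here is a minimal nearest-neighbors sketch in Python. It is illustrative only (it uses scikit-learn and invented toy data, not the author’s commercial IBL framework), but it shows the core idea: the prediction comes straight from stored instances, so every answer can be traced back to the specific training rows that produced it.

```python
# Minimal instance-based learning sketch: k-nearest-neighbors prediction
# that returns the stored instances behind each answer.
# Illustrative only: scikit-learn and this toy data are assumptions,
# not the author's commercial IBL framework.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# "Training" is simply storing the instances and their labels in memory.
X_train = np.array([[25, 40_000], [32, 60_000], [47, 90_000], [51, 85_000]])
y_train = np.array(["declined", "approved", "approved", "approved"])

index = NearestNeighbors(n_neighbors=3).fit(X_train)

def predict_with_evidence(x_new):
    """Predict by majority vote of the nearest stored instances and
    return those instances as the 'work shown' behind the answer."""
    distances, neighbor_ids = index.kneighbors([x_new])
    labels = list(y_train[neighbor_ids[0]])
    prediction = max(set(labels), key=labels.count)
    evidence = [(int(i), X_train[i].tolist(), str(y_train[i]), float(d))
                for i, d in zip(neighbor_ids[0], distances[0])]
    return prediction, evidence

prediction, evidence = predict_with_evidence([45, 80_000])
print(prediction)  # the answer
print(evidence)    # the exact training rows (and distances) that produced it
```

Feature scaling and weighting matter in practice, but the point of the sketch is the audit trail: the evidence list is the literal data the prediction rests on.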

The potential for such understandable AI is clear. Companies, governments, and any other regulated entities that want to deploy AI in a trustworthy, explainable, and auditable way could use IBL AI to meet regulatory and compliance standards. IBL AI will also be particularly useful for any applications where bias allegations are rampant — hiring, college admissions, legal cases, and so on.

Companies are using IBL in the wild today. My company has built a commercial IBL framework used by customers such as large financial institutions to detect anomalies across customer data and generate auditable synthetic data that complies with the EU’s General Data Protection Regulation (GDPR).

Of course, IBL is not without challenges. The main limiting factor for IBL is scalability, which was also a challenge that neural networks faced for 30 years until modern computing technology made them feasible. With IBL, each piece of data must be queried, cataloged, and stored in memory, which becomes harder as the dataset grows.

However, researchers are creating fast-query systems based on advances in information theory to significantly speed up this process. These advances have made IBL computationally competitive with neural networks.
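
To show what such a speedup can look like (without claiming this is the specific technique the author has in mind), here is a small sketch comparing a brute-force scan with a standard tree-based index for nearest-neighbor queries.

```python
# Illustrative only: one standard way to accelerate nearest-neighbor queries
# (a ball tree) versus a brute-force scan. The article does not name its
# fast-query technique; this simply shows the kind of gap indexing can close.
import time
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
stored_instances = rng.normal(size=(200_000, 8))  # 200k instances, 8 features
queries = rng.normal(size=(1_000, 8))

for algorithm in ("brute", "ball_tree"):
    index = NearestNeighbors(n_neighbors=5, algorithm=algorithm).fit(stored_instances)
    start = time.perf_counter()
    index.kneighbors(queries)
    print(f"{algorithm}: {time.perf_counter() - start:.2f} s for 1,000 queries")
```

Exact timings depend on hardware and data, and tree-based indexes lose their edge in very high dimensions, which is part of why IBL scalability remains an active engineering problem.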

Despite these challenges, the potential for IBL is clear. As more and more companies seek safe, explainable, and auditable AI, black-box neural networks will no longer cut it. So, if you run a company — whether a small startup or a larger enterprise — here are some practical tips to start deploying IBL today:

Adopt an agile and open mindset

With IBL, it works best to explore your data for the insights it can offer, rather than assigning the model one narrow task, such as “predict the optimal price” of an item. Keep an open mind and let IBL guide your learnings. IBL may tell you that it can’t predict an optimal price very well from a given dataset, but it can predict the times of day people make the most purchases, how they contact your company, or what items they are most likely to buy.

IBL is an agile AI framework that requires collaborative communication between decision-makers and data science teams — not the usual “toss a question over the transom, wait for your answer” that we see in many organizations deploying AI today.

Think “less is more” for AI models

In traditional black-box AI, a single model is trained and optimized for a single task, such as classification. In a large enterprise, this might mean there are thousands of AI models to manage, which is both expensive and unwieldy. In contrast, IBL enables versatile, multitask analysis. For example, a single IBL model can be used for supervised learning, anomaly detection, and synthetic data generation, while still providing full explainability.

This means IBL users can build and maintain fewer models, enabling a leaner, more adaptable AI toolbox. So if you’re adopting IBL, you need programmers and data scientists, but you don’t need to invest in tons of PhDs with AI experience.
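
As a rough sketch of that multitask idea, a single in-memory instance store can serve both classification and anomaly scoring. Again, this is generic nearest-neighbor logic with made-up data, assumed purely for illustration; it is not the author’s framework, and distance-to-neighbors is just one common choice of anomaly score.

```python
# One nearest-neighbor index reused for two tasks: classification and
# anomaly scoring. Illustrative only; not the author's IBL framework.
import numpy as np
from sklearn.neighbors import NearestNeighbors

X_train = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.9], [5.0, 5.2], [5.1, 4.9]])
y_train = np.array(["A", "A", "A", "B", "B"])

index = NearestNeighbors(n_neighbors=3).fit(X_train)

def classify(x):
    # Task 1: supervised prediction by majority vote of the nearest instances.
    _, ids = index.kneighbors([x])
    labels = list(y_train[ids[0]])
    return max(set(labels), key=labels.count)

def anomaly_score(x):
    # Task 2: mean distance to the nearest instances; larger means the point
    # is farther from anything seen in training.
    distances, _ = index.kneighbors([x])
    return float(distances.mean())

print(classify([1.0, 1.0]), anomaly_score([1.0, 1.0]))  # "A", small score
print(classify([9.0, 9.0]), anomaly_score([9.0, 9.0]))  # "B", large score
```

The same stored instances answer both questions, which is the sense in which one IBL model can replace several special-purpose black-box models.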

Mix up your AI tool set

Neural networks are great for any applications that don’t need to be explained or audited. But when AI is helping companies make big decisions, such as whether to spend millions of dollars on a new product or complete a strategic acquisition, it must be explainable. And even when AI is used to make smaller decisions, such as whether to hire a candidate or give someone a promotion, explainability is key. No one wants to hear they missed out on a promotion based on an inexplicable, black-box decision.

And companies will soon face litigation in these types of instances. Choose your AI frameworks based on the application; go with neural nets if you just want fast data ingestion and quick decision-making, and use IBL when you need trustworthy, explainable, and auditable decisions.

Instance-based learning is not a new technology. Over the last two decades, computer scientists have developed IBL in parallel with neural networks, but IBL has received less public attention. Now IBL is gaining new notice amid today’s AI arms race. IBL has proven it can scale while maintaining explainability — a welcome alternative to hallucinating neural nets that spew out false and unverifiable information.

With so many companies blindly adopting neural network–based AI, the next year will undoubtedly see many data leaks and lawsuits over bias and misinformation claims.

Once the mistakes made by black-box AI begin hitting companies’ reputations — and bottom lines! — I expect that slow-and-steady IBL will have its moment in the sun. We all learned the importance of “showing our work” in elementary school, and we can certainly demand that same rigor from AI that decides the paths of our lives.