Sponsored Content

Why AI doesn’t have to be a black box

Every day, businesses use artificial intelligence to make critical decisions in everything from sales strategy to manufacturing processes. But too often, it’s not clear exactly how AI models produce the results that drive those decisions. That creates risk: when an AI-driven outcome proves discriminatory (think bias in health care algorithms), it’s challenging for businesses to ascertain exactly what went wrong.

How did we get here?

Generally speaking, AI recognizes patterns within data and identifies relationships between variables in that data. As AI models such as deep learning and neural networks have grown more advanced over the past decade, it has become far more difficult to understand why they behave the way they do. However, Wells Fargo is leading efforts to improve the explainability of AI.

“The larger you make the model, the more complex it becomes, and the harder it is to explain why it’s coming to its conclusions,” says Chintan Mehta, CIO, Head of Digital Technology and Innovation at Wells Fargo, who leads the evolution of the bank’s digital platforms and enhances integration of the innovation pipeline into customer-facing capabilities. “Every time a new layer of complexity is added, the onus is on us to keep up.” 

Thankfully, AI doesn’t have to remain an unknowable black box. Innovations in explainable AI are improving the ability of companies like Wells Fargo to understand why AI models reach the conclusions they do, ensuring they can be deployed responsibly in real-world environments.

The supreme value of validation

Simplicity and operational capabilities are fundamental for effective AI development, according to Mehta. That’s why Wells Fargo has an independent team of data scientists who validate every Wells Fargo AI model before it’s deployed into a customer-facing environment.

“At a process and a structural level, it ensures a degree of independent validation that our customers expect and deserve,” Mehta says.

The validation itself is compartmentalized into stages, so the team can better identify how each element is used within an AI model. Before an attribute gets used in a model build-out, for example, it is put through a bias-breaking process to account for potential algorithmic unfairness. The dataset goes through similar bias breaking, in addition to tests that determine the explainability and viability of the data.
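The article doesn’t spell out what that bias-breaking process involves. Purely as a sketch of the kind of pre-build check described, the example below measures how much favorable outcomes differ across groups of a candidate attribute; the column names, the synthetic data, and the 0.8 threshold are illustrative assumptions, not Wells Fargo’s actual criteria.

```python
# Minimal sketch of a pre-build fairness check on one attribute.
# Column names ("approved", "group") and the 0.8 threshold are
# illustrative assumptions, not Wells Fargo's actual process.
import pandas as pd

def disparate_impact(df: pd.DataFrame, outcome: str, group: str) -> dict:
    """Compare favorable-outcome rates across groups of a candidate attribute."""
    rates = df.groupby(group)[outcome].mean()   # favorable rate per group
    ratio = rates.min() / rates.max()           # disparate impact ratio
    gap = rates.max() - rates.min()             # demographic parity gap
    return {"rates": rates.to_dict(), "ratio": ratio, "gap": gap}

if __name__ == "__main__":
    data = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
        "approved": [1,   1,   0,   1,   0,   0,   1,   1],
    })
    report = disparate_impact(data, outcome="approved", group="group")
    # Flag the attribute for review if the ratio falls below the common 0.8 rule of thumb.
    print(report, "flag:", report["ratio"] < 0.8)
```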

This rigorous validation process has allowed the Wells Fargo team to continue improving bias-breaking and explainability techniques of its own, which undergo peer review for publication in partnership with university researchers. It’s a virtuous cycle: by validating an AI model before it’s deployed, the team discovers new and innovative techniques, which they can use for validating future models.

“They have to look at every model, whether it’s gradient boosting, deep learning, traditional statistical models, or stochastic gradient descent,” Mehta says, citing several techniques used in machine learning. “Regardless of the technique, they have to basically solve that one same problem: Explain why that model did what it did.”

How explainable AI helped build a virtual assistant


A pair of real-world examples demonstrate how Wells Fargo is using innovative explainability techniques to lead the way in deploying AI models responsibly.

Earlier this month, Wells Fargo announced Google Cloud will power its new virtual assistant, Fargo. For Mehta and the data scientists, the development of Fargo boils down to an essential question: “How do you take this to market in a way that’s safe and adds value for our customers?”

“Language capabilities have been around for a few years now, so the expectations have already been raised relatively high,” Mehta says. “From a customer experience standpoint, the model should perform exactly the way somebody thinks it should perform the first time they use it.”

To meet that standard, Wells Fargo is applying extensive testing and improved post hoc techniques (methods that explain the behavior of models after they have been built) to Google Cloud’s cutting-edge large language models, which are embedded into the virtual assistant. In other words, the team wants to understand why Google’s models interpret language the way they do. Through multiple alpha and beta tests, they were able to run through the complex interactions a customer might have with Fargo, ensuring not just that the experience works correctly, but that potential issues can be identified, understood, and corrected during development.
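The specific post hoc techniques aren’t named in the article. As a minimal sketch of the general idea, explaining an already-built model only by probing its inputs and outputs, the example below applies permutation importance to a stand-in classifier; nothing here reflects the actual Fargo pipeline or Google Cloud’s models.

```python
# Minimal sketch of a post hoc explanation: permutation importance probes a
# trained model by shuffling one feature at a time and measuring the score drop.
# The dataset and model here are stand-ins, not anything used in Fargo.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)  # the "already built" model

# Explain the model's behavior without inspecting its internals.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance ~ {score:.3f}")
```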

“The alpha and beta tests have the explainability process built into them,” Mehta says. “The response contains metadata that tells us why the interpretation was done the way it was done, so we can actually improve the experience in case we notice anything.”
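The shape of that metadata isn’t documented here. Purely as a hypothetical illustration of a response that carries its own explanation, the snippet below invents a small payload and a review check; every field name is made up and does not reflect Google Cloud’s or Wells Fargo’s actual schema.

```python
# Hypothetical shape of an assistant response that carries explanation metadata.
# Every field name here is invented for illustration only.
response = {
    "reply": "Your checking account balance is $1,250.",
    "explanation": {
        "detected_intent": "check_balance",
        "intent_confidence": 0.97,
        "evidence_tokens": ["balance", "checking account"],
        "model_version": "assistant-v3",
    },
}

# Reviewers can inspect why the interpretation was made and flag surprises.
if response["explanation"]["intent_confidence"] < 0.8:
    print("Low-confidence interpretation, route for review:", response["explanation"])
```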

As a result, explainability is playing an increasingly important role in the development of Fargo. The post hoc interpretation of Google Cloud’s models has helped Wells Fargo build a virtual assistant that meets customer expectations.

Why explainability matters for financial services


With improvements in explainable AI techniques, Wells Fargo is leading innovation in traditional financial services, particularly in the credit space. 

If a financial institution is using a predictive model to make credit decisions, a faulty model can sharply raise the risk of inadvertently harming customers, for example by recommending that credit be declined for somebody who is actually creditworthy.

In a paper published last April, Wells Fargo data scientists outlined a new technique they developed for explaining credit decisions. In short, this method allows their AI model to provide the reason for its decision, in addition to the decision itself.

“The model not only gives you the output, but it also explains why it reached that conclusion,” Mehta says. “It has been critical to actually scale AI in the context of customer-centered credit decisioning.”
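The paper’s method isn’t reproduced in this article. As a loose sketch of a model that returns reasons alongside its decision, the example below pairs a logistic regression’s credit decision with its largest per-applicant feature contributions; the features, data, and attribution approach are assumptions for illustration only.

```python
# Loose sketch of a credit decision that returns reason codes with the outcome.
# A logistic regression's per-feature contribution (coefficient * value) stands in
# for the attribution method described in the paper; features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "late_payments", "credit_age_years"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                      # standardized stand-in data
y = (X[:, 0] - X[:, 1] - X[:, 2] > 0).astype(int)  # synthetic "creditworthy" label

model = LogisticRegression().fit(X, y)

def decide_with_reasons(applicant: np.ndarray, top_k: int = 2):
    """Return an approve/decline decision plus the features that drove it most."""
    contributions = model.coef_[0] * applicant          # signed per-feature contribution
    decision = "approve" if model.predict(applicant.reshape(1, -1))[0] == 1 else "decline"
    order = np.argsort(np.abs(contributions))[::-1][:top_k]
    reasons = [(features[i], round(float(contributions[i]), 3)) for i in order]
    return decision, reasons

print(decide_with_reasons(np.array([-0.5, 1.2, 0.8, 0.1])))
```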

Ultimately, the additional context of an explanation introduces an opportunity for people to review the model’s decision. If an explanation doesn’t make sense, they can step in to reconsider the decision — something that wouldn’t be possible if the AI model were just a black box.

By embedding these explainability checks into the lifecycle of AI development, Wells Fargo is leading the way in designing techniques that make AI deployments more effective.

Within the next few months, Wells Fargo will launch Fargo, its new virtual assistant. Fargo will leverage Google Cloud’s leading AI to provide personalized service and guidance to customers.