Credo AI launches backed by $5.5 million to help companies with ‘ethical AI’

There are many in the world of AI who worry about its implications. One of them is Navrina Singh, a former product manager at Qualcomm and then Microsoft, where she saw firsthand how a Twitter bot the company launched in 2016 as an experiment in “conversational understanding” was, to quote The Verge, “taught to be a racist asshole in less than a day.”

Microsoft’s mishap is but one in a long string of examples of AI gone wrong. In 2019, researchers found that an algorithm sold by the health services company Optum to predict which patients would benefit from extra medical care badly misjudged the health needs of the sickest Black patients. Credit-scoring AI systems, meanwhile, have repeatedly been found to be sexist.

While many larger companies have assembled teams to tackle the ethical problems arising from the massive troves of data they collect and then use to train their machine learning models, progress on this front has hardly been smooth. In the meantime, smaller AI-powered companies that can’t afford dedicated teams are largely winging it.

Enter Singh’s company, Credo AI, a SaaS outfit that’s today taking the wraps off $5.5 million in funding that it raised from Decibel, Village Global and AI Fund.

The company’s promise is pretty straightforward as Singh explains it, even if what it’s managing is complex. She and her current team of 15 employees have developed a risk framework that gives companies a window into their own governance. It’s not so much that the startup’s tech is revolutionary, as we understand it, but rather that Credo AI addresses what is often a lack of accountability within organizations. It gives them a control panel with tools to manage all manner of data they are collecting, and it suggests controls they might not be using, like IEEE standards they can adopt to provide stronger guardrails for their machine learning models.

“What many companies haven’t really figured out is that there is a lack of common language and alignment on what ‘good’ looks like in AI governance, so organizations are really looking for help with that standardization,” she says.

Credo AI’s software is not a one-size-fits-all offering, Singh notes. Different organizations see different impacts from their models, and even within so-called industry verticals, individual companies often have different objectives. “Fairness is not defined within different sectors,” says Singh, who points as an example to financial services, where much is being redefined on an ongoing basis by federal banking agencies. “What does fairness for fraud mean? What is fairness for credit underwriting?”

Rather than wait for answers, says Singh, Credo AI works with companies to align on what their own values are, then offers them the tools to manage accordingly, including the ability to add metrics and stakeholders as they choose. “We want to enable your data science team to collaborate with your compliance team, your executive to collaborate with your machine-learning person, your product manager to collaborate with your risk manager,” says Singh.

Credo AI wants to help companies avoid those face-palm moments — or worse.

Certainly, it’s a big market opportunity. According to data published earlier this year by the International Data Corporation (IDC), worldwide revenue for the AI market — including software, hardware and services — was expected to grow 16.4% year over year in 2021, to $327.5 billion. By 2024, says IDC, the market is expected to break the $500 billion mark.

As companies spend more on AI, they’ll presumably need more help to make sure it’s performing the way they expect and not causing harm. Indeed, if Singh has her way, working with Credo AI will one day serve as a kind of stamp that companies use to advertise their focus on ethical AI.

“If we do our jobs right,” says Singh, “I want anyone who’s building good AI to be associated with Credo AI. That is certainly our aspiration.”