How two founders approach building ethical AI startups in health care

The speed at which AI companies are evolving is making a lot of people nervous, because moving fast can create ethical problems that go unaddressed.

Building ethical algorithms takes time. Models built quickly are more likely to carry ingrained bias and to lack the guardrails needed to keep them from doing damage. Built in haste, or built poorly, AI models can cause real harm in sensitive industries such as health care.

But, of course, many of the worries center on new founders riding into the space on the hype train, as opposed to the many entrepreneurs who started building models with care years before the current market frenzy.

Amy Brown, the founder and CEO of Authenticx, a startup that helps health care companies gain insights from their customer call center data using AI, said on TechCrunch’s Found podcast that those looking to build AI algorithms should recognize the potential negative consequences of models being built incorrectly.

For Authenticx, that meant building the entire model in-house and putting people who understand the nuances of health care data in charge of labeling and training the algorithm.

“Instead of just hiring data scientists, we hired nurses and social workers and counselors, and people who had actually spent their career working inside the business of health care, who not only understood the spoken word that they were listening to, but also understood the context of the situation,” Brown said.

Within those roles, Authenticx was deliberate about who it brought on, building a diverse oversight group to keep the team from inserting biases into the model and to help spot potentially problematic patterns.

Authenticx isn’t the only company hiring a diverse team of health care professionals to help mitigate bias and other problems when using AI. Regard, which creates AI tools that help doctors automate some clinical work, takes a similar approach: co-founder and CEO Eli Ben-Joseph said his company also makes sure that every data point fed into the system is signed off on by multiple people.

“When we create any algorithms, and the outputs of those algorithms, we make sure that it’s not just one person who is giving the stamp of approval,” he said in another recent Found episode. “There’s always at least two or three people who review it and make sure that there isn’t any kind of bias that’s been built into it.”
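That kind of multi-reviewer gate is straightforward to enforce in a data pipeline. As a purely illustrative sketch, not Regard’s actual tooling, here is one way to hold a labeled data point out of training until enough distinct reviewers have signed off; the field names and the two-reviewer threshold are assumptions.

```python
MIN_APPROVALS = 2  # assumed threshold; Ben-Joseph cites "two or three" reviewers


def approved_for_training(data_point: dict) -> bool:
    """Admit a labeled example to the training set only once enough
    distinct reviewers have signed off on it."""
    reviewers = {a["reviewer"] for a in data_point.get("approvals", [])}
    return len(reviewers) >= MIN_APPROVALS


labeled = [
    {"text": "...", "label": "adverse_event",
     "approvals": [{"reviewer": "rn_1"}, {"reviewer": "msw_2"}]},
    {"text": "...", "label": "billing_dispute",
     "approvals": [{"reviewer": "rn_1"}]},  # one sign-off: held back for now
]

training_set = [d for d in labeled if approved_for_training(d)]
```

The point of the gate is that no single reviewer’s judgment, and no single reviewer’s blind spot, determines what the model learns from.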

Regard also deliberately removed the model’s ability to hallucinate, Ben-Joseph said. The company built the algorithm to always give the same answer to the same question or prompt, which prevents misinformation, and the model can’t learn on its own without human intervention. If the algorithm makes a suggestion to a physician and the physician ignores it, a team member reviews every such instance before deciding whether the model should learn from that situation.
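The article doesn’t describe Regard’s implementation, but both properties map to familiar patterns: deterministic inference (for example, greedy decoding or a cached prompt-to-answer mapping) and a human-in-the-loop feedback queue. The sketch below is an assumption-laden illustration of those two patterns, not Regard’s system; every name in it is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    prompt: str
    answer: str
    accepted: bool  # whether the physician acted on the suggestion


class ClinicalAssistant:
    """Illustrative only: deterministic answers plus human-gated learning."""

    def __init__(self):
        self._answers: dict[str, str] = {}         # fixed prompt -> answer mapping
        self.review_queue: list[Suggestion] = []   # ignored suggestions awaiting review

    def answer(self, prompt: str) -> str:
        # Deterministic: the same prompt always yields the same answer,
        # e.g. via greedy (temperature-0) decoding or, as here, a cache.
        if prompt not in self._answers:
            self._answers[prompt] = self._infer(prompt)
        return self._answers[prompt]

    def _infer(self, prompt: str) -> str:
        # Stand-in for the real model call.
        return f"suggestion for: {prompt}"

    def record_outcome(self, suggestion: Suggestion) -> None:
        # The model never updates itself. An ignored suggestion is queued
        # for a human reviewer instead of triggering automatic learning.
        if not suggestion.accepted:
            self.review_queue.append(suggestion)

    def apply_review(self, suggestion: Suggestion, should_learn: bool) -> None:
        # Only after a human decides does the stored answer change.
        if should_learn:
            self._answers.pop(suggestion.prompt, None)  # re-derive next time
```

The key design choice is that record_outcome only queues feedback; nothing about the model’s behavior changes until a person reviews the queue and calls apply_review.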

“We want to make sure that it doesn’t go off on some random direction without us knowing,” Ben-Joseph said. “So we’ll gather the feedback. But then we still have a human in the loop to make sure that when we’re going to improve an algorithm, it’s the right thing to do.”

The best way for entrepreneurs new to AI to avoid its pitfalls, then, is to build slowly and with care. Thankfully for them, there seems to be enough VC capital and interest to allow it.