AI is the next frontier — but for whom?

A few weeks ago, a founder told me it took her three hours of endless clicking to find an AI-generated portrait of a Black woman. It reminded me of a talk I attended three years ago, in which Yasmin Green, then the director of research and development at Jigsaw, spoke about how human bias seeps into the programming of AI. Her talk and this founder's experience, miles away and years apart, are two pieces of the same puzzle.

Discussions about diversity are more important than ever as AI enters a new golden era. Every new technology seems to arrive with some harrowing consequence. So far, AI has contributed to racist job recruiting tactics and slower home loan approval rates for Black applicants. Self-driving cars have trouble detecting dark skin, making Black pedestrians more likely to be hit by them; in one study, robots labeled Black men as criminals 9% more often than they did white men, a finding that takes on new weight if judicial systems ever begin adopting AI.

“As AI pervades society, it is critical for developers and corporations to be good stewards of this technology but also hold fellow technologists accountable for these unethical use cases,” said Isoken Igbinedion, co-founder of Parfait.

AI ethics is often treated as a separate conversation from AI building, but the two should be one and the same. Bias is dangerous, especially as AI spreads into everyday life. For centuries, doctors judged Black patients by criteria now deemed racist, including the prevalent belief that they experienced less pain. Today, algorithms carry that discrimination forward: one 2019 study found that an algorithm used by U.S. hospitals “was less likely to refer Black people than white people who were equally sick to programs that aim to improve care for patients with complex medical needs.”

Right now, bias appears in various AI subsectors, ranging from investment to hiring to data and product execution, and each instance of bias props up others. Eghosa Omoigui, the founder of EchoVC Partners, told TechCrunch that though AI can be “incredibly powerful,” society is still far from “flawless” artificial intelligence.

“This means that the likelihood of AI bias in outcomes remains high because of the excessive dependencies on the sources, weights and biases of training data,” he said. “Diverse teams will prioritize the exquisite understanding and sensitivity necessary to deliver global impact.”

Omoigui’s brother, Nosa, the founder of ESG compliance company Weave.AI, reiterated that point. Many of these models are black boxes, he said, and even their creators have no particular insight into how a prediction or recommendation is reached. Compared to Wall Street, AI is practically unregulated, and as governance fails to keep pace with its growth, the technology risks going rogue. The EU has proposed steps to reduce and account for bias in AI-powered products; the proposal has drawn some pushback, but its mere existence puts the EU slightly ahead of where the U.S. is now.

In fact, Eghosa said many investors don’t care or think about diversity at all within AI and that there is a groupthink mentality when it comes to machine-led capabilities. He recalled the reactions investors gave him when he helped lead an investment round for the software company KOSA AI, which monitors AI for bias and risks.

“Quite a few investors that we spoke to about the opportunity felt very strongly that AI bias wasn’t a thing or that a ‘woke product’ wouldn’t have product-market fit, which is surprising, to say the least,” Eghosa said.