Sponsored Content by DataRobot

AI bias regulation is good for business

By Ted Kwartler, VP of Trusted AI at DataRobot

Artificial intelligence (AI) has limitless potential to change society on a global scale for the better. We’re seeing data solutions combat our climate crisis by protecting our rainforests and tracking forest fires. In healthcare, AI solutions have allowed for better, faster clinical trials, more personalized healthcare plans, and data-driven forecasting that results in earlier and more accurate detection of life-threatening diseases. But as the future of AI rapidly unfolds at an unstoppable pace of innovation, we must pause and examine what happens when organizations and businesses arrive at the wrong data-driven solutions, despite best intentions.

All too often, and largely inadvertently, AI and machine learning algorithms have led to unacceptably biased outcomes. We’ve seen AI bias play a role in rejecting mortgage applications based on race, in underestimating high-risk healthcare needs for people of color, and in deprioritizing resumes based on gender. Each instance of AI bias is complex, deserving of exploration and attention to the myriad issues at play within each context.

With the State of AI Bias Report, a recent survey of business leaders conducted by DataRobot, our goal is to bring this conversation to the forefront: to acknowledge when and how AI bias surfaces, highlight business leaders’ key concerns about it, recognize the related disconnects at the organizational level, and identify the solutions we should look toward as we all strive to do better. In addition to the ethical and moral dilemmas that surface alongside AI bias, all of which deserve careful examination, it’s also critical to understand that AI bias is, simply put, bad for business.

DataRobot’s report revealed that organizations’ algorithms have inadvertently contributed to discrimination on the basis of gender (34%), age (32%), race (29%), sexual orientation (19%), and religion (18%). These biases have also negatively impacted more than 1 in 3 organizations surveyed. Of those organizations, 62% lost revenue as a result, 61% lost customers, 43% lost employees, and 35% incurred legal fees due to a lawsuit or legal action.

Such bias and discrimination are unacceptable. And business leaders agree: implementing preventative measures and allocating resources to eradicate AI bias is, more often than not, the norm. And yet, many business leaders and organizations struggle to do so. According to this research, the core challenge in eliminating bias is understanding why algorithms arrived at certain decisions in the first place. If data-driven decisions aren’t easily explainable, it is impossible to determine whether implicit bias played into the algorithm’s decision-making.

Many organizations are highly concerned about AI bias and are working to put guardrails in place to mitigate it. Unfortunately, most of the time these guardrails are not effective enough: 77% of organizations surveyed by DataRobot had an AI bias or algorithm test in place prior to discovering bias. And despite pouring more resources into AI bias mitigation efforts than ever before (84% of organizations surveyed plan to invest more in AI bias prevention in the next 12 months), AI bias continues to harm both individuals and businesses every day, with about a third of organizations having inadvertently contributed to bias in some form despite their best efforts.

A call for clarity and thoughtful regulation

The question of whether AI regulation would be harmful or helpful is a divisive one: while 81% of respondents want AI regulation, 45% worry that increased regulation will raise costs and make AI harder to adopt. On the other hand, 32% worry that a lack of government regulation will harm protected classes of people. After working alongside hundreds of data scientists, business leaders, and compliance officers, I believe that we need government regulation to protect organizations from themselves. Clear, universal guidelines are crucial for driving real change and, if done correctly, can help accelerate the use of AI in all businesses without raising costs.

Thoughtful legislation will clear up the ambiguity organizations currently face. For instance, many organizations today (large and small) have algorithms deployed for advertising. Without regulatory direction, it’s hard for businesses to know whether a marketing model exhibits unacceptable bias.

As an example, a data scientist could build a model identifying households with members suffering from diabetes. Using the algorithm, could an organization justify running a promotion for healthcare screening among suspected diabetes patients? On one hand, healthcare screenings help improve the quality of life for these prospects. On the other hand, some races have higher rates of diabetes than others, a pattern this fictitious model would pick up. As a result, the promotion will reach prospective diabetes patients at different rates by race without intending to. Since the use case is ambiguous, though likely well-meaning, organizations must weigh the risk of promoting health screenings where race is a proxy against the improved quality of life for these patients, with the additional context of profit-seeking.
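One simple way to surface the proxy effect described above, drawn from common fairness-review practice rather than from the report itself, is to compare the model’s selection rates across groups. The sketch below is a minimal illustration in Python with synthetic data and hypothetical column names.

```python
# Illustrative sketch (not DataRobot's methodology): given a scored prospect
# list, compare the promotion's selection rate across race groups and compute
# a disparate impact ratio. All data and column names are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Share of each group that the model selected for the promotion."""
    return df.groupby(group_col)[selected_col].mean()

# Synthetic stand-in for the model's output on a prospect list.
prospects = pd.DataFrame({
    "race": ["A", "A", "B", "B", "B", "C", "C", "C"],
    "selected_for_promotion": [1, 0, 1, 1, 1, 0, 0, 1],
})

rates = selection_rates(prospects, "race", "selected_for_promotion")
print(rates)

# Disparate impact ratio: lowest group rate divided by highest group rate.
# A value well below 1.0 signals the promotion reaches groups unevenly.
print("disparate impact ratio:", rates.min() / rates.max())
```

A check like this doesn’t settle whether the use case is acceptable, but it quantifies the skew an organization is weighing against the benefit of the screenings.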

Now, consider a similar algorithm promoting insurance quotes, where the model uses income to find the best prospects. Again, insurance is a benefit, but income can be skewed by gender. Thus, the insurance mailers can target more men than women. This is why it’s so critical that AI is explainable: if we can understand the factors that go into such decision-making, and have processes to review model outcomes, we are less likely to overlook an algorithm that is skewed (in this case, by gender).
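To make that review process concrete, here is a minimal sketch assuming a simple scikit-learn model and entirely synthetic data; the feature names and audit steps are illustrative, not a prescribed method. Inspecting the model’s coefficients shows which factors drive the insurance-targeting decision, and a group-level audit shows how the resulting mailings split by gender.

```python
# Minimal, hypothetical sketch of an explainability check plus outcome review.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Synthetic data: income (in $1,000s) is artificially correlated with gender.
gender = rng.integers(0, 2, n)                      # 0 = women, 1 = men
income = 40 + 20 * gender + rng.normal(0, 10, n)
age = rng.integers(25, 70, n)
best_prospect = (income + rng.normal(0, 15, n) > 55).astype(int)

X = pd.DataFrame({"income": income, "age": age})
model = LogisticRegression(max_iter=1000).fit(X, best_prospect)

# Explainability: the coefficients show that income dominates the decision,
# flagging it for review as a possible gender proxy.
print(dict(zip(X.columns, model.coef_[0].round(3))))

# Outcome review: mailing rate by gender makes the downstream skew visible.
audit = pd.DataFrame({"gender": gender, "mailed": model.predict(X)})
print(audit.groupby("gender")["mailed"].mean())
```

A linear model is used here purely so the factors that go into the decision are directly readable; more complex models would need dedicated explainability tooling to support the same kind of review.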

Today both of these use cases are acceptable, though each has AI bias considerations. Concise government regulations, paired with explainable AI, will help organizations navigate complex use cases and understand what type of specific governance, documentation, and assessments are needed. Otherwise, companies may be too risk-averse, passing on a worthwhile use case, or too aggressive, deploying models without careful consideration.
