Sponsored Content

AI bias regulation is good for business

By Ted Kwartler, VP of Trusted AI at DataRobot

Artificial intelligence (AI) has limitless potential to change society for the better on a global scale. We're seeing data solutions combat the climate crisis by protecting rainforests and tracking forest fires. In healthcare, AI solutions have enabled better, faster clinical trials, more personalized healthcare plans, and data-driven forecasting that detects life-threatening diseases earlier and more accurately. But as AI innovation unfolds at an unstoppable pace, we must pause and examine what happens when organizations and businesses arrive at the wrong data-driven solutions, despite their best intentions.

All too often, and largely unintentionally, AI and machine learning algorithms have led to unacceptably biased outcomes. We've seen AI bias play a role in rejecting mortgage applications based on race, in underestimating high-risk healthcare needs for people of color, and in deprioritizing resumes based on gender. Each instance of AI bias is complex and deserves exploration of, and attention to, the myriad issues at play within its context.

With the State of AI Bias Report, a recent survey of business leaders conducted by DataRobot, our goal is to bring this conversation to the forefront: to acknowledge when and how AI bias surfaces, highlight key concerns business leaders have about AI bias, recognize the related disconnects at the organizational level, and identify the solutions we should look toward as we all strive to do better. Beyond the ethical and moral dilemmas that surface alongside AI bias, all of which deserve careful examination, it's also critical to understand that AI bias is, simply put, bad for business.

DataRobot’s report revealed that organizations’ algorithms have inadvertently contributed to discrimination on the basis of gender (34%), age (32%), race (29%), sexual orientation (19%), and religion (18%). These biases have also negatively impacted more than 1 in 3 organizations surveyed: of those organizations, 62% lost revenue as a result, 61% lost customers, 43% lost employees, and 35% incurred legal fees due to a lawsuit or other legal action.

Such bias and discrimination are unacceptable. And business leaders agree: implementing preventative measures and allocating resources to eradicate AI bias is now the norm. Yet many business leaders and organizations struggle to do so. According to this research, the core challenge in eliminating bias is understanding why algorithms arrived at certain decisions in the first place. If data-driven decisions aren’t explainable, it is impossible to determine whether implicit bias played into the algorithm’s decision-making.
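
To make the explainability point concrete, here is a minimal sketch of one common technique, permutation importance, using scikit-learn: shuffle each feature in turn and measure how much the model's accuracy drops. The features the model leans on most heavily are where a bias review should start. The synthetic data and model below are illustrative assumptions, not a depiction of any particular product.

    # A minimal sketch of permutation importance. The synthetic data and
    # model are illustrative assumptions, not specific vendor tooling.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Train a simple classifier on synthetic data.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature and measure the accuracy drop; features whose
    # shuffling hurts most are the ones the model relies on, and the first
    # place to look for proxies of protected attributes.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {score:.3f}")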

Many organizations are highly concerned about AI bias and are working to put guardrails in place to mitigate it. Unfortunately, these guardrails are often not effective enough: 77% of the organizations DataRobot surveyed had an AI bias or algorithm test in place before they discovered bias. And despite pouring more resources into mitigation than ever before (84% of organizations surveyed plan to invest more in AI bias prevention in the next 12 months), AI bias continues to harm both individuals and businesses every day; about a third of organizations have inadvertently contributed to bias in some form despite their best efforts.

A call for clarity and thoughtful regulation

The question of whether AI regulation would be harmful or helpful is a divisive one: while 81% of respondents want AI regulation, 45% worry that increased regulation will raise costs and make AI harder to adopt. On the other hand, 32% worry that a lack of government regulation will harm protected classes of people. After working alongside hundreds of data scientists, business leaders, and compliance officers, I believe we need government regulation to protect organizations from themselves. Clear, universal guidelines are crucial for driving real change, and, done correctly, they can help accelerate the use of AI across businesses without raising costs.

Thoughtful legislation will clear up the ambiguity organizations currently face. For instance, many organizations today, large and small, have deployed algorithms for advertising. Without regulatory direction, it’s hard for businesses to know whether a marketing model exhibits unacceptable bias.

As an example, a data scientist could build a model identifying households with members suffering from diabetes. Using the algorithm, could an organization justify running a promotion for healthcare screenings among suspected diabetes patients? On one hand, healthcare screenings improve the quality of life for these prospects. On the other hand, some races have higher rates of diabetes than others, a pattern this fictitious model would pick up. As a result, the promotion would reach prospective diabetes patients unevenly by race without intending to. Because the use case is ambiguous, though likely well-meaning, organizations must weigh the risk of promoting health screenings where race acts as a proxy against the improved quality of life for these patients, all within the added context of a profit-seeking business.
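
To illustrate what reviewing such a model's outcomes might look like in practice, here is a minimal sketch of a selection-rate check on hypothetical scored prospects. The data, column names, and the four-fifths threshold (a rule of thumb borrowed from employment-selection guidance) are illustrative assumptions.

    # A minimal sketch of a selection-rate check for the hypothetical
    # screening promotion. Data and column names are illustrative.
    import pandas as pd

    # Hypothetical scored prospects: 1 = model recommends the promotion.
    prospects = pd.DataFrame({
        "race":     ["A", "A", "A", "A", "B", "B", "B", "B"],
        "selected": [1,    0,   1,   0,   1,   1,   1,   0],
    })

    # Selection rate per group: the share of each group the model targets.
    rates = prospects.groupby("race")["selected"].mean()
    print(rates)

    # Four-fifths rule of thumb: flag the model if any group's selection
    # rate falls below 80% of the highest group's rate.
    impact_ratio = rates.min() / rates.max()
    print(f"Disparate impact ratio: {impact_ratio:.2f}")
    if impact_ratio < 0.8:
        print("Potential adverse impact: review features that proxy race.")

In this toy data, group A's selection rate is two-thirds of group B's, so the check flags the model for a closer review of any features acting as racial proxies.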

Now, consider a similar algorithm promoting insurance quotes, where the model uses income to find the best prospects. Again, insurance is a benefit, but income can be skewed by gender, so an insurance mailing can end up targeting more men than women. This is why it’s so critical that AI is explainable: if we can understand the factors that go into such decision-making, and have processes in place to review model outcomes, we are far less likely to overlook an algorithm that is skewed (in this case, by gender).
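
A simple outcome review of the kind just described might look like the following sketch, which checks whether a hypothetical income cutoff skews an insurance mailing by gender. The data and cutoff are assumptions for illustration.

    # A minimal sketch of the proxy check described above: does an income
    # cutoff skew an insurance mailing by gender? Data are illustrative.
    import pandas as pd

    prospects = pd.DataFrame({
        "gender": ["M", "M", "M", "F", "F", "F"],
        "income": [85_000, 92_000, 70_000, 61_000, 58_000, 74_000],
    })

    # Suppose the model mails anyone above an income cutoff.
    INCOME_CUTOFF = 72_000
    prospects["targeted"] = prospects["income"] > INCOME_CUTOFF

    # A large gap in targeting rates by gender suggests income is acting
    # as a gender proxy and the campaign skews toward men.
    print(prospects.groupby("gender")["targeted"].mean())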

Today, both of these use cases are acceptable, though each raises AI bias considerations. Concise government regulations, paired with explainable AI, will help organizations navigate complex use cases and understand what specific governance, documentation, and assessments are needed. Otherwise, companies may be too risk-averse for a given use case, or too aggressive, deploying models without careful consideration.