4 questions to ask when evaluating AI prototypes for bias


Veronica Torres

Contributor

Veronica Torres is the worldwide privacy and regulatory counsel for Jumio, where she provides strategic legal counsel regarding business processes, applications and technologies to ensure compliance with privacy laws.

The U.S. has made progress on data protection thanks to the passage of laws such as the California Consumer Privacy Act (CCPA) and nonbinding documents such as the Blueprint for an AI Bill of Rights. Yet there are still no standard regulations dictating how technology companies should mitigate AI bias and discrimination.

As a result, many companies are falling behind in building ethical, privacy-first tools. Nearly 80% of data scientists in the U.S. are male and 66% are white, reflecting a lack of diversity and demographic representation in the teams building automated decision-making tools, which often leads to skewed results.

Significant improvements in design review processes are needed to ensure technology companies take all people into account when creating and modifying their products. Otherwise, organizations risk losing customers to competitors, tarnishing their reputation and facing serious lawsuits. According to IBM, about 85% of IT professionals believe consumers choose companies that are transparent about how their AI algorithms are created, managed and used. We can expect that number to rise as more users take a stand against harmful and biased technology.

So, what do companies need to keep in mind when analyzing their prototypes? Here are four questions development teams should ask themselves:

Have we ruled out all types of bias in our prototype?

To build effective, bias-free technology, AI teams should develop a list of questions to ask during the review process that can help them identify potential issues in their models.

There are many methodologies AI teams can use to assess their models, but before applying any of them, it’s critical to evaluate the end goal and whether any groups may be disproportionately affected by the outcomes of the AI’s use.

For example, AI teams should take into consideration that facial recognition technologies may inadvertently discriminate against people of color — something that occurs far too often in AI algorithms. Research conducted by the American Civil Liberties Union in 2018 showed that Amazon’s facial recognition incorrectly matched 28 members of the U.S. Congress with mugshots. A staggering 40% of those incorrect matches were people of color, even though they make up only 20% of Congress.
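Teams that want to put numbers on this kind of disparity can start by comparing error rates across demographic groups. The sketch below is a minimal illustration in Python, assuming hypothetical evaluation data and column names rather than any particular fairness toolkit: it computes the false-match rate per group and the ratio between the worst- and best-performing groups.

```python
# Minimal illustrative sketch (hypothetical column names, not a specific
# fairness toolkit): compare a matching model's false-match rate across
# demographic groups and surface large gaps for review.
import pandas as pd

def false_match_rate_by_group(results: pd.DataFrame) -> pd.Series:
    """Share of true non-matches the model wrongly flagged as matches, per group."""
    non_matches = results[~results["is_true_match"]]
    return non_matches.groupby("group")["predicted_match"].mean()

def disparity_ratio(rates: pd.Series) -> float:
    """Worst group's error rate divided by the best group's; values well
    above 1.0 mean the model's mistakes fall disproportionately on one group."""
    return float(rates.max() / rates.min()) if rates.min() > 0 else float("inf")

# Hypothetical evaluation results for two demographic groups
results = pd.DataFrame({
    "group":           ["A", "A", "A", "A", "B", "B", "B", "B"],
    "is_true_match":   [False, False, False, True, False, False, False, True],
    "predicted_match": [True, False, False, True, True, True, False, True],
})
rates = false_match_rate_by_group(results)
print(rates)                   # false-match rate per group
print(disparity_ratio(rates))  # 2.0 here: group B is hit twice as often
```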

By asking challenging questions, AI teams can find new ways to improve their models and strive to prevent these scenarios from occurring. For instance, a close examination can help them determine whether they need to look at more data or if they will need a third party, such as a privacy expert, to review their product.

Plot4AI is a great resource for teams looking to get started.

Have we enlisted a designated privacy professional or champion?

Due to the nature of their job, privacy professionals have been traditionally viewed as barriers to innovation, especially when they need to review every product, document and procedure. Rather than viewing a privacy department as an obstacle, organizations should instead see it as a critical enabler for innovation.

Enterprises must make it a priority to hire privacy experts and incorporate them into the design review process so that they can ensure their products work for everyone, including underserved populations, in a way that’s safe, compliant with regulations and free of bias.

While the process for integrating privacy professionals will vary with the nature and scope of the organization, there are some key ways to ensure the privacy team has a seat at the table. Companies should start small by establishing a simple set of procedures to identify any new processing activities involving personal information, as well as changes to existing ones.

The key to success with these procedures is to socialize the process with executives, product managers and engineers, and to ensure everyone is aligned on the organization’s definition of personal information. For example, while many organizations now accept IP addresses and mobile device identifiers as personal information, outdated models and standards may categorize these as “anonymous.” Enterprises must be clear about which types of information qualify as personal information.
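One lightweight way to keep teams aligned on that definition is to encode it in a shared artifact that reviews can reference. The snippet below is a hedged, illustrative sketch with made-up field names; the point is simply that identifiers like IP addresses and mobile device IDs are treated as personal information by default rather than as “anonymous.”

```python
# Illustrative sketch only: a shared, codified definition of personal
# information that product, HR and marketing reviews can all reference.
# Field names and categories are hypothetical.
PERSONAL_INFORMATION_FIELDS = {
    "full_name",
    "email_address",
    "home_address",
    "ip_address",        # often miscategorized as "anonymous"
    "mobile_device_id",  # likewise treated as personal information by modern privacy laws
}

def fields_requiring_privacy_review(schema_fields: list[str]) -> list[str]:
    """Return the fields in a dataset that meet the organization's
    definition of personal information and so trigger a privacy review."""
    return [field for field in schema_fields if field in PERSONAL_INFORMATION_FIELDS]

# Hypothetical marketing dataset
print(fields_requiring_privacy_review(
    ["campaign_id", "ip_address", "mobile_device_id", "click_count"]
))
# ['ip_address', 'mobile_device_id']
```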

Furthermore, organizations may believe that personal information used in their products and services poses the greatest risk and should be the priority for reviews, but they must take into account that other departments, such as human resources and marketing, also process large amounts of personal information.

If an organization doesn’t have the bandwidth to hire a privacy professional for every department, they should consider designating a privacy champion or advocate who can spot issues and escalate them to the privacy team if needed.

Is our people and culture department involved?

Privacy teams shouldn’t be the only ones responsible for privacy within an organization. Every employee who has access to personal information or has an impact on the processing of personal information is responsible.

Expanding recruitment efforts to include candidates from different demographic groups and various regions can bring diverse voices and perspectives to the table. Hiring diverse employees shouldn’t stop at entry-and-mid-level roles, either. A diverse leadership team and board of directors are absolutely essential to serve as representatives for those who cannot make it into the room.

Companywide training programs on ethics, privacy and AI can further support an inclusive culture while raising awareness of the importance of diversity, equity and inclusion (DEI) efforts. Only 32% of organizations require some form of DEI training for employees, underscoring how much room for improvement remains.

Does our prototype align with the AI Bill of Rights Blueprint?

The Biden administration issued the Blueprint for an AI Bill of Rights in October 2022, which outlines five key principles, along with detailed steps and recommendations, for developing responsible AI and minimizing discrimination in algorithms.

The guidelines include five protections:

  1. Safe and effective systems.
  2. Algorithmic discrimination protections.
  3. Data privacy.
  4. Notice and explanation.
  5. Human alternatives, consideration and fallback.

While the AI Bill of Rights doesn’t enforce any metrics or impose specific regulations on AI, organizations should look to it as a baseline for their own development practices. The framework can serve as a strategic resource for companies looking to learn more about ethical AI, mitigating bias and giving consumers control over their data.

The road to privacy-first AI

Technology has the ability to revolutionize society as we know it, but it will ultimately fail if it doesn’t benefit everyone in the same way. As AI teams bring new products to life or modify their current tools, it’s critical that they apply the necessary steps and ask themselves the right questions to ensure they have ruled out all types of bias.

Building ethical, privacy-first tools will always be a work in progress, but the above considerations can help companies take steps in the right direction.
