4 questions to ask when evaluating AI prototypes for bias

It’s true there has been progress around data protection in the U.S. thanks to the passage of laws such as the California Consumer Privacy Act (CCPA) and the release of nonbinding guidance such as the Blueprint for an AI Bill of Rights. Yet there are currently no standard regulations that dictate how technology companies should mitigate AI bias and discrimination.

As a result, many companies are falling behind in building ethical, privacy-first tools. Nearly 80% of data scientists in the U.S. are male and 66% are white, reflecting a lack of diversity and demographic representation among the people building automated decision-making tools, which often leads to skewed results.

Significant improvements in design review processes are needed to ensure technology companies take all people into account when creating and modifying their products. Otherwise, organizations risk losing customers to competitors, tarnishing their reputation and facing serious lawsuits. According to IBM, about 85% of IT professionals believe consumers choose companies that are transparent about how their AI algorithms are created, managed and used. We can expect this number to grow as more users take a stand against harmful and biased technology.

So, what do companies need to keep in mind when analyzing their prototypes? Here are four questions development teams should ask themselves:

Have we ruled out all types of bias in our prototype?

Technology has the ability to revolutionize society as we know it, but it will ultimately fail if it doesn’t benefit everyone in the same way.

To build effective, bias-free technology, AI teams should develop a list of questions to ask during the review process that can help them identify potential issues in their models.

There are many methodologies AI teams can use to assess their models, but before they do that, it’s critical to evaluate the end goal and identify any groups that may be disproportionately affected by the outcomes the AI produces.

For example, AI teams should take into consideration that facial recognition technologies may inadvertently discriminate against people of color — something that occurs far too often in AI algorithms. Research conducted by the American Civil Liberties Union in 2018 showed that Amazon’s face recognition incorrectly matched 28 members of the U.S. Congress with mugshots. A staggering 40% of the incorrect matches were people of color, even though they make up only 20% of Congress.

By asking challenging questions, AI teams can find new ways to improve their models and prevent these scenarios from occurring. For instance, a close examination can help them determine whether they need more data or whether they need a third party, such as a privacy expert, to review their product.
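One way to make that examination concrete is to compare error rates across the groups a product may affect. Below is a minimal sketch of that idea in Python, assuming a labeled evaluation set with hypothetical "group", "label" and "prediction" columns and an illustrative 0.8 disparity threshold; it is a starting point for the review conversation, not a substitute for a full fairness assessment.

```python
# Illustrative only: compare false-match rates across demographic groups in a
# labeled evaluation set. The column names ("group", "label", "prediction") and
# the 0.8 disparity threshold are assumptions made for this sketch.
import pandas as pd


def false_match_rate_by_group(df: pd.DataFrame) -> pd.Series:
    """False-match (false positive) rate per group: the fraction of true
    non-matches that the model predicted as matches."""
    non_matches = df[df["label"] == 0]
    return non_matches.groupby("group")["prediction"].mean()


def flag_disparity(rates: pd.Series, threshold: float = 0.8) -> bool:
    """Flag the prototype for further review if the best-off group's error rate
    is less than `threshold` times the worst-off group's (a four-fifths-style check)."""
    if rates.max() == 0:
        return False  # no false matches anywhere, nothing to compare
    return (rates.min() / rates.max()) < threshold


# Synthetic evaluation results, purely for illustration
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "label":      [0,   0,   1,   0,   0,   0,   1],
    "prediction": [0,   0,   1,   1,   0,   1,   1],
})

rates = false_match_rate_by_group(results)
print(rates)                  # false-match rate per group
print(flag_disparity(rates))  # True -> gather more data or bring in an outside reviewer
```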

Plot4AI is a great resource for those looking to start.

Have we enlisted a designated privacy professional or champion?

Due to the nature of their job, privacy professionals have traditionally been viewed as barriers to innovation, especially when they need to review every product, document and procedure. Rather than viewing the privacy department as an obstacle, organizations should see it as a critical enabler of innovation.

Enterprises must make it a priority to hire privacy experts and incorporate them into the design review process so that they can ensure their products work for everyone, including underserved populations, in a way that’s safe, compliant with regulations and free of bias.

While the process for integrating privacy professionals will vary with the nature and scope of the organization, there are some key ways to ensure the privacy team has a seat at the table. Companies should start small by establishing a simple set of procedures to identify any new processing activities involving personal information, as well as changes to existing ones.

The key to success with these procedures is to socialize the process with executives, as well as product managers and engineers, and ensure they are aligned with the organization’s definition of personal information. For example, while many organizations generally accept IP addresses and mobile device identifiers as personal information, outdated models and standards may categorize these as “anonymous.” Enterprises must be clear about what types of information qualify as personal information.
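To illustrate what aligning on that definition could look like in practice, here is a minimal sketch in Python of a shared field registry that product, engineering and privacy teams might all reference; the field names, categories and `flag_personal_information` helper are hypothetical and not drawn from any particular standard.

```python
# A minimal sketch of a shared registry recording which data fields the
# organization treats as personal information. Field names and categories
# are hypothetical, chosen only for illustration.
PERSONAL_INFORMATION_FIELDS = {
    "email_address":    "direct identifier",
    "ip_address":       "online identifier",   # personal information, not "anonymous"
    "mobile_device_id": "online identifier",   # likewise personal information
    "postal_code":      "indirect identifier",
}


def flag_personal_information(fields: list[str]) -> list[str]:
    """Return the fields in a new or changed processing activity that the
    registry classifies as personal information, so they can be routed to
    the privacy team (or privacy champion) for review."""
    return [field for field in fields if field in PERSONAL_INFORMATION_FIELDS]


# Example: fields proposed for a new analytics feature
proposed = ["ip_address", "page_views", "mobile_device_id"]
print(flag_personal_information(proposed))  # ['ip_address', 'mobile_device_id']
```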

Furthermore, organizations may believe that personal information used in their products and services poses the greatest risk and should be the priority for reviews, but they must take into account that other departments, such as human resources and marketing, also process large amounts of personal information.

If an organization doesn’t have the bandwidth to hire a privacy professional for every department, they should consider designating a privacy champion or advocate who can spot issues and escalate them to the privacy team if needed.

Is our people and culture department involved?

Privacy teams shouldn’t be the only ones responsible for privacy within an organization. Every employee who has access to personal information or has an impact on the processing of personal information is responsible.

Expanding recruitment efforts to include candidates from different demographic groups and regions can bring diverse voices and perspectives to the table. Hiring diverse employees shouldn’t stop at entry- and mid-level roles, either. A diverse leadership team and board of directors are essential to represent those who cannot make it into the room.

Companywide training programs on ethics, privacy and AI can further support an inclusive culture while raising awareness of the importance of diversity, equity and inclusion (DEI) efforts. Only 32% of organizations require some form of DEI training for employees, underscoring how much room for improvement remains.

Does our prototype align with the AI Bill of Rights Blueprint?

The Biden administration issued the Blueprint for an AI Bill of Rights in October 2022, which outlines key principles along with detailed steps and recommendations for developing responsible AI and minimizing discrimination in algorithms.

The guidelines include five protections:

  1. Safe and effective systems.
  2. Algorithmic discrimination protections.
  3. Data privacy.
  4. Notice and explanation.
  5. Human alternatives, consideration and fallback.

While the AI Bill of Rights doesn’t enforce any metrics or impose specific regulations around AI, organizations should look to it as a baseline for their own development practices. The framework can serve as a strategic resource for companies looking to learn more about ethical AI, mitigating bias and giving consumers control over their data.

The road to privacy-first AI

Technology can revolutionize society as we know it, but it will fail if it doesn’t benefit everyone equally. As AI teams bring new products to life or modify their current tools, it’s critical that they take the necessary steps and ask themselves the right questions to ensure they have ruled out all types of bias.

Building ethical, privacy-first tools will always be a work in progress, but the above considerations can help companies take steps in the right direction.