Machine Learning And Human Bias: An Uneasy Pair

“We’re watching you.” This was the warning that the Chicago Police Department gave to more than 400 people on its “Heat List.” The list, an attempt to identify the people most likely to commit violent crime in the city, was created with a predictive algorithm that focused on factors including, per the Chicago Tribune, “his or her acquaintances and their arrest histories – and whether any of those associates have been shot in the past.”

Algorithms like this obviously raise some uncomfortable questions. Who is on the list, and why? Does the algorithm take race, gender, education and other personal factors into account? Given that America’s prison population is overwhelmingly made up of Black and Latino men, would an algorithm based on relationships disproportionately target young men of color?

There are many reasons why such algorithms are of interest, but the rewards are inseparable from the risks. Humans are biased, and the biases we encode into machines are then scaled and automated. This is not inherently bad (or good), but it raises the question: how do we operate in a world increasingly consumed with “personal analytics” that can predict race, religion, gender, age, sexual orientation, health status and much more?

I’d wager that most readers feel a little uneasy about how the Chicago PD Heat List was implemented – even if they agree that the intention behind the algorithm was good. To use machine learning and public data responsibly, we need to have an uncomfortable discussion about what we teach machines and how we use the output.

What We Teach Machines

Most people have an intuitive understanding of categories such as race, religion and gender, yet when asked to define them precisely, they quickly find themselves hard-pressed to do so. Human beings cannot objectively agree on what race a given person is. As Sen and Wasow (2014) argue, race is a social construct based on a mixture of both mutable and immutable traits, including skin color, religion, location and diet.

As a result, the definition of who falls into which racial category varies over time (e.g. Italians were once considered to be black in the American South), and a given individual may identify with one race at one time and with another race a decade later. This inability to precisely define a concept such as race represents a risk for personal analytics.

Any program designed to predict, manipulate and display racial categories must operationalize them both for internal processing and for human consumption. Machine learning is one of the most effective frameworks for doing so because machine learning programs learn from human-provided examples rather than explicit rules and heuristics.
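To make this concrete, here is a minimal sketch in Python (using scikit-learn, with entirely made-up features, data and category names) of how such a program is typically trained. The model is never given a definition of its categories; it only sees the labels that human annotators assigned, so the annotators’ assumptions are exactly what it learns to reproduce.

```python
# Minimal sketch: a classifier learns its categories solely from human-assigned labels.
# All feature values, data and category names below are hypothetical placeholders.
from sklearn.linear_model import LogisticRegression

# Each row describes a person with numeric features (e.g. profile or survey signals);
# each label is whatever category a human annotator assigned to that person.
X_train = [
    [0.2, 1.0, 3.0],
    [0.9, 0.0, 1.0],
    [0.4, 1.0, 2.0],
    [0.8, 0.0, 1.5],
]
y_train = ["category_a", "category_b", "category_a", "category_b"]  # annotator judgments

# No definition of the categories is ever supplied. The model simply generalizes
# the annotators' pattern of judgments, including any biases, to new people.
model = LogisticRegression()
model.fit(X_train, y_train)

print(model.predict([[0.3, 1.0, 2.5]]))  # echoes the annotators' notion of the category
```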

So let’s say a programmer builds an algorithm that makes perfect racial predictions based on the categories known to an average American (what is called a “common-knowledge test”). Even so, many of its outputs will look strange from other perspectives: many Brazilians who are considered white in their home country would be classified as black in the United States.

Biracial Americans and individuals from places such as India, Turkey and Israel often challenge racial categorization, at least as Americans understand it. The algorithm will thus necessarily operationalize the biases of its creators, and these biases will conflict with those of others.

The result is a machine learning program that treats race as its creators do — not necessarily as the individuals see themselves or as the users of the program conceive of race. This may be relatively unproblematic in use cases like marketing and social science research, but with the Chicago PD Heat List, ‘No Fly Lists’ and other public safety applications, biases and misperceptions could have severe ramifications at scale.

How We Use The Data

On an individual scale, any algorithm for personal analytics will make errors. People are multi-faceted and complex, and we rarely fit neatly into clearly delineated groups. Nonetheless, when individual-level predictions are aggregated, they can improve our understanding of groups of people at scale, help us identify disparities, and inform decisions about how to change our society for the better.

So if knocking on the doors of potential criminals seems wrong, do we have alternatives?

With the Chicago PD’s algorithm, one option is to generate a ‘Heat Map’ based on the locations of high-risk populations and activities. Los Angeles, Atlanta, Santa Cruz and many other police jurisdictions already do something similar using a predictive policing tool called PredPol. It allows police departments to increase their presence in crime-prone areas, at the right times, without using any personal data: it looks strictly at the type, place and time of crimes.
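As a rough illustration of what predicting from type, place and time alone could look like, the sketch below (plain Python with made-up incident records; it is not PredPol’s actual model) bins past incidents into coarse location grid cells and hours of the day and ranks the busiest combinations, without touching any data about individuals.

```python
# Rough sketch of a place-and-time "heat map": count past incidents per crime type,
# grid cell and hour of day, then rank the hottest combinations. The incident
# records are made up; only type, location and time are used, never personal data.
from collections import Counter

# Hypothetical records of (crime_type, latitude, longitude, hour_of_day).
incidents = [
    ("burglary", 41.881, -87.623, 22),
    ("burglary", 41.882, -87.624, 22),
    ("theft",    41.900, -87.650, 14),
    ("burglary", 41.881, -87.622, 21),
]

def cell(lat, lon):
    """Snap coordinates to a coarse grid cell roughly 0.01 degrees on a side."""
    return (round(lat, 2), round(lon, 2))

# Count incidents per (type, grid cell, hour) bucket.
heat = Counter((ctype, cell(lat, lon), hour) for ctype, lat, lon, hour in incidents)

# The highest-count buckets suggest where and when extra patrols might matter most.
for (ctype, grid, hour), count in heat.most_common(3):
    print(f"{count}x {ctype} near {grid} around {hour}:00")
```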

But is profiling by location another form of discrimination? Would police inevitably stop and ticket more people in heat map areas? If I can only afford to live in an economically depressed area, will I be stopped and questioned by police more often than individuals living in a wealthy area? Could a targeted, predictable police presence drive crime into locations where police are unprepared, and thus expand the geography of crime in a city?

Perhaps, instead, there is a net good. With police strategically located and working with communities, there is an opportunity to reduce crime and create greater opportunity for residents. An algorithm also has the potential to discriminate less than human analysts do. PredPol reports double-digit crime reductions in cities that implement the software; the Chicago PD has not yet released any data on the Heat List’s effectiveness.

The Chicago PD and PredPol models are important reminders that personal analytics aren’t the only option. Before we operationalize identity – and certainly before we target individuals and knock on doors – we have to consider the ethics of our approach, not just the elegance of the solution.

Taboo, But Necessary

Talking about bias is uncomfortable, but we can’t afford to ignore this conversation in the machine learning space. To avoid scaling stereotypes or infringing on personal rights, we have to confront bias in every machine learning algorithm that aims to identify and categorize people.

Transparency in the inputs to such algorithms and how their outputs are used is likely to be an important component of such efforts. Ethical considerations like these have recently been recognized as important problems by the academic community: new courses are being created and meetings like FAT-ML are providing venues for papers and discussions on the topic.

It’s easy to imagine how the Chicago PD Heat List could be used in a responsible way. It’s also easy to imagine worst-case scenarios: What if Senator Joe McCarthy had had access to personal analytics during the anti-communist witch hunts of the early 1950s? Today, what if countries with anti-gay and anti-transgender laws used this technology to identify and harm LGBT individuals?

These are troubling scenarios, but not sufficient reasons to bury this technology. There is a huge opportunity to help rather than harm people. Using machine learning, scholars and policymakers alike can ask important questions and use the results to inform decisions that have significant impact at the individual or societal scale.

Like so many technologies, machine learning itself is value-neutral, but the final applications will reflect the problems, preferences and worldviews of their creators.