Tyrant in the Code

Mankind has a complex relationship with the notion of Artificial Intelligence. Tinged with both fear and fascination, the timeline of AI development is punctuated by cultural and historical events that have brought with them new speculation and theories.

Mechanical men and artificial beings were a prevalent feature of Greek myth, including the golden robots of Hephaestus and Pygmalion’s Galatea; Mary Shelley’s Frankenstein introduced generations of readers to a terrifying idea of non-human intelligence; and, in more recent times, the dialogue has included the idea of computerized tech becoming a threat to the existence of our species.

These recent concerns culminated in the 2015 “Open Letter on Artificial Intelligence”, signed by over 150 people including Professor Stephen Hawking, and have been perpetuated by Elon Musk’s occasional ominous remarks.

In 2014, the billionaire entrepreneur wrote on Twitter that “We need to be super careful with AI”, and that it is “Potentially more dangerous than nukes”; less than a year later, Musk donated $10 million to the Future of Life Institute for research into how mankind can keep a handle on AI; and in June of this year, he expressed concern over tech giants Facebook and Google leaning closer toward intelligent robotics.

However, the concerns of Hawking, Musk and other influential technocrats and academics are not as far off in the future as you might think. In fact, AI is already deeply ingrained in our society in a far more subtle and sinister way.

The latest outburst in our ongoing relationship with AI occurred in the aftermath of Trump’s election. News outlets reported on the “Facebook bubble” – an echo chamber of personalized news feeds that shielded users from opposing views and exposed them to a stream of content that reinforced their existing beliefs. The criticism was that Facebook failed to allow for meaningful discourse between different political factions. And while the claim was denied by Mark Zuckerberg, there were similar complaints in the United Kingdom after the surprise Brexit referendum result.

At its most benign, this bubble corrupted the pollsters’ predictions; at its worst, it impacted the results of two of the most important political decisions of the century in the Western world.

Perhaps it’s time to take a closer look at the supposed bias of the technology we interface with on a daily basis, and to ask some serious questions about its implications, both now and in the future.

Machine Learning and the “Sea of Dudes”

The current problems faced by AI come down to the age-old, complex relationship between creator and creation. If we are creating machine learning systems that play a fundamental role in all aspects of society, then we run the risk that the systems’ creators will pass their own inherent and natural biases on to these machines.

A tech-based society has often been heralded as the future’s solution to prejudice and inequality. H.G. Wells imagined machine-run utopias where the human inhabitants were free to explore their passions and politically liberal pursuits; likewise, Edward Bellamy’s novel Looking Backward envisioned a socialist system built on the foundations of technological advances.

Even in more recent times, digital technologies have kept up a sheen of utopian promise and have been seen as a real and tangible key to unlocking universal social justice. Professor B.C. Mahapatra claimed that “Computerized education will reduce prejudice as no other system can” on the grounds that computers lack “prejudice, bias, or bigotry.”

However, the idea that technology might end prejudice is coming under increasing scrutiny and appearing more and more spurious.

Instead, questions have been raised in the public sphere about the extent of tech’s bias, particularly in the algorithms that run through social media and search engine sites. Even before the media and the public began questioning Facebook’s supposed “news bubble” in the aftermath of Brexit and the election, the US Senate demanded an official explanation from Zuckerberg about the platform’s perceived liberal bias in its trending topics. And again Facebook denied the accusation that it was manipulating content and somehow tailoring the news that its users were sharing.

Google has also come under fire for a seemingly biased gremlin in the machine. Recently, graphic designer Johana Burai’s research into image searches for “hands” showed that almost all the results Google yielded were of white hands, and that searches for “black hands” or “African hands” tended to show images in questionable contexts, e.g. a white hand reaching out to offer help to a black one, or hands working in the earth. In June, a tweet comparing the image search results for “three black teenagers” and “three white teenagers” went viral.

Both cases highlight the fact that our algorithms amplify our worst tendencies. The systems are trained on, and learn from, human behavior at its rawest and most unfiltered. Google itself isn’t racist, but the people it’s learning from surely have some biases.
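
To make the mechanism concrete, here is a minimal Python sketch of how a ranking system trained purely on historical engagement reproduces the skew in its data. The click log, names and numbers are invented for illustration; no real search engine is this simple, but the feedback loop works the same way in spirit.

```python
# A toy illustration (hypothetical data, not any real search engine's
# code) of how ranking by past engagement reproduces the skew in that
# engagement.
from collections import Counter

# Invented click log: past users overwhelmingly clicked images of white
# hands for the query "hands".
click_log = ["white_hand"] * 90 + ["black_hand"] * 10

def rank_results(log):
    """Order candidate images purely by historical click counts."""
    counts = Counter(log)
    return [image for image, _ in counts.most_common()]

print(rank_results(click_log))
# ['white_hand', 'black_hand'] -- the skew in the data becomes the
# ordering every future user sees, which attracts still more clicks
# to the top result: a feedback loop.
```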

Of course, the fact that technology is biased shouldn’t come as too much of a shock to any of us. More often than not, technological advances have favored their creator, unconsciously at least. Any left-handed person will testify to the problems they’ve encountered using scissors, ledgers and can-openers. As Professor H. Vanderleest wrote in his essay “The Built-in Bias of Technology”, “Technology is obviously biased towards at least one use – the use intended by the designer.”

The problem is that we’re not just talking about the bias of scissors, ledgers and can-openers; a quick glance at the demographics of the tech industry reveals a hugely disproportionate underrepresentation of women, and of Black and Latino people. The concern is that this homogeneous group of white men – the “sea of dudes”, as Microsoft researcher Margaret Mitchell calls it – will unconsciously program their machines with a narrow world-view.

Beyond the design bias introduced by a homogeneous group of creators, early products and systems are also biased by their first adopters. In many cases, those first adopters are socially connected to the team that creates the products.

Networks, especially, are heavily biased by the first cohorts to use the platform. They set the culture and drive the decisions of the product’s creators. The early adopters often become the most successful users because the product scaffolds and promotes them in order to advertise success stories to the world. This is especially true for social networks, where gaining a following becomes harder and more costly over time as the network saturates.
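
A simple rich-get-richer simulation illustrates why the earliest cohort ends up so dominant. This is a hypothetical model, not any platform’s actual mechanics: each newcomer follows an existing user with probability proportional to that user’s current follower count.

```python
# A rich-get-richer (preferential attachment) simulation with invented
# parameters -- a sketch of network dynamics, not real platform code.
import random

random.seed(42)
followers = [1]  # the first user joins with a baseline of one follower

for _ in range(1_999):  # 1,999 later arrivals, 2,000 users in total
    # Preferential attachment: follow an existing user with probability
    # proportional to that user's current follower count.
    target = random.choices(range(len(followers)), weights=followers)[0]
    followers[target] += 1
    followers.append(1)  # the newcomer starts from the same baseline

print("Top 5 follower counts:", sorted(followers, reverse=True)[:5])
print("Median follower count:", sorted(followers)[len(followers) // 2])
# The earliest joiners dominate, while the median user barely grows --
# and every new arrival makes the gap harder to close.
```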

The implications of these biases are potentially dire for certain groups of people. What if your ability to access the latest networks were essential to your position in society? That’s not so far from reality today. The more our online statuses, and the algorithms built on them, are reflected in our offline lives, the more power we place in a system that is largely out of our control and beyond our understanding.

Reimagining the Glass Ceiling Effect

Tech’s bias, and its implications, have the potential to go far deeper than the homogeneous “sea of dudes”. As society becomes more dependent on technology, we risk exacerbating existing biases, and even creating new areas of prejudice, by not actively working towards diversification or considering the morality of our product decisions. The root of this problem lies in a user’s ability to interface with the newest technologies in the first place.

This is a problem that certain minorities have been living with since computerized tech took such a prominent place in our everyday lives. The blind community, for instance, has often been unable to interface with the latest tech advances.

Until Apple added VoiceOver to the iPhone 3GS in 2009, smartphones were just expensive paperweights for the blind; those with sight problems are often stopped in their tracks by websites with early versions of CAPTCHA. The ever-increasing use of images on sites such as Buzzfeed and general social media means that there are still areas of the Internet that are simply off-limits.

Baby boomers and the elderly have also fallen victim to the quickening and often myopic march of tech. Because they often lack the education or cultural know-how of younger generations, many are left on the periphery of certain industries, and there are plenty of reported cases of elderly workers being passed over for positions or even forced from their jobs.

As recently as 2007, Zuckerberg made the contentious statement that “Young people are just smarter”, and voiced his preference for hiring people below the age of 30. It seems that Silicon Valley hasn’t moved on much, and ageism is still running rampant.

These examples are warnings of how easy it already is to fall behind in society as a result of one’s relationship with tech. The natural extension of this: we have created a system in which those who are able to keep up with the latest technological advances hold an immediate advantage.

A study of the five stages of technology adoption shows that the first groups, the early adopters, don’t just influence the bias of networks; they also have a significant impact on the development of all tech advances. According to the study, the “innovators” and “early adopters” of tech make up just 16% of the consumer market, and “are typically younger in age, have a higher social status, have more financial liquidity, advanced education, and are more socially forward than late adopters.”

This has serious ramifications for the other 84% of tech customers, who are neither “innovators” nor “early adopters”. Money and time become invaluable resources, directly correlated with someone’s social position, their ability to generate wealth and how easily they can climb the career ladder. Suddenly, the inherent bias of tech has taken a turn towards socio-economics, keeping us in the unjust social prison that many hoped it would help us break out of.

The increasing influence of tech has become crucial for the millennial generation. Audience has become a form of wealth and, for many, it’s core to how we live. It is reputation, and it affects how we’re treated every day.

More and more, our online statuses have a direct bearing on our offline lives. In contemporary networking, a good LinkedIn profile is essential; long gone are the days of inebriated profile pictures, as they may bring us into disrepute with a current or prospective employer. We’re no longer judged simply on face-to-face meetings, but on what a Google search spews out.

So, Who Will This Affect?

It’s clear that those who go to less expensive schools and don’t have the means to keep up with the latest software and gadgets will be at a disadvantage. In tech-marketing terms, these groups are known as the “late majority” or “laggards”. They tend to be older, lower in social status and, often, less wealthy.

The issue is that these biases exist beyond our control. That’s not to say the systems adopt a conscious bias. As I’ve already mentioned, much of the problem lies in the disproportionate demographics of the tech industry, but it is also a result of how tech relates to marketing and metrics.

Using machine learning and algorithms to sell to target audiences is about simplifying the problem, without considering the wider question of who might be left behind. Explicit marketing bias, by its very nature, includes some groups and excludes others. While none of these systems is necessarily malevolent or ill-intentioned, the reality is that they are silencing huge swathes of society and, in the process, selling keys to the doors of the future that many people simply can’t afford.
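
As a contrived Python sketch of that dynamic (the criteria, names and figures below are invented, not taken from any real campaign), consider a targeting filter that selects a “high-value” segment. Nothing in it is malicious, yet everyone outside the filter simply never sees the product:

```python
# A contrived targeting filter: the criteria and people are invented.
# Nothing here is malicious, but everyone outside the filter simply
# never sees the ad -- exclusion as a side effect.

users = [
    {"name": "Ana",  "age": 24, "device": "latest"},
    {"name": "Bo",   "age": 67, "device": "old"},
    {"name": "Cruz", "age": 31, "device": "latest"},
]

def target_audience(people):
    """Select the 'high-value' segment a campaign might ask for."""
    return [u for u in people if u["age"] < 35 and u["device"] == "latest"]

for user in target_audience(users):
    print(f"Show ad to {user['name']}")
# Bo never appears -- excluded not by malice, but by the shape of
# the filter.
```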

Fixing the Problem: A Question of What We Accept

The major problems in our relationship with AI are threefold: the tech is not created by a proportionate representation of society; the first adopters who are helped to succeed on the platforms are skewed by their relationship with the creators; and the moral implications of tech advances aren’t taken into consideration. Of course, there’s no simple solution to any of these problems.

Diversity among the creators will drive diversity among the first users, but that’s not enough.

Tech giants are increasingly working towards assembling better, more diverse data sets to combat the second issue. Or, at least, they’re acknowledging that there’s a problem. Facebook and Google have consistently spoken about working towards a more diverse team of employees; and, after the Tay chatbot displayed racist, homophobic and sexist personality traits, Microsoft has insisted it’s making a determined effort “to do a better job of classifying gender and other diversity signals in training data sets.”

However, how can society address the lack of foresight and ensure that tech companies consider the moral implications of the products and AI they’re creating? Machine learning systems aren’t programmed to take morality into consideration. They’re functional. They are more concerned with marketing and metrics than with creating a fairer world. AI is created in mankind’s image, always working towards an answer, and it often gets there by operating as efficiently as possible. To break down a problem you must simplify it, and adding variables for fairly representing all demographics, and for making sure no one is left behind, does not make that optimization any simpler.
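
A schematic example makes the point. In the Python sketch below (the function names, weights and data are all hypothetical), the default objective rewards matching observed clicks and nothing else; fairness only enters the optimization if someone writes it in as an explicit term:

```python
# A schematic sketch with hypothetical names and numbers: the default
# objective optimizes engagement alone, and nothing in it even
# mentions demographics.

def default_loss(predictions, clicks):
    """Mean squared error against observed clicks: pure metrics."""
    return sum((p - c) ** 2 for p, c in zip(predictions, clicks)) / len(clicks)

def fairness_aware_loss(predictions, clicks, groups, weight=0.5):
    """The same objective, plus an explicit penalty on the gap between
    the average prediction for two demographic groups."""
    def group_avg(g):
        scores = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(scores) / len(scores)
    gap = abs(group_avg("a") - group_avg("b"))
    return default_loss(predictions, clicks) + weight * gap

preds  = [0.9, 0.8, 0.2, 0.1]
clicks = [1.0, 1.0, 0.0, 0.0]
groups = ["a", "a", "b", "b"]
print(default_loss(preds, clicks))                 # rewards matching the data
print(fairness_aware_loss(preds, clicks, groups))  # also prices in the disparity
```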

Let’s not forget that humans simplify problems as well, in order to get through the tens of thousands of decisions we make every day. The negative side is that we end up stereotyping, generalizing and creating false connections between cause and effect.

One solution could be to offset machine bias in the same way we have done with humans: input a mix of different known biases into the system. By creating AIs with different sets of biases, it’s possible that their prejudices would level the playing field by converging on a middle-ground consensus.
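
Here is a toy Python sketch of that committee idea; the biases and numbers are invented purely for illustration. Each model skews its score of the same candidate in a different direction, and averaging the committee pulls the decision toward a middle ground no single model would give:

```python
# A toy committee of deliberately biased scorers (the skews are
# invented). Averaging opposing biases pulls the verdict toward a
# middle ground no single model would reach on its own.

def make_model(bias):
    """Return a scorer with a fixed systematic skew."""
    return lambda true_quality: true_quality + bias

committee = [make_model(b) for b in (-0.3, -0.1, 0.0, 0.1, 0.3)]

def consensus_score(true_quality):
    """Average the committee's individually skewed scores."""
    scores = [model(true_quality) for model in committee]
    return sum(scores) / len(scores)

print(round(consensus_score(0.7), 3))  # 0.7 -- the opposing skews cancel out
```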

Of course, the notion of applying theoretical restraints to the future of AI and machine learning is laced with difficulties. Not only would it be nigh on impossible to enforce such a regime on the myriad tech companies that employ these algorithmic systems, but it would also be an uneasy, dictatorial restriction on the freedom we have over technological advances.

Perhaps the only viable solution lies in the hands of AI’s creators.

It’s up to them to apply moral standards to what they create, and to do their best to ensure that the latest products aren’t available only to the more affluent sections of society. And perhaps it’s also up to us to make sure that they do so. By keeping a vigilant eye on the tech giants, we can ensure they use AI to close the gaps of diversity instead of widening them. Otherwise, we will carry the status quo of bias into the tech era, amplifying old prejudices and creating new ones along the way.