Safe artificial intelligence requires cultural intelligence

Knowledge, to paraphrase British journalist Miles Kington, is knowing a tomato is a fruit; wisdom is knowing there’s a norm against putting it in a fruit salad.

Any kind of artificial intelligence clearly needs to possess great knowledge. But if we are going to deploy AI agents widely in society at large — on our highways, in our nursing homes and schools, in our businesses and governments — we will need machines to be wise as well as smart.

Researchers who focus on the problem known as AI safety or AI alignment define narrow artificial intelligence as a machine that can meet or beat human performance at a specific cognitive task. Today’s self-driving cars and facial-recognition algorithms are examples of this narrow type of AI.

But some researchers are working to develop artificial general intelligence (AGI) — machines that can outperform humans at any cognitive task. We don’t know yet when or even if AGI will be achieved, but it’s clear that the research path is leading to ever more powerful and autonomous AI systems performing more and more tasks in our economies and societies.

Building machines that can perform any cognitive task means figuring out how to build AI that can learn not only about things like the biology of tomatoes but also about our highly variable and ever-changing systems of norms, such as the norms governing what we do with tomatoes.

Humans live lives populated by a multitude of norms, from how we eat, dress and speak to how we share information, treat one another and pursue our goals.

For AI to be truly powerful, machines will need to comprehend that norms can vary tremendously from group to group, which can make any given norm seem arbitrary or unnecessary, and yet that following them can be critical within a particular community.

Tomatoes in a fruit salad may seem odd to the Brits for whom Kington was writing, but they are perfectly fine if you are cooking for Koreans or a member of the culinary avant-garde. And while the stakes may seem minor, serving tomatoes the wrong way to a particular guest can cause confusion, disgust, even anger. That’s not a recipe for healthy future relationships.

Norms concern not only apparently minor matters, like which foods to combine, but also things that communities consider tremendously consequential: who can marry whom, how children are to be treated, who is entitled to hold power, how businesses make and price their goods and services, and when and how criticism can be shared publicly.

Successful and safe AI that achieves our goals within the limits of socially accepted norms requires an understanding of not only how our physical systems behave, but also how human normative systems behave. Norms are not just fixed features of the environment, like the biology of a plant. They are dynamic and responsive structures that we make and remake on a daily basis, as we decide whether or when to let someone know that “this” is the way “we” do things around here.

These normative systems are what we rely on to ensure that people behave the way we want them to in our communities, workplaces and social environments. Only with confidence about how everyone around us is likely to behave are we all willing to trust and live and invest with one another.

Ensuring that powerful AIs behave the way we want them to will not be so terribly different. Just as we need to raise our children to be competent participants in our systems of norms, we will need to train our machines to be similarly competent. It is not enough to be extremely knowledgeable about the facts of the universe; extreme competence also requires wisdom enough to know that there may be a rule here, in this group but not in that group. And that ignoring that rule may not just annoy the group; it may lead them to fear or reject the machine in their midst.

Ultimately, then, the success of Life 3.0 depends on our ability to understand Life 1.0. And that is where we may face the greatest challenge in AI research.