In pursuit of empathetic machines

The near-universal reaction to Atlas, the newly upgraded next-generation humanoid robot from Boston Dynamics (a company owned by Alphabet), was empathy. In the demo video, the robot kept working on its tasks while being punched, pushed and teased, and viewers responded with comments like “stop bullying” and “say no to bullying.”

It is evident that we are on the verge of creating machines that can look and act like us physically, and that can take over repetitious manual labor, to the delight of monotony-hating, fun-loving humans. However, giving these machines the human traits of emotion and empathy is a missing piece of the puzzle that continues to baffle AI researchers.

Empathy, the ability to put yourself in someone else’s shoes, is key to human behavior, collaboration and collective knowledge, and it is part of the fabric that defined and shaped human evolution. Setting aside the figurative incongruity of pairing empathy with machines, humans have made great progress in building functional pieces of the human brain: machines that handle computation, memory storage and retrieval, probabilistic reasoning, pattern recognition, natural language processing, classification, learning and more.

We now have tools that can mimic parts of human intelligence, minus emotions and empathy. Suddenly there is renewed passion and vigor in AI, and excited scientists are predicting we may be at the beginning of the singularity, which could potentially lead to a human-created Cambrian explosion.

Human intelligence is a combination of logic, cognition, emotion and empathy. With DeepMind’s recent big win in Go, a game considered more complex than chess, it is all but settled that machines can meet or beat humans on logic. Earlier, IBM’s Watson outperformed its human opponents on Jeopardy, a knowledge contest, and proved that machines can be made superior to humans at natural language processing and information retrieval.

Can any of these “deep” machines win a talent contest like American Idol? Not yet; we are a long way from a synthetic Mozart or a synthetic Picasso producing the artistic, creative and intellectual output associated with our right brain. The deep machines are not deep enough to understand the human mind, and the singularity will remain a daydream unless we figure this out.

The mind is a product of the brain, and we still don’t know how our right brain, the side key to our emotions, empathy and creativity, works. The latest research in neuroscience suggests that emotions are the result of both body and brain: our thinking and emotions are tightly intertwined, and both are outputs of our bodily state. The same situation, met by a body flooded with dopamine or depleted of it, can produce a different emotional outcome and behavioral response, depending on many contributing variables.

Turing understood this quite well, and in the later part of his life he grew philosophical about the physical limitations of his inventions. Marvin Minsky questioned many of the algorithms AI researchers use to approximate strong AI, such as deep neural networks, which he felt are too opaque and too easily fooled.

Will our left brain ever be able to understand how the right brain works? Applied AI modeled on our left brain, the seat of logic and reasoning, can already stand in for much of what that hemisphere does. But strong AI, or full AI, which combines logic, cognition, emotion and empathy, remains largely unexplored; affective computing, the field that tries to give machines the emotional piece, is only beginning to address it. There are several computational algorithms that can do what our left brain does, but there is a huge gap when it comes to modeling our right brain.

According to Dr. Rana el Kaliouby, co-founder and Chief Strategy and Science Officer at Affectiva, an MIT spin-off that specializes in emotional computing, the company’s emotion-aware technology platform Affdex has already captured more than 50 billion emotional data points using a multi-modal approach based on facial expressions and head pose.

The technology can capture fleeting expressions, such as a squint or the twitch of a lip; combined with cultural and contextual information, it can consistently detect the expressed emotion with a high degree of accuracy. It leverages deep learning and a huge repository of data to map images and videos of faces to the emotions they express.
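To make that mapping concrete, below is a minimal sketch of the general technique: a small convolutional network that classifies cropped face images into a handful of emotion labels. It is an illustrative toy, not Affectiva’s architecture; the 48x48 grayscale input, the seven-label set and the random placeholder data are assumptions.

```python
# Illustrative sketch only -- not Affectiva's model. Assumes 48x48 grayscale
# face crops (as in public datasets such as FER2013) and seven emotion labels.
import numpy as np
from tensorflow.keras import layers, models

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def build_emotion_classifier(input_shape=(48, 48, 1), num_classes=len(EMOTIONS)):
    """A small CNN that maps a face crop to a probability over emotion labels."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_emotion_classifier()
    # Placeholder data; a real system would train on millions of labeled faces.
    x = np.random.rand(8, 48, 48, 1).astype("float32")
    y = np.eye(len(EMOTIONS))[np.random.randint(0, len(EMOTIONS), size=8)]
    model.fit(x, y, epochs=1, verbose=0)
    probs = model.predict(x[:1], verbose=0)[0]
    print(dict(zip(EMOTIONS, probs.round(3))))
```

The architecture itself is simple; what separates a toy like this from a production system is the training data, millions of labeled faces spanning cultures, lighting conditions and head poses.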

The more data it has, the better it gets. The initial use cases targeted online digital content in media and advertising, to gauge audience response. But Rana is more excited about emerging applications where developers make their own applications emotion-aware. Affectiva already has pilots in gaming, where the game dynamics change based on the player’s emotional response, and in movies with endings that change based on the viewer’s emotional profile.
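As a hypothetical illustration of that gaming use case (the emotion scores, thresholds and stub function below are invented for the sketch, not drawn from Affectiva’s pilots), emotion-aware game dynamics can be as simple as branching on a per-frame emotion estimate:

```python
# Hypothetical sketch of emotion-aware game logic; the emotion scores would
# come from a facial-coding SDK, faked here with a stub function.
import random

def read_player_emotion():
    """Stand-in for an SDK call returning per-emotion scores in [0, 1]."""
    return {"joy": random.random(), "frustration": random.random()}

def adjust_difficulty(current_level, emotion):
    """Ease off when the player looks frustrated, ramp up when they look bored."""
    if emotion["frustration"] > 0.7:
        return max(1, current_level - 1)   # player is struggling: back off
    if emotion["joy"] < 0.2:
        return current_level + 1           # player looks bored: raise the stakes
    return current_level

level = 3
for _ in range(5):
    level = adjust_difficulty(level, read_player_emotion())
    print("difficulty level:", level)
```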

She sees a future where social robots become an important part of our everyday lives, maybe even companions to humans. That day is not too far off, she feels, offering an example in which a mom’s social robot communicates with a child’s robot: “Mom is a little frustrated today. Don’t bother her about the weekend party and I will let you know when to ask.”

Lots of key players are joining the fray. Apple has reportedly acquired Emotient, a San Diego-based company that uses artificial intelligence to detect emotion from facial expressions. Microsoft is working on its own platform, Project Oxford, which offers an Emotion API to developers. Google developed its own emotion-sensing technology for the now-defunct Google Glass project, and its Cloud Vision API can help developers detect emotional attributes in faces.
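As an example of what these developer-facing services look like, here is a minimal sketch of calling Google’s Cloud Vision API for face detection, which returns likelihood ratings for joy, sorrow, anger and surprise. The API key and image path are placeholders, and error handling is omitted.

```python
# Minimal sketch: ask the Cloud Vision API to annotate a face and print the
# emotion likelihoods it returns. YOUR_API_KEY and face.jpg are placeholders.
import base64
import requests

ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY"

with open("face.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "FACE_DETECTION", "maxResults": 1}],
    }]
}

response = requests.post(ENDPOINT, json=payload).json()
for face in response["responses"][0].get("faceAnnotations", []):
    print("joy:", face["joyLikelihood"])
    print("sorrow:", face["sorrowLikelihood"])
    print("anger:", face["angerLikelihood"])
    print("surprise:", face["surpriseLikelihood"])
```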

There are several big questions. Can we build strong AI or full AI? Why do we need it? How would human-equivalent machines help us? How would they affect human psychology? Will humans prefer interacting with machines, and start loving machines instead of other humans? It could lead to an empathy paradox: the more we rely on technology to tell us how others feel, the less time we spend thinking about their feelings and needs ourselves.

Finally, the debate rages over the moral dilemma of creating superintelligence. We may not have the answers in our lifetime. Maybe we already have them in fictional dramas like Ex Machina, or will find them in a future Robo II. Anybody’s guess is as good as anybody else’s on this nascent subject.

Which brings up another thought: what beliefs and moral values will empathetic machines form on their own, beyond those humans program into them? Soon, anything repetitive within a frame of reference will be taken over by weak AI, and strong AI will become the next frontier in technology, with the potential to alter the course of human evolution.

A glimpse into our future is hard to miss with all the signs around us. It is both exciting and scary.