The secret language of chatbots

Give a journalist a buzzword and you’ve fed him for a day.

Give a journalist a topic to investigate and explore and you feed the entire industry for years. Even more so when the topic is a sci-fi trope like artificial intelligence.

These days, we are bombarded by warnings about imminent technological disasters. The scenario of robots and AI putting everyone out of work is one of them. Despite the rising skepticism over IBM’s heavily marketed Watson and companies like Foxconn reaching a measly 2.5 percent of their original ambitious automation goals, very few automation impact studies, let alone the pieces that cite them in the mainstream press, seem to consider that the technical and organizational hurdles may not be so easy to overcome. (Even though similar waves of hype came and went in the 1950s and the 1980s, this time is different, right?)

And if massive social upheaval is not enough to scare the public, there is always the robot uprising. The latest episode in this saga is an experiment at Facebook that resulted in two chatbots that supposedly “invented a secret language,” which (according to the reporters with the most developed imaginations) made Facebook “put a cork in” the experiment.

Game of (broken) telephones

On June 14, 2017, an article on TechCrunch described the experiment: Facebook researchers played with the interesting idea of treating a negotiation like a game, using reinforcement learning to make the bots learn both the rules of the game and the limited negotiation language. The original link no longer works, but the paper has since been uploaded to arXiv. Unfortunately, the original screenshots seem to be gone.

An article published in The Atlantic on June 15, 2017, picked up on an obscure line in the report saying that the bot-to-bot conversation “led to divergence from human language.” The piece claimed that the bots weren’t just spouting nonsense; they “developed their own language” for the purpose of negotiating.

This created an avalanche. The sensationalist details snowballed, and by the end of July, The Independent’s summary of the story was: “Facebook’s artificial intelligence robots shut down after they start talking to each other in their own language… Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood.”

I can practically see white-coat-clad researchers in a clean room checking whether the poison gas controls are disconnected and debating with each other about intelligence, the soul and what it means to be human. (Maybe it’s not just my imagination: many articles did feature sci-fi imagery from Terminator and Westworld, despite the fact that Person of Interest is much more suitable here.)

Several sources, from Gizmodo to the always-skeptical commenters on Y Combinator’s Hacker News, picked the story apart. The story earned its own page on Snopes and prompted frustrated Facebook posts from project lead Mike Lewis.

What really happened

There is no point in copying the resources linked above. They explain in exhaustive detail that no robot uprising took place and that no one “stopped” the experiment. There are, however, some interesting conclusions to be drawn and questions to be asked.

A day before the hype began, New Scientist explained the approach from a more technical point of view:

One bot was taught to mimic the way people negotiated in English, but it turned out to be a weak negotiator, and too willing to agree to unfavorable terms. A second was tasked with maximizing its score. This bot was a much better negotiator but ended up using a nonsensical language impossible for humans to understand.

In other words, neither of the bots could accomplish the compound task of learning the language and negotiating properly.

Don’t get me wrong, the task is extremely complex; moreover, I am not sure how, or whether, it is even possible to learn a limited language from scratch using only 5,000 sentences.
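
To see why optimizing for the score alone drifts away from English, here is a deliberately tiny caricature of the two training regimes. The actual models in the paper were sequence-to-sequence networks; this sketch uses a unigram “policy,” an invented six-word vocabulary and a made-up scoring rule, purely to illustrate the failure mode:

```python
import math
import random

random.seed(0)

VOCAB = ["i", "want", "the", "ball", "you", "me"]
MSG_LEN = 6
LR = 0.1


def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]


def sample_message(logits):
    return random.choices(VOCAB, weights=softmax(logits), k=MSG_LEN)


def score(tokens):
    # Stand-in "negotiation score": it only counts how often the bot
    # claims things for itself ("me") and is blind to readability.
    return sum(1 for t in tokens if t == "me")


# Bot A: supervised mimicry of a tiny human corpus.
corpus = [["i", "want", "the", "ball"], ["you", "want", "the", "ball"]]
sup = [0.0] * len(VOCAB)
for _ in range(2000):
    for tok in random.choice(corpus):
        probs = softmax(sup)
        for i, v in enumerate(VOCAB):
            sup[i] += LR * ((1.0 if v == tok else 0.0) - probs[i])

# Bot B: REINFORCE on the score alone, with nothing anchoring it to English.
rl = [0.0] * len(VOCAB)
for _ in range(2000):
    msg = sample_message(rl)
    r = score(msg)
    probs = softmax(rl)
    for tok in msg:
        for i, v in enumerate(VOCAB):
            rl[i] += LR * r * ((1.0 if v == tok else 0.0) - probs[i]) / MSG_LEN

print("mimic bot:     ", " ".join(sample_message(sup)))
print("score-only bot:", " ".join(sample_message(rl)))
```

The mimic bot ends up sounding like its corpus; the score-only bot, with no pressure to stay readable, typically degenerates into a string of “me”s.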

But the main point that was missed by the hype-hungry press, and avoided by the researchers for obvious reasons, was that this was not a success with an unexpected side effect; it was another attempt whose main upside was that “we learned something new today” — more “work in progress.” The original purpose was to design a negotiation aid, but it would take a special kind of user to be happy with advice like “balls have zero to me to me to me to me to me to me to me to me to.”

According to the press, the researchers claimed that the language was not random nonsense, but had its own grammar (in all fairness, I could not find that part in the arXiv paper). Some commentators assumed that the repetition was meant to encode numeric values (e.g., if a word is repeated five times, it means five items). To me (pun not intended), it looks like some of the phrases overflowed a buffer and the result was simply truncated, so there is no way to verify the numbers assumption. Can we even be sure that two different bots trained on slightly different data sets would use the same “invented language”? Not at all, so a hypothetical robot uprising would stall at the stage where a bunch of bots slapped together from different data sets are unable to understand each other.
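
For what it’s worth, the commentators’ “repetition encodes quantity” reading is easy to state as code. Here is a toy decoder (entirely speculative, with the phrase boundary chosen by hand) that counts how many times a phrase repeats back-to-back:

```python
def longest_phrase_run(tokens, phrase):
    """Count the longest back-to-back repetition of `phrase` in `tokens`."""
    n, best = len(phrase), 0
    for i in range(len(tokens)):
        run = 0
        while tuple(tokens[i + run * n : i + (run + 1) * n]) == phrase:
            run += 1
        best = max(best, run)
    return best


utterance = "balls have zero to me to me to me to me to".split()
print(longest_phrase_run(utterance, ("to", "me")))  # -> 4, i.e. "four items"?
```

Whether 4 actually means four of anything is exactly what nobody can verify from a truncated transcript.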

One serious unresolved issue with machine learning systems is debugging. Google’s legendary Peter Norvig highlighted the need for “a better set of tools,” saying that current machine learning is a “black box” exhibiting traits that are every coder’s worst nightmare:

Any bug will be replicated throughout the system. Changing one thing changes everything. There are techniques for understanding that there is an error, and there are methods for retraining machine-learning systems, but there isn’t a way to fix just one isolated problem.

In the case of the bargaining bots, we can’t tell how much signal degradation was caused by the inability to parse the language properly. In fact, do we even know for certain what the bot language meant? We may assume the grammar was consistent, but unless the researchers managed to decipher it, communicate back in it, or kept some kind of intermediate representation recording the intent of the bots’ communication, it could just as well be the 21st century’s AI version of Pierre Brassau, the ape artist.

The real language of chatbots

The curious case of the Facebook negotiation experiment aside, can the bots actually benefit from a language of their own?

The answer is yes. And we are not talking about a natural language.

The world where your phone can display an interactive map, detect that you are walking, tag a date or a location in your SMS or call a cab was made possible, among other things, by web services: the ability of different software components to talk to each other across machines and computer networks. In less than 20 years since the first billing web services were conceived, the ability of components to communicate (even if only over rigid, predefined channels) has changed the fundamentals of software and hardware engineering.
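
The mechanism itself is mundane: structured data over HTTP. A minimal sketch, where the endpoint and the response fields are invented for illustration:

```python
import json
import urllib.request

# Hypothetical geocoding service; only the JSON-over-HTTP mechanism is real.
req = urllib.request.Request(
    "https://api.example.com/v1/geocode?address=1+Hacker+Way",
    headers={"Accept": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    place = json.load(resp)

print(place.get("lat"), place.get("lon"))
```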

Today’s bots are still in their infancy, but at least some of them are meant to handle a large number of domains, with input that is unlimited in scope. Today, these bots know how to delegate tasks to predefined web services; some attempts are being made to build dynamic cloud catalogues of “how-tos” redirecting to the correct web service.
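
A sketch of that delegation pattern, with every intent name and endpoint invented for the example:

```python
# The bot maps a recognized intent to a predefined service endpoint.
SERVICE_CATALOGUE = {
    "weather.lookup": "https://api.example.com/weather",
    "taxi.book": "https://api.example.com/rides",
}


def delegate(intent: str, params: dict) -> str:
    endpoint = SERVICE_CATALOGUE.get(intent)
    if endpoint is None:
        raise KeyError(f"no service registered for intent {intent!r}")
    # A real bot would issue the HTTP call here; we just describe it.
    return f"POST {endpoint} with {params}"


print(delegate("taxi.book", {"pickup": "Menlo Park", "time": "18:00"}))
```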

The next step, however, could be a bot shopping for a better mobile data plan, or investigating what went wrong with a service, “talking” to its fellow bots to find out whether the service is available in another area and when it stopped, using a large standard stack of primitives.

It does not have to involve natural language processing. It’s a matter of creating a standard language for queries and interactions between the bots: evolving web services for an era when the software is smarter.
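
What might such a query language look like? Here is one possible shape, sketched as JSON; the verbs, field names and addressing scheme are all hypothetical. The point is that the message is structured and machine-checkable rather than free-form English:

```python
import json

# A bot asking its peers whether a service is up in another area.
query = {
    "verb": "query",
    "topic": "service.availability",
    "params": {"service": "mobile-data", "area": "94025"},
    "reply_to": "bot://billing-assistant/42",
}

# A peer's structured answer; no parsing ambiguity on either side.
reply = {
    "verb": "reply",
    "in_reply_to": "bot://billing-assistant/42",
    "result": {"available": False, "last_seen_up": "2017-07-28T09:00:00Z"},
}

print(json.dumps(query, indent=2))
```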

Interestingly enough, the “robot whisper” idea is not new. As early as 2001, a so-called Battle Management Language was proposed to control “human troops, simulated troops, and future robotic forces” (yes, there is a good reason to post that Terminator image again).

Igor Mordatch’s research at OpenAI (mentioned in some articles about the Facebook experiment) focuses on attempts to get bots to develop their own language in a limited universe. As mentioned above, such a language would depend on the training set (or “universe,” as they call it) and would not be useful as a means of communication with other bots (think of an isolated tribe in the Amazon that is unable to communicate with the rest of the world).

The software bots of today and tomorrow will require a more complex and flexible approach, and they will still need a standard and consistent knowledge representation. And, in the case of the negotiation bot, an external knowledge representation would eliminate the ambiguities of natural language, allowing it to master the art of the deal in a structured, mathematically neat universe.
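
To make that concrete, here is a hedged sketch of what an unambiguous negotiation move might look like; the schema and the valuations are made up for illustration. “I’ll give you two balls for the hat” leaves nothing to misparse, and the bot can score it with simple arithmetic:

```python
# A structured negotiation move: explicit items and counts, no free text.
offer = {"verb": "propose", "give": {"ball": 2}, "take": {"hat": 1}}

# The bot's private (hypothetical) valuation of each item type.
my_values = {"ball": 1, "hat": 3, "book": 2}


def utility(offer, values):
    gained = sum(values[item] * n for item, n in offer["take"].items())
    lost = sum(values[item] * n for item, n in offer["give"].items())
    return gained - lost


print(utility(offer, my_values))  # -> 1: a net win under these valuations
```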