On bots, language and making technology disappear

There’s a new buzzword in computer design circles every year. This year, the buzzword is without question bots.

As with anything we build, we give bots names. It’s something most of us don’t even question. They come pre-personified and ready for us to start that human-computer relationship, just like HAL 9000 or Her: there’s Siri in our iPhones, Alexa in Amazon’s Echo and there’s even Facebook Messenger’s PSL (Pumpkin Spice Latte) Bot.

A name can be a way of expressing trust in an object — or expressing control over it. In design terms, a name is a kind of affordance — a handle we can hold onto.

Naming things is part of my job as the resident language expert on our product design team. When we began iterating on a bot within our messaging product, I was prepared to brainstorm hundreds of names: gendered, non-gendered, functional and so on.

But first, we did some testing with actual end users to understand their relationship with bots, language and names. We learned that giving a bot an identity isn’t always for the best. Calling a bot Siri does not necessarily have the same relationship-building effect as calling your car Bessie or Old Faithful.

Talking and typing are two different things

In a voice-activated bot, names are pretty functional: saying “Siri,” “Alexa” or “OK Google” is the conversational equivalent of opening Google and entering a search term. When you see a search bar, your brain leaps from idea (there’s something I want to find) to action. Collectively, we do this more than 40,000 times a second, so often that we don’t think of it as conversing with the system, though we are asking a question and expecting a response.

But names don’t trigger an action in text-based bots, or chatbots. Even Slackbot, the tool built into the popular work messaging platform Slack, doesn’t need you to type “Hey Slackbot” in order to retrieve a pre-programmed response.
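
To make the contrast concrete, here’s a minimal sketch in Python. It is purely illustrative, not how Siri, Alexa or Slackbot are actually built, and the wake word and keyword table are hypothetical: the voice-style assistant does nothing until it hears its name, while the text-style bot replies to any configured keyword, no name required.

```python
from typing import Optional

# Illustrative only: a hypothetical wake word and keyword table,
# not the real behaviour of any shipping assistant.
WAKE_WORD = "alexa"
AUTO_RESPONSES = {
    "wifi password": "Ask #it-helpdesk for the current wifi password.",
    "expense report": "Expense reports live in the finance folder.",
}

def voice_assistant(utterance: str) -> Optional[str]:
    """Voice model: the name itself is the trigger."""
    text = utterance.lower()
    if not text.startswith(WAKE_WORD):
        return None  # without the name, nothing happens
    query = text[len(WAKE_WORD):].lstrip(" ,")
    return f"Searching for: {query}"

def chat_bot(message: str) -> Optional[str]:
    """Text model: any configured keyword triggers a reply; no name needed."""
    text = message.lower()
    for keyword, reply in AUTO_RESPONSES.items():
        if keyword in text:
            return reply
    return None

print(voice_assistant("Alexa, what's the weather like?"))  # the name triggers the action
print(chat_bot("does anyone know the wifi password?"))     # a keyword alone is enough
```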

Speaking our searches out loud serves a function, but it also draws our attention to the interaction, which can have both good and bad effects. Voice is fundamentally more “humanizing” than text. A study released in August showed that we are more likely to attribute words to a human “creator” when we hear them spoken than when we read them.

But what is humanizing can also be irritating. We may find it far more exhausting, as humans, to say “OK Google” 75 times a day than to silently open a laptop and search.

From a design perspective, bots are aligned with the whole concept of messaging-as-a-platform — we could build a bot right into our own messenger using the same simple elements we’d already designed for human-to-human conversation.

So when we experimented with building a bot, we wanted to use those simple elements to communicate. We gave our test bot a name and let it introduce itself like a real person would: “Hi, I’m Bot, Intercom’s digital assistant.”

What we found was surprising. People hated this bot — found it off-putting and annoying. It was interrupting them, getting in the way of what they wanted (to talk to a real person), even though its interactions were very lightweight.

We tried different things, such as alternate voices, so that the bot was sometimes friendly and sometimes reserved and functional. But we didn’t see much change.

It was only when we removed the name, the first-person pronouns and the introduction that things started to improve. The name, more than any other factor, caused friction.

Who holds the handle?

We’ve been telling ourselves scary stories about robots for more than a century, stories in which we simultaneously pity and mistrust them. When we name the tools we use, we assert control over them; we do that because we want to be the ones having the interaction, doing the job.

The digital tools we make live in a completely different psychological landscape to the real world. We can’t get a handle on them, literally. There is no straight line from a tradesman’s hammer, which he can repair himself, to a chatbot designed and built by a design team somewhere in California (or in Dublin, in our case).

Unlike most writers in my company, I do my job best when my work is barely noticed. Control is incredibly important in designing digital tools: most of the language we see and experience in a product is about affording control and understanding to you, the person using the product, not to me, the writer. To be understood intuitively is the goal; the words on the screen are the handle of the hammer.

Names and identity lift the tools on the screen to a level above intuition. They make us see the tool in all its virtual glory and place it in an entirely different relationship to the person using it, one that person doesn’t always ask for or appreciate.

This friction might simply be a matter of novelty; over time we might become more comfortable with the virtual, more trusting of it (though this year’s headlines haven’t given us much to trust). But despite the hundreds of movies we’ve made and books we’ve written about robots, introducing personality into technology might not be the way we become more comfortable.

There’s another school of thought in design, one that holds that the best design is almost invisible. Siri and Alexa might seem like examples of this: you can’t really “see” them, and so they disappear into the background. But that’s not necessarily true.

As humans, we’re visual: we respond to what we see. But even more than that, we’re social: we respond to the things we can speak to. It’s why we name our possessions, and why we fear the pretend humans we’ve been imagining for so long.

The real measure of success for today’s designers is making technology disappear so that it becomes a true tool for humans. The true measure of success for a designer who deals in words is making tools quieter to use, so we can use them more intuitively.