The hidden risks of the bot explosion

Gartner predicts that by 2018 a full 30 percent of our interactions with technology will be through “conversations” with smart machines.

Major players such as Microsoft, Facebook and Google are now focused on empowering developers to build smart bots, and IBM is looking for ways to integrate its Watson cognitive computing platform deeper into businesses.

We are being inundated with conversational tools, and we’re seeing a considerable, if not entirely surprising, shift toward this new mode of human-machine interaction. Increasingly, we are depending on these systems for both routine daily tasks and sophisticated business interactions.

However, we have already seen these new tools stumble. We all remember Tay — Microsoft’s chatbot that was shut down within a day of launch after generating a series of offensive tweets.

As we increasingly rely on intelligent systems and welcome them into our daily routines, it’s imperative that we can trust them implicitly. Inevitably, as we push the boundaries of AI, there will be more mistakes and more stumbles. The real question is whether the dozens of emerging players in the bot market will go to great enough lengths to earn their users’ trust.

In the age of “early adopters” and “influencers,” that might sound quaint. However, our progress as an industry is contingent on building bots that earn digital trust and, in turn, their place in our everyday lives.

Did you get my email?

Remember when you first started using email? It may be hard to fathom now, but hitting send and then asking “Hey, did you get my email?” became an almost knee-jerk reaction. We didn’t yet trust email to deliver consistently (even though it did!). Over time, as more and more emails were successfully delivered, we stopped questioning.

Sure, we still ask people if they got our email, but that’s more a nod to human oversight than to technological failure. Largely, we assume our message was safely delivered. Over time, email has earned our digital trust.

Let’s take a more modern example that is still in progress — self-driving cars. It’s a bold concept, and one that shifts the paradigm of control from human to machine. To establish this technology, self-driving car makers must first embolden the masses to climb in and hand over the wheel, as it were.

These companies are doing this by being extraordinarily transparent about their R&D processes and allowing potential future users to watch as they continue to develop and test a safer product. Google, Tesla and Uber are all engaging in phased rollouts and gradual testing in concert with municipalities for a reason. They are transparently proving that they are addressing any and all issues that arise. And they’re establishing a foundation for trust.

Defining digital trust in the age of AI

So, what does digital trust look like in the world of bots?

Building trust between humans and machines is not very different from building trust between people. Humans are deeply flawed. We make mistakes. Yet we’re able to build trust by aligning expectations to reality (UX design), learning from our mistakes (machine learning) and listening to one another when we don’t understand (natural language processing).

Likewise, bots don’t have to be perfect to earn our trust. However, they do need to reach a clear understanding with users — one where users know the limitations of their bots, and bots responsibly handle their shortcomings. In other words, bots need to be reliable — within reason.

In the technology world, users will return to a reliable piece of software, resulting in high engagement rates. Consider email, which I mentioned earlier. After several decades of mainstream use, email has yet to be knocked off its perch as the dominant communication technology. No email service has 100 percent uptime, but email is reliable enough to meet our most essential needs and, consequently, to earn our trust.

The path to digital trust

If reliability is the goal, the starting point for any immature technology is transparency.

Trust always requires transparency — with humans and with machines. But transparency is unforgiving: being selectively transparent does more damage than good, so if that’s the plan, don’t bother trying. Conversely, complete transparency in both failure and success can breed trust faster than virtually any other approach.

Microsoft responsibly published a public mea culpa after the Tay-tastrophe. The company acknowledged an oversight, explained how it happened and promised to address it moving forward. We see this as a prime opportunity to build trust — mistakes are expected, but so is a standard of transparency.

Second, accountability matters. If we hide behind the bugs and glitches in early-stage bots, users have no real or implied contract to fall back on. Netflix is famous for preemptively issuing refunds for poor streaming service, even when the issue had nothing to do with Netflix’s technology. The company held itself accountable for the customer experience and, in turn, earned our trust. The buck needs to stop with the bot maker, not the bot.

Third, we need to set honest expectations for bots. Bots can’t be marketed like the next iPhone, promising the world in the palm of your hand. At least not yet. The pace of technological change is faster than ever, but AI is in its infancy compared to browsers, smartphones and laptops. If we write checks we can’t cash, we run the risk of turning the entire space into the second coming of Clippy.

Accuracy rates for NLP and for actions executed on a user’s behalf would ideally be shared publicly, but should at least be written into standard SLAs with enterprise buyers. The industry has already done this with speech recognition, which created the original foundation of trust for conversational interfaces.
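To make that concrete, here is a minimal sketch (in Python) of how a bot maker might compute those two numbers from interaction logs. The field names and sample data are hypothetical, chosen purely for illustration; they aren’t any vendor’s actual schema or reporting pipeline.

from dataclasses import dataclass

@dataclass
class Interaction:
    predicted_intent: str   # what the bot's NLP layer understood
    actual_intent: str      # ground truth from human review
    action_succeeded: bool  # whether the action taken on the user's behalf completed

def sla_report(logs):
    """Summarize NLP accuracy and action success rate over a reporting window."""
    total = len(logs)
    if total == 0:
        return {"nlp_accuracy": 0.0, "action_success_rate": 0.0}
    nlp_correct = sum(i.predicted_intent == i.actual_intent for i in logs)
    actions_ok = sum(i.action_succeeded for i in logs)
    return {"nlp_accuracy": nlp_correct / total,
            "action_success_rate": actions_ok / total}

# Hypothetical sample: two of three intents understood, two of three actions completed.
logs = [
    Interaction("book_meeting", "book_meeting", True),
    Interaction("book_meeting", "cancel_meeting", False),
    Interaction("check_weather", "check_weather", True),
]
report = sla_report(logs)
print(f"NLP accuracy: {report['nlp_accuracy']:.0%}")                # 67%
print(f"Action success rate: {report['action_success_rate']:.0%}")  # 67%

Numbers like these, published on a regular cadence, give enterprise buyers something concrete to hold a bot maker to.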

As the market progresses, we’ll need to build beyond that framework. We need a deeper trust that includes users, bots and bot makers, and all three parties have to be all-in. As it stands today, establishing standards for transparency, accountability and expectations would ensure the industry makes it to its “2.0” moment. If we’re lucky, we’ll get there with enough trust built up to carry us through the next wave of innovation.

Conclusion

Leon Wieseltier once bemoaned that the fetishization of fast-paced tech has inhibited our understanding of the importance of human experience and our respect for quality in innovation. As he says, we must recognize the “lag between an innovation and the apprehension of its consequences… Otherwise a quantitative expansion will result in a qualitative contraction.”

I am confident the bot revolution is a step in the right direction for the technology community and society as a whole. But we can’t hide behind machines in these new human-machine relationships.

People will only trust their bots if they trust the makers of their bots. If they feel we’re acting responsibly behind the curtain, and that we respect the importance of their experience, then we’re well on our way to an exciting future.