What does artificial intelligence have in common with the price of eggs?
Say you’re trying to decide between 9 or 10 different varieties of eggs at the store. One catches your eye: “All natural.” Well, that’s nice, natural is good and they’re only 30 cents more — you buy those. Now, those chickens and the eggs they produce may or may not be more natural than the others — because there’s no official or even generally agreed-upon definition of natural. It’s a common ploy to make you pay 30 cents for nothing. The exact same thing is becoming a problem in tech — but with AI.
There is no official or generally agreed-upon definition of artificial intelligence — if you’re curious about why that is, I wrote a very woolly post called “WTF is AI” that you might enjoy. But this lack of consensus hasn’t stopped companies great and small from including AI as a revolutionary new feature in their smart TVs, smart plugs, smart headphones and other smart MacGuffins. (Smart, of course, only in the loosest sense: like most computers, they’re fundamentally dumb as rocks.)
Now, there are two problems here.
It’s probably not AI
The first problem is this: Because AI is so poorly defined, it’s really easy to say your device or service has it and back that up with some plausible-sounding mumbo jumbo about feeding a neural network a ton of data on TV shows or water use patterns.
“The term is complete bullshit,” said the CEO of a major robotics company that shall remain nameless, but certainly employs in its robots what most would agree could be called AI. It’s a marketing term used to create the perception of competence, because most people can’t conceive of an incompetent AI. Evil, perhaps (“I’m sorry, Dave. I’m afraid I can’t do that”), but not incompetent.
This recent flowering of AI into a buzzword fit to be crammed onto every bulleted list of features has to do at least partly with the conflation of neural networks with artificial intelligence. Without getting too into the weeds, the two aren’t interchangeable, but marketers treat them as if they are.
The neural networks we hear so much about these days are a novel way of processing large sets of data by teasing out patterns in that data through repeated, structured mathematical analysis. The method is inspired by the way the brain processes data, so in a way the term artificial intelligence is apropos — but in another, more important way, it’s very misleading.
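To make that “repeated, structured mathematical analysis” concrete, here’s a toy sketch (not any product’s actual model): a single artificial neuron nudged, over thousands of passes, until its weights capture a simple pattern in the data. Real networks stack millions of these units, but the core operation is just this kind of arithmetic.

```python
import math
import random

# Toy training set: the logical-OR pattern, as (inputs, target) pairs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # two weights
b = 0.0                                        # one bias

def predict(x):
    # Weighted sum passed through a sigmoid "activation" -- the
    # structured math at the heart of a neural network.
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# The "repeated" part: thousands of small corrections push the
# weights toward values that reproduce the pattern in the data.
for _ in range(5000):
    for x, target in data:
        error = predict(x) - target
        w[0] -= 0.5 * error * x[0]
        w[1] -= 0.5 * error * x[1]
        b -= 0.5 * error

print([round(predict(x)) for x, _ in data])  # pattern recovered: [0, 1, 1, 1]
```

Interesting and useful, sure — but notice there’s no understanding anywhere in there, just arithmetic converging on a pattern.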
AI is a phrase with its own meaning and connotations, and they don’t really match what neural networks actually do. We may not have defined AI well, but we do have a few ideas. And it’s safe to say that while these pieces of software are interesting, versatile and use human thought processes as inspiration in their creation, they’re not intelligent.
Yet any piece of software that, at any point in its development, employs a convolutional neural network, deep learning system or what have you, is being billed as “powered by AI” or some variation thereof.
Now, if even experts can’t say what AI is, what hope is there for consumers? It’s just another item on a list of features and likely as opaque as the rest to the person reading it. But they know AI is high-tech and being worked on by all the big companies, so the product with AI in it must be better. Just like the person choosing “natural” eggs over another brand — one that could just as easily have put that label on its own box, with as little justification.
And even if it were…
The second problem is that even if there were some standard for saying what AI is and isn’t, and we were to grant that these systems met it, these aren’t the kinds of problems that AI is good at solving.
One company, for instance, touted an AI-powered engine for recommending TV shows. Think about that. What insight could emerge from unleashing a deep learning system on such a limited set of data around such a subjective topic? It’s not a difficult problem to determine a recommendation for someone who likes CSI: Miami. They’ll like Person of Interest or something. These aren’t subtle, hidden patterns that only emerge after close scrutiny, or require hours of supercomputer time to figure out.
And in fact, as Jaron Lanier explained well in The Myth of AI, because the data originates from people — e.g. people who watch this also watch that — the artificial intelligence is completely dependent on human intelligence for all the decisions it makes. People already did the hard part — the development of taste, the selection of what shows they like and don’t like, judging the quality of the episodes, of the acting and direction — and all the computer is doing is searching through human intelligence and returning relevant results.
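To show how little machinery that searching actually requires, here’s a minimal sketch of the “people who watch this also watch that” counting (the show names and watch histories below are made up for illustration):

```python
from collections import Counter

# Hypothetical watch histories -- pure human taste, recorded as data.
histories = [
    {"CSI: Miami", "Person of Interest", "NCIS"},
    {"CSI: Miami", "Person of Interest"},
    {"Person of Interest", "NCIS"},
    {"The Office", "Parks and Recreation"},
]

def recommend(show, histories):
    # "People who watch this also watch that": tally which shows
    # co-occur with the given one and return the most common companion.
    counts = Counter()
    for history in histories:
        if show in history:
            counts.update(history - {show})
    return counts.most_common(1)[0][0]

print(recommend("CSI: Miami", histories))  # -> Person of Interest
```

Twenty lines of counting, no neural network in sight — all the actual intelligence lives in the viewing histories themselves.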
Similar claims are made on behalf of IoT devices like thermostats and now shower heads that monitor your use and recommend things or save energy when they know you’re not there. An AI for your home! It tells you when you’re low on milk! It identifies who’s at the door! These are similarly spurious: the data sets are sparse and simple, the outputs binary or highly limited. And just because a device isn’t quite as dumb as the one you’ve been using for 30 years, that doesn’t make it smart. On the contrary, these claims of intelligence are… artificial.
It’s a fiction cultivated by tech companies that AI meaningfully improves many of these things — in addition to the fiction that it’s AI in the first place. It’s even possible that relying on machine learning is detrimental to their purpose, since the methods by which these models arrive at their conclusions are often obscure.
This is a bit like another marketing trick often found on egg cartons. Ever seen one that promises that the chickens are raised on an all-vegetarian diet? So thoughtful! Problem: chickens aren’t vegetarians, they eat worms and bugs — have done for millions of years. And really, it’s more than likely that taking them off their native diet will negatively affect their health and the quality of the eggs. (Incidentally, what you want is “pasture-raised.”)
Maybe you’re thinking, okay Mr. Big AI Expert, if none of this counts as AI, what does? And why is it you aren’t so choosy about the term AI when it comes to writing clickbait headlines?
Well, this is all just my opinion, but when we’re talking about AI as a concept being researched or developed by big companies and universities, it’s okay to stretch the definition a bit. Because what we’re talking about is really a nascent class of software and there’s no sense being pedantic when the ideas fall under the umbrella most people would understand as AI. But when companies use that fundamental vagueness as a deceptive sales pitch, I feel I have to object. And so I have.
Misleading, exaggerated or outright fabricated feature lists are a hallowed tradition in tech, so this practice is nothing new. But it’s good to point out when a new weasel word enters the lexicon of trend-hunting marketers. Perhaps there will be a day when AI is actually something you’ll look for in a refrigerator, but that day is not today.