When that ‘AI company’ isn’t really an AI company

Artificial intelligence is one of the most important fields in technology right now, which makes it ripe for buzzword-savvy startups to exploit for attention. But while machine learning and related technologies are now frequently employed, it’s far less common for them to be central to a company’s strategy and IP.

It’s important to note that this sort of posturing doesn’t necessarily mean a company is bad — it may simply have an overzealous communications department or PR firm. Just treat the following as warning signs: if you hear these terms, dig a little deeper to find out exactly what the company does.

“Powered by AI”

There are innumerable variations on this particular line, all of them red flags that the company is trying to paint itself with the AI brush rather than differentiate itself by other means.

“Our machine-learning powered ___,” “our proprietary AI,” “leverages machine learning…” all basically mean the same thing: AI is involved somewhere along the line.

Apps that purport to connect users with the right people or resources based on AI recommendations (“our unique AI-powered matching engine…”) are also common offenders.

But machine learning algorithms have been deeply embedded in computing for many years. They can be simple or complex, tried and true or novel, and used for highly visible or completely unknown purposes. There are off-the-shelf algorithms developers can buy to help sort images, parse noisy data and perform many other tasks. Recommendation engines are a dime a dozen. Does using one of these make a product “powered by AI”?

In a way, yes, but not in a way that sets it apart or makes it worth crowing about. Would you buy a box from a company that proudly boasted that its boxes were “held together by nails?” Maybe. After all, nails do hold wood together pretty well. But you certainly wouldn’t assume this company is the only one using them or that they use them in a way that matters.
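To make the “dime a dozen” point concrete, here is a complete toy recommendation engine in plain Python: cosine similarity over hand-made feature vectors. All data and names below are invented for illustration; real products would use richer features, but the core mechanism is often this unremarkable.

```python
import math

# A toy "AI-powered recommendation engine": cosine similarity over
# feature vectors. All data here is invented for illustration.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(user_vector, catalog, k=2):
    """Return the names of the k items closest to the user's taste vector."""
    ranked = sorted(catalog, key=lambda item: cosine(user_vector, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Items scored on (action, romance, comedy) axes.
catalog = [
    ("Explosion Movie", (0.9, 0.1, 0.2)),
    ("Tearjerker", (0.1, 0.9, 0.1)),
    ("Buddy Cop Flick", (0.7, 0.2, 0.8)),
]

# An action-leaning user with a taste for comedy.
print(recommend((0.8, 0.1, 0.5), catalog))  # → ['Buddy Cop Flick', 'Explosion Movie']
```

Twenty-odd lines of standard math — technically “powered by AI,” but nothing a competitor couldn’t reproduce in an afternoon.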

“The science experiment”

A subset of the “powered by AI” issue is what Luminance founder Emily Foges calls “the science experiment.” This is generally where a larger company decides it wants to dabble in a new field like AI, so it throws a little money at the problem. Six months later, boom: it has a product.

At the large-business scale, this is merely a way to dazzle shareholders with promises of embracing the future, and it rarely affects the bottom line. But at smaller companies, it can take the form of clueless leadership forcing the creation of an AI product (or, as the case may be, a chatbot, a blockchain integration, etc.), diverting important resources just to rush something to market so the company can say it did.

The problem, as you might expect, is that these products are rarely any good. AI is not a product; it’s a method. And applying a method where it isn’t needed only complicates existing processes.

This is not to say that companies shouldn’t experiment with such methods to improve their products — but you can usually spot the ones that were rushed out and tacked onto existing offerings, and they’re a sign of poor leadership rather than of bad products generally.

“The speed booster”

There are two kinds of AI-related speed boost people should be wary of.

The first is the simple claim that a company’s system does what another company’s does, but 20 times faster. Certainly there are places where such improvements happen, and they’re welcome. But computing processes these days are complicated, and it’s entirely possible that what’s being sped up is not the bottleneck. Speed-ups in AI training or inference also generally come with a cost, and what that cost is would be my first question to a company claiming either one.
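The bottleneck point is just Amdahl’s law, and a little arithmetic shows how underwhelming a big per-step speedup can be. The 10% figure below is illustrative, not taken from any real system:

```python
# Why "20x faster" can be underwhelming: if the accelerated step is only
# a small fraction of the end-to-end pipeline, the overall gain is capped.
# This is Amdahl's law; the 10% figure below is illustrative.

def overall_speedup(fraction_accelerated, step_speedup):
    """Total pipeline speedup when only part of the work gets faster."""
    return 1 / ((1 - fraction_accelerated) + fraction_accelerated / step_speedup)

# A step that takes 10% of total runtime, made 20x faster:
print(round(overall_speedup(0.10, 20), 2))  # → 1.1 (about 10% faster overall)
```

In other words, a 20x improvement to a tenth of the pipeline buys roughly a 1.1x improvement end to end.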

The second boost is the claim that an AI-based tool can, say, make your hiring or review process three times faster. To some this statement is catnip, but to others it’s fundamentally laughable. Such practices differ so widely that it’s ridiculous to think that one tool could make such a large difference. It’s a variant of the above problem; sure, it may be able to categorize a hundred applicants three times faster than another tool or manual process, but after that, the “AI” is worthless. It’s not an AI hiring system; it’s an AI sorting tool.
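The “AI sorting tool” point can be sketched in a few lines. The scoring formula here is a made-up stand-in for a trained model, not any real product’s logic:

```python
# What an "AI hiring system" often reduces to: a model assigns each
# applicant a score, and everything after that is ordinary sorting.
# The scoring formula is a hypothetical stand-in for a trained model.

def model_score(applicant):
    # Hypothetical stand-in for a classifier's output.
    return 0.5 * applicant["years_experience"] + 2.0 * applicant["skills_match"]

def triage(applicants, top_n):
    """Rank applicants by score and keep the top_n. The 'AI' part ends
    here; interviewing and deciding are still up to humans."""
    return sorted(applicants, key=model_score, reverse=True)[:top_n]

pool = [
    {"name": "A", "years_experience": 2, "skills_match": 0.9},
    {"name": "B", "years_experience": 6, "skills_match": 0.4},
    {"name": "C", "years_experience": 4, "skills_match": 0.7},
]

print([a["name"] for a in triage(pool, 2)])  # → ['B', 'C']
```

The model only accelerates the initial cut; every step after `triage` runs at exactly the speed it always did.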

If someone claims speed boosts, question that claim. It may be much less important than they say, and it may not be due to AI at all. And don’t forget that, if you’re not careful, all faster processes do is get you the wrong answer faster.

“Our resident AI expert”

Creating a useful machine learning system or intelligent agent is no easy task. While, as mentioned, there are off-the-shelf tools, you wouldn’t use them without refinement or integration, any more than you’d use a plain WordPress template as your company website.

Similarly, machine learning has become a branch of software engineering that’s just as important and specialized as any other. You wouldn’t ask a web designer to create your database software, nor would you ask a database engineer to build a conversational AI.

So who built the one you’re being pitched? This can be a delicate conversation for a lot of reasons, but the simple fact is that, like any other form of software, everything depends on the team that built it. If the team doesn’t have serious AI expertise — on staff, not consulting or contracting — the product will suffer for it. If a company hasn’t hired up to address the growing need for machine learning expertise, who will support the product after it ships? Who will add features? Who will help audit the system in matters of privacy and security?

It’s important that this not turn into a Ph.D. hunt. Everyone loves a “real” expert, but they’re not the only ones who can build and support working products. Just make sure someone in your organization fits that description.