Will AI Be Smart Enough To Protect Us From Online Threats?

According to some estimates, the global cost of cybercrime in 2013 was $113 billion. The actual cost may vary, but whatever the true figure, it’s a spicy meatball. Human beings really aren’t the best at computer security. We will trade our confidential passwords for candy bars. We will pick strange USB drives off the ground and jam them into our ports, then act surprised when our computers get infected. We will routinely route around the most secure corporate defenses just to play online poker during work hours.

Here’s an idea: Let’s set our computers to defend our computers. Let’s build a computer strong enough, fast enough and smart enough to defend us all from hackers on its own. Can we do this? Probably not. And if it turns out that we can, we’ll end up in a be-careful-what-you-wish-for scenario. Let’s explore why artificial intelligence will probably never be smart enough to end cybercrime.

We Already Use AI, And It Doesn’t Work So Well… Yet

Let’s talk about weak AI with an example that you probably use every day: Google.

Yes, you read that correctly. Google is a form of artificial intelligence. The search engine isn’t as smart as a human, obviously, but it recognizes human language. Within a narrow range of tasks — scouring its index of the web for relevant results — it is much better than a human being could ever be. That’s weak AI: non-sentient, but good at working with humans, and better than a human in one or two specific areas.

Weak AI is the present and future of information security. You can see its behaviors at work in a common piece of security software — the SIEM (security information and event management) tool. Here’s a good analogy for the way a SIEM tool works: You’re a police officer manning a roadblock. You get a directive: there are criminals on the loose, and they were last seen driving a yellow car. Stop and inspect all yellow cars. In this analogy, all the cars are packets, and yellow cars are packets bearing a known malware signature. You flag all the yellow cars, and nobody gets hacked today.
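
To make the roadblock concrete, here’s a minimal, hypothetical sketch of that kind of signature matching in Python. The hash database and packet payloads are invented for the example — real SIEM tools are far more elaborate — but the core idea is this simple:

```python
import hashlib

# Hypothetical signature database: SHA-256 fingerprints of known-bad payloads.
# (This particular hash is the well-known digest of b"foo\n".)
KNOWN_BAD_HASHES = {
    "b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c",
}

def is_malicious(payload: bytes) -> bool:
    """Flag a packet payload whose fingerprint matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# Every "yellow car" (matching payload) gets stopped at the roadblock.
for packet in [b"foo\n", b"harmless traffic"]:
    print(packet, "-> FLAGGED" if is_malicious(packet) else "-> passed")
```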

SIEM tools are rather stupid, however. A criminal can get past your SIEM tool by doing the digital equivalent of painting a blue stripe on their yellow car. Because a signature typically matches an exact fingerprint of a known file, criminals can change as little as a single line of code in the malware they’re using, and thus render that malicious program basically invisible.
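
You can see how fragile exact-fingerprint matching is in a toy illustration (the payloads here are made up): change a single character and the fingerprint is completely different, so a signature built from the old version tells you nothing about the new one.

```python
import hashlib

original = b"malicious payload v1"
tweaked  = b"malicious payload v2"  # a single character changed

# The two digests share nothing in common, so a signature built
# from the first version will never match the second.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tweaked).hexdigest())
```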

It takes human intelligence to comb detailed logs of web traffic and identify which traffic represents an intrusion attempt. It takes human intelligence to then update firewalls, SIEM tools and intrusion detection systems (IDS) so that they can identify malicious web traffic. Although this process has become increasingly automated, there is no indication that humans are ever going to be taken entirely out of the loop.
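
The automated part of that loop usually looks like a human-written rule applied at scale. Here’s a hedged sketch — the log format and the “three failed logins” threshold are my own inventions for illustration — of the kind of rule a security analyst might codify after combing the logs by hand:

```python
import re
from collections import Counter

# Hypothetical access-log lines; in practice these come from your web server.
LOG = """\
203.0.113.7 "POST /login" 401
203.0.113.7 "POST /login" 401
203.0.113.7 "POST /login" 401
198.51.100.2 "GET /index.html" 200
"""

# Count failed login attempts (HTTP 401) per source IP.
failures = Counter(
    m.group(1)
    for m in re.finditer(r'^(\S+) "POST /login" 401$', LOG, re.MULTILINE)
)

# A human decided this rule: three or more failures from one IP is suspicious.
for ip, count in failures.items():
    if count >= 3:
        print(f"possible brute-force attempt from {ip} ({count} failures)")
```

The rule works until attackers change tactics — and then a human has to write a new rule, which is exactly the loop described above.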

Let’s say that I’m wrong, however. Let’s say that someone develops a computer that can identify 100 percent of malicious web traffic and turn it aside. In all likelihood, that system will do nothing to prevent successful breaches. The problem has to do with humans — specifically, the scary intelligence of human attackers.

Computer Security Is An Arms Race

Consider the case of the air-gapped computer. This is a system that contains data that must never, ever be communicated with outsiders. As such, an air-gapped computer is never connected to the Internet. Ever. Period. This system is, by definition, more secure than any firewalled device, right?

A team at Ben-Gurion University has not only hacked the air gap, but has done so in multiple ways: using heat waves, ultrasound and a low-end mobile phone.

There is no “unhackable” device that criminals and researchers haven’t already hacked. Also, there are more hackers out there than there are security professionals. They talk to one another. They read about each other’s exploits and work to make them more efficient. One week, the most top-secret zero-day exploit is being used only by the Russian and Chinese governments; the next, it’s packaged into a simple .exe file and wielded by thousands of script kiddies. DDoS attacks were once the province of massive criminal organizations; now anyone can simply download Low Orbit Ion Cannon.

Essentially, one needs to think of the hacking community as a giant supercomputing supercluster full of the most powerful computers ever built — the human brain — all bent on breaking into your servers and stealing that which is dear to you. Any computer that you build to stop them has got to be smarter than that.

Clearly, This Is Not A Job For Weak AI

What’s an alternative to the weak AI that we currently use? For starters, there’s strong AI — artificial general intelligence. We’re mostly into the realm of science fiction when we talk about strong AI, because a strong AI would approximate human levels of intelligence and problem solving. Right now, we don’t even have a good idea of how to test computer systems in order to determine that they exhibit these properties.

Strong AI also presents some thorny ethical issues. I’m no ethicist, but imagine that tomorrow morning you wake up in a metal box and are told that your only job, from now until the end of time, is to monitor open ports for intrusion attempts and block them, without ever eating, resting or sleeping. In that situation, I would most likely delete myself at the first opportunity, and you probably would too. Clearly, creating a human-level intelligence for the purposes of cybersecurity would be kind of evil, and it probably also wouldn’t work too well.

Again, to completely mitigate cyberthreats, our hypothetical computer system would need to be smarter than a whole stadium full of genius-level bad guys. So, to beat them, we need ASI.

Artificial Superintelligence (ASI)

Now we’re way into sci-fi territory, and mostly of the dystopian variety. Think Shodan, Skynet, the Borg Collective — that’s what we’re talking about. In most fictional scenarios, the development of superintelligence goes badly for our civilization.

Still, if we want to fully negate hackers, this is what it would take. The capacity to monitor every single Internet-connected computer in the United States, the intelligence to determine what constitutes an attack and the flexibility to identify and deter unorthodox intrusion attempts — that takes artificial superintelligence. The world, in this scenario, is not a fun place to live. The NSA might be recording every keystroke on everyone’s computer right now, but it doesn’t have the capability to meaningfully analyze that data. ASI does. It sees all, knows all; whether it’s a benevolent overlord is absolutely beyond our control.

What The Actual Future Will Look Like

Barring the sudden emergence of an AI singularity, computers will continue to be an aid to information security, but not a replacement.

Machine learning concepts continue to improve, almost by the day. Right now, information security experts spend a great deal of time just programming their tools to understand what an anomaly or an attack looks like. These rules work — some of the time. An improved version of weak AI would eventually learn these rules on its own. This tool wouldn’t be foolproof, would still produce false positives and would absolutely need a human minder — but it would make the jobs of security professionals that much easier.
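
As a miniature sketch of what that improved weak AI might look like — my choice of library and features, not anything specified here — you could train scikit-learn’s IsolationForest on a baseline of normal traffic and let it flag departures, instead of hand-coding every rule. A human still reviews whatever it flags:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Made-up features per connection: [requests per minute, avg payload bytes].
normal_traffic = rng.normal(loc=[30, 500], scale=[5, 50], size=(500, 2))
odd_traffic = np.array([[300, 20], [5, 9000]])  # bursts and oversized payloads

# Learn what "normal" looks like rather than enumerating attack signatures.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

print(model.predict(odd_traffic))  # -1 marks an anomaly worth a human's look
```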

On the other hand, hackers are only going to get better at developing methods of attack. They also aren’t above co-opting the methods of their opponents. As we continue to refine weak AI as a method of defense, it won’t be long before those same tools are turned around and used to design the malware that attacks us.