When your fear is my opportunity

Politicians scare you to stay in office. Police forces scare you to get worshipful adulation and military equipment. And the information security industry scares you to get more money, power and influence.

I’m not saying the people around me here at Black Hat are malicious, or that the threats aren’t real (although some certainly seem to be FUD). I’m saying that the industry, and everyone in it, is strongly incentivized to make its customers and the wider world as frightened as possible, and people tend to follow the path of most incentivization, consciously or not, reluctantly or not. The security industry is the business of fear.

As Facebook’s Alex Stamos argued yesterday, the real threats that most people face consist primarily of abuse, password reuse, unpatched systems and phishing; problems so well-established that they’re not scary any more, they’re just exasperating and/or infuriating. Meanwhile, though, the industry — and especially the media coverage of the industry — focuses a wildly disproportionate amount of time, energy and attention on sexy new exploits.

This is partly, as Stamos said, due to a cognitive bias toward the thrill of the new, complex and unknown. But I would argue that it’s also because the security industry has powerful incentives to focus on the fear of the new, complex and unknown. When you’re in the business of protection, fear is good; fear means you get more money, more influence and higher priority. High-profile exploit discovery and bug squashing is good for everyone in the industry. Quietly eliminating entire categories of vulnerabilities so that no one ever has to worry about them again is, almost paradoxically, not.

It is not lost on me that The Media is also all too often in the business of fear. We breathlessly cover new scary attacks. And it’s true that there are a lot of them, from ransomware to anti-democratic “information ops” to attacks on the Wi-Fi chipset in your phone. (Fun fact: your phone probably contains four separate-ish computers — main processor, baseband, Wi-Fi and SIM card.) Every so often a major and frightening flaw in fundamental software is uncovered, about which people genuinely should be worried.

But the security industry’s implicit tendency to make every new exploit seem like a five-alarm fire, and the media coverage of the industry, have led to a widespread and wrongful learned helplessness: an attitude among end users that everything is terrible, everything is hackable and nothing can ever be done about this except to throw money at security in the hope of becoming a slightly harder target, so that attackers pick another victim instead.

This is not true. Yes, if the Mossad is after you, your systems are probably going to crumble beneath their assault, unless you are a nation-state or Google or the like. But most people, and most companies, can maintain a fairly high level of security by simply keeping their systems updated with the latest patches, not reusing their passwords (e.g. by using a password manager), using two-factor authentication and — loath as I am to say this, as a fan of decentralization — keeping their data with Apple/Google/Amazon/Dropbox etc., so that it’s protected by experts.

Of course, industry people know this. As I write this I’m surrounded by thousands of hard-working, well-meaning security professionals whose job is to try to secure the hospitals, banks, cities, clients, etc. for whom they work. Most of them really would like to eliminate whole categories of bugs, because they’re chronically overworked, understaffed and underfunded; they really didn’t want to have to drop everything for a few days to deal with WannaCry, which spread after the NSA carelessly let its tools be stolen by malevolent hackers.

Vendors who sell to those professionals, though, and researchers who try to make their names by trumping up the new exploits they discover, and the media who write breathless articles decorated with stock pictures of black-hoodied hackers illuminated by the ghostly glow of their laptops … again, I’m by no means saying that they are bad people, or maliciously motivated, or wrong to ring their alarm bells. I’m just saying that they’re subtly but strongly incentivized to scare people, and that we should all bear this in mind when we consider their work.