Well, why not? I mean, you know, what the hell. Dave Aitel’s proposal over at The Hill for “a cyber investigatory setup funded by private industry” to react to hacks into the American government may not be a good idea, per se, but who can afford that kind of cost-benefit analysis when we’re already in the throes of de facto high-seas Internet warfare? Let’s just issue some letters of cybermarque and see what happens!
Back in the days of fighting sail, letters of marque authorized private vessels known as privateers to attack, seize, and profit from ships designated as targets. These were distinct from private vessels known as pirates, who attacked, seized, and profited from any ships they decided were targets. That historical distinction is pretty blurry in hindsight (one king’s pirate was another’s privateer), but the fundamental problem, and opportunity, was that vulnerable stores of highly concentrated wealth could be plundered while beyond the effective reach of traditional law. The consequences were more or less inevitable, given human nature. Don’t hate the pirate, hate the game.
Much the same applies today. Our world is largely built atop a foundation of software built in haste, by sloppy engineers using memory-unsafe languages, and then pressed into service for newly emergent purposes by people who had neither the talent nor the time to understand the niceties of the process and/or the consequences of their actions. Are we really so surprised that hackers and nation-states alike are taking advantage of the resulting bird’s nest of gaping security holes?
(One exception: Apple. Philosophically, I don’t like their hegemonic approach to software, but the stark absence of any major iOS malware outbreaks over the first ten years of the iPhone deserves some sustained and standing applause. They’re not perfect, but they’re a long sight better than most — and they indicate that increased cyberinsecurity is not an inevitable result of our world’s increased complexity. We could write safe, or at least vastly safer, software. Apple and some enterprise providers like Cisco show as much. We just can’t be bothered, because of legacy commitments, and carrier fragmentation, and the rush to ship code that sort of mostly works if you reboot it often enough, and because, I mean, really, who has the time?)
And so we get insecure networks, and insecure crypto libraries, and insecure operating systems, and servers so insecure that they bleed someone else’s confidential data. We get worms that can spread across entire cities via light bulbs. We get megabotnets. We get the NSA accidentally leaving their toolkit in staging areas, like burglars leaving lockpicks in a stolen car, and that toolkit being used for the recent tsunamis of ransomware and wiperware.
And above all, we get phishing, because people will click on attachments you send them, and somehow, in 2017, we still have so much pervasive insecurity at both the network and the operating-system level that all too often “clicking on a file” — or, marginally more interestingly, “clicking on an OAuth button,” a trick that hit even mighty Google hard just two months ago — basically equates to “handing over most of the keys to your kingdom.”
Sure, you could use two-factor authentication, but guess what, if you’re getting validation codes texted to your phone, that’s insecure too, thanks to SIM swapping and SS7 interception! I mean, you should still sign up for it. It’s better than not getting validation codes texted to your phone. But it’s not as good as using app-generated codes from, say, Google Authenticator. Kudos to companies like Coinbase, who (wisely, given the current crypto bubble’s eye-popping valuations) are now requiring their users to switch to Authenticator.
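(For the curious: those Authenticator codes aren’t sent anywhere, which is exactly why they’re harder to intercept. They’re TOTP, per RFC 6238: an HMAC over the current 30-second time step, keyed by a secret shared once at enrollment. A minimal standard-library sketch, illustrative only; the `totp` helper’s name and signature are my own:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC the current time-step counter with a shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // period)
    # HMAC-SHA1 over the big-endian 8-byte counter (the HOTP core, RFC 4226).
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector: this base32 secret is ASCII "12345678901234567890".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # → 94287082
```

Your phone and the server each run this independently, and the server accepts a small window of adjacent time steps to tolerate clock drift. Needless to say, real deployments should use a vetted library rather than hand-rolled crypto.)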
But the fundamental problems remain. Decades of terrible security decisions are coming home to roost like a scene from The Birds. The state of information security has been so dire for so long that learned helplessness has led many people to conclude, nihilistically and wrongly, that real security isn’t even possible. Attribution — i.e. deciding beyond a reasonable doubt, with more than circumstantial evidence, who was behind any given hack — is extremely difficult unless the attackers were dumb enough to leave identifying fingerprints. So is retaliation, which is of course the whole point of asymmetrical warfare.
So: issue those letters of cybermarque, hack back against the hackers, and send our own privateers steaming across the dark web armed with cutlasses and cannons? What the hell, why not? It probably won’t accomplish anything; it probably will just escalate an arms race that makes things worse for everyone; but it might make people feel a little better, and if there’s anything that the last few decades of software development have taught us, it’s that people, companies, and governments are way more into building a feel-good façade of security than into the hard work and endless slog of building our edifices atop any kind of solid foundation.