Heartbleed kicked off a new chapter in the rollicking discussion of privacy, digital security, and the role of government in protecting its citizenry from threats both real and imagined.
News of Heartbleed broke early last week, setting off a bout of soul-searching Internet scrambling as services large and small examined their own networks and products to see if they were exposed to the flaw. Much work remains for those affected to get their services patched and air-tight, with certificates revoked and replaced. It’s no small task, and one that is far from done.
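On the certificate side, the date that matters is the disclosure date: a certificate issued before Heartbleed went public (April 7, 2014) on a vulnerable server should be treated as potentially compromised and reissued. A minimal sketch of that check using Python’s standard ssl module (the date strings are hypothetical examples in the notBefore format certificates use):

```python
import ssl

# Heartbleed was publicly disclosed on April 7, 2014. A certificate whose
# notBefore date predates disclosure, served by a vulnerable host, may have
# had its private key exposed and should be revoked and reissued.
DISCLOSURE = ssl.cert_time_to_seconds("Apr  7 00:00:00 2014 GMT")

def needs_reissue(not_before: str) -> bool:
    """True if the certificate's notBefore date predates the disclosure."""
    return ssl.cert_time_to_seconds(not_before) < DISCLOSURE

# A certificate issued in 2013 predates disclosure:
print(needs_reissue("Jun  1 12:00:00 2013 GMT"))  # True
# One issued afterward does not:
print(needs_reissue("Jun  1 12:00:00 2015 GMT"))  # False
```

Note that reissuing alone isn’t enough: the old certificate must also be revoked, or a stolen key keeps working until the certificate expires.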
Friday brought allegations that the NSA not only knew of Heartbleed, but had used the exploit for some time, perhaps two years. The NSA, in a statement, denied this. The White House followed suit. Since then we’ve learned a few things that are worth keeping in mind.
Let’s begin with the U.S. government’s policy on revealing flaws in Internet security. The New York Times wrote the key report on this, based on sourcing from “senior administration officials.” The gist is that the U.S. government now claims a bent toward disclosing the flaws it finds, unless, as quoted by The Times, there is “a clear national security or law enforcement need” to withhold.
While it is easy to appreciate a leaning toward disclosure, the above leaves the American people in the position of either trusting the government or not. Put simply, as the government gets to decide for itself what constitutes a “clear national security or law enforcement need,” we, the average folk, have no window into what is not disclosed, and why.
There’s a reason for that, naturally: If the NSA told the world each and every exploit it found and intended to use, they would all slam shut, and its job would become far harder, if not impossible. At the same time, we haven’t answered the following question: If the NSA had known about Heartbleed — and some remain convinced that, denials aside, it did — would it have told the Internet community?
If we can’t be sure whether Heartbleed would have passed the anti-efficacy test — the idea that a flaw is so dangerous to public safety that it must be disclosed, potential offensive capabilities be damned — we are left essentially nowhere. That tension undercuts the NSA’s claim not to have known; if we can’t be sure of its methods for determining what is disclosed and what isn’t, at least in the abstract, any single case is simply an occluded data point with no axes to measure from.
The NSA doesn’t even need to know of an exploit in advance to, well, exploit it. The Guardian did a fine job explaining this yesterday [I quote at length to preserve tone]:
The agency’s recently-disclosed minimization procedures permit “retention of all communications that are enciphered.” In other words, when NSA encounters encryption it can’t crack, it’s allowed to – and apparently does – vacuum up all that scrambled traffic and store it indefinitely, in hopes of finding a way to break into it months or years in the future. As security experts recently confirmed, Heartbleed can be used to steal a site’s master encryption keys – keys that would suddenly enable anyone with a huge database of encrypted traffic to unlock it, at least for the vast majority of sites that don’t generate new keys as a safeguard against retroactive exposure.
If NSA moved quickly enough – as dedicated spies are supposed to – the agency could have exploited the bug to steal those keys before most sites got around to fixing the bug, gaining access to a vast treasure trove of stored traffic.
The NSA isn’t building those datacenters to hold its internal email, of course.
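The “safeguard against retroactive exposure” the Guardian describes is forward secrecy: with ephemeral (EC)DHE key exchange, each connection derives a fresh session key that the server’s long-term private key cannot reconstruct, so a stolen master key unlocks nothing that was recorded earlier. An illustrative sketch, using Python’s standard ssl module, of restricting a client to forward-secret suites:

```python
import ssl

# Forward secrecy: ephemeral (EC)DHE key exchange derives a fresh session
# key per connection, so a stolen server private key cannot retroactively
# decrypt recorded traffic. (All TLS 1.3 suites are forward-secret by design.)
ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE:DHE")  # limit TLS 1.2 suites to ephemeral key exchange
fs_suites = [c["name"] for c in ctx.get_ciphers()]
print(f"{len(fs_suites)} forward-secret cipher suites enabled")
```

A context configured this way refuses static-RSA key exchange entirely, which is exactly the mode that makes bulk-recorded traffic decryptable after a key theft.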
Presuming for the moment that the NSA and the larger U.S. government didn’t know about Heartbleed — and you have to ask why, given their supposed prowess — the loophole stays open until everything is patched. We have little concrete in the way of promises that the government would have disclosed the flaw had it known, and nothing to say that it wouldn’t stay silent the next time.
The only upside to this situation is that we are engaged in what Professor Dawkins would call “consciousness raising” — a period of rising public awareness of a situation that needs massive course correction. We’re going to need better, and more, encryption, more open-source technology, and even more minds parsing the code to ferret out weaknesses. But at least we understand where we stand.
The childhood and adolescence of the Internet are over. It’s time to grow up.