It’s The Security, Stupid!

It’s 2014. Do you know where your security is? On Tuesday, Google published a full account of the current state of encryption in email, revealing that some leading providers, like Comcast and France’s Orange, encrypted almost none of the email that reached their servers. The news this week seemed to confirm many of our worst fears about the state of security on the Internet (as it does most weeks).

In China, this week marked the 25th anniversary of that government’s crackdown on protesters in Tiananmen Square. Of course, “marking” is a figure of speech, since the government’s heavy censorship of the web blocked Chinese citizens from accessing information about the event or discussing it. As the satirical Onion put it in a headline, “Chinese Citizens Observe 25-Year Moment Of Silence For Tiananmen Square Massacre.”

But lest you think that censorship is endemic only to the Middle Kingdom, an interesting wrinkle also cropped up in Florida this week, where attorneys from the American Civil Liberties Union had filed public records requests for data on the use of stingrays, devices that can capture information such as location from cell phones. U.S. Marshals seized the reports about the devices before they could be released, preventing disclosure about this particular practice in the United States.

Those are just the highlights of one week of news, in a period that has already given us some of the largest thefts of credit card data in history, as well as one of the most critical security vulnerabilities ever found: the Heartbleed bug in OpenSSL.

These are depressing signs, and they are only set to get worse over the short term as companies scramble to catch up to the security challenges of today’s Internet. Trends in both technology and culture are going to continue to pummel our hopes for a secure Internet future. Only by completely transforming our mindset do we have any hope of moving the needle in the right direction.

Technological Complexity and the Disintegration of Security

Unfortunately, technology trends portend even more security vulnerabilities to come. We continue to build increasingly complex interconnectivity into our startups, apps, and products, almost guaranteeing that the kinds of accidental disclosures and leaks we have seen – whether to cyber-hackers or government intelligence collectors – will continue.

Why does this interconnectivity matter? One theory comes from Charles Perrow’s book “Normal Accidents.” Perrow argues that accidents correlate with two qualities of a socio-technical system: complexity and coupling. As the interactions between the discrete elements of a system grow more complex, and as those elements become more tightly coupled, the system exhibits more emergent behavior. Such systems produce “normal accidents,” accidents that are unexceptional given the design of the system.

Perrow was writing mostly about nuclear power plants, but much of his logic resonates with our Internet software as well. We saw his thinking at work a few months ago in the case of Naoki Hiroshima, who lost his Twitter account to an attacker. The attacker got through GoDaddy’s security verification process by acquiring the last four digits of Hiroshima’s credit card from PayPal. Once he had control of Hiroshima’s GoDaddy account, the attacker redirected the domain name that Hiroshima used for his custom email address. With that email under his control, the attacker could then log in to other websites through their password-reset mechanisms.

In short, a complex, tightly coupled system. A normal accident, or rather, a normal hack.

But interconnectivity is only one half of the challenge to security from our changing technology infrastructure. With our rapid adoption of cloud and mobile technologies, we are returning to the “dumb terminal” paradigm in which a mainframe computer (this time, a data center) does much of the processing and storing of data that we used to do on our now increasingly thin (literally and metaphorically) clients.

The security implication here should be obvious. As more of our data has to travel the web, the opportunities multiply for hackers and governments to undermine the protections around those bits. Frankly, an early-1990s laptop, which rarely if ever touched a network, presented far less attack surface than a constantly connected 2014 iPhone or Android device. We’ve moved in the wrong direction.

These security issues in the cloud have been discussed ad nauseam, but few broach the more fundamental question: is the very concept of cloud computing the problem, rather than the solution, for our security? Cloud industry veterans would argue that centralization generally improves security, since a patch only has to be deployed once to fix all instances. But the lack of diversity in technologies also means that a single vulnerability can affect nearly everyone. Our concentration on a handful of providers (and a handful of software libraries, too!) may be the core problem we have to address.

People are part of the problem. As always.

If it were just technology trends that were shoving security down the memory hole, we might have a fighting chance to improve our fragile Internet. But culture plays just as much of a role in these issues, if not more.

When it comes to startups, the theories of “lean startups” are at the core of the culture of Silicon Valley today. Don’t plan, push code quickly, receive feedback, and iterate. Do this loop as fast as possible to design a product that will reach product-market fit. As a theory, it decently captures a lot of the best practices that startups should pursue in order to avoid key mistakes (like never shipping a product!).

But that culture of breaking things and iterating is poisonous for security. Securing a system as complicated as a modern software-as-a-service startup takes planning, care and dedication. Some startups obviously do this, especially in highly regulated areas like finance or payments. But few others seem to place security anywhere near the top of their priorities. Heck, even storing properly hashed passwords in a database is beyond many startups, as repeated leaks attest (and sadly, established companies have had just as many problems).
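
To make that baseline concrete, here is a minimal sketch of password storage done right, using Python’s bcrypt library. The function names are illustrative, not drawn from any particular company’s codebase:

```python
import bcrypt

def store_password(plaintext: str) -> bytes:
    # bcrypt generates a random salt and embeds it in the output,
    # so two users with the same password get different stored hashes.
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

def check_password(plaintext: str, stored_hash: bytes) -> bool:
    # Re-hashes the candidate with the salt embedded in stored_hash
    # and compares the results without leaking timing information.
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)
```

The point is not the specific library; it is that a slow, salted hash turns a leaked users table from a catastrophe into an inconvenience for the attacker.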

I understand that startups just starting out benefit from security through obscurity. When a startup only has 10 users, security is probably a distant twinkle in a founder’s dreams. A security leak is almost validation that success is beginning: someone somewhere actually spent the time to poke through the startup’s systems and break into its user table in PostgreSQL. But taking a triage approach to security is not what the world requires today.

More broadly, the world is also fighting against the Internet’s underlying culture of openness and transparency. Encryption came late to the web, arriving only with the growing demands of e-commerce websites, which needed a way to accept payments without having the details intercepted.

That openness means there is a tendency to secure systems only when problems arise, rather than from the beginning. Maybe it is the security postscript to the Donald Knuth line that programmers learn early in their careers: “premature optimization is the root of all evil.” Security too often feels like a feature tacked on at the end, not a starting principle.

Finally, lest we heap too much blame on founders already burdened with a thousand demands, we should note the lack of security consciousness among most venture capitalists and journalists. While it is understandable that a startup’s product, team, and market are the top priorities, that doesn’t mean we shouldn’t discuss security at all.

Few VC firms conduct code reviews, for instance, and journalists almost never ask about security unless it is a startup’s key feature or obvious focus. If the culture around security is going to change, we need to bring that change to every touchpoint of a startup’s development.

Moving From Security as a Feature to Pervasive Security

Between these technology trends and cultural forces, it is a pretty bleak picture for security on the web today. To a degree, it is unfair to be too critical. Security is damn tough to do right, even for experts. When it comes to flaws and data leaks, the advantage always lies with the bad guys: they only have to find one vulnerability, while the product’s engineers have to protect the entire codebase. But security can’t be seen as just holes, flaws and injection attacks. Security has to be seen as a constituent part of coding for the web, as important as reliability, speed and ease of use.

I think the changes needed today are many. First, and absolutely critically, security needs to become a standard part of computer science curricula. Most programs have no security requirement, and where security is taught at all, it is usually folded into a systems survey class. Given that lack of preparation, it shouldn’t be surprising that websites still ship with obvious vulnerabilities straight off the OWASP Top 10 list.
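
To pick the canonical example, the top entry on that list is injection. Here is a minimal sketch of the difference between vulnerable and safe query construction, using Python’s built-in sqlite3 module (the users table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, name TEXT)")

def find_user_vulnerable(email: str):
    # String interpolation lets an input such as "' OR '1'='1"
    # rewrite the query itself and dump the whole table.
    query = "SELECT * FROM users WHERE email = '%s'" % email
    return conn.execute(query).fetchall()

def find_user_safe(email: str):
    # A parameterized query treats the input strictly as data,
    # never as SQL, no matter what characters it contains.
    return conn.execute("SELECT * FROM users WHERE email = ?", (email,)).fetchall()
```

The safe form has been available in every mainstream database driver for years; the vulnerable form survives in the wild largely because the difference is so rarely taught.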

Once those engineers leave school and enter the workforce, few will have to think about security again, since those issues are generally handed off to dedicated “security engineers” (assuming any exist at all). Startups need to turn that thinking around. Everyone needs to be involved in security, from the front-end programmers designing the client pages to the backend programmers developing APIs. Companies need to put in place not just the culture, but also the incentives, to encourage engineers to do their diligence on their own code and the work of others.

In addition to culture, companies need to continue improving their transparency around security issues, and to actively seek accountability from the marketplace through responsible disclosure pages and bug bounties, or through startups like BugCrowd that help manage the process. It would also help for some sort of industry group or certification to popularize these ethics and standards.

Finally, the Internet needs to default to encrypted protocols like HTTPS, a goal long sought by the Electronic Frontier Foundation. There are still strong concerns about mandating HTTPS, and it certainly doesn’t solve many of the bugs that cause vulnerabilities. But the number of snoops on the Internet, whether intelligence agencies or cyber-hackers, means that we have to do more to ensure that data is routed around the web securely. That may mean fundamentally changing the way data centers are structured (to reduce traffic between them, for instance). But the stakes are high, and these changes were needed years ago.
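
Defaulting to encryption at the application layer is not hard, which makes the slow adoption all the more frustrating. Here is a minimal sketch in Flask of what “HTTPS by default” means for a single app; the app itself is hypothetical, and a real deployment would also terminate TLS at the web server or load balancer:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Send any plain-HTTP request to its HTTPS equivalent.
    if request.url.startswith("http://"):
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def set_hsts(response):
    # HSTS instructs browsers to refuse plain HTTP for this domain
    # for a full year, even when a user types http:// by hand.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "hello, encrypted world"
```

The redirect handles stragglers arriving over plain HTTP; the header ensures their browsers never make that mistake again.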

These solutions are only a start. Security is hard, and our programming libraries and protocols have not matured to guarantee the security we may naively expect of them. But we all suffer the consequences when we relegate security to the “nice to have” category. Security is a pain killer, not a vitamin. Every one of us has the responsibility to do our part to build a less fragile and more secure Internet. As James Carville would say, it’s the security, stupid. Let’s get this one right.