New Federal Regulation Deters Experts On Road To Security

Editor’s note: Chris Wysopal is the co-founder and CTO of Veracode.

Security experts were abuzz last month in anticipation of President Barack Obama’s proposal for new federal regulations aimed at bolstering the nation’s cybersecurity posture. Last year saw a number of high-profile breaches that spotlighted the possibility of nation-state-backed cyberattacks and espionage, and even drove the federal government to introduce an executive order establishing shared threat intelligence between Washington and the private sector.

Researchers will be hard-pressed to walk the line of legality as they venture into security research.

The scale and frequency of these attacks brought security to the national forefront and made clear the need to update the laws and regulations that govern cybersecurity. Most recently, the president went further, proposing a $14 billion budget dedicated exclusively to the matter.

But despite President Obama’s well-intentioned efforts to strengthen security, the proposed regulations, as they stand, have garnered mixed reviews. While the new proposals concentrate on information sharing, faster breach disclosure policies and data privacy protection, they muddy the waters for the security researchers who help us find new vulnerabilities. And if the language remains as it is, researchers will be hard-pressed to walk the line of legality as they venture into security research. Regulations meant to help security could instead hinder it.

Experts across the board have begun to weigh in, and I, too, see problems with the conditions these new regulations would create. But to start with what I do agree with: I believe we are on the right path in considering the 30-day disclosure policy, the Consumer Privacy Bill of Rights and the extra digital privacy afforded to children and the classroom. All are needed and long overdue. Here’s why.

The 30-Day Disclosure Policy

There are 47 different state disclosure laws that organizations need to comply with; only Alabama, New Mexico and South Dakota lack one. This means the type of breach that requires disclosure differs by state, the timing differs as well, and some states require that a harm analysis be performed and policies met before any action is taken. A federal law would greatly simplify notification requirements by standardizing the process and setting clear expectations.

The information sharing that accompanies this will also be a huge asset for companies as they work together to exchange intelligence about cyberattackers’ strategies and tactics. I don’t see this being very controversial, although some might argue that a 30-day window will be hard to meet in certain cases. Just consider the recent back and forth between Google and Microsoft over Microsoft’s unpatched flaws.

Consumer Privacy Bill of Rights

Businesses argue that there are no security standards outside of the specifics required for PCI compliance or those of regulated industries such as finance and healthcare. As a result, businesses have not had to provide any particular protection for customer data, nor have they been held liable when that data is breached. But as recent incidents have shown, any business, including third-party affiliates, is vulnerable to attack, particularly because it can house critical data that could be leveraged elsewhere.

Most will agree that there should be “baseline protections,” but everyone will argue over what those are. It is clear that legacy security mechanisms such as firewalls, anti-virus and basic encryption don’t go far enough to protect data; that technology is widely deployed, and yet we still have breaches. Phishing attacks and web application attacks bypass these protections and will need to be covered by the “baseline protections,” or the baseline will be meaningless. It’s yet to be determined what those protections are, but I believe they should be rooted in how software is coded. That’s imperative, as 80 percent of applications are susceptible to attack.
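To make “rooted in how software is coded” concrete, here is a minimal sketch, in Python with a hypothetical accounts table, of one of the most common web application attacks, SQL injection, next to the coding practice that eliminates it. All names and data are illustrative, not drawn from any specific product or breach.

```python
import sqlite3

# Hypothetical in-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, email TEXT)")
conn.execute("INSERT INTO accounts VALUES ('alice', 'alice@example.com')")

def lookup_vulnerable(username):
    # Bad: user input is pasted straight into the SQL statement.
    # Input such as "' OR '1'='1" turns the query into one that
    # matches every row, leaking the whole table.
    query = "SELECT email FROM accounts WHERE username = '%s'" % username
    return conn.execute(query).fetchall()

def lookup_safe(username):
    # Good: a parameterized query treats input strictly as data,
    # so the same malicious string simply matches nothing.
    return conn.execute(
        "SELECT email FROM accounts WHERE username = ?", (username,)
    ).fetchall()

print(lookup_vulnerable("' OR '1'='1"))  # leaks all rows
print(lookup_safe("' OR '1'='1"))        # returns []
```

A firewall or anti-virus product sits around this code and never sees the flaw; only the second version fixes the code itself, which is the level any meaningful baseline would have to reach.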

Extra Digital Privacy for the Classroom

Most free applications on the web and on mobile devices like tablets make money by monetizing their users’ personal and usage information. This behavior should be disclosed and opt-in for all users. Children are too young to opt in, and so should be protected by privacy regulations. It’s a straightforward regulation that should garner little opposition.

The Problems?

What I am more concerned about are the changes to the Computer Fraud and Abuse Act (CFAA) that further criminalize many activities currently falling under the umbrella of security research. The new legislation misses the point; levying harsher penalties and classifying more offenses as felonies won’t stop or reduce attacks, especially from foreign actors who are often hard to trace.

It will, however, make people think twice about entering the security field. The CFAA has already been used numerous times by overzealous prosecutors to harass and imprison security researchers who posed no real risk to national security, and the new regulation could make that even easier in the future.

The new CFAA rules make it harder to have a voice that is independent from vendors on software security.

Consider Andrew Auernheimer, an infamous Internet troll known as Weev, who went to prison for revealing AT&T’s unsafe practice of storing iPad 3G customer records, including presumably protected private data, on public servers. His views are questionable, but by no means did he act illegally. Moreover, as security expert Rob Graham pointed out, even clicking on and posting links to potentially sensitive data could become illegal for researchers. This is counterproductive, because security researchers rely on dual-use tools to do their jobs. Imagine making lock-picking tools illegal, even for locksmiths. That is effectively what the CFAA proposal does.
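To see what “dual-use” means in practice, here is a minimal sketch, assuming nothing beyond Python’s standard library, of one of the simplest such tools: a TCP port scanner. A defender auditing her own network and an attacker mapping someone else’s run exactly the same code; only authorization and intent differ, which is why outlawing the tool itself misses the target.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the TCP ports on `host` that accept a connection.

    The identical code serves a defender auditing her own servers
    and an attacker doing reconnaissance; the tool is dual-use.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success rather than raising.
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Only scan hosts you are authorized to test.
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```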

Also, the new CFAA rules make it harder to have a voice on software security that is independent from vendors. If researchers have to wait for a vendor to patch before they can release their vulnerability information, then they are beholden to vendors, who can delay patching for a long time, or indefinitely, thereby silencing security critique of their products. In other words, teams like Google’s Project Zero would not be able to hold companies like Microsoft or Apple accountable.

The better alternative would be to call for security at the outset: hardening applications at their creation so that they are less susceptible to attack once deployed in the wild. For too long, businesses have concentrated on remediating vulnerabilities only after they are detected, and the new policies only further entrench that practice. Sharing information is important, but not as critical as building strong security in from software’s inception. Proactive approaches should be the priority, because by the time we are reacting, it’s already too late.