Drawing Lessons From July’s Jeep Hack

If you were anywhere near the internet in late July, you probably read the news: Charlie Miller and Chris Valasek, two security researchers who specialize in hacking cars, figured out how to remotely take control of a Jeep.

They didn’t just take control of the vehicle from ten miles away: the hacking duo exploited a software flaw that shut down the Jeep’s engine while Wired’s Andy Greenberg was driving it. On a busy stretch of public highway where cars whizzed by at 60+ miles per hour. Without a shoulder or emergency pull-off lane.

“THEY DID WHAAAAAAAAT?” most of the internet asked, mouths agape while watching the video proof that this was all possible.

Though the piece was masterful, the experiment was an entirely insane trust exercise in which two hackers promised not to put Greenberg’s life in danger while exploiting a security hole in the car’s Uconnect system. Weeks later, most of the internet was still discussing their shenanigans, and Chrysler announced a recall of 1.4 million vehicles, almost three times the number originally estimated by Miller (a Twitter employee) and Valasek (head of vehicle research at the security firm IOActive).

Automakers and technology companies have worked together to connect cars to the internet left and right— there is no shortage of solutions for the connected car on the market, some of which can even unlock your home for you. How is it that major car companies across the world are just now beginning to hire security teams?

Security by Design

If one thing is certain about security, it’s this: security has to be built in from the very beginning of the product creation process; it cannot be effectively bolted on at the end. When we design houses, we don’t put the toilet in the middle of the kitchen next to the sink, and when we design systems that mix critical and non-critical functions, isolation is key to strong security.

To prevent an attacker from using a non-critical network to gain access to a highly critical one, security professionals often implement an air gap between the two. In the case of the systems powering the hacked Jeep, however, no air gap existed: after combining a few different security vulnerabilities, the hackers discovered a link between the system driving the car and the Uconnect system powering the dashboard entertainment unit. One cellular-network vulnerability and a few hacks later, the potentially fatal remote-takeover flaw went from a theoretical attack to a practical threat.
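
A true air gap means no connection at all; where some bridge between the two networks is unavoidable, one common mitigation is a gateway that only lets an explicit allowlist of messages cross from the non-critical side to the critical one. Below is a minimal, purely illustrative sketch of that idea in Python. The message IDs, helper names, and interfaces are hypothetical, not a description of Chrysler’s actual architecture.

```python
# A minimal sketch of an allowlist gateway between a non-critical network
# (the infotainment unit) and a critical one (the drive CAN bus). The message
# IDs and helpers below are hypothetical illustrations, not Chrysler's design.

from typing import Callable

# Only explicitly allowlisted, read-only message IDs may cross from the
# infotainment side toward the drive bus; everything else is dropped.
ALLOWED_INFOTAINMENT_TO_DRIVE_IDS = {
    0x7DF,  # hypothetical: an OBD-II-style diagnostic request
}

def forward_frame(can_id: int, payload: bytes,
                  send_to_drive_bus: Callable[[int, bytes], None]) -> bool:
    """Forward a frame to the drive bus only if its ID is allowlisted."""
    if can_id not in ALLOWED_INFOTAINMENT_TO_DRIVE_IDS:
        print(f"dropped frame 0x{can_id:03X} ({payload.hex()}) from infotainment side")
        return False
    send_to_drive_bus(can_id, payload)
    return True

# Hypothetical usage: a control command injected from the infotainment side
# never reaches the drive bus, while a diagnostic query passes through.
if __name__ == "__main__":
    drive_bus_log = []

    def send_to_drive_bus(can_id: int, payload: bytes) -> None:
        drive_bus_log.append((can_id, payload))

    forward_frame(0x2B0, b"\x10\x00", send_to_drive_bus)       # blocked: hypothetical control ID
    forward_frame(0x7DF, b"\x02\x01\x0d", send_to_drive_bus)   # allowed: diagnostic-style query
    print("frames that reached the drive bus:", drive_bus_log)
```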

Ultimately, this raises many questions about Chrysler’s security, design, and review processes: someone (or multiple someones) within Chrysler had the opportunity to spot and correct this flaw, and did no such thing. How could something so glaringly obvious have been missed in something as critical and powerful as an automobile? Who could have thought that connecting these systems was a sound choice in the first place?!

Disclosure is hard

According to cybersecurity advocate Keren Elazari, “Hackers are the immune system of the internet.” As part of that immune system, white hat hackers and security researchers search for vulnerabilities in the critical infrastructures that connect our world, and disclose them as a means to protect vital systems from malicious attack. For this coordinated or responsible disclosure to work, vendors and security researchers must collaborate to find a fix and improve software or device security together.

More often than not, however, disclosure is a risky process for security researchers, and many are threatened with legal action simply for pointing out a flaw to a vendor. Since the 1990s, countless researchers have faced the threat of prosecution for attempting to report software holes that are just as accessible to criminals, and their findings are frequently downplayed once the vulnerabilities are released in the wild.

Though this is slowly beginning to change as large technology companies embrace disclosure and adopt platforms like Bugcrowd and HackerOne for vulnerability management, it is still common for researchers to face threats, or to have to take a security issue to the media and raise awareness before it gets appropriately fixed.

While some consider this to be a dangerous act of stunt hacking, this particular case is an example of public vulnerability disclosure at its absolute best. Instead of releasing the vulnerability into the wild, Miller and Valasek coordinated with the vendor once they found the exploit, and then waited to make their findings public until a fix was available.

Without a video to illustrate the severity and implications of their car hack, their findings might never have been heard outside of the security community in attendance at Black Hat, and a patch for the flaw might never have been developed. This summer, many more car hacks are on the horizon at the world’s largest gatherings of hackers and infosec pros, Black Hat and DEF CON. (No word yet on whether any of the researchers behind this summer’s round of car hacks are working with reporters on videos of their vulnerability exploits, too!)

Bridging the Last Mile

After sharing their severe vulnerability with Chrysler, Miller and Valasek committed to the time-consuming task of collaborating with the car manufacturer to develop a patch for the issue. Using an estimation technique developed for tagging and tracking wild animal populations, Miller and Valasek estimated that 471,000 cars were affected; after the recall, however, we now know that Chrysler will attempt to ship 1.4 million USB drives preloaded with software updates to consumers. If we know anything about consumers, though, it’s this: consumers don’t regularly update the software on their computers, much less their cars.
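
The wildlife-counting approach works like a mark-and-recapture census: observe twice, count how many individuals show up in both observations, and extrapolate the total population. Assuming that is the kind of technique referenced above, here is a minimal sketch of the classic Lincoln-Petersen estimate; the sample counts are purely hypothetical and are not the researchers’ actual scan data.

```python
# A minimal sketch of a Lincoln-Petersen ("mark and recapture") population
# estimate, the kind of wildlife-counting formula referenced above. The
# sample counts below are purely hypothetical, not the researchers' data.

def lincoln_petersen(first_scan: int, second_scan: int, seen_in_both: int) -> float:
    """Estimate total population size from two independent samples.

    first_scan:   vehicles observed (and "tagged") in the first scan
    second_scan:  vehicles observed in the second scan
    seen_in_both: vehicles observed in both scans (the "recaptures")
    """
    if seen_in_both == 0:
        raise ValueError("need at least one recapture to form an estimate")
    return first_scan * second_scan / seen_in_both

# Hypothetical example: two scans find 2,000 and 2,100 vulnerable head units,
# 9 of which appear in both scans, suggesting roughly 467,000 vehicles total.
print(round(lincoln_petersen(2000, 2100, 9)))
```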

Updating a car’s software isn’t a simple process. It requires that the driver either take the car to a dealership or download the update from the web, load it onto a thumb drive, connect the thumb drive to a USB port in the car’s dashboard, and then step through the installation process in the vehicle.

While this task might be reasonable for a certain subset of tech-savvy drivers, the vast majority of people will not go through this process, meaning many of the vehicles vulnerable to remote hacking may never receive the update. (Since the disclosure, Sprint has also made changes that block the attack from being launched over its network.)

What good is a security update if it can’t make its way from the manufacturer to a device with a highly critical security issue? If Chrysler knew about this issue for nine months, why did it take three days after the initial public disclosure to announce a recall? And why is the burden of updating left on the end user when the system could have been designed from the outset to deliver important updates directly to affected vehicles automatically?
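
One way that last mile could be bridged is for the vehicle itself to periodically check a manufacturer endpoint, verify the authenticity of any update it finds, and install it without asking the owner to juggle thumb drives. The sketch below is purely illustrative: the URL, manifest format, and installer hook are hypothetical, and a production design would verify a cryptographic signature from the manufacturer’s signing key rather than a bare digest.

```python
# A purely illustrative sketch of an automatic over-the-air update check.
# The endpoint, manifest fields, and installer below are hypothetical; a real
# design would verify a signature from the manufacturer's signing key, not
# just a digest published alongside the download.

import hashlib
import json
import urllib.request

UPDATE_MANIFEST_URL = "https://updates.example-automaker.com/headunit/manifest.json"  # hypothetical

def check_for_update(current_version: str) -> None:
    """Fetch the latest manifest and install the update if one is available."""
    with urllib.request.urlopen(UPDATE_MANIFEST_URL) as resp:
        manifest = json.load(resp)

    if manifest["version"] == current_version:
        return  # nothing to do

    # Download the update image and check it against the published digest
    # before handing it to the installer.
    with urllib.request.urlopen(manifest["download_url"]) as resp:
        image = resp.read()

    if hashlib.sha256(image).hexdigest() != manifest["sha256"]:
        raise RuntimeError("update image failed its integrity check; refusing to install")

    install_update(image)

def install_update(image: bytes) -> None:
    """Flash the head unit; out of scope for this sketch."""
    raise NotImplementedError
```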

Mo’ connectivity, mo’ problems

When their structures fail or their designs are found to be faulty, architects and civil engineers are frequently deemed negligent and held liable for decisions that lead to loss of life and limb. The Internet of Things is expected to grow to 30 billion devices by 2020, and as technology begins to power more and more of the things we use in our daily lives, we are quickly approaching a world where technologists will be held liable for the loss of life and limb, too.

Whether or not you agree with how Charlie, Chris, and Andy worked together to demonstrate a vulnerability affecting 1.4 million cars on the road, that does not change the severity of the issue or its widespread implications for consumers. By adding connectivity to devices, we greatly expand the potential for security vulnerabilities: with more connectivity comes more security problems.

Make room for security at the beginning of your product development process, and embrace the security researchers who step in and share insights that will harden your product against abuse and malicious attack. In the end, it could do more than just save your company’s reputation from massive heartache; it could save life and limb.

Though no technology will ever be unhackable or 100% secure, security and security researchers serve to protect end users and empower the innovation we love so much in Silicon Valley. To prevent the future from becoming a fully connected Internet of Terrors, it is time for those of us building an always-on, omni-connected future to be more proactive about the choices we make for the consumers who trust us to protect their safety.