Just over four years ago, I was walking out of a conference center in Melbourne with butterflies in my stomach. I’d just sat through what is still the most viscerally disturbing information security talk I’ve ever seen. The late Barnaby Jack, a brilliant security researcher well-known at the time for his work on “Jackpotting” ATMs and remotely hijacking insulin pumps, had just demonstrated in front of 300 people how he could wirelessly take control of an Implantable Cardioverter Defibrillator and cause it to discharge enough electricity to jump a 12 mm spark gap.
“Unfortunately, this has to be a video demo,” he said, “because if any of you have one of these inside right now the demo might kill you.”
Muddy Waters, LLC recently executed a stock-short trade based on information provided by cybersecurity startup MedSec, which outlined security vulnerabilities in St. Jude Medical’s pacemakers and defibrillators. The alleged findings bear an eerie similarity to the vulnerabilities I saw demonstrated four years ago — it appears they either went undiscovered or were left unremediated.
Muddy Waters’ alleged security failures (hotly denied by St. Jude Medical) included a series of flaws that could allow anyone to tap into implanted devices and cause potentially fatal disruptions. The cybersecurity of safety-critical Internet of Things (IoT) devices — i.e. physical devices connected to the outside world via the internet or wirelessly — has been a serious concern in the security industry for a long time now. But this marked the first open, public disclosure of a device’s security vulnerabilities designed specifically to move the stock of a publicly traded company.
The deliberate, money-making cooperation between the two firms raises questions about the ethics and process of disclosure, the use of vulnerabilities for financial gain and the changes to come in this industry.
The ethics and legality of disclosure
Vulnerability disclosure ethics are an inherently murky area. There are countless vulnerabilities that exist unpatched in software systems, including the medical devices that are implanted into humans every day.
Ideally, the vendor learns about vulnerabilities through its own testing or through the help of security researchers operating under a “Coordinated Disclosure” or bug bounty model. Other times, vulnerabilities are kept secret and used for attack, unbeknownst to the manufacturer or its users. Then there’s what is called “Full Disclosure,” where discovered vulnerabilities are simply published, sometimes before the vendor has had the opportunity to respond. These three scenarios are the status quo for disclosure.
It could be argued that a better approach to this situation would have been for the security researchers to share their findings with St. Jude Medical under a Coordinated Disclosure model. Given that they were contracted with Muddy Waters and not St. Jude Medical, they were not obligated to do this, and may have felt that Full Disclosure was the best motivator to St. Jude Medical to fix the problem.
Full Disclosure is most frequently chosen out of frustration at a slow response or bad communication from the impacted organization. Organizations can mitigate the risk of Full Disclosure, first by taking whatever steps are necessary to identify and fix vulnerabilities themselves, then by establishing clear channels and expectations between security researchers and vendors around newly identified ones.
Safety-critical vulnerabilities have safety-critical impacts, so exposing the details of vulnerabilities in these devices carries inherent risks that force an ethical consideration. Will this action give an adversary the methods and time to critically harm a user of the device before it is patched? Or will St. Jude Medical correct the vulnerability before that can happen?
Then there are the legal considerations. The Digital Millennium Copyright Act (DMCA) and Computer Fraud and Abuse Act (CFAA) are designed to prevent the malicious discovery and use of vulnerabilities, but these acts can also impede legitimate security research. As a result of input from external security researchers and public pressure to accelerate security efforts, exemptions to the DMCA’s anti-circumvention provisions were sought and won late last year for cars and a handful of medical devices. The research by MedSec was almost certainly conducted under this exemption.
Another, longer-term legal question is how the Securities and Exchange Commission (SEC) will react to this new signal of potential alignment between security researchers and investment organizations. Infamous hacker and internet troll Andrew “Weev” Auernheimer proposed exactly this model under TRO LLC a few years back, so it’s not a brand new idea (and, indeed, it may not even be the first time a short trade has been executed off the back of this type of information), but it is certainly the first time one has been openly executed.
Certainties that are unpredictable (like vulnerabilities) tend to have derivative markets form around them. The reality is that “long term” will really mean long term in this context: whether or not the SEC approves of the practice, regulations will need to be designed and ratified.
What happens next?
Between the gaping vulnerability of the internet-connected world, the frustration researchers often experience in trying to get vendors to pay attention and fix life-threatening vulnerabilities and the new pathway to profit demonstrated in this situation, it’s likely we’ll see this type of thing again.
This is a wake-up call for the medical device industry, and an opportunity to respond proactively by fixing the vulnerabilities that exist in its products. Ultimately, there is a strong safety and financial advantage to engaging those who can help find and responsibly disclose vulnerabilities. With an improved approach to device security in general, backed by a crowd of highly motivated and brilliant security researchers on their side, organizations can reduce both the risk of their patients or users being harmed through malicious action and the likelihood of becoming the next St. Jude Medical.