Cyber threats to healthcare delivery organizations (HDOs) and the medical device industry as a whole have reached a new level of maturity in the last year. A decade ago, the attack scene was dominated by academic papers about theoretical attacks on connected medical devices.1 Then we started seeing data breaches in which connected medical devices were used primarily as a means to access personal healthcare information.2 In the last year, however, at least two major events occurred where attackers directly monetized attacks on medical devices. In August 2016, a short seller openly published video and details about cybersecurity attacks on a medical device maker’s implantable cardiac devices. Predictably, the manufacturer’s stock fell.3 In May 2017, the WannaCry ransomware compromised both hospital systems and medical devices. While there are multiple reasons each attack succeeded, it is clear that attackers are becoming bolder, and medical devices are not immune.
The U.S. government is taking steps to address the issue. In 2012, the U.S. Congress became concerned about the safety impact of cybersecurity on devices, and the U.S. Government Accountability Office (GAO) subsequently recommended that FDA “should expand its consideration of information security for certain types of devices.”4 FDA provided premarket guidance in 2014 and postmarket guidance in 2016. FDA continues to refine its cybersecurity guidance and recently (May 2017) hosted a large public workshop on “Cybersecurity of Medical Devices: A Regulatory Science Gap Analysis.”5 Government, security researchers in academia, medical device companies, and HDOs participated to identify gaps and opportunities for improvement.
While FDA’s emphasis is on the impact cybersecurity has on safety, the U.S. Department of Health and Human Services (HHS) emphasis is on protecting health information privacy and security, e.g., the Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) rules. Cybersecurity also affects non-health-related business risk, where vulnerabilities in devices can be exploited and those devices can then be used to attack other connected devices or systems, not necessarily causing safety or privacy issues. While it did not involve medical devices, this happened in an extreme way in the fall of 2016 with the Mirai botnet.7
Much of the FDA guidance encourages proactively managing cybersecurity risk. AAMI TIR57 is an example of a risk management approach tuned to the medical device industry. An article in Medical Design Briefs also treated this in detail.6 The regulation and guidance identify what should be protected, rather than how it should be protected. This article summarizes steps medical device OEMs can take to improve product security. A number of resources are discussed throughout, with examples highlighted in the sidebar, “Resources.”
The first place to start is with training. You don’t expect someone picked at random off the street to be able to competently repair your car, so why would you expect someone without adequate training to write safe and secure software for your medical device? Sadly, most of the top university computer science programs skip cybersecurity as a requirement. Industry, however, provides many opportunities to obtain cybersecurity training as a professional.
Since attackers rapidly evolve (and publish) new attacks, security is not a “do it once” or “learn it once” proposition. Software developers must be familiar with existing vulnerabilities and watch for new vulnerabilities and exploits. Industry and government provide excellent resources for getting up to speed. For example, Microsoft’s STRIDE provides an abstract threat model that is a great way to think about the bad things attackers can do with or to a system. Free resources from the Open Web Application Security Project (OWASP) and from MITRE provide successively detailed breakdowns of weaknesses and vulnerabilities.
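To make the STRIDE model concrete, the sketch below enumerates one candidate threat per STRIDE category for a single interface of a hypothetical connected infusion pump. The threats listed are illustrative assumptions for discussion, not an actual analysis of any device.

```python
# A minimal STRIDE-style threat enumeration for one interface of a
# hypothetical connected infusion pump. Every threat below is an
# illustrative assumption, not a finding about a real product.
STRIDE = {
    "Spoofing":               "Attacker impersonates the drug-library server",
    "Tampering":              "Drug library modified in transit",
    "Repudiation":            "Dose changes made without an audit trail",
    "Information disclosure": "Patient data read off the network link",
    "Denial of service":      "Network flooding stalls library updates",
    "Elevation of privilege": "Network stack bug yields control of pump logic",
}

def enumerate_threats(interface: str) -> list[str]:
    """List one candidate threat per STRIDE category for an interface."""
    return [f"{interface}: {cat} - {threat}" for cat, threat in STRIDE.items()]

for line in enumerate_threats("Ethernet/drug-library port"):
    print(line)
```

Running this enumeration for each physical and logical interface of the device gives a first-pass threat list that can then be prioritized.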
Security does not happen by accident. It must be designed in. The first step is to capture comprehensive security requirements that address vulnerabilities and protect the confidentiality, integrity, and availability of the device. These will necessarily cut across privacy, safety, and business concerns. Requirements should focus on the desired properties, rather than on implementation solutions. The FDA premarket guidance recommends applying the National Institute of Standards and Technology (NIST) cybersecurity framework, specifically its core functions: identify, protect, detect, respond, and recover. Note that the latter three functions address adverse cybersecurity events after they occur.
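As a sketch of what property-focused (rather than implementation-focused) requirements might look like, the mapping below pairs each NIST framework core function with one example requirement. The requirement wording is hypothetical and for illustration only.

```python
# Illustrative mapping of the five NIST Cybersecurity Framework core
# functions to example property-focused device requirements. The
# requirement text is hypothetical, not drawn from any real device.
NIST_CSF = {
    "Identify": "Maintain an inventory of all device software and interfaces",
    "Protect":  "All remote commands shall be authenticated before execution",
    "Detect":   "The device shall log and flag anomalous command sequences",
    "Respond":  "The device shall enter a safe state on detected compromise",
    "Recover":  "An authenticated restore shall return the device to a known-good state",
}

for function, requirement in NIST_CSF.items():
    print(f"{function}: {requirement}")
```

Note that each entry states a property the device must have, leaving the choice of mechanism to the design phase.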
Since attacks continue to evolve, no connected device should be considered secure for all time. Therefore, manufacturers should provide a means to distribute security updates to address newly discovered vulnerabilities. Requirements should be developed before selecting design components. This is especially true for key components such as operating systems (OSs).
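A core requirement for any update mechanism is that the device reject images that fail integrity or authenticity checks. The sketch below illustrates the shape of that gate using Python's standard library; a production device would instead verify an asymmetric signature (e.g., ECDSA over the image) against a vendor public key anchored in immutable storage, so the HMAC here is only a stand-in and the key handling is hypothetical.

```python
import hashlib
import hmac

def verify_update(image: bytes, expected_digest: str, key: bytes, tag: str) -> bool:
    """Reject a firmware update unless integrity and authenticity checks pass.

    Sketch only: a real device would verify an asymmetric signature with a
    vendor public key burned into ROM, not a shared-key HMAC.
    """
    # Integrity: the image hash must match the manifest entry.
    digest = hashlib.sha256(image).hexdigest()
    if not hmac.compare_digest(digest, expected_digest):
        return False
    # Authenticity (stand-in): keyed MAC over the digest.
    mac = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, tag)
```

Crucially, the device applies the image only after `verify_update` succeeds; a single flipped byte in the image changes the digest and causes rejection.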
Companies often quickly build functional prototypes to prove out key concepts. This is fine as long as the prototype is replaced with a safe and secure product implementation. Too often, however, financial and time-to-market considerations lead to decisions that the prototype is good enough, and an attitude that the engineering team should simply slap on some of that security goodness before shipping. Unfortunately, the features that make rapid prototyping so effective, such as the ability to easily and quickly pull in additional functionality, remain in the shipped product, and those last-minute additions are exactly what make it easy for attackers to compromise.
The remainder of this article discusses principles that should be incorporated into a safe and secure medical device architecture. These principles are motivated by military, avionics, and process control systems, which have had to deal with cybersecurity for decades.
Minimal Interfaces. Relying solely on a strong perimeter defense is no longer sufficient. Attackers apply automated tools that tirelessly present combinations of inputs to the various device interfaces until they uncover vulnerabilities in that perimeter that allow them to gain a foothold in the device. They then “pivot” on this foothold and burrow their way deeper into the device or onto other devices on the same network. A key way to slow down or prevent this sort of attack is to reduce the physical and logical interfaces to only those essential for device functionality. This reduces the attack surface, i.e., the number of places that an attacker can try to get in.
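One way to enforce a minimal-interface policy in practice is to audit the device's open network services against an explicit allowlist of the ports essential to its function. The sketch below assumes a hypothetical device whose only essential service is a TLS channel; the port numbers are illustrative.

```python
# Sketch: flag any listening service outside an explicit allowlist of
# ports essential to device function. Port numbers are hypothetical.
ESSENTIAL_PORTS = {443}  # e.g., a single TLS channel for library updates

def audit_attack_surface(open_ports: set[int]) -> set[int]:
    """Return open ports outside the essential allowlist; each one
    enlarges the attack surface and should be disabled."""
    return open_ports - ESSENTIAL_PORTS

# A debug telnet (23) or web console (8080) left enabled gets flagged:
print(audit_attack_surface({443, 23, 8080}))
```

Running such an audit as part of the release build catches debug and diagnostic services before they ship.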
Separation. Next, each remaining interface should be isolated from the rest of the device’s functionality. For example, if an infusion pump requires a network connection, e.g., to maintain current drug libraries, the network stack and code associated with retrieving the drug library should be wholly separate from the software that monitors the rate at which the drug is being pumped. In safety architectures, each unit of separation is called a partition. Separation kernels and some real-time operating systems (RTOSs) provide strong separation. In general, commodity embedded OSs do not provide adequate separation without significant and specialized engineering efforts.
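The infusion-pump example above can be sketched with ordinary OS process isolation: the network-facing code runs in its own process and can influence pump control only through one narrow, validated message channel. The field names and rate limit below are hypothetical, and a real device would use a separation kernel or RTOS partitioning rather than Python processes; the point is the shape of the boundary.

```python
import multiprocessing as mp

MAX_RATE_ML_PER_HR = 999  # illustrative hard limit

def validate_library_entry(msg) -> bool:
    """The pump-control partition trusts nothing from the network
    partition; every message is checked against a fixed schema."""
    return (
        isinstance(msg, dict)
        and set(msg) == {"drug", "max_rate"}
        and isinstance(msg["drug"], str)
        and isinstance(msg["max_rate"], int)
        and 0 < msg["max_rate"] <= MAX_RATE_ML_PER_HR
    )

def network_partition(q):
    # Untrusted side: fetches drug-library entries (stubbed here).
    q.put({"drug": "heparin", "max_rate": 25})
    q.put({"drug": "heparin", "max_rate": 10**6})  # out of range, rejected

def pump_control_partition(q, accepted):
    # Trusted side: accepts only messages that pass validation.
    for _ in range(2):
        msg = q.get()
        if validate_library_entry(msg):
            accepted.put(msg)

if __name__ == "__main__":
    q, accepted = mp.Queue(), mp.Queue()
    p = mp.Process(target=network_partition, args=(q,))
    p.start()
    pump_control_partition(q, accepted)
    p.join()
    print(accepted.get())  # only the in-range entry crosses the boundary
```

Even if the network partition is fully compromised, the attacker can only send messages into the queue, and malformed or out-of-range messages never reach the pump logic.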
Least Privilege. Least privilege, a common recommendation for securing systems, refers to reducing access permissions and other privileges to only those needed to operate the deployed device. Unfortunately, the aforementioned rapid-prototyping efforts often assume that everything runs with root or administrator privileges. This makes development easy, since any software component can access everything on the device. It also gives an attacker free access to the device once they are in. Instead, once a critical software service is developed, its privileges should be reduced so that if an attacker compromises that service, the attacker can do nothing more than what the service was originally intended to do.
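On a Unix-like device, one concrete form of this is dropping root privileges as soon as privileged initialization (e.g., binding a low port) is complete. The sketch below assumes a hypothetical unprivileged account with uid/gid 65534 ("nobody" on many Linux systems); a real device would use a dedicated service account and, ideally, additional sandboxing.

```python
import os

def drop_privileges(uid: int = 65534, gid: int = 65534) -> bool:
    """Permanently shed root so a later compromise of this service
    cannot touch the rest of the device. Returns True if a drop
    occurred, False if the process was not privileged to begin with.
    The uid/gid default to "nobody" on many Linux systems (assumption)."""
    if os.geteuid() != 0:
        return False           # already unprivileged; nothing to drop
    os.setgroups([])           # clear supplementary groups first
    os.setgid(gid)             # drop group before user, or setgid fails
    os.setuid(uid)
    try:                       # verify the drop cannot be undone
        os.setuid(0)
    except PermissionError:
        return True
    raise RuntimeError("privilege drop did not stick")

print("dropped root:", drop_privileges())
```

The ordering matters: the group is dropped before the user, because once the process is no longer root it can no longer change its own group; the final check confirms the drop is irreversible.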