Cyber threats to healthcare delivery organizations (HDOs) and the medical device industry as a whole have reached a new level of maturity in the last year. A decade ago, the attack scene was dominated by academic papers about theoretical attacks on connected medical devices.1 Then came data breaches that targeted connected medical devices primarily as a means to access personal healthcare information.2 In the last year, however, at least two major events occurred in which attackers directly monetized attacks on medical devices. In August 2016, a short seller openly published video and details about cybersecurity attacks on a medical device maker’s implantable cardiac devices. Predictably, the device manufacturer’s stock fell.3 In May 2017, the WannaCry ransomware compromised both hospital systems and medical devices. While there are multiple reasons each attack succeeded, it is clear that attackers are becoming bolder, and medical devices are not immune.

The U.S. government is taking steps to address the issue. In 2012, the U.S. Congress became concerned about the safety impact of cybersecurity vulnerabilities in devices, and the U.S. Government Accountability Office (GAO) subsequently recommended that FDA “should expand its consideration of information security for certain types of devices.”4 FDA provided premarket guidance in 2014 and postmarket guidance in 2016. FDA continues to refine its cybersecurity guidance and recently (May 2017) hosted a large public workshop on “Cybersecurity of Medical Devices: A Regulatory Science Gap Analysis.”5 Government agencies, academic security researchers, medical device companies, and HDOs participated to identify gaps and opportunities for improvement.

While FDA’s emphasis is on the impact cybersecurity has on safety, the U.S. Department of Health and Human Services (HHS) emphasizes protecting health information privacy and security, e.g., through the Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH) rules. Cybersecurity also affects non-health-related business risk: vulnerabilities in devices can be exploited, and those devices can then be used to attack other connected devices or systems, without necessarily causing safety or privacy issues. While it did not involve medical devices, this happened in an extreme way in the fall of 2016 with the Mirai botnet.7

Much of the FDA guidance encourages proactively managing cybersecurity risk. AAMI TIR57 is an example of a risk management approach tuned to the medical device industry. An article in Medical Design Briefs also treated this in detail.6 The regulation and guidance identify what should be protected, rather than how it should be protected. This article summarizes steps medical device OEMs can take to improve product security. A number of resources are discussed throughout, with examples highlighted in the sidebar, “Resources.”


Fig. 1 ISOSCELES concept, showing secure separation layer and essential services on which medical device companies can build their own medical applications.

The first place to start is with training. You don’t expect someone at random off the street to be able to competently repair your car, so why would you expect someone without adequate training to write safe and secure software for your medical device? Sadly, most of the top university computer science programs skip cybersecurity as a requirement. Industry, however, provides many opportunities to obtain cybersecurity training as a professional.


Since attackers rapidly evolve (and publish) new attacks, security is not a “do it once” or “learn it once” proposition. Software developers must be familiar with existing vulnerabilities and watch for new vulnerabilities and exploits. Industry and government provide great resources to get up to speed. For example, Microsoft’s STRIDE provides an abstract threat model that is a great way to think about the bad things attackers can do with or to a system. Free resources from the Open Web Application Security Project (OWASP) and from MITRE provide successively detailed breakdowns of weaknesses and vulnerabilities.
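As a sketch of how STRIDE can be applied in practice, the fragment below enumerates the six STRIDE threat categories against each external interface of a hypothetical infusion pump. The interface names are illustrative assumptions, not taken from any real product; the point is that the model forces a systematic walk through every interface/threat pairing.

```python
# Sketch: applying the STRIDE categories to each external interface of a
# hypothetical connected infusion pump. Interface names are illustrative.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

interfaces = ["Wi-Fi drug-library update", "USB service port", "Nurse-call relay"]

def enumerate_threats(interfaces):
    """Produce one (interface, category) pair per threat case to analyze."""
    return [(iface, category) for iface in interfaces for category in STRIDE]

threats = enumerate_threats(interfaces)
print(f"{len(threats)} threat cases to analyze")  # 3 interfaces x 6 categories = 18
```

Each resulting pair becomes a row in a threat analysis worksheet, to be dispositioned with a mitigation or a documented rationale.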


Security does not happen by accident. It must be designed in. The first step is to capture comprehensive security requirements that address anticipated threats and protect the confidentiality, integrity, and availability of the device. These will necessarily cut across privacy, safety, and business concerns. Requirements should focus on the desired properties, rather than on implementation solutions. The FDA premarket guidance recommends applying the National Institute of Standards and Technology (NIST) Cybersecurity Framework, specifically its core functions: identify, protect, detect, respond, and recover. Note that the latter three elements are responses to events.

Since attacks continue to evolve, no connected device should be considered secure for all time. Therefore, manufacturers should provide a means to distribute security updates to address newly discovered vulnerabilities. Requirements should be developed before selecting design components. This is especially true for key components such as operating systems (OSs).
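A secure update mechanism hinges on the device verifying that an update is authentic before installing it. The sketch below shows the shape of that check; a production device should verify an asymmetric signature using a vetted cryptographic library, and the HMAC with a device-held key here is only an assumption made to keep the example dependency-free.

```python
import hashlib
import hmac

# Sketch: verifying a firmware update before staging it for installation.
# A real device should use asymmetric signatures from a vetted crypto
# library; an HMAC stands in here only to keep the sketch self-contained.
DEVICE_KEY = b"example-shared-secret"  # illustrative; securely provisioned in practice

def sign_update(image: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Compute the tag the manufacturer would ship alongside the image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_stage(image: bytes, signature: bytes, key: bytes = DEVICE_KEY) -> bool:
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(expected, signature)

firmware = b"\x7fELF...new-pump-firmware"  # illustrative payload
good_sig = sign_update(firmware)
```

A tampered image, or an image signed with the wrong key, fails verification and is never staged, which is the property the update requirement should demand.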

Companies often quickly build functional prototypes to prove out key concepts. This is fine as long as the prototype is replaced with a safe and secure product implementation. Instead, financial and time-to-market considerations can lead to decisions that the prototype is good enough, and an attitude that the engineering team can simply slap on some security goodness before shipping. Unfortunately, the features that make rapid prototyping so effective, such as the ability to easily and quickly pull in additional functionality, remain in the shipped product, and those hastily added components are exactly what attackers find easy to compromise.


The remainder of this article discusses principles that should be incorporated into a safe and secure medical device architecture. These principles are motivated by military, avionics, and process control systems, which have had to deal with cybersecurity for decades.

Minimal Interfaces. Relying solely on a strong perimeter defense is no longer sufficient. Attackers apply automated tools that tirelessly present combinations of inputs to the various device interfaces until they uncover vulnerabilities in that perimeter that allow them to gain a foothold in the device. They then “pivot” on this foothold and burrow deeper into the device or onto other devices on the same network. A key way to slow down or prevent this sort of attack is to reduce the physical and logical interfaces to only those essential for device functionality. This reduces the attack surface, i.e., the number of places where an attacker can try to get in.
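Attack-surface reduction can be made auditable by diffing the device’s configured interfaces against the minimal set its function requires. The sketch below assumes illustrative interface names; the value is the default-deny posture, where anything not on the essential list is flagged.

```python
# Sketch: auditing a device configuration against the minimal set of
# interfaces essential to its function. Interface names are illustrative.
ESSENTIAL = {"wifi_drug_library"}

def excess_interfaces(configured, essential):
    """Interfaces that enlarge the attack surface without being required."""
    return sorted(set(configured) - set(essential))

# A prototype configuration that kept its debug conveniences:
configured = {"wifi_drug_library", "bluetooth_debug", "telnet"}
print(excess_interfaces(configured, ESSENTIAL))  # ['bluetooth_debug', 'telnet']
```

Running such a check in the build pipeline turns “did we remember to disable telnet?” from a manual review item into a failing test.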

Separation. Next, each remaining interface should be isolated from the rest of the device’s functionality. For example, if an infusion pump requires a network connection, e.g., to maintain current drug libraries, the network stack and code associated with retrieving the drug library should be wholly separate from the software that monitors the rate at which the drug is being pumped. In safety architectures, each unit of separation is called a partition. Separation kernels and some real-time operating systems (RTOSs) provide strong separation. In general, commodity embedded OSs do not provide adequate separation without significant and specialized engineering efforts.
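The separation principle can be modeled as an explicit allow-list of communication channels between partitions, which is essentially what a separation kernel configures. The sketch below uses illustrative partition names for the infusion-pump example; note that the network stack has no path to pump control at all, only to the drug library.

```python
# Sketch: enforcing a partition communication policy of the kind a
# separation kernel configures. Only explicitly allowed (sender, receiver)
# channels may carry messages; partition names are illustrative.
ALLOWED = {
    ("network_stack", "drug_library"),   # library updates flow inward only
    ("drug_library", "pump_control"),    # validated data reaches the pump
}

def send(sender: str, receiver: str, message: bytes) -> bool:
    if (sender, receiver) not in ALLOWED:
        return False  # a separation kernel would block this at the channel level
    # ... deliver the message to the receiving partition ...
    return True
```

A compromised network stack can thus corrupt at most the drug-library partition, which can validate its input, rather than reaching the rate-monitoring software directly.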

Least Privilege. Least privilege, a common recommendation for securing systems, refers to reducing access permissions and other privileges to only those needed to operate the deployed device. Unfortunately, the aforementioned rapid-prototyping efforts often assume that everything runs with root or administrator privileges. This makes development easy, since any software component can access everything on the device. It also gives an attacker free access to the device once they are in. Instead, once a critical software service is developed, its privileges should be reduced so that if an attacker compromises that service, they can do nothing more than what the service was originally intended to do.
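One way to make least privilege concrete is a per-service privilege table with deny-by-default semantics: each service is granted only the operations it needs, and everything else is refused. The service and operation names below are illustrative assumptions.

```python
# Sketch: a per-service privilege table, reduced to only the operations
# each deployed service needs. Names are illustrative.
PRIVILEGES = {
    "log_uploader": {"read_log"},
    "pump_control": {"read_sensor", "set_rate"},
}

def perform(service: str, operation: str) -> bool:
    """Deny by default: anything not explicitly granted is refused."""
    return operation in PRIVILEGES.get(service, set())
```

Under this table, an attacker who compromises the log uploader can read logs but cannot change the infusion rate, which is the whole point of the principle.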

Tire tracks in the grass around a security barrier, demonstrating bypassability of a poorly thought out security control. (Credit: unknown)

Strip Unintended Functionality. Related to least privilege, another technique is to remove unnecessary functionality. For example, a common software development approach is to include existing software libraries rather than develop new software from scratch. A developer can often acquire a library, write some wrapper code, and have new functionality prototyped after just a few hours. That library, however, is often just that: a collection of many different functions. The developer might need only one function but pulls in all of them by including the library. Those other functions may contain flaws that an attacker can exploit. Both developers and attackers can acquire automated tools that search these libraries for known vulnerabilities. Developers should either move to libraries that do not have those vulnerabilities or strip out the vulnerable functionality.

Non-bypassability. Attackers are like water: they will seep through the smallest crack. Non-bypassability prevents an attacker from easily working around security controls. Human and machine users of device interfaces should be authenticated, and specific actions the user is permitted to execute should be authorized based on the user. These authentication and authorization steps should not be bypassable, except for safety critical “break glass” functionality, which should be observable if used.
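The sketch below shows the shape of a single non-bypassable gate: every command passes through one authorization check, and the safety-critical “break glass” path succeeds but leaves an audit record so its use is observable. User and action names are illustrative.

```python
# Sketch: one non-bypassable gate in front of every command, with an
# audited "break glass" path for safety-critical overrides. All names
# are illustrative.
AUTHORIZED = {("nurse_jones", "pause_infusion")}
audit_log = []

def execute(user: str, action: str, break_glass: bool = False) -> bool:
    if break_glass:
        # Permitted, but observable: every use is recorded for review.
        audit_log.append(("BREAK_GLASS", user, action))
        return True
    if (user, action) not in AUTHORIZED:
        audit_log.append(("DENIED", user, action))
        return False
    audit_log.append(("OK", user, action))
    return True
```

Because the check and the logging live in one code path that all callers share, there is no alternate route for an attacker to invoke the action without either authorization or an audit trail.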

Controls. Responding to new attacks can require new or updated approaches. Yet again, industry comes to the rescue, with resources such as the Center for Internet Security’s Critical Security Controls. While these controls were developed for typical IT systems, each control can relate to medical devices. Developers might consider each control, and if the control is not selected for the medical device, document precisely why it is unnecessary. For example, version 6.1 has a control specifically around e-mail and Web browser protections. A connected infusion pump might not have e-mail, so that function can be documented as “Not Applicable.” If, however, the pump uses e-mail as a mechanism for reporting or file transfer, then those controls apply. Note, however, that while these controls are necessary, they should not be treated as a “sufficient checklist” for security.
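The consider-and-document discipline described above can itself be checked mechanically: record an applicability decision and rationale for each control, and flag any control that was never dispositioned. The control names below paraphrase CIS Critical Security Controls v6.1 entries; the device details are illustrative.

```python
# Sketch: recording an applicability decision, with rationale, for each
# security control considered for a device. Device details are illustrative.
decisions = {
    "Email and Web Browser Protections": (
        "Not Applicable", "Pump has no e-mail client or Web browser."),
    "Controlled Use of Administrative Privileges": (
        "Applicable", "Service technician account must be restricted."),
}

def undocumented(decisions, required_controls):
    """Controls that were never dispositioned must not slip through review."""
    return [c for c in required_controls if c not in decisions]
```

A review gate that fails while `undocumented(...)` is non-empty ensures every control receives an explicit decision rather than silent omission.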


To help the medical device industry support these architectural principles, the U.S. Department of Homeland Security (DHS) funded several programs to advance Cyber-Physical System (CPS) security. One of these programs — Intrinsically Secure, Open, and Safe Cyber-physically Enabled, Life-critical Essential Services (ISOSCELES) — is developing a reference architecture that enables medical device products to be built on a secure foundation. ISOSCELES provides a separation layer and essential services for connected medical devices (see Figure 1). ISOSCELES will be released as open source, including requirements, model-based systems engineering tools for analysis and configuration, and example hardware and software implementations. Users adopting the ISOSCELES architecture will be able to select their own hardware and build their own medical applications on top of these services.

For example, with ISOSCELES, networking functions are wholly separate from the safety monitors. In an infusion pump, these safety monitors would limit the maximum rate at which a drug is delivered. ISOSCELES relies on model-based systems engineering tools to specify, analyze, and configure the separation architecture. The specification includes the allowed communications patterns between the partitions, which enables the tools to identify when safety-critical partitions are mistakenly connected to external networking interfaces. This will prevent errors caused by manual configuration, and it will also catch errors when developers create a connection “just for debugging.”
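The kind of check such a configuration tool performs can be sketched as a reachability analysis over the declared partition architecture: no safety-critical partition may be reachable, even transitively, from an external interface. This is an illustrative sketch, not the ISOSCELES tooling itself, and the partition names are assumptions.

```python
# Sketch: flagging safety-critical partitions that are reachable, even
# transitively, from external interfaces in a declared partition
# architecture. Partition names are illustrative.
CHANNELS = {
    ("external_net", "network_stack"),
    ("network_stack", "drug_library"),
    ("debug_shell", "safety_monitor"),   # a connection made "just for debugging"
}
SAFETY_CRITICAL = {"safety_monitor", "pump_control"}
EXTERNAL = {"external_net", "debug_shell"}

def reachable(start, channels):
    """All partitions reachable from `start` by following channels forward."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for src, dst in channels:
            if src == node and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

def violations(channels):
    return sorted(
        (ext, part)
        for ext in EXTERNAL
        for part in reachable(ext, channels) & SAFETY_CRITICAL
    )

print(violations(CHANNELS))  # [('debug_shell', 'safety_monitor')]
```

The analysis catches exactly the failure mode described above: the debugging channel exposes the safety monitor, while the legitimate network path stops at the drug library.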


Developing a new medical device with security built in may reduce the time to market for products by addressing the new FDA guidelines for safe and secure devices. Once in the field, devices that adopt these principles will be better prepared to address newly discovered security vulnerabilities, while leaving the safety-critical components of the medical application untouched.


AppCheck, Commercial Tool – Searches Binaries for Known Vulnerabilities, 

CIS Controls, 


LynxSecure, Commercial Separation Kernel, 

SANS, Information Security Training,

seL4, Open Source Separation Kernel, 

AAMI, “AAMI TIR57: Principles for Medical Device Security — Risk Management,” 

HIPAA, “Health Information Privacy,” 

IEEE, “Most Top Computer Science Programs Skip Cybersecurity,” 

OWASP, “OWASP Top Ten Project,”

FDA, “Postmarket Management of Cybersecurity in Medical Devices,” 

FDA, “Public Workshop – Cybersecurity of Medical Devices: A Regulatory Science Gap Analysis,” May 18–19, 2017, 

Microsoft, “The STRIDE Threat Model,” 

ASD, “Top 4 Strategies to Mitigate Targeted Cyber Intrusions: Mandatory Requirement Explained,” 

This article was written by Todd Carpenter, Chief Engineer and Co-owner of Adventium Labs, Minneapolis, MN.


  1. “Security and Privacy for Implantable Medical Devices,” IEEE, http:// 4431854
  2. “MEDJACK: Hackers Hijacking Medical Devices to Create Backdoors in Hospital Networks,” Computerworld,
  3. “Short Seller Muddy Waters Renews Claims of St. Jude Medical Cyber Vulnerabilities,” Marketwatch,
  4. “FDA Should Expand Its Consideration of Information Security for Certain Types of Devices,”
  5. “Content of Premarket Submissions for Management of Cybersecurity in Medical Devices,” FDA, guidancedocuments/ucm356190.pdf
  6. “Managing Cybersecurity Risks in Connected Medical Devices,” Medical Design Briefs,
  7. “Did the Mirai Botnet Really Take Liberia Offline?” Krebs on Security, November 4, 2016,