Rule 4: Secure keys reliably. Keys used for encryption and authentication must be protected: if they are compromised, an attacker can uncover sensitive data or impersonate a valid endpoint. For this reason, keys should be isolated from untrusted software. Keys stored in nonvolatile memory should always be encrypted, and decrypted only after secure boot verification. Because protection of patient data is paramount, especially in the EU, the use of high-assurance kernels and security modules also provides layered separation for a fail-safe design.

Keys need to be protected during manufacturing and throughout the product life cycle by an end-to-end security infrastructure. If a key is readable at any point, every device using it is vulnerable. An enterprise security infrastructure protects keys and digital trust assets across distributed supply chains, and it can also provide economic benefits beyond software updates, such as real-time device monitoring, counterfeit device protection, and license files that control the availability of optional features (see Figure 3).

Rule 5: Operate reliably. As all medical device designers know, among the biggest threats to a system are unknown design errors and defects introduced during the development of complex devices. To address these threats, it is good practice to apply the principles of high-assurance software engineering (PHASE). These principles include the following:

  • Minimal implementation — Code should perform only the functions required, avoiding “spaghetti code” that is neither testable nor maintainable.
  • Component architecture — Large software systems should be built up from components that are small enough to be easily understood and maintained; safety and security critical services should be separated from noncritical ones.
  • Least privilege — Each component should be given access to only the resources (e.g., memory, communications channels, I/O devices) that it absolutely needs.
  • Secure development process — High-assurance systems such as medical devices require a high-assurance development process; controls beyond those already in use, such as design tool security and secure coding standards, may be needed to ensure a secure design.
  • Independent expert validation — Evaluation by an established third party provides confirmation of security claims. It is also often required for certification. As with functional safety, components that have already been certified for cybersecurity are preferred as reliable building blocks in a new design.

These principles are used in the development of Green Hills Software’s INTEGRITY real-time operating system. When applied to application development, they minimize the likelihood and impact of a software error or a new cybersecurity attack.

Creating an End-to-End Security Solution

Building a secure medical system that meets the new regulatory environment requires an end-to-end security design that addresses data security and reliability within the networked device throughout the product life cycle. This calls for a device security architecture that safeguards operation, with keys, certificates, and sensitive data protected throughout operation and the manufacturing supply chain by an enterprise security infrastructure. The optimal choice of device and enterprise security solutions depends on the device's operating and manufacturing environments, as well as business tradeoffs, so it is advisable to consult experts in the field.

This article was written by Mary Sue Haydt, Field Applications Engineer for Green Hills Software, Santa Barbara, CA.