It doesn’t take much of a crystal ball to foresee that the way we deliver health care in the United States is going to be vastly different in the coming years. The causes of this change are threefold:
- People from the “Baby Boom” generation are being added to the Medicare rolls at the rate of about 10,000 per day, and living longer
- The Affordable Care Act is adding 30 million people to the rolls of those who will now have medical insurance coverage, and
- There is a shortage of physicians and healthcare workers to take care of all these people.
Steven Dean, Medical Market Lead for the Americas at Freescale, recently said, “Over the next decade, we will be witness to one of the greatest transformations in the history of the healthcare industry, as our culture shifts from focusing on treating life-threatening symptoms to preventing them in the first place.”
It is also clear that we have the capability to affect how we deliver health care, changing from a “sick care” model to a “well care” model — and embedded computer technology will be at the forefront as it holds the promise to reduce the costs and frequency of doctor and hospital visits.
System Implications
New medical devices that shift the healthcare delivery model to prevention will come in all sorts of sizes and involve a myriad of complexities, computers, and connections. There will be defined-function devices that employ 8- and 16-bit processors with minimal code (ROM) and data (RAM) footprints. There will be others that have periodic real-time data acquisition requirements that need more crunch. And still others will have full-blown user interfaces that provide a wide range of options, modes, and capabilities. All of them are likely to have some form of connectivity to hubs or directly to the Cloud. At the center of all that will be the operating environments for the computers these devices use. If history is any predictor of the future, there will be a variety of them, too. Most likely, we will see continued use of the operational models we use today. Let’s examine some of the usual suspects starting at the simplest and going up the food chain of increasing complexity.
The Super Loop
The Super Loop has been around since the dawn of embedded systems. Industry surveys tell us that a significant percentage of products do not use an operating system at all. Instead, they use a simple ad hoc Super Loop in which all functions of the application take place within the context of a single program or set of functions.
The Super Loop is intended to run all or most of the time, looking for some portion of the application to perform. When it finds something to do, it performs the function and then continues the search for other things to do. The loop repeats endlessly. While there are many possible variations, it is very simple and fits applications that have basic or fixed functionality.
Take a simple heart-rate monitor as an example. In Fig. 1, we can see that the application has functions that read an analog-to-digital converter (ADC), determine the completeness of the PQRS waveform, calculate the instantaneous heart rate for completed waveforms, show the heart rate on an LCD, check for irregular heart rates, and issue an alert if one is detected. And then it starts all over again.
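In code, the entire application could be as small as the sketch below. This is only an illustrative outline; the function names are hypothetical placeholders for the monitor’s driver and analysis routines, since the article itself does not include code.

```c
/* A minimal Super Loop sketch for the heart-rate monitor described above.
 * All called functions are hypothetical placeholders for device drivers
 * and analysis routines; everything runs in one endless loop, no OS at all. */
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical driver/analysis functions (not defined in the article) */
extern uint16_t read_adc(void);
extern bool     waveform_complete(uint16_t sample);
extern uint16_t calc_heart_rate(void);
extern void     lcd_show_rate(uint16_t bpm);
extern bool     rate_is_irregular(uint16_t bpm);
extern void     issue_alert(void);

int main(void)
{
    uint16_t sample;
    uint16_t bpm = 0;

    for (;;) {                              /* the loop repeats endlessly  */
        sample = read_adc();                /* 1. acquire the next sample  */
        if (waveform_complete(sample)) {    /* 2. full PQRS waveform seen? */
            bpm = calc_heart_rate();        /* 3. instantaneous heart rate */
            lcd_show_rate(bpm);             /* 4. update the LCD           */
            if (rate_is_irregular(bpm)) {   /* 5. check for irregularity   */
                issue_alert();              /* 6. alert if detected        */
            }
        }
    }                                       /* ...and start all over again */
}
```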
For small, sensor-based applications and wearable devices using processors with limited RAM and ROM, the Super Loop model may be a good choice. As shown in Fig. 2, one of the main caveats to its use is the variation in loop time caused by the different paths taken (the software resistor on the right) through successive iterations of the loop. That variation can be unacceptably large, making the Super Loop a less-than-ideal choice for applications with hard real-time requirements.
Cooperative Multitasking
Up the food chain from the Super Loop is a structured OS architecture that performs the application’s work as a series of tasks, each of which does some portion of the overall job. As shown in Fig. 3, the Scheduler grants control of the processor to a task in response to some stimulus. Note that the only contextual element of the task is its entry point, which makes switching from one task to another very rapid.
Once in control, the task then runs until it reaches a point of completion and returns to the scheduler, which selects the next task to run. The model allows for prioritization of stimuli (interrupts) so that tasks responding to higher priority stimuli can preempt tasks performing lower priority operations. Except for preemption, once a task gets control of the CPU, it retains control until it completes its work. It is only when the task voluntarily relinquishes that control that the Scheduler can select the next task and give it control of the CPU, thus the term run-to-completion.
Knowing how to schedule multiple tasks by assigning the proper priorities is a vital part of the model. Rate Monotonic Analysis (RMA) and Deadline Monotonic Analysis (DMA) are two ways to approach the scheduling problem. Space does not permit a full discussion of these mathematical models, but both help determine task priorities and the level of processor utilization at which all the tasks can still meet their respective deadlines.
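As one brief illustration of what these methods provide (the classic Liu and Layland utilization bound used in RMA; a full treatment is beyond this article), a set of n independent periodic tasks with deadlines equal to their periods is guaranteed to meet all deadlines under rate-monotonic priorities if total CPU utilization does not exceed n(2^(1/n) − 1). A quick check of that bound, using made-up task numbers, might look like this:

```c
/* A quick schedulability check using the classic RMA utilization bound
 * U <= n * (2^(1/n) - 1). The execution times and periods below are
 * made-up illustrative numbers, not taken from the article. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* worst-case execution time and period for each task, in milliseconds */
    const double C[] = { 1.0, 2.0, 5.0 };
    const double T[] = { 10.0, 25.0, 50.0 };
    const int    n   = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++) {
        U += C[i] / T[i];                       /* total CPU utilization */
    }

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* ~0.78 for n = 3 */
    printf("U = %.3f, bound = %.3f -> %s\n", U, bound,
           (U <= bound) ? "guaranteed schedulable" : "needs further analysis");
    return 0;
}
```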
Regardless of the means one uses to assign task priorities, the Cooperative Multitasking model is a very straightforward approach. It is very well suited for signal processing algorithms, processing states in a finite state machine (FSM), or simple run-to-completion (RTC) application functions. As with the Super Loop, it can take on many forms, having lesser or greater degrees of structure. It can be as simple as switching between the functions that comprise the individual tasks, or it could be a more structured model that makes use of objects and related services to support the execution of the tasks, as well as pass data, synchronize with events, and handle time-based operations. Overall, it positions the application for better design, maintainability, and product reliability than the Super Loop.
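At the simple end of that spectrum, a cooperative scheduler can be little more than a prioritized table of functions and a set of ready flags. The sketch below is a hypothetical, bare-bones illustration, not code from any particular product:

```c
/* A bare-bones cooperative (run-to-completion) scheduler sketch. Each task
 * is an ordinary C function that runs until it returns; the scheduler then
 * picks the highest-priority ready task. All names are hypothetical. */
#include <stdint.h>

#define NUM_TASKS 3

typedef void (*task_fn_t)(void);

static void sample_task(void)  { /* read the sensor, then return        */ }
static void filter_task(void)  { /* run the DSP/FSM step, then return   */ }
static void display_task(void) { /* refresh the display, then return    */ }

/* Table ordered by priority: index 0 is the highest-priority task. */
static const task_fn_t task_table[NUM_TASKS] = {
    sample_task, filter_task, display_task
};

/* Set by interrupt service routines to request that a task be run. */
static volatile uint8_t ready_flags[NUM_TASKS];

void scheduler(void)
{
    for (;;) {
        for (int i = 0; i < NUM_TASKS; i++) {   /* highest priority first  */
            if (ready_flags[i]) {
                ready_flags[i] = 0;
                task_table[i]();                /* runs to completion...   */
                break;                          /* ...then rescan from top */
            }
        }
        /* Optionally sleep here until the next interrupt sets a flag. */
    }
}
```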
One of the main advantages of the Cooperative Multitasking model is that it has a small code footprint and is very efficient in its use of RAM, because it requires only a single stack for all tasks. Additionally, it usually has very low execution overhead because the saving and restoring of processor contexts is limited to incidents of preemption. For medical devices that have real-time requirements and need to run on smaller 8- or 16-bit processors, the Cooperative Multitasking model could be an excellent choice.
Preemptive Multitasking Real-Time Operating System
The Preemptive Multitasking Real-Time Operating System (RTOS) is a well-known operating system model due to its extensive use over many years. Developers of embedded systems commonly employ it because of its flexibility and proven ability to handle a broad spectrum of applications.
The ability of one task to preempt another based upon priority is one of the main reasons this model has survived over the years. A task with a higher priority can preempt the running task and assume CPU control, allowing the RTOS to schedule tasks in response to real-time demands.
In this RTOS model, the system developer decomposes the application into a set of tasks (or threads) and gives each a priority based on its importance relative to all the other tasks in the application. Each of the tasks exists in one of three general states: Running, Ready, or Blocked. Figure 4 shows one task in a Running state and in control of the CPU. The other tasks are in a Ready state, waiting to get CPU control when the Scheduler deems it appropriate, or otherwise Blocked.
Blocked tasks are waiting for some resource to become available and, while in the Blocked state, consume no CPU cycles. When a task has all the resources it needs, its state becomes Ready, meaning it is ready to take control of the CPU when allowed to do so. As in the Cooperative Multitasking model, the traffic cop that determines which task gets control of the CPU is the RTOS Scheduler. It uses the pre-assigned priorities of the Ready tasks to make certain that, at any given time, the highest-priority Ready task has CPU control.
One of the main differences between the Cooperative and the Preemptive Multitasking models is easily seen in Figures 3 and 4, where a task context in Fig. 4 is larger than that in Fig. 3. Each task in the Preemptive Multitasking RTOS has a context that includes the processor’s registers, status, and program counter (PC). All of that context must be saved and restored whenever there is a task switch. It’s not instantaneous, but it is not unusual to see context switch times measured in a few microseconds or even less than one microsecond.
While not every task in an application has a hard real-time requirement, the critical tasks can be assured of getting control of the CPU to meet a deadline. However, each task must have a dedicated RAM stack set aside for its specific use. The RTOS uses the stack for saving and restoring the processor’s register set (its context) when the task becomes either Blocked or Running. For small processors with limited memory, this dedicated RAM can be a major hurdle.
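To make that RAM cost concrete, the per-task bookkeeping in a preemptive RTOS typically resembles the sketch below. The layout and sizes are purely illustrative; real task control blocks are processor- and product-specific.

```c
/* Illustrative sketch of what a preemptive RTOS keeps for each task. The
 * exact layout is processor- and product-specific; this is not any
 * particular RTOS's structure. */
#include <stdint.h>

#define TASK_STACK_WORDS 256                 /* dedicated RAM per task      */

typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;

typedef struct {
    uint32_t     *stack_ptr;                 /* saved stack pointer; the
                                                registers, status word, and
                                                PC are typically saved on the
                                                task's own stack             */
    uint32_t      stack[TASK_STACK_WORDS];   /* 1 KB of RAM for this task
                                                alone (illustrative size)    */
    task_state_t  state;                     /* Running, Ready, or Blocked   */
    uint8_t       priority;                  /* fixed priority assigned by
                                                the developer                */
} task_control_block_t;
```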
The Preemptive Multitasking RTOS also provides a library of services that tasks and interrupt service routines can utilize to achieve the desired behavior. The typical RTOS groups available services according to the class of an associated object. Common examples would include Tasks to manage the execution of the application code, Queues to move data from a producer entity to a consumer entity, Semaphores or Event Flags for synchronization with internal or external events, and Timers or Alarms to keep track of time-based operations. Additionally, classes are often included that manage RAM and provide exclusive access to resources.
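The short sketch below shows how those services fit together in practice: a high-priority sampling task feeds readings through a Queue to a lower-priority analysis task. FreeRTOS is used here only because its API is widely known; the article does not name a specific RTOS, and the driver and analysis calls are hypothetical.

```c
/* Producer/consumer sketch using FreeRTOS purely as an illustration.
 * A high-priority sampling task sends readings through a queue to a
 * lower-priority analysis task; the scheduler preempts by priority. */
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

/* Hypothetical application hooks, not part of FreeRTOS or the article */
extern uint16_t read_adc(void);
extern void     update_heart_rate(uint16_t sample);

static QueueHandle_t xSampleQueue;      /* moves data from producer to consumer */

static void vSamplerTask(void *pvParameters)     /* higher priority, hard deadline */
{
    (void)pvParameters;
    for (;;) {
        uint16_t usSample = read_adc();
        xQueueSend(xSampleQueue, &usSample, 0);  /* never block the sampler   */
        vTaskDelay(pdMS_TO_TICKS(4));            /* 250 Hz sample period      */
    }
}

static void vAnalysisTask(void *pvParameters)    /* lower priority, background */
{
    (void)pvParameters;
    uint16_t usSample;
    for (;;) {
        /* Blocked (consuming no CPU) until a sample arrives */
        if (xQueueReceive(xSampleQueue, &usSample, portMAX_DELAY) == pdTRUE) {
            update_heart_rate(usSample);
        }
    }
}

void start_application(void)
{
    xSampleQueue = xQueueCreate(32, sizeof(uint16_t));
    xTaskCreate(vSamplerTask,  "sample",  configMINIMAL_STACK_SIZE, NULL, 3, NULL);
    xTaskCreate(vAnalysisTask, "analyse", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler();                       /* hands control to the RTOS */
}
```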
The Preemptive Multitasking RTOS model, while fairly simple conceptually, can be complex to implement and validate, depending on the classes of objects it supports and the processor on which it runs. To simplify moving to another processor, most providers have programmed the RTOS in a language such as C or C++. Compilers manage most of the differences between processors, but one small portion of the RTOS model, interrupt management, remains very processor-specific and is often coded in non-portable ways, usually in assembly language. Source code for most commercial RTOS products is generally available from their vendors. For medical applications, having the source code is highly desirable for application certification and long-term support.
General Purpose Operating System
There is another operating system used in embedded applications — the general purpose operating system (GPOS). Some are free and include source code. Others are proprietary and source code is not generally available. In the free space, Linux and Android are popular choices. Proprietary products are available from Microsoft, Apple, and HP.
Many employ a model that allows separation of the application into processes, each of which has a defined address space protected by the processor’s memory management unit (MMU) or memory protection unit (MPU) and associated cache memories. Within a process, the model resembles that of the multitasking RTOS. Each process, with its associated tasks (or threads), gets some access to the processor as determined by the system Scheduler. As a result, the Scheduler can be a very complex and sophisticated piece of programming.
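On Linux, for example, that within-a-process view can be seen in a minimal POSIX-threads program such as the illustrative sketch below (the thread bodies are empty placeholders).

```c
/* A minimal POSIX-threads sketch showing the "tasks within a process" view
 * on a GPOS such as Linux. The work functions are empty placeholders. */
#include <pthread.h>

static void *ui_thread(void *arg)
{
    (void)arg;
    /* drive the user interface; shares the process address space */
    return NULL;
}

static void *comms_thread(void *arg)
{
    (void)arg;
    /* push readings to a hub or the Cloud */
    return NULL;
}

int main(void)
{
    pthread_t ui, comms;

    pthread_create(&ui,    NULL, ui_thread,    NULL);
    pthread_create(&comms, NULL, comms_thread, NULL);

    pthread_join(ui, NULL);        /* both threads are scheduled by the      */
    pthread_join(comms, NULL);     /* GPOS scheduler, not by the application */
    return 0;
}
```

Build with the -pthread compiler flag; when and how often each thread runs is decided by the GPOS scheduler, not by the application.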
The typical GPOS is usually a full-blown system that offers complete user interfaces such as one sees on a smartphone or tablet. It offers lots of options for the developer to deliver very sophisticated devices. In providing all that capability, the typical GPOS comes with a big code footprint and an unrelenting hunger for RAM, without which it cannot operate efficiently. It is not uncommon for just the GPOS Scheduler alone to have a larger code footprint than the typical Preemptive Multitasking RTOS in its entirety.
A GPOS is not suitable for applications in which there are required response times measured in a few microseconds or less. Millisecond response times are much more manageable, but one must be careful to verify that the GPOS Scheduler can perform in such a way that worst-case conditions are survivable.
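On Linux, for instance, a common first step before measuring worst-case behavior is to request a fixed real-time scheduling policy and lock the process into RAM, along the lines of the illustrative fragment below (the priority value and error handling are arbitrary):

```c
/* Common Linux preparation before measuring worst-case response times:
 * request a fixed real-time priority and keep the process out of swap.
 * The priority value and error handling are illustrative only. */
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

int make_realtime(void)
{
    struct sched_param sp = { .sched_priority = 80 };   /* illustrative value */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {  /* needs privileges   */
        perror("sched_setscheduler");
        return -1;
    }
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {      /* avoid page faults  */
        perror("mlockall");
        return -1;
    }
    return 0;
}
```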
Multicore processors offer a way to mitigate some of that survivability risk by isolating those parts of the application that are best suited to a GPOS environment from those that are more reliable in a multitasking RTOS or Cooperative Multitasking model. There are also virtualization models in which the GPOS and the RTOS can reside on a single core. Further evolution is likely as the demand for new medical devices grows in the coming years.
Certification/Certifiability
Because so many of these devices deal directly with patient care, the developer faces certification of the system by government regulators. Today, that regulator is the FDA. Tomorrow, given the expected flood of new devices, it may be a different governmental body. But whatever the certifying agency, it will have its hands full handling the multitude of new systems and devices.
At the core of these systems that are subject to certification, there will likely be an operating system like those described in this article. And the system developer will need some sort of certification-compatible set of documentation. If it is a home-grown OS, it will be necessary to go through the steps of a software development lifecycle (SDLC) with all of its stages of documentation and testing. Some commercial operating systems already have the necessary documentation and test package available. Even though it will cost money to acquire such a package, it will be a fraction of what it would cost to develop it as a one-off for an internal solution.
More than likely, the functions of a device that do not deal directly with patient-critical operations will not need to undergo the same level of scrutiny that would be experienced by one that does. Thus, the GPOS may get a pass while the RTOS may not. However, even in the GPOS, those portions that deal with secure data transmissions might need certification by a regulatory body.
Summary
In summary, telehealth and remote well care could play a significant role in reducing the cost of health care in the U.S. as Baby Boomers age. Boomers embrace technology in their lives already, and there is a reasonable expectation that they would accept the innovative devices and personal health technology “prescribed” by their insurance provider or their doctor. But it is ultimately up to the current healthcare system to shift its focus. Until that happens, the pace of medical device development will be slow — but it will happen, because the demographics are irrefutable. So when it begins, get ready for a tidal wave of activity as the new technologies work their way into the mainstream. It will be exciting.
This article was written by Tom Barrett, President of Quadros Systems, Houston, TX.