Features

It doesn’t take much of a crystal ball to foresee that the way we deliver health care in the United States is going to be vastly different in the coming years. Three factors are driving this change:

  • People from the “Baby Boom” generation are being added to the Medicare rolls at a rate of about 10,000 per day, and they are living longer.
  • The Affordable Care Act is adding 30 million people to the rolls of those who will now have medical insurance coverage.
  • There is a shortage of physicians and healthcare workers to care for all of these people.

Fig. 1 – Super Loop path in a simple heart-rate monitor.

Steven Dean, Medical Market Lead for the Americas at Freescale, recently said, “Over the next decade, we will be witness to one of the greatest transformations in the history of the healthcare industry, as our culture shifts from focusing on treating life-threatening symptoms to preventing them in the first place.”

It is also clear that we have the capability to affect how we deliver health care, changing from a “sick care” model to a “well care” model — and embedded computer technology will be at the forefront as it holds the promise to reduce the costs and frequency of doctor and hospital visits.

System Implications

New medical devices that shift the healthcare delivery model to prevention will come in all sorts of sizes and involve a myriad of complexities, computers, and connections. There will be defined-function devices that employ 8- and 16-bit processors with minimal code (ROM) and data (RAM) footprints. There will be others that have periodic real-time data acquisition requirements that need more crunch. And still others will have full-blown user interfaces that provide a wide range of options, modes, and capabilities. All of them are likely to have some form of connectivity to hubs or directly to the Cloud. At the center of all that will be the operating environments for the computers these devices use. If history is any predictor of the future, there will be a variety of them, too. Most likely, we will see continued use of the operational models we use today. Let’s examine some of the usual suspects starting at the simplest and going up the food chain of increasing complexity.

The Super Loop

Fig. 2 – One of the main caveats of the Super Loop is the time difference that can occur in the different paths taken (the software resistor shown on the right) through successive iterations of the loop.

The Super Loop has been around since the dawn of embedded systems. Industry surveys tell us that a significant percentage of products do not use an operating system at all. Instead, they use a simple ad hoc Super Loop in which all functions of the application take place within the context of a single program or set of functions.

The Super Loop is intended to run all or most of the time, looking for some portion of the application to perform. When it finds something to do, it performs that function and then continues the search for other things to do. The loop repeats endlessly. While there are many possible variations, the model is very simple and fits applications with basic or fixed functionality.

Take a simple heart-rate monitor as an example. In Fig. 1, we can see that the application has functions that read an analog-to-digital converter (ADC), determine completeness of the PQRS waveform, calculate the instantaneous heart rate for completed waveforms, show the heart rate on an LCD, check for irregular heart rates, and issue an alert if one is detected. Then the loop starts all over again.
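
In code, the entire application can be a single endless loop. The C sketch below mirrors the flow in Fig. 1; the function names and their stub bodies are illustrative placeholders for the real ADC, waveform, display, and alarm routines, not an actual device API.

```c
/* Super Loop sketch of the heart-rate monitor in Fig. 1 (illustrative stubs only) */
#include <stdbool.h>
#include <stdint.h>

static uint16_t read_adc(void)               { return 0; }                    /* sample the ECG front end  */
static bool     qrs_complete(uint16_t s)     { (void)s; return true; }        /* full PQRS waveform seen?  */
static uint16_t calc_heart_rate(void)        { return 72; }                   /* instantaneous rate in bpm */
static void     lcd_show_rate(uint16_t bpm)  { (void)bpm; }                   /* update the LCD            */
static bool     rate_irregular(uint16_t bpm) { return bpm < 40 || bpm > 180; }
static void     issue_alert(void)            { }

int main(void)
{
    for (;;) {                                   /* the loop repeats endlessly    */
        uint16_t sample = read_adc();            /* 1. read the ADC               */

        if (qrs_complete(sample)) {              /* 2. waveform complete?         */
            uint16_t bpm = calc_heart_rate();    /* 3. compute instantaneous rate */
            lcd_show_rate(bpm);                  /* 4. show it on the LCD         */

            if (rate_irregular(bpm))             /* 5. irregular rate detected?   */
                issue_alert();                   /* 6. raise the alert            */
        }
        /* ...then start the search for work all over again */
    }
}
```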

For small, sensor-based applications and wearable devices using processors with limited RAM and ROM, the Super Loop model may be a good choice. As shown in Fig. 2, one of the main caveats to its use is the time difference that can occur due to different paths taken (the software resistor on the right) through successive iterations of the loop. The time difference may vary to an unacceptable degree, making it a less-than-ideal choice for applications with hard real-time requirements.
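
One informal way to quantify that caveat is to timestamp each pass through the loop and track the spread between the shortest and longest iterations. In the sketch below, the tick source and the loop body are stubs standing in for a real hardware timer and the real application work.

```c
/* Sketch: tracking the loop-time variation shown in Fig. 2 (stubs are illustrative) */
#include <stdint.h>

static uint32_t fake_ticks;                                      /* stand-in for a free-running hardware timer */
static uint32_t get_tick_us(void)  { return fake_ticks += 100; }
static void     do_loop_work(void) { /* one pass through the application; path length varies */ }

int main(void)
{
    uint32_t best = UINT32_MAX, worst = 0;

    for (;;) {
        uint32_t start   = get_tick_us();
        do_loop_work();
        uint32_t elapsed = get_tick_us() - start;

        if (elapsed < best)  best  = elapsed;                    /* shortest path seen so far */
        if (elapsed > worst) worst = elapsed;                    /* longest path seen so far  */

        /* (worst - best) is the loop-time variation Fig. 2 warns about; if it
         * exceeds the application's timing margin, the Super Loop is a poor
         * fit for hard real-time requirements. */
    }
}
```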

Cooperative Multitasking

Up the food chain from the Super Loop is a structured OS architecture that performs the application’s work as a series of tasks, each of which does some portion of the overall job. As shown in Fig. 3, the Scheduler grants control of the processor to a task in response to some stimulus. Note that the only contextual element of the task is its entry point, which makes switching from one task to another very rapid.

Once in control, the task then runs until it reaches a point of completion and returns to the scheduler, which selects the next task to run. The model allows for prioritization of stimuli (interrupts) so that tasks responding to higher-priority stimuli can preempt tasks performing lower-priority operations. Except for preemption, once a task gets control of the CPU, it retains control until it completes its work. It is only when the task voluntarily relinquishes that control that the Scheduler can select the next task and give it control of the CPU, thus the term run-to-completion.
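
The skeleton below sketches the idea: each task is nothing more than an entry point (a function pointer), a ready flag stands in for the stimulus, and the scheduler simply runs the highest-priority ready task until it returns. Preemption by higher-priority interrupts is omitted for brevity, and none of the names refer to any particular commercial kernel.

```c
/* Minimal run-to-completion scheduler sketch (illustrative only) */
#include <stdbool.h>
#include <stddef.h>

#define NUM_TASKS 3

typedef void (*task_entry_t)(void);            /* a task's only context: its entry point         */

static volatile bool ready[NUM_TASKS];         /* set by interrupts or other tasks (the stimuli) */
static task_entry_t  task_table[NUM_TASKS];    /* index 0 = highest priority                     */

static void sensor_task(void)  { /* acquire and buffer a sample  */ }
static void filter_task(void)  { /* process the buffered samples */ }
static void display_task(void) { /* refresh the user display     */ }

static void scheduler(void)
{
    for (;;) {
        for (size_t i = 0; i < NUM_TASKS; i++) {   /* scan in priority order            */
            if (ready[i]) {
                ready[i] = false;
                task_table[i]();                   /* runs to completion, then returns  */
                break;                             /* re-scan from the highest priority */
            }
        }
        /* nothing ready: a real system would idle or sleep here */
    }
}

int main(void)
{
    task_table[0] = sensor_task;    /* most urgent work gets the highest priority */
    task_table[1] = filter_task;
    task_table[2] = display_task;

    ready[0] = true;                /* pretend a sensor interrupt has fired */
    scheduler();                    /* never returns */
}
```

Because each task returns before another runs, all tasks can share one stack and the scheduler has no register context to save, which is why the switch from one task to the next is so rapid.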

Knowing how to schedule multiple tasks by assigning the proper priority is a vital part of the model. Rate Monotonic Analysis (RMA) and Deadline Monotonic Analysis (DMA) are two ways to approach the scheduling problem. Space does not permit a full discussion of these mathematical models, but both methods help assign task priorities and verify that, at the resulting processor utilization, all of the tasks can meet their respective deadlines.
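
As one concrete example of what such an analysis produces, the classic Liu and Layland test for RMA guarantees schedulability when the total utilization U (the sum of each task's worst-case execution time divided by its period) does not exceed n(2^(1/n) - 1). The task set in the sketch below is invented purely to illustrate the check.

```c
/* Sketch: Rate Monotonic (Liu-Layland) utilization-bound check for an invented task set */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Worst-case execution time Ci and period Ti for three invented tasks, in ms */
    const double C[] = { 1.0,  2.0,   5.0 };
    const double T[] = { 10.0, 25.0, 100.0 };
    const int    n   = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];                          /* total processor utilization */

    /* Liu-Layland bound for Rate Monotonic scheduling: n * (2^(1/n) - 1) */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);  /* about 0.780 for n = 3 */

    printf("U = %.3f, RMA bound = %.3f -> %s\n", U, bound,
           U <= bound ? "guaranteed schedulable"
                      : "needs a more detailed response-time analysis");
    return 0;
}
```

At the utilization shown (0.23), this set passes the bound comfortably; a set that fails the bound is not necessarily unschedulable, but it requires the more detailed response-time analysis that the simple bound sidesteps.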
