This technology helps technical application developers incorporate mathematical and statistical functionality in their applications, while providing the documentation needed for software validation.

Fig. 1 – Utilization of rigorously tested code from numerical libraries can decrease the effort required to implement new routines, compared with free code and custom-developed code.

Medical device manufacturers often have difficulty ensuring that code is correct, debugging is properly done, and documentation is available for the software validation required for regulatory compliance. Because most medical technology involves intensive mathematical and statistical methods, it is timely to reexamine whether the computational frameworks the industry relies on are designed for maximum performance.

Although many medical technology developers have long relied on prepackaged software for routine tasks, new areas of research and business development have often outpaced these prepackaged offerings. At the same time, whether they realize it or not, nearly every biotech researcher today works in a computational infrastructure built on multicore processors, which can significantly slow legacy applications originally developed for single-processor computing environments.

Performance gains today are, by and large, no longer available through hardware upgrades alone, the historic path taken by commercial enterprises of all kinds through the decades. Now, investment in software, not hardware, may matter most. For these reasons, a re-examination of the computational infrastructure at work throughout the biotech industry is highly relevant.

Traditionally, performance improvements came largely from faster processor clocks. In the multicore environment, however, each core is clocked slower, so application performance will decrease unless more than one core can be utilized. Many organizations may therefore notice degraded application performance when they deploy the latest hardware, because applications coded for a single processor are now running on multicore chips.

Counterintuitively, hardware that could potentially speed processing by orders of magnitude may be responsible for significantly slowing many applications. Organizations then suffer a significant bottom-line impact without realizing why. The issue for which they are unprepared is that programming a multicore computer is more complex than developing software for a single-processor system.
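
To make this concrete, the sketch below shows the issue in C with OpenMP, a standard-conforming technique; the summation itself is arbitrary and purely illustrative. The serial loop uses one core no matter how many the chip provides; the parallel version spreads iterations across cores, but the programmer must now manage shared state explicitly, which hints at why multicore development is harder.

```c
/* A minimal sketch of single-core vs. multicore execution.
 * Illustrative only -- the summation is an arbitrary stand-in
 * for real numerical work. Build with, e.g., gcc -fopenmp. */
#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(void)
{
    const long n = 100000000L;
    double sum = 0.0;

    /* Without the pragma (or compiled without OpenMP), this loop runs
     * on a single core regardless of how many are available. With it,
     * iterations are divided among threads; reduction(+:sum) gives
     * each thread a private partial sum and combines them at the end,
     * avoiding a data race on the shared variable. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += 1.0 / (double)(i + 1);

#ifdef _OPENMP
    printf("threads available: %d\n", omp_get_max_threads());
#endif
    printf("sum = %f\n", sum);
    return 0;
}
```

Compiled without OpenMP support, the pragma is simply ignored and the program behaves exactly like the legacy single-processor code described above; the same source, built with OpenMP enabled, can occupy every core.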

Numerical libraries have typically been the preferred mechanism by which sophisticated technical application developers could readily incorporate mathematical and statistical functionality in their applications. These libraries offer organizations a convenient way to access the true power of multicore systems.

Custom-developed numerical code, written for use in a specific application, can be incredibly time-consuming to produce, not to mention costly in the long term. Such code takes a long time to develop because of the complexity of designing the algorithmic approach best matched to the specific problem, and the difficulty of encoding that algorithm in an accurate and numerically stable manner. Moreover, the very fact that it is written for one current application suggests that its developers may not consider future numerical requirements, and therefore may not build in the flexibility and documentation needed to enable the next advance for the product or the next development project.
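
The second difficulty, accurate and stable numerical encoding, is easy to underestimate. The sketch below (illustrative C, not drawn from any product) computes a sample variance two ways: the textbook one-pass formula and Welford's stable update. The two are mathematically equivalent, yet the first can lose essentially all accuracy to cancellation, exactly the kind of pitfall that makes custom numerical code expensive to get right.

```c
/* Two mathematically equivalent variance computations with very
 * different numerical behavior. Data values are illustrative. */
#include <stdio.h>

/* Textbook one-pass formula, (sum(x^2) - sum(x)^2/n) / (n-1):
 * correct on paper, but subtracts two huge, nearly equal numbers. */
double variance_naive(const double *x, int n)
{
    double s = 0.0, s2 = 0.0;
    for (int i = 0; i < n; i++) { s += x[i]; s2 += x[i] * x[i]; }
    return (s2 - s * s / n) / (n - 1);
}

/* Welford's online algorithm: updates the running mean and the sum of
 * squared deviations incrementally, avoiding the cancellation above. */
double variance_welford(const double *x, int n)
{
    double mean = 0.0, m2 = 0.0;
    for (int i = 0; i < n; i++) {
        double delta = x[i] - mean;
        mean += delta / (i + 1);
        m2 += delta * (x[i] - mean);
    }
    return m2 / (n - 1);
}

int main(void)
{
    /* Small spread around a large offset; true sample variance is 30. */
    double x[] = { 1e9 + 4.0, 1e9 + 7.0, 1e9 + 13.0, 1e9 + 16.0 };
    printf("naive:   %f\n", variance_naive(x, 4));
    printf("welford: %f\n", variance_welford(x, 4));
    return 0;
}
```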

It can be argued that free algorithms, available from the Internet, provide an alternative to commercially available numerical libraries. Unfortunately, support, maintenance, and rigorous testing of these sources are at best unpredictable, so the user of such software is, perhaps unwittingly, risking the long-term viability of the application. The risk incurred may be acceptable, in the short term, for non-critical applications, but as new computing architectures emerge, it increases significantly and may prevent optimal use of the code in the long run.

This latter point is especially important because, at any given time, a single organization may be using multiple computing environments. Organizations want to be free to choose among hardware platforms and programming languages to best exploit the particular characteristics of the hardware and software available, while still having confidence in any results produced.

The individual numerical methods used in fields as diverse as modeling, research, analytics, design, exploration, financial engineering, and product development must constantly evolve as well, because new, more reliable algorithms for general or specific hardware configurations are continually being developed.

Developers of numerical libraries strive constantly, through algorithmic innovation, to provide problem-solving software that is appropriate, efficient, and robust across the wide range of computing platforms used by the technical computing community. In this way, their work continually replenishes the contents of numerical libraries with new material and improved techniques, and makes those libraries available on the hardware of choice.

Looking ahead a few years, another widespread shift in the standard technical computing architecture is widely predicted: the move to manycore [1] or GPU [2] computing, with a range of ramifications similar to the current migration from single-core to multicore. This scenario vividly illustrates the major problem organizations face: investment in developing specialist code for a specific computing architecture may have only a short lifespan before it becomes obsolete. In this changing environment, most organizations cannot justify such a cost when there are off-the-shelf alternatives produced by numerical software specialists.

Organizations that use the tested algorithms from commercial numerical libraries as the building blocks of their applications may gain a competitive advantage. Developers can more readily migrate code to new hardware and software systems while maintaining the efficacy of their legacy code, and the many thousands of person-hours it embodies, as they move forward.

Another advantage of selecting algorithms from comprehensive numerical libraries is that when developers require access to a different technique, it may well already be available in a tried-and-tested form they can rely on. This is because numerical library developers are often assisting researchers at the cutting edge, developing methods applicable to one area of research that, over time, may well prove applicable in the wider spheres of science, engineering, and business analytics.

The extent to which numerical libraries provide extensive documentation also confers a competitive advantage on their users, especially when one considers the full lifecycle of the technical application being developed. There are often numerous approaches that superficially appear to offer a developer some means of solving a particular problem; in practice, not all of them are effective, accurate, or efficient when applied to a specific case. The ability to select an appropriate algorithm for the task at hand is therefore vital if developers and their organizations want timely, reliable results. The detailed and refined documentation associated with commercial numerical libraries provides a simple mechanism by which this can be achieved. For example, the NAG Library documentation employs a decision-tree structure (a tool familiar to computing professionals) to establish the precise definition of the problem to be solved, leading the user to the routines most appropriate to the actual problem.

However, in some areas of numerical computation it is not possible to identify a single method that is most suitable for solving a problem. In these areas, the best solution is reached iteratively: different techniques are applied to the problem, and the best is selected on the experimental evidence. Access to a range of algorithmic techniques with similar interfaces is therefore critical to enabling this experimentation, and the best numerical libraries provide facilities that cater to this development paradigm.
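
What such a facility might look like is sketched below in plain C (the routines and names are hypothetical, not NAG code): two root-finding techniques sit behind a single function-pointer interface, so a developer can swap methods on the same problem and judge the results, which is precisely the experimentation described above.

```c
/* Two root-finding techniques behind one common interface,
 * enabling side-by-side experimentation on the same problem. */
#include <stdio.h>
#include <math.h>

typedef double (*fn)(double);
/* Common interface: find x in [a,b] with f(x) approximately 0. */
typedef double (*solver)(fn f, double a, double b, double tol);

/* Bisection: slow but guaranteed when f(a) and f(b) differ in sign. */
static double bisection(fn f, double a, double b, double tol)
{
    while (b - a > tol) {
        double m = 0.5 * (a + b);
        if (f(a) * f(m) <= 0.0) b = m; else a = m;
    }
    return 0.5 * (a + b);
}

/* Secant: usually faster, but unsafeguarded here -- a library-quality
 * version would guard against stagnation and divergence. */
static double secant(fn f, double a, double b, double tol)
{
    for (int i = 0; i < 100 && fabs(b - a) > tol; i++) {
        double fa = f(a), fb = f(b);
        double c = b - fb * (b - a) / (fb - fa);
        a = b;
        b = c;
    }
    return b;
}

static double target(double x) { return x * x - 2.0; } /* root: sqrt(2) */

int main(void)
{
    solver methods[]    = { bisection, secant };
    const char *names[] = { "bisection", "secant" };
    for (int i = 0; i < 2; i++)
        printf("%-9s -> %.12f\n", names[i],
               methods[i](target, 1.0, 2.0, 1e-12));
    return 0;
}
```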

Finally, the quality of the routines provided by a commercial numerical library remains paramount. As an example, a good-quality local optimization code, especially one that supports constraints on the variables, can run to thousands of lines of code. For a custom-developed source, the initial development constitutes only a small part of the overall cost; other factors, such as testing, error correction, and general support (which can be guaranteed when using a commercial offering), must also be taken into account.
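
That scale is easy to underestimate. Even the toy bound-constrained optimizer below (illustrative C with a made-up objective) already needs a gradient, a projection step, a step size, and a stopping test; a production-quality code adds line searches, scaling, general constraints, and extensive error reporting, which is how such codes reach thousands of lines.

```c
/* Toy projected-gradient method for simple bound constraints.
 * Everything here (objective, bounds, step size) is illustrative. */
#include <stdio.h>
#include <math.h>

#define N 2

/* Objective f(x) = (x0-3)^2 + (x1+1)^2 and its analytic gradient. */
static void grad(const double x[N], double g[N])
{
    g[0] = 2.0 * (x[0] - 3.0);
    g[1] = 2.0 * (x[1] + 1.0);
}

int main(void)
{
    double x[N]  = { 1.0, 1.0 };   /* starting point inside the box  */
    double lo[N] = { 0.0, 0.0 };   /* bound constraints: 0 <= x <= 2 */
    double hi[N] = { 2.0, 2.0 };
    double g[N];
    const double step = 0.1, tol = 1e-12;

    for (int it = 0; it < 1000; it++) {
        grad(x, g);
        double moved = 0.0;
        for (int i = 0; i < N; i++) {
            double xi = x[i] - step * g[i];   /* gradient step       */
            if (xi < lo[i]) xi = lo[i];       /* project back onto   */
            if (xi > hi[i]) xi = hi[i];       /* the feasible box    */
            moved += fabs(xi - x[i]);
            x[i] = xi;
        }
        if (moved < tol) break;               /* crude stopping test */
    }

    /* The unconstrained minimizer (3, -1) lies outside the box, so
     * the constrained solution should be x = (2, 0). */
    printf("x = (%.6f, %.6f)\n", x[0], x[1]);
    return 0;
}
```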

One such library comes from the Numerical Algorithms Group (NAG), a not-for-profit numerical software development organization that collaborates with researchers and practitioners in academia and industry to, among other projects, create a rigorously tested set of mathematical and statistical routines [3]. The routines in the NAG Library can be called from a wide range of programming languages and are useful for those working in C++, C#, F#, Fortran, MATLAB, R, Maple, and other environments. NAG routines are also tuned for use on multicore and parallel systems, employing standard-conforming techniques to provide high performance while maintaining the best algorithmic accuracy and stability.

Researchers worldwide, collaborating with the NAG technical team, help provide regular updates and in-depth support for the NAG Library. This activity provides valuable insight into emerging requirements for new research in computer science and numerical methods. The result is one of the world’s most reliable collections of numerical algorithms, including more than 1,600 user-callable functions.

This article was written by Rob Meyer, CEO of Numerical Algorithms Group, Oxford, UK. For more information, contact David Cassell, NAG Library Specialist, or visit http://info.hotims.com/40436-160.

References

  1. A manycore processor is one in which the number of cores is large enough that traditional multiprocessor techniques are no longer efficient; it is the generation beyond multicore.
  2. A GPU is a graphics processing unit. General-purpose GPUs are well suited to running some numerical algorithms.
  3. http://www.nag.co.uk/numeric/numerical_libraries.asp