A team of researchers at Johns Hopkins Medicine, Baltimore, has designed a computerized imaging process to make minimally invasive surgery more accurate and streamlined using equipment already common in the operating room.
They say initial testing of the algorithm shows that their image-based guidance system improves on the conventional tracking systems that have been the mainstay of surgical navigation for the past decade.
Rather than adding complicated tracking systems and special markers to the surgical scene, the team devised a method in which the imaging system is the tracker and the patient is the marker. Current state-of-the-art surgical navigation involves a process called registration, in which someone manually matches points on the patient's body to the corresponding points in a preoperative CT image. This lets a computer orient the image of the patient within the geometry of the operating room. Unfortunately, manual registration is prone to error, and its accuracy can degrade over the course of the operation.
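The article does not publish the team's algorithm, but the point-matching registration step it describes is commonly solved as a least-squares rigid alignment. As a hedged illustration only, the sketch below uses the well-known Kabsch method to recover the rotation and translation that map CT-space fiducial points onto their operating-room positions; the function name and the synthetic points are hypothetical, not from the source.

```python
import numpy as np

def rigid_register(fixed, moving):
    """Estimate rotation R and translation t so that fixed ~= R @ moving + t,
    via the Kabsch algorithm (least-squares rigid point registration)."""
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)        # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cf - R @ cm
    return R, t

# Synthetic example: recover a known transform from six fiducial points.
rng = np.random.default_rng(0)
ct_pts = rng.random((6, 3)) * 100             # fiducials in CT coordinates (mm)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -3.0, 12.0])
room_pts = ct_pts @ R_true.T + t_true         # same fiducials in room coordinates
R, t = rigid_register(room_pts, ct_pts)
max_err_mm = np.abs(room_pts - (ct_pts @ R.T + t)).max()
```

With noiseless points the transform is recovered essentially exactly; in practice, fiducial localization noise is what drives the millimeter-scale errors the article attributes to conventional navigation.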
Starting with a mathematical algorithm they had previously developed to help surgeons locate specific vertebrae during spine surgery, the team adapted the method to surgical navigation using a mobile C-arm. When they tested it on cadavers, they found accuracy better than 2 millimeters, consistently outperforming a conventional surgical tracker, which is accurate to 2 to 4 millimeters in surgical settings.
An additional advantage of the system is that the two X-ray images it needs can be acquired at an extremely low radiation dose, far below what is needed for a visually clear image but enough for the algorithm to extract accurate geometric information.