Machine Learning Increases Resolution of Eye Imaging

Biomedical engineers at Duke University have devised a method for improving the resolution of optical coherence tomography (OCT) to a single micrometer in all directions, even in a living patient. The new technique, called optical coherence refraction tomography (OCRT), could improve images across the multibillion-dollar OCT industry, in medical fields ranging from cardiology to oncology.

The results appear in a paper published online in the journal Nature Photonics.

"An historic issue with OCT is that the depth resolution is typically several times better than the lateral resolution," said Joseph Izatt, the Michael J. Fitzpatrick Professor of Engineering at Duke. "If the layers of imaged tissues happen to be horizontal, then they're well defined in the scan. But to extend the full power of OCT for live imaging of tissues throughout the body, a method for overcoming the tradeoff between lateral resolution and depth of imaging was needed."

OCT is an imaging technology analogous to ultrasound, except that it uses light rather than sound waves. A probe shoots a beam of light into a tissue and, based on the delays of the light waves as they bounce back, determines the boundaries of the features within. To get a full picture of these structures, the process is repeated at many horizontal positions over the surface of the tissue being scanned.
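As a rough illustration of that picture, the following Python sketch converts an echo delay into depth and stacks depth scans into a 2-D image. The constants and array sizes are assumed for illustration, not taken from the paper.

```python
# Minimal sketch of the OCT picture described above (illustrative values,
# not real instrument code): depth comes from the round-trip delay of the
# reflected light, and a 2-D "B-scan" is built by repeating the depth scan
# (an "A-scan") at many lateral positions.
import numpy as np

C = 3e8          # speed of light in vacuum, m/s
N_TISSUE = 1.38  # assumed average refractive index of tissue

def delay_to_depth(round_trip_delay_s):
    """Convert a round-trip echo delay into depth below the surface."""
    return C * round_trip_delay_s / (2.0 * N_TISSUE)

print(delay_to_depth(1e-12))  # a 1 ps round trip corresponds to ~0.11 mm

# Stacking A-scans (reflectivity vs. depth) across lateral positions:
n_depth, n_lateral = 512, 256
b_scan = np.stack(
    [np.random.rand(n_depth) for _ in range(n_lateral)],  # stand-in data
    axis=1,
)
```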

Because OCT's depth resolution is much better than its lateral resolution, it works best when the imaged features consist of mostly flat layers. When objects within the tissue have irregular shapes, the light refracts in different directions and the features blur, reducing the image quality.
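The bending involved is ordinary refraction, governed by Snell's law. A small sketch, with assumed index values, shows how much a beam deviates when it crosses a boundary between tissue regions of different refractive index.

```python
# Illustrative sketch: Snell's law, the refraction that bends OCT beams at
# boundaries between regions of different refractive index. All values here
# are assumed for illustration.
import math

def refraction_angle(n1, n2, incidence_deg):
    """Return the refracted angle (degrees) via n1*sin(t1) = n2*sin(t2)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection: no transmitted beam
    return math.degrees(math.asin(s))

# A beam hitting a boundary at 20 degrees, passing from a region of
# index 1.35 into one of index 1.40, bends toward the normal:
print(refraction_angle(1.35, 1.40, 20.0))  # ~19.3 degrees
```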

Previous attempts at creating OCT images with high lateral resolution have relied on holography – painstakingly measuring the complex electromagnetic field reflected back from the object. While this has been demonstrated, the approach requires the sample and imaging apparatus to remain perfectly still down to the nanometer scale during the entire measurement.

"This has been achieved in a laboratory setting," said Izatt, who also holds an appointment in ophthalmology at the Duke University School of Medicine. "But it is very difficult to achieve in living tissues because they live, breathe, flow, and change."

In the new paper, Izatt and his doctoral student, Kevin Zhou, take a different approach. Rather than relying on holography, the researchers combine OCT images acquired from multiple angles to extend the depth resolution to the lateral dimension. Each individual OCT image, however, becomes distorted by the light's refraction through irregularities in the cells and other tissue components. To compensate for these altered paths when compiling the final images, the researchers needed to accurately model how the light is bent as it passes through the sample.
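The fusion of angled views can be illustrated with a toy registration-and-average step. This is a minimal sketch, not the published OCRT algorithm: the refraction correction described above is omitted, and the angles and data are stand-ins.

```python
# Illustrative sketch (not OCRT itself): fusing B-scans taken from several
# angles by rotating each into a common frame and averaging, so that the
# sharp depth axis of each view contributes to every direction. Real OCRT
# also corrects each beam path for refraction before fusing.
import numpy as np
from scipy.ndimage import rotate

def fuse_multiangle(b_scans, angles_deg):
    """Rotate each angled B-scan back into a common frame and average."""
    acc = np.zeros_like(b_scans[0], dtype=float)
    for img, angle in zip(b_scans, angles_deg):
        acc += rotate(img, -angle, reshape=False, order=1)
    return acc / len(b_scans)

angles = np.linspace(-60, 60, 7)                    # assumed view angles
scans = [np.random.rand(256, 256) for _ in angles]  # stand-ins for data
fused = fuse_multiangle(scans, angles)
```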

To accomplish this computational feat, Izatt and Zhou turned to their colleague Sina Farsiu, the Paul Ruffin Scarborough Associate Professor of Engineering at Duke, who has a history of using machine learning tools to create better images for health care applications.

Working with Farsiu, Zhou developed a method using "gradient-based optimization" to infer the refractive index within the different areas of tissue from the multi-angle images. This approach determines the direction in which a given property – in this case, the refractive index – needs to be adjusted to create a better image. After many iterations, the algorithm creates a map of the tissue's refractive index that best compensates for the light's distortions. The method was implemented using TensorFlow, a popular software library created by Google for deep learning applications.
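That kind of optimization loop can be sketched in a few lines of TensorFlow 2. This is a toy stand-in, not the authors' code: the forward model, image sizes, loss, and learning rate are all assumptions made for illustration.

```python
# Minimal sketch of gradient-based optimization in TensorFlow (TF 2.x).
# A refractive-index map is treated as a trainable variable and nudged by
# gradient descent to reduce a loss. The forward model here is a toy
# placeholder; the real OCRT forward model traces refracted beam paths.
import tensorflow as tf

def forward_model(index_map, image):
    # Toy placeholder: modulate the image by the index map. OCRT instead
    # predicts each angled OCT image given the refraction it implies.
    return image * index_map

measured = tf.random.uniform((64, 64))       # stand-in for a measured image
reference = tf.random.uniform((64, 64))      # stand-in for model input
index_map = tf.Variable(tf.ones((64, 64)))   # initial guess: uniform index

opt = tf.keras.optimizers.Adam(learning_rate=0.01)
for step in range(200):
    with tf.GradientTape() as tape:
        predicted = forward_model(index_map, reference)
        loss = tf.reduce_mean((predicted - measured) ** 2)
    grads = tape.gradient(loss, [index_map])
    opt.apply_gradients(zip(grads, [index_map]))
```

After many such steps, the variable holds the index map that best explains the measurements under the chosen forward model, which is the role the refractive index map plays in the paper's reconstruction.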
