Researchers have developed a technology that uses computed tomography (CT) data to generate, in real time, a 3D image of the anatomical structures of the part of the body undergoing surgery, for use in a virtual environment. The researchers took 2D CT cross-sections and converted them for use in a virtual environment without a perceptible time lag. Through sophisticated programming and the use of graphics cards, the team sped up the volume rendering enough to reach the necessary frame rate.
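The core idea of turning a stack of 2D CT cross-sections into a viewable 3D image is volume rendering. The source does not describe SpectoVive's actual algorithm (which runs on the GPU); as an illustrative sketch only, here is one of the simplest volume-rendering techniques, a maximum-intensity projection over a synthetic volume, written in plain NumPy. All names and data here are hypothetical.

```python
import numpy as np

# Hypothetical stand-in for a CT scan: a stack of 2D cross-sections
# forming a (depth, height, width) volume of intensity values.
depth, height, width = 64, 128, 128
rng = np.random.default_rng(0)
volume = rng.random((depth, height, width)).astype(np.float32)

# Embed a bright sphere so the projection shows a recognizable
# structure (a crude stand-in for dense bone in real CT data).
z, y, x = np.ogrid[:depth, :height, :width]
sphere = (z - 32) ** 2 + (y - 64) ** 2 + (x - 64) ** 2 < 20 ** 2
volume[sphere] = 2.0

def mip(vol, axis=0):
    """Maximum-intensity projection: collapse the volume along one
    viewing axis, keeping the brightest sample along each ray."""
    return vol.max(axis=axis)

image = mip(volume)  # a 2D image of the volume seen along the z-axis
print(image.shape)   # (128, 128)
```

A real-time system would instead cast rays through the volume on the GPU and composite many samples per ray, but the input (a voxel grid built from CT slices) and the output (a 2D image per viewpoint) are the same in principle.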
Doctors can use the latest generation of virtual reality glasses to interact in 3D space with, for example, a hip bone that requires surgery: zooming in on the bone, viewing it from any desired angle, adjusting the lighting angle, and switching between the 3D view and conventional CT images. The SpectoVive system can also render shadows fluidly, creating a realistic impression of depth.
The ability to convert CT images into a 3D on-screen representation is not new; however, commonly available hardware could not generate the 3D volumes in real time for use in virtual spaces. One challenge was that smooth playback in a virtual environment requires at least 180 images per second (90 each for the left and right eyes); otherwise, the viewer may experience nausea or dizziness.
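The frame-rate requirement stated above translates directly into a per-image time budget, which is what makes real-time volume rendering hard. A quick calculation from the source's own numbers:

```python
# Stereo VR: 90 frames per second for each eye.
PER_EYE_HZ = 90
TOTAL_IMAGES_PER_SECOND = 2 * PER_EYE_HZ  # 180, as stated in the article

# Time available to render one image, in milliseconds.
budget_ms = 1000.0 / TOTAL_IMAGES_PER_SECOND
print(f"{budget_ms:.2f} ms per image")  # about 5.56 ms

# Equivalently, one stereo pair every 1/90 s.
pair_budget_ms = 1000.0 / PER_EYE_HZ
print(f"{pair_budget_ms:.2f} ms per stereo pair")  # about 11.11 ms
```

So each rendered view of the full CT volume must be produced in roughly 5 to 6 milliseconds, a budget that ordinary CPU-based volume renderers could not meet, which is why the team relied on graphics cards.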