Surgeons may soon be able to browse and display medical images of a patient during surgery using hand gestures alone. Researchers at Purdue University, West Lafayette, IN, are creating a system that uses depth-sensing cameras and specialized algorithms to recognize hand gestures as commands for manipulating MRI images on a large display in the operating room.
Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the procedure and increase the risk of spreading infection-causing bacteria, said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.
When nurses or assistants operate the keyboard for the surgeon instead, conveying instructions accurately is cumbersome and inefficient: spoken dialogue can be time-consuming and can lead to frustration and delays in the surgery.
The algorithm takes into account which phase the surgery is in, which helps determine the proper context for interpreting gestures and reduces browsing time. In testing, the system translated gestures into specific commands, such as rotating and browsing images, with a mean accuracy of about 93 percent. The findings were detailed in a paper published in December in the Journal of the American Medical Informatics Association.
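The idea of using the surgical phase as context can be pictured as a phase-conditioned lookup: each phase narrows the set of commands a gesture may trigger. The sketch below is purely illustrative and is not the researchers' actual method; the phase names, gesture labels, and command names are all hypothetical.

```python
# Illustrative sketch of context-aware gesture interpretation.
# All phase, gesture, and command names here are hypothetical examples,
# not taken from the Purdue system described in the article.

# Each surgical phase permits only a subset of gesture-to-command mappings,
# shrinking the interpretation space and, in principle, the browsing time.
PHASE_COMMANDS = {
    "incision": {"swipe_left": "previous_image", "swipe_right": "next_image"},
    "resection": {"rotate_cw": "rotate_image_cw", "spread": "zoom_in"},
    "closing": {"swipe_left": "previous_image", "pinch": "zoom_out"},
}

def interpret(phase, gesture):
    """Map a recognized gesture to a display command, given the surgical phase.

    Returns None when the gesture has no meaning in the current phase,
    which lets the system ignore out-of-context hand motions.
    """
    return PHASE_COMMANDS.get(phase, {}).get(gesture)

print(interpret("resection", "spread"))  # a zoom command in this phase
print(interpret("incision", "spread"))   # None: gesture not valid in this phase
```

In a real system the lookup would sit downstream of a probabilistic gesture recognizer fed by the depth camera, but the principle is the same: conditioning on phase lets the same hand motion mean different things, or nothing, depending on where the surgery stands.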