Massachusetts Institute of Technology researchers have developed a low-power chip for processing 3D camera data. Using the device, the MIT team built a prototype of a complete navigation system for the visually impaired.

About the size of a binoculars case and similarly worn around the neck, the system uses an experimental 3D camera from Texas Instruments. The user carries a mechanical Braille interface developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), which conveys information about the distance to the nearest obstacle in the direction the user is moving.

The output of any 3D camera can be converted into a three-dimensional representation called a “point cloud,” which depicts the spatial locations of individual points on the surfaces of objects. The MIT algorithm always begins in the upper left-hand corner of the point cloud and scans along the top row of the image, comparing each point only to the neighbor on its left.
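The conversion from a depth image to a point cloud mentioned above is standard pinhole back-projection. The article does not specify the camera model, so the following is a minimal sketch under the usual pinhole assumptions, with illustrative intrinsic parameters (fx, fy, cx, cy):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an H x W depth image into an H x W x 3 point cloud
    using the pinhole camera model. The intrinsics (fx, fy, cx, cy) are
    camera-specific; they are assumptions here, not values from the
    Texas Instruments camera described in the article."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs across columns, v down rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```

Each output point stores the spatial (x, y, z) location of the surface seen at that pixel, which is exactly the structure the scanning algorithm below traverses.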

Then, the algorithm starts at the leftmost point in the next row down, comparing each point only to the neighbor on its left and to the one directly above it; the process is repeated until all points have been examined.
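The raster-scan order described above, where each point is compared only to its left neighbor (top row) or to its left and upper neighbors (every other row), can be sketched as a connected-components pass over the point cloud. The comparison criterion here, a Euclidean distance threshold, is an assumption; the article does not say how two points are judged to belong to the same surface:

```python
import numpy as np

def scan_point_cloud(points, threshold=0.05):
    """Group an H x W x 3 point cloud into surface patches using the
    single raster scan described in the article: each point is compared
    only to its left neighbor, and (after the first row) to the point
    directly above it. `threshold` is a hypothetical closeness criterion."""
    h, w, _ = points.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}       # union-find: merges labels when left and upper disagree
    next_label = 1

    def find(x):
        while parent.get(x, x) != x:
            x = parent[x]
        return x

    def close(a, b):
        return np.linalg.norm(a - b) < threshold

    for i in range(h):
        for j in range(w):
            left = labels[i, j - 1] if j > 0 and close(points[i, j], points[i, j - 1]) else 0
            up = labels[i - 1, j] if i > 0 and close(points[i, j], points[i - 1, j]) else 0
            if left and up:
                # Both neighbors match: keep the smaller root, merge the other.
                rl, ru = find(left), find(up)
                labels[i, j] = min(rl, ru)
                parent[max(rl, ru)] = min(rl, ru)
            elif left or up:
                labels[i, j] = left or up
            else:
                labels[i, j] = next_label
                next_label += 1

    # Flatten any merged labels so each patch has one final identifier.
    for i in range(h):
        for j in range(w):
            labels[i, j] = find(labels[i, j])
    return labels
```

Because every point is examined once and compared to at most two neighbors, the work per point is constant, which is what makes this access pattern a good fit for a low-power chip.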

To reduce power consumption, the chip loads as many rows of the point cloud as will fit into its working memory at once, processing them without costly trips back to main memory. As a result, the chip consumes only one-thousandth as much power as a conventional computer processor executing the same algorithms.
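The row-buffering strategy can be sketched as streaming the point cloud through a small working buffer, a fixed number of rows at a time, while carrying one row of overlap so each new row can still be compared against the row above it. The block size stands in for the chip's on-chip memory capacity; both it and the callback interface are illustrative assumptions:

```python
import numpy as np

def process_in_row_blocks(points, rows_per_block, process_rows):
    """Stream an H x W x 3 point cloud through a working buffer of
    `rows_per_block` rows (a stand-in for the chip's on-chip memory).
    Each slice is one fetch from "main memory"; the last row of the
    previous block is carried over so the above-neighbor comparisons
    remain possible at block boundaries."""
    h = points.shape[0]
    prev_row = None
    for start in range(0, h, rows_per_block):
        block = points[start:start + rows_per_block]  # one main-memory fetch
        process_rows(block, prev_row)
        prev_row = block[-1]
```

With an 8-row cloud and a 3-row buffer, for example, the loop touches main memory three times instead of once per row, which is the kind of saving the article attributes to the chip.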