Wearable device miniaturized for easier use.
A team of computer scientists at MIT has developed a low-power chip to process 3D camera data that, they say, could help visually impaired people navigate their environments. The chip consumes only one-thousandth as much power as a conventional computer processor executing the same algorithms.

There had been some prior work on this type of system, but those systems were too bulky because of the amount of data processing they required. To miniaturize the system, the team needed a very small chip that would conserve power while still providing enough computational capability.
How It Works
The researchers developed an algorithm for converting 3D camera data into useful navigation aids. They explained that the output of any 3D camera can be converted into a 3D representation called a “point cloud,” which depicts the spatial locations of individual points on the surfaces of objects. Their algorithm clusters points together to identify flat surfaces in the scene, then measures the unobstructed walking distance in multiple directions.
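As a rough illustration of that pipeline (not the team's actual code), the sketch below back-projects a depth image into a point cloud with a pinhole camera model and then estimates the clear walking distance along a few headings. The camera intrinsics, corridor width, and synthetic scene are all assumptions made for the example.

```python
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=None, cy=None):
    """Back-project a depth image (meters) into an N x 3 point cloud.

    The focal lengths and principal point are placeholder intrinsics;
    a real system would use the 3D camera's calibration data.
    """
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def clear_distance(points, heading_deg, corridor_width=0.5, max_range=5.0):
    """Estimate unobstructed distance along one heading by finding the
    nearest point inside a corridor of the given width.

    Height (y) is ignored for simplicity; a real system would first
    remove ground-plane points so the floor doesn't count as an obstacle.
    """
    theta = np.radians(heading_deg)
    direction = np.array([np.sin(theta), np.cos(theta)])   # x, z components
    flat = points[:, [0, 2]]                                # drop height
    along = flat @ direction                                # distance forward
    across = np.abs(flat @ np.array([direction[1], -direction[0]]))
    blocking = (along > 0) & (across < corridor_width / 2)
    return along[blocking].min() if blocking.any() else max_range

# Example: a synthetic 2 m-deep scene with a closer obstacle on the right.
depth = np.full((120, 160), 2.0)
depth[:, 100:] = 1.0
cloud = depth_to_point_cloud(depth)
for heading in (-8, 0, 8):
    print(heading, "deg:", round(clear_distance(cloud, heading), 2), "m clear")
```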
Then, they modified this algorithm in order to conserve power. The standard way to identify planes in point clouds, for instance, is to pick a point at random, then look at its immediate neighbors, and determine whether any of them lie in the same plane. If one of them does, the algorithm looks at its neighbors, determining whether any of them lie in the same plane, and so on, gradually expanding the surface.
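For illustration only, a minimal software sketch of that standard region-growing idea might look like the following. It assumes an organized H × W grid of points and a simplistic plane test; a real implementation would estimate and refine the plane from the seed's neighborhood rather than take it as given.

```python
import random
from collections import deque
import numpy as np

def grow_plane(points, seed, normal, offset, tol=0.01):
    """Standard region growing over an organized point cloud.

    `points` is an H x W x 3 grid; starting from `seed` (row, col), the
    region expands to 4-connected neighbors whose distance to the plane
    n . p = offset is below `tol` (meters).
    """
    h, w, _ = points.shape
    in_plane = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    in_plane[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not in_plane[nr, nc]:
                if abs(points[nr, nc] @ normal - offset) < tol:
                    in_plane[nr, nc] = True
                    queue.append((nr, nc))
    return in_plane

# Example: a synthetic horizontal floor (y = 1.5 m) with a raised box on it.
grid = np.zeros((40, 40, 3))
grid[..., 0], grid[..., 2] = np.meshgrid(np.linspace(-1, 1, 40), np.linspace(0.5, 2.5, 40))
grid[..., 1] = 1.5
grid[10:20, 10:20, 1] = 1.2                   # box top sticking up out of the floor

seed = (random.randrange(40), random.randrange(40))   # pick a point at random
normal = np.array([0.0, 1.0, 0.0])            # toy: assume the local surface is horizontal
offset = grid[seed] @ normal                  # plane height taken from the seed itself
mask = grow_plane(grid, seed, normal, offset)
print("seed", seed, "grew a plane of", int(mask.sum()), "points")
```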
While this region-growing approach is computationally efficient, it requires frequent requests to a chip’s main memory bank. Because the algorithm doesn’t know in advance which direction it will move through the point cloud, it can’t reliably preload the data it will need into its small working-memory bank.
Fetching data from the main memory is the biggest energy drain in today’s chips, so the MIT researchers modified the standard algorithm to always begin in the upper left-hand corner of the point cloud and scan along the top row, comparing each point only to the neighbor on its left.
After the top row, it starts at the leftmost point in the next row down, comparing each point only to the neighbor on its left and to the one directly above it, and repeats this process until it has examined all the points. This enables the chip to load as many rows as will fit into its working memory, without having to go back to main memory.
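A hedged software analogue of that fixed scan order is sketched below: each point is compared only with its left and upper neighbors, so only the current and previous rows ever need to be buffered, and a union-find structure merges labels when the two neighbors turn out to belong to the same surface. The depth-based coplanarity test and threshold are simplifications for illustration, not the chip's actual logic.

```python
import numpy as np

def coplanar(p, q, tol=0.02):
    """Toy test: treat two adjacent points as part of the same smooth surface
    if their depth (z) values are close. A real system would compare local
    surface normals or plane fits instead.
    """
    return abs(p[2] - q[2]) < tol

def raster_scan_planes(points, tol=0.02):
    """Label planar regions with a single top-to-bottom, left-to-right pass.

    Each point is compared only with the neighbor to its left and the one
    directly above it, so only the current and previous rows need to sit in
    working memory; union-find merges labels when both neighbors agree.
    """
    h, w, _ = points.shape
    labels = np.full((h, w), -1, dtype=int)
    parent = []                                  # union-find over label ids

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for r in range(h):
        for c in range(w):
            left_ok = c > 0 and coplanar(points[r, c], points[r, c - 1], tol)
            up_ok = r > 0 and coplanar(points[r, c], points[r - 1, c], tol)
            if left_ok and up_ok:
                labels[r, c] = labels[r, c - 1]
                union(labels[r, c - 1], labels[r - 1, c])
            elif left_ok:
                labels[r, c] = labels[r, c - 1]
            elif up_ok:
                labels[r, c] = labels[r - 1, c]
            else:
                parent.append(len(parent))       # start a new surface
                labels[r, c] = len(parent) - 1
    # Flatten the union-find so merged surfaces share one label.
    return np.vectorize(find)(labels)

# Example: two flat surfaces at different depths get two distinct labels.
pts = np.zeros((4, 6, 3))
pts[..., 2] = 2.0
pts[:, 3:, 2] = 1.0
print(raster_scan_planes(pts))
```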
This and similar “tricks,” they explained, drastically reduced the chip’s power consumption. But the component that consumes the most energy is the 3D camera itself. So the chip also includes a circuit that quickly and coarsely compares each new frame of data captured by the camera with the one that immediately preceded it. If little changes over successive frames, that indicates the user isn’t moving, and the chip signals the camera to lower its frame rate, saving power.
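A software analogue of that comparison circuit might look like the sketch below. The block size, change threshold, and the two frame rates are illustrative guesses, not the chip's actual parameters.

```python
import numpy as np

class FrameRateController:
    """Coarsely compare each depth frame with the previous one and suggest a
    lower camera frame rate when the scene appears static (i.e., the user is
    likely not moving). All thresholds and rates are illustrative only.
    """

    def __init__(self, change_threshold=0.02, high_fps=30, low_fps=5, block=8):
        self.change_threshold = change_threshold
        self.high_fps = high_fps
        self.low_fps = low_fps
        self.block = block
        self.previous = None

    def _coarse(self, frame):
        # Average over block x block tiles so the comparison stays cheap.
        h, w = frame.shape
        b = self.block
        return frame[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b).mean(axis=(1, 3))

    def update(self, frame):
        coarse = self._coarse(frame)
        if self.previous is None:
            self.previous = coarse
            return self.high_fps
        change = np.abs(coarse - self.previous).mean()
        self.previous = coarse
        return self.low_fps if change < self.change_threshold else self.high_fps

# Example: a repeated frame drops the suggested rate; a changed frame restores it.
ctrl = FrameRateController()
static = np.full((120, 160), 2.0)
moving = static.copy()
moving[:, :40] = 1.0
print(ctrl.update(static), ctrl.update(static), ctrl.update(moving))
```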
Future Work
Although the new prototype navigation system is smaller and less obtrusive than previous systems, the team believes it can be miniaturized even further. One of its biggest components is a heat-dissipation device atop a second chip that converts the camera’s output into a point cloud. They say that moving the conversion algorithm onto the data-processing chip should have a negligible effect on its power consumption while significantly reducing the size of the system’s electronics, allowing them to shrink the system further.
For more information, visit http://news.mit.edu.

