As a user moves a hand into view, the Kinect captures 30 frames per second, and an image-processing algorithm on the computer tracks and interprets the changes from frame to frame. “Basically, we know where your hand is and what your hand is doing,” Ramani says. If the user’s finger comes within a few centimeters of the projected image (i.e., the surface), the system interprets this as a touch.
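The touch test described above can be sketched as a simple depth comparison: a fingertip hovering over the surface reads as nearer to the sensor than the empty surface does, and if that gap falls under a threshold it counts as a touch. This is only an illustrative sketch, not the Purdue team's code; the 30 mm threshold, array shapes, and synthetic depth values are assumptions standing in for "a few centimeters" and real Kinect data.

```python
import numpy as np

TOUCH_THRESHOLD_MM = 30  # hypothetical cutoff for "a few centimeters"

def detect_touches(depth_frame, surface_depth, threshold_mm=TOUCH_THRESHOLD_MM):
    """Return a boolean mask of pixels where a fingertip is close
    enough to the calibrated surface to count as a touch.

    depth_frame   -- per-pixel distance from the sensor, in mm
    surface_depth -- depth of the empty surface, captured at calibration
    """
    # A finger above the surface is closer to the sensor, so the gap
    # is surface depth minus measured depth (positive when hovering).
    gap = surface_depth.astype(np.int32) - depth_frame.astype(np.int32)
    return (gap > 0) & (gap <= threshold_mm)

# Synthetic example: a flat table 1000 mm from the sensor, a fingertip
# 20 mm above it at pixel (5, 5), and a palm 80 mm above it at (2, 2).
surface = np.full((10, 10), 1000, dtype=np.uint16)
frame = surface.copy()
frame[5, 5] = 980   # 20 mm gap -> registers as a touch
frame[2, 2] = 920   # 80 mm gap -> hovering, not a touch
mask = detect_touches(frame, surface)
```

In a real pipeline the same comparison would run per frame at 30 fps, combined with the hand-tracking step to attribute each touch point to a specific finger.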
Other researchers have demonstrated technology that can make use of a similar kind of virtual click, but the Purdue system is unique: It can distinguish between thumbs and fingers, pick out a right hand from a left, and even track the hands of several users at once. In one experiment, several subjects drew virtual pictures on a table, and the computer marked each individual’s rendering in a unique color.
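The multi-user drawing demo implies a small bookkeeping layer: each tracked hand gets a persistent ID, each ID gets a distinct color, and touch points accumulate per user. A minimal sketch of that idea, assuming hand IDs are already supplied by the tracker (the color scheme and function names here are hypothetical, not from the Purdue system):

```python
import colorsys

def color_for_user(user_id, max_users=4):
    """Give each tracked hand a distinct RGB color by spacing hues
    evenly around the color wheel (an assumed scheme for illustration)."""
    hue = (user_id % max_users) / max_users
    r, g, b = colorsys.hsv_to_rgb(hue, 0.9, 1.0)
    return tuple(int(c * 255) for c in (r, g, b))

strokes = {}  # user_id -> list of (x, y) touch points in that user's color

def record_touch(user_id, point):
    """Attribute a detected touch point to the user whose hand made it."""
    strokes.setdefault(user_id, []).append(point)

# Two users drawing at once: each stroke is stored under its own ID,
# so the renderer can paint each drawing in that user's color.
record_touch(0, (12, 40))
record_touch(1, (30, 22))
```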