Unlocking Kinect

Kinect in Cinder, Hello World from flight404 on Vimeo.

—————————————————————————-

A few weeks ago, Microsoft released its long-awaited Xbox 360 peripheral, Kinect – its foray into gesture-based gaming, heading through the door opened by Nintendo (with the Wii and its Wiimote controller) and PlayStation (with its tracking camera, the Eye).

The Kinect, née Project Natal, is based around an RGB webcam, packing in a bunch of other sensors that pick up sound, image and depth data, and it allows users to interact with their console through movement gestures – arm waving, jumping on the spot – spoken commands, and props and images offered up to the device. The level of detail that can be captured means that both facial and vocal recognition can be utilised alongside complex motion capture. It’s the positional information the Kinect is able to acquire that’s especially neat. To get it, the device bathes the room it’s set up in with a pattern of dots of near-invisible infrared light; then, rather than using a ‘time-of-flight’ method, it reads how this projected pattern of ‘structured’ light shifts and deforms across surfaces, triangulating a depth matte from the displacement of the dots (if you’re interested, the patent is up here).
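For a sense of what that depth matte actually contains: each pixel arrives as a raw 11-bit value, and the OpenKinect community has worked out an empirical conversion to real-world distances. A rough sketch in Processing, assuming the commonly circulated tangent approximation – the constants come from that reverse engineering, not from anything official:

    // Convert a raw 11-bit Kinect depth reading to metres, using the
    // empirical tangent fit circulated by the OpenKinect community.
    // The constants are reverse-engineered approximations, not official.
    float rawDepthToMetres(int rawDepth) {
      if (rawDepth < 2047) {  // 2047 means 'no reading' for that pixel
        return 0.1236f * tan(rawDepth / 2842.5f + 1.1863f);
      }
      return 0.0f;  // out of range: no depth data
    }

Plugging in values from opposite ends of the range shows why the resolution is better up close: raw readings a few units apart correspond to millimetres near the camera, but to several centimetres towards the far end of the range.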



Like the Nintendo Wiimote and the PlayStation Eye (yes, even though the Eye is three years old, it’s still hacked for things like MoCap by people like iPi Soft), now that this sensor is in the hands of the masses, it has already been cracked open and the pipe of data coming from it tapped into. A whole load of examples, such as Kinect in Cinder, Hello World from flight404/Robert Hodgin, have been emailed around the studio this last week as the open-source community gets its hands dirty with the newly written open-source code that lets developers build applications for the Kinect. This reverse engineering – and the compiling of code that allowed access to the Kinect’s data – was partially spurred on by a call-to-action that offered a $3,000 bounty, increased from its initial stake after a statement was released by Microsoft.

The example above from Hodgin works in real time, as does another experiment of his, Body Dysmorphic Disorder, and for me it’s the immediacy of the interaction that makes these experiments so alluring. The scale it operates on also feels personal; the effective range is between 1.2 and 3.5 metres (it’s engineered for use at home, not in an exhibition hall), so any movements you make have a frame of reference, namely you. You’re not pushing a tiny button and watching a massive gate open. Microsoft even redesigned the customisable avatars within the 360’s control dashboard, re-proportioning them so that users would find it easier to see their movements echoed by their on-screen selves.

—————————————————————————-

Interactive Puppet Prototype with Xbox Kinect from Theo Watson.

—————————————————————————-

While the simplest guises of these hacks might be getting the device to work with third-party operating systems – say, as a peripheral controller for Windows 7 – this depth-sensing camera and infrared projector are throwing up a wealth of experiments every day. For example, if focused onto a specific area, the Kinect can essentially turn any flat surface into a multitouch controller (a sketch of the idea follows below). Move the device through a room and it can be used for spatial mapping (it reminds me of the Radiohead House of Cards video). And when the puppet becomes the puppet master: the Kinect has been used in conjunction with additional peripherals, from MIDI controllers, creating depth-sensitive, gestural musical manipulations (surely a Kinect theremin emulator is on the horizon), to robotic arms, as demonstrated by Willow Garage, who use their Kinect to talk to ROS (the Robot Operating System).
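To make the multitouch idea concrete, here’s a hedged sketch of the usual approach: grab a reference depth frame of the empty surface, then treat any pixel that reads slightly nearer than that reference as a fingertip. The thresholds below are illustrative placeholders in raw depth units, and the blob tracking a real controller would need is left out:

    // Illustrative flat-surface touch detection: compare each incoming
    // depth frame against a stored frame of the empty surface. Values
    // are raw 11-bit depth units; the 10-40 band is a placeholder.
    int[] reference;                  // depth frame of the empty surface
    boolean[] touchMask = new boolean[640 * 480];

    void detectTouches(int[] depthFrame) {
      for (int i = 0; i < depthFrame.length; i++) {
        // positive difference = something nearer than the bare surface
        int diff = reference[i] - depthFrame[i];
        // a touch is something hovering just above the surface: nearer
        // than the reference, but only by a small margin
        touchMask[i] = (depthFrame[i] < 2047) && (diff > 10) && (diff < 40);
      }
      // a real controller would now cluster the masked pixels into blobs
      // and track them across frames to get stable touch points
    }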


It’s not all about cause-and-effect interaction, though. Intel Labs Seattle’s RGB-D project uses a Kinect-style camera to record a scene that can then be examined and manipulated in detail at a later date. Furthermore, some people have simply chosen to observe the device at work, adapting cameras to capture the structured IR light as it is cast over objects, features and spaces.

—————————————————————————-

The main repository for the source code is up on GitHub, initiated by the OpenKinect group (also on Google Groups). If you’re into Processing and running OS X, Daniel Shiffman has made code available on his site; a minimal example follows below. And if you’re simply keen to see what’s being cooked up, there’s a good roundup of projects over at Kinect Hacks.
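As a taster, here’s what a bare-bones depth viewer looks like with Shiffman’s library. The method names follow his published examples at the time of writing, so treat this as a sketch and check his site for the current API:

    // Minimal depth viewer using Daniel Shiffman's openkinect library
    // for Processing. API names follow his published examples at the
    // time of writing; check shiffman.net if the library has changed.
    import org.openkinect.*;
    import org.openkinect.processing.*;

    Kinect kinect;

    void setup() {
      size(640, 480);
      kinect = new Kinect(this);
      kinect.start();
      kinect.enableDepth(true);   // ask the device for the depth stream
    }

    void draw() {
      image(kinect.getDepthImage(), 0, 0);  // greyscale depth map
    }

    void stop() {
      kinect.quit();  // shut the Kinect down cleanly before exiting
      super.stop();
    }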

Comments

  • John

Thanks for the information 🙂 Nice! I found another Kinect site as well, thought I’d post it so you can check it out.

btw, I love the puppet.

    kinect hacks