Game Development Reference
Applications of the Kinect
Now that we have seen the Kinect data streams used in a live application, we
have a feel for the types of data available to us. We can now consider several
potential classes of applications that could be built around them. As described
in the introduction, this data is unique and allows developers to interact with
their users in new ways. This section briefly describes several uses for the
Kinect to get the reader thinking about the possibilities, and hopefully to
inspire further ideas for this intriguing device.
3D Scanning
At first glance, the Kinect looks like a very inexpensive 3D scanner, and the
ability to generate a 3D model of a real-world object would have many practical
uses. In practice, however, this isn't as straightforward as it first appears.
The depth data received in each frame is not a complete surface mapping, due to
the blind spots discussed earlier, and the per-pixel depth values are typically
somewhat noisy over time. These restrictions complicate the implementation of
such a scanner system.
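One common way to tame per-pixel noise is to average several depth frames over time while skipping the invalid readings produced by blind spots. The sketch below is purely illustrative, not Kinect SDK code; it assumes depth frames arrive as NumPy arrays in which a value of 0 marks an invalid (blind-spot) pixel.

```python
import numpy as np

def smooth_depth(frames, invalid=0):
    """Average a stack of noisy depth frames per pixel,
    ignoring readings equal to `invalid` (blind-spot pixels)."""
    stack = np.stack(frames).astype(float)
    mask = stack != invalid                       # which samples are valid
    counts = mask.sum(axis=0)                     # valid samples per pixel
    summed = np.where(mask, stack, 0.0).sum(axis=0)
    depth = np.zeros(stack.shape[1:])
    np.divide(summed, counts, out=depth, where=counts > 0)
    return depth, counts == 0                     # smoothed depth, still-missing mask

# Toy example: three 2x2 "depth frames" with noise and a persistent hole.
f1 = np.array([[100, 0], [200,   0]])
f2 = np.array([[102, 0], [198, 300]])
f3 = np.array([[ 98, 0], [202, 302]])
depth, missing = smooth_depth([f1, f2, f3])
```

Pixels that were only occasionally occluded get filled in from the frames where they were visible, while pixels that never produced a valid reading are reported in the `missing` mask so a later pass (or a moving sensor) can handle them.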
Even with these limitations in mind, it is possible to perform 3D scanning
with fairly high fidelity. The Kinect Fusion algorithm [Izadi et al. 11] uses
computer-vision techniques to track the position and orientation of a handheld
Kinect device. Because the sensor's pose relative to the object being scanned
is known, multiple depth frames taken over time can be fused into a volumetric
model of that object: the blind spots in one frame are filled in by the data of
subsequent frames. This process repeats, and over time a complete model is
built. The final volumetric model can then be converted to a renderable mesh or
stored in whatever output format is desired. Thus, for the cost of a computer
and a Kinect, you can produce a 3D scanner!
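The volumetric model in Kinect Fusion is a truncated signed distance function (TSDF): each voxel stores a running weighted average of its signed distance to the observed surface, so every new depth frame refines the estimate. The following is a heavily simplified one-dimensional sketch of that update, not the actual Kinect Fusion implementation; the voxel layout, truncation distance, and function names are all illustrative assumptions.

```python
import numpy as np

def tsdf_update(tsdf, weight, surface_depth, voxel_z, trunc=0.05):
    """Fuse one depth observation into a 1D column of TSDF voxels.
    tsdf/weight: per-voxel running averages; voxel_z: voxel depths along a ray;
    surface_depth: observed depth of the surface along that ray (meters)."""
    sdf = surface_depth - voxel_z            # signed distance to observed surface
    d = np.clip(sdf / trunc, -1.0, 1.0)      # truncate to [-1, 1]
    valid = sdf > -trunc                     # skip voxels far behind the surface
    new_w = weight + valid
    fused = np.where(valid, (tsdf * weight + d) / np.maximum(new_w, 1), tsdf)
    return fused, new_w

# Fuse two noisy observations of a surface at roughly 1.0 m depth.
z = np.linspace(0.9, 1.1, 5)                 # voxel centers along one ray
tsdf = np.zeros_like(z)
w = np.zeros_like(z)
for obs in (0.99, 1.01):
    tsdf, w = tsdf_update(tsdf, w, obs, z)
# The zero crossing of `tsdf` approximates the fused surface position.
```

After both frames are fused, voxels in front of the surface hold positive values, voxels just behind it hold negative values, and the zero crossing sits near 1.0 m, splitting the difference between the two noisy measurements. A mesh can later be extracted from this field (Kinect Fusion uses ray casting and marching-cubes-style extraction for that step).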
Interactive Augmented Reality
If we can generate a model of a particular object, then by extension we should
also be able to model the complete environment surrounding the Kinect. With a
fairly accurate model of a user's surroundings, we can render scenes that
combine the physically acquired Kinect data with simulation-based data. For
example, after acquiring a model of the area around your desk, you could run a
particle-system simulation that interacts with that model, with each particle
colliding with objects in the scene, such as bouncing off of your desk.
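The particle-versus-scanned-surface interaction can be sketched very simply if we treat the reconstructed desk as a heightfield: integrate each particle under gravity, and when it penetrates the surface, push it back out and reflect its vertical velocity. This is an illustrative toy, not code from the chapter; the time step, restitution value, and single-height-per-particle surface model are assumptions.

```python
import numpy as np

def step_particles(pos, vel, surface_height, dt=0.02, g=-9.8, restitution=0.6):
    """Advance particles one step under gravity and bounce them off a scanned
    surface, modeled here as one height value per particle's (x, y) location."""
    vel = vel + np.array([0.0, 0.0, g]) * dt     # apply gravity
    pos = pos + vel * dt                         # integrate position
    below = pos[:, 2] < surface_height           # penetrated the scanned surface
    pos[below, 2] = surface_height[below]        # push back onto the surface
    vel[below, 2] *= -restitution                # reflect and damp vertical speed
    return pos, vel

# One particle dropped from 1.0 m above the floor onto a desk at 0.75 m.
pos = np.array([[0.0, 0.0, 1.0]])
vel = np.zeros((1, 3))
desk = np.array([0.75])
for _ in range(200):                             # simulate 4 seconds
    pos, vel = step_particles(pos, vel, desk)
```

After a few bounces the damping term bleeds off the particle's energy and it comes to rest on the desk surface. In a real application the height lookup would come from the fused environment model rather than a constant, but the collision response is the same idea.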