camera model, in which we make the assumption that all light that enters the
camera and strikes the sensing element does so through a single point, the pinhole. This point is referred to as the center of projection, and it is essentially the
same concept as the camera location when rendering a scene. We will examine
the mathematical properties of this camera configuration in Section 2.3, "Mathematics of the Kinect," but for now we can simply accept that we are able to
capture the projected 2D image of the 3D world being viewed by the camera.
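For readers who want a concrete picture before Section 2.3, the following sketch shows how an ideal pinhole model projects a 3D point in camera space onto a 2D pixel location. The focal length and principal point values used here are illustrative placeholders, not calibrated Kinect parameters.

```cpp
#include <cstdio>

// Ideal pinhole projection: a 3D point (x, y, z) in camera space maps to a
// pixel (u, v) by perspective division through the center of projection.
// The intrinsics below are placeholder values for illustration only.
struct Intrinsics { float fx, fy, cx, cy; };

void ProjectPoint(const Intrinsics& k, float x, float y, float z,
                  float& u, float& v)
{
    u = k.fx * (x / z) + k.cx;   // horizontal pixel coordinate
    v = k.fy * (y / z) + k.cy;   // vertical pixel coordinate
}

int main()
{
    Intrinsics k = { 525.0f, 525.0f, 319.5f, 239.5f };  // assumed values
    float u = 0.0f, v = 0.0f;
    ProjectPoint(k, 0.1f, 0.2f, 1.5f, u, v);
    std::printf("Projected pixel: (%.1f, %.1f)\n", u, v);
    return 0;
}
```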
The color images obtained by the Kinect are made available to the developer
at a variety of frame rates and data formats. At the time of writing this article,
the available resolutions span from 1,280 × 960 all the way down to 80 × 60. This
selectable resolution allows the developer to receive only the amount of data that is most relevant to the application, reducing bandwidth when a full-size image isn't needed. The
available data formats include an sRGB and a YUV format, which again allow
the data to be provided to the program in the most suitable format for a given
application. Not all resolutions are valid for all formats, so please consult the
Kinect for Windows SDK documentation [Microsoft 12] for more details about
which combinations can be used.
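As a rough illustration of how one of these resolution and format combinations is selected, the sketch below uses the native C API of the Kinect for Windows SDK 1.x. It is a minimal outline with simplified error handling, not a complete listing; the particular resolution and buffering choices shown are assumptions for the example.

```cpp
#include <Windows.h>
#include <NuiApi.h>   // Kinect for Windows SDK 1.x native API

// Minimal sketch: initialize the color subsystem and open the color stream
// at one of the selectable resolutions. The RGB/1280x960 pairing shown here
// is one valid combination; consult the SDK documentation for the others.
HANDLE OpenColorStream()
{
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR)))
        return nullptr;

    HANDLE colorStream = nullptr;
    HRESULT hr = NuiImageStreamOpen(
        NUI_IMAGE_TYPE_COLOR,           // RGB data (YUV types also exist)
        NUI_IMAGE_RESOLUTION_1280x960,  // largest available color resolution
        0,                              // no special frame flags
        2,                              // number of frames to buffer
        nullptr,                        // optional frame-ready event handle
        &colorStream);

    return SUCCEEDED(hr) ? colorStream : nullptr;
}
```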
2.2.2 Depth Camera
The Kinect's depth-sensing system is far less conventional than its color-based counterpart. As mentioned above, the depth-sensing system uses two devices
in conjunction with one another: an infrared projector and an infrared camera.
The IR projector casts a pattern of infrared light onto the scene being viewed, producing an effect
similar to that shown in Figure 2.3. The infrared camera then produces an image
that captures the pattern as it interacts with the current scene around the Kinect.
By analyzing the pattern distortions that are present in the image, the distance
from the Kinect to the point in the scene at each pixel of the infrared image can
be inferred. This is the basic mechanism used to generate a secondary image that
represents the depth of the objects in the scene.
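The geometric idea behind this inference is triangulation between the projector and the camera: a feature of the pattern shifts in the infrared image by an amount (its disparity) that is inversely proportional to its distance. The sketch below illustrates that relationship; the focal length and projector-to-camera baseline are assumed placeholder values, since the Kinect's actual calibration and pattern-matching algorithm are internal to the device.

```cpp
#include <cstdio>

// Simplified structured-light triangulation:
//   depth = (focal length in pixels * baseline in meters) / disparity in pixels
// The constants below are assumed values for illustration, not the Kinect's
// real calibration data.
float DepthFromDisparity(float disparityPixels)
{
    const float focalLengthPixels = 580.0f;   // assumed IR camera focal length
    const float baselineMeters    = 0.075f;   // assumed projector-camera baseline
    if (disparityPixels <= 0.0f)
        return 0.0f;                          // no valid measurement at this pixel
    return (focalLengthPixels * baselineMeters) / disparityPixels;
}

int main()
{
    // A pattern feature shifted by 20 pixels would lie at roughly this depth.
    std::printf("Estimated depth: %.2f m\n", DepthFromDisparity(20.0f));
    return 0;
}
```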
Figure 2.3. Sample infrared and depth images produced by the Kinect depth-sensing
system.