Game Development Reference
Figure 2.8. Removal of unwanted triangles from our surface reconstruction.
Now we take the depth pixel data and convert it to a 3D position in the
depth camera's view space. This is performed using the relationships we provided
earlier, where the focal length and camera parameters are baked into constants
(the values of the constants are taken from the Kinect for Windows SDK header
files). After this, we project the depth-camera view-space coordinates to clip space
for rasterization and also keep a copy of the original depth value for later use.
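The back-projection step can be sketched as follows. This is a minimal CPU-side illustration, not the chapter's shader code; the intrinsic constants (`kFocalX`, `kCenterX`, and so on) are placeholder values, since the real ones are taken from the Kinect for Windows SDK headers.

```cpp
#include <cassert>
#include <cmath>

// Placeholder depth-camera intrinsics; the actual constants come from
// the Kinect for Windows SDK header files (not reproduced here).
constexpr float kFocalX  = 285.63f;  // focal length in pixels (assumed)
constexpr float kFocalY  = 285.63f;
constexpr float kCenterX = 160.0f;   // principal point for a 320x240 frame
constexpr float kCenterY = 120.0f;

struct Float3 { float x, y, z; };

// Back-project a depth pixel (u, v) with depth z (in meters) into the
// depth camera's view space using the standard pinhole relationships.
Float3 DepthPixelToViewSpace(float u, float v, float z)
{
    Float3 p;
    p.x = (u - kCenterX) * z / kFocalX;
    p.y = (kCenterY - v) * z / kFocalY;  // flip v so +y points up
    p.z = z;
    return p;
}
```

The resulting view-space position is what then gets projected to clip space, while the raw depth value rides along for the later discontinuity test.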
These vertices are then assembled into triangles by the graphics pipeline
and passed to the geometry shader. The main reason for using the geometry
shader is to detect and remove triangles that have excessively long edges. A
long edge on a triangle typically means that the triangle spans a discontinuity
in the depth frame and does not represent a real surface. In that case we
simply skip appending the triangle to the output stream. Figure 2.8
demonstrates the removal of these unwanted features, and the geometry shader
that performs this operation is shown in Listing 2.5.
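The rejection test itself is straightforward. The sketch below shows the idea in CPU-side C++ rather than the chapter's HLSL; the function and threshold names are illustrative, but the logic mirrors what the geometry shader does before deciding whether to append a triangle to its output stream.

```cpp
#include <cassert>

struct Float3 { float x, y, z; };

// Squared distance between two view-space points.
static float EdgeLengthSq(const Float3& p, const Float3& q)
{
    float dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
    return dx * dx + dy * dy + dz * dz;
}

// Returns true when any edge of triangle (a, b, c) is longer than
// maxEdge, i.e. the triangle likely spans a depth discontinuity and
// should not be emitted by the geometry shader.
bool SpansDiscontinuity(const Float3& a, const Float3& b,
                        const Float3& c, float maxEdge)
{
    float m2 = maxEdge * maxEdge;
    return EdgeLengthSq(a, b) > m2 ||
           EdgeLengthSq(b, c) > m2 ||
           EdgeLengthSq(c, a) > m2;
}
```

Comparing squared lengths avoids a square root per edge, which matters when the test runs once per triangle per frame.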
After the geometry leaves the geometry shader, it is rasterized and passed
to the pixel shader. In the pixel shader we simply sample the color-frame
texture with our offset coordinates. This performs the mapping from depth space
to color space for us and minimizes the work performed at the pixel level. The
pixel shader code is provided in Listing 2.6.
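A CPU analogue of that single fetch looks like the following. The `ColorFrame` layout and function names here are illustrative assumptions, not the SDK's types; the point is that the per-pixel cost reduces to one clamped texel lookup at the offset coordinates.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative color-frame container (packed RGBA texels, row-major).
struct ColorFrame {
    int width, height;
    std::vector<uint32_t> texels;
};

// Fetch the texel at normalized coordinates (u, v), clamping to the
// frame bounds; (u, v) are the offset coordinates produced by the
// depth-to-color mapping.
uint32_t SampleColorFrame(const ColorFrame& frame, float u, float v)
{
    int x = std::clamp(static_cast<int>(u * frame.width),
                       0, frame.width - 1);
    int y = std::clamp(static_cast<int>(v * frame.height),
                       0, frame.height - 1);
    return frame.texels[y * frame.width + x];
}
```

On the GPU the clamping is handled by the sampler state, so the HLSL version is a single `Sample` call.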
The end result of this rendering is a reconstructed 3D representation of what
is visible in front of the Kinect. The final step in our sample application is to
visualize the skeletal data and how it corresponds to the reconstructed surface.
We receive the complete skeletal data from the Kinect runtime and simply create
a sphere at each joint to show where it lies. Since the joint positions are
provided in the depth-camera view space, no further manipulation of the data
is needed. The overall results of our sample application can be seen in Figure 2.9.
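Placing each sphere amounts to building a world matrix per joint. The sketch below assumes a row-vector matrix convention and illustrative type names; it is not the sample's actual code, but it shows why no extra transformation is needed when the joints are already in view space.

```cpp
#include <cassert>

struct Float3 { float x, y, z; };
struct Matrix4 { float m[4][4]; };

// Build a world matrix that scales a unit sphere to the given radius
// and translates it to the joint position (row-vector convention:
// translation occupies the last row).
Matrix4 SphereWorldMatrix(const Float3& joint, float radius)
{
    Matrix4 w = {};
    w.m[0][0] = w.m[1][1] = w.m[2][2] = radius;  // uniform scale
    w.m[3][3] = 1.0f;
    w.m[3][0] = joint.x;
    w.m[3][1] = joint.y;
    w.m[3][2] = joint.z;
    return w;
}
```

Rendering the skeleton is then a loop over the joints reported by the Kinect runtime, drawing one sphere with each matrix.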