However, we should also consider which programmable pipeline stage will be
using these resources before finally deciding on a resource type. If the resources
will be used within the compute-shader stage, then there is a choice of using
either a buffer or a texture resource—whichever fits better with the threading
model to be used in the compute shader. If the resources will be used directly to
perform some rendering in the graphics pipeline, then the choice is based more
upon which pipeline stage will be used to read the data. If the pixel shader
will be consuming the data, then a texture resource probably makes the most
sense, due to the natural correspondence between pixels and texels. However, if the data will
be read elsewhere in the pipeline, then either a buffer or a texture may make more
sense. In each of these examples, the key factor ends up being the availability of
addressing mechanisms to access the resources.
With these considerations in mind, we have chosen to utilize the Texture2D
resource type for the color, depth, and depth-to-color offset data in the sample
application since we are performing the manipulation of the frame-based data
streams in the graphics pipeline. You may have noticed the mention of the
depth-to-color offset data, which hasn't been described up to this point. This
is a resource used to map from a depth pixel to the corresponding coordinates
in the color frame, using the Kinect API functions to fill in the data with each
depth frame that is acquired. This essentially gives a direct mapping for each
pixel that can be used to find the correspondence points between the depth and
color frames.
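As a rough illustration, the offset resource might be set up as follows. This is a sketch only: the CreateDepthToColorOffsetTexture helper and the DXGI_FORMAT_R16G16_SINT format are assumed names and choices, not taken from the sample application.

#include <d3d11.h>
#include <wrl/client.h>

// Sketch: a dynamic 320x240 texture holding one signed (x, y) offset pair per
// depth pixel. The helper name and the R16G16_SINT format are assumptions.
Microsoft::WRL::ComPtr<ID3D11Texture2D>
CreateDepthToColorOffsetTexture( ID3D11Device* device )
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = 320;                      // matches the depth stream
    desc.Height           = 240;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R16G16_SINT;  // one (x, y) offset per texel
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DYNAMIC;      // CPU writes every depth frame
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags   = D3D11_CPU_ACCESS_WRITE;

    Microsoft::WRL::ComPtr<ID3D11Texture2D> texture;
    device->CreateTexture2D( &desc, nullptr, &texture );
    return texture;
}

Each time a new depth frame arrives, the texture would be refreshed by mapping it with D3D11_MAP_WRITE_DISCARD, writing one offset pair per depth pixel (honoring the returned RowPitch), and unmapping it again.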
We have also chosen to acquire the color data stream at a resolution of 640 × 480, with the sRGB format. The depth data stream will use a resolution of 320 × 240 and will contain both the depth data and the player index data. Finally, the skeletal data is only used on the CPU in this application, so we simply keep a system memory copy of the data for use later on.
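To make the stream configuration concrete, the corresponding texture descriptions might look roughly like the following. The DXGI formats shown are assumptions: Kinect color frames are commonly delivered as 32-bit BGRA, here viewed as sRGB, and in the Kinect for Windows SDK 1.x packed layout the 16-bit depth value carries the depth in its upper 13 bits and the player index in its lower 3 bits.

// Sketch only: descriptions matching the stream settings above; the exact
// formats and usage flags are assumptions rather than the sample's own code.
D3D11_TEXTURE2D_DESC colorDesc = {};
colorDesc.Width            = 640;
colorDesc.Height           = 480;
colorDesc.MipLevels        = 1;
colorDesc.ArraySize        = 1;
colorDesc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM_SRGB; // 32-bit BGRA, sRGB view
colorDesc.SampleDesc.Count = 1;
colorDesc.Usage            = D3D11_USAGE_DYNAMIC;             // updated from the CPU per frame
colorDesc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;
colorDesc.CPUAccessFlags   = D3D11_CPU_ACCESS_WRITE;

D3D11_TEXTURE2D_DESC depthDesc = colorDesc;  // same usage, different size/format
depthDesc.Width  = 320;
depthDesc.Height = 240;
depthDesc.Format = DXGI_FORMAT_R16_UINT;     // depth in bits 15-3, player index in bits 2-0

In the shader, the packed depth value can then be split with a shift and a mask (raw >> 3 for the depth, raw & 0x7 for the player index).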
2.4.3 Rendering with the Kinect
After selecting our resource types, and after configuring the methods for filling
those resources with data, we are now ready to perform some rendering operations
with them. For our sample application, we will be rendering a 3D reconstruction
of the depth data that is colored according to the color camera frame that is
acquired with it. In addition, a visualization of the skeletal joint information is
also rendered on top of this 3D reconstruction to allow the comparison of the
actual scene with the pose information generated by the Kinect.
The first step in the rendering process is to determine what we will use as our
input geometry to the graphics pipeline. Since we are rendering a 3D representation of the depth frame, we would like to utilize a single vertex to represent each
texel of the depth texture. This will allow us to displace the vertices according
to the depth data and effectively recreate the desired surface. Thus we create
a grid of indexed vertices that will be passed into the pipeline. Each vertex is mapped to the texel of the depth texture that it represents, as sketched below.
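The listing below is a minimal sketch of that grid construction; the GridVertex layout and the BuildDepthGrid helper are hypothetical names rather than the sample's own code. Each vertex stores only a normalized texture coordinate, since the vertex shader is expected to sample the depth texture and displace the vertex itself.

#include <cstdint>
#include <vector>

// Hypothetical vertex layout: one normalized texture coordinate per vertex,
// addressing the depth texel that this vertex represents.
struct GridVertex { float u, v; };

// Sketch: build a width x height grid with one vertex per depth texel and
// two triangles per quad, suitable for an indexed draw call.
void BuildDepthGrid( uint32_t width, uint32_t height,
                     std::vector<GridVertex>& vertices,
                     std::vector<uint32_t>& indices )
{
    vertices.reserve( width * height );
    for ( uint32_t y = 0; y < height; ++y )
        for ( uint32_t x = 0; x < width; ++x )
            vertices.push_back( { x / float( width - 1 ), y / float( height - 1 ) } );

    indices.reserve( ( width - 1 ) * ( height - 1 ) * 6 );
    for ( uint32_t y = 0; y + 1 < height; ++y )
    {
        for ( uint32_t x = 0; x + 1 < width; ++x )
        {
            uint32_t i0 = y * width + x;   // top-left corner of this quad
            uint32_t i1 = i0 + 1;          // top-right
            uint32_t i2 = i0 + width;      // bottom-left
            uint32_t i3 = i2 + 1;          // bottom-right
            indices.insert( indices.end(), { i0, i2, i1, i1, i2, i3 } );
        }
    }
}

For the 320 × 240 depth stream this yields 76,800 vertices and 152,482 triangles, which would then be uploaded into ordinary vertex and index buffers for an indexed draw.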
 