The resulting combined scene can then be rendered and presented to the user
in real time from the perspective of the Kinect. Because the model of the
environment is truly volumetric, rendered elements such as a particle system
are properly occluded whenever a physical object blocks the Kinect's view of
them. This provides a very powerful mechanism for incorporating game elements
into a realistic scene.
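As a rough sketch of why this works, the per-pixel test below compares a virtual particle's depth against the Kinect depth reading at the same pixel; the types and names here are illustrative and are not part of any SDK.

#include <cstdint>
#include <vector>

// Illustrative sketch: per-pixel occlusion of a virtual particle against
// the Kinect depth map. Both depths are assumed to be in millimeters in
// the Kinect camera's view space; none of these names come from the SDK.
struct DepthMap {
    int width = 640, height = 480;
    std::vector<uint16_t> mm;  // depth in millimeters, row-major

    uint16_t at(int x, int y) const { return mm[y * width + x]; }
};

// Returns true if the physical scene is closer to the camera than the
// particle at this pixel, i.e., the particle should not be drawn there.
bool particleOccluded(const DepthMap& depth, int px, int py,
                      float particleDepthMm) {
    uint16_t sceneMm = depth.at(px, py);
    // A reading of 0 means "no data"; treat it as non-occluding.
    return sceneMm != 0 && sceneMm < particleDepthMm;
}

In practice this comparison is usually performed on the GPU by writing the Kinect depth image into the depth buffer before the virtual geometry is rendered, so the ordinary depth test performs the occlusion.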
2.5.3
User Pose and Gesture Tracking
The production of user pose information is perhaps the most widely known
application of the Kinect data streams. It is essentially provided to the
developer for free, as a stream of skeletal frames supplied by the Kinect
runtime. With skeletal information available for the user in the scene, it
becomes quite easy to render an avatar in the user's current pose. For
example, the user could move around their living room and see a rendered
representation of their favorite character onscreen moving in the same
manner. This effect is more than a novelty: it can serve as the input method
for a game or simulation. In such a scenario, the avatar interacts with the
game environment by replicating the user's actions, letting the user interact
with a virtual scene around them.
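As a concrete sketch of consuming this stream, the fragment below polls one skeletal frame through the Kinect for Windows SDK v1 API; it assumes NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON) has already succeeded, and applyToAvatarBone is a hypothetical stand-in for the application's own retargeting code.

#include <Windows.h>
#include <NuiApi.h>

// Hypothetical hook into the application's avatar; not part of the SDK.
void applyToAvatarBone(int jointIndex, float x, float y, float z);

void pollSkeletons() {
    NUI_SKELETON_FRAME frame = {0};
    if (FAILED(NuiSkeletonGetNextFrame(0, &frame)))
        return;                           // no new frame available yet

    NuiTransformSmooth(&frame, nullptr);  // default jitter filtering

    for (int i = 0; i < NUI_SKELETON_COUNT; ++i) {
        const NUI_SKELETON_DATA& skel = frame.SkeletonData[i];
        if (skel.eTrackingState != NUI_SKELETON_TRACKED)
            continue;

        // Each tracked skeleton carries 20 joint positions in camera
        // space; mapping them onto the avatar's bones reproduces the
        // user's current pose.
        for (int j = 0; j < NUI_SKELETON_POSITION_COUNT; ++j) {
            const Vector4& p = skel.SkeletonPositions[j];
            applyToAvatarBone(j, p.x, p.y, p.z);
        }
    }
}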
However, since the virtual scene is unlikely to exactly match the physical
scene surrounding the user, this method quickly becomes limited in the
interactions it can model. Instead, the user's movements can be translated
into gestures detected over time, and these gestures can then be used as the
input mechanism for the game or simulation. This breaks the direct dependency
between the virtual and physical scenes, allowing the developer both to
interact with users directly and to provide them with a large scene to
explore. This is the typical method employed by games that currently use the
Kinect.
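One minimal illustration of such gesture detection is a windowed test over recent hand positions, sketched below; the half-second window and distance thresholds are invented for illustration, and production systems typically use more robust recognizers such as per-gesture state machines or template matching.

#include <cmath>
#include <deque>

// One hand-position sample per skeleton frame, in camera space (meters).
struct HandSample { float x; float y; double timeSec; };

class SwipeDetector {
    std::deque<HandSample> history_;
public:
    // Returns true when a rightward swipe is recognized.
    bool update(const HandSample& s) {
        history_.push_back(s);
        // Keep roughly the last half second of motion.
        while (!history_.empty() &&
               s.timeSec - history_.front().timeSec > 0.5)
            history_.pop_front();

        if (history_.size() < 2) return false;
        const HandSample& first = history_.front();
        float dx = s.x - first.x;
        float dy = s.y - first.y;
        // Fast, mostly horizontal, rightward motion counts as a swipe.
        bool swipe = dx > 0.35f && std::fabs(dy) < 0.15f;
        if (swipe) history_.clear();  // avoid immediate re-triggering
        return swipe;
    }
};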
2.5.4
Rendering Scenes Based on User Pose
Another interesting use of the Kinect is to manipulate rendering parameters
by monitoring the user. A prime example is modifying the view and projection
matrices of a rendered scene based on the location and proximity of the
user's head relative to the output display. When a user stands directly in
front of a monitor, the typical view and projection matrices are more or less
physically correct. However, when the user moves to stand to the right of the
display, the matrices used to project the scene onto the monitor are no
longer correct.
The Kinect can be used to detect the user's head location and gaze and adjust
these matrix parameters accordingly, so that the rendered scene changes
correctly as the user moves. The effect is that the display serves as a type
of window into the virtual scene.
 