Game Development Reference
future, but I for one can't wait for the day I can watch the New Orleans Saints play as a
three-dimensional table-top game.
Now that you have a background in how the current 3D display technologies work,
there are some aspects of each that you as the programmer should consider when writing
games. There are two ways to add 3D stereoscopic content to games: active stereoization
and passive stereoization. These are not to be confused with the passive and active
technologies for viewing the 3D images. The stereoization process is the method by
which the 3D images get created in the first place.
Active stereoization is the process by which the programmer creates two cameras, rendering a separate image for each eye. Passive stereoization removes the requirement for two cameras and adds the stereoization at the GPU level. Either method is going to cost something in performance. The worst-case cost is twice that of a monocular scene; however, some elements of the scene, like the shadow map, will not require recalculation for each eye.
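The cost structure can be sketched as a frame loop that issues view-independent work, such as the shadow map, once and only the per-eye passes twice. The function names here are hypothetical placeholders, not a real engine API:

```python
def render_stereo_frame(render_pass):
    """render_pass stands in for the engine's draw call; it returns the list
    of passes issued so the per-frame cost is visible."""
    issued = []
    issued.append(render_pass("shadow_map"))   # view independent: computed once
    for eye in ("left", "right"):
        issued.append(render_pass(eye))        # view dependent: once per eye
    return issued
```

A stereo frame therefore costs one shared pass plus two view passes, which is why the worst case is a doubling but the typical case is somewhat less.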
Active stereoization is conceptually simpler and offers greater control over the process
of stereoization. The most naive implementation is simply to have two cameras, each rendering a complete scene, and to pass along the resulting buffers labeled one for each eye. The buffers are then swapped in and out, so the frame rate in the traditional sense is half the actual frame rate. However, this simple implementation duplicates work for elements of the scene that are not eye dependent.
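A minimal sketch of the two-camera setup might look like the following. The `Camera` type and `eye_cameras` helper are illustrative assumptions, not any particular engine's API; the idea is simply to offset a single center camera half the eye separation along its right axis in each direction:

```python
from dataclasses import dataclass

@dataclass
class Camera:
    position: tuple                      # (x, y, z) world-space position
    right: tuple = (1.0, 0.0, 0.0)       # normalized right axis of the camera

def eye_cameras(center: Camera, separation: float):
    """Build left/right cameras by offsetting the center camera
    half the eye separation along its right axis."""
    half = separation / 2.0
    def offset(sign):
        x, y, z = center.position
        rx, ry, rz = center.right
        return Camera((x + sign * half * rx,
                       y + sign * half * ry,
                       z + sign * half * rz), center.right)
    return offset(-1.0), offset(+1.0)    # (left eye, right eye)

# Per frame, each camera would then render the full scene into its own
# buffer, e.g.:  left_buf = render_scene(left_cam)   # hypothetical call
```

View-independent work such as the shadow map could still be rendered once, before the two per-eye passes.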
The advantage of this technique is precise control of what each eye is seeing. This allows
the programmer to determine the eye separation for each frame, which could even be used to disorient the user as a game element. Consider a flash-bang grenade going off: the programmer could alter the positions of the cameras so that the user is disoriented in 3D for a short period after detonation. However, this technique would cause very real discomfort to the user, so it should not be used frequently!
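One hypothetical way to drive such an effect (the function name, duration, and boost factor are all illustrative assumptions) is to exaggerate the eye separation at detonation and ease it back to the user's normal setting over a couple of seconds:

```python
def disoriented_separation(base: float, since_blast: float,
                           duration: float = 2.0, boost: float = 3.0) -> float:
    """Eye separation after a flash-bang: `boost` times normal at the moment
    of detonation, decaying linearly back to `base` over `duration` seconds."""
    if since_blast >= duration:
        return base                       # effect over: normal separation
    t = since_blast / duration            # 0.0 at detonation, 1.0 recovered
    return base * (boost - (boost - 1.0) * t)
```

The returned value would feed into whatever per-frame camera-offset logic the game uses.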
The disadvantages are that the programmer is now responsible for managing an extra
camera that must be rendered for each frame. For commercial titles, this method can
be difficult considering that most games already have to be careful about how many
times they invoke the render pipeline in order to maintain playable frame rates. Managing two cameras places an additional runtime burden on the program and makes the use of existing game engines a little more difficult.
Also, because not everyone's eyes have the same separation (interocular distance) and not everyone's brain is willing to accept fabricated binocular disparity, the program must also provide options for the user to adjust the depth and complexity of