6.4 Implementation
The progressive voxelization method runs entirely on the GPU and has been
implemented in a deferred shading renderer using basic OpenGL 3.0 operations
on an NVIDIA GTX 285 card with 1 GB of memory. We have implemented two
versions of the buffer storage mechanism in order to compare their speed.
The first uses 3D volume textures along with a geometry shader that sorts injected
fragments into the correct volume slice. The second unwraps the volume buffers
into 2D textures and dispenses with the expensive geometry processing (their
respective performance can be seen in Figure 6.8).
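The 2D-unwrapping variant relies on simple index math: the slices of the volume are tiled into a single 2D atlas, so each voxel maps to one texel without any geometry-shader routing. A minimal sketch of that mapping (the tiling order and function name are our own assumptions, not the chapter's):

```python
def unwrap_3d_to_2d(x, y, z, dim, slices_per_row):
    """Map a voxel (x, y, z) in a dim^3 volume to a texel in a 2D atlas
    whose depth slices are tiled left-to-right, top-to-bottom."""
    u = (z % slices_per_row) * dim + x
    v = (z // slices_per_row) * dim + y
    return u, v

# voxel (1, 2, 3) in a 64^3 volume, 8 slices per atlas row
print(unwrap_3d_to_2d(1, 2, 3, 64, 8))  # -> (193, 2)
```

Because the mapping is a pure fragment-shader address computation, the expensive per-primitive slice sorting in the geometry shader is avoided entirely.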
The texture requirements are two volume buffers for ping-pong rendering
(V_prev, V_curr). Each volume buffer stores N-dimensional attribute vectors a and
corresponds to a number of textures (2D or 3D) equal to ⌈N/4⌉ for 4-channel
textures. For the reasons explained in Section 6.3, an additional N-dimensional
volume buffer is required for lighting applications. In our implementation we
need to store surface normals and full-color spherical harmonics coefficients for
incident flux in each volume buffer, which translates to 3 × ⌈N/4⌉ textures in total.
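The texture budget above can be checked with a few lines of arithmetic. The attribute layout (3 normal components plus 4 SH coefficients per color band over 3 bands, giving N = 15) is our reading of the description; the helper name is ours:

```python
import math

def textures_per_buffer(n_attribs, channels=4):
    # each 4-channel (RGBA) texture packs up to 4 scalar attributes
    return math.ceil(n_attribs / channels)

# assumed layout: normals (3 scalars) + full-color second-order SH
# (4 coefficients x 3 color bands = 12 scalars)
N = 3 + 12
per_buffer = textures_per_buffer(N)  # ceil(15/4) = 4 textures
total = 3 * per_buffer               # V_prev, V_curr, and the lighting buffer
print(per_buffer, total)             # -> 4 12
```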
In terms of volume generation engine design, the user has the option to request
several attributes to be computed and stored into floating-point buffers for
later use. Among them are surface attributes such as albedo and normals, but also
dynamic lighting information and radiance values in the form of a low-order
spherical harmonics (SH) coefficient representation (either monochrome radiance
or full-color encoding, i.e., separate radiance values per color band). In our
implementation the radiance of the corresponding scene location is calculated and
stored as a second-order spherical harmonic representation for each voxel. For
each color band, four SH coefficients are computed and encoded as RGBA float
values.
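The four coefficients per band correspond to the SH bands l = 0 and l = 1. A sketch of how a directional radiance sample could be projected onto that basis and packed one RGBA 4-vector per color band (function names and the projection convention are our illustration, not the chapter's code):

```python
def sh2_basis(x, y, z):
    """Evaluate the four real SH basis functions of bands l = 0, 1
    for a unit direction (x, y, z)."""
    return (
        0.282095,      # Y_0^0  (constant band)
        0.488603 * y,  # Y_1^-1
        0.488603 * z,  # Y_1^0
        0.488603 * x,  # Y_1^1
    )

def encode_radiance(direction, rgb):
    """Project radiance arriving from `direction` onto the SH basis:
    one 4-tuple (stored as RGBA floats) per color band."""
    basis = sh2_basis(*direction)
    return [tuple(c * b for b in basis) for c in rgb]

# radiance (1.0, 0.5, 0.0) arriving from straight above (+z)
coeffs = encode_radiance((0.0, 0.0, 1.0), (1.0, 0.5, 0.0))
```

Each of the three 4-tuples maps directly onto one RGBA texel of the lighting volume buffer.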
6.5 Performance and Evaluation
In terms of voxelization robustness, our algorithm complements single-frame
screen-space voxelization and supports both moving viewpoints and fully
dynamic geometry and lighting. This is demonstrated in Figures 6.4 and 6.5. In
addition, in Figure 6.6, a partial volume representation of the Crytek Sponza II
Atrium model is generated at a 64³ resolution and a 128²-point injection grid
using single-frame and progressive voxelization. Figures 6.6(a) and (b) are the
single-frame volumes from two distinct viewpoints. Figure 6.6(c) is the progressive
voxelization after the viewpoint moves across several frames. Using the partial
single-frame volumes for global illumination calculation, we observe abrupt
changes in lighting as the camera reveals more occluding geometry (e.g., the left
arcade wall and floor in Figures 6.6(d) and (e)). However, the situation is gradually
remedied in the case of progressive voxelization, since newly discovered volume
data are retained for use in subsequent frames (Figures 6.6(f) and (g)).
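The retention behavior can be illustrated with a toy accumulation step over a sparse voxel set. This sketch shows only the accumulation side of the ping-pong update; the full method also detects and discards voxels that later prove invalid, which is omitted here:

```python
def progressive_update(v_prev, frame_voxels):
    """One progressive step: voxels discovered this frame overwrite the
    stored attributes; everything seen in earlier frames is retained."""
    v_curr = dict(v_prev)        # copy last frame's volume (V_prev)
    v_curr.update(frame_voxels)  # inject this frame's visible voxels
    return v_curr

# frame 1 sees the floor; frame 2, after the camera moves, sees a wall
v1 = progressive_update({}, {(0, 0, 0): "floor"})
v2 = progressive_update(v1, {(1, 0, 0): "wall"})
# v2 now contains both voxels, unlike a single-frame volume
```

A single-frame voxelization would contain only the second frame's voxels, which is exactly the source of the abrupt lighting changes described above.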