involves rendering the point grid using an equal number of texture lookups and, in
some implementations, a geometry shader. This has a potentially serious impact
on performance.

We can forgo the injection phase of the algorithm entirely and perform both
operations in one stage. Using the same notation as before, the logic of the
algorithm remains practically the same.
If the projected voxel center lies in front of the recorded depth (i.e.,
p_{v,z} > z_e + b), it is still cleared. If the projected voxel center lies
behind the recorded depth (i.e., p_{v,z} < z_e - b), the voxel is retained;
otherwise it is turned on (or updated) using the attribute buffer information.
The last operation effectively replaces the injection stage.
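The merged per-voxel test above can be sketched as follows. This is an illustrative sketch, not the chapter's shader code; the function and parameter names (`update_voxel`, `p_vz`, `z_e`, `b`) are assumptions chosen to match the notation in the text:

```python
# Hypothetical sketch of the merged cleanup/injection test for one voxel.
# p_vz: depth of the projected voxel center; z_e: recorded depth at the
# voxel's projected position; b: depth tolerance band around the surface.

def update_voxel(p_vz, z_e, b, voxel, attributes):
    """Classify a voxel against the recorded depth and update it."""
    if p_vz > z_e + b:
        # Voxel center lies in front of the recorded depth: clear it.
        return None
    if p_vz < z_e - b:
        # Voxel center lies behind the recorded depth: retain it.
        return voxel
    # Within the tolerance band: turn the voxel on (or update it)
    # using the attribute-buffer samples.
    return attributes
```

In a GPU implementation this branch would run per voxel in a shader, with `attributes` fetched from the camera attribute buffers.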
As we are effectively sampling the geometry at the volume resolution, instead of
doing so at a higher, image-size-dependent rate and then down-sampling to the
volume resolution, the resulting voxelization is expected to degrade. However,
since depth buffers are usually recorded from multiple views, missing details are gradually
added. A comparison of the method variations and analysis of their respective
running times is given in Section 6.5.
Progressive Voxelization for Lighting
As a case study, we applied progressive voxelization to the problem of comput-
ing indirect illumination for real-time rendering. When using the technique for
lighting effects, as in the case of the LPV algorithm of [Kaplanyan 09] or the ray
marching techniques of [Thiedemann et al. 11, Mavridis and Papaioannou 11],
the volume attributes must include occlusion information (referred to as geome-
try volume in [Kaplanyan 09]), sampled normal vectors, direct lighting (VPLs),
and optionally surface albedo in the case of secondary indirect light bounces.
Direct illumination and other accumulated directional data are usually encoded
and stored as low-frequency spherical harmonic coefficients (see [Sloan et al. 02]).
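As a brief illustration of such an encoding (a sketch, not the chapter's or [Sloan et al. 02]'s code), a directional sample can be projected onto the first two spherical harmonic bands, giving the four coefficients commonly used for low-frequency injected lighting. The function names here are hypothetical:

```python
import math

# Band-0 and band-1 real SH basis constants.
C0 = 0.282095   # Y_0^0
C1 = 0.488603   # Y_1^{-1}, Y_1^0, Y_1^1

def sh_project(direction, value):
    """Project a scalar sample arriving from 'direction' onto 4 SH coefficients."""
    x, y, z = direction
    return [value * C0,
            value * C1 * y,
            value * C1 * z,
            value * C1 * x]

def sh_eval(coeffs, direction):
    """Evaluate the SH-encoded signal in a given direction."""
    x, y, z = direction
    return (coeffs[0] * C0 +
            coeffs[1] * C1 * y +
            coeffs[2] * C1 * z +
            coeffs[3] * C1 * x)
```

Because projection and evaluation are linear, coefficient vectors from many samples can simply be summed, which is what makes additive blending into the volume straightforward.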
Virtual point lights (VPLs) are points in space that act as light sources and
encapsulate light reflected off a surface at a given location. To accumulate
VPLs correctly in the volume during the injection phase, a separate volume
buffer is used; it is cleared every frame to avoid erroneous accumulation of
lighting. For each RSM, all VPLs are injected and additively blended.
Finally, the camera attribute buffers are injected to provide view-dependent dense
samples of the volume. If lighting from the camera is also exploited (as in our
implementation), the injected VPLs must replace the corresponding values in the
volume, since the camera direct lighting buffer provides cumulative illumination.
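The two injection rules above (additive blending for RSM VPLs, replacement for camera samples) can be sketched as follows. This is a hypothetical CPU-side illustration; the real implementation would use GPU blending into a volume texture, and the names (`inject`, `rsm_vpls`, `camera_samples`) are assumptions:

```python
# Sketch of per-frame injection into a separate volume buffer.
# rsm_vpls, camera_samples: lists of (voxel_coord, radiance) pairs.

def inject(rsm_vpls, camera_samples):
    injection = {}                      # separate buffer, cleared each frame
    for coord, radiance in rsm_vpls:
        # VPLs from all RSMs are additively blended.
        injection[coord] = injection.get(coord, 0.0) + radiance
    for coord, radiance in camera_samples:
        # Camera direct-lighting samples replace prior values,
        # since that buffer already holds cumulative illumination.
        injection[coord] = radiance
    return injection
```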
After the cleanup has been performed on the previous version of the attribute
volume V prev , nonempty voxels from the separate injection buffer replace corre-
sponding values in V curr . This ensures that potentially stale illumination on valid
volume cells from previous frames is not retained in the final volume buffer. In
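The final merge step described above might look like the following sketch (again illustrative; the volume is modeled as a dictionary and `merge_injection` is a hypothetical name):

```python
# After cleanup turns V_prev into V_curr, nonempty voxels from the
# per-frame injection buffer replace (not blend with) the corresponding
# V_curr entries, discarding stale illumination from previous frames.

def merge_injection(v_curr, injection):
    """v_curr, injection: dicts mapping voxel coords -> attributes."""
    for coord, attribs in injection.items():
        if attribs is not None:          # nonempty injected voxel
            v_curr[coord] = attribs      # replace, do not accumulate
    return v_curr
```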
Figure 6.3 we can see the results of progressive voxelization and its application
to diffuse indirect lighting.