In the first step, a voxel grid representation is created for the scene geometry
and is used later to generate the indirect illumination and to test for geometry
occlusion. Since the voxel grid is recreated each frame, the proposed technique
is fully dynamic and does not rely on any precalculations.
In the second step, the voxels inside the grid are illuminated by each light
source. The illumination is then converted into virtual point lights (VPLs), stored
as second-order spherical harmonics coefficients (SH coefficients). The graphics
hardware is utilized again, this time using the built-in blending stage to combine
the results of each light source. Later the generated VPLs are propagated
within the grid, in order to generate the indirect illumination. In contrast to
the light propagation volume (LPV) technique, as proposed by [Kaplanyan and
Dachsbacher 10], it is required neither to create a reflective shadow map for each
light source nor to inject VPLs into a grid afterwards. Furthermore, there is no
need to generate occlusion information separately. Moreover, the obtained
information is more precise than information obtained from, e.g., depth peeling.
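
To make the VPL representation concrete, the following is a minimal HLSL sketch of how a VPL's clamped cosine lobe could be projected into second-order SH coefficients; the function name and the sign/ordering convention are illustrative assumptions, not taken verbatim from this technique.

// Sketch (assumption): project a clamped cosine lobe oriented along "dir"
// into second-order (four-coefficient) spherical harmonics.
// 0.8862269262 = sqrt(pi)/2 (band 0), 1.0233267079 = sqrt(pi/3) (band 1);
// the (y, z, x) ordering with alternating signs is one common real-SH
// convention.
float4 ClampedCosineSHCoeffs(in float3 dir)
{
  return float4( 0.8862269262f,
                -1.0233267079f * dir.y,
                 1.0233267079f * dir.z,
                -1.0233267079f * dir.x);
}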
7.3 Implementation
The proposed technique can be subdivided into five distinct steps.
7.3.1 Create Voxel Grid Representation of the Scene
We first need to define the properties of a cubic voxel grid, i.e., its extents,
position, and view-/projection-matrices. The grid is moved synchronously with
the viewer camera and snapped permanently to the grid cell boundaries to avoid
flickering due to the discrete voxel grid representation of the scene. To correctly
map our scene to a voxel grid, we need to use an orthographic projection; thus,
we will use three view-matrices to get a higher coverage of the scene: one matrix
for the back-to-front view, one matrix for the right-to-left view, and one for the
top-to-bottom view. All other calculations will be done entirely on the GPU.
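
As an illustration of the snapping, a minimal sketch follows; the function name and parameters are assumptions, and in practice this computation would run on the CPU when the grid constants are updated (HLSL-style syntax is used here for consistency with the later shader sketch).

// Sketch: keep the grid centered on the viewer camera but snapped to
// multiples of the cell size, so that the discrete voxel representation
// does not flicker as the camera moves.
float3 SnapGridCenter(float3 cameraPosition, float cellSize)
{
  return floor(cameraPosition / cellSize) * cellSize;
}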
Next we render the scene geometry that is located inside the grid boundaries
with disabled color writing and without depth testing into a small 2D render-
target. We will use a 32
×
32
×
32 grid; for this it is entirely enough to use a 64
×
64 pixel render-target with the smallest available pixel format, since we will
output the results into a read-write buffer anyway. Basically we pass the triangle
vertices through the vertex shader to the geometry shader. In the geometry
shader we choose the view-matrix for which the triangle is most visible, in order
to achieve the highest number of rasterized pixels for the primitive. Additionally
the triangle size in normalized device coordinates is increased by the texel size
of the currently bound render-target. In this way, pixels that would have been
discarded due to the low resolution of the currently bound render-target will
still be rasterized. The rasterized pixels are written atomically into a 3D read-
write structured buffer in the pixel shader. In this way, in contrast to [Mavridis
and Papaioannou 11], there is no need to amplify geometry within the geometry
shader.
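
To illustrate the pass as a whole, the following is a minimal HLSL sketch of the geometry and pixel shaders, assuming a 32 × 32 × 32 grid and a 64 × 64 render-target. The constant buffer layout, buffer binding, and the single-uint voxel encoding are illustrative assumptions, not this chapter's actual data layout.

#define GRID_RES 32

cbuffer GridCB : register(b0) // illustrative layout
{
  float4x4 gridViewProjMatrices[3]; // back-to-front, right-to-left, top-to-bottom
  float3   gridMinPosition;         // world-space minimum corner of the grid
  float    gridCellSize;            // world-space size of one voxel cell
};

// One uint per voxel (occupancy only); a real implementation would store
// encoded color and normal masks per voxel instead.
RWStructuredBuffer<uint> gridBuffer : register(u1);

struct VS_OUTPUT
{
  float4 position : SV_POSITION; // world-space position, passed through
};

struct GS_OUTPUT
{
  float4 position : SV_POSITION; // clip-space position
  float3 worldPos : TEXCOORD0;   // world-space position for the grid lookup
};

[maxvertexcount(3)]
void GS(triangle VS_OUTPUT input[3], inout TriangleStream<GS_OUTPUT> stream)
{
  // Pick the view under which the triangle is most visible: the axis
  // along which its face normal has the largest absolute component.
  float3 faceNormal = cross(input[1].position.xyz - input[0].position.xyz,
                            input[2].position.xyz - input[0].position.xyz);
  float3 n = abs(faceNormal);
  uint viewIndex = (n.z >= n.x && n.z >= n.y) ? 0 : ((n.x >= n.y) ? 1 : 2);

  GS_OUTPUT output[3];
  [unroll]
  for (uint i = 0; i < 3; i++)
  {
    output[i].worldPos = input[i].position.xyz;
    output[i].position = mul(gridViewProjMatrices[viewIndex],
                             float4(input[i].position.xyz, 1.0f));
  }

  // Enlarge the triangle in normalized device coordinates by one texel of
  // the 64 x 64 render-target so that small or thin triangles still produce
  // rasterized pixels (valid here because the projection is orthographic,
  // so clip space equals NDC).
  float texelSize = 2.0f / 64.0f;
  float2 center = (output[0].position.xy + output[1].position.xy +
                   output[2].position.xy) / 3.0f;
  [unroll]
  for (uint j = 0; j < 3; j++)
    output[j].position.xy += normalize(output[j].position.xy - center) * texelSize;

  [unroll]
  for (uint k = 0; k < 3; k++)
    stream.Append(output[k]);
}

void PS(GS_OUTPUT input)
{
  // Map the world-space position to integer grid coordinates and flatten
  // them into a linear buffer index.
  int3 cell = int3((input.worldPos - gridMinPosition) / gridCellSize);
  uint index = cell.x + cell.y * GRID_RES + cell.z * GRID_RES * GRID_RES;

  // Atomic write: concurrent fragments from different triangles may target
  // the same voxel, so a plain store would race.
  InterlockedMax(gridBuffer[index], 1u);
}

A full implementation would write per-voxel color and normal information, e.g., via atomic operations on encoded bit masks, rather than a single occupancy flag as in this sketch.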