Discontinuities, such as surface silhouettes, are the primary sources of aliasing.
The second type of aliasing is the possible undersampling of surface shading. Unlike visibility, shading is often treated as a continuous signal on a given surface, thus it can be prefiltered (e.g., by using texture mipmaps). It is therefore a tempting idea to save computation by sampling visibility and shading information at different granularities.
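The prefiltering mentioned above can be sketched as repeated 2x2 box filtering, which is the simplest way mipmap chains are built. The helper name below is illustrative, not from any particular API; real GPUs generate these levels in fixed-function hardware.

```python
def build_mipmaps(tex):
    """Return a list of texture levels, each half the resolution of the
    previous one, produced by averaging 2x2 texel blocks (a box filter).
    `tex` is a square 2^k x 2^k grid of scalar texel values."""
    levels = [tex]
    while len(tex) > 1:
        n = len(tex) // 2
        tex = [[(tex[2 * i][2 * j] + tex[2 * i][2 * j + 1]
                 + tex[2 * i + 1][2 * j] + tex[2 * i + 1][2 * j + 1]) / 4.0
                for j in range(n)]
               for i in range(n)]
        levels.append(tex)
    return levels
```

Sampling a coarser level amounts to reading a prefiltered version of the shading signal, which is what makes shading, unlike visibility, amenable to cheap antialiasing.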
3.2.2 Decoupled Sampling
In a modern rasterization pipeline this problem is addressed by MSAA. The ras-
terizer invokes a single fragment shader for each covered pixel; however, there are
multiple subsample locations per pixel, which are tested for primitive coverage.
Shading results are then copied into covered locations. This is an elegant solution
for supersampling visibility without increasing the shading cost.
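The MSAA behavior described above can be sketched in a few lines: coverage is tested at every subsample, but the shader runs at most once per covered pixel. All function names here are illustrative; the real pipeline implements this in fixed-function hardware.

```python
def edge(a, b, p):
    """Signed edge function: sign encodes which side of edge a->b point p is on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside(tri, p):
    """Point-in-triangle test via three edge functions (either winding)."""
    a, b, c = tri
    e0, e1, e2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

def rasterize_pixel_msaa(pixel, tri, offsets, shade):
    """Test each subsample offset for coverage; if any subsample is
    covered, shade the pixel once and copy the color into every covered
    subsample. Returns {subsample index: color}."""
    px, py = pixel
    covered = [i for i, (dx, dy) in enumerate(offsets)
               if inside(tri, (px + dx, py + dy))]
    if not covered:
        return {}                       # no coverage: shader never runs
    color = shade(px + 0.5, py + 0.5)   # single shading evaluation per pixel
    return {i: color for i in covered}  # replicate into covered subsamples
```

Note that the shading cost is one invocation per covered pixel regardless of how many of its subsamples the triangle covers, while the stored coverage remains at subsample resolution.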
Decoupled sampling [Ragan-Kelley et al. 11] is a generalization of this idea.
Shading and visibility are sampled in separate domains. In rasterization, the
visibility domain is equivalent to subsamples used for coverage testing, while
the shading domain can be any parameterization over the sampled primitive
itself, such as screen-space coordinates, 2D patch-parameters, or even texture
coordinates. A decoupling map assigns each visibility sample to a coordinate in
the shading domain. If this mapping is a many-to-one projection, the shading
can be reused over visibility samples.
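The many-to-one reuse described above can be sketched as a shading cache keyed by the shading-domain coordinate. The `decouple` and `shade` callables are hypothetical stand-ins for the decoupling map and the fragment shader.

```python
def shade_decoupled(visibility_samples, decouple, shade):
    """decouple: many-to-one map from a visibility sample to a discrete
    shading-domain coordinate (e.g., a cell of a shading grid).
    Shading is evaluated once per distinct shading coordinate and
    reused by every visibility sample that maps to it."""
    cache = {}
    colors = []
    for v in visibility_samples:
        key = decouple(v)
        if key not in cache:          # first sample landing in this cell:
            cache[key] = shade(key)   # evaluate shading once
        colors.append(cache[key])     # later samples reuse the result
    return colors, cache
```

With an identity map this degenerates to supersampled shading; the coarser the decoupling map's quantization, the higher the shading reuse.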
Case study: stochastic rasterization. Using stochastic sampling, rasterization can
be extended to accurately render effects such as depth of field and motion blur.
Each coverage sample is augmented with temporal and lens parameters. Defocused or motion-blurred triangles are bounded in screen space according to their maximum circle of confusion and motion vectors. A deeper introduction to this method is outside the scope of this article; we refer the interested reader to [McGuire et al. 10] for implementation details. In short, the geometry shader determines the potentially covered screen region, and the fragment shader then generates a ray corresponding to each stochastic sample and intersects it with the triangle.
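For the motion-blur half of this scheme, the per-sample test can be sketched as follows: each stochastic sample carries a time t, the triangle's vertices are interpolated to that time, and an ordinary coverage test follows. This is a simplified stand-in for the ray test of [McGuire et al. 10]; lens (depth-of-field) parameters are omitted.

```python
def edge(a, b, p):
    """Signed edge function for the coverage test."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside(tri, p):
    """Point-in-triangle test via three edge functions (either winding)."""
    a, b, c = tri
    e0, e1, e2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

def covered(sample, tri_t0, tri_t1):
    """sample = (x, y, t) with t in [0, 1]; tri_t0 / tri_t1 hold vertex
    positions at shutter open and close, assumed to move linearly in
    screen space. Interpolate the triangle to time t, then test coverage."""
    x, y, t = sample
    tri = tuple(((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                for (x0, y0), (x1, y1) in zip(tri_t0, tri_t1))
    return inside(tri, (x, y))
```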
We now illustrate decoupled sampling using the example of motion blur: if the camera samples over a finite shutter interval, a moving surface is visible at several different locations on the screen. A naïve rendering algorithm would first determine the barycentrics of each stochastic sample covered by a triangle and evaluate the shading accordingly. In many cases, we can assume that the observed color of a surface does not change significantly over time (even offline renderers often make this assumption). MSAA or post-processing methods cannot solve this issue, as corresponding coverage samples might be scattered over several pixels of the noisy image. We can, however, rasterize a sharp image of the triangle at a fixed shading time, and find the corresponding shading for each visibility sample by projecting it into the pixels of this image.
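The projection step can be sketched as follows, assuming linear screen-space motion: a stochastic sample observed at time t is shifted back along the motion vector to the fixed shading time, where its color is looked up in the sharp image. The function and parameter names are illustrative.

```python
def reuse_shading(sample, motion, sharp_image, shading_time=0.0):
    """sample = (x, y, t); motion = screen-space displacement of the
    surface over the full shutter interval; sharp_image maps integer
    pixel coordinates to colors shaded at shading_time."""
    x, y, t = sample
    mx, my = motion
    dt = t - shading_time
    sx, sy = x - mx * dt, y - my * dt           # position at the shading time
    return sharp_image.get((int(sx), int(sy)))  # reuse the cached shading
```

Samples scattered across the screen by motion thus all resolve to the same sharp-image pixels, so shading is evaluated once per pixel of the sharp image rather than once per stochastic sample.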