Figure 3.5. Focusing from the background (left column) to a foreground object (right
column), our adaptive method concentrates shading samples on sharp surfaces. The
motivation is to prefilter shading more aggressively, since defocus acts like a low-pass
filter over the image. The middle row visualizes the shading rate. In the bottom row we
show how the same surface shading would appear from a pinhole camera. The texture
filtering matches the shading resolution.
Our method could easily be extended with the recent results of [Vaidyanathan
et al. 12], who presented a novel anisotropic sampling algorithm based on image-space
frequency analysis.
Depth of field. Figure 3.5 shows two renderings of the Crytek Sponza Atrium
scene from the same viewing angle but with different focus distances. In this
example the most expensive component of rendering is the computation of
single-bounce global illumination using 256 virtual point lights (VPLs) generated
from a reflective shadow map (RSM) [Dachsbacher and Stamminger 05].
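To make the cost of this component concrete, below is a minimal CPU-style sketch of single-bounce diffuse gathering from RSM-derived VPLs. All types and names (Vec3, VPL, gatherIndirect, and so on) are illustrative assumptions rather than the chapter's actual code, and visibility between a VPL and the shaded point is ignored for brevity.

```cpp
// Sketch: RSM texels promoted to VPLs, then gathered at one shaded point.
#include <vector>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One RSM texel promoted to a virtual point light.
struct VPL {
    Vec3 position; // world-space position stored in the RSM
    Vec3 normal;   // surface normal stored in the RSM
    Vec3 flux;     // reflected flux of the texel (albedo * light intensity * texel area)
};

// Single-bounce diffuse gathering over all VPLs (e.g., 256 of them).
Vec3 gatherIndirect(const std::vector<VPL>& vpls,
                    Vec3 shadedPos, Vec3 shadedNormal, Vec3 diffuseAlbedo)
{
    Vec3 sum{0.0f, 0.0f, 0.0f};
    for (const VPL& vpl : vpls) {
        Vec3  toPoint = shadedPos - vpl.position;
        float dist2   = std::max(dot(toPoint, toPoint), 1e-4f); // clamp to limit VPL singularities
        Vec3  dir     = toPoint * (1.0f / std::sqrt(dist2));
        float cosAtVpl   = std::max(dot(vpl.normal, dir), 0.0f);
        float cosAtPoint = std::max(dot(shadedNormal, dir * -1.0f), 0.0f);
        // Diffuse-to-diffuse transfer; VPL-to-point visibility is omitted in this sketch.
        sum = sum + vpl.flux * (cosAtVpl * cosAtPoint / dist2);
    }
    return {sum.x * diffuseAlbedo.x, sum.y * diffuseAlbedo.y, sum.z * diffuseAlbedo.z};
}
```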
We not only avoid supersampling the G-buffer, but also reduce the shading
frequency of surfaces using the minimum circle of confusion inside each primitive.
This approach prefilters the shading of defocused triangles, causing slight overblurring
of textures; however, we even found this effect desirable if the number of visibility
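The following sketch illustrates the prefiltering decision described above: a thin-lens circle-of-confusion estimate per vertex, the conservative per-triangle minimum, and a matching texture LOD bias. The parameter names, the clamping policy, and the mip-bias rule are assumptions for illustration, not the chapter's exact formulation.

```cpp
// Sketch: map a triangle's minimum circle of confusion to a coarser shading
// rate and a texture LOD bias that keeps filtering consistent with it.
#include <cmath>
#include <algorithm>

struct Lens {
    float focalLength;            // f, in meters
    float aperture;               // lens aperture diameter, in meters
    float focusDistance;          // distance of the focal plane, in meters
    float pixelsPerMeterOnSensor; // converts sensor-space CoC to pixels
};

// Thin-lens circle of confusion (diameter, in pixels) at view depth d.
float circleOfConfusionPx(const Lens& lens, float depth)
{
    float f   = lens.focalLength;
    float coc = lens.aperture * f * std::fabs(depth - lens.focusDistance)
              / (depth * (lens.focusDistance - f));
    return coc * lens.pixelsPerMeterOnSensor;
}

// Shading-rate scale for one triangle: shade once per CoC-sized footprint,
// never finer than once per pixel.
float shadingScaleForTriangle(const Lens& lens, const float vertexDepths[3])
{
    float minCoc = std::min({circleOfConfusionPx(lens, vertexDepths[0]),
                             circleOfConfusionPx(lens, vertexDepths[1]),
                             circleOfConfusionPx(lens, vertexDepths[2])});
    return std::max(minCoc, 1.0f); // e.g., 4.0 means one shading sample per 4x4 pixels
}

// One extra mip level per doubling of the shading footprint keeps texture
// filtering consistent with the reduced shading resolution.
float textureLodBias(float shadingScale)
{
    return std::log2(shadingScale);
}
```

Taking the minimum CoC over the primitive is the conservative choice: a triangle that crosses the focal plane keeps full shading resolution, so only fully defocused triangles are shaded more coarsely.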