One method to avoid these artifacts, from [Lee et al. 10], is to first render the scene
content into layers and then use ray traversal to combine these layers, which is a
costly operation but avoids the previously mentioned discretization artifacts and
allows the simulation of additional lens effects (e.g., chromatic aberration).
The approach presented in the following sections is a layered DoF method and
is based on [Schedl and Wimmer 12]. We first decompose the scene into depth
layers, where each layer contains pixels of a certain depth range. The resulting
layers are then blurred with a filter that is sized according to the depth layer's
CoC and then composited. This approach handles partial occlusion, because
hidden objects are represented in more distant depth layers and contribute to the
final image.
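The CoC that drives the per-layer blur can be derived from a thin-lens model. The following is a minimal sketch of that computation; the parameterization (aperture diameter, focus distance, focal length) is a standard assumption and is not specified by the text itself:

```python
def coc_diameter(depth, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter at a given fragment depth.

    Assumed parameterization (not from the text): `aperture` is the lens
    diameter, `focus_dist` the distance of the in-focus plane, and
    `focal_len` the focal length, all in the same world units.
    """
    return aperture * (abs(depth - focus_dist) / depth) \
                    * (focal_len / (focus_dist - focal_len))
```

Fragments on the focus plane get a CoC of zero, and the CoC grows with distance from it, which is what determines each depth layer's filter size.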
In order to avoid rendering the scene K times, we use an A-buffer to generate
input buffers. Note that each input buffer can contain fragments from the full
depth range of the scene, while a depth layer is bound by its associated depth
range. We then generate the depth layers by decomposing the input buffers into
the depth ranges, which is much faster than rendering each depth layer separately.
To avoid discretization artifacts, we do not use hard boundaries for each depth
layer, but a smooth transition between the layers, given by matting functions.
Furthermore, we also show a method for efficiently computing both the blur and
the layer composition in one step.
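A matting function of this kind can be sketched as a piecewise-linear "hat" over a layer's depth range. The exact shape used by [Schedl and Wimmer 12] is not given here; the version below is a hypothetical example whose ramps are symmetric about the layer boundaries, so that the weights of adjacent layers sum to one inside the transition zones:

```python
def matting_weight(depth, near_b, far_b, t):
    """Smooth membership of a fragment in one depth layer.

    Hypothetical matting function: weight 1 well inside [near_b, far_b],
    linear ramps of half-width `t` centered on each boundary.  Because
    adjacent layers share a boundary, their ramps sum to 1 across it,
    so no fragment energy is lost or duplicated.
    """
    up = min(max((depth - (near_b - t)) / (2.0 * t), 0.0), 1.0)
    down = min(max(((far_b + t) - depth) / (2.0 * t), 0.0), 1.0)
    return up * down
```

For example, with layers covering [0, 2] and [2, 4], a fragment exactly on the shared boundary at depth 2 receives weight 0.5 from each layer.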
The algorithm consists of the following steps:
1. Render the scene into an A-buffer, containing the color and depth from
front fragments and occluded fragments in an unsorted way. Sorting the
fragments produces the input buffers I_0 to I_{M-1}.
2. Decompose the fragments of the input buffers into K depth layers L_0 to
L_{K-1}, based on a matting function and the fragments' depth. Thus the
rendered scene is now stored in a layered form, where each layer holds
fragments of a certain depth range.
3. Blur every layer according to its CoC (computed from the layer's depth range)
and alpha-blend them, starting with the layer furthest away. We apply an
optimization for this step where we blend and blur recursively: each layer
L_k is blended onto one of the composition buffers I_front (containing layers in
front of the focus layer) or I_back (containing layers behind the focus layer), and
the composition buffers are blurred after each blending step. Finally the
composition buffers are blended together.
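The recursive blend-and-blur of step 3 can be illustrated on a 1-D stand-in for the back composition buffer. This is a sketch, not the actual GPU implementation: `box_blur` stands in for the separable blur pass, layers are premultiplied color/alpha lists ordered far to near, and `step` is each layer's incremental blur radius. Because the buffer is blurred again after every blend, farther layers accumulate more total blur, which is the point of the optimization:

```python
def box_blur(vals, radius):
    """Naive 1-D box blur; stands in for the separable blur pass."""
    if radius == 0:
        return vals[:]
    out = []
    for i in range(len(vals)):
        lo, hi = max(0, i - radius), min(len(vals), i + radius + 1)
        out.append(sum(vals[lo:hi]) / (hi - lo))
    return out

def composite_back(layers):
    """Recursively blend and blur the layers behind the focus plane.

    `layers` is a far-to-near list of (colors, alphas, step) tuples,
    a 1-D stand-in for premultiplied RGBA layers.  Each layer is
    "over"-blended onto the composition buffer, which is then blurred
    by that layer's incremental radius `step`.
    """
    n = len(layers[0][0])
    color, alpha = [0.0] * n, [0.0] * n
    for lcol, lalpha, step in layers:
        # premultiplied "over": nearer layer on top of the buffer so far
        color = [lc + c * (1.0 - la) for lc, c, la in zip(lcol, color, lalpha)]
        alpha = [la + a * (1.0 - la) for la, a in zip(lalpha, alpha)]
        # blur after blending: earlier (farther) layers get blurred again
        color = box_blur(color, step)
        alpha = box_blur(alpha, step)
    return color, alpha
```

The front buffer I_front is built the same way with its own layer ordering, and the two buffers are blended in a final pass.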
We now describe the individual steps.