if (edge_x)
{
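// Fit a line (slope k, offset m) through the depth samples and through the
// second-depth samples on each side of the pixel; the silhouette edge lies
// where a side's depth line crosses its second-depth line.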
float k0 = sdx3 - sdx2; // Right second-depth slope
float k1 = dx3 - dx2; // Right slope
float k2 = sdx1 - sdx0; // Left second-depth slope
float k3 = dx1 - dx0; // Left slope
float m0 = sdx2 - k0; // Right second-depth offset
float m1 = dx2 - k1; // Right offset
float m2 = sdx1 + k2; // Left second-depth offset
float m3 = dx1 + k3; // Left offset
float offset0 = (m1 - m0) / (k0 - k1); // Right intersection
float offset1 = (m3 - m2) / (k2 - k3); // Left intersection
// Pick the closest intersection.
offset.x = (abs(offset0) < abs(offset1))? offset0 : offset1;
offset.x = (abs(offset.x) < 0.5f)? offset.x : 0.5f;
}
Listing 3.3. Computing horizontal silhouette intersection point.
edge is somewhere in between there, but there is no way for us to know how
far that blue primitive stretches over the gap. What we really need here are
the depths of the adjacent primitive in the mesh. For the silhouette case, the
adjacent primitive will be behind the visible primitive and back-facing the
viewer. We thus need a second layer of depth values for these hidden surfaces,
hence the name of this technique: second-depth antialiasing. How we generate
the second depth layer is covered later in this article. Note, though, that for
this to work we need closed geometry; if no back-facing primitive exists, the
edge will be left aliased.
Once you have a second layer of depth values, the silhouette case is quite
similar to the crease case. However, we need to do separate tests to the left
and to the right, and then select whichever intersection ends up closer to the
pixel center. Again, if the edge is determined to be within half a pixel of the
center, we can use this edge distance for blending. (See Listing 3.3.)
Once we have determined the distance to the edge, we need to do the final
blending of the color buffer. The distance to the edge can be converted to the
coverage of the neighboring primitive over this pixel, which can then be used
for blending with the neighboring pixel. We blend with either a horizontal or a
vertical neighbor. This can be done in a single sample by simply using a linear
texture filter and offsetting the texture coordinate appropriately [Persson 12].
The code for this is presented in Listing 3.4.
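Listing 3.4 itself is not reproduced in this excerpt. As a rough sketch of what
such a single filtered fetch can look like, the resolve below assumes the edge
distances from Listing 3.3 are passed in as offset, that the color buffer and a
bilinear sampler are bound as BackBuffer and LinearClamp, and that pixelSize
holds the reciprocal of the render-target resolution; these names are
illustrative, not taken from the article.

// Sketch: blend the pixel with its horizontal or vertical neighbor in a single
// bilinear fetch. 'offset' holds the signed edge distances in pixels, clamped
// to +/-0.5 (0.5 meaning no edge within half a pixel, i.e., zero coverage).
Texture2D BackBuffer;
SamplerState LinearClamp;

float4 ResolvePixel(float2 texCoord, float2 offset, float2 pixelSize)
{
    // Keep only the axis whose edge is closer to the pixel center.
    if (abs(offset.x) < abs(offset.y))
        offset.y = 0.5f;
    else
        offset.x = 0.5f;

    // The neighboring primitive covers (0.5 - |offset|) of this pixel, so
    // shifting the sample point toward the edge by that amount makes the
    // linear filter blend in the neighboring pixel with the right weight.
    float2 shift = sign(offset) * (0.5f - abs(offset));
    return BackBuffer.Sample(LinearClamp, texCoord + shift * pixelSize);
}

The vertical edge case is assumed to fill offset.y in the same way the edge_x
branch in Listing 3.3 fills offset.x.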
Generating second depths. A straightforward way to generate the second depth
layer is to render the scene to a depth target with front-face culling. This is
the equivalent of a pre-Z pass, except it is only used to generate the second
depth texture. An additional geometry pass used only for this may seem a bit
excessive, though.
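As a rough illustration of such a pass, the Direct3D 11 host-side setup could
look as follows; the device, context, depth-stencil view for the second-depth
texture, and the DrawScene call are assumed to exist and are named here purely
for illustration.

// Sketch: depth-only pass with front-face culling, so the back faces of the
// closed geometry fill the second-depth texture.
#include <d3d11.h>

void RenderSecondDepth(ID3D11Device* device, ID3D11DeviceContext* context,
                       ID3D11DepthStencilView* secondDepthDSV)
{
    D3D11_RASTERIZER_DESC desc = {};
    desc.FillMode = D3D11_FILL_SOLID;
    desc.CullMode = D3D11_CULL_FRONT;   // cull front faces instead of back faces
    desc.DepthClipEnable = TRUE;

    ID3D11RasterizerState* rasterState = nullptr;
    device->CreateRasterizerState(&desc, &rasterState);

    // No color targets: this pass only writes depth, like a pre-Z pass.
    context->OMSetRenderTargets(0, nullptr, secondDepthDSV);
    context->ClearDepthStencilView(secondDepthDSV, D3D11_CLEAR_DEPTH, 1.0f, 0);
    context->RSSetState(rasterState);

    // DrawScene(context);  // assumed scene-drawing entry point, depth-only shaders
    rasterState->Release();
}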
A better approach is to use this pass instead of a traditional pre-Z pass,
 