Figure 5.9. Illustrations of different stream line integration methods: (a) first-order
Euler integration, (b) second-order Runge-Kutta, (c) second-order Runge-Kutta with
fixed length, and (d) second-order Runge-Kutta with adaptive length.
choose the sign that minimizes the curvature of the stream line. This can be
achieved by ensuring that the scalar product between the minor eigenvectors of
the current and the previous steps is positive. More precisely, let x_0 be the
current point, let + and − denote the forward and backward directions,
respectively, and let t_0^±(x_0) = ±ξ(x_0); then, for k ≥ 0, the next minor
eigenvector is given by

    t_{k+1}^±(x) = sign( t_k^±(x) · ξ(x) ) ξ(x),

where ξ(x) denotes the minor eigenvector at the point x, computed using
Listing 5.1. Let x_0^± = x_0; then, for k ≥ 0, a step with step size h of the
Euler method is given by

    x_{k+1}^± = x_k^± ± h t_k^±(x_k^±).
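The sign-corrected Euler tracing described above can be sketched as follows. This is a minimal CPU sketch in Python (the chapter's own code is GPU shader code, as in Listing 5.1); the field callback `xi`, the helper names, and the point representation as tuples are assumptions made for illustration.

```python
import math

def normalize(v):
    """Return v scaled to unit length, or (0, 0) for a zero vector."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n) if n > 0 else (0.0, 0.0)

def trace_euler(x0, xi, h, steps, sign=+1.0):
    """Trace a stream line with first-order Euler integration.

    xi(x) returns the (unoriented) minor eigenvector at x. At each step
    the eigenvector is flipped if its scalar product with the previous
    direction is negative, so consecutive directions agree and the
    traced stream line does not fold back on itself.
    """
    t = normalize(xi(x0))
    t = (sign * t[0], sign * t[1])  # forward (+) or backward (-) start
    x = x0
    points = [x]
    for _ in range(steps):
        v = normalize(xi(x))
        if v[0] * t[0] + v[1] * t[1] < 0.0:  # sign correction
            v = (-v[0], -v[1])
        t = v
        x = (x[0] + h * t[0], x[1] + h * t[1])
        points.append(x)
    return points
```

Calling `trace_euler` twice, once with `sign=+1.0` and once with `sign=-1.0`, yields the forward and backward halves x_k^+ and x_k^- of the stream line through x_0.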
At least for long stream lines, which are required by the flow-guided smooth-
ing, the Euler method is comparatively inaccurate. This is illustrated in
Figure 5.9(a). The Euler integration method smoothes pixels lying on adjacent
isophote curves of the image. This corresponds to smoothing in the direction
of the major eigenvector and a loss of information (Figure 5.10(a)). Therefore,
instead of using the Euler integration method, the more precise second-order
Runge-Kutta method is used, which traces stream lines at a higher accuracy
(Figure 5.9(b)), reducing blurring across edges (Figure 5.10(c)). A step with
step size h of the second-order Runge-Kutta method is given by

    x_{k+1}^± = x_k^± ± h t_k^±( x_k^± ± (h/2) t_k^±(x_k^±) ).
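A single second-order Runge-Kutta step can be sketched as below, assuming the common midpoint variant of the method. As before, this is a Python sketch for illustration; the field callback `xi` and the helper names are assumptions, and in the actual GPU implementation this logic would live in a shader.

```python
import math

def normalize(v):
    """Return v scaled to unit length, or (0, 0) for a zero vector."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n) if n > 0 else (0.0, 0.0)

def oriented(v, t_prev):
    """Flip v so that its scalar product with t_prev is non-negative."""
    if v[0] * t_prev[0] + v[1] * t_prev[1] < 0.0:
        return (-v[0], -v[1])
    return v

def rk2_step(x, t_prev, xi, h):
    """One second-order Runge-Kutta (midpoint) step along the field.

    xi(x) returns the unoriented minor eigenvector at x; t_prev is the
    direction of the previous step, used for sign correction. Returns
    the next point and the direction taken.
    """
    t1 = oriented(normalize(xi(x)), t_prev)
    # Evaluate the field again at the midpoint of the Euler step.
    mid = (x[0] + 0.5 * h * t1[0], x[1] + 0.5 * h * t1[1])
    t2 = oriented(normalize(xi(mid)), t1)
    return (x[0] + h * t2[0], x[1] + h * t2[1]), t2
```

Compared with the Euler step, the only extra cost is the second evaluation of the eigenvector field at the midpoint, which is what requires eigenvector values at arbitrary positions.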
The second-order Runge-Kutta method requires values of the minor eigenvec-
tor for arbitrary positions. One option is to calculate these in one pass and then
use nearest-neighbor sampling while tracing the stream lines. Bilinear interpo-
lation of the eigenvectors is complicated, since opposite vectors may cancel each
other out. A better approach is to sample the structure tensor directly using
bilinear interpolation. This is more expensive, since the minor eigenvector has to
be computed for every sample, but also provides clearly superior results.
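The structure-tensor sampling approach can be sketched as follows. A 2×2 structure tensor [[E, F], [F, G]] is bilinearly interpolated entry-wise, and the minor eigenvector is then extracted from the interpolated tensor. The storage layout (a row-major grid of (E, F, G) triples), the helper names, and the eigenvector fallback for the degenerate case are assumptions of this sketch; on the GPU, the bilinear interpolation would come for free from texture filtering.

```python
import math

def sample_tensor(tensor_field, x, y):
    """Bilinearly interpolate a grid of structure tensors (E, F, G).

    tensor_field[j][i] holds the tensor at integer pixel (i, j).
    Interpolating the tensor entries, rather than the eigenvectors,
    avoids the cancellation of opposite (unoriented) eigenvectors.
    """
    i, j = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - i, y - j
    def lerp(a, b, t):
        return tuple(a[k] + (b[k] - a[k]) * t for k in range(3))
    top = lerp(tensor_field[j][i], tensor_field[j][i + 1], fx)
    bot = lerp(tensor_field[j + 1][i], tensor_field[j + 1][i + 1], fx)
    return lerp(top, bot, fy)

def minor_eigenvector(E, F, G):
    """Unit minor eigenvector of the symmetric tensor [[E, F], [F, G]].

    (lambda1 - E, -F) is perpendicular to the major eigenvector
    (F, lambda1 - E) and therefore spans the minor eigenspace; the
    fallback (0, 1) covers the degenerate zero-length case.
    """
    lambda1 = 0.5 * (E + G + math.sqrt((E - G) ** 2 + 4.0 * F * F))
    v = (lambda1 - E, -F)
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n) if n > 0 else (0.0, 1.0)
```

Computing the eigenvector after interpolation, once per sample, is what makes this variant more expensive than nearest-neighbor lookup of precomputed eigenvectors.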