Game Development Reference
two methods that best fit real-time requirements, taking into account the properties
of S, the performance, and the memory used:

- the conjugate gradient (CG) method,
- the Cholesky factorization, combined with Gaussian elimination.
The first one is more general and can be used when S changes its value in every
iteration (see Section 10.3.4), whereas if S remains constant (see Sections 10.3.2
and 10.3.4), the second solution achieves better performance.
10.4.1 The Conjugate Gradient Method
The CG method can be used if the coefficient matrix S is symmetric and positive
definite. This method achieves good performance when S is sparse, since the
most complex operation required is the multiplication of a vector by S. There
exist variations of the CG method that deal with nonsymmetric systems and even
nonsquare matrices, but fortunately, our coefficient matrix fits the requirements
described above.
Listing 10.6 shows an example of a CG method implementation. The CG
method is an iterative algorithm in which an initial solution is refined until it
reaches the correct result. In exact arithmetic, this method reaches the exact
solution in N iterations, where N is the dimension of S. In practical terms, we will
never let the algorithm run N iterations; we will stop it when it converges to an
acceptable solution. The error of the solution can be estimated using the following
expression:
r = Sx̂ − b,

where r is the error estimation and x̂ is the estimated solution. The algorithm
stops when the norm of r is below a threshold ε.
Even when the solution has not reached this error threshold, we will stop the CG
method after a small number of iterations in order to obtain good performance.
In our experience, a value between 30 and 100 iterations (depending on the size
of S) is good enough.
good enough. Obviously, it is desirable to compute a good approximation in only
a few iterations, i.e., we want to improve the CG convergency by
providing a good initial solution;
reducing the matrix condition number,
- using a good quality mesh,
- using a preconditioner.