// Choose the bounce type; vfDiffuseColor, vfSpecularColor, and vfNormal
// are illustrative names for the material and hit-point data.
if (Pdiff >= randomDir.w)
{
    // Take diffuse reflection: sample a random direction on the unit sphere.
    g_uRays[index].vfReflectiveFactor *= vfDiffuseColor;
    g_uRays[index].vfDirection = normalize(randomDir.xyz);
}
else
{
    // Take specular reflection: mirror the incoming direction around the normal.
    g_uRays[index].vfReflectiveFactor *= vfSpecularColor;
    g_uRays[index].vfDirection = reflect(g_uRays[index].vfDirection, vfNormal);
}
Listing 1.1. Reflection in global illumination.
changes or the shaders are reloaded. When the nth image is generated, every
pixel p_a in the accumulated image is updated following the formula

    p_a = ((n - 1)/n) * p_a + (1/n) * p_n,

where p_n is the corresponding pixel of the nth rendered image.
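This update rule is a running average: after n frames, each accumulated pixel equals the mean of the n per-frame samples, so the image converges without storing every frame. A small sketch in plain Python, with scalar pixel values standing in for the image:

```python
# Progressive accumulation: fold the n-th rendered sample p_n into the
# running average p_a without keeping the previous samples around.
def accumulate(p_a, p_n, n):
    return ((n - 1) / n) * p_a + (1 / n) * p_n

samples = [0.2, 0.8, 0.5, 0.9]  # hypothetical per-frame pixel values
p_a = 0.0
for n, p_n in enumerate(samples, start=1):
    p_a = accumulate(p_a, p_n, n)

# After n frames the accumulated value equals the plain mean of the samples.
assert abs(p_a - sum(samples) / len(samples)) < 1e-12
```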
The main difference from conventional ray tracing is that each time a ray hits
a primitive, the reflected direction is generated from a random distribution based
on the properties of the material. Under global illumination, the reflected ray
can be either a diffuse or a specular reflection. When the ray is reflected
diffusely, the new direction is sampled from a unit sphere; in a specular
reflection, it is computed by mirroring the incoming direction around the
normal at the hit point. The decision of whether to follow a diffuse or a
specular reflection is based on a material parameter called Pdiff, a number
between 0 and 1 that represents the probability of a diffuse reflection. The
code in Listing 1.1 shows how this is implemented.
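The same probabilistic branch can be sketched on the CPU side. This Python sketch is an assumption-laden stand-in for the shader, not the chapter's exact code: it samples the diffuse direction from the unit sphere by rejection, and computes the specular direction with the standard mirror formula r = d - 2(d . n)n.

```python
import math
import random

def reflect(d, n):
    # Mirror direction d around unit normal n: r = d - 2*(d . n)*n.
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def random_unit_vector(rng):
    # Rejection-sample a point inside the unit ball, then normalize it,
    # giving a uniformly distributed direction on the unit sphere.
    while True:
        v = tuple(rng.uniform(-1.0, 1.0) for _ in range(3))
        length = math.sqrt(sum(c * c for c in v))
        if 0.0 < length <= 1.0:
            return tuple(c / length for c in v)

def bounce(direction, normal, p_diff, rng):
    # With probability p_diff take a diffuse bounce, otherwise specular.
    if rng.random() < p_diff:
        return random_unit_vector(rng)   # diffuse: unit-sphere sample
    return reflect(direction, normal)    # specular: mirror reflection

rng = random.Random(7)
d = bounce((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 0.0, rng)
# p_diff = 0 forces the specular branch: the z component flips sign.
assert d == (0.0, 0.0, 1.0)
```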
Since there are no explicit functions for computing random values in the
compute shaders, all the random values are computed on the CPU and sent to
the compute shader through a texture. Updating that texture every frame
would be very expensive and could slow the application down. To prevent
this, only a two-dimensional offset is updated each frame, and the texture
is sampled using that offset combined with the thread ID, so that different
threads use different pixels of the random texture within the same dispatch.
Also, since the offset is updated every time a random value is consumed, the
pixel used by a specific thread is different every time.
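The lookup scheme can be sketched as follows; the texture size, the names, and the wrap-around behavior are assumptions for illustration, not the chapter's exact implementation.

```python
# Sketch of the random-texture lookup: the CPU fills a table of random
# values once; each frame only a 2D offset changes, and every thread
# combines that offset with its own ID so threads read different pixels.
import random

TEX_W, TEX_H = 64, 64  # hypothetical random-texture size
rng = random.Random(0)
random_texture = [[rng.random() for _ in range(TEX_W)] for _ in range(TEX_H)]

def sample_random(thread_id, offset):
    # thread_id and offset are (x, y) pairs; wrap around the texture edges.
    x = (thread_id[0] + offset[0]) % TEX_W
    y = (thread_id[1] + offset[1]) % TEX_H
    return random_texture[y][x]

offset = (17, 42)  # updated on the CPU each frame (and after each use)
v0 = sample_random((0, 0), offset)
v1 = sample_random((1, 0), offset)
# Two threads combine the same offset with different IDs, so they read
# different pixels of the table on the same dispatch.
assert 0.0 <= v0 < 1.0 and 0.0 <= v1 < 1.0
```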
Once Monte Carlo sampling is in place, it can also be extended to handle
other effects. For example, to reduce aliasing, the ray direction can be
computed through a random point around the center of the pixel, which
subsamples the pixel over successive frames. Another effect is depth of
field: instead of using the same ray origin for all rays, as in a pinhole
model, the origin is taken as a random point on a disc centered at the
camera position. Figure 1.1 demonstrates the use of those effects.
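Both effects amount to randomizing one ray parameter per sample. A minimal Python sketch, assuming a simple camera model; the names and the disc-sampling mapping are illustrative, not the chapter's code:

```python
# Per-ray jitter for antialiasing and a depth-of-field ray origin.
import math
import random

def jittered_pixel_sample(px, py, rng):
    # Antialiasing: aim at a random point inside pixel (px, py) instead
    # of its center; averaging over frames subsamples the pixel.
    return (px + rng.random(), py + rng.random())

def dof_ray_origin(cam, aperture_radius, rng):
    # Depth of field: pick a random point on a disc of the given radius
    # around the camera position instead of a single pinhole origin.
    r = aperture_radius * math.sqrt(rng.random())
    theta = 2.0 * math.pi * rng.random()
    return (cam[0] + r * math.cos(theta),
            cam[1] + r * math.sin(theta),
            cam[2])

rng = random.Random(3)
ox, oy, oz = dof_ray_origin((0.0, 0.0, 0.0), 0.5, rng)
# The origin stays within the aperture disc, in the camera's plane.
assert ox * ox + oy * oy <= 0.5 ** 2 + 1e-12 and oz == 0.0
```

The sqrt in the radius keeps the samples uniformly distributed over the disc's area rather than clustered at its center.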