This model takes into account photopic vision (daylight vision) and scotopic vision
(nocturnal vision), chromatic adaptation, and the changes in contrast perception that
follow the state of luminance adaptation. It is thus more complete than the models
suggested by Daly or Lubin. On the other hand, this model is quite demanding in terms
of both execution time and memory.
15.3.1.2 Algorithms of perceptual rendering
After having discussed some models of vision, we will now see how such a model can be
incorporated into a rendering algorithm, with a precise objective: saving time by not
calculating the details that the human visual system will not be able to perceive.
Mitchell (1987) was the first to use the visual perception of noise to optimise the
process of anti-aliasing in ray tracing, by avoiding calculations in the zones where
the noise is invisible to the human eye. A mathematical formulation of the perception
of local contrast was used to obtain this result;
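The idea can be illustrated with a minimal sketch: refine a pixel region only when its local contrast makes the noise visible. The Michelson contrast measure and the threshold value here are illustrative choices, not Mitchell's exact per-channel formula.

```python
# Sketch (not Mitchell's exact criterion): decide whether a pixel
# region needs more samples by testing the local contrast of the
# intensities already gathered against a visibility threshold.
def needs_more_samples(samples, threshold=0.1):
    """Return True if the local contrast of the sample intensities
    exceeds the threshold, i.e. the noise would be visible."""
    lo, hi = min(samples), max(samples)
    if hi + lo == 0:
        return False
    contrast = (hi - lo) / (hi + lo)  # Michelson contrast
    return contrast > threshold

# Refine only where the noise would be perceptible.
print(needs_more_samples([0.50, 0.52, 0.51]))  # low contrast -> False
print(needs_more_samples([0.10, 0.90, 0.40]))  # high contrast -> True
```

In a ray tracer this test would run per image region, concentrating supersampling where contrast, and hence visible noise, is high.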
Bolin and Meyer (1995) tried to make use of contrast sensitivity, the non-linearity of
the perception of contrast differences, and masking in a ray tracing algorithm. For
this purpose, they cast the rays directly in the frequency domain so as to display
only those frequencies which are perceptible to the human visual system. The
frequency aspect of the method was inspired by the JPEG compression algorithm,
as the image was divided into 8 × 8 blocks;
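The JPEG-inspired aspect can be sketched as follows: transform an 8 × 8 block with a DCT and down-weight the high-frequency coefficients the eye is less sensitive to. The weighting function below is a toy stand-in for a contrast sensitivity function, not Bolin and Meyer's actual machinery.

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of an 8x8 block (nested lists),
    the transform used on JPEG's 8x8 blocks."""
    n = 8
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = c(u) * c(v) * s
    return out

def perceptual_error(block_a, block_b):
    """Weighted squared difference of DCT coefficients, with toy
    weights that fall off toward high frequencies."""
    fa, fb = dct2(block_a), dct2(block_b)
    return sum((fa[u][v] - fb[u][v]) ** 2 / (1.0 + 0.5 * (u + v))
               for u in range(8) for v in range(8))
```

Two blocks that differ only in high frequencies thus yield a smaller error than blocks differing in low frequencies, mirroring what the visual system can actually perceive.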
Bolin and Meyer (1998) then used a complete vision model, derived from Lubin's model
(by replacing the Laplacian pyramid with a Haar wavelet transformation), to control
a Monte Carlo type rendering method. Their objective was to guide the placement of
samples so as to minimise the perceptual error;
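A one-level 2-D Haar decomposition, the kind of transform substituted for the Laplacian pyramid, can be written compactly; the detail bands then indicate where image variation (and hence likely perceptual error) is concentrated. This is a generic Haar step, not Bolin and Meyer's full model.

```python
# One level of a 2-D Haar decomposition: split the image into an
# approximation band and three detail bands (horizontal, vertical,
# diagonal) by averaging/differencing 2x2 pixel groups.
def haar_level(img):
    """img is a nested list with even dimensions; returns
    (ll, lh, hl, hh) bands at half resolution."""
    rows, cols = len(img), len(img[0])
    ll, lh, hl, hh = [], [], [], []
    for i in range(0, rows, 2):
        r_ll, r_lh, r_hl, r_hh = [], [], [], []
        for j in range(0, cols, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            r_ll.append((a + b + c + d) / 4)  # approximation
            r_lh.append((a - b + c - d) / 4)  # horizontal detail
            r_hl.append((a + b - c - d) / 4)  # vertical detail
            r_hh.append((a - b - c + d) / 4)  # diagonal detail
        ll.append(r_ll); lh.append(r_lh); hl.append(r_hl); hh.append(r_hh)
    return ll, lh, hl, hh
```

On a uniform region all three detail bands are zero, so no further samples would be directed there; edges and texture produce large detail coefficients.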
Ramasubramanian and Greenberg (1999) developed a differential model, i.e. one that
provides, for each pixel, the quantity of luminance that can be added to or
subtracted from that pixel without the eye seeing the difference. This method is
based on a vision model that accounts for luminance and contrast. Note that the
luminance-dependent process and the spatially dependent process are treated
independently. Ramasubramanian and Greenberg use their model to guide a path tracing
rendering algorithm. They pre-calculate the spatially dependent part of the model
(which is very expensive) before starting the calculation of indirect lighting,
which creates only a few high frequencies in the image;
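The differential idea can be sketched with a threshold-versus-intensity style function: at each adaptation level, there is a luminance change the eye cannot detect, and refinement can stop once the remaining error falls below it. The constants below are illustrative (a rough 2% Weber fraction), not the values used by Ramasubramanian and Greenberg.

```python
# Sketch of the differential model's use: per-pixel imperceptible
# luminance change, and a stopping test for a Monte Carlo estimator.
def luminance_threshold(adaptation_lum):
    """Weber-like law: the just-noticeable luminance difference
    grows roughly linearly with the adaptation luminance.
    Illustrative constants, not the published TVI data."""
    return 0.02 * adaptation_lum + 1e-3

def refine_pixel(current_lum, estimated_error):
    """Keep refining only while the remaining error would be
    visible at this pixel's adaptation level."""
    return estimated_error > luminance_threshold(current_lum)
```

Pre-computing the threshold map once, before the expensive indirect-lighting pass, is what makes the scheme pay off: the spatial analysis is not repeated as samples accumulate.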
Farrugia and Péroche (2004) suggested a progressive rendering algorithm based on a
perceptual metric built on the vision model by Pattanaik et al. The metric is
evaluated in an adaptive manner, using randomly selected sample pixels: the distance
is calculated on the basis of these pixels and a homogeneity test is carried out.
If the test is negative, the cell is subdivided and new samples are assessed.
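The adaptive evaluation can be sketched as a recursive subdivision: sample a few random pixels of a cell, test their distances for homogeneity, and split the cell when the test fails. The homogeneity criterion and the `distance_at` callback below are stand-ins for the Pattanaik-based metric, chosen only to illustrate the control flow.

```python
import random

def homogeneous(distances, tolerance=0.05):
    """Crude homogeneity test: every sampled distance lies within
    the tolerance of their mean."""
    mean = sum(distances) / len(distances)
    return all(abs(d - mean) <= tolerance for d in distances)

def evaluate_cell(cell, distance_at, n_samples=4, min_size=2):
    """Recursively subdivide an (x, y, w, h) cell whose randomly
    sampled perceptual distances are not homogeneous; return the
    list of leaf cells."""
    x, y, w, h = cell
    if w <= min_size or h <= min_size:
        return [cell]
    samples = [distance_at(x + random.random() * w,
                           y + random.random() * h)
               for _ in range(n_samples)]
    if homogeneous(samples):
        return [cell]
    hw, hh = w / 2, h / 2
    quads = [(x, y, hw, hh), (x + hw, y, hw, hh),
             (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    return [leaf for q in quads
            for leaf in evaluate_cell(q, distance_at, n_samples, min_size)]
```

A uniform distance map leaves the cell intact, while a spatially varying one drives subdivision toward the regions where the two successive images still differ perceptibly.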
The progressive rendering algorithm is based on the light vector method (Zaninetti
et al., 1998). The first image is calculated from a set of samples selected randomly
around the boundaries of the objects. This set of points is triangulated and an
image is obtained by Gouraud shading. The second image is calculated by positioning
a new sample per triangle, recalculating the triangulation and again applying
Gouraud shading. The perceptual distance is then evaluated on these two images to
provide a distance map. The triangles are divided into three categories on the
basis of two thresholds