Game Development Reference
without a perceptible loss of quality. This is usually achieved by representing the
chrominance components of an image at a lower spatial resolution than the luminance
component, a process commonly referred to as chrominance subsampling.
This process forms the basis of our method.
The color of the rasterized fragments should first be decomposed into luminance
and chrominance components. Many transforms have been proposed to
perform this operation. The RGB-to-YCoCg transform, first introduced in H.264
compression [Malvar and Sullivan 03], has been shown to have better compression
properties than other similar transforms, such as YCbCr. The actual transform
is given by the following equations:
Y  =  1/4 R + 1/2 G + 1/4 B,
Co =  1/2 R − 1/2 B,
Cg = −1/4 R + 1/2 G − 1/4 B,
while the original RGB data can be retrieved as
R = Y + Co − Cg,
G = Y + Cg,
B = Y − Co − Cg.
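As a concrete illustration, the forward and inverse transforms above can be sketched in a few lines (floating point, so the round trip is exact up to rounding; the function names are ours, not from the text):

```python
def rgb_to_ycocg(r, g, b):
    """Forward transform: decompose RGB into luminance Y and chrominance Co, Cg."""
    y  =  0.25 * r + 0.5 * g + 0.25 * b
    co =  0.5  * r             - 0.5  * b
    cg = -0.25 * r + 0.5 * g - 0.25 * b
    return y, co, cg

def ycocg_to_rgb(y, co, cg):
    """Inverse transform, recovering the original RGB data."""
    r = y + co - cg
    g = y + cg
    b = y - co - cg
    return r, g, b
```

Substituting the forward definitions into the inverse confirms that each channel is recovered exactly: for example, Y + Co − Cg expands to (1/4 + 1/2 + 1/4)R = R.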
The chrominance values (Co, Cg) in the YCoCg color space can be negative. In
particular, when the input RGB range is [0, 1], the output Co and Cg range is
[−0.5, 0.5].
Therefore, when rendering on unsigned 8-bit fixed-point render targets, a bias of
0.5 should be added to these values in order to keep them positive. This bias
should be subtracted from the C o C g values before converting the compressed
render target back to the RGB color space.
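A minimal sketch of this bias step, assuming chrominance in [−0.5, 0.5] is quantized to an unsigned 8-bit channel (the helper names are hypothetical):

```python
def encode_chroma_8bit(c):
    """Bias a chrominance value from [-0.5, 0.5] into [0, 1], then quantize to 8 bits."""
    return round((c + 0.5) * 255.0)

def decode_chroma_8bit(q):
    """Dequantize an 8-bit value and subtract the bias before converting back to RGB."""
    return q / 255.0 - 0.5
```

The same bias-then-unbias pattern applies on the GPU when writing to and reading from the unsigned fixed-point render target.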
It is worth noting that, to avoid any rounding errors during this transform,
the YCoCg components should be stored with two additional bits of precision
compared to the RGB data. When the same precision is used for the YCoCg and
RGB data, as in our case, we have measured that converting to YCoCg and back
results in an average peak signal-to-noise ratio (PSNR) of 52.12 dB on the well-known
Kodak lossless true-color image suite. This loss of precision is insignificant
for our purposes and cannot be perceived by the human visual system, but still,
this measurement indicates the upper limit on the quality of our compression method.
One option to take advantage of chrominance subsampling is to downsample the
render targets after the rendering process has been completed. The problem with
this approach is that we can only take advantage of the bandwidth reduction
during the subsequent post-processing operations, as described in [White and
Barre-Brisebois 11], but not during the actual rasterization.
Instead, our method renders color images directly using two channels. The
first channel stores the luminance of each pixel, while the second channel stores