Glenn Chan / Sony Vegas Tutorials
Please see my new chroma subsampling article. This is an older article and contains some inaccuracies, though it does contain information not found in the other article.

Color subsampling

This article is about techniques that can be used to get better results out of color subsampling.

In practice, the most common form of color subsampling is the chroma subsampling used in Y'CbCr encoding. A common source of objectionable artifacts is titles encoded with mainstream DV codecs, which employ 4:1:1 chroma subsampling.

4:1:1 chroma subsampling as implemented in the Sony Vegas DV codec. Roll-over for comparison with the original red text.

Many DV decompressors do not apply any chroma interpolation at all, resulting in chunky block-like artifacts on edges between a highly saturated color and a black background. Interpolating the chroma reduces these artifacts.
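A rough 1-D sketch of the difference (using NumPy; the function names and the horizontal-only 4:1:1 pattern are my own simplification, since real codecs work on 2-D chroma planes with specific filter kernels):

```python
import numpy as np

def subsample_411(chroma):
    """Keep one chroma sample per 4 horizontal pixels (4:1:1)."""
    return chroma[::4]

def upsample_nearest(sub, width):
    """Replicate each chroma sample 4 times (no interpolation -> blocky edges)."""
    return np.repeat(sub, 4)[:width]

def upsample_linear(sub, width):
    """Linearly interpolate between the retained chroma samples."""
    x_sub = np.arange(0, width, 4)
    return np.interp(np.arange(width), x_sub, sub)

# A hard chroma edge, e.g. saturated red text on black (Cr jumps at pixel 6):
cr = np.array([16.0] * 6 + [240.0] * 10)
sub = subsample_411(cr)                    # only pixels 0, 4, 8, 12 survive
blocky = upsample_nearest(sub, cr.size)    # edge lands on the wrong pixel, in 4-wide chunks
smooth = upsample_linear(sub, cr.size)     # edge is ramped instead of chunky
```

Neither reconstruction recovers the original edge position exactly; interpolation merely trades the chunky staircase for a smoother ramp.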

However, the resulting image still appears significantly different than the original. This is due to the filtering applied to the original image.

Better 4:1:1 chroma subsampling techniques using improved filtering and chromata interpolation (my term for an approximation of chromaticity interpolation). Roll-over for comparison with original. There is a difference, although it may be quite difficult to see.

An ideal color subsampling system

I describe an ideal system to indicate where engineering approximations are being made in Y'CbCr encoding. In an ideal system, the color information would be completely separated from the luminance information. In Y'CbCr encoding (used in many image encoding schemes), chroma is used to approximate color (chrominance, to be exact) and luma is used to approximate luminance.

Order of operations

Luma and chroma differ from the luminance and chrominance of color science in how the values are formed. Luma and chroma are weighted sums of gamma-corrected components, not of linear-light components. The 'order of operations' is different: for luma, gamma compression is applied first and the weighted sum is taken afterwards, whereas luminance is a weighted sum of the linear-light components.
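A minimal numeric sketch of the two orders of operations, assuming Rec. 601 weights and a plain 1/2.2 power in place of the real transfer function (which has a linear segment near black):

```python
def gamma_encode(v, gamma=2.2):
    """Simplified transfer function: a pure 1/2.2 power."""
    return v ** (1.0 / gamma)

def luma_601(r, g, b):
    # video practice: weighted sum taken AFTER gamma encoding
    return 0.299 * gamma_encode(r) + 0.587 * gamma_encode(g) + 0.114 * gamma_encode(b)

def luminance_601(r, g, b):
    # color science: weighted sum of LINEAR-light components
    return 0.299 * r + 0.587 * g + 0.114 * b

# For a saturated color the two orders disagree noticeably:
r, g, b = 1.0, 0.0, 0.0                       # pure red
print(luma_601(r, g, b))                      # 0.299
print(gamma_encode(luminance_601(r, g, b)))   # ~0.578
```

For neutral (grey) colors the two paths agree; the disagreement grows with saturation, which is why the luma/luminance "bleeding" described above is strongest for highly saturated colors.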

As a result, there is some 'bleeding' of luminance+chrominance information between the luma+chroma components. Errors in the chroma component will result in other smaller errors in luminance. This effect is proportionally stronger for colors of higher saturation. This is a reasonable approximation as long as the error in chroma is low.


2X chroma


2X chrominance

In the pair of images above, chroma/chrominance was doubled. The left image looks perceptually less natural. In some areas of the image, such as the foliage and the ground, the difference may be difficult to spot.

In a color subsampling scheme, using the correct order of operations can result in improved performance on highly saturated colors. In the images below, notice the dark bands that form around the red, green, and blue stripes.


Order of operations incorrect. I call this chromata interpolation (my term).

Order of operations correct. I call this chromaticity interpolation.

Gamma Compression / Perceptual Uniformity

The main goal of gamma compression in image encoding is to use as low a bit-depth as possible while producing acceptable images without flaws such as banding. Banding occurs when the minimum difference between values exceeds the threshold of vision. The ideal gamma compression would be as perceptually uniform as possible.

In a completely perceptually uniform system, a change of 1 unit is always just barely noticeable. In a system that is not completely perceptually uniform, a change of 1 unit may correspond to anywhere from 0 to 4 or more just noticeable differences (JNDs). In an ideal system, a 1-unit change corresponds uniformly to exactly 1 JND: it is just barely noticeable.
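A rough sketch of why gamma compression helps at a fixed bit depth. The 1% "Weber fraction" visibility rule of thumb and the pure 2.2 power are simplifications of my own choosing, not exact vision models:

```python
# How visible is a 1-code step? As a rule of thumb, relative luminance jumps
# above roughly 1% risk visible banding.

def rel_step_linear(Y, bits=8):
    """Relative luminance jump of one code step when codes are linear in luminance."""
    code = Y * (2 ** bits - 1)
    return 1.0 / code

def rel_step_gamma(Y, bits=8, gamma=2.2):
    """Relative luminance jump of one code step when codes are gamma-encoded."""
    code = (Y ** (1.0 / gamma)) * (2 ** bits - 1)
    return ((code + 1.0) / code) ** gamma - 1.0

# Near black, linear 8-bit coding bands badly; gamma coding spreads the
# codes far more evenly across the perceptual range:
print(rel_step_linear(0.01))   # ~39% jump per code near black
print(rel_step_gamma(0.01))    # ~7% jump per code near black
print(rel_step_linear(0.5))    # ~0.8% at mid grey
print(rel_step_gamma(0.5))     # ~1.2% at mid grey
```

Gamma coding trades a tiny loss of precision in the highlights for a large gain in the shadows, which is why 8-bit gamma-encoded video survives where 8-bit linear-light coding would band visibly.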

In video, the transfer function of the camera is generally defined by either sRGB, ITU-R Rec. 601 or Rec. 709. The transfer function of the standard monitor is roughly a power function of 2.4 (there is conflicting information on this topic). There is a difference between the input and output transfer functions, implying a rendering intent that essentially tries to make the image look 'right' (correct). For more information, see Charles Poynton's Gamma FAQ.

In color science, the gamma compression of lightness (L*) differs from that of video. Ignoring the linear segments in the transfer functions, L* has a gamma of roughly 3.0 while Rec. 601 follows a gamma of roughly 2.2. In practice, it generally makes the most sense to stay with the sRGB, Rec. 601, or Rec. 709 transfer functions.

Further perceptual uniformity

CIE L*a*b*, CIE L*u*v*, and other color spaces have some properties that can offer additional perceptual uniformity.

Generally speaking, Y'CbCr offers fairly good perceptual uniformity and requires little processing. Strict perceptual uniformity is unnecessary if the bit-depth is high enough.

Chroma/chrominance versus chromaticity

In a luminance-chrominance type color space (Y'CbCr behaves much like one), the chrominance component is intimately tied to luminance. Chrominance only produces the correct hue and saturation for one luminance value. Chromaticity produces the correct hue and saturation for all luminances.

In a luminance-chrominance system, a change in luminance will result in very perceptible changes in hue and saturation. In a luminance-chromaticity system, this does not occur.
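A sketch of the difference, using the standard Rec. 601 conversion between R'G'B' and Y'CbCr. Note that this operates on gamma-encoded values, so the ratio-preserving branch is really what this article calls chromata interpolation; the example color is my own:

```python
def rgb_to_ycbcr(r, g, b):
    """Rec. 601 R'G'B' -> Y'CbCr (luma 0..1, chroma roughly -0.5..0.5)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, 0.564 * (b - y), 0.713 * (r - y)

def ycbcr_to_rgb(y, cb, cr):
    """Inverse of the above."""
    r = y + 1.402 * cr
    b = y + 1.772 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

r, g, b = 0.8, 0.2, 0.2                  # a saturated red (my example)
y, cb, cr = rgb_to_ycbcr(r, g, b)

# Chrominance-style: halve Y' but keep Cb/Cr fixed -> saturation balloons
print(ycbcr_to_rgb(y / 2, cb, cr))       # roughly (0.61, 0.01, 0.01)

# Chromaticity-style: keep the R:G:B ratios -> same hue/saturation, just dimmer
print((r / 2, g / 2, b / 2))             # (0.4, 0.1, 0.1)
```

Halving the luma while holding Cb/Cr fixed drives green and blue nearly to zero, producing a much more saturated red; preserving the component ratios simply darkens the color.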




Luminance-chromaticity based.


Luminance-chrominance based.

When interpolating chrominance values, the interpolated chrominance values will be applied to varying luminances. This can produce noticeable errors in saturation and (to a lesser degree) hue.


Original. (300% zoom, nearest neighbour resampling.)


Chromaticity-based. (300% zoom, nearest neighbour resampling.)


Chroma-based. (300% zoom, nearest neighbour resampling.)

Chromaticity interpolation problems

One major problem with chromaticity interpolation (with Y'CbCr encoding) occurs when interpolating black, since black has an undefined chromaticity. Where there are two black pixels to interpolate between, the decoder has to make an assumption about the chromaticity of black. The most reasonable assumption is that black is achromatic/colorless. This will result in achromatic/colorless artifacts.

Chromaticity bug (300% zoom, nearest neighbour resampling.)

A workaround - 'bumping the blacks'

One workaround/cheat to this problem is to avoid encoding black where necessary. When recording a value with low/no luminance, the encoder can mix in an average of surrounding pixels.

If the surrounding pixels have high luminance values, the encoder will "bump the blacks" and record non-zero values. This gives the chromaticity interpolator something to work with.

If the surrounding pixels have low luminance values, the achromatic artifacts will not be particularly objectionable. In this case, (mostly) leaving the encoded values alone is satisfactory, and it also avoids the appearance of a raised black level.
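A sketch of the encoder-side workaround. The threshold, window size, and blending rule below are my own guesses at plausible choices, not the article's actual implementation:

```python
import numpy as np

def bump_blacks(y, threshold=0.02, radius=1):
    """Replace near-black samples with a local average so that interpolated
    chromaticity has something to work with. Heuristic sketch: the threshold
    and window are illustrative choices."""
    out = y.copy()
    n = y.size
    for i in range(n):
        if y[i] < threshold:
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            neighborhood = y[lo:hi]
            # Only bump when bright neighbours exist; otherwise leave the
            # black alone, avoiding a visibly raised black level.
            if neighborhood.max() >= threshold:
                out[i] = max(y[i], neighborhood.mean())
    return out

print(bump_blacks(np.array([0.9, 0.0, 0.9])))  # middle sample gets bumped
print(bump_blacks(np.zeros(3)))                # all-dark region is left alone
```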

Roll-over to compare the cheated image versus the original. The difference may be very subtle.

One potential problem with low luminance values is that they can lead to very high quantization error/noise when interpolated onto values of high luminance.

The ideal solution to the problem is to use a luminance-chromaticity type color space. However, most image acquisition uses luminance-chrominance type color spaces. In those cases, this workaround would be appropriate.


The 'color' of an image should be filtered prior to subsampling. High frequency color detail that 'straddles' the sampling grid will cause color aliasing.
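A 1-D sketch of why prefiltering matters, using a 4-tap box filter before 4:1 decimation (the kernel choice is mine; a real encoder would use a better low-pass filter):

```python
import numpy as np

def decimate_4x(chroma, prefilter=True):
    """4:1 horizontal decimation, optionally with a 4-tap box prefilter.
    Without the filter, detail narrower than the sampling pitch aliases
    or disappears entirely."""
    if prefilter:
        kernel = np.ones(4) / 4.0
        chroma = np.convolve(chroma, kernel, mode='same')
    return chroma[::4]

# A 1-pixel-wide colour detail 'straddling' the sampling grid:
spike = np.zeros(16)
spike[5] = 1.0
print(decimate_4x(spike, prefilter=False))  # the detail vanishes entirely
print(decimate_4x(spike, prefilter=True))   # its energy survives at reduced amplitude
```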


Original*. (300% zoom.)

Without filtering. (300% zoom.)

With filtering. (300% zoom.)

The original image already exhibits some aliasing, so this may be a poor example.


Test pattern run through my prototyped color subsampler (4:1:1 color subsampling)

Compare the results to the codec images at

In particular, compare the results with 4:2:2 codecs such as 10-bit Apple uncompressed. My prototype achieves better results in some areas of the test pattern.

Better Color Subsampling Techniques

My prototype improves on traditional chroma interpolation by implementing the correct order of operations, smart filtering ("bumping the blacks"), and chromaticity interpolation. Better results can likely be achieved by using (more) intelligent chroma reconstruction: assume that values of similar luminance have similar color. This should remove the color bleeding between the bars of the color bars test pattern.

color bars artifacts

Color Spaces

With the Y'CbCr color space, chromaticity interpolation cannot be directly applied, since luma values are recorded rather than luminance values. However, given an interpolated color, the decoder could make an intelligent guess as to the luminance of the pixel in question. This is not demonstrated in the download.

For ideal chromata or chromaticity interpolation, images should be recorded in a luminance-chromaticity type color space. One method to convert luminance-chrominance color spaces to a luminance-chromaticity color space is to take the color components and divide them by the luminance component.
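A sketch of that conversion. The Rec. 601 weights and the treatment of black as colorless are my own choices for illustration:

```python
def to_luminance_chromaticity(r, g, b, eps=1e-6):
    """Split a colour into luminance plus a chromaticity-like triple by
    dividing the colour components by luminance (the method suggested
    above). Black has undefined chromaticity; assuming it achromatic is
    an illustrative choice."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 weights
    if y < eps:
        return 0.0, (1.0, 1.0, 1.0)          # assumption: black is colourless
    return y, (r / y, g / y, b / y)

def from_luminance_chromaticity(y, chroma):
    """Reconstruct the colour; the chromaticity stays valid at any luminance."""
    return tuple(y * c for c in chroma)
```

Because the chromaticity triple is luminance-free, rescaling y preserves the component ratios (hue and saturation), which is exactly the property that chrominance-based encoding lacks.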

Why to use luminance-chromaticity type color spaces

The advantage of luminance-chrominance color spaces is that they exhibit better perceptual uniformity than luminance-chromaticity color spaces. However, this advantage does not hold in the context of color subsampling. Luminance-chrominance color spaces suffer from the 'chromaticity of black' problem: when colors are close to black, they have much greater quantization error. DCT compression (used for many Y'CbCr-based formats) may make this problem even worse (untested).

Luminance-chrominance color spaces can also be inefficient for image encoding systems, since much of the color space is devoted to impossible and/or illegal colors (i.e. colors that would produce negative values in the R/G/B channels).


The color subsampling techniques presented in this article are likely a bad idea, since they offer marginally better quality at great computational expense. However, there are situations where the highest quality is desirable (and the original image has color subsampling applied). These techniques could be useful in upconverting material from SD to HD or 35mm film.


In this article, the definitions of terms used may not be the same as yours.

I would argue for better standardization of terminology, since ambiguous terms can lead to confusion. For example, I am easily confused.

Chrominance should refer to the chrominance of color science, not chroma as implemented in Y'CbCr encoding. In current practice, chrominance is often used to refer to chroma.

Chroma should refer to the color difference components used in Y'CbCr encoding.

Chromata should indicate chromaticity interpolation but with the incorrect order of operations. This is the term I suggest, since it is analogous to the difference between luma and luminance.

Unfortunately, chroma in the Munsell system means something else, so this usage could potentially be confusing and/or ambiguous.

Chromaticity should refer to color encoding that provides the same hue and saturation regardless of luminance.

Perceived saturation is our intuitive sense of saturation. This cannot be measured by an instrument; it is perceived.

I use the term saturation as an umbrella term to refer to any of the concepts above. This is perhaps sloppy usage?

Luma versus luminance: Luma should refer to the quantity representative of luminance as used in video engineering (i.e. Y'CbCr encoding).

Luma/Luminance Co-efficients: Rec. 601 versus Rec. 709

Charles Poynton argues (in his book Digital Video and HDTV) that Rec. 709 should have adopted the (incorrect) Rec. 601 luma co-efficients, for better compatibility with SD formats. To loosely paraphrase:

Having two different sets of luma co-efficients requires extra processing during conversions. Without this extra processing there will be errors in hue and saturation, and consumer equipment likely will not apply the extra processing necessary. At best, the advantage of the correct co-efficients would be a minor improvement in S/N ratio. This slight improvement is overshadowed anyway by the error introduced by the incorrect order of operations in forming luma.
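The size of the error can be sketched directly from the two sets of co-efficients. Decoding material with the mismatched set (as cheap equipment might) leaves neutral colors untouched, since both sets sum to 1, but shifts luma noticeably for saturated colors:

```python
REC601 = (0.299, 0.587, 0.114)     # SD luma co-efficients
REC709 = (0.2126, 0.7152, 0.0722)  # HD luma co-efficients

def luma(rgb, coeffs):
    """Weighted sum of gamma-encoded components."""
    return sum(c * v for c, v in zip(coeffs, rgb))

green = (0.0, 1.0, 0.0)
print(luma(green, REC709))   # 0.7152 (correct for Rec. 709 material)
print(luma(green, REC601))   # 0.587  -> roughly an 18% luma error on pure green

grey = (0.5, 0.5, 0.5)
# Neutral colours are essentially unaffected, since both sets sum to 1:
print(luma(grey, REC601), luma(grey, REC709))
```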

The following roll-over illustrates some differences so that you may judge for yourself. *See the "real world imagery" caveat below.

Mouse-over to see the results with Rec. 709 luma co-efficients. The image shown (when the mouse is not over the image) is using Rec. 601 co-efficients.

To evaluate things correctly, you will need a monitor whose primary chromaticities follow the Rec. 709 chromaticities. Since this is unlikely, the roll-over above can still be used to demonstrate the subtle differences between the two sets of luma co-efficients.

Real World Imagery

Unfortunately, this article did not use much real world imagery. Test pattern results tend to overemphasize differences and do not reflect perceptual phenomena such as surround effects, which are much weaker in real world imagery than in test patterns.

This site by Glenn Chan. Please email any comments or questions to glennchan /at/
