
Signal processing in digital cameras notes

A typical video/digital camera will apply the following types of processing to the image.

White Balance

To white balance the image, one or two of the RGB channels will be multiplied by some number above 1 to balance out red, green, and blue. This will push some values above white level and these values will need to be clipped for the image to look right. So, white balance is a destructive process that throws away image information. If white balance is re-applied in post, then the first white balance may have thrown away useful information.

In a RAW scenario, white balance is not baked into the image so this type of destructive manipulation is not applied to the image. Tricks such as highlight recovery (explained in this article) can also be applied. It is essentially recovering dynamic range that would otherwise be discarded.
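As a sketch of the clipping problem described above, here is a toy white-balance step in Python. The gain values are hypothetical, not taken from any real camera:

```python
# Toy white-balance step: scale channels, then clip anything pushed above
# white level. The clipping is the destructive part -- detail in the
# clipped channel is thrown away. Gains here are made-up illustration values.

def white_balance(rgb, gains=(1.0, 1.0, 1.9)):
    """Apply per-channel gains to an RGB triple in [0, 1], clipping at 1.0."""
    return tuple(min(c * g, 1.0) for c, g in zip(rgb, gains))

# A bright pixel: the boosted blue channel clips and its detail is lost.
print(white_balance((0.8, 0.7, 0.6)))  # (0.8, 0.7, 1.0)
```

If white balance is then re-applied in post, the clipped channel cannot be recovered, which is why RAW workflows defer this step.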

Color Correction Matrix

An ideal camera would have the same color response as the cone cells in our eyes. We typically assume that all humans have the same response as the 1931 standard observer (the average color response for a group of test subjects) or some other similar reference. For any given color, the red, green, and blue output should be at a certain level. Because the raw output from a camera will not match the ideal color response, a color correction matrix is applied to the signal.

A matrix is simply another way of writing out algebra. In this case, the corresponding set of algebra equations would be:

Output red   = something * R + something * G + something * B
Output green = something * R + something * G + something * B
Output blue  = something * R + something * G + something * B

The original RGB values go through the equations above, which will massage the values so that the final colors look right.
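The three equations above can be sketched directly in code. The matrix coefficients below are made up for illustration; real matrices are derived from measurements of the particular sensor's color response:

```python
# Apply a 3x3 color correction matrix to an RGB triple.
# Each output channel is a weighted mix of all three input channels.

def apply_matrix(rgb, m):
    r, g, b = rgb
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in m)

# Illustration-only coefficients. Each row sums to 1.0 so that
# neutral gray (R = G = B) passes through unchanged.
M = [
    [ 1.20, -0.15, -0.05],
    [-0.10,  1.25, -0.15],
    [ 0.00, -0.20,  1.20],
]

gray = apply_matrix((0.5, 0.5, 0.5), M)  # stays neutral gray
```

The negative off-diagonal terms are typical: they increase saturation to compensate for overlap between the sensor's color filters.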

Gamma correction

Human vision is more sensitive to variations in darker areas than for brighter areas. Suppose there were a bunch of lights behind a piece of diffusion (e.g. a silk, frosted window, whatever). We are better at detecting the difference between 1 and 2 lights than 999 and 1000 lights.

Because of this, our video systems apply transfer functions (also called gamma curves) to the video signal so that more bandwidth/bits are allotted to shadows than to highlights. This has the same effect as compression in that it throws away information so the rest of the video system doesn't have to deal with as much of it. (For whatever reason, most video engineers do not call it, or consider it, compression.)

For HD formats, cameras are expected to apply the Rec. 709 transfer function to the signal coming off the sensor. Before this function is applied, the signal is in the linear light domain. The signal is proportional to the number of photons of light hitting the sensor. After the transfer function is applied, the signal is in the gamma corrected domain. The gamma corrected signal is pretty close to how we interpret what we see and uses bandwidth more efficiently. It is close to the opposite of the inherent transfer function of a CRT (it has to convert the video signal into light). The two come close to cancelling each other out, so the CRT doesn't have to apply any signal processing to make its output right. (Also see Charles Poynton's Gamma FAQ on "What is gamma correction?").
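The Rec. 709 transfer function can be written out directly from the standard; linear-light input below 0.018 gets a straight-line segment, and everything above gets a power curve:

```python
# Rec. 709 opto-electronic transfer function (OETF).
# Input L is linear light, output is the gamma-corrected signal;
# both are normalized to [0, 1].

def rec709_oetf(L):
    if L < 0.018:
        return 4.5 * L                  # linear segment near black
    return 1.099 * L ** 0.45 - 0.099    # power-curve segment

rec709_oetf(1.0)  # 1.0 -- white maps to white
```

The linear segment near black exists because the pure power curve would have infinite slope at zero, which would amplify sensor noise in the deepest shadows.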

Why might gamma correction matter?

  1. If you want to perform optically-correct compositing, the signal must be converted into linear light before applying the compositing operations. Things like glows and cross dissolves can look better when done this way.

  2. Some cameras will cheat the signal processing and apply gamma correction before the color correction matrix. This is the wrong order to do things. The color correction matrix should be applied to the linear light signal, not the gamma corrected signal. However, many cameras do this because gamma correction allows them to reduce the bit depth. Applying the color correction matrix after the bit depth has been reduced means that the calculation is less computationally expensive. The errors are very small unless the colors are extremely saturated, so this is a cheat that is extremely difficult to notice.
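The linear-light compositing idea in point 1 can be sketched as follows. A simple 2.2 power curve stands in for the display transfer function here, just to keep the example short:

```python
# Cross dissolve performed in linear light: decode the gamma curve,
# mix, then re-encode. A plain 2.2 power curve is used as a stand-in
# transfer function for illustration.

GAMMA = 2.2

def dissolve_linear(a, b, t):
    """Mix gamma-encoded values a and b, with fraction t of b, in linear light."""
    lin = (1 - t) * a ** GAMMA + t * b ** GAMMA
    return lin ** (1 / GAMMA)

# Midpoint of a dissolve between black and white:
dissolve_linear(0.0, 1.0, 0.5)  # ~0.73, noticeably brighter than the 0.5
                                # you get by mixing the encoded values directly
```

That difference (~0.73 vs. 0.5) is exactly why dissolves and glows computed on gamma-corrected values can look like they dip in brightness.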


Sharpening

Most video cameras tend to apply some sort of sharpness enhancement to the image. This increases the perceived sharpness of the image. There may be side effects such as the appearance of halos around edges. Note that actual resolution does not increase. A good test of actual resolution would be to see how easy it is to read fine text. An image may be perceived to be sharper than another with higher resolution, but the image with higher resolution will let you read finer text and see finer details.

When evaluating cameras for resolution, be aware that some cameras apply more aggressive sharpening tricks; this should not be confused with actually having higher resolution.

Also note that any sharpening baked into the image can make keying and secondary color correction operations more difficult.

Some cameras have the ability to reduce sharpening for flesh tones, which will result in smoother flesh tones that may be more pleasing to the eye.

Video Knee

Most/all video cameras will compress highlights so that they roll off gently instead of clipping abruptly. One way to do this would be to apply some sort of curve to the RGB channels. Normally this would desaturate the highlights, so many video knee algorithms apply their own (proprietary) recipe to compensate for this effect.
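A minimal soft-knee curve might look like the following. The knee point and slope are made-up illustration values; real cameras use their own (often proprietary) curves:

```python
# Toy soft-knee: the signal passes through unchanged up to the knee point,
# then highlights above it are compressed by a fixed slope instead of
# clipping abruptly. Knee point and slope are illustration values only.

def knee(x, point=0.8, slope=0.25):
    """Compress values above `point` by `slope`."""
    if x <= point:
        return x
    return point + (x - point) * slope

knee(0.5)  # 0.5 -- below the knee, unchanged
knee(1.2)  # ~0.9 -- a super-white value pulled back below clipping
```

Applying a curve like this independently to R, G, and B is what causes the highlight desaturation mentioned above, since the brightest channel gets compressed the most.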

On some cameras the video knee can be disabled. Sometimes the effect is desirable and sometimes it is not.

Other image processing

There is also a lot of other image processing that may be applied.

