I've got some brackets at high ISO of this Sekonic target (DNGs and rendered examples) on my iDisk if anyone wants to look them over and comment. Using ACR/LR to (as Michael calls it) normalize the 'overexposed' image, you can see the effects on noise compared to the 'normal' exposure. Look at the noise!
I downloaded the files and did an analysis, which demonstrates good and bad effects of the test series and makes for interesting discussion.
It's nice to have a fancy light meter like Andrew's, which gives real-time readings, but by looking at the raw files produced by the camera we can also get equivalent exposure data, since the sensor is linear over most of its range. The following analysis involves only the green channel, but could be extended to the others.
Here is a curve from the two exposures spliced together, showing the relative exposures of the patches and the resulting values in the raw file, which can be decoded with DCRaw, a freeware program much used by digital tinkerers. The raw values are actually 0..4095 (12 bit), but DCRaw multiplies them by 16 to output them in 16-bit format.
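To make that scaling concrete, here is a minimal Python sketch of the conversion just described (the function name is my own, purely for illustration):

```python
# Sketch of the scaling described above: DCRaw multiplies the camera's
# 12-bit raw values (0..4095) by 16 to fill the 16-bit output range.
RAW_MAX_12BIT = 4095

def scale_to_16bit(raw_value):
    """Scale a 12-bit raw sensor value to the 16-bit output range."""
    return raw_value * 16  # equivalent to a left shift by 4 bits

print(scale_to_16bit(RAW_MAX_12BIT))  # 65520, just shy of the 16-bit maximum 65535
```

Note that the brightest possible raw value lands at 65520, not the full 16-bit maximum of 65535, since the scaling is a simple multiply.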
The exposure values on the left are all bunched and hard to read, but they can be spread out with a log-log plot, which is standard for plotting the characteristic curves of film. The pixel value is normalized to one by dividing by 65535 for a 16-bit file (erroneously labeled 65625 on the image). This notation is more confusing at first, but best once you get used to it; Norman Koren uses this format in his Imatest charts. I used base-2 logs for the exposure, so the values correspond to f/stops.
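Here is a small Python sketch of the normalization and the base-2 log axis (again, the helper names are mine, not from any particular tool):

```python
import math

PIXEL_MAX = 65535  # maximum pixel value in a 16-bit file

def normalize(pixel_value):
    """Normalize a 16-bit pixel value to the 0..1 range."""
    return pixel_value / PIXEL_MAX

def stops(exposure_ratio):
    """Base-2 log of an exposure ratio gives the difference in f/stops."""
    return math.log2(exposure_ratio)

print(stops(4.0))        # 2.0 -- quadrupling the exposure is exactly two stops
print(normalize(65535))  # 1.0 -- the brightest possible 16-bit pixel
```

The base-2 choice is what makes one unit on the horizontal axis equal one f/stop.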
This curve shows that the brightest patch on the f/16 1/50 sec shot is blown and not on the linear portion of the graph.
Now we can look at the characteristic curves of the rendered images. The f/16 @ 1/50 exposure with default rendering has a blown highlight, but this is largely restored by the normalized rendering (shown in yellow). This normalized rendering is similar to that of the normal exposure of f/16 @ 1/200 sec, but the brightest patch is at the maximum pixel value and still slightly blown, as shown below. The midtones match fairly well but have slightly different density and slope.
Here is the histogram from the blown brightest patch mentioned above. Note that the right side of the bell-shaped curve is truncated. This histogram is from the freeware program ImageJ, which does 16-bit histograms, unlike Photoshop.
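For anyone who wants to check their own patches, here is a rough Python heuristic for spotting that kind of truncated histogram; the 1% threshold is an arbitrary assumption of mine, not something from ImageJ:

```python
def looks_clipped(pixels, max_value=65535, threshold=0.01):
    """Heuristic: a patch is probably blown when a noticeable fraction of
    its pixels pile up at the maximum representable value, truncating the
    right side of its histogram."""
    at_max = sum(1 for p in pixels if p >= max_value)
    return at_max / len(pixels) > threshold

blown = [65535] * 50 + [64000] * 50   # half the samples piled at the ceiling
normal = [30000] * 100                # comfortably below the ceiling
print(looks_clipped(blown), looks_clipped(normal))  # True False
```

A truly uniform, unclipped patch should show a full bell curve with essentially nothing sitting at the ceiling.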
Now, finally, for the noise, which is measured as the standard deviation of the pixel values in the patches. Some of this variation is in the target, and possibly in nonuniform illumination, but most of it is likely random noise from the camera. Such noise is primarily photon sampling noise (shot noise), though read noise enters the equation at low exposure values (see [url=http://www.clarkvision.com/imagedetail/evaluation-1d2/index.html]Roger Clark[/url] for an explanation and a better way to measure noise). Shot noise is proportional to the square root of the number of photons captured by the sensor, and is shown in this plot from the raw data. The noise is actually higher in the file with more exposure, and worse in the highlights, perhaps contrary to the conventional wisdom that associates noise with the shadows:
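Here is a quick Python sketch of that measurement, with simulated photon counts (a Gaussian with std dev = sqrt(mean), approximating Poisson statistics) to show the brighter patch carrying more absolute noise, just as in the plot:

```python
import math
import random

def patch_noise(pixels):
    """Noise estimate: standard deviation of pixel values in a uniform patch."""
    mean = sum(pixels) / len(pixels)
    return math.sqrt(sum((p - mean) ** 2 for p in pixels) / len(pixels))

# Simulate shot noise: photon counts fluctuate with std dev ~ sqrt(mean),
# so the brighter patch shows MORE absolute noise.
random.seed(0)
dim    = [random.gauss(100,   math.sqrt(100))   for _ in range(10_000)]
bright = [random.gauss(10000, math.sqrt(10000)) for _ in range(10_000)]
print(patch_noise(dim) < patch_noise(bright))  # True
```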
However, what we are more interested in is the signal-to-noise ratio. The signal is the number of photons actually captured (N) and the noise is its square root, so the ratio reduces mathematically to the square root of the number of photons captured [N/sqrt(N) = sqrt(N)]. That is why S:N is greater in the highlights.
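The arithmetic is easy to check in a few lines of Python:

```python
import math

def snr(photons):
    """Shot-noise-limited signal-to-noise ratio: N / sqrt(N) = sqrt(N)."""
    return photons / math.sqrt(photons)

print(snr(10000))               # 100.0 -- SNR is the square root of the photon count
print(snr(40000) / snr(10000))  # 2.0   -- four times the photons, twice the SNR
```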
So in the final analysis, we quadrupled the exposure, captured four times as many photons, and the S:N improved by a factor of 2 (the square root of 4), as predicted by theory. However, the highlights were blown and the recovery was less than perfect. All in all, I thought this was a good blend of theory and practice and worth the effort needed to write it up.