In my opinion, any DR test whose results depend on the RAW developer used, or on that developer's settings, is nonsense.
A camera's ability to capture a certain DR is a _hardware feature_, and it is clearly hardware limited: at the top by saturation in the highlights, and at the bottom by a given minimum signal-to-noise ratio considered valid in the shadows (there is no unique criterion for this value, but some criterion must be chosen if DR results are to be compared). And all this can (and perhaps should) be measured for each of the individual RGB channels separately.
So the dynamic range of a camera would be the difference in EV between the maximum recordable value in its RAW file and the signal level that keeps a given ratio to the standard deviation of the noise. Any test involving a particular RAW developer, or particular settings in a given developer such as ACR, is crap to me.
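That engineering definition can be written out numerically. The sketch below assumes a simple shot-noise-plus-read-noise sensor model; the full-well capacity, read noise, and SNR threshold are made-up values, and the SNR=1 criterion is just one possible choice:

```python
import numpy as np

# DR (in EV) = log2(saturation level / noise floor), where the noise
# floor is the signal whose SNR equals a chosen threshold.
# Sensor model: noise^2 = signal (shot noise) + read_noise^2.

full_well = 60000.0      # electrons at saturation (assumed value)
read_noise = 5.0         # electrons RMS (assumed value)
snr_threshold = 1.0      # SNR considered "valid" in the shadows (a criterion)

def noise_floor(read_noise, snr):
    # Solve S / sqrt(S + r^2) = snr for the signal S (in electrons):
    # S^2 - snr^2 * S - snr^2 * r^2 = 0, take the positive root.
    a = snr**2
    return (a + np.sqrt(a**2 + 4 * a * read_noise**2)) / 2

dr_ev = np.log2(full_well / noise_floor(read_noise, snr_threshold))
print(f"DR at SNR={snr_threshold}: {dr_ev:.1f} EV")
```

Note that a stricter criterion (say SNR=4) raises the noise floor and yields a smaller DR figure, which is exactly why the chosen criterion must be stated when comparing results.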
I agree entirely with Guillermo. Achieving greater dynamic range through highlight recovery, as done by DPReview, involves making up data for the clipped channels—blue and red for daylight exposures with most Bayer array cameras. Shown below is a typical daylight exposure with the Nikon D3, along with a histogram of the raw data. The shot is underexposed, since the green channel is about 2/3 stop from clipping. The red channel is 1 1/3 stops from clipping, and the blue channel is one stop from clipping. With increasing exposure such that the green channel clips, the other two channels would initially still have data, and highlight recovery would be possible using the non-clipped channels. Once the red channel also clips, highlight recovery is no longer possible.
The green channel will have the highest dynamic range, since it receives the most exposure. If you determine DR from a demosaiced image you are combining three different DRs, but what weighting should be used? Probably not an arithmetic average, since the eye is most sensitive to green. Also, in a demosaiced image, the green channel is contaminated by interpolation from adjacent blue and red pixels. For an excellent technical article on these matters, see Emil Martinec
And another very important issue that has to be taken into account in DR is sensor resolution: DR is measured at the pixel level, by calculating SNR on each pixel. If a camera has more megapixels than another with the same per-pixel SNR, the higher-resolution camera will provide a higher overall image DR (not pixel DR), since SNR increases when the image is rescaled to match the output size of the lower-resolution sensor.
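The rescaling argument can be quantified: averaging k independent pixels raises SNR by sqrt(k). A quick sanity check with assumed megapixel counts:

```python
import numpy as np

# Averaging k independent pixels raises SNR by sqrt(k), so downsampling
# a higher-Mpx image to the lower-Mpx size buys extra image-level DR.
# The megapixel counts below are assumed for illustration.

mpx_high, mpx_low = 24.0, 12.0
k = mpx_high / mpx_low                 # pixels averaged per output pixel
snr_gain_ev = 0.5 * np.log2(k)         # log2(sqrt(k))
print(f"Extra image-level DR after downsizing: {snr_gain_ev:.2f} EV")
```

So doubling the pixel count at equal per-pixel SNR is worth about half a stop of image-level DR after downsizing—real, but modest.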
In general: DR is a hardware feature, and it can never depend on the software used to process the RAW file.
The maximum recordable DR decreases as ISO is raised. Maximum recordable DR is reached when exposure of the scene is maximized (i.e. ETTR) and the camera's lowest electronic ISO is used.
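A minimal illustration of this ISO behaviour, under the simplifying assumption that each stop of analog gain costs exactly one EV of recordable DR (real sensors deviate from this somewhat, since read noise in electrons usually drops a little at higher gain); the base-ISO DR figure is assumed:

```python
import numpy as np

# Each ISO stop of analog gain halves the exposure that reaches raw
# clipping, while the electron-referred noise floor stays roughly
# constant, so DR drops ~1 EV per stop in this simple model.

base_iso, base_dr = 100, 13.0   # assumed base-ISO engineering DR in EV

dr_at = {iso: base_dr - np.log2(iso / base_iso)
         for iso in [100, 200, 400, 800, 1600]}

for iso, dr in dr_at.items():
    print(f"ISO {iso:4d}: ~{dr:.1f} EV")
```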
If the sensor size is held constant, and the pixel density varied to give different resolutions, the total DR will not change much, since the amount of light collected remains the same for both sensors. As Emil
and Guillermo explain, one is trading off dynamic range for resolution. When the image from the smaller-pixel camera (with the same total sensor area) is downsized to the same resolution as the larger-pixel camera, noise is decreased by pixel binning. The per-pixel noise standard deviation, as used by DPReview in determining dynamic range, can therefore be misleading.
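A toy simulation makes the binning point concrete: 2x2-averaging a noisy image halves the per-pixel noise standard deviation (sqrt(4) = 2), which is why per-pixel std measured on the native file alone can mislead. The image size and noise level here are arbitrary:

```python
import numpy as np

# Synthetic flat frame: mean 100, Gaussian noise with std 8 (assumed).
rng = np.random.default_rng(0)
img = 100.0 + rng.normal(0.0, 8.0, size=(1000, 1000))

# Average each 2x2 block (simple binning / downsampling):
binned = img.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(f"native std: {img.std():.2f}")    # ~8
print(f"binned std: {binned.std():.2f}") # ~4
```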
Of course, the reason for increasing megapixels is to allow a larger print size with acceptable image detail. When the print size with the higher-megapixel camera is increased so that pixel binning is no longer done, the higher-megapixel camera would cease to have a DR advantage, but it would still have better image detail and most likely finer-grained noise, which might appear less objectionable to an observer.
Of course, I am always talking about shooting RAW. JPEG is another story that depends on the camera's software, and the highest-DR camera will not always produce the JPEG files with the highest visible DR.
Quite true. The raw file contains linear, scene-referred data, whereas the rendered JPEG is output-referred. The rendering process involves DR compression and application of a tone curve; see the white paper by Karl Lange
on the Adobe site. Highlight and shadow DR as discussed by DPReview may make sense for JPEG-rendered images, which have a shoulder, a linear segment, and a knee, but this concept makes no sense for a raw file, which is linear.
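To see why "highlight DR" and "shadow DR" only appear after rendering, here is a stand-in s-curve (not any camera's actual tone curve) applied to linear data: it compresses the toe and shoulder while leaving the midtones roughly linear, whereas the raw data has no such regions at all:

```python
import numpy as np

def tone_curve(x):
    # Smoothstep-style s-curve on linear [0, 1] input: shallow slope
    # near 0 (toe) and near 1 (shoulder), steeper through the midtones.
    return x * x * (3.0 - 2.0 * x)

x = np.linspace(0.0, 1.0, 5)
print(np.round(tone_curve(x), 3))
```

On a curve like this, "shadow DR" and "highlight DR" describe the low-slope regions at either end; on the linear raw data there is nothing to point at, only a single saturation point and a noise floor.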