My point is actually simpler: many complaints about "inadequate dynamic range" are in reality complaints about "inadequate highlight headroom when I meter the way that I used to with film" because overexposed highlights hit a hard wall with digital that was not there with film. Some sources even try to measure a camera's "highlight DR" and "shadow DR" as separate components, even though the split between the two is a matter of mid-tone placement, which the photographer can adjust.
Perhaps this was true about 8 years ago, before Michael raised the issue of 'expose to the right (of the histogram)' and 2500 threads ensued over the years, exploring the concept in ever more detail.
I get the impression that most people still shoot in JPEG mode (even with DSLRs), and they mostly shoot subjects which make no special demand on the DR capability of their camera. They therefore have no complaints about its DR limitations, assuming such people even understand what DR means.
Speaking for myself, I would say I have more shots spoiled by blown highlights than by shadows with unacceptable noise. But of course, a concern for one influences the other. A concern for achieving the best shadow detail can lead one to inadvertently overexpose. A concern to avoid overexposure can result in actual underexposure, with the consequence of noisier shadows than one would otherwise get.
For this reason I frequently have my camera set on autobracket mode, because memory is so cheap and because the rated 'potential shutter actuations' figure of modern DSLRs is so high (typically 150,000). What have I to lose?
That said, this is my point: once highlights are exposed correctly, the dominant noise limit on IQ is getting enough light into the main parts of the image, which means getting at least about 500 photons per pixel (or better, somewhere over 1000) in the mid-tones. At those photon levels, shot noise overwhelms read noise in modern CMOS sensors all the way down to three or more stops below the mid-tone subject matter of the scene, and anything darker than that will, in virtually all "artistic" photography [as opposed to surveillance, medical, astronomical etc.], be printed or otherwise displayed so dark that noise is not a visible problem. So read noise is of little significance, unless one plays games like massively lightening deep shadow regions up to mid-tone print levels: may I call this "shadow peeping"?
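To put rough numbers on that claim, here's a minimal sketch in Python. The 3 e- read noise and the 1000-photon mid-tone are assumed, illustrative figures, not measurements from any particular camera:

```python
import math

READ_NOISE = 3.0   # e- RMS, an assumed figure for a modern CMOS sensor
MIDTONE = 1000     # photons per pixel at the mid-tone, assumed

for stops_below in range(6):
    signal = MIDTONE / 2 ** stops_below          # light in this tonal band
    shot = math.sqrt(signal)                     # Poisson shot noise
    total = math.sqrt(signal + READ_NOISE ** 2)  # sources add in quadrature
    print(f"{stops_below} stops below mid-tone: signal={signal:6.1f} e-, "
          f"shot={shot:5.1f}, total={total:5.1f}, SNR={signal / total:5.1f}")
```

Three stops down (125 photons) the shot noise is about 11.2 e- against an assumed 3 e- of read noise, so the total is within a few percent of shot noise alone, which is the sense in which shot noise "overwhelms" read noise.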
That all sounds very reasonable, JBL, but again I think we're slipping into a modern equivalent of the 'horse's mouth' parable. (Let's attempt to find out how many teeth a horse has through appeal only to authority, instead of checking the hard facts for ourselves.)
The testing that I've done with my own equipment suggests that what you write above is either simply not true, or, if it is true, there are other significant factors being overlooked. Those factors may be, for example, that read noise and shot noise are not the only sources of noise, even if they are the most significant ones; or it may be that the eye, being less sensitive to detail and noise in the shadows, can accept shot noise in far greater proportion to the signal than would result in a patch on the sensor averaging 500 photons per pixel, a figure which I really don't think applies at all to deep shadows.
For example, in an area of midtones where the pixels average 500 photons per pixel, the shot noise would average about 22 photons per pixel. Is this correct?
22 photons per 500?? That's only about 4.5%. And since independent noise sources add in quadrature, even if read noise were as large as shot noise at this level, we'd still be looking at less than 6.5% total noise within this tonal range. But none of that applies to even modestly deep shadows.
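For what it's worth, the arithmetic checks out in Python; the "worst case" line assumes, purely hypothetically, that read noise is as large as shot noise at this signal level (in practice it is far smaller):

```python
import math

signal = 500.0
shot = math.sqrt(signal)                  # ~22.4 photons, ~4.5%
worst = math.sqrt(shot ** 2 + shot ** 2)  # read noise set equal to shot noise
print(f"shot noise : {shot:.1f} e-  ({100 * shot / signal:.1f}%)")
print(f"worst total: {worst:.1f} e-  ({100 * worst / signal:.1f}%)")
```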
The relevant testing I'm thinking of was carried out shortly after I bought a Canon 20D a few years ago. I'd become rather dissatisfied with the limited high-ISO capability of the D60, and the 20D was dramatically better.
I'd read on the old Rob Galbraith forum that it was always better to raise ISO than underexpose at base ISO. This was not such a big deal with the D60. An ETTR exposure at ISO 400 was hardly better than the same exposure at ISO 100, after using EC in ACR to raise the levels.
I decided to check this new high-ISO performance of the 20D for myself, taking two shots of the same high-SBR (subject brightness range) scene at equal exposure, the exposure being just right for ETTR at ISO 1600. Of course, that same exposure used at ISO 100 became a 4-stop underexposure.
I raised the shadows of the ISO 100 shot in ACR, converting both images without sharpening or noise reduction, and to my astonishment the ISO 1600 shot was much better across the entire tonal range. While the greatest improvement was in the shadows, there was a lesser but still noticeable improvement in the mid-tones and upper mid-tones of the ISO 1600 shot.
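One way to see why analog gain helped so much on that camera is a toy read-noise model, sketched below. The split into noise ahead of the ISO amplifier and noise after it is the usual explanation for results like this; the PRE and POST figures are illustrative guesses, not 20D measurements:

```python
import math

PRE = 3.0    # e- RMS ahead of the ISO amplifier (assumed)
POST = 22.0  # e- RMS after it, input-referred at unity gain (assumed)

def read_noise(gain):
    """Input-referred read noise at a given analog gain."""
    return math.sqrt(PRE ** 2 + (POST / gain) ** 2)

# A 4-stop push in ACR scales signal and noise together, so only the
# input-referred noise matters when comparing the two shots.
signal = 50.0  # photons in a deep shadow; exposure identical in both shots
for label, gain in (("ISO 100 + 4-stop push", 1), ("ISO 1600 analog gain", 16)):
    total = math.sqrt(signal + read_noise(gain) ** 2)
    print(f"{label}: read={read_noise(gain):4.1f} e-, SNR={signal / total:4.1f}")
```

With these assumed numbers the pushed ISO 100 shot comes out at roughly a third of the SNR of the ISO 1600 shot in the deep shadows, consistent with the sort of difference described above.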
Now some of you may be thinking: so what? This is old hat. We all know that it's better to use the higher ISO rather than underexpose at base ISO.
But here's the rub.
Not with the D7000. There's no image-quality advantage in using a higher ISO as an alternative to underexposing at base ISO. So what, you're probably thinking again.
If we refer to the DXOMark graphs, comparing the D7000 with the 20D in 'screen' mode (i.e. pixel level), we find that at ISO 1600 a D7000 image cropped to the 8 MP of the 20D is as good in all the parameters that DXO measures: SNR, DR, tonal range and color sensitivity.
Or, to put it another way: the image which was underexposed by 4 stops at ISO 100 in the 20D, and which looked significantly noisier than the analog-boosted ISO 1600 image from the 20D, will not look noisier in the D7000 when underexposed 4 stops at ISO 100. It will look about the same as the analog-boosted ISO 1600 shot from the 20D, i.e. much improved.
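The same toy model also shows what an ISO-invariant sensor looks like: make the post-amplifier noise small and the gain setting stops mattering. The figures below are again illustrative, not D7000 measurements:

```python
import math

PRE, POST = 2.5, 2.0   # e- RMS, assumed for an ISO-invariant design

for gain in (1, 16):
    rn = math.sqrt(PRE ** 2 + (POST / gain) ** 2)
    print(f"gain x{gain:>2}: input-referred read noise = {rn:.2f} e-")
# x1 gives ~3.2 e-, x16 gives ~2.5 e-: near-identical, so a 4-stop push
# at base ISO costs almost nothing compared with analog gain.
```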
Of course the D7000 has double the pixel count of the 20D, and that fact provides an overall better performance than the 20D at equal image or print size. That's another issue. To see its effects, click on 'print' mode on the DXOMark charts. The 20D is left behind in all respects.
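For anyone wondering where the 'print' mode gain comes from: DXO normalizes to a fixed output size, and averaging pixels down raises SNR by the square root of the pixel ratio. A rough sketch, with a placeholder 30 dB per-pixel SNR assumed equal for both cameras and the 8 MP reference taken from the comparison above:

```python
import math

def print_snr_db(screen_snr_db, sensor_mp, reference_mp=8.0):
    """SNR after averaging the sensor's pixels down to the reference count."""
    return screen_snr_db + 10 * math.log10(sensor_mp / reference_mp)

per_pixel = 30.0  # dB, placeholder 'screen' SNR, equal for both cameras
print(f"D7000, 16 MP: {print_snr_db(per_pixel, 16.0):.1f} dB")  # +3 dB
print(f"20D,    8 MP: {print_snr_db(per_pixel,  8.0):.1f} dB")  # unchanged
```

Doubling the pixel count buys about 3 dB, roughly half a stop, at equal print size, which is why the 20D falls behind everywhere in 'print' mode.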
Now some of you are probably protesting that the 20D is old technology. This fact I find quite interesting. The 20D is indeed old technology, and newer models since then have boasted many useful improvements, but not, it seems, at the pixel level, with the exception of the most recent 1DMK4 which, at the pixel level, has 2/3 of a stop greater DR than the 20D at ISO 3200, but only at ISO 3200 (and probably beyond, if one extrapolates the 20D's graph). At lower ISOs the difference is reduced to 1/2 a stop, the minimum difference that might be of any concern.
In fact the 20D pixel seems to be very slightly better, overall, than the 1Ds3 pixel with regard to all parameters. The 5D2 pixel appears to be sometimes very slightly better than the 20D pixel, but not by a margin that one would notice in practice.
It seems that the 20D pixel represented the pinnacle of Canon technological achievement, until the 1D4 pixel edged ahead in respect of DR. Regarding color sensitivity, tonal range and SNR at 18% grey, there appears to be no improvement over the 20D pixel.
Have I successfully counted the horse's teeth without even sticking my head in the horse's mouth?