I'm trying to reconcile the supposed 10 to 12 stop range of current digital sensors with my practical observations. I know that if I underexpose a middle tone by about 2 or 3 stops, it becomes so dark that it approaches black in an image, onscreen or in a print, and differentiating it from nearby tones becomes difficult. Similarly, overexposing that same middle tone by about 2 or 3 stops places it near white (as with the underexposure example, this presupposes no tweaking of the exposure slider in ACR or Lightroom), and again differentiating it from nearby tones becomes difficult. The upshot is that I always think in terms of an effective range of about 5 stops or so... somewhat better, perhaps, than the 4 stop range I used to keep in mind when shooting transparency film in years past, but quite a bit less than the 10 or 12 stop range the articles I read proclaim for the sensor.
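To make that concrete, here's a quick back-of-the-envelope sketch. My assumptions (not anyone's calibrated numbers): middle gray sits at 0.18 in linear terms, and the display encoding is a plain 2.2 gamma rather than the exact sRGB curve. It shows a midtone pushed 3 stops down landing in the mid-40s on an 8-bit scale (dark, nearly black) and 3 stops up clipping to 255:

```python
import math

def encoded_8bit(linear):
    """Clip a linear value to [0, 1] and encode with a simple 2.2 gamma
    to an 8-bit display value (an approximation, not the exact sRGB curve)."""
    clipped = min(max(linear, 0.0), 1.0)
    return round(255 * clipped ** (1 / 2.2))

# Middle gray (0.18 linear) shifted by whole stops, then gamma-encoded.
for stops in range(-3, 4):
    linear = 0.18 * 2 ** stops
    print(f"{stops:+d} stops -> linear {linear:.4f} -> 8-bit {encoded_8bit(linear)}")
```

Under those assumptions the -3 stop tone encodes around 45/255, which on screen or in a print is close enough to black that neighboring tones are hard to tell apart, while +3 stops is already at or past clipping.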
I _think_ this comes about at least partly because of differences between our "gamma-corrected" human neuro-physiological visual system and the linear response of digital sensors. I know that our visual system has a distinct roll-off in the toe and the shoulder, with the result that we don't distinguish tonal differences in very dark or very light tones as well as we do through the mid-range. So, although a digital sensor may be capable of recording detail over a 10 or 12 stop range, and we can at best reproduce tones in a good print over a 7 or 8 stop range (275:1 or so), depending on the ambient light level in the viewing area, we just can't see the differences, and they merge into either the shadows or the highlights.
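The linear-sensor side of that difference can be sketched numerically. Assuming an idealized 12-bit linear raw file (4096 levels, ignoring noise and any in-camera processing), each stop down from saturation spans half as many raw levels as the stop above it, so the deep shadows get very few values even though the human eye still cares about distinctions there:

```python
# Idealized 12-bit linear capture: 4096 raw levels, noise ignored.
BIT_DEPTH = 12
total_levels = 2 ** BIT_DEPTH

# Each stop below saturation occupies the range [levels/2**stop, levels/2**(stop-1)],
# i.e. half as many raw values as the stop above it.
for stop in range(1, 11):
    top = total_levels // 2 ** (stop - 1)
    bottom = total_levels // 2 ** stop
    print(f"stop {stop} below saturation: raw levels {bottom}..{top} ({top - bottom} values)")
```

So the brightest stop gets 2048 of the 4096 levels, while the tenth stop down is squeezed into just 4, which is part of why shadow tones fall apart so quickly once you try to open them up.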
The effective range for capturing and printing tones is therefore limited more by what we are capable of perceiving than by what the technology is capable of recording. Having said that, even the best of the current technology available to photographers is still incapable of recording the full dynamic range of a typical sunlit scene.
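For the sake of putting numbers on that last claim: each stop doubles the contrast ratio (ratio = 2**stops). If we assume, as I've seen quoted, that a sunlit scene with deep shade can span very roughly 100,000:1 (the exact figure obviously varies by scene), the comparison looks like this:

```python
import math

def stops_to_ratio(stops):
    """Contrast ratio spanned by a given number of stops."""
    return 2 ** stops

def ratio_to_stops(ratio):
    """Number of stops spanned by a given contrast ratio."""
    return math.log2(ratio)

print(f"12-stop sensor   -> {stops_to_ratio(12):,.0f}:1")   # 4,096:1
print(f"8-stop print     -> {stops_to_ratio(8):,.0f}:1")    # 256:1, near the 275:1 figure above
print(f"100,000:1 scene  -> {ratio_to_stops(100_000):.1f} stops")  # about 16.6 stops
```

On those assumptions even a 12-stop sensor falls several stops short of the full sunlit scene, and the print falls further short still.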
Does any of this make any sense? Jeff? Andrew? Anyone??