This is one of the most interesting conclusions I drew from your SNR curve plots. But Emil, is this an empirical rule you concluded from those noise measurements, or is there some physical limit that makes this statement true on any present or future camera?
It has entirely to do with the noise of the ISO amplifier, which is the limiting factor in low ISO DR. If the signal processing circuitry downstream of the photosite array had as much DR as the photosite array itself (what Roger Clark calls sensor DR, as opposed to camera DR), then there would be little distinction between underexposing at a fixed ISO and applying EC during RAW conversion. The only reason (apart from jpeg generation) for the camera to have variable gain is the benefit in SNR that it confers at lower EV. The rest of the electronics has less DR than the sensor itself, and that lower limit forces one to choose the ISO amplification to decide which part of the sensor DR to access. If the rest of the electronics had a DR that met or exceeded the sensor DR, then all that the sensor captured would be available starting at the lowest ISO, and all that raising the ISO would do is remove highlight headroom without adding any more range at the lower end.
Current Canon and Nikon offerings leave anywhere from 1-2.5 EV of DR unrealized due to shortcomings in the DR of the electronics that process the photosite signal. It is that which limits the utility of changing the analog ISO amplification in the camera; once the read noise is dominated by the photosite noise rather than the amplifier/ADC noise, there is no further advantage to raising the ISO. Empirically this happens at about ISO 1600 on current Canons, and on the D3/D700 from Nikon. I haven't crunched the numbers, but the lower amplifier noise due to the parallel ADC architecture in the D300/D3x probably means that those cameras benefit little, if at all, from raising the ISO past 800.
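The crossover can be illustrated with a toy noise model (the numbers below are made up for illustration, not measurements of any particular camera): refer the downstream amplifier/ADC noise back to the photosite by dividing it by the ISO gain, and add it in quadrature to the photosite read noise.

```python
import math

# Hypothetical numbers, for illustration only (not any specific camera):
SENSOR_READ = 3.0       # photosite read noise, electrons
DOWNSTREAM_READ = 25.0  # amplifier/ADC noise at base ISO, input-referred electrons

def input_referred_read_noise(gain):
    """Total read noise in electrons at an ISO gain relative to base.

    The downstream noise is fixed in raw units, so referred back to the
    photosite it shrinks as 1/gain; the photosite noise does not.
    """
    return math.hypot(SENSOR_READ, DOWNSTREAM_READ / gain)

for gain in (1, 2, 4, 8, 16, 32):
    print(f"gain {gain:2d}x: read noise = {input_referred_read_noise(gain):5.2f} e-")
```

With these made-up numbers the total read noise falls from about 25 e- at base ISO and approaches the 3 e- photosite floor by 16-32x gain; past that point, raising the ISO buys essentially nothing and only costs highlight headroom.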
And another question: I never understood very well what Roger N. Clark calls the 'unity gain' ISO, i.e. the ISO value at which every extra converted photon accounts for one extra ADU in the encoded RAW file. He claimed that going beyond that ISO value is pointless in terms of SNR. But I never understood how he could extrapolate this regardless of the sensor's bit depth, when there should be a big difference in that ISO value for 12-, 14- or 16-bit encodings. Does the 'unity gain' ISO have any relation to what you said?
Unity gain is explained by Clark as follows:
"Unity Gain ISO is the ISO of the camera where the A/D converter digitizes 1 electron to 1 data number (DN) in the digital image. Further, to scale all cameras to equivalent Unity Gain ISO, a 12-bit converter is assumed. Since 1 electron (1 converted photon) is the smallest quantum that makes sense to digitize, there is little point in increasing ISO above the Unity Gain ISO".
This would be fine if there were no noise in the image processing chain. Noise means that unity gain is merely the point where, on average, one electron captured raises the raw level by one unit. That does not mean that the sensor is counting electrons, which is what would be required for there to be no advantage to raising the gain above the unity gain ISO. But the noise is much more than one raw level at the unity gain ISO, washing out the accuracy of the raw levels relative to electron counts.
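A quick simulation shows what it means for noise to wash out the raw levels: when the read noise spans several raw levels, quantizing to integer ADUs destroys essentially no information, because the noise dithers the signal across levels (the numbers here are purely illustrative, not tied to any camera).

```python
import random
import statistics

random.seed(0)

TRUE_SIGNAL = 100.3   # hypothetical mean signal in ADU (deliberately non-integer)
READ_NOISE = 3.0      # read noise in ADU, several raw levels as near unity gain

# Quantize many noisy samples to whole raw levels, as the ADC does
samples = [round(random.gauss(TRUE_SIGNAL, READ_NOISE)) for _ in range(200_000)]

mean = statistics.fmean(samples)
print(f"recovered mean: {mean:.2f} ADU")  # close to 100.3 despite integer levels
```

The fractional part of the signal survives quantization because the noise straddles many levels, so there is nothing physically special about the one-electron-per-raw-level point.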
Indeed, because the electronics downstream of the photosite array still contributes substantially to the read noise at unity gain ISO in many cameras, raising the ISO beyond unity gain decreases that contribution relative to the signal (the electronics' noise contribution stays fixed while the signal is amplified), and SNR improves. Unity gain for the 1D3 is about ISO 500 if one takes the literal definition of one electron per raw level (Clark instead fudges, rescaling 14-bit data to 12 bits to avoid the bit depth issue you raised), but my SNR plot at fixed exposure shows a definite improvement from ISO 400 to 800 to 1600:
http://theory.uchicago.edu/~ejm/pix/20d/te...DRwindow1d3.png
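The bit-depth dependence you raised can be made explicit with a toy calculation (the full well and saturation ISO below are made-up values, not the 1D3's actual figures): adding a bit halves the ADU step, which halves the electrons-per-ADU gain at every ISO, and therefore halves the 'unity gain' ISO.

```python
# Hypothetical sensor: the full well fills the ADC range at the saturation ISO.
FULL_WELL = 80000   # electrons (made-up value)
SAT_ISO = 100       # ISO at which the full well just reaches raw clipping (made up)

def electrons_per_adu(iso, bits):
    """Conversion gain in electrons per raw level; halves each time ISO doubles."""
    return (FULL_WELL / 2 ** bits) * (SAT_ISO / iso)

def unity_gain_iso(bits):
    """ISO at which one captured electron raises the raw value by one ADU."""
    return SAT_ISO * FULL_WELL / 2 ** bits

for bits in (12, 14, 16):
    print(f"{bits}-bit: unity gain at ISO {unity_gain_iso(bits):.0f}")
```

With these made-up numbers, going from 12 to 14 bits drops the unity gain point from about ISO 1950 to about ISO 490, a factor of 4, which is exactly why Clark rescales everything to 12 bits before quoting a single number per camera.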