The implications for shooting in very dim light are that it does not really make much sense to exceed the unity gain under these conditions.
Really? Where is the proof for that?
You get no more information and lose one f/stop of dynamic range.
You get no more information BECAUSE THEY REALLY AREN'T TRUE AMPLIFIED 3200. How many times do I have to repeat this extremely relevant fact? Roger's conclusions, and those of the astronomers he's asked, are all based on ISO 3200s that are really ISO 1600s underexposed, with the RAW values doubled and almost a stop of highlights clipped away for no good reason.
The unity gain of the Nikon D200 is 800 according to Roger's tests. According to the unity gain theory, when shooting in dim light with this camera in raw mode, it would be best to set the camera to ISO 800 rather than 1600. If there is enough light to expose at ISO 800, you get better dynamic range and less noise. If "underexposure" occurs at ISO 800, you merely use the exposure control of ACR (or whatever raw converter you are using) to brighten the image.
What does that have to do with unity gain?
It is true for most Nikon cameras more than a year or two old, because most of the read noise occurs at the initial read on the sensor, and only a clean, low-gain amplifier is used to feed the ADC. Total read noise in electrons is very similar at all ISOs, and only varies because the ratio of absolute signal to ADC noise is different.
The same principle can apply between ISO 200 and 400, as well as between 400 and 800, with cameras (like the Pentax K10D) whose absolute (electron) read noise does not drop at higher ISOs. Nothing special happens at unity gain. Even when the ADC noise makes a difference between ISOs, it makes the most difference between the lower ISOs, which explains why there is less loss pushing 800 to 1600 than 200 to 400. The higher the noise before the ADC, the less the ADC adds to total noise, because independent noise sources sum in quadrature (as the square root of the sum of squares), not linearly.
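The quadrature summing described above can be sketched with made-up numbers. Everything here is a hypothetical illustration (5 e- of sensor-stage read noise, 1 ADU of fixed downstream ADC noise, and the ISO-to-gain pairs), not a measurement of any real camera:

```python
import math

def total_read_noise_e(sensor_noise_e, adc_noise_adu, gain_e_per_adu):
    """Refer the fixed ADC noise back to electrons, then sum the two
    independent sources in quadrature (root-sum-of-squares)."""
    adc_noise_e = adc_noise_adu * gain_e_per_adu
    return math.sqrt(sensor_noise_e**2 + adc_noise_e**2)

# Hypothetical: 5 e- of sensor read noise, 1 ADU of downstream ADC noise.
for iso, gain in [(100, 8.0), (200, 4.0), (400, 2.0), (800, 1.0)]:
    print(f"ISO {iso}: {total_read_noise_e(5.0, 1.0, gain):.2f} e-")
```

Under these assumptions, stepping from ISO 100 to 200 drops the total from about 9.4 e- to 6.4 e-, while 400 to 800 only drops it from about 5.4 e- to 5.1 e-: the ADC's contribution shrinks quickly once the pre-ADC noise dominates.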
Only weak circumstantial evidence exists for the unity gain theory. A 1:1 ADU-to-electron ratio is only completely relevant if the total read noise is low enough that no two electron counts are ever digitized as the same value; for total accuracy in counting, that actually requires a gain far greater than 1:1, even with as little as 0.1 ADU of analog read noise.
As far as 14 bits are concerned, they do nothing for IQ at ISO 3200 with the mk3. Truncated to 8 bits and then converted to RGB, the mk3 has less noise than the 1Dmk2 with 12 bits. Even 12 bits at high ISOs is overkill for 99.99% of uses.
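A rough sketch of why extra bits stop mattering once the noise floor is several LSB wide. The 10 LSB analog noise figure below is a hypothetical stand-in for a high-ISO noise floor, and the step/sqrt(12) term is the textbook RMS quantization noise of an ideal ADC:

```python
import math

def total_noise_lsb14(analog_noise_lsb14, bits):
    """Total noise on a 14-bit scale: analog noise plus the RMS
    quantization noise (step / sqrt(12)) of an ideal ADC, in quadrature."""
    step = 2 ** (14 - bits)           # a 12-bit step is 4x coarser
    q_noise = step / math.sqrt(12.0)
    return math.sqrt(analog_noise_lsb14**2 + q_noise**2)

# Hypothetical high-ISO noise floor of 10 LSB on the 14-bit scale.
n14 = total_noise_lsb14(10.0, 14)
n12 = total_noise_lsb14(10.0, 12)
print(f"14-bit: {n14:.3f} LSB, 12-bit: {n12:.3f} LSB "
      f"({100 * (n12 / n14 - 1):.2f}% more noise)")
```

Under these assumptions, dropping from 14 to 12 bits raises total noise by well under 1 percent, which vanishes entirely in an 8-bit output.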
Roger's idea of testing this type of thing is to take a linear conversion and then quantize it. That is nothing at all like quantizing the RAW data and then interpolating/demosaicing it and performing WB.
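The difference between those two pipelines can be sketched with a single white-balance gain standing in for the whole conversion. The 2.2x gain and the signal range are arbitrary choices for illustration; the point is only the order of operations:

```python
import math
import random

random.seed(0)
wb_gain = 2.2  # arbitrary WB gain standing in for the whole conversion
signal = [random.uniform(0.0, 1000.0) for _ in range(100_000)]

def rms(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Quantize-last (the linear-conversion test): apply the gain, then round.
err_last = [round(v * wb_gain) - v * wb_gain for v in signal]

# Quantize-first (a real raw pipeline): round the RAW value first, so the
# gain applied during conversion scales the rounding error along with it.
err_first = [round(v) * wb_gain - v * wb_gain for v in signal]

print(f"quantize-last RMS error:  {rms(err_last):.3f} ADU")
print(f"quantize-first RMS error: {rms(err_first):.3f} ADU")
```

Quantizing first leaves roughly wb_gain times the quantization error in the output, so quantizing a finished linear conversion understates what quantizing the RAW data actually costs downstream.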