Emil,
One point I find interesting here is that not all cameras seem to employ this analog, pre-A/D amplification of the voltage.
Edmund has reported that his P45+ produces at least equally good images when underexposed at base ISO as when the same exposure is used at a higher ISO setting.
John Sheehy has reported noticing the same effect in certain P&S cameras. In other words, in some models every setting behaves like the "false" ISOs, such as ISO 3200 in most Canon DSLRs, or ISO 12,800 and 25,600 in the Nikon D3: any setting above base ISO is essentially a false ISO that ultimately serves no purpose that can't be achieved in post-processing.
With a fixed exposure, the difference between using a higher ISO versus underexposing at a lower ISO and amplifying in software post-capture lies in the additional noise/error sources that get amplified. Basically, any amplification takes as input not just the signal but also any noise that was added upstream in the signal processing chain. The signal processing chain is
sensor readout > ISO amplification > quantization > post-capture manipulation
Thus, using the higher ISO, only the noise added by the electronics doing the sensor readout is amplified. When one instead amplifies post-capture by software multiplication of the digitized raw data, one amplifies in addition the noise of the ISO amplifier and the ADC, as well as the quantization error (the difference between the analog input to the ADC and the nearest quantized level it can be digitized to).
To the extent that the ISO amplifier noise, ADC noise, and quantization error are small relative to the overall noise level, one can get away with amplifying post-capture. If they are large contributors to the noise level (as they are in deep shadows at low ISO in Canon DSLRs, for instance), one pays a substantial price for the underexposure. GLuijk had a nice demonstration of this in a recent thread for which I'm the OP.
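The point above can be put in numbers with a toy noise model. This is only a sketch: the noise figures and the base-ISO gain below are invented for illustration, not measurements of any particular camera. Upstream read noise rides along with the signal through the ISO amplifier, while amplifier/ADC noise and quantization error are added afterward, so only an analog gain shrinks their contribution relative to the signal.

```python
import math

# Assumed (illustrative) noise figures for a fixed exposure:
UPSTREAM_NOISE_E = 3.0        # sensor readout noise, electrons (amplified with signal)
DOWNSTREAM_NOISE_ADU = 4.0    # ISO amplifier + ADC noise, raw levels (ADU)
QUANT_NOISE_ADU = 0.29        # RMS quantization error, ~1/sqrt(12) of a raw level

def input_referred_noise_e(gain_adu_per_e):
    """Total noise referred back to the sensor, in electrons, for a given
    analog (pre-ADC) gain. A later software multiplication scales signal
    and noise equally, so it drops out once everything is input-referred."""
    downstream_e = math.hypot(DOWNSTREAM_NOISE_ADU, QUANT_NOISE_ADU) / gain_adu_per_e
    return math.hypot(UPSTREAM_NOISE_E, downstream_e)

base_gain = 0.25  # ADU per electron at base ISO (assumed)

# Base ISO, then pushed 8x in software: downstream noise is NOT reduced.
print(input_referred_noise_e(base_gain))       # roughly 16 electrons
# ISO raised 8x in-camera: downstream noise is divided by the analog gain.
print(input_referred_noise_e(base_gain * 8))   # roughly 3.6 electrons
```

With these (assumed) numbers the higher in-camera ISO is about two stops cleaner in the shadows, which is the flavor of GLuijk's demonstration.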
One reason to use a higher bit depth is to eliminate quantization error as an issue; average quantization error is about 0.3 raw levels, so if you are going to implement higher ISO as a post-capture software manipulation (as I'm told several MFDBs do), it makes some sense to use 16-bit capture instead of 12-bit: quantization error is reduced by a factor of 16 and becomes completely negligible relative to other noise sources. Of course this doesn't change the fact that all those extra bits are merely giving a refined specification of those other noise sources, not helping to record the signal more accurately.
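The factor of 16 follows directly from the step size. Over the same analog full scale, a 16-bit ADC has steps 2^(16-12) = 16x finer than a 12-bit ADC, and the RMS quantization error is about 1/sqrt(12) of one step, hence the roughly 0.3-level figure. A quick check:

```python
import math

def quant_noise_in_12bit_levels(bits, full_scale_bits=12):
    """RMS quantization error of a `bits`-deep ADC spanning the same analog
    range, expressed in 12-bit raw levels. One ADC step is 1/sqrt(12) of
    itself in RMS error for a uniformly distributed residual."""
    step = 2.0 ** (full_scale_bits - bits)  # one ADC step, in 12-bit levels
    return step / math.sqrt(12.0)

print(quant_noise_in_12bit_levels(12))  # about 0.29 of a 12-bit level
print(quant_noise_in_12bit_levels(16))  # about 0.018 of a 12-bit level
```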
One also wonders: if the analog signal can be boosted prior to A/D conversion to yield a higher ISO with less shadow noise than the unboosted, underexposed data at base ISO, why can't a similar boost be applied at base ISO to lift the shadows? Is it simply a matter of the electronics not being sufficiently robust to handle the greater voltages?
Because in current designs the level of amplification is specified ahead of time for all pixels being read; to do what you are suggesting, one would have to apply a different boost to pixels that captured few photons than to those that captured many. One might do this by splitting the signal and sending it to two separate amplifiers, one set to low amplification for the highlights and the other to high amplification for the shadows. Quantize both, then combine the two, using the high-amplification output to replace the shadows in the low-amplification output. The cost would be an extra amplifier/ADC combination (some cameras, like the D3, have parallel analog front-end processors; their number would have to be doubled) and the processing overhead of combining the two data streams. If this could be done without introducing further noise, the Canon 1D3/1Ds3 and Nikon D3 would each gain about two stops of DR (and 14-bit capture would finally be justified, perhaps even 15-bit).
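The combination step described above can be sketched in a few lines. Everything here is hypothetical: the 12-bit clipping point, the 8x gain ratio, and the function names are invented for illustration, and a real implementation would work in hardware on the raw data stream, not in Python.

```python
FULL_SCALE = 4095   # 12-bit ADC clipping point (assumed)
HIGH_GAIN = 8       # gain ratio between the two amplifier paths (assumed)

def combine(low_gain_adu, high_gain_adu):
    """Merge the two quantized readouts of one pixel onto the low-gain
    scale. Where the high-gain path has not clipped, its output resolves
    the shadows log2(8) = 3 stops more finely."""
    if high_gain_adu < FULL_SCALE:        # high-gain path still below clipping
        return high_gain_adu / HIGH_GAIN  # finer-grained shadow value
    return float(low_gain_adu)            # highlights: fall back to the 1x path

def simulate_pixel(analog_level):
    """Quantize one analog level (expressed in 1x ADU) through both paths,
    then combine them as the dual-amplifier scheme would."""
    low = min(round(analog_level), FULL_SCALE)
    high = min(round(analog_level * HIGH_GAIN), FULL_SCALE)
    return combine(low, high)

print(simulate_pixel(10.3))    # deep shadow: resolved to 1/8 of a raw level
print(simulate_pixel(3000.0))  # highlight: high-gain path clips, 1x path used
```

The merge is essentially the same trick as HDR exposure blending, done per pixel at readout time rather than across frames.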