And then, the lowest ISO needs to be amplified in such a way that the read noise is low, relative to max signal. If the RAW data only goes up to 2700, and the read noise is basically the same as ISO 100 in ADUs, then you're never going to see any improvement in DR; just better quality highlights and midtones.
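If I've understood the quoted point, a rough sketch of the arithmetic (Python; only the 2700 ceiling comes from the quote, the read-noise figure is invented for illustration) would be: if the raw ceiling and the read noise both stay put in ADUs, the ratio between them, and hence the dynamic range, cannot improve.

```python
import math

# Hypothetical read-noise figure; only the 2700 ADU ceiling comes from
# the quote above.
max_signal_adu = 2700.0   # highest raw value the file reaches
read_noise_adu = 2.7      # read noise in ADUs, assumed unchanged from ISO 100

# Engineering dynamic range: stops between clipping and the read-noise floor.
dr_stops = math.log2(max_signal_adu / read_noise_adu)
print(f"DR = {dr_stops:.1f} stops")   # ~10.0 stops

# Scaling signal and read noise by the same factor leaves the ratio, and
# hence the DR, unchanged; only the shot-noise-limited midtones and
# highlights look cleaner.
print(math.log2((4 * max_signal_adu) / (4 * read_noise_adu)))   # still ~10.0
```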
I think all of us who have taken the trouble to compare ISO 1600 images from our 20D, 5D or 1D2 with the same scene underexposed by 4 stops at ISO 100 are impressed with the huge improvement in noise in the ISO 1600 shots, which stretches right across the tonal range but is particularly significant in the deep shadows, lower mid-tones and mid-tones. Owners of the new 1D3 should be able to compare ISO 3200 with a 5-stop underexposed ISO 100 shot and expect an even greater improvement, unless Canon have also improved shadow noise at ISO 100, in which case the degree of noise improvement might be the same as on the 1D2, just improved equally at all ISOs. I guess that remains to be seen.
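For what it's worth, here is the back-of-envelope model I have in mind when comparing the two (Python; the read-noise figures are representative guesses for cameras of this class, not measurements of any particular body): the same captured electrons, digitised at ISO 1600, carry far less read noise than the ISO 100 file pushed 4 stops afterwards.

```python
import math

# Representative guesses, not measurements: input-referred read noise
# in electrons at two ISO settings on a Canon of this era.
read_noise_e = {100: 25.0, 1600: 4.0}

signal_e = 200.0  # hypothetical deep-shadow signal, in electrons

def snr(signal, iso):
    """SNR combining photon shot noise (sqrt(signal)) with read noise."""
    shot = math.sqrt(signal)
    return signal / math.hypot(shot, read_noise_e[iso])

# Same exposure either digitised at ISO 1600, or digitised at ISO 100 and
# pushed 4 stops in software. The push scales signal and noise together,
# so the SNR is already fixed at digitisation time.
print(f"ISO 1600:           SNR = {snr(signal_e, 1600):.1f}")  # ~13.6
print(f"ISO 100 pushed 4EV: SNR = {snr(signal_e, 100):.1f}")   # ~7.0
```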
However, I have to admit that, as impressed as I am with the results, I haven't much of a clue as to what processes are employed to achieve them. Perhaps some of you clever chaps can enlighten me.
I have some vague understanding that the voltages generated at each photosite are amplified in the analog domain before being digitised. Such amplification of the signal allows read noise to be lower, relative to the signal, than it otherwise would be with an unamplified (or less amplified) signal. (I don't know why this should be the case, though.)
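The explanation I have seen offered elsewhere (an assumption on my part, not something established in this thread) is that part of the read noise is added after the amplifier, in the ADC and downstream electronics, so referred back to the pixel it shrinks in proportion to the gain. A sketch with invented numbers:

```python
import math

# Assumed model, not from this thread: some read noise is added before
# the ISO amplifier (at the photosite/readout) and some after it (in the
# ADC). Gain shrinks the post-amp part when referred back to the input.
def input_referred_read_noise(pre_e, post_e, gain):
    return math.hypot(pre_e, post_e / gain)

pre_amp_noise_e = 3.0    # hypothetical pre-amplifier noise, electrons
post_amp_noise_e = 30.0  # hypothetical post-amplifier noise, input units at 1x

for gain in (1, 4, 16):  # roughly ISO 100, 400, 1600
    n = input_referred_read_noise(pre_amp_noise_e, post_amp_noise_e, gain)
    print(f"gain {gain:2d}x -> read noise = {n:.1f} e-")
# gain  1x -> ~30.1 e-, gain  4x -> ~8.1 e-, gain 16x -> ~3.5 e-
```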
I also have some vague understanding that the initial analog signal needs to be amplified because, in relation to full-well capacity, it's a much weaker signal than it could be.
What I don't understand at all is, having established the technique of reducing noise through analog amplification (and no doubt through another hundred patented electronic tricks), why well capacity should limit the degree of amplification. I have, for example, a shot at ISO 100. Full-well capacity is, say, 50,000e, which represents the highlights with an ETTR exposure. I want to achieve the same level of noise in the shadows as I get at ISO 1600, whilst preserving the highlights and dynamic range one expects at base ISO.
One could argue that boosting the analog signal in this situation would 'blow out' the highlights. Why should it? One photosite generates its maximum 50,000e signal which, when amplified 4x, becomes a 200,000e signal. Another photosite generates a 40,000e signal which, when amplified 4x, becomes a 160,000e signal. The relationships and relativities are preserved.
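To put my own example in code (Python; the 50,000e full well is hypothetical, as above):

```python
full_well_e = 50_000  # hypothetical full-well capacity, electrons
gain = 4

for signal_e in (50_000, 40_000):
    amplified = signal_e * gain
    print(f"{signal_e:>6} e- x {gain} -> {amplified:>7} "
          f"({signal_e / full_well_e:.0%} of maximum, before and after)")

# The relativities are indeed preserved. My question is whether anything
# downstream -- a fixed ADC input range, for instance -- can actually
# accept a signal 4x larger than the one a full well produces at base ISO.
```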
Where's the problem? Is it possible that the interconnects cannot sustain such high voltages, or that excessive heat would degrade the results?
Can someone do me a favour and clarify this issue?