Hi all, I have a question for those brilliant minds around here.
We all know that:
- The higher the ISO for a given exposure, the lower the SNR.
Yes, if the relative exposure (EC) stays constant, and the absolute exposure (photons collected in the sensor wells) remains the same.
- The lower the exposure for a given ISO, the lower the SNR as well.
This is always the case, as a non-defective camera would have to intentionally go out of its way to provide a lower SNR at a higher tonal level at the same ISO. The firmware would literally have to say, "if the exposure is this high, add this much noise".
But if we are forced to use fixed exposure parameters (aperture and shutter speed), what is recommended to obtain the best possible SNR (signal-to-noise ratio), i.e. the least visible noise: raising the ISO number, or allowing some under-exposure?
That really depends on the camera. All the issues here are with read noise; shot noise depends ONLY on sensor exposure, because shot noise is really part of the signal in an empirical sense, even though it counts as noise as far as our IQ ideals are concerned. Quantization is not an issue in the RAW data of current cameras, except in extreme applications like stacking multiple images for astrophotography.
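To make that distinction concrete, here is a minimal sketch of the standard noise model in Python, with made-up numbers: shot noise variance equals the mean photon count, so it is fixed by sensor exposure alone, and read noise is the only term that changes with ISO.

```python
import math

def snr(photons, read_noise_e):
    """SNR of a uniform patch: signal / sqrt(shot variance + read variance).

    photons      -- mean photoelectrons collected per pixel (sensor exposure)
    read_noise_e -- read noise in electrons (the only ISO-dependent term here)
    """
    shot_var = photons  # shot noise variance equals the mean photon count
    return photons / math.sqrt(shot_var + read_noise_e ** 2)

# Same sensor exposure, two hypothetical read noises: only read noise moves SNR.
for rn in (3.0, 12.0):
    print(f"read noise {rn:4.1f} e-: SNR = {snr(400, rn):.1f}")
```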
Generally speaking, if you shoot RAW and your RAW converter allows you to bring up the shadows without suppressing them in any way, then CMOS cameras with high-ISO optimizations (like most Canons, and the Nikon D3) give much better read noise performance at higher ISOs (ignoring clipping issues), especially at the lower end of the ISO range (100 pushed to 400 is far worse compared to 400 than 400 pushed to 1600 is compared to 1600).

If you look at typical read noises at different ISOs, you can see why. My Canon 20D has read noises of 2.07, 2.2, 2.45, 3.2, and 4.7 ADU, respectively, for ISOs 100, 200, 400, 800, and 1600. There is almost no difference in read noise between the lower ISOs, in ADUs; between 800 and 1600 the difference is not quite 2x, but a lot closer. So, if you under-expose ISO 100 to do 200, your new, adjusted relative read noise is 4.14 ADU, almost as much as ISO 1600! If you push 800 to 1600, your new relative read noise is 6.4, a little worse than ISO 1600. It works the other way, too, for pulling: pulling ISO 400 to 100 results in a relative read noise of 0.61 ADU (or 2.45 at a relative bit depth of 14 bits instead of 12, depending on how you look at it). This is why extreme positive EC works nicely for subjects that have the majority of their tones concentrated in the higher levels, like a white wall where everything else is darker, or even a grey wall where everything else is darker.
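If you want to play with the push/pull arithmetic yourself, here is a quick Python sketch using the 20D figures above; "relative read noise" just means the read noise in ADU multiplied by the digital push (or pull) factor applied in conversion.

```python
# Read noise in ADU at each ISO for my Canon 20D (figures quoted above).
read_noise_adu = {100: 2.07, 200: 2.2, 400: 2.45, 800: 3.2, 1600: 4.7}

def relative_read_noise(shot_iso, target_iso):
    """Read noise in ADU after digitally pushing (or pulling) shot_iso to target_iso."""
    return read_noise_adu[shot_iso] * (target_iso / shot_iso)

print(relative_read_noise(100, 200))   # 4.14 -- almost as bad as shooting ISO 1600
print(relative_read_noise(800, 1600))  # 6.4  -- a little worse than shooting ISO 1600
print(relative_read_noise(400, 100))   # ~0.61 -- pulling gains effective bit depth
```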
For the majority of the other cameras out there, the only difference in the state of the RAW data at different ISOs is the gain applied before the ADC, typically resulting in about 12 to 15x the read noise at ISO 1600 compared to 100. Shooting RAW with a converter that doesn't trash the shadow areas, these cameras give you much less back at high ISOs, considering the extended highlight headroom you get by under-exposing at lower ISOs instead. There is also a possible issue that an under-exposed image will not be processed with enough precision in a converter (although this shouldn't really have to be an issue). You definitely don't want to under-expose JPEGs with the majority of cameras, as JPEGs tend to trash the shadows and make them less pushable.
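Here is a sketch of that trade-off for such a gain-only camera; the read noise figures (3 ADU at ISO 100, 14x that at ISO 1600) are hypothetical, just chosen to fall in the ~12-15x range mentioned above.

```python
# Hypothetical gain-only camera: the read noise figures below are made up,
# with the ~14x ADU scaling from ISO 100 to ISO 1600 described above.
rn_100_adu = 3.0
rn_1600_adu = 14 * rn_100_adu      # 42 ADU

push = 16                          # ISO 100 -> 1600 is a 4-stop (16x) push

# Option A: raise the ISO to 1600 in camera.
noise_a = rn_1600_adu              # 42 ADU, and 4 stops of headroom are gone
# Option B: keep ISO 100, same aperture/shutter, push 4 stops in the converter.
noise_b = rn_100_adu * push        # 48 ADU, but 4 stops of headroom retained

print(f"ISO 1600 in camera: {noise_a:.0f} ADU")
print(f"ISO 100 pushed 16x: {noise_b:.0f} ADU")
```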
Some cameras have even more reason to under-expose at low ISOs. My Panasonic FZ50 gets crazy stupid at ISO 1600; not only does it have far more than 16x the read noise of ISO 100, but the blackpoint is in the wrong place, causing a color cast in the deep shadow areas. The ISO 100 blackpoint is right on the money, even after multiplying the RAW data by 16.
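For the curious, here is a toy illustration (all numbers made up) of one way a misplaced blackpoint can tint the shadows: whatever offset survives blackpoint subtraction gets scaled by the per-channel white-balance gains, so near-black pixels pick up a cast.

```python
# Toy model: subtract the blackpoint, then white-balance one gray pixel.
# All numbers here are made up for illustration.
wb_gains = {"R": 2.0, "G": 1.0, "B": 1.5}

def develop(raw, blackpoint):
    """Blackpoint-subtract, then apply per-channel white-balance gains."""
    return {ch: (raw - blackpoint) * g for ch, g in wb_gains.items()}

raw_shadow = 130                 # near-black pixel, equal in all channels
print(develop(raw_shadow, 128))  # correct blackpoint: {'R': 4.0, 'G': 2.0, 'B': 3.0}
print(develop(raw_shadow, 120))  # blackpoint 8 ADU low: {'R': 20.0, 'G': 10.0, 'B': 15.0}
# The spurious 8 ADU pedestal is scaled unequally across channels -> color cast.
```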
EDIT - I forgot to mention that higher ISOs on many cameras are achieved solely through digital multiplication, so any gains from raising the ISO stop at the highest analog ISO.