Hi Steve,
The reason is that DR measures the amount of usable information in the image. It is defined as the maximum usable signal (normally called full well capacity) divided by the signal at which the signal-to-noise ratio (SNR) is 1.
One bit is needed for each stop of DR. So if DR is 15 EV there are 15 bits of information and the rest is noise; with 13 EV there are 13 bits of information, and so on.
It is quite possible to transfer, say, 15 stops of DR through, say, 12 bits of data using a tone curve, but using more bits than the DR calls for is just a waste of bandwidth. This may be over-simplified, but it is a pretty good rule of thumb.
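As a rough sketch of that rule of thumb in Python (the square-root curve below is just a hypothetical example of a tone curve, not how any particular camera does it):

    import math

    # One bit per stop: a linear encoding of N stops of DR needs N bits;
    # any bits beyond that only encode noise.
    def linear_bits(dr_stops):
        return math.ceil(dr_stops)

    # A tone curve can squeeze more stops into fewer bits by spending
    # fewer levels on the highlights, e.g. a simple square-root curve
    # mapping a 15-stop linear signal into 12 bits:
    def encode_12bit(signal, fwc):
        return round(4095 * math.sqrt(signal / fwc))

    print(linear_bits(15))             # 15
    print(encode_12bit(32768, 32768))  # 4095, top of the 12-bit range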
A good article by Jack Hogan is here:
http://www.strollswithmydog.com/how-many-bits-to-fully-encode-my-image/

Emil Martinec has a classic piece on the issue here:
https://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html

Getting back to the original posting: assume that you expose fully to the right, so that your relevant highlights go right up to full well capacity (FWC) and you utilise the sensor fully. If DR is 15 EV, the noise floor is 2^15 = 32768 times below FWC, so you need 15 bits of linear data to represent the usable output from the sensor.
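In numbers, with a made-up FWC just to show the arithmetic:

    import math

    fwc = 60000                      # hypothetical full well capacity, in electrons
    dr_stops = 15                    # dynamic range, EV

    noise_floor = fwc / 2**dr_stops  # ~1.8 electrons: the signal where SNR = 1
    bits = math.ceil(math.log2(fwc / noise_floor))
    print(noise_floor, bits)         # ~1.83, 15 linear bits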
Now cut exposure by half. We no longer utilise the full FWC, just half of it, but the noise stays the same, at least as far as readout noise is concerned. So the usable range is now only 16384:1, needing 14 bits. We could multiply the signal by two, but that would multiply the noise by two as well.
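Continuing with the same made-up sensor:

    import math

    fwc = 60000                      # same hypothetical sensor as above
    noise_floor = fwc / 2**15

    half_signal = fwc / 2            # one stop less exposure
    print(math.log2(half_signal / noise_floor))  # 14.0 -> 14 bits suffice

    # Pushing the file one stop in software doubles signal and noise alike,
    # so the usable range (and the bit count it needs) does not change.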
Astronomers may utilise deeper data (below SNR = 1).
A small note: if you pick up spec sheets for Kodak or Dalsa sensors, they will quote readout noise in electron charges and the saturation level in electron charges as well. The saturation level is the FWC. They may also give the dynamic range of the sensor, say 71 dB. You can convert dB to stops by dividing by 6 (6.021 to be exact). Either way you calculate it, you will end up with the figures DxO measures at base ISO in what they call screen mode, and that figure normally matches vendor data very closely.
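The dB arithmetic, spelled out (71 dB is just the example figure above):

    import math

    db_per_stop = 20 * math.log10(2)  # ~6.0206 dB per stop
    print(71 / db_per_stop)           # ~11.8 stops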
Best regards
Erik
Ok, I may be wading in here where I normally don't want to, but help me understand why the question of the relevance of 16-bit capture is answered almost exclusively in terms of dynamic range?
You know, I've always been about the image. The image on your screen, the image you print on paper or whatever. But, always about the image. What does it mean in terms of end results, etc.
And for someone who leans toward the non-scientific approach, please try to keep this in layman's terms.
Thanks,
Steve Hendrix