Precision is not only a question of bit depth. If your analog device (the sensor) cannot measure a signal with better precision than its noise, there is little advantage to sampling with more accuracy. For example, 10IL of Dynamic Range (as defined by Saturation/Read Noise) is quite well handled by the usual 12-bit ADC.
So... I am not too sure about the advantages of that 22-bit ADC...
Olivier
Not sure what you mean by "For example 10IL of Dynamic Range (as defined by Saturation/Read Noise)". What is "10IL"?
Whatever it is, I can tell you that getting a true 16-bit RAW is about the only thing that *might* pry my 5D from my living fingers.
A 22-bit ADC is, in my view, totally over the top as far as bit conversion is concerned. We are talking about ten extra bits - a factor of 1024, three orders of magnitude - beyond the 12 bits I get today. The least significant bits are going to be swamped by noise from any sensor/amplifier, with maybe the exception of some of the super-chilled astro cameras.
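To put a rough number on that, here's a quick Python sketch with made-up but plausible sensor figures (60k e- full well, 25 e- read noise). Once the ADC step is much finer than the combined shot and read noise, the extra bits change the measured noise by essentially nothing:

[code]
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor figures, purely for illustration:
full_well = 60000      # electrons at saturation
read_noise = 25.0      # electrons RMS
signal = 5000          # electrons in a mid-tone patch

def quantize(e, bits):
    """Round electron counts to the nearest ADC code of the given depth."""
    step = full_well / 2**bits
    return np.round(e / step) * step

# Many exposures of the same patch: shot noise plus read noise.
e = rng.poisson(signal, 100_000) + rng.normal(0.0, read_noise, 100_000)

for bits in (12, 14, 16, 22):
    q = quantize(e, bits)
    print(f"{bits:2d}-bit: step = {full_well / 2**bits:8.4f} e-, "
          f"measured sigma = {q.std():6.2f} e-")
[/code]

All four depths report a sigma of about 75 e-; the 22-bit steps are thousands of times smaller than the noise they are trying to resolve.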
But, the more bits you can get off the sensor chip into a PC, the greater is your ability to characterise the noise and cancel or at least reduce it.
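And here is a toy example of where those extra bits would genuinely pay off, using the same made-up sensor: in deep shadows, where the read noise falls below one 12-bit step (about 14.6 e- here), frame averaging cannot recover what quantization has already destroyed, while a 16-bit ADC keeps the noise properly sampled:

[code]
import numpy as np

rng = np.random.default_rng(1)

full_well = 60000      # e-, same hypothetical sensor as above
shadow = 10            # e-: a deep-shadow patch
read_noise = 3.0       # e- RMS, well below one 12-bit step (~14.6 e-)

# 64 frames of the same shadow patch: shot noise plus read noise.
frames = rng.poisson(shadow, (64, 10_000)) + rng.normal(0.0, read_noise, (64, 10_000))

for bits in (12, 16):
    step = full_well / 2**bits
    stacked = (np.round(frames / step) * step).mean(axis=0)  # average the stack
    print(f"{bits:2d}-bit stack: mean = {stacked.mean():5.2f} e-, "
          f"residual sigma = {stacked.std():4.2f} e-")
[/code]

The 12-bit stack comes back with a biased mean and more residual noise, because there is not enough noise left to dither the coarse steps; the 16-bit stack averages down cleanly.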
So, why are they not producing at least 16-bit raws instead of 12-bit? That is still a factor of 16 (about 1.2 orders of magnitude) and is not to be sneezed at by anyone.
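For the record, the arithmetic behind those comparisons (each extra bit doubles the number of codes, i.e. adds log10(2), about 0.3 orders of magnitude):

[code]
import math

for bits in (12, 14, 16, 22):
    extra = bits - 12
    print(f"{bits:2d} bits: {2**extra:5d}x the codes of 12-bit, "
          f"{extra * math.log10(2):4.2f} orders of magnitude")
[/code]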
It's like putting a speed limiter set at 55 mph into a high-performance sports car. Yes, we gave you a six-speed transmission, but we have locked out the top two gears - sorry. But ain't it great that you get a six-speed gearbox?
Some Pentax marketing copywriter has found a bone to gnaw on and has totally failed to notice that it is his own leg.
Andrew