It's often difficult to make sense of the advantages, or otherwise, of higher bit depths. For example, I use a drum scanner that is nominally a 12-bit device, but its dynamic range blows away supposedly superior and newer 16-bit CCD scanners.
My understanding is that bit depth does not equal dynamic range. If you have great dynamic range, high bit depth is useful for holding all the gradations within it. However, DR has to do with the range of brightness the sensor or film can handle, not the numbers that represent the values.
The analogy I've heard is to think of a photo-site on a sensor as a kind of well. At the low end, the issue is how many photons it takes to register a voltage. In other words, how low a light level will be recorded above absolute black? At the high end, how many photons can the well hold before it overflows -- how much light can it handle before it is saturated? DR is the range between these two.
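To put rough numbers on the well analogy -- and I should stress these figures are purely illustrative, not from any particular sensor -- DR is often expressed in stops as the base-2 log of the ratio between full-well capacity and the noise floor:

```python
import math

# Hypothetical figures for illustration only (not a real device):
full_well_electrons = 60000   # how many electrons the well holds before it saturates
noise_floor_electrons = 15    # smallest signal distinguishable from absolute black

# Each stop is a doubling of light, so DR in stops is log base 2 of the ratio.
dynamic_range_stops = math.log2(full_well_electrons / noise_floor_electrons)
print(f"Dynamic range: {dynamic_range_stops:.1f} stops")  # about 12 stops
```

Note that nothing in that calculation mentions bit depth; it is entirely a property of the well and the noise.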
Bit level comes into play when the output from the sensor is converted to digital representation. The signal level from the sensor can be recorded as an 8-bit number, a 12-bit number, a 16-bit number, etc. It has nothing to do with what kind of range the sensor is capable of. Bit level determines how finely the brightness levels are divided. So, if you have a wide DR, it is useful to have a high bit level, but bit level does not determine DR.
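A quick sketch of that last point: whatever range the sensor delivers, the converter just divides it into 2^bits steps. Raising the bit depth makes the steps finer but does not stretch the range itself.

```python
# The sensor's range is fixed; bit depth only changes how many
# quantization steps divide that same range.
for bits in (8, 12, 16):
    levels = 2 ** bits
    print(f"{bits:2d}-bit conversion: {levels:>6d} levels over the same range")
```

So my 12-bit drum scanner records 4096 gradations across a wide range, while a 16-bit scanner may record 65536 gradations across a narrower one.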
That is how I understand it. If I am in error, I would appreciate correction or clarification.
Regards,
Robin Casady
http://www.robincasady.com