Dithering is an intentional introduction of noise.
I find that definition overly constrained in two respects.
The first is in the use of the word "noise". Wholly deterministic waveforms can provide useful dither; I myself have a patent on one such technique: http://www.google.com/patents/US4187466
The second is in the word "intentional". The result is the same whether the non-signal component is natural or introduced for the purpose, and the electrons can't understand intention.
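As an illustrative sketch (not the patented technique, and with made-up numbers): a purely deterministic ramp added before truncation recovers a sub-LSB signal level on average, which is all dither needs to do.

```python
import math

def quantize(x):
    """Truncate a sample to whole LSBs, as an idealized converter would."""
    return math.floor(x)

def dithered_quantize(x, n_steps=10):
    """Add a deterministic ramp of n_steps sub-LSB offsets before
    quantizing, then average the resulting codes."""
    codes = [quantize(x + k / n_steps) for k in range(n_steps)]
    return sum(codes) / n_steps

level = 0.34  # a constant signal well below one LSB (illustrative value)

print(quantize(level))           # 0 every time: the level is simply lost
print(dithered_quantize(level))  # 0.3: recovered to within one dither step
```

Nothing random anywhere in that loop, yet the sub-LSB information survives quantization.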
Better to have enough bit-depth to resolve the real noise of a sensor.
If shot noise is real noise, and I think it is, that means you need to be able to resolve an individual electron. That would take a precision greater than log2(FWC) bits, where FWC is the full well capacity in electrons (how much greater is subject to some debate).
Unless, maybe, the sensor has a lot of fixed-pattern noise in it, and some of them certainly do.
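To put a purely illustrative number on that (the full well capacity here is assumed, not any particular sensor's): a 90,000-electron well already demands 17 bits just to give every possible electron count its own code.

```python
import math

full_well_capacity = 90_000  # electrons; an assumed value for illustration

# Bits required to assign a distinct code to every possible electron count
bits = math.ceil(math.log2(full_well_capacity))
print(bits)  # 17
```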
I don't want noise in an image. I prefer a clean signal with lots of bit-depth. Gives you more to work with.
You don't get a choice. Even with an ideal sensor, you'll have shot noise. With real sensors, you'll also have PRNU and various flavors of read noise.
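A quick sketch of that floor, at an arbitrary assumed exposure level: photon arrival is a Poisson process, so even a noiseless, perfectly uniform sensor sees count fluctuations with sigma equal to the square root of the mean count.

```python
import numpy as np

rng = np.random.default_rng(0)

mean_electrons = 1_000   # assumed exposure level, in electrons per pixel
n_pixels = 100_000

# The only "noise source" here is photon statistics: Poisson-distributed
# counts around the mean exposure. This is the shot-noise floor.
counts = rng.poisson(mean_electrons, n_pixels)

shot_sigma = counts.std()
print(shot_sigma)                   # ~31.6, i.e. sqrt(1000)
print(mean_electrons / shot_sigma)  # SNR ~31.6, set by physics alone
```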
If people are happy with a lossy compression scheme used to destroy the image of a 42 MPixel sensor in a $3000+ camera, great. Sony has better marketing than they do engineers.
Your use of the word "destroy" is curious. I have done a great deal of testing and simulation of cRAW compansion. I think I know what it does well, and where it has problems. I've never seen, in real images or in simulations where photon noise is modeled, any artifacts that merit the use of the word "destroy". I invite you to post examples.
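To illustrate why compansion need not be destructive, here is a hedged sketch using a generic square-root companding curve (my own toy example, not Sony's actual cRAW algorithm, and with an assumed gain and exposure level): as long as the decode step at each signal level stays below the photon-noise sigma at that level, the quantization error vanishes into noise that was already there.

```python
import numpy as np

rng = np.random.default_rng(1)
GAIN = 4.0  # chosen so the code step stays well under the shot-noise sigma

def encode(x):
    """Square-root companding: code spacing grows with signal level."""
    return np.round(GAIN * np.sqrt(x))

def decode(code):
    return (code / GAIN) ** 2

level = 10_000                         # electrons; illustrative mid-tone
photons = rng.poisson(level, 100_000)  # shot noise, sigma = 100
reconstructed = decode(encode(photons))

shot_sigma = photons.std()
added_sigma = (reconstructed - photons).std()

print(shot_sigma)   # ~100
print(added_sigma)  # ~14: far below the shot noise it hides under
```

With these assumed numbers, a 90,000-electron well maps to codes no larger than 4 × 300 = 1200, so roughly 11 bits carry what naive encoding would spend 17 bits on, at the cost of an error well under the photon noise.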
Would I be happier if Sony didn't use cRAW compression? Sure. Will their use of that form of compression stop me from buying more cameras from them? Heck, no.
Jim