Regarding Foveon, where did you get these 120 to 200% values from, if I may ask? Aren't those figures experimental data that were impacted by the noise resulting from the Foveon implementation of the multi-layer sensor idea? I still don't see any theoretical justification for them.
I'm basing that on my comparisons of 100% crops of SD9 and SD10 images to 100% crops from my 1Ds and 1D-MkII. I'm only putting those figures out as a rough estimate, not as something scientifically precise. I'm not sure that anyone has devised a way to objectively measure image quality that goes beyond simple S/N and resolution measurements and takes into account color accuracy, the visual acceptability of whatever image artifacts may be present, and other such issues. Given that, there is currently no way to precisely quantify the image quality differences between Bayer and Foveon.
Your example would probably be closer to reality if the water and oil were mixed into a suspension. Removing a fixed amount of liquid from the top would affect less oil than if no water had been added, but it would still affect some.
No, my analogy is accurate as-is. When you manipulate an image, the rounding errors and entropic losses are introduced in the least significant bits, and they gradually work their way into the more significant bits as one performs more edits on the image data. The whole point of 16-bit editing is to keep the rounding errors and other entropic reductions in the bits that are made up anyway, so losing some of them does not compromise the actual information.
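To illustrate, here's a throwaway numpy sketch of my own (nothing scientific; the gamma round-trip just stands in for a generic edit, and the numbers are arbitrary). Data that originated as 8-bit is padded to 16 bits, the same edit-and-undo cycle is run at both bit depths, and the accumulated rounding damage is measured in 8-bit levels:

```python
import numpy as np

levels8 = np.arange(256, dtype=np.float64)           # original 8-bit values
img8 = levels8.copy()                                # edited at 8-bit depth
img16 = levels8 * 257.0                              # padded to 16-bit depth (255 * 257 = 65535)

for _ in range(20):                                  # 20 edit/undo round trips
    img8 = np.round((img8 / 255.0) ** 0.45 * 255.0)
    img8 = np.round((img8 / 255.0) ** (1 / 0.45) * 255.0)
    img16 = np.round((img16 / 65535.0) ** 0.45 * 65535.0)
    img16 = np.round((img16 / 65535.0) ** (1 / 0.45) * 65535.0)

err8 = np.max(np.abs(img8 - levels8))                # damage, in 8-bit levels
err16 = np.max(np.abs(img16 / 257.0 - levels8))      # same, via the 16-bit path
print(f"after 20 edits: 8-bit error = {err8:.0f} levels, 16-bit error = {err16:.4f} levels")
```

The 8-bit path drifts by whole levels, while the 16-bit path keeps the damage buried far below one 8-bit level; that buried portion is exactly the "made up" bits I'm talking about.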
Another way of looking at it: if you edit a 16-bit image to the point that only every eighth level is populated, you have invalidated or lost the least significant 3 bits of image data (2^3 = 8). If you continue editing until only every 32nd level is populated, you have now invalidated or lost the least significant 5 bits (2^5 = 32). Since the true image information is contained in the most significant bits of the image data, you have to lose/invalidate approximately 7 bits' worth of image data (toothcombing the histogram so that only every 128th level is populated) before you start corrupting or losing any of the real image information. That's pretty tough to do; a sensible workflow (convert RAW, adjust levels/curves, moderate color tweaks, and sharpen) is only going to introduce 1-3 bits of entropy losses (maximum toothcombing of the histogram to every eighth level or so), which still leaves you at least 3-4 bits' worth of buffer between the real image information and the entropic garbage.
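If you want to sanity-check that arithmetic, here's a quick numpy sketch (the divide-by-8 quantization is just a stand-in for whatever edit toothcombs the histogram down to every eighth level):

```python
import numpy as np

data = np.arange(65536, dtype=np.float64)    # every 16-bit level populated
edited = np.floor(data / 8.0) * 8.0          # toothcomb: only every 8th level survives

populated = np.unique(edited).size           # 8192 levels remain
bits_lost = 16 - np.log2(populated)          # = 3, i.e. 2^3 = 8
print(f"{populated} of 65536 levels populated -> ~{bits_lost:.0f} LSBs invalidated")
```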
Digital audio editing works the same way; it is common to record with a 16- or 24-bit ADC, pad the data with zeroes to make it 32-bit, edit, and then reduce the bit depth back to 16 bits for the final output. If done correctly, the error of greatest magnitude in the final output will arise from rounding to the nearest 16-bit value. All of the entropic losses and rounding errors introduced during editing are buried in the least significant 4-8 bits of the data, bits that are thrown away anyway. The rounding error inherent in the reduction to 16 bits is far greater in magnitude, but it is still acceptable: the 16-bit audio format is good enough, and the result is still the best 16-bit representation possible.
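The same idea in one more numpy sketch (the 0.87 gain ride is just a made-up representative edit): 16-bit samples are padded up to the 32-bit scale, edited there, and rounded back down, and the final error against the original never escapes the bits that get thrown away.

```python
import numpy as np

rng = np.random.default_rng(0)
samples16 = rng.integers(-32768, 32768, size=100_000)   # stand-in for a 16-bit recording

work = samples16.astype(np.float64) * 65536.0           # pad up to the 32-bit scale

work = np.round(work * 0.87)    # gain ride; rounding noise lands in the padded low bits
work = np.round(work / 0.87)    # undo it; more rounding noise, still in the low bits

out16 = np.clip(np.round(work / 65536.0), -32768, 32767).astype(np.int64)
print("max error vs original, in 16-bit LSBs:",
      np.max(np.abs(out16 - samples16)))                # 0: the damage stays below the 16-bit rounding step
```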