I think Bernard has a good point about image manipulation causing a loss of 'real' information. It's not about rounding errors in floating-point arithmetic. Suppose you take a picture of a test chart using a 12mp camera (say a 1Ds) and a 3mp camera (say a D30). Now downsample the 1Ds image 2:1 to make a 3mp image, then subtract this from the D30 image. What you then have is an 'error image' for the D30. You could measure the magnitude of the error as the standard deviation of the difference over all the pixels.
That would be totally useless, because if you didn't do the exact same sharpening, curves, creative tweaks, and other processing to both images, there would be a difference as a result, and you'd have no real way of proving which camera was "right". You can make a difference mask, but that cannot tell you which image is right, or even if either image is right; it can only tell you how much the two images differ.
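For what it's worth, here's a rough numpy sketch of that difference-mask measurement. The 2:1 box downsample and the variable names are just my stand-ins, and it assumes the two captures are already aligned, identically framed grayscale arrays. Note that, per the caveat above, the number it spits out only quantifies how much the two images differ:

```python
import numpy as np

def downsample_2to1(img):
    """Naive 2:1 box downsample: average each 2x2 block of pixels."""
    h, w = img.shape
    h, w = h - h % 2, w - w % 2          # trim odd rows/columns
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

def difference_stats(img_a, img_b):
    """Difference mask between two aligned images, plus its std deviation.

    The std only says *how much* the images differ; it cannot say
    which image (if either) is closer to the truth.
    """
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return diff, diff.std()

# Hypothetical usage with two aligned grayscale captures of the chart:
# mask, sigma = difference_stats(downsample_2to1(img_1ds), img_d30)
```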
The entropic data loss Bernard was referencing is all about rounding errors, and about functions where multiple input values can produce the same output value. Once data has passed through such a function, you can no longer know with certainty which input value was responsible for a given output value. This loss starts in the lowest-order bits and gradually works its way into the higher-order bits as more edits (curves, levels, and suchlike) are performed. For example, if you do a curve adjustment in 8-bit mode where input levels 8, 9, 10, and 11 all get mapped to an output value of 5, you have just destroyed the least significant 2 bits of those deep shadow values, because you mapped 4 input values (2 bits' worth) onto 1 output value.

As you continue to perform more edits, you gradually corrupt and destroy information in progressively higher-order bits, but it's not a linear progression. IIRC, if you have a sequence of edits that each cause 2 bits' worth of entropic loss, it takes two edits to lose 3 bits of information, four edits to lose 4 bits, sixteen edits to lose 5 bits, and so on, with each additional bit requiring the square of the previous bit's number of edits to be destroyed.

Most Photoshop edits cause 2 bits of entropic loss or less, so when editing in 8-bit mode this loss can become significant enough to manifest as visible artifacts, usually banding or posterization. But in 16-bit mode you can do many more edits, with a higher entropic loss per edit, before visible degradation occurs, because the entropic degradation always affects the lowest-order bits first, and the image information always lives in the highest-order bits. So if you can add extra bits to the data, even as simple zero padding (which is what happens when you convert an 8-bit file to 16 bits), you can edit in 16-bit mode and destroy less real image information while doing so.
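To make the many-to-one collapse concrete, here's a small numpy sketch. The 8, 9, 10, 11 → 5 curve above was hypothetical, so this uses an ordinary gamma curve as a stand-in: feed every possible 8-bit level through it and look at which inputs collide in the shadows.

```python
import numpy as np

def curve_8bit(levels, gamma):
    """Apply a gamma curve to 8-bit levels, rounding back to 8 bits."""
    x = levels.astype(np.float64) / 255.0
    return np.rint(255.0 * x ** gamma).astype(np.uint8)

ramp = np.arange(256, dtype=np.uint8)    # every possible input level
out = curve_8bit(ramp, gamma=2.2)        # gamma 2.2 is an arbitrary choice

for level in range(4):                   # deep shadows, where levels collapse
    print("output", level, "<- inputs", list(ramp[out == level]))
print("distinct output levels:", np.unique(out).size, "of 256")
```

Every input that lands in a shared bucket is information you can never get back: run the inverse curve afterward and all of those shadow levels come back as the same value.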
Here's a practical application of this information theory crap: an experiment you can perform for yourself to see all of this in action. Open a JPEG image that is already reasonably well processed (my little girl pic would be a fine candidate), convert it to 16-bit RGB mode, and, while recording an action, do a series of random curve and/or levels adjustments to screw it up and then put it right again. An easy one is a Levels adjustment where you change nothing but the gamma control (the middle of the three sliders). Do 10 random gamma tweaks between 0.5 and 2.0, with the last one or two designed to return the image as closely as possible to its original state. Stop recording, convert back to 8-bit mode, and save the tweaked image in a new file.

Now reopen the original image, leave it in 8-bit mode, run the action you just recorded, and save the result as a third copy. Open the copy that was tweaked in 16-bit mode and compare its appearance to the one that was tweaked in 8-bit mode. Both files had the exact same number of bits destroyed by the gamma tweaks, but in the file tweaked in 16-bit mode, the destroyed bits were the zero bits padded onto the real image information during the 8-to-16-bit conversion (and thus no real loss), while in the file tweaked in 8-bit mode, the destroyed bits were actual image information, resulting in a visible degradation of image quality. While you're at it, compare the histograms.
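If you'd rather see the arithmetic than fire up Photoshop, here's a numpy sketch of the same experiment run on a plain gray ramp. Everything in it is my own stand-in (the specific gammas, the seed, promotion by scaling for the high-bit path), and it only approximates Photoshop's real 16-bit pipeline, but the punch line is the same:

```python
import numpy as np

# Ten gamma moves: nine random tweaks plus a near-inverse final move
# (gammas compose by multiplying, so 1/product undoes them exactly in
# continuous math -- only the per-step rounding survives).
rng = np.random.default_rng(42)
gammas = list(rng.uniform(0.5, 2.0, size=9))
gammas.append(1.0 / np.prod(gammas))

def run_edits(levels8, bit_depth):
    """Apply the gamma sequence, quantizing to bit_depth after each edit."""
    top = (1 << bit_depth) - 1
    x = np.rint(levels8.astype(np.float64) * (top / 255.0))   # promote
    for g in gammas:
        x = np.rint(top * (x / top) ** g)   # each edit rounds to bit_depth
    return np.rint(x * (255.0 / top)).astype(np.uint8)        # back to 8-bit

ramp = np.arange(256, dtype=np.uint8)
print("levels surviving the 8-bit path: ", np.unique(run_edits(ramp, 8)).size)
print("levels surviving the 16-bit path:", np.unique(run_edits(ramp, 16)).size)
```

The 16-bit path comes back with its histogram essentially solid, while the 8-bit path develops the comb-like gaps and spikes that show up on screen as banding, which is exactly what the histogram comparison at the end of the Photoshop experiment should show.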