The only part of which I am not 100% convinced yet (and I am not saying you are wrong, just that you haven't convinced me yet) is where you seem to be saying that the first two edits of a 16-bit RAW-converted file (each costing, for the sake of discussion, 2 bits) will only damage artificially created data that was not present as information in the 12-bit RAW file in the first place.
It's a combination of logic and information theory. Information theory proves that your average Bayer RAW file has no more than 8 bits per pixel of non-redundant content (image information plus sensor noise). This is a necessary corollary of the observable lossless compressibility of RAW files. The articles I cited earlier go into the math behind that deeper than either of us will probably care to go, especially the PDF.
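To make the compressibility argument concrete, here's a toy sketch. It simulates 12-bit sensor samples that carry only about 8 bits of non-redundant content each (that noise-free model is my assumption for demonstration, not a real sensor characterization), stores them as 16-bit words the way a file would, and measures how many bits per sample a general-purpose lossless compressor actually needs:

```python
import zlib
import random

# Simulate samples with ~8 bits of "real" content placed in a 12-bit range,
# stored on disk as 16-bit words (an illustrative model, not a sensor model).
random.seed(42)
n = 100_000
raw = bytearray()
for _ in range(n):
    v = random.getrandbits(8) << 4      # 8 non-redundant bits, 12-bit range
    raw += v.to_bytes(2, "little")      # 16 bits of storage per sample

compressed = zlib.compress(bytes(raw), level=9)
bits_per_sample = len(compressed) * 8 / n
print(f"stored: 16 bits/sample, compressed: {bits_per_sample:.1f} bits/sample")
```

The compressor lands well under the 12 bits nominally present (and can never go below the ~8 bits of actual entropy), which is exactly the "fewer real bits than stored bits" relationship being argued.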
So we have 8 bits or less per pixel of actual image information going into the RAW conversion process, which is hiding in 12 bits of RAW data. That data goes into the RAW converter, which expands the 12-bit RAW to 48-bit RGB. We've agreed that no new information is created by this process, and we've also agreed that editing operations destroy low-order, least significant bits first. So the only remaining issue is where those "real" bits are hiding in the output of the RAW converter.
In the vast majority of cases, RAW conversion is done in such a manner that the range of output values is fairly well distributed between minimum and maximum. You are correct in asserting that the "real" bits are not simply the highest-order data bits, as you would get with simple zero-bit padding, but are cleverly spread around where they will do the most good. But that doesn't mean you'll ever find them in the lowest-order bits of the RGB data, as you would have to do a really extreme levels adjustment to get them anywhere close.

Let's say you did a levels adjustment where the output scaled from 0 to 7 in Photoshop's dialog. That would vacate the 5 highest-order bits of any real image information and replace them with zeroes, moving the image data to less significant bits. Instead of real image data living somewhere in bits 9-16 of a given color channel, it would be relocated to bits 4-11, with bits 12-16 filled with zeroes. Even then, we would still have 3 low-order bits left to bear the brunt of the entropic losses. But that is an absurdly extreme example; I've never done it, and if I did, I wouldn't care if a few low-order real bits got munched, because those pixels have already been relegated to extreme shadows, and you would need a really good printer and custom profile combination to get any detail other than featureless black out of them anyway.
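The bit arithmetic in that extreme example can be sketched in a few lines. Mapping the 8-bit output range down to 0..7 is roughly a divide by 32, i.e. a 5-bit right shift (a simplification of what a real levels tool does, which I'm assuming here for illustration):

```python
# Extreme levels move: 8-bit output range 0..7 is 0..2047 on the 16-bit
# scale, which is roughly a 5-bit right shift of every channel value.
def levels_output_0_to_7(v16):
    return v16 >> 5   # 0..65535 -> 0..2047

v = 0b1011_0110_0000_0000        # 8 real bits living in bits 9-16
shifted = levels_output_0_to_7(v)
print(f"before: {v:016b}")       # real data in the top 8 bits
print(f"after:  {shifted:016b}") # same bits, relocated to bits 4-11
```

The real bits are moved, not destroyed; bits 12-16 become zeroes and only bits 1-3 are left as a buffer against subsequent edits.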
But in any normal scenario where the brightest highlights are at (8-bit scaled) level 128 or greater, the most significant bits of real image information must be somewhere in the most significant bits of the RGB color channels. It is not possible to "split" the bits of real information into noncontiguous groups in a single color channel; they come off the sensor together, and they stay together in the RGB data. If 8 bits of real image information start at bit 16 in the RGB data, they can't go down to bit 12 and stop, be interspersed with some guesswork bits, and then pick up again at bit 6 and continue down to bit 3. We've already established that a Bayer RAW doesn't contain enough information to precisely define all 48 RGB bits, so if the highlights are greater than (8-bit scaled) 127, the real image information has to be contained somewhere in the topmost 8 bits of the RGB color channels. That means the lowest-order bits have to be some combination of guesswork and garbage.
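Here's a minimal sketch of the conclusion, under two assumptions of mine: the ~8 real bits sit contiguously at the top of a 16-bit channel (bit-replication padding fills the rest, a common expansion scheme), and each edit is modeled as simply destroying the lowest 2 bits:

```python
# If the real bits occupy the top of the channel, edits that chew up
# low-order bits only damage the padding/guesswork bits beneath them.
def expand_8_to_16(v8):
    # bit-replication padding: 0xB6 -> 0xB6B6 (assumed expansion scheme)
    return (v8 << 8) | v8

def lossy_edit(v16, lost_bits=2):
    # crude model of an edit that zeroes the lowest `lost_bits` bits
    return (v16 >> lost_bits) << lost_bits

v8 = 0b1011_0110                  # 8 real bits; highlight >= 128 (top bit set)
v16 = expand_8_to_16(v8)
edited = lossy_edit(lossy_edit(v16))   # two edits, 2 bits lost each
print(f"original: {v16:016b}")
print(f"edited:   {edited:016b}")
print("top 8 bits intact:", (edited >> 8) == v8)
```

After two 2-bit losses, the top 8 bits (the real information) come back out untouched; only the replicated filler below them has been munched. That's the whole claim in miniature.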