You replied about a minute too soon. I just edited the post you replied to: there were two figures of 1000 or close to it in my post, and in the post you replied to I had apparently referred to the wrong one.
You talk a lot, but your logic is questionable and you present absolutely no data to back up your assertions. Could you give us a link to your data?
You need to be a bit clearer about what it is you don't believe. I only got a very vague sense of it from your previous reply.
So let me state the core of what I am saying, and then you can object to something specific or ask for proof. As it stands, I am expected to guess what seems illogical to you and defend it; nothing I've written recently is illogical to me, so I can't figure out what you find questionable.
Here is a summary of what I have been saying:
"In the face of all the analog noises involved in the readout of a sensor, quantization of any practical significance can only occur when the number of linear levels used is such that the blackframe read noise, in those ADUs, falls significantly below 1.4, for single-exposure RAW images."
As I've stated previously, 1.4 is a conservative value; we can get away with read noise as low as 1.1 ADU without incident (which implies even fewer levels are needed).
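To put numbers on that (hypothetical figures, not measurements from any particular camera), here is the arithmetic in a few lines of Python: with 5.6 ADU of blackframe read noise in a 12-bit raw, dividing by 4 still leaves 1.4 ADU of noise, so about 1024 linear levels are enough.

```python
# Hypothetical example: 5.6 ADU read noise in a 12-bit (4096-level) linear raw.
# The coarsest usable quantization is the one that still leaves ~1.4 ADU of
# read noise in the new, coarser units.
read_noise_adu = 5.6   # assumed blackframe read noise at this ISO (made up)
full_scale = 4096      # 12-bit linear levels

max_divisor = read_noise_adu / 1.4       # 4.0: largest division we can get away with
min_levels = full_scale / max_divisor    # 1024 levels suffice at this read noise
print(f"divide by up to {max_divisor:.1f}; about {min_levels:.0f} levels suffice")
```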
The tools that I am using to look at these matters are horrible in terms of the workflow involved in compositing images for comparisons, but you can verify what I say for yourself quite easily:

1. Open two instances of IRIS. In the first, load a RAW from a camera whose read noise at that ISO is known, set the threshold sliders to some window down in the shadows, crop an area of interest, and save out the .fit file.
2. Load that crop into the second instance. Calculate the division needed to bring the read noise down to 1.4 ADU, divide the image by that factor, then multiply back by the same factor (or rescale the threshold sliders, if that is more convenient). What you will see, as far as your eyes can tell, is exactly the same thing in both instances.
3. Reload the crop into the second instance and divide by the read noise in original ADUs, so that the new noise is 1.0 ADU. *NOW* you can see a little bit of quantization.
4. Try again, bringing the read noise down to 0.8 and then 0.7 ADU, and things fall apart very rapidly.

This is true regardless of what level of read noise you started at; you only start running out of useful levels when you get down to 1.4 ADU of noise.
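If you would rather not drive IRIS by hand, here is a minimal numpy sketch of the same divide/round/multiply experiment (it is not IRIS, and the 5.6 ADU read noise and 40 ADU shadow level are just assumed example values): requantize a synthetic shadow patch so the noise lands at each target value and watch how few distinct levels are left.

```python
import numpy as np

rng = np.random.default_rng(0)

def requantize(img, factor):
    """Divide by `factor`, round to integer levels, multiply back up."""
    return np.round(img / factor) * factor

# Synthetic shadow patch: mean 40 ADU, 5.6 ADU of read noise (assumed values).
read_noise = 5.6
patch = np.round(rng.normal(40.0, read_noise, size=(256, 256)))

for target_noise in (1.4, 1.0, 0.8, 0.7):
    factor = read_noise / target_noise   # the division to apply in IRIS
    coarse = requantize(patch, factor)
    print(f"noise scaled to {target_noise} ADU: step = {factor:.1f} original ADU, "
          f"{np.unique(coarse).size} distinct levels left in the patch")
```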
Now, try similar things with highlights. Find the brightest area in a smooth, out-of-focus (OOF) gradient and measure its sigma. Then, as before, divide and multiply back so that this sigma comes down to 1.4 ADU. The bright area you took the shot-noise deviation from will look as though it lost no smoothness, but any darker areas in the image will show quantization. Try again with 1.0, 0.8, 0.7, etc.; the same principle keeps applying.
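The highlight case can be sketched the same way, again with assumed numbers (a gain of 0.5 ADU per electron, patches at 4000 and 50 ADU): shot noise grows with the square root of the signal, so the sigma measured in the bright area dictates a step far coarser than the shadows can dither.

```python
import numpy as np

rng = np.random.default_rng(1)
gain = 0.5  # assumed ADU per electron

def shot_noise_patch(level_adu, shape=(256, 256)):
    """Simulated smooth patch at `level_adu` with Poisson (shot) noise only."""
    return np.round(rng.poisson(level_adu / gain, size=shape) * gain)

bright = shot_noise_patch(4000.0)   # brightest area of the smooth OOF gradient
shadow = shot_noise_patch(50.0)     # a darker area in the same frame

step = bright.std() / 1.4           # division that puts the bright sigma at 1.4 ADU
for name, patch in (("bright", bright), ("shadow", shadow)):
    # noise per quantization step: ~1.4 dithers smoothly, well under 1 shows banding
    print(f"{name}: sigma = {patch.std():.1f} ADU, sigma/step = {patch.std() / step:.2f}")
```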
It is easiest to do this with single color channels, because they are easier to crop. You can verify, however, that the same principle applies to color by cropping carefully so that the RGB CFA pattern in the crop is unaltered, or, if you have plenty of RAM, by not cropping at all and just windowing-in on the areas to compare (the quantization must be applied before color conversion, though).
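For the color case, the only trick is keeping the 2x2 CFA phase intact when you crop. A small helper along these lines (assuming an RGGB-style Bayer mosaic stored as a plain 2-D array) is all it takes:

```python
def cfa_safe_crop(raw, y, x, h, w):
    """Crop a Bayer mosaic without shifting its 2x2 CFA phase."""
    y, x = y - (y % 2), x - (x % 2)   # snap the origin to an even position
    h, w = h + (h % 2), w + (w % 2)   # keep the size a whole number of 2x2 cells
    return raw[y:y + h, x:x + w]

# usage: crop = cfa_safe_crop(raw_mosaic, 123, 457, 300, 300)
```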
One day you will realize that noise is the hard ruler of appreciable levels, and that all the anecdotes about levels, levels per stop, and so on are usually irrelevant in today's noisy digital photography. Those anecdotes come from noiseless, synthetic graphics and are totally meaningless for real captures.