Roger's method of determining the read noise may not be accurate. As Emil Martinec has shown, the D3 clips the black point. My own tests with the D200 also show the black point is clipped, as shown in this Iris histogram of one of the green channels:

[attachment=4094:attachment]

Whether this is half Gaussian or some other distribution is not clear.

(EDIT - the red text is a mistake; ignore it - I explain what I should have written in another reply to bjanes)

The biggest clue comes from subtracting the number of positive values from the number of zero values. The result should be equal to the number of positive values if the RAW were really zeroed at photonic black (with a short exposure and no significant dark-current noise, of course). Here's a case where 14 bits would be convenient; the accuracy of this estimate would improve with more levels, even if the extra 2 bits did not contain significant usable signal.
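The diagnostic can be tried on simulated data. A toy sketch (stdlib only), assuming the black frame is Gaussian read noise centered on photonic black, rounded to whole ADU, with negatives clipped to zero; the noise level and sample size are invented for illustration:

```python
import random

random.seed(42)

N = 100_000
READ_NOISE_ADU = 2.5  # assumed read noise in ADU (illustrative only)

# Simulated black frame: Gaussian noise centered on photonic black (0),
# rounded to whole ADU, with negatives clipped to 0 as the camera does.
samples = [max(0, round(random.gauss(0.0, READ_NOISE_ADU))) for _ in range(N)]

zeros = sum(1 for s in samples if s == 0)
positives = sum(1 for s in samples if s > 0)

print(f"zeros = {zeros}, positives = {positives}, difference = {zeros - positives}")
```

Exactly how the zero count relates to the positive count depends on how the ADC quantizes (round vs. truncate) and where the clip sits relative to photonic black; the point of the diagnostic is that a large excess of zeros flags clipping at or above photonic black.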

If so, one could multiply the standard deviation by 1.66. It would be preferable to use Emil's regression method, but this would require additional data.
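The 1.66 comes from the half-normal distribution: if a Gaussian with true sigma σ is clipped at its mean and only the surviving half is measured, the measured standard deviation is σ·sqrt(1 − 2/π) ≈ 0.603σ. A quick check:

```python
import math

# Std. dev. of a half-normal distribution is sigma * sqrt(1 - 2/pi),
# so the correction factor back to the true sigma is the reciprocal.
factor = 1.0 / math.sqrt(1.0 - 2.0 / math.pi)
print(f"correction factor = {factor:.3f}")  # ~1.659, i.e. the 1.66 in the text
```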

Well, I get my figure by subtracting shot noise from total noise in quadrature, using a section of smooth, out-of-focus near-black. For example, with a mean level of 8 ADU (with almost no zeros in the sample), I get a sigma of 2.55 ADU. With an assumed 32K electrons at 4095, a mean of 8 ADU represents a mean of 8/4095 * 32000 = 62.5 electrons. The sigma of 2.55 ADU represents a sigma of 2.55/4095 * 32000 = 19.9 electrons. (19.9^2 - 62.5)^0.5 = 18.26 electrons total read noise, or 18.26/32000 * 4095 = 2.34 ADU. 10 electrons, as Roger reports, would make the D200 the best DSLR except maybe for the K10D, for deep shadows at ISO 100.
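Reproducing the quadrature arithmetic directly (full well and white point as assumed above):

```python
import math

# Figures from the post: an assumed 32K electrons at the white point of 4095 ADU.
FULL_WELL_E = 32000.0
WHITE_ADU = 4095.0

mean_adu = 8.0    # mean level of the near-black patch
sigma_adu = 2.55  # measured total noise of the patch

mean_e = mean_adu / WHITE_ADU * FULL_WELL_E    # ~62.5 electrons
sigma_e = sigma_adu / WHITE_ADU * FULL_WELL_E  # ~19.9 electrons

# Shot noise variance equals the mean in electrons, so subtract in quadrature.
read_e = math.sqrt(sigma_e**2 - mean_e)        # ~18.3 electrons
read_adu = read_e / FULL_WELL_E * WHITE_ADU    # ~2.34 ADU

print(f"read noise = {read_e:.2f} e- = {read_adu:.2f} ADU")
```

(Carried at full precision the read noise comes out at ~18.29 electrons; the 18.26 above results from rounding the sigma to 19.9 electrons before squaring. Either way it lands at ~2.34 ADU.)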

Edit - that text in red was not meant to be in my reply; I thought I had deleted it. 10 electrons would put it right in line with most of the DSLRs with the lowest read noise at ISO 100, not better.

Reverse-engineering the effects of clipping (from the difference between the number of zeros and the number of positive values), and working with a black frame, however, might be preferable, as then you could get the noise of the entire RAW, as opposed to a single color channel (considering green to be two channels, of course). It would take some experimentation to see if this is practical. Basically, you would have to clip different noise levels from a full Gaussian curve at different points in the curve, and see if the relationship has a direct mapping.
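There does appear to be a direct mapping, at least in theory: for Gaussian noise clipped at some threshold, the fraction of zeros pins down the clip point in sigma units (via the inverse normal CDF), and standard truncated-normal formulas then recover the true sigma from the surviving values. A sketch of the experiment (stdlib only; the noise level, clip point, and sample size are invented for illustration):

```python
import random
from statistics import NormalDist, stdev

ND = NormalDist()
random.seed(1)

TRUE_SIGMA = 10.0  # invented "true" read noise, in electrons
CLIP = 3.0         # invented black point, 0.3 sigma above photonic black
N = 200_000

# Simulated black frame: values at or below the clip are recorded as zero.
raw = [random.gauss(0.0, TRUE_SIGMA) for _ in range(N)]
survivors = [x for x in raw if x > CLIP]
zero_fraction = 1.0 - len(survivors) / N

# Map the zero fraction back to the clip point in sigma units...
a = ND.inv_cdf(zero_fraction)

# ...then undo the truncation bias in the survivors' standard deviation.
# For a normal truncated below at 'a' (standardized), the std shrinks by
# sqrt(1 + a*lam - lam^2), where lam = pdf(a) / (1 - cdf(a)).
lam = ND.pdf(a) / (1.0 - ND.cdf(a))
shrink = (1.0 + a * lam - lam**2) ** 0.5

estimated_sigma = stdev(survivors) / shrink
print(f"estimated sigma = {estimated_sigma:.2f} (true: {TRUE_SIGMA})")
```

In practice, quantization to whole ADU and any non-Gaussian tail in the read noise would perturb this, which is presumably where the experimentation comes in.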

If one multiplied Roger's read noise by 1.66 to correct for the clipping, then the read noise would be 17, more or less in agreement with your figures.

That is the whole point of the Imatest plot, which shows noise in terms of f/stops rather than S:N. The average person would have trouble translating an S:N figure into what is seen visually.


Well, a full graph of SNR at all RAW levels is the most useful, especially when comparing cameras on the same graph. Of course, white-balancing must be taken into consideration for practical use, possibly by charting the weakest channel, adjusted for its scaling factor. SNR in tungsten light or deep desert shade is obviously going to be worse than SNR in magenta light, if they are all to be fully WB'ed.
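To illustrate charting the weakest channel adjusted for its scaling factor: a WB multiplier m means a given output level was captured at 1/m of the raw signal, so the channel's SNR curve is simply read at the lower raw level (the multiplier itself scales signal and noise equally, leaving SNR untouched). A sketch with an invented sensor model; the full well, read noise, and tungsten blue multiplier are all assumed for illustration:

```python
import math

READ_NOISE_E = 18.0  # assumed read noise, in electrons

def snr(signal_e):
    """Photon shot noise plus read noise, summed in quadrature."""
    return signal_e / math.sqrt(signal_e + READ_NOISE_E**2)

# Blue channel under tungsten light: assume a WB multiplier of ~2.5,
# so an output level corresponds to a raw signal of (level / 2.5).
output_e = 1000.0
snr_neutral = snr(output_e)       # channel needing no scaling
snr_blue = snr(output_e / 2.5)    # weakest channel, pre-scaling signal

print(f"SNR at same output level: neutral {snr_neutral:.1f}, blue {snr_blue:.1f}")
```

This is why the weakest channel's curve, shifted by its multiplier, is the honest one to chart for a fully WB'ed image.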