Makes no sense to me. One is one, and the average of one is one. In a context of one level, any pixel is either black or white. However, in the deep shadows there can be no white, so one level is either black or near-black.
Is this not correct?
I'm deducing that the black equals 0dB and the near-black is the one level.
The point is that noise acts to dither the levels. Suppose the true signal is some value X between zero and one over a patch of the image (we are going to ignore natural scene variation for the purpose of answering your question). Suppose the noise is of strength N; for example, let N be one level. The noise adds a random number roughly between -N and N to X, so each pixel wants to record some number between X-N and X+N. Of course, the resulting signal plus noise is digitized, so the output is either 0 or 1; if the noise is random (uncorrelated from pixel to pixel), the value of X is reflected in the percentage of 1's vs. 0's in the patch -- roughly a fraction X of the pixels will be 1 and the rest 0. If we average the levels over a large enough patch, we recover the original signal, even though each individual pixel only recorded 0 or 1.
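To put a number on it, here is a quick simulation sketch (my own illustration, not something from the discussion above; the patch size and noise shape are just assumptions). It uses noise uniform over plus or minus half a level so the arithmetic comes out cleanly; real sensor noise is closer to Gaussian, but the averaging behaves the same way.

```python
import numpy as np

rng = np.random.default_rng(0)

X = 0.3            # true signal, in units of one quantization step
patch = 256 * 256  # number of pixels in the (assumed flat) patch

noise = rng.uniform(-0.5, 0.5, size=patch)  # ~one level of dithering noise
recorded = np.round(X + noise)              # each pixel digitizes to 0 or 1

print("fraction of 1's:", recorded.mean())  # comes out near 0.3, recovering X
```

Each individual pixel only ever reports 0 or 1, yet the patch average lands very close to the true value of 0.3.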
This is the basic idea that allows one to trade resolution for noise -- and why downsampled images look less noisy. Note also that while the average is more finely graded than steps of one level, that doesn't mean we buy anything by making individual pixels record values more finely spaced than the level of noise, because the individual values are jumping around randomly by an amount between -N and N.
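To illustrate that last point, here is a second sketch along the same lines (again my own numbers, chosen only for illustration, and assuming Gaussian noise of about one level): quantizing in steps of a quarter level instead of a full level makes essentially no difference to the accuracy of the patch average.

```python
import numpy as np

rng = np.random.default_rng(1)

X = 12.37          # true signal, in units of the coarse (one-level) step
sigma = 1.0        # noise of roughly one level
patch = 64 * 64    # pixels averaged per trial
trials = 200

errs_coarse, errs_fine = [], []
for _ in range(trials):
    noisy = X + rng.normal(0.0, sigma, size=patch)
    errs_coarse.append(np.round(noisy).mean() - X)          # 1-level steps
    errs_fine.append((np.round(noisy * 4) / 4).mean() - X)  # 1/4-level steps

print("rms error, 1-level steps:  ", np.std(errs_coarse))
print("rms error, 1/4-level steps:", np.std(errs_fine))
# Both come out around sigma/sqrt(patch) ~ 0.016 levels: once the noise is
# comparable to a level, the finer quantization buys essentially nothing.
```

In both cases the error of the averaged patch is set by the noise and the number of pixels averaged, not by how finely the individual pixels were quantized.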