I don't see how this can be true. The smallest possible step in a dynamic range is limited by the effect of a single photon. If photodetectors on sensors could actually count photons (which I believe they can't), and if we assume they could do so without interference from noise of any description, then the dynamic range is determined by the maximum number of photons that a sensor can 'process' during any exposure.
The context in which I was writing was one without noise, in which bit depth is the limiter. DR applies just as well to any computer-generated image, in which case you could double the DR with each extra bit of linear depth. I was thinking of shot noise as a noise in this context.
Of course, with a pure photon-counting situation, the maximum number of photons would determine DR. That is just as hypothetical as my noiseless image (for photography; not for computer-generated images).
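In that pure photon-counting case the arithmetic is trivial: the smallest distinguishable step is one photon, so DR in stops is just log2 of the maximum photon count. A quick sketch (the full-well figures here are made-up illustrative values, not measurements of any real sensor):

```python
import math

def photon_counting_dr_stops(full_well_photons):
    """DR for an ideal photon counter: max signal divided by 1 photon, in stops."""
    return math.log2(full_well_photons)

# Hypothetical full-well capacities, for illustration only
for name, fw in [("tiny 1-micron pixel", 800), ("large DSLR pixel", 60000)]:
    print(f"{name}: {photon_counting_dr_stops(fw):.1f} stops")
```

This is also why scale matters in the photon-counting picture: a bigger well means a bigger numerator, with the denominator pinned at one photon.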
This maximum is determined by the size of the sensor, all else being equal, or if you like, the size of individual photodetectors, all else being equal.
In this sense, size or scale is very relevant to DR.
In practice, you have to either subtract or cancel all sources of noise from this maximum signal capacity of the sensor in order to determine a useful DR.
That's where your analogy doesn't work: the maximum number of photons minus the "noise floor" does not relate to DR at all, unless you are still thinking about pure photon counting, where the lowest usable signal (by any arbitrary but consistent standard) is always the same number of photons, so there is a unique mapping between max minus noise floor and max divided by noise floor. The division is the most useful and straightforward measure; differences just produce a curve that has to be converted back to a ratio before any useful calculation. As soon as you do anything other than count photons/electrons, such as introducing read noise, the absolute difference between the noise floor and the max has no direct relationship to DR. A 16-bit camera with a noise floor of 32K ADU would have a traditional DR of only 2x, or one stop, even though the difference would be 32K ADU, or anywhere from 16K to 400K electrons depending on the gain. An 8-bit camera with a noise floor of 1.5 ADU (0.75 to 30 electrons, out of a max of 30K electrons) would have much more DR, even though the difference between the noise floor and the max signal is much smaller, in both ADUs and electrons.
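The two hypothetical cameras above make the point numerically. A minimal sketch, using the post's own made-up figures (65536 and 256 ADU maxima assumed for the 16-bit and 8-bit converters):

```python
import math

def dr_stops(max_signal, noise_floor):
    """Traditional engineering DR: max signal divided by noise floor, in stops."""
    return math.log2(max_signal / noise_floor)

# The two hypothetical cameras from the text (ADU = raw digital units)
cameras = [
    ("16-bit, huge noise floor", 65536, 32768),
    ("8-bit, tiny noise floor",  256,   1.5),
]

for name, max_adu, floor_adu in cameras:
    stops = dr_stops(max_adu, floor_adu)
    diff = max_adu - floor_adu
    print(f"{name}: DR = {stops:.1f} stops, difference = {diff:.1f} ADU")
```

The 16-bit camera has a difference of 32768 ADU but only one stop of DR; the 8-bit camera's difference is a mere 254.5 ADU, yet its DR is over seven stops. The difference and the ratio point in opposite directions.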
You seem to have a literal conception of "noise floor". It's a poorly chosen term, IMO, and it gives off false connotations. It is not the bottom of anything: signal exists below the noise floor and is not totally obscured by the noise. It's just an important turning point for SNR in the deep shadows. Don't forget, most of this discussion is pixel-centric, and that's fine, as long as we understand what that means. The DR we usually speak of is that of the pixel, but the pixel does not determine the image, and depending on the spatial frequency of the detail we are interested in capturing, you can get usable signal well below the noise floor. You can record a fat white letter that almost fills the frame, on a black background, with a clean DSLR, where the level for the white letter is a small fraction of a single photon per pixel. In the same way, as we use more and more pixels in our images, the pixel noise and pixel DR become less of an issue for image noise and DR. Never forget, the real world of light is individual photons, and any illusion of smooth levels is achieved by mechanical binning and the inability to resolve individual photons.
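The "signal below the noise floor" claim is easy to demonstrate with a toy Monte Carlo. This is a sketch under assumed numbers (a mean signal of 0.2 electrons per pixel under 3 electrons RMS Gaussian read noise, over a million pixels), not a model of any particular camera:

```python
import random
random.seed(0)

# Hypothetical pixel model: a 0.2 e- mean signal (a small fraction of a
# photon per pixel) buried under 3 e- RMS Gaussian read noise.
signal, read_noise, n_pixels = 0.2, 3.0, 1_000_000

samples = [signal + random.gauss(0.0, read_noise) for _ in range(n_pixels)]
mean = sum(samples) / n_pixels

# Averaging N pixels shrinks the noise by sqrt(N): 3 / 1000 = 0.003 e-,
# so the 0.2 e- signal stands tens of sigma above zero in the average.
print(f"estimated level: {mean:.3f} e-  (true: {signal} e-)")
```

A single pixel here is hopeless (SNR of roughly 0.07), yet the large-area average recovers the level cleanly, which is exactly the letter-on-black example.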
If DR is not 'scale' dependent, as you suggest, then you could claim, if it were possible to design a completely noise-free tiny sensor containing, say, 100 photodiodes just 1 micron in diameter, that such a tiny sensor could have the same dynamic range as, say, a 5D or P45+.
This would be clearly ridiculous. Reductio ad absurdum!
Yes, it would be, but as I said before, my context was one where DR isn't separable from bit depth, and such scenarios can exist, if not in a digital capture. You can do a 3D ray-tracing render with the output written as a linear DNG that looks like a camera capture, but where the only noise/distortion is quantization. In that case, DR would be directly proportional to bit depth.
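For that quantization-only case, a minimal sketch, assuming the smallest nonzero linear level is 1 LSB and the maximum is the top code of the file:

```python
import math

def quantization_dr_stops(bits):
    """If the only 'noise' is quantization, the smallest nonzero linear level
    is 1 LSB and the max is 2**bits - 1, so DR grows by one stop per bit."""
    return math.log2(2**bits - 1)

for bits in (8, 12, 16):
    print(f"{bits}-bit linear: {quantization_dr_stops(bits):.2f} stops")
```

Each extra bit of linear depth doubles the max-to-minimum ratio, which is the sense in which DR is directly proportional to bit depth here.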
I was just trying to give some balance to the idea that DR has nothing to do with bit depth.