A couple of issues I considered were:
- Some people talk about photographic DR defined with a minimum SNR threshold, something like SNR = 10.
- I would suggest that photographic DR is essentially a function of exposure.
- Read noise affects the darkest parts of the picture. How relevant is that?
As I see it, photographic DR can be defined in two ways:
- Relative to output image
- Relative to the data
In the first case one needs to consider the output image size and the viewing distance. The latter doesn't need to consider those (though one has to understand their relevance). Regardless, a photographic DR requires a minimum SNR.
One thing we need to notice is that the SNR needs to be normalized in some way. If we do not do that, then we're not talking about image SNR, but pixel SNR.
An interesting question is whether your suggestion of PDR being essentially a function of exposure holds water. I assume your arbitrary SNR=10 floor is for pixel SNR. As N(total) = sqrt(N(shot)^2 + N(read)^2), the pixel SNR for a signal of S photons is S/sqrt(S + N(read)^2), and we can easily see when read noise matters. Without any read noise a signal of 100 photons (or electrons) would give SNR=10. With most modern sensors the read noise at base ISO is about 3 or 4 electrons, so to reach SNR=10 we'd need about 109 to 114 photons; for such small read noises, read noise is indeed nearly irrelevant for PDR. Unfortunately there are also sensors - notably from Canon - which have much higher base ISO read noise. The new 7DII has ISO 100 read noise in the ballpark of 13 electrons. Thus to reach SNR=10 you'd need to collect about 190 photons!
Thus with such an SNR=10 floor we clearly cannot say that PDR is essentially a function of exposure alone; read noise needs to be considered.
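To make the arithmetic above explicit, here's a minimal sketch (pixel-level SNR, counting only shot noise and Gaussian read noise) that solves S/sqrt(S + r^2) = 10 for the minimum signal S:

```python
import math

def min_signal(snr_min, read_noise):
    """Smallest signal S (photons/electrons) with S / sqrt(S + r^2) >= snr_min.

    Squaring and rearranging gives S^2 - snr_min^2*S - snr_min^2*r^2 = 0,
    solved here with the quadratic formula (positive root).
    """
    s2 = snr_min ** 2
    return (s2 + math.sqrt(s2 ** 2 + 4 * s2 * read_noise ** 2)) / 2

for r in (0, 3, 4, 13):
    print(f"read noise {r:2d} e-: {min_signal(10, r):5.1f} photons for SNR=10")
# read noise  0 e-: 100.0 photons for SNR=10
# read noise  3 e-: 108.3 photons for SNR=10
# read noise  4 e-: 114.0 photons for SNR=10
# read noise 13 e-: 189.3 photons for SNR=10
```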
(Also, I think it is better to consider the storage capacity of the pixel - the full well capacity, FWC - instead of exposure itself; exposure would still have to account for the FWC, and also for QE and the number of pixels.)
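Building on the sketch above, here's how FWC would enter the picture. The FWC value is hypothetical, purely for illustration:

```python
import math

def min_signal(snr_min, read_noise):
    # smallest S with S / sqrt(S + r^2) >= snr_min (see the sketch above)
    s2 = snr_min ** 2
    return (s2 + math.sqrt(s2 ** 2 + 4 * s2 * read_noise ** 2)) / 2

def pdr_stops(fwc, snr_min, read_noise):
    # photographic DR in stops: from the SNR floor up to the full well
    return math.log2(fwc / min_signal(snr_min, read_noise))

# hypothetical pixel with FWC = 50,000 e-
print(pdr_stops(50_000, 10, 3))   # ~8.9 stops
print(pdr_stops(50_000, 10, 13))  # ~8.0 stops
```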
I am pretty sure that pattern noise is quite irrelevant for PDR.
About normalizing: the most straightforward way of normalizing is just to consider the whole image sensor - as we can relatively easily measure the relevant metrics of a pixel, we can also easily calculate what that means for the whole sensor:
S(total) = S(pixel)*n, where S() = number of photons and n = number of pixels.
N(total) = sqrt(n*N(shot)^2 + n*N(read)^2), where N() = noise, read = read noise, and shot = shot noise = sqrt(photons per pixel).
Of course such a normalization needs a correspondingly larger minimum SNR for PDR, since the whole-sensor SNR works out to sqrt(n) times the pixel SNR.
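A quick sketch demonstrating that sqrt(n) relation, using the formulas above (n = 20 MP is an arbitrary choice):

```python
import math

def snr_pixel(s, r):
    # per-pixel SNR: signal s photons, read noise r electrons
    return s / math.sqrt(s + r * r)

def snr_sensor(s, r, n):
    # S(total) = n*s, N(total) = sqrt(n*s + n*r^2)
    return n * s / math.sqrt(n * s + n * r * r)

n = 20_000_000            # arbitrary 20 MP sensor
s, r = 100, 3
print(snr_pixel(s, r))                  # ~9.58
print(snr_sensor(s, r, n))              # ~42800
print(math.sqrt(n) * snr_pixel(s, r))   # identical to the line above
```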
Alternatively we could use, for example, the normalization DxOMark uses.
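As I understand it, DxOMark normalizes to a fixed 8 MP output instead; under the same independent-noise assumption, downsampling improves SNR by the square root of the pixel-count ratio. A sketch of how that would look:

```python
import math

REF_PIXELS = 8_000_000  # DxOMark's reference output size, as I understand it

def snr_print(snr_pixel, n_pixels):
    # averaging n_pixels/REF_PIXELS pixels per output pixel improves SNR
    # by sqrt(n_pixels / REF_PIXELS), assuming independent pixel noise
    return snr_pixel * math.sqrt(n_pixels / REF_PIXELS)

print(snr_print(9.58, 20_000_000))  # ~15.1 for a 20 MP sensor
```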
Note:
None of the above considers colour's influence on noise. Creating colour out of the raw data increases noise, and this increase is not constant across cameras. For conventional cameras it is likely not much of an issue (weaker colour separation, like Canon's, should increase noise a bit in colour images), but if one considers Sigma's sensors it can be very significant. Calculating its relevance is beyond my limited skills, I suspect.
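For what it's worth, here's one rough way such a calculation could go, assuming colour is produced by a linear colour correction matrix and that per-channel noise is independent with equal variance; the matrices below are made up purely to illustrate mild vs. heavy colour correction:

```python
import math

def noise_gain(ccm_row):
    # a linear combination sum(c_j * x_j) of channels with independent,
    # equal-variance noise has its noise amplified by sqrt(sum(c_j^2))
    return math.sqrt(sum(c * c for c in ccm_row))

# made-up matrices (rows sum to 1); a sensor with weaker colour separation
# needs larger off-diagonal corrections, which amplify noise more
mild_ccm  = [[ 1.3, -0.2, -0.1],
             [-0.2,  1.5, -0.3],
             [-0.1, -0.4,  1.5]]
heavy_ccm = [[ 2.0, -0.8, -0.2],
             [-0.7,  2.2, -0.5],
             [-0.3, -1.0,  2.3]]

for name, m in (("mild", mild_ccm), ("heavy", heavy_ccm)):
    print(name, [round(noise_gain(row), 2) for row in m])
# mild  [1.32, 1.54, 1.56]
# heavy [2.16, 2.36, 2.53]
```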