But I wonder: why, beyond the point where diffraction starts to affect AIQ (about 5 microns for the larger formats), does the AIQ value start to fall? If diffraction at a given aperture produces image blurring, then once the chosen pixel pitch is smaller than this blur (circle of confusion, or whatever it is called), why should AIQ go down?
I think it should simply stop improving, i.e. remain flat, since nothing more is gained by making the photocells smaller. But according to Clark's plot it starts falling off quickly. I cannot understand this part of the curves. Could someone explain this behaviour?
It's because of the way it is defined. AIQ is
(S/N at 18% grey) x (effective MP) / 20
Up to an overall constant, this is to a good approximation (i.e. ignoring read noise and black point offset) the same as
sqrt[gain in e-/12bit ADU] x (effective MP)
One can debate the merits of combining these two in just this way, but let us put that aside. The claim is that AIQ combines the effects of SNR and resolution. The problem comes when diffraction effects are accounted for. As pixel size decreases, gain goes down in proportion to pixel area, and eventually diffraction limits resolution. So, the way Clark defines it, the effective MP part of AIQ saturates -- his effective MP count is the lesser of the actual MP count and the sensor area divided by the square of the diffraction spot diameter. But gain continues to decrease with pixel size, and so AIQ crashes.
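To make this concrete, here is a toy numerical sketch of that behaviour. The sensor size, f-number, wavelength, and the shape of the SNR factor are my own illustrative assumptions, not Clark's actual parameters:

```python
import math

SENSOR_AREA_MM2 = 36.0 * 24.0        # full-frame sensor (assumed)
SPOT_DIAM_MM = 2.44 * 550e-6 * 8.0   # Airy disk diameter at f/8, 550 nm (assumed)

def clark_aiq(pitch_mm):
    """AIQ ~ (per-pixel SNR) x (effective MP), in the spirit of Clark's definition."""
    actual_mp = SENSOR_AREA_MM2 / pitch_mm ** 2 / 1e6
    # Effective MP: the lesser of the actual MP count and the sensor
    # area divided by the square of the diffraction spot diameter.
    diffraction_mp = SENSOR_AREA_MM2 / SPOT_DIAM_MM ** 2 / 1e6
    effective_mp = min(actual_mp, diffraction_mp)
    # Shot-noise SNR ~ sqrt(signal) ~ sqrt(pixel area) ~ pitch.
    per_pixel_snr = pitch_mm
    return per_pixel_snr * effective_mp

# Once the pitch drops below the spot size (~10.7 um here), effective MP
# is pinned while per-pixel SNR keeps shrinking, so AIQ falls:
print(clark_aiq(0.008) > clark_aiq(0.004) > clark_aiq(0.002))  # → True
```

Shrinking the pitch from 8 to 4 to 2 microns leaves the effective-MP factor at the same capped value while the SNR factor halves each time, which is exactly the crash in the curves.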
Diffraction limits resolution, but it does not affect SNR, since the same photons are being collected; they are merely redistributed among the pixels by the diffraction phenomenon. So when evaluating SNR in the diffraction-limited regime, we should do it over the area of the diffraction spot, combining the pixels that lie within the dominant support of the spot; this is what Clark fails to account for. The signal will scale with the number of pixels, the noise with the sqrt of the number of pixels, and so the SNR of the diffraction spot will scale with the sqrt of the number of pixels within it. So really we should think of the factors in diffraction-limited AIQ as
AIQ = const. x (SNR in a diffraction spot) x (diffraction limited resolution)^2
This definition will of course saturate as the sensor becomes diffraction limited at a particular f-number. The SNR of a diffraction spot thus scales as the SNR of a pixel times the size of the diffraction spot in pixel widths.
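The sqrt scaling of the binned SNR can be checked with a quick Monte Carlo sketch. The photoelectron count and the number of pixels per spot below are arbitrary illustrative numbers, and shot noise is approximated as Gaussian (reasonable at ~100 electrons):

```python
import math
import random

random.seed(1)

MEAN_E = 100      # mean photoelectrons per small pixel (illustrative)
N_PIXELS = 16     # pixels within the dominant support of one spot (illustrative)
TRIALS = 20000

# Sum the shot-noise-limited signals of the N_PIXELS pixels in each spot.
spot_sums = [
    sum(random.gauss(MEAN_E, math.sqrt(MEAN_E)) for _ in range(N_PIXELS))
    for _ in range(TRIALS)
]
mean = sum(spot_sums) / TRIALS
var = sum((s - mean) ** 2 for s in spot_sums) / TRIALS

snr_spot = mean / math.sqrt(var)        # SNR of the binned spot
snr_pixel = MEAN_E / math.sqrt(MEAN_E)  # per-pixel shot-noise SNR = 10

# Signal adds as n, noise as sqrt(n), so the ratio is ~sqrt(16) = 4:
print(round(snr_spot / snr_pixel, 1))
```

The measured ratio comes out close to sqrt(N_PIXELS), confirming that redistributing the same photons over more, smaller pixels costs nothing in spot-level SNR.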
I would also think that a slightly different measure along these lines would be more appropriate. When one looks at an image, one takes it in as a whole and looks at features within it. The SNR component of AIQ should then be measured with respect to the fixed-size features being rendered by the camera; in other words, one should look at SNR per unit area and not the SNR of a pixel. Two slightly different versions are possible -- either SNR per unit sensor area, or SNR per fixed fraction of the frame area (depending on how one wants to treat crop factor). Then, for the same reasons as above, the SNR per area scales with the sqrt of the megapixel count. We then write
AIQ = const. x sqrt(full well electrons x megapixels) x sqrt(effective megapixels)
= const. x (SNR per area) x (linear resolution)
= const. x sqrt[gain x megapixels] x (linear resolution)
where "megapixels" is either megapixels per area or total megapixels, depending on how one wants to think about crop factor. Now when diffraction limitation sets in, the first factor is unchanged, and the second factor is replaced by the diffraction-limited linear resolution (in either line pairs per mm or line pairs per picture height, again depending on how one wants to treat crop factor). This modified definition also saturates properly as pixel size decreases.
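A toy numerical sketch of this modified definition (sensor size, f-number, wavelength, and full-well figures are my own illustrative assumptions) shows the saturation:

```python
import math

SENSOR_AREA_MM2 = 36.0 * 24.0        # full-frame sensor (assumed)
SPOT_DIAM_MM = 2.44 * 550e-6 * 8.0   # Airy disk diameter at f/8, 550 nm (assumed)

def modified_aiq(pitch_mm):
    """AIQ ~ (SNR per area) x (linear resolution)."""
    megapixels = SENSOR_AREA_MM2 / pitch_mm ** 2 / 1e6
    # Full well ~ pixel area; 50k electrons at 8 um pitch is assumed.
    full_well = 50e3 * (pitch_mm / 0.008) ** 2
    # sqrt(full well x MP) is independent of pitch: halving the pitch
    # cuts full well 4x while raising MP 4x.
    snr_per_area = math.sqrt(full_well * megapixels)
    # Linear resolution in lp/mm, capped by the diffraction spot size.
    linear_resolution = 1.0 / max(pitch_mm, SPOT_DIAM_MM)
    return snr_per_area * linear_resolution

# Improves until the pitch reaches the spot size, then stays flat:
print(modified_aiq(0.012) < modified_aiq(0.008))  # → True
print(modified_aiq(0.004) == modified_aiq(0.002))
```

Unlike the Clark-style definition, once the pitch is below the diffraction spot both factors are pitch-independent, so the curve flattens rather than crashing.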