
Author Topic: The more Mpx the better?  (Read 2786 times)

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
The more Mpx the better?
« on: March 09, 2008, 01:29:45 pm »

I have always been of the opinion that the more Mpx the better. More Mpx means more resolution, and thus the ability to resolve more detail, make tighter crops, produce larger prints, and so on.

Of course, sensor technology must be good enough to achieve decent performance (quantum efficiency, etc.), so this may not apply to smaller sensors like those found in cheaper compact cameras. But in the FF, 1.3x, 1.6x and 4/3 formats it seemed to me that there is still room to increase the Mpx count.

Noise per pixel will be higher in those cameras, but this does not mean the images are noisier: after rescaling to a common size for comparison, the noise is statistically averaged, and the final SNR per pixel would be the same as in the camera with larger pixels.

I have found a very interesting plot by Roger N. Clark that relates image quality to pixel and sensor size. He proposes a definition of Apparent Image Quality (AIQ) that takes into account both resolution and SNR, and relates this AIQ to pixel size for various camera formats (see the dotted curves):


[Figure: AIQ vs. pixel pitch for various sensor formats, © Roger N. Clark, Digital Sensor Performance Summary]


The graph clearly shows that there is still room to increase the number of Mpx, gaining higher AIQ, down to a pixel pitch of just over 5 microns, where diffraction begins to limit the improvement.
He adds that for very small pixel pitches (less than 2 microns), the low DR caused by the very few electrons in each photocell makes the AIQ fall even more quickly.

Taking the FF curve as an example, 5 microns would mean (at f/8.0 in Clark's plot):

35 mm / 5 microns per pixel ≈ 7000 pixels

for a total of 7000 × (7000 × 2/3) ≈ 32.7 Mpx
(remember that FF cameras like the Canon 5D and Nikon D3 have ~12 Mpx).
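The arithmetic can be double-checked with a short script (a sketch; the 35 mm width, 3:2 aspect ratio and 5 micron pitch are the figures quoted above):

```python
# Pixel count for a full-frame sensor at 5 micron pitch (figures from the post)
sensor_width_um = 35_000            # 35 mm expressed in microns
pitch_um = 5                        # pixel pitch at the knee of Clark's FF curve

pixels_wide = sensor_width_um / pitch_um      # 7000 pixels across
pixels_high = pixels_wide * 2 / 3             # 3:2 aspect ratio
total_mpx = pixels_wide * pixels_high / 1e6   # total megapixels

print(round(pixels_wide), round(total_mpx, 1))  # 7000 32.7
```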


Question: I find Clark's findings very logical, and the numbers seem to back him up.

But I wonder: why, after the point at which diffraction starts to affect AIQ (about 5 microns for the bigger formats), does the AIQ value start to fall? If diffraction at a given aperture produces image blurring, then once the chosen pixel pitch is smaller than this blur (the circle of confusion, or whatever it is called), why should AIQ go down?
I think it should simply stop improving, i.e. remain flat, since nothing more is gained by having smaller photocells. But according to Clark's plot it starts falling quickly. I cannot understand this part of the curves. Could someone explain this behaviour?
« Last Edit: March 09, 2008, 01:35:13 pm by GLuijk »
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
The more Mpx the better?
« Reply #1 on: March 09, 2008, 07:11:11 pm »

Hi!

The S/N term goes down with pixel size: the SNR at 18% grey is

sqrt(0.18 * full-well electrons)

and the full-well capacity shrinks with the photocell area.
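As a quick numeric illustration of that term (the two full-well values here are hypothetical, chosen only to contrast a large and a small pixel):

```python
import math

# Shot-noise-limited SNR at 18% grey: sqrt(0.18 * full-well electrons)
for full_well in (60_000, 15_000):       # hypothetical large vs small pixel
    snr = math.sqrt(0.18 * full_well)
    print(full_well, round(snr, 1))      # 60000 103.9, then 15000 52.0
```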

Erik


Quote
I have always had the opinion that the more Mpx the better. [...] Why after the point in which diffraction starts to affect AIQ (about 5 microns for the bigger formats) does the AIQ value start to fall? [...] Could someone explain this behaviour?
Logged
Erik Kaffehr
 

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
The more Mpx the better?
« Reply #2 on: March 09, 2008, 08:29:09 pm »

Quote
The S/N term goes down:

sqrt(0.18*Full well electrons)
Yes Erik, but consider that when reducing pixel pitch the Mpx term goes up faster than sqrt(Full well) goes down. If we assume:

- Full Well varies according to (pixel pitch)^2 (surface of photocell)
- And Mpx for a given sensor size varies according to (1/pixel pitch)^2 (inverse of surface of photocell)

Then AIQ, which varies as sqrt(Full Well) × Mpx, scales as:

sqrt((pixel pitch)^2) * (1/pixel pitch)^2=
(pixel pitch) * (pixel pitch)^(-2) =
1/(pixel pitch)

And 1/(pixel pitch) increases when pixel pitch goes down. This is the equation that provides the shape of Clark's curves for high values of pixel pitch, until diffraction starts to play a role.
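This scaling can be verified numerically (a sketch; the full-well density and sensor-area constants are arbitrary and cancel out of the ratio):

```python
import math

# Relative AIQ ignoring diffraction: sqrt(full well) * Mpx ~ 1/pitch
def aiq_scale(pitch_um, sensor_area_mm2=36 * 24, full_well_per_um2=1000):
    """Relative AIQ; the constants only set the overall scale."""
    full_well = full_well_per_um2 * pitch_um ** 2       # electrons, ~ pixel area
    mpx = sensor_area_mm2 * 1e6 / pitch_um ** 2 / 1e6   # megapixels on sensor
    return math.sqrt(full_well) * mpx

# Halving the pitch doubles AIQ, confirming the 1/pitch scaling
ratio = aiq_scale(2.5) / aiq_scale(5.0)
print(ratio)  # 2.0
```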
« Last Edit: March 09, 2008, 08:52:12 pm by GLuijk »
Logged

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
The more Mpx the better?
« Reply #3 on: March 09, 2008, 09:23:53 pm »

Quote
But I wonder: why, after the point at which diffraction starts to affect AIQ (about 5 microns for the bigger formats), does the AIQ value start to fall? If diffraction at a given aperture produces image blurring, then once the chosen pixel pitch is smaller than this blur (the circle of confusion, or whatever it is called), why should AIQ go down?
I think it should simply stop improving, i.e. remain flat, since nothing more is gained by having smaller photocells. But according to Clark's plot it starts falling quickly. I cannot understand this part of the curves. Could someone explain this behaviour?


It's because of the way it is defined.  AIQ is

(S/N at 18% grey)x(effective MP)/20  

Up to some overall constant, this is to a good approximation (i.e. ignoring read noise and black point offset) the same as

sqrt[gain in e-/12bit ADU] x (effective MP)

One can debate the merits of combining these two in just this way, but let us put that aside.  The claim is that AIQ combines the effects of SNR and resolution.  The problem comes when diffraction effects are accounted for.  As pixel size decreases, gain goes down in proportion to pixel area, and eventually diffraction limits resolution.  So, the way Clark defines it, the effective MP part of AIQ saturates -- his definition of effective MP count is the lesser of the actual MP count and the sensor area divided by the square of the diffraction-spot diameter.  But gain continues to decrease with pixel size, and AIQ crashes.
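That crash can be sketched with a toy model. Everything here is illustrative (the f-number, the full-well density, and the two-pixels-per-Airy-disk sampling rule are my assumptions, not Clark's exact calibration), but it reproduces the shape: effective MP saturates while per-pixel SNR keeps falling.

```python
import math

def clark_style_aiq(pitch_um, f_number=8.0, sensor_mm=(36, 24),
                    full_well_per_um2=1500):
    """Toy model of a Clark-style AIQ = SNR(18% grey) * effective MP."""
    spot_um = 2.44 * 0.55 * f_number          # Airy disk diameter, 550 nm light
    area_um2 = sensor_mm[0] * sensor_mm[1] * 1e6
    actual_mpx = area_um2 / pitch_um ** 2 / 1e6
    # Effective MP saturates: resolution capped at ~2 pixels per Airy diameter
    diffraction_mpx = area_um2 / (spot_um / 2) ** 2 / 1e6
    effective_mpx = min(actual_mpx, diffraction_mpx)
    # ...but per-pixel SNR keeps shrinking with pixel area, so AIQ crashes
    snr = math.sqrt(0.18 * full_well_per_um2 * pitch_um ** 2)
    return snr * effective_mpx

# AIQ rises toward ~5 um at f/8, then falls as pixels shrink further
for p in (8, 5, 3, 2):
    print(p, round(clark_style_aiq(p)))
```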

Diffraction limits resolution, but it does not affect SNR, since the same photons are being collected; they are merely redistributed among the pixels by the diffraction phenomenon.  So when evaluating SNR in the diffraction-limited regime, we should do it over the area of the diffraction spot, combining the pixels that lie within the dominant support of the diffraction spot; this is what Clark fails to account for.  The signal will scale with the number of pixels, the noise with the sqrt of the number of pixels, and so the SNR of the diffraction spot will scale with the sqrt of the number of pixels within it.  So really we should think of the factors in diffraction-limited AIQ as

AIQ = const. x (SNR in a diffraction spot) x (diffraction limited resolution)^2

This definition will of course saturate as the sensor becomes diffraction limited at a particular f-number.  The SNR of a diffraction spot thus scales as the SNR of a pixel times the size of the diffraction spot in pixel widths.

I would also think that a slightly different measure along these lines would be more appropriate.  When one looks at an image one takes it in as a whole, and looks at features within the image.  The SNR component of AIQ should then be measured with respect to the fixed-size features being rendered by the camera; in other words, one should look at SNR per unit area and not the SNR of a pixel.  Two slightly different versions are possible -- either SNR per unit sensor area, or SNR over a fixed percentage of the frame area (depending on how one wants to treat crop factor).  Then, for the same reasons as above, the SNR per area scales with the sqrt of the megapixel count.  We then write

AIQ = const. x sqrt(full well electrons x megapixels) x sqrt(effective megapixels)
   = const. x (SNR per area) x (linear resolution)
   = const. x sqrt[gain x megapixels] x (linear resolution)

Where "megapixels" is either megapixels per area, or total megapixels depending on how one wants to think about crop factor.  Now when diffraction limitation arises, the first factor is unchanged, and the second factor is replaced by the diffraction limited linear resolution (in either line pairs per mm or line pairs per picture height, depending on how one wants to treat crop factor).  This modified definition also saturates properly as pixel size decreases.
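The modified definition can be sketched the same way (again with illustrative constants: f/8, 550 nm light, an arbitrary full-well density, and a two-pixels-per-Airy-disk resolution cap, all my assumptions): below the diffraction limit it goes flat instead of crashing.

```python
import math

def modified_aiq(pitch_um, f_number=8.0, full_well_per_um2=1500):
    """Toy model of AIQ = (SNR per unit sensor area) * (linear resolution)."""
    spot_um = 2.44 * 0.55 * f_number                      # Airy disk diameter
    full_well = full_well_per_um2 * pitch_um ** 2         # electrons per pixel
    pixels_per_mm2 = 1e6 / pitch_um ** 2
    # SNR per unit area = sqrt(full well * pixels per area): pitch-independent
    snr_per_area = math.sqrt(full_well * pixels_per_mm2)
    # Linear resolution in lp/mm, capped by diffraction (~2 px per Airy diameter)
    linear_res = min(1000.0 / (2 * pitch_um), 1000.0 / spot_um)
    return snr_per_area * linear_res

# Saturates (stays flat) once the pitch drops below the diffraction limit
print(round(modified_aiq(5.0)), round(modified_aiq(2.0)))
```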
« Last Edit: March 09, 2008, 10:19:44 pm by ejmartin »
Logged
emil

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
The more Mpx the better?
« Reply #4 on: March 10, 2008, 06:42:20 am »

Thanks a lot ejmartin, I will take some time to digest your post calmly.

Best regards!