There has been much discussion in this and other forums of how much resolution we need in our sensors. Erik Kaffehr started a thread about how much we can see. I'd like to come at it from another angle.
I have been reading a book by Robert Fiete entitled Modeling the Imaging Chain of Digital Cameras.
There’s a chapter on balancing the resolution of the lens and the sensor, which introduces the concept of system Q, defined as:

Q = 2 * fcs / fco
where fcs is the cutoff frequency of the sampling system (sensor), and fco is the cutoff frequency of the optical system (lens).
An imaging system is in some sense “balanced” when the frequencies are the same, and thus Q=2.
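The definition above can be sketched in a few lines of Python. This is just an illustration of the arithmetic, with all lengths in micrometers; the 6-micrometer example pitch (roughly a 24 MP full-frame sensor) is my assumption, not from the book.

```python
# Sketch of the system Q calculation from the definitions above.
# All lengths in micrometers; fcs and fco in cycles per micrometer.

def q_factor(pitch_um, wavelength_um, f_number):
    fcs = 1.0 / (2.0 * pitch_um)            # sensor cutoff (Nyquist)
    fco = 1.0 / (wavelength_um * f_number)  # optical cutoff (Sparrow form)
    return 2.0 * fcs / fco

# Example (my numbers): 6 um pitch, 0.5 um light, f/8
print(q_factor(6.0, 0.5, 8))  # about 0.67, well below the "balanced" Q = 2
```

A 2-micrometer pitch at the same wavelength and aperture returns exactly 2, matching the balanced case discussed below.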
The assumptions of the chapter in the book where Q is discussed are probably appropriate for the kinds of aerial and satellite surveillance systems the author works with, but they are not usually met in the photographic systems that most of us use. They are:
1) Monochromatic sensors (no CFA)
2) Diffraction-limited optics
3) No anti-aliasing filter
Under these assumptions, the cutoff frequency of the sensor is half the inverse of the sensel pitch; we get that from Nyquist.
To get the cutoff frequency of the lens, we need to define the point where diffraction prevents us from detecting whether we’re looking at one point or two. Lord Rayleigh came up with this formula in the 19th century:

R = 1.22 * lambda * N,
where lambda is the wavelength of the light, and N is the f-stop.
Fiete uses a criterion that makes it harder on the sensor, the rounded Sparrow criterion:

S = lambda * N

Or, in the frequency domain:

fco = 1 / (lambda * N)
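To put numbers on the two criteria, here is a small sketch of both separations at the f/8, 0.5-micrometer example used later in this post (lengths in micrometers):

```python
# Rayleigh and (rounded) Sparrow spot separations, per the formulas above.
# Lengths in micrometers.

def rayleigh_separation(wavelength_um, f_number):
    return 1.22 * wavelength_um * f_number

def sparrow_separation(wavelength_um, f_number):
    return wavelength_um * f_number

# At f/8 with 0.5 um light:
print(rayleigh_separation(0.5, 8))  # 4.88 um
print(sparrow_separation(0.5, 8))   # 4.0 um, so fco = 1/4.0 = 0.25 cycles/um
```

The Sparrow separation is smaller, which is why it is the tougher criterion for the sensor to keep up with.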
Thus Q is:

Q = lambda * N / pitch
I figure that some of the finest lenses that we use are close to diffraction-limited at f/8. If that’s true, for 0.5-micrometer light (in the middle of the visible spectrum), a Q of 2 implies:

Pitch = N / 4, with the pitch in micrometers
At f/8 that means we want a 2-micrometer pixel pitch, finer than that of any current sensor sized Micro Four Thirds or larger. A full-frame sensor with that pitch would have 216 megapixels.
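The pixel-count arithmetic works out like this (a sketch, assuming a 36 x 24 mm full-frame sensor):

```python
# Pixel count for a full-frame (36 x 24 mm) sensor at a 2 um pitch,
# the pitch that gives Q = 2 at f/8 with 0.5 um light.

sensor_w_mm, sensor_h_mm = 36.0, 24.0
pitch_um = 2.0

px_w = sensor_w_mm * 1000 / pitch_um   # 18000 pixels across
px_h = sensor_h_mm * 1000 / pitch_um   # 12000 pixels down
print(px_w * px_h / 1e6)               # 216.0 megapixels
```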
You can try to come up with a correction factor to take the Bayer array into account. Depending on the assumptions, that factor falls somewhere between 1 and a number greater than 2, but in any case the pixel pitch should be at least as fine as for a monochromatic sensor.
As an aside, note that you don’t need an AA filter for a system with a Q of 2, since the lens diffraction does the job for you. That’s not true with a Bayer CFA.
I have several questions for anyone who cares to get involved in a discussion:
1) Is any of this relevant to our photography?
2) Have I made a math or logical error?
3) At what aperture do our best lenses become close to being diffraction-limited?
4) What other questions should I be asking?
For details about the Sparrow criterion, click here
For more details on calculating Q, take a look here
For ruminations on corrections for a Bayer CFA, look at this