Myhrvold is obviously not an expert on the question of optical diffraction effects on digital sensors.
Obviously not. He seems to believe that the resolution resulting from a chain of two imaging devices (lens and sensor), each with its own resolution limit, is equal to the lower of the two. I'd call this the "weakest-link theory"---and this theory is wrong.
Actually, the resulting resolution R_res depends on the two input resolutions R_1 and R_2 like this:
1/R_res = 1/R_1 + 1/R_2
If this formula looks familiar to you---yes, it's the same one that describes the combined resistance of two parallel resistors.
What does this mean? Let's say we have a sensor that, due to its pixel pitch, can resolve up to 40 lp/mm. And we have three lenses that, for a given subject contrast, can resolve 40 lp/mm, 60 lp/mm, and 80 lp/mm respectively. Using the 40 lp/mm lens on the 40 lp/mm sensor seems like a good match, doesn't it? And using the better lenses on that sensor seems like a waste of resolving power, as the poor sensor cannot exploit it, right? Wrong! In fact, on the 40 lp/mm sensor, the 60 lp/mm lens will yield a sharper image than the 40 lp/mm lens, and the 80 lp/mm lens a sharper image still (albeit not twice as sharp as the 40 lp/mm lens).
Of course this works out the same the other way around. When using a lens that can resolve, say, 40 lp/mm, then a 60 lp/mm sensor will yield a sharper image than a 40 lp/mm sensor, and an 80 lp/mm sensor will yield a still sharper image (albeit not twice as sharp as the 40 lp/mm sensor).
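The numbers behind this claim can be checked in a few lines of Python. This is a simple illustrative model using the reciprocal rule above (real-world resolution combination via MTF curves is more involved, but this rule is a common approximation):

```python
def combined_resolution(r1, r2):
    """Combine two resolution limits (lp/mm) via the reciprocal rule:
    1/R_res = 1/R_1 + 1/R_2."""
    return 1.0 / (1.0 / r1 + 1.0 / r2)

sensor = 40.0  # lp/mm
for lens in (40.0, 60.0, 80.0):
    res = combined_resolution(sensor, lens)
    print(f"{lens:.0f} lp/mm lens on {sensor:.0f} lp/mm sensor -> {res:.1f} lp/mm")
# 40 lp/mm lens -> 20.0 lp/mm
# 60 lp/mm lens -> 24.0 lp/mm
# 80 lp/mm lens -> 26.7 lp/mm
```

Note that even the "matched" 40 lp/mm lens only yields 20 lp/mm on the 40 lp/mm sensor, while the 80 lp/mm lens pushes the system to about 26.7 lp/mm: better than the weaker lens, but nowhere near twice as good.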
So, the Myhrvold threshold of "f-stop equal to pixel spacing in microns" is a completely wrong concept---even when augmented by a corrective factor or two. Generally, the optimal f-stop correlates roughly with image size (among other things, the most important being lens quality) ... but not with pixel count or pixel pitch. All other things being equal, a higher pixel count will establish a higher overall resolution level and will make the degradation more obvious---but it will occur at the same aperture.
... observations of several users of the Nikon D2X, with 5.5 micron pixel spacing, say that diffraction starts to limit resolution at somewhere between f/8 and f/11.
With good lenses, this matches my own observations exactly. My D-SLR camera has only about half the D2X's pixel count (6 MP) and consequently a pixel pitch of 7.8 microns, which is 1.4× the pixel pitch of the D2X. So according to Myhrvold, I should see diffraction setting in at apertures one stop smaller than D2X owners do. But as a matter of fact, I see it at the same aperture as D2X owners do. And the only thing my camera has in common with the Nikon D2X is the image size, which is APS-C (form factor 1.5×, relative to 35-mm format).
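As a sanity check on those pitch numbers: for a fixed sensor area, pixel pitch scales with the inverse square root of pixel count. A quick back-of-the-envelope sketch (the 12.4 MP and 6.1 MP figures are my assumed nominal pixel counts for the two cameras):

```python
import math

d2x_mp = 12.4    # Nikon D2X pixel count in MP (assumed nominal value)
my_mp = 6.1      # the 6-MP camera's pixel count in MP (assumed nominal value)
d2x_pitch = 5.5  # D2X pixel pitch in microns

# Same sensor area: pitch ratio is the square root of the pixel-count ratio.
ratio = math.sqrt(d2x_mp / my_mp)
print(f"pitch ratio: {ratio:.2f}x -> {d2x_pitch * ratio:.1f} microns")
```

This reproduces the roughly 1.4× pitch ratio and the 7.8-micron pitch quoted above.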
With good or very good (but not exceptional) lenses on APS-C format (form factor 1.5× or 1.6×), diffraction typically starts to become visible---upon very close inspection!---at apertures between f/8 and f/11 ... no matter what the pixel count is. With 35-mm-format cameras, the limiting f-stop is somewhere between f/11 and f/16. With medium-format cameras, it's around f/22. And so on.
With exceptionally good lenses, the limit is reached at apertures one or maybe even two stops larger (i.e., smaller f-numbers).
However, don't let these facts keep you from stopping down beyond these limits whenever you need the depth of field! Overall image quality does not depend that much on the highest resolution at the plane of focus. If a composition depends on DOF, then give it all the DOF it needs (but no more)!
-- Olaf