Sorry if this is the wrong thread. I wanted to share this blog post:
http://mtfmapper.blogspot.it/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html
It seems he is able to make his models and measurements of the D40 and D7000 sensel/OLPF/lens combination fit quite well. So does Nikon use 0.375-pixel-pitch OLPFs?
Hi h,
It's an interesting model (which, BTW, doesn't account for residual lens aberrations, defocus, or a non-square sensel aperture), and it may fit a particular situation, but I'm not convinced it can be applied universally. It also doesn't account for the result after demosaicing, which is the basis for our Capture sharpening effort. However, as my tool shows for the cameras I've tested, and as others have independently found for their cameras, in actual empirical tests the simple Gaussian model still describes the actual blur of an edge profile (the Edge Spread Function, or ESF) very accurately:
The very slight mismatch at the dark end of the curve is caused by lens glare, not blur, and should be addressed with tone-curve adjustments, not Capture sharpening. So the blur of the entire imaging and Raw conversion chain can apparently be modeled very well by a simple Gaussian.
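To make that concrete: if the blur really is Gaussian, the ESF of a step edge has a known closed form, the error function. A minimal sketch (my own illustration, with an arbitrary sigma, not taken from the blog post) comparing a numerically blurred edge against that closed form:

```python
import numpy as np
from math import erf, sqrt

def gaussian_esf(x, sigma):
    """Closed-form ESF of a step edge blurred by a Gaussian of std dev sigma."""
    return 0.5 * (1.0 + erf(x / (sigma * sqrt(2.0))))

sigma = 0.7                            # illustrative blur radius, in pixels
x = np.linspace(-5, 5, 1001)
step = (x >= 0).astype(float)          # ideal edge
g = np.exp(-0.5 * (x / sigma) ** 2)
g /= g.sum()                           # normalized 1-D Gaussian PSF
esf_numeric = np.convolve(step, g, mode="same")
esf_closed = np.array([gaussian_esf(v, sigma) for v in x])

# Compare away from the array boundaries (convolution edge effects).
interior = np.abs(x) < 3
err = np.max(np.abs(esf_numeric - esf_closed)[interior])
print(err)  # small: the two agree to within discretization error
```

Fitting this erf shape to a measured edge profile, with sigma as the free parameter, is what recovers the blur radius from a slanted-edge shot.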
And there seems to be a theoretical explanation for that resemblance to a Gaussian-shaped blur pattern: the input (a cascade of blur sources) apparently comes close to satisfying the requirements of the Central Limit Theorem. Loosely formulated, it states that the sum of a number of independent random variables will tend toward a normal distribution (which is described by a Gaussian). The DSP Guide, a free on-line book about Digital Signal Processing, also has a nice example at the bottom of the linked page. It shows how rapidly a cascade of distributions (even ones far from Gaussian) converges to a Gaussian shape.
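A cascade of blurs corresponds to convolving their PSFs, so the convergence is easy to demonstrate. A small sketch (my own, not from the DSP Guide): convolve a uniform (box) distribution, which looks nothing like a Gaussian, with itself a few times and compare the result to a Gaussian of matching mean and variance:

```python
import numpy as np

box = np.ones(11) / 11.0       # uniform distribution: far from Gaussian

# Cascade of blurs: repeated convolution, as when several independent
# blur sources act in series.
pdf = box.copy()
for _ in range(3):
    pdf = np.convolve(pdf, box)  # now the 4-fold convolution of the box

# Gaussian with the same mean and variance, normalized to unit sum.
n = np.arange(len(pdf))
mean = (pdf * n).sum()
var = (pdf * (n - mean) ** 2).sum()
gauss = np.exp(-0.5 * (n - mean) ** 2 / var)
gauss /= gauss.sum()

max_diff = np.max(np.abs(pdf - gauss))
print(max_diff)  # already tiny after only four convolutions
```

Even with only four box distributions in the cascade, the result is nearly indistinguishable from a Gaussian, which is what makes the single-Gaussian model of the whole chain plausible.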
Besides the interesting theoretical model of the PSF shape of the OLPF+sensel, and the unknown exploitation of that input by the Raw conversion (which at least normalizes the MTF of the R/G/B channels despite their different sampling densities), we also have to consider that the only practical tool most people have in their workflow is the Sharpening panel of their Raw converter or image editor, which essentially offers only radius and amount as controls for the PSF shape to use. IMO it is therefore useful to determine as close a match to such a PSF shape as possible, and the formerly unknown blur radius is what we can now determine for that specific purpose.
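For readers unfamiliar with what those two controls do internally: most sharpening panels implement some variant of an unsharp mask, where radius sets the Gaussian blur scale and amount scales how much of the removed detail is added back. A generic sketch (not the code of any particular Raw converter):

```python
import numpy as np

def unsharp_mask(signal, radius, amount):
    """Generic unsharp mask: add back 'amount' times the detail that a
    Gaussian blur of standard deviation 'radius' removes."""
    half = int(4 * radius) + 1
    x = np.arange(-half, half + 1)
    g = np.exp(-0.5 * (x / radius) ** 2)
    g /= g.sum()                                # normalized Gaussian kernel
    blurred = np.convolve(signal, g, mode="same")
    return signal + amount * (signal - blurred)

# A soft test edge (smooth transition), then sharpened.
t = np.arange(101)
soft = 0.5 * (1 + np.tanh((t - 50) / 4.0))
sharp = unsharp_mask(soft, radius=2.0, amount=1.0)

# The sharpened edge has a steeper central transition than the original.
print(sharp[52] - sharp[48], soft[52] - soft[48])
```

This is why matching the radius control to the measured Gaussian blur radius matters: it aligns the mask's blur scale with the blur actually present in the capture.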
My tool turns out to be so sensitive that it can detect the difference between the left and right side of a horizontal edge when the target was not shot perpendicular enough to the optical axis. It also detects differences within the DOF zone, and shows that there is only one plane of best focus. Luckily we need not, and indeed cannot, specify the radius to that degree of precision in the common sharpening interfaces. It can still help with more elaborate deconvolution sharpening algorithms, which allow one to specify the PSF kernel and also to apply less sharpening to noise than to the actual signal, thus boosting the S/N ratio even further.
Cheers,
Bart