Camera: Canon EOS 7D
ISO speed: 100
Shutter: 1/1.3 sec
Aperture: f/7.0
Focal length: 116.0 mm
Lens: Canon 70-200mm f/4.0L IS
Image read using dcraw with no demosaic, no gamma, 16 bits:
>>dcraw -D -4 -T IMG_4315-1.CR2
Hi -h,
This means that the lens was used at an aperture where the resolvable detail is limited by the diffraction of a nominal f/7.1 aperture.
The Raw conversion settings kept the image in linear (gamma 1.0) 16-bit/channel space, but without white balancing.
Applied black-point/white-point assuming a flat spectrum. Estimated the circle center manually and drew a circle around it at a 92-pixel diameter. Included images are a central crop and a scaled-down version of the entire image. It is evident now that I had uneven lighting (light source at lower left).
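For what it's worth, the black/white-point step above amounts to a simple linear rescale of the raw values. A minimal sketch (the black/white-point values below are illustrative, not the ones measured from this capture):

```python
import numpy as np

def normalize_raw(raw, black_point, white_point):
    """Map [black_point, white_point] to [0.0, 1.0], clipping outside."""
    raw = raw.astype(np.float64)
    scaled = (raw - black_point) / (white_point - black_point)
    return np.clip(scaled, 0.0, 1.0)

# Toy 16-bit frame; real values would come from the dcraw TIFF output.
frame = np.array([[1000, 2048, 15000],
                  [16383, 8000, 500]], dtype=np.uint16)
norm = normalize_raw(frame, black_point=1024, white_point=15000)
```

Because the same scale is applied to all channels, this only doubles as white balancing if the patches used to pick the black/white points really are spectrally flat.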
I'll assume that this black/white-point setting took care of the white balancing.
What I don't get is where the elliptical distortion comes from. Hence, I can't comment on the suggested better-than-Nyquist performance.
Kind of curious whether the PSF can be estimated directly when you have both the analytic function that generated the reference output and the sensor reading. If a good estimate of the PSF under optimally focused conditions could be obtained for a given camera and a set of lenses, one might speculate how its AA-filter works and how ideal images should optimally be sharpened.
Well, you 'know' the input signal (the print of the target) and the output signal. It's possible to make a model to derive the PSF needed to accomplish that. Whether that is of any use depends on the tools one has to reconstruct the original input signal based on the PSF. An application that uses the user input of a PSF is required to use such info.
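To illustrate the idea (this is my own simplified sketch, not anyone's proprietary method): if the observed image is the ideal target convolved with the PSF, the PSF can be estimated by a regularized spectral division, with a small constant to avoid dividing by near-zero frequency components. All names and the regularizer value are assumptions:

```python
import numpy as np

def estimate_psf(ideal, observed, eps=1e-3):
    """Wiener-style estimate of the PSF from known input and output images."""
    X = np.fft.fft2(ideal)
    Y = np.fft.fft2(observed)
    # Regularized division: H ~= Y/X, damped where |X| is small.
    H = (Y * np.conj(X)) / (np.abs(X) ** 2 + eps)
    psf = np.real(np.fft.ifft2(H))
    return np.fft.fftshift(psf)  # center the PSF peak for inspection

# Synthetic check: blur a random "target" with a known compact PSF.
rng = np.random.default_rng(0)
ideal = rng.random((64, 64))
true_psf = np.zeros((64, 64))
true_psf[0, 0] = 0.5
true_psf[0, 1] = true_psf[1, 0] = 0.2
true_psf[1, 1] = 0.1
observed = np.real(np.fft.ifft2(np.fft.fft2(ideal) * np.fft.fft2(true_psf)))
est = estimate_psf(ideal, observed)
```

In practice the hard part is exactly what's noted above: registering the printed target to the capture accurately enough that the division is meaningful.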
What IMHO is probably the easiest approach to derive the PSF is to use the slanted-edge features of the target to approximate the horizontal/vertical Edge Spread Functions (ESFs). The slanted edges of the printed target need to be calibrated (using the gray scale) to the (presumed linear) gamma of your image. Then one needs to model a PSF that reproduces the observed slope of the oversampled ESFs, which could lead to a 2-dimensional model of the PSF that's reasonably close to reality, assuming there is no motion blur involved.
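The oversampling trick behind the slanted-edge method can be sketched roughly like this (my own minimal version with a known edge slope, not the full calibrated procedure): project every pixel of an edge crop onto its signed distance from the edge line, then bin at sub-pixel spacing to get an ESF sampled finer than the pixel pitch. The LSF, and from it one axis of the PSF, follows by differentiation.

```python
import numpy as np

def oversampled_esf(crop, slope, bins_per_pixel=4):
    """Oversampled ESF from a slanted-edge crop with known slope (px/row)."""
    rows, cols = crop.shape
    y, x = np.mgrid[0:rows, 0:cols]
    # Signed distance of each pixel from an edge through the crop center.
    dist = (x - cols / 2) - slope * (y - rows / 2)
    order = np.argsort(dist.ravel())
    d = dist.ravel()[order]
    v = crop.ravel()[order]
    # Bin distances at 1/bins_per_pixel spacing; average values per bin.
    edges = np.arange(d[0], d[-1] + 1 / bins_per_pixel, 1 / bins_per_pixel)
    idx = np.digitize(d, edges)
    return np.array([v[idx == i].mean() for i in np.unique(idx)])

# Synthetic slanted step edge (sharp, for demonstration).
rows, cols, slope = 32, 32, 0.1
y, x = np.mgrid[0:rows, 0:cols]
edge = (((x - cols / 2) - slope * (y - rows / 2)) > 0).astype(float)
esf = oversampled_esf(edge, slope)
```

On a real crop the edge angle would itself be fitted from the data, and the ESF would show the gradual transition whose width encodes the combined lens/AA-filter/sensor blur.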
I'm sorry that I can't reveal more explicit details (although all disclosed details were pertinent to the solution), but it took me a while to figure things out, and it resulted in a proprietary method that I probably want to patent/commercialise.
Cheers,
Bart