Bart, I've been doing some work with my camera simulator to try to get a handle on the "big sensels vs small sensels" question.
http://blog.kasson.com/?p=6078
Hi Jim,
I'll have to catch up on reading the latest developments. I've been following the DPReview discussion from a distance, so I do have some idea of the train of thought you're following.
I'm simulating a Zeiss Otus at f/5.6, and calculating diffraction at 450 nm, 550 nm, and 650 nm for the three (or four, since it's a Bayer sensor) color planes. The blur algorithm is diffraction plus a double application of a pillbox filter with a radius (in microns) of 0.5 + 8.5/f, where f is the f-stop.
This is where I'd have to seriously adjust my thinking to what you (or MATLAB) are doing exactly, which may take some time. What I generally do is take a luminance-weighted average of the (presumed) peak transmissions of the R/G/B CFA filters (450 nm, 550 nm, 650 nm), because luminance is what most good demosaicing algorithms optimize for (those algorithms are generally unknown, unlike in your case). With the weights R = 0.212671, G = 0.715160, B = 0.072169, that average comes to 564.05 nm, so I take that as input for a single two-dimensional diffraction pattern at 564 nm, and I then integrate that diffraction pattern over each sensel's aperture area, as determined by the fill factor (usually 100%, assuming gap-less microlenses).
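As a quick illustration, the weighted average is just this bit of arithmetic (Python; the peak wavelengths are the presumed CFA transmissions from above, not measured data):

```python
# Luminance (CIE Y / Rec. 709) weights and presumed CFA peak transmissions,
# both taken from the discussion above.
weights = {'R': 0.212671, 'G': 0.715160, 'B': 0.072169}
peaks_nm = {'R': 650.0, 'G': 550.0, 'B': 450.0}

lambda_y = sum(weights[c] * peaks_nm[c] for c in weights)
print(f"luminance-weighted wavelength: {lambda_y:.2f} nm")  # -> 564.05 nm
```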
The reason I reduce the problem to a single weighted-average luminance diffraction pattern is that deconvolution is usually performed on the luminance channel (e.g. CIE Y in PixInsight, or the L of LRGB in ImagesPlus). It also cuts the processing time to roughly one third of a separate R+G+B deconvolution cycle. I can understand that for your model you would need to keep separate diffraction patterns per CFA color.
A 2-D kernel that extends out to the third Bessel zero of the Airy disc pattern usually accounts for most (at least 93.8%) of the energy of the full diffraction pattern. This diffraction kernel is then used either as an image or as a mathematical PSF, depending on the input the deconvolution algorithm requires. Attached are two kernels in data form, one for a 2.5 micron pitch and one for a 1.25 micron pitch, both for a 100% fill factor, weighted luminance at 564 nm, and a nominal f/5.6 aperture.
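For what it's worth, here is a rough Python sketch of that construction. It is not the code that produced the attached files, just the same idea: sample the Airy intensity at 564 nm, area-average within each sensel aperture to approximate the 100% fill factor, and truncate at the third zero of J1:

```python
import numpy as np
from scipy.special import j1

LAMBDA_UM = 0.56405          # luminance-weighted wavelength, in microns
N_STOP = 4 * np.sqrt(2)      # nominal f/5.6 is really 5.65685
THIRD_ZERO = 10.1735         # third zero of the Bessel function J1

def airy_intensity(r_um):
    """Normalized Airy intensity at radius r_um (microns) from the center."""
    x = np.pi * r_um / (LAMBDA_UM * N_STOP)
    out = np.ones_like(x)                     # limit of (2*J1(x)/x)^2 as x -> 0
    nz = x != 0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out

def airy_kernel(pitch_um, supersample=16):
    """Airy PSF integrated over square sensel apertures (100% fill factor)."""
    r_max_um = THIRD_ZERO * LAMBDA_UM * N_STOP / np.pi   # ~10.3 um at f/5.6
    half = int(np.ceil(r_max_um / pitch_um))             # kernel half-width, in sensels
    n = (2 * half + 1) * supersample
    # Supersampled coordinates (microns); sensel centers at integer multiples of the pitch.
    c = ((np.arange(n) + 0.5) / supersample - (half + 0.5)) * pitch_um
    xx, yy = np.meshgrid(c, c)
    fine = airy_intensity(np.hypot(xx, yy))
    # Average the supersamples inside each sensel aperture (block mean).
    k = fine.reshape(2 * half + 1, supersample, 2 * half + 1, supersample).mean(axis=(1, 3))
    return k / k.sum()                                   # normalize to unit sum

for pitch in (2.5, 1.25):
    k = airy_kernel(pitch)
    print(f"{pitch} um pitch -> {k.shape[0]}x{k.shape[0]} kernel")  # 11x11 and 19x19
```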
I'm finding that a 1.25 um sensel pitch doesn't give a big advantage over 2.5 um with this lens.
Off the cuff, I'd assume that's due to the size of the diffraction pattern, which will dominate at such small pitches.
I'm wondering if things would be different with some deconvolution sharpening.
I assume they would, given the dominating influence of diffraction (plus fill-factor blur and/or an OLPF). How much can be restored remains to be seen and depends on the system MTF at the various spatial frequencies, but an Otus would significantly increase the probability of being able to restore something, especially on a high-dynamic-range sensor array.
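To put a rough number on that, one can compare the diffraction-limited MTF of an ideal circular aperture against each pitch's Nyquist frequency. This sketch uses the same assumptions as above (564 nm, f/5.65685) and ignores fill-factor and OLPF blur:

```python
import numpy as np

LAMBDA_MM = 564.05e-6                     # luminance-weighted wavelength, in mm
N_STOP = 4 * np.sqrt(2)
F_CUTOFF = 1.0 / (LAMBDA_MM * N_STOP)     # diffraction cutoff, ~313 cycles/mm

def diffraction_mtf(f_cyc_mm):
    """MTF of an ideal circular aperture; zero at and beyond the cutoff."""
    s = np.clip(np.asarray(f_cyc_mm, dtype=float) / F_CUTOFF, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))

for pitch_um in (2.5, 1.25):
    nyquist = 1000.0 / (2.0 * pitch_um)   # cycles/mm
    m = float(diffraction_mtf(nyquist))
    print(f"{pitch_um} um pitch: Nyquist {nyquist:.0f} cy/mm, diffraction MTF there = {m:.3f}")
```

With those numbers, at a 1.25 micron pitch the Nyquist frequency (400 cycles/mm) already lies beyond the diffraction cutoff of roughly 313 cycles/mm, which is consistent with the smaller pitch buying so little at f/5.6.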
Now that you know my lens blur characteristics, can you give me a recipe for a good deconvolution kernel? Tell me what the spacing is in um or nm, and I'll do the conversion to sensels, which will change depending on the pitch. I'd do it myself, but I have to admit I've just followed the broad outlines of this discussion, and would prefer not to delve much deeper at this point.
From what I remember of reading the earlier development of your simulation model, I'd have to assume that a Dirac delta function, convolved with your specific R/G/B CFA diffraction patterns and the subsequent pillbox filters, should provide an exact (for your model) deconvolution filter.
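In code terms, that construction would look something like the sketch below. It assumes a per-channel diffraction kernel such as airy_kernel() above, and uses your stated pillbox radius of 0.5 + 8.5/f microns, applied twice. The explicit delta is mathematically redundant (a delta convolved with K is K), but it mirrors the description:

```python
import numpy as np
from scipy.signal import convolve2d

def pillbox(radius_um, pitch_um, supersample=16):
    """Uniform disc ("pillbox") kernel, area-sampled on the sensel grid."""
    half = int(np.ceil(radius_um / pitch_um))
    n = (2 * half + 1) * supersample
    c = ((np.arange(n) + 0.5) / supersample - (half + 0.5)) * pitch_um
    xx, yy = np.meshgrid(c, c)
    fine = (np.hypot(xx, yy) <= radius_um).astype(float)
    k = fine.reshape(2 * half + 1, supersample, 2 * half + 1, supersample).mean(axis=(1, 3))
    return k / k.sum()

def model_psf(diffraction_kernel, f_stop, pitch_um):
    """Net model PSF: delta (*) diffraction (*) pillbox (*) pillbox."""
    delta = np.zeros_like(diffraction_kernel)
    delta[delta.shape[0] // 2, delta.shape[1] // 2] = 1.0
    box = pillbox(0.5 + 8.5 / f_stop, pitch_um)
    psf = convolve2d(delta, diffraction_kernel, mode='full')
    psf = convolve2d(psf, box, mode='full')   # first pillbox application
    psf = convolve2d(psf, box, mode='full')   # second pillbox application
    return psf / psf.sum()
```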
If one were to follow my approach as described above, for f/5.6 (actually 4*Sqrt[2] = 5.65685), at 564 nm, with a 100% fill factor, and for 2.5 micron and 1.25 micron sensel pitches (at infinity focus), I've added two data files with kernel weights per pixel. They need to be normalized to a sum total of 1.0, or converted to an image (e.g. with ImageJ: import as a text image and save to a 16-bit TIFF or a 32-bit float FITS), before being used as a deconvolution kernel.
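If you'd rather do that normalization and conversion outside ImageJ, a small Python helper like this does it (the filename is a placeholder for one of the attached files; tifffile is just one common TIFF writer):

```python
import numpy as np
import tifffile

kernel = np.loadtxt("kernel_2.5um_f5.6_564nm.txt")   # placeholder filename
kernel /= kernel.sum()                                # normalize to a sum of 1.0
tifffile.imwrite("kernel_2.5um_f5.6_564nm.tif", kernel.astype(np.float32))
```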
Warning: As this project progresses, I'll be asking for help on upsampling and downsampling algorithms as well. I hope I won't wear out my welcome.
No problem, as long as it's in a thread more appropriate to that subject.
Cheers,
Bart