Once I get PixelClarity's deconvolution capability fully functional (I'm currently finishing the PSF generation code, the database code needed to store and retrieve the PSF data, and the code needed to merge five dimensions' worth of data down to the three dimensions needed for a given image), I'm going to experiment with upsizing before deconvolution. I also have a few interesting theoretical thoughts regarding using a point spread function for interpolation vs. sinc:
Imagine you have a row of 100 point light sources, a lens, and a sensor with 100 photosites in a row, arranged so that if the lens had no aberrations, the light from each point source would be focused exactly on one and only one photosite on the sensor. In such an ideal arrangement, altering the intensity of any given point source would affect the output of one and only one photosite, completely independent of all the others. With a real-world lens and diffraction, this ideal is not achieved; altering the intensity of one point source will affect the outputs of several photosites.
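The thought experiment above can be sketched numerically. This is a minimal illustration, not PixelClarity code: I'm assuming a Gaussian-shaped PSF purely for demonstration, and modeling the lens as a matrix whose columns are each source's (non-negative) PSF:

```python
import numpy as np

# Sketch of the thought experiment: 100 point sources imaged onto 100
# photosites. A real lens spreads each source's light over several
# photosites; here we assume a Gaussian PSF just for illustration.
n = 100
x = np.arange(n)
sigma = 1.5  # assumed PSF width, in photosite units

# Column j of the system matrix is the non-negative PSF of source j.
psf = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
psf /= psf.sum(axis=0)  # each source's total light sums to 1

sources = np.random.default_rng(0).uniform(0.0, 1.0, n)
before = psf @ sources

sources[50] += 0.5  # increase the intensity of a single point source
after = psf @ sources

# Because every PSF weight is non-negative, no photosite output can
# decrease when a source gets brighter:
assert np.all(after >= before)
```

The key property is that the system matrix has no negative entries, so brightening any one source can only raise (or leave unchanged) each photosite's output.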
The issue I see with sinc, bicubic, and other interpolation algorithms is this: increasing a single value in a data series can cause neighboring interpolated values to decrease. But this is contrary to what happens in real life; increasing the intensity of a single point light source will increase the outputs of its associated photosite and some of its neighbors, but can never cause any photosite output value to decrease. Conversely, decreasing the intensity of a single point source will cause the outputs of its associated photosite and some of its neighbors to decrease, but can never cause any photosite output value to increase. Therefore, sinc and other common interpolation algorithms at times behave in a way that is opposite to the real world (increasing one input value can cause some interpolated values to decrease, and vice versa), and alternative methods should be investigated.
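A quick way to see this: sinc interpolation is linear, so raising sample k by some delta changes the interpolated value at position t by delta times sinc(t - k), and the sinc kernel has negative lobes. A small sketch (ideal, unwindowed sinc, for illustration only):

```python
import numpy as np

def sinc_interp(samples, positions):
    """Ideal (unwindowed) sinc interpolation of uniformly spaced samples."""
    k = np.arange(len(samples))
    return np.array([np.sum(samples * np.sinc(t - k)) for t in positions])

samples = np.zeros(21)
positions = np.linspace(0, 20, 201)  # 10x upsampling
before = sinc_interp(samples, positions)

samples[10] += 1.0  # raise one sample; no sample decreases
after = sinc_interp(samples, positions)

# Yet some interpolated values went DOWN, because sinc has negative lobes:
assert np.any(after < before)
print("largest decrease:", (after - before).min())  # a negative number
```

The same argument applies to bicubic and Lanczos, whose kernels also take negative values; any kernel with a negative lobe can produce this inversion, which is exactly where ringing and clipping artifacts come from.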
Given these principles, the thought that occurred to me is this: if one could devise a way to interpolate using a curve derived from the appropriate point spread function instead of the sinc function, one could simultaneously perform reconstruction/upsizing AND correct for lens blur, while reducing or completely eliminating clipping/ringing artifacts. Thoughts?
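For the interpolation half of the idea, here's a minimal sketch of what a PSF-derived kernel might look like. I'm assuming a Gaussian stand-in for a measured PSF; the point is that a non-negative, normalized kernel makes every interpolated value a convex combination of the samples, so it can never overshoot the data (no ringing or clipping):

```python
import numpy as np

def psf_interp(samples, positions, sigma=0.7):
    """Upsample using a non-negative PSF-shaped kernel instead of sinc.

    sigma is a placeholder for a measured PSF width; Gaussian assumed
    here purely for illustration.
    """
    k = np.arange(len(samples))
    # Non-negative kernel weights at each output position, normalized
    # so each output is a weighted average of the samples.
    w = np.exp(-0.5 * ((positions[:, None] - k[None, :]) / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ samples

samples = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
positions = np.linspace(0, 4, 41)
up = psf_interp(samples, positions)

# Outputs stay within the range of the inputs -- no over/undershoot:
assert np.all(up >= samples.min()) and np.all(up <= samples.max())
```

On its own this kernel low-passes the data rather than sharpening it; the interesting part of the proposal, as I understand it, is folding the inverse of the lens blur into the same reconstruction step so that upsizing and deblurring happen together.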