Sorry it has been a while; I had several trips and work was intense.
I did add a third example in my sharpening series: http://www.clarkvision.com/articles/image-restoration3/
It uses upsampling as Bart described, followed by deconvolution sharpening.
Some of the questions asked:
> I can't see how a star would be "smaller than 0% MTF".
While the star itself is extremely small, in the optical system it is imaged as a diffraction disk. My point was that deconvolution can resolve two stars that are closer together than the 0% MTF spacing. MTF is a one-dimensional description of the response of an optical system to a bar chart, and one cannot resolve the bars in a bar chart if they are spaced more finely than the 0% MTF cutoff. But one can resolve two-dimensional structures that are closer together than that cutoff.
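To illustrate the point, here is a small one-dimensional numerical experiment (a pure-NumPy sketch with illustrative numbers, not my actual software): two point sources spaced more closely than the blur can separate produce a single-peaked profile, yet Richardson-Lucy deconvolution with the known PSF recovers two distinct peaks.

```python
import numpy as np

def rl_deconvolve_1d(data, psf, num_iter):
    """Richardson-Lucy iteration in one dimension (pure NumPy sketch)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full_like(data, data.mean())
    for _ in range(num_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = data / np.clip(blurred, 1e-12, None)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

def count_peaks(y):
    """Number of strict local maxima in a profile."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

# Two point sources 10 samples apart, blurred by a sigma = 6 Gaussian.
# Two equal Gaussians only show a dip when separated by more than
# 2*sigma = 12, so the observed (blurred) profile has a single maximum.
scene = np.zeros(201)
scene[95] = scene[105] = 1.0
x = np.arange(-30, 31)
psf = np.exp(-x ** 2 / (2 * 6.0 ** 2))
observed = np.convolve(scene, psf / psf.sum(), mode="same")

restored = rl_deconvolve_1d(observed, psf, 1000)
```

The observed profile is single-peaked, while the restored one splits into two peaks near the true source positions; that separation of structure below the nominal resolution limit is exactly what a bar-chart MTF measurement does not capture.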
Bart, you say "I do not fully agree with your downsampling conclusions," and I would agree with you if one just downsamples bar charts. But as you say, "sharpening before downsampling may happen to work for certain image content (irregular structures)" is the key. Images of the real world are dominated by irregular structures, and I have yet to see in any of my images artifacts like those in your bar-chart examples. If I ever run across a pathological case showing artifacts like those in your downsampling examples, I'll change my methodology. So far I have never seen such a case in my images, and so far no one has met my challenge of downsampling first, then sharpening, and producing a better, or even equal, image to those I show in Figures 7e and 7f at: http://www.clarkvision.com/articles/image-restoration2/
Aren't your posts about upsampling, then deconvolution sharpening, then downsampling in conflict with saying no sharpening until you have downsized?
Fine_Art sharpened my crane image with Van Cittert. While I have read research papers on Van Cittert, it seems to produce results similar to Richardson-Lucy, so I have not explored it, mostly for lack of time; I figured I should try to master RL first. Your posted sharpening of the crane looks to me about like the unsharp mask results on my image-restoration2 web page (Figure 1), so I think you can push much further. As the image has high S/N, I would not be surprised if you could surpass my RL results in Figure 3b.
Also asked was: what is my strategy for choosing the PSF? Well, for star images it is simple: just choose a non-saturated star. For scenics and landscapes it is not so easy to find the right one, and one reads online (e.g. the Wikipedia page on deconvolution) that the problem is hopeless, so deconvolution is rarely applied to regular images. Well, it is not that hard either. Basically, I start large and work small. But there is rarely one PSF for the typical landscape or wildlife image, mainly due to depth of field; different parts of the image need different PSFs. I also don't worry about linearizing the data. I just use the image as it comes out of the raw converter with the tone curve applied, plus any other contrast adjustments I make. A Gaussian response function run through such a process is still reasonably modeled by a Gaussian, though a different one than would be derived from linear data.
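For the star case, the PSF width can be estimated directly from a non-saturated star cutout. Here is a rough sketch using intensity-weighted second moments (a simplification with made-up numbers; real star fields need proper background fitting and outlier rejection):

```python
import numpy as np

def psf_sigma_from_star(cutout):
    """Estimate a Gaussian PSF sigma from an isolated, non-saturated star
    cutout via intensity-weighted second moments (crude illustrative sketch)."""
    c = np.clip(cutout - np.median(cutout), 0.0, None)  # rough background removal
    ys, xs = np.indices(c.shape)
    total = c.sum()
    cy = (c * ys).sum() / total
    cx = (c * xs).sum() / total
    var = (c * ((ys - cy) ** 2 + (xs - cx) ** 2)).sum() / total
    return np.sqrt(var / 2.0)  # radial variance is 2*sigma^2 for a Gaussian

# Synthetic check: a sigma = 1.8 Gaussian "star" on a flat background.
yy, xx = np.indices((25, 25))
star = 100.0 * np.exp(-((yy - 12) ** 2 + (xx - 12) ** 2) / (2 * 1.8 ** 2)) + 10.0
sigma_est = psf_sigma_from_star(star)
```

A Gaussian kernel built with that estimated sigma then serves as the deconvolution PSF.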
So it is simple (I can see a need for a 4th article in my series): start with a large PSF, like a 9x9 Gaussian, and run a few iterations. If it looks good, run again with more iterations. If that starts producing ringing artifacts, it is an indication the PSF is too large and/or the iterations too many, so I back off on the iterations, then drop to a smaller PSF (e.g. 7x7) and start again. For a typical DSLR image made at, say, ISO 400, one should be able to go 50 to 100 iterations without significant noise enhancement. For a high S/N image, say ISO 100 with a camera having large pixels, 500 iterations may be possible. If the PSF is too small, one can do hundreds of iterations and not see any improvement in the image, so there is a maximum that sharpens without artifacts.

What I find in the typical image is that different parts respond better to different PSF sizes. So I put all the results into Photoshop as layers, with the original on top and increasing PSF size going down. I then erase portions of each layer to show the portion of the sharpened image that responded best to that deconvolution. For wildlife, I usually concentrate on the eyes first, then work outward, and I usually leave the background as the original unsharpened image.
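The iterate-and-back-off procedure can be sketched as follows (a minimal pure-NumPy Richardson-Lucy, with illustrative PSF sizes, sigmas, and iteration counts; not my production workflow):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized size x size Gaussian PSF (e.g. size=9 for a '9x9 Gaussian')."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def conv2_same(img, k):
    """'Same'-size 2-D convolution via zero-padded FFTs (odd-sized kernels)."""
    s = (img.shape[0] + k.shape[0] - 1, img.shape[1] + k.shape[1] - 1)
    full = np.fft.irfft2(np.fft.rfft2(img, s) * np.fft.rfft2(k, s), s)
    r0, r1 = k.shape[0] // 2, k.shape[1] // 2
    return full[r0:r0 + img.shape[0], r1:r1 + img.shape[1]]

def richardson_lucy(image, psf, num_iter):
    """Classic multiplicative RL update; stop early if ringing appears."""
    est = image.copy()
    psf_flip = psf[::-1, ::-1]
    for _ in range(num_iter):
        ratio = image / np.clip(conv2_same(est, psf), 1e-12, None)
        est = est * conv2_same(ratio, psf_flip)
    return est

# Stand-in scene: random sparse detail, blurred by a known 7x7 Gaussian.
rng = np.random.default_rng(0)
truth = rng.random((64, 64)) ** 4
image = conv2_same(truth, gaussian_kernel(7, 1.75))

# Run a ladder of PSF sizes; each result becomes one Photoshop-style layer,
# to be blended by hand so each region keeps the PSF that suits it best.
layers = {size: richardson_lucy(image, gaussian_kernel(size, size / 4.0), 50)
          for size in (9, 7, 5, 3)}
```

In practice the stopping point is visual: raise the iteration count until ringing appears, back off, then repeat with the next smaller kernel.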
For landscapes, it is usually simpler than for wildlife. Most of the image is usually pretty sharp, so I find the one PSF that works best.
I can see developing some examples would be nice.
The bottom line is that real-world images are variable, and no single formula works for multiple images, let alone for all parts of one image. This is also true of the simpler methods like unsharp mask. So I just try a few things, see what works well for an image, then push it until I see artifacts, then back off.