How does one implement output sharpening with deconvolution? A Gaussian PSF (point spread function) seems to work reasonably well for capture sharpening, but does it also apply to output sharpening?
Yes, while never perfect, a Gaussian approximation goes a long way. This does assume that the algorithm used performs a fairly straightforward interpolation. One can test for this by analyzing a slanted edge after interpolation, e.g. with my Slanted edge evaluation tool.
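To make the idea concrete, here is a minimal sketch of Gaussian-PSF deconvolution in Python with numpy, using a frequency-domain Wiener filter. This is an illustration of the general technique, not the method any particular product uses; the function names and the noise constant `k` are my own assumptions.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized 2-D Gaussian point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def wiener_deconvolve(image, psf, k=1e-3):
    """Frequency-domain Wiener deconvolution; k is a noise-dependent constant."""
    # pad the PSF to image size and shift its center to the origin
    padded = np.zeros(image.shape)
    h, w = psf.shape
    padded[:h, :w] = psf
    padded = np.roll(padded, (-(h // 2), -(w // 2)), axis=(0, 1))
    H = np.fft.fft2(padded)
    G = np.fft.fft2(image)
    # Wiener filter: conj(H) / (|H|^2 + k); k regularizes where |H| is tiny
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + k)))

# quick check: blur a synthetic square, then partially recover it
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
psf = gaussian_psf(15, 1.5)
padded = np.zeros(img.shape)
padded[:15, :15] = psf
padded = np.roll(padded, (-7, -7), axis=(0, 1))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))
restored = wiener_deconvolve(blurred, psf)
```

The noise constant trades restoration strength against noise amplification; pushing it too low is exactly how the artifacts discussed below get introduced.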
What radius and strength should one use with which algorithm, and how does one judge the results other than by the laborious process of making test prints at the final output resolution? It is generally agreed that one cannot judge the results on screen, since screen resolution is far below that of the printer.
I do not necessarily agree that it is impossible to judge the results on screen. One aspect can even be judged better on screen, zoomed in, and that is whether we are introducing artifacts, which are a clear sign that we're doing something wrong. When we sharpen a blurred signal to the point that we add new information that was not there in the original unblurred signal in front of the camera, we are introducing artifacts. The other aspect is indeed difficult to judge, and that is how much we must exaggerate, or rather pre-compensate for, losses that have yet to occur (e.g. ink diffusion), and at the viewing-distance scale.
A possible workflow that I suggest, until one has enough experience to judge some subject matter by eye, is to include a slanted edge or the above test images as a layer in Photoshop, or to process them exactly the same way (same resampling percentage) as our target image in Lightroom. That allows us to see objectively the combined effects of capture blur, resampling blur, and output sharpening, not obscured by the presence or absence of real image detail.
Frankly, I'm amazed that e.g. Lightroom doesn't do something like that under the hood, but rather seems to use Bicubic Smoother, which produces artifacts, and then apparently applies some of the PhotoKit tricks to hide those artifacts while boosting edge contrast.
It should be almost trivial to use a smart pattern that can be automatically analyzed after resampling to remove resampling blur. I've proposed one such pattern in an earlier post (see attached image). In fact, that pattern, or a similar one, can be used to determine the full 2-D PSF if it rides along through the entire processing workflow (behind the scenes; one could theoretically even include demosaicing and lens-correction effects).
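A sketch of how such an automatic analysis could work in principle, assuming a Gaussian blur model and numpy; the 1-D bar pattern and the grid search are my illustrative assumptions, not the actual proposed pattern. Since the reference pattern is known exactly, the blur that the pipeline applied can be recovered by finding the sigma whose blur best reproduces the observed result:

```python
import numpy as np

def blur_1d(signal, sigma, radius=25):
    """Convolve a 1-D signal with a normalized Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return np.convolve(signal, k / k.sum(), mode='same')

def estimate_sigma(reference, observed, candidates):
    """Grid-search the Gaussian sigma that best maps reference onto observed."""
    errors = [np.mean((blur_1d(reference, s) - observed) ** 2) for s in candidates]
    return candidates[int(np.argmin(errors))]

# known test pattern: alternating 5-pixel bars, then blurred by the 'pipeline'
pattern = np.tile([0.0] * 5 + [1.0] * 5, 20)
observed = blur_1d(pattern, 2.0)
sigma_hat = estimate_sigma(pattern, observed, np.arange(0.5, 4.01, 0.25))
```

The recovered sigma can then drive the deconvolution radius automatically, which is the point of letting the pattern ride along with the workflow.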
A plugin like Topaz Labs InFocus already allows one to 'Estimate' the blur PSF and deconvolve the image, and then offers the possibility to add some more regular sharpening to amplify the effect or pre-compensate for output-medium losses. It unfortunately doesn't allow saving the estimated settings for use on other images (I'm hoping version 2 will allow such things), but one can use the Generic setting as a simpler alternative (although for output sharpening a larger maximum radius and more efficient memory management would be helpful). FocusMagic also works well with such an approach, where one determines the optimal settings on a proxy image that has undergone the same resizing, and applies the same deconvolution to the actual image.
In developing PhotoKit Sharpener, Bruce Fraser and his colleagues reportedly made literally hundreds of prints to judge the effects of various parameters. With regard to PhotoKit, one can simulate the workflow (e.g. Glenn Mitchell, TLR), but the PhotoKit people say that such efforts lack the magic numbers (sharpening parameters) required for optimal results.
Well, there is an aspect of smoke and mirrors to that, although experience is also a component. The fact that they needed hundreds of prints may also signal a lack of fundamental insight, which is understandable when one is unfamiliar with Digital Signal Processing (DSP) theory. Of course, it is also a fact that we now have the computing power to do many of these things nearly in real time, power that was not available when these more traditional edge-contrast and halo-masking tools were developed.
Although the two-pass (with optional creative sharpening) workflow of Fraser et al. is widely accepted in the photographic community, you mentioned in a previous post that one could skip capture sharpening and sharpen only the final image. An interesting concept, but how would one implement such a workflow?
One can use the method mentioned above, with a proxy image serving as a guide. Mathematically it makes little difference whether one cascades multiple deconvolutions with smaller Gaussian PSFs, or uses a single deconvolution with a larger Gaussian PSF (see the Associative Property, which explains that "any number of cascaded systems can be replaced with a single system"). That's also one of the nice properties of Gaussians. So, besides having to cope with additional resampling artifacts, it's pretty much the same thing, but without cumulative rounding errors and halos building up into larger halos. One can even use resampling algorithms like those in Benvista's PhotoZoom Pro, which add (convincing but fake) resolution to edges.
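That cascade-equals-single-system property of Gaussians is easy to verify numerically: convolving two Gaussian kernels yields, up to sampling and truncation error, one Gaussian whose sigma is the root-sum-square of the two. A small numpy check, with the sigmas chosen purely for illustration:

```python
import numpy as np

def gauss_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel sampled on [-radius, radius]."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

# two cascaded blurs of sigma 1.5 and 2.0 ...
cascaded = np.convolve(gauss_kernel(1.5, 40), gauss_kernel(2.0, 40))
# ... behave like a single blur with sigma = sqrt(1.5**2 + 2.0**2) = 2.5
single = gauss_kernel(np.hypot(1.5, 2.0), 80)
```

The same relation holds in reverse, which is why a single larger-sigma deconvolution can stand in for a chain of smaller ones.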