Dear Bart,
I've read your comments on the above process a few times, and considering your expertise I'd love to try it out. However, with all the steps and possible options within them, it's a little confusing to me. I'm sorry to bother you, but could you provide us with a step-by-step tutorial on this process, assuming we already have the correct software installed?
Hi, no problem.
1. Take an unsharpened Raw conversion. No sharpening artifacts can exist, because none are created.
2. Up-sample to e.g. 300%, preferably with a high-quality algorithm that creates few artifacts. While not perfect, Bi-cubic Smoother will often do well enough (although it does produce some halo), especially if the image was not sharpened beforehand.
3. Use a deconvolution sharpener: Photoshop's Smart Sharpen, Topaz Labs InFocus, or FocusMagic. Expect to use a radius that is also 3x larger than you would use otherwise.
4. Down-sample to the original image dimensions; regular Bi-cubic will do, but better algorithms are always beneficial.
5. Add a final bit of very small radius deconvolution sharpening to offset the down-sampling blur.
In my case, I have all the software you discuss and am shooting Nikon D800e, Sigma DPM series, Ricoh GR.
Why does up-sampling, deconvolution sharpening, and then down-sampling again give superior results over simply deconvolution-sharpening the original file at its original size?
First of all, we have to consider that this would neither help nor be necessary if we had a perfect model of the blur PSF and high-precision data values. If we did have such a PSF and image data, convolution with that PSF would be exactly reversible by deconvolution with the same PSF. We also need to understand that our original image is not a perfect representation of the projected image. Our image is stored on a rectangular grid of samples that are usually displayed as little squares.
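To make the "exactly reversible" point concrete, here is a small numpy demonstration: with circular (FFT-domain) convolution, a perfectly known PSF, and full floating-point precision, dividing by the PSF's transfer function recovers the original image essentially exactly. Real images fail all of these assumptions (unknown PSF, finite precision, noise), which is why practical deconvolution can only approximate this.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Blur via circular convolution in the frequency domain with a known PSF
# (a small wrapped box blur), so the operation is exactly linear.
psf = np.zeros((64, 64))
psf[:3, :3] = 1.0 / 9.0
H = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# With the exact PSF and full precision, dividing by the transfer
# function undoes the blur essentially perfectly.
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / H))
print(np.allclose(restored, img))  # True
```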
What we are actually looking at is a grid with brightness samples taken at those grid positions, more like 'blobs of brightness' than squares. By interpolating we create an image that looks more like those blobs, with smooth transitions between them. We usually do not add resolution, so we need not worry too much about adding aliasing when we down-sample at a later stage.
When we deconvolve this up-sampled representation we also have more intermediate brightness positions available to model the transitions between the original samples in a smooth way. It is also easier to see if we apply too much deconvolution, because we are looking at larger areas with information that should remain somewhat smooth, not blocky.
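A tiny illustration of the "more intermediate brightness positions" point: up-sampling a hard step by 3x with a cubic spline (a rough analogue of Bi-cubic resampling) fills in transition values that the deconvolution can then shape smoothly.

```python
import numpy as np
from scipy.ndimage import zoom

# A hard step edge, up-sampled 3x: the transition now contains
# intermediate brightness values instead of a single abrupt jump.
edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
big = zoom(edge, 3, order=3)
print(big.round(2))
```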
Deconvolution will affect all spatial frequencies, including lower ones (giving more punch to the entire image, a sort of clarity at the feature-size level, not just overall contrast), but the greatest effect will be where the (original + up-sample) blur matches the deconvolution PSF. Since we can push the deconvolution pretty far on the enlarged image without creating immediately noticeable artifacts, we can get more effective results (also at lower spatial frequencies).
When we now down-sample again, the risk of introducing down-sampling artifacts is pretty low, because we have not really created resolution in excess of what we originally started with (we're still under-sampled), but the detail of the original image has been restored to a higher level. Down-sampling with e.g. Bi-cubic will still blur the image a bit, but it will be sharper and more accurate than what we started with. To remove that down-sample blur we can do a final small-radius sharpening.
This procedure works best on well-behaved original image data, AA-filtered and not yet sharpened. Using 16-bit/channel data helps, but having more interpolated pixels also benefits originals with only 8 bits per channel.
As always, one can combine this procedure with a Luminosity blend mode layer and Blend-If settings, applied just to the sharpening. That will reduce the effect of noise, and allows the use of masks, e.g. to mask out smooth sky regions. Just do the up-sample/sharpen/down-sample on a copy of the image, and use the result as a layer. Before down-sampling, one can disable the layer, because the sharpening would only add to the risk of aliasing upon down-sampling.
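Photoshop's Luminosity blend mode plus Blend-If can't be reproduced exactly in a few lines, but here is a rough numpy sketch of the same idea applied directly: sharpen only the luminance, and gate the result with a detail mask (a stand-in for Blend-If thresholds or a hand-painted sky mask) so smooth regions stay untouched. The unsharp mask here stands in for the full up/down-sample deconvolution result, and the threshold value is arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen_luminosity_only(rgb, threshold=0.02):
    # Rough luminance (Rec. 709 weights).
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])
    # Stand-in for the up/down-sampled deconvolution result: a plain
    # unsharp mask on the luminance only.
    sharp = lum + 1.5 * (lum - gaussian_filter(lum, 1.0))
    # Detail mask playing the role of Blend-If / a sky mask: smooth
    # regions get zero sharpening, detailed regions the full effect.
    detail = np.abs(lum - gaussian_filter(lum, 2.0))
    mask = np.clip(detail / threshold, 0.0, 1.0)
    new_lum = lum + mask * (sharp - lum)
    # Scale all channels together so only luminosity changes.
    scale = np.where(lum > 1e-6, new_lum / lum, 1.0)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```

A completely flat (smooth) image passes through unchanged, which is the Blend-If behaviour we want for skies.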
Cheers,
Bart