Hi folks,
We have great Raw conversion and sharpening tools at our disposal, but it is not always clear which settings will objectively lead to the best results. Human vision is easily fooled, e.g. by differences in contrast, so finding the optimal settings by eye is not easy. Especially for 'Capture Sharpening' it is important to get the settings as accurate as possible. When we don't sharpen enough, we leave image quality on the table; when we overdo it, we face the consequences. If we then produce large-format output, or need to crop heavily, we may discover distracting halos, because at a larger output magnification our eyes have an easier task distinguishing between actual detail and artifacts.
Regardless of the exact sharpening method used, one of the parameters is usually a radius setting, which controls how wide an area around each pixel influences that central pixel's brightness, and thereby how much contrast is added to the local micro-detail. Ideally we only want to restore the image's sharpness as it was before it got blurred by lens aberrations, diffraction, the AA-filter, Raw conversion, etc. Creative sharpening is considered by many to be a separate process, best applied locally.
The radius control is the most important one to get right, regardless of the sharpening method we use. The actual sharpening method may influence the amount we need to apply, but the radius is pretty much a physical given for a certain lens and sensor combination.
Now, wouldn't it be nice to have a tool to objectively determine that optimal radius setting?
Well, now there is such a tool: the 'Slanted Edge evaluation' tool. It makes use of the 'slanted edge' features that can be found in a number of test charts (such as the one I proposed here).
I've made it a web-page based tool, so it also works on modern smartphones, and it makes it possible to objectively determine that optimal sharpening radius. Unfortunately, a basic HTML web page cannot read and write arbitrary user-selected image files on the client side, so some manual input is required, e.g. of pixel brightness values. But it's a free tool, so who could complain? You could try asking for your money back if you don't like it; with enough support I might actually make a commercial version available, we'll see.
This new tool works by fitting a model of the blur pattern. That model is essentially based on the shape of a Gaussian bell curve, which has a pretty good overall correspondence with the more complex Point Spread Function (PSF). Such a PSF is a mathematical model which not only characterizes the blur pattern, but also makes it possible to invert the blur effects and restore the original sharp signal.
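For those curious about the underlying idea, here is a rough Python sketch of how a Gaussian blur width can be estimated from a slanted-edge measurement. This is my own illustration, not the tool's actual code, and the function names are invented: the edge spread function (ESF) of a step edge blurred by a Gaussian PSF is the Gaussian CDF, and differentiating it gives a line spread function (LSF) whose standard deviation is the blur sigma.

```python
import math
import numpy as np

def esf_from_edge(x, sigma):
    # An ideal step edge blurred by a Gaussian PSF: the edge spread
    # function (ESF) is the Gaussian cumulative distribution function.
    return np.array([0.5 * (1.0 + math.erf(v / (sigma * math.sqrt(2.0))))
                     for v in x])

def estimate_blur_sigma(x, esf):
    # Differentiate the ESF to get the line spread function (LSF),
    # then take the LSF's standard deviation as the Gaussian sigma.
    lsf = np.gradient(esf, x)
    lsf = np.clip(lsf, 0.0, None)
    lsf /= lsf.sum()
    mean = (x * lsf).sum()
    return math.sqrt((((x - mean) ** 2) * lsf).sum())

x = np.linspace(-10, 10, 401)        # sample positions, in pixels
esf = esf_from_edge(x, sigma=1.3)    # synthetic edge with known blur
print(round(estimate_blur_sigma(x, esf), 2))  # → 1.3
```

In practice the tool works from measured pixel brightness values along a real slanted edge, which is noisier than this synthetic case, but the principle is the same.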
Actually, those who use the Imatest software already have a great way to simplify the data collection, because it can analyse image files directly, even Raw files. Part of the trick is in figuring out how to interpret its output, and convert it into input for this tool.
However, this new tool continues where most analysis tools stop: it not only gives feedback in the form of a (Capture) sharpening radius to use, but it also lets you produce a discrete deconvolution kernel based on the prior analysis. There are free tools available on the internet (e.g. ImageJ, or ImageMagick) that accept such a kernel and let you apply deconvolution sharpening to images that were blurred in the same way (same lens, aperture, and Raw conversion) as the test file that was used to determine the kernel.
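To give an idea of what generating such a kernel might involve, here is a small numpy sketch, again my own illustration rather than the tool's code: it inverts a Gaussian blur of a given sigma in the frequency domain, with a Wiener-style regularization term to keep noise amplification bounded, and returns a small discrete kernel.

```python
import numpy as np

def gaussian_kernel(sigma, size):
    # Discrete, normalized 2-D Gaussian, centred in a size x size array.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def deconvolution_kernel(sigma, size=7, eps=0.05):
    # Invert the Gaussian in the frequency domain; eps is a Wiener-style
    # regularization constant that limits amplification of noise.
    g = gaussian_kernel(sigma, size)
    G = np.fft.fft2(np.fft.ifftshift(g))      # move the centre to the origin
    inv = np.conj(G) / (np.abs(G) ** 2 + eps)
    k = np.fft.fftshift(np.real(np.fft.ifft2(inv)))
    return k / k.sum()                        # preserve overall brightness

k = deconvolution_kernel(sigma=0.7)
print(k.shape, round(k.sum(), 3))  # → (7, 7) 1.0
```

The resulting numbers could then be written out as an explicit kernel for, e.g., ImageMagick's `-morphology Convolve` option, or pasted into ImageJ's convolution dialog.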
How to use the results of the analysis?
The easy way is to copy the optimal radius from the analysis into your sharpening tool. You can then optimize the other parameters, knowing that any resulting artifacts are caused by overdoing the amount or other settings. Likewise, when the resolution drops after adjusting the other parameters, you'll know that you are applying too much noise reduction or too strong a mask. Just re-analyze the same test image after the additional processing and compare the results if you want an objective verdict.
A more advanced use of the analysis is to create a deconvolution filter kernel from the blur radius parameter and use that kernel to deconvolve the image, or similar images (same lens, aperture, camera, and Raw processing). One can also re-analyse the test image after an initial deconvolution, and determine whether one or more subsequent runs with a different filter kernel further improve the result. They will, if the original blur is a combination of different but similarly strong sources of blur.
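A quick back-of-the-envelope illustration of that last point, assuming the individual blur sources are roughly Gaussian: independent blurs combine in quadrature, so two modest contributions add up to a noticeably larger total, which is why a single pass tuned to one source can leave measurable residual blur. The sigma values below are purely hypothetical.

```python
import math

# Independent Gaussian-like blur sources (e.g. diffraction and the
# AA-filter) combine in quadrature: sigma_total = sqrt(s1^2 + s2^2 + ...).
sources = [0.7, 0.7]  # hypothetical sigmas, in pixels
combined = math.sqrt(sum(s * s for s in sources))
print(round(combined, 2))  # → 0.99
```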
I will be adding some before/after examples of what can be achieved with the analysis results, but feel free to experiment with it and ask questions about how to use it for specific situations.
Cheers,
Bart