Hi!
I have so far delayed sharpening until after (up)scaling. Theoretically, I know that I should capture-sharpen as early as possible. That would then be in the raw converter, right? And I assume it should be done as pure deconvolution, since everything else is not 'real' but just cheats the eye. I have also understood that capture sharpening should only restore the degradation caused by the capture process *and nothing more*.
So how do I know "nothing more"?
Hi Hening,
It's not just the amount, but also the shape of the blur function we should try to estimate correctly. Fortunately for us, when we have a combination of blur sources, e.g. optical blur + demosaicing blur, then the shape of the blur function (the Point Spread Function, or PSF) starts looking like a Gaussian blur. So, as a first try, dialing in the correct width of the Gaussian blur should get us on the right track. Of course, if there is also a certain amount of Diffraction blur (with a different shape from Gaussian, i.e. an Airy disk shape) then the Gaussian shape slowly starts looking more like an Airy shape as the amount of diffraction blur starts to dominate. If there is defocus blur, with yet another shape, the PSF will gradually start looking more like a defocus blur (a convolution with a more or less circular aperture, the Circle of Confusion).
So, it's the blur shape, and then the amount of it that matters when counteracting its effects.
If we nail both of them, and can deconvolve with that, then we have 'enough'.
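The tendency of stacked blurs to look Gaussian is a consequence of the central limit theorem, and it is easy to demonstrate numerically. A minimal 1-D sketch, with arbitrary box and triangle kernels standing in for the individual blur sources (the kernel choices are illustrative assumptions, not measured PSFs):

```python
import numpy as np

# Stand-ins for individual blur sources (not measured PSFs):
box = np.ones(5) / 5.0            # e.g. a crude demosaicing blur
tri = np.convolve(box, box)       # triangle kernel, e.g. optical blur
tri /= tri.sum()

# Chain several blur sources into one combined PSF
psf = box.copy()
for k in (tri, box, tri):
    psf = np.convolve(psf, k)
psf /= psf.sum()

# Compare the combined PSF with a Gaussian of the same standard deviation
x = np.arange(len(psf)) - (len(psf) - 1) / 2.0
sigma = np.sqrt(np.sum(psf * x**2))
gauss = np.exp(-x**2 / (2 * sigma**2))
gauss /= gauss.sum()

print("max deviation from Gaussian:", np.abs(psf - gauss).max())
```

After only a handful of convolutions the combined kernel is already very close to a true Gaussian, which is why a Gaussian is such a good first guess for the PSF.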
And at what magnification should I view? At 100%, I don't see much. Everything beyond that is determined by the scaling algorithm, which I have learned makes a BIG difference.
Correct, so we should look at a magnified view, and use an algorithm that doesn't change the edge contrast of sharp transitions between light and dark edges and lines. Edges and lines are visually easy to predict most of the time, and when we use Nearest Neighbor interpolation we are guaranteed to not change the edge transitions, just make them larger and easier to see. Many applications do the right thing for this purpose, when we zoom in beyond 100% magnification.
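That pixel-preserving property of Nearest Neighbor is easy to verify: it only repeats pixels, so the values across an edge (and hence its contrast) are untouched. A tiny sketch with a hypothetical two-tone edge, magnified 4x as in a 400% view:

```python
import numpy as np

# A hypothetical hard edge: dark (10) against light (200)
edge = np.array([[10, 10, 200, 200],
                 [10, 10, 200, 200]], dtype=np.uint8)

zoom = 4  # a 400% view
# Nearest Neighbor magnification = repeating each pixel in a zoom x zoom block
magnified = np.kron(edge, np.ones((zoom, zoom), dtype=np.uint8))

# No new pixel values are invented, so edge contrast is unchanged
assert set(np.unique(magnified)) == set(np.unique(edge))
print(magnified.shape)  # (8, 16)
```

A bicubic or Lanczos preview, by contrast, would introduce intermediate (and possibly over-/undershooting) values at the transition, confusing any judgement of sharpening halos.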
I pick an image shot with a CY Planar at f/11, and choose a central area, so as not to be misled into oversharpening.
Well done, since we do not want to oversharpen (or use the wrong settings) the center based on regions of the image that are likely to exhibit some more optical degradation going from the center towards the edges and corners of the image. Of course, unless we specifically focused on something off-center, and there is nothing in the center that's in better focus.
My raw converter, RawTherapee, offers RL Deconvolution. I leave the amount, damping, and iterations at their defaults (100, 0, and 30, respectively) and just play with the radius. The minimum is 0.40.
Richardson-Lucy (RL) deconvolution is still an excellent compromise to use as a deconvolution algorithm, assuming that the PSF is somewhat Gaussian in shape, and the image was formed by photons (i.e. somewhat noisy, by the square root of the intensity). This does assume that the image was shot at native sensor ISO, and not much other noise than photon/Poisson noise was present. This also assumes that the deconvolution is preferably done at linear gamma, just like the exposure was captured.
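For the curious, the core RL iteration is surprisingly short. A minimal 1-D sketch in NumPy, assuming a symmetric Gaussian PSF and noiseless linear data (RawTherapee's actual implementation and its parameter mapping will differ):

```python
import numpy as np

def gauss_kernel(sigma):
    """Normalized 1-D Gaussian kernel, truncated at ~3 sigma."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def richardson_lucy(observed, sigma, iterations=30, eps=1e-7):
    """Basic RL deconvolution with an assumed Gaussian PSF."""
    psf = gauss_kernel(sigma)
    estimate = observed.copy()
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)          # eps avoids division by zero
        # The mirrored PSF equals the PSF itself, since a Gaussian is symmetric
        estimate = estimate * np.convolve(ratio, psf, mode='same')
    return estimate

# Demo: blur a sharp bar, then try to restore it
truth = np.zeros(64)
truth[28:36] = 1.0
observed = np.convolve(truth, gauss_kernel(1.0), mode='same')
restored = richardson_lucy(observed, sigma=1.0)
print(np.abs(observed - truth).mean(), np.abs(restored - truth).mean())
```

The multiplicative update keeps the estimate non-negative, which is one reason RL behaves well on photon-limited (Poisson) data.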
I start at 100% view, look at one of the very finest branches which is supposed to be in focus and arrive at a radius of 0.65.
Then I switch to 400% and look at some other very fine branches that stand out against the sky, where I expect halos to appear. But these branches are slightly out of best focus to begin with. Some of them have a darker central part and almost-white (but not clipped) edges before sharpening. The difference increases with the radius, see screen shots.
Now here things get (more) tricky.
First, we may be facing aliasing artifacts. Anti-Aliasing filters (or Optical Low Pass filters, OLPF) on the sensor, if any, are usually not strong enough to prevent all of the aliasing artifacts (because that would blur the captured image, potentially more than we can restore with deconvolution). In your example case though, f/11 also adds a significant amount of diffraction blur.
Second, the edges of backlit branches may indeed be lighter, because backlight rays produce more specular reflection at shallow angles; they reflect more than they are absorbed/diffused. Only a razor's edge can avoid this.
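As a quick sanity check on the diffraction point: the diameter of the first Airy minimum is 2.44 x wavelength x f-number, so at f/11 with green light (~550 nm) the diffraction spot already spans several pixels on most sensors:

```python
# Back-of-envelope diffraction check at f/11
wavelength_um = 0.55            # green light, ~550 nm
f_number = 11
airy_diameter_um = 2.44 * wavelength_um * f_number
print(round(airy_diameter_um, 2))  # ~14.76 micron
```

Compared to typical pixel pitches of 4 to 6 micron, that is a substantial blur contribution, which is why the PSF at f/11 drifts away from a pure Gaussian toward an Airy shape.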
In my experience, mostly with AA-filtered images, an optically optimal aperture (often f/4 to f/5.6) will produce a Gaussian type of blur with a radius of about 0.7. Slightly less if perfect optics and perfect focus are achieved (and no camera shake or subject motion is involved). It's technically very hard to get better focus than that, which would require smaller radii.
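That ~0.7 figure is also plausible because independent, roughly Gaussian blur sources combine in quadrature: convolving Gaussians of widths s1 and s2 yields a Gaussian of width sqrt(s1^2 + s2^2). The numbers below are illustrative assumptions, not measurements:

```python
import math

# Hypothetical per-source blur radii (in pixels), just for illustration
optics = 0.5       # lens blur at an optimal aperture
demosaic = 0.5     # demosaicing / AA-filter contribution

combined = math.sqrt(optics**2 + demosaic**2)
print(round(combined, 3))  # ~0.707
```

So even two modest half-pixel blurs already land near the 0.7 radius quoted above, and any additional source (defocus, shake) only pushes the combined radius higher.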
Is any of these radii the right one? Is this the way to determine it?
The 'best' radius seems to be slightly smaller than expected. That suggests something is not quite as expected.
It takes a bit of practice, but one needs to look at:
1. Edge halos that start showing up as the radius increases. Branches against a bright sky can have specular reflections at the top of the branch, but less likely at the bottom edge.
2. At slanted edges, whether the stair-stepped edge profile is monotonically increasing/decreasing, or over-/undershoots as we follow the edge. Straight edges are easiest to judge.
I determined the sharpening radius before any other edits. In this case, I would then apply a contrast of +30 in RT, which will of course increase the visibility of the halos. So maybe the contrast should be applied before sharpening?
Capture Sharpening is typically done before other tonal adjustments, to preserve the photon/noise nature of light, in linear gamma. How, and in what order, the Raw converter processes the adjustments, is something else.
This little experiment makes me feel I was well advised to delay sharpening - but my feeling may be as wrong as my visual judgement... Or maybe I should just apply the minimum radius as a standard, avoiding all the hassle, and continue to defer the rest until after scaling?
The latter. Scaling also benefits from proper (but not overdone) Capture sharpening. If, for whatever reason, your Capture sharpening produces artifacts, dial down the settings. We do not want to magnify artifacts.
Cheers,
Bart
P.S. New developments in Sharpening applications offer different approaches. A recent application that deserves some attention, is Topaz Sharpen AI. That application takes (unsharpened) images, analyses the structure, and replaces the details with trained sharper structures. In many cases, this offers superior results, without sharpening halos, especially when also (subject or camera) motion blur was involved.