The more I use Topaz Detail 3, the more impressed I am. Thank you, Bart, for bringing it to my attention. One of the things it does really well is avoid blowing highlights and clipping blacks, which is always a consideration with sharpening.
Hi Jim,
You're welcome. It's an amazingly powerful plugin, and there is a lot more it does under the hood (like reducing unnatural-looking color shifts due to contrast changes) to protect the innocent against themselves ...
I am now involved with a project for which I need to sharpen in one dimension only. I'm writing the code in Matlab to do that. I'm doing some aggressive sharpening, so I'm running into blown highlights and clipped shadows. I'd like to pick everyone's collective brains on what to do about that.
This suggests that you are pushing things far beyond mere resolution restoration, which is fine if the situation calls for it, but indeed requires unconventional measures to reduce the unwanted effects.
One approach that I use is based on the actual differences between the original and the sharpened data. As "TylerB" also suggested, it relies on blending between the two layers. In Photoshop it can be approached with a Blend-If layer adjustment.
What it basically does is (linearly) reduce the contribution of the sharpened layer as it approaches the clipping level. That would be simple to implement, and in MatLab one can also use more complex functions than linear interpolation of the alpha channel.
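To give an idea, a minimal MatLab sketch of such a linear roll-off (assuming 'orig' and 'sharp' are double arrays scaled to [0, 1]; the threshold values are just placeholders to tune):

% Fade the sharpened layer back towards the original as values approach
% the clipping levels (a simple Blend-If style linear ramp).
loThresh = 0.05;   % start protecting the shadows below this level
hiThresh = 0.95;   % start protecting the highlights above this level

% Alpha is 1 in the safe mid-range and ramps linearly to 0 at the clip points,
% driven by the sharpened layer's own values.
alphaLo = min(max(sharp ./ loThresh, 0), 1);
alphaHi = min(max((1 - sharp) ./ (1 - hiThresh), 0), 1);
alpha   = alphaLo .* alphaHi;

% Blend: full sharpening in the mid-tones, original near the extremes.
result = alpha .* sharp + (1 - alpha) .* orig;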
Since you are working in floating point math, you can also use a gamma-adjusted blending approach that was suggested by Nicolas Robidoux to reduce halo under/over-shoots when resampling (here). The goal was to avoid discontinuities (which manifest themselves as visual artifacts in brightness and color) in the transition between the original and the adjusted version.
In your case you could sharpen (or enhance acutance in) a gamma-adjusted version for the shadows, and one for the highlights, and blend them (with alpha proportional to luminance) in linear gamma space, before restoring the original gamma precompensation for display.
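A rough MatLab sketch of that idea (here 'lin' is a single image plane in linear light, scaled to [0, 1]; the gamma values and the 'sharpen1D' function are only placeholders for whatever encoding and 1-dimensional sharpening routine you settle on):

% Two gamma-adjusted encodings: one sharpened with the shadows in mind,
% one with the highlights in mind.
gShad = 3.0;    % encoding that expands the shadows before sharpening
gHigh = 0.5;    % encoding that expands the highlights before sharpening
shadVersion = sharpen1D(lin .^ (1/gShad)) .^ gShad;   % back to linear light
highVersion = sharpen1D(lin .^ (1/gHigh)) .^ gHigh;   % back to linear light

% Blend in linear space, with alpha proportional to luminance.
alpha  = min(max(lin, 0), 1);
result = (1 - alpha) .* shadVersion + alpha .* highVersion;

% Afterwards, re-apply the display gamma precompensation, e.g. result .^ (1/2.2).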
Cheers,
Bart
P.S. I agree with Jack that (given the dominant influence of diffraction at f/45) perhaps a somewhat more Airy-disc-shaped deconvolution kernel (in 1 dimension) would allow better deconvolution than a simple Gaussian. You can also consider using a Richardson-Lucy deconvolution, which will be more effective than a simple single-pass deconvolution, or, if the image has a high S/N ratio, even a Van Cittert deconvolution; these may be available as predefined functions in MatLab (see the sketch after the kernel formula below).
I would at any rate not use a simple slice out of a 2-D Gaussian kernel, but rather a (discrete) line spread function (LSF) version of it (the sum of all kernel values in one dimension). A discrete 1-dimensional version can be calculated directly by using:
g(x) = 0.5 * (erf((x + fFactor * 0.5) / (sqrt(2) * xSigma)) - erf((x - fFactor * 0.5) / (sqrt(2) * xSigma)))
where erf() is the 'error function', which is available directly in Matlab, and fFactor would be a sensel fill-factor (approaching 0.0 for a point sample, up to 1.0 for full gap-less micro-lens coverage). The fill-factor may function as a poor man's approximation of a more complex interaction between an Airy disc pattern and finite-area samplers such as our capture device's sensels.
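In MatLab terms, something along these lines should produce a normalized 1-D kernel and feed it to a Richardson-Lucy pass (the sigma, fill-factor, iteration count and file name are placeholders; deconvlucy is part of the Image Processing Toolbox):

% Discrete 1-D Gaussian LSF kernel, integrated over the sensel aperture.
xSigma  = 1.3;                % Gaussian sigma, in sensel pitches
fFactor = 1.0;                % sensel fill-factor (0 < fFactor <= 1)
radius  = ceil(4 * xSigma);   % kernel half-width, in sensels
x       = -radius:radius;
psf1d   = 0.5 * (erf((x + fFactor * 0.5) ./ (sqrt(2) * xSigma)) ...
               - erf((x - fFactor * 0.5) ./ (sqrt(2) * xSigma)));
psf1d   = psf1d ./ sum(psf1d);   % normalize the kernel to sum to 1

% Richardson-Lucy deconvolution with the 1-D kernel; a 1-row PSF acts along
% the horizontal direction (use psf1d' for the vertical direction).
img      = im2double(imread('input.tif'));   % placeholder, single-channel assumed
numIter  = 10;                               % more iterations = more aggressive
restored = deconvlucy(img, psf1d, numIter);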