And my claim has been that what seems to be your underlying assumption is wrong. Not that I blame you; the same claim seems to reverberate all over the net: "deconvolution recreates detail, USM fakes detail". Let me modify your analogy: the classical radiator has a single variable, controlled by the user, and no thermometer to be seen anywhere. Getting the right temperature can be challenging. A more modern radiator might have one or more temperature probes, and can thus make better-informed choices.
I believe that the aforementioned claim is not supported by an analysis of what USM (in its various incarnations) does compared to deconvolution. Information cannot be made out of nothing (Shannon & friends). These methods can only transform the information present at their input in a way that more closely resembles some assumed reference, given some assumed degradation. When USM subtracts a windowed Gaussian blur of the image from the image itself, this is (in effect) a convolution with a single kernel, namely the linear-phase complementary filter. Thus, the sharpening used in USM can perhaps be described as inverting an implicitly assumed Gaussian image degradation, a function that (of course) can also be described in the frequency domain. The nonlinearity does complicate the analysis, but I think the same is true for the regularization used in deconvolution.
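To make that concrete, here is a minimal Python/numpy sketch (my own illustration; the amount and sigma values are arbitrary) showing that linear blur-and-subtract USM is the same as a single convolution with the complementary kernel:

import numpy as np
from scipy.ndimage import convolve, gaussian_filter

rng = np.random.default_rng(0)
img = rng.random((64, 64))
amount, sigma = 1.5, 2.0

# Path 1: the usual blur-and-subtract formulation of (linear) USM.
usm = img + amount * (img - gaussian_filter(img, sigma))

# Path 2: the same operation as one convolution with the single combined
# kernel (1 + amount)*delta - amount*gaussian, i.e. the complementary filter.
delta = np.zeros((25, 25))
delta[12, 12] = 1.0
gauss = gaussian_filter(delta, sigma)          # sampled Gaussian kernel
kernel = (1 + amount) * delta - amount * gauss
single = convolve(img, kernel, mode='reflect')

# Identical away from the borders (boundary handling differs slightly).
print(np.allclose(usm[10:-10, 10:-10], single[10:-10, 10:-10]))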
This description might prove instructive for relatively "small-scale" USM parameters ("sharpening"), while larger-scale "local contrast" modification might be more easily comprehended in the spatial domain?
Thus, my claim (and I don't have the maths to back it up) is that USM is very similar to (naive) deconvolution, and that both can be described as inverting an implicit/explicit model of the image degradation. The most important difference seems to be that USM practically always has a fixed kernel (of variable sigma), while deconvolution tends to have a highly parametric (or even blindly estimated) kernel, thus giving more parameters to tweak and (if chosen wisely) better results. It seems that practical deconvolution tends to use nonlinear methods, e.g. to satisfy the simultaneous (and contradictory) goals of detail enhancement and noise suppression. These may well give better numerical/perceived compromises, but that does not (in my mind) make it right to claim that "deconvolution recreates detail, while USM fakes it".
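For the frequency-domain view, a tiny sketch (again only my illustration; the amount a = 1 and the sigma are arbitrary) comparing the gain of linear USM, 1 + a*(1 - G(f)), with the gain of a naive inverse filter, 1/G(f), for a Gaussian blur response G(f). For weak blur, 1/G(f) is approximately 1 + (1 - G(f)), which is exactly USM with amount 1:

import numpy as np

sigma = 1.0
a = 1.0                                      # USM "amount" (illustrative)
f = np.linspace(0.0, 0.4, 9)                 # spatial frequency, cycles/pixel
G = np.exp(-2 * (np.pi * f * sigma) ** 2)    # frequency response of a Gaussian blur

usm_gain = 1 + a * (1 - G)                   # linear USM boost
inv_gain = 1 / G                             # naive deconvolution (inverse filter)

for fi, u, d in zip(f, usm_gain, inv_gain):
    print(f"f={fi:.2f}  USM={u:.2f}  inverse filter={d:.2f}")
# Both boost high frequencies; the inverse filter keeps growing (hence the need
# for regularization), while the USM boost saturates at 1 + a.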
Hi,
The whole notion of deconvolution, as I understand it, is that since we are dealing with an essentially linear system, we can add and subtract the various components with no degradation. So if we have the original image g and add the blurring function f to it to get the blurred image h, we can simply subtract f from h (assuming we know f) and we will get back to g. So although it seems to be getting something back from nothing, in the case of a blurred image, in reality we are just getting the component we want back and we are leaving behind the component we don't want.
The example I give above of an image blurred with a Gaussian blur of 8 and then 'unblurred' by subtracting the blurring from it, restoring the original image perfectly, illustrates this point dramatically. Of course, in this case f is known perfectly, so the extraction of the original image from the blurred image is also possible perfectly; as you say, our problem is how to find out what f is for our images.
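For what it's worth, here is roughly how I would reproduce that experiment in Python/numpy instead of ImageJ's FD Math (so the details are my assumptions, not what ImageJ does internally): with a noise-free image and a perfectly known PSF, dividing by the PSF in the frequency domain gives the original back to within floating-point precision.

import numpy as np

rng = np.random.default_rng(1)
img = rng.random((128, 128))                 # stand-in for the test image

# Known Gaussian PSF. The sigma is kept small here so the PSF spectrum stays
# well away from zero; a large sigma runs straight into the near-zero-division
# problem described in the macro comment below.
y, x = np.mgrid[-64:64, -64:64]
sigma = 1.2
psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
psf /= psf.sum()
psf = np.fft.ifftshift(psf)                  # move the PSF peak to the origin

H = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))       # blur: multiply by H
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / H))  # unblur: divide by H

print(np.max(np.abs(restored - img)))        # essentially zero (float rounding)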
I may be entirely wrong here, but I don't think image deconvolution is non-linear, because if it were it wouldn't work. Even if there are non-linearities in the system, the algorithm would have to approximate a linear system. This comment in the ImageJ macro explains one technique for dealing with noise:
// Regarding adding noise to the PSF, deconvolution works by
// dividing by the PSF in the frequency domain. A Gaussian
// function is very smooth, so its Fourier, (um, Hartley)
// components decrease rapidly as the frequency increases. (A
// Gaussian is special in that its transform is also a
// Gaussian.) The highest frequency components are nearly zero.
// When FD Math divides by these nearly-zero components, noise
// amplification occurs. The noise added to the PSF has more
// or less uniform spectral content, so the high frequency
// components of the modified PSF are no longer near zero,
// unless it is an unlikely accident.
So what this is doing is adding noise to the PSF in order to avoid noise amplification in the deconvolution (which is pretty smart!). Again, this is assuming a linear system.
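A small numpy sketch of that idea (my own, just following the macro comment's description; the noise amplitude 1e-6 and the sigma of 4 are made-up illustrative numbers): a little broadband noise added to the PSF keeps its high-frequency spectrum away from zero, so the deconvolution gain 1/|PSF spectrum| stays bounded.

import numpy as np

rng = np.random.default_rng(2)
n = 128
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
sigma = 4.0
psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
psf /= psf.sum()
psf = np.fft.ifftshift(psf)

noisy_psf = psf + 1e-6 * rng.standard_normal((n, n))   # noise-spiked PSF

H_clean = np.fft.fft2(psf)
H_noisy = np.fft.fft2(noisy_psf)

# The clean Gaussian's high-frequency components are essentially zero, so the
# gain 1/|H| explodes there; the added noise floor keeps the gain bounded.
print("max gain, clean PSF:", (1 / np.abs(H_clean)).max())
print("max gain, noisy PSF:", (1 / np.abs(H_noisy)).max())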
As for USM ... if the implementation in Photoshop etc. is not the conventional one of creating an overlay by blurring/subtracting, but instead uses a convolution kernel, then yes, it is also doing a deconvolution, and the difference between it and another deconvolution comes down to the algorithm and implementation. However, the belief seems to be that USM creates a mask by blurring the image and subtracting the blurred image from the original, effectively eliminating the low-frequency components (like a high-pass filter). This mask is then used to add contrast to the high-frequency components of the image. So, within the constraints of my limited understanding, in USM we are adding a signal, whereas in deconvolution we are subtracting one. The question for me is ... is the signal we are adding the inverse of the signal we are subtracting? (It's true that in the case of USM we have a high-frequency signal, whereas in deconvolution we have a low-frequency one.) I would think that it is not, because adding contrast is not the inverse of removing blurring: we now have an additional signal c in the equation, c being a high-frequency signal that is added to the high-frequency components of the image.
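To spell out the recipe I mean, a small Python/numpy sketch (the radius and amount values are made up, not Photoshop's): the mask is the original minus the blurred image, so it holds only the high-frequency components, and sharpening adds a scaled copy of that mask back.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
img = rng.random((64, 64))
radius, amount = 2.0, 1.0

blurred = gaussian_filter(img, radius, mode='wrap')   # low-frequency part
mask = img - blurred                                  # the "unsharp mask"
sharpened = img + amount * mask                       # add back the high-freq signal

# Sanity check that the mask is high-pass: compare its spectrum to the image's.
R = np.abs(np.fft.fft2(mask)) / np.abs(np.fft.fft2(img))
print("lowest non-DC frequency:", R[0, 1])    # small: blur passes low frequencies
print("highest frequency      :", R[32, 32])  # ~1: blur removes high frequencies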
Someone who understands the maths better than me would need to answer this question.
Robert