Not everybody here is familiar with MATLAB, so that would not help a larger audience.
Not everything can be explained to a larger audience (math, for instance). The question is what means are available that will do the job. MATLAB is one such tool; Excel formulas, Python scripts, etc. are others. I tend to prefer descriptions that can be executed on a computer, as that leaves less room to leave out crucial details (researchers are experts at publishing papers with nice formulas that cannot easily be put into practice without unwritten knowledge).
The crux of the matter is that, in DSP terms, deconvolution exactly inverts the blur operation (assuming an accurate PSF model, no input noise, and high-precision calculations to avoid accumulation of errors). USM only boosts the gradient of e.g. edge transitions, which looks sharp but is only partially helpful and not accurate (and it is prone to creating halos, which are added to/subtracted from those edge profiles to achieve that gradient boost).
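To make that concrete, here is a minimal Python/NumPy sketch (mine, not from the post above): blur a test image with a known 3x3 box PSF by multiplication in the frequency domain, then divide by the same transfer function. Under those idealized conditions (known PSF, no noise, floating-point precision) the original is recovered essentially exactly.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random((64, 64))          # "true" image
    h = np.zeros((64, 64))
    h[:3, :3] = 1.0 / 9.0             # known 3x3 box-blur PSF (circular convolution)

    H = np.fft.fft2(h)                                # blur transfer function
    y = np.real(np.fft.ifft2(np.fft.fft2(x) * H))     # blurred image: y = x * h
    z = np.real(np.fft.ifft2(np.fft.fft2(y) / H))     # exact inverse filter
    print(np.max(np.abs(z - x)))                      # round-off-level error

The plain division by H is only safe here because this particular PSF has no zeros in its transfer function at this image size; with noise, or with zeros in H, you need the regularization discussed further down.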
The exact PSF of a blurred image is generally unknown (except in the trivial case of intentionally blurring an image in Photoshop). Moreover, it will differ between the corners and the center of the frame, between "blue" and "red" wavelengths, etc. Deconvolution will (practically) always use some approximation to the true blur kernel, either supplied from some source or blindly estimated.
Neither sharpening nor deconvolution can invent information that is not there. They are limited to transforming (linear or nonlinear) their input into something that resembles the "true" signal in some sense (e.g. least squares) or simply "looks better" assuming some known deterioration model.
It's not subjective, but measurable and visually verifiable. That's why it was used to salvage the first generation of Hubble Space Telescope images taken with its flawed optics.
I know the basics of convolution and deconvolution. Your post contains a lot of claims and little in the way of hands-on explanation. Why is the 2-d neighborhood weighting used in USM so fundamentally different from the 2-d weighting used in deconvolution, aside from the actual weights?
No, it's not the model of the blur, but how that model is used to invert the blurring operation. USM uses a blurred overlay mask to create halo overshoots in order to boost edge gradients. Deconvolution doesn't use an overlay mask, but redistributes weighted amounts of the diffused signal in the same layer back to the intended spatial locations (it contracts blurry edges to sharpen, instead of boosting edge amplitudes to mimic sharpness).
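For reference, the USM mechanism described above boils down to something like the following sketch ("radius" and "amount" are my names for the usual USM controls): the high-pass difference between the image and a blurred copy of it is simply added back on top, which boosts local contrast but moves nothing back to where it was diffused from.

    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, radius=2.0, amount=1.0):
        blurred = gaussian_filter(img, sigma=radius)   # the blurred overlay mask
        return img + amount * (img - blurred)          # boost edge gradients; overshoot shows up as halos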
I can't help thinking that you are missing something in the text above. What is a fair frequency-domain interpretation of USM?
More advanced algorithms usually have a regularization component that blurs low signal-to-noise content but fully deconvolves higher-S/N pixels.
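One common way to get that behaviour (an illustration of the idea, not necessarily what any particular tool does) is a Wiener-style per-frequency gain, which approaches the full inverse 1/H where the signal dominates the noise and falls towards zero where the noise dominates:

    import numpy as np

    def wiener_gain(H, nsr):
        # H:   frequency response of the blur (complex array)
        # nsr: noise-to-signal power ratio per frequency; small -> trust the data, large -> suppress
        return np.conj(H) / (np.abs(H) ** 2 + nsr)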
My point was that USM seems to allow just that (although probably in a crude way compared to state-of-the-art deconvolution).
It would aid my own (and probably a few others') understanding of sharpening if there were a concrete description (i.e. something other than mere words) of USM and deconvolution in the context of each other, ideally showing that deconvolution is a generalization of USM.
I believe that convolution can be described as:
y = x * h where:
x is some input signal
h is some convolution kernel
* is the convolution operator
In the frequency domain, this can be described as
Y = X · H
where X, H and Y are the frequency-domain transforms of x, h and y, and the "·" operator is regular (element-wise) multiplication.
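A quick numerical check of that relation (using circular convolution, for which the identity holds exactly):

    import numpy as np

    rng = np.random.default_rng(1)
    N = 64
    x = rng.random(N)
    h = rng.random(N)

    # circular convolution computed directly, and via multiplication in the frequency domain
    y_direct = np.array([sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)])
    y_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
    print(np.allclose(y_direct, y_fft))   # True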
If we want some output Z to resemble the original X, we could in principle just invert the (linear) blur:
Z = Y / H = X · H / H ~ X
In practice, we don't know the exact H, there might not exist an exact inverse, and there will be noise, so it may be safer to do some regularization:
Z = Y · conj(H) / (|H|^2 + delta) = X · H · conj(H) / (|H|^2 + delta) ~ X
for delta some "small" positive number, which avoids division by zero and unbounded gain where H is close to zero (conj(H) is the complex conjugate of H).
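A minimal sketch of that regularized inversion (it glosses over PSF centering and boundary handling, and delta is simply hand-picked here):

    import numpy as np

    def regularized_deconvolve(y, h, delta=1e-3):
        # Z = Y · conj(H) / (|H|^2 + delta), then back to the spatial domain
        Y = np.fft.fft2(y)
        H = np.fft.fft2(h, s=y.shape)   # PSF padded to the image size
        Z = Y * np.conj(H) / (np.abs(H) ** 2 + delta)
        return np.real(np.fft.ifft2(Z))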
This is about where my limited understanding of deconvolution stops. You might want to tailor the pseudo-inverse with respect to (any) knowledge about the noise and/or signal spectrum (à la Wiener filtering), but I have no idea how blind deconvolution finds a suitable inverse.
Now, how might USM be expressed in this context, and what would be the fundamental difference?
-h