
Author Topic: Sharpening ... Not the Generally Accepted Way!  (Read 59872 times)

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #100 on: August 12, 2014, 03:28:26 pm »

I have been following this thread with some interest, and as far as deconvolution goes I am sure that Bart's and others' knowledge and experience of this aspect will prove very useful for you.

A couple of things I picked up on; in my opinion you may be making things a little more difficult than they need to be to get an excellent result, whichever sharpening route you choose.

1.  Your test file of the power/telephone line is not a particularly good choice as presented, due to purple/green CA.  IMO this should be removed first during raw processing to give a meaningful view of the sharpening options.

2.  As you started the thread with PS, have you tried Smart Sharpen / Lens Blur / More Accurate checked?  This, AFAIK, is deconvolution sharpening (particular parameters unknown) and offers quite a lot in the way of control.  Not as many options as in other software, of course, but sometimes this may be enough?

By chance I had also played with the sample NEF image in ACR using Amt=50, Rad=0.7, Detail=80, and the result seems to be pretty close to your FM example, although that was not my intention.  It seems to me that in this case a little tweaking in ACR would narrow the differences even further.


Hi Tony,

I corrected the CA so if you download the image now it's CA-free http://www.irelandupclose.com/customer/LL/sharpentest.tif

Also, here are some comparisons:



[You need to right-click on the image and then zoom in to see the detail properly.]

My own feeling is that the Smart Sharpen result is the best (without More Accurate, as this is a Legacy setting which seems to increase artifacts quite a lot). ACR and FocusMagic seem much of a muchness. QImage gives a good sharp line, but at the expense of flattening the power lines.

I did the best I could with all of the sharpening methods (but without adding any additional steps of course, not even fading highlights in Smart Sharpen as the same effect can be obtained for the other filters using Blend-if in Photoshop).

Perhaps adding a Gaussian Blur of 3 to an unsharpened raw image is a bit unfair.

I find it hard to compare your two D800Pine images as the bottom one has darker leaves but a lighter trunk.  Not sure why that is?

Robert
                                         
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

TonyW

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 648
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #101 on: August 12, 2014, 04:37:44 pm »

...
I corrected the CA so if you download the image now it's CA-free http://www.irelandupclose.com/customer/LL/sharpentest.tif

Also, here are some comparisons:

...
Hi Robert.  Quite happy with your findings now CA corrected :)

Quote
My own feeling is that the Smart Sharpen result is the best (without More Accurate, as this is a Legacy setting which seems to increase artifacts quite a lot). ACR and FocusMagic seem much of a muchness. QImage gives a good sharp line, but at the expense of flattening the power lines.
My understanding is that the Smart Sharpen Lens Blur kernel with the More Accurate option should give the best results (based on something I read by Eric Chan - I think on this forum).  It certainly takes longer to apply, and I assume more iterations are performed, which may explain the artifact increase you are seeing?

...
Quote
I find it hard to compare your two D800Pine images as the bottom one has darker leaves but a lighter trunk.  Not sure why that is?
I think it would be wrong to try and draw conclusions from this comparison; all I did was crop the full-size view of your test and paste it as a new document in PS.  My own version using ACR was actually produced before I even saw your example, and was straight from camera with only lens profile and CA correction applied, plus the sharpening.  The difference may be explained by the simple fact of copying your image, or possibly FM altered contrast/colour slightly, or even a combination  :).

On comparing, it just occurred to me that the difference was slight, suggesting that in this case ACR deconvolution may be just as good a starting point as any; and of course, once output sharpening is applied, I would have thought that printing would yield perfectly acceptable results in either case.

I have no experience of FM or Qimage and therefore cannot comment on their advantages, but if Bart says they are good then I have every reason to believe that is the case, and that they are worth investigating to see how they may fit in with your workflow.
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #102 on: August 12, 2014, 04:51:09 pm »

Here is where we need to distinguish between a masking type of filter, like USM and other acutance enhancing filters, and a deconvolution type of filter. A mask is just an overlay, that selectively attenuates the transmission to underlying layers. It adds a (positive or negative) percentage of a single pixel to a lower layer's pixel. A deconvolution on the other hand adds weighted amounts of surrounding pixels to a central pixel, for all pixels (a vast amount of multiplications/additions is required for each pixel) in the same layer.
                 
 
   

Many thanks for taking the time to write such a thorough response Bart!

Looking at this image:



What I’m attempting to emulate is a point source (original image), blurred using the F1 filter.  The blurred point is then ‘unblurred’ using the F2 filter (which is not a USM but a neighbouring pixel computation).

So is this a deconvolution?  And is the PSF effectively F1 (that is, the blur)?  In which case F2 would be the deconvolution function?

As you’ve probably guessed, I’m trying to put this whole thing in terms that I can understand.  I know of course that a sophisticated deconvolution algorithm would be more intelligent and complex, but would it not essentially be doing the same thing as above?


Interestingly, this sharpen filter:



gives a sharper result than the ACR filter, for example, in the test image with the power lines.  A little bit of a halo, but nothing much … and no doubt the filter could be improved on by someone who knew what he was doing!

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #103 on: August 12, 2014, 05:16:42 pm »


I have no experience of FM or Qimage therefore could not comment on advantages, but if Bart says they are good then I have every reason to believe that is the case and worth investigating to see how they may fit in with your workflow.

Hi Tony,

I tried the D800Pine sharpen again, this time with Smart Sharpen, Smart Sharpen (Legacy with More Accurate) and FocusMagic.  Smart Sharpen with Legacy turned off appears to be the same as Smart Sharpen with Legacy on and More Accurate on.

The best result was clearly with FocusMagic for this test.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

TonyW

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 648
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #104 on: August 12, 2014, 05:31:04 pm »

Hi Robert
I just realised we are using different versions of PS - I am on CS6 and you are on CC?  So things under the hood seem to have changed.
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #105 on: August 12, 2014, 05:33:15 pm »

OK ... this is where I stop for tonight!!

But just before ending, you should try this Ps Custom Filter on the D800pine image:



Then fade to around 18-20% with Luminosity blend mode.  It's better than Smart Sharpen.  Which is pretty scary.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #106 on: August 12, 2014, 06:06:02 pm »

What I’m attempting to emulate is a point source (original image), blurred using the F1 filter.  The blurred point is then ‘unblurred’ using the F2 filter (which is not a USM but a neighbouring pixel computation).

So is this a deconvolution?  And is the PSF effectively F1 (that is, the blur)?  In which case F2 would be the deconvolution function?

That's correct, the Custom filter performs a simple (de)convolution.
However, to deconvolve the  F1 filter would require an F2 filter like:
-1 -1 -1
-1  9 -1
-1 -1 -1
All within the accuracy of the Photoshop implementation. One typically reverses the original blur kernel values to negative values, and then adds to the central value to achieve a kernel sum of one (to keep the multiplied and summed restored pixels at the same average brightness).

A more elaborate deconvolution would use a statistically more robust version, because the simple implementation tends to increase noise almost as much as signal, but we'd like to increase the signal-to-noise ratio by boosting the signal significantly more than the noise in a regular photographic image.
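For concreteness, this inversion rule is easy to check in numpy. A minimal sketch, assuming that F1 was a 3x3 box blur (all entries 1, applied with Scale 9) - that is my assumption, consistent with the F2 above, since F1's actual values are only shown in an image:

Code: [Select]
import numpy as np

# Photoshop's Custom filter uses integer kernel entries divided by a Scale.
# Assumed F1: a 3x3 box blur, i.e. all entries 1, applied with Scale = 9.
F1 = np.ones((3, 3), dtype=int)

# The rule: negate the blur entries, then raise the centre entry until the
# kernel sums to 1 (Scale = 1), so average brightness is preserved.
F2 = -F1.copy()
F2[1, 1] = 1 - (F2.sum() - F2[1, 1])   # eight -1s sum to -8, so centre = 9
print(F2)                              # [[-1 -1 -1]
                                       #  [-1  9 -1]
                                       #  [-1 -1 -1]], sum = 1

# Note: F2 only approximately inverts F1; an exact inverse of a blur
# generally needs a much larger kernel than the blur itself.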

Quote
As you’ve probably guessed, I’m trying to put this whole thing in terms that I can understand.  I know of course that a sophisticated deconvolution algorithm would be more intelligent and complex, but would it not essentially be doing the same thing as above?

With the suggested F2 adjustment, yes.

Quote
Interestingly, this sharpen filter:



gives a sharper result than the ACR filter, for example, in the test image with the power lines.  A little bit of a halo, but nothing much … and no doubt the filter could be improved on by someone who knew what he was doing!

Yes, but this will be a sharpening filter, not a neutral deconvolution (unless it exactly reverses the unknown blur function, PSF). Only a perfect PSF deconvolution (probably close to a Gaussian PSF deconvolution) will remain halo free (so with imperfect precision, almost halo free).

Cheers,
Bart
« Last Edit: August 12, 2014, 06:11:09 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #107 on: August 13, 2014, 03:10:00 am »

OK ... this is where I stop for tonight!!

But just before ending, you should try this Ps Custom Filter on the D800pine image:



Then fade to around 18-20% with Luminosity blend mode.  It's better than Smart Sharpen.  Which is pretty scary.

Hi Robert,

Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.

As I've said earlier, such a 'simple' deconvolution tends to also 'enhance' noise (and things like JPEG artifacts), because it can't discriminate between signal and noise. So one might want to use this with a Blend-if layer, or with masks that are opaque for smooth areas (like blue skies, which are usually a bit noisy due to their low photon counts and the demosaicing of that).

Upsampled images would require likewise upsampled filter kernel dimensions, but a 5x5 kernel is too limited for that, so this is basically only usable for original size or down-sampled images.
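The attached kernel values don't survive in this archived copy, but the Custom filter's arithmetic is simple to emulate for experiments outside Photoshop. A sketch, assuming a float channel in the 0..1 range (the ps_custom_filter name is mine, for illustration):

Code: [Select]
import numpy as np
from scipy.ndimage import correlate

def ps_custom_filter(channel, kernel, scale=1, offset=0):
    # Each output pixel is the kernel-weighted sum of its neighbourhood,
    # divided by Scale, plus Offset, clipped to the valid range -- the
    # arithmetic Photoshop documents for the Custom filter. correlate()
    # (no kernel flip) matches how the filter grid is entered.
    out = correlate(channel.astype(float), np.asarray(kernel, dtype=float),
                    mode='nearest')
    return np.clip(out / scale + offset, 0.0, 1.0)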

Cheers,
Bart
« Last Edit: August 13, 2014, 03:48:38 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

hjulenissen

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2051
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #108 on: August 13, 2014, 03:52:59 am »

I'd really appreciate it if someone could relate "sharpening" to "deconvolution" in a dsp manner, ideally using simplistic MATLAB scripts. There are many subjective claims ("deconvolution regains true detail, while sharpening only fakes detail"). But what is the fundamental difference? Both have some inherent model of the blur (be it gaussian or something else), successful implementations of both have to work around noise/numerical issues...

If you put an accurately modelled/measured PSF into a USM algorithm, does it automatically become "deconvolution"? If you use a generic windowed Gaussian in a deconvolution algorithm, does it become sharpening? Is the nonlinear "avoid amplifying small stuff as it is probably noise" part of USM really that bad, or is it an OK first approximation to methods used in deconvolution?

-h
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #109 on: August 13, 2014, 04:31:45 am »

That's correct, the Custom filter performs a simple (de)convolution.
However, to deconvolve the  F1 filter would require an F2 filter like:
-1 -1 -1
-1  9 -1
-1 -1 -1
All within the accuracy of the Photoshop implementation. One typically reverses the original blur kernel values to negative values, and then adds to the central value to achieve a kernel sum of one (to keep the multiplied and summed restored pixels at the same average brightness).

Yes, I realize that - but the above filter deconvolves too strongly, so that some detail is lost around the edges, which is why I reduced it a bit.  However, surely the difference between my F2 and yours is only a modification of the deconvolution algorithm, with yours being the perfect one and mine giving the better real-world result.  Isn't this one of the things that different deconvolution algorithms will do: improve the deconvolution by, for example, boosting the signal more than the noise (as you mention)?  Which might mean softening the deconvolution to the point that it ignores noise, for example.

At any rate, the F2 filter and the one above are both sharpening filters.  They are also deconvolving filters because the convolution is known.  So what Jeff says, that the Lr sharpen goes from USM to Deconvolution as one moves the Detail slider from 0 to 100%, just doesn't make sense.  To deconvolve you have to know the distortion, and increasing the Detail can't give you that information.

Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #110 on: August 13, 2014, 04:38:16 am »

Hi Robert,

Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.


Yes, it works quite well, although you have to apply it several times to get the sort of sharpening needed for the D800Pine image (which would seem to indicate that the D800 image is a bit softer than one would expect, given that the test image was produced by Nikon, presumably with the very best lens and in the very best conditions).

Could you explain how you work out the numbers?  Do you have a formula or algorithm, or is it educated guesswork (in which case your guessing capabilities are better than mine  :)).

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #111 on: August 13, 2014, 05:17:01 am »

Yes, it works quite well, although you have to apply it several times to get the sort of sharpening needed for the D800Pine image (which would seem to indicate that the D800 image is a bit softer than one would expect, given that the test image was produced by Nikon, presumably with the very best lens and in the very best conditions).

Correct. The actual blur PSF is probably larger/different than a 0.7-sigma Gaussian blur, and thus requires a larger-radius PSF (which may not be possible to model adequately in a 5x5 kernel), or multiple iterations with the too-small radius version (but that risks boosting noise too much). The amount of actual blur is largely dictated by the aperture used (assuming perfect focus, since defocus is a resolution killer). The 0.7 Gaussian blur seems to do much better on your power lines, but I don't know about the rest of that image.

Quote
Could you explain how you work out the numbers?  Do you have a formula or algorithm, or is it educated guesswork (in which case your guessing capabilities are better than mine  :)).

It's trial and error, but starting from a solid foundation. I start with a PSF of the assumed 0.7-sigma blur, using my PSF generator tool, to have something solid to work with. I then have to consider some of the limitations (limited precision, integer kernel values only, maximum of 999) of the Photoshop Custom filter implementation. So one would need to produce an integer-valued kernel, and select a deconvolution type of kernel (invert the blur values and normalize to a kernel sum of 1 via the central kernel value, to maintain normal average brightness).

Then one needs to tweak the scale factor. The scale in my tool is essentially an amplitude amplifier, but that is not necessarily what we want in the PS Custom filter; we want to increase its precision, not the amplitude of its effect. Therefore we need to adjust the Custom filter's scale factor. We then also need to tweak the numbers to get more predictable output for uniform areas, since those should stay at the same brightness.
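As a rough sketch of that procedure (the tool itself surely applies more refined tweaks; the 0.7 sigma and the Scale choice here are just for illustration):

Code: [Select]
import numpy as np

def gaussian_psf(sigma, size=5):
    # Sampled 2-D Gaussian PSF, normalised to sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

# Negate the 0.7-sigma PSF, quantise to integers, then fix the centre entry
# so that sum(kernel) == Scale, which keeps uniform areas at the same
# brightness. Scale = 500 keeps the adjusted centre under the Custom
# filter's +/-999 entry limit.
scale = 500
kernel = np.rint(-gaussian_psf(0.7) * scale).astype(int)
c = kernel.shape[0] // 2
kernel[c, c] += scale - kernel.sum()
print(kernel, kernel.sum())      # kernel sum equals Scale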

Cheers,
Bart
« Last Edit: August 13, 2014, 05:30:39 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #112 on: August 13, 2014, 05:43:39 am »

I'd really appreciate it if someone could relate "sharpening" to "deconvolution" in a dsp manner, ideally using simplistic MATLAB scripts. There are many subjective claims ("deconvolution regains true detail, while sharpening only fakes detail"). But what is the fundamental difference? Both have some inherent model of the blur (be it gaussian or something else), successful implementations of both have to work around noise/numerical issues...

If you put an accurately modelled/measured PSF into a USM algorithm, does it automatically become "deconvolution"? If you use a generic windowed Gaussian in a deconvolution algorithm, does it become sharpening? Is the nonlinear "avoid amplifying small stuff as it is probably noise" part of USM really that bad, or is it an OK first approximation to methods used in deconvolution?

-h

It certainly would be interesting ... but I'm not the person to do it!

What I was attempting to do above is to relate deconvolution to sharpening using a kernel (in the Photoshop Custom Filter).  It would appear (confirmed by Bart, I believe) that in its simplest implementation, a sharpening filter is a deconvolution filter if the sharpening filter reverses the blurring. So if you blur with a 'value' of 1 and unblur with a value of 1, you hopefully revert back to the original image, which is clearly going to be sharper than the blurred image: so you could say that you have sharpened the blurred image.

However, what we normally call 'sharpening' is not restoring lost detail (which is what deconvolution attempts to do): what it does is to add contrast at edges, and this gives an impression of sharpness because of the way our eyes work (we are more sensitive to sharp transitions than to gradual ones - this gives a useful explanation http://www.cambridgeincolour.com/tutorials/unsharp-mask.htm).
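In formula form, that edge-contrast boost is just this (a minimal sketch; Photoshop's Radius only loosely maps onto a Gaussian sigma, and its Amount is a percentage):

Code: [Select]
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.0, amount=1.0):
    # Classic USM: add back 'amount' times the difference between the image
    # and a Gaussian-blurred copy. It boosts contrast at edges (and creates
    # halos); it does not invert any particular blur.
    return img + amount * (img - gaussian_filter(img, sigma=radius))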

So a sharpening filter like USM could by chance be a deconvolution filter, but it normally won't be.  But I guess that if we carefully play around with the radius we could come close to a deconvolution, provided the convolution is Gaussian.  With Smart Sharpen that might be more achievable, using the Lens Blur kernel.  Just guessing here  :) ... perhaps someone could clarify.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #113 on: August 13, 2014, 06:15:12 am »

I'd really appreciate it if someone could relate "sharpening" to "deconvolution" in a dsp manner, ideally using simplistic MATLAB scripts.

Hi,

Not everybody here is familiar with MatLab, so that would not help a larger audience.

The crux of the matter is that, in a DSP sense, deconvolution exactly inverts the blur operation (assuming an accurate PSF model, no input noise, and high-precision calculations to avoid accumulation of errors). USM only boosts the gradient of e.g. edge transitions, which will look sharp but is only partially helpful and not accurate (and prone to creating halos which are added/subtracted from those edge profiles to achieve that gradient boost).

Quote
There are many subjective claims ("deconvolution regains true detail, while sharpening only fakes detail").


It's not subjective, but measurable and visually verifiable. That's why it was used to salvage the first generation of Hubble Space Telescope images taken with the flawed optics.

Quote
But what is the fundamental difference? Both have some inherent model of the blur (be it gaussian or something else), successful implementations of both have to work around noise/numerical issues...

If you put an accurately modelled/measured PSF into a USM algorithm, does it automatically become "deconvolution"?

No, it's not the model of the blur, but how that model is used to invert the blurring operation. USM uses a blurred overlay mask to create halo overshoots in order to boost edge gradients. Deconvolution doesn't use an overlay mask, but redistributes weighted amounts of the diffused signal in the same layer back to the intended spatial locations (it contracts blurry edges to sharpen, instead of boosting edge amplitudes to mimic sharpness).

I can recommend this free book on DSP for those interested in a more fundamental explanation of how things work. This tutorial has a nice visual demonstration of how a kernel moves through a single layer to convolve an image.

Quote
If you use a generic windowed gaussian in a deconvolution algorithm, does it become sharpening? Is the nonlinear "avoid amplifying small stuff as it is probably noise" part of USM really that bad, or is it an ok first approximation to methods used in deconvolution?

It's the algorithm that defines what is done with the model of the blur function (PSF). More advanced algorithms usually have a regularization component that blurs low signal-to-noise amounts but fully deconvolves higher S/N pixels. They also tend to use multiple iterations to home in on a better balance between noise attenuation and signal restoration. Thus, they become locally adaptive to the S/N ratios present in an image layer (often a Luminance component, to avoid mistakenly amplifying chromatic noise).
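To make "multiple iterations" concrete, here is a bare-bones Richardson-Lucy loop, one classic algorithm of this family (a sketch assuming a non-negative float image and a known PSF, with none of the regularisation mentioned above):

Code: [Select]
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-7):
    # Each pass re-blurs the current estimate, compares it with the observed
    # image, and redistributes the mismatch back through the (flipped) PSF --
    # contracting blur toward its source rather than adding halo.
    psf = psf / psf.sum()
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (reblurred + eps)
        estimate = estimate * fftconvolve(ratio, psf[::-1, ::-1], mode='same')
    return estimate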

Cheers,
Bart
« Last Edit: August 13, 2014, 06:20:08 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #114 on: August 13, 2014, 06:32:05 am »

However, what we normally call 'sharpening' is not restoring lost detail (which is what deconvolution attempts to do): what it does is to add contrast at edges, and this gives an impression of sharpness because of the way our eyes work (we are more sensitive to sharp transitions than to gradual ones - this gives a useful explanation http://www.cambridgeincolour.com/tutorials/unsharp-mask.htm).

Correct.

Quote
So a sharpening filter like USM could by chance be a deconvolution filter, but it normally won't be.

That's not correct; USM is never a deconvolution, it's a masked addition of halo. The USM operation produces a halo version layer of the edge transitions and adds that layer (halos and all) back to the source image, thus boosting the edge gradient (and overshooting the edge amplitudes). Halo is added to the image, which explains why USM always produces visible halos at relatively sharp transitions. It is also why a lot of effort is taken by USM-oriented tools like PhotoKit Sharpener to mitigate the inherent flaw in the USM approach (which was the only remedy available for film), with edge masks and Blend-if layers.

Deconvolution restores diffused signal to the original intended spatial location, and it uses the PSF to do its weighted signal redistribution/contraction.
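A quick 1-D illustration of that halo overshoot (a sketch; the amount and radius are arbitrary):

Code: [Select]
import numpy as np
from scipy.ndimage import gaussian_filter1d

# A blurred 1-D step edge, rising smoothly from 0 to 1.
edge = gaussian_filter1d(np.repeat([0.0, 1.0], 50), sigma=2.0)

# USM pushes values below 0 and above 1 on either side of the edge:
# those overshoots are the halos.
usm = edge + 1.0 * (edge - gaussian_filter1d(edge, sigma=2.0))
print(usm.min(), usm.max())      # < 0 and > 1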

I know it's a somewhat difficult concept to grasp for us visually oriented beings, so don't worry if it takes a while to become 'obvious'.

Cheers,
Bart
« Last Edit: August 13, 2014, 06:38:09 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

hjulenissen

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2051
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #115 on: August 13, 2014, 06:35:28 am »

Not everybody here is familiar with MatLab, so that would not help a larger audience.
Not everything can be explained to a larger audience (math, for instance). The question is what means are available that will do the job. MATLAB is one such tool; Excel formulas, Python scripts, etc. are others. I tend to prefer descriptions that can be executed on a computer, as that leaves less room to leave out crucial details (researchers are experts at publishing papers with nice formulas that cannot easily be put into practice without unwritten knowledge).
Quote
The crux of the matter is that, in a DSP sense, deconvolution exactly inverts the blur operation (assuming an accurate PSF model, no input noise, and high-precision calculations to avoid accumulation of errors). USM only boosts the gradient of e.g. edge transitions, which will look sharp but is only partially helpful and not accurate (and prone to creating halos which are added/subtracted from those edge profiles to achieve that gradient boost).
The exact PSF of a blurred image is generally unknown (except in the trivial example of intentionally blurring an image in Photoshop). Moreover, it will differ from the corners to the centre, from "blue" to "red" wavelengths, etc. Deconvolution will (practically) always use some approximation to the true blur kernel, either input from some source or blindly estimated.

Neither sharpening nor deconvolution can invent information that is not there. They are limited to transforming (linearly or nonlinearly) their input into something that resembles the "true" signal in some sense (e.g. least squares), or that simply "looks better" assuming some known deterioration model.
Quote
It's not subjective, but measurable and visually verifiable. That's why it was used to salvage the first generation of Hubble Space Station's images taken with flawed optics.
Quote
No, it's not the model of the blur, but how that model is used to invert the blurring operation. USM uses a blurred overlay mask to create halo overshoots in order to boost edge gradients. Deconvolution doesn't use an overlay mask, but redistributes weighted amounts of the diffused signal in the same layer back to the intended spatial locations (it contracts blurry edges to sharpen, instead of boosting edge amplitudes to mimic sharpness).
I can't help thinking that you are missing something in the text above. What would be a fair frequency-domain interpretation of USM?
Quote
More advanced algorithms usually have a regularization component that blurs low signal-to-noise amounts but fully deconvolves higher S/N pixels.
My point was that USM seems to allow just that (although probably in a crude way compared to state-of-the-art deconvolution).

It would aid my own (and probably a few others') understanding of sharpening if there was a concrete description (i.e. something other than mere words) of USM and deconvolution in the context of each other, ideally showing that deconvolution is a generalization of USM.

I believe that convolution can be described as:
y = x * h where:
x is some input signal
h is some convolution kernel
* is the convolution operator

In the frequency domain, this can be described as
Y = X · H
where X, H and Y are the frequency-domain transforms of x, h and y, and the "·" operator is regular multiplication.

If we want some output Z to resemble the original X, we could in principle just invert the (linear) blur:
Z = Y / H = X · H / H ~ X

In practice, we don't know the exact H, there might not exist an exact inverse, and there will be noise, so it may be safer to do some regularization:
Z = Y / (delta + H) = X · H / (delta + H) ~ X

for delta some "small" number to avoid division by zero and infinite gain.

This is about where my limited understanding of deconvolution stops. You might want to tailor the pseudoinverse with regard to (any) knowledge about the noise and/or signal spectrum (a la Wiener filtering), but I have no idea how blind deconvolution finds a suitable inverse.
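In numpy terms, that regularised inverse is only a few lines (a sketch; I've written the pseudoinverse in the standard stable form Y·conj(H)/(|H|^2 + delta), which is the safe way to compute Y/H):

Code: [Select]
import numpy as np

def pad_psf(psf, shape):
    # Zero-pad a small PSF to the image shape, rolled so its centre sits at
    # index (0, 0) -- the usual convention for FFT-based filtering.
    big = np.zeros(shape)
    big[:psf.shape[0], :psf.shape[1]] = psf
    return np.roll(big, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                   axis=(0, 1))

def regularised_inverse(y, psf, delta=1e-2):
    # Z = Y * conj(H) / (|H|^2 + delta): plain inverse filtering as
    # delta -> 0; a small delta tames the gain where |H| is tiny and
    # noise would otherwise explode.
    Y = np.fft.fft2(y)
    H = np.fft.fft2(pad_psf(psf, y.shape))
    Z = Y * np.conj(H) / (np.abs(H) ** 2 + delta)
    return np.real(np.fft.ifft2(Z))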

Now, how might USM be expressed in this context, and what would be the fundamental difference?

-h
« Last Edit: August 13, 2014, 06:55:56 am by hjulenissen »
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #116 on: August 13, 2014, 06:46:26 am »

That's not correct; USM is never a deconvolution, it's a masked addition of halo. The USM operation produces a halo version layer of the edge transitions and adds that layer (halos and all) back to the source image, thus boosting the edge gradient (and overshooting the edge amplitudes). Halo is added to the image, which explains why USM always produces visible halos at relatively sharp transitions. It is also why a lot of effort is taken by USM-oriented tools like PhotoKit Sharpener to mitigate the inherent flaw in the USM approach (which was the only remedy available for film), with edge masks and Blend-if layers.


Sorry, my mistake ... in that I assumed, probably incorrectly, that the 'USM' implementation in Photoshop etc. doesn't actually use the traditional blur/subtract/overlay type method, but uses something more like one of the kernels above, as that would give far more flexibility and accuracy in the implementation.  If that were the case, then would it not be correct to say that this sort of filter could be either a sharpening filter or a deconvolution filter, depending on whether or not it was (by chance or by trial and error) the inverse of the convolution?

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #117 on: August 13, 2014, 07:12:47 am »

Now, how might USM be expressed in this context, and what would be the fundamental difference?


Well, it would be 'something' like g + a·(g - g*h), I would think. Or in plain English:

Blur g by h, subtract the blurred copy from g, scale the difference, and add it back to the original signal. At any rate, nothing like a deconvolution (which would be something like applying an approximate inverse of h to g*h, I guess??).

Robert
« Last Edit: August 13, 2014, 07:31:03 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #118 on: August 13, 2014, 07:20:36 am »

Sorry, my mistake ... in that I assumed, probably incorrectly, that the 'USM' implementation in Photoshop etc. doesn't actually use the traditional blur/subtract/overlay type method, but uses something more like one of the kernels above, as that would give far more flexibility and accuracy in the implementation.

No problem. The USM method used by Photoshop, according to reverse-engineering attempts I've seen described on the internet, does somewhat follow the traditional sandwiching method used with film, but Adobe no doubt cuts some corners along the way to speed things up. It remains a crude way to mimic sharpness by adding halo overshoots (the radius determines the width of the halo, the amount determines the contrast of the halo, and the threshold is a limiter). The article by Doug Kerr mentioned earlier explains that process quite well.

Quote
If that were the case, then would it not be correct to say that this sort of filter could be either a sharpening filter or a deconvolution filter, depending on whether or not it was (by chance or by trial and error) the inverse of the convolution?

It is in fact extremely unlikely (virtually impossible) that simply adding a halo facsimile of the original image will invert a convolution (blur) operation. USM is only trying to fool us into believing something is sharp, because it adds local contrast (and halos), which is very vaguely similar to what our eyes do at sharp edges.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #119 on: August 13, 2014, 07:24:09 am »

Hi Robert,

Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.


Hi Bart,

I've tried your PSF generator and I must be using it incorrectly, as the figures I get are very different from yours.  See here:



I don't understand 'fill factor' for example - and I just chose the pixel value to be as close to 999 as possible.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana