
Author Topic: Sharpening ... Not the Generally Accepted Way!  (Read 56135 times)

Eyeball

  • Full Member
  • ***
  • Offline
  • Posts: 150
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #140 on: August 14, 2014, 01:26:13 pm »

I'm also looking forward to a newer more powerful version of Topaz Labs Infocus (which currently is a bit too sensitive regarding the creation of artifacts).

Yes, I have Infocus, too, and I had high expectations for it when it first came out - primarily due to delays in FM coming out in 64-bit.  But I find it much more finicky to use, and it has a pretty strong bias toward hard edges - something that has also bothered me about Smart Sharpen in PS.  Topaz also appears to have almost forgotten about that product since it was first released.

The hard edges vs. softer texture differentiation is a big deal to me and I wish more developers would take it into consideration.  I think it is much more useful than the shadows/highlights adjustments in PS Smart Sharpen, for example.  There is still a lot of legacy thinking that gets applied, where people believe they need to restrict sharpening to just hard edges or keep it away from dark areas of the image.  I believe the noise in older cameras prompted that thinking, but IMO it is needed much less with today's cameras.

It would be nice to control it, though, and while the LR Detail adjustment seems to have that goal in mind, it sometimes seems hard to balance with noise reduction.
Logged

Eyeball

  • Full Member
  • ***
  • Offline
  • Posts: 150
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #141 on: August 14, 2014, 01:31:56 pm »

Which looks to me like a massive amount of contrast has been added to the detail so that we end up with a posterized look ... and doesn't look to me like a deconvolution at all.

Also, moving the Detail slider up in steps of 10 just shows an increasing amount of this coarsening of detail; there is no point at which there is a noticeable change in processing (from USM-type to deconvolution-type).  And notice the noise on the lamp post.

I know Jeff has said that this is so - and I don't dispute his insider knowledge of Photoshop development - but it would be good to see how the sharpening transitions from USM to deconvolution, because I certainly can't see it.

That is exactly what I was referring to in my earlier post.  I have often said to myself exactly what you did: "I know you say it is but is it REALLY using deconvolution?"  :)

Eric would be the man to know I guess, although I'm not sure how often he checks out more esoteric threads like this one.  In fact, in my mind Eric is the source of the ">50% Detail uses deconvolution in LR" understanding although I'm not sure I could link a direct quote.  If not here, maybe on the Adobe forums.
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #142 on: August 14, 2014, 02:04:09 pm »

That is exactly what I was referring to in my earlier post.  I have often said to myself exactly what you did: "I know you say it is but is it REALLY using deconvolution?"  :)

Eric would be the man to know I guess, although I'm not sure how often he checks out more esoteric threads like this one.  In fact, in my mind Eric is the source of the ">50% Detail uses deconvolution in LR" understanding although I'm not sure I could link a direct quote.  If not here, maybe on the Adobe forums.

Hi,

I've sent him an email - hopefully he will give us an explanation.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #143 on: August 15, 2014, 12:03:06 am »

Hi Bart,

I thought that the problem might lie along the lines you've pointed out.

The image that I posted is a screen capture, so it's way upsampled.  The original that I tried to deconvolve is just a 4-pixel black square on a gray background.

I tried it with the example macro in ImageJ and this restores the square perfectly.  I also played around with a couple of images, and for anyone who doubts the power of deconvolution (or who thinks deconvolution and USM are the same sort of thing), here is an example from the D800Pine image:



It would be very interesting to play around with ImageJ with PSFs with varying Gaussian Blur amounts.  If you have reasonably understandable steps that can be followed (by me) in ImageJ I would be happy to give it a go.  I've never used ImageJ before, so at this stage I'm just stumbling around in the dark with it :).

I have to thank you for all the information and help!  You are being very generous with your time and knowledge.

Robert



Here is my attempt to deconvolve the same section of the image. It needs more work.
Logged

hjulenissen

  • Sr. Member
  • ****
  • Offline
  • Posts: 2051
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #144 on: August 15, 2014, 03:46:38 am »

I am seriously not an expert in imaging science, but it would seem to me that a better analogy between USM and deconvolution would be something like a blanket and a radiator, in that the blanket covers up the fact that there's not enough heat in the room whereas the radiator puts heat back in (heat being detail ... which is a bit of a pity because heat is noise, harking back to my thermodynamics :)).
And my claim has been that what seems to be your underlying assumption is wrong. Not that I blame you; the same claim seems to reverberate all over the net: "deconvolution recreates detail, USM fakes detail". Let me modify your analogy: the classical radiator has a single variable, controlled by the user, and no thermometer to be seen anywhere. Getting the right temperature can be challenging. A more modern radiator might have one or more temperature probes, and thus can make more well-informed choices.

I believe that the aforementioned claim is not supported by an analysis of what USM (in various incarnations) does compared to deconvolution. Information cannot be made out of nothing (Shannon & friends). These methods can only transform information present at their input in a way that more closely resembles some assumed reference, given some assumed degradation. When USM uses a windowed Gaussian subtracted from the image itself, this is (in effect) a convolution with a single kernel, seemingly the linear-phase complementary filter. Thus, the sharpening used in USM can perhaps be described as inverting the implicitly assumed Gaussian image degradation. A function that (of course) can be described in the frequency domain. The nonlinearity does complicate the analysis, but I think that the same is true for the regularization used in deconvolution.

This description might prove instructive for relatively "small-scale" USM parameters ("sharpening"), while larger-scale "local contrast" modification might be more easily comprehended in the spatial domain?

Thus, my claim (and I don't have the maths to back it up) is that USM is very similar to (naive) deconvolution, and that both can be described as inverting an implicit/explicit model of the image degradation. The most important difference seems to be that USM practically always has a fixed kernel (of variable sigma), while deconvolution tends to have a highly parametric (or even blindly estimated) kernel, thus giving more parameters to tweak and (if chosen wisely) better results. It seems that practical deconvolution tends to use nonlinear methods, e.g. to satisfy the simultaneous (and contradictory) goals of detail enhancement and noise suppression. These may well give better numerical/perceived compromises, but it does not (in my mind) make it right to claim that "deconvolution recreates detail, while USM fakes it".
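The "USM collapses to a single convolution kernel" point is easy to check numerically. Here is a minimal Python/NumPy sketch; the 7-tap Gaussian, the amount of 0.8 and the random test signal are arbitrary illustrative choices of mine, not anyone's actual implementation:

```python
import numpy as np

def conv(signal, kernel):
    # edge-padded 1-D convolution, same length as the input
    r = len(kernel) // 2
    return np.convolve(np.pad(signal, r, mode="edge"), kernel, mode="valid")

x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
g /= g.sum()                                # 7-tap Gaussian, sigma = 1

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, 100)          # arbitrary 1-D "image"

amount = 0.8
usm = img + amount * (img - conv(img, g))   # the classic USM recipe

# the very same operation expressed as ONE kernel: (1 + amount)*delta - amount*g
k = -amount * g
k[len(k) // 2] += 1.0 + amount

print(np.allclose(usm, conv(img, k)))       # USM is a single linear filter
```

Because the whole blur-subtract-add-back recipe folds into one fixed kernel, USM implicitly assumes one fixed degradation model, which is the crux of the comparison with deconvolution.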

http://homepages.inf.ed.ac.uk/rbf/HIPR2/unsharp.htm


-h
« Last Edit: August 15, 2014, 03:59:12 am by hjulenissen »
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #145 on: August 15, 2014, 04:29:22 am »

Here is my attempt to deconvolve the same section of the image. It needs more work.

Hi,

This was deconvolving the original image, or was it deconvolving the image blurred with a Gaussian blur?  If the latter then pretty impressive.

What tools/techniques do you use with wavelets?

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #146 on: August 15, 2014, 05:02:06 am »

And my claim has been that what seems to be your underlying assumption is wrong. Not that I blame you; the same claim seems to reverberate all over the net: "deconvolution recreates detail, USM fakes detail". Let me modify your analogy: the classical radiator has a single variable, controlled by the user, and no thermometer to be seen anywhere. Getting the right temperature can be challenging. A more modern radiator might have one or more temperature probes, and thus can make more well-informed choices.

I believe that the aforementioned claim is not supported by an analysis of what USM (in various incarnations) does compared to deconvolution. Information cannot be made out of nothing (Shannon & friends). These methods can only transform information present at their input in a way that more closely resembles some assumed reference, given some assumed degradation. When USM uses a windowed Gaussian subtracted from the image itself, this is (in effect) a convolution with a single kernel, seemingly the linear-phase complementary filter. Thus, the sharpening used in USM can perhaps be described as inverting the implicitly assumed Gaussian image degradation. A function that (of course) can be described in the frequency domain. The nonlinearity does complicate the analysis, but I think that the same is true for the regularization used in deconvolution.

This description might prove instructive for relatively "small-scale" USM parameters ("sharpening"), while larger-scale "local contrast" modification might be more easily comprehended in the spatial domain?

Thus, my claim (and I don't have the maths to back it up) is that USM is very similar to (naive) deconvolution, and that both can be described as inverting an implicit/explicit model of the image degradation. The most important difference seems to be that USM practically always has a fixed kernel (of variable sigma), while deconvolution tends to have a highly parametric (or even blindly estimated) kernel, thus giving more parameters to tweak and (if chosen wisely) better results. It seems that practical deconvolution tends to use nonlinear methods, e.g. to satisfy the simultaneous (and contradictory) goals of detail enhancement and noise suppression. These may well give better numerical/perceived compromises, but it does not (in my mind) make it right to claim that "deconvolution recreates detail, while USM fakes it".


Hi,

The whole notion of deconvolution, as I understand it, is that since we are dealing with an essentially linear system, we can separate the various components with no degradation.  So if we take the original image g and apply the blurring function f to it to get the blurred image h, we can simply remove f from h (assuming we know f) and we will get back to g.  So although it seems to be getting something back from nothing, in the case of a blurred image we are really just getting back the component we want and leaving behind the component we don't want.

The example I give above of an image blurred with a Gaussian blur of 8 and then 'unblurred' by removing the blurring from it, restoring the original image perfectly, illustrates this point dramatically.  Of course, in this case f is known perfectly, so the original image can be extracted from the blurred image perfectly; as you say, our problem is how to find out what f is for our images.

I may be entirely wrong here, but I don't think image deconvolution is non-linear, because if it were it wouldn't work.  Even if there are non-linearities in the system, the algorithm would have to approximate a linear system. This comment in the ImageJ macro explains one technique for dealing with noise:

// Regarding adding noise to the PSF, deconvolution works by
// dividing by the PSF in the frequency domain.  A Gaussian
// function is very smooth, so its Fourier, (um, Hartley)
// components decrease rapidly as the frequency increases.  (A
// Gaussian is special in that its transform is also a
// Gaussian.)  The highest frequency components are nearly zero.
// When FD Math divides by these nearly-zero components, noise
// amplification occurs.  The noise added to the PSF has more
// or less uniform spectral content, so the high frequency
// components of the modified PSF are no longer near zero,
// unless it is an unlikely accident.

So what this is doing is adding noise to the PSF in order to avoid noise amplification in the deconvolution (which is pretty smart!).  Again, this is assuming a linear system.
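For anyone who wants to see the linearity argument in action, here is a small Python/NumPy sketch of my own: a signal is blurred by multiplying with the Gaussian's transform in the frequency domain, then restored by dividing it out again. I chose a sigma of 2 (rather than 8) only to keep the floating-point arithmetic well conditioned; with a known PSF and no noise the recovery is essentially perfect, and it is exactly the division by near-zero high-frequency components that the noise-added PSF in the macro guards against:

```python
import numpy as np

n = 256
rng = np.random.default_rng(1)
orig = rng.uniform(0.0, 1.0, n)             # stand-in for one row of an image

# wrap-around Gaussian PSF, sigma = 2 pixels, peak at index 0
dist = np.minimum(np.arange(n), n - np.arange(n))
psf = np.exp(-dist**2 / (2 * 2.0**2))
psf /= psf.sum()

H = np.fft.fft(psf)                         # the blur f, in the frequency domain
blurred = np.real(np.fft.ifft(np.fft.fft(orig) * H))

# with f known exactly and no noise, plain division undoes the blur
restored = np.real(np.fft.ifft(np.fft.fft(blurred) / H))

print(np.abs(blurred - orig).max())         # substantial blur error
print(np.abs(restored - orig).max())        # essentially zero
```

Add even a little noise to `blurred` and the same division explodes, because the noise is divided by the PSF's nearly-zero high-frequency components - the situation the macro's comment describes.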

As for USM ... if the implementation in Photoshop etc. is not the conventional one of creating an overlay by blurring/subtracting, but instead uses a convolution kernel - then yes, it's also doing a deconvolution, and the difference between it and another deconvolution comes down to the algorithm and implementation.  However, the belief seems to be that USM creates a mask by blurring the image and subtracting the blurred image from the original, effectively eliminating the low-frequency components (like a high-pass filter).  This mask is then used to add contrast to the high-frequency components of the image.  So, within the constraints of my limited understanding, in USM we are adding a signal, whereas in deconvolution we are subtracting one.  The question for me is ... is the signal we are adding the inverse of the signal we are subtracting?  (It's true that in the case of USM we have a high-frequency signal, whereas in deconvolution we have a low-frequency one.)  I would think that it is not, because adding contrast is not the inverse of removing blurring: we now have an additional signal c in the equation, c being a high-frequency signal that is added to the high-frequency components of the image.

Someone who understands the maths better than me would need to answer this question.

Robert

« Last Edit: August 15, 2014, 05:20:44 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Jack Hogan

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #147 on: August 15, 2014, 05:24:08 am »

Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.

As I've said earlier, such a 'simple' deconvolution tends to also 'enhance' noise (and things like JPEG artifacts), because it can't discriminate between signal and noise. So one might want to use this with a blend-if layer or with masks that are opaque for smooth areas (like blue skies, which are usually a bit noisy due to their low photon counts, and the demosaicing of that).

Upsampled images would require likewise upsampled filter kernel dimensions, but a 5x5 kernel is too limited for that, so this is basically only usable for original size or down-sampled images.

Hi Bart,

Could you expand on the math that resulted in the kernel above for a gaussian blurring function of radius r? f1=>F1, 1/F1=F2, F2=>f2?

Thank you.
Jack
« Last Edit: August 15, 2014, 06:57:18 am by Jack Hogan »
Logged

Jack Hogan

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #148 on: August 15, 2014, 09:59:35 am »

Someone who understands the maths better than me would need to answer this question.

I don't know about the math but from what I understand USM is somewhat equivalent to taking a black/white marker and drawing along every transition in the picture to make it stand out more - automatically.  Line thickness and darkness are chosen arbitrarily to achieve the desired effect, much like painters do.   One way to look at USM is to imagine coming across one such simplified transition in an image, say a sharp edge from black to white, and plotting its profile as if you were crossing it perpendicularly.  The plot of the relative brightness (Luminance) profile might look something like this (0 signal is black, 1 is white, from an actual Edge Spread Function):



The painter/photographer then says to herself: "Hmm, that's one fuzzy edge.  It takes what looks like the distance of 6 pixels to go from black to white.  Surely I can make it look sharper than that.  Maybe I can arbitrarily squeeze its ends together so that it fits in fewer pixels".  She takes out her tool (USM/marker), dials in darkness 1, thickness 1 and redraws the transition to her liking:



Now the transition fits in a space of less than two pixels.  "Alright, that looks more like it" she says contentedly and moves on to the next transition.

The only problem with this approach to sharpening is that it has little if anything to do with the reality of the scene.  It is completely perceptual, arbitrary and destructive (not reversible).  We can make the slope of the transition (acutance!) as steep as we like simply by choosing more or less aggressive parameters.  MTFs shoot through the roof beyond what's physically possible; actual scene information need not apply.  Might as well draw the transition in with a thick marker :)

Nothing inherently wrong with it: the USM approach is perfectly fine and quite useful in many cases, especially where creative or output sharpening are concerned.  But as far as capture sharpening is concerned, upon closer scrutiny USM always disappoints (at least me) because its arbitrariness and artificiality show up in all their glory, as you can clearly see above: halos (overshoots, undershoots), ringing, pixels boldly going where they were never meant to be (where is the center of the transition now?).

So what is the judicious pixel peeper supposed to do in order to restore a modicum of pre-capture sharpness?  Well, contrary to USM's approach, one could start with scene information first.  If the aggregate edge profile in the raw data looks like that, such and such an aperture produces this type of blur, the pixels were this shape and size, the AA filter was this strong and of this type, and the lens bends and blurs light that way around the area of the transition - perhaps we can try to undo some of the blurring introduced by each of these components of our camera/lens system and take a good stab at reconstructing what the edge actually looked like before it was blurred by them.

The process by which we attempt to undo one by one the blurring introduced by each of these components is called deconvolution.  Deconvolution math is easier performed in the frequency domain because there it involves mainly simple division/multiplication.  If one can approximately model and characterize the effect of each component in the frequency domain, one can in theory undo blurring introduced by it - with many limits, mostly imposed by imperfect modeling, complicated 2D variations in parameters and (especially) noise combined with insufficient energy.  In general photography you can undo some of it well, some of it approximately, some of it not at all.  This is what characterization in the frequency domain looks like for the simplest components to model:



Deconvolution can also be performed in the spatial domain by applying discrete kernels to the image, but less well (see for instance my question to Bart above).  Either way, the results as far as capture sharpening is concerned are much more appealing to the eye of this pixel peeper than the rougher, arbitrary alternative of USM.  And the bonus is that deconvolution is by and large reversible and not as destructive.  USM can always be added later, in moderation, to specific effect.
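To make the noise limitation concrete, here is a Python/NumPy sketch of my own of frequency-domain deconvolution on a one-dimensional "bar" signal. The naive inverse filter divides by the blur's transfer function and explodes wherever that function is nearly zero; a regularized (Wiener-style) division keeps the noise in check at the price of an imperfect restoration. The sigma, noise level and regularization constant are arbitrary illustrative choices:

```python
import numpy as np

n = 128
x = np.zeros(n)
x[56:72] = 1.0                              # a bright bar: the "edges" to restore

# wrap-around Gaussian PSF, sigma = 1.5 pixels
dist = np.minimum(np.arange(n), n - np.arange(n))
psf = np.exp(-dist**2 / (2 * 1.5**2))
psf /= psf.sum()
H = np.fft.fft(psf)

rng = np.random.default_rng(2)
blurred = np.real(np.fft.ifft(np.fft.fft(x) * H)) + rng.normal(0, 1e-3, n)

# naive inverse filter: noise divided by near-zero H components blows up
naive = np.real(np.fft.ifft(np.fft.fft(blurred) / H))

# regularized (Wiener-style) division: damped wherever |H| is small
wiener = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H) /
                             (np.abs(H)**2 + 1e-4)))

print(np.abs(naive - x).max())              # huge: noise amplification
print(np.abs(wiener - x).max())             # modest: regularized restoration
```

The regularization constant plays the same role as the noise added to the PSF in the ImageJ macro quoted earlier in the thread: it stops the division from amplifying whatever sits in the frequencies the blur has crushed to nothing.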

In a nutshell: USM is a meat cleaver handled by an artistic butcher.  Deconvolution is a scalpel wielded by a scientific surgeon :-)

Jack
« Last Edit: August 15, 2014, 10:16:27 am by Jack Hogan »
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #149 on: August 15, 2014, 11:41:56 am »

Using your graph I attempted to add, in green, a curve that simulates what deconvolution does. Compared to USM, which moves values in the Y direction, deconvolution squeezes the edge in the X direction.
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #150 on: August 15, 2014, 11:46:39 am »

Hi,

This was deconvolving the original image, or was it deconvolving the image blurred with a Gaussian blur?  If the latter then pretty impressive.

What tools/techniques do you use with wavelets?

Robert

It is the multi-resolution smooth/sharpen feature in ImagesPlus.

I was playing around with the scene again in RT last night. I actually got a very good result by dropping the damping to 0 and moving the radius to 0.80. I had never touched the damping before. Suddenly the R-L deconvolution in RT seems more powerful.

I will have to post it tonight when I am off the laptop.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #151 on: August 15, 2014, 11:56:15 am »

Could you expand on the math that resulted in the kernel above for a gaussian blurring function of radius r? f1=>F1, 1/F1=F2, F2=>f2?

Hi Jack,

I didn't remember exactly how I got there at first, because I rarely use the crude Photoshop implementation, but rather an exact floating-point implementation in other software (e.g. ImageJ). But after some pondering I do remember that I started out with my PSF generator tool, with a Blur sigma of 0.7, Fill-factor 100%, Kernel size of 5x5, Integer numbers, Deconvolution type of kernel, and typing a scale that would start to show useful integer values; aiming for a central value of 999 meant scaling by something like 1378.

However, by using the scale factor in my tool, one amplifies the 'amount' of sharpening, and Photoshop uses its scale in a somewhat different way (just a division of the kernel values), which would also need to come down to something like 251 to keep a relatively balanced brightness in flat regions, but it would still over-sharpen due to the amount boost. So I abandoned that approach because it would give the wrong weights, and I changed the integer type of kernel back to floating point with a scale of 1.0.

Now, keeping in mind the way Photoshop's scaling works, and with a goal of a central value of about 999 (now 1.724232265654046 at a scale of 1.0), it turned out that a custom filter scale factor of approximately 579 was required; all floating-point kernel values were multiplied by that same amount (to be divided again later by that scale factor by PS) and rounded to integers.

I remember I had to tweak a few values to get uniform brightness before and after sharpening uniform areas (requires 16-bit/channel data), so I finally arrived at the values given earlier.

That was about the train of thought, which should work for other blur radii just as well, without boosting the sharpening amount.

Cheers,
Bart
.
« Last Edit: August 15, 2014, 11:58:19 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #152 on: August 15, 2014, 01:07:39 pm »

I don't know about the math but from what I understand USM is somewhat equivalent to taking a black/white marker and drawing along every transition in the picture to make it stand out more - automatically.

Jack, the math is very basic and simple. Here is an attempt to clarify with some numbers, based on a Gaussian blur of 1.0, Fill-factor 100%:

PSF (GB 1.0, FF=100%): 3.37868E-06 0.000229231 0.005977036 0.060597538 0.241730347 0.382924937 0.241730347 0.060597538 0.005977036 0.000229231 3.37868E-06

ORIGINAL SIGNAL: 50 50 50 50 50 50 200 200 200 200 200 200
CONVOLVED SIGNAL: 50 50.0005 50.0349 50.9314 60.0211 96.2806 153.719 189.979 199.069 199.965 199.999 200

This convolved signal is our digital image of an abrupt brightness change in the original scene after optical blur and Raw conversion, and it's the object of further calculations.

Now, what USM essentially does is blur the convolved image again, let's assume by a Radius of 1.0 to stay consistent with a deconvolution attempt to undo the blur that our original image was subjected to. This blurred image facsimile is then subtracted from the convolved image it originated from, and the difference is added back to the convolved image in a layered fashion.

CONVOLVED SIGNAL: 50.0000 50.0005 50.0349 50.9314 60.0211 96.2806 153.719 189.979 199.069 199.965 199.999 200

 BLURRED VERSION: 50.0103 50.1359 51.1468 56.2446 72.4085 104.681 145.319 177.591 193.755 198.853 199.864 199.99
      DIFFERENCE: -0.0103 -0.1354 -1.1119 -5.3132 -12.3874 -8.4004 8.4 12.388 5.314 1.112 0.135 0.01

CONVOLVED + DIFF: 49.9897 49.8651 48.923 45.6182 47.6337 87.8802 162.119 202.367 204.383 201.077 200.134 200.01


The Difference layer is a halo layer which is added to the convolved input layer; an Amount setting would amplify the values, and thus amplify the halo amplitude. Clearly, a radius of 1.0 is relatively wide to use, but it was used for consistency with deconvolution, which does something other than layered addition of differences. One could use a smaller radius and increase the amount, but it would only create a narrower halo. Halos are inherent in USM, and therefore require a significant effort to reduce their visibility.
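The rows above can be reproduced in a few lines of Python/NumPy, for anyone who wants to experiment with other radii or amounts. Only the PSF values are taken from the post; the edge-extension at the borders is my assumption, and it matches the numbers shown:

```python
import numpy as np

# the GB 1.0, FF=100% PSF quoted above
psf = np.array([3.37868e-06, 0.000229231, 0.005977036, 0.060597538,
                0.241730347, 0.382924937, 0.241730347, 0.060597538,
                0.005977036, 0.000229231, 3.37868e-06])

def conv(signal, kernel):
    # edge-extended convolution, same length as the input
    r = len(kernel) // 2
    return np.convolve(np.pad(signal, r, mode="edge"), kernel, mode="valid")

orig = np.array([50.0] * 6 + [200.0] * 6)   # the abrupt brightness change
convolved = conv(orig, psf)                 # the "captured" edge
blurred = conv(convolved, psf)              # USM blurs the capture again
diff = convolved - blurred                  # the halo (mask) layer
usm = convolved + diff                      # amount = 100%

print(np.round(convolved, 4))
print(np.round(usm, 4))
```

The overshoot above 200 and undershoot below 50 on either side of the transition are the halo Bart describes; raising the amount only makes them taller.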

CONVOLVED SIGNAL: 50.0000 50.0005 50.0349 50.9314 60.0211 96.2806 153.719 189.979 199.069 199.965 199.999 200
  RL DECONVOLVED: 50.7165 50.0871 48.0134 54.9656 41.4365 61.9295 186.182 213.192 189.792 206.158 197.43 200.374

The Richardson-Lucy (RL) deconvolution is an iterative method that's more effective than the simple version discussed earlier, but it uses the same kind of principles: not layer masking, but deconvolution.
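For the curious, the RL iteration itself is short enough to sketch. This is my own toy Python/NumPy version run on the same 50-to-200 edge; real implementations add damping, clipping and careful edge handling, and the iteration count and sigma-1 PSF here are arbitrary choices:

```python
import numpy as np

def conv(signal, kernel):
    # edge-extended convolution, same length as the input
    r = len(kernel) // 2
    return np.convolve(np.pad(signal, r, mode="edge"), kernel, mode="valid")

def richardson_lucy(observed, psf, iterations=100):
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reblurred = np.maximum(conv(estimate, psf), 1e-12)
        ratio = observed / reblurred              # where the estimate is off
        estimate = estimate * conv(ratio, psf[::-1])
    return estimate

t = np.arange(-5, 6)
psf = np.exp(-t**2 / 2.0)
psf /= psf.sum()                                  # Gaussian, sigma = 1

orig = np.array([50.0] * 6 + [200.0] * 6)
observed = conv(orig, psf)
restored = richardson_lucy(observed, psf)

print(np.abs(observed - orig).max())              # blur error
print(np.abs(restored - orig).max())              # much smaller after RL
```

Note the multiplicative update: the estimate stays positive throughout, which is one of the "regularizing" properties that make RL better behaved than plain division.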

Cheers,
Bart
« Last Edit: August 15, 2014, 06:38:34 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Jack Hogan

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #153 on: August 15, 2014, 01:18:45 pm »

...I remember I had to tweak a few values to get uniform brightness before and after sharpening uniform areas (requires 16-bit/channel data), so I finally arrived at the values given earlier.

That was about the train of thought, which should work for other blur radii just as well, without boosting the sharpening amount.

Thanks Bart, trial and error then I guess :)  I was hoping you had worked through a little more math (because I got stuck doing this myself).  Let's work in one dimension to simplify things initially:

1) Spatial domain blur function (PSF) of a Gaussian blur of radius r -->    g(x) = 1/(r.sqrt(2.pi)) * exp[-(x/r)^2/2]

If plotted, this PSF would look like the classic bell curve.  The relative kernel would have values that rise and fall accordingly;

2) Take the Fourier transform of 1) to switch to the Frequency Domain ---> G(w) = exp[-(wr)^2/2]

with w=2.pi.s, s= frequency;

3) Calculate D(w) = 1/G(w) = exp[(wr)^2/2]

4) Take the inverse Fourier Transform of 3) to switch back to the spatial domain ---> d(x) = ... ? :(

If we had the formula for d(x) we could simply read off the values for the kernel to deconvolve gaussian blur of radius r.  Help?
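A note on step 4: the inverse Fourier transform of exp[(wr)^2/2] does not exist as an ordinary function - it grows without bound as w increases - which is why the analytic route dead-ends there. In practice one regularizes the division and works on a sampled grid, reading the kernel taps off a discrete inverse FFT. A Python/NumPy sketch of my own; the grid size and the regularization constant 1e-4 are arbitrary choices, and a plain 1/G would diverge where G approaches zero:

```python
import numpy as np

r = 0.7                                   # Gaussian sigma, in pixels
n = 33                                    # odd support for the discrete grid

x = np.arange(n) - n // 2
g = np.exp(-x**2 / (2 * r**2))
g /= g.sum()                              # sampled PSF, centred

G = np.fft.fft(np.fft.ifftshift(g))       # step 2: to the frequency domain
D = np.conj(G) / (np.abs(G)**2 + 1e-4)    # step 3, regularized instead of 1/G
d = np.real(np.fft.fftshift(np.fft.ifft(D)))   # step 4: back to a spatial kernel

centre = n // 2
print(np.round(d[centre - 2 : centre + 3], 3))  # central 5 taps
```

The result has the familiar inverse-kernel shape: a large positive centre tap flanked by negative neighbours, and the taps sum to roughly 1 so flat areas keep their brightness. Truncating to a 5x5 support, as Photoshop's Custom filter forces, is a further approximation on top of this.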

Jack
« Last Edit: August 15, 2014, 04:07:04 pm by Jack Hogan »
Logged

Jack Hogan

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #154 on: August 15, 2014, 04:05:24 pm »

Jack, the math is very basis and simple. Here is an attempt to clarify with some numbers, based on a Gaussian blur of 1.0, Fill-factor 100%:

Got it, thanks Bart.  My question about USM was rhetorical more than anything else.  The one about how to properly calculate the deconvolution kernel of a Gaussian is, on the other hand, real: I am stuck there :)
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #155 on: August 15, 2014, 04:36:30 pm »

Got it, thanks Bart.  My question about USM was rhetorical more than anything else.  The one about how to properly calculate the deconvolution kernel of a Gaussian is, on the other hand, real: I am stuck there :)

Jack, there are some tricks involved in going from continuous to discrete functions. I use the following to calculate kernel values for Gaussian blurs of arbitrary sigma (radius): https://www.dropbox.com/s/igxwk0izafkbnr9/Gaussian_PSF_2.png

x and y are the kernel positions around the central [0,0] kernel position, and they are in principle integer offsets from the center, and sdx and sdy are the horizontal and vertical sigmas (standard deviations), they are usually identical for a symmetrical blur. A 100% fill factor is assumed (pixel center +/- 1/2 pixel). The Erf() is the Error Function.
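For those reading without the linked image: as I understand the description above, each tap is the Gaussian probability mass falling within the pixel's +/- 0.5 extent (100% fill factor), written with the error function Erf, and the 2-D kernel is the product of the two 1-D weight sets. A small Python sketch of what I believe that amounts to:

```python
import math

def pixel_integrated_gaussian(sd, radius):
    """Gaussian PSF integrated over unit pixel areas (100% fill factor).

    Each 1-D weight is the Gaussian mass inside the pixel's +/- 0.5 extent,
    expressed via the error function; the 2-D kernel is separable."""
    def tap(i):
        s = math.sqrt(2.0) * sd
        return 0.5 * (math.erf((i + 0.5) / s) - math.erf((i - 0.5) / s))

    w = [tap(i) for i in range(-radius, radius + 1)]
    return [[wy * wx for wx in w] for wy in w]

k = pixel_integrated_gaussian(0.7, 2)     # 5x5 PSF for sigma 0.7
print(round(sum(map(sum, k)), 4))         # just under 1: the truncated tails
```

Unlike point-sampling the Gaussian, the integrated form stays accurate for small sigmas, where most of the curve's variation happens inside a single pixel.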

Cheers,
Bart
« Last Edit: August 15, 2014, 04:53:44 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #156 on: August 15, 2014, 05:01:24 pm »

It is the multi-resolution smooth/sharpen feature in images plus.


Hi,

My question was whether the image was the original raw image (so you're trying to get the best detail from it) or were you doing a more brutal test, that is, to blur the image with a Gaussian blur and then attempt to recover the original (as per the example I gave using ImageJ)?

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #157 on: August 15, 2014, 07:05:39 pm »

Hi,

My question was whether the image was the original raw image (so you're trying to get the best detail from it) or were you doing a more brutal test, that is, to blur the image with a Gaussian blur and then attempt to recover the original (as per the example I gave using ImageJ)?

Robert

I did not blur it.
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #158 on: August 16, 2014, 01:15:30 am »

I did not blur it.

Thanks. 

I would really love to see an example of an image, blurred in Photoshop with a Gaussian blur of, say, 4, and then restored using deconvolution. Ideally I would like to see the deconvolution done both with a kernel and with Fourier methods.

Secondly, I would also really love to be shown a method to photograph a point light source with my camera (for a given fixed focal length), and then to use this to produce a deconvolution kernel.

Thirdly, I would really, really love to see the two above put together: taking a point light source, say a white oval on a black background in Photoshop, and applying a blur of some sort to it, we could work out the deconvolution kernel and use it to restore an image that had the same blur applied to it.

It's fascinating to learn about the technicalities (some of it well over my head, although I'm getting there bit by bit :)), but the next step for me would be putting it into practice ... not using a black box like FocusMagic, say, but doing it step by step using the techniques and tools currently available.

Any takers?

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #159 on: August 16, 2014, 04:25:03 am »

I would really love to see an example of an image, blurred in Photoshop with a Gaussian blur of, say, 4, and then restored using deconvolution. Ideally I would like to see the deconvolution done both with a kernel and with a Fourier-based method.

Hi Robert,

A Gaussian blur of 4 is HUGE, and Photoshop's implementation of the Gaussian blur may not be exact, so a half-way decent deconvolution may be virtually impossible. But I understand you want a real challenge.
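To make that fragility concrete, here is a minimal NumPy sketch of frequency-domain deconvolution of a known Gaussian blur: a Wiener-style inverse, with a small damping constant k standing in for proper regularization. This is an illustration of the principle only, not how FocusMagic (or Photoshop) works internally:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centred, normalised Gaussian point-spread function."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Divide by the blur's transfer function in the frequency domain,
    damped by k so frequencies where the OTF is near zero don't explode."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))  # move PSF centre to [0, 0]
    filt = np.conj(otf) / (np.abs(otf) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * filt))
```

With no noise and a tiny k the round trip is nearly exact; with real sensor noise k must be raised, and the highest frequencies are lost. That is exactly why the blur kernel has to be known accurately: dividing by the wrong transfer function amplifies the mismatch instead of undoing the blur.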

Quote
Secondly, I would also really love to be shown a method to photograph a point light source with my camera (for a given fixed focal length), and then to use this to produce a deconvolution kernel.

The problems with properly photographing a small point light source are many. A light source behind an aperture won't work, because the aperture would cause diffraction. A more appropriate target is a small reflective sphere that reflects a distant light source, although it is then a problem to shield the surroundings from also being reflected by the sphere. Shooting a distant star would suffer from atmospheric turbulence and motion. And of course each lens behaves differently at different apertures and focusing distances, and its blur is not uniform across the image. Camera vibration also needs to be eliminated.

That's why methods like the slanted edge are more commonly used to model the behavior of lenses in two orthogonal directions, along with methods that quantify the deterioration of the power spectrum of white noise or of a 'dead-leaves' target.

I'm currently working on a 'simpler' method based on measuring the edge transitions in all 360 degree orientations, using a test target like this example:


Quote
Thirdly, I would really, really love to see the two above put together: taking a point light source, say a white oval on a black background in Photoshop, and applying a blur of some sort to it, we could work out the deconvolution kernel and use it to restore an image that had the same blur applied to it.

I'm working on it ... ;), but for the moment a slanted-edge approach goes a long way toward characterizing the actual blur kernel with sub-pixel accuracy.
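The core of the slanted-edge trick is small enough to sketch. Because the edge is slightly tilted, the pixel centres in each row cross the edge at many different sub-pixel phases; binning their signed distances to the edge at, say, quarter-pixel steps yields an edge-spread function sampled finer than the pixel grid. A rough illustration of the principle (not an ISO 12233-compliant implementation, which also estimates the edge position and angle from the data):

```python
import numpy as np

def oversampled_esf(roi, edge_angle_deg, bin_frac=0.25):
    """Oversampled edge-spread function (ESF) from a slanted-edge ROI,
    assuming the edge passes through the ROI centre at a known angle."""
    h, w = roi.shape
    y, x = np.indices(roi.shape)
    theta = np.radians(edge_angle_deg)
    # signed distance of each pixel centre from the edge line
    d = (x - w / 2) * np.cos(theta) + (y - h / 2) * np.sin(theta)
    bins = np.round(d / bin_frac).astype(int)
    bins -= bins.min()  # np.bincount needs non-negative indices
    counts = np.bincount(bins.ravel())
    sums = np.bincount(bins.ravel(), weights=roi.astype(float).ravel())
    valid = counts > 0
    return sums[valid] / counts[valid]  # np.diff of this gives the LSF
```

Differentiating the ESF gives the line-spread function, i.e. a 1-D slice through the blur kernel in the direction perpendicular to the edge; doing this for edges at many orientations is one way to build up the full 2-D kernel.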

Quote
It's fascinating to learn about the technicalities (some of it well over my head, although I'm getting there bit by bit :)), but the next step for me would be putting it into practice ... not using a black box like FocusMagic, say, but doing it step by step using the techniques and tools currently available.

Tools like FocusMagic are real time-savers; doing it another way may require significant resources, among them dedicated math software and a lot of calibration and processing time.

Cheers,
Bart
« Last Edit: August 16, 2014, 04:56:05 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==