
Author Topic: Deconvolution sharpening revisited  (Read 266066 times)

deejjjaaaa

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1170
Deconvolution sharpening revisited
« Reply #100 on: July 31, 2010, 02:12:13 am »

Quote from: ejmartin
We now have it on Eric Chan's authority that, when the detail slider is cranked up to 100%, the sharpening in ACR 6 is deconvolution based.

Actually, he was saying that it always involves deconvolution as long as the detail slider is > 0; only at 100 is it pure deconvolution, and between 0 and 100 it is a blend of the USM and deconvolution outputs... unless Eric Chan wants to provide any further clarification.
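
For anyone who wants a concrete picture of what such a blend means, here is a toy sketch in Python/NumPy. To be clear, this is NOT Adobe's code, just an illustration of the idea; usm_sharpen and deconv_sharpen are placeholders for whatever sharpeners you care to plug in.

import numpy as np

def blended_sharpen(image, detail, usm_sharpen, deconv_sharpen):
    # detail = 0 -> pure USM-style result, detail = 100 -> pure deconvolution,
    # anything in between -> a simple linear mix of the two outputs.
    t = np.clip(detail, 0, 100) / 100.0
    return (1.0 - t) * usm_sharpen(image) + t * deconv_sharpen(image)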
Logged

Schewe

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6229
  • http://www.schewephoto.com
Deconvolution sharpening revisited
« Reply #101 on: July 31, 2010, 02:43:08 am »

Quote from: ejmartin
But I would ask again, why do you want to throw dirt on deconvolution methods if you are lavishing praise on ACR 6.1?

I'm not throwing dirt on deconvolution methods, other than to state that in MY experience (which is not inconsiderable) the effort does not bear the fruit that advocates seem to promise. Read that to mean: I can't seem to find a solution better than the tools I'm currently using without going through EXTREME effort.

ACR 6.1 seems pretty darn good to me, how about you?

You got any useful feedback to contribute?

What do YOU want in image sharpening?

Do you think computational solutions will solve everything?

Have you actually learned how to use ACR 6.1?

How many hours do YOU have in ACR 6.1? (The odds are I've prolly got a few more hours in ACR 6.1/6.2 than you might, and I've worked to improve the ACR sharpening more than most people have.)
Logged

ejmartin

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 575
Deconvolution sharpening revisited
« Reply #102 on: July 31, 2010, 02:55:27 am »

Quote from: Schewe
I'm not throwing dirt on deconvolution methods, other than to state that in MY experience (which is not inconsiderable) the effort does not bear the fruit that advocates seem to promise. Read that to mean: I can't seem to find a solution better than the tools I'm currently using without going through EXTREME effort.

ACR 6.1 seems pretty darn good to me, how about you?

You got any useful feedback to contribute?

What do YOU want in image sharpening?

Do you think computational solutions will solve everything?

Have you actually learned how to use ACR 6.1?

How many hours do YOU have in ACR 6.1? (The odds are I've prolly got a few more hours in ACR 6.1/6.2 than you might, and I've worked to improve the ACR sharpening more than most people have.)

A clumsy attempt to change the subject.  You still seem to be making an artificial distinction between deconvolution methods and ACR 6.x


Logged
emil

Schewe

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6229
    • http://www.schewephoto.com
Deconvolution sharpening revisited
« Reply #103 on: July 31, 2010, 03:08:43 am »

Quote from: ejmartin
A clumsy attempt to change the subject.  You still seem to be making an artificial distinction between deconvolution methods and ACR 6.x


No, I was responding to the actual results Ray posted, which showed the 1K deconvolution output compared to ACR 6.1.

What are you responding to?

Simply the fact that I'm actually posting a response in this thread?
Logged

joofa

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 544
Deconvolution sharpening revisited
« Reply #104 on: July 31, 2010, 04:28:13 am »

Quote from: Ray
Well, Joofa, you obviously appear to know what you are talking about. I confess I have almost zero knowledge about the Gibbs phenomenon, but I can appreciate that it may be useful to be able to identify and name any artifacts one may see in an image, especially if one is examining an X-ray of someone's medical condition, or indeed searching for evidence of alien life on a distant planet.

Hi Ray,

I never said anything regarding the comparison of Bart's and your images. I just mentioned that not all "ringing" artifacts are Gibbs, and in the usual deconvolution, if any ringing is found, it may not be Gibbs but may arise for other reasons.

A more technical note: The deconvolution problem is typically ill-posed, at least, initially. In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind. In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated. More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage. Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation. Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough. However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results. Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained. Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.
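
For readers who prefer code to prose, here is a bare-bones Richardson-Lucy iteration in Python/NumPy/SciPy. It is the plain maximum-likelihood form described above, with no regularization, so the noise build-up mentioned will show after enough iterations; the flat starting guess and iteration count are arbitrary choices, and the PSF is assumed to be a small 2-D kernel normalized to sum 1.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    # Plain RL: estimate <- estimate * ((observed / (estimate conv psf)) conv psf_mirrored)
    observed = observed.astype(float)
    estimate = np.full_like(observed, observed.mean())   # flat starting guess
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)               # data / current model
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

Adding a damping or total-variation penalty to each step is one common way of pushing this toward the better-behaved MAP-style estimates described above.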
« Last Edit: July 31, 2010, 04:35:03 am by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

John R Smith

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1357
  • Still crazy, after all these years
Deconvolution sharpening revisited
« Reply #105 on: July 31, 2010, 07:11:41 am »

Quote from: joofa
A more technical note: The deconvolution problem is typically ill-posed, at least, initially. In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind. In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated. More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage. Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation. Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough. However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results. Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained. Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.

I suppose, sometimes, if I start feeling a bit pleased with myself and think that I know quite a lot about photography, it is probably good for me to be taken down a peg or two and realise that there are people in this world with whom I would actually be unable to communicate, except on the level of "Would you like a cup of tea?".

John
Logged
Hasselblad 500 C/M, SWC and CFV-39 DB

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Deconvolution sharpening revisited
« Reply #106 on: July 31, 2010, 09:50:55 am »

Hi,

My conclusion from the discussion is that:

1) It is quite possible to regain some of the sharpness lost to diffraction using deconvolution, even if the PSF (Point Spread Function) is not known. It also seems to be the case that we have deconvolution built into ACR 6.1 and LR 3.

2) Setting "Detail" to high and varying the radius in LR3 and ACR 6.1 is a worthwhile experiment, but we may need to gain some more experience in how these tools should be used.

My experience is that "Deconvolution" in both ACR 6.1 and LR3 amplifies noise; we need to find out how to use all the settings to best effect.

Best regards
Erik


Quote from: John R Smith
I suppose, sometimes, if I start feeling a bit pleased with myself and think that I know quite a lot about photography, it is probably good for me to be taken down a peg or two and realise that there are people in this world with whom I would actually be unable to communicate, except on the level of "Would you like a cup of tea?".

John
Logged
Erik Kaffehr
 

deejjjaaaa

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1170
Deconvolution sharpening revisited
« Reply #107 on: July 31, 2010, 10:01:10 am »

Quote from: ErikKaffehr
My experience is that "Deconvolution" in both ACR 6.1 and LR3 amplifies noise; we need to find out how to use all the settings to best effect.

We can just ask Mr Schewe, can't we? With his endless hours spent on sharpening with ACR, he can just tell us and we will be all set.
Logged

John R Smith

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1357
  • Still crazy, after all these years
Deconvolution sharpening revisited
« Reply #108 on: July 31, 2010, 10:14:01 am »

Quote from: ErikKaffehr
Hi,

My conclusion from the discussion is that:

1) It is quite possible to regain some of the sharpness lost to diffraction using deconvolution, even if the PSF (Point Spread Function) is not known. It also seems to be the case that we have deconvolution built into ACR 6.1 and LR 3.

2) Setting "Detail" to high and varying the radius in LR3 and ACR 6.1 is a worthwhile experiment, but we may need to gain some more experience in how these tools should be used.

My experience is that "Deconvolution" in both ACR 6.1 and LR3 amplifies noise; we need to find out how to use all the settings to best effect.

Best regards
Erik

Erik

Thank you so much for this summary which even I can understand.

John
Logged
Hasselblad 500 C/M, SWC and CFV-39 DB

Ray

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 10365
Deconvolution sharpening revisited
« Reply #109 on: July 31, 2010, 10:36:10 am »

Quote from: John R Smith
I suppose, sometimes, if I start feeling a bit pleased with myself and think that I know quite a lot about photography, it is probably good for me to be taken down a peg or two and realise that there are people in this world with whom I would actually be unable to communicate, except on the level of "Would you like a cup of tea?".

John

Quote
A more technical note: The deconvolution problem is typically ill-posed, at least, initially. In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind. In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated. More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage. Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation. Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough. However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results. Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained. Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.

I sympathise with your frustration here, John, but let's not be intimidated by poor expression. Here's my translation, for what it's worth, sentence by sentence.

(1) The deconvolution problem is typically ill-posed.

means: The sharpening problem is often poorly defined. (That's easy).

(2) In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind.

means: The analog world, which is a smooth continuum, is different from the digital world with discrete steps. You need complex mathematics to deal with this problem, such as a Fredholm integral equation. (Whatever that is).

(3) In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated.

means: We're now stuck with the digital domain. There's a hangover from the analog world with incorrect definitions, but we can fix some of the problems. There's hope.

(4) More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage.

means: We can achieve a balanced result by sacrificing detail for smoothness.

(5) Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation.

means: The Richardson-Lucy method attempts to provide the best result, in terms of detail.

(6) Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough.

means: The best result may introduce noise.

(7)  However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results.

means: With a bit of experimentation we might be able to fix the noise problem.

(8) Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained.

means: Gaussian mathematics is used to get the best estimate for sharpening purposes; the sketch at the end of this post shows the classic linear (Wiener) version of that estimate. (Gauss was a German mathematical genius, considered to be one of the greatest mathematicians who ever lived. Far greater than Einstein, in the field of mathematics).

(9) Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.

means: You can get better results if you take more time and have more computing power.

Okay! Maybe I've missed a few nuances in my translation. No-one's perfect. Any improved translation is welcome.  
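
Picking up on (8): under Gaussian assumptions the linear/MMSE answer is the classic Wiener filter. Here is a minimal sketch in Python/NumPy, assuming a known shift-invariant PSF and a single hand-tuned noise-to-signal ratio (a proper implementation would estimate the noise and signal spectra instead).

import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    # Embed the PSF (normalized to sum 1) in an image-sized array and roll its
    # centre to the (0, 0) corner so the filter introduces no spatial shift.
    pad = np.zeros(blurred.shape)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf / psf.sum()
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)                        # transfer function of the blur
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener filter
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

In practice nsr is the knob: lower values sharpen more but amplify noise and ringing, higher values behave like the regularized ("smoothed") solutions discussed earlier.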


Logged

John R Smith

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1357
  • Still crazy, after all these years
Deconvolution sharpening revisited
« Reply #110 on: July 31, 2010, 10:55:00 am »

Well, good shot at it, Ray. I'm afraid I never got any further than O-Level maths, and I only just scraped that.

Don't mind me, do carry on chaps  

John
Logged
Hasselblad 500 C/M, SWC and CFV-39 DB

crames

  • Full Member
  • ***
  • Offline Offline
  • Posts: 210
    • http://sites.google.com/site/clifframes/
Deconvolution sharpening revisited
« Reply #111 on: July 31, 2010, 12:32:24 pm »

I'm afraid that sharpening cannot overcome the hard limit on resolution due to diffraction.

Here are versions of some of the posted images where the high and low frequencies have been separated into layers. They show what is being sharpened: only the detail that remains below the diffraction limit. The detail above the diffraction limit is lost and is not being recovered.

Original Crop (Undiffracted)

Diffracted Crop

Lucy Deconvolution

Ray FM Sharpened

The Lowpass layers include all of the detail that is enclosed within the central diffraction limit "oval" seen in the spectra I posted before. The Hipass layers include everything else outside of the central diffraction oval.

The following is a comparison of the Lowpass layers. This is where the sharpening is taking place, and it amounts only to approaching the quality of the Lowpass of the Original Crop.



Look at the Original Crop Hipass layer. This shows all the fine detail that the eye is craving, but which hasn't come back with any of the sharpening attempts. For fun, paste a copy of the original Hipass layer in Overlay mode onto any of the sharpened versions, or double the Hipass layer for a super-sharp effect.

Since diffraction pretty much wipes out the detail outside of the diffraction limit, deconvolution sharpening is generally limited to massaging whatever is left within the central cutoff.

From what I've read, detail beyond the diffraction cutoff has to be extrapolated ("Gerchberg method", for one), or otherwise estimated from the lower-frequency information. The methods are generally called "super-resolution". The Lucy method, due to a non-linear step in the processing, is supposed to have an extrapolating effect, but I'm not sure if it's visible here.
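
For anyone who wants to repeat the layer separation at home, here is a rough sketch with a hard circular frequency mask (Cliff's actual mask is an oval matched to the spectra he posted, and presumably softer, so treat this purely as an illustration). The pixel pitch, aperture and wavelength in the example are made-up numbers.

import numpy as np

def split_at_cutoff(img, cutoff_frac):
    # Lowpass = everything inside a circular cutoff expressed as a fraction of
    # Nyquist; hipass = the remainder.
    fy = np.fft.fftfreq(img.shape[0])           # cycles/pixel, Nyquist = 0.5
    fx = np.fft.fftfreq(img.shape[1])
    r = np.hypot(*np.meshgrid(fy, fx, indexing='ij')) / 0.5
    low = np.real(np.fft.ifft2(np.fft.fft2(img) * (r <= cutoff_frac)))
    return low, img - low

# Incoherent diffraction cutoff as a fraction of Nyquist: 2 * pitch / (lambda * N),
# e.g. 6.4 um pixels at f/32 in 550 nm light:
frac = 2 * 6.4e-6 / (550e-9 * 32)               # about 0.73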

Cliff
« Last Edit: July 31, 2010, 12:33:17 pm by crames »
Logged
Cliff

joofa

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 544
Deconvolution sharpening revisited
« Reply #112 on: July 31, 2010, 12:45:50 pm »

Quote from: joofa
However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation,

Sorry, I made a typo. The above means maximum a posteriori estimation, not a priori estimation.

Hi Ray, you did an interesting translation  

Quote from: crames
Since diffraction pretty much wipes out the detail outside of the diffraction limit, deconvolution sharpening is generally limited to massaging whatever is left within the central cutoff.

From what I've read, detail beyond the diffraction cutoff has to be extrapolated ("Gerchberg method", for one), or otherwise estimated from the lower-frequency information. The methods are generally called "super-resolution". The Lucy method, due to a non-linear step in the processing, is supposed to have an extrapolating effect, but I'm not sure if it's visible here.

Yes, the Gerchberg technique is effective in theory (because a bandlimited signal is analytic and hence extrapolatable), but in practice noise limitations stop such solutions from being very effective.
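
For the curious, the Gerchberg iteration itself is short. Here is a 1-D sketch, assuming the signal's spatial support is known exactly and the in-band spectrum is noise-free; with realistic noise the extrapolated part degrades quickly, which is exactly the limitation mentioned above.

import numpy as np

def gerchberg_extrapolate(measured_spectrum, band_mask, support_mask, iters=200):
    # Alternately enforce (a) the measured spectrum inside the known band and
    # (b) the known finite support in the signal domain.
    spectrum = np.where(band_mask, measured_spectrum, 0.0)
    for _ in range(iters):
        signal = np.fft.ifft(spectrum)
        signal = signal * support_mask                    # signal-domain constraint
        spectrum = np.fft.fft(signal)
        spectrum = np.where(band_mask, measured_spectrum, spectrum)
    return spectrum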
« Last Edit: July 31, 2010, 12:49:09 pm by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

ejmartin

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 575
Deconvolution sharpening revisited
« Reply #113 on: July 31, 2010, 01:16:03 pm »

Hi Cliff,

A rather illuminating way of looking at things.  

I don't think one is expecting miracles here, like undoing the Rayleigh limit; zero MTF is going to remain zero.  But nonzero microcontrast can be boosted back up to close to pre-diffraction levels, and the deconvolution methods seem to be doing that rather well.

I am wondering whether a good denoiser (perhaps Topaz, which seems to use nlmeans methods) can help squelch some of the noise amplified by deconvolution without losing recovered detail such as the venetian blinds.
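
One cheap way to try that idea: chain scikit-image's Richardson-Lucy with its non-local means denoiser. Topaz's internals aren't public, so this is only a stand-in for the workflow, and the denoiser parameters below are guesses to be tuned by eye against the venetian blinds.

import numpy as np
from skimage import restoration

def deconvolve_then_denoise(img, psf, iters=30):
    # img: 2-D float image scaled to [0, 1]; psf: small kernel summing to 1.
    deconv = restoration.richardson_lucy(img.astype(float), psf, iters)
    sigma = restoration.estimate_sigma(deconv)            # rough noise estimate
    return restoration.denoise_nl_means(deconv, h=0.8 * sigma, sigma=sigma,
                                        patch_size=5, patch_distance=6,
                                        fast_mode=True)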
Logged
emil

joofa

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 544
Deconvolution sharpening revisited
« Reply #114 on: July 31, 2010, 02:18:54 pm »

Quote from: ejmartin
zero MTF is going to remain zero.

Not in theory. For functions of bounded (finite) support the Fourier transform is an analytic function, which means that if it is known for a certain range then techniques such as analytic continuation can be used to extend the solution to the whole frequency range. However, as I indicated earlier, such resolution-boosting techniques have difficulty in practice due to noise. It has been estimated in a particular case that to succeed in analytic continuation an SNR (amplitude ratio) of 1000 is required.

Quote from: ejmartin
I don't think one is expecting miracles here, like undoing the Rayleigh limit

IIRC, even in the presence of noise, it has been claimed that a twofold to fourfold improvement of the Rayleigh resolution in the restored image over that in the acquired image may be achieved if the transfer function of the imaging system is known sufficiently accurately and resolution-boosting analytic continuation techniques are used.
« Last Edit: July 31, 2010, 02:54:07 pm by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

eronald

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6642
    • My gallery on Instagram
Deconvolution sharpening revisited
« Reply #115 on: July 31, 2010, 03:34:48 pm »

Ray,

Unfortunately, from about (5) onwards my feeling is that your jargon-reduction algorithm is oversmoothing and losing semantic detail.
But then, what do I know?

Edmund

Quote from: Ray
I sympathise with your frustration here, John, but let's not be intimidated by poor expression. Here's my translation, for what it's worth, sentence by sentence.

(1) The deconvolution problem is typically ill-posed.

means: The sharpening problem is often poorly defined. (That's easy).

(2) In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind.

means: The analog world, which is a smooth continuum, is different from the digital world with discrete steps. You need complex mathematics to deal with this problem, such as a Fredholm integral equation. (Whatever that is).

(3) In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated.

means: We're now stuck with the digital domain. There's a hangover from the analog world with incorrect definitions, but we can fix some of the problems. There's hope.

(4) More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage.

means: We can achieve a balanced result by sacrificing detail for smoothness.

(5) Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation.

means: The Richardson-Lucy method attempts to provide the best result, in terms of detail.

(6) Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough.

means: The best result may introduce noise.

(7)  However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results.

means: With a bit of experimentation we might be able to fix the noise problem.

(8) Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained.

means: Gaussian mathematics is used to get the best estimate for sharpening purposes. (Gauss was a German mathematical genius, considered to be one of the greatest mathematicians who ever lived. Far greater than Einstein, in the field of mathematics).

(9) Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.

means: You can get better results if you take more time and have more computing power.

Okay! Maybe I've missed a few nuances in my translation. No-one's perfect. Any improved translation is welcome.  
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

eronald

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6642
    • My gallery on Instagram
Deconvolution sharpening revisited
« Reply #116 on: July 31, 2010, 03:37:14 pm »

Quote from: joofa
Not in theory. For functions of bounded (finite) support the Fourier transform is an analytic function, which means that if it is known for a certain range then techniques such as analytic continuation can be used to extend the solution to the whole frequency range. However, as I indicated earlier, such resolution-boosting techniques have difficulty in practice due to noise. It has been estimated in a particular case that to succeed in analytic continuation an SNR (amplitude ratio) of 1000 is required.



IIRC, even in the presence of noise, it has been claimed that a twofold to fourfold improvement of the Rayleigh resolution in the restored image over that in the acquired image may be achieved if the transfer function of the imaging system is known sufficiently accurately and resolution-boosting analytic continuation techniques are used.

It has been claimed ... references please??

Edmund
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

joofa

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 544
Deconvolution sharpening revisited
« Reply #117 on: July 31, 2010, 03:58:17 pm »

Quote from: eronald
It has been claimed ... references please??

Edmund

If memory serves right then, among others, check out the following:

http://www.springerlink.com/content/f4620747648x043l/
« Last Edit: July 31, 2010, 03:58:44 pm by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

crames

  • Full Member
  • ***
  • Offline Offline
  • Posts: 210
    • http://sites.google.com/site/clifframes/
Deconvolution sharpening revisited
« Reply #118 on: July 31, 2010, 04:17:32 pm »

Quote from: ejmartin
I don't think one is expecting miracles here, like undoing the Rayleigh limit; zero MTF is going to remain zero.  But nonzero microcontrast can be boosted back up to close to pre-diffraction levels, and the deconvolution methods seem to be doing that rather well.

I am wondering whether a good denoiser (perhaps Topaz, which seems to use nlmeans methods) can help squelch some of the noise amplified by deconvolution without losing recovered detail such as the venetian blinds.

Hi Emil,

No, I agree, deconvolution sharpening is certainly useful, since most images don't have f/32 diffraction, and there is real detail that can be restored.

The RL sharpening in RawTherapee does a very good job with just a Gaussian kernel. I wonder if knowing the exact PSF for deconvolution would be any better? Somehow I doubt it (except in the case of motion blur, which has PSFs like jagged snakes).
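
For the diffraction component at least, the exact PSF can be written down: it is an Airy pattern set by the f-number and wavelength. Here is a sketch for anyone who wants to test it against the plain Gaussian; the pixel pitch and aperture are example values only.

import numpy as np
from scipy.special import j1

def airy_psf(size, pixel_pitch, f_number, wavelength=550e-9):
    # Diffraction (Airy) intensity PSF sampled on the sensor grid (units: metres).
    y, x = (np.indices((size, size)) - (size - 1) / 2.0) * pixel_pitch
    v = np.pi * np.hypot(y, x) / (wavelength * f_number)
    v[v == 0] = 1e-12                       # avoid 0/0 at the centre pixel
    psf = (2.0 * j1(v) / v) ** 2
    return psf / psf.sum()

psf = airy_psf(31, 6.4e-6, 32)              # 31x31 kernel, 6.4 um pixels, f/32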

Simple techniques that boost high frequencies can also do the job, exposing detail as long as the detail is there in the first place.

A simple way that I use to sharpen is a variation on high-pass sharpening. Instead of the High Pass filter, I convolve with an inverted "Laplacian" kernel in PS Custom Filter. I think it reduces haloing:

0 -1 -2 -1 0
0 -2 12 -2 0
0 -1 -2 -1 0

Scale: 4 Offset: 128

This filter has a response that slopes up from zero, roughly the opposite of a lens MTF's slope. (The strength can be varied by changing Scale.)

I copy the image to a new layer, change mode to Overlay (or Hard, etc.), then run the above filter on the layer copy. Noise can be controlled by applying a little Surface Blur on the filtered layer. With a little tweaking the results can approach FM and be even less noisy.

Although this usually works pretty well, it didn't on Bart's f/32 diffraction image, hence the little investigation...
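
For those without Photoshop, here is an approximate translation of the recipe into Python/SciPy, using the kernel exactly as posted. Photoshop's integer rounding, edge handling and the optional Surface Blur step are not reproduced, so expect small differences.

import numpy as np
from scipy.ndimage import convolve

# The Custom Filter kernel as posted; Scale 4 and Offset 128 become /4 and +0.5
# when working on a float image in [0, 1].
KERNEL = np.array([[0, -1, -2, -1, 0],
                   [0, -2, 12, -2, 0],
                   [0, -1, -2, -1, 0]], dtype=float)

def laplacian_overlay_sharpen(img):
    hp = np.clip(convolve(img, KERNEL) / 4.0 + 0.5, 0.0, 1.0)   # filtered copy
    # Composite the filtered copy over the original in Overlay mode.
    return np.where(img < 0.5,
                    2.0 * img * hp,
                    1.0 - 2.0 * (1.0 - img) * (1.0 - hp))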

Cliff







Logged
Cliff

ejmartin

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 575
Deconvolution sharpening revisited
« Reply #119 on: July 31, 2010, 04:35:25 pm »

Quote from: crames
The RL sharpening in RawTherapee does a very good job with just a Gaussian kernel. I wonder if knowing the exact PSF for deconvolution would be any better? Somehow I doubt it (except in the case of motion blur, which has PSFs like jagged snakes).

Thanks for the PS tip.  

Well, for lens blur I imagine it could be a bit better to use something more along the lines of a rounded-off 'top hat' filter (perhaps more of a bowler  ) rather than a Gaussian, since that more accurately approximates the structure of OOF specular highlights, which in turn ought to reflect (no pun intended) the PSF of lens blur. Another thing that RT lacks is any kind of adaptivity in its RL deconvolution; that could mitigate some of the noise amplification if done properly. The question is whether that would add significantly to the processing time. It's on my list of things to look into.
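
In case anyone wants to experiment with that, a quick sketch of such a 'bowler' kernel: a uniform disk with its hard brim softened by a small Gaussian, normalized so it can be fed straight to an RL routine. The radius and softening values are purely illustrative, not measurements of any real lens.

import numpy as np
from scipy.ndimage import gaussian_filter

def bowler_psf(radius, soften=1.0):
    # Uniform disk (defocus-style PSF) rounded off with a small Gaussian.
    size = int(2 * np.ceil(radius + 3 * soften) + 1)
    y, x = np.indices((size, size)) - (size - 1) / 2.0
    disk = (np.hypot(y, x) <= radius).astype(float)
    psf = gaussian_filter(disk, soften)
    return psf / psf.sum()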
« Last Edit: July 31, 2010, 05:57:30 pm by ejmartin »
Logged
emil