
Author Topic: Deconvolution sharpening revisited  (Read 266079 times)

eronald

  • Sr. Member
  • ****
  • Offline
  • Posts: 6642
    • My gallery on Instagram
Deconvolution sharpening revisited
« Reply #120 on: July 31, 2010, 05:56:49 pm »

Quote from: ejmartin
Thanks for the PS tip.  

Well, for lens blur I imagine it could be a bit better to use something more along the lines of a rounded-off 'top hat' filter (perhaps more of a bowler) rather than a Gaussian, since that more accurately approximates the structure of OOF specular highlights, which in turn ought to reflect (no pun intended) the PSF of lens blur.  Another thing that RT lacks is any kind of adaptivity in its RL deconvolution.  The question is whether that would add significantly to the processing time.  It's on my list of things to look into.
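
For anyone who wants to see what the RL iteration under discussion actually does, here is a bare-bones sketch in Python/NumPy with a uniform disc ('top hat') kernel instead of a Gaussian. This is purely illustrative - it is not RawTherapee's implementation, and the kernel radius, iteration count and the 'luminance' variable are just placeholder choices:

    import numpy as np
    from scipy.signal import fftconvolve

    def disc_psf(radius, size=None):
        # uniform 'top hat' disc, a crude stand-in for OOF lens blur
        if size is None:
            size = int(2 * np.ceil(radius) + 1)
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        psf = (x**2 + y**2 <= radius**2).astype(float)
        return psf / psf.sum()

    def richardson_lucy(blurred, psf, iterations=20, eps=1e-12):
        # plain RL: no damping, no adaptivity, so it will amplify noise
        estimate = np.full_like(blurred, blurred.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            reblurred = fftconvolve(estimate, psf, mode='same')
            ratio = blurred / (reblurred + eps)
            estimate *= fftconvolve(ratio, psf_mirror, mode='same')
        return estimate

    # e.g. sharpened = richardson_lucy(luminance, disc_psf(2.5), iterations=30)
    # where 'luminance' is any float image array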

This just went up on Slashdot
http://research.microsoft.com/en-us/um/red.../imudeblurring/

It looks like the Hasselblad gyro hardware should be able to write this type of info in the future.

Edmund
« Last Edit: July 31, 2010, 06:08:07 pm by eronald »
Logged
If you appreciate my blog posts, help me by following on https://instagram.com/edmundronald

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Deconvolution sharpening revisited
« Reply #121 on: July 31, 2010, 06:20:18 pm »

Quote from: joofa
Not in theory. For functions of bounded support the Fourier transform is an analytic function, which means that if it is known over a certain range, then techniques such as analytic continuation can be used to extend the solution to the entire frequency range.  However, as I indicated earlier, such resolution-boosting techniques have difficulty in practice due to noise. It has been estimated in a particular case that, to succeed at analytic continuation, an SNR (amplitude ratio) of 1000 is required.


IIRC, even in the presence of noise, it has been claimed that a twofold to fourfold improvement of the Rayleigh resolution in the restored image over that in the acquired image may be achieved, provided the transfer function of the imaging system is known sufficiently accurately and resolution-boosting analytic continuation techniques are used.

I would be surprised if any method can do more than guess at obliterated detail (data in the original beyond the Rayleigh limit).  The problem is much akin to upsampling an image; in both cases there is a hard cutoff on frequency content somewhat below Nyquist (in the case of upsampling, I mean the Nyquist of the target resolution).  Yes, there are methods for upsampling, such as the algorithm in Genuine Fractals, but they amount to pleasing extrapolations of the image rather than genuinely restored detail.  That's not to say the result is not pleasing, and perhaps analytic continuation for super-resolution yields a pleasing result too; in fact it sounds a bit similar to the use of fractal scaling to extrapolate image content into higher frequency bands.
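
To make the analogy concrete, here is a toy Python/NumPy sketch of ideal (sinc) upsampling by zero-padding the spectrum; by construction everything above the original Nyquist stays exactly zero, so whatever a fancier resampler puts there has been invented, not recovered:

    import numpy as np

    def fourier_upsample_2x(img):
        # ideal band-limited 2x upsampling: embed the spectrum in a larger, zero-padded one
        h, w = img.shape
        F = np.fft.fftshift(np.fft.fft2(img))
        big = np.zeros((2 * h, 2 * w), dtype=complex)
        big[h // 2:h // 2 + h, w // 2:w // 2 + w] = F
        # the factor 4 compensates for the larger inverse-FFT normalization
        return np.real(np.fft.ifft2(np.fft.ifftshift(big))) * 4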
Logged
emil

crames

  • Full Member
  • ***
  • Offline
  • Posts: 210
    • http://sites.google.com/site/clifframes/
Deconvolution sharpening revisited
« Reply #122 on: July 31, 2010, 07:50:23 pm »

Quote from: eronald
This just went up on Slashdot
http://research.microsoft.com/en-us/um/red.../imudeblurring/

It looks like the Hasselblad gyro hardware should be able to write this type of info in the future.

Edmund
Here's another one from Microsoft Research:

Detail Recovery for Single-image Defocus Blur
Logged
Cliff

crames

  • Full Member
  • ***
  • Offline
  • Posts: 210
    • http://sites.google.com/site/clifframes/
Deconvolution sharpening revisited
« Reply #123 on: July 31, 2010, 08:09:17 pm »

Quote from: ejmartin
Well, for lens blur I imagine it could be a bit better to use something more along the lines of a rounded-off 'top hat' filter (perhaps more of a bowler) rather than a Gaussian, since that more accurately approximates the structure of OOF specular highlights, which in turn ought to reflect (no pun intended) the PSF of lens blur.  Another thing that RT lacks is any kind of adaptivity in its RL deconvolution; that could mitigate some of the noise amplification if done properly.  The question is whether that would add significantly to the processing time.  It's on my list of things to look into.

With the chain of blur upon blur in images, doesn't it get complicated? Diffraction blur, defocus blur, lens aberrations, motion blur, AA filter blur... most of them changing from point to point in the frame. (I think it was mentioned that multiple blurs tend to become Gaussian?)

Maybe targeting AA filter blur would give a lot of bang for the buck? (Not much help for digital back users, though.)
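
The 'multiple blurs tend to become Gaussian' bit is easy to check numerically. A quick Python/NumPy sketch (the kernel sizes here are arbitrary, purely for illustration): convolve a defocus-style disc, a sensel-style box and a crude two-dot OLPF, then compare the result with a Gaussian of matching spread.

    import numpy as np
    from scipy.signal import fftconvolve

    def norm(k):
        return k / k.sum()

    n = 65
    y, x = np.mgrid[:n, :n] - n // 2
    disc = norm((x**2 + y**2 <= 4**2).astype(float))                  # defocus-like
    box = norm(((np.abs(x) <= 2) & (np.abs(y) <= 2)).astype(float))   # sensel aperture-like
    olpf = np.zeros((n, n))
    olpf[n // 2, n // 2 - 2] = olpf[n // 2, n // 2 + 2] = 0.5         # crude two-dot OLPF

    combined = fftconvolve(fftconvolve(disc, box, mode='same'), olpf, mode='same')

    var = ((x**2 + y**2) * combined).sum()        # radial variance of the combined PSF
    gauss = norm(np.exp(-(x**2 + y**2) / var))    # Gaussian with the same spread
    # the combined kernel is noticeably closer to the Gaussian than any single component
    print(np.abs(combined - gauss).sum())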
Logged
Cliff

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Deconvolution sharpening revisited
« Reply #124 on: July 31, 2010, 08:25:49 pm »

Is there anywhere one can find the PSF or spectral power distribution of a typical AA filter?
Logged
emil

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
Deconvolution sharpening revisited
« Reply #125 on: July 31, 2010, 09:50:35 pm »

Quote from: eronald
Ray,

Unfortunately, from about (5) my feeling is that your jargon-reduction algorithm is oversmoothing and losing semantic detail
But then, what do I know ?

Edmund

Damn! Have I revealed I'm out of my depth?  
Logged

crames

  • Full Member
  • ***
  • Offline
  • Posts: 210
    • http://sites.google.com/site/clifframes/
Deconvolution sharpening revisited
« Reply #126 on: July 31, 2010, 10:08:59 pm »

Quote from: ejmartin
Is there anywhere one can find the PSF or spectral power distribution of a typical AA filter?

I remember seeing an article that showed one. Instead of four little dots in a neat square like I imagined, it was more like a dozen dots in a messy diamond pattern. I'll try to find it...
Logged
Cliff

joofa

  • Sr. Member
  • ****
  • Offline
  • Posts: 544
Deconvolution sharpening revisited
« Reply #127 on: July 31, 2010, 11:22:51 pm »

Quote from: ejmartin
I would be surprised if any method can do more than guess at obliterated detail (data in the original beyond the Rayleigh limit).  The problem is much akin to upsampling an image; in both cases there is a hard cutoff on frequency content somewhat below Nyquist (in the case of upsampling, I mean the Nyquist of the target resolution).  Yes, there are methods for upsampling, such as the algorithm in Genuine Fractals, but they amount to pleasing extrapolations of the image rather than genuinely restored detail.  That's not to say the result is not pleasing, and perhaps analytic continuation for super-resolution yields a pleasing result too; in fact it sounds a bit similar to the use of fractal scaling to extrapolate image content into higher frequency bands.

Not sure the problem is akin to upsampling, as upsampling does not create new information, whereas analytic continuation does.
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

eronald

  • Sr. Member
  • ****
  • Offline
  • Posts: 6642
    • My gallery on Instagram
Deconvolution sharpening revisited
« Reply #128 on: July 31, 2010, 11:40:04 pm »

Quote from: joofa
Not sure the problem is akin to upsampling, as upsampling does not create new information, whereas analytic continuation does.

I'm getting increasingly dubious here about the ability of any method to "create" information if assumptions about the missing pieces are not made. Fractal upsampling software, for instance, is tuned to assume that certain objects are "clean" boundary lines and curves - it will thus "recreate" perfect typography in box shots. In this sense, if assumptions about the origins of the image data are made, e.g. by means of a texture vocabulary, then a method tuned for those assumptions will do well "creating" image data when provided with such images, and presumably fail when the hypotheses are not met. Which also means that we need to define which measure we use to distinguish a good result from a bad one, and I respectfully suggest that photoreconnaissance, astronomy and beauty photography have different metrics.


Edmund
Logged
If you appreciate my blog posts, help me by following on https://instagram.com/edmundronald

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Deconvolution sharpening revisited
« Reply #129 on: August 01, 2010, 12:43:13 am »

Quote from: joofa
Not sure the problem is akin to upsampling, as upsampling does not create new information, whereas analytic continuation does.

I don't see what the difference is between a spectral density that is zero beyond the inverse Airy disk radius, and a spectral density that is zero beyond Nyquist.  If you are going to extend the spectral density to higher frequencies, in effect that information is being invented.  This is different from straight deconvolution, where the function being recovered has been multiplied by some nonzero function, and one recovers the original function by dividing out by the (nonzero) FT of the PSF.  To generate information where the spectral density is initially zero, one has to invent a rule for doing so, and the issue then is how closely that rule hews to the properties of some family of 'natural' images.
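
In code, that 'dividing out' looks roughly like the following Python/NumPy sketch. The small eps is a crude regularizer - real implementations use Wiener-style or iterative (RL) methods precisely because pure division explodes wherever the transfer function gets close to zero:

    import numpy as np

    def naive_deconvolve(blurred, psf_centered, eps=1e-3):
        # psf_centered: kernel padded to the image size, peak at the image centre
        H = np.fft.fft2(np.fft.ifftshift(psf_centered))   # transfer function (OTF)
        G = np.fft.fft2(blurred)
        F = G * np.conj(H) / (np.abs(H) ** 2 + eps)        # regularized 1/H
        return np.real(np.fft.ifft2(F))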
Logged
emil

joofa

  • Sr. Member
  • ****
  • Offline
  • Posts: 544
Deconvolution sharpening revisited
« Reply #130 on: August 01, 2010, 02:57:51 am »

Quote from: ejmartin
I don't see what the difference is between a spectral density that is zero beyond the inverse Airy disk radius, and a spectral density that is zero beyond Nyquist.  If you are going to extend the spectral density to higher frequencies, in effect that information is being invented.

Perhaps you mean oversampling and not upsampling.

Quote from: ejmartin
This is different from straight deconvolution, where the function being recovered has been multiplied by some nonzero function, and one recovers the original function by dividing out by the (nonzero) FT of the PSF.

The intent is to use deconvolution to recover the spectrum in the passband of the imaging system and then use analytic continuation to extend it out to those frequencies where it was zero before.
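
The textbook constructive version of that two-step idea is the Gerchberg-Papoulis iteration. A 1-D toy sketch in Python/NumPy (it assumes the object is known to occupy a limited spatial support, which is exactly the prior that makes the continuation possible, and it ignores noise):

    import numpy as np

    def gerchberg_extrapolate(measured_spectrum, passband, support, iters=200):
        # measured_spectrum: FFT of the band-limited measurement (trusted only inside 'passband')
        # passband, support: boolean masks in the frequency and space domains
        spectrum = measured_spectrum * passband
        for _ in range(iters):
            signal = np.fft.ifft(spectrum)
            signal = signal * support                         # enforce the known finite extent
            spectrum = np.fft.fft(signal)
            spectrum[passband] = measured_spectrum[passband]  # keep the measured band
        return np.real(np.fft.ifft(spectrum))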
« Last Edit: August 01, 2010, 02:59:00 am by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Deconvolution sharpening revisited
« Reply #131 on: August 01, 2010, 07:45:07 am »

Quote from: joofa
The intent is to use deconvolution to recover the spectrum in the passband of the imaging system and then use analytic continuation to extend it out to those frequencies where it was zero before.

Analytic continuation of what?  We're talking about discrete data...  so at best we're talking about some assumption about a smooth analytic function that interpolates the discrete data in a region you like (low frequencies) and extrapolates into a region where you don't like the existing data (high frequencies).

Also, analytic continuation is simply one of many possible assumptions about how to extend the data; the issue is whether it or another invents new data that is visually pleasing.

Anyway, I've made my point and I don't want this to hijack the thread.
« Last Edit: August 01, 2010, 09:25:31 am by ejmartin »
Logged
emil

joofa

  • Sr. Member
  • ****
  • Offline
  • Posts: 544
Deconvolution sharpening revisited
« Reply #132 on: August 01, 2010, 10:44:11 am »

Quote from: ejmartin
Analytic continuation of what?  We're talking about discrete data...  so at best we're talking about some assumption about a smooth analytic function that interpolates the discrete data in a region you like (low frequencies) and extrapolates into a region where you don't like the existing data (high frequencies).

Also, analytic continuation is simply one of many possible assumptions about how to extend the data; the issue is whether it or another invents new data that is visually pleasing.

This line of reasoning started because you said that "zero MTF is going to remain zero", and I pointed out that that is not true in theory, and that even in practice, in the presence of noise, some gains might be achieved (though not as large as the theory says). It now appears you are saying that it is one of the ways of "extrapolating/inventing" the data, thereby negating the position that zero MTF would stay zero.

Quote from: ejmartin
Anyway, I've made my point and I don't want this to hijack the thread.

Thanks for the discussion. Let's keep this thread moving.
« Last Edit: August 01, 2010, 12:47:55 pm by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

madmanchan

  • Sr. Member
  • ****
  • Offline
  • Posts: 2115
    • Web
Deconvolution sharpening revisited
« Reply #133 on: August 01, 2010, 12:11:59 pm »

deja, yes, the Detail slider in CR 6 & LR 3 controls a blend of sharpening/deblur methods, and if you want the deconv-based method you crank up the Detail slider (even up to 100 if you want the pure deconv-based method). I do this for most landscapes and images with a lot of texture (rocks, bark, twigs, etc.) and I find it's not bad for that.

Erik, yes, it will indeed amplify noise, which does become a little tricky (but not impossible) to differentiate from texture. I have some basic ideas on how to improve this, but for now the best way to treat it is to increase the Luminance noise reduction slider and apply a bit of Masking (remember you can hold the Option/Alt key while dragging the Masking slider to get a visualization of which areas of the image are being masked off). Furthermore, if there are big areas of the image that you simply don't want to sharpen, you can paint those out with a local adjustment brush and a negative Sharpness value.

Bill, unfortunately I can't go into the PSF and other details of the CR 6 / LR 3 sharpen method. Sorry.
Logged
Eric Chan

crames

  • Full Member
  • ***
  • Offline
  • Posts: 210
    • http://sites.google.com/site/clifframes/
Deconvolution sharpening revisited
« Reply #134 on: August 01, 2010, 12:48:28 pm »

Quote from: crames
I remember seeing an article that showed one. Instead of four little dots in a neat square like I imagined, it was more like a dozen dots in a messy diamond pattern. I'll try to find it...

Sorry, I'm coming up empty-handed on the messy one. Surprisingly hard to find a measured AA filter MTF anywhere.

Here's a spec sheet of one made by Epson that shows the usual four dots in a square. Epson Toyocom

Edited 8/2/2010-

Found this:
[attachment=23440:OLPF_PSF_MTF.png]

from Optical Transfer Function of the Optical Low-Pass Filter

Looks like 4 dots in a square for "double plate", 8 dots for "triple plate". No idea which kind is in our cameras.
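
Just to put numbers on the "four dots in a square" case, here is a quick Python/NumPy sketch of its MTF (a one-pixel beam displacement is assumed, which is roughly what a Bayer OLPF aims for):

    import numpy as np

    n = 64
    d = 1                       # assumed beam-split displacement, in pixels
    c = n // 2
    psf = np.zeros((n, n))
    for dy in (0, d):
        for dx in (0, d):
            psf[c + dy, c + dx] = 0.25   # four equal dots in a square

    mtf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
    # along either axis this is |cos(pi * f * d)|: for d = 1 the first null sits
    # at 0.5 cycles/pixel, i.e. right at Nyquist, and no deconvolution can bring
    # back what the null removed
    print(mtf[c, c:].round(3))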
« Last Edit: August 02, 2010, 09:24:50 am by crames »
Logged
Cliff

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Deconvolution sharpening revisited
« Reply #135 on: August 02, 2010, 11:18:07 am »

Quote from: crames
I remember seeing an article that showed one. Instead of four little dots in a neat square like I imagined, it was more like a dozen dots in a messy diamond pattern. I'll try to find it...
Zeiss MTF Curven shows the PSF of a low pass filter.

Regards,

Bill
Logged

crames

  • Full Member
  • ***
  • Offline
  • Posts: 210
    • http://sites.google.com/site/clifframes/
Deconvolution sharpening revisited
« Reply #136 on: August 02, 2010, 11:34:28 am »

Quote from: bjanes
Zeiss MTF Curven shows the PSF of a low pass filter.

Regards,

Bill

Yes, Nr. 8 on page 4 - let's see if we can deconvolve that!

Rgds,
« Last Edit: August 02, 2010, 11:34:51 am by crames »
Logged
Cliff

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Deconvolution sharpening revisited
« Reply #137 on: August 02, 2010, 07:17:24 pm »

Quote from: crames
Yes, Nr. 8 on page 4 - let's see if we can deconvolve that!

Hi Cliff,

That pattern will differ for each AA-filter and camera (type) combination. Some of the variables are the thickness(es) of the crossed filter layers, their (individual and combined) orientation/rotation, and the distance to the microlenses/sensels.

(Un)fortunately, in practice, the PSF of a lens (residual aberrations + diffraction, assuming perfect focus and no camera or subject motion) plus an optical low-pass filter (OLPF) and the sensel mask and spacing will resemble a modified Gaussian rather than just the OLPF's PSF. As with many natural sources of noise, when several are combined, a (somewhat) modified Gaussian approximation can be made.

I have analyzed the PSF of the full optical system (different lenses + OLPF at various apertures + aperture mask of the sensels) of e.g. my 1Ds3 (and the 1Ds2 and 20D before that), and the effect a Raw converter has on the captured data, and have found that a certain combination of multiple Gaussians does a reasonably good job of characterizing the system PSF. The complicating factor is that it thus requires prior knowledge to effectively counteract the effects.
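
For readers who want to experiment with that kind of model, a generic form is a weighted sum of a few Gaussians fitted to a measured PSF profile. The Python sketch below (using scipy.optimize.curve_fit, with made-up placeholder data rather than any real 1Ds3 measurement) fits a two-component core-plus-skirt model:

    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussian_psf(r, a1, s1, a2, s2):
        # radial PSF model: a narrow core plus a wider skirt
        return a1 * np.exp(-r**2 / (2 * s1**2)) + a2 * np.exp(-r**2 / (2 * s2**2))

    # r and 'measured' would normally come from a point-source or slanted-edge
    # analysis; here they are synthetic placeholders
    r = np.linspace(0, 8, 50)
    measured = two_gaussian_psf(r, 0.8, 0.7, 0.2, 2.0) + 0.01 * np.random.randn(r.size)

    params, _ = curve_fit(two_gaussian_psf, r, measured, p0=(1.0, 1.0, 0.1, 3.0))
    print(params)   # fitted amplitudes and sigmas of the core and skirt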

Other complicating factors are defocus and camera shake (let alone subject motion).

The practical solution is to employ either a quasi-intelligent PSF determination based on the image at hand (or on a test image shot under more controlled circumstances), or a flexible, interactive interface (some intelligent choices can be made to simplify things for the average user) that allows user interaction - human vision is, for example, quite good at comparing before/after images, especially when superimposed.

There is a lot of ground to cover before simple tools are available, but threads like these serve to at least increase awareness.

Cheers,
Bart
« Last Edit: April 22, 2012, 02:07:09 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

joofa

  • Sr. Member
  • ****
  • Offline
  • Posts: 544
Deconvolution sharpening revisited
« Reply #138 on: August 02, 2010, 07:35:19 pm »

Quote from: BartvanderWolf
(Un)fortunately, in practice, the PSF of a lens (residual aberrations + diffraction, assuming perfect focus and no camera or subject motion) plus an optical low-pass filter (OLPF) and the sensel mask and spacing will resemble a modified Gaussian rather than just the OLPF's PSF. As with many natural sources of noise, when several are combined, a (somewhat) modified Gaussian approximation can be made.

I have analyzed the PSF of the full optical system (different lenses + OLPF at various apertures + aperture mask of the sensels) of e.g. my 1Ds3 (and the 1Ds2 and 20D before that), and the effect a Raw converter has on the captured data, and have found that a certain combination of multiple Gaussians does a reasonably good job of characterizing the system MTF.

Hi Bart,

One doesn't necessarily need to rely on the "combination effect" to get closer to a Gaussian function. Any reasonable (finite-energy) function can be represented by a linear combination of Gaussians (much like a Fourier expansion). So, for example, even if you isolate a system component (OLPF, etc.) whose response does not look Gaussian, it can still be expanded into a sum of Gaussians.
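
In practice this can be set up as a plain linear least-squares problem: pick a handful of fixed widths as a basis and solve for the weights. A Python/NumPy sketch (the hard-edged disc profile and the particular widths are only for illustration):

    import numpy as np

    r = np.linspace(0, 6, 200)
    target = (r <= 2.0).astype(float)               # decidedly non-Gaussian: a hard-edged disc profile

    sigmas = np.array([0.3, 0.6, 1.0, 1.5, 2.5])    # fixed dictionary of Gaussian widths
    basis = np.exp(-r[:, None]**2 / (2 * sigmas[None, :]**2))

    weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
    approx = basis @ weights
    # the residual shrinks as more widths are added; the sharp edge is the hard part
    print(np.max(np.abs(approx - target)))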
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

crames

  • Full Member
  • ***
  • Offline
  • Posts: 210
    • http://sites.google.com/site/clifframes/
Deconvolution sharpening revisited
« Reply #139 on: August 02, 2010, 08:36:09 pm »

Quote from: BartvanderWolf
...
There is a lot of ground to cover before simple tools are available, but threads like these serve to at least increase awareness.

I agree on all your points.

The most impressive results I've seen are the "sparse prior" deconvolution by Levin et al. Take a look at page 29 of this.   A group at Microsoft seems to have taken it further.

I think these are the kinds of results we are all looking for. When will they ever be available in a product we can use?
Logged
Cliff