
Author Topic: Deconvolution sharpening revisited  (Read 265957 times)

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Deconvolution sharpening revisited
« Reply #260 on: January 14, 2014, 02:16:25 am »

Hi,

Nice to see you here! Learned much about sensors from your page.

Best regards
Erik

Quote
My first post.  I was sent this link by someone else, and as I was referenced in the post that started this thread, I thought I would add to it.

My web page on image deconvolution referred to in the first post is: http://www.clarkvision.com/articles/image-restoration1/
and has been updated recently.

I have added a second page with more results using an image where I added known blur and then used a guess PSF to recover the image.  This is part 2 from the above page:
http://www.clarkvision.com/articles/image-restoration2/

Regarding some statements made in this thread about how one can't go beyond 0% MTF, that is true if one images bar charts.  But the real world is not bar charts.  MTF is a one dimensional description of an imaging system.  It only applies to parallel spaced lines and only in the dimension perpendicular to those lines.  MTF limits do not apply to other 2-D objects.  For example, stars are much smaller than 0% MTF yet we see them.  Two stars closer together than the 0% MTF can still be seen as an elongated diffraction disk.  It is that asymmetry and a known PSF of the diffraction disk that can be used to fully resolve the two stars.  Extend this to all irregular objects in a scene, whether it be splotchy detail on a bird's beak, feather detail, or stars in the sky, deconvolution methods can recover a wealth of detail, some beyond 0% MTF.

I have been using Richardson-Lucy image deconvolution on my images for many years now, both astro images and everyday scenes.  It works well and I can consistently pull out detail that I have been unable to achieve with smart sharpen or any other method.  Smart sharpen is so fast that it can't be doing more than an iteration (or a couple if done in integers). I would love to see a demonstration by those in this thread who say smart sharpen can do as well as RL deconvolution.  On my web page, part 2 above, I have a link to the 16-bit image (it is just above the conclusions).  You are welcome to download that image and show something better than I can produce in figure 4 (right side) on that page.  Post your results here.  I would certainly love to see smart sharpen do as well, as it would speed up my work flow.

Thanks for the interesting read.  And a special hi to Bart.  I haven't seen you in a forum in years.

Roger Clark
Erik Kaffehr
 

hjulenissen

  • Sr. Member
  • ****
  • Offline
  • Posts: 2051
Re: Deconvolution sharpening revisited
« Reply #261 on: January 14, 2014, 03:27:47 am »

Quote
My first post.  I was sent this link by someone else, and as I was referenced in the post that started this thread, I thought I would add to it.
Hello. I have also read your site with great interest.
Quote
Regarding some statements made in this thread about how one can't go beyond 0% MTF, that is true if one images bar charts.  But the real world is not bar charts.  MTF is a one dimensional description of an imaging system.  It only applies to parallel spaced lines and only in the dimension perpendicular to those lines.  MTF limits do not apply to other 2-D objects.  For example, stars are much smaller than 0% MTF yet we see them.  Two stars closer together than the 0% MTF can still be seen as an elongated diffraction disk.  It is that asymmetry and a known PSF of the diffraction disk that can be used to fully resolve the two stars.  Extend this to all irregular objects in a scene, whether it be splotchy detail on a bird's beak, feather detail, or stars in the sky, deconvolution methods can recover a wealth of detail, some beyond 0% MTF.
MTF0 would be the spatial frequency at which the modulation is zero, right?

I can't see how a star would be "smaller than 0% MTF". A star is approximately a point-source (an infinitely small "dot"), and would excite a wide range of spatial frequencies (including the zero-frequency, "DC"). If we analyse 1 dimension and assume linearity (for simplicity), we would expect a small star to be registered as a blurry star in a system with a low MTF cutoff point, more or less like how a (non-reverberated) hand-clap is recorded by a low-bandwidth audio recorder. Is that not what happens?
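This is easy to simulate numerically. Below is a minimal Python sketch (assuming numpy, scipy and a recent scikit-image; all sizes and values are arbitrary illustration choices): two point sources closer together than the blur width merge into one blob, and noiseless Richardson-Lucy with the exactly known PSF pulls them apart again.

Code:
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

# Two point sources ("stars") separated by less than the blur width.
scene = np.zeros((64, 64))
scene[32, 30] = 1.0
scene[32, 34] = 1.0

sigma = 2.0                              # known Gaussian blur
blurred = gaussian_filter(scene, sigma)

# The same PSF as an explicit, normalised kernel for the deconvolver.
impulse = np.zeros((17, 17))
impulse[8, 8] = 1.0
psf = gaussian_filter(impulse, sigma)
psf /= psf.sum()

restored = richardson_lucy(blurred, psf, num_iter=200, clip=False)
print(blurred[32, 28:37].round(4))   # one broad bump
print(restored[32, 28:37].round(4))  # two distinct peaks again

With noise added the separation degrades quickly, so how far one can push this is an information (SNR) question rather than a hard frequency cutoff.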

When we factor in (lack of) Nyquistian pre-filtering, color filter array etc, we add some non-separable and non-linear factors that are harder to comprehend intuitively. I guess numerical simulations are the way to go for diving deep in that direction.

I use the term "information" for aspects of a scene that cannot be (safely) guessed. A CD can carry 700MB or so of information at the most. If you fill a CD with random numbers, you need to be able to read every bit (after error correction etc.) in order to recover those bits. "Pleasant" is a different concept. A music CD may contain large gaps in the data. A good CD player might still be able to render the track in a pleasant manner by smoothing over the (post error correction) gaps in a way consistent with how our hearing expects the track to sound (e.g. no discontinuities).

I mistrust claims about the Rayleigh criterion ("diffraction limited"), in that it seems to be a pragmatic astronomers' rule-of-thumb rather than a theoretically derived absolute limit of information. As far as I understand, there may still be information beyond the diffraction limit, but we may not be able to interpret it properly.

If you enforce the policy that the image shall only consist of a (sparse) number of white points (stars) on a black background, I think that you can apply a different methodology to sensor design and deconvolution than if you design a general imaging system.

-h
« Last Edit: January 14, 2014, 03:40:38 am by hjulenissen »

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #262 on: January 14, 2014, 07:14:28 am »

Quote
My first post.  I was sent this link by someone else, and as I was referenced in the post that started this thread, I thought I would add to it.

My web page on image deconvolution referred to in the first post is: http://www.clarkvision.com/articles/image-restoration1/
and has been updated recently.

I have added a second page with more results using an image where I added known blur and then used a guess PSF to recover the image.  This is part 2 from the above page:
http://www.clarkvision.com/articles/image-restoration2/

Hi Roger,

Nice to see you joining here. Your part 2 webpage also shows the practical benefits of deconvolution very well, specifically in direct comparison with "Smart sharpen" which is also said to be a form of deconvolution but, as you also conclude, probably with very few iterations.

Quote
Regarding some statements made in this thread about how one can't go beyond 0% MTF, that is true if one images bar charts.  But the real world is not bar charts.  MTF is a one dimensional description of an imaging system.  It only applies to parallel spaced lines and only in the dimension perpendicular to those lines.  MTF limits do not apply to other 2-D objects.

Real images are indeed more complex than a simple bar chart or a sinusoidal grating. Where we may not have much SNR in one direction/angle, we may still have adequate SNR in another direction/angle to restore more of the original signal. We do run into limitations when noise is involved, or when the signal is very small compared to the sampling density and sensel aperture (+low-pass filter).

I do not fully agree with your downsampling conclusions; although sharpening before downsampling may happen to work for certain image content (irregular structures), it is IMHO better to postpone the creation of the highest spatial frequency detail until after downsampling. But it would be better to discuss that in a separate thread. Maybe this post related to downsampling of 'super resolution' images provides some useful food for thought, as does my webpage with a 'stress test' zone-plate target.

Quote
Thanks for the interesting read.  And a special hi to Bart.  I haven't seen you in a forum in years.

And thank you for your informative webpages (and beautiful landscape images). I've been around since Usenet days, but apparently in different fora ... I don't know if you are still visiting Noordwijk occasionally, but if you do just drop me a line and we can meet in person when agendas can be synchronized.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #263 on: January 14, 2014, 12:58:12 pm »

Another welcome to Roger Clark.

I have been visiting your site to admire your images as well as read your articles on the technical aspects of photography for a few years now. It's great to see you in this community.

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Deconvolution sharpening revisited (Image J, two questions)
« Reply #264 on: January 14, 2014, 03:16:46 pm »

Hi,

I am interested in trying ImageJ for deconvolution. I have tried some plugins but I am not entirely happy.

Any suggestion for a good deconvolution plugin?

Best regards
Erik
Erik Kaffehr
 

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: Deconvolution sharpening revisited
« Reply #265 on: January 14, 2014, 05:41:31 pm »

We are very fortunate to have Roger joining the discussion along with Bart. I have been working on developing PSFs for my Zeiss 135 mm f/2 lens on the Nikon D800e using Bart's slanted edge tool. I photographed Bart's test image at 4 meters, determining optimum focus via focus bracketing with a rail. Once optimal focus was determined, I took a series of shots at various apertures and determined the resolution with Bart's method with the sinusoidal Siemens star. Results are shown both for ACR rendering and rendering with DCRaw in Imatest.

The results are shown in the table below. The f/16 and f/22 shots are severely degraded by diffraction. I used Bart's PSF generator to derive a 5x5 deconvolution PSF for f/16. Results are shown after 20 iterations of adaptive RL in ImagesPlus.

[table of resolution results - attachment]

Crops of the images are also shown.

[image crops - attachments]
Results were worse using a 7x7 PSF. I would appreciate pointers from Roger and Bart on what factors determine the best size PSF to employ and what else could be done to improve the result. Presumably a 7x7 would be better if the PSF were optimal, but a suboptimal 7x7 PSF could produce inferior results by extending the deconvolution kernel too far outward.

Bill
« Last Edit: January 17, 2014, 06:09:31 am by bjanes »

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #266 on: January 15, 2014, 01:47:16 am »

Hi Bill,

You can crop a small section out, then duplicate it several times with the (IIRC) 3 dots button. The deconvolution runs almost instantly on small crops like 640x640. Try 30 cycles of 3x3 in one, 30 cycles of 5x5 in the next, 30 of 7x7 in the next. If you are using Bart's oversized output from the raw converter as a starting point, the 7x7 may be best. I have found 3x3 usually works for a sharp lens on a tripod, output at the normal camera size. Sometimes I do 5x5 for 10 cycles followed by 3x3 for 30 to 50. I also often do 10 cycles of Van Cittert.

Roger's note that PS smart sharpen seems to do 1 iteration is interesting to me. When I have had a point in a picture to create a reasonable custom PSF from the image, I have usually run it for 1 or 2 cycles. More than that, I found large black artifacts forming around points. I would then switch to 3x3 Gaussian for more iterations to taste. What happens with PS smart sharpen followed by IP or repeated runs of Smart sharpen? Sorry, I don't have PS installed on my system so I can't answer it myself.

Both Bart and Roger have mentioned hundreds of cycles so I think I am not getting optimal results. I shut it down (with cancel) when I start seeing artifacts. Is hundreds based on a 2x or 3x starting image size? The sequence is Lanczos 3x, Capture sharpen 7x7, downsample, creative sharpen?

When is Van Cittert or Adaptive Contrast better? There are so many sharpening tools in the program that can be used sequentially that it seems very hard to find a best sequence. Any guidance on this would be greatly appreciated.


Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #267 on: January 15, 2014, 01:52:06 am »

One other thing: it has come up before that the program is not color managed. Am I correct that if the output of the raw converter is fed to IPlus, and the IPlus TIFF is then put back into the raw converter, the colors will still be correct?

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited (Image J, two questions)
« Reply #268 on: January 15, 2014, 04:18:15 am »

Quote
Hi,

I am interested in trying ImageJ for deconvolution. I have tried some plugins but I am not entirely happy.

Any suggestion for a good deconvolution plugin?

Hi Erik,

I'm not sure why you want to use ImageJ for deconvolution, because it's not the easiest way of deconvolving regular color images (which have a variety of imperfections that need to be addressed/circumvented). Mathematically deconvolution is a simple principle, but in the practical implementation there are lots of things that can (and do) go wrong. Also remember that most of these plugins only work on grayscale images, and could work better on linear gamma input.

For general deconvolution you can use the "Process>Filters>Convolve" command when you feed it a deconvolution kernel. It does work on RGB images, but it only does a single-pass deconvolution without noise regularization.

The most useful and extensive ImageJ deconvolution plugin that I have come across so far is DeconvolutionLab.
You'll need a separate PSF file, which can be made with the help of my PSF generator tool; its 'space separated' text output can be copied and pasted into a plain text file and imported in ImageJ via "File>Import>Text image".
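As a rough illustration, such a text-image PSF can also be generated directly with a few lines of Python (a sketch; the 7x7 size and sigma of 1.0 are placeholder values, and DeconvolutionLab's exact import expectations should be double-checked):

Code:
import numpy as np

def gaussian_psf(size=7, sigma=1.0):
    """Normalised Gaussian grid, suitable for File>Import>Text image."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()  # kernel sums to 1, preserving total energy

# Space-separated plain text, one kernel row per line.
np.savetxt("psf.txt", gaussian_psf(7, 1.0), fmt="%.8f")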

Much easier to use for photographers is a regular Photoshop plugin such as FocusMagic, or Topaz Labs Infocus. The latter can be called from Lightroom, even without Photoshop if one also has the photoFXlab plugin from Topaz Labs. Piccure is a relatively new PS plugin that does a decent job, but not really better than the cheaper alternatives mentioned before.

Other possibilities are several dedicated Astrophotography applications such as ImagesPlus (not colormanaged) or PixInsight (colormanaged). PixInsight is kind of amazing (colormanaged, floating point, Linear gamma processing of Luminance in an RGB image), and offers lots of possibilities for deconvolution and artifact suppression and all sorts of other astrophotography imaging tasks, but is not a cheap solution if you only want to use it for deconvolution. It's more a work environment with the possibility to create one's own scripts and modules with Java or Javascript programming, with several common astrophotography tasks pre-programmed for seamless integration.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #269 on: January 15, 2014, 06:49:30 am »

Quote
We are very fortunate to have Roger joining the discussion along with Bart.

Thanks for the kind words.

Quote
I have been working on developing PSFs for my Zeiss 135 mm f/2 lens on the Nikon D800e using Bart's slanted edge tool. I photographed Bart's test image at 3 meters, determining optimum focus via focus bracketing with a rail. Once optimal focus was determined, I took a series of shots at various apertures and determined the resolution with Bart's method with the sinusoidal Siemens star. Results are shown both for ACR rendering and rendering with DCRaw in Imatest.

The results are shown in the table below.

Thanks for the feedback; it can help others in understanding the procedures, and the insight it gives allows one to get better image quality.

Quote
The f/16 and f/22 shots are severely degraded by diffraction. I used Bart's PSF generator to derive a 5x5 deconvolution PSF for f/16. Results are shown after 20 iterations of adaptive RL in ImagesPlus.

It nicely demonstrates that when the 'luminance' diffraction pattern diameter exceeds 1.5x the sensel pitch, at f/5.6 or narrower on the 4.88 micron pitch sensor of the D800/D800E, the diffraction blur overtakes the reduction of residual lens aberrations. You may find that somewhere between f/4 and f/5.6 there could be even slightly better performance than at f/4, which can be useful to know if one wants to perform focus stacking with the highest resolution possible for a given camera/lens combination (at a given magnification factor or focus distance).

The Slanted Edge measurements especially give the most accurate indication of the optimal aperture, and they give the relevant/dominant blur radius for deconvolution as well. It also shows that for good lenses there is usually a smallest blur radius that is close to 0.7 at the optimum aperture, and a radius of more than 1.0 at narrow apertures. That's definitely something to consider for capture sharpening, which is hardware dependent, as opposed to subject-dependent creative sharpening.

Quote
Results were worse using a 7x7 PSF. I would appreciate pointers from Roger and Bart on what factors determine the best size PSF to employ and what else could be done to improve the result. Presumably a 7x7 would be better if the PSF were optimal, but a suboptimal 7x7 PSF could produce inferior results by extending the deconvolution kernel too far outward.

I do not think that it is the size of the kernel that's limiting; it may be some aliasing that is playing tricks. I don't know how well the actual edge profile and the Gaussian model fitted, but that is often a good prediction of the shape of the PSF. So it may be a good PSF shape, but the source data may also still be causing some issues (noise, aliasing, demosaicing) that get magnified by restoration. I assume there is no Raw noise reduction in the file, as that might also break the statistical nature of photon shot noise.

You could try if RawTherapee's Amaze algorithm makes a difference, to eliminate one possible (demosaicing) cause. The diffraction limited f/16 shot, which seems to be at the edge of totally low-pass filtering with zero aliasing possibility, suggests that aliasing is not causing the unwanted effects, but maybe too many iterations or too little 'noise' or artifact suppression. You can also reduce the number of iterations, although that will reduce the overall restoration effectiveness as well. What can also help is using a slightly smaller radius than would be optimal, since that under-corrects the restoration, which may reduce the accumulation of errors per iteration. Another thing that may help a little is only restoring the L channel from an LRGB image set, although I do not expect it to make much of a difference on a mostly grayscale image.

Cheers,
Bart
« Last Edit: January 15, 2014, 07:15:16 am by BartvanderWolf »
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #270 on: January 15, 2014, 08:25:46 am »

Quote
What happens with PS smart sharpen followed by IP or repeated runs of Smart sharpen? Sorry, I don't have PS installed on my system so I can't answer it myself.

Hi Arthur,

In principle, repeated (de)convolution with a Gaussian PSF equals a single (de)convolution with a larger radius PSF. However, that also assumes that there is little accumulation of round-off errors. Therefore I anticipate a build-up of artifacts, although deliberately using too small a radius PSF might somewhat relax the demand on data quality.
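The underlying identity is that Gaussian blurs cascade in quadrature: two passes at sigma 1.0 equal one pass at sigma sqrt(2). A quick numerical sanity check (a sketch using scipy; the residual is small but not exactly zero because of kernel truncation and edge handling):

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.default_rng(0).random((128, 128))

twice = gaussian_filter(gaussian_filter(img, 1.0), 1.0)  # two passes, sigma 1
once = gaussian_filter(img, np.sqrt(2.0))                # one pass, sigma sqrt(2)

print(np.abs(twice - once).max())  # small residual, ideally ~0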

Quote
Both Bart and Roger have mentioned hundreds of cycles so I think I am not getting optimal results. I shut it down (with cancel) when I start seeing artifacts. Is hundreds based on a 2x or 3x starting image size? The sequence is Lanczos 3x, Capture sharpen 7x7, downsample, creative sharpen?

The first thing is using a well behaved, decently low-pass filtered image, with little noise and preferably 16-bits/channel. Then we should not use a PSF with too large a radius. When we up-sample the image (hopefully with minimal introduction of new artifacts), we create some room for sub-pixel accuracy in restoration, and small artifacts will mostly disappear upon downsampling to the original size.
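Schematically, the up-sample/deconvolve/down-sample route could look like this (a Python sketch with illustrative numbers, assuming scipy and a recent scikit-image; not a prescription):

Code:
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from skimage.restoration import richardson_lucy

def deconvolve_oversampled(img, sigma, factor=3, iters=50):
    """Up-sample, deconvolve with a proportionally wider PSF, down-sample."""
    big = zoom(img, factor, order=3)        # room for sub-pixel accuracy
    impulse = np.zeros((33, 33))
    impulse[16, 16] = 1.0
    psf = gaussian_filter(impulse, sigma * factor)  # blur radius scales too
    psf /= psf.sum()
    big = richardson_lucy(big, psf, num_iter=iters, clip=False)
    return zoom(big, 1.0 / factor, order=3)  # artifacts shrink on the way down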

Quote
When is Van Cittert or Adaptive Contrast better? There are so many sharpening tools in the program that can be used sequentially it seems very hard to find a best sequence. Any guidance on this would be greatly appreciated.

It's hard to say in general, because image content can be so different, even in the same image. Richardson-Lucy is reasonably good when there is also some noise involved, whereas Van Cittert is better with clean low ISO images with high SNR. Both are restoration algorithms. Adaptive contrast modifies contrast and should, if used, come after restoration.
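For reference, Van Cittert is the simpler, additive update rule: each iteration adds back a fraction of the residual between the observation and the re-blurred estimate, which is also why it amplifies noise more readily than Richardson-Lucy. A minimal sketch (beta is a damping factor one tunes):

Code:
import numpy as np
from scipy.signal import fftconvolve

def van_cittert(observed, psf, iters=20, beta=1.0):
    """f_{k+1} = f_k + beta * (g - psf (x) f_k), an additive residual update."""
    estimate = observed.copy()
    for _ in range(iters):
        reblurred = fftconvolve(estimate, psf, mode="same")
        estimate = estimate + beta * (observed - reblurred)
    return np.clip(estimate, 0.0, None)  # keep the estimate non-negative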

Cheers,
Bart
« Last Edit: January 17, 2014, 03:15:28 am by BartvanderWolf »
== If you do what you did, you'll get what you got. ==

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: Deconvolution sharpening revisited
« Reply #271 on: January 15, 2014, 12:36:15 pm »



Quote
I do not think that it is the size of the kernel that's limiting; it may be some aliasing that is playing tricks. I don't know how well the actual edge profile and the Gaussian model fitted, but that is often a good prediction of the shape of the PSF. So it may be a good PSF shape, but the source data may also still be causing some issues (noise, aliasing, demosaicing) that get magnified by restoration. I assume there is no Raw noise reduction in the file, as that might also break the statistical nature of photon shot noise.

You could try if RawTherapee's Amaze algorithm makes a difference, to eliminate one possible (demosaicing) cause. The diffraction limited f/16 shot, which seems to be at the edge of totally low-pass filtering with zero aliasing possibility, suggests that aliasing is not causing the unwanted effects, but maybe too many iterations or too little 'noise' or artifact suppression. You can also reduce the number of iterations, although that will reduce the overall restoration effectiveness as well. What can also help is using a slightly smaller radius than would be optimal, since that under-corrects the restoration, which may reduce the accumulation of errors per iteration. Another thing that may help a little is only restoring the L channel from an LRGB image set, although I do not expect it to make much of a difference on a mostly grayscale image.


Bart,

Thanks for the feedback. We are getting some interesting discussion in this rejuvenated thread. :)

As you suggested, I did use RawTherapee to render the f/16 image.

[screenshot - attachment]

The Gaussian radius was smaller than with the ACR rendering, 0.9922, and I used your tool to calculate a deconvolution PSF for 5x5 and 7x7.

Here is the image restoration in ImagesPlus with 20 iterations of RL using 7x7. There is quite a bit of artifact.

[image - attachment]

Using RL and a 5x5 kernel with 20 iterations, there is again less artifact:

[image - attachment]

Van Cittert with the 5x5 kernel and 20 iterations produces the best results.

[image - attachment]
What do you think?

Bill

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Re: Deconvolution sharpening revisited (Image J, two questions)
« Reply #272 on: January 15, 2014, 02:37:49 pm »

Hi Bart,

It has been suggested in a discussion regarding macro photography that smallish apertures could be used and the image restored using deconvolution. I may be a bit of a skeptic, but I felt I would be ignorant without trying. I have both Focus Magic and Topaz Infocus but neither lets me choose the PSF. I have ImageJ already so I felt I could explore deconvolution a bit more, but the plugins I have tested were not very easy to use and did not really answer my questions about usability.

My take is really to use medium apertures and stacking if needed.

Best regards
Erik

Quote
Hi Erik,

I'm not sure why you want to use ImageJ for deconvolution, because it's not the easiest way of deconvolving regular color images (which have a variety of imperfections that need to be addressed/circumvented). Mathematically deconvolution is a simple principle, but in the practical implementation there are lots of things that can (and do) go wrong. Also remember that most of these plugins only work on grayscale images, and could work better on linear gamma input.

For general deconvolution you can use the "Process>Filters>Convolve" command when you feed it a deconvolution kernel. It does work on RGB images, but it only does a single-pass deconvolution without noise regularization.

The most useful and extensive ImageJ deconvolution plugin that I have come across so far is DeconvolutionLab.
You'll need a separate PSF file, which can be made with the help of my PSF generator tool; its 'space separated' text output can be copied and pasted into a plain text file and imported in ImageJ via "File>Import>Text image".

Much easier to use for photographers is a regular Photoshop plugin such as FocusMagic, or Topaz Labs Infocus. The latter can be called from Lightroom, even without Photoshop if one also has the photoFXlab plugin from Topaz Labs. Piccure is a relatively new PS plugin that does a decent job, but not really better than the cheaper alternatives mentioned before.

Other possibilities are several dedicated Astrophotography applications such as ImagesPlus (not colormanaged) or PixInsight (colormanaged). PixInsight is kind of amazing (colormanaged, floating point, Linear gamma processing of Luminance in an RGB image), and offers lots of possibilities for deconvolution and artifact suppression and all sorts of other astrophotography imaging tasks, but is not a cheap solution if you only want to use it for deconvolution. It's more a work environment with the possibility to create one's own scripts and modules with Java or Javascript programming, with several common astrophotography tasks pre-programmed for seamless integration.

Cheers,
Bart
Erik Kaffehr
 

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: Deconvolution sharpening revisited (Image J, two questions)
« Reply #273 on: January 15, 2014, 04:26:15 pm »

Quote
Hi Bart,

It has been suggested in a discussion regarding macro photography that smallish apertures could be used and the image restored using deconvolution. I may be a bit of a skeptic, but I felt I would be ignorant without trying. I have both Focus Magic and Topaz Infocus but neither lets me choose the PSF. I have ImageJ already so I felt I could explore deconvolution a bit more, but the plugins I have tested were not very easy to use and did not really answer my questions about usability.

My take is really to use medium apertures and stacking if needed.

Erik,

That conclusion is in agreement with my findings thus far. One can get an idea of what is possible by looking at diffraction limits for various apertures and MTFs. The table below was taken from one of Roger's posts and is illustrative.

[table - attachment]
As Bart pointed out, diffraction will begin to limit the image resolution when the Airy disc is about 1.5x the pixel pitch of the sensor; for the D800e, the pixel pitch is 4.87 microns and 1.5x this is 7.31 microns. At f/8, the Airy disc is 8.9 microns and the resolution at 50% MTF is very close to the Nyquist limit of the sensor (103 lp/mm), suggesting that this would be a good aperture for stacking. At f/22 the MTF is so low that deconvolution is of little avail, since there is not much to work with. However, deconvolution works well at f/8 and I would use this aperture for stacking after image restoration with deconvolution.
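For anyone who wants to recompute such a table: the first-minimum diameter of the Airy pattern is 2.44 * wavelength * f-number, so the numbers depend directly on the wavelength assumed, which is why published tables do not always agree. A sketch for the D800e pitch, with 0.55 micron green light as an assumed value:

Code:
# Airy disc (first minimum) diameter: d = 2.44 * wavelength * N.
wavelength_um = 0.55        # assumed representative green wavelength
pixel_pitch_um = 4.87       # D800e
for N in (4, 5.6, 8, 11, 16, 22):
    d = 2.44 * wavelength_um * N
    print(f"f/{N}: Airy diameter {d:.1f} um = {d / pixel_pitch_um:.2f} px")

At that wavelength the 1.5x pitch threshold is crossed around f/5.6, consistent with Bart's earlier remark.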

Bart's PSF generator uses only one parameter, the Gaussian radius, to derive the PSF for a given aperture, and I doubt that this is sufficient to fully describe the nature of the blurring. Roger does not go into the details of how he derives his PSFs and more information would be helpful. Van Cittert deconvolution seems to work well for low noise images, and more discussion on its use would be welcome. FocusMagic works well at f/8 and is very fast and convenient to use.

Regards,

Bill


Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #274 on: January 15, 2014, 06:56:16 pm »

Quote
Bart,

Thanks for the feedback. We are getting some interesting discussion in this rejuvenated thread. :)

Hi Bill,

There is a lot of info to share, and people to convince ...

Quote
As you suggested, I did use RawTherapee to render the f/16 image.

[screenshot - attachment]

The Gaussian radius was smaller than with the ACR rendering, 0.9922, and I used your tool to calculate a deconvolution PSF for 5x5 and 7x7.

Great, RT Amaze is always interesting to have in a comparison, because it is very good at resolving fine detail with few artifacts (and optional false color suppression).

Quote
Here is the image restoration in ImagesPlus with 20 iterations of RL using 7x7. There is quite a bit of artifact.

[image - attachment]

Using RL and a 5x5 kernel with 20 iterations, there is again less artifact:

I see what you mean, and looking at the artifacts there may be something that can be done. No guarantee, but I suspect that deconvolving with a linear gamma can help quite a bit. In ImagesPlus one can convert an RGB image into R+G+B+L layers, deconvolve the L layer, and recombine the channels into an RGB image again. In addition, before and after deconvolution one can switch the L layer to linear gamma and back (gamma 0.455 and gamma 2.20 will be close enough).
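The gamma round-trip itself is only a couple of lines. Schematically (a Python sketch; 2.2 is used as an approximation of the working-space tone curve, and the deconvolution step is left abstract):

Code:
import numpy as np

def deconvolve_linear(channel, deconvolve, gamma=2.2):
    """Undo the tone curve, deconvolve in linear light, reapply the curve.

    channel: float array scaled to 0..1 (e.g. the L layer);
    deconvolve: whatever restoration routine one uses (RL, Van Cittert, ...).
    """
    linear = np.clip(channel, 0.0, 1.0) ** gamma      # gamma 2.2 ~ 1/0.455
    restored = deconvolve(linear)
    return np.clip(restored, 0.0, 1.0) ** (1.0 / gamma)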

It can also help to temporarily up-sample the image before deconvolution. The drawback of that method is the increased time required for the deconvolution calculations, and it is possible that the re-sampling introduces artifacts. The benefit though is that one can visually judge the intermediate result (which is sort of sub-sampled) until deconvolution artifacts start to appear, and then downsample to the original size to make the artifacts visually less important.

Quote
Van Cittert with the 5x5 kernel and 20 iterations produces the best results.

In this case it does, but with more noise it may not be as beneficial. Also in this case, deconvolving the linear gamma luminance may work better.

Then there is another thing, and that will change the shape of the Gaussian PSF a bit. Creating the PSF kernel with my PSF generator defaults to a sensel arrangement with 100% fill factor (assuming gapless microlenses). By reducing that percentage a bit the Gaussian will become a bit more spiky, gradually more like a point sample and a pure Gaussian.
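For those who want to reproduce the fill-factor behaviour: integrating the Gaussian over each square sensel aperture reduces, per axis, to a difference of error functions, so the kernel stays separable. A sketch (square apertures assumed; real sensel apertures and micro-lens effects are more irregular):

Code:
import numpy as np
from scipy.special import erf

def psf_kernel(size, sigma, fill_factor=1.0):
    """Gaussian PSF integrated over square sensel apertures.

    fill_factor is the linear fraction of the pitch covered by the
    aperture: 1.0 means gapless micro-lenses, smaller values approach
    a point-sampled (spikier) pure Gaussian.
    """
    half = fill_factor / 2.0
    centers = np.arange(size) - size // 2
    s = sigma * np.sqrt(2.0)
    # 1-D integral of a unit Gaussian over [c - half, c + half]:
    w = 0.5 * (erf((centers + half) / s) - erf((centers - half) / s))
    kernel = np.outer(w, w)  # separable in x and y
    return kernel / kernel.sum()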

I realize it's a bit of work, but that's also why we need better integration of deconvolution in our Raw converter tools. Until then, we can learn a lot about what can be achieved and how important it is for image quality.

Finally, you can also try the RL deconvolution in RawTherapee. I don't know if that is applied with linear gamma, but it should become clear when you compare images. As soon as barely resolved detail becomes darker than expected, it's usually gamma related.

Cheers,
Bart
« Last Edit: January 17, 2014, 03:18:35 am by BartvanderWolf »
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited (Image J, two questions)
« Reply #275 on: January 15, 2014, 07:23:55 pm »

Quote
Bart's PSF generator uses only one parameter, the Gaussian radius, to derive the PSF for a given aperture and I doubt that this is sufficient to fully describe the nature of the blurring.

In my Slanted edge tool you can get the Edge Spread Function (edge profile) data and the ESF model, sub-sampled to approx. 1/10th of a pixel, and compare the fit. Here is an example for one of my lenses; it only deviates a bit at the dark end, probably due to glare reducing contrast by adding some stray light (blue line is the actual edge, red line is the cumulative Gaussian distribution curve or ESF):

[graph - attachment]

In most cases the Gaussian model (with fill factor assumption) is pretty good, not perfect, but good enough.
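The model in question is just a scaled cumulative Gaussian (an error function), and fitting it to a binned edge profile takes only a few lines. A sketch, with synthetic data standing in for a real slanted-edge measurement:

Code:
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def esf_model(x, x0, sigma, lo, hi):
    """Cumulative Gaussian edge-spread function."""
    return lo + (hi - lo) * 0.5 * (1.0 + erf((x - x0) / (sigma * np.sqrt(2))))

# Synthetic stand-in for a finely binned edge profile.
x = np.arange(-5, 5, 0.1)
rng = np.random.default_rng(1)
data = esf_model(x, 0.1, 0.8, 0.05, 0.95) + rng.normal(0.0, 0.005, x.size)

popt, _ = curve_fit(esf_model, x, data, p0=(0.0, 1.0, 0.0, 1.0))
print(f"fitted blur radius sigma = {popt[1]:.3f} px")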

Quote
Roger does not go into the details of how he derives his PSFs and more information would be helpful.

I assume it is based on a continuous Gaussian PSF, or as preprogrammed in the 3x3, 5x5, etc. default selections of ImagesPlus. My PSF generator tool would need to be set to point sample instead of a fill factor percentage. The fill factor integrates the Gaussian over a rectangular/square area as a percentage of the sensel pitch, which makes the PSF a bit wider and less pointed. An actual sensel aperture is more irregularly shaped, but micro-lenses change that shape influence.

Cheers,
Bart
« Last Edit: January 16, 2014, 03:02:40 am by BartvanderWolf »
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited (Image J, two questions)
« Reply #276 on: January 16, 2014, 03:58:32 am »

Quote
Hi Bart,

It has been suggested in a discussion regarding macro photography that smallish apertures could be used and the image restored using deconvolution. I may be a bit of a skeptic, but I felt I would be ignorant without trying. I have both Focus Magic and Topaz Infocus but neither lets me choose the PSF.

Hi Erik,

I see. Well in that case it may be easier to use the Photoshop plugins you already have, despite the different types of input they require.

FocusMagic does several things under the hood, e.g. suppression of noise amplification, and it adapts the processing to the type of input data. It does not allow specifying the radius more accurately than in integer pixel steps, but for the algorithms they use that's often precise enough. You can also upsample the image, e.g. to 300%, before applying FocusMagic (obviously with a larger radius), which allows relatively sub-pixel accuracy. A radius of 5 or 6 pixels is common for well focused images, and the amount can then be boosted to e.g. 175 or 200%. While the deconvolution will increase resolution, it will not really add much detail that's smaller than the Nyquist frequency of the original image dimensions, so the subsequent downsampling will have a low risk of creating downsampling artifacts/aliasing. Because the downsampling will usually reduce contrast, a final run of FocusMagic with a radius of 1 and an amount of e.g. 50% can help.

Topaz Labs Infocus offers several deconvolution algorithms and separate artifact suppression controls, and those controls are needed because the deconvolution algorithms are quite aggressive and do not offer an amount setting. Infocus also responds quite well to prior upsampling. The Estimate method of deconvolution often functions quite well after upsampling, especially if some additional sharpening is added with the sharpening and radius controls at the bottom of the interface.

Quote
I have ImageJ already so I felt I could explore deconvolution a bit more, but the plugins I have tested were not very easy to use and did not really answer my questions about usability.

Correct, ImageJ is a tool for the scientific community, and they often cherry-pick the tools for very specific sub-tasks, such as photomicrography, or 3D CT and MRI imagery. The exact algorithms must often be very well understood to avoid mistaking artifacts (or suppression of those) for pathology. Some of its functionality is useful for mere mortals though: it works with floating point accuracy, and is free and multiplatform software, which can help to share processing methods. It does assume a deep understanding of image processing fundamentals for some of its operations.

Quote
My take is really to use medium apertures and stacking if needed.

Nothing beats collecting the actual physical data, but we can significantly improve the results we get, because the image capture process is riddled with compromises (e.g. OLPF or no OLPF, demosaicing of undersampled data, etc.), and limitations (such as diffraction) set by physics.

Stacking itself is also not free of compromises (especially around occlusions in the scene, and the demands on processing power), but the balance for stationary subjects is often in favor of shooting at a wider aperture and stacking multiple focus planes for increased DOF.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Christoph C. Feldhaim

  • Sr. Member
  • ****
  • Offline
  • Posts: 2509
  • There is no rule! No - wait ...
Re: Deconvolution sharpening revisited
« Reply #277 on: January 16, 2014, 07:52:39 am »

ImageMagick has an option to deconvolve images by doing a division on an FFT image.
How would one derive or construct the appropriate deconvolution image to do that?

Cheers
~Chris

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #278 on: January 16, 2014, 09:42:34 am »

Quote
ImageMagick has an option to deconvolve images by doing a division on an FFT image.
How would one derive or construct the appropriate deconvolution image to do that?

Hi Chris,

AFAIK, that would require one to first compile an HDRI version of ImageMagick oneself. There seem to be no precompiled HDRI binaries available for the various operating systems.

There are also drawbacks to using a simple Fourier transform divided by a Fourier transform of a PSF though. Divisions by zero need to be avoided and a single small deviation from a perfect PSF can have huge effects in the restored image. See the earlier posts in this thread for that aspect.
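The usual way around the division problem is to regularise it, Wiener-style: multiply by the complex conjugate of the PSF spectrum and add a small constant to the denominator. A numpy sketch of the idea (not ImageMagick's actual implementation):

Code:
import numpy as np

def embed_psf(kernel, shape):
    """Embed a small odd-sized kernel, centred at (shape//2, shape//2)."""
    padded = np.zeros(shape)
    k0, k1 = kernel.shape
    r = shape[0] // 2 - k0 // 2
    c = shape[1] // 2 - k1 // 2
    padded[r:r + k0, c:c + k1] = kernel
    return padded

def fft_deconvolve(blurred, kernel, eps=1e-3):
    """Frequency-domain division with Wiener-style damping.

    eps > 0 keeps the division finite where the PSF spectrum is near
    zero; a raw division (eps = 0) hugely magnifies noise and PSF errors.
    """
    H = np.fft.fft2(np.fft.ifftshift(embed_psf(kernel, blurred.shape)))
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))

Even then the result is very sensitive to small errors in the assumed PSF, which is exactly why the iterative methods with artifact suppression tend to behave better in practice.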

That's why more elaborate algorithms and tools are required, also to allow correcting for the artifacts that are very likely to arise from the simplified approach. As an example, the Deconvolution controls in PixInsight allow several methods to adjust the ringing artifacts (see attached PixInsight dialog). It does take quite a bit of work to get optimal settings though, and tools like FocusMagic and Infocus make life much easier.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #279 on: January 16, 2014, 02:36:57 pm »

Here is my sharpened version of the crane, done this morning without blowing up the image first. I assume Roger's is better than mine, as he has more experience with the app, a much better understanding of the process, and the original as a guide.

It is a reasonable improvement over the blurred image, without artefacts. Done in ImagesPlus 4.5 with Van Cittert, Adaptive Contrast, and Adaptive R/L.
« Last Edit: January 16, 2014, 02:39:14 pm by Fine_Art »