
Author Topic: Deconvolution sharpening revisited  (Read 265968 times)

bjanes

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3387
Re: Deconvolution sharpening revisited
« Reply #280 on: January 18, 2014, 03:08:26 pm »

Great, RT Amaze is always interesting to have in a comparison, because it is very good at resolving fine detail with few artifacts (and optional false color suppression).

I see what you mean, and looking at the artifacts there may be something that can be done. No guarantee, but I suspect that deconvolving with a linear gamma can help quite a bit. In ImagesPlus one can convert an RGB image into R+G+B+L layers, deconvolve the L layer, and recombine the channels into an RGB image again. However, before and after deconvolution, one can switch the L layer to linear gamma and back (gamma 0.455 and gamma 2.20 will be close enough).
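
For anyone who would rather script that linear-gamma sandwich than juggle layers, here is a minimal Python/scikit-image sketch (the file name, the plain 2.2 exponent, the 0.89 radius and the Rec. 709 luminance weights are illustrative assumptions, and ImagesPlus' own L-layer math may differ):

Code:
import numpy as np
from skimage import io, restoration

def gaussian_psf(sigma, size=5):
    # plain Gaussian PSF, normalized to sum to 1
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-0.5 * (ax / sigma) ** 2)
    k = np.outer(g, g)
    return k / k.sum()

rgb = io.imread('crop.tif').astype(np.float64) / 65535.0  # 16-bit crop, scaled to 0..1
lin = rgb ** 2.2                                          # approximate linearization of a gamma 2.2 file
lum = lin @ np.array([0.2126, 0.7152, 0.0722])            # linear luminance (Rec. 709 weights assumed)
psf = gaussian_psf(0.89, 5)                               # radius from the PSF generator
dec = restoration.richardson_lucy(lum, psf, 10)           # 10 RL iterations on the linear luminance plane
scale = dec / np.maximum(lum, 1e-6)                       # push the sharpened luminance back into the RGB data
out = np.clip(lin * scale[..., None], 0, 1) ** (1 / 2.2)  # restore the display gamma
io.imsave('crop_rl.tif', (out * 65535).astype(np.uint16))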

It can also help to temporarily up-sample the image before deconvolution. The drawbacks of that method are the increased time required for the deconvolution calculations, and the possibility that the re-sampling itself introduces artifacts. The benefit, though, is that one can visually judge the intermediate result (which is, in a sense, sampled at sub-pixel precision relative to the original grid) until deconvolution artifacts start to appear, and then downsample to the original size to make the artifacts visually less important.

In this case it does, but with more noise it may not be as beneficial. Also in this case, deconvolving the linear gamma luminance may work better.

Then there is another thing that will change the shape of the Gaussian PSF a bit. Creating the PSF kernel with my PSF generator defaults to a sensel arrangement with a 100% fill factor (assuming gapless microlenses). By reducing that percentage a bit, the kernel becomes a little more spiky, gradually approaching a point-sampled, pure Gaussian.
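
To get a feel for what that fill-factor control does, here is a rough sketch of one way such a kernel could be computed, a Gaussian integrated over each sensel's active aperture; this is only my guess at the model, not necessarily what the PSF generator actually does:

Code:
import numpy as np
from scipy.special import erf

def gaussian_sensel_psf(sigma, size=5, fill_factor=1.0):
    # Gaussian PSF integrated over a square sensel aperture;
    # fill_factor is the active area fraction (1.0 = gapless microlenses).
    w = np.sqrt(fill_factor)                    # linear width of the active aperture
    ax = np.arange(size) - (size - 1) / 2.0
    s = sigma * np.sqrt(2.0)
    g = 0.5 * (erf((ax + w / 2) / s) - erf((ax - w / 2) / s))
    k = np.outer(g, g)
    return k / k.sum()

print(np.round(gaussian_sensel_psf(0.89, 5, 1.00), 4))  # 100% fill: slightly flattened peak
print(np.round(gaussian_sensel_psf(0.89, 5, 0.25), 4))  # 25% fill: spikier, closer to a pure Gaussian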

I realize it's a bit of work, but that's also why we need better integration of deconvolution in our Raw converter tools. Until then, we can learn a lot about what can be achieved and how important it is for image quality.

Finally, you can also try the RL deconvolution in RawTherapee. I don't know if that is applied with linear gamma, but it should become clear when you compare images. As soon as barely resolved detail becomes darker than expected, it's usually gamma related.

Cheers,
Bart

Bart,

To assess the effect of linear processing, I rendered my images into a custom 16-bit ProPhotoRGB space with a gamma of 1.0 prior to performing the deconvolution in ImagesPlus, and converted back to sRGB for display on the web. I noted little difference between the linear and gamma ~2.2 files. Performing 30 iterations of RL with a radius of 0.89 as determined by your tool works well with RawTherapee. 10 iterations of RL in ImagesPlus with a 5x5 kernel derived with your tools and a radius of 0.89 produces artifacts, but 3 iterations produces more reasonable results. I used the deconvolution kernel with a fill factor of 100%. Deconvolving the luminance channel in IP made little difference. Where should I go from here?

Image before deconvolution:


Image deconvolved in RawTherapee:


Image deconvolved with 10 iterations in ImagesPlus:


Image deconvolved with 3 iterations in ImagesPlus:


I presume that the deconvolution kernel would be most appropriate, but what is the purpose of the other PSFs?

Thanks,

Bill
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #281 on: January 19, 2014, 09:36:24 am »

Bart,

To assess the effect of linear processing, I rendered my images into a custom 16-bit ProPhotoRGB space with a gamma of 1.0 prior to performing the deconvolution in ImagesPlus, and converted back to sRGB for display on the web. I noted little difference between the linear and gamma ~2.2 files.

Hi Bill,

Deconvolution should preferably be performed in a linear gamma space, and the artifacts you showed (darkened microcontrast) are a typical indicator of gamma-related issues. Of course, not all images are as insanely critical as a star chart, so deconvolving gamma-precompensated images may work well enough. However, it's good if linearization can be easily accommodated in a workflow that involves image math. This is also preferably performed with floating-point precision, which will usually allow one to apply more iterations or more severe adjustments without artifacts due to accumulating errors.
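
Put compactly, the reason is that the blur being undone is a convolution of linear light, and that model breaks once a tone curve has been applied:

\[ g = h \ast f \quad \text{(linear intensities)}, \qquad (h \ast f)^{1/2.2} \neq h \ast f^{1/2.2} \]

Running RL on gamma-encoded data therefore asks it to invert a blur that never happened in that space, which is what shows up as the darkened fine detail.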

Quote
Performing 30 iterations of RL with a radius of 0.89 as determined by your tool works well with RawTherapee.

In the search for optimal settings, it is good to have at least the radius parameter nailed. Hopefully, under the hood, RawTherapee does the RL-deconvolution on linearized data.

Quote
10 iterations of RL in ImagesPlus with a 5x5 kernel derived with your tools and a radius of 0.89 produces artifacts, but 3 iterations produces more reasonable results. I used the deconvolution kernel with a fill factor of 100%. Deconvolving the luminance channel in IP made little difference. Where should I go from here?

Just to make sure I understand what you've done. When you say you used a 5x5 kernel, I assume you copied the values from the PSF Kernel generator into the ImagesPlus "Custom Point Spread Function" dialog box, and clicked the "Set Filter" button, then used the "Adaptive Richardson-Lucy" control with "Custom" selected, and "Reduce Artifacts" checked.

That still leaves the fine-tuning of the "Noise Threshold" slider, or the Relaxation slider in the Van Cittert dialog. Too low a setting will not reduce the noise between iterations in the featureless smooth regions of the image, and too high a setting will start to reduce fine detail in addition to noise.

Quote
I presume that the deconvolution kernel would be most appropriate, but what is the purpose of the other PSFs?

Not sure what other PSFs you are referring to. Do you mean in the Adaptive RL dialog?

Now, if this still produces artifacts with more than a few iterations, I suspect that aliasing artifacts are rearing their ugly head. Aliases are larger-than-actual representations of fine detail, so the larger detail is getting some definition added by the deconvolution where it shouldn't. Maybe, just as an attempt, some over-correction of the noise adaptation might help a bit, but it is not ideal. Also, multiple runs with a deliberately too-small Gaussian blur radius PSF may build up to an optimum more slowly.

As a final resort, but it won't do much if aliasing is indeed the issue, you can try to first up-sample the image, say to 300%, which should keep the file size below the 2GB TIFF boundary that could cause issues with some TIFF libraries. The up-sampled data (hopefully the re-sampling does not add too many artifacts of its own) has not gained any resolution, but the same detail is now spread over more pixels.

That data will be easier (but much slower) to deconvolve smoothly: multiply the PSF blur radius by the same factor, or determine it more accurately by up-sampling the slanted edge first and then measuring the blur radius, and stop the iterations when visible artifacts begin to develop. The problem becomes how to create a custom kernel that fits within the 9x9 maximum dimensions of ImagesPlus; RawTherapee can go to a radius of 2.5, which is close. Then do a simple down-sample to the original image size, and compensate for the down-sampling blur by adding some small (e.g. 0.6) radius deconvolution sharpening.
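
A rough Python sketch of that up-sample / deconvolve / down-sample route (the file names, radii and iteration counts are placeholders; the point is where the 3x scaling of the PSF radius and the small 0.6 clean-up pass fit in):

Code:
import numpy as np
from scipy.ndimage import zoom
from skimage import io, restoration

def gaussian_psf(sigma, size):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-0.5 * (ax / sigma) ** 2)
    k = np.outer(g, g)
    return k / k.sum()

img = io.imread('crop.tif').astype(np.float64) / 65535.0
up = np.clip(zoom(img, (3, 3, 1), order=3), 0, 1)          # 300% up-sample (cubic)
psf = gaussian_psf(0.89 * 3, 25)                           # radius scaled by the same factor; no 9x9 limit here
dec = np.dstack([restoration.richardson_lucy(up[..., c], psf, 10) for c in range(3)])
down = np.clip(zoom(np.clip(dec, 0, 1), (1 / 3, 1 / 3, 1), order=3), 0, 1)  # back to original size
sharp = np.dstack([restoration.richardson_lucy(down[..., c], gaussian_psf(0.6, 5), 3)
                   for c in range(3)])                     # small-radius pass for the down-sampling blur
io.imsave('crop_up_rl.tif', (np.clip(sharp, 0, 1) * 65535).astype(np.uint16))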

Other than fine-tuning the shape of the PSF by selecting a fill-factor smaller than 100% upon creation, there is not much left to do, other than resort to super resolution or stitching longer focal lengths.

If you'd like, I could try a deconvolution with PixInsight, because that allows more tweaking of the parameters, and see if that makes a difference. But I'd like to have a 16-bit PNG crop from the RT Amaze conversion to work on.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

bjanes

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3387
Re: Deconvolution sharpening revisited
« Reply #282 on: January 19, 2014, 03:22:05 pm »

Just to make sure I understand what you've done. When you say you used a 5x5 kernel, I assume you copied the values from the PSF Kernel generator into the ImagesPlus "Custom Point Spread Function" dialog box, and clicked the "Set Filter" button, then used the "Adaptive Richardson-Lucy" control with "Custom" selected, and "Reduce Artifacts" checked.

That still leaves the fine-tuning of the "Noise Threshold" slider, or the Relaxation slider in the Van Cittert dialog. Too low a setting will not reduce the noise between iterations in the featureless smooth regions of the image, and too high a setting will start to reduce fine detail in addition to noise.

Bart, Thanks again for your detailed replies. Yes, I copied the values from your web-based tool and pasted them into the IP Custom PSF dialog. I used the Apply check box in the custom filter dialog instead of Set, but the effect seems to be the same as when I use the Set function. The Apply box is not covered in the IP docs that I have, and may have been added in a later version. I am using IP ver. 5.0.



I left the noise threshold at the default and did not adjust the minimum and maximum apply values.



Not sure, what other PSFs you are referring to? You mean in the Adaptive RL dialog?

The PSFs to which I was referring are those derived by your PSF generator.

Now, if this still produces artifacts with more than a few iterations, I suspect that aliasing artifacts are rearing their ugly head. Aliases are larger-than-actual representations of fine detail, so the larger detail is getting some definition added by the deconvolution where it shouldn't. Maybe, just as an attempt, some over-correction of the noise adaptation might help a bit, but it is not ideal. Also, multiple runs with a deliberately too-small Gaussian blur radius PSF may build up to an optimum more slowly.

As a final resort, but it won't do much if aliasing is indeed the issue, you can try to first up-sample the image, say to 300%, which should keep the file size below the 2GB TIFF boundary that could cause issues with some TIFF libraries. The up-sampled data (hopefully the re-sampling does not add too many artifacts of its own) has not gained any resolution, but the same detail is now spread over more pixels.

That data will be easier (but much slower) to deconvolve smoothly: multiply the PSF blur radius by the same factor, or determine it more accurately by up-sampling the slanted edge first and then measuring the blur radius, and stop the iterations when visible artifacts begin to develop. The problem becomes how to create a custom kernel that fits within the 9x9 maximum dimensions of ImagesPlus; RawTherapee can go to a radius of 2.5, which is close. Then do a simple down-sample to the original image size, and compensate for the down-sampling blur by adding some small (e.g. 0.6) radius deconvolution sharpening.

Other than fine-tuning the shape of the PSF by selecting a fill-factor smaller than 100% upon creation, there is not much left to do, other than resort to super resolution or stitching longer focal lengths.

If you'd like, I could try a deconvolution with PixInsight, because that allows more tweaking of the parameters, and see if that makes a difference. But I'd like to have a 16-bit PNG crop from the RT Amaze conversion to work on.

I will try these other suggestions at a later date.

If you (or others) wish to work with my files, here are links.

The raw file (NEF) f/8:
http://adobe.ly/1bbzgzC

The RawTherapee-rendered TIFF f/8:
http://adobe.ly/1ifW49t

Other NEFs

f/4
http://adobe.ly/1dJp3jW

f/16
http://adobe.ly/1mrvDOa

f/22
http://adobe.ly/1cMuNTV

Thanks,

Bill

p.s.
Edited 1/20/2014 to correct links and add files
« Last Edit: January 20, 2014, 12:29:19 pm by bjanes »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #283 on: January 19, 2014, 05:46:59 pm »

Bart, Thanks again for your detailed replies. Yes, I copied the values from your web-based tool and pasted them into the IP Custom PSF dialog. I used the Apply check box in the custom filter dialog instead of Set, but the effect seems to be the same as when I use the Set function. The Apply box is not covered in the IP docs that I have, and may have been added in a later version. I am using IP ver. 5.0.


Hi Bill,

Great, that explains a few things, and reveals a procedural error. Good that I asked, or we would not have found it.

The filter kernel values that you used are for the direct application of a single deconvolution filter operation (the addition of a high-pass filter to the original image). To store those values for use in other ImagesPlus dialogs, one uses the "Set" button, and can leave the dialog box open for further adjustments. Hitting the "Apply" button will apply the single-pass deconvolution to the active (and/or locked) image window(s).

However, the adaptive Richardson-Lucy dialog expects a regular Point Spread Function (all kernel values positive) to be defined in the Custom filter box, just like a regular sample of a blurred star. And here a larger support kernel should produce a more accurate restoration; a 9x9 kernel would be almost optimal (as the PSF tool suggests, approx. 10x Sigma).

Quote
I left the noise threshold at the default and did not adjust the minimum and maximum apply values.

The default noise assumption often works well enough, and the minimum/maximum limits are more useful for star images.

Quote
The PSFs to which I was referring are those derived by your PSF generator.

I see. The different PSFs are just precalculated kernel values for various purposes. A regular PSF is fine, although with large kernels there will be a lot of values with many leading zeros after the decimal point. When the input boxes, like those of ImagesPlus, only allow a given number of digits (15 or so) to be entered, it can help to pre-multiply all kernel values. ImagesPlus will still normalize the kernel (dividing each value by the sum of all values, so they total 1.0) to keep overall image brightness the same.

I used to use the second PSF version (PSF[0,0] normalized to 1.0) with a multiplier of 65535. That gives a simple indication of whether the kernel values in the outer positions have a significant enough effect (say > 1.0) on the total sum in 16-bit math. When a kernel element contributes little, one could probably also use a smaller kernel size.
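
A quick way to sanity-check that, with a generic Gaussian standing in for the generator's output:

Code:
import numpy as np

def gaussian_psf(sigma, size):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-0.5 * (ax / sigma) ** 2)
    return np.outer(g, g)

k = gaussian_psf(0.89, 9)
scaled = k / k.max() * 65535     # centre element normalized to 1.0, then multiplied by 65535
print(round(scaled[0, 0], 3))    # corner value; if well below 1.0 it barely registers in 16-bit math
print(round(scaled[4, 0], 3))    # mid-edge value; compare against 1.0 to judge whether 9x9 is needed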

Quote
If you (or others) wish to work with my files, here are links.

I'll have a look, thanks. BTW, the NEF is of a different file (3086) than the TIFF (3088).

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

bjanes

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3387
Re: Deconvolution sharpening revisited
« Reply #284 on: January 19, 2014, 08:36:36 pm »

Hi Bill,

Great, that explains a few things, and reveals a procedural error. Good that I asked, or we would not have found it.

The filter kernel values that you used are for the direct application of a single deconvolution filter operation (the addition of a high-pass filter to the original image). To store those values for use in other ImagesPlus dialogs, one uses the "Set" button, and can leave the dialog box open for further adjustments. Hitting the "Apply" button will apply the single-pass deconvolution to the active (and/or locked) image window(s).

However, the adaptive Richardson-Lucy dialog expects a regular Point Spread Function (all kernel values positive) to be defined in the Custom filter box, just like a regular sample of a blurred star. And here a larger support kernel should produce a more accurate restoration; a 9x9 kernel would be almost optimal (as the PSF tool suggests, approx. 10x Sigma).

The default noise assumption often works well enough, and the minimum/maximum limits are more useful for star images.

I see. The different PSFs are just precalculated kernel values for various purposes. A regular PSF is fine, although with large kernels there will be a lot of values with many leading zeros after the decimal point. When the input boxes, like those of ImagesPlus, only allow a given number of digits (15 or so) to be entered, it can help to pre-multiply all kernel values. ImagesPlus will still normalize the kernel (dividing each value by the sum of all values, so they total 1.0) to keep overall image brightness the same.

I used to use the second PSF version (PSF[0,0] normalized to 1.0) with a multiplier of 65535. That gives a simple indication of whether the kernel values in the outer positions have a significant enough effect (say > 1.0) on the total sum in 16-bit math. When a kernel element contributes little, one could probably also use a smaller kernel size.

I'll have a look, thanks. BTW, the NEF is of a different file (3086) than the TIFF (3088).

Cheers,
Bart

Bart,

Sorry, but I posted the link for the f/11 raw file. Here is the link for f/8:
http://adobe.ly/1bbzgzC

I calculated the 9x9 PSF for a radius of 0.7533 and scaled by 65535 and pasted the values into the IP custom filter as shown:


I then applied the filter with the set command and performed 10 iterations of RL. The image appeared overcorrected with artifacts similar to those I experienced previously.



Regards,

Bill

ps
edited 1/20/2014 to correct link to f/8 file
« Last Edit: January 20, 2014, 11:14:55 am by bjanes »
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #285 on: January 20, 2014, 01:11:40 am »

Bob, RT displays f16 on that one.
Logged

bjanes

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3387
Re: Deconvolution sharpening revisited
« Reply #286 on: January 20, 2014, 11:15:45 am »

Bob, RT displays f16 on that one.

Thanks for pointing this out. I corrected the links.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #287 on: January 20, 2014, 11:56:57 am »

Bart,

Sorry, but I posted the link for the f/11 raw file. Here is the link for f/8:
https://creative.adobe.com/share/60c87f91-96a0-4906-b108-568974011f22

No problem, I've downloaded it and the EXIF says f/16 (as intended for the exercise at hand). I've made a conversion in RawTherapee, with Raw-level Chromatic Aberration correction, which helped a bit. Further processing was mostly left at default, except a White Balance on the white patch of the star chart grey scale and a small increase in exposure to get the mid-grey level to approx. 50% and white at 90%.

Now, there is good news and bad news.

The good news is that a good deconvolution is possible. The bad news is that it is not simple to do with the conventional approach of determining the amount of blur based on a slanted edge.

I was already a bit surprised that it was possible to produce significant deconvolution artifacts with 'normal' radius settings one might expect from other tests on f/16 images. I was able to get a nice Adaptive RL deconvolution result in ImagesPlus by using the default 3x3 Gaussian PSF, twice (after a first run, click the blue eye icon on the toolbar, and do another run). Doing multiple deconvolution runs with a small radius amounts to the same as a single run with a larger radius, but a run with a default 5x5 Gaussian already was problematic. Hmm, what to think of that?
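
The "two small runs roughly equal one larger run" behaviour follows from how Gaussian blurs combine: the radii add in quadrature (for the iterative RL inverse the equivalence is only approximate):

\[ G_{\sigma_1} \ast G_{\sigma_2} = G_{\sqrt{\sigma_1^2 + \sigma_2^2}} \]

So two passes aimed at radius r each behave roughly like a single pass aimed at r*sqrt(2), about 1.41 r.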

I then checked the edge profile of the Slanted edge, and found that there is some glare (possibly from the lighting angle or the print surface) that makes it hard to produce a clean profile model with my Slanted Edge tool. The tool suggests a much larger radius, which already tested as problematic. But the trained human eye is sometimes harder to fool than a simple curve fitting algorithm, so I saw that I had to try something with smaller radii, although I didn't know how small.

I then attempted an empirical approach (when everything else fails, try and try again) to finding a better PSF size/shape. I had to use the power of PixInsight to help me with that, because it also produces some statistical convergence data to assist in the effort, and it allows the math to be done with 64-bit floating-point precision (to eliminate the possibility of rounding errors influencing the tests). This all suggested that a Gaussian radius of about 0.67 should produce a good compromise. That is indeed a radius normally only needed by the best possible lenses at the optimal aperture, certainly not at f/16. So this remains puzzling, and hard to explain.

To test the influence of the deconvolution algorithm implementation, I then produced a PSF with a radius of 0.67, as suggested by PixInsight, with a 65535 multiplier for use in ImagesPlus (see attachment). A 7x7 kernel should be large enough. Note that in my version of ImagesPlus, there is a Custom Restoration PSF dialog for sharpening (besides a Custom filter dialog).

This allows a reasonably good deconvolution to be produced, without too many artifacts, improved a bit further by linearization of the data before deconvolution. However, I'm not totally satisfied yet (and the lack of logic for the need of such a small PSF is puzzling), so some more investigation is in order.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

bjanes

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3387
Re: Deconvolution sharpening revisited
« Reply #288 on: January 20, 2014, 12:44:09 pm »

No problem, I've downloaded it and the EXIF says f/16 (as intended for the exercise at hand). I've made a conversion in RawTherapee, with Raw-level Chromatic Aberration correction, which helped a bit. Further processing was mostly left at default, except a White Balance on the white patch of the star chart grey scale and a small increase in exposure to get the mid-grey level to approx. 50% and white at 90%.

Now, there is good news and bad news.

The good news is that a good deconvolution is possible. The bad news is that it is not simple to do with the conventional approach of determining the amount of blur based on a slanted edge.

I was already a bit surprised that it was possible to produce significant deconvolution artifacts with 'normal' radius settings one might expect from other tests on f/16 images. I was able to get a nice Adaptive RL deconvolution result in ImagesPlus by using the default 3x3 Gaussian PSF, twice (after a first run, click the blue eye icon on the toolbar, and do another run). Doing multiple deconvolution runs with a small radius amounts to the same as a single run with a larger radius, but a run with a default 5x5 Gaussian already was problematic. Hmm, what to think of that?

I then checked the edge profile of the Slanted edge, and found that there is some glare (possibly from the lighting angle or the print surface) that makes it hard to produce a clean profile model with my Slanted Edge tool. The tool suggests a much larger radius, which already tested as problematic. But the trained human eye is sometimes harder to fool than a simple curve fitting algorithm, so I saw that I had to try something with smaller radii, although I didn't know how small.

I then attempted an empirical approach (when everything else fails, try and try again) to finding a better PSF size/shape. I had to use the power of PixInsight to help me with that, because it also produces some statistical convergence data to assist in the effort, and it allows the math to be done with 64-bit floating-point precision (to eliminate the possibility of rounding errors influencing the tests). This all suggested that a Gaussian radius of about 0.67 should produce a good compromise. That is indeed a radius normally only needed by the best possible lenses at the optimal aperture, certainly not at f/16. So this remains puzzling, and hard to explain.

To test the influence of the deconvolution algorithm implementation, I then produced a PSF with a radius of 0.67, as suggested by PixInsight, with a 65535 multiplier for use in ImagesPlus (see attachment). A 7x7 kernel should be large enough. Note that in my version of ImagesPlus, there is a Custom Restoration PSF dialog for sharpening (besides a Custom filter dialog).

This allows a reasonably good deconvolution to be produced, without too many artifacts, improved a bit further by linearization of the data before deconvolution. However, I'm not totally satisfied yet (and the lack of logic for the need of such a small PSF is puzzling), so some more investigation is in order.

Cheers,
Bart

Bart,

Thanks again for your help and good work.

Your target was printed on glossy paper. I tried to position the lights (Solux 4700K) properly, but there could still be some glare. The ISO 12233 target in the background has some slanted edges and was printed on matt paper, so it might be better even though it is slightly off center.

Bill

PS I reposted links to the files in this message here. I was just learning to use Adobe Creative Cloud for the first time and some of the links were erroneous.
Logged

rnclark

  • Newbie
  • *
  • Offline Offline
  • Posts: 6
Re: Deconvolution sharpening revisited
« Reply #289 on: February 22, 2014, 03:20:52 am »

Hi Guys,
Sorry it has been a while--I had several trips and work was intense.

I did add a third example in my sharpening series: http://www.clarkvision.com/articles/image-restoration3/
It is upsampling like Bart described and then deconvolution sharpening.

Some of the questions asked:

> I can't see how a star would be "smaller than 0% MTF".
While the star itself is extremely small, in the optical system it is imaged as a diffraction disk.  My point was that one can resolve with deconvolution two stars that are closer than 0% MTF.  MTF is a one-dimensional description of the response of an optical system to a bar chart, and one can't resolve the bars in a bar chart if the bars are closer than 0% MTF.  But one can resolve 2-dimensional structures that are closer than 0% MTF.
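
For a diffraction-limited system, the 0% MTF point referred to here is the usual cutoff frequency; at f/16 with green light (λ = 550 nm assumed):

\[ \nu_0 = \frac{1}{\lambda N} = \frac{1}{0.00055\ \text{mm} \times 16} \approx 114\ \text{cycles/mm} \]

Bars spaced more finely than that transfer zero contrast no matter how much sharpening is applied; the point above is that two isolated point sources can still be separated below that spacing when the PSF is well characterized.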

Bart, you say "I do not fully agree with your downsampling conclusions" and I would agree with you if one just down samples bar charts.  But as you say "sharpening before downsampling may happen to work for certain image content (irregular structures)" is the key.  Images of the real world are dominated by irregular structures, and I have yet to see in any of my images artifacts like those seen in your examples of bar charts.  If I ever run across a pathologic case where artifacts are seen like those in your downsampling examples, then I'll change my methodology.  So far I have never seen such a case in my images.  So far no one has met my challenge of downsampling first, then sharpening and producing a better, or even equal image like those I show in Figure 7e and 7f at:
http://www.clarkvision.com/articles/image-restoration2/
Aren't your posts about upsampling, deconvolution sharpening, and then downsampling in conflict with saying no sharpening until you have downsized?

Fine_Art sharpened my crane image with Van Cittert.  While I have read research papers on Van Cittert, it seems to produce similar results to Richardson-Lucy, so I have not explored Van Cittert, mostly due to lack of time, and I figured I should try and master RL first.  Your posted sharpening of the crane looks to me about like unsharp mask results on my image-restoration2 web page (Figure 1).  So, I think you can push much further.  As the image is high S/N, I would not be surprised if you could surpass my RL results in Figure 3b.

Also asked was: what is my strategy for choosing the PSF?  Well, for star images it is simple: just choose a non-saturated star.  But for scenics and landscapes it is not so easy to find the right one.  And one reads online (e.g. the wikipedia page on deconvolution) that it is hopeless, so it is not applied to regular images.  Well, it is not that hard either.  Basically, I start large and work small.  But there is rarely one PSF for the typical landscape or wildlife image, mainly due to depth of field.  Thus, different parts of the image need different PSFs.  I also don't worry about linearizing the data.  I just use the image as it comes out of the raw converter with the tone curve applied, plus any other contrast adjustments I do.  A Gaussian response function run through such a process is still reasonably modeled by a Gaussian, though different from the Gaussian one would derive from linear data.

So it is simple (I can see a need for a 4th article in my series): start with a large PSF, like a 9x9 Gaussian, and run a few iterations.  If it looks good, run again with more iterations.  If that starts producing ringing artifacts, that is an indication the PSF is too large and/or the iterations too many.  I drop back on the iterations.  And I then drop the PSF (e.g. 7x7) and start again.  For a typical DSLR image made at say ISO 400, one should be able to go for 50 to 100 iterations without significant noise enhancement.  With a high S/N image, say ISO 100 with a camera having large pixels, 500 iterations may be possible.  If the PSF is too small, one can do hundreds of iterations and not see any improvement in the image.  So there is a maximum that sharpens without artifacts.  What I find in the typical image is that different parts of the image respond better to different sizes of PSF.  So I put all the results in Photoshop as layers, with the original on top and increasing PSF size going down.  I then erase portions of each layer to show the portion of the sharpened image that responded best to the deconvolution.  For wildlife, I usually concentrate on the eyes first, then work out.  I usually leave the background as the original unsharpened image.
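
One way to script the sweep part of that recipe (the PSF sizes, sigmas and iteration counts below are placeholder assumptions; judging the results for ringing and blending the best regions as layers still happens by eye):

Code:
import numpy as np
from skimage import io, restoration

def gaussian_psf(sigma, size):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-0.5 * (ax / sigma) ** 2)
    k = np.outer(g, g)
    return k / k.sum()

img = io.imread('wildlife_crop.tif').astype(np.float64) / 65535.0
# Sweep PSF sizes (large to small) and iteration counts; inspect each result for ringing,
# then blend the best regions as layers, as described above.
for size, sigma in [(9, 1.5), (7, 1.1), (5, 0.8)]:       # assumed sigmas; adjust to the actual blur
    for n_iter in (25, 50, 100):
        dec = np.dstack([restoration.richardson_lucy(img[..., c], gaussian_psf(sigma, size), n_iter)
                         for c in range(3)])
        io.imsave('rl_%dx%d_%03d.tif' % (size, size, n_iter),
                  (np.clip(dec, 0, 1) * 65535).astype(np.uint16))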

For landscapes, it is usually simpler than for wildlife.  Most of the image is usually pretty sharp, so I find the one PSF that works best.

I can see developing some examples would be nice.

The bottom line is that real world images have variability, and no one formula works for multiple images, let alone all parts of one image.  This is also true for the simpler methods like unsharp mask.  So I just try a few things and see what works well for an image, then push it until I see artifacts, then back off.

Roger

Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #290 on: February 22, 2014, 02:43:54 pm »

Conceptually, I still don't understand the reason for hundreds of iterations. To me, that seems to imply the wrong PSF is being used.

Most of my shots are ISO 100 with a high-quality prime lens, sometimes ISO 400, rarely 800 or more. This high S/N is the reason I use Van Cittert first. I can get a good base improvement with the default 10 cycles in the dialog box. I go 5x5, then 3x3, then switch to adaptive R/L: maybe 10 iterations at 5x5, then 30 at 3x3. I have never felt a need for a larger radius, given a good lens to begin with.

Using Bart's upsample first method gives a better result without question. If I need it I would probably start with 7x7.

Please explain the benefit of a more gradual curve (9x9 with more iterations) vs the default Gaussian shape (5x5 with low iterations). The feeling I get is that my camera puts the data into the right pixel +/- a radius of about 1. Therefore the tails of a 5x5 will create ring artifacts with too many iterations. This is what I see happening. Maybe I misinterpret the output.
Logged

rnclark

  • Newbie
  • *
  • Offline Offline
  • Posts: 6
Re: Deconvolution sharpening revisited
« Reply #291 on: February 22, 2014, 03:03:41 pm »

Conceptually, I still don't understand the reason for hundreds of iterations. To me, that seems to imply the wrong PSF is being used.

Most of my shots are ISO 100 with a high-quality prime lens, sometimes ISO 400, rarely 800 or more. This high S/N is the reason I use Van Cittert first. I can get a good base improvement with the default 10 cycles in the dialog box. I go 5x5, then 3x3, then switch to adaptive R/L: maybe 10 iterations at 5x5, then 30 at 3x3. I have never felt a need for a larger radius, given a good lens to begin with.

Using Bart's upsample first method gives a better result without question. If I need it I would probably start with 7x7.

Please explain the benefit of a more gradual curve (9x9 with more iterations) vs the default Gaussian shape (5x5 with low iterations). The feeling I get is that my camera puts the data into the right pixel +/- a radius of about 1. Therefore the tails of a 5x5 will create ring artifacts with too many iterations. This is what I see happening. Maybe I misinterpret the output.


Hi,
Deconvolution is an iterative process.  Think of it this way: in a pixel, there is signal from the surrounding pixels contaminating the signal in the pixel.  But those adjacent pixels have signal contamination from the pixels surrounding them, and so on.  To put back the light in each pixel, one would need to know the correct signal from the adjacent pixels.  But we don't know that, because those pixels too are contaminated.  The result is that there is no direct solution, only an iterative one.  A few iterations gets only a partial solution.
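
For reference, this "redistribute and re-estimate" idea is exactly what the Richardson-Lucy update does on each pass (g is the recorded image, h the PSF, and h-tilde the PSF flipped about its centre):

\[ \hat{f}^{(k+1)} = \hat{f}^{(k)} \cdot \left[ \left( \frac{g}{h \ast \hat{f}^{(k)}} \right) \ast \tilde{h} \right] \]

Each iteration re-blurs the current estimate, compares it with the recorded image, and feeds the ratio back, so the solution is only approached gradually.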

A larger PSF blur radius will result in more artifacts, so people limit the number of iterations to prevent noticeable artifacts.  If you start getting artifacts at 10 or so iterations, my experience is that the PSF is too large, in which case it is usually better to use a smaller PSF and more iterations.

There are cases where the PSF may be like two different Gaussians with different radii.  Then one could either derive the PSF for that image, or do two Gaussian runs.  For example, while diffraction is somewhat Gaussian, there is a big "skirt" especially considering multiple wavelengths.  Thus a 2 step deconvolution, like a large radius Gaussian with a few iterations, followed by a smaller radius Gaussian with more iterations can be effective.

Roger
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #292 on: February 22, 2014, 03:19:42 pm »

Thanks,

What do you think of using the program's ability to split RGB, to do a larger radius on red, a smaller one on green, and a smaller one still on blue, then recombine?
Logged

rnclark

  • Newbie
  • *
  • Offline Offline
  • Posts: 6
Re: Deconvolution sharpening revisited
« Reply #293 on: February 22, 2014, 03:27:05 pm »

Thanks,

What do you think of using the program's ability to split RGB, to do a larger radius on red, a smaller one on green, and a smaller one still on blue, then recombine?

Hmmm...  Seems like more work.  I would wonder about color noise in the final image.  Probably better to just do the sharpening on a luminance channel.

Roger
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #294 on: February 22, 2014, 04:08:34 pm »

Hmmm...  Seems like more work.  I would wonder about color noise in the final image.  Probably better to just do the sharpening on a luminance channel.

Roger

Actually, I have found the detail-preserving NR filters highly effective at removing color noise. You are probably right about using the luminance channel. In my handful of prior tests I was not able to tell the difference. I expected more smear on red, but I was unable to improve on just using the L channel. Your reply tells me it was more a poor idea than poor technique.
Logged

rnclark

  • Newbie
  • *
  • Offline Offline
  • Posts: 6
Re: Deconvolution sharpening revisited
« Reply #295 on: February 22, 2014, 04:41:00 pm »

Actually, I have found the detail-preserving NR filters highly effective at removing color noise. You are probably right about using the luminance channel. In my handful of prior tests I was not able to tell the difference. I expected more smear on red, but I was unable to improve on just using the L channel. Your reply tells me it was more a poor idea than poor technique.

Well, I would not say a poor idea.  I have not tried it.  If one were limited by diffraction, the red diffraction disk is larger than the green and blue, so in theory, sharpening the red with a larger PSF and the blue with a smaller PSF than the green channel makes sense.  Perhaps those f/32 images...

At apertures not limited by diffraction, and with the effects of the blur filter, there probably isn't much difference in the PSF between the color channels unless the lens has some really bad chromatic aberration.

Roger
Logged

Christoph C. Feldhaim

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2509
  • There is no rule! No - wait ...
Re: Deconvolution sharpening revisited
« Reply #296 on: February 22, 2014, 04:44:37 pm »

Theoretically it makes total sense, since Airy discs differ by a factor of about 2 between red and blue light.
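
Worked out with the standard Airy formula, the disc diameter scales directly with wavelength, so across the visible range:

\[ d = 2.44\,\lambda N, \qquad \frac{d_{700\,\mathrm{nm}}}{d_{400\,\mathrm{nm}}} = \frac{700}{400} \approx 1.75 \]

which is indeed close to the factor of two mentioned, independent of the f-number.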

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #297 on: February 22, 2014, 05:01:44 pm »

Theoretically it makes total sense, since Airy discs differ by a factor of about 2 between red and blue light.

That is what I was thinking of; I did not bring it up in a post to Roger, who has written more papers than most people have read. For the rest of us it is worth mentioning.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #298 on: February 22, 2014, 07:53:31 pm »

Bart, you say "I do not fully agree with your downsampling conclusions" and I would agree with you if one just down samples bar charts.

Trust me, I only use (bar) charts for objective, worst-case-scenario testing. If a procedure passes that test, it will pass real-life challenges with flying colors. In fact, near-orthogonal bi-tonal charts (e.g. the 1951 USAF chart, designed (obviously) for analog aerial imaging with film captures) should be banned from quantitative discrete sampling tests; they serve as nothing more than a visual cue (but are very susceptible to phase alignment with the camera's sensels).

I have (many years ago on Usenet) proposed a more reliable, easily interpretable, modified star target for (visual qualitative and) quantitative analysis (along with the Slanted Edge target) that in fact has become a part of the ISO standards for resolution testing of Digital Still Image cameras.

Quote
But as you say "sharpening before downsampling may happen to work for certain image content (irregular structures)" is the key.  Images of the real world are dominated by irregular structures, and I have yet to see in any of my images artifacts like those seen in your examples of bar charts.  If I ever run across a pathologic case where artifacts are seen like those in your downsampling examples, then I'll change my methodology.  So far I have never seen such a case in my images.

You are correct in that subject matter matters ...

Most shots of nature scenes are pretty forgiving, although e.g. tree branches against a much brighter sky could get one into some trouble. A sample image of a natural scene (although with many more urban structures as well) that gets everybody into downsampling trouble (try a web size of 533x800 pixels), even without prior sharpening, is given here. The devil is in the rendering of the brick structures, the branches against the sky, and the grass structure/texture.

So my goal is to prevent nasty surprises, with the knowledge that one might be able to push things a little further, but with the risk of introducing trouble.

Quote
So far no one has met my challenge of downsampling first, then sharpening and producing a better, or even equal image like those I show in Figure 7e and 7f at:
http://www.clarkvision.com/articles/image-restoration2/

It is about as much as one can practically extract from the image crop with current technology.

Quote
Aren't your posts about upsampling, deconvolution sharpening, and then downsampling in conflict with saying no sharpening until you have downsized?

Yes, although the 'violations' are tolerable; in fact, they are usually better trade-offs than a straightforward deconvolution or other sharpening at 100% zoom size. There are several reasons for that.

One is that by up-sampling we can change existing samples/pixels at a sub-pixel level. Up-sampling by itself does not add resolution (although some procedures can); we are still bound (at best) by the Nyquist frequency of the original sampling density, but we can shape the steepness of the gradient between pixels more accurately. Two pixels of identical luminosity may get a different luminosity after deconvolution, depending on their neighboring pixels, even more so if we can distribute the contribution of surrounding pixels with greater precision.

So subsequent down-sampling of a bandwidth-limited data source can only cause aliasing if additional resolution is created, which deconvolution might, but only for those spatial frequencies that were already at the limit; all others will benefit (admittedly to variable degrees) from the gained precision.

Another reason is that it becomes visually much easier to detect over-sharpening, even for inexperienced users. At a larger magnification, it is easier to detect halos and e.g. stair-stepping, blocking, or posterization artifacts.

I'm preparing some example material, based both on charts and on real life imagery. To be continued ...

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #299 on: February 22, 2014, 08:15:27 pm »

Deconvolution is an iterative process.  Think of it this way: in a pixel, there is signal from the surrounding pixels contaminating the signal in the pixel.  But those adjacent pixels have signal contamination from the pixels surrounding them, and so on.  To put back the light in each pixel, one would need to know the correct signal from the adjacent pixels.  But we don't know that, because those pixels too are contaminated.  The result is that there is no direct solution, only an iterative one.  A few iterations gets only a partial solution.

Hi Roger,

Excellent explanation! It's not about repeating a procedure as such, but more about homing in on an optimal solution. Deconvolution is generally known as a mathematically ill-posed problem to solve, especially in the presence of noise.

Quote
There are cases where the PSF may be like two different Gaussians with different radii.  Then one could either derive the PSF for that image, or do two Gaussian runs.  For example, while diffraction is somewhat Gaussian, there is a big "skirt" especially considering multiple wavelengths.  Thus a 2 step deconvolution, like a large radius Gaussian with a few iterations, followed by a smaller radius Gaussian with more iterations can be effective.

Yes, although combining multiple random blur contributions (e.g. subject motion/camera shake, residual lens aberrations, defocus, diffraction, the optical low-pass filter, sensel aperture) tends to gravitate to a combined Gaussian-shaped blur distribution pretty fast. There may be some variation, but it usually is mostly aperture/diffraction induced. Of course defocus is a killer as well.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==