Luminous Landscape Forum

Equipment & Techniques => Medium Format / Film / Digital Backs – and Large Sensor Photography => Topic started by: bjanes on July 23, 2010, 08:42:36 AM

Title: Deconvolution sharpening revisited
Post by: bjanes on July 23, 2010, 08:42:36 AM
In his comparison of the new Leica S2 with the Nikon D3x, Lloyd Chambers (Diglloyd (http://diglloyd.com/diglloyd/2010-07-blog.html#_20100722DeconvolutionSharpening)) has shown how the deconvolution sharpening (more properly, image restoration) with the Mac-only raw converter Raw Developer markedly improves the micro-contrast of the D3x image to the point that it rivals that of the Leica S2. Diglloyd's site is a pay site, but it is well worth the modest subscription fee. The Richardson-Lucy algorithm used by Raw Developer partially restores detail lost to the blur filter (optical low-pass filter) on the D3x and other dSLRs.

Bart van der Wolf and others have been touting the advantages of deconvolution image restoration for some time, but pundits on this forum usually pooh-pooh the technique, pointing out that deconvolution techniques are fine in theory but in practice are limited by the difficulty of obtaining a proper point spread function (PSF) that enables the deconvolution to undo the blurring of the image. Roger Clark (http://www.clarkvision.com/articles/image-restoration1/index.html) has reported good results with the RL filter available in the astronomical program ImagesPlus. Focus Magic is another deconvolution program used by many for this purpose, but it has not been updated for some time and is 32-bit only.

Isn't it time to reconsider deconvolution? The unsharp mask is very mid-20th century and originated in the chemical darkroom. In many cases decent results can be obtained by deconvolving with a less-than-perfect, empirically derived PSF. Blind deconvolution algorithms that determine the PSF automatically are being developed.

Regards,

Bill
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 23, 2010, 09:03:24 AM
You can also add the freeware RawTherapee (http://rawtherapee.com/) to the list of converters that offer RL deconvolution sharpening.  The program is currently undergoing some major revisions which will make it much better (alpha builds can be found here (http://rawtherapee.com/forum/viewtopic.php?t=1910&start=120)).  The deconvolution sharpening works pretty well, but there may be improvements to be had; it's on my list of things to tackle in the future.

I have also heard that Smart Sharpen in Photoshop is deconvolution based, but I've never been able to find any semi-official confirmation of that from the people who should know.
Title: Deconvolution sharpening revisited
Post by: John R Smith on July 23, 2010, 09:14:37 AM
If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?

John
Title: Deconvolution sharpening revisited
Post by: Craig Lamson on July 23, 2010, 09:38:58 AM
Quote from: John R Smith
If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?

John

I wrote a quick review of Focus Magic for a hobbyist website some time back.  Be forewarned, it is written for that audience...

http://www.craiglamson.com/FocusMagic.htm (http://www.craiglamson.com/FocusMagic.htm)
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 23, 2010, 09:49:47 AM
Quote from: John R Smith
If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?

John

Suppose we are interested in undoing the effect of the AA filter.  The image coming through the lens is focussed on the plane of the sensor, then the AA filter acts on the image in a manner similar to a Gaussian blur in Photoshop (though with a different blur profile).  The idea of deconvolution sharpening is that mathematically, if one knows the blur profile, one can reverse engineer to a reasonable approximation what the image was before it went through the AA filter.  The same idea works for many other kinds of image-degrading phenomena, from misfocus to motion blur, to lens aberrations, etc.  The trick is that each different kind of blur has its own blur profile; ideally one would want to use the one specific to the image flaw one is trying to remove.  But practically that isn't possible, so typically one uses a generic blur profile and hopes that it does a reasonably good job.  Another issue is that the method, like any sharpening method, can amplify noise and generate haloes, so one has to modify it to suppress noise enhancement and haloes.

For the more technically inclined, a more detailed explanation is that a good way to think about imaging components is in terms of spatial frequencies; for instance, MTF's are multiplicative -- for a fixed spatial frequency, the MTF of the entire optical chain is the product of the MTF's of the individual components.  So if the component doing the blurring has a blur profile B(f) as a function of spatial frequency f, and the image has spatial frequency content I(f) at the point it reaches this component, then the image after passing through that component is I'(f)=I(f)*B(f).  Thus, if one knows the blur profile B(f), one can recover the unblurred image by dividing: I(f)=I'(f)/B(f).  The problem is that B(f) can be small at high frequencies, since it is a low pass filter that is removing these frequencies from the image.  Dividing by a small number is inherently numerically unstable, so if one chooses the wrong blur profile, or there is a bit of noise in the image, all those inaccuracies get amplified by the method.  So in practice one includes a bit of damping at high frequency (quite similar to the 'radius' setting in USM) to keep the algorithm from going too far astray.

Edit: This multiplicative (in frequencies) aspect of the blur I'(f)=I(f)*B(f) is what is known as convolution, which is why the reverse process I(f)=I'(f)/B(f) is called deconvolution.   I see all the techies have chimed in  
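The division-with-damping recipe above can be sketched in a few lines of NumPy. This is a toy, Wiener-style damped inverse written purely for illustration (the function name `deconvolve_freq` and the damping constant `eps` are invented here, not taken from any converter's actual code):

```python
import numpy as np

def deconvolve_freq(blurred, psf, eps=1e-3):
    """Undo a known blur in the frequency domain.

    blurred: 2-D image degraded by `psf`
    psf:     point spread function, same shape as `blurred`, centered
    eps:     damping that stabilizes the division where the blur's
             frequency response B(f) is nearly zero
    """
    I_blurred = np.fft.fft2(blurred)
    B = np.fft.fft2(np.fft.ifftshift(psf))  # blur profile B(f)
    # Instead of the naive I'(f)/B(f), use I'(f)*conj(B)/(|B|^2 + eps):
    # identical where |B| is large, but the gain rolls off to zero at
    # frequencies the blur has essentially destroyed.
    restored = I_blurred * np.conj(B) / (np.abs(B) ** 2 + eps)
    return np.real(np.fft.ifft2(restored))
```

With eps = 0 this is the exact inverse filter, and a tiny amount of noise in the input blows up exactly as described above; raising eps trades restored high-frequency detail for stability, much like the damping mentioned in the post.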
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 23, 2010, 09:51:14 AM
Quote from: John R Smith
If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?

Hi John,

Basically it tries to invert the process that caused the blurring of image detail, which is why these are also referred to as restoration algorithms. Blurring spreads some of the info of a single pixel over its neighbors; deconvolution is about removing that blur component from the neighboring pixels and adding it back to the original pixel. The blurring is mathematically modeled as a convolution, hence the inverse is called deconvolution.

One of the difficulties is how to discriminate between signal and noise. One preferably sharpens the signal, not the noise.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: PierreVandevenne on July 23, 2010, 09:52:16 AM
Quote from: John R Smith
If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?

John

When you apply a filter to an image in Photoshop, you are generally applying what is called a convolution kernel. These are tables (arrays in computer speak) of numbers that describe the transformation applied to the pixels. The blurring of an image is a convolution. De-convolution is basically running the process in reverse.

In real life, lens blur, movement, filter interference etc... can also be represented as convolutions. The catch is that we often have no precise idea of what the array numbers are and therefore have to guess them. There are many different methods of guessing, some statistical, some based on knowledge of the parameters of the system, etc.... Some specific methods are more suited to some specific situations. Mathematically, that can become quite complex, as we can't simply try all kernels and possible sequence of operations: brute force doesn't work.

One interesting point to note is that if you have a perfect point source and its image (called Point Spread Function), you can have a fairly good estimate of the convolution that was applied.

This is why, in astronomical imaging, deconvolution has been so successful: stars are, for most practical purposes, point sources, and their image allows us to reverse engineer the convolution their photons have endured. But even that isn't perfect, and there's still a small element of mystery and black magic to the process. Your deconvolution algorithm can converge to the correct initial image, but it can also diverge.
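The "table of numbers" idea can be made concrete in a few lines of NumPy (a self-contained sketch, not any product's code). Blurring a single bright pixel -- a point source -- with a 3x3 kernel reproduces the kernel itself, which is exactly the point-spread-function idea described above:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over the image (zero padding at the
    borders). For the symmetric kernels used here, convolution and
    correlation coincide, so no kernel flip is needed."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + image.shape[0],
                                         j:j + image.shape[1]]
    return out

# A 3x3 box blur: each pixel becomes the average of itself and its
# eight neighbours -- detail is "spread" over the neighborhood.
box = np.full((3, 3), 1.0 / 9.0)

point = np.zeros((7, 7))
point[3, 3] = 1.0                   # a point source
psf_image = convolve2d(point, box)  # its image IS the kernel (the PSF)
```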
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 23, 2010, 10:06:07 AM
Quote from: ejmartin
You can also add the freeware RawTherapee (http://rawtherapee.com/) to the list of converters that offer RL deconvolution sharpening.

Indeed, and it can also use a TIFF as input, not only Raw files. AFAIK it is implemented as a postprocessing operation anyway, so TIFFs (without prior sharpening) are just as good as Raws, for that PP phase. JPEGs are less suited for RL sharpening, as it may bring out block artifacts related to the lossy compression.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: John R Smith on July 23, 2010, 10:18:32 AM
Holy Cow. That must be some sort of record for the fastest set of highly technical and precisely worded replies to a question on the LL Forum, ever. I am humbled, and (I think) enlightened.

Many thanks. I shall now re-read them in a quieter moment. I did try the deconvolution in RawTherapee, but I didn't really see any kind of wow factor. But then my DB does not have an AA filter, so perhaps the effects would be pretty subtle.

John
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 23, 2010, 10:47:39 AM
Quote from: John R Smith
But then my DB does not have an AA filter, so perhaps the effects would be pretty subtle.

Correct (assuming decent lenses), besides recovering from the more pronounced effects of diffraction at narrow apertures, or motion blur if an asymmetrical PSF is used.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: PierreVandevenne on July 23, 2010, 11:35:43 AM
Quote from: BartvanderWolf
recovering from the more pronounced effects of diffraction at narrow apertures,

I am not sure this would work, but that is a very technical issue and we might be (rightly) reprimanded for going into it here.
Title: Deconvolution sharpening revisited
Post by: joofa on July 23, 2010, 11:53:08 AM
Quote from: John R Smith
If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?

John

The usual textbook approach (generalized Richardson-Lucy types) takes the original image data and makes a guess at certain parameters associated with it. Then, using these parameters, it generates new image data. Then it makes another guess at newer parameters from this new image data, and uses those to generate a newer set of image data. Keep iterating until you are satisfied. In technical terms that satisfaction is called "convergence", and under typical settings the solution converges to what is called the maximum-likelihood estimate.
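That guess-and-refine loop is, for the Richardson-Lucy case, short enough to write out. The following is a bare-bones sketch in NumPy (assuming a known, centered PSF and circular boundary handling; real implementations add damping, noise handling and stopping criteria):

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=30):
    """Iteratively refine an estimate of the unblurred image so that,
    when re-blurred with the PSF, it matches the observed data."""
    B = np.fft.fft2(np.fft.ifftshift(psf))

    def apply_filter(img, H):
        # circular convolution via the FFT
        return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

    estimate = np.full_like(blurred, blurred.mean())  # initial guess
    for _ in range(iterations):
        reblurred = apply_filter(estimate, B)         # forward model
        ratio = blurred / np.maximum(reblurred, 1e-12)
        # correlate the ratio with the PSF (conj(B)) and use it as a
        # multiplicative correction -- the RL update step
        estimate = estimate * apply_filter(ratio, np.conj(B))
    return estimate
```

Each pass nudges the estimate toward the maximum-likelihood solution mentioned above; stopping early acts as a crude regularizer, which is one reason converters cap the iteration count.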

Joofa
Title: Deconvolution sharpening revisited
Post by: madmanchan on July 23, 2010, 01:55:07 PM
Yes Photoshop's Smart Sharpen is based on deconvolution (but you will need to choose the "More Accurate" option and the Lens Blur kernel for best results). Same with Camera Raw 6 and Lightroom 3 if you ramp up the Detail slider.
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 23, 2010, 02:32:25 PM
Quote from: madmanchan
Yes Photoshop's Smart Sharpen is based on deconvolution (but you will need to choose the "More Accurate" option and the Lens Blur kernel for best results). Same with Camera Raw 6 and Lightroom 3 if you ramp up the Detail slider.

Hi Eric,

Thanks for confirming that.

Could you disclose whether the Smart Sharpen filter's visible effectiveness has changed between, say, CS3 and CS5, or is it essentially the same as in its earlier versions?

I've compared it before, and used it on installations without better alternative plug-ins, but its restoration effectiveness for larger radii seemed less than that of a direct Richardson-Lucy or similar implementation, although it is faster. Perhaps a new test/comparison is in order.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 23, 2010, 02:56:53 PM
Hi,

Wouldn't it be possible to add an "OLP filter" option? I presume that Gaussian and Lens Blur are different PSFs (Point Spread Functions), and that an OLP filter essentially splits the light so it affects the pixels above, below, left of and right of the central pixel?

Best regards
Erik


Quote from: BartvanderWolf
Hi Eric,

Thanks for confirming that.

Could you disclose whether the Smart Sharpen filter's visible effectiveness has changed between, say, CS3 and CS5, or is it essentially the same as in its earlier versions?

I've compared it before, and used it on installations without better alternative plug-ins, but its restoration effectiveness for larger radii seemed less than that of a direct Richardson-Lucy or similar implementation, although it is faster. Perhaps a new test/comparison is in order.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: madmanchan on July 23, 2010, 03:59:08 PM
Hi Bart, unfortunately I don't know the answer to that, but I will check with the scientist who does. I believe they limit the number of iterations for speed, so I expect this is the reason it would not be as effective for some parameters as the plug-ins, as you've observed.

Hi Erik, yes, the Gaussian and Lens Blur are different PSFs. The Gaussian is basically just that, and the Lens Blur is effectively simulating a nearly circular aperture (assuming even light distribution within the aperture, very unlike Gaussian). You will get better results with the latter though in many cases they are admittedly subtle. The OLP filter can be somewhat complex to model. (I believe the Zeiss articles you've referenced recently have some nice images showing how gnarly they can be. I recall it was in the first of the two MTF articles). Gaussians are handy because they have convenient mathematical properties but not the best for modeling this, unfortunately ...
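For the curious, the two PSF shapes contrasted above are easy to write down (a hypothetical sketch; the helper names are invented here, and Smart Sharpen's actual kernels are not public):

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Smooth bell-shaped spread with long, soft tails."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def disk_psf(size, radius):
    """Uniform 'pillbox': even light within a nearly circular
    aperture with a hard edge -- the Lens Blur idea."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    psf = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    return psf / psf.sum()
```

Both integrate to 1, but the Gaussian keeps falling off smoothly while the disk is constant inside the aperture and zero outside, which is why deconvolving with the matching shape recovers slightly more detail.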
Title: Deconvolution sharpening revisited
Post by: feppe on July 23, 2010, 04:19:08 PM
Since you are talking about "undoing" the effects of AA filter, won't that introduce aliasing artifacts mistaken for sharpness? What about moire?

My understanding is that moire avoidance is not the only reason camera manufacturers put those expensive filters on. It's not like some marketing guy came to the engineers and said "slap one of those make-my-pictures-all-blurred-to-hell -filters on all our cameras, would'ya?" What the reasons are I don't know, but Hot Rod mods (http://www.maxmax.com/hot_rod_visible.htm) haven't been that popular, and I've heard more than one complain about the resulting aliasing.

I've seen so many photos which are oversharpened to the extent of making them as surreal as overcooked HDR. I haven't seen the samples of the results from this undoing, but the samples from D3X I've seen show that it produces exceptionally sharp results out of the box.
Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 23, 2010, 04:40:10 PM
Feppe,

The images that Diglloyd posted show no moiré. This doesn't mean that sharpening could not restore artifacts, but I actually don't think that would be the case. If we downscale an image it will contain a lot of artifacts; a good standard practice is to blur the image very slightly before downscaling and then sharpen after scaling.

I also ran a quick test with Raw Developer and got pretty good results, better than with CS5's Smart Sharpen with Gaussian Blur. I may look a bit more into that, but I'm going to travel a lot over the next two weeks.

Best regards
Erik


Quote from: feppe
Since you are talking about "undoing" the effects of AA filter, won't that introduce aliasing artifacts mistaken for sharpness? What about moire?

My understanding is that moire avoidance is not the only reason camera manufacturers put those expensive filters on. It's not like some marketing guy came to the engineers and said "slap one of those make-my-pictures-all-blurred-to-hell -filters on all our cameras, would'ya?" What the reasons are I don't know, but Hot Rod mods (http://www.maxmax.com/hot_rod_visible.htm) haven't been that popular, and I've heard more than one complain about the resulting aliasing.

I've seen so many photos which are oversharpened to the extent of making them as surreal as overcooked HDR. I haven't seen the samples of the results from this undoing, but the samples from D3X I've seen show that it produces exceptionally sharp results out of the box.
Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 23, 2010, 04:43:29 PM
Eric,

Some discussion of what sharpening in LR3 actually does would be interesting. In the Bruce Fraser/Jeff Schewe book it is said that the Detail slider affects "halo suppression", but that was before LR3.

The way I see it, I much prefer "parametric adjustments", so I'd like to stay with LR as long as possible. If I need to render TIFFs to be able to sharpen with deconvolution, it would break the workflow, making it into workslow.

Best regards
Erik


Quote from: madmanchan
Yes Photoshop's Smart Sharpen is based on deconvolution (but you will need to choose the "More Accurate" option and the Lens Blur kernel for best results). Same with Camera Raw 6 and Lightroom 3 if you ramp up the Detail slider.
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 23, 2010, 04:57:43 PM
Quote from: feppe
Since you are talking about "undoing" the effects of AA filter, won't that introduce aliasing artifacts mistaken for sharpness? What about moire?

In a word, no.  Aliasing is a shifting of image content from one frequency band to another that is an artifact of discrete sampling.  Deconvolution doesn't introduce aliasing (ie shift frequencies around) so much as try to reverse some of the suppression of high frequency image content that the AA filter effects in its effort to mitigate aliasing.
Title: Deconvolution sharpening revisited
Post by: eronald on July 23, 2010, 05:47:43 PM
Of course the camera manufacturer may have a very accurate lens and AA filter model ... What Canon DPP's sharpening can do is amazing

Edmund
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 23, 2010, 06:13:20 PM
Quote from: madmanchan
Hi Bart, unfortunately I don't know the answer to that, but I will check with the scientist who does. I believe they limit the number of iterations for speed, so I expect this is the reason it would not be as effective for some parameters as the plug-ins, as you've observed.

Thanks Eric, appreciated. I can imagine that a whole lot of other optimizations (e.g. avoiding noise amplification, and halo control) are bundled together, so some compromises can be expected. Especially because the user has no way to tweak the inner workings or the PSF shape (other than the 2 basic varieties), those compromises could affect the results. However, even in its current state it's clearly preferable to a regular USM in most cases.

Quote from: madmanchan
Hi Erik, yes, the Gaussian and Lens Blur are different PSFs. The Gaussian is basically just that, and the Lens Blur is effectively simulating a nearly circular aperture (assuming even light distribution within the aperture, very unlike Gaussian). You will get better results with the latter though in many cases they are admittedly subtle.

That's correct. Many underestimate how devastating defocus is for microdetail at the pixel level, and then there's DOF; in fact, most of the image is usually defocused to some degree. In addition there is the influence of the residual lens aberrations and diffraction, which also varies throughout the image. Then there may be a bit of camera shake and motion blur, and we have a pretty messed-up PSF. When you then consider that the sampling density of Red and Blue differs from that of Green on sensors with a CFA, and that the Raw converter adds its non-linear adaptive interpolation, it's pretty amazing what we can do. The OLPF is just one parameter in the whole mix, but it does help to create a more predictable (better behaved) signal.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 23, 2010, 06:25:33 PM
Quote from: ejmartin
In a word, no.  Aliasing is a shifting of image content from one frequency band to another that is an artifact of discrete sampling.  Deconvolution doesn't introduce aliasing (ie shift frequencies around) so much as try to reverse some of the suppression of high frequency image content that the AA filter effects in its effort to mitigate aliasing.

While that is correct, there may be aliased spatial frequencies that are rendered as larger detail which happens to correspond to the non-aliased detail we are trying to sharpen. So while it won't introduce aliasing, it will 'enhance' some of the already aliased detail. One also needs to watch out for introducing stairstepping in e.g. powerlines, sharp edges at an angle, and other high contrast fine detail. Working on a luminosity blending layer with blend-if applied to spare the highest contrast edges will help to reduce those artifacts.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: hubell on July 23, 2010, 08:56:16 PM
The R-L Deconvolution sharpening tool in Raw Developer often produces exceptional results. It is so crazy that Adobe, with all its resources, does not offer it as an option in Photoshop.
Title: Deconvolution sharpening revisited
Post by: BernardLanguillier on July 23, 2010, 09:23:01 PM
Quote from: bjanes
the deconvolution sharpening (more properly, image restoration) with the Mac-only raw converter Raw Developer markedly improves the micro-contrast of the D3x image to the point that it rivals that of the Leica S2.

Yep, I have been using Raw Developer's deconvolution for a few years now. It works great, and it was actually one of the reasons why I switched to Mac.

One of the reasons why it works so well with the D3x is the very low amount of noise in the mid-tones/shadows at low ISO. With a lesser sensor you end up sharpening noise as well. This comes on top of an AA filter that is weaker than average (but still manages to avoid moiré in all but the most extreme situations).

Couple this with a lens like the Zeiss 100mm f2.0, which has amazing micro-contrast at f5.6-f8, and you have detail rendition that is a lot closer to that of non-AA-filter sensors than many seem to believe. You do introduce some artifacts as well, but you have the freedom to tune the sharpness/artifacts ratio, which you don't with an AA-filter-less sensor.

Nothing new here though; D3x users have been reporting on this for 1.5 years now.  

Cheers,
Bernard
Title: Deconvolution sharpening revisited
Post by: deejjjaaaa on July 23, 2010, 10:39:39 PM
Quote from: madmanchan
Yes Photoshop's Smart Sharpen is based on deconvolution (but you will need to choose the "More Accurate" option and the Lens Blur kernel for best results). Same with Camera Raw 6 and Lightroom 3 if you ramp up the Detail slider.

Eric - does that mean that above some certain value (>25, >50, >???) of the Detail slider in ACR, you switch the sharpening completely from some variety of USM to some variety of deconvolution? Can you tell what this value is (if it is fixed), or does it depend on the specific combination of EXIF parameters (camera model, ISO, aperture value, etc.)? Or are you somehow blending the output of the two methods, going gradually from some variety of USM to deconvolution as the slider is moved to the right?

please clarify, thank you.
Title: Deconvolution sharpening revisited
Post by: ziocan on July 24, 2010, 12:26:21 AM
Quote from: hcubell
The R-L Deconvolution sharpening tool in Raw Developer often produces exceptional results. It is so crazy that Adobe, with all its resources, does not offer it as an option in Photoshop.
Smart Sharpen in Photoshop and the sharpening tool in LR offer the same kind of exceptional results.
IMO they can also give slightly better results, since they have more parameters.

For some workflows, sharpening during the raw conversion is not an option. In that case Photoshop's Smart Sharpen, or other plug-ins for PS, are the only options.

I often see threads about the marvels made by sharpening plug-ins or raw converters, but I have hardly seen anything do it better than Photoshop's Smart Sharpen.
At best they are equal.
Title: Deconvolution sharpening revisited
Post by: ziocan on July 24, 2010, 12:32:30 AM
Quote from: deja
Eric - does that mean that above some certain value (>25, >50, >???) of the Detail slider in ACR, you switch the sharpening completely from some variety of USM to some variety of deconvolution? Can you tell what this value is (if it is fixed), or does it depend on the specific combination of EXIF parameters (camera model, ISO, aperture value, etc.)? Or are you somehow blending the output of the two methods, going gradually from some variety of USM to deconvolution as the slider is moved to the right?

please clarify, thank you.
well, if you load up an image in LR and do some tests, you may be able to find it out by yourself.

anyway, how can it be fixed?
How would a camera model, ISO or aperture determine fixed parameters?

every image needs its own parameters, because every image is focused in its own way and has its own texture and surfaces. Not to mention that two images taken with the exact same equipment but focused slightly differently (they all are) will need different sharpening parameters.

I think trial and error and some experiments will give you the best answer to your question.
Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 24, 2010, 02:40:19 AM
Hi Eric,

I actually tried to forget about the Zeiss articles, as I recalled that the shape shown there was quite ugly ;-)

Best regards
Erik



Quote from: madmanchan
Hi Bart, unfortunately I don't know the answer to that, but I will check with the scientist who does. I believe they limit the number of iterations for speed, so I expect this is the reason it would not be as effective for some parameters as the plug-ins, as you've observed.

Hi Erik, yes, the Gaussian and Lens Blur are different PSFs. The Gaussian is basically just that, and the Lens Blur is effectively simulating a nearly circular aperture (assuming even light distribution within the aperture, very unlike Gaussian). You will get better results with the latter though in many cases they are admittedly subtle. The OLP filter can be somewhat complex to model. (I believe the Zeiss articles you've referenced recently have some nice images showing how gnarly they can be. I recall it was in the first of the two MTF articles). Gaussians are handy because they have convenient mathematical properties but not the best for modeling this, unfortunately ...
Title: Deconvolution sharpening revisited
Post by: Jack Flesher on July 24, 2010, 03:55:24 AM
Quote from: ziocan
I often see threads about the marvels made by sharpening plug-ins or raw converters, but I have hardly seen anything do it better than Photoshop's Smart Sharpen.
At best they are equal.

Agreed.  I'd even take it one step further -- on cameras with no AA or OLP filter, some raw converters (my favorite is C1) do such an excellent job with sharpening that nothing further is needed in CS until your desired output sharpening step.  I no longer do any initial sharpening in CS with my Phase files for this reason.

Cheers,
Title: Deconvolution sharpening revisited
Post by: bjanes on July 24, 2010, 09:19:20 AM
Quote from: madmanchan
Yes Photoshop's Smart Sharpen is based on deconvolution (but you will need to choose the "More Accurate" option and the Lens Blur kernel for best results). Same with Camera Raw 6 and Lightroom 3 if you ramp up the Detail slider.
Eric,

Thanks for the information. The behavior of the sliders appears to be quite different from that of older versions of ACR. In Real World Camera Raw with Adobe Photoshop CS4, Jeff Schewe states that if one moves the Detail slider all the way to the right, the results are very similar to, but not exactly the same as, those obtained with the unsharp mask.

The following observations are likely nothing new to you, but may be of interest to others. The slanted-edge target (a black-on-white transition at a slight angle) is an ISO-certified method of determining MTF and is used in Imatest. Here is an example with the Nikon D3 using ACR 6.1 without sharpening (far right), with ACR sharpening set to 50, 1, 50 [amount, radius, detail] (middle), and with deconvolution sharpening using Focus Magic with a blur width of 2 pixels and an amount of 100%. The images used for measurement are cropped, so the per-picture-height measurements are for the cropped images.

[attachment=23291:Comp1_images.gif]

One can analyze the black-white transition with Imatest, which determines the pixel interval for a rise in intensity at the interface from 10% to 90%. Results are shown for Focus Magic and ACR sharpening with the above settings; they are similar. With real-world images from this camera (previously posted in a discussion with Mark Segal), I have not noted much difference between optimally sharpened images using ACR and Focus Magic, contrary to the results reported by Diglloyd using the Richardson-Lucy algorithm. Perhaps the Focus Magic algorithm is inferior to RL. Diglloyd used Smart Sharpen for comparison and did not test ACR 6 sharpening.

[attachment=23292:CompACR_FM_1.gif]

One can look at the effect of the detail slider by using ACR sharpening settings of 100, 1, 100 (left) and 100, 1, 0 (right). The detail setting of zero dampens the overshoot.

[attachment=23293:CompACR.gif]
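For readers without Imatest, the 10-90% rise figure quoted above can be approximated with a short function. This is a simplified stand-in (the name is invented, and it assumes a single, already-extracted, monotonically rising edge profile rather than Imatest's full slanted-edge analysis):

```python
import numpy as np

def rise_10_90(edge_profile):
    """Width, in pixels, over which a rising edge profile climbs from
    10% to 90% of its full black-to-white swing (linear interpolation).
    Smaller is sharper."""
    p = np.asarray(edge_profile, dtype=float)
    lo, hi = p.min(), p.max()
    x = np.arange(p.size)
    # np.interp needs an increasing profile; Imatest instead fits an
    # oversampled edge-spread function from the slanted edge.
    x10 = np.interp(lo + 0.10 * (hi - lo), p, x)
    x90 = np.interp(lo + 0.90 * (hi - lo), p, x)
    return x90 - x10
```

A hard edge blurred over many pixels gives a large 10-90% interval; effective sharpening narrows it, which is what the plots above quantify.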

Title: Deconvolution sharpening revisited
Post by: bjanes on July 24, 2010, 09:24:06 AM
Quote from: ziocan
Smart Sharpen in Photoshop and the sharpening tool in LR offer the same kind of exceptional results.
IMO they can also give slightly better results, since they have more parameters.

For some workflows, sharpening during the raw conversion is not an option. In that case Photoshop's Smart Sharpen, or other plug-ins for PS, are the only options.

I often see threads about the marvels made by sharpening plug-ins or raw converters, but I have hardly seen anything do it better than Photoshop's Smart Sharpen.
At best they are equal.
That may be your experience, but have you tried Richardson-Lucy with a camera that has a blur filter? Diglloyd did compare Smart Sharpen to RL, and found the latter to be much better. Perhaps he did not use optimal settings, but he is a very careful worker and I would not dismiss his results out of hand.
Title: Deconvolution sharpening revisited
Post by: madmanchan on July 24, 2010, 09:29:40 AM
Quote from: deja
Eric - does that mean that above some certain value (>25, >50, >???) of the Detail slider in ACR, you switch the sharpening completely from some variety of USM to some variety of deconvolution? Can you tell what this value is (if it is fixed), or does it depend on the specific combination of EXIF parameters (camera model, ISO, aperture value, etc.)? Or are you somehow blending the output of the two methods, going gradually from some variety of USM to deconvolution as the slider is moved to the right?

please clarify, thank you.

Hi Deja, yes, the sharpening in CR 6 / LR 3 is a continuous blend of methods (with the Detail slider used to "tween" between the methods, and the Amount, Radius, & Masking used to control the parameters fed into the methods). As you ramp up the Detail slider to higher values, the deconvolution-based method gets more weight. If you're interested in only the deconv method then just set Detail to 100 (which is what I do for low-ISO high-detail landscape images). Not recommended for portraits, though ...  
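The continuous blend described here can be pictured with a toy weighting function (purely illustrative of the "tween" idea; not Adobe's actual implementation, whose internals are not public):

```python
import numpy as np

def blended_sharpen(usm_result, deconv_result, detail):
    """Mix the outputs of two sharpening methods: Detail = 0 gives
    pure USM-style output, Detail = 100 pure deconvolution, with a
    smooth cross-fade in between."""
    w = detail / 100.0
    return (1.0 - w) * usm_result + w * deconv_result
```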
Title: Deconvolution sharpening revisited
Post by: madmanchan on July 24, 2010, 09:37:43 AM
Hi Bill, thanks for doing this; your studies and results match my expectations. At higher Detail settings, CR 6 & LR 3 should be closer in behavior to PS's Smart Sharpen, though a little better IMO. There are some differences due to the raw-based design of the former (we have the luxury of applying the sharpening at a stage of the imaging pipeline where the signal characteristics are better understood, whereas PS's SS is necessarily at the mercy of whatever processing has already been done to the image, and hence can't really assume anything).
Title: Deconvolution sharpening revisited
Post by: deejjjaaaa on July 24, 2010, 10:46:22 AM
Quote from: ziocan
well, if you load up an image in LR and do some tests, you may be able to find it out by yourself.

thank you for the suggestion, but I was interested to hear from the developer himself.

Quote from: ziocan
anyway, how can it be fixed?

very simple :

if (slider < 50)
{
   // use the USM-style method only
}
else
{
   // blend in the deconvolution-based method
}

Quote from: ziocan
How a camera model, iso or aperture determine fixed parameters?

I was talking about whether EXIF parameters will be used in the following manner -> for example, high ISO = more noise detected/expected = less blending with the deconvolution method, etc.

Quote from: ziocan
I think trial and error and some experiments will give you the best answer to your question.

certainly - but now that we have some answer I can do trial & error w/ what Eric said in mind
Title: Deconvolution sharpening revisited
Post by: walter.sk on July 24, 2010, 11:39:03 AM
Quote from: madmanchan
Hi Deja, yes, the sharpening in CR 6 / LR 3 is a continuous blend of methods (with Detail slider being the one used to "tween" between the methods, and the Amount, Radius, & Masking used to control the parameters fed into the methods). As you ramp up the Detail slider to higher values, the deconvolution-based method gets more weight. If you're interested in only the deconv method then just set Detail to 100 (which is what I do for low-ISO high-detail landscape images). Not recommended for portraits, though ...  
If one were to set the Detail to 100, would this carry through to the Sharpening slider when using the Adjustment Brush in ACR?  If so, that would go a long way toward selective application of the deconvolution method, possibly as good as painting it in from a layer mask.
Title: Deconvolution sharpening revisited
Post by: bjanes on July 24, 2010, 01:50:41 PM
Quote from: madmanchan
Hi Deja, yes, the sharpening in CR 6 / LR 3 is a continuous blend of methods (with Detail slider being the one used to "tween" between the methods, and the Amount, Radius, & Masking used to control the parameters fed into the methods). As you ramp up the Detail slider to higher values, the deconvolution-based method gets more weight. If you're interested in only the deconv method then just set Detail to 100 (which is what I do for low-ISO high-detail landscape images). Not recommended for portraits, though ...  
Based on your information, I experimented with ACR sharpening to reproduce the results posted by Diglloyd in his blog. I used 41-1-100 (amount, radius, detail). The results are pretty close. I hope that this is fair use of Diglloyd's copyright. If there are any complaints, the post can be deleted. I think that the topic is important, though.

[attachment=23300:ACR_RL.jpg]
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 24, 2010, 02:12:31 PM
Quote from: bjanes
Based on your information, I experimented with ACR sharpening to reproduce the results posted by Diglloyd in his blog. I used 41-1-0 (amount, radius, detail). The results are pretty close. I hope that this is fair use of Diglloyd's copyright. If there are any complaints, the post can be deleted. I think that the topic is important, though.

Sorry, don't you want the detail at 100 if you're trying to use the deconvolution part of ACR sharpening?
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 24, 2010, 02:15:31 PM
Quote from: bjanes
I used 41-1-0 (amount, radius, detail). The results are pretty close.

Hi Bill,

Based on what I see, the radius 1.0 seems to be a bit too large. This is confirmed by the earlier Imatest SFR output that you posted (SFR_20080419_0003_ACR_100_1_100.tif), where the 0.3 cycles/pixel resolution was boosted. Perhaps something like a 0.6 or 0.7 radius is more appropriate to boost the higher spatial frequencies (lower frequencies will also be boosted by that).

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: madmanchan on July 24, 2010, 02:39:43 PM
Yes, it looks like Bill made a typo in the post (the screenshot values say 43, 1, 100, as opposed to 41,1,0). For this type of image I do recommend a value below 1 for the Radius, though 1 is not a bad starting point.
Title: Deconvolution sharpening revisited
Post by: madmanchan on July 24, 2010, 02:45:34 PM
Quote from: walter.sk
If one were to set the Detail to 100, would this carry through to the Sharpening slider when using the Adjustment Brush in ACR?  If so, that would go a long way toward selective application of the deconvolution method, possibly as good as painting it in from a layer mask.

Yes, Walter. It does mean you can apply this type of sharpening / deblurring selectively, if you wish. There are two basic workflows for doing this in CR 6 and LR 3.

The first way is just to paint in the sharpening where you want it. To do this, you set the Radius and Detail the way you want, but set Amount to 0. Then, with the local adjustment brush, you paint in a positive Sharpness amount in the desired areas. The brush controls and the local Sharpness amount can be used to control the application of it. (Of course you can also use the erase mode in case you overpaint.) This workflow is effective if there are relatively small areas of the image you want to sharpen. I tend to use this for narrow DOF images (e.g., macro of flower) where I only care about very specific elements being sharpened. It also works fine for portraits.

The second way is the opposite, i.e., you apply the capture sharpening in the usual way until most of the image looks good, but then you can selectively "back off" on it (using local Sharpness with negative values) in some areas. Of course you can also add to it (using local Sharpness with positive values).
Title: Deconvolution sharpening revisited
Post by: bjanes on July 24, 2010, 04:23:44 PM
Quote from: madmanchan
Yes, it looks like Bill made a typo in the post (the screenshot values say 43, 1, 100, as opposed to 41,1,0). For this type of image I do recommend a value below 1 for the Radius, though 1 is not a bad starting point.
Yes, 41,1,0 is a typo. The figures on the illustration are correct: 41,1,100.

Bill
Title: Deconvolution sharpening revisited
Post by: mhecker* on July 24, 2010, 06:50:10 PM
I agree totally with bjanes.

I have found that by varying the sharpening settings in ACR6/LR3 I am able to duplicate the "totally superior" results found in other highly touted RAW converters.

That said, IMO Lightroom's workflow is far superior to any other product I've tried.

However, it's a free country and the new RAW converter developers are happy to relieve you of excess cash.  
Title: Deconvolution sharpening revisited
Post by: eronald on July 24, 2010, 07:16:09 PM
I think this post is a bit misleading. If you incorporate sharpening in your processing then your process is now non-linear and however ISO certified the target itself,  the slanted edge method is no longer valid because MTF is only meaningful as a description of a 2D spatial convolution process which is thereby assumed to be linear. Even though Imatest is an excellent piece of software - I am acquainted with Norman Koren, which doesn't mean I understand those maths - , feeding Imatest invalid input does not sanctify the output.

Edmund

Quote from: bjanes
Eric,

Thanks for the information. The behavior of the sliders appears to be quite different from the older versions of ACR. In Real World Camera Raw with Adobe Photoshop CS4, Jeff Schewe states that if one moves the detail slider all the way to the right, the results are very similar but not exactly the same as would be obtained with the unsharp mask.

The following observations are likely nothing new to you, but may be of interest to others. The slanted-edge target (a black-on-white transition at a slight angle) is an ISO-certified method of determining MTF and is used in Imatest. Here is an example with the Nikon D3 using ACR 6.1 without sharpening (far right), with ACR sharpening set to 50, 1, 50 [amount, radius, detail] (middle), and with deconvolution sharpening using Focus Magic with a blur width of 2 pixels and amount of 100%. The images used for measurement are cropped, so the per-picture-height measurements are for the cropped images.

[attachment=23291:Comp1_images.gif]

One can analyze the black-white transition with Imatest, which determines the pixel interval for a rise in intensity at the interface from 10 to 90%. Results are shown for Focus Magic and ACR sharpening with the above settings. The results are similar. With real world images with this camera (previously posted in a discussion with Mark Segal), I have not noted much difference between optimally sharpened images using ACR and Focus Magic, contrary to the results reported by Diglloyd using the Richardson-Lucy algorithm. Perhaps the Focus Magic algorithm is inferior to the RL. Diglloyd used Smart Sharpen for comparison and did not test ACR 6 sharpening.

[attachment=23292:CompACR_FM_1.gif]

One can look at the effect of the detail slider by using ACR sharpening settings of 100, 1, 100 (left) and 100, 1, 0 (right). The detail setting of zero dampens the overshoot.

[attachment=23293:CompACR.gif]
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 24, 2010, 07:51:22 PM
Quote from: eronald
I think this post is a bit misleading. If you incorporate sharpening in your processing then your process is now non-linear and however ISO certified the target itself,  the slanted edge method is no longer valid because MTF is only meaningful as a description of a 2D spatial convolution process which is thereby assumed to be linear. Even though Imatest is an excellent piece of software - I am acquainted with Norman Koren, which doesn't mean I understand those maths - , feeding Imatest invalid input does not sanctify the output.

Hi Edmund,

You are correct that sharpening introduces non-linearity into the determination of the MTF; however, that is also of great use when comparing the sharpened result to the 'before' situation. It allows us to assess the difference that the non-linear process of sharpening introduces. The math behind the slanted-edge method of MTF determination is robust, so the results will be accurate (for the particular output image under investigation).

If one had to compare camera files with unknown levels of pre-processing (such as in-camera sharpened JPEGs), Imatest also comes prepared. It offers a kind of normalization called "standardized sharpening (http://www.imatest.com/docs/sharpening.html)", which allows one to compare non-linear input when no other info is available. In this case, however, we get a very useful insight into the spatial frequencies that are boosted, hence my suggestion to try a lower radius value; Imatest gave the clue.
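At its core, the slanted-edge method is simple: the edge spread function (ESF) sampled across the edge is differentiated into a line spread function (LSF), whose normalized Fourier magnitude is the MTF. A stripped-down sketch of that idea (Imatest's real implementation adds sub-pixel binning, windowing, and noise handling, so treat this only as an illustration of the principle):

```python
import numpy as np

def mtf_from_edge(esf):
    # the LSF is the derivative of the edge profile; the MTF is the
    # magnitude of its Fourier transform, normalized to 1 at DC
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```

A perfect step edge yields an MTF of 1.0 at every frequency; any blur, or sharpening overshoot, shows up directly in the shape of the curve.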

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: walter.sk on July 24, 2010, 08:57:28 PM
Quote from: madmanchan
Yes, Walter. It does mean you can apply this type of sharpening / deblurring selectively, if you wish. There are two basic workflows for doing this in CR 6 and LR 3.

The first way is just to paint in the sharpening where you want it. To do this, you set the Radius and Detail the way you want, but set Amount to 0. Then, with the local adjustment brush, you paint in a positive Sharpness amount in the desired areas. The brush controls and the local Sharpness amount can be used to control the application of it. (Of course you can also use the erase mode in case you overpaint.) This workflow is effective if there are relatively small areas of the image you want to sharpen. I tend to use this for narrow DOF images (e.g., macro of flower) where I only care about very specific elements being sharpened. It also works fine for portraits.

The second way is the opposite, i.e., you apply the capture sharpening in the usual way until most of the image looks good, but then you can selectively "back off" on it (using local Sharpness with negative values) in some areas. Of course you can also add to it (using local Sharpness with positive values).

Thank you. Now I'm going to have to try some comparisons between deconvolution by these methods in RAW, versus post-processing with Focus Magic, which has been my favorite for years now.
Title: Deconvolution sharpening revisited
Post by: hubell on July 24, 2010, 11:25:31 PM
Quote from: walter.sk
Thank you. Now I'm going to have to try some comparisons between deconvolution by these methods in RAW, versus post-processing with Focus Magic, which has been my favorite for years now.

Unfortunately, Focus Magic has become functionally useless for me. With larger 16 bit files, it consistently gives me "memory full" errors and then crashes CS 4. It appears that development has ceased. Too bad, it gave me great results.
Title: Deconvolution sharpening revisited
Post by: Craig Lamson on July 25, 2010, 05:56:44 AM
Quote from: hcubell
Unfortunately, Focus Magic has become functionally useless for me. With larger 16 bit files, it consistently gives me "memory full" errors and then crashes CS 4. It appears that development has ceased. Too bad, it gave me great results.


How big are the files? I just tested a 444 MB 16-bit TIF in CS4 and it ran fine on a Win7 64-bit machine with 4 GB.
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 25, 2010, 07:18:56 AM
Quote from: hcubell
Unfortunately, Focus Magic has become functionally useless for me. With larger 16 bit files, it consistently gives me "memory full" errors and then crashes CS 4. It appears that development has ceased. Too bad, it gave me great results.

I had similar issues with FocusMagic when I still ran Win XP. There is a sort of workaround, though: make partial selections (use a few guides to allow adjoining but not overlapping selections). It's not ideal, but it will get the job done, selection after selection. I couldn't get FM to install under Vista (though they recently changed the installer, so perhaps now it will), but I've since moved to Win7, and there are no problems so far.

I've not tested RawTherapee for size limitations, but it does read TIFFs and it allows Richardson-Lucy deconvolution.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: hubell on July 25, 2010, 07:41:23 AM
Quote from: BartvanderWolf
I had similar issues with FocusMagic when I still ran Win XP. There is a sort of workaround, though: make partial selections (use a few guides to allow adjoining but not overlapping selections). It's not ideal, but it will get the job done, selection after selection. I couldn't get FM to install under Vista (though they recently changed the installer, so perhaps now it will), but I've since moved to Win7, and there are no problems so far.

I've not tested RawTherapee for size limitations, but it does read TIFFs and it allows Richardson-Lucy deconvolution.

Cheers,
Bart

I tried it unsuccessfully last week with a 16-bit 428 MB file. I am on a 2009 Mac Pro with 16 GB of RAM running OS X 10.6.4. I always had problems with Out of Memory errors under 10.4 and 10.5, but 10.6 has just been impossible to use with Focus Magic.

BTW, do you use Focus Magic "just" for capture sharpening or also for output sharpening?

Title: Deconvolution sharpening revisited
Post by: bjanes on July 25, 2010, 09:48:38 AM
Quote from: eronald
I think this post is a bit misleading. If you incorporate sharpening in your processing then your process is now non-linear and however ISO certified the target itself,  the slanted edge method is no longer valid because MTF is only meaningful as a description of a 2D spatial convolution process which is thereby assumed to be linear. Even though Imatest is an excellent piece of software - I am acquainted with Norman Koren, which doesn't mean I understand those maths - , feeding Imatest invalid input does not sanctify the output.

Edmund
In addition to Bart's post in response to your comment, I think that your use of "misleading" and "invalid input" is too harsh. If you look at Norman's documentation of Imatest, he uses it extensively to compare the effects of sharpening. Indeed, if the method were invalid for sharpened images, it would be useless to assess the sharpness of images derived from cameras with low-pass filters, since these images must always be sharpened for optimal appearance. If my use of Imatest is misleading and invalid, so is Norman's.

From the Imatest documentation:

[attachment=23321:ImatestDoc.gif]
Title: Deconvolution sharpening revisited
Post by: eronald on July 25, 2010, 10:46:06 AM
Quote from: bjanes
In addition to Bart's post in response to your comment, I think that your use of "misleading" and "invalid input" is too harsh. If you look at Norman's documentation of Imatest, he uses it extensively to compare the effects of sharpening. Indeed, if the method were invalid for sharpened images, it would be useless to assess the sharpness of images derived from cameras with low-pass filters, since these images must always be sharpened for optimal appearance. If my use of Imatest is misleading and invalid, so is Norman's.

From the Imatest documentation:

[attachment=23321:ImatestDoc.gif]


Sorry, I'll remove myself from this discussion; Norman is a guy I respect, his understanding of these topics is infinitely greater than mine, and I don't want my own lack of understanding and personal views to reflect on his excellent product.

Edmund
Title: Deconvolution sharpening revisited
Post by: bjanes on July 25, 2010, 10:59:19 AM
Quote from: eronald
Sorry, I'll remove myself from this discussion; Norman is a guy I respect, his understanding of these topics is infinitely greater than mine, and I don't want my own lack of understanding and personal views to reflect on his excellent product.

Edmund
Edmund,
Thanks for the reply, but there is no need to withdraw from the discussion. Your point on non-linearity is well taken and excessive sharpening can lead to spurious results. Some time ago, I was involved in a discussion with Norman and others over test results reporting MTF 50s well over the Nyquist limit. Magnified aliasing artifacts apparently were being interpreted as meaningful resolution. Norman stated that the slanted edge method did have limitations and he was working on other methods.

Bill
Title: Deconvolution sharpening revisited
Post by: EricWHiss on July 25, 2010, 02:29:37 PM
Quote from: bjanes
In addition to Bart's post in response to your comment, I think that your use of "misleading" and "invalid input" is too harsh. If you look at Norman's documentation of Imatest, he uses it extensively to compare the effects of sharpening. Indeed, if the method were invalid for sharpened images, it would be useless to assess the sharpness of images derived from cameras with low-pass filters, since these images must always be sharpened for optimal appearance. If my use of Imatest is misleading and invalid, so is Norman's.

From the Imatest documentation:

[attachment=23321:ImatestDoc.gif]

Why don't you e-mail and ask him what's correct?  He's usually quick to get back unless he's traveling...
Title: Deconvolution sharpening revisited
Post by: eronald on July 25, 2010, 04:46:17 PM
Quote from: bjanes
Edmund,
Thanks for the reply, but there is no need to withdraw from the discussion. Your point on non-linearity is well taken and excessive sharpening can lead to spurious results. Some time ago, I was involved in a discussion with Norman and others over test results reporting MTF 50s well over the Nyquist limit. Magnified aliasing artifacts apparently were being interpreted as meaningful resolution. Norman stated that the slanted edge method did have limitations and he was working on other methods.

Bill

Yes, I just talked to Norman, linking him to this conversation, and it seems ISO is going to move to lower contrast slanted edge targets precisely to prevent cameras from moving into a non-linear regime.

Re. MTF, if I understand rightly, Norman's position is that in the presence of sharpening you are measuring whole system performance, and it becomes difficult to derive the performance of a specific component of the system. I'm sure he would be delighted to get email from any Imatest user, and discuss the topic further.

Edmund
Title: Deconvolution sharpening revisited
Post by: madmanchan on July 26, 2010, 09:57:53 AM
Depends on what you're looking for, though. As scientists we're interested in the isolated behaviors of individual components, but as photographers it's the end-to-end (system-wide) results that ultimately matter (i.e., what comes out the back end, the final result).
Title: Deconvolution sharpening revisited
Post by: bjanes on July 26, 2010, 10:53:29 AM
Quote from: madmanchan
Yes, it looks like Bill made a typo in the post (the screenshot values say 43, 1, 100, as opposed to 41,1,0). For this type of image I do recommend a value below 1 for the Radius, though 1 is not a bad starting point.

Quote from: BartvanderWolf
Based on what I see, the radius 1.0 seems to be a bit too large. This is confirmed by the earlier Imatest SFR output that you posted (SFR_20080419_0003_ACR_100_1_100.tif), where the 0.3 cycles/pixel resolution was boosted. Perhaps something like a 0.6 or 0.7 radius is more appropriate to boost the higher spatial frequencies (lower frequencies will also be boosted by that).
Eric and Bart,

As per your suggestions, I repeated the tests using ACR 6.1 with settings of amount = 32, radius = 0.7, and detail = 100 and Focus Magic with settings of Blur Width = 1 and amount = 150. I found the amount in the ACR slider to be quite sensitive, and there is a considerable difference between 30 and 40 or even 30 and 35 with respect to overshoot and MTF at Nyquist. The chosen settings seem to be a reasonable compromise and produce similar results near Nyquist, but the FM gives more of a boost in the range of 0.2 to 0.3 cycles/pixel, which may be desirable.

[attachment=23335:Comp1_Graphs.png]

Inspection of the images from which the graphs were obtained is also of interest:

[attachment=23336:Comp1_images.png]

Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 26, 2010, 06:36:08 PM
Quote from: bjanes
Eric and Bart,

As per your suggestions, I repeated the tests using ACR 6.1 with settings of amount = 32, radius = 0.7, and detail = 100 and Focus Magic with settings of Blur Width = 1 and amount = 150. I found the amount in the ACR slider to be quite sensitive, and there is a considerable difference between 30 and 40 or even 30 and 35 with respect to overshoot and MTF at Nyquist. The chosen settings seem to be a reasonable compromise and produce similar results near Nyquist, but the FM gives more of a boost in the range of 0.2 to 0.3 cycles/pixel, which may be desirable.

Hi Bill,

Indeed you managed to get the MTF responses almost identical, with a slight edge to FocusMagic due to its boosting some of the lower spatial frequencies a bit more. Of course there is no law against doing 2 conversions with different settings and luminosity-blending the results, but in a single operation FM will do a bit better; it packs a bit more punch.

Quote
Inspection of the images from which the graphs were obtained is also of interest:

Yes, they confirm what Imatest was predicting, including slightly lower noise for the FM version, which also shows less moiré (probably those better lower frequencies are responsible for that) while giving an overall sharper impression. But they are quite close, especially when used as print output.

Thanks for the examples,
Bart
Title: Deconvolution sharpening revisited
Post by: Wayne Fox on July 27, 2010, 12:20:38 AM
Apologies for hijacking this thread a little bit, but personally I'm just curious if de-convolution sharpening and the evolution of computational imaging might eventually overcome much of the problem with diffraction (and if this has already been discussed, I also apologize - I only skimmed through the thread, seeing how most of it is above my pay-grade).

I would assume it would be much more challenging than resolving the issues from an AA filter,  since it would require each individual lens design to be carefully tested then some method to apply the information to the file, and perhaps would require the data from every possible f/stop and with zoom lens specific zoom settings.  But it seems the theory of restoring the data as it is spread to adjacent pixels isn't much different than what happens with an AA filter.  

I know I have many times stopped down to f/22 (or further) and smart sharpen seems to work quite well, even when printing large prints.

Just curious.
Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 27, 2010, 12:51:49 AM
Wayne,

That's no hijacking. I'd say a very good question indeed. FM was originally intended to restore focus.

It's my guess that it is easy to estimate the PSF (Point Spread Function) for a stopped-down lens, at least regarding diffraction. A better PSF than Lens Blur may be needed. I got the impression that regular "unsharp mask" works quite well. Certainly an area to investigate!

Best regards
Erik


Quote from: Wayne Fox
Apologies for hijacking this thread a little bit, but personally I'm just curious if de-convolution sharpening and the evolution of computational imaging might eventually overcome much of the problem with diffraction (and if this has already been discussed, I also apologize - I only skimmed through the thread, seeing how most of it is above my pay-grade).

I would assume it would be much more challenging than resolving the issues from an AA filter,  since it would require each individual lens design to be carefully tested then some method to apply the information to the file, and perhaps would require the data from every possible f/stop and with zoom lens specific zoom settings.  But it seems the theory of restoring the data as it is spread to adjacent pixels isn't much different than what happens with an AA filter.  

I know I have many times stopped down to f/22 (or further) and smart sharpen seems to work quite well, even when printing large prints.

Just curious.
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 27, 2010, 07:51:54 AM
Quote from: Wayne Fox
Apologies for hijacking this thread a little bit, but personally I'm just curious if de-convolution sharpening and the evolution of computational imaging might eventually overcome much of the problem with diffraction.

Hi Wayne,

No problem, in fact the question is quite relevant. The challenge is to address the effects of several sources of blur combined, in a simple interface, yet with lots of control over the process.

Quote
I would assume it would be much more challenging than resolving the issues from an AA filter,  since it would require each individual lens design to be carefully tested then some method to apply the information to the file, and perhaps would require the data from every possible f/stop and with zoom lens specific zoom settings.  But it seems the theory of restoring the data as it is spread to adjacent pixels isn't much different than what happens with an AA filter.

Correct, diffraction has a different PSF shape than e.g. defocus, yet in practice we need a mix of both (in addition to addressing residual optical and OLPF induced blur). The point spread function is just a mathematical description of the blur function which is used to reverse its effect.  

Quote
I know I have many times stopped down to f/22 (or further) and smart sharpen seems to work quite well, even when printing large prints.

The difficulty with (deconvolution) restoration of a signal is two-fold. First, there has to be enough signal-to-noise to have something to restore. If detail is blurred too far, i.e. it has fused with its surroundings, then it will be impossible to lift it up from the background. Second, we are always faced with noise; even light is noisy (photon shot noise). When signal levels get reduced down to the noise level, the restoration needs a way to discriminate between signal to amplify and noise not to amplify. That's a challenge.

Currently there are a few algorithms that can do such a task with reasonable success, but there are limits to what can be achieved. One algorithm that's popular, but not necessarily the best, is the Richardson-Lucy restoration algorithm. It was used to improve the Hubble Space Telescope images, and the adaptive variety of RL restoration addresses the noise-amplification issue with visible improvement of the S/N ratio. One of its drawbacks can be that it is processing-intensive, therefore slow, and its success also depends on a decent input as to what the PSF should look like. Other, so-called blind deconvolution algorithms attempt to find the optimal PSF shape as part of the process, but they tend to have difficulty separating noise out of the enhancement.
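For the curious, the core RL iteration is only a few lines; real implementations (and the adaptive, noise-damped varieties Bart mentions) wrap much more machinery around it. A bare-bones sketch of the classic algorithm, not any particular product's code:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    # classic multiplicative RL update: at each step the estimate is corrected
    # by the observed/re-blurred ratio, convolved with the flipped PSF
    estimate = np.full_like(blurred, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```

Each iteration costs two full-image convolutions, which is why the method is processing-intensive, and the quality of the result leans heavily on how well `psf` matches the real blur.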

Quote
Just curious.

Curious is good, it's the start of progress.

So, another attempt to address diffraction blur might be in order. Diffraction blur can actually help to reduce moiré, because it kills high spatial frequencies before discrete sampling takes place, but we are confronted with it mostly when we want to add DOF to a scene. Therefore it has both useful (artifact reduction and artistic control) and detrimental (diffraction blur of the focused micro-detail) effects. Wouldn't it be nice if the drawbacks could be reduced? Well, they can (up to a certain level).

I'll prepare an image, and add an f/32 (let's up the ante) diffraction blur, and post it later. We can then see what the various methods can restore, and what the limitations are.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: Christoph C. Feldhaim on July 27, 2010, 08:15:20 AM
Quote from: BartvanderWolf
.......
I'll prepare an image, and add an f/32 (let's up the ante) diffraction blur, and post it later. We can then see what the various methods can restore, and what the limitations are.
.....

Shouldn't it be possible to some extent to reverse this, since the PSF of diffraction is well known?
(I think it's called some sort of Bessel function.)
Title: Deconvolution sharpening revisited
Post by: John R Smith on July 27, 2010, 08:38:47 AM
Quote from: BartvanderWolf
I'll prepare an image, and add an f/32 (let's up the ante) diffraction blur, and post it later. We can then see what the various methods can restore, and what the limitations are.

Cheers,
Bart

Sounds very interesting, Bart. Some of my Zeiss lenses go to f/45. Never used to worry me on film . . .

John
Title: Deconvolution sharpening revisited
Post by: BJL on July 27, 2010, 08:44:14 AM
Quote from: Wayne Fox
... I'm just curious if de-convolution sharpening and the evolution of computational imaging might eventually overcome much of the problem with diffraction.
This is done in microscopy ... an area where there is a constant battle to overcome extremely shallow DOF, or to put it another way, to reduce the painful trade-offs between OOF effects (aperture too big) and diffraction effects (aperture too small). One snippet:
http://en.wikipedia.org/wiki/Microscopy#Su...tion_techniques (http://en.wikipedia.org/wiki/Microscopy#Sub-diffraction_techniques)
Title: Deconvolution sharpening revisited
Post by: bjanes on July 27, 2010, 08:48:24 AM
Quote from: John R Smith
Sounds very interesting, Bart. Some of my Zeiss lenses go to f45   Never used to worry me on film . . .

John
The resolution-limiting Airy disk is the same size on an 8 x 10 inch view camera as on a Minox miniature format. However, for a given print size, the effects of diffraction for a given Airy disc diameter are much more apparent with the Minox due to the greater magnification factor. Likewise, the effects of diffraction do not depend on pixel size: for a given overall sensor size, a small-pixel camera will have the same diffraction-limited resolution as a large-pixel camera.

Regards,

Bill
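Bill's format argument is easy to put numbers on. A sketch, assuming f/32, a 564 nm wavelength (the figure Bart uses in his experiment in this thread), and an arbitrary 300 mm wide print; the example formats are my own illustrative choices:

```python
# The Airy-disc diameter (2.44 * wavelength * f-number) is the same on
# any format; what differs is the enlargement needed to reach a given
# print size. Formats and print width below are illustrative only.
def airy_diameter_mm(f_number, wavelength_mm=0.000564):
    """Airy-disc diameter on the sensor/film, independent of format."""
    return 2.44 * wavelength_mm * f_number

def airy_on_print_mm(f_number, sensor_width_mm, print_width_mm=300.0):
    """Airy-disc diameter after enlarging the frame to a fixed print width."""
    return airy_diameter_mm(f_number) * print_width_mm / sensor_width_mm

for name, width_mm in [("8x10 inch film", 254.0), ("35 mm frame", 36.0), ("Minox 8x11 mm", 11.0)]:
    print(f"{name:15s} blur on a 300 mm print: {airy_on_print_mm(32, width_mm):.2f} mm")
```

On the capture medium the blur circle is about 0.044 mm regardless of format; after enlargement to the same print it stays near 0.05 mm for 8x10 but grows to roughly 1.2 mm for the Minox, which is exactly the magnification effect described above.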
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 27, 2010, 08:58:27 AM
Quote from: ChristophC
Shouldn't it be possible to some extent to reverse this, since the PSF of diffraction is well known?
( I think its called some sort of Bessel function).

Yes, for a circular aperture it involves the square of a Bessel function J_1

(http://upload.wikimedia.org/math/8/b/a/8ba098b2158a7d56acedbe6e8b79fd8c.png)

Which has the following characteristic intensity pattern (PSF)

(http://upload.wikimedia.org/wikipedia/en/thumb/1/14/Airy-pattern.svg/220px-Airy-pattern.svg.png)

An issue may be that it is hard to deal with the pattern of peaks and minima accurately in a numerical setting.  Of course, what the sensor will typically see unless you're really stopped down is a box blur of this pattern, since it is sampled by pixels of finite size.  Another issue is that the tail of the PSF has a much slower falloff than a Gaussian, so might need more computational resources to mitigate accurately.  All this is saying that deconvolution may have a harder time dealing with diffraction than with, say, OOF blur.

Just for fun, here's an actual imaged diffraction pattern (with satellite rings!) from someone's macro setup:

http://forums.dpreview.com/forums/read.asp...essage=21952208 (http://forums.dpreview.com/forums/read.asp?forum=1030&message=21952208)
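For anyone wanting to generate this PSF numerically, here is a sketch that samples the Airy intensity (2·J1(x)/x)² on a pixel grid using SciPy's Bessel function. The 9x9 size, 6.4 µm pitch, 564 nm wavelength, and f/32 mirror Bart's experiment in this thread; point-sampling at pixel centres is my own simplification (it ignores the box-blur integration over the finite pixel area mentioned above):

```python
# Sample the circular-aperture diffraction PSF on a pixel grid.
import numpy as np
from scipy.special import j1

def airy_psf(size, pitch_um, wavelength_um, f_number):
    """I(x) = (2*J1(x)/x)^2 with x = pi*r/(wavelength*N), unit-sum normalised."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.hypot(x, y) * pitch_um                 # radial distance in microns
    arg = np.pi * r / (wavelength_um * f_number)  # dimensionless argument
    with np.errstate(invalid="ignore", divide="ignore"):
        intensity = (2.0 * j1(arg) / arg) ** 2
    intensity[half, half] = 1.0                   # limit of (2*J1(x)/x)^2 at x = 0
    return intensity / intensity.sum()

psf = airy_psf(9, pitch_um=6.4, wavelength_um=0.564, f_number=32)
# First minimum at r = 1.22 * wavelength * N, i.e. an Airy-disc diameter
# of about 2.44 * 0.564 um * 32 ~ 44 um, the diameter Bart quotes.
```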
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 27, 2010, 04:25:05 PM
Quote from: BartvanderWolf
I'll prepare an image, and add an f/32 (let's up the ante) diffraction blur, and post it later. We can then see what the various methods can restore, and what the limitations are.

Okay, here we go.

1. I've taken a crop of a shot taken with my 1Ds3 (6.4 micron sensel pitch + Bayer CFA) and the TS-E 90mm at f/7.1 (the aperture where the diffraction pattern spans approx. 1.5 pixels).
0343_Crop.jpg (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop.jpg) (1.203kb) I used 16-b/ch TIFFs throughout the experiment, but provide links to JPEGs and PNGs to save bandwidth.

2. That crop is convolved with a single diffraction (at f/32) kernel for 564nm wavelength (the luminosity weighted average of R, G and B taken as 450, 550 and 650 nm) at a 6.4 micron sensel spacing (assuming 100% fill-factor). That kernel (http://www.xs4all.nl/~bvdwolf/main/downloads/Airy9x9{p=6.4FF=100w=0.564f=32.}.dat) was limited to the maximum 9x9 kernel size of ImagesPlus, a commercial Astrophotography program chosen for the experiment because a PSF kernel can be specified and the experiment can be verified. That means that only a part of the infinite diffraction pattern (some 44 micron, or 6.38 pixel widths, in diameter to the first minimum) could be encoded. So I realise that the diffraction kernel is not perfect, but it covers the majority of the energy distribution. The goal is to find out how well certain methods can restore the original image, so anything that resembles diffraction will do.
The benefit of using a 9x9 convolution kernel is that the same kernel can be used for both convolution and deconvolution, so we can judge the potential of a common method under somewhat ideal conditions (a known PSF, and computable in a reasonable time). It will present a sort of benchmark for the others to beat.
Crop+diffraction (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction.png) (5.020kb !) This is the subject to restore to its original state before diffraction was added.

3. And here (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction+RL0-1000.jpg) (945kb) is the result after only one Richardson-Lucy restoration (although with 1000 iterations) with a perfectly matching PSF. There are some ringing artifacts, but the noise is at almost the same level as in the original. The resolution has been improved significantly; quite usable for a simulated f/32 shot as a basis for further postprocessing and printing. Look specifically at the Venetian blinds in the first-floor windows in the center. Remember, the restoration goal was to restore the original, not to improve on it (that will take another postprocessing step).

Again, this is a simplified case (with only moderate noise) with only one type of uniform blur, and its PSF is exactly known. But it does suggest that under ideal circumstances a lot can be restored. So that reduces the quest to an accurate characterization of the PSF in a given image, and software that can use it for restoration ...

Cheers,
Bart
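Bart's convolve-then-restore experiment can be miniaturised in a few lines, assuming a synthetic striped test image and a small 5x5 binomial kernel in place of his 1Ds3 crop and 9x9 Airy kernel; the Richardson-Lucy update here is the textbook, undamped form:

```python
# Blur a known image with a known PSF, then restore with plain
# Richardson-Lucy using the same (mirrored) PSF — the noiseless,
# known-PSF ideal case Bart describes.
import numpy as np

def fft_convolve(image, psf):
    """Circular convolution with a small PSF centred at the origin."""
    pad = np.zeros_like(image)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))

def richardson_lucy(observed, psf, iterations=200):
    """One multiplicative RL update per iteration, starting from a flat image."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reblurred = fft_convolve(estimate, psf)
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate *= fft_convolve(ratio, psf_mirror)
    return estimate

truth = np.zeros((64, 64))
truth[20:44:4, 8:56] = 1.0                # "venetian blind" stripes
row = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
psf = np.outer(row, row)
psf /= psf.sum()                          # stand-in for the diffraction kernel
blurred = fft_convolve(truth, psf)        # the simulated blur step
restored = richardson_lucy(blurred, psf)  # the restoration step
```

With the PSF known exactly and no noise added, the restored stripes regain much of their original contrast; with real noise, the iteration count becomes the ringing-versus-noise trade-off discussed later in the thread.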
Title: Deconvolution sharpening revisited
Post by: Ray on July 27, 2010, 10:33:38 PM
Quote from: BartvanderWolf
Okay, here we go.

1. I've taken a crop of a shot taken with my 1Ds3 (6.4 micron sensel pitch + Bayer CFA) and the TS-E 90mm af f/7.1 (the aperture where the diffraction pattern spans approx. 1.5 pixels).
0343_Crop.jpg (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop.jpg) (1.203kb) I used 16-b/ch TIFFs throughout the experiment, but provide links to JPEGs and PNGs to save bandwidth.

2. That crop is convolved with a single diffraction (at f/32) kernel for 564nm wavelength (the luminosity weighted average of R, G and B taken as 450, 550 and 650 nm) at a 6.4 micron sensel spacing (assuming 100% fill-factor). That kernel (http://www.xs4all.nl/~bvdwolf/main/downloads/Airy9x9{p=6.4FF=100w=0.564f=32.}.dat) was limited to the maximum 9x9 kernel size of ImagesPlus, a commercial Astrophotography program chosen for the experiment because a PSF kernel can be specified and the experiment can be verified. That means that only a part of the infinite diffraction pattern (some 44 micron, or 6.38 pixel widths, in diameter to the first minimum) could be encoded. So I realise that the diffraction kernel is not perfect, but it covers the majority of the energy distribution. The goal is to find out how well certain methods can restore the original image, so anything that resembles diffraction will do.
The benefit of using a 9x9 convolution kernel is that the same kernel can be used for both convolution and deconvolution, so we can judge the potential of a common method under somewhat ideal conditions (a known PSF, and computable in a reasonable time). it will present a sort of benchmark for the others to beat.
Crop+diffraction (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction.png) (5.020kb !) This is the subject to restore to it's original state before diffraction was added.

3. And here (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction+RL0-1000.jpg) (945kb) is the result after only one Richardson Lucy restoration (although with 1000 iterations) with a perfectly matching PSF. There are some ringing artifacts, but the noise is almost the same level as in the original. The resolution has been improved significantly, quite usable for a simulated f/32 shot as a basis for further postprocessing and printing. Look specifically at the Venetian blinds at the first floor windows in the center. Remember, the restoration goal was to restore the original, not to improve on it (that will take another postprocessing step).

Again, this is a simplified case (with only moderate noise) with only one type of uniform blur, and its PSF is exactly known. But it does suggest that under ideal circumstances, a lot can be restored. So that reduces the quest to an accurate characterization of the PSF in a given image, and a software that can use it for restoration ...

Cheers,
Bart

Interesting, Bart. Thanks for providing a 16 bit PNG file of the unsharpened image.

I tried sharpening the PNG file using Focus Magic (which I've been using for a number of years now). The automatic detection of blur width gave me readings varying from 2 pixels to 7 pixels, depending on which part of the image was selected. One can get some rather ugly results sharpening a whole image at a 7-pixel setting, especially at 100%, so I tried using a 1-pixel blur width at 50%, repeating the operation 7 times.

Below is the result, using maximum quality jpeg compression. To my eyes, the result looks very close to yours. However, at 200% it's clear that your result shows slightly finer detail. An obvious example of this is the lower window to the left of the tree. The faint horizontal stripes suggest the presence of a venetian blind. In my FM-sharpened image, there's no hint of this detail.

[attachment=23359:FM_1_pix...fraction.jpg]
Title: Deconvolution sharpening revisited
Post by: MichaelEzra on July 28, 2010, 06:58:13 AM
Quote from: BartvanderWolf
3. And here (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction+RL0-1000.jpg) (945kb) is the result after only one Richardson Lucy restoration (although with 1000 iterations) with a perfectly matching PSF. There are some ringing artifacts, but the noise is almost the same level as in the original. The resolution has been improved significantly, quite usable for a simulated f/32 shot as a basis for further postprocessing and printing. Look specifically at the Venetian blinds at the first floor windows in the center. Remember, the restoration goal was to restore the original, not to improve on it (that will take another postprocessing step).

Again, this is a simplified case (with only moderate noise) with only one type of uniform blur, and its PSF is exactly known. But it does suggest that under ideal circumstances, a lot can be restored. So that reduces the quest to an accurate characterization of the PSF in a given image, and a software that can use it for restoration ...

Cheers,
Bart

Bart, this is very interesting! I have not yet been able to achieve the same deconvolution using RawTherapee (RL deconvolution), ACR, SmartSharpen, Topaz Detail or ALCE (bigano.com).
I just discovered this tool - DeblurMyImage (http://www.adptools.com/en/deblurmyimage-description.html) - which allows importing a PSF.
Do you have by any chance an image of the PSF used by ImagesPlus?
This will be an interesting experiment!

I suppose that if I am able to measure the PSF for my lens + camera + raw converter, it will provide the best sharpening for my images. This is very tempting!
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 28, 2010, 08:14:37 AM
Quote from: BartvanderWolf
3. And here (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction+RL0-1000.jpg) (945kb) is the result after only one Richardson Lucy restoration (although with 1000 iterations) with a perfectly matching PSF. There are some ringing artifacts, but the noise is almost the same level as in the original. The resolution has been improved significantly, quite usable for a simulated f/32 shot as a basis for further postprocessing and printing. Look specifically at the Venetian blinds at the first floor windows in the center. Remember, the restoration goal was to restore the original, not to improve on it (that will take another postprocessing step).

What is the effect of using fewer than 1000 iterations?  In RawTherapee there doesn't seem to be much change after 40 or 50.  

BTW, I looked at the (open) source code for RT, and it assumes a Gaussian PSF.  I think it would be easily modified to use different PSFs, and possibly not too hard to allow one to be input by the user.
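The change Emil describes is conceptually small, because a plain RL loop only needs *some* non-negative kernel that sums to one. A sketch of a generic Gaussian-PSF builder (this is not RawTherapee's actual code; size and sigma are illustrative) whose output could be swapped for any user-supplied PSF array:

```python
# Build the Gaussian kernel a deconvolver like RT's assumes; any other
# normalised, non-negative array (e.g. a sampled Airy pattern) could be
# passed to the deconvolution routine in its place.
import numpy as np

def gaussian_psf(size, sigma):
    """Sampled 2-D Gaussian kernel, normalised to unit sum."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

psf = gaussian_psf(9, sigma=1.5)
```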
Title: Deconvolution sharpening revisited
Post by: bjanes on July 28, 2010, 10:40:55 AM
Quote from: Ray
Interesting, Bart. Thanks for providing a 16 bit PNG file of the unsharpened image.

I tried sharpening the PNG file using Focus Magic (which I've been using for a number of years now). The automatic detection of blur width gave me readings varying from 2 pixels to 7 pixels, depending on which part of the image was selected. One can get some rather ugly results sharpening a whole image at a 7-pixels setting, especially at 100%, so I tried using a 1-pixel blur width at 50%, repeating the operation 7 times.

Below is the result, using maximum quality jpeg compression. To my eyes, the result looks very close to yours. However, at 200% it's clear that your result shows slightly finer detail. An obvious example of this is the lower window to the left of the tree. The faint horizontal stripes suggest the presence of a venetian blind. In my FM-sharpened image, there's no hint of this detail.

[attachment=23359:FM_1_pix...fraction.jpg]
Ray,

Your experiment debunks one of the main criticisms of deconvolution: that it is fine in theory but falls down in practice because a suitable PSF cannot be found. Bart used a near-perfect PSF (limited by the 9x9 kernel in ImagesPlus) and you used a trial-and-error method to derive a PSF that produced nearly as good results.

The PSF used by FocusMagic and how it is affected by the Blur Width and Amount parameters is not well documented. Does Amount determine the number of iterations or some other quantity? Restorations for defocus, diffraction, and lens aberrations such as spherical aberration require different PSFs. As implied by its name, FocusMagic may use a PSF optimized for restoration of defocus. However, as your experiment demonstrates, decent results may be obtained with a PSF that is not optimal; a decent approximation may be sufficient.

It was disappointing to learn that the PSF for Raw Therapee is a Gaussian blur. Photoshop's SmartSharpen has PSFs for Gaussian blur and lens blur (whatever that is), and the latter is recommended for photographic use. Does anyone have information on the PSFs used by Raw Developer or ACR?

Regards,

Bill
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 28, 2010, 12:08:08 PM
Quote from: MichaelEzra
Bart, this is very interesting! I was not yet able to achieve the same deconvolution by using RawTherapee (RL deconvolution), ACR, SmartSharpen, Topaz Detail and ALCE(bigano.com).
Just discovered this tool - DeblurMyImage (http://www.adptools.com/en/deblurmyimage-description.html) that allows to import a PSF.
Do you have by any chance an image of the PSF used by ImagePlus?
This will be an interesting experiment!

I have supplied a link (http://www.xs4all.nl/~bvdwolf/main/downloads/Airy9x9%7Bp=6.4FF=100w=0.564f=32.%7D.dat) to the data file. You can read the .dat file with Wordpad or a similar simple document reader. You can input those numbers (rounded to 16-bit values, or converted to 8-bit numbers by dividing by 65535, multiplying by 255, and rounding to integers). A small warning: the lower the accuracy, the lower the output quality will be. For convenience I've added a 16-bit Greyscale TIFF (http://www.xs4all.nl/~bvdwolf/main/downloads/N32.tif) (convert to RGB mode if needed). I have turned it into an 11x11 kernel (9x9 + black border) because the program you referenced apparently (from the description) requires a zero background level.

Quote
I suppose if I will be able to measure the PSF for my lens + camera + raw converter, it will provide the best sharpening for my images, This is very tempting!

Yes, that would be a goal, but the trick is to acquire the PSF from an arbitrary image without prior knowledge, or be able to interactively synthesize a PSF that works well on a preview.

Cheers,
Bart
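Bart's 16-bit versus 8-bit accuracy warning is easy to quantify. A sketch that quantises a normalised kernel to a given bit depth and compares the error; the Gaussian here is only a placeholder for the real Airy .dat values:

```python
# Round a normalised PSF to integer levels of its peak (as Bart describes
# for exporting his .dat kernel) and measure the error introduced.
import numpy as np

def quantise(kernel, levels):
    """Round kernel entries to integer fractions of the peak, then renormalise."""
    q = np.rint(kernel / kernel.max() * levels)
    return q / q.sum()

y, x = np.mgrid[-4:5, -4:5]
kernel = np.exp(-(x**2 + y**2) / 4.0)   # placeholder 9x9 PSF
kernel /= kernel.sum()

err16 = np.abs(quantise(kernel, 65535) - kernel).max()
err8 = np.abs(quantise(kernel, 255) - kernel).max()
# err8 comes out well above err16: the coarser the rounding, the less
# faithful the kernel, hence Bart's warning about output quality.
```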
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 28, 2010, 12:28:58 PM
Quote from: ejmartin
What is the effect of using fewer than 1000 iterations?  In RawTherapee there doesn't seem to be much change after 40 or 50.

Hi Emil,

The reason is that fewer iterations showed more ringing artifacts, but one could opt for that compromise and try to deal with the artifacts in another way. After a few hundred iterations the ringing started to reduce a bit, so I decided to give the PC a workout. Perhaps a larger kernel size would have allowed stopping earlier with less ringing, but a larger kernel would also increase calculation time per iteration.

Quote
BTW, I looked at the (open) source code for RT, and it assumes a Gaussian PSF.  I think it would be easily modified to use different PSF's, and possibly not too hard to allow one input by the user.

I'm sure that would increase its usability even further, although it's already quite effective for normal capture sharpening. It is possible to approximate the most important part of a diffraction pattern with a Gaussian, but it will deliver lower-quality results for deconvolution of diffraction effects alone. A mix of PSFs can potentially be approximated by (a mix of) Gaussians, but defocus has a markedly different PSF shape. It would be preferable to use prior knowledge (e.g. from a database of analyses) or to analyze the image content (or a test-pattern image taken with the same shooting parameters).

The beauty of deconvolution is that it really increases resolution, not just edge contrast (and halo).

Cheers,
Bart
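Bart's point that a Gaussian fits the core of a diffraction PSF but not its ringed skirt can be checked with a crude grid search for the best-fitting Gaussian over the 1-D Airy profile (the sampling range and search grid are my own choices):

```python
# Least-squares grid search for the Gaussian closest to the Airy profile.
import numpy as np
from scipy.special import j1

x = np.linspace(1e-6, 10.0, 2000)           # dimensionless Airy argument
airy = (2.0 * j1(x) / x) ** 2
sigmas = np.linspace(0.5, 3.0, 251)
errors = np.array([np.mean((airy - np.exp(-x**2 / (2 * s**2))) ** 2) for s in sigmas])
best_sigma = sigmas[np.argmin(errors)]
gauss = np.exp(-x**2 / (2 * best_sigma**2))
# The best Gaussian tracks the central lobe, but it is featureless where
# the Airy pattern has its first minimum (near x = 3.83) and rings beyond
# it, which is why a Gaussian PSF under-corrects pure diffraction blur.
```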
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 28, 2010, 12:44:50 PM
Quote from: bjanes
Ray,

Your experiment debunks one of the main criticisms of deconvolution: that it is fine in theory but falls down in practice because a suitable PSF cannot be found. Bart used a near-perfect PSF (limited by the 9x9 kernel in ImagesPlus) and you used a trial-and-error method to derive a PSF that produced nearly as good results.

Hi Bill,

Yes, Ray did well by adapting his method of using a good (more defocus-oriented) deconvolver.

Quote
The PSF used by FocusMagic and how it is affected by the Blur Width and Amount parameters is not well documented. Does Amount determine the number of iterations or some other quantity? Restorations for defocus, diffraction, and lens aberrations such as spherical aberration require different PSFs.

That's correct, but then FocusMagic doesn't claim to be a cure for everything. The documentation leaves a bit to be desired, but on the other hand the preview makes it into a quick trial and error procedure to find the best settings. What works well in most cases is to increase the amount and start increasing the radius. There comes a point where the resolution suddenly changes for the worse. Just back up one click and fine-tune the amount.

Quote
As implied by its name, FocusMagic may use a PSF optimized for restoration of defocus. However, as your experiment demonstrates, decent results may be obtained with a PSF that is not optimal; a decent approximation may be sufficient.

I agree. The improvement will be quite visible anyway, and a bit of creativity may find an even better solution. As Ray's example showed, he came very close to an optimal scenario, and with less visible artifacts.

Quote
It was disappointing to learn that the PSF for Raw Therapee is a Gaussian blur.

The program has an open development structure now, so who knows what the future has in store.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 28, 2010, 01:24:58 PM
Hi,

I tried Photoshop CS5 with these settings:
[attachment=23361:Screen_s...20.29_PM.png]

And got this result:
[attachment=23362:0343_Cro...ion_ekr1.jpg]

I'd suggest, from what I have read, that Smart Sharpen with 'Lens Blur' is also oriented toward removing defocus errors.

Best regards
Erik




Quote from: BartvanderWolf
Hi Bill,

Yes, Ray did well by modifying the method of using a good (more defocus oriented) deconvolver.



That's correct, but then FocusMagic doesn't claim to be a cure for everything. The documentation leaves a bit to be desired, but on the other hand the preview makes it into a quick trial and error procedure to find the best settings. What works well in most cases is to increase the amount and start increasing the radius. There comes a point where the resolution suddenly changes for the worse. Just back up one click and fine-tune the amount.



I agree. The improvement will be quite visible anyway, and a bit of creativity may find an even better solution. As Ray's example showed, he came very close to an optimal scenario, and with less visible artifacts.



The program has an open development structure now, so who knows what the future has in store.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 28, 2010, 01:34:02 PM
Quote from: BartvanderWolf
Hi Emil,

The reason was because fewer iterations showed more ringing artifacts, but one could opt for that compromise and try to deal with the artifacts in an other way. After a few hundred iterations the ringing started to reduce a bit, so I decided to give the PC a workout. Perhaps a larger kernel size would have allowed to stop earlier with less ringing, but a larger kernel would also increase calculation time per iteration.

What software were you using to do the deconvolution?  Is it damped RL? Adaptive?  I would have thought the ringing could be controlled with damping using fewer iterations, but I'm no expert.
Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 28, 2010, 05:14:40 PM
Hi,

This is what I got in LR3.

[attachment=23379:Screen_s...08.38_PM.png]

Best regards
Erik
Title: Deconvolution sharpening revisited
Post by: feppe on July 28, 2010, 06:35:59 PM
Those are some very impressive results from several posters! Makes me almost regret deleting some OOF shots
Title: Deconvolution sharpening revisited
Post by: crames on July 28, 2010, 07:37:23 PM
Quote from: BartvanderWolf
Okay, here we go.
...
Cheers,
Bart

Very interesting experiment, Bart. Thanks for posting these.

I had a look at what is going on in the frequency domain. Following are some log(Magnitude+1) plots (Fourier spectra):

The target, original crop:
[attachment=23365:0343_Crop_Mag.png]

The MTF of the PSF:
[attachment=23368:Airy9x9_Mag.png]

The crop+diffraction. Note the horizontal and vertical streaking (looks like interpolation?):
[attachment=23366:0343_Cro...tion_Mag.png]

The crop you restored with Lucy:
[attachment=23367:0343_Cro...1000_Mag.png]

I tried convolving the PSF and the original crop in Matlab and got this for the crop+diffraction:
[attachment=23370:0343_cro...conv_Mag.png]
It looks like there might be more high frequencies to restore in this one.

Here's the Matlab convolved crop+diffraction if anyone wants to play with it: (http://sites.google.com/site/cliffpicsmisc/home/0343_crop+diffract_matlabconv.png)


Regards,
Cliff
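Cliff's log(Magnitude+1) plots take only a few lines of NumPy to reproduce; the random image here is a stand-in for the actual crop:

```python
# Centred log-magnitude Fourier spectrum, as in Cliff's plots.
import numpy as np

def log_spectrum(image):
    """log(|FFT| + 1) with the DC term shifted to the centre of the array."""
    magnitude = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    return np.log1p(magnitude)

rng = np.random.default_rng(1)
image = rng.random((128, 128))   # stand-in for the actual crop
spec = log_spectrum(image)
```

For a real test, load the crop as a greyscale float array and display `spec` as an image; blur shows up as attenuated high frequencies away from the centre, and restoration as their partial return.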
Title: Deconvolution sharpening revisited
Post by: Ray on July 29, 2010, 12:17:20 AM
Quote from: ErikKaffehr
Hi,

This is what I got in LR3.


Best regards
Erik

Hi Erik,
We're into extreme pixel-peeping here, are we not? It appears that CS5 might now be doing a better job than Focus Magic.

As I mentioned, one of the critical areas in Bart's image, which highlights the quality of the sharpening, is that window nearest the ground, just to the left of the tree. It's clear there's a venetian blind there, so it's reasonable to deduce that the horizontal lines represent real detail and are not just artifacts. My sharpening attempt with FM has not done well in that section of the image. Bart's attempt with a single Richardson-Lucy restoration does the best job, yours next and mine a poor third.

Such differences are best viewed at 300%. Here's a comparison at 300% so we all know what we're talking about. Bart's is first on the left, yours in the middle and mine furthest to the right. I added one more iteration of 1 pixel blur width at 50%, so the title should read 8x instead of 7x.

[attachment=23380:Comparis..._at_300_.jpg]

Okay! Let's now shift our gaze to the smooth blue surface at the top of the crop. What! Is that noise I see? Surely it must be! However, in my FM sharpened image, that plain blue section at the top is as smooth as a baby's bottom.

I guess we have trade-offs in operation here.

Out of curiosity, I tried sharpening Bart's image using ACR 6.1 with the following settings. Detail 100%, 0.5 pixels, amount 120%, no masking. (Masking reduces resolution.)

It's done an excellent job. So close to Bart's, I would say the differences are irrelevant. A 300% enlarged crop on the monitor represents a print size of the entire image of about 10 metres by 25 metres (maybe a slight exaggeration, but you get my point   ).

[attachment=23381:ACR_6.1_..._Bart__s.jpg]

Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 29, 2010, 12:56:38 AM
Ray,

I have noticed that noise gets amplified and even considered starting a discussion about that. Also, I would point out, it's not my doing but LR's. What's very nice with LR is that it doesn't break the parametric workflow. LR/ACR have some measures to contain noise, masking among them, but the results may look artificial.

I guess that both Focus Magic (which I used before) and Ps/CS5 Smart Sharpen mostly handle defocus errors. Eric Chan described the Lens Blur PSF used in CS5's Smart Sharpen as: "The Gaussian is basically just that, and the Lens Blur is effectively simulating a nearly circular aperture (assuming even light distribution within the aperture, very unlike Gaussian)."

Regarding your test with FM, I guess/suggest that using a larger radius may be better; you could perhaps try "blur width 2" and use your iterative technique. Ideally the deconvolver would use a correct PSF. I assume that blur width is closely related to the width of the PSF.

I have tested a little bit with FM and larger "blur radius" but have not yet found what's optimal in my eyes.

I agree that FM doesn't seem to enhance noise, while LR seems to do so if no noise suppression is used. Noise suppression may counteract what we are trying to achieve. Masking does not necessarily reduce resolution; it decides which areas are sharpened, so you would choose a mask that keeps sharpening on detail but suppresses it in smooth areas, like the blue paint. With intensive/excessive sharpening, the transition between masked and unmasked areas may be ugly.

Then we need to keep in mind that when we process the image for printing we may rescale and sharpen for output, and the printer will also add processing of its own. A complex world we live in.

Anyway, a week ago I didn't know LR had deconvolution, although I was pretty sure that Ps/CS5's Smart Sharpen actually had some deconvolution going on. So now we can start utilizing techniques that we didn't even know we had.

Also, we need to keep in mind that there are a lot of deconvolution algorithms around, and not all are created equal.

Best regards
Erik

Quote from: Ray
Hi Erik,
We're into extreme pixel-peeping here, are we not?     It appears that CS5 might now be doing a better job than Focus Magic.

As I mentioned, one of the critical areas in Bart's image, which highlights the quality of the sharpening, is that window nearest the ground, just to the left of the tree. It's clear there's a ventian blind there, so it's reasonable to deduce that the horizontal lines represent real detail and are not just artifacts. My sharpening attempt with FM has not done well in that section of the image. Bart's attempt with a single Richardson Lucy restoration does the best job, your's next and mine a poor third.

Such differences are best viewed at 300%. Here's a comparison at 300% so we all know what we're talking about. Bart's is first on the left, yours in the middle and mine furthest to the right. I added one more iteration of 1 pixel blur width at 50%, so the title should read 8x instead of 7x.

[attachment=23380:Comparis..._at_300_.jpg]

Okay! Let's now shift our gaze to the smooth blue surface at the top of the crop. What! Is that noise I see? Surely it must be! However, in my FM sharpened image, that plain blue section at the top is as smooth as a baby's bottom.

I guess we have trade-offs in operation here.

Out of curiosity, I tried sharpening Bart's image using ACR 6.1 with the following settings. Detail 100%, 0.5 pixels, amount 120%, no masking. (Masking reduces resolution.)

It's done an excellent job. So close to Bart's, I would say the differences are irrelevant. A 300% enlarged crop on the monitor represents a print size of the entire image of about 10 metres by 25 metres (maybe a slight exaggeration, but you get my point   ).

[attachment=23381:ACR_6.1_..._Bart__s.jpg]
Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 29, 2010, 12:59:32 AM
Wayne,

Thanks for "hijacking" this discussion, it got much more interesting!


Best regards
Erik

Quote from: Wayne Fox
Apologies in hijacking this thread a little bit, but personally I'm just curious if de-convolution sharpening and the evolvement of computational imaging might eventually overcome much of the problem with diffraction.  (and if this has already been discussed I also apologize - I only skimmed through the thread, seeing  how most of it is above my pay-grade).

I would assume it would be much more challenging than resolving the issues from an AA filter,  since it would require each individual lens design to be carefully tested then some method to apply the information to the file, and perhaps would require the data from every possible f/stop and with zoom lens specific zoom settings.  But it seems the theory of restoring the data as it is spread to adjacent pixels isn't much different than what happens with an AA filter.  

I know I have many times stopped down to f/22 (or further) and smart sharpen seems to work quite well, even when printing large prints.

Just curious.
Title: Deconvolution sharpening revisited
Post by: Ray on July 29, 2010, 02:42:50 AM
Quote from: ErikKaffehr
Ray,
Masking does not necessarily reduce resolution, it decides which areas to be sharpened so you would choose mask to keep sharpening on detail but suppress sharpening in smooth areas, like the blue paint. Using intensive/excessive sharpening the transition area between masked/unmasked may be ugly.

Erik,
I understand that's the principle. But in practice the result may be different. In my experiment to achieve maximum clarity in the venetian blinds, any tinkering with the 'masking' slider in ACR reduced that clarity. Try it for yourself.
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 29, 2010, 07:55:36 AM
Quote from: Ray
Out of curiosity, I tried sharpening Bart's image using ACR 6.1 with the following settings. Detail 100%, 0.5 pixels, amount 120%, no masking. (Masking reduces resolution.)

It's done an excellent job. So close to Bart's, I would say the differences are irrelevant. A 300% enlarged crop on the monitor represents a print size of the entire image of about 10 metres by 25 metres (maybe a slight exaggeration, but you get my point   ).

[attachment=23381:ACR_6.1_..._Bart__s.jpg]

I would say all the examples other than Bart's show much more Gibbs' phenomenon ('ringing' artifacts along sharp edges); look at the edges of the white window frames, for example.  Though it looks like Bart's ringing is longer range (perhaps the result of all those iterations).
Title: Deconvolution sharpening revisited
Post by: eronald on July 29, 2010, 09:10:41 AM
Quote from: ErikKaffehr
Wayne,

Thanks for "hijacking" this discussion, it got much more interesting!


Best regards
Erik

i agree

Edmund
Title: Deconvolution sharpening revisited
Post by: Ray on July 29, 2010, 09:40:10 AM
Quote from: ejmartin
I would say all the examples other than Bart's show much more Gibbs' phenomenon ('ringing' artifacts along sharp edges); look at the edges of the white window frames, for example.  Though it looks like Bart's ringing is longer range (perhaps the result of all those iterations).

Hi Emil.

Much more Gibbs phenomenon??

Here's a 400% crop comparison between Bart's sharpened result and ACR 6.1. Could you point out any significant ringing artifacts along edges, which are apparent in the ACR sharpened image but not in Bart's?

The most significant differences I see between the two images are a few faint horizontal lines on the blue paint-work at the top of the crop, which are apparent in Bart's rendition but not in the ACR rendition.

I suppose if one were examining an image of some distant planet, then such faint lines might be of great significance (assuming they are not software-generated artifacts   ).

[attachment=23391:400__crop.jpg]
Title: Deconvolution sharpening revisited
Post by: bjanes on July 29, 2010, 09:46:18 AM
Quote from: ejmartin
I would say all the examples other than Bart's show much more Gibbs' phenomenon ('ringing' artifacts along sharp edges); look at the edges of the white window frames, for example.  Though it looks like Bart's ringing is longer range (perhaps the result of all those iterations).
Those 1000 iterations of Bart's RL deconvolution were not without benefit. The Gibbs phenomenon is well demonstrated by the slanted-edge and line spread plots of Imatest. In the illustration, no sharpening is shown on the left and sharpening with Focus Magic (blur width 50, amount 150) on the right. The line spread plot is for the Focus Magic image.

[attachment=23388:003CompSh.png] [attachment=23392:lineSpread.png]


The dangers of pixel peeping are well demonstrated by looking at the actual images of the target. The maze pattern is in the area of Nyquist. The benefits of sharpening when looking at the overall image are, IMHO, most pronounced in the low frequencies, which appear to have much better contrast. This is because the contrast sensitivity function (CSF) of the eye peaks at the relatively low resolution of 8 cycles/degree, which corresponds to 1 cycle per mm for a print viewed at a distance of 34 cm (about 13.5") [Bob Atkins (http://bobatkins.com/photography/technical/mtf/mtf4.html)]. If you zoom in to look at high frequencies, the contrast in the low frequencies may be missed, and aliasing artifacts are quite apparent. Artifacts above Nyquist are shown both by the Imatest analysis and the actual image.

[attachment=23390:03_ACR_CompSh.png]

Title: Deconvolution sharpening revisited
Post by: ejmartin on July 29, 2010, 10:19:29 AM
Quote from: Ray
Hi Emil.

Much more Gibbs phenomenon??

Here's a 400% crop comparison between Bart's sharpened result and ACR 6.1. Could you point out any significant ringing artifacts along edges, which are apparent in the ACR sharpened image but not in Bart's?

The most significant differences I see between the two images are a few faint horizontal lines on the blue paint-work at the top of the crop, which are apparent in Bart's rendition but not in the ACR rendition.

[attachment=23391:400__crop.jpg]

They both have ringing artifacts.  Bart's have more side lobes, yours have a stronger first peak and trough.  It was that initial over- and under-shoot that I was referring to when I wrote "much more" -- the initial amplitude is stronger.  Though that longer tail of side lobes can be more of a problem in some places -- see the white sliver next to the left side of the tree trunk near the bottom.
Title: Deconvolution sharpening revisited
Post by: Ray on July 29, 2010, 10:44:53 AM
Quote from: ejmartin
They both have ringing artifacts.  Bart's have more side lobes, yours have a stronger first peak and trough.  It was that initial over- and under-shoot that I was referring to when I wrote "much more" -- the initial amplitude is stronger.  Though that longer tail of side lobes can be more of a problem in some places -- see the white sliver next to the left side of the tree trunk near the bottom.

The white sliver is more natural in the ACR image on the right, right?
Title: Deconvolution sharpening revisited
Post by: KevinA on July 29, 2010, 11:39:05 AM
I've used this for some time with success: http://www.fixerlabs.com/EN/photoshop_plugins/ffex3.htm (http://www.fixerlabs.com/EN/photoshop_plugins/ffex3.htm)

Kevin.
Title: Deconvolution sharpening revisited
Post by: joofa on July 29, 2010, 12:28:02 PM
Quote from: ejmartin
I would say all the examples other than Bart's show much more Gibbs' phenomenon ('ringing' artifacts along sharp edges); look at the edges of the white window frames, for example.  Though it looks like Bart's ringing is longer range (perhaps the result of all those iterations).

The "ringing" in deconvolution may not be the Gibbs effect; rather, it may be caused by inaccuracies in modeling the image noise and PSF.

Quote from: bjanes
The Gibbs phenomenon is well demonstrated with the slanted edge and line spread plots of Imatest.

The Gibbs phenomenon is dependent upon the metric used to measure it. For example, the L1 norm has higher immunity than the L2, so unless the metric is specified the information is incomplete.

Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 29, 2010, 02:26:40 PM
Hi,

This comparison may be of some interest:

http://www.deconvolve.net/bialith/Research/BARclockblur.htm (http://www.deconvolve.net/bialith/Research/BARclockblur.htm)

Best regards
Erik


Quote from: bjanes
In his comparison of the new Leica S2 with the Nikon D3x, Lloyd Chambers (Diglloyd (http://diglloyd.com/diglloyd/2010-07-blog.html#_20100722DeconvolutionSharpening)) has shown how the deconvolution sharpening (more properly image restoration) with the Mac only raw converter Raw Developer markedly improves the micro-contrast of the D3x image to the point that it rivals that of the Leica S2. Diglloyd's site is a pay site, but it is well worth the modest subscription fee. The Richardson-Lucy algorithm used by Raw Developer partially restores detail lost by the presence of a blur filter (optical low pass filter) on the D3x and other dSLRs.

Bart van der Wolf and others have been touting the advantages of deconvolution image restoration for some time, but pundits on this forum usually pooh pooh the technique, pointing out that deconvolution techniques are fine in theory, but in practice are limited by the difficulties in obtaining a proper point spread function (PSF) that enables the deconvolution to undo the blurring of the image. Roger Clark (http://www.clarkvision.com/articles/image-restoration1/index.html) has reported good results with the RL filter available in the astronomical program ImagesPlus. Focus Magic is another deconvolution program used by many for this purpose, but it has not been updated for some time and is 32 bit only.

Isn't it time to reconsider deconvolution? The unsharp mask is very mid 20th century and originated in the chemical darkroom. In many cases decent results can be obtained by deconvolving with a less than perfect and empirically derived PSP. Blind deconvolution algorithms that automatically determine the PSP are being developed.

Regards,

Bill
Title: Deconvolution sharpening revisited
Post by: joofa on July 30, 2010, 12:02:23 PM
Quote from: Wayne Fox
if de-convolution sharpening and the evolvement of computational imaging might eventually overcome much of the problem with diffraction.

If computational resources and a large amount of memory are available, then it is possible to have closed-form solutions under some usual circumstances, instead of the iterative procedures in the type of deconvolution being discussed here. This has to do with the structure of block-Toeplitz and circulant matrices, which are reduced to the usual convolution problems in iteration-based deconvolution procedures. However, the amount of memory required would be huge, in terabytes for typically sized images these days, and is perhaps not currently a possibility for home users.
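In the idealized circulant case (circular boundary conditions), the closed form is cheap rather than terabyte-sized, because the FFT diagonalizes the blur matrix and "inverting the matrix" reduces to a per-frequency division. A naive sketch (the function name and the eps guard are arbitrary choices, and a plain inverse filter like this amplifies noise badly on real images):

```python
import numpy as np

def inverse_filter(blurred, psf, eps=1e-3):
    """Closed-form deconvolution via the FFT.

    Circular convolution by a PSF is multiplication by a circulant
    matrix, which the DFT diagonalizes, so the 'matrix inverse' is
    just division by the PSF's transfer function -- no iteration.
    eps guards against division by near-zero frequencies, where a
    plain inverse would blow up noise.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # transfer function (OTF)
    G = np.fft.fft2(blurred)
    H_safe = np.where(np.abs(H) < eps, eps, H)
    return np.real(np.fft.ifft2(G / H_safe))
```

The terabyte figure applies when the blur matrix is not circulant (spatially varying blur, realistic boundaries), so it cannot be diagonalized by the FFT and must be stored and inverted explicitly.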

Quote from: Ray
Much more Gibbs phenomenon??

Unfortunately, the Gibbs phenomenon, which produces ringing-like effects, is usually mistaken for ringing produced by convolution (or deconvolution) operations. They are not the same in general, and the typical ringing associated with image restoration has been mistakenly identified as the Gibbs phenomenon in this thread.
Title: Deconvolution sharpening revisited
Post by: CBarrett on July 30, 2010, 12:12:33 PM
If only there were a De-Convoluter for LuLa threads....

; )
Title: Deconvolution sharpening revisited
Post by: Eric Myrvaagnes on July 30, 2010, 07:44:50 PM
Quote from: CBarrett
If only there were a De-Convoluter for LuLa threads....

; )

Amen!!!
Title: Deconvolution sharpening revisited
Post by: Ray on July 30, 2010, 10:54:41 PM
Quote from: joofa
Unfortunately, the Gibbs phenomenon, which produces ringing-like effects, is usually mistaken for ringing produced by convolution (or deconvolution) operations. They are not the same in general, and the typical ringing associated with image restoration has been mistakenly identified as the Gibbs phenomenon in this thread.

Well, Joofa, you obviously appear to know what you are talking about. I confess I have almost zero knowledge of the Gibbs phenomenon, but I can appreciate that it may be useful to be able to identify and name any artifacts one may see in an image, especially if one is examining an X-ray of someone's medical condition, or indeed searching for evidence of alien life on a distant planet.

I wouldn't attempt to argue points of physics or mathematics with the eminent Emil Martinec. However, when Emil implies that my ACR 6.1 'detail enhancement' has significantly more ringing artifacts than Bart's Richardson Lucy rendition, I'm plain confused. I just don't see it; at least not at 400% enlargement.

Here's the comparison again.

[attachment=23419:400__crop.jpg]

As it appears to me, the edges of the white sliver at the bottom left of the tree have slightly more noticeable ringing artifacts in Bart's image. Furthermore, if one examines the plain blue area at the top of the crop, immediately above the uppermost white bar, one can see 4 or 5 faint horizontal lines in Bart's image, but only one line in the ACR image (excluding the very dark edge adjoining the white bar, which is apparent in both crops).

I presume these faint lines in the blue paint-work are ringing artifacts, but I'm not certain. Perhaps those faint blue lines actually exist in the paint-work. If I were a doctor examining an X-ray, I'd be concerned about such matters.

As a matter of interest, I tried another sharpening experiment using Focus Magic. Those who are familiar with this program will know that there are several options for different types of image source. 'Digital Camera' is the default and the one I used earlier, but at the bottom of the list is 'Forensic'. It sounds as though that option would do better at restoring detail, and so it does.

[attachment=23420:FM_Foren...mparison.jpg]

In the 300% crops above of the same part of the image, Focus Magic is now doing a much better job of delineating the individual slats of the blind. Bart's image has the edge regarding the clarity of those slats, but I think one could say the FM (forensic) image displays slightly lower noise in the blue area at the top. The crop on the far right is my first result using the default 'Digital Camera' source. The blue paint-work is clearly much smoother, but the detail of the slats is much worse, even non-existent. Trade-offs again.

So much for pixel-peeping!  

Title: Deconvolution sharpening revisited
Post by: Schewe on July 30, 2010, 11:30:15 PM
Quote from: Ray
I wouldn't attempt to argue points of physics or mathematics with the eminent Emil Martinec. However, when Emil implies that my ACR 6.1 'detail enhancement' has significantly more ringing artifacts than Bart's Richardson Lucy rendition, I'm plain confused. I just don't see it; at least not at 400% enlargement.

Since I have a dog in this hunt (involved in ACR capture sharpening and PhotoKit Sharpener), I've avoided this thread like the plague...but I will say this: I keep my ear to the ground, and short of exotic deconvolution algorithms (with no easy-to-use plug-ins) with generally well-known PSFs, I'm not sure that any theoretical image "restoration" or detail sharpening via deconvolution is really and truthfully useful for general photography.

Yes, there may be technical solutions to image processing that un-blurs motion blur to the point where you "might" be able to use facial recognition software to identify a person or un-blur a license plate number so when they blow past a red light, you can send them a ticket...I suspect England would LOVE to be able to un-blur speeders so they can send a bill to a speeder for going over the limit.

But the fact that ACR 6.1 comes real close (and perhaps arguably with less ringing) to a 1K iteration of deconvolution processing should tell you something...the other side of the fence is NOT always a whole lot greener...

Yes, it's useful to keep pushing the limits of image processing. Take a look at ACR 6.1 Process 2010 and the new noise reduction (and lens corrections)...part of the 2010 Process is tweaking of the sharpening blend and radius precision...and of course, radically better noise reduction.

Some people seem hellbent on looking for computational correction of things that really should be taken care of by selecting the optimal shutter speed and aperture for a given shot, and then using the proper combination of capture, creative and output sharpening for the image.

I know the research is ongoing...I welcome it! I've spent a nice, eye-opening time at MIT looking at a variety of doctoral dissertations for a bunch of different directions...and cool stuff DOES come from MIT (think Seam Carving AKA Content Aware Scaling) but seriously, the thought that deconvolution image restoration is the ultimate solution to all of photography's woes is, well, SciFi–as in zooming into the image of the photo in Blade Runner or the way the CSI "Enhance" filter seems to work.

Rather than looking towards and pinning hopes on the future, I think it would generally be more useful for people to really learn how to use the tools they already have to advance their images...but ya know, that's just me.
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 31, 2010, 12:37:44 AM
Quote from: Schewe
...but ya know, that's just me.

Yes, we know.    

We now have it on Eric Chan's authority that, when the detail slider is cranked up to 100%, the sharpening in ACR 6 is deconvolution based.  So what hairs are you splitting to distinguish it from deconvolution sharpening?
Title: Deconvolution sharpening revisited
Post by: Schewe on July 31, 2010, 01:26:18 AM
Quote from: ejmartin
We now have it on Eric Chan's authority that, when the detail slider is cranked up to 100%, the sharpening in ACR 6 is deconvolution based.  So what hairs are you splitting to distinguish it from deconvolution sharpening?

Well, that was a bit of a surprise to me...

But I would ask again, what did a 1K iteration deconvolution do that ACR 6.1 couldn't do (except add ringing effects)?
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 31, 2010, 02:05:04 AM
Quote from: Schewe
Well, that was a bit of a surprise to me...

But I would ask again, what did a 1K iteration deconvolution do that ACR 6.1 couldn't do (except add ringing effects)?

I think you are fixating on the particular implementation (that Bart used) rather than considering the method in general.  Typically most of the improvement to be had with RL deconvolution comes in the first few tens of iterations, and the method can be quite fast (as it is in RawTherapee, FocusMagic, and RawDeveloper, for instance).  A good implementation of RL will converge much faster than 1K iterations.  It's hard to say what is causing the ringing tails in Bart's example; it could be truncation of the PSF, or it could be something else.  I would imagine that the dev team at Adobe has spent much more time tweaking their deconvolution algorithm than the one day that Bart spent working up his example.
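For reference, a minimal RL loop looks like this. This is a bare-bones sketch, not ACR's, RawTherapee's, or FocusMagic's implementation; the function name, flat initial estimate, and epsilon guard are my own choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Minimal Richardson-Lucy deconvolution.

    Multiplicative update that converges toward the maximum-
    likelihood estimate under Poisson noise; in practice most of
    the sharpening appears within the first few tens of iterations.
    """
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        # Re-blur the current estimate and compare with what was observed.
        predicted = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(predicted, 1e-12)
        # Push the estimate toward agreement with the observation.
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```

Production implementations add regularization, damping, or early stopping on top of this loop, which is where much of the difference in ringing behavior between implementations comes from.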

But I would ask again, why do you want to throw dirt on deconvolution methods if you are lavishing praise on ACR 6.1?
Title: Deconvolution sharpening revisited
Post by: deejjjaaaa on July 31, 2010, 02:12:13 AM
Quote from: ejmartin
We now have it on Eric Chan's authority that, when the detail slider is cranked up to 100%, the sharpening in ACR 6 is deconvolution based.

actually he was saying that it always involves deconvolution as long as the detail slider is > 0; it's just that at 100 it is pure deconvolution, and between 0 and 100 it is a blend of the outputs of USM and deconvolution... unless Eric Chan wants to provide any further clarifications.
Title: Deconvolution sharpening revisited
Post by: Schewe on July 31, 2010, 02:43:08 AM
Quote from: ejmartin
But I would ask again, why do you want to throw dirt on deconvolution methods if you are lavishing praise on ACR 6.1?

I'm not throwing dirt on deconvolution methods other than to state that in MY experience (which is not inconsiderable) the effort does not bear the fruit that advocates seem to expound on–read that to mean, I can't seem to find a solution better than the tools I'm currently using without going through EXTREME effort.

ACR 6.1 seems pretty darn good to me, how about you?

You got any useful feedback to contribute?

What do YOU want in image sharpening?

Do you think computational solutions will solve everything?

Have you actually learned how to use ACR 6.1?

How many hours do YOU have in ACR 6.1 (the odds are I've prolly got a few more hours in ACR 6.1/6.2 than you might–and worked to improve the ACR sharpening more than most people may have).
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 31, 2010, 02:55:27 AM
Quote from: Schewe
I'm not throwing dirt on deconvolution methods other than to state that in MY experience (which is not inconsiderable) the effort does not bear the fruit that advocates seem to expound on–read that to mean, I can't seem to find a solution better than the tools I'm currently using without going through EXTREME effort.

ACR 6.1 seems pretty darn good to me, how about you?

You got any useful feedback to contribute?

What do YOU want in image sharpening?

Do you think computational solutions will solve everything?

Have you actually learned how to use ACR 6.1?

How many hours do YOU have in ACR 6.1 (the odds are I've prolly got a few more hours in ACR 6.1/6.2 than you might–and worked to improve the ACR sharpening more than most people may have).

A clumsy attempt to change the subject.  You still seem to be making an artificial distinction between deconvolution methods and ACR 6.x.


Title: Deconvolution sharpening revisited
Post by: Schewe on July 31, 2010, 03:08:43 AM
Quote from: ejmartin
A clumsy attempt to change the subject.  You still seem to be making an artificial distinction between deconvolution methods and ACR 6.x


No, I was responding to the actual results posted by Ray, which showed the 1K deconvolution results compared to ACR 6.1.

What are you responding to?

Simply the fact that I'm actually posting a response in this thread?
Title: Deconvolution sharpening revisited
Post by: joofa on July 31, 2010, 04:28:13 AM
Quote from: Ray
Well, Joofa, you obviously appear to know what you are talking about. I confess I have almost zero knowledge about the Gibb's phenomenon, but I can appreciate that it may be useful to be able to indentify and name any artifacts one may see in an image, especially if one is examining an X-ray of someone's medical condition, or indeed searching for evidence of alien life on a distant planet.

Hi Ray,

I never said anything regarding the comparison of Bart's and your images. I just mentioned that not all "ringing" artifacts are Gibbs, and in the usual deconvolution, if any ringing is found, then it may not be Gibbs, rather arising from other reasons.

A more technical note: The deconvolution problem is typically ill-posed, at least, initially. In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind. In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated. More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage. Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation. Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough. However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results. Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained. Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.
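The Gaussian/MMSE case mentioned here has a well-known closed form, the Wiener filter. A minimal sketch, assuming a constant noise-to-signal power ratio across frequencies (a simplification real images won't satisfy; the function name and default `nsr` are mine):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Wiener deconvolution: the linear MMSE estimate, equivalent
    to MAP estimation under Gaussian signal and noise priors.

    nsr is the assumed noise-to-signal power ratio, taken constant
    across frequencies here for simplicity.
    """
    H = np.fft.fft2(psf, s=blurred.shape)     # transfer function
    G = np.fft.fft2(blurred)
    # Wiener transfer function: approaches 1/H where the signal
    # dominates, and rolls off to zero where noise dominates.
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

The `+ nsr` term in the denominator is exactly the "regularization" mentioned above: it keeps the division well-behaved at frequencies where the transfer function is near zero, at the cost of not fully restoring them.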
Title: Deconvolution sharpening revisited
Post by: John R Smith on July 31, 2010, 07:11:41 AM
Quote from: joofa
A more technical note: The deconvolution problem is typically ill-posed, at least, initially. In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind. In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated. More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage. Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation. Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough. However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results. Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained. Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.

I suppose, sometimes, if I start feeling a bit pleased with myself and think that I know quite a lot about photography, it is probably good for me to be taken down a peg or two and realise that there are people in this world with whom I would actually be unable to communicate, except on the level of "Would you like a cup of tea?".

John
Title: Deconvolution sharpening revisited
Post by: ErikKaffehr on July 31, 2010, 09:50:55 AM
Hi,

My conclusion from the discussion is that:

1) It is quite possible to regain some of the sharpness lost to diffraction using deconvolution, even if the PSF (Point Spread Function) is not known. It also seems to be the case that we have deconvolution built into ACR 6.1 and LR 3.

2) Setting "Detail" to a high value and varying the radius in LR3 and ACR 6.1 is a worthwhile experiment, but we may need to gain some more experience in how these tools should be used.

My experience is that "Deconvolution" in both ACR 6.1 and LR3 amplifies noise; we need to find out how to use all the settings to best effect.

Best regards
Erik


Quote from: John R Smith
I suppose, sometimes, if I start feeling a bit pleased with myself and think that I know quite a lot about photography, it is probably good for me to be taken down a peg or two and realise that there are people in this world with whom I would actually be unable to communicate, except on the level of "Would you like a cup of tea?".

John
Title: Deconvolution sharpening revisited
Post by: deejjjaaaa on July 31, 2010, 10:01:10 AM
Quote from: ErikKaffehr
My experience is that "Deconvolution" in both ACR 6.1 and LR3 amplifies noise; we need to find out how to use all the settings to best effect.

We can just ask Mr. Schewe, can't we? With his endless hours spent on sharpening with ACR, he can just tell us and we will be all set.
Title: Deconvolution sharpening revisited
Post by: John R Smith on July 31, 2010, 10:14:01 AM
Quote from: ErikKaffehr
Hi,

My conclusion from the discussion is that:

1) It is quite possible to regain some of the sharpness lost to diffraction using deconvolution, even if the PSF (Point Spread Function) is not known. It also seems to be the case that we have deconvolution built into ACR 6.1 and LR 3.

2) Setting "Detail" to a high value and varying the radius in LR3 and ACR 6.1 is a worthwhile experiment, but we may need to gain some more experience in how these tools should be used.

My experience is that "Deconvolution" in both ACR 6.1 and LR3 amplifies noise; we need to find out how to use all the settings to best effect.

Best regards
Erik

Erik

Thank you so much for this summary which even I can understand.

John
Title: Deconvolution sharpening revisited
Post by: Ray on July 31, 2010, 10:36:10 AM
Quote from: John R Smith
I suppose, sometimes, if I start feeling a bit pleased with myself and think that I know quite a lot about photography, it is probably good for me to be taken down a peg or two and realise that there are people in this world with whom I would actually be unable to communicate, except on the level of "Would you like a cup of tea?".

John

Quote
A more technical note: The deconvolution problem is typically ill-posed, at least, initially. In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind. In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated. More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage. Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation. Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough. However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results. Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained. Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.

I sympathise with your frustration here, John, but let's not be intimidated by poor expression. Here's my translation, for what it's worth, sentence by sentence.

(1) The deconvolution problem is typically ill-posed.

means: The sharpening problem is often poorly defined. (That's easy).

(2) In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind.

means: The analog world, which is a smooth continuum, is different from the digital world with discrete steps. You need complex mathematics to deal with this problem, such as a Fredholm integral equation. (Whatever that is).

(3) In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated.

means: We're now stuck with the digital domain. There's a hangover from the analog world with incorrect definitions, but we can fix some of the problems. There's hope.

(4) More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage.

means: We can achieve a balanced result by sacrificing detail for smoothness.

(5) Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation.

means: The Richardson-Lucy method attempts to provide the best result, in terms of detail.

(6) Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough.

means: The best result may introduce noise.

(7)  However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results.

means: With a bit of experimentation we might be able to fix the noise problem.

(8) Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained.

means: Gaussian mathematics is used to get the best estimate for sharpening purposes. (Gauss was a German mathematical genius, considered to be one of the greatest mathematicians who ever lived. Far greater than Einstein, in the field of mathematics).

(9) Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.

means: You can get better results if you take more time and have more computing power.

Okay! Maybe I've missed a few nuances in my translation. No-one's perfect. Any improved translation is welcome.  


Title: Deconvolution sharpening revisited
Post by: John R Smith on July 31, 2010, 10:55:00 AM
Well, good shot at it, Ray. I'm afraid I never got any further than O-Level maths, and I only just scraped that.

Don't mind me, do carry on chaps  

John
Title: Deconvolution sharpening revisited
Post by: crames on July 31, 2010, 12:32:24 PM
I'm afraid that sharpening cannot overcome the hard limit on resolution due to diffraction.

Here are versions of some of the posted images where the high and low frequencies have been separated into layers. They show what is being sharpened: only the detail that remains below the diffraction limit. The detail above the diffraction limit is lost and is not being recovered.

Original Crop (Undiffracted) (http://sites.google.com/site/cliffpicsmisc/bart/0343_Crop_Lowpass_Hipass_Layers.psd?attredirects=0)

Diffracted Crop (http://sites.google.com/site/cliffpicsmisc/bart/0343_Crop%2BDiffraction_Lowpass_Hipass_Layers.psd?attredirects=0)

Lucy Deconvolution (http://sites.google.com/site/cliffpicsmisc/bart/0343_Crop%2BDiffraction%2BRL0-1000_Lowpass_Hipass_Layers.psd?attredirects=0)

Ray FM Sharpened (http://sites.google.com/site/cliffpicsmisc/bart/0343_Crop_Diffraction_Ray_FM_Lowpass_Hipass_Layers.psd?attredirects=0)

The Lowpass layers include all of the detail that is enclosed within the central diffraction limit "oval" seen in the spectra I posted before. The Hipass layers include everything else outside of the central diffraction oval.

The following is a comparison of the Lowpass layers. This is where the sharpening is taking place, and it amounts only to approaching the quality of the Lowpass of the Original Crop.

(http://sites.google.com/site/cliffpicsmisc/bart/Orig_Diffract_Lucy_FM_Lowpass.png)

Look at the Original Crop Hipass layer. This shows all the fine detail that the eye is craving, but which hasn't come back with any of the sharpening attempts. For fun, paste a copy of the original Hipass layer in overlay mode onto any of the sharpened versions. Or double the Hipass layer for a super-sharp effect.

Since diffraction pretty-much wipes out the detail outside of the diffraction limit, deconvolution sharpening is generally limited to massaging whatever is left within the central cutoff.

From what I've read, detail beyond the diffraction cutoff has to be extrapolated (the "Gerchberg method", for one), or otherwise estimated from the lower frequency information. The methods are generally called "super-resolution". The Lucy method, due to a non-linear step in the processing, is supposed to have an extrapolating effect, but I'm not sure if it's visible here.
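To make the idea concrete, here is a toy 1-D sketch of Gerchberg-style extrapolation (my own construction, not taken from any paper; the signal, support region, and cutoff are all made up). The trick is alternating between two pieces of knowledge: the measured low-frequency band, and the fact that the object has finite support.

```python
import numpy as np

N = 256
n = np.arange(N)
support = (n > 96) & (n < 160)           # the object is known to be zero outside this region
true = np.zeros(N)
true[support] = np.sin(2 * np.pi * 0.3 * n[support])   # detail well above the cutoff

cutoff = 40                              # "diffraction" passband half-width, in FFT bins
band = np.zeros(N, dtype=bool)
band[:cutoff] = True
band[-cutoff:] = True
measured = np.fft.fft(true) * band       # only the low-frequency band survives

# Alternate between the two constraints: keep the measured band,
# enforce the known support. Out-of-band frequencies get extrapolated.
estimate = np.fft.ifft(measured).real
for _ in range(200):
    spectrum = np.fft.fft(estimate)
    spectrum[band] = measured[band]      # re-impose the measured band
    estimate = np.fft.ifft(spectrum).real
    estimate[~support] = 0.0             # re-impose the known support
```

With noiseless data the error to the true signal shrinks every iteration; with realistic noise the extrapolation stalls almost immediately.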

Cliff
Title: Deconvolution sharpening revisited
Post by: joofa on July 31, 2010, 12:45:50 PM
Quote from: joofa
However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation,

Sorry, I made a typo. The above means maximum a posteriori estimation, not a priori estimation.

Hi Ray, you did an interesting translation  

Quote from: crames
Since diffraction pretty-much wipes out the detail outside of the diffraction limit, deconvolution sharpening is generally limited to massaging whatever is left within the central cutoff.

From what I've read, detail beyond the diffraction cutoff has to be extrapolated ("Gerchberg method", for one), or otherwise estimated from the lower frequency information. The methods are generally called "super-resolution". The Lucy method, due to a non-linear step in the processing is supposed to have an extrapolating effect, but I'm not sure if it's visible here.

Yes, the Gerchberg technique is effective in theory (because a bandlimited signal is analytic and hence extrapolatable), but in practice noise limitations stop such solutions from becoming very effective.
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 31, 2010, 01:16:03 PM
Hi Cliff,

A rather illuminating way of looking at things.  

I don't think one is expecting miracles here, like undoing the Rayleigh limit; zero MTF is going to remain zero.  But nonzero microcontrast can be boosted back up to close to pre-diffraction levels, and the deconvolution methods seem to be doing that rather well.

I am wondering whether a good denoiser (perhaps Topaz, which seems to use nlmeans methods) can help squelch some of the noise amplified by deconvolution without losing recovered detail such as the venetian blinds.
Title: Deconvolution sharpening revisited
Post by: joofa on July 31, 2010, 02:18:54 PM
Quote from: ejmartin
zero MTF is going to remain zero.

Not in theory. For bounded functions the Fourier transform is an analytic function, which means that if it is known for a certain range then techniques such as analytic continuation (http://en.wikipedia.org/wiki/Analytic_continuation) can be used to extend the solution to the whole frequency range.  However, as I indicated earlier, such resolution-boosting techniques have difficulty in practice due to noise. It has been estimated in a particular case that to succeed in analytic continuation an SNR (amplitude ratio) of 1000 is required.

Quote from: ejmartin
I don't think one is expecting miracles here, like undoing the Rayleigh limit

IIRC, even in the presence of noise, it has been claimed that a twofold to fourfold improvement of the Rayleigh resolution in the restored image over that in the acquired image may be achieved, if the transfer function of the imaging system is known sufficiently accurately and resolution-boosting analytic continuation techniques are used.
Title: Deconvolution sharpening revisited
Post by: eronald on July 31, 2010, 03:34:48 PM
Ray,

Unfortunately, from about (5) my feeling is that your jargon-reduction algorithm is oversmoothing and losing semantic detail.
But then, what do I know?

Edmund

Quote from: Ray
I sympathise with your frustration here, John, but let's not be intimidated by poor expression. Here's my translation, for what it's worth, sentence by sentence.

(1) The deconvolution problem is typically ill-posed.

means: The sharpening problem is often poorly defined. (That's easy).

(2) In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind.

means: The analog world, which is a smooth continuum, is different from the digital world with discrete steps. You need complex mathematics to deal with this problem, such as a Fredholm integral equation. (Whatever that is).

(3) In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated.

means: We're now stuck with the digital domain. There's a hangover from the analog world with incorrect definitions, but we can fix some of the problems. There's hope.

(4) More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage.

means: We can achieve a balanced result by sacrificing detail for smoothness.

(5) Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation.

means: The Richardson-Lucy method attempts to provide the best result, in terms of detail.

(6) Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough.

means: The best result may introduce noise.

(7)  However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to max a priori (MAP) estimation, which might provide more acceptable results.

means: With a bit of experimentation we might be able to fix the noise problem.

(8) Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained.

means: Gaussian mathematics is used to get the best estimate for sharpening purposes. (Gauss was a German mathematical genius, considered one of the greatest mathematicians who ever lived. Far greater than Einstein, in the field of mathematics.)

(9) Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.

means: You can get better results if you take more time and have more computing power.

Okay! Maybe I've missed a few nuances in my translation. No-one's perfect. Any improved translation is welcome.  
Title: Deconvolution sharpening revisited
Post by: eronald on July 31, 2010, 03:37:14 PM
Quote from: joofa
Not in theory. For bounded functions the Fourier transform is an analytic function, which means that if it is known for a certain range then techniques such as analytic continuation (http://en.wikipedia.org/wiki/Analytic_continuation) can be used to extend the solution to the whole frequency range.  However, as I indicated earlier, such resolution-boosting techniques have difficulty in practice due to noise. It has been estimated in a particular case that to succeed in analytic continuation an SNR (amplitude ratio) of 1000 is required.



IIRC, even in the presence of noise, it has been claimed that a twofold to fourfold improvement of the Rayleigh resolution in the restored image over that in the acquired image may be achieved, if the transfer function of the imaging system is known sufficiently accurately and resolution-boosting analytic continuation techniques are used.

It has been claimed ... references please??

Edmund
Title: Deconvolution sharpening revisited
Post by: joofa on July 31, 2010, 03:58:17 PM
Quote from: eronald
It has been claimed ... references please??

Edmund

If memory serves right then, among others, check out the following:

http://www.springerlink.com/content/f4620747648x043l/ (http://www.springerlink.com/content/f4620747648x043l/)
Title: Deconvolution sharpening revisited
Post by: crames on July 31, 2010, 04:17:32 PM
Quote from: ejmartin
I don't think one is expecting miracles here, like undoing the Rayleigh limit; zero MTF is going to remain zero.  But nonzero microcontrast can be boosted back up to close to pre-diffraction levels, and the deconvolution methods seem to be doing that rather well.

I am wondering whether a good denoiser (perhaps Topaz, which seems to use nlmeans methods) can help squelch some of the noise amplified by deconvolution without losing recovered detail such as the venetian blinds.

Hi Emil,

No, I agree, deconvolution sharpening is certainly useful, since most images don't have f/32 diffraction, and there is real detail that can be restored.

The RL sharpening of RawTherapee does a very good job, with just a gaussian kernel. I wonder if knowing the exact PSF for deconvolution could be any better? Somehow I doubt it (except in the case of motion blur, which has PSFs like jagged snakes.)

Simple techniques that boost high frequencies can also do the job, exposing detail as long as the detail is there in the first place.

A simple way that I use to sharpen is a variation on high-pass sharpening. Instead of the High Pass filter, I convolve with an inverted "Laplacian" kernel in PS Custom Filter. I think it reduces haloing:

0 -1 -2 -1 0
0 -2 12 -2 0
0 -1 -2 -1 0

Scale: 4 Offset: 128

This filter has a response that slopes up from zero, roughly the opposite of the slope of a lens MTF. (The strength can be varied by changing Scale.)

I copy the image to a new layer, change mode to Overlay (or Hard, etc.), then run the above filter on the layer copy. Noise can be controlled by applying a little Surface Blur on the filtered layer. With a little tweaking the results can approach FM and be even less noisy.
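For anyone who wants to play with this outside Photoshop, here's a rough Python sketch of the same recipe. Assumptions: an 8-bit grayscale array as input, the Overlay math is my approximation of PS's blend formula, and I've left out the optional Surface Blur step.

```python
import numpy as np
from scipy.signal import convolve2d

# The inverted-"Laplacian" kernel from the post (Scale 4, Offset 128).
kernel = np.array([[0, -1, -2, -1, 0],
                   [0, -2, 12, -2, 0],
                   [0, -1, -2, -1, 0]], dtype=float)

def custom_filter(img, scale=4.0, offset=128.0):
    """Emulate PS Custom Filter: convolve, divide by Scale, add Offset."""
    out = convolve2d(img, kernel, mode="same", boundary="symm") / scale + offset
    return np.clip(out, 0, 255)

def overlay(base, blend):
    """Approximate Photoshop Overlay blend on 0..255 values."""
    b, s = base / 255.0, blend / 255.0
    low = 2 * b * s
    high = 1 - 2 * (1 - b) * (1 - s)
    return 255 * np.where(b < 0.5, low, high)

def sharpen(img):
    """Layer copy -> Custom Filter -> Overlay onto the original."""
    return np.clip(overlay(img, custom_filter(img)), 0, 255)
```

In flat areas the filtered layer sits at 128, which is neutral in Overlay mode, so only edges and texture get pushed.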

Although this usually works pretty well, it didn't on Bart's f/32 diffraction image, hence the little investigation...

Cliff

Title: Deconvolution sharpening revisited
Post by: ejmartin on July 31, 2010, 04:35:25 PM
Quote from: crames
The RL sharpening of RawTherapee does a very good job, with just a gaussian kernel. I wonder if knowing the exact PSF for deconvolution could be any better? Somehow I doubt it (except in the case of motion blur, which has PSFs like jagged snakes.)

Thanks for the PS tip.  

Well, for lens blur I imagine it could be a bit better to use something more along the lines of a rounded-off 'top hat' filter (perhaps more of a bowler  ) rather than a Gaussian, since that more accurately approximates the structure of OOF specular highlights, which in turn ought to reflect (no pun intended) the PSF of lens blur.  Another thing that RT lacks is any kind of adaptivity in its RL deconvolution; that could mitigate some of the noise amplification if done properly.  The question is whether that would add significantly to the processing time.  It's on my list of things to look into.
Title: Deconvolution sharpening revisited
Post by: eronald on July 31, 2010, 05:56:49 PM
Quote from: ejmartin
Thanks for the PS tip.  

Well, for lens blur I imagine it could be a bit better to use something more along the lines of a rounded-off 'top hat' filter (perhaps more of a bowler  ) rather than a Gaussian, since that more accurately approximates the structure of OOF specular highlights, which in turn ought to reflect (no pun intended) the PSF of lens blur.  Another thing that RT lacks is any kind of adaptivity in its RL deconvolution.  The question is whether that would add significantly to the processing time.  It's on my list of things to look into.

This just went up on Slashdot
http://research.microsoft.com/en-us/um/red.../imudeblurring/ (http://research.microsoft.com/en-us/um/redmond/groups/ivm/imudeblurring/)

It looks like the Hasselblad gyro hardware should be able to write this type of info in the future.

Edmund
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 31, 2010, 06:20:18 PM
Quote from: joofa
Not in theory. For bounded functions the Fourier transform is an analytic function, which means that if it is known for a certain range then techniques such as analytic continuation (http://en.wikipedia.org/wiki/Analytic_continuation) can be used to extend the solution to the whole frequency range.  However, as I indicated earlier, such resolution-boosting techniques have difficulty in practice due to noise. It has been estimated in a particular case that to succeed in analytic continuation an SNR (amplitude ratio) of 1000 is required.


IIRC, even in the presence of noise, it has been claimed that a twofold to fourfold improvement of the Rayleigh resolution in the restored image over that in the acquired image may be achieved, if the transfer function of the imaging system is known sufficiently accurately and resolution-boosting analytic continuation techniques are used.

I would be surprised if any method can do more than guess at obliterated detail (data in the original beyond the Rayleigh limit).  The problem is much akin to upsampling an image; in both cases there is a hard cutoff on frequency content somewhat below Nyquist (in the case of upsampling, I mean the Nyquist of the target resolution).  Yes there are methods for the upsampling such as the algorithm in Genuine Fractals, but they amount to pleasing extrapolations of the image rather than genuine restored detail.  That's not to say the result is not pleasing, and perhaps analytic continuation for super-resolution yields a pleasing result; in fact it sounds a bit similar to the use of fractal scaling to extrapolate image content to higher frequency bands.
Title: Deconvolution sharpening revisited
Post by: crames on July 31, 2010, 07:50:23 PM
Quote from: eronald
This just went up on Slashdot
http://research.microsoft.com/en-us/um/red.../imudeblurring/ (http://research.microsoft.com/en-us/um/redmond/groups/ivm/imudeblurring/)

It looks like the Hasselblad gyro hardware should be able to write this type of info in the future.

Edmund
Here's another one from Microsoft Research:

Detail Recovery for Single-image Defocus Blur (http://research.microsoft.com/en-us/um/people/stevelin/papers/cva09.pdf)
Title: Deconvolution sharpening revisited
Post by: crames on July 31, 2010, 08:09:17 PM
Quote from: ejmartin
Well, for lens blur I imagine it could be a bit better to use something more along the lines of a rounded off 'top hat' filter (perhaps more of a bowler  ) rather than a Gaussian, since that more accurately approximates the structure of OOF specular highlights which in turn ought to reflect (no pun intended) the PSF of lens blur.  Another thing that RT lacks is any kind of adaptivity to its RL deconvolution; that could mitigate some of the noise amplification if done properly.  The question is whether that would add significantly to the processing time.  Its on my list of things to look into.

With the chain of blur-upon-blur in images, doesn't it get complicated? Diffraction blur, defocus blur, lens aberrations, motion blur,  AA filter blur... most of it changing from point to point in the frame. (I think it was mentioned that multiple blurs tend to become Gaussian?)

Maybe targeting AA filter blur would give a lot of bang for the buck? (Not much help for digital back users, though.)
Title: Deconvolution sharpening revisited
Post by: ejmartin on July 31, 2010, 08:25:49 PM
Is there anywhere one can find the typical PSF or spectral power distribution of the typical AA filter?
Title: Deconvolution sharpening revisited
Post by: Ray on July 31, 2010, 09:50:35 PM
Quote from: eronald
Ray,

Unfortunately, from about (5) my feeling is that your jargon-reduction algorithm is oversmoothing and losing semantic detail.
But then, what do I know?

Edmund

Damn! Have I revealed I'm out of my depth?  
Title: Deconvolution sharpening revisited
Post by: crames on July 31, 2010, 10:08:59 PM
Quote from: ejmartin
Is there anywhere one can find the typical PSF or spectral power distribution of the typical AA filter?

I remember seeing an article that showed one. Instead of four little dots in a neat square like I imagined, it was more like a dozen dots in a messy diamond pattern. I'll try to find it...
Title: Deconvolution sharpening revisited
Post by: joofa on July 31, 2010, 11:22:51 PM
Quote from: ejmartin
I would be surprised if any method can do more than guess at obliterated detail (data in the original beyond the Rayleigh limit).  The problem is much akin to upsampling an image; in both cases there is a hard cutoff on frequency content somewhat below Nyquist (in the case of upsampling, I mean the Nyquist of the target resolution).  Yes there are methods for the upsampling such as the algorithm in Genuine Fractals, but they amount to pleasing extrapolations of the image rather than genuine restored detail.  That's not to say the result is not pleasing, and perhaps analytic continuation for super-resolution yields a pleasing result; in fact it sounds a bit similar to the use of fractal scaling to extrapolate image content to higher frequency bands.

Not sure the problem is akin to upsampling, as upsampling does not create new information, whereas analytic continuation does.
Title: Deconvolution sharpening revisited
Post by: eronald on July 31, 2010, 11:40:04 PM
Quote from: joofa
Not sure the problem is akin to upsampling, as upsampling does not create new information, whereas analytic continuation does.

I'm getting increasingly dubious here about the ability of any method to "create" information if assumptions about the missing pieces are not made. Fractal upsampling software, for instance, is tuned to assume that certain objects are "clean" boundary lines and curves - it will thus "recreate" perfect typography in box-shots. In this sense, if assumptions about the origins of the image data are made, e.g. by means of a texture vocabulary, then a method tuned for these assumptions will do well "creating" image data when provided with such images, and presumably fail when the hypotheses are not met. Which also means that we need to define which measure we use to distinguish a good result from a bad one, and I respectfully suggest that photoreconnaissance, astronomy and beauty photography have different metrics.


Edmund
Title: Deconvolution sharpening revisited
Post by: ejmartin on August 01, 2010, 12:43:13 AM
Quote from: joofa
Not sure the problem is akin to upsampling, as upsampling does not create new information, whereas analytic continuation does.

I don't see what the difference is between a spectral density that is zero beyond the inverse Airy disk radius, and a spectral density that is zero beyond Nyquist.  If you are going to extend the spectral density to higher frequencies, in effect that information is being invented.  This is different from straight deconvolution, where the function being recovered has been multiplied by some nonzero function, and one recovers the original function by dividing out by the (nonzero) FT of the PSF.  To generate information where the spectral density is initially zero, one has to invent a rule for doing so, and the issue then is how closely that rule hews to the properties of some family of 'natural' images.
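To illustrate the distinction with a toy 1-D example (all the numbers here are made up): where the FT of the PSF is nonzero, straight division recovers the original exactly (absent noise); where it is exactly zero, nothing comes back without inventing a rule.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)

# A 4-tap box blur: its transfer function has exact zeros at bins 64, 128, 192.
psf = np.zeros(256)
psf[:4] = 0.25
H = np.fft.fft(psf)
blurred = np.fft.ifft(np.fft.fft(signal) * H).real   # circular convolution

# Divide out the PSF wherever its transfer function is nonzero;
# the genuinely zeroed frequencies stay at zero - that data is gone.
B = np.fft.fft(blurred)
R = np.zeros_like(B)
nonzero = np.abs(H) > 1e-6
R[nonzero] = B[nonzero] / H[nonzero]
restored = np.fft.ifft(R).real
```

Everything except the three exactly-zeroed bins is recovered; those bins would need a super-resolution rule, not deconvolution.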
Title: Deconvolution sharpening revisited
Post by: joofa on August 01, 2010, 02:57:51 AM
Quote from: ejmartin
I don't see what the difference is between a spectral density that is zero beyond the inverse Airy disk radius, and a spectral density that is zero beyond Nyquist.  If you are going to extend the spectral density to higher frequencies, in effect that information is being invented.

Perhaps you mean oversampling and not upsampling.

Quote from: ejmartin
This is different from straight deconvolution, where the function being recovered has been multiplied by some nonzero function, and one recovers the original function by dividing out by the (nonzero) FT of the PSF.

The intent is to use deconvolution to recover the spectrum in the passband of the imaging system and then use analytic continuation to extend it out to those frequencies where it was zero before.
Title: Deconvolution sharpening revisited
Post by: ejmartin on August 01, 2010, 07:45:07 AM
Quote from: joofa
The intent is to use deconvolution to recover the spectrum in the passband of the imaging system and then use analytic continuation to extend it out to those frequencies where it was zero before.

Analytic continuation of what?  We're talking about discrete data...  so at best we're talking about some assumption about a smooth analytic function that interpolates the discrete data in a region you like (low frequencies) and extrapolates into a region you don't like with the existing data (high frequencies).

Also, analytic continuation is simply one of many possible assumptions about how to extend the data; the issue is whether it or another invents new data that is visually pleasing.

Anyway, I've made my point and I don't want this to hijack the thread.
Title: Deconvolution sharpening revisited
Post by: joofa on August 01, 2010, 10:44:11 AM
Quote from: ejmartin
Analytic continuation of what?  We're talking about discrete data...  so at best we're talking about some assumption about a smooth analytic function that interpolates the discrete data in a region you like (low frequencies) and extrapolates into a region you don't like with the existing data (high frequencies).

Also, analytic continuation is simply one of many possible assumptions about how to extend the data; the issue is whether it or another invents new data that is visually pleasing.

This line of reasoning started because you said that "zero MTF is going to remain zero", and I pointed out that that is not true in theory, and that even in practice, in the presence of noise, some gains might be achieved (though not as good as the theory says). It appears now you are saying that it is one of the ways of "extrapolating/inventing" the data, thereby negating the position that zero MTF would stay zero.

Quote from: ejmartin
Anyway, I've made my point and I don't want this to hijack the thread.

Thanks for the discussion. Let's keep this thread moving.
Title: Deconvolution sharpening revisited
Post by: madmanchan on August 01, 2010, 12:11:59 PM
deja, yes, the Detail slider in CR 6 & LR 3 is a blend of sharpening/deblur methods and if you want the deconv-based method then you crank up the Detail slider (even up to 100 if you want the pure deconv-based method). I do this for most landscapes and images with a lot of texture (rocks, bark, twigs, etc.) and I find it's not bad for that.

Erik, yes it will indeed amplify noise, which does become a little tricky (but not impossible) to differentiate from texture. I have some basic ideas on how to improve this, but for now the best way to treat this is to increase the Luminance slider and apply a bit of Masking (remember you can use the Option/Alt key with the Masking slider to get a visualization of which areas of the image are being masked off). Furthermore, if there are big areas of the image that you simply don't want to sharpen then you can paint those out with a local adjustment brush and a minus Sharpness value.

Bill, unfortunately I can't go into the PSF and other details of the CR 6 / LR 3 sharpen method. Sorry.
Title: Deconvolution sharpening revisited
Post by: crames on August 01, 2010, 12:48:28 PM
Quote from: crames
I remember seeing an article that showed one. Instead of four little dots in a neat square like I imagined, it was more like a dozen dots in a messy diamond pattern. I'll try to find it...

Sorry, I'm coming up empty-handed on the messy one. Surprisingly hard to find a measured AA filter MTF anywhere.

Here's a spec sheet of one made by Epson that shows the usual four dots in a square. Epson Toyocom (http://ndap3-net.ebz.epson.co.jp/w/www/PDFS/epdoc_qd.nsf/a185159d89026eb04925707400231998/31cfff63fb6d01c449257410000e9fcc/$FILE/OLPF_4point_E08X.pdf)

Edited 8/2/2010-

Found this:
[attachment=23440:OLPF_PSF_MTF.png]

from Optical Transfer Function of the Optical Low-Pass Filter (http://www.photon.ac.cn/EN/abstract/abstract12698.shtml)

Looks like 4 dots in a square for "double plate", 8 dots for "triple plate". No idea which kind is in our cameras.
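For the "double plate" case, one axis is easy to model: the birefringent plate splits each ray into two dots separated by a distance d, so the 1-D MTF is |cos(pi f d)|, with its first null at f = 1/(2d). A quick sketch (d = 1 pixel is my assumption here, roughly the usual design point, not a measured value):

```python
import numpy as np

d = 1.0                                  # dot separation in pixels (assumed)
f = np.linspace(0.0, 1.0, 201)           # spatial frequency, cycles/pixel

# PSF = two impulses at 0 and d; its transfer function is
# (1 + exp(-2*pi*i*f*d)) / 2, whose magnitude is |cos(pi*f*d)|.
mtf = np.abs(np.cos(np.pi * f * d))
null_freq = f[np.argmin(mtf)]            # first null at 1/(2d) = Nyquist for d = 1
```

So a one-pixel dot spacing puts the null right at Nyquist, which is presumably why sharpening back up after the OLPF works as well as it does below the null.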
Title: Deconvolution sharpening revisited
Post by: bjanes on August 02, 2010, 11:18:07 AM
Quote from: crames
I remember seeing an article that showed one. Instead of four little dots in a neat square like I imagined, it was more like a dozen dots in a messy diamond pattern. I'll try to find it...
Zeiss MTF Curven (http://www.zeiss.de/C12567A8003B8B6F/EmbedTitelIntern/CLN_30_MTF_en/$File/CLN_MTF_Kurven_EN.pdf) shows the PSF of a low pass filter.

Regards,

Bill
Title: Deconvolution sharpening revisited
Post by: crames on August 02, 2010, 11:34:28 AM
Quote from: bjanes
Zeiss MTF Curven (http://www.zeiss.de/C12567A8003B8B6F/EmbedTitelIntern/CLN_30_MTF_en/$File/CLN_MTF_Kurven_EN.pdf) shows the PSF of a low pass filter.

Regards,

Bill

Yes, Nr. 8 on page 4 - let's see if we can deconvolve that!

Rgds,
Title: Deconvolution sharpening revisited
Post by: BartvanderWolf on August 02, 2010, 07:17:24 PM
Quote from: crames
Yes, Nr. 8 on page 4 - let's see if we can deconvolve that!

Hi Cliff,

That pattern will differ for each AA-filter and camera(type) combination. Some of the variables are, the thickness(es) of the crossed filter layers, their (individual and combined) orientation/rotation, and the distance to the microlenses/sensels.

(Un)fortunately, in practice, the PSF of a lens (residual aberrations + diffraction, assuming perfect focus and no camera or subject motion) plus an optical low-pass filter (OLPF) and the sensel mask and spacing will resemble a modified Gaussian rather than just the OLPF's PSF. As with many natural sources of noise, when several are combined, a (somewhat) modified Gaussian approximation can be made.

I have analyzed the PSF of the full optical system (different lenses + OLPF at various apertures + aperture mask of the sensels) of e.g. my 1Ds3 (and the 1Ds2 and 20D before that), and the effect a Raw converter has on the captured data, and have found that a certain combination of multiple Gaussians does a reasonably good job of characterizing the system PSF. The complicating factor is that it thus requires prior knowledge to effectively counteract the effects.

Other complicating factors are defocus and camera shake (let alone subject motion).

The practical solution is to employ either a quasi-intelligent PSF determination based on the image at hand (or a test image under more controlled circumstances), or a flexible interactive interface system (some intelligent choices can be made to simplify things for the average user) that allows user interaction (human vision is e.g. quite good at comparing before/after images, especially when super-imposed).

There is a lot of ground to cover before simple tools are available, but threads like these serve to at least increase awareness.

Cheers,
Bart
Title: Deconvolution sharpening revisited
Post by: joofa on August 02, 2010, 07:35:19 PM
Quote from: BartvanderWolf
(Un)fortunately, in practice, the PSF of a lens (residual aberrations+diffraction, assuming perfect focus and no camera or subject motion) plus an optical low-pass filter (OLPF) and the sensel mask and spacing will resemble a modified Gaussian rather than just the OLPF's PSF. As with many natural sources of noise, when several are combined then a (somewhat) modified Gaussian approximation can be made.

I have analyzed the PSF of the full optical system (different lenses + OLPF at various apertures + aperture mask of the sensels) of e.g. my 1Ds3 (and the 1Ds2 and 20D before that), and the effect a Raw converter has on the captured data, and have found that a certain combination of multiple Gaussians does a reasonably good job of characterizing the system MTF.

Hi Bart,

One doesn't necessarily need to rely on the "combination effect" to get closer to a Gaussian function. Any reasonable (finite-energy) function can be represented by a linear combination of Gaussians (much like a Fourier expansion). So, for example, even if you isolate a system component (OLPF, etc.) and its response does not look Gaussian, it can still be expanded into a sum of a number of Gaussians.
Title: Deconvolution sharpening revisited
Post by: crames on August 02, 2010, 08:36:09 PM
Quote from: BartvanderWolf
...
There is a lot of ground to cover before simple tools are available, but threads like these serve to at least increase awareness.

I agree on all your points.

The most impressive results I've seen are the "sparse prior" deconvolution by Levin et al (http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBIQFjAA&url=http%3A%2F%2Fgroups.csail.mit.edu%2Fgraphics%2FCodedAperture%2FSparseDeconv-LevinEtAl07.pdf&ei=P2BXTP2nF4H-8Aa50Y2nBA&usg=AFQjCNEeQ2f8ebNQCE5Q33bpvzXedTs9jw&sig2=75Nqr_RWsxs2WNHPtPHoLA). Take a look at page 29 of this. (http://www.cs.unc.edu/~lazebnik/research/fall08/lec05_deblurring.pdf)   A group at Microsoft (http://research.microsoft.com/en-us/um/redmond/groups/ivm/twocolordeconvolution/supplemental_results/deblurring_synthetic.html) seems to have taken it further.

I think these are the kinds of results we are all looking for. When will they ever be available in a product we can use?
Title: Deconvolution sharpening revisited
Post by: ejmartin on August 02, 2010, 09:30:42 PM
As usual, there is a continuum between "most accurate, system and situation specific" and "good enough most of the time, easy to use and generic".   There is the DxO approach that tailors everything to a specific combination of body, lens, focal length, etc., but then somebody has to accumulate all that data.  I guess I'm in the second camp -- I would like to know whether a Gaussian PSF or something else is "good enough" for most bodies, lenses, and exposure parameters, most of the time.  My hunch is that a single Gaussian is not optimal, and I am wondering whether there is a single PSF, or maybe a one-parameter family of PSFs, that is good enough for the majority of situations one encounters in practice, given the practical limitations on what can be recovered through deconvolution.
Title: Deconvolution sharpening revisited
Post by: Daniel Browning on August 04, 2010, 07:37:49 PM
Quote from: ejmartin
Is there anywhere one can find the typical PSF or spectral power distribution of the typical AA filter?

Not that I know of. The most I've seen is a Pretty Picture on page four of "How to Read MTF Curves" (http://www.zeiss.com/C12567A8003B8B6F/EmbedTitelIntern/CLN_30_MTF_en/$File/CLN_MTF_Kurven_EN.pdf), by H. H. Nasse.

EDIT: Beat to the punch by Bill (two and a half days, no less).
Title: RL deconvolution using Photoshop
Post by: crames on August 15, 2010, 10:12:45 AM
To gain some insight into how it works, here is a recipe for RL deconvolution using Photoshop commands:

1. Duplicate the ORIGINAL blurry image, call it "COPY1"

2. Duplicate COPY1, call it "COPY2"

3. Blur COPY2 with the PSF. For a gaussian, use Gaussian Blur. Other PSFs can be defined with the Custom Filter.

4. Divide the ORIGINAL blurry image by COPY2, with the result in COPY2.

5. Blur COPY2 with the PSF (as in step 3).

6. Multiply COPY1 by COPY2, with the result in COPY1. (Apply Image with Blending Mode: Multiply)

7. Go to step #2 and repeat for the number of iterations you want. Each iteration gets a little sharper. The final result is in COPY1.

Note that there is a little snag - step 4 requires dividing one image by another. As far as I know, only CS5 has the Divide Blend Mode (http://blogs.adobe.com/jnack/2010/05/video_new_blending_modes_in_photoshop_cs5.html), as does Gimp.

I don't have CS5, so I used a plugin to do the division in step 4. This is how COPY2 looks after that division step:

(https://sites.google.com/site/cliffpicsmisc/bart/COPY2small.jpg)

Full size here: https://sites.google.com/site/cliffpicsmisc/bart/RL_copy2_full.png

You can see that this is where the high frequencies are getting boosted, then applied as a mask in step 6 to the previous iteration.
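The seven steps of the recipe translate almost line for line into NumPy. Below is a rough sketch of that translation (helper names are my own; a plain edge-padded convolution stands in for the blur of steps 3 and 5, and everything stays in floating point so integer roundoff is not an issue):

```python
import numpy as np

def conv2(img, kernel):
    """Convolve with edge padding (stands in for the blur in steps 3 and 5)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def richardson_lucy(blurry, psf, iterations=25, eps=1e-12):
    """Steps 1-7 of the recipe. The textbook algorithm blurs the ratio with
    the mirrored PSF; for a symmetric PSF that equals the PSF itself."""
    copy1 = blurry.astype(np.float64)               # step 1
    for _ in range(iterations):
        copy2 = conv2(copy1, psf)                   # steps 2-3: blur current estimate
        ratio = blurry / (copy2 + eps)              # step 4: divide original by the blur
        correction = conv2(ratio, psf[::-1, ::-1])  # step 5: blur the ratio
        copy1 *= correction                         # step 6: multiply into the estimate
    return copy1                                    # step 7: result after N iterations

# Demo: blur a synthetic test image with a known PSF, then partially restore it.
psf = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
truth = np.full((32, 32), 10.0)
truth[8:24, 8:24] = 200.0                # bright square on a dark field
blurry = conv2(truth, psf)
restored = richardson_lucy(blurry, psf)
err_blurry = np.abs(blurry - truth).mean()
err_restored = np.abs(restored - truth).mean()
```

On this noiseless example the restored error drops well below the blurred error; on real images noise amplification limits the usable iteration count, which is where damping comes in.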

Title: Re: Deconvolution sharpening revisited
Post by: ejmartin on August 15, 2010, 10:59:58 AM
Interesting.  I'd be curious to see how it does relative to RL in, say, RawTherapee -- applied to the same tiff image (RT takes tiffs as inputs).  The thing that I would worry about with doing the deconvolution in photoshop is roundoff errors, since all one has access to in PS is 16-bit integer math unless you jump through hoops with the HDR format.  How are you doing the division step in integer math?
Title: Re: Deconvolution sharpening revisited
Post by: crames on August 15, 2010, 12:14:59 PM
Quote
Interesting.  I'd be curious to see how it does relative to RL in, say, RawTherapee -- applied to the same tiff image (RT takes tiffs as inputs).  The thing that I would worry about with doing the deconvolution in photoshop is roundoff errors, since all one has access to in PS is 16-bit integer math unless you jump through hoops with the HDR format.  How are you doing the division step in integer math?

Roundoff errors could be a problem after more than a few iterations. I've only tried 4 iterations, manually. Maybe someone with CS5 could try it, in an action, and let it run for a while. I don't think that switching to 32bit mode would change any of the steps, so no jumping through hoops?

For division, I used a plugin that makes the reciprocal (1/x), then multiplied by that in step 4.

I guess another way to do that (multiply by the reciprocal, instead of divide) would be to make a look-up table in Curves for the reciprocal. I'll try that later and post it if it works.
Title: Re: Deconvolution sharpening revisited
Post by: ejmartin on August 15, 2010, 01:03:54 PM
OK, so your plugin presumably takes 1/x where x is an integer from 1 to 2^15-1=32767 (Photoshop internal format is signed integer IIRC), then multiplies back up by 32767 to restore the range, and truncates to integers.  I was wondering what it meant to divide two images, since the range of values could be anywhere from 1/32767 to 32767 (ignoring cases where one is dividing or multiplying by zero).  For two nearly similar images such as the ratio of an image and its low-pass filter, most of the values would be near one, which doesn't truncate nicely in integer math; I was wondering how that would be dealt with.  It looks like in your version the nearly equal values upon division are being sent to a color value 203 (on my non-calibrated laptop).
Title: Re: Deconvolution sharpening revisited
Post by: crames on August 15, 2010, 02:07:52 PM
Quote
For two nearly similar images such as the ratio of an image and its low-pass filter, most of the values would be near one, which doesn't truncate nicely in integer math; I was wondering how that would be dealt with.  It looks like in your version the nearly equal values upon division are being sent to a color value 203 (on my non-calibrated laptop).

After the division step the result looks black. The image mean is 3220/32768 (or 25 in 8 bit) in sRGB. I normalized it with the Exposure tool @ +7 before going to the next steps.

I suppose this should all be done in a linear space, too, instead of sRGB, but it still seems to work.
Title: Re: Deconvolution sharpening revisited
Post by: DeanSonneborn on August 17, 2010, 12:19:52 AM
 While in the Lightroom forum someone mentioned this:

As per Eric Chan:
Quote from: madmanchan
... CR 5.7 uses the same new method as LR 3.0 (so that LR 3 users can use Edit-In-PS with CS4 and get the same results).

So since LR3 does deconvolution sharpening and CR 5.7 uses the same new method(s)...does that mean that CR 5.7 will perform deconvolution sharpening when the detail slider is moved all the way to the right? I tried to compare it with CS4 smart sharpening via lens blur settings and they do seem to have similar effects, but I'm just not sure. Does anyone know if CR 5.7 is actually doing deconvolution sharpening?
Title: Re: Deconvolution sharpening revisited
Post by: joofa on August 17, 2010, 01:53:01 AM
For two nearly similar images such as the ratio of an image and its low-pass filter, most of the values would be near one which doesn't truncate nicely in integer math; I was wondering how that would be dealt with.

Don't know if that is applicable here, but a complicated floating point calculation can be approximated in fixed point when direct integer math is not favorable.

For example, I randomly took 15/16*23/7*11/9*41/45*73/87*101/127 = 2.29 in floating point, and using just 2 extra bits I got the answer as 2.

I think crames is kind of simulating that when he mentions "I normalized it with the Exposure tool @ +7 before going to the next steps."

Joofa
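Joofa's example can be checked with integer-only arithmetic. The sketch below keeps two fractional bits (a scale factor of 4) and rounds to nearest after each multiply; this is just one of several possible fixed-point schemes, and the exact result depends on the rounding rule chosen:

```python
# Fixed-point evaluation of 15/16 * 23/7 * 11/9 * 41/45 * 73/87 * 101/127
# using only integers, with 2 fractional bits of extra precision.
FRACTION_BITS = 2
SCALE = 1 << FRACTION_BITS        # 4

ratios = [(15, 16), (23, 7), (11, 9), (41, 45), (73, 87), (101, 127)]

acc = SCALE                        # 1.0 in fixed point
for p, q in ratios:
    acc = (acc * p + q // 2) // q  # multiply by p/q, rounding to nearest

fixed_result = acc / SCALE         # back to a real number for comparison

float_result = 1.0
for p, q in ratios:
    float_result *= p / q
# float_result is about 2.29; the 2-bit fixed-point value lands nearby,
# within the accumulated rounding error of the six multiplies.
```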
Title: Re: Deconvolution sharpening revisited
Post by: deejjjaaaa on August 17, 2010, 12:50:59 PM
So...since LR3 does deconvolution sharpening when the detail slider is moved all the way to the right

no, not "all the way" to the right - as Eric clarified it is a blend of USM and deconvolution methods, where the input from deconvolution is growing as you move the slider to the right... just when it is "all the way to the right" you probably have 100% pure deconvolution w/o any input from USM
Title: Re: Deconvolution sharpening revisited
Post by: bjanes on August 17, 2010, 03:16:15 PM
no, not "all the way" to the right - as Eric clarified it is a blend of USM and deconvolution methods, where the input from deconvolution is growing as you move the slider to the right... just when it is "all the way to the right" you probably have 100% pure deconvolution w/o any input from USM

Yes, that is my take on Eric's post. Now, what happens when the slider is all the way to the left? USM? However, detail of zero suppresses halos, which is different from the usual USM.

Regards,

Bill
Title: Re: Deconvolution sharpening revisited
Post by: deejjjaaaa on August 17, 2010, 05:05:01 PM
Yes, that is my take on Eric's post. Now, what happens when the slider is all the way to the left? USM? However, detail of zero suppresses halos, which is different from the usual USM.

they should be using some proprietary modifications and not a textbook formula
Title: Re: Deconvolution sharpening revisited
Post by: FranciscoDisilvestro on August 18, 2010, 08:35:11 AM

While in the Lightroom forum some one mentioned this:

As per Eric Chan:
Quote from: madmanchan
... CR 5.7 uses the same new method as LR 3.0 (so that LR 3 users can use Edit-In-PS with CS4 and get the same results).


As far as I understand, in the post from Eric Chan in the LR forum, he was mentioning that the demosaicing method was the same in LR 2.7/ACR 5.7 as in LR 3/ACR 6. He was not talking about sharpening or noise reduction. Later he mentioned that in LR 3 and ACR 6 the three functions were optimized to work together (demosaicing, sharpening, and NR). There is no implication that sharpening and NR have been changed in LR 2.7 and ACR 5.7.

From a previous post in this thread, Eric Chan specified that for deconvolution sharpening, only in LR3/ACR 6, you should set the detail slider at 100%.
Title: Re: Deconvolution sharpening revisited
Post by: ced on August 22, 2010, 05:28:34 AM
What is the problem with sharpening in Lab? see attached
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on August 22, 2010, 06:29:36 AM
What is the problem with sharpening in Lab? see attached

In general, converting to Lab and back to RGB mode risks losing certain colors/distinctions because the gamut remapping is a lossy process.

Which file did you sharpen, and which settings did you use?

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: ced on August 23, 2010, 04:50:11 AM
Bart Hi!  It was a convolved file you uploaded, but I would be happy to do it on a crop of the original from the camera.
The switch to Lab should not cause any shift in the colour values.
The file got a Gaussian blur on the a&b channels and a smart sharpen 100 - 0.2 on the L, with a slight increase in saturation on the a&b because, as you know, blurring causes desaturation.
KR
Title: Re: Deconvolution sharpening revisited
Post by: John R Smith on August 23, 2010, 05:19:38 AM
I have been playing around with the use of LR3 sharpening in Eric's suggested deconvolution mode, and it seems to work very well for subjects with lots of very fine detail. I usually end up with Detail 100, Radius 0.5 to 0.8, and Amount say 25 to 40. But it struck me that if this is deconvolution based sharpening, the routine cannot be using many iterations because the results are instantaneous. I had previously thought that for deconvolution to work it had to run thousands of iterations and was very processor-intensive.

John
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on August 23, 2010, 08:08:26 AM
Bart Hi!  It was a convoluted file you uploaded but I would be happy to do it on a crop of the original from the camera.

No, that's fine. It's the convolved 16-b/ch PNG file that needs to be used indeed. I just wondered, because I cannot reproduce your result with the settings you just gave. Could you double check if there was an accidental mix-up?

Quote
The switch to Lab should not cause any shift in the colour values.

Unfortunately it does. On the one hand you lose color precision (http://www.brucelindbloom.com/RGB16Million.html) (twice) due to rounding to integer numbers, which results in mapping some colors to similar ones, thus losing the distinction, and you risk clipping of saturated colors. On the other hand you introduce a color shift (how much depends on the amount of change in the L channel) when you change local contrast. Given a small radius correction it might amount to only a bit of shift for most of the image, but a shift will occur (http://www.brucelindbloom.com/MunsellCalcHelp.html#BluePurple).

The following experiment will show you the potential magnitude of the issue:

Quote
The file got a gauss blur on the a&b channels and a smart shrpn 100 - .2 on the L with a slight increase in saturation on the a&b because as you know blurring causes desaturation.

Thanks for sharing the settings; that will allow others to reproduce the results using a similar approach. Yes, blurring the chromaticity will indeed change the saturation a bit, although only for the detail that changes significantly with the blur. By increasing the contrast of the ab channels, one boosts the saturation of the lower spatial frequencies by more than what is needed to compensate for the blurred features.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: ced on August 23, 2010, 10:54:35 AM
Please guide me to where the original crop png file can be found.
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on August 23, 2010, 12:30:47 PM
Please guide me to where the original crop png file can be found.

Here (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction.png) it is.

I need something like Smart sharpening, advanced/lens blur, amount 500, radius 1.3 to get something a bit more usable.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: XFer on August 24, 2010, 11:15:10 AM
Hi Bart, how are you?
We used to write in comp.periph.scanners, some years ago.  :)

I'm playing with deconvolution a bit.
I've studied RawTherapee sources and put up a quick hack in C to experiment with various kernels (PSFs).
If you have images and PSFs to play with, I would be really happy to show the results.
I can deal with float PSFs of square shape and whatever size. Only grayscale pictures at the moment.
Hopefully we can work out a set of PSFs to complement the Gaussian 3x3 approximation that RT is using right now.
The quick hack is commandline and really ugly with lots of limitations, so I would be ashamed to share it for now; but I can download test images and upload the results.

Fernando
Title: Re: Deconvolution sharpening revisited
Post by: ced on August 24, 2010, 12:38:06 PM
Xfer the post from Bart above yours leads to an image you can use to test.
Title: Re: Deconvolution sharpening revisited
Post by: XFer on August 24, 2010, 12:45:57 PM
Sorry, I didn't make myself clear.
I'm looking for specific images and associated PSFs to try together (example: picture taken at very small aperture and diffraction PSF model, defocused picture and defocus PSF model, etc.)
I don't have any PSF to use as of now (apart from the standard Gaussian approximation, not very interesting).
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on August 24, 2010, 12:51:10 PM
Hi Bart, how are you?
We used to write in comp.periph.scanners, some years ago.  :)

Hi Fernando,

Doesn't time fly, it's 'a bit more' than some years by now.

Quote
I'm playing with deconvolution a bit.
I've studied RawTherapee sources and put up a quick hack in C to experiment with various kernels (PSFs).
If you have images and PSFs to play with, I would be really happy to show up the results.
I can deal with float PSFs of square shape and whatever size. Only grayscale pictures at the moment.
Hopefully we can work out a set of PSFs to complement the Gaussian 3x3 approximation that RT is using right now.

IMHO there are 3 obvious fundamental candidates:
  • A mix of Gaussians
  • Defocus blur (DOF related or plain OOF)
  • Diffraction dominated

It is usually some sort of mix between them, although the mix of Gaussians can be tailored to approximate a lot of different scenarios (although it's not simple to find the mix by trial and error).
 
Quote
The quick hack is commandline and really ugly with lots of limitations, so I would be ashamed to share it for now; but I can download test images and upload the results.

Frankly, I'm in the process of programming a "PSF generator" application, but I'm not ready yet. So it shouldn't be too difficult to generate all sorts of mixes in the foreseeable future. I have to do a bit more coding before it's usable enough to release, but I also want to build in intelligence for deriving the PSF needed from an actual image (although that's a lot harder to program). Of course, when that's done, the logical next step is doing the actual deconvolution with it ...

In this thread I posted an image crop that has the diffraction of f/32 added, and the PSF (as data and as an image) that was used to convolve the original with. There are also a number of results from various methods, so it would make most sense in the short term to start with that. We can add some more as we go, if there is enough interest in the subject.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: XFer on August 24, 2010, 04:51:39 PM

Doesn't time fly, it's 'a bit more' than some years by now.

So true!  ::)
BTW, I've added a couple of drum scanners, a Nikon 8000 and a V700 to my collection (besides the Minolta 5400, Epson 2450 and Microtek 120).

Quote
IMHO there are 3 obvious fundamental candidates:
  • A mix of Gaussians
  • Defocus blur (DOF related or plain OOF)
  • Diffraction dominated

What about segmenting the image and applying a different PSF to each relevant portion?
I was talking about that with ejmartin in the RT forum.
Real lenses have different issues near center, at borders and at corners.
Maybe trying to compensate all different effects and aberrations with a single PSF across the whole frame is asking too much.
Example: a fast lens shooting at large aperture may show strong coma at the edges and just some spherical aberration at the center.

Quote
I'm in the process of programming a "PSF generator" application

Now, this is such a wonderful idea!!  :D
It's driving me crazy trying to write down discrete kernels for my tests.

Quote
In this thread I posted an image crop that has the diffraction of f/32 added

Here we have a couple of tests of mine.
Please note that since my dirty little app only manages gray images at this time, I had to convert to Lab and deconvolve Lightness only.

First test: RT Gaussian approximation (3x3 kernel).
Radius (sigma) = 1.2, 2000 iterations, no damping.
(http://img840.imageshack.us/img840/3705/cropdiffractionrtlr.jpg)

Your deconvolution has more hi-freq details, but more ringing. See the angled white bar near the bottom of the tree.


Second test: same kernel, but I tried a special "turbo" mode, modifying the R-L implementation so that I can use a very narrow Gaussian (small sigma) and very few iterations.
radius = 0.35, 60 iterations, no damping.
(http://img830.imageshack.us/img830/4939/cropdiffractionrtlrturb.jpg)

This method is very fast but really "nervous", can diverge easily.  ;D

EDIT: Mmmm, I don't understand, why are my images downsized? I'm sure the original links are full-res!  ???

Well, here you can find direct http links:
http://img840.imageshack.us/img840/3705/cropdiffractionrtlr.jpg
http://img830.imageshack.us/img830/4939/cropdiffractionrtlrturb.jpg
Title: Re: Deconvolution sharpening revisited
Post by: MichaelEzra on August 24, 2010, 05:22:43 PM
Second test: same kernel, but I tried a special "turbo" mode, modifying the R-L implementation so that I can use a very narrow Gaussian (small sigma) and very few iterations.
radius = 0.35, 60 iterations, no damping.

XFer, Is this code googlecode by any chance? I would love to compile it to try out!
Title: Re: Deconvolution sharpening revisited
Post by: XFer on August 24, 2010, 05:46:56 PM
XFer, Is this code googlecode by any chance? I would love to compile it to try out!

Hi Michael,

no, not yet: it's just an ugly hack in single-threaded C (original RT code is multithreaded C++).
I quickly put it together just to experiment with some PSFs.
It's so ugly I can't release it just now (I would be hunted down by Kernighan and Ritchie!), but I'll polish it up a bit in coming days.  :)
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on August 24, 2010, 06:51:50 PM
What about segmenting the image and applying a different PSF to each relevant portion?

Yes, that will help, but it does require adaptive PSF generation. Another approach is determining a different PSF for center and corners, and then blend between them.

Quote
Here we have a couple of tests of mine.
Please note that since my dirty little app only manages gray images at this time, I had to convert to Lab and deconvolve Lightness only.

Yes, that's fine for now, but one ultimately needs to do either 3 layers or just the luminosity. The 3 layers will allow one to address things like diffraction even better, although my application handles that for luminosity as well via a weighted combination of R/G/B.

Quote
First test: RT Gaussian approximation (3x3 kernel).
Radius (sigma) = 1.2, 2000 iterations, no damping.

That's not bad, given the small kernel size. It really requires a 7x7 or 9x9 kernel to get most of the power of the diffraction pattern.
(http://www.xs4all.nl/~bvdwolf/main/downloads/AiryDisc_N32.png)

Quote
Your deconvolution has more hi-freq details, but more ringing. See the angled white bar near the bottom of the tree.

That's right, compromises, compromises. Still no free lunch ...

Quote
Second test: same kernel, but I tried a special "turbo" mode, modifying the R-L implementation so that I can use a very narrow Gaussian (small sigma) and very few iterations.
radius = 0.35, 60 iterations, no damping
This method is very fast but really "nervous", can diverge easily.  ;D.

It's quite a small radius, but the result is getting close to what can be achieved with this algorithm.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: XFer on August 24, 2010, 07:17:27 PM
Yes, that will help, but it does require adaptive PSF generation. Another approach is determining a different PSF for center and corners, and then blend between them.

Emil Martinec suggested the same thing. Subdividing the image in tiles and using an interpolated PSF according to the tile position (given a PSF for the center and 4 for the corners).

Quote
That's right, compromises, compromises. Still no free lunch ...

You have actually measured the PSF, right?
It's strange, it resembles a Gaussian so much, instead of the Airy disc I would have expected from a heavily diffraction-limited image. I see no fringes in the PSF.

Fernando
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on August 24, 2010, 07:38:13 PM
You have actually measured the PSF, right?
It's strange, it resembles so much a Gaussian, instead of the Airy disc I would have expected from a heavy diffraction-limited image. I see no fringes in the PSF.

I generated the PSF from an integrated 3D Airy-disc function, and convolved the original (unblurred) image with it. All for the purpose of having a perfect PSF to work with, and determining the benefit of prior knowledge for the RL algorithm or others. The only drawback is the limitation of the (ImagesPlus) software that I used, which is limited to a maximum 9x9 kernel size. As it happens to work out for f/32 with a sensel pitch of 6.4 micron and 564 nanometer wavelength, the first minimum of the Airy diffraction pattern just fits within that limitation. So we have something like 84% of the total power covered.

The center of the pattern can be approximated (http://en.wikipedia.org/wiki/Airy_disk#Approximation_using_a_Gaussian_profile) reasonably well by a Gaussian, but defocus has a markedly different shape.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: ejmartin on August 25, 2010, 12:48:10 PM
What's a good PSF for defocus?
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on August 25, 2010, 01:15:53 PM
What's a good PSF for defocus?

Hi Emil,

I'd say a disc of somewhat uniform intensity would come close. It's a bit like taking a slice (the focal plane) out of a cone of light as its focal point falls in front of or behind the focal plane. Of course a real-world defocus PSF is a combination of many things, but if one wants to isolate defocus, a disc seems appropriate.

I think that a model of a PSF can be split into some quantifiable contributors, such as diffraction and defocus, and the remaining bit to add can be a Gaussian. By extracting some known contributors, it should be easier to find the residual contribution. Another approach is to take multiple Gaussians, where one with a large sigma could simulate the diffraction or defocus part, and a small sigma could represent some more localized blur.

Cheers,
Bart
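A uniform disc PSF like the one described here is easy to tabulate on a pixel grid. A minimal sketch follows (my own helper; the supersampling of rim pixels is an assumption, used only so partially covered pixels get a fractional weight instead of aliasing to all-or-nothing):

```python
import numpy as np

def disc_psf(radius_px, supersample=4):
    """Uniform defocus disc, tabulated on a pixel grid.
    Each pixel is supersampled so the rim gets fractional coverage."""
    half = int(np.ceil(radius_px)) + 1
    size = 2 * half + 1
    kernel = np.zeros((size, size))
    offsets = (np.arange(supersample) + 0.5) / supersample - 0.5
    for y in range(size):
        for x in range(size):
            cy, cx = y - half, x - half
            # fraction of this pixel's area inside the disc
            inside = sum(1 for dy in offsets for dx in offsets
                         if (cy + dy) ** 2 + (cx + dx) ** 2 <= radius_px ** 2)
            kernel[y, x] = inside / supersample ** 2
    return kernel / kernel.sum()   # normalize so blurring preserves brightness

psf = disc_psf(2.5)                # a 2.5-pixel defocus disc on a 9x9 grid
```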
Title: Re: Deconvolution sharpening revisited
Post by: ejmartin on August 25, 2010, 01:26:59 PM
Yes, well I was imagining it should look like the little disks of OOF specular highlights, those being extreme versions of OOF point sources.  But there one sees some structure near the edges, perhaps diffraction off the edge of the aperture blades?  As well of course as a slight polygonal shape due to the aperture blades.  But I'm not sure any of those are significant, and a disk is perhaps good enough.  I was just wondering if there was any discussion eg in the literature or in some online source. 

In the service of keeping it simple, perhaps since apart from the side lobes of the Airy pattern the central peak is fairly well approximated by a Gaussian, one could use a suitable combination (the successive convolution) of a disk, Gaussian, and line (for motion deblur).
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on August 25, 2010, 04:41:32 PM
Yes, well I was imagining it should look like the little disks of OOF specular highlights, those being extreme versions of OOF point sources.  But there one sees some structure near the edges, perhaps diffraction off the edge of the aperture blades?  As well of course as a slight polygonal shape due to the aperture blades. But I'm not sure any of those are significant, and a disk is perhaps good enough.  I was just wondering if there was any discussion eg in the literature or in some online source.

That's right, there is also some irregularity caused by (the correction of) lens aberrations, such as spherical aberration, vignetting, and other phenomena. See an explanation by Paul van Walree for some background (http://toothwalker.org/optics/bokeh.html). Therefore it would be nice to be able to extract the PSF information from an image itself, which would make it possible to build spatially varying PSFs (although one might not want to treat an OOF background, but rather the slight misfocus of the main subject).

Quote
In the service of keeping it simple, perhaps since apart from the side lobes of the Airy pattern the central peak is fairly well approximated by a Gaussian, one could use a suitable combination (the successive convolution) of a disk, Gaussian, and line (for motion deblur).

Yes, that alone will already allow a huge improvement. Of course, for speed and accuracy, one might want to use a single run with a combined PSF, but it's not an absolute necessity.

Cheers,
Bart
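The "successive convolution" idea is straightforward to demonstrate: blurs applied in sequence combine into a single system PSF that is the convolution of the component PSFs. A sketch with two small made-up normalized kernels (the disc and Gaussian here are illustrative, not measured):

```python
import numpy as np

def convolve_full(a, b):
    """'Full' 2D convolution: the combined kernel of applying blur a, then b."""
    out = np.zeros((a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1))
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i:i + b.shape[0], j:j + b.shape[1]] += a[i, j] * b
    return out

# Made-up components: a tiny defocus disc and a 3x3 Gaussian.
disc = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=float)
disc /= disc.sum()
gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

system_psf = convolve_full(disc, gauss)   # 5x5 combined kernel
```

Because each component sums to one, the combined kernel also sums to one, so a single deconvolution pass with `system_psf` addresses both blurs at once.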
Title: Re: Deconvolution sharpening revisited
Post by: XFer on August 25, 2010, 05:07:40 PM
Ok, I can try whatever kernel on whatever image now (also RGB 48bpp).
Here's another quick comparison, from a scan (Nikon 8000 @ 4000 dpi).
Original screenshot:

http://a.imageshack.us/img830/5715/sharpen01.jpg

Reduced size (sic!):
(http://a.imageshack.us/img830/5715/sharpen01.jpg)

Deconvolution was with RT gaussian approximation. Radius = 0.45, 15 iterations, "turbo mode", no damping.

Can't wait to test some of your PSFs!  ;D
Title: Re: RL deconvolution using Photoshop
Post by: ablankertz on August 30, 2010, 11:47:04 AM
To gain some insight into how it works, here is a recipe for RL deconvolution using Photoshop commands:

1. Duplicate the ORIGINAL blurry image, call it "COPY1"

2. Duplicate COPY1, call it "COPY2"

3. Blur COPY2 with the PSF. For a gaussian, use Gaussian Blur. Other PSFs can be defined with the Custom Filter.

4. Divide the ORIGINAL blurry image by COPY2, with the result in COPY2.

5. Blur COPY2 with the PSF (as in step 3).

6. Multiply COPY1 by COPY2, with the result in COPY1. (Apply Image with Blending Mode: Multiply)

7. Go to step #2 and repeat for the number of iterations you want. Each iteration gets a little sharper. The final result is in COPY1.

To me, this set of operations implies that deconvolution with a separable kernel is separable. Is it?
Title: Re: RL deconvolution using Photoshop
Post by: BartvanderWolf on August 30, 2010, 02:08:41 PM
To me, this set of operations implies that deconvolution with a seperable kernel is seperable. Is it?

I don't know how you conclude that from that procedure, but perhaps it is related to how you view the concept of separability of kernels (http://blogs.mathworks.com/steve/2006/10/04/separable-convolution/) ?

Cheers,
Bart
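The separability of kernels that Bart links to is easy to verify numerically: convolving the rows with a 1-D kernel and then the columns with another gives the same result as a single pass with their outer product. A small NumPy sketch (my own helpers, using a binomial kernel as a Gaussian stand-in and edge padding throughout):

```python
import numpy as np

def conv1d_rows(img, k):
    """Convolve each row with 1-D kernel k (edge padding)."""
    p = len(k) // 2
    padded = np.pad(img, ((0, 0), (p, p)), mode="edge")
    out = np.zeros_like(img)
    for j, w in enumerate(k):
        out += w * padded[:, j:j + img.shape[1]]
    return out

def conv2d(img, k2):
    """Direct 2-D convolution (edge padding) for comparison."""
    ph, pw = k2.shape[0] // 2, k2.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img)
    for i in range(k2.shape[0]):
        for j in range(k2.shape[1]):
            out += k2[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

k = np.array([1.0, 2.0, 1.0]) / 4.0   # 1-D binomial (Gaussian-like) kernel
k2 = np.outer(k, k)                    # its separable 2-D counterpart

rng = np.random.default_rng(0)
img = rng.random((16, 16))
two_passes = conv1d_rows(conv1d_rows(img, k).T, k).T   # rows, then columns
one_pass = conv2d(img, k2)             # identical result, but O(N) vs O(N^2) taps
```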
Title: Re: Deconvolution sharpening revisited
Post by: XFer on September 05, 2010, 11:36:31 AM
I'd like to build a NxN kernel approximating an Airy pattern (diffraction figure from circular aperture).

Let's say the parameters are:

Lambda = 560nm = 0.56*10^-3 mm
Aperture radius = 1mm

X,Y domain:
-N/2 <= X <= N/2
-N/2 <= Y <= N/2

The function is defined in these terms (if I recall correctly):

R = sqrt(X^2+Y^2)

K = 2 * PI / Lambda
A = 1 (it's the aperture radius)
T = K * A * Sin(R)

Airy(R) = (2 * J1(T) / (T))^2

(where J1() is the Bessel function of degree 1, first kind)

So I need to table Airy(R) on a NxN grid

Is there a Matlab expert who can help me?
I need a real working example: I found a huge number of so-called "tutorials" on the web but none of them actually works... for example, note that Airy(R) as defined is singular for R=0; one must somehow tell Matlab that R=0 must give Airy(R) = 1.

Thanks.

Fernando
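Not Matlab, but the tabulation is small enough to sketch in dependency-free Python. The J1 power series below is my own stand-in (adequate for the small arguments a 9x9 kernel needs), the R=0 singularity is replaced by its limit value 1, and the parameter values follow Bart's f/32 example; treat it as a sketch, not a vetted implementation:

```python
import math

def bessel_j1(x):
    """Bessel function J1 via its power series; fine for |x| < ~12."""
    term = x / 2.0
    total = term
    m = 0
    while abs(term) > 1e-16:
        m += 1
        term *= -(x * x / 4.0) / (m * (m + 1))
        total += term
    return total

def airy_kernel(size, pitch_um, wavelength_um, f_number):
    """NxN table of the Airy diffraction intensity, normalized to sum 1.
    The singularity at R=0 is replaced by the limit value 1."""
    half = size // 2
    kernel = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            r = math.hypot(x - half, y - half) * pitch_um    # radius in microns
            t = math.pi * r / (wavelength_um * f_number)
            kernel[y][x] = 1.0 if t == 0.0 else (2.0 * bessel_j1(t) / t) ** 2
    s = sum(map(sum, kernel))
    return [[v / s for v in row] for row in kernel]

# Bart's example: 6.4 micron pitch, 564 nm, f/32.
# The first minimum falls at 1.22 * lambda * N ~ 22 microns, i.e. ~3.4 pixels,
# so it just fits inside a 9x9 kernel, as noted earlier in the thread.
psf = airy_kernel(9, 6.4, 0.564, 32)
```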
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on September 05, 2010, 01:24:50 PM
[...]
So I need to table Airy(R) on a NxN grid

Is there a Matlab expert who can help me?
I need a real working example: I found a huge number of so called "tutorials" on the web but none of them actually works... for example, note that Airy(R) as defined is singular for R=0, one must somehow tell Matlab that R=0 must give Airy(R) = 1.

Hi Fernando,

I used Mathematica to develop and test my models, and based the calculation on a module (a self made function) that contains the following core logic:
(http://www.xs4all.nl/~bvdwolf/main/downloads/AiryDiscFunc.png)
This is a visual representation as produced by Mathematica, but there can be some code optimizations performed like you did.
SamplePitch and Wavelength are both in the same (micron) units, e.g. 6.4 micron pitch and 0.564 micron wavelength, N is the aperture number e.g. 32.

However, IMHO there is also an integration of the function required to account for the finite apertures of the sensels (which I presumed to be square with a 100% fill factor as a result of the microlenses). What I've been struggling with is that there seems to be no simple solution other than basic 2D integration, and a workhorse like Mathematica takes its time doing that, because it's apparently not a simple function to integrate with rigorous precision (which is what Mathematica strives to do, but which may be overkill in this practical implementation).

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: ejmartin on September 05, 2010, 09:59:12 PM
And then there is the OLPF convolved with the Airy pattern before it gets to the box blur of the sensels.  It all makes me suspect that a Gaussian is going to be a reasonable approximation in the end, given all the inaccuracies introduced all along the way.  Do you think that the difference between using the precise PSF and Gaussian is going to be noticeable?
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on September 06, 2010, 05:50:22 AM
And then there is the OLPF convolved with the Airy pattern before it gets to the box blur of the sensels.  It all makes me suspect that a Gaussian is going to be a reasonable approximation in the end, given all the inaccuracies introduced all along the way.

Correct. The issue is indeed that there are several cascading PSFs involved. They may or may not produce local maxima or minima when combined, so it is hard to predict how important it will be. One approach to estimating the PSF of the unknown OLPF is to factor out known components, such as diffraction and/or defocus, from the combined system PSF. That will make the resulting mix of PSF contributions more predictable and steerable with a simpler (Gaussian) model. Then the recombined PSF can be used for efficient processing.

Quote
Do you think that the difference between using the precise PSF and Gaussian is going to be noticeable?

The difference will not be huge most of the time, but probably noticeable when the best result is required. The examples earlier in the thread show that even the more general solutions do a reasonably good job, but there is more potential to be utilized with a 'perfect' PSF. Also note that I've only used the Richardson-Lucy deconvolution algorithm because it's readily available in several programs and allows people to do their own experiments, but there are modern variants available that perform better. And who knows what the future has in store ... I think that if it is possible to get closer to the actual PSF by doing a little extra preprocessing, then the effort is justified and will pay off in the end (even if only as an option for less time-critical or processor-intensive jobs). Also, the insights may lead to new efficient shortcuts.

It is probably my experience with quality control that has taught me that sloppiness early in the process takes more effort to set straight in the end. That's why I tend to seek optimization early in the chain of events (which also means when taking the image); with cascading deterioration it's best to intervene early.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: XFer on September 06, 2010, 10:19:16 AM
Ok but let's not forget special-purpose deconvolution.
It's not only about inverting diffraction or box blur; it's also about spherical aberration, coma, defocusing, motion blur.
That's why we need a way to comfortably explore different PSFs.
I have this small utility that, at the moment, can accept hardcoded kernels, but I can extend it to load a kernel from file.
The problem is having meaningful PSFs to experiment with.

Right now I have an Excel sheet which can compute a 9x9 diffraction kernel (input parameters are pixel pitch, f-number, wavelength), but that's too limited.  :-\
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on September 06, 2010, 01:50:40 PM
Ok but let's not forget special-purpose deconvolution.
It's not only about inverting diffraction or box blur; it's also about spherical aberration, coma, defocusing, motion blur.
That's why we need a way to comfortably explore different PSFs.

I agree, and that's why (as I've disclosed earlier) I'm working on a flexible tool to do just that.

Quote
I have this small utility that, at the moment, can accept hardcoded kernels, but I can extend it to load a kernel from file.
The problem is having meaningful PSFs to experiment with.

Since I've already invested considerable resources in this project, I hope you can understand that I'm not going to give everything away for free, but when my PSF generator (part of a much larger set of integrated software solutions) is a bit further in its development I will need some beta testers ;) . I'm making progress, but there is a lot to do to make it usable for normal human beings, and I unfortunately have to protect my intellectual property against copyright violations and reverse engineering. I will probably need to apply for some patent protection for the proprietary stuff further down the line as well.

Quote
Right now I have an Excel sheet which can compute a 9x9 diffraction kernel (input parameters are pixel pitch, f-number, wavelength), but that's too limited.  :-\

Yes, I know the frustration; I've been using spreadsheets for a long time as well, but one really needs better integration with other software. There is also the feeling that the image-processing industry has been dragging its feet (remember Pixmantec's RawShooter, Photomatix, etc. ...), and it's up to the smaller innovators to really get things moving forward. That's why I started my project. It's just that my resources are limited, so it can take a bit longer before it's commercially available, but the potential looks promising.

If you need certain kernels to play with, I'm willing to help and make a few available as data files, just like the f/32 diffraction kernel (for a 6.4 micron sensel pitch) I shared in this thread. Send me a PM about what you are thinking of, and we can work something out so you can continue your investigations which might e.g. help RawTherapee.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: XFer on September 07, 2010, 10:06:20 AM
Hi Bart,
thanks for the offer; anyway I'm tooling up to build the basic PSF I need for now (Airy patterns, Gaussians, convolutions of the two).

I have a few questions regarding how to get discrete kernels from these continuous functions.

1) Do you just evaluate the function at the grid points? This would be like using a rectangular filtering window for the sampling. Doesn't this lead to issues? Or are you using more sophisticated ways to get the samples (triangular windows, Chebyshev windows, etc.)?
2) For Airy patterns which emulate strong diffraction (F/32 and beyond), even a 9x9 kernel leaves out a certain percentage of the total signal intensity.
Are you just truncating the function at the edges of the kernel, or do you perform some kind of smoothing? I think that just truncating could lead to ripples -> ringing on the image.
3) Especially for "tight & pointy" PSFs (think small-radius Gaussians), I have the feeling that a grid with a pitch of 1 pixel is too rough. Too much approximation from the continuous function to the kernel. I think we're going to need sub-pixel accuracy to avoid some artifacts (mosquitoes around high-contrast details, ringing, edge overshooting, noise amplification, hot pixels).
What do you think about it?

Thanks a lot.

Fernando

PS: if anyone is interested, I can publish actual kernels of the type mentioned above.
Title: Re: Deconvolution sharpening revisited
Post by: MichaelEzra on September 07, 2010, 11:21:42 AM
Fernando,

you might also look at ImageJ (http://rsbweb.nih.gov/ij/)

and PSF generator plugin:

http://bigwww.epfl.ch/deconvolution/?p=plugins

Cheers,
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on September 07, 2010, 12:20:20 PM
Hi Fernando,

I'll try to avoid boring the other readers of this thread with (too many) programming details, but I also don't want to give the (wrong) impression that I'm avoiding an answer. I like to help others where possible, so I'll answer in general terms and propose to deal with the specifics in a PM if needed.

I have a few question regarding how to get discrete kernels from these continuous functions.

1) Do you just evaluate the function at the grid points? This would be like using a rectangular filtering window for the sampling. Doesn't this lead to issues? Or are you using more sophisticated ways to get the samples (triangular windows, Chebyshev windows, etc.)?

I evaluate the functions over the finite area of the sensel apertures. For reasons of calculation efficiency (=speed) that may be done either by integration in the spatial domain or by filtering in the frequency domain.

Quote
2) For Airy patterns which emulate strong diffraction (F/32 and beyond), even a 9x9 kernel leaves out a certain percentage of the total signal intensity.
Are you just truncating the function at the edges of the kernel, or do you perform some kind of smoothing? I think that just truncating could lead to ripples -> ringing on the image.

In principle I do not use a specific fixed size for the filter kernels (unless dictated by other software implementations), but due to the significant impact on processing time one does need to make some sort of trade-off sooner or later. Fortunately most defects can be tackled with reasonably sized kernels before we are faced with diminishing returns. When the filter exhibits significant signal at the edges, it is wise to use a windowing function to suppress the potential ringing. It depends on the particular dimensions and goals whether and when to choose windowed functions or larger kernels. Heuristics can be used to switch between the methods.
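As an illustration of that windowing step, here is a simple Hann taper applied to a truncated kernel (a sketch, not Bart's actual implementation; function names are mine):

```python
import numpy as np

def hann_window_2d(size):
    # Separable 2-D Hann taper; computed over size + 2 points with the
    # zero endpoints trimmed, so the kernel edges keep a little weight.
    w = np.hanning(size + 2)[1:-1]
    return np.outer(w, w)

def window_kernel(kernel):
    # Taper a truncated PSF toward zero at its edges to suppress the
    # ringing a hard cutoff can cause, then renormalize to unit volume.
    k = kernel * hann_window_2d(kernel.shape[0])
    return k / k.sum()
```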

Quote
3) Especially for "tight & pointy" PSFs (think small-radius Gaussians), I have the feeling that a grid with a pitch of 1 pixel is too rough. Too much approximation from the continuous function to the kernel. I think we're going to need sub-pixel accuracy to avoid some artifacts (mosquitoes around high-contrast details, ringing, edge overshooting, noise amplification, hot pixels).
What do you think about it?

From a mathematical point of view it's not important as long as enough calculation precision is maintained, e.g. by clever programming, or by using floating-point or rational numbers. Of course there is a limit to the usefulness of very small kernels, but because of their limited support it is also not too computationally expensive to process them in floating point, even in the spatial domain.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: XFer on September 07, 2010, 06:24:55 PM
A quick example, on a diffraction-limited image.
100% crops (raw-converted with Raw Therapee 3.0alpha and ejmartin's AMaZE algorithm).

First picture.
On the left a shot taken with a Canon 5DmkII and 135/2L at f/5.6.
On the right, same stuff at f/22.
Both shots unsharpened.
(http://img62.imageshack.us/img62/3672/diffraction01.jpg)
The difference is obvious.
The right image is diffraction-limited (the 135/2L is a very sharp lens).

Second picture: same images, sharpened with R-L deconvolution.
f/5.6 on the left, f/22 on the right.
(http://img96.imageshack.us/img96/534/diffraction02.jpg)
The f/5.6 shot is still sharper, but look at aliasing artifacts.
The lens transmitted spatial frequencies far beyond the Nyquist limit and the AA filter could not do much about it.
Smaller patterns are totally destroyed by aliasing.
The f/22 shot is a bit softer, but almost entirely aliasing-free; I'd say that almost all the usable details are there, and smaller patterns are handled much more gracefully.

Fernando
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on September 08, 2010, 05:59:23 AM
Second picture: same images, sharpened with R-L deconvolution.
f/5.6 on the left, f/22 on the right.
(http://img96.imageshack.us/img96/534/diffraction02.jpg)
The f/5.6 shot is still sharper, but look at aliasing artifacts.
The lens transmitted spatial frequencies far beyond the Nyquist limit and the AA filter could not do much about it.

Well, it shows that the OLPF does not prevent all aliasing, although it does reduce the risk of it occurring. It also shows that the dreaded stories about destructively blurring one's image are grossly exaggerated in relation to what the effects of poor technique (or unavoidable DOF compromises) can be.

What type of PSF did you use for the RL restoration? Was it diffraction only, or a (mix with a) Gaussian type?

Quote
Smaller patterns are totally destroyed by aliasing.
The f/22 shot is a bit softer, but almost entirely aliasing-free; I'd say that almost all the usable details are there, and smaller patterns are handled much more gracefully.

The restoration also shows that additional precautions need to be taken to avoid processing low-S/N areas, to constrain the grittiness when extreme processing is required. While diffraction can be used as an AA tool, it does have a negative effect on per-pixel microdetail. An interesting fact is that slight defocus has a very dramatic effect on aliasing, so it can be used for certain (flat) structures, whereas diffraction has a more gentle effect on AA suppression. When a subject is positioned at e.g. 5 metres (some 16.4 feet) distance, shifting the focus plane to 5.10 metres will, with a 135mm lens at f/5.6, create a blur disc of 13.12 micron diameter, which is more than 2 sensel widths on a 6.4 micron sensel pitch sensor array. It will effectively halve the resolution capability, although deconvolution can restore part of that (with reduced aliasing). So using a wider aperture will kill more moiré everywhere except in the plane of focus, while a narrower aperture will kill moiré even in the plane of optimal focus.
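The 13.12 micron figure agrees (to within rounding) with the standard thin-lens blur-circle formula; a quick sketch (function name is mine):

```python
def defocus_blur_disc(f, N, focus_dist, subject_dist):
    # Thin-lens blur-circle diameter (metres in, microns out) for a
    # subject at subject_dist when the lens is focused at focus_dist:
    #   c = (f^2 / N) * |s_subj - s_focus| / (s_subj * (s_focus - f))
    c = (f ** 2 / N) * abs(subject_dist - focus_dist) / (subject_dist * (focus_dist - f))
    return c * 1e6

# 135mm lens at f/5.6, focused at 5.10 m, subject at 5.00 m:
d = defocus_blur_disc(0.135, 5.6, 5.10, 5.0)   # about 13.1 micron
```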

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Christoph C. Feldhaim on September 08, 2010, 06:31:14 AM
That sounds interesting!
So - thinking about scanning negatives it would probably
make sense to manually defocus a bit to reduce the gritty sky
and then use deconvolution later to get back details.
I have a Nikon LS 9000 scanner which allows for controlled, manual defocusing,
but I'm always fighting with pseudo-aliasing (film grain/scanner CCD interaction).
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on September 08, 2010, 09:06:41 AM
That sounds interesting!
So - thinking about scanning negatives it would probably
make sense to manually defocus a bit to reduce the gritty sky
and then use deconvolution later to get back details.
I have a Nikon LS 9000 scanner which allows for controlled, manual defocusing,
but I'm always fighting with pseudo-aliasing (film grain/scanner CCD interaction).


Hi Christoph,

Film scans are a bit different in this respect, because we are talking about a second-generation capture (analog camera on film, analog film on discrete sampling sensor) and the layered grain filaments (or clouds of dye) are the image. Noise or grain in the presence of signal hampers the successful restoration of only the signal component. For film scans the best approach is to avoid grain-aliasing (http://www.photoscientia.co.uk/Grain.htm) by oversampling (scanning at 6000-8000 PPI, or at least at the maximum native scanner resolution). Then, after pre-blurring (=convolution) and adaptive noise reduction, downsampling can take place, followed by final output sharpening. That's where I see the most use for deconvolution, after a number of preprocessing steps.

With direct digital capture we are faced with a number of processing steps that all introduce loss of resolution due to the digitization. Just like with film it already starts with the lens and aperture used, but the AA filter (if present) and the CFA also add their specific fingerprints to the mix, as does the Raw converter. That digitization process is well-explored territory in Digital Signal Processing. It's the application to general photographic imaging that seems to be catching on far too slowly, and that is in fact holding back (creative) progress in some areas.

Properly addressing it also has more applications than many realise. Not only restoring sharpness and recovering from motion blur, but also resampling, and even adjustable/variable DOF and glare reduction can be tackled with PSFs and deconvolution.

Exciting times are ahead...

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: XFer on September 08, 2010, 12:08:02 PM
Hi Bart,

for this example I used a basic 3x3 Gaussian approximation; I ran my "strange" version of R-L which allows me very small sigmas (0.45 here) and few iterations (10 here).

I can't seem to avoid some ringing when using "true" R-L.

I still think that sub-pixel processing and smooth sampling windows are needed to fully exploit R-L with larger kernels.

Fernando
Title: Re: Deconvolution sharpening revisited
Post by: XFer on September 08, 2010, 12:11:04 PM
I have a Nikon LS 9000 scanner which allows for controlled, manual defocusing,
but I'm always fighting with pseudo-aliasing (film grain/scanner CCD interaction).

I posted a sample a few days ago: film scanning with Nikon 8000 and R-L deconvolution vs. standard USM

In case you missed it, it may be of interest:

http://www.luminous-landscape.com/forum/index.php?topic=45038.msg384006#msg384006

Fernando
Title: Re: Deconvolution sharpening revisited
Post by: sjprg on December 20, 2010, 10:05:46 PM
Did this subject die out? haven't seen an entry since September 2010.
Title: Re: Deconvolution sharpening revisited
Post by: feppe on December 21, 2010, 01:40:10 PM
Did this subject die out? haven't seen an entry since September 2010.

It was until you started grave-digging ;D
Title: Re: Deconvolution sharpening revisited
Post by: eronald on December 21, 2010, 04:05:37 PM
It was until you started grave-digging ;D

Nah, the topic got oversharpened, and suffered from ringing and a noise explosion :)

As Steve would say:

Divide by Zero, (Apple)Core dumped!

Edmund
Title: Re: Deconvolution sharpening revisited
Post by: David Ellsworth on February 01, 2011, 05:30:34 PM
I hope no one minds my entering this a bit late. I've written a small C program that does deconvolution by Discrete Fourier Transform division (using the library "FFTW" to do Fast Fourier Transforms). To me this seems to beat all the other deconvolution algorithms that have been presented in this thread... I'd like to see what others think.

My algorithm currently only works on square images, and deals with edge effects very badly, so I added black borders to Bart's max-quality jpeg crop making it 1115x1115, then applied the convolution myself using his provided 9x9 kernel. I operated entirely on 16-bit per channel data. The exact convoluted image that I worked from can be downloaded here (5.0 MB PNG file) (http://kingbird.myphotos.cc/0343_Crop+Diffraction_square_black_border.png). My result, saved as a maximum-quality JPEG (after re-cropping it to 1003x1107): 0343_Crop+Diffraction+DFT_division_v2.jpg (1.2 MB) (http://kingbird.myphotos.cc/0343_Crop+Diffraction+DFT_division_v2.jpg)

My algorithm takes a single white pixel on a black background the same size as the original image, applies the PSF blur to it, and divides the DFT of the blurred pixel by the DFT of the pixel (element by element, using complex arithmetic); this takes advantage of the fact that the DFT of a single white pixel in the center of a black background has a uniformly gray DFT. Then it takes the DFT of the convoluted image and divides this by the result from the previous operation. Division on a particular element is only done if the divisor is above a certain threshold (to avoid amplifying noise too much, even noise resulting from 16-bit quantization). An inverse DFT is done on the final result to get a deconvoluted image.

This algorithm is very fast and does not need to go through iterations of successive approximation; it gets its best result right off the bat.
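For readers who want to experiment, the scheme above can be sketched in a few lines of NumPy (standing in for the actual C/FFTW program, which hasn't been posted; names are mine): embed the PSF in an image-sized array rolled to the origin, divide the image's DFT by the PSF's DFT wherever the divisor magnitude exceeds the threshold, and inverse-transform.

```python
import numpy as np

def dft_division_deconvolve(blurred, kernel, threshold=1e-3):
    # Embed the PSF in an image-sized array, rolled so its centre sits at
    # the origin (equivalent to the blurred single-white-pixel trick).
    H = np.zeros_like(blurred, dtype=float)
    kh, kw = kernel.shape
    H[:kh, :kw] = kernel
    H = np.roll(H, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    Hf = np.fft.fft2(H)
    Gf = np.fft.fft2(blurred)
    # Divide only where the transfer function is strong enough; leave
    # crushed frequencies untouched to avoid amplifying noise.
    mask = np.abs(Hf) > threshold
    Gf[mask] = Gf[mask] / Hf[mask]
    return np.real(np.fft.ifft2(Gf))
```

Note this assumes the blur was applied (circularly) with the same kernel; with real images the edge handling and the noise threshold dominate the result.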

1. I've taken a crop of a shot taken with my 1Ds3 (6.4 micron sensel pitch + Bayer CFA) and the TS-E 90mm at f/7.1 (the aperture where the diffraction pattern spans approx. 1.5 pixels).
0343_Crop.jpg (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop.jpg) (1.203kb) I used 16-b/ch TIFFs throughout the experiment, but provide links to JPEGs and PNGs to save bandwidth.

2. That crop is convolved with a single diffraction (at f/32) kernel for 564nm wavelength (the luminosity weighted average of R, G and B taken as 450, 550 and 650 nm) at a 6.4 micron sensel spacing (assuming 100% fill-factor). That kernel (http://www.xs4all.nl/~bvdwolf/main/downloads/Airy9x9{p=6.4FF=100w=0.564f=32.}.dat) was limited to the maximum 9x9 kernel size of ImagesPlus, a commercial astrophotography program chosen for the experiment because a PSF kernel can be specified and the experiment can be verified. That means that only a part of the infinite diffraction pattern (some 44 micron, or 6.38 pixel widths, in diameter to the first minimum) could be encoded. So I realise that the diffraction kernel is not perfect, but it covers the majority of the energy distribution. The goal is to find out how well certain methods can restore the original image, so anything that resembles diffraction will do.
The benefit of using a 9x9 convolution kernel is that the same kernel can be used for both convolution and deconvolution, so we can judge the potential of a common method under somewhat ideal conditions (a known PSF, and computable in a reasonable time). It will present a sort of benchmark for the others to beat.
Crop+diffraction (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction.png) (5.020kb !) This is the subject to restore to its original state before diffraction was added.

3. And here (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction+RL0-1000.jpg) (945kb) is the result after only one Richardson-Lucy restoration (although with 1000 iterations) with a perfectly matching PSF. There are some ringing artifacts, but the noise is at almost the same level as in the original. The resolution has been improved significantly, quite usable for a simulated f/32 shot as a basis for further postprocessing and printing. Look specifically at the Venetian blinds at the first floor windows in the center. Remember, the restoration goal was to restore the original, not to improve on it (that will take another postprocessing step).
I have supplied a link (http://www.xs4all.nl/~bvdwolf/main/downloads/Airy9x9%7Bp=6.4FF=100w=0.564f=32.%7D.dat) to the data file.
Title: Re: Deconvolution sharpening revisited
Post by: eronald on February 01, 2011, 05:48:13 PM
I hope no one minds my entering this a bit late. I've written a small C program that does deconvolution by Discrete Fourier Transform division (using the library "FFTW" to do Fast Fourier Transforms). To me this seems to beat all the other deconvolution algorithms that have been presented in this thread... I'd like to see what others think.

My algorithm currently only works on square images, and deals with edge effects very badly, so I added black borders to Bart's max-quality jpeg crop making it 1115x1115, then applied the convolution myself using his provided 9x9 kernel. I operated entirely on 16-bit per channel data. The exact convoluted image that I worked from can be downloaded here (5.0 MB PNG file) (http://kingbird.myphotos.cc/0343_Crop+Diffraction_square_black_border.png). My result, saved as a maximum-quality JPEG (after re-cropping it to 1003x1107): 0343_Crop+Diffraction+DFT_division_v2.jpg (1.2 MB) (http://kingbird.myphotos.cc/0343_Crop+Diffraction+DFT_division_v2.jpg)

My algorithm takes a single white pixel on a black background the same size as the original image, applies the PSF blur to it, and divides the DFT of the blurred pixel by the DFT of the pixel (element by element, using complex arithmetic); this takes advantage of the fact that the DFT of a single white pixel in the center of a black background has a uniformly gray DFT. Then it takes the DFT of the convoluted image and divides this by the result from the previous operation. Division on a particular element is only done if the divisor is above a certain threshold (to avoid amplifying noise too much, even noise resulting from 16-bit quantization). An inverse DFT is done on the final result to get a deconvoluted image.

This algorithm is very fast and does not need to go through iterations of successive approximation; it gets its best result right off the bat.


Sounds simple. Results are really nice.
Can you post the code please, to avoid my having to recreate it. ?

Edmund
Title: Re: Deconvolution sharpening revisited
Post by: EricWHiss on February 01, 2011, 06:44:29 PM
David,
The results from your method looked impressive!
Eric
Title: Re: Deconvolution sharpening revisited
Post by: David Ellsworth on February 01, 2011, 10:33:33 PM
Thanks Edmund, and thanks Eric. It is indeed simple, which makes me wonder why seemingly nobody else has thought of it. However I do think it has a lot of room for improvement, for example dealing with a noisy or quantized image — my current solution is to cut off frequencies that are noisy, but that results in ringing artifacts. Maybe I can add an algorithm that fiddles with the noisy frequencies in order to reduce the appearance of ringing (not sure at this point how to go about it, though). And there's of course the issue of edge effects, which I haven't tried to tackle yet.

I intend to post the source code, but it's rather messy right now (the main problem is that it uses raw files instead of TIFFs), so I'd like to clean it up first. Unless you'd really like to play with it right away, in which case I can post it as-is...

Meanwhile, I've improved the algorithm: 1) Do a gradual frequency cutoff instead of a threshold discontinuity; 2) Use the exact floating point kernel for deconvolution, instead of using a kernel-blurred white pixel rounded to 16 bits/channel.
The result: 0343_Crop+Diffraction+DFT_division_v3.jpg (1.2 MB) (http://kingbird.myphotos.cc/0343_Crop+Diffraction+DFT_division_v3.jpg)
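The post doesn't give the exact shape of the gradual cutoff in point 1; one common way to get a smooth rolloff instead of a hard divisor threshold is Wiener-style regularization, sketched here with `Hf` standing for the precomputed DFT of the origin-centred PSF (names are mine, not David's actual code):

```python
import numpy as np

def wiener_style_deconvolve(blurred, Hf, eps=1e-4):
    # Regularized division: attenuation grows smoothly as |H| falls,
    # giving a gradual frequency rolloff instead of an abrupt cutoff.
    Bf = np.fft.fft2(blurred)
    Gf = Bf * np.conj(Hf) / (np.abs(Hf) ** 2 + eps)
    return np.real(np.fft.ifft2(Gf))
```

Larger `eps` trades restored detail for noise suppression, which is the same trade-off the threshold controls, but without a discontinuity in frequency space.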

David
Title: Re: Deconvolution sharpening revisited
Post by: bradleygibson on February 01, 2011, 10:42:00 PM
David, what PSF did you use on your sample?  A simple Gaussian, or something more complex?
Title: Re: Deconvolution sharpening revisited
Post by: David Ellsworth on February 01, 2011, 11:28:46 PM
David, what PSF did you use on your sample?  A simple Gaussian, or something more complex?

Brad, I used the same exact one that Bart used, both for convolution and deconvolution:

I have supplied a link (http://www.xs4all.nl/~bvdwolf/main/downloads/Airy9x9%7Bp=6.4FF=100w=0.564f=32.%7D.dat) to the data file. You can read the dat file with Wordpad or a similar simple document reader. You can input those numbers (rounded to 16-bit values, or converted to 8-bit numbers by dividing by 65535, multiplying by 255, and rounding to integers). A small warning: the lower the accuracy, the lower the output quality will be. For convenience I've added a 16-bit Greyscale TIFF (http://www.xs4all.nl/~bvdwolf/main/downloads/N32.tif) (convert to RGB mode if needed). I have turned it into an 11x11 kernel (9x9 + black border) because the program you referenced apparently (from the description) requires a zero background level.

I used the floating point data file, not the TIFF.

My algorithm can work on a PSF up to the size of the image itself, in this case 1115x1115... but I'd have to synthesize an Airy disk myself, and haven't learned how to do that yet. So I just padded Bart's 9x9 one with black.
Title: Re: Deconvolution sharpening revisited
Post by: MichaelEzra on February 01, 2011, 11:46:04 PM
David, this looks really interesting, especially considering that the algorithm is fast.

It would be lovely to see something like this in a tool like RawTherapee, which provides a great image processing platform.
If you would be interested, I could help with UI implementation there.
Title: Re: Deconvolution sharpening revisited
Post by: David Ellsworth on February 02, 2011, 12:21:06 AM
David, this looks really interesting, especially considering that the algorithm is fast.

It would be lovely to see something like this in a tool like RawTherapee, which provides a great image processing platform.
If you would be interested, I could help with UI implementation there.

Michael,

Thanks for your interest. I'm not sure my algorithm is ready for that, though!

When I said it's "very fast" I meant relative to other deconvolution algorithms.
It takes about 2.5 seconds on my Intel E8400 (3.0 GHz) with DDR2 800 to process a 1115x1115 image single-threaded. It probably scales roughly as O(N log N) where N=X*Y, the number of pixels, since most of its time is taken calculating FFTs.
It takes 39 seconds to process a 4096x4096 image (yep, that's O(N log N) all right). But with multi-threading it could be much faster, so maybe it could indeed be practical for raw conversion...

But first things first. I need to write some code to make it adapt its frequency cutoff to which frequencies are most garbled by noise, and I need to address the awful edge effects it currently has, and of course make it work with non-square images (EDIT: It now works with non-square images).
Title: Re: Deconvolution sharpening revisited
Post by: ErikKaffehr on February 02, 2011, 01:12:06 AM
Hi!

Your effort is really appreciated! Thanks for sharing ideas, results and code.

Best regards
Erik

Thanks Edmund, and thanks Eric. It is indeed simple, which makes me wonder why seemingly nobody else has thought of it. However I do think it has a lot of room for improvement, for example dealing with a noisy or quantized image — my current solution is to cut off frequencies that are noisy, but that results in ringing artifacts. Maybe I can add an algorithm that fiddles with the noisy frequencies in order to reduce the appearance of ringing (not sure at this point how to go about it, though). And there's of course the issue of edge effects, which I haven't tried to tackle yet.

I intend to post the source code, but it's rather messy right now (the main problem is that it uses raw files instead of TIFFs), so I'd like to clean it up first. Unless you'd really like to play with it right away, in which case I can post it as-is...

Meanwhile, I've improved the algorithm: 1) Do a gradual frequency cutoff instead of a threshold discontinuity; 2) Use the exact floating point kernel for deconvolution, instead of using a kernel-blurred white pixel rounded to 16 bits/channel.
The result: 0343_Crop+Diffraction+DFT_division_v3.jpg (1.2 MB) (http://kingbird.myphotos.cc/0343_Crop+Diffraction+DFT_division_v3.jpg)

David
Title: Re: Deconvolution sharpening revisited
Post by: Graham Mitchell on February 02, 2011, 02:34:16 AM
Great results so far. I look forward to playing around with it some day (assuming code is released :) )
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 02, 2011, 04:14:02 AM
I hope no one minds my entering this a bit late. I've written a small C program that does deconvolution by Discrete Fourier Transform division (using the library "FFTW" to do Fast Fourier Transforms). To me this seems to beat all the other deconvolution algorithms that have been presented in this thread... I'd like to see what others think.

Hi David,

The more the merrier, I'm glad you joined.

Quote
My algorithm currently only works on square images, and deals with edge effects very badly, so I added black borders to Bart's max-quality jpeg crop making it 1115x1115, then applied the convolution myself using his provided 9x9 kernel. I operated entirely on 16-bit per channel data. The exact convoluted image that I worked from can be downloaded here (5.0 MB PNG file) (http://kingbird.myphotos.cc/0343_Crop+Diffraction_square_black_border.png). My result, saved as a maximum-quality JPEG (after re-cropping it to 1003x1107): 0343_Crop+Diffraction+DFT_division_v2.jpg (1.2 MB) (http://kingbird.myphotos.cc/0343_Crop+Diffraction+DFT_division_v2.jpg)

David, the results look too good to be true. Could you verify that you (your software) used the correct (convolved) file as input? It's not that I don't like the results; it's that all other algorithms I've tried cannot restore data that has been lost (too low S/N ratio) to f/32 diffraction. Maybe you didn't save the intermediate convolved/diffracted result, thus avoiding truncating the accuracy to 16-bit at best? Maybe you were performing all subsequent steps while keeping the intermediate results in floating point?

Quote
My algorithm takes a single white pixel on a black background the same size as the original image, applies the PSF blur to it, and divides the DFT of the blurred pixel by the DFT of the pixel (element by element, using complex arithmetic); this takes advantage of the fact that the DFT of a single white pixel in the center of a black background has a uniformly gray DFT. Then it takes the DFT of the convoluted image and divides this by the result from the previous operation. Division on a particular element is only done if the divisor is above a certain threshold (to avoid amplifying noise too much, even noise resulting from 16-bit quantization). An inverse DFT is done on the final result to get a deconvoluted image.

Yes, that's basic restoration by deconvolution in Fourier space.

Quote
This algorithm is very fast and does not need to go through iterations of successive approximation; it gets its best result right off the bat.

The iterations that were mentioned are part of the Richardson-Lucy algorithm, it's an iterative Bayesian maximum likelihood method. Regular deconvolution is faster, but poses other challenges. One of the things you can do to improve the edge performance is padding with e.g. mirrored image data, and making a final crop after deconvolution.
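Bart's mirror-padding suggestion can be sketched like this (the `deconvolve` argument stands for any frequency-domain restoration routine; the wrapper itself is hypothetical):

```python
import numpy as np

def deconvolve_with_mirror_pad(image, deconvolve, pad=32):
    # Surround the image with mirrored data so the DFT's implicit periodic
    # boundary sees a smooth continuation instead of a hard edge, run the
    # restoration, then crop back to the original frame.
    padded = np.pad(image, pad, mode='reflect')
    restored = deconvolve(padded)
    return restored[pad:-pad, pad:-pad]
```

The pad width should comfortably exceed the PSF support so that edge artifacts stay inside the region that gets cropped away.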

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: David Ellsworth on February 02, 2011, 01:12:01 PM
Hi Bart,

David, the results look too good to be true. Could you verify that you (your software) used the correct (convolved) file as input? It's not that I don't like the results; it's that all other algorithms I've tried cannot restore data that has been lost (too low an S/N ratio) to f/32 diffraction. Maybe you didn't save the intermediate convolved/diffracted result, which would otherwise have truncated the accuracy to 16 bits at best. Maybe you were performing all subsequent steps while keeping the intermediate results in floating point?

Well, in a way it is too good to be true. It's VERY sensitive to tiny changes in the convoluted input, and the edges need to be completely intact and fade out to black. But my program is indeed using a 16-bit/ch data file as input (the PNG I posted), which I'm absolutely sure of because 1) the convolution and deconvolution are done in separate runs, with only files on the hard disk as input and output, 2) I've tried messing with the convoluted input, which thoroughly ruins the deconvoluted output unless it is made less sensitive by increasing the frequency drop-off threshold, and 3) I've tried pasting your 16-bit/ch convoluted PNG on top of mine, and the deconvolution still looks good but has a bit of noise near the edges, probably because of the difference between the JPEG crop and the original crop.

Here's what happens if I pollute the convoluted input with just one white pixel, all else being equal: 0343_Crop+Diffraction+DFT_division_v3_whitepixel.jpg (1.5 MB) (http://kingbird.myphotos.cc/0343_Crop+Diffraction+DFT_division_v3_whitepixel.jpg)
I added the white pixel in an image editor, not using my program.

I would enjoy it if you posted another 16-bit/ch PNG of a convoluted image, without posting the original, but this time with edges that fade to black (i.e., pad the original with 8 pixels of black before applying a 9x9 kernel).

Best regards,
David
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 02, 2011, 02:24:12 PM
I would enjoy it if you posted another 16-bit/ch PNG of a convoluted image, without posting the original, but this time with edges that fade to black (i.e., pad the original with 8 pixels of black before applying a 9x9 kernel).

Hi David,

Okay, here it is, a 16-b/ch RGB PNG file with an 8 pixel black border, convolved with the same "N=32" kernel as before:

(http://www.xs4all.nl/~bvdwolf/main/downloads/7640_Crop+Diffraction.png)
http://www.xs4all.nl/~bvdwolf/main/downloads/7640_Crop+Diffraction.png

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: David Ellsworth on February 02, 2011, 03:42:55 PM
Hi David,

Okay, here it is, a 16-b/ch RGB PNG file with an 8 pixel black border, convolved with the same "N=32" kernel as before:

Tada:

(http://kingbird.myphotos.cc/7640_Crop+Diffraction+DFT_division_v3.jpg)

But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work (EDIT: indeed, it didn't work). I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

BTW, the edges can actually be any color, as long as it's uniform. I can just subtract it out (making some values negative within the image itself), and then add it back in before saving the result.

Cheers,
David
Title: Re: Deconvolution sharpening revisited
Post by: ErikKaffehr on February 02, 2011, 03:48:27 PM
+1

Great results so far. I look forward to playing around with it some day (assuming code is released :) )
Title: Re: Deconvolution sharpening revisited
Post by: eronald on February 02, 2011, 04:47:15 PM
David,


Seeing that the algorithm works, I think that some code in Matlab would significantly advance the topic at this point.
Maybe you could post the existing code, and then we could recode the thing in Matlab?

I'm a bit confused by the fact that you said you operate on the Raw; how are you doing the Raw deconvolution?

Edmund

Tada:

(http://kingbird.myphotos.cc/7640_Crop+Diffraction+DFT_division_v3.jpg)

But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work. I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

BTW, the edges can actually be any color, as long as it's uniform. I can just subtract it out (making some values negative within the image itself), and then add it back in before saving the result.

Cheers,
David
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 02, 2011, 05:20:30 PM
Tada:

(http://kingbird.myphotos.cc/7640_Crop+Diffraction+DFT_division_v3.jpg)

Pretty close. And here is (a max quality JPEG version of) the original before convolution:

(http://www.xs4all.nl/~bvdwolf/main/downloads/7640_Crop.jpg)


Quote
But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work. I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

As your experiment with the single white pixel shows, each and every pixel contributes to the reconstruction of the entire image, although the amplitude reduces with distance. The ringing at edges has to do with the abrupt discontinuity of the pixels contributing to the image. An image is presumed to have infinite dimensions, which can be approximated by creating a symmetric image, such as in the "Reflected" image padding example at:
http://reference.wolfram.com/mathematica/ref/ImagePad.html (http://reference.wolfram.com/mathematica/ref/ImagePad.html)

This also allows one to low-pass filter the larger Fourier transform with an appropriate function (a Gaussian is a simple one), although the highest spatial frequencies are sacrificed to prevent other artifacts.
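As a concrete sketch of the reflected padding and Gaussian low-pass described above (using NumPy's `reflect` pad mode; the margin size and Gaussian width are arbitrary illustrative choices):

```python
import numpy as np

# Mirror-pad so the DFT's implicit wrap-around sees no hard discontinuity;
# after deconvolution the margin would simply be cropped off again.
img = np.arange(16.0).reshape(4, 4)
margin = 2
padded = np.pad(img, margin, mode='reflect')

# Optional Gaussian low-pass in Fourier space: sacrifices the highest
# spatial frequencies in exchange for suppressed ringing artifacts.
h, w = padded.shape
fy = np.fft.fftfreq(h)[:, None]
fx = np.fft.fftfreq(w)[None, :]
lowpass = np.exp(-(fx**2 + fy**2) / (2 * 0.2**2))
smoothed = np.real(np.fft.ifft2(np.fft.fft2(padded) * lowpass))
cropped = smoothed[margin:-margin, margin:-margin]  # final crop, as Bart suggests
```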

It is also important to note that here the PSF is known to high accuracy, whereas in real-life situations we can only approximate it, and noise can spoil the party.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: crames on February 03, 2011, 09:19:29 AM
David, the results look too good to be true...

Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.

But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.

--typo edited
Title: Re: Deconvolution sharpening revisited
Post by: eronald on February 03, 2011, 09:44:22 AM
I'm vaguely following this - however, I would have thought that if the Fourier transform of the blur function is invertible then it's pretty obvious that you'll get the original back - with some uncertainty in areas with a lot of high-frequency content, due to noise in the original and computational approximation.
Edmund

Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.

But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.
Title: Re: Deconvolution sharpening revisited
Post by: PierreVandevenne on February 03, 2011, 11:14:05 AM
I think (possibly wrongly) that the main problem is estimating the real world PSF, both at the focal plane and ideally in a space around the focal plane (see Nijboer-Zernike). Inverting known issues is relatively easy in comparison. This is only based on my experience trying to fix astronomical images but I suspect (photographically vs astrophotographically) optical issues are equivalent, tracking errors are similar to camera movement and turbulence errors are similar to subject movement. Depth of field should make things more complex for photos. And of course, one doesn't lack distorted point sources in astro images.

Apologies if I missed the point, I haven't re-read the whole thread.
Title: Re: Deconvolution sharpening revisited
Post by: crames on February 03, 2011, 01:47:41 PM
I'm vaguely following this - however, I would have thought that if the fourier transform of the blur function is invertible then it's pretty obvious that you'll get the original back - with some uncertainty in areas with a lot of higher frequency due to noise in the original and computational approximation.

The frequencies above the cutoff frequency should be zero, so should not be invertible. (1/zero = ???)
Title: Re: Deconvolution sharpening revisited
Post by: eronald on February 03, 2011, 03:29:52 PM
The frequencies above the cutoff frequency should be zero, so should not be invertible. (1/zero = ???)

Speaking without experience, again, I would assume that is always the case. When you Fourier transform or DFT or whatever, you will truncate at the cutoff or twice the cutoff or whatever - after that it doesn't make sense anymore.

Maybe I should go and try actually doing some computational experiments with Matlab and an image processing book, and refresh my knowledge, rather than spout nonsense.
Edmund
Title: Re: Deconvolution sharpening revisited
Post by: David Ellsworth on February 03, 2011, 05:19:57 PM
I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.

Wow, you are absolutely right. It makes a HUGE difference in this case; 9x9 was far too small to nullify the higher-than-cutoff frequencies in the same way the full Airy disk does. I tried a 127x127 kernel and indeed, my algorithm now cannot recover those frequencies at all. Also I notice that global contrast is quite visibly reduced using the 127x127 kernel, which didn't happen with the 9x9 for obvious reasons.

I used the Airy disk formula from Wikipedia (http://en.wikipedia.org/wiki/Airy_disk#Mathematical_details), using the libc implementation of the Bessel function, double _j1(double). My result differed slightly from Bart's in the inner 9x9 pixels. Any idea why? Bart, was your kernel actually an Airy disk convolved with an OLPF?
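For reference, a center-sampled Airy kernel along these lines can be computed as follows. This is a sketch using SciPy's `j1` rather than libc's `_j1`; the parameter values follow the 6.4 micron / 564 nm / f/32 figures in Bart's and David's filenames:

```python
import numpy as np
from scipy.special import j1

def airy_kernel(size, pitch_um=6.4, wavelength_um=0.564, f_number=32.0):
    """Airy pattern sampled at sensel centers. Note: Bart integrated the
    intensity over each sensel's area instead, which is why center-sampled
    values differ slightly from his in the inner 9x9 pixels."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.hypot(x, y) * pitch_um               # radial distance in microns
    v = np.pi * r / (wavelength_um * f_number)  # Airy pattern argument
    with np.errstate(invalid='ignore', divide='ignore'):
        intensity = (2.0 * j1(v) / v) ** 2
    intensity[r == 0] = 1.0  # limit of (2*J1(v)/v)^2 as v -> 0
    return intensity / intensity.sum()          # normalize to unit volume

psf = airy_kernel(9)
```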

I attempted to save the 127x127 kernel in the same format as Bart posted his 9x9 (except for using a PNG instead of a TIFF for the picture version):
Airy127x127_6.4micron_564nm_f32.zip (data file) (http://kingbird.myphotos.cc/Airy127x127_6.4micron_564nm_f32.zip)
Airy127x127_6.4micron_564nm_f32.png (16-bit PNG) (http://Airy127x127_6.4micron_564nm_f32.png)

Here are the original, convoluted, and deconvoluted Fourier transforms (logarithm of the absolute value of each color channel) — click one to see the full DFT as a max-quality JPEG:
(http://kingbird.myphotos.cc/0343_Crop_DFT_thumbnail.jpg) (http://kingbird.myphotos.cc/0343_Crop_DFT.jpg) (http://kingbird.myphotos.cc/0343_Crop+Diffraction127x127_DFT_thumbnail.jpg) (http://kingbird.myphotos.cc/0343_Crop+Diffraction127x127_DFT.jpg) (http://kingbird.myphotos.cc/0343_Crop+Diffraction127x127+DFT_division_DFT_thumbnail.jpg) (http://kingbird.myphotos.cc/0343_Crop+Diffraction127x127+DFT_division_DFT.jpg)

The residual frequencies outside the ellipse are due to the use of a 127x127 kernel instead of one as big as the picture.

Convoluted 16-bit/channel PNG: 0343_Crop+Diffraction127x127.png (5.1 MB) (http://kingbird.myphotos.cc/0343_Crop+Diffraction127x127.png)
Deconvoluted max-quality JPEG: 0343_Crop+Diffraction127x127+DFT_division.jpg (1.1 MB) (http://kingbird.myphotos.cc/0343_Crop+Diffraction127x127+DFT_division.jpg)

So as you can see, my algorithm doesn't do so well with a real Airy disk. It has significant ringing. Do the other algorithms/programs demonstrated earlier in the thread deal with this 127x127-convoluted 16-bit PNG just as well as they dealt with the 9x9-convoluted one?

BTW, Bart, do you have protanomalous or protanopic vision? I notice you always change your links to blue instead of the default red, and I've been doing the same thing because the red is hard for me to tell at a glance from black, against a pale background.

Cheers,
David
Title: Re: Deconvolution sharpening revisited
Post by: eronald on February 03, 2011, 06:05:26 PM
I guess I can ask Norman whether Imatest could dump some real-world diffraction kernels.
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 03, 2011, 07:01:36 PM
Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.

But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.

Hi Cliff,

Indeed, it surprised me as well. I expected more frequencies to have too low a contribution after rounding to 16-bit integer values. However, the convolution kernel used is not a full (infinite) diffraction pattern, but is truncated at 9x9 pixels (assuming a 6.4 micron sensel pitch with 100% fill factor sensels). Since the convolution and the deconvolution are done with the same filter, the reconstruction can be (close to) perfect, within the offered floating point precision. Any filter would do, but I wanted to demonstrate something imaginable, based on an insanely narrow aperture.

Quote
I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.

I don't think that's necessary, unless one wants an even more accurate diffraction kernel. I can make larger kernels, but there are few applications (besides self-made software) that can accommodate them. Anyway, a 9x9 kernel covers some 89% of the power of a 99x99 kernel.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 03, 2011, 07:09:23 PM
I think (possibly wrongly) that the main problem is estimating the real world PSF, both at the focal plane and ideally in a space around the focal plane (see Nijboer-Zernike). Inverting known issues is relatively easy in comparison.

Hi Pierre,

That's correct. In this particular case it has been demonstrated that in theory, in a perfect world, the effects of e.g. diffraction (or defocus, or optical aberrations, or ...) can be reversed. So people who claim that blur is blur and all is lost are demonstrably wrong. However, the trick is in finding the PSF in the first place. Fortunately, with a combination of several different blur sources, we often end up with something that can be described by a combination of Gaussians. Many natural phenomena combine into a Gaussian type of distribution.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 03, 2011, 07:30:00 PM
I used the Airy disk formula from Wikipedia (http://en.wikipedia.org/wiki/Airy_disk#Mathematical_details), using the libc implementation of the Bessel function, double _j1(double). My result differed slightly from Bart's in the inner 9x9 pixels. Any idea why? Bart, was your kernel actually an Airy disk convolved with an OLPF?

Hi David,

Yes, my kernel is assumed to represent a 100% fill factor, 6.4 micron sensel pitch, kernel. I did that by letting Mathematica integrate the 2D function at each sensel position + or - 0.5 sensel.

Quote
BTW, Bart, do you have protanomalous or protanopic vision? I notice you always change your links to blue instead of the default red, and I've been doing the same thing because the red is hard for me to tell at a glance from black, against a pale background.

No, my color vision is normal. I change the color because it is more obvious in general, and follows the default Web conventions for hyperlinks (which may have had colorblind vision among the considerations for that color choice, I don't know). It's more obvious that it's a hyperlink and not just an underlined word. Must be my Marketing background, to reason from the perspective of the end users.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: David Ellsworth on February 03, 2011, 08:14:57 PM
Hi Bart,

I don't think that's necessary, unless one wants an even more accurate diffraction kernel. I can make larger kernels, but there are few applications (besides selfmade software) that can accommodate them. Anyway, a 9x9 kernel covers some 89% of the power of a 99x99 kernel.

But just look at the difference between 0343_Crop convolved with the 9x9 kernel and the 127x127 kernel. There's a huge difference which is visually immediately obvious: the larger kernel has a visible effect on global contrast. Admittedly it might not be as visually obvious if this were done on linear image data (with the gamma/tone curve applied afterwards, for viewing). But then, the ringing might not look as bad either.

There is of course another reason not to use large kernels. Applying a large kernel the conventional way is very slow; if I'm not mistaken it's O(N^2). But doing it through DFTs is fast (basically my algorithm in reverse), O(N log N). Of course the same problem about software exists.
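A toy comparison of the two approaches, direct sliding-window convolution versus multiplication of zero-padded DFTs (`conv_direct` and `conv_fft` are my own illustrative names):

```python
import numpy as np

def conv_direct(img, kern):
    """'Valid' convolution via an explicit sliding window.
    Cost grows with kernel area -- slow for large kernels."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    flipped = kern[::-1, ::-1]
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out

def conv_fft(img, kern):
    """Same result via DFT multiplication: O(N log N) regardless of kernel size."""
    kh, kw = kern.shape
    h, w = img.shape
    s = (h + kh - 1, w + kw - 1)  # zero-pad to the full linear-convolution size
    full = np.real(np.fft.ifft2(np.fft.fft2(img, s=s) * np.fft.fft2(kern, s=s)))
    return full[kh - 1:h, kw - 1:w]  # crop the 'valid' region

rng = np.random.default_rng(1)
img = rng.random((20, 20))
kern = rng.random((5, 5))
print(np.allclose(conv_direct(img, kern), conv_fft(img, kern)))  # True
```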

Yes, my kernel is assumed to represent a 100% fill factor, 6.4 micron sensel pitch, kernel. I did that by letting Mathematica integrate the 2D function at each sensel position + or - 0.5 sensel.

I don't understand. What is there for Mathematica to integrate? The Airy disk function does use the Bessel function, which can be calculated either as an integral or an infinite sum, but can't you just call the Bessel function in Mathematica? What did you integrate?

No, my color vision is normal. I change the color because it is more obvious in general, and follows the default Web conventions for hyperlinks (which may have had colorblind vision in the considerations for that color choice, I don't know). It's more obvious that it's a hyperlink and not just an underlined word. Must be my Marketing background, to reason from the perspective of the endusers.

Well then, it's a neat coincidence that this makes it easier for me to see your links.

BTW, should this thread perhaps be in the "Digital Image Processing" subforum instead of "Medium Format / Film / Digital Backs – and Large Sensor Photography"?

-David
Title: Re: Deconvolution sharpening revisited
Post by: ejmartin on February 04, 2011, 10:19:32 AM
There is of course another reason not to use large kernels. Applying a large kernel the conventional way is very slow; if I'm not mistaken it's O(N^2). But doing it through DFTs is fast (basically my algorithm in reverse), O(N log N). Of course the same problem about software exists.

One could process the image in blocks, say 512x512 or 1024x1024, with a little block overlap to mitigate edge effects; then the cost is only O(N).  

Quote
I don't understand. What is there for Mathematica to integrate? The Airy disk function does use the Bessel function, which can be calculated either as an integral or an infinite sum, but can't you just call the Bessel function in Mathematica? What did you integrate?

I would suspect one wants to box blur the Airy pattern to model the effect of diffraction on pixel values (assuming 100% microlens coverage).  The input to that is a pixel size, as Bart states.
Title: Re: Deconvolution sharpening revisited
Post by: jfwfoto on February 04, 2011, 11:04:09 AM
I can recommend the Focus Fixer plugin. I believe it is a deconvolution-type program, though they will not describe it as such, to protect their exclusivity. They have a database of camera sensors, so the PSF may be better than the ballpark options. The plugin corrects AA blurring when used at low settings, and at higher settings it can refocus images that are slightly out of focus. It will run on a selected area, so if only a part of the image needs to be refocused it can be done quickly, whereas running the plugin on the whole image takes a little time. DXO is another software package that has AA-blurring recovery designed for specific sensors, and I think it works well too. It also will not describe its 'secret' as deconvolution, but that is essentially what they are talking about.
Title: Re: Deconvolution sharpening revisited
Post by: David Ellsworth on February 04, 2011, 12:10:37 PM
I would suspect one wants to box blur the Airy pattern to model the effect of diffraction on pixel values (assuming 100% microlens coverage).  The input to that is a pixel size, as Bart states.

Oh, thanks. Now I understand — he integrated the Airy function over the square of each pixel. I made the mistake of evaluating it only at the center of each pixel, silly me.

What method does Mathematica use to integrate that? Is it actually evaluating to full floating point accuracy (seems unlikely)?

Edit: Now getting the same result as Bart in the inner 9x9 pixels, to 8-10 significant digits.
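The per-sensel integration can be approximated by averaging the Airy intensity over a grid of subsamples inside each pixel square. This is a sketch, not Bart's Mathematica code; adaptive quadrature as in `NIntegrate` would be more precise than the fixed 8x8 subsample grid assumed here:

```python
import numpy as np
from scipy.special import j1

def airy_intensity(r_um, wavelength_um=0.564, f_number=32.0):
    """Airy pattern intensity at radial distance r_um (microns)."""
    v = np.pi * r_um / (wavelength_um * f_number)
    out = np.ones_like(v)          # (2*J1(v)/v)^2 -> 1 as v -> 0
    nz = v != 0
    out[nz] = (2.0 * j1(v[nz]) / v[nz]) ** 2
    return out

def airy_kernel_integrated(size=9, pitch_um=6.4, sub=8):
    """Average sub x sub sample points inside each sensel square,
    approximating integration over the sensel area (100% fill factor)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Subsample offsets at the centers of sub x sub cells within one pixel.
    offsets = (np.arange(sub) + 0.5) / sub - 0.5
    kern = np.zeros((size, size))
    for dy in offsets:
        for dx in offsets:
            r = np.hypot(x + dx, y + dy) * pitch_um
            kern += airy_intensity(r)
    return kern / kern.sum()

kern = airy_kernel_integrated()
```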
Title: Re: Deconvolution sharpening revisited
Post by: ejmartin on February 04, 2011, 12:50:07 PM
Oh, thanks. Now I understand — he integrated the Airy function over the square of each pixel. I made the mistake of evaluating it only at the center of each pixel, silly me.

What method does Mathematica use to integrate that? Is it actually evaluating to full floating point accuracy (seems unlikely)?

Good question.  Looking in the documentation a bit, it seems it samples the domain adaptively and recursively until a target error estimate is reached.  Usually the precision is more than you will need, but it is specifiable if the default settings are insufficient; similarly the sampling method is specifiable from a list of choices.
Title: Re: Deconvolution sharpening revisited
Post by: Ernst Dinkla on June 27, 2011, 06:41:50 AM
Exported from another thread and forum but fitting here well in my opinion:


I then ran another test to see if altering my capture sharpening could improve things further. As I think you suggested, deconvolution sharpening could result in fewer artefacts, so I went back to the Develop Module and altered my sharpening to Radius 0.6, Detail 100, and Amount 38 (my original settings were Radius 0.9, Detail 35, Amount 55). The next print gained a little more acutance as a result with output sharpening still set to High, with some fine lines on the cup patterns now becoming visible under the loupe. Just for fun, I am going to attach 1200 ppi scans of the prints so you can judge for yourselves, bearing in mind that this is a very tiny section of the finished print.

John

John,

This is not the forum to discuss scanning and sharpening, but I am intrigued by some aspects/contradictions of deconvolution sharpening, flatbed scanning and film grain. I have seen a thread on another LL forum (you were there too) that discussed deconvolution sharpening, but little information on flatbed scanners and film grain. I have a suspicion that on flatbeds diffraction plays an important role in the loss of sharpness (engineers deliberately use a small stop for several reasons), while at the same time the oversampling used on most Umax and Epson models keeps (aliased) grain in the scan low and delivers an acceptable dynamic range. In another thread Bart mentioned the use of a slanted edge target on a flatbed to deliver a suitable base for the sharpening. I would be interested in an optimal deconvolution sharpening route for an Epson V700 while still keeping grain/noise at bay. Noise too, as I use that scanner also for reflective scans.





met vriendelijke groeten, Ernst
Try: http://groups.yahoo.com/group/Wide_Inkjet_Printers/
Title: Re: Deconvolution sharpening revisited
Post by: hjulenissen on June 27, 2011, 06:55:17 AM
How would one go about characterizing a lens/sensor as "perfectly" as possible, in such a way as to generate suitable deconvolution kernels? I imagine that they would at least be a function of distance from the lens centre point (radially symmetric), aperture and focal length. Perhaps also scene distance, wavelength and the non-radial spatial coordinate. If you want to have a complete PSF as a function of all of those without a lot of sparse sampling/interpolation, you have to make a serious number of measurements. It would be neat as an exercise in "how good can deconvolution be in a real-life camera system".

A practical limitation would be the consistency of the parameters (variation over time) and typical sensor noise. I believe that typical kernels would be spatial high-pass (boost) filters, meaning that any sensor noise will be amplified relative to real image content.

-h
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 27, 2011, 10:18:35 AM
In another thread Bart mentioned the use of a slanted edge target on a flatbed to deliver a suitable base for the sharpening. I would be interested in an optimal deconvolation sharpening route for an Epson V700 while still keeping grain/noise at bay. Noise too as I use that scanner also for reflective scans.

Hi Ernst,

Allow me to make a few remarks/observations before answering. The determination of scanner resolution is described in an ISO norm, and it uses a "slanted edge" target to determine the resolution (SRF or MTF) in 2 directions (the fast scan direction, and the slow scan direction):
Photography -- Spatial resolution measurements of electronic scanners for photographic images
Part 1: Scanners for reflective media (http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=29702) and
Part 2: Film scanners (http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=35201)
In both cases slanted edge targets are used, but obviously on different media/substrates. These targets are offered by several suppliers, and given the low volumes and strict tolerances they are not really cheap.

I have made my own slanted edge target for filmscanners, as a DIY project, from a slide mount holding a straight razor blade and positioned at an approx. 5.7 degrees slant. This worked slightly better than using folded thin alumin(i)um foil, despite the bevelled edge of the razor blade. I used black tape to cover the holes in the blade and most of the surface of the blade to reduce internal reflections and veiling glare.

This allowed me to determine the resolution capabilities or rather the Spatial Frequency Response (=MTF) of a dedicated film scanner (which allows focusing), in my case by using the Imatest software that follows that ISO method of determination. It also allowed me to quit scanning 35mm film when the Canon EOS-1Ds Mark II arrived (16MP on that camera effectively matched low ISO color film resolution). I haven't shot 35mm film since.

There is however an important factor we might overlook. When we scan film, we are in fact convolving the ideal image data with the system Point Spread Function (PSF) of both the camera (lens and film) and the scanner (assuming perfect focus). That combined system PSF is what we really want to use for deconvolution sharpening. The scanner PSF alone will help to restore whatever blur the scan process introduced, but it produces a sub-optimal restoration when film is involved. It would suffice for reflection scans though; it could even compensate somewhat for the mismatched focus at the glass platen surface used for reflection scans.

Therefore, for the construction of a (de)convolution kernel, one can take a photographic recording of a printed copy of a slanted edge target, on low ISO color film. One can use a good lens at the optimal aperture as a best case PSF scenario. Even other areas of the same frame, like the corners, will benefit. For a better sharpening one can blend between a corner and the center PSF deconvolution. One can repeat that for different apertures and lenses. However, that will produce a sizeable database of PSFs to cover the various possibilities, and still not cover unknown sources.

Luckily, when a number of blur sources are combined, as is often the case with natural phenomena, the combined PSFs of several sources will resemble a Gaussian PSF shape. This means that we can approximate the combined system PSF with a relatively simple model, which even allows one to empirically determine the best approach for unknown sources. It won't be optimal from a quality point of view, but achieving optimal would require a lot of work. Perhaps close is good enough in >90% of the cases?
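The Gaussian approximation is convenient partly because independent Gaussian blurs compose predictably: their variances add. A small numerical sketch (the sigmas and kernel radius are arbitrary illustrative values):

```python
import numpy as np

def gaussian_kernel(sigma, radius=8):
    """Discrete, centered 2D Gaussian PSF normalized to unit volume."""
    ax = np.arange(-radius, radius + 1)
    g = np.exp(-ax**2 / (2.0 * sigma**2))
    kern = np.outer(g, g)
    return kern / kern.sum()

def conv_full(a, b):
    """Full linear convolution of two kernels via zero-padded DFTs."""
    s = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    return np.real(np.fft.ifft2(np.fft.fft2(a, s=s) * np.fft.fft2(b, s=s)))

def variance_1d(kern):
    """Second moment of a centered square kernel along one axis."""
    r = kern.shape[1] // 2
    x = np.arange(-r, r + 1)
    return float(np.sum(kern * x[None, :] ** 2))

# Two blur stages combine into (nearly) one Gaussian whose variance is the
# sum of the individual variances: sigma_total^2 = sigma1^2 + sigma2^2.
sigma1, sigma2 = 0.8, 1.1
combined = conv_full(gaussian_kernel(sigma1), gaussian_kernel(sigma2))
print(variance_1d(combined), sigma1**2 + sigma2**2)
```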

So my suggestion for filmscans is to try the empirical path, e.g. with "Rawshooter" which also handles TIFFs as input (although I don't know how well it behaves with very large scans), or with Focusmagic (which also has a film setting to cope with graininess), or with Topazlabs InFocus (perhaps after a mild prior denoise step).

For reflection scans, and taking the potentially suboptimal focus at the surface of the glass platen into account, one could use a suitable slanted edge target and build a PSF from it. I have made a target out of thin self-adhesive black-and-white PVC foil. That allows a very sharp edge when one uses a sharp knife to cut it. Just stick the white foil on top of the black foil, which will hopefully reduce the risk of white clipping in the scan, or add a thin gelatin ND filter between the target and the platen if the exposure cannot be influenced.

Unfortunately there are only a few software solutions that take a custom PSF as input, so perhaps an empirical approach can be used here as well. Topazlabs InFocus can generate and automatically use an estimated deconvolution by letting it analyse an image/scan with adequate edge-contrast detail. That should work pretty well for reflection scans, because there is no depth-of-field issue when scanning most flat originals (although scans of photos can be a challenge, depending on the scene). Unfortunately, the contrasty edges need to be part of the same scene (or added to the scan file), because I think InFocus doesn't allow storing the estimated solution; it could, however, save as a preset a normal deconvolution with settings optimized on a PSF or on a simple piece of detailed artwork.

As for sharpening noise, I don't think that deconvolution necessarily sharpens the multi-pixel graininess/dye clouds, although it might 'enhance' some of the finest dye clouds. It just depends on the blur radius that helps the image detail, which isn't necessarily the same radius as some of the graininess.

Sorry for the long answer. Been there, done that; there is so much to explain and take into account.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 27, 2011, 11:15:06 AM
How would one go about characterizing a lens/sensor as "perfectly" as possible, in such a way as to generate suitable deconvolution kernels? I imagine that they would at least be a function of distance from the lens centre point (radially symmetric), aperture, and focal length; perhaps also subject distance, wavelength, and the non-radial spatial coordinate. If you want a complete PSF as a function of all of those without a lot of sparse sampling/interpolation, you have to make a serious number of measurements. It would be neat as an exercise in "how good can deconvolution be in a real-life camera system".

Indeed, a lot of work and lots of data, but also limited possibilities to use the derived PSF. Mind you, this is exactly what is being worked on behind the scenes: spatially variant deconvolution, potentially estimated from the actual image detail.

Quote
A practical limitation would be the consistency of the parameters (variation over time) and typical sensor noise. I believe that typical kernels would be spatial high-pass(boost), meaning that any sensor noise will be amplified compared to real image content.


There is some potential to combat the noise influence. Many images have their capture (shot) noise and read noise recorded at the sensel level, yet after demosaicing that noise becomes larger than a single pixel, while the demosaiced luminance detail can approach per-pixel resolution quite closely. So there is some possibility to suppress some of the lower-frequency noise without hurting the finest detail too much. In addition, one can intelligently mask low spatial frequency areas (where noise is more visible) to exclude them from the deconvolution.

Noise remains a challenge for deconvolution, but there are a few possibilities that can be exploited to sharpen detail more than noise, thus improving the S/N ratio.
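
One hedged way to implement the masking idea, assuming nothing about any particular product: estimate local variance, and blend the sharpened result back only where there is real detail, leaving smooth areas (where noise shows most) untouched. `masked_sharpen` and its defaults are hypothetical, purely for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def masked_sharpen(img, sharpened, window=7, threshold=0.02):
    """Hypothetical helper: keep the sharpened result only where local
    detail exists; smooth areas, where noise is most visible, keep the
    original pixels. `window` and `threshold` are made-up defaults."""
    # Local variance via E[x^2] - E[x]^2 over a sliding window.
    local_mean = uniform_filter(img, window)
    local_var = uniform_filter(img * img, window) - local_mean ** 2
    mask = (local_var > threshold ** 2).astype(float)
    mask = gaussian_filter(mask, 2.0)  # feather the transition
    return mask * sharpened + (1.0 - mask) * img
```

On a synthetic step edge, pixels near the edge follow the sharpened image while flat regions come back untouched, which is the behavior described above.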

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Ernst Dinkla on June 27, 2011, 11:57:48 AM

So my suggestion for film scans is to try the empirical path, e.g. with "Rawshooter", which also handles TIFFs as input (although I don't know how well it behaves with very large scans), or with FocusMagic (which also has a film setting to cope with graininess), or with Topazlabs InFocus (perhaps after a mild prior denoise step).

For reflection scans, taking the potentially suboptimal focus at the surface of the glass platen into account, one could use a suitable slanted-edge target and build a PSF from it. I have made a target out of thin self-adhesive black-and-white PVC foil, which allows a very sharp edge when cut with a sharp knife. Just stick the white foil on top of the black foil, which will hopefully reduce the risk of white clipping in the scan, or add a thin gelatin ND filter between the target and the platen if the exposure cannot be influenced.

Cheers,
Bart

Bart,  thank you for the explanation.

That there is a complication in camera film scanning is something I expected: two optical systems build the image. Yet I expect that the scanner optics may have a typical character that could be defined separately, for both film scanning and reflective scanning. The diffraction-limited character of the scanner lens, plus the multisampling sensor/stepping in that scanner, should be detectable, I guess, and treating it with a suitable sharpening would be a more effective first step. There are not that many lenses used for the films to scan, and I wonder if that part of the deconvolution could be done separately. It would be interesting to see whether a typical Epson V700 restoration sharpening could be used by other V700 owners, separate from their camera influences.

For resolution testing of the Nikon 8000 scanner I had some slanted-edge targets made on litho film on an imagesetter, with the slanted edge parallel to the laser beam for a sharp edge and high contrast. Not that expensive: I had them run with a normal job for a graphic film. That way I could use the film target in wet mounting, where a cut razor or cut vinyl tape would create its own linear fluid lens on the edge, a thing better avoided. Of course I have to do the scan twice, once for each direction.

In your reply you probably kick the legs off of that chair I am sitting on.... I will have a look at the applications you mention.


met vriendelijke groeten, Ernst

Try: http://groups.yahoo.com/group/Wide_Inkjet_Printers/
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 28, 2011, 05:58:43 AM
Bart,  thank you for the explanation.

That there is a complication in camera film scanning is something I expected: two optical systems build the image. Yet I expect that the scanner optics may have a typical character that could be defined separately, for both film scanning and reflective scanning. The diffraction-limited character of the scanner lens, plus the multisampling sensor/stepping in that scanner, should be detectable, I guess, and treating it with a suitable sharpening would be a more effective first step. There are not that many lenses used for the films to scan, and I wonder if that part of the deconvolution could be done separately. It would be interesting to see whether a typical Epson V700 restoration sharpening could be used by other V700 owners, separate from their camera influences.

Hi Ernst,

Well, since the system MTF is built by multiplying the component MTFs, it makes sense to improve the worst contributor first, as it will boost the total MTF most. I'm not so sure that diffraction is a big issue; after all, several scanners use linear-array CCDs, which is probably easier to tackle with mostly cylindrical lenses, and to reduce heat they are operated pretty wide open. One thing is sure though: defocus will kill your MTF very fast, so for a reflective scanner the mismatch between the focus plane and the surface of the glass platen will cause an issue, which could be addressed by deconvolution sharpening.
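
The multiply-the-MTFs point can be illustrated with a toy calculation (the three linear MTF curves below are made-up numbers, not measurements of any scanner):

```python
import numpy as np

# Toy system MTF: the product of (made-up, linear) component MTFs.
f = np.linspace(0.0, 1.0, 6)                      # normalized frequency
mtf_lens    = np.clip(1.0 - 0.4 * f, 0.0, 1.0)
mtf_sensor  = np.clip(1.0 - 0.6 * f, 0.0, 1.0)
mtf_defocus = np.clip(1.0 - 1.5 * f, 0.0, 1.0)    # the worst contributor

system = mtf_lens * mtf_sensor * mtf_defocus

# Halving the defocus blur lifts the system MTF more than replacing the
# lens with a perfect one, because the weakest factor dominates.
half_defocus = mtf_lens * mtf_sensor * np.clip(1.0 - 0.75 * f, 0.0, 1.0)
perfect_lens = 1.0 * mtf_sensor * mtf_defocus
```

Since the product can never exceed its smallest factor, fixing defocus first is the efficient move, exactly as argued above.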

So I wouldn't mind making a PSF based on a slanted-edge scan, presuming we can find an application that takes it as input for deconvolution. What would work anyway is to tweak the deconvolution settings to restore as much of a slanted-edge scan (excluding the camera 'system' MTF) as possible, and compare that setting to the full deconvolution of an average color film/print scan (including the camera 'system' MTF).

Quote
For resolution testing the Nikon 8000 scanner I had some slanted edge targets made on litho film on an image setter. The slanted edge parallel to the laser beam for a sharp edge and a high contrast. Not that expensive, I had them run with a normal job for a graphic film. That way I could use the film target in wet mounting where a cut razor or cut vinyl tape would create its own linear fluid lens on the edge, a thing better avoided. Of course I have to do the scan twice for both directions.

Yes, for a scanner that allows adjusting its exposure, that might help to avoid highlight clipping. I'm a bit concerned about shadow clipping though, because graphic films can have a reasonably high D-max, which might throw off the slanted-edge evaluation routines in Imatest.

Quote
In your reply you probably kick the legs off of that chair I am sitting on.... I will have a look at the applications you mention.

No harm intended, but sometimes we have to settle for a sub-optimal solution. It might work well enough when we're working at the limit of human visual acuity. For magnified output, we of course try to push unavoidable compromises as late in the workflow as possible.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Ernst Dinkla on June 28, 2011, 10:44:26 AM
Hi Ernst,

Well, since the system MTF is built by multiplying the component MTFs, it makes sense to improve the worst contributor first, as it will boost the total MTF most. I'm not so sure that diffraction is a big issue; after all, several scanners use linear-array CCDs, which is probably easier to tackle with mostly cylindrical lenses, and to reduce heat they are operated pretty wide open. One thing is sure though: defocus will kill your MTF very fast, so for a reflective scanner the mismatch between the focus plane and the surface of the glass platen will cause an issue, which could be addressed by deconvolution sharpening.

So I wouldn't mind making a PSF based on a slanted-edge scan, presuming we can find an application that takes it as input for deconvolution. What would work anyway is to tweak the deconvolution settings to restore as much of a slanted-edge scan (excluding the camera 'system' MTF) as possible, and compare that setting to the full deconvolution of an average color film/print scan (including the camera 'system' MTF).

Yes, for a scanner that allows adjusting its exposure, that might help to avoid highlight clipping. I'm a bit concerned about shadow clipping though, because graphic films can have a reasonably high D-max, which might throw off the slanted-edge evaluation routines in Imatest.

No harm intended, but sometimes we have to settle for a sub-optimal solution. It might work well enough when we're working at the limit of human visual acuity. For magnified output, we of course try to push unavoidable compromises as late in the workflow as possible.

Cheers,
Bart

The V700 scanner lenses have a longer focal length than I expected; the path is folded up with five mirrors, I think. The symmetrical six-element lenses, either Plasmat or early Planar designs, are small, yet there is still enough depth of focus, something you would expect of a wide-angle lens. To achieve that, and even sharpness over the entire scan width, I thought a diffraction-limited lens design would be a good choice, compromising on centre sharpness but improving the sides. The scanner lamp still has that reflector design which compensates for the light fall-off at the sides. There are two lenses: one scanning 150 mm wide for the normal film carriers, in focus at about 2.5 mm on my V700, and one that covers the total scan-bed width, to be used for reflective scanning and 8x10" film on the bed, so its focus should be close to the bed. So it has to be done for two lens/carrier combinations. I am using a Doug Fisher wet-mount glass carrier with the film wet-mounted to the underside of the glass; the focus can be adjusted with small nylon screws, all fine-tuned meanwhile.

Your path to get there is the most practical, I think. The opaqueness of the slanted-edge film that I have is 5.3 D; I did discuss it at the time with the imagesetting shop. My probably naïve assumption was that it should be as high as possible and the edge as sharp as possible. When I compared my Nikon 8000 MTF results some years back, they differed from other user tests, but I assumed it was related to the tweaking of my Nikon wet-mount carriers, focusing, etc.
http://www.photo-i.co.uk/BB/viewtopic.php?p=14907&sid=00cf64bad077b78d1f3f8bf70172afef
When later on I used another MTF tool (Quick MTF demo) on the same data, the results were also different. Imatest is growing beyond my budget for tools like that.
So the target I made may not be the right one. A pity, though: what could be better than a film target, a 5.3 D black emulsion less than 30 microns thick, with a laser-created edge :-)

met vriendelijke groeten, Ernst

New: Spectral plots of +250 inkjet papers:

http://www.pigment-print.com/spectralplots/spectrumviz_1.htm
Title: Deconvolution sharpening revisited
Post by: sjprg on October 11, 2011, 12:13:15 PM
Adobe has probably used some of this discussion in a new prototype.
Shown at Adobe Max

http://www.pcworld.com/article/241637/adobe_shows_off_prototype_blurfixing_feature.html

Paul
Title: Re: Deconvolution sharpening revisited
Post by: eronald on October 11, 2011, 04:35:10 PM
Adobe has probably used some of this discussion in a new prototype.
Shown at Adobe Max

http://www.pcworld.com/article/241637/adobe_shows_off_prototype_blurfixing_feature.html

Paul

They probably purchased a Russian mathematician.

The interesting question, which Adobe needs to demonstrate having solved, is not whether images can be enhanced; it is whether they can be enhanced while looking nice.

Edmund
Title: Re: Deconvolution sharpening revisited
Post by: sjprg on October 12, 2011, 12:11:43 AM
Hi Edmund; it's been a long time.
I am glad to see Adobe at least working on it. At present I'm using the adaptive Lucy-Richardson from the program ImagesPlus.
Paul
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on October 12, 2011, 01:45:26 AM
I've been using it since ver 2.82. Awesome program.
Title: Re: Deconvolution sharpening revisited
Post by: Schewe on October 12, 2011, 02:04:10 AM
They probably purchased a Russian mathematician.

The presenter's name is Jue Wang (not Russian, which shows a bias on your part), and yes, it's based on a method of computing a PSF from multi-directional camera shake.

I saw something at MIT (don't know if this was the same math) that was able to compute and correct for this sort of blurring... it ain't easy to compute the multi-directional PSF, but if you can, you can de-blur images pretty successfully... up to a point.

It demos well, but note that the presenter loaded some "presets" that may have been highly tested and taken a long time to figure out. Don't expect this in the "next version of Photoshop", but it is interesting (and useful) research for Adobe to be doing...
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on October 12, 2011, 03:05:54 AM
Here is a very fast sharpen of a crop: two ground squirrels, print-screened into Paint.

This took maybe 3 seconds due to the small size; a full 16 MB picture would take about a minute. It's a multi-threaded 64-bit program.
Title: Re: Deconvolution sharpening revisited
Post by: jguentert on January 09, 2012, 04:30:41 PM
For all those who, like me, are interested in deconvolution, this may be interesting. I'm still searching for a convincing solution for the Mac. I bought Topaz InFocus but I am not satisfied with it. I would pay a lot to get the imperfect but easy-to-use Focus Magic updated for CS4 or CS5 (without the need for Rosetta), or Caron with a more convenient Mac interface.

But meanwhile I found sth. relatively new in this special field: Back In Focus (http://www.metakine.com/products/backinfocus/) Currently implemented algorithms: Unsharp masking (fast and full), Wiener finite and infinite impulse response, Richardson-Lucy (with a thresholding variant), Linear algebra deconvolution.

"Back In Focus" is absolutely not for beginners. It took me some days to learn how to use it, and there's room for improvement here. But once you've got the hang of it, you'll get better pictures.

http://www.metakine.com/products/backinfocus/

Example:

(http://www.metakine.com/lang/en/img/products/backinfocus/screenshots/screenshot1.png)
Title: Re: Deconvolution sharpening revisited
Post by: BernardLanguillier on January 09, 2012, 06:01:08 PM
"Back In Focus" is absolutely not for beginners. It took me some days to learn how to use it, and there's room for improvement here. But once you've got the hang of it, you'll get better pictures.

http://www.metakine.com/products/backinfocus/

Thanks for the link, it sounds interesting.

Would you say its main strength is to further improve the detail of sharply focused images, or is it more devoted to recovering blurred images (hand shake, focus softness, ...)?

Cheers,
Bernard
Title: Re: Deconvolution sharpening revisited
Post by: EricWHiss on January 09, 2012, 06:47:10 PM
Thanks, I'll give this a whirl. I do like Caron, much better than Topaz. I know there must also be some tools for ImageJ, but I haven't tried any. Has anyone tested any deconvolution plugins for ImageJ that they like?

Title: Re: Deconvolution sharpening revisited
Post by: jguentert on January 09, 2012, 07:03:36 PM
It's definitely devoted to recovering blurred images.
Title: Re: Deconvolution sharpening revisited
Post by: BernardLanguillier on January 09, 2012, 08:04:01 PM
It's definitely devoted to recovering blurred images.

Thanks for the feedback.

Cheers,
Bernard
Title: Re: Deconvolution sharpening revisited
Post by: EricWHiss on January 09, 2012, 10:35:01 PM
Well, this is not exactly intuitive, and at least some of the routines run very slowly.
Title: Re: Deconvolution sharpening revisited
Post by: sjprg on December 27, 2012, 03:53:41 PM
Hey guys, has the interest waned again, or have we beat the subject to death? I don't have the math to contribute but I do have an interest in following the discussion.
Regards
Paul
Title: Re: Deconvolution sharpening revisited
Post by: walter.sk on December 27, 2012, 09:54:01 PM
Back In Focus (http://www.metakine.com/products/backinfocus/) Currently implemented algorithms: Unsharp masking (fast and full), Wiener finite and infinite impulse response, Richardson-Lucy (with a thresholding variant), Linear algebra deconvolution.

I tried downloading it but couldn't install it on a Windows machine. Is it only for Mac?
Title: Re: Deconvolution sharpening revisited
Post by: sjprg on December 28, 2012, 02:40:01 AM
MAC only
Title: Re: Deconvolution sharpening revisited
Post by: walter.sk on December 28, 2012, 12:32:49 PM
MAC only
Hmmm! Oh, well. Now, if only they would upgrade Focus Magic to 16-bit or more. With all of its faults, I had found it the best for "capture sharpening."
Title: Re: Deconvolution sharpening revisited
Post by: bjanes on December 28, 2012, 01:54:12 PM
Hmmm! Oh, well. Now, if only they would upgrade Focus Magic to 16-bit or more. With all of its faults, I had found it the best for "capture sharpening."

I see that Focus Magic will have a beta for 64-bit Windows in January 2013 and for the Mac in March 2013. AFAIK the old version does support 16-bit files, but I have not used it for some time, since I upgraded to 64-bit Windows. One can use it with 32-bit Photoshop, but it is a pain to switch back and forth between 32 and 64 bit.

Bill
Title: Re: Deconvolution sharpening revisited
Post by: sjprg on December 28, 2012, 02:08:08 PM
I write them about four times a year requesting an update. They keep responding, "We are too busy on other projects."
Hopefully someday they will give us an update; they had the best available. In the meantime I use ACR 7.3 at
amount=0, radius=.5, detail=100, which seems to work very well.
Regards
Paul
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on March 16, 2013, 03:12:05 PM
I think you are fixating on the particular implementation (that Bart used) rather than considering the method in general. Typically most of the improvement to be had with RL deconvolution comes in the first few tens of iterations, and the method can be quite fast (as it is in RawTherapee, FocusMagic, and RawDeveloper, for instance). A good implementation of RL will converge much faster than 1K iterations. It's hard to say what is causing the ringing tails in Bart's example; it could be the truncation of the PSF, or it could be something else. I would imagine that the dev team at Adobe has spent much more time tweaking their deconvolution algorithm than the one day Bart spent working up his example.

But I would ask again, why do you want to throw dirt on deconvolution methods if you are lavishing praise on ACR 6.1?

This thread was linked in a current thread so I am bringing up my 2 bits years later.

I have been using deconvolution for years. Sometimes in an image I can find something that lets me use the custom PSF function in ImagesPlus. When this happens, a single cycle of deconvolution dramatically improves the image blur. Further cycles using that PSF actually don't work better than a regular function like a Gaussian, for the obvious reason that the PSF no longer matches the state of the image. To really use a good custom PSF you have to modify it each cycle as the image improves.

Using a 1000-cycle approach is fine if you use a very mild under-correction, wanting it to converge to something. I have never gone more than 250 cycles with a mild function, because I simply do not see further improvements. I mostly use a 3x3 Gaussian, for example, even if I know the image needs a couple of cycles with a 7x7; the artifact damage to other parts of the image is much less.

My rule of thumb is: if I can't do it with 50 cycles, I am using the wrong function. Usually I just use 10 cycles, sometimes fewer; I have images where I stop after 3 cycles of a 3x3 Gaussian, the lightest setting the program runs.
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on March 16, 2013, 05:12:47 PM
This thread was linked in a current thread so I am bringing up my 2 bits years later.

Hi Arthur,

The thread is as relevant today as it was when it was started; no problem with people adding to it. RawTherapee improved the speed of its RL sharpening implementation just a few days ago, and FocusMagic is beta-testing the long-overdue 64-bit version of their plugin. So the subject is very much alive.

Quote
I have been using deconvolution for years. Sometimes in an image I can find something that lets me use the custom PSF function in ImagesPlus. When this happens, a single cycle of deconvolution dramatically improves the image blur. Further cycles using that PSF actually don't work better than a regular function like a Gaussian, for the obvious reason that the PSF no longer matches the state of the image. To really use a good custom PSF you have to modify it each cycle as the image improves.

That depends on the algorithm used, but the Richardson-Lucy deconvolution in particular is an iterative algorithm that internally iterates to optimize certain parameters. It's not simply a repetition of the same deconvolution with the same PSF.

To demonstrate that multiple iterations do serve a purpose, especially with a low noise image, I've created the following example:

Here is the test target that was used for the following demonstration:
(http://bvdwolf.home.xs4all.nl/temp/LuLa/SinusoidalSiemensStar_500px.png)

Here is the sigma=1.0 PSF that was used to convolve (blur) the image, and then deconvolve the blurred 'star' version.
(http://bvdwolf.home.xs4all.nl/temp/LuLa/PSF010-pointsampled.png) (http://bvdwolf.home.xs4all.nl/temp/LuLa/PSF010-pointsampled.tiff)
Click on the image to get a higher accuracy 32-bit floating point TIFF version of the PSF

Here is the convolution result:
(http://bvdwolf.home.xs4all.nl/temp/LuLa/SinusoidalSiemensStar_500px _PSF010Blur.png)

Here is an RL deconvolution result after 10 iterations; resolution has improved:
(http://bvdwolf.home.xs4all.nl/temp/LuLa/SinusoidalSiemensStar_500px _PSF010Blur+RLx10.png)

Here is an RL deconvolution result after 100 iterations; it has better resolution:
(http://bvdwolf.home.xs4all.nl/temp/LuLa/SinusoidalSiemensStar_500px _PSF010Blur+RLx100.png)

Here is an RL deconvolution result after 1000 iterations; it has even better resolution, although not all resolution could be restored.
Once the blur reduces image detail to zero contrast, all is lost:
(http://bvdwolf.home.xs4all.nl/temp/LuLa/SinusoidalSiemensStar_500px _PSF010Blur+RLx1000.png)

The RL implementation used was from PixInsight Version 1.8, and I used the unregularized RL version, because there was no noise in this artificial (CGI) image.

Quote
My rule of thumb is if I cant do it with 50 cycles I am using the wrong function. Usually I just use 10 cycles. Sometimes less, I have images where I stop after 3 cycles of a 3x3 gauss, the lightest the program runs.

Often, 10 or 20 iterations are plenty, especially with some noise in the image and a less-than-perfect PSF (which is the common situation). I do see a benefit in 'rolling your own PSF', because it allows making more accurate versions than the few defaults that many programs offer. Since Gaussian-shaped PSFs are often close to ideal, my PSF generator tool (http://bvdwolf.home.xs4all.nl/main/foto/psf/PSF_generator.html) can assist in producing a number of PSFs to choose from. The numerical data can be copied and pasted into a text file, which e.g. ImageJ can import as a space-separated text image, for those applications that take a PSF image as input. Others will need to copy and paste the data into dialog boxes.
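
A pixel-integrated Gaussian PSF (as opposed to one point-sampled at pixel centers) can be generated along these lines; this is only an illustration of the idea, not the output of Bart's generator tool:

```python
import numpy as np
from math import erf, sqrt

def gaussian_psf(sigma, radius=None):
    """Illustrative pixel-integrated Gaussian PSF: each cell holds the
    Gaussian probability mass falling inside that pixel (computed via
    the error function), which is more accurate than point-sampling the
    density at pixel centers."""
    if radius is None:
        radius = int(np.ceil(3.0 * sigma))  # cover +/- 3 sigma
    def cdf(t):
        return 0.5 * (1.0 + erf(t / (sigma * sqrt(2.0))))
    # Pixel edges at half-integer positions; per-pixel mass by differencing.
    edges = np.arange(2 * radius + 2) - radius - 0.5
    mass = np.diff(np.array([cdf(e) for e in edges]))
    # A Gaussian is separable, so the 2-D kernel is an outer product.
    psf = np.outer(mass, mass)
    return psf / psf.sum()
```

The resulting array can be pasted into tools that accept a numeric PSF, in the same spirit as the copy-and-paste workflow described above.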

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on March 16, 2013, 11:07:01 PM
Bart, I understand the role of iteration. What I am referring to is a "custom" PSF, not a standard distribution. A custom PSF, to me, is something selected from the image itself.

Your demonstration runs a math function on a chart and then reverses it with iterations of a standard Gaussian. The problems we have out in nature are imperfect lenses, front-focus/back-focus, tripod vibration, and shutter vibration. The FF/BF component would be handled nicely by a Gaussian; the rest need something else, which you can only find in the image itself. If you select some strange shape from the image, it will either improve the blur fast or not help much. If it does help, you need to switch to something else soon, or it just starts to make another mess.
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on March 16, 2013, 11:28:17 PM
Here is an example of where I could find a "custom" PSF.

There was wind against the building while the fireworks were going off, plus the big booms of the larger shells. Looking around the full-size images, I can probably find lights that should be perfectly round; many will be odd shapes from being reflected in glass. If I make a good selection, I can remove almost all the problems of the shot.
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on March 16, 2013, 11:58:39 PM
So in that shot I found that some rich person had been useful enough to hire a chopper to watch the Canada Day fireworks; I just watched them from my balcony :D.
In the first 200% zoom you can see the oblong PSF of the chopper's safety lights. I picked that as a test "custom" PSF, then cut a piece out of the rest of the shot to test it on.
In the second screenshot you can see the impact of 2 cycles: the first cycle was an improvement, but the second is starting to make halo edges. If I keep going it will make a mess.
So if that really was a better PSF than another one I could find in the shot, I would re-sample it after each cycle; it would be a new, different PSF.

Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on March 17, 2013, 12:56:52 AM
Here is the original, unsharpened, in the middle;
10 cycles of a 5x5 Gaussian on the left;
custom on the right.
Title: Re: Deconvolution sharpening revisited
Post by: rnclark on January 13, 2014, 11:07:59 PM
My first post.  I was sent this link by someone else, and as I was referenced in the post that started this thread, I thought I would add to it.

My web page on image deconvolution referred to in the first post is: http://www.clarkvision.com/articles/image-restoration1/
and has been updated recently.

I have added a second page with more results using an image where I added known blur and then used a guess PSF to recover the image.  This is part 2 from the above page:
http://www.clarkvision.com/articles/image-restoration2/

Regarding some statements made in this thread about how one can't go beyond 0% MTF, that is true if one images bar charts.  But the real world is not bar charts.  MTF is a one dimensional description of an imaging system.  It only applies to parallel spaced lines and only in the dimension perpendicular to those lines.  MTF limits do not apply to other 2-D objects.  For example, stars are much smaller than 0% MTF yet we see them.  Two stars closer together than the 0% MTF can still be seen as an elongated diffraction disk.  It is that asymmetry and a known PSF of the diffraction disk that can be used to fully resolve the two stars.  Extend this to all irregular objects in a scene, whether it be splotchy detail on a bird's beak, feather detail, or stars in the sky, deconvolution methods can recover a wealth of detail, some beyond 0% MTF.

I have been using Richardson-Lucy image deconvolution on my images for many years now, both astro images and everyday scenes.  It works well and I can consistently pull out detail that I have been unable to achieve with smart sharpen or any other method.  Smart sharpen is so fast that it can't be doing more than an iteration (or a couple if done in integers). I would love to see a demonstration by those in this thread who say smart sharpen can do as well as RL deconvolution.  On my web page, part 2 above, I have a link to the 16-bit image (it is just above the conclusions).  You are welcome to download that image and show something better than I can produce in figure 4 (right side) on that page.  Post your results here.  I would certainly love to see smart sharpen do as well, as it would speed up my work flow.

Thanks for the interesting read.  And a special hi to Bart.  I haven't seen you in a forum in years.

Roger Clark
Title: Re: Deconvolution sharpening revisited
Post by: ErikKaffehr on January 14, 2014, 02:16:25 AM
Hi,

Nice to see you here! Learned much about sensors from your page.

Best regards
Erik

Title: Re: Deconvolution sharpening revisited
Post by: hjulenissen on January 14, 2014, 03:27:47 AM
My first post.  I was sent this link by someone else, and as I was referenced in the post that started this thread, I thought I would add to it.
Hello. I have also read your site with great interest.
Quote
Regarding some statements made in this thread about how one can't go beyond 0% MTF, that is true if one images bar charts.  But the real world is not bar charts.  MTF is a one dimensional description of an imaging system.  It only applies to parallel spaced lines and only in the dimension perpendicular to those lines.  MTF limits do not apply to other 2-D objects.  For example, stars are much smaller than 0% MTF yet we see them.  Two stars closer together than the 0% MTF can still be seen as an elongated diffraction disk.  It is that asymmetry and a known PSF of the diffraction disk that can be used to fully resolve the two stars.  Extend this to all irregular objects in a scene, whether it be splotchy detail on a bird's beak, feather detail, or stars in the sky, deconvolution methods can recover a wealth of detail, some beyond 0% MTF.
MTF0 would be the spatial frequency at which the modulation is zero, right?

I can't see how a star would be "smaller than 0% MTF". A star is approximately a point-source (infinitely small "dot"), and would excite a wide range of spatial frequencies (including the zero-frequency, "DC"). If we analyse 1 dimension and assume linearity (for simplicity), we would expect a small star to be registered as a blurry star in a system of low MTF cutoff point, more or less like how a (non-reverberated) hand-clap is recorded by a low-bandwidth audio recorder. Is that not what happens?

When we factor in (lack of) Nyquistian pre-filtering, color filter array etc, we add some non-separable and non-linear factors that are harder to comprehend intuitively. I guess numerical simulations are the way to go for diving deep in that direction.

I use the term "information" about aspects of a scene that cannot be (safely) guessed. A CD can carry 700MB or so of information at the most. If you fill a CD with random numbers, you need to be able to read every bit (after error correction etc) in order to recover those bits. "Pleasant" is a different concept. A music CD may contain large gaps in the data. A good CD player might still be able to render the track in a pleasant manner by smoothing over the (post error correction) gaps in a way consistent with how our hearing expects the track to sound (e.g. no discontinuities).

I mistrust claims about the Rayleigh criterion ("diffraction limited"), in that it seems to be a pragmatic astronomer's rule of thumb, rather than a theoretically developed absolute limit of information. As far as I understand, there may still be information beyond the diffraction limit, but we may not be able to interpret it properly.

If you enforce the policy that the image shall only consist of a (sparse) number of white points (stars) on a black background, I think that you can apply a different methodology to sensor design and deconvolution than if you design a general imaging system.

-h
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on January 14, 2014, 07:14:28 AM
My first post.  I was sent this link by someone else, and as I was referenced in the post that started this thread, I thought I would add to it.

My web page on image deconvolution referred to in the first post is: http://www.clarkvision.com/articles/image-restoration1/
and has been updated recently.

I have added a second page with more results using an image where I added known blur and then used a guess PSF to recover the image.  This is part 2 from the above page:
http://www.clarkvision.com/articles/image-restoration2/

Hi Roger,

Nice to see you joining here. Your part 2 webpage also shows the practical benefits of deconvolution very well, specifically in the direct comparison with "Smart sharpen", which is also said to be a form of deconvolution but, as you conclude, probably one with very few iterations.

Quote
Regarding some statements made in this thread about how one can't go beyond 0% MTF, that is true if one images bar charts.  But the real world is not bar charts.  MTF is a one dimensional description of an imaging system.  It only applies to parallel spaced lines and only in the dimension perpendicular to those lines.  MTF limits do not apply to other 2-D objects.

Real images are indeed more complex than a simple bar chart or a sinusoidal grating. Where we may not have much SNR in one direction/angle, we may still have adequate SNR in another direction/angle to restore more of the original signal. We do run into limitations when noise is involved, or when the signal is very small compared to the sampling density and sensel aperture (+low-pass filter).

I do not fully agree with your downsampling conclusions. Although sharpening before downsampling may happen to work for certain image content (irregular structures), it is IMHO better to postpone the creation of the highest spatial-frequency detail until after downsampling. But it would be better to discuss that in a separate thread. Maybe this post (http://www.openphotographyforums.com/forums/showthread.php?p=148810#post148810) related to downsampling of 'super resolution' images provides some useful food for thought, as does my webpage (http://bvdwolf.home.xs4all.nl/main/foto/down_sample/down_sample.htm) with a 'stress test' zone-plate target.

Quote
Thanks for the interesting read.  And a special hi to Bart.  I haven't seen you in a forum in years.

And thank you for your informative webpages (and beautiful landscape images). I've been around since Usenet days, but apparently in different fora ... I don't know if you are still visiting Noordwijk occasionally, but if you do just drop me a line and we can meet in person when agendas can be synchronized.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on January 14, 2014, 12:58:12 PM
Another welcome to Roger Clark.

I have been visiting your site to admire your images as well as read your articles on the technical aspects of photography for a few years now. It's great to see you in this community.
Title: Re: Deconvolution sharpening revisited (Image J, two questions)
Post by: ErikKaffehr on January 14, 2014, 03:16:46 PM
Hi,

I am interested in trying ImageJ for deconvolution. I have tried some plugins but I am not entirely happy.

Any suggestion for a good deconvolution plugin?

Best regards
Erik
Title: Re: Deconvolution sharpening revisited
Post by: bjanes on January 14, 2014, 05:41:31 PM
We are very fortunate to have Roger joining the discussion along with Bart. I have been working on developing PSFs for my Zeiss 135 mm f/2 lens on the Nikon D800e using Bart's slanted edge tool (http://bvdwolf.home.xs4all.nl/main/foto/psf/SlantedEdge.html). I photographed Bart's test image at 4 meters, determining optimum focus via focus bracketing with a rail. Once optimal focus was determined, I took a series of shots at various apertures and determined the resolution with Bart's method (http://www.openphotographyforums.com/forums/showthread.php?t=13217) with the sinusoidal Siemens star. Results are shown both for ACR rendering and rendering with DCRaw in Imatest.

The results are shown in the table below. The f/16 and f/22 shots are severely degraded by diffraction. I used Bart's PSF generator to derive a 5x5 deconvolution PSF for f/16. Results are shown after 20 iterations of adaptive RL in ImagesPlus.

(http://bjanes.smugmug.com/Photography/Aperture-Series/i-J2KGpvK/0/O/Table2.png)

Crops of images are shown also.
(http://bjanes.smugmug.com/Photography/Aperture-Series/i-fNtSSsg/1/O/_DSC3082b.png)

(http://bjanes.smugmug.com/Photography/Aperture-Series/i-SbmjNrq/1/O/_DSC3084b.png)

(http://bjanes.smugmug.com/Photography/Aperture-Series/i-NWjgDbW/1/O/_DSC3088b.png)

(http://bjanes.smugmug.com/Photography/Aperture-Series/i-NQp3kqC/1/O/_DSC3089b.png)

(http://bjanes.smugmug.com/Photography/Aperture-Series/i-9qwjd88/0/O/_DSC3088_RL5by5_20.png)

Results were worse using a 7x7 PSF. I would appreciate pointers from Roger and Bart on what factors determine the best size PSF to employ and what else could be done to improve the result. Presumably a 7x7 would be better if the PSF were optimal, but a suboptimal 7x7 PSF could produce inferior results by extending the deconvolution kernel too far outward.

Bill
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on January 15, 2014, 01:47:16 AM
Hi Bill,

You can crop a small section out then duplicate several times with the (IIRC) 3 dots button. The deconvolution runs almost instantly on small crops like 640x640. Try 30 cycles 3x3 in one, 30 cycles 5x5 in the next, 30 7x7 in the next. If you are using Bart's oversized output from the raw converter as a starting point the 7x7 may be best. I have found 3x3 usually works for a sharp lens on a tripod output at the normal camera size. Sometimes I do 5x5 for 10 cycles followed by 3x3 for 30 to 50. I also often do 10 cycles of Van Cittert.

Roger's note that PS smart sharpen seems to do 1 iteration is interesting to me. When I have had a point in a picture to create a reasonable custom PSF from the image, I have usually run it for 1 or 2 cycles. More than that, I found large black artifacts forming around points. I would then switch to 3x3 gaussian for more iterations to taste. What happens with PS smart sharpen followed by IP or repeated runs of Smart sharpen? Sorry, I don't have PS installed on my system so I can't answer it myself.

Both Bart and Roger have mentioned hundreds of cycles so I think I am not getting optimal results. I shut it down (with cancel) when I start seeing artifacts. Is hundreds based on a 2x or 3x starting image size? The sequence is Lanczos 3x, Capture sharpen 7x7, downsample, creative sharpen?

When is Van Cittert or Adaptive Contrast better? There are so many sharpening tools in the program that can be used sequentially it seems very hard to find a best sequence. Any guidance on this would be greatly appreciated.

Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on January 15, 2014, 01:52:06 AM
One other thing: it has come up before that the program is not color managed. Am I correct that if the output of the raw converter is fed to IPlus, and the IPlus tif is then put back into the raw converter, the colors will still be correct?
Title: Re: Deconvolution sharpening revisited (Image J, two questions)
Post by: BartvanderWolf on January 15, 2014, 04:18:15 AM
Hi,

I am interested in trying ImageJ for deconvolution. I have tried some plugins but I am not entirely happy.

Any suggestion for a good deconvolution plugin?

Hi Erik,

I'm not sure why you want to use ImageJ for deconvolution, because it's not the easiest way of deconvolving regular color images (which have a variety of imperfections that need to be addressed/circumvented). Mathematically deconvolution is a simple principle, but in the practical implementation there are lots of things that can (and do) go wrong. Also remember that most of these plugins only work on grayscale images, and could work better on linear gamma input.

As a general deconvolution you can use the "Process>Filters>Convolve" command when you feed it a deconvolution kernel. It does work on RGB images, but it only does a single pass deconvolution without noise regularization.

The most useful and extensive ImageJ deconvolution plugin that I have come across so far is DeconvolutionLab (http://bigwww.epfl.ch/algorithms/deconvolutionlab/).
You'll need a separate PSF file, which can be made with the help of my PSF generator tool (http://bvdwolf.home.xs4all.nl/main/foto/psf/PSF_generator.html), and its 'space separated' text output can be copied and pasted into a plain text file and imported into ImageJ with "File>Import>Text Image".
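
If you prefer scripting that step, the space-separated layout that "File>Import>Text Image" reads can be written directly; in this numpy sketch a hypothetical 5x5 Gaussian stands in for the PSF generator's output:

```python
import numpy as np

# A hypothetical 5x5 Gaussian PSF standing in for the generator's output.
ax = np.arange(5) - 2
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * 0.7 ** 2))  # assumed radius 0.7
psf /= psf.sum()                                        # a blur PSF should sum to 1

# One kernel row per line, space separated, as "File>Import>Text Image" expects.
np.savetxt("psf_5x5.txt", psf, fmt="%.8f", delimiter=" ")
```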

Much easier to use for photographers is a regular Photoshop plugin such as FocusMagic (http://www.focusmagic.com/download.htm), or Topaz Labs Infocus (http://www.topazlabs.com/infocus/). The latter can be called from Lightroom, even without Photoshop if one also has the photoFXlab plugin from Topaz Labs. Piccure (http://intelligentimagingsolutions.com/index.php/en/) is a relatively new PS plugin that does a decent job, but not really better than the cheaper alternatives mentioned before.

Other possibilities are several dedicated Astrophotography applications such as ImagesPlus (http://www.mlunsold.com/) (not colormanaged) or PixInsight (http://pixinsight.com/) (colormanaged). PixInsight is kind of amazing (colormanaged, floating point, Linear gamma processing of Luminance in an RGB image), and offers lots of possibilities for deconvolution and artifact suppression and all sorts of other astrophotography imaging tasks, but is not a cheap solution if you only want to use it for deconvolution. It's more a work environment with the possibility to create one's own scripts and modules with Java or Javascript programming, with several common astrophotography tasks pre-programmed for seamless integration.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on January 15, 2014, 06:49:30 AM
We are very fortunate to have Roger joining the discussion along with Bart.

Thanks for the kind words.

Quote
I have been working on developing PSFs for my Zeiss 135 mm f/2 lens on the Nikon D800e using Bart's slanted edge tool (http://bvdwolf.home.xs4all.nl/main/foto/psf/SlantedEdge.html). I photographed Bart's test image at 3 meters, determining optimum focus via focus bracketing with a rail. Once optimal focus was determined, I took a series of shots at various apertures and determined the resolution with Bart's method (http://www.openphotographyforums.com/forums/showthread.php?t=13217) with the sinusoidal Siemens star. Results are shown both for ACR rendering and rendering with DCRaw in Imatest.

The results are shown in the table below.

Thanks for the feedback; it can help others in understanding the procedures and the insight they give, and it allows one to get better image quality.

Quote
The f/16 and f/22 shots are severely degraded by diffraction. I used Bart's PSF generator to derive a 5x5 deconvolution PSF for f/16. Results are shown after 20 iterations of adaptive RL in ImagesPlus.

It nicely demonstrates that when the 'luminance' diffraction pattern diameter exceeds 1.5x the sensel pitch, at f/5.6 or narrower on the 4.88 micron pitch sensor of the D800/D800E, the diffraction blur overtakes the reduction of residual lens aberrations. You may find that somewhere between f/4 and f/5.6 there could be an even slightly better performance than at f/4 , which can be useful to know if one wants to perform focus stacking with the highest resolution possible for a given camera/lens combination (at a given magnification factor or focus distance).
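
The 1.5x sensel pitch criterion can be checked against the Airy-disk formula (first-zero diameter of roughly 2.44 x lambda x N). A small sketch, assuming 550 nm green light and the 4.88 micron pitch mentioned above:

```python
# Assumed: green light at 550 nm, D800/D800E sensel pitch of 4.88 microns.
WAVELENGTH_UM = 0.55
PITCH_UM = 4.88

def airy_diameter_um(f_number, wavelength_um=WAVELENGTH_UM):
    # Diameter to the first zero of the Airy pattern: 2.44 * lambda * N.
    return 2.44 * wavelength_um * f_number

threshold = 1.5 * PITCH_UM  # the 1.5x pitch criterion: 7.32 um here
for n in (2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0):
    d = airy_diameter_um(n)
    side = "diffraction dominates" if d > threshold else "aberrations dominate"
    print(f"f/{n:g}: Airy diameter {d:5.2f} um -> {side}")
```

The crossover indeed falls between f/4 (about 5.4 um) and f/5.6 (about 7.5 um), matching the "f/5.6 or narrower" statement above.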

The Slanted Edge measurements especially give the most accurate indication of the optimal aperture, and they give the relevant/dominant blur radius for deconvolution as well. It also shows that for good lenses there is usually a smallest blur radius that is close to 0.7 at the optimum aperture, and a radius of more than 1.0 at narrow apertures. That's definitely something to consider for capture sharpening, which is hardware dependent, as opposed to subject-dependent creative sharpening.

Quote
Results were worse using a 7x7 PSF. I would appreciate pointers from Roger and Bart on what factors determine the best size PSF to employ and what else could be done to improve the result. Presumably a 7x7 would be better if the PSF were optimal, but a suboptimal 7x7 PSF could produce inferior results by extending the deconvolution kernel too far outward.

I do not think that it is the size of the kernel that's limiting, it may be some aliasing that is playing tricks. I don't know how well the actual edge profile and the Gaussian model fitted, but that is often a good prediction of the shape of the PSF. So it may be a good PSF shape, but the source data may also still be causing some issues (noise, aliasing, demosaicing) that get magnified by restoration. I assume there is no Raw noise reduction in the file, as that might also break the statistical nature of photon shot noise.

You could try whether RawTherapee's Amaze algorithm makes a difference, to eliminate one possible (demosaicing) cause. The diffraction limited f/16 shot, which seems to be at the edge of total low-pass filtering with zero aliasing possibility, suggests that aliasing is not causing the unwanted effects, but maybe too many iterations or too little 'noise' or artifact suppression. You can also reduce the number of iterations, although that will reduce the overall restoration effectiveness as well. What can also help is using a slightly smaller radius than would be optimal, since that under-corrects the restoration, which may reduce the accumulation of errors per iteration. Another thing that may help a little is only restoring the L channel from an LRGB image set, although I do not expect it to make much of a difference on a mostly grayscale image.
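
On the kernel-size question: one simple sanity check is how much of the Gaussian PSF's total weight a truncated window actually contains. A quick numpy sketch, assuming a sigma of 1.0 (close to the radii measured above):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Point-sampled 2-D Gaussian on a size x size grid (not normalized).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

sigma = 1.0  # assumed blur radius, near the ~1.0 measured at narrow apertures
total = gaussian_kernel(51, sigma).sum()  # 51x51 holds essentially all the weight
fractions = {size: gaussian_kernel(size, sigma).sum() / total for size in (3, 5, 7)}
for size, frac in fractions.items():
    print(f"{size}x{size} window holds {frac:.4f} of the PSF weight")
```

For sigma near 1 a 5x5 window already holds about 98% of the weight, so a 7x7 kernel mostly adds tail entries where measurement errors have room to act; a heuristic observation, not a full explanation of the artifacts.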

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on January 15, 2014, 08:25:46 AM
What happens with PS smart sharpen followed by IP or repeated runs of Smart sharpen? Sorry, I don't have PS installed on my system so I can't answer it myself.

Hi Arthur,

In principle, repeated (de)convolution with a Gaussian PSF equals a single (de)convolution with a larger radius PSF. However, that also assumes that there is little accumulation of round-off errors, so I anticipate a build-up of artifacts, although deliberately using a too small radius PSF might somewhat relax the demand on data quality.
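
The radius rule behind this (Gaussian sigmas add in quadrature, so two passes at sigma equal one pass at sigma times the square root of 2) can be verified directly; scipy's gaussian_filter is used here purely for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Two passes at sigma = 1.0 versus one pass at sigma = sqrt(2).
twice = gaussian_filter(gaussian_filter(img, sigma=1.0), sigma=1.0)
once = gaussian_filter(img, sigma=np.sqrt(2.0))

# Agreement is approximate, not exact: the discrete kernels are truncated.
print(float(np.max(np.abs(twice - once))))
```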

Quote
Both Bart and Roger have mentioned hundreds of cycles so I think I am not getting optimal results. I shut it down (with cancel) when I start seeing artifacts. Is hundreds based on a 2x or 3x starting image size? The sequence is Lanczos 3x, Capture sharpen 7x7, downsample, creative sharpen?

The first thing is using a well behaved, decently low-pass filtered image, with little noise and preferably 16-bits/channel. Then we should not use a PSF with too large a radius. When we up-sample the image (hopefully with minimal introduction of new artifacts), we create some room for sub-pixel accuracy in restoration, and small artifacts will mostly disappear upon downsampling to the original size.

Quote
When is Van Cittert or Adaptive Contrast better? There are so many sharpening tools in the program that can be used sequentially it seems very hard to find a best sequence. Any guidance on this would be greatly appreciated.

It's hard to say in general, because image content can be so different, even in the same image. Richardson-Lucy is reasonably good when there is also some noise involved, where van Cittert is better with clean low ISO images with high SNR. Both are restoration algorithms. Adaptive contrast modifies contrast and should, if used, come after restoration.
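
For reference, the Van Cittert scheme is just an additive correction loop, f(k+1) = f(k) + beta * (g - h convolved with f(k)). A minimal noiseless 1-D sketch (my own toy with an assumed Gaussian PSF, not the ImagesPlus code):

```python
import numpy as np

def fft_convolve(a, h):
    # Circular convolution via FFT (the toy scene is treated as periodic).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(h)))

def van_cittert(observed, psf, iterations=25, beta=1.0):
    # Additive update: estimate += beta * (observed - reblurred estimate).
    estimate = observed.copy()
    for _ in range(iterations):
        estimate = estimate + beta * (observed - fft_convolve(estimate, psf))
    return estimate

n = 128
x = np.arange(n)
scene = ((x // 8) % 2).astype(float)  # a periodic bar pattern
sigma = 1.2                           # assumed Gaussian blur radius
psf = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
psf = np.roll(psf / psf.sum(), -(n // 2))  # centre the PSF on index 0

blurred = fft_convolve(scene, psf)
restored = van_cittert(blurred, psf, iterations=25)
```

Because the update adds back amplified residuals without the positivity constraint of RL, noise grows quickly with the iteration count, which is consistent with Van Cittert suiting clean low-ISO data best.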

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: bjanes on January 15, 2014, 12:36:15 PM


I do not think that it is the size of the kernel that's limiting, it may be some aliasing that is playing tricks. I don't know how well the actual edge profile and the Gaussian model fitted, but that is often a good prediction of the shape of the PSF. So it may be a good PSF shape, but the source data may also still be causing some issues (noise, aliasing, demosaicing) that get magnified by restoration. I assume there is no Raw noise reduction in the file, as that might also break the statistical nature of photon shot noise.

You could try whether RawTherapee's Amaze algorithm makes a difference, to eliminate one possible (demosaicing) cause. The diffraction limited f/16 shot, which seems to be at the edge of total low-pass filtering with zero aliasing possibility, suggests that aliasing is not causing the unwanted effects, but maybe too many iterations or too little 'noise' or artifact suppression. You can also reduce the number of iterations, although that will reduce the overall restoration effectiveness as well. What can also help is using a slightly smaller radius than would be optimal, since that under-corrects the restoration, which may reduce the accumulation of errors per iteration. Another thing that may help a little is only restoring the L channel from an LRGB image set, although I do not expect it to make much of a difference on a mostly grayscale image.


Bart,

Thanks for the feedback. We are getting some interesting discussion in this rejuvenated thread:)

As you suggested, I did use RawTherapee to render the f/16 image.

(http://bjanes.smugmug.com/Photography/Aperture-Series/i-CPfK2m5/0/O/_DSC3088b_RT.png)

The Gaussian radius was smaller than with the ACR rendering, 0.9922, and I used your tool to calculate a deconvolution PSF for 5x5 and 7x7.

Here is the image restoration in ImagesPlus with 20 iterations of RL using 7x7. There is quite a bit of artifact.
(http://bjanes.smugmug.com/Photography/Aperture-Series/i-vpKhfF6/0/O/_DSC3088_RL_20_7by7.png)

Using RL and a 5x5 kernel with 20 iterations, there is again less artifact:
(http://bjanes.smugmug.com/Photography/Aperture-Series/i-F5vLMxV/0/O/_DSC3088_RL_20_5by5.png)

Van Cittert with the 5x5 kernel and 20 iterations produces the best results.
(http://bjanes.smugmug.com/Photography/Aperture-Series/i-PxPGFkP/0/O/_DSC3088_VC_20_5by5.png)

What do you think?

Bill
Title: Re: Deconvolution sharpening revisited (Image J, two questions)
Post by: ErikKaffehr on January 15, 2014, 02:37:49 PM
Hi Bart,

It has been suggested on a discussion regarding macro photography that smallish apertures could be used and the image restored using deconvolution. I may be a bit of a skeptic,  but I felt I would be ignorant without trying. I have both Focus Magic and Topaz Infocus but neither let me choose PSF. I have ImageJ already so I felt I could explore deconvolution a bit more, but the plugins I have tested were not very easy to use and did not really answer my questions about usability.

My take is really to use medium apertures and stacking if needed.

Best regards
Erik

Hi Erik,

I'm not sure why you want to use ImageJ for deconvolution, because it's not the easiest way of deconvolving regular color images (which have a variety of imperfections that need to be addressed/circumvented). Mathematically deconvolution is a simple principle, but in the practical implementation there are lots of things that can (and do) go wrong. Also remember that most of these plugins only work on grayscale images, and could work better on linear gamma input.

As a general deconvolution you can use the "Process>Filters>Convolve" command when you feed it a deconvolution kernel. It does work on RGB images, but it only does a single pass deconvolution without noise regularization.

The most useful and extensive ImageJ deconvolution plugin that I have come across so far is DeconvolutionLab (http://bigwww.epfl.ch/algorithms/deconvolutionlab/).
You'll need a separate PSF file, which can be made with the help of my PSF generator tool (http://bvdwolf.home.xs4all.nl/main/foto/psf/PSF_generator.html), and its 'space separated' text output can be copied and pasted into a plain text file and imported into ImageJ with "File>Import>Text Image".

Much easier to use for photographers is a regular Photoshop plugin such as FocusMagic (http://www.focusmagic.com/download.htm), or Topaz Labs Infocus (http://www.topazlabs.com/infocus/). The latter can be called from Lightroom, even without Photoshop if one also has the photoFXlab plugin from Topaz Labs. Piccure (http://intelligentimagingsolutions.com/index.php/en/) is a relatively new PS plugin that does a decent job, but not really better than the cheaper alternatives mentioned before.

Other possibilities are several dedicated Astrophotography applications such as ImagesPlus (http://www.mlunsold.com/) (not colormanaged) or PixInsight (http://pixinsight.com/) (colormanaged). PixInsight is kind of amazing (colormanaged, floating point, Linear gamma processing of Luminance in an RGB image), and offers lots of possibilities for deconvolution and artifact suppression and all sorts of other astrophotography imaging tasks, but is not a cheap solution if you only want to use it for deconvolution. It's more a work environment with the possibility to create one's own scripts and modules with Java or Javascript programming, with several common astrophotography tasks pre-programmed for seamless integration.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited (Image J, two questions)
Post by: bjanes on January 15, 2014, 04:26:15 PM
Hi Bart,

It has been suggested on a discussion regarding macro photography that smallish apertures could be used and the image restored using deconvolution. I may be a bit of a skeptic,  but I felt I would be ignorant without trying. I have both Focus Magic and Topaz Infocus but neither let me choose PSF. I have ImageJ already so I felt I could explore deconvolution a bit more, but the plugins I have tested were not very easy to use and did not really answer my questions about usability.

My take is really to use medium apertures and stacking if needed.

Erik,

That conclusion is in agreement with my findings thus far. One can get an idea of what is possible by looking at diffraction limits for various apertures and MTFs. The table below was taken from one of Roger's posts and is illustrative.

(http://bjanes.smugmug.com/Photography/Aperture-Series/i-P8ZwzT6/0/O/ResolutionLimits.png)

As Bart pointed out, diffraction will begin to limit the image resolution when the Airy disc is about 1.5x the pixel pitch of the sensor; for the D800e, the pixel pitch is 4.87 microns and 1.5x this is 7.31 microns. At f/8, the Airy disc is 8.9 microns and the resolution at 50% MTF is very close to the Nyquist of the sensor (103 lp/mm), suggesting that this would be a good aperture for stacking. At f/22 the MTF is so low that deconvolution is of little avail, since there is not much to work with. However, deconvolution works well at f/8 and I would use this aperture for stacking after image restoration with deconvolution.
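
These figures can be cross-checked against the standard diffraction-limited MTF of a circularaperture (incoherent light). A small sketch assuming 550 nm light; the bisection finds the frequency where the closed-form MTF drops to 50%, roughly 92 lp/mm at f/8, in the same ballpark as the 103 lp/mm Nyquist:

```python
import math

WAVELENGTH_MM = 550e-6  # 550 nm, assumed

def diffraction_mtf(freq_lp_mm, f_number, wavelength_mm=WAVELENGTH_MM):
    # Diffraction-limited MTF of a circular aperture in incoherent light.
    cutoff = 1.0 / (wavelength_mm * f_number)
    x = freq_lp_mm / cutoff
    if x >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def freq_at_mtf(target, f_number):
    # Bisect between DC (MTF=1) and the cutoff (MTF=0); MTF is monotone.
    lo, hi = 0.0, 1.0 / (WAVELENGTH_MM * f_number)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if diffraction_mtf(mid, f_number) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"f/8 cutoff frequency: {1.0 / (WAVELENGTH_MM * 8):.0f} lp/mm")
print(f"f/8 frequency at 50% MTF: {freq_at_mtf(0.5, 8):.0f} lp/mm")
```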

Bart's PSF generator uses only one parameter, the Gaussian radius, to derive the PSF for a given aperture, and I doubt that this is sufficient to fully describe the nature of the blurring. Roger does not go into the details of how he derives his PSFs and more information would be helpful. Van Cittert deconvolution seems to work well for low noise images, and more discussion on its use would be welcome. FocusMagic works well at f/8 and is very fast and convenient to use.

Regards,

Bill

Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on January 15, 2014, 06:56:16 PM
Bart,

Thanks for the feedback. We are getting some interesting discussion in this rejuvenated thread:)

Hi Bill,

There is a lot of info to share, and people to convince ...

Quote
As you suggested, I did use RawTherapee to render the f/16 image.

(http://bjanes.smugmug.com/Photography/Aperture-Series/i-CPfK2m5/0/O/_DSC3088b_RT.png)

The Gaussian radius was smaller than with the ACR rendering, 0.9922, and I used your tool to calculate a deconvolution PSF for 5x5 and 7x7.

Great, RT Amaze is always interesting to have in a comparison, because it is very good at resolving fine detail with few artifacts (and optional false color suppression).

Quote
Here is the image restoration in ImagesPlus with 20 iterations of RL using 7x7. There is quite a bit of artifact.
(http://bjanes.smugmug.com/Photography/Aperture-Series/i-vpKhfF6/0/O/_DSC3088_RL_20_7by7.png)

Using RL and a 5x5 kernel with 20 iterations, there again less artifact:
(http://bjanes.smugmug.com/Photography/Aperture-Series/i-F5vLMxV/0/O/_DSC3088_RL_20_5by5.png)

I see what you mean, and looking at the artifacts there may be something that can be done. No guarantee, but I suspect that deconvolving with a linear gamma can help quite a bit. In ImagesPlus one can convert an RGB image into R+G+B+L layers, deconvolve the L layer, and recombine the channels into an RGB image again. Before deconvolution one can switch the L layer to linear gamma, and switch back afterwards (gamma 0.455 and gamma 2.20 will be close enough).
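
The gamma detour can be written out explicitly. A numpy sketch of just the decode/re-encode steps (the deconvolution itself is elided; the round trip is lossless in floating point):

```python
import numpy as np

GAMMA = 2.20  # the suggested approximation (0.455 is about 1/2.20)

def to_linear(l_encoded, gamma=GAMMA):
    # Decode gamma-encoded luminance to linear light (the gamma 2.20 step).
    return np.clip(l_encoded, 0.0, 1.0) ** gamma

def to_encoded(l_linear, gamma=GAMMA):
    # Re-encode linear light (the gamma 0.455 step).
    return np.clip(l_linear, 0.0, 1.0) ** (1.0 / gamma)

l = np.linspace(0.0, 1.0, 11)  # stand-in for the L layer
l_linear = to_linear(l)
# ... the deconvolution would operate on l_linear here ...
round_trip = to_encoded(l_linear)
print(float(np.max(np.abs(round_trip - l))))
```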

It can also help to temporarily up-sample the image before deconvolution. The drawback of that method is the increased time required for the deconvolution calculations, and it is possible that the re-sampling introduces artifacts. The benefit though is that one can visually judge the intermediate result (which is sort of super-sampled) until deconvolution artifacts start to appear, and then downsample to the original size to make the artifacts visually less important.

Quote
Van Cittert with the 5x5 kernel and 20 iterations produces the best results.

In this case it does, but with more noise it may not be as beneficial. Also in this case, deconvolving the linear gamma luminance may work better.

Then there is another thing, and that will change the shape of the Gaussian PSF a bit. Creating the PSF kernel with my PSF generator defaults to a sensel arrangement with 100% fill factor (assuming gapless microlenses). By reducing that percentage a bit the Gaussian will become a bit more spiky, gradually more like a point sample and a pure Gaussian.
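
The fill-factor idea can be modelled by integrating the Gaussian over each square sensel aperture instead of point-sampling it; with erf the integral is closed-form and separable. This is a sketch of the principle, not Bart's actual generator:

```python
import numpy as np
from scipy.special import erf

def psf_kernel(size, sigma, fill_factor=1.0):
    # Gaussian PSF integrated over square sensel apertures.
    # fill_factor is the linear aperture width as a fraction of the pitch;
    # fill_factor -> 0 approaches a point-sampled (spikier) Gaussian.
    half = fill_factor / 2.0
    centers = np.arange(size) - size // 2
    s = np.sqrt(2.0) * sigma
    # 1-D integral of the Gaussian over [c - half, c + half]; separable in x, y.
    oned = 0.5 * (erf((centers + half) / s) - erf((centers - half) / s))
    kernel = np.outer(oned, oned)
    return kernel / kernel.sum()

full = psf_kernel(5, 0.7, fill_factor=1.0)
point = psf_kernel(5, 0.7, fill_factor=0.01)  # nearly a point sample
print(full[2, 2], point[2, 2])  # integrated kernel is wider, less peaked
```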

I realize it's a bit of work, but that's also why we need better integration of deconvolution in our Raw converter tools. Until then, we can learn a lot about what can be achieved and how important it is for image quality.

Finally, you can also try the RL deconvolution in RawTherapee. I don't know if that is applied with linear gamma, but it should become clear when you compare images. As soon as barely resolved detail becomes darker than expected, it's usually gamma related.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited (Image J, two questions)
Post by: BartvanderWolf on January 15, 2014, 07:23:55 PM
Bart's PSF generator uses only one parameter, the Gaussian radius, to derive the PSF for a given aperture and I doubt that this is sufficient to fully describe the nature of the blurring.

In my Slanted edge tool you can get the Edge Spread Function (edge profile) data and the ESF model, sub-sampled to approx. 1/10th of a pixel,  and compare the fit. Here is an example for one of my lenses, it only deviates a bit at the dark end, probably due to glare reducing  contrast by adding some stray light (blue line is actual edge, red line is cumulative Gaussian distribution curve or ESF):

(http://bvdwolf.home.xs4all.nl/temp/LuLa/8104_GreenProfile.png)

In most cases the Gaussian model (with fill factor assumption) is pretty good, not perfect, but good enough.

Quote
Roger does not go into the details of how he derives his PSFs and more information would be helpful.

I assume it is based on a continuous Gaussian PSF, or as preprogrammed in the 3x3, 5x5, etc. default selections of ImagesPlus. My PSF generator tool would need to be set to point sample instead of a fill factor percentage. The fill factor integrates the Gaussian over a rectangular/square area as a percentage of the sensel pitch, which makes the PSF a bit wider and less pointed. An actual sensel aperture is more irregularly shaped, but micro-lenses change that shape's influence.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited (Image J, two questions)
Post by: BartvanderWolf on January 16, 2014, 03:58:32 AM
Hi Bart,

It has been suggested on a discussion regarding macro photography that smallish apertures could be used and the image restored using deconvolution. I may be a bit of a skeptic,  but I felt I would be ignorant without trying. I have both Focus Magic and Topaz Infocus but neither let me choose PSF.

Hi Erik,

I see. Well in that case it may be easier to use the Photoshop plugins you already have, despite the different types of input they require.

FocusMagic does several things under the hood, e.g. suppressing noise amplification and adapting its processing to the type of input data. It does not allow specifying the radius more precisely than in integer-pixel steps, but for the algorithms they use that's often precise enough. You can also upsample the image, e.g. to 300%, before applying FocusMagic (obviously with a correspondingly larger radius), which effectively gives sub-pixel accuracy. A radius of 5 or 6 pixels is then common for well focused images, and the amount can be boosted to e.g. 175 or 200%. While the deconvolution will increase resolution, it will not really add much detail beyond the Nyquist frequency of the original image dimensions, so the subsequent downsampling carries a low risk of creating downsampling artifacts/aliasing. Because the downsampling will usually reduce contrast, a final run of FocusMagic with a radius of 1 and an amount of e.g. 50% can help.
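Since Focus Magic is closed source, the upsample-deconvolve-downsample workflow can only be sketched generically. The stand-in below uses a plain Richardson-Lucy loop in place of the plugin; the 300% factor and the 3x radius scaling mirror the description above, while the toy scene and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter
from scipy.signal import fftconvolve

def gaussian_psf(sigma, size):
    """Normalized 2-D Gaussian PSF on a size x size support."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def richardson_lucy(img, psf, n_iter):
    """Basic RL iteration: est *= correction ratio back-projected by the PSF."""
    est = np.clip(img, 1e-6, None)          # start from the observed image
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode='same')
        est = est * fftconvolve(img / np.maximum(conv, 1e-9), psf_flip, mode='same')
    return est

# toy scene: a bright disk, blurred by a sigma = 0.8 px Gaussian
yy, xx = np.mgrid[0:60, 0:60]
scene = 0.2 + 0.6 * (((xx - 30) ** 2 + (yy - 30) ** 2) < 100)
blurred = gaussian_filter(scene, 0.8)

up = np.clip(zoom(blurred, 3, order=3), 0.0, None)    # 300% upsample
restored_up = richardson_lucy(up, gaussian_psf(3 * 0.8, 15), n_iter=30)
restored = zoom(restored_up, 1 / 3, order=3)          # back to original size
```

The downsample hides residual deconvolution artifacts, and a final small-radius pass, as described above, can recover the contrast lost in the resample.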

Topaz Labs Infocus offers several deconvolution algorithms and separate artifact-suppression controls, and those controls are needed because the deconvolution algorithms are quite aggressive and do not offer an amount setting. Infocus also responds quite well to prior upsampling. The Estimate deconvolution method often works quite well after upsampling, especially if some additional sharpening is added with the sharpening and radius controls at the bottom of the interface.

Quote
I have ImageJ already so I felt I could explore deconvolution a bit more, but the plugins I have tested were not very easy to use and did not really answer my questions about usability.

Correct, ImageJ is a tool for the scientific community, and they often cherry-pick the tools for very specific sub-tasks, such as photomicrography, or 3D CT and MRI imagery. The exact algorithms must often be very well understood to avoid mistaking artifacts (or the suppression of them) for pathology. Some of its functionality is useful for mere mortals though: it works with floating-point accuracy, and it is free, multiplatform software, which can help when sharing processing methods. It does assume a deep understanding of image-processing fundamentals for some of its operations.

Quote
My take is really to use medium apertures and stacking if needed.

Nothing beats collecting the actual physical data, but we can significantly improve the results we get, because the image capture process is riddled with compromises (e.g. OLPF or no OLPF, demosaicing of undersampled data, etc.), and limitations (such as diffraction) set by physics.

Stacking itself is also not free of compromises (especially around occlusions in the scene, and the demands on processing power), but for stationary subjects the balance is often in favor of shooting at a wider aperture and stacking multiple focus planes for increased DOF.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Christoph C. Feldhaim on January 16, 2014, 07:52:39 AM
ImageMagick has an option to deconvolve images by doing a division on an FFT image.
How would one derive or construct the appropriate deconvolution image to do that?

Cheers
~Chris
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on January 16, 2014, 09:42:34 AM
ImageMagick has an option to deconvolve images by doing a division on an FFT image.
How would one derive or construct the appropriate deconvolution image to do that?

Hi Chris,

AFAIK, that would require one to first compile an HDRI version of ImageMagick oneself. There seem to be no precompiled HDRI binaries available for the various operating systems.

There are drawbacks to simply dividing one Fourier transform by the Fourier transform of a PSF, though. Divisions by zero need to be avoided, and a single small deviation from a perfect PSF can have huge effects on the restored image. See the earlier posts (http://www.luminous-landscape.com/forum/index.php?topic=45038.msg420459#msg420459) in this thread for that aspect.
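The usual fix for those divisions by zero is to regularize the frequency-domain division (a Wiener-style filter): divide by |H|^2 + k instead of H, where k is a small noise-dependent constant. A minimal numpy sketch (the constant k here is hand-tuned, not estimated from the image):

```python
import numpy as np

def gaussian_otf(shape, sigma):
    """OTF (FFT) of a normalized Gaussian PSF centered at pixel (0, 0)."""
    ny, nx = shape
    y = np.fft.fftfreq(ny) * ny            # signed pixel offsets
    x = np.fft.fftfreq(nx) * nx
    xx, yy = np.meshgrid(x, y)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return np.fft.fft2(psf / psf.sum())

def wiener_deconvolve(blurred, otf, k=1e-3):
    """G * conj(H) / (|H|^2 + k): the +k keeps near-zero |H| from exploding."""
    G = np.fft.fft2(blurred)
    F = G * np.conj(otf) / (np.abs(otf) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# demo: blur circularly in the frequency domain, then invert
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
H = gaussian_otf(scene.shape, 1.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
restored = wiener_deconvolve(blurred, H, k=1e-3)
```

With k = 0 this degenerates to the naive division and blows up wherever H is near zero; larger k trades restored resolution for noise suppression.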

That's why more elaborate algorithms and tools are required, also to allow correcting the artifacts that are very likely to arise from the simplified approach. As an example, the Deconvolution controls in PixInsight offer several methods to adjust the ringing artifacts (see attached PixInsight dialog). It does take quite a bit of work to get optimal settings though, and tools like FocusMagic and Infocus make life comparatively much easier.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on January 16, 2014, 02:36:57 PM
Here is my sharpened version of the crane, done this morning without blowing up the image first. I assume Roger's is better than mine, as he has more experience with the app, a much better understanding of the process, and the original as a guide.

It is a reasonable improvement over the blurred image, without artifacts. Done in ImagesPlus 4.5 with Van Cittert, Adaptive Contrast, and Adaptive R/L.
Title: Re: Deconvolution sharpening revisited
Post by: bjanes on January 18, 2014, 03:08:26 PM
Great, RT Amaze is always interesting to have in a comparison, because it is very good at resolving fine detail with few artifacts (and optional false color suppression).

I see what you mean, and looking at the artifacts there may be something that can be done. No guarantee, but I suspect that deconvolving with a linear gamma can help quite a bit. In ImagesPlus one can convert an RGB image into R+G+B+L layers, deconvolve the L layer, and recombine the channels into an RGB image again. Before and after deconvolution, one can switch the L layer to linear gamma and back (gamma 0.455 and gamma 2.20 will be close enough).
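The linearize-deconvolve-re-encode round trip amounts to wrapping the restoration step in a pair of power functions (a gamma 2.2 decode going in, a 1/2.2 encode coming out, matching the 0.455/2.20 approximation above). A minimal sketch, assuming values normalized to [0, 1]:

```python
import numpy as np

def deconvolve_in_linear(img, deconvolve, gamma=2.2):
    """img: values in [0, 1] with ~2.2 gamma encoding applied.
    Decode to linear light, run the restoration there, then re-encode."""
    linear = np.power(img, gamma)                  # undo the 1/2.2 encode
    restored = np.clip(deconvolve(linear), 0.0, 1.0)
    return np.power(restored, 1.0 / gamma)         # re-apply the encode
```

Passing an identity function for `deconvolve` returns the input unchanged, which is an easy sanity check before wiring in a real RL step.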

It can also help to temporarily up-sample the image before deconvolution. The drawbacks of that method are the increased time required for the deconvolution calculations and the possibility that the re-sampling introduces artifacts. The benefit, though, is that one can visually judge the intermediate result (which is sort of sub-sampled) until deconvolution artifacts start to appear, and then downsample to the original size to make the artifacts visually less important.

In this case it does, but with more noise it may not be as beneficial. Also in this case, deconvolving the linear gamma luminance may work better.

Then there is another thing that will change the shape of the Gaussian PSF a bit. Creating the PSF kernel with my PSF generator defaults to a sensel arrangement with 100% fill factor (assuming gapless microlenses). By reducing that percentage a bit, the Gaussian will become a bit more spiky, gradually more like a point sample of a pure Gaussian.
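The fill-factor idea, integrating the Gaussian over each sensel's active area instead of point-sampling it, can be sketched as follows. The exact conventions of the PSF generator aren't published, so the area model here (a centered square on a unit pitch, separable in x and y) is an assumption for illustration:

```python
import numpy as np
from scipy.special import erf

def psf_kernel(sigma, size, fill_factor=1.0):
    """Gaussian PSF integrated over square sensel apertures.
    fill_factor is the active-area fraction of the (unit) pitch; 1.0
    integrates over the whole pixel, smaller values approach a spikier,
    point-sampled Gaussian."""
    half = np.sqrt(fill_factor) / 2.0      # half-width of the active square
    c = np.arange(size) - size // 2        # sensel center offsets
    s = sigma * np.sqrt(2.0)
    k1 = 0.5 * (erf((c + half) / s) - erf((c - half) / s))
    k = np.outer(k1, k1)                   # separable 2-D kernel
    return k / k.sum()
```

Comparing `psf_kernel(0.7, 5, 1.0)` with `psf_kernel(0.7, 5, 0.1)` shows the smaller fill factor concentrating more weight in the center tap, i.e. a more pointed PSF.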

I realize it's a bit of work, but that's also why we need better integration of deconvolution in our Raw converter tools. Until then, we can learn a lot about what can be achieved and how important it is for image quality.

Finally, you can also try the RL deconvolution in RawTherapee. I don't know if that is applied with linear gamma, but it should become clear when you compare images. As soon as barely resolved detail becomes darker than expected, it's usually gamma related.

Cheers,
Bart

Bart,

To assess the effect of linear processing, I rendered my images into a custom 16-bit ProPhotoRGB space with a gamma of 1.0 prior to performing the deconvolution in ImagesPlus, and converted back to sRGB for display on the web. I noted little difference between linear and gamma ~2.2 files. Performing 30 iterations of RL with a radius of 0.89, as determined by your tool, works well in RawTherapee. 10 iterations of RL in ImagesPlus with a 5x5 kernel derived with your tools and a radius of 0.89 produce artifacts, but 3 iterations produce more reasonable results. I used the deconvolution kernel with a fill factor of 100%. Deconvolving the luminance channel in IP made little difference. Where should I go from here?

Image before deconvolution:
(http://bjanes.smugmug.com/Photography/RT-Deconvolution/i-T3LpG48/0/O/_DSC3088_RT_Amaze.png)

Image deconvolved in RawTherapee:
(http://bjanes.smugmug.com/Photography/RT-Deconvolution/i-xwQ3jSz/0/O/_DSC3088_RT_Lin_RL30_Rad0_88.png)

Image deconvolved with 10 iterations in ImagesPlus:
(http://bjanes.smugmug.com/Photography/RT-Deconvolution/i-xGrPK5b/0/O/_DSC3088_RT_Amaze_IP_RL_10it_Rad_0_89.png)

Image deconvolved with 3 iterations in ImagesPlus:
(http://bjanes.smugmug.com/Photography/RT-Deconvolution/i-CDZpBbv/0/O/_DSC3088_RT_Amaze_IP_RL_3it_Rad_0_89.png)

I presume that the deconvolution kernel would be most appropriate, but what is the purpose of the other PSFs?

Thanks,

Bill
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on January 19, 2014, 09:36:24 AM
Bart,

To assess the effect of linear processing, I rendered my images into a custom 16-bit ProPhotoRGB space with a gamma of 1.0 prior to performing the deconvolution in ImagesPlus, and converted back to sRGB for display on the web. I noted little difference between linear and gamma ~2.2 files.

Hi Bill,

Deconvolution should preferably be performed in linear gamma space, and the artifacts you showed (darkened microcontrast) are a typical indicator of gamma-related issues. Of course not all images are as insanely critical as a star chart, so deconvolving gamma-precompensated images may work well enough. However, it's good if linearization can be easily accommodated in a workflow that involves image math. This is also preferably performed with floating-point precision, which will usually allow applying more iterations or more severe adjustments without artifacts due to accumulating errors.

Quote
Performing 30 iterations of RL with a radius of 0.89 as determined by your tool works well with RawTherapee.

In the search for optimal settings, it is good to have at least the radius parameter nailed. Hopefully, under the hood, RawTherapee does the RL-deconvolution on linearized data.

Quote
10 iterations of RL in ImagesPlus with a 5x5 kernel derived with your tools and a radius of 0.89 produce artifacts, but 3 iterations produce more reasonable results. I used the deconvolution kernel with a fill factor of 100%. Deconvolving the luminance channel in IP made little difference. Where should I go from here?

Just to make sure I understand what you've done. When you say you used a 5x5 kernel, I assume you copied the values from the PSF Kernel generator into the ImagesPlus "Custom Point Spread Function" dialog box, and clicked the "Set Filter" button, then used the "Adaptive Richardson-Lucy" control with "Custom" selected, and "Reduce Artifacts" checked.

That still leaves the fine-tuning of the "Noise Threshold" slider, or the Relaxation slider in the Van Cittert dialog. Too low a setting will not reduce the noise between iterations in the featureless smooth regions of the image, and too high a setting will start to reduce fine detail in addition to noise.

Quote
I presume that the deconvolution kernel would be most appropriate, but what is the purpose of the other PSFs?

Not sure, what other PSFs you are referring to? You mean in the Adaptive RL dialog?

Now, if this still produces artifacts with more than a few iterations, I suspect that aliasing artifacts are rearing their ugly head. Aliases are larger-than-actual representations of fine detail, so the larger detail gets definition added by the deconvolution where it shouldn't. Maybe, just as an attempt, some over-correction of the noise adaptation might help a bit, but it is not ideal. Also, multiple runs with a deliberately too small Gaussian blur radius PSF may build up to an optimum more slowly.

As a final resort, though it won't do much if aliasing is indeed the issue, you can try to first up-sample the image, say to 300%, which should keep the file size below the 2GB TIFF boundary that could cause issues with some TIFF libraries. With the up-sampled data, hopefully without adding too many artifacts of its own, the resolution has not increased, but the data has become sub-sampled.

That data will be easier (but much slower) to deconvolve smoothly (multiply the PSF blur radius by the same factor, or more accurately determine it by upsampling the slanted edge first and then measuring the blur radius), and stop the iterations when visible artifacts begin to develop. The problem becomes how to create a custom kernel that fits the 9x9 maximum dimensions of ImagesPlus. RawTherapee can go to 2.5, which is close. Then do a simple down-sample to the original image size, and compensate for the down-sampling blur by adding some small (e.g. 0.6) radius deconvolution sharpening.

Other than fine-tuning the shape of the PSF by selecting a fill factor smaller than 100% upon creation, there is not much left to do, other than resorting to super-resolution or stitching with longer focal lengths.

If you'd like, I could try a deconvolution with PixInsight, because that allows more tweaking of the parameters, and see if that makes a difference. But I'd like to have a 16-bit PNG crop from the RT Amaze conversion to work on.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: bjanes on January 19, 2014, 03:22:05 PM
Just to make sure I understand what you've done. When you say you used a 5x5 kernel, I assume you copied the values from the PSF Kernel generator into the ImagesPlus "Custom Point Spread Function" dialog box, and clicked the "Set Filter" button, then used the "Adaptive Richardson-Lucy" control with "Custom" selected, and "Reduce Artifacts" checked.

That still leaves the fine-tuning of the "Noise Threshold" slider, or the Relaxation slider in the Van Cittert dialog. Too low a setting will not reduce the noise between iterations in the featureless smooth regions of the image, and too high a setting will start to reduce fine detail in addition to noise.

Bart, thanks again for your detailed replies. Yes, I copied the values from your web-based tool and pasted them into the IP Custom PSF dialog. I used the Apply check box in the custom filter dialog instead of Set, but the effect seems to be the same as when I used the Set function. The Apply box is not covered in the IP docs that I have, and may have been added in a later version. I am using IP ver. 5.0.

(http://bjanes.smugmug.com/Photography/RT-Deconvolution/i-XL5qtMz/0/O/ApplyPSF.png)

I left the noise threshold at the default and did not adjust the minimum and maximum apply values.

(http://bjanes.smugmug.com/Photography/RT-Deconvolution/i-ZtHRsbj/0/O/ARL_Dialog.png)

Not sure, what other PSFs you are referring to? You mean in the Adaptive RL dialog?

The PSFs to which I was referring are those derived by your PSF generator.

Now, if this still produces artifacts with more than a few iterations, I suspect that aliasing artifacts are rearing their ugly head. Aliases are larger-than-actual representations of fine detail, so the larger detail gets definition added by the deconvolution where it shouldn't. Maybe, just as an attempt, some over-correction of the noise adaptation might help a bit, but it is not ideal. Also, multiple runs with a deliberately too small Gaussian blur radius PSF may build up to an optimum more slowly.

As a final resort, but it won't do much if indeed aliasing is the issue, you can try to first up-sample the image, say to 300% which should keep the file size below the 2GB TIFF boundary that could cause issues with some TIFF libraries. With the up-sampled data, hopefully without adding too many artifacts of its own, the resolution has not increased, but the data has become sub-sampled.

That data will be easier (but much slower) to deconvolve smoothly (multiply the PSF blur radius by the same factor, or more accurately determine it by upsampling the slanted edge first and then measuring the blur radius), and stop the iterations when visible artifacts begin to develop. The problem becomes how to create a custom kernel that fits the 9x9 maximum dimensions of ImagesPlus. RawTherapee can go to 2.5, which is close. Then do a simple down-sample to the original image size, and compensate for the down-sampling blur by adding some small (e.g. 0.6) radius deconvolution sharpening.

Other than fine-tuning the shape of the PSF by selecting a fill-factor smaller than 100% upon creation, there is not much left to do, other than resort to super resolution or stitching longer focal lengths.

If you'd like, I could try a deconvolution with PixInsight, because that allows more tweaking of the parameters, and see if that makes a difference. But I'd like to have a 16-bit PNG crop from the RT Amaze conversion to work on.

I will try these other suggestions at a later date.

If you (or others) wish to work with my files, here are links.

The raw file (NEF) f/8:
http://adobe.ly/1bbzgzC

The Rawtherapee rendered TIFF f/8:
http://adobe.ly/1ifW49t

Other NEFs

f/4
http://adobe.ly/1dJp3jW

f/16
http://adobe.ly/1mrvDOa

f/22
http://adobe.ly/1cMuNTV

Thanks,

Bill

p.s.
Edited 1/20/2014 to correct links and add files
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on January 19, 2014, 05:46:59 PM
Bart, Thanks again for your detailed replies. Yes, I copied the values from your web based tool and pasted them into the IP Custom PSF dialog. I used the Apply check box in the custom filter dialog instead of the Set, but the effect seems to be the same when I used the Set function. The Apply box is not covered in the IP docs that I have, and may have been added to a later version. I am using IP ver 5.0

(http://bjanes.smugmug.com/Photography/RT-Deconvolution/i-XL5qtMz/0/O/ApplyPSF.png)

Hi Bill,

Great, that explains a few things, and reveals a procedural error. Good that I asked, or we would not have found it.

The filter kernel values that you used are for the direct application of a single deconvolution filter operation (the addition of a high-pass filter to the original image). To store those values for use in other ImagesPlus dialogs, one uses the "Set" button, and can leave the dialog box open for further adjustments. Hitting the "Apply" button will apply the single-pass deconvolution to the active (and/or locked) image window(s).

However, the Adaptive Richardson-Lucy dialog expects a regular Point Spread Function (all kernel values positive) to be defined in the Custom filter box, just like a regular sample of a blurred star. And here a larger support kernel should produce a more accurate restoration; a 9x9 kernel would be almost optimal (as the PSF tool suggests, approx. 10x sigma).

Quote
I left the noise threshold at the default and did not adjust the minimum and maximum apply values.

The default noise assumption often works well enough, and the minimum/maximum limits are more useful for star images.

Quote
The PSFs to which I was referring are those derived by your PSF generator.

I see. The different PSFs are just precalculated kernel values for various purposes. A regular PSF is fine, although with large kernels there will be a lot of leading zero decimal digits. When the input boxes, like those of ImagesPlus, only allow entering a given number of digits (15 or so), it can help to pre-multiply all kernel values. ImagesPlus will still normalize the kernel values to a total sum of 1.0, to keep overall image brightness the same.
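That renormalization is what makes the pre-multiplication trick safe; a quick check with stand-in kernel values (not output of the generator):

```python
import numpy as np

# a small normalized Gaussian kernel (sigma = 0.7, 5x5) as stand-in values
ax = np.arange(5) - 2
g = np.exp(-ax ** 2 / (2.0 * 0.7 ** 2))
kernel = np.outer(g, g)
kernel /= kernel.sum()

scaled = np.round(kernel * 65535)     # what one would paste into the dialog
renorm = scaled / scaled.sum()        # what ImagesPlus effectively does
```

After renormalizing, the pasted integer kernel is within rounding error of the original, so overall brightness is preserved.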

I used to use the second PSF version (PSF[0,0] normalized to 1.0) with a multiplier of 65535. That gives a simple indication of whether the kernel values in the outer positions have a significant enough effect (say > 1.0) on the total sum in 16-bit math. When a kernel element contributes little, one could probably also use a smaller kernel size.

Quote
If you (or others) wish to work with my files, here are links.

I'll have a look, thanks. BTW, the NEF is of a different file (3086) than the TIFF (3088).

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: bjanes on January 19, 2014, 08:36:36 PM
Hi Bill,

Great, that explains a few things, and reveals a procedural error. Good that I asked, or we would not have found it.

The filter kernel values that you used, are for the direct application of a single deconvolution filter operation (the addition of a high-pass filter to the original image). To store those values for use in other ImagesPlus dialogs, one uses the "Set" button, and can leave the dialog box open for further adjustments. Hitting the "Apply" button will apply the single pass deconvolution to the active (and/or locked) image window(s).

However, the adaptive Richardson-Lucy dialog expects a regular Point Spread function (all kernel values are positive) to be defined in the Custom filter box, just like a regular sample of a blurred star. And here a larger support kernel should produce a more accurate restoration, a 9x9 kernel would be almost optimal (as the PSF tool suggests, approx. 10x Sigma).

The default noise assumption often works good enough, and the minimum/maximum limits are more useful for star images.

I see. The different PSFs are just precalculated kernel values for various purposes. A regular PSF is fine, although with large kernels there will be a lot of leading zero decimal digits. When the input boxes, like those of ImagesPlus, only allow entering a given number of digits (15 or so), it can help to pre-multiply all kernel values. ImagesPlus will still normalize the kernel values to a total sum of 1.0, to keep overall image brightness the same.

I used to use the second PSF version (PSF[0,0] normalized to 1.0) with a multiplier of 65535. That gives a simple indication whether the kernel values in the outer positions have a significant enough effect (say>1.0) on the total sum in 16-bit math. When a kernel element contributes little, one could probably also use a smaller kernel size.

I'll have a look, thanks. BTW, the NEF is of a different file (3086) than the TIFF (3088).

Cheers,
Bart

Bart,

Sorry, but I posted the link for the f/11 raw file. Here is the link for f/8:
http://adobe.ly/1bbzgzC

I calculated the 9x9 PSF for a radius of 0.7533, scaled by 65535, and pasted the values into the IP custom filter as shown:
(http://bjanes.smugmug.com/Photography/RT-Deconvolution/i-2VDsfkj/0/O/9x9deconvol_b.png)

I then applied the filter with the Set command and performed 10 iterations of RL. The image appeared overcorrected, with artifacts similar to those I experienced previously.

(http://bjanes.smugmug.com/Photography/RT-Deconvolution/i-3VJj7GM/0/O/9x9deconvol.png)

Regards,

Bill

ps
edited 1/20/2014 to correct link to f/8 file
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on January 20, 2014, 01:11:40 AM
Bob, RT displays f16 on that one.
Title: Re: Deconvolution sharpening revisited
Post by: bjanes on January 20, 2014, 11:15:45 AM
Bob, RT displays f16 on that one.

Thanks for pointing this out. I corrected the links.
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on January 20, 2014, 11:56:57 AM
Bart,

Sorry, but I posted the link for the f/11 raw file. Here is the link for f/8:
https://creative.adobe.com/share/60c87f91-96a0-4906-b108-568974011f22

No problem, I've downloaded it, and the EXIF says f/16 (as intended for the exercise at hand). I've made a conversion in RawTherapee, with raw-level chromatic aberration correction, which helped a bit. Further processing was mostly left at default, except a white balance on the white patch of the star chart grey scale and a small increase in exposure to get the mid-grey level to approx. 50% and white at 90%.

Now, there is good news and bad news.

The good news is that a good deconvolution is possible. The bad news is that it is not simple to do with the conventional approach of determining the amount of blur based on a slanted edge.

I was already a bit surprised that it was possible to produce significant deconvolution artifacts with 'normal' radius settings one might expect from other tests on f/16 images. I was able to get a nice Adaptive RL deconvolution result in ImagesPlus by using the default 3x3 Gaussian PSF twice (after a first run, click the blue eye icon on the toolbar, and do another run). Doing multiple deconvolution runs with a small radius amounts to the same as a single run with a larger radius, but a run with a default 5x5 Gaussian was already problematic. Hmm, what to think of that?
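The "two small runs amount to one larger run" behavior follows from Gaussian blurs composing in quadrature (sigma_total = sqrt(sigma1^2 + sigma2^2)), so undoing a 0.8 px blur twice is roughly undoing a single ~1.13 px blur once. A quick numerical check of the blur side of that identity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = rng.random((64, 64))

twice = gaussian_filter(gaussian_filter(img, 0.8), 0.8)
once = gaussian_filter(img, np.sqrt(2.0) * 0.8)   # sigma ~ 1.13

max_diff = np.max(np.abs(twice - once))
print(max_diff)   # the two pipelines nearly coincide
```

The residual difference comes only from kernel sampling and truncation, which is why repeated small-radius deconvolution runs behave like one larger-radius run.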

I then checked the edge profile of the Slanted edge, and found that there is some glare (possibly from the lighting angle or the print surface) that makes it hard to produce a clean profile model with my Slanted Edge tool. The tool suggests a much larger radius, which already tested as problematic. But the trained human eye is sometimes harder to fool than a simple curve fitting algorithm, so I saw that I had to try something with smaller radii, although I didn't know how small.

I then attempted an empirical approach (when everything else fails, try and try again) to finding a better PSF size/shape. I used the power of PixInsight to help me with that, because it produces some statistical convergence data to assist the effort, and it allows doing the math in 64-bit floating-point precision (to eliminate the possibility of rounding errors influencing the tests). This all suggested that a Gaussian radius of about 0.67 should produce a good compromise. That is a radius normally only needed by the best possible lenses at the optimal aperture, certainly not at f/16. So this remains puzzling, and hard to explain.

To test the influence of the deconvolution algorithm implementation, I then produced a PSF with a radius of 0.67, as suggested by PixInsight, with a 65535 multiplier for use in ImagesPlus (see attachment). A 7x7 kernel should be large enough. Note that in my version of ImagesPlus, there is a Custom Restoration PSF dialog for sharpening (besides a Custom filter dialog).

This produces a reasonably good deconvolution without too many artifacts, improved a bit further by linearizing the data before deconvolution. However, I'm not totally satisfied yet (and the lack of an obvious reason for needing such a small PSF is puzzling), so some more investigation is in order.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: bjanes on January 20, 2014, 12:44:09 PM
No problem, I've downloaded it and the EXIF says f/16 (as intended for the exercise at hand). I've made a conversion in RawTherapee, with Raw level Chromatic Aberration correction which helped a bit. Further processing was mostly left at default, except a White Balance on the white patch of the star chart grey scale and  small increase in exposure to get the midgrey level to approx. 50% and white at 90%.

Now, there is good news and bad news.

The good news is that a good deconvolution is possible. The bad news is that it is not simple to do with the conventional approach of determining the amount of blur based on a slanted edge.

I was already a bit surprised that it was possible to produce significant deconvolution artifacts with 'normal' radius settings one might expect from other tests on f/16 images. I was able to get a nice Adaptive RL deconvolution result in ImagesPlus by using the default 3x3 Gaussian PSF, twice (after a first run, click the blue eye icon on the toolbar, and do another run). Doing multiple deconvolution runs with a small radius amounts to the same as a single run with a larger radius, but a run with a default 5x5 Gaussian already was problematic. Hmm, what to think of that?

I then checked the edge profile of the Slanted edge, and found that there is some glare (possibly from the lighting angle or the print surface) that makes it hard to produce a clean profile model with my Slanted Edge tool. The tool suggests a much larger radius, which already tested as problematic. But the trained human eye is sometimes harder to fool than a simple curve fitting algorithm, so I saw that I had to try something with smaller radii, although I didn't know how small.

I then attempted an empirical approach (when everything else fails, try and try again) to finding a better PSF size/shape. I used the power of PixInsight to help me with that, because it produces some statistical convergence data to assist the effort, and it allows doing the math in 64-bit floating-point precision (to eliminate the possibility of rounding errors influencing the tests). This all suggested that a Gaussian radius of about 0.67 should produce a good compromise. That is a radius normally only needed by the best possible lenses at the optimal aperture, certainly not at f/16. So this remains puzzling, and hard to explain.

To test the influence of the deconvolution algorithm implementation, I then produced a PSF with a radius of 0.67, as suggested by PixInsight, with a 65535 multiplier for use in ImagesPlus (see attachment). A 7x7 kernel should be large enough. Note that in my version of ImagesPlus, there is a Custom Restoration PSF dialog for sharpening (besides a Custom filter dialog).

This produces a reasonably good deconvolution without too many artifacts, improved a bit further by linearizing the data before deconvolution. However, I'm not totally satisfied yet (and the lack of an obvious reason for needing such a small PSF is puzzling), so some more investigation is in order.

Cheers,
Bart

Bart,

Thanks again for your help and good work.

Your target was printed on glossy paper. I tried to position the lights (Solux 4700K) properly, but there could still be some glare. The ISO 12233 target in the background has some slanted edges and was printed on matte paper, so it might be better even though it is slightly off-center.

Bill

PS I reposted links to the files in this message here (http://www.luminous-landscape.com/forum/index.php?topic=45038.msg700154#msg700154). I was just learning to use Adobe Creative Cloud for the first time and some of the links were erroneous.
Title: Re: Deconvolution sharpening revisited
Post by: rnclark on February 22, 2014, 03:20:52 AM
Hi Guys,
Sorry it has been a while--I had several trips and work was intense.

I did add a third example in my sharpening series: http://www.clarkvision.com/articles/image-restoration3/
It uses upsampling as Bart described, followed by deconvolution sharpening.

Some of the questions asked:

> I can't see how a star would be "smaller than 0% MTF".
While the star itself is extremely small, in the optical system it is a diffraction disk.  My point was that with deconvolution one can resolve two stars that are closer than the 0% MTF spacing.  MTF is a one-dimensional description of the response of an optical system to a bar chart, and one can't resolve the bars in a bar chart if they are closer than 0% MTF.  But one can resolve 2-dimensional structures that are closer than 0% MTF.

Bart, you say "I do not fully agree with your downsampling conclusions," and I would agree with you if one just downsamples bar charts.  But as you say, "sharpening before downsampling may happen to work for certain image content (irregular structures)" is the key.  Images of the real world are dominated by irregular structures, and I have yet to see in any of my images artifacts like those in your bar-chart examples.  If I ever run across a pathological case where artifacts appear like those in your downsampling examples, then I'll change my methodology.  So far I have never seen such a case in my images.  So far no one has met my challenge of downsampling first, then sharpening, and producing a better, or even equal, image like those I show in Figures 7e and 7f at:
http://www.clarkvision.com/articles/image-restoration2/
Aren't your posts about upsampling, deconvolution sharpening, then downsampling in conflict with saying no sharpening until you have downsized?

Fine_Art sharpened my crane image with Van Cittert.  While I have read research papers on Van Cittert, it seems to produce results similar to Richardson-Lucy, so I have not explored it, mostly for lack of time; I figured I should try to master RL first.  Your posted sharpening of the crane looks to me about like the unsharp mask results on my image-restoration2 web page (Figure 1).  So I think you can push much further.  As the image has high S/N, I would not be surprised if you could surpass my RL results in Figure 3b.

Also asked was: what is my strategy for choosing the PSF?  Well, for star images it is simple: just choose a non-saturated star.  But for scenics and landscapes it is not so easy to find the right one, and one reads online (e.g. the Wikipedia page on deconvolution) that it is hopeless and therefore not applied to regular images.  Well, it is not that hard either.  Basically, I start large and work small.  But there is rarely one PSF for the typical landscape or wildlife image, mainly due to depth of field; thus, different parts of the image need different PSFs.  I also don't worry about linearizing the data.  I just use the image as it comes out of the raw converter with the tone curve applied, plus any other contrast adjustments I make.  A Gaussian response function run through such a process is still reasonably modeled by a Gaussian, though a different one than would be derived from linear data.

So it is simple (I can see a need for a 4th article in my series): start with a large PSF, like a 9x9 Gaussian, and run a few iterations.  If it looks good, run again with more iterations.  If that starts producing ringing artifacts, that is an indication the PSF is too large and/or the iterations too many.  I drop back on the iterations, then drop to a smaller PSF (e.g. 7x7) and start again.  For a typical DSLR image made at, say, ISO 400, one should be able to go 50 to 100 iterations without significant noise enhancement.  For a high S/N image, say ISO 100 with a camera having large pixels, 500 iterations may be possible.  If the PSF is too small, one can do hundreds of iterations and not see any improvement in the image.  So there is a maximum that sharpens without artifacts.  What I find in the typical image is that different parts of the image respond better to different sizes of PSF.  So I put all the results in Photoshop as layers, with the original on top and increasing PSF size going down.  I then erase portions of each layer to show the portion of the sharpened image that responded best to the deconvolution.  For wildlife, I usually concentrate on the eyes first, then work outward.  I usually leave the background as the original unsharpened image.
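For those who want to experiment, this start-large-work-small recipe can be sketched in a few lines of Python. Below is a minimal, pure-NumPy Richardson-Lucy (no noise regularization, circular-convolution edge handling), and the kernel sizes, the sigma choice (sigma = size/4) and the iteration counts are only illustrative starting points, not values taken from this thread:

```python
import numpy as np

def gaussian_psf(size, sigma):
    # normalized size x size Gaussian kernel (e.g. 9x9 to start large)
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def convolve(img, psf):
    # circular convolution via FFT; the PSF is wrapped so the
    # output is not spatially shifted
    pad = np.zeros_like(img)
    k = psf.shape[0]
    pad[:k, :k] = psf
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def richardson_lucy(observed, psf, n_iter):
    # classic multiplicative RL update; in practice one backs off the
    # iteration count (or the PSF size) when ringing artifacts appear
    est = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        ratio = observed / np.maximum(convolve(est, psf), 1e-12)
        est = est * convolve(ratio, psf_mirror)
    return est

def layers_for_blending(img, sizes=(9, 7, 5), n_iter=50):
    # coarse-to-fine: one result per PSF size, to be stacked as layers
    # and blended by hand (eyes first, background left untouched)
    return {s: richardson_lucy(img, gaussian_psf(s, s / 4.0), n_iter)
            for s in sizes}
```

Deconvolving a synthetic image blurred with a known Gaussian recovers most of the lost edge contrast within a few dozen iterations; on real images the PSF is only an estimate, which is why the trial-and-error loop described above is needed.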

For landscapes, it is usually simpler than for wildlife.  Most of the image is usually pretty sharp, so I find the one PSF that works best.

I can see developing some examples would be nice.

The bottom line is that real world images have variability, and no one formula works for multiple images, let alone all parts of one image.  This is also true for the simpler methods like unsharp mask.  So I just try a few things and see what works well for an image, then push it until I see artifacts, then back off.

Roger

Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on February 22, 2014, 02:43:54 PM
Conceptually, I still don't understand the reason for hundreds of iterations. To me, that seems to imply the wrong PSF is being used.

Most of my shots are ISO 100 with a high quality prime lens. Sometimes ISO 400, rarely 800 or more. This high S/N is the reason I use Van Cittert first. I can get a good base improvement with the default 10 cycles in the dialog box. I go 5x5, then 3x3, then switch to A R/L maybe 10 5x5 then 30 3x3. I have never felt a need for a larger radius, given a good lens to begin with.

Using Bart's upsample first method gives a better result without question. If I need it I would probably start with 7x7.

Please explain the benefit of a more gradual 9x9 curve with more iterations vs. the default Gaussian-shaped 5x5 with few iterations. The feeling I get is that my camera puts the data into the right pixel, plus or minus a radius of about 1. Therefore the tails of a 5x5 will create ring artifacts with too many iterations. This is what I see happening. Maybe I misinterpret the output.
Title: Re: Deconvolution sharpening revisited
Post by: rnclark on February 22, 2014, 03:03:41 PM
Conceptually, I still don't understand the reason for hundreds of iterations. To me, that seems to imply the wrong PSF is being used.

Most of my shots are ISO 100 with a high quality prime lens. Sometimes ISO 400, rarely 800 or more. This high S/N is the reason I use Van Cittert first. I can get a good base improvement with the default 10 cycles in the dialog box. I go 5x5, then 3x3, then switch to A R/L maybe 10 5x5 then 30 3x3. I have never felt a need for a larger radius, given a good lens to begin with.

Using Bart's upsample first method gives a better result without question. If I need it I would probably start with 7x7.

Please explain the benefit of a more gradual 9x9 curve with more iterations vs. the default Gaussian-shaped 5x5 with few iterations. The feeling I get is that my camera puts the data into the right pixel, plus or minus a radius of about 1. Therefore the tails of a 5x5 will create ring artifacts with too many iterations. This is what I see happening. Maybe I misinterpret the output.


Hi,
Deconvolution is an iterative process.  Think of it this way: in a pixel, there is signal from the surrounding pixels contaminating the signal in that pixel.  But those adjacent pixels in turn have signal contamination from the pixels surrounding them, and so on.  To put the light back in each pixel, one would need to know the correct signal from the adjacent pixels, but we don't know that because those pixels too are contaminated.  The result is that there is no direct solution, only an iterative one.  A few iterations gets only a partial solution.

A larger blur radius to the PSF will result in more artifacts.  So people limit the number of iterations to prevent noticeable artifacts.  If you start getting artifacts at 10 or so iterations, my experience is that the PSF is too large, in which case it is usually better to use a smaller PSF and more iterations.

There are cases where the PSF may be like two different Gaussians with different radii.  Then one could either derive the PSF for that image, or do two Gaussian runs.  For example, while diffraction is somewhat Gaussian, there is a big "skirt", especially considering multiple wavelengths.  Thus a two-step deconvolution, like a large-radius Gaussian with a few iterations followed by a smaller-radius Gaussian with more iterations, can be effective.
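A core-plus-skirt PSF of that kind can also be built directly as a weighted sum of two Gaussians and used in a single deconvolution pass, as an alternative to two runs. The sizes, sigmas and the 80/20 weighting below are purely illustrative, not measured values:

```python
import numpy as np

def gaussian(size, sigma):
    # normalized 2-D Gaussian kernel
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def core_plus_skirt(size=15, sigma_core=1.0, sigma_skirt=4.0, w_core=0.8):
    # a narrow core carries most of the energy; the wide, faint "skirt"
    # stands in for the diffraction halo spread over multiple wavelengths
    psf = (w_core * gaussian(size, sigma_core) +
           (1.0 - w_core) * gaussian(size, sigma_skirt))
    return psf / psf.sum()
```

The skirt term barely changes the peak but dominates the tails of the kernel, which is exactly the shape the two-run approach is approximating.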

Roger
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on February 22, 2014, 03:19:42 PM
Thanks,

What do you think of using the program's ability to split RGB to do a larger radius on Red, then a smaller one on Green, then smaller still on Blue, and recombine?
Title: Re: Deconvolution sharpening revisited
Post by: rnclark on February 22, 2014, 03:27:05 PM
Thanks,

What do you think of using the program's ability to split RGB to do a larger radius on Red, then a smaller one on Green, then smaller still on Blue, and recombine?

Hmmm...  Seems like more work.  I would wonder about color noise in the final image.  Probably better to just do the sharpening on a luminance channel.

Roger
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on February 22, 2014, 04:08:34 PM
Hmmm...  Seems like more work.  I would wonder about color noise in the final image.  Probably better to just do the sharpening on a luminance channel.

Roger

Actually, I have found the detail-preserving NR filters highly effective at removing color noise. You are probably right about using the luminance channel. In my handful of prior tests I was not able to tell the difference. I expected more smear on red, but I was unable to improve on just the L channel. Your reply tells me it was more a poor idea than poor technique.
Title: Re: Deconvolution sharpening revisited
Post by: rnclark on February 22, 2014, 04:41:00 PM
Actually, I have found the detail-preserving NR filters highly effective at removing color noise. You are probably right about using the luminance channel. In my handful of prior tests I was not able to tell the difference. I expected more smear on red, but I was unable to improve on just the L channel. Your reply tells me it was more a poor idea than poor technique.

Well, I would not say a poor idea.  I have not tried it.  If one were limited by diffraction, the red diffraction disk is larger than the green and blue, so in theory sharpening the red with a larger PSF and the blue with a smaller PSF than the green channel makes sense.  Perhaps those f/32 images...
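For reference, the wavelength dependence is linear: the first-minimum diameter of the Airy disk is about 2.44 times the wavelength times the f-number. A quick sanity check, using 450 nm and 650 nm as stand-in blue and red wavelengths (any representative pair would do):

```python
def airy_diameter_um(wavelength_nm, f_number):
    # first-minimum (dark-ring) diameter of the Airy pattern, in micrometres
    return 2.44 * (wavelength_nm / 1000.0) * f_number

blue = airy_diameter_um(450, 8)  # ~8.8 um at f/8
red = airy_diameter_um(650, 8)   # ~12.7 um at f/8
# red/blue ~ 1.44 for this wavelength pair; the red disk is clearly
# larger, so a slightly larger red PSF is defensible when diffraction-limited
```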

At apertures not limited by diffraction, and with the effects of the blur filter, there probably isn't much difference in the PSF between the color channels unless the lens has some really bad chromatic aberration.

Roger
Title: Re: Deconvolution sharpening revisited
Post by: Christoph C. Feldhaim on February 22, 2014, 04:44:37 PM
Theoretically it makes total sense, since Airy discs differ by a factor of about 2 between red and blue light.
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on February 22, 2014, 05:01:44 PM
Theoretically it makes total sense, since Airy discs differ by a factor of about 2 between red and blue light.

That is what I was thinking of. I did not bring it up in a post to Roger, who has written more papers than most people have read, but for the rest of us it is worth mentioning.
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 22, 2014, 07:53:31 PM
Bart, you say "I do not fully agree with your downsampling conclusions" and I would agree with you if one just down samples bar charts.

Trust me, I only use (bar) charts for objective, worst-case-scenario testing. If a procedure passes that test, it will pass real-life challenges with flying colors. In fact, near-orthogonal bi-tonal charts (e.g. the 1951 USAF chart, obviously designed for analog aerial imaging with film captures) should be banned from quantitative discrete-sampling tests; they serve as nothing more than a visual clue (and are very susceptible to phase alignment with the camera's sensels).

I have (many years ago on Usenet) proposed a more reliable, easily interpretable, modified star target (http://www.openphotographyforums.com/forums/showthread.php?t=13217) for (visual qualitative and) quantitative analysis (along with the slanted-edge target), which has in fact become part of the ISO standard for resolution testing of digital still cameras (http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=59419&commid=48420).

Quote
But as you say, "sharpening before downsampling may happen to work for certain image content (irregular structures)" is the key.  Images of the real world are dominated by irregular structures, and I have yet to see in any of my images artifacts like those seen in your examples of bar charts.  If I ever run across a pathological case with artifacts like those in your downsampling examples, then I'll change my methodology.  So far I have never seen such a case in my images.

You are correct in that subject matter matters ...

Most shots of nature scenes are pretty forgiving, although e.g. tree branches against a much brighter sky could get one into some trouble. A sample image of a natural scene (although with many urban structures as well) that gets everybody into downsampling trouble (try a web size of 533x800 pixels), even without prior sharpening, is given here (http://bvdwolf.home.xs4all.nl/temp/7640_CO40_FM1-175pct_sRGB.jpg). The devil is in the rendering of the brick structures, the branches against the sky, and the grass structure/texture.

So my goal is to prevent nasty surprises, with the knowledge that one might be able to push things a little further, but with the risk of introducing trouble.

Quote
So far no one has met my challenge of downsampling first, then sharpening, and producing a better, or even equal, image to those I show in Figures 7e and 7f at:
http://www.clarkvision.com/articles/image-restoration2/

It is about as much as one can practically extract from the image crop with current technology.

Quote
Aren't your posts about upsampling, then deconvolution sharpening, then downsampling in conflict with saying no sharpening until you have downsized?

Yes, although the 'violations' are tolerable; in fact, they are usually better trade-offs than straightforward deconvolution or other sharpening at 100% zoom size. There are several reasons for that.

One is that by up-sampling we can change existing samples/pixels at a sub-pixel level. Up-sampling by itself does not add resolution (although some procedures can); we are still bound (at best) by the Nyquist frequency of the original sampling density, but we can more accurately shape the steepness of the gradient between pixels. Two pixels of identical luminosity may get different luminosities after deconvolution, depending on their neighboring pixels, and the more accurately we can distribute the contributions of the surrounding pixels, the better the result.

So subsequent down-sampling of a bandwidth-limited data source can only cause aliasing if additional resolution is created (which deconvolution might, but only for those spatial frequencies that were already at the limit); all others will benefit, admittedly to variable degrees, from the gained precision.

Another reason is that it becomes visually much easier to detect over-sharpening, even for inexperienced users. At a larger magnification, it is easier to detect halos and, for example, stair-stepping, blocking, or posterization artifacts.
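As a toy illustration of the order of operations (and not of any particular product's resampling or deconvolution), here is a dependency-light sketch: nearest-neighbour 3x up-sampling, a small Laplacian boost standing in for the sharpening step, and 3x block-average down-sampling. All three choices are deliberately crude stand-ins:

```python
import numpy as np

def sharpen_at_3x(img, amount=1.0):
    # 1) up-sample 3x (nearest-neighbour here just to stay dependency-free;
    #    a proper interpolator is better in practice)
    big = np.kron(img, np.ones((3, 3)))
    # 2) sharpen at the enlarged size, where edge gradients can be shaped
    #    at sub-pixel precision relative to the original grid
    #    (np.roll gives wrap-around edges, acceptable for a sketch)
    lap = (np.roll(big, 1, 0) + np.roll(big, -1, 0) +
           np.roll(big, 1, 1) + np.roll(big, -1, 1) - 4.0 * big)
    big = big - amount * lap
    # 3) down-sample back by 3x block averaging; the data were already
    #    band-limited at the original size, so the aliasing risk is small
    h, w = img.shape
    return big.reshape(h, 3, w, 3).mean(axis=(1, 3))
```

Run across a step edge, the round trip returns an image of the original size whose edge transition has been steepened (with a controlled overshoot), which is the point of sharpening at the enlarged size.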

I'm preparing some example material, based both on charts and on real life imagery. To be continued ...

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 22, 2014, 08:15:27 PM
Deconvolution is an iterative process.  Think of it this way: in a pixel, there is signal from the surrounding pixels contaminating the signal in that pixel.  But those adjacent pixels in turn have signal contamination from the pixels surrounding them, and so on.  To put the light back in each pixel, one would need to know the correct signal from the adjacent pixels, but we don't know that because those pixels too are contaminated.  The result is that there is no direct solution, only an iterative one.  A few iterations gets only a partial solution.

Hi Roger,

Excellent explanation! It's not about repeating a procedure as such, but about homing in on an optimal solution. Deconvolution is generally known as a mathematically ill-posed problem, especially in the presence of noise.

Quote
There are cases where the PSF may be like two different Gaussians with different radii.  Then one could either derive the PSF for that image, or do two Gaussian runs.  For example, while diffraction is somewhat Gaussian, there is a big "skirt", especially considering multiple wavelengths.  Thus a two-step deconvolution, like a large-radius Gaussian with a few iterations followed by a smaller-radius Gaussian with more iterations, can be effective.

Yes, although adding multiple random distributions (e.g. subject motion/camera shake, residual lens aberrations, defocus, diffraction, optical low-pass filter, sensel aperture) tends to gravitate to a combined Gaussian-shaped blur distribution (http://www.luminous-landscape.com/forum/index.php?topic=68089.msg539252#msg539252) pretty fast. There may be some variation, but it is usually mostly aperture/diffraction induced. Of course defocus is a killer as well.
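That convergence is the central limit theorem at work: convolving even quite non-Gaussian blurs quickly yields a near-Gaussian combined kernel. A 1-D illustration with three made-up kernels (the physical labels in the comments are loose analogies, not measured blur shapes):

```python
import numpy as np

# three non-Gaussian 1-D blurs: a wide box (defocus-like), a narrow box
# (sensel-aperture-like) and a small tent (OLPF-like)
box5 = np.ones(5) / 5.0
box3 = np.ones(3) / 3.0
tent = np.array([0.25, 0.5, 0.25])

combined = np.convolve(np.convolve(box5, box3), tent)  # 9-tap kernel

# Gaussian with the same variance (variances add under convolution)
var = (5**2 - 1) / 12.0 + (3**2 - 1) / 12.0 + 0.5
x = np.arange(len(combined)) - (len(combined) - 1) / 2.0
gauss = np.exp(-x**2 / (2.0 * var))
gauss /= gauss.sum()

# cosine similarity between the stacked blur and the matched Gaussian:
# already very close to 1 after only three convolutions
sim = combined @ gauss / (np.linalg.norm(combined) * np.linalg.norm(gauss))
```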

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 22, 2014, 08:24:32 PM
Well, I would not say a poor idea.  I have not tried it.

I have, and it appears that the demosaicing of Bayer CFAs results in almost identical resolutions (http://www.luminous-landscape.com/forum/index.php?topic=68089.msg539252#msg539252) for R/G/B channels, after Raw conversion.

Quote from: Bart
There is also a pretty close correlation between the Red/Green/Blue channels (sigmas of 0.757/0.762/0.758), which shows how the Bayer CFA Demosaicing for mostly Luminance data produces virtually identical resolution in all channels. Since Luminance is the dominant factor for the Human Visual System's contrast sensitivity, it also shows that we can use a single sharpening value for the initial Capture sharpening of all channels.

That is probably because, despite the less dense sampling of the R and B channels and the differences in diffraction pattern diameter, the luminance component of the signal in them is still used to create luminance resolution. And since the Red and most certainly the Blue channel are relatively under-weighted in luminance contribution, I would not be surprised if some is "borrowed" from Green. This tends to (in general) negate wavelength-dependent diffraction blur. Of course, differences in lens design and demosaicing may produce different results.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: rnclark on February 23, 2014, 09:36:03 AM
Trust me, I only use (bar) charts for objective, worst-case-scenario testing. If a procedure passes that test, it will pass real-life challenges with flying colors.

Hi Bart,
While I agree in principle, the question is: in practice, does it really matter?  For example, the Nikon D800E, the camera without a blur filter, might do pretty poorly on your worst-case-scenario test charts.  If that had been the basis for the production decision, we might not have a D800E today.  But in field use by many photographers, moire artifacts are rare.  Yes, they do happen, but that doesn't seem to bother photographers, and overall they seem to like the added sharpness.

At least with downsampling, we can do it however we want, and if we see artifacts, modify the procedure.

So I'll stand by my method of producing the best, highest-pixel-count sharpened image (e.g. with deconvolution sharpening), then downsampling.  If a component in the image shows artifacts in the downsampling step, I'll pre-blur that part of the sharpened image and then downsample.  So far I have not seen such a problem.  Maybe I should try downsampling some of the thousands of windmill images I have ;-).

Roger
Title: Re: Deconvolution sharpening revisited
Post by: AreBee on February 23, 2014, 12:33:12 PM
Folks,

Most of the discussion in this thread is way over my head, but I would be interested to learn if deconvolution sharpening is strictly appropriate for use with the D800E. My understanding is that deconvolution sharpening is used to reverse, as best as possible, the blur effect of an AA filter. However, the D800E does not have an AA filter (I appreciate that it is a bit of a hybrid filter sandwich and not simply the absence of an AA filter). Therefore, in applying deconvolution sharpening to a D800E image, are we not attempting to de-blur what never was blurred?

Cheers,
Title: Re: Deconvolution sharpening revisited
Post by: bjanes on February 23, 2014, 12:55:42 PM
Folks,

Most of the discussion in this thread is way over my head, but I would be interested to learn if deconvolution sharpening is strictly appropriate for use with the D800E. My understanding is that deconvolution sharpening is used to reverse, as best as possible, the blur effect of an AA filter. However, the D800E does not have an AA filter (I appreciate that it is a bit of a hybrid filter sandwich and not simply the absence of an AA filter). Therefore, in applying deconvolution sharpening to a D800E image, are we not attempting to de-blur what never was blurred?

Cheers,

Yes, deconvolution restoration is desirable with the D800E because there are sources of blur other than a low-pass filter. Diffraction, lens aberrations, and defocus can all be partially reversed with deconvolution. See some of my earlier posts in this thread with examples of deconvolution with the D800E.

Bill
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on February 23, 2014, 02:34:34 PM
Folks,

Most of the discussion in this thread is way over my head, but I would be interested to learn if deconvolution sharpening is strictly appropriate for use with the D800E. My understanding is that deconvolution sharpening is used to reverse, as best as possible, the blur effect of an AA filter. However, the D800E does not have an AA filter (I appreciate that it is a bit of a hybrid filter sandwich and not simply the absence of an AA filter). Therefore, in applying deconvolution sharpening to a D800E image, are we not attempting to de-blur what never was blurred?

Cheers,

Since you mention the D800E filter sandwich: deconvolution is essentially what it is doing, in hardware. Nikon uses the same first filter layer as the D800, then uses additional layers to reverse the AA effect.

These methods, in software or hardware, are much better than the old USM illusion of sharpness.
Title: Re: Deconvolution sharpening revisited
Post by: AreBee on February 23, 2014, 04:37:48 PM
Thanks Bill, I'll take a look.

Fine_Art,

Quote
These methods, in software or hardware, are much better than the old USM illusion of sharpness.

Absolutely. I previously read posts by Bart in a range of threads and noted how highly he praised Focus Magic. On that basis I purchased a copy and have tested it on several images. It really is astonishing how much the plugin restores image acuity without introducing sharpening halos. Worth its purchase price several times over to me.

Regards,
Title: Re: Deconvolution sharpening revisited
Post by: Theodoros on February 23, 2014, 05:24:31 PM
I have, and it appears that the demosaicing of Bayer CFAs results in almost identical resolutions (http://www.luminous-landscape.com/forum/index.php?topic=68089.msg539252#msg539252) for R/G/B channels, after Raw conversion.

That is probably because, despite the less dense sampling of the R and B channels, and the differences in diffraction pattern diameter, the luminance component of the signal in them is still used to create luminance resolution. And since the Red and most certainly the Blue channel are relatively under-weighted in luminance contribution, I would not be surprised if some is ''borrowed" from Green. This tends to (in general) negate wavelength dependent diffraction blur. Of course, differences in lens design and demosaicing may produce different results.

Cheers,
Bart
Hi Bart, is it your opinion that for a completely still subject shot in 16x multishot one should use the "smart sharpen" filter before printing, or something else, and why?
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 23, 2014, 06:58:31 PM
Hi Bart, is it your opinion that for a completely still subject shot in 16x multishot one should use the "smart sharpen" filter before printing, or something else, and why?

Hi,

Yes, Smart sharpen is a good start, but using a Photoshop plugin that does better deconvolution than Smart sharpen might squeeze a bit more real resolution and less noise out of the image.

Even though a 16x multi-shot sensor solves a few issues (and could create some others), deconvolution sharpening is still beneficial. The enhanced color resolution helps, by sampling each sensel position with each color of the Bayer CFA in sequence, and the piezo-actuator-driven half-sensel-pitch offsets double the sampling density. However, the lens still has its residual aberrations and the inevitable diffraction blur from narrowing the aperture, and the sensel aperture also plays a role by averaging the projected image over the original sensel aperture dimensions (the sensel aperture is roughly twice the sensel pitch, so 4x the sensel area). This relatively large sensel aperture, like the increase in sampling density, will help reduce aliasing, but some blur will still remain. The lowered aliasing and the remaining blur call for deconvolution sharpening to be applied.

So you can still improve the results from such a sensor design. As mentioned, Focus Magic (http://www.focusmagic.com/download.htm) does a great job, but a plugin such as Topaz Labs Detail (http://www.topazlabs.com/detail/) is also worth a mention. Not only does it offer a simple-to-use 'Deblur' option (=deconvolution), it also allows tweaking several sizes and contrast levels of detail, which is great for 'output sharpening' (where different output sizes might need different levels of micro-contrast and sharpening). Their InFocus (http://www.topazlabs.com/infocus/) plugin offers more control over Capture sharpening alone, and also works very well with my suggested approach of up-sampling, deconvolution sharpening, and down-sampling back to the original size.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Theodoros on February 24, 2014, 06:27:48 AM
Hi,

Yes, Smart sharpen is a good start, but using a Photoshop plugin that does better deconvolution than Smart sharpen might squeeze a bit more real resolution and less noise out of the image.

Even though a 16x multi-shot sensor solves a few issues (and could create some others), deconvolution sharpening is still beneficial. The enhanced color resolution helps, by sampling each sensel position with each color of the Bayer CFA in sequence, and the piezo-actuator-driven half-sensel-pitch offsets double the sampling density. However, the lens still has its residual aberrations and the inevitable diffraction blur from narrowing the aperture, and the sensel aperture also plays a role by averaging the projected image over the original sensel aperture dimensions (the sensel aperture is roughly twice the sensel pitch, so 4x the sensel area). This relatively large sensel aperture, like the increase in sampling density, will help reduce aliasing, but some blur will still remain. The lowered aliasing and the remaining blur call for deconvolution sharpening to be applied.

So you can still improve the results from such a sensor design. As mentioned, Focus Magic (http://www.focusmagic.com/download.htm) does a great job, but a plugin such as Topaz Labs Detail (http://www.topazlabs.com/detail/) is also worth a mention. Not only does it offer a simple-to-use 'Deblur' option (=deconvolution), it also allows tweaking several sizes and contrast levels of detail, which is great for 'output sharpening' (where different output sizes might need different levels of micro-contrast and sharpening). Their InFocus (http://www.topazlabs.com/infocus/) plugin offers more control over Capture sharpening alone, and also works very well with my suggested approach of up-sampling, deconvolution sharpening, and down-sampling back to the original size.

Cheers,
Bart
I thought so… thanks for the detailed explanation and the suggestions. One more thing: if the subject is huge (say a painting of 1.5 square meters) and it must be printed at 1:1 size using 360 PPI as input to the printer (say an Epson 9900), should the process be 1. up-sample to 360 PPI, 2. sharpen, 3. print, or should it be 1. up-sample to 720 PPI, 2. sharpen, 3. down-sample back to 360 PPI, 4. print? …Is there a benefit if one doesn't down-sample when the image size comes to less than 360 PPI?  Thanks.
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 24, 2014, 07:53:14 AM
I thought so… thanks for the detailed explanation and the suggestions. One more thing: if the subject is huge (say a painting of 1.5 square meters) and it must be printed at 1:1 size using 360 PPI as input to the printer (say an Epson 9900), should the process be 1. up-sample to 360 PPI, 2. sharpen, 3. print, or should it be 1. up-sample to 720 PPI, 2. sharpen, 3. down-sample back to 360 PPI, 4. print? …Is there a benefit if one doesn't down-sample when the image size comes to less than 360 PPI?  Thanks.

Hi,

First of all, it is not absolutely necessary to upsample/sharpen/downsample; it is just a method that allows very accurate sharpening. With the proper technique, precautions, and experience, it is possible to sharpen directly at the final output size (that also means using blend-if sharpening layers to avoid clipping).

Second, when the original file already has a lot of pixels, upsampling it by a factor of e.g. 3x just for sharpening may cause issues due to file size, and the deconvolution sharpening will take a lot of processing time and system memory to complete.

Third, depending on the printing pipeline, and given the physical size of the output (and thus the normal viewing distance), I think that upsampling to 360 PPI will probably be adequate, and printing will be faster than at 720 PPI. Only if very close inspection must be possible without compromise, and the input file has enough detail to require little interpolation to reach the output dimensions, can creating a 720 PPI output file make a difference (because the printer's own interpolation is not very good, and doesn't allow sharpening at the final output size). The 'finest detail' option must be activated in the Epson printer driver to actually print at 720 PPI.

When an output file has less than 360 PPI at the final output size, one can consider upsampling with an application like PhotoZoom Pro (http://www.benvista.com/photozoompro) (only for upsampling), because that actually adds edge detail at a higher resolution, although it depends on the original image contents. Other upsampling methods do not create additional resolution, but will allow pushing sharpening a bit further at 720 PPI (because small artifacts will be rendered too small to notice). So once you have more than 360 PPI of useful data, I would not downsample to 360 PPI, but upsample to 720 PPI, sharpen at that size, and print with 'finest detail' activated.
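The decision above reduces to simple arithmetic on the file's native PPI at the output size. A small sketch (the 7360-pixel file width and the 150 cm print width are hypothetical numbers, not from this thread):

```python
def native_ppi(pixels_wide, print_width_cm):
    # pixels available per inch of print width at the chosen output size
    return pixels_wide / (print_width_cm / 2.54)

def epson_target_ppi(ppi):
    # below 360 PPI of real data: up-sample to 360; above it: go to 720
    # and enable the driver's 'finest detail' option, per the advice above
    return 360 if ppi < 360 else 720

ppi = native_ppi(7360, 150)     # ~124.6 PPI for a 1.5 m wide print
target = epson_target_ppi(ppi)  # -> 360 for this example
```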

Since output sharpening also needs to pre-compensate for contrast losses due to print media (ink diffusion, paper structure, limited media contrast, etc.), I'd seriously consider using Topaz Detail, because it not only offers deconvolution (deblur) but also micro-contrast controls. It also allows boosting the low-contrast micro-detail in the shadows more than in the highlights, which is especially useful for non-glossy output media or dim viewing conditions. But that all goes beyond the main subject of this thread.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Manoli on February 24, 2014, 08:25:14 AM
When an output file has less than 360 PPI at the final output size, one can consider upsampling with an application like PhotoZoom Pro (http://www.benvista.com/photozoompro) (only for upsampling),

And for downsampling, Adobe's Bicubic Sharper or PhotoZoom's S-Spline XL or MAX with downsize settings?
Title: Re: Deconvolution sharpening revisited
Post by: Theodoros on February 24, 2014, 08:31:53 AM
Hi,

First of all, it is not absolutely necessary to upsample/sharpen/downsample; it is just a method that allows very accurate sharpening. With the proper technique, precautions, and experience, it is possible to sharpen directly at the final output size (that also means using blend-if sharpening layers to avoid clipping).

Second, when the original file already has a lot of pixels, upsampling it by a factor of e.g. 3x just for sharpening may cause issues due to file size, and the deconvolution sharpening will take a lot of processing time and system memory to complete.

Third, depending on the printing pipeline, and given the physical size of the output (and thus the normal viewing distance), I think that upsampling to 360 PPI will probably be adequate, and printing will be faster than at 720 PPI. Only if very close inspection must be possible without compromise, and the input file has enough detail to require little interpolation to reach the output dimensions, can creating a 720 PPI output file make a difference (because the printer's own interpolation is not very good, and doesn't allow sharpening at the final output size). The 'finest detail' option must be activated in the Epson printer driver to actually print at 720 PPI.

When an output file has less than 360 PPI at the final output size, one can consider upsampling with an application like PhotoZoom Pro (http://www.benvista.com/photozoompro) (only for upsampling), because that actually adds edge detail at a higher resolution, although it depends on the original image contents. Other upsampling methods do not create additional resolution, but will allow pushing sharpening a bit further at 720 PPI (because small artifacts will be rendered too small to notice). So once you have more than 360 PPI of useful data, I would not downsample to 360 PPI, but upsample to 720 PPI, sharpen at that size, and print with 'finest detail' activated.

Since output sharpening also needs to pre-compensate for contrast losses due to print media (ink diffusion, paper structure, limited media contrast, etc.) I'd seriously consider using Topaz Detail, because that not only offers deconvolution (deblur) but also micro-contrast controls. It also allows to boost the low contrast micro-detail in shadows more than in the highlights, which is especially useful for non-glossy output media or dim viewing conditions. But that all goes beyond the main subject of this thread.

Cheers,
Bart
Great Bart, very detailed and well explained… Thanks.  :-*
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on February 24, 2014, 08:38:24 AM
And for downsampling, Adobe's Bicubic Sharper or PhotoZoom's S-Spline XL or MAX with downsize settings?

Hi,

For downsampling just use Regular Bicubic. It will introduce aliasing artifacts when there is more real resolution than the smaller size can accommodate. I'm not impressed by the PhotoZoom down-sampling quality, and Bicubic Sharper will create even more aliasing than regular Bicubic.

When upsampling/deconvolution/downsampling is used, the risk of generating aliasing artifacts by using regular Bicubic for downsampling is limited, because there was not enough detail to begin with and we basically mostly remove the blur of the original input (which was already bandwidth limited to the Nyquist frequency of the original size).

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Paul2660 on February 24, 2014, 01:30:46 PM
Bart,

In one of the posts on sharpening back in Oct 2013 or so, you brought up Focus Magic and I looked at it again.  To be honest, it does an excellent job; enough that I totally changed my workflow.  It helps on any file, from IQ backs to Fuji X.  Very nice tool indeed.  I looked at the Topaz tool but found I still preferred the output from Focus Magic. 

I currently don't uprez to sharpen and then downrez back; I have just been getting such good results on the files as they are. 

Thanks again for the tip.

Paul C.
Title: Re: Deconvolution sharpening revisited
Post by: WalterKircher on June 21, 2014, 03:25:17 PM
Hello + to help this fantastic thread live again.  ;D

I have been using FocusFixer from Fixerlabs for 4 years now - and I am still very satisfied. Since 2012 it also works as a 64-bit plug-in for Photoshop. In 2010 I compared many of the available solutions, including FocusMagic - I am curious if anybody here has used this tool?

http://www.fixerlabs.com/EN/photoshop_plugins/focusfixer.htm

Walter
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 22, 2014, 07:42:41 AM
Hello + to help this fantastic thread live again.  ;D

I have been using FocusFixer from Fixerlabs for 4 years now - and I am still very satisfied. Since 2012 it also works as a 64-bit plug-in for Photoshop. In 2010 I compared many of the available solutions, including FocusMagic - I am curious if anybody here has used this tool?

Hi Walter,

I've used their SizeFixer software, which also uses the LensFix part of the FocusFixer technology. It worked okay, but I have not done a side-by-side test between FocusFixer (which offers more control over the focusing aspect) and FocusMagic, so I can't compare. Also important is how these programs handle noise: do they enhance real detail more than noise?

If you post an example (very high quality JPEG or PNG crops should do), before and after sharpening with FocusFixer, I could add a FocusMagic sharpened version to that.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: torger on June 24, 2014, 04:43:13 AM
I've got a bit fond of shooting tree trunks. As the trunks are curved and macro DoF is extremely short it's a typical application for focus stacking. However focus stacking is really not my thing, especially in the field where shutter speeds are often say 10 seconds per image or so, and the camera is often in cumbersome shooting positions.

So what I do is that I shoot at f/22 and suffer the diffraction. The subject is not too much in need to be super-detailed so it's no disaster. Still I'd like to improve my sharpening techniques for these types of images.

Maybe the types of software you talk about could be of help?

I've attached a thumb of one such image, and here you have a crop from the "raw" (a neutral 16-bit tiff developed in RawTherapee, no sharpening, no contrast increase, no nothing):

http://torger.dyndns.org/trunk-crop.tif

It was shot at f/22 with a Schneider Digitar 180mm using a Leaf Aptus 75 back.

Can someone demonstrate how sharp it can become using these tools?
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 24, 2014, 08:41:38 AM
I've got a bit fond of shooting tree trunks. As the trunks are curved and macro DoF is extremely short it's a typical application for focus stacking. However focus stacking is really not my thing, especially in the field where shutter speeds are often say 10 seconds per image or so, and the camera is often in cumbersome shooting positions.

So what I do is that I shoot at f/22 and suffer the diffraction. The subject is not too much in need to be super-detailed so it's no disaster. Still I'd like to improve my sharpening techniques for these types of images.

Maybe the types of software you talk about could be of help?

Hi,

Indeed tree trunks can have very interesting structures, especially when looking at them up close. Using f/22 on a 6 micron pitch sensor will reduce the effective resolution to less than 94% of its theoretical limiting resolution, so Deconvolution will not be able to fully recover the detail that the lens has to offer, but it can help to improve things.

Quote
I've attached a thumb of one such image, and here you have the a crop from the "raw" (a neutral 16 bit tiff developed in RawTherapee, no sharpening no contrast increase, no nothing):

Attached are: first a treatment with FocusMagic, then one with PixInsight, and finally that same one from PixInsight but with a bit of detail added.

Without very precise calibration of the lens + diffraction + Raw conversion, I had to guess a bit. I assumed that the diffraction would be dominant, and mainly deconvolved for that, based on a 6 micron pitch sensor and an f/22 diffraction pattern (the Airy disc pattern is more than 31 micron, or 5 pixels, in diameter).

Cheers,
Bart


P.S. In my estimate, and if DOF requirements allow, I'd also give f/20 a try. It will keep the diffraction pattern from completely capping the limiting resolution of the sensor, and might allow you to squeeze out a tiny bit more micro-detail, with hardly any loss due to a third of a stop less DOF.
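For reference, the Airy figure quoted above can be sanity-checked with the usual first-zero formula, d = 2.44 * wavelength * f-number. The 564 nm wavelength here is an assumed luminance-weighted value, so the exact micron figure shifts a little with the wavelength chosen:

```python
# First-zero diameter of the Airy pattern: d = 2.44 * wavelength * f-number.
# 564 nm is an assumed luminance-weighted wavelength; a slightly longer
# wavelength brings the result closer to the ~31 micron figure above.
def airy_first_zero_um(wavelength_um, f_number):
    return 2.44 * wavelength_um * f_number

d = airy_first_zero_um(0.564, 22)   # ~30.3 micron at f/22
print(round(d, 1), round(d / 6.0), 'pixels at 6 micron pitch')
```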
Title: Re: Deconvolution sharpening revisited
Post by: torger on June 24, 2014, 09:15:06 AM
Thanks Bart! Very nice to see what's possible. Clear improvement over the original, but one cannot expect crisp results. I don't think it's a big problem for a print though. The original crop image is quite flat in terms of contrast; just increasing the contrast will give a sense of a sharper image.

The Aptus 75 is a 33 megapixel Dalsa with 7.2um pixels. It's hard to know the effective f-stop at this close range; I'd guess something like 1/8 of life size, and the bellows extension is so large that the effective f-stop would be a little higher still than f/22. But I guess your assumption of 6um + f/22 is pretty close to 7.2um + the macro effective f-stop.
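The bellows-factor guess above can be written out directly (this assumes a symmetrical lens, i.e. a pupil magnification of 1, which is itself an assumption):

```python
# Effective aperture at magnification m (symmetrical lens assumed):
# N_eff = N * (1 + m). At 1/8 life size, f/22 becomes roughly f/25.
def effective_f_stop(nominal_f, magnification):
    return nominal_f * (1.0 + magnification)

print(effective_f_stop(22, 1 / 8))   # 24.75
```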
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 24, 2014, 11:31:47 AM
Thanks Bart! Very nice to see what's possible. Clear improvement over the original, but one cannot expect crisp results. I don't think it's a big problem for a print though. The original crop image is quite flat in terms of contrast; just increasing the contrast will give a sense of a sharper image.

Hi,

True, but correct deconvolution with some contrast added will get you even further ...

Quote
The Aptus 75 is a 33 megapixel Dalsa with 7.2um pixels. It's hard to know the effective f-stop at this close range; I'd guess something like 1/8 of life size, and the bellows extension is so large that the effective f-stop would be a little higher still than f/22. But I guess your assumption of 6um + f/22 is pretty close to 7.2um + the macro effective f-stop.

It would be close. I'll see if I can get something better based on the additional info, although a lot of real resolution is lost beyond restoration. I mistakenly thought (couldn't find info on the Leaf website) that the Aptus 75 was a 40 MP back, hence the 6 micron pitch assumption. With a larger pitch, and the suggested magnification (f/22 becomes f/25 effective) a different (half a pixel smaller) diffraction pattern would need to be used for optimal deconvolution.

Well, I just did another deconvolution, based on the new assumptions, and the differences are marginal (see attached, first only deconvolved and then the same with some Topaz Detail added to compensate for the resolution losses). Should print just fine at normal size and viewing distance.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: torger on June 24, 2014, 01:27:47 PM
I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on June 25, 2014, 03:56:01 PM
I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)

It definitely works. Bart's is probably better than mine so I won't spend the time to play with it. One thing I will add, if I can for once give some help to Bart instead of always the other way around:

Many deconvolution methods work in the direction of increasing the luminance; you see the histogram shifting to the right, and it is mostly the peaks of point detail that move. What I often do to eliminate these artifacts is average the result back with an older version. This often gives a more natural look to the final image.
Title: Re: Deconvolution sharpening revisited
Post by: ErikKaffehr on June 25, 2014, 05:41:35 PM
Hi,

Just a small idea: you could test using a larger aperture for near-optimal sharpness and use deconvolution sharpening on the out-of-focus areas. I have found that "smart sharpen" in Photoshop works well for minor defocus.

Best regards
Erik

I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 27, 2014, 04:18:27 AM
I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)

Indeed, it works. Actually, it can almost work too well, as shown in the attached versions of your crop, by 'restoring' some of the sensel structure (especially of non-AA-filtered sensor arrays).

The first attached version uses the same settings as the earlier versions, with an assumed effective f/28 aperture on a 7.2 micron pitch sensor, but this time I switched off the protection against noise amplification ('regularization'), which is difficult to see anyway in this crop as it can be mistaken for actual detail.

The second crop is that same image but now with some Topaz Detail enhancement added, to compensate for the overall loss of contrast due to diffraction and glare.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 27, 2014, 04:40:02 AM
Many deconvolution methods work in the direction of increasing the luminance; you see the histogram shifting to the right, and it is mostly the peaks of point detail that move. What I often do to eliminate these artifacts is average the result back with an older version. This often gives a more natural look to the final image.

Hi Arthur,

Strictly speaking, deconvolution should have no significant bias effect on average brightness, but depending on the algorithm implementation, or execution, it might. The crucial thing is that deconvolution should really be done on image data in linear gamma space, or with more complicated calculations that do the gamma conversion and back on-the-fly.

What's then left is the possibility that what previously was the lowest/highest pixel value now has a lower/higher value, which may lead to clipping. After all, reduced contrast flattens the local amplitudes/contrast, and restoration will restore those amplitudes.

A specialized application like PixInsight has an additional provision (Dynamic Range Extension, on top of gamma linearization) that allows one to avoid clipped intensities. This is also easier because that program can calculate with 64-bit floating point precision, which avoids most accumulation and rounding issues.

One can also prevent clipping by using Blend-if layers in Photoshop.
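The linearize / process / re-encode sequence can be sketched as follows. A plain 2.2 power curve is assumed here (a real pipeline would use the working space's actual transfer function), and the 1.5x gain is just a stand-in for the amplitude restoration a deconvolver performs:

```python
import numpy as np

GAMMA = 2.2  # assumed working-space gamma, for illustration only

def to_linear(encoded):
    # gamma-encoded [0..1] -> linear light
    return np.power(encoded, GAMMA)

def to_encoded(linear):
    # linear light -> gamma-encoded [0..1]
    return np.power(linear, 1.0 / GAMMA)

encoded = np.array([0.10, 0.50, 0.90])
linear = to_linear(encoded)
restored = linear * 1.5      # stand-in for the restored (deconvolved) amplitudes
# Clip (or better: blend/compress, e.g. via Blend-if) before re-encoding,
# otherwise restored amplitudes can exceed the representable range:
out = to_encoded(np.clip(restored, 0.0, 1.0))
```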

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: ErikKaffehr on June 28, 2014, 03:57:00 AM
Hi,

I made a test using a small aperture, f/22 in this case, and sharpened extensively in Focus Magic; it worked amazingly well.

Using external tools with Lightroom breaks my parametric workflow, but I guess that I will use FM any time I print larger than A2 or have some defocus/diffraction issue.

Best regards
Erik


I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on June 28, 2014, 12:25:48 PM
Hi Arthur,

Strictly speaking, deconvolution should have no significant bias effect on average brightness, but depending on the algorithm implementation, or execution, it might. The crucial thing is that deconvolution should really be done on image data in linear gamma space, or with more complicated calculations that do the gamma conversion and back on-the-fly.

What's then left is the possibility that what previously was the lowest/highest pixel value now has a lower/higher value, which may lead to clipping. After all, reduced contrast flattens the local amplitudes/contrast, and restoration will restore those amplitudes.

A specialized application like PixInsight has an additional provision (Dynamic Range Extension, on top of gamma linearization) that allows one to avoid clipped intensities. This is also easier because that program can calculate with 64-bit floating point precision, which avoids most accumulation and rounding issues.

One can also prevent clipping by using Blend-if layers in Photoshop.

Cheers,
Bart

I checked out PixInsight after seeing you use it. Your results seem better than what I can do with ImagesPlus. In IP you can watch the right tip of the histogram stretch right with iterations of adaptive R-L.

I would like a linear .tif export from RawTherapee to use IP as you describe. On the color tab of RT, at the bottom, there is an output gamma setting. As far as I can tell it does absolutely nothing; whatever settings I pick do not seem to change the image at all. AMaZE is a much better demosaicing algorithm than the one in IP, so I want to continue with RT first.

Another strange thing (tangents everywhere) in RT is that the color-based noise routine seemed very powerful when they first developed it. In the last dozen updates it seems crippled; I can get better color denoise now going back to Noise Ninja 3. I use Topaz Denoise now, which is far superior. It is all such a hodge-podge of tools that should be integrated with the raw processor. I do not like the RT deconvolution or the new noise system. Does PixInsight work on the raw data? The videos seem to show a complicated workflow.

Somebody will jump in saying Lightroom is the integrated solution. I will avoid a monthly-payment company like the plague. LR might not be in the cloud now, while there is lots of competition, but it seems clear that is the direction Adobe wants for their products. No thanks. If it were by the hour, that would be better for casual users.
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on June 28, 2014, 01:46:37 PM
I spent some time looking over the PixInsight documentation, as well as their forum. It seems you have hit the mother lode.
They use DCRAW, so they should have the same debayer options as RawTherapee
They have very advanced sharpening, better than ImagesPlus
They have very advanced noise reduction, better than the Topaz Denoise I just got recently. Probably better than DxO (I can't test that yet)
They do HDR
They do multi-frame mosaics
It seems to be color managed

This is huge.
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 28, 2014, 02:52:02 PM
I checked out PixInsight after seeing you use it. Your results seem better than what I can do with ImagesPlus. In IP you can watch the right tip of the histogram stretch right with iterations of adaptive R-L.

Hi Arthur,

The formally correct way to do Deconvolution is in Linear gamma space. That means that in IP you should first reverse the gamma (e.g. of the L channel of an LRGB split), then Deconvolve that, then re-apply gamma to it, then recombine to an LRGB. IP uses floating-point calculations for its conversions, so losses should be relatively minimal, and not all that significant if based on 16-bit/channel images. That should result in amplitude increases that are independent of exposure level, because the gamma is linear. That may still lead to increased whitepoints or decreased blackpoints, and potential clipping if the original blurred data was stretched to the maximum before deconvolution.

PixInsight is not for the faint of heart; it is a bit like the very professional version of IP, in the sense of being astrophotography oriented. But it is much more than an image editor, it's a software development and image processing environment, fully color managed and with high scientific quality algorithms (using floating-point numbers, of course). It is programmed by a group of astronomers based in Spain. They recently developed a very advanced type of noise reduction (total generalized variation, or TGV Denoise (http://pixinsight.com/forum/index.php?topic=5603.0)), but it's not easy to use (requires lots of trial and error to optimize the settings), and they are also getting ready to release a novel Deconvolution tool (TGVRestoration (http://pixinsight.com/forum/index.php?topic=6968.0)) that looks extremely promising.

The biggest drawback of PixInsight is its poor documentation for many of its functions, but the documentation that is there is of high quality. It takes time to write documentation, and they are just too busy writing/improving the software itself. They hope that many of their customers will still find their way, especially if they have a scientific/academic background, and some users have prepared very useful tutorials (although also astronomy oriented).

Quote
I would like a linear .tif export from RawTherapee to use IP as you describe.

I haven't looked into that, but you can 'linearize' in IP by using a gamma conversion. I know, it's an extra step, but it does allow to do that, and the conversion losses in floating point should be minimal.

Quote
On the color tab of RT, at the bottom, there is an output gamma setting. As far as I can tell it does absolutely nothing; whatever settings I pick do not seem to change the image at all. AMaZE is a much better demosaicing algorithm than the one in IP, so I want to continue with RT first.

I'd have to try that after reading the documentation first, but perhaps using an output profile with a linear gamma space would do the trick.

Quote
Does PixInsight work on the raw data? The videos seem to show a complicated workflow.

Yes, but for the moment the demosaicing options are limited to Bi-linear and VNG algorithms (which are not good enough for my taste), and a recently added Raw Drizzle approach called 'SuperPixel' that requires many Raw input files (incl. separate sets of Black frames and Offset frames) from slightly displaced (e.g. by atmospheric turbulence) undersampled images to reconstruct a full RGB image from Bayer CFA input. PI also allows to convert Bayer CFA data (e.g. a Dump from DCraw) directly, or use binning on White balanced CFA data (DCraw can do that directly).

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 28, 2014, 03:28:28 PM
I spent some time looking over the PixInsight documentation, as well as their forum. It seems you have hit the mother lode.

(http://www.luminous-landscape.com/forum/Smileys/default/grin.gif)

Quote
They use DCRAW, so they should have the same debayer options as RawTherapee

Should, but currently only Bilinear and VNG are implemented for specific Debayer operations, as well as a very complicated 'SuperPixel' Drizzle method. Maybe if they asked Emil Martinec, or the RT team, they could get permission to implement 'AMaZE'.

Quote
They have very advanced sharpening, better than ImagesPlus

Yes, although FocusMagic is pretty amazing as well. PixInsight is potentially going to be even better than it already is (TGV Restoration), 'soon' to be released after debugging and stability testing are finished.

Quote
They have very advanced noise reduction, better than the Topaz Denoise I just got recently. Probably better than DxO (I can't test that yet)

Topaz Denoise is easier to use, and to integrate into a Photoshop centric workflow, or via their own PhotoFXlab host program (can also be used as a PS plugin). I haven't tried DxO, but its latest Denoising method is supposed to be extremely good (and slow), but I don't like the Adobe RGB gamut limitations of DxO (even when tagged as ProPhoto RGB).

Quote
They do HDR

Yes, easy enough with floating point calculations, but the subsequent tonemapping is something completely different. Haven't experimented with it in PI much yet.

Quote
They do multi-frame mosaics

Yes, although their Star alignment seems to work best. On more terrestrial images PTGUI is more photographer oriented.

Quote
It seems to be color managed

Correct. Also allows to automatically do gamma linearization.

Quote
This is huge.

It'll take some work to get the hang of its specific workflow, but it does make a lot of sense (especially for astrophotography, but in many areas also for 'normal' photography). Its documentation is of high quality, but far from complete though.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on June 30, 2014, 11:34:15 AM
For people not comfortable following the technical discussion on the other forum, they turned this:

http://pteam.pixinsight.com/decchall/bigradient_conv.tif (http://pteam.pixinsight.com/decchall/bigradient_conv.tif)

into (http://pteam.pixinsight.com/decchall/bigradient_t1_tgvr.png)

Try doing that with your sharpening software.
Title: Re: Deconvolution sharpening revisited
Post by: Jim Kasson on June 30, 2014, 03:17:07 PM
The formally correct way to do Deconvolution is in Linear gamma space.<huge snip>

Bart, I've been doing some work with my camera simulator to try to get a handle on the "big sensels vs small sensels" question.

http://blog.kasson.com/?p=6078

I'm simulating a Zeiss Otus at f/5.6, and calculating diffraction at 450 nm, 550 nm, and 650 nm for the three (or four, since it's a Bayer sensor) color planes. The blur algorithm is diffraction plus a double application of a pillbox filter with a radius (in microns) of 0.5 + 8.5/f, where f is the f-stop.
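If it helps to visualize, the pillbox part of that blur model can be sketched like this. The grid pitch and kernel size here are my own illustrative choices, and the "double application" is taken to mean the pillbox convolved with itself:

```python
import numpy as np

# Pillbox radius from the model above: r = 0.5 + 8.5/f microns.
f_number = 5.6
radius_um = 0.5 + 8.5 / f_number          # ~2.02 microns at f/5.6

def pillbox(radius_um, pitch_um, half_width=8):
    # discrete pillbox sampled on a sensel grid of the given pitch
    ax = np.arange(-half_width, half_width + 1) * pitch_um
    xx, yy = np.meshgrid(ax, ax)
    k = (xx**2 + yy**2 <= radius_um**2).astype(float)
    return k / k.sum()                    # normalize to unit volume

box = pillbox(radius_um, pitch_um=0.5)
# double application = self-convolution (done here via zero-padded FFTs)
n = 2 * box.shape[0] - 1
H = np.fft.fft2(box, s=(n, n))
double_box = np.real(np.fft.ifft2(H * H))
```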

I'm finding that 1.25 um sensel pitch doesn't give a big advantage over 2.5 um with this lens. I'm wondering if things would be different with some deconvolution sharpening.

Now that you know my lens blur characteristics, can you give me a recipe for a good deconvolution kernel? Tell me what the spacing is in um or nm, and I'll do the conversion to sensels, which will change depending on the pitch.  I'd do it myself, but I have to admit I've just followed the broad outlines of this discussion, and would prefer not to delve much deeper at this point.

Warning: As this project progresses, I'll be asking for help on upsampling and downsampling algorithms as well. I hope I won't wear out my welcome.

Jim
Title: Re: Deconvolution sharpening revisited
Post by: hjulenissen on June 30, 2014, 03:44:31 PM
If the convolution kernel is known (due to being a simulation), then that should narrow the list of deconvolution kernels quite a lot, should it not?

In the absence of noise, would not a perfect inversion be optimal (possibly limited so as not to amplify by e.g. +30 dB anywhere)? In the presence of noise, it might become a question of which kernel trades detail enhancement vs noise/artifact suppression (Wiener filtering?) in a suitable manner, which might depend on the CFA and the noise reduction... ok, it is hard. My point is that it should be a lot less hard than the real-world case where the kernel has to be guesstimated locally.
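For what it's worth, the known-kernel, noise-free case really is almost trivial: a regularized inverse (Wiener-style) filter recovers the image nearly exactly. A minimal sketch, with an illustrative 3x3 kernel and a constant noise-to-signal term standing in for a real per-frequency noise model:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-12):
    # regularized inverse filter: conj(H) / (|H|^2 + NSR)
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
# illustrative blur kernel, chosen so its frequency response has no zeros,
# i.e. the noise-free inversion is well-posed
psf = np.array([[0.05, 0.10, 0.05],
                [0.10, 0.40, 0.10],
                [0.05, 0.10, 0.05]])
# blur via circular convolution so the inversion is exact in this demo
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf)   # ~exact in the noise-free case
```

With noise present, `nsr` would be raised (ideally per frequency), trading residual blur against amplified grain, which is exactly the trade-off described above.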

-h
Title: Re: Deconvolution sharpening revisited
Post by: Jim Kasson on June 30, 2014, 03:57:00 PM
If the convolution kernel is known (due to being a simulation), then that should narrow the list of deconvolution kernels quite a lot, should it not?

In the absence of noise, would not a perfect inversion be optimal (possibly limited so as not to amplify by e.g. +30 dB anywhere)? In the presence of noise, it might become a question of which kernel trades detail enhancement vs noise/artifact suppression (Wiener filtering?) in a suitable manner, which might depend on the CFA and the noise reduction... ok, it is hard. My point is that it should be a lot less hard than the real-world case where the kernel has to be guesstimated locally.

You're probably right. Are you telling me that I'm just being lazy asking Bart to do the work for me?  :)

There is noise, and ever more noise as the sensel pitch gets smaller. I can post a link to some image files if anyone wants to look at the noise.

Jim
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on June 30, 2014, 08:33:02 PM
Bart, I've been doing some work with my camera simulator to try to get a handle on the "big sensels vs small sensels" question.

http://blog.kasson.com/?p=6078

Hi Jim,

I'll have to catch up with reading the latest developments. I've been following the DPreview discussion from a distance, so I do have some idea of the train of thought you're following.

Quote
I'm simulating a Zeiss Otus at f/5.6, and calculating diffraction at 450 nm, 550 nm, and 650 nm for the three (or four, since it's a Bayer sensor) color planes. The blur algorithm is diffraction plus a double application of a pillbox filter with a radius (in microns) of 0.5 + 8.5/f, where f is the f-stop.

This is where I'd have to seriously adjust my thinking to what you are (or MatLab is) doing exactly, which may take some time. What I generally do is take a Luminance-weighted average of the (presumed) peak transmissions (450nm, 550nm, 650nm) of the R/G/B CFA filters (because Luminance is what most good, but generally unknown, unlike in your case, Demosaicing algorithms optimize for), and take that as input for a single 2-dimensional diffraction pattern at 564nm (564.05nm, if we use the weights of: R=0.212671, G=0.71516, B=0.072169). I then integrate that diffraction pattern at each sensel aperture area over the surface of the fill-factor (usually 100%, assuming gap-less microlenses).
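The 564.05 nm figure follows directly from those weights (the peak wavelengths are the assumed CFA transmissions, the weights are the Rec. 709 luminance coefficients):

```python
# Luminance-weighted average wavelength from the weights quoted above
# (Rec. 709 luminance coefficients) and the assumed CFA peak transmissions.
weights = {'R': 0.212671, 'G': 0.715160, 'B': 0.072169}
peak_nm = {'R': 650.0, 'G': 550.0, 'B': 450.0}

lam = sum(weights[c] * peak_nm[c] for c in 'RGB')
print(round(lam, 2))   # 564.05
```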

The reason that I reduce the problem to a single weighted average luminance diffraction pattern is because Deconvolution is usually performed on the Luminance channel (e.g. CIE Y in case of PixInsight, or L of LRGB in ImagesPlus). It can also reduce the processing time to almost 1/3rd compared to an R+G+B deconvolution cycle. I can understand that for your model you would need to keep separate diffraction patterns per CFA color.

A 2-D kernel that includes the third Bessel zero of the Airy disc pattern usually accounts for most (at least 93.8%) of the energy of the full diffraction pattern. This diffraction kernel is then used, e.g. as an image or as a mathematical PSF, depending on the required input of the deconvolution algorithm. Attached are 2 kernels in data form, one for 2.5 micron pitch and one for 1.25 micron pitch, both for a 100% fill-factor, weighted luminance at 564nm, and a nominal f/5.6 aperture.

Quote
I'm finding that 1.25 um sensel pitch doesn't give a big advantage over 2.5 um with this lens.

Off-the-cuff, I'd assume that's due to the size of the diffraction pattern, which will dominate at such small pitches.

Quote
I'm wondering if things would be different with some deconvolution sharpening.

I assume they would, due to the dominating influence of diffraction (+ fill-factor blur, and/or OLPF). How much can be restored remains to be seen and depends on the system MTF at the various spatial frequencies, but an OTUS would significantly increase the probability of being able to restore something, especially on a high Dynamic range sensor array.

Quote
Now that you know my lens blur characteristics, can you give me a recipe for a good deconvolution kernel? Tell me what the spacing is in um or nm, and I'll do the conversion to sensels, which will change depending on the pitch.  I'd do it myself, but I have to admit I've just followed the broad outlines of this discussion, and would prefer not to delve much deeper at this point.

From what I remember of reading the earlier development of your simulation model, I'd have to assume that a Dirac delta function (https://en.wikipedia.org/wiki/Dirac_delta_function) that is convolved with your specific R/G/B CFA diffraction patterns, and subsequent pill-box filters, should provide an exact (for your model) Deconvolution filter.

If one were to follow my approach as described above, for f/5.6 (actually 4*Sqrt[2]=5.68528), at 564nm, for a 100% fill-factor, of a 2.5 micron and 1.25 micron sensel pitch (at infinity focus), I've added two data files with kernel weights per pixel. They need to be normalized to a sum total of 1.0, or converted to an image (e.g. with ImageJ, import as a text image and save to a 16-bit TIFF or a 32-bit float FITS), before using them as a deconvolution kernel.
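The normalization step mentioned above is just a division by the kernel's sum; a tiny sketch with made-up weights (not Bart's attached data):

```python
import numpy as np

# Normalize a kernel so its weights sum to 1.0 before using it as a
# deconvolution PSF; the raw weights here are purely illustrative.
raw = np.array([[1.0,  4.0, 1.0],
                [4.0, 16.0, 4.0],
                [1.0,  4.0, 1.0]])
psf = raw / raw.sum()
```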

Quote
Warning: As this project progresses, I'll be asking for help on upsampling and downsampling algorithms as well. I hope I won't wear out my welcome.

No problem, if in a thread that's more appropriate to that subject.

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on July 11, 2014, 11:14:11 AM
Attached is a shot at 3600mm (1200mm scope, 2x Barlow, Sony A55 1.5 crop factor). After processing it looks a bit like a PS layers job. The details follow.

After importing from the camera I just opened one of many shots; I did not check to see if it is the sharpest. Anyway, I converted in RT and it looked ok. Imported to ImagesPlus, reversed the gamma, split the channels, deconvolved the luma channel: 5 @ 7x7, 10 @ 5x5, 30 @ 3x3, all adaptive R-L. I recombined and moved the gamma back to 2.2. Denoised - the background, which was completely OOF to begin with, had started to look "sharp" with a grain texture. The problem is that the outline of the bird has the colors of the bird slightly outside its area. The image looks very much like a paste job, which it is not; it is as shot. It feels like I need to deconvolve the colors to keep them in position with the luma data.

I have read in several places to deconvolve luma only. It seems a bit off. What are the issues with deconvolving the color channels?
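For reference, the core of the R-L loop used above is only a few lines. A bare-bones, non-adaptive sketch (linear-gamma data assumed; ImagesPlus's adaptive damping is omitted, and the toy PSF and image are my own):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations):
    # plain Richardson-Lucy: estimate *= correlate(observed / reblurred, psf)
    est = blurred.copy()
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(est, psf, mode='same')
        ratio = blurred / np.maximum(reblurred, 1e-12)
        est = est * fftconvolve(ratio, psf_mirror, mode='same')
    return est

# toy demo: a blurred point source gets re-concentrated
g = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
psf = np.outer(g, g)
img = np.zeros((21, 21)); img[10, 10] = 1.0
blurred = fftconvolve(img, psf, mode='same')
sharp = richardson_lucy(blurred, psf, iterations=30)
```

Deconvolving the color channels is the same loop run once per channel, at three times the cost.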
Title: Re: Deconvolution sharpening revisited
Post by: BartvanderWolf on July 11, 2014, 11:52:28 AM
Attached is a shot at 3600mm. (1200mm scope, 2x barlow, Sony A55 1.5 crop factor) After processing it looks a bit like a PS layers job. The details follow.

Hi Arthur,

It's either a very steady tripod you used, or good deconvolution (or both).

Quote
After importing from the camera I just opened one of many shots; I did not check to see if it is the sharpest. Anyway, I converted in RT, and it looked OK. I imported it into ImagesPlus, reversed the gamma, split the channels, and deconvolved the luma channel: 5 iterations @ 7x7, 10 @ 5x5, 30 @ 3x3, all adaptive R-L. I recombined and moved the gamma back to 2.2. Then I denoised: the background, which was completely OOF to begin with, had started to look "sharp" with a grain texture. The problem is that the outline of the bird has the colors of the bird slightly outside its area. The image looks very much like a paste job, which it is not; it is as shot. It feels like I need to deconvolve the colors to keep them in position with the luma data.

I have read in several places to deconvolve luma only. It seems a bit off. What are the issues with deconvolving the color channels?

The L channel (from an L/RGB split in ImagesPlus) is a (luminance-)weighted average of the R/G/B channels. So the (sharpened) luminance weights are also redistributed to the original R/G/B channels upon recombining them, with a lower probability of over-processing any one of them. It is of course possible that one of the channels was significantly poorer (or better) than both others to begin with. In such a rare case, it could help to process the R/G/B channels separately with different settings.

So, other than offering more control (at the cost of more work per channel and 3x the processing time), separate-channel processing could give quite similar results. As long as you stay in linear gamma, not much can go wrong; it's like remixing the light itself.
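To make the redistribution concrete, here is a minimal Python/NumPy sketch of pushing a sharpened luminance back into linear RGB by per-pixel scaling, so the ratios between the channels (the chroma) are preserved. The Rec. 709 luminance weights are an assumption; ImagesPlus may use different ones:

```python
import numpy as np

# Rec. 709 luminance weights (assumption; other weightings exist).
W = np.array([0.2126, 0.7152, 0.0722])

def recombine(rgb_linear, sharpened_L, eps=1e-12):
    """Apply a sharpened luminance to linear RGB via a per-pixel gain,
    leaving the R:G:B ratios (and hence hue) of each pixel unchanged."""
    L = rgb_linear @ W                      # original per-pixel luminance
    gain = sharpened_L / (L + eps)          # per-pixel luminance gain
    return rgb_linear * gain[..., None]     # same gain on all three channels

# Demo: a fake "sharpened" L that is a uniform 10% boost of the original.
rgb = np.random.default_rng(0).random((4, 4, 3))
L = rgb @ W
out = recombine(rgb, L * 1.1)
```

Because each channel gets the same gain, only lightness detail changes, which is one reason luma-only deconvolution tends to look cleaner than per-channel work.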

Cheers,
Bart
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on July 11, 2014, 08:13:21 PM
It's mostly the mount. The 10" dob is a massive unit, so you have to set it up near the road. This spot is at the side of the highway, with a creek/small river at the bottom of a coulee (strange word for a ravine). The ospreys are on a very big power pole spanning the creek. I cover the dob with a silver thermal blanket to reflect the heat. That, and the incline down from the road, help shelter the unit from the wind. The A55 is very light, with no mirror moving. The shutter is a tiny mass compared to the locked-down dob. I carry everything over the steel road barricade, then set up my folding chair with the cable release. Apart from the effort of setup, it is really quite easy. You sit there watching your live view, pressing your cable release whenever you want. If you are interested in running your testing software on a raw to see the difference between the D600 and the A55, with and without a 2x Barlow, I can email you some raws. I think you will do a better deconvolve than I did. A sample of that would help me see how much more I need to work on it.
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on July 11, 2014, 09:11:18 PM
Here is an unsharpened sample opened at default settings in RT.

Some will say the quality is not like up close with a 200 f2 for example. Yes, no question about it. At the same time you would not get shots of chicks being fed if you were close enough to bother them. The father came back with a fish, dropped it off then the mother (I assume) started feeding them. The dad flew off to sit on a fence post watching me, watching them.

PS the flies buzzing the fish was a bonus. ;)
Title: Re: Deconvolution sharpening revisited
Post by: Fine_Art on July 12, 2014, 02:25:23 PM
Off topic: This video shows the mount stability. 1200mm focal length with APS-C = 1800mm equivalent, or about 1.1 degrees of arc for the whole frame.

http://youtu.be/ZtWYOCiKSbw (http://youtu.be/ZtWYOCiKSbw)

Heat ripple is as big a problem as the mount stability.
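The 1.1-degree figure checks out from simple rectilinear geometry, assuming a roughly 23.5mm-wide APS-C sensor. Note that the 1800mm "equivalent" focal length does not enter the calculation; the native 1200mm focal length and the real sensor width do:

```python
import math

focal_mm = 1200.0          # native scope focal length (no Barlow)
sensor_width_mm = 23.5     # typical Sony APS-C sensor width (assumption)

# Horizontal angle of view of a rectilinear system: 2 * atan(w / 2f)
fov_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))
# fov_deg comes out at about 1.12 degrees, matching the ~1.1 degrees quoted
```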
Title: Re: Deconvolution sharpening revisited
Post by: Lundberg02 on July 12, 2014, 07:42:03 PM
I've plowed through the four years' worth of posts here to my benefit, and abstracted many salient points into a doc to keep. I've used Focus Magic for several years with what I thought were very good results, especially in one case where I recovered a license plate from a very motion-blurred car. It always bothered me that I couldn't do sub-pixel work like I always do first in USM, so the suggestion to upsample, and the assurance that it doesn't create artifacts, were very welcome. I wonder if that's always true, however. In the Photoshop forum, one of the "experts", a man who overkills everything (he has a ten-thousand-dollar RAID array of SSDs), complained about the chromatic aberration removal tool in CS6 creating artifacts, but only later, after no one else saw them, did he reveal that he always upsamples images before doing anything, including printing.

In other news, the fact that a Gaussian PSF is as good as anything else doesn't surprise me, and neither does the fact that a 7x7 array is probably optimal. I learned long ago (50 years) that a few successive convolutions of arbitrary kernels closely approximate a Gaussian; this is akin to the Central Limit Theorem in probability. Outliers are suppressed by convolution, and by adding random variables as well. Sample means from any distribution are Gaussian-distributed in the limit: same type of thing.
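That observation is easy to demonstrate numerically: convolving a flat box kernel with itself just a few times already produces a bell-shaped, near-Gaussian kernel, and the variances add exactly under convolution, just as they do for sums of independent random variables. A small Python/NumPy illustration:

```python
import numpy as np

# Repeatedly convolve a flat (box) kernel with itself; the result rapidly
# becomes bell-shaped -- the convolution analogue of the Central Limit Theorem.
box = np.ones(3) / 3.0           # box on taps {-1, 0, +1}, variance 2/3
kernel = box.copy()
for _ in range(3):               # four boxes convolved together in total
    kernel = np.convolve(kernel, box)

# Variances add under convolution, so the result has variance 4 * (2/3).
x = np.arange(len(kernel)) - (len(kernel) - 1) / 2
variance = np.sum(kernel * x**2)
```

The same additivity is why a cascade of small blurs (lens, OLPF, sensel aperture) is so often modeled by a single Gaussian PSF.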

There is a fairly new app called Blurity for deconvolving that no one has mentioned. It's reasonably priced and makes claims of efficacy, but its own examples on the website show pretty severe ringing, so I didn't bother investigating it. YMMV.

Focus Magic has tutorials that I have only skimmed, but I intend to revisit them and follow their forensic tutorial to gain a better understanding of FM's capabilities.

As a result of this thread I also bought ALCE from Bigano's website because I was impressed by the examples. It appears to be a sort of super Clarity adjustment, and I hope it will bring to life very old b&w prints from my days in the military. Note that bigano.com has a new website as of this weekend.