If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?
John
You can also add the freeware RawTherapee (http://rawtherapee.com/) to the list of converters that offer RL deconvolution sharpening.
But then my DB does not have an AA filter, so perhaps the effects would be pretty subtle.
It might also help with recovering from the more pronounced effects of diffraction at narrow apertures.
Yes Photoshop's Smart Sharpen is based on deconvolution (but you will need to choose the "More Accurate" option and the Lens Blur kernel for best results). Same with Camera Raw 6 and Lightroom 3 if you ramp up the Detail slider.
Hi Eric,
Thanks for confirming that.
Could you disclose if the Smart Sharpen filter visible effectiveness has been changed between, say, CS3 and CS5, or is it essentially the same since its earlier versions?
I've compared it before, and used it on installations without better alternative plug-ins, but its restoration effectiveness for larger radii seemed less than a direct Richardson-Lucy or similar implementation, although faster. Perhaps a new test/comparison is in order.
Cheers,
Bart
Since you are talking about "undoing" the effects of AA filter, won't that introduce aliasing artifacts mistaken for sharpness? What about moire?
My understanding is that moire avoidance is not the only reason camera manufacturers put those expensive filters on. It's not like some marketing guy came to the engineers and said "slap one of those make-my-pictures-all-blurred-to-hell -filters on all our cameras, would'ya?" What the reasons are I don't know, but Hot Rod mods (http://www.maxmax.com/hot_rod_visible.htm) haven't been that popular, and I've heard more than one complain about the resulting aliasing.
I've seen so many photos which are oversharpened to the extent of making them as surreal as overcooked HDR. I haven't seen the samples of the results from this undoing, but the samples from D3X I've seen show that it produces exceptionally sharp results out of the box.
In a word, no. Aliasing is a shifting of image content from one frequency band to another that is an artifact of discrete sampling. Deconvolution doesn't introduce aliasing (ie shift frequencies around) so much as try to reverse some of the suppression of high frequency image content that the AA filter effects in its effort to mitigate aliasing.
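To make that distinction concrete, here is a minimal one-dimensional sketch in Python (illustrative only; the Gaussian kernel below is just a stand-in for an AA filter's PSF, and real converters use regularized iterative methods rather than this crude inverse filter). Blurring multiplies the signal's spectrum by the kernel's transfer function; deconvolution divides it back out where that function is non-zero, and no frequencies get folded to new locations in the process.

    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.standard_normal(256)          # stand-in for one row of image data

    # A small Gaussian kernel as a stand-in for the AA filter's PSF
    x = np.arange(-8, 9)
    psf = np.exp(-x**2 / (2 * 1.5**2))
    psf /= psf.sum()

    # Blurring multiplies the signal's spectrum by the kernel's transfer function
    otf = np.fft.fft(psf, n=signal.size)
    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * otf))

    # Regularized inverse filter: divide the spectrum back out where the transfer
    # function is not close to zero (eps keeps the division from blowing up;
    # real tools regularize far more carefully than this)
    eps = 1e-3
    restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(otf) / (np.abs(otf)**2 + eps)))

    # Compare spectrum magnitudes at one mid-range frequency bin
    k = 40
    orig_spec = np.abs(np.fft.fft(signal))
    blur_spec = np.abs(np.fft.fft(blurred))
    rest_spec = np.abs(np.fft.fft(restored))
    print(blur_spec[k] / orig_spec[k], rest_spec[k] / orig_spec[k])  # suppressed vs. largely recovered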
the deconvolution sharpening (more properly image restoration) with the Mac only raw converter Raw Developer markedly improves the micro-contrast of the D3x image to the point that it rivals that of the Leica S2.
The R-L Deconvolution sharpening tool in Raw Developer often produces exceptional results. It is so crazy that Adobe, with all its resources, does not offer it as an option in Photoshop.
Smart sharpen in Photoshop and the sharpening tool in LR offer the same kind of exceptional results.
Hi Bart, unfortunately I don't know the answer to that, but I will check with the scientist who does. I believe they limit the number of iterations for speed, so I expect this is the reason it would not be as effective for some parameters as the plug-ins, as you've observed.
Hi Erik, yes, the Gaussian and Lens Blur are different PSFs. The Gaussian is basically just that, and the Lens Blur is effectively simulating a nearly circular aperture (assuming even light distribution within the aperture, very unlike Gaussian). You will get better results with the latter though in many cases they are admittedly subtle. The OLP filter can be somewhat complex to model. (I believe the Zeiss articles you've referenced recently have some nice images showing how gnarly they can be. I recall it was in the first of the two MTF articles). Gaussians are handy because they have convenient mathematical properties but not the best for modeling this, unfortunately ...
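For anyone curious what the difference between those two kernels looks like numerically, here is a small sketch (NumPy only; the sigma and radius values are arbitrary and are not what Smart Sharpen actually uses):

    import numpy as np

    def gaussian_psf(size=9, sigma=1.5):
        """Smooth, bell-shaped kernel."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return k / k.sum()

    def disc_psf(size=9, radius=2.5):
        """Roughly uniform energy inside a circular aperture, zero outside
        (a crude defocus / 'lens blur' model)."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = (xx**2 + yy**2 <= radius**2).astype(float)
        return k / k.sum()

    np.set_printoptions(precision=3, suppress=True)
    print(gaussian_psf())   # weight concentrated in the centre with smooth falloff
    print(disc_psf())       # equal weights inside the disc, abrupt edge

The Gaussian concentrates its weight smoothly around the centre, while the disc spreads it almost evenly out to a hard edge, which is closer to what an out-of-focus point of light actually looks like.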
I often see threads about the marvels made by sharpening plug-ins, or raw converters, but I have hardly seen anything doing it better than Photoshop "Smart Sharpen".
At best they are equal.
Smart sharpen in Photoshop and the sharpening tool in LR offer the same kind of exceptional results.
That may be your experience, but have you tried Richardson-Lucy with a camera having a blur filter? Digilloyd did compare Smart Sharpen to RL, and found the latter to be much better. Perhaps he did not use optimal settings, but he is a very careful worker and I would not dismiss his results out of hand.
IMO they also give slightly better results since they have more parameters.
For some workflows, sharpening during the raw conversion is not an option. In that case Photoshop "Smart Sharpen", or other plug-ins for PS, are the only options.
Eric - does it mean that beyond some certain value (>25, >50, >???) set by the Detail slider in ACR you switch the sharpening completely from some variety of USM to some variety of deconvolution? Can you tell what this value has to be (if it is fixed), or does it depend on the specific combination of EXIF parameters (camera model, ISO, aperture value, etc.)? Or are you somehow blending the output of the two methods, going gradually from some variety of USM to deconvolution as the slider is moved to the right?
Please clarify, thank you.
Well, if you load up an image in LR and do some tests, you may be able to find it out for yourself.
Anyway, how can it be fixed?
How can a camera model, ISO or aperture determine fixed parameters?
I think trial and error and some experiments will give you the best answer to your question.
Hi Deja, yes, the sharpening in CR 6 / LR 3 is a continuous blend of methods (with the Detail slider being the one used to "tween" between the methods, and the Amount, Radius, & Masking used to control the parameters fed into the methods). As you ramp up the Detail slider to higher values, the deconvolution-based method gets more weight. If you're interested in only the deconv method then just set Detail to 100 (which is what I do for low-ISO, high-detail landscape images). Not recommended for portraits, though ...
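Purely to make the "tween" idea concrete, here is a hypothetical sketch of blending an unsharp mask with a Richardson-Lucy pass, weighted by a detail parameter. This is not Adobe's code; the Gaussian PSF, the iteration count and the linear blend are all assumptions for illustration (Eric only confirms that the Detail slider blends the two methods):

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.restoration import richardson_lucy

    def sketch_capture_sharpen(img, amount=1.0, radius=1.0, detail=0.5):
        """Hypothetical blend: detail=0 gives pure USM-style sharpening,
        detail=1 gives pure deconvolution. img is a float array scaled to [0, 1].
        The real weighting and PSF used by ACR/LR are not public."""
        blurred = gaussian_filter(img, sigma=radius)
        usm = img + amount * (img - blurred)              # classic unsharp mask

        size = int(2 * np.ceil(3 * radius) + 1)           # assumed Gaussian capture PSF
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * radius**2))
        psf /= psf.sum()
        deconv = richardson_lucy(img, psf, 10)            # a handful of RL iterations

        return (1.0 - detail) * usm + detail * deconv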
Based on your information, I experimented with sharpening in ACR to reproduce the results posted by Diglloyd in his blog. I used 41-1-0 (amount, radius, detail). The results are pretty close. I hope that this is fair use of Diglloyd's copyright. If there are any complaints, the post can be deleted. I think that the topic is important, though.
If one were to set the Detail to 100, would this carry through to the Sharpening slider when using the Adjustment Brush in ACR? If so, that would go a long way toward selective application of the deconvolution method, possibly as good as painting it in from a layer mask.
Yes, it looks like Bill made a typo in the post (the screenshot values say 43, 1, 100, as opposed to 41,1,0). For this type of image I do recommend a value below 1 for the Radius, though 1 is not a bad starting point.
Yes, 41,1,0 is a typo. The figures on the illustration are correct: 41,1,100.
Eric,
Thanks for the information. The behavior of the sliders appears to be quite different from the older versions of ACR. In Real World Camera Raw with Adobe Photoshop CS4, Jeff Schewe states that if one moves the detail slider all the way to the right, the results are very similar, but not exactly the same, as would be obtained with the unsharp mask.
The following observations are likely nothing new to you, but may be of interest to others. The slanted edge target (a black on white transition at a slight angle) is an ISO certified method of determining MTF and is used in Imatest. Here is an example with the Nikon D3 using ACR 6.1 without sharpening (far right), with ACR sharpening set to 50, 1, 50 [amount, radius, detail] (middle), and with deconvolution sharpening using Focus Magic with a blur width of 2 pixels and amount of 100%. The images used for measurement are cropped, so the per picture height measurements are for the cropped images.
[attachment=23291:Comp1_images.gif]
One can analyze the black-white transition with Imatest, which determines the pixel interval for a rise in intensity at the interface from 10 to 90%. Results are shown for Focus Magic and ACR sharpening with the above settings. The results are similar. With real world images with this camera (previously posted in a discussion with Mark Segal), I have not noted much difference between optimally sharpened images using ACR and Focus Magic, contrary to the results reported by Diglloyd using the Richardson-Lucy algorithm. Perhaps the Focus Magic algorithm is inferior to the RL. Diglloyd used Smart Sharpen for comparison and did not test ACR 6 sharpening.
[attachment=23292:CompACR_FM_1.gif]
One can look at the effect of the detail slider by using ACR sharpening settings of 100, 1, 100 (left) and 100, 1, 0 (right). The detail setting of zero dampens the overshoot.
[attachment=23293:CompACR.gif]
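For readers who have not used Imatest: the 10-90% rise quoted above can be estimated from any edge profile in a few lines. A rough sketch (it assumes a clean, monotone dark-to-light profile along one row; Imatest's slanted-edge method oversamples the edge across many rows and copes with the overshoot that sharpened edges show):

    import numpy as np

    def rise_10_90(profile):
        """Pixel distance over which an edge profile climbs from 10% to 90% of its
        dark-to-light range (a crude stand-in for Imatest's measurement; assumes a
        monotone profile, so it ignores the overshoot a sharpened edge would show)."""
        profile = np.asarray(profile, dtype=float)
        lo, hi = profile.min(), profile.max()
        t10 = lo + 0.10 * (hi - lo)
        t90 = lo + 0.90 * (hi - lo)
        x = np.arange(profile.size)
        x10 = np.interp(t10, profile, x)   # position where the profile crosses 10%
        x90 = np.interp(t90, profile, x)   # position where it crosses 90%
        return x90 - x10

    edge = [10, 11, 12, 30, 120, 200, 238, 240, 241]   # made-up edge values
    print(rise_10_90(edge))                            # smaller = sharper edge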
I think this post is a bit misleading. If you incorporate sharpening in your processing then your process is now non-linear, and however ISO-certified the target itself may be, the slanted edge method is no longer valid, because MTF is only meaningful as a description of a 2D spatial convolution process, which is thereby assumed to be linear. Even though Imatest is an excellent piece of software - I am acquainted with Norman Koren, which doesn't mean I understand those maths - feeding Imatest invalid input does not sanctify the output.
Yes, Walter. It does mean you can apply this type of sharpening / deblurring selectively, if you wish. There are two basic workflows for doing this in CR 6 and LR 3.
The first way is just to paint in the sharpening where you want it. To do this, you set the Radius and Detail the way you want, but set Amount to 0. Then, with the local adjustment brush, you paint in a positive Sharpness amount in the desired areas. The brush controls and the local Sharpness amount can be used to control the application of it. (Of course you can also use the erase mode in case you overpaint.) This workflow is effective if there are relatively small areas of the image you want to sharpen. I tend to use this for narrow DOF images (e.g., macro of flower) where I only care about very specific elements being sharpened. It also works fine for portraits.
The second way is the opposite, i.e., you apply the capture sharpening in the usual way until most of the image looks good, but then you can selectively "back off" on it (using local Sharpness with negative values) in some areas. Of course you can also add to it (using local Sharpness with positive values).
Thank you. Now I'm going to have to try some comparisons between deconvolution by these methods in RAW, versus post-processing with Focus Magic, which has been my favorite for years now.
Unfortunately, Focus Magic has become functionally useless for me. With larger 16 bit files, it consistently gives me "memory full" errors and then crashes CS 4. It appears that development has ceased. Too bad, it gave me great results.
I had similar issues with FocusMagic when I still ran Win XP. There is a sort of workaround though. Just make partial selections (use a few guides to allow making joining but not overlapping selections). It's not ideal, but it will get the job done selection after selection. I couldn't get FM to install under Vista, but they recently changed the installer so perhaps now it will, but I've moved to Win7 by now, and there are no problems so far.
I've not tested RawTherapee for size limitations, but it does read TIFFs and it allows Richardson-Lucy deconvoluton.
Cheers,
Bart
Edmund
In addition to Bart's post in response to your comment, I think that your use of misleading and invalid input is too harsh. If you look at Norman's documentation of Imatest, he uses it extensively to compare the effects of sharpening. Indeed, if if the method were invalid for sharpened images, it would be useless to assess the sharpness of images derived from cameras with low pass filters, since these images always must be sharpened for optimal appearance. If my use of Imatest is misleading and invalid, so is Norman's.
From the Imatest documentation:
[attachment=23321:ImatestDoc.gif]
Sorry, I'll remove myself from this discussion; Norman is a guy I respect, his understanding of these topics is infinitely greater than mine, and I don't want my own lack of understanding and personal views to reflect on his excellent product.
Edmund,
Thanks for the reply, but there is no need to withdraw from the discussion. Your point on non-linearity is well taken and excessive sharpening can lead to spurious results. Some time ago, I was involved in a discussion with Norman and others over test results reporting MTF 50s well over the Nyquist limit. Magnified aliasing artifacts apparently were being interpreted as meaningful resolution. Norman stated that the slanted edge method did have limitations and he was working on other methods.
Bill
Based on what I see, the radius 1.0 seems to be a bit too large. This is confirmed by the earlier Imatest SFR output that you posted (SFR_20080419_0003_ACR_100_1_100.tif), where the 0.3 cycles/pixel resolution was boosted. Perhaps something like a 0.6 or 0.7 radius is more appropriate to boost the higher spatial frequencies (lower frequencies will also be boosted by that).
Eric and Bart,
As per your suggestions, I repeated the tests using ACR 6.1 with settings of amount = 32, radius = 0.7, and detail = 100, and Focus Magic with settings of Blur Width = 1 and amount = 150. I found the Amount slider in ACR to be quite sensitive: there is a considerable difference between 30 and 40, or even 30 and 35, with respect to overshoot and MTF at Nyquist. The chosen settings seem to be a reasonable compromise and produce similar results near Nyquist, but FM gives more of a boost in the range of 0.2 to 0.3 cycles/pixel, which may be desirable.
Inspection of the images from which the graphs were obtained is also of interest:
Apologies in hijacking this thread a little bit, but personally I'm just curious if de-convolution sharpening and the evolvement of computational imaging might eventually overcome much of the problem with diffraction. (and if this has already been discussed I also apologize - I only skimmed through the thread, seeing how most of it is above my pay-grade).
I would assume it would be much more challenging than resolving the issues from an AA filter, since it would require each individual lens design to be carefully tested, then some method to apply the information to the file, and perhaps would require data for every possible f-stop and, with zoom lenses, for specific zoom settings. But it seems the theory of restoring the data as it is spread to adjacent pixels isn't much different from what happens with an AA filter.
I know I have many times stopped down to f/22 (or further) and smart sharpen seems to work quite well, even when printing large prints.
Just curious.
I'll prepare an image, and add an f/32 (let's up the ante) diffraction blur, and post it later. We can then see what the various methods can restore, and what the limitations are.
Cheers,
Bart
This is done in microscopy ... an area where there is a constant battle to overcome extremely shallow DOF, or to put it another way, to reduce the painful trade-offs between OOF effects (aperture too big) and diffraction effects (aperture too small). One snippet:
Sounds very interesting, Bart. Some of my Zeiss lenses go to f/45. Never used to worry me on film . . .
The resolution-limiting effect of the Airy disk is the same on an 8 x 10 inch view camera as on a Minox miniature format. However, for a given print size, the effects of diffraction for a given Airy disc diameter are much more apparent with the Minox due to the magnification factor. Likewise, the effects of diffraction do not depend on pixel size. For a given overall sensor size, a small pixel camera will have the same diffraction limited resolution as a larger pixel sized camera.
John
Shouldn't it be possible to some extent to reverse this, since the PSF of diffraction is well known?
(I think it involves some sort of Bessel function.)
Okay, here we go.
1. I've taken a crop of a shot taken with my 1Ds3 (6.4 micron sensel pitch + Bayer CFA) and the TS-E 90mm at f/7.1 (the aperture where the diffraction pattern spans approx. 1.5 pixels).
0343_Crop.jpg (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop.jpg) (1.203kb) I used 16-b/ch TIFFs throughout the experiment, but provide links to JPEGs and PNGs to save bandwidth.
2. That crop is convolved with a single diffraction (at f/32) kernel for 564nm wavelength (the luminosity weighted average of R, G and B taken as 450, 550 and 650 nm) at a 6.4 micron sensel spacing (assuming 100% fill-factor). That kernel (http://www.xs4all.nl/~bvdwolf/main/downloads/Airy9x9{p=6.4FF=100w=0.564f=32.}.dat) was limited to the maximum 9x9 kernel size of ImagesPlus, a commercial Astrophotography program chosen for the experiment because a PSF kernel can be specified and the experiment can be verified. That means that only a part of the infinite diffraction pattern (some 44 micron, or 6.38 pixel widths, in diameter to the first minimum) could be encoded. So I realise that the diffraction kernel is not perfect, but it covers the majority of the energy distribution. The goal is to find out how well certain methods can restore the original image, so anything that resembles diffraction will do.
The benefit of using a 9x9 convolution kernel is that the same kernel can be used for both convolution and deconvolution, so we can judge the potential of a common method under somewhat ideal conditions (a known PSF, and computable in a reasonable time). It will present a sort of benchmark for the others to beat.
Crop+diffraction (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction.png) (5.020kb !) This is the subject to restore to its original state before diffraction was added.
3. And here (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction+RL0-1000.jpg) (945kb) is the result after only one Richardson Lucy restoration (although with 1000 iterations) with a perfectly matching PSF. There are some ringing artifacts, but the noise is almost the same level as in the original. The resolution has been improved significantly, quite usable for a simulated f/32 shot as a basis for further postprocessing and printing. Look specifically at the Venetian blinds at the first floor windows in the center. Remember, the restoration goal was to restore the original, not to improve on it (that will take another postprocessing step).
Again, this is a simplified case (with only moderate noise) with only one type of uniform blur, and its PSF is exactly known. But it does suggest that under ideal circumstances a lot can be restored. So that reduces the quest to an accurate characterization of the PSF in a given image, and software that can use it for restoration ...
Cheers,
Bart
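For anyone who wants to repeat a scaled-down version of this experiment without ImagesPlus, the steps can be approximated with freely available Python tools. The kernel below is a simplified, point-sampled, monochromatic Airy pattern at the stated 6.4 micron pitch, 564 nm and f/32 (so only an approximation of Bart's integrated kernel), the test image is a placeholder so the snippet runs standalone, and the restoration uses scikit-image's Richardson-Lucy routine rather than ImagesPlus:

    import numpy as np
    from scipy.ndimage import convolve
    from scipy.special import j1
    from skimage.restoration import richardson_lucy

    def airy_kernel(size=9, pitch_um=6.4, wavelength_um=0.564, f_number=32.0):
        """Point-sampled Airy diffraction pattern on a size x size grid of sensels."""
        ax = (np.arange(size) - size // 2) * pitch_um        # radii in microns
        xx, yy = np.meshgrid(ax, ax)
        r = np.hypot(xx, yy)
        arg = np.pi * r / (wavelength_um * f_number)
        with np.errstate(invalid="ignore", divide="ignore"):
            k = (2.0 * j1(arg) / arg) ** 2
        k[r == 0] = 1.0                                      # limit of the expression at the centre
        return k / k.sum()

    psf = airy_kernel()          # f/32 diffraction; first minimum at roughly 22 um radius

    # Placeholder test image (thin bright lines every 8 pixels) so the snippet runs standalone
    img = np.zeros((128, 128))
    img[::8, :] = 1.0

    blurred = convolve(img, psf, mode="reflect")             # simulate the diffraction blur
    restored = richardson_lucy(blurred, psf, 200)            # fewer iterations than Bart's 1000

    # Mean absolute error relative to the original, before and after restoration
    print(np.mean(np.abs(blurred - img)), np.mean(np.abs(restored - img)))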
Interesting, Bart. Thanks for providing a 16 bit PNG file of the unsharpened image.
I tried sharpening the PNG file using Focus Magic (which I've been using for a number of years now). The automatic detection of blur width gave me readings varying from 2 pixels to 7 pixels, depending on which part of the image was selected. One can get some rather ugly results sharpening a whole image at a 7-pixels setting, especially at 100%, so I tried using a 1-pixel blur width at 50%, repeating the operation 7 times.
Below is the result, using maximum quality jpeg compression. To my eyes, the result looks very close to yours. However, at 200% it's clear that your result shows slightly finer detail. An obvious example of this is the lower window to the left of the tree. The faint horizontal stripes suggest the presence of a venetian blind. In my FM-sharpened image, there's no hint of this detail.
[attachment=23359:FM_1_pix...fraction.jpg]
Bart, this is very interesting! I was not yet able to achieve the same deconvolution by using RawTherapee (RL deconvolution), ACR, SmartSharpen, Topaz Detail and ALCE(bigano.com).
Just discovered this tool - DeblurMyImage (http://www.adptools.com/en/deblurmyimage-description.html) that allows to import a PSF.
Do you have, by any chance, an image of the PSF used by ImagesPlus?
This will be an interesting experiment!
I suppose that if I am able to measure the PSF for my lens + camera + raw converter, it will provide the best sharpening for my images. This is very tempting!
What is the effect of using fewer than 1000 iterations? In RawTherapee there doesn't seem to be much change after 40 or 50.
BTW, I looked at the (open) source code for RT, and it assumes a Gaussian PSF. I think it could easily be modified to use different PSFs, and it would possibly not be too hard to allow one to be input by the user.
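The core of Richardson-Lucy is small enough to show why accepting a user-supplied PSF is mostly a plumbing exercise. A plain textbook-style sketch (written with SciPy; RawTherapee's actual implementation adds damping and other refinements):

    import numpy as np
    from scipy.ndimage import convolve

    def rl_deconvolve(observed, psf, iterations=50, eps=1e-12):
        """Plain Richardson-Lucy. The PSF is just an argument, so a Gaussian,
        an Airy pattern, or a measured kernel can all be used unchanged.
        observed is a non-negative float image."""
        psf = psf / psf.sum()
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        for _ in range(iterations):
            reblurred = convolve(estimate, psf, mode="reflect")
            ratio = observed / (reblurred + eps)              # the "division step"
            estimate *= convolve(ratio, psf_mirror, mode="reflect")
        return estimate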
Ray,
Your experiment debunks one of the main criticisms of deconvolution: that deconvolution is fine in theory but falls down in practice because a suitable PSF cannot be found. Bart used a near-perfect PSF (limited by the 9x9 filter in ImagesPlus) and you used a trial and error method to derive a PSF that produced nearly as good results.
The PSF used by FocusMagic, and how it is affected by the Blur Width and Amount parameters, is not well documented. Does Amount determine the number of iterations or some other quantity? Restorations for defocus, diffraction, and lens aberrations such as spherical aberration require different PSFs.
As implied by its name, FocusMagic may use a PSF optimized for restoration of defocus. However, as your experiment demonstrates, decent results may be obtained with a PSF that is not optimal. A decent approximation may be sufficient.
It was disappointing to learn that the PSF in RawTherapee is for Gaussian blur.
Hi Bill,
Yes, Ray did well by adapting the method and using a good (more defocus-oriented) deconvolver.
That's correct, but then FocusMagic doesn't claim to be a cure for everything. The documentation leaves a bit to be desired, but on the other hand the preview makes it into a quick trial and error procedure to find the best settings. What works well in most cases is to increase the amount and start increasing the radius. There comes a point where the resolution suddenly changes for the worse. Just back up one click and fine-tune the amount.
I agree. The improvement will be quite visible anyway, and a bit of creativity may find an even better solution. As Ray's example showed, he came very close to an optimal scenario, and with less visible artifacts.
The program has an open development structure now, so who knows what the future has in store.
Cheers,
Bart
Hi Emil,
The reason was that fewer iterations showed more ringing artifacts, but one could opt for that compromise and try to deal with the artifacts in another way. After a few hundred iterations the ringing started to reduce a bit, so I decided to give the PC a workout. Perhaps a larger kernel size would have allowed stopping earlier with less ringing, but a larger kernel would also increase calculation time per iteration.
Hi,
This is what I got in LR3.
Best regards
Erik
Hi Erik,
We're into extreme pixel-peeping here, are we not? It appears that CS5 might now be doing a better job than Focus Magic.
As I mentioned, one of the critical areas in Bart's image, which highlights the quality of the sharpening, is that window nearest the ground, just to the left of the tree. It's clear there's a venetian blind there, so it's reasonable to deduce that the horizontal lines represent real detail and are not just artifacts. My sharpening attempt with FM has not done well in that section of the image. Bart's attempt with a single Richardson-Lucy restoration does the best job, yours next and mine a poor third.
Such differences are best viewed at 300%. Here's a comparison at 300% so we all know what we're talking about. Bart's is first on the left, yours in the middle and mine furthest to the right. I added one more iteration of 1 pixel blur width at 50%, so the title should read 8x instead of 7x.
[attachment=23380:Comparis..._at_300_.jpg]
Okay! Let's now shift our gaze to the smooth blue surface at the top of the crop. What! Is that noise I see? Surely it must be! However, in my FM sharpened image, that plain blue section at the top is as smooth as a baby's bottom.
I guess we have trade-offs in operation here.
Out of curiosity, I tried sharpening Bart's image using ACR 6.1 with the following settings. Detail 100%, 0.5 pixels, amount 120%, no masking. (Masking reduces resolution.)
It's done an excellent job. So close to Bart's, I would say the differences are irrelevant. A 300% enlarged crop on the monitor represents a print size of the entire image of about 10 metres by 25 metres (maybe a slight exaggeration, but you get my point ).
[attachment=23381:ACR_6.1_..._Bart__s.jpg]
Ray,
Masking does not necessarily reduce resolution; it decides which areas are to be sharpened, so you would use the mask to keep sharpening on detail but suppress sharpening in smooth areas, like the blue paint. With intensive/excessive sharpening, the transition between masked and unmasked areas may be ugly.
Wayne,
Thanks for "hijacking" this discussion, it got much more interesting!
Best regards
Erik
I would say all the examples other than Bart's show much more Gibbs' phenomenon ('ringing' artifacts along sharp edges); look at the edges of the white window frames, for example. Though it looks like Bart's ringing is longer range (perhaps the result of all those iterations).
Those 1000 iterations of Bart's RL deconvolution were not without benefit. The Gibbs phenomenon is well demonstrated with the slanted edge and line spread plots of Imatest. The illustration shows no sharpening on the left and sharpening with FocusMagic (blur width 50, amount 150) on the right. The line spread is for the Focus Magic image.
Hi Emil.
Much more Gibbs' phenomenon??
Here's a 400% crop comparison between Bart's sharpened result and ACR 6.1. Could you point out any significant ringing artifacts along edges, which are apparent in the ACR sharpened image but not in Bart's?
The most significant differences I see between the two images are a few faint horizontal lines on the blue paint-work at the top of the crop, which are apparent in Bart's rendition but not in the ACR rendition.
[attachment=23391:400__crop.jpg]
They both have ringing artifacts. Bart's have more side lobes, yours have a stronger first peak and trough. It was that initial over- and under-shoot that I was referring to when I wrote "much more" -- the initial amplitude is stronger. Though that longer tail of side lobes can be more of a problem in some places -- see the white sliver next to the left side of the tree trunk near the bottom.
In his comparison of the new Leica S2 with the Nikon D3x, Lloyd Chambers (Diglloyd (http://diglloyd.com/diglloyd/2010-07-blog.html#_20100722DeconvolutionSharpening)) has shown how the deconvolution sharpening (more properly image restoration) with the Mac only raw converter Raw Developer markedly improves the micro-contrast of the D3x image to the point that it rivals that of the Leica S2. Diglloyd's site is a pay site, but it is well worth the modest subscription fee. The Richardson-Lucy algorithm used by Raw Developer partially restores detail lost by the presence of a blur filter (optical low pass filter) on the D3x and other dSLRs.
Bart van der Wolf and others have been touting the advantages of deconvolution image restoration for some time, but pundits on this forum usually pooh-pooh the technique, pointing out that deconvolution techniques are fine in theory, but in practice are limited by the difficulties in obtaining a proper point spread function (PSF) that enables the deconvolution to undo the blurring of the image. Roger Clark (http://www.clarkvision.com/articles/image-restoration1/index.html) has reported good results with the RL filter available in the astronomical program ImagesPlus. Focus Magic is another deconvolution program used by many for this purpose, but it has not been updated for some time and is 32 bit only.
Isn't it time to reconsider deconvolution? The unsharp mask is very mid 20th century and originated in the chemical darkroom. In many cases decent results can be obtained by deconvolving with a less than perfect and empirically derived PSF. Blind deconvolution algorithms that automatically determine the PSF are being developed.
Regards,
Bill
If only there were a De-Convoluter for LuLa threads....
; )
Unfortunately, the Gibbs phenomenon, which produces ringing-like effects, is usually mistaken for ringing produced by convolution (or deconvolution) operations. They are not the same in general, and the typical ringing associated with image restoration is mistakenly identified with the Gibbs phenomenon in this thread.
I wouldn't attempt to argue points of physics or mathematics with the eminent Emil Martinec. However, when Emil implies that my ACR 6.1 'detail enhancement' has significantly more ringing artifacts than Bart's Richardson Lucy rendition, I'm plain confused. I just don't see it; at least not at 400% enlargement.
...but ya know, that's just me.
We now have it on Eric Chan's authority that, when the detail slider is cranked up to 100%, the sharpening in ACR 6 is deconvolution based. So what hairs are you splitting to distinguish it from deconvolution sharpening?
Well, that was a bit of a surprise to me...
But I would ask again, what did a 1K iteration deconvolution do that ACR 6.1 couldn't do (except add ringing effects)?
But I would ask again, why do you want to throw dirt on deconvolution methods if you are lavishing praise on ACR 6.1?
I'm not throwing dirt on deconvolution methods other than to state that in MY experience (which is not inconsiderable) the effort does not bear the fruit that advocates seem to expound on–read that to mean, I can't seem to find a solution better than the tools I'm currently using without going through EXTREME effort.
ACR 6.1 seems pretty darn good to me, how about you?
You got any useful feedback to contribute?
What do YOU want in image sharpening?
Do you think computational solutions will solve everything?
Have you actually learned how to use ACR 6.1?
How many hours do YOU have in ACR 6.1 (the odds are I've prolly got a few more hours in ACR 6.1/6.2 than you might–and worked to improve the ACR sharpening more than most people may have).
A clumsy attempt to change the subject. You still seem to be making an artificial distinction between deconvolution methods and ACR 6.x
Well, Joofa, you obviously appear to know what you are talking about. I confess I have almost zero knowledge of the Gibbs phenomenon, but I can appreciate that it may be useful to be able to identify and name any artifacts one may see in an image, especially if one is examining an X-ray of someone's medical condition, or indeed searching for evidence of alien life on a distant planet.
A more technical note: The deconvolution-problem is typically ill-posed, at least, initially. In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind. In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated. More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage. Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation. Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough. However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to maximum a posteriori (MAP) estimation, which might provide more acceptable results. Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained. Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the drawback being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.
I suppose, sometimes, if I start feeling a bit pleased with myself and think that I know quite a lot about photography, it is probably good for me to be taken down a peg or two and realise that there are people in this world with whom I would actually be unable to communicate, except on the level of "Would you like a cup of tea?".
John
Hi,
My conclusion from the discussion is that:
1) It is quite possible to regain some of the sharpness lost to diffraction using deconvolution, even if the PSF (Point Spread Function) is not known. It also seems to be the case that we have deconvolution built into ACR 6.1 and LR 3.
2) Setting "Detail" to high and varying the radius in LR3 and ACR 6.1 is a worthwhile experiment, but we may need to gain some more experience how this tools should be used.
My experience is that "Deconvolution" in both ACR 6.1 and LR3 amplifies noise; we need to find out how to use all the settings to best effect.
Best regards
Erik
Since diffraction pretty-much wipes out the detail outside of the diffraction limit, deconvolution sharpening is generally limited to massaging whatever is left within the central cutoff.
From what I've read, detail beyond the diffraction cutoff has to be extrapolated ("Gerchberg method", for one), or otherwise estimated from the lower frequency information. The methods are generally called "super-resolution". The Lucy method, due to a non-linear step in the processing, is supposed to have an extrapolating effect, but I'm not sure if it's visible here.
I sympathise with your frustration here, John, but let's not be intimidated by poor expression. Here's my translation, for what it's worth, sentence by sentence.
(1) The deconvolution problem is typically ill-posed.
means: The sharpening problem is often poorly defined. (That's easy).
(2) In the continuous domain the usual distortion due to blurring effect acts as an integral operator and problem statement boils down to a Fredholm integral equation of the first kind.
means: The analog world, which is a smooth continuum, is different from the digital world with discrete steps. You need complex mathematics to deal with this problem, such as a Fredholm integral equation. (Whatever that is).
(3) In the discrete domain, which we usually operate due to digitization, the inherent ill-posedness is inherited, while some of the problems are ameliorated.
means: We're now stuck with the digital domain. There's a hangover from the analog world with incorrect definitions, but we can fix some of the problems. There's hope.
(4) More well-behaved solutions can be obtained by introducing some sort of "smoothness" or regularization criterion at this stage.
means: We can achieve a balanced result by sacrificing detail for smoothness.
(5) Richardson-Lucy deconvolution converges to maximum-likelihood (ML) estimation.
means: The Richardson-Lucy method attempts to provide the best result, in terms of detail.
(6) Maximum-likelihood techniques just do the analysis of image data, and hence, in general may not be smooth enough.
means: The best result may introduce noise.
(7) However, some regularization is imparted by incorporating some notions regarding the a priori (default) distribution of image data, and hence, converting the problem to maximum a posteriori (MAP) estimation, which might provide more acceptable results.
means: With a bit of experimentation we might be able to fix the noise problem.
(8) Under the assumptions of Gaussianity of certain image parameters (NOTE: not necessarily the Gaussianity of the blur function) some equivalence of minimum mean square error estimation (MMSE), linearity, and MAP estimation can be obtained.
means: Gaussian mathematics is used to get the best estimate for sharpening purposes. (Gauss was a German mathematical genius, considered to be one of the greatest mathematicians who has ever lived. Far greater than Einstein, in the field of mathematics).
(9) Further optimizations can be introduced by using a more realistic nonstationary form of the blur function and variations of image data distribution and noise distribution - the draw back being that one might have to forgo some quick operations in the form of fast Fourier transforms (FFT) embedded somewhere in many deconvolution techniques.
means: You can get better results if you take more time and have more computing power.
Okay! Maybe I've missed a few nuances in my translation. No-one's perfect. Any improved translation is welcome.
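One concrete instance of the regularization being described in points (4) to (8): under Gaussian assumptions for the noise and the prior, the MAP/MMSE answer reduces to the familiar Wiener filter, which is just inverse filtering with a damping term. A minimal frequency-domain sketch (the noise-to-signal constant is hand-picked here rather than estimated):

    import numpy as np

    def wiener_deconvolve(blurred, psf, nsr=0.01):
        """MAP/MMSE deconvolution under Gaussian assumptions (a Wiener filter).
        nsr is an assumed noise-to-signal power ratio, hand-picked here."""
        pad = np.zeros(blurred.shape)
        pad[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()
        # centre the kernel on the origin so the output is not shifted
        pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
        otf = np.fft.fft2(pad)                                # transfer function of the blur
        filt = np.conj(otf) / (np.abs(otf) ** 2 + nsr)        # damped inverse filter
        return np.real(np.fft.ifft2(np.fft.fft2(blurred) * filt))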
Not in theory. For bounded functions the Fourier transform is an analytic function, which means that if it is known for a certain range then techniques such as analytic continuation (http://en.wikipedia.org/wiki/Analytic_continuation) can be used to extend the solution to the whole frequency range. However, as I indicated earlier, such resolution-boosting techniques have difficulty in practice due to noise. It has been estimated in a particular case that to succeed in analytic continuation an SNR (amplitude ratio) of 1000 is required.
IIRC, even in the presence of noise, it has been claimed that a twofold to fourfold improvement of the Rayleigh resolution in the restored image over that in the acquired image may be achieved, if the transfer function of the imaging system is known sufficiently accurately and resolution-boosting analytic continuation techniques are used.
It has been claimed ... references please??
Edmund
I don't think one is expecting miracles here, like undoing the Rayleigh limit; zero MTF is going to remain zero. But nonzero microcontrast can be boosted back up to close to pre-diffraction levels, and the deconvolution methods seem to be doing that rather well.
I am wondering whether a good denoiser (perhaps Topaz, which seems to use nlmeans methods) can help squelch some of the noise amplified by deconvolution without losing recovered detail such as the venetian blinds.
The RL sharpening of RawTherapee does a very good job with just a Gaussian kernel. I wonder if knowing the exact PSF for deconvolution could be any better? Somehow I doubt it (except in the case of motion blur, which has PSFs like jagged snakes).
Thanks for the PS tip.
Well, for lens blur I imagine it could be a bit better to use something more along the lines of a rounded-off 'top hat' filter (perhaps more of a bowler) rather than a Gaussian, since that more accurately approximates the structure of OOF specular highlights, which in turn ought to reflect (no pun intended) the PSF of lens blur. Another thing that RT lacks is any kind of adaptivity in its RL deconvolution; that could mitigate some of the noise amplification if done properly. The question is whether that would add significantly to the processing time. It's on my list of things to look into.
This just went up on Slashdot. Here's another one from Microsoft Research:
http://research.microsoft.com/en-us/um/red.../imudeblurring/ (http://research.microsoft.com/en-us/um/redmond/groups/ivm/imudeblurring/)
It looks like the Hasselblad gyro hardware should be able to write this type of info in the future.
Edmund
Ray,
Unfortunately, from about (5) my feeling is that your jargon-reduction algorithm is oversmoothing and losing semantic detail
But then, what do I know ?
Edmund
Is there anywhere one can find the typical PSF or spectral power distribution of the typical AA filter?
I would be surprised if any method can do more than guess at obliterated detail (data in the original beyond the Rayleigh limit). The problem is much akin to upsampling an image; in both cases there is a hard cutoff on frequency content somewhat below Nyquist (in the case of upsampling, I mean the Nyquist of the target resolution). Yes there are methods for the upsampling such as the algorithm in Genuine Fractals, but they amount to pleasing extrapolations of the image rather than genuine restored detail. That's not to say the result is not pleasing, and perhaps analytic continuation for super-resolution yields a pleasing result; in fact it sounds a bit similar to the use of fractal scaling to extrapolate image content to higher frequency bands.
Not sure if the problem is akin to upsampling, as upsampling does not create new information whereas analytic continuation does create new information.
I don't see what the difference is between a spectral density that is zero beyond the inverse Airy disk radius, and a spectral density that is zero beyond Nyquist. If you are going to extend the spectral density to higher frequencies, in effect that information is being invented.
This is different from straight deconvolution, where the function being recovered has been multiplied by some nonzero function, and one recovers the original function by dividing out by the (nonzero) FT of the PSF.
The intent is to use deconvolution to recover the spectrum in the passband of the imaging system and then use analytic continuation to extend it out to those frequencies where it was zero before.
Analytic continuation of what? We're talking about discrete data... so at best we're talking about some assumption about a smooth analytic function that interpolates the discrete data in a region you like (low frequencies) and extrapolates into a region you don't like with the existing data (high frequencies).
Also, analytic continuation is simply one of many possible assumptions about how to extend the data; the issue is whether it or another invents new data that is visually pleasing.
Anyway, I've made my point and I don't want this to hijack the thread.
I remember seeing an article that showed one. Instead of four little dots in a neat square like I imagined, it was more like a dozen dots in a messy diamond pattern. I'll try to find it...
Zeiss MTF Curven (http://www.zeiss.de/C12567A8003B8B6F/EmbedTitelIntern/CLN_30_MTF_en/$File/CLN_MTF_Kurven_EN.pdf) shows the PSF of a low pass filter.
Regards,
Bill
Yes, Nr. 8 on page 4 - let's see if we can deconvolve that!
(Un)fortunately, in practice, the PSF of a lens (residual aberrations + diffraction, assuming perfect focus and no camera or subject motion) plus an optical low-pass filter (OLPF) and the sensel mask and spacing will resemble a modified Gaussian rather than just the OLPF's PSF. As with many natural sources of noise, when several sources are combined the result can be approximated reasonably well by a (somewhat modified) Gaussian.
I have analyzed the PSF of the full optical system (different lenses + OLPF at various apertures + aperture mask of the sensels) of e.g. my 1Ds3 (and the 1Ds2 and 20D before that), and the effect a Raw converter has on the captured data, and have found that a certain combination of multiple Gaussians does a reasonably good job of characterizing the system MTF.
...
There is a lot of ground to cover before simple tools are available, but threads like these serve to at least increase awareness.
Interesting. I'd be curious to see how it does relative to RL in, say, RawTherapee -- applied to the same tiff image (RT takes tiffs as inputs). The thing that I would worry about with doing the deconvolution in photoshop is roundoff errors, since all one has access to in PS is 16-bit integer math unless you jump through hoops with the HDR format. How are you doing the division step in integer math?
For two nearly similar images, such as the ratio of an image and its low-pass filtered version, most of the values would be near one, which doesn't truncate nicely in integer math; I was wondering how that would be dealt with. It looks like in your version the nearly equal values upon division are being sent to a color value of 203 (on my non-calibrated laptop).
So...since LR3 does deconvolution sharpening when the detail slider is moved all the way to the right
no, not "all the way" to the right - as Eric clarified it is a blend of USM and deconvolution methods, where the input from deconvolution is growing as you move the slider to the right... just when it is "all the way to the right" you probably have 100% pure deconvolution w/o any input from USM
Yes, that is my take on Eric's post. Now, what happens when the slider is all the way to the left? USM? However, detail of zero suppresses halos, which is different from the usual USM.
While in the Lightroom forum some one mentioned this:
As per Eric Chan:
Quote from: madmanchan
... CR 5.7 uses the same new method as LR 3.0 (so that LR 3 users can use Edit-In-PS with CS4 and get the same results).
What is the problem with sharpening in Lab? see attached
Hi Bart! It was a convoluted file you uploaded, but I would be happy to do it on a crop of the original from the camera.
The switch to Lab should not cause any shift in the colour values.
The file got a Gaussian blur on the a&b channels and a Smart Sharpen of 100 - 0.2 on the L, with a slight increase in saturation on the a&b because, as you know, blurring causes desaturation.
Please guide me to where the original crop png file can be found.
Hi Bart, how are you?
We used to write in comp.periph.scanners, some years ago. :)
I'm playing with deconvolution a bit.
I've studied RawTherapee sources and put up a quick hack in C to experiment with various kernels (PSFs).
If you have images and PSFs to play with, I would be really happy to show up the results.
I can deal with float PSFs of square shape and whatever size. Only grayscale pictures at the moment.
Hopefully we can work out a set of PSFs to complement the Gaussian 3x3 approximation that RT is using right now.
The quick hack is commandline and really ugly with lots of limitations, so I would be ashamed to share it for now; but I can download test images and upload the results.
Doesn't time fly, it's 'a bit more' than some years by now.
IMHO there are 3 obvious fundamental candidates:
- A mix of Gaussians
- Defocus blur (DOF related or plain OOF)
- Diffraction dominated
I'm in the process of programming a "PSF generator" application
In this thread I posted an image crop that has the diffraction of f/32 added
XFer, Is this code googlecode by any chance? I would love to compile it to try out!
What about segmenting the image and applying a different PSF to each relevant portion?
Here we have a couple of tests of mine.
Please note that since my dirty little app only manages gray images at this time, I had to convert to Lab and deconvolve Lightness only.
First test: RT Gaussian approximation (3x3 kernel).
Radius (sigma) = 1.2, 2000 iterations, no damping.
Your deconvolution has more hi-freq details, but more ringing. See the angled white bar near the bottom of the tree.
Second test: same kernel, but I tried a special "turbo" mode, modifying the R-L implementation so that I can use a very narrow Gaussian (small sigma) and very few iterations.
radius = 0.35, 60 iterations, no damping
This method is very fast but really "nervous", can diverge easily. ;D.
Yes, that will help, but it does require adaptive PSF generation. Another approach is determining a different PSF for center and corners, and then blend between them.
That's right, compromises, compromises. Still no free lunch ...
You have actually measured the PSF, right?
It's strange that it resembles a Gaussian so much, instead of the Airy disc I would have expected from a heavily diffraction-limited image. I see no fringes in the PSF.
What's a good PSF for defocus?
Yes, well I was imagining it should look like the little disks of OOF specular highlights, those being extreme versions of OOF point sources. But there one sees some structure near the edges, perhaps diffraction off the edge of the aperture blades? As well of course as a slight polygonal shape due to the aperture blades. But I'm not sure any of those are significant, and a disk is perhaps good enough. I was just wondering if there was any discussion eg in the literature or in some online source.
In the service of keeping it simple, perhaps since apart from the side lobes of the Airy pattern the central peak is fairly well approximated by a Gaussian, one could use a suitable combination (the successive convolution) of a disk, Gaussian, and line (for motion deblur).
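To make that "successive convolution" idea concrete, here is a minimal Python/NumPy sketch that builds a composite PSF by convolving a defocus disk, a Gaussian, and a short motion-blur line. The sizes and radii are purely illustrative assumptions, not measured values, and this is not code from any of the programs discussed in this thread.

import numpy as np
from scipy.signal import fftconvolve

def disk_psf(radius, size):
    # uniform disk ('top hat'), a crude model of defocus blur
    c = size // 2
    y, x = np.mgrid[-c:c + 1, -c:c + 1]
    k = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    return k / k.sum()

def gaussian_psf(sigma, size):
    # isotropic Gaussian, a stand-in for residual aberrations + AA filter
    c = size // 2
    y, x = np.mgrid[-c:c + 1, -c:c + 1]
    k = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def line_psf(length, size):
    # horizontal line, a crude model of linear motion blur
    k = np.zeros((size, size))
    c = size // 2
    k[c, c - length // 2:c + length // 2 + 1] = 1.0
    return k / k.sum()

# composite PSF: convolve the three components, then renormalize to unit sum
psf = fftconvolve(fftconvolve(disk_psf(2.0, 9), gaussian_psf(0.8, 9)), line_psf(3, 9))
psf /= psf.sum()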
To gain some insight into how it works, here is a recipe for RL deconvolution using Photoshop commands (a small code sketch of the same loop follows the steps):
1. Duplicate the ORIGINAL blurry image, call it "COPY1"
2. Duplicate COPY1, call it "COPY2"
3. Blur COPY2 with the PSF. For a gaussian, use Gaussian Blur. Other PSFs can be defined with the Custom Filter.
4. Divide the ORIGINAL blurry image by COPY2, with the result in COPY2.
5. Blur COPY2 with the PSF (as in step 3).
6. Multiply COPY1 by COPY2, with the result in COPY1. (Apply Image with Blending Mode: Multiply)
7. Go to step #2 and repeat for the number of iterations you want. Each iteration gets a little sharper. The final result is in COPY1.
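For those who prefer code to Photoshop dialogs, here is a minimal sketch of the same loop in Python/NumPy. It assumes a normalized PSF; for a symmetric PSF the "blur again" operation in step 5 is identical to correlation with the mirrored PSF used below. This is only an illustration of the iteration, not the Photoshop implementation.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    observed = observed.astype(float)
    estimate = observed.copy()                # step 1: start from the blurry image
    psf_mirror = psf[::-1, ::-1]              # mirrored PSF for the correlation in step 5
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')           # step 3: blur the estimate
        ratio = observed / np.maximum(blurred, eps)                 # step 4: divide original by blurred copy
        correction = fftconvolve(ratio, psf_mirror, mode='same')    # step 5: blur the ratio
        estimate = estimate * correction                            # step 6: multiply
    return estimate                                                 # step 7: result after the last iteration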
To me, this set of operations implies that deconvolution with a separable kernel is separable. Is it?
[...]
So I need to tabulate Airy(R) on an NxN grid
Is there a Matlab expert who can help me?
I need a real working example: I found a huge number of so-called "tutorials" on the web but none of them actually works... for example, note that Airy(R) as defined is indeterminate (0/0) at R=0, so one must somehow tell Matlab that R=0 must give Airy(R) = 1.
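Not Matlab, but here is a Python/NumPy equivalent of the tabulation being asked for, with the R=0 case handled explicitly as the limit value 1. The scaling x = pi*r/(wavelength*f-number) is the usual far-field diffraction formula; the default pixel pitch, wavelength and f-number are only example values.

import numpy as np
from scipy.special import j1

def airy_kernel(n=9, pitch_um=6.4, wavelength_um=0.564, fnumber=32.0):
    c = n // 2
    y, x = np.mgrid[-c:c + 1, -c:c + 1]
    r = np.hypot(x, y) * pitch_um                 # radial distance from the centre, in microns
    arg = np.pi * r / (wavelength_um * fnumber)
    out = np.ones_like(arg, dtype=float)          # Airy(0) = 1, taken as the limit value
    nz = arg != 0
    out[nz] = (2.0 * j1(arg[nz]) / arg[nz]) ** 2
    return out / out.sum()                        # normalize the kernel to unit sum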
And then there is the OLPF convolved with the Airy pattern before it gets to the box blur of the sensels. It all makes me suspect that a Gaussian is going to be a reasonable approximation in the end, given all the inaccuracies introduced all along the way.
Do you think that the difference between using the precise PSF and Gaussian is going to be noticeable?
Ok but let's not forget special-purpose deconvolution.
It's not only about inverting diffraction or box blur; it's also about spherical aberration, coma, defocusing, motion blur.
That's why we need a way to comfortably explore different PSFs.
I have this small utility that, at the moment, can accept hardcoded kernels, but I can extend it to load a kernel from file.
The problem is having meaningful PSFs to experiment with.
Right now I have an Excel sheet which can compute a 9x9 diffraction kernel (input parameters are pixel pitch, f-number, wavelength), but that's too limited. :-\
I have a few question regarding how to get discrete kernels from these continuous functions.
1) Do you just evaluate the function at the grid points? This would be like using a rectangular filtering window for the sampling. Doesn't this lead to issues? Or are you using more sophisticated ways to get the samples (triangular windows, Chebyshev windows, etc.)?
2) For Airy patterns which emulate strong diffraction (F/32 and beyond), even a 9x9 kernel leaves out a certain percentage of the total signal intensity.
Are you just truncating the function at the edges of the kernel, or do you perform some kind of smoothing? I think that just truncating could lead to ripples -> ringing on the image.
3) Especially for "tight & pointy" PSFs (think small-radius Gaussians), I have the feeling that a grid with a pitch of 1 pixel is too coarse. Too much approximation from the continuous function to the kernel. I think we're going to need sub-pixel accuracy to avoid some artifacts (mosquitoes around high-contrast details, ringing, edge overshooting, noise amplification, hot pixels).
What do you think about it?
Second picture: same images, sharpened with R-L deconvolution.
f/5.6 on the left, f/22 on the right.
(http://img96.imageshack.us/img96/534/diffraction02.jpg)
The f/5.6 shot is still sharper, but look at the aliasing artifacts.
The lens transmitted spatial frequencies far beyond the Nyquist limit and the AA filter could not do much about it.
Smaller patterns are totally destroyed by aliasing.
The f/22 is a bit softer, but almost entirely aliasing-free; I'd say that almost all the useable details are there, and smaller patterns are much more gracefully handled.
That sounds interesting!
So, thinking about scanning negatives, it would probably make sense to manually defocus a bit to reduce the gritty sky and then use deconvolution later to get back the details.
I have a Nikon LS 9000 scanner which allows for controlled, manual defocusing, but I'm always fighting with pseudo-aliasing (film grain/scanner CCD interaction).
Did this subject die out? haven't seen an entry since September 2010.
It was until you started grave-digging ;D
1. I've taken a crop of a shot taken with my 1Ds3 (6.4 micron sensel pitch + Bayer CFA) and the TS-E 90mm at f/7.1 (the aperture where the diffraction pattern spans approx. 1.5 pixels).
0343_Crop.jpg (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop.jpg) (1.203kb) I used 16-b/ch TIFFs throughout the experiment, but provide links to JPEGs and PNGs to save bandwidth.
2. That crop is convolved with a single diffraction (at f/32) kernel for 564nm wavelength (the luminosity weighted average of R, G and B taken as 450, 550 and 650 nm) at a 6.4 micron sensel spacing (assuming 100% fill-factor). That kernel (http://www.xs4all.nl/~bvdwolf/main/downloads/Airy9x9{p=6.4FF=100w=0.564f=32.}.dat) was limited to the maximum 9x9 kernel size of ImagesPlus, a commercial Astrophotography program chosen for the experiment because a PSF kernel can be specified and the experiment can be verified. That means that only a part of the infinite diffraction pattern (some 44 micron, or 6.38 pixel widths, in diameter to the first minimum) could be encoded. So I realise that the diffraction kernel is not perfect, but it covers the majority of the energy distribution. The goal is to find out how well certain methods can restore the original image, so anything that resembles diffraction will do.
The benefit of using a 9x9 convolution kernel is that the same kernel can be used for both convolution and deconvolution, so we can judge the potential of a common method under somewhat ideal conditions (a known PSF, and computable in a reasonable time). It will present a sort of benchmark for the others to beat.
Crop+diffraction (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction.png) (5.020kb !) This is the subject to restore to its original state before diffraction was added.
3. And here (http://www.xs4all.nl/~bvdwolf/main/downloads/0343_Crop+Diffraction+RL0-1000.jpg) (945kb) is the result after only one Richardson Lucy restoration (although with 1000 iterations) with a perfectly matching PSF. There are some ringing artifacts, but the noise is almost the same level as in the original. The resolution has been improved significantly, quite usable for a simulated f/32 shot as a basis for further postprocessing and printing. Look specifically at the Venetian blinds at the first floor windows in the center. Remember, the restoration goal was to restore the original, not to improve on it (that will take another postprocessing step).
I have supplied a link (http://www.xs4all.nl/~bvdwolf/main/downloads/Airy9x9%7Bp=6.4FF=100w=0.564f=32.%7D.dat) to the data file.
I hope no one minds my entering this a bit late. I've written a small C program that does deconvolution by Discrete Fourier Transform division (using the library "FFTW" to do Fast Fourier Transforms). To me this seems to beat all the other deconvolution algorithms that have been presented in this thread... I'd like to see what others think.
My algorithm currently only works on square images, and deals with edge effects very badly, so I added black borders to Bart's max-quality jpeg crop making it 1115x1115, then applied the convolution myself using his provided 9x9 kernel. I operated entirely on 16-bit per channel data. The exact convoluted image that I worked from can be downloaded here (5.0 MB PNG file) (http://kingbird.myphotos.cc/0343_Crop+Diffraction_square_black_border.png). My result, saved as a maximum-quality JPEG (after re-cropping it to 1003x1107): 0343_Crop+Diffraction+DFT_division_v2.jpg (1.2 MB) (http://kingbird.myphotos.cc/0343_Crop+Diffraction+DFT_division_v2.jpg)
My algorithm takes a single white pixel on a black background the same size as the original image, applies the PSF blur to it, and divides the DFT of the blurred pixel by the DFT of the pixel (element by element, using complex arithmetic); this takes advantage of the fact that the DFT of a single white pixel in the center of a black background has a uniformly gray DFT. Then it takes the DFT of the convoluted image and divides this by the result from the previous operation. Division on a particular element is only done if the divisor is above a certain threshold (to avoid amplifying noise too much, even noise resulting from 16-bit quantization). An inverse DFT is done on the final result to get a deconvoluted image.
This algorithm is very fast and does not need to go through iterations of successive approximation; it gets its best result right off the bat.
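For readers who want to experiment, here is a rough sketch of the same idea in Python/NumPy (not David's C/FFTW program): pad the PSF to the image size, divide the spectra, and skip divisions where the PSF spectrum is too small, to avoid blowing up noise. The threshold value is an assumption to be tuned.

import numpy as np

def deconvolve_fft(blurred, psf, threshold=1e-3):
    blurred = blurred.astype(float)
    h = np.zeros_like(blurred)
    ph, pw = psf.shape
    h[:ph, :pw] = psf
    h = np.roll(h, (-(ph // 2), -(pw // 2)), axis=(0, 1))  # centre the PSF at the origin to avoid a shift
    H = np.fft.fft2(h)
    B = np.fft.fft2(blurred)
    X = np.zeros_like(B)                                   # complex spectrum of the estimate
    keep = np.abs(H) > threshold                           # only divide where the PSF spectrum is non-negligible
    X[keep] = B[keep] / H[keep]
    return np.real(np.fft.ifft2(X))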
David, what PSF did you use on your sample? A simple Gaussian, or something more complex?
I have supplied a link (http://www.xs4all.nl/~bvdwolf/main/downloads/Airy9x9%7Bp=6.4FF=100w=0.564f=32.%7D.dat) to the data file. You can read the dat file with Wordpad or a similar simple document reader. You can input those numbers (rounded to 16-bit values, or converted to 8-bit numbers by dividing by 65535, multiplying by 255, and rounding to integers). A small warning: the lower the accuracy, the lower the output quality will be. For convenience I've added a 16-bit Greyscale TIFF (http://www.xs4all.nl/~bvdwolf/main/downloads/N32.tif) (convert to RGB mode if needed). I have turned it into an 11x11 kernel (9x9 + black border) because the program you referenced apparently (from the description) requires a zero background level.
David, this looks really interesting, especially considering that algorithm is fast.
It would be lovely to see something like this in a tool like RawTherapee, which provides a great image processing platform.
If you would be interested, I could help with UI implementation there.
Thanks Edmund, and thanks Eric. It is indeed simple, which makes me wonder why seemingly nobody else has thought of it. However I do think it has a lot of room for improvement, for example dealing with a noisy or quantized image — my current solution is to cut off frequencies that are noisy, but that results in ringing artifacts. Maybe I can add an algorithm that fiddles with the noisy frequencies in order to reduce the appearance of ringing (not sure at this point how to go about it, though). And there's of course the issue of edge effects, which I haven't tried to tackle yet.
I intend to post the source code, but it's rather messy right now (the main problem is that it uses raw files instead of TIFFs), so I'd like to clean it up first. Unless you'd really like to play with it right away, in which case I can post it as-is...
Meanwhile, I've improved the algorithm: 1) Do a gradual frequency cutoff instead of a threshold discontinuity; 2) Use the exact floating point kernel for deconvolution, instead of using a kernel-blurred white pixel rounded to 16 bits/channel.
The result: 0343_Crop+Diffraction+DFT_division_v3.jpg (1.2 MB) (http://kingbird.myphotos.cc/0343_Crop+Diffraction+DFT_division_v3.jpg)
David
David, the results look too good to be true. Could you verify that you (your software) used the correct (convolved) file as input? It's not that I don't like the results, it's that all other algorithms I've tried cannot restore data that has been lost (too low a S/N ratio) to f/32 diffraction. Maybe you didn't save the intermediate convolved/diffracted result (which truncates the accuracy to 16-bit at best), but instead performed all subsequent steps while keeping the intermediate results in floating point?
I would enjoy it if you posted another 16-bit/ch PNG of a convoluted image, without posting the original, but this time with edges that fade to black (i.e., pad the original with 8 pixels of black before applying a 9x9 kernel).
Hi David,
Okay, here it is, a 16-b/ch RGB PNG file with an 8 pixel black border, convolved with the same "N=32" kernel as before:
Great results so far. I look forward to playing around with it some day (assuming code is released :) )
Tada:(http://kingbird.myphotos.cc/7640_Crop+Diffraction+DFT_division_v3.jpg)
But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work. I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.
BTW, the edges can actually be any color, as long as it's uniform. I can just subtract it out (making some values negative within the image itself), and then add it back in before saving the result.
Cheers,
David
David, the results look too good to be true...
Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.
But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.
I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).
To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.
I'm vaguely following this - however, I would have thought that if the Fourier transform of the blur function is invertible, then it's pretty obvious that you'll get the original back - with some uncertainty in areas with a lot of high frequencies, due to noise in the original and computational approximation.
The frequencies above the cutoff frequency should be zero, so should not be invertible. (1/zero = ???)
I think (possibly wrongly) that the main problem is estimating the real world PSF, both at the focal plane and ideally in a space around the focal plane (see Nijboer-Zernike). Inverting known issues is relatively easy in comparison.
I used the Airy disk formula from Wikipedia (http://en.wikipedia.org/wiki/Airy_disk#Mathematical_details), using the libc implementation of the Bessel function, double _j1(double). My result differed slightly from Bart's in the inner 9x9 pixels. Any idea why? Bart, was your kernel actually an Airy disk convolved with an OLPF?
BTW, Bart, do you have protanomalous or protanopic vision? I notice you always change your links to blue instead of the default red, and I've been doing the same thing because the red is hard for me to tell at a glance from black, against a pale background.
I don't think that's necessary, unless one wants an even more accurate diffraction kernel. I can make larger kernels, but there are few applications (besides self-made software) that can accommodate them. Anyway, a 9x9 kernel covers some 89% of the power of a 99x99 kernel.
Yes, my kernel is assumed to represent a 100% fill factor, 6.4 micron sensel pitch, kernel. I did that by letting Mathematica integrate the 2D function at each sensel position + or - 0.5 sensel.
No, my color vision is normal. I change the color because it is more obvious in general, and follows the default Web conventions for hyperlinks (which may have had colorblind vision in the considerations for that color choice, I don't know). It's more obvious that it's a hyperlink and not just an underlined word. Must be my marketing background, to reason from the perspective of the end users.
There is of course another reason not to use large kernels. Applying a large kernel the conventional way is very slow; if I'm not mistaken it's O(N^2). But doing it through DFTs is fast (basically my algorithm in reverse), O(N log N). Of course the same problem about software exists.
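To illustrate the cost argument with off-the-shelf tools (array sizes here are arbitrary, and this is not a benchmark of any of the programs discussed in this thread): both calls below give the same result up to floating point, but the FFT route is far faster for large kernels.

import numpy as np
from scipy.signal import convolve2d, fftconvolve

image = np.random.rand(512, 512)
kernel = np.random.rand(31, 31)
kernel /= kernel.sum()

direct = convolve2d(image, kernel, mode='same')   # direct: cost grows with image pixels x kernel area
viafft = fftconvolve(image, kernel, mode='same')  # FFT route: roughly M log M in the number of pixels M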
I don't understand. What is there for Mathematica to integrate? The Airy disk function does use the Bessel function, which can be calculated either as an integral or an infinite sum, but can't you just call the Bessel function in Mathematica? What did you integrate?
I would suspect one wants to box blur the Airy pattern to model the effect of diffraction on pixel values (assuming 100% microlens coverage). The input to that is a pixel size, as Bart states.
Oh, thanks. Now I understand — he integrated the Airy function over the square of each pixel. I made the mistake of evaluating it only at the center of each pixel, silly me.
What method does Mathematica use to integrate that? Is it actually evaluating to full floating point accuracy (seems unlikely)?
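As a simple numerical stand-in for that integration (not Bart's Mathematica code), one can supersample each sensel square, evaluate the continuous Airy intensity at the sub-positions, and average, which approximates integrating over a 100% fill-factor pixel. In this Python sketch the helper names and the pitch/wavelength/f-number defaults are illustrative assumptions.

import numpy as np
from scipy.special import j1

def airy_intensity(r_um, wavelength_um=0.564, fnumber=32.0):
    # continuous Airy intensity profile (2*J1(x)/x)^2, with the R=0 limit of 1
    arg = np.pi * np.asarray(r_um, dtype=float) / (wavelength_um * fnumber)
    out = np.ones_like(arg)
    nz = arg != 0
    out[nz] = (2.0 * j1(arg[nz]) / arg[nz]) ** 2
    return out

def pixel_integrated_airy(n=9, pitch_um=6.4, oversample=16):
    # average the profile over an oversample x oversample grid inside each sensel
    # square, approximating integration over a 100% fill-factor pixel
    c = n // 2
    offs = (np.arange(oversample) + 0.5) / oversample - 0.5   # sub-positions within one pixel
    kernel = np.zeros((n, n))
    for iy in range(n):
        for ix in range(n):
            yy, xx = np.meshgrid((iy - c + offs) * pitch_um, (ix - c + offs) * pitch_um)
            kernel[iy, ix] = airy_intensity(np.hypot(xx, yy)).mean()
    return kernel / kernel.sum()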
I then ran another test to see if altering my capture sharpening could improve things further. As I think you suggested, deconvolution sharpening could result in fewer artefacts, so I went back to the Develop Module and altered my sharpening to Radius 0.6, Detail 100, and Amount 38 (my original settings were Radius 0.9, Detail 35, Amount 55). The next print gained a little more acutance as a result with output sharpening still set to High, with some fine lines on the cup patterns now becoming visible under the loupe. Just for fun, I am going to attach 1200 ppi scans of the prints so you can judge for yourselves, bearing in mind that this is a very tiny section of the finished print.
John
In another thread Bart mentioned the use of a slanted edge target on a flatbed to deliver a suitable base for the sharpening. I would be interested in an optimal deconvolution sharpening route for an Epson V700 while still keeping grain/noise at bay. Noise too, as I use that scanner also for reflective scans.
How would one go about characterizing a lens/sensor combination as "perfectly" as possible, in such a way as to generate suitable deconvolution kernels? I imagine that they would at least be a function of distance from the lens centre (radially symmetric), aperture, and focal length. Perhaps also subject distance, wavelength, and the non-radial spatial coordinate. If you want a complete PSF as a function of all of those without a lot of sparse sampling/interpolation, you have to make a serious number of measurements. It would be neat as an exercise in "how good can deconvolution be in a real-life camera system".
A practical limitation would be the consistency of the parameters (variation over time) and typical sensor noise. I believe that typical kernels would be spatial high-pass (boost), meaning that any sensor noise will be amplified compared to real image content.
So my suggestion for filmscans is to try the empirical path, e.g. with "Rawshooter" which also handles TIFFs as input (although I don't know how well it behaves with very large scans), or with Focusmagic (which also has a film setting to cope with graininess), or with Topazlabs InFocus (perhaps after a mild prior denoise step).
For reflection scans, and taking the potentially suboptimal focus at the surface of the glass platen into account, one could use a suitable slanted edge target and build a PSF from it. I have made a target out of thin self-adhesive black and white PVC foil. That allows a very sharp edge when one uses a sharp knife to cut it. Just stick the white foil on top of the black foil, which will hopefully reduce the risk of white clipping in the scan, or add a thin gelatin ND filter between the target and the platen if the exposure cannot be influenced.
Cheers,
Bart
Bart, thank you for the explanation.
That there is a complication in scanning camera film is something I expected, since two optical systems build the result. Yet I expect that the scanner optics may have a typical character that could be defined separately for both film scanning and reflective scanning. The diffraction-limited character of the scanner lens plus the multisampling sensor/stepping in that scanner should be detectable, I guess, and treating it with a suitable sharpening would be a more effective first step. There are not that many lenses used for the films to scan, and I wonder if that part of the deconvolution could be done separately. It would be interesting to see whether a typical Epson V700 restoration sharpening could be used by other V700 owners, separate from their camera influences.
For resolution testing the Nikon 8000 scanner, I had some slanted edge targets made on litho film on an imagesetter, with the slanted edge parallel to the laser beam for a sharp edge and high contrast. Not that expensive; I had them run with a normal job for a graphic film. That way I could use the film target in wet mounting, where a razor-cut edge or cut vinyl tape would create its own linear fluid lens on the edge, a thing better avoided. Of course I have to do the scan twice, for both directions.
In your reply you probably kick the legs off of that chair I am sitting on.... I will have a look at the applications you mention.
Hi Ernst,
Well, since the system MTF is built by multiplying the component MTFs, it makes sense to improve the worst contributor first, as it will boost the total MTF most. I'm not so sure that diffraction is a big issue; after all, several scanners use linear array CCDs, which is probably easier to tackle with mostly cylindrical lenses, and to reduce heat they are operated pretty wide open. One thing is sure though, defocus will kill your MTF very fast, so for a reflective scanner the mismatch between the focus plane and the surface of the glass platen will cause an issue, which could be addressed by using deconvolution sharpening.
So I wouldn't mind making a PSF based on a slanted edge scan, presuming we can find an application that takes it as input for deconvolution. What would work anyway is to tweak the deconvolution settings to restore as much of a slanted edge scan (excluding the camera 'system' MTF) as possible, and compare that setting to the full deconvolution of an average color film/print scan (including the camera 'system' MTF).
Yes, for a scanner that allows its exposure to be adjusted, that might help to avoid highlight clipping. I'm a bit concerned about shadow clipping though, because graphic films can have a reasonably high D-max, which might throw off the slanted edge evaluation routines in Imatest.
No harm intended, but sometimes we have to settle for a sub-optimal solution. It might work well enough when we're working at the limit of human visual acuity. For magnified output, we of course try to push unavoidable compromises as late in the workflow as possible.
Cheers,
Bart
Adobe has probably used some of this discussion in a new prototype.
Shown at Adobe Max
http://www.pcworld.com/article/241637/adobe_shows_off_prototype_blurfixing_feature.html
Paul
They probably purchased a Russian mathematician
"Back In Focus" is absolutely not for beginners. It took me some days to learn how to use it – and here's room for improvement. But once you got how to use it – you'll get better pictures.
http://www.metakine.com/products/backinfocus/
It's definitely devoted to recovering blurred images.
Back In Focus (http://www.metakine.com/products/backinfocus/) Currently implemented algorithms: Unsharp masking (fast and full), Wiener finite and infinite impulse response, Richardson-Lucy (with a thresholding variant), Linear algebra deconvolution.
MAC only.
Hmmm! Oh, well. Now, if they would only upgrade Focus Magic to 16-bit or more. With all of its faults I had found it the best for "capture sharpening."
I think you are fixating on the particular implementation (that Bart used) rather than considering the method in general. Typically most of the improvement to be had with RL deconvolution comes in the first few tens of iterations, and the method can be quite fast (as it is in RawTherapee, FocusMagic, and RawDeveloper, for instance). A good implementation of RL will converge much faster than 1K iterations. It's hard to say what is sourcing the ringing tails in Bart's example; it could be the truncation of the PSF, it could be something else. I would imagine that the dev team at Adobe has spent much more time tweaking their deconvolution algorithm than the one day that Bart spent working up his example.
But I would ask again, why do you want to throw dirt on deconvolution methods if you are lavishing praise on ACR 6.1?
This thread was linked in a current thread so I am bringing up my 2 bits years later.
I have been using deconvolution for years. Sometimes in an image I can find something that lets me use the custom PSF function in ImagesPlus. When this happens, a single cycle of deconvolution dramatically improves the image blur. Further cycles using that PSF actually don't work better than a regular function like a Gaussian, for the obvious reason that that PSF no longer matches the state of the image. To really use a good custom PSF you have to modify it each cycle as the image improves.
My rule of thumb is that if I can't do it with 50 cycles, I am using the wrong function. Usually I just use 10 cycles. Sometimes less; I have images where I stop after 3 cycles of a 3x3 Gaussian, the lightest the program runs.
My first post. I was sent this link by someone else, and as I was referenced in the post that started this thread, I thought I would add to it.
My web page on image deconvolution referred to in the first post is: http://www.clarkvision.com/articles/image-restoration1/
and has been updated recently.
I have added a second page with more results using an image where I added known blur and then used a guess PSF to recover the image. This is part 2 from the above page:
http://www.clarkvision.com/articles/image-restoration2/
Regarding some statements made in this thread about how one can't go beyond 0% MTF, that is true if one images bar charts. But the real world is not bar charts. MTF is a one-dimensional description of an imaging system. It only applies to parallel spaced lines, and only in the dimension perpendicular to those lines. MTF limits do not apply to other 2-D objects. For example, stars are much smaller than the 0% MTF limit, yet we see them. Two stars closer together than the 0% MTF limit can still be seen as an elongated diffraction disk. It is that asymmetry and a known PSF of the diffraction disk that can be used to fully resolve the two stars. Extend this to all irregular objects in a scene, whether it be splotchy detail on a bird's beak, feather detail, or stars in the sky: deconvolution methods can recover a wealth of detail, some of it beyond 0% MTF.
I have been using Richardson-Lucy image deconvolution on my images for many years now, both astro images and everyday scenes. It works well and I can consistently pull out detail that I have been unable to achieve with smart sharpen or any other method. Smart sharpen is so fast that it can't be doing more than an iteration (or a couple if done in integers). I would love to see a demonstration by those in this thread who say smart sharpen can do as well as RL deconvolution. On my web page, part 2 above, I have a link to the 16-bit image (it is just above the conclusions). You are welcome to download that image and show something better than I can produce in figure 4 (right side) on that page. Post your results here. I would certainly love to see smart sharpen do as well, as it would speed up my work flow.
Thanks for the interesting read. And a special hi to Bart. I haven't seen you in a forum in years.
Roger Clark
Hello. I have also read your site with great interest.
MTF0 would be the spatial frequency at which the modulation is zero, right?
Hi,
I am interested in trying ImageJ for deconvolution. I have tried some plugins but I am not entirely happy.
Any suggestion for a good deconvolution plugin?
We are very fortunate to have Roger joining the discussion along with Bart.
I have been working on developing PSFs for my Zeiss 135 mm f/2 lens on the Nikon D800e using Bart's slanted edge tool (http://bvdwolf.home.xs4all.nl/main/foto/psf/SlantedEdge.html). I photographed Bart's test image at 3 meters, determining optimum focus via focus bracketing with a rail. Once optimal focus was determined, I took a series of shots at various apertures and determined the resolution with Bart's method (http://www.openphotographyforums.com/forums/showthread.php?t=13217) with the sinusoidal Siemens star. Results are shown both for ACR rendering and rendering with DCRaw in Imatest.
The results are shown in the table below.
The f/16 and f/22 shots are severely degraded by diffraction. I used Bart's PSF generator to derive a 5x5 deconvolution PSF for f/16. Results are shown after 20 iterations of adaptive RL in ImagesPlus.
Results were worse using a 7x7 PSF. I would appreciate pointers from Roger and Bart on what factors determine the best size PSF to employ and what else could be done to improve the result. Presumably a 7x7 would be better if the PSF were optimal, but a suboptimal 7x7 PSF could produce inferior results by extending the deconvolution kernel too far outward.
What happens with PS Smart Sharpen followed by IP, or repeated runs of Smart Sharpen? Sorry, I don't have PS installed on my system so I can't answer it myself.
Both Bart and Roger have mentioned hundreds of cycles, so I think I am not getting optimal results. I shut it down (with cancel) when I start seeing artifacts. Is "hundreds" based on a 2x or 3x starting image size? The sequence is Lanczos 3x, capture sharpen 7x7, downsample, creative sharpen?
When is Van Cittert or Adaptive Contrast better? There are so many sharpening tools in the program that can be used sequentially that it seems very hard to find a best sequence. Any guidance on this would be greatly appreciated.
I do not think that it is the size of the kernel that's limiting, it may be some aliasing that is playing tricks. I don't know how well the actual edge profile and the Gaussian model fitted, but that is often a good prediction of the shape of the PSF. So it may be a good PSF shape, but the source data may also still be causing some issues (noise, aliasing, demosaicing) that get magnified by restoration. I assume there is no Raw noise reduction in the file, as that might also break the statistical nature of photon shot noise.
You could try whether RawTherapee's Amaze algorithm makes a difference, to eliminate one possible (demosaicing) cause. The diffraction-limited f/16 shot, which seems to be at the edge of total low-pass filtering with zero aliasing possibility, suggests that aliasing is not causing the unwanted effects, but maybe too many iterations or too little 'noise' or artifact suppression is. You can also reduce the number of iterations, although that will reduce the overall restoration effectiveness as well. What can also help is using a slightly smaller radius than would be optimal, since that under-corrects the restoration, which may reduce the accumulation of errors per iteration. Another thing that may help a little is only restoring the L channel from an LRGB image set, although I do not expect it to make much of a difference on a mostly grayscale image.
Hi Erik,
I'm not sure why you want to use ImageJ for deconvolution, because it's not the easiest way of deconvolving regular color images (which have a variety of imperfections that need to be addressed/circumvented). Mathematically deconvolution is a simple principle, but in the practical implementation there are lots of things that can (and do) go wrong. Also remember that most of these plugins only work on grayscale images, and could work better on linear gamma input.
As a general deconvolution you can use the "Process>Filters>Convolve" command when you feed it a deconvolution kernel. It does work on RGB images, but it only does a single pass deconvolution without noise regularization.
The most useful and extensive ImageJ deconvolution plugin that I have come across so far is DeconvolutionLab (http://bigwww.epfl.ch/algorithms/deconvolutionlab/).
You'll need a separate PSF file, which can be made with the help of my PSF generator tool (http://bvdwolf.home.xs4all.nl/main/foto/psf/PSF_generator.html); its 'space separated' text output can be copied and pasted as a plain text file and imported into ImageJ via "File>Import>Text image".
Much easier to use for photographers is a regular Photoshop plugin such as FocusMagic (http://www.focusmagic.com/download.htm), or Topaz Labs Infocus (http://www.topazlabs.com/infocus/). The latter can be called from Lightroom, even without Photoshop if one also has the photoFXlab plugin from Topaz Labs. Piccure (http://intelligentimagingsolutions.com/index.php/en/) is a relatively new PS plugin that does a decent job, but not really better than the cheaper alternatives mentioned before.
Other possibilities are several dedicated astrophotography applications such as ImagesPlus (http://www.mlunsold.com/) (not colormanaged) or PixInsight (http://pixinsight.com/) (colormanaged). PixInsight is kind of amazing (colormanaged, floating point, linear gamma processing of Luminance in an RGB image), and offers lots of possibilities for deconvolution and artifact suppression and all sorts of other astrophotography imaging tasks, but it is not a cheap solution if you only want to use it for deconvolution. It's more a work environment with the possibility to create one's own scripts and modules with Java or Javascript programming, with several common astrophotography tasks pre-programmed for seamless integration.
Cheers,
Bart
Hi Bart,
It has been suggested on a discussion regarding macro photography that smallish apertures could be used and the image restored using deconvolution. I may be a bit of a skeptic, but I felt I would be ignorant without trying. I have both Focus Magic and Topaz Infocus but neither let me choose PSF. I have ImageJ already so I felt I could explore deconvolution a bit more, but the plugins I have tested were not very easy to use and did not really answer my questions about usability.
My take is really to use medium apertures and stacking if needed.
Bart,
Thanks for the feedback. We are getting some interesting discussion in this rejuvenated thread:)
As you suggested, I did use RawTherapee to render the f/16 image.
(http://bjanes.smugmug.com/Photography/Aperture-Series/i-CPfK2m5/0/O/_DSC3088b_RT.png)
The Gaussian radius was smaller than with the ACR rendering, 0.9922, and I used your tool to calculate a deconvolution PSF for 5x5 and 7x7.
Here is the image restoration in ImagesPlus with 20 iterations of RL using 7x7. There is quite a bit of artifact.
(http://bjanes.smugmug.com/Photography/Aperture-Series/i-vpKhfF6/0/O/_DSC3088_RL_20_7by7.png)
Using RL and a 5x5 kernel with 20 iterations, there is again less artifact:
(http://bjanes.smugmug.com/Photography/Aperture-Series/i-F5vLMxV/0/O/_DSC3088_RL_20_5by5.png)
Van Cittert with the 5x5 kernel and 20 iterations produces the best results.
Bart's PSF generator uses only one parameter, the Gaussian radius, to derive the PSF for a given aperture and I doubt that this is sufficient to fully describe the nature of the blurring.
Roger does not go into the details of how he derives his PSFs and more information would be helpful.
ImageMagick has an option to deconvolve images by doing a division on an FFT image.
How would one derive or construct the appropriate deconvolution image to do that?
Great, RT Amaze is always interesting to have in a comparison, because it is very good at resolving fine detail with few artifacts (and optional false color suppression).
I see what you mean, and looking at the artifacts there may be something that can be done. No guarantee, but I suspect that deconvolving with a linear gamma can help quite a bit. In ImagesPlus one can convert an RGB image into R+G+B+L layers, deconvolve the L layer, and recombine the channels into an RGB image again. However, before and after deconvolution, one can switch the L layer to linear gamma and back (gamma 0.455 and gamma 2.20 will be close enough).
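In code form, the gamma round trip described above amounts to something like the following Python sketch, using a plain power-law gamma 2.2 approximation rather than the exact working-space curve; 'deconvolve' stands for whichever routine is actually used (e.g. an RL implementation like the sketch earlier in this thread), and is not a specific program's API.

import numpy as np

def deconvolve_luminance_linear(L, psf, deconvolve):
    # L is a luminance channel normalized to 0..1, in a ~gamma 2.2 encoding
    linear = np.clip(L, 0.0, 1.0) ** 2.2                 # to (approximately) linear light
    restored = deconvolve(linear, psf)                   # deconvolve in linear space
    return np.clip(restored, 0.0, 1.0) ** (1.0 / 2.2)    # back to the gamma 2.2 encoding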
It can also help to temporarily up-sample the image before deconvolution. The drawback of that method is the increased time required for the deconvolution calculations, and it is possible that the re-sampling introduces artifacts. The benefit though is that one can visually judge the intermediate result (which is sort of sub-sampled) until deconvolution artifacts start to appear, and then downsample to the original size to make the artifacts visually less important.
In this case it does, but with more noise it may not be as beneficial. Also in this case, deconvolving the linear gamma luminance may work better.
Then there is another thing, and that will change the shape of the Gaussian PSF a bit. Creating the PSF kernel with my PSF generator defaults to a sensel arrangement with 100% fill factor (assuming gapless microlenses). By reducing that percentage a bit the Gaussian will become a bit more spiky, gradually more like a point sample and a pure Gaussian.
I realize it's a bit of work, but that's also why we need better integration of deconvolution in our Raw converter tools. Until then, we can learn a lot about what can be achieved and how important it is for image quality.
Finally, you can also try the RL deconvolution in RawTherapee. I don't know if that is applied with linear gamma, but it should become clear when you compare images. As soon as barely resolved detail becomes darker than expected, it's usually gamma related.
Cheers,
Bart
Bart,
To assess the effect of linear processing, I rendered my images into a custom 16-bit ProPhotoRGB space with a gamma of 1.0 prior to performing the deconvolution in ImagesPlus, and converted back to sRGB for display on the web. I noted little difference between the linear and gamma ~2.2 files.
Performing 30 iterations of RL with a radius of 0.89 as determined by your tool works well with Rawtherapee.
10 iterations of RL in ImagesPlus with a 5x5 kernel derived with your tools and a radius of 0.89 produces artifacts, but 3 iterations produces more reasonable results. I used the deconvolution kernel with a fill factor of 100%. Deconvolving the luminance channel in IP made little difference. Where should I go from here?
I presume that the deconvolution kernel would be most appropriate, but what is the purpose of the other PSFs?
Just to make sure I understand what you've done. When you say you used a 5x5 kernel, I assume you copied the values from the PSF Kernel generator into the ImagesPlus "Custom Point Spread Function" dialog box, and clicked the "Set Filter" button, then used the "Adaptive Richardson-Lucy" control with "Custom" selected, and "Reduce Artifacts" checked.
That still leaves the fine-tuning of the "Noise Threshold" slider, or the Relaxation slider in the Van Cittert dialog. Too low a setting will not reduce the noise between iterations in the featureless smooth regions of the image, and too high a setting will start to reduce fine detail in addition to noise.
Not sure what other PSFs you are referring to. You mean the ones in the Adaptive RL dialog?
Now, if this still produces artifacts with more than a few iterations, I suspect that there are aliasing artifacts that rear their ugly head. Aliases are larger than the actual representations of fine detail. The larger detail is getting some definition added by the deconvolution where it shouldn't. Maybe, just as an attempt, some over-correction of the noise adaptation might help a bit, but it is not ideal. Also, multiple runs with a deliberately too small Gaussian blur radius PSF may build up to an optimum more slowly.
As a final resort, but it won't do much if indeed aliasing is the issue, you can try to first up-sample the image, say to 300% which should keep the file size below the 2GB TIFF boundary that could cause issues with some TIFF libraries. With the up-sampled data, hopefully without adding too many artifacts of its own, the resolution has not increased, but the data has become sub-sampled.
That data will be easier (but much slower) to deconvolve smoothly (multiply the PSF blur radius by the same factor, or more accurately determine it by upsampling the slanted edge first and then measuring the blur radius), and stop the iterations when visible artifacts begin to develop. The problem becomes how to create a custom kernel that fits the 9x9 maximum dimensions of ImagesPlus. RawTherapee can go to 2.5, which is close. Then do a simple down-sample to the original image size, and compensate for the down-sampling blur by adding some small (e.g. 0.6) radius deconvolution sharpening.
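A sketch of that detour (up-sample, deconvolve with a correspondingly enlarged PSF, down-sample) in Python, with scipy.ndimage.zoom standing in for whatever resampler is actually used; 'deconvolve' and 'gaussian_psf' are placeholders for routines like the sketches earlier in this thread, not functions of any particular converter.

import numpy as np
from scipy.ndimage import zoom

def deconvolve_upsampled(image, sigma, deconvolve, gaussian_psf, factor=3):
    big = zoom(image, factor, order=3)                  # e.g. a 300% up-sample
    size = int(10 * sigma * factor) | 1                 # odd kernel size, roughly 10x the scaled sigma
    restored_big = deconvolve(big, gaussian_psf(sigma * factor, size))
    small = zoom(restored_big, 1.0 / factor, order=3)   # back down; may be off by a pixel, crop/pad as needed
    return small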
Other than fine-tuning the shape of the PSF by selecting a fill-factor smaller than 100% upon creation, there is not much left to do, other than resort to super resolution or stitching longer focal lengths.
If you'd like, I could try a deconvolution with PixInsight because that allows more tweaking of the parameters, and see it that makes a difference. But I'd like to have a 16-bit PNG crop from the RT Amaze conversion to work on.
Bart, Thanks again for your detailed replies. Yes, I copied the values from your web based tool and pasted them into the IP Custom PSF dialog. I used the Apply check box in the custom filter dialog instead of the Set, but the effect seems to be the same when I used the Set function. The Apply box is not covered in the IP docs that I have, and may have been added to a later version. I am using IP ver 5.0
(http://bjanes.smugmug.com/Photography/RT-Deconvolution/i-XL5qtMz/0/O/ApplyPSF.png)
I left the noise threshold at the default and did not adjust the minimum and maximum apply values.
The PSFs to which I was referring are those derived by your PSF generator.
If you (or others) wish to work with my files, here are links.
Hi Bill,
Great, that explains a few things, and reveals a procedural error. Good that I asked, or we would not have found it.
The filter kernel values that you used, are for the direct application of a single deconvolution filter operation (the addition of a high-pass filter to the original image). To store those values for use in other ImagesPlus dialogs, one uses the "Set" button, and can leave the dialog box open for further adjustments. Hitting the "Apply" button will apply the single pass deconvolution to the active (and/or locked) image window(s).
However, the adaptive Richardson-Lucy dialog expects a regular Point Spread function (all kernel values are positive) to be defined in the Custom filter box, just like a regular sample of a blurred star. And here a larger support kernel should produce a more accurate restoration, a 9x9 kernel would be almost optimal (as the PSF tool suggests, approx. 10x Sigma).
The default noise assumption often works well enough, and the minimum/maximum limits are more useful for star images.
I see. The different PSFs are just pre-calculated kernel values for various purposes. A regular PSF is fine, although with large kernels there will be a lot of leading zero decimal digits. When the input boxes, like those of ImagesPlus, only allow a limited number of digits (15 or so) to be entered, it can help to pre-multiply all kernel values. ImagesPlus will still normalize the kernel so the weights sum to 1.0, to keep overall image brightness the same.
I used to use the second PSF version (PSF[0,0] normalized to 1.0) with a multiplier of 65535. That gives a simple indication of whether the kernel values in the outer positions have a significant enough effect (say >1.0) on the total sum in 16-bit math. When a kernel element contributes little, one could probably also use a smaller kernel size.
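As an illustration of the kernel bookkeeping described above, here is a small sketch that builds a plain Gaussian PSF and shows the two normalizations mentioned: weights summing to 1.0, and the centre element set to 1.0 and pre-multiplied by 65535. The sigma and the 9x9 support are example values only (support roughly 10x sigma).

import numpy as np

def gaussian_kernel(sigma, size):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

sigma, size = 0.9, 9                     # example values
k = gaussian_kernel(sigma, size)

psf_unit_sum = k / k.sum()               # regular PSF, weights summing to 1.0
psf_peak_one = k / k.max()               # centre element normalized to 1.0
psf_16bit = psf_peak_one * 65535         # pre-multiplied for limited input precision

# If the whole outer ring contributes less than ~1.0 in 16-bit math,
# a smaller support would probably do just as well.
print("largest outer-ring element:", psf_16bit[0, :].max())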
I'll have a look, thanks. BTW, the NEF is of a different file (3086) than the TIFF (3088).
Cheers,
Bart
Bob, RT displays f/16 on that one.
Bart,
Sorry, but I posted the link for the f/11 raw file. Here is the link for f/8:
https://creative.adobe.com/share/60c87f91-96a0-4906-b108-568974011f22
No problem, I've downloaded it and the EXIF says f/16 (as intended for the exercise at hand). I've made a conversion in RawTherapee, with raw-level Chromatic Aberration correction, which helped a bit. Further processing was mostly left at the defaults, except for a White Balance on the white patch of the star chart grey scale and a small increase in exposure to get the mid-grey level to approx. 50% and white to 90%.
Now, there is good news and bad news.
The good news is that a good deconvolution is possible. The bad news is that it is not simple to do with the conventional approach of determining the amount of blur based on a slanted edge.
I was already a bit surprised that it was possible to produce significant deconvolution artifacts with the 'normal' radius settings one might expect from other tests on f/16 images. I was able to get a nice Adaptive RL deconvolution result in ImagesPlus by using the default 3x3 Gaussian PSF twice (after a first run, click the blue eye icon on the toolbar, and do another run). Doing multiple deconvolution runs with a small radius amounts to much the same as a single run with a larger radius, yet a run with the default 5x5 Gaussian was already problematic. Hmm, what to think of that?
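Roughly speaking, the reason multiple small-radius runs behave like a single larger-radius run is that Gaussian blurs compose with their variances adding, so the combined blur being removed corresponds to sigma_total = sqrt(sigma1^2 + sigma2^2). A quick numerical check of that composition (the sigma values are arbitrary examples):

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = rng.random((256, 256))

twice_small = gaussian_filter(gaussian_filter(img, 0.7), 0.7)
once_large = gaussian_filter(img, np.sqrt(2) * 0.7)   # sigma ~ 0.99

# The maximum difference is tiny, apart from boundary effects.
print(np.abs(twice_small - once_large).max())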
I then checked the edge profile of the slanted edge, and found that there is some glare (possibly from the lighting angle or the print surface) that makes it hard to produce a clean profile model with my Slanted Edge tool. The tool suggests a much larger radius, which had already tested as problematic. But the trained human eye is sometimes harder to fool than a simple curve-fitting algorithm, so I saw that I had to try something with smaller radii, although I didn't know how small.
I then attempted an empirical approach (when everything else fails, try and try again) to finding a better PSF size/shape. I used the power of PixInsight to help me with that, because it also produces some statistical convergence data to assist in the effort, and it allows the math to be done in 64-bit floating-point precision (to eliminate the possibility of rounding errors influencing the tests). This all suggested that a Gaussian radius of about 0.67 should produce a good compromise. That is a radius normally only needed by the best possible lenses at the optimal aperture, certainly not at f/16. So this remains puzzling, and hard to explain.
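A rough sketch of what such an empirical search could look like, with scikit-image's Richardson-Lucy standing in for PixInsight. The scoring function here (residual between the re-blurred estimate and the observed image, plus a small total-variation penalty against ringing) and the sigma range are my own assumptions, not the convergence statistics PixInsight actually reports.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma, size=9):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def score(observed, estimate, sigma, tv_weight=0.05):
    # Re-blur residual plus a small total-variation term that penalizes ringing.
    residual = np.mean((gaussian_filter(estimate, sigma) - observed) ** 2)
    tv = np.mean(np.abs(np.diff(estimate, axis=0))) + \
         np.mean(np.abs(np.diff(estimate, axis=1)))
    return residual + tv_weight * tv

def best_sigma(observed, sigmas=np.arange(0.4, 1.3, 0.05), iterations=30):
    scored = []
    for s in sigmas:
        est = richardson_lucy(observed, gaussian_psf(s), iterations, clip=False)
        scored.append((score(observed, est, s), s))
    return min(scored)[1]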
To test the influence of the deconvolution algorithm implementation, I then produced a PSF with a radius of 0.67, as suggested by PixInsight, with a 65535 multiplier for use in ImagesPlus (see attachment). A 7x7 kernel should be large enough. Note that in my version of ImagesPlus, there is a Custom Restoration PSF dialog for sharpening (besides a Custom filter dialog).
This allows a reasonably good deconvolution to be produced, without too many artifacts, improved a bit further by linearizing the data before deconvolution. However, I'm not totally satisfied yet (and the lack of an obvious reason for needing such a small PSF is puzzling), so some more investigation is in order.
Cheers,
Bart
Conceptually, I still don't understand the reason for hundreds of iterations. To me, that seems to imply the wrong PSF is being used.
Most of my shots are ISO 100 with a high-quality prime lens. Sometimes ISO 400, rarely 800 or more. This high S/N is the reason I use Van Cittert first. I can get a good base improvement with the default 10 cycles in the dialog box. I go 5x5, then 3x3, then switch to adaptive R-L, maybe 10 at 5x5 then 30 at 3x3. I have never felt the need for a larger radius, given a good lens to begin with.
Using Bart's upsample-first method gives a better result without question. If I needed it I would probably start with 7x7.
Please explain the benefit of a more gradual curve (9x9 with more iterations) versus the default Gaussian shape (5x5 with low iterations). The feeling I get is that my camera puts the data into the right pixel, plus or minus a radius of about 1. Therefore the tails of a 5x5 will create ring artifacts with too many iterations. This is what I see happening. Maybe I misinterpret the output.
Thanks,
What do you think of using the program's ability to split RGB to do a larger radius on Red, then a smaller one on G, then a smaller one on B, and then recombine?
Hmmm... Seems like more work. I would wonder about color noise in the final image. Probably better to just do the sharpening on a luminance channel.
Roger
Actually, I have found the detail-preserving NR filters highly effective at removing color noise. You are probably right about using the luminance channel. In my handful of prior tests I was not able to tell the difference. I expected more smear on red, but I was unable to improve on just the L channel. Your reply tells me it was more a poor idea than poor technique.
Theoretically it makes total sense, since Airy discs differ by a factor of about 2 between red and blue light.
Bart, you say "I do not fully agree with your downsampling conclusions" and I would agree with you if one just down samples bar charts.
But as you say "sharpening before downsampling may happen to work for certain image content (irregular structures)" is the key. Images of the real world are dominated by irregular structures, and I have yet to see in any of my images artifacts like seen in your examples of bar charts. If I ever run across a pathologic case where artifacts are seen like those in your downsampling examples, then I'll change my methodology. So far I have never seen such a case in my images.
So far no one has met my challenge of downsampling first, then sharpening, and producing a better, or even equal, image like those I show in Figures 7e and 7f at:
http://www.clarkvision.com/articles/image-restoration2/
Aren't your posts about upsampling, deconvolution sharpening, then downsampling in conflict with saying no sharpening until you have downsized?
Deconvolution is an iterative process. Think of it this way: in a pixel, there is signal from the surrounding pixels contaminating the signal in that pixel. But those adjacent pixels in turn have signal contamination from the pixels surrounding them, and so on. To put the light back into each pixel, one would need to know the correct signal in the adjacent pixels, but we don't know that, because those pixels too are contaminated. The result is that there is no direct solution, only an iterative one. A few iterations gets only a partial solution.
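A bare-bones sketch of that iterative idea is the classic Richardson-Lucy update (without the adaptive noise handling that ImagesPlus adds on top). Each pass pushes intensity back toward the pixels it most plausibly came from, given the PSF.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    # observed: non-negative, floating-point, ideally linear-light data.
    observed = np.asarray(observed, dtype=float)
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)                        # where is the estimate too bright/dark?
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')   # redistribute intensity accordingly
    return estimate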
There are cases where the PSF may be like two different Gaussians with different radii. Then one could either derive the PSF for that image, or do two Gaussian runs. For example, while diffraction is somewhat Gaussian, there is a big "skirt", especially considering multiple wavelengths. Thus a two-step deconvolution, such as a large-radius Gaussian with a few iterations followed by a smaller-radius Gaussian with more iterations, can be effective.
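If one prefers a single run, the same idea can be approximated by building a composite "core plus skirt" PSF and using that for the deconvolution. The sigmas and the 10% skirt weight below are placeholders, not measured values.

import numpy as np

def gaussian(sigma, size):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

size = 15
psf = 0.9 * gaussian(0.7, size) + 0.1 * gaussian(2.5, size)  # narrow core plus wide skirt
psf /= psf.sum()                                             # keep overall brightness unchanged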
Well, I would not say a poor idea. I have not tried it.
There is also a pretty close correlation between the Red/Green/Blue channels (sigmas of 0.757/0.762/0.758), which shows how Bayer CFA demosaicing of mostly luminance data produces virtually identical resolution in all channels. Since luminance is the dominant factor for the Human Visual System's contrast sensitivity, it also shows that we can use a single sharpening value for the initial Capture sharpening of all channels.
Trust me, I only use (bar) charts for objective, worst-case scenario testing. If a procedure passes that test, it will pass real-life challenges with flying colors.
Folks,
Most of the discussion in this thread is way over my head, but I would be interested to learn if deconvolution sharpening is strictly appropriate for use with the D800E. My understanding is that deconvolution sharpening is used to reverse, as best as possible, the blur effect of an AA filter. However, the D800E does not have an AA filter (I appreciate that it is a bit of a hybrid filter sandwich and not simply the absence of an AA filter). Therefore, in applying deconvolution sharpening to a D800E image, are we not attempting to de-blur what never was blurred?
Cheers,
These methods, in software or hardware, are much better than the old USM illusion of sharpness.
I have, and it appears that the demosaicing of Bayer CFAs results in almost identical resolutions (http://www.luminous-landscape.com/forum/index.php?topic=68089.msg539252#msg539252) for R/G/B channels, after Raw conversion.
That is probably because, despite the less dense sampling of the R and B channels and the differences in diffraction pattern diameter, the luminance component of the signal in them is still used to create luminance resolution. And since the Red and most certainly the Blue channel are relatively under-weighted in their luminance contribution, I would not be surprised if some is "borrowed" from Green. This tends (in general) to negate wavelength-dependent diffraction blur. Of course, differences in lens design and demosaicing may produce different results.
Cheers,
Bart
Hi Bart, is it your opinion that for a completely still subject shot in 16x multishot, one should use the "Smart Sharpen" filter before printing, or something else, and why?
Yes, Smart sharpen is a good start, but using a Photoshop plugin that does better deconvolution than Smart sharpen might squeeze a bit more real resolution and less noise out of the image.
Even though a 16x multi-shot sensor solves a few issues (and could create some others), deconvolution sharpening is still beneficial. The enhanced color resolution, from sampling each sensel position with each color of the Bayer CFA in sequence, helps, and the piezo-actuator-driven half-sensel-pitch offsets double the sampling density. However, the lens still has its residual aberrations and the inevitable diffraction blur from narrowing the aperture, and the sensel aperture also plays a role by averaging the projected image over the original sensel aperture dimensions (the sensel aperture is roughly twice the sensel pitch, so 4x the sensel area). This relatively large sensel aperture, like the increase in sampling density, will help in reducing aliasing, but some blur will still remain. The lowered aliasing and remaining blur call for deconvolution sharpening to be applied.
So you can still improve the results from such a sensor design. As mentioned, Focus Magic (http://www.focusmagic.com/download.htm) does a great job, but a plugin such as Topaz Labs Detail (http://www.topazlabs.com/detail/) is also worth a mention. Not only does it offer a simple to use 'Deblur' option (= deconvolution), it also allows tweaking several sizes and contrast levels of detail, which is great for 'output sharpening' (where different output sizes might need different levels of micro-contrast and sharpening). Their InFocus (http://www.topazlabs.com/infocus/) plugin offers more control over Capture sharpening alone, and also works very well with my suggested approach of up-sampling, deconvolution sharpening, and down-sampling back to the original size.
Cheers,
Bart
I thought so… thanks for the detailed explanation and the suggestions. One more thing: if the subject is huge in size (say a painting of 1.5 square meters) and it is required to be printed at 1:1 size using 360 ppi as input to the printer (say an Epson 9900), then should the process be 1. up-sample to 360 ppi, 2. sharpen, 3. print, or should it be 1. up-sample to 720 ppi, 2. sharpen, 3. down-sample back to 360 ppi, and then 4. print? … Is there a benefit if one doesn't down-sample when the image size comes to less than 360 ppi? Thanks.
Hi, great answer, Bart, very detailed and well explained… Thanks. :-*
First of all, it is not absolutely necessary to upsample/sharpen/downsample; it is just a method that allows very accurate sharpening. With the proper technique, precautions, and experience, it is possible to sharpen directly at the final output size (that also means the use of blend-if sharpening layers to avoid clipping).
Second, when the original file already has a lot of pixels, upsampling it by a factor of e.g. 3x just for sharpening may cause issues due to file size, and the deconvolution sharpening will take a lot of processing time and system memory to complete.
Third, depending on the printing pipeline, and given the physical size of the output (and thus the normal viewing distance), I think that upsampling to 360 PPI will probably be adequate, and printing will be faster than at 720 PPI. Only if very close inspection needs to be possible without compromises, and the input file has enough detail to require little interpolation to reach the output dimensions, can creating a 720 PPI output file make a difference (because the printer's own interpolation is not very good, and doesn't allow sharpening at the final output size). The 'Finest Detail' option must be activated in the Epson printer driver to actually print at 720 PPI.
When an output file has less than 360 PPI at the final output size, one can consider upsampling with an application like PhotoZoom Pro (http://www.benvista.com/photozoompro) (only for upsampling), because that actually adds edge detail at a higher resolution, although it depends on the original image content. Other upsampling methods do not create additional resolution, but will allow sharpening to be pushed a bit further at 720 PPI (because small artifacts will be rendered too small to notice). So once you have more than 360 PPI of useful data, I would not downsample to 360 PPI, but upsample to 720 PPI, sharpen at that size, and print with 'Finest Detail' activated.
Since output sharpening also needs to pre-compensate for contrast losses due to the print media (ink diffusion, paper structure, limited media contrast, etc.), I'd seriously consider using Topaz Detail, because it not only offers deconvolution (Deblur) but also micro-contrast controls. It also allows boosting the low-contrast micro-detail in the shadows more than in the highlights, which is especially useful for non-glossy output media or dim viewing conditions. But that all goes beyond the main subject of this thread.
Cheers,
Bart
And for downsampling, Adobe's Bicubic Sharper, or PhotoZoom's S-Spline XL or MAX with downsize settings?
Hello, and to help this fantastic thread live again. ;D
I have been using FocusFixer from Fixerlabs for 4 years now, and I am still very satisfied. Since 2012 it also works as a 64-bit plug-in for Photoshop. I compared many of the available solutions in 2010, including FocusMagic. I am curious whether anybody here has used this tool?
I've become a bit fond of shooting tree trunks. As the trunks are curved and macro DoF is extremely shallow, it's a typical application for focus stacking. However, focus stacking is really not my thing, especially in the field where shutter speeds are often, say, 10 seconds per image, and the camera is often in cumbersome shooting positions.
So what I do is shoot at f/22 and suffer the diffraction. The subject does not need to be super-detailed, so it's no disaster. Still, I'd like to improve my sharpening techniques for these types of images.
Maybe the types of software you talk about could be of help?
I've attached a thumb of one such image, and here is a crop from the "raw" (a neutral 16-bit TIFF developed in RawTherapee, no sharpening, no contrast increase, no nothing):
Thanks Bart! Very nice to see what's possible. A clear improvement over the original, though one cannot expect crisp results. I don't think it's a big problem for a print, though. The original crop is quite flat in terms of contrast; just increasing the contrast will give a sense of a sharper image.
The Aptus 75 is a 33-megapixel Dalsa with 7.2 µm pixels. It's hard to know the effective f-stop; at this close range, I'd guess something like 1/8 of life size, and the bellows extension is so large that the effective f-stop would be a little higher still than f/22. But I guess your assumption of 6 µm + f/22 is pretty close to 7.2 µm + the effective macro f-stop.
I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)
Many deconvolution methods seem to work in the direction of increasing luminance: you see the histogram shifting to the right, and it is especially the bright peaks that move. What I often do to eliminate these artifacts is average the result back with an earlier version. This often gives a more natural look to the final image.
Hi Arthur,
Strictly speaking, deconvolution should have no significant bias effect on average brightness, but depending on the algorithm's implementation, or how it is run, it might. The crucial thing is that deconvolution should really be done on image data in linear gamma space, or with more complicated calculations that do the gamma conversion and back on the fly.
What's then left is the possibility that what previously was the lowest/highest pixel value now has an even lower/higher value, which may lead to clipping. After all, reduced contrast flattens the local amplitudes/contrast, and restoration will restore those amplitudes.
A specialized application like PixInsight has an additional provision (Dynamic Range Extension), on top of gamma linearization, which allows clipped intensities to be avoided. This is also easier because that program can calculate with 64-bit floating-point precision, which avoids most accumulation and rounding issues.
One can also prevent clipping by using Blend-if layers in Photoshop.
Cheers,
Bart
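A minimal sketch of the two precautions Bart describes above (work on linear-light data, and leave some headroom so restored highlights do not clip), assuming a single-channel file and a simple 2.2 gamma. A real pipeline would use the working space's actual transfer curve, and the 5% headroom is only an example value.

import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma, size=7):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

img = img_as_float(io.imread('capture_luma.tif'))   # hypothetical gamma 2.2, single channel
linear = img ** 2.2                                 # to (approximately) linear light
linear *= 0.95                                      # crude headroom / "dynamic range extension"

restored = richardson_lucy(linear, gaussian_psf(0.7), 30, clip=False)

out = np.clip(restored, 0.0, 1.0) ** (1.0 / 2.2)    # back to gamma 2.2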
I checked out PixInsight after seeing you use it. Your results seem better than what I can do with ImagesPlus. In IP you can watch the right tip of the histogram stretch to the right with iterations of adaptive R-L.
I would like a linear TIFF export from RawTherapee so I can use IP as you describe.
On the Color tab of RT, at the bottom, there is an output gamma setting. As far as I can tell it does absolutely nothing; whatever settings I pick do not seem to change the image at all. AMaZE is a much better demosaicing algorithm than the one in IP, so I want to continue with RT first.
Does PixInsight work on the raw data? The videos seem to show a complicated workflow.
I spent some time looking over the PixInsight documentation, as well as their forum. It seems you have hit the mother lode.
They use dcraw, so they should have the same debayer options as RawTherapee.
They have very advanced sharpening, better than ImagesPlus.
They have very advanced noise reduction, better than the Topaz DeNoise I just got recently. Probably better than DxO (I can't test that yet).
They do HDR
They do multi-frame mosaics
It seems to be color managed
This is huge.
The formally correct way to do Deconvolution is in Linear gamma space.<huge snip>
If the convolution kernel is known (because it is a simulation), then that should narrow the list of deconvolution kernels quite a lot, should it not?
In the absence of noise, would not a perfect inversion be optimal (possibly limited so as not to amplify by, e.g., +30 dB anywhere)? In the presence of noise, it might become a question of which kernel trades detail enhancement against noise/artifact suppression (Wiener filtering?) in a suitable manner, which might depend on the CFA and the noise reduction... OK, it is hard. My point is that it should be a lot less hard than the real-world case where the kernel has to be guesstimated locally.
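For the simulated case where the kernel is known exactly, the frequency-domain Wiener filter is about the simplest regularized inverse: it behaves like 1/H where the kernel's transfer function H is strong, and backs off where noise would be amplified. The noise-to-signal ratio nsr is the knob that trades detail against noise, and its value below is only an example.

import numpy as np

def wiener_deconvolve(observed, psf, nsr=1e-3):
    # Pad the PSF to the image size and centre it on the origin.
    kernel = np.zeros(observed.shape, dtype=float)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(kernel)
    G = np.fft.fft2(observed)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # ~1/H where H is strong, ~0 where it is weak
    return np.real(np.fft.ifft2(W * G))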
Bart, I've been doing some work with my camera simulator to try to get a handle on the "big sensels vs small sensels" question.
http://blog.kasson.com/?p=6078
I'm simulating a Zeiss Otus at f/5.6, and calculating diffraction at 450 nm, 550 nm, and 650 nm for the three (or four, since it's a Bayer sensor) color planes. The blur algorithm is diffraction plus a double application of a pillbox filter with a radius (in microns) of 0.5 + 8.5/f, where f is the f-stop.
I'm finding that 1.25 um sensel pitch doesn't give a big advantage over 2.5 um with this lens.
I'm wondering if things would be different with some deconvolution sharpening.
Now that you know my lens blur characteristics, can you give me a recipe for a good deconvolution kernel? Tell me what the spacing is in um or nm, and I'll do the conversion to sensels, which will change depending on the pitch. I'd do it myself, but I have to admit I've just followed the broad outlines of this discussion, and would prefer not to delve much deeper at this point.
Warning: As this project progresses, I'll be asking for help on upsampling and downsampling algorithms as well. I hope I won't wear out my welcome.
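As an aside, here is a rough sketch of how the blur model Jim describes (Airy diffraction plus a double application of a pillbox of radius 0.5 + 8.5/f microns) could be sampled on a sensel grid to obtain a deconvolution PSF. The support size, wavelength and pitch below are example inputs, not taken from his simulator.

import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

def airy_psf(pitch_um, size, wavelength_um, f_number):
    ax = (np.arange(size) - (size - 1) / 2.0) * pitch_um
    xx, yy = np.meshgrid(ax, ax)
    r = np.hypot(xx, yy)
    x = np.pi * r / (wavelength_um * f_number)
    x[x == 0] = 1e-12                        # limit of 2*J1(x)/x at 0 is 1
    psf = (2.0 * j1(x) / x) ** 2
    return psf / psf.sum()

def pillbox_psf(pitch_um, size, radius_um):
    ax = (np.arange(size) - (size - 1) / 2.0) * pitch_um
    xx, yy = np.meshgrid(ax, ax)
    disk = (np.hypot(xx, yy) <= radius_um).astype(float)
    return disk / disk.sum()

pitch, size, f = 1.25, 65, 5.6                # um per sample, support size, f-number (examples)
diff_green = airy_psf(pitch, size, 0.550, f)  # 550 nm colour plane
box = pillbox_psf(pitch, size, 0.5 + 8.5 / f)

psf = fftconvolve(fftconvolve(diff_green, box, mode='same'), box, mode='same')
psf /= psf.sum()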
Attached is a shot at 3600mm. (1200mm scope, 2x barlow, Sony A55 1.5 crop factor) After processing it looks a bit like a PS layers job. The details follow.
After importing from the camera I just opened one of many shots; I did not check to see if it is the sharpest. Anyway, I converted in RT, and it looked OK. I imported into ImagesPlus, reversed the gamma, split the channels, and deconvolved the luma channel: 5 @ 7x7, 10 @ 5x5, 30 @ 3x3, all adaptive R-L. I recombined and moved the gamma back to 2.2. Then I denoised; the background, which was completely OOF to begin with, had started to look "sharp" with a grain texture. The problem is that the outline of the bird has the colors of the bird slightly outside its area. The image looks very much like a paste job, which it is not; it is as shot. It feels like I need to deconvolve the colors to keep them in position with the luma data.
I have read in several places to deconvolve luma only. It seems a bit off. What are the issues with deconvolving the color channels?
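For what it's worth, here is a sketch of the luma-only route described in the post above: convert to a luma/chroma space, deconvolve only the luminance plane, and recombine, so chroma noise is not amplified. The PSF sigma and iteration count are placeholders, and YCbCr is used simply because scikit-image provides it; this skips the gamma-linearization discussed earlier, for brevity.

import numpy as np
from skimage import io, img_as_float
from skimage.color import rgb2ycbcr, ycbcr2rgb
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma, size=7):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

rgb = img_as_float(io.imread('bird_conversion.tif'))   # hypothetical RGB input
ycc = rgb2ycbcr(rgb)                                   # Y (luma) plus Cb/Cr chroma

y_sharp = richardson_lucy(ycc[..., 0], gaussian_psf(0.8), 30, clip=False)
ycc[..., 0] = np.clip(y_sharp, 16.0, 235.0)            # keep Y within its nominal range

sharpened = np.clip(ycbcr2rgb(ycc), 0.0, 1.0)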