
Author Topic: Deconvolution sharpening revisited  (Read 266078 times)

madmanchan

  • Sr. Member
  • ****
  • Offline
  • Posts: 2115
    • Web
Deconvolution sharpening revisited
« Reply #40 on: July 24, 2010, 02:45:34 pm »

Quote from: walter.sk
If one were to set the Detail to 100, would this carry through to the Sharpening slider when using the Adjustment Brush in ACR?  If so, that would go a long way toward selective application of the deconvolution method, possibly as good as painting it in from a layer mask.

Yes, Walter. It does mean you can apply this type of sharpening / deblurring selectively, if you wish. There are two basic workflows for doing this in CR 6 and LR 3.

The first way is just to paint in the sharpening where you want it. To do this, you set the Radius and Detail the way you want, but set Amount to 0. Then, with the local adjustment brush, you paint in a positive Sharpness amount in the desired areas. The brush controls and the local Sharpness amount let you control how the sharpening is applied. (Of course you can also use the erase mode in case you overpaint.) This workflow is effective if there are relatively small areas of the image you want to sharpen. I tend to use this for narrow-DOF images (e.g., a macro of a flower) where I only care about very specific elements being sharpened. It also works fine for portraits.

The second way is the opposite, i.e., you apply the capture sharpening in the usual way until most of the image looks good, but then selectively "back off" on it (using local Sharpness with negative values) in some areas. Of course you can also add to it (using local Sharpness with positive values).
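
For those who like to see the idea in code, here is a minimal sketch of what such selective application amounts to: a generic unsharp-mask sharpener blended through a painted mask. This is an illustrative stand-in (numpy/scipy, invented function names), not Adobe's actual algorithm.

[code]
# Selective sharpening through a painted mask -- an illustrative
# stand-in for ACR's local Sharpness brush, NOT Adobe's algorithm.
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(image, radius=0.7, amount=1.0):
    """Simple unsharp-mask-style sharpening (illustrative only)."""
    blurred = gaussian_filter(image, sigma=radius)
    return image + amount * (image - blurred)

def apply_selectively(image, mask, radius=0.7, amount=1.0):
    """Blend sharpened and original pixels through a 0..1 brush mask.

    mask == 0 leaves a pixel untouched (global Amount = 0);
    mask == 1 applies the full local Sharpness amount.
    """
    return image * (1.0 - mask) + sharpen(image, radius, amount) * mask

# Example: "paint" sharpening into the centre of a synthetic image.
img = np.random.rand(256, 256)
mask = np.zeros_like(img)
mask[96:160, 96:160] = 1.0           # the painted region
mask = gaussian_filter(mask, 8.0)    # soft brush edge
result = apply_selectively(img, mask)
[/code]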
Logged
Eric Chan

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Deconvolution sharpening revisited
« Reply #41 on: July 24, 2010, 04:23:44 pm »

Quote from: madmanchan
Yes, it looks like Bill made a typo in the post (the screenshot values say 43, 1, 100, as opposed to 41,1,0). For this type of image I do recommend a value below 1 for the Radius, though 1 is not a bad starting point.
Yes, 41,1,0 is a typo. The figures on the illustration are correct: 41,1,100

Bill
Logged

mhecker*

  • Contributor
  • Jr. Member
  • *
  • Offline
  • Posts: 93
    • http://www.wyofoto.com
Deconvolution sharpening revisited
« Reply #42 on: July 24, 2010, 06:50:10 pm »

I agree totally with bjanes.

I have found that by varying the sharpening settings in ACR6/LR3 I am able to duplicate the "totally superior" results found in other highly touted RAW converters.

That said, IMO Lightroom's workflow is far superior to that of any other product I've tried.

However, it's a free country, and the new RAW converter developers are happy to relieve you of excess cash.
« Last Edit: July 25, 2010, 11:52:53 am by mhecker* »
Logged

eronald

  • Sr. Member
  • ****
  • Offline
  • Posts: 6642
    • My gallery on Instagram
Deconvolution sharpening revisited
« Reply #43 on: July 24, 2010, 07:16:09 pm »

I think this post is a bit misleading. If you incorporate sharpening in your processing, then your process is now non-linear, and however ISO-certified the target itself may be, the slanted-edge method is no longer valid, because MTF is only meaningful as a description of a 2D spatial convolution process, which is thereby assumed to be linear. Even though Imatest is an excellent piece of software - I am acquainted with Norman Koren, which doesn't mean I understand the maths - feeding Imatest invalid input does not sanctify the output.

Edmund

Quote from: bjanes
Eric,

Thanks for the information. The behavior of the sliders appears to be quite different from that of the older versions of ACR. In Real World Camera Raw with Adobe Photoshop CS4, Jeff Schewe states that if one moves the detail slider all the way to the right, the results are very similar to, but not exactly the same as, those that would be obtained with the unsharp mask.

The following observations are likely nothing new to you, but may be of interest to others. The slanted-edge target (a black-on-white transition at a slight angle) is an ISO-certified method of determining MTF and is used in Imatest. Here is an example with the Nikon D3 using ACR 6.1 without sharpening (far right), with ACR sharpening set to 50, 1, 50 [amount, radius, detail] (middle), and with deconvolution sharpening using Focus Magic with a blur width of 2 pixels and an amount of 100%. The images used for measurement are cropped, so the per-picture-height measurements are for the cropped images.

[attachment=23291:Comp1_images.gif]

One can analyze the black-white transition with Imatest, which determines the pixel interval for a rise in intensity at the interface from 10 to 90%. Results are shown for Focus Magic and ACR sharpening with the above settings. The results are similar. With real-world images from this camera (previously posted in a discussion with Mark Segal), I have not noted much difference between optimally sharpened images using ACR and Focus Magic, contrary to the results reported by Diglloyd using the Richardson-Lucy algorithm. Perhaps the Focus Magic algorithm is inferior to RL. Diglloyd used Smart Sharpen for comparison and did not test ACR 6 sharpening.

[attachment=23292:CompACR_FM_1.gif]

One can look at the effect of the detail slider by using ACR sharpening settings of 100, 1, 100 (left) and 100, 1, 0 (right). The detail setting of zero dampens the overshoot.

[attachment=23293:CompACR.gif]
« Last Edit: July 24, 2010, 07:19:04 pm by eronald »
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Deconvolution sharpening revisited
« Reply #44 on: July 24, 2010, 07:51:22 pm »

Quote from: eronald
I think this post is a bit misleading. If you incorporate sharpening in your processing, then your process is now non-linear, and however ISO-certified the target itself may be, the slanted-edge method is no longer valid, because MTF is only meaningful as a description of a 2D spatial convolution process, which is thereby assumed to be linear. Even though Imatest is an excellent piece of software - I am acquainted with Norman Koren, which doesn't mean I understand the maths - feeding Imatest invalid input does not sanctify the output.

Hi Edmund,

You are correct that sharpening introduces non-linearity into the determination of the MTF; however, that is also of great use when comparing the sharpened result to the 'before' situation. It allows us to assess the difference that the non-linear process of sharpening introduces. The math behind the slanted-edge method of MTF determination is robust, so the results will be accurate (for the particular output image under investigation).

If one has to compare camera files with unknown levels of pre-processing (such as in-camera sharpened JPEGs), Imatest also comes prepared. It offers a kind of normalization called "standardized sharpening" which allows one to compare non-linear input when no other info is available. In this case, however, we get a very useful insight into the spatial frequencies that are boosted, hence my suggestion to try a lower radius value; Imatest gave the clue.
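
To make the principle concrete, here is a bare-bones sketch of the idea behind the edge method: differentiate the edge-spread function (ESF) to get the line-spread function (LSF), then take the normalized FFT magnitude as the MTF; the 10-90% rise can be read straight off the profile. This is only the principle, with invented helper names; Imatest's real slanted-edge code does far more (sub-pixel binning along the slant, windowing, noise handling).

[code]
# Bare-bones edge analysis: 10-90% rise and MTF from a 1-D edge profile.
import numpy as np
from scipy.special import erf

def rise_10_90(esf):
    """Pixel distance for a monotonic edge to rise from 10% to 90%."""
    esf = (esf - esf.min()) / (esf.max() - esf.min())
    x = np.arange(len(esf), dtype=float)
    return np.interp(0.9, esf, x) - np.interp(0.1, esf, x)

def mtf_from_edge(esf):
    """MTF = normalized |FFT| of the line-spread function (d ESF / dx)."""
    lsf = np.gradient(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]    # bins run from 0 to 0.5 cycles/pixel

# Example: a step edge blurred by a Gaussian of sigma = 1.5 pixels.
x = np.arange(-32.0, 32.0)
esf = 0.5 * (1.0 + erf(x / (1.5 * np.sqrt(2.0))))
print(rise_10_90(esf))          # about 3.8 px (= 2 * 1.2816 * sigma)
print(mtf_from_edge(esf)[:4])   # low-frequency end of the curve
[/code]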

Cheers,
Bart
« Last Edit: July 24, 2010, 08:00:37 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

walter.sk

  • Sr. Member
  • ****
  • Offline
  • Posts: 1433
Deconvolution sharpening revisited
« Reply #45 on: July 24, 2010, 08:57:28 pm »

Quote from: madmanchan
Yes, Walter. It does mean you can apply this type of sharpening / deblurring selectively, if you wish. There are two basic workflows for doing this in CR 6 and LR 3.

The first way is just to paint in the sharpening where you want it. To do this, you set the Radius and Detail the way you want, but set Amount to 0. Then, with the local adjustment brush, you paint in a positive Sharpness amount in the desired areas. The brush controls and the local Sharpness amount let you control how the sharpening is applied. (Of course you can also use the erase mode in case you overpaint.) This workflow is effective if there are relatively small areas of the image you want to sharpen. I tend to use this for narrow-DOF images (e.g., a macro of a flower) where I only care about very specific elements being sharpened. It also works fine for portraits.

The second way is the opposite, i.e., you apply the capture sharpening in the usual way until most of the image looks good, but then selectively "back off" on it (using local Sharpness with negative values) in some areas. Of course you can also add to it (using local Sharpness with positive values).

Thank you. Now I'm going to have to try some comparisons between deconvolution by these methods in RAW, versus post-processing with Focus Magic, which has been my favorite for years now.
Logged

hubell

  • Sr. Member
  • ****
  • Offline
  • Posts: 1135
Deconvolution sharpening revisited
« Reply #46 on: July 24, 2010, 11:25:31 pm »

Quote from: walter.sk
Thank you. Now I'm going to have to try some comparisons between deconvolution by these methods in RAW, versus post-processing with Focus Magic, which has been my favorite for years now.

Unfortunately, Focus Magic has become functionally useless for me. With larger 16-bit files, it consistently gives me "memory full" errors and then crashes CS4. It appears that development has ceased. Too bad, as it gave me great results.

Craig Lamson

  • Sr. Member
  • ****
  • Offline
  • Posts: 3264
    • Craig Lamson Photo Homepage
Deconvolution sharpening revisited
« Reply #47 on: July 25, 2010, 05:56:44 am »

Quote from: hcubell
Unfortunately, Focus Magic has become functionally useless for me. With larger 16-bit files, it consistently gives me "memory full" errors and then crashes CS4. It appears that development has ceased. Too bad, as it gave me great results.


How big are the files? I just tested a 444 MB 16-bit TIFF in CS4, and it ran fine on a Win7 64-bit machine with 4 GB of RAM.
Logged
Craig Lamson Photo

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Deconvolution sharpening revisited
« Reply #48 on: July 25, 2010, 07:18:56 am »

Quote from: hcubell
Unfortunately, Focus Magic has become functionally useless for me. With larger 16-bit files, it consistently gives me "memory full" errors and then crashes CS4. It appears that development has ceased. Too bad, as it gave me great results.

I had similar issues with FocusMagic when I still ran Win XP. There is a sort of workaround, though: just make partial selections (use a few guides to help make adjoining but not overlapping selections). It's not ideal, but it will get the job done, selection after selection. I couldn't get FM to install under Vista, but they recently changed the installer, so perhaps now it will. I've since moved to Win7, and there are no problems so far.
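
Mechanically, that workaround is tiling, and it can be automated. A rough sketch of the same idea, with a Gaussian-based sharpener standing in for Focus Magic and invented names throughout:

[code]
# The "partial selections" workaround, mechanized: run a memory-hungry
# filter over adjoining, non-overlapping tiles instead of the whole
# frame at once. A Gaussian-based sharpener stands in for Focus Magic.
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen(tile, radius=1.0, amount=1.0):
    return tile + amount * (tile - gaussian_filter(tile, radius))

def process_in_tiles(image, tile_size=2048, fn=sharpen):
    out = np.empty_like(image)
    h, w = image.shape[:2]
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            block = image[y:y+tile_size, x:x+tile_size]
            out[y:y+tile_size, x:x+tile_size] = fn(block)
    return out

# Caveat: a filter with a spatial footprint really wants a small overlap
# margin (process with padding, then crop) to avoid seams at tile
# borders -- the same caveat that applies to hand-made selections.
[/code]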

I've not tested RawTherapee for size limitations, but it does read TIFFs and it allows Richardson-Lucy deconvolution.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

hubell

  • Sr. Member
  • ****
  • Offline
  • Posts: 1135
Deconvolution sharpening revisited
« Reply #49 on: July 25, 2010, 07:41:23 am »

Quote from: BartvanderWolf
I had similar issues with FocusMagic when I still ran Win XP. There is a sort of workaround, though: just make partial selections (use a few guides to help make adjoining but not overlapping selections). It's not ideal, but it will get the job done, selection after selection. I couldn't get FM to install under Vista, but they recently changed the installer, so perhaps now it will. I've since moved to Win7, and there are no problems so far.

I've not tested RawTherapee for size limitations, but it does read TIFFs and it allows Richardson-Lucy deconvolution.

Cheers,
Bart

I tried it unsuccessfully last week with a 16-bit 428 MB file. I am on a 2009 Mac Pro with 16 GB of RAM running OS X 10.6.4. I always had problems with out-of-memory errors under 10.4 and 10.5, but 10.6 has just been impossible to use with Focus Magic.

BTW, do you use Focus Magic "just" for capture sharpening or also for output sharpening?

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Deconvolution sharpening revisited
« Reply #50 on: July 25, 2010, 09:48:38 am »

Quote from: eronald
I think this post is a bit misleading. If you incorporate sharpening in your processing, then your process is now non-linear, and however ISO-certified the target itself may be, the slanted-edge method is no longer valid, because MTF is only meaningful as a description of a 2D spatial convolution process, which is thereby assumed to be linear. Even though Imatest is an excellent piece of software - I am acquainted with Norman Koren, which doesn't mean I understand the maths - feeding Imatest invalid input does not sanctify the output.

Edmund
In addition to Bart's post in response to your comment, I think that your use of "misleading" and "invalid input" is too harsh. If you look at Norman's documentation of Imatest, he uses it extensively to compare the effects of sharpening. Indeed, if the method were invalid for sharpened images, it would be useless for assessing the sharpness of images derived from cameras with low-pass filters, since these images must always be sharpened for optimal appearance. If my use of Imatest is misleading and invalid, so is Norman's.

From the Imatest documentation:

[attachment=23321:ImatestDoc.gif]
Logged

eronald

  • Sr. Member
  • ****
  • Offline
  • Posts: 6642
    • My gallery on Instagram
Deconvolution sharpening revisited
« Reply #51 on: July 25, 2010, 10:46:06 am »

Quote from: bjanes
In addition to Bart's post in response to your comment, I think that your use of "misleading" and "invalid input" is too harsh. If you look at Norman's documentation of Imatest, he uses it extensively to compare the effects of sharpening. Indeed, if the method were invalid for sharpened images, it would be useless for assessing the sharpness of images derived from cameras with low-pass filters, since these images must always be sharpened for optimal appearance. If my use of Imatest is misleading and invalid, so is Norman's.

From the Imatest documentation:

[attachment=23321:ImatestDoc.gif]


Sorry, I'll remove myself from this discussion; Norman is a guy I respect, his understanding of these topics is infinitely greater than mine, and I don't want my own lack of understanding and personal views to reflect on his excellent product.

Edmund
« Last Edit: July 25, 2010, 10:48:10 am by eronald »
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

bjanes

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3387
Deconvolution sharpening revisited
« Reply #52 on: July 25, 2010, 10:59:19 am »

Quote from: eronald
Sorry, I'll remove myself from this discussion; Norman is a guy I respect, his understanding of these topics is infinitely greater than mine, and I don't want my own lack of understanding and personal views to reflect on his excellent product.

Edmund
Edmund,
Thanks for the reply, but there is no need to withdraw from the discussion. Your point on non-linearity is well taken, and excessive sharpening can lead to spurious results. Some time ago, I was involved in a discussion with Norman and others over test results reporting MTF50s well over the Nyquist limit. Magnified aliasing artifacts apparently were being interpreted as meaningful resolution. Norman stated that the slanted-edge method did have limitations and that he was working on other methods.

Bill
Logged

EricWHiss

  • Sr. Member
  • ****
  • Offline
  • Posts: 2639
    • Rolleiflex USA
Deconvolution sharpening revisited
« Reply #53 on: July 25, 2010, 02:29:37 pm »

Quote from: bjanes
In addition to Bart's post in response to your comment, I think that your use of "misleading" and "invalid input" is too harsh. If you look at Norman's documentation of Imatest, he uses it extensively to compare the effects of sharpening. Indeed, if the method were invalid for sharpened images, it would be useless for assessing the sharpness of images derived from cameras with low-pass filters, since these images must always be sharpened for optimal appearance. If my use of Imatest is misleading and invalid, so is Norman's.

From the Imatest documentation:

[attachment=23321:ImatestDoc.gif]

Why don't you e-mail him and ask what's correct? He's usually quick to get back unless he's traveling...
Logged
Rolleiflex USA

eronald

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6642
    • My gallery on Instagram
Deconvolution sharpening revisited
« Reply #54 on: July 25, 2010, 04:46:17 pm »

Quote from: bjanes
Edmund,
Thanks for the reply, but there is no need to withdraw from the discussion. Your point on non-linearity is well taken, and excessive sharpening can lead to spurious results. Some time ago, I was involved in a discussion with Norman and others over test results reporting MTF50s well over the Nyquist limit. Magnified aliasing artifacts apparently were being interpreted as meaningful resolution. Norman stated that the slanted-edge method did have limitations and that he was working on other methods.

Bill

Yes, I just talked to Norman, linking him to this conversation, and it seems ISO is going to move to lower-contrast slanted-edge targets precisely to prevent cameras from moving into a non-linear regime.

Re. MTF, if I understand correctly, Norman's position is that in the presence of sharpening you are measuring whole-system performance, and it becomes difficult to derive the performance of a specific component of the system. I'm sure he would be delighted to get e-mail from any Imatest user and discuss the topic further.

Edmund
« Last Edit: July 25, 2010, 04:47:40 pm by eronald »
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

madmanchan

  • Sr. Member
  • ****
  • Offline
  • Posts: 2115
    • Web
Deconvolution sharpening revisited
« Reply #55 on: July 26, 2010, 09:57:53 am »

Depends on what you're looking for, though. As scientists we're interested in the isolated behaviors of individual components, but as photographers it's the end-to-end (system-wide) results that ultimately matter (i.e., what comes out of the back end, the final result).
Logged
Eric Chan

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Deconvolution sharpening revisited
« Reply #56 on: July 26, 2010, 10:53:29 am »

Quote from: madmanchan
Yes, it looks like Bill made a typo in the post (the screenshot values say 43, 1, 100, as opposed to 41,1,0). For this type of image I do recommend a value below 1 for the Radius, though 1 is not a bad starting point.

Quote from: BartvanderWolf
Based on what I see, the radius 1.0 seems to be a bit too large. This is confirmed by the earlier Imatest SFR output that you posted (SFR_20080419_0003_ACR_100_1_100.tif), where the 0.3 cycles/pixel resolution was boosted. Perhaps something like a 0.6 or 0.7 radius is more appropriate to boost the higher spatial frequencies (lower frequencies will also be boosted by that).
Eric and Bart,

As per your suggestions, I repeated the tests using ACR 6.1 with settings of amount = 32, radius = 0.7, and detail = 100, and Focus Magic with settings of Blur Width = 1 and amount = 150. I found the ACR Amount slider to be quite sensitive: there is a considerable difference between 30 and 40, or even 30 and 35, with respect to overshoot and MTF at Nyquist. The chosen settings seem to be a reasonable compromise and produce similar results near Nyquist, but FM gives more of a boost in the range of 0.2 to 0.3 cycles/pixel, which may be desirable.

[attachment=23335:Comp1_Graphs.png]

Inspection of the images from which the graphs were obtained is also of interest:

[attachment=23336:Comp1_images.png]

Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Deconvolution sharpening revisited
« Reply #57 on: July 26, 2010, 06:36:08 pm »

Quote from: bjanes
Eric and Bart,

As per your suggestions, I repeated the tests using ACR 6.1 with settings of amount = 32, radius = 0.7, and detail = 100, and Focus Magic with settings of Blur Width = 1 and amount = 150. I found the ACR Amount slider to be quite sensitive: there is a considerable difference between 30 and 40, or even 30 and 35, with respect to overshoot and MTF at Nyquist. The chosen settings seem to be a reasonable compromise and produce similar results near Nyquist, but FM gives more of a boost in the range of 0.2 to 0.3 cycles/pixel, which may be desirable.

Hi Bill,

Indeed, you managed to get the MTF responses almost identical, with a slight edge to FocusMagic due to its boosting some of the lower spatial frequencies a bit more. Of course there is no law against doing two conversions with different settings and luminosity-blending the results, but in a single operation FM will do a bit better; it packs a bit more punch.
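
For the record, "luminosity blending" here means keeping the colour of one conversion and taking only the tonal detail from the other. A rough numpy equivalent, using Rec. 709 luma as an approximation rather than Photoshop's exact blend-mode math:

[code]
# Rough "luminosity blend" of two conversions: colour from the base
# render, tonal detail from the sharper render. Rec. 709 luma is an
# approximation; Photoshop's blend-mode math differs in detail.
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])

def luminosity_blend(base, detail, opacity=1.0):
    """base, detail: float RGB arrays in [0, 1], shape (H, W, 3)."""
    shift = opacity * ((detail - base) @ REC709)   # luma difference
    return np.clip(base + shift[..., None], 0.0, 1.0)
[/code]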

Quote
Inspection of the images from which the graphs were obtained is also of interest:

Yes, they confirm what Imatest was predicting, including slightly lower noise for the FM version, which also shows less moiré (probably those better lower frequencies are responsible for that) while giving an overall sharper impression. But they are quite close, especially when used for print output.

Thanks for the examples,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Wayne Fox

  • Sr. Member
  • ****
  • Offline
  • Posts: 4237
    • waynefox.com
Deconvolution sharpening revisited
« Reply #58 on: July 27, 2010, 12:20:38 am »

Apologies for hijacking this thread a little bit, but I'm just curious whether deconvolution sharpening and the evolution of computational imaging might eventually overcome much of the problem with diffraction. (And if this has already been discussed, I also apologize - I only skimmed through the thread, seeing how most of it is above my pay grade.)

I would assume it would be much more challenging than resolving the issues from an AA filter, since it would require each individual lens design to be carefully tested, then some method to apply that information to the file, and it would perhaps require data for every possible f-stop and, for zoom lenses, specific zoom settings. But it seems the theory of restoring the data as it is spread to adjacent pixels isn't much different from what happens with an AA filter.

I know I have many times stopped down to f/22 (or further), and Smart Sharpen seems to work quite well, even when printing large prints.

Just curious.
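
To put rough numbers on the f/22 case: the diffraction blur is well described by the Airy pattern, whose first dark ring has a diameter of 2.44 * lambda * N, so its size relative to the pixels is easy to estimate (the pixel pitch below is just an assumed typical value):

[code]
# Back-of-envelope diffraction numbers: Airy disk diameter = 2.44 * lambda * N.
wavelength_um = 0.55      # green light, in micrometres
pixel_pitch_um = 6.0      # an assumed, typical DSLR pitch of the period

for f_number in (5.6, 11, 22):
    airy_um = 2.44 * wavelength_um * f_number
    print(f"f/{f_number}: Airy diameter {airy_um:.1f} um = "
          f"{airy_um / pixel_pitch_um:.1f} px")

# f/5.6 -> ~1.3 px, f/11 -> ~2.5 px, f/22 -> ~4.9 px: at f/22 the blur
# spans several pixels, so deconvolution has something well-defined to invert.
[/code]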
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Deconvolution sharpening revisited
« Reply #59 on: July 27, 2010, 12:51:49 am »

Wayne,

That's not hijacking; I'd say it's a very good question indeed. FM was originally intended to restore focus.

It's my guess that it is easy to estimate the PSF (Point Spread Function) for a stopped-down lens, at least regarding diffraction. A better PSF than Lens Blur may be needed. I got the impression that regular "unsharp mask" works quite well. Certainly an area to investigate!
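
To make that concrete, here is a minimal Richardson-Lucy iteration primed with a synthetic PSF. A Gaussian stands in for the true diffraction pattern, and the names are invented for the example; skimage.restoration.richardson_lucy does the same job more robustly. RL assumes non-negative, linear-light data, so it belongs before any tone curve:

[code]
# Minimal Richardson-Lucy deconvolution with a Gaussian stand-in for a
# diffraction PSF (the true Airy pattern could be substituted).
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma, size=15):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(observed, psf, iterations=20, eps=1e-7):
    """Classic RL update; expects non-negative, linear-light float data."""
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (reblurred + eps)                # data / model
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

# Usage sketch: restored = richardson_lucy(blurred, gaussian_psf(1.2))
[/code]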

Best regards
Erik


Quote from: Wayne Fox
Apologies for hijacking this thread a little bit, but I'm just curious whether deconvolution sharpening and the evolution of computational imaging might eventually overcome much of the problem with diffraction. (And if this has already been discussed, I also apologize - I only skimmed through the thread, seeing how most of it is above my pay grade.)

I would assume it would be much more challenging than resolving the issues from an AA filter, since it would require each individual lens design to be carefully tested, then some method to apply that information to the file, and it would perhaps require data for every possible f-stop and, for zoom lenses, specific zoom settings. But it seems the theory of restoring the data as it is spread to adjacent pixels isn't much different from what happens with an AA filter.

I know I have many times stopped down to f/22 (or further), and Smart Sharpen seems to work quite well, even when printing large prints.

Just curious.
Logged
Erik Kaffehr
 