
Author Topic: Optimal Capture Sharpening, a new tool  (Read 63506 times)

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Optimal Capture Sharpening, a new tool
« on: June 19, 2012, 06:58:44 am »

Hi folks,

We have great Raw conversion and sharpening tools at our disposal, but it is not always clear which settings will objectively lead to the best results. Human vision is easily fooled, e.g. by differences in contrast, so finding the optimal settings by eye may not be easy. Especially for 'Capture Sharpening' it is important to be as accurate as possible. When we don't sharpen enough, we leave image quality on the table, and when we overdo it we have to face the consequences. When we produce large-format output, for example, or need to crop a lot, we may discover distracting halos, because at a larger output magnification our eyes have an easier task distinguishing between actual detail and artifacts.

Regardless of the exact sharpening method used, one of the sharpening parameters is usually a radius setting which controls how wide of an area around each pixel is going to influence that central pixel's brightness, and thereby how much contrast will be added to the local micro-detail. Ideally we only want to restore the original image's sharpness as it was before it got blurred by lens aberrations, diffraction, the AA-filter, Raw conversion, etc. Creative sharpening is considered by many to be a separate process, best applied locally.
The radius control is the most important one to get right, regardless of the sharpening method we use. The actual sharpening method may influence the amount we need to apply, but the radius is pretty much a physical given for a certain lens and sensor combination.

Now, wouldn't it be nice to have a tool to objectively determine that optimal radius setting?
Well, now there is such a tool, the 'Slanted Edge evaluation' tool, and it makes use of the 'slanted edge' features that can be found in a number of test charts (such as the one I proposed here).

I've made it a web-page based tool, which can therefore also operate on modern smartphones, and it allows you to objectively determine that optimal sharpening radius. Unfortunately, the basic functionality of HTML web pages doesn't allow reading and writing arbitrary user-selected image files on client-side computers, so some manual input is required, e.g. of pixel brightness values, but it's a free tool so who could complain. You could try and ask for your money back if you don't like it, but with enough support I might actually make a commercial version available; we'll see.

This new tool works by making a model of the blur pattern. That model is essentially based on the shape of a Gaussian bell curve, which has a pretty good overall correspondence with the more complex Point Spread Function (PSF). Such a PSF is a mathematical model which not only characterizes the blur pattern, but also makes it possible to invert the blur effects and restore the original sharp signal.
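For those who want to experiment, such a Gaussian blur model is easy to generate. A minimal numpy sketch (the function name and the 3-sigma support radius are my choices for illustration, not part of the tool):

```python
import numpy as np

def gaussian_psf(sigma, radius=None):
    """Discrete 2-D Gaussian PSF, normalized so its weights sum to 1."""
    if radius is None:
        radius = max(1, int(np.ceil(3 * sigma)))  # 3-sigma support
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    kernel = np.outer(g, g)
    return kernel / kernel.sum()

psf = gaussian_psf(0.725)  # e.g. the f/4.5 blur radius mentioned in this thread
```

Normalizing the kernel to unit sum keeps the overall image brightness unchanged when the PSF is used for blurring or analysis.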

Actually, those who use the Imatest software already have some great capability to simplify the data collection process, because it can analyse image files directly, even Raw files. Part of the trick is in figuring out how to interpret the output results, and convert them to input for this tool.

However, this new tool continues where most analysis tools stop: it not only gives feedback in the form of a (Capture) sharpening radius to use, but also lets you produce a discrete deconvolution kernel based on the prior analysis. There are free tools available on the internet (e.g. ImageJ, or ImageMagick) that let you take such a kernel and apply deconvolution sharpening to images that were similarly blurred (same lens and aperture, and Raw conversion) as the test file that was used to determine the kernel.

How to use the results of the analysis?
The easy way to use it is by copying the optimal radius that results from the analysis to your sharpening tool. You can then optimize the other parameters, knowing that any resulting artifacts are caused by overdoing the amount or other settings. Likewise, when the resolution drops after adjusting the other parameters, you'll know that you are applying too much noise reduction or too strong a mask. Just re-analyze the same test image after the additional processing and compare the results if you want an objective verdict.

A more advanced use of the analysis involves the creation of a deconvolution filter kernel from the blur radius parameter and using that kernel to deconvolve the image, or similar (same lens/aperture/camera/raw processing) images. One can also re-analyse the initial test image after an initial deconvolution, and determine if (a) subsequent run(s) with a different filter kernel further improve the result. It will, if the original blur is a combination of different but similarly strong sources of blur.
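As an illustration of that idea, here is a hedged Python sketch: it builds a crude single-pass sharpening kernel of the unsharp-mask type, (1+α)·δ − α·Gaussian, and applies it with scipy. This is only a stand-in for the kernel the tool actually generates, and the α value is arbitrary:

```python
import numpy as np
from scipy.ndimage import convolve

def deconv_kernel(sigma, alpha=1.0, radius=3):
    """(1 + alpha)*delta - alpha*Gaussian: a crude one-pass sharpening
    kernel that approximately counters a Gaussian blur of 'sigma'.
    NOT the kernel Bart's tool generates; just an illustrative stand-in."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    psf = np.outer(g, g)
    psf /= psf.sum()
    delta = np.zeros_like(psf)
    delta[radius, radius] = 1.0
    return (1 + alpha) * delta - alpha * psf

image = np.random.rand(64, 64)                     # stand-in for a blurred TIFF
sharpened = convolve(image, deconv_kernel(0.725), mode='nearest')
```

Because the kernel still sums to 1, mean brightness is preserved while local edge contrast is boosted; a measured kernel from the slanted-edge analysis would replace `deconv_kernel` here.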

I will be adding some before/after examples of what can be achieved with the analysis results, but feel free to experiment with it and ask questions about how to use it for specific situations.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #1 on: June 19, 2012, 11:49:19 am »

Hi,

Here I have attached 2 crops, from a Canon EOS 1Ds Mark III image that was shot with the EF 100mm f/2.8L Macro lens at f/4.5 at a little over 13 metres distance. Tripod, mirror lockup, and focused with Live View and a loupe. I tried to shoot when the wind didn't move the branches too much.

The Raw conversion was done with ACR 7.1 for the unsharpened version, and that TIFF file was Deconvolved with ImageJ using a single Deconvolution filter kernel that was derived from a blur radius of approx. 0.725. That radius was determined earlier to characterize the f/4.5 blur pretty well (see the attached chart).

Even with the 0.7 Radius as a given, it was very difficult to find the other optimal ACR Capture sharpening parameters by eye, but using a deconvolution filter also takes that guesswork out of the equation.

Cheers,
Bart

EricWHiss

  • Sr. Member
  • ****
  • Offline
  • Posts: 2639
    • Rolleiflex USA
Re: Optimal Capture Sharpening, a new tool
« Reply #2 on: June 19, 2012, 09:39:19 pm »

Bart,
Thanks for the information!   I'll start with a very simple question - this is for capture-level sharpening, yes? If a two-tiered sharpening technique is used, does that affect this calculation?
Eric
Rolleiflex USA

julianv

  • Jr. Member
  • **
  • Offline
  • Posts: 55
Re: Optimal Capture Sharpening, a new tool
« Reply #3 on: June 20, 2012, 12:22:18 am »

Hi Bart,

Thanks for making this tool, and for providing the associated explanations.  It will be interesting to see if a custom deconvolution filter kernel, derived from a specific camera, provides visibly better sharpening than the generic sharpening filters in commercial raw converters like LR, ACR, CNX, or C1.

Those of us who have workflows built around those products may not want to add additional steps, like passes through ImageJ or ImageMagick, in order to obtain a theoretically optimal sharpening of every image. But some experiments with your method might enable us to see what an optimal sharpening looks like, and to choose radius and amount settings in our favorite converters which come close.  I think that there are quite a few people (including some internet celebrities) who are over-sharpening their files, and could use a reality check.  OK, so I suppose sharpening is ultimately a matter of taste, but I prefer my images chewy but not crunchy.

Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #4 on: June 20, 2012, 04:30:11 am »

Bart,
Thanks for the information!   I'll start with a very simple question - this is for capture-level sharpening, yes? If a two-tiered sharpening technique is used, does that affect this calculation?

Hi Eric,

The tool can be used for many things, and it offers a lot of insight for those who invest some time. As you can see in the crop example I posted earlier, it also restores the so-called 3D look that is sometimes attributed to some camera platforms. It's all about restoring the original input data, the MTF input, as much as the system MTF allows.

Step one would be to optimize Capture sharpening, when used on an unsharpened Raw conversion of a Slanted Edge shot. Without Raw conversion sharpening, we'll get the baseline that the lens/aperture/camera system produces (assuming good focus). That would be the time to determine optimal Capture sharpening, since that creates a much better starting point for the Creative and Output sharpening steps.

It also doesn't produce halos and, because it only restores (amongst other things) the highlights, it shouldn't produce clipping either. If the highlights clip after sharpening, then they were too bright to begin with, because only the original signal is restored. Nothing is exaggerated; only restoration takes place.

So to answer your question: for a two-tiered sharpening approach, this would be the basis. One word of caution: the contrast and tonality settings during the Raw conversion do influence the sharpening result. Therefore it would be optimal to incorporate the blur radius we found earlier as early in the Raw conversion process as possible, and be systematic about it. Hence its suitability for Capture sharpening. All it does is restore capture losses; subsequent sharpening will have a better foundation.

Do note, if one routinely adjusts the tonality, e.g. bumps the contrast a bit or adds an S-curve, then it would make sense to also do that on a Slanted edge conversion, so its effect will be incorporated in the Blur Radius analysis.

In Photoshop this would become my Background layer, without avoidable blur and halo, with the maximum quality pulled out of the capture system. Using a Smart object would still allow me to return to certain Raw conversion related settings, such as WB and minor tonality adjustments, or spot removal. Subsequent Creative sharpening, or output sharpening after resampling to the output size, should only add emphasis to elements that help to better get our creative intentions across. I like using High-Pass filter layers to do such targeted resolution adjustments. Anyhow, with proper Capture sharpening we can spend more time on the creative aspects without having to worry about artifacts being 'enhanced' by further processing, because there are virtually no artifacts to start with.

But it doesn't have to stop there. In fact, not only can it be used to find the sweet-spot aperture for a lens (in case the highest resolution is required), but over time it allows you to build a 'database' (or look-up table) of Radius parameters that can be reused at will. Determine the parameters once, use them on many occasions. When done systematically, one can also exchange findings, or at least get started with settings that are in the ballpark.

It can also be used to improve our large-format output. One can e.g. produce an ImageMagick script that upsamples our halo-free, Capture-sharpened image by a fixed factor of 2, or 3, or whatever, and automatically deconvolves to restore part of the upsampling blur. When a perfect slanted edge (e.g. a crop of my test target) is put through that upsampling step, we can create a deconvolution kernel that restores the original lower-resolution data as well as possible. It can even expose flaws in the upsampling algorithms (many add halos).
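A rough sketch of such an upsample-then-restore pipeline (in Python rather than an ImageMagick script; the factor of 2, the 0.5 sigma, and the unsharp-mask-style restoration step are placeholder assumptions, not measured values):

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def upsample_and_restore(image, factor=2, sigma=0.5, alpha=1.0):
    """Upsample, then counter part of the interpolation blur with a
    single unsharp-mask-style pass (a stand-in for a kernel measured
    from a slanted-edge analysis of the upsampled target)."""
    up = zoom(image, factor, order=3)      # cubic-spline upsampling
    blurred = gaussian_filter(up, sigma)
    return up + alpha * (up - blurred)     # restore some edge contrast

out = upsample_and_restore(np.random.rand(32, 32))
```

In a measured workflow, `sigma` and `alpha` would be replaced by a kernel derived from analyzing the upsampled slanted-edge crop, exactly as described above.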

Cheers,
Bart
« Last Edit: June 20, 2012, 04:53:55 am by BartvanderWolf »

Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #5 on: June 20, 2012, 04:50:27 am »

Hi Bart,

Thanks for making this tool, and for providing the associated explanations.  It will be interesting to see if a custom deconvolution filter kernel, derived from a specific camera, provides visibly better sharpening than the generic sharpening filters in commercial raw converters like LR, ACR, CNX, or C1.

Hi Julian,

You're welcome. I'm confident that a deconvolution filter kernel will be at least as good as a generic sharpening filter, but likely it will be better. It just depends on how much better, and if that justifies the effort. There's always a trade-off, but perfect quality requires putting in at least some effort.

Quote
Those of us who have workflows built around those products may not want to add additional steps, like passes through ImageJ or ImageMagick, in order to obtain a theoretically optimal sharpening of every image. But some experiments with your method might enable us to see what an optimal sharpening looks like, and to choose radius and amount settings in our favorite converters which come close.  I think that there are quite a few people (including some internet celebrities) who are over-sharpening their files, and could use a reality check.

I agree, not everybody needs or wants to go the extra mile. However, as you say, it will allow some reality checks and it may also get some of the established industry moving in the right direction. Meanwhile, the solutions are available for those who need them.

Quote
OK, so I suppose sharpening is ultimately a matter of taste, but I prefer my images chewy but not crunchy.

Yes, (Creative) sharpening is very much a matter of taste, and so it should be. However, what we do not need are the quality losses that are inherent in the Capture process and the Output process, and the good news is that we can restore some of those losses, which improves our creative options without having to fear unwanted artifacts.

Cheers,
Bart
« Last Edit: June 20, 2012, 05:39:19 am by BartvanderWolf »

Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #6 on: June 20, 2012, 06:42:59 am »

Here is another example of Capture sharpening only, with the use of the optimal Radius.

I've shot the same spruce tree cone scene as I showed earlier, but this image was taken with an f/16 aperture (instead of f/4.5). That obviously did increase the DOF which might be required for artistic reasons, but there is a small price to be paid due to diffraction. On my 1Ds3 camera with its 6.4 micron sensel pitch, visible diffraction sets in at f/7.1 and gets progressively worse at narrower apertures. I've attached both the before and after Capture sharpening crops at the end of this post. It's clear that the unsharpened f/16 shot needed more help than the earlier f/4.5 one.

However, by using optimal Capture sharpening, most of the lost detail will be restored, and we should get an almost identical result to base our Creative sharpening on. And indeed, apart from some unrecoverable loss in micro-contrast, the Capture-sharpened images look almost identical:

The f/4.5 image was deconvolved with a kernel for a 0.725 blur radius, and the f/16 image was deconvolved with a kernel for a 1.037 blur radius. Despite, or rather because of, the different blur radii, the resulting images look almost the same and form a good foundation for almost identical further processing. That's another benefit of optimal Capture sharpening: the new 'calibrated' or restored baseline allows a more unified approach for further processing. Both images have the more 3D look restored, with e.g. a similar amount of glossiness on the needles.

Cheers,
Bart
« Last Edit: June 20, 2012, 01:43:29 pm by BartvanderWolf »

Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #7 on: June 20, 2012, 11:12:12 am »

Hi folks,

To allow a quick start exploration of the tool, I've prepared a text file with some data which can be copied and pasted into the application, here. You can save it with a right mouse click, or just copy it from your screen. The data was collected with ImageJ.

The two x,y coordinates on the edge are given at the start, and then there are 3 columns of data (one for each color channel). You can copy and paste the numerical data of one channel at a time in the tool's textbox (right click, and use 'select all', before pasting new data over existing data), and click 'Calculate sigma'.

That should create, in addition to the single-number Blur Radius value, comma- and space-delimited columns of data that can be copied and pasted into e.g. MS Excel. There the text can be separated into columns of numbers with a heading, via the Data|Text to Columns menu function, where you select delimited data and check the comma and space delimiters.
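For those who prefer a script over a spreadsheet, the same comma- and space-delimited text can be split into numeric columns with a few lines of Python (the sample data below is invented for illustration):

```python
import re

# Invented sample of comma- and space-delimited output with a heading row
raw = """x, y, value
0, 0.12, 34.5
1, 0.37, 97.8
2, 0.81, 182.0"""

lines = raw.strip().splitlines()
header = re.split(r'[,\s]+', lines[0].strip())        # column names
rows = [[float(tok) for tok in re.split(r'[,\s]+', ln.strip())]
        for ln in lines[1:]]                          # numeric rows
```

Splitting on runs of commas and whitespace handles both delimiters at once, which mirrors what Excel's Text to Columns does with both boxes checked.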

That is of course only needed if you want to further analyse or compare the data, or produce e.g. such a chart of the data:


BTW, that chart shows how well a Gaussian approximation can fit the actual edge profile of an unsharpened Raw conversion of a Slanted Edge image.

There is also a pretty close correlation between the Red/Green/Blue channels (sigmas of 0.757/0.762/0.758), which shows how the Bayer CFA demosaicing of mostly Luminance data produces virtually identical resolution in all channels. Since Luminance is the dominant factor for the Human Visual System's contrast sensitivity, it also shows that we can use a single sharpening value for the initial Capture sharpening of all channels. Only at the Creative sharpening stage might we pay some more attention to localized sharpening of predominantly Red or Blue surfaces.

Cheers,
Bart

Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #8 on: June 21, 2012, 03:10:36 pm »

Suppose you want to use only ACR7 or LR4 for sharpening, because you don't want to make round-trips to external applications. How should you approach the iterative process of finding the best default settings for a given camera/aperture/lens combination? Here follows a suggestion for how I would do it.

Use your usual Exposure, Contrast, etc. settings in the Basic panel and in the Tone Curve panel. With the introduction of Process Version 2012, all Basic controls also influence the tone curve, and contrast in general influences the Sharpening settings. Perform a White Balance on the gray areas of the target, and adjust the overall exposure/brightness of the image so the chart's gray background comes out as medium gray. For now, keep these settings the same for the following conversions; we only want to change a single parameter at a time.

1. Start by generating an unsharpened Raw conversion of a Slanted Edge target shot. This requires setting the Amount slider in the Detail panel to zero. A 16-bit/channel TIFF output will give the most accurate results, but 8-bit/channel data will also give correct (only slightly less accurate) results.

2. Then use your preferred procedure to collect the edge-angle coordinates and transition data from the converted result. It helps if you use a procedure that lets you plot the edge-transition pixel values, because we also want to visually interpret the shape of the edge transition.

3. Copy/paste the data into the Slanted Edge analysis tool's textbox, and click the 'Calculate sigma' button. This will calculate the Blur Radius we should use in the Detail panel. On the file I'm currently using, with my preferred settings, that gives a result of sigma=0.6635332701693427. I'll enter the closest possible Radius setting in the Detail panel, 0.7. This pins down one of the interdependent variables, and we can start changing the other settings one by one.

4. I'll start with an initial setting of the Detail slider at 50. This sets the sharpening method to 50% USM-like sharpening and 50% Smart-Sharpen-like deconvolution. Now I'll make a new Raw conversion where only the Amount slider is changed, e.g. to 20. This Raw conversion, saved as a TIFF, is again analysed (make sure to use the exact same edge-transition area in each conversion), and the Slanted Edge tool now reports a Blur radius of 0.4518385151528749, and a 10-90% edge rise of 1.16 pixels. A perfectly sharp image would have a 10-90% edge rise of a little less than 1.0, and a Blur radius of a bit under 0.39. So it seems I can increase the Amount setting a bit.

5. A Raw conversion with an Amount of 30 results in a Blur radius of 0.3710860886167325 and a 10-90% rise of 0.95 pixels, which is only slightly oversharpened; a graphic plot of the ESF shows a very minor amount of highlight halo, but also an increase of the shadow noise (see attachment).

6. Therefore I decided to decrease the Detail slider to 35, and try again. This resulted in a Blur radius of 0.41275946448178497 and a 10-90% edge rise of 1.06, in other words a slight undersharpening.

7. I boosted the Amount control to 35 and did a new analysis on the TIFF conversion. This time it resulted in a Blur radius of 0.3876845834047897, and a 10-90% edge rise of 0.99 pixels, almost perfect (See attachment).
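As a sanity check on the numbers above: if the edge-spread function is modelled as a Gaussian CDF, the 10-90% rise distance is about 2.563 times the blur radius (sigma), which reproduces the reported values (scipy assumed):

```python
from scipy.stats import norm

# For a Gaussian edge-spread function ESF(x) = Phi(x / sigma), the
# 10-90% rise distance is sigma * (z_0.9 - z_0.1) ~= 2.563 * sigma.
k = norm.ppf(0.9) - norm.ppf(0.1)      # ~2.5631

rise_20 = k * 0.4518385151528749       # step 4 -> ~1.16 px
rise_30 = k * 0.3710860886167325       # step 5 -> ~0.95 px
rise_35 = k * 0.3876845834047897       # step 7 -> ~0.99 px
sharp_sigma = 1.0 / k                  # ~0.39, the 'perfectly sharp' radius
```

The same relation explains why a 1.0 pixel 10-90% rise corresponds to a Blur radius of a bit under 0.39.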

This produces a pretty good Capture sharpening, although the shadow noise did increase a bit more than I like. Maybe it would be wise to find a setting with an even lower Detail value but with the Amount boosted a bit further. Since this was a relatively wide-aperture and thus low-diffraction shot, there wasn't too much deconvolution benefit to be gained anyway.

I'll repeat the procedure a bit later for a file with much more diffraction, to see if the Detail slider helps without boosting noise as much.

Cheers,
Bart

jrp

  • Sr. Member
  • ****
  • Offline
  • Posts: 321
Re: Optimal Capture Sharpening, a new tool
« Reply #9 on: June 21, 2012, 05:00:33 pm »

It would be great to have a little more detail on how to perform steps like

Then use your preferred procedure to collect the edge angle coordinates and transition data from the converted result.

for those of us who have never done this before, please.

Thanks.

Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #10 on: June 21, 2012, 06:44:33 pm »

It would be great to have a little more detail on how to perform steps like

Then use your preferred procedure to collect the edge angle coordinates and transition data from the converted result.

for those of us who have never done this before, please.

Thanks.

Hi,

The two points on the edge that define its angle can be picked by any image viewer that reports pixel coordinates and pixel value. It's described when you click the question mark icon of the first step on the Slanted Edge analysis tool page.

You can also click on the question mark icon of the second step.
That will open a new tab or webpage where I describe how to collect the pixel values with ImageJ, a free Java-based image-processing utility. There may be other tools available that can record the pixel values of a row of pixels, and it would be nice if people would share such information.

I have a preference for ImageJ because it can do a lot more (which I also use) than is strictly needed for this functionality, but I understand that it represents an additional learning curve. So if anybody can recommend another utility, built-in method, Photoshop plug-in, or Lightroom module to collect the pixel values for copying and pasting, please share that tip.

Hope that answers your question, but don't hesitate to ask for further clarification if needed.

Cheers,
Bart

P.S. I've found that this method for use in Photoshop Extended works, but it requires saving and converting the text output before it can be copied and pasted into the webpage tool. Make sure to set a lower spacing than described there, because we want to sample each pixel, with no pixels skipped.
« Last Edit: July 06, 2013, 08:36:09 pm by BartvanderWolf »

kirkt

  • Sr. Member
  • ****
  • Offline
  • Posts: 604
Re: Optimal Capture Sharpening, a new tool
« Reply #11 on: June 24, 2012, 04:48:46 pm »

Bart,

This is great!  As a long-time user of ImageJ (NIH Image, written in Pascal!) I am loving this insightful tool - there are plug-ins for deconvolution but trying to get them adapted to the kind of processing your method targets is tricky business and not very straightforward.

Thanks for the effort.  Now I need to print your test chart and get to work analyzing....

kirk

Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #12 on: June 24, 2012, 07:40:39 pm »

Bart,

This is great!  As a long-time user of ImageJ (NIH Image, written in Pascal!) I am loving this insightful tool - there are plug-ins for deconvolution but trying to get them adapted to the kind of processing your method targets is tricky business and not very straightforward.

Hi Kirk,

The difficulty with many of those deconvolution plugins is that they require a significant level of prior knowledge from their users. In contrast, my tool and the linked Gaussian-based PSF kernel generator can produce an almost perfect deconvolution kernel for the spatial domain. From there it is just a matter of copying and pasting that kernel output into ImageJ's Process|Filters|Convolve... menu. That performs the deconvolution in the spatial domain, which is only less efficient for larger image sizes, but otherwise should give the same result (without the need for 'abstract' concepts like regularization parameters) as deconvolution in the frequency domain (after a Fourier transform).
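The equivalence of spatial-domain and frequency-domain convolution is easy to verify numerically, e.g. with scipy (the random 'image' and kernel are just stand-ins):

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(42)
image = rng.random((16, 16))
kernel = rng.random((5, 5))
kernel /= kernel.sum()         # normalize to preserve brightness

# Direct spatial-domain convolution vs. FFT-based (frequency-domain):
# both give the same result up to floating-point error.
spatial = convolve2d(image, kernel, mode='same')
spectral = fftconvolve(image, kernel, mode='same')
```

The FFT route only starts to pay off in speed for larger images and kernels, which matches the point about efficiency above.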

The only issue spoiling the fun so far is that there seems to be a small bug in the ImageJ Convolver function code: it seems to offset the resulting image a few pixels to the left. (I'll raise a bug report with the programmers; I'll first have to sign up for their forum.)

Quote
Thanks for the effort.  Now I need to print your test chart and get to work analyzing....

Thanks, and you're welcome. One could use the Slanted Edge features on e.g. the DPReview resolution charts, but they are usually already sharpened (with sub-optimal settings), so that would not give a useful zero baseline. Also, since contrast influences the sharpening requirements, I indeed recommend using one's own preferred Raw conversion method, but without any sharpening (or optimise in-camera presets). My test chart at least helps to get accurate focus, and eliminating defocus from the equation is obviously an important prerequisite for addressing the other, unavoidable blur sources.

I do understand that the need to do some preparations (shooting targets and collecting edge transition data) is a significant obstacle, but the rewards are sweet ... It is also rewarding in that it exposes the flaws in many current sharpening/resampling solutions, and thus offers ways to reduce/overcome those flaws. The use of a free web-based tool obviously comes with some disadvantages compared to a commercial software product, but I cannot give everything away for free ...

Anyway, feedback is appreciated.

Cheers,
Bart

P.S. The bug in ImageJ mentioned has been solved.
« Last Edit: March 16, 2013, 05:57:57 pm by BartvanderWolf »

Mike Sellers

  • Sr. Member
  • ****
  • Offline
  • Posts: 666
    • Mike Sellers Photography
Re: Optimal Capture Sharpening, a new tool
« Reply #13 on: June 24, 2012, 09:56:17 pm »

Bring out the commercial product and I will buy it!
Mike

kirkt

Re: Optimal Capture Sharpening, a new tool
« Reply #14 on: June 24, 2012, 10:40:20 pm »

Bart,

I printed your target on my ancient Epson R800 and shot an aperture sequence in whole stops from 2.8 to 22 for the combination of the Canon 5DmkII and the Zeiss 50mm MakroPlanar.  I have gone through evaluating the 2.8, 4.0 and 22 apertures and I am really impressed with the results.  Thus far, I have not exercised the full range of the tool, but I can appreciate the detailed data that one can generate to analyze the effect of the sharpening radius (sigma) and the resulting deconvolution kernel.  I shot the target and then a test scene in the same light with the same setup, composed of objects of various textures and frequencies.

For the apertures analyzed so far, I determined sigma and the deconv kernel.  I shot in relatively diffused light.  My raw converter (Raw Photo Processor) performs no sharpening.  I took the RGB image into ImageJ and converted the green channel to grayscale to perform the angle and ESF measurements.  I then applied the kernel to the original target image and repeated the measurements.  As expected, the 10-90 rise typically drops from 2 to 3 pixels to within 1 to 1.5 pixels.  I can see that one would want to perform this exercise for all lenses and have that database on hand for automating deconvolution on keeper images.

Very cool.  I would be happy to make a donation via Paypal, as this is a very useful tool, regardless of the workflow.  Ages ago I actually had a working knowledge of the then NIH Image macro language.  Time to dust off the cobwebs, perhaps...

I'll go through the analysis and post observations here.

Kirk
« Last Edit: June 24, 2012, 10:43:13 pm by kirkt »

kirkt

Re: Optimal Capture Sharpening, a new tool
« Reply #15 on: June 25, 2012, 12:01:58 am »

Quick example - f/8 with above camera+lens combination.

Here is the scene (booooring) - focus was on the stitching on the baseball - live view + loupe:



Here is a plot of the original target image ESF compared with the deconvolved ESF.  The deconvolved ESF 10-90 is 1.34 pixels.



attached are 100% crops of original and deconvolved.




Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #16 on: June 25, 2012, 05:37:04 am »

Bring out the commercial product and I will buy it!

Hi Mike,

Thanks for the vote of confidence. For which OS platform would that preferably be, Mac or Windows?

Cheers,
Bart

julianv

Re: Optimal Capture Sharpening, a new tool
« Reply #17 on: June 25, 2012, 05:44:58 am »

Is it my imagination, or is there a slight color shift in kirkt's processed images?  Seems like the sharpened versions are a bit more saturated.  Will the decon kernel do that?

As for a commercial version of the tool - I vote for a Mac product.

Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #18 on: June 25, 2012, 06:21:56 am »

Bart,

I printed your target on my ancient Epson R800 and shot an aperture sequence in whole stops from 2.8 to 22 for the combination of the Canon 5DmkII and the Zeiss 50mm MakroPlanar.  I have gone through evaluating the 2.8, 4.0 and 22 apertures and I am really impressed with the results.

Hi Kirk,

One of the benefits of doing these things oneself is that you learn so much more about the image quality. It also becomes (even more) clear that different apertures require different Capture sharpening to achieve the best quality, and that the resulting quality differences between apertures can be minimized.

Quote
I took the RGB image into ImageJ and converted the green channel to grayscale to perform the angle and ESF measurements.  I then applied the kernel to the original target image and repeated the measurements.  As expected, the 10-90 rise typically drops from 2 to 3 pixels to within 1 to 1.5 pixels.

Yes, that's commonly the case. A single iteration will usually not achieve the ultimate goal of an approx. 1 pixel ESF edge rise, but it can come close. One could repeat the deconvolution with the newly found blur Radius after one iteration for even sharper results, but that increases the risk of noise amplification and mild halos. A single deconvolution minimizes such risks to some noise amplification, which can be mitigated by performing a mild noise reduction before Capture sharpening. The analysis also allows you to improve your up/down-sampling output quality. The potential improvement on large-format output is impressive, but it may also reveal the need for a better algorithm for that purpose. Even Photoshop's Bicubic Smoother, for instance, falls kind of flat on its face when the image was sharpened in addition to Capture sharpening ...

Quote
I can see that one would want to perform this exercise for all lenses and have that database on hand for automating deconvolution on keeper images.

It can also become apparent that most of one's lenses (assuming quality lenses) exhibit similar behavior. Peak performance with many lenses can be found some 2 stops down from wide open, and narrow-aperture diffraction is known to progressively deteriorate the resolution, thus leading to larger sigmas. Of course nothing beats the accuracy of testing one's own lenses, but it does require a bit of work.

Quote
Very cool.  I would be happy to make a donation via Paypal, as this is a very useful tool, regardless of the workflow.  Ages ago I actually had a working knowledge of the then NIH Image macro language.  Time to dust off the cobwebs, perhaps...

I'll go through the analysis and post observations here.

Appreciated.

Cheers,
Bart

Bart_van_der_Wolf

Re: Optimal Capture Sharpening, a new tool
« Reply #19 on: June 25, 2012, 06:55:11 am »

Is it my imagination, or is there a slight color shift in kirkt's processed images?  Seems like the sharpened versions are a bit more saturated.  Will the decon kernel do that?

Deconvolution will only influence local saturation when it lifts the veil of blur and reveals the micro-detail of the original material structure. It attempts to restore only the original colors and brightness at the pixel level. As Kirk's examples also demonstrate (look at the fibres of the stitches and the surface of the leather), the restoration of small specular reflections and shadows in dents produces a dramatically more realistic rendering of the material structure. And that is only due to Capture sharpening; we haven't even begun to augment that with Creative sharpening, should we want to stress certain features.

Quote
As for a commercial version of the tool - I vote for a Mac product.

I was afraid of that; it would mean a significantly more complex programming effort. Maybe I'll do it in Java instead; that should also allow it to run on even more platforms.

Cheers,
Bart