
Author Topic: Optimal Capture Sharpening, a new tool  (Read 63514 times)

Mike Sellers

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 666
    • Mike Sellers Photography
Re: Optimal Capture Sharpening, a new tool
« Reply #20 on: June 25, 2012, 08:20:42 am »

Hi,
I only use Windows
Logged

jrp

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 321
Re: Optimal Capture Sharpening, a new tool
« Reply #21 on: June 25, 2012, 02:07:29 pm »

I had overlooked the online help buttons first time around.

My ImageJ is only 32-bit and crashes when accessing my network share files.

The 64-bit version seems to want to install an old Java SDK, which I'd prefer to avoid.

How does your approach compare with TopazLabs InFocus?  That works quite well at detecting the optimal radius, but it does introduce a lot of artefacts that undermine the quality of the result.
Logged

George Machen

  • Newbie
  • *
  • Offline Offline
  • Posts: 14
Re: Optimal Capture Sharpening, a new tool
« Reply #22 on: June 25, 2012, 02:55:33 pm »

If you only have the resources & inclination for one platform, then the best of all possible worlds might be Windows, but standards-compliant (no Microsoft funny business): make it work under the Wine libraries, so Mac users could use it without opening themselves to all the malware risks of a full-blown Windows OS installation. And please not Java — the security risks of the recent Flashback malware have made many Mac users disable Java completely.
Logged

kirkt

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 604
Re: Optimal Capture Sharpening, a new tool
« Reply #23 on: June 25, 2012, 05:50:47 pm »

So...

My previous "by eye" capture sharpening was, at best, hit or miss.  I have been, admittedly, struggling with my sharpening workflow - ad hoc-ing it as best I could, visually approximating based on some assessment of detail frequency.  I've read Real World Sharpening, done all that - it just turns out that my eye is not so good sometimes.  Sometimes it worked great; other times I knew my early attempts in the workflow to compensate for optical softness had caused issues that only got propagated and amplified as I continued in post.

I have been testing the capture sharpening optimization on both target images with accompanying test scenes, as well as previously shot images that, serendipitously, contained relatively useable slanted edges of high contrast.  I have been applying the deconvolution kernel in ImageJ as well as simply using the Gaussian sigma as the radius for raw conversion sharpening in ACR7.1 (I have not tried in DXO or Raw Developer yet).  The results are significantly better and more predictable.  

Moreover, once the proper capture sharpening is dialed in I find two additional benefits:

1) Less need for noise reduction.  Some ACR NR balances the proper sharpening radius, and I can leave more "grain" - i.e., less luminance NR.  This appears to be the result of not having to clobber the incorrectly sharpened image with NR to get a result.  I may have been able to achieve this balance previously, but with this new tool, it is a matter of applying a sharpening amount, as opposed to juggling radius, amount, detail, masking and do-si-do'ing around and around until it looked right.  Also, the tendency with capture sharpening is always to use a very small, sub-pixel radius.  This makes sense intuitively, but is often not the optimal choice.  Using Bart's approach takes the guesswork out and provides an efficient method for removing this inherent bias in my capture sharpening starting point.

2) Once the proper capture sharpening is applied, subsequent image up- or downsizing is less plagued by artifacts, and final output sharpening is virtually halo-free.  This is particularly noticeable compared to my "by eye" hit or miss attempts.  Using PK Sharpener is much more predictable, especially on significantly downsized images that were particularly susceptible to aliasing artifacts at low resolution and narrow output sharpening.

The combination of more original "grain" permitted to pass into the raw conversion and proper radius gives a much more useable image with fewer "corrections" required to pull out the output-optimized sharp image.

In reply to Bart's previous comments about multi-pass evaluation and deconvolution - what is particularly cool is that the ESF for a doubly deconvolved image can demonstrate the potential over-sharpening that can occur and show up as overshoot in the edge spread profile.  See the attached plot as an example of the effect of multi-pass deconvolution - I guess we're all shooting for a critically damped capture sharpening!
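
If anyone wants to reproduce that overshoot without a test shot, here's a minimal sketch (Python with numpy/scipy; my own illustration, not Bart's tool) that blurs a synthetic edge with a known Gaussian sigma and then "deconvolves" it once and twice with a naive regularised inverse filter. The exact numbers depend on the regularisation, but the double pass clearly pushes the ESF past the original 0.2/0.8 levels:

Code:
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic step edge (0.2 -> 0.8), blurred by a Gaussian PSF of known sigma.
edge = np.where(np.arange(200) < 100, 0.2, 0.8)
sigma_true = 1.5
blurred = gaussian_filter1d(edge, sigma_true)

def gaussian_deconvolve(signal, sigma, eps=1e-3):
    """Naive frequency-domain division by a Gaussian MTF (crudely regularised)."""
    f = np.fft.rfftfreq(signal.size)                  # spatial frequency, cycles/pixel
    mtf = np.exp(-2.0 * (np.pi * sigma * f) ** 2)     # Fourier transform of the Gaussian PSF
    return np.fft.irfft(np.fft.rfft(signal) / np.maximum(mtf, eps), n=signal.size)

once = gaussian_deconvolve(blurred, sigma_true)       # ESF lands close to the original step
twice = gaussian_deconvolve(once, sigma_true)         # "removes" blur that is no longer there

for name, s in (("blurred", blurred), ("deconvolved 1x", once), ("deconvolved 2x", twice)):
    print(f"{name:15s} min={s.min():+.3f}  max={s.max():+.3f}")   # 2x over/undershoots 0.2/0.8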

It is pretty clear to me that I am going to learn a lot about how I have been subtly destroying my images at the most crucial point in their life - raw conversion!

Just when I think I know a little bit about something, I learn a little more and realize I have a lot to learn.

That's what I love about this stuff.

kirk

PS - I'm a Mac user, but I'm used to having to kluge together a workflow, so whatever you choose to implement, I'll adopt and adapt.
« Last Edit: June 25, 2012, 06:15:28 pm by kirkt »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Optimal Capture Sharpening, a new tool
« Reply #24 on: June 25, 2012, 06:44:03 pm »

I had overlooked the online help buttons first time around.

My ImageJ is only 32-bit and crashes when accessing my network share files.

The 64-bit version seems to want to install an old Java SDK, which I'd prefer to avoid.

You can also install a version of ImageJ which uses an existing Java installation, but you'll have to edit some paths in the setup file that's used when IJ starts. I just used a complete installation, which installs the JRE in a subdirectory of ImageJ. I don't think it touches an existing Java version/installation, but you'd have to check the documentation for that.

Quote
How does your approach compare with TopazLabs InFocus?  That works quite well at detecting the optimal radius, but it does introduce a lot of artefacts that undermine the quality of the result.

Topaz Labs' InFocus has several modes to choose from. The Generic and the Out-of-Focus ones also require a Radius input. People usually set the Blur Radius too large, which will result in artifacts. Now you have a means of knowing the correct radius to use. The 'Unknown/Estimate' method of the plugin's deconvolution requires zooming in on some detail in a narrow DOF zone. If the chosen area includes too many clues from different DOF zones, it will get confused and generate artifacts.

My method just uses a single (optimal radius) Gaussian PSF to build a deconvolution kernel. If speed is less of an issue, my method can in principle also use a weighted average of several PSFs, but for the web version I wanted to avoid time-out issues with scripts that run too long. It is also possible to add a Gamma adjustment to the calculations, but with most Raw converters we don't have the luxury of deciding when to sharpen; by the time we can, the Raw-converted result has already been encoded with a gamma that's not 1.0. So I skipped that option as well, also to keep the user interface simple.

One can also use the Richardson-Lucy deconvolution sharpening in RawTherapee, which also uses a Radius as input parameter.
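
To make that concrete, here's a minimal sketch of the principle (Python with numpy and scikit-image as stand-ins; the 0.80 radius and the file names are just example values, and this is not the code behind the web tool): build a normalised Gaussian PSF from the fitted radius and hand it to a Richardson-Lucy routine.

Code:
import numpy as np
from skimage import img_as_float, io
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma, support=4):
    """Discrete, normalised Gaussian PSF; support = kernel half-width in sigmas."""
    r = int(np.ceil(support * sigma))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

sigma = 0.80                                           # example radius from a slanted-edge analysis
psf = gaussian_psf(sigma)

img = img_as_float(io.imread("capture_linear.tif"))   # hypothetical linear (gamma 1.0) RGB conversion
# Deconvolve each channel separately; 30 iterations is only a starting point.
sharp = np.stack([richardson_lucy(img[..., c], psf, 30) for c in range(img.shape[-1])], axis=-1)
io.imsave("capture_sharpened.tif", (np.clip(sharp, 0.0, 1.0) * 65535).astype(np.uint16))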

Cheers,
Bart
« Last Edit: June 25, 2012, 06:55:44 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Optimal Capture Sharpening, a new tool
« Reply #25 on: June 26, 2012, 09:58:05 am »

So...

My previous "by eye" capture sharpening was, at best, hit or miss.  I have been, admittedly, struggling with my sharpening workflow - ad hoc-ing it as best I could, visually approximating based on some assessment of detail frequency.  I've read Real World Sharpening, done all that - it just turns out that my eye is not so good sometimes.  Sometimes it worked great; other times I knew my early attempts in the workflow to compensate for optical softness had caused issues that only got propagated and amplified as I continued in post.

I have been testing the capture sharpening optimization on both target images with accompanying test scenes, as well as previously shot images that, serendipitously, contained relatively useable slanted edges of high contrast.  I have been applying the deconvolution kernel in ImageJ as well as simply using the Gaussian sigma as the radius for raw conversion sharpening in ACR7.1 (I have not tried in DXO or Raw Developer yet).  The results are significantly better and more predictable.

Hi Kirk,

I'm glad you have also come to the conclusion that a good sharpening Radius starting point benefits the quality and predictability of our technical image quality, from the start to its final state. I had also thought of myself as being reasonably good at finding the optimal settings for capture sharpening, until I started to create a level playing field by actually 'removing' the physical blur component.

While true deconvolution should always provide superior quality, it is true that even ACR/LR and others benefit from the (or a more) correct choice of the Radius control. What struck me most about the "Real World Sharpening" book was the continuing attempt to reduce the visibility of the resulting halos, instead of preventing them to begin with ... I'm a strong believer in Prevention being better than Cure.

Quote
Moreover, once the proper capture sharpening is dialed in I find two additional benefits:

1) Less need for noise reduction.

What's interesting about the deconvolution side of the situation is that the weighted-average contribution of surrounding pixels, say 48 or more per new pixel value for each channel, should also have a bit of an averaging effect on the per-pixel noise. Of course this adds to the central pixel's noise, which will dominate the resulting pixel value, so noise will increase (noise is related to signal) as we boost the microcontrast by lifting the blur veil.
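
As a back-of-the-envelope check on that "48 or more" figure (my own arithmetic, assuming the kernel is truncated at roughly 3 sigma):

Code:
import numpy as np

def kernel_neighbours(sigma, cutoff=3.0):
    """Pixels in a square kernel spanning +/- cutoff sigmas, minus the centre pixel."""
    r = int(np.ceil(cutoff * sigma))
    return (2 * r + 1) ** 2 - 1

for sigma in (0.7, 1.0, 1.5, 2.0):
    print(f"radius {sigma:.1f} px -> {kernel_neighbours(sigma)} surrounding pixels per channel")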

Especially for higher ISO settings one could use a bit of noise reduction before or after Capture sharpening, but as you have also found, the resulting noise has a nicer quality about it when the sharpening Radius is more in tune with the physical source of the blur itself.

I'm still a bit puzzled by the ACR/LR dialog, which starts with an Amount control instead of a Radius control, whereas the rest of the controls are in a much more logical top-to-bottom order.

Quote
I may have been able to achieve this balance previously, but with this new tool, it is a matter of applying a sharpening amount, as opposed to juggling radius, amount, detail, masking and do-si-do'ing around and around until it looked right.  Also, the tendency with capture sharpening is always to use a very small, sub-pixel radius.  This makes sense intuitively, but is often not the optimal choice.  Using Bart's approach takes the guesswork out and provides an efficient method for removing this inherent bias in my capture sharpening starting point.

That is indeed one of my goals: removing the subjective part (our eyes are easily fooled), and at least eliminating one important variable from the list of controls. The tool also allows one to get a better understanding of the effect of the Detail slider by trying a few fixed settings and then dialing in the correct Amount. As long as there are not too many negative effects on noise, I'd increase the Detail slider towards the Deconvolution-biased side.

Quote
2) Once the proper capture sharpening is applied, subsequent image up- or downsizing is less plagued by artifacts, and final output sharpening is virtually halo-free.

That's right, although downsampling may even benefit from no capture sharpening at all. The downsampled result can use a bit of sharpening, but we won't be able to set the correct parameters until after the actual downsampling. My tool can show whether damage was already done, and that may lead to using a blur before (or a different algorithm for) downsampling.

Upsampling will not only benefit from the absence of artifacts, it also gets a quality boost from the correct post-resample deconvolution. As will become apparent, even Bicubic Smoother will add halos, but now we have a tool to see if a small pre-blur will take that artifact away, after which a deconvolution will restore the sharpness that was available in the original file data. It won't necessarily create much more resolution than the original had, but it will look less blurred and still natural.
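
A rough sketch of those two resampling routes (my own illustration with Pillow; the 0.6 px pre-blur, the 2x factors and the file name are placeholders, and the post-resample deconvolution radius would still have to be measured on a slanted edge that went through the same resize):

Code:
from PIL import Image, ImageFilter

src = Image.open("capture_sharpened.tif").convert("RGB")   # hypothetical input file

# Downsampling route: a small pre-blur suppresses aliasing before the resize.
pre = src.filter(ImageFilter.GaussianBlur(radius=0.6))
small = pre.resize((src.width // 2, src.height // 2), Image.LANCZOS)

# Upsampling route: enlarge first, then deconvolve away the resampling blur
# (e.g. with the Richardson-Lucy sketch above, using a radius measured after the resize).
big = src.resize((src.width * 2, src.height * 2), Image.BICUBIC)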

Quote
The combination of more original "grain" permitted to pass into the raw conversion and proper radius gives a much more useable image with fewer "corrections" required to pull out the output-optimized sharp image.

That's it, and it also produces a more unified look between images.

Quote
In reply to Bart's previous comments about multi-pass evaluation and deconvolution - what is particularly cool is that the ESF for a doubly deconvolved image can demonstrate the potential over-sharpening that can occur and show up as overshoot in the edge spread profile.  See the attached plot as an example of the effect of multi-pass deconvolution - I guess we're all shooting for a critically damped capture sharpening!

Absolutely, and yet we do have the freedom to tweak to our heart's content. One could e.g. tweak the second/third deconvolution filter's scale down a bit, or make another combination of kernels. Lots of possibilities if one can invest some time in it.

Quote
It is pretty clear to me that I am going to learn a lot about how I have been subtly destroying my images at the most crucial point in their life - raw conversion!

Just when I think I know a little bit about something, I learn a little more and realize I have a lot to learn.

That's what I love about this stuff.

The same here, the learning never stops but it helps to have some useful tools to assist in that process.

Cheers,
Bart
« Last Edit: October 18, 2012, 11:40:30 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Hiroshi S.

  • Newbie
  • *
  • Offline Offline
  • Posts: 5
Re: Optimal Capture Sharpening, a new tool
« Reply #26 on: July 05, 2012, 11:03:44 am »

Bart,

I just found this thread and while I haven’t played with your tool it looks fantastic! Before I print your target, shoot an aperture series and analyze the images, I have a few questions if you don’t mind.

-   You mention the intriguing possibility of using your tool and an ImageJ deconvolution kernel to help with upsampling artifacts. What workflow do you recommend for this? Would you “capture sharpen”, then increase the size, and then resharpen? Or only sharpen the final image? How would you build the deconvolution kernels? Only one for the increased size, or two separate ones (before and after resizing)?
-   I just downloaded the trial version of DxO pro and I am pretty impressed with their lens modules, and how DxO lifts the “veil” of some of my images and restores microcontrast. I wonder if they apply similar methods to build their camera/lens modules?
-   Maybe you should consider setting up a database, so that people who go through the trouble of shooting an aperture series with their favorite camera/lens can deposit the data (and/or the original slanted edge images).
-   I assume the distance at which you shoot the target doesn’t influence the PSF of the camera lens combination (and 25-50x the focal length would be fine and then can be used for all images)
-   Lastly, what happens with out-of-focus blur/bokeh if you apply your tool?

Sorry about these probably naïve questions, but I am a newbie when it comes to sharpening.  I just got a D800E, and while I don’t expect that it needs a lot of sharpening in general, the upsizing and diffraction recovery possibilities look very attractive. Maybe I’ll also convert to the D800 once I discover that its deconvolved images look as good as the D800E files  ;)
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Optimal Capture Sharpening, a new tool
« Reply #27 on: July 05, 2012, 08:06:15 pm »

Bart,

I just found this thread and while I haven’t played with your tool it looks fantastic! Before I print your target, shoot an aperture series and analyze the images, I have a few questions if you don’t mind.

Hi Hiroshi,

No problem.

Quote
-   You mention the intriguing possibility of using your tool and an ImageJ deconvolution kernel to help with upsampling artifacts. What workflow do you recommend for this? Would you “capture sharpen”, then increase the size, and then resharpen? Or only sharpen the final image? How would you build the deconvolution kernels? Only one for the increased size, or two separate ones (before and after resizing)?

Those are indeed the two routes one could take. If we can exactly nail the Capture sharpening, then I would prefer to do that as step one, because it would give a better idea of how far we can go with subsequent Creative sharpening without introducing e.g. clipping. On the other hand, if we e.g. have a high ISO image and we do not want to noise-reduce all the life out of it, we could consider postponing the Capture sharpening and wrapping it together into one operation if we already know we are going to enlarge the image. One other consideration is which upsampling artifacts we may encounter, and whether correcting them is better done on a sharpened or unsharpened basis.

In general, because I'm a low ISO shooter myself (if possible), I would probably go for separate deconvolution sharpening for Capture, and again when preparing for upsampling+output. I will start another thread about the upsampling workflow where my tool can help to analyse issues and solve some of the softness (it won't create new detail, but it will restore losses).

Quote
-   I just downloaded the trial version of DxO pro and I am pretty impressed with their lens modules, and how DxO lifts the “veil” of some of my images and restores microcontrast. I wonder if they apply similar methods to build their camera/lens modules?

They essentially do the same, but with many more things being considered. They also differentiate across the image, and thus treat e.g. corners with their specific deblurring. That's why it can take a while before a camera/lens combination is added to the converter solutions that are automatically invoked based on EXIF information. They also calibrate for distance, because lenses do not necessarily perform equally well at all distances.

Viewed in that light, it is amazing how much a single sharpening radius can already restore. For lenses with very poor corner performance one can attempt two separate deconvolutions, one based on the center of the image and one based on the corners, and then use a radial blend to combine the results in Photoshop. A Raw converter like Capture One already allows one to compensate for sharpness fall-off.
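
A crude way to do that radial blend outside Photoshop (my sketch, not DxO's method; the two inputs are hypothetical RGB float arrays of the same frame, deconvolved once with the centre radius and once with the larger corner radius):

Code:
import numpy as np

def radial_mask(h, w, power=2.0):
    """0 at the image centre, rising to 1 at the extreme corners."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot((y - cy) / cy, (x - cx) / cx) / np.sqrt(2.0)
    return np.clip(r, 0.0, 1.0) ** power

centre_sharp = np.load("centre_sharp.npy")   # hypothetical: RGB frame deconvolved with the centre radius
corner_sharp = np.load("corner_sharp.npy")   # hypothetical: same frame deconvolved with the corner radius

mask = radial_mask(*centre_sharp.shape[:2])[..., None]      # broadcast over the colour channels
blended = (1.0 - mask) * centre_sharp + mask * corner_sharp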

Quote
-   Maybe you should consider setting up a database, so that people who go through the trouble of shooting an aperture series with their favorite camera/lens can deposit the data (and/or the original slanted edge images).

If people want to share their findings, and make a serious effort to follow the guidelines of no sharpening, a linear tone curve, a decent low-ISO exposure (medium grey stays medium grey), and normal contrast (so black and white are not clipped), then that information can also be useful for others.

I wouldn't mind making an overview when the data is sent to me (link is at the bottom of the tool's webpage).
 
Quote
-   I assume the distance at which you shoot the target doesn’t influence the PSF of the camera lens combination (and 25-50x the focal length would be fine and then can be used for all images)

That's correct, the target is 'scale invariant'. In fact that is a major benefit, because it prevents the need for magnification calibration. The only thing not covered is when lenses perform significantly better/worse at certain distances other than these medium-distance settings. Things can be done, though, for extreme situations like macro, scanners, or long telephoto lenses. For scanners I use a slide mount with a razor blade mounted at a slant, and for long distances one can use a larger version of the target (enlarged and deconvolution sharpened, ;) ).

Quote
-   Lastly, what happens with out-of-focus blur/bokeh if you apply your tool?

It stays OOF, but becomes a bit less blurred. If the target itself is not optimally focused, then removal of that level of defocus will be attempted. All my tool does is determine the major blur component, and fit a model to allow removal of that particular blur. Similar but different blur levels will be sub-optimally restored, and a certain amount of blur will remain where the radius of that blur was larger. If there are fore/background zones with better focus (a smaller radius), then they will be restored with too large a radius and sharpening halos are the likely result. Therefore it is important to try and focus as well as possible, to find the smallest possible blur radius one could encounter in an image.

Quote
Sorry about these probably naïve questions, but I am a newbie when it comes to sharpening.

No, there is no need to be so modest, your questions were excellent and may help others who were wondering but didn't ask.

Quote
I just got a D800E, and while I don’t expect that it needs a lot of sharpening in general, the upsizing and diffraction recovery possibilities look very attractive. Maybe I’ll also convert to the D800 once I discover that its deconvolved images look as good as the D800E files  ;)

Well, the focus is only perfect in a very narrow zone around the focus plane, and there will always be some level of residual lens aberrations and/or diffraction, even on cameras without an AA-filter. And then there is a Demosaicing step, which has to make trade-offs between artifacts and sharpness. And then there is resampling, up or down, which will add its own blur. There will always be something to improve, and now we can know how to do that.

Cheers,
Bart
« Last Edit: July 05, 2012, 08:12:37 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

kirkt

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 604
Re: Optimal Capture Sharpening, a new tool
« Reply #28 on: July 06, 2012, 07:35:46 pm »

Bart,

This tool is super useful.  I have been testing it on some images I shot previously in high contrast (sunlit) conditions with a 5DII+70-200 2.8 with a 1.4x extender.  This combination left all of the images soft, but the contrast in the images permits focus to be evaluated fairly well.  I nailed focus most of the time.

I was able to find a set of images where there was a distinct, high-contrast slanted edge in the image - i.e., I did not use the target to assess the blur of the combination, but I used "field data" to assess the edge spread function.  I was not surprised that the tool output a sigma of 1.99xxxxx.  But, I would NEVER have used a capture sharpening radius that large, it just does not seem right.  I tested this batch of images with Capture One and the difference is huge.  I will post a comparison here to demonstrate once I get all of the images and data together.  I also used the field-data based deconvolution kernel and I will post that for comparison as well.  It seems that deconvolution spares clipping highlights, whereas USM in C1 appears, to my eye, to cause some highlight clipping upon sharpening (nothing really noticeable in reality, but every bit counts).
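
For anyone wondering how a number like 1.99 falls out of "field data": once the pixels along a slanted edge are projected onto their distance from the edge, the edge-spread profile can be fitted with an error function whose sigma is the reported radius. Here's a minimal sketch of that fit (Python/scipy, with a faked profile standing in for real projected samples - not Bart's actual script):

Code:
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def esf_model(x, lo, hi, centre, sigma):
    """Edge-spread function of a step edge blurred by a Gaussian of width sigma."""
    return lo + (hi - lo) * 0.5 * (1.0 + erf((x - centre) / (sigma * np.sqrt(2.0))))

# x: sub-pixel distances from the edge, y: linear intensities. Faked here for illustration.
x = np.linspace(-8, 8, 400)
y = esf_model(x, 0.1, 0.9, 0.0, 1.99) + np.random.normal(0.0, 0.01, x.size)

popt, _ = curve_fit(esf_model, x, y, p0=(y.min(), y.max(), 0.0, 1.0))
print(f"fitted sigma (capture sharpening radius): {popt[3]:.2f} px")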

Suffice it to say that I would never have eyeballed a 1.9 pixel capture sharpening radius before, but now that I can assess this critical variable quantitatively, it makes so much more sense and permits tweaking in no time.  

Totally cool.

kirk
Logged

elliot_n

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1219
Re: Optimal Capture Sharpening, a new tool
« Reply #29 on: July 06, 2012, 09:52:05 pm »

Re. The f4.5/f16 pine cone comparison. It's not very persuasive. Are you sure the f16 image has only been degraded by diffraction? It looks like it is back-focused. Or maybe the foreground foliage has moved during exposure?

It would be good to see a more compelling visual demonstration.
« Last Edit: July 06, 2012, 09:53:43 pm by elliot_n »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Optimal Capture Sharpening, a new tool
« Reply #30 on: July 07, 2012, 06:42:50 am »

Re. The f4.5/f16 pine cone comparison. It's not very persuasive. Are you sure the f16 image has only been degraded by diffraction? It looks like it is back-focused.

I focused with 10x Live View magnification using a loupe on the camera's LCD. I'd say focus was accurate, and the wider aperture shot proves that.

Quote
Or maybe the foreground foliage has moved during exposure?

Sure, that is always possible, and as I mentioned I tried shooting between the moments of wind moving the branches. That's landscape photography for ya ...

Quote
It would be good to see a more compelling visual demonstration.


I tried to avoid a brick wall, and to use a subject that's more in line with the name of this website. But feel free to convince yourself, while I look for a more stable subject (and try to avoid road traffic vibrations and/or atmospheric turbulence).

I do have testchart shots I can share, but not too many people get excited about that type of subject because it is too remote from what they usually shoot (it's harder to make the mental connection to what improvements to expect in their specific shooting situations).

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Re: Optimal Capture Sharpening, a new tool
« Reply #31 on: July 07, 2012, 07:05:51 am »

Hi,

Here is a test shoot with different apertures on a 16 MP APS-C camera, same pixel pitch as Nikon D800:

http://echophoto.dnsalias.net/ekr/index.php/photoarticles/49-dof-in-digital-pictures?start=1

Left column is correct focus with apertures f/4 - f/16

Best regards
Erik
Logged
Erik Kaffehr
 

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Optimal Capture Sharpening, a new tool
« Reply #32 on: July 07, 2012, 07:34:32 am »

Bart,

This tool is super useful.

Hi Kirk,

I obviously agree ;)  I'm glad it's found to be useful to others as well, and it is also clear to me that you really understand the importance of what it teaches us and how it allows us to improve our technical image quality. It removes a lot of subjectivity, and it demonstrates how poor we humans are at finding the optimal settings by eye.

Quote
I was not surprised that the tool output a sigma of 1.99xxxxx.  But, I would NEVER have used a capture sharpening radius that large, it just does not seem right.  I tested this batch of images with Capture One and the difference is huge.  I will post a comparison here to demonstrate once I get all of the images and data together.

That would be appreciated a lot. You have proven what I said before, that it's hard to accept or even find these better settings by eyeballing the previews of our sharpening tools. Subjectively, and we have been taught that in books on the subject as well, we would expect that small radius settings are best for high spatial frequency subject matter. Well, apparently they are not always the best, and quality is left on the table if we do not look beyond our preconceptions.

Quote
I also used the field-data based deconvolution kernel and I will post that for comparison as well.  It seems that deconvolution spares clipping highlights, whereas USM in C1 appears, to my eye, to cause some highlight clipping upon sharpening (nothing really noticeable in reality, but every bit counts).

Yes, those are my findings as well. Of course, the closer the deconvolution kernel comes to the actual convolution that took place, the better the restoration will be (and halos were not in the original signal so they should not be in the reconstructed signal either). Halos can lead to clipping because they overshoot the original signal level gradients.

Quote
Suffice it to say that I would never have eyeballed a 1.9 pixel capture sharpening radius before, but now that I can assess this critical variable quantitatively, it makes so much more sense and permits tweaking in no time.

Totally cool.

Yes, that's another benefit. There is a learning effect (or maybe even an unlearning of preconceived notions) that allows us to reach better results much faster once we've invested some time. It's a good investment IMHO, because one will soon discover that there are similarities between how different lenses behave.

My best lenses so far all produce radii of around 0.7-0.8 in the center of the image at the optimal aperture (knowing that also makes it easier to spot a 'poor' lens, e.g. a new one or a rental), and there is a deterioration towards the more defocused regions that follows a somewhat parabolic path. There is also a pattern across apertures, so we may interpolate results quite accurately without the need to test each possible setting (although testing each one would be even more accurate). The returns on the time invested are diminishing, so it helps if these patterns prove to be reliable.

The addition of Extenders or Teleconverters, which effectively magnify the optical projection (and blur) of the lens itself and add a bit of blur of their own, shows that the results can be surprising at first but are actually somewhat predictable. Using my tool will show exactly how that works out; no more guessing, but actual facts instead.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Optimal Capture Sharpening, a new tool
« Reply #33 on: July 07, 2012, 08:17:20 am »

Hi,

Here is a test shoot with different apertures on a 16 MP APS-C camera, same pixel pitch as Nikon D800:

http://echophoto.dnsalias.net/ekr/index.php/photoarticles/49-dof-in-digital-pictures?start=1

Left column is correct focus with apertures f/4 - f/16

Hi Erik,

Yes, that illustrates the 2 dimensions (defocus/diffraction) around the optimum nicely.

It would be interesting to add deconvolved versions of the images to the test, but of course we'd need to have an idea of the actual blur radius involved. The radius/radii can be established by shooting a slanted edge after the fact in a similar setup.

I could guess the radius that does the best job, but as shown before we can be surprised by the actual radius we need. Also, guessing based on a JPEG is not very reliable, although I already get significantly improved results with some quick trials (although diffraction or defocus losses in micro detail cannot be restored once they disappear in the 8-bit rendering).

Cheers,
Bart
« Last Edit: July 07, 2012, 08:56:07 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Optimal Capture Sharpening, a new tool
« Reply #34 on: July 08, 2012, 01:50:59 am »

What happens to OOF areas? Test on bubbles:
http://www.sendspace.com/file/xfp28l

I have no doubt this system works, I just went "huh?" at the beginning when it started looking complicated.

If you sharpen this shot, what happens to the OOF areas versus a deconvolution method? A good deconvolution should be like stopping down a bit. What does USM do?
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Optimal Capture Sharpening, a new tool
« Reply #35 on: July 08, 2012, 09:24:53 am »

What happens to OOF areas? Test on bubbles:
http://www.sendspace.com/file/xfp28l

Here is the result:
https://rcpt.yousendit.com/1595297373/8350158a870bbadcb79d842b19582da4
The link will expire on July 15, 2012 05:57 PDT.

Of course I have no idea which settings to use for the best result, without doing a slanted edge test of your setup. The file is already quite sharp - it looks like it could be from a camera without an AA-filter - so I guessed that a 0.60 radius would be best to use. As said before, eyeballing the right settings is failure prone, but this is all I had to go on.

It did sharpen the in-focus bubbles and rims/edges into having more punch, while not affecting the defocused areas too much visually. Of course, basing this on JPEG input is not ideal, so I saved the linked result as a PNG file, to avoid adding another lossy compression. I've attached a JPEG version in case the link has expired when people read this.

Quote
What does USM do?

The original is already reasonably sharp, so the difference will not be huge, but USM does not increase resolution like deconvolution does, it just boosts edge contrast.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Optimal Capture Sharpening, a new tool
« Reply #36 on: July 08, 2012, 11:11:21 am »

Interesting, thanks.
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Optimal Capture Sharpening, a new tool
« Reply #37 on: July 08, 2012, 01:34:03 pm »

Here is the raw if you want to test the settings in the original conversion.
http://www.sendspace.com/file/r22fje

Of course a good prime doesn't need any capture sharpening. (This is an old Minolta 50 2.8 from ebay - 1970s? ) The A350 CCD color is great. Digressing...

Look for the impact of any sharpening on the specular highlights in the bottom right amber color. Also the beer neck label around the o.
Logged

hjulenissen

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2051
Re: Optimal Capture Sharpening, a new tool
« Reply #38 on: July 08, 2012, 01:35:30 pm »

...USM does not increase resolution like deconvolution does, it just boosts edge contrast.
In the linear sense, all you can do to combat blur is to amplify (weak) high-frequency components to sharpen an image, until the signal component looks "good" or "close to correct" while the noise component looks "not too bad".

Now, USM and deconvolution are usually not purely linear processes, but your statement above seems strange to me.

-h
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Optimal Capture Sharpening, a new tool
« Reply #39 on: July 08, 2012, 06:17:56 pm »

In the linear sense, all you can do to combat blur is to amplify (weak) high-frequency components to sharpen an image, until the signal component looks "good" or "close to correct" while the noise component looks "not too bad".

Now, USM and deconvolution are usually not purely linear processes, but your statement above seems strange to me.

-h


Sounds fine to me. A deconvolution shrinks circles of confusion. A USM boosts contrast. If you imagine an edge as built from sine waves, R-L restores the amplitude of the weakened high-frequency components, steepening the edge, while USM mainly increases the amplitude (contrast) across the edge.
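
One way to put numbers on that (my own toy example, treating both the capture blur and the USM filter as Gaussians of sigma 1.0 px, with a 150% USM amount): a Gaussian blur attenuates each spatial frequency by its MTF, an ideal noise-free deconvolution multiplies by 1/MTF, while USM applies the bounded boost 1 + amount x (1 - MTF). In practice a deconvolver regularises that 1/MTF gain so it doesn't amplify noise without limit.

Code:
import numpy as np

f = np.linspace(0.0, 0.5, 6)                                  # spatial frequency, cycles/pixel (0.5 = Nyquist)
gauss_mtf = lambda sigma: np.exp(-2.0 * (np.pi * sigma * f) ** 2)

blur = gauss_mtf(1.0)                           # capture blur, sigma = 1.0 px (example value)
deconv_gain = 1.0 / blur                        # ideal, noise-free deconvolution
usm_gain = 1.0 + 1.5 * (1.0 - gauss_mtf(1.0))   # USM: amount 150%, radius 1.0 px (example values)

for fi, b, d, u in zip(f, blur, deconv_gain, usm_gain):
    print(f"f={fi:.2f}  blur MTF={b:.3f}  deconv gain={d:9.1f}  USM gain={u:.2f}")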
Logged