
Author Topic: Capture sharpening - how much? and when?  (Read 1688 times)

Hening Bettermann

  • Sr. Member
  • ****
  • Offline
  • Posts: 933
    • landshape.net
Capture sharpening - how much? and when?
« on: April 14, 2019, 03:20:30 pm »


Hi!

I have so far delayed sharpening until after (up)scaling. Theoretically, I know that I should capture-sharpen as early as possible. That would then be in the raw converter -? And I assume it should be done as pure deconvolution, since everything else is not 'real' but just cheats the eye. I also understand that capture sharpening should only restore the degradation caused by the capture process *and nothing more*.

So how do I know "nothing more"?

And at what magnification should I view? At 100%, I don't see much. Anything beyond that is shaped by the scaling algorithm, which I have learned makes a BIG difference.

I pick an image shot with a CY Planar at f/11 and choose a central area, so as not to be misled into oversharpening.

My raw converter, Raw Therapee, offers RL Deconvolution. I leave the amount, damping and iterations at their defaults (100, 0 and 30, respectively) and just play with the radius. The minimum is 0.40.

I start at 100% view, look at one of the very finest branches which is supposed to be in focus and arrive at a radius of 0.65.

Then I switch to 400% and look at some other very fine branches that stand out against the sky, where I expect halos to appear. But these branches are slightly outside best focus to begin with. Some of them have a darker central part and almost-white (but not clipped) edges before sharpening. The difference increases with the radius; see the screen shots.

Is any of these radii the right one? Is this the way to determine it? One thing is what I can see; another is what downstream software will see, such as Helicon Focus for stacking and Iridient for upscaling and subsequent sharpening. Nobody other than downstream software will ever see the image in this state, so why bother if I cannot really anticipate what *they* see?

I determined the sharpening radius before any edits. In this case, I would apply a contrast of +30 in RT. This will of course increase the visibility of the halos. So maybe this should be done before sharpening?

This little experiment makes me feel I was well advised to delay sharpening - but my feeling may be as wrong as my visual judgement... Or maybe I should just apply the minimum radius as a standard, avoiding all the hassle, and continue to defer the rest until after scaling?

Thank you for your comments!

faberryman

  • Sr. Member
  • ****
  • Online
  • Posts: 856
Re: Capture sharpening - how much? and when?
« Reply #1 on: April 14, 2019, 03:29:01 pm »

Just do it until it looks best to you.

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12510
    • http://www.markdsegal.com
Re: Capture sharpening - how much? and when?
« Reply #2 on: April 14, 2019, 04:28:28 pm »

Buy a copy of the Fraser/Schewe book called "Image Sharpening with Adobe Photoshop, Camera Raw and Lightroom". Most of it applies to any application and will give you the best foundation in sharpening you'll find anywhere.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12510
    • http://www.markdsegal.com
Re: Capture sharpening - how much? and when?
« Reply #3 on: April 14, 2019, 04:32:28 pm »

Also, go to the Pixelgenius website and download a free copy of the Photokit applications, in particular Photokit Sharpener 2, and read the manual. Shorter than the book and a tremendous education on what sharpening is and how to do it properly.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 7839
Re: Capture sharpening - how much? and when?
« Reply #4 on: April 14, 2019, 08:38:26 pm »

Quote
Hi!

I have so far delayed sharpening until after (up)scaling. Theoretically, I know that I should capture-sharpen as early as possible. That would then be in the raw converter -? And I assume it should be done as pure deconvolution, since everything else is not 'real' but just cheats the eye. I also understand that capture sharpening should only restore the degradation caused by the capture process *and nothing more*.

So how do I know "nothing more"?

Hi Hening,

It's not just the amount, but also the shape of the blur function we should try to estimate correctly. Fortunately for us, when we have a combination of blur sources, e.g. optical blur + demosaicing blur, then the shape of the blur function (the Point Spread Function, or PSF) starts looking like a Gaussian blur. So, as a first try, dialing in the correct width of the Gaussian blur should get us on the right track. Of course, if there is also a certain amount of Diffraction blur (with a different shape from Gaussian, i.e. an Airy disk shape) then the Gaussian shape slowly starts looking more like an Airy shape as the amount of diffraction blur starts to dominate. If there is defocus blur, with yet another shape, the PSF will gradually start looking more like a defocus blur (a convolution with a more or less circular aperture, the Circle of Confusion).

So, it's the blur shape, and then the amount of it that matters when counteracting its effects.

If we nail both of them, and can deconvolve with that, then we have 'enough'.
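The tendency of mixed blur sources toward a Gaussian shape can be sketched numerically. The kernels below (a box as a crude stand-in for defocus blur, a small triangle for demosaicing blur) are illustrative assumptions, not measured PSFs:

```python
import numpy as np

def convolve_psfs(*psfs):
    """Combine independent 1-D blur kernels by convolution.

    Convolving several different blur shapes quickly tends toward
    a Gaussian-like bell (central limit theorem), which is why a
    Gaussian PSF is a reasonable first guess for mixed blur sources.
    """
    out = np.array([1.0])
    for p in psfs:
        out = np.convolve(out, p / p.sum())  # normalize each kernel
    return out

box = np.ones(5)                 # crude stand-in for defocus blur
tri = np.array([1.0, 2.0, 1.0])  # crude stand-in for demosaicing blur

psf = convolve_psfs(box, tri, tri)  # symmetric, bell-shaped, sums to 1
```

Even after combining just three non-Gaussian kernels, the result is already close to a bell curve.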

Quote
And at what magnification should I view? At 100%, I don't see much. Anything beyond that is shaped by the scaling algorithm, which I have learned makes a BIG difference.

Correct, so we should look at a magnified view, and use an algorithm that doesn't change the edge contrast of sharp transitions between light and dark edges and lines. Edges and lines are visually easy to predict most of the time, and when we use Nearest Neighbor interpolation we are guaranteed to not change the edge transitions, just make them larger and easier to see. Many applications do the right thing for this purpose, when we zoom in beyond 100% magnification.
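A minimal illustration of why nearest neighbour is the safe choice for inspection (the pixel values below are invented):

```python
import numpy as np

def nn_magnify(img, factor):
    """Integer-factor nearest-neighbour magnification: each pixel is
    simply repeated, so edge transitions keep their exact values --
    nothing is smoothed or resharpened by the viewer."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

edge = np.array([[10, 10, 200, 200]])  # a hard dark/light edge
big = nn_magnify(edge, 4)              # 4x view; values unchanged
```

The set of pixel values is identical before and after magnification; only their count grows, which is exactly what we want when judging halos.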

Quote
I pick an image shot with a CY Planar at f/11 and choose a central area, so as not to be misled into oversharpening.

Well done, since we do not want to oversharpen the center (or use the wrong settings there) based on regions of the image that are likely to exhibit more optical degradation towards the edges and corners. Unless, of course, we specifically focused on something off-center and there is nothing in the center that's in better focus.

Quote
My raw converter, Raw Therapee, offers RL Deconvolution. I leave the amount, damping and iterations at their defaults (100, 0 and 30, respectively) and just play with the radius. The minimum is 0.40.

Richardson-Lucy (RL) deconvolution is still an excellent compromise to use as a deconvolution algorithm, assuming that the PSF is somewhat Gaussian in shape and the image was formed by photons (i.e. somewhat noisy, with a standard deviation equal to the square root of the intensity). This does assume that the image was shot at native sensor ISO, and that not much noise other than photon/Poisson noise was present. It also assumes that the deconvolution is preferably done at linear gamma, just like the exposure was captured.

Quote
I start at 100% view, look at one of the very finest branches which is supposed to be in focus and arrive at a radius of 0.65.

Then I switch to 400% and look at some other very fine branches that stand out against the sky, where I expect halos to appear. But these branches are slightly outside best focus to begin with. Some of them have a darker central part and almost-white (but not clipped) edges before sharpening. The difference increases with the radius; see the screen shots.

Now here things get (more) tricky.

First, we may be facing aliasing artifacts. Anti-Aliasing filters (or Optical Low Pass filters, OLPF) on the sensor, if any, are usually not strong enough to prevent all of the aliasing artifacts (because that would blur the captured image, potentially more than we can restore with deconvolution). In your example case though, f/11 also adds a significant amount of diffraction blur.

Second, in addition, backlit edges may indeed be lighter, because backlight rays produce more specular reflection at shallow angles: they reflect more than they are absorbed/diffused. Only a razor's edge can avoid this.

In my experience, mostly with AA-filtered images, an optically optimal aperture (often f/4 to f/5.6) produces a Gaussian type of blur with a radius of about 0.7, slightly less if perfect optics and perfect focus are achieved (and no camera shake or subject motion is involved). It is technically very hard to achieve better focus than that, which would call for smaller radii.
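To put the f/11 diffraction remark in numbers, here is a rough back-of-envelope comparison of the Airy disk to the pixel size (the wavelength and pixel pitch are assumed ballpark values, not measurements):

```python
# First-zero radius of the Airy pattern: r = 1.22 * wavelength * N
wavelength_um = 0.55          # green light, mid-spectrum (assumption)
f_number = 11.0
airy_radius_um = 1.22 * wavelength_um * f_number   # about 7.4 um

pixel_pitch_um = 4.5          # ballpark for a 42 MP full-frame sensor (assumption)
radius_in_pixels = airy_radius_um / pixel_pitch_um  # more than 1.5 pixels
```

So at f/11 the diffraction blur alone already spans more than a pixel, which is why it visibly adds to the overall PSF.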

Quote
Is any of these radii the right one? Is this the way to determine it?

The 'best' radius seems to be slightly smaller than expected. That suggests something is not quite as expected.

It takes a bit of practice, but one needs to look at:
1. Edge halos that start showing up as the radius increases. Branches against a bright sky can have specular reflections at the top of the branch, but less likely at the bottom edge.
2. Slanted edges, where the stair-stepped edge should change monotonically; if it over-/undershoots as we follow the edge, the radius is too large. Straight edges are easiest to judge.

Quote
I determined the sharpening radius before any edits. In this case, I would apply a contrast of +30 in RT. This will of course increase the visibility of the halos. So maybe this should be done before sharpening?

Capture Sharpening is typically done before other tonal adjustments, in linear gamma, to preserve the Poisson nature of the photon noise. How, and in what order, the Raw converter actually processes the adjustments is something else.

Quote
This little experiment makes me feel I was well advised to delay sharpening - but my feeling may be as wrong as my visual judgement... Or maybe I should just apply the minimum radius as a standard, avoiding all the hassle, and continue to defer the rest until after scaling?

The latter. Scaling also benefits from proper (but not overdone) Capture sharpening. If, for whatever reason, your Capture sharpening produces artifacts, dial down the settings. We do not want to magnify artifacts.

Cheers,
Bart

P.S. New developments in Sharpening applications offer different approaches. A recent application that deserves some attention, is Topaz Sharpen AI. That application takes (unsharpened) images, analyses the structure, and replaces the details with trained sharper structures. In many cases, this offers superior results, without sharpening halos, especially when also (subject or camera) motion blur was involved.
Logged
== If you do what you did, you'll get what you got. ==

JaapD

  • Full Member
  • ***
  • Offline
  • Posts: 182
Re: Capture sharpening - how much? and when?
« Reply #5 on: April 15, 2019, 02:11:00 am »

Hi Hening,

This subject seems to keep us awake from time to time. The best advice is "don't overdo it!". Once oversharpened in an early processing stage, there is no step back later on.

Looking at your images it looks like 0.50.png is critical and 0.65.png is over the top. Why? Let me clarify: With 0.50.png I don’t see brighter pixel values around the branch but with 0.65.png I clearly do, indicating an oversharpened edge.

My way of working is to check for brighter values around steep edges, like a branch against a sky: values brighter than the sky or any other surroundings. When I see them, I have oversharpened and need to apply different settings, either for Amount or Radius.
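That check can be sketched in a few lines; the pixel values below are invented for illustration:

```python
import numpy as np

def has_overshoot(edge_profile, background):
    """Flag oversharpening: any pixel along a dark-to-sky edge profile
    that is brighter than the flat background (the sky) is a halo."""
    return bool((np.asarray(edge_profile) > background).any())

sky = 235
before = [60, 120, 200, 235, 235]   # smooth dark-to-sky transition
after  = [40, 110, 215, 245, 235]   # 245 > sky: oversharpened halo
```

Running `has_overshoot` on the two profiles flags only the second one.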

I usually agree with about EVERYTHING Bart states, as Bart is highly knowledgeable in this field. However, I don't know where his statement "Scaling also benefits from proper (but not overdone) Capture sharpening" is coming from. To me, scaling (as well as keystoning, or setting the horizon straight) does not benefit from any degree of sharpening whatsoever.
@ Bart: please jump in if you want...... highly appreciated!

I think we're better off doing all our image processing steps on non-presharpened and smooth images, doing sharpening only as a final processing step, after scaling. There is one exception to this, and that is deconvolution sharpening for lens-diffraction correction purposes (as so nicely built into CaptureOne).

Regards,
Jaap.

Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 7839
Re: Capture sharpening - how much? and when?
« Reply #6 on: April 15, 2019, 08:07:13 am »

Quote
However I don't know where his statement "Scaling also benefits from proper (but not overdone) Capture sharpening" is coming from. To me scaling (as well as keystoning, setting the horizon straight) does not benefit from any degree of sharpening whatsoever.
@ Bart: please jump in if you want...... highly appreciated!

Hi Jaap,

For a successful deconvolution we need data that is as original as possible. Resampling and scaling introduce new artifacts, which reduce the chance of optimal deconvolution. Proper Capture sharpening also creates better microdetail, which can then be upscaled more accurately. However, as always, it's a trade-off, and I can live with that fact of life. If we introduce artifacts during Capture sharpening, those will propagate through the chain of subsequent steps.

Quote
I think we're better off doing all our image processing steps on non-presharpened and smooth images, doing sharpening only as a final processing step, after scaling. There is one exception to this, and that is deconvolution sharpening for lens-diffraction correction purposes (as so nicely built into CaptureOne).

If upsampling is anticipated in our workflow, I recommend giving Topaz Gigapixel AI a try. Gigapixel AI is capable of taking faint hints of detail and turning them into larger-sized, higher-resolution detail. It sort of combines the best of both worlds: no risk of creating Capture sharpening artifacts, yet at the same time utilizing any hint of captured detail and turning it into sharpened detail at a larger size. The way they implemented the AI creates a whole new playing field, one with lots of potential for higher-quality output.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12510
    • http://www.markdsegal.com
Re: Capture sharpening - how much? and when?
« Reply #7 on: April 15, 2019, 08:56:46 am »

I think there is a fundamental question worth addressing about what kind of sharpening most normal photographs need and can be accomplished most easily with the least risk of damage to the image. With all due respect to Bart's expertise and penchant for deconvolution sharpening, I continue to be of the view that traditional acutance sharpening using high quality applications such as Photokit Sharpener, and with those criteria in mind, remains an optimal approach for most normal photographic purposes much of the time. Deconvolution has a definite place in the toolset especially for those situations that can't respond to acutance sharpening, so this is not an "either/or"; hence, I think it fair to suggest to Hening that he not lose sight of the traditional alternative, and to explore it fully before assuming that he is necessarily better off with deconvolution as his default approach.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3349
Re: Capture sharpening - how much? and when?
« Reply #8 on: April 15, 2019, 10:44:19 am »


It's not just the amount, but also the shape of the blur function we should try to estimate correctly. Fortunately for us, when we have a combination of blur sources, e.g. optical blur + demosaicing blur, then the shape of the blur function (the Point Spread Function, or PSF) starts looking like a Gaussian blur. So, as a first try, dialing in the correct width of the Gaussian blur should get us on the right track. Of course, if there is also a certain amount of Diffraction blur (with a different shape from Gaussian, i.e. an Airy disk shape) then the Gaussian shape slowly starts looking more like an Airy shape as the amount of diffraction blur starts to dominate. If there is defocus blur, with yet another shape, the PSF will gradually start looking more like a defocus blur (a convolution with a more or less circular aperture, the Circle of Confusion).

So, it's the blur shape, and then the amount of it that matters when counteracting its effects.

If we nail both of them, and can deconvolve with that, then we have 'enough'.

Bart,

Thank you for an excellent introduction on what PSF to use for blur reduction. It would be of interest to know what PSF is assumed by various commercially available algorithms. Photoshop's Smart Sharpen offers Gaussian blur, lens blur, and motion blur. The Gaussian option would be good where multiple blur sources have resulted in an approximately Gaussian blur shape, as you have discussed. I presume the lens blur is primarily for mitigation of diffraction. Optical aberrations vary according to the lens in use, and a non-smart algorithm would not know what PSF to employ. The motion blur option is for removing blur caused by linear motion; one enters the angle and the amount of motion along it. I have been advised by various presumably authoritative sources to use lens blur for most images in general photography.

Focus Magic, an old favorite, seems to deal with motion blur and defocus blur, but I have noted that it is also useful for dealing with diffraction blur.

Adobe shake reduction tries to deal with a random walk type of blur and it occasionally works. The new kid on the block, Topaz Sharpen AI, appears more comprehensive, offering sharpen, stabilize, and focus. My impression is that one would use sharpen to reduce demosaicing blur and diffraction blur. The other two options are self explanatory.

Sharpen AI could replace Topaz InFocus for capture sharpening if one is not concerned about speed.

Your comments and clarification would be greatly appreciated.

Regards,

Bill

Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3349
Re: Capture sharpening - how much? and when?
« Reply #9 on: April 15, 2019, 11:40:04 am »

I think there is a fundamental question worth addressing about what kind of sharpening most normal photographs need and can be accomplished most easily with the least risk of damage to the image. With all due respect to Bart's expertise and penchant for deconvolution sharpening, I continue to be of the view that traditional acutance sharpening using high quality applications such as Photokit Sharpener, and with those criteria in mind, remains an optimal approach for most normal photographic purposes much of the time. Deconvolution has a definite place in the toolset especially for those situations that can't respond to acutance sharpening, so this is not an "either/or"; hence, I think it fair to suggest to Hening that he not lose sight of the traditional alternative, and to explore it fully before assuming that he is necessarily better off with deconvolution as his default approach.

Mark,

On reading your many posts and articles I have come to regard your opinion highly, but I think Photokit 2 is getting long in the tooth for capture sharpening, and I get better results with Topaz AI Clear and AI Sharpen. The latter is quite slow, but worth it when dealing with an image where one has already spent a lot of time retouching and editing in Photoshop. For creative sharpening I prefer Topaz Precision Detail in Studio.

Output sharpening is where I think Photokit 2 and its Lightroom derivative are most useful. I print mainly to an Epson 3880 inkjet from Lightroom and am quite pleased with the convenience and quality of the results. For contone printing Photokit might have an advantage, but results from Lightroom are acceptable.

I would be interested in your comments and in what other users are doing.

Regards,

Bill
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12510
    • http://www.markdsegal.com
Re: Capture sharpening - how much? and when?
« Reply #10 on: April 15, 2019, 12:16:57 pm »

Hi Bill,

Thanks - in this business one learns that for every task there are options - which is a very good thing. And we all know that everyone has their favorite option or options, because they prefer the workflow or the results relative to their needs, or both. It is difficult to compare and determine what is better than what absent some combination of objective and subjective criteria. So when you say you get "better results" with one product relative to another, I'd be interested to know more exactly what you mean by "better", and what kind of sharpening problem you are most often trying to address.

I wasn't so much going after a particular product (though I mentioned PKS as an example for acutance sharpening), but more broadly a genre - that is, whether there should be any general preference for a deconvolution approach versus an acutance approach, recognizing full well that there are roles for both.

One could do a nice review article on various sharpening products using well-chosen real world photos, and that would help answer the question you are asking. But it's a time-consuming job to do properly, so perhaps one of these days......... :)
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Hening Bettermann

  • Sr. Member
  • ****
  • Offline
  • Posts: 933
    • landshape.net
Re: Capture sharpening - how much? and when?
« Reply #11 on: April 15, 2019, 02:47:06 pm »

Thanks to all of you who chimed in!

@Bart:
> Nearest Neighbour interpolation we are guaranteed to not change the edge transitions, just make them larger and easier to see.

Lesson learned.

> This does assume that the image was shot at native sensor ISO, and not much other noise than photon/Poisson noise was present. This also assumes that the deconvolution is preferably done at linear gamma, just like the exposure was captured.

My images are typically shot at native sensor ISO, if that is 100 ISO (Sony a7r2). -

In RT, I cannot choose the working profile entirely freely, so I set it to ProPhoto. In Iridient, I could choose Elle Stone's g1 version of ProPhoto, but that would then be after raw conversion.
OTOH, I seem to remember from somewhere that demosaicing may also introduce some blur, so sharpening the TIF would - maybe - address that.

That the deconvolution should be done at linear gamma - how does that relate to 2 other requirements,
1. that the camera profile should already contain a basic curve? This is what Anders Torger writes in connection with his Lumariver Profiler. It is possible to make linear camera profiles in Luma, but I also find that I cannot make tonal values look as good when modifying a linear profile later.

2. that basic tonal edits should be made in the raw converter. At least this is my understanding.

> In my experience, mostly with AA-filtered images, a Gaussian blur radius at an optically optimal aperture (often at f/4 to f/5.6) will produce a Gaussian type of blur with a radius of about 0.7.

That sounds strange, since even my image at f/11 seems to require a smaller radius than that. But wait - the Sony a7r2 has no AA filter.

> The 'best' radius seems to be slightly smaller than expected. That suggests something is not quite as expected.

Maybe it is the missing AA filter?

> A recent application that deserves some attention, is Topaz Sharpen AI.

I am aware of its existence, but I don't meet the hardware requirements, and I don't have the money to change that.

@JaapD
> I think we’re better off doing all our image processing steps on non-presharpend and smooth images, while doing sharpening only as a final processing step, after scaling. There is one exception to this and that is de-convolution sharpening for lens-diffraction correction purposes

And that is exactly what I'm dealing with.

@Mark:
I find it difficult to share your view 'acutance sharpening first'. I cannot do extensive trials on everything, so to some degree I rely on theoretical considerations and advice. And in this matter, deconvolution sounds more convincing to me, in particular for *capture* sharpening. - I have recently discovered the use of USM with a large radius and low amount to increase local contrast.

So this is my bottom line from this thread so far:
-I set Resize method to Nearest in RT's Transform tab.
-I will look at very fine branches that are supposed to be in focus, have a DARK background, and preferably are vertical or horizontal rather than slanted.
-If I lose my patience, I will apply the minimum radius of 0.4.

Trying these achievements on the same image:
At a radius of 0.5, a nearly horizontal but slightly bent branch shows somewhat more pronounced jaggies. But are these artifacts, or normal pixelation (at 400%)? - screen shot 0.50-1
At the same radius, another branch at an angle of about 45 degrees shows the increase of something that indeed seems to be a halo. - screen shot 0.50-2
So in this case, I stay with the minimum radius of 0.4.

A remaining question would be the weighing of the requirements of linear gamma for sharpening versus camera profile with curve, and versus basic tonal edits in the raw converter.

Thanks again for your participation!

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12510
    • http://www.markdsegal.com
Re: Capture sharpening - how much? and when?
« Reply #12 on: April 15, 2019, 02:54:15 pm »

...........

@Mark:
I find it difficult to share your view 'acutance sharpening first'. I cannot do extensive trials on everything, so to some degree I rely on theoretical considerations and advice. And in this matter, deconvolution sounds more convincing to me, in particular for *capture* sharpening. - I have recently discovered the use of USM with a large radius and low amount to increase local contrast.
...............

For avoidance of doubt, that's not what I said or recommended. But each to their own and glad to see you got some useful advice from others here.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

JaapD

  • Full Member
  • ***
  • Offline
  • Posts: 182
Re: Capture sharpening - how much? and when?
« Reply #13 on: April 16, 2019, 01:22:46 am »

@JaapD
> I think we’re better off doing all our image processing steps on non-presharpend and smooth images, while doing sharpening only as a final processing step, after scaling. There is one exception to this and that is de-convolution sharpening for lens-diffraction correction purposes
And that is exactly what I'm dealing with.

Hi Hening, I understand. I believe you did not tell us which RAW converter you're using. If you're actually 'dealing' with this, wouldn't it then be an option to use a RAW converter with built-in functionality, automatically taking care of proper diffraction correction during RAW conversion, based on the applied f-stop? Think ‘CaptureOne’ here….

@Bart: thanks for your reply. I’m also working with Topaz Gigapixel and –Sharpening AI so I’m aware of their strength and unfortunately also their weaknesses, especially on Stabilize. But Topaz is improving and things are looking promising.

Regards,
Jaap.

« Last Edit: April 16, 2019, 01:38:55 am by JaapD »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 7839
Re: Capture sharpening - how much? and when?
« Reply #14 on: April 16, 2019, 07:29:37 am »

Quote
Thanks to all of you who chimed in!

@Bart:
> Nearest Neighbour interpolation we are guaranteed to not change the edge transitions, just make them larger and easier to see.

Lesson learned.

Just to be sure, it's not a recommendation to use it for any rescaling or distortion correction operation other than magnified inspection at integer factors, because that would create massive amounts of artifacts. It was just mentioned that it won't create halo over/under-shoots (but it does create aliasing/blocking/ringing artifacts).

Quote
That the deconvolution should be done at linear gamma - how does that relate to 2 other requirements,
1. that the camera profile should already contain a basic curve? This is what Anders Torger writes in connection with his Lumariver Profiler. It is possible to make linear camera profiles in Luma. But I also find that I can not make tonal values look as good with the later modification of a linear profile.

2. that basic tonal edits should be made in the raw converter. At least this is my understanding.

Not necessarily. Smart software can convert from a gamma-adjusted space back to linear gamma, perform the deconvolution, and return to the initial gamma. That does assume proper care in the math, in order to avoid accumulation of rounding errors.
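A minimal sketch of that round trip, assuming a simple power-law gamma (real transfer curves such as sRGB use a piecewise function, omitted here for brevity):

```python
import numpy as np

GAMMA = 2.2  # simple power-law assumption; real profiles differ

def to_linear(x):
    """Decode gamma-encoded values back to linear light."""
    return np.power(x, GAMMA)

def from_linear(x):
    """Re-encode linear values with the same gamma."""
    return np.power(x, 1.0 / GAMMA)

def process_in_linear(img_gamma, fn):
    """Decode to linear light, apply fn (e.g. a deconvolution),
    re-encode. Working in floating point keeps rounding errors
    negligible for a single round trip."""
    return from_linear(fn(to_linear(img_gamma)))

img = np.linspace(0.05, 1.0, 8)
roundtrip = process_in_linear(img, lambda x: x)  # identity processing
```

With identity processing the round trip returns the input (to within floating-point precision), which is exactly the "proper care in the math" requirement.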

Quote
> In my experience, mostly with AA-filtered images, a Gaussian blur radius at an optically optimal aperture (often at f/4 to f/5.6) will produce a Gaussian type of blur with a radius of about 0.7.

That sounds strange, since even my image at f/11 seems to require a smaller radius than that. But wait - the Sony a7r2 has no AA filter.

> The 'best' radius seems to be slightly smaller than expected. That suggests something is not quite as expected.

Maybe it is the missing AA filter?

Don't know for sure, but it's unlikely. Sensors without an AA filter will produce more aliasing artifacts, and aliased detail always appears at spatial frequencies below the Nyquist frequency. But maybe some of it is just producing single-pixel-wide stair-stepped fine line detail. That would leave no information in neighboring pixels to perform a deconvolution with.

Quote
> A recent application that deserves some attention, is Topaz Sharpen AI.

I am aware of its existence, but I don't meet the hardware requirements, and I don't have the money to change that.

Don't despair, my hardware is also not meeting the requirements, but I run the software in CPU mode instead of GPU mode, and that works fine albeit slow. And Sharpen AI is a free upgrade for previous owners of a Topaz InFocus plugin license.

Quote
So this is my bottom line from this thread so far:
-I set Resize method to Nearest in RT's Transform tab.

See above, don't do that for resampling. Lanczos windowed resampling is much better at preserving detail, but it does produce ringing artifacts/halos.
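For the curious, the Lanczos kernel itself shows where those halos come from; a quick sketch:

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos-a resampling kernel: a sinc windowed by a wider sinc.
    Its negative lobes are what produce ringing/halos at hard edges."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    return np.where(np.abs(x) < a, out, 0.0)
```

Sampled at a few offsets, it is 1 at the source pixel, 0 at other integer offsets (so existing samples pass through unchanged), and dips negative in between - that dip is the ringing trade-off mentioned above.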

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

earlybird

  • Sr. Member
  • ****
  • Offline
  • Posts: 260
Re: Capture sharpening - how much? and when?
« Reply #15 on: April 16, 2019, 11:37:48 am »

...Deconvolution has a definite place in the toolset especially for those situations that can't respond to acutance sharpening...

I do not think I understand this statement. Perhaps I am missing some context.

It seems to me that Deconvolution is one of several methods that increases "acutance", and thus would be considered a method of acutance sharpening.




Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12510
    • http://www.markdsegal.com
Re: Capture sharpening - how much? and when?
« Reply #16 on: April 16, 2019, 12:53:21 pm »

Quote
I do not think I understand this statement. Perhaps I am missing some context.

It seems to me that Deconvolution is one of several methods that increases "acutance", and thus would be considered a method of acutance sharpening.

From Wikipedia: <<In photography, the term "acutance" describes a subjective perception of sharpness that is related to the edge contrast of an image. ... Due to the nature of the human visual system, an image with higher acutance appears sharper even though an increase in acutance does not increase real resolution.>> To be more precise about it, "edge contrast of an image" is the contrast between all edges within the image - which would be a great number in a high density photo. This kind of sharpening is only perceptually effective if the photograph is reasonably well-focused, i.e. not blurred, to begin with. It enhances the appearance of sharpness by accentuating the edge contrast that otherwise gets diminished through the various processes from capture to output of a photograph.

Deconvolution is restorative processing to recover adequate sharpness from blurred or defocused photographs ("de-blurring"), the nature of the deconvolution varying according to the kind of blur (e.g. out of focus, motion blur, etc.). It restores defocused or blurred photos, which acutance sharpening does not.

There is an insightful discussion of deconvolution here: http://yuzhikov.com/articles/BlurredImagesRestoration1.htm.

Different approaches for dealing with different kinds of issues.

Speaking personally, I have not found my several attempts with deconvolution (admitting I could explore it further) to be all that easy or useful for dealing with acutance, which is usually the more prevalent kind of editing that most careful photographers would need. That's why I'm suggesting to keep an open mind, always attempting to achieve the best match between the tool and the problem - not necessarily assuming that one approach is optimal for all aspects of perceptual sharpness.

Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

earlybird

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 260
Re: Capture sharpening - how much? and when?
« Reply #17 on: April 16, 2019, 01:39:35 pm »

From Wikipedia: ...

My point exactly.

The attached images demonstrate how a deconvolution process will increase "acutance" as per the definition posted at Wiki, Merriam Webster etc. It seems clear that the edge contrast can be increased in a range that spans from subtle to dramatic.

It seems to me that deconvolution is a method of acutance sharpening, even if it seems customary to attempt to minimize an increase in edge contrast when using such a process.

FWIW the zoomed examples were upscaled 400% with nearest neighbor to suit an inspection of the results.

Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline Offline
  • Posts: 12510
    • http://www.markdsegal.com
Re: Capture sharpening - how much? and when?
« Reply #18 on: April 16, 2019, 01:49:24 pm »

My point exactly.

The attached images demonstrate how a deconvolution process will increase "acutance" as per the definition posted at Wiki, Merriam Webster etc. It seems clear that the edge contrast can be increased in a range that spans from subtle to dramatic.

It seems to me that deconvolution is a method of acutance sharpening, even if it seems customary to attempt to minimize an increase in edge contrast when using such a process.

FWIW the zoomed examples were upscaled 400% with nearest neighbor to suit an inspection of the results.

You're not arguing with me, because I didn't say you may not get some acutance sharpening from deconvolution. Maybe you do, but I'm putting out an altogether different proposition about optimizing the tool relative to the purpose. Those 400% examples are quite useless - screen pixelation gets in the way. I would recommend stopping the magnification at 100% or 200% maximum. In any event, unlike for tone and colour edits, the effects of sharpening can be only approximated on a display - a well-known limitation. It is best to view these differences in prints. You should print your original, your deconvolution sample and a sample prepared with a high quality acutance sharpener (PKS, NIK, a number of them) and compare the prints. That would begin to be a more useful comparative methodology.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 7839
Re: Capture sharpening - how much? and when?
« Reply #19 on: April 16, 2019, 02:19:10 pm »

I do not think I understand this statement. Perhaps I am missing some context.

It seems to me that Deconvolution is one of several methods that increases "acutance", and thus would be considered a method of acutance sharpening.

In addition to Mark's explanations, I consider sharpening by increasing "acutance" a perceptual improvement of edge contrast: a local (micro-)contrast enhancement that looks sharper but isn't. Deconvolution, by contrast, is an actual mathematical inverse of blurring (convolution).

Deconvolution in fact takes the fragments of the original signal that blur spread/scattered to neighboring pixels, and restores those scattered fragments to the pixel position they actually belong to, and that for each and every pixel in turn. That is why it is also called a restoration procedure. This procedure needs a good signal-to-noise ratio to begin with, and in addition a good model of the point-spread function (the PSF), i.e. the blur function. This actually increases resolution: it makes lines thinner and spots/specular highlights smaller.
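That redistribution idea maps directly onto the Richardson-Lucy iteration that RT's "RL Deconvolution" is named after. Below is a minimal 1-D sketch in Python, assuming a known Gaussian PSF and noiseless data (real raw files and real PSFs are considerably messier, and RT's own implementation will differ in detail):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=30):
    # Minimal 1-D Richardson-Lucy: each iteration re-blurs the current
    # estimate, compares it to the observed data, and pushes the spread
    # signal energy back toward the pixels it came from.
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Two thin "branches", blurred by a Gaussian PSF (a stand-in for
# the combined lens/diffraction/demosaic blur of capture):
t = np.arange(-6, 7)
psf = np.exp(-t**2 / (2 * 1.5**2))
psf /= psf.sum()
scene = np.zeros(64)
scene[[20, 40]] = 1.0
blurred = np.convolve(scene, psf, mode="same")

restored = richardson_lucy_1d(blurred, psf, iterations=50)
# Restoration re-concentrates the scattered energy: the peaks become
# taller and narrower, back at their true positions.
print(blurred.max(), restored.max())
```

Note the two preconditions from the post show up directly in the code: the PSF must be modeled well (here we cheat and reuse the exact blur kernel), and the division step amplifies noise wherever the signal is weak, which is why a good signal-to-noise ratio matters so much.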

The AI method of restoration instead replaces blurred image fragments with sharper versions. A learning procedure compares sharp and blurred image content across many types of subjects and (surface) structures, and the result is put into a model that can efficiently replace blurred content with sharp content. It remains a complex and computation-intensive operation.

Cheers,
Bart
« Last Edit: April 16, 2019, 03:27:36 pm by Bart_van_der_Wolf »
Logged
== If you do what you did, you'll get what you got. ==