
Author Topic: deconvolution sharpening plug in  (Read 54868 times)

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: deconvolution sharpening plug in
« Reply #160 on: February 27, 2016, 01:04:30 pm »

What I was implying was that if the Capture sharpening is done well at the Capture size (maybe even before/during demosaicing), then we have a much easier job with final sharpening at any size.


I agree entirely, based on some testing (at least for upsizing ... downsizing TBD :) ). Both on slanted-edge targets and on normal images I've found that the best approach, for me, is the following (a rough code sketch of the same ordering follows the list):
- No sharpening or noise reduction in Lightroom
- Very careful noise reduction, if necessary, using DeNoise
- Very careful sharpening using Focus Magic
- Upsize
- Output sharpen with Focus Magic
- Add grain if necessary
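
For what it's worth, the same ordering can be sketched with open-source stand-ins (DeNoise and Focus Magic are GUI plug-ins, so scikit-image's denoise_wavelet, richardson_lucy and unsharp_mask only play their roles here; the file name and all settings are placeholders, not a recipe):

Code:
import numpy as np
from skimage import img_as_float, io
from skimage.filters import unsharp_mask
from skimage.restoration import denoise_wavelet, richardson_lucy
from skimage.transform import rescale

img = img_as_float(io.imread('capture.tif', as_gray=True))   # hypothetical input

# 1. Careful noise reduction first (stand-in for DeNoise)
img = denoise_wavelet(img)

# 2. Careful capture sharpening by deconvolution (stand-in for Focus Magic)
sigma = 0.7                                   # small Gaussian PSF for a good lens
x = np.arange(-4, 5)
g = np.exp(-x**2 / (2.0 * sigma**2))
psf = np.outer(g, g)
psf /= psf.sum()
img = richardson_lucy(img, psf, 10)

# 3. Upsize (bicubic spline interpolation)
img = rescale(img, 2.95, order=3)

# 4. Output sharpening at the final size
img = unsharp_mask(img, radius=1.0, amount=0.6)

# 5. A touch of grain, if needed, then save as 16-bit
img = np.clip(img + np.random.normal(0.0, 0.005, img.shape), 0.0, 1.0)
io.imsave('upsized.tif', (img * 65535).astype(np.uint16))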

Here is an image that was upsized by 2.95x, with all of the above steps.  BTW ... these are very small flower-heads and the flowers have a grainy look ... the white dots are not caused by sharpening.



It may not show very well as it's a screen-grab, so here's a crop if you're interested.  The resolution is 600ppi.

Crop of upsized image

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

earlybird

  • Sr. Member
  • ****
  • Offline
  • Posts: 331
Re: deconvolution sharpening plug in
« Reply #161 on: February 27, 2016, 06:01:34 pm »

BTW, InFocus 'Estimate' does best if zoomed in to a well-focused area with lots of detail in all sorts of directions.

Does InFocus run "Estimate Blur" only on the portion that is shown in its preview window, rather than on the entire picture file?
Logged

marcmccalmont

  • Sr. Member
  • ****
  • Offline
  • Posts: 1780
Re: deconvolution sharpening plug in
« Reply #162 on: February 27, 2016, 08:45:12 pm »

Thanks for all the comments, and especially the technical ones.
I ended up purchasing Focus Magic for now.
My normal workflow is to turn off sharpening in C1, or to just use lens sharpening in DxO, and then capture sharpen in PS as a first step on the full-resolution file (I'll use FM for that).
I then leave the output sharpening to my Canon printers. This seems to work well for me, but suggestions for an improved workflow are appreciated.
Marc
Logged
Marc McCalmont

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: deconvolution sharpening plug in
« Reply #163 on: February 28, 2016, 10:11:04 am »

Here are some results for downsampling using bicubic and bicubic sharper.

The first example is using bicubic sharper with no sharpening before or after:



As you can see, bicubic sharper clearly applies sharpening as part of its resampling. This is an excellent result as is, and normally there would be no need to apply any further sharpening.

The following shows sharpening applied before bicubic, after bicubic, before bicubic sharper and after bicubic sharper:



Sharpening before bicubic is fine, but not as good as no sharpening with bicubic sharper.

Sharpening after bicubic is good (but only a very small amount of sharpening was applied: FM 1/50).

Sharpening before bicubic sharper is just OK, but there are some artifacts (the amount of sharpening applied was very low).

And, finally, sharpening after bicubic sharper is too much, even though a very low amount of sharpening was applied.

My conclusion would be that for downsizing (assuming the use of bicubic or bicubic sharper):
- Use bicubic if the image was already sharpened
- Use bicubic or bicubic sharper if the image was not already sharpened, and sharpen after the resize if needed.
- Based on the slanted-edge test image, the best result is to use bicubic and to sharpen after the resize, not before (sketched in code below).
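
A minimal Pillow sketch of that last point (plain bicubic, then only a light sharpen at the final size); the file name, sizes and unsharp-mask settings are placeholders, and Image.Resampling needs Pillow 9.1 or newer:

Code:
from PIL import Image, ImageFilter

img = Image.open('full_size.tif')                        # hypothetical input
w, h = img.size

# Plain bicubic downsample, no 'sharper' variant
small = img.resize((w // 2, h // 2), resample=Image.Resampling.BICUBIC)

# Sharpen only after the resize, at the final output size
small = small.filter(ImageFilter.UnsharpMask(radius=1.0, percent=60, threshold=2))
small.save('half_size.tif')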

On the other hand, for upsizing it would seem that it is better to apply sharpening before the resize, followed by output sharpening if required. 

So Bart's suggestion to keep a sharpened layer as well as an unsharpened layer makes sense if we don't know whether the final image will be upsized or downsized.
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #164 on: February 28, 2016, 11:25:16 am »

I know that I am asking an unanswerable question, because the answer probably depends on the image and what we are going to do with it in post-processing.

But I'll ask it anyway.  What do you think is the best sharpening workflow?

Formally, from the point of view of Digital Signal Processing (DSP), the Capture sharpening/deconvolution should take place early in the chain of post-processing steps, when the captured signal still has a linear relationship (except for the capture blur and conversion noise) with the scene's signal or exposure. That allows the most original signal to be extracted from the captured signal.

So that argues for deconvolution capture sharpening at the earliest moment of Raw conversion, before noise reduction and before tonescale adjustments/tonemapping.

Since Raw converters can use metadata from the Raw files to streamline part of the Capture sharpening process, it would be relatively easy to implement such a facility, but the software engineers are apparently somewhat oblivious to the opportunity. As I've explained on other occasions, the lens quality and the aperture used for a given sensor are the main deciding factors for the default (Gaussian sigma) radius that comes closest to a perfect setting of that important Point Spread Function (PSF) modelling parameter. Yet most sharpening dialogs start with "Amount" instead of radius, which is backwards and telling...

Quote
Assuming Lightroom and Photoshop;  and that there will be either upsampling or downsampling for output;  and that we have a well-taken image with a good lens and camera.

I will understand if you are too weary of the subject to answer :)

Well, it's being made unnecessarily difficult by the programs you mention, but unfortunately they are no exception. It can be easily shown that the PSF shape required for deconvolution is variable, but that the aperture value that was used when shooting is a driving force. Good lenses usually require approx. a 0.7 radius for shots taken at 'optimal' aperture values (usually something like 2 stops down from wide open) in the focus plane. Wider apertures may suffer from some residual lens aberrations and demand a larger radius (how much that is depends on lens quality and widest aperture), and narrower apertures will be diffraction affected which also increases the required radius, perhaps to something like 1.0 to 1.2, depending on lens and aperture shape.
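
To make that concrete, here is a hedged scikit-image sketch: the f-number to radius mapping below simply restates the rough figures in this post (about 0.7 near the optimal aperture, creeping towards 1.0-1.2 once diffraction dominates; the exact breakpoints are an assumption and depend on lens, sensor and pixel pitch), richardson_lucy stands in for whatever deconvolution engine is actually used, and the input is assumed to be linear (not yet gamma-encoded) data, as argued above:

Code:
import numpy as np
from skimage import img_as_float, io
from skimage.restoration import richardson_lucy

def capture_sigma(f_number):
    """Rough starting radius in pixels, restating the figures above:
    ~0.7 near optimal apertures, ~1.0-1.2 once diffraction dominates.
    The breakpoints are assumptions; tune per lens and sensor."""
    if f_number <= 8:
        return 0.7
    if f_number <= 11:
        return 1.0
    return 1.2

def gaussian_psf(sigma, size=9):
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2.0 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

# Linear (demosaiced, not yet tone-mapped) data; hypothetical file name
linear = img_as_float(io.imread('linear_capture.tif', as_gray=True))
psf = gaussian_psf(capture_sigma(f_number=11.0))
sharpened = richardson_lucy(linear, psf, 10)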

Defocus will require a somewhat different PSF shape, more resembling a flattened Gaussian shape, but the Gaussian shape remains pretty dominant overall.

So strictly speaking (and for a smooth workflow), one should attempt to repair Capture blur with deconvolution during the Raw conversion. Unfortunately, the Raw converters' sharpening tools/dialogs have a rather mediocre implementation of deconvolution sharpening, so attempts to do it properly in the Raw converter tend to produce substandard results.

That is why it may be beneficial for image quality to postpone the Capture sharpening to a later stage, although that also creates a less than ideal situation for the deconvolution tools. And it makes for a relatively clumsy workflow: having to render the image, resize it, and only then do the thing that needed to be done first. One benefit of the process, though, is that while things like scaling, distortion correction, etc., all add new blur to the image, the blur PSF tends to become more Gaussian in shape again, so we can address the combined blur with a relatively simple model, that can also be implemented much more efficiently in software as two separable linear (de)convolutions rather than one 2-dimensional (de)convolution. Doing it later in the workflow also means that we have to worry less about artifacts that accumulate due to rounding errors and sub-optimal settings early in the chain of events.

So, to make a long story short, in my workflow I usually postpone the capture sharpening to a later moment in finishing the final output. Since I use Capture One as my main Raw converter, that's easy because I can just select to disable the sharpening on export with one checkbox in the output recipe. If I need a faster workflow, I keep the sharpening settings that have an adjusted Radius setting based on aperture value, and an amount that matches the output requirements (more for printed output, less for other output).

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #165 on: February 28, 2016, 11:33:46 am »

Does InFocus run "Estimate Blur" only on the portion that is shown in its preview window, rather than on the entire picture file?

Yes, that's what I take from early comments by the founder and president of Topaz Labs, Feng (Albert) Yang. Maybe it just uses a heavier weighting, but it helps to get better results if you zoom in to subject matter in the plane of best focus, and if that subject shows lots of contrasty edges in various directions.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #166 on: February 28, 2016, 12:41:52 pm »

As you can see, bicubic sharper clearly applies sharpening as part of its resampling. This is an excellent result as is, and normally there would be no need to apply any further sharpening.

However (!), bicubic sharper creates horrible downsampling artifacts on critical structure, and the sharpening cannot be controlled. IMHO it's much better not to use Bicubic sharper, but rather bicubic, and then sharpen. It can even help to reduce artifacts to first blur the image content: a 0.25-pixel Gaussian blur per downsampling factor, so a 50% down-sample (factor 2) gets 2 x 0.25 = 0.50, a 33.3% down-sample (factor 3) gets 3 x 0.25 = 0.75, and so on. This becomes very clear if one down-samples the very critical zoneplate image mentioned earlier.
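
A small scipy/Pillow sketch of that pre-blur rule (the 0.25-per-factor sigma is exactly the rule of thumb above; the file name is a placeholder and Image.Resampling needs Pillow 9.1 or newer):

Code:
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

img = Image.open('zoneplate.tif').convert('L')    # hypothetical test image
factor = 3                                        # e.g. down-sample to 33.3%
sigma = 0.25 * factor                             # 3 x 0.25 = 0.75, as above

# Mild Gaussian pre-blur to tame aliasing before the bicubic resize
arr = gaussian_filter(np.asarray(img, dtype=np.float32), sigma=sigma)
blurred = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

w, h = blurred.size
small = blurred.resize((w // factor, h // factor), resample=Image.Resampling.BICUBIC)
small.save('zoneplate_small.tif')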

Quote
The following shows sharpening applied before bicubic, after bicubic, before bicubic sharper and after bicubic sharper:[...]

All bicubic sharper down-sampled results show significant halos, plus boosted low-spatial-frequency artifacts and aliasing.

Quote
My conclusion would be that for downsizing (assuming the use of bicubic or bicubic sharper):
- Use bicubic if the image was already sharpened
- Use bicubic or bicubic sharper if the image was not already sharpened and sharpen after the resize if needed.
- Based on the slanted edge test image, the best result is to use bicubic and to sharpen after the resize, not before.

With Photoshop (Lightroom is much better at downsampling) I'd 'never' use anything other than bicubic for general down-sampling, and I'd rather add a bit of blur before doing so, just to get fewer artifacts. Deconvolution after down-sampling restores excellent sharpness, and we can see exactly whether we go too far and restore aliasing artifacts, because we are already at the final size.

In addition, one should consider dedicated Output sharpening, for which Topaz Detail is also (as it is for Creative 'sharpening') a good option.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Hening Bettermann

  • Sr. Member
  • ****
  • Offline
  • Posts: 945
    • landshape.net
Re: deconvolution sharpening plug in
« Reply #167 on: February 28, 2016, 12:52:10 pm »

@ Bart, post #164

> so we can address the combined blur with a relatively simple model, that can also be implemented much more efficiently in software as two separable linear (de)convolutions rather than one 2-dimensional (de)convolution.

I don't understand this part. Even though it sounds like something the software author would have to do, not something I could do myself, I would like to understand it a LITTLE better. What are these two dimensions of deconvolution? Would you care to explain just a little?

Hening Bettermann

  • Sr. Member
  • ****
  • Offline
  • Posts: 945
    • landshape.net
Re: deconvolution sharpening plug in
« Reply #168 on: February 28, 2016, 01:31:02 pm »

@ Bart,
post #133
> Also to complicate matters further, Bicubic filtered down-sampling is not perfect and introduces some artifacts by itself. However, it is not that easy to devise a better down-sampler because there will always be other trade-offs to consider (although a Lanczos2 or Lanczos3 windowed downsampling is often pretty usable).

post #166
> With Photoshop (Lightroom is much better at downsampling) I'd 'never' use anything else than bicubic for general down-sampling, and I'd rather add a bit of blur before doing so, just to get fewer artifacts.

What if we go beyond Photoshop/Lightroom?
I think I remember from the ImageMagick thread (http://forum.luminous-landscape.com/index.php?topic=91754.msg746273#msg746273) and from your site that Mitchell-Netravali was a 'basically good' algorithm for downsampling. Would you recommend it for general downsampling? For some time, it has been readily available in PhotoLine, so it would not require the command line and ImageMagick. PL also offers Lanczos 3 and 8, but I wouldn't know if they are 'windowed' (nor what 'windowed' means - nor if I need to know).
« Last Edit: February 28, 2016, 01:37:56 pm by Hening Bettermann »
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: deconvolution sharpening plug in
« Reply #169 on: February 28, 2016, 01:45:17 pm »

This becomes very clear if one down-samples the very critical zoneplate image mentioned earlier.


One scary image!! 

I can see that applying a blur before the bicubic downsample is a good idea, but at the cost of some loss of fine detail. 

When trying to sharpen this image after the downsize, Focus Magic can't handle the image at all (presumably because the smallest blur radius is 1), whereas InFocus can (with a radius under 1, that is).

And as for Bicubic Sharper ... yes, it certainly appears to create a lot more artifacts than Bicubic. 

BTW ... I tried Photozoom Pro with S-Spline Max on a normal image and it didn't seem any better to me than Bicubic for upsizing - except that it's slow as hell.  But perhaps a test image like the Rings image might show things that I can't see in a landscape photo.

Well, I've just tried Photozoom with S-Spline Max to downsample the Rings and it does seem better than Bicubic + Gaussian blur.
 
« Last Edit: February 28, 2016, 01:52:44 pm by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #170 on: February 28, 2016, 01:48:48 pm »

@ Bart, post #164

> so we can address the combined blur with a relatively simple model, that can also be implemented much more efficiently in software as two separable linear (de)convolutions rather than one 2-dimensional (de)convolution.

I don't understand this part. Even though it sounds like something the software author would have to do, not something I could do myself, I would like to understand it a LITTLE better. What are these two dimensions of deconvolution? Would you care to explain just a little?

Hi Hening,

Yes, it's the software implementation, not something the user can do.

When a 2-D filter (e.g. 5x5 pixels) is separable, it can be replaced by two orthogonal 1-D filters (5x1 and 1x5 pixels). That reduces the number of multiply-plus-add operations per pixel from 25 to 10 in this example, a speed increase of 2.5x (larger filter kernels benefit even more, e.g. a 7x7 becomes 14 instead of 49 mult+add operations, so 3.5x faster).

A Gaussian filter is always separable (it is the only circularly symmetric filter that is), so it offers great benefits, and it is a close match to how most PSFs look anyway. Further speed gains can be had from using shift operations instead of multiplications, another optimization for programmers, which is slightly less accurate but potentially faster in execution.
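
Purely as an illustration of that separability, a numpy/scipy sketch (only the interior of the result is compared, to sidestep the two routines' different edge handling):

Code:
import numpy as np
from scipy.ndimage import convolve1d
from scipy.signal import convolve2d

sigma, size = 1.0, 5
x = np.arange(size) - size // 2
g1 = np.exp(-x**2 / (2.0 * sigma**2))
g1 /= g1.sum()                       # 1-D Gaussian, 5 taps
g2 = np.outer(g1, g1)                # equivalent 5x5 2-D kernel

img = np.random.rand(256, 256)

full_2d = convolve2d(img, g2, mode='same')                        # 25 mult+adds per pixel
two_passes = convolve1d(convolve1d(img, g1, axis=0), g1, axis=1)  # 5 + 5 = 10

# Identical away from the borders (edge handling differs between the routines)
print(np.allclose(full_2d[4:-4, 4:-4], two_passes[4:-4, 4:-4]))   # True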

Cheers,
Bart
« Last Edit: February 28, 2016, 02:36:44 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Hening Bettermann

  • Sr. Member
  • ****
  • Offline
  • Posts: 945
    • landshape.net
Re: deconvolution sharpening plug in
« Reply #171 on: February 28, 2016, 01:52:12 pm »

Thanks Bart. I think I get some idea.

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #172 on: February 28, 2016, 01:59:00 pm »

@ Bart,
post #133
> Also to complicate matters further, Bicubic filtered down-sampling is not perfect and introduces some artifacts by itself. However, it is not that easy to devise a better down-sampler because there will always be other trade-offs to consider (although a Lanczos2 or Lanczos3 windowed downsampling is often pretty usable).

post #166
> With Photoshop (Lightroom is much better at downsampling) I'd 'never' use anything else than bicubic for general down-sampling, and I'd rather add a bit of blur before doing so, just to get fewer artifacts.

What if we go beyond Photoshop/Lightroom?

Okay, although Lightroom is pretty good.

Quote
I think I remember from the ImageMagick thread (http://forum.luminous-landscape.com/index.php?topic=91754.msg746273#msg746273) and from your site that Mitchell-Netravali was a 'basically good' algorithm for downsampling. Would you recommend it for general downsampling? For some time, it has been readily available in PhotoLine, so it would not require the command line and ImageMagick.

Yes, that option would be better, both for upsampling (ImageMagick uses it as the default for upsampling) and for down-sampling (although, depending on the implementation, there might be some small issues when down-sampling only slightly below 100 percent).

Quote
PL also offers Lanczos 3 and 8, but I wouldn't know if they are 'windowed' (nor what 'windowed' means - nor if I need to know).

Yes, Lanczos 3 is very good as well, although it may introduce a bit more aliasing because it tries to keep very high resolution (and thus requires a bit less post-sharpening). Lanczos 8 is even stronger in detail retention, but also in halo generation, so I'd be careful, depending on the subject. It's great for nature and landscapes (but watch out for branches against a clear sky). Lanczos is short for "Lanczos windowed sinc", so yes, they are all windowed functions (as are bicubic and Mitchell-Netravali).
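
For reference, the Lanczos-a kernel is just the sinc function windowed by a wider sinc, with a = 2, 3 or 8 lobes; larger a keeps more detail but rings (halos) more. A tiny numpy sketch of the weights only, not a full resampler (Mitchell-Netravali, mentioned above, is instead a two-parameter cubic, commonly with B = C = 1/3):

Code:
import numpy as np

def lanczos(x, a=3):
    """Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, else 0.
    np.sinc is the normalised sinc, sin(pi*x)/(pi*x)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# Contribution of the 6 nearest source pixels to an output sample that
# falls 0.4 px from the nearest source pixel (a = 3 -> 6 taps).
offsets = np.arange(-2, 4) - 0.4
weights = lanczos(offsets, a=3)
weights /= weights.sum()            # normalise so flat areas stay flat
print(np.round(weights, 4))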

Cheers,
Bart

P.S. It would be nice if the authors of PL also added a Lanczos 2 option. That can be very good at many things, basically without risk of halos.
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #173 on: February 28, 2016, 02:31:44 pm »

BTW ... I tried Photozoom Pro with S-Spline Max on a normal image and it didn't seem any better to me than Bicubic for upsizing - except that it's slow as hell.  But perhaps a test image like the Rings image might show things that I can't see in a landscape photo.

Well, I've just tried Photozoom with S-Spline Max to downsample the Rings and it does seem better than Bicubic + Gaussian blur.

I only like PhotoZoom Pro's upsampling, the down-sampling is IMHO not good (I have to verify for the most recent version, maybe it has improved). But for upsampling it (S-Spline Max) is very benign on subtle structure, and it increases resolution on sharp edges and lines (the edges/lines remain thinner than the upsampling factor would make one expect, and it reduces/removes the jaggies). Imatest probably thinks that the MTF response no longer drops to zero at Nyquist, but keeps going to 2x Nyquist, i.e. double resolution. But that's not going to happen on non-edge detail, so the non-linear processing confuses Imatest.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

earlybird

  • Sr. Member
  • ****
  • Offline
  • Posts: 331
Re: deconvolution sharpening plug in
« Reply #174 on: February 28, 2016, 04:01:44 pm »

Hi Bart,
 Thanks for the answer about Estimate Blur.
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: deconvolution sharpening plug in
« Reply #175 on: February 28, 2016, 05:02:08 pm »

I only like PhotoZoom Pro's upsampling, the down-sampling is IMHO not good (I have to verify for the most recent version, maybe it has improved). But for upsampling it (S-Spline Max) is very benign on subtle structure, and it increases resolution on sharp edges and lines (the edges/lines remain thinner than the upsampling factor would make one expect, and it reduces/removes the jaggies). Imatest probably thinks that the MTF response no longer drops to zero at Nyquist, but keeps going to 2x Nyquist, i.e. double resolution. But that's not going to happen on non-edge detail, so the non-linear processing confuses Imatest.

Cheers,
Bart

Actually I only tried it on a landscape photo, not on a slanted edge (I'll try that tomorrow).  You're right, Photozoom does keep the lines and edges very clean - but this is at the expense of a bit of a plastic look I think.  It seems that the software is going around cleaning up lines and edges.

Here is a crop at 200% after a 3.6x upscale. I think Photozoom has retained less detail overall (look at the bush, for example), but it has cleaned up the sharp edges around the windows and the pipes. The cleaning up is destructive IMO ... for example, look at the potted plant to the right of the car. There is also a smoothing of the gravel, the wall at the front, and the trees at the top right, plus a loss of contrast. Generally not so great. (BTW ... this is a very small crop of a distant scene. It could be that PZ would perform better on a sharper image.)

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Hening Bettermann

  • Sr. Member
  • ****
  • Offline
  • Posts: 945
    • landshape.net
Re: deconvolution sharpening plug in
« Reply #176 on: February 28, 2016, 05:05:39 pm »

@post#172
Thanks again, Bart. That is very valuable information for me.

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: deconvolution sharpening plug in
« Reply #177 on: February 29, 2016, 03:22:06 am »

Here are some results for downsampling using bicubic and bicubic sharper.

Hi Robert,

Fun, isn't it?  Here is an article that uses a similar approach to gain insights into downsampling methods.

Jack
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: deconvolution sharpening plug in
« Reply #178 on: February 29, 2016, 06:56:32 am »


Fun, isn't it?  Here is an article that uses a similar approach to gain insights into downsampling methods.


Thanks Jack ... I had read your article some time ago, but I had a poor understanding of what I was reading at the time (probably not a whole lot better now, but just a bit).

I'm a bit puzzled by this MTF plot from your article:



I don't understand how the original and nearest neighbor can be at 80% at Nyquist ... they should be at zero or close to it. The same holds, to a lesser extent, for Bilinear and Bicubic (the latter begins to look more like it should, but is still very high).

When I run Imatest on the test image I get this curve:



This seems much more reasonable, giving an MTF50 of 3120 lw/ph. There does seem to be quite a bit of aliasing in the image: perhaps MTF Mapper is getting confused by it?

Robert
« Last Edit: February 29, 2016, 01:02:54 pm by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: deconvolution sharpening plug in
« Reply #179 on: February 29, 2016, 01:51:14 pm »

I'm a bit puzzled by this MTF plot from your article:

The solid lines all refer to the same final pixels, the 4:1 downsized ones, so their results are as measured. The dashed original line is there for reference.

I don't understand how the original and nearest neighbor can be at 80% at Nyquist ... they should be at zero or close to it. The same holds, to a lesser extent, for Bilinear and Bicubic (the latter begins to look more like it should, but is still very high).

Did you resize the image 4:1 using the various methods?

When I run Imatest on the test image I get this curve:
This seems much more reasonable, giving an MTF50 of 3120 lw/ph. There does seem to be quite a bit of aliasing in the image: perhaps MTF Mapper is getting confused by it?

MTF Mapper never gets confused; if anything, it's operator error :)  But in this case it looks like you are using the original edge at its native resolution, so that's where the discrepancy comes from. And you are probably, unknowingly, adding a little sharpening somewhere in your workflow, because the MTF50 value in cy/px looks high. Have you tried running Imatest on the cropped tiff I provide there?
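
For anyone comparing the two sets of numbers: lw/ph and cy/px are related through the picture height in pixels (one cycle equals two line widths), so a quick sanity check is easy. The 3120 lw/ph figure is from the post above; the picture height here is only a placeholder for whatever the crop actually measures.

Code:
def lwph_to_cypx(mtf50_lwph, picture_height_px):
    """Convert MTF50 in line widths per picture height to cycles per pixel."""
    return mtf50_lwph / (2.0 * picture_height_px)

# Placeholder height: adjust to the actual crop
print(lwph_to_cypx(3120, picture_height_px=4000))   # ~0.39 cy/px, close to Nyquist (0.5)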

Jack
Logged