
Author Topic: Sharpening ... Not the Generally Accepted Way!  (Read 59211 times)

Jack Hogan

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #160 on: August 16, 2014, 05:27:05 am »

Quote
but for the moment a Slanted edge approach goes a long way to allow a characterization of the actual blur kernel with sub-pixel accuracy.

I agree wholeheartedly: in fact if one thinks about it, what is a line if not a series of single point PSFs in a row (aka Line Spread Function)?  And what is an edge if not the integral of a line?  One can easily get 1D PSFs (and MTFs) with excellent accuracy (for photographic purposes) from pictures of edges, without all the problems mentioned about recording points of light.

And for the peanut gallery, what would be the derivative of the Edge Spread Functions shown above?  You guessed it: the Line Spread Function, i.e. the one-dimensional PSF in the direction perpendicular to the edge.
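
For illustration, here is a minimal Python/NumPy sketch of that ESF-to-LSF-to-MTF chain. The 4x supersampling spacing, the Hanning window and the function name are assumptions for the example, not anyone's actual tool:

Code:
import numpy as np

def esf_to_mtf(esf, dx=0.25):
    """esf: 1-D array of edge-spread samples at uniform spacing dx (in pixels)."""
    lsf = np.gradient(esf, dx)               # derivative of the ESF = LSF (1-D PSF)
    lsf = lsf * np.hanning(lsf.size)         # window to tame noise in the tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                       # normalize so that MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=dx)  # cycles per pixel
    mtf50 = freqs[np.argmax(mtf < 0.5)]      # first frequency where the MTF drops below 0.5
    return freqs, mtf, mtf50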

One can use Bart's most excellent calculator, or, with a bit more work, the open-source MTF Mapper by Frans van den Bergh to obtain more accurate values.  MTF Mapper produces one-dimensional ESF, PSF and MTF curves, not to mention MTF50 values.

Quote
Tools like FocusMagic are real time savers, and doing it another way may require significant resources, amongst others dedicated Math software and a lot of calibration and processing time.

Agreed again.  My own motivation is to do a better job of capture sharpening the asymmetrical AA of my current main squeeze (it only has it in one direction, as do some other recent Exmors).

And thanks for the earlier link, Bart, I will peruse it later on today.

Jack
« Last Edit: August 16, 2014, 01:09:15 pm by Jack Hogan »
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #161 on: August 16, 2014, 06:40:31 am »


Quote
I'm working on it ... ;) , but for the moment a Slanted edge approach goes a long way to allow a characterization of the actual blur kernel with sub-pixel accuracy.

Hi Bart ... well, I guess I am asking for a lot!  Still, why aim low?

I'll try to muddle through your slanted edge approach, but to be honest the likelihood of my succeeding is pretty low, as my overall understanding is limited, and I don't know tools like ImageJ except in passing. 

Am I correct in understanding that, using the Slanted Edge approach, it should be possible:
- to take a photograph of an edge
- process that in ImageJ to get the slope of the edge and the pixel values along a single pixel row
- paste this information in your Slanted Edge tool to compute the sigma value
- use this sigma value in your PSF Generator to produce a deconvolution kernel
- use the deconvolution kernel in Photoshop (or preferably ImageJ as one can use a bigger kernel there):
   - as a test it should remove the blur from the edge
   - subsequently it could be used to remove capture blur from a photograph (taken with the same lens/aperture/focal length)

Assuming I have it even approximately right, it would be incredibly useful to have a video demonstration of this as it's quite easy to make a mess of things with tools one isn't familiar with. I would be happy to do this video, but first of all I would need to be able to work through the technique successfully, and right now I'm not sure I'm even on the tracks at all, not to mention on the right track!

Quote
Tools like FocusMagic are real time savers, and doing it another way may require significant resources, amongst others dedicated Math software and a lot of calibration and processing time.


Yes, of course, I understand that photographing slanted edges etc. is really only practical for very controlled conditions and in real life is unlikely to yield better results than a tool like FocusMagic ... but from the point of view of understanding what is under the hood it's a great exercise!

Robert




Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #162 on: August 16, 2014, 07:28:13 am »

Quote
Am I correct in understanding that, using the Slanted Edge approach, it should be possible:
- to take a photograph of an edge
- process that in ImageJ to get the slope of the edge and the pixel values along a single pixel row
- paste this information in your Slanted Edge tool to compute the sigma value
- use this sigma value in your PSF Generator to produce a deconvolution kernel
- use the deconvolution kernel in Photoshop (or preferably ImageJ as one can use a bigger kernel there):
   - as a test it should remove the blur from the edge
   - subsequently it could be used to remove capture blur from a photograph (taken with the same lens/aperture/focal length)

You've got it!

I agree it's a bit of work, and the workflow could be improved by a dedicated piece of software that does it all on an image that gets analyzed automatically. But hey, it's a free tool, and it's educational. As said, I'm also working on something more flexible that can analyze a more normal image or, for more accurate results, do an even better job on an image of a proper test target as input.
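
For anyone curious what the "compute the sigma value" step boils down to, here is an illustrative Python sketch (my own illustration, not Bart's actual tool): it fits a Gaussian CDF (an erf) to the projected edge samples and reads off sigma. The names `x` and `y` are assumed to be the sub-pixel positions across the edge and the corresponding normalized pixel values from the slanted-edge projection.

Code:
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def gaussian_esf(x, x0, sigma, lo, hi):
    # A Gaussian PSF integrated across an edge gives a scaled erf (a Gaussian CDF)
    return lo + (hi - lo) * 0.5 * (1.0 + erf((x - x0) / (sigma * np.sqrt(2.0))))

def estimate_sigma(x, y):
    p0 = (0.0, 1.0, float(y.min()), float(y.max()))          # rough initial guess
    (x0, sigma, lo, hi), _ = curve_fit(gaussian_esf, x, y, p0=p0)
    return abs(sigma)                                         # Gaussian blur radius in pixels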

Quote
Assuming I have it even approximately right, it would be incredibly useful to have a video demonstration of this as it's quite easy to make a mess of things with tools one isn't familiar with. I would be happy to do this video, but first of all I would need to be able to work through the technique successfully, and right now I'm not sure I'm even on the tracks at all, not to mention on the right track!

A video could be helpful, but there are also linked webpages with more background info, and the thread also addresses some initial questions that others have raised.
 
Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #163 on: August 16, 2014, 12:39:58 pm »


Quote
A video could be helpful, but there are also linked webpages with more background info, and the thread also addresses some initial questions that others have raised.


Hello (again!) Bart,

I'm getting there - I've now found your thread http://www.luminous-landscape.com/forum/index.php?topic=68089.0 (that's the one, I take it?), and I've taken the test figures you supplied on the first page, fed them into your Slanted Edge tool and got the same radius (I haven't checked this out, but I assume you take an average of the RGB radii?).

I then put this radius in your PSF generator and got a deconvolution kernel and tried it on an image from a 1Ds3 with a 100mm f2.8 macro (so pretty close to your eqpt).  The deconvolution in Photoshop is pretty horrendous (due to the integer rounding, presumably); however if the filter is faded to around 5% the results are really good.  Using floating point and ImageJ, the results are nothing short of impressive, with detail recovery way beyond Lr, especially in shadows.

I don't know how best to set the scale on your PSF generator - clearly a high value gives a much stronger result; I found that a scale of between 3 and 5 is excellent, but up to 10 is OK depending on the image.  Beyond that noise gets boosted too much, I think.

I didn't see much difference between a 5x5 and a 7x7 kernel, but it probably needs a bit more pixel-peeping.

I also don't understand the fill factor (I just set it to Point Sample).

What seems to be a good approach is to do a deconvolve with a scale of 2 or 3 and one with a scale of 5 and to do a Blend If in Photoshop - you can get a lot of detail and soften out any noise (although this is only visible at 200% and completely invisible at print size on an ISO200 image).

It occurred to me that, as your data for the same model camera and lens gives me very good results, it would be possible to build up a database that could be populated by users, so that over time you could select your camera, lens, focal length and aperture and get a close match to the radius (and even the deconvolution kernel).  The two pictures I checked were at f2.8 (a flower) and f7.1 (a landscape), whereas your sample data was at f5.6 - but the deconvolution still worked very well with both.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #164 on: August 17, 2014, 05:26:08 am »

Quote
Hello (again!) Bart,

I'm getting there - I've now found your thread http://www.luminous-landscape.com/forum/index.php?topic=68089.0 (that's the one, I take it?), and I've taken the test figures you supplied on the first page, fed them into your Slanted Edge tool and got the same radius (I haven't checked this out, but I assume you take an average of the RGB radii?).

Hi Robert,

What the analysis of the R/G/B channels shows is that, despite the lower sampling density of Red and Blue, there is not that much difference in resolution/blur. The reason is that most demosaicing schemes use the more densely sampled Green channel info as a kind of clue for the Luminance component of the R/B channels as well. Since Luminance resolution is then relatively equal, one could just take the blur value for Green, or the lowest of the three, to avoid over-sharpening the other channels. But with such small differences it's not all that critical.

Quote
I then put this radius in your PSF generator and got a deconvolution kernel and tried it on an image from a 1Ds3 with a 100mm f2.8 macro (so pretty close to your eqpt).  The deconvolution in Photoshop is pretty horrendous (due to the integer rounding, presumably); however if the filter is faded to around 5% the results are really good.  Using floating point and ImageJ, the results are nothing short of impressive, with detail recovery way beyond Lr, especially in shadows.

Cool, isn't it? And that is merely Capture sharpening in a somewhat crude single deconvolution pass. The same radius can be used for more elaborate iterative deconvolution algorithms, which will sharpen the noise less than the signal, thus producing an even higher S/N ratio, and restore even a bit more resolution.

Quote
I don't know how best to set the scale on your PSF generator - clearly a high value gives a much stronger result; I found that a scale of between 3 and 5 is excellent, but up to 10 is OK depending on the image.  Beyond that noise gets boosted too much, I think.

In my tool, the 'Scale' is normally left at 1.0, unless one wants to increase the 'Amount' of sharpening. When upsampling is part of the later operations, I'd leave it at 1.0, to avoid the risk of small halos at very high contrast edges. The 'scale' is mostly used for floating point number kernels.

Quote
I didn't see much difference between a 5x5 and a 7x7 kernel, but it probably needs a bit more pixel-peeping.

When radii get larger, there may be abrupt cut-offs at the kernel edges, where a slightly larger kernel support would allow for a smoother roll-off. This becomes more important with iterative methods, hence the recommendation to just use a total kernel diameter of 10x the Blur Radius, which reduces the edge contributions to a marginal level and thus gives a smooth transition towards zero contribution outside the range of the kernel.

Quote
I also don't understand the fill factor (I just set it to Point Sample).

A point sample takes a single point on the bell-shaped Gaussian blur pattern at the center of the pixel and uses that for the kernel cell. However, our sensels are not point samplers, but area samplers. They will integrate all light falling within their area aperture to an average. This reduces the peakedness of the Gaussian shape a bit, as if averaging all possible point samples inside that sensel aperture with a square kernel. The size of that square sensel kernel is either 100% (assuming a sensel aperture that receives light from edge to edge, like with gap-less micro-lenses), or a smaller percentage (e.g. to simulate a complex CMOS sensor without micro-lenses with lots of transistors per sensel, leaving only a smaller part of the real estate to receive light). When you use a smaller percentage, the kernel's blur pattern will become narrower and more peaked and less sharpening will result, because the sensor already sharpens (and aliases) more due to its small sampling aperture.
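
To make the point-sample versus area-sample distinction concrete, here is an illustrative Python sketch of a discrete Gaussian kernel built with a square fill-factor aperture (using the 10x-the-blur-radius support mentioned above). This is one reading of the idea, not the actual code behind the PSF generator:

Code:
import numpy as np
from scipy.special import erf

def gaussian_psf_kernel(sigma, fill=1.0, support=10):
    # Kernel diameter of roughly `support` * sigma, rounded up to an odd size.
    radius = max(1, int(np.ceil(support * sigma / 2.0)))
    c = np.arange(-radius, radius + 1, dtype=float)   # pixel center coordinates

    if fill <= 0.0:
        g1d = np.exp(-c**2 / (2.0 * sigma**2))        # point sample at pixel centers
    else:
        half = fill / 2.0                             # half-width of the square sensel aperture
        s = sigma * np.sqrt(2.0)
        g1d = 0.5 * (erf((c + half) / s) - erf((c - half) / s))  # average over the aperture

    kernel = np.outer(g1d, g1d)                       # separable 2-D Gaussian
    return kernel / kernel.sum()                      # normalize to unit sum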

Quote
What seems to be a good approach is to do a deconvolve with a scale of 2 or 3 and one with a scale of 5 and to do a Blend If in Photoshop - you can get a lot of detail and soften out any noise (although this is only visible at 200% and completely invisible at print size on an ISO200 image).

Again, it depends on the total workflow. I'd leave it closer to 1.0 if upsampling will happen later, but otherwise it's up to the user to play with the 'Amount' of sharpening by changing the 'Scale' factor. This all assumes Floating point number kernels, which can also be converted into images with ImageJ, for those applications that take images of PSFs as input (as is usual in Astrophotography).

Quote
It occurred to me that, as your data for the same model camera and lens gives me very good results, it would be possible to build up a database that could be populated by users, so that over time you could select your camera, lens, focal length and aperture and get a close match to the radius (and even the deconvolution kernel).  The two pictures I checked were at f2.8 (a flower) and f7.1 (a landscape), whereas your sample data was at f5.6 - but the deconvolution still worked very well with both.

That's correct; as you will find out, the amount of blur is not even all that different between lenses of similar quality, but it does change significantly for the more extreme aperture values. That's completely unlike the Capture sharpening gospel of some 'gurus' who say that it's the image feature detail that determines the Capture sharpening settings, and thus they introduce halos by using radii that are too large early in their processing. It was also discussed here.

It's a revelation for many to realize they have been taught wrong, and the way the Detail dialog is designed in e.g. LR doesn't help either (it even suggests starting with the Amount setting before setting the correct radius, and it offers no real guidance as to the correct radius, which could be set to a more useful default based on the aperture in the EXIF). We humans are pretty poor at eye-balling the correct settings because we prefer high contrast, which is not the same as real resolution. It's made even worse by forcing the user to use the Capture sharpening settings of the Detail panel for Creative sharpening later in the parametric workflow, which seduces users into using too large a radius value there, to do a better Creative sharpening job.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #165 on: August 17, 2014, 07:32:56 am »


Quote
Cool, isn't it? And that is merely Capture sharpening in a somewhat crude single deconvolution pass. The same radius can be used for more elaborate iterative deconvolution algorithms, which will sharpen the noise less than the signal, thus producing an even higher S/N ratio, and restore even a bit more resolution.


Yes, very cool!

You mentioned before that doing the deconvolution in the frequency domain is much more complex, which it no doubt is, but would it be worth it? I'm thinking of the possibility of (at least partially) removing noise, for example.  How would you boost the S/N ratio using a kernel?

Quote
A point sample takes a single point on the bell-shaped Gaussian blur pattern at the center of the pixel and uses that for the kernel cell. However, our sensels are not point samplers, but area samplers. They will integrate all light falling within their area aperture to an average. This reduces the peakedness of the Gaussian shape a bit, as if averaging all possible point samples inside that sensel aperture with a square kernel. The size of that square sensel kernel is either 100% (assuming a sensel aperture that receives light from edge to edge, like with gap-less micro-lenses), or a smaller percentage (e.g. to simulate a complex CMOS sensor without micro-lenses with lots of transistors per sensel, leaving only a smaller part of the real estate to receive light). When you use a smaller percentage, the kernel's blur pattern will become narrower and more peaked and less sharpening will result, because the sensor already sharpens (and aliases) more due to its small sampling aperture.


I take it then that with a 1DsIII you would want to use a fill factor of maybe 80%, whereas a 7D would be 100%?  I ask because I have both cameras :).

Quote
That's correct; as you will find out, the amount of blur is not even all that different between lenses of similar quality, but it does change significantly for the more extreme aperture values. That's completely unlike the Capture sharpening gospel of some 'gurus' who say that it's the image feature detail that determines the Capture sharpening settings, and thus they introduce halos by using radii that are too large early in their processing. It was also discussed here.

It's a revelation for many to realize they have been taught wrong, and the way the Detail dialog is designed in e.g. LR doesn't help either (it even suggests starting with the Amount setting before setting the correct radius, and it offers no real guidance as to the correct radius, which could be set to a more useful default based on the aperture in the EXIF). We humans are pretty poor at eye-balling the correct settings because we prefer high contrast, which is not the same as real resolution. It's made even worse by forcing the user to use the Capture sharpening settings of the Detail panel for Creative sharpening later in the parametric workflow, which seduces users into using too large a radius value there, to do a better Creative sharpening job.


I think I've been lucky (or perhaps it's that I hate oversharpened images), but I've always set the radius and detail first with the Alt key pressed (the values always end up with a low radius - 0.6, 0.7 typically, and detail below 20) and I then adjust the amount at 100% zoom - and it's very rare that I would go over 40, normally 20-30.  That has meant that I haven't judged the image on the look (at that stage of the process, at any rate) .... more by chance than by intent.

Regarding FocusMagic - the lowest radius you can use is 1 going in increments of 1.  That seems a high starting point and a high increment ... or am I mixing apples and oranges?

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #166 on: August 17, 2014, 12:37:31 pm »

Quote
You mentioned before that doing the deconvolution in the frequency domain is much more complex, which it no doubt is, but would it be worth it? I'm thinking of the possibility of (at least partially) removing noise, for example.  How would you boost the S/N ratio using a kernel?

Strictly speaking, conversion to and back from Fourier space (the frequency domain) is reversible and produces a 100% identical image. A deconvolution is as simple as a division in frequency space, whereas in the spatial domain it would take multiple multiplications and additions for each pixel, plus a solution for the edges, so the work between the domain conversions is much faster.

The difficulties arise when we start processing that image in the frequency domain. Division by (almost) zero (which happens at the highest spatial frequencies) can drive the results to 'infinity' or create non-existing numerical results. Add in some noise and limited precision, and it becomes a tricky deal.

There are also some additional choices to be made with regard to padding the image and kernel data to equal sizes to allow frequency-space divisions, and to account for a non-infinitely repeating frequency representation, which could cause ringing artifacts if not handled intelligently. These are mostly technical precautions, but they need to be done correctly, so the implementation of the algorithms takes some attention.
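
As an illustration of that division, and of one simple way to keep it from blowing up, here is a hedged Python sketch of a Wiener-style regularized inverse. The padding strategy and the noise-to-signal constant `nsr` are assumptions for the example; a production implementation would also pad the image itself to suppress wrap-around ringing:

Code:
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    # Embed the PSF in an image-sized array and roll its center to the origin,
    # so the frequency-domain division does not shift the result.
    psf_pad = np.zeros_like(image, dtype=float)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(image)
    # Instead of a raw G / H (which explodes where |H| is near zero), damp the division:
    F_hat = G * np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(F_hat))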

The S/N ratio boost is done through a process known as regularization, where some prior knowledge of the type of noise distribution is used to reduce noise at each iteration, in such a way that the gain of resolution at a given step exceeds the loss of resolution due to noise reduction. It can be as simple as adding a mild Gaussian blur between each iteration step.
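
A bare-bones Python sketch of that idea, assuming a Richardson-Lucy style iteration with a very mild Gaussian blur of the estimate between steps (the iteration count and the `reg_sigma` value are purely illustrative):

Code:
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def rl_deconvolve(image, psf, iterations=20, reg_sigma=0.2):
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean(), dtype=float)   # flat starting estimate
    for _ in range(iterations):
        blurred = convolve(estimate, psf, mode='reflect')
        ratio = image / np.maximum(blurred, 1e-12)               # guard against division by zero
        estimate = estimate * convolve(ratio, psf_mirror, mode='reflect')
        if reg_sigma > 0:
            estimate = gaussian_filter(estimate, reg_sigma)      # mild damping of the noise per step
    return estimate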

Quote
I take it then that with a 1DsIII you would want to use a fill factor of maybe 80%, whereas a 7D would be 100%?  I ask because I have both cameras :).

You'd be hard pressed to see much difference in the sharpening result between the default 100% fill-factor and 80%, so I usually just leave it at 100% (also for my 1Ds3). I've added that option to better comply with the norm of creating discrete Gaussian kernels for convolution with our discrete pixel samplers, instead of point sampling at the pixel mid-point, and for more precise kernel values for those pixels (the immediate neighbors) that have the most impact on the sharpening in iterative algorithms.

Quote
I think I've been lucky (or perhaps it's that I hate oversharpened images), but I've always set the radius and detail first with the Alt key pressed (the values always end up with a low radius - 0.6, 0.7 typically, and detail below 20) and I then adjust the amount at 100% zoom - and it's very rare that I would go over 40, normally 20-30.  That has meant that I haven't judged the image on the look (at that stage of the process, at any rate) .... more by chance than by intent.

You probably have a better eye for it than most ..., hence the search for an even better method.

Quote
Regarding FocusMagic - the lowest radius you can use is 1 going in increments of 1.  That seems a high starting point and a high increment ... or am I mixing apples and oranges?

One would think so, but we don't know exactly how that input is modified by the unknown algorithm they use. Also, because it probably is an iterative or recursive operation, they will somehow optimize several parameters with each iteration to produce a better fitting model. Of course one can first magnify the image, then apply FM (at a virtual sub-pixel accurate level), and then down-sample again. That works fine, although things slow down due to the amount of pixels that need to be processed.

The only downside to that kind of method is that the resampling itself may create artifacts, but we're not talking about huge magnification/reduction factors; 3 or 4 is what I occasionally use when I'm confronted with an image of unknown origin and I want to see exactly what FM does at a sub-pixel level. Also, because regular upsampling does not create additional resolution, the risk of creating aliasing artifacts at the down-sampling stage is minimal. The FM radius to use scales nicely with the magnification, e.g. a blur width of 5 for a 4x upsample of a sharp image.
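
FocusMagic itself is a black box, so the sketch below only illustrates the generic upsample / deconvolve-at-scaled-radius / downsample idea in Python, reusing the illustrative `gaussian_psf_kernel` and `wiener_deconvolve` helpers sketched earlier in this thread; the 4x factor and the cubic-spline resampling are just example choices, not what FM does internally:

Code:
import numpy as np
from scipy.ndimage import zoom

def deconvolve_at_subpixel(image, sigma, factor=4):
    up = zoom(image, factor, order=3)              # cubic-spline upsample
    psf = gaussian_psf_kernel(sigma * factor)      # blur radius scales with the magnification
    sharpened = wiener_deconvolve(up, psf)         # any deconvolution could stand in here
    return zoom(sharpened, 1.0 / factor, order=3)  # downsample back to the original size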

Cheers,
Bart
« Last Edit: August 17, 2014, 12:48:23 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #167 on: August 17, 2014, 04:07:23 pm »


Quote
Strictly speaking, conversion to and back from Fourier space (the frequency domain) is reversible and produces a 100% identical image. A deconvolution is as simple as a division in frequency space, whereas in the spatial domain it would take multiple multiplications and additions for each pixel, plus a solution for the edges, so the work between the domain conversions is much faster.

The difficulties arise when we start processing that image in the frequency domain. Division by (almost) zero (which happens at the highest spatial frequencies) can drive the results to 'infinity' or create non-existing numerical results. Add in some noise and limited precision, and it becomes a tricky deal.


Hi Bart,


This is presumably why the macro example I posted (ImageJ) adds noise to the deconvolution filter, to avoid division by 0.  So the filter would be a Gaussian blur with a radius of around 0.7 (in your example), with noise added (which is multiplication by high frequencies (above Nyquist?)). 

I’m talking through my hat here, needless to say  :). But it would be interesting to try it … and ImageJ seems to provide the necessary functions.

Quote
The S/N ratio boost is done through a process known as regularization, where some prior knowledge of the type of noise distribution is used to reduce noise at each iteration, in such a way that the gain of resolution at a given step exceeds the loss of resolution due to noise reduction. It can be as simple as adding a mild Gaussian blur between each iteration step.

So would you then apply your deconvolution kernel with radius 0.7 (say, for your lens/camera), then blur with a small radius, say 0.2, repeat the deconvolution with the same radius of 0.7 ... several times?  That sort of thing?

Quote
You probably have a better eye for it than most ..., hence the search for an even better method.

Well, it’s partly interest, but also … what’s the point of all of this expensive and sophisticated equipment if we ruin the image at the first available opportunity? 

Quote
One would think so, but we don't know exactly how that input is modified by the unknown algorithm they use. Also, because it probably is an iterative or recursive operation, they will somehow optimize several parameters with each iteration to produce a better fitting model. Of course one can first magnify the image, then apply FM (at a virtual sub-pixel accurate level), and then down-sample again. That works fine, although things slow down due to the amount of pixels that need to be processed.

The only downside to that kind of method is that the resampling itself may create artifacts, but we're not talking about huge magnification/reduction factors; 3 or 4 is what I occasionally use when I'm confronted with an image of unknown origin and I want to see exactly what FM does at a sub-pixel level. Also, because regular upsampling does not create additional resolution, the risk of creating aliasing artifacts at the down-sampling stage is minimal. The FM radius to use scales nicely with the magnification, e.g. a blur width of 5 for a 4x upsample of a sharp image.

So if you wanted to try a radius of 0.75, for example, you would upscale by 4 and use a radius of 3 ... and then downscale back by 4?  What resizing algorithms would you use? Bicubic I expect?

I have a couple of other questions (of course!!).

Regarding raw converters, have you seen much difference between them, in terms of resolution, with sharpening off and after deconvolution (a la Bart)? With your 1Ds3, that is, as I expect the converters may be different for different cameras.

Second question: you mentioned in your post on Slanted Edge that using Imatest could speed up the process.  I have Imatest Studio and I was wondering how I can use it to get the radius?  One way, I guess, would be to take the 10-90% edge and divide by 2 … but that seems far too simple!  I’m sure I should be using natural logs and square roots and such!  Help would be appreciated (as usual!).

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #168 on: August 17, 2014, 06:03:20 pm »

FYI, here is Eric Chan's reply to my question regarding the Detail slider in Lr/ACR:

"Yes, moving the Detail slider towards 100 progressively moves the 'sharpening' method used by ACR/Lr to be a technique based on deblur/deconvolution.  This is done with a limited set of iterations, some assumptions, and a few other techniques in there in order to keep the rendering performance interactive.  I recommend that Radius be set to a low value, and that this be done only on very clean / low ISO images."

It isn't exactly a clear explanation of what's going on or how to use it ... but short of Adobe giving us their algorithm, which I don't imagine they'll do, it's probably the best we'll get.  "To be used with caution" seems to be the message (which I would agree with, based on trial and error).

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #169 on: August 17, 2014, 06:57:56 pm »

Quote
This is presumably why the macro example I posted (ImageJ) adds noise to the deconvolution filter, to avoid division by 0.  So the filter would be a Gaussian blur with a radius of around 0.7 (in your example), with noise added (which is multiplication by high frequencies (above Nyquist?)).

I’m talking through my hat here, needless to say  :). But it would be interesting to try it … and ImageJ seems to provide the necessary functions.

Yes, the addition of noise is a crude attempt to avoid division by zero, although it may also create issues where there were none before.

Quote
So would you then apply your deconvolution kernel with radius 0.7 (say, for your lens/camera), then blur with a small radius, say 0.2, repeat the deconvolution with the same radius of 0.7 ... several times?  That sort of thing?

The issue with that is that repeated convolution with a given radius has the same effect as a single convolution with a larger radius (Gaussian sigmas add in quadrature, so two passes at sigma 0.7 are equivalent to one pass at 0.7 x sqrt(2), roughly 0.99). And the smaller-radius denoise blur will likewise cumulate to a single larger-radius blur, so there is more that needs to be done.

Quote
Well, it’s partly interest, but also … what’s the point of all of this expensive and sophisticated equipment if we ruin the image at the first available opportunity?

Yes, introducing errors early in the workflow can only bite us later in the process.

Quote
So if you wanted to try a radius of 0.75, for example, you would upscale by 4 and use a radius of 3 ... and then downscale back by 4?  What resizing algorithms would you use? Bicubic I expect?

Yes, upsampling with Bicubic Smoother, and down-sampling with Bicubic will often be good enough, but better algorithms will give better results.

Quote
I have a couple of other questions (of course!!).

Regarding raw converters, have you seen much difference between them, in terms of resolution, with sharpening off and after deconvolution (a la Bart)? With your 1Ds3, that is, as I expect the converters may be different for different cameras.

The slanted edge determinations depend on the raw converter that was used. Some are a bit sharper than others. Capture One Pro, starting with version 7, does somewhat better than LR/ACR process 2012, but RawTherapee with the Amaze algorithm is also very good for lower-noise images.

Quote
Second question: you mentioned in your post on Slanted Edge that using Imatest could speed up the process.  I have Imatest Studio and I was wondering how I can use it to get the radius?  One way, I guess, would be to take the 10-90% edge and divide by 2 … but that seems far too simple!  I’m sure I should be using natural logs and square roots and such!  Help would be appreciated (as usual!).

Actually it is that simple, provided that the Edge Profile (=ESF) has a Gaussian based Cumulative Distribution Function shape, in which case dividing the 10-90 percent rise width in pixels by 2.5631 would result in the correct Gaussian sigma radius. Not all edge profiles follow the exact same shape as a Gaussian CDF, notably in the shadows where veiling glare is added, and not all response curves are calibrated for the actual OECF, so one might need to use a slightly different value.
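
A quick Python check of that divisor, and of the conversion itself, assuming a purely Gaussian edge profile (the 1.8-pixel rise is just an example number):

Code:
from scipy.stats import norm

rise_10_90 = 1.8                          # measured 10-90% edge rise in pixels (example value)
divisor = norm.ppf(0.9) - norm.ppf(0.1)   # ~ 2.5631 for a Gaussian CDF
sigma = rise_10_90 / divisor              # Gaussian blur radius (sigma) in pixels
print(round(divisor, 4), round(sigma, 3))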

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #170 on: August 18, 2014, 05:28:40 am »

Quote
Yes, the addition of noise is a crude attempt to avoid division by zero, although it may also create issues where there were none before.

Hi Bart,

Well, how about adding the same noise to both the image and to the blur function - and then doing the deconvolution?  That way you should avoid both division by 0 and other issues, I would have thought?

Quote
The issue with that is that the repeated convolution with a given radius will result in the same effect as that of a single convolution with a larger radius. And the smaller radius denoise blur will also cumulate to a larger radius single blur, so there is more that needs to be done.

So then, for repeated convolutions you would need to reduce the radii?  But if so, on what basis, just guesswork?

Quote
Yes, upsampling with Bicubic Smoother, and down-sampling with Bicubic will often be good enough, but better algorithms will give better results.

Any suggestions would be welcome.  In the few tests I've done I'm not so sure that upsampling in order to use a smaller radius is giving any benefit (whereas it does seem to introduce some artifacts). It may be better to use the integer radius and then fade the filter.

Quote
The slanted edge determinations depend on the raw converter that was used. Some are a bit sharper than others. Capture One Pro, starting with version 7, does somewhat better than LR/ACR process 2012, but RawTherapee with the Amaze algorithm is also very good for lower-noise images.

Do you think these differences are significant after deconvolution?  Lr seems to be a bit softer than Capture One, for example, but is that because of a better algorithm in Capture One, or is it because Capture One applies some sharpening?  Which raises the question in my mind: is it possible to deconvolve on the raw data, and if so would that not be much better than leaving it until after the image has been demosaiced?  Perhaps this is where one raw processor may have the edge over another?

Quote
Actually it is that simple, provided that the Edge Profile (=ESF) has a Gaussian based Cumulative Distribution Function shape, in which case dividing the 10-90 percent rise width in pixels by 2.5631 would result in the correct Gaussian sigma radius. Not all edge profiles follow the exact same shape as a Gaussian CDF, notably in the shadows where veiling glare is added, and not all response curves are calibrated for the actual OECF, so one might need to use a slightly different value.

Interesting ... how did you calculate that number?

Robert
« Last Edit: August 18, 2014, 05:32:01 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #171 on: August 18, 2014, 08:15:49 am »

Quote
Well, how about adding the same noise to both the image and to the blur function - and then doing the deconvolution?  That way you should avoid both division by 0 and other issues, I would have thought?

There are many different ways to skin a cat. One can also invert the PSF and use multiplication instead of division in frequency space. But I do think that operations in frequency space complicate the issues, due to the particularities of working in the frequency domain. The only reason to convert to the frequency domain is to save processing time on large images, because it may be simpler to implement some calculations there; it's not specifically to get better quality, once everything is correctly set up (which requires additional math skills).

Quote
So then, for repeated convolutions you would need to reduce the radii?  But if so, on what basis, just guesswork?

There is a difference between theory and practice, so one would have to verify with actual examples. That's why the more successful algorithms use all sorts of methods, and adaptive (to local image content, and per iteration) regularization schemes. They do not necessarily use different radii, but vary the other parameters (RL algorithm, RL considerations).

Quote
Any suggestions would be welcome.  In the few tests I've done I'm not so sure that upsampling in order to use a smaller radius is giving any benefit (whereas it does seem to introduce some artifacts). It may be better to use the integer radius and then fade the filter.

Maybe this thread offers better than average resampling approaches.

Quote
Do you think these differences are significant after deconvolution?  Lr seems to be a bit softer than Capture One, for example, but is that because of a better algorithm in Capture One, or is it because Capture One applies some sharpening?  Which raises the question in my mind: is it possible to deconvolve on the raw data, and if so would that not be much better than leaving it until after the image has been demosaiced?  Perhaps this is where one raw processor may have the edge over another?

The differences between raw converter algorithms concern more than just sharpness. Artifact reduction is also an important issue, because we are working with undersampled color channels and differences between Green and Red/Blue sampling density. Capture One Pro version 7 exhibited much improved resistance to jaggies compared to version 6, while retaining its capability to extract high resolution. It also has a slider control to steer that trade-off for more or less detail. There is no implicit sharpening added if one switches that off on export. The Amaze algorithm as implemented in RawTherapee does very clean demosaicing, especially on images with low noise levels. LR does a decent job most of the time, but I've seen examples (converted them myself, so personally verified) where it fails, generating all sorts of artifacts.

Quote
Interesting ... how did you calculate that number?

The 10th and 90th percentiles of the cumulative distribution function are at approx. -1.28155 * sigma and +1.28155 * sigma; the range therefore spans approx. 2.5631 * sigma.

Cheers,
Bart
« Last Edit: August 18, 2014, 11:38:49 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #172 on: August 18, 2014, 11:19:26 am »

Quote
There are many different ways to skin a cat. One can also invert the PSF and use multiplication instead of division in frequency space. But I do think that operations in frequency space complicate the issues, due to the particularities of working in the frequency domain. The only reason to convert to the frequency domain is to save processing time on large images, because it may be simpler to implement some calculations there; it's not specifically to get better quality, once everything is correctly set up (which requires additional math skills).

Yes, it does get complicated, and at this point my maths is extremely rusty.  Still, out of interest I might have a go when I've polished up on it a bit (I mean a lot!).  But you're probably right - there may be no advantage working in the frequency domain, except that it should be possible to be more precise I would have thought. Not that I would expect to get better results than the experts, of course.

Quote
There is a difference between theory and practice, so one would have to verify with actual examples. That's why the more successful algorithms use all sorts of methods, and adaptive (to local image content, and per iteration) regularization schemes. They do not necessarily use different radii, but vary the other parameters (RL algorithm, RL considerations).

Maybe this thread offers better than average resampling approaches.

The differences between raw converter algorithms concern more than just sharpness. Artifact reduction is also an important issue, because we are working with undersampled color channels and differences between Green and Red/Blue sampling density. Capture One Pro version 7 exhibited much improved resistance to jaggies compared to version 6, while retaining its capability to extract high resolution. It also has a slider control to steer that trade-off for more or less detail. There is no implicit sharpening added if one switches that off on export. The Amaze algorithm as implemented in RawTherapee does very clean demosaicing, especially on images with low noise levels. LR does a decent job most of the time, but I've seen examples (converted them myself, so personally verified) where it fails, generating all sorts of artifacts.

Thanks for all of that info!  I've played around a bit with RawTherapee and it's certainly very powerful and complex - but for that reason also more difficult to use properly.  Unless the benefits over Lr are really significant, I think the complication of the workflow and the difficulty of integrating it with Lr, Ps etc. are not worth it.  The performance of RT is also a bit of a problem (even though I have a powerful PC), and I've already managed to crash it twice without trying too hard.  But it's certainly an impressive development!  And for an open-source project it's nothing short of amazing.

Quote
The 10th and 90th percentiles of the cumulative distribution function are at approx. -1.28155 * sigma and +1.28155 * sigma; the range therefore spans approx. 2.5631 * sigma.


Obvious now that you've pointed it out  :-[

I think at this stage I need to stop asking questions and do some testing and reading, and put into practice what I've learnt from this thread ... which is certainly a lot, and I would like to thank everyone!

At this stage my overall conclusions would be
- that there is a significant advantage in using the more sophisticated deconvolution tools over the basic Lr sharpening
- that there is little or no advantage in capture sharpening before resize
- that there is no benefit to doing capture sharpening followed by output sharpening: one sharpening pass is enough
- that other techniques like local contrast and blurring can be very effective in giving an impression of sharpness without damaging the image in the same way that over-sharpening does

I'm sure that these conclusions won't meet with general agreement! And I'm also sure there are plenty of other conclusions that could be drawn from our discussion.

Anyway, many thanks again!

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #173 on: August 27, 2014, 05:20:58 pm »

I've now had a chance to do a little more testing and I thought these results could be of interest.

I've compared capture sharpening in Lightroom/ACR, Photoshop Smart Sharpen, FocusMagic and Bart's Kernel with ImageJ on a focused image and on one slightly out of focus.  I used Imatest Studio slanted edge 10-90%.  Here are the results:



The first set of results is for the focused image and the second for the slightly out-of-focus image.  Base is the number of pixels in the 10-90% edge rise with no sharpening.  LR is for Lightroom/ACR with the Amount, Radius and Detail values. FM is for FocusMagic with the radius and amount. SS is for Smart Sharpen in Photoshop. IJ is for Bart's kernel in ImageJ.

For ImageJ I used Bart's formula to calculate the horizontal and vertical radii. For the others I used my eye first of all, and then Imatest to get a good rise without (or with little) overshoot or undershoot (also for ImageJ for the scale value).

In the first set of results, ACR gave a much lower result with an Amount of 40. Increasing that to 50 made a big difference at the cost of slight halos.  Smart Sharpen sharpens the noise beautifully  :), so it really needs an edge mask (but with an edge mask it does a very good job).  Focus Magic gave the cleanest result with IJ not far behind.  Any of these sharpening tools would do a good job of capture sharpening with this image (with edge masks for ACR and Smart Sharpen).

In the second set of results, FocusMagic gives the sharpest image - however at the expense of artifacts around the edges (but with very little boosting of the image noise). Smart Sharpen gives a similar result with a clean edge but very noisy (it absolutely needs an edge mask). Lightroom does a good job even without Masking - and adding Masking makes it even better. ImageJ gives a very clean image and could easily match the others for sharpness by upping the scale to 1.3 or 1.4.

I think FocusMagic suffers from the integer radius settings; Smart Sharpen suffers from noise boosting; LR/ACR needs careful handling to avoid halos but the Masking feature is very nice. ImageJ/Bart is a serious contender. Overall, any of these sharpening/deconvolution tools will do a good job if used with care, but FocusMagic in particular needs care on blurred images (IMO, of course :)).

I also tested the LR/ACR rendering against RawTherapee with amaze and igv and found no significant difference (at pixel level, amaze is cleaner than the other two).

Robert


« Last Edit: August 27, 2014, 05:25:57 pm by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Jim Kasson

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2370
    • The Last Word
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #174 on: August 27, 2014, 05:48:27 pm »

Quote
I agree it's a bit of work, and the workflow could be improved by a dedicated piece of software that does it all on an image that gets analyzed automatically. But hey, it's a free tool, and it's educational.

There's Matlab source code of a function called sfrmat3, which does the analysis automatically, here. I've used this code, and it works well. Matlab is not free. However, there's a clone called Octave that is. I don't know if sfrmat3 runs under Octave.

Jim
« Last Edit: August 27, 2014, 06:17:53 pm by Jim Kasson »
Logged

Misirlou

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 711
    • http://
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #175 on: August 27, 2014, 06:49:58 pm »

Quote
(this shot was taken at 14.5K feet on Mauna Kea). Retaining the vibrance of the sky, while pulling detail from the backside of this telescope was my goal.

I'm happy to post the CR2 if anyone wants to take a shot.

PP

pp,

That's a great place. I was there just before first light at Keck 1. Magical.

I might be interested in making a run at your CR2 with DXO, just to see what we might get with minimal user intervention. I don't expect anything particularly noteworthy, but it might be interesting from a comparative workflow point of view.
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #176 on: August 28, 2014, 02:52:54 am »

Hi ppmax2,

Yes, I would also be interested to try your raw image with FocusMagic and ImageJ - as I expect that your RT image is about as good as you will get with RT, it would be interesting to see how two other deconvolution tools compare.

Robert
« Last Edit: August 28, 2014, 03:11:29 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #177 on: August 28, 2014, 03:10:26 am »

Another observation regarding FocusMagic: I mentioned the low noise boosting ... well, it's fairly clear that FM uses an edge mask.  If you look at a (slanted) edge you can see clearly that there is an area near the edge where the noise is boosted. The result is much the same as with Smart Sharpen using an edge mask.  A bit disappointing, especially since there is no control over the edge mask (IMO, a small amount of noise-level sharpening can be visually beneficial).

This really puts the Bart/ImageJ approach in a very good light, as the same noise boosting is not apparent with this technique even without the use of an edge mask (but of course higher sharpening levels could be used with an edge mask).

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #178 on: August 28, 2014, 06:46:09 am »

Quote
I've now had a chance to do a little more testing and I thought these results could be of interest.

Hi Robert,

Thanks for the feedback.

One question, out of curiosity: did you also happen to record the Imatest "Corrected" (for "standardized sharpening") values? In principle, Imatest does its analysis on linearized data, either directly from Raw (by using the same raw conversion engine for all comparisons) or by linearizing the gamma-adjusted data by a gamma approximation, or an even more accurate OECF response calibration. Since gamma-adjusted and sharpened input (the sharpening can be a local contrast adjustment) will influence the resulting scores, it offers a kind of correction mechanism to better level the playing field for already sharpened images.

Quote
I've compared capture sharpening in Lightroom/ACR, Photoshop Smart Sharpen, FocusMagic and Bart's Kernel with ImageJ on a focused image and on one slightly out of focus.  I used Imatest Studio slanted edge 10-90%.  Here are the results:

With the local contrast distortions of the scores in mind, the results are about as one would expect them to be, but it's always nice to see the theory confirmed ...

Quote
In the first set of results, ACR gave a much lower result with an Amount of 40. Increasing that to 50 made a big difference at the cost of slight halos.  Smart Sharpen sharpens the noise beautifully  :), so it really needs an edge mask (but with an edge mask it does a very good job).

This explains why the acutance boost of mostly USM (with some deconvolution mixed in) requires a lot of masking to keep the drawbacks of that method (halos and noise amplification depending on radius setting) in check.

Quote
Focus Magic gave the cleanest result with IJ not far behind.  Any of these sharpening tools would do a good job of capture sharpening with this image (with edge masks for ACR and Smart Sharpen).

With the added note of real resolution boost for the deconvolution based methods, and simulated resolution by acutance boost of the USM based methods. That will make a difference as the output size goes up, but at native to reduced pixel sizes they would all be useful to a degree.

Quote
I think FocusMagic suffers from the integer radius settings; Smart Sharpen suffers from noise boosting; LR/ACR needs careful handling to avoid halos but the Masking feature is very nice. ImageJ/Bart is a serious contender. Overall, any of these sharpening/deconvolution tools will do a good job if used with care, but FocusMagic in particular needs care on blurred images (IMO, of course :)).

We also need to keep in mind whether we are Capture sharpening or doing something else. Therefore, the avoidance of halos and other edge artifacts (like 'restoring' aliasing artifacts and jaggies) may require reducing the amount settings where needed, or using masks to apply different amounts of sharpening in different parts of the image (e.g. selections based on High-pass filters or blend-if masks to reduce clipping). A tool like the Topaz Labs "Detail" plugin allows one to do several of these operations (including deconvolution) in a very controlled fashion, and not only does so without the risk of producing halos, but also while avoiding color issues due to increased contrast.

I think the issue (if we can call it that) with FocusMagic is that it has to perform its magic at the single pixel level, where we already know that we really need more than 2 pixels to reliably represent non-aliased discrete detail. It's not caused by the single digit blur width input (we don't know how that's used internally in an unknown iterative deconvolution algorithm) as such IMHO.

That's why I occasionally suggest that FocusMagic may also be used after first upsampling the unsharpened image data. That would allow it to operate on a sub-pixel accurate level, although its success would then also depend on the quality of the resampling algorithm.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #179 on: August 28, 2014, 08:38:09 am »


Quote
One question, out of curiosity: did you also happen to record the Imatest "Corrected" (for "standardized sharpening") values? In principle, Imatest does its analysis on linearized data, either directly from Raw (by using the same raw conversion engine for all comparisons) or by linearizing the gamma-adjusted data by a gamma approximation, or an even more accurate OECF response calibration. Since gamma-adjusted and sharpened input (the sharpening can be a local contrast adjustment) will influence the resulting scores, it offers a kind of correction mechanism to better level the playing field for already sharpened images.

Yes, I’ve kept the information – for example this one is a horizontal edge using your deconvolution and IJ (7x7 matrix) with a scale of 1.25.  As you can see it’s perfectly ‘sharpened’. The slight overshoot/undershoot is because of the +25% on the scale.



 
Quote
I've compared capture sharpening in Lightroom/ACR, Photoshop Smart Sharpen, FocusMagic and Bart's Kernel with ImageJ on a focused image and on one slightly out of focus.  

With the local contrast distortions of the scores in mind, the results are about as one would expect them to be, but it's always nice to see the theory confirmed ...

I think it would be worth writing a Photoshop filter with this technique.  If I can find some time in the next few months I would be happy to have a go.

Quote
In the first set of results, ACR gave a much lower result with an Amount of 40. Increasing that to 50 made a big difference at the cost of slight halos.  Smart Sharpen sharpens the noise beautifully.  So it really needs an edge mask (but with an edge mask it does a very good job).

This explains why the acutance boost of mostly USM (with some deconvolution mixed in) requires a lot of masking to keep the drawbacks of that method (halos and noise amplification depending on radius setting) in check.

Yes, absolutely.  I took the shots at ISO 100 on a 1Ds3 (but at a slow shutter speed of 1/5th) so the images were very clean. To give a reasonable edge with Smart Sharpen the noise gets boosted significantly. This wouldn't be so obvious on a normal image, but with a flat gray area it's very easy to see.  Your deconvolution really does a very good job of restoring detail without boosting noise.  FocusMagic cheats a bit by using an edge mask, IMO, but my only bitch with that is that there is no user control over the mask.

Quote
Focus Magic gave the cleanest result with IJ not far behind.  Any of these sharpening tools would do a good job of capture sharpening with this image (with edge masks for ACR and Smart Sharpen).

With the added note of real resolution boost for the deconvolution based methods, and simulated resolution by acutance boost of the USM based methods. That will make a difference as the output size goes up, but at native to reduced pixel sizes they would all be useful to a degree.

Hugely important!

Quote
We also need to keep in mind whether we are Capture sharpening or doing something else. Therefore, the avoidance of halos and other edge artifacts (like 'restoring' aliasing artifacts and jaggies) may require reducing the amount settings where needed, or using masks to apply different amounts of sharpening in different parts of the image (e.g. selections based on High-pass filters or blend-if masks to reduce clipping). A tool like the Topaz Labs "Detail" plugin allows one to do several of these operations (including deconvolution) in a very controlled fashion, and not only does so without the risk of producing halos, but also while avoiding color issues due to increased contrast.

As you know, I don’t much like the idea of capture sharpening followed by output sharpening, so I would tend to use one stronger sharpening after resize. In the Imatest sharpening example above, I would consider the sharpening to be totally fine for output – but if I had used a scale of 1 and not 1.25 it would not have been enough.  I don’t see what is to be gained by sharpening once with a radius of 1 and then sharpening again with a radius of 1.25 … but maybe I’m wrong.

I do have the Topaz plug-ins and I find the Detail plug-in very good for Medium and Large Details, but not for Small Details, because that just boosts noise and requires an edge mask (so why not use Smart Sharpen, which has a lot more controls?).  So, to your point regarding Capture or Capture + something else, I would think that the Topaz Detail plug-in would be excellent for Creative sharpening, but not for capture/output sharpening.

The InFocus plug-in seems OK for deblurring, but on its own it's not enough; however, with a small amount of Sharpen added (same plug-in) it does a very good job.  Here's an example:



Apart from the undershoot and slight noise boost (acceptable without an edge mask IMO) it’s pretty hard to beat a 10-90% edge rise of 1.07 pixels!  (This is one example of two-pass sharpening that’s beneficial, it would seem  :)).

Quote
I think the issue (if we can call it that) with FocusMagic is that it has to perform its magic at the single pixel level, where we already know that we really need more than 2 pixels to reliably represent non-aliased discrete detail. It's not caused by the single digit blur width input (we don't know how that's used internally in an unknown iterative deconvolution algorithm) as such IMHO.

That's why I occasionally suggest that FocusMagic may also be used after first upsampling the unsharpened image data. That would allow it to operate on a sub-pixel accurate level, although its success would then also depend on the quality of the resampling algorithm.

Yes, and as you know I would favour resizing before ‘Capture sharpening’ in any case.  


Robert
« Last Edit: August 28, 2014, 08:43:42 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana