
Author Topic: Sharpening ... Not the Generally Accepted Way!  (Read 59104 times)

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #180 on: August 28, 2014, 09:48:45 am »

As you know, I don’t much like the idea of capture sharpening followed by output sharpening, so I would tend to use one stronger sharpening after resize. In the Imatest sharpening example above, I would consider the sharpening to be totally fine for output – but if I had used a scale of 1 and not 1.25 it would not have been enough.

I agree, and it's easier when one only has to consider the immediate sharpening to be performed, and not something that may or may not be done much later in the workflow.

Quote
I don’t see what is to be gained by sharpening once with a radius of 1 and then sharpening again with a radius of 1.25 … but maybe I’m wrong.

The only potential benefit is that one can use different types of sharpening, but in practice that does not make too much of a difference if the sharpening was already of the deconvolution kind, and not only an acutance boost. Once resolution is restored, acutance enhancement goes a long way.
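
To make that distinction concrete, here is a minimal unsharp-mask sketch in Python (a generic illustration, not any particular plug-in's implementation); it only steepens local edge contrast and restores no resolution, which is exactly why it works best after a deconvolution pass:

Code:
# Minimal unsharp mask (acutance boost); assumes a float image in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=1.0, amount=1.0):
    # Add back the high-pass residual: edges get steeper,
    # but no resolution is restored.
    low_pass = gaussian_filter(img, sigma=sigma)
    return np.clip(img + amount * (img - low_pass), 0.0, 1.0)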

Quote
I do have the Topaz plug-ins and I find the Detail plug-in very good for Medium and Large Details, but not for Small Details because that just boosts noise and requires an edge mask (so why not use Smart Sharpen, which has a lot more controls?).

I have the same observations, but the noise amplification in "Detail" can be reduced with a negative "boost" adjustment. There is also a "Deblur" control that specifically does deconvolution at the smallest pixel level, instead of the more wavelet-oriented boosts of broader spatial-frequency ranges.

Quote
So, to your point regarding Capture or Capture + something else, I would think that the Topaz Detail plug-in would be excellent for Creative sharpening, but not for capture/output sharpening.

The "Deblur" control might work for deconvolution based Capture sharpening, especially if one doesn't have other tools. Output sharpening is a whole other can of worms, because viewing distance needs to be factored in as well as some differences in output media. However, not all matte media are also blurry. On the contrary, some are quite sharp despite a reduced contrast and/or surface structure. Even Canvas can be real sharp, and surface structures can be quite different. I've had large canvas output done at 720 PPI, FM deconvolution sharpened at that native printer output size, and the results were amazing

Quote
The InFocus plug-in seems OK for deblur, but on its own it’s not enough; however, with a small amount of Sharpen added (same plug-in) it does a very good job.

Yes, its main difficulty in use is that the radius is not a good predictor of the range it affects. I assume they don't define the Radius in sigma units, but rather in something like pixels (although at the smallest radii it does tend to behave more like sigma); maybe full-width-half-maximum (FWHM, i.e. 2.3548 x the Gaussian sigma for the diameter, or 1.1774 x for the radius) is the actual dimension they use. It often seems to do a better job after first upsampling the image, so maybe its algorithms try too hard to recover detail at the single-pixel level and produce artifacts instead. The upsampled image, effectively oversampled, is then harder to push too far. I hope that an updated version (when they get around to updating it) will also allow user-generated PSF input, and maybe a choice between algorithms.
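
For reference, here are those conversions in Python; whether InFocus actually uses FWHM is, as said, only a guess:

Code:
# Sigma/FWHM conversions; 2.3548 is 2*sqrt(2*ln(2)),
# the FWHM of a unit-sigma Gaussian.
import math

FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~2.3548

def sigma_to_fwhm(sigma):
    return FWHM_PER_SIGMA * sigma

def fwhm_to_sigma(fwhm):
    return fwhm / FWHM_PER_SIGMA

# E.g. a plug-in "radius" of 2 px, read as half the FWHM, would correspond
# to a Gaussian sigma of about 2 / 1.1774 = 1.70 px.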

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #181 on: August 28, 2014, 10:25:32 am »

I agree, and it's easier when one only has to consider the immediate sharpening to be performed, and not something that may or may not be done much later in the workflow.

The only potential benefit is that one can use different types of sharpening, but in practice that does not make too much of a difference if the sharpening was already of the deconvolution kind, and not only an acutance boost. Once resolution is restored, acutance enhancement goes a long way.

I have the same observations, but the noise amplification in "Detail" can be reduced with a negative "boost" adjustment. There is also a "Deblur" control that specifically does deconvolution at the smallest pixel level, instead of the more wavelet-oriented boosts of broader spatial-frequency ranges.

The "Deblur" control might work for deconvolution based Capture sharpening, especially if one doesn't have other tools.

I clearly need to have a good look at the Topaz sharpening options :) - so far I haven't used Topaz much at all for anything, but it seems like there's some quite good stuff there.

Quote
Output sharpening is a whole other can of worms, because viewing distance needs to be factored in, as well as differences between output media. However, not all matte media are blurry; on the contrary, some are quite sharp despite reduced contrast and/or surface structure. Even canvas can be really sharp, and surface structures can be quite different. I've had large canvas output done at 720 PPI, FM deconvolution sharpened at that native printer output size, and the results were amazing.

I take it you just used FM deconvolution on its own, without any further output sharpening?  I'm not saying that a two-pass sharpen isn't sometimes necessary, but I find in general, if you know your paper (and especially if you don't like halos and artifacts), that one fairly delicate and careful sharpen/deconvolution aimed at the output resolution and size gives really good results.  Of course if there is some camera shake then that has to be sorted out first.

What do you do if your image is a bit out-of-focus?  Do you first correct for the base softening due to the AA filter etc., and then correct for the out-of-focus, or do you attempt to do it in one go?


Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #182 on: August 28, 2014, 11:26:23 am »

I clearly need to have a good look at the Topaz sharpening options :) - so far I haven't used Topaz much at all for anything, but it seems like there's some quite good stuff there.

There are only so many hours in a day, one has to prioritize ..., which is why I like to share my findings and hope for others to do the same. What I find useful is to reduce all 3 (small, medium, large) details sliders to -1.00, and then in turn restore one slider at a time to 0.00 or more to see exactly which detail is being targeted. The Boost sliders can be reduced for less effect (I think the targeting is based on the source contrast of the specific feature size). Boosting the small details also increases noise, so reducing the boost will reduce the amplification of low-contrast noise, while maintaining some of the higher-contrast small detail.

The color-targeted Cyan-Red / Magenta-Green / Yellow-Blue luminance balance controls are also very useful for bringing out detail or suppressing it, because many complementary colors do not reside directly next to each other. There is also an Edge-aware masking function that allows one to paint the selected detail adjustments in or out. One can also work in stages and "Apply" intermediate results. It's a very potent plugin.

Quote
I take it you just used FM deconvolution on its own, without any further output sharpening?

Yes, all that was required was 2 rounds of FM deconvolution sharpening with different width settings at the final output size, because the original was already very sharp in the limited DOF zone. One round for the upsampling, and another for the finest (restored) detail.

Quote
What do you do if your image is a bit out-of-focus?  Do you first correct for the base softening due to the AA filter etc., and then correct for the out-of-focus, or do you attempt to do it in one go?

In that case I probably would need too large a "blur width" setting, or several, and thus do a mild amount at original file size, and another after resampling. Of course my goal is to avoid blurred originals ..., and I usually succeed (I do lug my tripod or a monopod around a lot).

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

ppmax2

  • Jr. Member
  • **
  • Offline
  • Posts: 92
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #183 on: August 28, 2014, 02:04:51 pm »

Here's that CR2 for those who want to test with it. I'd love to see what can be done with it using the various tools mentioned:
http://ppmax.duckdns.org/public.php?service=files&t=af778de4fb2e78531e4d4058faf6061b


If you have any problems downloading please let me know.

PP
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #184 on: August 28, 2014, 05:23:20 pm »

There are only so many hours in a day, one has to prioritize ..., which is why I like to share my findings and hope for others to do the same.

Yes indeed ... not so many hours in a day (and this kind of testing is VERY time-consuming!).  So I really do appreciate all of your help (and that of others too, of course).

Quote
What I find useful is to reduce all 3 (small, medium, large) details sliders to -1.00, and then in turn restore one slider at a time to 0.00 or more to see exactly which detail is being targeted. The Boost sliders can be reduced for less effect (I think the targeting is based on the source contrast of the specific feature size). Boosting the small details also increases noise, so reducing the boost will reduce the amplification of low-contrast noise, while maintaining some of the higher-contrast small detail.

They’ve really gone slider-mad here!  I can see that the Small Details Boost may be useful in toning down noise introduced by the Small Details adjustment, but I don’t see any reason to use the Small Details adjustment at all, as the InFocus filter seems to me to do a better job.

The Medium and Large adjustments are a bit like USM with a large and very large radius, respectively.  But what is very nice with the Topaz filter is the ability to target shadows and highlights.  I think I’ll be using these!

Quote
The color-targeted Cyan-Red / Magenta-Green / Yellow-Blue luminance balance controls are also very useful for bringing out detail or suppressing it, because many complementary colors do not reside directly next to each other.

What's interesting here is that you're bringing tonal adjustments into a discussion about sharpening ... and absolutely correctly IMO.  What we're looking for is to bring life to our images, and detail is only one small (but not insignificant!) aspect of it.  I've just played around with the tonal adjustments you mentioned in Topaz and they are really very good.  I just picked a rather flat image of an old castle on an estuary and with a few small tweaks the whole focus of the image was brought onto the castle and promontory - and what was a not very interesting image has become not bad at all.

I will definitely be using this feature!

Quote
Yes, all that was required was 2 rounds of FM deconvolution sharpening with different width settings at the final output size, because the original was already very sharp in the limited DOF zone. One round for the upsampling, and another for the finest (restored) detail.

OK … this is where I have a problem/don’t understand.  If I understand you correctly, you used FM first to correct your original (already nicely focused) image to restore fine detail (lost by lens/sensor etc). Then you upsampled and used FM again to correct the softness caused by the upsampling.  Why not leave the original without correction, upsample, and then use FM once?  Whatever softness is in the original image will be upsampled so the deconvolution radius will have to be increased by the same ratio as the upsampling, then you add a bit more strength, to taste, to correct for any softness introduced by the upsampling.

I’ve given a few examples that seem to show that there is no downside to this (the upside is that any over-enthusiasm in the ‘capture’ sharpening won’t be amplified by the upsampling), but so far I haven’t seen an example where sharpen/upsize/sharpen is better.  Still, this is probably splitting hairs, and either approach will work (in the right hands  :)).

Quote
In that case I probably would need too large a "blur width" setting, or several, and thus do a mild amount at original file size, and another after resampling. Of course my goal is to avoid blurred originals ..., and I usually succeed (I do lug my tripod or a monopod around a lot).

Yes, I expect this is a linear problem, so doing the standard deblur for your lens/camera followed by a deblur for the out-of-focus blur would probably be a good idea (rather than trying to fix everything in one go).

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #185 on: August 28, 2014, 06:25:16 pm »

They’ve really gone slider-mad here!  I can see that the Small Details Boost may be useful in toning down noise introduced by the Small Details adjustment, but I don’t see any reason to use the Small Details adjustment at all, as the InFocus filter seems to me to do a better job.

Well, not exactly. The Small Details adjustment adjusts the amplitude of 'small feature detail'. Small is defined not as a fixed number of pixels but in relation to the total image size. InFocus, instead, deconvolves and optionally sharpens more traditionally, based on certain fixed blur dimensions in pixels.

Quote
The Medium and Large adjustments are a bit like USM with a large and very large radius, respectively.

Only a small bit, but without any risk of creating halos!

Quote
But what is very nice with the Topaz filter is the ability to target shadows and highlights.

Yes, and that is in addition to the overall settings in the detail panel. It's a bit confusing at first, but they each allow and remember their own settings of the detail sliders.

Quote
OK … this is where I have a problem/don’t understand.  If I understand you correctly, you used FM first to correct your original (already nicely focused) image to restore fine detail (lost by lens/sensor etc). Then you upsampled and used FM again to correct the softness caused by the upsampling.  Why not leave the original without correction, upsample, and then use FM once?

Not exactly. You can upsample an unsharpened image, and apply 2 deconvolutions with different widths at the output size. So e.g. an upsample to 300% might require a blur width of 4 or 5, but can be followed by one of 1 or 2 (with a lower amount).

Quote
Whatever softness is in the original image will be upsampled so the deconvolution radius will have to be increased by the same ratio as the upsampling, then you add a bit more strength, to taste, to correct for any softness introduced by the upsampling.

Yes, the original optical blur is scaled to a larger dimension, but may be diffraction-dominated or defocus-dominated. That would lead to different PSF requirements. FocusMagic may be clever enough to optimize for either type of blur, but I'm not sure both would take the same blur width settings. In addition, the resizing will also create some blur, of yet another kind. There is a good chance that these PSFs will cascade into a Gaussian-looking combined blur, but sometimes we can do better with the above-mentioned dual deconvolution at the final size.
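
Here is a back-of-the-envelope Python sketch of that cascade, under the assumption that every stage is roughly Gaussian (so the widths add in quadrature); the numbers are invented for illustration:

Code:
# Back-of-the-envelope blur cascade, assuming each stage is roughly Gaussian
# (convolved Gaussians combine in quadrature). All numbers are made up.
import math

def combined_sigma(*sigmas):
    return math.sqrt(sum(s * s for s in sigmas))

scale = 3.0           # 300% upsample
sigma_capture = 1.0   # assumed optical/AA blur at native size, in pixels
sigma_resample = 0.8  # assumed blur added by the interpolation itself

# At output size the capture blur is scaled up,
# and the resampling blur stacks on top:
sigma_out = combined_sigma(scale * sigma_capture, sigma_resample)
print(round(sigma_out, 2))  # ~3.11 px: hence the larger first "blur width"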

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #186 on: August 28, 2014, 07:13:42 pm »

Hi pp,

Well, all I've done with your image is to apply FocusMagic to it ... and some tonal adjustments in Lightroom.  Your image has color differences which I haven't tried to match. The vertical lines in your image are very clean - but the rest of the image is very soft ... which is a tradeoff, IMO.

Be interesting to get some views on which is the cleaner result :).

[image attachment]

(You can right-click on the image to see it full-size)

Well-taken shot, btw!!

Robert
« Last Edit: August 28, 2014, 07:37:40 pm by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #187 on: August 28, 2014, 07:32:56 pm »

Well, not exactly. The Small Details adjustment adjusts the amplitude of 'small feature detail'. Small is defined not as a fixed number of pixels but in relation to the total image size. InFocus, instead, deconvolves and optionally sharpens more traditionally, based on certain fixed blur dimensions in pixels.

Not exactly. You can upsample an unsharpened image, and apply 2 deconvolutions with different widths at the output size. So e.g. an upsample to 300% might require a blur width of 4 or 5, but can be followed by one of 1 or 2 (with a lower amount).

Yes, the original optical blur is scaled to a larger dimension, but may be diffraction-dominated or defocus-dominated. That would lead to different PSF requirements. FocusMagic may be clever enough to optimize for either type of blur, but I'm not sure both would take the same blur width settings. In addition, the resizing will also create some blur, of yet another kind. There is a good chance that these PSFs will cascade into a Gaussian-looking combined blur, but sometimes we can do better with the above-mentioned dual deconvolution at the final size.

Cheers,
Bart

I need to play around with Topaz more ... but I can see that there is a lot there.

I understand what you're saying about the upsampling deconvolutions.  Effectively what you are doing (after the resize/deconvolution) is to do a second deconvolution with a smaller radius and amount if you find that the image is still too soft (and the first deconvolution cannot be adjusted to give you the optimum sharpness).  Of course that makes perfect sense: there is no cast-in-concrete formula and different images with different resizing will require different approaches.  I guess what I'm saying is that, as far as possible, multiple sharpening passes should be the exception rather than the rule.  It's a sort of campaign to remind us that we can do more harm than good with what is often our flaithiúlach (Gaelic, meaning over-generous, as in buying drinks for the whole bar :)) approach to sharpening.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

ppmax2

  • Jr. Member
  • **
  • Offline
  • Posts: 92
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #188 on: August 28, 2014, 07:47:32 pm »

Hi pp,

Well, all I've done with your image is to apply FocusMagic to it ... and some tonal adjustments in Lightroom.  Your image has color differences which I haven't tried to match. The vertical lines in your image are very clean - but the rest of the image is very soft ... which is a tradeoff, IMO.

Be interesting to get some views on which is the cleaner result :).

[image attachment]

(You can right-click on the image to see it full-size)

Well-taken shot, btw!!

Robert

Hi Robert--

Wow--FM looks to be a gem of a tool. Compared to RT, I think your result has a bit more definition, especially on the guardrail that encircles the telescope. The weather vanes on the top look a bit more defined as well, and the vertical lines on the rear of the building look good too.

Is there any chance of posting an uncropped version? I'd like to see what the detail looks like in the lower portion of the image, especially in the shadow/noise areas.

Also, what did you do to embed the full size image that can be viewed by right-click?

Nice job and thanks for the render!

PP
Logged

sniper

  • Sr. Member
  • ****
  • Offline
  • Posts: 670
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #189 on: August 29, 2014, 05:25:50 am »

Bart, forgive the slightly off-topic question, but what is the structure in your picture?

Regards Wayne
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #190 on: August 29, 2014, 06:04:09 am »

Bart, forgive the slightly off-topic question, but what is the structure in your picture?

Hi Wayne,

I'm sorry, but I do not understand the question. Maybe you are referring to ppmax2's picture?

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #191 on: August 29, 2014, 06:24:05 am »

Hi Robert--

Wow--FM looks to be a gem of a tool. Compared to RT, I think your result has a bit more definition, especially on the guardrail that encircles the telescope. The weather vanes on the top look a bit more defined as well, and the vertical lines on the rear of the building look good too.

Is there any chance of posting an uncropped version? I'd like to see what the detail looks like in the lower portion of the image, especially in the shadow/noise areas.

Also, what did you do to embed the full size image that can be viewed by right-click?

Nice job and thanks for the render!

PP

Hi, sure ... you can download the image here: http://www.irelandupclose.com/customer/LL/TestImage-Full.jpg

Before opening into Photoshop I did a small amount of luminance and color noise reduction in ACR - but very little, as the image is very clean.  There's a tiny amount of shadow noise, but that could have been reduced by using an edge mask with FM (although FM is pretty good at not boosting noise).  But even as it stands you could lighten the image considerably without noise being an issue, at native resolution or upsized.

I'm sure Bart or someone who has done a lot of research into deconvolution could do a better job than I did.

Almost forgot ... I use an img link to an image on my website.

Robert
« Last Edit: August 29, 2014, 06:27:08 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

ppmax2

  • Jr. Member
  • **
  • Offline
  • Posts: 92
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #192 on: August 29, 2014, 07:15:37 am »

Hello sniper--

That building is the housing for the Subaru telescope on top of Mauna Kea volcano on the Big Island of Hawaii. In the image below, it's the one to the left of the two orbs (Keck 1 and Keck 2):

[image attachment]

Thanks for posting the full size, Robert--FM looks like it did a really nice job...I'll have to check that out now ;)

thx--
PP
Logged

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #193 on: August 29, 2014, 11:33:39 am »

The InFocus plug-in seems OK for deblur, but on its own it’s not enough; however, with a small amount of Sharpen added (same plug-in) it does a very good job.  Here’s an example...

Apart from the undershoot and slight noise boost (acceptable without an edge mask IMO) it’s pretty hard to beat a 10-90% edge rise of 1.07 pixels!  (This is one example of two-pass sharpening that’s beneficial, it would seem  :)).

Hi Robert,

I haven't been able to read all of it, but you have covered a lot of excellent ground and come a long way in this thread - good for you, and thank you for doing it.  There was a recent thread around here of a gentleman who was able to undo a fair amount of known blur using an FT library; I wonder if any of that can be used by us non-coders.

For my landscapes I typically use InFocus in its Estimate mode (Radius 2, Softness 0.3, Suppress 0.2) for capture sharpening, sometimes followed by a touch of Local Contrast at low opacity.  That seems to take care of the small to medium range detail quite well.  If I see any squigglies from InFocus I mask those out.  Imho one of the limitations we are running into is that we are deconvolving based on a Gaussian PSF, which is not necessarily representative of the camera system's actual intensity distribution.

But along these lines, since you are playing with Imatest, I have this consideration for you: a good blind guess at what deconvolution radius to use for a guessian :) PSF is that which would result in the same MTF50 as the MTF50 produced by the edge when measured off the raw data.  In other words, a good guess at the radius to use for deconvolution is (excuse the Excel notation)

StdDev (Radius) = SQRT(-2*LN(0.5)) / (2*PI*MTF50) pixels

For example, if, when you fed the edge raw data to Imatest, it returned an MTF50 of 0.28 cy/px, a good guess at the Gaussian radius to use for deconvolution would be 0.67 pixels.
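
For convenience, here is the same formula transcribed to Python, with the derivation noted in the comments:

Code:
# Jack's formula in Python: for a Gaussian PSF,
# MTF(f) = exp(-2*pi^2*sigma^2*f^2), so solving MTF(f50) = 0.5
# for sigma gives the deconvolution radius estimate.
import math

def gaussian_sigma_from_mtf50(mtf50_cy_per_px):
    return math.sqrt(-2.0 * math.log(0.5)) / (2.0 * math.pi * mtf50_cy_per_px)

print(round(gaussian_sigma_from_mtf50(0.28), 2))  # 0.67, as in the example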

Jack


Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #194 on: August 29, 2014, 11:44:14 am »


Re: Topaz Detail:
The Small details adjustment, is adjusting the amplitude of 'small feature detail'. Small is not defined as a fixed number of pixels but rather small in relation to the total image size. InFocus instead, deconvolves and optionally sharpens more traditionally, based on certain fixed blur dimensions in pixels.


Yes, you're right - here is pp's image with USM inside the shape and Topaz Detail outside (both overdone to make it clearer).

[image attachment]

The Topaz Detail Small clearly brings out detail in the image (the clouds have gone from flat to having depth), as well as noise, so it might be good to reduce noise either before or after applying the filter - whereas USM just sharpens fine detail.  And as you say, USM also introduces halos.

So ... nice filter (especially considering all the rest of it)!

Robert
« Last Edit: August 29, 2014, 11:46:02 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #195 on: August 29, 2014, 12:00:31 pm »

Hi Robert,

I haven't been able to read all of it, but you have covered a lot of excellent ground and come a long way in this thread - good for you, and thank you for doing it.  There was a recent thread around here of a gentleman who was able to undo a fair amount of known blur using an FT library; I wonder if any of that can be used by us non-coders.

For my landscapes I typically use InFocus in its Estimate mode (Radius 2, Softness 0.3, Suppress 0.2) for capture sharpening, sometimes followed by a touch of Local Contrast at low opacity.  That seems to take care of the small to medium range detail quite well.  If I see any squigglies from InFocus I mask those out.  Imho one of the limitations we are running into is that we are deconvolving based on a Gaussian PSF, which is not necessarily representative of the camera system's actual intensity distribution.

But along these lines, since you are playing with Imatest, I have this consideration for you: a good blind guess at what deconvolution radius to use for a guessian :) PSF is that which would result in the same MTF50 as the MTF50 produced by the edge when measured off the raw data.  In other words, a good guess at the radius to use for deconvolution is (excuse the Excel notation)

StdDev (Radius) = SQRT(-2*LN(0.5)) / (2*PI*MTF50) pixels

For example, if, when you fed the edge raw data to Imatest, it returned an MTF50 of 0.28 cy/px, a good guess at the Gaussian radius to use for deconvolution would be 0.67 pixels.

Jack


Hi Jack - thanks for the tips ... and I'll give the radius estimate you suggest a try.  What I've done so far is to use Bart's suggestion of dividing the 10% to 90% edge rise (in pixels) by 2.5631, the number of sigmas between the 10% and 90% points on a Gaussian curve.  In the example I gave earlier I used both the horizontal and vertical figures and fed them into Bart's PSF tool, then used the kernel in ImageJ.  So far, Bart's tool is the only one I've found that allows an asymmetrical PSF, so it has a level of sophistication not generally present.  It would be very nice to have this technique in a Photoshop filter ... something to think about!
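
For anyone who wants to replicate that arithmetic, here is a rough numpy sketch (the kernel size and rise values are placeholders, and Bart's actual tool may differ in detail):

Code:
# Sigma from the measured 10-90% edge rise, then a normalized (optionally
# asymmetric) Gaussian kernel. Kernel size and rise values are illustrative.
import numpy as np

RISE_10_90_PER_SIGMA = 2.5631  # 2 x the 90th-percentile z-score of a Gaussian

def sigma_from_edge_rise(rise_px):
    return rise_px / RISE_10_90_PER_SIGMA

def gaussian_kernel(sigma_x, sigma_y, size=7):
    # sigma_x != sigma_y gives the asymmetric PSF mentioned above.
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2))
    return k / k.sum()  # normalize so the kernel preserves overall brightness

kernel = gaussian_kernel(sigma_from_edge_rise(1.4), sigma_from_edge_rise(1.6))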

I'll have another look at this when I have a bit of time - the tests I did with Imatest were on a not-very-good paper (Epson Enhanced Matte), so it's probable that some of the image softness came from the print - also, I used a 24-105 f/4L lens from quite a distance back and I would like to try again with a prime lens.

Cheers,

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

sniper

  • Sr. Member
  • ****
  • Offline
  • Posts: 670
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #196 on: August 29, 2014, 12:39:49 pm »

PPmax2 - thank you, I just wondered what sort of building it was (nice pic by the way).

Bart, my apologies - I goofed and thought it was your pic.

Regards to both, Wayne
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #197 on: August 29, 2014, 01:01:45 pm »


But along these lines, since you are playing with Imatest, I have this consideration for you: a good blind guess at what deconvolution radius to use for a guessian :) PSF is that which would result in the same MTF50 as the MTF50 produced by the edge when measured off the raw data.  In other words, a good guess at the radius to use for deconvolution is (excuse the Excel notation)

StdDev (Radius) = SQRT(-2*LN(0.5)) / (2*PI*MTF50) pixels


Pretty close to Bart's method! 0.56 by Bart, 0.54 by you ... and 1.0 Bart, 0.9 you :)

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #198 on: August 29, 2014, 02:23:33 pm »

Pretty close to Bart's method! 0.56 by Bart, 0.54 by you ... and 1.0 Bart, 0.9 you :)

Excellent then.  You can read the rationale behind my approach here.

Jack
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #199 on: August 30, 2014, 03:46:45 am »

Excellent then.  You can read the rationale behind my approach here.

Jack

Thanks Jack - very interesting and a bit scary!  I thought I would check out what happens using Bart's deconvolution, based on the correct radius and then increasing it progressively, and this is what happens:

[image attachments]

The left-hand image has the correct radius of 1.06; the one on the right has a radius of 4.  As you can see, all that happens is that there is a significant overshoot in the MTF at a radius of 4 (this overshoot increases progressively from a radius of about 1.4).

The MTF remains roughly Gaussian, unlike the one in your article … and there is no sudden transition around the Nyquist frequency, nor a shoot-off to infinity as the radius increases.  Are these effects due to division by zero(ish) in the frequency domain … or to something else?
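
To illustrate where that "division by zero(ish)" intuition comes from, here is a toy Wiener-style frequency-domain deconvolver in Python (emphatically a generic sketch, not FocusMagic's algorithm):

Code:
# Toy frequency-domain deconvolution: where the blur's transfer function H
# approaches zero, a naive inverse (F/H) explodes; the Wiener-style k term
# keeps the division bounded.
import numpy as np

def wiener_deconvolve(img, psf, k=1e-2):
    # img and psf: 2D float arrays of the same shape, psf centered in the array.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + k)  # ~1/H where |H| is large, ~0 where tiny
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))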

There is also no flattening of the MTF as per your article – the deconvolution that I'm showing seems more like a USM effect, as you can see here, where I've applied a USM with a radius of 1.1:

[image attachment]
FocusMagic, on the other hand, goes progressively manic as the radius is increased from 2 (first image, OK) to 3, then to 4, and finally 6.

[image attachments]

What do you think, Bart and Jack (and anyone else who understands deconvolution :))?

Robert
« Last Edit: August 30, 2014, 05:14:56 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana