
Author Topic: Sharpening ... Not the Generally Accepted Way!  (Read 50621 times)

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8901
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #200 on: August 30, 2014, 05:46:46 am »

Thanks Jack - very interesting and a bit scary!

Robert,

The approach that Jack takes (in the frequency domain) is pretty close to what I do (in the spatial domain). We both assume (based on both theory and empirical evidence) that the cascade of blur sources will usually result in a Gaussian type of PSF.

Jack takes a medium response (MTF50) as the pivot point on the actual MTF curve and, from the corresponding value of a pure Gaussian blur function at that point, calculates the required sigma. In principle that's fine, although one might also try to find a sigma that minimizes the absolute difference between the actual MTF and that of the pure Gaussian over a wider range. Although it's a reasonable single-point optimization, maybe MTF50 is not the best pivot point; maybe e.g. MTF55 or MTF45 would give an overall better match, who knows.
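
In code form, that single-point inversion is straightforward. A minimal sketch (Python; assumes the pure Gaussian MTF model MTF(f) = exp(-2 * pi^2 * sigma^2 * f^2), with sigma in pixels and f in cycles/pixel):

```python
import math

def sigma_from_mtf50(f50):
    """Sigma (pixels) of the pure Gaussian whose MTF passes through
    0.5 at the measured MTF50 frequency f50 (cycles/pixel)."""
    # Solve 0.5 = exp(-2 * pi^2 * sigma^2 * f50^2) for sigma.
    return math.sqrt(math.log(2.0) / 2.0) / (math.pi * f50)

# An MTF50 of 0.25 cycles/pixel implies a Gaussian sigma of ~0.75 px.
sigma = sigma_from_mtf50(0.25)
```

Picking a different pivot (MTF55, MTF45) only means replacing log(2) with -log(pivot) in the same formula.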

My approach also tries to fit a Gaussian (edge-spread function) to the actual data, but does so at two points (the 10% and 90% rise) on the edge profile in the spatial domain. That may result in a slightly different optimization, e.g. in the case of veiling glare, which raises the dark tones more than the light tones, also on the slanted-edge transition profile. My webtool attempts to minimize the absolute difference between the entire edge response and the Gaussian model. It therefore attempts a better overall edge-profile fit, which is usually most difficult for the dark side of the edge, due to veiling glare distorting the Gaussian blur profile. That also gives an indication of how much of a role veiling glare plays in total image quality, and of how it complicates a successful resolution restoration by reducing the lower frequencies of the MTF response. BTW, Topaz Detail can be used to adjust some of that with the large detail control.
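
The two-point fit also has a closed form under the Gaussian assumption: the 10% and 90% points of a Gaussian edge-spread function sit at about +/-1.2816 sigma. A hypothetical helper (not the webtool's actual code) illustrating the conversion:

```python
from statistics import NormalDist

def sigma_from_rise(rise_10_90):
    """Sigma (pixels) of a Gaussian ESF with the given 10-90% rise
    distance (pixels); the rise spans 2 * inv_cdf(0.9) ~ 2.563 sigma."""
    return rise_10_90 / (2.0 * NormalDist().inv_cdf(0.9))

# A measured 10-90% rise of 1.83 px corresponds to sigma ~ 0.71 px.
sigma = sigma_from_rise(1.83)
```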

Quote
 I thought I would check out what happens using Bart's deconvolution, based on the correct radius and then increasing it progressively, and this is what happens:

 

The left-hand image has the correct radius of 1.06, the one at the right has a radius of 4.  As you can see, all that happens is that there is a significant overshoot on the MTF at 4 (this overshoot increases progressively from a radius of about 1.4).

The MTF remains roughly Gaussian unlike the one in your article and there is no sudden transition around the Nyquist frequency or shoot off to infinity as the radius increases.  Are these effects due to division by zero(ish) in the frequency domain or to something else?

Jack's model is purely mathematical, and as such makes it possible to predict the effects of full restoration in the frequency domain. However, anything that happens above the Nyquist frequency (0.5 cycles/pixel) folds back (mirrors) into the below-Nyquist range and manifests itself as aliasing in the spatial domain (so you won't see it as amplification above Nyquist in the actually sharpened version, but as a boost below Nyquist).
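
The fold-back is easy to state numerically. Sampling at 1 sample per pixel, any component boosted at a frequency above Nyquist reappears at the mirrored frequency (a toy illustration, not from any deconvolution code):

```python
def aliased_frequency(f):
    """Apparent frequency (cycles/pixel) of a component at f after
    sampling at 1 sample/pixel; frequencies fold about multiples of 0.5."""
    return abs(f - round(f))

# Energy amplified at 0.7 cycles/pixel shows up at 0.3 cycles/pixel,
# while anything already below Nyquist stays where it is.
folded = aliased_frequency(0.7)
```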

Also, since the actual signal's MTF near the Nyquist frequency is very low, there is little detail (with a low S/N ratio) left to reconstruct, so there will be issues with noise amplification. MTF curves need to be interpreted, because actual images are not the same as simplified mathematical models (simple numbers do not tell the whole story; they just show a certain aspect of it, like the spatial frequency response of a system in an MTF, and a separation of aliasing due to sub-pixel phase effects of fine detail).

Cheers,
Bart
« Last Edit: August 30, 2014, 05:56:30 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #201 on: August 30, 2014, 06:47:47 am »


Quote
Jack's model is purely mathematical, and as such makes it possible to predict the effects of full restoration in the frequency domain. However, anything that happens above the Nyquist frequency (0.5 cycles/pixel) folds back (mirrors) into the below-Nyquist range and manifests itself as aliasing in the spatial domain (so you won't see it as amplification above Nyquist in the actually sharpened version, but as a boost below Nyquist).

Also, since the actual signal's MTF near the Nyquist frequency is very low, there is little detail (with a low S/N ratio) left to reconstruct, so there will be issues with noise amplification. MTF curves need to be interpreted, because actual images are not the same as simplified mathematical models (simple numbers do not tell the whole story; they just show a certain aspect of it, like the spatial frequency response of a system in an MTF, and a separation of aliasing due to sub-pixel phase effects of fine detail).


Yes ... practice and theory are not necessarily a perfect fit!  Also, the prints I'm using are certainly not the sharpest, so no matter how well the algorithm reconstructs the image, the edges will never be sharp.  The contrast on the Imatest chart is also quite low (Lab 30% and 80%), which certainly affects the results.

What seems to work very well, giving no artifacts at all, is to use the base radius and then a smaller radius (I used half).  Here are the results using FM (2/100%, 1/100%) and IJ (1.2/0.6):

 

And here is the image with the contrast changed to Lab 10% and 90%, with the same FM settings:

 



As you can see, the contrast adjustment makes a big difference.

Robert
« Last Edit: August 30, 2014, 06:59:03 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Mike Sellers

  • Sr. Member
  • ****
  • Offline
  • Posts: 666
    • Mike Sellers Photography
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #202 on: August 30, 2014, 09:53:34 am »

May I ask a question about a sharpening workflow for my Tango drum scanner? Should I leave sharpening turned on in the Tango (in which case, would there be any need for the capture sharpening stage?), or turn it off in the Tango software and then use the capture sharpening stage?
Mike
Logged

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 797
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #203 on: August 30, 2014, 11:08:51 am »

Quote
Jack takes a medium response (MTF50) as the pivot point on the actual MTF curve and, from the corresponding value of a pure Gaussian blur function at that point, calculates the required sigma. In principle that's fine, although one might also try to find a sigma that minimizes the absolute difference between the actual MTF and that of the pure Gaussian over a wider range. Although it's a reasonable single-point optimization, maybe MTF50 is not the best pivot point; maybe e.g. MTF55 or MTF45 would give an overall better match, who knows.

Correct, Bart and Robert.  It depends how symmetrical the two curves are about MTF50.  MTF50 seems to be fairly close to the mark, the figure is often available, and the one-parameter formula is easier to use than curve fitting.  But it's always a compromise.

Quote
Jack's model is purely mathematical, and as such makes it possible to predict the effects of full restoration in the frequency domain. However, anything that happens above the Nyquist frequency (0.5 cycles/pixel) folds back (mirrors) into the below-Nyquist range and manifests itself as aliasing in the spatial domain (so you won't see it as amplification above Nyquist in the actually sharpened version, but as a boost below Nyquist).

Also, since the actual signal's MTF near the Nyquist frequency is very low, there is little detail (with a low S/N ratio) left to reconstruct, so there will be issues with noise amplification. MTF curves need to be interpreted, because actual images are not the same as simplified mathematical models (simple numbers do not tell the whole story; they just show a certain aspect of it, like the spatial frequency response of a system in an MTF, and a separation of aliasing due to sub-pixel phase effects of fine detail).

Cheers,
Bart

Correct again. I was interested in an ideal implementation, to isolate certain parameters and understand the effect of changes in the variables involved.  Actual deconvolution algorithms have all sorts of additional knobs to control quite non-ideal real-life data and the shape of the resulting MTF: knobs related to noise, and sophisticated low-pass filters - which I did not show (I mention them in the next post).  Those do their job in FM and other plug-ins, which is why the resulting MTF curves are better behaved than my ideal examples.

However, imo the application of those knobs comes too early in the process, especially when the MTF curve is poorly behaved.  There is no point in boosting frequencies just to cut them back later with a low pass: noise is increased and detail information is lost that way.  On the contrary, the objective of deconvolution should be to restore without boosting too much - at least up to Nyquist.

So why not give us a chance to first attempt to reverse out the dominant components of the MTF based on their physical properties (f-number, AA, etc.), and only then resort to generic parameters based on Gaussian PSFs and low-pass filters?  At least take out the Airy and AA, then we'll talk (I am talking to you, Nik, Topaz and FM).

Jack
« Last Edit: August 30, 2014, 12:12:43 pm by Jack Hogan »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8901
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #204 on: August 30, 2014, 11:17:05 am »

May I ask a question about a sharpening workflow for my Tango drum scanner? Should I leave sharpening turned on in the Tango and then would there be any need for the capture sharpening stage or turn it off in the Tango software then use the capture sharpening stage?

Hi Mike,

Hard to say, but I think you should first scan with the aperture that gives the best balance between sharpness and graininess for the given image. You may well need to scan at a size that has to be downsampled for the final output, because you want to avoid undersampling the grain structure, as that will result in grain aliasing. Scanning at 6000-8000 PPI usually avoids grain aliasing.

That final output would require an analysis to see if there is room for deconvolution sharpening. If you already have FocusMagic, it would be simple enough to just give it a try; otherwise you could perhaps upload a crop of a sharp edge in the image for analysis (assuming a typical well-focused segment of an image). Defocused images will always require capture sharpening, unless only small/downsampled output is produced.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #205 on: August 30, 2014, 05:04:01 pm »


Quote
However, imo the application of those knobs comes too early in the process, especially when the MTF curve is poorly behaved.  There is no point in boosting frequencies just to cut them back later with a low pass: noise is increased and detail information is lost that way.  On the contrary, the objective of deconvolution should be to restore without boosting too much - at least up to Nyquist.

So why not give us a chance to first attempt to reverse out the dominant components of the MTF based on their physical properties (f-number, AA, etc.), and only then resort to generic parameters based on Gaussian PSFs and low-pass filters?  At least take out the Airy and AA, then we'll talk (I am talking to you, Nik, Topaz and FM).

Jack

Quote
My approach also tries to fit a Gaussian (edge-spread function) to the actual data, but does so at two points (the 10% and 90% rise) on the edge profile in the spatial domain. That may result in a slightly different optimization, e.g. in the case of veiling glare, which raises the dark tones more than the light tones, also on the slanted-edge transition profile. My webtool attempts to minimize the absolute difference between the entire edge response and the Gaussian model. It therefore attempts a better overall edge-profile fit, which is usually most difficult for the dark side of the edge, due to veiling glare distorting the Gaussian blur profile. That also gives an indication of how much of a role veiling glare plays in total image quality, and of how it complicates a successful resolution restoration by reducing the lower frequencies of the MTF response. BTW, Topaz Detail can be used to adjust some of that with the large detail control.

Hi Jack & Bart,

Given an actual MTF, could you produce a deconvolution kernel to properly restore detail, to the extent possible?  As opposed to assuming a Gaussian model, that is. Say this one here:



Robert
« Last Edit: August 30, 2014, 05:16:38 pm by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 797
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #206 on: August 31, 2014, 04:02:42 am »

Quote
Hi Jack & Bart,

Given an actual MTF, could you produce a deconvolution kernel to properly restore detail, to the extent possible?  As opposed to assuming a Gaussian model, that is. Say this one here:

Hi Robert,

I could give you a better answer if I could see the raw file that generated that output.

But for a generic answer, assuming we are talking about Capture Sharpening in the center of the FOV - that is, attempting to restore spatial resolution lost to blurring by the HARDWARE during the capture process - if one wants camera/lens-setup-specific PSFs for deconvolution, one should imo start by reversing out the blurring introduced by each easily modeled component of the hardware.

The easy ones are diffraction and AA.  So one could deconvolve with an Airy disk of the appropriate f-number and (typically) the PSF of a 4-dot beam splitter of the appropriate strength.  Next comes lens blur, which in the center of the image can often be modeled as a combination of a pillbox and a Gaussian.  Then one has all sorts of aberrations, not to mention blur introduced by demosaicing, sensor/subject motion, etc., which are very hard to model.
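
Those two 'easy' components do have closed-form MTFs, so a sketch of that part of the model is possible (Python; the wavelength, f-number and AA dot displacement below are illustrative assumptions, not measurements of any particular camera):

```python
import math

def mtf_diffraction(f, wavelength_mm=0.00055, f_number=8.0):
    """Diffraction MTF of an ideal circular aperture; f in cycles/mm."""
    fc = 1.0 / (wavelength_mm * f_number)        # diffraction cutoff frequency
    s = min(f / fc, 1.0)
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

def mtf_aa_axis(f, split_mm=0.0064):
    """One axis of a 4-dot beam-splitter AA filter with dot displacement
    split_mm; its first zero falls at f = 1/(2 * split_mm)."""
    return abs(math.cos(math.pi * f * split_mm))

# Combined MTF of just these two components at 50 cycles/mm:
combined = mtf_diffraction(50.0) * mtf_aa_axis(50.0)
```

Dividing a measured system MTF by this product would, in principle, leave the residual (lens blur, demosaicing, motion) to be modeled separately.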

But even for the easy ones, deconvolving them out is not easy :)  Hence the idea to instead assume that all the PSFs combine into a Gaussian and try to deconvolve using that.  The fact is, though, that the PSFs of the camera/lens as a system do not always combine into one that looks like a Gaussian - as you probably read here.  Therefore we mess with our image's spatial resolution in ways it was never meant to be messed with, and we get noise and artifacts that we need to mask out.  What we often do not realize is that we have compromised spatial resolution information elsewhere as well - but if it looks ok...

If you can share the raw file that generated the graph above I will take a stab at breaking down its 'easy' MTF components.

Jack

PS BTW to make modeling effective I work entirely on the objective raw data (blurring introduced by lens/sensor only) to isolate it from subjective components that would otherwise introduce too many additional variables: no demosaicing, no rendering, no contrast, no sharpening.  More or less in the center of the FOV.   Capture obtained by using good technique, so no shake.
« Last Edit: August 31, 2014, 04:07:02 am by Jack Hogan »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8901
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #207 on: August 31, 2014, 04:45:47 am »

Quote
Hi Jack & Bart,

Given an actual MTF, could you produce a deconvolution kernel to properly restore detail, to the extent possible?  As opposed to assuming a Gaussian model, that is.

Hi Robert,

I'm not a mathematician, so I'm not 100% sure, but I don't think that is directly possible from an arbitrary MTF. An MTF has already lost some of the information required to rebuild the original data. It's a bit like trying to reconstruct a single line of an image from its histogram. That's not a perfect analogy either, but you get the idea. The MTF only tells us with which contrast certain spatial frequencies will be recorded; it no longer has, e.g., information about their phase (position).
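
That phase loss is easy to demonstrate: two different blurs can share exactly the same MTF. A small numpy illustration (the kernel values are arbitrary):

```python
import numpy as np

# An asymmetric blur kernel and its mirror image: two different PSFs.
psf = np.array([0.5, 0.3, 0.2])
psf_mirror = psf[::-1]

# Their MTFs (magnitude spectra) are identical...
mtf_a = np.abs(np.fft.fft(psf, 64))
mtf_b = np.abs(np.fft.fft(psf_mirror, 64))
assert np.allclose(mtf_a, mtf_b)

# ...but the full spectra differ: the phase information that would
# distinguish the two blurs is exactly what the MTF throws away.
assert not np.allclose(np.fft.fft(psf, 64), np.fft.fft(psf_mirror, 64))
```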

That's why it helps to reverse-engineer the PSF, i.e. compare the image MTF of a known feature (e.g. an edge) to the model of known shapes, such as a Gaussian, and thus derive the PSF indirectly. This works pretty well for many images, until diffraction/defocus/motion becomes such a dominating component in the cascaded blur contributions that the combined blur becomes a bit less Gaussian-looking. In the cascade it will still be somewhat Gaussian (except for complex motion), so one can also attempt to model a weighted sum of Gaussians, or a convolution of a Gaussian with a diffraction or defocus blur PSF.

So we can construct a model of the contributing PSFs, but it will still be very difficult to do with absolute accuracy, and small differences in the frequency domain can have huge effects in the spatial domain.

I feel somewhat comforted by the remarks of Dr. Eric Fossum (the inventor of the CMOS image sensor) when he mentions that the design of things like microlenses, and their effect on the image, is too complicated to predict accurately, and that one usually resorts to trial and error rather than attempting to model it. That of course won't stop us from trying ... as long as we don't expect perfection, because that would probably never happen.

What we can do is model the main contributors, and see if eliminating their contribution helps.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #208 on: August 31, 2014, 06:47:02 am »

Quote
Hi Robert,

I could give you a better answer if I could see the raw file that generated that output.

Hi Jack,

You can get the raw file (and also the Imatest SFR and Edge plot) here: http://www.irelandupclose.com/customer/LL/Base.zip.  As we're getting more into areas of interest rather than strictly practical application (at this point), I would be very interested in what your procedure is for mapping the different components of the blurring.  I'm an engineer originally, so I do have some maths ... but it's very rusty at this stage, so if you do go into the maths I would appreciate an explanation :).  BTW ... I hope this is of interest to you too and not just a nuisance; if it is, please don't waste your time.  If you have any software algorithms, those I can normally follow without much help, as I've spent most of my life in software development.

Quote
But for a generic answer, assuming we are talking about Capture Sharpening in the center of the FOV - that is attempting to restore spatial resolution lost by blurring by the HARDWARE during the capture process - if one wants to get camera/lens setup specific PSFs for deconvolution one should imo start by reversing out the blurring introduced by each easily modeled component of the hardware.

Yes, well, that's really what caught my attention: if we could first reverse the damage caused by the AA filter, sensor, A/D, firmware processing ..., that would seem to be a really good first step.  The thing is, how do you separate this from this plus the lens?  Would it help to have two images taken in identical conditions with two prime lenses at the same focal length, aperture and shutter speed, for example?  What about the light source?  For the test image I used a Solux 4700K bulb, which has a pretty flat spectrum, sloping up towards the lower frequencies.

Quote
PS BTW to make modeling effective I work entirely on the objective raw data (blurring introduced by lens/sensor only) to isolate it from subjective components that would otherwise introduce too many additional variables: no demosaicing, no rendering, no contrast, no sharpening.  More or less in the center of the FOV.   Capture obtained by using good technique, so no shake.

The capture was reasonably well taken - however, I did not use mirror lock-up and the exposure was quite long (1/5th of a second). ISO 100, good tripod, remote capture, so no camera shake apart from the mirror.  The test chart is quite small and printed on Epson Enhanced Matte using an HP Z3100 ... so not the sharpest print, but as the shot was taken from about 1.75m away, any softness in the print is probably not a factor.  However, if you would like a better shot, I can redo it with a prime lens with mirror lock-up and increase the light to shorten the exposure.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #209 on: August 31, 2014, 07:27:25 am »

Quote
I'm not a mathematician, so I'm not 100% sure, but I don't think that is directly possible from an arbitrary MTF. An MTF has already lost some of the information required to rebuild the original data. It's a bit like trying to reconstruct a single line of an image from its histogram. That's not a perfect analogy either, but you get the idea. The MTF only tells us with which contrast certain spatial frequencies will be recorded; it no longer has, e.g., information about their phase (position).

Well, Bart, for someone who's not a mathematician you're doing pretty well!

What I was wondering ... and my maths certainly isn't up to this sort of thing ... is whether it is possible to capture an image of certain known shapes ... say a horizontal edge, a vertical edge, a circle of known size ... and from these model the distortion with reasonable accuracy (for the specific conditions under which the image was captured).  If we could do this for a specific setup with all the parameters as fixed as possible, including the light source, it would be a fantastic achievement (to my mind at least).  If we then introduced one additional thing - slight defocusing of the lens, for example - and were able to model that, and produce a deconvolution filter that would restore the image to the focused state ... well, that would be quite a step along the way.

What really impressed me was the blur/deblur macro example in ImageJ: the deconvolution completely reverses the blurring of the image. Of course the blurring function is fully known in this case, but what it illustrates very graphically is that for a really effective deconvolution the blurring function needs to be as fully known as possible.  I would have thought that with modern techniques and software it should be possible to photograph an image (with whatever complex shapes are required) and from this compute a very accurate blur function for that very particular setup.  If this were possible, then it would also be possible to take many captures at different focal lengths and apertures and so obtain a database describing the lens/camera/demosaicing.
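
That round trip is easy to reproduce outside ImageJ too. A minimal numpy sketch of the same idea (noise-free, kernel fully known, so plain inverse filtering succeeds):

```python
import numpy as np

rng = np.random.default_rng(1)
row = rng.random(64)                  # stand-in for one image row

# Blur with a fully known kernel (circular convolution via the FFT).
kernel = np.zeros(64)
kernel[:3] = [0.5, 0.3, 0.2]          # chosen so its OTF has no zeros
K = np.fft.fft(kernel)
blurred = np.fft.ifft(np.fft.fft(row) * K).real

# Inverse filtering: divide in the frequency domain.
restored = np.fft.ifft(np.fft.fft(blurred) / K).real
assert np.allclose(restored, row)     # essentially perfect recovery
```

Add a little noise before deconvolving, though, and the division amplifies it wherever |K| is small - which is the real-world catch.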

Other factors like camera shake and an out-of-focus lens seem secondary issues to me, as these are to a large extent within the photographer's control.

I'm certainly speaking from ignorance, but to me it's a case of: if we know the shape of the blur accurately, then we can deblur in either the spatial or the frequency domain, and we should be able to do it with a high degree of success.  Of course I know that there are random issues like noise ... but in the controlled test captures it should be possible to analyse the image and remove the noise, I would have thought.  Then in the real-world deblurring, perhaps noise removal should be the first step before deconvolution???  That's a question in itself :).

Quote
That's why it helps to reverse-engineer the PSF, i.e. compare the image MTF of a known feature (e.g. an edge) to the model of known shapes, such as a Gaussian, and thus derive the PSF indirectly. This works pretty well for many images, until diffraction/defocus/motion becomes such a dominating component in the cascaded blur contributions that the combined blur becomes a bit less Gaussian-looking. In the cascade it will still be somewhat Gaussian (except for complex motion), so one can also attempt to model a weighted sum of Gaussians, or a convolution of a Gaussian with a diffraction or defocus blur PSF.

So we can construct a model of the contributing PSFs, but it will still be very difficult to do with absolute accuracy, and small differences in the frequency domain can have huge effects in the spatial domain.

I feel somewhat comforted by the remarks of Dr. Eric Fossum (the inventor of the CMOS image sensor) when he mentions that the design of things like microlenses, and their effect on the image, is too complicated to predict accurately, and that one usually resorts to trial and error rather than attempting to model it. That of course won't stop us from trying ... as long as we don't expect perfection, because that would probably never happen.

My own feeling (based on ignorance, needless to say) is that there are just too many variables ... and if even one of them (the microlenses) is considered to have too complex an effect on the image to model successfully, well, then modeling is not the way to go.  That leaves measurement and educated guesswork, which is where MTFs and such come into play.  I just wonder to what extent the guesswork can be removed and the shape of the blur function modeled from an actual photograph.  I understand that the MTF has limitations ... but at least it's a start. We can take both the horizontal and vertical MTFs.  What else could we photograph to give us a more accurate idea of the shape of the blur?

Quote
What we can do is model the main contributors, and see if eliminating their contribution helps.

Well, I'll be very interested to see what Jack comes up with.  It may be that this is not a one-step problem, and that Jack is right that we should fix a) and then look at b).

Cheers

Robert

Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 797
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #210 on: August 31, 2014, 12:35:39 pm »

Here it is Robert



However, I must say upfront that I am not comfortable with it, because 'lens' blur is out of the ballpark and overwhelming the other components: I couldn't even see the AA zero (so I guessed a generic 0.35).  You should be getting about half the lens-blur diameter and an MTF50 of close to 2000 lw/ph with your setup.

Could you re-capture a larger chart at 40mm f/5.6 using contrast-detect (live view) focusing, mirror up, etc.?  Alternatively try 80mm at f/7-8.  All I need is the raw file of one sharp black square/edge, about 200-400 pixels on a side, on a white background.

On the other issues, I agree with Bart: look at what an overwhelming 'lens blur' you got, even with a tripod and relatively good technique.  Shutter shock clobbers many cameras, even on a granite tripod.  In the real world outside the labs we are not even close to being able to detect the effect of microlenses or other micro components.

At this stage of the game I think we can only hope to take it to the next stage: from generic Gaussian PSFs to maybe a couple of the more easily modeled ones; from symmetrical to asymmetrical.  Nobody is doing it because it's hard enough as it is, what with the noise and low energy in the higher frequencies creating all sorts of undesired effects.  I am not even sure it is worthwhile to split AA and diffraction out.  Intuitively I think so, especially as pixels become smaller and AA-less, approaching Airy disk size even at larger apertures.  Time will tell.  In the meantime the exercise is fun and informative for inquisitive minds :)

Jack

PS With regards to my methodology keep an eye on my (new as of this week) site.  I'll go into it in the near future.
« Last Edit: August 31, 2014, 01:18:20 pm by Jack Hogan »
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #211 on: September 01, 2014, 06:06:39 am »

Thank you Jack.

I'm playing around a bit with the test print.  On the matte paper, the best I can get with a 100mm f/2.8 Macro lens (which should be very sharp) is a 2.26-pixel 10-90% rise, which isn't great. Increasing the contrast using a curve brings this down to 1.83 pixels, which is a bit better (and is similar to Bart's shot with the same lens and camera).  I'll try with a print on glossy paper, but then there's the problem of reflection - at least then I can get fairly good contrast, though.

For Imatest you need the print to have a contrast ratio of about 10:1 max (so Lab 9/90, say).  You say that you need black on white - but of course with the paper and ink limits that's not achievable.  Is it acceptable to you to apply a curve to the image to bring the contrast back to black/white?

As a side-effect, I'm finding out quite a bit about my lenses doing these tests (which I did a few years ago but have since forgotten the conclusions).

All the best,

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #212 on: September 01, 2014, 06:55:11 am »

Hello again Jack,

I've tried the SFR with a glossy paper and the results are all over the place - due to reflections, I expect, even though I was careful enough.  The problem may be that the print is glued onto a board that has slight bumps and hollows - I could try again on perspex.

In the meantime, here is a somewhat better image, back to the matte paper: http://www.irelandupclose.com/customer/LL/1Ds3-100mmF2p8.zip

I used a 100mm f/2.8 Macro lens.  I focused manually with MLU, and out of the camera the 10-90% rise of the top edge is 2.57 pixels, which isn't exactly brilliant. However, the image needs contrast applied (the original has a Lab black of 10 and light gray of 90).  With a curve applied to restore the contrast, the 10-90% rise is 1.83 pixels, which is OK, I think.  With a light sharpening of 20 in ACR this reduces to 1.38 pixels.  With FocusMagic 2/100 the figure drops to 0.86 pixels, with an MTF50 of 0.5 cycles per pixel.

Actually, this image may be better for you as the edge is at the center of the lens: http://www.irelandupclose.com/customer/LL/Matte-100mmF6p3.zip

I could try a different lens, but this is about as good as I'll get with this particular lens I think.  As for the paper ... advice would be welcome!

Cheers

Robert

« Last Edit: September 01, 2014, 07:16:07 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 797
    • Hikes -more than strolls- with my dog
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #213 on: September 01, 2014, 11:51:41 am »

Quote
For Imatest you need the print to have a contrast ratio of about 10:1 max (so Lab 9/90, say).  You say that you need black on white - but of course with the paper and ink limits that's not achievable.  Is it acceptable to you to apply a curve to the image to bring the contrast back to black/white?

Hi Robert,

I forgot to mention one little detail (I often do :-) ): in order to give reliable data, the edge needs to be slightly slanted (as per the name of the MTF-generating method), ideally between 5 and 9 degrees, and near the center of the FOV.  I only downloaded the second image you shared (F6p3) because I am not at home at the moment and my data plan has strict limits (the other one was 210MB).  The slant is only 1 degree in F6p3 and I am again getting values too low for your lens/camera combo: WB Raw MTF50 = 1580 lw/ph, when near the center of the FOV it should be up around 2000.  It could be the one-degree slant, or it could be that the lens is not focused properly.  The Blue channel is giving the highest MTF50 readings while Red is way down - so it could be that you are chasing your lens' longitudinal aberrations rather than focusing right on the sensing plane :)

To give you an idea, the ISO 100 5DIII+85mm/1.8 @f/7.1 raw image here is yielding MTF50 values of over 2100 lw/ph.  I consistently get well over that with my D610 from slanted edges printed by a laser printer on normal copy paper and lit by diffuse lighting.  For this kind of forensic exercise one must use good technique (15x the focal length away from the target, solid tripod, mirror up, delayed shutter release) and either use contrast-detect focusing or focus-peak manually (that is, take a number of shots around the suspected focus point, turning the focus ring monotonically and very slowly between shots; then view the series at x00% and choose the one that appears sharpest).  Another potential culprit is the target image source: if it is not a vector, the printing program/process could be introducing artifacts.
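A side note on the units mixed in this thread: MTF50 in lw/ph (line widths per picture height) and MTF50 in cycles/pixel are related through the sensor's pixel height, since one cycle (line pair) equals two line widths. A trivial sketch, assuming a 5D Mark III frame height of 3840 pixels:

```python
def lwph_to_cypx(mtf50_lwph, height_px):
    # one cycle (line pair) = two line widths, normalized to picture height
    return mtf50_lwph / (2.0 * height_px)

def cypx_to_lwph(mtf50_cypx, height_px):
    return mtf50_cypx * 2.0 * height_px

# Jack's ~2100 lw/ph on a 3840-pixel-high frame:
print(round(lwph_to_cypx(2100, 3840), 3))  # ~0.273 cycles/pixel
```

This is why a figure like 0.5 cycles/pixel from a sharpened image is far above what any of the raw lw/ph numbers quoted here correspond to.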

As far as the contrast of the edge is concerned I work directly off the raw data so it is what it is.  MTF Mapper seems not to have a problem with what you shared, albeit using a bit of a lower threshold than its default.  That was the case with yesterday's image as well.

Jack
« Last Edit: September 01, 2014, 12:04:50 pm by Jack Hogan »

Bart_van_der_Wolf

Re: Sharpening ... Not the Generally Accepted Way!
« Reply #214 on: September 01, 2014, 02:01:25 pm »

Quote

The capture was reasonably well taken - however I did not use mirror lock-up and the exposure was quite long (1/5th second). ISO 100, good tripod, remote capture, so no camera shake apart from the mirror.  The test chart is quite small and printed on Epson Enhanced Matte using an HPZ3100 ... so not the sharpest print, but as the shot was taken from about 1.75m away any softness in the print is probably not a factor.  However, if you would like a better shot, I can redo it with a prime lens with mirror lock-up and increase the light to shorten the exposure.

Hi Robert,

I usually recommend at least 25x the focal length, therefore the shooting distance is a bit too short for my taste (or the focal length too long for that distance). This relatively short distance will make the target resolution more important. Also make sure you print it at 600 PPI on your HP printer. That potentially will bring your 10-90% rise distance down to better values. Some matte papers are relatively sharp but others are a bit blurry, so that may also play a role.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Robert Ardill

Re: Sharpening ... Not the Generally Accepted Way!
« Reply #215 on: September 01, 2014, 03:24:45 pm »

Quote

Hi Robert,

I forgot to mention one little detail (I often do :-) ): in order to give reliable data, the edge needs to be slightly slanted (as per the name of the MTF-generating method), ideally between 5 and 9 degrees, and near the center of the FOV.  I only downloaded the second image you shared (F6p3) because I am not at home at the moment and my data plan has strict limits (the other one was 210MB).  The slant is only 1 degree in F6p3 and I am again getting values too low for your lens/camera combo: WB Raw MTF50 = 1580 lw/ph, when near the center of the FOV it should be up around 2000.  It could be the one-degree slant, or it could be that the lens is not focused properly.  The Blue channel is giving the highest MTF50 readings while Red is way down - so it could be that you are chasing your lens' longitudinal aberrations rather than focusing right on the sensing plane :)

To give you an idea, the ISO 100 5DIII+85mm/1.8 @f/7.1 raw image here is yielding MTF50 values of over 2100 lw/ph.  I consistently get well over that with my D610 from slanted edges printed by a laser printer on normal copy paper and lit by diffuse lighting.  For this kind of forensic exercise one must use good technique (15x the focal length away from the target, solid tripod, mirror up, delayed shutter release) and either use contrast-detect focusing or focus-peak manually (that is, take a number of shots around the suspected focus point, turning the focus ring monotonically and very slowly between shots; then view the series at x00% and choose the one that appears sharpest).  Another potential culprit is the target image source: if it is not a vector, the printing program/process could be introducing artifacts.

As far as the contrast of the edge is concerned I work directly off the raw data so it is what it is.  MTF Mapper seems not to have a problem with what you shared, albeit using a bit of a lower threshold than its default.  That was the case with yesterday's image as well.

Jack

Hi Jack,

I'm not doing too well so far.  I've tried printing with a laser printer, using Microsoft Expression Design for a vector square, and although there is improvement (best so far is a 10-90% rise of 1.81 pixels), it's not too good.  I've tried several lenses (24-105F4L, 100F2.8 Macro, 50F2.5 Macro and 70-200F4L) and the results are much of a muchness.  I suspect that my prints, lighting and technique are just not good enough.  I'll have to do a bit more investigation, but it will be a few days as I need to catch up on some work.

Cheers,

Robert

Robert Ardill

Re: Sharpening ... Not the Generally Accepted Way!
« Reply #216 on: September 01, 2014, 03:30:02 pm »

Quote

Hi Robert,

I usually recommend at least 25x the focal length, therefore the shooting distance is a bit too short for my taste (or the focal length too long for that distance). This relatively short distance will make the target resolution more important. Also make sure you print it at 600 PPI on your HP printer. That potentially will bring your 10-90% rise distance down to better values. Some matte papers are relatively sharp but others are a bit blurry, so that may also play a role.

Cheers,
Bart

Thanks Bart ... I think you got around 1.8 pixels for the 10-90% rise with a 100mm macro, is that right?  Is that the sort of figure I can expect or should it be significantly better than that?

I'm certainly much too close (based on your 25x) and it may well be that the print edge softness is what I'm photographing!

Also the lighting and print contrast seem to be quite critical, and I doubt either is optimal.  This sort of thing is designed to do one's head in  :'(

Cheers,

Robert

Bart_van_der_Wolf

Re: Sharpening ... Not the Generally Accepted Way!
« Reply #217 on: September 01, 2014, 04:28:46 pm »

Quote

Thanks Bart ... I think you got around 1.8 pixels for the 10-90% rise with a 100mm macro, is that right?  Is that the sort of figure I can expect or should it be significantly better than that?

That 1.8-pixel rise is a common value for very well focused, high-quality lenses. It's equivalent to a Gaussian blur with a sigma of 0.7, which is about as good as it gets.
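The 1.8 px / 0.7 sigma equivalence follows directly from the Gaussian model: a Gaussian ESF is the normal CDF, so its 10-90% rise spans (z0.9 - z0.1) ≈ 2.563 sigma. A quick check with only the Python standard library:

```python
from statistics import NormalDist

# 10% and 90% quantiles of the standard normal distribution
z10 = NormalDist().inv_cdf(0.10)   # ~ -1.2816
z90 = NormalDist().inv_cdf(0.90)   # ~ +1.2816
rise_per_sigma = z90 - z10         # ~ 2.563

print(round(0.7 * rise_per_sigma, 2))  # sigma 0.7 -> ~1.79 px rise
print(round(1.8 / rise_per_sigma, 2))  # 1.8 px rise -> sigma ~0.70
```

The same constant lets you convert any measured 10-90% rise to an equivalent Gaussian sigma, under the thread's working assumption that the combined blur is roughly Gaussian.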

Quote
Also the lighting and print contrast seem to be quite critical and I doubt either are optimal.  This sort of thing is designed to do one's head in  :'(

The slanted edges on my 'star' target go from paper white to pretty dark, to avoid dithered edges (try printing other targets for a normal range of gray shades).  One can get even straighter edges by printing them horizontal/vertical and then rotating the target some 5-6 degrees when shooting it.  The ISO recommendations call for a lower-contrast edge, but that is to reduce veiling glare and (in-camera JPEG) sharpening effects.  With a properly exposed edge, the medium gray should produce approximately R/G/B 120/120/120, and the paper white about 230/230/230, after Raw conversion.  It also helps to calibrate the output gamma for Imatest instead of just assuming 0.5, or to use a linear-gamma Raw input.
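On the gamma point: if the encode really were a pure power law (encoded = linear^gamma, with Imatest's default assumption of gamma = 0.5), linearizing would just be the inverse power. Real raw converters apply more complex tone curves, so treat this only as a schematic sketch of what the calibration corrects for:

```python
import numpy as np

def linearize(encoded, gamma=0.5):
    """Invert a pure power-law encode: encoded = linear ** gamma (values in 0..1)."""
    return np.clip(np.asarray(encoded, dtype=float), 0.0, 1.0) ** (1.0 / gamma)

# A mid-gray of ~120/255 corresponds, under gamma 0.5,
# to roughly 22% linear reflectance:
print(round(float(linearize(120 / 255)), 3))  # ~0.221
```

If the analysis tool assumes gamma 0.5 but the converter actually applied something else, the edge contrast (and hence the MTF) is mis-estimated, which is why calibrating or feeding linear data helps.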

Do not use contrast adjustment to boost the sharpness, just shoot from a longer distance.

Cheers,
Bart

Robert Ardill

Re: Sharpening ... Not the Generally Accepted Way!
« Reply #218 on: September 01, 2014, 06:12:15 pm »

Quote

That 1.8-pixel rise is a common value for very well focused, high-quality lenses. It's equivalent to a Gaussian blur with a sigma of 0.7, which is about as good as it gets.

The slanted edges on my 'star' target go from paper white to pretty dark, to avoid dithered edges (try printing other targets for a normal range of gray shades).  One can get even straighter edges by printing them horizontal/vertical and then rotating the target some 5-6 degrees when shooting it.  The ISO recommendations call for a lower-contrast edge, but that is to reduce veiling glare and (in-camera JPEG) sharpening effects.  With a properly exposed edge, the medium gray should produce approximately R/G/B 120/120/120, and the paper white about 230/230/230, after Raw conversion.  It also helps to calibrate the output gamma for Imatest instead of just assuming 0.5, or to use a linear-gamma Raw input.

Do not use contrast adjustment to boost the sharpness, just shoot from a longer distance.

Cheers,
Bart

Hi Bart,

Just messed around a bit more and one thing that clearly makes quite a difference is the lighting.  For example, with the light in one direction I was getting vertical 2.02, horizontal 1.87; in the other direction the figures reversed completely; with light from both directions I got 2.06/2.06 on both horizontal and vertical (no other changes).  I remember Norman Koren telling me to be super-careful with the lighting.  

Photographing from a greater distance does improve things.  However, the focusing is so fiddly that I find it very difficult to get an optimum focus.

I need to try different papers because that also makes quite a difference.

It's interesting to see the effect of the different variables involved and also the sort of sharpening that we might want to apply.

For example, with a 10-90% edge rise of 2 pixels, applying FocusMagic 2/50 gives this:

[edge-profile screenshot - attachment not preserved]

and then applying a further FM of 1/25 gives this:

[edge-profile screenshot - attachment not preserved]

With the raw image like this:

[edge-profile screenshot - attachment not preserved]

I doubt that I would get as good as this in the field, so I wonder, Jack, if you could explain why getting an optimally focused image is useful for your modelling ... because it's pretty tricky to achieve!

Cheers

Robert
« Last Edit: September 01, 2014, 06:14:55 pm by Robert Ardill »

Jack Hogan

Re: Sharpening ... Not the Generally Accepted Way!
« Reply #219 on: September 02, 2014, 05:20:27 am »

Quote

Just messed around a bit more and one thing that clearly makes quite a difference is the lighting.  For example, with the light in one direction I was getting vertical 2.02, horizontal 1.87; in the other direction the figures reversed completely; with light from both directions I got 2.06/2.06 on both horizontal and vertical (no other changes).  I remember Norman Koren telling me to be super-careful with the lighting.

Hi Robert, the spoiler with lighting, if one is not careful, is sharp gradients that make the change in light intensity become part of the ESF.  I try to take mine indoors with bright but indirect sunlight, in a neutrally colored room.  I don't think the color of the walls is too important, though, because MTF Mapper can (and I do) look at one raw channel at a time.
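Jack's point can be demonstrated numerically: even a gentle illumination ramp across a Gaussian edge noticeably stretches the apparent 10-90% rise. A toy simulation (not MTF Mapper's actual processing; the 2%-per-pixel ramp is an arbitrary illustrative value, and numpy is assumed available):

```python
import numpy as np
from statistics import NormalDist

def rise_10_90(x, esf):
    y = (np.asarray(esf) - esf[0]) / (esf[-1] - esf[0])  # normalize 0..1
    return np.interp(0.90, y, x) - np.interp(0.10, y, x)

nd = NormalDist(sigma=1.0)
x = np.linspace(-6.0, 6.0, 1201)
clean = np.array([nd.cdf(v) for v in x])   # ideal edge, sigma = 1 px
tilted = clean * (1.0 + 0.02 * x)          # 2%-per-pixel illumination gradient

print(round(float(rise_10_90(x, clean)), 2))   # ~2.56 px
print(round(float(rise_10_90(x, tilted)), 2))  # ~3.1 px: the ramp fattens the edge
```

The measured rise grows by roughly half a pixel here even though the underlying optical blur is unchanged, which is consistent with the direction-dependent numbers Robert reported.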

Jack
« Last Edit: September 02, 2014, 06:22:57 am by Jack Hogan »