If you're already shooting pano and are that far into post production, why not consider focus stacking in addition to pano shooting?
Focus stacking is not an option, as the process is slow enough as it is...

Perhaps focus stacking would allow you to reduce the number of stitched images (using a wider lens), keeping the total time spent in check (but altering the balance between DOF and maximum sharpness)?
There is no free lunch. Diffraction results in serious loss of sharpness beyond f/16 on full frame, and f/11 on crop frame - pretty much in direct proportion to the difference in DOF.
Not bad, what system is this?
dee - I think you misunderstood me - diffraction starts to become seriously noticeable with crop frame beyond f/11

Which crop are you talking about? If 1.5x, then since 2x @ 12 MP sensors are still good @ f/11, with proper lenses 1.5x might be OK (and not seriously noticeable) @ f/11 + 1/2 stop down... It is another story that most lenses are too bad to deliver on 1.5x and above @ those apertures...
There is no free lunch. Diffraction results in serious loss of sharpness beyond f/16 on full frame, and f/11 on crop frame - pretty much in direct proportion to the difference in DOF.

While I do not disagree with your statement, one thing I've found is that you can fix some of the softness from diffraction in post ... something you cannot do with the out-of-focus blur from insufficient depth of field. Shooting something at f/22 may leave the file soft, but with a little work prints look pretty good. You probably lose some of the micro detail, but it can still work pretty well.
Brian Peterson, in his latest book on exposure, brings attention to this "problem". He shows two images of trees, one shot at f/8 and one at f/22, blows both up to 200% and invites the reader to look at them. He points out there is only a little softness in the one shot at f/22 and states it isn't worth bothering about (I am paraphrasing him). It isn't a scientific appraisal but a practical one, which I think matters the most. Until now my beliefs on this subject meant that I rarely went beyond f/13, but I have since rethought them. Practical examples are more important to me than theoretical ones. :)
Are you saying diffraction results in a loss of resolution? That's a new one on me, but I am willing to learn. :)
That's not only what Erik is saying, it's a commonly known fact called physics, unfortunately. Deconvolution sharpening can restore some of the resolution lost due to diffraction blur, but some loss remains and noise may increase. The fact that you act surprised suggests that you don't see a difference between an actual image taken at e.g. f/8 and f/22. I'm puzzled by that. Are you saying that you can fully remove the blur caused by diffraction, or do you see no difference to begin with?
As I stated in an earlier post, I have very rarely shot smaller than f/13 because that was the advice that I read. I saw the examples in Brian Peterson's book, but it didn't mention loss of resolution; yes, there was some softening at 200%. I have a lot of photography books, and many of the photographers are happily shooting away at f/22, so it becomes confusing as to who is right and who is wrong. I have done a lot of shooting with ND filters, and sometimes a slow shutter speed means going smaller than f/13, but mostly at sunset, so detail isn't critical. As I said, still learning. :)
Cheers,
Bart
Diffraction depends only on aperture; there are no lenses more tolerant of diffraction. It's a property of light. The shape of the aperture may matter, but I presume that all lenses we discuss have circular or near-circular apertures.
Two topics: focus stacking and tilt shift to increase DOF in panos.
You forgot super-resolution to fight the loss of resolution from diffraction (while increasing DOF by stopping down further... albeit it might only buy you another 1-1.5 stops... not much, but still), using programs like the one from http://photoacute.com (it can do focus stacking as well).

But super-resolution will mainly combat limited sensor resolution, not severely limited optical resolution due to diffraction. If diffraction is the main factor limiting the system resolution, I would not expect SR to fix it.
-h
The fact that you act surprised suggests that you don't see a difference between an actual image taken at e.g. f/8 and f/22. I'm puzzled by that.

I think it is image dependent ... it depends on the micro detail and how critical it is to the overall image. Gaining sharpness in important detail while perhaps losing some of the ultra-fine detail can still result in a very good image. Certainly you can't completely overcome the loss of sharpness and restore all the detail once you start stopping down.
So I'm curious what others think: which image would be sharper and have better detail printed at 40x60, one taken with a good lens at f/22 on a 60-80 MP sensor (no AA filter), or one from a DSLR at an optimal f/8 (with no diffraction, but with blurring from an AA filter)?
Focus stacking is great but can't always be used (things moving; things in front of other things can leave a fringe or halo of "softness", since the background around the edge of foreground objects can't be captured sharp). Scheimpflug is great, but can't always be applied well (if at all). Sometimes your only hope of any depth of field is to just stop the thing down and take what you can get, and with a high-resolution back what you get is usually really good despite diffraction.
Hi,
I essentially tried this with one of the images from the great 2006 MFDB shootout. There was an aperture series with a P45+ back. I "reproduced" the subject (a one dollar bill) and shot it with my 24.5 MP DSLR. Large prints were made from both. At f/8 a blind man could see that the P45+ was vastly superior. With the lens on the P45+ stopped down to f/22, the difference was essentially gone.
Best regards
Erik
So I'm curious what others think: which image would be sharper and have better detail printed at 40x60, one taken with a good lens at f/22 on a 60-80 MP sensor (no AA filter), or one from a DSLR at an optimal f/8 (with no diffraction, but with blurring from an AA filter)?
Hey, Erik, that would be interesting to see, can you post the results for us?
BTW, comparing f/8 and f/22 is a bit unfair to the P45+, since f/8 won't give sufficient DOF on the FF anyway. How was the difference between f/16 on the FF and f/22 on the P45+?
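For what it's worth, the f/16-vs-f/22 pairing lines up with a rough format-equivalence calculation based on sensor diagonals. The sensor dimensions below are assumptions (36x24 mm for full frame, 49.1x36.8 mm for the P45+ back):

```python
import math

# Rough equivalence: scale the MF f-number by the ratio of sensor diagonals
# to find the FF f-number giving similar DOF and similar diffraction in print.
def diagonal(w, h):
    return math.hypot(w, h)

crop = diagonal(49.1, 36.8) / diagonal(36.0, 24.0)
print(round(22 / crop, 1))  # roughly f/15.5, i.e. close to f/16 on FF
```

This is only a first-order geometric argument; it ignores lens behaviour at each aperture.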
In other words, if you improve something in the chain of blur factors that is not the worst factor, you may still see an improvement.
Diffraction puts an absolute (unrecoverable by deconvolution sharpening) limit (MTF=0%) on resolution at 81.9 cycles/mm for f/22, and at 225.2 cycles/mm for f/8, when we look at a 555 nanometre wavelength.

Is there such a thing as a brick-wall diffraction limit above which MTF == 0? Or are you talking about the limit above which you have found practical deconvolution algorithms to fail due to falling SNR or whatever?
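The figures quoted above follow from the standard cutoff formula for a diffraction-limited lens, f_cutoff = 1 / (wavelength x N). A quick check (555 nm assumed, as in the post):

```python
# Diffraction cutoff frequency (where MTF reaches zero) for an ideal lens:
# f_cutoff = 1 / (wavelength * N), with the wavelength expressed in mm.
def diffraction_cutoff_cycles_per_mm(f_number, wavelength_nm=555):
    wavelength_mm = wavelength_nm * 1e-6
    return 1.0 / (wavelength_mm * f_number)

print(round(diffraction_cutoff_cycles_per_mm(22), 1))  # 81.9 cycles/mm
print(round(diffraction_cutoff_cycles_per_mm(8), 1))   # 225.2 cycles/mm
```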
It's not really a sudden diffraction brick wall, but rather a gradual spatial-frequency slope of reduced contrast that ends in zero modulation. Even with only a 10% modulation response, it would render a subject contrast of 10:1 as a barely perceptible 1% response. Raw converter performance becomes very important. Diffraction does set an upper limit to what can be resolved/restored, as does defocus.
Cheers,
Bart
So what you are saying is that in your experience, deconvolution algorithms are unable to recover 1% contrast into something meaningful, not that the contrast is identical to zero.
The reason that I am asking is that there have been lots of discussions about the "diffraction limit" at dpreview. A lot of people seem to think that it is a theoretically perfect brick wall, but I am sceptical, as theoretically perfect brick walls are very seldom seen in nature. Nature seems to dislike 100000-tap sin(x)/x filters, rather going for low-order ones.
-h
             | large aperture | small aperture
Out of focus | Large PSF      | Medium PSF
In focus     | Small PSF      | Medium PSF
Hi,
My understanding is that diffraction is not a disc (like defocus for an ideal thin lens) but more like a "bell curve". For peak shapes similar to "bell curves", the FWHM (Full Width at Half Maximum) is most often used, but the effect of diffraction will be broader than the FWHM. On the other hand, some detail may still be resolved within the FWHM diameter, as we still have some gradient.
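To put numbers on that distinction: the conventional Airy disk diameter (out to the first zero) is 2.44 x wavelength x N, while the FWHM of the Airy pattern is only about 1.03 x wavelength x N, i.e. well under half the conventional diameter. A quick sketch (555 nm assumed):

```python
# Conventional Airy disk diameter (first zero) vs. approximate FWHM,
# showing the FWHM is much smaller than the full diffraction disc.
def airy_first_zero_diameter_um(f_number, wavelength_nm=555):
    return 2.44 * wavelength_nm * 1e-3 * f_number  # nm -> micrometres

def airy_fwhm_um(f_number, wavelength_nm=555):
    # FWHM of the Airy pattern is approximately 1.03 * wavelength * N
    return 1.03 * wavelength_nm * 1e-3 * f_number

print(round(airy_first_zero_diameter_um(22), 1))  # ~29.8 um at f/22
print(round(airy_fwhm_um(22), 1))                 # ~12.6 um at f/22
```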
My article here: http://echophoto.dnsalias.net/ekr/index.php/photoarticles/49-dof-in-digital-pictures?start=1 demonstrates this with real-world samples. Diffraction is shown as red circles and defocus as green circles. For diffraction the conventional value is used; the FWHM would be somewhat smaller.
When looking at the above article keep in mind that diffraction is constant for each row. Defocus is increasing from left to right.
The last page of the article shows examples of sharpening using "basic" deconvolution with Smart Sharpen in CS5 and Topaz InFocus.
Best regards
Erik
Restore lost resolution? How can resolution be restored once it is lost? Do you really mean acutance, perhaps?

I think this is only semantics. Deconvolution can (ideally) perform a filtering operation that brings details whose contrast has been reduced to invisible levels back to visible levels. I.e. true details that are visibly lost (and really hard to regain using blind sharpening) can be restored to their original value, or to an approximation of their original value corrupted by noise and by flaws in the characterization of the PSF.
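That restoration idea can be sketched as a toy 1-D Wiener-style deconvolution. Everything here is hypothetical (a known Gaussian PSF, a noise-free signal, an arbitrary regularization constant); real PSFs are never known this exactly, and noise limits how much can be recovered:

```python
import numpy as np

# Blur a signal with a known Gaussian PSF, then partly undo the blur by
# regularized inverse filtering in the frequency domain.
n = 256
x = np.arange(n)
signal = ((x // 8) % 2).astype(float)  # fine periodic detail

sigma = 1.5
psf = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
psf /= psf.sum()

H = np.fft.fft(np.fft.ifftshift(psf))          # transfer function of the blur
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

eps = 1e-4  # regularization constant standing in for the noise level
W = np.conj(H) / (np.abs(H) ** 2 + eps)        # Wiener-style inverse filter
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * W))

print(np.abs(blurred - signal).max())   # peak error after blurring
print(np.abs(restored - signal).max())  # smaller peak error after restoration
```

Note that frequencies where the blur response has fallen near zero are not recovered; the regularization simply refuses to amplify them, which is the practical face of the "limit" being discussed.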
My understanding is that diffraction is not a disc (like defocus for an ideal thin lens) but more like a "bell curve". For peak shapes similar to "bell curves", the FWHM (Full Width at Half Maximum) is most often used, but the effect of diffraction will be broader than the FWHM. On the other hand, some detail may still be resolved within the FWHM diameter, as we still have some gradient.
This will also mean that with very high sensel densities and/or very narrow apertures (= a large diffraction pattern diameter), the diffraction blur pattern will be subsampled, which in turn will make it easier to successfully deconvolve such an image.

Did you mean supersampled?

Diffraction especially is a good candidate for deconvolution sharpening/restoration, because it is not just an average over an area but rather a weighted average.

I am scratching my head over this. Why is it necessarily harder to invert the response of a rectangular filter kernel than a general non-rectangular kernel (switching my brain over to 1-D operations for convenience)?
Now, as for the difference between a defocus and a diffraction blur, and the deconvolution of it: consider a large uniform area (free of noise, to make things easy) in the spatial domain, with a small signal in the middle. Now blur it with a uniform disc-shaped filter that's several times larger in diameter than the signal. The small signal will become the average of that full disc's area, and thus very small, maybe even less than one quantization unit different from the surrounding area. Now compare that to blurring with a Gaussian or an Airy-disk-shaped blur filter. The blurred image is more likely to still have some (Gaussian) shape with a slightly higher signal directly in the middle of the original signal, because the blur filter took a weighted average instead of an area average. Combining this slightly better signal with a deconvolution offers a (slightly) better chance of restoration.

That may be the case for a small star or a hypothetical object. But is it the case for general, complex objects? How do we know that the shape of a part of a tree does not interact with the Gaussian when convolving to form a perfectly flat (hard-to-recover) end result, where using a flat kernel would have produced something for R-L to work on?
Add noise, and we can use all the small bits of help we can get.
Cheers,
Bart
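The disc-versus-Gaussian thought experiment a few posts up is easy to reproduce in 1-D. The kernel widths below are arbitrary illustrations, not a model of any real lens:

```python
import numpy as np

# Blur the same small detail with a flat (box) kernel and with a weighted
# (Gaussian) kernel of the same support. The Gaussian leaves a higher
# residual peak at the detail's location, giving deconvolution more to work with.
signal = np.zeros(101)
signal[50] = 1.0  # a small detail on a uniform background

width = 15
box = np.ones(width) / width  # flat, disc-like area average

x = np.arange(width) - width // 2
gauss = np.exp(-0.5 * (x / (width / 6)) ** 2)
gauss /= gauss.sum()  # weighted average over the same support

blurred_box = np.convolve(signal, box, mode="same")
blurred_gauss = np.convolve(signal, gauss, mode="same")

print(blurred_box[50])    # 1/15, about 0.067
print(blurred_gauss[50])  # noticeably larger residual peak
```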
So, after this week's math class, let's have a look at some real-world examples:
A side-by-side comparison of the Canon 1Ds Mark III and the Pentax 645D at different apertures. Although diffraction is clearly noticeable, the MF holds detail much better than the FF, and the detail lost to diffraction seems to be recoverable with sharpening/deconvolution.
http://www.ephotozine.com/article/pentax-645d-canon-eos-1ds-mark-iii-comparison-digital-slr-review-15653