Hi,
I'm posting this with some trepidation as I expect a lot of disagreement. But here goes!
My understanding of the 3-step sharpening proposed by Schewe et al. is: a. Capture Sharpening with an edge mask; b. Creative Sharpening to taste; c. Output Sharpening with no edge mask.
I would like to propose (no doubt others have before me, so perhaps I should say re-propose) an alternative, which I think has advantages. And that is: a. Output Sharpen after any resizing, tonal/color adjustments etc; b. Creative Sharpening to taste.
Creative sharpening on the other hand is IMHO a bit of a misnomer, although one can use sharpening tools to achieve the effect. It's more a detail enhancement/local contrast adjustment process than really sharpening.
Sharpening can be a creative tool. Sometimes we want to make the image sharper than it really was, to tell a story, make a point, or emphasize an area of interest.
Nudging the image towards reasonable sharpness early on helps the editing process, and gives you a solid floor to stand on when it's time to make creative sharpening decisions.
Creative Sharpening. I don't tell people how to do art, so the only real guideline I can give here is to use common sense.
Bruce Fraser who I believe coined the term.
http://www.creativepro.com/article/out-of-gamut-thoughts-on-a-sharpening-workflow
Yes, he coined it 11 years ago, if that article was the first time he mentioned it. Then why the misnomer? His description seems clear to me. You guys can use whatever products or techniques you wish, but how is what Bruce wrote to define Creative Sharpening a misnomer? It is creative in its direction, and it makes the image appear sharper for that aim.
Then why the misnomer?
Sharpening is not the same as increasing acutance by boosting edge or local contrast. It only gives an impression of sharpness by fooling the human visual system. So what term do you propose be used to replace sharpening behind capture, creative and output?
So what term do you propose be used to replace sharpening behind capture, creative and output?
People can call it what they want, as long as they remember that (USM) 'sharpening' is but one of many (better) methods to visually enhance/subdue detail. If there's a misnomer, it's that term, USM, which predates digital anything and was an analog darkroom process to produce the appearance of more sharpness. So I'd submit that sharpening photos describes the perceived effect of the process on that photo, not the specific process itself.
Sharpening is not the same as increasing acutance by boosting edge or local contrast. Is sharpening an image the technique, be it analog or digital, or the result of the technique as perceived by a viewer? I'd suggest it is the perceptual result which makes the image look sharper, at least considering the use of the term on images far before anyone was digitizing them.
We'll also add contrast, which will make parts of the image stand out, adjust colors for the same reason ... all of which could come under the term 'Creative Sharpening' because they have the same objective of bringing focus onto the important parts of the image ... but have nothing to do with sharpening. Does that selective and creative work make that area appear sharper? Is creative blurring (any blurring) different?
Does that selective and creative work make that area appear sharper? Is creative blurring (any blurring) different?
Both effects have been available to affect photos long before anything photographic was digital.
Let us revert to fundamentals for a moment:
Two different concepts: focus and acutance.
Focus: A photo can be blurry from subject or camera movement, or because circles of confusion are visible due to D.O.F. limitations or poor focusing of the lens. These are focus problems. Deconvolution sharpening tools have been designed to recover image detail from such problems.
Acutance: the micro-contrast of lighter to darker edges between pixels. Acutance reduces as a result of digital image processing at the capture, rendering, editing and printing stages. Bruce Fraser et al. analyzed all these issues and more in great depth and produced techniques and corresponding software for addressing them that hasn't been fundamentally improved upon since their latest version. For readers who want more background into this, Jeff Schewe's book on sharpening is the best and most comprehensive published resource I know to recommend.
This discussion and the one in the other thread about QImage isn't always clear about what concept is at play: the focus concept or the acutance concept. Most digital imaging most people do these days is about the latter. And it is partly a matter of taste, partly a matter of credibility. If I were doing micro-photography I may want more detail on paper than I see in reality. For routine photography, the most natural appearance of detail corresponds with how I see it in the scene. As I've mentioned elsewhere before, if a photograph is meant to be sharp, it should look sharp but not sharpened. That is a fine distinction which I believe Photokit Sharpener 2, and Lightroom/ACR handle admirably; different people prefer different vendors' software - that's par for the course, but again, let us relate our preferences correctly to the concept. Tools designed primarily for acutance enhancement won't necessarily handle out of focus issues so admirably, because they are not dedicated deconvolution tools. I use the tools I use because the benefit:cost ratio is very high. I'm not a techno-masochist; I just want good, credible results in a time-efficient manner.
There are other reasons for loss of detail, for example the anti-aliasing filter, sensor noise, the analog-to-digital conversion, the demosaicing algorithm, resizing, etc.
I'm a bit tested-out at this stage: could you tell me something about how Photokit Sharpener 2 works? In your experience that is, not from the product marketing info.
Robert
It could be that the term 'sharpening' is one that we should start to drop. I'm more a painter than a photographer, to be honest, and as a painter I would never think in terms of 'sharpening' my painting. Sharpening is a term that dates back to the analog film days. I've made USMs in the analog darkroom as an assignment in photo school, long before the word Photoshop existed. We were taught why the appearance of sharpness changed (due to changes of edge contrast), much like we understood what a grade 1 paper would do for an image compared to a grade 4 and that apparent visual effect of sharpness. USM may have produced something vastly different from the digital terms used here to express sharpness, but the reason we made prints this way was for one reason: to make the image visually appear sharper.
The information about Photokit Sharpener on the PixelGenius website is very reliable. If you want a proper understanding of the underlying principles, as I said, nothing I know of beats the Schewe book. As for how well it works, Michael Reichmann reviewed it on this website when the product first appeared - you can locate that product review. It is accurate. I was using it from the time of that review until its principles were ported into Lightroom, where I use that same approach very successfully now. If you are asking me about my personal experience with it: highly recommended. But nothing beats testing it yourself. As we all know - in spades - different people have different taste in software. What floats my boat may not necessarily float yours, or for that matter Bart's. So I suggest once you have recovered from the present round of testing overload, give it a shot and see what you think.
For those who think that boosting acutance is as effective as restoration by deconvolution,
Cheers,
Bart
This is a red herring. I, for one, think/hope I made the distinction between focus/blur and acutance issues clear enough to understand - as I mentioned - that they warrant different treatment with different tools. I'm not talking about "is as effective as" - I'm talking about aiming the right tool at the problem it is best adapted to resolve. Once readers accept I may have a point here, a lot of the discussion that's confusing these conceptually different targets of image correction can just as well evaporate. For those who are not techno-masochists and just want good results - easily - a humble suggestion: don't go to a dermatologist for a root canal: :-); use products designed for handling acutance to change image acutance; use products designed for blur (movement, focusing, DoF) to correct blur. Then it becomes sensible to make apples to apples comparisons of different software products designed for handling the same problems.
The tools are all better at something different, even though they all try to achieve the same goal.
Cheers,
Bart
For those who are not techno-masochists and just want good results - easily - a humble suggestion: don't go to a dermatologist for a root canal: :-);
........... how sharp are those pins? ;)
ROTFL, so true.
How many angels can dance on the head of a pin and how sharp are those pins? ;)
Logically, I would have thought that if an equal sharpening can be achieved in one go after resizing, that it should be better to do it this way than capture sharpening, resizing, and then output sharpening. Having said that, in the few tests I've done I can't see that one method damages the image more or less than the other. What I do see is that there appears to be no advantage in capture sharpening first, using the sort of radii that I use.
What you are failing to consider is that capture sharpening is designed to be applied to your master image BEFORE you've actually determined at what size the image will be printed and output sharpening applied AFTER you've determined the size.
I know you have your own workflow and you're happy with that ... and you have a vested interest in this way of doing things and thinking ... but why don't you try out the action I've posted? Because of the workflow you propose. Is one output sharpening, or whatever you describe, optimal for ink jet, screen, halftone dot, and contone output? I can't see how it could be, as each output device requires a different degree and handling of the sharpening. And it's resolution dependent. The same devices receiving a 1000x1000 pixel file need different treatment than if they are 10Kx10K. A sharpening workflow is output and resolution agnostic up until the point you know what size and device you'll output sharpen for.
I know you have your own workflow and you're happy with that ... and you have a vested interest in this way of doing things and thinking ... but why don't you try out the action I've posted?
Well, because it doesn't fit in with my workflow. I guess you missed the part about capture sharpening in ACR/LR and output sharpening in LR.
Because of the workflow you propose. Is one output sharpening, or whatever you describe, optimal for ink jet, screen, halftone dot, and contone output? I can't see how it could be, as each output device requires a different degree and handling of the sharpening. And it's resolution dependent. The same devices receiving a 1000x1000 pixel file need different treatment than if they are 10Kx10K. A sharpening workflow is output and resolution agnostic up until the point you know what size and device you'll output sharpen for.
If you capture sharpen at native resolution of the capture device, or after sampling up, that's one step. But you might need to change the size considerably as well as the output device technology. One size doesn't fit all ideally. If you size and sharpen based on the output device and that sharpening is based too on the initial capture sharpening, you have a pretty flexible sharpening workflow.
Robert, I know a lot of people will laugh when I say this, but Jeff can be a bit shy about tooting his own horn, so I shall weigh in here. Very simply put, if you haven't done so already, you need to read Chapter Two of his sharpening book. It provides a splendid explanation of the technical factors underlying the multi-stage sharpening workflow he recommends. In a nutshell, the kinds of things that need to be "sharpened for" are not the same at the input versus the output stages, therefore the algorithms need to be custom-tailored for each situation and they need to build on each other. That's the essence of the approach, and between Bruce, Jeff, and the others in the Pixelgenius group, they have spent eons of time developing and testing algorithms appropriate to each context. Having read what I have and worked with the various approaches I've tried over the years, I would be very skeptical that a one-pass approach could be systematically superior - perhaps with some photos at some resolution by happenstance, but not systematically.
........... but sometimes new insights can come from revisiting established beliefs.
Robert
I looked at the article you referenced and I didn't see, even at 200% magnification, the kind of damage you consider to be "criminal" at a 50 or 60 Amount setting. I haven't looked at this example, but it's kind of important to clarify that one setting, say Amount in USM, is hugely influenced by the other sliders, like Radius. The two teeter-totter between themselves, so one setting specified without the other is kind of like one hand clapping.
Very often the case and much scientific progress has been made over the centuries on the basis of that very principle. BUT in this particular instance we are not dealing with beliefs. We are dealing with algorithms that emerged from very extensive testing done by people who seriously knew/know the subject matter, and I respect that. That said, there's little in this world that can't be improved upon, but I think scientific procedure pretty much requires that you identify and demonstrate lacunae in the approach you are challenging, as a basis for trying to achieve the same objective in a better way. That is why I recommended Jeff's book to you.
I looked at the article you referenced and I didn't see, even at 200% magnification, the kind of damage you consider to be "criminal" at a 50 or 60 Amount setting. Personally, I don't usually find it necessary or desirable to move much beyond 45, but it can happen if I also added luminance noise reduction. However, give or take 10 or 15 points of Amount, there is something intervening called "taste". What you may consider "criminal" someone else may think is just sharp and snappy. It only gets criminal if anything has been destroyed, but if you use PK Sharpener (unflattened) or Lightroom, of course, everything is reversible and no pixels are destroyed.
Anyhow, reverting from the empirical to the principles, I do think it necessary to successfully challenge the correctness of the principles underlying the multi-stage sharpening workflow before accepting that a single pass approach will be SYSTEMATICALLY superior. To do this, there needs to be a combination of both reasons and a highly varied palette of extensive testing of the proposed alternative. I think it is incumbent on the author to do this research and share the results in a manner amenable to systematic evaluation.
... all you're doing is saying you can't be bothered to check it out because it doesn't fit in with your workflow. And that's the problem for some of us. We do output to many other devices than just an ink jet. Heck, output sharpening for display is pretty common for me. Ditto with halftone work. I simply can't have a workflow that is only directed to ink jet output.
But in my case I use inkjet printers and I never want to output sharpen with a radius of more than 2 or 3 (for 'creative' sharpening, maybe, but that's another story).
"Snappy and Sharp" applies to the final sharpened image, printed or for web or for whatever medium, not for 'capture' sharpening (again, this is consistent with Schewe's workflow, I believe).
Yes, that is what I meant.
Regarding the empirical principles etc., etc., it seems to me that I have been doing a lot of testing and that I've offered not only examples, but actions for you guys to check out my suggestions . But so far no one has actually given an example testing a one-pass sharpen against a two-pass sharpen and shown that the two-pass is clearly superior, and under what conditions.
My suggestion was that since you are proposing this option you should be the one doing the rigorous testing. I could download your action and use it, time permitting - I've got a very full plate - but to give it justice I would need to do a lot of very well-conceptualized testing, which unfortunately I don't have time for just now; recall this is all voluntary. I would feel more compelled to make the time if I saw obvious deficiencies in the LR sharpening workflow, but quite frankly I don't. Maybe that's also why there hasn't been a chorus of volunteers. All the more reason why the onus of proof of concept is on you.
And I have certainly not suggested that a one-pass approach is SYSTEMATICALLY superior ... or even that it is superior at all. I personally think, both from tests and from logic, that it will be better in some cases and worse in others. If that is true (which you can check out for yourself if you're interested) then surely that is a useful bit of information? If you knew that for, let's say, images that are upscaled, that you are better off leaving the 'capture sharpening' to after the resize, and that if you did this you would probably have some improvement in the quality of your output, would you not at least consider sharpening after resize rather than before?
Robert, that is part of the problem with what you are proposing. Why not just use one approach systematically and be done with it? The toolset available in LR/ACR and Photokit Sharpener is designed to handle just about anything, systematically. After learning to handle that toolset very well, I doubt one would need or do much better with anything else - unless a deconvolution approach were needed to handle blur.
Part of the problem with this whole discussion is that some of you seem to think that I am criticizing an established workflow by the gurus of the industry (including Bruce Fraser, who is no longer with us sadly). That may to some extent be the case, but in reality it boils down to 'do you sharpen before or after resizing?'. The reason I say that is that if you sharpen after resizing then it gives you the opportunity (if it is appropriate) to sharpen only once.
I'm not part of that problem, nor am I sure who is. But for sake of greater clarity, I have no problem with criticizing established anything from anyone. It only depends on the substance of critique.
Unless the 'before resizing' corrects flaws in the original image (due, for example, to the blurring caused by the anti-aliasing filter) there seems no logical reason to apply it before resizing, and good logical reason to apply it after. In my testing (admittedly limited) I can see no benefit to applying it before. Since almost all of our photos will be resized before output to the web or print, it then follows that if this is true, you are better off resizing and then sharpening.
There are always flaws in the original image - as you say, the AA filter being one source of reduced acutance at the capture stage. If you read Chapter Two of Schewe's book you would see the point. Turning to output sharpening, one is in any case, be it PKS or LR, doing output sharpening as a function of pixel size. That happens on the fly in LR and on layers in PKS.
So let's say that the conclusion is that two types of sharpening are typically beneficial with the current 'sharpening' technology: one with a small radius to 'recover' fine detail, and one with a higher radius, to give the output a boosted impression of sharpness and crispness. I think this may well be so at times. Then, I, personally, would resize, sharpen with a small radius and then sharpen with a higher radius. This does not fit in with the Lightroom model, because Lightroom is strictly 1st phase sharpen, followed by resize, followed by (optional) 2nd phase sharpen.
Yes, that is how Lightroom is designed to be normally used, because between the imaging scientists on the Adobe Camera Raw team (photographers who know image quality and are brilliant mathematicians on a world scale) and the highly experienced developers in Pixelgenius, it was their combined evaluation that this is indeed the optimal processing approach for most of what LR is designed to do. But it is not really "followed by, followed by...." from a user perspective, as you undoubtedly know. The user can dial any of this stuff into the metadata in any order and the application applies adjustments in the correct sequence under the hood. We don't need to worry about sequencing - part of the application's design philosophy - it relieves the users of fiddling with that which users definitely need not control.
If you do not sharpen in Lightroom, then, in Photoshop, you can use one-pass sharpen where appropriate, and two-pass sharpen if you think this would be beneficial. There is nothing that I am aware of in PK Sharpen to prevent you from doing this, since it's a Photoshop set of actions.
Yes agreed, we can handle all this any way we want. As well in LR we have options about what sharpening to use or not use at either stage.
I would have thought that one of the great benefits of a forum like this one is that it has many very experienced members, who could take a suggestion like this one and demonstrate that it is nonsense, or that it is sometimes good, or that it's the best thing since sliced bread (as a home baker I would have to question that analogy :)).
Robert, I agree - that is one of the benefits of this forum, and it is one of the better ones around. There are highly experienced people who visit here and help each other. You are clearly a serious professional and the "rules" within such a peer group don't call for proving a concept to be nonsense - unless of course it so obviously is. But I for one am not saying that. Others may not agree with my criteria in respect of a sharpening workflow - they happen to be very closely aligned with what Jeff said above, for whatever that is worth - "repeatable and consistent workflow without a lot of gyrations". I don't want to be bothered even thinking about whether an image deserves a one pass or a two pass solution. Once I know how to handle two pass properly, and understanding what I think I do about the underlying logic, I just do it. From my experience editing countless numbers of photographs for the past 14 years that I've been doing digital imaging whether from scanners or DSLRs, I think it's the most efficient and effective path to sharpness I've ever used. But taking into account the value-added of sacrificing the benefits of a self-contained raw workflow, if you convincingly demonstrate a better mouse-trap in terms of both process and results, that's fine.
It seems to me that capture sharpening is best done with deconvolution. Output sharpening is to reverse the bleeding of inks from the print process. Someone should be able to take an input image, print, scan, determine the PSF of their printer, then deconvolve that to get back close to the original. Once you know the printer PSF you can correct for it in all your output. Again, deconvolution is the tool.
The only thing left is creative sharpening, which, as Bart says, is mostly contrast/clarity adjustment.
A new Photoshop action doesn't seem to advance anything. The main claim to fame is, if I follow the thread, a 1-step sharpening process. IMO people are willing to put a lot of effort into their best images. The average throwaway image usually sits on a hard drive as a raw that never gets printed.
Well at least Andrew gave a reason for why he believes this approach is wrong ... all you're doing is saying you can't be bothered to check it out because it doesn't fit in with your workflow.
Chances are people using deconvolve methods can beat anything done in LR/ACR.
Chances are people using deconvolve methods can beat anything done in LR/ACR. Yet the last post by Jeff indicates LR/ACR can do just that. Confused...
I guess you don't remember that with the Detail slider moved to the right ACR/LR employs deconvolution similar to the Lens Blur function of Smart Sharpen. No, you can't change the PSF but you can blend the amount of deconvolution by adjusting the slider number. At 50 it's about 1/2 deconvolution and 1/2 halo suppression...then by adjusting the amount and radius (and masking) you have good control over the capture sharpening.
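To make that blend-by-a-slider idea concrete, here is a minimal numpy sketch (purely illustrative, not Adobe's actual algorithm; the function names detail_blend and crude_deconv are hypothetical): a single parameter cross-fades between a plain unsharp-mask result and an iteratively deblurred one.

import numpy as np
from scipy.ndimage import gaussian_filter

def usm(img, sigma=1.0, amount=1.0):
    # classic unsharp mask: add back a scaled difference from a blurred copy
    return img + amount * (img - gaussian_filter(img, sigma))

def crude_deconv(img, sigma=1.0, iterations=10):
    # very crude Van Cittert-style iterative deblurring, for illustration only
    est = img.copy()
    for _ in range(iterations):
        est = est + (img - gaussian_filter(est, sigma))
    return est

def detail_blend(img, detail=0.5, sigma=1.0):
    # detail = 0 -> USM-like result, detail = 1 -> deconvolution-like result
    return (1.0 - detail) * usm(img, sigma) + detail * crude_deconv(img, sigma)

(img being a float greyscale array; masking and halo suppression are left out to keep the sketch short.)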
Yet the last post by Jeff indicates LR/ACR can do just that. Confused...
See the next post, then try studying the various methods. I see that your original text, "Chances are people using deconvolve methods can beat anything done in LR/ACR", needed further clarification.
Yet the last post by Jeff indicates LR/ACR can do just that. Confused...
Yes, exactly ... the problem is we don't currently have the tools to do that (perhaps some of them, but the overall problem is quite complex because there are many reasons for the loss of detail in the image, and these all need to be known for the image to be properly 'fixed' with deconvolution. Still, it should be quite possible to fix specific issues like the anti-aliasing blurring for each camera model).
I see that your original text:Chances are people using deconvolve methods can beat anything done in LR/ACR, needed further clarification.
Here is one page that shows several methods from one blurry image. Interesting and I'll dig into it, thanks.
http://www.deconvolve.net/bialith/Research/BARclockblur.htm
Deconvolution is a process designed to remove certain degradations from signals e.g. to remove blurring from a photograph that was originally taken with the wrong focus (or with camera shake).
USM as it's done in Photoshop and elsewhere is 50 years old?
Maybe Eric and Bart can comment on the lack of control of PSFs.
Photoshop isn't 50 years old, but the unsharp filter is a digital implementation of the unsharp mask that has been used in the darkroom for quite some time. I'm aware of that, Bill; I actually did USM in the analog darkroom as a photo assignment in school, long before Photoshop.
Interesting and I'll dig into it, thanks.
So is this about making out-of-focus images appear in focus?
I'm not going to get into a wordplay hijack. Out of focus is an extreme example. Any reason for capture sharpening is a reason to deconvolve. If you start with good tools/technique your need for capture sharpening may be minimal. Not meant to be wordplay; the question is about capture sharpening, which I'd expect would ideally be done on images that are not out of focus. A set of algorithms or processes that can do what you illustrate with out-of-focus images would indeed be very useful, no argument. The question is about current tools used on images that are not out of focus but need some work to overcome issues with digitizing the image in the first place. The statement made was: chances are people using deconvolve methods can beat anything done in LR/ACR. If the image is out of focus or has camera shake, the examples you showed would be impressive and useful. Does that mean other methods that don't make out-of-focus images in focus fail to work when the rubber hits the road and final output sharpening and a print is produced?
Fair enough.
Here is one page that shows several methods from one blurry image.
http://www.deconvolve.net/bialith/Research/BARclockblur.htm
corrected typo.
Yes, that page shows what is typically the key strength of the deconvolution approach. It allows one to retrieve usable information from an apparently hopelessly blurred photograph. This is particularly useful in forensics and espionage. How good it is for fine art photography is another matter. Some years ago I tested deconvolution software on photographs that simply needed the usual kind of acutance improvement for the usual reasons and I found the results ugly. And I tried numerous settings to make it look as good as I could, but it wasn't very promising. Now, maybe the software has improved a lot in the intervening period, but since then I haven't gone back to it because I haven't perceived any need to do so. Time is my scarcest resource, very valuable, and how I use it is therefore carefully selected.
Everyone has to decide if a particular image is worth time. I suspect that is what Mark, myself and perhaps Robert would like to see. No question the examples you provided show a huge benefit working with actual out-of-focus images. Now how about those that are not so severely awful? In such a case, are the chances that people using deconvolve methods can beat anything done in LR/ACR?
Yes, exactly ... the problem is we don't currently have the tools to do that (perhaps some of them, but the overall problem is quite complex because there are many reasons for the loss of detail in the image, and these all need to be known for the image to be properly 'fixed' with deconvolution). Also, I wonder ... and perhaps Bart could answer this ... whether or not a deconvolution function would be any more effective than an unsharp mask, carefully tuned, for blurring due to the AA filter.
Weeelll ... is that entirely true? Emphasizing edges (beyond compensation for ink bleed) will create an impression of sharpness - and that isn't strictly 'creative sharpening' ... although of course there's no reason why you couldn't call it that.
There is no claim to fame at all here - as I've said, it's just a question: "Is a 2-step sharpening process always necessary, given our currently available technology?". I hardly think I'm the first person to have suggested a one-pass sharpening!! No doubt this is what everyone did before the 2 or 3 pass sharpening came into vogue.
Sometimes Wikipedia puts things quite nicely:
Robert
I suspect that is what Mark, myself and perhaps Robert would like to see. No question the examples you provided show a huge benefit working with actual out-of-focus images. Now how about those that are not so severely awful? In such a case, are the chances that people using deconvolve methods can beat anything done in LR/ACR?
When I do go to print I usually like to have lots of pixels in the file. With deconvolution I feel Bart's 3x upsample recommendation is workable to get fair detail, good for printing, out of the image pixels. Anyone who thinks they can get close to optimal results from a raw can throw out a challenge with the file. I have offered that before with raws for the standard Imaging Resource reviews which include many raw shots. I doubt anyone can get sharp 3x upsampled images with ACR/LR.
Thank you for that Robert, and the bottom line one gets out of it is "horses for courses".
Very unclear to me that deconvolution tools are ideally suited to efficient and high quality workflows in "fine-art" photography. The onus is on those who propose them to demonstrate superiority in regard to both quality and efficiency. And while we are at it, let us not forget the need to define what we mean by "best" when we are talking about the quality of a sharpening outcome. Only when we agree on the criteria defining "best outcome" can we determine what is "best practice".
When I do go to print I usually like to have lots of pixels in the file. With deconvolution I feel Bart's 3x upsample recommendation is workable to get fair detail, good for printing, out of the image pixels. Anyone who thinks they can get close to optimal results from a raw can throw out a challenge with the file. I have offered that before with raws for the standard Imaging Resource reviews, which include many raw shots. I doubt anyone can get sharp 3x upsampled images with ACR/LR. I don't know what Perfect Resize is supposed to be using, but in the last tests I did comparing it, Photoshop (even doing step interpolation), and LR sizing up 250%, LR was the best of the lot based on a final print. And oh so much faster. Proper capture sharpening made the biggest differences in the results.
I'm aware of that Bill, I actually did USM in the analog darkroom as a photo assignment in school, long before Photoshop.
I was under the impression that there was some algorithm or process that Photoshop (perhaps other software) conducted and just named UnSharp Mask hence the question. Someone could build such an algorithm and call it USM or anything else, what similarity is there to the process we used in the analog darkroom if any? Or was the name just applied because in the old days of Photoshop, the name was given to give us old time analog darkroom users something we could understand?
There is no need to reinvent the wheel.
http://www.clarkvision.com/articles/index.html#sharpening
In both the darkroom unsharp masking and with digital unsharp masking, the same general principle is to blur the image and then subtract the blurred image from the original. OK, same general principle. But I suspect there are multiple products using the term and not producing the same results using the same original data. In fact I know that, as I just applied USM in Graphic Converter and then Photoshop using the same values and they are not the same! GC only has two of the three controls found in PS (Radius and Intensity, which I suspect is akin to Amount).
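For anyone who wants the 'blur and subtract' principle spelled out, here is a minimal Python/numpy sketch of a digital unsharp mask. The parameter names echo the familiar Amount/Radius/Threshold sliders, but this is only an illustration of the general principle, not any particular product's implementation:

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.0, amount=0.5, threshold=0.0):
    # digital analogue of darkroom USM: blur, take the difference (the 'unsharp mask'),
    # then add a scaled portion of that difference back to the original
    img = img.astype(np.float64)                 # assumes an 8-bit value range (0..255)
    blurred = gaussian_filter(img, sigma=radius)
    mask = img - blurred
    if threshold > 0:
        mask[np.abs(mask) < threshold] = 0.0     # ignore low-contrast differences
    return np.clip(img + amount * mask, 0, 255)

Even within this one recipe, the choice of blur kernel, how Amount is scaled and how Threshold is applied are all implementation decisions, which alone would explain why two products with the same slider names and values can produce visibly different results.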
What he has in that article isn't the wheel. There are better acutance-enhancing tools than Photoshop's USM, and some of the comparisons he shows even at that are awfully close, and probably indistinguishable at normal magnifications and viewing distances. I remain unconvinced. And yes Andrew, you're right: cost-effectiveness in terms of time versus practical outcomes is a real consideration.
OK, same general principle. But I suspect there are multiple products using the term and not producing the same results using the same original data. In fact I know that, as I just applied USM in Graphic Converter and then Photoshop using the same values and they are not the same! GC only has two of the three controls found in PS (Radius and Intensity, which I suspect is akin to Amount).
That is not an article, it is a series. If you go through it, it shows a comparison of the PS Smart Sharpen with Richardson-Lucy here: http://www.clarkvision.com/articles/image-restoration2/index.html First thing I see is: "In this example, we will start with a high signal-to-noise ratio image, then intentionally blur it." I try to never intentionally blur my images from the get-go.
Yes, that is what Doug Kerr discusses in his article (if you take the time to read it). I'm a fan of Doug's work and will, but I think what I've seen even before that is anyone can call a routine USM and they all produce different results. In the test I did today, pretty significant visual differences! In fact, if I showed you the two side by side and said one was USM and the other a vastly different approach (dare I say deconvolve), an observer could come to many of the same conclusions as to what is 'better' as we see on the various pages referenced here. One looks quite a bit less sharp than the other, and that suggests to me that a setting of USM in Photoshop may not produce the same level of sharpness as another product presumably using the same sharpening process (they share the same name). If USM in PS set to the same values we read on Clark's page looks soft compared to the settings in Graphic Converter, does that mean one should up the values? USM isn't USM, it appears; all things are not equal.
First thing I see is: In this example, we will start with a high signal-to-noise ratio image, then intentionally blur it. I try to never intentionally blur my images from the get go.
That is not an article, it is a series. If you go through it, it shows a comparison of the PS Smart Sharpen with Richardson-Lucy here: http://www.clarkvision.com/articles/image-restoration2/index.html
There are already several deconvolution threads on the site, so to me remaining unconvinced = remaining in the dark. Your choice.
The point of Bart's demonstration of blurring an image with Gaussian blur and restoring it with deconvolution is to demonstrate that deconvolution works very well if you know the PSF, but I agree that it is best to work with real-world images. Yes, and the demo IS impressive in handling blurred images. But most of mine are not blurred; my current workflow is to use LR for capture sharpening on images that are not out of focus. Going back full circle to the comment that chances are people using deconvolve methods can beat anything done in LR/ACR: that may be true, but the demos provided thus far have two issues as I see it. First, the images being used are blurred, out of focus. Next, one has to wonder if the USM examples are handled ideally (well, no, as none so far are used on real-world, non-blurry images). Clark shows one example with one setting of USM; no question it doesn't look as sharp (on-screen, which is rarely my final goal) as the others. He does suggest upping the settings would look sharper but produce other issues, and it would have been nice to see that. Just today's test using USM in two different products, something I've never looked at, gives me the impression that there are vast differences in just what someone calls USM! The same settings are not ideal in both cases. It would be useful for someone to really attempt to produce the best possible results with the tools provided on good images in the first place, then show me a scan of good output so that I could evaluate what the results would mean in a real-world context.
Upping the settings in USM creates halos. I bet you, as a photographer, have seen countless images on the web with them. Yes, I'm keenly aware that over-sharpening can cause visible halos on output; that's not what I'm suggesting.
IMO the biggest easy improvement for LR/ACR would be to have deconvolve methods listed in a sub-dialog box. I'll let the engineers who handle this within the product comment; I'm not qualified to suggest they do or do not do this, and I'll bet they are pretty aware of this possibility.
The methods are well-documented, non-proprietary scientific algorithms. Why do you suppose we are not seeing this in said products?
There is no reason not to make them available. Again, with no knowledge of the processing or specifics of this product, I'm not willing to accept that at face value; I'd certainly prefer to hear what an engineer would have to say about this.
Adobe always seems to want to say they have a secret sauce. They have marketing power that convinces many people that whatever they do is best. Ah, sure, OK. That seems like a pointless area to speculate about.
Hi Robert,
Yes, deconvolution is perfect for Capture sharpening, and it's also very good for restoration of some of the upsampling blur, and yes, these can also be combined if one wants to avoid upsampling any artifacts. For workflows involving Photoshop, I can recommend FocusMagic. What my analysis has shown is that Capture sharpening should be 'focused' at aperture-dictated blur (not image detail, as suggested in 'Real World Image Sharpening'). The amount of blur (in the plane of best focus) is largely Gaussian in nature, due to the combination of several blur sources (which tends to combine into a Gaussian distribution), and varies with aperture.
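For readers who want to experiment outside FocusMagic (whose internal algorithm isn't public), here is a rough Python sketch of the same idea: assume the capture blur is approximately Gaussian and run a Richardson-Lucy deconvolution against that assumed PSF. The sigma of 0.7 is only an illustrative guess, not a measured value for any particular camera or aperture:

import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma, size=9):
    # assumed Gaussian point-spread function
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(observed, psf, iterations=20):
    # standard multiplicative Richardson-Lucy update; 'observed' should be a
    # non-negative float image (e.g. linear values in [0, 1])
    est = np.full_like(observed, observed.mean(), dtype=np.float64)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        ratio = observed / (fftconvolve(est, psf, mode='same') + 1e-7)
        est *= fftconvolve(ratio, psf_mirror, mode='same')
    return est

# e.g. capture-sharpen an image assumed to be blurred by roughly sigma = 0.7 pixels:
# restored = richardson_lucy(img, gaussian_psf(0.7), iterations=20)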
Here is a crop from the image. screenshot pasted to MS Paint, saved as JPG. A PNG was too big.
This has Adaptive Richardson-Lucy in Gaussian 5x5 then 3x3 pixels.
I'm very interested in this and if the only thing that comes from this thread is a better way of doing capture sharpening then I, for one, will be very happy indeed. I've had a try with FocusMagic and it looks very good at first sight. I added it in to the Ps action I'm using to compare different methods.
For FM capture sharpen using default settings (the filter estimates the blur distance), my first conclusion (based on a sample of 1) is that FM does a much better job of capture sharpening than does Lr.
Hi Robert,
Since you are new to FocusMagic, allow me to share a tip (or two). FocusMagic does try to estimate the best blur width setting, but may fail at getting it right for the best focused part of the image (also depends on where you exactly set the preview marker). I tend to increase the Amount setting to its maximum of 300%, and set the Blur width to 0. Then increase the blur width by 1 at a time. There will be a point where most images will suddenly start to produce fat contours/edges instead of sharper edges. That's where you back-off 1 blur width click, and dial in a more pleasing amount (larger radii tolerate larger amounts). For critical subsequent upsampling jobs, I then use a Layer Blend-if setup, or I first upsample and then (WYSIWYG) sharpen that.
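As an aside on the Blend-if part of that tip: the idea is simply to fade the sharpened result out in the deep shadows and bright highlights, where halos are most visible. A tiny numpy sketch of that kind of luminosity-constrained blend (the ramp here is an illustrative approximation, not Photoshop's exact Blend If behaviour, and the function name is hypothetical):

import numpy as np

def blend_if_luminosity(original, sharpened, lo=0.1, hi=0.9, feather=0.05):
    # original, sharpened: float images in [0, 1]; full sharpening in the midtones,
    # feathered down to none below 'lo' and above 'hi'
    w = np.clip((original - lo) / feather, 0, 1) * np.clip((hi - original) / feather, 0, 1)
    return original + w * (sharpened - original)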
Thank you all for a most informative thread. Can you tell me whether there is any inherent advantage to performing capture sharpening as step in the demosaicing process as opposed to on a tiff "developed" without sharpening?
Thank you Bart - you are very helpful as usual! I'll give that a go. I often use a Layer Blend-if setup similar to yours to soften halos in sharpening.
Focus Magic can't be used as a smart filter, which is a real pity ... and right now it doesn't seem to install for CC 2014 (which isn't such a great surprise as I'm having problems installing plugins). Hopefully they will fix that in a future release.
Robert
Robert,
FM works fine on my Windows 8 machine with PS CC ver 2014.1.0. I can't remember how I installed it, whether with the installer or merely by copying the plugin from a previous version of CC. FocusMagic64.8bf resides in C:\Program Files\Adobe\Adobe Photoshop CC 2014\Plug-ins.
Regards,
Bill
Here is a very simple test image (real-life) that can be used to try out the different techniques:
http://www.irelandupclose.com/customer/LL/sharpentest.tif
I’ve tried various methods (after applying a Gaussian blur of 3) and none of them seem to be particularly effective. I would be very interested indeed if you have a filter, or multiple filters, or a filter applied multiple times, that can (within reason) restore the image to the original.
And I would be very grateful for clarification on deconvolution (and correction of my understanding, particularly on how it is normally applied to digital images). There's a lot of talk about deconvolution, but I doubt that there are too many of us who understand it (me included)!
I would really appreciate a bit of help understanding the whole concept of deconvolution. BTW, I see there was a massive thread 4 years ago, here: http://www.luminous-landscape.com/forum/index.php?topic=45038, started by Bill. (Which humbles me a bit as I can see you guys have been talking about this for ages!). I've read some of it and while it's very interesting, with something like 18 pages it takes some plowing through! Still, I will get to it.
From a mathematical point of view it seems straightforward enough: the signal f is convolved with another signal g to yield h. If we know g then we can find its inverse and so recover f. If we don’t know g then we can guess it or estimate it and so attempt recovery of f.
Noise messes things up a bit because it’s added to the convolved signal … so how do we remove it from h before doing the deconvolution? Well, one way would be to add some blur to h (in other words convolve it further, which isn’t a brilliant idea if the g was a blur function to start off with!).
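To put the f, g, h description into a runnable form, here is a tiny numpy sketch: convolve a known signal with a known blur, add a little noise, and invert in the frequency domain. The small constant k is a crude regularizer standing in for proper noise handling (a Wiener filter weights this term by the actual noise-to-signal ratio); all the numbers are arbitrary and purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
f = rng.random(256)                                       # original signal
g = np.zeros(256); g[:5] = 1.0 / 5.0                      # known blur kernel (5-tap box)
h = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))   # h = f convolved with g (circular)
h += rng.normal(0.0, 1e-3, 256)                           # a little additive noise

G = np.fft.fft(g)
k = 1e-3                                                  # regularization: limits gain where |G| is tiny
F_est = np.fft.fft(h) * np.conj(G) / (np.abs(G)**2 + k)
f_est = np.real(np.fft.ifft(F_est))                       # close to f, degraded where noise dominates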
Anyway, moving on to imaging, I assume that all filters convolve the image (essentially one function applied to another). If we convolve the image with a blur filter and then apply the inverse filter (a sharpening filter?) then we are convolving the image twice, but the second convolution is also a deconvolution. Is that correct?
Looking at the Ps Custom filter, it’s easy enough to apply a blur and then apply the inverse (so where the adjacent pixel was added, we now subtract it). The effect is to remove the blur … but it also introduces the beloved halo!
So I guess I must be missing something fundamental! Or not using the Ps Custom filter correctly, which is also highly likely!
But assuming that I’m not entirely off the mark, when Jeff says that the Lr sharpen is effectively a USM-type sharpening when used with a low Detail setting, but becomes a deconvolution filter with high Detail settings … I’m both puzzled and lost. I’m puzzled as to how a Detail setting of 0 gives USM (which to my mind is a deconvolution if its intention is to remove blur) while at 100 it’s a deconvolution.
If I take an image and blur it with a Gaussian blur, radius 3, and then sharpen using the ACR sharpen, moving the Detail to 100% certainly gives more sharpening, but it also gives a nice (NOT) halo … it certainly doesn’t recover the image to the pre-blur version.
Have been following this thread with some interest and as far as deconvolution goes I am sure that Bart and others knowledge and experience of this aspect will prove very useful for you.
Couple of things I picked up on and it is my opinion that maybe you are making things a little more difficult than they need to be to get excellent result whichever sharpening route you choose.
1. Your test file of the power/telephone line is not a particularly good choice as presented due to purple green CA. IMO this should be removed first during raw processing to give a meaningful view of sharpening options.
2. As you started the thread with PS have you tried the Smart Sharpen / Lens Blur / More Accurate checked? This AFAIK is deconvolution sharpening (particular parameters unknown) and offers quite a lot in the way of control. Not as many options of course as in other software but sometimes this maybe enough?
By chance I had also played with the sample NEF image in ACR using Amt=50, Rad=0.7, Detail=80 and it seems to be pretty close to your FM example, although that was not my intention. Seems to me in this case that a little tweaking in ACR would narrow the differences even further.
Have been following this thread with some interest and as far as deconvolution goes I am sure that Bart and others knowledge and experience of this aspect will prove very useful for you.
1. Your test file of the power/telephone line is not a particularly good choice as presented due to purple green CA. IMO this should be removed first during raw processing to give a meaningful view of sharpening options.
2. As you started the thread with PS have you tried the Smart Sharpen / Lens Blur / More Accurate checked? This AFAIK is deconvolution sharpening (particular parameters unknown) and offers quite a lot in the way of control. Not as many options of course as in other software but sometimes this maybe enough?
By chance I had also played with the sample NEF image in ACR using Amt=50, Rad=0.7, Detail=80 and it seems to be pretty close to your FM example, although that was not my intention. Seems to me in this case that a little tweaking in ACR would narrow the differences even further.
...Hi Robert. Quite happy with your findings now CA corrected :)
I corrected the CA so if you download the image now it's CA-free http://www.irelandupclose.com/customer/LL/sharpentest.tif
Also, here are some comparisons:
...
My own feeling is that the Smart Sharpen result is the best (without More Accurate, as this is a Legacy setting which seems to increase artifacts quite a lot). ACR and FocusMagic seem much of a muchness. QImage gives a good sharp line, but at the expense of flattening the power lines. My understanding is that the Smart Sharpen Lens Blur kernel and More Accurate option should give the best results (based on something I read by Eric Chan - I think on this forum). It certainly takes longer to apply, and I assume that more iterations are performed, which may lead to the artifact increase you are seeing?
I find it hard to compare your two D800Pine images as the bottom one has darker leaves but a lighter trunk. Not sure why that is? I think it would be wrong to try and draw conclusions from this comparison; all I did was to crop the full-size view of your test and paste it as a new document in PS. My own version using ACR was actually produced before I even saw your example and was straight from camera with only lens profile and CA correction applied, plus the sharpening. The difference may be explained by the simple fact of copying your image, or possibly FM may have altered contrast/colour slightly, or even a combination :).
Here is where we need to distinguish between a masking type of filter, like USM and other acutance enhancing filters, and a deconvolution type of filter. A mask is just an overlay, that selectively attenuates the transmission to underlying layers. It adds a (positive or negative) percentage of a single pixel to a lower layer's pixel. A deconvolution on the other hand adds weighted amounts of surrounding pixels to a central pixel, for all pixels (a vast amount of multiplications/additions is required for each pixel) in the same layer.
I have no experience of FM or Qimage therefore could not comment on advantages, but if Bart says they are good then I have every reason to believe that is the case and worth investigating to see how they may fit in with your workflow.
What I’m attempting to emulate is a point source (original image), blurred using the F1 filter. The blurred point is then ‘unblurred’ using the F2 filter (which is not a USM but a neighbouring pixel computation).
So is this a deconvolution? And is the PSF effectively F1 (that is, the blur)? In which case F2 would be the deconvolution function?
As you’ve probably guessed, I’m trying to put this whole thing in terms that I can understand. I know of course that a sophisticated deconvolution algorithm would be more intelligent and complex, but would it not essentially be doing the same thing as above?
Interestingly, this sharpen filter:
(http://www.irelandupclose.com/customer/LL/customsharpen.jpg)
gives a sharper result than the ACR filter, for example, in the test image with the power lines. A little bit of a halo, but nothing much … and no doubt the filter could be improved on by someone who knew what he was doing!
OK ... this is where I stop for tonight!!
But just before ending, you should try this Ps Custom Filter on the D800pine image:
(http://www.irelandupclose.com/customer/LL/customsharpen.jpg)
Then fade to around 18-20% with Luminosity blend mode. It's better than Smart Sharpen. Which is pretty scary.
That's correct, the Custom filter performs a simple (de)convolution.
However, to deconvolve the F1 filter would require an F2 filter like:
-1 -1 -1
-1 9 -1
-1 -1 -1
All within the accuracy of the Photoshop implementation. One typically reverses the original blur kernel values to negative values, and then adds to the central value to achieve a kernel sum of one (to keep the multiplied and summed restored pixels at the same average brightness).
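A quick way to check that rule numerically, assuming F1 was a plain 3x3 box blur (in the Photoshop Custom filter that would be nine 1s with Scale 9; note the pairing is only a first-order approximation to an inverse, not an exact deconvolution):

import numpy as np
from scipy.ndimage import convolve, gaussian_filter

f1 = np.full((3, 3), 1.0 / 9.0)           # assumed blur kernel, sums to 1
f2 = -np.ones((3, 3)); f2[1, 1] = 9.0     # values negated, centre raised so the kernel sums to 1
assert abs(f2.sum() - 1.0) < 1e-12

img = gaussian_filter(np.random.default_rng(1).random((64, 64)), 1.5)  # smooth test image
blurred = convolve(img, f1, mode='reflect')
restored = convolve(blurred, f2, mode='reflect')
# compare mean errors against the original; the kernel pair only approximately undoes the blur
print(np.abs(blurred - img).mean(), np.abs(restored - img).mean())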
Hi Robert,
Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.
Yes, it works quite well, although you have to apply it several times to get the sort of sharpening needed for the D800Pine image (which would seem to indicate that the D800 image is a bit softer than one would expect, given that the test image was produced by Nikon, presumably with the very best lens and in the very best conditions).
Could you explain how you work out the numbers? Do you have a formula or algorithm, or is it educated guesswork (in which case your guessing capabilities are better than mine :)).
I'd really appreciate it if someone could relate "sharpening" to "deconvolution" in a dsp manner, ideally using simplistic MATLAB scripts. There are many subjective claims ("deconvolution regains true detail, while sharpening only fakes detail"). But what is the fundamental difference? Both have some inherent model of the blur (be it gaussian or something else), successful implementations of both have to work around noise/numerical issues...
If you put an accurately modelled/measured PSF into a USM algorithm, does it automatically become "deconvolution"? If you use a generic windowed Gaussian in a deconvolution algorithm, does it become sharpening? Is the nonlinear "avoid amplifying small stuff as it is probably noise" part of USM really that bad, or is it an OK first approximation to methods used in deconvolution?
-h
I'd really appreciate it if someone could relate "sharpening" to "deconvolution" in a dsp manner, ideally using simplistic MATLAB scripts.
There are many subjective claims ("deconvolution regains true detail, while sharpening only fakes detail").
But what is the fundamental difference? Both have some inherent model of the blur (be it gaussian or something else), successful implementations of both have to work around noise/numerical issues...
If you put an accurately modelled/measured PSF into a USM algorithm, does it automatically become "deconvolution"?
If you use a generic windowed Gaussian in a deconvolution algorithm, does it become sharpening? Is the nonlinear "avoid amplifying small stuff as it is probably noise" part of USM really that bad, or is it an OK first approximation to methods used in deconvolution?
However, what we normally call 'sharpening' is not restoring lost detail (which is what deconvolution attempts to do): what it does is to add contrast at edges, and this gives an impression of sharpness because of the way our eyes work (we are more sensitive to sharp transitions than to gradual ones - this gives a useful explanation http://www.cambridgeincolour.com/tutorials/unsharp-mask.htm).
So a sharpening filter like USM could by chance be a deconvolution filter, but it normally won't be.
Not everybody here is familiar with MatLab, so that would not help a larger audience. Not everything can be explained to a larger audience (math, for instance). The question is what means are available that will do the job. MATLAB is one such tool; Excel formulas, Python scripts etc. are others. I tend to prefer descriptions that can be executed on a computer, as that leaves less room to leave out crucial details (researchers are experts at publishing papers with nice formulas that cannot easily be put into practice without unwritten knowledge).
The crux of the matter is that, in a DSP sense, deconvolution exactly inverts the blur operation (assuming an accurate PSF model, no input noise, and high-precision calculations to avoid accumulation of errors). USM only boosts the gradient of e.g. edge transitions, which will look sharp but is only partially helpful and not accurate (and prone to creating halos, which are added/subtracted from those edge profiles to achieve that gradient boost).
The exact PSF of a blurred image is generally unknown (except in the trivial case of intentionally blurring an image in Photoshop). Moreover, it will be different in the corners from the center, from "blue" to "red" wavelengths, etc. Deconvolution will (practically) always use some approximation to the true blur kernel, either input from some source or blindly estimated.
It's not subjective, but measurable and visually verifiable. That's why it was used to salvage the first generation of Hubble Space Telescope images taken with flawed optics.
I know the basics of convolution and deconvolution. Your post contains a lot of claims and little in the way of hands-on explanations. Why is the 2-d neighborhood weighting used in USM so fundamentally different from the 2-d weighting used in deconvolution, aside from the actual weights?
No, it's not the model of the blur, but how that model is used to invert the blurring operation. USM uses a blurred overlay mask to create halo overshoots in order to boost edge gradients. Deconvolution doesn't use an overlay mask, but redistributes weighted amounts of the diffused signal in the same layer back to the intended spatial locations (it contracts blurry edges to sharpen, instead of boosting edge amplitudes to mimic sharpness).
I can't help thinking that you are missing something in the text above. What is a fair frequency-domain interpretation of USM?
More advanced algorithms usually have a regularization component that blurs low signal-to-noise amounts but fully deconvolves higher S/N pixels.
My point was that USM seems to allow just that (although probably in a crude way compared to state-of-the-art deconvolution).
That's not correct, USM is never a deconvolution, it's a masked addition of halo. The USM operation produces a haloed version of the edge transitions and adds that layer (halos and all) back to the source image, thus boosting the edge gradient (and overshooting the edge amplitudes). Halo is added to the image, which explains why USM always produces visible halos at relatively sharp transitions, which is also why a lot of effort is taken by USM-oriented tools like Photokit Sharpener to mitigate the inherent flaw in the USM approach (which was the only remedy available for film), with edge masks and Blend-if layers.
Now, how might USM be expressed in this context, and what would be the fundamental difference?
Sorry, my mistake ... in that I assume, probably incorrectly, that the 'USM' implementation in Photoshop etc., doesn't actually use the traditional blur/subtract/overlay type method, but uses something more like one of the kernels above, as that would give far more flexibility and accuracy in the implementation.
If that was the case, then would it not be correct to say that this sort of filter could either be a sharpening filter or a deconvolution filter, depending on whether or not it was (by chance or by trial and error) the inverse of the convolution?
Hi Robert,
Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.
It is in fact extremely unlikely (virtually impossible) that simply adding a halo facsimile of the original image will invert a convolution (blur) operation. USM is only trying to fool us into believing something is sharp, because it adds local contrast (and halos), which is very vaguely similar to what our eyes do at sharp edges.
Why is the 2-d neighborhood weighting used in USM so fundamentally different from the 2-d weighting used in deconvolution aside from the actual weights?
What is a fair frequency-domain interpretation of USM?
It would aid my own (and probably a few others') understanding of sharpening if there was a concrete description (i.e. something other than mere words) of USM and deconvolution in the context of each other, ideally showing that deconvolution is a generalization of USM.
I believe that convolution can be described as:
[...]
This is about where my limited understanding of deconvolution stops.
You might want to tailor the pseudoinverse w.r.t. (any) knowledge about the noise and/or signal spectrum (à la Wiener filtering), but I have no idea how blind deconvolution finds a suitable inverse.
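For what it's worth, the non-blind Wiener idea can be sketched in a few lines (assuming a flat, hand-tuned noise-to-signal ratio; real implementations estimate it from the data): instead of dividing by H outright, the inverse is weighted so that frequencies where H approaches zero are attenuated rather than amplified.

import numpy as np

def wiener_deconv(img, psf, nsr=1e-2):
    # Pad the PSF to the image size and centre it on (0, 0) for the FFT convention.
    padded = np.zeros_like(img, dtype=float)
    kh, kw = psf.shape
    padded[:kh, :kw] = psf
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(padded)
    # Wiener filter with a constant noise-to-signal assumption 'nsr':
    # behaves like 1/H where |H| is large, rolls off where |H| -> 0.
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))

With nsr set to 0 this collapses back to the bare inverse filter; larger values trade resolution for noise suppression.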
Hi Bart,
I've tried your PSF generator and I must be using it incorrectly, as the figures I get are very different from yours.
I don't understand 'fill factor' for example - and I just chose the pixel value to be as close to 999 as possible.
Not really all that different, although I did mention that I tweaked the PS Custom filter a bit (to beat it into submission). I tend to use the larger fill-factor percentages, because they create a shape closer to what a digital sensor actually samples (slightly less peaked) from the Gaussian blur.
The fill factor tries to account for the aperture sampling of the sensels of our digital cameras. Instead of a point sample (which produces a pure 2D Gaussian), a (sensel) fill-factor of 100% would use a square pixel aperture to sample the 2D Gaussian for each sensel without gaps between the sensels (as with gap-less micro-lenses). It's just a means to approximate the actual sensel sampling area a bit more realistically, although it's rarely a perfect square.
On top of that, I adjusted the PS Custom filter kernel values a bit to improve the limited calculation precision and reduce potential halos from mis-matched PSF radius/shape, but your values would produce quite similar results, although probably with a different Custom filter scale value than I ultimately arrived at. If only that filter would allow larger kernels and floating point number values as input, we could literally copy values at a scale of 1.0 ...
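As a rough illustration in Python (a sketch of the idea, not the web tool's actual code): each kernel cell averages a 2D Gaussian over a square aperture covering a fraction of the pixel pitch, instead of point-sampling the pixel centre. Note that this produces the blur PSF itself; to sharpen you would still need to invert it (or feed it to a deconvolver), and scale/round it for a tool like the Photoshop Custom filter.

import numpy as np

def psf_kernel(sigma=0.7, size=5, fill=1.0, sub=11):
    # 'fill' is the linear fraction of the pixel pitch covered by the sensel
    # aperture: 0 -> point sample at the pixel centre, 1.0 -> gap-less square.
    half = size // 2
    offs = np.linspace(-0.5, 0.5, sub) * fill        # sub-sample offsets within one pixel
    k = np.zeros((size, size))
    for iy, y in enumerate(range(-half, half + 1)):
        for ix, x in enumerate(range(-half, half + 1)):
            yy = y + offs[:, None]
            xx = x + offs[None, :]
            k[iy, ix] = np.mean(np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)))
    return k / k.sum()                               # unit volume, so flat areas keep their brightness

print(np.round(psf_kernel(sigma=0.7, fill=1.0), 4))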
Dr. Clark is a science team member on the Cassini mission to Saturn, Visible and Infrared Mapping Spectrometer (VIMS) http://wwwvims.lpl.arizona.edu, a Co-Investigator on the Mars Reconnaissance Orbiter, Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) team, which is currently orbiting Mars, and a Co-Investigator on the Moon Mineralogy Mapper (M3) http://m3.jpl.nasa.gov , on the Indian Chandrayaan-1 mission which orbited the moon (November, 2008 - August, 2009). He was also a Co-Investigator on the Thermal Emission Spectrometer (TES) http://tes.asu.edu team on the Mars Global Surveyor, 1997-2006.
I'm not promoting one workflow/technology/tool over another. However, I think the result I achieved (with help from Fine_Art) speaks for itself.
Really hard to separate demosaicing from sharpening. RawTherapee has different demosaicing than LR. Part of what I'm seeing is the original demosaicing plus sharpening.
So, it's really not an apples-to-apples comparison...
Going back to more basic basics, and looking at this image:
(http://www.irelandupclose.com/customer/LL/unbt.jpg)
I expected that the second convolution kernel would deconvolve the first one - but clearly it doesn't. The reason seems to be that at the edges, the subtraction of black is greater than the addition of grey, so we get the dreaded halo.
I messed around a bit with your PSF generator, but I could not come up with a proper convolution/deconvolution. Could you explain what is going wrong?
The dot seems to be upsampled, so I cannot check what a different deconvolver, e.g. the one in ImageJ (http://imagej.nih.gov/ij/download.html) which is a much better implementation, would have done. That would allow an estimate of the influence of the calculation accuracy, but it will remain a rather impossible deconvolution.
or who thinks deconvolution and USM is the same sort of thing
I tried it with the example macro in ImageJ and this restores the square perfectly.
It would be very interesting to play around with ImageJ with PSFs with varying Gaussian Blur amounts. If you have reasonably understandable steps that can be followed (by me) in ImageJ I would be happy to give it a go.
I have to thank you for all the information and help! You are being very generous with your time and knowledge.
I am speculating that USM and deconvolution might be "the same sort of thing" in the same way that a Fiat and a Ferrari are both Italian cars (they both have four wheels and an engine, and it can bring insight to relate them to each other).
I am not questioning that deconvolution (when properly executed) can give better results than USM.
-h
Hi Robert,
I'm not sure which example Macro you used, or whether you are referring to the Process/Filters/Convolve... menu option.
I am seriously not an expert in imaging science,
Robert
I am enjoying this thread. I have been a deconvolution fan for quite a while and really like Focus Magic. I hope you won't mind if I chime in with a somewhat less technical question/observation.
One of the frequent criticisms that I have seen in the past of deconvolution is that "you need to know the Point Spread Function to use it properly."
While I understand at a basic level that the PSF is indeed important, I always found that supposed criticism to be a bit of a red herring - mainly because it makes it sound like you need to pull out an Excel spreadsheet or MATLAB to use it properly. In practical use, however, it can be as simple as using your eyes with something like FM, or even letting software like FM make an educated guess for you based on a selected sample. And while correction of lens defects and properties admittedly gets into more complicated territory, I would think that "capture sharpening", in particular, can be handled in a pretty straightforward manner where deconvolution is concerned.
Anyway, back to my "just use your eyes" comment. One big difference I have noticed between FM and the Adobe tools that reportedly use some degree of deconvolution (PS Smart Sharpen* and LR when Detail>50) is that FM makes it super-easy/obvious where the ideal radius sweet-spot is and the Adobe products do not. FM will start to show obvious ringing when you go too far, but the Adobe tools will just start maxing out shadows and highlights. The Adobe tools also seem to have a difficult time mixing deconvolution with noise suppression, whereas FM almost always seems to do a great job of magically differentiating between fine detail and noise.
Any ideas/info on why this is?
Hi Bart,
I can't remember where I got the macro from (it's in there with ImageJ somewhere, obviously),
I haven't looked into the FD Math code, but it appears to be using FFTs.
BTW ... do you ever use ImageJ to 'sharpen' your own images?
... other than that Adobe LR/ACR probably uses a relatively simple deconvolution method
I have other applications for specific deconvolution tasks.
I'm also looking forward to a newer more powerful version of Topaz Labs Infocus (which currently is a bit too sensitive regarding the creation of artifacts).
Which looks to me like a massive amount of contrast has been added to the detail so that we end up with a posterized look ... and doesn't look to me like a deconvolution at all.
Also, moving the Detail slider up in steps of 10 just shows an increasing amount of this coarsening of detail; there is no point at which there is a noticeable change in processing (from USM-type to deconvolution-type). Also, notice the noise on the lamp post.
I know Jeff has said that this is so - and I don't dispute his insider knowledge of Photoshop development - but it would be good to see how the sharpening transitions from USM to deconvolution, because I certainly can't see it.
That is exactly what I was referring to in my earlier post. I have often said to myself exactly what you did: "I know you say it is but is it REALLY using deconvolution?" :)
Eric would be the man to know I guess, although I'm not sure how often he checks out more esoteric threads like this one. In fact, in my mind Eric is the source of the ">50% Detail uses deconvolution in LR" understanding although I'm not sure I could link a direct quote. If not here, maybe on the Adobe forums.
Hi Bart,
I thought that the problem might lie along the lines you've pointed out.
The image that I posted is a screen capture, so it's way upsampled. The original that I tried to deconvolve is just a 4-pixel black square on a gray background.
I tried it with the example macro in ImageJ and this restores the square perfectly. I also played around with a couple of images, and for anyone who doubts the power of deconvolution (or who thinks deconvolution and USM is the same sort of thing), here is an example from the D800Pine image:
(http://www.irelandupclose.com/customer/LL/dconv.jpg)
It would be very interesting to play around with ImageJ with PSFs with varying Gaussian Blur amounts. If you have reasonably understandable steps that can be followed (by me) in ImageJ I would be happy to give it a go. I've never used ImageJ before, so at this stage I'm just stumbling around in the dark with it :).
I have to thank you for all the information and help! You are being very generous with your time and knowledge.
Robert
I am seriously not an expert in imaging science, but it would seem to me that a better analogy between USM and deconvolution would be something like a blanket and a radiator, in that the blanket covers up the fact that there's not enough heat in the room whereas the radiator puts heat back in (heat being detail ... which is a bit of a pity because heat is noise, harking back to my thermodynamics :)).
Here is my attempt to deconvolve the same section of the image. It needs more work.
Hi,
And my claim has been that what seems to be your underlying assumption is wrong. Not that I blame you, the same claim seems to reverberate all over the net: "deconvolution recreates detail, USM fakes detail". Let me modify your analogy: the classical radiator has a single variable, controlled by the user, and no thermometer to be seen anywhere. Getting the right temperature can be challenging. A more modern radiator might have one or more temperature probes, and thus can make more well-informed choices.
I believe that the aforementioned claim is not supported by an analysis of what USM (in its various incarnations) does compared to deconvolution. Information cannot be made out of nothing (Shannon & friends). These methods can only transform information present at their input in a way that more closely resembles some assumed reference, given some assumed degradation. When USM uses a windowed Gaussian subtracted from the image itself, this is (in effect) a convolution with a single kernel, seemingly the linear-phase complementary filter. Thus, the sharpening used in USM can perhaps be described as inverting the implicitly assumed Gaussian image degradation. A function that (of course) can be described in the frequency domain. The nonlinearity does complicate the analysis, but I think that the same is true for the regularization used in deconvolution.
This description might prove instructive for relatively "small-scale" USM parameters ("sharpening"), while larger-scale "local contrast" modification might be more easily comprehended in the spatial domain?
Thus, my claim (and I don't have the maths to back it up) is that USM is very similar to (naive) deconvolution, and that both can be described as inverting an implicit/explicit model of the image degradation. The most important difference seems to be that USM practically always has a fixed kernel (of variable sigma), while deconvolution tends to have a highly parametric (or even blindly estimated) kernel, thus giving more parameters to tweak and (if chosen wisely) better results. It seems that practical deconvolution tends to use nonlinear methods, e.g. to satisfy the simultaneous (and contradictory) goals of detail enhancement and noise suppression. These may well give better numerical/perceived compromises, but it does not (in my mind) make it right to claim that "deconvolution recreates detail, while USM fakes it".
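To put a number on that claim (ignoring USM's threshold nonlinearity, so only the linear part): an unsharp mask out = img + A*(img - blur(img)) has the transfer function 1 + A*(1 - G(f)), where G(f) is the MTF of the Gaussian used for the mask, while exact deconvolution of that same Gaussian needs 1/G(f). A tiny sketch comparing the two, with illustrative values only:

import numpy as np

sigma, amount = 0.7, 1.5                       # example values, not anyone's recipe
f = np.linspace(0.0, 0.5, 6)                   # cycles/pixel, up to Nyquist
G = np.exp(-2 * (np.pi * sigma * f) ** 2)      # MTF of a Gaussian blur with that sigma
usm_gain = 1 + amount * (1 - G)                # linear USM transfer function
inv_gain = 1 / G                               # what exact deconvolution would need

for fi, u, d in zip(f, usm_gain, inv_gain):
    print(f"f = {fi:.1f} cy/px   USM gain = {u:5.2f}   inverse gain = {d:6.2f}")

At low frequencies the two track each other reasonably well, but near Nyquist the inverse filter asks for far more gain than any sensible USM amount would provide, which is one concrete way of stating the difference.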
Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.
As I've said earlier, such a 'simple' deconvolution tends to also 'enhance' noise (and things like JPEG artifacts), because it can't discriminate between signal and noise. So one might want to use this with a blend-if layer or with masks that are opaque for smooth areas (like blue skies, which are usually a bit noisy due to their low photon counts, and the demosaicing of that).
Upsampled images would require likewise upsampled filter kernel dimensions, but a 5x5 kernel is too limited for that, so this is basically only usable for original size or down-sampled images.
Someone who understands the maths better than me would need to answer this question.
Hi,
This was deconvolving the original image, or was it deconvolving the image blurred with a Gaussian blur? If the latter then pretty impressive.
What tools/techniques do you use for the wavelets?
Robert
Could you expand on the math that resulted in the kernel above for a gaussian blurring function of radius r? f1=>F1, 1/F1=F2, F2=>f2?
I don't know about the math but from what I understand USM is somewhat equivalent to taking a black/white marker and drawing along every transition in the picture to make it stand out more - automatically.
...I remember I had to tweak a few values to get uniform brightness before and after sharpening uniform areas (requires 16-bit/channel data), so I finally arrived at the values given earlier.
That was about the train of thought, which should work for other blur radii just as well, without boosting the sharpening amount.
Jack, the math is very basic and simple. Here is an attempt to clarify with some numbers, based on a Gaussian blur of 1.0, Fill-factor 100%:
Got it, thanks Bart. My question about USM was rhetorical more than anything else. The one about how to properly calculate the deconvolution kernel of a Gaussian is on the other hand real: I am stuck there :)
It is the multi-resolution smooth/sharpen feature in ImagesPlus.
Hi,
My question was whether the image was the original raw image (so you're trying to get the best detail from it) or were you doing a more brutal test, that is, to blur the image with a Gaussian blur and then attempt to recover the original (as per the example I gave using ImageJ)?
Robert
I did not blur it.
I would really love to see an example of an image, blurred in Photoshop with a Gaussian blur of, say 4, and then restored using deconvolution. Ideally I would like to see the deconvolution using a kernel and also using Fourier.
Secondly, I would also really love to be shown a method to photograph a point light source with my camera (for a given fixed focal length), and then to use this to produce a deconvolution kernel.
Thirdly, I would really, really love to see the two above put together, so that, taking a point light source (say a white oval on a black background in Photoshop) and applying a blur of some sort to it, we could work out the deconvolution kernel and use this to restore an image that had the same blur applied to it.
It's fascinating to learn about the technicalities (some of it well over my head, although I'm getting there bit by bit :)), but the next step for me would be putting it into practice ... not using a black box like FocusMagic, say, but doing it step by step using the techniques and tools currently available.
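On the second point (photographing a point light source and turning it into a kernel), the core of the idea is just cropping the point, removing the background level and normalising, something like the hypothetical sketch below (kernel_from_point_source is an invented name, not an existing tool). In practice a real point source is noisy, rarely small enough, and easily clipped, which is why the slanted-edge approach mentioned next tends to be more robust.

import numpy as np

def kernel_from_point_source(crop):
    # 'crop' is a small array centred on the photographed point.
    crop = np.asarray(crop, dtype=float)
    background = np.median(crop)            # assumes most of the crop is background
    k = np.clip(crop - background, 0.0, None)
    total = k.sum()
    if total == 0:
        raise ValueError("no point source found in the crop")
    return k / total                        # normalise to unit volume

# hypothetical usage: k = kernel_from_point_source(image[y0:y0 + 9, x0:x0 + 9])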
I'm working on it ... ;) , but for the moment a Slanted edge approach (http://bvdwolf.home.xs4all.nl/main/foto/psf/SlantedEdge.html) goes a long way to allow a characterization of the actual blur kernel with sub-pixel accuracy.
Tools like FocusMagic are real time savers, and doing it another way may require significant resources and amongst others, besides dedicated Math software, a lot of calibration and processing time.
Am I correct in understanding that, using the Slanted Edge approach, it should be possible:
- to take a photograph of an edge
- process that in ImageJ to get the slope of the edge and the pixel values along a single pixel row
- paste this information in your Slanted Edge tool to compute the sigma value
- use this sigma value in your PSF Generator to produce a deconvolution kernel
- use the deconvolution kernel in Photoshop (or preferably ImageJ as one can use a bigger kernel there):
- as a test it should remove the blur from the edge
- subsequently it could be used to remove capture blur from a photograph (taken with the same lens/aperture/focal length)
Assuming I have it even approximately right, it would be incredibly useful to have a video demonstration of this as it's quite easy to make a mess of things with tools one isn't familiar with. I would be happy to do this video, but first of all I would need to be able to work through the technique successfully, and right now I'm not sure I'm even on the tracks at all, not to mention on the right track!
A video could be helpful, but there are also linked webpages with more background info, and the thread also addresses some initial questions that others have raised.
Hello (again!) Bart,
I'm getting there - I've now found your thread http://www.luminous-landscape.com/forum/index.php?topic=68089.0 (that's the one, I take it?), and I've taken the test figures you supplied on the first page, fed them into your Slanted Edge tool and got the same radius (I haven't checked this out, but I assume you take an average of the RGB radii?).
I then put this radius in your PSF generator, got a deconvolution kernel and tried it on an image from a 1Ds3 with a 100mm f2.8 macro (so pretty close to your equipment). The deconvolution in Photoshop is pretty horrendous (due to the integer rounding, presumably); however, if the filter is faded to around 5% the results are really good. Using floating point and ImageJ, the results are nothing short of impressive, with detail recovery way beyond Lr, especially in shadows.
I don't know how best to set the scale on your PSF generator - clearly a high value gives a much stronger result; I found that a scale of between 3 and 5 is excellent, but up to 10 is OK depending on the image. Beyond that noise gets boosted too much, I think.
I didn't see much difference between a 5x5 and a 7x7 kernel, but it probably needs a bit more pixel-peeping.
I also don't understand the fill factor (I just set it to Point Sample).
What seems to be a good approach is to do a deconvolve with a scale of 2 or 3 and one with a scale of 5 and to do a Blend If in Photoshop - you can get a lot of detail and soften out any noise (although this is only visible at 200% and completely invisible at print size on an ISO200 image).
It occurred to me that as your data for the same model camera and lens gives me very good results, it would be possible to build up a database that could be populated by users, so that over time you could select your camera, lens, focal length and aperture and get a close match to the radius (and even the deconvolution kernel). The two pictures I checked were at f2.8 (a flower) and f7.1 (a landscape), whereas your sample data was at f5.6 - but the deconvolution still worked very well with both.
Cool, isn't it? And that is merely Capture sharpening in a somewhat crude single deconvolution pass. The same radius can be used for more elaborate iterative deconvolution algorithms, which will sharpen the noise less than the signal, thus producing an even higher S/N ratio, and restore even a bit more resolution.
A point sample takes a single point on the bell-shaped Gaussian blur pattern at the center of the pixel and uses that for the kernel cell. However, our sensels are not point samplers, but area samplers. They will integrate all light falling within their area aperture to an average. This reduces the peakedness of the Gaussian shape a bit, as if averaging all possible point samples inside that sensel aperture with a square kernel. The size of that square sensel kernel is either 100% (assuming a sensel aperture that receives light from edge to edge, like with gap-less micro-lenses), or a smaller percentage (e.g. to simulate a complex CMOS sensor without micro-lenses with lots of transistors per sensel, leaving only a smaller part of the real estate to receive light). When you use a smaller percentage, the kernel's blur pattern will become narrower and more peaked, and less sharpening will result, because the sensor already sharpens (and aliases) more by its small sampling aperture.
That's correct, as you will find out, the amount of blur is not even all that different between lenses of similar quality, but it does change significantly for the more extreme aperture values. That's completely unlike the Capture sharpening gospel of some 'gurus' who say that it's the image feature detail that determines the Capture sharpening settings, and thus they introduce halos by using too large radii early in their processing. It was also discussed here (http://www.luminous-landscape.com/forum/index.php?topic=76998.msg617613#msg617613).
It's a revelation for many to realize they have been taught wrong, and the way the Detail dialog is designed in e.g. LR doesn't help either (it even suggests starting with the Amount setting before setting the correct radius, and it offers no real guidance as to the correct radius, which could be set to a more useful default based on the aperture in the EXIF). We humans are pretty poor at eye-balling the correct settings because we prefer high contrast, which is not the same as real resolution. It's made even worse by forcing the user to use the Capture sharpening settings of the Detail panel for Creative sharpening later in the parametric workflow, which seduces users to use a too large radius value there, to do a better Creative sharpening job.
You mentioned before that doing the deconvolution in the frequency domain is much more complex, which it no doubt is, but would it be worth it? I'm thinking of the possibility of (at least partially) removing noise, for example. How would you boost the S/N ratio using a kernel?
I take it then that with a 1DsIII you would want to use a fill factor of maybe 80%, whereas a 7D would be 100%? I ask because I have both cameras :).
I think I've been lucky (or perhaps it's that I hate oversharpened images), but I've always set the radius and detail first with the Alt key pressed (the values always end up with a low radius - 0.6, 0.7 typically, and detail below 20) and I then adjust the amount at 100% zoom - and it's very rare that I would go over 40, normally 20-30. That has meant that I haven't judged the image on the look (at that stage of the process, at any rate) .... more by chance than by intent.
Regarding FocusMagic - the lowest radius you can use is 1 going in increments of 1. That seems a high starting point and a high increment ... or am I mixing apples and oranges?
Strictly speaking, conversion to and back from the Fourier space (frequency domain), is reversible and produces a 100% identical image. A deconvolution is as simple as a division in frequency space, where in the spatial domain it would take multiple multiplications and additions for each pixel, and a solution for the edges, so it's much faster between the domain conversions.
The difficulties arise when we start processing that image in the frequency domain. Division by (almost) zero (which happens at the highest spatial frequencies) can drive the results to 'infinity' or create non-existing numerical results. Add in some noise and limited precision, and it becomes a tricky deal.
The S/N ratio boost is done through a process known as regularization, where some prior knowledge of the type of noise distribution is used to reduce noise at each iteration, in such a way that the gain of resolution at a given step exceeds the loss of resolution due to noise reduction. It can be as simple as adding a mild Gaussian blur between each iteration step.
You probably have a better eye for it than most ..., hence the search for an even better method.
One would think so, but we don't know exactly how that input is modified by the unknown algorithm they use. Also, because it probably is an iterative or recursive operation, they will somehow optimize several parameters with each iteration to produce a better fitting model. Of course one can first magnify the image, then apply FM (at a virtual sub-pixel accurate level), and then down-sample again. That works fine, although things slow down due to the amount of pixels that need to be processed.
The only downside to that kind of method is that the resampling itself may create artifacts, but we're not talking about huge magnification/reduction factors; maybe 3 or 4 is what I occasionally use when I'm confronted with an image of unknown origin and I want to see exactly what FM does at a sub-pixel level. Also, because regular upsampling does not create additional resolution, the risk of creating aliasing artifacts at the down-sampling stage is minimal. The FM radius to use scales nicely with the magnification, e.g. a blur width of 5 for a 4x upsample of a sharp image.
This is presumably why the macro example I posted (ImageJ) adds noise to the deconvolution filter, to avoid division by 0. So the filter would be a Gaussian blur with a radius of around 0.7 (in your example), with noise added (which is multiplication by high frequencies (above Nyquist?)).
I’m talking through my hat here, needless to say :). But it would be interesting to try it … and ImageJ seems to provide the necessary functions.
So would you then apply your deconvolution kernel with radius 0.7 (say, for your lens/camera), then blur with a small radius, say 0.2, repeat the deconvolution with the same radius of 0.7 ... several times? That sort of thing?
Well, it’s partly interest, but also … what’s the point of all of this expensive and sophisticated equipment if we ruin the image at the first available opportunity?
So if you wanted to try a radius of 0.75, for example, you would upscale by 4 and use a radius of 3 ... and then downscale back by 4? What resizing algorithms would you use? Bicubic I expect?
I have a couple of other questions (of course!!).
Regarding raw converters, have you seen much difference between them, in terms of resolution, with sharpening off and after deconvolution (a la Bart)? With your 1Ds3, that is, as I expect the converters may be different for different cameras.
Second question: you mentioned in your post on Slanted Edge that using Imatest could speed up the process. I have Imatest Studio and I was wondering how I can use it to get the radius? One way, I guess, would be to take the 10-90% edge and divide by 2 … but that seems far too simple! I’m sure I should be using natural logs and square roots and such! Help would be appreciated (as usual!).
Yes, the addition of noise is a crude attempt to avoid division by zero, although it may also create issues where there were none before.
The issue with that is that the repeated convolution with a given radius will result in the same effect as a single convolution with a larger radius. And the smaller-radius denoise blur will also accumulate into a single larger-radius blur, so there is more that needs to be done.
Yes, upsampling with Bicubic Smoother, and down-sampling with Bicubic will often be good enough, but better algorithms will give better results.
The slanted edge determinations depend on the raw converter that was used. Some are a bit sharper than others. Capture One Pro, starting with version 7, does somewhat better than LR/ACR process 2012, but RawTherapee with the Amaze algorithm is also very good for lower-noise images.
Actually it is that simple, provided that the Edge Profile (=ESF) has a Gaussian based Cumulative Distribution Function shape, in which case dividing the 10-90 percent rise width in pixels by 2.5631 would result in the correct Gaussian sigma radius. Not all edge profiles follow the exact same shape as a Gaussian CDF, notably in the shadows where veiling glare is added, and not all response curves are calibrated for the actual OECF, so one might need to use a slightly different value.
Well, how about adding the same noise to both the image and to the blur function - and then doing the deconvolution? That way you should avoid both division by 0 and other issues, I would have thought?
So then, for repeated convolutions you would need to reduce the radii? But if so, on what basis, just guesswork?
Any suggestions would be welcome. In the few tests I've done I'm not so sure that upsampling in order to use a smaller radius is giving any benefit (whereas it does seem to introduce some artifacts). It may be better to use the integer radius and then fade the filter.
Do you think these differences are significant after deconvolution? Lr seems to be a bit softer than Capture One, for example, but is that because of a better algorithm in Capture One, or is it because Capture One applies some sharpening? Which raises the question in my mind: is it possible to deconvolve on the raw data, and if so would that not be much better than leaving it until after the image has been demosaiced? Perhaps this is where one raw processor may have the edge over another?
Interesting ... how did you calculate that number?
There are many different ways to skin a cat. One can also invert the PSF and use multiplication instead of division in frequency space. But I do think that operations in frequency space are complicating the issues due to the particularities of working in the frequency domain. The only reason to convert to frequency domain is to save processing time on large images because it may be simpler to implement some calculations, not specifically to get better quality, once everything is correctly set up (which requires additional math skills).
There is a difference between theory and practice, so one would have to verify with actual examples. That's why the more successful algorithms use all sorts of methods (http://www.mathcs.emory.edu/~nagy/RestoreTools/IR.pdf), and adaptive (to local image content, and per iteration) regularization schemes. They do not necessarily use different radii, but vary the other parameters (RL algorithm (https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution), RL considerations (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3222693/)).
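For the curious, here is a bare-bones Richardson-Lucy sketch with the 'mild blur between iterations' style of regularization described earlier (nowhere near the sophistication of the linked implementations, and reg_sigma is just a hand-tuned assumption):

import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def richardson_lucy(observed, psf, iters=20, reg_sigma=0.0):
    observed = observed.astype(float)
    est = np.full_like(observed, observed.mean())       # flat starting estimate
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iters):
        denom = fftconvolve(est, psf, mode='same')
        ratio = observed / np.maximum(denom, 1e-12)      # guard against division by zero
        est = est * fftconvolve(ratio, psf_flipped, mode='same')
        if reg_sigma > 0:
            est = gaussian_filter(est, reg_sigma)        # the "mild blur" regularization step
    return est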
Maybe this thread (http://www.luminous-landscape.com/forum/index.php?topic=91754.0) offers better than average resampling approaches.
The differences between raw converter algorithms concern more than just sharpness. Artifact reduction is also an important issue, because we are working with undersampled color channels and differences between Green and Red/Blue sampling density. Capture One Pro version 7 exhibited much improved resistance to jaggies compared to version 6, while retaining its capability to extract high resolution. It also has a slider control to steer that trade-off for more or less detail. There is no implicit sharpening added if one switches that off on export. The Amaze algorithm as implemented in RawTherapee does very clean demosaicing, especially on images with low noise levels. LR does a decent job most of the time, but I've seen examples (converted them myself, so personally verified) where it fails with the generation of all sorts of artifacts.
The 10th and 90th percentiles of the cumulative distribution function (https://www.wolframalpha.com/input/?i=normal+distribution%2C+mean%3D0) are at approx. -1.28155 * sigma and +1.28155 * sigma, so the range spans approx. 2.5631 * sigma.
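A quick check of that factor, and of the rise-to-sigma conversion, in Python (valid only when the edge profile really follows a Gaussian CDF shape, as noted above):

from scipy.stats import norm

factor = norm.ppf(0.9) - norm.ppf(0.1)      # = 2 * 1.28155... = 2.5631
print(round(factor, 4))

def sigma_from_rise(rise_10_90_px):
    # Converts a measured 10-90% edge rise (in pixels) to a Gaussian sigma.
    return rise_10_90_px / factor

print(round(sigma_from_rise(1.8), 2))       # e.g. a 1.8 px rise gives a sigma of about 0.70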
I agree it's a bit of work, and the workflow could be improved by a dedicated piece of software that does it all on an image that gets analyzed automatically. But hey, it's a free tool, and it's educational.
(this shot was taken at 14.5K feet on Mauna Kea). Retaining the vibrance of the sky, while pulling detail from the backside of this telescope was my goal.
I'm happy to post the CR2 if anyone wants to take a shot.
PP
I've now had a chance to do a little more testing and I thought these results could be of interest.
I've compared capture sharpening in Lightroom/ACR, Photoshop Smart Sharpen, FocusMagic and Bart's Kernel with ImageJ on a focused image and on one slightly out of focus. I used Imatest Studio slanted edge 10-90%. Here are the results:
In the first set of results, ACR gave a much lower result with an Amount of 40. Increasing that to 50 made a big difference at the cost of slight halos. Smart Sharpen sharpens the noise beautifully :), so it really needs an edge mask (but with an edge mask it does a very good job).
Focus Magic gave the cleanest result with IJ not far behind. Any of these sharpening tools would do a good job of capture sharpening with this image (with edge masks for ACR and Smart Sharpen).
I think FocusMagic suffers from the integer radius settings; Smart Sharpen suffers from noise boosting; LR/ACR needs careful handling to avoid halos but the Masking feature is very nice. ImageJ/Bart is a serious contender. Overall, with care any of these sharpening/deconvoluting tools will do a good job, but FocusMagic needs to be used with care on blurred images (IMO, of course :)).
One question, out of curiosity: did you also happen to record the Imatest "Corrected" (for "standardized sharpening") values? In principle, Imatest does its analysis on linearized data, either directly from Raw (by using the same raw conversion engine for all comparisons) or by linearizing the gamma-adjusted data by a gamma approximation, or an even more accurate OECF response calibration. Since gamma-adjusted and sharpened (can be local contrast adjustment) input will influence the resulting scores, it offers a kind of correction mechanism to level the playing field more for already sharpened images.
I've compared capture sharpening in Lightroom/ACR, Photoshop Smart Sharpen, FocusMagic and Bart's Kernel with ImageJ on a focused image and on one slightly out of focus.
With the local contrast distortions of the scores in mind, the results are about as one would expect them to be, but it's always nice to see the theory confirmed ...
In the first set of results, ACR gave a much lower result with an Amount of 40. Increasing that to 50 made a big difference at the cost of slight halos. Smart Sharpen sharpens the noise beautifully. So it really needs an edge mask (but with an edge mask it does a very good job).
This explains why the acutance boost of mostly USM (with some deconvolution mixed in) requires a lot of masking to keep the drawbacks of that method (halos and noise amplification depending on radius setting) in check.
Focus Magic gave the cleanest result with IJ not far behind. Any of these sharpening tools would do a good job of capture sharpening with this image (with edge masks for ACR and Smart Sharpen).
With the added note of real resolution boost for the deconvolution based methods, and simulated resolution by acutance boost of the USM based methods. That will make a difference as the output size goes up, but at native to reduced pixel sizes they would all be useful to a degree.
We also need to keep in mind whether we are Capture sharpening or doing something else. Therefore, the avoidance of halos and other edge artifacts (like 'restoring' aliasing artifacts and jaggies) may require reducing the amount settings where needed, or using masks to apply different amounts of sharpening in different parts of the image (e.g. selections based on High-pass filters, or blend-if masks to reduce clipping). A tool like the Topaz Labs "Detail" plugin allows several of these operations (including deconvolution) to be done in a very controlled fashion, and not only does so without the risk of producing halos, but also while avoiding color issues due to increased contrast.
I think the issue (if we can call it that) with FocusMagic is that it has to perform its magic at the single pixel level, where we already know that we really need more than 2 pixels to reliably represent non-aliased discrete detail. It's not caused by the single digit blur width input (we don't know how that's used internally in an unknown iterative deconvolution algorithm) as such IMHO.
That's why I occasionally suggest that FocusMagic may also be used after first upsampling the unsharpened image data. That would allow it to operate on a sub-pixel accurate level, although its success would then also depend on the quality of the resampling algorithm.
As you know, I don’t much like the idea of capture sharpening followed by output sharpening, so I would tend to use one stronger sharpening after resize. In the Imatest sharpening example above, I would consider the sharpening to be totally fine for output – but if I had used a scale of 1 and not 1.25 it would not have been enough.
I don’t see what is to be gained by sharpening once with a radius of 1 and then sharpening again with a radius of 1.25 … but maybe I’m wrong.
I do have the Topaz plug-ins and I find the Detail plug-in very good for Medium and Large Details, but not for Small Details, because that just boosts noise and requires an edge mask (so why not use Smart Sharpen, which has a lot more controls?).
So, to your point regarding Capture or Capture + something else, I would think that the Topaz Detail plug-in would be excellent for Creative sharpening, but not for capture/output sharpening.
The InFocus plug-in seems OK for deblur, but on its own it’s not enough: however, with a small amount of Sharpen added (same plug-in) it does a very good job.
I agree, and it's easier when one only has to consider the immediate sharpening to be performed, and not something that may or may not be done much later in the workflow.
The only potential benefit is that one can use different types of sharpening, but in practice that does not make too much of a difference if the sharpening already was of the deconvolution kind, and not only acutance. Once resolution is restored, acutance enhancement goes a long way.
I have the same observations, but the noise amplification in "Detail" can be reduced with a negative "boost" adjustment. There is also a "Deblur" control that specifically does deconvolution at the smallest pixel level, instead of the more Wavelet oriented spatial frequency ranges boosts.
The "Deblur" control might work for deconvolution based Capture sharpening, especially if one doesn't have other tools.
Output sharpening is a whole other can of worms, because viewing distance needs to be factored in, as well as some differences in output media. However, not all matte media are blurry. On the contrary, some are quite sharp despite a reduced contrast and/or surface structure. Even canvas can be really sharp, and surface structures can be quite different. I've had large canvas output done at 720 PPI, FM deconvolution sharpened at that native printer output size, and the results were amazing.
I clearly need to have a good look at the Topaz sharpening options :) - so far I haven't used Topaz much at all for anything, but it seems like there's some quite good stuff there.
I take it you just used FM deconvolution on its own, without any further output sharpening?
What do you do if your image is a bit out-of-focus? Do you first correct for the base softening due to the AA filter etc., and then correct for the out-of-focus, or do you attempt to do it in one go?
There are only that many hours in a day, one has to prioritize ..., which is why I like to share my findings and hope for others to do the same.
What I find useful is to reduce all 3 (small, medium, large) details sliders to -1.00, and then in turn restore one slider at a time to 0.00 or more to see exactly which detail is being targeted. The Boost sliders can be reduced for less effect (I think it targets based on the source level of contrast of the specific feature size). Boosting the small details also increases noise, so reducing the boost will reduce the amplification of low contrast noise, while maintaining some of the higher contrast small detail.
The color targeted Cyan-Red / Magenta-Green / Yellow-Blue luminance balance controls are also very useful for bringing out detail or suppressing it, because many complementary colors do not reside directly next to each other.
Yes, all that was required was 2 rounds of FM deconvolution sharpening with different width settings at the final output size, because the original was already very sharp in the limited DOF zone. One round for the upsampling, and another for the finest (restored) detail.
In that case I probably would need too large a "blur width" setting, or several, and thus do a mild amount at original file size, and another after resampling. Of course my goal is to avoid blurred originals ..., and I usually succeed (I do lug my tripod or a monopod around a lot).
They’ve really gone slider-mad here! I can see that the Small Details Boost may be useful in toning down noise introduced by the Small Details adjustment, but I don’t see any reason to use the Small Details adjustment at all as the InFocus filter seems to me to do a better job.
The Medium and Large adjustments are a bit like USM with a large and very large radius, respectively.
But what is very nice with the Topaz filter is the ability to target shadows and highlights.
OK … this is where I have a problem/don’t understand. If I understand you correctly, you used FM first to correct your original (already nicely focused) image to restore fine detail (lost by lens/sensor etc). Then you upsampled and used FM again to correct the softness caused by the upsampling. Why not leave the original without correction, upsample, and then use FM once?
Whatever softness is in the original image will be upsampled so the deconvolution radius will have to be increased by the same ratio as the upsampling, then you add a bit more strength, to taste, to correct for any softness introduced by the upsampling.
Well, not exactly. The Small details adjustment is adjusting the amplitude of 'small feature detail'. Small is not defined as a fixed number of pixels but rather small in relation to the total image size. InFocus instead deconvolves and optionally sharpens more traditionally, based on certain fixed blur dimensions in pixels.
Not exactly. You can upsample an unsharpened image, and apply 2 deconvolutions with different widths at the output size. So e.g. an upsample to 300% might require a blur width of 4 or 5, but can be followed with one of 1 or 2 (with a lower amount).
Yes, the original optical blur is scaled to a larger dimension, but may be diffraction dominated or defocus dominated. That would lead to different PSF requirements. FocusMagic may be clever enough to optimize either type of blur, but I'm not sure that would take the same blur width settings. In addition, the resizing will also create some blur, of yet another kind. There is a good chance that these PSFs will cascade into a Gaussian looking combined blur, but sometimes we can do better by the above mentioned dual deconvolution at the final size.
Cheers,
Bart
Hi pp,
Well, all I've done with your image is to apply FocusMagic to it ... and some tonal adjustments in Lightroom. Your image has color differences which I haven't tried to match. The vertical lines in your image are very clean - but the rest of the image is very soft ... which is a tradeoff, IMO.
Be interesting to get some views on which is the cleaner result :).
(http://www.irelandupclose.com/customer/LL/TestImage.jpg)
(You can right-click on the image to see it full-size)
Well-taken shot, btw!!
Robert
Bart forgive the slightly off topic question, but what is the structure in your picture?
Hi Robert--
Wow--FM looks to be a gem of a tool. Compared to RT, I think your result has a bit more definition, especially on the guardrail that encircles the telescope. I also see that the weather vanes on the top look a bit more defined as well. Also, the vertical lines on the rear of the building look good too.
Is there any chance of posting an uncropped version? I'd like to see what the detail looks like in the lower portion of the image, especially in the shadow/noise areas.
Also, what did you do to embed the full size image that can be viewed by right-click?
Nice job and thanks for the render!
PP
The InFocus plug-in seems OK for deblur, but on its own it’s not enough: however, with a small amount of Sharpen added (same plug-in) it does a very good job. Here’s an example...
Apart from the undershoot and slight noise boost (acceptable without an edge mask IMO) it’s pretty hard to beat a 10-90% edge rise of 1.07 pixels! (This is one example of two-pass sharpening that’s beneficial, it would seem :)).
Re: Topaz Detail: The Small details adjustment is adjusting the amplitude of 'small feature detail'. Small is not defined as a fixed number of pixels but rather small in relation to the total image size. InFocus instead deconvolves and optionally sharpens more traditionally, based on certain fixed blur dimensions in pixels.
Hi Robert,
I haven't been able to read all of it but you have covered a lot of excellent ground and come a long way in this thread, good for you and thank you for doing it. There was a recent thread around here of a gentleman who was able to undo a fair amount of known blur using an FT library, I wonder if any of that can be used by us non-coders.
For my landscapes I typically use InFocus in its Estimate mode (Radius 2, Softness 0.3, Suppress 0.2) for capture sharpening, sometimes followed by a touch of Local Contrast at low opacity. That seems to take care of the small to medium range detail quite well. If I see any squigglies from InFocus I mask those out. Imho one of the limitations we are running into is that we are deconvolving based on a Gaussian PSF, which is not necessarily representative of the camera system's actual intensity distribution.
But along these lines, since you are playing with Imatest, I have this consideration for you: a good blind guess at what deconvolution radius to use for a guessian :) PSF is that which would result in the same MTF50 as the MTF50 produced by the edge when measured off the raw data. In other words, a good guess at the radius to use for deconvolution is (excuse the Excel notation)
StdDev/Radius = SQRT(-2*LN(0.5))/2/PI/MTF50 pixels
For example, if when you fed the edge raw data to Imatest it returned an MTF50 of 0.28 cy/px, a good guess at the gaussian radius to use for deconvolution would be 0.67 pixels.
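For anyone who wants to check the arithmetic, the same rule of thumb translated from the Excel notation into Python:

import math

def sigma_from_mtf50(mtf50_cy_per_px):
    # Gaussian sigma whose MTF drops to 50% at the measured MTF50 frequency.
    return math.sqrt(-2 * math.log(0.5)) / (2 * math.pi * mtf50_cy_per_px)

print(round(sigma_from_mtf50(0.28), 2))     # -> 0.67, matching the worked example above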
Jack
But along these lines, since you are playing with Imatest, I have this consideration for you: a good blind guess at what deconvolution radius to use for a guessian :) PSF is that which would result in the same MTF50 as the MTF50 produced by the edge when measured off the raw data. In other words, a good guess at the radius to use for deconvolution is (excuse the Excel notation)
StdDev/Radius = SQRT(-2*LN(0.5))/2/PI/MTF50 pixels
Pretty close to Bart's method! 0.56 by Bart, 0.54 by you .. and 1.0 Bart, 0.9 you :)
Excellent then. You can read the rationale behind my approach here (http://www.strollswithmydog.com/what-radius-to-use-for-deconvolution-capture-sharpening/).
Jack
Thanks Jack - very interesting and a bit scary!
I thought I would check out what happens using Bart's deconvolution, based on the correct radius and then increasing it progressively, and this is what happens:
(http://www.irelandupclose.com/customer/LL/Base-1p06.jpg) (http://www.irelandupclose.com/customer/LL/Base-4.jpg)
The left-hand image has the correct radius of 1.06, the one at the right has a radius of 4. As you can see, all that happens is that there is a significant overshoot on the MTF at 4 (this overshoot increases progressively from a radius of about 1.4).
The MTF remains roughly Gaussian unlike the one in your article … and there is no sudden transition around the Nyquist frequency or shoot off to infinity as the radius increases. Are these effects due to division by zero(ish) in the frequency domain … or to something else?
Jack's model is purely mathematical, and as such allows to predict the effects of full restoration in the frequency domain. However, anything that happens above the Nyquist frequency (0.5 cycles/pixel) folds back (mirrors) to the below Nyquist range and manifests itself as aliasing in the spatial domain (so you won't see it as an amplification above Nyquist in the actually sharpened version, but as a boost below Nyquist).
Also, since the actual signal's MTF near the Nyquist frequency is very low, there is little detail (with low S/N ratio) left to reconstruct, so there will be issues with noise amplification. MTF curves need to be interpreted, because actual images are not the same as simplified mathematical models (simple numbers do not tell the whole story, they just show a certain aspect of it, like the spatial frequency response of a system in an MTF, and a separation of aliasing due to sub-pixel phase effects of fine detail).
Jack takes a medium response (MTF50) as the pivot point on the actual MTF curve, and from the corresponding MTF (at that point) of a pure Gaussian blur function he calculates the required sigma. In principle that's fine, although one might also try to find a sigma that minimizes the absolute difference between the actual MTF and that of the pure Gaussian over a wider range. Although it's a reasonable single-point optimization, maybe MTF50 is not the best pivot point; maybe e.g. MTF55 or MTF45 would give an overall better match, who knows.
Cheers,
Bart
May I ask a question about a sharpening workflow for my Tango drum scanner? Should I leave sharpening turned on in the Tango and then would there be any need for the capture sharpening stage or turn it off in the Tango software then use the capture sharpening stage?
However, imo the application of those knobs comes too early in the process, especially when the MTF curve is poorly behaved. There is no point in boosting frequencies just to cut them back later with a low pass: noise is increased and detail information is lost that way. On the contrary the objective of deconvolution should be to restore without boosting too much - at least up to Nyquist.
So why not give us a chance to first attempt to reverse out the dominant components of MTF based on their physical properties (f-number, AA, etc.) and only then resort to generic parameters based on Gaussian PSFs and low pass filters? At least take out the Airy and AA, then we'll talk (I am talking to you Nik, Topaz and FM).
Jack
My approach is also trying to fit a Gaussian (Edge-Spread function) to the actual data, but does so on two points (10% and 90% rise) on the edge profile in the spatial domain. That may result in a slightly different optimization, e.g. in case of veiling glare which raises the dark tones more than the light tones, also on the slanted edge transition profile. My webtool attempts to minimize the absolute difference between the entire edge response and the Gaussian model. It therefore attempts to make a better overall edge profile fit, which usually is most difficult for the dark edge, due to veiling glare which distorts the Gaussian blur profile. That also gives an indication of how much of a role the veiling glare plays in the total image quality, and how it complicates a successful resolution restoration because it reduces the lower frequencies of the MTF response. BTW, Topaz Detail can be used to adjust some of that with the large detail control.
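For anyone who wants to try the spatial-domain route themselves, here is a minimal sketch (my own, not Bart's webtool) that measures the 10-90% rise of a normalized edge profile and converts it to an equivalent Gaussian sigma; it assumes a clean, monotonically rising edge sampled at one-pixel steps. For a Gaussian ESF the 10-90% rise distance is about 2.563 times sigma.

import numpy as np

def rise_10_90(edge_profile):
    # edge_profile: 1-D dark-to-light edge, monotonic, 1 px sample spacing
    p = np.asarray(edge_profile, dtype=float)
    p = (p - p.min()) / (p.max() - p.min())   # normalize to 0..1
    x = np.arange(len(p))
    x10 = np.interp(0.1, p, x)                # linear interpolation of the crossings
    x90 = np.interp(0.9, p, x)
    return x90 - x10

def sigma_from_rise(rise_px):
    return rise_px / 2.563                    # Gaussian ESF: rise = 2.563 * sigma

print(sigma_from_rise(1.8))                   # a 1.8 px rise -> sigma ~ 0.70 px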
Hi Jack & Bart,
Given an actual MTF, could you produce a deconvolution kernel to properly restore detail, to the extent possible? As opposed to assuming a Gaussian model, that is. Say this one here:
Hi Robert,
I could give you a better answer if I could see the raw file that generated that output.
But for a generic answer, assuming we are talking about Capture Sharpening in the center of the FOV - that is attempting to restore spatial resolution lost by blurring by the HARDWARE during the capture process - if one wants to get camera/lens setup specific PSFs for deconvolution one should imo start by reversing out the blurring introduced by each easily modeled component of the hardware.
PS BTW to make modeling effective I work entirely on the objective raw data (blurring introduced by lens/sensor only) to isolate it from subjective components that would otherwise introduce too many additional variables: no demosaicing, no rendering, no contrast, no sharpening. More or less in the center of the FOV. Capture obtained by using good technique, so no shake.
I'm not a mathematician, so I'm not 100% sure, but I don't think that is directly possible from an arbitrary MTF. An MTF has already lost some of the information required to rebuild the original data. It's a bit like trying to reconstruct a single line of an image from its histogram. That's not a perfect analogy either, but you get the idea. The MTF only tells us with which contrast certain spatial frequencies will be recorded, but it no longer has, for example, information about their phase (position).
That's why it helps to reverse engineer the PSF, i.e. compare the image MTF of a known feature (e.g. edge) to the model of known shapes, such as e.g. a Gaussian, and thus derive the PSF indirectly. This works pretty well for many images, until diffraction/defocus/motion becomes such a dominating component in the cascaded blur contributions that the combined blur becomes a bit less Gaussian looking. In the cascade it will still be somewhat Gaussian (except for complex motion), so one can also attempt to model a weighted sum of Gaussians, or a convolution of a Gaussian with a diffraction or defocus blur PSF.
So we can construct a model of the contributing PSFs, but it will still be very difficult to do with absolute accuracy, and small differences in the frequency domain can have huge effects in the spatial domain.
I feel somewhat comforted by the remarks of Dr. Eric Fossum (the inventor of the CMOS image sensor) when he mentions (http://www.dpreview.com/forums/post/54295244) that the design of things like microlenses and their effect on the image is too complicated to predict accurately, that one usually resorts to trial and error rather than attempt to model it. That of course won't stop us from trying ..., as long as we don't expect perfection, because that would probably never happen.
What we can do is model the main contributors, and see if eliminating their contribution helps.
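As a toy illustration of what 'reversing out' a modeled contributor could look like (not Bart's or Jack's actual implementation), here is a regularized inverse filter with an assumed Gaussian PSF; the damping constant k keeps the near-zero parts of the modeled MTF from amplifying noise, which is exactly the failure mode being discussed above.

import numpy as np

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def deconvolve(image, psf, k=0.01):
    # Wiener-style inverse filter; k is a crude noise/regularization constant
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H)**2 + k)   # tapers the boost where |H| is tiny
    return np.real(np.fft.ifft2(F))

# usage sketch: blur a random test image with sigma = 1.0, then restore it
rng = np.random.default_rng(0)
img = rng.random((128, 128))
psf = gaussian_psf(128, 1.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = deconvolve(blurred, psf, k=0.001)
print(np.abs(restored - img).mean())          # small for this noise-free toy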
For Imatest you need the print to have a contrast ratio of about 10:1 max (so Lab 9/90, say). You say that you need black on white - but of course with the paper and ink limits that's not achievable. Is it acceptable to you to apply a curve to the image to bring the contrast back to black/white?
The capture was reasonably well taken - however I did not use mirror lock-up and the exposure was quite long (1/5th second). ISO 100, good tripod, remote capture, so no camera shake apart from the mirror. The test chart is quite small and printed on Epson Enhanced Matte using an HPZ3100 ... so not the sharpest print, but as the shot was taken from about 1.75m away any softness in the print is probably not a factor. However, if you would like a better shot, I can redo it with a prime lens with mirror lock-up and increase the light to shorten the exposure.
Hi Robert,
I forgot to mention one little detail (I often do :-): in order to give reliable data the edge needs to be slightly slanted (as per the name of the MTF-generating method), ideally between 5 and 9 degrees, and near the center of the FOV. I only downloaded the second image you shared (F6p3) because I am not at home at the moment and my data plan has strict limits (the other one was 210MB). The slant is only 1 degree in F6p3 and I am again getting values too low for your lens/camera combo: WB Raw MTF50 = 1580 lw/ph, when near the center of the FOV it should be up around 2000. Could be the one degree. Or it could be that the lens is not focused properly. The Blue channel is giving the highest MTF50 readings while Red is way down - so it could be that you are chasing your lens' longitudinal aberrations down, not focusing right on the sensing plane :)
To give you an idea, the ISO 100 5DIII+85mm/1.8 @f/7.1 raw image here (http://www.dpreview.com/reviews/image-comparison?utm_campaign=internal-link&utm_source=mainmenu&utm_medium=text&ref=mainmenu) is yielding MTF50 values of over 2100 lw/ph. I consistently get well over that with my D610 from slanted edges printed by a laser printer on normal copy paper and lit by diffuse lighting. For this kind of forensic exercise one must use good technique (15x the focal length away from the target, solid tripod, mirror up, delayed shutter release) and either use contrast-detect focusing, or find peak focus manually (that is, take a number of shots around what is suspected to be the appropriate focus point, varying the focus ring monotonically and very slowly in between shots; then view the series at x00% and choose the one that appears sharpest). Another potential culprit is the target image source: if it is not a vector, the printing program/process could be introducing artifacts.
As far as the contrast of the edge is concerned I work directly off the raw data so it is what it is. MTF Mapper seems not to have a problem with what you shared, albeit using a bit of a lower threshold than its default. That was the case with yesterday's image as well.
Jack
Hi Robert,
I usually recommend at least 25x the focal length, therefore the shooting distance is a bit too short for my taste (or the focal length too long for that distance). This relatively short distance will make the target resolution more important. Also make sure you print it at 600 PPI on your HP printer. That potentially will bring your 10-90% rise distance down to better values. Some matte papers are relatively sharp but others are a bit blurry, so that may also play a role.
Cheers,
Bart
Thanks Bart ... I think you got around 1.8 pixels for the 10-90% rise with a 100mm macro, is that right? Is that the sort of figure I can expect or should it be significantly better than that?
Also the lighting and print contrast seem to be quite critical and I doubt either are optimal. This sort of thing is designed to do one's head in :'(
That 1.8 pixels rise is a common value for very well focused, high quality lenses. It's equal to a 0.7 sigma blur which is about as good as it can get.
The slanted edges on my 'star' target go from paper white to pretty dark, to avoid dithered edges (try to print other targets for a normal range with shades of gray). One can get even straighter edges by printing them horizontal/vertical, and then rotating the target some 5-6 degrees when shooting them. The ISO recommendations are for a lower contrast edge, but that is to reduce veiling glare and (in camera JPEG) sharpening effects. With a properly exposed edge the medium gray should produce an approx. R/G/B 120/120/120, and paper white of 230/230/230, after Raw conversion. It also helps to get the output gamma calibrated for Imatest instead of just assuming 0.5, or use a linear gamma Raw input.
Do not use contrast adjustment to boost the sharpness, just shoot from a longer distance.
Cheers,
Bart
Just messed around a bit more and one thing that clearly makes quite a difference is the lighting. For example, with the light in one direction I was getting vertical 2.02, horizontal 1.87; in the other direction the figures reversed completely; with light from both directions I got 2.06/2.06 on both horizontal and vertical (no other changes). I remember Norman Koren telling me to be super-careful with the lighting.
I doubt that I would get as good as this in the field, so I wonder, Jack, if you could explain why getting an optimally focused image is useful for your modelling ... because it's pretty tricky to achieve!
You are right, Robert. Roger Cicala of lensrentals.com says that a difference of about 10% in MTF50 is barely noticeable, and I tend to agree. The reason for going the extra distance when attempting to determine the parameters for Capture Sharpening (recall: capture sharpening = restore sharpness lost during the capture process = camera/lens hardware dependent) is that otherwise we cannot 'see' them, and unless someone at Canon obliges with the figures, we have to guesstimate.
For instance, a key one is the strength of the AA filter. I assume that the 1DsIII has an AA filter in a classic 4-dot beam-splitting configuration like the Exmor-sensored cameras I am more familiar with. Since most such AAs cause a shift of about +/- 0.35 pixels, we should be able to see a zero around there in the relative MTF curve (in cycles/pixel it is 0.25/offset):
(http://i.imgur.com/8XCFB96.png)
So we know that the A7s AA appears to be about +/- 0.363 pixels in strength, and if we wanted to attempt to remove its effects through deconvolution we would have a good estimate of the shape and size of its PSF. However, if the spatial resolution information is buried in a morass of lens-induced blur we are not going to be able to find what we seek. Heck, it might be that the Canons do not have a 4-dot beam splitter, or that it is a lot less strong, in which case all bets are off (the slanted edge method becomes exponentially unreliable past Nyquist) :(
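For what it's worth, the arithmetic of that zero under the simple two-dot model I am reading into Jack's description (PSF = two impulses at +/- the offset, so the 1-D MTF is |cos(2*pi*f*d)|) works out like this:

import numpy as np

def aa_mtf(f_cyc_px, d_px):
    return np.abs(np.cos(2 * np.pi * f_cyc_px * d_px))

for d in (0.35, 0.363):
    f_zero = 0.25 / d                          # first zero of the cosine
    print(f"offset +/-{d} px -> zero at {f_zero:.3f} cy/px, MTF there = {aa_mtf(f_zero, d):.1e}")
# 0.35 px -> 0.714 cy/px, 0.363 px -> 0.689 cy/px, both above Nyquist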
Jack
PS Since I suspected that my D610 (like the A7 and other Exmors of its generation) has AA action in one direction only, I figured that if I divided the MTF obtained from a vertical edge by the one obtained from a horizontal edge in the same capture, the result should be the missing element = the MTF of the AA filter. And lo and behold, just as theory predicted (ignore the stuff after the zero; there is too much noise and too little energy there for the division of two small numbers to be meaningful - it was quite a noisy image to start with):
(http://i.imgur.com/iDciom7.png)
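Here is a toy version of that division trick with made-up curves (not Jack's data), just to show that when the AA acts in one direction only, the ratio of the two edge MTFs hands back the AA term in a noise-free case; with real data the region near and past the zero is useless, as Jack says.

import numpy as np

f = np.linspace(0.01, 0.5, 50)                      # cycles/pixel, up to Nyquist
lens_and_pixel = np.exp(-2 * np.pi**2 * 0.7**2 * f**2) * np.abs(np.sinc(f))
aa = np.abs(np.cos(2 * np.pi * f * 0.363))          # AA in one direction only

mtf_edge_seeing_aa = lens_and_pixel * aa            # e.g. the vertical edge
mtf_edge_without_aa = lens_and_pixel                # the perpendicular edge
recovered_aa = mtf_edge_seeing_aa / mtf_edge_without_aa

print(np.allclose(recovered_aa, aa))                # True for this noise-free toy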
I take it that the MTFs for diffraction (at a fixed aperture), pixel aperture and the AA filter are all constant? Also that diffraction and pixel aperture MTFs can be quite accurately estimated?
That leaves the unknowns, which are the lens blur and AA filter. So, if you take two shots, the only difference being a slight change in the lens blur ... could you not then work out the AA from that? Notice that I say you, because I certainly could not! And no doubt it's not possible to do or you would be doing it already.
I had a quick look at MTF Mapper and it seems very good. If you could give me your command arguments I could use it to check my image before sending it to you.
Yes, and the more you narrow the wavelength of the light the better; that's why I like to work with the green CFA raw channel only, which for some Nikon cameras has a half-power bandwidth of around 540nm +/- 50ish.
Lens blur is the hardest of the simple components to model because it depends on so many variables (even if we concentrate on the center of well corrected lenses: at least SA, CAs and defocus): the model changes significantly and non-linearly with even small incremental variations. So far I have concentrated on modeling well corrected prime lenses with small amounts of defocus in the center of the FOV. By small I mean less than half a wavelength of optical path difference (Lord Rayleigh's criterion for in-focus was 1/4 lambda OPD). It has the finickiest theory and it is the 'plug' in my overall model: diffraction, pixel aperture and AA are set according to their physical properties and camera settings. The solver then varies OPD to get the best fit to the measured data. There is always a residual value because no lens is ever perfect. I have never seen it at less than 0.215 lambda, which corresponds to a lens blur diameter of about 5.3um (on a 2.4um-pitched RX100 III).
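To make the cascade idea concrete, here is a much-simplified sketch of multiplying component MTFs (diffraction, pixel aperture, AA, plus a Gaussian stand-in where Jack actually fits an OPD-based defocus term); the pitch, f-number, wavelength and AA offset are example values, not anyone's measured figures.

import numpy as np

def diffraction_mtf(f_cyc_px, wavelength_um=0.55, f_number=7.1, pitch_um=6.4):
    fc = pitch_um / (wavelength_um * f_number)      # diffraction cutoff, cycles/pixel
    x = np.clip(f_cyc_px / fc, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x**2))

def pixel_aperture_mtf(f_cyc_px, fill=1.0):
    return np.abs(np.sinc(f_cyc_px * fill))         # square pixel, 100% fill factor

def aa_mtf(f_cyc_px, offset_px=0.35):
    return np.abs(np.cos(2 * np.pi * f_cyc_px * offset_px))

def residual_lens_mtf(f_cyc_px, sigma_px=0.5):      # Gaussian stand-in, not OPD-based
    return np.exp(-2 * np.pi**2 * sigma_px**2 * f_cyc_px**2)

f = np.linspace(0, 0.5, 33)
system = (diffraction_mtf(f) * pixel_aperture_mtf(f)
          * aa_mtf(f) * residual_lens_mtf(f))
print(np.interp(0.25, f, system))                   # system MTF at 0.25 cy/px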
I don't have Imatest so perhaps you can set it up to do the same thing, and trust me, it would be much easier. MTF Mapper is excellent because it allows one to work directly on the green channel raw data, without introducing demosaicing blur into the mix. The author, Frans van den Bergh, is a very smart and helpful guy whose blog (http://mtfmapper.blogspot.it/2012/06/diffraction-and-box-filters.html) got me going on this frequency domain trip. On the other hand, it is an open-source command line program which is not as user friendly as commercial products. This is the way I use it; you may not want to once you realize what's involved :)
1) First create a TIFF of the raw data with dcraw -D -4 -T filename.cr2;
2) Open filename.tiff in a good editor and save a 400x200 pixel crop (horizontal edge; 200x400 for a vertical one) of the central edge you'd like to analyze in a file called, say, h.tif, making sure the top-left-most pixel of h.tif corresponds to a Red pixel in the original raw data (use RawDigger (http://www.rawdigger.com) for that)
3) run the command line "mtf_mapper h.tif g:\ -arbef --bayer green -t x", assuming that you are working in directory g:\ and x is the threshold (your last two images worked with x=0.5)
4) MTF Mapper produces a number of text files and Annotate.png: open mtf_sfr.txt in Excel using the data import function. There should be four lines with 65 values each. The first value of each line is the angle of the edge (ideally it should be somewhere between 5-10 degrees). The remaining 64 values are the MTF curve in 1/64th cycles/pixel increments, starting with 0 cy/px which clearly has an MTF value of 1. Choose the line that corresponds to the edge (see the Annotate.PNG file) and plot it.
Voila', that's the MTF curve of just the two green raw channels. Alternatively send me the file (one at a time please) and I'll do it for you - I've got batch files for most of this but they reflect how I work, call other programs and they are not easy to explain or set up if starting from scratch.
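If you would rather skip the Excel step, something like this should read and plot mtf_sfr.txt as Jack describes it (one edge angle followed by 64 MTF values per line, 1/64 cy/px apart); the file name and layout are taken from his post, so treat it as a sketch and adjust if your copy differs.

import numpy as np
import matplotlib.pyplot as plt

rows = np.atleast_2d(np.loadtxt("mtf_sfr.txt"))     # expects 4 rows x 65 columns
freqs = np.arange(64) / 64.0                        # 0 .. 63/64 cycles/pixel

for row in rows:
    angle, mtf = row[0], row[1:]                    # first value is the edge angle
    plt.plot(freqs, mtf, label=f"edge angle {angle:.1f} deg")

plt.axvline(0.5, linestyle="--")                    # Nyquist
plt.xlabel("cycles/pixel")
plt.ylabel("MTF")
plt.legend()
plt.show()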
As the image has been demosaiced, I don't see what difference it would make what color the pixel was (using Imatest, that is), but perhaps it does?
At this level, surely the light source would be pretty important? Higher frequency better??
Ah, but that's the point. It isn't demosaiced. The way I showed you it is just the two green raw channels straight off the sensor.
Jack
Specular highlights are your enemy. One way to get rid of them is to use matte paper, but that makes it more difficult to achieve high spatial frequencies on the target itself.
Camera motion is your enemy. Fast shutter speeds help (sometimes -- faster isn't always better). Stiff tripods help. But the thing that helps the most is short duration electronic flash in a dark room. I use the Paul Buff Einsteins, which can produce a t.1 below 100 usec when set up right.
Cutting thin black plastic with a paper cutter can sometimes produce a clean edge. If you can find die-cut plastic, even better.
Do you use a long exposure on the camera and use the flash only (so no shutter movement at all?). Sounds like a good trick!
On most cameras, with most lenses, it doesn't take that long of an exposure to let the first curtain vibrations die down. 1/25 with trailing curtain synch will usually do it. 1/8 would be even safer, if you don't want to run tests. The faster the shutter speed the more residual room light is allowable.
Jim
I don't see any way of doing this on the 1Ds3. I can change from 1st to 2nd curtain sync for flash of course, but there's no delay that I can change. Would either 1st or 2nd make any difference to camera shake? I wouldn't have thought so.
I thought that what you suggested is to photograph in a very dark room with a long exposure (say 3 seconds) and manually trigger the flash after a second or so ... in which case there would be no mirror lock-up needed and no issue with shutter vibration. Did I misunderstand you?
You can lock the mirror up on the Canon, right? You can use an electronic release or the self-timer to trip the shutter, right? If you can do that, the only vibration you have to worry about is the first shutter curtain. By using a longish exposure and trailing curtain synch, you can let the vibrations from the opening of the first curtain die down before the flash goes off.
Does it make a difference? It did for me with the Sony a7R, but it's got a particularly problematical shutter. If you can get your flash duration well under a millisecond, it's probably a "can't hurt, might help" thing.
Here's another thought. Does your camera have EFCS? Use that.
That works, too, but it's easier to let the camera trigger the flash at the end of the exposure with trailing curtain synch.
Jim
As far as I know the 1Ds3 doesn't have EFCS. Of course it has mirror lock-up etc, and I can trigger the camera remotely. But in the test shots I've done these seem to make little difference, so I wonder if shutter shake would be significant. It is a very heavy camera and I have a good tripod, so I suspect that the image softness I'm seeing has more to do with my lenses not being as good as they should be, and possibly even more to my test conditions (like the target print and lighting) not being too good.
You're probably right. As your lenses get better, you may want to revisit this issue.
http://blog.kasson.com/?p=4359
Jim
It's one thing to look at an image and another to see the MTF and edge rise on a test chart - but when both are telling you that you are getting as good resolution as is possible, well then it's not hard to be convinced that this is the way to go.
That's what this is all about, figuring out how to make the equipment deliver what it is supposed to - and no less.
I guess my question would be: if you can pretty accurately get the PSF just for the sensor and demosaicing, do you think there is a benefit in deconvolving the image for this first (before doing a guesstimated lens deblur)?
On the other hand now that I understand things a little better I think deconvolution plug-in designers could work a little harder at producing more flexible and controllable products. For instance, are we sure that the deconvolution PSF used in a DSLR with an old-style AA would be suitable as-is with just a different radius/strength on a brand spanking new one sans AA? I personally think not.