Luminous Landscape Forum

Raw & Post Processing, Printing => Digital Image Processing => Topic started by: Robert Ardill on August 07, 2014, 09:55:53 am

Title: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 07, 2014, 09:55:53 am
Hi,

I'm posting this with some trepidation as I expect a lot of disagreement.  But here goes!

My understanding of the 3-step sharpening proposed by Schewe et al. is: a. Capture Sharpening with an edge mask; b. Creative Sharpening to taste; c. Output Sharpening with no edge mask.

I would like to propose (no doubt others have before me, so perhaps I should say re-propose) an alternative, which I think has advantages.  And that is: a. Output Sharpen after any resizing, tonal/color adjustments etc; b. Creative Sharpening to taste.

I do have some empirical evidence to back this up, which I will come to in a moment.  But before that, my starting point is that sharpening should be minimized and that sharpening on top of sharpening should be avoided if at all possible.  The reason is simple: sharpening potentially damages the image.

So, on to the empirical side.

Here are the Photoshop layers I used to compare the different techniques:

(http://www.irelandupclose.com/customer/LL/sharpening-layers.jpg)

1. The bottom layer is the unsharpened image upsized by 2x.
2. The 2nd layer up is with Capture Sharpening applied in Lightroom and then resized x2.  
3. The 3rd layer up is the resized image Capture Sharpened using Smart Sharpen and an edge mask.
4. The 4th layer up is a single sharpen for output from the original upsized image (layer 1).
5. The 5th layer up (top layer) is the upsized image, first with Capture Sharpen, then with Output Sharpen.

IMO the capture sharpen after resize (layer 3) is clearly better than layer 2.  So my first conclusion is Resize first, Capture sharpen second.

In the top layer, Layer 5, I have added output sharpening to the capture-sharpened image in Layer 3 (the output sharpen filter is above the capture sharpen filter, so it is applied after the capture sharpen). In the layer below that (layer 4) I have output sharpened the original upsized image (layer 1) in one go, as you can see.  For both layers 4 and 5 I have used the same edge mask as in Layer 3, but lightened a bit to let more fine detail through.

There was too much haloing in Layer 5, so I softened the halos using the Smart Sharpen shadow and highlight fades (quite a large amount: 50% strength, 50% tonal width and a radius of 6 for both highlights and shadows).  I increased the amount of sharpening in Layer 4, not because I thought it needed it, but for direct comparison with Layer 5. So the amounts of output sharpening were different for Layer 4 and Layer 5 (more in Layer 4, as one would expect, to achieve a similar result to Layer 5).

My overall observation is that the same or a better result can be obtained with one-pass sharpening as with two, and that it is better to resize and then sharpen rather than capture sharpen and then resize.  The two-pass output-sharpened image (Layer 5) still had ugly black lines, especially along the boundary between the mountains and the sky, whereas these were absent in the one-pass sharpening. These lines would certainly appear in a print and wouldn't be acceptable to me.  To get rid of them would probably require a different sharpening algorithm (increasing the shadow fade didn't help, and reducing the radius wasn't an option as it was only at 2, which is probably the minimum for output sharpening at 300ppi).
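
For anyone who wants to repeat this kind of comparison outside Photoshop, here is a minimal Python sketch of the two routes, assuming Pillow is installed and a test file of your own (the file name, radii and percentages below are placeholders, not the Smart Sharpen settings I used):

from PIL import Image, ImageChops, ImageFilter

def usm(img, radius, percent):
    # Unsharp mask; Pillow's 'percent' plays the role of the amount slider
    return img.filter(ImageFilter.UnsharpMask(radius=radius, percent=percent, threshold=0))

src = Image.open("test.tif").convert("RGB")
w, h = src.size

# Two-pass route: capture sharpen at native size, upsize 2x, then output sharpen
two_pass = usm(src, radius=1, percent=100)
two_pass = two_pass.resize((w * 2, h * 2), Image.LANCZOS)
two_pass = usm(two_pass, radius=2, percent=100)

# One-pass route: upsize 2x first, then a single, stronger output sharpen
one_pass = src.resize((w * 2, h * 2), Image.LANCZOS)
one_pass = usm(one_pass, radius=2, percent=200)

# Bright pixels in the difference image mark where the two routes disagree (halos etc.)
ImageChops.difference(two_pass, one_pass).save("difference.png")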

Creative sharpening is possible with the output image (both for Layer 4 and Layer 5), simply by painting on the edge mask to add or remove sharpening.  

Whether or not there is more or less damage done using one approach over the other, from a workflow point of view the one-step sharpening is really simple and very easily automated.

This is a down-sampled crop of the test image I used (as you can see it has a good mix of very smooth skies and fine detail in the foreground):

(http://www.irelandupclose.com/customer/LL/Output-One-Pass-Image.jpg)

BTW, this was the one-pass sharpened image from Layer 4, with the sharpening dialed down for web viewing. All I did was to down-size the image and adjust the Smart Sharpen filter.

It would be very interesting if someone else tried this out.  I’ve tried to be as objective as possible, but that isn’t so easy!

Robert

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 07, 2014, 11:25:35 am
I would like to see you perform a comparison test of processing efficiency and results between all the stuff you propose here, and sharpening the same image - properly - using Photokit Sharpener Pro 2.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 07, 2014, 12:02:36 pm
Hi,

I'm posting this with some trepidation as I expect a lot of disagreement.  But here goes!

My understanding of the 3-step sharpening proposed by Schewe et al. is: a. Capture Sharpening with an edge mask; b. Creative Sharpening to taste; c. Output Sharpening with no edge mask.

I would like to propose (no doubt others have before me, so perhaps I should say re-propose) an alternative, which I think has advantages.  And that is: a. Output Sharpen after any resizing, tonal/color adjustments etc; b. Creative Sharpening to taste.

Hi Robert,

I have indeed suggested in other posts that, when upsampling, there may be benefits to postponing Capture sharpening. Otherwise one runs the risk of magnifying any sharpening artifacts and only making them more visible by blowing them up to a larger size. Also, with down-sampling, the addition of sharpening may increase the risk of creating aliasing artifacts. That's why I usually have a sharpening layer in my files that can be switched off before resampling.

Creative sharpening on the other hand is IMHO a bit of a misnomer, although one can use sharpening tools to achieve the effect. It's more a detail enhancement/local contrast adjustment process than really sharpening. Therefore it can be done before upsampling, although its effect may change a bit with output size and viewing distance. It can be a very processor-intensive operation, so large upsampled print files may take quite a while to be processed by the more advanced procedures.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 07, 2014, 12:55:40 pm
Creative sharpening on the other hand is IMHO a bit of a misnomer, although one can use sharpening tools to achieve the effect. It's more a detail enhancement/local contrast adjustment process than really sharpening.

Quote
Sharpening can be a creative tool. Sometimes we want to make the image sharper than it really was, to tell a story, make a point, or emphasize an area of interest.
Nudging the image towards reasonable sharpness early on helps the editing process, and gives you a solid floor to stand on when it's time to make creative sharpening decisions.
Creative Sharpening. I don't tell people how to do art, so the only real guideline I can give here is to use common sense.
Bruce Fraser who I believe coined the term.
http://www.creativepro.com/article/out-of-gamut-thoughts-on-a-sharpening-workflow
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 07, 2014, 01:33:09 pm
Bruce Fraser who I believe coined the term.
http://www.creativepro.com/article/out-of-gamut-thoughts-on-a-sharpening-workflow

Yes, he coined it 11 years ago if that article was the first time he mentioned it.

A lot has changed (for the better, I might add) with regard to tools and technology. One only has to look at what, e.g., Topaz Labs Detail can achieve, with preservation of color and luminance-targeted, halo-free adjustment of several sizes of detail (deconvolution of the finest detail is also possible). All optionally combined with very clever masking, and with separate controls for highlights, overall tones, and shadows.

I'm sure it would have been his wet dream, had it been available during his life.

I like the concept of Capture/Creative/Output sharpening (simple to remember, and each targets a particular stage in the workflow), but sharpening (real resolution enhancement, not boosting acutance) is not necessarily the same as detail enhancement (it's only a very small subset of the possibilities).

Also, a program like Qimage offers a non-halo generating type of sharpening, called Deep-Focus sharpening (DFS). That's another thing Bruce could only dream of, given the trouble he took (had to take) to avoid halos that were inherent to the old USM methods dating back to the film days.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Redcrown on August 07, 2014, 01:51:53 pm
I second what Bart says. I think I've used or tested every sharpening technique known in the past 10 years. But they are all obsolete for me with the adoption of Topaz Detail and Clarity. Sometimes in combination with complex luminosity masks or edge masks or manual masks, but not often.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 07, 2014, 02:02:50 pm
Yes, he coined it 11 years ago if that article was the first time he mentioned it.
Then why the misnomer**? His description seems clear to me. You guys can use whatever products or techniques you wish, but how is what Bruce wrote to define Creative Sharpening a misnomer? It is creative in its direction; it makes the image appear sharper for that aim.

**a name that is wrong or not proper or appropriate
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 07, 2014, 02:15:09 pm
Then why the misnomer**?

Hi Andrew,

I said 'a bit of a misnomer'. Sharpening is not the same as increasing acutance by boosting edge or local contrast. It only gives an impression of sharpness by fooling the human visual system.

Only mathematical techniques like deconvolution can restore actual sharpness, both visually and in objectively measurable terms. That's why NASA used techniques like Richardson-Lucy deconvolution to salvage the early Hubble Space Telescope images ...
Techniques like wavelet decomposition allow one to address various larger detail scales by boosting their weight, not by sharpening them.
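
(A minimal sketch of what deconvolution does, for anyone who wants to play with it: this assumes scikit-image and SciPy are installed, and uses a synthetic test where the PSF is known exactly, which real captures rarely give you.)

import numpy as np
from scipy.signal import convolve2d
from skimage import data, img_as_float
from skimage.restoration import richardson_lucy

# Synthetic test: blur a known image with a known PSF, then try to undo it
image = img_as_float(data.camera())
psf = np.ones((5, 5)) / 25.0                       # simple box PSF, for illustration only
blurred = convolve2d(image, psf, mode="same", boundary="symm")

# Richardson-Lucy iteratively estimates the un-blurred image, given the PSF
restored = richardson_lucy(blurred, psf, 30)       # 30 iterations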

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 07, 2014, 02:18:38 pm
Sharpening is not the same as increasing acutance by boosting edge or local contrast. It only gives an impression of sharpness by fooling the human visual system.
So what term do you propose be used to replace sharpening behind capture, creative and output?
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 07, 2014, 02:35:30 pm
So what term do you propose be used to replace sharpening behind capture, creative and output?

People can call it what they want, as long as they remember that (USM) 'sharpening' is but one of many (better) methods to visually enhance/subdue detail. They may call it Creative sharpening if they want; I do (but with a disclaimer).

Sharpening also suggests that increasing detail visibility is the only way to achieve one's creative vision. Reducing the visibility of structures can be equally important, to make the important detail stand out more. We also wouldn't necessarily call that blurring, it's more like de-emphasizing.

Noise reduction is similar, it's a targeted reduction of the visibility of noise (if possible without hurting resolution). That's also not blurring, although it can be used to crudely achieve the goal, sort of.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 07, 2014, 03:29:38 pm
Having had a bit of a snooze after lunch and a glass of wine :) ... I am now ready to rejoin the battle!

I have to say that I think the term 'Creative Sharpening' is one of these things that has led many of us down the garden path.  It's like, "OK, to sharpen properly, you have to a) Capture Sharpen, b) Creative Sharpen, c) Output Sharpen, and if you forget b) well you haven't sharpened properly, have you?".  So many of us (me included) would capture sharpen in Lightroom, 'creative' sharpen in Photoshop (really adding more USM for no good reason) and then sharpen yet again for web or print.

The fact is that if we forget all about 'Creative Sharpening' and concentrate on the image, we will quite naturally do things like blurring parts of the image in order to make other parts stand out, we'll remove distracting detail, etc., just as we automatically do when we paint.  Having fine detail on the whole image is fine if we want to do a resolution test, but it's hardly going to make a beautiful picture, in most cases.  We'll also add contrast, which will make parts of the image stand out, adjust colors for the same reason ... all of which could come under the term 'Creative Sharpening' because they have the same objective of bringing focus onto the important parts of the image ... but have nothing to do with sharpening.

This is not in any way to belittle Bruce or Jeff or Andrew or any of these guys' insights and wonderful work.  But things have moved on and it really isn't at all certain any more that it's advantageous to follow these 3 steps (or rather I should say these 2 steps, because creative sharpening really should at this point be taken out of the ladder).  The reason for my post was some testing I've done recently on QImage, and because of a comment by John here: http://www.luminous-landscape.com/forum/index.php?topic=92128.msg750983#msg750983.  It made me think, 'hmm, I've been doing this capture, output sharpening thing for years and it's been OK, but does it really make sense?', so I tried to remove capture sharpening and I was very happy to see that, actually, it is not a required step and that it may actually not be a good step at all ... and that's only using Smart Sharpen, which is really not a lot more than USM with a steering wheel.

I'm looking forward to doing some more testing with QImage and DFS ... which really does seem to be an amazing sharpening algorithm.  Bart keeps talking about deconvolution and it's interesting that no one seems to pick up on that (at least I haven't seen it, perhaps there's been lots of talk about it) - but it really is a key point in sharpening.  I don't know if the QImage Deep Focus Sharpening uses this sort of maths, but I have to say that one of the examples I posted seemed to be an almost perfect example: it restored a blurred square back to the original, almost perfectly.  If that is possible, then the notion of applying USM-type sharpening to 'repair' the blurring caused by lens, anti-aliasing filter etc., is almost criminal.

As for Topaz etc., ... seems I need to check out what's going on in the world these days!  For which I have to thank this forum.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 07, 2014, 03:29:54 pm
People can call it what they want, as long as they remember that (USM) 'sharpening' is but one of many (better) methods to visually enhance/subdue detail.
If there's a misnomer, it's that term, USM, which predates digital anything and was an analog darkroom process to produce the appearance of more sharpness. So I'd submit that sharpening photos refers to the perceived effect of the process on the photo, not the specific process itself.

Quote
Sharpening is not the same as increasing acutance by boosting edge or local contrast.
Is sharpening an image the technique, be it analog or digital, or the result of the technique as perceived by a viewer? I'd suggest it is the perceptual result which makes the image look sharper. At least considering the use of the term on images far before anyone was digitizing them.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 07, 2014, 03:32:58 pm
We'll also add contrast, which will make parts of the image stand out, adjust colors for the same reason ... all of which could come under the term 'Creative Sharpening' because they have the same objective of bringing focus onto the important parts of the image ... but have nothing to do with sharpening.
Does that selective and creative work make that area appear sharper? Creative blurring (any blurring) is different?
Both effects have been available to affect photos long before anything photographic was digital.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 07, 2014, 03:56:56 pm
Does that selective and creative work make that area appear sharper? Creative blurring (any blurring) is different?
Both effects have been available to affect photos long before anything photographic was digital.

It could be that the term 'sharpening' is one that we should start to drop.  I'm more a painter than a photographer, to be honest, and as a painter I would never think in terms of 'sharpening' my painting.  What I do is to use various techniques (composition being one, of course) to bring attention to parts of the painting and away from others ... but mostly it's a question of removing detail rather than adding detail.  So in photography perhaps the term 'Creative Blurring' would be just as valid as 'Creative Sharpening' (more so, probably).

I think that generally what I would think 'sharpening' means in photography is an attempt to restore lost detail.  So far, mostly, this has been achieved by a sort of flattery - it's a pretense only because, in fact, detail is lost rather than gained using techniques like USM.  However with newer techniques like deconvolution, it really is possible to restore apparently lost detail, if we know why the detail was lost (for example due to hand-shake).

So maybe we need to rethink our terminology.  It may lead to us becoming better photographers (and not just to 'sharper' photos).

Robert

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 07, 2014, 04:14:28 pm
Let us revert to fundamentals for a moment:

Two different concepts: focus and acutance.

Focus: A photo can be blurry from subject or camera movement, or because circles of confusion are visible due to D.O.F. limitations or poor focusing of the lens. These are focus problems. Deconvolution sharpening tools have been designed to recover image detail from such problems.
Acutance: the micro-contrast of lighter-to-darker edges between pixels. Acutance is reduced as a result of digital image processing at the capture, rendering, editing and printing stages. Bruce Fraser et al. analyzed all these issues and more in great depth and produced techniques and corresponding software for addressing them that haven't been fundamentally improved upon since their latest versions. For readers who want more background on this, Jeff Schewe's book on sharpening is the best and most comprehensive published resource I know to recommend.
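
To put the acutance side in formula terms: the acutance-style "sharpening" discussed here is essentially unsharp masking, sharpened = original + amount * (original - blurred). A minimal NumPy/SciPy sketch of that relationship (an illustration only: a single-channel float image is assumed, and mapping the Radius slider to a Gaussian sigma is just an approximation of what Photoshop does):

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.0, amount=1.0):
    # Acutance boost: add a scaled high-pass component back to the image
    blurred = gaussian_filter(img, sigma=radius)
    high_pass = img - blurred
    return np.clip(img + amount * high_pass, 0.0, 1.0)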

This discussion and the one in the other thread about QImage aren't always clear about what concept is at play: the focus concept or the acutance concept. Most of the digital imaging most people do these days is about the latter. And it is partly a matter of taste, partly a matter of credibility. If I were doing micro-photography I might want more detail on paper than I see in reality. For routine photography, the most natural appearance of detail corresponds with how I see it in the scene. As I've mentioned elsewhere before, if a photograph is meant to be sharp, it should look sharp but not sharpened. That is a fine distinction which I believe Photokit Sharpener 2 and Lightroom/ACR handle admirably; different people prefer different vendors' software - that's par for the course, but again, let us relate our preferences correctly to the concept. Tools designed primarily for acutance enhancement won't necessarily handle out-of-focus issues so admirably, because they are not dedicated deconvolution tools.  I use the tools I use because the benefit:cost ratio is very high. I'm not a techno-masochist; I just want good, credible results in a time-efficient manner.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 07, 2014, 04:59:57 pm
Let us revert to fundamentals for a moment:

Two different concepts: focus and acutance.

Focus: A photo can be blurry from subject or camera movement, or because circles of confusion are visible due to D.O.F. limitations or poor focusing of the lens. These are focus problems. Deconvolution sharpening tools have been designed to recover image detail from such problems.
Acutance: the micro-contrast of lighter-to-darker edges between pixels. Acutance is reduced as a result of digital image processing at the capture, rendering, editing and printing stages. Bruce Fraser et al. analyzed all these issues and more in great depth and produced techniques and corresponding software for addressing them that haven't been fundamentally improved upon since their latest versions. For readers who want more background on this, Jeff Schewe's book on sharpening is the best and most comprehensive published resource I know to recommend.

This discussion and the one in the other thread about QImage aren't always clear about what concept is at play: the focus concept or the acutance concept. Most of the digital imaging most people do these days is about the latter. And it is partly a matter of taste, partly a matter of credibility. If I were doing micro-photography I might want more detail on paper than I see in reality. For routine photography, the most natural appearance of detail corresponds with how I see it in the scene. As I've mentioned elsewhere before, if a photograph is meant to be sharp, it should look sharp but not sharpened. That is a fine distinction which I believe Photokit Sharpener 2 and Lightroom/ACR handle admirably; different people prefer different vendors' software - that's par for the course, but again, let us relate our preferences correctly to the concept. Tools designed primarily for acutance enhancement won't necessarily handle out-of-focus issues so admirably, because they are not dedicated deconvolution tools.  I use the tools I use because the benefit:cost ratio is very high. I'm not a techno-masochist; I just want good, credible results in a time-efficient manner.

Well, Acutance would be a good (and correct) term to use instead of Sharpness in these discussions.  I'm not sure about Focus though ... there are other reasons for loss of detail, for example the anti-aliasing filter, sensor noise, the analog to digital conversion, the demosaicing algorithm, resizing, etc., some or all of which can be corrected, at least to some extent, with techniques like deconvolution (but as I'm not an imaging scientist I have to defer to you guys for advice and information here).

I'm a bit tested-out at this stage: could you tell me something about how Photokit Sharpener 2 works?  In your experience that is, not from the product marketing info.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 07, 2014, 05:21:08 pm
there are other reasons for loss of detail, for example the anti-aliasing filter, sensor noise, the analog to digital conversion, the demosaicing algorithm, resizing, etc.,

I'm a bit tested-out at this stage: could you tell me something about how Photokit Sharpener 2 works?  In your experience that is, not from the product marketing info.

Robert

All those reasons are included in the wording I used above (ref. "digital image processing"), and much, but not necessarily all, of it is acutance-related.

The information about Photokit Sharpener on the PixelGenius website is very reliable. If you want a proper understanding of the underlying principles, as I said, nothing I know of beats the Schewe book. As for how well it works, Michael Reichmann reviewed it on this website when the product first appeared - you can locate that product review. It is accurate. I was using it from the time of that review until its principles were ported into Lightroom, where I use that same approach very successfully now. If you are asking me about my personal experience with it: highly recommended. But nothing beats testing it yourself. As we all know - in spades - different people have different taste in software. What floats my boat may not necessarily float yours, or for that matter Bart's. So I suggest once you have recovered from the present round of testing overload, give it a shot and see what you think.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 07, 2014, 05:47:49 pm
It could be that the term 'sharpening' is one that we should start to drop.  I'm more a painter than a photographer, to be honest, and as a painter I would never think in terms of 'sharpening' my painting.
Sharpening is a term that dates back to the analog film days. I've made USMs in the analog darkroom as an assignment in photo school, long before the word Photoshop existed. We were taught why the appearance of sharpness changed (due to changes of edge contrast), much like we understood what a grade 1 paper would do for an image compared to a grade 4, and its apparent visual effect on sharpness. USM may have produced something vastly different from the digital terms used here to express sharpness, but we made prints this way for one reason: to make the image visually appear sharper.

If one believes that the result of sharpening makes the image appear sharper, then the term and Bruce's explanation of Creative Sharpening is not a misnomer even a little. However, if the method used is a consideration not the result, then Creative Sharpening I would agree is a bit of a misnomer.

To me, the difference in the look of the final result is key, but I understand how some consider the route to that result important. USM in the darkroom made the image appear sharper, and that's why we went through this agonizingly slow process. FWIW, we were also taught how to build contrast masks in a similar fashion for printing Ciba. We understood this wasn't a process that had anything to do with sharpening or blurring, again focusing (no pun intended) on the results of the process on image contrast in a much different way.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 07, 2014, 06:06:17 pm
The information about Photokit Sharpener on the PixelGenius website is very reliable. If you want a proper understanding of the underlying principles, as I said, nothing I know of beats the Schewe book. As for how well it works, Michael Reichmann reviewed it on this website when the product first appeared - you can locate that product review. It is accurate. I was using it from the time of that review until its principles were ported into Lightroom, where I use that same approach very successfully now. If you are asking me about my personal experience with it: highly recommended. But nothing beats testing it yourself. As we all know - in spades - different people have different taste in software. What floats my boat may not necessarily float yours, or for that matter Bart's. So I suggest once you have recovered from the present round of testing overload, give it a shot and see what you think.

Actually, now that I've had a look at Photokit Sharpener (on the web, that is), I realize that I had version 1 quite a long time ago.  If I remember correctly, it essentially uses actions to create sharpening layers, using high-pass filtering, USM (or Smart Sharpen, perhaps) ... in other words Photoshop filters ... and in addition adjusts the effect using advanced blending (blend-underlying-layer sort of thing), different blend modes etc., and uses edge masks.  All very good, and with the advantage that the user can tune the effects by modifying layer opacity, and so on.

Then it works out things like sharpening levels required based on image resolution, printer type, paper type, image size etc. So if you're working with lots of different printer types, different media etc., it's a useful tool.

I worked on some sharpening tools like these with Uwe Steinmueller some years back ... but at the end of it I felt that understanding the basic techniques and using them directly is better (for me).  So unless there's something really special in Photokit Sharpener 2 (and Andrew or Jeff can surely enlighten me) I think I'll give it a pass.
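
For what it's worth, the 'do it directly' version is not a lot of code. This is just a generic sketch of the edge-masked sharpening idea in NumPy/SciPy (a single-channel float image is assumed), and has nothing to do with PK Sharpener's actual implementation:

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_masked_sharpen(img, radius=1.0, amount=1.5, mask_blur=2.0):
    # Edge mask: gradient magnitude, blurred and normalised to 0..1
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    mask = gaussian_filter(grad, sigma=mask_blur)
    mask = mask / (mask.max() + 1e-8)

    # Plain unsharp mask ...
    sharpened = img + amount * (img - gaussian_filter(img, sigma=radius))

    # ... blended back in only where the mask says there are edges
    return np.clip(img + mask * (sharpened - img), 0.0, 1.0)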

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 07, 2014, 06:30:01 pm
Yeah  - what's special about it is ease of use and high quality results. I'm just sharing my experience - for readers to use or not as they see fit.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 08, 2014, 06:00:10 am
For those who think that boosting acutance is as effective as restoration by deconvolution, even after digesting this thread (http://www.luminous-landscape.com/forum/index.php?topic=45038.100), I've attached 3 images. The first is a Gaussian (sigma=2) blurred star target.

The second image is that same target after restoration with deconvolution (using the Van Cittert algorithm, which would be less appropriate when the image would have contained noise).

The third is a Qimage DFS (radius 2/amount 450) attempt to get as close as possible, but compared against deconvolution (which takes a long processing time and can be less effective if the exact PSF is not known) that's not really a fair fight (although it does a commendable job, even with these extreme settings).
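
For anyone curious, the Van Cittert iteration itself is only a few lines. A sketch in Python (it assumes the blur really is a known Gaussian and the image is essentially noise-free, which is exactly the synthetic situation above; with noise it falls apart quickly):

import numpy as np
from scipy.ndimage import gaussian_filter

def van_cittert(blurred, sigma, iterations=50, beta=1.0):
    # Van Cittert: estimate += beta * (observed - blur(estimate)), repeated
    estimate = blurred.copy()
    for _ in range(iterations):
        estimate = estimate + beta * (blurred - gaussian_filter(estimate, sigma))
    return np.clip(estimate, 0.0, 1.0)

# e.g. restored = van_cittert(blurred_target, sigma=2, iterations=100)  # float array in [0, 1]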

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: PhotoEcosse on August 08, 2014, 06:05:07 am
I am finding this thread both informative and thought-provoking.

One of the most common criticisms of prints made by competition judges (not that they are necessarily the ultimate arbiters of good taste) is that they are "over-sharpened".

Ideally, I suspect, the main purpose of sharpening is to compensate for deficiencies in the digital sensor and in subsequent data-processing of the captured image. Thereafter, it becomes a question of creative intent or artistic taste.

What I am happy to learn from a thread such as this is that there is an armoury of tools available to me, and that I can select and use them according to my intentions for any particular file.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 08, 2014, 08:05:55 am
For those who think that boosting acutance is as effective as restoration by deconvolution,

Cheers,
Bart

This is a red herring. I, for one, think/hope I made the distinction between focus/blur and acutance issues clear enough to understand - as I mentioned - that they warrant different treatment with different tools. I'm not talking about "is as effective as" - I'm talking about aiming the right tool at the problem it is best adapted to resolve. Once readers accept I may have a point here, a lot of the discussion that's confusing these conceptually different targets of image correction can just as well evaporate. For those who are not techno-masochists and just want good results - easily - a humble suggestion: don't go to a dermatologist for a root canal: :-); use products designed for handling acutance to change image acutance; use products designed for blur (movement, focusing, DoF) to correct blur. Then it becomes sensible to make apples to apples comparisons of different software products designed for handling the same problems.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 08, 2014, 08:29:17 am
This is a red herring. I, for one, think/hope I made the distinction between focus/blur and acutance issues clear enough to understand - as I mentioned - that they warrant different treatment with different tools. I'm not talking about "is as effective as" - I'm talking about aiming the right tool at the problem it is best adapted to resolve. Once readers accept I may have a point here, a lot of the discussion that's confusing these conceptually different targets of image correction can just as well evaporate. For those who are not techno-masochists and just want good results - easily - a humble suggestion: don't go to a dermatologist for a root canal: :-); use products designed for handling acutance to change image acutance; use products designed for blur (movement, focusing, DoF) to correct blur. Then it becomes sensible to make apples to apples comparisons of different software products designed for handling the same problems.

Mark, I respectfully disagree. I'm using digital image processing tools, as they are commercially available to anyone willing to take a more, or less, complicated/slow/involved/automatic/whatever route to achieve his/her creative objective. The tools are all better at something different, even though they all try to achieve the same goal.

Some tools can achieve better results, but may be less convenient or even downright slow or complicated, which may, or may not, be a deciding factor in (not) using them. I'm just showing some alternatives; people can then take an informed decision that's also based on their particular preferences and requirements.

Personally, I use a different tool for when larger quantities of output must be generated (perhaps with repeat-orders), compared to one-off prints. I do know what the quality trade-offs are, and what is still acceptable. It's an informed choice, not one based on ignorance or unfamiliarity.

I've only shown that there is a difference between looking sharper (by using a very well respected tool for the creation of high output quality), and actually being sharper. That's all.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 08, 2014, 08:45:25 am
The tools are all better at something different, even though they all try to achieve the same goal.

Cheers,
Bart

Depends how you define "goal" and whether you want clear, articulated definitions that unpack the concepts - not to put too fine a point on this discussion about sharpness !!! :-)
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 08, 2014, 11:55:16 am
For those who are not techno-masochists and just want good results - easily - a humble suggestion: don't go to a dermatologist for a root canal: :-);

ROTFL, so true.
How many angels can dance on the head of a pin and how sharp are those pins? ;)
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 08, 2014, 01:35:03 pm

........... how sharp are those pins? ;)

OK, I was going to get into unsharp masking, but I won't go there; I think this issue has been sharpened to death   ;D
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: jrsforums on August 08, 2014, 06:26:15 pm
ROTFL, so true.
How many angels can dance on the head of a pin and how sharp are those pins? ;)

Is this the same as slicing the baloney too thin?  :-)
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 08, 2014, 09:03:03 pm
:-)

Nice to see some good-natured humour around here!
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 09, 2014, 07:01:42 am
Just in case we all get too touchy-feely, buddy-buddy ;D here's a further stirring of the nest.

I don't think I've entirely convinced anyone (myself included  :-\) that we can eliminate capture sharpening and go straight to output sharpening (I'm talking about edge-type sharpening here, not fixing the image with deconvolution or other techniques that I may not even have heard of). So I prepared this set of actions to test the hypothesis, and I hope you will try it out and give your feedback:

http://www.irelandupclose.com/customer/LL/SharpeningTest.atn

You should open an image into Photoshop from Lightroom with sharpening turned off.  There are two actions: one that resizes the image by 2 and one that resizes the image by 3.  The actions use the Camera Raw filter for all sharpening to keep things on an equal footing.

There are some Stops in the actions, just to let you know what's happening next.  Just press the continue button.

At the end there will be 4 layers.

The bottom layer (1) is the image with capture sharpening before the resize.

The second layer (2) is the image with capture sharpening after the resize.

The third layer (3) is layer 2 with output sharpening (so classical capture sharpen; resize; output sharpen).

The fourth layer (4) is the original image with output sharpening (no capture sharpening).

The basic premise is that if capture sharpening is applied with a radius of 1 before resize by x, then the capture sharpening after resize should have a radius of x.  If you compare layers 1 and 2 I think you will find that this is pretty well bang on.

The second premise is that if capture sharpening is applied with a radius of 1 and the image is then resized by x before output sharpening with a radius of x (which would give a nice sharp output but with minimum haloes ... the sort of radius I would use normally), then the same result can be obtained by doing a single output sharpening with a radius of x, but with a higher strength.  If you compare layers 3 and 4 I think you will find that there is not much between them.
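
If anyone wants a quick numerical check of that radius-scaling premise outside Photoshop, something like this rough Pillow sketch will do it (the file name and percent values are placeholders, and Pillow's UnsharpMask is of course not the Camera Raw sharpener):

import numpy as np
from PIL import Image, ImageChops, ImageFilter

src = Image.open("unsharpened.tif").convert("L")
w, h = src.size
x = 2  # resize factor

# Premise: sharpen at radius 1 and then resize by x ...
a = src.filter(ImageFilter.UnsharpMask(radius=1, percent=100, threshold=0))
a = a.resize((w * x, h * x), Image.LANCZOS)

# ... should look much like resizing by x and then sharpening at radius x
b = src.resize((w * x, h * x), Image.LANCZOS)
b = b.filter(ImageFilter.UnsharpMask(radius=x, percent=100, threshold=0))

diff = np.asarray(ImageChops.difference(a, b), dtype=float)
print("mean absolute difference (0-255 scale):", diff.mean())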

Logically, I would have thought that if an equal sharpening can be achieved in one go after resizing, it should be better to do it that way than to capture sharpen, resize, and then output sharpen.  Having said that, in the few tests I've done I can't see that one method damages the image more or less than the other.  What I do see is that there appears to be no advantage in capture sharpening first, using the sort of radii that I use.

If the output sharpening uses a much higher radius (criminal, but there you are, there are criminals out there!) then I think it would be necessary to do capture sharpening first.

Robert



Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Schewe on August 09, 2014, 04:53:52 pm
Logically, I would have thought that if an equal sharpening can be achieved in one go after resizing, it should be better to do it that way than to capture sharpen, resize, and then output sharpen.  Having said that, in the few tests I've done I can't see that one method damages the image more or less than the other.  What I do see is that there appears to be no advantage in capture sharpening first, using the sort of radii that I use.

What you are failing to consider is that capture sharpening is designed to be applied to your master image BEFORE you've actually determined at what size the image will be printed and output sharpening applied AFTER you've determined the size. That's the part of the process Bruce's sharpening workflow addresses. It allows one to disconnect the original size and the final print size. It's quite possible (depending on your original capture and print size) there may be no resampling needed...

And no, nothing you've written has given me any reason to alter my perception of Bruce's work in defining a sharpening workflow. Note, I may be a bit biased since I worked with Bruce to help design PhotoKit Sharpener and worked with the Lightroom engineers to incorporate PK's output sharpening in the LR Print module...

Personally, I really only use PK for Creative Sharpening and/or blurring (it has both). Most of the time I use Lightroom (or ACR) for capture sharpening (which I also consulted with the engineers to develop) and output sharpening.

Look, there are a lot of ways to skin a cat...use whatever way makes you happy. But for me, I want a repeatable and consistent way to get from capture to print without a lot of gyrations.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 09, 2014, 06:59:54 pm
What you are failing to consider is that capture sharpening is designed to be applied to your master image BEFORE you've actually determined at what size the image will be printed and output sharpening applied AFTER you've determined the size.

Well Jeff, I'm afraid you have missed my point entirely.  What I am suggesting is that you only sharpen once (after resizing, or not resizing, as the case may be).  

I'm using words like 'Capture Sharpening' and 'Output Sharpening' in order to fit in with the terminology that seems generally accepted.

I know you have your own workflow and you're happy with that ... and you have a vested interest in this way of doing things and thinking ... but why don't you try out the action I've posted?  It will only take you a few minutes.  I would be interested in your analysis of the results (after all, you have a world of experience in this area, and your opinion - based on empirical evidence, that is - would be much appreciated!).

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 09, 2014, 07:22:32 pm
Robert, I know a lot of people will laugh when I say this, but Jeff can be a bit shy about tooting his own horn, so I shall weigh in here. Very simply put, if you haven't done so already, you need to read Chapter Two of his sharpening book. It provides a splendid explanation of the technical factors underlying the multi-stage sharpening workflow he recommends. In a nutshell, the kinds of things that need to be "sharpened for" are not the same at the input versus the output stages; therefore the algorithms need to be custom-tailored for each situation and they need to build on each other. That's the essence of the approach, and between Bruce, Jeff, and the others in the Pixelgenius group, they have spent eons of time developing and testing algorithms appropriate to each context. Having read what I have and worked with the various approaches I've tried over the years, I would be very skeptical that a one-pass approach could be systematically superior - perhaps with some photos at some resolution by happenstance, but not systematically.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 09, 2014, 09:47:29 pm
I know you have your own workflow and you're happy with that ... and you have a vested interest in this way of doing things and thinking ... but why don't you try out the action I've posted?
Because of the workflow you propose. Is one output sharpening or whatever you describe optimal for ink jet, screen, halftone dot, contone output? I can't see how it could be, as each output device requires a different degree and handling of the sharpening. And it's resolution dependent. The same devices receiving a 1000x1000 pixel file need different treatment than if they are 10Kx10K. A sharpening workflow is output and resolution agnostic up until the point you know what size and device you'll output sharpen for.

Think of it this way. Say we have the best ICC profile for an Epson 3880 for Luster paper. Now change printer technology, paper, inks etc. How good is that one profile that worked so well for Luster? It isn't.

If you capture sharpen at the native resolution of the capture device, or after sampling up, that's one step. But you might need to change the size considerably, as well as the output device technology. One size doesn't fit all ideally. If you size and sharpen based on the output device, and that sharpening is also based on the initial capture sharpening, you have a pretty flexible sharpening workflow.

Or we could go back to the days when film was scanned and output in CMYK for a specific size and press condition. That workflow worked quite well. Until you found you needed to also output a 4x5 on a film recorder. Then the initial size and color space, and sharpening too, were far less than optimal for that secondary use. 'Scan once, use many' was a newer workflow in the old days when desktop imaging evolved. I believe Bruce saw that as being a far more flexible workflow and probably based his ideas on sharpening in a similar way.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Schewe on August 09, 2014, 09:57:05 pm
I know you have your own workflow and you're happy with that ... and you have a vested interest in this way of doing things and thinking ... but why don't you try out the action I've posted?

Well, because it doesn't fit in with my workflow. I guess you missed the part about capture sharpening in ACR/LR and output sharpening in LR.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 10, 2014, 06:40:47 am
Well, because it doesn't fit in with my workflow. I guess you missed the part about capture sharpening in ACR/LR and output sharpening in LR.

Well at least Andrew gave a reason for why he believes this approach is wrong ... all you're doing is saying you can't be bothered to check it out because it doesn't fit in with your workflow.

Because of the workflow you propose. Is one output sharpening or whatever you describe optimal for ink jet, screen, halftone dot, contone output? I can't see how it could be, as each output device requires a different degree and handling of the sharpening. And it's resolution dependent. The same devices receiving a 1000x1000 pixel file need different treatment than if they are 10Kx10K. A sharpening workflow is output and resolution agnostic up until the point you know what size and device you'll output sharpen for.

If you capture sharpen at the native resolution of the capture device, or after sampling up, that's one step. But you might need to change the size considerably, as well as the output device technology. One size doesn't fit all ideally. If you size and sharpen based on the output device, and that sharpening is also based on the initial capture sharpening, you have a pretty flexible sharpening workflow.


Of course one output sharpening isn't optimal for all papers and printers.  If it was then we would just have one button with no settings and we would all be happy little piggies.

Equally, one sharpening isn't optimal for all camera/lens/sensor combinations - especially as some have AA filters and others don't.

I think the 2-step approach (capture sharpen followed by output sharpen) is an excellent idea and has a lot of flexibility (and it makes sense for a program like PK Sharpener to be based on this approach as it caters for many different media and technologies).  

But in my case I use inkjet printers and I never want to output sharpen with a radius of more than 2 or 3 (for 'creative' sharpening, maybe, but that's another story).  On capture sharpen I will never use a radius of more than 1.  So the question I had was this: why capture sharpen with a radius of 1, then output sharpen again with a radius of 1 (for an image that has not been upsized)? Why not just sharpen once?  Then, say I upsize by 2x ... what radius should I use for output sharpen?  Well, a radius of 2 seems about right.  What about upsizing by 3?  Well, a radius of 3 seems about right.  So then the question is: if I capture sharpen with a radius of 1 and upsize by a factor of x (where x could be 1) is there an advantage in capture sharpening with a radius of 1 and then output sharpening with a radius of x, or can I just leave out the capture sharpening and sharpen once with a radius of x?

If the capture sharpening was actually fixing a flaw in the image then it would be a no-brainer: of course you would capture sharpen.  But it isn't: all it's doing is masking the flaw, and in doing so it is damaging the image.  So if you could leave out that step, whose damage would only be magnified by any subsequent resizing, that would (at least in theory) be a good idea.

Well, in the few tests I've done, it seems to me that, providing you keep the sort of ratio of radii I have mentioned, there appears to be no better sharpening from the 2-step approach.  I also do not see that the file is damaged more by the 2-step approach (a little more haloing, but you really need to zoom in to pixel level to see it).  So in my view it's a matter of choice.

However ... that is being VERY careful with the sharpening.  If someone over-sharpens at the Capture Sharpening stage (which I expect is very common) then the 2-step approach will be worse (easily verified).  Which isn't to say that we can't also mess up the sharpening in one step, needless to say!

Anyway, it's no big deal ... all it really points out to me is that a) we need to be very careful with sharpening, and b) I really hope to see some deconvolution image correction software soon, because then we wouldn't be having this conversation.

And ... more generally ... I don't think it's a bad thing to challenge the orthodox teachings from time to time.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 10, 2014, 07:37:40 am
Robert, I know a lot of people will laugh when I say this, but Jeff can be a bit shy about tooting his own horn, so I shall weigh in here. Very simply put, if you haven't done so already, you need to read Chapter Two of his sharpening book. It provides a splendid explanation of the technical factors underlying the multi-stage sharpening workflow he recommends. In a nutshell, the kinds of things that need to be "sharpened for" are not the same at the input versus the output stages; therefore the algorithms need to be custom-tailored for each situation and they need to build on each other. That's the essence of the approach, and between Bruce, Jeff, and the others in the Pixelgenius group, they have spent eons of time developing and testing algorithms appropriate to each context. Having read what I have and worked with the various approaches I've tried over the years, I would be very skeptical that a one-pass approach could be systematically superior - perhaps with some photos at some resolution by happenstance, but not systematically.

Well Mark, if Jeff tries out my little action I'll buy his book.

I had a quick look at http://www.peachpit.com/articles/article.aspx?p=1721157, as an example of capture sharpening by Jeff ... and I can tell you that I would NEVER sharpen by an amount of 60 prior to resizing, as Jeff does in this tutorial. Personally I think that's close to criminal.  Resize and then sharpen by an amount of 60, fine, not the other way around.  Since the example in the article uses Camera Raw, the sharpening is pre-resizing.

To be fair to myself, I too have spent a lot of time on this subject and I quite independently developed sharpening actions using Photoshop scripts that used very similar techniques to PK Sharpener (and also other techniques, for example using curves to generate sharpening/noise reduction masks).  A while back I even started selling the sharpening tools (and other Photoshop tools) under PixIntel.com (good name, isn't it? I still have the domain registration if someone's interested ... for a small fee  ;)) ... but I got involved in another project (and lots of people started producing this sort of software) so I dropped it.  Which isn't to say that I lost interest in the subject.

Anyway, this is not a pissing contest - but sometimes new insights can come from revisiting established beliefs.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 10, 2014, 09:40:37 am
........... but sometimes new insights can come from revisiting established beliefs.

Robert

Very often the case and much scientific progress has been made over the centuries on the basis of that very principle. BUT in this particular instance we are not dealing with beliefs. We are dealing with algorithms that emerged from very extensive testing done by people who seriously knew/know the subject matter, and I respect that. That said, there's little in this world that can't be improved upon, but I think scientific procedure pretty much requires that you identify and demonstrate lacunae in the approach you are challenging, as a basis for trying to achieve the same objective in a better way. That is why I recommended Jeff's book to you.

I looked at the article you referenced and I didn't see, even at 200% magnification the kind of damage you consider to be "criminal" at 50 or 60 Amount setting. Personally, I don't usually find it necessary or desirable to move much beyond 45, but it can happen if I have also added luminance noise reduction. However, give or take 10 or 15 points of Amount, there is something intervening called "taste". What you may consider "criminal" someone else may think is just sharp and snappy. It only gets criminal if anything has been destroyed, but if you use PK Sharpener (unflattened) or Lightroom, of course, everything is reversible and no pixels are destroyed.

Anyhow, reverting from the empirical to the principles, I do think it necessary to successfully challenge the correctness of the principles underlying the multi-stage sharpening workflow before accepting that a single-pass approach will be SYSTEMATICALLY superior. To do this, there needs to be a combination of both reasoning and a highly varied palette of extensive testing of the proposed alternative. I think it is incumbent on the author to do this research and share the results in a manner amenable to systematic evaluation.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 10:02:07 am
I looked at the article you referenced and I didn't see, even at 200% magnification the kind of damage you consider to be "criminal" at 50 or 60 Amount setting.
I haven't looked at this example, but it's kind of important to clarify that one setting, say Amount in USM, is hugely influenced by the other sliders, like Radius. The two teeter-totter between themselves, so one setting specified without the other is kind of like one hand clapping.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 10, 2014, 01:35:31 pm
Very often the case and much scientific progress has been made over the centuries on the basis of that very principle. BUT in this particular instance we are not dealing with beliefs. We are dealing with algorithms that emerged from very extensive testing done by people who seriously knew/know the subject matter, and I respect that. That said, there's little in this world that can't be improved upon, but I think scientific procedure pretty much requires that you identify and demonstrate lacunae in the approach you are challenging, as a basis for trying to achieve the same objective in a better way. That is why I recommended Jeff's book to you.

I looked at the article you referenced and I didn't see, even at 200% magnification the kind of damage you consider to be "criminal" at 50 or 60 Amount setting. Personally, I don't usually find it necessary or desirable to move much beyond 45, but it can happen if I have also added luminance noise reduction. However, give or take 10 or 15 points of Amount, there is something intervening called "taste". What you may consider "criminal" someone else may think is just sharp and snappy. It only gets criminal if anything has been destroyed, but if you use PK Sharpener (unflattened) or Lightroom, of course, everything is reversible and no pixels are destroyed.

Anyhow, reverting from the empirical to the principles, I do think it necessary to successfully challenge the correctness of the principles underlying the multi-stage sharpening workflow before accepting that a single-pass approach will be SYSTEMATICALLY superior. To do this, there needs to be a combination of both reasoning and a highly varied palette of extensive testing of the proposed alternative. I think it is incumbent on the author to do this research and share the results in a manner amenable to systematic evaluation.

There's nothing criminal about the sharpening in the article ... providing that this sharpening is the final sharpening (this is just my opinion, OK?).  Since the sharpening is the Capture sharpening (as per Schewe's workflow), it is the first pass before output sharpening. As such it's way too high, IMO.  "Snappy and Sharp" applies to the final sharpened image, printed or for web or for whatever medium, not for 'capture' sharpening (again, this is consistent with Schewe's workflow, I believe).

Regarding the empirical principles etc., etc., it seems to me that I have been doing a lot of testing and that I've offered not only examples, but actions for you guys to check out my suggestions.  But so far no one has actually given an example testing a one-pass sharpen against a two-pass sharpen and shown that the two-pass is clearly superior, and under what conditions.

And I have certainly not suggested that a one-pass approach is SYSTEMATICALLY superior ... or even that it is superior at all.  I personally think, both from tests and from logic, that it will be better in some cases and worse in others.  If that is true (which you can check out for yourself if you're interested) then surely that is a useful bit of information?  If you knew that, for upscaled images say, you are better off leaving the 'capture sharpening' until after the resize, and that doing this would probably give you some improvement in the quality of your output, would you not at least consider sharpening after resizing rather than before?

Part of the problem with this whole discussion is that some of you seem to think that I am criticizing an established workflow by the gurus of the industry (including Bruce Fraser, who is no longer with us sadly).  That may to some extent be the case, but in reality it boils down to 'do you sharpen before or after resizing?'.  The reason I say that is that if you sharpen after resizing then it gives you the opportunity (if it is appropriate) to sharpen only once.

Unless the 'before resizing' corrects flaws in the original image (due, for example, to the blurring caused by the anti-aliasing filter) there seems no logical reason to apply it before resizing, and good logical reason to apply it after.  In my testing (admittedly limited) I can see no benefit to applying it before.  Since almost all of our photos will be resized before output to the web or print, it then follows that if this is true, you are better off resizing and then sharpening.

So let's say that the conclusion is that two types of sharpening are typically beneficial with the current 'sharpening' technology: one with a small radius to 'recover' fine detail, and one with a higher radius, to give the output a boosted impression of sharpness and crispness.  I think this may well be so at times.  Then, I, personally, would resize, sharpen with a small radius and then sharpen with a higher radius.  This does not fit in with the Lightroom model, because Lightroom is strictly 1st phase sharpen, followed by resize, followed by (optional) 2nd phase sharpen.

If you do not sharpen in Lightroom, then, in Photoshop, you can use one-pass sharpen where appropriate, and two-pass sharpen if you think this would be beneficial.  There is nothing that I am aware of in PK Sharpen to prevent you from doing this, since it's a Photoshop set of actions.

I would have thought that one of the great benefits of a forum like this one is that it has many very experienced members, who could take a suggestion like this one and demonstrate that it is nonsense, or that it is sometimes good, or that it's the best thing since sliced bread (as a home baker I would have to question that analogy  :)).

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 01:39:11 pm
... all you're doing is saying you can't be bothered to check it out because it doesn't fit in with your workflow.
But in my case I use inkjet printers and I never want to output sharpen with a radius of more than 2 or 3 (for 'creative' sharpening, maybe, but that's another story).  
And that's the problem for some of us. We do output to many other devices than just an ink jet. Heck, output sharpening for display is pretty common for me. Ditto with halftone work. I simply can't have a workflow that is only directed to ink jet output.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 10, 2014, 02:27:28 pm
"Snappy and Sharp" applies to the final sharpened image, printed or for web or for whatever medium, not for 'capture' sharpening (again, this is consistent with Schewe's workflow, I believe).

Yes, that is what I meant.

Regarding the empirical principles etc., etc., it seems to me that I have been doing a lot of testing and that I've offered not only examples, but actions for you guys to check out my suggestions. But so far no one has actually given an example testing a one-pass sharpen against a two-pass sharpen and shown that the two-pass is clearly superior, and under what conditions.

My suggestion was that since you are proposing this option you should be the one doing the rigorous testing. I could download your action and use it, time permitting - I've got a very full plate - but to do it justice I would need to do a lot of very well-conceptualized testing, which unfortunately I don't have time for just now; recall this is all voluntary. I would feel more compelled to make the time if I saw obvious deficiencies in the LR sharpening workflow, but quite frankly I don't. Maybe that's also why there hasn't been a chorus of volunteers. All the more reason why the onus of proof of concept is on you.

And I have certainly not suggested that a one-pass approach is SYSTEMATICALLY superior ... or even that it is superior at all.  I personally think, both from tests and from logic, that it will be better in some cases and worse in others.  If that is true (which you can check out for yourself if you're interested) then surely that is a useful bit of information?  If you knew that for, let's say, images that are upscaled, that you are better off leaving the 'capture sharpening' to after the resize, and that if you did this you would probably have some improvement in the quality of your output, would you not at least consider sharpening after resize rather than before?

Robert, that is part of the problem with what you are proposing. Why not just use one approach systematically and be done with it? The toolset available in LR/ACR and Photokit Sharpener is designed to handle just about anything, systematically. After learning to handle that toolset very well, I doubt one would need or do much better with anything else - unless a deconvolution approach were needed to handle blur.

Part of the problem with this whole discussion is that some of you seem to think that I am criticizing an established workflow by the gurus of the industry (including Bruce Fraser, who is no longer with us sadly).  That may to some extent be the case, but in reality it boils down to 'do you sharpen before or after resizing?'.  The reason I say that is that if you sharpen after resizing then it gives you the opportunity (if it is appropriate) to sharpen only once.

I'm not part of that problem, nor am I sure who is. But for sake of greater clarity, I have no problem with criticizing established anything from anyone. It only depends on the substance of critique.

Unless the 'before resizing' corrects flaws in the original image (due, for example, to the blurring caused by the anti-aliasing filter) there seems no logical reason to apply it before resizing, and good logical reason to apply it after.  In my testing (admittedly limited) I can see no benefit to applying it before.  Since almost all of our photos will be resized before output to the web or print, it then follows that if this is true, you are better off resizing and then sharpening.

There are always flaws in the original image - as you say, the AA filter being one source of reduced acutance at the capture stage. If you read Chapter Two of Schewe's book you would see the point. Turning to output sharpening, in either case, be it PKS or LR, it is done as a function of pixel size. That happens on the fly in LR and on layers in PKS.

So let's say that the conclusion is that two types of sharpening are typically beneficial with the current 'sharpening' technology: one with a small radius to 'recover' fine detail, and one with a higher radius, to give the output a boosted impression of sharpness and crispness.  I think this may well be so at times.  Then, I, personally, would resize, sharpen with a small radius and then sharpen with a higher radius.  This does not fit in with the Lightroom model, because Lightroom is strictly 1st phase sharpen, followed by resize, followed by (optional) 2nd phase sharpen.

Yes, that is how Lightroom is designed to be normally used, because between the imaging scientists on the Adobe Camera Raw team (photographers who know image quality and are brilliant mathematicians on a world scale) and the highly experienced developers in Pixelgenius, it was their combined evaluation that this is indeed the optimal processing approach for most of what LR is designed to do. But it is not really "followed by, followed by...." from a user perspective, as you undoubtedly know. The user can dial any of this stuff into the metadata in any order and the application applies adjustments in the correct sequence under the hood. We don't need to worry about sequencing - part of the application's design philosophy - it relieves the users of fiddling with that which users definitely need not control.

If you do not sharpen in Lightroom, then, in Photoshop, you can use one-pass sharpen where appropriate, and two-pass sharpen if you think this would be beneficial.  There is nothing that I am aware of in PK Sharpen to prevent you from doing this, since it's a Photoshop set of actions.

Yes agreed, we can handle all this any way we want. As well in LR we have options about what sharpening to use or not use at either stage.

I would have thought that one of the great benefits of a forum like this one is that it has many very experienced members, who could take a suggestion like this one and demonstrate that it is nonsense, or that it is sometimes good, or that it's the best thing since sliced bread (as a home baker I would have to question that analogy  :)).

Robert, I agree - that is one of the benefits of this forum, and it is one of the better ones around. There are highly experienced people who visit here and help each other. You are clearly a serious professional and the "rules" within such a peer group don't call for proving a concept to be nonsense - unless of course it so obviously is. But I for one am not saying that. Others may not agree with my criteria in respect of a sharpening workflow - they happen to be very closely aligned with what Jeff said above, for whatever that is worth - "repeatable and consistent workflow without a lot of gyrations". I don't want to be bothered even thinking about whether an image deserves a one pass or a two pass solution. Once I know how to handle two pass properly, and understanding what I think I do about the underlying logic, I just do it. From my experience editing countless photographs over the 14 years that I've been doing digital imaging, whether from scanners or DSLRs, I think it's the most efficient and effective path to sharpness I've ever used. But weighing any value added against the sacrifice of a self-contained raw workflow, if you convincingly demonstrate a better mouse-trap in terms of both process and results, that's fine.



Robert - I've responded in italics above for ease of following the conversation.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 10, 2014, 03:14:52 pm
I'll reply to both of you (Mark and Andrew) at the same time as you are essentially making the same point, I think: you both want a consistent workflow that covers all of the different media and technologies you use, and you're happy with your current workflow, and believe that it is the best workflow for sharpening and resizing.

Well of course I have no issue with that at all.  And I’m sure I would be very resistant to someone telling me that I should change my workflow … without demonstrating that what I was doing was sub-optimal, at any rate.

So, to be clear, I’m not at all suggesting that anyone should change their workflow to cut out capture sharpening. 

My post was more like “Hey, what do you think, could it be that we can sharpen just the once? Could there be some benefit to sharpening as little and as few times as possible?”

The thing is … that leaving out a step in a workflow doesn’t mean that you have abandoned or changed your workflow.  You could think of it like this: “I’m going to stick with my workflow and apply capture sharpening as I always do, but for this image I’m going to set the sharpening strength to 0”.

As you are both, no doubt, working on many images a week, perhaps you could try it on one and see how you get on. 

Robert

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 10, 2014, 03:20:44 pm
Thanks Robert, that's clear and I could do that, but it would not be determinative unless it were thoroughly tested on a properly stratified sample of photographs. And in LR the only place one has real control over sharpening is at the capture stage.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 03:43:56 pm
It seems to me that capture sharpening is best done with deconvolution. Output sharpening is to reverse the bleeding of inks from the print process. Someone should be able to take an input image, print, scan, determine the PSF of their printer, then deconvolve that to get back close to the original. Once you know the printer PSF you can correct for it in all your output. Again, deconvolution is the tool.

The only thing left is creative sharpening, which, as Bart says, is mostly contrast/clarity adjustment.

A new Photoshop action doesn't seem to advance anything. The main claim to fame is, if I follow the thread, a 1-step sharpening process. IMO people are willing to put a lot of effort into their best images. The average throwaway image usually sits on a hard drive as a raw that never gets printed.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 10, 2014, 04:22:32 pm
It seems to me that capture sharpening is best done with deconvolution.

Yes, exactly ... the problem is we don't currently have the tools to do that (perhaps some of them, but the overall problem is quite complex because there are many reasons for the loss of detail in the image, and these all need to be known for the image to be properly 'fixed' with deconvolution).  Also, I wonder ... and perhaps Bart could answer this ... whether or not a deconvolution function would be any more effective than an unsharp mask, carefully tuned, for blurring due to the AA filter.

Output sharpening is to reverse the bleeding of inks from the print process.


Weeelll ... is that entirely true?  Emphasizing edges (beyond compensation for ink bleed) will create an impression of sharpness - and that isn't strictly 'creative sharpening' ... although of course there's no reason why you couldn't call it that.

A new Photoshop action doesn't seem to advance anything. The main claim to fame is, if I follow the thread, a 1-step sharpening process. IMO people are willing to put a lot of effort into their best images. The average throwaway image usually sits on a hard drive as a raw that never gets printed.

The Ps action I put a link to is just a test tool and it can advance things as it gives an easy mechanism to do some comparative sharpening tests.  

There is no claim to fame at all here - as I've said, it's just a question: "Is a 2-step sharpening process always necessary, given our currently available technology?".  I hardly think I'm the first person to have suggested a one-pass sharpening!!  No doubt this is what everyone did before the 2 or 3 pass sharpening came into vogue.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 10, 2014, 04:28:15 pm
It seems to me that capture sharpening is best done with deconvolution. Output sharpening is to reverse the bleeding of inks from the print process. Someone should be able to take an input image, print, scan, determine the PSF of their printer, then deconvolve that to get back close to the original. Once you know the printer PSF you can correct for it in all your output. Again, deconvolution is the tool.

The only thing left is creative sharpening, which, as Bart says, is mostly contrast/clarity adjustment.

A new Photoshop action doesn't seem to advance anything. The main claim to fame is, if I follow the thread, a 1-step sharpening process. IMO people are willing to put a lot of effort into their best images. The average throwaway image usually sits on a hard drive as a raw that never gets printed.

There's more to output sharpening than what you say here. Ref. Chapters Two and Three of Schewe's book on the subject.

I would be interested to see comparison testing you've done on the relative merits of deconvolution sharpening versus acutance recovery on a range of images having different frequency of detail, and I'd also be interested to know how much sweat-equity you had to put into determining the "printer PSF".
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 04:35:29 pm
The "Sweat equity" is the same for deconvolving your camera-lens system and for the printer. You need an artificial "star" based on how astronomers do it. Any bright light behind a pinhole will do. The pinhole has to be far enough away that your lens images it as a point (from 1 pixel to 3x3) . The spreading of the point is your PSF from the camera-lens system. It's even easier with the printer to begin with, you can make a graphics point. Of course you have to scan it so the work is about the same.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Schewe on August 10, 2014, 04:40:33 pm
Well at least Andrew gave a reason for why he believes this approach is wrong ... all you're doing is saying you can't be bothered to check it out because it doesn't fit in with your workflow.

Correct...and it's a rabbit hole I've lived in before–no need for me to go back down.

The part you are missing in capture sharpening in ACR/LR is that with the Detail slider moved to the right (above 25) you move towards employing deconvolution sharpening. With the Detail slider at 100, it's all deconvolution (similar to Smart Sharpen's Lens Blur removal). With the type of high frequency and high rez images I usually work on, ACR/LR does a very nice job of capture sharpening and the built in edge masking is quite good!

But, there's a flip side to the coin of sharpening and that's noise reduction. Noise reduction is best done prior to sharpening (and I believe the ACR/LR pipeline puts it there). Anytime you sharpen you need to apply a certain degree of noise reduction. Obviously, you need it with higher ISO shots, but you also need it whenever the base image is lightened, to help mitigate the increased shadow noise. Even with low ISO captures on my Phase One IQ180, I generally use a very gentle noise reduction to help smooth out any increased noise perceptibility due to sharpening–particularly with high Detail settings.

In terms of output sharpening, I tend to use image sizes that are near the native capture resolution. Sure I resize a tad to get the print size correct, but it's rather unusual for me to have to make big resolution jumps. This slight resizing is done before the LR output sharpening is done–which again is the reason I print from LR.

So, sorry, I'm just not all that interested in stepping backward into a single sharpening workflow...the tools we have now are just so much better than that.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 04:41:31 pm
There is a thread on comparisons with many images posted by Bart, Roger Clark (with links to his site clarkvision), myself and others.

Post a small crop out of any image. Chances are people using deconvolve methods can beat anything done in LR/ACR. I won't say Photoshop because it has a function for a custom PSF. Its biggest issue is it cannot iterate in that dialog box.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Schewe on August 10, 2014, 04:45:04 pm
Chances are people using deconvolve methods can beat anything done in LR/ACR.

I guess you don't remember that with the Detail slider moved to the right ACR/LR employs deconvolution similar to the Lens Blur function of Smart Sharpen. No, you can't change the PSF but you can blend the amount of deconvolution by adjusting the slider number. At 50 it's about 1/2 deconvolution and 1/2 halo suppression...then by adjusting the amount and radius (and masking) you have good control over the capture sharpening.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 04:46:04 pm
Chances are people using deconvolve methods can beat anything done in LR/ACR.
Yet the last post by Jeff indicates LR/ACR can do just that. Confused...
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 04:50:29 pm
I guess you don't remember that with the Detail slider moved to the right ACR/LR employs deconvolution similar to the Lens Blur function of Smart Sharpen. No, you can't change the PSF but you can blend the amount of deconvolution by adjusting the slider number. At 50 it's about 1/2 deconvolution and 1/2 halo suppression...then by adjusting the amount and radius (and masking) you have good control over the capture sharpening.

Ok, but there are many deconvolution methods. The one you pick (you may even use several) is based on the image. You DO need control. So based on that I still think it is highly likely I can beat any ACR/LR general function with known deconvolve routines.

I often use a custom PSF, an adaptive Richardson-Lucy, a VanCittert. Also wavelets.
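Since several deconvolution flavours get named here, a minimal sketch of the plain (non-adaptive) Richardson-Lucy update in Python/SciPy may help show what the iteration actually does; the adaptive and Van Cittert variants change the update rule but share this structure. This is a textbook illustration, not the code of any of the products mentioned in the thread:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=20):
    """Plain (non-adaptive) Richardson-Lucy deconvolution.
    observed: blurred image as a 2-D float array; psf: kernel summing to 1."""
    eps = 1e-12                      # avoid division by zero
    psf_mirror = psf[::-1, ::-1]     # flipped PSF for the correction step
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        # re-blur the current estimate and compare with the observation
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (reblurred + eps)
        # multiplicative update pushes the estimate toward the observation
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return np.clip(estimate, 0, None)
```

Libraries such as scikit-image ship a ready-made Richardson-Lucy as well, which is usually the easier route in practice.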
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 04:53:57 pm
Yet the last post by Jeff indicates LR/ACR can do just that. Confused...

See the next post then try studying the various methods.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 10, 2014, 04:54:24 pm
Can you post some examples or make some comparison files available for download?
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 04:57:19 pm
See the next post then try studying the various methods.
I see that your original text: "Chances are people using deconvolve methods can beat anything done in LR/ACR" needed further clarification.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 10, 2014, 04:58:06 pm
Yet the last post by Jeff indicates LR/ACR can do just that. Confused...

Me too  :).  I don't see how the Lr sharpening can use deconvolution since it doesn't know what the image has been convolved with.  Perhaps at this stage Bart or someone who knows about deconvolution could explain the maths for something relatively known like an AA filter?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: bjanes on August 10, 2014, 04:59:15 pm
Yes, exactly ... the problem is we don't currently have the tools to do that (perhaps some of them, but the overall problem is quite complex because there are many reasons for the loss of detail in the image, and these all need to be known for the image to be properly 'fixed' with deconvolution.  Still, it should be well possible to fix specific issues like the anti-aliasing blurring for each camera model).

That is correct. As our resident deconvolution guru, Bart van der Wolf, has pointed out, since a number of blur sources are convolved together in a typical image, a Gaussian PSF often works reasonably well for deconvolution. He has produced a PSF generator (http://bvdwolf.home.xs4all.nl/main/foto/psf/PSF_generator.html) from which custom PSFs can be generated. Bruce, Jeff, and the other PixelGenius workers have done some excellent work, but in the 21st century perhaps it is time to progress beyond the 50-year-or-older unsharp mask and the slightly newer high pass filter with an overlay blending mode. These processes are described in their sharpening books and I think are used in PhotoKit Sharpener and in some of the LR/ACR sharpening algorithms. Eric Chan has implemented some deconvolution algorithms for capture sharpening in ACR/LR, but there is no control of the PSFs used for the deconvolution.

Bill
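For readers who want to experiment with that idea, a Gaussian PSF is only a few lines of NumPy. This is simply the textbook formula, not Bart's generator; sigma (in pixels) is the one real choice to make:

```python
import numpy as np

def gaussian_psf(sigma, half_size=None):
    """Build a normalised 2-D Gaussian kernel for use as an approximate PSF.
    sigma is in pixels; the returned kernel sums to 1."""
    if half_size is None:
        half_size = int(np.ceil(3 * sigma))   # +/- 3 sigma captures ~99.7%
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    kernel = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()
```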
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 05:01:30 pm
I see that your original text: "Chances are people using deconvolve methods can beat anything done in LR/ACR" needed further clarification.

Fair enough.

Here is one page that shows several methods from one blurry image.

http://www.deconvolve.net/bialith/Research/BARclockblur.htm (http://www.deconvolve.net/bialith/Research/BARclockblur.htm)

corrected typo.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 05:02:08 pm
USM as it's done in Photoshop and elsewhere is 50 years old?

Maybe Eric and Bart can comment on the lack of control of PSFs.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 05:05:20 pm
Here is one page that shows several methods from one blurry image.
http://www.deconvolve.net/bialith/Research/BARclockblur.htm (http://www.deconvolve.net/bialith/Research/BARclockblur.htm)
Interesting and I'll dig into it, thanks.
So is this about making out of focus images appear in focus?
Quote
Deconvolution is a process designed to remove certain degradations from signals e.g. to remove blurring from a photograph that was originally taken with the wrong focus (or with camera shake).
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: bjanes on August 10, 2014, 05:08:33 pm
USM as it's done in Photoshop and elsewhere is 50 years old?

Maybe Eric and Bart can comment on the lack of control of PSFs.

Photoshop isn't 50 years old, but the unsharp filter is a digital implementation of the unsharp mask that has been used in the darkroom for quite some time.

Regards,

Bill
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 05:11:58 pm
Photoshop isn't 50 years old, but the unsharp filter is a digital implementation of the unsharp mask that has been used in the darkroom for quite some time.
I'm aware of that Bill, I actually did USM in the analog darkroom as a photo assignment in school, long before Photoshop.
I was under the impression that there was some algorithm or process that Photoshop (perhaps other software) conducted and just named UnSharp Mask, hence the question. Someone could build such an algorithm and call it USM or anything else; what similarity is there to the process we used in the analog darkroom, if any? Or was the name just applied in the old days of Photoshop to give us old-time analog darkroom users something we could understand?
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 05:14:35 pm
Interesting and I'll dig into it, thanks.
So is this about making out of focus images appear in focus?

I'm not going to get into a word play hijack. Out of focus is an extreme example. Any reason for capture sharpening is a reason to deconvolve. If you start with good tools/technique your need for capture sharpening may be minimal.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 05:19:23 pm
I'm not going to get into a word play hijack. Out of focus is an extreme example. Any reason for capture sharpening is a reason to deconvolve. If you start with good tools/technique your need for capture sharpening may be minimal.
Not meant to be wordplay; the question is about capture sharpening, which I'd expect would ideally be done on images that are not out of focus. A set of algorithms or processes that can do what you illustrate with out of focus images would indeed be very useful, no argument. The question is about current tools used on images that are not out of focus but need some work to overcome issues with digitizing the image in the first place. The statement made was: Chances are people using deconvolve methods can beat anything done in LR/ACR. If the image is out of focus or has camera shake, the examples you showed would be impressive and useful. Does that mean other methods, ones that don't make out of focus images in focus, fail to work when the rubber hits the road and final output sharpening and a print is produced?
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 10, 2014, 05:19:58 pm
Sometimes Wikipedia puts things quite nicely:

"For image processing, deconvolution is the process of approximately inverting the process that caused an image to be blurred. Specifically, unsharp masking is a simple linear image operation—a convolution by a kernel that is the Dirac delta minus a gaussian blur kernel. Deconvolution, on the other hand, is generally considered an ill-posed inverse problem that is best solved by nonlinear approaches. While unsharp masking increases the apparent sharpness of an image in ignorance of the manner in which the image was acquired, deconvolution increases the apparent sharpness of an image, but based on information describing some of the likely origins of the distortions of the light path used in capturing the image; it may therefore sometimes be preferred, where the cost in preparation time and per-image computation time are offset by the increase in image clarity.

With deconvolution, "lost" image detail may be approximately recovered—although it generally is impossible to verify that any recovered detail is accurate. Statistically, some level of correspondence between the sharpened images and the actual scenes being imaged can be attained. If the scenes to be captured in the future are similar enough to validated image scenes, then one can assess the degree to which recovered detail may be accurate. The improvement to image quality is often attractive, since the same validation issues are present even for un-enhanced images.

For deconvolution to be effective, all variables in the image scene and capturing device need to be modeled, including aperture, focal length, distance to subject, lens, and media refractive indices and geometries. Applying deconvolution successfully to general-purpose camera images is usually not feasible, because the geometries of the scene are not set. However, deconvolution is applied in reality to microscopy and astronomical imaging, where the value of gained sharpness is high, imaging devices and the relative subject positions are both well defined, and the imaging devices would cost a great deal more to optimize to improve sharpness physically. In cases where a stable, well-defined aberration is present, such as the lens defect in early Hubble Space Telescope images, deconvolution is an especially effective technique."

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 10, 2014, 05:20:56 pm
Fair enough.

Here is one page that shows several methods from one blurry image.

http://www.deconvolve.net/bialith/Research/BARclockblur.htm (http://www.deconvolve.net/bialith/Research/BARclockblur.htm)

corrected typo.

Yes, that page shows what is typically the key strength of the deconvolution approach. It allows one to retrieve usable information from an apparently hopelessly blurred photograph. This is particularly useful in forensics and espionage. How good it is for fine art photography is another matter. Some years ago I tested deconvolution software on photographs that simply needed the usual kind of acutance improvement for the usual reasons and I found the results ugly. And I tried numerous settings to make it look as good as I could, but it wasn't very promising. Now, maybe the software has improved a lot in the intervening period, but since then I haven't gone back to it because I haven't perceived any need to do so. Time is my scarcest resource, very valuable, and how I use it is therefore carefully selected.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 05:27:05 pm
Yes, that page shows what is typically the key strength of the deconvolution approach. It allows one to retrieve usable information from an apparently hopelessly blurred photograph. This is particularly useful in forensics and espionage. How good it is for fine art photography is another matter. Some years ago I tested deconvolution software on photographs that simply needed the usual kind of acutance improvement for the usual reasons and I found the results ugly. And I tried numerous settings to make it look as good as I could, but it wasn't very promising. Now, maybe the software has improved a lot in the intervening period, but since then I haven't gone back to it because I haven't perceived any need to do so. Time is my scarcest resource, very valuable, and how I use it is therefore carefully selected.

Everyone has to decide if a particular image is worth time.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 05:30:49 pm
Everyone has to decide if a particular image is worth time.
I suspect that is what Mark, myself and perhaps Robert would like to see. No question the examples you provided show a huge benefit working with actual out of focus images. Now how about those that are not so severely awful? In such a case, are the chances that people using deconvolve methods can beat anything done in LR/ACR?
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 10, 2014, 05:34:21 pm
Yes, exactly ... the problem is we don't currently have the tools to do that (perhaps some of them, but the overall problem is quite complex because there are many reasons for the loss of detail in the image, and these all need to be known for the image to be properly 'fixed' with deconvolution).  Also, I wonder ... and perhaps Bart could answer this ... whether or not a deconvolution function would be any more effective than an unsharp mask, carefully tuned, for blurring due to the AA filter.

Hi Robert,

Yes, deconvolution is perfect for Capture sharpening, and it's also very good for restoration of some of the upsampling blur, and yes these can also be combined if one wants to avoid upsampling any artifacts. For workflows involving Photoshop, I can recommend FocusMagic. What my analysis has shown is that Capture sharpening should be 'focused' on aperture-dictated blur (not image detail as suggested in 'Real World Image Sharpening'). The amount of blur (in the plane of best focus) is largely Gaussian in nature, due to the combination of several blur sources (which tends to combine into a Gaussian distribution), and varies with aperture.

Resampling (up/down) also creates blur due to averaging of pixels.
 
Quote
Weeelll ... is that entirely true?  Emphasizing edges (beyond compensation for ink bleed) will create an impression of sharpness - and that isn't strictly 'creative sharpening' ... although of course there's no reason why you couldn't call it that.

Pre-compensation for ink diffusion (also pretty Gaussian looking), or raster dots/lines, or to compensate for the low resolution of most displays, are the main areas of attention, but one can also optimize local detail for viewing distance if that is relatively fixed.

Quote
There is no claim to fame at all here - as I've said, it's just a question: "Is a 2-step sharpening process always necessary, given our currently available technology?".  I hardly think I'm the first person to have suggested a one-pass sharpening!!  No doubt this is what everyone did before the 2 or 3 pass sharpening came into vogue.

In many cases a one step sharpening approach is very well possible, especially for large format output and if the Capture sharpening tools are limited in quality (no deconvolution) with halo risk. For optimal one-step sharpening one may need to combine several Gaussian blur radii and produce a combined deconvolution kernel, but a single deblur operation already goes a long way.

Cheers,
Bart
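As a small aside on that last point about combining several Gaussian blur radii into one kernel, here is a rough NumPy sketch. The sigma values are made up for illustration; the only claim is the standard result that independent Gaussian blurs combine in quadrature, so one combined kernel can stand in for the chain:

```python
import numpy as np
from scipy.signal import fftconvolve

# Two independent Gaussian blur sources (e.g. capture blur and resampling blur);
# the sigma values here are purely illustrative.
sigma_capture, sigma_resample = 0.7, 1.1

def gaussian_kernel(sigma, half_size):
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return k / k.sum()

# Convolving the two kernels gives the combined blur to deconvolve in one pass...
combined = fftconvolve(gaussian_kernel(sigma_capture, 6),
                       gaussian_kernel(sigma_resample, 6), mode='full')
combined /= combined.sum()

# ...and for Gaussians the combined width follows the root-sum-of-squares rule:
sigma_combined = np.hypot(sigma_capture, sigma_resample)   # ~1.30 px here
```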
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 10, 2014, 05:34:38 pm
Sometimes Wikipedia puts things quite nicely:

Robert

Thank you for that Robert, and the bottom line one gets out of it is "horses for courses".

Very unclear to me that deconvolution tools are ideally suited to efficient and high quality workflows in "fine-art" photography. The onus is on those who propose them to demonstrate superiority in regard to both quality and efficiency. And while we are at it, let us not forget the need to define what we mean by "best" when we are talking about the quality of a sharpening outcome. Only when we agree on the criteria defining "best outcome" can we determine what is "best practice".
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 05:41:04 pm
I suspect that is what Mark, myself and perhaps Robert would like to see. No question the examples you provided show a huge benefit working with actual out of focus images. Now how about those that are not so severely awful? In such a case, are the chances that people using deconvolve methods can beat anything done in LR/ACR?

When I do go to print I usually like to have lots of pixels in the file. With deconvolution I feel Bart's 3x upsample recommendation is workable to get fair detail, good for printing, out of the image pixels. Anyone who thinks they can get close to optimal results from a raw can throw out a challenge with the file. I have offered that before with raws for the standard Imaging Resource reviews which include many raw shots. I doubt anyone can get sharp 3x upsampled images with ACR/LR.

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 10, 2014, 05:44:29 pm
When I do go to print I usually like to have lots of pixels in the file. With deconvolution I feel Bart's 3x upsample recommendation is workable to get fair detail, good for printing, out of the image pixels. Anyone who thinks they can get close to optimal results from a raw can throw out a challenge with the file. I have offered that before with raws for the standard Imaging Resource reviews which include many raw shots. I doubt anyone can get sharp 3x upsampled images with ACR/LR.



You're the one throwing out the challenges and expressing various views about the purported superiority of one approach over the other. I suggested above that you demonstrate the validity of your hypotheses by posting the comparative results of your own research.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 05:47:10 pm
Thank you for that Robert, and the bottom line one gets out of it is "horses for courses".

Very unclear to me that deconvolution tools are ideally suited to efficient and high quality workflows in "fine-art" photography. The onus is on those who propose them to demonstrate superiority in regard to both quality and efficiency. And while we are at it, let us not forget the need to define what we mean by "best" when we are talking about the quality of a sharpening outcome. Only when we agree on the criteria defining "best outcome" can we determine what is "best practice".

There is no need to reinvent the wheel.

http://www.clarkvision.com/articles/index.html#sharpening (http://www.clarkvision.com/articles/index.html#sharpening)
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 05:48:00 pm
When I do go to print I usually like to have lots of pixels in the file. With deconvolution I feel Bart's 3x upsample recommendation is workable to get fair detail, good for printing, out of the image pixels. Anyone who thinks they can get close to optimal results from a raw can throw out a challenge with the file. I have offered that before with raws for the standard Imaging Resource reviews which include many raw shots. I doubt anyone can get sharp 3x upsampled images with ACR/LR.
I don't know what Perfect Resize is supposed to be using, but in the last tests I did comparing it, Photoshop (even doing step interpolation), and LR sizing up 250%, LR was the best of the lot based on a final print. And oh so much faster. Proper capture sharpening made the biggest differences in the results.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: bjanes on August 10, 2014, 05:54:32 pm
I'm aware of that Bill, I actually did USM in the analog darkroom as a photo assignment in school, long before Photoshop.
I was under the impression that there was some algorithm or process that Photoshop (perhaps other software) conducted and just named UnSharp Mask hence the question. Someone could build such an algorithm and call it USM or anything else, what similarity is there to the process we used in the analog darkroom if any? Or was the name just applied because in the old days of Photoshop, the name was given to give us old time analog darkroom users something we could understand?

In both darkroom unsharp masking and digital unsharp masking, the same general principle applies: blur the image and then subtract the blurred image from the original. This subtracts out the low frequencies. This is explained in a Wikipedia article (http://en.wikipedia.org/wiki/Unsharp_masking) and in a post (http://www.cambridgeincolour.com/tutorials/unsharp-mask.htm) on Cambridge in Color.

With the darkroom form, blur is introduced optically. With the digital unsharp mask, Gaussian blur may be used, so the digital process is not an exact duplication of the darkroom process, but the principles are similar. Doug Kerr (http://dougkerr.net/Pumpkin/articles/Unsharp_Mask.pdf) expounds in greater detail on the matter.

Regards,

Bill
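For what it's worth, the digital blur-and-subtract form really is only a few lines; here is a minimal Python/SciPy sketch. The threshold handling is a rough analogue of the slider of that name, not a claim about Photoshop's exact rule:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=1.0, amount=1.0, threshold=0.0):
    """Digital USM in its simplest form: original + amount * (original - blurred).
    image: 2-D float array in [0, 1]; radius plays the role of the blur sigma."""
    blurred = gaussian_filter(image, sigma=radius)
    high_pass = image - blurred                 # the "unsharp mask" itself
    if threshold > 0:
        # suppress boosting of very small differences (rough analogue of the
        # Threshold slider; not a statement about any product's exact behaviour)
        high_pass = np.where(np.abs(high_pass) >= threshold, high_pass, 0.0)
    return np.clip(image + amount * high_pass, 0.0, 1.0)
```

With a radius around 1 pixel and amounts in the 0.5 to 1.5 range this behaves much like modest USM settings in an editor; different products differ mainly in how they build the blur and handle the threshold, which is one reason the same numbers do not transfer between them.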
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 10, 2014, 05:56:14 pm
There is no need to reinvent the wheel.

http://www.clarkvision.com/articles/index.html#sharpening (http://www.clarkvision.com/articles/index.html#sharpening)

What he has in that article isn't the wheel. There are better acutance-enhancing tools than Photoshop's USM, and some of the comparisons he shows even at that are awfully close, and probably indistinguishable at normal magnifications and viewing distances. I remain unconvinced. And yes Andrew, you're right: cost-effectiveness in terms of time versus practical outcomes is a real consideration.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 06:00:04 pm
In both darkroom unsharp masking and digital unsharp masking, the same general principle applies: blur the image and then subtract the blurred image from the original.
OK, same general principle. But I suspect there are multiple products using the term and not producing the same results using the same original data. In fact I know that as I just applied USM in Graphic Converter then Photoshop using the same values and they are not the same! GP only has two of the three controls found in PS (Radius and Intensity which I suspect is akin to Amount).
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 06:06:33 pm
What he has in that article isn't the wheel. There are better acutance-enhancing tools than Photoshop's USM, and some of the comparisons he shows even at that are awfully close, and probably indistinguishable at normal magnifications and viewing distances. I remain unconvinced. And yes Andrew, you're right: cost-effectiveness in terms of time versus practical outcomes is a real consideration.

That is not an article, it is a series. If you go through it, it shows a comparison of PS Smart Sharpen with Richardson-Lucy here: http://www.clarkvision.com/articles/image-restoration2/index.html (http://www.clarkvision.com/articles/image-restoration2/index.html)

There are already several deconvolution threads on the site, so to me remaining unconvinced = remaining in the dark. Your choice.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: bjanes on August 10, 2014, 06:08:50 pm
OK, same general principle. But I suspect there are multiple products using the term and not producing the same results using the same original data. In fact I know that as I just applied USM in Graphic Converter then Photoshop using the same values and they are not the same! GP only has two of the three controls found in PS (Radius and Intensity which I suspect is akin to Amount).

Yes, that is what Doug Kerr discusses in his article (if you take the time to read it).

Regards,

Bill
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 06:17:42 pm
That is not an article, it is a series. If you go through it, it shows a comparison of PS Smart Sharpen with Richardson-Lucy here: http://www.clarkvision.com/articles/image-restoration2/index.html (http://www.clarkvision.com/articles/image-restoration2/index.html)
First thing I see is: In this example, we will start with a high signal-to-noise ratio image, then intentionally blur it. I try to never intentionally blur my images from the get go.
Quote
Yes, that is what Doug Kerr discusses in his article (if you take the time to read it).
I'm a fan of Doug's work and will, but I think what I've seen even before that is that anyone can call a routine USM and they all produce different results. In the test I did today, pretty significant visual differences! In fact, if I showed you the two side by side and said one was USM and the other a vastly different approach (dare I say deconvolve), an observer could come to many of the same conclusions as to what is 'better', as we see on the various pages referenced here. One looks quite a bit less sharp than the other, and that suggests to me that a setting of USM in Photoshop may not produce the same level of sharpness as another product presumably using the same sharpening process (they share the same name). If USM in PS set to the same values we read on Clark's page looks soft compared to the settings in Graphic Converter, does that mean one should up the values? USM isn't USM, it appears; all things are not equal.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: bjanes on August 10, 2014, 06:29:50 pm
First thing I see is: In this example, we will start with a high signal-to-noise ratio image, then intentionally blur it. I try to never intentionally blur my images from the get go.

The point of Bart's demonstration of blurring an image with Gaussian blur and restoring it with deconvolution is to demonstrate that deconvolution works very well if you know the PSF, but I agree that it is best to work with real world images. The PSF can often be estimated or approximated by a Gaussian PSF.

If you use USM, you are blurring the image as part of the process, although you might not be aware of it. :)

Regards,

Bill
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 10, 2014, 06:58:56 pm
That is not an article, it is a series. If you go through it it shows a comparison of the PS smart sharpen with richardson-lucy here: http://www.clarkvision.com/articles/image-restoration2/index.html (http://www.clarkvision.com/articles/image-restoration2/index.html)

There are already several deconvolution threads on the site, so to me remaining unconvinced = remaining in the dark. Your choice.

Mr. Clark's work shows two things: (i) yes, he obtained good image detail from the deconvolution technique he used, and (ii) at normal viewing distance it is unlikely to look much better than the best result he got from PS Smart Sharpen. He hasn't published tests versus Photokit Sharpener or Nik Sharpener Pro, arguably the best conventional sharpening tools other than dedicated deconvolution techniques. I really cannot justify putting in the time to do this, but for me the determinative tests would be to use PKS and NIK at their best versus deconvolution at its best on properly focused photographs that come out of a good DSLR (full-frame or APS-C in the 20~24 MP range) with a good lens and see what differential quality of prints in the 13x19 to 17x22 size range they produce. The kind of evaluation basis I have in mind would be to observe any differences in the definition and the natural character of that definition of fine textural detail (i.e. relative to how we humans see textured objects without loupes), as well as for "easier" objects such as wires against sky, etc. If anyone reading this thread has done this kind of comparison or can point me to one it could be of considerable interest.

As for remaining in the dark, keep your personal slurs to yourself. This is about science.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 07:28:12 pm
The point of Bart's demonstration of blurring an image with Gaussian blur and restoring it with deconvolution is to demonstrate that deconvolution works very well if you know the PSF, but I agree that it is best to work with real world images.
Yes and the demo IS impressive in handling blurred images. But most of mine are not blurred; my current workflow is to use LR for capture sharpening on images that are not out of focus. Going back full circle to the comment that Chances are people using deconvolve methods can beat anything done in LR/ACR. That may be true, but the demos provided thus far have two issues as I see it. First, the images being used are blurred, out of focus. Next, one has to wonder if the USM examples are handled ideally (well no, as none so far are used on real world non blurry images). Clark shows one example with one setting of USM; no question it doesn't look as sharp (on-screen, which is rarely my final goal) as the others. He does suggest upping the settings would look sharper but produce other issues, and it would have been nice to see that. Just today's test using USM in two different products, something I've never looked at, gives me the impression that there are vast differences in just what someone calls USM! That the same settings are not ideal in both cases. That it would be useful for someone to really attempt to produce the best possible results with the tools provided on good images in the first place, then show me a scan of good output such that I could evaluate what the results would mean in a real world context.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 08:18:49 pm
Yes and the demo IS impressive in handling blurred images. But most of mine are not blurred; my current workflow is to use LR for capture sharpening on images that are not out of focus. Going back full circle to the comment that Chances are people using deconvolve methods can beat anything done in LR/ACR. That may be true, but the demos provided thus far have two issues as I see it. First, the images being used are blurred, out of focus. Next, one has to wonder if the USM examples are handled ideally (well no, as none so far are used on real world non blurry images). Clark shows one example with one setting of USM; no question it doesn't look as sharp (on-screen, which is rarely my final goal) as the others. He does suggest upping the settings would look sharper but produce other issues, and it would have been nice to see that. Just today's test using USM in two different products, something I've never looked at, gives me the impression that there are vast differences in just what someone calls USM! That the same settings are not ideal in both cases. That it would be useful for someone to really attempt to produce the best possible results with the tools provided on good images in the first place, then show me a scan of good output such that I could evaluate what the results would mean in a real world context.

Upping the settings in USM creates halos. I bet you, as a photographer, have seen countless images on the web with them. When you start to see images sharpened without halos you see that defect as rather nasty, in that it is no longer required based on the methods available. I have grown to abhor halos over the last few years. Now, deconvolution can create ring artifacts when taken to the level that would make a high contrast print. They can be wiped out by blending back to the original.
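The "blending back to the original" step to tame ringing is literally a per-pixel mix, the same thing lowering a layer's opacity does; a two-line sketch, with the 0.7 opacity purely illustrative:

```python
import numpy as np

def blend_back(original, deconvolved, opacity=0.7):
    """Tame deconvolution ringing by mixing the result with the original;
    opacity=1.0 keeps the full effect, 0.0 reverts to the unsharpened image."""
    return (1.0 - opacity) * original + opacity * deconvolved
```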

IMO the biggest easy improvement for LR/ACR would be to have deconvolve methods listed in a sub-dialog box. People should be able to use several in a sequence they choose. The methods are well documented non-proprietary scientific algorithms. There is no reason not to make them available. Adobe always seems to want to say they have a secret sauce. They have marketing power that convinces many people whatever they do is best. Test it. If they included these methods along with several NR methods I would buy it. For NR, DxO seems to be taking the prize. If Adobe doesn't move, DxO will soon take the prize in sharpening too, with deconvolution set for specific lenses.

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: digitaldog on August 10, 2014, 08:35:50 pm
Upping the settings in USM creates halos. I bet you, as a photographer, have seen countless images on the web with them.
Yes, I'm keenly aware that over sharpening can cause visible halos on output, that's not what I'm suggesting.
Quote
IMO the biggest easy improvement for LR/ACR would be to have deconvolve methods listed in a sub-dialog box.
I'll let the engineers who handle this within the product comment; I'm not qualified to suggest they do or do not do this, and I'll bet they are pretty aware of this possibility.
Quote
The methods are well documented non-proprietary scientific algorithms.
Why do you suppose we are not seeing this in said products?
Quote
There is no reason not to make them available.
Again, with no knowledge of the processing or specifics of this product, I'm not willing to accept that at face value; I'd certainly prefer to hear what an engineer would have to say about this.
Quote
Adobe always seems to want to say they have a secret sauce. They have marketing power that convinces many people whatever they do is best.
Ah sure OK. That seems like a pointless area to speculate about.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 09:21:08 pm
Here is a real world RAW from Imaging Resource
http://www.imaging-resource.com/PRODS/nikon-d800/D800PINE.NEF.HTM (http://www.imaging-resource.com/PRODS/nikon-d800/D800PINE.NEF.HTM)

Anyone can download it, then sharpen however they want.

The reason to look at the methods astronomers use is that they have the most difficult problem. Very faint data with a variety of problems like the atmosphere. They somehow have to get an improved image while retaining accurate data. Wavelets, then deconvolution are the methods they came up with.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 10, 2014, 10:26:20 pm
Here is a crop from the image. Screenshot pasted to MS Paint, saved as JPG. A PNG was too big.

This has Adaptive Richardson-Lucy in Gaussian 5x5 then 3x3 pixels.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 11, 2014, 07:27:01 am
Hi Robert,

Yes, deconvolution is perfect for Capture sharpening, and it's also very good for restoration of some of the upsampling blur, and yes these can also be combined if one wants to avoid upsampling any artifacts. For workflows involving Photoshop, I can recommend FocusMagic. What my analysis has shown is that Capture sharpening should be 'focused' on aperture-dictated blur (not image detail as suggested in 'Real World Image Sharpening'). The amount of blur (in the plane of best focus) is largely Gaussian in nature, due to the combination of several blur sources (which tends to combine into a Gaussian distribution), and varies with aperture.


I'm very interested in this and if the only thing that comes from this thread is a better way of doing capture sharpening then I, for one, will be very happy indeed.  I've had a try with FocusMagic and it looks very good at first sight.  I added it in to the Ps action I'm using to compare different methods.

For FM capture sharpen using default settings (the filter estimates the blur distance), my first conclusion (based on a sample of 1) is that FM does a much better job of capture sharpening than does Lr.    

My second conclusion is that if the Lr image is then resized (after capture sharpen) and compared to the FM image sharpened after resize (the FM filter estimates the blur distance at twice the blur distance for the normal size image, which is pretty impressive), the improvement is really significant, with virtually no artifacts (halos and noise) in the FM-sharpened image but with significant artifacts in the Lr image.  

The FM-sharpened (after resize) image is also better than the Ps-sharpened-after-resize image, with cleaner edges and more detail.

The FM-sharpened (after resize) image is also slightly better than the Ps sharpened-once-for-output image, with a little more detail, but the differences are pretty subtle.

My test is not very fair though, because the sharpening in Lr/ACR used Masking, whereas the FM sharpening was just default sharpening with no edge mask.

So I repeated the test with Masking removed and then the advantage swings much further towards the FM sharpening as the ACR sharpening also sharpens noise whereas the FM sharpening doesn't (and this is with a test version of FM which doesn't include noise reduction).

So, based on this I'm forking out $65 for Focus Magic ... so I can try it out properly.

This forum is costing me money!

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 11, 2014, 07:53:36 am
Here is a crop from the image: a screenshot pasted into MS Paint, saved as JPG. A PNG was too big.

This has Adaptive Richardson-Lucy in Gaussian 5x5 then 3x3 pixels.

And here is the same crop with sharpening using FocusMagic, default settings.  Pretty impressive IMO.

(http://www.irelandupclose.com/customer/LL/D800PINE-Crop.jpg)

(To view the image at 100%, right-click on it and select View Image ... no doubt there's a better way of doing this on this forum, in which case perhaps someone would enlighten me :)).

I also did a test, with the same image, but this time capture sharpening and then upsizing x 2, compared to upsizing x 2 and then sharpening (with FocusMagic using default settings).  I found the capture-sharpen-then-resize version a bit sharper, but over-sharpened for my taste.  I preferred the resize-then-sharpen version with the amount dialled up by 25%.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 11, 2014, 09:08:21 am
I'm very interested in this and if the only thing that comes from this thread is a better way of doing capture sharpening then I, for one, will be very happy indeed.  I've had a try with FocusMagic and it looks very good at first sight.  I added it in to the Ps action I'm using to compare different methods.

For FM capture sharpen using default settings (the filter estimates the blur distance), my first conclusion (based on a sample of 1) is that FM does a much better job of capture sharpening than does Lr.

Hi Robert,

Since you are new to FocusMagic, allow me to share a tip (or two). FocusMagic does try to estimate the best blur width setting, but may fail at getting it right for the best focused part of the image (also depends on where you exactly set the preview marker). I tend to increase the Amount setting to its maximum of 300%, and set the Blur width to 0. Then increase the blur width by 1 at a time. There will be a point where most images will suddenly start to produce fat contours/edges instead of sharper edges. That's where you back-off 1 blur width click, and dial in a more pleasing amount (larger radii tolerate larger amounts). For critical subsequent upsampling jobs, I then use a Layer Blend-if setup, or I first upsample and then (WYSIWYG) sharpen that.

Here's a generic Blend-if setup I use in an action that creates a duplicate (sharpening) layer in Photoshop and gives a useful way to throttle the possible artifacts if the amount settings are taken to extremes:

(http://bvdwolf.home.xs4all.nl/main/downloads/Non-clipped-sharpening.png)

It basically reduces the amount of sharpening contributed by the top layer where the local contrast is already high and near clipping. The start/end points of the gradual decrease/increase for shadows/highlights can be used for further fine-tuning, as can the layer's opacity.

FocusMagic usually strikes a very nice balance between enhancing signal/sharpness and constraining noise, but for larger (>4) blur width settings it also allows you to manually switch noise suppression on/off. Alternatively, one can do modest noise reduction before sharpening with a dedicated noise reduction plugin.

It is of course also possible to use multiple runs of FM, usually starting at the larger required radius and finishing off with the smaller/smallest radius, with adjusted amounts. That allows one to optimize for more complex PSF shapes than whatever FocusMagic uses by default. There are also differences between the different image source methods/models, although Digital Camera and Forensic produce very similar/close results.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: AFairley on August 11, 2014, 10:15:23 am
Thank you all for a most informative thread.  Can you tell me whether there is any inherent advantage to performing capture sharpening as a step in the demosaicing process as opposed to on a tiff "developed" without sharpening?
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 11, 2014, 10:42:24 am
Hi Robert,

Since you are new to FocusMagic, allow me to share a tip (or two). FocusMagic does try to estimate the best blur width setting, but may fail at getting it right for the best focused part of the image (also depends on where you exactly set the preview marker). I tend to increase the Amount setting to its maximum of 300%, and set the Blur width to 0. Then increase the blur width by 1 at a time. There will be a point where most images will suddenly start to produce fat contours/edges instead of sharper edges. That's where you back-off 1 blur width click, and dial in a more pleasing amount (larger radii tolerate larger amounts). For critical subsequent upsampling jobs, I then use a Layer Blend-if setup, or I first upsample and then (WYSIWYG) sharpen that.


Thank you Bart - you are very helpful as usual! I'll give that a go.   I often use a Layer Blend-if setup similar to yours to soften halos in sharpening. 

Focus Magic can't be used as a smart filter, which is a real pity ... and right now it doesn't seem to install for CC 2014 (which isn't such a great surprise, as I'm having problems installing plugins). Hopefully they will fix that in a future release.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 11, 2014, 10:50:47 am
Thank you all for a most informative thread.  Can you tell me whether there is any inherent advantage to performing capture sharpening as a step in the demosaicing process as opposed to on a tiff "developed" without sharpening?

I can't say if there's an inherent advantage, but I can say that empirically there is no advantage.  If you think about it, in order to sharpen, Lightroom has to render the image, so the sharpening is applied after the demosaicing. So whether the sharpening is done in Lr or in Ps shouldn't make much difference (although I don't know at exactly what point Lr applies the sharpening ... for example, before or after applying Clarity ... so there might be some small differences).  Also, if you want to keep things like for like, you would be better off opening/exporting the image as ProPhoto.

It's easy to check using the Ps Camera Raw filter (so apply the sharpening in Lr, open the image in Ps without sharpening, then use the Camera Raw filter to apply exactly the same sharpening as in Lr, and compare the two images).

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 11, 2014, 11:05:20 am
Thank you all for a most informative thread.  Can you tell me whether there is any inherent advantage to performing capture sharpening as a step in the demosaicing process as opposed to on a tiff "developed" without sharpening?

Hi Alan,

I'm not sure at which point LR applies the parametric sharpening settings when it ultimately renders the image. Maybe some is done in Raw (e.g. some noise reduction), but chances are it's applied after demosaicing to RGB. So at that stage it would make little difference, other than the sharpening algorithm used, whether one uses the LR Capture sharpening or another application.

Of course, LR's parametric adjustments do add onto each other, so it would not be the same to just switch it off before export and sharpen elsewhere as it would be if you had skipped it altogether from the start of your tweaking of the other LR parameters.

At this point in time, the capabilities of sharpening outside of LR make it worth considering. Maybe not on a routine basis, but when you want the best of the best. Especially if one also has Photoshop at one's disposal, there is a lot that can be done there that LR was not designed for.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 12, 2014, 08:30:17 am
I would really appreciate a bit of help understanding the whole concept of deconvolution.  BTW, I see there was a massive thread 4 years ago, here: http://www.luminous-landscape.com/forum/index.php?topic=45038, started by Bill. (Which humbles me a bit as I can see you guys have been talking about this for ages!). I've read some of it and while it's very interesting, with something like 18 pages it takes some plowing through!  Still, I will get to it.

From a mathematical point of view it seems straightforward enough: the signal f is convolved with another signal g to yield h.  If we know g then we can find its inverse and so recover f.  If we don’t know g then we can guess it or estimate it and so attempt recovery of f.

Noise messes things up a bit because it’s added to the convolved signal … so how do we remove it from h before doing the deconvolution?  Well, one way would be to add some blur to h (in other words convolve it further, which isn’t a brilliant idea if the g was a blur function to start off with!).
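To make that concrete for myself, here is a minimal numpy sketch (a toy example of my own, not any particular product's algorithm) of blurring a signal and then naively inverting the blur in the frequency domain; the noisy case shows why straight inversion falls apart:

Code:
import numpy as np

rng = np.random.default_rng(0)
N = 256

f = np.zeros(N); f[100:130] = 1.0                 # the 'true' signal: a rectangular pulse
x = np.arange(-8, 9)
g = np.exp(-x**2 / (2 * 2.0**2)); g /= g.sum()    # Gaussian blur kernel, sigma = 2

G = np.fft.fft(g, n=N)                            # kernel spectrum (zero-padded to N)
h = np.real(np.fft.ifft(np.fft.fft(f) * G))       # h = f convolved with g
h_noisy = h + rng.normal(0.0, 0.01, N)            # the same observation plus a little noise

f_rec   = np.real(np.fft.ifft(np.fft.fft(h) / G))        # near-perfect recovery of f
f_noisy = np.real(np.fft.ifft(np.fft.fft(h_noisy) / G))  # the noise gets hugely amplified

print(np.abs(f - f_rec).max(), np.abs(f - f_noisy).max())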

Anyway, moving on to imaging, I assume that all filters convolve the image (essentially one function applied to another).  If we convolve the image with a blur filter and then apply the inverse filter (a sharpening filter?) then we are convolving the image twice, but the second convolution is also a deconvolution.  Is that correct?

Looking at the Ps Custom filter, it’s easy enough to apply a blur and then apply the inverse (so where the adjacent pixel was added, we now subtract it).  The effect is to remove the blur … but it also introduces the beloved halo!

So I guess I must be missing something fundamental!  Or not using the Ps Custom filter correctly, which is also highly likely!

But assuming that I’m not entirely off the mark, when Jeff says that the Lr sharpen is effectively a USM-type sharpening when used with a low Detail setting, but becomes a deconvolution filter with high Detail settings … I’m both puzzled and lost.  I’m puzzled as to how a Detail setting of 0 gives USM (which to my mind is a deconvolution if its intention is to remove blur) while at 100 it’s a deconvolution.  

If I take an image and blur it with a Gaussian blur, radius 3, and then sharpen using the ACR sharpen, moving the Detail to 100% certainly gives more sharpening, but it also gives a nice (NOT) halo … it certainly doesn’t recover the image to the pre-blur version.

Here is a very simple test image (real-life) that can be used to try out the different techniques:

http://www.irelandupclose.com/customer/LL/sharpentest.tif

I’ve tried various methods (after applying a Gaussian blur of 3) and none of them seem to be particularly effective.  I would very interested indeed if you have a filter, or multiple filters, or filter applied multiple times, that can (within reason) restore the image to the original.

And I would be very grateful for clarification on deconvolution (and correction of my understanding, particularly on how it is normally applied to digital images).  There's a lot of talk about deconvolution, but I doubt that there are too many of us who understand it (me included)!  

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: bjanes on August 12, 2014, 09:11:31 am
Thank you Bart - you are very helpful as usual! I'll give that a go.   I often use a Layer Blend-if setup similar to yours to soften halos in sharpening. 

Focus Magic can't be used as a smart filter, which is a real pity ... and right now it doesn't seem to install for CC 2014 (which isn't such a great surprise, as I'm having problems installing plugins). Hopefully they will fix that in a future release.

Robert

Robert,

FM works fine on my Windows 8 machine with PS CC ver 2014.1.0. I can't remember how I installed it, whether with the installer or merely copying the plugin from a previous version of CC. FocusMagic64.8bf resides in C:\Program Files\Adobe\Adobe Photoshop CC 2014\Plug-ins.

Regards,

Bill
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 12, 2014, 10:03:18 am
Robert,

FM works fine on my Windows 8 machine with PS CC ver 2014.1.0. I can't remember how I installed it, whether with the installer or merely copying the plugin from a previous version of CC. FocusMagic64.8bf resides in C:\Program Files\Adobe\Adobe Photoshop CC 2014\Plug-ins.

Regards,

Bill

Cool, Bill!  Thanks ... stupid, I should have checked the plug-in folder.  The FM installer just doesn't (currently at least) install into the CC 2014\Plug-ins folder.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: TonyW on August 12, 2014, 10:30:49 am
Here is a very simple test image (real-life) that can be used to try out the different techniques:

http://www.irelandupclose.com/customer/LL/sharpentest.tif

I’ve tried various methods (after applying a Gaussian blur of 3) and none of them seem to be particularly effective.  I would very interested indeed if you have a filter, or multiple filters, or filter applied multiple times, that can (within reason) restore the image to the original.

And I would be very grateful for clarification on deconvolution (and correction of my understanding, particularly on how it is normally applied to digital images).  There's a lot of talk about deconvolution, but I doubt that there are too many of us who understand it (me included)!  

Have been following this thread with some interest and as far as deconvolution goes I am sure that Bart and others knowledge and experience of this aspect will prove very useful for you.

Couple of things I picked up on and it is my opinion that maybe you are making things a little more difficult than they need to be to get excellent result whichever sharpening route you choose.

1.  Your test file of the power/telephone line is not a particularly good choice as presented due to purple/green CA.  IMO this should be removed first during raw processing to give a meaningful view of sharpening options.

2.  As you started the thread with PS, have you tried the Smart Sharpen / Lens Blur / More Accurate checked?  This AFAIK is deconvolution sharpening (particular parameters unknown) and offers quite a lot in the way of control.  Not as many options of course as in other software, but sometimes this may be enough?

By chance I had also played with the sample NEF image in ACR using Amt=50, Rad=0.7, Detail=80, and it seems to be pretty close to your FM example, although that was not my intention.  Seems to me in this case that a little tweaking in ACR would narrow the differences even further.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 12, 2014, 12:34:47 pm
I would really appreciate a bit of help understanding the whole concept of deconvolution.  BTW, I see there was a massive thread 4 years ago, here: http://www.luminous-landscape.com/forum/index.php?topic=45038, started by Bill. (Which humbles me a bit as I can see you guys have been talking about this for ages!). I've read some of it and while it's very interesting, with something like 18 pages it takes some plowing through!  Still, I will get to it.

From a mathematical point of view it seems straightforward enough: the signal f is convolved with another signal g to yield h.  If we know g then we can find its inverse and so recover f.  If we don’t know g then we can guess it or estimate it and so attempt recovery of f.

Hi Robert,

Deconvolution is a pretty simple operation (mathematically speaking). In the spatial domain it just allows one to restore the original signal that was intended for a single pixel but was spread over a range of surrounding pixels instead, by subtracting that spread signal from the neighbors and adding it back to its intended location. However, that pixel also carries parts of the signal from all the surrounding source pixels, so that needs to be subtracted and added back to those other pixels as well.

The distribution of that 'stray information' is mathematically described by a Point Spread Function (PSF), which allows one to subtract the correct amounts of information from its neighbors. For a completely accurate description of that PSF one would need infinite precision (because the amounts get smaller per neighbor, but there will also be many more neighbors as the distance increases), and no noise to disturb the smaller and smaller true amounts of signal as we get further away from the source pixel position.

Quote
Noise messes things up a bit because it’s added to the convolved signal … so how do we remove it from h before doing the deconvolution?  Well, one way would be to add some blur to h (in other words convolve it further, which isn’t a brilliant idea if the g was a blur function to start off with!).

Given that we already have to deal with a signal made from (Poisson distributed) shot noise and a few other sources of electronic noise, we will not be able to have such a perfect restoration, but by adding some clever statistical procedures (which know how to deal with noise and probability distributions) to the concept of deconvolution, one can devise rather successful algorithms for image restoration: luminance (and color) resolution restoration in our case.

When we try to reduce noise, we also reduce or dilute signal that was made up from photon shot noise, so we need to understand the statistical properties of our (mostly Poisson distributed) noisy signal, and have knowledge of the PSF, to have a better chance of beating the odds in reconstructing a solution to what is called an "ill-posed problem".
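To make that less abstract, here is a bare-bones Richardson-Lucy sketch in Python (assuming numpy/scipy; this is only an illustration of the idea, not FocusMagic's or Lightroom's actual algorithm, and the starting estimate and iteration count are arbitrary):

Code:
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    # Iteratively redistributes blurred signal back towards its source pixels,
    # assuming Poisson-dominated noise and a known, shift-invariant 2-D PSF.
    # 'observed' should be a float array of non-negative values.
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (reblurred + eps)              # compare the data with the current guess
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# usage (illustrative): restored = richardson_lucy(blurred_luminance, gaussian_psf, 30)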

Quote
Anyway, moving on to imaging, I assume that all filters convolve the image (essentially one function applied to another).  If we convolve the image with a blur filter and then apply the inverse filter (a sharpening filter?) then we are convolving the image twice, but the second convolution is also a deconvolution.  Is that correct?

Here is where we need to distinguish between a masking type of filter, like USM and other acutance enhancing filters, and a deconvolution type of filter. A mask is just an overlay that selectively attenuates the transmission to underlying layers. It adds a (positive or negative) percentage of a single pixel to a lower layer's pixel. A deconvolution, on the other hand, adds weighted amounts of surrounding pixels to a central pixel, for all pixels in the same layer (a vast number of multiplications/additions is required for each pixel).
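In rough code terms (numpy/scipy, purely illustrative parameters), the structural difference looks like this:

Code:
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

def unsharp_mask(img, radius=1.0, amount=1.0):
    # Masking type: build a blurred overlay and add the difference back,
    # which boosts edge contrast (and creates halos) but moves no detail around.
    return img + amount * (img - gaussian_filter(img, sigma=radius))

def kernel_deconvolve(img, inverse_kernel):
    # Deconvolution type: a single kernel pass that adds weighted amounts of the
    # surrounding pixels back to each central pixel, within the same layer.
    return convolve(img, inverse_kernel, mode="nearest")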

Quote
Looking at the Ps Custom filter, it’s easy enough to apply a blur and then apply the inverse (so where the adjacent pixel was added, we now subtract it).  The effect is to remove the blur … but it also introduces the beloved halo!


Halo only occurs if we use the wrong amounts to add back from the many surrounding pixels. When we add back the correct (positive and negative) amounts from the neighbors, we will have a perfect restoration (within the limitations of noise and calculation precision). It's due to those limitations that we cannot have a perfect restoration, although we may get close enough for our usually 8-bit/channel output requirements to make a positive difference.

Quote
So I guess I must be missing something fundamental!  Or not using the Ps Custom filter correctly, which is also highly likely!

Assuming you used the correct kernel values to reverse the operation, Photoshop offers a limited precision of calculation (integer values as input, being divided by scaling integers, with limited calculation precision and rounding or truncation of intermediate values), and it is also limited to small 5x5 kernel sizes. So expect less than perfect results from that operator.

Quote
But assuming that I’m not entirely off the mark, when Jeff says that the Lr sharpen is effectively a USM-type sharpening when used with a low Detail setting, but becomes a deconvolution filter with high Detail settings … I’m both puzzled and lost.  I’m puzzled as to how a Detail setting of 0 gives USM (which to my mind is a deconvolution if its intention is to remove blur) while at 100 it’s a deconvolution.

If I take an image and blur it with a Gaussian blur, radius 3, and then sharpen using the ACR sharpen, moving the Detail to 100% certainly gives more sharpening, but it also gives a nice (NOT) halo … it certainly doesn’t recover the image to the pre-blur version.

I assume it is just a gradual blend between USM and a sort of deconvolution, and the deconvolution part is not as powerful as e.g. FocusMagic, to allow faster execution. The deconvolution method used will quickly create more artifacts than restored signal, and the required amount setting is a complete guess (zero guidance is offered, other than eyeballing the resulting effect).

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 12, 2014, 03:28:26 pm
Have been following this thread with some interest and as far as deconvolution goes I am sure that Bart and others knowledge and experience of this aspect will prove very useful for you.

Couple of things I picked up on and it is my opinion that maybe you are making things a little more difficult than they need to be to get excellent result whichever sharpening route you choose.

1.  Your test file of the power/telephone line is not a particularly good choice as presented due to purple/green CA.  IMO this should be removed first during raw processing to give a meaningful view of sharpening options.

2.  As you started the thread with PS, have you tried the Smart Sharpen / Lens Blur / More Accurate checked?  This AFAIK is deconvolution sharpening (particular parameters unknown) and offers quite a lot in the way of control.  Not as many options of course as in other software, but sometimes this may be enough?

By chance I had also played with the sample NEF image in ACR using Amt=50, Rad=0.7, Detail=80, and it seems to be pretty close to your FM example, although that was not my intention.  Seems to me in this case that a little tweaking in ACR would narrow the differences even further.


Hi Tony,

I corrected the CA so if you download the image now it's CA-free http://www.irelandupclose.com/customer/LL/sharpentest.tif

Also, here are some comparisons:

(http://www.irelandupclose.com/customer/LL/sharpentest.jpg)

[You need to right-click on the image and then zoom in to see the detail properly.]

My own feeling is that the Smart Sharpen result is the best (without More Accurate, as this is a Legacy setting which seems to increase artifacts quite a lot). ACR and FocusMagic seem much of a muchness. QImage gives a good sharp line, but at the expense of flattening the power lines.

I did the best I could with all of the sharpening methods (but without adding any additional steps of course, not even fading highlights in Smart Sharpen as the same effect can be obtained for the other filters using Blend-if in Photoshop).

Perhaps adding a Gaussian blur of 3 to an unsharpened raw image is a bit unfair.

I find it hard to compare your two D800Pine images as the bottom one has darker leaves but a lighter trunk.  Not sure why that is?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: TonyW on August 12, 2014, 04:37:44 pm
...
I corrected the CA so if you download the image now it's CA-free http://www.irelandupclose.com/customer/LL/sharpentest.tif

Also, here are some comparisons:

...
Hi Robert.  Quite happy with your findings now CA corrected :)

Quote
My own feeling is that the Smart Sharpen result is the best (without More Accurate, as this is a Legacy setting which seems to increase artifacts quite a lot). ACR and FocusMagic seem much of a muchness. QImage gives a good sharp line, but at the expense of flattening the power lines.
My understanding is that the Smart Sharpen Lens Blur kernel with the More Accurate option should give the best results (based on something I read by Eric Chan - I think on this forum).  It certainly takes longer to apply, and I assume that more iterations are performed, which may explain the artifact increase you are seeing?

...
Quote
I find it hard to compare your two D800Pine images as the bottom one has darker leaves but a lighter trunk.  Not sure why that is?
I think it would be wrong to try and draw conclusions from this comparison: all I did was to crop the full-size view of your test and paste it as a new document in PS.  My own version using ACR was actually produced before I even saw your example and was straight from camera, with only lens profile and CA correction applied, plus the sharpening.  The difference may be explained by the simple fact of copying your image, or possibly FM may have altered contrast/colour slightly, or even a combination  :).

On comparing them, it just occurred to me that the difference was slight, suggesting that in this case ACR deconvolution may be just as good a starting point as any; and of course, once output sharpening is applied, I would have thought that printing would yield perfectly acceptable results in either case.

I have no experience of FM or Qimage and therefore could not comment on their advantages, but if Bart says they are good then I have every reason to believe that is the case, and they are worth investigating to see how they may fit in with your workflow.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 12, 2014, 04:51:09 pm
Here is where we need to distinguish between a masking type of filter, like USM and other acutance enhancing filters, and a deconvolution type of filter. A mask is just an overlay that selectively attenuates the transmission to underlying layers. It adds a (positive or negative) percentage of a single pixel to a lower layer's pixel. A deconvolution, on the other hand, adds weighted amounts of surrounding pixels to a central pixel, for all pixels in the same layer (a vast number of multiplications/additions is required for each pixel).

Many thanks for taking the time to write such a thorough response Bart!

Looking at this image:

(http://www.irelandupclose.com/customer/LL/unblur.jpg)

What I’m attempting to emulate is a point source (original image), blurred using the F1 filter.  The blurred point is then ‘unblurred’ using the F2 filter (which is not a USM but a neighbouring pixel computation).

So is this a deconvolution?  And is the PSF effectively F1 (that is, the blur)?  In which case F2 would be the deconvolution function?

As you’ve probably guessed, I’m trying to put this whole thing in terms that I can understand.  I know of course that a sophisticated deconvolution algorithm would be more intelligent and complex, but would it not essentially be doing the same thing as above?


Interestingly, this sharpen filter:

(http://www.irelandupclose.com/customer/LL/customsharpen.jpg)

gives a sharper result than the ACR filter, for example, in the test image with the power lines.  A little bit of a halo, but nothing much … and no doubt the filter could be improved on by someone who knew what he was doing!

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 12, 2014, 05:16:42 pm

I have no experience of FM or Qimage and therefore could not comment on their advantages, but if Bart says they are good then I have every reason to believe that is the case, and they are worth investigating to see how they may fit in with your workflow.

Hi Tony,

I tried the D800Pine sharpen again, this time with Smart Sharpen, Smart Sharpen (Legacy with More Accurate) and FocusMagic.  Smart Sharpen with Legacy turned off appears to be the same as Smart Sharpen with Legacy on and More Accurate on.

The best result was clearly with FocusMagic for this test.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: TonyW on August 12, 2014, 05:31:04 pm
Hi Robert
I just realised we are using different versions of PS: I am on CS6 and you are on CC?  So things under the hood seem to have changed.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 12, 2014, 05:33:15 pm
OK ... this is where I stop for tonight!!

But just before ending, you should try this Ps Custom Filter on the D800pine image:

(http://www.irelandupclose.com/customer/LL/customsharpen.jpg)

Then fade to around 18-20% with Luminosity blend mode.  It's better than Smart Sharpen.  Which is pretty scary.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 12, 2014, 06:06:02 pm
What I’m attempting to emulate is a point source (original image), blurred using the F1 filter.  The blurred point is then ‘unblurred’ using the F2 filter (which is not a USM but a neighbouring pixel computation).

So is this a deconvolution?  And is the PSF effectively F1 (that is, the blur)?  In which case F2 would be the deconvolution function?

That's correct, the Custom filter performs a simple (de)convolution.
However, to deconvolve the  F1 filter would require an F2 filter like:
-1 -1 -1
-1  9 -1
-1 -1 -1
All within the accuracy of the Photoshop implementation. One typically reverses the original blur kernel values to negative values, and then adds to the central value to achieve a kernel sum of one (to keep the multiplied and summed restored pixels at the same average brightness).
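As a small numpy sketch of that recipe (assuming, for illustration, that F1 was the all-ones 3x3 blur, which is consistent with the F2 given above):

Code:
import numpy as np

def inverse_kernel(blur_taps):
    k = -np.asarray(blur_taps, dtype=float)   # reverse the original blur values to negative
    centre = tuple(s // 2 for s in k.shape)
    k[centre] += 1.0 - k.sum()                # raise the central tap so the kernel sums to 1
    return k

F1 = np.ones((3, 3))                          # assumed all-ones 3x3 blur (F1)
print(inverse_kernel(F1))                     # -> -1 everywhere, 9 in the centre (F2)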

A more elaborate deconvolution would use a statistically more robust version, because the simple implementation tends to increase noise almost as much as signal, but we'd like to increase the signal-to-noise ratio by boosting the signal significantly more than the noise in a regular photographic image.

Quote
As you’ve probably guessed, I’m trying to put this whole thing in terms that I can understand.  I know of course that a sophisticated deconvolution algorithm would be more intelligent and complex, but would it not essentially be doing the same thing as above?

With the suggested F2 adjustment, yes.

Quote
Interestingly, this sharpen filter:

(http://www.irelandupclose.com/customer/LL/customsharpen.jpg)

gives a sharper result than the ACR filter, for example, in the test image with the power lines.  A little bit of a halo, but nothing much … and no doubt the filter could be improved on by someone who knew what he was doing!

Yes, but this will be a sharpening filter, not a neutral deconvolution (unless it exactly reverses the unknown blur function, PSF). Only a perfect PSF deconvolution (probably close to a Gaussian PSF deconvolution) will remain halo free (so with imperfect precision, almost halo free).

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 13, 2014, 03:10:00 am
OK ... this is where I stop for tonight!!

But just before ending, you should try this Ps Custom Filter on the D800pine image:

(http://www.irelandupclose.com/customer/LL/customsharpen.jpg)

Then fade to around 18-20% with Luminosity blend mode.  It's better than Smart Sharpen.  Which is pretty scary.

Hi Robert,

Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.

As I've said earlier, such a 'simple' deconvolution tends to also 'enhance' noise (and things like JPEG artifacts), because it can't discriminate between signal and noise. So one might want to use this with a blend-if layer or with masks that are opaque for smooth areas (like blue skies, which are usually a bit noisy due to their low photon counts and the demosaicing of that).
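Such a protection mask could be sketched like this (numpy/scipy, with arbitrary blur/strength values; a Photoshop edge mask or Blend-if setup achieves the same kind of thing):

Code:
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_mask(img, blur=2.0, strength=8.0):
    # Roughly 0 in smooth (noise-prone) areas and approaching 1 near real edges,
    # so the deconvolved layer can be blended in only where there is detail.
    gx, gy = sobel(img, axis=0), sobel(img, axis=1)
    grad = gaussian_filter(np.hypot(gx, gy), sigma=blur)
    return np.clip(strength * grad / (grad.max() + 1e-12), 0.0, 1.0)

# blended = mask * deconvolved + (1.0 - mask) * original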

Upsampled images would require likewise upsampled filter kernel dimensions, but a 5x5 kernel is too limited for that, so this is basically only usable for original size or down-sampled images.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: hjulenissen on August 13, 2014, 03:52:59 am
I'd really appreciate it if someone could relate "sharpening" to "deconvolution" in a dsp manner, ideally using simplistic MATLAB scripts. There are many subjective claims ("deconvolution regains true detail, while sharpening only fakes detail"). But what is the fundamental difference? Both have some inherent model of the blur (be it gaussian or something else), successful implementations of both have to work around noise/numerical issues...

If you put an accurate modelled/measure PSF into an USM algorithm, does it automatically become "deconvolution"? If you use a generic windowed gaussian in a deconvolution algorithm, does it become sharpening? Is the nonlinear "avoid amplifying small stuff as it is probably noise" part of USM really that bad, or is it an ok first approximation to methods used in deconvolution?

-h
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 13, 2014, 04:31:45 am
That's correct, the Custom filter performs a simple (de)convolution.
However, to deconvolve the  F1 filter would require an F2 filter like:
-1 -1 -1
-1  9 -1
-1 -1 -1
All within the accuracy of the Photoshop implementation. One typically reverses the original blur kernel values to negative values, and then adds to the central value to achieve a kernel sum of one (to keep the multiplied and summed restored pixels at the same average brightness).

Yes, I realize that - but the above filter deconvolves too strongly, so that some detail is lost around the edges, which is why I reduced it a bit.  However, surely the difference between my F2 and yours is only a modification of the deconvolution algorithm, with yours being the perfect one, and mine giving the better real-world result.  Isn't this one of the things that different deconvolution algorithms will do: improve the deconvolution by, for example, boosting the signal more than the noise (as you mention)?  Which might mean softening the deconvolution to the point that it ignores noise, for example.

At any rate, the F2 filter and the one above are both sharpening filters.  They are also deconvolving filters because the convolution is known.  So what Jeff says, that the Lr sharpen goes from USM to Deconvolution as one moves the Detail slider from 0 to 100%, just doesn't make sense.  To deconvolve you have to know the distortion, and increasing the Detail can't give you that information.

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 13, 2014, 04:38:16 am
Hi Robert,

Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.


Yes, it works quite well, although you have to apply it several times to get the sort of sharpening needed for the D800Pine image (which would seem to indicate that the D800 image is a bit softer than one would expect, given that the test image was produced by Nikon, presumably with the very best lens and in the very best conditions).

Could you explain how you work out the numbers?  Do you have a formula or algorithm, or is it educated guesswork (in which case your guessing capabilities are better than mine  :)).

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 13, 2014, 05:17:01 am
Yes, it works quite well, although you have to apply it several times to get the sort of sharpening needed for the D800Pine image (which would seem to indicate that the D800 image is a bit softer than one would expect, given that the test image was produced by Nikon, presumably with the very best lens and in the very best conditions).

Correct. The actual blur PSF is probably larger/different than a 0.7 sigma Gaussian blur, and thus requires a larger radius PSF (which may not be possible to adequately model in a 5x5 kernel), or multiple iterations with the too small radius version (but that risks boosting noise too much). The amount of actual blur is largely dictated by the aperture used (assuming perfect focus, since defocus is a resolution killer). The 0.7 Gaussian blur seems to do much better on your power lines, but I don't know about the rest of that image.

Quote
Could you explain how you work out the numbers?  Do you have a formula or algorithm, or is it educated guesswork (in which case your guessing capabilities are better than mine  :)).

It's trial and error, but starting with a solid foundation. I start with a PSF of the assumed 0.7 sigma blur, using my PSF generator tool (http://bvdwolf.home.xs4all.nl/main/foto/psf/PSF_generator.html), to have something solid to work with. I then have to consider some of the limitations (limited precision, integer kernel values only, maximum of 999) of the Photoshop Custom filter implementation. So one would need to produce an integer values only kernel, and select a deconvolution type of kernel (inverts the blur values and normalizes to a kernel sum of 1 with the central kernel value, to maintain normal average brightness).

Then one needs to tweak the scale factor. The scale in my tool is essentially an amplitude amplifier, but that is not necessarily what we want in the PS Custom filter: we want to increase its precision, not the amplitude of its effect. Therefore we need to adjust the Custom filter's scale factor. We then also need to tweak the numbers to get more predictable output for uniform areas, since those should stay at the same brightness.
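Roughly, in Python terms, the workflow looks like the sketch below (a simplified illustration of the steps just described; the actual tool output and my manual tweaks go a bit further): build a small Gaussian PSF, turn it into a deconvolution-type kernel, then scale and round it to the integer taps the Custom filter accepts, entering the tap sum as the filter's Scale.

Code:
import numpy as np

def gaussian_psf(sigma=0.7, size=5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def custom_filter_taps(psf, precision=100):
    k = -psf                                   # invert the blur values ...
    centre = (psf.shape[0] // 2, psf.shape[1] // 2)
    k[centre] += 2.0                           # ... and normalise: the kernel now sums to 1
    taps = np.rint(k * precision).astype(int)  # integer taps for the Custom filter
    return taps, int(taps.sum())               # the tap sum is the Scale value to enter

taps, scale = custom_filter_taps(gaussian_psf(0.7, 5))
print(taps); print("Scale:", scale)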

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 13, 2014, 05:43:39 am
I'd really appreciate it if someone could relate "sharpening" to "deconvolution" in a dsp manner, ideally using simplistic MATLAB scripts. There are many subjective claims ("deconvolution regains true detail, while sharpening only fakes detail"). But what is the fundamental difference? Both have some inherent model of the blur (be it gaussian or something else), successful implementations of both have to work around noise/numerical issues...

If you put an accurate modelled/measure PSF into an USM algorithm, does it automatically become "deconvolution"? If you use a generic windowed gaussian in a deconvolution algorithm, does it become sharpening? Is the nonlinear "avoid amplifying small stuff as it is probably noise" part of USM really that bad, or is it an ok first approximation to methods used in deconvolution?

-h

It certainly would be interesting ... but I'm not the person to do it!

What I was attempting to do above is to relate deconvolution to sharpening using a kernel (in the Photoshop Custom Filter).  It would appear (confirmed by Bart, I believe) that in its simplest implementation, a sharpening filter is a deconvolution filter if the sharpening filter reverses the blurring. So if you blur with a 'value' of 1 and unblur with a value of 1, you revert back to the original image (hopefully), which is clearly going to be sharper than the blurred image: so you could say that you have sharpened the blurred image.

However, what we normally call 'sharpening' is not restoring lost detail (which is what deconvolution attempts to do): what it does is to add contrast at edges, and this gives an impression of sharpness because of the way our eyes work (we are more sensitive to sharp transitions than to gradual ones - this gives a useful explanation http://www.cambridgeincolour.com/tutorials/unsharp-mask.htm).

So a sharpening filter like USM could by chance be a deconvolution filter, but it normally won't be.  But I guess that if we carefully play around with the radius that we could come close to a deconvolution, providing the convolution is gaussian.  With Smart Sharpen that might be more achievable, using the Lens blur.  Just guessing here  :) ... perhaps someone could clarify.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 13, 2014, 06:15:12 am
I'd really appreciate it if someone could relate "sharpening" to "deconvolution" in a dsp manner, ideally using simplistic MATLAB scripts.

Hi,

Not everybody here is familiar with MatLab, so that would not help a larger audience.

The crux of the matter is that in a DSP manner Deconvolution exactly inverts the blur operation (assuming an accurate PSF model, no input noise, and high precision calculations to avoid accumulation of errors). USM only boosts the gradient of e.g. edge transitions, which will look sharp but is only partially helpful and not accurate (and prone to creating halos which are added/subtracted from those edge profiles to achieve that gradient boost).

Quote
There are many subjective claims ("deconvolution regains true detail, while sharpening only fakes detail").


It's not subjective, but measurable and visually verifiable. That's why it was used to salvage the first generation of Hubble Space Telescope images taken with flawed optics.

Quote
But what is the fundamental difference? Both have some inherent model of the blur (be it gaussian or something else), successful implementations of both have to work around noise/numerical issues...

If you put an accurate modelled/measure PSF into an USM algorithm, does it automatically become "deconvolution"?

No, it's not the model of the blur, but how that model is used to invert the blurring operation. USM uses a blurred overlay mask to create halo overshoots in order to boost edge gradients. Deconvolution doesn't use an overlay mask, but redistributes weighted amounts of the diffused signal in the same layer back to the intended spatial locations (it contracts blurry edges to sharpen, instead of boosting edge amplitudes to mimic sharpness).

I can recommend this free book on DSP (http://www.dspguide.com/ch6.htm) for those interested in a more fundamental explanation of how things work. This tutorial (http://micro.magnet.fsu.edu/primer/java/digitalimaging/processing/kernelmaskoperation/) has a nice visual demonstration of how a kernel moves through a single layer to convolve an image.

Quote
If you use a generic windowed gaussian in a deconvolution algorithm, does it become sharpening? Is the nonlinear "avoid amplifying small stuff as it is probably noise" part of USM really that bad, or is it an ok first approximation to methods used in deconvolution?

It's the algorithm that defines what is done with the model of the blur function (PSF). More advanced algorithms usually have a regularization component that blurs low signal-to-noise amounts but fully deconvolves higher S/N pixels. They also tend to use multiple iterations to hone in on a better balance between noise attenuation and signal restoration. Thus, they become locally adaptive to the S/N ratios present in an image layer (often a Luminance component to avoid mistakenly amplifying Chromatic noise).

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 13, 2014, 06:32:05 am
However, what we normally call 'sharpening' is not restoring lost detail (which is what deconvolution attempts to do): what it does is to add contrast at edges, and this gives an impression of sharpness because of the way our eyes work (we are more sensitive to sharp transitions than to gradual ones - this gives a useful explanation http://www.cambridgeincolour.com/tutorials/unsharp-mask.htm).

Correct.

Quote
So a sharpening filter like USM could by chance be a deconvolution filter, but it normally won't be.

That's not correct, USM is never a deconvolution, it's a masked addition of halo. The USM operation produces a halo version layer of the edge transitions and adds that layer (halos and all) back to the source image, thus boosting the edge gradient (and overshooting the edge amplitudes). Halo is added to the image, which explains why USM always produces visible halos at relatively sharp transitions, which is also why a lot of effort is taken by USM oriented tools like Photokit sharpener to mitigate the inherent flaw in the USM approach (which was the only remedy available for film), with edge masks and Blend-if layers.

Deconvolution restores diffused signal to the original intended spatial location, and it uses the PSF to do its weighted signal redistribution/contraction.
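A tiny 1-D numpy sketch makes the overshoot visible (the sigma and amount values are arbitrary):

Code:
import numpy as np

edge = np.repeat([0.2, 0.8], 32)                     # a step edge, 0.2 -> 0.8
x = np.arange(-6, 7)
g = np.exp(-x**2 / (2 * 1.5**2)); g /= g.sum()       # Gaussian blur, sigma = 1.5

blurred = np.convolve(edge, g, mode="same")
usm = blurred + 1.5 * (blurred - np.convolve(blurred, g, mode="same"))

mid = slice(8, -8)                                   # ignore the zero-padded ends
print(blurred[mid].min(), blurred[mid].max())        # stays within 0.2 .. 0.8
print(usm[mid].min(), usm[mid].max())                # undershoots 0.2 and overshoots 0.8: the halo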

I know it's a somewhat difficult concept to grasp for us visually oriented beings, so don't worry if it takes a while to become 'obvious'.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: hjulenissen on August 13, 2014, 06:35:28 am
Not everybody here is familiar with MatLab, so that would not help a larger audience.
Not everything can be explained to a larger audience (math, for instance). The question is what means are available that will do the job. MATLAB is one such tool; Excel formulas, Python scripts etc. are others. I tend to prefer descriptions that can be executed on a computer, as that leaves less room to leave out crucial details (researchers are experts at publishing papers with nice formulas that cannot easily be put into practice without unwritten knowledge).
Quote
The crux of the matter is that in a DSP manner Deconvolution exactly inverts the blur operation (assuming an accurate PSF model, no input noise, and high precision calculations to avoid accumulation of errors). USM only boosts the gradient of e.g. edge transitions, which will look sharp but is only partially helpful and not accurate (and prone to creating halos which are added/subtracted from those edge profiles to achieve that gradient boost).
The exact PSF of a blurred image is generally unknown (except the trivial example of intentionally blurring an image in Photoshop). Moreover, it will be different in the corners from the center, from "blue" to "red" wavelengths etc. Deconvolution will (practically) always use some approximation to the true blur kernel, either input from some source, or blindly estimated.

Neither sharpening nor deconvolution can invent information that is not there. They are limited to transforming (linear or nonlinear) their input into something that resembles the "true" signal in some sense (e.g. least squares) or simply "looks better" assuming some known deterioration model.
Quote
It's not subjective, but measurable and visually verifiable. That's why it was used to salvage the first generation of Hubble Space Telescope images taken with flawed optics.
I know the basics of convolution and deconvolution. Your post contains a lot of claims and little in the way of hands-on explanations. Why is the 2-d neighborhood weighting used in USM so fundamentally different from the 2-d weighting used in deconvolution aside from the actual weights?
Quote
No, it's not the model of the blur, but how that model is used to invert the blurring operation. USM uses a blurred overlay mask to create halo overshoots in order to boost edge gradients. Deconvolution doesn't use an overlay mask, but redistributes weighted amounts of the diffused signal in the same layer back to the intended spatial locations (it contracts blurry edges to sharpen, instead of boosting edge amplitudes to mimic sharpness).
I can't help thinking that you are missing something in the text above. What is a fair frequency-domain interpretation of USM?
Quote
More advanced algorithms usually have a regularization component that blurs low signal-to-noise amounts but fully deconvolves higher S/N pixels.
My point was that USM seems to allow just that (although probably in a crude way compared to state-of-the-art deconvolution).

It would aid my own (and probably a few others') understanding of sharpening if there were a concrete description (i.e. something other than mere words) of USM and deconvolution in the context of each other, ideally showing that deconvolution is a generalization of USM.

I believe that convolution can be described as:
y = x * h where:
x is some input signal
h is some convolution kernel
* is the convolution operator

In the frequency domain, this can be described as
Y = X · H
where X, H and Y are the frequency-domain transforms of x, h and y, and the "·" operator is regular multiplication.

If we want some output Z to resemble the original X, we could in principle just invert the (linear) blur:
Z = Y / H = X · H / H ~ X

In practice, we don't know the exact H, there might not exist an exact inverse, and there will be noise, so it may be safer to do some regularization:
Z = Y · H* / (|H|^2 + delta) ~ X

where H* is the complex conjugate of H and delta is some "small" number that avoids division by zero and infinite gain.

This is about where my limited understanding of deconvolution stops. You might want to tailor the pseudoinverse with respect to (any) knowledge about noise and/or signal spectrum (a la Wiener filtering), but I have no idea how blind deconvolution finds a suitable inverse.
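For concreteness, a minimal numpy sketch of that regularised inverse (delta here is just a hand-picked constant, not a proper Wiener noise/signal term):

Code:
import numpy as np

def regularised_inverse(y, h, delta=1e-3):
    # Z = Y * conj(H) / (|H|^2 + delta): behaves like 1/H where H is strong,
    # but rolls the gain off where H (and hence the recoverable signal) is weak.
    n = y.size
    Y = np.fft.fft(y)
    H = np.fft.fft(h, n=n)
    Z = Y * np.conj(H) / (np.abs(H)**2 + delta)
    return np.real(np.fft.ifft(Z))

# usage (illustrative): x_hat = regularised_inverse(blurred_row, blur_kernel, delta=1e-3)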

Now, how might USM be expressed in this context, and what would be the fundamental difference?

-h
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 13, 2014, 06:46:26 am
That's not correct, USM is never a deconvolution, it's a masked addition of halo. The USM operation produces a halo version layer of the edge transitions and adds that layer (halos and all) back to the source image, thus boosting the edge gradient (and overshooting the edge amplitudes). Halo is added to the image, which explains why USM always produces visible halos at relatively sharp transitions, which is also why a lot of effort is taken by USM oriented tools like Photokit sharpener to mitigate the inherent flaw in the USM approach (which was the only remedy available for film), with edge masks and Blend-if layers.


Sorry, my mistake ... in that I assume, probably incorrectly, that the 'USM' implementation in Photoshop etc., doesn't actually use the traditional blur/subtract/overlay type method, but uses something more like one of the kernels above, as that would give far more flexibility and accuracy in the implementation.  If that was the case, then would it not be correct to say that this sort of filter could either be a sharpening filter or a deconvolution filter, depending on whether or not it was (by chance or by trial and error) the inverse of the convolution?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 13, 2014, 07:12:47 am
Now, how might USM be expressed in this context, and what would be the fundamental difference?


Well, it would be 'something' like (~(g*h-g))*g I would think. Or in plain English:

Blur g by h, subtract g from it, invert and apply this to the original signal. At any rate, nothing like a deconvolution (which would be something like ~h*(g*h), I guess??).

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 13, 2014, 07:20:36 am
Sorry, my mistake ... in that I assume, probably incorrectly, that the 'USM' implementation in Photoshop etc., doesn't actually use the traditional blur/subtract/overlay type method, but uses something more like one of the kernels above, as that would give far more flexibility and accuracy in the implementation.

No problem. The USM method used by Photoshop, according to attempted reverse-engineering descriptions I've seen on the internet, does somewhat follow the traditional sandwiching method used with film, but Adobe no doubt cuts some corners along the way to speed things up. It remains a crude way to mimic sharpness by adding halo overshoots (the radius determines the width of the halo, the amount determines the contrast of the halo, and the threshold is a limiter). The article by Doug Kerr mentioned earlier (http://dougkerr.net/Pumpkin/articles/Unsharp_Mask.pdf) explains that process quite well.

Quote
If that was the case, then would it not be correct to say that this sort of filter could either be a sharpening filter or a deconvolution filter, depending on whether or not it was (by chance or by trial and error) the inverse of the convolution?

It is in fact extremely unlikely (virtually impossible) that simply adding a halo facsimile of the original image will invert a convolution (blur) operation. USM is only trying to fool us into believing something is sharp, because it adds local contrast (and halos), which is very vaguely similar to what our eyes do at sharp edges.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 13, 2014, 07:24:09 am
Hi Robert,

Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.


Hi Bart,

I've tried your PSF generator and I must be using it incorrectly, as the figures I get are very different to yours.  See here:

(http://www.irelandupclose.com/customer/LL/deconv.jpg)

I don't understand 'fill factor' for example - and I just chose the pixel value to be as close to 999 as possible.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 13, 2014, 07:28:08 am

It is in fact extremely unlikely (virtually impossible) that simply adding a halo facsimile of the original image will invert a convolution (blur) operation. USM is only trying to fool us into believing something is sharp, because it adds local contrast (and halos), which is very vaguely similar to what our eyes do at sharp edges.


Yes, I understand ... when I said 'a filter like that' I was referring to a convolution kernel operation, not the traditional USM overlay.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 13, 2014, 08:14:17 am
Why is the 2-d neighborhood weighting used in USM so fundamentally different from the 2-d weighting used in deconvolution aside from the actual weights?

It's not a difference in weighting, but what it is used for. In the case of USM, it is used to create a halo facsimile of our blurred image which is added back to the image.

Quote
What is a fair frequency-domain interpretation of USM?

I don't think one can predict the effect that adding two images has on the frequency domain representation of that 'sandwich'. It's more like a contrast-adjusted version of the image: the spatial frequencies do not really change, just their amplitudes.

Quote
It would aid my own (and probably a few others') understanding of sharpening if there were a concrete description (i.e. something other than mere words) of USM and deconvolution in the context of each other, ideally showing that deconvolution is a generalization of USM.

The problem is that they are different beasts altogether; nothing connects their operations. USM adds a contrast enhancing layer, deconvolution rearranges bits of spatial frequencies that got scattered to neighboring pixels. Again, check out the explanations given by Doug Kerr's article (http://dougkerr.net/Pumpkin/articles/Unsharp_Mask.pdf) and the Cambridge in Colour article (http://www.cambridgeincolour.com/tutorials/unsharp-mask.htm). The latter literally states: "An unsharp mask improves sharpness by increasing acutance, although resolution remains the same". Sharpness is a perceptual qualification; resolution is an objectively measurable quantification.

Quote
I believe that convolution can be described as:
[...]
This is about where my limited understanding of deconvolution stops.

Yes, that's about correct.

Quote
You might want to tailor the pseudoinverse with regard to (any) knowledge about noise and/or signal spectrum (a la Wiener filtering), but I have no idea how blind deconvolution finds a suitable inverse.

That remains a challenge, especially because it is complicated by the fact, as you noted before, that the blur function (PSF) is spatially variant across the image, and noise complicates things (distinguishing between random noise and the photon shot noise of the signal requires statistical probabilities; it's not exact). The usual approach (other than using prior knowledge/calibration of the imaging system, like DxO does with their Raw converter, which also calibrates for different focus distances) is trial and error: just try different shapes and sizes of PSFs and 'see' what produces 'better' results (according to certain criteria). Fortunately, Gaussian PSF shapes do a reasonably good job, which leaves a bit less uncertainty, but it remains an ill-posed problem to solve. That's because multiple mathematical error-minimization solutions are possible, with no way to predict which one will produce the best-looking result.
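
As a rough sketch of that trial-and-error idea (Python, purely illustrative - this is not the code any Raw converter actually uses): try a few candidate Gaussian sigmas, deconvolve with a crude regularized inverse filter, and score each result with some sharpness criterion. Real implementations use far more careful criteria, but the loop structure is the point.

import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_psf(sigma, size=15):
    # Simple sampled 2D Gaussian, normalized to sum 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def inverse_filter(image, psf, k=0.01):
    # Frequency-domain division, with a small constant k so that
    # near-zero PSF frequencies do not blow up the noise
    pad = np.zeros_like(image)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    F = np.fft.fft2(image) * np.conj(H) / (np.abs(H)**2 + k)
    return np.real(np.fft.ifft2(F))

def gradient_energy(image):
    gy, gx = np.gradient(image)
    return np.mean(gx**2 + gy**2)   # a very crude 'sharpness' score

rng = np.random.default_rng(0)
scene = rng.random((256, 256))
blurred = gaussian_filter(scene, 1.2)   # pretend we don't know the 1.2

for sigma in (0.6, 0.8, 1.0, 1.2, 1.5):
    restored = inverse_filter(blurred, gaussian_psf(sigma))
    print(sigma, round(gradient_energy(restored), 4))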

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 13, 2014, 08:41:45 am
Hi Bart,

I've tried your PSF generator and I'm using it incorrectly as the figures I get are very different to yours.

Not really all that different, although I did mention that I tweaked the PS Custom filter a bit (to beat it into submission). I tend to use the larger fill-factor percentages, because they produce a shape (slightly less peaked) that is closer to how a digital sensor actually samples the Gaussian blur.

Quote
I don't understand 'fill factor' for example - and I just chose the pixel value to be as close to 999 as possible.

The fill factor tries to account for the aperture sampling of the sensels of our digital cameras. Instead of a point sample (which produces a pure 2D Gaussian), a (sensel) fill-factor of 100% would use a square pixel aperture to sample the 2D Gaussian for each sensel without gaps between the sensels (as with gap-less micro-lenses). It's just a means to approximate the actual sensel sampling area a bit more realistically, although it's rarely a perfect square.

On top of that, I adjusted the PS Custom filter kernel values a bit to improve the limited calculation precision and reduce potential halos from mis-matched PSF radius/shape, but your values would produce quite similar results, although probably with a different Custom filter scale value than I ultimately arrived at. If only that filter would allow larger kernels and floating point number values as input, we could literally copy values at a scale of 1.0 ...

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 13, 2014, 02:03:18 pm
Not really all that different, although I did mention that I tweaked the PS Custom filter a bit (to beat it into submission). I tend to use the larger fill-factor percentages, because they produce a shape (slightly less peaked) that is closer to how a digital sensor actually samples the Gaussian blur.

The fill factor tries to account for the aperture sampling of the sensels of our digital cameras. Instead of a point sample (which produces a pure 2D Gaussian), a (sensel) fill-factor of 100% would use a square pixel aperture to sample the 2D Gaussian for each sensel without gaps between the sensels (as with gap-less micro-lenses). It's just a means to approximate the actual sensel sampling area a bit more realistically, although it's rarely a perfect square.

On top of that, I adjusted the PS Custom filter kernel values a bit to improve the limited calculation precision and reduce potential halos from mis-matched PSF radius/shape, but your values would produce quite similar results, although probably with a different Custom filter scale value than I ultimately arrived at. If only that filter would allow larger kernels and floating point number values as input, we could literally copy values at a scale of 1.0 ...


Thanks for the explanation Bart (although I'm not sure to what extent I understand it - but I'll take your word for it that a fill factor of 100% will give a square pixel aperture rather than a round one).

Going back to more basic basics, and looking at this image:

(http://www.irelandupclose.com/customer/LL/unbt.jpg)

I expected that the second convolution kernel would deconvolve the first one - but clearly it doesn't.  The reason seems to be that at the edges the subtraction of black is greater than the addition of grey, so we get the dreaded halo.

I messed around a bit with your PSF generator, but I could not come up with a proper convolution/deconvolution.  Could you explain what is going wrong?  

Robert

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: ppmax2 on August 14, 2014, 12:09:09 am
I'd like to post a real world sample of deconvolution applied to a challenging image. This shot was taken at sunset and exposure was set so the red channel wouldn't clip (shot with 5D3). As such the backside of this telescope was noisy and underexposed. In the unedited image, the shadow areas were a mush of blotchy blues and reds. Although these images don't show it, the upper left portion of the frame is a firehose of deep purples, reds, and oranges with flecks of high-intensity light reflecting off the clouds. (this shot was taken at 14.5K feet on Mauna Kea). Retaining the vibrance of the sky, while pulling detail from the backside of this telescope was my goal.

I'd like to thank Fine_Art who helped me with this image, and provided some great guidance during my first experiences with RawTherapee. After helping me suppress noise in the shadows (RT has great tools for this) he then advised me to ditch USM and try deconvolution instead. I've since worked on this image quite a bit and am blown away by what RT could recover.

First image is LR, no settings (LR-Import.JPG).

2nd image is LR with white balance, tone curve to bump up shadows a bit so that RGB+L values in various regions in the image are similar to those same regions in RawTherapee, with USM and Noise reduction applied. Also added lens correction, and adjusted CA. I don't think I can be accused of over sharpening...I tried purposefully to avoid USM halos. (LR-Final.JPG)

3rd image is RawTherapee with tone, color, deconvolution, noise reduction, lens correction, CA, etc. (RT-Final.JPG)

FYI: these are screen captures, not exports.

I am sure someone here could do better vs my LR Final...but I doubt anyone could do better in LR vs. the RT-Final. I'm happy to post the CR2 if anyone wants to take a shot.

There are regions in the LR-Final where virtually all detail is lost, even with only moderate noise reduction. For comparison, in the RT-Final, each vertical line on the surface of the two pillars is clear and distinct...yet the sky is buttery smooth. Some will accuse me of too much NR in the RT-Final image...but it looks fantastic on screen and in print  ;)

FWIW: glad to see someone has referenced Roger Clark's excellent articles...which are worth a read in toto. I'm sure he doesn't need any cheerleaders, but this fellow has a substantial background in digital photography and digital image processing. From his site:
Quote
Dr. Clark is a science team member on the Cassini mission to Saturn, Visual and Infrared Mapping Spectrometer (VIMS) http://wwwvims.lpl.arizona.edu, a Co-Investigator on the Mars Reconnaissance Orbiter, Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) team, which is currently orbiting Mars, and a Co-Investigator on the Moon Mineral Mapper (M3) http://m3.jpl.nasa.gov , on the Indian Chandrayaan-1 mission which orbited the moon (November, 2008 - August, 2009). He was also a Co-Investigator on the Thermal Emission Spectrometer (TES) http://tes.asu.edu team on the Mars Global Surveyor, 1997-2006.


I'm not promoting one workflow/technology/tool over another. However, I think the result I achieved (with help from Fine_Art) speaks for itself.

PP
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Schewe on August 14, 2014, 12:40:31 am
I'm not promoting one workflow/technology/tool over another. However, I think the result I achieved (with help from Fine_Art) speaks for itself.

Really hard to separate demosaicing from sharpening. RawTherapee has a different demosaicing than LR. Part of what I'm seeing is the original demosaicing plus sharpening.

So, it's really not an apples-to-apples comparison...
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 14, 2014, 12:58:16 am
Really hard to separate demosaicing from sharpening. RawTherapee has a different demosaicing than LR. Part of what I'm seeing is the original demosaicing plus sharpening.

So, it's really not an apples-to-apples comparison...

Yes, the RT AMAZE, written by this forum's Emil M., does give a big advantage to RT. The deconvolution has better information to start with.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: ppmax2 on August 14, 2014, 02:10:40 am
Quote
Really hard to separate demosaicing from sharpening. RawTherapee has a different demosaicing than LR. Part of what I'm seeing is the original demosaicing plus sharpening.

So, it's really not an apples-to-apples comparison...


But is no comparison valid? I'm not dismissing your point...however:

As stated in my post, I tried USM in RT - which, as you point out, uses a different demosaicing algorithm - and was encouraged by Fine_Art to try deconvolution instead. Given the same data, deconvolution produced better results. Perhaps I could have shown this; perhaps I'll post a USM sample later. But let's not lose sight of the bigger picture...

For all intents and purposes each tool is a black box that applies transforms to input data. While the methods each black box employs may be interesting and ripe for discussion, the final result is what matters most to me. To what degree did the demosaicer contribute to the end result vs deconvolution? I admit I don't really care if it was 10%, 49%, or 99%. That question is better answered by someone who is more interested in pixels vs pictures. That discussion quickly devolves into hair-splitting. This is not to say that factoring out the demosaicer from the equation is without merit. But these algorithms are not reasonably separable or transposable between tools...and the USM interface in RT is different vs LR as well...so I don't think it's reasonably possible to compare apples to apples. Implementations differ, and implementation matters.

But to be fair to your point I'll backtrack from my OP and state that the images are an example of what RT's deconvolution + demosaicer were able to achieve.

I think the RT render has several characteristics that are objectively superior to results I was able to achieve in other similar tools (LR, Aperture, C1). Each tool has its strengths and weaknesses, and it's too bad it's not reasonably possible to combine the best features of each tool into a single app or coherent workflow. Until then, I'll use the tool that yields the best result for each image I process and suffer the inconvenience of doing so.

Pp
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 14, 2014, 05:12:55 am
Going back to more basic basics, and looking at this image:

(http://www.irelandupclose.com/customer/LL/unbt.jpg)

I expected that the second convolution kernel would deconvolve the first one - but clearly it doesn't.  The reason seems to be that at the edges the subtraction of black is greater than the addition of grey, so we get the dreaded halo.

Hi Robert,

There are several possible causes for the imperfect restoration, part of which may be due to Photoshop's somewhat crude implementation of the function. A 3x3 kernel with all 1's will effectively average away all distinction between individual pixels, so it is lossy (and hard to invert without artifacts like ringing). Then there is the rounding/truncation of intermediate value accuracy as the kernel contributions are added, the rounding/truncation of the intermediate blurred dot version's pixels, and a possible issue with clipping of black values (no negative pixel values are possible). In other words, it is a difficult task, if it is possible at all.
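
A quick way to see how extreme the problem is (just a numerical illustration in Python, nothing to do with how Photoshop works internally): the frequency response of the 3x3 all-1's averaging kernel passes through near-zero values, so a straight inverse filter would have to divide by almost nothing at those spatial frequencies, and any rounding error or noise there gets amplified enormously.

import numpy as np

box = np.ones((3, 3)) / 9.0            # the 3x3 averaging kernel
H = np.fft.fft2(box, s=(256, 256))     # its (zero-padded) frequency response

print("smallest |H|:", np.abs(H).min())                   # nearly zero
print("largest inverse gain:", (1.0 / np.abs(H)).max())   # huge amplification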

The dot seems to be upsampled, so I cannot check what a different deconvolver, e.g. the one in ImageJ (http://imagej.nih.gov/ij/download.html), which is a much better implementation, would have done. That would allow me to estimate the influence of the calculation accuracy, but it will remain a rather impossible deconvolution.

As a compromise, you can increase the central kernel's value (and adjust the scale) so that the pixel 'under investigation' contributes a proportionally larger part to the total solution, and the restoration attempt is less rigorous (which should 'tame' the edge overshoot). But again, such a crude method and limited precision will not do a perfect restoration. One would get better results by performing such calculations in floating point, and not in the spatial domain but in the frequency domain, but that is a whole other level of abstraction if one is not familiar with that.

A more realistic deconvolution was the one with the 0.7 Gaussian blur kernel that I shared. Natural images, such as your power lines shot, have a minimum unavoidable amount of blur which can largely (though not totally) be reversed with common deconvolution methods, and better implementations than a simple deconvolution (e.g. FocusMagic or a Richardson-Lucy deconvolution) achieve even better results. Make sure to use at least 16-bit/channel image data for these operations; it allows more precise calculations and reduces the effects of intermediate round-off issues.

Quote
I messed around a bit with your PSF generator, but I could not come up with a proper convolution/deconvolution.  Could you explain what is going wrong?

I'm not sure what you tried to do, but my PSF generator tool is aimed at creating Gaussian shaped PSF (de)convolution kernels, or high-pass mask filters, as is useful for processing real images rather than synthetic CGI images. It is not intended for averaging filters which eliminate most subtle signal differences like the one you used. When you create a Gaussian blur PSF, and deconvolve with a Gaussian deconvolution kernel that matches that blur PSF, the restoration will be better.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 14, 2014, 07:40:40 am
The dot seems to be upsampled, so I cannot check what a different deconvolver, e.g. the one in ImageJ (http://imagej.nih.gov/ij/download.html), which is a much better implementation, would have done. That would allow me to estimate the influence of the calculation accuracy, but it will remain a rather impossible deconvolution.


Hi Bart,

I thought that the problem might lie along the lines you've pointed out.

The image that I posted is a screen capture, so it's way upsampled.  The original that I tried to deconvolve is just a 4-pixel black square on a gray background.

I tried it with the example macro in ImageJ and this restores the square perfectly.  I also played around with a couple of images, and for anyone who doubts the power of deconvolution (or who thinks deconvolution and USM is the same sort of thing), here is an example from the D800Pine image:

(http://www.irelandupclose.com/customer/LL/dconv.jpg)

It would be very interesting to play around with ImageJ with PSFs with varying Gaussian Blur amounts.  If you have reasonably understandable steps that can be followed (by me) in ImageJ I would be happy to give it a go.  I've never used ImageJ before, so at this stage I'm just stumbling around in the dark with it :).

I have to thank you for all the information and help!  You are being very generous with your time and knowledge.

Robert

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: hjulenissen on August 14, 2014, 09:15:56 am
or who thinks deconvolution and USM is the same sort of thing
I am speculating that USM and deconvolution might be "the same sort of thing" in the same way that a Fiat and a Ferrari are both Italian cars (they both have four wheels and an engine, and it can bring insight to relate them to each other).

I am not questioning that deconvolution (when properly executed) can give better results than USM.

-h
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 14, 2014, 09:28:02 am
I tried it with the example macro in ImageJ and this restores the square perfectly.

Hi Robert,

I'm not sure which example Macro you used, or whether you are referring to the Process/Filters/Convolve... menu option.

Quote
It would be very interesting to play around with ImageJ with PSFs with varying Gaussian Blur amounts.  If you have reasonably understandable steps that can be followed (by me) in ImageJ I would be happy to give it a go.

If you need an image of a custom kernel, ImageJ can import a plain text file with the kernel values (space separated, no commas or such), e.g. the ones you can Copy/paste from my PSF Generator tool. Use the File/Import/Text Image... menu option to import text as an image.
For PSFs (to blur with, or for plugins that want a PSF as input) you use a regular PSF; to convolve with a deconvolution kernel, generate that kernel instead and save it as a text file.
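
For example (a minimal Python sketch, assuming the space-separated format described above), a kernel can be written to a text file like this and then loaded via the File/Import/Text Image... menu option:

import numpy as np

sigma, size = 0.7, 5
ax = np.arange(size) - size // 2
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
psf /= psf.sum()                        # normalize the kernel to sum 1

# One row per line, values separated by spaces, no commas
np.savetxt("gaussian_psf_0p7.txt", psf, fmt="%.8f")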

Quote
I have to thank you for all the information and help!  You are being very generous with your time and knowledge.

You're welcome.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Eyeball on August 14, 2014, 09:53:30 am
I am enjoying this thread.  I have been a deconvolution fan for quite a while and really like Focus Magic.  I hope you won't mind if I chime in with a somewhat less technical question/observation.

One of the frequent criticisms that I have seen in the past of deconvolution is that "you need to know the Point Spread Function to use it properly."

While I understand at a basic level that the PSF is indeed important, I always found that supposed criticism to be a bit of a red herring - mainly because it makes it sound like you need to pull out an Excel spreadsheet or MATLAB to use it properly.  In practical use, however, it can be as simple as using your eyes with something like FM or even letting software like FM make an educated guess for you based on a selected sample.  And while correction of lens defects and properties admittedly gets into more complicated territory, I would think that "capture sharpening", in particular, can be handled in a pretty straightforward manner where deconvolution is concerned.

Anyway, back to my "just use your eyes" comment.  One big difference I have noticed between FM and the Adobe tools that reportedly use some degree of deconvolution (PS Smart Sharpen* and LR when Detail>50) is that FM makes it super-easy/obvious where the ideal radius sweet-spot is and the Adobe products do not.  FM will start to show obvious ringing when you go too far but the Adobe tools will just start maxing out shadows and highlights.  The Adobe tools also seem to have a difficult time mixing deconvolution with noise suppresion where as FM almost always seems to do a great job of magically differentiating between fine detail and noise.

Any ideas/info on why this is?

If I was guessing, I would say the Adobe tools are mixing deconvolution with other techniques, probably in an effort to control halos, but that is a total guess.

* I don't have CC so I have not seen the latest improvements of Smart Sharpen in CC.

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 14, 2014, 10:07:52 am
I am speculating that USM and deconvolution might be "the same sort of thing" in the same way that a Fiat and a Ferrari are both Italian cars (they both have four wheels and an engine, and it can bring insight to relate them to each other).

I am not questioning that deconvolution (when properly executed) can give better results than USM.

-h

I am seriously not an expert in imaging science, but it would seem to me that a better analogy between USM and deconvolution would be something like a blanket and a radiator, in that the blanket covers up the fact that there's not enough heat in the room whereas the radiator puts heat back in (heat being detail ... which is a bit of a pity because heat is noise, harking back to my thermodynamics :)).

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 14, 2014, 10:17:08 am
Hi Robert,

I'm not sure which example Macro you used, or whether you are referring to the Process/Filters/Convolve... menu option.


Hi Bart,

I can't remember where I got the macro from (it's in there with ImageJ somewhere, obviously), but here it is:
// This macro demonstrates the use of frequency domain convolution
// and deconvolution. It opens a samples image, creates a point spread
// function (PSF), adds some noise (*), blurs the image by convolving it
// with the PSF, then de-blurs it by deconvolving it with the same PSF.
//
// * Why add noise? - Robert Dougherty
// Regarding adding noise to the PSF, deconvolution works by
// dividing by the PSF in the frequency domain.  A Gaussian
// function is very smooth, so its Fourier, (um, Hartley)
// components decrease rapidly as the frequency increases.  (A
// Gaussian is special in that its transform is also a
// Gaussian.)  The highest frequency components are nearly zero.
// When FD Math divides by these nearly-zero components, noise
// amplification occurs.  The noise added to the PSF has more
// or less uniform spectral content, so the high frequency
// components of the modified PSF are no longer near zero,
// unless it is an unlikely accident.

  if (!isOpen("bridge.gif")) run("Bridge (174K)");
  if (isOpen("PSF")) {selectImage("PSF"); close();}
  if (isOpen("Blurred")) {selectImage("Blurred"); close();}
  if (isOpen("Deblurred")) {selectImage("Deblurred"); close();}
  newImage("PSF", "8-bit black", 512, 512, 1);
  makeOval(246, 246, 20, 20);
  setColor(255);
  fill();
  run("Select None");
  run("Gaussian Blur...", "radius=8");
  run("Add Specified Noise...", "standard=2");
  run("FD Math...", "image1=bridge.gif operation=Convolve image2=PSF result=Blurred do");
  run("FD Math...", "image1=Blurred operation=Deconvolve image2=PSF result=Deblurred do");


I haven't looked into the FD Math code, but it appears to be using FFTs.  I just commented out the Bridge.gif line and opened my own version.
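
For anyone who prefers to see the same steps outside ImageJ, here is a rough Python equivalent of what the FD Math calls appear to do (only a sketch of the principle, not ImageJ's actual code): convolve by multiplying FFTs, deconvolve by dividing them, with a little noise added to the PSF for the reason given in the macro's comments.

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
image = rng.random((512, 512))

# Build the PSF the same way the macro does: a filled disc, Gaussian-blurred,
# plus a little noise so its high-frequency components are not near zero
psf = np.zeros((512, 512))
yy, xx = np.ogrid[:512, :512]
psf[(xx - 256)**2 + (yy - 256)**2 <= 10**2] = 255.0
psf = ndimage.gaussian_filter(psf, 8)
psf += rng.normal(0, 2, psf.shape)
psf = np.fft.ifftshift(psf)             # move the PSF centre to the origin
psf /= psf.sum()

H = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))      # Convolve
deblurred = np.real(np.fft.ifft2(np.fft.fft2(blurred) / H))  # Deconvolve

print("max restoration error:", np.abs(deblurred - image).max())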

I'll have a go at what you suggest with your custom kernel.  At least with ImageJ you can use a bigger kernel.  BTW ... do you ever use ImageJ to 'sharpen' your own images?

Cheers

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mark D Segal on August 14, 2014, 10:22:06 am
I am seriously not an expert in imaging science,

Robert

Neither am I, but I have been reading these posts with interest, and what I am picking up from Bart's description of the basic algorithms underlying Deconvolution versus acutance is that these are indeed different mathematical procedures that can therefore be expected to deliver differing results. If I'm wrong about that, I'd like to be so advised. If in order to implement those different procedures differences in the demosaic algorithm are also required, so be it. I too am results oriented, but I think Jeff's point is an important one, in particular from a developer perspective, because it is necessary to have a proper allocation of cause and effect when more than one variable is deployed to achieve an outcome; even from a user perspective this kind of knowledge can help one to make choices. The samples that ppmax2 posted are interesting. There's no question that the vertical lines down the building structure are better separated in the RT with deconvolution process than in the LR process. While we don't know - can't tell - to what extent LR's settings were optimized to get the best possible output, I think it fair to presume from what he/she described that ppmax2 was giving it his/her best shot.  
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 14, 2014, 11:29:45 am
I am enjoying this thread.  I have been a deconvolution fan for quite a while and really like Focus Magic.  I hope you won't mind if I chime in with a somewhat less technical question/observation.

One of the frequent criticisms that I have seen in the past of deconvolution is that "you need to know the Point Spread Function to use it properly."

While I understand at a basic level that the PSF is indeed important, I always found that supposed criticism to be a bit of a red herring - mainly because it makes it sound like you need to pull out an Excel spreadsheet or MATLAB to use it properly.  In practical use, however, it can be as simple as using your eyes with something like FM or even letting software like FM make an educated guess for you based on a selected sample.  And while correction of lens defects and properties admittedly gets into more complicated territory, I would think that "capture sharpening", in particular, can be handled in a pretty straightforward manner where deconvolution is concerned.

Hi,

That's correct. It's often used as a red herring, while in practice even a somewhat less than optimal PSF will already offer a huge improvement. Of course a better estimate will produce an even better result.

Quote
Anyway, back to my "just use your eyes" comment.  One big difference I have noticed between FM and the Adobe tools that reportedly use some degree of deconvolution (PS Smart Sharpen* and LR when Detail>50) is that FM makes it super-easy/obvious where the ideal radius sweet-spot is and the Adobe products do not.  FM will start to show obvious ringing when you go too far but the Adobe tools will just start maxing out shadows and highlights.  The Adobe tools also seem to have a difficult time mixing deconvolution with noise suppresion where as FM almost always seems to do a great job of magically differentiating between fine detail and noise.

Any ideas/info on why this is?

Not really, other than that Adobe LR/ACR probably uses a relatively simple deconvolution method (for reasons of execution speed) that tends to create artifacts quite easily when pushed a little too far. Of course there are a lot of pitfalls to avoid with deconvolution, but that should not be a major issue for an imaging software producer. A really high quality deconvolution algorithm is rather slow and may require quite a bit of memory to avoid swapping to disk, so that could explain the choice for a lesser alternative.

FocusMagic on the other hand, is a specialized plug-in and it does pull it off within a reasonable amount of time. I'm also looking forward to a newer more powerful version of Topaz Labs Infocus (which currently is a bit too sensitive regarding the creation of artifacts).

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 14, 2014, 12:04:20 pm
Hi Bart,

I can't remember where I got the macro from (it's in there with ImageJ somewhere, obviously),

Ah, I see. It's from a link in the help documentation of the Process/FFT/FD Math menu option. So it is using the built-in FFT functionality. While that is one of many ways to skin a cat, it is a rather advanced option that requires a reasonably good understanding of what it does.

Quote
I haven't looked into the FD Math code, but it appears to be using FFTs.

Indeed.

Quote
BTW ... do you ever use ImageJ to 'sharpen' your own images?

I use it mostly for testing some procedures, producing images from text files, and more things like that, but sharpening is usually left to FocusMagic because it fits more conveniently in my workflow. I have other applications for specific deconvolution tasks.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 14, 2014, 12:56:32 pm

... other than that Adobe LR/ACR probably uses a relatively simple deconvolution method

Do we have any real evidence that deconvolution is used at all in LR or Smart Sharpen?  I tried the D800Pine image with a Gaussian blur of 1, and if I apply the LR sharpen (via the Camera Raw Filter) with Detail set low I get a reasonable sharpening effect; with Detail set to maximum (with the same maximum Amount setting and Radius set to 1 to mirror the GB of 1) I get this, viewed at 200%:

(http://www.irelandupclose.com/customer/LL/lrtest.jpg)

Which looks to me like a massive amount of contrast has been added to the detail so that we end up with a posterized look ... and doesn't look to me like a deconvolution at all.

Also, moving the Detail slider up in steps of 10 just shows an increasing amount of this coarsening  of detail; there is no point at which there is a noticeable change in processing (from USM-type to deconvolution-type).  Also, notice the noise on the lamp post.

I know Jeff has said that this is so - and I don't dispute his insider knowledge of Photoshop development - but it would be good to see how the sharpening transitions from USM to deconvolution, because I certainly can't see it.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 14, 2014, 01:05:16 pm
I have other applications for specific deconvolution tasks.

You shouldn't say things like that if you don't want me to bug you for more information  :)

Actually, I was wondering if there's a procedure for creating a PSF by photographing a point light source (fixed focal length, best focus etc) ... using a torch through a pin-hole (in a darkened room or at night), or something like that ... and translating this into a convolution kernel for this camera/lens combination?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Eyeball on August 14, 2014, 01:26:13 pm
I'm also looking forward to a newer more powerful version of Topaz Labs Infocus (which currently is a bit too sensitive regarding the creation of artifacts).

Yes, I have Infocus, too, and I had high expectations for it when it first came out - primarily due to delays in FM coming out in 64-bit.  But I find it much more finicky to use and I find it has a pretty strong bias to hard edges - something that has also bothered me about Smart Sharpen in PS.  Topaz also appears to have almost forgotten about that product since it was first released.

The hard edges vs. softer texture differentiation is a big deal to me and I wish more developers would take it into consideration.  I think it is much more useful than the shadows/highlights adjustments in PS Smart Sharpen, for example.  I think there is still a lot of legacy thinking that gets applied where people think they need to restrict sharpening to just hard edges or away from dark areas of the image.  The noise in older cameras was I believe what prompted that thinking but IMO it is needed much less with today's cameras.

It would be nice to control it though, and while the LR Detail adjustment seems to have that goal in mind, it seems hard to balance it sometimes with noise reduction.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Eyeball on August 14, 2014, 01:31:56 pm
Which looks to me like a massive amount of contrast has been added to the detail so that we end up with a posterized look ... and doesn't look to me like a deconvolution at all.

Also, moving the Detail slider up in steps of 10 just shows an increasing amount of this coarsening  of detail; there is no point at which there is a noticeable change in processing (from USM-type to deconvolution-type).  Also, notice the noise on the lamp post.

I know Jeff has said that this is so - and I don't dispute his insider knowledge of Photoshop development - but it would be good to see how the sharpening transitions from USM to deconvolution, because I certainly can't see it.

That is exactly what I was referring to in my earlier post.  I have often said to myself exactly what you did: "I know you say it is but is it REALLY using deconvolution?"  :)

Eric would be the man to know I guess, although I'm not sure how often he checks out more esoteric threads like this one.  In fact, in my mind Eric is the source of the ">50% Detail uses deconvolution in LR" understanding although I'm not sure I could link a direct quote.  If not here, maybe on the Adobe forums.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 14, 2014, 02:04:09 pm
That is exactly what I was referring to in my earlier post.  I have often said to myself exactly what you did: "I know you say it is but is it REALLY using deconvolution?"  :)

Eric would be the man to know I guess, although I'm not sure how often he checks out more esoteric threads like this one.  In fact, in my mind Eric is the source of the ">50% Detail uses deconvolution in LR" understanding although I'm not sure I could link a direct quote.  If not here, maybe on the Adobe forums.

Hi,

I've sent him an email - hopefully he will give us an explanation.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 15, 2014, 12:03:06 am
Hi Bart,

I thought that the problem might lie along the lines you've pointed out.

The image that I posted is a screen capture, so it's way upsampled.  The original that I tried to deconvolve is just a 4-pixel black square on a gray background.

I tried it with the example macro in ImageJ and this restores the square perfectly.  I also played around with a couple of images, and for anyone who doubts the power of deconvolution (or who thinks deconvolution and USM is the same sort of thing), here is an example from the D800Pine image:

(http://www.irelandupclose.com/customer/LL/dconv.jpg)

It would be very interesting to play around with ImageJ with PSFs with varying Gaussian Blur amounts.  If you have reasonably understandable steps that can be followed (by me) in ImageJ I would be happy to give it a go.  I've never used ImageJ before, so at this stage I'm just stumbling around in the dark with it :).

I have to thank you for all the information and help!  You are being very generous with your time and knowledge.

Robert



Here is my attempt to deconvolve the same section of the image. It needs more work.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: hjulenissen on August 15, 2014, 03:46:38 am
I am seriously not an expert in imaging science, but it would seem to me that a better analogy between USM and deconvolution would be something like a blanket and a radiator, in that the blanket covers up the fact that there's not enough heat in the room whereas the radiator puts heat back in (heat being detail ... which is a bit of a pity because heat is noise, harking back to my thermodynamics :)).
And my claim has been that what seems to be your underlying assumption is wrong. Not that I blame you; the same claim seems to reverberate all over the net: "deconvolution recreates detail, USM fakes detail". Let me modify your analogy: the classical radiator has a single variable, controlled by the user, and no thermometer to be seen anywhere. Getting the right temperature can be challenging. A more modern radiator might have one or more temperature probes, and thus can make more well-informed choices.

I believe that the aforementioned claim is not supported by an analysis of what USM (in various incarnations) does compared to deconvolution. Information cannot be made out of nothing (Shannon & friends). These methods can only transform information present at their input in a way that more closely resembles some assumed reference, given some assumed degradation. When USM uses a windowed Gaussian subtracted from the image itself, this is (in effect) a convolution with a single kernel, seemingly the linear-phase complementary filter. Thus, the sharpening used in USM can perhaps be described as inverting the implicitly assumed Gaussian image degradation - a function that (of course) can be described in the frequency domain. The nonlinearity does complicate the analysis, but I think that the same is true for the regularization used in deconvolution.

This description might prove instructive for relatively "small-scale" USM parameters ("sharpening"), while larger-scale "local contrast" modification might be more easily comprehended in the spatial domain?

Thus, my claim (and I don't have the maths to back it up) is that USM is very similar to (naive) deconvolution, and that both can be described as inverting an implicit/explicit model of the image degradation. The most important difference seems to be that USM practically always has a fixed kernel (of variable sigma), while deconvolution tends to have a highly parametric (or even blindly estimated) kernel, thus giving more parameters to tweak and (if chosen wisely) better results. It seems that practical deconvolution tends to use nonlinear methods, e.g. to satisfy the simultaneous (and contradicting) goals of detail enhancement and noise suppression. These may well give better numerical/perceived compromises, but it does not (in my mind) make it right to claim that "deconvolution recreates detail, while USM fakes it"
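
A quick numerical check of the 'single kernel' point (my sketch, not a proof, and it assumes the simple no-threshold form of USM): image + amount*(image - gaussian_blur(image)) comes out the same as one convolution with the kernel (1+amount)*delta - amount*Gaussian. Thresholds and masking are what make real-world USM non-linear; without them it really is just one fixed kernel.

import numpy as np
from scipy.ndimage import gaussian_filter, convolve

rng = np.random.default_rng(2)
image = rng.random((128, 128))
amount, sigma = 1.5, 1.0

# USM the usual way: add back the difference with a blurred copy
usm = image + amount * (image - gaussian_filter(image, sigma))

# The same thing expressed as a single convolution kernel
size = 9                                   # covers the truncated Gaussian
delta = np.zeros((size, size)); delta[size // 2, size // 2] = 1.0
kernel = (1 + amount) * delta - amount * gaussian_filter(delta, sigma)

one_pass = convolve(image, kernel)

# Compare away from the borders (edge handling differs slightly there)
print(np.abs(usm - one_pass)[8:-8, 8:-8].max())   # ~1e-15, i.e. identical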

http://homepages.inf.ed.ac.uk/rbf/HIPR2/unsharp.htm
(http://homepages.inf.ed.ac.uk/rbf/HIPR2/figs/ushboxd3.gif)

-h
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 15, 2014, 04:29:22 am
Here is my attempt to deconvolve the same section of the image. It needs more work.

Hi,

Was this deconvolving the original image, or was it deconvolving the image blurred with a Gaussian blur?  If the latter, then pretty impressive.

What tools/technique do you use - wavelets?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 15, 2014, 05:02:06 am
And my claim has been that what seems to be your underlying assumption is wrong. Not that I blaim you, the same claim seems to reverbrate all over the net: "deconvolution recreates detail, USM fakes detail". Let me modify your analogy: the classical radiator has a single variable, controlled by the user, and no thermometer to be seen anywhere. Getting the right temperature can be challenging. A more modern radiator might have one or more temperature probes, and thus can make more well-informed choices.

I believe that the aforementioned claim is not supported by an analysis of what USM (in various incarnations) does compared to deconvolution. Information cannot be made out of nothing (Shannon & friends). These methods can only transform information present at their input in a way that more closely resemble some assumed reference, given some assumed degradation. When USM use a windowed gaussian subtracted the image itself, this is (in effect) a convolution of a single kernel, seemingly by the linear-phase complimentary filter. Thus, the sharpening used in USM can perhaps be described as inverting the implicitly assumed gaussian image degradation. A function that (of course) can be described in the frequency domain. The nonlinearity does complicate the analysis, but I think that the same is true for the regularization used in deconvolution.

This description might prove instructive for relatively "small-scale" USM parameters ("sharpening"), while larger-scale "local contrast" modification might be more easily comprehended in the spatial domain?

Thus, my claim (and I don't have the maths to back it up) is that USM is very similar to (naiive) deconvolution, and that both can be described as inverting an implicit/explicit model of the image degradation. The most important difference seems to be that USM practically always have a fixed kernel (of variable sigma), while deconvolution tends to have a highly parametric (or even blindly estimated) kernel, thus giving more parameters to tweak and (if chosen wisely) better results. It seems that practical deconvolution tends to use nonlinear methods, e.g. to satisfy the simultaneous (and contradicting) goals of detail enhancement but noise suppression. These may well give better numerical/perceived compromises, but it does not (in my mind) make it right to claim that "deconvolution recreates detail, while USM fakes it"


Hi,

The whole notion of deconvolution, as I understand it, is that since we are dealing with an essentially linear system, we can separate the various components with no degradation.  So if we take the original image g and convolve it with the blurring function f to get the blurred image h, we can simply remove f from h again (assuming we know f) and we will get back to g.  So although it seems to be getting something back from nothing, in the case of a blurred image we are really just getting back the component we want and leaving behind the component we don't want.

The example I give above of an image blurred with a Gaussian blur of 8 and then 'unblurred' by removing that blur from it, restoring the original image perfectly, illustrates this point dramatically.  Of course, in this case f is known perfectly, so the extraction of the original image from the blurred image is also perfect; as you say, our problem is how to find out what f is for our images.

I may be entirely wrong here, but I don't think image deconvolution is non-linear, because if it were it wouldn't work.  Even if there are non-linearities in the system, the algorithm would have to approximate a linear system. This comment in the ImageJ macro explains one technique for dealing with noise:

// Regarding adding noise to the PSF, deconvolution works by
// dividing by the PSF in the frequency domain.  A Gaussian
// function is very smooth, so its Fourier, (um, Hartley)
// components decrease rapidly as the frequency increases.  (A
// Gaussian is special in that its transform is also a
// Gaussian.)  The highest frequency components are nearly zero.
// When FD Math divides by these nearly-zero components, noise
// amplification occurs.  The noise added to the PSF has more
// or less uniform spectral content, so the high frequency
// components of the modified PSF are no longer near zero,
// unless it is an unlikely accident.

So what this is doing is adding noise to the PSF in order to avoid noise amplification in the deconvolution (which is pretty smart!).  Again, this is assuming a linear system.

As for USM ... if the implementation in Photoshop etc., is not the conventional one of creating an overlay by blurring/subtracting, but instead uses a convolution kernel - then yes, it's also doing a deconvolution and the difference between it and another deconvolution comes down to the algorithm and implementation.  However, the belief seems to be that USM creates a mask by blurring the image and subtracting the blurred image from the original, effectively eliminating the low frequency components (like a high-pass filter).  This mask is then used to add contrast to the high-frequency components of the image.  So, within the constraints of my limited understanding, in USM we are adding a signal, whereas in deconvolution we are subtracting one.  The question for me is ... is the signal we are adding the inverse of the signal we are subtracting?  (It's true that in the case of USM we have a high-frequency signal, whereas in deconvolution we have a low-frequency one). I would think that it is not, because adding contrast is not the inverse of removing blurring: we now have an additional signal c in the equation, c being a high-frequency signal that is added to the high-frequency components of the image.

Someone who understands the maths better than me would need to answer this question.

Robert

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on August 15, 2014, 05:24:08 am
Try the attached values, which should approximate a deconvolution of a (slightly modified) 0.7 radius Gaussian blur, which would be about the best that a very good lens would produce on a digital sensor, at the best aperture for that lens. It would under-correct for other apertures but not hurt either. Always use 16-bit/channel image mode in Photoshop, otherwise Photoshop produces wrong results with this Custom filter pushed to the max.

As I've said earlier, such a 'simple' deconvolution tends to also 'enhance' noise (and things like JPEG artifacts), because it can't discriminate between signal and noise. So one might want to use this with a blend-if layer or with masks that are opaque for smooth areas (like blue skies, which are usually a bit noisy due to their low photon counts and the demosaicing of that).

Upsampled images would require likewise upsampled filter kernel dimensions, but a 5x5 kernel is too limited for that, so this is basically only usable for original size or down-sampled images.

Hi Bart,

Could you expand on the math that resulted in the kernel above for a gaussian blurring function of radius r? f1=>F1, 1/F1=F2, F2=>f2?

Thank you.
Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on August 15, 2014, 09:59:35 am
Someone who understands the maths better than me would need to answer this question.

I don't know about the math, but from what I understand USM is somewhat equivalent to taking a black/white marker and drawing along every transition in the picture to make it stand out more - automatically.  Line thickness and darkness are chosen arbitrarily to achieve the desired effect, much like painters do.  One way to look at USM is to imagine coming across one such simplified transition in an image, say a sharp edge from black to white, and plotting its profile as if you were crossing it perpendicularly.  The plot of the relative brightness (Luminance) profile might look something like this (0 signal is black, 1 is white, from an actual Edge Spread Function):

(http://i.imgur.com/F16BwAe.png)

The painter/photographer then says to herself: "Hmm, that's one fuzzy edge.  It takes what looks like the distance of 6 pixels to go from black to white.  Surely I can make it look sharper than that.  Maybe I can arbitrarily squeeze its ends together so that it fits in fewer pixels".  She takes out her tool (USM/marker), dials in darkness 1, thickness 1 and redraws the transition to her liking:

(http://i.imgur.com/NQhlXSF.png)

Now the transition fits in a space of less than two pixels.  "Alright, that looks more like it" she says contentedly and moves on to the next transition.

The only problem with this approach to sharpening is that it has very little (if anything) to do with the reality of the scene.  It is completely perceptual, arbitrary and destructive (not reversible).  We can make the slope of the transition (acutance!) as steep as we like simply by choosing more or less aggressive parameters.  MTFs shoot through the roof beyond what's physically possible; actual scene information need not apply.  Might as well draw the transition in with a thick marker :)

Nothing inherently wrong with it: the USM approach is perfectly fine and quite useful in many cases, especially where creative or output sharpening are concerned.  But as far as capture sharpening is concerned, upon closer scrutiny USM always disappoints (at least me) because the arbitrariness and artificiality of it show up in all of their glory as you can clearly see above: halos (overshoots, undershoots), ringing, pixels boldly going where they were never meant to be (where is the center of the transition now?).  

So what is the judicious pixel peeper supposed to do in order to restore a modicum of pre-capture sharpness?  Well, contrary to USM's approach, one could start with scene information first.  If the aggregate edge profile in the raw data looks like that, and such and such an aperture produces this type of blur, and the pixels were this shape and size, and the AA filter was this strong and of this type, and the lens bends and blurs light that way around the area of the transition - then perhaps we can try to undo some of the blurring introduced by each of these components of our camera/lens system and attempt to take a good stab at reconstructing what the edge actually looked like before it was blurred by them.

The process by which we attempt to undo one by one the blurring introduced by each of these components is called deconvolution.  Deconvolution math is easier performed in the frequency domain because there it involves mainly simple division/multiplication.  If one can approximately model and characterize the effect of each component in the frequency domain, one can in theory undo blurring introduced by it - with many limits, mostly imposed by imperfect modeling, complicated 2D variations in parameters and (especially) noise combined with insufficient energy.  In general photography you can undo some of it well, some of it approximately, some of it not at all.  This is what characterization in the frequency domain looks like for the simplest components to model:

(http://i.imgur.com/3qkG6gB.png)

Deconvolution can also be performed in the spatial domain by applying discrete kernels to the image, but less well (see for instance my question to Bart above).  Either way results as far as capture sharpening is concerned are much more appealing to the eye of this pixel peeper than the rougher, arbitrary alternative of USM.  And the bonus is that deconvolution is by and large reversible and not as destructive.  USM can always and subsequently be added later in moderation to specific effect.

In a nutshell: USM is a meat cleaver handled by an artistic butcher.  Deconvolution is a scalpel wielded by a scientific surgeon :-)

Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 15, 2014, 11:41:56 am
Using your graph I attempted to add a curve (in green) that simulates what deconvolution does. Compared to USM, which moves values in the Y direction, deconvolution squeezes the edge in the X direction.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 15, 2014, 11:46:39 am
Hi,

Was this deconvolving the original image, or was it deconvolving the image blurred with a Gaussian blur?  If the latter, then pretty impressive.

What tools/technique do you use - wavelets?

Robert

It is the multi-resolution smooth/sharpen feature in Images Plus.

I was playing around with the scene again in RT last night. I actually got a very good result dropping the damping to 0 and moving the radius to .80. I have never touched the damping before. Suddenly the R-L deconvolve in RT seems more powerful.

I will have to post it tonight when I am off the laptop.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 15, 2014, 11:56:15 am
Could you expand on the math that resulted in the kernel above for a gaussian blurring function of radius r? f1=>F1, 1/F1=F2, F2=>f2?

Hi Jack,

I didn't remember exactly how I got there initially, because I rarely use the crude Photoshop implementation, but rather an exact floating-point implementation in other software (e.g. ImageJ). But after some pondering I do remember that I started out with my PSF generator tool, with a Blur sigma of 0.7, Fill-factor 100%, Kernel size of 5x5, Integer numbers, Deconvolution type of kernel, and typing a scale that would start to show useful integer values; aiming for a central value of 999 therefore required scaling by something like 1378.

However, by using the scale factor in my tool, one amplifies the 'amount' of sharpening, and Photoshop uses its scale in a somewhat different way (just a division of the kernel values), which would also need it to come down to something like 251 to keep a relatively balanced brightness in flat regions, but still over-sharpen due to the amount boost. So I abandoned that approach because it would give the wrong weights and I changed the integer type of kernel back to floating point with a scale of 1.0.

Now, keeping in mind the way the Photoshop's scaling works, and with a goal to have a central value of about 999 (now 1.724232265654046 at a scale of 1.0), it turned out that an approx. 579 custom filter scale factor was required, and all floating point kernel values were multiplied by that same amount (to be divided again later by that scale factor by PS), and rounded to integers.

I remember I had to tweak a few values to get uniform brightness before and after sharpening uniform areas (requires 16-bit/channel data), so I finally arrived at the values given earlier.

That was about the train of thought, which should work for other blur radii just as well, without boosting the sharpening amount.
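
If it helps, the scaling/rounding step can be sketched like this (Python; the 5x5 kernel below is only a stand-in of the same general shape, with sum 1.0 and a centre around 1.7, NOT my actual values): multiply the floating-point kernel by a trial Scale, round to integers for the Custom filter, and check that the integer sum divided by the Scale stays close to 1.0 so flat areas keep their brightness.

import numpy as np

# Stand-in float kernel (sum = 1): 2*delta - Gaussian(sigma 0.7), 5x5
ax = np.arange(5) - 2
g1 = np.exp(-ax**2 / (2 * 0.7**2)); g1 /= g1.sum()
delta = np.zeros((5, 5)); delta[2, 2] = 1.0
kernel = 2 * delta - np.outer(g1, g1)

for scale in (251, 579, 999):
    ints = np.rint(kernel * scale).astype(int)   # Custom filter entries
    gain = ints.sum() / scale                    # flat-area brightness gain
    print("Scale", scale, "-> flat-area gain", round(gain, 4))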

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 15, 2014, 01:07:39 pm
I don't know about the math but from what I understand USM is somewhat equivalent to taking a black/white marker and drawing along every transition in the picture to make it stand out more - automatically.

Jack, the math is very basic and simple. Here is an attempt to clarify with some numbers, based on a Gaussian blur of 1.0, Fill-factor 100%:

PSF (GB 1.0, FF=100%): 3.37868E-06 0.000229231 0.005977036 0.060597538 0.241730347 0.382924937 0.241730347 0.060597538 0.005977036 0.000229231 3.37868E-06

ORIGINAL SIGNAL: 50 50 50 50 50 50 200 200 200 200 200 200
CONVOLVED SIGNAL: 50 50.0005 50.0349 50.9314 60.0211 96.2806 153.719 189.979 199.069 199.965 199.999 200

This convolved signal is our digital image of an abrupt brightness change in the original scene after optical blur and Raw conversion, and it's the object of further calculations.

Now, what USM essentially does is blur the convolved image again, let's assume by a Radius of 1.0 to stay consistent with a deconvolution attempt to undo the blur that our original image was subjected to. This blurred image facsimile is then subtracted from the convolved image it originated from, and the difference is added back to the convolved image in a layered fashion.

CONVOLVED SIGNAL: 50.0000 50.0005 50.0349 50.9314 60.0211 96.2806 153.719 189.979 199.069 199.965 199.999 200

 BLURRED VERSION: 50.0103 50.1359 51.1468 56.2446 72.4085 104.681 145.319 177.591 193.755 198.853 199.864 199.99
      DIFFERENCE: -0.0103 -0.1354 -1.1119 -5.3132 -12.3874 -8.4004 8.4 12.388 5.314 1.112 0.135 0.01

CONVOLVED + DIFF: 49.9897 49.8651 48.923 45.6182 47.6337 87.8802 162.119 202.367 204.383 201.077 200.134 200.01


The Difference layer is a halo layer which is added to the convolved input layer; an Amount setting would amplify the values, and thus amplify the halo amplitude. Clearly, the radius of 1.0 is relatively wide to use, but it was used for consistency with the deconvolution, which does something other than layered addition of differences. One could use a smaller radius and increase the amount, but it would only create a narrower halo. Halos are inherent in USM, and therefore require a significant effort to reduce their visibility.
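
For anyone who wants to reproduce those numbers, here is the same 1-D walk-through in a few lines of Python (a sketch; I assumed edge replication at the ends, so the boundary values may differ in the last decimals):

import numpy as np

psf = np.array([3.37868e-06, 0.000229231, 0.005977036, 0.060597538,
                0.241730347, 0.382924937, 0.241730347, 0.060597538,
                0.005977036, 0.000229231, 3.37868e-06])
original = np.array([50.0]*6 + [200.0]*6)

def blur(signal, kernel):
    # Convolve with edge replication so the flat ends stay flat
    pad = len(kernel) // 2
    return np.convolve(np.pad(signal, pad, mode="edge"), kernel, mode="valid")

convolved = blur(original, psf)         # the captured (blurred) edge
blurred_again = blur(convolved, psf)    # USM's internal blur of that edge
difference = convolved - blurred_again  # the halo layer
usm = convolved + 1.0 * difference      # amount = 1

np.set_printoptions(precision=4, suppress=True)
print(convolved)
print(usm)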

CONVOLVED SIGNAL: 50.0000 50.0005 50.0349 50.9314 60.0211 96.2806 153.719 189.979 199.069 199.965 199.999 200
  RL DECONVOLVED: 50.7165 50.0871 48.0134 54.9656 41.4365 61.9295 186.182 213.192 189.792 206.158 197.43 200.374

The Richardson-Lucy (RL) deconvolution is an iterative method that's more effective than the simple version discussed earlier, but it uses the same kind of principles - deconvolution, not layer masking.
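
And a bare-bones sketch of the RL iteration itself (again only to illustrate the principle - it is not the implementation RawTherapee or any plug-in uses, and the exact numbers will differ from the ones above): the estimate is repeatedly multiplied by the re-blurred ratio of observed to predicted values.

import numpy as np

psf = np.array([3.37868e-06, 0.000229231, 0.005977036, 0.060597538,
                0.241730347, 0.382924937, 0.241730347, 0.060597538,
                0.005977036, 0.000229231, 3.37868e-06])
observed = np.array([50, 50.0005, 50.0349, 50.9314, 60.0211, 96.2806,
                     153.719, 189.979, 199.069, 199.965, 199.999, 200])

def blur(signal, kernel):
    pad = len(kernel) // 2
    return np.convolve(np.pad(signal, pad, mode="edge"), kernel, mode="valid")

estimate = np.full_like(observed, observed.mean())
for _ in range(50):
    reblurred = blur(estimate, psf)
    ratio = observed / np.maximum(reblurred, 1e-12)
    estimate *= blur(ratio, psf[::-1])   # multiply by the re-blurred ratio

print(np.round(estimate, 2))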

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on August 15, 2014, 01:18:45 pm
...I remember I had to tweak a few values to get uniform brightness before and after sharpening uniform areas (requires 16-bit/channel data), so I finally arrived at the values given earlier.

That was about the train of thought, which should work for other blur radii just as well, without boosting the sharpening amount.

Thanks Bart, trial and error then I guess :)  I was hoping you had worked through a little more math (because I got stuck doing this myself).  Let's work in one dimension to simplify things initially:

1) Spatial domain blur function (PSF) of a Gaussian blur of radius r -->    g(x) = 1/(r*sqrt(2.pi))*exp[-(x/r)^2/2]

If plotted, this PSF would look like the classic bell curve.  The corresponding kernel would have values that rise and fall accordingly;

2) Take the Fourier transform of 1) to switch to the Frequency Domain ---> G(w) = exp[-(wr)^2/2]

with w=2.pi.s, s= frequency;

3) Calculate D(w) = 1/G(w) = exp[(wr)^2/2]

4) Take the inverse Fourier Transform of 3) to switch back to the spatial domain ---> d(x) = ... ? :(

If we had the formula for d(x) we could simply read off the values for the kernel to deconvolve gaussian blur of radius r.  Help?
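
Numerically I can at least approximate it: sample D(w) on a discrete grid, tame the high-frequency blow-up with a small regularization constant, and inverse-FFT back to the spatial domain. A hedged Python/NumPy sketch (n and eps are arbitrary choices of mine):

import numpy as np

r = 1.0                                 # Gaussian blur sigma in pixels
n = 64                                  # spatial samples for the kernel
eps = 1e-3                              # regularization; eps -> 0 approaches the raw 1/G

w = 2 * np.pi * np.fft.fftfreq(n)       # angular frequencies of the grid
G = np.exp(-(w * r) ** 2 / 2)           # Fourier transform of the Gaussian
D = G / (G ** 2 + eps)                  # regularized 1/G (Wiener-style)

d = np.fft.fftshift(np.real(np.fft.ifft(D)))    # approximate spatial kernel, centered
print(np.round(d[n // 2 - 3:n // 2 + 4], 4))    # central 7 taps

The taps come out as a large positive center with negative neighbours, but their exact values depend on eps and the grid size, which is really the heart of the problem: the exact inverse has no well-behaved closed form.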

Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on August 15, 2014, 04:05:24 pm
Jack, the math is very basic and simple. Here is an attempt to clarify with some numbers, based on a Gaussian blur of 1.0, Fill-factor 100%:

Got it, thanks Bart.  My question about USM was rhetorical more than anything else.  The one about how to properly calculate the deconvolution kernel of a Gaussian is, on the other hand, real: I am stuck there :)
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 15, 2014, 04:36:30 pm
Got it, thanks Bart.  My question about USM was rhetorical more than anything else.  The one about how to properly calculate the deconvolution kernel of a Gaussian is, on the other hand, real: I am stuck there :)

Jack, there is some trickery involved in going from continuous to discrete functions. I use the following to calculate kernel values for arbitrary sigma radius Gaussian blurs: https://www.dropbox.com/s/igxwk0izafkbnr9/Gaussian_PSF_2.png

x and y are the kernel positions around the central [0,0] kernel position (in principle integer offsets from the center), and sdx and sdy are the horizontal and vertical sigmas (standard deviations); they are usually identical for a symmetrical blur. A 100% fill factor is assumed (pixel center +/- 1/2 pixel). Erf() is the Error Function.
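
For anyone who would rather see that as code than as a formula, here is a hedged sketch of a pixel-aperture-integrated Gaussian kernel generator. It is not necessarily identical to the expression in the linked image, but for sigma 1.0 at 100% fill factor its 1-D marginals reproduce the PSF values quoted earlier in this thread (0.3829... at the center), so it should be close.

import numpy as np
from math import erf, sqrt

def gaussian_kernel(sdx, sdy, radius, fill=1.0):
    # Area-sampled (pixel-aperture integrated) Gaussian kernel. 'fill' is the
    # linear fill factor: 1.0 means the aperture spans the pixel center +/- 1/2
    # pixel; a pure point sample would use the Gaussian pdf instead.
    def seg(x, sd):
        a = (x + fill / 2) / (sd * sqrt(2))
        b = (x - fill / 2) / (sd * sqrt(2))
        return 0.5 * (erf(a) - erf(b))
    k = np.array([[seg(x, sdx) * seg(y, sdy)
                   for x in range(-radius, radius + 1)]
                  for y in range(-radius, radius + 1)])
    return k / k.sum()                  # normalize so the kernel sums to 1

k = gaussian_kernel(1.0, 1.0, 5)
print(round(k[5, 5], 6))                # center tap, ~0.1466 (= 0.3829^2)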

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 15, 2014, 05:01:24 pm
It is the multi-resolution smooth/sharpen feature in ImagesPlus.


Hi,

My question was whether the image was the original raw image (so you're trying to get the best detail from it) or were you doing a more brutal test, that is, to blur the image with a Gaussian blur and then attempt to recover the original (as per the example I gave using ImageJ)?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Fine_Art on August 15, 2014, 07:05:39 pm
Hi,

My question was whether the image was the original raw image (so you're trying to get the best detail from it) or were you doing a more brutal test, that is, to blur the image with a Gaussian blur and then attempt to recover the original (as per the example I gave using ImageJ)?

Robert

I did not blur it.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 16, 2014, 01:15:30 am
I did not blur it.

Thanks. 

I would really love to see an example of an image, blurred in Photoshop with a Gaussian blur of, say 4, and then restored using deconvolution. Ideally I would like to see the deconvolution using a kernel and also using Fourier.

Secondly, I would also really love to be shown a method to photograph a point light source with my camera (for a given fixed focal length), and then to use this to produce a deconvolution kernel.

Thirdly, I would really, really love to see the two above put together, so that taking a point light source, say a white oval on a black background in Photoshop, that after applying a blur of some sort to it, we could work out the deconvolution kernel and use this to restore an image that had the same blur applied to it.

It's fascinating to learn about the technicalities (some of it well over my head, although I'm getting there bit by bit :)), but the next step for me would be putting it into practice ... not using a black box like FocusMagic, say, but doing it step by step using the techniques and tools currently available.

Any takers?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 16, 2014, 04:25:03 am
I would really love to see an example of an image, blurred in Photoshop with a Gaussian blur of, say 4, and then restored using deconvolution. Ideally I would like to see the deconvolution using a kernel and also using Fourier.

Hi Robert,

A Gaussian blur of 4 is HUGE, and Photoshop's implementation of the Gaussian blur may not be exact, so a half-way decent deconvolution may be virtually impossible. But I understand you want a real challenge.

Quote
Secondly, I would also really love to be shown a method to photograph a point light source with my camera (for a given fixed focal length), and then to use this to produce a deconvolution kernel.

The problems with properly photographing a small point light source are many. A light source behind an aperture won't work, because the aperture would cause diffraction. A more appropriate target would be a small reflective sphere that reflects a distant light source, although it is then difficult to shield the surroundings from also being reflected by the sphere. Shooting a distant star would suffer from atmospheric turbulence and motion. And of course each lens behaves differently at different apertures and focusing distances, and it's not uniform across the image. Camera vibration is also a consideration that needs to be eliminated.

That's why methods like using a slanted edge are more commonly used to model the behavior of lenses in two orthogonal directions, or methods that try to quantify the deterioration of the Power Spectrum of White noise, or of a 'Dead-leaves' target.

I'm currently working on a 'simpler' method based on measuring the edge transitions in all 360 degree orientations, using a test target like this example:

(http://bvdwolf.home.xs4all.nl/main/downloads/PSF-estimate_S.png)

Quote
Thirdly, I would really, really love to see the two above put together, so that taking a point light source, say a white oval on a black background in Photoshop, that after applying a blur of some sort to it, we could work out the deconvolution kernel and use this to restore an image that had the same blur applied to it.

I'm working on it ... ;) , but for the moment a Slanted edge approach (http://bvdwolf.home.xs4all.nl/main/foto/psf/SlantedEdge.html) goes a long way to allow a characterization of the actual blur kernel with sub-pixel accuracy.

Quote
It's fascinating to learn about the technicalities (some of it well over my head, although I'm getting there bit by bit :)), but the next step for me would be putting it into practice ... not using a black box like FocusMagic, say, but doing it step by step using the techniques and tools currently available.

Tools like FocusMagic are real time savers, and doing it another way may require significant resources, including dedicated math software and a lot of calibration and processing time.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on August 16, 2014, 05:27:05 am
but for the moment a Slanted edge approach goes a long way to allow a characterization of the actual blur kernel with sub-pixel accuracy.

I agree wholeheartedly: in fact if one thinks about it, what is a line if not a series of single point PSFs in a row (aka Line Spread Function)?  And what is an edge if not the integral of a line?  One can easily get 1D PSFs (and MTFs) with excellent accuracy (for photographic purposes) from pictures of edges, without all the problems mentioned about recording points of light.

And for the peanut gallery, what would be the derivative/differential of the Edge Spread Functions shown above?  You guessed it, the PSF in the direction perpendicular to the edge.
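
In code the whole chain is just a finite difference. A hedged sketch, where a synthetic, noise-free edge blurred by a sigma 1.0 Gaussian stands in for a measured ESF:

import numpy as np
from math import erf, sqrt

sigma = 1.0
x = np.arange(-6, 7)                      # pixel positions across the edge
esf = np.array([50 + 150 * 0.5 * (1 + erf(xi / (sigma * sqrt(2)))) for xi in x])

lsf = np.gradient(esf)                    # derivative of the ESF = LSF (1-D PSF)
lsf /= lsf.sum()                          # normalize like a PSF
print(np.round(lsf, 4))                   # peaks at the edge position

With a real capture one would first bin and average many rows along the slanted edge to build a super-sampled ESF before differentiating; that is the whole point of slanting the edge.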

One can use Bart's most excellent calculator, or with a bit more work one can use open source MTF Mapper (http://sourceforge.net/projects/mtfmapper/) by Frans van den Bergh to obtain more accurate values.  MTF Mapper produces one dimensional ESF, PSF and MTF curves, not to mention MTF50 values.

Tools like FocusMagic are real time savers, and doing it another way may require significant resources, including dedicated math software and a lot of calibration and processing time.

Agreed again.  My own motivation is to do a better job of capture sharpening the asymmetrical AA of my current main squeeze (it only has it in one direction, as do some other recent Exmors).

And thanks for the earlier link, Bart, I will peruse it later on today.

Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 16, 2014, 06:40:31 am

I'm working on it ... ;) , but for the moment a Slanted edge approach (http://bvdwolf.home.xs4all.nl/main/foto/psf/SlantedEdge.html) goes a long way to allow a characterization of the actual blur kernel with sub-pixel accuracy.

Hi Bart ... well, I guess I am asking for a lot!  Still, why aim low?

I'll try to muddle through your slanted edge approach, but to be honest the likelihood of my succeeding is pretty low, as my overall understanding is limited, and I don't know tools like ImageJ except in passing. 

Am I correct in understanding that using the Slanted Edge approach that it should be possible:
- to take a photograph of an edge
- process that in ImageJ to get the slope of the edge and the pixel values along a single pixel row
- paste this information in your Slanted Edge tool to compute the sigma value
- use this sigma value in your PSF Generator to produce a deconvolution kernel
- use the deconvolution kernel in Photoshop (or preferably ImageJ as one can use a bigger kernel there):
   - as a test it should remove the blur from the edge
   - subsequently it could be used to remove capture blur from a photograph (taken with the same lens/aperture/focal length)

Assuming I have it even approximately right, it would be incredibly useful to have a video demonstration of this as it's quite easy to make a mess of things with tools one isn't familiar with. I would be happy to do this video, but first of all I would need to be able to work through the technique successfully, and right now I'm not sure I'm even on the tracks at all, not to mention on the right track!

Quote
Tools like FocusMagic are real time savers, and doing it another way may require significant resources, including dedicated math software and a lot of calibration and processing time.


Yes, of course, I understand that photographing slanted edges etc. is really only practical for very fixed conditions, and in real life it is unlikely to yield better results than a tool like FocusMagic ... but from the point of view of understanding what is under the hood it's a great exercise!

Robert




Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 16, 2014, 07:28:13 am
Am I correct in understanding that using the Slanted Edge approach that it should be possible:
- to take a photograph of an edge
- process that in ImageJ to get the slope of the edge and the pixel values along a single pixel row
- paste this information in your Slanted Edge tool to compute the sigma value
- use this sigma value in your PSF Generator to produce a deconvolution kernel
- use the deconvolution kernel in Photoshop (or preferably ImageJ as one can use a bigger kernel there):
   - as a test it should remove the blur from the edge
   - subsequently it could be used to remove capture blur from a photograph (taken with the same lens/aperture/focal length)

You've got it!

I agree it's a bit of work, and the workflow could be improved by a dedicated piece of software that does it all on an image that gets analyzed automatically. But hey, it's a free tool, and it's educational. As said, I'm also working on something more flexible that can analyze a more normal image, or for more accurate results can do a better job on an image of a proper test target as input.

Quote
Assuming I have it even approximately right, it would be incredibly useful to have a video demonstration of this as it's quite easy to make a mess of things with tools one isn't familiar with. I would be happy to do this video, but first of all I would need to be able to work through the technique successfully, and right now I'm not sure I'm even on the tracks at all, not to mention on the right track!

A video could be helpful, but there are also linked webpages with more background info, and the thread also addresses some initial questions that others have raised.
 
Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 16, 2014, 12:39:58 pm

A video could be helpful, but there are also linked webpages with more background info, and the thread also addresses some initial questions that others have raised.


Hello (again!) Bart,

I'm getting there - I've now found your thread http://www.luminous-landscape.com/forum/index.php?topic=68089.0 (that's the one, I take it?), and I've taken the test figures you supplied on the first page, fed them into your Slanted Edge tool and got the same radius (I haven't checked this out, but I assume you take an average of the RGB radii?).

I then put this radius in your PSF generator and got a deconvolution kernel and tried it on an image from a 1Ds3 with a 100mm f2.8 macro (so pretty close to your eqpt).  The deconvolution in Photoshop is pretty horrendous (due to the integer rounding, presumably); however if the filter is faded to around 5% the results are really good.  Using floating point and ImageJ, the results are nothing short of impressive, with detail recovery way beyond Lr, especially in shadows.

I don't know how best to set the scale on your PSF generator - clearly a high value gives a much stronger result; I found that a scale of between 3 and 5 is excellent, but up to 10 is OK depending on the image.  Beyond that noise gets boosted too much, I think.

I didn't see much difference between a 5x5 and a 7x7 kernel, but it probably needs a bit more pixel-peeping.

I also don't understand the fill factor (I just set it to Point Sample).

What seems to be a good approach is to do a deconvolve with a scale of 2 or 3 and one with a scale of 5 and to do a Blend If in Photoshop - you can get a lot of detail and soften out any noise (although this is only visible at 200% and completely invisible at print size on an ISO200 image).

It occurred to me that as your data for the same model camera and lens gives me very good results that it would be possible to build up a database that could be populated by users, so that over time you could select your camera, lens, focal length and aperture and get a close match to the radius (and even the deconvolution kernel).  The two pictures I checked were at f2.8 (a flower) and F7.1 (a landscape), whereas your sample data was at f5.6 - but the deconvolution still worked very well with both.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 17, 2014, 05:26:08 am
Hello (again!) Bart,

I'm getting there - I've now found your thread http://www.luminous-landscape.com/forum/index.php?topic=68089.0 (that's the one, I take it?), and I've taken the test figures you supplied on the first page, fed them into your Slanted Edge tool and got the same radius (I haven't checked this out, but I assume you take an average of the RGB radii?).

Hi Robert,

What the analysis of the R/G/B channels shows is that, despite the lower sampling density of Red and Blue, there is not that much difference in resolution/blur. The reason is that most demosaicing schemes use the denser-sampled Green channel info as a kind of clue for the Luminance component of the R/B channels as well. Since Luminance resolution is then relatively equal, one could just take the blur value for Green, or the lower of the three, to avoid over-sharpening the other channels. But with such small differences it's not all that critical.

Quote
I then put this radius in your PSF generator and got a deconvolution kernel and tried it on an image from a 1Ds3 with a 100mm f2.8 macro (so pretty close to your eqpt).  The deconvolution in Photoshop is pretty horrendous (due to the integer rounding, presumably); however if the filter is faded to around 5% the results are really good.  Using floating point and ImageJ, the results are nothing short of impressive, with detail recovery way beyond Lr, especially in shadows.

Cool, isn't it? And that is merely Capture sharpening in a somewhat crude single deconvolution pass. The same radius can be used for more elaborate iterative deconvolution algorithms, which will sharpen the noise less than the signal, thus producing an even higher S/N ratio, and restore even a bit more resolution.

Quote
I don't know how best to set the scale on your PSF generator - clearly a high value gives a much stronger result; I found that a scale of between 3 and 5 is excellent, but up to 10 is OK depending on the image.  Beyond that noise gets boosted too much, I think.

In my tool, the 'Scale' is normally left at 1.0, unless one wants to increase the 'Amount' of sharpening. When upsampling is part of the later operations, I'd leave it at 1.0, to avoid the risk of small halos at very high contrast edges. The 'scale' is mostly used for floating point number kernels.

Quote
I didn't see much difference between a 5x5 and a 7x7 kernel, but it probably needs a bit more pixel-peeping.

When radii get larger, there may be abrupt cut-offs at the kernel edges, where a slightly larger kernel support would allow for a smoother roll-off. This becomes more important with iterative methods, hence the recommendation to just use a total kernel diameter of 10x the Blur Radius, which will reduce the edge contributions to become marginal and thus have a smooth transition towards zero contribution outside the range of the kernel.

Quote
I also don't understand the fill factor (I just set it to Point Sample).

A point sample takes a single point on the bell-shaped Gaussian blur pattern at the center of the pixel and uses that for the kernel cell. However, our sensels are not point samplers, but area samplers. They integrate all light falling within their area aperture to an average. This reduces the peakedness of the Gaussian shape a bit, as if averaging all possible point samples inside that sensel aperture with a square kernel. The size of that square sensel kernel is either 100% (assuming a sensel aperture that receives light from edge to edge, like with gap-less micro-lenses), or a smaller percentage (e.g. to simulate a complex CMOS sensor without micro-lenses, with lots of transistors per sensel leaving only a smaller part of the real estate to receive light). When you use a smaller percentage, the kernel's blur pattern will become narrower and more peaked, and less sharpening will result, because the sensor already sharpens (and aliases) more by its small sampling aperture.
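
To put some numbers on that peakedness, here is a hedged back-of-the-envelope sketch; it treats the fill factor as a linear fraction of the pixel pitch, which is an assumption on my part:

from math import erf, sqrt, pi

def center_tap(sigma, fill):
    # Average of the Gaussian over a pixel aperture of width 'fill'.
    # As fill -> 0 this approaches the point-sample peak 1/(sigma*sqrt(2*pi)).
    if fill == 0:
        return 1 / (sigma * sqrt(2 * pi))
    return erf(fill / (2 * sigma * sqrt(2))) / fill

for fill in (0, 0.8, 1.0):
    print(fill, round(center_tap(1.0, fill), 4))
# 0 -> 0.3989 (point sample, most peaked), 0.8 -> 0.3886, 1.0 -> 0.3829

A smaller aperture keeps the kernel closer to the point-sampled (most peaked) shape, so the generated deconvolution kernel sharpens a little less, just as described above.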

Quote
What seems to be a good approach is to do a deconvolve with a scale of 2 or 3 and one with a scale of 5 and to do a Blend If in Photoshop - you can get a lot of detail and soften out any noise (although this is only visible at 200% and completely invisible at print size on an ISO200 image).

Again, it depends on the total workflow. I'd leave it closer to 1.0 if upsampling will happen later, but otherwise it's up to the user to play with the 'Amount' of sharpening by changing the 'Scale' factor. This all assumes Floating point number kernels, which can also be converted into images with ImageJ, for those applications that take images of PSFs as input (as is usual in Astrophotography).

Quote
It occurred to me that as your data for the same model camera and lens gives me very good results that it would be possible to build up a database that could be populated by users, so that over time you could select your camera, lens, focal length and aperture and get a close match to the radius (and even the deconvolution kernel).  The two pictures I checked were at f2.8 (a flower) and F7.1 (a landscape), whereas your sample data was at f5.6 - but the deconvolution still worked very well with both.

That's correct; as you will find out, the amount of blur is not even all that different between lenses of similar quality, but it does change significantly for the more extreme aperture values. That's completely unlike the Capture sharpening gospel of some 'gurus' who say that it's the image feature detail that determines the Capture sharpening settings, and who thus introduce halos by using too large radii early in their processing. It was also discussed here (http://www.luminous-landscape.com/forum/index.php?topic=76998.msg617613#msg617613).

It's a revelation for many to realize they have been taught wrong, and the way the Detail dialog is designed in e.g. LR doesn't help either (it even suggests starting with the Amount setting before setting the correct radius, and it offers no real guidance as to the correct radius, which could be set to a more useful default based on the aperture in the EXIF). We humans are pretty poor at eye-balling the correct settings because we prefer high contrast, which is not the same as real resolution. It's made even worse by forcing the user to use the Capture sharpening settings of the Detail panel for Creative sharpening later in the parametric workflow, which seduces users into using a too large radius value there, to do a better Creative sharpening job.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 17, 2014, 07:32:56 am

Cool, isn't it? And that is merely Capture sharpening in a somewhat crude single deconvolution pass. The same radius can be used for more elaborate iterative deconvolution algorithms, which will sharpen the noise less than the signal, thus producing an even higher S/N ratio, and restore even a bit more resolution.


Yes, very cool!

You mentioned before that doing the deconvolution in the frequency domain is much more complex, which it no doubt is, but would it be worth it? I'm thinking of the possibility of (at least partially) removing noise, for example.  How would you boost the S/N ratio using a kernel?

Quote
A point sample takes a single point on the bell-shaped Gaussian blur pattern at the center of the pixel and uses that for the kernel cell. However, our sensels are not point samplers, but area samplers. They integrate all light falling within their area aperture to an average. This reduces the peakedness of the Gaussian shape a bit, as if averaging all possible point samples inside that sensel aperture with a square kernel. The size of that square sensel kernel is either 100% (assuming a sensel aperture that receives light from edge to edge, like with gap-less micro-lenses), or a smaller percentage (e.g. to simulate a complex CMOS sensor without micro-lenses, with lots of transistors per sensel leaving only a smaller part of the real estate to receive light). When you use a smaller percentage, the kernel's blur pattern will become narrower and more peaked, and less sharpening will result, because the sensor already sharpens (and aliases) more by its small sampling aperture.


I take it then that with a 1DsIII you would want to use a fill factor of maybe 80%, whereas a 7D would be 100%?  I ask because I have both cameras :).

Quote
That's correct; as you will find out, the amount of blur is not even all that different between lenses of similar quality, but it does change significantly for the more extreme aperture values. That's completely unlike the Capture sharpening gospel of some 'gurus' who say that it's the image feature detail that determines the Capture sharpening settings, and who thus introduce halos by using too large radii early in their processing. It was also discussed here (http://www.luminous-landscape.com/forum/index.php?topic=76998.msg617613#msg617613).

It's a revelation for many to realize they have been taught wrong, and the way the Detail dialog is designed in e.g. LR doesn't help either (it even suggests starting with the Amount setting before setting the correct radius, and it offers no real guidance as to the correct radius, which could be set to a more useful default based on the aperture in the EXIF). We humans are pretty poor at eye-balling the correct settings because we prefer high contrast, which is not the same as real resolution. It's made even worse by forcing the user to use the Capture sharpening settings of the Detail panel for Creative sharpening later in the parametric workflow, which seduces users into using a too large radius value there, to do a better Creative sharpening job.


I think I've been lucky (or perhaps it's that I hate oversharpened images), but I've always set the radius and detail first with the Alt key pressed (the values always end up with a low radius - 0.6, 0.7 typically, and detail below 20) and I then adjust the amount at 100% zoom - and it's very rare that I would go over 40, normally 20-30.  That has meant that I haven't judged the image on the look (at that stage of the process, at any rate) .... more by chance than by intent.

Regarding FocusMagic - the lowest radius you can use is 1 going in increments of 1.  That seems a high starting point and a high increment ... or am I mixing apples and oranges?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 17, 2014, 12:37:31 pm
You mentioned before that doing the deconvolution in the frequency domain is much more complex, which it no doubt is, but would it be worth it? I'm thinking of the possibility of (at least partially) removing noise, for example.  How would you boost the S/N ratio using a kernel?

Strictly speaking, conversion to and back from the Fourier space (frequency domain), is reversible and produces a 100% identical image. A deconvolution is as simple as a division in frequency space, where in the spatial domain it would take multiple multiplications and additions for each pixel, and a solution for the edges, so it's much faster between the domain conversions.

The difficulties arise when we start processing that image in the frequency domain. Division by (almost) zero (which happens at the highest spatial frequencies) can drive the results to 'infinity' or create non-existing numerical results. Add in some noise and limited precision, and it becomes a tricky deal.

There are also some additional choices to be made with regard to padding the image and kernel data to equal sizes to allow frequency-space divisions, and to account for the fact that the frequency representation assumes a periodically repeating image, which could cause ringing artifacts if not handled intelligently. They are mostly technical precautions, but they need to be done correctly, so a proper implementation of the algorithms requires some care.

The S/N ratio boost is done through a process known as regularization, where some prior knowledge of the type of noise distribution is used to reduce noise at each iteration, in such a way that the gain of resolution at a given step exceeds the loss of resolution due to noise reduction. It can be as simple as adding a mild Gaussian blur between each iteration step.
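
As a minimal, hedged illustration of the near-zero division and of the crudest possible regularization (a constant Wiener-style noise-to-signal term, not the adaptive schemes the better tools use):

import numpy as np

def wiener_deconvolve(observed, H, nsr=1e-3):
    # Division in frequency space, guarded against |H| ~ 0 by a constant
    # noise-to-signal term; nsr -> 0 is the raw (exploding) inverse filter.
    Y = np.fft.fft(observed)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(X))

n = 16
truth = np.array([50.]*8 + [200.]*8)
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
H = np.fft.fft(np.roll(np.pad(psf, (0, n - len(psf))), -2))   # PSF centered at index 0
observed = np.real(np.fft.ifft(np.fft.fft(truth) * H))        # circular blur (sidesteps padding issues)

print(np.round(observed, 2))
print(np.round(wiener_deconvolve(observed, H, 1e-3), 2))
# The step comes back much steeper; some ripple remains because the
# frequencies where |H| is nearly zero cannot be fully restored.

Real implementations replace that constant nsr with something adaptive (per iteration, per region, per noise estimate), which is where most of the cleverness sits.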

Quote
I take it then that with a 1DsIII you would want to use a fill factor of maybe 80%, whereas a 7D would be 100%?  I ask because I have both cameras :).

You'd be hard pressed to see much difference in the sharpening result between the default 100% fill-factor and 80%, so I usually just leave it at 100% (also for my 1Ds3). I've added that option to better comply with the norm of creating discrete Gaussian kernels for convolution with our discrete pixel samplers, instead of point sampling at the pixel mid-point, and for more precise kernel values for those pixels (the immediate neighbors) that have the most impact on the sharpening in iterative algorithms.

Quote
I think I've been lucky (or perhaps it's that I hate oversharpened images), but I've always set the radius and detail first with the Alt key pressed (the values always end up with a low radius - 0.6, 0.7 typically, and detail below 20) and I then adjust the amount at 100% zoom - and it's very rare that I would go over 40, normally 20-30.  That has meant that I haven't judged the image on the look (at that stage of the process, at any rate) .... more by chance than by intent.

You probably have a better eye for it than most ..., hence the search for an even better method.

Quote
Regarding FocusMagic - the lowest radius you can use is 1 going in increments of 1.  That seems a high starting point and a high increment ... or am I mixing apples and oranges?

One would think so, but we don't know exactly how that input is modified by the unknown algorithm they use. Also, because it probably is an iterative or recursive operation, they will somehow optimize several parameters with each iteration to produce a better fitting model. Of course one can first magnify the image, then apply FM (at a virtual sub-pixel accurate level), and then down-sample again. That works fine, although things slow down due to the amount of pixels that need to be processed.

The only downside to that kind of method is that the resampling itself may create artifacts, but we're not talking about huge magnification/reduction factors; maybe 3 or 4 is what I occasionally use when I'm confronted with an image of unknown origin and I want to see exactly what FM does at a sub-pixel level. Also, because regular upsampling does not create additional resolution, the risk of creating aliasing artifacts at the down-sampling stage is minimal. The FM radius to use scales nicely with the magnification, e.g. a blur width of 5 for a 4x upsample of a sharp image.
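
A quick numeric check of that scaling, as a hedged sketch (cubic-spline resampling stands in for whatever resampler one actually uses): blur an edge with a sub-pixel sigma, upsample 4x, and the measured 10-90% rise becomes roughly 4x wider, so the blur width to feed the deconvolver scales with the magnification.

import numpy as np
from scipy.ndimage import gaussian_filter1d, zoom

def crossing(edge, level):
    i = np.argmax(edge >= level)              # first sample at/above the level
    return i - 1 + (level - edge[i - 1]) / (edge[i] - edge[i - 1])

def rise_10_90(edge):
    lo, hi = edge.min(), edge.max()
    return crossing(edge, lo + 0.9 * (hi - lo)) - crossing(edge, lo + 0.1 * (hi - lo))

edge = np.zeros(64); edge[32:] = 1.0
blurred = gaussian_filter1d(edge, sigma=0.75)  # sub-pixel blur
up4 = zoom(blurred, 4, order=3)                # 4x cubic-spline upsample

print(round(rise_10_90(blurred), 2), round(rise_10_90(up4), 2))
# the second value is roughly 4x the first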

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 17, 2014, 04:07:23 pm

Strictly speaking, conversion to and back from the Fourier space (frequency domain), is reversible and produces a 100% identical image. A deconvolution is as simple as a division in frequency space, where in the spatial domain it would take multiple multiplications and additions for each pixel, and a solution for the edges, so it's much faster between the domain conversions.

The difficulties arise when we start processing that image in the frequency domain. Division by (almost) zero (which happens at the highest spatial frequencies) can drive the results to 'infinity' or create non-existing numerical results. Add in some noise and limited precision, and it becomes a tricky deal.


Hi Bart,


This is presumably why the macro example I posted (ImageJ) adds noise to the deconvolution filter, to avoid division by 0.  So the filter would be a Gaussian blur with a radius of around 0.7 (in your example), with noise added (which is multiplication by high frequencies (above Nyquist?)). 

I’m talking through my hat here, needless to say  :). But it would be interesting to try it … and ImageJ seems to provide the necessary functions.

Quote
The S/N ratio boost is done through a process known as regularization, where some prior knowledge of the type of noise distribution is used to reduce noise at each iteration, in such a way that the gain of resolution at a given step exceeds the loss of resolution due to noise reduction. It can be as simple as adding a mild Gaussian blur between each iteration step.

So would you then apply your deconvolution kernel with radius 0.7 (say, for your lens/camera), then blur with a small radius, say 0.2, repeat the deconvolution with the same radius of 0.7 ... several times?  That sort of thing?

Quote
You probably have a better eye for it than most ..., hence the search for an even better method.

Well, it’s partly interest, but also … what’s the point of all of this expensive and sophisticated equipment if we ruin the image at the first available opportunity? 

Quote
One would think so, but we don't know exactly how that input is modified by the unknown algorithm they use. Also, because it probably is an iterative or recursive operation, they will somehow optimize several parameters with each iteration to produce a better fitting model. Of course one can first magnify the image, then apply FM (at a virtual sub-pixel accurate level), and then down-sample again. That works fine, although things slow down due to the amount of pixels that need to be processed.

The only downside to that kind of method is that the resampling itself may create artifacts, but we're not talking about huge magnification/reduction factors; maybe 3 or 4 is what I occasionally use when I'm confronted with an image of unknown origin and I want to see exactly what FM does at a sub-pixel level. Also, because regular upsampling does not create additional resolution, the risk of creating aliasing artifacts at the down-sampling stage is minimal. The FM radius to use scales nicely with the magnification, e.g. a blur width of 5 for a 4x upsample of a sharp image.

So if you wanted to try a radius of 0.75, for example, you would upscale by 4 and use a radius of 3 ... and then downscale back by 4?  What resizing algorithms would you use? Bicubic I expect?

I have a couple of other questions (of course!!).

Regarding raw converters, have you seen much difference between them, in terms of resolution, with sharpening off and after deconvolution (a la Bart)? With your 1Ds3, that is, as I expect the converters may be different for different cameras.

Second question: you mentioned in your post on Slanted Edge that using Imatest could speed up the process.  I have Imatest Studio and I was wondering how I can use it to get the radius?  One way, I guess, would be to take the 10-90% edge and divide by 2 … but that seems far too simple!  I’m sure I should be using natural logs and square roots and such!  Help would be appreciated (as usual!).

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 17, 2014, 06:03:20 pm
FYI, here is Eric Chan's reply to my question regarding the Detail slider in Lr/ACR:

"Yes, moving the Detail slider towards 100 progressively moves the 'sharpening' method used by ACR/Lr to be a technique based on deblur/deconvolution.  This is done with a limited set of iterations, some assumptions, and a few other techniques in there in order to keep the rendering performance interactive.  I recommend that Radius be set to a low value, and that this be done only on very clean / low ISO images."

It isn't exactly a clear explanation of what's going on or how to use it ... but short of Adobe giving us their algorithm, which I don't imagine they'll do, it's probably the best we'll get.  "To be used with caution" seems to be the message (which I would agree with, based on trial and error).

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 17, 2014, 06:57:56 pm
This is presumably why the macro example I posted (ImageJ) adds noise to the deconvolution filter, to avoid division by 0.  So the filter would be a Gaussian blur with a radius of around 0.7 (in your example), with noise added (which is multiplication by high frequencies (above Nyquist?)). 

I’m talking through my hat here, needless to say  :). But it would be interesting to try it … and ImageJ seems to provide the necessary functions.

Yes, the addition of noise is a crude attempt to avoid division by zero, although it may also create issues where there were none before.

Quote
So would you then apply your deconvolution kernel with radius 0.7 (say, for your lens/camera), then blur with a small radius, say 0.2, repeat the deconvolution with the same radius of 0.7 ... several times?  That sort of thing?

The issue with that is that the repeated convolution with a given radius will result in the same effect as that of a single convolution with a larger radius. And the smaller radius denoise blur will also cumulate to a larger radius single blur, so there is more that needs to be done.
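
To put a number on that: Gaussian blurs compose by adding their variances, so two passes at sigma 0.7 behave like a single pass at sigma 0.7*sqrt(2), roughly 0.99. A quick hedged check:

import numpy as np
from scipy.ndimage import gaussian_filter1d

delta = np.zeros(41); delta[20] = 1.0
twice = gaussian_filter1d(gaussian_filter1d(delta, 0.7), 0.7)
once = gaussian_filter1d(delta, np.hypot(0.7, 0.7))   # sigmas add in quadrature

print(np.abs(twice - once).max())
# small; the residual only comes from the discrete sampling of the kernels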

Quote
Well, it’s partly interest, but also … what’s the point of all of this expensive and sophisticated equipment if we ruin the image at the first available opportunity?

Yes, introducing errors early in the workflow, can only bite us later in the process.

Quote
So if you wanted to try a radius of 0.75, for example, you would upscale by 4 and use a radius of 3 ... and then downscale back by 4?  What resizing algorithms would you use? Bicubic I expect?

Yes, upsampling with Bicubic Smoother, and down-sampling with Bicubic will often be good enough, but better algorithms will give better results.

Quote
I have a couple of other questions (of course!!).

Regarding raw converters, have you seen much difference between them, in terms of resolution, with sharpening off and after deconvolution (a la Bart)? With your 1Ds3, that is, as I expect the converters may be different for different cameras.

The slanted edge determinations depend on the Rawconverter that was used. Some are a bit sharper than others. Capture One Pro, starting with version 7, does somewhat better than LR/ACR process 2012, but RawTherapee with the Amaze algorithm is also very good for lower noise images.

Quote
Second question: you mentioned in your post on Slanted Edge that using Imatest could speed up the process.  I have Imatest Studio and I was wondering how I can use it to get the radius?  One way, I guess, would be to take the 10-90% edge and divide by 2 … but that seems far too simple!  I’m sure I should be using natural logs and square roots and such!  Help would be appreciated (as usual!).

Actually it is that simple, provided that the Edge Profile (=ESF) has a Gaussian based Cumulative Distribution Function shape, in which case dividing the 10-90 percent rise width in pixels by 2.5631 would result in the correct Gaussian sigma radius. Not all edge profiles follow the exact same shape as a Gaussian CDF, notably in the shadows where veiling glare is added, and not all response curves are calibrated for the actual OECF, so one might need to use a slightly different value.
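
In code that's a one-liner (the 1.8 pixel rise in the example call is just a made-up number):

def sigma_from_rise(rise_10_90_pixels):
    # Assumes the ESF is a Gaussian CDF: the 10% and 90% points sit
    # 2 * 1.28155 = 2.5631 sigmas apart.
    return rise_10_90_pixels / 2.5631

print(round(sigma_from_rise(1.8), 3))    # -> 0.702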

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 18, 2014, 05:28:40 am
Yes, the addition of noise is a crude attempt to avoid division by zero, although it may also create issues where there were none before.

Hi Bart,

Well, how about adding the same noise to both the image and to the blur function - and then doing the deconvolution?  That way you should avoid both division by 0 and other issues, I would have thought?

Quote
The issue with that is that the repeated convolution with a given radius will result in the same effect as that of a single convolution with a larger radius. And the smaller radius denoise blur will also cumulate to a larger radius single blur, so there is more that needs to be done.

So then, for repeated convolutions you would need to reduce the radii?  But if so, on what basis, just guesswork?

Quote
Yes, upsampling with Bicubic Smoother, and down-sampling with Bicubic will often be good enough, but better algorithms will give better results.

Any suggestions would be welcome.  In the few tests I've done I'm not so sure that upsampling in order to use a smaller radius is giving any benefit (whereas it does seem to introduce some artifacts). It may be better to use the integer radius and then fade the filter.

Quote
The slanted edge determinations depend on the Rawconverter that was used. Some are a bit sharper than others. Capture One Pro, starting with version 7, does somewhat better than LR/ACR process 2012, but RawTherapee with the Amaze algorithm is also very good for lower noise images.

Do you think these differences are significant after deconvolution?  Lr seems to be a bit softer than Capture One, for example, but is that because of a better algorithm in Capture One, or is it because Capture One applies some sharpening?  Which raises the question in my mind: is it possible to deconvolve on the raw data, and if so would that not be much better than leaving it until after the image has been demosaiced?  Perhaps this is where one raw processor may have the edge over another?

Quote
Actually it is that simple, provided that the Edge Profile (=ESF) has a Gaussian based Cumulative Distribution Function shape, in which case dividing the 10-90 percent rise width in pixels by 2.5631 would result in the correct Gaussian sigma radius. Not all edge profiles follow the exact same shape as a Gaussian CDF, notably in the shadows where veiling glare is added, and not all response curves are calibrated for the actual OECF, so one might need to use a slightly different value.

Interesting ... how did you calculate that number?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 18, 2014, 08:15:49 am
Well, how about adding the same noise to both the image and to the blur function - and then doing the deconvolution?  That way you should avoid both division by 0 and other issues, I would have thought?

There are many different ways to skin a cat. One can also invert the PSF and use multiplication instead of division in frequency space. But I do think that operations in frequency space are complicating the issues due to the particularities of working in the frequency domain. The only reason to convert to frequency domain is to save processing time on large images because it may be simpler to implement some calculations, not specifically to get better quality, once everything is correctly set up (which requires additional math skills).

Quote
So then, for repeated convolutions you would need to reduce the radii?  But if so, on what basis, just guesswork?

There is a difference between theory and practice, so one would have to verify with actual examples. That's why the more successful algorithms use all sorts of methods (http://www.mathcs.emory.edu/~nagy/RestoreTools/IR.pdf), and adaptive (to local image content, and per iteration) regularization schemes. They do not necessarily use different radii, but vary the other parameters (RL algorithm (https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution), RL considerations (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3222693/)).

Quote
Any suggestions would be welcome.  In the few tests I've done I'm not so sure that upsampling in order to use a smaller radius is giving any benefit (whereas it does seem to introduce some artifacts). It may be better to use the integer radius and then fade the filter.

Maybe this thread (http://www.luminous-landscape.com/forum/index.php?topic=91754.0) offers better than average resampling approaches.

Quote
Do you think these differences are significant after deconvolution?  Lr seems to be a bit softer than Capture One, for example, but is that because of a better algorithm in Capture One, or is it because Capture One applies some sharpening?  Which raises the question in my mind: is it possible to deconvolve on the raw data, and if so would that not be much better than leaving it until after the image has been demosaiced?  Perhaps this is where one raw processor may have the edge over another?

The differences between Rawconverter algorithms concern more than just sharpness. Artifact reduction is also an important issue, because we are working with undersampled color channels and differences between Green and Red/Blue sampling density. Capture One Pro version 7, exhibited much improved resistance to jaggies compared to version 6, while retaining its capability to extract high resolution. It also has a slider control to steer that trade-off for more or less detail. There is no implicit sharpening added if one switches that off on export. The Amaze algorithm as implemented in RawTherapee does very clean demosaicing, especially on images with low noise levels. LR does a decent job most of the time, but I've seen examples (converted them myself, so personally verified) where it fails with the generation of all sorts of artifacts.

Quote
Interesting ... how did you calculate that number?

The 10th and 90th percentile of the cumulative distribution function (https://www.wolframalpha.com/input/?i=normal+distribution%2C+mean%3D0) are at approx. -1.28155 * sigma and +1.28155 * sigma, the range therefore spans approx. 2.5631 * sigma.
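
Or, as a quick check with SciPy:

from scipy.stats import norm

p10, p90 = norm.ppf(0.1), norm.ppf(0.9)   # approx. -1.28155 and +1.28155
print(round(p90 - p10, 4))                # 2.5631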

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 18, 2014, 11:19:26 am
There are many different ways to skin a cat. One can also invert the PSF and use multiplication instead of division in frequency space. But I do think that operations in frequency space are complicating the issues due to the particularities of working in the frequency domain. The only reason to convert to frequency domain is to save processing time on large images because it may be simpler to implement some calculations, not specifically to get better quality, once everything is correctly set up (which requires additional math skills).

Yes, it does get complicated, and at this point my maths is extremely rusty.  Still, out of interest I might have a go when I've polished up on it a bit (I mean a lot!).  But you're probably right - there may be no advantage working in the frequency domain, except that it should be possible to be more precise I would have thought. Not that I would expect to get better results than the experts, of course.

Quote
There is a difference between theory and practice, so one would have to verify with actual examples. That's why the more successful algorithms use all sorts of methods (http://www.mathcs.emory.edu/~nagy/RestoreTools/IR.pdf), and adaptive (to local image content, and per iteration) regularization schemes. They do not necessarily use different radii, but vary the other parameters (RL algorithm (https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution), RL considerations (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3222693/)).

Maybe this thread (http://www.luminous-landscape.com/forum/index.php?topic=91754.0) offers better than average resampling approaches.

The differences between Rawconverter algorithms concern more than just sharpness. Artifact reduction is also an important issue, because we are working with undersampled color channels and differences between Green and Red/Blue sampling density. Capture One Pro version 7, exhibited much improved resistance to jaggies compared to version 6, while retaining its capability to extract high resolution. It also has a slider control to steer that trade-off for more or less detail. There is no implicit sharpening added if one switches that off on export. The Amaze algorithm as implemented in RawTherapee does very clean demosaicing, especially on images with low noise levels. LR does a decent job most of the time, but I've seen examples (converted them myself, so personally verified) where it fails with the generation of all sorts of artifacts.

Thanks for all of that info!  I've played around a bit with RawTherapee and it's certainly very powerful and complex - but for that reason also more difficult to use properly.  Unless the benefits over Lr are really significant, I think the complication of the workflow and the difficulty of integrating it with Lr, Ps etc., is not worth it.  The performance of RT is also a bit of a problem (even though I have a powerful PC), and I've already managed to crash it twice without trying too hard.  But it's certainly an impressive development!  And for an open-source project it's nothing short of amazing.

Quote
The 10th and 90th percentile of the cumulative distribution function (https://www.wolframalpha.com/input/?i=normal+distribution%2C+mean%3D0) are at approx. -1.28155 * sigma and +1.28155 * sigma, the range therefore spans approx. 2.5631 * sigma.


Obvious now that you've pointed it out  :-[

I think at this stage I need to stop asking questions and do some testing and reading and putting into practice what I've learnt from this thread ... which is certainly a lot and I would like to thank everyone!

At this stage my overall conclusions would be
- that there is a significant advantage in using the more sophisticated deconvolution tools over the basic Lr sharpening
- that there is little or no advantage in capture sharpening before resize
- that there is no benefit to doing capture sharpening followed by output sharpening: one sharpening pass is enough
- that other techniques like local contrast and blurring can be very effective in giving an impression of sharpness without damaging the image in the same way that over-sharpening does

I'm sure that these conclusions won't meet with general agreement! And I'm also sure there are plenty of other conclusions that could be drawn from our discussion.

Anyway, many thanks again!

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 27, 2014, 05:20:58 pm
I've now had a chance to do a little more testing and I thought these results could be of interest.

I've compared capture sharpening in Lightroom/ACR, Photoshop Smart Sharpen, FocusMagic and Bart's Kernel with ImageJ on a focused image and on one slightly out of focus.  I used Imatest Studio slanted edge 10-90%.  Here are the results:

(http://www.irelandupclose.com/customer/LL/EdgeTests.jpg)

The first set of results are for the focused image and the second for the slightly out-of-focus image.  Base is the number of pixels in 10-90% edge rise with no sharpening.  LR is for Lightroom/ACR with the Amount, Radius and Detail values. FM is for FocusMagic with the radius and amount. SS is for Smart Sharpen in Photoshop. IJ is for Bart's kernel in ImageJ.

For ImageJ I used Bart's formula to calculate the horizontal and vertical radii. For the others I used my eye first of all, and then Imatest to get a good rise without (or with little) overshoot or undershoot (also for ImageJ for the scale value).

In the first set of results, ACR gave a much lower result with an Amount of 40. Increasing that to 50 made a big difference at the cost of slight halos.  Smart Sharpen sharpens the noise beautifully  :), so it really needs an edge mask (but with an edge mask it does a very good job).  Focus Magic gave the cleanest result with IJ not far behind.  Any of these sharpening tools would do a good job of capture sharpening with this image (with edge masks for ACR and Smart Sharpen).

In the second set of results, FocusMagic gives the sharpest image - however at the expense of artifacts around the edges (but with very little boosting of the image noise). Smart Sharpen gives a similar result with a clean edge but very noisy (absolutely needs an edge mask). Lightroom does a good job even without Masking - but that makes it even better. ImageJ gives a very clean image and could easily match the others for sharpness by upping the scale to 1.3 or 1.4.

I think FocusMagic suffers from the integer radius settings; Smart Sharpen suffers from noise boosting; LR/ACR needs careful handling to avoid halos but the Masking feature is very nice. ImageJ/Bart is a serious contender. Overall, with care any of these sharpening/deconvoluting tools will do a good job, but FocusMagic needs to be used with care on blurred images (IMO, of course :)).

I also tested the LR/ACR rendering against RawTherapee with amaze and igv and found no difference (at pixel-level amaze is cleaner than the other two).

Robert


Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jim Kasson on August 27, 2014, 05:48:27 pm
I agree it's a bit of work, and the workflow could be improved by a dedicated piece of software that does it all on an image that gets analyzed automatically. But hey, it's a free tool, and it's educational.

There's Matlab source code of a function called sfrmat3, which does the analysis automatically, here (http://losburns.com/imaging/software/SFRedge/sfrmat3_post/index.html). I've used this code, and it works well. Matlab is not free. However, there's a clone called Octave that is. I don't know if sfrmat3 runs under Octave.

Jim
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Misirlou on August 27, 2014, 06:49:58 pm
(this shot was taken at 14.5K feet on Mauna Kea). Retaining the vibrance of the sky, while pulling detail from the backside of this telescope was my goal.

Im happy to post the CR2 if anyone wants to take a shot.

PP

pp,

That's a great place. I was there just before first light at Keck 1. Magical.

I might be interested in making a run at your CR2 with DXO, just to see what we might get with minimal user intervention. I don't expect anything particularly noteworthy, but it might be interesting from a comparative workflow point of view.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 28, 2014, 02:52:54 am
Hi ppmax2,

Yes, I would also be interested to try your raw image with FocusMagic and ImageJ - as I expect that your RT image is about as good as you will get with RT, it would be interesting to see how two other deconvolution tools compare.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 28, 2014, 03:10:26 am
Another observation regarding FocusMagic: I mentioned the low noise boosting ... well it's fairly clear that FM uses an edge mask.  If you look at a (slanted edge) edge you can see clearly that there is an area near the edge where the noise is boosted. The result is much the same as with Smart Sharpen using an edge mask.  A bit disappointing, especially since there is no control over the edge mask (IMO, a small amount of noise-level sharpening can be visually beneficial).

This really puts Bart/ImageJ in very good light as the same noise boosting is not apparent with this technique, without the use of an edge mask (but of course higher sharpening levels could be used with an edge mask).

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 28, 2014, 06:46:09 am
I've now had a chance to do a little more testing and I thought these results could be of interest.

Hi Robert,

Thanks for the feedback.

One question, out of curiosity, did you also happen to record the Imatest "Corrected" (for "standardized sharpening (http://www.imatest.com/docs/sharpening/)") values? In principle, Imatest does its analysis on linearized data, either directly from Raw (by using the same raw conversion engine for all comparisons) or by linearizing the gamma-adjusted data with a gamma approximation, or an even more accurate OECF response calibration. Since gamma-adjusted and sharpened (can be local contrast adjustment) input will influence the resulting scores, it offers a kind of correction mechanism to level the playing field for already sharpened images.

Quote
I've compared capture sharpening in Lightroom/ACR, Photoshop Smart Sharpen, FocusMagic and Bart's Kernel with ImageJ on a focused image and on one slightly out of focus.  I used Imatest Studio slanted edge 10-90%.  Here are the results:

With the local contrast distortions of the scores in mind, the results are about as one would expect them to be, but it's always nice to see the theory confirmed ...

Quote
In the first set of results, ACR gave a much lower result with an Amount of 40. Increasing that to 50 made a big difference at the cost of slight halos.  Smart Sharpen sharpens the noise beautifully  :), so it really needs an edge mask (but with an edge mask it does a very good job).

This explains why the acutance boost of mostly USM (with some deconvolution mixed in) requires a lot of masking to keep the drawbacks of that method (halos and noise amplification depending on radius setting) in check.

Quote
Focus Magic gave the cleanest result with IJ not far behind.  Any of these sharpening tools would do a good job of capture sharpening with this image (with edge masks for ACR and Smart Sharpen).

With the added note of real resolution boost for the deconvolution based methods, and simulated resolution by acutance boost of the USM based methods. That will make a difference as the output size goes up, but at native to reduced pixel sizes they would all be useful to a degree.

Quote
I think FocusMagic suffers from the integer radius settings; Smart Sharpen suffers from noise boosting; LR/ACR needs careful handling to avoid halos but the Masking feature is very nice. ImageJ/Bart is a serious contender. Overall, with care any of these sharpening/deconvoluting tools will do a good job, but FocusMagic needs to be used with care on blurred images (IMO, of course :)).

We also need to keep in mind whether we are Capture sharpening or doing something else. Therefore, avoiding halos and other edge artifacts (like 'restoring' aliasing artifacts and jaggies) may require reducing the amount settings where needed, or using masks to apply different amounts of sharpening in different parts of the image (e.g. selections based on High-pass filters or blend-if masks to reduce clipping). A tool like the Topaz Labs "Detail" plugin allows several of these operations (including deconvolution) to be done in a very controlled fashion, and not only does so without the risk of producing halos, but also while avoiding color issues due to increased contrast.

I think the issue (if we can call it that) with FocusMagic is that it has to perform its magic at the single pixel level, where we already know that we really need more than 2 pixels to reliably represent non-aliased discrete detail. It's not caused by the single digit blur width input (we don't know how that's used internally in an unknown iterative deconvolution algorithm) as such IMHO.

That's why I occasionally suggest that FocusMagic may also be used after first upsampling the unsharpened image data. That would allow it to operate on a sub-pixel accurate level, although its success would then also depend on the quality of the resampling algorithm.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 28, 2014, 08:38:09 am

One question, out of curiosity: did you also happen to record the Imatest "Corrected" (for "standardized sharpening") values? In principle, Imatest does its analysis on linearized data, either directly from Raw (by using the same raw conversion engine for all comparisons) or by linearizing the gamma-adjusted data with a gamma approximation, or an even more accurate OECF response calibration. Since gamma-adjusted and sharpened input (which can include local contrast adjustment) will influence the resulting scores, it offers a kind of correction mechanism to level the playing field for already sharpened images.

Yes, I’ve kept the information – for example this one is a horizontal edge using your deconvolution and IJ (7x7 matrix) with a scale of 1.25.  As you can see it’s perfectly ‘sharpened’. The slight overshoot/undershoot is because of the +25% on the scale.

(http://www.irelandupclose.com/customer/LL/Base-CA-IJ-Deconv-Horiz-S1p25.jpg)

 
Quote
I've compared capture sharpening in Lightroom/ACR, Photoshop Smart Sharpen, FocusMagic and Bart's Kernel with ImageJ on a focused image and on one slightly out of focus.  

With the local contrast distortions of the scores in mind, the results are about as one would expect them to be, but it's always nice to see the theory confirmed ...

I think it would be worth writing a Photoshop filter with this technique.  If I can find some time in the next few months I would be happy to have a go.

Quote
In the first set of results, ACR gave a much lower result with an Amount of 40. Increasing that to 50 made a big difference at the cost of slight halos.  Smart Sharpen sharpens the noise beautifully.  So it really needs an edge mask (but with an edge mask it does a very good job).

This explains why the acutance boost of mostly USM (with some deconvolution mixed in) requires a lot of masking to keep the drawbacks of that method (halos and noise amplification depending on radius setting) in check.

Yes, absolutely.  I took the shots at ISO 100 on a 1Ds3 (but at a slow shutter speed of 1/5th) so the images were very clean. To give a reasonable edge with Smart Sharpen, the noise gets boosted significantly. This wouldn’t be so obvious on a normal image, but with a flat gray area it’s very easy to see.  Your deconvolution really does a very good job of restoring detail without boosting noise.  FocusMagic cheats a bit by using an edge mask, IMO, but my only gripe is that there is no user control over the mask.

Quote
Focus Magic gave the cleanest result with IJ not far behind.  Any of these sharpening tools would do a good job of capture sharpening with this image (with edge masks for ACR and Smart Sharpen).

With the added note of real resolution boost for the deconvolution based methods, and simulated resolution by acutance boost of the USM based methods. That will make a difference as the output size goes up, but at native to reduced pixel sizes they would all be useful to a degree.

Hugely important!

Quote
We also need to keep in mind whether we are Capture sharpening or doing something else. Therefore, the avoidance of halos and other edge artifacts (like 'restoring' aliasing artifacts and jaggies) may require reducing the amount settings where needed, or using masks to apply different amounts of sharpening in different parts of the image (e.g. selections based on High-pass filters or blend-if masks to reduce clipping). A tool like the Topaz Labs "Detail" plugin allows one to do several of these operations (including deconvolution) in a very controlled fashion, and not only does so without the risk of producing halos, but also while avoiding color issues due to increased contrast.

As you know, I don’t much like the idea of capture sharpening followed by output sharpening, so I would tend to use one stronger sharpening after resize. In the Imatest sharpening example above, I would consider the sharpening to be totally fine for output – but if I had used a scale of 1 and not 1.25 it would not have been enough.  I don’t see what is to be gained by sharpening once with a radius of 1 and then sharpening again with a radius of 1.25 … but maybe I’m wrong.

I do have the Topaz plug-ins and I find the Detail plug-in very good for Medium and Large Details, but not for Small Details because that just boosts noise and requires an edge mask (so why not use Smart Sharpen which has a lot more controls?).  So, to your point regarding Capture or Capture + something else, I would think that the Topaz Detail plug-in would be excellent for Creative sharpening, but not for capture/output sharpening.

The InFocus plug-in seems OK for deblur, but on its own it’s not enough: however, with a small amount of Sharpen added (same plug-in) it does a very good job.  Here’s an example:

(http://www.irelandupclose.com/customer/LL/topaz-infocus.jpg)

Apart from the undershoot and slight noise boost (acceptable without an edge mask IMO) it’s pretty hard to beat a 10-90% edge rise of 1.07 pixels!  (This is one example of two-pass sharpening that’s beneficial, it would seem  :)).

Quote
I think the issue (if we can call it that) with FocusMagic is that it has to perform its magic at the single pixel level, where we already know that we really need more than 2 pixels to reliably represent non-aliased discrete detail. It's not caused by the single digit blur width input (we don't know how that's used internally in an unknown iterative deconvolution algorithm) as such IMHO.

That's why I occasionally suggest that FocusMagic may also be used after first upsampling the unsharpened image data. That would allow it to operate on a sub-pixel accurate level, although its success would then also depend on the quality of the resampling algorithm.

Yes, and as you know I would favour resizing before ‘Capture sharpening’ in any case.  


Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 28, 2014, 09:48:45 am
As you know, I don’t much like the idea of capture sharpening followed by output sharpening, so I would tend to use one stronger sharpening after resize. In the Imatest sharpening example above, I would consider the sharpening to be totally fine for output – but if I had used a scale of 1 and not 1.25 it would not have been enough.

I agree, and it's easier when one only has to consider the immediate sharpening to be performed, and not something that may or may not be done much later in the workflow.

Quote
I don’t see what is to be gained by sharpening once with a radius of 1 and then sharpening again with a radius of 1.25 … but maybe I’m wrong.

The only potential benefit is that one can use different types of sharpening, but in practice that does not make too much of a difference if the sharpening already was of the deconvolution kind, and not only acutance. Once resolution is restored, acutance enhancement goes a long way.

Quote
I do have the Topaz plug-ins and I find the Detail plug-in very good for Medium and Large Details, but not for Small Details because that just boosts noise and requires an edge mask (so why not use Smart Sharpen which has a lot more controls?).

I have the same observations, but the noise amplification in "Detail" can be reduced with a negative "boost" adjustment. There is also a "Deblur" control that specifically does deconvolution at the smallest pixel level, instead of the more wavelet-oriented boosts of spatial-frequency ranges.

Quote
So, to your point regarding Capture or Capture + something else, I would think that the Topaz Detail plug-in would be excellent for Creative sharpening, but not for capture/output sharpening.

The "Deblur" control might work for deconvolution based Capture sharpening, especially if one doesn't have other tools. Output sharpening is a whole other can of worms, because viewing distance needs to be factored in as well as some differences in output media. However, not all matte media are also blurry. On the contrary, some are quite sharp despite a reduced contrast and/or surface structure. Even Canvas can be real sharp, and surface structures can be quite different. I've had large canvas output done at 720 PPI, FM deconvolution sharpened at that native printer output size, and the results were amazing

Quote
The InFocus plug-in seems OK for deblur, but on its own it’s not enough: however, with a small amount of Sharpen added (same plug-in) it does a very good job.

Yes, its main difficulty in use is that the radius is not a good predictor of the range it affects. I assume they don't define the Radius in sigma units, but rather in something like pixels (although at the smallest radii it does tend to be more sigma-like ...); maybe full-width-half-maximum (FWHM, or 2.3548 x Gaussian sigma for the diameter, or 1.1774 for the radius) is the actual dimension they use. It often seems to do a better job after first upsampling the image, so maybe its algorithms try too hard to recover detail at the single-pixel level and produce artifacts instead. The upsampled image, with its detail spread over more pixels, is then harder to push too far. I hope that an updated version (when they get around to updating it) will also allow user-generated PSF input, and maybe a choice between algorithms.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 28, 2014, 10:25:32 am
I agree, and it's easier when one only has to consider the immediate sharpening to be performed, and not something that may or may not be done much later in the workflow.

The only potential benefit is that one can use different types of sharpening, but in practice that does not make too much of a difference if the sharpening already was of the deconvolution kind, and not only acutance. Once resolution is restored, acutance enhancement goes a long way.

I have the same observations, but the noise amplification in "Detail" can be reduced with a negative "boost" adjustment. There is also a "Deblur" control that specifically does deconvolution at the smallest pixel level, instead of the more wavelet-oriented boosts of spatial-frequency ranges.

The "Deblur" control might work for deconvolution based Capture sharpening, especially if one doesn't have other tools.

I clearly need to have a good look at the Topaz sharpening options :) - so far I haven't used Topaz much at all for anything, but it seems like there's some quite good stuff there.

Quote
Output sharpening is a whole other can of worms, because viewing distance needs to be factored in, as well as some differences in output media. However, not all matte media are also blurry. On the contrary, some are quite sharp despite a reduced contrast and/or surface structure. Even canvas can be really sharp, and surface structures can be quite different. I've had large canvas output done at 720 PPI, FM deconvolution sharpened at that native printer output size, and the results were amazing.

I take it you just used FM deconvolution on its own, without any further output sharpening?  I'm not saying that a two-pass sharpen isn't sometimes necessary, but I find that in general, if you know your paper (and especially if you don't like halos and artifacts), one fairly delicate and careful sharpen/deconvolution aimed at the output resolution and size gives really good results.  Of course, if there is some camera shake then that has to be sorted out first.

What do you do if your image is a bit out-of-focus?  Do you first correct for the base softening due to the AA filter etc., and then correct for the out-of-focus, or do you attempt to do it in one go?


Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 28, 2014, 11:26:23 am
I clearly need to have a good look at the Topaz sharpening options :) - so far I haven't used Topaz much at all for anything, but it seems like there's some quite good stuff there.

There are only so many hours in a day; one has to prioritize ..., which is why I like to share my findings and hope for others to do the same. What I find useful is to reduce all 3 (small, medium, large) details sliders to -1.00, and then in turn restore one slider at a time to 0.00 or more to see exactly which detail is being targeted. The Boost sliders can be reduced for less effect (I think it targets based on the source level of contrast of the specific feature size). Boosting the small details also increases noise, so reducing the boost will reduce the amplification of low contrast noise, while maintaining some of the higher contrast small detail.

The color targeted Cyan-Red / Magenta-Green / Yellow-Blue luminance balance controls are also very useful for bringing out detail or suppressing it, because many complementary colors do not reside directly next to each other. There is also an Edge-aware masking function that allows one to paint the selected detail adjustments in or out. One can also work in stages and "Apply" intermediate results. It's a very potent plugin.

Quote
I take it you just used FM deconvolution on its own, without any further output sharpening?

Yes, all that was required was 2 rounds of FM deconvolution sharpening with different width settings at the final output size, because the original was already very sharp in the limited DOF zone. One round for the upsampling, and another for the finest (restored) detail.

Quote
What do you do if your image is a bit out-of-focus?  Do you first correct for the base softening due to the AA filter etc., and then correct for the out-of-focus, or do you attempt to do it in one go?

In that case I probably would need too large a "blur width" setting, or several, and thus do a mild amount at original file size, and another after resampling. Of course my goal is to avoid blurred originals ..., and I usually succeed (I do lug my tripod or a monopod around a lot).

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: ppmax2 on August 28, 2014, 02:04:51 pm
Here's that CR2 for those that want to test with it. I'd love to see what can be done with it with the various tools mentioned:
http://ppmax.duckdns.org/public.php?service=files&t=af778de4fb2e78531e4d4058faf6061b


If you have any problems downloading please let me know.

PP
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 28, 2014, 05:23:20 pm
There are only so many hours in a day; one has to prioritize ..., which is why I like to share my findings and hope for others to do the same.

Yes indeed ... not so many hours in a day (and this kind of testing is VERY time-consuming!).  So I really do appreciate all of your help (and that of others too, of course).

Quote
What I find useful is to reduce all 3 (small, medium, large) details sliders to -1.00, and then in turn restore one slider at a time to 0.00 or more to see exactly which detail is being targeted. The Boost sliders can be reduced for less effect (I think it targets based on the source level of contrast of the specific feature size). Boosting the small details also increases noise, so reducing the boost will reduce the amplification of low contrast noise, while maintaining some of the higher contrast small detail.

They’ve really gone slider-mad here!  I can see that the Small Details Boost may be useful in toning down noise introduced by the Small Details adjustment, but I don’t see any reason to use the Small Details adjustment at all as the InFocus filter seems to me to do a better job.

The Medium and Large adjustments are a bit like USM with a large and very large radius, respectively.  But what is very nice with the Topaz filter is the ability to target shadows and highlights.  I think I’ll be using these!

Quote
The color targeted Cyan-Red / Magenta-Green / Yellow-Blue luminance balance controls are also very useful for bringing out detail or suppressing it, because many complementary colors do not reside directly next to each other.

What's interesting here is that you're bringing tonal adjustments into a discussion about sharpening ... and absolutely correctly IMO.  What we're looking for is to bring life to our images, and detail is only one small (but not insignificant!) aspect of it.  I've just played around with the tonal adjustments you mentioned in Topaz and they are really very good.  I just picked a rather flat image of an old castle on an estuary and with a few small tweaks the whole focus of the image was brought onto the castle and promontory - and what was a not very interesting image has become not bad at all.

I will definitely be using this feature!

Quote
Yes, all that was required was 2 rounds of FM deconvolution sharpening with different width settings at the final output size, because the original was already very sharp in the limited DOF zone. One round for the upsampling, and another for the finest (restored) detail.

OK … this is where I have a problem/don’t understand.  If I understand you correctly, you used FM first to correct your original (already nicely focused) image to restore fine detail (lost by lens/sensor etc). Then you upsampled and used FM again to correct the softness caused by the upsampling.  Why not leave the original without correction, upsample, and then use FM once?  Whatever softness is in the original image will be upsampled so the deconvolution radius will have to be increased by the same ratio as the upsampling, then you add a bit more strength, to taste, to correct for any softness introduced by the upsampling.

I’ve given a few examples that seem to show that there is no downside to this (the upside is that any over-enthusiasm in the ‘capture’ sharpening won’t be amplified by the upsampling), but so far I haven’t seen an example where sharpen/upsize/sharpen is better.  Still, this is probably splitting hairs, and either approach will work (in the right hands  :)).

Quote
In that case I probably would need too large a "blur width" setting, or several, and thus do a mild amount at original file size, and another after resampling. Of course my goal is to avoid blurred originals ..., and I usually succeed (I do lug my tripod or a monopod around a lot).

Yes, I expect this is a linear problem so doing the standard deblur for your lens/camera followed by a deblur for the out-of-focus would probably be a good idea (rather than trying to fix everything in one go).

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 28, 2014, 06:25:16 pm
They’ve really gone slider-mad here!  I can see that the Small Details Boost may be useful in toning down noise introduced by the Small Details adjustment, but I don’t see any reason to use the Small Details adjustment at all as the InFocus filter seems to me to do a better job.

Well, not exactly. The Small Details adjustment adjusts the amplitude of 'small feature detail'. Small is not defined as a fixed number of pixels but rather as small in relation to the total image size. InFocus, instead, deconvolves and optionally sharpens more traditionally, based on certain fixed blur dimensions in pixels.

Quote
The Medium and Large adjustments are a bit like USM with a large and very large radius, respectively.

Only a small bit, but without any risk of creating halos!

Quote
But what is very nice with the Topaz filter is the ability to target shadows and highlights.

Yes, and that is in addition to the overall settings in the detail panel. It's a bit confusing at first, but they each allow and remember their own settings of the detail sliders.

Quote
OK … this is where I have a problem/don’t understand.  If I understand you correctly, you used FM first to correct your original (already nicely focused) image to restore fine detail (lost by lens/sensor etc). Then you upsampled and used FM again to correct the softness caused by the upsampling.  Why not leave the original without correction, upsample, and then use FM once?

Not exactly. You can upsample an unsharpened image, and apply 2 deconvolutions with different widths at the output size. So e.g. an upsample to 300% might require a blur width of 4 or 5, but can be followed with one of 1 or 2 (with a lower amount).

Quote
Whatever softness is in the original image will be upsampled so the deconvolution radius will have to be increased by the same ratio as the upsampling, then you add a bit more strength, to taste, to correct for any softness introduced by the upsampling.

Yes, the original optical blur is scaled to a larger dimension, but may be diffraction dominated or defocus dominated. That would lead to different PSF requirements. FocusMagic may be clever enough to optimize either type of blur, but I'm not sure that would take the same blur width settings. In addition, the resizing will also create some blur, of yet another kind. There is a good chance that these PSFs will cascade into a Gaussian looking combined blur, but sometimes we can do better by the above mentioned dual deconvolution at the final size.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 28, 2014, 07:13:42 pm
Hi pp,

Well, all I've done with your image is to apply FocusMagic to it ... and some tonal adjustments in Lightroom.  Your image has color differences which I haven't tried to match. The vertical lines in your image are very clean - but the rest of the image is very soft ... which is a tradeoff, IMO.

Be interesting to get some views on which is the cleaner result :).

(http://www.irelandupclose.com/customer/LL/TestImage.jpg)

(You can right-click on the image to see it full-size)

Well-taken shot, btw!!

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 28, 2014, 07:32:56 pm
Well, not exactly. The Small Details adjustment adjusts the amplitude of 'small feature detail'. Small is not defined as a fixed number of pixels but rather as small in relation to the total image size. InFocus, instead, deconvolves and optionally sharpens more traditionally, based on certain fixed blur dimensions in pixels.

Not exactly. You can upsample an unsharpened image, and apply 2 deconvolutions with different widths at the output size. So e.g. an upsample to 300% might require a blur width of 4 or 5, but can be followed with one of 1 or 2 (with a lower amount).

Yes, the original optical blur is scaled to a larger dimension, but may be diffraction dominated or defocus dominated. That would lead to different PSF requirements. FocusMagic may be clever enough to optimize either type of blur, but I'm not sure that would take the same blur width settings. In addition, the resizing will also create some blur, of yet another kind. There is a good chance that these PSFs will cascade into a Gaussian looking combined blur, but sometimes we can do better by the above mentioned dual deconvolution at the final size.

Cheers,
Bart

I need to play around with Topaz more ... but I can see that there is a lot there.

I understand what you're saying about the upsampling deconvolutions.  Effectively what you are doing (after the resize/deconvolution) is to do a second deconvolution with a smaller radius and amount if you find that the image is still too soft (and the first deconvolution cannot be adjusted to give you the optimum sharpness).  Of course that makes perfect sense: there is no cast-in-concrete formula and different images with different resizing will require different approaches.  I guess what I'm saying is that, as far as possible, multiple sharpening passes should be the exception rather than the rule.  It's a sort of campaign to remind us that we can do more harm than good with what is often our flaithiúlach (Gaelic for over-generous, as in buying drinks for the whole bar :)) approach to sharpening.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: ppmax2 on August 28, 2014, 07:47:32 pm
Hi pp,

Well, all I've done with your image is to apply FocusMagic to it ... and some tonal adjustments in Lightroom.  Your image has color differences which I haven't tried to match. The vertical lines in your image are very clean - but the rest of the image is very soft ... which is a tradeoff, IMO.

Be interesting to get some views on which is the cleaner result :).

(http://www.irelandupclose.com/customer/LL/TestImage.jpg)

(You can right-click on the image to see it full-size)

Well-taken shot, btw!!

Robert

Hi Robert--

Wow--FM looks to be a gem of a tool. Compared to RT, I think your result has a bit more definition, especially on the guardrail that encircles the telescope. The weather vanes on the top look a bit more defined as well, and the vertical lines on the rear of the building look good too.

Is there any chance of posting an uncropped version? I'd like to see what the detail looks like in the lower portion of the image, especially in the shadow/noise areas.

Also, what did you do to embed the full size image that can be viewed by right-click?

Nice job and thanks for the render!

PP
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: sniper on August 29, 2014, 05:25:50 am
Bart forgive the slightly off topic question, but what is the structure in your picture?

Regards Wayne
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 29, 2014, 06:04:09 am
Bart forgive the slightly off topic question, but what is the structure in your picture?

Hi Wayne,

I'm sorry, but I do not understand the question. Maybe you are referring to ppmax2's picture?

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 29, 2014, 06:24:05 am
Hi Robert--

Wow--FM looks to be a gem of a tool. Compared to RT, I think your result has a bit more definition, especially on the guardrail that encircles the telescope. The weather vanes on the top look a bit more defined as well, and the vertical lines on the rear of the building look good too.

Is there any chance of posting an uncropped version? I'd like to see what the detail looks like in the lower portion of the image, especially in the shadow/noise areas.

Also, what did you do to embed the full size image that can be viewed by right-click?

Nice job and thanks for the render!

PP

Hi, Sure ... you can download the image here: http://www.irelandupclose.com/customer/LL/TestImage-Full.jpg (http://www.irelandupclose.com/customer/LL/TestImage-Full.jpg)

Before opening into Photoshop I did a small amount of luminance and color noise reduction in ACR - but very little as the image is very clean.  There's a tiny amount of shadow noise, but that could have been reduced by using an edge mask with FM (although FM is pretty good at not boosting noise).  But even as it stands you could lighten the image considerably without noise being an issue, native resolution or upsized.

I'm sure Bart or someone who has done a lot of research into deconvolution could do a better job than I did.

Almost forgot ... I use the img link to an image on my website.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: ppmax2 on August 29, 2014, 07:15:37 am
Hello sniper--

That building is the housing for the Subaru telescope on top of Mauna Kea volcano on the Big Island of Hawaii. In the image below it's the one to the left of the two orbs (Keck 1 and Keck 2):
(https://farm4.staticflickr.com/3862/14414366318_f2f92501c6_b.jpg) (https://flic.kr/p/nXKoVd)

Thanks for posting the full size Robert--FM looks like it did a really nice job...I'll have to check that out now ;)

thx--
PP
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on August 29, 2014, 11:33:39 am
The InFocus plug-in seems OK for deblur, but on its own it’s not enough: however, with a small amount of Sharpen added (same plug-in) it does a very good job.  Here’s an example...

Apart from the undershoot and slight noise boost (acceptable without an edge mask IMO) it’s pretty hard to beat a 10-90% edge rise of 1.07 pixels!  (This is one example of two-pass sharpening that’s beneficial, it would seem  :)).

Hi Robert,

I haven't been able to read all of it but you have covered a lot of excellent ground and come a long way in this thread; good for you, and thank you for doing it.  There was a recent thread around here from a gentleman who was able to undo a fair amount of known blur using an FT library; I wonder if any of that can be used by us non-coders.

For my landscapes I typically use InFocus in its Estimate mode (Radius 2, Softness 0.3, Suppress 0.2) for capture sharpening, sometimes followed by a touch of Local Contrast at low opacity.  That seems to take care of the small to medium range detail quite well.  If I see any squigglies from InFocus I mask those out.  Imho one of the limitations we are running into is that we are deconvolving based on a Gaussian PSF, which is not necessarily representative of the camera system's actual intensity distribution.

But along these lines, since you are playing with Imatest, I have this consideration for you: a good blind guess at what deconvolution radius to use for a guessian :) PSF is that which would result in the same MTF50 as the MTF50 produced by the edge when measured off the raw data.  In other words, a good guess at the radius to use for deconvolution is (excuse the Excel notation)

StdDev/Radius = SQRT(-2*LN(0.5))/2/PI/MTF50 pixels

For example, if when you fed the edge raw data to Imatest it returned an MTF50 of 0.28 cy/px, a good guess at the gaussian radius to use for deconvolution would be 0.67 pixels.
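In Python rather than Excel notation, the same rule of thumb is just a couple of lines (a quick sketch, nothing more; the function name is mine):

Code:
import math

def deconv_sigma_from_mtf50(mtf50_cy_per_px):
    """Gaussian sigma (in pixels) whose MTF50 equals the measured MTF50."""
    return math.sqrt(-2 * math.log(0.5)) / (2 * math.pi * mtf50_cy_per_px)

print(round(deconv_sigma_from_mtf50(0.28), 2))  # 0.67, as in the example above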

Jack


Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 29, 2014, 11:44:14 am

Re: Topaz Detail:
The Small Details adjustment adjusts the amplitude of 'small feature detail'. Small is not defined as a fixed number of pixels but rather as small in relation to the total image size. InFocus, instead, deconvolves and optionally sharpens more traditionally, based on certain fixed blur dimensions in pixels.


Yes, you're right - here is pp's image with USM inside the shape and Topaz Detail outside (both overdone to make it clearer).

(http://www.irelandupclose.com/customer/LL/USM-Tz-Detail.jpg)

The Topaz Detail Small clearly brings out detail in the image (the clouds have gone from flat to having depth), as well as noise, so it might be good to reduce noise either before or after applying the filter - whereas USM just sharpens fine detail.  And as you say, USM also introduces halos.

So ... nice filter (especially considering all the rest of it)!

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 29, 2014, 12:00:31 pm
Hi Robert,

I haven't been able to read all of it but you have covered a lot of excellent ground and come a long way in this thread; good for you, and thank you for doing it.  There was a recent thread around here from a gentleman who was able to undo a fair amount of known blur using an FT library; I wonder if any of that can be used by us non-coders.

For my landscapes I typically use InFocus in its Estimate mode (Radius 2, Softness 0.3, Suppress 0.2) for capture sharpening, sometimes followed by a touch of Local Contrast at low opacity.  That seems to take care of the small to medium range detail quite well.  If I see any squigglies from InFocus I mask those out.  Imho one of the limitations we are running into is that we are deconvolving based on a Gaussian PSF, which is not necessarily representative of the camera system's actual intensity distribution.

But along these lines, since you are playing with Imatest, I have this consideration for you: a good blind guess at what deconvolution radius to use for a guessian :) PSF is that which would result in the same MTF50 as the MTF50 produced by the edge when measured off the raw data.  In other words, a good guess at the radius to use for deconvolution is (excuse the Excel notation)

StdDev/Radius = SQRT(-2*LN(0.5))/2/PI/MTF50 pixels

For example, if when you fed the edge raw data to Imatest it returned an MTF50 of 0.28 cy/px, a good guess at the gaussian radius to use for deconvolution would be 0.67 pixels.

Jack


Hi Jack - thanks for the tips ... and I'll try the radius estimate you suggest.  What I've done so far is to use Bart's suggestion of dividing the 10% to 90% edge rise (in pixels) by 2.5631, which is the width, in sigma units, between the 10% and 90% points on a Gaussian edge.  In the example I gave earlier I used both the horizontal and vertical figures and fed them into Bart's PSF tool, then used the kernel in ImageJ.  So far, Bart's tool is the only one I've found that allows an asymmetrical PSF, so it has a level of sophistication not generally present.  It would be very nice to have this technique in a Photoshop filter ... something to think about!
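For a quick side-by-side of the two estimates, here is a small Python sketch (the input numbers are illustrative only, not my actual Imatest readings):

Code:
import math

# Bart's spatial-domain rule: the 10-90% rise of a Gaussian edge spans 2.5631 sigma.
rise_10_90_px = 1.3                   # illustrative 10-90% edge rise, in pixels
sigma_bart = rise_10_90_px / 2.5631   # about 0.51 px

# Jack's frequency-domain rule, with an illustrative MTF50 in cycles/pixel.
mtf50_cy_per_px = 0.35
sigma_jack = math.sqrt(-2 * math.log(0.5)) / (2 * math.pi * mtf50_cy_per_px)  # about 0.54 px

print(round(sigma_bart, 2), round(sigma_jack, 2))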

I'll have another look at this when I have a bit of time - the tests I did with Imatest were done on a not-very-good paper (Epson Enhanced Matte), so it's probable that some of the image softness came from the print - also, I used a 24-105 F4L lens from quite a distance back and I would like to try again with a prime lens.

Cheers,

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: sniper on August 29, 2014, 12:39:49 pm
PPmax2   Thank you, I just wondered what sort of building it was.  (nice pic by the way)

Bart, my apologies; I goofed and thought it was your pic.

Regards both  Wayne
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 29, 2014, 01:01:45 pm

But along these lines, since you are playing with Imatest, I have this consideration for you: a good blind guess at what deconvolution radius to use for a guessian :) PSF is that which would result in the same MTF50 as the MTF50 produced by the edge when measured off the raw data.  In other words, a good guess at the radius to use for deconvolution is (excuse the Excel notation)

StdDev/Radius = SQRT(-2*LN(0.5))/2/PI/MTF50 pixels


Pretty close to Bart's method! 0.56 by Bart, 0.54 by you .. and 1.0 Bart, 0.9 you :)

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on August 29, 2014, 02:23:33 pm
Pretty close to Bart's method! 0.56 by Bart, 0.54 by you .. and 1.0 Bart, 0.9 you :)

Excellent then.  You can read the rationale behind my approach here (http://www.strollswithmydog.com/what-radius-to-use-for-deconvolution-capture-sharpening/).

Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 30, 2014, 03:46:45 am
Excellent then.  You can read the rationale behind my approach here (http://www.strollswithmydog.com/what-radius-to-use-for-deconvolution-capture-sharpening/).

Jack

Thanks Jack - very interesting and a bit scary!  I thought I would check out what happens using Bart's deconvolution, based on the correct radius and then increasing it progressively, and this is what happens:

(http://www.irelandupclose.com/customer/LL/Base-1p06.jpg)  (http://www.irelandupclose.com/customer/LL/Base-4.jpg)

The left-hand image has the correct radius of 1.06; the one on the right has a radius of 4.  As you can see, all that happens is that there is a significant overshoot on the MTF at 4 (this overshoot increases progressively from a radius of about 1.4).

The MTF remains roughly Gaussian unlike the one in your article … and there is no sudden transition around the Nyquist frequency or shoot off to infinity as the radius increases.  Are these effects due to division by zero(ish) in the frequency domain … or to something else?

There is also no flattening of the MTF as per your article – the deconvolution that I’m showing seems more like a USM effect, as you can see here where I’ve applied a USM with radius of 1.1:

(http://www.irelandupclose.com/customer/LL/Base-USM-1p06.jpg)  
 
FocusMagic, on the other hand, goes progressively manic as the radius is increased from 2 (first image, OK) to 3 then to 4 and finally 6.

(http://www.irelandupclose.com/customer/LL/Base-FM-2.jpg)   (http://www.irelandupclose.com/customer/LL/Base-FM-3.jpg)

(http://www.irelandupclose.com/customer/LL/Base-FM-4.jpg)   (http://www.irelandupclose.com/customer/LL/Base-FM-6.jpg)

What do you think, Bart and Jack (and anyone else who understands deconvolution :))?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 30, 2014, 05:46:46 am
Thanks Jack - very interesting and a bit scary!

Robert,

The approach that Jack takes (in the frequency domain) is pretty close to what I do (in the spatial domain). We both assume (based on both theory and empirical evidence) that the cascade of blur sources will usually result in a Gaussian type of PSF.

Jack takes a medium response (MTF50) as the pivot point on the actual MTF curve and, from the corresponding MTF (at that point) of a pure Gaussian blur function, calculates the required sigma. In principle that's fine, although one might also try to find a sigma that minimizes the absolute difference between the actual MTF and that of the pure Gaussian over a wider range. Although it's a reasonable single-point optimization, maybe MTF50 is not the best pivot point; maybe e.g. MTF55 or MTF45 would give an overall better match, who knows.

My approach is also trying to fit a Gaussian (Edge-Spread function) to the actual data, but does so on two points (10% and 90% rise) on the edge profile in the spatial domain. That may result in a slightly different optimization, e.g. in case of veiling glare which raises the dark tones more than the light tones, also on the slanted edge transition profile. My webtool attempts to minimize the absolute difference between the entire edge response and the Gaussian model. It therefore attempts to make a better overall edge profile fit, which usually is most difficult for the dark edge, due to veiling glare which distorts the Gaussian blur profile. That also gives an indication of how much of a role the veiling glare plays in the total image quality, and how it complicates a successful resolution restoration because it reduces the lower frequencies of the MTF response. BTW, Topaz Detail can be used to adjust some of that with the large detail control.
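Conceptually the fit looks something like this rough sketch (not my webtool's actual code; the synthetic edge below is just a stand-in for measured, Imatest-style edge data):

Code:
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def gaussian_esf(x, x0, sigma, lo, hi):
    # edge-spread function of a Gaussian PSF: a scaled and offset error function
    return lo + (hi - lo) * 0.5 * (1.0 + erf((x - x0) / (sigma * np.sqrt(2.0))))

# Synthetic edge samples standing in for a measured edge profile.
x = np.arange(40, dtype=float)
rng = np.random.default_rng(0)
esf = gaussian_esf(x, 20.0, 1.1, 0.30, 0.80) + rng.normal(0, 0.005, x.size)

p0 = (20.0, 1.0, esf.min(), esf.max())             # rough initial guess
(x0, sigma, lo, hi), _ = curve_fit(gaussian_esf, x, esf, p0=p0)
print(round(sigma, 2))                             # ~1.1 px: the deconvolution radius estimate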

Quote
 I thought I would check out what happens using Bart's deconvolution, based on the correct radius and then increasing it progressively, and this is what happens:

(http://www.irelandupclose.com/customer/LL/Base-1p06.jpg)  (http://www.irelandupclose.com/customer/LL/Base-4.jpg)

The left-hand image has the correct radius of 1.06; the one on the right has a radius of 4.  As you can see, all that happens is that there is a significant overshoot on the MTF at 4 (this overshoot increases progressively from a radius of about 1.4).

The MTF remains roughly Gaussian unlike the one in your article … and there is no sudden transition around the Nyquist frequency or shoot off to infinity as the radius increases.  Are these effects due to division by zero(ish) in the frequency domain … or to something else?

Jack's model is purely mathematical, and as such allows one to predict the effects of full restoration in the frequency domain. However, anything that happens above the Nyquist frequency (0.5 cycles/pixel) folds back (mirrors) into the below-Nyquist range and manifests itself as aliasing in the spatial domain (so you won't see it as an amplification above Nyquist in the actually sharpened version, but as a boost below Nyquist).

Also, since the actual signal's MTF near the Nyquist frequency is very low, there is little detail (with a low S/N ratio) left to reconstruct, so there will be issues with noise amplification. MTF curves need to be interpreted, because actual images are not the same as simplified mathematical models (simple numbers do not tell the whole story; they just show a certain aspect of it, like the spatial frequency response of a system in an MTF, and a separation of aliasing due to sub-pixel phase effects of fine detail).

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 30, 2014, 06:47:47 am

Jack's model is purely mathematical, and as such allows one to predict the effects of full restoration in the frequency domain. However, anything that happens above the Nyquist frequency (0.5 cycles/pixel) folds back (mirrors) into the below-Nyquist range and manifests itself as aliasing in the spatial domain (so you won't see it as an amplification above Nyquist in the actually sharpened version, but as a boost below Nyquist).

Also, since the actual signal's MTF near the Nyquist frequency is very low, there is little detail (with a low S/N ratio) left to reconstruct, so there will be issues with noise amplification. MTF curves need to be interpreted, because actual images are not the same as simplified mathematical models (simple numbers do not tell the whole story; they just show a certain aspect of it, like the spatial frequency response of a system in an MTF, and a separation of aliasing due to sub-pixel phase effects of fine detail).


Yes ... practice and theory are not necessarily a perfect fit!  Also, the prints I'm using are certainly not the sharpest, so no matter how well the algorithm reconstructs the image, the edges will never be sharp.  The contrast on the Imatest chart is also quite low (Lab 30% and 80%), which certainly impacts on the results.

What seems to work very well, giving no artifacts at all, is to use the base radius and then a smaller radius (I used half).  Here are the results using FM (2/100%, 1/100%) and IJ (1.2/0.6):

(http://www.irelandupclose.com/customer/LL/Base-FM-2-100-1-100.jpg)   (http://www.irelandupclose.com/customer/LL/Base-IJ-1p2S1p25-0p5.jpg)

And here is the image with the contrast changed (to Lab 10% and 90%) with the same FM settings:

(http://www.irelandupclose.com/customer/LL/Base--Contrast-FM-2-100-1-50.jpg)  

(http://www.irelandupclose.com/customer/LL/Image-Contrast10-90-FM2-100-1-50.jpg)

As you can see, the contrast adjustment makes a big difference.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Mike Sellers on August 30, 2014, 09:53:34 am
May I ask a question about a sharpening workflow for my Tango drum scanner? Should I leave sharpening turned on in the Tango (in which case, would there be any need for a capture sharpening stage), or turn it off in the Tango software and use a capture sharpening stage instead?
Mike
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on August 30, 2014, 11:08:51 am
Jack takes a medium response (MTF50) as the pivot point on the actual MTF curve and, from the corresponding MTF (at that point) of a pure Gaussian blur function, calculates the required sigma. In principle that's fine, although one might also try to find a sigma that minimizes the absolute difference between the actual MTF and that of the pure Gaussian over a wider range. Although it's a reasonable single-point optimization, maybe MTF50 is not the best pivot point; maybe e.g. MTF55 or MTF45 would give an overall better match, who knows.

Correct, Bart and Robert.  It depends how symmetrical the two curves are about MTF50.  MTF50 seems to be fairly close to the mark; the figure is often available, and the formula is easier to use with that one parameter than doing a full curve fit.  But it's always a compromise.

Jack's model is purely mathematical, and as such allows one to predict the effects of full restoration in the frequency domain. However, anything that happens above the Nyquist frequency (0.5 cycles/pixel) folds back (mirrors) into the below-Nyquist range and manifests itself as aliasing in the spatial domain (so you won't see it as an amplification above Nyquist in the actually sharpened version, but as a boost below Nyquist).

Also, since the actual signal's MTF near the Nyquist frequency is very low, there is little detail (with a low S/N ratio) left to reconstruct, so there will be issues with noise amplification. MTF curves need to be interpreted, because actual images are not the same as simplified mathematical models (simple numbers do not tell the whole story; they just show a certain aspect of it, like the spatial frequency response of a system in an MTF, and a separation of aliasing due to sub-pixel phase effects of fine detail).

Cheers,
Bart

Correct again. I was interested in an ideal implementation, to isolate certain parameters and understand the effect of changes in the variables involved.  Actual deconvolution algorithms have all sorts of additional knobs to control quite non-ideal real-life data and the shape of the resulting MTF: knobs related to noise and sophisticated low-pass filters, which I did not show (I mention them in the next post (http://www.strollswithmydog.com/deconvolution-radius-changes-with-aperture/)).  Those do their job in FM and other plug-ins, which is why the resulting MTF curves are better behaved than my ideal examples.

However, imo the application of those knobs comes too early in the process, especially when the MTF curve is poorly behaved.  There is no point in boosting frequencies just to cut them back later with a low pass: noise is increased and detail information is lost that way.  On the contrary, the objective of deconvolution should be to restore without boosting too much - at least up to Nyquist.

So why not give us a chance to first attempt to reverse out the dominant components of MTF based on their physical properties (f-number, AA, etc.) and only then resort to generic parameters based on Gaussian PSFs and low pass filters?  At least take out the Airy and AA, then we'll talk (I am talking to you Nik, Topaz and FM).

Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 30, 2014, 11:17:05 am
May I ask a question about a sharpening workflow for my Tango drum scanner? Should I leave sharpening turned on in the Tango (in which case, would there be any need for a capture sharpening stage), or turn it off in the Tango software and use a capture sharpening stage instead?

Hi Mike,

Hard to say, but I think you should first scan with the aperture that gives the best balance between sharpness and graininess for the given image. It's possible that you will scan for an output size that may need to be downsampled for the final output, because you want to avoid undersampling the grain structure, as that will result in grain aliasing. Scanning at 6000-8000 PPI usually allows you to avoid grain aliasing.

That final output would require an analysis to see if there is room for deconvolution sharpening. If you already have FocusMagic then that would be simple enough to just give a try; otherwise, you could perhaps upload a crop of a sharp edge in the image for analysis (assuming a typical, well-focused segment of an image). Defocused images would always require capture sharpening, unless only small/downsampled output is produced.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 30, 2014, 05:04:01 pm

However, imo the application of those knobs comes too early in the process, especially when the MTF curve is poorly behaved.  There is no point in boosting frequencies just to cut them back later with a low pass: noise is increased and detail information is lost that way.  On the contrary, the objective of deconvolution should be to restore without boosting too much - at least up to Nyquist.

So why not give us a chance to first attempt to reverse out the dominant components of MTF based on their physical properties (f-number, AA, etc.) and only then resort to generic parameters based on Gaussian PSFs and low pass filters?  At least take out the Airy and AA, then we'll talk (I am talking to you Nik, Topaz and FM).

Jack

My approach is also trying to fit a Gaussian (Edge-Spread function) to the actual data, but does so on two points (10% and 90% rise) on the edge profile in the spatial domain. That may result in a slightly different optimization, e.g. in case of veiling glare which raises the dark tones more than the light tones, also on the slanted edge transition profile. My webtool attempts to minimize the absolute difference between the entire edge response and the Gaussian model. It therefore attempts to make a better overall edge profile fit, which usually is most difficult for the dark edge, due to veiling glare which distorts the Gaussian blur profile. That also gives an indication of how much of a role the veiling glare plays in the total image quality, and how it complicates a successful resolution restoration because it reduces the lower frequencies of the MTF response. BTW, Topaz Detail can be used to adjust some of that with the large detail control.

Hi Jack & Bart,

Given an actual MTF, could you produce a deconvolution kernel to properly restore detail, to the extent possible?  As opposed to assuming a Gaussian model, that is. Say this one here:

(http://www.irelandupclose.com/customer/LL/BaseMTF.jpg)

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on August 31, 2014, 04:02:42 am
Hi Jack & Bart,

Given an actual MTF, could you produce a deconvolution kernel to properly restore detail, to the extent possible?  As opposed to assuming a Gaussian model, that is. Say this one here:

Hi Robert,

I could give you a better answer if I could see the raw file that generated that output.

But for a generic answer, assuming we are talking about Capture Sharpening in the center of the FOV - that is attempting to restore spatial resolution lost by blurring by the HARDWARE during the capture process - if one wants to get camera/lens setup specific PSFs for deconvolution one should imo start by reversing out the blurring introduced by each easily modeled component of the hardware.

The easy ones are diffraction and AA.  So one could deconvolve with an Airy disk of the appropriate f-number and (typically) the PSF of a 4-dot beam splitter of the appropriate strength.  Next comes lens blur, which in the center of the image can often be modeled as a combination of a pillbox and a Gaussian.  Then one has all sorts of aberrations, not to mention blur introduced by demosaicing, sensor/subject motion etc., which are very hard to model.
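To make the 'easy' components concrete, here is a rough Python sketch; the pixel pitch, wavelength and the one-pixel AA split below are assumptions of mine, not measured values:

Code:
import numpy as np
from scipy.signal import convolve2d
from scipy.special import j1

pitch_um = 6.4        # assumed sensor pixel pitch, in microns
wavelength_um = 0.55  # green light
f_number = 8.0

# Airy-disk PSF sampled on the pixel grid (first zero at 1.22 * lambda * N).
half = 8
yy, xx = np.mgrid[-half:half + 1, -half:half + 1] * pitch_um
r = np.hypot(xx, yy)
v = np.pi * r / (wavelength_um * f_number)
airy = np.where(v == 0, 1.0, (2.0 * j1(v) / np.where(v == 0, 1.0, v)) ** 2)
airy /= airy.sum()

# Very crude 4-dot beam-splitter PSF: four equal dots, one pixel apart.
aa = np.zeros_like(airy)
aa[half:half + 2, half:half + 2] = 0.25

# Cascaded blur of just these two "easy" hardware components.
psf = convolve2d(airy, aa, mode='same')
psf /= psf.sum()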

But even for the easy ones, deconvolving them out is not easy :)  Hence the idea instead to assume that all the PSFs add up to a Gaussian and try to deconvolve using that.  The fact is though, the PSFs of the camera/lens as a system do not always add up to one that looks like a Gaussian - as you probably read here (http://www.strollswithmydog.com/deconvolution-radius-changes-with-aperture/).  Therefore we mess with our image's spatial resolution in ways it was never meant to be messed with and we get noise and artifacts that we need to mask out.  What we often do not realize is that we have compromised spatial resolution information elsewhere as well - but if it looks ok...

If you can share the raw file that generated the graph above I will take a stab at breaking down its 'easy' MTF components.

Jack

PS BTW to make modeling effective I work entirely on the objective raw data (blurring introduced by lens/sensor only) to isolate it from subjective components that would otherwise introduce too many additional variables: no demosaicing, no rendering, no contrast, no sharpening.  More or less in the center of the FOV.   Capture obtained by using good technique, so no shake.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on August 31, 2014, 04:45:47 am
Hi Jack & Bart,

Given an actual MTF, could you produce a deconvolution kernel to properly restore detail, to the extent possible?  As opposed to assuming a Gaussian model, that is.

Hi Robert,

I'm not a mathematician, so I'm not 100% sure, but I don't think that is directly possible from an arbitrary MTF. An MTF has already lost some information required to re-build the original data. It's a bit like trying to reconstruct a single line of an image from its histogram. That's not a perfect analogy either, but you get the idea. The MTF only tells us with which contrast certain spatial frequencies will be recorded, but it e.g. no longer has information about its phase (position).

That's why it helps to reverse engineer the PSF, i.e. compare the image MTF of a known feature (e.g. edge) to the model of known shapes, such as e.g. a Gaussian, and thus derive the PSF indirectly. This works pretty well for many images, until diffraction/defocus/motion  becomes such a dominating component in the cascaded blur contributions that the combined blur becomes a bit less Gaussian looking. In the cascade it will still be somewhat Gaussian (except for complex motion), so one can also attempt to model a weighted sum of Gaussians, or a convolution of a Gaussian with a diffraction or defocus blur PSF.

So we can construct a model of the contributing PSFs, but it will still be very difficult to do absolutely accurate, and small differences in the frequency domain can have huge effects in the spatial domain.

I feel somewhat comforted by the remarks of Dr. Eric Fossum (the inventor of the CMOS image sensor) when he mentions (http://www.dpreview.com/forums/post/54295244) that the design of things like microlenses and their effect on the image is too complicated to predict accurately, and that one usually resorts to trial and error rather than attempting to model it. That of course won't stop us from trying ..., as long as we don't expect perfection, because that would probably never happen.

What we can do is model the main contributors, and see if eliminating their contribution helps.
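As a rough illustration (assumptions mine, not a real camera model): cascade a Gaussian with a defocus pillbox by convolution and compare the result with a single Gaussian of equal variance; the more the defocus dominates, the less Gaussian the cascade looks.

Code:
import numpy as np
from scipy.signal import fftconvolve

x = np.arange(-15, 16, dtype=float)
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)

gauss = np.exp(-r**2 / (2 * 0.8**2))
gauss /= gauss.sum()                      # base blur, sigma 0.8 px
pillbox = (r <= 2.5).astype(float)
pillbox /= pillbox.sum()                  # mild defocus, radius 2.5 px

combined = fftconvolve(gauss, pillbox, mode='same')
combined /= combined.sum()

# Single Gaussian with the same per-axis variance as the cascaded PSF.
var = (combined * r**2).sum() / 2.0
equiv = np.exp(-r**2 / (2 * var))
equiv /= equiv.sum()

print(np.abs(combined - equiv).sum())     # L1 difference: how (non-)Gaussian the cascade is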

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 31, 2014, 06:47:02 am
Hi Robert,

I could give you a better answer if I could see the raw file that generated that output.

Hi Jack,

You can get the raw file (and also the Imatest SFR and Edge plot) here: http://www.irelandupclose.com/customer/LL/Base.zip.  As we're getting more into areas of interest rather than strictly practical (at this point) application, I would be very interested in what your procedure is to map the different components of the blurring.  I'm an engineer originally so I do have some maths ... but it's very rusty at this stage, so if you do go into the maths I would appreciate an explanation :).  BTW ... I hope this is of interest to you too and not just a nuisance - if it's just a nuisance, please don't waste your time on it.  If you have any software algorithms ... those I can normally follow without much help, as I've spent most of my life in software development.

Quote
But for a generic answer, assuming we are talking about Capture Sharpening in the center of the FOV - that is attempting to restore spatial resolution lost by blurring by the HARDWARE during the capture process - if one wants to get camera/lens setup specific PSFs for deconvolution one should imo start by reversing out the blurring introduced by each easily modeled component of the hardware.

Yes, well that's really what caught my attention: if we could reverse the damage caused by the AA filter, sensor, A/D, firmware processing ... first, that would seem to be a really good first step.  The thing is, how do you separate this from the combination of this plus the lens?  Would it help to have two images taken in identical conditions with two prime lenses at the same focal length, aperture and shutter speed, for example?  What about the light source?  For the test image I used a Solux 4700K bulb, which has a pretty flat spectrum, sloping up towards the lower frequencies.

Quote
PS BTW to make modeling effective I work entirely on the objective raw data (blurring introduced by lens/sensor only) to isolate it from subjective components that would otherwise introduce too many additional variables: no demosaicing, no rendering, no contrast, no sharpening.  More or less in the center of the FOV.   Capture obtained by using good technique, so no shake.

The capture was reasonably well taken - however I did not use mirror lock-up and the exposure was quite long (1/5th second). ISO 100, good tripod, remote capture, so no camera shake apart from the mirror.  The test chart is quite small and printed on Epson Enhanced Matte using an HPZ3100 ... so not the sharpest print, but as the shot was taken from about 1.75m away any softness in the print is probably not a factor.  However, if you would like a better shot, I can redo it with a prime lens with mirror lock-up and increase the light to shorten the exposure.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on August 31, 2014, 07:27:25 am
I'm not a mathematician, so I'm not 100% sure, but I don't think that is directly possible from an arbitrary MTF. An MTF has already lost some information required to re-build the original data. It's a bit like trying to reconstruct a single line of an image from its histogram. That's not a perfect analogy either, but you get the idea. The MTF only tells us with which contrast certain spatial frequencies will be recorded, but, for example, it no longer has information about their phase (position).

Well, Bart, for someone who's not a mathematician you're doing pretty well!

What I was wondering ... and my maths certainly isn't up to this sort of thing ... is whether it is possible to capture an image of certain known shapes ... say a horizontal edge, a vertical edge, a circle of known size ... and from these model the distortion with reasonable accuracy (for the specific conditions under which the image was captured).  If we could do this for a specific setup with all the parameters as fixed as possible, including the light source, it would be a fantastic achievement (to my mind at least).  If we then introduced one additional thing - slight defocusing of the lens, for example - and were able to model that, and produce a deconvolution filter that would restore the image to the focused state ... well, that would be quite a step along the way.

What really impressed me was the blur/deblur macro example in ImageJ: the deconvolution completely reverses the blurring of the image. Of course the blurring function is fully known in this case: but what it illustrates very graphically is that for a really effective deconvolution the blurring function needs to be as fully known as possible.  I would have thought that with modern techniques and software that it should be possible to photograph an image (with whatever complex shapes are required) and from this compute a very accurate blur function for that very particular setup.  If this was possible, then it would also be possible to take many captures at different focal lengths and apertures and so obtain a database describing the lens/camera/demosaicing.  
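
As a toy illustration of that "fully known blur" point (a sketch of the principle only, not the ImageJ macro itself; the random image and the sigma are placeholders): blur an image with a known Gaussian and invert it in the frequency domain.  With no noise the inversion is essentially exact; with real noise you need a regularisation term (Wiener-style), which is exactly where the practical difficulty starts.

Code:
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((256, 256))                 # stand-in for a real image
sigma = 1.0                                  # known Gaussian blur, in pixels

# Build the Gaussian OTF/MTF analytically on the FFT frequency grid.
fy = np.fft.fftfreq(img.shape[0])[:, None]
fx = np.fft.fftfreq(img.shape[1])[None, :]
H = np.exp(-2 * (np.pi * sigma)**2 * (fx**2 + fy**2))

# Blur = multiplication in the frequency domain (circular convolution).
blurred = np.fft.ifft2(np.fft.fft2(img) * H).real

# Deconvolve with a plain inverse filter: fine when the blur is exactly
# known and there is no noise.
restored = np.fft.ifft2(np.fft.fft2(blurred) / H).real
print("max restoration error:", np.abs(restored - img).max())   # essentially zero

# With noise the plain division blows up; a Wiener-style inverse is the usual fix.
k = 1e-3
wiener = np.fft.ifft2(np.fft.fft2(blurred) * H / (H**2 + k)).real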

Other factors like camera shake and lens out of focus would seem secondary issues to me as these are to a large extent within the photographer's control.

I'm certainly speaking from ignorance, but it seems to me that if we know the shape of the blur accurately, then we can deblur either in the spatial or the frequency domain, and we should be able to do it with a high degree of success.  Of course I know that there are random issues like noise ... but in the controlled test captures it should be possible to analyse the image and remove the noise, I would have thought.  Then in the real-world deblurring perhaps noise removal should be the first step before deconvolution?  That's a question in itself :).

Quote
That's why it helps to reverse engineer the PSF, i.e. compare the image MTF of a known feature (e.g. edge) to the model of known shapes, such as e.g. a Gaussian, and thus derive the PSF indirectly. This works pretty well for many images, until diffraction/defocus/motion  becomes such a dominating component in the cascaded blur contributions that the combined blur becomes a bit less Gaussian looking. In the cascade it will still be somewhat Gaussian (except for complex motion), so one can also attempt to model a weighted sum of Gaussians, or a convolution of a Gaussian with a diffraction or defocus blur PSF.

So we can construct a model of the contributing PSFs, but it will still be very difficult to do absolutely accurate, and small differences in the frequency domain can have huge effects in the spatial domain.

I feel somewhat comforted by the remarks of Dr. Eric Fossum (the inventor of the CMOS image sensor) when he mentions (http://www.dpreview.com/forums/post/54295244) that the design of things like microlenses and their effect on the image is too complicated to predict accurately, that one usually resorts to trial and error rather than attempt to model it. That of course won't stop us from trying ..., as long as we don't expect perfection, because that would probably never happen.

My own feeling (based on ignorance, needless to say) is that there are just too many variables ... and if just one of them (the microlenses) is considered to have too complex an effect on the image to model successfully, well then modeling is not the way to go.  That leaves measurement and educated guesswork, which is where MTFs and such come into play.  I just wonder to what extent the guesswork can be removed, and the shape of the blur function modeled from an actual photograph.  I understand that the MTF has limitations ... but at least it's a start. We can take both the horizontal and vertical MTFs.  What else could we photograph to give us a more accurate idea of the shape of the blur?

Quote
What we can do is model the main contributors, and see if eliminating their contribution helps.

Well, I'll be very interested to see what Jack comes up with.  It may be that this is not a one-step problem, but that Jack is right and that we should fix a) and then look at b).

Cheers

Robert

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on August 31, 2014, 12:35:39 pm
Here it is Robert

(http://i.imgur.com/e6imP3Q.png)

However I must say upfront that I am not comfortable with it, because the 'lens' blur is far larger than expected and overwhelms the other components: I couldn't even see the AA zero (so I guessed a generic 0.35).  You should be getting about half that lens blur diameter and an MTF50 of close to 2000 lw/ph with your setup.

Could you re-capture a larger chart at 40mm f/5.6 using contrast detect (live view) focusing, mirror up etc.?  Alternatively try 80mm at f/7-8.  All I need is the raw file of one sharp black square/edge about 200-400 pixels on a side on a white background.

On the other issues, I agree with Bart: look at what an overwhelming 'lens blur' you got even with a tripod and relatively good technique.  Shutter shock clobbers many cameras even on a granite tripod.  In the real world outside the labs we are not even close to being able to detect the effect of microlenses or other micro components.

At this stage of the game I think we can only hope to take it to the next stage: from generic Gaussian PSFs to maybe a couple of the more easily modeled ones.  From symmetrical to asymmetrical.  Nobody is doing it because it's hard enough as it is, what with the noise and low energy in the higher frequencies creating all sorts of undesired effects.  I am not even sure it is worthwhile to split AA and diffraction out.  Intuitively I think so, especially as pixels become smaller and AAless, approaching Airy disk size even at larger apertures.  Time will tell.  In the meantime the exercise is fun and informative for inquisitive minds :)

Jack

PS With regards to my methodology keep an eye on my (new as of this week) site (http://www.strollswithmydog.com/).  I'll go into it in the near future.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 01, 2014, 06:06:39 am
Thank you Jack.

I'm playing around a bit with the test print.  On the matte paper I'm using, the best I can get with a 100mm F2.8 Macro lens (which should be very sharp) is a 2.26-pixel 10-90% rise, which isn't great. Increasing the contrast using a curve brings this down to 1.83 pixels, which is a bit better (and is similar to Bart's shot with the same lens and camera).  I'll try with a print on glossy paper, but then there's the problem of reflection - but at least then I can get fairly good contrast.

For Imatest you need the print to have a contrast ratio of about 10:1 max (so Lab 9/90, say).  You say that you need black on white - but of course with the paper and ink limits that's not achievable.  Is it acceptable to you to apply a curve to the image to bring the contrast back to black/white?

As a side-effect, I'm finding out quite a bit about my lenses doing these tests (which I did a few years ago but have since forgotten the conclusions).

All the best,

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 01, 2014, 06:55:11 am
Hello again Jack,

I've tried the SFR with a glossy paper and the results are all over the place - due to reflections, I expect, even though I was quite careful.  The problem may be that the print is glued onto a board that has slight bumps and hollows - I could try again on perspex.

In the meantime, here is a somewhat better image, back to the matte paper: http://www.irelandupclose.com/customer/LL/1Ds3-100mmF2p8.zip

I used a 100mm F2.8 Macro lens.  I focused manually with MLU, and out-of-the-camera the 10-90% top edge is 2.57 pixels which isn't exactly brilliant. However the image needs contrast applied (the original has a Lab black of 10 and light gray of 90).  With a curve applied to restore the contrast, the 10-90% rise is 1.83 pixels, which is OK, I think.  With a light sharpening of 20 in ACR this reduces to 1.38 pixels.  With FocusMagic 2/100 the figure drops to 0.86 pixels with an MTF50 of 0.5 cycles per pixel.

Actually, this image may be better for you as the edge is at the center of the lens: http://www.irelandupclose.com/customer/LL/Matte-100mmF6p3.zip

I could try a different lens, but this is about as good as I'll get with this particular lens I think.  As for the paper ... advice would be welcome!

Cheers

Robert

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on September 01, 2014, 11:51:41 am
For Imatest you need the print to have a contrast ratio of about 10:1 max (so Lab 9/90, say).  You say that you need black on white - but of course with the paper and ink limits that's not achievable.  Is it acceptable to you to apply a curve to the image to bring the contrast back to black/white?

Hi Robert,

I forgot to mention one little detail (I often do:-): in order to give reliable data the edge needs be slightly slanted (as per the name of the MTF generating method), ideally between 5 and 9 degrees, and near the center of the FOV.  I only downloaded the second image you shared (F6p3) because I am not at home at the moment and my data plan has strict limits (the other one was 210MB).  The slant is only 1 degree in F6p3 and I am getting values again too low for your lens/camera combo: WB Raw MTF50 = 1580 lw/ph, when near the center of the FOV it should be up around 2000.  Could be the one degree.  Or it could be that the lens is not focused properly.  The Blue channel is giving the highest MTF50 readings while Red is way down - so it could be that you are chasing your lens' longitudinal aberrations down, not focusing right on the sensing plane :)

To give you an idea, the ISO 100 5DIII+85mm/1.8 @f/7.1 raw image here  (http://www.dpreview.com/reviews/image-comparison?utm_campaign=internal-link&utm_source=mainmenu&utm_medium=text&ref=mainmenu)is yielding MTF50 values of over 2100 lw/ph.  I consistently get well over that with my D610 from slanted edges printed by a laser printer on normal copy paper and lit by diffuse lighting.  For this kind of forensic exercise one must use good technique (15x focal length away from target, solid tripod, mirror up, delayed shutter release) and either use contrast detect focusing, or focus peak manually (that is take a number of shots around what is suspected to be the appropriate focus point by varying monotonically and very slowly the focus ring manually in between shots; then view the series at x00% and choose the one that appears the sharpest).  Another potential culprit is the target image source: if it is not a vector the printing program/process could be introducing artifacts.

As far as the contrast of the edge is concerned I work directly off the raw data so it is what it is.  MTF Mapper seems not to have a problem with what you shared, albeit using a bit of a lower threshold than its default.  That was the case with yesterday's image as well.

Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on September 01, 2014, 02:01:25 pm
The capture was reasonably well taken - however I did not use mirror lock-up and the exposure was quite long (1/5th second). ISO 100, good tripod, remote capture, so no camera shake apart from the mirror.  The test chart is quite small and printed on Epson Enhanced Matte using an HPZ3100 ... so not the sharpest print, but as the shot was taken from about 1.75m away any softness in the print is probably not a factor.  However, if you would like a better shot, I can redo it with a prime lens with mirror lock-up and increase the light to shorten the exposure.

Hi Robert,

I usually recommend at least 25x the focal length, so the shooting distance is a bit too short for my taste (or the focal length too long for that distance). This relatively short distance makes the target resolution more important. Also make sure you print the target at 600 PPI on your HP printer. That could bring your 10-90% rise distance down to better values. Some matte papers are relatively sharp but others are a bit blurry, so that may also play a role.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 01, 2014, 03:24:45 pm
Hi Robert,

I forgot to mention one little detail (I often do:-): in order to give reliable data the edge needs be slightly slanted (as per the name of the MTF generating method), ideally between 5 and 9 degrees, and near the center of the FOV.  I only downloaded the second image you shared (F6p3) because I am not at home at the moment and my data plan has strict limits (the other one was 210MB).  The slant is only 1 degree in F6p3 and I am getting values again too low for your lens/camera combo: WB Raw MTF50 = 1580 lw/ph, when near the center of the FOV it should be up around 2000.  Could be the one degree.  Or it could be that the lens is not focused properly.  The Blue channel is giving the highest MTF50 readings while Red is way down - so it could be that you are chasing your lens' longitudinal aberrations down, not focusing right on the sensing plane :)

To give you an idea, the ISO 100 5DIII+85mm/1.8 @f/7.1 raw image here  (http://www.dpreview.com/reviews/image-comparison?utm_campaign=internal-link&utm_source=mainmenu&utm_medium=text&ref=mainmenu)is yielding MTF50 values of over 2100 lw/ph.  I consistently get well over that with my D610 from slanted edges printed by a laser printer on normal copy paper and lit by diffuse lighting.  For this kind of forensic exercise one must use good technique (15x focal length away from target, solid tripod, mirror up, delayed shutter release) and either use contrast detect focusing, or focus peak manually (that is take a number of shots around what is suspected to be the appropriate focus point by varying monotonically and very slowly the focus ring manually in between shots; then view the series at x00% and choose the one that appears the sharpest).  Another potential culprit is the target image source: if it is not a vector the printing program/process could be introducing artifacts.

As far as the contrast of the edge is concerned I work directly off the raw data so it is what it is.  MTF Mapper seems not to have a problem with what you shared, albeit using a bit of a lower threshold than its default.  That was the case with yesterday's image as well.

Jack

Hi Jack,

I'm not doing too well so far.  I've tried printing with a laser printer, using Microsoft Expression Design for a vector square, and although there is an improvement (best so far is a 10-90% rise of 1.81 pixels), it's not too good.  I've tried several lenses (24-105F4L, 100F2.8 Macro, 50F2.5 Macro and 70-200F4L) and the results are much of a muchness.  I suspect that my prints, lighting and technique are just not good enough.  I'll have to do a bit more investigation, but it will be a few days as I need to catch up on some work.

Cheers,

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 01, 2014, 03:30:02 pm
Hi Robert,

I usually recommend at least 25x the focal length, therefore the shooting distance is a bit too short for my taste (or the focal length too long for that distance). This relatively short distance will make the target resolution more important. Also make sure you print it at 600 PPI on your HP printer. That potentially will bring your 10-90% rise distance down to better values. Some matte papers are relatively sharp but others are a bit blurry, so that may also play a role.

Cheers,
Bart

Thanks Bart ... I think you got around 1.8 pixels for the 10-90% rise with a 100mm macro, is that right?  Is that the sort of figure I can expect or should it be significantly better than that?

I'm certainly much too close (based on your 25x) and it may well be that the print edge softness is what I'm photographing!

Also the lighting and print contrast seem to be quite critical and I doubt either is optimal.  This sort of thing is designed to do one's head in  :'(

Cheers,

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on September 01, 2014, 04:28:46 pm
Thanks Bart ... I think you got around 1.8 pixels for the 10-90% rise with a 100mm macro, is that right?  Is that the sort of figure I can expect or should it be significantly better than that?

That 1.8 pixels rise is a common value for very well focused, high quality lenses. It's equal to a 0.7 sigma blur, which is about as good as it can get.
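
For reference (my arithmetic, not Bart's, and assuming a Gaussian blur): the edge profile of a Gaussian is its cumulative distribution, so the 10-90% rise distance is 2 x 1.2816 x sigma, roughly 2.56 sigma - which is how a 1.8-pixel rise maps to a sigma of about 0.7 px:

Code:
from scipy.stats import norm

rise = 1.8                                       # measured 10-90% edge rise, pixels
sigma = rise / (norm.ppf(0.9) - norm.ppf(0.1))   # divide by ~2.5631
print(f"equivalent Gaussian sigma: {sigma:.2f} px")   # ~0.70 px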

Quote
Also the lighting and print contrast seem to be quite critical and I doubt either are optimal.  This sort of thing is designed to do one's head in  :'(

The slanted edges on my 'star' target go from paper white to pretty dark, to avoid dithered edges (try to print other targets for a normal range with shades of gray). One can get even straighter edges by printing them horizontal/vertical, and then rotating the target some 5-6 degrees when shooting them. The ISO recommendations are for a lower contrast edge, but that is to reduce veiling glare and (in camera JPEG) sharpening effects. With a properly exposed edge the medium gray should produce an approx. R/G/B 120/120/120, and paper white of 230/230/230, after Raw conversion. It also helps to get the output gamma calibrated for Imatest instead of just assuming 0.5, or use a linear gamma Raw input.

Do not use contrast adjustment to boost the sharpness, just shoot from a longer distance.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 01, 2014, 06:12:15 pm
That 1.8 pixels rise is a common value for very well focused, high quality lenses. It's equal to a 0.7 sigma blur which is about as good as it can get.

The slanted edges on my 'star' target go from paper white to pretty dark, to avoid dithered edges (try to print other targets for a normal range with shades of gray). One can get even straighter edges by printing them horizontal/vertical, and then rotating the target some 5-6 degrees when shooting them. The ISO recommendations are for a lower contrast edge, but that is to reduce veiling glare and (in camera JPEG) sharpening effects. With a properly exposed edge the medium gray should produce an approx. R/G/B 120/120/120, and paper white of 230/230/230, after Raw conversion. It also helps to get the output gamma calibrated for Imatest instead of just assuming 0.5, or use a linear gamma Raw input.

Do not use contrast adjustment to boost the sharpness, just shoot from a longer distance.

Cheers,
Bart

Hi Bart,

Just messed around a bit more and one thing that clearly makes quite a difference is the lighting.  For example, with the light in one direction I was getting vertical 2.02, horizontal 1.87; in the other direction the figures reversed completely; with light from both directions I got 2.06/2.06 on both horizontal and vertical (no other changes).  I remember Norman Koren telling me to be super-careful with the lighting.  

Photographing from a greater distance does improve things.  However, the focusing is so fiddly that I find it very difficult to get an optimum focus.

I need to try different papers because that also makes quite a difference.

It's interesting to see the effect of the different variables involved and also the sort of sharpening that we might want to apply.

For example, with a 10-90 edge rise of 2 pixels, applying Focus Magic 2/50 gives this:

(http://www.irelandupclose.com/customer/LL/50mmFM2-50.jpg)

and then applying a further FM of 1/25 gives this:

(http://www.irelandupclose.com/customer/LL/50mmFM2-50-1-25.jpg)

With the raw image like this:

(http://www.irelandupclose.com/customer/LL/50mm.jpg)

I doubt that I would get as good as this in the field, so I wonder, Jack, if you could explain why getting an optimally focused image is useful for your modelling ... because it's pretty tricky to achieve!

Cheers

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on September 02, 2014, 05:20:27 am
Just messed around a bit more and one thing that clearly makes quite a difference is the lighting.  For example, with the light in one direction I was getting vertical 2.02, horizontal 1.87; in the other direction the figures reversed completely; with light from both directions I got 2.06/2.06 on both horizontal and vertical (no other changes).  I remember Norman Koren telling me to be super-careful with the lighting. 

Hi Robert, the spoiler with lighting, if one is not careful, is a sharp gradient that makes the change in light intensity become part of the ESF.  I try to take mine indoors with bright but indirect sunlight in a neutrally colored room.  I don't think the color of the walls of the room is too important, because MTF Mapper can (and I do) look at one raw channel at a time.

Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on September 02, 2014, 06:16:42 am
I doubt that I would get as good as this in the field, so I wonder, Jack, if you could explain why getting an optimally focused image is useful for your modelling ... because it's pretty tricky to achieve!

You are right, Robert. Roger Cicala of lensrentals.com says that a difference of about 10% in MTF50 is barely noticeable, and I tend to agree.  The reason for going the extra distance when attempting to determine the parameters for Capture Sharpening (recall: capture sharpening = restore sharpness lost during the capture process = camera/lens hardware dependent) is that otherwise we cannot 'see' them, and unless someone at Canon obliges with the figures we have to guesstimate.

For instance a key one is the strength of the AA filter. I assume that the 1DsIII has an AA filter in a classic 4-dot beam splitting configuration like the Exmor sensored cameras I am more familiar with.  Since most such AAs cause a shift of about +/- 0.35 pixels, we should be able to see a zero around there in the relative MTF curve (in cycle/pixels it is 0.25/offset):

(http://i.imgur.com/8XCFB96.png)

So we know that the A7s AA appears to be about +/- 0.363 pixels in strength, and if we wanted to attempt to remove its effects through deconvolution we would have a good estimate as far as what the shape and size of its PSF are concerned.  However if the spatial resolution information is buried in a morass of lens induced blur we are not going to be able to find what we seek.  Heck it might be that the Canons do not have a 4-dot beam splitter, or that it is a lot less strong, in which case all bets are off (the slanted edge method becomes exponentially unreliable past Nyquist) :(

Jack

PS Since I suspected that my D610 (like the A7 and other Exmors of its generation) has AA action in one direction only, I figured that if I divided the MTF obtained from a vertical edge by the one obtained from a horizontal edge in the same capture, the result should be the missing element = the MTF of the AA filter. And lo and behold, just as theory predicted (ignore the stuff after the zero; there is too much noise and too little energy there for the division of two small numbers to be meaningful - it was quite a noisy image to start with):

(http://i.imgur.com/iDciom7.png)
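
To make the '0.25/offset' rule and the vertical/horizontal division concrete, here is a minimal sketch under the usual assumption that a 4-dot beam-splitter AA acts per axis as a two-point split of +/- d pixels, whose 1-D MTF is |cos(2*pi*d*f)|.  The numbers below are illustrative, not measurements.

Code:
import numpy as np

f = np.linspace(0.0, 1.0, 201)        # spatial frequency, cycles/pixel

def aa_mtf(f, d):
    """1-D MTF of a beam-splitter AA that displaces rays by +/- d pixels."""
    return np.abs(np.cos(2 * np.pi * d * f))

d = 0.363                              # estimated AA split in pixels (the A7 figure above)
print("first AA zero at", 0.25 / d, "cy/px")   # cos() = 0 at f = 1/(4d), ~0.69 here

# The division trick: if the AA acts along only one axis, then for the edge
# orientation that 'sees' the AA:
#   MTF_with_AA    = MTF_common * MTF_AA
# and for the perpendicular edge:
#   MTF_without_AA = MTF_common
# so dividing the two (away from the noisy zero) isolates the AA term.
mtf_common = np.exp(-2 * (np.pi * 0.7 * f)**2)   # placeholder for lens/pixel/etc.
mtf_with_aa = mtf_common * aa_mtf(f, d)
mtf_without_aa = mtf_common
aa_recovered = np.divide(mtf_with_aa, mtf_without_aa,
                         out=np.zeros_like(f), where=mtf_without_aa > 1e-6)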
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 02, 2014, 07:37:53 am
You are right, Robert. Roger Cicala of lensrentals.com says that a difference of about 10% in MTF50 is barely noticeable, and I tend to agree.  The reason for going the extra distance when attempting to determine the parameters for Capture Sharpening (recall: capture sharpening = restore sharpness lost during the capture process = camera/lens hardware dependent) is that otherwise we cannot 'see' them, and unless someone at Canon obliges with the figures we have to guesstimate.

For instance a key one is the strength of the AA filter. I assume that the 1DsIII has an AA filter in a classic 4-dot beam splitting configuration like the Exmor sensored cameras I am more familiar with.  Since most such AAs cause a shift of about +/- 0.35 pixels, we should be able to see a zero around there in the relative MTF curve (in cycle/pixels it is 0.25/offset):

(http://i.imgur.com/8XCFB96.png)

So we know that the A7s AA appears to be about +/- 0.363 pixels in strength, and if we wanted to attempt to remove its effects through deconvolution we would have a good estimate as far as what the shape and size of its PSF are concerned.  However if the spatial resolution information is buried in a morass of lens induced blur we are not going to be able to find what we seek.  Heck it might be that the Canons do not have a 4-dot beam splitter, or that it is a lot less strong, in which case all bets are off (the slanted edge method becomes exponentially unreliable past Nyquist) :(

Jack

PS Since I suspected that my D610 (like the A7 and other Exmors of its generation) has AA action in one direction only, I figured that if I divided the MTF obtained from a vertical edge by the one obtained from a horizontal edge in the same capture, the result should be the missing element = the MTF of the AA filter. And lo and behold, just as theory predicted (ignore the stuff after the zero; there is too much noise and too little energy there for the division of two small numbers to be meaningful - it was quite a noisy image to start with):

(http://i.imgur.com/iDciom7.png)

Hmmm ... very interesting, although I need to get my thinking cap on to make sense of it!  It really is a shame the camera manufacturers are so unforthcoming with information.

I take it that the MTFs for diffraction (at a fixed aperture), pixel aperture and the AA filter are all constant? Also that diffraction and pixel aperture MTFs can be quite accurately estimated?  That leaves the unknowns, which are the lens blur and AA filter.  So, if you take two shots, the only difference being a slight change in the lens blur ... could you not then work out the AA from that?  Notice that I say you, because I certainly could not!  And no doubt it's not possible to do or you would be doing it already.

I had a quick look at MTF Mapper and it seems very good.  If you could give me your command arguments I could use it to check my image before sending it to you.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on September 02, 2014, 11:53:43 am
I take it that the MTFs for diffraction (at a fixed aperture), pixel aperture and the AA filter are all constant? Also that diffraction and pixel aperture MTFs can be quite accurately estimated?

Yes, and the more you narrow the wavelength of the light the better, that's why I like to work with the green CFA raw channel only, which for some Nikon cameras has 1/2 power bandwidth of around 540nm +/-50ish.

That leaves the unknowns, which are the lens blur and AA filter.  So, if you take two shots, the only difference being a slight change in the lens blur ... could you not then work out the AA from that?  Notice that I say you, because I certainly could not!  And no doubt it's not possible to do or you would be doing it already.

Lens blur is the hardest of the simple components to model because it depends on so many variables (even if we concentrate only on the center of well corrected lenses: at least SA, CA and defocus): its form changes significantly and non-linearly with even small incremental variations.  So far I have concentrated on modeling well corrected prime lenses with small amounts of defocus in the center of the FOV.  By small I mean less than half a wavelength of optical path difference (Lord Rayleigh's criterion for in-focus was 1/4 lambda OPD).  It has the finickiest theory and it is the 'plug' in my overall model: diffraction, pixel aperture and AA are set according to their physical properties and camera settings.  The solver then varies OPD to get the best fit to measured data.  There is always a residual value because no lens is ever perfect.  I have never seen it at less than 0.215 lambda, which corresponds to a lens blur diameter of about 5.3um (on a 2.4um pitched RX100vIII).
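
For readers who want to see roughly what such a cascaded model looks like in code, here is a minimal sketch (my own simplification of the approach Jack describes, not his actual model or solver).  The system MTF is taken as the product of diffraction, pixel aperture, AA and a Gaussian 'lens residual' term, and a one-parameter least-squares fit stands in for the OPD solver; the pixel pitch, f-number and AA split are assumptions, and the 'measured' curve is faked.

Code:
import numpy as np
from scipy.optimize import minimize_scalar

pitch_um = 6.4        # assumed pixel pitch (um), roughly a 1Ds III
wavelength_um = 0.54  # green light
N = 5.6               # f-number
aa_split_px = 0.35    # assumed 4-dot AA displacement, pixels

f = np.linspace(1e-6, 0.5, 100)              # cycles/pixel, up to Nyquist

def mtf_diffraction(f):
    # Circular-aperture diffraction MTF, cutoff converted to cycles/pixel.
    fc = pitch_um / (wavelength_um * N)
    s = np.clip(f / fc, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))

def mtf_pixel(f):
    # 100% fill-factor square pixel aperture: sinc, with f in cycles/pixel.
    return np.abs(np.sinc(f))

def mtf_aa(f):
    return np.abs(np.cos(2 * np.pi * aa_split_px * f))

def mtf_lens(f, sigma):
    # Gaussian stand-in for the residual lens blur (defocus/aberrations).
    return np.exp(-2 * (np.pi * sigma * f)**2)

def mtf_system(f, sigma):
    return mtf_diffraction(f) * mtf_pixel(f) * mtf_aa(f) * mtf_lens(f, sigma)

# 'Solver' step: find the lens-blur sigma that best matches a measured curve
# (mtf_measured would come from MTF Mapper or Imatest; here it is faked).
mtf_measured = mtf_system(f, 0.6)
fit = minimize_scalar(lambda s: np.sum((mtf_system(f, s) - mtf_measured)**2),
                      bounds=(0.1, 3.0), method="bounded")
print(f"fitted lens-blur sigma: {fit.x:.2f} px")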

I had a quick look at MTF Mapper and it seems very good.  If you could give me your command arguments I could use it to check my image before sending it to you.

I don't have Imatest, so perhaps you can set it up to do the same thing - and trust me, it would be much easier.  MTF Mapper is excellent because it allows one to work directly on the green channel raw data, without introducing demosaicing blur into the mix.  The author, Frans van den Bergh, is a very smart and helpful guy whose blog (http://mtfmapper.blogspot.it/2012/06/diffraction-and-box-filters.html) got me going on this frequency domain trip. On the other hand it is an open source command line program which is not as user friendly as commercial products.  This is the way I use it; you may not want to once you realize what's involved :)

1) First create a TIFF of the raw data with dcraw -D -4 -T filename.cr2;
2) Open filename.tiff in a good editor and save a 400x200 pixel crop (horizontal edge; 200x400 for a vertical edge) of the central edge you'd like to analyze in a file called, say, h.tif, making sure the top-left-most pixel of h.tif corresponds to a Red pixel in the original raw data (use RawDigger (http://www.rawdigger.com) for that);
3) Run the command line "mtf_mapper h.tif g:\ -arbef --bayer green -t x", assuming that you are working in directory g:\ and x is the threshold (your last two images worked with x=0.5);
4) MTF Mapper produces a number of text files and Annotate.png: open mtf_sfr.txt in Excel using the data import function.  There should be four lines with 65 values each.  The first value of each line is the angle of the edge (ideally it should be somewhere between 5-10 degrees).  The remaining 64 values are the MTF curve in 1/64th cycles/pixel increments, starting with 0 cy/px which clearly has an MTF value of 1.  Choose the line that corresponds to the edge (see the Annotate.png file) and plot it.

Voila', that's the MTF curve of just the two green raw channels.  Alternatively send me the file (one at a time please) and I'll do it for you - I've got batch files for most of this but they reflect how I work, call other programs and they are not easy to explain or set up if starting from scratch.
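
If you would rather script step 4 than use Excel, a few lines of Python can do the parsing and plotting.  This is only a convenience sketch and assumes the mtf_sfr.txt layout described above (one row per detected edge: the edge angle followed by 64 MTF values at 1/64 cy/px steps); adjust the delimiter if your file is comma-separated.

Code:
import numpy as np
import matplotlib.pyplot as plt

rows = np.loadtxt("mtf_sfr.txt")             # add delimiter="," if needed
freq = np.arange(64) / 64.0                  # cycles/pixel: 0, 1/64, ... 63/64

for row in np.atleast_2d(rows):              # one row per detected edge
    angle, mtf = row[0], row[1:]             # first value is the edge angle
    plt.plot(freq, mtf, label=f"edge at {angle:.1f} deg")

plt.xlabel("spatial frequency (cycles/pixel)")
plt.ylabel("MTF")
plt.legend()
plt.show()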

Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 02, 2014, 02:56:37 pm
Yes, and the more you narrow the wavelength of the light the better, that's why I like to work with the green CFA raw channel only, which for some Nikon cameras has 1/2 power bandwidth of around 540nm +/-50ish.

Lens blur is the hardest of the simple components to model because it depends on so many variables (even if we concentrate only on the center of well corrected lenses: at least SA, CA and defocus): its form changes significantly and non-linearly with even small incremental variations.  So far I have concentrated on modeling well corrected prime lenses with small amounts of defocus in the center of the FOV.  By small I mean less than half a wavelength of optical path difference (Lord Rayleigh's criterion for in-focus was 1/4 lambda OPD).  It has the finickiest theory and it is the 'plug' in my overall model: diffraction, pixel aperture and AA are set according to their physical properties and camera settings.  The solver then varies OPD to get the best fit to measured data.  There is always a residual value because no lens is ever perfect.  I have never seen it at less than 0.215 lambda, which corresponds to a lens blur diameter of about 5.3um (on a 2.4um pitched RX100vIII).

Yikes :).  I don't know how one would go about focusing a lens to that accuracy. The Canon EOS utility does give remote control over the focusing, but not with mirror lock-up, which is a shame.  Also, the focusing step-size doesn't seem to be all that fine.  So not only is lens blur the hardest component to model ... it's also very hard to minimize in the capture!


Quote
I don't have Imatest so perhaps you can set it up to do the same thing, and trust me it would be much easier.  MTF Mapper is excellent because it allows one to work directly on the green channel raw data, without introducing demosaicing blur into the mix.  The author, Frans van den Bergh is a very smart and helpful guy whose blog  (http://mtfmapper.blogspot.it/2012/06/diffraction-and-box-filters.html)got me going on this frequency domain trip. On the other hand it is an open source command line program which is not as user friendly as commercial products.  This is the way I use it, you may not want to once you realize what's involved :)

1) First create a TIFF of the raw data with dcraw -D -4 -T filename.cr2;
2) Open filename.tiff in a good editor and save a 400x200 pixel crop (horizontal edge, 200x400 vertical) of the central edge you'd like to analyze in a file called, say, h.tif making sure the top left most pixel of h.tif corresponds to a Red pixel in the original raw data (use RawDigger  (http://www.rawdigger.com)for that)
3) run the command line "mtf_mapper h.tif g:\ -arbef --bayer green -t x", assuming that you are working in directory g:\ and x is the threshold (your last two images worked with x=0.5)
4) MTF Mapper produces a number of text files and Annotate.png: open mtf_sfr.txt in Excel using the data import function.  There should be four lines with 65 values each.  The first value of each line is the angle of the edge (ideally it should be somewhere between 5-10 degrees).  The remaining 64 values are the MTF curve in 1/64th cycles/pixel increments, starting with 0 cy/px which clearly has an MTF value of 1.  Choose the line that corresponds to the edge (see the Annotate.PNG file) and plot it.

Voila', that's the MTF curve of just the two green raw channels.  Alternatively send me the file (one at a time please) and I'll do it for you - I've got batch files for most of this but they reflect how I work, call other programs and they are not easy to explain or set up if starting from scratch.

Voila indeed :).  Or 'Just-like-that'  as Tommy Cooper would have said.

I can do all of that with Photoshop and Imatest fairly easily (split into R,G,B images, select the 200x400 edge, do the MTF) ... but what I can't do is to find out if the top leftmost pixel was a red pixel ... without RawDigger, that is, which I don't have.  Is that very necessary? (because it adds quite a bit of complication).  As the image has been demosaiced, I don't see what difference it would make what color the pixel was (using Imatest, that is), but perhaps it does?

Certainly doing the MTF on the green channel rather than the RGB image did improve the reading.

At this level, surely the light source would be pretty important?  Higher frequency better??

Cheers,

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on September 02, 2014, 03:33:31 pm
As the image has been demosaiced, I don't see what difference it would make what color the pixel was (using Imatest, that is), but perhaps it does?

Ah, but that's the point.  It isn't demosaiced.  The way I showed you it is just the two green raw channels straight off the sensor.

Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jim Kasson on September 02, 2014, 04:08:01 pm
At this level, surely the light source would be pretty important?  Higher frequency better??

Let me jump in here to give you some of the things I've found out in the last six or eight months of doing slanted edge target shooting.

Specular highlights are your enemy. One way to get rid of them is to use matte paper, but that makes it more difficult to achieve high spatial frequencies on the target itself.

Another way to reduce them is to have a diffuse light source. Soft boxes are good. Bouncing the light off a matte reflector is good.

Camera motion is your enemy. Fast shutter speeds help (sometimes -- faster isn't always better). Stiff tripods help. But the thing that helps the most is short duration electronic flash in a dark room. I use the Paul Buff Einsteins, which can produce a t.1 below 100 usec when set up right.

Along those lines, use trailing curtain synch to reduce shutter shock effect.

Mirror locked up? Absolutely.

Cable/electronic release or self-timer or shutter delay. For sure.

EFCS. If you got one, use it.

Cutting thin black plastic with a paper cutter can sometimes produce a clean edge. If you can find die-cut plastic, even better.

Good luck,

Jim
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 02, 2014, 06:15:24 pm
Ah, but that's the point.  It isn't demosaiced.  The way I showed you it is just the two green raw channels straight off the sensor.

Jack

I get it :)

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 02, 2014, 06:23:52 pm

Specular highlights are your enemy. One way to get rid of them is to use matte paper, but that makes it more difficult to achieve high spatial frequencies on the target itself.

I can second that!

Quote
Camera motion is your enemy. Fast shutter speeds help (sometimes -- faster isn't always better). Stiff tripods help. But the thing that helps the most is short duration electronic flash in a dark room. I use the Paul Buff Einsteins, which can produce a t.1 below 100 usec when set up right.

Do you use a long exposure on the camera and use the flash only (so no shutter movement at all)?  Sounds like a good trick!

Quote
Cutting thin black plastic with a paper cutter can sometime produce a clean edge. If you can find die-cut plastic, even better.

That sounds like a good idea too!

Thanks!

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jim Kasson on September 02, 2014, 06:51:05 pm
Do you use a long exposure on the camera and use the flash only (so no shutter movement at all?).  Sounds like a good trick!

On most cameras, with most lenses, it doesn't take that long of an exposure to let the first curtain vibrations die down. 1/25 with trailing curtain synch will usually do it. 1/8 would be even safer, if you don't want to run tests. The faster the shutter speed the more residual room light is allowable.

Jim

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 03, 2014, 04:08:57 am
On most cameras, with most lenses, it doesn't take that long of an exposure to let the first curtain vibrations die down. 1/25 with trailing curtain synch will usually do it. 1/8 would be even safer, if you don't want to run tests. The faster the shutter speed the more residual room light is allowable.

Jim


Hi Jim,

I don't see any way of doing this on the 1Ds3.  I can change from 1st to 2nd curtain sync for flash of course, but there's no delay that I can change.  Would either 1st or 2nd make any difference to camera shake?  I wouldn't have thought so.

I thought that what you suggested is to photograph in a very dark room with a long exposure (say 3 seconds) and manually trigger the flash after a second or so ... in which case there would be no mirror lock-up needed and no issue with shutter vibration.  Did I misunderstand you?

Cheers

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jim Kasson on September 03, 2014, 12:38:03 pm
I don't see any way of doing this on the 1Ds3.  I can change from 1st to 2nd curtain sync for flash of course, but there's no delay that I can change.  Would either 1st or 2nd make any difference to camera shake?  I wouldn't have thought so.

You can lock the mirror up on the Canon, right? You can use an electronic release or the self-timer to trip the shutter, right? If you can do that, the only vibration you have to worry about is the first shutter curtain. By using a longish exposure and trailing curtain synch, you can let the vibrations from the opening of the first curtain die down before the flash goes off.

Does it make a difference? It did for me with the Sony a7R, but it's got a particularly problematical shutter. If you can get your flash duration well under a millisecond, it's probably a "can't hurt, might help" thing.

Here's another thought. Does your camera have EFCS? Use that.


I thought that what you suggested is to photograph in a very dark room with a long exposure (say 3 seconds) and manually trigger the flash after a second or so ... in which case there would be no mirror lock-up needed and no issue with shutter vibration.  Did I misunderstand you?

That works, too, but it's easier to let the camera trigger the flash at the end of the exposure with trailing curtain synch.

Jim
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 03, 2014, 01:15:44 pm
You can lock the mirror up on the Canon, right? You can use an electronic release or the self-timer to trip the shutter, right? If you can do that, the only vibration you have to worry about is the first shutter curtain. By using a longish exposure and trailing curtain synch, you can let the vibrations from the opening of the first curtain die down before the flash goes off.

Does it make a difference? It did for me with the Sony a7R, but it's got a particularly problematical shutter. If you can get your flash duration well under a millisecond, it's probably a "can't hurt, might help" thing.

Here's another thought. Does your camera have EFCS? Use that.


That works, too, but it's easier to let the camera trigger the flash at the end of the exposure with trailing curtain synch.

Jim

Hi Jim,

As far as I know the 1Ds3 doesn't have EFCS.  Of course it has mirror lock-up etc, and I can trigger the camera remotely.  But in the test shots I've done these seem to make little difference, so I wonder if shutter shake would be significant.  It is a very heavy camera and I have a good tripod, so I suspect that the image softness I'm seeing has more to do with my lenses not being as good as they should be, and possibly even more to my test conditions (like the target print and lighting) not being too good.

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jim Kasson on September 03, 2014, 01:47:57 pm
As far as I know the 1Ds3 doesn't have EFCS.  Of course it has mirror lock-up etc, and I can trigger the camera remotely.  But in the test shots I've done these seem to make little difference, so I wonder if shutter shake would be significant.  It is a very heavy camera and I have a good tripod, so I suspect that the image softness I'm seeing has more to do with my lenses not being as good as they should be, and possibly even more to my test conditions (like the target print and lighting) not being too good.

You're probably right. As your lenses get better, you may want to revisit this issue.

http://blog.kasson.com/?p=4359

Jim
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 04, 2014, 02:27:35 pm
You're probably right. As your lenses get better, you may want to revisit this issue.

http://blog.kasson.com/?p=4359

Jim

Yes, well it's hard to know without doing a whole lot more testing.  The lenses I have are very good - admittedly not L-series primes, but the 100mm macro f2.8 and 50mm macro 2.5 are both excellent lenses and my zoom lenses (24-105 F4L and 70-200 F4L IS) are also very good. I should be getting well over 3000 lw/ph from all of these lenses, so I think my testing technique is more to blame than the lenses.  At any rate the results I get in the field are very acceptable to me, so I don't plan to change the lenses.

Out of interest, I did a test with a Samyang 14mm lens and I got a 10-90% edge rise of 1.41 pixels ... which is better than the results I got from my Canon lenses!  The only real difference is that the lens-to-print distance is over 100x the focal length with the Samyang, whereas with the other lenses the ratio is more in the region of 15-30x, so most likely my test print is to blame.

It's certainly been very interesting to see how effective the deconvolution is, using FocusMagic, for example. It's one thing to look at an image and another to see the MTF and edge rise on a test chart - but when both are telling you that you are getting as good resolution as is possible, well then it's not hard to be convinced that this is the way to go.

Robert

Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on September 04, 2014, 04:44:58 pm
It's one thing to look at an image and another to see the MTF and edge rise on a test chart - but when both are telling you that you are getting as good resolution as is possible, well then it's not hard to be convinced that this is the way to go.

That's what this is all about, figuring out how to make the equipment deliver what it is supposed to - and no less.
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 05, 2014, 05:51:23 am
That's what this is all about, figuring out how to make the equipment deliver what it is supposed to - and no less.

Yes, absolutely.  Actually, the reason I bought Imatest Studio in the first place is that I had bought a 17-40mm F4L lens that I wasn't very happy with, and the testing showed up the weakness in the lens - so I got rid of it. Then I tested the 24-70F2.8L lens I had and that came up short too. So I got rid of that one. I then bought a 24-105 F4L lens and returned two copies before I got one I was reasonably happy with.  Anyway, the point I'm making is that if one is prepared to print large charts and go to the trouble of setting up the tests properly, there is a lot to be learnt from them.  Center sharpness is just one thing ... and not necessarily the most important if the point of interest of your image is off-center.  So there could be a trade-off between accepting some softness at the subject ... or taking a wider-angle shot and cropping it, for example.  Other things like mirror lock-up, a good tripod etc., also make a big difference, of course.

Anyway, getting back to the subject of modelling the camera system - do you feel it is still worth doing?  If so, I can print a large chart and set up the equipment to get the best sharpness possible.  But if you think that there is no real practical benefit because the system is too complex and there are too many unknowns, well then I'll save myself the trouble (and you too :)).

I guess my question would be: if you can pretty accurately get the PSF just for the sensor and demosaicing, do you think there is a benefit in deconvolving the image for this first (before doing a guesstimate lens deblur)?  I think you have already said that you do think this would be good (but the problem is getting that pretty accurate PSF!).

If you do think it's worth carrying on, would it help to take test shots with a very simple lens like the 50mm Macro F2.5? (to eliminate the lens complexity as much as possible).

Also, since with MTF Mapper you can work directly on the raw channels, could you not then model the blurring just due to the demosaicing, and so deconvolve that part of the blurring separately?

Robert
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Jack Hogan on September 08, 2014, 05:34:16 am
I guess my question would be: if you can pretty accurately get the PSF just for the sensor and demosaicing, do you think there is a benefit from deconvolving the image for this first (before doing a guesstimate lens deblur).

Hi Robert,

To me this frequency domain trip is mainly a learning exercise in how lenses and cameras interact with detail in the scene.  I am focusing in on the hardware to limit the number of variables involved and because it may come in handy when evaluating what equipment to purchase. The way I am using the slanted edge method does not deal with demosaicing at all so I am mainly dealing with the lens (diffraction and blur), the AA and pixel pitch as you have probably seen in graphs produced with the model.  For modeling simplicity I concentrate on the center of the image which is not necessarily an indication of lens performance throughout the FOV.  As you see there are many variables unaccounted for that can contribute to the formation of a Gaussian MTF.  I think the average bloke does just fine by simply playing with the sliders in existing tools and reading reviews on decent sites like DxOmark.com and lenstip.com.

On the other hand now that I understand things a little better I think deconvolution plug-in designers could work a little harder at producing more flexible and controllable products.  For instance, are we sure that the deconvolution PSF used in a DSLR with an old-style AA would be suitable as-is with just a different radius/strength on a brand spanking new one sans AA?  I personally think not.

Jack
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Bart_van_der_Wolf on September 08, 2014, 05:49:46 am
On the other hand now that I understand things a little better I think deconvolution plug-in designers could work a little harder at producing more flexible and controllable products.  For instance, are we sure that the deconvolution PSF used in a DSLR with an old-style AA would be suitable as-is with just a different radius/strength on a brand spanking new one sans AA?  I personally think not.

Hi Jack,

I agree. And, as you have experienced yourself, the PSF is not always circularly symmetric (isotropic). There are several things that can be updated for a more modern sharpening tool. To avoid complexity for inexperienced users, there are lots of things that can be done in the Human Interface design to help with finding the optimal settings. Starting with more sensible (based on aperture used) defaults is but one of them.

Cheers,
Bart
Title: Re: Sharpening ... Not the Generally Accepted Way!
Post by: Robert Ardill on September 08, 2014, 04:24:31 pm
Hi Jack and Bart,

First of all, many thanks for all of your help ... I for one have learnt a whole lot from this discussion and I am quite sure that my 'sharpening' will be a whole lot better than it was in the past as a result.

I can't help feeling that there is still quite a lot that could be done to selectively remove blurring due to, for example, the AA filter ... in isolation from the other causes such as the lens.  Perhaps this is too complex for us to do (well certainly I would need to do a whole lot of relearning of my maths before I could even begin to attempt it!), but it surely should be possible for the camera manufacturers, say.  After all, it isn't rocket science at this stage to extract the relevant frequencies in the frequency domain by comparing with and without the AA filter.  Once that is known then it should be relatively straightforward to remove the unwanted signals (says I glibly :)). Whether it would be worth doing or not I don't know ... but it might be an alternative to buying two cameras, one with an AA filter and one without.

At any rate this is as far as it's practical to go at this point, but it is great that there are now tools out there that go quite a long way to help us working photographers :).

Cheers,

Robert