I've got a new computer, and it seems my favorite sharpening plugin for PS, FocusFixer 3.21, is no longer available.
Any suggestions for a replacement deconvolution sharpener?
Thanks in advance for your help
Marc
Windows or Mac? Focus Magic is available on both and is very good. Topaz Detail and InFocus are also good.
Bill

Windows!
+1 on all three.
FocusMagic is primarily a deconvolution sharpening plugin, and it does a stellar job of automatically balancing resolution restoration against noise suppression. This would be a direct replacement, only better.
Topaz Detail has a deconvolution control ('Deblur'), but its main forte is a very high level of control over all sorts of detail enhancement.
Topaz InFocus is a deconvolution sharpening tool with much more control over the deconvolution parameters than 'Detail' offers, but it can be a bit heavy on the artifact by-products it creates (e.g. ringing), although that's usually caused by using the wrong settings, like too large a radius.
Perhaps I haven't read the Topaz documentation in sufficient detail, but from what I have seen they have done a poor job of differentiating 'Detail' from 'InFocus' and where one should use each tool. It seems to me that one would use InFocus primarily for capture sharpening and Detail for creative sharpening.
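Since the whole thread turns on deconvolution, it may help to see what these tools are doing in principle: rather than just boosting edge contrast the way USM does, deconvolution tries to invert an assumed blur kernel. Below is a minimal Richardson-Lucy iteration in numpy; this is the generic textbook algorithm, not what FocusMagic or the Topaz products actually implement:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30):
    """Textbook Richardson-Lucy deconvolution on a 1-D, non-negative signal."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode='same')
    return estimate

# blur a step edge with a known 5-tap PSF, then try to restore it
psf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
truth = np.repeat([0.1, 0.9], 16)
observed = np.convolve(truth, psf, mode='same')
restored = richardson_lucy(observed, psf)
# the restored edge transition is steeper than the blurred one
```

With the wrong PSF width, exactly the ringing artifacts discussed above appear, which is part of why these tools are so sensitive to the radius setting.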
Where does Affinity Photo fit in with all this?
Are its sharpening tools up to standard?
Does it work properly with these Ps plug-ins?
I've managed to get Detail, DeNoise and InFocus to work; Clarity crashes when I invoke it from AF. I am running AF 1.4 on El Capitan 10.11.2. I point the AF plug-in folder list to my PS CC 2015 plug-in folder and allow global search, and I also allow "Unknown" plug-ins.
Ironically, the AF plug-in status window shows Clarity as the only one that is "Working."
Who knows...
If you have Photoshop CC 2015, take a look at the Smart Sharpening filter. It is much improved over previous versions and very competitive with Focus Magic and InFocus in terms of results.
I couldn't find any improvement (and only trivial differences) in using Topaz Infocus or FocusMagic over Smart Sharpening set to "lens blur." Not surprising, really.
For my own benefit (about 6 months ago) I performed a limited capture sharpening & NR matrix test. Preliminary testing was performed to find the optimum settings in each program for the image at hand. ....
When you talk about optimum settings, are you referring to print or screen?
In Lightroom (and ACR) the sharpening uses deconvolution if the Detail slider is moved right, and no deconvolution when the Detail slider is at the left end, or so Jeff Schewe says in "The Digital Negative".

In my test I used one set of conditions for "pure" deconvolution and several additional sets, placing the sliders to obtain the best I could get.
Hello,
Any discounts for first-time users?
currently piccure+ is compatible with PS (CS4 or later), PSE (7 or later), LR (3 or later), DxO (9 or later) and Phase One C1 (8 or later). We have not tested it with AF; sorry to hear that it crashed. However, there is a standalone version available (including a RAW converter). You can find a lot about piccure+ on the internet, in forums and on our homepage (just google it). It is currently the only solution that corrects spatially varying complex optical aberrations (e.g. coma) as well as camera shake (e.g. micro-shakes) by means of (blind) deconvolution. You do not need to specify a lens, a motion trace, etc.; the software does all that for you. There is a 30-day free trial with no limitations on functionality.
Best,
Lui
Co-Founder
[piccure+] is currently the only solution that corrects spatially varying complex optical aberrations (e.g. coma) as well as camera shake (e.g. micro-shakes) by means of (blind) deconvolution. You do not need to specify a lens, a motion trace, etc.; the software does all that for you. There is a 30-day free trial with no limitations on functionality.

I've been a user of Focus Magic since it was first offered many years ago. I just downloaded piccure+ and tried it with several of my images, and found that on some, the default settings did a better job of deconvolution than I was ever able to achieve with Focus Magic, the Detail tab of LR or Camera Raw, Topaz InFocus, and some others. I am still trying to figure out the controls of piccure+ for the other images (I read the user manual but need time to work with it).
For those images it worked well with, I found no artifacts and stunning deblurring with no work on my part! This looks like quite a powerful program!
I just gave piccure+ a try, and wow, very, very good.
Could you post a before of that 100% crop JPEG so we can see Piccure+'s magic?

Yes, I'll do that for you, but it will be a JPEG, so neither will look like a high-res TIFF.
This is about the second or third time someone has posted finished results from some type of image sharpening and/or clarity-enhancing software without showing a before. If they went to the trouble of posting finished samples, why not post the before as well?
Really nice shot of Bryce Canyon, probably the best I've seen, and I've seen quite a few. And I certainly wouldn't go all the way out there and shoot that detailed a landscape with anything less than a high-res Phase One camera.
Pre- and post-sharpened C1 raw conversion with sharpening set at 0
Piccure+ set at quality+, sharpen 27, denoise off
The tree on the left was near the limit of the DOF and towards the outer edge of the image circle
Marc
Thanks, Marc. That's impressive.
I applied CS5's Smart Sharpen to your pre-sharpened version and couldn't come close. I don't know if CS6 and above can do a better job. I noticed piccure+ gives nice, even sharpening across low- to high-frequency detail, almost like an adjustment mask but without having to paint it back in at different blend levels.
Adobe's had a number of different iterations of Smart Sharpen. The latest (in CC) is by far the best.
I think there is a little more going on under the hood than just one simple round of deconvolution sharpening.
I do not think you can get the same results as Tim's and Marc's examples using Smart Sharpen in PS or in LR alone. I believe a round of USM or high-pass sharpening would greatly narrow the difference.
A quick play in CS6: first Smart Sharpen, then low-level USM applied on a duplicate layer. FWIW, maybe a touch too much, but...
Lui, can piccure+ increase granularity as shown in Tony's micro-fine rock detail example of Marc's 100% Phase One crop? Is there a slider or setting you'd suggest to bring out more detail?
Also, you say to shoot raw, which most of us, myself included, do exclusively, but piccure+ doesn't work parametrically on raw data as a LR plugin, only on the TIFF conversion of the raw. And I, like most others, agree that 16-bit TIFF is second best to working directly on the raw version, ...
... though there are in-camera JPEGs set to high quality that can look as good as a TIFF without compression artifacts, where baked-in noise and noise-suppression artifacts are far worse issues.
Marc,
First, a big thank you for sharing your raw images for personal use; very generous.
The subject matter is ideal, with plenty of nice edges and detail, plus the bonus of the same subject shot with both the Nikon D800E and the Phase One back. As good as the D800E is, the Phase One IQ is a real step up. The problem now is I want one ;D Excellent images, and I will be playing with sharpening soon, hoping to identify which is your boat ;D
BTW, loved the Bryce image and the way you caught the light, giving such a great sense of depth with the colour.
I hope to find the time to explore Piccure+ a little further. First off, I have a question to the developers concerning the install process.
I tried to install it on my desktop Mac. During the install process, there was an alert, something like "The 'Rez' component requires the Developer command line tool to be installed. Do you want to install it now?" I clicked 'yes', thinking the tool might be included in the install package, but it looked like the installer tried to find the tool via the Internet and couldn't, since that computer is not connected to the Internet. Nonetheless, it said that the install was successful.
Then I installed it on the MacBook, which has Internet access. There was no alert about any Developer tools, but nor was there any sign that anything was downloaded during the install. Here too, the install was called successful.
So what am I to believe? Can I expect the Rez component to work on the MacBook? Or what should I do to be sure?
...It looks to me like Piccure+ has a little edge due to a stronger contrast boost.

Yes, it looks the same to me, and perhaps due to the added contrast it seems CA also increased a tad. On this image alone there does not appear to be a big difference, and maybe it would not be evident when printed?
Image | PH | R10-90/PH | Overshoot % | Oversharpening % | MTF50 LW/PH | MTF50P LW/PH | MTF Nyq c/p | MTF Nyq lw/ph
A000541---No-Sharpening.tif | 4000 | 1449 | 0.2 | -36.4 | 1720 | 1720 | 0.046 | 368
A000541---ACR25.tif | 4000 | 1737 | 0.2 | -26 | 2346 | 2346 | 0.149 | 1192
A000541---ACR25-FM1.tif | 4000 | 3473 | 3.6 | -9.1 | 3445 | 3445 | 0.351 | 2808
A000541---FM1.tif | 4000 | 2200 | 0.2 | -21.8 | 2603 | 2603 | 0.108 | 864
A000541---FM1+1.tif | 4000 | 3610 | 5.1 | -3.1 | 3402 | 3364 | 0.274 | 2192
A000541---FM2.tif | 4000 | 3342 | 5 | -2.4 | 3202 | 3194 | 0.202 | 1616
Do you think this is a useful approach to evaluating different types of sharpening?
Robert
BTW ... the softness with no sharpening is due, to a significant extent, to the target being rather poor, as it was printed with an inkjet on satin paper.
For me this has never been useful information for getting sharp results, because I just fix it in post. I look at digital images as just varying densities of microscopically small pixels, to which I can apply contrast globally and locally to get the level of sharpness I want.
The day someone can take a raw shot straight out of the camera and not do any editing to it is the day a test like you've outlined would justify all that time and effort.
Do you think this is a useful approach to evaluating different types of sharpening?
Yes, quite. It is obvious, for instance, that your last try goes way overboard, bumping up noisy/non-existent aliased frequencies and increasing real frequencies above what they were in nature, so it will probably generate more artifacts.

Thanks for the links, Jack. I read your articles with interest. I've run dcraw (dcraw -w -o 2 -6 -T -g 2.2 0) versus ACR (no sharpening) and the results are indeed different, as shown below (the .tiff file is dcraw and the .tif file is ACR).
What the test doesn't really show though is contrast (I don't think?) and micro-detail.
I don't know what micro-detail is, Robert, but 'global' contrast is represented by lower frequencies, the left 1/8th of the MTF curve. In fact, it turns out that in typical viewing conditions, what matters most as far as the perception of 'sharpness' is concerned is the performance of the lens around MTF90 (http://www.strollswithmydog.com/mtf50-perceived-sharpness/). Keep that in mind the next time you buy a lens ;)

Yes, but as per your article, only at standard viewing distance :). For us pixel-peepers, MTF50 is a better guide.
Cheers,
Jack
Jack, couldn't you post one actual photographed image sample in any of your linked articles to show how all that analysis makes for a better sharpened image?
Come on!
Connect the science and graphs with reality will ya' so photographers can see with their own eyes that understanding the science really helps.
So does that not show that there can be a benefit from a scientific analysis that is reproducible with small parameter changes so that you can fine-tune your sharpening for different lenses, different apertures, different ISO etc? Or in assisting you to pick the best sharpening plug-ins?
Robert
Not that I don't appreciate the effort you took, Robert, posting the sharpening sample, but since you didn't connect the graph analysis in Jack's blog articles to the look of the sharpening results I fail to see how the science helps.
For me it's always been the slider behavior and positioning relationship between Amount, Radius & Detail in ACR that affects image sharpness differently depending on distance the detail was from the lens combined with resolution/sensor size at the time of capture.
For instance, with detail lit at a 45-degree angle just feet from the lens, a small Amount above ACR's +25, say +40 / Radius 1 / Detail 25, is all that's needed, whereas detail farther away, lit at a 75-degree angle, needs a larger Radius and Amount 50 / Detail 50. Sometimes I can crank Radius to 2.5 and increase Detail to remove "mosquito" edge artifacts, but it's different from image to image. How do you connect scientific analysis to so many unknowns and inconsistencies as to what's really going on with the software?
I notice the ACR slider position relationship changes as it acts on various clump sizes of detail, which has not been characterized/profiled in these discussions.
I have also downloaded and installed a Focus Magic demo for the second time in 12 months, and I was reminded that the results I get with it seem, to put it politely, *less than good*. I am surprised that so many here seem to find it useful, so I guess I will keep trying it on occasion to see where it works well.
The trick for me, with Focus Magic, is to up the radius until you see halos/artifacts, and then to step back at least one pixel, usually better two. Just like with all these other techniques, if you use too high a radius you will get halos/artifacts.
An alternative which I also think is very good is to step back only 1px and then do a Fade. That gives very good control over any small amounts of haloing that might be present.
Yes, I use a similar technique, but I also start with an amount setting of 300% to exaggerate the effect, for easier detection of when the artifacts become visible.

Good thinking.
Strictly speaking, halos should not happen with deconvolution Capture sharpening with the proper blur width setting. When used for Creative sharpening, one can increase the amount, not the blur width.

Yes, that is quite clear, either visually or with the slanted-edge analysis. The reason I use Fade in Photoshop rather than reduce the Amount in Focus Magic is that the preview in FM is quite poor, so it's much easier to see the effect in Photoshop. But are the two equivalent? Perhaps you could answer that question ... but I'll also try it out with Imatest, from an experimental rather than theoretical point of view.
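On the "are the two equivalent?" question: Photoshop's Fade is a plain linear blend between the filtered and original pixels, while reducing the Amount changes the (nonlinear) deconvolution itself, so in general they are not equivalent. A minimal numpy sketch of what Fade does (the toy edge values are made up for illustration):

```python
import numpy as np

def fade(original, filtered, opacity):
    """Photoshop-style Fade: linear blend of the filtered result over the original."""
    return opacity * filtered + (1.0 - opacity) * original

# toy 1-D edge where the 'filter' has produced an under/overshoot halo
original = np.array([10.0, 10.0, 200.0, 200.0])
filtered = np.array([10.0, 0.0, 230.0, 200.0])
faded = fade(original, filtered, 0.5)   # halo amplitude is halved
```

Fade at 50% simply halves the halo amplitude; lowering FM's Amount may suppress halos in a quite different, nonlinear way, which matches the suggestion above to test the difference empirically in Imatest.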
I use a blend-if layer setting (see attached) that avoids clipping and mitigates the restoration in regions that already have high edge contrast (which allows a somewhat higher amount to be used).

Yes, I do that too ... especially if I apply a stronger creative-type sharpening which results in some haloing (even if it isn't supposed to :) ).
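Outside Photoshop, a blend-if arrangement like the one described can be approximated with a per-pixel opacity ramp built from the underlying layer's tones, so the sharpened layer fades out before it can clip shadows or highlights. A rough numpy sketch; the threshold values are illustrative, not the actual slider positions from the attachment:

```python
import numpy as np

def blend_if_mask(base, lo=0.05, lo_ramp=0.15, hi=0.95, hi_ramp=0.85):
    """Opacity ramp mimicking split blend-if sliders: full effect in the
    midtones, tapering smoothly to zero near black (lo) and white (hi)."""
    rise = np.clip((base - lo) / (lo_ramp - lo), 0.0, 1.0)
    fall = np.clip((hi - base) / (hi - hi_ramp), 0.0, 1.0)
    return rise * fall

# the sharpened layer would then be composited as:
#   result = base + blend_if_mask(base) * (sharpened - base)
tones = np.array([0.0, 0.10, 0.5, 0.90, 1.0])
mask = blend_if_mask(tones)   # 0 at the extremes, 1 in the midtones
```

The split handles (lo vs lo_ramp, hi vs hi_ramp) are what give the smooth taper; hard thresholds would reintroduce the posterized transitions that Blend If's split sliders exist to avoid.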
After many years of comparing alternatives, I find FocusMagic to be one of the best at improving the signal to noise ratio, i.e. it doesn't sharpen noise as much as it does the signal, and it generates very few artifacts. People who need more than real resolution restoration (i.e. looking for an effect that suggests sharpness), should additionally consider using Topaz Detail (it performs miracles for the rendering of structural detail).
Topaz Detail is great ... I agree. What about InFocus? Not as good as FM in your view?
Topaz InFocus is good, but it's more prone to generating artifacts than FocusMagic, so it needs exactly the correct settings, especially for the blur radius.
Robert, the bottom sharpened result looks artificial: overly smooth and clay-like.
Or are you sharpening so it looks natural when downsized for web viewing? That would be output sharpening, not capture sharpening.
Concerning that same layer arrangement, a dumb question: which are the two layers you are blending?
Thanks, Bart, for that layer-blend tip. It gave me an idea about applying it to a high-pass sharpening routine on severely upsampled low-resolution images, with better results than using Smart Sharpen in CS5.
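For reference, the high-pass routine mentioned here boils down to adding back a scaled copy of the high-frequency residual (image minus its blur). A 1-D numpy sketch with an illustrative Gaussian kernel; the sigma and amount values are arbitrary:

```python
import numpy as np

def high_pass_sharpen(signal, sigma=2.0, amount=0.7):
    """High-pass sharpening on a 1-D signal: add back the
    high-frequency residual (signal minus its Gaussian blur)."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    low = np.convolve(signal, kernel, mode='same')
    return signal + amount * (signal - low)

edge = np.repeat([0.2, 0.8], 20)   # a step edge
sharp = high_pass_sharpen(edge)    # gains over/undershoot at the step
```

On a flat area the residual is zero, so nothing changes; at an edge, the over/undershoot this creates is exactly the halo one controls with the amount and radius settings.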
So what was your point behind the painted-wall sample? What do you want us to understand about sharpening, beyond using visual judgement vs Nyquist graphs?
It appears from the painted-wall image that there's not a lot of acuity in the lens you're using, or you're overexposing and introducing flare or nonlinear sensor behavior.
Robert, that second attempt looks more real (natural) which is all that I look for.
But it still looks overly smooth, though at least more consistent overall; masking tends to introduce unevenness between smooth surfaces next to fine, crispy detail. The eye sees fine detail like granules of dirt and tiny dried bubbles and cracks underneath the paint along smooth raised bumps. But your second attempt makes me think the viscosity of the paint must have been thicker, drying to a burnished look differently across the entire surface.
It just looks odd in a surreal way, but maybe that's what will make the print unique from an aesthetic standpoint. But scientific analysis is not going to predict that kind of outcome on a consistent basis.
I couldn't download your ARW file. I get an "Enable QuickTime" alert in Firefox, and when I allow it, I end up with a blank white page.
Are you asking Bart or me?

Everybody who knows and is willing to explain ;-)
Finally got around to trying Topaz InFocus. It seems to really dig in and bring out the detail. Perhaps it is too much, too early, if used as a pre sharpener? I look forward to having 30 days of trial and error with it. :-)
Hi,
It's easy to overdo the deconvolution with Topaz InFocus, which would create artifacts. As a tip on how to avoid that, see my earlier post (http://forum.luminous-landscape.com/index.php?topic=107311.msg892643#msg892643) in this thread.
Cheers,
Bart
I've just downloaded the piccure+ free trial and I'm getting a very ugly result straight off...
Either my setup is faulty in some way (don't think so) or piccure+ is for the recycle bin!
(http://www.irelandupclose.com/customer/LL/picplus.jpg)
I understood that smart sharpen in PS used deconvolution sharpening, but perhaps I've got that wrong.
In Lightroom (and ACR) the sharpening uses deconvolution if the Detail slider is moved right, and no deconvolution when the Detail slider is at the left end, or so Jeff Schewe says in "The Digital Negative".
I think your test example is a worst-case candidate for showing piccure+'s ability. I have been getting bad results, with high-contrast edges being output with wide dark lines. piccure+ seems to work very well on frames filled with lots of lower-contrast, soft detail. Even when I find piccure+ helpful, the range of parameter adjustments seems crude: sometimes I make a one-click change in a single parameter and can barely, if at all, see a change in the output, while other times a single parameter change makes the results suddenly ugly. In other words, adjusting the parameter sliders does not seem to produce subtle differences that can be appreciated.
Hi Tim, thank you for the demo. I note that you use Linear Light as the blend mode rather than Luminosity, and that in your base layer you have defined two overlapping areas. It looks like, amongst other things, I need some basic reading up on layer blending, with which I am not familiar at all.
- As for the look, I prefer the middle image, without the highpass sharpening. The one at right has a little grainy look to my eyes.
Is there a way to use your test method to analyze the results of the various sharpening processes on a photo such as this or do the tests just work on graphical chart type subjects?:
Robert, I'm not into uploading my raw files. It's a PITA, and besides, you can find dozens of raw duck shots to download online to conduct sharpening experiments. I mean, how hard is it for you to go to your local park and take a shot of some ducks?
Is there a way to use your test method to analyze the results of the various sharpening processes on a photo such as this, or do the tests just work on graphical chart-type subjects?

No, unfortunately not. The slanted-edge analysis can only be used on a test image. ... the images can't be analysed as a slanted edge can.
Huh. Of course, you have been living in the olden times of, what's the name again, yeah, slanted edge method ... The world has moved on to using JIDM on real images:
(http://djjoofa.com/data/images/jidmswans.jpg)
What does a value of 0 to 1 tell you about the performance of an imaging system? That it has more or less resolution? More or less acutance? More or less of both?
A slanted-edge analysis can give you a huge amount of detailed information about the whole imaging system, from lens-style MTF (which shows the performance of the imaging system across the whole frame), chromatic aberration, edge profile, noise analysis ... and other things like distortion, tonal response and color fidelity with different charts. It also provides a standardized way of comparing different camera/lens combinations, different sharpening or resizing methods, the effect of lens profile correction on resolution etc.
And that's why it is an ISO-approved method for measuring resolution for digital scanners and cameras. There is a lot of information that can be extracted from the results. One of the things it clearly demonstrated in the examples you showed is that FocusMagic is very good at restoring detail below the Nyquist frequency while avoiding aliasing artifacts, whereas InFocus also boosts signals above the Nyquist frequency that may lead to aliasing, but can look sharper at the limiting resolution.

Yes, I think it's clear that FM does a really great job up to MTF40, as can be seen below. The MTF80 result is way better than the InFocus result, so the FM image should look sharper overall. InFocus pulls up the MTF near Nyquist, but that seems to be caused by aliasing. I sharpened the InFocus result a second time with a small radius, and the jaggies are quite obvious at 300%, as can be seen in the image under the graphs.
The one thing I've found, though, is that it's really necessary to dial down the FM settings quite a bit. For example, with this image the artifacts became strong with a radius of 5, and I had to drop the radius down to 2 in order to get a clean edge profile and MTF; even at 3 there is significant overshoot.
What does a value of 0 to 1 tell you about the performance of an imaging system? That it has more or less resolution? More or less acutance? More or less of both?
Correct: as with most single-number qualifiers, they only tell you that there is a difference. How significant that difference is, is anyone's guess.
And that's why it is an ISO approved method for measuring Resolution for digital scanners and cameras.
Implementations of the ISO procedure, such as Imatest, also allow the data to be viewed in a number of ways, highlighting different aspects of the results. It is also one of the few methods that allows one to study behavior at spatial frequencies above the Nyquist limit, because the slanted edge allows super-sampling the pixels at 4x the Nyquist frequency (it actually samples at close to 10x for a 5-6 degree slant, but for statistical robustness it bins the results into larger bins).
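The super-sampling trick is worth sketching: because a slightly slanted edge crosses each scan row at a different sub-pixel phase, projecting every pixel onto the edge normal samples the edge-spread function far finer than the pixel pitch; the samples are then binned (here to 1/4 pixel), differentiated to the line-spread function, and Fourier-transformed to the MTF. A toy numpy version on a synthetic noise-free edge, not Imatest's actual implementation:

```python
import numpy as np

def slanted_edge_mtf(img, slope, x0, bin_width=0.25):
    """Project pixels onto the edge normal, bin the ESF, differentiate to
    the LSF, and FFT to get the MTF (frequencies in cycles/pixel)."""
    rows, cols = img.shape
    r, c = np.mgrid[0:rows, 0:cols]
    d = (c - (x0 + slope * r)).ravel()          # signed distance to the edge
    v = img.ravel()
    bins = np.round(d / bin_width).astype(int)
    bins -= bins.min()
    esf = np.bincount(bins, weights=v) / np.bincount(bins)  # super-sampled ESF
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=bin_width)
    return freqs, mtf

# synthetic 5-degree slanted edge with a logistic (soft) profile
rows, cols, slope, x0 = 100, 60, np.tan(np.radians(5)), 25.0
r, c = np.mgrid[0:rows, 0:cols]
img = 1.0 / (1.0 + np.exp(-(c - (x0 + slope * r)) / 1.2))
freqs, mtf = slanted_edge_mtf(img, slope, x0)
```

Because the ESF is binned at 1/4 pixel, the resulting frequency axis extends to 2 cycles/pixel, i.e. 4x the Nyquist frequency, which is what makes aliased energy above Nyquist visible in the analysis.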
BTW, as noted before by others, the slanted edge method doesn't let you operate on natural images.
I find it interesting that you don't know the internals of JIDM but are quick to jump to a conclusion of 'anyone's guess'. In experiments that is called bias.
Are we measuring the resolution of digital scanners or cameras here, or just doing a simple comparison of different software/algorithms?
You can spend all the time praising such antiquated methods until the cows come home.
They on the other hand claim to be able to perform blind reversal of the effects of spatially-varying optical aberrations. This is one mean feat and requires major computational power.
Incidentally, one of InFocus' neatest features is its one-click capture sharpening. To use it, zero out the Sharpen section and set up the following as a preset; it comes straight from Dr. Albert Yang, President of Topaz:
Blur Type: Unknown/Estimate
Blur Radius: 2 (don't worry, it does not mean 2 pixels in this context)
Edge Softness: 0.3
The next time you want to capture sharpen an image bring it into InFocus, recall the preset and click the 'Estimate Blur' button. Works pretty decently most of the time.
Jack
When I apply FM to an upsampled image, e.g. for deconvolution output sharpening, I need to multiply the blur width by the same amount as the upsampling factor (although I can nail the optimum width a bit more exactly, due to the potential super-resolution). So upsampling by a factor of 2x could lead to a blur width of approximately 3 instead of 2 or 4, just because it is possible to be more exact and interpolate between the initial widths of 1 and 2 (which scale to 2 and 4).
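That scaling rule is easy to verify numerically: a blurred edge whose 10-90% rise spans N pixels at native size spans about 2N pixels after 2x upsampling, which is why the blur width must be multiplied by the resampling factor. A quick numpy check on a synthetic logistic edge (not FM itself):

```python
import numpy as np

def rise_distance(edge_profile, lo=0.1, hi=0.9):
    """10-90% rise distance (in samples) of a normalized edge profile."""
    e = (edge_profile - edge_profile.min()) / (edge_profile.max() - edge_profile.min())
    return np.argmax(e > hi) - np.argmax(e > lo)

# a blurred edge at native size, then upsampled 2x by linear interpolation
x = np.arange(-20, 20)
edge = 1.0 / (1.0 + np.exp(-x / 1.5))
x2 = np.arange(-20, 19.5, 0.5)
edge2x = np.interp(x2, x, edge)
print(rise_distance(edge), rise_distance(edge2x))   # roughly doubles
```

The upsampled edge also offers twice as many sample positions across the transition, which is the super-resolution effect that lets one land between the integer blur widths available at native size.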
A single metric for sharpness could also be obtained from the JPEG file size after saving, or even from the standard deviation. Surely your metric is supposed to be somewhat more useful than that?
See, I don't want to tout JIDM too much on this forum. I just presented it as a measure that acts on natural images, since somebody asked. Whereas the slanted edge method is not directly applicable: you can force it, but then it becomes a manual process of finding edges in an image, and no longer an automated process like JIDM.
So I take it that you do not apply FM before the upsampling? Or do you apply it before at, say, a blur width of 1, and then re-apply FM to the 3x upsampled image with a blur width of 3?
Hi Bart,
>I can disable it e.g. before down-sampling where it would only increase the risk of generating aliasing artifacts.
I am surprised to read that. I thought down-sampling would also decrease artifacts?
If memory serves me, I remember that you even favoured a workflow of first up-, then down-sampling for the sole purpose of doing just that?
Any detail, especially when it is well resolved, that is too small to be resolved at the smaller size will create aliasing artifacts. So my advice is to not sharpen before downsampling; in fact, one can benefit from blurring (or using appropriate windowing algorithms) before downsampling.
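The aliasing risk is easy to demonstrate: detail at the old Nyquist frequency simply cannot be represented at half size, and naive decimation turns it into a false pattern. A 1-D numpy sketch; the binomial kernel is just one simple choice of pre-filter:

```python
import numpy as np

def decimate2(signal, pre_blur=True):
    """Downsample by 2, optionally low-pass filtering first to
    suppress frequencies above the new Nyquist limit."""
    if pre_blur:
        kernel = np.array([0.25, 0.5, 0.25])   # simple binomial low-pass
        signal = np.convolve(signal, kernel, mode='same')
    return signal[::2]

# a pixel-alternating pattern: pure Nyquist-frequency detail
stripes = np.tile([0.0, 1.0], 16)
naive = decimate2(stripes, pre_blur=False)   # aliases: keeps only one phase
safe = decimate2(stripes, pre_blur=True)     # averaged toward mid-grey first
```

Without the pre-blur, the alternating pattern collapses to a single phase (flat black here); with it, the detail is first averaged toward mid-grey, which is the honest representation of that detail at the smaller size.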
Hi Bart,
In the image below, on the left I applied FM radius 2 amount 100 then resized to 50% using Bicubic. On the right I resized to 50% using Bicubic and then applied FM radius 1 amount 100.
[...]
It looks like the whole curve is pulled up (on the sharpen-after), probably a bit too much.
I reduced the sharpening after resize to FM1/75 and also to FM1/100 followed by a sharpened-layer opacity reduction to 80%:
[...]
Hi Bart,
thank you for your detailed reply.
So sharpening before downsizing can be beneficial if the larger size was achieved by upsampling first, not if it is the shooting size, correct?
What would be the benefit of such upsampling first? Better visibility when adjusting the parameters? That is what I read from your post #120.
So my take-away so far is:
Preferably, sharpening should be done at output size. After downsampling for web, after upsampling for (large) prints. The concept of *capture* sharpening is kind of fading away.
It might be replaced by sharpening for the monitor size as the primary "output".
... Topaz InFocus seemed to require more input from the user ...
I tried this and it doesn't seem to work:
Image processing of natural images is a process where a lot of trade-offs need to be made; some image content is better suited to one approach, while other content benefits from another, and they are often combined in the same image. The need for sharpening is inherently linked to the capture process, which blurs image content, and resampling also blurs and/or reduces contrast. Therefore there is no single best solution. But if our tools give a good preview of the effects, and we use some of the insights we can get from analyzing images with tools like Imatest, we can get quite far.
What we really need is better Capture sharpening tools in the Raw converter. Most of the current 'solutions' also cause a lot of confusion and issues, and most of that is avoidable, IMHO.
Ugh, right. Two things:
1) I forgot one setting for the preset: 'Suppress Artifacts = 0.2'
2) How are these images getting to InFocus? The settings I gave are for capture sharpening unsharpened raw images, if they have already been pre-sharpened by LR for instance all bets are off.
Robert, would you care to make a raw of your test shot available, let me try Iridient on it, and then analyse it with Imatest?
Many thanks, Robert. - Which output color space did you use when processing the raw?
Just curious, did you try InFocus' one-click mode as described earlier (http://forum.luminous-landscape.com/index.php?topic=107311.msg893513#msg893513)?
EDIT: Including the setting that I forgot, 'Suppress Artifacts = 0.2'
"Your attachment has failed security checks and cannot be uploaded. Please consult the forum administrator." ??
It's a 66 kB TIF containing a 70x135 px crop of the slanted edge target.
Still doesn't work for me. I am using an image from Lightroom with sharpening off (everything off, in fact) and here is the result:
If I use a smaller radius (even 1.9) I don't get the ringing. If I reduce the Edge Softness (with a Blur Radius of 2) the artifacts are also reduced. But even then, I have to put Suppress Artifacts to max before I get a clean image.
Robert
Hi Robert
Curious, so I had a look at your original and used the same settings. I am not getting the same result as you. The attached shows the original and Topaz at 100% view and, closer to your screenshot, at 200%. I would also add that Smart Sharpen does the job just as well in this case (a little more noise perhaps, but irrelevant for print IMHO).
EDIT: The only change I made to your original was to apply ACR Lens corrections and Remove CA.
Yes Robert the same file link you posted earlier in the thread: Reply #143.
Weird ... I'm using InFocus 1.0.0 Win 64. And you?
Same version 1.0.0 on Windows 10 64bit
Hi Jack,
Still doesn't work for me. I am using an image from Lightroom with sharpening off (everything off in fact) and here is the result:
I can't explain it Tony. The only way it works for me is to put a radius of between 1 and 1.9. Anything at or above 2 gives major artifacts.
I confess I have no idea why the results should be so different between our systems.
Robert
@#134
Thanks again, Bart.
> What we really need is better Capture sharpening tools in the Raw converter.
1- But that would seldom be the output size.
What I was implying was that if the Capture sharpening is done well at the Capture size (maybe even before/during demosaicing), then we have a much easier job with final sharpening at any size.
BTW Infocus 'Estimate' does best if zoomed in to a well focused area with lots of detail in all sorts of directions.
I know that I am asking an unanswerable question because the answer probably depends on the image and what we are going to do with it in post-processing.
But I'll ask it anyway. What do you think is the best sharpening workflow?
Assuming Lightroom and Photoshop; and that there will be either upsampling or downsampling for output; and that we have a well-taken image with a good lens and camera.
I will understand if you are too weary of the subject to answer :)
Does InFocus "Estimate Blur" on just the portion that is shown in its preview window rather than the entire picture file?
As you can see, bicubic sharper clearly applies sharpening as part of the resampling. This is an excellent result as is, and normally there would be no need to apply any further sharpening.
The following shows sharpening applied before bicubic, after bicubic, before bicubic sharper and after bicubic sharper:[...]
My conclusion would be that for downsizing (assuming the use of bicubic or bicubic sharper):
- Use bicubic if the image was already sharpened
- Use bicubic or bicubic sharper if the image was not already sharpened and sharpen after the resize if needed.
- Based on the slanted edge test image, the best result is to use bicubic and to sharpen after the resize, not before.
This becomes very clear if one down-samples the very critical zoneplate image (https://www.dropbox.com/s/jiywm0vrse2t7zm/Rings.png?dl=0) mentioned earlier.
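The linked Rings image is a zone plate, a pattern whose spatial frequency rises with radius. Here is a sketch of my own (numpy/scipy; the blur sigma is an assumed value, and scipy's decimation is not Photoshop's bicubic) that makes the point concrete: naive decimation turns the high-frequency outer rings into spurious low-frequency rings, while a Gaussian pre-blur largely removes them:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def zone_plate(n):
    """Zone plate test pattern: local frequency increases with radius,
    reaching Nyquist at the edge of the image."""
    y, x = np.mgrid[-n//2:n//2, -n//2:n//2].astype(float)
    r2 = x**2 + y**2
    kmax = np.pi / (n / 2)          # Nyquist reached at radius n/2
    return 0.5 + 0.5 * np.cos(kmax * r2 / 2)

z = zone_plate(512)

# Naive 4:1 decimation: detail above the new Nyquist aliases
# into false low-frequency rings.
naive = z[::4, ::4]

# Pre-blur to suppress detail above the new Nyquist, then decimate.
filtered = gaussian_filter(z, sigma=1.8)[::4, ::4]
```

In the `filtered` result the outer region flattens to mid grey instead of showing false rings, which is exactly what a good down-sampler (or a bit of blur before bicubic) buys you.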
@ Bart, post #164
> so we can address the combined blur with a relatively simple model, that can also be implemented much more efficiently in software as two separable linear (de)convolutions rather than one 2-dimensional (de)convolution.
I don't understand this part. Even if it sounds like something the software author would have to do, not something I could do myself, I would like to understand it a LITTLE better. What are these two dimensions of deconvolution? Would you care to explain just a little?
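To illustrate the separability point with an example of my own: a 2-D Gaussian kernel is the outer product of two 1-D kernels, so convolving once along rows and once along columns gives the identical result as one 2-D convolution, at far lower cost (for a 7-tap kernel, 7+7 instead of 7×7 multiplies per pixel):

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# A 1-D Gaussian kernel and its 2-D outer-product equivalent.
x = np.arange(-3, 4).astype(float)
k1 = np.exp(-x**2 / 2.0)
k1 /= k1.sum()
k2 = np.outer(k1, k1)       # 2-D kernel = outer product of two 1-D kernels

# One full 2-D convolution (49 multiplies per pixel) ...
full2d = convolve(img, k2, mode='nearest')

# ... equals two 1-D convolutions (7 + 7 multiplies per pixel).
sep = convolve1d(convolve1d(img, k1, axis=0, mode='nearest'),
                 k1, axis=1, mode='nearest')

assert np.allclose(full2d, sep)
```

The two "dimensions" are simply the horizontal and vertical passes; only kernels that factor into such an outer product (Gaussians being the classic case) can be split this way.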
@ Bart,
post #133
> Also to complicate matters further, Bicubic filtered down-sampling is not perfect and introduces some artifacts by itself. However, it is not that easy to devise a better down-sampler because there will always be other trade-offs to consider (although a Lanczos2 or Lanczos3 windowed downsampling is often pretty usable).
post #166
> With Photoshop (Lightroom is much better at downsampling) I'd 'never' use anything else than bicubic for general down-sampling, and I'd rather add a bit of blur before doing so, just to get fewer artifacts.
What if we go beyond Photoshop/Lightroom?
I think I remember from the ImageMagick thread (http://forum.luminous-landscape.com/index.php?topic=91754.msg746273#msg746273) and from your site that Mitchell-Netravali was a 'basically good' algorithm for downsampling. Would you recommend it for general downsampling? For some time, it has been readily available in PhotoLine, so it would not require the command line and ImageMagick.
PL also offers Lanczos 3 and 8, but I wouldn't know if they are 'windowed' (nor what 'windowed' means - nor if I need to know).
BTW ... I tried Photozoom Pro with S-Spline Max on a normal image and it didn't seem any better to me than Bicubic for upsizing - except that it's slow as hell. But perhaps a test image like the Rings image might show things that I can't see in a landscape photo.
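On what "windowed" means: an ideal low-pass resampling kernel is the infinitely wide sinc function, and a windowed variant multiplies it by a window that fades it smoothly to zero over a finite support. For Lanczos-a the window is itself a wider sinc. A small sketch of my own in Python:

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos-a kernel: the ideal sinc low-pass, windowed by a
    wider sinc so it fades smoothly to zero at |x| = a."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)   # np.sinc(x) = sin(pi*x)/(pi*x)
    return np.where(np.abs(x) < a, out, 0.0)

# The kernel is 1 at the sample itself and 0 at every other integer
# tap, so it passes exactly through the original samples.
```

Larger `a` (e.g. Lanczos 8) keeps more of the ideal sinc, giving slightly higher resolution retention but also more ringing near hard edges; Lanczos 2/3 are the usual compromises.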
Well, I've just tried Photozoom with S-Spline Max to downsample the Rings and it does seem better than Bicubic + Gaussian blur.
I only like PhotoZoom Pro's upsampling, the down-sampling is IMHO not good (I have to verify for the most recent version, maybe it has improved). But for upsampling it (S-Spline Max) is very benign on subtle structure, and it increases resolution on sharp edges and lines (the edges/lines remain thinner than the upsampling factor would make one expect, and it reduces/removes the jaggies). Imatest probably thinks that the MTF response no longer drops to zero at Nyquist, but keeps going to 2x Nyquist, i.e. double resolution. But that's not going to happen on non-edge detail, so the non-linear processing confuses Imatest.
Cheers,
Bart
Here are some results for downsampling using bicubic and bicubic sharper.
Fun isn't it? Here (http://www.strollswithmydog.com/downsizing-algorithms-effects-on-resolution/) is an article that uses a similar approach to gain insights on downsampling methods.
I'm a bit puzzled by this MTF plot from your article:
I don't understand how the original and nearest neighbor can be at 80% at Nyquist ... they should be at zero or close. The same applies, to a lesser extent, to the Bilinear and Bicubic curves (the latter beginning to look more like it should, but still very high).
When I run Imatest on the test image I get this curve:
Which seems much more reasonable, giving an MTF50 lw/ph of 3120. There does seem to be quite a bit of aliasing on the image: perhaps MTFMapper is getting confused by it?
The solid lines all refer to the same final pixels, the 4:1 downsized ones, so their results are as measured. The dashed original line is there for reference.
Did you resize the image 4:1 using the various methods?
MTF Mapper never gets confused, if anything it's operator error :) But in this case it looks like you are using the original edge at its native resolution so that's where the discrepancy comes from. And you are probably unknowingly adding a little sharpening somewhere in your workflow, because the MTF50 value in cy/px looks high. Have you tried running Imatest on the cropped tiff I provide there?
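For concreteness, here is a stripped-down 1-D version of what slanted-edge tools like Imatest and MTF Mapper compute (my own sketch; the real tools project a slanted edge to obtain a sub-pixel-sampled ESF first): differentiate the edge-spread function to get the line-spread function, then take the normalised FFT magnitude:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def edge_mtf(esf):
    """MTF from an edge-spread function: differentiate to get the
    line-spread function, then take the normalised FFT magnitude."""
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

edge = np.repeat([0.0, 1.0], 64)            # ideal step edge
blurred = gaussian_filter1d(edge, sigma=2.0)

mtf_sharp = edge_mtf(edge)                  # flat at 1.0: delta LSF
mtf_blur = edge_mtf(blurred)                # decays towards Nyquist

# Frequency axis in cycles/pixel for the 127-sample LSF.
freqs = np.fft.rfftfreq(len(edge) - 1)
```

Note that an ideal, unresampled step edge has a delta-function LSF and therefore a flat MTF of 1 at all frequencies, so a curve sitting high at Nyquist is not necessarily wrong for an unblurred original; any real capture or resampling blur pulls it down.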
Here is an image that was upsized by 2.95x, with all of the above steps. BTW ... these are very small flower-heads and the flowers have a grainy look ... the white dots are not caused by sharpening.
(http://www.irelandupclose.com/customer/LL/upsize.jpg)
In my opinion the white dots are specular reflections caused by harsh lighting of the flower. They are more prominent when using an undiffused flash and can be reduced by diffusing the flash or better yet by using a soft box or other means to produce soft lighting. I have often noted these artifacts when photographing orchids.
Bill
Yes, it's the dashed original, in particular, that I don't understand (the MTF curve seems way too high).
Right, that one is there for reference and its frequency axis is in lp/px: it has been scaled to reflect the different pixel size, assuming one would view the final images at the same size; see the bottom of the post.