Pages: 1 ... 5 6 [7] 8 9 10   Go Down

Author Topic: deconvolution sharpening plug in  (Read 54938 times)

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #120 on: February 25, 2016, 10:10:13 am »

Yes, I think it's clear that FM does a really great job up to MTF40, as can be seen below.  The MTF80 result is way better than the InFocus result, so the FM image should look sharper overall.  InFocus pulls up the MTF near Nyquist, but that seems to be caused by aliasing.  I sharpened the InFocus result a second time with a small radius and the jaggies are quite obvious at 300%, as can be seen in the image under the graphs.

The one thing I've found, though, is that it's really necessary to dial down the FM setting quite a bit.  For example, with this image the artifacts became strong at a radius of 5, and I had to drop the radius down to 2 to get a clean edge profile and MTF.  Even at 3 there is significant overshoot.

Hi Robert,

Which is what I'd expect if the image was correctly focused. On a perfectly focused image, taken with an aperture that strikes a balance between aberration reduction and diffraction (usually approx. 2 stops down from wide open), I usually get an optimal blur width of 1 or 2 in FM. That will boost the highest frequencies near Nyquist, and it lifts the entire MTF response.

When I apply FM to an upsampled image, e.g. for deconvolution output sharpening, I need to multiply the blur width by the upsampling factor (although I can nail the optimum width a bit more exactly, due to the potential super-resolution). So upsampling by a factor of 2x could lead to a blur width of approx. 3 instead of 2 or 4, just because it is possible to be more exact and interpolate between the initial 1x2 or 2x2.
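FocusMagic's internals aren't public, so treat this as a rough illustration only: the blur-width scaling can be mimicked with a Gaussian PSF and a minimal Richardson-Lucy loop (all names and values here are chosen for the sketch, not taken from FM):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from scipy.signal import fftconvolve

def gaussian_psf(sigma, size=15):
    """Normalized 2-D Gaussian point-spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, n_iter=10):
    """Minimal Richardson-Lucy deconvolution loop."""
    est = np.full(image.shape, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)
        est = est * fftconvolve(ratio, psf_mirror, mode="same")
    return est

rng = np.random.default_rng(0)
image = gaussian_filter(rng.random((64, 64)), sigma=1.0)  # synthetic blurred capture

# Native size: deconvolve with blur width ~1
deconv_native = richardson_lucy(image, gaussian_psf(1.0))

# After 2x upsampling the effective blur is twice as wide,
# so the PSF width handed to the deconvolution is scaled by the same factor
upsampled = zoom(image, 2, order=3)
deconv_up = richardson_lucy(upsampled, gaussian_psf(2.0))
```

The point is only the last two lines: after a 2x zoom, the PSF sigma given to the deconvolution is doubled as well.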

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

joofa

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 544
Re: deconvolution sharpening plug in
« Reply #121 on: February 25, 2016, 10:14:00 am »

What does a value of 0 to 1 tell you about the performance of an imaging system?  That it has more or less resolution? More or less acutance? More or less of both?

Let's not change the experiment and go on to a more general performance measurement of an imaging system, though I'm not implying that JIDM cannot be used there somewhere. If I got it right (and please correct me if I didn't), you are interested in running different algorithms (or software) and comparing the output as far as 'sharpness' goes. Correct? And a single number is fine for rank ordering in this case.

BTW, as noted before by others, the slanted edge method doesn't let you operate on natural images. The use of JIDM came up only because somebody asked how to use that antiquated slanted edge method on natural images. And, apparently, there is no direct way.


Correct; as with most single-number qualifiers, they only tell you that there is a difference. How significant that difference is, is anyone's guess.

I find it interesting that you don't know the internals of JIDM but are quick to jump to a conclusion of 'anyone's guess'. In experiments that is called bias.

And that's why it is an ISO approved method for measuring Resolution for digital scanners and cameras.

Are we measuring the resolution of digital scanners or cameras here? Or just making a simple comparison of different software/algorithms?

Implementations of the ISO procedure, like Imatest, also allow the data to be viewed in a number of ways, highlighting different aspects of the results. It is also one of the few methods that allows studying behavior at spatial frequencies higher than the Nyquist limit, because the slanted edge allows super-sampling the pixels at 4x the Nyquist frequency (it's actually sampling at close to 10x for a 5-6 degree slant, but for statistical robustness it bins the results into larger bins).
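The projection-and-binning step described above can be sketched on an ideal synthetic edge. This is a toy example, not ISO 12233 itself; real implementations also fit the edge angle and apply windowing:

```python
import numpy as np

# Synthetic slanted edge at a 5-degree angle, levels 0.2 / 0.8
h, w = 64, 64
angle = np.deg2rad(5)
y, x = np.mgrid[0:h, 0:w]
# Signed distance of each pixel centre from the edge line
dist = (x - w / 2) * np.cos(angle) - (y - h / 2) * np.sin(angle)
edge = np.where(dist < 0, 0.2, 0.8)

# Because of the slant, each scan line crosses the edge at a slightly
# different sub-pixel phase; binning all pixels by their projected
# distance at quarter-pixel resolution yields a 4x oversampled ESF.
bins = np.round(dist * 4).astype(int).ravel()
bins -= bins.min()
counts = np.bincount(bins)
sums = np.bincount(bins, weights=edge.ravel())
esf = sums / np.maximum(counts, 1)  # edge-spread function

# Differentiating the ESF gives the line-spread function, whose
# Fourier transform magnitude is the MTF.
lsf = np.diff(esf)
mtf = np.abs(np.fft.rfft(lsf))
mtf = mtf / mtf[0]
```

With a real photograph of an edge, the same binning recovers response beyond the pixel grid's Nyquist limit, which is the super-sampling property mentioned above.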


You can spend all the time praising such antiquated methods until the cows come home. OTOH, JIDM has some known limitations and one should use a judicious approach. However, as far as devising an automated way of comparing and rank ordering the 'sharpness' of natural images goes, JIDM does fine.
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #122 on: February 25, 2016, 11:56:17 am »

BTW, as noted before by others, the slanted edge method doesn't let you operate on natural images.

But the principles can be applied to any edge detail in an image. Any edge that is sufficiently contrasty, and not exactly parallel to one of the orthogonal axes of the pixel grid, intersects the pixel rows and columns at an angle. Once we see that, it becomes quite easy to even visually(!) judge the blur radius after sharpening. A blur radius that is too large will generate an over- / under-shoot. When the estimated blur width is exactly right, there is no over- / under-shoot and the amount setting can be used to make the transition (Imatest pixel profile) as steep as possible.

Quote
I find it interesting that you don't know the internals of JIDM but are quick to jump to a conclusion of 'anyone's guess'. In experiments that is called bias.

You misunderstood my comment, which was about single-figure quantifications of complex processes in general. But while we're at it, how relevant is a 0.01 or a 0.1 difference in the metric you devised? Is it significant or very significant? Is it a logarithmic scale or a linear one? Is it peak sharpness or global sharpness, chrominance- or luminance-sensitive, is it sensitive to image contrast, or ...

Besides, a single metric for sharpness can also be obtained from the JPEG file size after saving, or even from the standard deviation. Surely your metric is supposed to be somewhat more useful than that?
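To illustrate how trivial single-number 'sharpness' measures can be, here is the variance of the Laplacian, a common focus metric, next to a plain standard deviation that a simple contrast adjustment can fool (synthetic noise images, illustration only):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def sharpness(img):
    """Variance of the Laplacian: a common single-number focus metric."""
    return laplace(img.astype(float)).var()

rng = np.random.default_rng(1)
sharp = rng.random((128, 128))
blurred = gaussian_filter(sharp, sigma=2.0)

# Rescale the blurred image so its standard deviation matches the
# sharp one: plain std-dev can no longer tell them apart, while the
# Laplacian metric still sees the missing high frequencies.
restored = (blurred - blurred.mean()) / blurred.std() * sharp.std() + sharp.mean()
```

Either number rank-orders the pair, but neither says anything about *why* one image scores higher, which is the point about single-figure qualifiers.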

Quote
Are we measuring resolution of digital scanners or cameras here. Or just a simple comparison of different software/algorithms?

I'd say that the mention of deconvolution in the subject line was enough of an indication by itself that we are looking at signal restoration in the presence of noise. In the case of images, signals are a composite of multiple spatial frequencies, and temporal and electronic noise are part of the capture process.

Quote
You can spend all the time praising such antiquated methods until the cows come home.

I suggest you propose something better, e.g. to the ISO standards organization, to improve on their methods of analyzing image resolution in discrete digital capture devices.

Cheers,
Bart
« Last Edit: February 25, 2016, 12:24:26 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: deconvolution sharpening plug in
« Reply #123 on: February 25, 2016, 12:22:10 pm »


They on the other hand claim to be able to perform blind reversal of the effects of spatially-varying optical aberrations.  This is one mean feat and requires major computational power.

Hi Jack,

Well, here's a digitally created spatially-varying aberration (I think), and piccure did nothing to the image:



As you can see, I warped the image a bit and I can tell you that there is zilch difference between the original warped image and the one processed by piccure (I had Optical Aberrations set to NORMAL).

Have I misunderstood what they mean by 'spatially-varying optical aberrations'?  Are they talking about things like softness in rings caused by spherical aberration, for example?  I tried that by applying a 1px blur to a ring around the center of the image, but there was no stronger correction of the blurred ring (than of the rest of the image).

Quote
Incidentally, one of InFocus' neatest features is its one-click capture sharpening.  To use it, zero out the Sharpen section and set up the following as a preset; it comes straight from Dr. Albert Yang, President of Topaz:

Blur Type: Unknown/Estimate
Blur Radius: 2 (don't worry, it does not mean 2 pixels in this context)
Edge Softness: 0.3

The next time you want to capture sharpen an image bring it into InFocus, recall the preset and click the 'Estimate Blur' button.  Works pretty decently most of the time.
Jack

I tried this and it doesn't seem to work:





Very strong artifacts, as you can see.

Robert
« Last Edit: February 25, 2016, 12:29:17 pm by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: deconvolution sharpening plug in
« Reply #124 on: February 25, 2016, 12:36:29 pm »


When I apply FM to an upsampled image, e.g. for deconvolution output sharpening, I need to multiply the blur width by the upsampling factor (although I can nail the optimum width a bit more exactly, due to the potential super-resolution). So upsampling by a factor of 2x could lead to a blur width of approx. 3 instead of 2 or 4, just because it is possible to be more exact and interpolate between the initial 1x2 or 2x2.


So I take it that you do not apply FM before the upsampling?  Or do you apply it before at, say, a blur width of 1, and then re-apply FM to the 3x upsampled image with a blur width of 3?

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

joofa

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 544
Re: deconvolution sharpening plug in
« Reply #125 on: February 25, 2016, 12:44:33 pm »

a single metric for sharpness can also be gotten from the JPEG file size after saving, or even the standard deviation. Surely your metric is supposed to be somewhat more useful than that?


JPEG file size is not a good metric, as it also depends on image dimensions. JIDM, however, is independent of image dimensions and actually analyzes image content. Similarly, a direct comparison of standard deviations also has flaws; for one thing, it is affected by the overall brightness, etc.

JIDM has further advantages that it can be used in different sections of an image to identify areas of higher detail, etc.

See, I don't want to tout JIDM too much on this forum. I just presented it as a measure that acts on natural images because somebody asked.  Whereas the slanted edge method is not directly applicable - you can force it, but then it becomes a manual process of finding edges in an image, and no longer an automated process like JIDM.
« Last Edit: February 25, 2016, 12:49:08 pm by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: deconvolution sharpening plug in
« Reply #126 on: February 25, 2016, 01:36:41 pm »

See, I don't want to tout JIDM too much on this forum. I just presented it as a measure that acts on natural images because somebody asked.  Whereas the slanted edge method is not directly applicable - you can force it, but then it becomes a manual process of finding edges in an image, and no longer an automated process like JIDM.

If you could tell us what exactly the JIDM metric says about the image, then it might be useful for seeing whether one sharpening algorithm is better than another on, say, a landscape photograph.  What, for example, is the meaning of the metric going from .0538 to .0629 on the photo of the swan?  Is a difference of .0091 significant?

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #127 on: February 25, 2016, 02:42:30 pm »

So I take it that you do not apply FM before the upsampling?  Of do you apply it before at, say, a blur width of 1, and then re-apply FM to the 3x upsampled image with a blur width of 3?

It depends. I usually create a sharpening layer based on the original Raw conversion image size. But, because it is a layer, I can disable it, e.g. before down-sampling, where it would only increase the risk of generating aliasing artifacts. Before upsampling I have a choice: I can either do another round of sharpening on the already sharpened and then upsampled image, or switch off the first sharpening layer, redo it at the larger image size with a larger blur width setting (and usually a larger amount), and then choose which combination to use. The latter (a single sharpening layer at the larger size) has the benefit of a lower risk of amplifying artifacts that may have been caused at the smaller image size but were not objectionable at that size.

Cheers,
Bart
« Last Edit: February 25, 2016, 04:25:24 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: deconvolution sharpening plug in
« Reply #128 on: February 25, 2016, 03:34:37 pm »

Thanks Bart ... that makes sense.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Hening Bettermann

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 945
    • landshape.net
Re: deconvolution sharpening plug in
« Reply #129 on: February 25, 2016, 04:40:41 pm »

Hi Bart,

>I can disable it e.g. before down-sampling where it would only increase the risk of generating aliasing artifacts.

I am surprised to read that. I thought down-sampling would also decrease artifacts? If memory serves me, I remember that you even favoured a workflow of first up-, then down-sampling for the sole purpose of doing just that?

Kind regards - Hening

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #130 on: February 25, 2016, 06:21:30 pm »

Hi Bart,

>I can disable it e.g. before down-sampling where it would only increase the risk of generating aliasing artifacts.

I am surprised to read that. I thought down-sampling would also decrease artifacts?

Hi Hening,

When we increase the high spatial frequency amplitudes at the native Raw conversion size, we also create better modulated spatial frequencies, some of which are most likely too small to still be resolvable once down-sampled. Any detail that is too small to be resolved at the smaller size, especially when it is well resolved, will create aliasing artifacts. So my advice is to not sharpen before downsampling; in fact, one can benefit from blurring (or using appropriate windowing algorithms) before downsampling.
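This advice can be demonstrated with a toy decimator (the sigma = factor/2 rule of thumb below is an assumption for the sketch, not a prescription):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(img, factor, preblur=True):
    """Decimate by an integer factor, optionally low-pass filtering first."""
    if preblur:
        # Assumed rule of thumb: sigma proportional to the reduction
        # factor suppresses most energy above the new Nyquist frequency
        img = gaussian_filter(img, sigma=factor / 2.0)
    return img[::factor, ::factor]

# A grating at 0.45 cycles/pixel is resolvable at the original size,
# but after 2x decimation it lies above the new Nyquist limit
# (0.25 cy/px on the original grid), so it folds back as a coarse
# false pattern if decimated without the pre-blur.
x = np.arange(256)
grating = 0.5 + 0.5 * np.sin(2 * np.pi * 0.45 * x)
img = np.tile(grating, (64, 1))

aliased = downsample(img, 2, preblur=False)
clean = downsample(img, 2, preblur=True)
```

Without the pre-blur, the fine grating reappears as a strong coarse pattern; with it, almost nothing survives above the new Nyquist limit, which is exactly the trade Bart describes.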

Quote
If memory serves me, I remember that you even favoured a workflow of first up-, then down-sampling for the sole purpose of doing just that?

Correct, but here we have a special case. Usually, upsampling doesn't increase resolution; it just upsamples/dilutes what is there at the smaller size. What sharpening at the larger size achieves is, first, to compensate for the upsampling blur, and second, to restore the original signal resolution if it wasn't already deconvolved. So in theory, upsampled resolution will be close to what can be resolved at the original/smaller size, just bigger. If that is the case, and we didn't overdo it before down-sampling, then there is no image content with spatial frequencies that exceed the Nyquist frequency at the smaller size, and hence no aliasing.

If we sharpen at the larger size for direct output, then we can overdo it a bit to pre-compensate for expected losses later in the print process, due to media losses e.g. caused by ink diffusion and dithering.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: deconvolution sharpening plug in
« Reply #131 on: February 26, 2016, 04:25:59 am »

Any detail that is too small to be resolved at the smaller size, especially when it is well resolved, will create aliasing artifacts. So my advice is to not sharpen before downsampling; in fact, one can benefit from blurring (or using appropriate windowing algorithms) before downsampling.


Hi Bart,

In the image below, on the left I applied FM radius 2 amount 100 then resized to 50% using Bicubic.  On the right I resized to 50% using Bicubic and then applied FM radius 1 amount 100.



It looks like the whole curve is pulled up (on the sharpen-after), probably a bit too much.

I reduced the sharpening after resize to FM1/75 and also to FM1/100 followed by a sharpened-layer opacity reduction to 80%:



I would be interested in your interpretation.  What I see is that none of these methods introduces aliasing (because the image is softish to start off with and the sharpening I applied before resizing was quite low?), but the sharpening after resizing improves the low frequencies compared to the sharpening before resizing (any logical reason for this?).  It also seems better to reduce the sharpened-layer opacity rather than the deblur amount (but that could just be because the layer opacity is more easily controlled than the deblur amount).

Robert
« Last Edit: February 26, 2016, 04:29:50 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Hening Bettermann

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 945
    • landshape.net
Re: deconvolution sharpening plug in
« Reply #132 on: February 26, 2016, 07:35:06 am »

Hi Bart,

thank you for your detailed reply.
So sharpening before downsizing can be beneficial if the larger size was achieved by upsampling first, not if it is the shooting size - correct?

What would be the benefit of such upsampling first? better visibility when adjusting the parameters? this is what I read from your post #120.

So my take-away so far is:
Preferably, sharpening should be done at output size. After downsampling for web, after upsampling for (large) prints. The concept of *capture* sharpening is kind of fading away.
It might be replaced by sharpening for the monitor size as the primary "output".

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #133 on: February 26, 2016, 08:06:04 am »

Hi Bart,

In the image below, on the left I applied FM radius 2 amount 100 then resized to 50% using Bicubic.  On the right I resized to 50% using Bicubic and then applied FM radius 1 amount 100.

[...]

It looks like the whole curve is pulled up (on the sharpen-after), probably a bit too much.

I reduced the sharpening after resize to FM1/75 and also to FM1/100 followed by a sharpened-layer opacity reduction to 80%:

[...]

Based on the technical data and charts, we can see (on the edge pixel profiles) that there is a small edge halo in all versions. Ideally we would have no halo, but that is almost impossible if the image has to be sharp as well. That is why I use a Blend-If layer rather than a reduced opacity: it keeps the deconvolution sharpening at full strength except on very high contrast edges and lines, to avoid clipping and the most annoying high-contrast halos.
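Photoshop's Blend-If operates on layer tonal values; the *intent* here (full-strength sharpening except near high-contrast edges) can be approximated with a local-contrast mask like this sketch (the function name and thresholds are made up for illustration):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def protect_contrasty_edges(base, sharpened, lo=0.5, hi=0.8):
    """Blend the sharpened layer at full strength in low-contrast areas
    and fade it out where local contrast is high (lo/hi are
    made-up illustration thresholds)."""
    # Local contrast: value range within a 5x5 neighbourhood
    local_range = maximum_filter(base, size=5) - minimum_filter(base, size=5)
    weight = np.clip((hi - local_range) / (hi - lo), 0.0, 1.0)
    return weight * sharpened + (1.0 - weight) * base

# Hard step edge: full contrast at the edge, flat elsewhere
base = np.zeros((32, 32))
base[:, 16:] = 1.0
sharpened = base + 0.1  # stand-in for a deconvolution-sharpened layer

out = protect_contrasty_edges(base, sharpened)
```

In the flat regions the output follows the sharpened layer; right at the high-contrast step it reverts to the unsharpened base, which is the halo-suppression behaviour described above.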

Also, we can see that there is some aliasing potential (the shaded region at the bottom right under the MTF curves) in all versions; again, hard to totally avoid, but something to be cautious about. Any method that combines a boost in the spatial frequencies to the left of the Nyquist limit marker with a very low response to the right of it should be our goal. But that is hard to achieve on normal images without introducing other artifacts, so it remains a balancing act.

I still think that the ultimate test to verify the creation of down-sampling artifacts is to down-sample a zone-plate kind of target (like this one). Such a target is super critical and has many spatial frequencies in all orientations, and its repetitive sinusoids show deviations from the expected pattern with merciless clarity (not only aliasing, but also blocking and ringing artifacts will break the smooth patterns). That will also show that pre-blurring before downsampling reduces the artifacts, but that sharpening after resampling brings some of that out again, just as the Imatest charts predict. Another useful natural-subject image for testing is this one, with many thin lines and sharp edges at slightly different angles. Down-sampling that will also expose many issues if the process is not of high enough quality.
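A generic sinusoidal zone plate in the same spirit can be generated in a few lines (this is a stand-in sketch, not the exact target linked above):

```python
import numpy as np

def zone_plate(size=256):
    """Sinusoidal zone plate: the local spatial frequency increases
    linearly with radius, reaching the Nyquist frequency (pi rad/px)
    at the image edge."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    # Phase proportional to r^2 gives instantaneous frequency ~ r
    phase = np.pi * (xx**2 + yy**2) / size
    return 0.5 + 0.5 * np.cos(phase)

plate = zone_plate()

# Naive decimation aliases everywhere the local frequency exceeds
# the new Nyquist limit, producing spurious rings.
decimated = plate[::2, ::2]
```

Viewing `decimated` shows the spurious low-frequency rings; running a candidate downsampler on `plate` instead makes its aliasing, blocking, and ringing behaviour immediately visible.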

Also, to complicate matters further, Bicubic filtered down-sampling is not perfect and introduces some artifacts by itself. However, it is not that easy to devise a better down-sampler, because there will always be other trade-offs to consider (although Lanczos2 or Lanczos3 windowed downsampling is often pretty usable). In this thread I tried to create a best compromise, but it requires an external image-processing library (it's free, though). It also uses different filters/methods for up- and down-sampling.

Image processing of natural images involves a lot of trade-offs: some image content is better suited to one approach while other content benefits from another, and both are often combined in the same image. The need for sharpening is inherently linked to the capture process, which blurs image content, and resampling also blurs and/or reduces contrast. Therefore there is no single best solution. But if our tools allow a good preview of the effects, and we use some of the insights we can get from analyzing images with tools like Imatest, we can get quite far.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8914
Re: deconvolution sharpening plug in
« Reply #134 on: February 26, 2016, 08:15:02 am »

Hi Bart,

thank you for your detailed reply.
So sharpening before downsizing can be beneficial if the larger size was achieved by upsampling first, not if it is the shooting size - correct?

Yep.

Quote
What would be the benefit of such upsampling first? better visibility when adjusting the parameters? this is what I read from your post #120.

Yes, because of the additional pixels one can make the corrections more accurately. However, this is most often used when the upsampled image is the goal (e.g. for native printer PPI matching). It can be used to downsample again to the original size, but that assumes a decent downsampling routine. In most cases, direct sharpening at the target size is a good approach.

Quote
So my take-away so far is:
Preferably, sharpening should be done at output size. After downsampling for web, after upsampling for (large) prints. The concept of *capture* sharpening is kind of fading away.

Well, it is still capture sharpening, just not necessarily at the captured size. The capture process is inherently blurry, so it doesn't matter much when we address it, although it is often done early in the process to avoid introducing too many non-linearities that make proper deconvolution harder to achieve. What we really need is better capture sharpening tools in the Raw converter. Most of the current 'solutions' also cause a lot of confusion and issues, and most of that is avoidable, IMHO.

Quote
It might be replaced by sharpening for the monitor size as the primary "output".

If that is the goal, yes. Working with layers allows flexibility to switch on/off certain sharpening approaches if they get in the way of further processing, or to only apply them locally.

Cheers,
Bart
« Last Edit: February 26, 2016, 09:29:55 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

earlybird

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 331
Re: deconvolution sharpening plug in
« Reply #135 on: February 26, 2016, 09:03:36 am »

This morning I made some tests for myself.

I created a 5° slanted edge image and ran numerous sharpening processes on respective layers. It was very clear how each plugin works on slanted edges, and as I adjusted parameters it was easy to see which plugins worked well with either more or less input from the user. Focus Magic doesn't seem to have much user input, but the results are clean. Topaz InFocus has lots of input, and you can find settings to get what you want. Piccure+ has a moderate number of choices, but all the results seem characterized by a general tendency to give you what it gives you, regardless of what you change. I'd say that Focus Magic is the one-click winner of the slant edge test, but that InFocus can easily match it if you know how to set the parameters. Piccure+ did not have a subtle effect on the slant edge, but I did notice that the Optical Aberration=micro setting made the pronounced edge line appear much smoother than the others. If you actually wanted a pronounced edge, the smoothness piccure produced seemed very deluxe.

I then took what I esteemed to be the better settings for each plugin and ran them on the photo of the Black-Crowned Night Heron. I think it is a lot less clear which plugin offers the best results there. I noted that Topaz InFocus seemed to require more input from the user but also offered the most appreciable variance in results, so I ran a few extra processes with it to see what it could do. On this photo I find myself appreciating the results that Piccure+ provides. I also like the results I got with InFocus, and I appreciate the sense that I can influence the output with changes in settings. Focus Magic seems OK, but I feel like it leaves the image less sharp than the other choices.

I have made and uploaded two .psd files with stacked and labeled layers so anyone can easily compare the output of the plugins. The labeling is a shorthand for the parameters available in each plugin. The labeling should not seem too puzzling to anyone who has the plugins and can look at the parameter labeling in the respective GUI. I zero'd out the Sharpen setting in InFocus so I didn't specify those settings on the InFocus layers.

The .psd files were cropped to a smaller size to make upload/download easier. The .psd files are 16bt ProPhotoRGB.

I attached jpegs of the cropped examples to this post so as to provide some idea of what you will find in the psd files.

You can download the zip file here: https://www.dropbox.com/s/ly4cbdsyiygjj9i/Sharpen%20Tests.zip?dl=0

 
« Last Edit: February 26, 2016, 09:10:40 am by earlybird »
Logged

Tim Lookingbill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2436
Re: deconvolution sharpening plug in
« Reply #136 on: February 26, 2016, 12:23:35 pm »

Hi Bart,

thank you for your detailed reply.
So sharpening before downsizing can be beneficial if the larger size was achieved by upsampling first, not if it is the shooting size - correct?

What would be the benefit of such upsampling first? better visibility when adjusting the parameters? this is what I read from your post #120.

So my take-away so far is:
Preferably, sharpening should be done at output size. After downsampling for web, after upsampling for (large) prints. The concept of *capture* sharpening is kind of fading away.
It might be replaced by sharpening for the monitor size as the primary "output".

If you go back and examine the screenshots I posted of the white duck, you'll see that the Highpass-sharpened version on the far right is a sort of output sharpening for downsizing to the full-frame 1000x668px version on top. So basically I went from 4MP (2536x1690px / 10.5x7in), upsampled to 8640x5758px (36x24in at 240ppi) in LR4, then downsized (with the Highpass sharpen layer) to 4x3in at 240ppi, so it looks sharper than if I'd just opened the original 4MP and downsized to 4x3in (1000x668px).

With regular 6MP Raws that I sharpen for posting online at 700px on the long end, ACR/LR's sharpening often isn't enough to override the softening introduced by downsizing this small. I just turn on sharpening for Glossy Prints, set to Standard or High, in CS5 ACR/LR4. It does a pretty decent job.
Logged

Jack Hogan

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: deconvolution sharpening plug in
« Reply #137 on: February 26, 2016, 01:23:38 pm »

... Topaz InFocus seemed to require more input from the user ...

Just curious, did you try InFocus' one-click mode as described earlier?

EDIT: Including the setting that I forgot, 'Suppress Artifacts = 0.2'
« Last Edit: February 26, 2016, 01:30:09 pm by Jack Hogan »
Logged

Jack Hogan

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: deconvolution sharpening plug in
« Reply #138 on: February 26, 2016, 01:29:21 pm »

I tried this and it doesn't seem to work:

Ugh, right.  Two things:

1) I forgot one setting for the preset: 'Suppress Artifacts = 0.2'
2) How are these images getting to InFocus?  The settings I gave are for capture sharpening unsharpened raw images, if they have already been pre-sharpened by LR for instance all bets are off.

Jack
Logged

Hening Bettermann

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 945
    • landshape.net
Re: deconvolution sharpening plug in
« Reply #139 on: February 26, 2016, 01:52:40 pm »

@#136
Thanks Tim. Yes, I can see: the whole image on top looks sharp, without the grainy look I saw on the sharpened crop at right.
 
@#134
Thanks again, Bart.

> What we really need is better Capture sharpening tools in the Raw converter.

1- But that would seldom be the output size.
2- I wonder how Iridient's sharpening would fare in comparison.
Robert, would you care to make a raw of your test shot available, let me try Iridient on it, and then analyse it with Imatest?