
Author Topic: Sharpening ... Not the Generally Accepted Way!  (Read 59123 times)

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #120 on: August 13, 2014, 07:28:08 am »


Quote
It is in fact extremely unlikely (virtually impossible) that simply adding a halo facsimile of the original image will invert a convolution (blur) operation. USM is only trying to fool us into believing something is sharp, because it adds local contrast (and halos), which is very vaguely similar to what our eyes do at sharp edges.


Yes, I understand ... when I said 'a filter like that' I was referring to a convolution kernel operation, not the traditional USM overlay.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #121 on: August 13, 2014, 08:14:17 am »

Quote
Why is the 2-d neighborhood weighting used in USM so fundamentally different from the 2-d weighting used in deconvolution, aside from the actual weights?

It's not a difference in weighting, but in what the result is used for. In the case of USM, the kernel is used to create a halo facsimile of our blurred image, which is then added back to the image.
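
In numpy terms, the whole operation is roughly this (an illustrative sketch, not any particular product's implementation; the radius and amount values are arbitrary):

  import numpy as np
  from scipy.ndimage import gaussian_filter

  def unsharp_mask(image, radius=1.0, amount=0.5):
      # Blur the image, keep the difference (the 'halo facsimile'),
      # then add a scaled copy of that difference back to the original.
      img = image.astype(np.float64)
      blurred = gaussian_filter(img, sigma=radius)
      halo = img - blurred   # the local-contrast layer, halos included
      return np.clip(img + amount * halo, 0.0, 1.0)

Note that nothing in there attempts to invert the blur; the blurred copy is only used to build the contrast-enhancing layer.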

Quote
What is a fair frequency-domain interpretation of USM?

I don't think one can predict the effect that adding two images has on the frequency domain representation of that 'sandwich'. The result is more like a contrast-adjusted version of the image: the spatial frequencies do not really change, just their amplitudes.
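
One can check that empirically with the unsharp_mask sketch from above (in the purely linear case, before any clipping, the per-frequency amplitude gain works out to 1 + amount * (1 - G(f)), with G the Gaussian's transfer function):

  import numpy as np

  img = np.random.rand(256, 256)   # stand-in for a real grayscale image
  spec_before = np.abs(np.fft.fft2(img))
  spec_after = np.abs(np.fft.fft2(unsharp_mask(img)))
  # Per-frequency amplitude gain: no new frequencies appear; existing
  # high-frequency amplitudes are simply boosted.
  gain = spec_after / np.maximum(spec_before, 1e-12)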

Quote
It would aid my own (and probably a few others') understanding of sharpening if there were a concrete description (i.e. something other than mere words) of USM and deconvolution in the context of each other, ideally showing that deconvolution is a generalization of USM.

The problem is that they are different beasts altogether; nothing connects their operations. USM adds a contrast-enhancing layer, while deconvolution rearranges bits of spatial frequencies that got scattered to neighboring pixels. Again, check out the explanations given in Doug Kerr's article and the Cambridge in Colour article. The latter literally states: "An unsharp mask improves sharpness by increasing acutance, although resolution remains the same". Sharpness is a perceptual qualification; resolution is an objectively measurable quantification.

Quote
I believe that convolution can be described as:
[...]
This is about where my limited understanding of deconvolution stops.

Yes, that's about correct.

Quote
You might want to tailor the pseudoinverse w.r.t. (any) knowledge about noise and/or signal spectrum (à la Wiener filtering), but I have no idea how blind deconvolution finds a suitable inverse.

That remains a challenge, especially because, as you noted before, the blur function (PSF) is spatially variant across the image, and noise complicates things (distinguishing random noise from a photon shot noise signal requires statistical probabilities; it's not exact). The usual approach (other than using prior knowledge/calibration of the imaging system, like DxO does with their Raw converter, which also calibrates for different focus distances) is trial and error: just try different shapes and sizes of PSFs and 'see' what produces 'better' results (according to certain criteria). Fortunately, Gaussian PSF shapes do a reasonably good job, which leaves a bit less uncertainty, but it remains an ill-posed problem to solve. That's because multiple mathematical error-minimization solutions are possible, with no way to predict which one will produce the best-looking result.
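
For reference, a (non-blind) Wiener-style filter is compact enough to sketch. It leans on exactly the simplifications that do not hold for real photographs: a known, spatially invariant PSF and a single constant noise-to-signal ratio:

  import numpy as np

  def wiener_deconvolve(blurred, psf, nsr=0.01):
      # Pad the PSF to the image size and centre it at the origin.
      kernel = np.zeros_like(blurred, dtype=np.float64)
      kh, kw = psf.shape
      kernel[:kh, :kw] = psf
      kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))

      H = np.fft.fft2(kernel)
      G = np.fft.fft2(blurred.astype(np.float64))
      # conj(H) / (|H|^2 + NSR) instead of a bare 1/H, so near-zero
      # frequency components do not amplify noise without bound.
      F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
      return np.real(np.fft.ifft2(F))

Blind methods effectively have to guess the psf (and nsr) as well, which is where the trial and error comes in.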

Cheers,
Bart
« Last Edit: August 13, 2014, 08:50:07 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #122 on: August 13, 2014, 08:41:45 am »

Hi Bart,

Quote
I've tried your PSF generator and I must be using it incorrectly, as the figures I get are very different from yours.

Not really all that different, although I did mention that I tweaked the PS Custom filter a bit (to beat it into submission). I tend to use the larger fill-factor percentages, because they produce a shape closer to what a digital sensor actually samples from the Gaussian blur (slightly less peaked).

Quote
I don't understand 'fill factor' for example - and I just chose the pixel value to be as close to 999 as possible.

The fill factor tries to account for the aperture sampling of the sensels of our digital cameras. Instead of a point sample (which produces a pure 2D Gaussian), a (sensel) fill-factor of 100% would use a square pixel aperture to sample the 2D Gaussian for each sensel without gaps between the sensels (as with gap-less micro-lenses). It's just a means to approximate the actual sensel sampling area a bit more realistically, although it's rarely a perfect square.
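
To make that concrete, here is one way to build such a PSF (my own illustrative reconstruction, not necessarily how the generator defines its fill factor): each kernel value is the 2D Gaussian integrated over a square pixel aperture whose area equals the fill factor, via the error function:

  import numpy as np
  from scipy.special import erf

  def gaussian_psf(size, sigma, fill_factor=1.0):
      # fill_factor -> 0 approaches point sampling of the Gaussian;
      # fill_factor = 1.0 integrates over the full, gap-less pixel
      # area, giving the slightly less peaked shape described above.
      half = np.sqrt(fill_factor) / 2.0   # half-width of the aperture
      centers = np.arange(size) - (size - 1) / 2.0

      def segment(x):
          # Integral of a 1-D Gaussian over [x - half, x + half].
          if half == 0:
              return np.exp(-x**2 / (2 * sigma**2))  # point sample
          s = sigma * np.sqrt(2.0)
          return 0.5 * (erf((x + half) / s) - erf((x - half) / s))

      row = segment(centers)
      psf = np.outer(row, row)            # separable 2-D kernel
      return psf / psf.sum()              # normalise to unit volume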

On top of that, I adjusted the PS Custom filter kernel values a bit to improve the limited calculation precision and reduce potential halos from a mismatched PSF radius/shape, but your values would produce quite similar results, although probably with a different Custom filter Scale value than I ultimately arrived at. If only that filter would allow larger kernels and floating-point values as input, we could literally copy values at a scale of 1.0 ...
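
The scaling dance amounts to something like this hypothetical helper (the Custom filter divides the weighted sum by the Scale entry, so using the rounded kernel's sum as the Scale keeps flat areas at unity gain; zero-sum kernels would instead need Scale 1 and Offset 128):

  import numpy as np

  def to_custom_filter(kernel, max_value=999):
      # Map a floating-point kernel onto the Custom filter's integer
      # entries (-999..999); the rounding here is precisely the
      # limited precision discussed above.
      factor = max_value / np.abs(kernel).max()
      ints = np.rint(kernel * factor).astype(int)
      scale = int(ints.sum())             # must be >= 1 in Photoshop
      return ints, scale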

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #123 on: August 13, 2014, 02:03:18 pm »

Quote
Not really all that different, although I did mention that I tweaked the PS Custom filter a bit (to beat it into submission). I tend to use the larger fill-factor percentages, because they produce a shape closer to what a digital sensor actually samples from the Gaussian blur (slightly less peaked).

The fill factor tries to account for the aperture sampling of the sensels of our digital cameras. Instead of a point sample (which produces a pure 2D Gaussian), a (sensel) fill-factor of 100% would use a square pixel aperture to sample the 2D Gaussian for each sensel without gaps between the sensels (as with gap-less micro-lenses). It's just a means to approximate the actual sensel sampling area a bit more realistically, although it's rarely a perfect square.

On top of that, I adjusted the PS Custom filter kernel values a bit to improve the limited calculation precision and reduce potential halos from a mismatched PSF radius/shape, but your values would produce quite similar results, although probably with a different Custom filter Scale value than I ultimately arrived at. If only that filter would allow larger kernels and floating-point values as input, we could literally copy values at a scale of 1.0 ...


Thanks for the explanation Bart (although I'm not sure to what extent I understand it - but I'll take your word for it that a fill factor of 100% will give a square pixel aperture rather than a round one).

Going back to more basic basics, and looking at this image:

[image attachment]

I expected that the second convolution kernel would deconvolve the first one - but clearly it doesn't.  The reason seems to be that at the edges the subtraction of black is greater than the addition of grey, so we get the dreaded halo.

I messed around a bit with your PSF generator, but I could not come up with a proper convolution/deconvolution.  Could you explain what is going wrong?  

Robert

« Last Edit: August 13, 2014, 02:19:54 pm by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

ppmax2

  • Jr. Member
  • **
  • Offline
  • Posts: 92
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #124 on: August 14, 2014, 12:09:09 am »

I'd like to post a real-world sample of deconvolution applied to a challenging image. This shot was taken at sunset, and exposure was set so the red channel wouldn't clip (shot with a 5D3). As such, the backside of this telescope was noisy and underexposed. In the unedited image, the shadow areas were a mush of blotchy blues and reds. Although these images don't show it, the upper-left portion of the frame is a firehose of deep purples, reds, and oranges, with flecks of high-intensity light reflecting off the clouds (this shot was taken at 14.5K feet on Mauna Kea). Retaining the vibrance of the sky while pulling detail from the backside of the telescope was my goal.

I'd like to thank Fine_Art who helped me with this image, and provided some great guidance during my first experiences with RawTherapee. After helping me suppress noise in the shadows (RT has great tools for this) he then advised me to ditch USM and try deconvolution instead. I've since worked on this image quite a bit and am blown away by what RT could recover.

First image is LR, no settings (LR-Import.JPG).

2nd image is LR with white balance and a tone curve to bump up the shadows a bit, so that the RGB+L values in various regions of the image are similar to those same regions in RawTherapee, with USM and noise reduction applied. Also added lens correction and adjusted CA. I don't think I can be accused of over-sharpening... I purposely tried to avoid USM halos. (LR-Final.JPG)

3rd image is RawTherapee with tone, color, deconvolution, noise reduction, lens correction, CA, etc. (RT-Final.JPG)

FYI: these are screen captures, not exports.

I am sure someone here could do better than my LR-Final...but I doubt anyone could do better in LR vs. the RT-Final. I'm happy to post the CR2 if anyone wants to take a shot.

There are regions in the LR-Final where virtually all detail is lost, even with only moderate noise reduction. For comparison, in the RT-Final, each vertical line on the surface of the two pillars is clear and distinct...yet the sky is buttery smooth. Some will accuse me of too much NR in the RT-Final image...but it looks fantastic on screen and in print  ;)

FWIW: glad to see someone has referenced Roger Clark's excellent articles...which are worth a read in toto. I'm sure he doesn't need any cheerleaders, but this fellow has a substantial background in digital photography and digital image processing. From his site:
Quote
Dr. Clark is a science team member on the Cassini mission to Saturn, Visual and Infrared Mapping Spectrometer (VIMS) http://wwwvims.lpl.arizona.edu, a Co-Investigator on the Mars Reconnaissance Orbiter, Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) team, which is currently orbiting Mars, and a Co-Investigator on the Moon Mineralogy Mapper (M3) http://m3.jpl.nasa.gov, on the Indian Chandrayaan-1 mission which orbited the moon (November, 2008 - August, 2009). He was also a Co-Investigator on the Thermal Emission Spectrometer (TES) http://tes.asu.edu team on the Mars Global Surveyor, 1997-2006.


I'm not promoting one workflow/technology/tool over another. However, I think the result I achieved (with help from Fine_Art) speaks for itself.

PP
Logged

Schewe

  • Sr. Member
  • ****
  • Offline
  • Posts: 6229
    • http://www.schewephoto.com
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #125 on: August 14, 2014, 12:40:31 am »

Quote
I'm not promoting one workflow/technology/tool over another. However, I think the result I achieved (with help from Fine_Art) speaks for itself.

Really hard to separate demosaicing from sharpening. RawTherapee has a different demosaicing than LR. Part of what I'm seeing is the original demosaicing plus sharpening.

So, it's really not an apples-to-apples comparison...
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #126 on: August 14, 2014, 12:58:16 am »

Quote
Really hard to separate demosaicing from sharpening. RawTherapee has a different demosaicing than LR. Part of what I'm seeing is the original demosaicing plus sharpening.

So, it's really not an apples-to-apples comparison...

Yes, RT's AMaZE demosaicer, written by this forum's Emil M., does give a big advantage to RT. The deconvolution has better information to start with.
Logged

ppmax2

  • Jr. Member
  • **
  • Offline
  • Posts: 92
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #127 on: August 14, 2014, 02:10:40 am »

Quote
Really hard to separate demosaicing from sharpening. RawTherapee has a different demosaicing than LR. Part of what I'm seeing is the original demosaicing plus sharpening.

So, it's really not an apples-to-apples comparison...


But is no comparison valid? I'm not dismissing your point...however:

As stated in my post, I tried using USM in RT, which, as you point out, uses a different demosaicing algorithm, and was encouraged by Fine_Art to try deconvolution instead. Given the same data, deconvolution produced better results. Perhaps I could have shown this; perhaps I'll post a USM sample later. But let's not lose sight of the bigger picture...

For all intents and purposes, each tool is a black box that applies transforms to input data. While the methods each black box employs may be interesting and ripe for discussion, the final result is what matters most to me. To what degree did the demosaicer contribute to the end result vs deconvolution? I admit I don't really care if it was 10%, 49%, or 99%. That question is better answered by someone who is more interested in pixels vs pictures, and that discussion quickly devolves into hair-splitting. This is not to say that factoring the demosaicer out of the equation is without merit. But these algorithms are not reasonably separable or transposable between tools...and the USM interface in RT is different vs LR as well...so I don't think it's reasonably possible to compare apples to apples. Implementations differ, and implementation matters.

But to be fair to your point I'll backtrack from my OP and state that the images are an example of what RT's deconvolution + demosaicer were able to achieve.

I think the RT render has several characteristics that are objectively superior to results I was able to achieve in other similar tools (LR, Aperture, C1). Each tool has its strengths and weaknesses, and it's too bad it's not reasonably possible to combine the best features of each tool into a single app or coherent workflow. Until then, I'll use the tool that yields the best result for each image I process and suffer the inconvenience of doing so.

Pp
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #128 on: August 14, 2014, 05:12:55 am »

Quote
Going back to more basic basics, and looking at this image:

[image attachment]

I expected that the second convolution kernel would deconvolve the first one - but clearly it doesn't.  The reason seems to be that at the edges the subtraction of black is greater than the addition of grey, so we get the dreaded halo.

Hi Robert,

There are several (possible) causes for the imperfect restoration, part of which may be due to Photoshop's somewhat crude implementation of the function. A 3x3 kernel with all 1's will effectively average away all distinction between individual pixels, so it is lossy (and hard to invert without artifacts like ringing). Then there is the rounding/truncation of intermediate value accuracy as the kernel contributions are added, the rounding/truncation of the intermediate blurred dot version's pixels, and a possible issue with clipping of black values (no negative pixel values are possible). In other words, a difficult, if at all possible, task.
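
The lossiness is easy to see in the frequency domain: the transfer function of the uniform averaging kernel crosses zero, and whatever information lands on a zero is gone for good. A quick illustrative check:

  import numpy as np

  # 1-D transfer function of the averaging kernel [1, 1, 1] / 3.
  f = np.linspace(0.0, 0.5, 257)               # frequency, cycles/pixel
  H = (1.0 + 2.0 * np.cos(2.0 * np.pi * f)) / 3.0
  print(f[np.argmin(np.abs(H))])               # ~1/3 cy/px: a true zero
  # 1/H blows up around that zero, which is why a naive inverse kernel
  # must overshoot at edges, as seen in your halo example.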

The dot seems to be upsampled, so I cannot check what a different deconvolver, e.g. the one in ImageJ, which is a much better implementation, would have done. That would allow me to estimate the influence of the calculation accuracy, but it will remain a rather impossible deconvolution.

As a compromise, you can increase the central kernel value (and adjust the scale) so that the pixel 'under investigation' contributes a proportionally larger part to the total solution and the restoration attempt is less rigorous (which should 'tame' the edge overshoot). But again, such a crude method and limited precision will not produce a perfect restoration. One would get better results by performing such calculations in floating point, and not in the spatial domain but in the frequency domain, but that is a whole other level of abstraction if one is not familiar with it.
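
In kernel terms, boosting the central value is roughly a blend between the identity kernel and the full deconvolution kernel. A sketch, where deconv_kernel.txt is a hypothetical text file holding a square, odd-sized kernel such as one from my generator:

  import numpy as np

  deconv_kernel = np.loadtxt("deconv_kernel.txt")
  c = deconv_kernel.shape[0] // 2
  identity = np.zeros_like(deconv_kernel)
  identity[c, c] = 1.0
  strength = 0.7        # 1.0 = full restoration attempt, 0.0 = no-op
  tamed = (1.0 - strength) * identity + strength * deconv_kernel
  tamed /= tamed.sum()  # keep overall gain at unity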

A more realistic deconvolution was the one with the 0.7 Gaussian blur kernel that I shared. Natural images, such as the one of your power lines, have a minimum unavoidable amount of blur, which can largely (though not totally) be reversed with common deconvolution methods; better implementations than a simple inverse kernel (e.g. FocusMagic or a Richardson-Lucy deconvolution) achieve better results still. Make sure to use at least 16-bit/channel image data for these operations; it allows more precise calculations and reduces the effects of intermediate round-off issues.
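
For completeness, the Richardson-Lucy iteration is short enough to sketch (ready-made versions exist, e.g. skimage.restoration.richardson_lucy; this assumes linear float data and a PSF normalised to sum to 1):

  import numpy as np
  from scipy.signal import fftconvolve

  def richardson_lucy(blurred, psf, iterations=30):
      # Iteratively re-estimate the sharp image; feed it float data
      # converted from 16-bit/channel material, not 8-bit.
      img = np.clip(blurred.astype(np.float64), 1e-12, None)
      estimate = img.copy()
      psf_mirror = psf[::-1, ::-1]
      for _ in range(iterations):
          reblurred = fftconvolve(estimate, psf, mode="same")
          ratio = img / np.maximum(reblurred, 1e-12)
          estimate *= fftconvolve(ratio, psf_mirror, mode="same")
      return estimate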

Quote
I messed around a bit with your PSF generator, but I could not come up with a proper convolution/deconvolution.  Could you explain what is going wrong?

I'm not sure what you tried to do, but my PSF generator tool is aimed at creating Gaussian-shaped PSF (de)convolution kernels, or high-pass mask filters, as is useful for processing real images rather than synthetic CGI images. It is not intended for averaging filters, like the one you used, which eliminate most subtle signal differences. When you create a Gaussian blur PSF and deconvolve with a Gaussian deconvolution kernel that matches that blur PSF, the restoration will be better.

Cheers,
Bart
« Last Edit: August 14, 2014, 05:44:22 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #129 on: August 14, 2014, 07:40:40 am »

Quote
The dot seems to be upsampled, so I cannot check what a different deconvolver, e.g. the one in ImageJ, which is a much better implementation, would have done. That would allow me to estimate the influence of the calculation accuracy, but it will remain a rather impossible deconvolution.


Hi Bart,

I thought that the problem might lie along the lines you've pointed out.

The image that I posted is a screen capture, so it's way upsampled.  The original that I tried to deconvolve is just a 4-pixel black square on a gray background.

I tried it with the example macro in ImageJ and this restores the square perfectly.  I also played around with a couple of images, and for anyone who doubts the power of deconvolution (or who thinks deconvolution and USM are the same sort of thing), here is an example from the D800Pine image:

[image attachment]

It would be very interesting to play around in ImageJ with PSFs of varying Gaussian blur amounts.  If you have reasonably understandable steps that can be followed (by me) in ImageJ I would be happy to give it a go.  I've never used ImageJ before, so at this stage I'm just stumbling around in the dark with it :).

I have to thank you for all the information and help!  You are being very generous with your time and knowledge.

Robert

« Last Edit: August 14, 2014, 07:43:50 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

hjulenissen

  • Sr. Member
  • ****
  • Offline
  • Posts: 2051
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #130 on: August 14, 2014, 09:15:56 am »

Quote
or who thinks deconvolution and USM are the same sort of thing
I am speculating that USM and deconvolution might be "the same sort of thing" in the same way that a Fiat and a Ferrari are both Italian cars (they both have four wheels and an engine, and it can bring insight to relate them to each other).

I am not questioning that deconvolution (when properly executed) can give better results than USM.

-h
« Last Edit: August 14, 2014, 09:18:54 am by hjulenissen »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #131 on: August 14, 2014, 09:28:02 am »

Quote
I tried it with the example macro in ImageJ and this restores the square perfectly.

Hi Robert,

I'm not sure which example Macro you used, or whether you are referring to the Process/Filters/Convolve... menu option.

Quote
It would be very interesting to play around with ImageJ with PSFs with varying Gaussian Blur amounts.  If you have reasonably understandable steps that can be followed (by me) in ImageJ I would be happy to give it a go.

If you need an image of a custom kernel, ImageJ can import a plain text file with the kernel values (space separated, no commas or such), e.g. the ones you can Copy/paste from my PSF Generator tool. Use the File/Import/Text Image... menu option to import text as an image.
For PSFs (to blur with, or for plugins that want a PSF as input), use a regular PSF; to Convolve with a deconvolution kernel, generate that kernel and save it as a text file.
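
For example, this writes a small Gaussian kernel in exactly that space-separated form (illustrative; the sigma and filename are arbitrary):

  import numpy as np

  # A 7x7 Gaussian kernel as space-separated text (no commas), the
  # format that File/Import/Text Image... expects.
  x = np.arange(-3, 4)
  g = np.exp(-x**2 / (2 * 0.7**2))
  psf = np.outer(g, g)
  psf /= psf.sum()
  np.savetxt("psf_sigma07.txt", psf, fmt="%.6f")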

Quote
I have to thank you for all the information and help!  You are being very generous with your time and knowledge.

You're welcome.

Cheers,
Bart
« Last Edit: August 14, 2014, 09:32:25 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Eyeball

  • Full Member
  • ***
  • Offline
  • Posts: 150
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #132 on: August 14, 2014, 09:53:30 am »

I am enjoying this thread.  I have been a deconvolution fan for quite a while and really like Focus Magic.  I hope you won't mind if I chime in with a somewhat less technical question/observation.

One of the frequent criticisms that I have seen in the past of deconvolution is that "you need to know the Point Spread Function to use it properly."

While I understand at a basic level that the PSF is indeed important, I always found that supposed criticism to be a bit of a red herring - mainly because it makes it sound like you need to pull out an Excel spreadsheet or MATLAB to use it properly.  In practical use, however, it can be as simple as using your eyes with something like FM, or even letting software like FM make an educated guess for you based on a selected sample.  And while correction of lens defects and properties admittedly gets into more complicated territory, I would think that "capture sharpening", in particular, can be handled in a pretty straightforward manner where deconvolution is concerned.

Anyway, back to my "just use your eyes" comment.  One big difference I have noticed between FM and the Adobe tools that reportedly use some degree of deconvolution (PS Smart Sharpen* and LR when Detail>50) is that FM makes it super easy/obvious where the ideal radius sweet spot is and the Adobe products do not.  FM will start to show obvious ringing when you go too far, but the Adobe tools will just start maxing out shadows and highlights.  The Adobe tools also seem to have a difficult time mixing deconvolution with noise suppression, whereas FM almost always seems to do a great job of magically differentiating between fine detail and noise.

Any ideas/info on why this is?

If I was guessing, I would say the Adobe tools are mixing deconvolution with other techniques, probably in an effort to control halos, but that is a total guess.

* I don't have CC so I have not seen the latest improvements of Smart Sharpen in CC.

Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #133 on: August 14, 2014, 10:07:52 am »

Quote
I am speculating that USM and deconvolution might be "the same sort of thing" in the same way that a Fiat and a Ferrari are both Italian cars (they both have four wheels and an engine, and it can bring insight to relate them to each other).

I am not questioning that deconvolution (when properly executed) can give better results than USM.

-h

I am seriously not an expert in imaging science, but it would seem to me that a better analogy between USM and deconvolution would be something like a blanket and a radiator, in that the blanket covers up the fact that there's not enough heat in the room whereas the radiator puts heat back in (heat being detail ... which is a bit of a pity because heat is noise, harking back to my thermodynamics :)).

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #134 on: August 14, 2014, 10:17:08 am »

Quote
Hi Robert,

I'm not sure which example Macro you used, or whether you are referring to the Process/Filters/Convolve... menu option.


Hi Bart,

I can't remember where I got the macro from (it's in there with ImageJ somewhere, obviously), but here it is:
// This macro demonstrates the use of frequency domain convolution
// and deconvolution. It opens a sample image, creates a point spread
// function (PSF), adds some noise (*), blurs the image by convolving it
// with the PSF, then de-blurs it by deconvolving it with the same PSF.
//
// * Why add noise? - Robert Dougherty
// Regarding adding noise to the PSF, deconvolution works by
// dividing by the PSF in the frequency domain.  A Gaussian
// function is very smooth, so its Fourier, (um, Hartley)
// components decrease rapidly as the frequency increases.  (A
// Gaussian is special in that its transform is also a
// Gaussian.)  The highest frequency components are nearly zero.
// When FD Math divides by these nearly-zero components, noise
// amplification occurs.  The noise added to the PSF has more
// or less uniform spectral content, so the high frequency
// components of the modified PSF are no longer near zero,
// unless it is an unlikely accident.

  if (!isOpen("bridge.gif")) run("Bridge (174K)");
  if (isOpen("PSF")) {selectImage("PSF"); close();}
  if (isOpen("Blurred")) {selectImage("Blurred"); close();}
  if (isOpen("Deblurred")) {selectImage("Deblurred"); close();}
  newImage("PSF", "8-bit black", 512, 512, 1);
  makeOval(246, 246, 20, 20);
  setColor(255);
  fill();
  run("Select None");
  run("Gaussian Blur...", "radius=8");
  run("Add Specified Noise...", "standard=2");
  run("FD Math...", "image1=bridge.gif operation=Convolve image2=PSF result=Blurred do");
  run("FD Math...", "image1=Blurred operation=Deconvolve image2=PSF result=Deblurred do");


I haven't looked into the FD Math code, but it appears to be using FFTs.  I just commented out the Bridge.gif line and opened my own version.
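
If I read it right, the two FD Math steps amount to a multiplication and a division in the frequency domain. In numpy terms, roughly (my own sketch, with stand-ins for the macro's inputs):

  import numpy as np

  image = np.random.rand(512, 512)          # stand-in for bridge.gif
  y, x = np.mgrid[-256:256, -256:256]       # Gaussian PSF, centred
  psf = np.exp(-(x**2 + y**2) / (2.0 * 8.0**2))
  psf /= psf.sum()

  H = np.fft.fft2(np.fft.ifftshift(psf))    # PSF centre moved to (0,0)
  I = np.fft.fft2(image)
  blurred = np.real(np.fft.ifft2(I * H))    # FD Math 'Convolve'

  # The floor on |H| plays the role of the noise the macro adds to
  # the PSF: it keeps the division by near-zero Gaussian components
  # from exploding.
  eps = 1e-3
  H_safe = np.where(np.abs(H) < eps, eps, H)
  deblurred = np.real(np.fft.ifft2(np.fft.fft2(blurred) / H_safe))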

I'll have a go at what you suggest with your custom kernel.  At least with ImageJ you can use a bigger kernel.  BTW ... do you ever use ImageJ to 'sharpen' your own images?

Cheers

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #135 on: August 14, 2014, 10:22:06 am »

Quote
I am seriously not an expert in imaging science,

Robert

Neither am I, but I have been reading these posts with interest, and what I am picking up from Bart's description of the basic algorithms underlying deconvolution versus acutance enhancement is that these are indeed different mathematical procedures that can therefore be expected to deliver differing results. If I'm wrong about that, I'd like to be so advised. If, in order to implement those different procedures, differences in the demosaic algorithm are also required, so be it. I too am results-oriented, but I think Jeff's point is an important one, in particular from a developer perspective, because it is necessary to have a proper allocation of cause and effect when more than one variable is deployed to achieve an outcome; even from a user perspective this kind of knowledge can help one to make choices. The samples that ppmax2 posted are interesting. There's no question that the vertical lines down the building structure are better separated in the RT-with-deconvolution process than in the LR process. While we don't know - can't tell - to what extent LR's settings were optimized to get the best possible output, I think it fair to presume from what he/she described that ppmax2 was giving it his/her best shot.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #136 on: August 14, 2014, 11:29:45 am »

Quote
I am enjoying this thread.  I have been a deconvolution fan for quite a while and really like Focus Magic.  I hope you won't mind if I chime in with a somewhat less technical question/observation.

One of the frequent criticisms that I have seen in the past of deconvolution is that "you need to know the Point Spread Function to use it properly."

While I understand at a basic level that the PSF is indeed important, I always found that supposed criticism to be a bit of a red herring - mainly because it makes it sound like you need to pull out an Excel spreadsheet or MATLAB to use it properly.  In practical use, however, it can be as simple as using your eyes with something like FM, or even letting software like FM make an educated guess for you based on a selected sample.  And while correction of lens defects and properties admittedly gets into more complicated territory, I would think that "capture sharpening", in particular, can be handled in a pretty straightforward manner where deconvolution is concerned.

Hi,

That's correct. It's often used as a red herring, while in practice even a somewhat less than optimal PSF will already offer a huge improvement. Of course a better estimate will produce an even better result.

Quote
Anyway, back to my "just use your eyes" comment.  One big difference I have noticed between FM and the Adobe tools that reportedly use some degree of deconvolution (PS Smart Sharpen* and LR when Detail>50) is that FM makes it super easy/obvious where the ideal radius sweet spot is and the Adobe products do not.  FM will start to show obvious ringing when you go too far, but the Adobe tools will just start maxing out shadows and highlights.  The Adobe tools also seem to have a difficult time mixing deconvolution with noise suppression, whereas FM almost always seems to do a great job of magically differentiating between fine detail and noise.

Any ideas/info on why this is?

Not really, other than that Adobe LR/ACR probably uses a relatively simple deconvolution method (for reasons of execution speed) that tends to create artifacts quite easily when pushed a little too far. Of course there are a lot of pitfalls to avoid with deconvolution, but that should not be a major issue for an imaging software producer. A really high-quality deconvolution algorithm is rather slow and may require quite a bit of memory to avoid swapping to disk, so that could explain the choice of a lesser alternative.

FocusMagic, on the other hand, is a specialized plug-in, and it does pull it off within a reasonable amount of time. I'm also looking forward to a newer, more powerful version of Topaz Labs InFocus (which is currently a bit too prone to creating artifacts).

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #137 on: August 14, 2014, 12:04:20 pm »

Quote
Hi Bart,

I can't remember where I got the macro from (it's in there with ImageJ somewhere, obviously),

Ah, I see. It's from a link in the help documentation of the Process/FFT/FD Math menu option. So it is using the built-in FFT functionality. While that is one of many ways to skin a cat, it is a rather advanced option that requires a reasonably good understanding of what it does.

Quote
I haven't looked into the FD Math code, but it appears to be using FFTs.

Indeed.

Quote
BTW ... do you ever use ImageJ to 'sharpen' your own images?

I use it mostly for testing some procedures, producing images from text files, and more things like that, but sharpening is usually left to FocusMagic because it fits more conveniently in my workflow. I have other applications for specific deconvolution tasks.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #138 on: August 14, 2014, 12:56:32 pm »


Quote
... other than that Adobe LR/ACR probably uses a relatively simple deconvolution method

Do we have any real evidence that deconvolution is used at all in LR or Smart Sharpen?  I tried the D800Pine image with a Gaussian blur of 1, and if I apply the LR sharpening (via the Camera Raw Filter) with Detail set low, I get a reasonable sharpening effect; with Detail set to maximum (with the same maximum Amount setting and Radius set to 1 to mirror the GB of 1) I get this, viewed at 200%:

[image attachment]

This looks to me like a massive amount of contrast has been added to the detail, so that we end up with a posterized look ... and it doesn't look to me like a deconvolution at all.

Moving the Detail slider up in steps of 10 just shows an increasing amount of this coarsening of detail; there is no point at which there is a noticeable change in processing (from USM-type to deconvolution-type).  Also, notice the noise on the lamp post.

I know Jeff has said that this is so - and I don't dispute his insider knowledge of Photoshop development - but it would be good to see how the sharpening transitions from USM to deconvolution, because I certainly can't see it.

Robert
« Last Edit: August 14, 2014, 12:58:26 pm by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Sharpening ... Not the Generally Accepted Way!
« Reply #139 on: August 14, 2014, 01:05:16 pm »

Quote
I have other applications for specific deconvolution tasks.

You shouldn't say things like that if you don't want me to bug you for more information  :)

Actually, I was wondering if there's a procedure for creating a PSF by photographing a point light source (fixed focal length, best focus etc) ... using a torch through a pin-hole (in a darkened room or at night), or something like that ... and translating this into a convolution kernel for this camera/lens combination?
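
Something like this is what I have in mind for the processing (a rough sketch; the filename and reader are placeholders, and the crop would be a small linear-light grayscale area centred on the point image):

  import numpy as np
  import imageio.v3 as iio   # any image reader will do

  crop = iio.imread("pinhole_crop.tif").astype(np.float64)
  crop -= np.median(crop)              # subtract the background level
  crop = np.clip(crop, 0.0, None)      # clip residual noise below zero
  psf = crop / crop.sum()              # normalise to unit volume
  np.savetxt("measured_psf.txt", psf)  # e.g. for ImageJ's text import

Presumably the PSF changes across the frame and with aperture and focus distance, so a single centre measurement would only be valid locally.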

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana