Pages: 1 ... 14 15 [16] 17 18   Go Down

Author Topic: Deconvolution sharpening revisited  (Read 266063 times)

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #300 on: February 22, 2014, 08:24:32 pm »

Quote
Well, I would not say a poor idea. I have not tried it.

I have, and it appears that the demosaicing of Bayer CFAs results in almost identical resolutions for the R/G/B channels after Raw conversion.

Quote from: Bart
There is also a pretty close correlation between the Red/Green/Blue channels (sigmas of 0.757/0.762/0.758), which shows how the Bayer CFA Demosaicing for mostly Luminance data produces virtually identical resolution in all channels. Since Luminance is the dominant factor for the Human Visual System's contrast sensitivity, it also shows that we can use a single sharpening value for the initial Capture sharpening of all channels.

That is probably because, despite the less dense sampling of the R and B channels, and the differences in diffraction pattern diameter, the luminance component of the signal in them is still used to create luminance resolution. And since the Red and most certainly the Blue channel are relatively under-weighted in luminance contribution, I would not be surprised if some is 'borrowed' from Green. This tends to (in general) negate wavelength-dependent diffraction blur. Of course, differences in lens design and demosaicing may produce different results.
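The near-identical per-channel sigmas quoted above (0.757/0.762/0.758) suggest a single capture-sharpening PSF for all channels. A minimal numpy-only sketch of that idea, using a Gaussian PSF model and plain Richardson-Lucy iteration (the 0.76 px sigma is taken from the quoted measurement; the function names and everything else are illustrative, not anyone's actual tool):

```python
import numpy as np

def gaussian_psf(sigma, radius=6):
    """Normalized 2-D Gaussian point-spread function."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def conv_same(img, ker):
    """'same'-mode linear convolution via FFT (numpy only, zero padding)."""
    s0 = img.shape[0] + ker.shape[0] - 1
    s1 = img.shape[1] + ker.shape[1] - 1
    spec = np.fft.rfft2(img, (s0, s1)) * np.fft.rfft2(ker, (s0, s1))
    full = np.fft.irfft2(spec, (s0, s1))
    r0, r1 = ker.shape[0] // 2, ker.shape[1] // 2
    return full[r0:r0 + img.shape[0], r1:r1 + img.shape[1]]

def richardson_lucy(channel, psf, iterations=20):
    """Plain Richardson-Lucy deconvolution for one channel (values in 0..1)."""
    estimate = np.full_like(channel, 0.5)
    flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = conv_same(estimate, psf)
        ratio = channel / np.maximum(blurred, 1e-12)
        estimate = estimate * conv_same(ratio, flipped)
    return np.clip(estimate, 0.0, 1.0)

# One sigma for R, G and B alike, per the measured ~0.757/0.762/0.758:
PSF = gaussian_psf(0.76)
```

The same `PSF` would simply be applied to each of the R, G and B channels in turn, which is the practical consequence of the measurement Bart describes.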

Cheers,
Bart
« Last Edit: February 22, 2014, 09:04:41 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

rnclark

  • Newbie
  • *
  • Offline
  • Posts: 6
Re: Deconvolution sharpening revisited
« Reply #301 on: February 23, 2014, 09:36:03 am »

Quote from: Bart_van_der_Wolf
Trust me, I only use (bar) charts for objective, worst-case-scenario testing. If a procedure passes that test, it will pass real-life challenges with flying colors.

Hi Bart,
While I agree in principle, the question is: in practice, does it really matter? For example, on your worst-case-scenario test charts, the Nikon D800E (the camera without a blur filter) might do pretty poorly. If that had been the basis for the decision on whether or not to produce it, we might not have a D800E today. But in field use by many photographers, moire artifacts are rare. Yes, they do happen, but it doesn't seem to trouble photographers, and overall they seem to like the added sharpness.

At least with down sampling, we can do it however we want, and if we see artifacts, we can modify the procedure.

So I'll stand by my method of producing the best, highest-pixel-count sharpened image (e.g. with deconvolution sharpening), then downsampling. If a component in the image shows artifacts in the down-sampling step, I'll pre-blur that part of the sharpened image and then downsample. So far I have not seen such a problem. Maybe I should try down-sampling some of the thousands of windmill images I have ;-).
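Roger's recipe (sharpen at full resolution, pre-blur only the regions that show artifacts, then downsample) can be sketched in a few lines of numpy. This is a toy illustration, not his actual workflow: a 3x3 box blur and a box-average downsample stand in for whatever filter and resampler one really uses, and the artifact mask is assumed to come from inspection:

```python
import numpy as np

def box_blur3(img):
    """3x3 mean filter with edge replication (numpy only)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def selective_downsample(img, artifact_mask, factor=2):
    """Pre-blur only the flagged pixels, then box-average downsample."""
    work = np.where(artifact_mask, box_blur3(img), img)
    h, w = (s - s % factor for s in work.shape)
    return work[:h, :w].reshape(h // factor, factor,
                                w // factor, factor).mean(axis=(1, 3))
```

Unmasked regions keep their full post-deconvolution sharpness; only the flagged regions trade a little acuity for freedom from down-sampling artifacts.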

Roger
Logged

AreBee

  • Sr. Member
  • ****
  • Offline
  • Posts: 638
Re: Deconvolution sharpening revisited
« Reply #302 on: February 23, 2014, 12:33:12 pm »

Folks,

Most of the discussion in this thread is way over my head, but I would be interested to learn whether deconvolution sharpening is strictly appropriate for use with the D800E. My understanding is that deconvolution sharpening is used to reverse, as well as possible, the blur effect of an AA filter. However, the D800E does not have an AA filter (I appreciate that it is a bit of a hybrid filter sandwich rather than the simple absence of an AA filter). Therefore, in applying deconvolution sharpening to a D800E image, are we not attempting to de-blur what was never blurred?

Cheers,
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: Deconvolution sharpening revisited
« Reply #303 on: February 23, 2014, 12:55:42 pm »

Quote from: AreBee
Folks,

Most of the discussion in this thread is way over my head, but I would be interested to learn whether deconvolution sharpening is strictly appropriate for use with the D800E. My understanding is that deconvolution sharpening is used to reverse, as well as possible, the blur effect of an AA filter. However, the D800E does not have an AA filter (I appreciate that it is a bit of a hybrid filter sandwich rather than the simple absence of an AA filter). Therefore, in applying deconvolution sharpening to a D800E image, are we not attempting to de-blur what was never blurred?

Cheers,

Yes, deconvolution restoration is desirable with the D800E, because there are sources of blur other than the low-pass filter. Diffraction, lens aberrations, and defocus can all be partially reversed with deconvolution. See some of my earlier posts in this thread for examples of deconvolution with the D800E.

Bill
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #304 on: February 23, 2014, 02:34:34 pm »

Quote from: AreBee
Folks,

Most of the discussion in this thread is way over my head, but I would be interested to learn whether deconvolution sharpening is strictly appropriate for use with the D800E. My understanding is that deconvolution sharpening is used to reverse, as well as possible, the blur effect of an AA filter. However, the D800E does not have an AA filter (I appreciate that it is a bit of a hybrid filter sandwich rather than the simple absence of an AA filter). Therefore, in applying deconvolution sharpening to a D800E image, are we not attempting to de-blur what was never blurred?

Cheers,

Since you mention the D800E filter sandwich, deconvolution is essentially what it is doing: it uses the same filter as the D800, then uses further filter elements to reverse the AA effect.

These methods, in software or hardware, are much better than the old USM illusion of sharpness.
Logged

AreBee

  • Sr. Member
  • ****
  • Offline
  • Posts: 638
Re: Deconvolution sharpening revisited
« Reply #305 on: February 23, 2014, 04:37:48 pm »

Thanks Bill, I'll take a look.

Fine_Art,

Quote
These methods, in software or hardware, are much better than the old USM illusion of sharpness.

Absolutely. I previously read posts by Bart in a range of threads and noted how highly he praised Focus Magic. On that basis I purchased a copy and have tested it on several images. It really is astonishing how much the plugin restores image acuity without introducing sharpening halos. Worth its purchase price several times over to me.

Regards,
Logged

Theodoros

  • Sr. Member
  • ****
  • Offline
  • Posts: 2454
Re: Deconvolution sharpening revisited
« Reply #306 on: February 23, 2014, 05:24:31 pm »

Quote from: Bart_van_der_Wolf
I have, and it appears that the demosaicing of Bayer CFAs results in almost identical resolutions for the R/G/B channels after Raw conversion.

That is probably because, despite the less dense sampling of the R and B channels, and the differences in diffraction pattern diameter, the luminance component of the signal in them is still used to create luminance resolution. And since the Red and most certainly the Blue channel are relatively under-weighted in luminance contribution, I would not be surprised if some is 'borrowed' from Green. This tends to (in general) negate wavelength-dependent diffraction blur. Of course, differences in lens design and demosaicing may produce different results.

Cheers,
Bart
Hi Bart, in your opinion, for a completely still subject shot in 16x multishot mode, should one use the "Smart Sharpen" filter before printing, or something else, and why?
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #307 on: February 23, 2014, 06:58:31 pm »

Quote from: Theodoros
Hi Bart, in your opinion, for a completely still subject shot in 16x multishot mode, should one use the "Smart Sharpen" filter before printing, or something else, and why?

Hi,

Yes, Smart Sharpen is a good start, but a Photoshop plugin that does better deconvolution than Smart Sharpen might squeeze a bit more real resolution, and a bit less noise, out of the image.

Even though a 16x multi-shot sensor solves a few issues (and could create some others), deconvolution sharpening is still beneficial. The enhanced color resolution helps, by sampling each sensel position with each color of the Bayer CFA in sequence, and the piezo-actuator-driven half-sensel-pitch offsets double the sampling density. However, the lens still has its residual aberrations and the inevitable diffraction blur from narrowing the aperture, and the sensel aperture also plays a role by averaging the projected image over the original sensel aperture dimensions (the sensel aperture is roughly twice the effective sampling pitch, so 4x the sample area). This relatively large sensel aperture, like the increased sampling density, helps reduce aliasing, but some blur will still remain. The lowered aliasing and the remaining blur call for deconvolution sharpening.

So you can still improve the results from such a sensor design. As mentioned, Focus Magic does a great job, but a plugin such as Topaz Labs Detail is also worth a mention. Not only does it offer a simple-to-use 'Deblur' option (= deconvolution), it also allows tweaking several sizes and contrast levels of detail, which is great for 'output sharpening' (where different output sizes may need different levels of micro-contrast and sharpening). Their InFocus plugin offers more control over Capture sharpening alone, and also works very well with my suggested approach of up-sampling, deconvolution sharpening, and down-sampling back to the original size.
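The trade-off described above (a sensel aperture of roughly twice the sampling pitch suppresses aliasing near the new Nyquist frequency while leaving recoverable blur below it) can be checked with a toy MTF model. The 550 nm wavelength, f/8 aperture, and 6 micron pitch are assumed values for illustration, not figures from the post:

```python
import numpy as np

def mtf_diffraction(f, wavelength_mm, fnum):
    """MTF of an ideal circular aperture; f in cycles/mm."""
    fc = 1.0 / (wavelength_mm * fnum)              # diffraction cutoff
    x = np.clip(np.asarray(f, float) / fc, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

def mtf_box_aperture(f, width_mm):
    """MTF of a square sensel aperture; np.sinc is sin(pi x)/(pi x)."""
    return np.abs(np.sinc(np.asarray(f, float) * width_mm))

pitch = 0.006                 # mm, assumed 6 micron sensel aperture
sample = pitch / 2            # half-sensel-pitch offsets double the density
nyquist = 1.0 / (2 * sample)  # cycles/mm at the doubled sampling density

at_nyquist = float(mtf_diffraction(nyquist, 550e-6, 8)
                   * mtf_box_aperture(nyquist, pitch))
at_half = float(mtf_diffraction(nyquist / 2, 550e-6, 8)
                * mtf_box_aperture(nyquist / 2, pitch))
```

With the aperture equal to twice the sampling pitch, the sinc null of the sensel aperture lands exactly at the doubled-density Nyquist frequency, so the system MTF is essentially zero there (little aliasing), while at half that frequency it is only about 0.35: real blur that deconvolution can usefully attack, exactly the situation the post describes.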

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Theodoros

  • Sr. Member
  • ****
  • Offline
  • Posts: 2454
Re: Deconvolution sharpening revisited
« Reply #308 on: February 24, 2014, 06:27:48 am »

Quote from: Bart_van_der_Wolf
Hi,

Yes, Smart Sharpen is a good start, but a Photoshop plugin that does better deconvolution than Smart Sharpen might squeeze a bit more real resolution, and a bit less noise, out of the image.

Even though a 16x multi-shot sensor solves a few issues (and could create some others), deconvolution sharpening is still beneficial. The enhanced color resolution helps, by sampling each sensel position with each color of the Bayer CFA in sequence, and the piezo-actuator-driven half-sensel-pitch offsets double the sampling density. However, the lens still has its residual aberrations and the inevitable diffraction blur from narrowing the aperture, and the sensel aperture also plays a role by averaging the projected image over the original sensel aperture dimensions (the sensel aperture is roughly twice the effective sampling pitch, so 4x the sample area). This relatively large sensel aperture, like the increased sampling density, helps reduce aliasing, but some blur will still remain. The lowered aliasing and the remaining blur call for deconvolution sharpening.

So you can still improve the results from such a sensor design. As mentioned, Focus Magic does a great job, but a plugin such as Topaz Labs Detail is also worth a mention. Not only does it offer a simple-to-use 'Deblur' option (= deconvolution), it also allows tweaking several sizes and contrast levels of detail, which is great for 'output sharpening' (where different output sizes may need different levels of micro-contrast and sharpening). Their InFocus plugin offers more control over Capture sharpening alone, and also works very well with my suggested approach of up-sampling, deconvolution sharpening, and down-sampling back to the original size.

Cheers,
Bart
I thought so… thanks for the detailed explanation and the suggestions. One more thing: if the subject is huge (say a painting of 1.5 square meters) and it must be printed at 1:1 size using 360 PPI as input to the printer (say an Epson 9900), should the process be: 1. up-sample to 360 PPI, 2. sharpen, 3. print; or: 1. up-sample to 720 PPI, 2. sharpen, 3. down-sample back to 360 PPI, 4. print? And is there a benefit in not down-sampling when the image comes out at less than 360 PPI? Thanks.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #309 on: February 24, 2014, 07:53:14 am »

Quote from: Theodoros
I thought so… thanks for the detailed explanation and the suggestions. One more thing: if the subject is huge (say a painting of 1.5 square meters) and it must be printed at 1:1 size using 360 PPI as input to the printer (say an Epson 9900), should the process be: 1. up-sample to 360 PPI, 2. sharpen, 3. print; or: 1. up-sample to 720 PPI, 2. sharpen, 3. down-sample back to 360 PPI, 4. print? And is there a benefit in not down-sampling when the image comes out at less than 360 PPI? Thanks.

Hi,

First of all, it is not absolutely necessary to upsample/sharpen/downsample; it is just a method that allows very accurate sharpening. With the proper technique, precautions, and experience, it is possible to sharpen directly at the final output size (that also means using blend-if sharpening layers to avoid clipping).

Second, when the original file already has a lot of pixels, upsampling it by a factor of e.g. 3x just for sharpening may cause issues due to file size, and the deconvolution sharpening will take a lot of processing time and system memory to complete.

Third, depending on the printing pipeline, and given the physical size of the output (and thus the normal viewing distance), I think that upsampling to 360 PPI will probably be adequate, and printing will be faster than at 720 PPI. Only when very close inspection must be possible without compromise, and the input file has enough detail to need little interpolation to reach the output dimensions, will creating a 720 PPI output file make a difference (because the printer's own interpolation is not very good, and doesn't allow sharpening at the final output size). The 'finest detail' option must be activated in the Epson printer driver to actually print at 720 PPI.

When an output file has less than 360 PPI at the final output size, one can consider upsampling with an application like PhotoZoom Pro (only for upsampling), because it actually adds edge detail at the higher resolution, although that depends on the original image content. Other upsampling methods do not create additional resolution, but they will allow pushing the sharpening a bit further at 720 PPI (because small artifacts will be rendered too small to notice). So once you have more than 360 PPI of useful data, I would not downsample to 360 PPI, but upsample to 720 PPI, sharpen at that size, and print with 'finest detail' activated.

Since output sharpening also needs to pre-compensate for contrast losses due to the print media (ink diffusion, paper structure, limited media contrast, etc.), I'd seriously consider using Topaz Detail, because it offers not only deconvolution (Deblur) but also micro-contrast controls. It also allows boosting the low-contrast micro-detail in the shadows more than in the highlights, which is especially useful for non-glossy output media or dim viewing conditions. But that all goes beyond the main subject of this thread.
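The PPI decision rule above can be encoded as a toy helper. The 360/720 thresholds come from the post; the function name, its return values, and the assumption that "more than 360 PPI native means go to 720" are a hypothetical paraphrase, not Bart's exact prescription:

```python
def print_ppi_plan(pixels_long_edge, print_inches_long_edge):
    """Pick a target PPI for an Epson-style pipeline (toy rule of thumb)."""
    native_ppi = pixels_long_edge / print_inches_long_edge
    if native_ppi >= 720:
        return 720, "downsample to 720 PPI, sharpen, print with 'finest detail'"
    if native_ppi > 360:
        return 720, "upsample to 720 PPI, sharpen, print with 'finest detail'"
    return 360, "upsample to 360 PPI (or to 720 with a detail-adding resampler), then sharpen"
```

For example, a 7360-pixel-wide file printed at 17 inches has about 433 native PPI, so it would take the upsample-to-720 route rather than being downsampled to 360.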

Cheers,
Bart
« Last Edit: February 24, 2014, 07:59:47 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Manoli

  • Sr. Member
  • ****
  • Offline
  • Posts: 2296
Re: Deconvolution sharpening revisited
« Reply #310 on: February 24, 2014, 08:25:14 am »

Quote from: Bart_van_der_Wolf
When an output file has less than 360 PPI at the final output size, one can consider upsampling with an application like PhotoZoom Pro (only for upsampling),

And for downsampling: Adobe's Bicubic Sharper, or PhotoZoom's S-Spline XL or MAX with downsize settings?
Logged

Theodoros

  • Sr. Member
  • ****
  • Offline
  • Posts: 2454
Re: Deconvolution sharpening revisited
« Reply #311 on: February 24, 2014, 08:31:53 am »

Quote from: Bart_van_der_Wolf
Hi,

First of all, it is not absolutely necessary to upsample/sharpen/downsample; it is just a method that allows very accurate sharpening. With the proper technique, precautions, and experience, it is possible to sharpen directly at the final output size (that also means using blend-if sharpening layers to avoid clipping).

Second, when the original file already has a lot of pixels, upsampling it by a factor of e.g. 3x just for sharpening may cause issues due to file size, and the deconvolution sharpening will take a lot of processing time and system memory to complete.

Third, depending on the printing pipeline, and given the physical size of the output (and thus the normal viewing distance), I think that upsampling to 360 PPI will probably be adequate, and printing will be faster than at 720 PPI. Only when very close inspection must be possible without compromise, and the input file has enough detail to need little interpolation to reach the output dimensions, will creating a 720 PPI output file make a difference (because the printer's own interpolation is not very good, and doesn't allow sharpening at the final output size). The 'finest detail' option must be activated in the Epson printer driver to actually print at 720 PPI.

When an output file has less than 360 PPI at the final output size, one can consider upsampling with an application like PhotoZoom Pro (only for upsampling), because it actually adds edge detail at the higher resolution, although that depends on the original image content. Other upsampling methods do not create additional resolution, but they will allow pushing the sharpening a bit further at 720 PPI (because small artifacts will be rendered too small to notice). So once you have more than 360 PPI of useful data, I would not downsample to 360 PPI, but upsample to 720 PPI, sharpen at that size, and print with 'finest detail' activated.

Since output sharpening also needs to pre-compensate for contrast losses due to the print media (ink diffusion, paper structure, limited media contrast, etc.), I'd seriously consider using Topaz Detail, because it offers not only deconvolution (Deblur) but also micro-contrast controls. It also allows boosting the low-contrast micro-detail in the shadows more than in the highlights, which is especially useful for non-glossy output media or dim viewing conditions. But that all goes beyond the main subject of this thread.

Cheers,
Bart
Great Bart, very detailed and well explained… Thanks.  :-*
« Last Edit: February 24, 2014, 09:04:38 am by T.Dascalos »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #312 on: February 24, 2014, 08:38:24 am »

Quote from: Manoli
And for downsampling: Adobe's Bicubic Sharper, or PhotoZoom's S-Spline XL or MAX with downsize settings?

Hi,

For downsampling, just use regular Bicubic. It will still introduce aliasing artifacts when there is more real resolution than the smaller size can accommodate, but I'm not impressed by PhotoZoom's down-sampling quality, and Bicubic Sharper will create even more aliasing than regular Bicubic.

When the upsample/deconvolve/downsample approach is used, the risk of generating aliasing artifacts with regular Bicubic downsampling is limited, because there was not enough detail to begin with: we mostly just remove the blur of the original input, which was already bandwidth-limited to the Nyquist frequency of the original size.
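Bart's band-limiting argument can be demonstrated in one dimension: a component below the post-downsample Nyquist frequency survives decimation at its true frequency, while one above it folds back to a false frequency. Plain 2x decimation stands in here for bicubic downsampling (bicubic attenuates, rather than removes, the folded energy):

```python
import numpy as np

def dominant_bin(x):
    """Index of the strongest frequency bin of a real signal."""
    return int(np.argmax(np.abs(np.fft.rfft(x))))

n = 64
t = np.arange(n)
low = np.sin(2 * np.pi * 10 * t / n)   # band-limited for 2x decimation
high = np.sin(2 * np.pi * 24 * t / n)  # beyond the decimated Nyquist (bin 16)

low_dec, high_dec = low[::2], high[::2]
```

The low-frequency sine stays at bin 10 after decimation, while the bin-24 sine reappears at bin 8 (folded about the new Nyquist at bin 16): that fold-back is exactly the aliasing that an upsample/deconvolve/downsample chain avoids, because deconvolution only restores contrast below the original Nyquist rather than creating content above it.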

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Paul2660

  • Sr. Member
  • ****
  • Offline
  • Posts: 4066
    • Photos of Arkansas
Re: Deconvolution sharpening revisited
« Reply #313 on: February 24, 2014, 01:30:46 pm »

Bart,

In one of the posts on sharpening back in Oct 2013 or so, you brought up Focus Magic and I looked at it again. To be honest, it does an excellent job; enough that I totally changed my workflow. It helps on any file, from IQ backs to Fuji X. Very nice tool indeed. I looked at the Topaz tool but found I still preferred the output from Focus Magic.

I currently don't up-res to sharpen and then down-res back; I have just been getting such good results on the files as they are.

Thanks again for the tip.

Paul C.
Logged
Paul Caldwell
Little Rock, Arkansas U.S.
www.photosofarkansas.com

WalterKircher

  • Newbie
  • *
  • Offline
  • Posts: 1
Re: Deconvolution sharpening revisited
« Reply #314 on: June 21, 2014, 03:25:17 pm »

Hello, and to help this fantastic thread live again  ;D

I have been using FocusFixer from Fixerlabs for four years now, and I am still very satisfied. Since 2012 it has also worked as a 64-bit plug-in for Photoshop. I compared many of the available solutions back in 2010, FocusMagic among them. I am curious: has anybody here used the tool I use?

http://www.fixerlabs.com/EN/photoshop_plugins/focusfixer.htm

Walter
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #315 on: June 22, 2014, 07:42:41 am »

Quote from: WalterKircher
Hello, and to help this fantastic thread live again  ;D

I have been using FocusFixer from Fixerlabs for four years now, and I am still very satisfied. Since 2012 it has also worked as a 64-bit plug-in for Photoshop. I compared many of the available solutions back in 2010, FocusMagic among them. I am curious: has anybody here used the tool I use?

Hi Walter,

I've used their SizeFixer software, which also uses the LensFix part of the FocusFixer technology. It worked okay, but I have not done a side-by-side test between FocusFixer (which offers more control over the focusing aspect) and FocusMagic, so I can't compare them. Also important is how these programs handle noise: do they enhance real detail more than noise?

If you post an example (very high quality JPEG or PNG crops should do), before and after sharpening with FocusFixer, I could add a FocusMagic sharpened version to that.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

torger

  • Sr. Member
  • ****
  • Offline
  • Posts: 3267
Re: Deconvolution sharpening revisited
« Reply #316 on: June 24, 2014, 04:43:13 am »

I've grown a bit fond of shooting tree trunks. As the trunks are curved and macro DoF is extremely short, it's a typical application for focus stacking. However, focus stacking is really not my thing, especially in the field, where shutter speeds are often, say, 10 seconds per image and the camera is often in cumbersome shooting positions.

So what I do is shoot at f/22 and suffer the diffraction. The subject is not too much in need of being super-detailed, so it's no disaster. Still, I'd like to improve my sharpening technique for these types of images.

Maybe the types of software you talk about could be of help?

I've attached a thumb of one such image, and here is a crop from the "raw" (a neutral 16-bit TIFF developed in RawTherapee: no sharpening, no contrast increase, no nothing):

http://torger.dyndns.org/trunk-crop.tif

It was shot at f/22 with a Schneider Digitar 180mm using a Leaf Aptus 75 back.

Can someone demonstrate how sharp it can become using these tools?
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #317 on: June 24, 2014, 08:41:38 am »

Quote from: torger
I've grown a bit fond of shooting tree trunks. As the trunks are curved and macro DoF is extremely short, it's a typical application for focus stacking. However, focus stacking is really not my thing, especially in the field, where shutter speeds are often, say, 10 seconds per image and the camera is often in cumbersome shooting positions.

So what I do is shoot at f/22 and suffer the diffraction. The subject is not too much in need of being super-detailed, so it's no disaster. Still, I'd like to improve my sharpening technique for these types of images.

Maybe the types of software you talk about could be of help?

Hi,

Indeed, tree trunks can have very interesting structures, especially when looking at them up close. Using f/22 on a 6 micron pitch sensor will reduce the effective resolution to less than 94% of its theoretical limiting resolution, so deconvolution will not be able to fully recover the detail that the lens has to offer, but it can help to improve things.

Quote
I've attached a thumb of one such image, and here is a crop from the "raw" (a neutral 16-bit TIFF developed in RawTherapee: no sharpening, no contrast increase, no nothing):

Attached are: first a treatment with FocusMagic, then one with PixInsight, and finally that same one from PixInsight but with a bit of detail added.

Without very precise calibration of the lens + diffraction + Raw conversion combination, I had to guess a bit. I assumed that diffraction would be dominant, and mainly deconvolved for that, based on a 6 micron pitch sensor and an f/22 diffraction pattern (the Airy disc pattern is more than 31 micron, or about 5 pixels, in diameter).
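Bart's numbers can be sanity-checked with the standard formulas: Airy first-null diameter 2.44 * lambda * N, and diffraction MTF cutoff 1/(lambda * N). At an assumed 550 nm the diameter comes out near 29.5 um, so his "more than 31 micron" presumably reflects a slightly longer, redder reference wavelength; the 5-pixel figure holds either way:

```python
def airy_diameter_um(fnum, wavelength_um=0.55):
    """First-null diameter of the Airy pattern: 2.44 * lambda * N."""
    return 2.44 * wavelength_um * fnum

def diffraction_cutoff_cy_mm(fnum, wavelength_um=0.55):
    """Spatial frequency where the diffraction MTF reaches zero: 1/(lambda*N)."""
    return 1000.0 / (wavelength_um * fnum)

d_um = airy_diameter_um(22)            # ~29.5 um at 550 nm
d_px = d_um / 6.0                      # ~4.9 sensels of 6 um
cutoff = diffraction_cutoff_cy_mm(22)  # ~82.6 cy/mm
nyquist = 1000.0 / (2 * 6.0)           # ~83.3 cy/mm for a 6 um pitch
```

The cutoff falling just below the sensor's Nyquist frequency is consistent with the earlier remark that, at f/22 on this pitch, diffraction rather than the sensor sets the limit, which is why deconvolution can only partially recover the detail.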

Cheers,
Bart


P.S. In my estimation, and if the DOF requirements allow, I'd also give f/20 a try. It will keep the diffraction pattern from outright capping the limiting resolution of the sensor, and might allow squeezing out a tiny bit more micro-detail, with hardly any loss due to a third of a stop less DOF.
« Last Edit: June 24, 2014, 08:50:13 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

torger

  • Sr. Member
  • ****
  • Offline
  • Posts: 3267
Re: Deconvolution sharpening revisited
« Reply #318 on: June 24, 2014, 09:15:06 am »

Thanks Bart! Very nice to see what's possible: a clear improvement over the original, though one cannot expect crisp results. I don't think that's a big problem for a print, though. The original crop is quite flat in terms of contrast; just increasing the contrast will give a sense of a sharper image.

The Aptus 75 is a 33 megapixel Dalsa sensor with 7.2 um pixels. It's hard to know the effective f-stop at this close range; I'd guess something like 1/8 of life size, and the bellows extension is large enough that the effective f-stop would be a little higher still than f/22. But I guess your assumption of 6 um + f/22 is pretty close to 7.2 um + the effective macro f-stop.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #319 on: June 24, 2014, 11:31:47 am »

Quote from: torger
Thanks Bart! Very nice to see what's possible: a clear improvement over the original, though one cannot expect crisp results. I don't think that's a big problem for a print, though. The original crop is quite flat in terms of contrast; just increasing the contrast will give a sense of a sharper image.

Hi,

True, but correct deconvolution with some contrast added will get you even further ...

Quote
The Aptus 75 is a 33 megapixel Dalsa sensor with 7.2 um pixels. It's hard to know the effective f-stop at this close range; I'd guess something like 1/8 of life size, and the bellows extension is large enough that the effective f-stop would be a little higher still than f/22. But I guess your assumption of 6 um + f/22 is pretty close to 7.2 um + the effective macro f-stop.

It would be close. I'll see if I can get something better based on the additional info, although a lot of real resolution is lost beyond restoration. I mistakenly thought (I couldn't find the info on the Leaf website) that the Aptus 75 was a 40 MP back, hence the 6 micron pitch assumption. With the larger pitch and the suggested magnification (f/22 becomes f/25 effective), a slightly different (half a pixel smaller) diffraction pattern would be needed for optimal deconvolution.

Well, I just did another deconvolution based on the new assumptions, and the differences are marginal (see attached: first only deconvolved, then the same with some Topaz Detail added to compensate for the resolution losses). It should print just fine at normal size and viewing distance.
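The revised assumptions can be checked with the bellows-factor formula N_eff = N * (1 + m) and the same Airy-diameter formula as before (550 nm assumed). The numbers confirm both Bart's "f/25 effective" and the "half a pixel smaller" diffraction pattern:

```python
def effective_fnumber(marked, magnification):
    """Bellows-corrected working f-number: N_eff = N * (1 + m)."""
    return marked * (1.0 + magnification)

def airy_diameter_px(fnum, pitch_um, wavelength_um=0.55):
    """Airy first-null diameter (2.44 * lambda * N) in units of sensel pitch."""
    return 2.44 * wavelength_um * fnum / pitch_um

n_eff = effective_fnumber(22, 1 / 8)   # 24.75, i.e. Bart's "f/25 effective"
new_px = airy_diameter_px(n_eff, 7.2)  # ~4.6 px on the 7.2 um Aptus 75
old_px = airy_diameter_px(22, 6.0)     # ~4.9 px under the first assumption
```

The two PSF diameters differ by roughly 0.3 of a pixel, which is why re-running the deconvolution with the corrected pitch and magnification made only a marginal difference to the result.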

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==