
Author Topic: Ameliorating clipping in sharpening operations

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Ameliorating clipping in sharpening operations
« on: August 29, 2014, 08:09:45 pm »

The more I use Topaz Detail 3, the more impressed I am. Thank you, Bart, for bringing it to my attention. One of the things it does really well is avoid blowing highlights and clipping blacks, which is always a consideration with sharpening.

I am now involved with a project for which I need to sharpen in one dimension only. I'm writing the code in Matlab to do that. I'm doing some aggressive sharpening, so I'm running into blown highlights and clipped shadows. I'd like to pick everyone's collective brains on what to do about that.

Here are the first few options that come to mind. I'll concentrate on the highlights for now; I think that's the easier problem. Assume the image to be sharpened is in double-precision floating point with a nominal full scale of one.


1. Just do small sharpening moves in Matlab and do the rest in 2D form in Topaz Detail 3.
2. Normalize the output sharpened image by dividing it by max(max(max(sharpenedImage))). Produces really dark images, but I can add a curve later. No clipped highlights.
3. Normalize the output sharpened image by dividing it by mean(max(max(sharpenedImage))). Produces dark images, but I can add a curve later. Slight highlight clipping.
4. Normalize the output sharpened image by dividing it by min(max(max(sharpenedImage))). Produces darkish images, but I can add a curve later. A little highlight clipping.
5. Apply a nonlinear luminance curve to the sharpened image.
6. Replace blown pixels (as defined in one of the three normalizations above) with unsharpened pixels.
7. Split the difference between blown and unsharpened pixels.
8. Do the sharpening with various weights, and for each blown pixel in the image made with the desired weight, back off to that pixel in an image made with a lower weight.
9. Do some neighborhood process.

With the exception of the last one, which can be as complex as desired, these are all fairly simple-minded approaches, and may or may not work well.

What I'm hoping is that there's an existing body of knowledge on this problem that would let me code something up fairly easily.
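For concreteness, here's roughly what options 2 and 6 might look like in Matlab (a minimal sketch; originalImage is my name for the unsharpened input, and both images are m x n x 3 doubles with nominal full scale of one):

% Option 2: normalize by the global maximum -- no clipped highlights, but dark
scale = max(max(max(sharpenedImage)));     % max over rows, columns, and channels
normalizedImage = sharpenedImage ./ scale;

% Option 6: replace blown pixels with their unsharpened values
blown = any(sharpenedImage > 1, 3);        % mask of pixels clipped in any channel
mask = repmat(blown, [1 1 3]);             % extend the mask to all three planes
protectedImage = sharpenedImage;
protectedImage(mask) = originalImage(mask);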

Ideas?

Jim

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Re: Ameliorating clipping in sharpening operations
« Reply #1 on: August 29, 2014, 11:24:00 pm »

Just out of curiosity, Jim, have you tried Photokit Sharpener 2 or Lightroom sharpening? I've been using these routinely for years and have not had the kind of issues you are trying to address here.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

TylerB

  • Sr. Member
  • ****
  • Offline
  • Posts: 446
    • my photography
Re: Ameliorating clipping in sharpening operations
« Reply #2 on: August 29, 2014, 11:58:47 pm »

I honestly don't have any background in some of the more advanced routines you are discussing, but have you tried running these routines on a duplicated layer and then adjusting how they are applied in the highlights and shadows using the layer blending options?
That's how I tame these issues when using standard USM. It's very useful, and discussed regularly in others' workflows going back to Bruce Fraser.
Logged

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Ameliorating clipping in sharpening operations
« Reply #3 on: August 30, 2014, 12:00:27 am »

Just out of curiosity, Jim, have you tried Photokit Sharpener 2 or Lightroom sharpening? I've been using these routinely for years and have not had the kind of issues you are trying to address here.

I'm trying to do one-dimensional sharpening. From what I can tell from their website, Photokit Sharpener does just two-dimensional sharpening. I know that Lr does only 2D sharpening.

Jim

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Ameliorating clipping in sharpening operations
« Reply #4 on: August 30, 2014, 12:02:06 am »

I honestly don't have any background in some of the more advanced routines you are discussing, but have you tried running these routines on a duplicated layer and then adjusting how they are applied in the highlights and shadows using the layer blending options?

That's a good idea. I was looking for an algorithmic solution, but by messing around with blending, maybe I can get some ideas.

Jim

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Ameliorating clipping in sharpening operations
« Reply #5 on: August 30, 2014, 03:20:13 am »

Hi Jim,

What kind of sharpening are you trying to accomplish, Capture or Creative? Based on the bullet list in your link, I would think a combination of both.

For Capture sharpening at f/45, one has to assume that a lot of the blurring at higher spatial frequencies is the result of diffraction, so I would start by deconvolving with the Airy disk PSF for f/45 in your one dimension and take it from there.

Defocus is a b*tch because it changes throughout the frame and depending on whether it is in front of or behind the plane of focus. It can be approximated by a Gaussian in certain defocus ranges but not others.

But I have a feeling that what you'd like is more of a Local Contrast effect (right?). For that I head straight to Nik's Tonal Contrast 4 or Topaz Clarity (lately more the latter than the former). As you know, a rough tool for Local Contrast is also a High Radius Low Amplitude USM step, which should be easy to implement in MatLab.
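A bare-bones HiRaLoAm USM in Matlab might look something like this (a sketch only; fspecial and imfilter are Image Processing Toolbox functions, and the sigma and amount values are just placeholders):

sigma  = 50;                                 % high radius, in pixels
amount = 0.2;                                % low amplitude
h = fspecial('gaussian', 6*sigma+1, sigma);  % 2D Gaussian blur kernel
lowPass = imfilter(img, h, 'replicate');     % blurred copy of the image
sharpened = img + amount * (img - lowPass);  % add back the scaled high-pass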

As for clipping: I would go for compression plus a curve to make the final result fit into 0-1, keeping in mind that no 'sharpening' worth its name should ever result in under/overshoots of more than 10% of full scale :)

Jack
« Last Edit: August 30, 2014, 03:24:36 am by Jack Hogan »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: Ameliorating clipping in sharpening operations
« Reply #6 on: August 30, 2014, 06:38:50 am »

The more I use Topaz Detail 3, the more impressed I am. Thank you, Bart, for bringing it to my attention. One of the things it does really well is avoid blowing highlights and clipping blacks, which is always a consideration with sharpening.

Hi Jim,

You're welcome. It's an amazingly powerful plugin, and there is a lot more it does under the hood (like reducing unnatural looking color shifts due to contrast changes) to protect the innocent against themselves ... ;)

Quote
I am now involved with a project for which I need to sharpen in one dimension only. I'm writing the code in Matlab to do that. I'm doing some aggressive sharpening, so I'm running into blown highlights and clipped shadows. I'd like to pick everyone's collective brains on what to do about that.


This suggests that you are pushing things far beyond mere resolution restoration, which is fine if the situation calls for it, but indeed requires unconventional measures to reduce the unwanted effects.

One approach that I use is based on the actual differences between the original and the sharpened data. As "TylerB" also suggested, it's based on blending between the two layers. In Photoshop it can be approached with a Blend-if layer adjustment, like this:


What it basically does is (linearly) reduce the contribution of the sharpened layer as it approaches the clipping level. That would be simple to implement, and in MatLab one can also use more complex functions than linear interpolation of the alpha channel.
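In MatLab terms, a linear version of that roll-off might look like this (a sketch; the 0.8 threshold for where the fade begins is an arbitrary example value):

lo = 0.8;  hi = 1.0;                      % ramp from full effect down to none
alpha = (sharpened - lo) ./ (hi - lo);    % 0 below lo, 1 at hi, per element
alpha = min(max(alpha, 0), 1);            % clamp the ramp to [0,1]
blended = (1 - alpha) .* sharpened + alpha .* original;   % near clipping, the original wins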

Since you are working in floating point math, you can also use a gamma adjusted blending approach that was suggested by Nicolas Robidoux to reduce halo under/over-shoots when resampling (here). The goal was to avoid discontinuities (which manifest themselves as visual artifacts in brightness and color) in the transfer between the original and adjusted version.

In your case you could sharpen (or enhance acutance) a gamma adjusted version for the shadows, and one for the highlights, and blend them (with alpha proportional to luminance) in linear gamma space, before restoring the original gamma precompensation for display.

Cheers,
Bart


P.S. I agree with Jack that (given its dominant influence at f/45) a somewhat more Airy-disc-shaped deconvolution kernel (in 1 dimension) would perhaps allow better deconvolution than a simple Gaussian. You can also consider using a Richardson-Lucy deconvolution, which will be more effective than a simple one-pass deconvolution, or, if the image has a high S/N ratio, even a Van Cittert deconvolution; these may be available as predefined functions in MatLab.

I would at any rate not use a simple slice out of a 2-D Gaussian kernel, but rather a (discrete) line spread function (LSF) version of it (the sum of all kernel values in 1 dimension), where a discrete 1-dimensional version can be calculated directly by using:
g(x) = 0.5 * (erf((x + fFactor * 0.5) / (sqrt(2) * xSigma)) - erf((x - fFactor * 0.5) / (sqrt(2) * xSigma)))
where erf() is the 'error function', available directly in Matlab as erf(), and fFactor is a sensel fill-factor (approaching 0.0 for a point sample, up to 1.0 for full gap-less micro-lens coverage). The fill-factor may function as a poor man's approximation to the more complex interaction between an Airy disc pattern and finite-area samplers such as our capture device sensels.
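In Matlab, that kernel could be built like this (a sketch; the sigma, fill-factor, and iteration values are examples, and deconvlucy is the Image Processing Toolbox's Richardson-Lucy implementation):

xSigma  = 1.2;                        % example Gaussian sigma, in pixels
fFactor = 1.0;                        % fill factor: gap-less micro-lenses
half = ceil(4 * xSigma);              % support out to +/- 4 sigma
x = -half:half;                       % discrete tap positions
g = 0.5 * (erf((x + fFactor*0.5) ./ (sqrt(2)*xSigma)) ...
         - erf((x - fFactor*0.5) ./ (sqrt(2)*xSigma)));
g = g ./ sum(g);                      % normalize the weights to sum to 1

restored = deconvlucy(img, g, 20);    % 1D Richardson-Lucy with the LSF as PSF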
« Last Edit: August 30, 2014, 08:54:57 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Ameliorating clipping in sharpening operations
« Reply #7 on: August 30, 2014, 11:48:27 am »

What kind of sharpening are you trying to accomplish, Capture or Creative? Based on the bullet list in your link, I would think a combination of both.

Thanks, Jack. I think it would be best to think of the sharpening as entirely creative. The images are totally abstract, and I'm not trying to create an image that looks real in any sense.

OTOH, hitting the image with a pass of a diffraction compensating filter before doing the other filtering probably can't hurt anything.

Jim

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Ameliorating clipping in sharpening operations
« Reply #8 on: August 30, 2014, 12:00:52 pm »

I would at any rate not use a simple slice out of a 2-D Gaussian kernel, but rather a (discrete) line spread function (LSF) version of it (the sum of all kernel values in 1 dimension), where a discrete 1-dimensional version can be calculated directly by using:
g(x) = 0.5 * (erf((x + fFactor * 0.5) / (sqrt(2) * xSigma)) - erf((x - fFactor * 0.5) / (sqrt(2) * xSigma)))
where erf() is the 'error function', available directly in Matlab as erf(), and fFactor is a sensel fill-factor (approaching 0.0 for a point sample, up to 1.0 for full gap-less micro-lens coverage). The fill-factor may function as a poor man's approximation to the more complex interaction between an Airy disc pattern and finite-area samplers such as our capture device sensels.

Hi Bart,

Interesting, I assume that's for a Gaussian.  What do you mean by 'the sum of all kernel values in one dimension'?
Logged

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Ameliorating clipping in sharpening operations
« Reply #9 on: August 30, 2014, 12:23:21 pm »

But I have a feeling that what you'd like is more of a Local Contrast effect (right?). For that I head straight to Nik's Tonal Contrast 4 or Topaz Clarity (lately more the latter than the former).

Both are 2D, though.

As you know, a rough tool for Local Contrast is also a High Radius Low Amplitude USM step, which should be easy to implement in MatLab.

It's already there, but I find that, for sigmas of over 40 pixels or so, I'm better off with 2D programs than my simple luminance USM.

As for clipping: I would go for compression plus a curve to make the final result fit into 0-1, keeping in mind that no 'sharpening' worth its name should ever result in under/overshoots of more than 10% of full scale :)

Then I'm way off in left field here; one of my 1D sharpened images has a max of 5 in the red channel, 3 in the green channel, and 1.4 in the blue channel.

Jim

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Ameliorating clipping in sharpening operations
« Reply #10 on: August 30, 2014, 12:34:11 pm »

I would at any rate not use a simple slice out of a 2-D Gaussian kernel, but rather a (discrete) line spread function (LSF) version of it (the sum of all kernel values in 1 dimension)

I made the change, and I can't tell the images produced with the 1D slice and the summed rows apart. Hmm.

On to blending layers...

Jim
« Last Edit: August 30, 2014, 01:45:35 pm by Jim Kasson »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: Ameliorating clipping in sharpening operations
« Reply #11 on: August 30, 2014, 01:38:25 pm »

Hi Bart,

Interesting, I assume that's for a Gaussian.

Hi Jack,

Yes, and a 2D Gaussian has the nice property of being separable into two orthogonal 1D Gaussians (LSFs), which speeds up calculations a lot. Convolving with a 7x7 kernel requires 49 multiplications and additions per pixel; a convolution with two separated 7x1 1D kernels requires only 14 multiplications and additions, in the spatial domain.
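A quick MatLab check of that equivalence (conv2 accepts the two 1D vectors directly; img is a single color plane here):

g1 = fspecial('gaussian', [7 1], 1.5);    % 7x1 1D Gaussian (column vector)
G2 = g1 * g1';                            % outer product: the full 7x7 kernel
full2D    = conv2(img, G2, 'same');       % 49 multiply-adds per pixel
separated = conv2(g1, g1', img, 'same');  % two 1D passes, 14 per pixel
max(abs(full2D(:) - separated(:)))        % differs only by rounding error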

Using floating-point math keeps the intermediate rounding errors to a minimum.

One can of course just use a slice of a 2D Gaussian kernel as an approximation (somewhat similar to a point sample), but it will not function like the separated 1D version; the result has a different shape. Whether this precision is required or relevant for Jim's experiment was not evident from his initial info, but accurate measurement now seems less important, so my approach may be overkill for his particular (more creative) use.

Quote
What do you mean by 'the sum of all kernel values in one dimension'?

When we integrate the full continuous 2D Gaussian (for the purpose of separating it into two 1D kernels), we need to use an "infinite support" in one direction and a discrete support interval in the other dimension to construct the separated discrete 1D Gaussian. When starting with a limited-support 2D Gaussian kernel, we can approximate that by e.g. adding all vertical columns for each discrete horizontal kernel position. Using the above-mentioned formula allows us to do that much more accurately, and as a bonus allows us to inject a fill-factor parameter (which modifies the shape of the Gaussian).

So, in a very simplified 3x3 version, the principle would look like:

1 2 1
2 3 2
1 2 1
------
4 7 4 (~= a 1D separation)

And multiplying two such orthogonal 1D {4 7 4} kernels (with proper scaling, and much higher precision) would allow the 2D kernel to be reconstructed (only if the kernel were a pure Gaussian, so this example is approximate at best).
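In MatLab the two routes are easy to compare; this sketch uses a genuinely separable Gaussian kernel, since the 3x3 example above is only approximately separable:

K = fspecial('gaussian', 5, 1.0);   % a true (separable) 5x5 Gaussian
approx1D = sum(K, 1);               % column sums, as in the example above
[U, S, V] = svd(K);                 % formal route: singular value decomposition
k1 = U(:,1) * sqrt(S(1,1));         % vertical 1D factor
k2 = V(:,1)' * sqrt(S(1,1));        % horizontal 1D factor
max(max(abs(K - k1 * k2)))          % ~0: the outer product rebuilds K exactly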

Again, the precision is probably not needed for Jim's experiment, but it can be used for real deconvolution tasks in the spatial domain that need to be sped up by reducing the processing load. Gaussian functions are our friends; they have useful properties ... That all works best with modest-sized kernels, because larger deconvolution tasks are more efficiently done in the frequency domain (although conversion to and from Fourier space adds its own processing overhead).

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: Ameliorating clipping in sharpening operations
« Reply #12 on: August 30, 2014, 01:45:01 pm »

I'm not sure the exact shape is important here, but it's easy to make that change: sum all the kernel rows to create the 1xn kernel (the Matlab convention is rows first, columns second, color planes third).

Indeed, if the use is more for creative sharpening then your method of taking a slice will also work.

Quote
That will have the effect of making the kernel less broad (more "pointy"), right?

Yes, but it is more relevant with 2D kernel separation in mind.

Quote
Thanks so much for all your help, Bart.

You're welcome. I'm a believer in sharing such info; as you share your insights and Matlab code, we can increase the synergy of such research projects.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Ameliorating clipping in sharpening operations
« Reply #13 on: August 30, 2014, 04:49:29 pm »

Using floating-point math keeps the intermediate rounding errors to a minimum.

One can of course just use a slice of a 2D Gaussian kernel as an approximation (somewhat similar to a point sample), but it will not function like the separated 1D version; the result has a different shape. Whether this precision is required or relevant for Jim's experiment was not evident from his initial info, but accurate measurement now seems less important, so my approach may be overkill for his particular (more creative) use.

When we integrate the full continuous 2D Gaussian (for the purpose of separating it into two 1D kernels), we need to use an "infinite support" in one direction and a discrete support interval in the other dimension to construct the separated discrete 1D Gaussian. When starting with a limited-support 2D Gaussian kernel, we can approximate that by e.g. adding all vertical columns for each discrete horizontal kernel position. Using the above-mentioned formula allows us to do that much more accurately, and as a bonus allows us to inject a fill-factor parameter (which modifies the shape of the Gaussian).

So, in a very simplified 3x3 version, the principle would look like:

1 2 1
2 3 2
1 2 1
------
4 7 4 (~= a 1D separation)

And multiplying two such orthogonal 1D {4 7 4} kernels (with proper scaling, and much higher precision) would allow the 2D kernel to be reconstructed (only if the kernel were a pure Gaussian, so this example is approximate at best).

Ok thanks Bart, so that's how you separate a 2D kernel into two orthogonal 1D ones...  The procedure is pretty easy but I am going to have to think a bit about why it works :)
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: Ameliorating clipping in sharpening operations
« Reply #14 on: August 30, 2014, 05:18:03 pm »

Ok thanks Bart, so that's how you separate a 2D kernel into two orthogonal 1D ones...  The procedure is pretty easy but I am going to have to think a bit about why it works :)

Yes, it's not all that intuitive. The formal procedure requires a lot of matrix math: determining whether a kernel is separable (it must be rank 1, which is why the determinant is zero), then Singular Value Decomposition into the different components.

However, Gaussians have several useful properties, although one needs to account for their infinite support. That's why one uses functions that can easily calculate the area below the curve (the integral): e.g. take the area between 0 and infinity, subtract the area between 1 and infinity, and the difference is the area between 0 and 1; then do that in both dimensions. Discrete kernel element weights can thus be calculated, e.g. from -0.5 to +0.5 for the centered zero-offset position, in 2 dimensions.

Elliptical Gaussians use different sigmas for the horizontal and vertical dimensions of a kernel, as for cameras with different OLPF blur in the two dimensions, or with more blur (e.g. mirror-shake induced) in one dimension.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Ameliorating clipping in sharpening operations
« Reply #15 on: August 31, 2014, 03:21:39 am »

Yes, it's not all that intuitive. The formal procedure requires a lot of matrix math: determining whether a kernel is separable (it must be rank 1, which is why the determinant is zero), then Singular Value Decomposition into the different components.

However, Gaussians have several useful properties, although one needs to account for their infinite support. That's why one uses functions that can easily calculate the area below the curve (the integral): e.g. take the area between 0 and infinity, subtract the area between 1 and infinity, and the difference is the area between 0 and 1; then do that in both dimensions. Discrete kernel element weights can thus be calculated, e.g. from -0.5 to +0.5 for the centered zero-offset position, in 2 dimensions.

Elliptical Gaussians use different sigmas for the horizontal and vertical dimensions of a kernel, as for cameras with different OLPF blur in the two dimensions, or with more blur (e.g. mirror-shake induced) in one dimension.

Right, thanks again, intuitively it makes sense though it will take a little time to sink in :)

Would you say that for sensors with an AA filter in one direction only (the vertical OR the horizontal, as in late Exmors such as the D610, A7, XA1, etc.) one could apply two different 1D deconvolution PSFs separately, first the horizontal one and then the vertical one?  Would the 1D PSFs estimated by capturing slanted edges in the two directions be appropriate?

Given the small size of PSF parameters, what program would you recommend to apply the separate 1D deconvolutions?

PS Last question: for a symmetrical Gaussian, can we take the 1D LSF obtained by capturing a single slanted edge and simply rotate it 360 degrees about the vertical axis to estimate the corresponding 2D Gaussian PSF?
« Last Edit: August 31, 2014, 03:32:23 am by Jack Hogan »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: Ameliorating clipping in sharpening operations
« Reply #16 on: August 31, 2014, 10:10:38 am »

Right, thanks again, intuitively it makes sense though it will take a little time to sink in :)

Would you say that for sensors with an AA filter in one direction only (the vertical OR the horizontal, as in late Exmors such as the D610, A7, XA1, etc.) one could apply two different 1D deconvolution PSFs separately, first the horizontal one and then the vertical one?  Would the 1D PSFs estimated by capturing slanted edges in the two directions be appropriate?

Yes, as an approximation that should work 'reasonably' well (although possibly not as well as an accurate 2D kernel, or two 1D separated versions of it), as long as the elliptical PSF is aligned with the pixel matrix, i.e. not also rotated. So the horizontal and vertical MTFs derived from slanted edges can be used for that since they are orthogonal and they characterize the horizontal and vertical pixel blurs exactly.
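As a MatLab sketch (the sigma values are invented for the example; deconvlucy accepts a row or column vector as a 1D PSF):

x = -6:6;                                  % discrete tap positions
sigH = 0.7;  sigV = 1.1;                   % e.g. more blur in the AA direction
psfH = exp(-x.^2 / (2*sigH^2));   psfH = psfH / sum(psfH);   % 1 x n row: horizontal
psfV = exp(-x.^2 / (2*sigV^2))';  psfV = psfV / sum(psfV);   % n x 1 column: vertical
tmp = deconvlucy(img, psfH, 15);           % deconvolve the horizontal blur first
out = deconvlucy(tmp, psfV, 15);           % then the vertical blur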

Quote
Given the small size of PSF parameters, what program would you recommend to apply the separate 1D deconvolutions?

I think image processing applications that calculate in floating-point precision are best equipped to do that, but even those that make careful use of proper 16-bit/channel processing can go a long way. ImageMagick is free, is available for most computer platforms, and allows straightforward discrete custom kernel convolutions, either 1D or 2D. The current (beta) version 7 will be available as precompiled floating-point binaries if I understood correctly, but the stable version 6 needs to be compiled yourself. So I often use IM version 6 for 16-b/ch accurate processing. It also has FFT processing capabilities, but I'll wait for a stable version 7 before diving into that (floating point is really a must for low-noise FFT results).

Another option would be ImageJ, though that also has a bit of a learning curve. It allows simple floating-point (de)convolutions as well, plus Fourier-space image manipulation (although one often has to treat each color channel separately).

One can also use mathematical packages like MatLab (or Octave, a free alternative) or Mathematica, with their image processing libraries.

Image processing applications for astronomical images also offer such capabilities, but their feature sets are usually tuned for more specific processing tasks, with deconvolution as a bonus. I like PixInsight very much for its tools and color-managed capabilities.

Quote
PS Last question: for a symmetrical Gaussian, can we take the 1D LSF obtained by capturing a single slanted edge and simply rotate it 360degrees along the vertical axis to estimate the relative 2D Gaussian PSF?

Yes, that should work reasonably well for a continuous LSF, but it's not going to give exactly the same values as separating a captured or modeled discrete 2D Gaussian kernel. One also needs to take care of scaling issues (the kernel weights must sum to 1.0). For 2D convolution kernels I prefer to use the discrete versions of the Gaussian, although there may be some differences, among other things due to sensel size.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Ameliorating clipping in sharpening operations
« Reply #17 on: August 31, 2014, 01:22:05 pm »

Excellent, thank you Bart!
Logged

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Ameliorating clipping in sharpening operations
« Reply #18 on: August 31, 2014, 05:16:03 pm »

I think I'm about ready to declare victory. Part of the reason for my progress has been redefining success. I'm just doing the higher-frequency (smaller kernel -- say, up to a 15-pixel sigma) sharpening one-dimensionally; I'll use Topaz Detail for the lower-frequency work. One reason I can do that is that the noise in the image is so low; another is that I'm using the first pass of Topaz Detail on 56000x6000-pixel images, and I'll be squishing them in the time (long) dimension later, so round kernels become elliptical after squeezing. Doing the 1D sharpening with small kernels makes visible clipping less likely.

Another important reason for my progress was that I've found a way to make the adjustments of the 1D image sharpening interactive. Rather than have the Matlab program construct the entire 1D filtered image, I'm having it write out monochromatic sharpened images at each kernel size, with aggressive (amazingly -- at least to me -- high) weights:
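The write-out loop itself is nothing fancy; schematically it looks like this (the file names, sigmas, and weights here are made up, and img is a luminance plane):

sigmas  = [2 5 10 15];              % kernel sigmas, in pixels
weights = [4 6 8 10];               % aggressively high USM weights
for i = 1:numel(sigmas)
    g = fspecial('gaussian', [1, 6*sigmas(i)+1], sigmas(i));  % 1D row kernel
    blurred = imfilter(img, g, 'replicate');                  % 1D blur
    sharp   = img + weights(i) * (img - blurred);             % 1D USM
    sharp   = min(max(sharp, 0), 1);                          % clamp before writing
    imwrite(sharp, sprintf('sharp_sigma%02d.tif', sigmas(i)));
end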



Then I bring the original image plus layers for all the sharpened ones into Photoshop, and set the layer blend modes for the sharpened images to "Luminosity":



Then I adjust each layer's opacity to taste.

Finally, when I see objectionable clipping, I brush black into the layer mask for the layer(s) that are making it happen.

Not mathematically elegant. Not really what I was looking for at all when I started this thread. But it gets the job done, and well.

I may run into a problem with this method down the road, but it's working for me on the one image I've tried it on.

Thanks to everybody, especially Bart.

Jim

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: Ameliorating clipping in sharpening operations
« Reply #19 on: September 01, 2014, 03:08:17 am »

I think I'm about ready to declare victory. Part of the reason for my progress has been redefining success.

;)

Quote
I'm just doing the higher-frequency (smaller kernel -- say, up to a 15-pixel sigma) sharpening one-dimensionally; I'll use Topaz Detail for the lower-frequency work. One reason I can do that is that the noise in the image is so low; another is that I'm using the first pass of Topaz Detail on 56000x6000-pixel images, and I'll be squishing them in the time (long) dimension later, so round kernels become elliptical after squeezing. Doing the 1D sharpening with small kernels makes visible clipping less likely.

That will take a while for Detail to do the processing, but it's good to know it can do that. Its memory management is apparently programmed well enough.

Quote
Another important reason for my progress was that I've found a way to make the adjustments of the 1D image sharpening interactive. Rather than have the Matlab program construct the entire 1D filtered image, I'm having it write out monochromatic sharpened images at each kernel size, with aggressive (amazingly -- at least to me -- high) weights:
...
Then I bring the original image plus layers for all the sharpened ones into Photoshop, and set the layer blend modes for the sharpened images to "Luminosity":

That will give lots of control, nice.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==