
Author Topic: Most favorable downsampling percentage  (Read 36704 times)

Jim Kasson

Most favorable downsampling percentage
« on: September 18, 2014, 09:38:36 pm »

There was a popular thread a while ago with a similar title, only referring to up-sampling.

http://www.luminous-landscape.com/forum/index.php?topic=77949.0

In doing some testing of downsampling algorithms as reducers of image noise, I found some interesting -- at least to me; I should get out more -- properties of power-of-two ratios for downsampling.

http://blog.kasson.com/?p=7101

http://blog.kasson.com/?p=7123

OK, now I'm going to run to another computer and post some graphs.

I'm back. Here's the rms value you get for an image consisting of a dc level of 0.5 with added Gaussian noise (sigma = 0.1), downsampled with a few common algorithms and no AA filtering:
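For concreteness, a minimal Matlab sketch of that kind of measurement (illustrative only, not the actual test harness): synthesize the noise image, downsample with imresize with antialiasing turned off, and record the rms noise at each magnification.

Code:
% Illustrative sketch, not the actual test code
rng(1);                                    % arbitrary seed, for repeatability
img  = 0.5 + 0.1 * randn(4000, 4000);      % dc = 0.5, Gaussian noise sigma = 0.1
mags = 0.05:0.01:0.95;                     % magnifications to test
rmsNoise = zeros(size(mags));
for i = 1:numel(mags)
    small = imresize(img, mags(i), 'bilinear', 'Antialiasing', false);
    rmsNoise(i) = std(small(:));           % rms deviation from the dc level
end
plot(mags, rmsNoise), xlabel('magnification'), ylabel('rms noise')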



On Bart's website, there is a post that indicates that AA filtering with a Gaussian kernel with sigma = 0.2 or 0.3 over the magnification ratio is a reasonable compromise between sharpness and aliasing.
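As a sketch of how that rule of thumb can be applied in Matlab (illustrative code assuming sigma equal to 0.2 or 0.3 divided by the magnification, not Bart's actual recipe): blur with the Gaussian first, then downsample with the resampler's own antialiasing turned off.

Code:
% Illustrative sketch of the Gaussian AA pre-filter idea
img     = 0.5 + 0.1 * randn(4000, 4000);   % synthetic test image, as above
mag     = 0.4;                             % example magnification
sigmaAA = 0.2 / mag;                       % or 0.3 / mag for the stronger filter
h       = fspecial('gaussian', 2 * ceil(3 * sigmaAA) + 1, sigmaAA);
blurred = imfilter(img, h, 'symmetric');   % Gaussian AA filter, symmetric edge extension
small   = imresize(blurred, mag, 'bilinear', 'Antialiasing', false);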

Here's the rms error with AA filtering with a Gaussian kernel of 0.2 / magnification:



And the rms error with AA filtering with a Gaussian kernel of 0.3 / magnification:



It's clear that there are some downressing percentages that are more effective at reducing noise in the image than other percentages that are very close. In fact, it looks like there are some discontinuities in the curves.

In the past, when faced with down-resing by big ratios, I have done it in factor-of-two stages. I dunno why; it just seemed like a good idea. Maybe better than I knew?

Jim








Bart_van_der_Wolf

Re: Most favorable downsampling percentage
« Reply #1 on: September 19, 2014, 07:43:22 am »

Quote
There was a popular thread a while ago with a similar title, only referring to up-sampling.

http://www.luminous-landscape.com/forum/index.php?topic=77949.0

In doing some testing of downsampling algorithms as reducers of image noise, I found some interesting -- at least to me; I should get out more -- properties of power-of-two ratios for downsampling.

http://blog.kasson.com/?p=7101

http://blog.kasson.com/?p=7123

Hi Jim,

Upsampling is an operation that comes with its own specific issues (upsampling blur and, e.g., halo and blocking artifacts).

Down-sampling comes with its own specific set of challenges: maintaining high resolution while reducing the risk of (almost guaranteed) aliasing and ringing, and preferably blending colors and luminance in linear gamma space for better accuracy. Noise reduction is IMHO not the primary goal (losing resolution may be worse than a bit of organic-looking noise), but noise should not be made worse by the other artifacts (like aliasing) either.

That's why good down-sampling methods mainly try to avoid the creation of artifacts. Should an image have much more low spatial frequency noise than high spatial frequency noise, then downsampling may produce a sensation of increased noise at the new smallest detail levels.

Whether noise gets reduced is therefore largely related to the source image's noise power spectrum, assuming a proper down-sampling method that avoids aliasing artifacts. White noise will have a scale-invariant level of noise. The finest noise will vanish because it cannot be resolved in a smaller image anyway, so a properly windowed resampling method, e.g. a Lanczos-windowed sinc, will automatically reduce the contribution of unresolvable noise levels (to reduce aliasing). In pathological cases one may even pre-filter the data (proper denoising is preferred over simple blur) for better noise(-aliasing) properties.
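For reference, the Lanczos-windowed sinc mentioned above, written out as a small Matlab function (the standard textbook form, not any particular implementation); a = 3 gives the common Lanczos3 kernel:

Code:
function y = lanczos(x, a)
% Lanczos-windowed sinc kernel with support |x| < a (standard form, illustrative)
y  = zeros(size(x));
in = abs(x) < a & x ~= 0;
y(in)     = a * sin(pi * x(in)) .* sin(pi * x(in) / a) ./ (pi^2 * x(in).^2);
y(x == 0) = 1;
end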

Quote
On Bart's website, there is a post that indicates that AA filtering with a Gaussian kernel with sigma = 0.2 or 0.3 over the magnification ratio is a reasonable compromise between sharpness and aliasing.

Well, Gaussian blur is a relatively crude filter, and it blurs more than needed (at spatial frequencies that need to remain resolvable after down-sampling) if we can use better methods.

Quote
It's clear that there are some downressing percentages that are more effective at reducing noise in the image than other percentages that are very close. In fact, it looks like there are some discontinuities in the curves.

In the past, when faced with down-resing by big ratios, I have done it in factor-of-two stages. I dunno why; it just seemed like a good idea. Maybe better than I knew?

I would also not only look at the noise effects in isolation (unless one needs to down-sample images of random noise). Do also check whether other attributes of image quality, e.g. resolution and ringing take a hit, which may explain some of the dips in RMS noise near integer fractions of scale.

Maybe there is also some explanation to be found in the exact implementation of the algorithm. Also be aware of image edge artifacts that may skew the total image metrics. Photoshop, for example, produces artifacts at image edges because it doesn't account for 'virtual pixels' when the filter window reaches the edge of the image data.

Maybe a comparison with the ImageMagick implementation of certain filters is also useful, to estimate the potential effects of specific (e.g. Matlab) algorithm implementations.

And then there is the nature of the original source of the noise (i.e. the power spectrum of the noise). Bayer CFA demosaicing may produce less high spatial frequency noise, but does produce some slightly lower spatial frequency noise that is only vaguely related to the Poisson nature of per sensel shot noise.

And finally, the matter of down-sampling in linear gamma space may be important when down-sampling gamma pre-compensated images. It will also have a different effect on the shadows and highlights of an image, due to different amplitudes of overshoot versus undershoot with some resampling algorithms, which may even result in clipping if not done in floating-point precision.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Wayne Fox

Re: Most favorable downsampling percentage
« Reply #2 on: September 19, 2014, 02:00:20 pm »

Your math and graphs are beyond my pay grade, but for quite some time now I have used a down-sampling method I learned from Jack Flesher which ends up with better small files for the web ... basically stepping down in exact 50% increments to obtain the final size. I resize the original file to 8 times what I want the final size to be. Sometimes this is a pretty big step up, but most of the time it is a very small step down. I then downsample 50% three consecutive times (adding a very slight amount of sharpening in two of the steps).
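A rough Matlab transcription of that stepped workflow (illustrative only; the file name, target size, and sharpening amounts are placeholders, not Jack Flesher's actual settings):

Code:
% Illustrative sketch; file name, target size, and sharpening amounts are placeholders
img    = im2double(imread('source.tif'));  % hypothetical source image
target = 800;                              % hypothetical final width in pixels
work   = imresize(img, [NaN target * 8], 'bicubic');  % resize to 8x the final size
for k = 1:3
    work = imresize(work, 0.5, 'bicubic'); % three successive 50% reductions
    if k < 3
        work = imsharpen(work, 'Amount', 0.3);  % slight sharpening in two of the steps
    end
end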

The end result is visually better than a single resize down in Photoshop. I’m sure that if I understood the process better this could be refined even further (and I’ll try using something other than bicubic to see what happens), but it’s simple (in an action) and does give me better results.

Jim Kasson

Re: Most favorable downsampling percentage
« Reply #3 on: September 20, 2014, 12:27:24 pm »

Quote
Noise reduction is IMHO not the primary goal (losing resolution may be worse than a bit of organic-looking noise), but noise should not be made worse by the other artifacts (like aliasing) either.

Maybe not the primary goal, but, IMO, a primary goal. I think there's a three-legged stool here: maintaining (or, proportionally, increasing) sharpness, avoiding artifacts, and reducing photon noise. Why the last? First, in my photographs, where noise is an issue, it's photon noise. I don't use the camera in a way that makes read noise important, and I've never had a camera where the PRNU was bad enough to bother me. Second, as time goes by, I find myself using cameras with tighter pixel pitch. Technology has driven up coverage, QE, and FWC/unit area, but we're reaching the end of that road, and my D810, D800E, and my Sony a7R (to say nothing of my H2D-39) all have more photon noise per pixel than my D4 or D3s. (Yeah, I know. That's too many cameras. I can't seem to bring myself to sell the older ones.)

Conventional wisdom, and DxOMark, says that I should be able to produce images from the 30+ MP cameras (with the exception of the H2D) that match the sharpness, resolution and photon noise from the 12 - 16 MP cameras. If I'm allowed to tweak the noise reduction setting in Lr, I can not only do that, I can usually do better. But doing that requires me to pick noise reduction settings that are inappropriate for full-resolution images from the high-resolution cameras.

I like the idea of a workflow that does all the creative work at full resolution (or, in the case of Lr, on a proxy image), with the generation of the copy to be printed or displayed, at the appropriate resolution, left as a no-thinking, mechanical operation. I've got it so that's pretty much true for upsampling. I'd like it to be true for downsampling. It looks like the common downsampling algorithms, even with AA filtering, don't do too well at noise reduction at low reduction ratios. I'm trying to understand that and, eventually, find a way around it.

It is probably the case that, as with sharpness, noise reduction with linear filters and conventional downsampling trades off against artifact prevention. Certainly, if noise reduction is a priority, it is tempting to precede downsampling with a brick-wall lowpass filter with its cutoff at the Nyquist frequency of the new resolution, but that will ring like a gong.
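A Matlab sketch of that brick-wall idea (illustrative, not code used for the tests here): zero out everything above the new Nyquist in the frequency domain before downsampling; as noted, the hard cutoff rings badly in the spatial domain.

Code:
% Illustrative sketch of a brick-wall pre-filter at the new Nyquist
img = 0.5 + 0.1 * randn(4000, 4000);       % synthetic noise image
mag = 0.4;                                 % example reduction ratio
[rows, cols] = size(img);
[u, v]   = meshgrid((1:cols) - floor(cols/2) - 1, (1:rows) - floor(rows/2) - 1);
keep     = (abs(u) <= mag * cols / 2) & (abs(v) <= mag * rows / 2);  % new Nyquist
filtered = real(ifft2(ifftshift(fftshift(fft2(img)) .* keep)));
small    = imresize(filtered, mag, 'bilinear', 'Antialiasing', false);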

If nonlinear filtering is the ticket to mastering noise upon downsampling, I'd like to understand implementations that work at all reduction ratios. I can't figure out how to do a fractional-pixel median filter, for example.

Another motivation for this work is my continuing project of modeling and comparing cameras with different pixel pitches (currently down to 1.1 um), but that's a thread that we don't need to explore now.

Remember that I am a reformed color scientist, inexpert in matters involving the wider world of digital signal processing, but am willing to learn.

Jim


Jim Kasson

Re: Most favorable downsampling percentage
« Reply #4 on: September 20, 2014, 12:35:32 pm »

Quote
Well, Gaussian blur is a relatively crude filter, and it blurs more than needed (at spatial frequencies that need to remain resolvable after down-sampling) if we can use better methods.

Here's where I got the idea:

http://bvdwolf.home.xs4all.nl/main/foto/down_sample/down_sample.htm


Quote
I would also not only look at the noise effects in isolation (unless one needs to down-sample images of random noise). Do also check whether other attributes of image quality, e.g. resolution and ringing take a hit, which may explain some of the dips in RMS noise near integer fractions of scale.

On my list of things to look at. Thanks.

Quote
Maybe there is also some explanation to be found in the exact implementation of the algorithm. Also be aware of image edge artifacts that may skew the total image metrics. Photoshop, for example, produces artifacts at image edges because it doesn't account for 'virtual pixels' when the filter window reaches the edge of the image data.

I'm extending the images symmetrically before filtering. I'll try cropping after filtering; they are 4000x4000 images, so the edge effects should get buried.
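A minimal sketch of that extend-then-crop step (the pad width and magnification are arbitrary illustrative choices):

Code:
% Illustrative sketch of symmetric extension followed by cropping
img      = 0.5 + 0.1 * randn(4000, 4000);
mag      = 0.4;
padWidth = 32;                             % chosen larger than the filter support
padded   = padarray(img, [padWidth padWidth], 'symmetric');
small    = imresize(padded, mag, 'bilinear');
margin   = ceil(padWidth * mag);           % the padded border, scaled to the new size
small    = small(margin+1:end-margin, margin+1:end-margin);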

Thanks,

jim

Jim Kasson

Re: Most favorable downsampling percentage
« Reply #5 on: September 20, 2014, 02:20:45 pm »

Quote
I would also not only look at the noise effects in isolation (unless one needs to down-sample images of random noise). Do also check whether other attributes of image quality, e.g. resolution and ringing take a hit, which may explain some of the dips in RMS noise near integer fractions of scale.

Yep, here is a look at two spectra of downsampled Gaussian noise images using bilinear interpolation with no AA filtering:
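One way such a spectrum can be computed in Matlab (an illustrative approach; the plotting code behind these graphs isn't shown): average the row-wise FFT magnitudes of the downsampled noise.

Code:
% Illustrative sketch of a 1-D amplitude spectrum of downsampled noise
img   = 0.5 + 0.1 * randn(4000, 4000);
small = imresize(img, 0.5, 'bilinear', 'Antialiasing', false);
S     = abs(fft(small - mean(small(:)), [], 2));   % spectrum of each row
spec  = mean(S, 1);                                % averaged amplitude spectrum
f     = (0:size(small, 2) - 1) / size(small, 2);   % cycles per pixel
plot(f(1:end/2), spec(1:end/2)), xlabel('cycles/pixel'), ylabel('amplitude')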







There's something special about 0.5 magnification. I think I understand it, but I'll have to look at the details of the algorithm.
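One candidate explanation, sketched under the assumption of independent per-pixel noise (a guess to be checked against the algorithm details, not an established result): a bilinear output sample is a weighted average of two input pixels per axis with weights (1 - t, t), so the noise variance is scaled by t^2 + (1 - t)^2 per axis, which is smallest at t = 0.5 -- and at exactly 0.5 magnification every output sample lands at t = 0.5.

Code:
% Illustrative sketch of the per-axis variance factor of bilinear sampling
t = 0:0.01:1;                              % fractional sample offset
varGain = t.^2 + (1 - t).^2;               % per-axis noise variance factor
plot(t, varGain), xlabel('fractional offset t'), ylabel('variance factor')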

Jim

Bart_van_der_Wolf

Re: Most favorable downsampling percentage
« Reply #6 on: September 20, 2014, 02:49:30 pm »

Quote
Here's where I got the idea:

http://bvdwolf.home.xs4all.nl/main/foto/down_sample/down_sample.htm

Hi Jim,

I know that I mentioned it as a possible fix, but it's inferior to many of ImageMagick's filters.

Quote
I'm extending the images symmetrically before filtering. I'll try cropping after filtering; they are 4000x4000 images, so the edge effects should get buried.

I know, it's just one of the many shortcomings of many down-sampling algorithms. Just something to be aware of, probably not a big issue for analysis.

I'm still puzzled by the dips in your noise data. I have attached a summary of EWA down-sampled Gaussian noise images (AdobeRGB gamma 2.2 precompensated), using the adjusted Keys Cubic filter with default deconvolution sharpening as implemented in my script. I made an extra effort of sampling near the 50% scale point, and found no evidence of a dip ...

The noise reduction is almost perfectly in line with the (EWA) averaging window size.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Jim Kasson

Re: Most favorable downsampling percentage
« Reply #7 on: September 21, 2014, 11:26:00 am »

Quote
I'm still puzzled by the dips in your noise data. I have attached a summary of EWA down-sampled Gaussian noise images (AdobeRGB gamma 2.2 precompensated), using the adjusted Keys Cubic filter with default deconvolution sharpening as implemented in my script. I made an extra effort of sampling near the 50% scale point, and found no evidence of a dip ...

The noise reduction is almost perfectly in line with the (EWA) averaging window size.

Bart, that is one beautiful curve! I have been resisting adding one more image processing tool to my toolbox, but you've convinced me. I'll download ImageMagick, run your script to verify, then look at it for clues as to how to call ImageMagick from Matlab, and report results. Then I'll look at integrating ImageMagick downsampling into my camera model. That'll take a while.
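A hypothetical sketch of driving ImageMagick from Matlab with system(); the file names and convert options are illustrative placeholders, not the ones in Bart's script:

Code:
% Illustrative sketch; file names and convert options are placeholders
inFile  = 'noise4000.tif';                 % hypothetical input file
outFile = 'noise_down.tif';                % hypothetical output file
cmd = sprintf('convert "%s" -filter Lanczos -resize 50%% -depth 16 "%s"', ...
              inFile, outFile);
status = system(cmd);                      % shell out to ImageMagick
if status ~= 0
    error('ImageMagick convert failed');
end
small = double(imread(outFile)) / 65535;   % back into Matlab, scaled to [0,1]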


I will also report on how the Ps (and maybe Lr) downsampling tools work wrt noise  -- maybe there's a problem with the Matlab imresize code. For extra credit, maybe QImage, too.

Thanks,

Jim

Bart_van_der_Wolf

Re: Most favorable downsampling percentage
« Reply #8 on: September 22, 2014, 02:52:29 am »

Quote
Bart, that is one beautiful curve! I have been resisting adding one more image processing tool to my toolbox, but you've convinced me. I'll download ImageMagick, run your script to verify, then look at it for clues as to how to call ImageMagick from Matlab, and report results. Then I'll look at integrating ImageMagick downsampling into my camera model. That'll take a while.

Jim, alternatively you could have a look at the 'resize.c' source file from the ImageMagick code distribution, and see if you can implement some of the functionality as Matlab functions. The CubicBC() interpolation filter is simple enough to do in a few lines of code, and I saw that various windowing functions have been refactored by Nicolas Robidoux for efficient execution. From there on you could experiment with more complex interpolation schemes than simple tensor approaches, like Elliptical Weighted Averaging (EWA). There are of course lots of refinements already implemented in the IM code base, so the easier route is to link to the IM binaries, but doing the code oneself is a good learning experience if one also wants to gain a better fundamental understanding of the factors involved.
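For illustration, the BC-family cubic that CubicBC() evaluates, in the standard Mitchell-Netravali form (a transcription of the published formula, not the ImageMagick source); B = 0, C = 0.5 gives the Keys / Catmull-Rom cubic:

Code:
function y = cubicBC(x, B, C)
% BC-family cubic kernel, support |x| < 2 (standard published form, illustrative)
ax  = abs(x);
y   = zeros(size(x));
in1 = ax < 1;
in2 = ax >= 1 & ax < 2;
y(in1) = ((12 - 9*B - 6*C) .* ax(in1).^3 + (-18 + 12*B + 6*C) .* ax(in1).^2 ...
          + (6 - 2*B)) / 6;
y(in2) = ((-B - 6*C) .* ax(in2).^3 + (6*B + 30*C) .* ax(in2).^2 ...
          + (-12*B - 48*C) .* ax(in2) + (8*B + 24*C)) / 6;
end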

Quote
I will also report on how the Ps (and maybe Lr) downsampling tools work wrt noise  -- maybe there's a problem with the Matlab imresize code. For extra credit, maybe QImage, too.

More than enough things to fill 'a few' blog pages, enjoy.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

NicolasRobidoux

Re: Most favorable downsampling percentage
« Reply #9 on: September 22, 2014, 07:34:12 am »

Quote
...
Jim, alternatively you could have a look at the 'resize.c' source file from the ImageMagick code distribution, and see if you can implement some of the functionality as Matlab functions.
...
If you are interested in implementing EWA with a Keys cubic, you can also look at the implementation of the downsampling component of the LoHalo image resampling function: https://git.gnome.org/browse/gegl/tree/gegl/buffer/gegl-sampler-lohalo.c There is some amount of obfuscation resulting from optimization through the use of mipmaps, which you can skip. And things are a bit more complicated given that the resampler can handle arbitrary warps; with the aspect ratio preserved, all ellipses are trivially computed discs. (The ImageMagick EWA implementation also handles arbitrary warps.)
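A toy illustration of the disc case described above (a sketch of the idea only, not LoHalo or ImageMagick code): with the aspect ratio preserved, each output pixel is a kernel-weighted average over a circular neighbourhood of radius support/mag in the input.

Code:
function out = ewaResizeToy(img, mag, kernel, support)
% Toy EWA downsampler, illustrative only; kernel is a function handle on
% radial distance, e.g. @(d) cubicBC(d, 0, 0.5).
[inR, inC] = size(img);
outR = round(inR * mag);
outC = round(inC * mag);
out  = zeros(outR, outC);
r    = support / mag;                      % disc radius in input pixels
for oy = 1:outR
    cy = (oy - 0.5) / mag + 0.5;           % output pixel centre in input coordinates
    ys = max(1, floor(cy - r)):min(inR, ceil(cy + r));
    for ox = 1:outC
        cx = (ox - 0.5) / mag + 0.5;
        xs = max(1, floor(cx - r)):min(inC, ceil(cx + r));
        [X, Y] = meshgrid(xs, ys);
        d = hypot(X - cx, Y - cy) * mag;   % radial distance in output-pixel units
        w = kernel(d) .* (d < support);
        out(oy, ox) = sum(sum(w .* img(ys, xs))) / sum(w(:));
    end
end
end

Called with, say, the cubicBC sketch above and support = 2, it downsamples a plane as out = ewaResizeToy(img, 0.5, @(d) cubicBC(d, 0, 0.5), 2).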

Jim Kasson

Photoshop uses a different bilinear interpolation algorithm than Matlab
« Reply #10 on: September 22, 2014, 07:00:17 pm »

The graph pretty much says it all.



Details here:

http://blog.kasson.com/?p=7152

Jim

Jim Kasson

Photoshop bicubic sharper has good noise performance...
« Reply #11 on: September 23, 2014, 05:33:16 pm »



...but achieves it by attenuating higher spatial frequencies (and boosting some):



The attenuation varies with reduction ratio:



Details here:

http://blog.kasson.com/?p=7168

jim

Jim Kasson

Lightroom has nearly ideal noise performance
« Reply #12 on: September 23, 2014, 05:40:57 pm »



Orange dots are ideal performance, blue are actual.

One nice thing about Lr export is that the output spectrum is materially independent of reduction ratio.

A sample:



Details here:

http://blog.kasson.com/?p=7193

Jim

PS. Bart, don't worry, I'll get to ImageMagick. I'm dealing with the more-common tools first.

PPS. If anyone knows the details of the Ps bilinear interpolation and bicubic sharper when used for downsizing, I'd like to hear about them, please. Ditto for Lr exporting.

Jim Kasson

Help with script
« Reply #13 on: September 23, 2014, 08:06:13 pm »

Bart, I downloaded ImageMagick (x64, dynamic), installed it, and ran the test successfully. I downloaded V101 of your script, ran it against a 4000x4000 16-bit sRGB TIFF, each plane of which had a dc level of half-scale with superimposed Gaussian noise with standard deviation one-tenth scale. I told it to downsample to 50%. I got back a 2000x2000 gamma 2.2 grayscale image with the correct dc value, and what looked like the right histogram in Ps. I read it into Matlab, scaled it down to [0,1], and read the sigma as 0.0457160595991501. So far so good.
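A sketch of how such a test file can be generated and measured from Matlab (the file names and seed are placeholders, not the ones actually used):

Code:
% Illustrative sketch; file names and seed are placeholders
rng(2);
img   = 0.5 + 0.1 * randn(4000, 4000, 3);            % independent noise in each plane
img16 = uint16(round(65535 * min(max(img, 0), 1)));
imwrite(img16, 'noise4000.tif');                     % 16-bit RGB TIFF for the script
% ... run the downsampling script on noise4000.tif, then read its output back:
back  = double(imread('noise4000_down.tif')) / 65535;
sigma = std(back(:))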

I plotted the spectrum:



Also good, I think.

Then I tried again with version 122. This time I got a 16-bit RGB TIFF, but the histogram was centered on zero, not the expected half-scale, and I lost half the noise (the left half).

Suggestions?

Thanks,

Jim

fdisilvestro

Re: Help with script
« Reply #14 on: September 24, 2014, 12:13:55 am »

Quote
Then I tried again with version 122. This time I got a 16-bit RGB TIFF, but the histogram was centered on zero, not the expected half-scale, and I lost half the noise (the left half).

Suggestions?

Thanks,

Jim

Hi, that is very strange. I tried with a similar file created in Photoshop (mid-level gray + 10% Gaussian noise -- yes, I know all the issues related to it from your blog) and version 122 worked flawlessly.

In any case, version 122 has changed dramatically since version 101, including forcing the color mode to RGB (that's why you got the gray-gamma file with version 101).

Thinking about the issue you are experiencing, could it be that you started with a 16-bit unsigned integer file (0-65535) and the resulting file from version 122 is also 16-bit unsigned? If so, Photoshop actually uses 15 bits (or the positive values of a signed 16-bit integer), which means it is discarding half of the data, and that would explain the histogram you are getting.
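A quick Matlab illustration of that unsigned/signed mismatch (an assumption about what might be going on, not a confirmed diagnosis): values above 32767 do not fit in the positive half of a signed 16-bit integer, so reinterpreting full-range unsigned data flips them negative.

Code:
% Illustrative only: reinterpret unsigned 16-bit values as signed 16-bit
vals = uint16([16384 32767 32768 40000 65535]);
double(typecast(vals, 'int16'))            % -> 16384  32767  -32768  -25536  -1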

regards

Bart_van_der_Wolf

Re: Help with script
« Reply #15 on: September 24, 2014, 06:41:42 am »

Quote
Bart, I downloaded ImageMagick (x64, dynamic), installed it, and ran the test successfully. I downloaded V101 of your script, ran it against a 4000x4000 16-bit sRGB TIFF, each plane of which had a dc level of half-scale with superimposed Gaussian noise with standard deviation one-tenth scale. I told it to downsample to 50%. I got back a 2000x2000 gamma 2.2 grayscale image with the correct dc value, and what looked like the right histogram in Ps. I read it into Matlab, scaled it down to [0,1], and read the sigma as 0.0457160595991501. So far so good.

Jim, V1.0.1 of the script is 'very old' (v1.2.2 is current as of this writing). ImageMagick has also changed its handling of image gamma through the -colorspace operator over time, and I think I understand it better now than I did before. IM makes assumptions based on the image type that was input. ImageMagick also attempts to simplify files if possible, which means it converts 3-channel grayscale RGB images to single-channel grayscale images unless told otherwise (which happens in the recent versions of the script). Single-channel grayscale images will be assumed to have a gamma = 1 colorspace; 3-channel grayscale will be assumed to have an sRGB slope-limited average gamma 2.2 colorspace. I don't like it, but that's how it currently works, and it therefore needs attention to avoid simplification, by adding a '-type TrueColor' parameter (in the latest versions) and a -depth 16 parameter to convert/force 16-bit channels to preserve precision.

Quote
Then I tried again with version 122. This time I got a 16-bit RGB TIFF, but the histogram was centered on zero, not the expected half-scale, and I lost half the noise (the left half).

Suggestions?

Maybe it's due to some peculiarity in interpreting the TIFF? ImageMagick uses a standard TIFF library, but does warn a lot about flaky tags it discovers. Could you check whether the same happens with a 16-bit (15-bit) PNG saved from Photoshop? I created my Gaussian noise image with ImageJ, imported it into Photoshop, assigned a gamma space that corresponds with my working space, changed the mode to RGB, and then converted to AdobeRGB. That convoluted process created a 4096x4096 noise TIFF file that is interpreted correctly.

I haven't tried generating such a file with ImageMagick itself, but that would also be an option to try if nothing else works. However, I saw a lot of discussion on the IM discourse server in the past about difficulties with controlling that functionality (to define the sigma and amplitude). Don't know about the current state of affairs...

To make a long story short, the script (v1.2.2+) expects typical photographic 3-channel RGB image input, preferably with something close to an embedded sRGB or AdobeRGB colorspace (although ProPhotoRGB with gamma 1.8 should probably be no big issue). Single channel images will be forced to 3-channel monochrome images, embedded profiles will be preserved.

Cheers,
Bart

P.S. This comes close (not exactly 50% DC) to generating the noise in ImageMagick:
convert -type TrueColor -depth 16 -colorspace sRGB -size 4096x4096 xc:gray50 -attenuate 1.059 +noise Gaussian IMNoise.tif
When read into Photoshop, assigned a profile, and desaturated, it becomes something like mean = 127, StdDev = 12.7 (in 8-bit values), and can be saved as a '16'-bit/3-channel TIFF, e.g. with sRGB or Adobe RGB; that poses no problem for the script. If I had more time to spare I could do a better job, I think, including separating and combining channels and using the same random seed value for each channel.
== If you do what you did, you'll get what you got. ==

Jim Kasson

Script problem solved
« Reply #16 on: September 24, 2014, 01:25:39 pm »

I figured it out, Bart. I was entering "default" for sharpening, not "100".

Thanks,

Jim

Jim Kasson

1st result
« Reply #17 on: September 24, 2014, 01:49:07 pm »

Here are the first results with Bart's script: downsizing optimized, 50% magnification, sharpening = 100.

sigma = 0.05 -- perfect. (Actually, it's 0.0499608616760874).

Spectrum looks good, too:



Thanks, Bart!

Jim

Jim Kasson

EWA noise graph
« Reply #18 on: September 24, 2014, 05:38:37 pm »

Bart, I modified your V122 script so all the options are in the command line. That's better for when I call it from other programs. I ran a bunch of points, and got this:



I think this is similar to the graph you posted above in this thread. I'm not sure why I'm getting slightly less noise now, but I don't think it's important at this point. Note that I have verified the lack of a discontinuity near 0.5 magnification.

I will do some artifact testing with this and the Lr export processing, and then I'll try to incorporate your script into my camera simulator.

Jim

Bart_van_der_Wolf

Re: EWA noise graph
« Reply #19 on: September 24, 2014, 06:42:26 pm »

Quote
Bart, I modified your V122 script so all the options are in the command line. That's better for when I call it from other programs.

Sure, Jim, it's just a front-end to ImageMagick. You can shape it any way that makes it easy for your workflow.

Quote
I think this is similar to the graph you posted above in this thread. I'm not sure why I'm getting slightly less noise now, but I don't think it's important at this point.

That's a bit odd, indeed.

Quote
Note that I have verified the lack of a discontinuity near 0.5 magnification.

It seems to be an oddity in the Matlab function. I think the main conclusion is that it is not universal, but the effect of a specific algorithm implementation.

Quote
I will do some artifact testing with this and the Lr export processing, and then I'll try to incorporate your script into my camera simulator.

Sounds like a plan. Do note that the EWA resizing (-distort Resize) has an isotropic symmetry; its frequency power spectrum shows that it doesn't favor diagonal resolution the way other (orthogonal/tensor) algorithms do. IMHO that produces more organic-looking results. Detail is reduced to dots of intensity differences, not squares. Also, a deconvolution sharpening amount of 50 is 'neutral'; the default of 100 sharpens more than required and is chosen to add some additional punch to the image.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==