
Author Topic: Most favorable downsampling percentage

Jim Kasson

  • Sr. Member
  • Posts: 2370
    • The Last Word
Re: EWA noise graph
« Reply #20 on: September 24, 2014, 07:05:00 pm »

...a deconvolution sharpening amount of 50 is 'neutral'; the default of 100 sharpens more than required and is chosen to add some additional punch to the image.

Yet 100 doesn't add an actual peak to the frequency response; it just holds it at about the DC level for a bit longer. That's why I picked it. [Edit: I was wrong about this. See corrected post further down.]

Jim

« Last Edit: September 25, 2014, 01:36:26 pm by Jim Kasson »

Bart_van_der_Wolf

  • Sr. Member
  • Posts: 8914
Re: EWA noise graph
« Reply #21 on: September 24, 2014, 07:24:31 pm »

Yet 100 doesn't add an actual peak to the frequency response; it just holds it at about the DC level for a bit longer. That's why I picked it.

I suppose that depends on how it's evaluated. When I use ImageJ to produce a logarithmic (!) FFT power spectrum of the down-sampled Gaussian noise image, it does reveal the boosted frequencies in a radial profile plot.
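
For anyone who wants to try the same check, here is a minimal numpy sketch of the radial-profile idea (my reconstruction of the analysis, not the actual ImageJ workflow; image size and bin count are arbitrary):

```python
# Minimal numpy sketch of the radial-profile check: log FFT power
# spectrum of a noise image, averaged over annuli of (roughly)
# constant spatial frequency. My reconstruction, not the ImageJ tool.
import numpy as np

def radial_power_profile(img, nbins=128):
    """Log power spectrum of img, averaged per radial-frequency bin."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)                # distance from DC
    idx = np.digitize(r.ravel(), np.linspace(0, r.max(), nbins + 1)) - 1
    idx = np.clip(idx, 0, nbins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    return 10 * np.log10(sums / np.maximum(counts, 1))  # dB per bin

# White Gaussian noise gives a flat profile; a downsample-plus-sharpen
# boost shows up as a hump toward the high-frequency end.
rng = np.random.default_rng(0)
profile = radial_power_profile(rng.normal(0.5, 0.1, (512, 512)))
```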

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Jim Kasson

  • Sr. Member
  • Posts: 2370
    • The Last Word
Re: EWA noise graph
« Reply #22 on: September 25, 2014, 01:25:03 pm »

I suppose that depends on how it's evaluated. When I use ImageJ to produce a logarithmic (!) FFT power spectrum of the down-sampled Gaussian noise image, it does reveal the boosted frequencies in a radial profile plot.

Bart, I found and fixed a bug in my version of your script. Now I think our results agree. Please have a look and tell me if you see anything wonky.

With sharpening set to 50:

[plot: spectrum of the downsampled noise image, sharpening 50]

With sharpening set to 100:

[plot: spectrum of the downsampled noise image, sharpening 100]

Thanks,

Jim
« Last Edit: September 25, 2014, 01:31:48 pm by Jim Kasson »

Bart_van_der_Wolf

  • Sr. Member
  • Posts: 8914
Re: EWA noise graph
« Reply #23 on: September 26, 2014, 09:59:04 am »

Bart, I found and fixed a bug in my version of your script. Now I think our results agree. Please have a look and tell me if you see anything wonky.

Hi Jim,

This looks plausible. Nothing to question or comment on.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Jim Kasson

  • Sr. Member
  • Posts: 2370
    • The Last Word
Re: EWA noise graph
« Reply #24 on: September 26, 2014, 10:39:21 am »

This looks plausible.

Thanks, Bart. Check your PM queue, please.

Jim

NicolasRobidoux

  • Sr. Member
  • Posts: 280
Re: Most favorable downsampling percentage
« Reply #25 on: September 27, 2014, 03:31:05 am »

Looking at the latest plots, I find it interesting that my favorite sharpening, from some quick tests a while ago with the eyeball metric, was 67.

Bill Koenig

  • Sr. Member
  • Posts: 361
Re: Most favorable downsampling percentage
« Reply #26 on: October 27, 2014, 10:51:18 am »

The content of this post is way over my head, but I do have a question about downsampling.
I have a pano that I made in Autopano Pro. It was shot on a spherical pano head and is made up of 48 images, 3 columns, 3 rows, using a Nikkor 85mm f/1.8 @ f/8.

I rendered this in APP at 100%, which gives me an image that's about 42" x 93".
This is the first time I've actually told APP to render @ 100%. With past pano projects I would render at my finished size, which would be closer to 16" x 40", as my Epson 3800 can only go 17" wide. But for this image I wanted to be able to print larger.

One thing I've found with rendering @ 100% in APP is that the image seems much cleaner, with fewer artifacts as well as better sharpness, than the same image rendered @ 50% in APP.
Obviously there is downsampling going on when I render in APP at less than 100%, and the downsampling in APP doesn't seem to be the best.

My question: what would be the best way to do the downsampling for my pano rendered @ 100% in APP?

Bill Koenig,

Jim Kasson

  • Sr. Member
  • Posts: 2370
    • The Last Word
Re: Most favorable downsampling percentage
« Reply #27 on: October 27, 2014, 11:16:29 am »

My question: what would be the best way to do the downsampling for my pano rendered @ 100% in APP?

If you don't want to mess with ImageMagick scripts, try importing the full-size image into Lightroom or QImage and printing from there. Lr's downsizing isn't bad, and QImage has lots of choices.

Jim

Bart_van_der_Wolf

  • Sr. Member
  • Posts: 8914
Re: Most favorable downsampling percentage
« Reply #28 on: October 27, 2014, 11:28:58 am »

One thing I've found with rendering @ 100% in APP is that the image seems much cleaner, with fewer artifacts as well as better sharpness, than the same image rendered @ 50% in APP.

Hi Bill,

That's the thing to look out for, indeed.

Quote
My question: what would be the best way to do the downsampling for my pano rendered @ 100% in APP?

I don't know about the AutoPano Pro version, but the Giga version offers rendering choices for the resampling algorithm to be used; it's under Settings/Interpolator. Depending on the subject matter, you may want to select BiCubic Smoother or one of the Spline options. Spline36 will do a good job while preserving sharpness; Spline64 preserves even more sharpness, but might add a small halo at high-contrast edges (which should not be too much of a problem if you output at native printer resolution).

Alternatively, you can output full size at 100% and do a separate downsampling run, e.g. with the ImageMagick application, for which a script file is available if you work on Windows; for Mac/Linux a few syntax changes need to be made.
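
For those who would rather script that separate pass themselves, here is a minimal sketch in Python/Pillow (a stand-in for the ImageMagick script, not a copy of it; the file names and sharpening settings are placeholders):

```python
# Sketch of a separate downsampling pass in Python/Pillow (a stand-in
# for the ImageMagick script, not a copy of it); file names and
# sharpening settings below are placeholders.
from PIL import Image, ImageFilter

def downsample(src, dst, scale=0.5):
    img = Image.open(src)
    w, h = img.size
    small = img.resize((round(w * scale), round(h * scale)),
                       resample=Image.LANCZOS)
    # Gentle post-resize sharpening; tune to taste for print vs. web.
    small = small.filter(ImageFilter.UnsharpMask(radius=0.8,
                                                 percent=50, threshold=0))
    small.save(dst)

downsample("pano_full.tif", "pano_half.tif", scale=0.5)
```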

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Bill Koenig

  • Sr. Member
  • Posts: 361
Re: Most favorable downsampling percentage
« Reply #29 on: October 27, 2014, 01:41:03 pm »

Jim and Bart, thanks for your replies.

The problem with this file is its size: over 4GB, so I can't save it as a TIFF.
QImage would be a possibility, as it's not that expensive at $70, but will it work with PSB files?
When I start asking questions about file formats like PSB I'm getting out of my comfort zone, but PSB is what it's saved to right now.
Bill Koenig,

Jim Kasson

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2370
    • The Last Word
Re: Most favorable downsampling percentage
« Reply #30 on: October 27, 2014, 04:07:37 pm »

The problem with this file is its size: over 4GB, so I can't save it as a TIFF.
QImage would be a possibility, as it's not that expensive at $70, but will it work with PSB files?

Oh. I just tried QImage, and it won't read a PSB file. I tried to save a big file from Ps as a PNG, but it won't save a PNG over 2 GB.

I think the thing to do is to pursue Bart's suggestions about AutoPano's downsampling options.

Sorry,

Jim

NicolasRobidoux

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 280
Re: Most favorable downsampling percentage
« Reply #31 on: October 30, 2014, 03:33:24 am »

nip2 http://www.vips.ecs.soton.ac.uk/index.php?title=Nip2 can handle huge images without breaking a sweat (which is why Wikipedia switched to the underlying library, libvips, after previously using ImageMagick, which was too resource-intensive for its servers). If you can manage to feed it your image (and learn how to use it), it would be a free way of dealing with it.
Also: maybe consider vipsthumbnail (libvips.blogspot.dk/2013/11/tips-and-tricks-for-vipsthumbnail.html), which does arbitrary downsampling fairly well.
(My apologies if this isn't an answer suitable for your situation: I read your post really quickly.)
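
For what it's worth, a rough sketch of the same idea through libvips' Python binding, pyvips (my example, not taken from the links above; file names are placeholders, and the source is assumed to be in a format vips can read):

```python
# Rough sketch with pyvips (libvips' Python binding); file names are
# placeholders and assume a format vips can read. Sequential access
# streams the file, so memory stays modest even for huge images.
import pyvips

img = pyvips.Image.new_from_file("pano_full.tif", access="sequential")
half = img.resize(0.5, kernel="lanczos3")          # high-quality reduction
half.write_to_file("pano_half.tif", bigtiff=True)  # >4 GB needs BigTIFF
```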

texshooter

  • Sr. Member
  • Posts: 575
Re: Most favorable downsampling percentage
« Reply #32 on: October 27, 2015, 06:46:33 pm »

Your math and graphs are beyond my pay grade, but for quite some time now I have used a downsampling method I learned from Jack Flesher which ends up with better small files for the web ... basically stepping down in exactly 50% increments to obtain the final size. I resize the original file to 8 times what I want the final size to be. Sometimes this is a pretty big step up, but most of the time it is a very small step down. I then downsample 50% three consecutive times (adding a very slight amount of sharpening in two of the steps).

The end result is visually better than a single resize down in Photoshop. I'm sure if I understood the process better this could be refined even further (and I'll try using something other than bicubic to see what happens). But it's simple (in an action) and does give me better results.

Wayne, sorry to revive a dead thread, but I'm curious to know if you still use the same resize-for-web recipe. I looked up Jack Flesher's comments but can only find ancient (like 2009-ancient) posts on this subject. I currently use a recipe I learned from Chip Phillips, but am open to better ones if the logic is sound. One thing I find hard to swallow is the step where you upsample the image before you downsample it three times. How can that much pulling and pounding be healthy?
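
For concreteness, here is the quoted recipe as I read it, sketched in Python/Pillow (an illustration only, not Jack Flesher's actual Photoshop action; the sharpening amounts are guesses):

```python
# The step-down recipe as I read it (an illustration, not Jack
# Flesher's actual action): resize to 8x the target, then halve
# three times, sharpening lightly on two of the three steps.
from PIL import Image, ImageFilter

def step_down(img, target_w, target_h):
    img = img.resize((8 * target_w, 8 * target_h), Image.BICUBIC)
    for step in range(3):                  # 8x -> 4x -> 2x -> 1x
        w, h = img.size
        img = img.resize((w // 2, h // 2), Image.BICUBIC)
        if step < 2:                       # slight sharpening, 2 of 3 steps
            img = img.filter(ImageFilter.UnsharpMask(radius=0.6,
                                                     percent=30, threshold=0))
    return img

out = step_down(Image.open("photo.tif"), 800, 533)
out.convert("RGB").save("photo_web.jpg", quality=92)
```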

hjulenissen

  • Sr. Member
  • Posts: 2051
Re: Most favorable downsampling percentage
« Reply #33 on: October 28, 2015, 02:51:06 am »

In doing some testing of downsampling algorithms as reducers of image noise, I found some interesting -- at least to me; I should get out more -- properties of power-of-two ratios for downsampling.
Since this thread was brought back anyway:
I think it is ill-advised to even consider downsampling as a means of reducing noise. Downsampling (or any conventional resampling) conceptually consists of linear filtering combined with "upsamplers" (inserting M-1 zeros between samples) and "downsamplers" (keeping only every N-th sample). Of these three components, only the linear filtering can sensibly affect image noise.

So why not use a linear filter directly, if you prefer that to dedicated noise reduction? MATLAB allows you to design highly complex linear 2-D filters.
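
For example, a 2-D low-pass applied directly at full resolution does the noise-reduction part with no size change (a sketch in Python/SciPy rather than MATLAB; the Gaussian kernel and sigma are just illustrative choices):

```python
# Applying a 2-D linear low-pass directly, with no resampling
# (Python/SciPy here rather than MATLAB; the Gaussian kernel and
# sigma are just illustrative choices).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
img = 0.5 + 0.1 * rng.standard_normal((512, 512))  # noisy test image

# sigma sets the cutoff: larger sigma = stronger smoothing, the same
# job the anti-alias filter does inside a resampler, minus the resize.
smoothed = gaussian_filter(img, sigma=1.2)
print(img.std(), smoothed.std())  # noise std drops; size is unchanged
```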

-h

Jim Kasson

  • Sr. Member
  • Posts: 2370
    • The Last Word
Re: Most favorable downsampling percentage
« Reply #34 on: November 04, 2015, 01:56:50 pm »

Since this thread was brought back anyway:
I think it is ill-advised to even consider downsampling as a means of reducing noise. Downsampling (or any conventional resampling) conceptually consists of linear filtering combined with "upsamplers" (inserting M-1 zeros between samples) and "downsamplers" (keeping only every N-th sample). Of these three components, only the linear filtering can sensibly affect image noise.

So why not use a linear filter directly, if you prefer that to dedicated noise reduction? MATLAB allows you to design highly complex linear 2-D filters.


You are correct, if reduction of image size is not desired for its own sake. I was assuming in this testing that the photographer required a downsized image in the first place. Having made that decision, the question arises: "What are the noise-reduction properties of the various downsizing methods?" While certainly not the only parameter to weigh when choosing a downsizing algorithm, noise performance is relevant in many situations, and may actually be dispositive in some.

I was also assuming that the photographer might have neither the ability, the pocketbook, nor the desire to use MATLAB as part of their normal workflow, although MATLAB was a convenient tool for me to use to compare algorithms -- modulo the complication that the MATLAB implementations of some downsizing algorithms do not match those of similarly named algorithms in Ps and other image-editing programs.

Jim

hjulenissen

  • Sr. Member
  • Posts: 2051
Re: Most favorable downsampling percentage
« Reply #35 on: November 04, 2015, 02:24:31 pm »

I was also assuming that the photographer might have neither the ability, the pocketbook, nor the desire to use MATLAB as part of their normal workflow, although MATLAB was a convenient tool for me to use to compare algorithms -- modulo the complication that the MATLAB implementations of some downsizing algorithms do not match those of similarly named algorithms in Ps and other image-editing programs.
I assume that a skilled Photoshop operator (I am not one) can apply complex LTI filters?

-h

Jim Kasson

  • Sr. Member
  • Posts: 2370
    • The Last Word
Re: Most favorable downsampling percentage
« Reply #36 on: November 04, 2015, 07:16:00 pm »

I assume that a skilled Photoshop operator (I am not one) can apply complex LTI filters?

I haven't looked at this recently, but the last time I did, it was possible to manually enter convolution kernels in some versions of Ps. Of course, if you are, as they say in patent applications, "skilled in the art," you can download the Ps plugin SDK and write plugins that do anything you'd like.
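
(That manual-kernel route is Photoshop's Custom filter, under Filter > Other > Custom, if memory serves. The equivalent operation, sketched in Python/SciPy with an example kernel of my own choosing:)

```python
# The same operation a hand-entered kernel performs (Photoshop's
# Custom filter divides the weighted sum by a scale and adds an
# offset); the 3x3 sharpening kernel here is my own example.
import numpy as np
from scipy.ndimage import convolve

kernel = np.array([[ 0, -1,  0],
                   [-1,  8, -1],
                   [ 0, -1,  0]], dtype=float)
scale = kernel.sum()  # 4 here; dividing by it preserves brightness

img = np.random.default_rng(2).random((64, 64))
sharpened = convolve(img, kernel / scale, mode="nearest")
```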

Adobe is backing away from Pixel Bender, which was more approachable than the full plugin-development route: http://blog.kasson.com/?p=1723

AFAIK, Adobe does not support user-specified frequency-domain operations in Ps out of the (virtual) box.

Jim
