
Author Topic: A free high quality resampling tool for ImageMagick users  (Read 251179 times)

NicolasRobidoux

  • Sr. Member
  • ****
  • Offline
  • Posts: 280
Re: A free high quality resampling tool for ImageMagick users
« Reply #300 on: September 26, 2014, 02:30:32 pm »

Far out idea: Choose the "lower gamma" so as to best fit the input image's linear light histograms.
(Yes, I know, my recent suggestions are quite ad hoc.)
P.S. Attempting to grasp the big picture, I have managed to convince myself that anchoring the "dual gamma" blending on gamma 1 was a mistake: Use two different gammas larger than 1.
« Last Edit: September 26, 2014, 04:06:22 pm by NicolasRobidoux »

snibgo

  • Newbie
  • *
  • Offline
  • Posts: 11
Re: A free high quality resampling tool for ImageMagick users
« Reply #301 on: September 26, 2014, 03:42:03 pm »

Quote from: NicolasRobidoux
Making one single tool that automatically, accurately and correctly deals with all possibilities is tricky.
Yes.

As a rule, I wouldn't. I think it's better to stay within one colourspace (sRGB, AdobeRGB1998, WidestGamut, whatever you want) throughout the workflow. Take offshoots from the mainstream workflow as desired, eg sRGB for display on a screen, but I think batting around between profiles within the workflow is bad practice.

My own workflow is sRGB. My tools work in sRGB. If I used something else, such as AdobeRGB1998, I would adjust each tool to work in that space. I would not bracket each tool to convert to sRGB and back.

alain

  • Sr. Member
  • ****
  • Offline
  • Posts: 465
Re: A free high quality resampling tool for ImageMagick users
« Reply #302 on: September 26, 2014, 04:36:45 pm »

Quote from: snibgo
Yes.

As a rule, I wouldn't. I think it's better to stay within one colourspace (sRGB, AdobeRGB1998, WidestGamut, whatever you want) throughout the workflow. Take offshoots from the mainstream workflow as desired, eg sRGB for display on a screen, but I think batting around between profiles within the workflow is bad practice.

My own workflow is sRGB. My tools work in sRGB. If I used something else, such as AdobeRGB1998, I would adjust each tool to work in that space. I would not bracket each tool to convert to sRGB and back.
Hi

If the end result is sRGB (web) and the originals are ProPhoto (TIFF) and Adobe RGB (JPEG), and from time to time sRGB, would you then use two (or three) tools, or do a conversion to sRGB first?
(I'm lucky that, for now, I can select on file extension.)

snibgo

  • Newbie
  • *
  • Offline
  • Posts: 11
Re: A free high quality resampling tool for ImageMagick users
« Reply #303 on: September 26, 2014, 05:21:45 pm »

Round trips between colorspaces lose both precision and accuracy, so the main thing is to avoid this. Profiles are necessary evils, but profile conversions really are horrible and should be kept to a minimum.

Then we have considerations of processing speed, user convenience, familiarity with the tools, and tool maintenance. I would rather convert all inputs to a common space and push them through an identical workflow than have to use different tools for different spaces.

If outputs are needed in different spaces, as well as inputs being in different spaces, then decisions become harder. Then it may become a trade-off between convenience (funnel everything through a single colour space) and precision (minimise colour space conversion).

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: A free high quality resampling tool for ImageMagick users
« Reply #304 on: September 26, 2014, 06:01:36 pm »

Quote from: snibgo
Round trips between colorspaces lose both precision and accuracy, so the main thing is to avoid this.

I'm not sure what you mean by "precision" in this context. Precision, as I use the term in the context of computational accuracy, describes the number of bits used to encode the values and how they are allocated. A round-trip conversion of an image with 16-bit precision usually results in an image with 16-bit precision, and a round-trip conversion of an image with a 32-bit floating-point representation (aka single precision) usually results in an image with 32-bit floating-point precision.

As to accuracy, while 3D lookup table conversions usually "walk" after round trips, it is possible to do conversions between model-based color spaces, such as display spaces, with no loss in accuracy save rounding to the original precision, if the intermediate calculations are performed with sufficient precision. For performance reasons, that is not usually the case.

However, current tools are quite robust. As a test, I took this 16-bit sRGB ray-traced image of Bruce Lindbloom's desk that I'd res'd down to 20% using Bart's script version 1.2.2:

[image: the downsampled test image]

I brought it into Photoshop CC 2014.1.0, converted it to Adobe RGB, then to ProPhoto RGB, then to Adobe RGB again, then to sRGB, using the ACE engine and the Absolute rendering intent with black point compensation turned off. Note that this set of conversions involves one change of gamma. There is also a change of white point, but Photoshop blithely ignores that.

I subtracted the original image from the one with the four color space conversions, and got this:

[image: the difference between the original and the converted image]

Then I applied a 10-stop exposure increase. This was the result:

[image: the difference image, +10 stops]

Jim

« Last Edit: September 26, 2014, 06:53:04 pm by Jim Kasson »

snibgo

  • Newbie
  • *
  • Offline
  • Posts: 11
Re: A free high quality resampling tool for ImageMagick users
« Reply #305 on: September 26, 2014, 08:34:00 pm »

We hope for both precision (a decent number of bits) and accuracy (the bits we have are correct). Your 8-bit JPEG diff file has a maximum value of 4 (out of 255); an error of 4 means the two low-order bits can be wrong, so after 4 conversions the worst pixels have only 6 accurate bits. Not wonderful, really.

On the other hand, the diff file is a JPEG, and that compression may have created noise that raised the error to 4 out of 255.

Of course, the image matters far more than the numbers. (I don't want to gain a reputation as a bean-counter. Umm, bit-counter.)

NicolasRobidoux

  • Sr. Member
  • ****
  • Offline
  • Posts: 280
Re: A free high quality resampling tool for ImageMagick users
« Reply #306 on: September 27, 2014, 02:21:03 am »

RE: Alan Gibson's (snibgo's) warning about using color profiles.
The ImageMagick colorspace and color profile system has seen many improvements in the last few years and I am not quite up to speed on the details.
In the FLOSS world, I personally trust the libvips/nip2 system more (http://libvips.blogspot.dk/2012/11/new-colour-package.html). The main reason is that when importing with a color profile, a "large" reference container colorspace (like L*a*b* or XYZ) is used as an anchor. (At least, that's more or less how it used to work. The system is even smarter now.) A consequence is that when I request conversion into and out of linear light, I have full trust that I am getting what I am asking for (modulo the accuracy of the profile itself).
Maybe ImageMagick is now up to that level of trustworthiness w.r.t. the entire color space/profile business. But I'm not sure. So I prefer to take profiles out of the equation if I can. And the whole thing is a can of worms. For example, my understanding is that older sRGB profiles are often somewhat inaccurate on round trips. (That's one of the benefits of using sRGB v4.) I'm not only concerned with round trip accuracy: I want linear light to be as close as possible to actually being linear light. Returning to Earth after a trip to Phobos does me little good if I wanted to visit Mars.
P.S. Really, at this point, I want to know that if there is color drift it is because of a shortcoming of LWGB, not some color toolchain quirk. By design, LWGB will not make flat patches drift. But I find it quite likely that high frequency patches drift with LWGB, and it is on my TODO list to figure out how much of an issue this drift is.
« Last Edit: September 27, 2014, 02:39:26 am by NicolasRobidoux »

NicolasRobidoux

  • Sr. Member
  • ****
  • Offline
  • Posts: 280
Re: A free high quality resampling tool for ImageMagick users
« Reply #307 on: September 27, 2014, 03:40:52 am »

Quote from: snibgo
I have adapted the 1.2.2 script to suit my own working methods, and published resampHM.bat with a number of trials at http://im.snibgo.com/resamphm.htm
In the first "Basic Upsampling" results, I do see a difference between #3 (LWGB EWA Lanczos) and the others (tensor Mitchell-Netravali through sRGB (#1) and linear RGB (#2)): LWGB is sharper yet just a bit less jaggy, and has a little more haloing. LWGB also shows something that can be vaguely described as more color texture or color separation (the "cheap gummidruck" look). (Hard to put my finger on it. Looks sort of like chromatic aberration or color noise.) This last property could be a side effect of the additional sharpness: the LWGB is a bit more "vivid" in the color department.
Certainly no big bang for the buck, though.
« Last Edit: September 27, 2014, 03:48:31 am by NicolasRobidoux »

NicolasRobidoux

  • Sr. Member
  • ****
  • Offline
  • Posts: 280
Re: A free high quality resampling tool for ImageMagick users
« Reply #308 on: September 27, 2014, 11:01:34 am »

Bart (and Alan and Jim):
I think I may have a solution to the "lightening of high frequency patches" issue.
Here is the off-the-top-of-my-head version that is closest to what we are doing now:
After downsampling in linear light using EWA RobidouxSoft, compute two deconvolved versions (I would prefer with the same deconvolution parameters, but this is not necessary) with two different gammas, making sure to convert the two results to linear light.
Now, compute three luminance images:
1) Luminance of the non-deconvolved image
2) Luminance of the result of deconvolving through the first gamma (gamma 1, say)
3) Same as 2) with the second gamma (gamma 3, say)
At every pixel, choose the result with the median luminance (averaging ties, say) out of 1), 2) and 3).
That's it.
-----
I'll see if I can come up with something more likely to be better.
P.S. Let me explain the heuristic behind this.
What we want to do is improve LWGB deconvolution sharpening.
We don't want to do things on a per-channel basis because we don't want to lose monochromaticity preservation (that is, if all three channels are proportional in linear light before sharpening, this is also true after sharpening, ignoring clipping issues). See http://www.luminous-landscape.com/forum/index.php?topic=77949.msg745796#msg745796. Performing median filtering of the three results on a per-channel basis would break this property; averaging would not.
So, let's talk luminance.
Recall that the lower the gamma, the more noticeable the dark halos, and the higher the gamma, the more noticeable the light halos (of the same "numerical" size).
If the pixel is part of a light halo in at least one of the two sharpened results, its sharpened luminance is higher than the unsharpened luminance. So, don't use the pixel value that corresponds to the highest luminance.
If the pixel is part of a dark halo in at least one of the two sharpened results, its sharpened luminance is lower than the unsharpened one. So, don't use the lowest one.
The key component of the heuristic consists of reversing the implications:
If you pick the median of the three results, you'll never choose the gamma creating the worst halo. Provided the other gamma creates a moderate halo in the same situation, you have successfully tamed halos.
Now, you may ask what happens when the two gammas produce luminance values that bracket the original's.
This is likely to happen on high frequency patches. Selecting the non-sharpened result in this situation will help the preservation of local averages. We want that.
-----
The above, as usual, is off the top of my head. Let's see if there is some proof in the pudding.
P.S. I just realized that there is a clear path to resolving ties: 1) (no deconvolution) is best, 2) (lowest gamma) is second best, 3) (highest gamma) is last.
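A minimal Python/NumPy sketch of this selection rule (an illustration only, not the actual ImageMagick pipeline; the Rec. 709 luminance weights and float linear-light inputs are assumptions):

import numpy as np

# Per-pixel selection of the candidate with the median luminance.
# unsharp, sharp_lo, sharp_hi: float (H, W, 3) arrays in linear light;
# sharp_lo/sharp_hi are the deconvolutions done through the lower and
# higher gamma respectively.
def select_median_luminance(unsharp, sharp_lo, sharp_hi):
    stack = np.stack([unsharp, sharp_lo, sharp_hi])   # (3, H, W, 3)
    weights = np.array([0.2126, 0.7152, 0.0722])      # assumed Rec. 709
    lum = stack @ weights                             # (3, H, W)
    med = np.median(lum, axis=0)                      # middle of three values
    # argmax returns the FIRST candidate whose luminance equals the median,
    # so ties resolve as 0 (no deconvolution), then 1 (lowest gamma), never 2.
    pick = np.argmax(lum == med, axis=0)              # (H, W)
    return np.take_along_axis(stack, pick[None, ..., None], axis=0)[0]

Because whole RGB triples are selected, never mixed per channel, the monochromaticity-preservation property discussed above is kept.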
« Last Edit: September 27, 2014, 02:45:34 pm by NicolasRobidoux »

snibgo

  • Newbie
  • *
  • Offline
  • Posts: 11
Re: A free high quality resampling tool for ImageMagick users
« Reply #309 on: September 27, 2014, 01:19:14 pm »

Quote from: NicolasRobidoux
After downsampling in linear light using EWA RobidouxSoft, compute two deconvolved versions (I would prefer with the same deconvolution parameters, but this is not necessary) with two different gammas, making sure to convert the two results to linear light.
I'm unsure about this, so I'll leave it to someone else.

Quote from: NicolasRobidoux
Now, compute three luminance images:
1) Luminance of the non-deconvolved image
2) Luminance of the result of deconvolving through the first gamma (gamma 1, say)
3) Same as 2) with the second gamma (gamma 3, say)
At every pixel, choose the result with the median luminance (averaging ties, say) out of 1), 2) and 3).
I think the following almost delivers what you want: from three source images, each output pixel comes from the one of the three sources whose luminance equals the median luminance. Windows BAT syntax; adjust for other shells.

If two or three sources have luminances equal to the median, it won't average the pixels from the sources. Every output pixel comes from only one input source.

%IM%convert ^
  %SRC0% ^
  %SRC1% ^
  %SRC2% ^
  ( -clone 0 -colorspace gray +write x0.png ) ^
  ( -clone 1 -colorspace gray +write x1.png ) ^
  ( -clone 2 -colorspace gray +write x2.png ) ^
  ( -clone 3-5 -evaluate-sequence median +write x4.png ) ^
  -compose Difference -fill White ^
  ( -clone 3,6 -composite +opaque Black +write x5.png ) ^
  ( -clone 5,6 -composite +opaque Black +write x6.png ) ^
  -delete 3-6 ^
  -compose Over ^
  ( -clone 2 -clone 1 -clone 4 -composite +write x7.png ) ^
  ( -clone 0 -clone 5 -clone 3 -composite +write x8.png ) ^
  -delete 0-5 ^
  %OUTFILE%
It isn't optimal; I optimise only when I am reassured the logic is correct. Every "+write x?.png" is only for debugging; they can all be removed.

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: A free high quality resampling tool for ImageMagick users
« Reply #310 on: September 27, 2014, 03:36:13 pm »

Quote from: snibgo
We hope for both precision (a decent number of bits) and accuracy (the bits we have are correct). Your 8-bit JPEG diff file has a maximum value of 4 (out of 255). After 4 conversions, the worst pixels have only 6 accurate bits. Not wonderful, really.

On the other hand, the diff file is jpeg, and that compression may have created noise that raised the error to 4 out of 255.

This is a post about mission creep.

I thought of responding to you that it wasn't appropriate to subtract errors in a nonlinear color space. Then it occurred to me that it wasn't appropriate for me to subtract the images in Ps, which not only does the subtraction in gamma-corrected space, but throws away negative values. Then it occurred to me that if I performed the subtraction in linear space and kept the negative numbers, I still wouldn't have any reasonable way to make a scalar from them.

So I bit the bullet and wrote some Matlab code to convert both images to CIEL*a*b*, and compute delta-E at each pixel.

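(For reference, and not Jim's actual Matlab: a bare-bones Python/NumPy version of the per-pixel DeltaE computation could look like the sketch below. It assumes IEC 61966-2-1 sRGB input scaled to [0, 1], the D65 white point, and the CIE76 difference formula; the function names are mine.)

import numpy as np

# Decode sRGB (IEC 61966-2-1) to linear light; input in [0, 1].
def srgb_to_linear(u):
    return np.where(u <= 0.04045, u / 12.92, ((u + 0.055) / 1.055) ** 2.4)

# Linear sRGB -> XYZ (D65, Bruce Lindbloom's matrix), then XYZ -> CIELAB.
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
WHITE = np.array([0.95047, 1.00000, 1.08883])    # D65 reference white

def _f(t):
    eps, kappa = 216 / 24389, 24389 / 27
    return np.where(t > eps, np.cbrt(t), (kappa * t + 16) / 116)

def srgb_to_lab(rgb):                            # rgb: (..., 3) floats in [0, 1]
    xyz = srgb_to_linear(rgb) @ M.T / WHITE      # normalized by the white point
    fx, fy, fz = _f(xyz[..., 0]), _f(xyz[..., 1]), _f(xyz[..., 2])
    return np.stack([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)], axis=-1)

def delta_e76(img_a, img_b):                     # per-pixel CIE76 difference
    d = srgb_to_lab(img_a) - srgb_to_lab(img_b)
    return np.sqrt((d ** 2).sum(axis=-1))

With two images loaded as float arrays, de = delta_e76(original, converted) gives the per-pixel map, and de.mean(), de.std(), de.max() give statistics like those quoted below.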
A few notes. I'm using Jerker Wagberg's OptProp for the color calcs. When I go to Lab I'm using the D65 spectrum and the 1931 2 degree observer for the white point.

When I run the program, I get a mean DeltaE of 1.4515, a standard deviation of 1.2533, and a worst-case DeltaE of 6.3707.

That seems like a lot. So I wrote some more code to do 100 round-trip conversions (sRGB > Adobe RGB > sRGB), quantizing to 16-bit unsigned integer precision after each conversion:

[Matlab code screenshot]

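(Again an illustration rather than Jim's code: a sketch of the quantized round trip, reusing srgb_to_linear and M from the previous snippet. The matrices and the 2.19921875 gamma are the published Adobe RGB (1998) values; clipping to [0, 1] is my simplification.)

GAMMA_ARGB = 563 / 256                            # Adobe RGB (1998) gamma

# Adobe RGB (1998) primaries, D65 (Bruce Lindbloom's matrix).
ARGB_TO_XYZ = np.array([[0.5767309, 0.1855540, 0.1881852],
                        [0.2973769, 0.6273491, 0.0752741],
                        [0.0270343, 0.0706872, 0.9911085]])

def linear_to_srgb(u):                            # inverse of srgb_to_linear
    return np.where(u <= 0.0031308, 12.92 * u, 1.055 * u ** (1 / 2.4) - 0.055)

def quant16(x):                                   # quantize to 16-bit unsigned ints
    return np.round(np.clip(x, 0.0, 1.0) * 65535) / 65535

def roundtrip_once(srgb):
    # sRGB -> XYZ -> Adobe RGB (quantized) -> XYZ -> sRGB (quantized)
    xyz = srgb_to_linear(srgb) @ M.T
    argb_lin = np.clip(xyz @ np.linalg.inv(ARGB_TO_XYZ).T, 0.0, 1.0)
    argb = quant16(argb_lin ** (1 / GAMMA_ARGB))
    xyz_back = (argb ** GAMMA_ARGB) @ ARGB_TO_XYZ.T
    return quant16(linear_to_srgb(np.clip(xyz_back @ np.linalg.inv(M).T, 0.0, 1.0)))

img = quant16(np.random.rand(64, 64, 3))          # stand-in for the test image
for _ in range(100):
    img = roundtrip_once(img)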
Then I got an average error of 6.004 * 10^-4 DeltaE, a standard deviation of 0.0011 DeltaE, and a worst case error of 0.0485 DeltaE.

The take-home for me is that color conversions between display-based color spaces don't have to be a significant source of error, even with 16-bit working spaces, but that the Ps (ACE) implementation is not as good as it could be.

The way the errors walk is interesting:

[images: plots of how the errors walk]

Thanks for prompting me to do this work.

[Added 10/10/2014: The above CIELab DeltaE errors are considerably overstated. The reason is a confusion on my part w.r.t. sRGB color spaces: the original Bruce Lindbloom image was in a space with the sRGB primaries and a gamma of 2.2, while I thought it was in the IEC 61966-2-1:1999 color space. When corrected, the errors are quite small even for repetitive 16-bit conversions. See here for some results with a tougher target image: http://blog.kasson.com/?p=7517]

Jim
« Last Edit: October 10, 2014, 02:20:24 pm by Jim Kasson »

NicolasRobidoux

  • Sr. Member
  • ****
  • Offline
  • Posts: 280
Re: A free high quality resampling tool for ImageMagick users
« Reply #311 on: September 27, 2014, 03:58:21 pm »

...
My oh my Alan! You're awesome.
Anyway, all I did was hook things up and check that nothing blows up. No clear opinion yet on the quality of the result compared with the "old" downsample. (Mañana.) But this looks promising.
I used a deconvolution strength of 100 to emphasize tone drift, if any.
convert \
  \( input.jpg -set colorspace sRGB -colorspace RGB \
     -define filter:c=0.1601886205085204 -filter Cubic -distort Resize 25\% \) \
  \( -clone 0 -define convolve:scale=100^,100% \
     -morphology Convolve DoG:3,0,0.4806768770037563 \) \
  \( -clone 0 -gamma 3 -define convolve:scale=100^,100% \
     -morphology Convolve DoG:3,0,0.4806768770037563 -gamma 0.3333333333333333333 \) \
  \( -clone 0 -colorspace gray \) \
  \( -clone 1 -colorspace gray \) \
  \( -clone 2 -colorspace gray \) \
  \( -clone 3-5 -evaluate-sequence median \) \
  -compose Difference -fill White \
  \( -clone 3,6 -composite +opaque Black \) \
  \( -clone 5,6 -composite +opaque Black \) \
  -delete 3-6 \
  -compose Over \
  \( -clone 2 -clone 1 -clone 4 -composite \) \
  \( -clone 0 -clone 5 -clone 3 -composite \) \
  -delete 0-5 \
  -set colorspace RGB -colorspace sRGB -quality 98 output.jpg
(The -set colorspace sRGB operations should not be needed because we're dealing with JPEGs, but they are needed if, say, we're using PNGs, so I left them in.)
Question: If there is a median tie, does this resolve it by choosing 0 if possible, then 1, and never 2 (never 2, since one needs a tie to resolve)? This is what I would like.
Here is the result on the usual fly image: http://upload.wikimedia.org/wikipedia/commons/8/85/Calliphora_sp_Portrait.jpg
P.S. As a comparison, I also attached the result of plain EWA RobidouxSoft through linear RGB, without the LWGB deconvolution:
convert \
  input.jpg -set colorspace sRGB -colorspace RGB \
  -define filter:c=0.1601886205085204 -filter Cubic -distort Resize 25\% \
  -set colorspace RGB -colorspace sRGB -quality 98 plain.jpg
P.S.2 Also attached the result with 50^,100% instead of 100^,100%.
« Last Edit: September 28, 2014, 02:28:27 am by NicolasRobidoux »

snibgo

  • Newbie
  • *
  • Offline
  • Posts: 11
Re: A free high quality resampling tool for ImageMagick users
« Reply #312 on: September 27, 2014, 05:55:50 pm »

Good work, Jim. It's good to know where errors are, or that there aren't any. I'm spending a happy weekend hunting bugs in ImageMagick's YCC colorspace.

Quote from: NicolasRobidoux
Question: If there is a median tie, does this resolve it by chosing 0 if possible, then 1, and never 2 (never 2 since one needs a tie to resolve)? This is what I would like.
Your wish is my, umm, headache. If there is a tie, the code above would use 0 if possible, otherwise 2, never 1.

The revised script below resolves ties in order 0 then 1, never 2. Two lines changed from the previous version: the second mask is now built from clones 4,6 (was 5,6), and the first Over composite is now "-clone 1 -clone 2 -clone 4" (was "-clone 2 -clone 1 -clone 4").
%IM%convert ^
  %SRC0% ^
  %SRC1% ^
  %SRC2% ^
  ( -clone 0 -colorspace gray +write x0.png ) ^
  ( -clone 1 -colorspace gray +write x1.png ) ^
  ( -clone 2 -colorspace gray +write x2.png ) ^
  ( -clone 3-5 -evaluate-sequence median +write x4.png ) ^
  -compose Difference -fill White ^
  ( -clone 3,6 -composite +opaque Black +write x5.png ) ^
  ( -clone 4,6 -composite +opaque Black +write x6.png ) ^
  -delete 3-6 ^
  -compose Over ^
  ( -clone 1 -clone 2 -clone 4 -composite +write x7.png ) ^
  ( -clone 0 -clone 5 -clone 3 -composite +write x8.png ) ^
  -delete 0-5 ^
  %OUTFILE%
Resolving ties by taking averages would need the four average colour images to be made (01, 02, 12, 012), one more median difference, then further complexities to decide which of the seven sources to use. I feel a migraine coming on.

NicolasRobidoux

  • Sr. Member
  • ****
  • Offline
  • Posts: 280
Re: A free high quality resampling tool for ImageMagick users
« Reply #313 on: September 28, 2014, 02:26:00 am »

Alan:
Thank you.
-----
Resolving ties by preferring no-deconvolution over some, and low gamma over high, should be better than with averages. Keep the aspirin in the cupboard (although I can't do anything about YCC bugs).
P.S. Here is the full enchilada with favored median tie resolution (and hardcoded options):
convert \
  \( input.jpg -set colorspace sRGB -colorspace RGB \
     -define filter:c=0.1601886205085204 -filter Cubic -distort Resize 25\% \) \
  \( -clone 0 -define convolve:scale=100^,100% \
     -morphology Convolve DoG:3,0,0.4806768770037563 \) \
  \( -clone 0 -gamma 3 -define convolve:scale=100^,100% \
     -morphology Convolve DoG:3,0,0.4806768770037563 -gamma 0.3333333333333333333 \) \
  \( -clone 0 -colorspace gray \) \
  \( -clone 1 -colorspace gray \) \
  \( -clone 2 -colorspace gray \) \
  \( -clone 3-5 -evaluate-sequence median \) \
  -compose Difference -fill White \
  \( -clone 3,6 -composite +opaque Black \) \
  \( -clone 4,6 -composite +opaque Black \) \
  -delete 3-6 \
  -compose Over \
  \( -clone 1 -clone 2 -clone 4 -composite \) \
  \( -clone 0 -clone 5 -clone 3 -composite \) \
  -delete 0-5 \
  -set colorspace RGB -colorspace sRGB -quality 98 output.jpg
« Last Edit: September 28, 2014, 05:15:27 am by NicolasRobidoux »

NicolasRobidoux

  • Sr. Member
  • ****
  • Offline
  • Posts: 280
Re: A free high quality resampling tool for ImageMagick users
« Reply #314 on: September 28, 2014, 06:24:24 am »

The "new downsample" is promising, and I like its heuristic basis a lot more than the "old downsample".
Here is a quick comparison with the usual fly, with hardwired options:
# "Old downsample" (blending alpha proportional to the luminance of the gamma 1 result)
convert \
  \( input.jpg -set colorspace sRGB -colorspace RGB \
    -define filter:c=0.1601886205085204 -filter Cubic -distort Resize 25\% \) \
  \( -clone 0 -gamma 3 -define convolve:scale=100^,100% \
    -morphology Convolve DoG:3,0,0.4981063336734057 -gamma 0.3333333333333333333 \) \
  \( -clone 0 -define convolve:scale=100^,100% \
    -morphology Convolve DoG:3,0,0.4806768770037563 \) \
  -delete 0 \
  \( -clone 1 -colorspace gray -auto-level \) \
  -compose over -composite \
  -set colorspace RGB -colorspace sRGB -quality 98 olddownsample100.jpg

# "New downsample" (among the unsharpened result and sharpened results computed with two different gammas, select result with median luminance)
convert \
  \( input.jpg -set colorspace sRGB -colorspace RGB \
     -define filter:c=0.1601886205085204 -filter Cubic -distort Resize 25\% \) \
  \( -clone 0 -define convolve:scale=100^,100% \
     -morphology Convolve DoG:3,0,0.4806768770037563 \) \
  \( -clone 0 -gamma 3 -define convolve:scale=100^,100% \
     -morphology Convolve DoG:3,0,0.4806768770037563 -gamma 0.3333333333333333333 \) \
  \( -clone 0 -colorspace gray \) \
  \( -clone 1 -colorspace gray \) \
  \( -clone 2 -colorspace gray \) \
  \( -clone 3-5 -evaluate-sequence median \) \
  -compose Difference -fill White \
  \( -clone 3,6 -composite +opaque Black \) \
  \( -clone 5,6 -composite +opaque Black \) \
  -delete 3-6 \
  -compose Over \
  \( -clone 2 -clone 1 -clone 4 -composite \) \
  \( -clone 0 -clone 5 -clone 3 -composite \) \
  -delete 0-5 \
  -set colorspace RGB -colorspace sRGB -quality 98 newdownsample100.jpg
It looks like I'll have to come up with a new acronym.
P.S. I've also attached the result with the new scheme using a higher strength for the DoG: 125^ instead of 100^. The new scheme does not sharpen as much given the same strength. 125 "new" roughly matches 100 "old" (as measured by JPEG file size :) ). The differences are subtle, but they are there.
Just because "why not", I also added the result with 150^.
P.S.2 Quick and dirty check of luminance drift: with a "neutral" strength of the DoG, namely 50, I saved the results as PNGs, loaded them into nip2, converted to XYZ, and compared the Y-channel image mean with that of the input. The new scheme is a lot closer. Not conclusive, but encouraging.
P.S.3 This seems to hold at higher strengths of the DoG: same strength -> "new" preserves mean luminance better than "old". This comparison, however, is not quite fair, because the "new" scheme is perceptually less sharp at equal DoG strength. And this is one image, etc.
The jury is still out. But I'd put money on "new" being a better method than "old".
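(For reference, the P.S.2 check can be scripted. Below is a Python sketch, reusing srgb_to_linear and M from the sketches earlier in the thread; the file names are placeholders, not the actual result files.)

import numpy as np
from PIL import Image

# Mean linear-light Y of each file, for a quick luminance-drift comparison.
def mean_Y(path):
    srgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    return (srgb_to_linear(srgb) @ M.T)[..., 1].mean()

for name in ("input.png", "olddownsample50.png", "newdownsample50.png"):
    print(name, mean_Y(name))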
« Last Edit: September 28, 2014, 09:48:51 am by NicolasRobidoux »

NicolasRobidoux

  • Sr. Member
  • ****
  • Offline
  • Posts: 280
Re: A free high quality resampling tool for ImageMagick users
« Reply #315 on: September 28, 2014, 07:41:30 am »

PNGs of the results with DoG strength = 50.

NicolasRobidoux

  • Sr. Member
  • ****
  • Offline
  • Posts: 280
Re: A free high quality resampling tool for ImageMagick users
« Reply #316 on: September 28, 2014, 10:17:47 am »

With the "new downsample", there are isolated single or pairs of pixels that surprise me. Hung jury for now.
P.S. Typo in test code. (Unwittingly used the other way of resolving ties.) Redoing.
« Last Edit: September 28, 2014, 11:37:36 am by NicolasRobidoux »

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: A free high quality resampling tool for ImageMagick users
« Reply #317 on: September 28, 2014, 11:49:00 am »

With the "new downsample", there are isolated single or pairs of pixels that surprise me. Hung jury for now.

Nicolas,

Allow me to make a few quick remarks that might save some time when looking for differences.

First of all, I really appreciate Alan's coding skills being put to good use for improving the algorithms we consider. It may also have given me a better idea of how to tackle my Blend-if ideas, which would reduce the risk of clipping and add a few other positive side effects.

The deconvolution radius should IMHO be different for the different gamma versions of the restoration. I understand your concerns about the potential effects on color of using different radii, but the simple reason for different radii is that restoration of resolution with the "wrong" radius will either under-achieve with a smaller radius, or over-achieve (with halo) with a radius larger than optimal. The optimal radii were determined by fitting a Gaussian blur kernel to the 10x oversampled PSF blur as observed. Because the radii are almost the same, I didn't mention it yet, as I estimate the sub-optimal restoration will have a hardly noticeable effect, but I do want to mention it now that we start looking at how individual pixels turn out.
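(Not Bart's actual procedure, but the kind of fit he describes can be sketched in a few lines of Python; radii and psf_profile stand in for the measured, oversampled PSF data.)

import numpy as np
from scipy.optimize import curve_fit

# Recover the blur sigma by fitting a Gaussian to an observed PSF profile.
def gaussian(r, amp, sigma):
    return amp * np.exp(-r ** 2 / (2.0 * sigma ** 2))

radii = np.linspace(-4.0, 4.0, 81)                # sample positions (placeholder)
psf_profile = gaussian(radii, 1.0, 0.48)          # stand-in for measured PSF data
(amp, sigma), _ = curve_fit(gaussian, radii, psf_profile, p0=(1.0, 1.0))
print(sigma)    # would be compared with the DoG sigma (~0.4807) in the scripts above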

When comparing single pixels, I think it is very important to eliminate the potential influence of JPEG compression (quality) and chroma sub-sampling. That's why I set those parameters to the highest possible quality level in the "old" version.

Also, in the "old" version I force the processing to 16-bit precision from the start of the conversion with the '-depth 16' parameter, because ImageMagick tends to use the lowest amount of memory possible (e.g. by using single-channel images if 3-channel monochrome images are input), and may (or may not) do some of the processing in 8-bit/channel precision. I'd rather be over-cautious than sorry, and prefer to force the bit depth to 16, even for 8-b/ch input. When experimentation stabilizes, we can always test whether it makes any difference for a JPEG workflow, but I want to keep the options for dealing with other input sources open for now.

The median selection among the various luminances that result from deconvolution restoration tends to choose conservatively, towards domination by the un-sharpened luminance. This results in reduced sharpness, although on the positive side it also seems to reduce aliasing a little, even with a boosted sharpening amount. Alternatively, one might consider a different Keys 'C' value as filter input, but I do not want to change too many things at the same time, because we'd lose track of what exactly caused the observed changes in image detail.

I didn't want to nip the creative thinking process in the bud, because that's how progress can be made: by stumbling on unanticipated side effects that we may like. However, the lightening of high-amplitude, high-frequency image content is the direct result of resampling in linear gamma, not necessarily of the other factors that are being looked at. For example, averaging 51 and 204 in 8-bit gamma 2.2 space will produce 127.5, but it will produce 152 when first linearized, then averaged, and converted back to the original gamma space. It has far more impact than the other things that are being looked at.
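(Bart's 51-and-204 example, checked in a few lines of Python:)

# Averaging two 8-bit values in gamma 2.2 space vs. in linear light.
a, b, g = 51 / 255, 204 / 255, 2.2

gamma_space_avg = (a + b) / 2                           # average of encoded values
linear_light_avg = ((a ** g + b ** g) / 2) ** (1 / g)   # linearize, average, re-encode

print(gamma_space_avg * 255)           # 127.5
print(round(linear_light_avg * 255))   # 152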

Blending in linear light is essential if we want to preserve accurate color (e.g. as in the Dalai Lama image test), but it does lighten the high-amplitude and high (beyond Nyquist) spatial frequencies as they are low-pass filtered. One might test how a modest low-pass filter run in gamma space before down-sampling in linear light would work. Maybe it does a better job of achieving both objectives in RGB space: reduced lightening and preservation of accurate color blends.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8915
Re: A free high quality resampling tool for ImageMagick users
« Reply #318 on: September 28, 2014, 11:58:22 am »

Quote from: NicolasRobidoux
It looks like I'll have to come up with a new acronym.

Down-sampling with Median Blended Gamma Deconvolution (MBGD down-sampling)?

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Round trip color accuracy with double precision FP
« Reply #319 on: September 28, 2014, 12:27:27 pm »

If we leave out the quantizing to 16 bits after every conversion, and leave the image in a double-precision floating-point representation all the time, we can see that the round-trip color space conversion does not materially affect the accuracy of the color encoding.
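(In terms of the round-trip Python sketch earlier in the thread, and again not Jim's Matlab, this simply means dropping the quant16 calls:)

def roundtrip_once_fp(srgb):
    # as roundtrip_once above, but staying in float64 end to end
    xyz = srgb_to_linear(srgb) @ M.T
    argb = np.clip(xyz @ np.linalg.inv(ARGB_TO_XYZ).T, 0.0, 1.0) ** (1 / GAMMA_ARGB)
    xyz_back = (argb ** GAMMA_ARGB) @ ARGB_TO_XYZ.T
    return linear_to_srgb(np.clip(xyz_back @ np.linalg.inv(M).T, 0.0, 1.0))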

Jim