
Author Topic: The State of Vector Conversion Image Enlargement?  (Read 17164 times)

dwdallam

  • Sr. Member
  • ****
  • Offline
  • Posts: 2044
    • http://www.dwdallam.com
The State of Vector Conversion Image Enlargement?
« on: August 11, 2012, 05:40:17 pm »

I'm wondering if anyone has any experience using the latest algorithms for enlargement? For instance, Alien Skin Blowup 3 uses a method that takes chunks of the image, converts them to vector plots, and then uses the vector image to increase the size of the original. In theory, that should be a perfect enlargement with no degradation.

I have some old 20D images I'd love to print larger, and many 5D MKI images too. Although I have successfully printed the 5D images at 20x30 with excellent results, that was using the entire image and all of its original pixels. I have many cropped images that would benefit from a vector-conversion upsize; even some of my 1DSMKIII images with heavy cropping would benefit.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: The State of Vector Conversion Image Enlargement?
« Reply #1 on: August 12, 2012, 06:15:00 am »

Quote
I'm wondering if anyone has any experience using the latest algorithms for enlargement? For instance, Alien Skin Blowup 3 uses a method that takes chunks of the image, converts them to vector plots, and then uses the vector image to increase the size of the original. In theory, that should be a perfect enlargement with no degradation.

Hi,

After lots of experimentation I've settled on the results that PhotoZoom Pro can produce. It integrates nicely with Photoshop via an Automate and an Export plugin, and it also offers a standalone application, which can handle larger enlargements when the amount of memory Photoshop claims would drain your resources. I prefer its output over 'Blowup', which tends to round off sharp corners. PhotoZoom Pro also mixes raster and vector results, but in a user-adjustable ratio, to avoid an artificial-looking disconnect between edge detail and surface structure.

The great thing is that these methods add real resolution, instead of just adding more blurry pixels. For your purpose of significant upsampling, check out the 800% enlarged crops.

For some more background information you can also have a look at some of the printer setting related conclusions in this thread.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

bill t.

  • Sr. Member
  • ****
  • Offline
  • Posts: 3011
    • http://www.unit16.net
Re: The State of Vector Conversion Image Enlargement?
« Reply #2 on: August 12, 2012, 02:09:36 pm »

Bottom line for all those programs is that they exchange a grungy, pixelated look for one that is rather graphic in character.  There is simply no magic that can correct low resolution without creating some sort of artifacts.  Going from grunge to vectors simply gives you grunge with cleaner looking edges, and a look that I think of as digital scar tissue.

For images where only modest upscaling is required, I really feel that Bicubic Smoother up to around twice printer resolution, followed by Smart Sharpen, is often the best option.  Some tweaking of the "Advanced" Smart Sharpen controls is helpful.  It's quite a trick balancing out all the possible compromises, and the best solution is heavily dependent on the character of the original image.
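For those who want to script that workflow outside Photoshop, here is a rough Pillow equivalent (my own approximation: Pillow's bicubic and UnsharpMask stand in for Photoshop's Bicubic Smoother and Smart Sharpen, and the filenames are placeholders):

Code:
# Bicubic upscale to ~2x, then a gentle unsharp mask standing in for
# Smart Sharpen. The radius/percent/threshold values are starting points.
from PIL import Image, ImageFilter

img = Image.open("source.tif")
big = img.resize((img.width * 2, img.height * 2), Image.BICUBIC)
sharp = big.filter(ImageFilter.UnsharpMask(radius=1.0, percent=80, threshold=2))
sharp.save("upscaled_sharpened.tif")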

In every case in my experience, upscaled images and "straight" high resolution images seen next to each other have very different looks.  If you were setting up an exhibition or publication, you would want to keep those categories isolated from each other.

IMHO.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: The State of Vector Conversion Image Enlargement?
« Reply #3 on: August 12, 2012, 02:50:05 pm »

Quote
Bottom line for all those programs is that they exchange a grungy, pixelated look for one that is rather graphic in character.  There is simply no magic that can correct low resolution without creating some sort of artifacts.  Going from grunge to vectors simply gives you grunge with cleaner looking edges, and a look that I think of as digital scar tissue.

I wouldn't be so hasty to jump to that conclusion before actually trying it in print yourself. There are several controls in these applications that can mitigate the vectorized look. The edges can be made a bit less sharp, and noise can (and probably should) be added to suggest surface structure where there is none due to the enormous amount of upsampling. It's very effective, and looks much better than blurry blobs with halo and stair-stepping artifacts.
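The grain idea is easy to prototype outside PZP too; a minimal sketch assuming numpy and Pillow, with an 8-bit image and a sigma value that is just a starting point:

Code:
# Add fine gaussian grain to a heavily upsampled image to suggest
# surface structure where the upsampling left none.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("upsampled.tif"), dtype=np.float32)
grain = np.random.normal(0.0, 3.0, img.shape).astype(np.float32)  # sigma ~3/255
noisy = np.clip(img + grain, 0, 255).astype(np.uint8)
Image.fromarray(noisy).save("upsampled_grain.tif")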

Quote
For images where only modest upscaling is required, I really feel that Bicubic Smoother up to around twice printer resolution, followed by Smart Sharpen, is often the best option.  Some tweaking of the "Advanced" Smart Sharpen controls is helpful.  It's quite a trick balancing out all the possible compromises, and the best solution is heavily dependent on the character of the original image.

The Bicubic Smoother quality is actually pretty poor compared to some of the alternatives, but at 200% it may not be too apparent (until you see an alternative next to it). Using these tools also allows one to print (and sharpen) at the printer's native resolution (600/720 PPI), which by itself makes a visible difference for subjects with fine detail.
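To put numbers on that native-resolution point, a quick Python sketch (the 20x30 inch print size is just an example):

Code:
# Pixels needed to feed the printer its native grid for a given print size.
def pixels_needed(width_in, height_in, ppi):
    return round(width_in * ppi), round(height_in * ppi)

for ppi in (300, 600, 720):
    print(ppi, "PPI:", pixels_needed(20, 30, ppi))
# 300 PPI: (6000, 9000)   600 PPI: (12000, 18000)   720 PPI: (14400, 21600)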

Quote
In every case in my experience, upscaled images and "straight" high resolution images seen next to each other have very different looks.  If you were setting up an exhibition or publication, you would want to keep those categories isolated from each other.

Of course nothing beats real pixels, but sometimes we need more than there is available, and using the right tools then can make a huge difference in narrowing the gap.

IMHO, of course.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

dwdallam

  • Sr. Member
  • ****
  • Offline
  • Posts: 2044
    • http://www.dwdallam.com
Re: The State of Vector Conversion Image Enlargement?
« Reply #4 on: August 12, 2012, 05:19:06 pm »

Hey Bart,

Thanks for the information. Would you be willing to post an example of PZP? If you have an old 20D image or something similar and could increase its resolution to 22MP, that would be great. Then just crop it and upload a partial next to the original.

Logged

dwdallam

  • Sr. Member
  • ****
  • Offline
  • Posts: 2044
    • http://www.dwdallam.com
Re: The State of Vector Conversion Image Enlargement?
« Reply #5 on: August 12, 2012, 05:26:02 pm »

Bill,

I'm not looking to change the image. If the software can increase an image 3x in the case of my old 20D images, and 2x in the case of the 5D images (not that I really need that, as the prints look really tight at 20x30), while maintaining their original "look", noise and all, I'm OK with that.

In theory, the way I understand it (which may be wrong), a vector image does not lose anything when up-scaling or down-scaling. The challenge is converting complex pixel data into the vectors in the first place, and that's where we see undesirable conversion artifacts.
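A toy illustration of that losslessness (a minimal Pillow sketch; the triangle is just an arbitrary stand-in for vectorized image data):

Code:
# The same vector description rendered at 1x and 3x stays sharp, because
# the coordinates are scaled before rasterizing, not the pixels after.
from PIL import Image, ImageDraw

shape = [(10, 10), (90, 30), (50, 90)]  # a vector description: a triangle

for scale in (1, 3):
    im = Image.new("L", (100 * scale, 100 * scale), 255)
    pts = [(x * scale, y * scale) for x, y in shape]
    ImageDraw.Draw(im).polygon(pts, outline=0)
    im.save("triangle_%dx.png" % scale)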

One of these days we won't need to worry about that at all, since cameras will capture vector images natively--probably about the time quantum processors hit the market, since that's probably how much power they'll need to accomplish that task.

Quote
Bottom line for all those programs is that they exchange a grungy, pixelated look for one that is rather graphic in character.  There is simply no magic that can correct low resolution without creating some sort of artifacts.  Going from grunge to vectors simply gives you grunge with cleaner looking edges, and a look that I think of as digital scar tissue.

For images where only modest upscaling is required, I really feel that Bicubic Smoother up to around twice printer resolution, followed by Smart Sharpen, is often the best option.  Some tweaking of the "Advanced" Smart Sharpen controls is helpful.  It's quite a trick balancing out all the possible compromises, and the best solution is heavily dependent on the character of the original image.

In every case in my experience, upscaled images and "straight" high resolution images seen next to each other have very different looks.  If you were setting up an exhibition or publication, you would want to keep those categories isolated from each other.

IMHO.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: The State of Vector Conversion Image Enlargement?
« Reply #6 on: August 13, 2012, 06:33:34 am »

Quote
Thanks for the information. Would you be willing to post an example of PZP? If you have an old 20D image or something similar and could increase its resolution to 22MP, that would be great. Then just crop it and upload a partial next to the original.

Hi,

No problem, although going from 8MP to 22MP is only a 1.644x linear magnification (20D = 3504 x 2336 px, 5D3 = 5760 x 3840 px; 5760 / 3504 ≈ 1.644). Even Photoshop's BiCubic Smoother algorithm can do that (although the PhotoZoom results already look a bit better).

When you go larger, or crop significantly, or want to utilize the 720 or 600 PPI print resolution instead of the default, the quality difference becomes more apparent.

Cheers,
Bart

P.S. The image is from a 1Ds3 instead of a 20D, but both have 6.4 micron pitch sensors.
« Last Edit: August 13, 2012, 06:36:38 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

hjulenissen

  • Sr. Member
  • ****
  • Offline
  • Posts: 2051
Re: The State of Vector Conversion Image Enlargement?
« Reply #7 on: August 13, 2012, 08:17:38 am »

Quote
...takes chunks of the image, converts them to vector plots, and then uses the vector image to increase the size of the original. In theory, that should be a perfect enlargement with no degradation.
Only if the real-world scene fits the assumption that observed "edges" in a low-resolution image correspond, via a simple model, to edges in a high-resolution image of the same scene.

This may be true for many, or even the most important, kinds of images. It is clearly not the case for all images, so it cannot give a general, perfect enlargement.

For example, if an image from your digital camera produces a "stair-step-like" pattern in a part of the image, a "vector-oriented" scaler would probably assume that these are samples of a smooth edge, and reproduce that edge at a higher resolution with similar pixel values and a sharper transition. The real scene could, however, really contain such steps (staircases, tiled rooftops, etc). The in-camera anti-aliasing filter, lens, focus, movement etc would affect the outcome to a great degree.

This is an example of how, while there should be a deterministic mapping from a high-resolution image to a low-resolution image, there is no unique mapping from the low-resolution image back to the high-resolution one: several different scenes could produce the same low-resolution image. An intelligent scaler could then perhaps pick the "most likely" high-resolution image, or what it believes to be the "most pleasing" one. Or you could be presented with several alternatives and pick or blend those that you prefer. But that is far from the "theoretically perfect" enlargement that would mean we only needed 1-megapixel cameras.
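Here is a tiny numpy illustration of that ambiguity (made-up one-dimensional "scenes", with box-filter downsampling assumed):

Code:
# Two different scenes -- a smooth edge and a stair-step texture --
# average down to identical low-res samples, so no scaler can recover
# the right one from the samples alone.
import numpy as np

smooth = np.repeat([0.0, 1.0], 4)                                    # clean edge
steps = np.array([0.0, 0.25, -0.25, 0.0, 1.25, 0.75, 1.25, 0.75])   # real steps
box_down = lambda a: a.reshape(-1, 4).mean(axis=1)

print(box_down(smooth))  # [0. 1.]
print(box_down(steps))   # [0. 1.]  -- same low-res image, different scene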

I tend to prefer the gentle smoothing of a properly filtered, "undersampled" image to the sharp but occasionally "artifacty" look of an image heavily processed by non-linear scaling, sharpening, etc. If there is some software (or expert usage) where I could have the best of both worlds, I would be happy.

-h
« Last Edit: August 13, 2012, 08:24:09 am by hjulenissen »
Logged

dwdallam

  • Sr. Member
  • ****
  • Offline
  • Posts: 2044
    • http://www.dwdallam.com
Re: The State of Vector Conversion Image Enlargement?
« Reply #8 on: August 13, 2012, 05:51:46 pm »

My point is that processing power will eventually be able to make an exact copy of your image and reproduce it exactly as it is, at any size, perfectly, or "improve" it if you want. The technology is already here. It just takes a hell of a lot of power to do it.

Quote
Only if the real-world scene fits the assumption that observed "edges" in a low-resolution image correspond, via a simple model, to edges in a high-resolution image of the same scene.

-h
Logged

dwdallam

  • Sr. Member
  • ****
  • Offline
  • Posts: 2044
    • http://www.dwdallam.com
Re: The State of Vector Conversion Image Enlargement?
« Reply #9 on: August 13, 2012, 05:55:12 pm »

Bart,

I didn't even think about that. What about going from an 8MP image to a 300ppi image at 20x30? When converting my old 5D images to print at 20x30, I allowed PS to resample up to 300ppi and had excellent results; not perfect, but still printable at 20x30 with sharp edges and nicely rendered transitions, especially at about 4.5 feet viewing distance.

I'd really be interested in seeing an 8MP image at 20x30" 300ppi.

Quote
Hi,

No problem, although going from 8MP to 22MP is only a 1.644x linear magnification (20D = 3504 x 2336 px, 5D3 = 5760 x 3840 px; 5760 / 3504 ≈ 1.644). Even Photoshop's BiCubic Smoother algorithm can do that (although the PhotoZoom results already look a bit better).

When you go larger, or crop significantly, or want to utilize the 720 or 600 PPI print resolution instead of the default, the quality difference becomes more apparent.

Cheers,
Bart

P.S. The image is from a 1Ds3 instead of a 20D, but both have 6.4 micron pitch sensors.
Logged

dwdallam

  • Sr. Member
  • ****
  • Offline
  • Posts: 2044
    • http://www.dwdallam.com
Re: The State of Vector Conversion Image Enlargement?
« Reply #10 on: August 13, 2012, 06:06:34 pm »

While we're on the topic of print resolution, I've heard lately that 150dpi (ppi?) is enough for most printers to get a very good, artifact-free image printed on photo paper. So what's the standard these days?

I know my 1DS MKII will output a 300ppi image at 12x18, no crop. So you can see that going any larger, or cropping, will need resampling to maintain that ppi.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: The State of Vector Conversion Image Enlargement?
« Reply #11 on: August 13, 2012, 07:20:59 pm »

Quote
Bart,

I didn't even think about that. What about going from an 8MP image to a 300ppi image at 20x30?

That would be 6000 x 9000 pixels, coming from 2336 x 3504 px, which equals a 2.568x linear magnification (still a modest challenge for PhotoZoom Pro). See the attached results (just using a preset, no tweaking involved), before any output sharpening (because I don't know your specific media and printer requirements).
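Scripted, the magnification arithmetic looks like this (a trivial sketch; the 20D dimensions are from above):

Code:
# Linear magnification from a 20D frame (2336 x 3504 px) to 20x30 inch
# prints at 300 and 600 PPI.
src_short = 2336
for ppi in (300, 600):
    print(ppi, "PPI ->", round(20 * ppi / src_short, 3), "x linear")
# 300 PPI -> 2.568x linear; 600 PPI -> 5.137x linear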

On screen it may not look too pretty, but in print the PZP result shines (and the magnification quality can be adjusted at will when converting). Imagine what happens when printing a 600 PPI magnification ...

Cheers,
Bart
« Last Edit: August 13, 2012, 07:29:37 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

dwdallam

  • Sr. Member
  • ****
  • Offline
  • Posts: 2044
    • http://www.dwdallam.com
Re: The State of Vector Conversion Image Enlargement?
« Reply #12 on: August 13, 2012, 08:33:26 pm »

Interesting. Bicubic does do a very good job, even compared to the PZP. I'm wondering if the PS Bicubic algorithm has been improved since I last used it to uprez that much? But you are saying that in print, the effect of uprezzing is "much" more evident with bicubic than with PZP at 2.568x magnification?

Quote
That would be 6000 x 9000 pixels, coming from 2336 x 3504 px, which equals a 2.568x linear magnification (still a modest challenge for PhotoZoom Pro). See the attached results (just using a preset, no tweaking involved), before any output sharpening (because I don't know your specific media and printer requirements).

On screen it may not look too pretty, but in print the PZP result shines (and the magnification quality can be adjusted at will when converting). Imagine what happens when printing a 600 PPI magnification ...

Cheers,
Bart
Logged

Wolfman

  • Sr. Member
  • ****
  • Offline
  • Posts: 314
    • www.bernardwolf.com
Re: The State of Vector Conversion Image Enlargement?
« Reply #13 on: August 14, 2012, 02:16:06 am »

I tried the PhotoZoom Pro demo and it is painfully slow compared to PS Bicubic Smoother, and PS Bicubic Smoother shines when used in conjunction with this technique:
http://www.digitalphotopro.com/technique/software-technique/the-art-of-the-up-res.html?start=3
Logged

hjulenissen

  • Sr. Member
  • ****
  • Offline
  • Posts: 2051
Re: The State of Vector Conversion Image Enlargement?
« Reply #14 on: August 14, 2012, 02:18:07 am »

Quote
My point is that processing power will eventually be able to make an exact copy of your image and reproduce it exactly as it is, at any size, perfectly, or "improve" it if you want. The technology is already here. It just takes a hell of a lot of power to do it.
Do you have any references, or is this just a wish-list?

-h
Logged

hjulenissen

  • Sr. Member
  • ****
  • Offline
  • Posts: 2051
Re: The State of Vector Conversion Image Enlargement?
« Reply #15 on: August 14, 2012, 02:26:29 am »

Quote
While we're on the topic of print resolution, I've heard lately that 150dpi (ppi?) is enough for most printers to get a very good, artifact-free image printed on photo paper. So what's the standard these days?

I know my 1DS MKII will output a 300ppi image at 12x18, no crop. So you can see that going any larger, or cropping, will need resampling to maintain that ppi.
There are at least three issues:
1. What is the "native" grid of the printer? I.e., what pattern may be used for ink dots?
2. What is the signal flow from printing application to printer hardware? I.e., will the printer driver do something like bilinear interpolation, or even nearest neighbor, if you feed it a frame that does not coincide with pt. 1?
3. What is the acuity of our vision? In other words, at what point do further improvements stop mattering?

I believe that you put too much weight on the "dpi" line of thinking, and on the reluctance to do image resampling. I believe that the image _will_ be resampled at least once between camera sensor and print. Perhaps someone using specialized RIPs and not caring much about physical print size can prove me wrong, but it seems that the image pipeline is far too complicated, and the user requirements too far removed from the "1:1 pixel" line of thought, for image resampling to be avoidable. Any image is heavily processed anyway (debayering is a kind of interpolation, and dithering in the printer makes large changes to the image on a small scale).

It may make more sense to inspect the end result and see what kinds of artifacts and spatial resolution you are achieving. If the result is good, all is good, right?

So:
For a given image, what kind of MTF figures can you hope to achieve with your 1DS (lens, focus, stand, sharpening/deconvolution...)? What is the "MTF" from your raw processor/printing application onto paper? Given these two (multiplied together, as in the rough sketch below the list), you may be able to:
a) Figure out how large you can print before the printer begins to significantly limit the actual end-to-end resolution
b) Figure out how large you can print before the prints start to look fuzzy to you at some given distance.
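As a rough sketch of that bookkeeping (the Gaussian MTF curves below are made-up stand-ins, not measured data):

Code:
# System MTF is the product of the component MTFs; the weakest link
# dominates where the print starts to look fuzzy.
import numpy as np

f = np.linspace(0.0, 1.0, 201)  # spatial frequency, cycles/pixel

def mtf(f, f50):
    # Gaussian-ish MTF shape with 50% response at f50 (a stand-in model)
    return np.exp(-np.log(2.0) * (f / f50) ** 2)

system = mtf(f, 0.35) * mtf(f, 0.45) * mtf(f, 0.60)  # lens * sensor * printer
print("system MTF50 ~", f[np.argmin(np.abs(system - 0.5))], "cy/px")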

-h
« Last Edit: August 14, 2012, 02:37:59 am by hjulenissen »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: The State of Vector Conversion Image Enlargement?
« Reply #16 on: August 14, 2012, 09:24:22 am »

Quote
Interesting. Bicubic does do a very good job, even compared to the PZP. I'm wondering if the PS Bicubic algorithm has been improved since I last used it to uprez that much? But you are saying that in print, the effect of uprezzing is "much" more evident with bicubic than with PZP at 2.568x magnification?

Yes, and also don't underestimate the benefit of sharpening and printing at 600 PPI. You can sharpen more (with a small radius) at 600 PPI (or 720 PPI for Epsons) because the artifacts will probably be too small to see. What's more, boosting the modulation of the highest spatial frequencies also lifts the other/lower spatial frequencies, so the total image becomes more defined.
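To see why the artifacts become invisible, consider the physical size of a one-pixel sharpening halo at different print resolutions (a trivial sketch, radii illustrative only):

Code:
# A sharpening halo of fixed width in pixels shrinks on paper as PPI rises.
for ppi, radius_px in ((300, 1.0), (600, 1.0), (720, 1.0)):
    print(ppi, "PPI:", round(radius_px / ppi * 25.4, 4), "mm halo width")
# 300 -> 0.0847 mm, 600 -> 0.0423 mm, 720 -> 0.0353 mm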

With PZP you can balance between the amount of vector edge sharpening and USM-like sharpening, whatever the image requires, and you can add noise at the finest detail level.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: The State of Vector Conversion Image Enlargement?
« Reply #17 on: August 14, 2012, 10:30:26 am »

Quote
I tried the PhotoZoom Pro demo and it is painfully slow compared to PS Bicubic Smoother, and PS Bicubic Smoother shines when used in conjunction with this technique:
http://www.digitalphotopro.com/technique/software-technique/the-art-of-the-up-res.html?start=3

BiCubic Smoother produces halos while upsampling. I don't see how that can be helpful for anything but the most modest resampling factors.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

AFairley

  • Sr. Member
  • ****
  • Offline
  • Posts: 1486
Re: The State of Vector Conversion Image Enlargement?
« Reply #18 on: August 14, 2012, 12:02:25 pm »

Quote
With PZP you can balance between the amount of vector edge sharpening and USM-like sharpening, whatever the image requires, and you can add noise at the finest detail level.

Bart, I am playing with the PZP demo, but I'm not sure if I see the controls to do the balance you are talking about, can you elaborate?

Thanks
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: The State of Vector Conversion Image Enlargement?
« Reply #19 on: August 14, 2012, 02:04:58 pm »

Quote
Bart, I am playing with the PZP demo, but I'm not sure if I see the controls to do the balance you are talking about, can you elaborate?

Hi,

Assuming you have selected the "S-Spline Max" resizing method, start by selecting the "Photo - Detailed" preset. That will leave Unsharp Masking unchecked, and set Sharpness to 100.00, Film Grain to 20, and Artifact Reduction to 0. Now reduce the Sharpness value; at zero, it uses a good interpolation that doesn't produce halos and tries to keep jaggies away. When you then check the "Unsharp Masking" checkbox, you can apply regular USM sharpening. That's how you can balance between full vector edge detail (which doesn't produce halos and doesn't require much sharpening) with Sharpness at 100.00, and/or USM with its checkbox enabled.

The basic "S-Spline Max" resizing method will still be much better than Bicubic, but of course one can always "blend-if" two layers in Photoshop if one prefers, and PZP also offers other, more traditional, resampling methods. When you composite an "S-Spline Max" layer on top of another method's resampling, you can effectively mix them (while suppressing halos) with the following type of layer Blend-If setting:
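In code terms, that mix boils down to something like this numpy sketch (my rough approximation of the Blend-If behaviour, not Photoshop's actual engine; the filenames and tonal thresholds are placeholders):

Code:
# Favour the S-Spline Max layer in the midtones and fall back to the
# other layer in the extreme tones, where halos would show.
import numpy as np
from PIL import Image

sspline = np.asarray(Image.open("sspline_max.tif"), dtype=np.float32)
other = np.asarray(Image.open("bicubic.tif"), dtype=np.float32)

lum = other.mean(axis=2)  # crude luminosity of the underlying layer
w = np.clip((lum - 20) / 30, 0, 1) * np.clip((235 - lum) / 30, 0, 1)
blend = sspline * w[..., None] + other * (1 - w[..., None])
Image.fromarray(blend.astype(np.uint8)).save("blended.tif")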


Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==