Luminous Landscape Forum

Raw & Post Processing, Printing => Digital Image Processing => Topic started by: dwdallam on August 11, 2012, 05:40:17 pm

Title: The State of Vector Conversion Image Enlargement?
Post by: dwdallam on August 11, 2012, 05:40:17 pm
I'm wondering if anyone has any experience using the latest algorithms for enlargement? For instance, Alien Skin Blowup 3 uses a method where it takes chunks of the image and converts them to vector plots, and then uses the vector image to increase the size of the original. In theory, that should be a perfect enlargement with no degradation.

I have some old 20D images I'd love to print larger, and even many 5D MKI images too. Although I have successfully printed the 5D images at 20x30 with excellent results, that was using the entire image and all of its original pixels. There are many images I have that are cropped that would benefit from a vector conversion up size. Even some of my 1DSMKIII images with heavy cropping would benefit.
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Bart_van_der_Wolf on August 12, 2012, 06:15:00 am
I'm wondering if anyone has any experience using the latest algorithms for enlargement? For instance, Alien Skin Blowup 3 uses a method where it takes chunks of the image and converts them to vector plots, and then uses the vector image to increase the size of the original. In theory, that should be a perfect enlargement with no degradation.

Hi,

After lots of experimentation I've settled on the results that PhotoZoom Pro (http://www.benvista.com/photozoompro/) can produce. It integrates nicely with Photoshop via an Automate and Export plugin, and it also offers a standalone application, so it can handle even larger enlargements when the memory that Photoshop claims would drain someone's resources. I prefer its output over 'Blowup', which tends to round off sharp corners. PhotoZoom Pro also mixes raster and vector results, but in a user-adjustable ratio, to avoid producing an artificial-looking mental disconnect between edge detail and surface structure.

The great thing is that these methods add real resolution (http://www.luminous-landscape.com/forum/index.php?topic=62609.msg505337#msg505337), instead of just producing more, blurrier pixels. For your purpose of significant upsampling, check out the 800% enlarged crops.

For some more background information you can also have a look at some of the printer setting related conclusions in this thread (http://www.luminous-landscape.com/forum/index.php?topic=54798.0).

Cheers,
Bart
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: bill t. on August 12, 2012, 02:09:36 pm
Bottom line for all those programs is that they exchange a grungy, pixelated look for one that is rather graphic in character.  There is simply no magic that can correct low resolution without creating some sort of artifacts.  Going from grunge to vectors simply gives you grunge with cleaner looking edges, and a look that I think of as digital scar tissue.

For images where only modest upscaling is required, I really feel that Bicubic Smooth up to around twice printer resolution followed by Smart Sharpen is often the best option.  Some tweaking of the "Advanced" Smart Sharpen controls is helpful.  It's quite a trick balancing out all the possible compromises, and the best solution is heavily dependent on the character of the original image.
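As a rough illustration of that workflow outside Photoshop, here is a minimal Pillow sketch of "bicubic upsample to about twice printer resolution, then sharpen with a small radius". Photoshop's Smart Sharpen is proprietary, so a plain unsharp mask stands in for it, and the file name and settings are only placeholders to tune per image.

Code:
# Rough analogue of "Bicubic upsample, then sharpen" using Pillow.
# UnsharpMask is a stand-in for Smart Sharpen; settings are illustrative only.
from PIL import Image, ImageFilter

src = Image.open("original.tif")          # hypothetical input file
scale = 2.0                               # e.g. up to ~2x the printer resolution
upsized = src.resize((int(src.width * scale), int(src.height * scale)),
                     resample=Image.BICUBIC)

# Small radius, moderate amount; tune per image, as the post suggests.
sharpened = upsized.filter(ImageFilter.UnsharpMask(radius=1.5, percent=120, threshold=2))
sharpened.save("upsized_sharpened.tif")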

In every case in my experience, upscaled images and "straight" high resolution images seen next to each other have very different looks.  If you were setting up an exhibition or publication, you would want to keep those categories isolated from each other.

IMHO.
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Bart_van_der_Wolf on August 12, 2012, 02:50:05 pm
Bottom line for all those programs is that they exchange a grungy, pixelated look for one that is rather graphic in character.  There is simply no magic that can correct low resolution without creating some sort of artifacts.  Going from grunge to vectors simply gives you grunge with cleaner looking edges, and a look that I think of as digital scar tissue.

I wouldn't be so hasty to jump to that conclusion before actually trying it in print yourself. There are several controls in these applications that can mitigate the vectorized look. The edges can be made a bit less sharp, and noise can (and probably should) be added to suggest surface structure where there is none due to the enormous amount of upsampling. It's very effective, and looks far better than blurry blobs with halo and stair-stepping artifacts.

Quote
For images where only modest upscaling is required, I really feel that Bicubic Smooth up to around twice printer resolution followed by Smart Sharpen is often the best option.  Some tweaking of the "Advanced" Smart Sharpen controls is helpful.  It's quite a trick balancing out all the possible compromises, and the best solution is heavily dependent on the character of the original image.

The BiCubic Smooth quality is actually pretty poor compared to some of the alternatives, but at 200% it may not be too apparent (until you see an alternative next to it). Using these tools also allows you to print (and sharpen) at the printer's native resolution (600/720 PPI), which by itself makes a visible difference for subjects with fine detail.

Quote
In every case in my experience, upscaled images and "straight" high resolution images seen next to each other have very different looks.  If you were setting up an exhibition or publication, you would want to keep those categories isolated from each other.

Of course nothing beats real pixels, but sometimes we need more than there is available, and using the right tools can then make a huge difference in narrowing the gap.

IMHO, of course.

Cheers,
Bart
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: dwdallam on August 12, 2012, 05:19:06 pm
Hey Bart,

Thanks for the information. Would you be willing to post an example of PZP? If you have an old 20D image or something similar and could increase its resolution to 22MP, that would be great. Then just crop it and upload a partial next to the original.

Title: Re: The State of Vector Conversion Image Enlargement?
Post by: dwdallam on August 12, 2012, 05:26:02 pm
Bill,

I'm not looking to change the image. If the software can increase an image x3, in the case of my old 20D images, and x2 in the case of the 5D images (not that I really need that, as the prints look really tight at 20x30), while maintaining their original "look", noise and all, I'm OK with that.

In theory, the way I understand it (which may be wrong), a vector image does not lose anything when up-scaling or down-scaling. The challenge is to convert complex pixel data into the vectors themselves, and that's where we see undesirable conversion artifacts.

One of these days we won't need to worry about that at all, since cameras will capture vector images natively--probably about the time quantum processors hit the market, since that's probably how much power they'll need to accomplish that task.

Bottom line for all those programs is that they exchange a grungy, pixelated look for one that is rather graphic in character.  There is simply no magic that can correct low resolution without creating some sort of artifacts.  Going from grunge to vectors simply gives you grunge with cleaner looking edges, and a look that I think of as digital scar tissue.

For images where only modest upscaling is required, I really feel that Bicubic Smooth up to around twice printer resolution followed by Smart Sharpen is often the best option.  Some tweaking of the "Advanced" Smart Sharpen controls is helpful.  It's quite a trick balancing out all the possible compromises, and the best solution is heavily dependent on the character of the original image.

In every case in my experience, upscaled images and "straight" high resolution images seen next to each other have very different looks.  If you were setting up an exhibition or publication, you would want to keep those categories isolated from each other.

IMHO.
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Bart_van_der_Wolf on August 13, 2012, 06:33:34 am
Thanks for the information. Would you be willing to post an example of PZP? If you have an old 20D image or something similar and could increase its resolution to 22MP, that would be great. Then just crop it and upload a partial next to the original.

Hi,

No problem, although going from 8MP to 22MP is only a 1.644x linear magnification (20D = 3504 x 2336 px , 5D3 = 5760 x 3840 px). Even Photoshop's BiCubic Smoother algorithm can do that (although the PhotoZoom results already look a bit better).

When you go larger, or crop significantly, or want to utilize the 720 or 600 PPI print resolution instead of the default, then the quality difference becomes more apparent.

Cheers,
Bart

P.S. The image is from a 1Ds3 instead of a 20D, but both have 6.4 micron pitch sensors.
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: hjulenissen on August 13, 2012, 08:17:38 am
...it takes chunks of the image and converts them to vector plots, and then uses the vector image to increase the size of the original. In theory, that should be a perfect enlargement with no degradation.
Only if the real-world scene matches the assumption that observed "edges" in a low-resolution image correspond, via a simple model, to edges in a high-resolution version of the same scene.

This may be true for many, or even the most important, kinds of images. It is clearly not the case for all images, so it cannot provide a general, perfect enlargement.

For example, if an image from your digital camera produces a "stair-step-like" pattern in part of the image, a "vector-oriented" scaler would probably assume that these are samples of a smooth edge, and reproduce this edge at a higher resolution with similar pixel values and a sharper transition. The real scene could, however, really contain such steps (staircases, tiled rooftops, etc). The in-camera anti-aliasing filter, lens, focus, movement, etc. would affect the outcome to a great degree. This is an example of how, while there should be a deterministic mapping from a high-resolution image to a low-resolution image, there is no unique mapping from a low-resolution image back to a high-resolution one: several different scenes could produce the same low-resolution image. An intelligent scaler could then perhaps pick the "most likely" high-resolution image, or what it believes to be the "most pleasing" one. Or you could be presented with several alternatives and pick or blend those that you prefer. But that is far from the "theoretically perfect" enlargement that would mean we only needed 1-megapixel cameras.
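A tiny numpy sketch of that ambiguity: two quite different high-resolution scenes collapse to exactly the same low-resolution image under a simple box-filter downsample, so no upscaler can tell from the small image alone which scene was real. The scenes and the 2x factor are purely illustrative.

Code:
# Two different "high-resolution" scenes that become identical after a simple
# 2x box-filter downsample, so no upscaler can recover which one was real.
import numpy as np

def box_downsample(img, factor=2):
    # Average each factor x factor block (a crude low-pass + decimate).
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

scene_a = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # fine checkerboard
scene_b = np.full((8, 8), 0.5)                                # featureless grey

print(np.allclose(box_downsample(scene_a), box_downsample(scene_b)))  # True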

I tend to prefer the gentle smoothing from a properly filtered, "undersampled" image to the sharp but occasionally "artifacty" look of an image heavily processed by non-linear scaling, sharpening, etc. If there is some software (or expert usage) that could give me the best of both worlds, I would be happy.

-h
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: dwdallam on August 13, 2012, 05:51:46 pm
My point is that processing power will eventually be able to make an exact copy of your image and reproduce it exactly as it is, at any size, perfectly, or "improve" it if you want. The technology is already here. It just takes a hell of a lot of power to do it.

Only if the real-world scene matches the assumption that observed "edges" in a low-resolution image correspond, via a simple model, to edges in a high-resolution version of the same scene.

-h
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: dwdallam on August 13, 2012, 05:55:12 pm
Bart,

I didn't even think about that. What about going from an 8MP image to a 300ppi image at 20x30? When converting my old 5D images to print at 20x30 I allowed PS to re-sample up to 300ppi and had excellent results, not perfect, but still printable at 20x30 with sharp edges and nicely rendered transitions, especially at about 4.5 feet viewing distance.

I'd really be interested in seeing an 8mp image at 20x30" 300ppi.

Hi,

No problem, although going from 8MP to 22MP is only a 1.644x linear magnification (20D = 3504 x 2336 px , 5D3 = 5760 x 3840 px). Even Photoshop's BiCubic Smoother algorithm can do that (although the PhotoZoom results already look a bit better).

When you go larger, or crop significantly, or want to utilize the 720 or 600 PPI print resolution instead of the default, then the quality difference becomes more apparent.

Cheers,
Bart

P.S. The image is from a 1Ds3 instead of a 20D, but both have 6.4 micron pitch sensors.
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: dwdallam on August 13, 2012, 06:06:34 pm
While we're on the topic of print resolution, I've heard lately that 150dpi (ppi?) is enough for most printers to get a very good, artifact-free image printed on photo paper. So what's the standard these days?

I know my 1DS MKII will output a 300ppi image at 12x18, no crop. So you can see going any larger or cropping will need resampling to achieve higher ppi.
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Bart_van_der_Wolf on August 13, 2012, 07:20:59 pm
Bart,

I didn't even think about that. What about going from 8MPs image to a 300ppi image at 20x30?

That would be 6000 x 9000 pixels; coming from 2336 x 3504 px, that equals a 2.568x linear magnification (still a modest challenge for PhotoZoom Pro). See the attached results (just using a preset, no tweaking involved), before any output sharpening (because I don't know your specific media and printer requirements).
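For reference, the arithmetic in that post can be checked in a few lines; the print size, PPI and source frame are the values quoted in the thread.

Code:
# Pixels needed for a 20x30 inch print at 300 PPI, and the linear magnification
# from a 20D frame (numbers as quoted in this thread).
print_inches, ppi = (20, 30), 300
source_px = (2336, 3504)

target_px = (print_inches[0] * ppi, print_inches[1] * ppi)   # (6000, 9000)
magnification = target_px[1] / source_px[1]                  # ~2.568x linear
print(target_px, round(magnification, 3))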

On screen it may not look too pretty, but in print the PZP result shines (and the magnification quality can be adjusted at will when converting). Imagine what happens when printing a 600 PPI magnification ...

Cheers,
Bart
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: dwdallam on August 13, 2012, 08:33:26 pm
Interesting. Bicubic does do a very good job even compared to the PZP. I'm wondering if the PS Bicubic algorithm has been improved since I used it to uprez that much? But you are saying that in print, the effect of uprezing is "much"  more evident with bicubic than with PZP at 2.58 magnification?

That would be 6000 x 9000 pixels; coming from 2336 x 3504 px, that equals a 2.568x linear magnification (still a modest challenge for PhotoZoom Pro). See the attached results (just using a preset, no tweaking involved), before any output sharpening (because I don't know your specific media and printer requirements).

On screen it may not look too pretty, but in print the PZP result shines (and the magnification quality can be adjusted at will when converting). Imagine what happens when printing a 600 PPI magnification ...

Cheers,
Bart
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Wolfman on August 14, 2012, 02:16:06 am
I tried the Photo Zoom Pro demo and it is painfully slow compared to PS bicubic smoother, and PS bicubic smoother shines when using this technique in conjunction with it:
http://www.digitalphotopro.com/technique/software-technique/the-art-of-the-up-res.html?start=3
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: hjulenissen on August 14, 2012, 02:18:07 am
My point is that processing power will eventually be able to make an exact copy of your image and reproduce it exactly as it is, at any size, perfectly, or "improve" it if you want. The technology is already here. It just takes a hell of a lot of power to do it.
Do you have any references, or is this just a wish-list?

-h
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: hjulenissen on August 14, 2012, 02:26:29 am
While we're on the topic of print resolution, I've heard lately that 150dpi (ppi?) is enough for most printers to get a very good, artifact-free image printed on photo paper. So what's the standard these days?

I know my 1DS MKII will output a 300ppi image at 12x18, no crop. So you can see going any larger or cropping will need resampling to achieve higher ppi.
There are at least three issues:
1. What is the "native" grid of the printer, i.e. what pattern may be used for ink dots?
2. How does the signal flow from printing application to printer hardware? I.e. will the printer driver do something like bilinear interpolation or even nearest neighbor if you feed it a frame that does not coincide with pt. 1?
3. What is the acuity of our vision? In other words, at what point do further improvements not matter anymore?

I believe that you put too much weight on the "dpi" line of thinking, and on the reluctance to do image resampling. I believe that the image _will_ be resampled at least once from camera sensor to print. Perhaps someone using specialized RIPs and not caring much about physical print size can prove me wrong, but it seems that the image pipeline is far too complicated, and the user requirements too far removed from the "1:1 pixel" line of thought, for image resampling to be avoidable. Any image is heavily processed anyway (debayering is a kind of interpolation, dithering in the printer makes large changes to the image on a small scale). It may make more sense to inspect the end result and see what kinds of artifacts and spatial resolution you are achieving. If the result is good, all is good, right?

So:
For a given image, what kind of MTF figures can you hope to achieve with your 1DS (lens, focus, stand, sharpening/deconvolution...)? What is the "MTF" from your raw processor/printing application onto paper? Given these two, you may be able to:
a) Figure out how large you can print before the printer begins to significantly limit the actual end-to-end resolution
b) Figure out how large you can print before the prints start to look fuzzy to you at some given distance.
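Following on from point b), a back-of-the-envelope sketch of that calculation, assuming the common rule of thumb of roughly one arcminute of visual acuity. The frame width and viewing distances are example values, not figures from this thread.

Code:
# Back-of-envelope estimate for point b): how large a frame can print before it
# looks soft at a given viewing distance, assuming ~1 arcminute visual acuity
# (a simplification; real eyes, prints and subjects vary).
import math

def required_ppi(viewing_distance_inches, acuity_arcmin=1.0):
    rad = math.radians(acuity_arcmin / 60.0)
    return 1.0 / (viewing_distance_inches * math.tan(rad))

frame_long_px = 4992                       # e.g. a 1Ds Mark II long side
for distance in (12, 24, 54):              # viewing distance in inches
    ppi = required_ppi(distance)
    print(distance, round(ppi), round(frame_long_px / ppi, 1))  # max long side, inches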

-h
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Bart_van_der_Wolf on August 14, 2012, 09:24:22 am
Interesting. Bicubic does do a very good job even compared to the PZP. I'm wondering if the PS Bicubic algorithm has been improved since I used it to uprez that much? But you are saying that in print, the effect of uprezing is "much"  more evident with bicubic than with PZP at 2.58 magnification?

Yes, and also don't underestimate the benefit of sharpening and printing at 600 PPI. You can sharpen more (with a small radius) at 600 PPI (or 720 PPI for Epsons) because artifacts will probably be too small to see. What's more, boosting the modulation of the highest spatial frequencies also lifts the other/lower spatial frequencies, so the whole image becomes more defined.

With PZP you can balance between the amount of vector edge sharpening and USM like sharpening, whatever the image requires, and you can add noise at the finest detail level.

Cheers,
Bart
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Bart_van_der_Wolf on August 14, 2012, 10:30:26 am
I tried the Photo Zoom Pro demo and it is painfully slow compared to PS bicubic smoother, and PS bicubic smoother shines when using this technique in conjunction with it:
http://www.digitalphotopro.com/technique/software-technique/the-art-of-the-up-res.html?start=3

BiCubic Smoother produces halos while upsampling. I don't see how that can be helpful for anything but the most modest resampling factors.

Cheers,
Bart
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: AFairley on August 14, 2012, 12:02:25 pm
With PZP you can balance between the amount of vector edge sharpening and USM like sharpening, whatever the image requires, and you can add noise at the finest detail level.

Bart, I am playing with the PZP demo, but I'm not sure if I see the controls to do the balance you are talking about, can you elaborate?

Thanks
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Bart_van_der_Wolf on August 14, 2012, 02:04:58 pm
Bart, I am playing with the PZP demo, but I'm not sure if I see the controls to do the balance you are talking about, can you elaborate?

Hi,

Assuming you have selected the "S-Spline Max" resizing method, start by selecting the "Photo - Detailed" preset. That will leave Unsharp Masking unchecked, and set Sharpness to 100.00, Film Grain to 20, and Artifact Reduction to 0. Now reduce the Sharpness value. At zero, that will use a good interpolation that doesn't produce halos and tries to keep jaggies away. When you then check the "Unsharp Masking" checkbox, you can set a regular USM sharpening. That's how you can balance between full vector edge detail (which doesn't produce halos and doesn't require much sharpening) with Sharpness at 100.00, and/or USM with its control checked.

However, the basic "S-Spline Max" resizing method will still be much better than BiCubic, but of course one can always "blend-if" two layers in Photoshop if one prefers, and PZP also offers other, more traditional, resampling methods. When you composite an "S-Spline Max" layer on top of another method's resampling, you can effectively mix them (while suppressing halos) with the following type of layer Blend-if setting:

(http://bvdwolf.home.xs4all.nl/main/downloads/Non-clipped-sharpening.png)
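In numpy terms, those split "Underlying Layer" sliders do roughly the following: the sharpened layer fades out where the base layer is near black or white, which keeps the extremes from clipping. The slider values below are made up for illustration and are not taken from the screenshot.

Code:
# Rough emulation of "Blend If: Underlying Layer" split sliders. Slider values
# (20/50 and 205/235) are illustrative only.
import numpy as np

def blend_if_underlying(top, base, black=(20, 50), white=(205, 235)):
    lum = base.mean(axis=-1) if base.ndim == 3 else base      # crude luminosity
    ramp_in = np.clip((lum - black[0]) / (black[1] - black[0]), 0, 1)
    ramp_out = np.clip((white[1] - lum) / (white[1] - white[0]), 0, 1)
    opacity = ramp_in * ramp_out
    if top.ndim == 3:
        opacity = opacity[..., None]
    return opacity * top + (1 - opacity) * base

base = np.random.rand(16, 16, 3) * 255     # stand-in for a conventional resample
top = np.clip(base * 1.2 - 20, 0, 255)     # stand-in for the edge-sharpened layer
result = blend_if_underlying(top, base)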

Cheers,
Bart
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Hening Bettermann on August 14, 2012, 02:12:57 pm
Bart,

to what degree would this vectorizing be able to replace Super Resolution stacking, and thus make multiple shots unnecessary? SR doubles the pixel count, which does not sound like much in comparison.

Hope this is not hi-jacking the thread??

Good light - Hening
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: hjulenissen on August 14, 2012, 02:22:59 pm
SR doubles the pixel count, which does not sound like much in comparison.
I don't believe there is such a hard limit in SR. It is more of a setup-dependent point of diminishing returns:
-number of images
-lens PSF

-h
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Hening Bettermann on August 14, 2012, 04:00:31 pm
What you write sounds reasonable, in principle. But in the real world there is - to my knowledge - only one app which does this: PhotoAcute, and that is limited to 2 times the original pixel count.
Good light - Hening
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: hjulenissen on August 14, 2012, 04:15:50 pm
What you write sounds reasonable, in principle. But in the real world there is - to my knowledge - only one app which does this: PhotoAcute, and that is limited to 2 times the original pixel count.
Good light - Hening
Try googling "image super resolution software". I got 20,600,000 results, many of which seem to be commercial applications that claim to do multi-frame super-resolution.

http://photoacute.com/tech/superresolution_faq.html
Quote
Q: What levels of increased resolution are realistic?
A: It is highly variable, depending on the optical system, exposure conditions and what post-processing is applied. As a rule of thumb, you can expect an increase of 2x effective resolution from a real-life average system (see MTF measurements) using our methods. We've seen up to 4x increases in some cases. You can get even higher results under controlled laboratory conditions, but that's only of theoretical interest.

At some point, it does not make sense to keep doubling the number of exposures for a diminishing return. Most camera setups run into the lens MTF/camera shake limit sooner or later (a naive implementation of SR really only addresses the limitation of sensor resolution, relying on aliasing for its results):
Quote
Digital cameras usually have anti-aliasing filters in front of the sensors. Such filters prevent the appearance of aliasing artifacts, simply blurring high-frequency patterns. With the ideal anti-aliasing filter, the patterns shown above would have been imaged as a completely uniform grey field. Fortunately for us, no ideal anti-aliasing filter exists and in a real camera the aliased components are just attenuated to some degree.

I think that the problem you are raising is kind of backwards. If superresolution allows you to fuse N images into one that has twice the native resolution of your camera, this superresolution image is going to be the ideal starting point for further image scaling. Anyhow, anyone can claim that their algorithm does "400% upscaling" or "1600% upscaling". That is trivial. The question is how the result looks. That is not trivial.
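A toy sketch of that multi-frame idea: four low-resolution frames, each offset by one high-resolution pixel, interleave exactly back onto a 2x finer grid. Real SR (PhotoAcute and the like) has to handle registration, noise, a PSF model and robust fusion; this only shows why shifted, aliased frames carry extra information about the scene.

Code:
# Toy multi-frame super-resolution: shift-and-add of four offset low-res frames
# onto the 2x finer grid. Idealised and noise-free; real SR is much harder.
import numpy as np

scene = np.random.default_rng(0).random((8, 8))    # the "true" high-res scene

frames = {(dy, dx): scene[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

recon = np.zeros_like(scene)
for (dy, dx), frame in frames.items():
    recon[dy::2, dx::2] = frame                    # place samples at their offsets

print(np.allclose(recon, scene))                   # True in this idealised case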

-h
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: dwdallam on August 14, 2012, 05:04:31 pm
Do you have any references, or is this just a wish-list?

-h

It's an extrapolation based on current technology that can reproduce graphics perfectly, but those are simple graphics. I also based my comment on the fact that entire automobiles can be constructed in vector programs and scaled to any size without loss.

Yes, there are gaps in my position. I admit that. But if you're saying that mathematical models working in concert with vector algorithms will never be able to reproduce bitmap images perfectly (meaning there is no difference to the human eye at any resolution or viewing distance), then the burden of proof is on you.
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: dwdallam on August 14, 2012, 05:07:41 pm
Are you saying to re-sample at 600ppi and then do the sharpening etc as opposed to re-sampling at 300ppi?

Yes, and also don't underestimate the benefit of sharpening and printing at 600 PPI. You can sharpen more (with a small radius) at 600 PPI (or 720 PPI for Epsons) because artifacts will probably be too small to see. What's more, boosting the modulation of the highest spatial frequencies also lifts the other/lower spatial frequencies, so the whole image becomes more defined.

With PZP you can balance between the amount of vector edge sharpening and USM like sharpening, whatever the image requires, and you can add noise at the finest detail level.

Cheers,
Bart
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: dwdallam on August 14, 2012, 05:23:09 pm
I just read this and it is interesting. I'm going to try it and see what happens. What do you all think?

http://www.digitalphotopro.com/technique/software-technique/the-art-of-the-up-res.html
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Bart_van_der_Wolf on August 14, 2012, 06:25:50 pm
Are you saying to re-sample at 600ppi and then do the sharpening etc as opposed to re-sampling at 300ppi?

Absolutely, yes. Of course, this assumes that there is something to sharpen at 600/720 PPI. So when the native resolution for the output size drops below 300/360 PPI, there would be little detail to sharpen at that highest level unless one uses PZP or similar resolution adding(!) applications, but the effect will still carry over to lower spatial frequencies (after all, a 2 pixel radius at 600 PPI is still a 1 pixel radius at 300 PPI, but with more pixels to make smoother edge contrast enhancements).

Another thing is that at 600/720 PPI one has another possibility to enhance overall resampling quality, and that is with deconvolution sharpening, targeted at the losses inherent in upsampling. But that's another subject, a bit beyond the scope of this thread, which deals with the specific situation of vector types of sharpening (although with blending one can get the best of both worlds).
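For the curious, a minimal Richardson-Lucy iteration (one classic deconvolution scheme; not necessarily what any particular product uses) looks like the sketch below, assuming the blur PSF is known, e.g. from prior calibration.

Code:
# Minimal Richardson-Lucy deconvolution sketch with a known (Gaussian) PSF.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(observed, psf, iterations=30, eps=1e-7):
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

sharp = np.zeros((64, 64)); sharp[16:48, 16:48] = 1.0   # synthetic test target
psf = gaussian_psf()
blurry = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurry, psf)                 # partially recovers the edges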

Cheers,
Bart
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: hjulenissen on August 15, 2012, 02:16:27 am
It's an extrapolation based on current technology that can reproduce graphics perfectly, but those are simple graphics. I also based my comment on the fact that entire automobiles can be constructed in vector programs and scaled to any size without loss.
Then you are missing the point (in my humble opinion). Scaling a vector model should be relatively easy. Building a vector model from simple graphics (like text) is also doable. Building a vector model from noisy, complex, unsharp real-world images is very hard (I have tried).

There is also the problem that even though a nice vector model of, say, a car can be reproduced at any scale to produce smooth, sharp edges, blowing it up won't produce _new details_. The amount of information is still limited to the thousands or millions of vectors that represent the model. At some scale, it might be possible to "guess" the periphery of a leaf in order to smoothly represent it on finer pixel grids. But leaves contain new, complex structures the closer you examine them. Unless that information is encoded into the pixels of the camera, good luck estimating it. You might end up with a "cartoonish" or "bilateral-filtered" image where large-scale edges are perfectly smooth, while small-scale detail is very visibly lacking.

http://en.wikipedia.org/wiki/Image_scaling
(http://upload.wikimedia.org/wikipedia/commons/c/cb/Test_nn.png)
(Image enlarged 3× with the nearest-neighbor interpolation)
(http://upload.wikimedia.org/wikipedia/commons/f/f5/Test_hq3x.png)
(Image enlarged in size by 3× with hq3x algorithm)
The results obtained using specialized pixel-art algorithms are striking, but in my opinion the reason they work so well is that the source image really is a "clean" set of easily vectorized objects, rendered with a limited color map. This is a narrow subset of the pixels that a general image can contain, and these algorithms do not work well on natural images (I have tried).
Quote
Yes, there are gaps in my position. I admit that. But if you're saying that mathematical models working in concert with vector algorithms will never be able to reproduce bitmap images perfectly (meaning there is no difference to the human eye at any resolution or viewing distance),
The Shannon-Nyquist sampling theorem actually supports the idea that a properly anti-aliased image can be perfectly reproduced at any sampling rate (= pixel density). The thing is that "properly anti-aliased" actually means a band-limited waveform, i.e. fine detail must be removed. If you can live with that, everything else reduces to simple linear filters that fit nicely into existing cpu hardware.
http://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

I interpret your position to be that, say, a VGA color image (640x480 pixels) at 24 bits per pixel can be upscaled to any size/resolution and be visually indistinguishable from an image at that native resolution. I am very sceptical of such a view. Do you really think that future upscaling will make a $200 Ixus look as good as a D800? I shall give you an exotic example; hopefully you will see the general point that I am making. Say that you are shooting an image of a television screen showing static noise, using a 1 megapixel camera. You obtain 1 megapixel of "information" about that static. Now shoot the same television using a 0.3 megapixel camera. The information is limited to 0.3 megapixel. As there is (ideally) no correspondence between pixels at different scales, the low-res image simply does not contain the information needed to recreate the large one, and no algorithm in the world can guess the accurate outcome of a true white-noise process.

http://en.wikipedia.org/wiki/Information_theory

Say that you have high-rez image A and high-rez image B. When downsampled, they produce an identical image, C (may be unlikely, but clearly possible). If you only have image C, should an ideal upscaler produce A or B?

I think I have introduced enough philosophical and algorithmic issues to weaken your claim that it is only a matter of cpu cycles.
Quote
then the burden of proof is on you.
You put out certain claims. I am sceptical of those claims. The burden of proof obviously is on you. I shall try to support my own claims.

http://en.wikipedia.org/wiki/Philosophical_burden_of_proof
Quote
"When debating any issue, there is an implicit burden of proof on the person asserting a claim. "If this responsibility or burden of proof is shifted to a critic, the fallacy of appealing to ignorance is committed"."

-h
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Hening Bettermann on August 16, 2012, 06:38:42 am
quote BartvanderWolf august 14, 2012 05:25:50 PM:

Another thing is that at 600/720 PPI one has another possibility to enhance overall resampling quality, and that is with deconvolution sharpening,

Bart,
now we have 3 ways of upsizing/improving sharpness under discussion:
Super Resolution, Deconvolution, and vector uprezzing. How would you suggest combining them?
(btw is deconvolution limited by a certain minimum resolution like 600 dpi?)

Good light! Hening
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: hjulenissen on August 16, 2012, 07:20:38 am
This article seems to provide an overview of SR vs deconvolution vs "upscaling"
http://www-sipl.technion.ac.il/new/Teaching/Projects/Spring2006/01203207.pdf

The natural order I believe would be super-resolution->deconvolution->upsampling, since SR and deconvolution are strongly connected to physical characteristics of the camera (and SR is strongly connected to the PSF/deconvolution).

Most of the literature is written with blind deconvolution in mind. The devotion of users on this forum seems to indicate that people are willing to estimate the PSF independently. That should greatly simplify the problem.

Quote
(btw is deconvolution limited by a certain minimum resolution like 600 dpi?)
Think of deconvolution as "sharpening done better", at the cost of more parameters to input/estimate and more cpu cycles. Like sharpening, you can do deconvolution on any resolution image.

-h
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: nemophoto on August 16, 2012, 10:26:39 am
This is a topic that comes up periodically. It essentially boils down to individual taste and experience. I for one have used Genuine Fractals (now Perfect Resize) and Alien Skin Blowup for years. As a matter of fact, my experience with GF goes back about 15+ years, when you had to save the image in a dedicated file format. Now I mostly use Blowup, partially because of my frustration with rendering speed (on screen) in Perfect Resize since they adopted OpenGL and the GPU for rendering. For one of my clients, I regularly use the software to create images for in-store posters, and am generally enlarging 150-200%. Most recently, though, for some shows, I created 40x60 blowups for printing on canvas with my iPF8300. These images, at 300 dpi, often hit the scales at 900MB, and sometimes 1.4GB, and required enlarging about 340%. (One image needed even more because I had accidentally shot it as MEDIUM JPEG on my 1Ds Mark II, so it was a 9MP file.) If you pixel-peep at 100%, you will see what appear to be nasty and weird artifacts. However, the truer view is 50%, and really, at that size, 25%. Then you'll see what most viewers see in printed form from the proper distance of about 3' - 5'. That said, many people still got within inches.
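As a quick sanity check of those file sizes, a 40x60 inch print at 300 PPI works out as follows for uncompressed RGB; layers or 16-bit data push it towards the larger figures quoted above.

Code:
# A 40x60 inch canvas at 300 PPI, uncompressed RGB.
width_px, height_px = 40 * 300, 60 * 300                 # 12000 x 18000 px
megapixels = width_px * height_px / 1e6                  # 216 MP
mib_8bit = width_px * height_px * 3 / 2**20              # ~618 MiB at 8 bits/channel
mib_16bit = mib_8bit * 2                                 # ~1236 MiB at 16 bits/channel
print(megapixels, round(mib_8bit), round(mib_16bit))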

I think these programs offer a superior result over bicubic and the like, but the results are most noticeable when the enlargement approaches 200%. For something in the 110-135% range, it doesn't make sense to use a plugin that takes longer to render for marginally better results. One of the main things you gain is edge sharpness. Our eyes perceive contrast (and therefore sharpness) before we even start to perceive things like color, and this is the strong suit of programs such as Perfect Resize and Blowup 3.
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Bart_van_der_Wolf on August 16, 2012, 11:34:26 am
quote BartvanderWolf august 14, 2012 05:25:50 PM:

Another thing is that at 600/720 PPI one has another possibility to enhance overall resampling quality, and that is with deconvolution sharpening,

Bart,
now we have 3 ways of upsizing/improving sharpness under discussion:
Super Resolution, Deconvolution, and vector uprezzing. How would you suggest combining them?

Hi Hening,

1. Single image capture produces inherently blurry images. Even if not from subject motion or camera shake, we have to deal with lens aberrations, diffraction, anti-aliasing filters, area sensors, Bayer CFA demosaicing, IOW blurry images. A prime candidate to address that capture blur is deconvolution, because the blur can usually be characterized quite predictably with prior calibration, and the Point Spread Function (PSF) that describes that combined mix of blur sources can be used to reverse part of the blur in the capture process. If done well, it will not introduce artifacts that can later become a problem when blown up in size and thus visibility.

2. Second would be techniques like Super Resolution, but usually that requires multiple images with sub-pixel offsets, so it may not be too practical for some shooting scenarios. Single-image SR depends heavily on suitable image elements in the same image that can be reused to invent credible new detail at a smaller scale, so not all images are suitable. I would also put fractal-based upscaling in that category: sometimes it works, sometimes it doesn't, and even within the same image some areas fare better than others.

3. Third is good quality upsampling combined with vector-based resampling. The vector-based approach favors edge detail (which is very important in human vision), so it is best combined with high quality traditional resampling for upsampling. The traditional resampling should balance between ringing, blocking, and blurring artifacts. Downsampling is best done with good quality pre-filtered techniques (to avoid aliasing artifacts).
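A small scipy sketch of the pre-filtered downsampling mentioned in point 3: low-pass first, then decimate, so fine detail cannot alias into false coarse patterns. A Gaussian stands in here for a proper resampling filter, and the test grating is purely illustrative.

Code:
# Pre-filtered downsampling: blur, then decimate, to avoid aliasing artifacts.
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(img, factor=2, prefilter=True):
    if prefilter:
        # Sigma roughly tied to the decimation factor (a common heuristic).
        img = gaussian_filter(img, sigma=factor * 0.5)
    return img[::factor, ::factor]

y, x = np.mgrid[0:64, 0:64]
grating = 0.5 + 0.5 * np.sin(2 * np.pi * 0.4 * (x + y))   # detail beyond the new Nyquist
aliased = downsample(grating, prefilter=False)            # false coarse pattern
clean = downsample(grating, prefilter=True)               # smooth, nearly uniform grey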

Quote
(btw is deconvolution limited by a certain minimum resolution like 600 dpi?)

No, there is no real limit other than that it is a processing intensive procedure, and therefore it takes longer when the image is larger. So ultimately memory constraints and processing time are the practical limitations, but fundamentally it is not limited by size.

Quote
Good light!

Same to you, Cheers,
Bart
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: Hening Bettermann on August 16, 2012, 03:33:17 pm
Thanks to all three of you for your answers!

The sequence SR --> deconvolution would be like my current workflow: SR in PhotoAcute, output as DNG, then conversion to TIF in Raw Developer with deconvolution, then editing the TIF - so vector uprezzing could be added here if required.
The sequence deconvolution --> SR would - ideally - require deconvolution independent of raw conversion, since raw input is best for SR - at least in PhotoAcute.

Anyway, it looks like multiple frames for SR are still required.

Good light! - Hening.
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: hjulenissen on August 16, 2012, 04:06:10 pm
Anyway, it looks like multiple frames for SR are still required.
Yes, I like to think of multiple frames as required by SR by definition. Clever upscaling using a single image is just.... clever upscaling. SR exploits the slight variations in aliasing between several slightly shifted images, which reveal different details about the true scene.

-h
Title: Re: The State of Vector Conversion Image Enlargement?
Post by: bill t. on August 16, 2012, 06:49:41 pm
By coincidence a charity I help asked me to print some really grungy, over-processed stock shots.  My Alien Skin Blow Up 3 trial was still valid, so I used it.  I was amazed how nicely the stair-stepping and over-sharpened grizzle dissolved into something not exactly credible, but hugely more acceptable than the original.  Also, Blow Up 3 has by far the nicest integration into LR and PS of any similar program I have used.  It is also the fastest, and seems to handle large files with ease.

So a grudging thumbs up.  If you do service printing, you need it.  It may not always produce the technically best results, but I think that for commercial customers the exceptional smoothness of the result would trump most every other issue.  There's no specific JPEG artifact reduction, though; it would be nice if it had that too.