I'm wondering if anyone has any experience using the latest algorithms for enlargement? For instance, Alien Skin Blow Up 3 uses a method where it takes chunks of the image, converts them to vector plots, and then uses the vector image to increase the size of the original. In theory, that should be a perfect enlargement with no degradation.
Bottom line for all those programs is that they exchange a grungy, pixelated look for one that is rather graphic in character. There is simply no magic that can correct low resolution without creating some sort of artifacts. Going from grunge to vectors simply gives you grunge with cleaner looking edges, and a look that I think of as digital scar tissue.
For images where only modest upscaling is required, I really feel that Bicubic Smooth up to around twice printer resolution followed by Smart Sharpen is often the best option. Some tweaking of the "Advanced" Smart Sharpen controls is helpful. It's quite a trick balancing out all the possible compromises, and the best solution is heavily dependent on the character of the original image.
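For what it's worth, that two-step recipe (bicubic upscale, then sharpen) is easy to prototype outside Photoshop. Below is a minimal sketch using Pillow, with its Unsharp Mask filter standing in for Smart Sharpen; the radius/percent/threshold values are just illustrative assumptions, not a recommendation:

```python
from PIL import Image, ImageFilter

def upscale_and_sharpen(img, scale=2.0, radius=1.0, percent=120, threshold=3):
    """Bicubic upscale followed by an unsharp mask, a rough stand-in for
    the Bicubic Smoother + Smart Sharpen workflow described above."""
    new_size = (round(img.width * scale), round(img.height * scale))
    up = img.resize(new_size, Image.BICUBIC)
    return up.filter(
        ImageFilter.UnsharpMask(radius=radius, percent=percent, threshold=threshold)
    )

# Example: a 100x80 test image becomes 200x160.
src = Image.new("RGB", (100, 80), (128, 128, 128))
out = upscale_and_sharpen(src, scale=2.0)
print(out.size)  # (200, 160)
```

As the posts above stress, the best settings depend heavily on the character of the original image, so treat the numbers as starting points.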
In every case in my experience, upscaled images and "straight" high resolution images seen next to each other have very different looks. If you were setting up an exhibition or publication, you would want to keep those categories isolated from each other.
IMHO.
Thanks for the information. Would you be willing to post an example of PZP? If you have an old 20D image or something similar and could increase its resolution to 22MP, that would be great. Then just crop it and upload a partial next to the original.
"...it takes chunks of the image and converts them to vector plots, and then using the vector image to increase the size of the original. In theory, that should be a perfect enlargement with no degradation."

Only if the real-world scene conforms to a simple model in which the observed "edges" in a low-resolution image correspond perfectly to the edges in a high-resolution image of the same scene.
-h
Hi,
No problem, although going from 8MP to 22MP is only a 1.644x linear magnification (20D = 3504 x 2336 px , 5D3 = 5760 x 3840 px). Even Photoshop's BiCubic Smoother algorithm can do that (although the PhotoZoom results already look a bit better).
When you go larger, or cropped significantly, or want to utilize the 720 or 600 PPI print resolution instead of the default, then the quality difference becomes more apparent.
Cheers,
Bart
P.S. The image is from a 1Ds3 instead of a 20D, but both have 6.4 micron pitch sensors.
Bart,
I didn't even think about that. What about going from an 8MP image to a 300 PPI image at 20x30 inches?
That would be 6000 x 9000 pixels; coming from 2336 x 3504 px, that equals a 2.568x linear magnification (still a modest challenge for PhotoZoom Pro). See the attached results (just using a preset, no tweaking involved), before any output sharpening (because I don't know your specific media and printer requirements).
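The arithmetic generalizes to any print size. A few lines of Python (with hypothetical helper names) show where the 2.568x figure comes from:

```python
def print_pixels(width_in, height_in, ppi):
    """Pixel dimensions needed for a print of the given size at a given PPI."""
    return round(width_in * ppi), round(height_in * ppi)

# A 20x30 inch print at 300 PPI:
w, h = print_pixels(20, 30, 300)
print(w, h)              # 6000 9000

# Linear magnification from a 20D frame (2336 px on the short side):
mag = w / 2336
print(round(mag, 3))     # 2.568
```

The same helper makes the 600 PPI case concrete: doubling the PPI doubles the required pixels per axis, hence twice the linear magnification.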
On screen it may not look too pretty, but in print the PZP result shines (and the magnification quality can be adjusted at will when converting). Imagine what happens when printing a 600 PPI magnification ...
Cheers,
Bart
"My point is that processing power will eventually be able to make an exact copy of your image and reproduce it exactly as it is, at any size, perfectly, or 'improve' it if you want. The technology is already here. It just takes a hell of a lot of power to do it."

Do you have any references, or is this just a wish-list?
"While we're on the topic of print resolution, I've heard lately that 150 DPI (PPI?) is enough for most printers to produce a very good print without artifacts on photo paper. So what's the standard these days?"

There are at least three issues:
I know my 1DS MKII will output a 300ppi image at 12x18, no crop. So you can see going any larger or cropping will need resampling to achieve higher ppi.
Interesting. Bicubic does a very good job, even compared to PZP. I'm wondering if the PS Bicubic algorithm has been improved since I last used it to uprez that much. But are you saying that in print, the effect of uprezzing is much more evident with Bicubic than with PZP at 2.568x magnification?
I tried the PhotoZoom Pro demo and it is painfully slow compared to PS Bicubic Smoother, and Bicubic Smoother shines when used in conjunction with this technique:
http://www.digitalphotopro.com/technique/software-technique/the-art-of-the-up-res.html?start=3
With PZP you can balance between the amount of vector edge sharpening and USM like sharpening, whatever the image requires, and you can add noise at the finest detail level.
Bart, I am playing with the PZP demo, but I'm not sure if I see the controls to do the balance you are talking about, can you elaborate?
"SR doubles the pixel count, which does not sound like much in comparison."

I don't believe there is such a hard limit in SR. It is more a setup-dependent point of diminishing returns.
"What you write sounds reasonable, in principle. But in the real world there is, to my knowledge, only one app which does this: PhotoAcute, and that is limited to 2 times the original pixel count."

Try googling "image super resolution software". I got 20,600,000 results, many of which seem to be commercial applications that claim to do multi-frame super-resolution.
Good light - Hening
"Q: What levels of increased resolution are realistic?

A: It is highly variable, depending on the optical system, exposure conditions, and what post-processing is applied. As a rule of thumb, you can expect an increase of 2x in effective resolution from a real-life average system (see MTF measurements) using our methods. We've seen up to 4x increases in some cases. You can get even higher results under controlled laboratory conditions, but that's only of theoretical interest."
Digital cameras usually have anti-aliasing filters in front of the sensors. Such filters prevent the appearance of aliasing artifacts, simply blurring high-frequency patterns. With the ideal anti-aliasing filter, the patterns shown above would have been imaged as a completely uniform grey field. Fortunately for us, no ideal anti-aliasing filter exists and in a real camera the aliased components are just attenuated to some degree.
-h
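The attenuation point above matters because detail above Nyquist does not simply disappear without an AA filter; it comes back as a lower, false frequency. A small 1-D NumPy illustration (the 2-D image case behaves the same way per axis): sampled at 100 Hz, a 70 Hz tone is indistinguishable from a phase-inverted 30 Hz tone.

```python
import numpy as np

fs = 100.0             # sampling rate (Hz); Nyquist limit is 50 Hz
n = np.arange(32)      # sample indices
t = n / fs

high = np.sin(2 * np.pi * 70.0 * t)    # a 70 Hz tone, above Nyquist
alias = -np.sin(2 * np.pi * 30.0 * t)  # its alias: 30 Hz, phase-inverted

# The two sets of samples are identical, so no amount of post-processing
# can tell them apart after sampling:
print(np.allclose(high, alias))  # True
```

This is exactly the pattern-as-uniform-grey problem the quoted text describes: once aliased, the false detail is baked into the samples.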
Yes, and also don't underestimate the benefit of sharpening and printing at 600 PPI. You can sharpen more (with a small radius) at 600 PPI (or 720 PPI for Epsons) because artifacts will probably be too small to see. What's more, boosting the modulation of the highest spatial frequencies also lifts the other, lower spatial frequencies, so the total image becomes more defined.
Cheers,
Bart
Are you saying to re-sample at 600ppi and then do the sharpening etc as opposed to re-sampling at 300ppi?
"It's an extrapolation based on current technology that can reproduce graphics perfectly, but they are simple graphics. I also based my comment on the fact that entire automobiles can be constructed in vector programs and scaled accordingly to any size without loss of that image."

Then you are missing the point (in my humble opinion). Scaling a vector model should be relatively easy. Building a vector model from simple graphics (like text) is also doable. Building a vector model from noisy, complex, unsharp real-world images is very hard (I have tried).
"Yes, there are gaps in my position. I admit that. But if you're saying that mathematical models working in concert with vector algorithms will never be able to reproduce bitmap images perfectly (meaning there is no difference to the human eye at any resolution or viewing distance), ..."

The Shannon-Nyquist sampling theorem actually supports the idea that a properly anti-aliased image can be reproduced at any sampling rate (= pixel density). The thing is that "properly anti-aliased" actually means a band-limited waveform, i.e. fine detail must be removed. If you can live with that, everything else reduces to simple linear filters that fit nicely into existing CPU hardware.

"... then the burden of proof is on you."

You put out certain claims. I am sceptical about those claims. The burden of proof obviously is on you. I shall try to support my own claims.
"When debating any issue, there is an implicit burden of proof on the person asserting a claim. "If this responsibility or burden of proof is shifted to a critic, the fallacy of appealing to ignorance is committed"."
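As an aside, the sampling-theorem claim above is easy to verify numerically: if a signal contains only frequencies below Nyquist, ideal (sinc) interpolation reconstructs a denser sampling exactly. A 1-D NumPy sketch, using FFT zero-padding as the ideal interpolator (a toy case with a single tone, not a real image):

```python
import numpy as np

N, M, k = 32, 64, 3  # original samples, target samples, tone's frequency bin
x = np.sin(2 * np.pi * k * np.arange(N) / N)  # band-limited: well below Nyquist

# Ideal (sinc) interpolation via FFT zero-padding in the middle of the spectrum:
X = np.fft.fft(x)
Y = np.zeros(M, dtype=complex)
Y[:N // 2] = X[:N // 2]        # positive frequencies
Y[-(N // 2):] = X[-(N // 2):]  # negative frequencies
y = np.fft.ifft(Y).real * (M / N)

# Compare against sampling the underlying continuous sine directly at 2x density:
direct = np.sin(2 * np.pi * k * np.arange(M) / M)
print(np.allclose(y, direct))  # True
```

The catch, as stated above, is the premise: real photographs are not perfectly band-limited, so this exactness never quite holds in practice.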
"(btw is deconvolution limited by a certain minimum resolution like 600 dpi?)"

Think of deconvolution as "sharpening done better", at the cost of more parameters to input/estimate and more CPU cycles. Like sharpening, you can do deconvolution on an image of any resolution.
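To make "sharpening done better" concrete, here is a toy Richardson-Lucy deconvolution in NumPy. It assumes a known, noise-free Gaussian blur and circular boundaries (via FFT) to keep it short; real images give you neither, which is where the extra parameters to estimate come in:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian PSF centred at (0, 0) with wrap-around, so its FFT has zero phase."""
    y = np.fft.fftfreq(shape[0]) * shape[0]
    x = np.fft.fftfreq(shape[1]) * shape[1]
    g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def richardson_lucy(blurred, psf, iterations=50):
    """Classic multiplicative Richardson-Lucy updates (circular boundaries)."""
    Hf = np.fft.fft2(psf)
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        reblurred = np.fft.ifft2(np.fft.fft2(estimate) * Hf).real
        ratio = blurred / np.maximum(reblurred, 1e-12)
        # Correlation with the PSF = convolution with the mirrored PSF.
        estimate = estimate * np.fft.ifft2(np.fft.fft2(ratio) * np.conj(Hf)).real
    return estimate

# Demo: blur a synthetic test chart, then try to undo it.
sharp = np.full((32, 32), 0.1)
sharp[12:20, 12:20] = 1.0                 # a bright square on a dark field
psf = gaussian_psf(sharp.shape, sigma=1.5)
blurred = np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)).real
restored = richardson_lucy(blurred, psf)

# The restored image should be measurably closer to the original than the blur:
print(np.abs(restored - sharp).mean() < np.abs(blurred - sharp).mean())  # True
```

Note the cost: an iterative loop over full-image FFTs, which is the "more CPU cycles" trade-off mentioned above.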
Quote from BartvanderWolf, August 14, 2012, 05:25:50 PM:
"Another thing is that at 600/720 PPI one has another possibility to enhance overall resampling quality, and that is with deconvolution sharpening, ..."
Bart,
now we have 3 ways of upsizing/improving sharpness under discussion:
Super Resolution, Deconvolution, and vector uprezzing. How would you suggest combining them?
(btw is deconvolution limited by a certain minimum resolution like 600 dpi?)
Good light!
"Anyway, it looks like multiple frames for SR are still required."

Yes, I like to think of multiple frames as required by SR by definition. Clever upscaling using a single image is just... clever upscaling. SR exploits the slight variations in aliasing across several slightly shifted images, which reveal different details about the true scene.
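A toy 1-D version of that idea: two frames sampled half a "pixel" apart. Either frame alone aliases a 0.7 cycles/sample detail down to 0.3, but interleaving the shifted frames doubles the sampling rate and resolves it. (Real SR must also estimate the sub-pixel shifts and invert the blur; this sketch assumes the shift is known and skips the blur.)

```python
import numpy as np

def scene(t):
    """Continuous 'true scene': detail above the single-frame Nyquist limit."""
    return np.sin(2 * np.pi * 0.7 * t)  # 0.7 cycles per sample spacing

n = np.arange(16)
frame_a = scene(n)        # frame 1: samples at integer positions
frame_b = scene(n + 0.5)  # frame 2: shifted by half a sample

# A single frame cannot tell 0.7 cycles/sample from its alias at 0.3:
alias = -np.sin(2 * np.pi * 0.3 * n)
print(np.allclose(frame_a, alias))  # True

# Interleaving the two shifted frames doubles the sampling rate,
# matching a genuine 2x-density sampling of the scene:
fused = np.empty(32)
fused[0::2] = frame_a
fused[1::2] = frame_b
dense = scene(np.arange(32) * 0.5)
print(np.allclose(fused, dense))  # True
```

This is exactly the "slight variations in aliasing" at work: the shifted frame contains sample values the first frame's alias could not produce.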