It's an extrapolation from current technology, which can already reproduce simple graphics perfectly. I also based my comment on the fact that entire automobiles can be constructed in vector programs and scaled to any size without loss of image quality.
Then you are missing the point (in my humble opinion). Scaling a vector model should be relatively easy. Building a vector model from simple graphics (like text) is also doable. Building a vector model from noisy, complex, unsharp real-world images is very hard (I have tried).
There is also the problem that even though a nice vector model of, say, a car can be reproduced at any scale to produce smooth, sharp edges, blowing it up won't produce _new details_. The amount of information is still limited to the thousands or millions of vectors that represent the model. At some scale, it might be possible to "guess" the periphery of a leaf in order to smoothly represent it at finer pixel grids. But a leaf contains new, complex structure the closer you examine it. Unless that information is encoded into the pixels of the camera, good luck estimating it. You might end up with a "cartoonish" or "bilateral-filtered" image where large-scale edges are perfectly smooth, while small-scale detail is very visibly lacking.
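The "no new details" point can be sketched in a few lines. This is my own illustrative toy, not any real vector format: a "model" is just a list of 2-D vertices. Scaling is trivial and lossless, but the vertex count, and hence the amount of detail, is fixed by the original model.

```python
# Scaling a vector model is easy and lossless, but it creates no new
# vertices: the information content is fixed when the model is built.
# (Toy sketch; a real format would also carry curves, fills, etc.)
model = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]  # a simple outline

def scale(model, s):
    # uniform scaling: multiply every coordinate by s
    return [(s * x, s * y) for (x, y) in model]

big = scale(model, 1000.0)     # blow it up 1000x
print(len(big) == len(model))  # True: same vertex count, no new detail
```

The edges stay perfectly sharp at any magnification, which is exactly why the small-scale structure that was never captured is so conspicuously absent.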
http://en.wikipedia.org/wiki/Image_scaling
(Image enlarged 3× with nearest-neighbor interpolation)
(Image enlarged 3× with the hq3x algorithm)
The results obtained using specialized pixel art algorithms are striking, but in my opinion the reason why they work so well is that the source image really is a "clean" set of easily vectorized objects, rendered with a limited color map. This is a narrow subset of the images a general bitmap can contain, and these algorithms do not work well on natural images (I have tried).
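For reference, the nearest-neighbor baseline that the pixel-art algorithms improve on is trivial. A minimal NumPy sketch (my own toy, assuming a grayscale image as a 2-D array): each source pixel simply becomes a 3×3 block.

```python
import numpy as np

# Nearest-neighbor 3x enlargement: replicate each pixel into a 3x3 block.
# No new information is created; blocks are exact copies of source pixels.
def nearest_neighbor_3x(img):
    return np.repeat(np.repeat(img, 3, axis=0), 3, axis=1)

img = np.array([[0, 1],
                [2, 3]])
big = nearest_neighbor_3x(img)
print(big.shape)  # (6, 6)
```

Algorithms like hq3x instead pattern-match each pixel's neighborhood against a table of edge configurations, which is precisely why they need the clean, limited-palette input described above.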
Yes, there are gaps in my position. I admit that. But if you're saying that mathematical models working in concert with vector algorithms will never be able to reproduce bitmap images perfectly (meaning there is no difference to the human eye at any resolution or viewing distance),
The Nyquist-Shannon sampling theorem actually guarantees that a properly anti-aliased image can be reproduced exactly at any sampling rate (= pixel density). The catch is that "properly anti-aliased" means a band-limited waveform, i.e. fine detail must be removed. If you can live with that, everything else reduces to simple linear filters that fit nicely into existing CPU hardware.
http://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
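To make the sampling-theorem point concrete, here is a small NumPy sketch (my own toy, with an assumed band-limited test signal) of Whittaker-Shannon reconstruction: off-grid values of a band-limited signal are recovered from its samples by a sum of shifted sinc kernels.

```python
import numpy as np

# Whittaker-Shannon reconstruction: a signal band-limited below the
# Nyquist frequency is recovered from its samples (up to truncating
# the infinite sinc sum to a finite window of samples).
fs = 10.0                 # sampling rate; Nyquist frequency is fs/2 = 5
n = np.arange(-100, 101)  # finite window of sample indices
t_s = n / fs              # sampling instants

def signal(t):
    # band-limited to 1 cycle/unit, well below Nyquist (5 cycles/unit);
    # np.sinc(x) = sin(pi*x)/(pi*x)
    return np.sinc(2.0 * t)

samples = signal(t_s)

def reconstruct(t):
    # sum of sinc kernels centered at the sample instants
    return float(np.sum(samples * np.sinc(fs * t - n)))

t_test = np.linspace(-1.0, 1.0, 21)   # includes off-grid points
err = max(abs(reconstruct(t) - signal(t)) for t in t_test)
print(err)  # small; limited only by the truncated sinc sum
```

Feed it a signal with content above fs/2 and the reconstruction aliases instead, which is exactly the "fine detail must be removed" caveat above.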
I interpret your position to be that, say, a VGA color image (640x480 pixels, 24 bits per pixel) can be upscaled to any size/resolution and be visually indistinguishable from an image captured at that native resolution. I am very skeptical of such a view. Do you really think that future upscaling will make a $200 Ixus look as good as a D800? I shall give you an exotic example; hopefully you will see the general point that I am making. Say that you shoot an image of a television screen showing static noise, using a 1-megapixel camera. You obtain 1 megapixel of "information" about that static. Now shoot the same television using a 0.3-megapixel camera. The information is limited to 0.3 megapixel. As there is (ideally) no correspondence between pixels at different scales, the low-res image simply does not contain the information needed to recreate the large one, and no algorithm in the world can guess the accurate outcome of a true white-noise process.
http://en.wikipedia.org/wiki/Information_theory
Say that you have high-res image A and high-res image B. When downsampled, they produce an identical image, C (unlikely, perhaps, but clearly possible). If you only have image C, should an ideal upscaler produce A or B?
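The ambiguity is easy to demonstrate directly. A toy NumPy sketch, using 2×2 average pooling as an assumed stand-in for the downsampler: two different high-res images collapse to the same low-res image, so no upscaler, however ideal, can tell them apart from C alone.

```python
import numpy as np

def downsample2x(img):
    # 2x2 average pooling: a simple box-filter downsampler
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Two different "high-res" images whose 2x2 blocks have equal means
A = np.array([[1.0, 3.0],
              [5.0, 7.0]])
B = np.array([[3.0, 1.0],
              [7.0, 5.0]])

C_from_A = downsample2x(A)
C_from_B = downsample2x(B)

print(np.array_equal(C_from_A, C_from_B))  # True: identical low-res C
print(np.array_equal(A, B))                # False: the originals differ
```

Since downsampling is many-to-one, inverting it exactly is ill-posed; any upscaler can only pick one preimage by assumption, not by information.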
I think that I have introduced enough philosophical and algorithmic issues to weaken your claim that it is only a matter of CPU cycles.
then the burden of proof is on you.
You put out certain claims. I am skeptical of those claims. The burden of proof obviously is on you. I shall still try to support my own claims.
http://en.wikipedia.org/wiki/Philosophical_burden_of_proof
"When debating any issue, there is an implicit burden of proof on the person asserting a claim. [...] If this responsibility or burden of proof is shifted to a critic, the fallacy of appealing to ignorance is committed."
-h