I don't think that nearest neighbor is an accurate emulation of a lower-resolution camera (if that was your intention)? More like an OLPF-less, perfect-lens, non-Bayer, poor micro-lens camera? Bicubic is probably not an accurate emulation of low-res cameras either, but perhaps better (?), and certainly more relevant to how people scale images on their computers.
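As a rough illustration of why nearest neighbor behaves so differently from a filtered resize, here is a pure-Python 1-D sketch (a hypothetical stand-in for image rows, not anyone's actual test): a tone above the output Nyquist frequency passes through nearest-neighbor decimation at full amplitude as an alias, while even a simple box average strongly attenuates it.

```python
import math

def downsample_nn(sig, n):
    # nearest-neighbour style decimation: keep every n-th sample, no filtering
    return sig[::n]

def downsample_box(sig, n):
    # box (area-average) filter over each block of n samples before decimation
    return [sum(sig[i:i + n]) / n for i in range(0, len(sig) - n + 1, n)]

f = 0.23  # cycles per input sample; above the output Nyquist of 0.125 for n=4
sig = [math.sin(2 * math.pi * f * i) for i in range(400)]

nn = downsample_nn(sig, 4)
box = downsample_box(sig, 4)

# NN passes the aliased tone at essentially full amplitude (~1.0),
# while the box filter knocks it down to roughly a tenth of that.
nn_peak = max(abs(v) for v in nn)
box_peak = max(abs(v) for v in box)
```

The box filter is still a crude low-pass (its stop-band attenuation is poor), which is why the aliased tone is attenuated rather than removed; better kernels suppress it further.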
Adding another dimension to your test could be tweaking sharpening until the 24MP and the bicubic up/down-sampled one are perceived as having similar crispness/bite to the NN. Does that also bring unwanted artifacts to the same level?
Hi,
I wanted to emulate a non-OLPF, low-res camera, on the idea that it would cause fake detail that may give the impression of better perceived sharpness.
You have saddled me up on one of my favourite hobby-horses, Erik - i.e. that the main reason for opting for a good hi-res sensor is not to be able to produce massive prints (Which should, in any case, be viewed from a reasonable distance) but, rather, to obtain much more data for processing.
If I print an A3+ print from the full frame of my D800 or D800E (36Mp) and compare it with a similar print from my D3s (12Mp), then there is no discernible difference in print quality.
While it won't be a huge difference for some subject matter at that modest size, if you can't see any difference, then something is wrong. Maybe your subject matter doesn't require the realism that additional resolution potentially offers?
Cheers,
Bart
However, if it is like other Epsons, you will get best results if you feed it 360ppi, not 300ppi... and usually, best results if you properly interpolate to 720ppi.
You're absolutely correct, and it has been discussed in several of the LuLa fora on a number of occasions.
Things start with good shooting technique (high enough shutter speed, tripod, limited diffraction), then proper (deconvolution) Capture sharpening and image processing, then proper resampling to the printer's native output resolution (requires good resampling algorithms and proper printer driver settings), and finally sharpening the print file data (to compensate for the upsampling and pre-compensate for print medium losses). Larger format output obviously benefits more from such an approach.
Cheers,
Bart
Not really, Bart. I have the very same printer. I usually print on Ilford Gold Fibre Silk.
My caveat, of course, should have been "with my printer" (an Epson R3000).
With a printer like that, printing at 300 dpi, you get as perfect an A3+ print from a 12Mp image as from a 36Mp image. What an amazing number of people fail to understand - including some experienced journalists on some of our most respected magazines - is that dpi and ppi bear no direct relationship to each other. You don't need a 300ppi digital file to get a 300dpi print.
There was a recent discussion on print sizes; some posters argued that little difference can be seen in large prints from 12 MP and 36 MP cameras. I decided to make a small experiment.
...
Those prints would correspond to about 55x81 cm or 21"x31". Here is what I have seen visually:
At distance, say 1.5 m ...
At medium distance ...
At short distance ...
Thanks for this experiment Erik (from one of those "arguing posters"!).
I have one question: can you specify more precisely what you mean by "medium" and "short" distances?
I am interested in "pixels per viewing distance" as a measure of what our visual systems detects.
Hopefully one is close to the effective full image width of 81cm, since my observations in galleries suggests that this is a common range for the viewing of large prints. (Paintings by the way are typically viewed from further away, further than the "normal" distance of image diagonal length. But most paintings are very low res. by photographic standards!)
A better way to do this is to start with a moderately high (20 to 26 MP) or higher resolution camera and a high-quality zoom lens with sufficient range to let you frame the subject at full frame resolution, and then zoom out so that you keep the same framing of the subject and subject-to-camera distance when you crop the full-resolution frame to lower pixel dimensions to emulate lower resolution cameras.
It is also important that the printing methodology ... remain constant from print to print as well.
Yes, I was thinking the same thing.
I admit that I would be happy to avoid the printing issues with a simpler approach: final viewing on-screen of crops to few enough pixels that screen resolution is not a limitation, and with the screen size and viewing distances specified. In fact, I will try this. I propose trying with viewing distances corresponding to something like 2000, 3000, 4000 and 5000 times the effective camera pixel pitch (the width of the screen area occupied by each camera pixel.)
P. S. Erik, thanks for your reply, which arrived as I was writing.
So your medium distance is almost exactly what I asked for, and what I call "close normal" because it seems common in viewing of large prints. I take it that with good downsampling (not NN), the 12MP is barely distinguishable from the 24MP at that close normal range.
However, if it is like other Epsons, you will get best results if you feed it 360ppi, not 300ppi... and usually, best results if you properly interpolate to 720ppi.
Hadn't heard that before - and will certainly give it a try to see what difference (if any) it makes. (I assume you meant dpi, not ppi, as the ppi of a file "fed to the printer" makes no difference whatsoever. It is the actual dimensions of the file that might make a difference.)
While I can't confirm it, the thought is at this point that resampling is either Nearest Neighbor (most likely) or Bi-Linear.
With the printers I have tested -- the Epson 3880, 3900, and 9800...
Given that the linear resolution between 12 and 24MP sensors only differs by about 40%, it doesn't take too much post-processing to even images up, I reckon. I once did a shoot-out between my 3.4MP Sigma and a 12MP micro four-thirds camera, using QuickMTF to measure sharpness and MTF. Pixel pitch 9.12um versus 4.3um. On a per-pixel basis, the Sigma was a clear winner. However, when the 12MP images were downsized to 3.4MP, it was hard to tell the difference.
Since I don't print photos, I can't offer anything re: up-sizing which, IMHO, is almost as bad as Bayer de-mosaicing quality-wise (just kidding, please don't jump me).
Ted.
The most correct way is to compare images upsized to a common size.
Downsampling loses resolution while keeping some contrast, but it also introduces aliasing artifacts; upsizing introduces fewer artifacts.
If you compare large pixels with small pixels at actual-pixels view, the large pixels also win. Just don't forget that quantity has a quality of its own.
If you don't print, you don't need many megapixels. Full HD is about 2 megapixels.
I believe that downsampling does not always introduce aliasing artifacts. Take, for example, an image having a sinusoidal pattern at less than half of the sensor's Nyquist frequency. Downsizing 50% would not produce any artifacts at all in that admittedly theoretical case. But the same statement would be true of a cloudscape, would it not?
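That theoretical case is easy to check numerically. A small pure-Python sketch (hypothetical numbers): a sinusoid at 0.1 cycles/sample sits below half the Nyquist frequency (0.25), so 2:1 decimation yields exactly the same sinusoid sampled at the new rate, with no alias.

```python
import math

f = 0.1  # cycles per sample, below half of the Nyquist frequency (0.25)
sig = [math.sin(2 * math.pi * f * i) for i in range(200)]

# 50% downsizing by plain decimation (no pre-filter even needed in this case)
half = sig[::2]

# the same sinusoid sampled directly at the new, halved rate
direct = [math.sin(2 * math.pi * (2 * f) * i) for i in range(100)]

# the difference sits at floating-point noise level: no aliasing artifacts
err = max(abs(a - b) for a, b in zip(half, direct))
```

A smooth cloudscape whose detail all lies below the new Nyquist limit behaves the same way; aliasing only appears when content above the new Nyquist frequency survives the pre-filter.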
What artifacts are introduced by up-sampling? Apart from blur, that is ;-)
Thank you.
I read that 4K is coming, like we really need it. "I think I'll watch a movie. Pass my magnifying glass and bar stool, please . . . Wow, look at the detail in that grass!". I'm glad I'm old . . .
Ted
I believe that downsampling does not always introduce aliasing artifacts.

In the general, useful cases, where images have wide bandwidth (spatial detail) and filters have less than infinite stop-band attenuation, you would tend to get some aliasing. The visibility of those artifacts is a matter of debate, of course.
What artifacts are introduced by up-sampling? Apart from blur, that is ;-)

There are many kinds of up-sampling algorithms. Linear scaling is best understood, and bicubic is an important subset of linear scaling algorithms. By varying two parameters in the bicubic formula, you get the classic trade-off presented by Mitchell and Netravali in 1988:
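For reference, a small sketch of that two-parameter (B, C) bicubic family from the 1988 Mitchell and Netravali paper. B=C=1/3 is their recommended compromise; B=1, C=0 gives the soft cubic B-spline, and B=0, C=0.5 the sharper Catmull-Rom spline:

```python
def mitchell(x, b=1/3, c=1/3):
    """BC-family cubic reconstruction kernel (Mitchell & Netravali, 1988)."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*b - 6*c) * x**3
                + (-18 + 12*b + 6*c) * x**2
                + (6 - 2*b)) / 6
    if x < 2:
        return ((-b - 6*c) * x**3
                + (6*b + 30*c) * x**2
                + (-12*b - 48*c) * x
                + (8*b + 24*c)) / 6
    return 0.0  # kernel support is |x| < 2
```

Raising B trades sharpness for blur suppression of ringing; raising C does the opposite. For any (B, C) the kernel sums to 1 over integer shifts, so flat areas are reproduced exactly.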
PPI = pixels per inch. It is what you feed the printer with. Each printer, based on driver settings, will expect a specific PPI. If it does not get that, it will interpolate whatever it is fed to get what it wants.
I checked upsizing on Bart's target and got artifacts.
http://bvdwolf.home.xs4all.nl/main/foto/down_sample/down_sample_files/Rings1.gif
Thanks, yes, I have used that target before. It does not go well with too much beer ;)
I notice it does have a little bit of moiré built-in (due to quantization of the drawing algorithm output?)
Not really.
What matters is the actual dimensions of the file in pixels - e.g. 6400px x 4800px. That the file may have a ppi tag attached to it only determines how it will be displayed on a monitor. As far as the printer is concerned, it should treat the file identically irrespective of any ppi setting. Only if the image dimensions (i.e. the total number of pixels) are inadequate will it invoke interpolation. However, as you correctly suggest, how the printer prints it will be determined by the dpi setting applied by the printer.
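The arithmetic behind this is trivial. As a hypothetical worked example (assuming a 12MP frame with a 4256 px long edge, printed on A3+ paper with a 19-inch long edge):

```python
def effective_ppi(pixels, inches):
    # pixels actually available per inch of print; independent of any ppi tag
    return pixels / inches

# 12MP frame (4256 px long edge) printed on A3+ paper (19 in long edge)
ppi = effective_ppi(4256, 19)  # 224.0 ppi, whatever the file's ppi tag says
```

The printer driver then interpolates those 224 real pixels per inch up to its own fixed input grid (e.g. 360 or 720 ppi) before laying down ink at its dpi.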
You will find a good explanation of the whole ppi/dpi dichotomy here: http://www.rideau-info.com/photos/mythdpi.html
It may not be easy to pin it to a single number because it depends on (assuming 'optimal' viewing conditions) contrast, resolution and the individual's eyes (and degree of optical correction).

Indeed, seeking a universal number is futile: I would be happy with a sense of (a) what my eyes need, and (b) what suffices for most people, say for a sample of young adults with 20/20 vision.
At 81cm the above situation would suggest 4500/810 = 5.56, multiplied by 23.27 PPI that gives 129 PPI resolution ...

That sounds like a good starting point, and fits fairly well with the guideline of 12MP (or, as I prefer, about 4000 pixels in the long dimension) for viewing from a distance comparable to the image diagonal.
... and I'd use the double of that to allow for Vernier acuity / higher contrast detail / sharpening, so 258 PPI as a minimum for that viewing distance.

I am not sure that the stricter Vernier acuity standard is relevant to much (non-technical) photography; if anything, the hard black-to-white transitions of those test patterns are sharper and more easily resolvable by our eyes than almost any edges in photographs of real-world objects. If I were interested in photographing from a drone at 15,000ft and then reading license plates with low-contrast color schemes, my standards would be higher.
There are plenty of "real-world" tests of video resolution...
This approach could produce a simple rule-of-thumb, e.g. in my case something like 105 PPI at 1 metre distance, 52.5 PPI at 2 metres, 210 PPI at 50cm, etc., and double that PPI for higher contrast detail, Vernier acuity, and sharpening.
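The rule of thumb above is simple inverse proportionality; a sketch using the poster's personal figure of 105 PPI at 1 metre (an individual calibration, not a universal constant):

```python
def required_ppi(distance_m, ppi_at_1m=105.0, contrast_factor=1.0):
    # required print resolution scales inversely with viewing distance;
    # set contrast_factor=2.0 for high-contrast detail / Vernier acuity /
    # sharpening headroom, per the suggestion above
    return ppi_at_1m * contrast_factor / distance_m

required_ppi(2.0)                       # 52.5 PPI at 2 metres
required_ppi(0.5)                       # 210.0 PPI at 50 cm
required_ppi(1.0, contrast_factor=2.0)  # 210.0 PPI with the 2x safety margin
```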
Here (http://michaelbach.de/ot/lum_hyperacuity/index.html) is a nice demonstration of what Vernier acuity is capable of. The best I can achieve with that test is 0.02 pixels at half a metre viewing distance, which would translate for my display resolution to some 3183 PPI at 1 metre, i.e. some 25x the display PPI, and some 14.6x the maximum spatial resolution limit of 0.4 arc minutes. Therefore, the above recommendation of using 2x the PPI that the rule of thumb suggests is not unrealistic.
I had never heard of Vernier acuity before; thanks for the link.
Not sure that I understand the reasoning, though.
The demo seems to indicate that a properly anti-aliased edge can be positioned to within subpixel accuracy. That is fully in line with sampling theory (i.e. reconstruction by a sinc shifted by subpixel amounts). How can this be used as an argument for higher resolution? It shows that if there were such edges in the scene, you would still be able to position them correctly to within subpixel accuracy, provided that good sampling/resampling was used throughout.
Just have a look at a Vernier caliper scale (http://en.wikipedia.org/wiki/Vernier_scale). That should explain the concept. Human vision can resolve lines that are relatively offset with more than 10x higher resolution than simple spatial resolution.
The point being that we need enough pixels to position the anti-aliased edges, even if we can no longer resolve the details themselves with our eyes. Edges at an angle will look even less jagged. Our eyes can detect relative displacement with more than 10x higher resolution than the detail size itself.
That part I got.

Yes, but if we can detect "blobs" shifted at e.g. 1/10th-pixel accuracy when captured and rendered using relatively large pixels, why does this tell us that we need to capture and render them using any smaller pixels?
The filtering used in front of the camera sensels, in any image scaling, and in the printer/display rendering are all more or less crude approximations to the ideal filters (some cruder than others). This experiment does not prove (AFAIK) that the edge has to be rendered as an infinitely sharp edge?
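This sampling-theory point can be made concrete with a toy sketch (a hypothetical setup, not either poster's experiment): render a box-filtered, i.e. anti-aliased, step edge onto coarse pixels, then recover its position from the single partially covered pixel to far better than one-pixel accuracy.

```python
def render_edge(p, n=16):
    # anti-aliased step: pixel i covers [i, i+1); value = fraction right of p
    return [min(max(i + 1 - p, 0.0), 1.0) for i in range(n)]

def locate_edge(pixels):
    # recover the subpixel edge position from the one partially covered pixel
    for i, v in enumerate(pixels):
        if 0.0 < v < 1.0:
            return i + 1 - v
    return None  # edge fell exactly on a pixel boundary

p_true = 7.37
p_est = locate_edge(render_edge(p_true))
# p_est matches p_true to floating-point accuracy despite 1-unit-wide pixels
```

So the subpixel position survives coarse sampling, as long as the edge is properly filtered before sampling; nothing here requires the edge to be captured with smaller pixels.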
This reminds me of a debate about the supposed inadequacy of the CD format.
Because real detail is more accurate than interpolated detail?
Interpolation is required by the sampling theorem for recreating a general waveform. As long as the waveform is limited to <fs/2, it can, in principle, be perfectly recovered. Notions of "real detail" vs "interpolated detail" do not fit very well with my understanding of sampling theory.
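A numerical sketch of that claim (assumed parameters, pure Python): sample a sinusoid below fs/2, then reconstruct an off-grid value by (truncated) sinc interpolation. The "interpolated detail" matches the true waveform to within the truncation error of the finite sum.

```python
import math

def sinc(x):
    # normalized sinc, the ideal reconstruction kernel of sampling theory
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

f = 0.1  # cycles per sample, comfortably below fs/2 = 0.5
samples = [math.sin(2 * math.pi * f * n) for n in range(2000)]

t = 1000.37  # an off-grid instant, between two samples
reconstructed = sum(s * sinc(t - n) for n, s in enumerate(samples))
true_value = math.sin(2 * math.pi * f * t)
# the two agree closely; the residual comes only from truncating the sinc sum
```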
Thank you for taking the time to demonstrate your point.
Now, the lines are not bandlimited, and the resampling procedure is unknown. In practice, most cameras/lenses are not capable of such step functions. What happens if you bandlimit the original image to fs/2 and then resample using something like Lanczos 2/3?
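For what it's worth, a 1-D sketch of that experiment (hypothetical parameters): a Lanczos-2 kernel, i.e. a sinc windowed by a wider sinc, resampling a bandlimited sinusoid lands very close to the true values, which is exactly what goes wrong when the input is a non-bandlimited step pattern instead.

```python
import math

def lanczos(x, a=2):
    # sinc windowed by a wider sinc; support is |x| < a
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

f = 0.1  # cycles per sample, bandlimited well below fs/2
samples = [math.sin(2 * math.pi * f * n) for n in range(100)]

t = 50.5  # resample exactly halfway between two samples
estimate = sum(samples[n] * lanczos(t - n) for n in range(49, 53))
true_value = math.sin(2 * math.pi * f * t)
# for this gentle signal the half-sample estimate is within a few percent
```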