Thanks for this, Nicolas and Dogway.
Just to throw in a slightly different perspective, I decided to measure MTF curves of a downsized ISO100 D800e slanted edge using the different methods discussed in the latter part of this thread. The original image is a 300x400 pixel crop taken directly off DSC_6483.NEF (I downloaded it from dpr, and so can you if you wish), saved as a TIFF with absolutely no processing other than CFA normalization. I think it's as close as one can get to the actual spatial resolution information captured by the camera.
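For anyone wanting to reproduce this sort of measurement without MTF Mapper, here is a minimal Python sketch of the slanted-edge method (locate the edge per row, build an oversampled ESF, differentiate, FFT); the function and its 4x binning factor are simplifications assumed here, not what MTF Mapper actually implements:

```python
import numpy as np

def slanted_edge_mtf(img, oversample=4):
    """Rough slanted-edge SFR of a 2-D float array holding one
    near-vertical edge (dark on the left, bright on the right)."""
    h, w = img.shape
    cols = np.arange(w)
    # 1. locate the edge in each row from the centroid of the row derivative
    deriv = np.gradient(img, axis=1)
    centers = (deriv * cols).sum(axis=1) / deriv.sum(axis=1)
    # 2. fit a straight line through the per-row edge positions (the slant)
    slope, intercept = np.polyfit(np.arange(h), centers, 1)
    # 3. bin every pixel by its distance to the fitted edge -> oversampled ESF
    xx, yy = np.meshgrid(cols, np.arange(h))
    dist = xx - (slope * yy + intercept)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    counts[counts == 0] = 1
    esf = np.bincount(bins.ravel(), weights=img.ravel()) / counts
    # 4. LSF = d(ESF)/dx, windowed to tame the noisy tails; MTF = |FFT(LSF)|
    lsf = np.gradient(esf) * np.hanning(len(esf))
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)  # cycles/pixel
    return freq, mtf
```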
Hi Jack,
Yes, a slanted edge can reveal lots of useful information, which is why I use it a lot, but it doesn't give a complete picture. For example, it can only suggest that aliasing is possible (or not); it doesn't know whether the original subject had enough contrast to produce a modulation that can cause aliasing in the first place. A high contrast edge will survive, but tiny detail (and small features in digital imaging by definition equal low MTF modulation) may suffer enough loss of contrast due to (re)sampling to become unrecoverable, even with deconvolution. Think about detail with 1% subject contrast (still detectable by humans) and an MTF response of 10%. That would render the detail at 0.1% modulation, which might be below the quantization/noise threshold.
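To spell out that arithmetic, here is the same calculation in a few lines of Python; the 8-bit mid-grey level is an illustrative assumption:

```python
# The arithmetic above, spelled out. All numbers are illustrative
# assumptions: 1% scene contrast, 10% MTF, an 8-bit output around mid-grey.
subject_contrast = 0.01                              # 1% scene modulation
mtf_response     = 0.10                              # 10% MTF at that frequency
recorded         = subject_contrast * mtf_response   # 0.001 = 0.1% modulation
mean_level_dn    = 128                               # assumed mid-grey on an 8-bit scale
peak_to_peak_dn  = 2 * recorded * mean_level_dn      # ~0.26 DN swing
print(f"{peak_to_peak_dn:.2f} DN swing vs. a 1 DN quantization step")
```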
The MTF view also doesn't reveal the susceptibility to blocking or ringing artifacts. The EWA type of resampling in particular (-distort Resize) behaves in a very 'organic' way, which avoids some of these artifacts much better than the tensor (separable) type of resampling (-resize) does. The traditional tensor type of resampling has a higher diagonal resolution, which is nice, but it does cause trouble when we start to push things to the limit. That's why I additionally test with a zone plate (rings) target, a star target, and challenging images such as my windmill sample image.
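For anyone who wants to build a similar torture target, a generic cosine zone plate takes only a few lines; the size and frequency ramp below are assumed values, not Bart's exact target:

```python
import numpy as np
from PIL import Image

# Generic cosine zone plate: local spatial frequency grows linearly with
# radius, reaching Nyquist (0.5 cy/px) at the mid-edges of the frame and
# exceeding it in the corners. Size and ramp are arbitrary choices.
N = 1024
y, x = np.mgrid[-N//2:N//2, -N//2:N//2].astype(float)
k = np.pi / N                       # phase = k*r^2 -> frequency = k*r/pi cy/px
img = 0.5 + 0.5 * np.cos(k * (x**2 + y**2))
Image.fromarray((img * 255).astype(np.uint8)).save("zoneplate.png")
```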
The D800e edge crop does have a bit of a ragged, zipper-like edge structure, which is something to be wary of, but I don't think it is too much of an issue when downsampling to 25% or less of the original size. Still, for more objective tests I'd prefer to base my conclusions on a really smooth and sharp edge. One can even compare edges with different contrast, to better judge some (unwanted) effects on really high spatial detail. I can easily generate such edges if needed, at any required angle, but the ISO standard uses an arctan(0.1) = 5.71 degree angle, so that's what I usually make (a 1/10 pixel slope is also easy to analyse in CGI versions; camera shots are usually rotated slightly, so their slope must be measured).
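A noise-free CGI edge of that kind is easy to synthesize; the sketch below builds one with a 1/10 pixel slope from per-pixel coverage (the size and contrast levels are arbitrary assumptions), and its output can be fed straight into the measurement sketch shown earlier:

```python
import numpy as np

# Noise-free slanted edge with a 1/10 pixel slope (arctan(0.1) ~ 5.71 deg),
# dark on the left, bright on the right. Pixel values approximate the area
# of each pixel lying on the bright side of the edge, evaluated at the row
# centre. Size and contrast levels are arbitrary.
def make_edge(w=300, h=400, slope=0.1, lo=0.20, hi=0.80):
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    edge_x = w / 2 + slope * (yy + 0.5)            # edge position per row
    frac = np.clip(xx + 1.0 - edge_x, 0.0, 1.0)    # bright-side coverage of pixel [xx, xx+1]
    return lo + (hi - lo) * frac

edge = make_edge()                                  # feed to slanted_edge_mtf(edge)
```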
That beautiful edge was first measured with MTF Mapper - then downsized 4:1 using Photoshop's standard methods and re-measured.
[...]
The original is soaring at those lofty heights because one of its pixels corresponds to four of the others'. Nearest neighbour tracks it well, which is good at the lower frequencies but not so good beyond Nyquist, because the ideal SFR/MTF curve of a perfect downsizing would track the original's up to Nyquist yet show little or no energy above 0.5 cycles/pixel, the point at which aliasing and moiré start rearing their ugly heads. Bilinear and bicubic show different amounts of attenuation, with bicubic looking like the best compromise.
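This comparison can be reproduced on the synthetic edge by downsizing 4:1 with the filters Pillow exposes and re-measuring each result; note that Pillow's nearest/bilinear/bicubic are only stand-ins for Photoshop's methods, not identical implementations:

```python
import numpy as np
from PIL import Image

# Downsize the synthetic edge 4:1 with several Pillow filters and
# re-measure each result with slanted_edge_mtf() from the sketch above.
edge = make_edge()
im = Image.fromarray(edge.astype(np.float32))        # 32-bit float image, mode "F"
for name, flt in [("nearest",  Image.Resampling.NEAREST),
                  ("bilinear", Image.Resampling.BILINEAR),
                  ("bicubic",  Image.Resampling.BICUBIC),
                  ("lanczos",  Image.Resampling.LANCZOS)]:
    small = np.asarray(im.resize((im.width // 4, im.height // 4), flt), dtype=float)
    freq, mtf = slanted_edge_mtf(small)
    mtf50 = freq[np.argmax(mtf < 0.5)]               # first frequency where MTF drops below 50%
    print(f"{name:8s} MTF50 ~ {mtf50:.3f} cy/px")
```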
The blue curve below shows the upper boundaries of what I think an ideal MTF curve should look like after perfect downsizing only (no extra sharpening), with bicubic for reference as the dotted line.
True, but again watch out for bicubic's (poor) blocking and ringing behavior, which is why other tests are also required.
Next I plotted the MTF curves from three downsizing algorithms discussed in this thread ('D' option in the script): 'nodownsharp' quadratic, 'downsample' V1.22 and RobidouxSharp. None were sharpened after downsizing, with 'downsample' set at a minimalist 1% DoG. Quadratic and downsample 1% fare much worse than our benchmark bicubic, while RobidouxSharp does a good job early on and bests it up to Nyquist. Note, however, how all three have an unwelcome tendency to linger at the higher frequencies, letting spurious energy through there, which is potentially indicative of the high frequency trouble discussed earlier in the thread.
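For readers who want to try the EWA route themselves, one plausible (assumed, not Jack's exact) invocation of ImageMagick's RobidouxSharp filter via -distort Resize looks like this; the file names, the 25% factor and the linearize/delinearize steps are placeholders:

```python
import subprocess

# One possible command-line route to an EWA RobidouxSharp downsize with
# ImageMagick; this is an assumed workflow for illustration, not the exact
# recipe behind the curves discussed above.
subprocess.run(["convert", "edge_crop.tif",
                "-colorspace", "RGB",                 # to linear light (IM6 convention)
                "-filter", "RobidouxSharp",
                "-distort", "Resize", "25%",
                "-colorspace", "sRGB",                # back to gamma-encoded
                "edge_robidouxsharp.tif"], check=True)
```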
Yes, but keep in mind that lower contrast small detail will have low MTF to begin with, so down-sampling will only reduce the chance of it retaining a meaningful modulation afterwards. That is also why I use deconvolution sharpening without too much hesitation. Some of the detail is probably already unrecoverable in the noise, while detail that still has a meaningful modulation will be boosted.
Looking at the MTF curve of RobidouxSharp, one would almost feel like throwing a sharp low-pass filter at its output past Nyquist. What would that do to the frequencies below Nyquist? Would it kill its slim advantage over bicubic there?
That's the choice I made after studying the other artifacts. It seems better to down-sample a bit soft and deconvolve than to down-sample sharp and blur/convolve. The sharper down-sampling generates too many compromise artifacts, and resampling generally requires a bit of sharpening to restore some of the resampling losses, which is not a good idea when artifacts are already present.
Lastly I decided to give Bart his due and show 'downsample' V1.22 as it was meant to be, with full 100% DoGs.
Just a small addition: 50% is a neutral single-iteration deconvolution; 100% is the default because it adds a bit of extra sharpening that is often visually pleasing.
The 1% dotted red curve is the heavily attenuated result of its downsizing component. With 100% 'sharpening' the DoG component (solid red line) attempts to make up for the loss of energy in the desirable frequency range by amplifying the attenuated curve (and everything else, including unwanted noise) back to where it thinks it should be. I can't help but wish that both the attenuation and the amplification were less drastic. Even so, the method does not quite achieve bicubic's apparently effortless performance.
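For readers unfamiliar with the technique, a generic difference-of-Gaussians boost looks roughly like the sketch below; the radii, ratio and amount are placeholders, not the values 'downsample' V1.22 actually uses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Generic difference-of-Gaussians boost, to illustrate the principle of the
# DoG stage; sigma, ratio and the default amount are assumed placeholders.
def dog_sharpen(img, sigma=0.6, ratio=1.6, amount=1.0):
    narrow = gaussian_filter(img, sigma)
    wide   = gaussian_filter(img, sigma * ratio)
    detail = narrow - wide            # band-pass: the frequencies to restore
    return img + amount * detail      # amount=0.5 -> "50%", amount=1.0 -> "100%"
```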
The benchmark seems hard to beat when looking at these curves off grayscale raw data.
Thoughts?
Keep the earlier comments in mind. Lower contrast detail may already be lost, so the potentially risky boost of aliasing-prone spatial frequencies may be relatively harmless. Also check for other artifacts; the rings target is cruel enough to reveal such poor behavior in 2D. Gamma effects also play a big role in generating or suppressing artifacts.
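To illustrate the gamma point, the sketch below downsizes in (approximately) linear light rather than directly on gamma-encoded values; the plain 2.2 exponent and the single-channel 8-bit input are simplifying assumptions:

```python
import numpy as np
from PIL import Image

# Downsizing in (approximately) linear light instead of on gamma-encoded
# values; the plain 2.2 exponent stands in for the real sRGB transfer curve.
def resize_linear(img8, size, gamma=2.2):
    # img8: single-channel 8-bit numpy array; size: (new_width, new_height)
    lin = (np.asarray(img8, dtype=np.float32) / 255.0) ** gamma
    small = Image.fromarray(lin).resize(size, Image.Resampling.BICUBIC)
    out = np.clip(np.asarray(small, dtype=np.float64), 0.0, 1.0) ** (1.0 / gamma)
    return (out * 255.0 + 0.5).astype(np.uint8)
```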
Cheers,
Bart