Hi Bart,
In the image below, on the left I applied FM (radius 2, amount 100) and then resized to 50% using Bicubic. On the right I resized to 50% using Bicubic first and then applied FM (radius 1, amount 100).
[...]
It looks like the whole MTF curve is pulled up in the sharpen-after-resize version, probably a bit too much.
I reduced the sharpening after resize to FM 1/75 (radius 1, amount 75), and also tried FM 1/100 followed by reducing the sharpened layer's opacity to 80%:
[...]
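As an aside, that opacity reduction is mathematically just a linear mix of the two layers. A minimal NumPy sketch (the stand-in arrays and the float 0..1 range are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((4, 4, 3))            # stand-in for the base image, float 0..1
sharpened = np.clip(original * 1.3, 0, 1)   # stand-in for the sharpened layer

# 80% opacity on the sharpened layer is a straight linear mix:
result = 0.80 * sharpened + 0.20 * original
```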
Based on the technical data and charts, we can see (in the edge-pixel profiles) that there is a small edge halo in all versions. Ideally we would have no halo, but that is almost impossible if the image also has to be sharp. That is why I use a Blend-If layer rather than a reduced opacity: it keeps the deconvolution sharpening at full strength everywhere except on very high-contrast edges and lines, which avoids clipping and the most annoying high-contrast halos.
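For anyone who wants to experiment with this outside Photoshop, here is a minimal NumPy sketch of a Blend-If-style composite: the sharpened layer fades out where the underlying layer's luminance crosses a feathered threshold. The threshold and feather values are illustrative assumptions, not my actual slider settings:

```python
import numpy as np

def blendif_mask(luma, hi=0.75, feather=0.15):
    # Opacity in the spirit of Photoshop's Blend-If sliders: full strength
    # up to `hi`, fading to zero at `hi + feather`, driven by the
    # *underlying* layer's luminance (scaled 0..1).
    return np.clip((hi + feather - luma) / feather, 0.0, 1.0)

def blendif_composite(base, sharpened):
    # Rec. 709 luminance of the base layer drives the mask.
    luma = base @ np.array([0.2126, 0.7152, 0.0722])
    m = blendif_mask(luma)[..., None]
    return base + m * (sharpened - base)

# Tiny runnable demo with stand-in data:
rng = np.random.default_rng(0)
base = rng.random((4, 4, 3))
out = blendif_composite(base, np.clip(base * 1.3, 0, 1))
```

Unlike a global opacity reduction, this keeps the sharpening at full strength in the midtones and only backs off where the high-contrast halos would be most objectionable.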
Also, we can see that there is some aliasing potential (the shaded region at the bottom right, under the MTF curves) in all versions; again hard to avoid entirely, but something to be cautious about. Our goal should be a method that boosts the spatial frequencies to the left of the Nyquist-limit marker while keeping the response to the right of it very low. That is hard to achieve on normal images without introducing other artifacts, so it remains a balancing act.
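One way to quantify that aliasing potential is to look at a resampling filter's frequency response around the output Nyquist frequency. A sketch, assuming a Catmull-Rom cubic as the "Bicubic" kernel (Photoshop does not document its exact kernel, so that choice is an assumption):

```python
import numpy as np

def catmull_rom(x):
    # Catmull-Rom cubic, a common choice of "Bicubic" kernel (a = -0.5).
    x = np.abs(x)
    return np.where(x < 1, 1.5 * x**3 - 2.5 * x**2 + 1,
           np.where(x < 2, -0.5 * x**3 + 2.5 * x**2 - 4 * x + 2, 0.0))

# For a 50% downsample the kernel is stretched 2x, putting its intended
# cutoff at the new Nyquist: 0.25 cycles per source pixel.
scale = 2.0
t = np.linspace(-16, 16, 8192, endpoint=False)
dt = t[1] - t[0]
k = catmull_rom(t / scale) / scale

freqs = np.fft.rfftfreq(t.size, d=dt)   # cycles per source pixel
resp = np.abs(np.fft.rfft(k)) * dt      # approximate continuous spectrum
resp /= resp[0]                         # normalise DC gain to 1

nyq = 0.25
print("response at 1.0x Nyquist:", resp[np.searchsorted(freqs, nyq)])
print("response at 1.5x Nyquist:", resp[np.searchsorted(freqs, 1.5 * nyq)])
```

Whatever response remains above Nyquist folds back into lower frequencies after decimation; that tail is the aliasing potential the shaded region represents.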
I still think that the ultimate test for down-sampling artifacts is to down-sample a zone-plate kind of target (like this one). Such a target is super critical: it contains many spatial frequencies in all orientations, and its repetitive sinusoids show deviations from the expected pattern with merciless clarity (not only aliasing, but also blocking and ringing artifacts will break the smooth patterns). It will also show that pre-blurring before down-sampling reduces the artifacts, but that sharpening after resampling brings some of them back, just as the Imatest charts predict. Another useful natural test image is this one, with many thin lines and sharp edges at slightly different angles; down-sampling it will also expose many issues if the process is not of high enough quality.
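For those who want to roll their own zone-plate target, it is easy to generate; a sketch with NumPy and Pillow (the size and maximum frequency are arbitrary choices):

```python
import numpy as np
from PIL import Image

def zone_plate(size=1024, fmax=0.5):
    # Sinusoidal zone plate: the radial frequency rises linearly from
    # 0 cycles/pixel at the centre to fmax at the corners (0.5 = Nyquist).
    y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    r = np.hypot(x, y)
    phase = np.pi * fmax * r**2 / r.max()   # instantaneous freq. = fmax * r / rmax
    img = 0.5 + 0.5 * np.cos(phase)
    return np.round(img * 255).astype(np.uint8)

Image.fromarray(zone_plate()).save("zoneplate.png")
```

Down-sample that with different methods and aliasing shows up immediately as spurious low-frequency rings away from the centre.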
To complicate matters further, Bicubic-filtered down-sampling is not perfect and introduces some artifacts of its own. However, it is not that easy to devise a better down-sampler, because there are always other trade-offs to consider (although Lanczos2 or Lanczos3 windowed down-sampling is often quite usable). In this thread I tried to work out a best compromise, but it requires an external image-processing library (it's free, though). It also uses different filters/methods for up- and down-sampling.
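As an illustration, here is the Lanczos3 windowed-sinc kernel, and how to apply a Lanczos down-size with Python's Pillow (used purely as an example library, not necessarily the one from that thread; "input.png" is a placeholder file name):

```python
import numpy as np
from PIL import Image

def lanczos(x, a=3):
    # Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, else 0.
    # (np.sinc is the normalised sinc, sin(pi*x)/(pi*x).)
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# Pillow's LANCZOS filter is this a = 3 kernel; on down-sizing it scales
# the kernel support, so it pre-filters before decimating:
src = Image.open("input.png")
dst = src.resize((src.width // 2, src.height // 2),
                 Image.Resampling.LANCZOS)
dst.save("half.png")
```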
Processing natural images involves a lot of trade-offs: some image content is better served by one approach, other content benefits from another, and both are often combined in the same image. The need for sharpening is inherently linked to the capture process, which blurs image content, and resampling also blurs and/or reduces contrast, so there is no single best solution. But if our tools allow a good preview of the effects, and we use some of the insights we can get from analyzing images with tools like Imatest, we can get quite far.
Cheers,
Bart