Noise reduction is IMHO not the primary goal (losing resolution may be worse than a bit of organic looking noise), but noise should not be made worse by the other artifacts (like aliasing) either.
Maybe not the primary goal, but, IMO, a primary goal. I think there's a three-legged stool here: maintaining (or, proportionally, increasing) sharpness, avoiding artifacts, and reducing photon noise. Why the last? First, in my photographs, where noise is an issue, it's photon noise. I don't use the camera in a way that makes read noise important, and I've never had a camera where the PRNU was bad enough to bother me. Second, as time goes by, I find myself using cameras with tighter pixel pitch. Technology has driven up coverage, QE, and FWC/unit area, but we're reaching the end of that road, and my D810, D800E, and my Sony a7R (to say nothing of my H2D-39) all have more photon noise per pixel than my D4 or D3s. (Yeah, I know. That's too many cameras. I can't seem to bring myself to sell the older ones.)
Conventional wisdom, and DxOMark, say that I should be able to produce images from the 30+ MP cameras (with the exception of the H2D) that match the sharpness, resolution, and photon noise of the 12-16 MP cameras. If I'm allowed to tweak the noise reduction setting in Lr, I can not only do that, I can usually do better. But doing that requires me to pick noise reduction settings that would be inappropriate for full-resolution images from the high-resolution cameras.
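The conventional-wisdom part is easy to check in simulation. A hedged sketch (the photon counts are made-up illustrative numbers, not measurements from any of these cameras): bin the smaller pixels of a high-resolution sensor back to the coarser pitch, and the photon SNR comes out the same, because Poisson counts add.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers: same sensor area and exposure; the fine-pitch
# camera has pixels with one third the area of the coarse-pitch one.
mean_coarse = 4000.0                            # photons/pixel, coarse pitch
coarse = rng.poisson(mean_coarse, 1_000_000)
fine = rng.poisson(mean_coarse / 3.0, 3_000_000)

# Summing 3 fine pixels recovers the coarse pixel's photon count,
# so photon SNR at matched output resolution is the same.
binned = fine.reshape(-1, 3).sum(axis=1)

snr_coarse = coarse.mean() / coarse.std()       # ~sqrt(4000) ~ 63
snr_binned = binned.mean() / binned.std()
print(f"SNR coarse {snr_coarse:.1f}, binned {snr_binned:.1f}")
```

Binning is the best case, of course; a real resampler with a non-rectangular kernel won't be quite this tidy.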
I like the idea of a workflow that does all the creative work at full resolution (or, in the case of Lr, on a proxy image), with the generation of the copy to be printed or displayed, at the appropriate resolution, left as a no-thinking, mechanical operation. I've got that pretty much working for upsampling. I'd like it to be true for downsampling as well. It looks like the common downsampling algorithms, even with AA filtering, don't do too well at noise reduction at low reduction ratios. I'm trying to understand that and, eventually, find a way around it.
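One way to see the low-ratio problem: for white noise, even an ideal lowpass can only cut the noise standard deviation by the square root of the reduction ratio, and near a ratio of 1 that's almost nothing. A 1-D numpy sketch (white Gaussian noise standing in for photon noise):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1 << 16
noise = rng.standard_normal(n)          # white-noise stand-in for photon noise

results = {}
for ratio in (1.2, 1.5, 2.0, 3.0):
    X = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n)          # cycles/sample, 0..0.5
    X[freqs > 0.5 / ratio] = 0.0        # keep only the new Nyquist band
    filtered = np.fft.irfft(X, n)
    results[ratio] = filtered.std() / noise.std()
    print(f"{ratio:.1f}x: std ratio {results[ratio]:.3f} "
          f"(theory {ratio ** -0.5:.3f})")
```

At 1.2x the best a linear filter can do is about a 9% reduction in noise standard deviation, and real kernels that alias some noise back do worse.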
It's probably the case that, as with sharpness, noise reduction with linear filters and conventional downsampling trades off against artifact prevention. Certainly, if noise reduction is a priority, it's tempting to precede downsampling with a brick-wall lowpass filter with its cutoff at the Nyquist frequency of the new resolution, but that will ring like a gong.
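The ringing is easy to demonstrate. A minimal 1-D sketch: apply an ideal frequency-domain cutoff (here at the Nyquist of a hypothetical 2x downsample) to a hard edge and look at the Gibbs overshoot.

```python
import numpy as np

n = 1024
x = np.zeros(n)
x[n // 2:] = 1.0                      # a hard edge, like a high-contrast boundary

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n)            # cycles/sample, 0..0.5
cutoff = 0.25                         # Nyquist of the 2x-downsampled image
X[freqs > cutoff] = 0.0              # brick-wall rejection of everything above it
y = np.fft.irfft(X, n)

overshoot = y.max() - 1.0             # ringing above the edge level
print(f"peak overshoot: {overshoot:.3f}")   # roughly 9% (Gibbs phenomenon)
```

That ~9% overshoot (and the matching undershoot) doesn't shrink as you make the filter longer; it's intrinsic to the ideal cutoff, which is why the gong.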
If nonlinear filtering is the ticket to mastering noise upon downsampling, I'd like to understand implementations that work at all reduction ratios. I can't figure out how to do a fractional-pixel median filter, for example.
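For reference, the integer-support version is trivial; it's the fractional support that has no obvious analog. A minimal 3-tap sketch of the easy case (a weighted median with fractional end weights is one speculative avenue, not something I'm claiming works):

```python
import numpy as np

def median3(x):
    """Plain 3-tap sliding median -- integer support only.

    The open question is how to generalize this to fractional support
    (say, 2.4 pixels) for arbitrary resampling ratios; the sorting step
    has no obvious notion of a partial sample.
    """
    x = np.asarray(x, dtype=float)
    windows = np.stack([x[:-2], x[1:-1], x[2:]])  # each column is a 3-sample window
    return np.median(windows, axis=0)

out = median3([1, 9, 2, 8, 3])
print(out)    # -> [2. 8. 3.]
```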
Another motivation for this work is my continuing project of modeling and comparing cameras with different pixel pitches (currently down to 1.1 um), but that's a thread that we don't need to explore now.
Remember that I am a reformed color scientist, and inexpert in matters involving the wider world of digital signal processing, but willing to learn.
Jim