One thing I felt was missing from Michael's review: he didn't point out pattern noise as a special characteristic, one that makes a given noise level much less tolerable and much harder to correct.
Hi,
The issue of noise is a subject with quite a few aspects to address on its own. Of course it helps tremendously if the sensor array itself is well behaved with regard to pattern noise, but part of that can/should be addressed by the Raw converter. It would be interesting to compare against the performance of the camera manufacturer's own Raw conversion solution, and against other (commercial) solutions; after all, the manufacturer can use proprietary calibration data without having to reverse engineer it from the file's metadata.
When pushing the barely exposed shadows of underexposed images, one basically has to deal with read-noise patterns (often non-random), sensor calibration patterns (usually non-random in linear gamma), photon shot noise (Poisson distributed), and read noise (Gaussian distributed). The non-random parts can usually be addressed successfully by software without affecting image detail (e.g. by (master) dark frame subtraction). The random parts require a trade-off between detail (photons are 'noisy') and truly random noise, which can be reduced by statistical filtering where dominant spatial frequencies are absent.
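To make the distinction concrete, here is a minimal sketch (in Python/NumPy, with purely illustrative numbers I've made up) of that sensor model: a fixed per-pixel pattern that repeats every frame, Poisson shot noise, and Gaussian read noise. Subtracting a dark frame removes the non-random pattern, but the random components remain.

```python
import numpy as np

rng = np.random.default_rng(42)
h, w = 64, 64

# Hypothetical sensor model (all numbers are illustrative assumptions):
# a fixed per-pixel offset pattern, Poisson photon shot noise,
# and Gaussian read noise.
fixed_pattern = rng.normal(100.0, 5.0, size=(h, w))  # non-random: identical every frame
mean_photons = 20.0                                  # deeply underexposed shadows
read_sigma = 3.0

def capture(exposed=True):
    photons = rng.poisson(mean_photons, size=(h, w)) if exposed else 0.0
    read_noise = rng.normal(0.0, read_sigma, size=(h, w))
    return fixed_pattern + photons + read_noise

light = capture(exposed=True)
dark = capture(exposed=False)

# Dark frame subtraction cancels the fixed pattern, so the residual
# spread is lower than in the raw capture, but the random shot noise
# and read noise are still there.
corrected = light - dark
```

Note that the corrected frame still carries the read noise of *both* frames, which is exactly why averaged master dark frames (below in spirit, not a reference to any particular tool) are preferred over a single dark frame.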
All that noise also makes it much harder to demosaic the (Bayer) CFA pattern into consistent color. That's why the shadows go red/magenta-ish when noise starts to dominate. BTW, that's something proper noise reduction software (e.g. Topaz Denoise) can deal with, but it requires some image-specific guidance if we want to avoid the overall desaturation that I see in e.g. Lightroom noise reduction.
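One common way to attack color speckle without desaturating detail is to separate luminance from chroma and smooth only the chroma. This is a simplified sketch of that general idea (not how any particular product works), using a synthetic flat gray patch with made-up noise levels:

```python
import numpy as np

rng = np.random.default_rng(1)

def box_blur(x, k=5):
    # Crude box blur with edge padding; enough to illustrate the idea.
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.zeros_like(x)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

# Hypothetical noisy shadow patch: a flat gray scene plus independent
# per-channel noise, which appears as random color (chroma) speckle.
h, w = 64, 64
rgb = 0.2 + rng.normal(0.0, 0.05, size=(h, w, 3))

luma = rgb.mean(axis=2, keepdims=True)  # brightness
chroma = rgb - luma                     # color deviation from gray

# Smooth only the chroma planes: the color speckle drops while the
# luminance channel, which carries most perceived detail, is untouched.
smoothed = np.stack([box_blur(chroma[..., c]) for c in range(3)], axis=2)
denoised = luma + smoothed

color_noise_before = chroma.std()
color_noise_after = smoothed.std()
```

The trade-off the paragraph mentions shows up here too: blur the chroma too aggressively and saturated fine detail bleeds, which is the kind of image-specific guidance a converter needs from the user.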
These software corrections/improvements can be processing intensive and take quite some time to perform due to the vast number of calculations needed, but they can be very effective. We can learn a lot from the people who (by definition) routinely have to work on photon-starved images: astronomers. Proper dark frame subtraction would already solve the biggest problems, but it requires a per-camera database of images to synthesize "Master Dark Frames" at various exposure times and ISOs, and maybe temperatures. A Raw converter like RawTherapee already offers a basic approach that can average multiple dark frames into a more robust master dark frame for subtraction in linear gamma space.
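The benefit of averaging dark frames can be sketched in a few lines (again a toy model with assumed numbers, not RawTherapee's actual implementation): the fixed pattern survives the averaging intact, while the random read noise in the master shrinks by roughly the square root of the number of frames.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64

# Hypothetical fixed sensor pattern and read noise level (assumptions).
pattern = rng.normal(100.0, 5.0, size=(h, w))
read_sigma = 3.0

def dark_frame():
    # Each dark frame = the same fixed pattern + fresh random read noise.
    return pattern + rng.normal(0.0, read_sigma, size=(h, w))

# Averaging N dark frames keeps the pattern but divides the random
# read noise by about sqrt(N), giving a cleaner master dark frame.
n = 16
master_dark = np.mean([dark_frame() for _ in range(n)], axis=0)

residual_single = dark_frame() - pattern  # noise left in one dark frame
residual_master = master_dark - pattern   # noise left in the master
```

With 16 frames the master's residual noise is about a quarter of a single dark frame's, so subtracting it adds far less extra noise to the image being corrected.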
So that would involve a huge amount of work on its own to properly address in a camera comparison. People like Jim Kasson and Jack Hogan (e.g. here) have already put a lot of effort into analyzing the noise behavior of several cameras, including the A7R II (e.g. here). And it turns out to be a bit of a can of worms at times.
Again, it already helps a lot if the camera/sensor behaves well in the noise department, but once we start using photon-starved images, all cameras can need varying amounts of help. It would be easier (if practical) to bracket, and preferably to use more photons to begin with (lower ISO, ETTR, stacking, etc.).
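The "more photons" advice follows directly from the statistics: because shot noise is Poisson, collecting N times the photons (e.g. by stacking N aligned frames of a static subject, an idealized assumption here) improves the signal-to-noise ratio by about sqrt(N). A toy illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical flat scene delivering ~50 photons/pixel per frame (assumption).
signal = 50.0

def frame():
    # Each frame is Poisson-distributed photon counts (shot noise only).
    return rng.poisson(signal, size=(256, 256)).astype(float)

single = frame()
# Stacking 16 frames of the same (static, aligned) scene averages the
# random noise down by roughly sqrt(16) = 4, without losing detail.
stack = np.mean([frame() for _ in range(16)], axis=0)
```

The same arithmetic is why lower ISO with a longer exposure, or ETTR, beats pushing shadows in post: you simply record more photons per pixel.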
Nothing beats a properly exposed shot, so photographers should not grow lazy and just depend on their cameras to solve things (and then blame the camera). An image requires photons; it's up to the photographer to provide as many of them (in the right places) as he can, given the circumstances and shooting conditions.
Cheers,
Bart