So it boils down to whether Leica's downsize interpolation of the colour channels before demosaicing (if AP is right) is equal to, better than, or worse than a straight interpolation after the full image has been demosaiced.
The downsize interpolation process is lossy, so I think it's safe to say the 36 Mpx and 18 Mpx modes will never give you anything that can't be matched or improved on, in image-quality terms, starting from the original 60 Mpx RAW data. These lower-resolution modes are useful when the advantage of smaller files outweighs the quality loss.
However, what we know so far is that the camera produces undemosaiced DNG files (or, better said, yet-to-be-demosaiced DNG files), but it remains a mystery whether the original RAW data is demosaiced at some point in the process or not. I can think of two possibilities:
- Each individual RAW channel (R, G1, G2, B, perhaps with some interaction/combination of the highly correlated G1 and G2) is downsampled on its own to the final size. This is entirely possible, but it would require carefully tuned anti-aliasing (AA) filtering beforehand: bear in mind that the sampling frequency of each partial RAW channel (R, G1, G2, B) is half the sampling frequency of the sensor.
- The original RAW data is demosaiced, then properly rescaled to the final size (the AA filtering here is less critical, since we already have full RGB data at every pixel and therefore benefit from a higher sampling frequency across the whole image), and finally 2 of the 3 channels at every pixel are dropped to form a new, smaller fake-Bayer DNG file. I'm not sure this last step makes sense, though, given that the data then has to be demosaiced all over again.
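To make the two hypotheses concrete, here is a minimal NumPy sketch of both pipelines. Everything here is an assumption for illustration: the function names are my own, the 2x2 box average stands in for whatever AA filtering Leica actually uses, and the toy box demosaic is far cruder than a real demosaicing algorithm.

```python
import numpy as np

def split_bayer(raw):
    """Split an RGGB mosaic into its four half-resolution planes."""
    return raw[0::2, 0::2], raw[0::2, 1::2], raw[1::2, 0::2], raw[1::2, 1::2]

def decimate2(plane):
    """Crude AA filter + 2x decimation: average 2x2 blocks (an assumption)."""
    h, w = plane.shape[0] // 2, plane.shape[1] // 2
    return plane[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

def approach_1(raw):
    """Hypothesis 1: downsample each Bayer plane separately,
    then re-interleave them into a smaller mosaic."""
    r, g1, g2, b = (decimate2(p) for p in split_bayer(raw))
    h, w = r.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2], out[0::2, 1::2] = r, g1
    out[1::2, 0::2], out[1::2, 1::2] = g2, b
    return out

def demosaic_box(raw):
    """Toy demosaic: every pixel in a 2x2 cell gets that cell's
    R, mean(G1, G2), B. A real algorithm would interpolate properly."""
    r, g1, g2, b = split_bayer(raw)
    rgb = np.stack([r, (g1 + g2) / 2, b], axis=-1)
    return rgb.repeat(2, axis=0).repeat(2, axis=1)  # back to full size

def remosaic(rgb):
    """Drop 2 of the 3 channels at every pixel -> fake-Bayer RGGB mosaic."""
    h, w, _ = rgb.shape
    out = np.empty((h, w))
    out[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    out[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G1
    out[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G2
    out[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return out

def approach_2(raw):
    """Hypothesis 2: demosaic, rescale the full RGB image, re-mosaic."""
    rgb = demosaic_box(raw)
    h, w = rgb.shape[0] // 2, rgb.shape[1] // 2
    small = rgb[:2 * h, :2 * w].reshape(h, 2, w, 2, 3).mean(axis=(1, 3))
    return remosaic(small)

raw = np.random.rand(8, 8)       # stand-in for a 60 Mpx RGGB mosaic
print(approach_1(raw).shape)     # half-size Bayer mosaic
print(approach_2(raw).shape)     # same size, via the demosaic route
```

Both routes end up at a half-size Bayer mosaic; the difference is where the AA filtering happens and at what sampling frequency, which is exactly the trade-off described above.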
I can see advantages and disadvantages in both approaches, but the first, despite being more complex, makes more sense to me, and I find it more attractive from an engineering/signal-processing perspective.
Regards