Michael wrote:
Engineering a monochrome sensor equipped camera isn't simply a matter of removing the Bayer array. Though based on the M9 sensor, a significant amount of reengineering at the chip level was required
This is absolutely false. If you knew what you are talking about, instead of just repeating what someone at Leica has told you, you'd know that. Of course, I would be delighted if you would tell of one single thing that would need to be reengineered (at the "chip" level - I assume you mean the sensor, but maybe something else).
Several differences can be mentioned here. I shall list only two, among others, that I have encountered personally while making digital cameras. I make no claim that they have any relevance to the Leica camera, but they do answer your query to provide a single difference, which I interpret as the claim that nothing should need to change if you substitute a monochrome sensor for a color sensor:

(1) A sensor typically has more pixels than the quoted resolution that is output. There are several uses for having a slightly larger grid than the announced resolution; among other things, it lets one move the output rectangle around a little, for several benefits I won't go into here. On a monochrome sensor one can simply offset that grid by a single pixel. On a color Bayer sensor, however, offsetting by one pixel changes the phase of the color-filter pattern (e.g., RGGB becomes GRBG), and the camera firmware must account for that. A small sketch of this appears below.

(2) Many sensors provide analog gain that can be set to different values for R, G, and B. On a monochrome sensor there is a single value. Again, the firmware has to be adjusted accordingly.
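To make point (1) concrete, here is a minimal sketch in Python. It is not anyone's actual firmware; the RGGB tiling and the names BAYER, cfa_at, and window_phase are my own assumptions for illustration. It only shows that the 2x2 colour-filter pattern at the corner of a readout window depends on where the window starts, whereas a monochrome sensor has no such dependence:

```python
# One common 2x2 Bayer tiling (an assumption for illustration; real sensors vary).
BAYER = [["R", "G"],
         ["G", "B"]]

def cfa_at(row, col):
    """Colour filter covering the photosite at (row, col) on the full sensor."""
    return BAYER[row % 2][col % 2]

def window_phase(row0, col0):
    """2x2 colour-filter pattern at the top-left corner of a readout window
    whose origin is (row0, col0)."""
    return "".join(cfa_at(row0 + r, col0 + c) for r in (0, 1) for c in (0, 1))

print(window_phase(0, 0))  # RGGB
print(window_phase(0, 1))  # GRBG -- one column over, the demosaic phase changes
print(window_phase(1, 0))  # GBRG -- one row down, it changes again
# A monochrome sensor has no colour-filter phase, so any offset reads out identically.
```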
Admittedly, these are minor differences, and the overall design of a camera should not change a whole lot between a color and a monochrome version. But, again, you asked for a single difference, however small it may be.
Your lack of knowledge is clear here as well ...
It is quite easy to pick on many (most??) articles for technical correctness. Prompted by the tone of your attack, I went back and read Michael's essay. To me, the points you have mentioned are too minor to detract from the overall message of the essay. On an informal Internet forum, the aim is to get the overall gist correct in the context in which things are written. I shall again give you two examples.
(1) On LL, DPR, and elsewhere, how many times have "knowledgeable" people stated that to get the overall system MTF one multiplies the MTFs of the lens, the sensor, this, that, and what not? Technically, that is incorrect unless it is properly accounted for, because to multiply MTFs in that manner the system must be shift-invariant, which the "MTF" of the sensor is not. Of course, people have noticed that the sensor response is not shift-invariant: the sensor's output for thin-enough alternating white and black lines depends upon the registration of the lines with the pixels. At certain displacements white falls entirely on one pixel and black on the next, giving a contrasty image; shift the pattern by, say, half a pixel, and each pixel sees part of a black line and part of a white line, giving a less contrasty, more uniformly gray image (a small numeric sketch follows below). That is layman's speak for the non-shift-invariance of the sensor response mentioned above; the "MTF" is itself a function of the displacement in this scenario. So while the effect is recognized, it is often not incorporated into the definition and determination of MTF, even though it is possible to do so. One might just as easily attack the "product of MTFs" assertion when it is made without proper qualification.
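Here is the numeric sketch mentioned above, again a hypothetical Python illustration of my own (the function name pixel_values and the 100% fill-factor box-aperture pixel are assumptions, and the contrast figure is simple Michelson contrast). It shows how the recorded contrast of a one-pixel-period line pattern depends on where the lines sit relative to the pixel grid:

```python
import numpy as np

def pixel_values(phase, n_pixels=8, oversample=1000):
    # Each pixel is 1 unit wide and simply averages the light falling on it
    # (a 100% fill-factor box aperture -- an assumption for this sketch).
    x = np.linspace(0.0, n_pixels, n_pixels * oversample, endpoint=False)
    # Scene: alternating white (1) and black (0) lines, each one pixel wide,
    # shifted across the sensor by `phase` pixels.
    scene = ((x + phase) % 2.0 < 1.0).astype(float)
    return scene.reshape(n_pixels, oversample).mean(axis=1)

for phase in (0.0, 0.25, 0.5):
    v = pixel_values(phase)
    contrast = (v.max() - v.min()) / (v.max() + v.min())
    print(f"shift = {phase:4.2f} px   pixel values = {np.round(v, 2)}   contrast = {contrast:.2f}")
```

With the lines registered to the pixels (shift 0) the contrast comes out as 1.0; at half a pixel of shift every pixel averages to the same gray and the contrast is 0. The measured "MTF" at this spatial frequency therefore depends on the displacement, which is exactly the shift-variance being described.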
(2) Open almost any book, Internet article, etc., and stare at the xyz chromaticity diagram. Do you see what is wrong? I shall leave that as a homework exercise.
Also, with all due respect, you may well talk to all kinds of experts, but that does not mean that you are an expert in this field (digital imaging, including sensor operation). You are not.
I hope you get the point by now.
Sincerely,
Joofa