You raise several interesting issues, and I thank you for making me think hard about something that I haven't been paid to think about for close to twenty years.
It is possible to have a kind of compression of the differences between two or more colors at the sensor level, as opposed to processing that takes place after the raw image is captured. You could have a camera whose filter set was sufficiently peaky that, for saturated colors with a narrow spectrum in the passband of one of the filters, only one color plane in the raw image had a useful signal-to-noise ratio (SNR). Small chromaticity variations in the subject would then not translate to any chromaticity variations in the captured image: the captured chromaticity would be whatever the software that followed the capture mapped a raw value of x,0,0 (or 0,x,0, or 0,0,x) to. I don't think that's what's happening in your example, though. In the absence of tone compression in the sensor output (brought on by the well filling up, or by something in the electron-to-digital-value chain saturating), you would see no compression in luminance, and I'm pretty sure I do see luminance compression in the JPEG part of your example.
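If it helps to see that sensor-level mechanism with numbers, here's a toy sketch. The Gaussian filter shapes and wavelengths are invented for illustration; no real camera's dyes look like this, but the effect is the same: two saturated reds with different peak wavelengths produce raw triples with the same chromaticity.

```python
import numpy as np

wavelengths = np.arange(400, 701)  # visible band, nm

def gaussian(center, width):
    """Peak-normalized Gaussian spectral response (made-up shape)."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Deliberately peaky (narrow) filter set -- the case described above.
filters = [gaussian(620, 15), gaussian(540, 15), gaussian(460, 15)]  # R, G, B

def raw_triple(spectrum):
    """Integrate a subject spectrum against each filter."""
    return np.array([np.sum(spectrum * f) for f in filters])

# Two narrow-spectrum red subjects, peaks 10 nm apart.
for name, peak in (("red A", 650), ("red B", 660)):
    raw = raw_triple(gaussian(peak, 5))
    chromaticity = raw / raw.sum()
    print(name, np.round(chromaticity, 4))
# Both print ~(1, 0, 0): the G and B planes have no usable signal, so
# the 10 nm difference between the subjects never makes it into the file.
```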
Designing a filter set for a camera requires the management of a complicated set of tradeoffs. (Tradeoffs are a good thing; otherwise, what would the engineers and the product managers argue about?)
Here are a few:
1) Sensitivity (base ISO) -- higher is better. This argues for keeping the filter passbands wide.
2) SNR -- higher is better. This argues for keeping the filter passbands narrow: the less the channels overlap, the less the color-correction matrix applied downstream has to amplify noise to separate them (see the sketch after this list).
3) Accurate conversion to human tristimulus color space. The effect of this is difficult to generalize. Here's one specific that comes to mind: the ability to replicate the excitation of the "long" or "red" cone cells by extremely short-wavelength visible light that produces the color violet in the rainbow. I've not seen a camera that does a credible job with this.
4) Availability, cost, and stability of suitable filter materials.
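To make the tension between 2) and 3) concrete: wide, overlapping passbands have to be untangled later by a color-correction matrix, and the bigger that matrix's coefficients, the more it amplifies sensor noise. Here's a minimal sketch, with invented 3x3 mixing matrices standing in for measured filter overlap:

```python
import numpy as np

def noise_gain(mixing):
    """Noise amplification per output channel of the correction matrix
    (the inverse of the channel-mixing matrix), assuming independent,
    equal-variance noise in the raw channels."""
    correction = np.linalg.inv(mixing)
    return np.sqrt((correction ** 2).sum(axis=1))

# Narrow passbands: little crosstalk between raw channels.
narrow = np.array([[1.0, 0.1, 0.0],
                   [0.1, 1.0, 0.1],
                   [0.0, 0.1, 1.0]])

# Wide passbands: heavy overlap, especially between R and G.
wide = np.array([[1.0, 0.7, 0.1],
                 [0.6, 1.0, 0.5],
                 [0.1, 0.6, 1.0]])

print("noise gain, narrow:", np.round(noise_gain(narrow), 2))  # ~1.0x
print("noise gain, wide:  ", np.round(noise_gain(wide), 2))    # ~3x
# The overlapping case needs large off-diagonal coefficients to separate
# the channels, so the corrected image is noisier at the same exposure.
```

That's also why 1) and 2) pull in opposite directions: wide passbands collect more photons, but you pay some of that back when you correct the color.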
The last set of criteria is easier to deal with in a camera that uses beam-splitters and three monochrome sensors rather than a Bayer array or something similar. However, such an approach has adverse cost, size, and ruggedness implications.
You can see how criteria 2) and 4), together with a low valuation of 3) (Velvia, anyone?) could lead a product team to develop a sensor that had the kind of color-specific chromaticity compression that you're talking about.
Your example shows a lot of luminance compression in the red channel, but it also shows luminance compression in the other two channels -- look at the hand. That makes me think that some or all of that compression comes, as you mention, from the image processing pipeline that follows the raw capture. There are lots of places in that chain of operations where such damage might take place.
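For instance, a film-style tone curve applied per channel after raw conversion will do it. The curve below is made up (real pipelines use proprietary curves), but the shoulder shape is typical:

```python
import numpy as np

def shoulder(x, knee=0.7):
    """Identity below the knee, asymptotic rolloff above it.
    An invented curve, just to show the shape of the damage."""
    return np.where(x <= knee,
                    x,
                    knee + (1 - knee) * (1 - np.exp(-(x - knee) / (1 - knee))))

raw_highlights = np.array([0.75, 0.85, 0.95])  # well-separated raw levels
print(np.round(shoulder(raw_highlights), 3))   # -> [0.746 0.818 0.87]
# Equal 0.10 steps in the raw data shrink to 0.072 and 0.052 after the
# curve -- the same bunching-up you see in the hand, in all three channels.
```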
Does that make sense? Or have I missed your point?