The next question is whether any differences in colour rendition depend on image processing, and whether those differences can be corrected by proper profiles or not.
I think Torger answered this question with a definite yes, with the usual cautionary qualifications. I think I can see why: DNG profiles are designed to let us take any input and turn it into almost any output via double round trips to HSV, with stopovers for additional PP (go wild with those sliders, Ken), although compared to a nice, pure linear matrix that seems at first glance to be a little bit like cheating. This is where I wish we had a bit more clarity from the cognoscenti:
When the goal is to objectively characterize HARDWARE quantitatively, as opposed to subjectively achieving pleasing output qualitatively - answering questions that depend on the CFA Spectral Sensitivity Functions, like sensor gamut, colour discrimination, separation, and orthogonality - shouldn't we be able to calculate that information straight from the forward 'compromise' matrix? Hasn't anyone come up with any such practical metrics yet, besides the useless SMI? When someone says 'that camera has poor yellow discrimination', shouldn't we be able to point to a ratio or other combination of terms in the compromise matrix and say yay or nay? Please show the math in your answer.
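To make the question concrete, here is a toy sketch of the kind of thing I have in mind. The matrix values below are invented for illustration (not any real camera's forward matrix), and the two numbers it computes - angles between the matrix's columns as a crude channel-overlap proxy, and the matrix condition number as a noise-amplification proxy - are my own naive candidates, not established metrics. I am asking whether something along these lines can be made rigorous:

```python
import numpy as np

# Hypothetical forward (compromise) matrix: maps camera raw RGB to XYZ.
# These values are made up purely for illustration.
M = np.array([
    [0.7347, 0.1527, 0.1126],
    [0.2730, 0.7263, 0.0007],
    [0.0070, 0.1167, 0.6763],
])

def channel_angles_deg(M):
    """Angles between the columns of M, in degrees.

    Nearly parallel columns would suggest two channels contribute
    almost the same thing to the output, i.e. poor separation.
    """
    cols = M / np.linalg.norm(M, axis=0)  # normalize each column
    names = "RGB"
    angles = {}
    for i in range(3):
        for j in range(i + 1, 3):
            cos = float(np.clip(cols[:, i] @ cols[:, j], -1.0, 1.0))
            angles[names[i] + names[j]] = float(np.degrees(np.arccos(cos)))
    return angles

# Condition number: how strongly errors/noise get amplified when the
# matrix is inverted back toward a colorimetric space.
cond = float(np.linalg.cond(M))

print(channel_angles_deg(M))
print(f"condition number: {cond:.2f}")
```

Whether either number actually correlates with, say, perceived yellow discrimination is exactly the open question.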