Of course I'm toying with you Joofa, but you're a real champ to play along.
I'm not a photographer, but a friend pointed this thread out, thinking I'd enjoy it. And I did. I work in color: I started out in industrial pigment doping and now do advanced detection work that involves some color modeling. I say that not to impress, but to encourage you to lower your defenses a bit and really understand the disagreement here. It's actually a common disagreement, and it generally happens when both sides are only willing to understand their half of the problem. It often boils down to photometry vs. colorimetry vs. radiometry vs. appearance modeling. Google will be your best friend here.
Basically, Joofa, you are giving too much credit to the CIE XYZ model when it comes to color. Honestly, it's a little comical, because you also criticize the model at the same time. Now, I use the word color very specifically to mean the interaction between the human visual system, a light source, and an object. You'll hear this called the color triangle in the literature. Because the CIE model has very specific constraints, one gets into all manner of trouble if one tries to replace the human in the triangle with the CIE model. At its heart, as someone above pointed out, the CIE model only deals with color matching under the same conditions. Again, it is not a model of color; it models color matching, and that's a very important distinction. There's a reason the spectral tristimulus values at the heart of the CIE system are called color matching functions. They are not color defining functions.
It really is useful to talk about absolute colors. But an XYZ tristimulus value is only a color in the same sense that a spectral power distribution is a color. They are both absolute in their own sense, and you are totally right about that, but when you speak of them this way you remove the human from the color triangle, and so you are no longer really talking about color.
These days, we find it much more useful to talk about fundamental tristimulus values, which are based on cone responses rather than 100-year-old color matching data. Believe me, if the CIE had had direct cone response data, they would have used it instead of the color matching functions. A stimulus that produces the same cone response can (with a lot of caveats) be called the same color. The RLAB space does this, and it's not really hard to understand, but the first thing you will notice when you try to go from XYZ to RLAB is a chromatic adaptation matrix, which you seem to really dislike for some reason. The CIECAM02 model requires the same thing (this is instructive: http://www.polybytes.com/misc/Meet_CIECAM02.pdf, and again chromatic adaptation is front and center). All color appearance models have to account for chromatic adaptation, because it's a fact of the human visual system, and the human visual system is central to any conversation about color.
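To make the chromatic adaptation point concrete, here's a minimal sketch of the move every one of these models makes: convert XYZ to cone-like LMS signals, then scale each cone channel by the ratio of the adapting whites (von Kries adaptation). The CAT02 matrix and the D65/D50 white points are the published values; the function names and structure are just mine for illustration.

```python
# Minimal sketch: XYZ -> cone-like LMS via the CAT02 matrix, plus a simple
# von Kries chromatic adaptation. Matrix and white points are the published
# CAT02 / CIE values; everything else is illustrative, not a full CIECAM02.
import numpy as np

# CAT02 "sharpened" cone-response matrix used inside CIECAM02
M_CAT02 = np.array([
    [ 0.7328, 0.4296, -0.1624],
    [-0.7036, 1.6975,  0.0061],
    [ 0.0030, 0.0136,  0.9834],
])

WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])  # XYZ of D65, Y = 1
WHITE_D50 = np.array([0.96422, 1.00000, 0.82521])  # XYZ of D50, Y = 1

def xyz_to_lms(xyz):
    """Cone-like response for a stimulus, via the CAT02 matrix."""
    return M_CAT02 @ np.asarray(xyz, dtype=float)

def von_kries_adapt(xyz, white_src, white_dst):
    """Scale each cone channel by the ratio of the adapting whites --
    the core move of every chromatic adaptation transform."""
    gain = xyz_to_lms(white_dst) / xyz_to_lms(white_src)
    lms_adapted = gain * xyz_to_lms(xyz)
    return np.linalg.inv(M_CAT02) @ lms_adapted

# Sanity check: the D65 white, adapted from D65 to D50 viewing,
# lands on the D50 white.
print(von_kries_adapt(WHITE_D65, WHITE_D65, WHITE_D50))  # ~ [0.96422, 1.0, 0.82521]
```

The point isn't the arithmetic; it's that you cannot compare stimuli seen under different conditions without a step like `von_kries_adapt`, which is exactly the matrix you keep objecting to.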
So now look at your problem from this perspective. Take your blue primary in the Adobe color space and, instead of jumping right into the CIE XYZ model, try to think in terms of cone response.
If you do that, then you'll begin to talk about color rather than about a matching model or an old photometric definition. If you'll go that far with me, you'll see that you can in fact produce that same cone response within the ProPhoto space. Which is to say, the color is available in ProPhoto. That isn't to say your facts are wrong, just that they aren't as applicable to color (by my definition, and most everyone's) as you assert.
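You can check this claim numerically. The sketch below takes the Adobe RGB blue primary, converts it to XYZ, adapts from Adobe's D65 white to ProPhoto's D50 white (Bradford, i.e. another chromatic adaptation matrix), and expresses it in linear ProPhoto coordinates. The matrices are the commonly published ones (Lindbloom's tables); the in-gamut test at the end is my illustration, not something from this thread.

```python
# Sketch: is the Adobe RGB (1998) blue primary representable in ProPhoto RGB?
# Matrices are the standard published ones (Adobe RGB D65 -> XYZ, Bradford
# D65 -> D50 adaptation, XYZ D50 -> ProPhoto linear RGB).
import numpy as np

ADOBE_TO_XYZ = np.array([         # Adobe RGB (1998), D65 white, linear
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
])

BRADFORD_D65_TO_D50 = np.array([  # chromatic adaptation, again unavoidable
    [ 1.0478112, 0.0228866, -0.0501270],
    [ 0.0295424, 0.9904844, -0.0170491],
    [-0.0092345, 0.0150436,  0.7521316],
])

XYZ_TO_PROPHOTO = np.array([      # ProPhoto RGB, D50 white, linear
    [ 1.3459433, -0.2556075, -0.0511118],
    [-0.5445989,  1.5081673,  0.0205351],
    [ 0.0000000,  0.0000000,  1.2118128],
])

blue_adobe = np.array([0.0, 0.0, 1.0])      # Adobe RGB blue primary (linear)
xyz_d65 = ADOBE_TO_XYZ @ blue_adobe         # its XYZ under the D65 white
xyz_d50 = BRADFORD_D65_TO_D50 @ xyz_d65     # adapted to ProPhoto's D50 white
blue_prophoto = XYZ_TO_PROPHOTO @ xyz_d50   # linear ProPhoto coordinates

print(blue_prophoto)                        # roughly [0.147, 0.029, 0.903]
in_gamut = bool(np.all((blue_prophoto >= 0.0) & (blue_prophoto <= 1.0)))
print(in_gamut)                             # True: reproducible in ProPhoto
```

All three ProPhoto components land inside [0, 1], which is the numerical version of my claim: the stimulus (and hence, after adaptation, the cone response) is available in ProPhoto.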