Hi Doug,
That's just me thinking floating point vs unsigned integers. Many folks seem to over-think the in/out-of-working-gamut issue, which is actually pretty simple both in principle and in practice: start by visualizing a cube, end by visualizing a cube; whatever image information falls outside the ending cube is out of gamut. That's it.
Some free-wheeling thoughts. Assuming linearity for the sake of simplicity, image information:
1) starts out as a plane of raw data (say rggb);
2) is assembled, after white balance and demosaicing, into an initial rgb cube with origin at [0,0,0]; and
3) is still the same cube, albeit viewed from a different perspective, when it is stored in the file to be displayed. The final point of view is specified by the relevant spec (e.g. Adobe RGB).
So how can image information end up falling out of the final cube? There are only two ways (they can actually be considered one and the same):
i) White balance
ii) The change-of-perspective matrix multiplication.
White balance multipliers are what they are given the sensor and the illuminant. For my 5DSR ISO100 example above they were
1 / [0.45819,1,0.65374]
for r, g and b respectively. This means that, absent some brightness manipulation, image information from the red CFA channel greater than 45.8% of full scale will be out of gamut (similarly, blue greater than 65.4% and green greater than 100% of full scale).
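The clipping thresholds above can be sketched numerically. A minimal example using the 5DS R ISO 100 multipliers quoted above (the sample raw tone is my own, for illustration):

```python
import numpy as np

# White balance multipliers for the 5DS R ISO 100 example:
# 1 / [0.45819, 1, 0.65374] for r, g, b.
wb_mult = 1.0 / np.array([0.45819, 1.0, 0.65374])

# A hypothetical raw tone at 50% of full scale in all three CFA channels.
raw_rgb = np.array([0.50, 0.50, 0.50])
wb_rgb = raw_rgb * wb_mult

# Red exceeds 1.0 (0.50 > 0.45819), so it is already outside the
# normalized cube; blue stays inside (0.50 < 0.65374), as does green.
print(wb_rgb)
print(np.any(wb_rgb > 1.0))  # True: this tone would clip if clamped here
```

Keeping such values as unclamped floats is what makes the later matrix step lossless.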
After white balancing (so the initial cube will have origin at [0,0,0] and normalized vertex at [1,1,1]) only positive normalized values between 0 and 1 are part of the color space cube, everything else is out of gamut. The wbraw->aRGB compromise color matrix for this case happens to be:
[ 1.4047  -0.3878  -0.0169
 -0.2301   1.7805  -0.5504
  0.0043  -0.4620   1.4577 ]
This matrix projects all rgb image data in the initial cube 2) to the final cube 3) above. Note for instance that the origin of the initial cube [0,0,0] also maps to [0,0,0] in the final cube; and the initial normalized vertex [1,1,1] maps to [1,1,1] in the final cube, because each row of the matrix sums to 1. Black to black, white to white, so far so good.
But it is also obvious that the projection may push some image data out of the final RGB cube. For instance a full green CFA input signal with no red and blue, [0,1,0] in the initial cube, lands on a tone outside the final cube: [-0.3878 1.7805 -0.4620] (the matrix's second column). Those coordinates are not all between zero and one, so the tone is out of gamut, and many nearby tones are pushed out with it.
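Both observations are easy to check numerically. A quick sketch using the compromise matrix quoted above (the `in_gamut` helper is just my shorthand for "inside the unit cube"):

```python
import numpy as np

# wbraw -> Adobe RGB compromise color matrix from the example above.
M = np.array([[ 1.4047, -0.3878, -0.0169],
              [-0.2301,  1.7805, -0.5504],
              [ 0.0043, -0.4620,  1.4577]])

def in_gamut(rgb, eps=1e-9):
    """A tone is in gamut iff every coordinate lies in [0, 1]."""
    return bool(np.all(rgb >= -eps) and np.all(rgb <= 1 + eps))

black = M @ np.array([0.0, 0.0, 0.0])   # origin maps to origin
white = M @ np.array([1.0, 1.0, 1.0])   # rows sum to 1, so white -> white
green = M @ np.array([0.0, 1.0, 0.0])   # picks out the second column

print(white)                  # ~[1, 1, 1] up to float rounding
print(green, in_gamut(green)) # [-0.3878 1.7805 -0.462] is out of gamut
```
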
That's all there is to it, no magic. Just in or out of the final RGB cube, which in this example is Adobe RGB.
My earlier point was simply that if one sticks to floating point and retains all values (less than zero, greater than one and all) until the final conversion, the choice of matrix (hence of working color space and its gamut size) makes no difference whatsoever to tone 'accuracy'. No need to work in PP, for instance. Even so, I would want a working color space with a gamut similar to that of the monitor on which the image is being viewed while adjusted, so that one can see what one is doing. For most of us these days that means, at best, a true 8-bit video path to an Adobe RGB monitor.
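That float-retention point can be demonstrated in a few lines: route the same tone through an arbitrary (hypothetical, randomly generated) intermediate working-space matrix W and on to Adobe RGB, keeping all out-of-range values, and the result matches the direct conversion to within float precision:

```python
import numpy as np

# wbraw -> Adobe RGB compromise matrix from the example above.
M = np.array([[ 1.4047, -0.3878, -0.0169],
              [-0.2301,  1.7805, -0.5504],
              [ 0.0043, -0.4620,  1.4577]])

# Stand-in for any intermediate working space: a random invertible 3x3.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3)) + np.eye(3)

x = np.array([0.0, 1.0, 0.0])            # full green CFA tone

direct = M @ x                           # wbraw -> aRGB in one step
via_W = (M @ np.linalg.inv(W)) @ (W @ x) # wbraw -> W -> aRGB, unclamped

# Because no values were clipped along the way, the two paths agree.
print(np.allclose(direct, via_W))        # True
```

The equality breaks only if intermediate values are clamped to [0,1] (or quantized to unsigned integers), which is exactly why the working-space gamut matters in integer pipelines but not in an unclamped float one.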
Cheers,
Jack