Hi Emil,
Nice to see you here. Well, you always understand correctly.

1. 'pure white' in one, with its native illuminant, does not have the same XYZ coordinates as 'pure white' in the other, in its native illuminant; at least when, as in this case, the native illuminants used to specify the two color spaces differ.
Yes, because in their native spaces both have tristimulus values [1,1,1], but in absolute terms (XYZ) they represent different colors.
2. Part of the point of chromatic adaptation is to map whites from different illuminants to one another (providing three constraints on the nine components of a 3x3 adaptation matrix).
Yes: say D65 white [0.95, 1, 1.08] goes to D50 white [0.96, 1, 0.83]. In their respective spaces both have the representation [1,1,1], but they differ in XYZ, of course, i.e.,
(a) [0.95, 1, 1.08] in D65 space has tristimulus values [1,1,1].
(b) [0.96, 1, 0.83] in D50 space has tristimulus values [1,1,1].
(c) [0.95, 1, 1.08] in D50 space does not have tristimulus values [1,1,1]. <------------ This causes confusion for some.
(d) [0.96, 1, 0.83] in D65 space does not have tristimulus values [1,1,1]. <------------ This causes confusion for some.
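The white-to-white mapping in (a)-(d) can be sketched numerically. Below is a minimal Bradford adaptation in Python; the cone-response matrix entries are the standard Bradford values, and the white points are the usual rounded CIE tristimulus values:

```python
import numpy as np

# Standard Bradford cone-response matrix (XYZ -> "cone" coordinates).
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def bradford_cat(src_white_xyz, dst_white_xyz):
    """3x3 matrix adapting XYZ from the source illuminant to the destination."""
    src = BRADFORD @ src_white_xyz
    dst = BRADFORD @ dst_white_xyz
    # Scale each cone channel so the source white lands on the destination white.
    return np.linalg.inv(BRADFORD) @ np.diag(dst / src) @ BRADFORD

d65 = np.array([0.9505, 1.0, 1.0888])
d50 = np.array([0.9642, 1.0, 0.8249])

M = bradford_cat(d65, d50)
print(M @ d65)  # D65 white maps exactly onto D50 white, by construction
```

By construction the matrix sends the source white exactly to the destination white; any other XYZ triple (the primaries, for instance) is moved by the same cone-channel scaling.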
3. Part of color space conversion must involve chromatic adaptation, otherwise there will be a color shift.
Color spaces are related by an affine transformation (okay, linear). That is all that is needed. What many don't realize is that if the color-space primaries are kept the same but the white point is moved, then that transformation is identified by a fancy name - Bradford/von Kries/etc. - i.e., chromatic adaptation.
A color space is just a vector space. Given any 3 linearly independent basis vectors (RGB), one can figure out the coordinates in a different RGB basis. The white point is only used to set the lengths of the "unit vectors", as I explained here:
http://www.luminous-landscape.com/forum/index.php?topic=49940.msg412251#msg412251

4. I would presume that Photoshop works internally with colors already adapted to a common illuminant, as Joofa is doing. Otherwise, in the short-integer representation they use, white in some color spaces might be out of the 16-bit data range, resulting in white not being representable in some color spaces. It would make much more sense to adapt everything to, say, D50. If I am understanding correctly, this is what Joofa is calling AdobeRGB (D50). It is not the same as the AdobeRGB (1998) spec; it will be as close as the chromatic adaptation allows. Does anyone know if this is how Adobe does things (and can tell)?
Digital Dog has mentioned that Adobe is using D50 space in Photoshop. So, I think that will correspond to the following situation in my original note:
Joofa wrote on DPReview:
Fraction of unit stimulus blue ProPhoto RGB primary needed to match unit stimulus blue Adobe RGB primary:
(3) Adobe RGB white point=D50, ProPhoto RGB white point=D50, Fraction needed=0.88
This is the same mode for which, I think, MarkM did his calculation after the Bradford transformation. So you won't see the clipping in this mode, unless the Adobe RGB (D65) blue primary is correctly figured out after Bradford, which is no longer [0,0,1].
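The D50/D50 case can be sketched numerically. The code below builds each RGB-to-XYZ matrix from the published chromaticities (the white point fixes the scale of the primaries, as discussed above), Bradford-adapts Adobe RGB's blue primary from D65 to D50, and expresses it in ProPhoto RGB coordinates; the blue component is the fraction in question. The exact number depends on rounding and on the adaptation method chosen, so treat the printed value as illustrative:

```python
import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def xy_to_xyz(x, y):
    # Chromaticity (x, y) to XYZ with Y = 1.
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

def rgb_to_xyz_matrix(primaries_xy, white_xy):
    """Build RGB->XYZ: scale the primaries so RGB=[1,1,1] hits the white point."""
    P = np.column_stack([xy_to_xyz(*p) for p in primaries_xy])
    scale = np.linalg.solve(P, xy_to_xyz(*white_xy))
    return P * scale  # multiply each primary column by its scale factor

def bradford_cat(src_white, dst_white):
    src, dst = BRADFORD @ src_white, BRADFORD @ dst_white
    return np.linalg.inv(BRADFORD) @ np.diag(dst / src) @ BRADFORD

d65_xy, d50_xy = (0.3127, 0.3290), (0.3457, 0.3585)
adobe = rgb_to_xyz_matrix([(0.64, 0.33), (0.21, 0.71), (0.15, 0.06)], d65_xy)
prophoto = rgb_to_xyz_matrix([(0.7347, 0.2653), (0.1596, 0.8404), (0.0366, 0.0001)], d50_xy)

# Adobe RGB unit blue in XYZ, adapted D65 -> D50, then in ProPhoto coordinates.
blue_d50 = bradford_cat(xy_to_xyz(*d65_xy), xy_to_xyz(*d50_xy)) @ (adobe @ [0.0, 0.0, 1.0])
coords = np.linalg.solve(prophoto, blue_d50)
print(coords)  # the blue component is the fraction of ProPhoto blue needed
```

With these rounded chromaticities the blue coordinate comes out around 0.9, in the same ballpark as the 0.88 quoted above, and in particular below 1, which is why no clipping is seen in this mode.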
Sincerely,
Joofa