No, it's not.
Yes it is.
As others keep pointing out, there is no Adobe RGB with a D50 white point.
Selection of a white point is independent of the selection of the primaries. Adobe RGB primaries with the same chromaticity coordinates can be used with either a D65 white point or a D50 white point. It is a vector space: you are given the directions of three basis vectors (the RGB chromaticity coordinates), but you don't know how long a "unit vector" is. The white point sets the units. This concept is an important one, and apparently it is being missed by you and others.
With this understanding, Adobe RGB with D65 and Adobe RGB with D50 are two different coordinate systems in the same 3D space.
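To make the "white point sets the units" argument concrete, here is a sketch in Python/NumPy of the standard construction of an RGB-to-XYZ matrix from primary chromaticities plus a white point. The Adobe RGB primary chromaticities and the white-point XYZ values are the commonly quoted reference ones, not taken from this thread:

```python
import numpy as np

# Adobe RGB (1998) primaries as xy chromaticities (from the spec)
xy = {"R": (0.6400, 0.3300), "G": (0.2100, 0.7100), "B": (0.1500, 0.0600)}

def rgb_to_xyz_matrix(xy, white_XYZ):
    """Build the RGB->XYZ matrix for given primaries and white point.

    Each primary's chromaticity fixes only a direction in XYZ space;
    the white point fixes the lengths, by requiring [1,1,1] -> white_XYZ.
    """
    # Unscaled basis vectors: X = x/y, Y = 1, Z = (1-x-y)/y per primary
    M = np.array([[x / y, 1.0, (1 - x - y) / y]
                  for x, y in (xy["R"], xy["G"], xy["B"])]).T
    # Solve for per-primary scale factors S so that the scaled columns
    # sum to the white point, i.e. M @ S = white_XYZ
    S = np.linalg.solve(M, white_XYZ)
    return M * S  # scale each column by its factor

D65 = np.array([0.95047, 1.0, 1.08883])  # D65 white in XYZ (2-deg observer)
D50 = np.array([0.96422, 1.0, 0.82521])  # D50 white in XYZ

print(rgb_to_xyz_matrix(xy, D65))  # reproduces the familiar D65 matrix
print(rgb_to_xyz_matrix(xy, D50))  # same directions, different unit lengths
```

One caveat: feeding D50 into this construction gives a perfectly valid matrix, but it is not the same as the Bradford-adapted "AdobeRGB D50" matrix used later in the thread, since chromatic adaptation does more than rescale the basis vectors.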
If you want to compare spaces with different white points, you need to convert to a common white point.
To go from AdobeRGB to ProPhotoRGB you need to account for the different white points. The AdobeRGB D50 matrix is just a convenience to remove that step from the calculations. You can do the same thing using the D65 matrix, but then you need to explicitly perform the chromatic adaptation yourself.
AdobeRGB -> XYZ using the D50 matrix (what I originally did):
[0.6097559 0.2052401 0.1492240]
[0.3111242 0.6256560 0.0632197] x [0 0 1.0] = [ 0.149224 0.0632197 0.7448387]
[0.0194811 0.0608902 0.7448387]
AdobeRGB -> XYZ using the D65 matrix:
[0.5767309 0.1855540 0.1881852]
[0.2973769 0.6273491 0.0752741] x [0 0 1.0] = [ 0.1881852 0.0752741 0.9911085]
[0.0270343 0.0706872 0.9911085]
But now you need to explicitly account for D50 if you want to move to a D50 space or compare to a D50 space (using Bradford):
[1.0478112 0.0228866 -0.0501270]
[0.0295424 0.9904844 -0.0170491] x [ 0.1881852 0.0752741 0.9911085] = [ 0.14922403 0.06321976 0.74483862]
[-0.0092345 0.0150436 0.7521316]
See, same result. This is all the AdobeRGB D50 matrix is doing.
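The two calculation paths above can be checked in a few lines of Python/NumPy (matrix values as quoted above, from Lindbloom's tables):

```python
import numpy as np

# AdobeRGB -> XYZ, D50-referenced matrix (Bradford-adapted)
M_d50 = np.array([[0.6097559, 0.2052401, 0.1492240],
                  [0.3111242, 0.6256560, 0.0632197],
                  [0.0194811, 0.0608902, 0.7448387]])

# AdobeRGB -> XYZ, native D65 matrix
M_d65 = np.array([[0.5767309, 0.1855540, 0.1881852],
                  [0.2973769, 0.6273491, 0.0752741],
                  [0.0270343, 0.0706872, 0.9911085]])

# Bradford chromatic adaptation matrix, D65 -> D50
M_brad = np.array([[ 1.0478112, 0.0228866, -0.0501270],
                   [ 0.0295424, 0.9904844, -0.0170491],
                   [-0.0092345, 0.0150436,  0.7521316]])

blue = np.array([0.0, 0.0, 1.0])  # pure Adobe RGB blue

one_step = M_d50 @ blue              # D50 matrix directly
two_step = M_brad @ (M_d65 @ blue)   # D65 matrix, then Bradford adaptation
print(one_step)  # ~[0.149224, 0.0632197, 0.7448387]
print(two_step)  # same XYZ, up to rounding
```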
What the above calculation shows is this: if you take a 100% reflector, shine D65 light on it, and measure the XYZ tristimulus of the blue in an RGB mixture matched to D65, you will get blue = [0.1881852 0.0752741 0.9911085]. If instead of D65 you had shone D50 on the same reflector, you would have measured [0.14922403 0.06321976 0.74483862]. But these two colors are different in absolute terms.
What the Bradford transformation says is that you don't need to shine a D50 light and measure the XYZ. If you have the D65 tristimulus, you can convert from D65 to D50, with certain assumptions about human vision in mind regarding the consistency of neutral/gray colors.
And this is what you have shown. But this is not what I'm after.
Instead of a reflector, suppose there is a source that emits two colors, A = [0.1881852 0.0752741 0.9911085] and B = [0.14922403 0.06321976 0.74483862] in XYZ, which are, in absolute terms, two different colors. If you measure A with Adobe RGB primaries scaled to the D65 white point, you get [0, 0, 1]. If you measure B with Adobe RGB primaries scaled to the D50 white point, you also get [0, 0, 1]. But what if you measure A with D50 and B with D65? You, of course, don't get [0, 0, 1], but some other numbers, which can be calculated but are not important right now.
Now let's try to measure A and B in ProPhoto RGB scaled to the D50 white point. You will find that A needs more than a unit amount of ProPhoto RGB blue, while B needs less. This means that A cannot be represented in ProPhoto (D50), but B can. Yet A can be represented in Adobe RGB (D65) as [0, 0, 1]. So this is a color which has a representation in Adobe RGB (D65) but not in ProPhoto RGB (D50) without clipping.
There is no need for a Bradford transformation here, as we are directly measuring two different colors, A and B, in a measurement system using ProPhoto RGB scaled to the D50 white point.
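This claim about A and B can be verified directly. The sketch below uses the linear XYZ-to-ProPhoto-RGB (D50) matrix as quoted by Lindbloom (an assumption, since that matrix does not appear earlier in this thread):

```python
import numpy as np

# XYZ -> ProPhoto RGB (D50) matrix, per Lindbloom (linear, no gamma)
M_xyz_to_pp = np.array([[ 1.3459433, -0.2556075, -0.0511118],
                        [-0.5445989,  1.5081673,  0.0205351],
                        [ 0.0000000,  0.0000000,  1.2118128]])

A = np.array([0.1881852, 0.0752741, 0.9911085])     # Adobe RGB blue at D65
B = np.array([0.14922403, 0.06321976, 0.74483862])  # its D50-adapted counterpart

pp_A = M_xyz_to_pp @ A
pp_B = M_xyz_to_pp @ B
print(pp_A)  # blue channel ~1.201 -> outside ProPhoto (D50), needs clipping
print(pp_B)  # blue channel ~0.903 -> inside ProPhoto (D50)
```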
Again, there is no AdobeRGB D50.
I hope by now you know that one can be constructed as easily as one with D65!
Making a statement like "[0 0 1] in Adobe RGB with D50" doesn't make any sense.
Again, I hope by now you understand.