Isn't this all about semantics?
The raw data as such don't have a color space in the sense of having a meaning in terms of the PCS; they're just values.
The matrix you describe acts as an input profile to map linearized camera values to PCS.
If the raw data were to have no meaning with respect to the PCS, then obviously it would not be possible to convert them to the PCS. Since they do have a relationship to the PCS, it is possible to convert from the camera space to CIE XYZ by a standard three by three matrix conversion. Here is an excerpt from the source code of DCRAW:
/*
   Thanks to Adobe for providing these excellent CAM -> XYZ matrices!
*/
void CLASS adobe_coeff (char *make, char *model)
{
  static const struct {
    const char *prefix;
    short black, trans[12];
  } table[] = {
    /* ... */
    { "NIKON D200", 0,
      { 8367,-2248,-763,-8758,16447,2422,-1527,1550,8053 } },
    /* ... */
As you can see, these values form a 3 by 3 matrix to convert from the Nikon D200 colorspace to CIE XYZ. Once that is accomplished, another transformation to the working space (e.g. Adobe RGB) can be performed. The details are described by [url=http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html#RTFToC18]Poynton[/url]. What is the difference?
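To make the two-step conversion concrete, here is a sketch in C. The function names and the way the steps are chained are mine, not dcraw's; the XYZ to Adobe RGB matrix is the standard published one for a D65 white point. A real raw converter also applies white balance and normalizes the matrix rows first, which this sketch omits.

```c
/* out = m * v, with m a row-major 3x3 matrix */
static void mat3_apply(const double m[9], const double v[3], double out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = m[3*r]*v[0] + m[3*r+1]*v[1] + m[3*r+2]*v[2];
}

/* D200 coefficients from the dcraw table, stored as integers x10000 */
static const short d200_trans[9] =
    { 8367,-2248,-763,-8758,16447,2422,-1527,1550,8053 };

/* XYZ -> linear Adobe RGB (1998), D65 white point (standard values) */
static const double xyz_to_adobe_rgb[9] = {
     2.0413690, -0.5649464, -0.3446944,
    -0.9692660,  1.8760108,  0.0415560,
     0.0134474, -0.1183897,  1.0154096
};

/* Linear camera RGB -> CIE XYZ -> linear Adobe RGB */
void d200_to_adobe_rgb(const double cam[3], double xyz[3], double argb[3])
{
    double cam_to_xyz[9];
    for (int i = 0; i < 9; i++)
        cam_to_xyz[i] = d200_trans[i] / 10000.0;  /* rescale shorts */
    mat3_apply(cam_to_xyz, cam, xyz);
    mat3_apply(xyz_to_adobe_rgb, xyz, argb);
}
```

The point is simply that both hops are ordinary 3x3 matrix multiplications, exactly as Poynton describes for any RGB-to-RGB conversion through XYZ.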
As Bruce Fraser explained in Real World Photoshop CS2, a custom RGB space contains three elements: Gamma, White Point, and Primaries.
In the raw file some of these are implicit: the gamma is 1.0 (the data are linear), the white point is described by the white balance data in the raw file, and the primaries are described by the matrix. What is missing?
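The mapping from Fraser's three elements onto the raw workflow can be sketched as follows. This is illustrative code of my own; the multiplier values are hypothetical placeholders, where a real converter reads them from the raw file's white-balance tags.

```c
/* Gamma = 1.0: raw data are linear, so no tone curve is applied. */

/* White point: per-channel multipliers from the raw file's white
   balance data scale the linear channels so that a neutral subject
   comes out with R = G = B. */
void apply_white_balance(double rgb[3], const double mul[3])
{
    for (int i = 0; i < 3; i++)
        rgb[i] *= mul[i];
}

/* Primaries: carried by the camera -> XYZ matrix, as in the dcraw
   table quoted above. */
```

White balance is applied to the linear data before the matrix, which is why the white point can live in the raw file's metadata rather than in the matrix itself.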