Of course it is not bullshit, and you know it. The raw camera RGB is profiled, and once profiled it can be expressed as a gamut by plotting the hull of the values and calculating its size.
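For the record, here is a minimal sketch of what "plotting the hull and calculating the size" could look like in practice; the 3x3 matrix below is a made-up placeholder, not any real camera profile:

```python
# Sketch: estimate the "size" of profiled camera RGB as a convex-hull volume.
# The 3x3 matrix here is a placeholder, NOT a real camera profile.
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical profile matrix mapping linear camera RGB -> CIE XYZ.
camera_to_xyz = np.array([
    [0.60, 0.25, 0.10],
    [0.30, 0.65, 0.05],
    [0.05, 0.10, 0.80],
])

# Sample the camera encoding cube and map it through the profile.
steps = np.linspace(0.0, 1.0, 11)
rgb = np.array(np.meshgrid(steps, steps, steps)).reshape(3, -1).T
xyz = rgb @ camera_to_xyz.T

# Hull volume in XYZ is one way to express the "gamut size".
hull = ConvexHull(xyz)
print(f"hull volume in XYZ: {hull.volume:.4f}")
```

With a purely linear 3x3 profile the hull is just the transformed RGB cube, but the same hull-and-volume approach applies to LUT-based profiles as well.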
When I stated we both knew it was BS, I assumed you understood the subject; sorry.
Raw image data is in some native camera color space, but it is not a colorimetric color space, and it has no single “correct” relationship to colorimetry. The same could be said of a color film negative. Someone has to choose how to convert values in a non-colorimetric color space to a colorimetric one. That choice is what I've asked about, and the web page you provided didn't answer the question. There are better and worse choices, but there is no single correct conversion (unless the “scene” you are photographing has only three independent colorants, as when we scan film).

Cameras don't have primaries, they have spectral sensitivities, and the difference is important because a camera can capture all sorts of different primaries. Two different primaries may be captured as the same values by a camera, and the same primary may be captured as two different values (if the spectral power distributions of the primaries differ). A camera has colors it can capture and encode as unique values that are imaginary (not visible) to us, and there are colors we can see that the camera can't capture, which are imaginary to it. Most of the colors the camera can “see” we can see as well. Some cameras can “see” colors outside the spectral locus, though every attempt is usually made to filter those out. Most important is the fact that cameras “see” colors inside the spectral locus differently than humans do.
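To make the "captured as the same values" point concrete, here is a rough sketch; the Gaussian sensitivity curves are invented for illustration, not taken from any real camera or observer. Any spectral perturbation in the null space of the camera's sensitivities leaves the camera's response unchanged, while a differently-filtered observer can still see a difference:

```python
# Sketch: camera metamerism via the null space of its spectral sensitivities.
# The Gaussian curves below are made up; no real camera is modeled here.
import numpy as np
from scipy.linalg import null_space

wl = np.arange(400, 701, 10)  # wavelengths, nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical camera sensitivities (rows: R, G, B channels).
S = np.stack([gaussian(600, 40), gaussian(540, 40), gaussian(460, 40)])

# A flat test spectrum, plus a perturbation from the null space of S.
spd = np.ones_like(wl, dtype=float)
n = null_space(S)[:, 0]                 # S @ n == 0 by construction
spd2 = spd + 0.5 * n / np.abs(n).max()  # still a non-negative spectrum

# The camera cannot distinguish the two spectra...
print("camera RGB 1:", S @ spd)
print("camera RGB 2:", S @ spd2)        # identical up to rounding

# ...but an observer with different sensitivities generally can.
S_eye = np.stack([gaussian(610, 50), gaussian(550, 50), gaussian(450, 30)])
print("observer 1:", S_eye @ spd)
print("observer 2:", S_eye @ spd2)      # differs
```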
No shipping camera that I know of meets the Luther-Ives condition. This means that cameras exhibit significant observer metamerism with respect to humans. The camera color space differs from a more common working color space in that it does not have a unique one-to-one transform to and from CIE XYZ. This is because the camera has different color filters than the human eye, and thus "sees" colors differently. Any translation from camera color space to CIE XYZ is therefore an approximation.
The point is that if you think in terms of camera primaries you can come to many incorrect conclusions, because cameras capture spectrally. Displays, on the other hand, create colors using primaries. Primaries are defined colorimetrically, so any color space defined using primaries is colorimetric. Native (raw) camera color spaces are almost never colorimetric, and therefore cannot be defined using primaries. It follows that the measured pixel values don't even produce a gamut until they're mapped into a particular RGB space; before then, *all* colors are (by definition) possible.
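The display case shows why primaries make a space colorimetric: given the primaries' chromaticities and a white point, the RGB-to-XYZ matrix is fully determined. A sketch using the published Rec.709/sRGB chromaticities:

```python
# Sketch: why primaries make a color space colorimetric. Given primary
# chromaticities and a white point, RGB -> XYZ is a fixed 3x3 matrix.
import numpy as np

def xy_to_xyz(x, y):
    # Chromaticity (x, y) -> XYZ with Y = 1.
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

# Rec.709/sRGB primaries and D65 white (published chromaticities).
prim = np.stack([xy_to_xyz(0.64, 0.33),    # red
                 xy_to_xyz(0.30, 0.60),    # green
                 xy_to_xyz(0.15, 0.06)]).T # blue; columns = primaries
white = xy_to_xyz(0.3127, 0.3290)

# Scale each primary so that RGB = (1, 1, 1) reproduces the white point.
scale = np.linalg.solve(prim, white)
rgb_to_xyz = prim * scale                  # broadcasting scales the columns

print(np.round(rgb_to_xyz, 4))             # the familiar sRGB matrix
```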
So no, whatever processing COLOR SPACE I've asked about isn't the same as, or even similar to, what is captured by a camera sensor! Adobe, for whatever reason, has no issue telling its users what the actual color space used for processing is: ProPhoto RGB primaries with a 1.0 TRC. C1's page was written with ambiguities and marketing BS.
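And that Adobe statement is easy to make concrete: the published ProPhoto (ROMM) primaries, a D50 white, and a linear (gamma 1.0) TRC pin the processing space down completely. Using the same primaries-to-matrix construction as above:

```python
# Sketch: the processing space Adobe documents, made explicit. ProPhoto
# (ROMM) RGB primaries, D50 white, and a gamma-1.0 (linear) TRC.
import numpy as np

def xy_to_xyz(x, y):
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

# Published ProPhoto/ROMM chromaticities.
prim = np.stack([xy_to_xyz(0.7347, 0.2653),    # red
                 xy_to_xyz(0.1596, 0.8404),    # green
                 xy_to_xyz(0.0366, 0.0001)]).T # blue
white = xy_to_xyz(0.3457, 0.3585)              # D50

scale = np.linalg.solve(prim, white)
prophoto_to_xyz = prim * scale

# A 1.0 TRC means the encoding is linear: no tone curve is applied,
# so this single matrix fully describes the colorimetry.
print(np.round(prophoto_to_xyz, 4))
```

That is the kind of unambiguous disclosure the C1 page could have given in one sentence.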