From the FAQs of the Munsell Color Science Laboratory:
Question:
"Digital image sensors (such as those used in digital cameras) use red, green, blue ink-based color filters to generate color. Do they therefore have a color gamut that limits the range of colors that they can detect? (255)"
Answer:
"Let's start with the short answer to your question; there is no such thing as a camera, or scanner, gamut. A gamut is defined as the range of colors that a given imaging device can display. To say that a camera had a gamut would be to imply that you could put a color in front of it that it could not possibly respond to. While it is certainly possible that two colors that are visually distinct might be mapped into the same color signals by a camera, that does not mean that the camera could not detect those colors. It just couldn't discriminate them. For example, a monochrome sensor will map all colors into a grayscale image and encode it as such. Certainly the encoding has a gamut (in this case a lightness range with no chroma information), but did the camera responded to all the colors put before it. It is the encoding that imposed the gamut. In the color world, encoding is based on some explicit or implied display. For example, sRGB is a description of a display and therefore defines a gamut (but only if the sRGB values are limited in range). If a camera encodes an image in sRGB, that doesn't mean that the range of colors the camera detected are only from within the sRGB display gamut, but it means the camera data have been transformed to best use that sRGB encoding. As long as a camera has three or more sensors that span the visual spectrum, then it will respond all the same stimuli as our visual system. Whether the camera can discriminate colors as well as the human visual system will depend on the encoding of the camera signals, quantitization, and the details of the camera responsivities. (To return to the black and white system, that camera encodes all the colors into a gray scale. They could then be displayed as any color within a given display, but many colors from the original scene would be mapped to the same values.)
Since there is no such thing as a gamut for an input device, there is no way to compute it or calculate a figure of merit. Generally, the accuracy of color capture devices is assessed through the accuracy of the output values for known inputs, in terms of color differences. Also, sensors are sometimes evaluated in terms of their ability to mimic human visual responses (and therefore be accurate) using quantities with names like colorimetric quality factor, which measure how close the camera responsivities are to linear transformations of the human color matching functions. Doing an internet search on "colorimetric quality factor" will lead you in the right direction."
https://www.rit.edu/cos/colorscience/rc_faq_all.php#255

I'll post this again too:
Digital cameras don't have a gamut, but rather a set of spectral sensitivities that behave like color matching functions. A color matching function is a mathematical representation of a measured color as the amounts of three standard monochromatic RGB primaries needed to duplicate a monochromatic stimulus at each wavelength. Cameras don't have primaries, they have spectral sensitivities, and the difference is important because a camera can capture all sorts of different primaries. Two different primaries may be captured as the same values by a camera, and the same apparent color may be captured as two different values if the spectral power distributions producing it are different. A camera can capture and encode as distinct values some stimuli that are imaginary (not visible) to us; conversely, there are colors we can see but the camera can't distinguish, which are imaginary to it. Most of the colors the camera can "see" we can see as well.
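To make the metamer point concrete, here is a small Python sketch. The Gaussian curves are made-up stand-ins for real camera sensitivities and the CIE color matching functions, so the numbers are illustrative only. It constructs a "metameric black", a spectral difference the observer integrates to zero, and shows two spectra that are identical to the (toy) observer but distinct to the (toy) camera:

```python
import numpy as np

# Made-up Gaussian curves standing in for camera sensitivities and the CIE
# color matching functions (CMFs), sampled every 5 nm. Illustrative only.
wl = np.arange(400, 701, 5, dtype=float)

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

cmfs = np.stack([gaussian(600, 40), gaussian(550, 40), gaussian(450, 30)])  # "x,y,z"
cam  = np.stack([gaussian(610, 35), gaussian(540, 45), gaussian(465, 30)])  # "r,g,b"

# Build a "metameric black": a spectral difference the observer integrates
# to zero. Start from a camera curve and subtract its best fit in the span
# of the CMFs; what remains is invisible to the observer but not the camera.
coef, *_ = np.linalg.lstsq(cmfs.T, cam[0], rcond=None)
black = cam[0] - cmfs.T @ coef

spd1 = gaussian(560, 80)      # a smooth made-up stimulus spectrum
spd2 = spd1 + black           # a metamer: identical XYZ, different spectrum

for name, spd in (("spd1", spd1), ("spd2", spd2)):
    print(name, "XYZ:", np.round(cmfs @ spd, 3),
          "camera RGB:", np.round(cam @ spd, 3))
```

The two spectra print the same XYZ but different camera RGB. Running the construction the other way (start from a CMF and subtract its best fit in the span of the camera curves) yields a pair the observer distinguishes but the camera cannot.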
Yet some cameras can "see colors" outside the spectral locus, though every attempt is usually made to filter those out. More important is the fact that cameras "see colors" inside the spectral locus differently than humans do. I know of no shipping camera that meets the Luther-Ives condition (camera responsivities that are an exact linear transformation of the human color matching functions). This means that cameras exhibit significant observer metameric failure compared to humans. A camera color space differs from a common working color space in that it has no unique one-to-one transform to and from CIE XYZ. This is because the camera has different color filters than the human eye, and thus "sees" colors differently. Any translation from camera color space to CIE XYZ is therefore an approximation.
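The "colorimetric quality factor" mentioned in the FAQ above gets at the same thing. The sketch below is not the exact quality factor from the literature, just a simple least-squares version of the question it asks: how much of the camera's responsivity can be explained as a linear transformation of the color matching functions? Same made-up curves as before:

```python
import numpy as np

wl = np.arange(400, 701, 5, dtype=float)

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

cmfs = np.stack([gaussian(600, 40), gaussian(550, 40), gaussian(450, 30)])
cam  = np.stack([gaussian(610, 35), gaussian(540, 45), gaussian(465, 30)])

# Best 3x3 matrix T expressing the camera curves as linear combinations of
# the CMFs: minimize ||cmfs.T @ T - cam.T|| over wavelength.
T, *_ = np.linalg.lstsq(cmfs.T, cam.T, rcond=None)
fit = cmfs.T @ T

# Fraction of the camera-response "energy" explained by the fitted
# transform. Exactly 1.0 would mean the Luther-Ives condition holds.
quality = 1.0 - np.sum((cam.T - fit) ** 2) / np.sum(cam ** 2)
print(f"fraction of camera response explained: {quality:.4f}")
```

A result of exactly 1.0 would mean a lossless camera-to-XYZ transform exists; anything less means part of the camera's response has no colorimetric interpretation, which is why real camera-to-XYZ conversions are approximations.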
The point is that if you think in terms of camera primaries you can come to many incorrect conclusions, because cameras capture spectrally. Displays, on the other hand, create colors using primaries. Primaries are defined colorimetrically, so any color space defined using primaries is colorimetric. Native (raw) camera color spaces are almost never colorimetric, and therefore cannot be defined using primaries. The measured pixel values don't even produce a gamut until they're mapped into a particular RGB space; before then, *all* colors are (by definition) possible.
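One way to see this: a raw triplet is just three numbers, and "out of gamut" only becomes meaningful once an encoding is chosen. The 3x3 matrix below is invented for illustration and is not a real camera profile:

```python
import numpy as np

# Made-up matrix standing in for a camera-raw -> linear sRGB conversion.
raw_to_srgb = np.array([[ 1.8, -0.6, -0.2],
                        [-0.3,  1.6, -0.3],
                        [ 0.0, -0.5,  1.5]])

raw = np.array([0.10, 0.15, 0.80])   # a saturated blue-ish raw triplet
srgb = raw_to_srgb @ raw
print("linear sRGB:", np.round(srgb, 3))

# A component outside [0, 1] means the color fell outside the sRGB gamut;
# the raw triplet itself was neither inside nor outside anything.
print("inside sRGB gamut:", bool(np.all((srgb >= 0) & (srgb <= 1))))
```

The same triplet pushed through a wider-gamut encoding might land entirely inside [0, 1]. The gamut belongs to the encoding, not to the sensor data.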
Raw image data is in some native camera color space, but it is not a colorimetric color space, and has no single “correct” relationship to colorimetry. The same thing could be said about a color film negative.
Someone has to choose how to convert values in non-colorimetric color spaces to colorimetric ones. There are better and worse choices, but no single correct conversion (unless the "scene" you are photographing has only three independent colorants, as when we scan film).
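A sketch of that choice, reusing the toy curves from above: fit a 3x3 raw-to-XYZ matrix by least squares over a set of training spectra. Different training sets produce different matrices, and neither is the single correct one:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.arange(400, 701, 5, dtype=float)

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

cmfs = np.stack([gaussian(600, 40), gaussian(550, 40), gaussian(450, 30)])
cam  = np.stack([gaussian(610, 35), gaussian(540, 45), gaussian(465, 30)])

def fit_raw_to_xyz(spectra):
    """Least-squares 3x3 matrix taking camera RGB to XYZ over a patch set."""
    rgb = spectra @ cam.T    # (n_patches, 3) camera responses
    xyz = spectra @ cmfs.T   # (n_patches, 3) observer tristimulus values
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M.T               # so that xyz is approximately M @ rgb per patch

# Two different "scenes": broadband random reflectances vs. narrowband spikes.
patches_a = rng.random((24, wl.size))
patches_b = np.stack([gaussian(mu, 15) for mu in range(420, 690, 12)])

print(np.round(fit_raw_to_xyz(patches_a), 3))
print(np.round(fit_raw_to_xyz(patches_b), 3))
```

Each printed matrix is optimal for its own patch set and wrong for the other. Film scanning is the special case: with only three independent colorants, the spectra live in a three-dimensional space and an exact mapping exists.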