Spectral plots are recorded using a monochromator: you expose frame after frame under near-monochromatic light. A monochromator is an affordable device, but it needs to be calibrated by simultaneous readings with a spectrophotometer, and those are expensive.
The value of these plots is that they show the characteristics of the CFA/sensor/IR-filter combination. There is a lot of talk about filter designs, and these curves illustrate some of the design choices.
I am sorry, I should have chosen my words more carefully. I was not asking how the spectral plots are recorded (I am sure that they are accurate). I was asking whether you were basing your conclusions on the spectral plots in isolation, or on the spectral plots coupled with some correction.
I am guessing that the former is true. In that case, I question how one can suggest (even with disclaimers) that one CFA offers "subtle colour, more like Kodachrome than Velvia look" over another CFA.
My impression is that the physical color filtering mainly affects: 1) the colors that cannot be distinguished (information is lost forever and cannot be reintroduced), and 2) the colors that can be distinguished, but with varying levels of effort needed (and secondary artifacts introduced). The two are probably the same thing, separated only by some system-dependent threshold.
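To make point 1) concrete, here is a toy numpy sketch (all numbers made up, not real CFA data): any spectral component lying in the null space of the sensor's sensitivity matrix produces zero response, so two spectra differing by such a component are indistinguishable to that sensor, and no later processing can separate them.

```python
import numpy as np

# Hypothetical 3x6 CFA sensitivity matrix: 3 channels, 6 spectral bands.
S = np.array([[0.9, 0.6, 0.1, 0.0, 0.0, 0.0],   # "red"-ish channel
              [0.0, 0.3, 0.8, 0.8, 0.3, 0.0],   # "green"-ish channel
              [0.0, 0.0, 0.0, 0.1, 0.6, 0.9]])  # "blue"-ish channel

spectrum = np.array([0.2, 0.5, 0.7, 0.7, 0.5, 0.2])

# Perturb the spectrum along a null-space direction of S
# (a right singular vector with zero singular value: S @ v ~ 0).
null_dir = np.linalg.svd(S)[2][-1]
metamer = spectrum + 0.1 * null_dir

# The two spectra differ, yet the sensor records identical responses.
print(np.allclose(S @ spectrum, S @ metamer))   # True
print(np.allclose(spectrum, metamer))           # False
```

The information lost this way is exactly the part no amount of "effort" in category 2) can recover.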
My point is somewhat similar to the Nikon D800 vs D800E discussion, with the color domain swapped for the spatial domain. By removing the OLPF, one gets more "selective" sensels (higher acutance). If such acutance is what the user wants, then there might be some advantage to doing it early in the recording chain (by removing the OLPF) rather than later. But to really know, one has to do a side-by-side comparison, process both either "optimally" or "using the tools that a given user will use in practice", and see if and to what degree sharpening/deconvolution is able to equalize the differences. I believe that Bart has done just such an exercise in the spatial domain.
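The OLPF analogy can be sketched in one dimension (idealized toy code, not a model of any real camera): a mild linear blur applied early can, in the noiseless case, be undone exactly later by an inverse filter, so whether it matters in practice comes down to noise and to how close the blur's frequency response gets to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal(256)

# Toy "OLPF": a 3-tap low-pass kernel, applied circularly so the
# frequency-domain inverse is exact. Its response 0.6 + 0.4*cos(w)
# never reaches zero, so no frequency is lost outright.
kernel = np.zeros(256)
kernel[[0, 1, -1]] = [0.6, 0.2, 0.2]

H = np.fft.rfft(kernel)
blurred = np.fft.irfft(np.fft.rfft(signal) * H)

# Inverse filtering recovers the original (no noise, H nowhere ~0).
restored = np.fft.irfft(np.fft.rfft(blurred) / H)

print(np.allclose(signal, restored))   # True in this idealized case
```

With a kernel like [0.5, 0.25, 0.25] the response is exactly zero at Nyquist, and that frequency is gone for good; with noise added, dividing by a small H amplifies it, which is the practical limit on "equalizing by deconvolution".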
One will probably find that in some cases the differences are negligible (the system's linear errors are small enough that they can be compensated in practice without banging your head against recording nonlinearity, noise, or characterization errors), while in other cases the differences are significant. I imagine that a color filter may well be very "wrong", but as long as a nice transformation can make the result right, the end user will be happy. A trivial example is swapping the red and blue channels (very "wrong" colors, but easy to fix). A slightly less trivial example might be rotated primaries. "CMY" color filters may be very "wrong", but if the output can be transformed into some standard RGB representation through a well-behaved linear matrix (or a full 3-D lookup) without any regions being "stretched too much" (noise amplification) or "compressed too much" (wasted recording precision), things could conceivably be fine? (I don't know that CMY can actually be used; I am suggesting it as an example.)
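The "easy to fix" vs "noise amplification" distinction can be put in numbers with a toy numpy sketch (illustrative matrices only, not real filter data): the correction matrix's row norms tell you how much each output channel amplifies independent per-channel recording noise.

```python
import numpy as np

# Case 1: red/blue swap. "Very wrong" colors, trivially fixed:
# the correction is the swap itself (a permutation is its own inverse).
swap = np.array([[0., 0., 1.],
                 [0., 1., 0.],
                 [1., 0., 0.]])
fix_swap = np.linalg.inv(swap)

# Case 2: a hypothetical idealized CMY mixing of RGB.
cmy = np.array([[0., 1., 1.],   # cyan    ~ G + B
                [1., 0., 1.],   # magenta ~ R + B
                [1., 1., 0.]])  # yellow  ~ R + G
fix_cmy = np.linalg.inv(cmy)

def noise_gain(m):
    # If each recorded channel carries independent unit-variance noise,
    # output channel i has standard deviation sqrt(sum_j M[i,j]^2),
    # i.e. the row norms of the correction matrix M.
    return np.sqrt((m ** 2).sum(axis=1))

print(noise_gain(fix_swap))   # [1. 1. 1.]: no amplification at all
print(noise_gain(fix_cmy))    # ~0.87 per channel with these toy numbers
```

A correction matrix with large row norms is what "stretched too much" looks like numerically; real CMY sensitivities (and their photon-collection advantage) would of course change the numbers.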
-h