I don’t think Eric is trying to correct anything, but rather to describe an initial state. This is supposed to work in a raw converter, which doesn’t circumvent rendering to taste.
I understand. Using a monochromator for characterizing a camera may be useful. Capturing an on-scene spectrum, however, currently has little or no meaning, either for image processing or for the photographer.
Just to think about it without going too deeply into technical detail: if you capture an on-scene spectrum, does it represent the scene average, a weighted average, or perhaps a point-metering value? Does that point-metering value represent the actual light, the on-scene lighting condition, or something else? Can the resulting spectrum, or a tri-color value derived from it, be used for gray balancing? And would it yield a different result than a tri-color value derived by conventional methods?
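To make the "derived tri-color" question concrete, here is a minimal sketch of how a captured spectrum could be reduced to a single tristimulus triple (CIE XYZ) by numerical integration against color-matching functions. The single-lobe Gaussian CMF approximations below are crude, illustrative stand-ins for the real tabulated CIE 1931 data, just to keep the example self-contained; a real pipeline would use the published tables and the camera's own spectral sensitivities.

```python
import math

def gauss(wl, mu, sigma):
    """Unnormalized Gaussian, used to roughly mimic a CMF lobe."""
    return math.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Very rough approximations of the CIE 1931 color-matching functions
# (illustrative only -- real work would use the tabulated data).
def xbar(wl): return 1.06 * gauss(wl, 599, 38) + 0.36 * gauss(wl, 446, 19)
def ybar(wl): return 1.01 * gauss(wl, 558, 47)
def zbar(wl): return 1.78 * gauss(wl, 449, 22)

def spectrum_to_xyz(spectrum, lo=380, hi=730, step=5):
    """Integrate a spectral power distribution (a function of wavelength
    in nm) against the CMFs to get an XYZ tristimulus triple."""
    X = Y = Z = 0.0
    wl = lo
    while wl <= hi:
        s = spectrum(wl)
        X += s * xbar(wl) * step
        Y += s * ybar(wl) * step
        Z += s * zbar(wl) * step
        wl += step
    return X, Y, Z

# A flat (equal-energy) spectrum should land roughly near the
# equal-energy white point x = y ~ 1/3 (only roughly, given the
# crude CMF approximations above).
X, Y, Z = spectrum_to_xyz(lambda wl: 1.0)
x, y = X / (X + Y + Z), Y / (X + Y + Z)
```

Note that the integration collapses the whole spectrum into three numbers, which is exactly why the weighting question matters: a scene-average spectrum and a point-metered spectrum would generally integrate to different tri-color values.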
Would a photographer who actually needed such precision perhaps be better helped by simply carrying a spectrophotometer?
I'm just wondering. I'm not particularly questioning what Eric W and the others are developing here. Do you know what the current status of those developments is?