All third-party converters use one or two of the methods mentioned above.
Maybe I'm confused, or we're talking about two different parts of the process. The raw image data is in some native camera color space, right? But that is not a colorimetric color space, and it has no single "correct" relationship to colorimetry. How do we get there, and what is the native camera color space? Again, my understanding is that, short of having the spectral sensitivities of the filters, someone has to choose how to convert values from a non-colorimetric color space to a colorimetric one. There are better and worse choices, but no single correct conversion (unless the "scene" you are photographing contains only three independent colorants, as when we scan film). The image has a color space, but do we know what it is? If you know the camera spectral sensitivities, you can estimate the image color space quite well, provided those sensitivities happen to satisfy the Ives-Luther condition, which is unlikely. Only then can we determine the colorimetric color space. So do all the raw converters have this necessary data supplied to them, or, as I'm led to believe, do they make some educated guesses and move on?
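To make the "educated guess" concrete, here is a minimal sketch of the approach most converters take: fit a single 3×3 matrix from camera-native RGB to CIE XYZ by least squares over a set of reference patches. All data and names here are hypothetical, simulated rather than taken from any real camera; the point is only that when the sensor fails the Ives-Luther condition, the fit leaves a residual, so the matrix is a design choice, not a uniquely correct answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: XYZ values of N reference patches,
# as if measured from a color chart.
N = 24
xyz = rng.uniform(0.05, 0.95, size=(N, 3))

# Simulate a camera whose responses are NOT an exact linear transform
# of the CIE color matching functions (i.e., it fails the Ives-Luther
# condition): a linear map plus a small nonlinear perturbation.
M_true = np.array([[0.9, 0.2, 0.1],
                   [0.1, 1.0, 0.05],
                   [0.0, 0.15, 0.85]])
camera_rgb = xyz @ M_true.T + 0.02 * xyz**2  # non-Luther residue

# Least-squares fit of a 3x3 matrix camera_rgb -> xyz. A different
# patch set, illuminant, or weighting would give a different matrix,
# which is one reason two converters render the same scene differently.
M_fit, _, _, _ = np.linalg.lstsq(camera_rgb, xyz, rcond=None)

# Residual error of the best linear fit: nonzero for a non-Luther camera.
fit_error = np.sqrt(np.mean((camera_rgb @ M_fit - xyz) ** 2))
print("RMS fit error:", fit_error)
```

The nonzero residual is the crux: no 3×3 matrix maps this camera's values exactly to colorimetry, so the designer must decide which errors to tolerate, e.g. by weighting skin tones or neutrals more heavily in the fit.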
It is useful for critical photographers to understand this; otherwise they might wonder why they get different results photographing the same scene with different cameras, even when they use the same raw converter, or why different conversions to scene-referred produce different results for the same scene and camera. You need to know there is judgment involved in designing the conversion; otherwise you won't know to run your own evaluations and see which conversion you like best.