Here is what I think at the moment:
1. If a sensor is blind to some color, you cannot put it back into the images.
2. If a sensor is too strongly sensitive to a color, you can scale that color back in the imagery mathematically, provided the sensor can discriminate it.
3. If a sensor is under-sensitive to a color, you can intensify it mathematically, provided again that you can discriminate it.
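Cases (2) and (3) amount to per-channel gains applied to the raw data. A minimal sketch in NumPy (the gain values are invented for illustration, not taken from any real sensor):

```python
import numpy as np

# Hypothetical raw image: 4x4 pixels, channels-last RGB, linear values in [0, 1].
raw = np.random.default_rng(0).uniform(0.0, 1.0, size=(4, 4, 3))

# Case (2): over-sensitive red -> scale the red channel back (gain < 1).
# Case (3): under-sensitive blue -> intensify the blue channel (gain > 1).
gains = np.array([0.8, 1.0, 1.5])  # made-up per-channel multipliers (R, G, B)

corrected = np.clip(raw * gains, 0.0, 1.0)
```

This only works on data the sensor actually discriminated; in case (1), a blind channel, there is nothing left to scale.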
The discrimination bit depends on the Bayer filters, which sit in front of the actual photosites.
As an example, take the Leica M8 and its IR sensitivity. Here case (2) applies, but a look at the curves for the Kodak sensor used in the Leica M8 shows that they become collinear in the IR region, so IR cannot be *easily* discriminated from red. It's easier to just kill the IR physically with a filter. I think this argument may carry over to the digital backs with similar Kodak sensors (Phase, Hassy), although they already have a stronger IR filter in front of *all* the Bayer filters.
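The collinearity problem can be made concrete: separating true red from IR means inverting the matrix of channel sensitivities, and as the response curves become parallel that matrix becomes ill-conditioned, so any noise in the readings is hugely amplified. A toy 2x2 sketch (the sensitivity numbers are invented, not from the Kodak data sheets):

```python
import numpy as np

# Rows: two sensor channels; columns: response to (red, IR) light.
# Well-separated responses (hypothetical numbers):
separated = np.array([[1.0, 0.1],
                      [0.1, 1.0]])
# Nearly collinear responses, as in the deep-IR region of the curves:
collinear = np.array([[1.0, 0.95],
                      [0.9, 0.94]])

# The condition number measures how much measurement noise is amplified
# when we invert the matrix to separate red from IR.
print(np.linalg.cond(separated))   # small: separation is stable
print(np.linalg.cond(collinear))   # large: tiny noise -> huge color errors
```

That blow-up is why killing the IR with a physical filter beats trying to unmix it in software.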
I would agree that it's possible that some color-hue filters over the lens might also help with discrimination between visible colors, and also bring the sensor back into its linear region. In particular, a blue (cooling) filter used under incandescent light should help bring up the blue channel and get rid of the blue-channel noise that is visible when the blue channel is low and the other channels are strong.
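The noise argument for the cooling filter can be simulated. Boosting a weak blue channel digitally multiplies its noise along with the signal, whereas the filter (plus exposure compensation) puts more blue light on the sensor before the read noise is added. A rough sketch that ignores shot noise and uses made-up units:

```python
import numpy as np

rng = np.random.default_rng(1)
true_blue = 10.0    # weak blue signal under incandescent light (made-up units)
read_noise = 2.0    # assumed sensor read noise, same units
n = 100_000         # simulated pixel samples

# Digital route: capture the weak signal, then multiply by 4 in software.
digital = 4.0 * (true_blue + rng.normal(0.0, read_noise, n))

# Optical route: filter + exposure compensation put 4x more blue light
# on the sensor, so the read noise is added to an already-strong signal.
optical = 4.0 * true_blue + rng.normal(0.0, read_noise, n)

print(digital.std())  # noise was multiplied along with the signal
print(optical.std())  # noise unchanged
```

Both routes reach the same mean blue level, but the optical route does it at a quarter of the noise.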
Maybe some experiments would be in order here?
On the IR/UV thing
While profiles and 'looks' can help, I don't think they can be the answer: they only perform transforms on data they already have.
An extreme example to demonstrate the point:
If a digital camera were blind to red, then no profile or transform could correct that,
because red would be black and black would be black.
No mathematical transform (which is what profiles/looks are) could know which black areas were supposed to be red and which were supposed to be black.
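The red-blind argument is just the statement that a many-to-one capture cannot be inverted. A tiny sketch with a hypothetical red-blind camera:

```python
import numpy as np

# Hypothetical red-blind camera: the red channel always reads zero.
def red_blind_capture(rgb):
    out = np.array(rgb, dtype=float)
    out[..., 0] = 0.0
    return out

red_patch = [1.0, 0.0, 0.0]    # a scene that is pure red
black_patch = [0.0, 0.0, 0.0]  # a scene that is black

# Both scenes produce the identical capture...
a = red_blind_capture(red_patch)
b = red_blind_capture(black_patch)
print(np.array_equal(a, b))  # True

# ...so no function applied to the capture (which is all a profile or
# 'look' is) can send one to red and the other to black: a function
# gives exactly one output per input.
```

A physical filter changes what reaches the sensor, which is the only place the distinction can still be made.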
Extrapolate this theory and you are led to the use of physical filters at some point in the capture process, to create an image that is perceived correctly by the eye before or after conversion.
Given that red skin areas go dark, which would demonstrate an insensitivity to red, it would appear that using a (mild) red filter (which cuts colours other than red) could be the solution.
This would, of course, have to be used in conjunction with a grey card or white reference to remove the blatant hue cast from the image.
Is this filter exactly what used to be known as a 'warm-up' filter in the 'old days'?