I'm a bit confused — first a general question, then a couple of specific ones.
1. In general: if I shoot pure red and the sensor records fully saturated red and nothing else, then I process the raw file into both a large and a small color space, aren't I still recording fully saturated red (R=255, G=0, B=0) in each? Then if I view each version in its appropriate color space, won't they look the same (assuming the monitor can display both)? Or is a larger color space's fully saturated red actually a more saturated color? How would the sensor know which red it is? All it knows is that all my red photosites are maxed out.
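To make question 1 concrete, here's a rough sketch of what I mean (the RGB→XYZ matrices are the commonly published ones for sRGB/D65 and ProPhoto RGB/D50; the function names and rounding are just my own choices). The same "R maxed, G=B=0" triplet lands on a different chromaticity depending on which space it's interpreted in:

```python
# Same "fully saturated red" triplet, interpreted in two different
# working spaces. Matrices are the published linear RGB -> XYZ ones
# (sRGB uses a D65 white point, ProPhoto uses D50, but the red-primary
# chromaticity comparison still illustrates the point).

SRGB_TO_XYZ = [
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
]
PROPHOTO_TO_XYZ = [
    (0.7977, 0.1352, 0.0313),
    (0.2880, 0.7119, 0.0001),
    (0.0000, 0.0000, 0.8249),
]

def xyz(matrix, rgb):
    """Multiply a 3x3 matrix by a linear RGB triplet."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in matrix)

def chromaticity(X, Y, Z):
    """Project XYZ down to (x, y) chromaticity coordinates."""
    s = X + Y + Z
    return (round(X / s, 4), round(Y / s, 4))

red = (1.0, 0.0, 0.0)  # "red channel maxed out", as linear values
print(chromaticity(*xyz(SRGB_TO_XYZ, red)))      # sRGB red primary, x,y ~ (0.64, 0.33)
print(chromaticity(*xyz(PROPHOTO_TO_XYZ, red)))  # ProPhoto red primary, x,y ~ (0.73, 0.27)
```

So if I understand it right, the numbers in the file are the same, but the color they *mean* depends on the space's primaries — which is really what I'm asking about.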
2. Specifically: for a 5D and an iPF5000, what would be the optimum color space to use, assuming a raw file and 16 bits all the way through the workflow (Canon's 16-bit plug-in for printing)? It seems to me that a space slightly bigger than both the sensor's and the printer's capabilities would be ideal.
3. Would the content of the image make a difference, or only the sensor's capability? I.e., should the color space be tailored to encompass the sensor's gamut or the image's gamut?
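What I'm picturing for question 3 is something like this sketch: checking whether an image's colors actually exceed a smaller space at all. Here I test whether a linear Adobe RGB pixel falls outside the sRGB gamut by converting Adobe RGB → XYZ → linear sRGB and looking for components outside [0, 1] (both matrices are the published D65 ones; `outside_srgb` and the tolerance are hypothetical names/choices of mine, and I'm ignoring gamma):

```python
# Sketch: does a given linear Adobe RGB triplet have any sRGB
# representation? Go Adobe RGB -> XYZ -> linear sRGB and flag
# components outside [0, 1]. Both spaces share a D65 white point,
# so no chromatic adaptation step is needed.

ADOBE_TO_XYZ = [
    (0.5767, 0.1856, 0.1882),
    (0.2974, 0.6273, 0.0753),
    (0.0270, 0.0707, 0.9911),
]
XYZ_TO_SRGB = [
    (3.2406, -1.5372, -0.4986),
    (-0.9689, 1.8758, 0.0415),
    (0.0557, -0.2040, 1.0570),
]

def mul(matrix, vec):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

def outside_srgb(adobe_rgb, eps=1e-3):
    """True if a linear Adobe RGB triplet can't be represented in sRGB."""
    srgb = mul(XYZ_TO_SRGB, mul(ADOBE_TO_XYZ, adobe_rgb))
    return any(c < -eps or c > 1 + eps for c in srgb)

print(outside_srgb([0.0, 1.0, 0.0]))  # saturated green: True, would clip in sRGB
print(outside_srgb([0.5, 0.5, 0.5]))  # mid gray: False, fits easily
```

If nothing in the image trips a check like this, it seems like the bigger space buys me nothing for that image — which is why I'm wondering whether to pick the space per image or once for the sensor.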