I agree with you about minimizing conversions, but remember that the PCS is usually Lab (and if it is XYZ there is a straightforward mapping to Lab), and in all CMM conversions the intermediate color space is always Lab/XYZ.
AFAIK, there's a difference between converting all pixels to and from Lab versus using a PCS to come up with values for the conversion. An old urban legend is that Photoshop converts to and from Lab to do all its conversions and other editing operations. Not necessary, and way too slow. What happens, and has always happened since Photoshop started using ICC profiles, is that when you ask for a conversion, Photoshop builds a conversion table. To do so, it uses Lab to find the equivalents from source to destination in cases where it needs to translate such color spaces, using 20-bit precision, so you get fewer quantization errors than you would by actually converting the pixels to Lab.
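A toy sketch of that idea (not Photoshop's actual code; the two "profile" transforms below are invented stand-ins, not real profile math): only the nodes of a small grid are pushed through the PCS, and pixels are then converted by table lookup.

```python
# Toy sketch of a CMM-style conversion table built through the PCS (Lab).
# The two "profile" transforms are invented stand-ins; the point is that
# only GRID**3 nodes go through Lab, not every pixel in the image.

GRID = 17  # common LUT grid sizes are 17 or 33 nodes per axis

def src_to_pcs(r, g, b):
    # stand-in for "source profile -> Lab"
    return (0.6 * r + 0.3 * g + 0.1 * b, r - g, g - b)

def pcs_to_dst(L, a, b2):
    # stand-in for "Lab -> destination profile"
    return (L + 0.5 * a, L - 0.5 * a, L - b2)

def build_table():
    # evaluate the full source -> PCS -> destination chain at each grid node
    table = {}
    step = GRID - 1
    for i in range(GRID):
        for j in range(GRID):
            for k in range(GRID):
                table[(i, j, k)] = pcs_to_dst(
                    *src_to_pcs(i / step, j / step, k / step))
    return table

def convert_pixel(pixel, table):
    # real CMMs interpolate between nodes (tri-linear or tetrahedral);
    # nearest-neighbor lookup keeps the sketch short
    i, j, k = (round(c * (GRID - 1)) for c in pixel)
    return table[(i, j, k)]

table = build_table()
# a pixel that lands exactly on a grid node matches the direct chain
print(convert_pixel((0.5, 0.5, 0.5), table))
```

The table is built once per conversion, so its cost is amortized over millions of pixels, which is why this is so much faster than running every pixel through Lab.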
I don't know enough about the problems that may be associated with Lab as a working space (as you say there are) to comment on what you say ... but certainly one major problem that I see is that Lab is far too big for our monitors and output devices at this stage, so there would need to be a mechanism to constrain it.
CIELAB was 'recommended' by the CIE in 1976 to address a specific problem: while identical XYZ values could tell you when two stimuli would be experienced as the same 'color' by most observers, they could not tell you how 'close' two colors were if they were not exactly the same XYZ value. Where Lab is useful is in predicting the degree to which two sets of tristimulus values will match under defined conditions; it is not anywhere close to being an adequate model of human color perception. It works well as a reference space for colorimetrically defining device spaces, but as a space for image editing it has many problems.

There are a slew of other perceptual effects that Lab ignores. Lab assumes that hue and chroma can be treated separately, but numerous experimental results indicate that our perception of hue varies with the purity of a color. Mixing white light with a monochromatic light does not produce a constant hue, but Lab assumes it does! This is seen in Lab's modeling of blues; it's the cause of the dreaded blue-magenta shifts. Lab is no better, and in many cases can be worse, than a colorimetrically defined color space based on real or imaginary primaries.
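For reference, the 1976 transform and the ΔE*ab difference it was recommended for are short enough to write out (here relative to the D50 white point used by the ICC PCS):

```python
import math

# CIE 1976 L*a*b* from XYZ, relative to the D50 white point of the ICC PCS
D50 = (0.9642, 1.0, 0.8249)

def _f(t):
    # the cube-root function with its linear toe below (6/29)^3
    d = 6.0 / 29.0
    return t ** (1.0 / 3.0) if t > d ** 3 else t / (3.0 * d * d) + 4.0 / 29.0

def xyz_to_lab(x, y, z, white=D50):
    fx, fy, fz = _f(x / white[0]), _f(y / white[1]), _f(z / white[2])
    return (116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz))

def delta_e76(lab1, lab2):
    # the 1976 color-difference metric: plain Euclidean distance in Lab
    return math.dist(lab1, lab2)

# the white point itself maps to L* = 100, a* = b* = 0
print(xyz_to_lab(*D50))
```

Note what the model is: a fixed nonlinearity plus a distance metric. None of the hue-vs-purity effects described above appear anywhere in it, which is exactly the limitation at issue.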
- Do you agree that the main problem with the flat-looking sRGB image in your video has more to do with the fact that you converted an image with saturated colors in ProPhoto to the much smaller sRGB working space (a conversion that is inevitably a Relative Colorimetric conversion) than with sRGB itself?
- Do you agree that if such a conversion is a requirement (for example if you are preparing the image for the web) that doing a Perceptual conversion to an intermediate profile and then converting to sRGB will to a large extent avoid the problems you highlighted? I used a print profile to do this, but a better one would be a table-based monitor profile as this would not cause color shifts/desaturation to the same extent.
- Do you agree with Steve Upton's recommendations:
Choose a working space that is just large enough to contain your imagery; any bigger and you're wasting space.
If you can, choose a standard working space like sRGB or AdobeRGB. It makes file exchange and discussions easier.
Avoid converting between working spaces, as the conversions don't deal with out-of-gamut colors well.
If the entire world used sRGB PROPERLY, color quality would go up significantly. What this means is that many color problems are not due to working space choices.
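The out-of-gamut behavior those questions turn on can be sketched with a toy example (invented numbers; "in gamut" here just means every channel lies in 0–1, and real gamut mapping is far more sophisticated): a colorimetric-style per-channel clip collapses distinct saturated colors into one value, while a perceptual-style image-wide compression keeps them distinct.

```python
# Toy illustration: why a straight colorimetric clip flattens saturated
# colors while a perceptual-style compression preserves their gradation.

def clip(color):
    # relative-colorimetric-style handling: clamp each channel independently
    return tuple(min(1.0, max(0.0, v)) for v in color)

def compress(colors):
    # perceptual-style handling: scale everything by one factor so the most
    # saturated color just fits, preserving relationships between colors
    peak = max(v for c in colors for v in c)
    scale = 1.0 / peak if peak > 1.0 else 1.0
    return [tuple(v * scale for v in c) for c in colors]

# two distinct saturated reds that both fall outside the small gamut
shades = [(1.3, 0.2, 0.1), (1.6, 0.2, 0.1)]

print([clip(c) for c in shades])  # both collapse to (1.0, 0.2, 0.1)
print(compress(shades))           # the two reds stay distinct
```

The trade-off is also visible in the numbers: compression desaturates in-gamut colors slightly in exchange for keeping the gradation between out-of-gamut ones, which is roughly the bargain a perceptual intent offers.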
1. Yes of course (in terms of going from a bigger to a smaller gamut)! The images have a gamut that greatly surpasses sRGB, so sRGB is the wrong encoding working space to use for these images (and, unless you can show otherwise, all images) processed from raw using the ACR engine. The rendering intent is a red herring you continue to obsess over.
2. No, no problems. In my workflow, and many others, it's scan once, use many (or capture raw, render once, use many). I have no idea what output device my images will go to today or a year from now, and I want to retain all the data I can in the master. So the answer is simple: encode in the largest color space I can. At the point where I'm going to output that data to a known device and have a profile, I'll examine what rendering intent makes the image look best (IF I even have such an option; output to the web isn't one of them, and it doesn't matter a bit to me considering the huge inconsistencies of output devices and the lack of color management for so many). So again, you're obsessing over rendering intents, and I've seen nothing thus far in my work, or in anything you've shown, that illustrates I should be concerned and just use a smaller-gamut working space instead.
3. I agree that I should use the biggest working space that doesn't clip the color I captured, which is WHY I use ProPhoto RGB. I'm far, far less concerned about wasting space than about wasting color data.
I don't know what a standard working space like sRGB or AdobeRGB means; it seems like something someone made up. For me, ProPhoto RGB is a standard working space, my standard working space! I really don't care what some think the entire world is using, and no, if everyone used sRGB, they would see the same poor output to their Epsons as I see to mine using that working space compared to ProPhoto RGB. Even Adobe RGB (1998), which I just tested, produced an inferior print on the 3880 compared to ProPhoto RGB.
If anything, the recent testing I did with my Gamut Test File (from raw and some synthetics) has strengthened my opinion that ProPhoto RGB is a vastly better working space to use, given how I capture my data, the converter I use, and the output device I print to today.