Yes, that concept, outside of bit-depth considerations, is an urban legend, perhaps created on the DPR forums, and it is colorimetrically false.
My assumption was based on several premises:
- That smaller is computationally cheaper. CPU time and storage space are neither free nor infinite.
- That if you do extensive editing in a smaller space, there is less danger of posterization in gradients (skies) than if you edit extensively in a larger space and then export to a smaller one (see the sketch after this list).
- That if you do extensive editing in a larger space (one that contains colors no real-world monitor can display), there is increased danger of hue and saturation shifts when you export to a smaller space.
- That if you know beforehand how your real-world image fits into various color spaces, you don't have to spend a lot of time with Photoshop's crude gamut-clipping soft-proofing tools.
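On the posterization premise, here is a toy Octave sketch of the underlying arithmetic (untested, with made-up numbers rather than measured gamut volumes): at a fixed bit depth, a gradient that fills only half of a wider-gamut encoding's range gets roughly half as many code values.

```
% Toy arithmetic only: quantize the same ideal gradient to 8 bits when it
% fills the whole encoding range vs. only half of a wider-gamut range.
ramp = linspace(0, 1, 10000);        % an ideal smooth gradient
q_small = round(ramp * 255);         % gradient fills the small space
q_large = round(ramp * 0.5 * 255);   % same gradient fills half the large space
printf("distinct levels, small space: %d\n", numel(unique(q_small)));  % ~256
printf("distinct levels, large space: %d\n", numel(unique(q_large)));  % ~129
```

At 16 bits per channel the step count is so large that this mostly stops mattering, which is presumably why the reply above sets bit-depth considerations aside.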
First, those who propose this can't tell us how we should determine whether the image gamut would fit. Sure, you can render from raw into various color spaces and plot them. Image by image?
I am working on a tool that uses 100% free components and is easy to use. Where is the disadvantage?
More importantly, it's unnecessary. When I heard this silly idea that you should use a working space whose gamut isn't any larger than the image's, I proved it colorimetrically wrong. I took an image that from raw easily fits into sRGB, rendered it so, then rendered it into ProPhoto RGB. There's no difference using either color space!
If an image from raw can't fit into the destination color space's gamut, you clip colors; not good.
If an image easily fits, using something significantly larger makes NO difference.
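For what it's worth, a check like that can be reduced to numbers. Here is an untested Octave sketch, assuming (my assumption about the workflow, not a statement of how it was actually done) that both renderings were converted to a common output space at 16 bits and saved losslessly; the filenames are hypothetical.

```
% Compare two renderings of the same raw -- one taken through sRGB, one
% through ProPhoto RGB -- after both were converted to a common 16-bit
% output space. Filenames are hypothetical.
a = double(imread("via_srgb.tif"));
b = double(imread("via_prophoto.tif"));
d = abs(a - b);
printf("max channel difference: %d of 65535\n", max(d(:)));
printf("pixels with any difference: %.3f%%\n", 100 * mean(d(:) > 0));
```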
I already told you I watched your sRGB Myths video. I just watched it again. I obliquely referenced it in my post to Doug when I highlighted the CC chart cyan poking out of sRGB. Your video covered a CC chart and an unedited image of a white dog and snow. Neither illustrates the concerns I raised in my list, above.
I also said that I did a random walk examining different spectral-analysis tools:
- BabelColor CT&A ($125),
- Robin Myers Imaging SpectraShop 5 ($99), and
- Chromix ColorThink Pro ($399).
I started out using CT&A to compare Lab values of different CC charts (ones I shot under different light against the reference CC Lab values), and hand-transcribing Lab values from the Photoshop Info panel into CT&A got tedious real quick. I wanted something that would compare the Lab values from each square of the CC charts as one operation. I examined all three programs, and I don't think any of them could do it. So I returned to my existing method of running images (and color spaces) through the ArgyllCMS utilities to produce 3D plots. This is quick and easy--all I had to do was add the filenames of the various image files and ICC profiles to a configuration file and run my script. And the interactive HTMLish 3D plots are easily sharable. (ColorThink Pro's...?)
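The script itself is nothing exotic. For anyone curious, a rough Octave equivalent driven through system() might look like this (a sketch, not my actual script; the filenames are placeholders, and the tiffgamut/iccgamut/viewgam invocations should be checked against your ArgyllCMS version's documentation):

```
% Batch the ArgyllCMS gamut-plotting utilities from Octave. Assumes
% tiffgamut, iccgamut, and viewgam are on the PATH.
images   = {"chart_shot.tif", "landscape.tif"};   % placeholder filenames
profiles = {"sRGB.icm", "ProPhotoRGB.icm"};       % placeholder profiles

for i = 1:numel(images)
  for j = 1:numel(profiles)
    [~, img]  = fileparts(images{i});
    [~, prof] = fileparts(profiles{j});
    system(sprintf("tiffgamut %s %s", profiles{j}, images{i}));  % image gamut -> img.gam
    system(sprintf("iccgamut %s", profiles{j}));                 % profile gamut -> prof.gam
    % Superimpose both gamuts in one viewable 3D plot.
    system(sprintf("viewgam %s.gam %s.gam %s_in_%s.wrl", img, prof, img, prof));
  end
end
```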
I also read Jim's blog posts at https://blog.kasson.com/the-last-word/the-color-reproduction-problem/. As I posted earlier, "In blog post 14 you noted how tedious it was to transcribe Lab numbers from Photoshop," and that post referenced Matlab. I did a quick check of Matlab, and its pricing is "write for a quote," which I assumed didn't mean it was cheap. So I kept on with my script and the ArgyllCMS programs and asked for suggestions on how to improve my methodology.
My random walk continued. Last night I got to GNU Octave, which is free and claims to be "Drop-in compatible with many Matlab scripts". So possibly this could be used as a free, untedious way of comparing CC chart Lab values? (And for doing many other things.)
But, not knowing anything about Matlab, I could use a head start in doing this.
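From a quick skim of the Octave docs, the patch-comparison part might be only a few lines. An untested first attempt, assuming each chart's 24 Lab values have been exported to a CSV file (filenames hypothetical):

```
% Compare two CC charts in one operation: CIE76 delta E per patch.
% Assumes 24x3 CSV files of L, a, b values; filenames are hypothetical.
ref  = dlmread("cc_reference_lab.csv", ",");   % 24x3 reference Lab values
shot = dlmread("cc_shot_lab.csv", ",");        % 24x3 Lab values from my shot
dE76 = sqrt(sum((shot - ref).^2, 2));          % Euclidean distance in Lab
printf("patch %2d: dE76 = %6.2f\n", [(1:rows(ref))' dE76]');
printf("mean %.2f, worst %.2f\n", mean(dE76), max(dE76));
```

CIE76 delta E is just the Euclidean distance in Lab; a more perceptually uniform formula like delta E 2000 would take a page of code, so the simple one seems like the place to start.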