I think tonal and color adjustments are easier to do after demosaicing (there is no 'color' before that data conversion), but still in linear gamma space. Of course, assigning an input profile is also something of an adjustment, but it just takes whatever the demosaicing will output.
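To illustrate why working in linear gamma matters: a minimal sketch (values are arbitrary, for illustration only) showing that a +1 EV exposure change is a simple multiply in linear light, but the same multiply applied to gamma-encoded values gives a different result.

```python
import numpy as np

# Three illustrative linear scene-referred values (shadow, mid, highlight).
linear = np.array([0.05, 0.18, 0.40])

# In linear space, +1 EV is exactly a multiply by 2:
plus_one_ev_linear = linear * 2.0

# The same multiply on gamma-2.2 encoded values, decoded back to linear,
# does NOT give +1 EV (it effectively multiplies by 2**2.2):
gamma_encoded = linear ** (1 / 2.2)
decoded = (gamma_encoded * 2.0) ** 2.2

print(plus_one_ev_linear)
print(decoded)  # larger than plus_one_ev_linear: the adjustment misbehaves
```

So the same slider maths behaves predictably in linear space and unpredictably after gamma encoding, which is one reason to prefer doing it in the converter.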
I think I'm using terms a bit too loosely. What I meant is that, if I understand you correctly, it is better to do as many of the adjustments as possible in the raw converter, and as few as possible after conversion to TIFF. There are a few reasons from an image quality point of view:
- larger color space (pros and cons here)
- linear mode (used internally by the raw processor)
- better CA and fringe correction (resulting in fewer artifacts in the image)
- better for color noise correction
- potentially better for deblur (assuming a good deblur tool in the raw converter)
- potentially better for noise reduction (assuming a good denoise tool in the raw converter)
- ... ?
This assumes that the raw converter actually makes the adjustments on the sensor data directly where at all possible.
Which does lead to another question: what image adjustments/corrections are typically done on the sensor data, pre-demosaic? I would have thought not too many (CA and color noise possibly?). Would the other adjustments not then be made on the linear, demosaiced image? If so then the only advantage of doing something like a saturation adjustment in the raw converter is that the image is still linear and in a large color space.
And how big an advantage is it? As you point out, it may be measurable but not noticeable ... and against this are the risk that adjustments will clip when the image is converted to a working space (aRGB etc.) and the risk of posterisation because of the large color space used.
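The clipping risk can be sketched like this: a pixel that is in gamut in a wide space can fall outside a smaller working space after conversion. The 3x3 matrix here is purely illustrative (it is not the real ProPhoto-to-aRGB transform, which a proper CMM would handle); it just has the right general shape, with rows summing to 1 so neutrals stay neutral.

```python
import numpy as np

# Illustrative wide-space -> working-space matrix (NOT a real ICC transform;
# rows sum to 1 so grey maps to grey).
to_working = np.array([[ 1.35, -0.25, -0.10],
                       [-0.05,  1.10, -0.05],
                       [ 0.00, -0.15,  1.15]])

wide = np.array([[0.05, 0.90, 0.05],   # very saturated green pixel
                 [0.50, 0.50, 0.50]])  # neutral grey pixel

converted = wide @ to_working.T
clipped_mask = (converted < 0.0) | (converted > 1.0)
print(converted)
print(clipped_mask.any(axis=1))  # per pixel: does it clip in the working space?
```

The saturated pixel produces negative components after conversion and would have to be clipped (or gamut-mapped); the grey pixel converts cleanly. Adjustments made in the wide space can push more pixels into that first category.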
Which all goes back to my original question. It makes sense (to me) to make tonal adjustments in the raw converter. But it is much less obvious that it is necessarily better to make color adjustments there too. It might be better to convert to the intended final working space and to make corrections in that space, because a smaller color space will lead to less posterisation.
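The posterisation point is just arithmetic: at a fixed bit depth, a larger color space spreads the same number of code values over a wider range, so each step covers a bigger color difference. A back-of-envelope sketch (the gamut "ranges" are arbitrary units, assuming the large space covers twice the range of the small one):

```python
# At a fixed bit depth, step size scales with the range the space covers.
bits = 8
codes = 2 ** bits

small_gamut_range = 1.0  # arbitrary units of color difference
large_gamut_range = 2.0  # a space covering twice the range

step_small = small_gamut_range / codes
step_large = large_gamut_range / codes

print(step_small, step_large)  # the larger space has coarser steps
```

Coarser steps mean adjacent code values are further apart perceptually, which is where visible banding comes from; higher bit depth (16-bit, or 32-bit float) is what offsets this in a wide space.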
Of course if C1 is in fact using a dynamic, image-dependent color space, that would be very good. But it would have to leave elbow room for the possible adjustments ... or it would have to grow as required. I would have thought this enlarging would cause degradation (but maybe C1 is working in 32-bit or more?).
It would be good to do some actual comparisons between adjusting before and after conversion to the working space ... which is what I tried to do earlier in this topic. But it isn't an easy thing to do, which is why I'm looking for a theoretical answer.
Cheers
Robert