Interesting observation, and I had a similar "hunch" that got further solidified when reading this thread on getDPI.
Even though it's not the main subject being discussed there, what I learned is that with high-DR sensors (especially those with increased highlight headroom), ETTR can supposedly put your whole picture into a narrower gamut at the bright side of the histogram, and that by bringing the EV down in the Raw converter you then can't get back all you would have gotten at a lower exposure.
Hi Pieter,
Don't believe everything you read.
At capture time, there is no color space (with its particular gamut hull) defined. The mostly linear sensitivity of our sensor arrays just records photons converted to electrons at R/G/B-filtered sampling positions. A Raw converter then needs to convert that data into RGB colors at each sample position (= demosaicing), and a number of calculations are performed before mapping those RGB coordinates to some RGB coordinate system. One of the operations, after demosaicing, is a linear scaling of the RGB channels to perform White Balancing and Exposure correction. Both should be done in linear gamma space.
Only after all these operations is the data set mapped to a color space (a coordinate system); that is why we effectively get a wider gamut by e.g. choosing Adobe RGB instead of sRGB, and ProPhoto RGB can squeeze a bit more out of our files (although most coordinates in ProPhoto RGB are left unused in a real image).
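To illustrate what I mean by linear scaling, here is a quick Python/NumPy sketch with made-up numbers (not output from any actual converter): because White Balance and Exposure correction are just per-channel multiplications on linear data, an exposure pull does not change the R:G:B ratios, i.e. the "color", before any color space is assigned.

```python
import numpy as np

# Hypothetical demosaiced linear RGB data, scale 0.0-1.0
rgb = np.array([[0.20, 0.35, 0.30],
                [0.60, 0.55, 0.10]])

# White Balance: per-channel linear multipliers (illustrative values)
wb = np.array([1.9, 1.0, 1.4])

# Exposure correction: pull an ETTR capture down by 1 EV
gain = 2.0 ** -1.0

balanced = rgb * wb * gain  # still linear, no color space assigned yet

# The exposure gain cancels out of the channel ratios, so the
# white-balanced "color" of each sample is unchanged by the pull:
ratios_before = (rgb * wb) / (rgb * wb).sum(axis=1, keepdims=True)
ratios_after = balanced / balanced.sum(axis=1, keepdims=True)
print(np.allclose(ratios_before, ratios_after))  # True
```

Only the subsequent mapping into a bounded color space can change saturation.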
So it depends on the Raw converter whether it scales the ETTR exposure down to the proper/required level before mapping it to a color space. Only then, with a color space assigned, will the change of exposure potentially impact the saturation, due to the remapping model in the given color space's coordinate system.
Therefore the answer to the OP's question can only be given for a specific Raw converter, because we have no guarantee that the processing is done for optimal quality rather than speed. In fact, quality probably requires doing most of the processing in floating point math, which is slower than integer math. I saw some Raw conversion samples by Iliah Borg, indicating that (while slower) floating point math during Raw conversion indeed produces superior results compared to the current LibRaw/DCRaw conversions.
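A toy example of why the math precision matters (again just a sketch with hypothetical values, not a claim about any particular converter's internals): pulling exposure on linear data in floating point preserves all the distinct tonal levels, while the same pull done with integer arithmetic collapses them.

```python
import numpy as np

# Sixteen hypothetical adjacent shadow levels from a linear raw file
vals = np.arange(0, 16)

# A -2 EV pull (gain 0.25) in floating point keeps all 16 levels distinct:
pulled_float = vals * 0.25
print(len(np.unique(pulled_float)))  # 16

# The same pull in integer math merges them into only 4 distinct levels,
# i.e. posterization in the shadows:
pulled_int = vals // 4
print(len(np.unique(pulled_int)))  # 4
```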
The best proof is therefore to shoot a scene (or a ColorChecker card) at different exposure levels, making sure that there is no single-channel clipping (!), and to pull the exposure at Raw conversion to a common brightness level. Then a comparison of color differences will reveal whether or not there is a problem with the Raw processing engine.
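That test boils down to something like the following sketch (the patch values are invented for illustration; in practice you would average a region of the demosaiced linear output for each shot):

```python
import numpy as np

# Hypothetical averaged linear RGB values of the same ColorChecker patch
# from two raws shot 1 EV apart, with no channel clipped in either shot
patch_base = np.array([0.18, 0.12, 0.08])
patch_ettr = np.array([0.36, 0.24, 0.16])  # ideal case: exactly +1 EV

# Pull the ETTR shot down to the common brightness level
patch_pulled = patch_ettr * 2.0 ** -1.0

# With a well-behaved linear Raw engine the residual difference should
# be essentially zero; a significant residual would point at non-linear
# processing (or premature color space mapping) in the converter.
print(np.max(np.abs(patch_pulled - patch_base)))  # 0.0
```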
Cheers,
Bart