Yeah, it all makes sense in a linear encoding scheme, except that the raw converters don't provide a way to push and pull the data that produces equal results in both directions.
I don't understand what that means.
That's what I'm saying. IF there were a way to do this with C1, then ETTR would be the best way.
Why is C1 an issue?
So far I haven't found a way to do this and get equal results. Even if you could get equal results, why would you want to add one more step to your workflow if it wasn't needed?
They are not supposed to be equal.
Just make the highlight clipping value of each the same and let everything else fall where it will. No, the midtones and shadows will not be equal. For one, you'll see a lot less noise in the shadows (if you clamp/clip the black the same, you may end up reducing noise at the expense of true shadow detail. In fact, a great way to reduce noise is to clip the black!).
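A tiny sketch of the trade-off in that parenthetical, using made-up linear shadow values and an assumed black-clip level: a hard black clip zeroes out the noise floor and any true shadow detail below the clip point alike.

```python
# Hypothetical linear shadow samples, tagged by what they really represent.
# Values and the black point are assumed for illustration only.
samples = [("noise", 0.004), ("detail", 0.018), ("noise", 0.012),
           ("detail", 0.025), ("noise", 0.009), ("detail", 0.021)]
black_point = 0.02  # assumed clip level

# Everything at or below the black point goes to pure black.
clipped = [(tag, value if value > black_point else 0.0) for tag, value in samples]

print(clipped)
# The noise floor is now clean black, but the faint 0.018 "detail" sample is gone too.
```

The visible noise drops, which is exactly why clipping the black "reduces noise", but the 0.018 detail sample is indistinguishable from noise once it's below the clip point.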
Anyhow, we agree on the point that the results for normal exposure and ETTR adjusted in post are not equal, and I am further saying I prefer the normal exposure, or even 0.5 stop under, for visual look.
They are not. But you need to normalize the rendering for what appears to be 'over exposure' unless, as I point out in the article, you really DO overexpose and get past the point of sensor saturation.
I think the reason is because the contrast is not affected as it is in the ETTR adjusted version. The result looks more real to me.
Again, I can't comment on this (looks more real). One should be able to use the raw rendering controls to get the best possible data AND a rendering you desire, but I can't speak to C1; I used Lightroom (which is the same as CR).
You'll also note, I found almost as many issues with ETTR that make it an iffy proposition. But, the math is undeniable. IF you expose properly for digital, which IS ETTR (not blowing out highlight data you hope to reproduce), you WILL get better data, less noise in that last stop. That may not be important to you and that's fine.
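The shot-noise arithmetic behind "you WILL get better data, less noise in that last stop" can be sketched with toy photon counts (the numbers are assumed, not measured): photon arrival is Poisson, so the noise is the square root of the mean, and one extra stop of captured light buys roughly a √2 gain in SNR.

```python
import math

def shot_noise_snr(photons):
    # Poisson shot noise: standard deviation = sqrt(mean),
    # so SNR = mean / sqrt(mean) = sqrt(mean).
    return photons / math.sqrt(photons)

normal = shot_noise_snr(1000)  # a shadow region at "normal" exposure (toy count)
ettr = shot_noise_snr(2000)    # one stop more light, to be pulled back in post

print(ettr / normal)  # about 1.414, i.e. a sqrt(2) SNR improvement
```

That gain is there regardless of whether it matters visually for a given print or screen, which is the "may not be important to you" caveat.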
But the format is immaterial here if indeed we're comparing apples to apples with linear-encoded data. Look at the figure in the article; it's clear how much data you have in the first stop versus the last stop.
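The linear-encoding point from the article's figure can be sketched numerically, assuming a hypothetical 12-bit raw file: each stop down the scale gets half the remaining code values, so the brightest stop holds fully half of all the levels.

```python
# Hypothetical 12-bit linear raw: 2**12 = 4096 code values total.
# In linear encoding, each successive stop down gets half the remaining values.
bit_depth = 12
upper = 2 ** bit_depth
per_stop = []
for stop in range(6):  # the six brightest stops
    lower = upper // 2
    per_stop.append(upper - lower)
    upper = lower

print(per_stop)  # [2048, 1024, 512, 256, 128, 64]
```

Half a stop of underexposure throws away roughly a quarter of the available code values before the converter ever touches the data, which is the core of the ETTR argument.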