The bottom (shadows) non-linearity is noise, but the highlight non-linearity is generally very tiny (it does seem more common, however, that once one channel clips, the others become non-linear).
The main point of ETTR is exactly that: there is one side of the linear curve where there is almost no non-linearity before clipping, hence the need to take clipping as the upper limit of exposure.
I was referring to the sensor as transistors. All transistors, including the light-sensitive ones, are non-linear. It may take a lot of very meticulous and careful electronic balancing to get the data from the linear part of the curve. The RAW data we usually see in files is obviously not related to the response curve of the sensor considered as transistors. Probably far from it.
Plus another issue: once you have an exactly linear response, how do you (we) propose to benefit from ETTR? If the response is linear, then shifting the data makes no difference. The reason that ETTR works is that it is a multiplication, not a translation.
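To make the multiplication-versus-translation point concrete, here is a minimal sketch assuming photon shot noise dominates (noise grows as the square root of the captured signal). The photon counts are illustrative numbers, not measurements from any particular sensor:

```python
import math

def snr(photons):
    # With photon shot noise, noise = sqrt(signal), so SNR = sqrt(signal).
    return photons / math.sqrt(photons)

base = 1000.0      # photons captured at the metered exposure (assumed value)
ettr = base * 2.0  # one stop more light: a real multiplication at capture

# A longer exposure multiplies the captured light, improving SNR:
print(snr(base))   # ~31.6
print(snr(ettr))   # ~44.7, i.e. sqrt(2) better

# "Shifting the data to the right" after capture scales signal AND noise
# by the same gain, so the SNR is unchanged:
gain = 2.0
print((base * gain) / (math.sqrt(base) * gain))  # ~31.6 again
```

Which is why the benefit only exists when the extra stop is dialed in as real exposure before capture, not applied as gain afterwards.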
So you only get ETTR and its benefits when you specifically dial in a different (longer) exposure, not by dialing in a standard exposure where the camera shifts the data to the right if at all possible. (Note that for most landscape situations changing the exposure time automagically may be possible, but for the majority of photography this is obviously not what we want.)
Oh, yes, I don't want that either; I want it to be logarithmic so that it can be graduated in Fstops, of course.
I understand, but my other question remains:
- do you want Fstops? (= gamma 2.0)
- do you want gamma 2.2 / sRGB / Lab?
- or user selectable?
And more importantly: what will be our reference for bringing the data back to some reasonable reproduction, considering that the data usually represents overexposure?
ps. Yes, I did experience rather annoying problems with HP mode in Canon: specifically, blue skies with gray/white "fluffy" clouds, both in DPP and in Lightroom. Only after the custom profile option appeared back then did things become somewhat more controllable. But that is too long ago to really know for sure where the real problems came from.
I did once decode Canon RAW files, and one particularly interesting anomaly I noticed was this: there is data available below and above a certain threshold, but in the histogram it is non-continuous. You have to specifically "compress" that data back into the continuous part. That is an interesting problem in itself, because it somehow requires a mid-point reference to determine what needs to be compressed toward light and what toward dark…
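Such a compression step might look roughly like the sketch below. Everything here is a hypothetical illustration: the `lo`/`hi` range bounding the continuous part, the mid-point, and the compression factor are made-up numbers, not Canon's actual thresholds:

```python
def compress(value, lo, hi, mid, factor=0.5):
    """Pull an out-of-range sample back toward the continuous range [lo, hi].

    The mid-point decides direction: samples on the dark side of `mid`
    compress toward the dark edge `lo`, samples on the bright side
    compress toward the light edge `hi`. All parameters are illustrative.
    """
    if lo <= value <= hi:
        return value                        # continuous part: untouched
    if value < mid:
        return lo - (lo - value) * factor   # squeeze toward the dark edge
    return hi + (value - hi) * factor       # squeeze toward the light edge

# Assumed 14-bit-ish values for illustration only:
print(compress(5000, 1024, 15000, 8000))    # in range, unchanged: 5000
print(compress(524, 1024, 15000, 8000))     # below range, pulled up: 774.0
print(compress(16000, 1024, 15000, 8000))   # above range, pulled down: 15500.0
```

The design question the paragraph raises is exactly where `mid` should sit, since that choice determines which outliers go toward light and which toward dark.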
While I realize this is somewhat technical, it may also be important because of the logarithmic RAW histogram we propose. For Canon RAW files, for example, the gamma curve is not applied from the exact zero origin, but rather from a specific lower threshold level. This may not be true for other manufacturers, but it obviously has a significant impact on what we will be looking at…
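A minimal sketch of what subtracting such a lower threshold before binning would mean for a logarithmic (Fstop) histogram. The black level of 1024 and the 14-bit white point are assumptions for illustration, not values taken from any specific camera:

```python
import math

BLACK = 1024    # assumed lower threshold (black level)
WHITE = 16383   # assumed 14-bit clipping point

def stops_below_clipping(raw_value):
    # Take the origin at the black level, not at zero, before the log:
    signal = max(raw_value - BLACK, 1)
    full = WHITE - BLACK
    return math.log2(full / signal)         # 0.0 = at the clipping point

def fstop_histogram(samples, stops=10):
    bins = [0] * stops                      # bin 0 = brightest stop
    for v in samples:
        b = min(int(stops_below_clipping(v)), stops - 1)
        bins[b] += 1
    return bins

print(fstop_histogram([16383, 8703], stops=4))  # [1, 1, 0, 0]
```

If the same samples were binned with the origin at zero instead of at the black level, every value would land in a different stop, which is the impact on the histogram the paragraph above is pointing at.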