When I watch the live histogram on my a7R II and increase exposure, the histogram moves to the right. But before it reaches the right edge, it becomes skewed to the right. This must mean that in-camera software compresses the upper midtones to prevent highlight clipping. As far as I can tell, that in turn means less separation between tones in that range.
If this is correct, then in the future I will typically expose for the "flattest" histogram rather than to the right, since I have no problems with noise. Well, maybe I will get them if I follow my idea...
Your analysis is correct but I don't recommend your solution.
The tone curve is compressed, partly to simulate the response of film and partly to raise the brightness of the midtones relative to "white." This compression is a standard part of the conversion from the camera sensor's linear data to what is known as an "output-referred" digital space. It contrasts with "scene-referred" rendering, which is used in photography for reproduction work.
Said another way: if you use normal photography to take a picture of a picture, print it, and put the print side by side with the original, the two will look quite different. The high end of the tone curve is compressed in the print.
If you use a scene-referred photographic process to make a colorimetric match of an original, the print will closely match it. This is a specialty because, outside of repro work, the prints produced tend to look dull and less colorful than standard output-referred prints.
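To see why compressing the highlights costs tone separation, here is a small sketch. The curve below is purely illustrative (a gamma lift plus an exponential shoulder; it is not Sony's actual tone curve), but it shows the effect you observed: the slope of the curve, which is what separates adjacent tones, drops sharply in the highlights.

```python
import numpy as np

# Hypothetical "filmic" tone curve: an sRGB-style gamma lift plus a
# shoulder that rolls off the highlights. The constants (2.2, 0.8)
# are illustrative assumptions, not any camera's real values.
def tone_curve(x, knee=0.8):
    """Map linear scene values in [0, 1] to output-referred values."""
    x = np.clip(x, 0.0, 1.0)
    lifted = x ** (1 / 2.2)  # midtone lift
    # Smooth shoulder: compress values above the knee point.
    return np.where(
        lifted <= knee,
        lifted,
        knee + (1 - knee) * (1 - np.exp(-(lifted - knee) / (1 - knee))),
    )

# Tone separation is the local slope of the curve.
x = np.linspace(0.0, 1.0, 1001)
y = tone_curve(x)
slope = np.gradient(y, x)
print(f"slope at 0.30 linear (midtones):   {slope[300]:.2f}")
print(f"slope at 0.95 linear (highlights): {slope[950]:.2f}")
```

The midtone slope comes out several times larger than the highlight slope, i.e. two nearby highlight tones end up much closer together in the rendered image than two nearby midtones do.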
That's how it works. Don't try to defeat it: you will not like the prints, or you will wind up boosting the brightness afterward, and the lower exposure will also cost you higher image noise. If you shoot RAW, none of this compression occurs in the camera. The sensor responds linearly, and the best exposure is the one where the highlights come within a few percent of the sensor's clipping point. RawDigger is a tool that can tell you exactly where that occurs.
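The check RawDigger performs can be sketched in a few lines: count how many raw values have reached the clipping point, and how many sit within a few percent of it. The 14-bit white level and the 97% threshold below are illustrative assumptions (real cameras vary, and RawDigger reads actual raw files), but the logic is the same.

```python
import numpy as np

WHITE_LEVEL = 16383       # assumed 14-bit raw clipping point
HEADROOM_FRACTION = 0.97  # "within a few percent" of clipping

def ettr_report(raw):
    """Return (fraction clipped, fraction near clipping) for a raw frame."""
    raw = np.asarray(raw)
    clipped = np.mean(raw >= WHITE_LEVEL)
    near = np.mean(raw >= HEADROOM_FRACTION * WHITE_LEVEL)
    return clipped, near

# Simulated raw frame: a scene exposed so the brightest pixels
# approach, but never reach, the clipping point.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 0.98, size=(100, 100))
raw = (scene * WHITE_LEVEL).astype(np.uint16)

clipped, near = ettr_report(raw)
print(f"clipped: {clipped:.1%}, near clipping: {near:.1%}")
```

An exposure like this, with some pixels near the white level but none at it, is the "expose to the right" optimum for raw: maximum signal, hence minimum visible noise, with no lost highlights.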