Sorry to add to the geeky part of the discussion.
Mentioning the loss of resolution due to Sony's more or less logarithmic conversion sounds bad. But once one realizes that the dominant noise source at higher illumination levels is shot noise, a noise source that grows with the light level, it becomes clear that the full linear resolution provided by Nikon and many back makers is of little practical use. The extra resolution in the highlights is completely swamped by shot noise.
A linear analog to digital converter devotes half its resolution to the highest full stop of its dynamic range. So in the case of a 14 bit ADC, the highest EV of the total range is divided into 8192 values. One EV down, the resolution is 4096 values, and so on down to the last bit, the one in the deepest shadows, where an entire EV is expressed by a single bit. Apart from the rather ridiculous resolution the highest EV gets, this amount of resolution is not even present in the signal going into the ADC. Shot noise will reduce the SNR at the top of the illumination range to, say, 45 dB, which is a ratio of 1:178. So most of the ADC resolution is used to express noise, which is of little practical value. Sony is no doubt aware of this, and reduces the resolution to a more useful range, reducing file size.
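For anyone who wants to play with these numbers, here is a quick Python sketch of that arithmetic. The 45 dB full-scale SNR is just the assumed figure from above, not a measured sensor value:

```python
import math

BITS = 14          # linear ADC resolution
TOP_SNR_DB = 45.0  # assumed shot-noise-limited SNR at full scale

total_codes = 2 ** BITS  # 16384 code values in total
for stop in range(BITS):
    # A linear ADC gives the top stop half of all codes (8192),
    # the next stop a quarter (4096), and so on down to a single code.
    codes_in_stop = total_codes // 2 ** (stop + 1)
    # Shot noise scales as sqrt(signal), so SNR falls by
    # 10*log10(2) (about 3 dB) per stop below full scale.
    snr_db = TOP_SNR_DB - 10 * math.log10(2) * stop
    snr_ratio = 10 ** (snr_db / 20)  # 45 dB -> roughly 1:178
    print(f"stop -{stop:2d}: {codes_in_stop:5d} codes, "
          f"shot-noise SNR ~ {snr_db:4.1f} dB (1:{snr_ratio:.0f})")
```

Run it and you see the top stop gets 8192 codes while shot noise only allows about 178 distinguishable levels at full scale; the lower stops sit even deeper inside the noise.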
As long as shot noise is large enough to dominate the quantization step of the Sony approach at each level of illumination, it acts as dither and ensures that all tonal gradations remain smooth.
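If you want to convince yourself of the dither effect numerically, here is a rough simulation. The 64-electron step size and the Gaussian approximation of shot noise are my own assumptions for illustration, not Sony's actual tone curve:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.linspace(1000, 1200, 100_000)  # smooth ramp, in electrons

step = 64  # assumed coarse quantization step (tone-curve style)

# Without noise: quantization produces hard bands.
clean_q = np.round(signal / step) * step

# With shot noise (std = sqrt(signal)): the noise dithers the quantizer.
noisy_q = np.round((signal + rng.normal(0, np.sqrt(signal))) / step) * step

# Average over small windows: the dithered version tracks the ramp,
# the clean version stays stuck on the quantization bands.
w = 1000
true_avg = signal.reshape(-1, w).mean(axis=1)
clean_avg = clean_q.reshape(-1, w).mean(axis=1)
noisy_avg = noisy_q.reshape(-1, w).mean(axis=1)
print("max banding error, no noise :", np.abs(clean_avg - true_avg).max())
print("max banding error, dithered :", np.abs(noisy_avg - true_avg).max())
```

Without noise the windowed averages sit on hard bands, with errors up to half a step; with shot noise as dither they track the true ramp to within a couple of electrons.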
The Sony a900 offered both this compression algorithm and uncompressed RAW. I was never able to find any visible difference between the two.