What if I'm not increasing ISO ... a low dynamic range scene which I would "overexpose" to get ETTR, so I end up with a histogram empty of data in the left 1/4-1/3. Perhaps even if it isn't exposed to the right (example attached), there still isn't any data on the left, and in this case none on the right side of the histogram either.
In that case, it helps to shoot at 16-bit precision. When exposure is high, or even ETTR, the accuracy of the signal capture is relatively high (a high S/N ratio). Increasing the precision at which the levels are recorded then helps to create more robust data for Raw conversion (similar to 8-bit/channel versus 16-bit/channel postprocessing, but here we compare 14-bit to 16-bit encoding).
The bits are not only used to cover the DR, but also to encode the more accurate signal levels more precisely. A 1-bit ADC can record a huge DR, a 0 for the deepest black and a 1 for the lightest white, but without any precision for intermediate tones. With 14 bits we can encode luminance differences as small as 1/2^14 = 1/16384, and with 16 bits differences as small as 1/2^16 = 1/65536, i.e. 4 times as precise.
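To put numbers on that, here is a minimal Python sketch (my own illustration, not any converter's code) comparing the quantization step and worst-case rounding error of 14-bit versus 16-bit linear encoding of a signal normalized to [0, 1]:

```python
# Compare quantization granularity of 14-bit vs 16-bit linear encoding.
# Assumes the signal is normalized to the range [0.0, 1.0].
for bits in (14, 16):
    levels = 2 ** bits          # number of distinct code values
    step = 1.0 / levels         # smallest encodable luminance difference
    max_error = step / 2.0      # worst-case rounding error when quantizing
    print(f"{bits}-bit: {levels} levels, step = 1/{levels} "
          f"({step:.2e}), max rounding error = {max_error:.2e}")
```

The 16-bit encoding has 4x as many levels, so every step (and every worst-case rounding error) is 4x smaller.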
Does the 16-bit file offer anything at all in flexibility in post-processing?
Yes, but it already helps earlier, by allowing a more precise demosaicing and Raw conversion.
Does the process of making a 16-bit TIFF from a 14-bit Raw produce an identical result, or are there perhaps some small gains to be had from making a 16-bit TIFF from a 16-bit Raw?
That is a post-capture conversion. It doesn't help the demosaicing and Raw conversion much, since no higher-precision data is added; the range is only stretched and divided more finely. The 16 bits do help when the demosaiced data is manipulated, e.g. gamma precompensated or contrast adjusted, because intermediate tones have up to 4x higher precision (which helps during cascaded processing steps, but is overkill for output). Raw converters like Capture One already do most of the calculations at higher bit depths for higher precision.
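As a toy illustration of those cascaded steps (my own sketch with made-up adjustment values, not Capture One's actual pipeline), re-quantizing to integer code values after each edit accumulates more rounding error at 14-bit than at 16-bit intermediate precision:

```python
# Sketch: cascaded edits accumulate rounding error faster at lower bit depth.
# Hypothetical pipeline: gamma precompensation, then a mild contrast boost,
# re-quantizing to integer code values after each step.

def quantize(x, bits):
    """Round a normalized [0, 1] value to the nearest code value."""
    levels = 2 ** bits - 1
    return round(x * levels) / levels

def pipeline(x, bits):
    x = quantize(x ** (1 / 2.2), bits)          # gamma precompensation
    x = quantize(0.5 + 1.2 * (x - 0.5), bits)   # contrast adjustment
    return x

reference = pipeline(0.42, 64)  # near-float precision as the "ideal" result
for bits in (14, 16):
    err = abs(pipeline(0.42, bits) - reference)
    print(f"{bits}-bit intermediates: error vs. float = {err:.2e}")
```

The absolute errors are tiny for one or two edits, but every additional adjustment adds another rounding step, which is why converters keep their intermediate results at high precision.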
My unscientific "logic" based on this discussion tells me there is no point in shooting 16-bit on scenes with lower dynamic range, or probably when pushing the ISO past 400, since all of the available dynamic range can be encoded precisely/accurately enough with 14 bits.
DR is not the sole reason to use 16-bit encoding. The old analogy of a ladder applies. The height between the first and last rung is the DR (it can be a long or a short ladder); the number of bits determines the number of rungs between the lowest and the highest, and thus how precisely intermediate levels can be reached and how small the step is from one level to the next.
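To put rough numbers on the ladder (assuming a linear encoding and, purely for illustration, a sensor DR of 14 stops):

```python
# Ladder analogy in numbers: DR is the ladder's height, bits are its rungs.
# Assumes linear encoding and a hypothetical sensor DR of 14 stops.
sensor_dr_stops = 14  # ladder height: set by the sensor/ADC, not the file

for bits in (14, 16):
    rungs = 2 ** bits
    # With linear encoding, each stop down halves the available code values:
    top_stop = rungs // 2                          # codes in the brightest stop
    bottom_stop = rungs // (2 ** sensor_dr_stops)  # codes in the darkest stop
    print(f"{bits}-bit: {rungs} rungs, {top_stop} codes in the top stop, "
          f"{bottom_stop} code(s) in the darkest stop")
```

The ladder height is the same in both cases, but the 16-bit encoding puts 4 rungs where the 14-bit one has only 1, which is exactly the precision gain in the intermediate tones.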
It would require more in-depth analysis to establish whether the IQ3 100's Analog to Digital Converters (ADCs) achieve a taller ladder, or only add more rungs to it, but its circuits do warrant the use of 16-bit encoding, since it effectively expands the recordable DR beyond 14 bits. More importantly, it also increases precision, which helps in the demosaicing of the ETTR signal levels and produces more solid files for postprocessing.
Cheers,
Bart