Those images are much too small to see any difference, and the current monitor I'm using is not particularly good.
Those are synthetic images whose only purpose is to demonstrate didactically that
more levels do not always mean anything useful; they come from the same RAW file. Refer to Emil's numbers to convince yourself of a real RAW case.
It allows the noise to be quantized more accurately, and thus leads to less posterization risk. Keep in mind that these RAW levels will undergo a gamma adjustment (boosting shadow contrast and compressing highlight contrast) before being displayed.
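To see what that gamma adjustment does to the distribution of RAW levels, here is a minimal Python sketch (assuming a hypothetical 12-bit linear capture and a plain 1/2.2 gamma curve, not any camera's real tone curve): the darkest quarter of the 8-bit output is fed by only a few hundred RAW levels, while the brightest quarter swallows nearly half of them.

```python
import numpy as np

# Map hypothetical 12-bit linear RAW levels through a plain 1/2.2 gamma
# curve to 8-bit output, and count how many RAW levels feed each quarter
# of the 8-bit range.
raw_levels = np.arange(4096) / 4095.0          # normalized linear RAW values
out_8bit = np.round(raw_levels ** (1 / 2.2) * 255).astype(int)

for lo, hi in [(0, 63), (64, 127), (128, 191), (192, 255)]:
    n = np.count_nonzero((out_8bit >= lo) & (out_8bit <= hi))
    print(f"8-bit output {lo:3d}-{hi:3d}: fed by {n:4d} RAW levels")
```

In other words, the abundant highlight levels of a linear RAW file get squeezed together on display, which is why having more of them is not automatically useful.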
Which is better, a computer that can perform a task in one millisecond, or a computer that can do it in one microsecond? For a human being, both are equally good, even though the second computer is 1,000 times faster.
If posterization is not an issue at any arbitrary exposure (and this is the point of the whole story), it is irrelevant that a higher exposure could theoretically provide even less posterization risk.
BTW, if your scene contains a plain-colour area (like a sky), the more you expose it, the less noise it will have, and hence the greater the posterization risk when converting to 8-bit JPEG. I have experienced this issue on HDR interiors: walls left absolutely clean of noise by RAW overexposure, which sounds like a good idea at first, are quite prone to displaying bands when converted to JPEG. So, funnily enough, in some cases ETTR can lead to posterization more easily than a lower exposure (posterization in the highlights due to the absence of noise, never due to a lack of levels in the RAW capture).
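As a rough illustration of that dithering effect of noise, here is a minimal Python sketch (hypothetical numbers, a plain rounding quantizer rather than a real JPEG pipeline): a narrow, noise-free tonal sweep collapses into a few distinct 8-bit values (visible bands), while the same sweep with a little noise spreads across many more values.

```python
import numpy as np

rng = np.random.default_rng(0)

# A narrow tonal sweep (e.g. a bright, smooth wall) quantized to 8 bits,
# once perfectly clean and once with a little sensor-like noise.
gradient = np.linspace(0.70, 0.72, 2000)
noisy = gradient + rng.normal(0.0, 0.004, gradient.shape)

clean_8bit = np.round(np.clip(gradient, 0, 1) * 255).astype(int)
noisy_8bit = np.round(np.clip(noisy, 0, 1) * 255).astype(int)

# The clean sweep collapses into a few flat steps (banding); the noise
# dithers the same tones across many more output values.
print("distinct 8-bit values, clean:", len(np.unique(clean_8bit)))
print("distinct 8-bit values, noisy:", len(np.unique(noisy_8bit)))
```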
I already posted an example of this; no one has yet posted a practical example of the opposite, the
Myth of ETTR levels!
Regards