ETTR means that you expose as far to the right as possible without blowing out any highlights. The way I shoot, I preserve highlights, and I agree, of course, that any essential highlights should not be blown out. If you have a low-DR scene you can underexpose by one stop with no problem, but if you have a high-DR scene this is not good if you also want to see detail in the shadows. Therefore I have adopted my approach, which also lends itself to an HDR merge if needed. So all cases are covered. The downside is many more exposures. The upside is no time or attention wasted on non-essential technical details when shooting. In many cases there is time enough, but when the light and weather really play out in the landscape, one needs to maximize attention on the shooting, composition, moving around, etc. In such situations errors are made by even the most experienced shooters.
Believe me, I have been there ....
Yes, I agree with you that if you really need information in the shadows, it might be a good idea to blend several exposures in HDR software, but honestly, I have never had good results with that. To me, it looks like geeking out: let's get all the information we can. But my experience is that photography is as much about letting information go as it is about obtaining it. You need to discard stuff. This desire to include everything, to mourn every white and black pixel as a tragic loss to clipping that would have been better recovered, I don't know, I can't really adopt that attitude. For me, it's like this: I know what I want to get, and if I'm getting it, I'm happy. To me, that means getting the blacks to look clean and smooth, defined by the subject and not by sensor readout noise or banding. If the blacks look good, I'm quite satisfied to leave them black; I have no great need to push 2 EV of shadow detail out of them and make the blacks look muddy brown.

Actually, the point where I fell in love with large-sensor cameras was when I saw a picture of deep dark rock behind cascading waterfalls, rock that was perfectly defined and yet deep black, and I understood that I couldn't make that shot with either C41 or E6 film, or with my small-sensor digital. The slide has no color definition in the shadows, the negative would render them as an empty hazy blob, and the small sensor would render them brown due to noise. Only a large-sensor digital would define them as a pure, 3D volume of darkness.
The ETTR technique assumes there's a problem with shadows when shooting digital, but I can't confirm that, not with low-ISO shots on modern large-sensor cameras. In fact, I find the opposite to be true: the shadows are rendered with the most perfect tonality, and that's the greatest asset of digital photography compared to film.

Also, Michael's explanation, which says that the left side of the histogram is defined by a very limited binary number space, sounds flawed to me. In my understanding, the sensor element uses the photoelectric effect to accumulate charge from photons in a capacitor, and at readout the ADC converts the charge of the capacitor, from 0 V to MAX V, into a binary number that defines luminance. Why the LSB of that number would carry more information than the MSB, I have no idea. The information is limited by the photon count, by the readout noise that can corrupt the information in pixels adjacent to the readout conduits, by the ADC noise, EM noise from the PCB, and similar things, but once the ADC has delivered its number, all bits are equal. Sure, if a pixel is so dark it has a luminance value of 3, there isn't much between that and zero, only 2 bits of data to work with. But if you're working in 8-bit space with pixel brightnesses of 253 to 255, you're still working with those same two least-significant bits, because every other bit is already set to 1. So the entire logic that refers to a limited number space at the left side of the histogram, compared to the right, is flawed.
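To put the bit-space point in code: this is just a toy illustration of my own (the `bit_pattern` helper is mine, nothing camera-specific), showing that the dark values 0-3 and the bright values 252-255 each span only the two least-significant bits of an 8-bit number:

```python
def bit_pattern(v: int, width: int = 8) -> str:
    """Return the fixed-width binary representation of a pixel value."""
    return f"{v:0{width}b}"

# Dark values 0-3 and bright values 252-255 each vary only in the two
# least-significant bits; the upper six bits are constant within each
# group (all 0s for the dark group, all 1s for the bright group).
for v in (0, 1, 2, 3, 252, 253, 254, 255):
    print(f"{v:3d} -> {bit_pattern(v)}")
```

Whether near black or near white, the same two bits are doing the work; what differs between shadows and highlights is the photon count and the noise, not the arithmetic of the number itself.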