I wish that were the case! If glare were actually applied evenly, essentially adding a fixed amount of light to every point in the image (which could equal four or five stops in the shadows but only a fraction of a stop in the highlights), it would work like a giant fill flash, reducing the dynamic range of the scene and making it easier to capture. Unfortunately it is not applied evenly, so it doesn't reduce the DR across the whole frame, merely where the lens flare is.
Veiling glare adds mostly to the shadows, where signal levels are low. Since the glare is a product of reflections within and between lens elements and groups (aggravated by dust and atmospheric deposits), it is not confined to the regions where the bright light is; besides, the lens receives light from the whole scene across its surface before anything is finally focused on the sensor.
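To put rough numbers on the "giant fill flash" idea above (purely illustrative figures, assuming a hypothetical 14-stop scene and a uniform glare level 4 stops below the highlight, both made up):

```python
import math

# Hypothetical illustration: relative scene luminances (linear, highlight = 1.0)
# spanning 14 stops, plus a uniform veiling-glare level 4 stops below the
# highlight (1/16 of the highlight luminance). Values are assumptions.
highlight = 1.0
shadow = highlight / 2**14          # deepest shadow, 14 stops down
glare = highlight / 2**4            # uniform glare "fill", 4 stops below highlight

def stops(ratio):
    """Express a luminance ratio in photographic stops (log base 2)."""
    return math.log2(ratio)

dr_before = stops(highlight / shadow)
dr_after = stops((highlight + glare) / (shadow + glare))

print(f"Scene DR without glare:      {dr_before:.1f} stops")
print(f"Scene DR with uniform glare: {dr_after:.1f} stops")
# The highlight barely changes (about +0.09 stop) while the deepest shadow is
# lifted by roughly 10 stops, which is why evenly applied glare would behave
# like a giant fill flash.
```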
Fortunately, it's usually easy to completely shield it with a well-placed hand forward of the lens but outside the field of view.
Some of it, yes, but it would take unwieldy, deep, petal-shaped lens hoods to really do a good job. Hence the, on average, mediocre shielding people use, if it's even given proper attention to begin with. I use a different lens hood on my TS-E 24mm II when not using it shifted, or only a little. The EW-88C, to which I added flocking material, does a better job even though it was designed for a different lens. I use a separate hood (Lee bellows) if I want something deeper, and have a petal-shaped design ready for 3D printing if that makes enough of an additional difference.
Doesn't work when things are moving. In landscape photography, wind is the usual culprit.
On the contrary, it works fine in most cases. It's often not the horizon line or other moving features that sit against the brightest parts of the image. Most of the information comes from a single shot exposed for the shadows, and only parts come from the ETTR highlight shot.
I often do that. Functionally, it's the same as halving the ISO - you're collecting twice as many photons by exposing for twice as long, so each photon counts for half as much. It certainly reduces photon shot noise. I'm not sure that it actually increases DR, though, since the read noise is also counted twice.
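A back-of-the-envelope sketch of that trade-off (the signal S and read noise R below are made-up electron counts, not from any particular sensor):

```python
import math

# Rough SNR arithmetic: signal S electrons per frame, Poisson shot noise
# sqrt(S), and a fixed read noise R electrons added once per readout.
S = 100.0    # hypothetical shadow signal per frame, in electrons
R = 3.0      # hypothetical read noise per readout, in electrons

# One frame
snr_single = S / math.sqrt(S + R**2)

# Two frames averaged: signal adds coherently, shot noise and read noise add
# in quadrature, so the read noise is indeed "counted twice".
snr_two_avg = 2 * S / math.sqrt(2 * S + 2 * R**2)

# One frame exposed twice as long (ISO halved): same photons, read noise once.
snr_double_exp = 2 * S / math.sqrt(2 * S + R**2)

print(f"single frame       SNR: {snr_single:.1f}")
print(f"two frames averaged SNR: {snr_two_avg:.1f}")
print(f"one 2x exposure    SNR: {snr_double_exp:.1f}")
# Averaging gains sqrt(2) over a single frame, but falls slightly short of a
# single doubled exposure because the read noise is paid twice.
```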
Yes, photon shot noise gets reduced, but averaging also averages down the read noise. It does that so well that pattern noise becomes more visible. That's where improved sensors (and/or black-frame subtraction) shine: by the absence of pattern noise. The patterns become more noticeable because we humans are good at pattern recognition; even where there is none, we see detail (like shapes in clouds, or faces in moon rocks).
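A quick simulation of what averaging does and does not remove (the noise levels are invented: a fixed-pattern offset plus random read noise, no scene signal):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 16
shape = (256, 256)

# Hypothetical sensor model with made-up numbers: a fixed-pattern offset that
# is identical in every frame, plus random read noise that varies per frame.
fixed_pattern = rng.normal(0.0, 2.0, shape)                  # e.g. pixel/column offsets
frames = fixed_pattern + rng.normal(0.0, 3.0, (n_frames, *shape))

avg = frames.mean(axis=0)

print(f"single frame noise (pattern + random): {frames[0].std():.2f}")
print(f"after averaging {n_frames} frames:            {avg.std():.2f}")
# The random component drops by sqrt(16) = 4x (3.0 -> 0.75), but the fixed
# pattern (2.0) is untouched by the averaging and now dominates what is left.

# A master dark/bias frame built from many exposures shares the same fixed
# pattern, so subtracting it removes the pattern at the cost of a little
# extra random noise:
master_dark = fixed_pattern + rng.normal(0.0, 3.0, (64, *shape)).mean(axis=0)
print(f"after dark-frame subtraction:          {(avg - master_dark).std():.2f}")
```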