What happens at the camera sensor when using the high (H) ISO settings?
Does setting the ISO to H and then dialing in +1 EV of exposure compensation simply set the sensor to the next lower sensitivity, or does it work the way it would with film?
There is no single answer. The following relates to raw data, not to in-camera JPEGs.
MFDBs (medium-format digital backs), at least those I analyzed, simulate ISO steps (i.e. steps where the "higher sensitivity" is not achieved by increasing the analog gain) merely by recording the setting in metadata; the raw data itself is not affected at all, and it is the raw processor's task to account for the lower exposure. In other words, the ISO simulation is plain underexposure.
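A minimal sketch of that metadata-only approach (the variable names and the factor of 2.0 are my illustration, not any vendor's actual format): the raw pixel value is stored unchanged, and the raw converter brightens it later in floating point, so nothing in the file is clipped by the simulated ISO step.

```python
# Hypothetical illustration: raw data is written as-is, the simulated
# ISO step is only a tag that the raw processor applies afterwards.
raw_value = 1500          # 12-bit sensor value, stored unchanged in the file
metadata_iso_boost = 2.0  # one simulated stop, recorded only in metadata

# The raw converter applies the boost during processing, in float:
processed = raw_value * metadata_iso_boost
print(processed)  # 3000.0 -- the raw file itself was never modified
```

The key point is that the multiplication happens after the file is written, so the full recorded range is still available to the converter.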
DSLRs do more eyewash, and it causes real damage. Their simulated ISO steps are achieved by adjusting the raw data in-camera. This is just like increasing the exposure in raw processing, except that multiplying the pixel values before the raw file is written reduces the dynamic range. For example, with 12-bit raw data the maximum pixel value is 4095 (or even less). If a multiplication by two takes place in-camera, anything between 2048 and 4095 becomes clipped, because the new value would exceed the numerical range.
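The clipping described above can be sketched as follows (assuming a plain doubling for one simulated stop, which is a simplification of whatever scaling a given camera actually applies):

```python
# Sketch of the DSLR behaviour: the camera multiplies 12-bit raw values
# before writing the file, so any original value above 2047 hits the
# 4095 ceiling and one stop of highlight range is irreversibly lost.
MAX_12BIT = 4095

def simulate_iso_step(raw_value):
    # doubling = +1 simulated ISO stop, clamped to the 12-bit maximum
    return min(raw_value * 2, MAX_12BIT)

print(simulate_iso_step(1000))  # 2000 -- survives intact
print(simulate_iso_step(3000))  # 4095 -- clipped: highlight detail gone
```

Because the clipped values are written into the raw file, no raw converter can recover them afterwards, unlike the metadata-only scheme.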
Note that not only the H settings are fake. For example, the Canon 5D2 offers ISO 100-6400 plus H1 (12800) and H2 (25600), yet in fact ISO 3200 is the highest real ISO. (Another aspect: even though ISO 3200 on the 5D2 is "real", it offers no true gain over ISO 1600, i.e. it should not be used, for it reduces the DR by exactly one stop.)