What you are describing cannot, AFAIK, be improved by super-resolution, at least not with established, generic algorithms. With proper lowpass filtering in front of the sensor, the signal is already bandlimited. Shifting the sensor by a tiny amount then records no new information, so it is hard to see a significant gain from combining multiple images (beyond SNR improvements).
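A quick 1-D sketch of why that is (a hypothetical toy example of mine, not any particular product's algorithm): for a periodic, properly bandlimited signal, the samples you would get from a sub-pixel sensor shift can be predicted exactly from the unshifted samples, so the shifted exposure adds nothing new:

```python
import numpy as np

N = 64
n = np.arange(N)
# a periodic signal bandlimited well below Nyquist (harmonics 3 and 7)
x = np.sin(2*np.pi*3*n/N) + 0.5*np.cos(2*np.pi*7*n/N)

# "shift the sensor" by 0.3 of a pixel: predict the shifted samples
# from the original ones via an ideal (Fourier) interpolator
shift = 0.3
k = np.fft.fftfreq(N) * N                     # integer frequency bins
predicted = np.fft.ifft(np.fft.fft(x) * np.exp(2j*np.pi*k*shift/N)).real

# ground truth: the analytic signal evaluated at the shifted positions
truth = np.sin(2*np.pi*3*(n+shift)/N) + 0.5*np.cos(2*np.pi*7*(n+shift)/N)

# difference is at machine precision: the shift recorded no new information
print(np.max(np.abs(predicted - truth)))
```

The prediction error is on the order of floating-point round-off, which is the whole point: once the signal is bandlimited, the sub-pixel shifted samples are redundant.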
Although the posts about Super Resolution are a bit off-topic, the aliasing part is relevant for the D800E.
Maybe this little experiment will shed some (hopefully not false color) light on the matter. I've taken 18 shots of my resolution target, each displaced horizontally by 15 micron, on my 1Ds3 with its 6.4 micron sensel pitch, at a distance that produced a magnification factor of 0.0324x (1 : 30.86). This should cover a total horizontal offset of the projected image of some 8.75 micron, slightly more than one sensel pitch.
That gives a series of images with sub-pixel offsets in the horizontal direction only, while the aliasing in each individual image is the same in both the horizontal and the vertical direction.
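For those who want to check the numbers, the step geometry works out like this (a back-of-the-envelope sketch using only the figures quoted above):

```python
mag = 0.0324        # magnification factor (1 : 30.86)
step_um = 15.0      # horizontal displacement per shot, at the target
shots = 18
pitch_um = 6.4      # 1Ds3 sensel pitch

image_step = step_um * mag        # shift of the projected image per shot
total = shots * image_step        # total offset covered by the series

print(round(image_step, 3))       # 0.486 micron per shot on the sensor
print(round(total, 2))            # 8.75 micron in total
print(round(total / pitch_um, 2)) # 1.37 sensel pitches: slightly more than one
```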
Here is the Super Resolution result from PhotoAcute:
and here is one of the input frames, upsampled 2x with ImageMagick:
You can (hopefully) see clearly that the horizontal sub-pixel oversampling made it possible to resolve the vertical spokes without aliasing artifacts, while the horizontal spokes, which had mostly aliasing to guide the super resolution (and no vertical sub-pixel offset), still show the false color/aliasing artifacts in their full 'glory', only larger and sharper.
The yellow circle marks the Nyquist frequency of the smaller originals at a 92-pixel diameter, also resized 2x to a 184-pixel diameter. Where the original image didn't quite reach Nyquist, the Super Resolution image indeed increased the resolution all the way to the original Nyquist frequency, and upsampled the result 2x to allow displaying that increased resolution. There are even some hints of aliasing that could be mistaken for detail where the orientation of the features happens to align with the sensel grid.
This hopefully demonstrates that it is the sub-pixel oversampling that leads to the added resolution, not the aliasing artifacts. The reason the PhotoAcute person said that a non-OLP-filtered image would help the Super Resolution results is the higher modulation near Nyquist, not that aliasing is helpful; it isn't (in their algorithm), as demonstrated.
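That mechanism can be sketched in 1-D (again a toy example of mine, not PhotoAcute's actual algorithm): two frames that each sample only every second pixel alias a high frequency, but interleaving them at their known sub-pixel offset doubles the effective sampling rate and recovers the true frequency:

```python
import numpy as np

f = 22/64                    # signal frequency in cycles/pixel, above the
                             # per-frame Nyquist of 0.25 cycles/pixel
pos_a = np.arange(0, 64, 2)  # frame A samples the even pixel positions
pos_b = pos_a + 1            # frame B is offset by half a sample spacing
frame_a = np.cos(2*np.pi*f*pos_a)
frame_b = np.cos(2*np.pi*f*pos_b)

# a single frame alone: the FFT peak lands at the aliased frequency
spec_a = np.abs(np.fft.rfft(frame_a))
alias_bin = np.argmax(spec_a[1:]) + 1
print(alias_bin / (len(frame_a) * 2))   # 0.15625 cycles/pixel: aliased, wrong

# interleave the two frames: sampling rate doubles, the true frequency appears
combined = np.empty(64)
combined[0::2] = frame_a
combined[1::2] = frame_b
spec_c = np.abs(np.fft.rfft(combined))
true_bin = np.argmax(spec_c[1:]) + 1
print(true_bin / len(combined))         # 0.34375 cycles/pixel: the real f
```

The sub-pixel offset between the frames is what carries the extra information; the aliased peak in each individual frame is merely a symptom, not the source of the recovered detail.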
When you look beyond the Wikipedia text and read some of the PDFs referenced there, you'll see that sub-pixel sampling is a useful mechanism, whereas aliasing can only help to determine those spatial offsets, and then only in very unlikely setups. The aliasing artifacts themselves will not boost resolution, not least because aliased frequencies are by definition above Nyquist.