In all honesty, regular video frames are bad compared to same-resolution stills. Besides the usual motion blur from long shutter speeds, there are two more reasons. First, even normal HD carries only half or one fourth of the color resolution compared to luminance, so the chroma resolution of typical 4:2:0 HD video is only 960x540 pixels. Only top-tier professional video cameras can shoot 4:4:4 with a full set of color samples, and even that is rarely delivered at full color resolution, both to save bandwidth and because it does not matter much with a moving image.
So video throws out information that matters little perceptually (or was never there in the first place, or cannot be reproduced by common display tech). Why is that a problem? The Bayer CFA does something similar in cameras, and so does JPEG still-image compression (and, to a degree, chroma noise reduction).
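To put numbers on the subsampling, here is a quick Python sketch using the textbook chroma divisors for 4:4:4 / 4:2:2 / 4:2:0 (these are the standard scheme definitions, not figures from any particular camera):

```python
# Effective chroma resolution for common Y'CbCr subsampling schemes.
# Factors are (horizontal divisor, vertical divisor) for the chroma planes.
SUBSAMPLING = {
    "4:4:4": (1, 1),  # full chroma resolution
    "4:2:2": (2, 1),  # half horizontal chroma resolution
    "4:2:0": (2, 2),  # half horizontal and half vertical
}

def chroma_resolution(width, height, scheme):
    h_div, v_div = SUBSAMPLING[scheme]
    return width // h_div, height // v_div

for scheme in SUBSAMPLING:
    w, h = chroma_resolution(1920, 1080, scheme)
    frac = (w * h) / (1920 * 1080)
    print(f"{scheme}: chroma {w}x{h} ({frac:.0%} of luma samples)")
```

For 1080p this prints 960x540 for 4:2:0, i.e. only a quarter of the luma sample count.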
The second reason is the extremely efficient compression used for video: delivery formats typically carry 5% of the original data stream, often even less.
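For a feel of how far below 5% delivery can sit, here is a back-of-the-envelope calculation; the 8 Mbit/s delivery figure is my own illustrative assumption for HD streaming, not a measured number:

```python
# Rough ratio between an uncompressed 8-bit 4:2:0 stream and a
# typical delivery bitrate. The 8 Mbit/s figure is an assumption
# chosen to be in the ballpark of HD streaming services.
width, height, fps = 1920, 1080, 30
bits_per_pixel = 8 * 1.5           # 8-bit 4:2:0: full-res luma + quarter-res Cb and Cr
raw_bps = width * height * bits_per_pixel * fps
delivery_bps = 8e6                 # assumed delivery bitrate

print(f"raw:      {raw_bps / 1e6:.0f} Mbit/s")
print(f"delivery: {delivery_bps / 1e6:.0f} Mbit/s "
      f"({delivery_bps / raw_bps:.1%} of raw)")
```

That comes out around 1% for these assumptions, so "often even less" than 5% is no exaggeration.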
Again, video does its best to provide perceptually good results at minimum bandwidth cost. Since two consecutive video frames are often very similar, it only makes sense to exploit this in compression (for many but not all applications).
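The simplest version of that idea is delta coding: store one frame fully, then only the difference to the next. A toy numpy sketch with synthetic frames (real codecs use motion compensation and transform coding; this only shows the principle):

```python
import zlib
import numpy as np

# Two synthetic "consecutive frames": identical except for one small
# region that changed between them. Purely illustrative data.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 200, (1080, 1920), dtype=np.uint8)
frame2 = frame1.copy()
frame2[400:480, 800:960] += 10  # a small "moving" region; rest unchanged

# Compress both frames independently (intra-only)...
intra = len(zlib.compress(frame1.tobytes())) + len(zlib.compress(frame2.tobytes()))

# ...versus one full frame plus the frame-to-frame difference.
diff = (frame2.astype(np.int16) - frame1.astype(np.int16)).astype(np.int8)
inter = len(zlib.compress(frame1.tobytes())) + len(zlib.compress(diff.tobytes()))

print(f"two full frames:    {intra:>9} bytes")
print(f"frame + difference: {inter:>9} bytes")
```

The difference image is almost entirely zeros, so it compresses to almost nothing, and the second frame rides along nearly for free. That is the redundancy inter-frame compression lives off.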
I do agree that one should be careful about carrying perceptual results over from video to still images.
While it has been mentioned that our spatial resolution is effectively "less" for moving images than for still images, there is also the issue of aliasing. If your camera/scaler/... introduces aliasing, it can create annoying artifacts in moving video that crawl across the image (often counter to the real motion). In still images, the same amount of aliasing can be acceptable. Thus, I think video processing equipment tends to sacrifice some sharpness to reduce visible artifacts, while still-image equipment may let through more aliasing ("fake detail"). If you are displaying your 80MP images on your 4k LCD TV using its built-in scaling, results may or may not be optimal...
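To see the sharpness-vs-aliasing trade directly, here is a minimal numpy sketch comparing naive decimation with a crude box prefilter; the test pattern and filter are my own toy choices, not how any real scaler works:

```python
import numpy as np

# A synthetic high-frequency test pattern: fine vertical stripes
# near the Nyquist limit of the original sampling grid.
x = np.arange(1024)
pattern = ((np.sin(2 * np.pi * 0.45 * x)[None, :] > 0) * 255).astype(np.uint8)
img = np.repeat(pattern, 64, axis=0)  # 64x1024 striped image

factor = 4

# Naive decimation: keep every 4th pixel, no low-pass filter.
# The fine stripes fold back as a coarse, fake stripe pattern (aliasing).
decimated = img[::factor, ::factor]

# Prefiltered: average each 4x4 block before sampling (a crude low-pass).
# Softer, but no invented structure survives.
h, w = img.shape
prefiltered = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

print(f"decimated row std:   {decimated[0].std():6.1f}  (large: alias pattern survives)")
print(f"prefiltered row std: {prefiltered[0].std():6.1f}  (small: stripes average to gray)")
```

In a still you might not mind the aliased version; in video, the fake pattern would shimmer and crawl with any motion, which is exactly why video pipelines filter more aggressively.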
-k