Does this mean that regular video doesn't read the whole sensor and then downsample? I never thought digital video worked that way...
It seems that many/most still cameras cannot read all of their 20 million or so pixels at a rate of 30 (or 60) times per second, so they use some form of subsampling, such as reading every third row. That makes aliasing worse than reading all the pixels and then binning them when down-converting to 1920x1080. The folks at RED are happy to provide evidence of the asymmetrical moiré this causes in Canon DSLRs, for example.
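To see why skipping rows aliases worse than binning them, here is a minimal sketch (my own illustration, not from any camera's actual pipeline): a fine pattern near the sensor's Nyquist limit is downsampled 3:1 both ways. Skipping keeps the aliased false detail at nearly full strength, while averaging each group of three rows acts as a crude low-pass filter and attenuates it.

```python
import numpy as np

# Hypothetical 1D stand-in for sensor rows: a fine pattern at a spatial
# frequency (0.45 cycles/row) close to the sensor's Nyquist limit, well
# above the Nyquist limit of the 3:1 downsampled grid.
n = 3000
rows = np.arange(n)
fine_detail = np.sin(2 * np.pi * 0.45 * rows)

# Method 1: line skipping -- read only every third row.
skipped = fine_detail[::3]

# Method 2: binning -- average each group of three rows, then decimate.
binned = fine_detail[: n - n % 3].reshape(-1, 3).mean(axis=1)

# RMS energy surviving into the downsampled signal. For a pattern the
# small output grid cannot represent, any surviving energy shows up as
# aliased false detail (moiré).
rms_skip = np.sqrt(np.mean(skipped ** 2))
rms_bin = np.sqrt(np.mean(binned ** 2))
print(f"skipping RMS: {rms_skip:.3f}")
print(f"binning  RMS: {rms_bin:.3f}")
```

Skipping passes the pattern through untouched (it just lands at a false low frequency), while the 3-row box average suppresses most of it. Real binning in silicon is cruder than an ideal anti-alias filter, but it is still far better than discarding rows outright.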
I do think, however, that sufficiently high sensor resolutions and readout rates could make OLPF (optical low-pass) filters unnecessary within a few years, at least in all but cheap video+stills compacts.