Thanks for the reply Graeme: I was hoping for feedback from someone who actually knows what he is talking about.
When you line skip, it's like having a very low fill factor in that direction, and that actually increases the sensor part of the MTF, making aliasing stronger. That means you don't just need an OLPF set to the new pixel pitch, but an even stronger one. ... Instead of line skipping, the solution is larger pixels.
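To make the fill-factor point concrete, here is a back-of-the-envelope sketch (my own illustration, not from Graeme's post), assuming the photosite acts as an ideal rectangular aperture whose MTF is |sinc(a·f)|. Reading every third line triples the sampling pitch while the aperture stays one pixel wide (linear fill factor 1/3), so the aperture MTF at the new Nyquist frequency stays high:

```python
import numpy as np

def aperture_mtf(f, aperture):
    """MTF of an ideal rectangular aperture of the given width.

    np.sinc(x) is sin(pi*x)/(pi*x), so this is |sinc(aperture * f)|.
    """
    return np.abs(np.sinc(aperture * f))

pitch = 1.0                    # original pixel pitch (arbitrary units)
skip = 3                       # read every third line
eff_pitch = skip * pitch       # effective sampling pitch in video mode
nyquist = 1 / (2 * eff_pitch)  # Nyquist frequency of the skipped readout

# A "large pixel" whose aperture fills the new pitch, vs. a line-skipped
# readout whose aperture is still one original pixel wide:
mtf_large_pixel = aperture_mtf(nyquist, eff_pitch)  # ~0.64
mtf_line_skip = aperture_mtf(nyquist, pitch)        # ~0.95
print(mtf_large_pixel, mtf_line_skip)
```

The skipped readout passes almost all of the signal right at its own Nyquist frequency, which is exactly why aliasing gets worse and a stronger OLPF would be needed.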
Agreed that line-skipping is still a compromise resulting from trying to adapt a sensor designed for one purpose (stills) to the different needs of another (motion). And it might be that quite soon (maybe as soon as the forthcoming Sony APS-C HD EXMOR sensor?), some "primarily stills with video on the side" camera sensors will read all photosites and use suitable binning in video mode.
I was only thinking about sensors that cannot read all photosites at video frame rates, and speculating on how one could mitigate the problems that arise from the current combination of (a.) reading only every third line and (b.) using an OLPF designed for the needs of the higher still resolution.
Do you think I am right or wrong that
(a.) reading every second line instead of every third, and
(b.) using a stronger OLPF than in a still camera
could improve the situation, by raising the sampling frequency while lowering the maximum frequency present in the sampled signal?
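For what it's worth, the arithmetic behind (a.) is simple; a minimal sketch of my own, in units of cycles per original pixel pitch:

```python
pitch = 1.0                      # original pixel pitch (arbitrary units)

# Nyquist frequency = half the sampling frequency in the skipped direction
nyq_skip3 = 1 / (2 * 3 * pitch)  # every third line read
nyq_skip2 = 1 / (2 * 2 * pitch)  # every second line read

print(nyq_skip2 / nyq_skip3)     # 1.5: 50% more alias-free bandwidth
```

Combined with an OLPF whose cutoff is lowered toward that higher video-mode Nyquist frequency (rather than the still-mode one), less of the lens-plus-sensor spectrum would fold back as aliasing.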
And what about my other idea of
(c.) microlenses extending over the unread lines
Does that not increase the fill factor, and so partially offset the increased MTF and its aliasing effect? (If it is doable at all!)
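If one models the photosite as an ideal rectangular aperture with MTF |sinc(a·f)| (a simplification of mine, not anything from the manufacturers), then microlenses that funnel light from the unread lines would widen the effective aperture from one pixel toward the full skip pitch, pulling the MTF at the video-mode Nyquist frequency back down:

```python
import numpy as np

def aperture_mtf(f, aperture):
    # Ideal rectangular aperture: |sinc(aperture * f)|
    return np.abs(np.sinc(aperture * f))

pitch, skip = 1.0, 3
nyquist = 1 / (2 * skip * pitch)  # Nyquist of the every-third-line readout

mtf_bare = aperture_mtf(nyquist, 1 * pitch)       # aperture = one pixel
mtf_microlens = aperture_mtf(nyquist, 3 * pitch)  # aperture ~ full skip pitch
print(mtf_bare, mtf_microlens)
```

The MTF at the new Nyquist frequency drops from about 0.95 to about 0.64, i.e. the wider effective aperture does part of the OLPF's job, which would support the idea, assuming such microlenses can actually be made.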