I can't imagine this hasn't been discussed before (and if it has, I would appreciate links if you have them), but I can't figure out what search terms would find it.
I have been wondering whether, instead of demosaicing an image the traditional way, there might be an advantage to combining each 2x2 block of photosites on a Bayer sensor into a single pixel. This would, of course, cut the pixel count by 75%, but it would, in essence, create a larger "super-pixel" that captures all three color values directly by combining its four photosites.
If this worked, the red channel would contain only information from the red photosites, the green channel would contain the combined information from the two green photosites, and the blue channel would contain only data from the blue photosites.
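The combining step I have in mind could be sketched like this (a minimal illustration in NumPy, assuming an RGGB layout and a simple average of the two green photosites; real raw data would also need black-level and white-balance handling, which is omitted here):

```python
import numpy as np

def superpixel_debayer(raw):
    """Collapse each 2x2 RGGB block of a Bayer mosaic into one RGB pixel.

    Assumed layout of each block:
        R G
        G B
    No interpolation across blocks: every output pixel uses only
    its own four photosites. Output is half the height and width.
    """
    r = raw[0::2, 0::2]                             # red photosites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0   # average of the two greens
    b = raw[1::2, 1::2]                             # blue photosites
    return np.stack([r, g, b], axis=-1)

# Tiny example: one 2x2 block becomes a single RGB pixel
mosaic = np.array([[100.0, 50.0],
                   [60.0, 20.0]])
pixel = superpixel_debayer(mosaic)   # R=100, G=(50+60)/2=55, B=20
```

Averaging the two greens is one arbitrary choice; summing them instead would preserve more of the green channel's light-gathering advantage at the cost of a different white balance.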
Would this produce a file with cleaner colors, since none of the values are interpolated? Would it produce better monochrome files?