From what I recall from building pinhole cameras, the red and infra-red parts of the spectrum were more prone to diffraction due to their longer wavelengths, so I would always use an IR (Red 99) filter over the front of my camera to help limit diffraction.
Yes, that would help. The formula (not a rule of thumb, but a physical fact) to calculate the diameter of the Airy 'disk' is (with a rounded constant):

d = 2.43934 x lambda x N
where lambda is the wavelength and N is the f-number. The fractions of the total power contained within the first, second, and third dark rings of the diffraction pattern are 83.8%, 91.0%, and 93.8% respectively, corresponding to the multipliers 2.43934, 4.46626, and 6.47663 (each giving the diameter out to that dark ring).
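As a quick sanity check, the formula can be sketched in a few lines of Python (the helper name and the micrometre units are my own choices):

```python
def airy_diameter_um(wavelength_nm, f_number, multiplier=2.43934):
    """Airy pattern diameter in micrometres out to a given dark ring.
    Multipliers: 2.43934 (1st ring), 4.46626 (2nd), 6.47663 (3rd)."""
    return multiplier * (wavelength_nm / 1000.0) * f_number

# Green light (555 nm) at f/8: the first dark ring spans ~10.8 um,
# while red (650 nm) at the same aperture spans ~12.7 um.
print(round(airy_diameter_um(555, 8), 1))  # 10.8
print(round(airy_diameter_um(650, 8), 1))  # 12.7
```

This also makes the original point concrete: at the same aperture, the red Airy diameter is larger than the green one in direct proportion to the wavelength.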
However, for resolution we also have to factor in the relative weight of the different wavelengths in the luminance of the signal. For the Human Visual System (HVS), resolution matters much more for Luminance than for Chrominance. Chrominance typically fluctuates much less than Luminance in real-world scenes; just check out the channels of an image in the Lab or HSV/HSL colorspaces.
The common choice for a wavelength of 555 nanometres is related to the peak of luminous efficiency
of the human eye. This also corresponds reasonably well with the common calculation for relative Luminance
from Red, Green, and Blue channels in an RGB colorspace (with e.g. the ITU-R BT.709 primaries):
Y = 0.2126 R + 0.7152 G + 0.0722 B
That weighting shows that Red is the second most important contributor to luminance, but only at a fraction of the Green contribution. Of course, it also depends on the colors of the subject we are photographing ...
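For reference, the BT.709 weighting above is trivial to compute; a minimal sketch (note these coefficients assume linear-light RGB values, not gamma-encoded ones):

```python
def rel_luminance_709(r, g, b):
    """Relative luminance Y from linear BT.709 RGB, each channel in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Pure red contributes less than a third of what pure green does:
print(rel_luminance_709(1.0, 0.0, 0.0))  # 0.2126
print(rel_luminance_709(0.0, 1.0, 0.0))  # 0.7152
```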
I have no idea if this works, or if the red lenses are less diffraction prone compared to green or blue in the Bayer array.
The diffraction that matters for photographic resolution originates in the lens we use. The microlenses and/or color filters of the Bayer CFA only concentrate or filter the total luminous flux (after lens diffraction) for each sensor element, with little further effect on resolution. An interesting observation is that the luminance resolution of the RGB channels after demosaicing is almost identical, because demosaicing relies to a large extent on luminance to reconstruct the missing two channels' data.
Having said that, diffraction only represents an upper limit when the rest of the system is perfect. As soon as the lens is not, the combined MTF will never reach that level; it will always be a bit worse. However, when diffraction becomes the dominant factor, the combined MTF will come close to the theoretical limit; a very good lens will just get a bit closer.
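That interaction can be illustrated with the diffraction-limited MTF of a circular aperture, a standard result; cascading the lens MTF and the diffraction MTF by multiplication assumes they are independent, and the lens figure below is a made-up example:

```python
import math

def diffraction_mtf(freq_cpmm, wavelength_nm, f_number):
    """Diffraction-limited MTF of a circular aperture at a spatial
    frequency in cycles/mm. The cutoff frequency is 1 / (lambda * N)."""
    cutoff = 1.0 / (wavelength_nm * 1e-6 * f_number)  # nm -> mm, cycles/mm
    s = freq_cpmm / cutoff
    if s >= 1.0:
        return 0.0  # no contrast survives beyond the diffraction cutoff
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

# A hypothetical lens with 0.9 MTF at 50 cycles/mm, shot at f/8 in green light:
limit = diffraction_mtf(50, 555, 8)  # ~0.72, the diffraction-only ceiling
combined = 0.9 * limit               # the system always lands below that ceiling
```

The product form shows why a better lens only gets you closer to the ceiling: no factor in the cascade can exceed 1, so the diffraction term caps the result.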