In practice, diffraction is a question of magnification (sensor size vs. output size), not of pixel count!
One example: you have to make a 76 cm wide print and you use two cameras with identical pixel counts, but one is equipped with a 4/3 sensor (18 mm wide -> roughly 40x magnification) and the other with a 24x36 sensor (36 mm wide -> roughly 20x magnification). The 24x36 system can be stopped down two stops further at a similar level of diffraction on the print.
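The two-stop claim can be sanity-checked with a little arithmetic. Here is a rough Python sketch, assuming the diffraction blur visible on the print scales as Airy disk diameter (about 2.44 x wavelength x f-number) times magnification; the sensor widths and the 76 cm print are the numbers from the example above:

```python
# Rough diffraction comparison for a 76 cm wide print.
# Assumption: print-level blur ~ airy_diameter * magnification,
# with airy_diameter = 2.44 * wavelength * f_number.

WAVELENGTH_UM = 0.55  # green light, in micrometres

def print_blur_um(sensor_width_mm, print_width_mm, f_number):
    magnification = print_width_mm / sensor_width_mm
    airy_um = 2.44 * WAVELENGTH_UM * f_number
    return airy_um * magnification

blur_43 = print_blur_um(18, 760, 8)    # 4/3 sensor at f/8
blur_ff = print_blur_um(36, 760, 16)   # 24x36 sensor at f/16
print(blur_43, blur_ff)  # equal: doubling sensor width buys two stops
```

Doubling the sensor width halves the magnification, so the f-number can double (= two stops) for the same blur on the print.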
But when you compare cameras with the same sensor size but different pixel counts (sticking with the 24x36 24 MP vs. 54 MP comparison), diffraction effects will be the same on both prints. At the pixel level, the 54 MP file will degrade earlier (let's say from f/8 on), but this is compensated for by the higher pixel count. At the same stop, the 54 MP sensor will always result in superior IQ compared to the 24 MP sensor; only its IQ advantage over the 24 MP sensor shrinks as you stop down.
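"Degrades earlier at the pixel level" just means the Airy disk outgrows the smaller pixel pitch at a wider stop. A sketch under assumed sensor layouts (6000x4000 for 24 MP, 9000x6000 for 54 MP on a 24x36 mm sensor; these exact layouts are my assumption, not from the text):

```python
# Compare the Airy disk diameter to the pixel pitch of two
# hypothetical 24x36 mm sensors (assumed 6000x4000 and 9000x6000).

WAVELENGTH_UM = 0.55  # green light, in micrometres

def pixel_pitch_um(sensor_width_mm, pixels_wide):
    return sensor_width_mm * 1000 / pixels_wide

def airy_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

pitch_24 = pixel_pitch_um(36, 6000)  # 6.0 um
pitch_54 = pixel_pitch_um(36, 9000)  # 4.0 um

for f in (4, 5.6, 8, 11, 16):
    print(f, airy_diameter_um(f) > pitch_54, airy_diameter_um(f) > pitch_24)
```

The finer-pitched sensor crosses its pixel size at a wider aperture, but since both sensors see the same Airy disk in micrometres, the print-level diffraction is identical.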
A hypothetical 24x36 camera with a 54 MP sensor could also offer internal downsampling (e.g. to 24 MP or 13.5 MP) and would then simply behave like a camera with a lower native pixel count. Most likely, the downsampled 24 MP file would even be superior to a native 24 MP file. No disadvantages for the user whatsoever!
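One reason the downsampled file tends to win: averaging n pixels into one reduces per-pixel noise by roughly sqrt(n). A toy illustration with synthetic Gaussian noise (not real sensor data; the 4:1 reduction mimics 54 MP -> 13.5 MP):

```python
# Toy demo: averaging blocks of 4 noisy pixels cuts the noise
# standard deviation roughly in half (sqrt(4) = 2).
import random
import statistics

random.seed(42)
# 40000 "pixels" with mean signal 100 and noise sigma = 10
native = [100 + random.gauss(0, 10) for _ in range(40000)]

# 4:1 downsample by averaging consecutive blocks of four pixels
down = [sum(native[i:i + 4]) / 4 for i in range(0, len(native), 4)]

print(statistics.stdev(native))  # close to 10
print(statistics.stdev(down))    # close to 5
```

Real in-camera downsampling would average spatial neighbourhoods after demosaicing, but the noise-averaging principle is the same.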
We have reached that level of technology now; we just have to implement it. No need for two separate bodies (like the Nikon D3x vs. D3s, high resolution vs. high sensitivity) anymore.
Yes, for a 300 ppi print (which is what a demanding fine-art landscape print needs, not a portrait poster viewed from several metres away...), A2 "just" takes about 7000 pixels along the long edge, and A1 already about 10000.
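Those pixel counts follow directly from the ISO 216 paper widths. A quick check (594 mm and 841 mm are the standard A2 and A1 long edges):

```python
# Pixels needed along the long edge for a 300 ppi print.
MM_PER_INCH = 25.4

def pixels_needed(width_mm, ppi=300):
    return round(width_mm / MM_PER_INCH * ppi)

print(pixels_needed(594))  # A2 long edge -> 7016
print(pixels_needed(841))  # A1 long edge -> 9933
```

So a 54 MP file (roughly 9000 pixels on the long edge) covers A2 with room to spare and falls just short of A1 at full 300 ppi.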