My experience is that 12 MP works well with A2-size prints. A2 is the largest I print, because of printer and wall surface limitations.
Keep in mind that there are many variables, image detail and viewing distance to mention a few. Another issue is sharpening. Also, the full sensor resolution can only be utilized at certain apertures. If we assume a 24 MP camera, just stopping it down to f/16 would reduce resolution to about 12 MP. So a 12 MP APS-C at f/8 would give similar results to a 24 MP full frame sensor at f/16.
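To make the diffraction argument concrete, here is a minimal back-of-the-envelope sketch. The numbers are my assumptions, not exact figures: green light at 550 nm and the common 2.44·λ·N Airy-disk diameter as the smallest resolvable spot; demosaicing and sharpening shift the real results.

```python
# Rough diffraction check (assumptions: green light, lambda = 550 nm,
# Airy-disk diameter = 2.44 * lambda * N as the smallest resolvable spot).

LAMBDA_MM = 550e-6                 # wavelength of green light, in mm
SENSOR_W, SENSOR_H = 36.0, 24.0    # full-frame sensor dimensions, mm

def airy_diameter_mm(f_number):
    """Diameter of the Airy disk for the given f-number."""
    return 2.44 * LAMBDA_MM * f_number

def pixel_pitch_mm(megapixels, width=SENSOR_W, height=SENSOR_H):
    """Pixel pitch of a sensor with the given resolution (square pixels)."""
    return (width * height / (megapixels * 1e6)) ** 0.5

pitch_24mp = pixel_pitch_mm(24)    # about 6 um on full frame
print(f"24 MP full-frame pixel pitch: {pitch_24mp * 1000:.1f} um")
for n in (8, 11, 16):
    d = airy_diameter_mm(n)
    print(f"f/{n}: Airy disk {d * 1000:.1f} um "
          f"({d / pitch_24mp:.1f} pixels wide)")
```

At f/8 the diffraction spot is roughly pixel-sized, but by f/16 it spans several pixels (about 21 um vs a 6 um pitch), so the sensor can no longer deliver its full pixel count as real resolution.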
What I have seen is essentially the following:
1) A2 prints from APS-C, optimally handled, are very good
2) Image files from full frame are much better than uprezzed APS-C images at the same dimensions
3) Most of the resolution advantage is lost in the printing pipeline
Sometimes A2 prints from APS-C (12.5 MP) and full frame (24.5 MP) can easily be told apart, sometimes not. There are a lot of parameters involved; our vision is sensitive to subtle differences in tonality, but not really to megapixels. It is quite probable that you would see a difference between a 12.5 MP and a 24.5 MP camera in print with a loupe, but not with the naked eye: the loupe reveals differences in detail that the eye alone cannot resolve.
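A rough sketch of why the loupe matters: assuming 3:2 frames printed to fill the long side of A2 (my simplification; actual crop and printer settings vary), the pixel density on paper works out as follows.

```python
# Rough print-resolution check (assumption: 3:2 aspect images printed
# to fill the long side of A2 = 594 mm = 23.4 in).

A2_LONG_IN = 23.4   # long side of A2 in inches

def long_side_pixels(megapixels, aspect=1.5):
    """Long-side pixel count for a given megapixel count and aspect ratio."""
    return (megapixels * 1e6 * aspect) ** 0.5

for mp in (12.5, 24.5):
    ppi = long_side_pixels(mp) / A2_LONG_IN
    print(f"{mp} MP at A2: ~{ppi:.0f} ppi")
```

That comes out near 185 ppi vs 260 ppi. Both sit below the ~300 ppi often quoted for close inspection, so a loupe can separate them, while at arm's length the difference is much harder to see (these thresholds are rules of thumb, not hard limits).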
I mostly use a Sony Alpha 900 (24.5 MP full frame). I have not made "art prints" larger than A2 (16.5x23.4") from that camera, but I expect I could do A1 with good quality, and it gives me some room for cropping.
To put things a little bit in perspective:
I have two 70x100 cm (27x40") prints on my wall; one was taken with a 6x7 camera on Velvia and scanned on an MF scanner, the other was shot with a 10 MP APS-C camera. You don't put your nose against the APS-C print and say "Gee, this is sharp!", but it's certainly good enough at normal viewing distances. I actually tried to reshoot the 10 MP image on my 24.5 MP camera but failed to get optimal images because of wind (and lens issues). The subject was autumn leaves, so I need to wait five months until next time.
What you are essentially stating is that when a 12 MP image is blown up well beyond the size its native image data supports, the scarce image data gets spread around, leaving "holes" throughout the image, something like trying to spread a handful of sand across the floor of an entire room (not enough grains to cover the whole area). The software then guesses what "fillers" to put in the "holes" (fake data, since the native data just isn't there), so that people viewing the image can be fooled into thinking it is a real image rather than one padded with fabricated detail.
Yes, the above works in lots of situations. Unfortunately, such tricks only work in images that do not contain a lot of fine detail. Once you introduce fine detail into the picture, the "fill the holes with fake stuff to make it look real" approach starts coming apart right away. Just a clarification.
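The "filling holes with guessed data" idea can be shown with a toy 1-D resampling sketch. I'm using plain linear interpolation here for simplicity; real upscalers use fancier kernels, but they hit the same fine-detail limit.

```python
# Toy illustration of "filling the holes" with interpolation, in 1-D:
# throw away half the samples, then guess them back as midpoints.
# A smooth ramp survives almost intact; a fine alternating pattern does not.

def downsample2(sig):
    """Keep every other sample (half the 'pixels')."""
    return sig[::2]

def upsample2_linear(sig):
    """Double the length, guessing each missing sample as a midpoint."""
    out = []
    for a, b in zip(sig, sig[1:]):
        out += [a, (a + b) / 2]
    out += [sig[-1], sig[-1]]   # pad the tail to restore the length
    return out

smooth = [i / 7 for i in range(8)]   # gradual ramp: coarse detail
fine   = [i % 2 for i in range(8)]   # alternating 0/1: fine detail

for name, sig in (("smooth", smooth), ("fine", fine)):
    rebuilt = upsample2_linear(downsample2(sig))
    err = max(abs(a - b) for a, b in zip(sig, rebuilt))
    print(f"{name}: max error after down/up = {err:.2f}")
```

The ramp comes back with only a small error, while the alternating pattern is flattened completely: the interpolator's "fillers" cannot recreate detail finer than the samples it has, which is exactly why upscaled fine detail falls apart first.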