The trouble is, this "comparison" requires making quite the assumption - that is, that the only difference between the two sensors is pixel count. In fact, we're talking about sensors two generations apart. Differences in sensor technology, demosaicing algorithms, and so forth may have more to do with any increase in resolution than pixel count does, for all we know. Unless we can compare two sensors from the same generation in the same format using the same technology from the same manufacturer with different pixel counts, we're comparing apples and oranges.
The test result referenced by Emil is normalized as far as possible. It uses a method to determine resolution based on a robust MTF determination (essentially equivalent to the ISO procedure for testing spatial resolution in digital cameras). The inevitable differences that remain are that we use different camera systems, each with its particular AA-filter configuration.
We also don't have the "lines" related in any way to the maximum resolution as limited by diffraction (both cameras may be below it, for example, which means you would see no limitation in these "tests," even though it would apply if the cameras performed well enough for the limitation to be seen).
While the presentation used doesn't show the relation of diffraction to limiting resolution, the same test procedure I mentioned above does show it, in a different graphical chart that the Imatest program can produce. Unfortunately, it is not easy (if at all possible) to present that in a summary chart for multiple apertures. What the charts do show is the effect of diffraction on the MTF50 metric, as an approximation of perceived sharpness.
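To illustrate that effect with the textbook model (not the Imatest implementation): for an ideal, aberration-free circular aperture, the spatial frequency at 50% MTF response drops in inverse proportion to the f-number. A small Python sketch, assuming 550 nm (green) light:

```python
import math

def diffraction_mtf(f, n, wavelength_um=0.55):
    """MTF of an ideal circular aperture at spatial frequency f
    (cycles/micron) for f-number n, incoherent illumination."""
    fc = 1.0 / (wavelength_um * n)  # diffraction cutoff frequency
    if f >= fc:
        return 0.0
    x = f / fc
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def mtf50(n, wavelength_um=0.55):
    """Frequency (cycles/micron) where the diffraction MTF drops to 50%,
    found by bisection between zero and the cutoff."""
    lo, hi = 0.0, 1.0 / (wavelength_um * n)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if diffraction_mtf(mid, n, wavelength_um) > 0.5:
            lo = mid
        else:
            hi = mid
    return lo
```

Stopping down from f/8 to f/16, for instance, halves the diffraction-only MTF50; the measured camera MTF50 falls less steeply because lens aberrations, the AA filter, and the pixel aperture also contribute.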
Quite frankly in the digital age I'm not so confident that "lines" are actually being resolved, as opposed to being interpolated, so I'm not especially convinced by such "data." I'm skeptical of something so easy for square pixels to artificially replicate through algorithm guesswork (i.e., straight lines) as being a meaningful subject of comparison - a more challenging, non-linear subject is more likely to be a realistic test of real-world resolving power.
The resolving power is not determined with a line pattern; the "lines" are merely a (numerically converted) metric referenced to other ISO test charts that do use hyperbolic line patterns for visual interpretation. The underlying spatial resolution metric is based on an MTF determination, and more specifically on the spatial frequency at 50% MTF response. That is by no means close to the limiting resolution these systems can resolve, but rather a reasonable indication of perceived resolution. As much as an MTF curve is already a simplified representation, a single point on that curve is even more so.
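As a rough illustration of what reading a single point off an MTF curve means (a simplified Gaussian blur model of my own, not how the actual measurement works):

```python
import math

def mtf50_gaussian(sigma_px):
    """Frequency (cycles/pixel) where a Gaussian-blur MTF drops to 50%.

    A Gaussian PSF with standard deviation sigma (pixels) has
    MTF(f) = exp(-2 * (pi * sigma * f)**2), so solving MTF = 0.5 gives
    f50 = sqrt(ln(2) / 2) / (pi * sigma).
    """
    return math.sqrt(math.log(2.0) / 2.0) / (math.pi * sigma_px)

# A hypothetical system blur of 0.7 px standard deviation yields an
# MTF50 around 0.27 cycles/pixel, well below the 0.5 Nyquist limit.
f50 = mtf50_gaussian(0.7)
```

Note that contrast is still a healthy 50% at that frequency; detail beyond it is not gone, it just renders with progressively lower contrast.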
The important point is that we do not compare based on a line pattern being resolved or not, but on the gradual reduction of contrast as we approach the limiting resolution, and we stop almost halfway to pick a reference point that is still very well resolved and indicative of perceived resolution.
The fact that we compare actual cameras and lenses will unfortunately make a purely theoretical isolation of a single factor more difficult, because all other factors are not exactly equal. Nevertheless, there is strong enough evidence for us to draw careful conclusions. One such conclusion is that increased sampling density will improve absolute resolution (although it may also influence other characteristics, such as dynamic range). Obviously, output magnification inversely impacts resolution, so larger sensor arrays benefit from that. Another conclusion is that diffraction blur has a negative impact on per-pixel resolution, but there is no hard limit. However, it is possible to indicate, for a given sampling density, from which aperture number onward diffraction will visually impact that per-pixel resolution. Smaller sensels will already be hurt at wider apertures. How that impacts the combined effect of the whole system can only be seen at the final output size.
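As a back-of-the-envelope sketch of that aperture threshold (my own simplification, not the test procedure itself): one can ask at which f-number the diffraction cutoff frequency, 1/(lambda * N), falls to the Nyquist frequency of the sampling grid. Visible softening begins somewhat earlier, since contrast is already reduced well below the cutoff, but the scaling with sensel pitch is the point:

```python
def diffraction_limited_fnumber(pitch_um, wavelength_nm=550.0,
                                cycles_per_pixel=0.5):
    """Approximate f-number at which the diffraction cutoff frequency
    1/(lambda * N) drops to the sampling grid's Nyquist frequency
    (cycles_per_pixel / pitch). Solving for N gives:
        N = pitch / (lambda * cycles_per_pixel)
    """
    wavelength_um = wavelength_nm / 1000.0
    return pitch_um / (wavelength_um * cycles_per_pixel)

# Example pitches: a 6.4 micron sensel reaches this point around f/23,
# a 4.3 micron sensel already around f/16 - smaller sensels are hurt
# at wider apertures, exactly as described above.
```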
Cheers,
Bart