It's always seemed a bit absurd to me the way battle lines are drawn between MFDB and 35mm format when any discussion about comparative image quality is started.
It should be apparent to everyone who's been interested in photography for any significant period of time that there's a strong connection between sensor size and image quality. Generally, the bigger the sensor, the better the image quality, at least in some major respects if not in all respects.
No-one could sensibly argue that a P&S camera produces image quality equal to that from an APS-C format. No-one could sensibly argue that an APS-C format produces image quality on a par with full frame 35mm, and likewise, no-one can sensibly argue that full frame 35mm produces image quality on a par with a DB which has two or more times the sensor area of 35mm.
If a sensor collects double the amount of light because it has double the area of another sensor, one would expect approximately a 1 stop increase in DR. If it has 4x the area, one could expect a 2 stop increase in DR, 8x the area 3 stops, 16x the area 4 stops and so on.
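That progression is just a base-2 logarithm of the area ratio. Here's a minimal sketch of the arithmetic (it assumes, as above, that the larger sensor simply collects proportionally more light and that everything else is equal):

```python
import math

# Expected DR gain, in stops, purely from collecting more light:
# doubling the light-gathering area buys roughly one stop,
# assuming equal sensor technology on both sides of the comparison.
def dr_gain_stops(area_ratio):
    return math.log2(area_ratio)

for ratio in (2, 4, 8, 16):
    print(f"{ratio}x the area -> ~{dr_gain_stops(ratio):.0f} stop(s) more DR")
```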
That progression of course assumes that all other factors contributing to DR are on an equal technological footing, and they rarely are. CMOS sensors are different to CCD sensors. My impression is that the photon-collecting diode on the CMOS sensor is smaller than the equivalent diode on a CCD sensor of equal pixel pitch.
That fact alone might explain why a DB, with double the sensor area of 35mm format, might have more than a one stop DR advantage. If each photon-collecting photodiode on the CCD pixel (of equal pitch) is double the area of the equivalent photodiode on a CMOS pixel, one might expect a 2 stop DR advantage, at base ISO.
However, the reason the photodiode on the CMOS pixel is smaller is in order to accommodate other processing devices on the sensor which improve signal-to-noise before the signal is digitised. I would guess that this is why, at the pixel level, a D3X has substantially higher DR than a P65+.
The D3X pixel is exactly the same size as that of the P65+. It's a testament to the technological prowess of Nikon and Sony that they've succeeded in gaining greater DR from a photodiode which is probably smaller than the photodiode on a much more expensive DB of equal pixel pitch.
However, the P65+ has many more pixels than the D3X. Comparing images of equal size, the D3X's DR advantage is marginal: only 2/3rds of a stop at an 8"x12" print size.
But there's a question here which I've never seen addressed. When downsizing both images to 8"x12", as DXO does, one is throwing away image information from both cameras, but in the example of the D3X and P65+, one is throwing away more information from the P65+ image.
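The usual model behind that normalization is that downsampling averages pixels together, which improves SNR by the square root of the pixel ratio, i.e. 0.5 x log2(N / N_ref) stops. A sketch, using DXO's roughly 8 MP "print" reference and the approximate native pixel counts of the two cameras (the counts are from memory, so treat them as assumptions):

```python
import math

REF_MP = 8.0  # approx. DXO "print" reference size (~8"x12" at 300 ppi)

# Stops of DR gained by downsizing a native_mp image to the
# reference size, under the simple noise-averaging model:
# SNR improves by sqrt of the pixel ratio, i.e. half a stop
# per doubling of pixel count.
def normalization_gain_stops(native_mp, ref_mp=REF_MP):
    return 0.5 * math.log2(native_mp / ref_mp)

# Approximate native pixel counts (assumed): D3X ~24.5 MP, P65+ ~60.5 MP.
print(f"D3X  gains ~{normalization_gain_stops(24.5):.2f} stops from downsizing")
print(f"P65+ gains ~{normalization_gain_stops(60.5):.2f} stops from downsizing")
```

Under this model the P65+, having more pixels to average, gains more from the downsizing than the D3X does, which is why the pixel-level gap between them shrinks at print size.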
If one were to interpolate the D3X image to the same file size as the P65+ and then compare DR, what would be the result?
If one defines DR as the amount of meaningful image information in the deep shadows of an ETTR exposure, then it stands to reason that the P65+, with its significantly higher pixel count, might have a higher DR than the interpolated D3X image. At a guess, instead of 2/3rds of a stop lower DR (than the D3X) it might have 2/3rds of a stop higher DR. What do you think?
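One way to put a rough number on that question, under the same noise-averaging model as above: interpolating the D3X file upward adds no new information, so moving the common comparison size from 8 MP up to the P65+'s native size shifts the comparison toward the P65+ by half the log2 of the pixel-count ratio. The pixel counts are again assumed from memory:

```python
import math

# How much the DR comparison shifts toward the higher-pixel-count
# camera when the common output size moves from a small reference
# size up to the big camera's native size. Upsampling the smaller
# file adds no information, so the small camera keeps only its
# native-capture DR while the big camera is judged at native size.
def gap_shift_stops(small_mp, big_mp):
    return 0.5 * math.log2(big_mp / small_mp)

# Assumed native pixel counts: D3X ~24.5 MP, P65+ ~60.5 MP.
shift = gap_shift_stops(24.5, 60.5)
print(f"Comparison shifts ~{shift:.2f} stops toward the P65+")
```

That works out to roughly 2/3 of a stop, which would erase the D3X's 2/3-stop print-size lead and leave the two close to parity; whether the P65+ would actually pull ahead, as guessed above, depends on how much real shadow information survives in its deep shadows rather than on this simple model.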