Under the assumptions that S/N per pixel is dominated by the number of photons collected, and that the number of photons collected per pixel is proportional to the pixel area, a sensor of a given size will have the same overall noise performance whether I divide the sensor area into small pixels or large pixels. The smaller pixels will have worse S/N per pixel, but in the final image that will be precisely compensated for by the increased number of pixels.
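The claim above can be checked numerically. Below is a small sketch (not from the original post; pixel counts and photon levels are assumed values) that models photon arrival as Poisson shot noise and compares one large pixel against four small pixels summed in software:

```python
# Toy shot-noise-only model: ignoring read noise, summing 4 small pixels
# gives the same SNR as one large pixel with 4x the area.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000
photons_small = 250                # mean photons per small pixel (assumed)
photons_large = 4 * photons_small  # large pixel collects 4x the photons

# One large pixel per trial.
large = rng.poisson(photons_large, n_trials)

# Four small pixels per trial, summed ("binned") in software.
small = rng.poisson(photons_small, (n_trials, 4)).sum(axis=1)

snr_large = large.mean() / large.std()
snr_binned = small.mean() / small.std()
print(f"SNR, one large pixel: {snr_large:.1f}")
print(f"SNR, 4 small binned : {snr_binned:.1f}")
# Both come out near sqrt(1000) ~ 31.6 when shot noise is the only source.
```

The two SNRs agree because a sum of independent Poisson variables is itself Poisson, so only the total collected photon count matters.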
This is not exactly correct: if one bins 4 pixels into one superpixel in software after capture, the binned superpixel carries 4 read noise contributions (which add in quadrature, giving twice the read noise of a single read), whereas a larger pixel with 4x the area would have only one read noise contribution. Software binning is the mechanism underlying the DXO screen vs pixel data. Hardware binning, which sums charge on-chip before readout and therefore incurs only a single read noise contribution, is widely used with monochrome scientific CCDs (see here), but the process is considerably more complex for Bayer array sensors, and as far as I know hardware binning with Bayer sensors is available only with Phase One's Sensor+ technology (see here; click on the P+ tutorial).
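The read noise penalty of software binning is easy to quantify. A short sketch (the read noise figure is an assumed value, not from the post): independent read noise contributions add in quadrature, so summing 4 reads gives sqrt(4) = 2x the read noise of a single read.

```python
# Read noise comparison: software binning (4 reads) vs one large pixel (1 read).
# sigma_r is an assumed per-read noise figure in electrons RMS.
import math

sigma_r = 3.0   # read noise per pixel read, e- RMS (assumed value)
n_binned = 4    # pixels summed in software

# Independent noise sources add in quadrature.
read_noise_binned = math.sqrt(n_binned) * sigma_r  # 4 separate reads
read_noise_large = sigma_r                          # single read

print(read_noise_binned)  # 6.0
print(read_noise_large)   # 3.0
```

Hardware binning avoids this penalty because the charge is summed before it ever passes through the readout amplifier, so only one read noise contribution appears.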
While a large sensor does collect more photoelectrons, one should remember that the SNR in the shot-noise-limited regime increases only as the square root of the number of photons collected. Doubling the sensor area (roughly the step from an APS-C sized sensor to a full frame 35 mm sensor) improves the SNR by a factor of only about 1.4. Newer-technology CMOS sensors (such as that in the Nikon D7000) can compete quite well with older full frame sensors. The same considerations apply to MFDBs. As Erik has pointed out, MFDBs are hampered by their high read noise, which limits their dynamic range. However, their SNR in the midtones (where read noise does not contribute significantly to the total noise) is quite good.
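The square-root scaling makes the area argument a one-liner. A minimal sketch (area ratios are illustrative, assuming equal quantum efficiency, exposure, and a shot-noise-limited signal):

```python
# Shot-noise-limited SNR scales as sqrt(photons), and photons scale with
# sensor area at fixed exposure, so SNR gain = sqrt(area ratio).
import math

def snr_gain(area_ratio):
    """SNR improvement factor from scaling sensor area (shot-noise-limited)."""
    return math.sqrt(area_ratio)

print(f"{snr_gain(2.0):.2f}")   # 2x area (approx. APS-C -> full frame): ~1.41
print(f"{snr_gain(2.25):.2f}")  # 1.5x linear crop factor squared: 1.50
```

So even a substantial jump in sensor area buys less than half a stop to half a stop of SNR, which is why sensor technology generation can matter as much as format.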
For ultimate image quality, few well-informed observers would deny that MFDBs are the way to go, but the price-to-performance ratio is quite steep.