The short version:
Sensors have gotten better. Making pixels smaller reduces the DR per pixel but the increased number of pixels helps a lot.
The long version, see below:
DR is a bit tricky. The technical definition is FWC/readout noise, where FWC stands for Full Well Capacity, the number of electrons a pixel can hold, and readout noise is the noise added when reading out those electrons.
For medium-size pixels (around 6 µm) the FWC may be around 60000 electrons. FWC is by and large proportional to pixel area, although I think some progress has been made there. The major factor in improving DR has probably been the reduction in readout noise: a modern CMOS sensor may have, say, 3 electron charges of readout noise, while a CCD is more like 12-15 electron charges.
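As a quick sanity check on those numbers, here is a small sketch (the 60000 e- FWC and the 3 vs 13 e- readout-noise figures are just the ballpark values from above, not measured data):

```python
import math

fwc = 60000  # e-, ballpark FWC for a ~6 µm pixel

# Technical DR in EV (stops) = log2(FWC / readout noise)
for label, read_noise in [("modern CMOS", 3), ("CCD", 13)]:
    dr_ev = math.log2(fwc / read_noise)
    print(f"{label}: {dr_ev:.1f} EV")  # CMOS ~14.3 EV, CCD ~12.2 EV
```

So at equal FWC, cutting readout noise from ~13 to ~3 electrons buys roughly two stops of technical DR.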
Let's assume we have a P45+ with 6.8 µm pixels and compare it to a 3.4 µm pixel sensor using modern CMOS technology. Let's assume 65000 electron charges (EC) per pixel for the P45+ and 16 EC of readout noise. So we have 65000/16 ≈ 4062; converting to EV we get log2(4062) ≈ 11.99 EV. DxOMark measures 11.75 EV at ISO 50 (in 'screen' mode).
Now, let's take a CMOS sensor with half that pixel pitch and assume FWC 65000/4 = 16250. If we assume 4 EC of readout noise we would get 16250/4 ≈ 4062, that is 11.99 EV, essentially the same as the "fat pixel" CCD.
But the small-pixel CMOS has four pixels where the CCD has one. Were we to print both at the same size (assuming sensors of the same size), the combined FWC would be 65000, while readout noise adds in quadrature: sqrt(4 × 4²) = 8 EC.
So DR would be log2(65000/8) ≈ 12.99 EV: the smaller CMOS pixels would have a 1 EV advantage over the large-pixel CCD. Were we to do the same math with a CCD instead, FWC would still be 65000 but readout noise would be sqrt(4 × 16²) = 32 EC, so DR would drop to about 10.99 EV.
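The whole comparison can be reproduced in a few lines. This is just the arithmetic from the paragraphs above (the 65000 EC FWC and the 16 vs 4 EC readout-noise figures are the assumed values, not measurements):

```python
import math

def dr_stops(fwc, read_noise):
    # Technical DR in EV = log2(FWC / readout noise)
    return math.log2(fwc / read_noise)

# One fat CCD pixel (P45+-like): 65000 e- FWC, 16 e- read noise
print(round(dr_stops(65000, 16), 2))            # ~11.99 EV

# One small CMOS pixel: quarter the area, quarter the FWC, 4 e- read noise
print(round(dr_stops(65000 / 4, 4), 2))         # ~11.99 EV

# Four small CMOS pixels combined: FWC adds linearly,
# read noise adds in quadrature: sqrt(4 * 4^2) = 8 e-
print(round(dr_stops(65000, math.sqrt(4 * 4**2)), 2))   # ~12.99 EV

# Four small CCD pixels at 16 e- each: sqrt(4 * 16^2) = 32 e-
print(round(dr_stops(65000, math.sqrt(4 * 16**2)), 2))  # ~10.99 EV
```

The key point is visible in the middle two lines: binning four pixels quadruples the signal but only doubles the noise, hence the extra stop.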
Newer generations of CCD sensors have less readout noise than older ones, but it is still in the low tens. As for CMOS, some sensors use external analog-to-digital converters (ADCs); off-chip ADCs tend to be noisy, so those cameras have CCD-like readout noise. Many modern CMOS sensors instead have thousands of on-chip converters working in parallel, which are much less noisy.
What complicates the issue is that the DR I describe is a technical definition: it represents a noise level at which the signal is barely perceptible. In photographs we want a better signal-to-noise ratio (SNR); on the other hand, the lower end of the DR scale is normally what I would call deep shadows, where we would have little detail anyway.
Midtones and highlights are more affected by "shot noise", the natural variation in the number of incident photons, which is independent of sensor technology and depends only on exposure, sensor area and quantum efficiency (QE). QE is the percentage of incoming photons that are detected.
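Shot noise follows Poisson statistics, so the SNR from photon noise alone is simply the square root of the number of detected photons. A minimal sketch (the photon count and 50% QE are made-up illustrative values):

```python
import math

def shot_noise_snr(photons_incident, qe):
    # Detected signal is Poisson-distributed: noise = sqrt(signal),
    # so SNR = signal / sqrt(signal) = sqrt(signal)
    detected = photons_incident * qe
    return math.sqrt(detected)

# e.g. 10000 incident photons at 50% QE -> SNR = sqrt(5000)
print(round(shot_noise_snr(10000, 0.5), 1))  # 70.7
```

This is why midtone/highlight noise improves with more exposure or a larger sensor area, regardless of readout electronics.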
Hope this helps! Here is a recommended article that describes this in more depth: http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/index.html
Thank you both.
Torger, was/is the colour look a result of the filtration used, or only of the "profile" used to demosaic the data? If the latter, could the same "look" come from the Dalsa series?
The question is: was that "look" attributed to the fat pixels not a result of pixel size but of the software?
Indeed the detail "captured" will improve as pixel size decreases, but has the dynamic range not been maintained, or indeed improved, relative to the fat-pixel backs as the "read" technology and the associated sensitivity and amplification have improved over time?
Sorry if this has strayed from the thread title, but as a fat-pixel owner (venerable P20) I always find the latest backs/cameras interesting, to see if the "improvements" are real at the sizes I use as an amateur. Your patience is appreciated.