Pixel density goes up and image quality goes up, is that possible?
To begin with, how do we define image quality? One definition would be a faithful rendition of the original: a rendition should add, remove or modify as little as possible of the original image.
- Proper resolution is obviously a part of the equation. We need a sensor that can resolve the image cast by the lens.
- A sensor that cannot resolve all the detail the lens delivers will produce artefacts, known as aliasing.
- Noise is also a modification of the image. Just as an example, has anyone seen a grainy sky in real life?
- We may also need to reproduce the deepest shadows, which requires a sensor with good dynamic range.
So we want a sensor with high resolution, matching the lens. If we buy a world-class lens we probably also want a world-class sensor. But it has been shown that higher sensor resolution also helps lenses that are not so great. The explanation is simple:
We can look at the MTF of the system. It is something like this:
MTF_system = MTF_lens * MTF_aperture * MTF_olp_filter * MTF_pixel_aperture
- MTF_aperture corresponds to diffraction
- MTF_pixel_aperture corresponds to the smearing caused by the sensitive area of the pixel
Reducing the pixel pitch will increase the MTF_pixel_aperture factor, and MTF_olp_filter will increase as well, since a weaker optical low-pass filter is sufficient for a finer pixel grid. So whatever the lens, system MTF will improve with smaller pixels.
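The effect of pixel pitch on the MTF product can be sketched numerically. This is a toy model, not a real lens measurement: the lens term is a hypothetical Gaussian falloff, the pixel aperture is the standard sinc of an ideal square pixel, and the diffraction and OLPF terms are left out since they don't change the trend.

```python
import math

def mtf_pixel_aperture(f, pitch_mm):
    """MTF of an ideal square pixel aperture: |sinc(f * pitch)|.
    f in cycles/mm, pitch in mm."""
    x = f * pitch_mm
    return 1.0 if x == 0 else abs(math.sin(math.pi * x) / (math.pi * x))

def mtf_lens(f, f50=50.0):
    """Toy lens MTF (hypothetical Gaussian falloff, 50% at f50 cycles/mm)."""
    return math.exp(-math.log(2) * (f / f50) ** 2)

def mtf_system(f, pitch_mm):
    # Diffraction and OLPF factors omitted for brevity; the trend is the same.
    return mtf_lens(f) * mtf_pixel_aperture(f, pitch_mm)

f = 40.0  # cycles/mm
big, small = mtf_system(f, 0.006), mtf_system(f, 0.003)  # 6 um vs 3 um pixels
print(small > big)  # smaller pixel -> higher system MTF at the same frequency
```

The same mediocre lens term is used in both cases, yet the system MTF at 40 cycles/mm is higher with the smaller pixel.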
Reducing pixel size will affect noise at the pixel level. Pixel noise is mostly determined by the number of photons detected by the pixel. If we halve the pixel pitch, the area will be a fourth of the larger pixel's, so it collects a fourth of the photons. Shot noise is proportional to the square root of the detected photons, so a pixel of half the pitch will have twice the relative noise. But we have four times the number of pixels, and combining them recovers the lost signal, so the noise level over the whole sensor is not affected.
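The arithmetic above can be checked in a few lines. The photon count is a made-up round number; only the ratios matter.

```python
import math

photons_large = 10000                # photons collected by one large pixel
photons_small = photons_large / 4   # half the pitch -> a quarter of the area

# Shot-noise-limited SNR of a single pixel is sqrt(N)
snr_large = math.sqrt(photons_large)   # 100.0
snr_small = math.sqrt(photons_small)   # 50.0: relative noise doubles

# Summing the four small pixels that cover the large pixel's area:
summed = 4 * photons_small             # same total photons
snr_summed = math.sqrt(summed)         # back to 100.0
print(snr_large, snr_small, snr_summed)
```

So per pixel the small sensel is twice as noisy, but over the same sensor area nothing is lost, as long as shot noise dominates.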
There is a limitation to this. Part of the pixel area is taken up by transistors and interconnects, and those areas don't capture light. What happens is that design rules shrink: older sensors may have been produced with 0.50 micron design rules, but now 0.18 micron design rules are common. Also, the electrical design is simplified, so much less of the pixel area is used by gates and interconnects.
All this means that the optimum pixel size shrinks over time.
But there is a limitation, and it affects both DR and high ISO. In the darkest parts of the image, noise is dominated by readout noise, and readout noise behaves differently from shot noise: it is independent of the signal and adds in quadrature across pixels. Halving the pixel pitch, we could replace one large pixel with four small ones and still have the same shot noise, but the four readouts add in quadrature to twice the read noise, at a cost of one EV in DR.
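The one-EV cost can be verified with hypothetical numbers. The assumptions here are a 60,000 e- full well for the large pixel and 4 e- read noise per pixel, the same for both pixel sizes; "engineering DR" is taken as full well divided by read noise.

```python
import math

full_well_large = 60000.0   # e-, hypothetical large pixel
read_noise = 4.0            # e- per pixel, assumed equal for both pixel sizes

full_well_small = full_well_large / 4   # quarter the area, quarter the capacity

# Engineering DR = full well / read noise, expressed in stops (EV)
dr_large = math.log2(full_well_large / read_noise)

# Four small pixels binned: signal adds linearly, read noise in quadrature
dr_binned = math.log2(4 * full_well_small / (math.sqrt(4) * read_noise))

print(dr_large - dr_binned)  # 1.0 EV lost to read noise
```

The loss is exactly one EV regardless of the numbers chosen, because binning four pixels doubles the read noise while total capacity is unchanged.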
Many photographers are skeptical of things measurable, but DxO produces a lot of quite solid data. The first attachment shows SNR for three different cameras: the Nikon D700, D750 and D810, with 12, 24 and 36 megapixels respectively. The D750 and the D810 are quite close, indicating that they capture about the same number of photons for a given exposure. The D810 reaches a lower base ISO, which indicates it has a higher full well capacity.
So, in this case, development makes the sensor better even though the number of megapixels increases significantly.
If we look at DR, the lead of the Nikon D810 is even clearer. DR depends on both the number of captured photons (full well capacity) and the readout noise, and the major development is in readout noise. The D700 uses old technology and has high readout noise at low ISO. The short explanation is its external converters: at high ISO, analogue gain is applied before the external converters, which gives good high-ISO performance. It is the low-ISO end that suffers from the old technology.
The DxO figures shown here are normalised for a given print size, so this is what we would see in a print, or when viewing the whole image on screen. If we viewed the images at actual pixels, the trend would be different.
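To make the normalisation concrete: DxO's "print" scores are referenced to an 8-megapixel output, and downsampling to that size averages pixels, which improves SNR by the square root of the pixel ratio. A minimal sketch, with made-up per-pixel SNR values:

```python
import math

REF_MP = 8e6  # DxOMark normalises "print" scores to an 8-megapixel output

def print_snr_db(pixel_snr_db, sensor_pixels):
    """Per-pixel ("screen") SNR converted to normalised "print" SNR.
    Downsampling averages pixels, improving SNR by sqrt(N / N_ref)."""
    return pixel_snr_db + 10 * math.log10(sensor_pixels / REF_MP)

# Same per-pixel SNR, different pixel counts (D700-like vs D810-like):
print(print_snr_db(30.0, 12e6))  # ~31.8 dB
print(print_snr_db(30.0, 36e6))  # ~36.5 dB
```

This is why a high-megapixel sensor with noisier individual pixels can still come out ahead once the comparison is made at a common viewing size.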
Check also the link below: http://photonstophotos.net/Charts/PDR.htm#Nikon%20D700,Nikon%20D750,Nikon%20D810