I am trying to wrap my head around that statement and the situation where it would apply. Here is an example to talk around: We are shooting hockey with a Canon 5DMIII with its lens fully open in an indoor arena. The brightest highlights we are interested in are the white helmets of the players, and in order to freeze their motion satisfactorily we need a shutter speed of 1/800s or shorter. We take a test shot at ISO 100, look at our (fictitious, vendors!) Raw histogram, and see that the helmets show up four stops below clipping.
Okay Jack,
That means we could use that setting, which stops motion and doesn't blow out highlights, but the shadows would be very dark. We can boost the exposure in postprocessing, but then the very dark shadows will show lack of detail and may drown in the read noise. There may be some 4 or more electrons averaged per ADU, and boosting exposure in postprocessing will not help make a more accurate distinction in the shadows: the jumps from one ADU to the next will still be worth a multiple of 4 or more electrons.
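A minimal sketch of that quantization effect, assuming an illustrative gain of 4 electrons per ADU (not a measured camera value): the ADC collapses every group of 4 electrons into one ADU, and a push in post merely rescales those coarse steps.

```python
import numpy as np

E_PER_ADU = 4                          # assumed analog gain well below unity
electrons = np.arange(0, 17)           # shadow-level signals, 0..16 electrons
adu = electrons // E_PER_ADU           # ADC quantization at capture time
pushed = adu * E_PER_ADU               # "exposure boost" in post just rescales ADUs
# Only every 4th electron count produces a distinct output level:
print(sorted(set(pushed.tolist())))    # -> [0, 4, 8, 12, 16]
```

Seventeen distinct electron counts survive as only five distinct output levels; no amount of postprocessing gain can restore the in-between values.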
By increasing the Analog Gain we can reach Unity Gain, where each additional photoelectron produces a distinct ADU. We have gained accuracy, and lifted that detail higher above the noise floor. We now have better shadow detail, and the highlights are still not clipped. IOW, we've improved the accuracy of our signal within the constraints: we couldn't increase the number of photons due to the shutter time, and we could amplify the signal without introducing clipping, thus not losing useful dynamic range that was present in the scene (we did lose potential DR, but we didn't have enough signal to clip at the top due to the shutter speed). Mind you, the image is noisier, but it is random noise rather than read/pattern noise. The Analog Gain determines the number of electrons per ADU.
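Since the Analog Gain sets the electrons-per-ADU ratio, and that ratio halves with each ISO doubling, the relationship can be sketched with a hypothetical helper. The inverse scaling and the unity-gain ISO of 500 (the 5DMIII figure quoted in this thread) are the assumptions here:

```python
# Hypothetical helper, assuming electrons-per-ADU scales inversely with ISO
# and that unity gain (1 electron per ADU) sits at ISO 500 for this camera.
def electrons_per_adu(iso, unity_iso=500):
    return unity_iso / iso

print(electrons_per_adu(100))    # -> 5.0 electrons per ADU at base ISO
print(electrons_per_adu(500))    # -> 1.0, i.e. unity gain
print(electrons_per_adu(1600))   # -> 0.3125, fractions of an electron per ADU
```

Above the unity-gain ISO, each ADU step represents less than one electron, so further analog gain no longer adds shadow precision.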
In order to maximize IQ given our constraints, we quickly evaluate noise in PP at various ISOs (better yet, we look at this great chart by Bill Claff), decide to bump ISO up to ISO 1600, and shoot away.
Unity Gain for the 5DMIII is around ISO 500. How would we use that knowledge to improve IQ? Or would it apply more to a camera with the read noise profile of the D800e?
By boosting the gain by a factor of 16, we might have barely avoided highlight clipping, although the tail of the highlight shot noise might have clipped. However, the question becomes: what looks better, ISO 1600, or ISO 800 with a one-stop exposure push in Raw conversion (plus a stop more highlight headroom for tweaking/recovery)? It may even be interesting to test ISO 400 with a two-stop exposure push in Raw conversion. Raw converters may also handle the conversion differently based on the ISO metadata.
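Why the two routes end up so close can be sketched with a toy SNR model. The assumption (illustrative numbers, not measured 1Ds3 data) is that read noise splits into a pre-gain part and a post-gain (downstream) part; only the downstream part shrinks, input-referred, when the analog gain goes up, while a push in post changes neither the captured electrons nor the read noise:

```python
import math

# Toy model: read noise has a pre-gain component (r_pre, in electrons) and a
# downstream component (r_post, referred back through the analog gain).
# Numbers are illustrative assumptions, not measurements.
def snr(signal_e, iso, r_pre=3.0, r_post=25.0):
    gain = iso / 100                                  # analog gain vs. base ISO
    read_e = math.hypot(r_pre, r_post / gain)         # input-referred read noise
    return signal_e / math.sqrt(signal_e + read_e**2) # shot noise + read noise

s = 200  # same photoelectrons either way: shutter and aperture are fixed
print(snr(s, 1600))  # analog gain all the way to ISO 1600
print(snr(s, 800))   # ISO 800; a 1-stop push in post leaves this SNR unchanged
```

The ISO 1600 capture comes out marginally ahead because the downstream noise is divided by a larger gain, but the gap is small once the gain is already well past unity, which matches the "little difference" observed below.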
For that reason, I once did a quick test of 3 scenarios on my 1Ds3. Image quality was only evaluated on the amount of noise after Raw conversion. The steps 1 through 6 mentioned were the grayscale ColorChecker patches 19 - 24:
Noise standard deviation of a 50x49 pixel area of each patch.
As can be seen, the ISO 400 group has lower noise than the ISO 800 group, and the ISO 800 group has lower noise than the ISO 1600 group. Within the ISO 800 group there is little difference between the straight setting and the underexposed-then-pushed-in-post setting, but the pushed setting has more highlight clipping latitude (a stop of headroom) at capture time. Within the ISO 1600 group there is also little difference, although ISO 800 pushed 1 stop is slightly better than the rest, and again it has 1 stop of overexposure headroom. Even ISO 400 pushed 2 stops is a bit better than the straight ISO 1600 gain setting.
The differences within each 'ISO group' are actually very difficult to see, but they are measurable. The differences between the groups are visually more distinct. For me that made it clear that ISO 400 was good enough (for the rare occasions that I need higher ISOs), with still the potential to push 1 or 2 stops in postprocessing at little loss compared to starting with a higher ISO setting, because the Unity Gain of the 1Ds3 is reached at approximately ISO 400.
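The per-patch measurement above (the standard deviation of a 50x49-pixel crop of each grayscale patch) can be sketched as follows. The patch here is synthetic with an assumed sigma of 12 ADU; in practice you would crop the converted Raw file instead:

```python
import numpy as np

# Synthetic stand-in for a uniform ColorChecker patch: mean level 1000 ADU,
# noise sigma 12 ADU (assumed values for illustration only).
rng = np.random.default_rng(0)
patch = rng.normal(loc=1000.0, scale=12.0, size=(49, 50))  # 50x49-pixel crop

noise_sd = patch.std(ddof=1)   # sample standard deviation over the crop
print(round(noise_sd, 1))      # close to the simulated sigma of 12
```

Running this per patch and per capture setting gives the numbers the groups above were compared on.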
That's how I see a practical implementation of what we can do with the knowledge about 'Unity Gain' for a specific camera.
Cheers,
Bart