Since the green channel is the first to blow out in daylight, this allows the red and blue channels to "catch up", and the practical effect is to expand the effective dynamic range of the system.
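Back-of-the-envelope only, but that headroom argument can be sketched numerically. All the numbers below (the channel levels and the 70% green transmission) are invented for illustration, not measured from any real filter or sensor:

```python
import math

CLIP = 1.0  # normalized clipping point (full well)

def headroom_stops(r, g, b):
    """Stops of extra exposure possible before the hottest channel clips."""
    return math.log2(CLIP / max(r, g, b))

# Daylight scene: green runs hottest (illustrative, made-up values)
r, g, b = 0.55, 0.80, 0.45
print(round(headroom_stops(r, g, b), 2))        # limited by green: 0.32 stops

# Magenta filter: suppose it passes ~70% of green, ~100% of red and blue
print(round(headroom_stops(r, 0.7 * g, b), 2))  # now red limits: 0.84 stops
```

With these made-up figures the filter buys roughly half a stop of highlight headroom, which is about the magnitude claimed for the technique.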
It seems that bjanes is a great fan of this technique, but I'm still skeptical. The general principle for the use of filters is that you choose the filter to suit the scene. In the use you've referred to, the filter is chosen to suit the characteristics of the sensor, completely ignoring differences in scene content. One can't help wondering why Nikon or Canon did not simply reduce the sensitivity of the 'green' pixels, one way or another, if doing so increases dynamic range. For some cameras, that would mean bringing the true ISO back into line. For example, ISO 100 on the D60 is more like ISO 160. Reduce the sensitivity of the 'green' pixels and you are back to a true ISO 100 with (perhaps?) a half-stop increase in dynamic range.
My gut feeling is that such an approach (using a filter to suit the sensor rather than the scene) will result in a botched effect, the benefits of which will vary according to scene content and color values.
I've brought up before (in the 'interesting article' relating to ETTR) the issue of 'true blue' skies shifting towards cyan during overexposure. I'm going to post an example of this effect and then try to predict what might happen if I had used a magenta filter. I'm really thinking aloud as I write, because I don't have any hard evidence.
The photo, by the way, is a crop of the top of a vertical shot. It's not overexposed because I didn't know what I was doing; I deliberately overexposed it because it was a snapshot of my partner, who was the focus of interest. I'm not sure why so many people have an irresistible urge to place themselves in front of every interesting, beautiful, old and valuable monument, building, artifact and simply beautiful scene when photos are taken. Do they feel a need to compete for the attention of the photographer, perhaps? Or do they simply want a record of the fact that they were there, in case anyone might disbelieve them?
The shot has a -2EC adjustment in ACR.
[attachment=571:attachment]
The RGB values for the 'true blue' part towards the upper left corner are R=104, G=114, B=143. There's a clear predominance of blue, as one would expect.
The RGB values within the part encircled by the red line are R=208, G=219, B=228.
In other words, green has become relatively more prominent as the blue channel approaches clipping. (Pure cyan is a mixture of blue and green in equal parts.)
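For what it's worth, the hue shift between those two samples can be checked with Python's colorsys module (hue comes back as a fraction of a full 360-degree circle):

```python
import colorsys

def hue_deg(r, g, b):
    """Hue in degrees (0-360) from 8-bit RGB values."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360

print(round(hue_deg(104, 114, 143)))  # 'true blue' patch: 225
print(round(hue_deg(208, 219, 228)))  # near-clipped patch: 207
# Pure blue sits at 240 degrees, pure cyan at 180; the brighter patch has
# drifted roughly 18 degrees toward cyan as blue approaches clipping.
```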
How would a magenta filter affect this? (Magenta being equal parts of blue and red).
Well, I'm relying upon logic here because I don't have a magenta filter. It seems to me, in this example, the situation would be complicated. The magenta filter would suppress the green channel which is not a problem. The problem is, the blue channel is clipping. If the exposure is the same, the blue channel will still clip despite the magenta filter. However, the green channel will be subdued so that the area enclosed by the red line in my example should (might?) begin to look more 'blue' than 'cyan'.
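Here's a toy simulation of that line of reasoning: same exposure, blue channel clipping, with and without a hypothetical magenta filter that dims only green. All the linear channel values and the 0.7 transmission factor are invented, since I have no real filter to measure:

```python
import colorsys

CLIP = 1.0  # normalized sensor saturation

def capture(r, g, b, green_transmission=1.0):
    """Clip each linear channel at saturation; the filter only dims green."""
    return (min(r, CLIP), min(g * green_transmission, CLIP), min(b, CLIP))

def hue_deg(rgb):
    return colorsys.rgb_to_hsv(*rgb)[0] * 360

# Blue sky, exposed so the blue channel is pushed past the clip point
sky = (0.75, 0.85, 1.3)  # invented linear values

print(round(hue_deg(capture(*sky))))       # 216: clipping drags hue toward cyan
print(round(hue_deg(capture(*sky, 0.7))))  # 263: green held down, hue past blue
# The cyan cast is gone, but the hue has overshot 240 (pure blue) toward
# magenta -- the same color cast the rest of the frame would pick up.
```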
Is that a solution to the problem? If the exposure is the same, I've done nothing for the shadows. It's true I've corrected a cyan shift in one part of the sky, but I've simultaneously created a magenta shift over the rest of the image.
If I can successfully correct that magenta shift in the rest of the image, then perhaps I could argue that I've been able to use a 1/2 stop or 2/3 stop greater exposure, which would help the shadows. On the other hand, if I know what I'm doing in Photoshop, I can isolate the part of the sky which suffers a cyan shift and change its hue with the 'Hue/Saturation' control.
Where's the benefit?
I should point out, in case anyone is wondering where the cyan shift is, that the conversion to an sRGB jpeg reduces this difference. I work in the ProPhoto RGB space and convert to sRGB for net display.