Just out of curiosity, does anybody know the reason for, or benefit of, multiplying the values from the R and B channels by a factor? I don't mean the factors for White Balance, but a scalar multiplication of the raw values after analog-to-digital conversion (what you get in an "unprocessed NEF"). This can be seen by looking at raw histograms, where there are periodic "holes" in the histograms of the Red and Blue channels.
The attached image shows the histogram from RawDigger for the first 128 values (0-127) of a 14-bit uncompressed NEF file at base ISO (100) from a D800. The pattern looks the same for any range of values. This is a clear indication of a scalar multiplication after conversion to digital, and it happens with other models too (I have observed it on D300 files as well, using RawDigger and Rawnalize).
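To illustrate why such a multiplication leaves periodic holes, here is a minimal sketch. The factor below is hypothetical (chosen so that 15800 maps to 16383, consistent with the saturation behavior described later); the actual in-camera factors are unknown. When integer codes are multiplied by a factor slightly above 1 and re-quantized, some output codes are never produced, and these empty bins repeat periodically:

```python
import numpy as np

# Hypothetical scale factor: maps the ~15800 green saturation point to
# 16383 full scale. The real Nikon factors are not documented.
factor = 16383 / 15800  # ~1.0369

# Simulate raw codes 0..127 and apply the integer scaling.
raw = np.arange(128)
scaled = np.round(raw * factor).astype(int)

# Output codes that no input maps to show up as empty histogram bins.
hist = np.bincount(scaled)
holes = [v for v in range(scaled.max() + 1) if hist[v] == 0]
print(holes)  # -> [14, 42, 70, 98, 126]: a hole roughly every 1/(factor-1) ~ 27 codes
```

The spacing of the holes in a real raw histogram would let you estimate the factor the camera applied, since the gap period is about 1/(factor - 1).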
The factors are different for the red and blue channels, which makes it even more curious why they are doing this. From my understanding of digital image processing, they could simply adjust the WB factors and achieve the same end result.
A practical implication (a negative one, I think) is that if you shoot at base ISO, the green channel saturates at a value close to 15800 (out of 16383 for 14 bit) while the red and blue channels go all the way up to 16383. This gives the false impression that the green channel saturates before the other channels, but it is just the effect of the multiplication mentioned above, which throws values close to 15800 well above 16383 for the R and B channels. The second image shows the histogram for the highest values, where the apparent early saturation of the green channel can be observed.
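The apparent saturation mismatch can be sketched the same way, again with a hypothetical factor: the sensor clips all channels at the same level (~15800 here), but after scaling and clamping to the 14-bit range, R and B appear to reach full scale while green does not:

```python
# All channels clip at the same sensor level; only R and B are scaled.
SENSOR_SAT = 15800   # observed green-channel clipping point (14-bit scale)
FULL_SCALE = 16383   # maximum 14-bit code
factor = 1.05        # hypothetical R/B scale factor (actual value unknown)

green_max = SENSOR_SAT                                  # unscaled
rb_max = min(round(SENSOR_SAT * factor), FULL_SCALE)    # scaled, then clamped

print(green_max, rb_max)  # -> 15800 16383
```

So the green channel is not actually less capable in the highlights; the scaling just relabels the same clipping point.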
I have found explanations for other manipulations of the raw data in Nikon cameras, such as the black point offset and the "Star-killing" algorithm, but I haven't found any about this.