Comparing the proportions of colored pixels in the two designs: CFAv2 devotes 16.6% each to red, green and blue, versus 25% each for Bayer. (I've discounted Bayer's extra 25% of green because I believe it serves luminance purposes, and I'm not sure how it contributes to overall color accuracy.)
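To make those proportions concrete, here's a minimal sketch in Python. The fractions are the figures quoted in this thread (16.6% read as 1/6), not official Kodak numbers:

```python
# Per-channel filter fractions as stated above (the thread's figures,
# not official Kodak specifications): CFAv2 trades half the array for
# panchromatic (clear) pixels, while Bayer doubles up on green instead.
bayer = {"red": 0.25, "green": 0.25, "blue": 0.25, "extra_green": 0.25}
cfav2 = {"red": 1 / 6, "green": 1 / 6, "blue": 1 / 6, "panchromatic": 0.5}

# Sanity check: both layouts account for every pixel on the sensor.
assert abs(sum(bayer.values()) - 1.0) < 1e-9
assert abs(sum(cfav2.values()) - 1.0) < 1e-9
```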
As I understood it, the larger proportion of green comes from how the average human eye functions; green is simply more important.
Technically speaking, the proportion of blue should probably be lower than 25%.
If we now compare, say, a 20MP upgrade to the 400D employing the new CFA against the existing 10MP 400D, we could expect higher resolution from the 20MP camera without compromising dynamic range or high ISO performance. Agreed?
I'm not sure I can agree to that, because we don't yet know how this works, cf. what you mention about the problem of base ISO. But for the sake of the thought experiment, sure.
Let's compare color accuracy. The 10MP 400D has 2.5m red, blue and green pixels (plus 2.5m additional green for luminance purposes).
The new 20MP 400D has 3.3m red, blue and green pixels (plus 10m for luminance).
Comparing the final images, one has 2.5m items of red data and the other has 3.3m items of red data. Which is more accurate? Is color accuracy even going to be an issue with such high pixel density?
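The counts above can be checked with a few lines of Python. The channel fractions are this thread's assumptions (25% per color for Bayer, roughly 1/6 per color for CFAv2), not manufacturer specifications:

```python
# Absolute per-channel counts, in millions of pixels, for the two
# hypothetical bodies discussed above. Channel fractions are the
# thread's assumptions, not manufacturer specifications.
def channel_counts(megapixels, fractions):
    """Split a sensor's total megapixels across its filter channels."""
    return {channel: megapixels * f for channel, f in fractions.items()}

bayer_10 = channel_counts(10, {"red": 0.25, "blue": 0.25, "green": 0.5})
cfav2_20 = channel_counts(20, {"red": 1 / 6, "blue": 1 / 6,
                               "green": 1 / 6, "pan": 0.5})
# bayer_10["red"] is 2.5 million; cfav2_20["red"] is roughly 3.33 million,
# and cfav2_20["pan"] is 10 million luminance-only pixels.
```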
Colour accuracy is always an issue.
I know you could argue that a 20MP Bayer sensor would have 5m items of red data and that 5m is better than 3.3m, but that argument discounts the role of the new algorithms for the CFAv2.
I would argue that Kodak claim CFAv2 is roughly at parity with Bayer for normal images, but that it's unclear what pixel peepers would see.
Again, it's a bit like Foveon vs. Bayer.
The other issue which I think deserves more investigation is, "Just how much color information does the human eye require in order to get a realistic sense of accurate color in a scene?"
That clearly depends on how closely you inspect the image in question, as well as on interference effects.
I'll mention just two observations which make me think it is far less than you suppose.
That assumes that you know how much I suppose is necessary, but it also requires that you answer the question: "necessary for what?"
I agree that it's possible to compress information very well with a minimal loss of visual impact.
The evidence for that lies not only in JPEG vs. TIFF-RGB, but also in GIF.
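One concrete illustration of that compression claim (a sketch of JPEG-style 4:2:0 chroma subsampling, offered as an example rather than a description of any specific codec): keeping luminance at full resolution but colour at quarter resolution already halves the raw data, with little visible impact. The frame size used is the 400D's 3888x2592:

```python
# Rough arithmetic only: why chroma subsampling costs little visually.
# The eye resolves luminance detail more finely than colour detail, so
# JPEG-style 4:2:0 keeps luma at full resolution and chroma at quarter
# resolution. This counts samples per frame; it says nothing about
# perceived quality beyond the premise stated above.
def samples_per_frame(width, height, subsample_chroma):
    luma = width * height                          # one luma sample per pixel
    if subsample_chroma:                           # 4:2:0 layout: chroma at
        chroma = 2 * (width // 2) * (height // 2)  # half resolution per axis
    else:                                          # 4:4:4 layout: chroma at
        chroma = 2 * width * height                # full resolution
    return luma + chroma

full = samples_per_frame(3888, 2592, subsample_chroma=False)
sub = samples_per_frame(3888, 2592, subsample_chroma=True)
# sub is exactly half of full: half the data, little visible difference.
```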
However, this does not necessarily stand up to scrutiny in all cases.
Again, assuming that Kodak are right in their claims, this new pattern will -- with assumed future improvements in demosaicing algorithms -- be at parity with the Bayer pattern under well-lit conditions.
So how, exactly, is this "far less than I suppose"?