Yes, and this problem also applies to the CFAv2.
What you appear to be claiming is that whatever improvements in sensor technology come along will benefit CFAv2, while they won't benefit Bayer-style arrays.
Why not?
Remember, this is just the colour filter above the sensor wells!
I understand this new technology is principally a new CFA with a new way of demosaicing and interpolating the color information. It doesn't necessarily involve fundamental changes to the rest of the sensor technology (at least, no such changes are mentioned). But I expect that if Canon were able to buy a licence to use this new CFA, they would have to make other changes in sensor design to make it work, and to make it work better than the current Bayer type. I can't see them buying a Kodak sensor.
For example, if you were to simply replace the Bayer CFA with CFAv2 on an existing sensor, the effective base ISO for half the pixels would jump by roughly a stop and a half, to something like ISO 300 (because the panchromatic pixels are receiving about 3x as much light at the same exposure), but the other half of the pixels, each behind a color filter, would never reach full well capacity, and color accuracy would certainly suffer for this reason.
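As a rough back-of-the-envelope sketch of that mismatch (assuming a 3x light advantage for the panchromatic pixels and a uniform full-well capacity; both numbers are illustrative, not from Kodak's spec):

```python
import math

FULL_WELL = 60_000   # electrons; hypothetical full-well capacity, same for every pixel
PAN_GAIN = 3.0       # assumed light advantage of an unfiltered (panchromatic) pixel

# Sensitivity advantage of the panchromatic pixels, in stops.
stops = math.log2(PAN_GAIN)
print(f"panchromatic advantage: {stops:.2f} stops")   # ~1.58 stops

# Pick the exposure that just fills the panchromatic wells.
pan_signal = FULL_WELL
filtered_signal = FULL_WELL / PAN_GAIN

# At that exposure the filtered pixels sit at a third of full well,
# giving up log2(3) stops of headroom on the colour channels.
print(f"filtered pixels reach {filtered_signal / FULL_WELL:.0%} of full well")
```

The point is simply that whatever exposure just fills the panchromatic wells leaves the filtered pixels at about a third of theirs, so some change elsewhere in the sensor (per-channel gain, different well depths, or the like) would presumably be needed.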
It might be fun to speculate on what changes to sensor design would be required to get around this problem.
And if you look at the CFAv2, you'll see that you need more pixels to reproduce the same colour information as in a Bayer array (read the blog ...). In other words, you could easily claim that a large proportion of the colour information is wasted with CFAv2.
But the demosaicing is happening with far less colour information to go by.
More pixels will be provided. As I mentioned before, this new CFA seems to lend itself very well to further increases in pixel count without compromising high ISO performance. Just a one-stop improvement in sensitivity might allow for double the number of pixels on a given size sensor whilst maintaining the same signal-to-noise for each pixel and the same over-all dynamic range for the image as a whole.
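The photon budget behind that claim can be sketched under simple assumptions (shot-noise-limited pixels, signal proportional to pixel area, and "one stop" meaning a 2x gain in collected signal; the baseline photon count is made up for illustration):

```python
import math

photons_old = 10_000    # photons per pixel at the baseline; hypothetical figure
pixel_scale = 2         # double the pixel count on the same sensor area

photons_new = photons_old / pixel_scale   # half the area -> half the photons
photons_new *= 2                          # one stop of sensitivity -> 2x the signal

# Shot-noise SNR goes as sqrt(photon count), so equal counts mean equal SNR.
snr_old = math.sqrt(photons_old)
snr_new = math.sqrt(photons_new)
print(snr_old == snr_new)   # True: the stop of sensitivity pays for the doubling
```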
Comparing the proportions of colored pixels in the two designs: 16.6% each of red, blue and green for CFAv2, versus 25% each of red, blue and green for Bayer. (I've discounted Bayer's extra 25% of green because I believe it is there for luminance purposes, and I'm not sure how it contributes to over-all color accuracy.)
If we now compare, say, a 20MP upgrade to the 400D employing the new CFA with the existing 10MP 400D, we could expect higher resolution from the 20MP camera without compromising dynamic range or high ISO performance. Agreed?
Even if lenses were sometimes not adequate to deliver that extra resolution, I expect we would still get some, because I doubt that such a high-density sensor would require an AA filter, which has the effect of softening the image.
Let's compare color accuracy. The 10MP 400D has 2.5M each of red, blue and green pixels (plus 2.5M additional green for luminance purposes).
The new 20MP 400D would have 3.3M each of red, blue and green pixels (plus 10M panchromatic pixels for luminance).
Comparing the final images, one has 2.5M items of red data and the other has 3.3M items of red data. Which is more accurate? Is color accuracy even going to be an issue with such high pixel density?
I know you could argue that a 20MP Bayer sensor would have 5M items of red data, and that 5M is better than 3.3M, but that argument discounts the role of the new algorithms for the CFAv2.
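Restating the arithmetic above (using only the proportions quoted in this thread: 16.6%, i.e. one sixth, per color for CFAv2, and 25% per color for Bayer after discounting the extra green):

```python
def channel_counts(total_pixels: int, color_fraction: float) -> float:
    """Pixels per color channel, in millions, for a given CFA layout."""
    return total_pixels * color_fraction / 1e6

bayer_10mp = channel_counts(10_000_000, 0.25)    # existing 400D (Bayer)
cfav2_20mp = channel_counts(20_000_000, 1 / 6)   # hypothetical 20MP CFAv2 upgrade

print(f"Bayer 10MP: {bayer_10mp:.1f}M pixels per color channel")   # 2.5M
print(f"CFAv2 20MP: {cfav2_20mp:.1f}M pixels per color channel")   # 3.3M
```

So even at one sixth per channel, the doubled pixel count leaves the CFAv2 camera ahead on raw color samples.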
The other issue which I think deserves more investigation is, "Just how much color information does the human eye require in order to get a realistic sense of accurate color in a scene?"
I'll mention just two observations which make me think it is far less than you suppose.
(1) During the transition from B&W TV to Color TV there was a problem regarding compatibility with old B&W sets. It was necessary to devise a color system so that the signal could be received by B&W sets which the majority of the population still owned. Without getting into technical details, the engineers devised a way of superimposing the color signal onto the existing luminance signal, which resulted in a modest increase in the bandwidth of the transmission from something like 4.5MHz to 5.5MHz.
The impression I get is you simply don't need as much color information as luminance information. The color can be filled in.
(2) Anyone who has scanned old slides must have been amazed at how successful computer algorithms can be in restoring faded color.
I've scanned slides so faded that, when I first held them up to the light, I thought they were B&W. Now, I'm not going to pretend I got them looking as though they were taken yesterday, but the very small amount of color information still present was sufficient to enable a very surprising degree of restoration.
With slides that have undergone a more modest degree of color fading, there seems to be no problem in getting the colors looking perfect, as though the shot really was taken yesterday.
Kodak, can I please have a high-paying job selling your new sensor design?