I prefer "A". I don't like the double peak of the Red channel corrupting the Blue response, and I would never want a CFA with these dyes. It would be easy to start with "A" and produce the "B" response mathematically from the Raw file of "A". Most Blue and Green channels have a double peak, but the secondary peak is in the NIR region, where it can be filtered out with an IR blocking filter.
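As a rough sketch of what "produce the B response mathematically from A" could look like: a per-pixel 3x3 channel mix applied to the demosaiced raw data. The matrix values below are purely illustrative, not fitted to any real pair of curves.

```python
import numpy as np

# Hypothetical 3x3 mixing matrix (illustrative values only): each
# synthesized "B" channel is a linear combination of the three raw
# "A" channels, e.g. B_red = 0.9*A_red - 0.1*A_green + 0.2*A_blue.
M = np.array([
    [0.9, -0.1, 0.2],
    [0.1,  0.8, 0.1],
    [0.0,  0.2, 0.8],
])

def synthesize_b(raw_a):
    """Apply the channel mix to an (..., 3) demosaiced raw image."""
    return raw_a @ M.T

# A single "pixel" with raw A values (R, G, B):
pixel = np.array([0.5, 0.3, 0.2])
print(synthesize_b(pixel))  # approximately [0.46, 0.31, 0.22]
```

Note this is only exact if B's spectral curves really are linear combinations of A's; otherwise such a mix is just a least-squares approximation.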
It is very frustrating to spend a lot of money on a camera and have the manufacturer hold back the actual spectral response data. I prefer my Kodak/OnSemi cameras as I know the spectral response from the data sheets.
As for the BSI vs FSI plots shown in the Sony link, shifting the peaks by 20 nm will produce a difference, but it can always be corrected in the color balance.
To add: The Red curve in "B" looks similar to the dye used in Canon DSLRs. I would not have expected a Red dye with such a strong secondary peak in the Blue region to be used. Why did Canon select this dye?
Perhaps Canon (and Nikon) did so because B) is one recent estimate of the photopic response of the cones in the average human eye, as Jim correctly hinted at. Did I forget to mention it was a trick question?

Anyway, I am definitely out of my depth here and glad to see that real color scientists have joined the party, ready to jump in and save me when I start sinking. The point I was trying to make is that B) could represent one ideal set of CFA recipes, while A) is quite far from that ideal, although A) is quite typical of the CFAs used in current digital still cameras. The CFA in B) could in theory be used as-is to capture excellent color information, while data collected with CFA A) will require massive transformations before it can be used to display an approximation of the color information from the scene as perceived by the average human.
So given that A) is so far off and requires such mathematically intensive manipulation just to get approximately pleasing color out of it, what will such tiny differences as those shown in the Sony spectral response graphs above mean in practice? Very little, says Erik. The differences look to me like they could almost be classified as measurement error, so I concur. Don't forget that if two curves are similar (and these seem so to me), what gets recorded in the raw data is proportional to the area under each curve. Once appropriate filters for general photography are in place, how different are those areas? Once white balanced? Once subjected to color transformations? My guess is that there is a lot more play elsewhere in the system that will mask those differences, making them virtually immaterial.
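To put rough numbers on the area-under-the-curve point, here is a toy comparison of two Gaussian-shaped green responses whose peaks differ by 20 nm, standing in for curves like the Sony FSI/BSI pair (the shapes and widths are assumptions, not measured data).

```python
import numpy as np

wavelengths = np.arange(400, 701)  # nm, visible band at 1 nm steps

def response(peak_nm, width_nm=40.0):
    """Toy Gaussian spectral response; a stand-in for a measured curve."""
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

# Two hypothetical green channels whose peaks differ by 20 nm.
g_a = response(530.0)
g_b = response(550.0)

# Under a flat illuminant, the recorded raw value is proportional to
# the area under each curve; with 1 nm spacing a plain sum suffices.
area_a = g_a.sum()
area_b = g_b.sum()
rel_diff = abs(area_a - area_b) / area_a
print(area_a, area_b, rel_diff)  # the areas differ by well under 1%
```

And since white balance rescales each channel by a per-channel gain, even that small area difference is absorbed before any color transformation is applied.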
Jack