It doesn't look like the author knows how to interpret the very data he is quoting in his own article.
Bottom line: Due to the complexity of design and manufacture (let alone the high cost and weight) of large aperture lenses, one may actually end up with better results at virtually the same ISO and depth of field using lenses with more modest maximum apertures.
So many glaring flaws in that logic. The data clearly shows that ultra-large-aperture lenses still collect more additional light than is lost at the sensor. Going from f/2.0 to f/1.4 may not yield the full stop of light we might expect, but it still yields a net benefit of about 2/3 of a stop. The data explicitly shows that. An f/2.0 lens does not yield "virtually the same" results as an f/1.4 lens.
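As a sanity check on that arithmetic (the 1/3-stop loss figure is read off the article's own charts; the stop formula itself is standard photometry, not something the article supplies):

```python
import math

def stops_gained(n_old, n_new):
    """Nominal exposure gain, in stops, when opening up from f/n_old to f/n_new.

    One stop doubles the light; light gathered scales with 1/N^2,
    so the gain in stops is 2 * log2(n_old / n_new).
    """
    return 2 * math.log2(n_old / n_new)

nominal = stops_gained(2.0, 1.4)   # ~1.03 stops on paper
sensor_loss = 1 / 3                # approximate loss at f/1.4, per the charts
net = nominal - sensor_loss        # ~0.7 of a stop still reaches the photosites

print(f"nominal gain: {nominal:.2f} stops, net gain: {net:.2f} stops")
```

So even with the measured loss, roughly 2/3 of a stop of real benefit remains.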
When you look at the structure of CMOS sensors, each pixel is basically a tube with the sensing element at the bottom. If a light ray that is not parallel to the tube hits the photosite, chances are it will not reach the bottom of the tube and will not strike the sensing element, so that light is lost. The graph suggests that when large-aperture lenses are used on Canon cameras, a substantial amount of light is lost at the sensor due to this effect. In other words, the "marginal" light rays coming in at a steep angle from near the edges of the large aperture are supposedly lost entirely.
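The geometry behind that claim is easy to sketch. Under a simple thin-lens approximation (my own illustration, not taken from the article), the marginal ray of an f/N light cone arrives at a half-angle of arctan(1/(2N)), so faster lenses do steepen the extreme rays:

```python
import math

def marginal_ray_angle(f_number):
    """Half-angle (degrees) of the marginal ray for a given f-number,
    using the thin-lens approximation tan(theta) = 1 / (2 * N)."""
    return math.degrees(math.atan(1 / (2 * f_number)))

for n in (2.8, 2.0, 1.4, 1.0):
    print(f"f/{n}: marginal ray arrives at about {marginal_ray_angle(n):.1f} degrees")
```

At f/1.4 the marginal rays arrive at roughly 20 degrees off-axis versus about 14 degrees at f/2.0, which is why the "tube" geometry matters more for fast glass.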
The article fails to mention the important fact that there are microlenses at the top of these "tubes" that funnel light down to the sensing elements. And no, "marginal" light rays arriving at a steep angle are not completely lost. As microlenses become more efficient at collecting light, so does the sensor as a whole.
The article also fails to point out that the amount of light lost at the sensor is decreasing with each camera generation. It's right there in the charts, if the author had bothered to look. The 1Ds3 sensor has the same pixel pitch as an EOS 20D sensor, yet the 1Ds3 loses only 1/3 of a stop at f/1.4 compared to 2/3 of a stop for the 20D. The pixel pitch of the 1D4 sensor is nearly identical to that of the 450D sensor, yet the 1D4 loses only 0.4 stops at f/1.4 instead of 0.7 stops. Progress is being made in minimizing light loss at the sensor level, so there is no reason for companies to stop producing ultra-large-aperture lenses.
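One way to put those generational numbers in perspective (the loss figures are the ones quoted above; the "effective f-number" conversion N * 2^(loss/2) is my own back-of-the-envelope framing, not the article's):

```python
import math

def effective_f_number(nominal_n, loss_stops):
    """f-number that delivers the light actually recorded after the sensor
    loses loss_stops: each full stop of loss widens the effective
    f-number by a factor of sqrt(2)."""
    return nominal_n * 2 ** (loss_stops / 2)

# Sensor losses at f/1.4, for the pixel-pitch pairs discussed above.
for camera, loss in [("20D", 2 / 3), ("1Ds3", 1 / 3), ("450D", 0.7), ("1D4", 0.4)]:
    print(f"{camera}: f/1.4 delivers roughly f/{effective_f_number(1.4, loss):.2f} worth of light")
```

By this rough measure, an f/1.4 lens on the 1Ds3 still behaves like roughly f/1.6 rather than the 20D's roughly f/1.8, i.e. the newer sensor keeps noticeably more of the extra light.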
I wish to re-emphasize that these issues apply only to the current crop of cameras with CMOS sensors.
Odd that the article keeps laying the blame on the structure of CMOS sensors, when the very data it quotes clearly indicates that the problem is far more severe with CCD sensors. The graphs in the article clearly show that CCD sensors were losing significantly more light than CMOS sensors of the same generation. The author fails to mention that important fact, and instead misinterprets the data as a reason for medium format backs and Leica to opt for CCD sensors.
I do not think the author even realized that many of the sensors he plotted on those charts are, in fact, CCD sensors. He keeps emphasizing that his article refers to CMOS sensors, when the charts he is using show a good mix of CMOS and CCD sensors. Is he not aware that the sensors in the A350, D200, D80, D70, D60, D50, and D40x are all CCD? Is he not aware that the bottom feeders on his charts are these very same CCD sensors? And that the charts clearly show greater efficiency among CMOS sensors?