... from what I read with a CMOS sensor only around 10-30% of each sensel is actually light sensitive? And I've read (somewhere) that even with current micro lenses, only about 20 to 30% of the available light reaches an actual light sensitive area?
There is some confusion here about what that 20-30% figure means. The short answer is that microlenses and/or back-illumination overcome that low number.
It may be that in the very small photosites of traditional front-illuminated compact-camera sensors, only about 30% of the photosite area is unobstructed by the circuitry sitting on top of the photosite, but
(a) modern gapless microlenses gather light from almost the entire photosite area and redirect it to that unobstructed part so that it can be detected, so the fraction of incident light that reaches that "30% window" can be (and is!) a lot more than 30%.
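To make point (a) concrete, here is a back-of-envelope sketch of why the 30% fill factor does not cap light capture once a microlens is added. All the numbers (microlens coverage and delivery efficiency) are illustrative assumptions, not measurements from any real sensor:

```python
# Illustrative numbers only: how a gapless microlens raises the
# effective light-capture fraction of a front-illuminated photosite.

fill_factor = 0.30           # fraction of photosite area not covered by circuitry
microlens_coverage = 0.95    # assumed: fraction of photosite area the lens collects from
microlens_efficiency = 0.90  # assumed: fraction of collected light delivered to the window

# Without a microlens, only light that happens to land on the open
# 30% window can be detected.
captured_no_lens = fill_factor

# With a gapless microlens, light from nearly the whole photosite is
# funneled into that window, so the capture fraction is set by the
# lens optics, not by the size of the window.
captured_with_lens = microlens_coverage * microlens_efficiency

print(f"without microlens: {captured_no_lens:.2f}")
print(f"with microlens:    {captured_with_lens:.2f}")
```

With these assumed numbers the microlens delivers roughly 85% of the incident light to the photodiode, nearly triple what the bare 30% window would collect on its own.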
(b) The back-illuminated sensors now popular in small-sensor cameras avoid this problem entirely by sending the light in from the other, unobstructed side of the chip.
The bottom line is that modern sensors (both CCD and CMOS) detect well over half of the photons that get through the color filters. In fact, even with color filters included, where QE is measured as the fraction of light of all visible wavelengths that gets through the color filters and is detected, the QE figures are around 30-40% ... and even higher in many compact camera sensors, which seem to use color filters that sacrifice some color-discrimination accuracy for higher sensitivity.
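A quick sanity check shows these two figures are consistent with each other. Both inputs below are illustrative assumptions chosen to match the ballpark numbers above, not specs of any particular sensor:

```python
# Back-of-envelope consistency check of the QE figures quoted above.
# Both values are illustrative assumptions, not measured data.

detected_behind_filter = 0.60  # assumed: "well over half" of photons past the filter are detected
filter_transmission = 0.55     # assumed: fraction of all visible-light photons a color filter passes

# Overall QE as usually quoted: measured against ALL visible light
# arriving at the sensor, including what the color filter absorbs.
overall_qe = detected_behind_filter * filter_transmission

print(f"overall QE: {overall_qe:.2f}")
```

With those assumptions the overall QE comes out at about 33%, squarely inside the 30-40% range quoted for sensors measured with their color filters in place.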