The sensors are new designs from RED's in-house experts. They're not bought in from Kodak or Dalsa or anyone else. Part of that is that we get to control what we're doing and have the ability to innovate, and part of it is that we have very specific requirements for how fast those sensors can be run, because we do motion and (will do) stills. And because of the rather nice compression engines in camera, you can keep that raw sensor resolution at a very high fps all the way through to your storage medium. Your style of photography may not require a high burst rate, or even motion capture, but you will certainly enjoy the number of shots you can fit on a card and the speed at which they get saved. There's no waiting for a buffer to write to card, because the system is designed for the higher requirement of real-time raw motion capture.
Now, I'm not a sensor designer, but I do know a few, and the high-speed side of things is rather tricky. There are, however, ways to cheat high speed on a CMOS sensor: most CMOS designs let you read off any row in any order, and the more rows you try to read, the slower everything goes. So, if you felt like it, you could skip reading some rows and get a more HD-sized output off the sensor, with just a bit of scaling needed to make the "HD" image. This would make a sensor appear to run much faster, but it can and does lead to some atrocious artifacts in the image. So we don't do that. Similarly, high-quality scaling of video is quite easy from a mathematical point of view, but hard from a fit-it-all-in-hardware, run-in-real-time, minimum-power-consumption point of view. You could just cheat on the scaling and hope people don't see seams in the image, or aliasing, or funky color artifacts. So we don't do that either.
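To see why row-skipping causes those atrocious artifacts, here's a minimal sketch (entirely my illustration, nothing to do with any real camera's pipeline): a synthetic test pattern of one-pixel-high stripes, downsampled once by skipping rows and once by averaging groups of rows (a crude low-pass filter) before decimating. The skipped version aliases completely, the filtered version keeps the pattern's average brightness.

```python
def make_pattern(rows, cols):
    # Worst case for skipping: stripes that alternate every single row.
    return [[(r % 2) * 255 for _ in range(cols)] for r in range(rows)]

def skip_rows(img, factor):
    # The "cheat" readout: keep every factor-th row, discard the rest.
    return [img[r] for r in range(0, len(img), factor)]

def box_downsample(img, factor):
    # Average each group of `factor` rows before decimating, so detail
    # finer than the output grid is blended away instead of aliasing.
    out = []
    for r in range(0, len(img), factor):
        group = img[r:r + factor]
        out.append([sum(col) / len(group) for col in zip(*group)])
    return out

full = make_pattern(8, 4)
skipped = skip_rows(full, 2)        # every pixel is 0: the stripes vanish
filtered = box_downsample(full, 2)  # every pixel is 127.5: mean preserved
```

The skipped image claims the scene is uniformly black, which is the still-image equivalent of the flicker and stair-stepping you see on fine detail in line-skipped video. A real scaler uses a much better filter than a box average, but the principle is the same: you have to look at the rows you're throwing away.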
There are cost, technical, and mechanical design reasons why video can look vastly inferior to a camera's stills capabilities. Doing motion properly is by far the harder task. You can see that in how far ahead digital stills cameras are compared to traditional video cameras, and in how quickly image processing software for stills has developed compared to grading software for video. It's just so much easier, and so much less computationally expensive, to deal with still images than with motion imagery.
Now is the time for motion to finally catch up to stills, and for stills to benefit from the technology that allows motion to be produced at the same quality as stills. So that's what we're doing. We're making motion look as good as stills shot on the same camera.
Graeme