Thank you for explaining things much more clearly than I did.
There is something I should add: sensors come in families, often sharing the same cell structure, the same basic interface circuitry, and the same CFAs. So a design and software conceived for one sensor can easily be adapted to another sensor from the same family.
If we manage to start now with even a very small sensor, we will have gained a jump-start for the day a big sensor we like is freely released: we will already know how to read in images, remove pattern noise and color shading, demosaic them, generate live previews and add overlays, do edge detection for focusing, calibrate for color, process shift and tilt, and so on.
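To make that list concrete, here is a toy Python/NumPy sketch of three of those steps: dark-frame subtraction for fixed-pattern noise, a crude bilinear demosaic, and a gradient-based focus metric for edge-detection focusing. The RGGB Bayer layout and all the function names are my own assumptions for illustration, not taken from any particular sensor or from the Apertus code.

```python
import numpy as np

def remove_pattern_noise(raw, dark_frame):
    """Subtract a master dark frame to remove fixed-pattern noise."""
    return np.clip(raw.astype(np.int32) - dark_frame.astype(np.int32), 0, None)

def _box3(a):
    """Sum of each pixel's 3x3 neighbourhood, via zero padding and slicing."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Very crude bilinear demosaic, assuming an RGGB mosaic of even size."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, raw, 0).astype(np.float32)
        count = mask.astype(np.float32)
        # Each output pixel is the average of the known samples of this
        # colour within its 3x3 neighbourhood.
        rgb[..., ch] = _box3(plane) / np.maximum(_box3(count), 1.0)
    return rgb

def focus_metric(gray):
    """Sum of squared gradients: rises as the image gets sharper."""
    gx = np.diff(gray, axis=1)
    gy = np.diff(gray, axis=0)
    return float((gx ** 2).sum() + (gy ** 2).sum())
```

For a live focus aid one would run `focus_metric` on a small crop of the preview and display the number (or a peaking overlay); it peaks when the lens is at best focus on that crop.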
In fact I have been looking at the CMOSIS/Zedboard combo being employed by the Apertus project for their cine camera. The CMOSIS chip reads into the Zedboard, which is built around a Xilinx Zynq chip combining a dual-core ARM Cortex-A9 with FPGA fabric for I/O. This means Linux code can be written fairly easily to control the chip, read the data out at speed, even do pattern noise reduction, and then ship previews or full files across to the host computer via a network connection, which could be wifi.
Engineering becomes much easier once you have a working prototype of a thing you want to perfect.
The main reason Edmund wants to use a CMOS design with on-sensor ADCs is that all the hard work is done on the sensor. The output from the sensor is then not a brittle analogue signal but a robust digital one. Building the support circuitry for CCD cameras is very difficult, as it must be designed for very low electronic noise and excellent shielding. Analogue readout is also in all probability far more complex.
Sony's sensor has on-sensor converters, and so does the Leica M (240) sensor designed by CMOSIS, but the Nikon D4 and all Canons use CMOS sensors with off-chip ADCs. So not all CMOS is alike. With the Sony type of design, the hard work has already been done by the sensor vendor.