It makes some sense. The smaller pixel has less dynamic range and requires fewer bits to encode the useful information captured by the sensor. It could be that 8 bits at a gamma of 2.2 is sufficient to record what the sensor is capable of capturing.
8 bits with a gamma of 2.2 can record more DR at the pixel level than 12-bit linear, by about 5.5 stops. In fact, the Leica M8 uses this format for its DNG files.
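If you want to check that arithmetic yourself, here is a quick back-of-the-envelope in Python. It treats DR as the ratio of full scale to the smallest nonzero code in linear light, which ignores noise (real DR is noise-limited, as discussed below), so take it as a sketch of the encoding math, not of any actual camera:

import math

def gamma_dr_stops(bits, gamma):
    # Ratio of full scale to the smallest nonzero code, linearized.
    max_code = 2**bits - 1
    smallest_linear = (1.0 / max_code) ** gamma
    return math.log2(1.0 / smallest_linear)  # = gamma * log2(max_code)

def linear_dr_stops(bits):
    # Same ratio for a linear encoding.
    return math.log2(2**bits - 1)

print(gamma_dr_stops(8, 2.2))                        # ~17.6 stops
print(linear_dr_stops(12))                           # ~12.0 stops
print(gamma_dr_stops(8, 2.2) - linear_dr_stops(12))  # ~5.6 stops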
The real issues are the JPEG artifacts, the compression of shadows and highlights, and the clipping of highlights.
However, sensors with higher dynamic range require more bits to encode it, and high-end cameras may need 16-bit A/D converters. Look at Figure 4 on Roger Clark's web site:
I think Roger's point was that the sensors themselves in recent Canons are capable of warranting 16-bit readout at ISO 100, but that just swapping in a 16-bit ADC is not enough, because there is a lot of read noise at ISO 100. The conclusion is based on the lower noise, in electron units, at ISO 1600. Roger's experiment was a response to a post of mine on Usenet where I reported that the absolute noise at ISO 1600 was lower than at ISO 100.

At the time, I thought the deficiency of ISO 100 was the bit depth, and I argued that higher bit depth was the solution, but I realized afterward that posterization doesn't cause that kind of noise; noise that shows up as a standard deviation is barely affected by posterization once the existing noise spans several quantization steps. It became quite clear that bit depth was not the real DR limitation at ISO 100 when I took the same shot with the same manual settings at ISO 100 and ISO 1600 on my 20D, posterized the ISO 1600 image to the same number of levels as the ISO 100 image, and found that it gained only a small amount of visible noise and was still orders of magnitude cleaner than the ISO 100 image.
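You can see why posterization barely registers as noise with a quick simulation (the mean and sigma here are made up for illustration, not measured from a 20D). Quantization only adds about step^2/12 of variance, and that vanishes in quadrature once the existing noise spans a few steps:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic flat field with Gaussian noise, in 12-bit ADU.
# mean and sigma are hypothetical, just to illustrate the effect.
mean, sigma = 512.0, 8.0
frame = mean + sigma * rng.standard_normal((1000, 1000))

for levels in (4096, 1024, 256, 64):
    step = 4096.0 / levels
    posterized = np.round(frame / step) * step  # quantize to a step of 4096/levels
    # Expected std: sqrt(sigma^2 + step^2 / 12)
    print(levels, round(posterized.std(), 2))

Cutting four bits (4096 down to 256 levels, roughly the difference in level occupancy between ISO 100 and ISO 1600 for the same exposure) raises the standard deviation only about 15% in this sketch, which lines up with the small amount of visible noise I saw.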
A system with only 10 bits and no read noise would be better than what we have now, IMO. Then again, with no read noise you might as well digitize at 16 bits, because that would be even better.
I have a lot of experiments in my head that I'd like to try but never get around to. One is to take a stack of ISO 100 images (about 16) with my 20D on a tripod, with a 10mm lens to keep registration errors small. Averaging them would give, in effect, a 16-bit linear RAW with a standard deviation of about 0.52 ADU at the black level, and lower shot noise as well (all noises would be two stops weaker, since averaging 16 frames cuts noise by a factor of 4). Then I'd compare a single one of the 16 images with the stack, and with the stack posterized to various bit depths.
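A minimal sketch of the stacking arithmetic, using synthetic black frames (the 2.1 ADU read noise is an assumed figure, chosen so that 2.1 / sqrt(16) gives the 0.52 quoted above):

import numpy as np

rng = np.random.default_rng(0)

# 16 synthetic black frames; the read noise sigma is assumed, in ADU.
sigma_read = 2.1
frames = sigma_read * rng.standard_normal((16, 512, 512))

single = frames[0]
stack = frames.mean(axis=0)  # average of 16 registered frames

print(round(single.std(), 3))  # ~2.1 ADU
print(round(stack.std(), 3))   # ~0.525 ADU: 1/4 the noise, i.e. 2 stops less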
The other advantages of RAW, such as the ability to use a better decoder on a more powerful computer, have already been mentioned. One can adjust color temperature after the fact in Lightroom even on a JPEG, but having only 8 bits to work with can be a limitation.
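Here is a toy illustration of that limitation (the 1.4x channel gain is an arbitrary, hypothetical white-balance tweak, not anything Lightroom specifically does): push every possible 8-bit level through a linear-domain gain and count what survives.

import numpy as np

channel = np.arange(256, dtype=np.float64)   # every possible 8-bit level

linear = (channel / 255.0) ** 2.2            # decode gamma 2.2
scaled = np.clip(linear * 1.4, 0.0, 1.0)     # hypothetical WB gain on one channel
out = np.round(255.0 * scaled ** (1 / 2.2)).astype(np.uint8)

u = np.unique(out).astype(int)
print(u.size)            # ~220 distinct levels left out of 256
print(np.diff(u).max())  # gaps ("combing") appear in the histogram

With 12 or 14 bits of linear RAW data, the same gain leaves thousands of levels and the rounding loss is invisible; on an 8-bit JPEG, the gaps and the clipped top end are real.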
I'm not sure exactly how much RAW headroom is clipped in a G7 JPEG, but even if it is half of what the DSLRs clip (typically 1 stop for the DSLRs), you will get less noise by shooting your ISO 400 shots at ISO 283 (half a stop lower: 400 / sqrt(2) is about 283) with +0.5 EC, something you cannot do in JPEG mode without blowing highlights.
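The ISO arithmetic, for anyone wondering where 283 comes from (a hedged sketch of the stops involved, not a noise model of the G7):

import math

pull_stops = 0.5
iso = 400 / 2**pull_stops    # ~283: half a stop below ISO 400
extra_light = 2**pull_stops  # +0.5 EC captures ~1.41x more photons

# Shot noise goes as sqrt(signal), so SNR improves by sqrt(1.41),
# about a quarter stop, and the extra half stop of exposure lands in
# RAW headroom that a JPEG would have clipped.
snr_gain_stops = math.log2(math.sqrt(extra_light))

print(round(iso), round(extra_light, 2), round(snr_gain_stops, 2))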
Canon scares me, to be honest. I don't think the people making the decisions know what they are doing, or if they do, they are just trying to save money by avoiding tech support calls from people who can't figure out what to do with the RAW shots they took.