Yes, I get that. What I'm saying is that, to me, there are two steps: getting your camera(s) 'normalised' - making grey grey, or more thoroughly, normalising to a GretagMacbeth chart (which is automatic in some stills software) - and then applying an artistic look to the footage.
If you add a single LUT to two slightly mismatched cameras, they won't match - but if you normalise the two cams first and then add the LUT, they will.
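A minimal sketch of that two-step idea, with made-up grey-card readings and a toy "look" standing in for a real creative LUT - none of these numbers come from an actual camera:

```python
import numpy as np

def grey_card_gains(patch_rgb):
    """Per-channel gains that pull a measured grey patch to neutral
    (all channels equal to the patch's green value)."""
    r, g, b = patch_rgb
    return np.array([g / r, 1.0, g / b])

def apply_look(rgb):
    """Stand-in for the artistic LUT: a simple warm tint plus gamma."""
    tint = np.array([1.05, 1.0, 0.95])
    return (rgb * tint) ** (1 / 1.1)

# Hypothetical grey-card readings from two slightly mismatched cameras.
cam_a_grey = np.array([0.44, 0.42, 0.40])
cam_b_grey = np.array([0.40, 0.43, 0.45])

for name, patch in [("cam A", cam_a_grey), ("cam B", cam_b_grey)]:
    gains = grey_card_gains(patch)
    normalised = patch * gains          # grey is now actually grey
    print(name, "after look:", apply_look(normalised))
```

Once the gains are applied, both cameras' grey patches sit at the same values before the look, so the shared look lands on matched footage instead of amplifying the mismatch.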
Middle grey is different for every sensor. Normalizing it to a 'standard' grey value is flying blind, because you're trying to get the sensor to normalize to something it doesn't know. And you don't know the exact middle grey for a sensor, because the manufacturer hasn't shared its coefficients with you.
If you shoot white and expose bang on in spot mode, you'll get the middle grey the camera was calibrated to, and it looks different on every camera. It's interesting to see how IRE is used in the video industry to place skin tones, blacks and whites. With raw and sRGB monitors, all of that goes out the window.
I'm not sure you can get two disparate sensors to match in color. If you get the reds to match, the blues and greens will be off, and so on. You can't get perfection in video - there are too many problems: a poor color space, Bayer sensors, motion, variations in lighting and color (especially in daylight scenes), artificial lighting on location (especially fluorescents), codecs, bit rates, chroma subsampling, all kinds of crazy sampling, and poor-quality display units (which are getting better).
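A rough illustration of why matching two sensors is always a compromise: a single 3x3 matrix fitted by least squares minimises the average error over a set of chart patches, but individual patches (reds versus blues versus greens) still miss. The patch values here are invented for the example, not real chart measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.uniform(0.05, 0.95, size=(24, 3))        # "reference" camera patches
# Second camera: a slightly different colour response plus a little noise.
source = target @ np.array([[1.10, -0.05, 0.02],
                            [0.03,  0.95, 0.04],
                            [-0.02, 0.06, 1.08]]) + rng.normal(0, 0.01, (24, 3))

# Least-squares fit of a 3x3 matrix mapping source -> target.
M, *_ = np.linalg.lstsq(source, target, rcond=None)
residual = source @ M - target
print("per-patch error:", np.linalg.norm(residual, axis=1).round(3))
```

The fit gets the patches close on average, but no single matrix drives every residual to zero - fixing one hue family shifts the error to another.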
The correct way, in my opinion, is a 3D LUT, which pulls all the colors together for organic color changes. It's such a simple method to use, and anyone can create one in a few hours (with years of experience behind them). The waveform and vectorscope keep you in check.
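For anyone curious what a 3D LUT actually does to a pixel, here is a sketch: the RGB value indexes into a small lattice of output colours and the result is trilinearly interpolated, which is why neighbouring colours move together rather than independently. The 2x2x2 lattice below is a toy, not a real creative LUT:

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """lut has shape (N, N, N, 3), indexed [r, g, b]; rgb is in [0, 1]."""
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo
    out = np.zeros(3)
    # Trilinear blend of the 8 surrounding lattice points.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                idx = (hi[0] if dr else lo[0],
                       hi[1] if dg else lo[1],
                       hi[2] if db else lo[2])
                out += w * lut[idx]
    return out

# Identity lattice with a slight warm shift, just to have something to run.
n = 2
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, n)] * 3, indexing="ij"), axis=-1)
lut = grid * np.array([1.0, 0.98, 0.95])
print(apply_3d_lut([0.5, 0.5, 0.5], lut))
```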
Is this slow?
You pay (in some way) to build a LUT; I don't get why you wouldn't want the process simplified/automated. Some of the things stills photographers do, and have done for years, to speed up their workflow might just be useful in the motion world.
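One way that automation could look in practice: bake a simple per-channel normalisation into a .cube file so it can be dropped onto every clip from that camera as the first step. This assumes the common Resolve/IRIDAS .cube text layout (a LUT_3D_SIZE header, then RGB rows with the red index varying fastest); the gains are placeholder values, not from a real camera:

```python
def write_gain_cube(path, gains, size=17):
    """Write a 3D LUT that applies fixed per-channel gains (clipped to 1.0)."""
    with open(path, "w") as f:
        f.write('TITLE "grey card normalise"\n')
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    out = [min(1.0, (v / (size - 1)) * k)
                           for v, k in zip((r, g, b), gains)]
                    f.write("{:.6f} {:.6f} {:.6f}\n".format(*out))

# Hypothetical gains measured once from a grey card, reused everywhere after.
write_gain_cube("camA_normalise.cube", gains=(1.05, 1.00, 0.96))
```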
Well, in photography, there's almost always only one person doing everything. In filmmaking, there are many. I would assume the cinematographer and/or DIT will take ownership of all aspects of the image, color being paramount.
Sidney Lumet describes in his book 'Making Movies' how he was horrified by labs developing the negatives the wrong way, or by a color timer making the movie look like it wasn't meant to look. I wonder how The Godfather would have looked if someone had used a standard curve to lift the shadows...
Photographers get into using software and then become slaves to it. The problem for them is that there's nobody to shake them out of their habits. In filmmaking, many individuals constantly challenge your cherished beliefs, and compromises and discoveries are constantly made. I don't think I would ever want to lose that.
It even applies to colorists. Photographers don't have anybody looking over their shoulders, while colorists have to please many people and work fast at the same time. There is a school of thought that LUTs are bad for colorists, since they restrict their art. But the DP's vision is final, and that vision is encapsulated in a LUT.
To answer the OP, the Pocket camera is miles better, simply because it's a better workflow. I think 99.9% of those who use it would be very happy with just ProRes. I was blown away by it. All the issues have been fixed, and it simply delivers great video right out of the box with ProRes in Film mode. If there's something I don't like, it's the noise in the shadows, even in raw (studying the DNG stills John posted on Twitter). I'm not sure ETTR is enough to fix it, but I'll need to see more footage to know for sure.
Can't wait for the 4K version to come out.