So I imagine that DNG files from BMC are actually 444 in 8 bits like the TIFFs? And not 10 bits.
Blackmagic DNGs from any of their cameras are 12-bit log-encoded files that generally unpack as 16-bit linear files in an application like Resolve.
They are far, far better than 8-bit or even 10-bit TIFFs. A DNG is basically a TIFF file with some extra metadata, by the way. cDNG is the motion version of a DNG, which is also a DNG but with timecode and audio.
They are unencoded, and therefore no chroma subsampling has been applied to them.
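The 12-bit log to 16-bit linear unpacking mentioned above can be sketched with a toy curve. This is NOT Blackmagic's actual transfer function (that isn't public); the shape here is purely illustrative:

```python
import math

# Toy log curve (an illustrative assumption, not any real camera's curve):
# 12-bit log code values (0-4095) map to a 16-bit-wide linear range (0-65535,
# normalised here to 0.0-1.0).
def log_encode(linear, max_code=4095):
    """Encode a normalised linear value (0.0-1.0) into a 12-bit log code."""
    return round(max_code * math.log2(1 + linear * 65535) / 16)

def log_decode(code, max_code=4095):
    """Decode a 12-bit log code back to a normalised linear value."""
    return (2 ** (16 * code / max_code) - 1) / 65535

print(log_encode(1.0))   # -> 4095 (top of the 12-bit log range)
print(log_decode(4095))  # -> 1.0  (top of the linear range)
```

The point is just that a small number of log code values can cover a much wider linear range, which is why a 12-bit log file expands to 16-bit linear in an application like Resolve.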
Colour sampling, or chroma subsampling, is the set of ratios describing how many times the luminance channel is "sampled" versus the two colour-difference channels.
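As a toy illustration of what those J:a:b ratios mean in sample counts (my own counting exercise, not any standard's reference code):

```python
# For a J:a:b scheme measured over a 4-pixel-wide, 2-row block:
# every pixel keeps its luma sample, while chroma gets 'a' samples on
# the first row and 'b' samples on the second.
def samples_per_4x2_block(j, a, b):
    """Return (luma_samples, chroma_samples_per_channel) for a J:a:b scheme."""
    luma = j * 2       # two rows of J luma samples
    chroma = a + b     # chroma samples across both rows, per channel
    return luma, chroma

for scheme in [(4, 4, 4), (4, 2, 2), (4, 2, 0)]:
    y, c = samples_per_4x2_block(*scheme)
    print(f"{scheme[0]}:{scheme[1]}:{scheme[2]} -> {y} Y samples, "
          f"{c} U and {c} V samples per 4x2 block")
```

So 4:2:2 keeps all the luma but only half the chroma samples, and 4:2:0 only a quarter.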
That is what YUV is (sometimes also called component video). It's fundamentally different from RGB, which is what most stills photographers would be used to working with, and is what the BM camera stores DNG files as.
When we talk about colour sampling, we are talking about what's known as "encoded" video: video in a form ready to edit, do post work on, watch, etc. DNGs are basically like stills RAW photos.
So in YUV form, the Y channel is the brightness or luminance channel, a weighted sum of the colour channels dominated by green. The two colour-difference channels (the U and the V of YUV) are formed by subtracting that luminance from the blue and red signals, so we create a video signal from one full-bandwidth BRIGHTNESS channel plus two difference channels that let the decoder reconstruct the other colours.
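That summing and subtracting can be sketched like this, using the common BT.601 luma weights (the exact coefficients and the scaling of the difference channels vary by standard; this is just the unscaled idea):

```python
# Minimal sketch of RGB -> YUV using BT.601 luma weights.
# U and V are shown here as raw, unscaled colour differences.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance: weighted sum, mostly green
    u = b - y                              # colour difference: blue minus luma
    v = r - y                              # colour difference: red minus luma
    return y, u, v

# A pure grey pixel carries all its information in Y; the colour
# differences collapse to (approximately) zero.
print(rgb_to_yuv(0.5, 0.5, 0.5))  # -> approximately (0.5, 0.0, 0.0)
```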
This was a kind of compression before data compression was around, back in the analog days. The logic was that green is the colour we're most sensitive to, so all the resolution was stored there and the colour could then be stored in two smaller, less frequently sampled channels.
RGB has the brightness encoded within each individual colour channel. YUV does not.
https://en.wikipedia.org/wiki/Chroma_subsampling

RAW video cameras like those from Blackmagic and Arriflex produce DNG files that are still mosaiced (not yet de-mosaiced), so it's not really correct terminology to describe them using video encoding terms, because they arguably haven't been encoded yet. There is no 4:2:2 or 4:4:4 until they get turned into YUV video.
Many, many people also confuse that ratio with the ratio of green, blue and red pixels on a CMOS Bayer sensor and assume they mean the same thing. There are many heated discussions on video-oriented forums about CMOS sensors needing to oversample to be able to generate 4:4:4 video because they supposedly don't have enough pixels to generate "full bandwidth colour". Because the ratio of RGB photosites tends to be two greens to every blue and red, you get a lot of people saying CMOS sensors only have "4:2:2 colour", which is also completely the wrong terminology (and ignores what happens during the mosaic / de-mosaic process).
It's a RATIO for the ENCODED video.
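To see why the photosite ratio is 2:1:1 rather than "4:2:2", here's a toy count over one Bayer tile (my own illustration, not any vendor's layout code):

```python
from collections import Counter

# One common 2x2 Bayer tile arrangement: two green photosites per tile,
# one red and one blue. The whole sensor just repeats this tile.
BAYER_TILE = [["G", "R"],
              ["B", "G"]]

counts = Counter(c for row in BAYER_TILE for c in row)
print(counts)  # Counter({'G': 2, 'R': 1, 'B': 1})
```

A 2:1:1 count of raw photosites before de-mosaicing is a different animal from a 4:2:2 sampling ratio applied to already-encoded YUV video.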
Sony's HDCAM, the novel new HD format that got used to shoot a new Star Wars film called The Phantom Menace, was, strictly speaking, originally a 22:11:11 format! That's what its sampling works out to when expressed against the same base frequency the original SD notation used.
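The back-of-envelope arithmetic behind that 22:11:11 figure (assuming the usual 3.375 MHz base frequency that the SD 4:2:2 notation is built on):

```python
# The "4" in SD's 4:2:2 is 13.5 MHz luma sampling expressed as a multiple
# of a 3.375 MHz base. Apply the same base to an HD luma sampling rate of
# 74.25 MHz and you land on 22 -- hence 22:11:11 for half-rate chroma.
BASE_MHZ = 3.375       # the "1" of the SD nomenclature
SD_LUMA_MHZ = 13.5     # SD luma sampling rate
HD_LUMA_MHZ = 74.25    # HD luma sampling rate

print(SD_LUMA_MHZ / BASE_MHZ)        # -> 4.0
print(HD_LUMA_MHZ / BASE_MHZ)        # -> 22.0
print(HD_LUMA_MHZ / 2 / BASE_MHZ)    # -> 11.0 (half-rate chroma)
```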
So that nomenclature is now very, very bastardised and misunderstood, and has pretty much lost its original meaning :-)
Other RAW cameras like RED and Sony do other secret things that make it harder to work out what's going on. RED have an SDK that does some secret stuff, and until recently you couldn't get anyone from RED to actually commit to what the bit depth of their cameras was. (It's apparently at least 16 bit internally now, but was originally probably 12-14 bit.)
RED also famously sued Sony a little while ago when Sony introduced their RAW format for the F55 / F65. From what I could tell reading the papers lodged, RED have a patent on the compression they do, because they convert the RGB sensor data to YUV FIRST and then apply different encoding and compression techniques to the Y signal than to the UV signals. Sony tried the same technique of compressing the half-encoded YUV signal and got sued. They later settled out of court.
By the way, someone mentioned the metadata being carried by RED? DNGs have metadata embedded in the file itself as well, and it's up to the application handling the file to interpret it.
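As a sketch of how that embedded metadata sits in the TIFF container a DNG uses (plain stdlib, no DNG library; the tiny hand-built header at the end is purely for demonstration):

```python
import struct

# Peek at a TIFF/DNG header: byte order, the magic number 42, and how many
# metadata tags sit in the first IFD (image file directory).
def tiff_tag_count(data: bytes) -> int:
    byte_order = data[:2]                       # b'II' little- or b'MM' big-endian
    fmt = "<" if byte_order == b"II" else ">"
    magic, ifd_offset = struct.unpack(fmt + "HI", data[2:8])
    assert magic == 42, "not a TIFF/DNG file"
    (num_tags,) = struct.unpack(fmt + "H", data[ifd_offset:ifd_offset + 2])
    return num_tags

# Tiny hand-built little-endian TIFF header whose first IFD claims 3 tags:
fake = b"II" + struct.pack("<HI", 42, 8) + struct.pack("<H", 3)
print(tiff_tag_count(fake))  # -> 3
```

In a real DNG each of those IFD tags is a piece of metadata (colour matrices, black levels, timecode in cDNG, and so on) that the handling application interprets.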
JB
EDIT
Here's a link to some DNGs from the little POCKET cinema camera. It's "only" a 1920 sensor, so tiny, but I think you'll find a camera that can shoot 24 of these every second is pretty compelling.

https://copy.com/s36D39T6q7oa

(These are quite old and from a prototype pocket camera, but you get the idea.)