Let's see what our forum engineers have to say on that.
First, I'd like to say that what follows is NOT scientific, OK? I'm not a scientist. But there is something to it, and I wouldn't mind a rational explanation if anyone feels like giving one.
So forgive the non-scientific wording in this OP.
Also, I didn't want to put this in the coffee corner, because it has to do with the way images are recorded and then perceived, so it concerns this area as much as any other area of image output.
Besides, I suspect our engineers are very present on this forum, judging by the number of graphs and charts I generally see around here.
So apologies to the MF artists; forgive my intrusion into the temple, but this is a purely tech thread.
OK, so here goes:
If I remember (vaguely) my physics classes, I learned that the faster an object already moves, the more energy it takes to increase its speed further. If I remember well, it's because the faster an object moves, the more massive it effectively becomes, so more and more energy is required to keep accelerating it.
Am I right there?
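(Just to test my memory of that law, here is a tiny back-of-the-envelope sketch, my own Python, nothing official: the relativistic kinetic energy E = (gamma - 1) * m * c^2. The point is only that equal-looking steps in speed near the speed of light cost wildly unequal amounts of energy:)

    import math

    C = 299_792_458.0  # speed of light, m/s

    def kinetic_energy(mass_kg, speed_fraction_of_c):
        # relativistic kinetic energy: E = (gamma - 1) * m * c^2
        gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c ** 2)
        return (gamma - 1.0) * mass_kg * C ** 2

    # energy needed to bring a 1 kg object to various speeds
    for v in (0.5, 0.9, 0.99, 0.999):
        print(f"{v:.3f} c -> {kinetic_energy(1.0, v):.2e} J")

Each step toward c is a smaller and smaller gain in speed, yet the energy bill grows enormously, which is more or less the feeling I describe below with bitrates.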
Well, I remembered this law because I had an experience with the GH2 hack that seems a bit similar. (This is the MF forum, so you could ask what the GH2 has to do with it, but I'd like to ask something general here that could also concern MF equipment.)
So, here is a camera that records 24 Mb/s of data to the card. I hack it to 44 Mb/s. An increase in quality is clearly visible. Then I said, fine, I'm going to hack it to 88 Mb/s. I was expecting the same visible improvement, but no. From 24 to 44, I see a difference. From 44 to 88, I don't notice any difference. To start noticing a difference over 44 Mb/s, I have to jump not to double but to more than triple (almost 4 times).
In other words, it seems that the higher you push the bitrate, the more extra bitrate it takes to see any difference at all. It doesn't follow a simple x2 logic.
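(Put as bare numbers, using only the steps I describe above, here is the little ratio calculation; 160 is roughly where I start to see a gain over 44:)

    # nominal bitrates (Mb/s) at which I could actually see a step up in quality
    visible_steps = [24, 44, 160]

    for lower, higher in zip(visible_steps, visible_steps[1:]):
        print(f"{lower} -> {higher} Mb/s: x{higher / lower:.1f} the bitrate for one visible step")

The multiplier needed for one visible step grows from roughly x1.8 to roughly x3.6, instead of staying at a constant x2.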
That's why I remembered this physics law about speed and energy (I don't know its name). But again, this is a perception I had in the field; I didn't do any rigorous testing, it's purely perceptual, OK?
So my question to our scientists and engineers: are you surprised by what I experienced, and is it reasonable to extrapolate it to imaging in general? For example, that a slight increase in quality costs much, much more "power", and the higher the recorded quality already is, the harder it is to increase it even further?
Hope I explained myself well enough.
Some more strange data:
30-second clips, same subject, same lighting, same lens:
24 Mb/s - file size 50 MB
44 Mb/s - file size 80 MB - noticeable difference in quality compared to the 24
100 Mb/s - file size 270 MB - no noticeable difference in quality compared to the 44; I'd need to jump to 140/160 Mb/s to see an improvement, almost 4x
etc... the pattern stays the same.
Notice also that the file size increase doesn't follow the bitrate increase? To get twice the bitrate, I need about 3 times more storage. That has consequences for storage.
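(To put rough numbers on that, here is another small calculation of mine. I'm assuming the file sizes above are in megabytes and the clips are exactly 30 seconds; converting file size back into megabits per second gives the actual average bitrate, as opposed to the nominal hack setting:)

    CLIP_SECONDS = 30

    # (nominal hack setting in Mb/s, measured file size in MB), from the clips above
    clips = [(24, 50), (44, 80), (100, 270)]

    for nominal_mbps, size_mb in clips:
        actual_mbps = size_mb * 8 / CLIP_SECONDS  # MB -> megabits, then per second
        print(f"nominal {nominal_mbps:3d} Mb/s -> actual average ~{actual_mbps:.0f} Mb/s "
              f"({size_mb} MB over {CLIP_SECONDS} s)")

If those assumptions hold, the real average bitrate sits below the nominal setting, and by a different amount at each setting, which could be part of why file size and nominal bitrate don't scale together.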
now...
A Red One 4K raw file, 30-second clip: 600 MB to 1 GB. Close to an already highly compressed high-bitrate AVCHD hack which records, let's remember, in 4:2:0... and 8 bits!?
And the visual differences between the R3D file and the hacked GH2 files are, yes, huge. (Not so much once downsampled to HD, though.)
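(One last back-of-the-envelope sketch, same spirit, to see how compressed that R3D actually is. I'm guessing here: I assume 4K means roughly 4096 x 2160 photosites, 12 bits per photosite straight off the Bayer sensor, and 24 fps; I don't know the Red One's exact figures, so treat these as rough assumptions:)

    WIDTH, HEIGHT = 4096, 2160      # assumed 4K frame, one Bayer sample per photosite
    BITS_PER_SAMPLE = 12            # assumed sensor bit depth
    FPS = 24
    CLIP_SECONDS = 30

    raw_bits = WIDTH * HEIGHT * BITS_PER_SAMPLE * FPS * CLIP_SECONDS
    raw_mb = raw_bits / 8 / 1_000_000
    print(f"uncompressed Bayer data for 30 s: ~{raw_mb:.0f} MB")

    for r3d_mb in (600, 1000):      # the R3D sizes I mention above
        print(f"{r3d_mb} MB file -> roughly {raw_mb / r3d_mb:.0f}:1 compression")

So if those guesses are anywhere near right, even the "raw" R3D is itself strongly compressed, just starting from much richer sensor data than an 8-bit 4:2:0 AVCHD stream, which might be why its file size can sit so close to a heavily hacked GH2 file while still looking so different.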
Do you see a logic in all that?