Well, there are many possible workflows with RED.
The only thing I really don't like: just check the RED forum and you'll notice that the workflow is mainly based on FCP for the vast majority of the posters, which I find particularly annoying because the big production houses and VFX studios here in Europe are generally NOT working on FCP. It seems the USA is a different reality. Anyway,
there are still many questions to be answered when it comes to simplifying things, "simple" being a word that doesn't seem to exist yet in the motion-picture industry dictionary.
In Avid, my workflow with RED is this:
- import the R3D via AMA (1 minute)
- transcode the bin to DNxHD 36 into another bin (hours if there is volume)
- edit (days)
- duplicate the edited sequence (2 sec)
- manually relink the duplicated sequence to the native R3D (3 minutes)
- export the AAF (1 min)
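The relink step above is really just a name/timecode match between the offline proxies and the online media. As a toy sketch only (Avid actually matches on source name and timecode; the file names here are hypothetical, and this version matches on file-name stems):

```python
# Toy sketch of an offline->online relink: map each proxy clip in the
# edit back to its native R3D file by shared name stem.
# Real NLE relinking is richer (tape name, timecode, reel); this only
# illustrates the idea. All file names below are made up.
from pathlib import PurePath

def relink(edit_clips, native_media):
    """Return {proxy clip -> native media path} matched by name stem."""
    index = {PurePath(p).stem: p for p in native_media}
    return {clip: index.get(PurePath(clip).stem) for clip in edit_clips}

proxies = ["A001_C001.mxf", "A001_C002.mxf"]
natives = ["A001_C001.R3D", "A001_C002.R3D"]
print(relink(proxies, natives))
# {'A001_C001.mxf': 'A001_C001.R3D', 'A001_C002.mxf': 'A001_C002.R3D'}
```

Once every clip in the duplicated sequence resolves to a native R3D, the exported AAF carries references to the full-quality footage instead of the DNxHD 36 proxies.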
The advantage is that the edit now points to the native footage, so it's not locked.
If I want to work offline, I've noticed that MetaFuze is slower than RedCine-X, so I tend to use RedCine-X to export at 1/2 debayer. But again: transcoding queues, time, etc.
When I work with DPX image sequences, I use proxy files to edit. In fact, I use a Canopus Lossless proxy generated in the bin. So even with a 4K sequence (which Edius allows, though it's more interesting to work in 2K, because you can crop and recompose the 4K without losing definition), the editing is smooth using the Canopus codec. But you see that again there is a complication: a transcode. And each time part of the sequence is recomposed and cropped, there is another transcoding step to generate the proxy of the sub-sequence. It's OK, it's not crazy, but I'd like it way simpler.
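The "recompose 4K in a 2K timeline without losing definition" point comes down to simple arithmetic: the reframe headroom is the ratio of capture width to delivery width. A quick sketch (using 4096 px for 4K capture and 2048 px for a 2K timeline, the usual nominal widths):

```python
# How far you can punch in on the capture frame before the delivery
# frame has to upscale: capture width divided by delivery width.
def reframe_headroom(capture_w, deliver_w):
    return capture_w / deliver_w

# 4K capture (4096 px wide) finishing in a 2K timeline (2048 px wide):
print(reframe_headroom(4096, 2048))  # 2.0 -> you can crop to half the frame
```

Anything up to a 2x punch-in still fills the 2K frame with native pixels; beyond that you start upscaling and losing definition.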
In fact, my concern is this. It's interesting to note that on Iron Man they used the DNxHD codec, and they said that while they were cautious about using DNxHD 36, because they didn't want to compromise quality, they were amazed on the big screen by the highest 10-bit DNxHD.
When you read all that stuff, it makes you wonder whether the best thing wouldn't simply be:
transcode the sequence into the highest DNxHD, Canopus or ProRes quality and that's it. End of story, and no more reconforming to R3D.
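The storage side of that "transcode once at the highest quality" tradeoff is easy to ballpark from the codecs' nominal bitrates. A rough sketch (36 Mbit/s for DNxHD 36 and 220 Mbit/s for the highest 10-bit 1080p DNxHD flavor, i.e. 220x; these are the published nominal figures, not measurements from this workflow):

```python
# Ballpark storage cost per hour of footage at a given codec bitrate.
def gb_per_hour(mbit_per_s):
    # Mbit/s * seconds per hour / 8 bits per byte / 1000 MB per GB
    return mbit_per_s * 3600 / 8 / 1000

print(gb_per_hour(36))   # 16.2  -> ~16 GB/hour for DNxHD 36 proxies
print(gb_per_hour(220))  # 99.0  -> ~99 GB/hour for 10-bit DNxHD 220x
```

So mastering straight to the highest DNxHD costs roughly 6x the disk of the proxy flavor, which is the price of never having to reconform to R3D.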
In short, there are many very complicated workflows that are supposed to be better at preserving quality through the pipeline, and it seems that this is mostly tech masturbation.
Why not use RedCine-X to do a primary grade, then ingest that into an NLE converted to the highest possible codec, and that's it? Why so much complication in the chain?
I have the feeling that this 4-5K reconform is a myth. I could be wrong, because my experience is still limited and I'm learning.
You know what, Sareesh? If I could, I would just get rid of all that stuff and only work within Nuke from A to Z, end of story.
The benefit of image sequences (I.S.) to me is clear: everything can work with them, even Photoshop. Mostly people use them for the few cuts that have to be sent to VFX, but on short projects, like commercials, I've done an I.S. workflow from beginning to end and it works amazingly well. I don't get why there are so few 100% I.S. workflows, and why people spend their time transcoding between platforms.
There is a point I'm missing that I'd like to understand:
how do you avoid the transcodes once and for all?