Forum Replies Created
-
I’m afraid I don’t know anything about the P2 or MXF format so I can’t answer that directly.
A lot of cameras can shoot a higher dynamic range but this range has to be squeezed into the limits of the delivery format somehow. Remember, there aren’t really any viewing options that show higher dynamic range images so everything has to be clamped at some point. The idea is not to clamp the values until right at the end of your work.
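To make that concrete, here is a toy sketch (hypothetical NumPy code, not anything from a real pipeline) of why you keep the full float range until the very end: push a super-bright highlight down and back up, and compare what happens if you clamp to display range too early.

```python
import numpy as np

# A "super-white" highlight value, stored as a float pixel (1.0 = display white).
pixel = np.float32(4.0)

# Grade it down by two stops, then back up by two stops, staying in float.
graded = pixel * 0.25            # 1.0 -- still knows it was bright
restored_float = graded * 4.0    # back to 4.0, nothing lost

# Same operations, but clamp to display range first.
clamped = np.clip(pixel, 0.0, 1.0)            # highlight detail destroyed here
restored_clamped = (clamped * 0.25) * 4.0     # only gets back to 1.0

print(restored_float)    # 4.0 -- float kept the highlight
print(restored_clamped)  # 1.0 -- the early clamp lost it for good
```

The numbers are made up, but the principle is exactly the one above: clamping is a one-way operation, so you delay it until delivery.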
I work on feature films with Nuke and we work with floating point values all the time but we always view our work through a LUT (look up table) that shows us how the values should look on a projector, or HD display. We send these float images to the grading suite (as EXR or DPX image sequences) so they have the whole range to work with, but then the final movie is clamped once they have everything in the right place.
There are codecs that let you render QuickTimes with floating point values, but the files will be just as big, and rendering image sequences is a much more robust workflow for this kind of situation.
EXR is a very common format when it comes to rendering VFX, and is ideal for pre-rendering something to re-use in another After Effects composition. It is completely lossless so you keep every tiny detail from your original comp. There are also lossless compression options in the EXR format that help keep file sizes down without losing any detail.
There is a brief description of some of the terms and concepts in this PDF from Steve Wright: https://vfxio.com/PDFs/Nuke_Color_Management_Wright.pdf
His book has a lot more detail about some of the more technical aspects of VFX work and is probably the best description you are going to get about color space. I think you can get a Kindle version.
—
-
I can see how that could confuse you.
Dynamic range and bit depth are kind of linked, but not really. In the case of the Avid codec the extra bit depth just adds more fidelity within the same range.
Even in After Effects, switching between 8bit and 16bit doesn’t add dynamic range to the project; it just adds more steps between black and white. You only get to work with values brighter than white and darker than black (float) when you switch to 32bit.
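A rough illustration of that distinction (a hypothetical NumPy sketch, not After Effects internals): integer bit depths share the same black-to-white range and differ only in step count, while float can hold values outside it.

```python
import numpy as np

# Integer bit depths: same 0..1 range, different number of steps.
steps_8bit = 2 ** 8    # 256 levels between black and white
steps_16bit = 2 ** 16  # 65536 levels -- finer steps, identical range

# Smallest step between black (0.0) and white (1.0) at each depth:
print(1 / (steps_8bit - 1))   # roughly 0.0039
print(1 / (steps_16bit - 1))  # roughly 0.0000153

# 32-bit float: brighter-than-white values survive; integer storage clamps them.
overbright = np.float32(2.5)
as_8bit = np.uint8(np.clip(overbright, 0, 1) * 255)
print(overbright, as_8bit)  # 2.5 255
```

So more integer bits means finer gradations, not a wider range; only the float switch changes what range you can represent.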
—
-
“I have never ever heard this before. In my opinion, it needs to be more prevalent in talk about intermediate codecs. And just why in the world don’t more codecs support high dynamic range?! Seems like everyone would need that kind of functionality, since color grading is usually done last.”
I think you are misunderstanding the point of these intermediate codecs. They are there to provide a compromise between file size, low CPU utilization, and quality. They are also aimed at people working in edit packages more than people compositing.
Most displays, edit software and final delivery formats don’t need floating point colours, so as part of that compromise these codecs limit the bit depth.
In your situation, you cannot compromise on bit depth, so you have to use a different format, but sacrifice file size and realtime playback. Once you have your final composite in place you can afford to clamp your floating point values, because no one will see them anyway.
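That final clamp-and-quantize step might look something like this minimal sketch (hypothetical NumPy code; the helper name `to_10bit` is made up, and real pipelines would also apply a transfer curve before quantizing):

```python
import numpy as np

def to_10bit(float_image):
    """Clamp a float image to 0..1 and quantize to 10-bit code values.
    Hypothetical helper -- real delivery also applies a transfer curve."""
    clipped = np.clip(float_image, 0.0, 1.0)
    return np.round(clipped * 1023).astype(np.uint16)

# A float composite: sub-black, black, mid-grey, white, super-white.
comp = np.array([-0.2, 0.0, 0.5, 1.0, 4.0], dtype=np.float32)
print(to_10bit(comp))  # [   0    0  512 1023 1023]
```

Note how both the sub-black and super-white values collapse onto the 0 and 1023 code values: that is the clamp you only want to apply once, at the end.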
—
-
It seems your question is answered in the other posts. The Avid codec is designed to be used with video that originated from a video camera. Most video cameras only capture 10 or 12 bits of latitude, so there is no need to store super bright values.
When people say that ProRes or Avid DNxHD are good intermediate codecs, they are usually talking about transcoding camera footage, or rendering out final pieces of work from After Effects (or Nuke etc) to send back to an edit application. In these cases you shouldn’t really need the floating point range any more. Your floating point composites or effects should be converted into a 10 bit colour space, like rec709.
Almost all displays and delivery formats are 8 or 10 bit, so you can’t deliver anything in float space.
If you are pre-rendering something to continue using in After Effects then you will need to pick a format that supports 32bit floating point values, which realistically is only EXR. TIF files can do it but are much slower to work with.
—
-
Conrad Olson
July 4, 2014 at 11:52 pm in reply to: Large file editing
“give me only one time an in and out. So I cannot use more stuff from that same file.”
You don’t mention anything about adding the section you selected to a sequence.
You have to create a sequence and add the first section you selected to it. Then you can go back to the clip, select another section, add that to the sequence, and keep repeating that for each section of the clip you want.
—
-
Also, why are you doing it over 118 frames for 2 seconds? Are you sure you need to work at such a high frame rate?
—
-
Conrad Olson
May 5, 2014 at 9:26 pm in reply to: Source timecode incorrect when exporting from After Effects to AME
I noticed something similar the other day. The timecode that AME was burning in didn’t match the correct timecode from Premiere, so I used the timecode effect in Premiere instead, but even then that would give a different result than exporting straight from Premiere.
—
-
I had the same issue on my latest project. I ended up un-nesting the sequences.
I’d definitely like that feature.
—
-