John Heagy
Forum Replies Created
-
[Dave LaRonde] “the motion seen at 24p is VERY different than at 29.97 interlaced. Fieldskit ain’t gonna be able to fix that. “
That’s right… which is why the next step is to use Twixtor to convert 60p to 24p. Giving Twixtor 60 frames to pick 24 from is far better than picking 24 from 30. Twixtor will also apply a small amount of Optical Flow tweening to correct the motion in the images. Again, AE will do this as well.
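To see why 60 source frames beat 30, here is a minimal sketch of a naive nearest-frame pulldown. This is not Twixtor's algorithm (Twixtor adds optical-flow interpolation on top); it just shows that sampling 24p out of 60p lands much closer to the ideal frame times, so there is far less motion error to correct:

```python
def nearest_frame_indices(src_fps, dst_fps, seconds=1):
    """Index of the nearest source frame for each destination frame."""
    n_dst = int(dst_fps * seconds)
    return [round(i * src_fps / dst_fps) for i in range(n_dst)]

# From 60p: the ideal spacing is 2.5 source frames, so a nearest pick
# alternates steps of 2 and 3 -- a small, even error for tweening to fix.
from_60p = nearest_frame_indices(60, 24)

# From 30p: the ideal spacing is 1.25 frames, so steps lurch between
# 1 and 2 -- lumpier cadence, and bigger gaps for the tweener to invent.
from_30p = nearest_frame_indices(30, 24)
```

The finer the temporal sampling of the source, the less "new" image the optical-flow pass has to synthesize per output frame.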
-
Export a CMX EDL and run it through EDLhacker https://www.edlhacker.com/. It will give you a reel summary at the bottom, including total time per reel.
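For the curious, the per-reel tally is simple to do yourself. This is a rough sketch, not EDLhacker's actual code: it assumes a CMX3600-style EDL, cuts only (no dissolve lines), and non-drop 30fps timecode.

```python
import re
from collections import defaultdict

FPS = 30  # assume non-drop-frame 30fps timecode

def tc_to_frames(tc):
    h, m, s, f = map(int, tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(n):
    f = n % FPS; n //= FPS
    s = n % 60; n //= 60
    return f"{n // 60:02d}:{n % 60:02d}:{s:02d}:{f:02d}"

# CMX3600 event line: event#, reel, track, transition, then four timecodes
# (src in, src out, rec in, rec out). Cuts only; "D 030" dissolves won't match.
EVENT = re.compile(
    r"^\d+\s+(\S+)\s+\S+\s+\S+\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"\d{2}:\d{2}:\d{2}:\d{2}\s+\d{2}:\d{2}:\d{2}:\d{2}"
)

def reel_summary(edl_text):
    """Total source duration used per reel, as timecode."""
    totals = defaultdict(int)
    for line in edl_text.splitlines():
        m = EVENT.match(line.strip())
        if m:
            reel, src_in, src_out = m.groups()
            totals[reel] += tc_to_frames(src_out) - tc_to_frames(src_in)
    return {reel: frames_to_tc(n) for reel, n in totals.items()}
```

Feed it the text of the exported EDL and you get a dict of reel name to total time.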
John Heagy
-
If all you need is realtime playback on the timeline, then Avid and Adobe are far better than FCP. However, this is just timeline playback; when it comes time to export a file you will need to choose a codec to render to. I don’t know of any file format that supports multiple codecs in a self-contained file.
For our workflow, no matter how good realtime playback is, if there is significant rendering required to create a file that can be passed on to the next step in the workflow, an Episode Engine cluster in our case, then staying 100% native trumps realtime playback. In a 100% native timeline we can create a 2hr ref movie in a few minutes and send it off to Episode Engine for flattening or conversion.
Another concern I have with realtime mixed-codec playback is: are there any shortcuts taken as compared to 100% native? This is certainly the case with FCP’s limited RT Extreme… what about Avid and Adobe? Is there any quality benefit to rendering to a playback-friendly timeline-native codec before final output to tape? I’d imagine Red .r3d footage would look improved post-render compared to the realtime preview.
John Heagy
-
If you use Re:Vision’s FieldsKit to deinterlace to 60p, then Twixtor to convert 60p to 24p, you should get good results. You could use After Effects to do the same thing.
-
You’re seeing the motion blur caused by the camera movement, but without the move. You can try something like Re:Vision’s ReelSmart Motion Blur to remove the blur selectively.
-
Since the example didn’t make it, I will assume you want it to look like a stop-motion miniature shoot. Besides a sharp, motion-blur-free image and a steppy frame rate, the key is to simulate a very shallow depth of field. This can be done by blurring the foreground and background, leaving an area of focus in the center. It’s helpful to have a shot with foreground and background objects.
I’ve heard this effect described as “Miniaturization” and the “Toy Soldier” look. It most likely was first noticed shooting with the new crop of high-speed cameras and the shallow DOF they can exhibit.
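The blur-everything-but-a-center-band idea can be sketched in a few lines. This is only an illustration of the compositing logic, not any plugin's implementation; the box blur, the linear focus ramp, and the function names here are all my own stand-ins, and a real grade would use a nicer lens blur:

```python
import numpy as np

def box_blur(img, radius):
    """Crude box blur via shifted copies (edges wrap; fine for a sketch)."""
    acc = np.zeros_like(img, dtype=np.float64)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return acc / n

def fake_shallow_dof(img, focus_center=0.5, focus_width=0.2, radius=4):
    """Blend sharp and blurred copies: sharp in a horizontal focus band,
    ramping to fully blurred at the top and bottom of frame."""
    h = img.shape[0]
    y = np.linspace(0.0, 1.0, h)
    # 0 inside the focus band, ramping linearly to 1 toward top/bottom
    d = (np.abs(y - focus_center) - focus_width / 2) / (0.5 - focus_width / 2)
    mask = np.clip(d, 0.0, 1.0)[:, None]  # broadcast across columns
    return img * (1 - mask) + box_blur(img, radius) * mask
```

Rows in the middle of the frame come back untouched while the top and bottom get the full blur, which is the "miniature" cue in a nutshell.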
John Heagy
-
[Bob Zelin] “uncompressed 10 bit HD is 157Mb/sec. “
No… it’s 1200Mb/sec… you mean 157MB/sec. Little “b” for bits, Big “B” for Bytes.
Most codec specs, even the names, refer to the data rate in Megabits/sec: DV25, DNx145, DV100 (DVCProHD), AVC-I 100, ProRes, XDCam, etc. The only one that uses MegaBytes is Redcode36, which means 36MB/sec.
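The arithmetic behind those numbers is worth doing once. This back-of-envelope sketch counts active picture only (no blanking), so it lands near, not exactly on, the oft-quoted 157MB/sec figure:

```python
# Uncompressed 10-bit 4:2:2 HD (1920x1080) at 29.97 fps, active picture only.
width, height = 1920, 1080
samples_per_pixel = 2      # 4:2:2 -> Y every pixel, Cb/Cr shared by pixel pairs
bits_per_sample = 10
fps = 30000 / 1001         # 29.97

bits_per_sec = width * height * samples_per_pixel * bits_per_sample * fps
mbps = bits_per_sec / 1e6            # little "b": Megabits/sec
megabytes_per_sec = mbps / 8         # big "B": MegaBytes/sec

print(f"{mbps:.0f} Mb/sec = {megabytes_per_sec:.0f} MB/sec")
# roughly 1243 Mb/sec, about 155 MB/sec
```

Mix up the b and the B and you are off by a factor of eight, which is exactly the kind of unit slip the next line is about.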
The devil is in the details; we don’t want any spacecraft crashing into Mars.
Anybody get that reference?
John Heagy
-
[Bob Zelin] “with the JVC camera, you are only getting ProRes422HQ images,”
This camera records XDCam MPEG2 .mov, not ProRes. No camera records internally to ProRes… that is, until the Alexa ships.
10-bit uncompressed is overkill; that’s 1200Mb/sec… HD broadcasts are only 19Mb/sec. Believe it or not, Digital Cinema is only 250Mb/sec. Get yourself a Ki Pro and record the HDSDI output directly to ProRes (HQ) at 220Mb/sec… more than enough IMHO.
-
Given that all digital broadcasts are 4:2:0, if you start with 4:1:1 you end up with 4:1:0.
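The reason: each subsampling stage can only throw chroma away, so the cascade keeps the worst (minimum) chroma resolution per axis. A tiny sketch of that bookkeeping, with the common schemes written as fractions of luma resolution:

```python
# Chroma resolution as (horizontal, vertical) fractions of luma resolution.
SCHEMES = {
    "4:2:2": (1/2, 1),
    "4:1:1": (1/4, 1),
    "4:2:0": (1/2, 1/2),
    "4:1:0": (1/4, 1/2),
}

def cascade(*names):
    """Effective scheme after passing through each stage in turn:
    the per-axis minimum of all stages. Sketch; covers only SCHEMES."""
    h = min(SCHEMES[n][0] for n in names)
    v = min(SCHEMES[n][1] for n in names)
    return next(k for k, hv in SCHEMES.items() if hv == (h, v))
```

So 4:1:1 source (quarter horizontal chroma) through a 4:2:0 broadcast chain (half horizontal, half vertical) leaves quarter horizontal and half vertical: 4:1:0.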
-
Arrays and RAID hard drives can be used for backup, but they should not be used for archive. I wouldn’t expect people to post questions about LTO tape systems or long-term archive strategies here… IMHO
Thanks
John Heagy
NFL Films