Forum Replies Created

Page 11 of 37
  • Next time after you render, export an XML from the conform tab… and import that into FCP.

    eric b johnson
    online editor | colorist | workflow
    https://vimeo.com/39073239


  • Eric Johnson

    August 15, 2013 at 6:10 pm in reply to: View node output, not output image

In v8, weren’t you able to double-click a node to view its output and single-click another node to modify it?

I just checked this on a Lite version of v9, and it currently doesn’t appear to work… but maybe that’s the Lite edition speaking?

    I may also be mis-remembering…

    eric b johnson
    online editor | colorist | workflow
    https://vimeo.com/39073239


  • Have you confirmed that the “Reel/Tape” info is present and/or unique? Also, are there clips that may have similar Reel/Tape info that may also have similar Timecode but different file names?

    Have you tried sending the accurate timeline created in FCP from FCP to Resolve via FCP XML?

  • Eric Johnson

    August 12, 2013 at 5:22 pm in reply to: Round trip question (basic workflow issue)

Depending on the extent of the changes, in the preconform/notch workflow it can also be helpful to do a hard-cut-to-hard-cut export of the changes from the timeline and lay in the new media by timecode in v2.

    You will have to notch the new sections… but like I said, this will depend on the extent and number of changes to determine if it is a viable option.

  • Eric Johnson

    August 7, 2013 at 11:46 pm in reply to: Underperforming new Davinci Resolve

    Mike:

    Do I understand correctly that a 680 and Q4k can co-habitate a Mac Pro? Without the need for a Cubix or some other Chassis? I was under the impression there were power issues with that type of setup…

    If so, that is great/interesting news… might be time to get rid of my 5770…

  • Eric Johnson

    July 29, 2013 at 5:16 pm in reply to: Resolve 9.1.3 Lite window question

You may be leaving “Highlight” on… pretty sure that’s what it’s called… the checkbox in the Qualifier tab that lets you see what you are qualifying. It works for Windows also…

  • Eric Johnson

    July 25, 2013 at 10:53 pm in reply to: Resolve Lite 8.2.2 for OS X

Have you tried switching your boot kernel to 64-bit? Is there a reason you are unable to use it?

    https://support.apple.com/kb/ht3773

If you are able to use AE, why not use it? Generate the asset, import it as an alpha matte, and there’s no need to render…

    Keep the project as a Template and then you have what you need to make additional files…

    Anything and everything to avoid the Avid Title Tool, as well as Marquee…

The basic containers I understand (as far as where overall performance can be impacted), but where I get a little “iffy” is trying to equate CPU/RAM speed to R/W, encode/decode, or buffering… since there is a disparity in the speed metrics: B/s vs. Hz.

Of course this gets additionally muddled when you take into account that CPU clock speed is a soft number, with every new chip operating at nearly the same clock speed but being optimized differently for multithreading or per-chip core count…

Is there a way to force encode/decode of a particular codec onto the CPU only, to determine how that codec performs? Obviously the results would be slightly skewed for the system being tested, because of the aforementioned limiting factors, but if I could determine that my system’s encode/decode of a particular codec at the CPU level is X, then it would be possible to know whether N GPUs is optimal based on Y render results… there would still be variables, of course, but the general principle remains true…

Beyond all of that though, knowing that 90–100 ProRes (HQ) 1080 frames per second is a lofty goal, I know more than I did. Which is always helpful.

  • [Juan Salvo] “It’s your CPU or maybe drives… From the sound of it, I’d say your CPU”

    Juan:

Your comment raised an interesting question, at least it did for me… Is there a way to determine at what point your CPU becomes the bottleneck?

    I am able to determine, within an allowable margin of error, the approximate “frames/sec” my drives are able to achieve as a product of data transfer, but that does not account for per frame encode/decode or any other CPU processing that is happening. Nor does it account for what my GPU is processing…

    For example:
    I have a RAID5 that does 280 MB/s, according to the AJA system test for a 16GB file.
At DNxHD 175/x or ProRes (HQ), 24fps (not 23.976, for the sake of discussion), that is approximately 290 fps (depending on how you do the math and where you may or may not round up; I did it a couple of ways and got between 280 and 310. ProRes being VBR, I feel OK using 290 for this discussion).
    In Resolve I can get around 60fps on shots w/o grades (if I remember correctly, this is mostly from memory)….

    Based on that information, using Resolve strictly as a means to transcode/transfer media, I am operating @ 20% of my drive speed.

    In this situation I know my GPU is on the low side of processing power, but that should have limited impact on what the CPU is actually able to process… so is there a way to determine what portion of that 80% loss is a result of what the CPU is doing?

The system in question is similar to the OP’s… MacPro5,1, 12×2.66 GHz, 26 GB RAM, 5770/Q4k, OS X 10.7.5, Resolve 9.1.3, eSATA-II RAID5 (8 drives).
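For anyone who wants to sanity-check the arithmetic above, here is a minimal Python sketch of the drive-speed-to-fps math. It assumes nominal codec bitrates (DNxHD 175 ≈ 175 Mbit/s) and the 280 MB/s AJA figure and ~60 fps Resolve figure quoted from memory above; it is only a ceiling from drive throughput, ignoring CPU/GPU work:

```python
def max_fps_from_drive(drive_mb_per_s, codec_mbit_per_s, fps=24):
    """Upper bound on frames/sec the drive can feed, from throughput alone."""
    mb_per_frame = (codec_mbit_per_s / 8.0) / fps  # nominal MB per frame
    return drive_mb_per_s / mb_per_frame

# 280 MB/s RAID5 feeding DNxHD 175 (~175 Mbit/s) at 24 fps
ceiling = max_fps_from_drive(280, 175)   # ~307 fps, inside the 280-310 window
utilization = 60 / ceiling               # observed Resolve throughput vs. ceiling
print(round(ceiling), f"{utilization:.0%}")  # prints: 307 20%
```

The ~20% utilization matches the figure in the post; the open question of how much of the remaining ~80% is CPU encode/decode versus GPU processing is exactly what this math cannot separate.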

