Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.


  • Benchmarks wanted, please participate!

    Posted by Pepijn Klijs on July 3, 2012 at 8:10 am

As I've just built my new Resolve system, I'm wondering how it's doing compared to other configurations. So I created a very simple benchmark test that probably isn't scientifically rigorous, but should shed some light on how various configurations perform.

    The idea is this:

    1. Download the following clip from the BMD website

2. Create a 24 fps project with the clip

3. Loop playback and see how many nodes of 100% blur you can add while maintaining 24 fps (realtime) playback.

    4. Post your results together with some details about your system, like this (these are actually my own results):

    System: Win 64 bit
    Resolve version: 8.2.1. lite
    Slot configuration: slot 1: GTX 580 1.5 GB, slot 2: GTX 560 Ti 2 GB.
    Number of blur nodes in realtime: 12
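
For anyone tabulating results, the pass/fail criterion in step 3 boils down to one check. A minimal sketch (the helper name and data format are mine, not part of Resolve):

```python
# Given measured playback fps at each blur-node count, report the highest
# count that still sustains realtime (24 fps) playback.
TARGET_FPS = 24.0

def max_realtime_nodes(measurements):
    """measurements: dict mapping node count -> measured playback fps."""
    realtime = [n for n, fps in measurements.items() if fps >= TARGET_FPS]
    return max(realtime) if realtime else 0

# Example using Timo's numbers from this thread: 11 nodes at 24 fps, 12 at 22.5 fps.
print(max_realtime_nodes({11: 24.0, 12: 22.5}))  # -> 11
```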

    If anyone has suggestions for this test, or if I forgot something important, please tell me…

    I’m curious to see some results!

    Editor/Colorist, Amsterdam, The Netherlands

    Pepijn Klijs replied 11 years, 8 months ago 6 Members · 19 Replies
  • 19 Replies
  • Timo Teravainen

    July 3, 2012 at 8:57 pm


    I tried your test, and got 11 nodes realtime. With 12 nodes I got 22.5 fps.

    My system is Resolve Lite 8.2 b3, Win 64 bit, Asus P6T6 WS mobo, Intel Xeon 6-core 2.5 GHz, 12 GB RAM, GTX 285 (GUI) & GTX 580 1.5 GB, Decklink HD Extreme 3D.


  • Helge Tjelta

    July 4, 2012 at 10:50 am

    System: OSX 10.7.4
    Resolve version: 8.2.1 full, latest beta
    Slot configuration: slot 1 ATI 5570, Cubix with 3 x GTX 570 2.5GB
    Number of blur nodes in realtime: 34

    cheers Helge


  • Pepijn Klijs

    July 4, 2012 at 2:24 pm

    Wow, you created a monster!

    Editor/Colorist, Amsterdam, The Netherlands

  • Timo Teravainen

    July 4, 2012 at 8:20 pm

    It would also be interesting to build some kind of chart of the power-per-dollar ratio of the systems. For example, I put together my PC for about €1,200 (plus the Decklink card and Win 7), and I'm getting all my work done. There has actually never been a situation where it was not realtime (except with RED 4K material, which runs in realtime at quarter res).

    I mean, it’s interesting to know that some systems can handle 30 blur nodes, but I usually use 10 nodes at most, a couple of which may have blurs or other processor-intensive stuff.

  • Pepijn Klijs

    July 4, 2012 at 8:32 pm

    I agree, totally. Although for me this test was about seeing what various configurations can do and how they compare to my own build. More of a pure technical exercise.

    Could be interesting to measure the cost per blur node, or something like that…

    Editor/Colorist, Amsterdam, The Netherlands

  • Helge Tjelta

    July 4, 2012 at 9:02 pm

    Yepp, we have a RedRocket card inside the Cubix box as well, so we get good performance out of 4K, full-quality files… haven’t tested that yet, though…

    But yes, smaller systems need to be tested. This setup was just to show that the cheap 570 can work unmodified in a Mac setup.

    The expensive part here is the Cubix box: around $3,700 at Norwegian prices.
    The GTX 570 cards were about $350 each.
    The Mac was about $6,000 (48 GB RAM, 2×2.66 GHz Mac Pro with an ATI 5870 for the GUI; Macs are quite expensive in Norway).
    That’s about $10,400 total, which gives:

    $305 per blur node…


  • Pepijn Klijs

    July 4, 2012 at 9:47 pm

    Haha, it’s quite a funny way of describing things.

    My system, a PC/Hackintosh with an X79 chipset, i7 3930, 32 GB RAM, GTX 580 and GTX 560 Ti, was around €2,000. So that's €166 per blur node!
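
The cost-per-node arithmetic from these two posts is just total system cost divided by realtime blur-node count. A quick sketch (the helper name is mine; note the two figures use different currencies, so they aren't directly comparable):

```python
# Rough cost-per-blur-node calculation, using the numbers posted in the thread.
def cost_per_node(total_cost, realtime_nodes):
    """Total system cost divided by the number of realtime blur nodes."""
    return total_cost / realtime_nodes

print(int(cost_per_node(10400, 34)))  # Helge's Mac + Cubix rig: ~$305 per node
print(int(cost_per_node(2000, 12)))   # Pepijn's Hackintosh: ~€166 per node
```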

    Editor/Colorist, Amsterdam, The Netherlands

  • Rohit Gupta

    July 5, 2012 at 5:23 am

    Just using a single PC GTX 570 (for both CUDA and the GUI) on a 2010 Mac Pro. Using a Decklink 3D+ for SDI monitoring.

    V9 gives me 11 nodes of blur.

    Please note that V9 has a new option to optimize GPU performance for systems with a single GPU handling both the GUI and CUDA processing.

  • Pepijn Klijs

    July 5, 2012 at 6:14 am

    Hi Rohit,
    A little off topic, but… will V9 have audio monitoring through the internal output of the system, or just through SDI, like in the current version?


    Editor/Colorist, Amsterdam, The Netherlands

  • Rohit Gupta

    July 5, 2012 at 7:16 am

    If you don’t have an SDI card configured, system audio is used. When an SDI card is used, audio goes out through SDI as well.

    Basically, you can’t say “I want video out through SDI and audio out through the internal output,” or vice versa.

