Creative Communities of the World Forums

  • CUDA Test Comparisons in CS6 and Adobe Media Encoder

    Posted by Al Jensen on April 18, 2014 at 2:34 am

    Hey folks,

    I was having trouble with my CPU maxing out at 100%, leaving me unable to do anything else, and one of the things I thought might help was upgrading my video card. There’s a lot of misinformation about whether Adobe Media Encoder takes advantage of the GPU acceleration provided by the Mercury Playback Engine when rendering, so I decided to do my own before-and-after tests.

    First off, here’s my current setup:

    Windows 7 64-bit, i7-3770K 3.5 GHz quad-core, 16 GB RAM, SSD C: drive, RAID data drive.

    I went from the GeForce 210 to the GeForce GTX 750 Ti. Here’s the card comparison:

    GeForce 210
    RAM: 1 GB, 64-bit DDR3
    Core Clock: 520 MHz
    CUDA Cores: 16

    GeForce GTX 750 Ti
    RAM: 2 GB, 128-bit GDDR5
    Core Clock: 1072 MHz
    CUDA Cores: 640

    In order to enable CUDA on either card, I needed to modify the list of approved video cards; instructions for that can be found here:

    https://www.studio1productions.com/Articles/AfterEffects.htm
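
    For reference, here’s a minimal sketch of the whitelist tweak that article describes. The install path and the card string below are my assumptions, not from the article itself; GPUSniffer.exe (in the same folder) reports the exact card name to use, and editing the file needs admin rights:

    ```python
    # Hypothetical sketch: append the card name to Premiere Pro CS6's CUDA whitelist.
    # Path and card string are assumptions -- match the name GPUSniffer.exe reports.
    from pathlib import Path

    whitelist = Path(r"C:\Program Files\Adobe\Adobe Premiere Pro CS6\cuda_supported_cards.txt")
    card = "GeForce GTX 750 Ti"

    entries = whitelist.read_text().splitlines()
    if card not in entries:
        entries.append(card)
        whitelist.write_text("\n".join(entries) + "\n")
    ```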

    FWIW, according to that article, the memory interface width (64-bit vs. 128-bit) may matter more than winning the CUDA-core battle.
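
    If so, the bandwidth gap between these two cards is much larger than the clock or core counts alone suggest. A rough comparison, using ballpark published transfer rates (my assumptions, not figures from this thread):

    ```python
    # Peak memory bandwidth in GB/s = (bus width in bytes) * (mega-transfers/s) / 1000.
    # Effective rates below are approximate published specs, not measured values.
    def bandwidth_gb_s(bus_width_bits: int, effective_mt_s: float) -> float:
        return (bus_width_bits / 8) * effective_mt_s / 1000

    print(f"GeForce 210: {bandwidth_gb_s(64, 1000):.1f} GB/s")   # ~8 GB/s
    print(f"GTX 750 Ti:  {bandwidth_gb_s(128, 5400):.1f} GB/s")  # ~86 GB/s
    ```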

    On to the tests…

    I ran three different tests. The first two were static 1080i/29.97 interviews with intro/outro graphics, plus Levels/Contrast/Saturation and Red Giant’s Misfire Vignette. There’s also a logo graphic in the corner for the entire clip.

    ————————————————————
    Render Test 1 (Static Interview):

    Length: 2:20
    Attrib: 1080i/29.97
    Filters: L/C/S/V + Graphics
    GeForce 210 Encoding Time: 15:32
    GeForce GTX 750 Ti Encoding Time: 12:44 — approx. 40% CPU usage
    Control 750 w/ MPE set to Software: 15:38 — approx. 65% CPU usage
    ————————————————————

    ————————————————————
    Render Test 2 (Static Interview):
    Length: 3:06
    Attrib: 1080i/29.97
    Filters: L/C/S/V + Graphics
    GeForce 210 Encoding Time: 20:52
    GeForce GTX 750 Ti Encoding Time: 17:05
    Control 750 w/ MPE set to Software: Skipped
    ————————————————————

    Render time was roughly 18–20% faster with the new card and CUDA enabled, and processor usage dropped from about 65% to roughly 40%. Disabling CUDA by setting the Mercury Playback Engine to Software Only produced essentially the same time as my old card. I didn’t bother running the control for the second interview since the first was conclusive enough.

    The other test I ran was on some 720p/29.97 footage that caused my original 100% CPU problem. It contains a lot of speeding up/slowing down of the footage, and the last two thirds have a title graphic overlaid in addition to the logo, plus frame blending. The entire sequence has Levels + Contrast + Misfire Vignette.

    ————————————————————
    Render Test 3 (Action with Additional Complexity):
    Length: 1:48
    Attrib: 720p/29.97
    Filters: L/C/V + Graphics + Time-shifting/Frame Blending
    GeForce 210 Encoding Time: 8:55
    GeForce GTX 750 Ti Encoding Time: 5:33 — approx. 45% CPU for the first 1/3, 100% for the last 2/3 (which is more complex)
    Control 750 w/ MPE set to Software: 8:30 — approx. 90% CPU for the first 1/3, 100% for the last 2/3
    ————————————————————

    In this test you can see my render time dropped significantly (by nearly 40%), and my CPU usage for the first third was cut in half, from 90% to 45%… however, the last two thirds still max out my CPU at 100%. Setting MPE back to Software Only put the time on par with my old card.
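
    For reference, a quick check of those percentages against the listed times (a throwaway sketch; the times come from the tables above):

    ```python
    # Verify the quoted speedups from the GeForce 210 vs. GTX 750 Ti encoding times.
    def seconds(mmss: str) -> int:
        m, s = mmss.split(":")
        return int(m) * 60 + int(s)

    tests = {
        "Test 1 (interview)": ("15:32", "12:44"),
        "Test 2 (interview)": ("20:52", "17:05"),
        "Test 3 (action)":    ("8:55", "5:33"),
    }
    for name, (gf210, gtx750) in tests.items():
        saved = 1 - seconds(gtx750) / seconds(gf210)
        print(f"{name}: {saved:.0%} faster")  # ~18%, ~18%, ~38%
    ```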

    Conclusion: Well, upgrading my card didn’t solve my 100% CPU problem for the speed-change/frame-blend stuff, but it looks like it’s going to improve my overall render time and drop my CPU usage for the majority of what I deal with. It also seems pretty clear that AME does take advantage of the CUDA cores, at least for the basic filters/graphics work I was doing.

    Hope you guys find this informative, if a bit long 🙂

  • Tim Kolb

    April 18, 2014 at 12:18 pm

    AME CS6 uses CUDA when you’re rendering from a PPro sequence, the same way PPro does. In CC, it also uses CUDA for transcodes that don’t involve PPro.

    AFAIK, Red Giant software uses OpenGL for pixel-draw acceleration but is primarily CPU-based. Adobe can’t change a third party’s code.

    Keep in mind that for an actual test of GPU acceleration versus CPU alone, you would need to check the “max render quality” box when using the CPU, since it’s effectively always on when GPU acceleration is enabled.

    (You can keep it checked for both tests if you question whether this rings true; you’ll likely find no change in the CUDA render times… but the CPU times will become glacial.)

    TimK,
    Director, Consultant
    Kolb Productions,

    Adobe Certified Instructor

  • Al Jensen

    April 18, 2014 at 6:16 pm

    FWIW, I always render at maximum depth. I was told in the past that if you don’t, graphics won’t look nearly as good.

  • Tim Kolb

    April 18, 2014 at 6:53 pm

    Using Max Depth is fine, but Max Render Quality is a different setting… it’s toward the bottom of the export dialog.

    TimK,
    Director, Consultant
    Kolb Productions,

    Adobe Certified Instructor

  • Al Jensen

    April 18, 2014 at 9:35 pm

    I had Maximum Render Quality checked as well. Don’t I always want the best quality? Is your thinking that if I unchecked it, my CPU usage/encoding time would drop significantly? Won’t the quality suffer?

  • Al Jensen

    April 18, 2014 at 10:02 pm

    I take it back: I had Maximum Render Quality checked, but for some reason did NOT have Maximum Depth checked. That was the same for all of the tests.

  • Cato Fiandesio

    July 13, 2014 at 3:32 am

    Can the CUDA cores of the GTX 750 Ti be enabled for After Effects?
