Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

Forums » VEGAS Pro » “New” graphics card

  • Sonic 67

    November 6, 2014 at 9:54 pm

I am curious about that. I predict that you will not get any hardware acceleration on encoding with MainConcept, because they state that Kepler and newer architectures are not supported, which rules out Maxwell as well.
    https://www.mainconcept.com/products/sdks/gpu-acceleration-sdk/cuda-h264avc.html#product_page-5
NVIDIA graphics card with CUDA support (Professional - Tesla, Quadro 4000-series, FX, CX, NVS, QuadroPlex; Consumer - GeForce 8, 9, 100, 200, 400-series GPUs), with a minimum of 256 MB of local graphics memory, or 512 MB for 1920x1080p encoding. CUDA compute capability supported only up to 1.3 (excludes certain GeForce 8800 models - GTS, Ultra). Compute capability 1.0 works in general for encoding, but has known issues. Boards with Kepler architecture are not supported.
In Vegas's case, a Quadro 6000 or a GTX 480/GTX 580 might be the better option, no matter what gaming benchmarks show…
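MainConcept's cutoff is by GPU architecture rather than raw speed. As a rough sketch (the compute-capability-to-architecture mapping is from nVidia's public documentation; the helper names are mine, not anything in MainConcept's SDK), a check like this captures the rule quoted above:

```python
# Illustrative check of whether a GPU falls inside MainConcept's stated
# CUDA support window: architectures up through Fermi, with Kepler and
# newer excluded, per the requirements quoted above.

ARCH_BY_COMPUTE_CAP = {
    # (major, minor) lower bound -> architecture name
    (1, 0): "Tesla",
    (2, 0): "Fermi",
    (3, 0): "Kepler",
    (5, 0): "Maxwell",
}

def architecture(compute_cap):
    """Map a (major, minor) compute capability to an architecture name."""
    arch = None
    for bound, name in sorted(ARCH_BY_COMPUTE_CAP.items()):
        if compute_cap >= bound:
            arch = name
    return arch

def mainconcept_cuda_supported(compute_cap):
    """True if the architecture predates Kepler, per the quoted requirements."""
    return architecture(compute_cap) in ("Tesla", "Fermi")

print(mainconcept_cuda_supported((2, 0)))  # Fermi (GTX 480/580, Quadro 4000): True
print(mainconcept_cuda_supported((5, 2)))  # Maxwell (GTX 980): False
```

On a real system the (major, minor) pair would come from a tool like nVidia's deviceQuery sample; here it is just passed in by hand.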

  • Dave Haynie

    November 7, 2014 at 9:14 am

[Paul McDermott] “Based on what you have said, I think the Quadro 4000 fits these needs. This seems like a good price over here:
https://www.amazon.co.uk/PNY-Quadro-Nvidia-Graphics-GDDR5/dp/B003Y3TPKQ”

The Quadro 4000 is certainly stable. It is, however, an older model; the current one is the K4000… that's Fermi vs. Kepler. I believe the Quadro 4000 would get acceleration under Main Concept AVC; I know the K4000 would not.

I have a Quadro 4000 at work and an AMD HD6970 at home. They're equally stable, in that neither has ever demonstrated any kind of problem. But I'm not chasing beta drivers for either of them. The HD6970 is dramatically faster at GPU-related tasks than the Quadro 4000.

Certainly, one point of the Quadro series is professional features: high-bit-depth color, more stable drivers, and at least a generation's worth more video RAM than consumer models. They get stability in the pro series largely by using versions of older GPU cores, chips already well proven in consumer circles. That's why the pro cards aren't usually all that fast.

They are faster at a few key 64-bit floating-point operations. That's a trick: it's a driver-level limitation only, pretty well proven as such. If you're dealing with 3D CAD (I use a 3D viewer occasionally at work to view PC-board models, but most of what I do doesn't really use a GPU at all), you'll get the benefit. It's also pretty likely that nVidia does the same artificial limiting of CUDA and OpenCL performance in a few key areas, to make their compute-only boards sell better.

As far as playback speed goes, you will benefit from a faster OpenCL board if you're doing compositing or using video plug-ins. That's an area in which consumer boards usually outperform the professional ones. For example, I can pretty much preview the whole “Red Car” demo in real time with OpenCL enabled; not so with just the CPU, even though I have a six-core system. That acceleration isn't subject to the Main Concept limits, of course, just to be clear. And for 3D stuff, like Boris Continuum, you'll benefit from fast OpenGL, which could in theory favor the pro card, but probably doesn't, as I suspect it's mostly if not all 32-bit math.

Now, other than the Main Concept CODEC, Vegas proper will work with any supported OpenCL device. Adobe Premiere has traditionally been far more picky about which GPUs it will use. If you are planning to move to, or add, Adobe's ransomware/subscription Premiere CC, you might as well choose based on that. They have in the past had full support for the Quadro series as well as the higher-end consumer nVidia cards, and more recently, the AMD boards as well.

    -Dave

  • Sonic 67

    November 7, 2014 at 11:36 am

    [Dave Haynie] “Now, other than the Main Concept CODEC, Vegas proper will work with any supported OpenCL device.”

Not true for the MainConcept encoders. OpenCL will work only with specific ATI video cards (the same generation as the Fermi nVidia case, circa 2010: HD 59xx, HD 69xx).
The cards allowed to work with MainConcept are hard-coded in the software; the list cannot be ‘edited’ the way it can in Adobe's case.

    See here the requirements for MC OpenCL: https://www.mainconcept.com/products/sdks/gpu-acceleration-sdk/opencltm-h264avc.html#product_page-5
    And here for MC CUDA: https://www.mainconcept.com/products/sdks/gpu-acceleration-sdk/cuda-h264avc.html#product_page-5

  • Dave Haynie

    November 7, 2014 at 2:15 pm

[Sorin Nicu] “[Dave Haynie] “Now, other than the Main Concept CODEC, Vegas proper will work with any supported OpenCL device.”

Not true for MainConcept encoders. OpenCL will work only with specific ATI video cards (same generation like Fermi-nVidia case, 2010 – HD59xx, HD69xx).
The cards that are allowed to work with MainConcept are hard-coded in their software, it cannot be ‘edited’ like you can in Adobe case.”

Vegas is a big system. Internally, Vegas supports only OpenCL, no CUDA. It works with any OpenCL device, at least with supported versions of OpenCL: AMD GPUs, nVidia GPUs, Intel Phi, even AMD's native x86 OpenCL driver. Vegas uses the GPU, when available, for internal compositing and filter operations. Final rendering does not happen in the core of Vegas; that's handled by a plug-in.

And there are all kinds of plug-ins. Vegas supports the Microsoft standards for video plug-ins, the newer OFX standard, and private interfaces for things like CODECs. Part of those APIs involves Vegas communicating the OpenCL settings you choose in Vegas preferences to the plug-ins. There are a few, like some of the NewBlue stuff, that don't use that and have their own independent GPU setup (CUDA-only, last I checked), but the information is made available to all plug-ins.

Did you see where I wrote “other than the Main Concept CODEC”? CODEC = COder-DECoder. That's the only place where specific GPUs are hard-coded: Main Concept's GPU-accelerated AVC CODEC. Yes, it hard-codes specific AMD cards as allowed to use OpenCL, and specific nVidia cards as allowed to use CUDA. I was one of the first people to figure this all out and report it here in detail. None of these cards are made anymore. Main Concept has had a fairly rough road of acquisitions… it was bought by DivX, which was in turn bought by Rovi, and little to no work has gone into this CODEC since. Thing is, there was never any good technical reason for that hard-wiring. It was a trick to force Main Concept's licensees into re-negotiating for upgraded versions of the CODEC for each new application release. Only now, with Main Concept itself the problem, we're stuck with a CODEC that only supports GPUs from 2011 or so.
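To see why a hard-coded list ages so badly, here is a sketch of the gating Dave describes. The card names below are illustrative, not MainConcept's actual internal list:

```python
# Sketch of a baked-in GPU allow-list: the encoder only enables its
# hardware paths for device names compiled into the binary, so any card
# released after that build silently falls back to CPU encoding.

CUDA_ALLOWED = {"Quadro 4000", "GeForce GTX 480", "GeForce GTX 580"}
OPENCL_ALLOWED = {"AMD Radeon HD 5970", "AMD Radeon HD 6970"}

def gpu_encode_mode(device_name):
    """Pick an acceleration path exactly the way a hard-coded list would."""
    if device_name in CUDA_ALLOWED:
        return "cuda"
    if device_name in OPENCL_ALLOWED:
        return "opencl"
    return "cpu"   # anything newer is not on the list: software encoding

print(gpu_encode_mode("Quadro 4000"))     # "cuda"
print(gpu_encode_mode("GeForce GTX 980")) # "cpu" - a 2014 Maxwell card is unknown to it
```

Contrast this with Adobe's approach of shipping the list as an editable text file, which is why users could add new cards to Premiere themselves.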

This is a big problem for Sony, not to mention the looming need for them to support HEVC and VP9. Sony should really write their own, or allow for an open CODEC plug-in interface so that it's easy to add x264 or other open-source CODECs within Vegas… a bit nicer than rendering out to a high-quality format and feeding that to Handbrake or TMPGEnc. Alternately, a built-in frameserver API would let any external encoder be driven from Vegas… personally, I like that idea.
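The frameserver idea is simple at its core: the NLE hands decoded frames to an external encoder over a pipe instead of going through a built-in CODEC plug-in. A minimal sketch, assuming raw YUV 4:2:0 frames and a standard x264 command line (the frame source here is a stand-in, not anything Vegas exposes):

```python
# Sketch of a frameserver front-end: build an x264 command that reads
# raw frames on stdin, then stream frames into it through a pipe.
import subprocess

def x264_command(width, height, fps, outfile):
    """Command line for x264 reading raw YUV 4:2:0 frames on stdin."""
    return [
        "x264",
        "--demuxer", "raw",
        "--input-res", "{}x{}".format(width, height),
        "--fps", str(fps),
        "-o", outfile,
        "-",                      # read frames from stdin
    ]

def serve_frames(frames, width, height, fps, outfile):
    """Pipe an iterable of raw frame byte strings into x264."""
    proc = subprocess.Popen(x264_command(width, height, fps, outfile),
                            stdin=subprocess.PIPE)
    for frame in frames:
        proc.stdin.write(frame)
    proc.stdin.close()
    return proc.wait()

print(x264_command(1920, 1080, 25, "out.mp4"))
```

Anything that can emit raw frames could drive this, which is exactly why an open frameserver API would decouple Vegas from any one CODEC vendor.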

    -Dave

  • Sonic 67

    November 7, 2014 at 3:54 pm

Sorry, I meant to refer strictly to the MainConcept statement; I read it wrong (didn't see the ‘other than’ part). I know that the rest uses a mix of OpenCL and CUDA (for some effects).
nVidia's NVENC already supports up to 4K encoding, with higher quality available on second-generation Maxwell cards.
And NVENC will support H.265 encoding in the near future.

    Cyberlink already released a patch that makes use of nvenc, I wonder why Sony doesn’t go that route.
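For what driving NVENC looks like in practice, here is a sketch using ffmpeg's encoder names. Note `h264_nvenc`/`hevc_nvenc` are the names in current ffmpeg builds; 2014-era builds used different names, so treat this as illustrative:

```python
# Sketch of an NVENC hardware encode driven through ffmpeg's CLI,
# the route most tools ultimately took for NVENC support.

def nvenc_args(infile, outfile, codec="h264", bitrate="20M"):
    """Build ffmpeg arguments for an NVENC hardware encode."""
    encoder = {"h264": "h264_nvenc", "hevc": "hevc_nvenc"}[codec]
    return ["ffmpeg",
            "-i", infile,
            "-c:v", encoder,     # hand the video stream to the GPU encoder
            "-b:v", bitrate,
            outfile]

print(" ".join(nvenc_args("master.mov", "out.mp4")))
print(" ".join(nvenc_args("master.mov", "out4k.mp4", codec="hevc")))
```

The same argument list works for 4K sources; the codec switch is where second-generation Maxwell's HEVC support would come in.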

  • Malcolm Matusky

    November 12, 2014 at 6:22 pm

Just installed the GTX 980 today (EVGA brand). I did notice a bump in timeline playback in Resolve 11 and Vegas 13, though not a lot!

No rendering tests yet, as I just got it configured; fortunately, Resolve and Vegas saw it right away. Not having two identical cards in Resolve is limiting, as one of the shared compute options is now gone. Both my Quadro 4000s used to share compute acceleration; now only the remaining Q4K does, as it's set up for my main display. The GTX 980 is listed, but I no longer have the option of using both cards for that one operation.

Vegas shows the GTX 980; nice to have a bit better timeline playback. I'll compare rendering between the two cards later.

    Malcolm
    http://www.malcolmproductions.com

  • Sonic 67

    November 12, 2014 at 6:47 pm

Now that is weird. What drivers did you install on the system, since one card would require Quadro drivers and the other GeForce ones?
In my experience, Quadro cards will work with GeForce drivers (but at lower performance).

Both my ‘Fermi-based’ cards share the encoding process, even if unequally (because they are not even close: a Quadro 6000 and a Quadro 600).

  • Malcolm Matusky

    November 13, 2014 at 6:13 pm

I left the Quadro driver alone and installed the latest driver for the GTX 980 from nVidia's site; it seems to be working well so far. Eventually I'll get a second GTX 980, since having matched cards works better for Resolve.

    Malcolm
    http://www.malcolmproductions.com

  • Roman Beilharz

    November 17, 2014 at 10:17 am

    Hey Malcolm, hey everyone,

I followed this thread with great interest, since I was also curious to give a Maxwell nVidia card (I think a GTX 970 would do) a whirl in my system (Core i7 860 / 8 GB).

My main goal is not speeding up render times, though, as I spend most of my time editing and don't mind overnight renders once a project is done. By the way, rendering “CPU only” – if possible in a 2-pass fashion – has always given me better results than using the GPU for encoding anyway. I have seen everything from dropped green frames to strange artifacts to shifting colors in GPU renders, so don't you think this feature is overrated?!

What I am after is fluid real-time playback on typical color-corrected video tracks with GPU FX (e.g. Levels, 3-Way CC, Secondary CC, Bump Map/Soft Contrast, Sharpen). Besides, the faster all Event/Track/Bus FX are computed, the faster even a “CPU-only” render will finish, right?

@Malcolm: What is your impression of the timeline-playback improvements using Sony GPU plug-ins with the new card? Any glitches? Does it run stably?

Another thing: I am thinking about replacing my current GTX 570 / GT 9400 combo, which drives a 2D quad-monitor setup, with a single GTX 970. The specs say 4 monitors can be driven simultaneously, but what about resolutions/modes? Can we have, say, 2x 1920x1200 plus 2x 1920x1080 as 4 separate desktops (primary display plus 3 extensions; not stretched) from a single card?

Have you tried hooking up 3 or 4 monitors? If so, how much do power consumption and temperatures go up for pure Windows multi-desktop work?

    Thanks a lot

    Roman

    https://www.uvasonar.com

  • Aaron Christiansen

    November 17, 2014 at 10:59 pm

    [That’s pretty much the kind of thing I’ve seen with the HD6970 as well. And again, in my experience with the recent AMD drivers, they are rock solid. Maybe that became a mission after AMD bought ATi, not sure, but it’s really a non-issue these days. ]

Just curious: if 1x HD 6970 is good, is 2x HD 6970 better?

Or is the GTX 580 the way to go? And would 2x GTX 580s be better?

    Thanks!

