Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

Vegas Pro 10D to offer better CUDA support?

  • Mike Kujbida

    April 25, 2011 at 2:01 pm

    10d will be out sometime this week so you’ll have your answer very soon.

  • Stephen Mann

    April 25, 2011 at 5:38 pm

    The more I think about how Vegas’s non-destructive editing works, the more certain I am that the GPU would never be used for preview. Or ever could be, for that matter.

    Vegas does not edit directly on the video media like Premiere, FCP or Avid. Instead, everything you do on the timeline is in the veg file (or in memory, if you haven’t saved the veg file). During preview or rendering, Vegas applies the instructions in the veg file to each frame of the media on the timeline and outputs the result to the rendered video or preview device. You never touch the original media. Vegas does this one frame at a time. To get anything to the GPU, Vegas has to first assemble the frame from the instructions in the veg file. There’s just nothing left for a GPU to do.
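
    A minimal C++ sketch of that per-frame model (hypothetical names, not Vegas internals):

    #include <vector>

    // The veg file is a recipe, not media: every output frame is rebuilt
    // from the instruction list, and the source files are never modified.
    struct Frame {};
    struct Instruction {};  // an effect, transition, composite op, ...

    Frame applyInstructions(const std::vector<Instruction>& veg, int frameIndex) {
        return Frame{};  // decode sources, apply ops, composite -- per frame
    }

    void render(const std::vector<Instruction>& veg, int frameCount) {
        for (int i = 0; i < frameCount; ++i) {
            Frame f = applyInstructions(veg, i);  // one frame at a time
            // hand f to the preview device or the output encoder here
        }
    }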

    I wonder if the other editing programs use the GPU for preview because they have to? I really don’t think that GPU preview is the panacea people think it may be.

    This is conjecture on my part based on my experience as a programmer in a former life, but no one with intimate knowledge of the program’s internals has ever said anything to the contrary.

    Steve Mann
    MannMade Digital Video
    http://www.mmdv.com

  • David Shirey

    April 25, 2011 at 6:29 pm

    Well we have CS5 Premiere with nvidia 470 gtx cards at the office and the preview sure is nice being gpu accelerated. I stick with Vegas because I love the interface but the limited preview is definitely my least favorite feature. On the other hand Premiere has a lot of convoluted ways it does other things that I’m sure glad I don’t have to deal with.

    One reason it’d be nice to have the option is that I’m sure some people out there still have core2duo systems and it’d be nice for them to just drop a new video card in their system for a performance boost rather than having to build a whole new pc.

  • Stephen Mann

    April 25, 2011 at 7:22 pm

    It’s been many years since I’ve used Premiere. Can you click “play” and preview the whole video in Premiere?

    Steve Mann
    MannMade Digital Video
    http://www.mmdv.com

  • Dave Lozinski

    April 26, 2011 at 1:42 am

    As David mentions in a follow-up response, I think that for the majority of Vegas users, viewing the “preview” is where most of the time is spent.

    When there’s several video layers with complex effects and transitions that we have to keep previewing to make sure everything lines up just right, it would be nice if Vegas would take advantage of the GPU so there’s barely any noticeable slowdown in showing us the preview.

    Otherwise, having to constantly “selectively render” sections of the video and/or generating “dynamic ram” previews takes up so much more time.

    So my take is: if Vegas has to compile those instructions together to “render” the frames out to a file, and it can pass that work off to the GPU, then they should be able to program the preview to do the same.

    Extremely simplified pseudocode below: 🙂


    if (going_through_render_process) {
        // Build the frame on whichever hardware is available...
        if (isCudaEnabled) {
            currentProcessedFrame = renderFrameUsingCuda();
        } else {
            currentProcessedFrame = renderFrameUsingCPU();
        }
        // ...then route it to a file or to the preview window.
        if (isRenderingToFile) {
            renderToFile(currentProcessedFrame);
        } else {
            renderToPreviewScreen(currentProcessedFrame);
        }
    }

    Like I said, extremely simplified, but should be doable.

    PS: I like the play on names with yours and the name of your company. 🙂

    —————————————–
    https://www.davelozinski.com
    https://www.davelozinski.com/DemoReel/
    —————————————–

  • Stephen Mann

    April 26, 2011 at 2:56 am

    Thanks on the name.

    Remember that Vegas started as an audio editor. (I believe that Video was added in Version 3.) There was no GPU in the picture. No graphics, no GPU. Vegas was designed from the beginning to be hardware independent – it would run on any Windows PC and there were no special hardware requirements. It was this framework that Vegas developers were working with when they introduced video. It was an evolutionary addition to the audio product.

    If Sony tried to rework Vegas to use the GPU for preview, it simply wouldn’t be Vegas any more. And we probably couldn’t afford it.

    As I said, it’s all conjecture and I would welcome comment from anyone who knows the inside workings of Vegas Pro.

    But, back to my earlier question. In FCP, Premiere or Avid, can you simply hit “play” and preview the whole video?

    Steve Mann
    MannMade Digital Video
    http://www.mmdv.com

  • Dave Haynie

    April 26, 2011 at 5:33 am

    [Stephen Mann] “The more I think about how Vegas’s non-destructive editing works, the more certain I am that the GPU would never be used for preview. Or ever could be, for that matter.”

    As an Engineer with a collective 38 (ouch!) years of software development experience (hobby and pro, both), I would have to disagree with that assertion.

    [Stephen Mann] “Vegas does not edit directly on the video media like Premiere, FCP or Avid. Instead, everything you do on the timeline is in the veg file (or in memory, if you haven’t saved the veg file). During preview or rendering, Vegas applies the instructions in the veg file to each frame of the media on the timeline and outputs the result to the rendered video or preview device. You never touch the original media.”

    None of those programs routinely do destructive editing.

    Here’s how accelerated rendering in Vegas would work. Conceptually, today, you have CPU-driven rendering. You have N media files in encoded form. Vegas has to read in the encoded format, via some CODEC or another, and decode it into a frame buffer. Lather, rinse, repeat for each video or still in a composite. Next, each of these frame buffers is composited, via the various rules you’ve set up (compositing parents and children, masks, etc.). Once that’s all done, whether for preview or output, that final video is submitted to the output device: a real graphics card frame buffer, an AVC or MPEG encoder, etc.

    Now, sure, there may be some shortcuts to speed things up for preview, but that’s the job that absolutely has to be done, regardless of the various simple details.
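
    As a rough C++ sketch of that per-frame pipeline (hypothetical names, not Vegas code):

    #include <vector>

    struct FrameBuffer {};  // one raw frame of pixels

    // Stage 1: a CODEC turns encoded media into a raw frame (stubbed here).
    FrameBuffer decodeFrame(int source, int frame) { return {}; }
    // Stage 2: composite all layers via the project rules (masks, parents, ...).
    FrameBuffer composite(const std::vector<FrameBuffer>& layers) { return {}; }
    // Stage 3: hand the result to the preview device or an encoder.
    void submit(const FrameBuffer&) {}

    void renderOneFrame(int numSources, int frame) {
        std::vector<FrameBuffer> layers;
        for (int s = 0; s < numSources; ++s)
            layers.push_back(decodeFrame(s, frame));  // lather, rinse, repeat
        submit(composite(layers));  // same job whether previewing or rendering
    }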

    Now, consider just the decoding phase. Right now, that’s done with my CPU. I know from benchmarking video playback that, for example, my AMD x6 will take about 40-60% of all six CPUs to deliver a realtime decode, into the GPU frame buffer, of a 1080/60p video in TM700 28Mb/s AVC. If I use a decoder that uses a graphics card and the DXVA 2.0 video acceleration API, I’m now down to about 6% CPU (given my fairly ancient nVidia GeForce GT8800 card).

    It’s a given that the decode is path-independent… whether the CPUs do it, the GPU helps, or I get a card with magic AVC elves on it, the decoder is a black box: AVC in, raw frame buffer out. Now, the final format you get via DXVA may not be what you want in Vegas, so we might not expect to see that degree of speedup using GPGPU code (CUDA, OpenCL, Streams, whatever) versus the dedicated video acceleration API in Windows. But if you can write a GPGPU program to do even just that one operation, encoded video to raw frame buffer in RAM, the video preview AND rendering will go faster.
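
    To make the black-box point concrete, a hedged C++ sketch (the interface and class names are invented for illustration):

    #include <cstdint>
    #include <memory>
    #include <vector>

    struct RawFrame { std::vector<std::uint8_t> pixels; };

    // AVC in, raw frame buffer out -- the rest of the render path never
    // needs to know which implementation did the work.
    struct AvcDecoder {
        virtual ~AvcDecoder() = default;
        virtual RawFrame decodeNextFrame() = 0;
    };

    struct CpuAvcDecoder : AvcDecoder {
        RawFrame decodeNextFrame() override { return {}; }  // 40-60% of six cores
    };
    struct GpuAvcDecoder : AvcDecoder {  // e.g. backed by DXVA 2.0 or a GPGPU kernel
        RawFrame decodeNextFrame() override { return {}; }  // ~6% CPU
    };

    std::unique_ptr<AvcDecoder> makeDecoder(bool gpuAvailable) {
        if (gpuAvailable) return std::make_unique<GpuAvcDecoder>();
        return std::make_unique<CpuAvcDecoder>();  // identical output either way
    }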

    In fact, AVC decoding can be a fairly big part of the rendering time, even for simple operations. Even a simple render from Vegas to any kind of output will be demonstrably slower if the source is AVC rather than MPEG, Cineform, or any other lower-overhead format with a well-tuned CODEC (I was going to say DNxHD in there too, but I think it needs some performance tweaking).

    [Stephen Mann] “Vegas does this one frame at a time. To get anything to the GPU, Vegas has to first assemble the frame from the instructions in the veg file. There’s just nothing left for a GPU to do.”

    Again, no way, Jose! Vegas is indeed interpreting the instructions (it’s running instructions in memory, not stored in a .veg file… these are loaded into RAM when you open a .veg file… but the idea is correct). This is a set of higher-level operations. A veg file does not contain, for example, a separate composite instruction for each frame… Vegas just has a compositing engine that works from a pointer within the project, and does a frame at a time. The work the GPU would do would be helping to accelerate each of those tasks. In short, it would be doing exactly the same functions the CPU does now, only in parallel with the CPU and, hopefully, much faster than the CPU can do them.
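
    In the same hypothetical C++ terms as above, the compositing engine calls one function, and only the backend doing the arithmetic changes (names invented):

    struct FrameBuffer {};

    FrameBuffer compositeOnCpu(const FrameBuffer& a, const FrameBuffer& b) { return {}; }
    FrameBuffer compositeOnGpu(const FrameBuffer& a, const FrameBuffer& b) { return {}; }  // GPGPU kernel

    // Exactly the function the CPU performs today, optionally offloaded.
    FrameBuffer composite(const FrameBuffer& a, const FrameBuffer& b, bool useGpu) {
        return useGpu ? compositeOnGpu(a, b) : compositeOnCpu(a, b);
    }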

    -Dave

  • Dave Haynie

    April 26, 2011 at 5:52 am

    [Stephen Mann] “Remember that Vegas started as an audio editor. (I believe that Video was added in Version 3.)”

    Actually, Version 2 added video. I started with Vegas Pro (the original audio-only version), and have owned most versions since then.

    [Stephen Mann] “There was no GPU in the picture. No graphics, no GPU. Vegas was designed from the beginning to be hardware independent – it would run on any Windows PC and there were no special hardware requirements. It was this framework that Vegas developers were working with when they introduced video. It was an evolutionary addition to the audio product.”

    You’ll also notice that original Vegas knew not a thing about multiple processors, SIMD instructions (SSE, etc), or the x86-64 instruction set. Yet, it evolved to use each of these resources just dandy, and without most users really paying much attention to that fact.

    Vegas is not hardware independent — it’s dependent on an x86 processor of some kind. But even that’s pretty flexible. GPGPU (General Purpose GPU) computing is actually far less hardware dependent than that. With an API like OpenCL, you use a GPU if one exists, and the API (in theory, anyway) isolates you from the low-level details of the specific GPU. So add more processing cores, go from nVidia to ATi, etc., and things just work. Or at least they will once OpenCL is stable. It may be… it’s been a bit behind CUDA (nVidia’s proprietary GPGPU API) and Streams (ATi/AMD’s proprietary GPGPU API).
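
    For a concrete taste, here is a minimal sketch against the real OpenCL C API: probe each platform for a GPU device and fall back to a CPU device if none is present (error handling trimmed):

    #include <CL/cl.h>

    cl_device_id pickDevice() {
        cl_platform_id platforms[8];
        cl_uint count = 0;
        clGetPlatformIDs(8, platforms, &count);

        cl_device_id dev = nullptr;
        for (cl_uint i = 0; i < count; ++i)  // nVidia, ATi, whatever: same code path
            if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 1, &dev, nullptr) == CL_SUCCESS)
                return dev;
        for (cl_uint i = 0; i < count; ++i)  // no GPU: run the kernels on the x86 cores
            if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_CPU, 1, &dev, nullptr) == CL_SUCCESS)
                return dev;
        return nullptr;
    }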

    We’ve seen this all before… in the OpenGL library. Vegas doesn’t use it much, maybe in a couple of 3D transitions. BCC uses it quite a bit. OpenGL will actually run on a very basic PC today, but when you add a good GPU, things get dramatically faster. Still very hardware dependent.

    And these are OS-level things. I think the point of Vegas was never to ignore the better resources of evolving PCs, but simply to not require special hardware in order to function well. In fact, Vegas has supported custom hardware… I could use my DV camcorder as a display, driven directly by Vegas, long ago.

    [Stephen Mann] “If Sony tried to rework Vegas to use the GPU for preview, it simply wouldn’t be Vegas any more. And we probably couldn’t afford it.”

    In what possible way would it be different? Did the fact that Vegas went multiprocessor make it “not Vegas anymore” or unaffordable? Didn’t seem to, IMHO. GPU support is nothing more than yet another computing engine to tap.

    And in fact, I don’t believe Vegas can survive in the long run without using the GPU. The era of GPGPU computing is upon us. Many people want to use those 400-800 stream processors they paid for, in addition to the handful of x86 processors. If Vegas doesn’t support this, people will ultimately leave for NLEs, like Premiere, that do.

    And on a personal level, I don’t like seeing Premiere ahead of Vegas on anything. I used Premiere 6.something back in the dark old days, and it was heinous. You had to convert any form of video into the Premiere internal format before editing, and it had all the audio and video tracks you wanted, as long as that was 3 and 2, respectively (or something like that). Audio was very much somewhere between a second-class citizen and an afterthought.

    Once Vegas grew video features, it was like stepping out of the stone age into the modern age. I could just drop a DV capture into Vegas and it worked, with no issues with conversion, or the fact that Premiere didn’t support any DirectShow CODECs (only VfW). And yeah, while I certainly realize Premiere has improved, and they got to toss out a great deal of heinous stuff by going from “Premiere” to “Premiere Pro” (it’s always a bit of a risk to change your UI too much, even when it needs it), it’s clear that, in many areas today, Premiere Pro is ahead of Vegas. And runtime rendering speed is a biggie.

    -Dave

  • Stephen Mann

    April 26, 2011 at 2:35 pm

    [i]“You’ll also notice that original Vegas knew not a thing about multiple processors, SIMD instructions (SSE, etc), or the x86-64 instruction set. Yet, it evolved to use each of these resources just dandy, and without most users really paying much attention to that fact.”[/i]

    I remember when we (programmers) made the evolutionary steps from 8-bit to 16, then 32. Somewhere along the way we picked up multiprocessing in the O/S. In every step we were introduced to more instruction sets and system calls. But, it was evolutionary. Our programs required little rewrite to adopt the new technology and our underlying function remained largely unchanged.

    The CUDA API is not evolutionary, as it requires a significant rewrite of how the video is processed – an underlying function of the program. And, as you point out, the competing GPGPU platforms have different, proprietary APIs.

    Given that non-use of the GPU is probably the number one slam that Vegas-haters and some adherents use to denigrate Vegas, if it were as easy as you suggest, don’t you think that Sony would have done it by now?

    Steve Mann
    MannMade Digital Video
    http://www.mmdv.com

  • Matt Crowley

    April 26, 2011 at 7:08 pm

    Switching to GPU-accelerated video editing means a major rewrite of lots of things… As Dave points out, just the decoding of AVC, MPEG, etc. video files to raw frame data is a large part of the editing CPU workload. There are some GPU-accelerated decoders, but they tend to output directly to the video card frame buffer as part of a media player. Until we get generic CODECs that use the GPU, there’s probably not a lot Sony/Vegas can do, except write their own proprietary CODECs at some expense. They’ve got the AVC encoder, but that’s about it.

    Then you have to port the effects library to the GPU – things like Gaussian blur take a fair bit of CPU power, but the GPU is designed to plough through them. Even simple crossfades and colour correction are a breeze for the GPU and free up the CPU for sequencing all these tasks and shuffling the data around.
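
    To illustrate why these effects suit the GPU, here is a toy C++ crossfade (illustrative only, not Vegas’s effect code): each output pixel is one independent multiply-add, exactly the kind of work a GPU runs thousands-wide in parallel while a CPU walks it one pixel at a time.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Blend two frames: out = a*(1-t) + b*t, per pixel. No pixel depends
    // on any other, which is why a GPU ploughs straight through it.
    void crossfade(const std::vector<std::uint8_t>& a, const std::vector<std::uint8_t>& b,
                   std::vector<std::uint8_t>& out, float t) {
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] = static_cast<std::uint8_t>(a[i] * (1.0f - t) + b[i] * t + 0.5f);
    }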

    I don’t know if GPUs are so good at ENcoding video – that’s a lot more involved than decoding, because there are many decisions to make and various methods for encoding the different parts of a frame. GPUs are better at large amounts of simple processing than at convoluted tasks.

