Graphics Card Benchmark for Sony Vegas Pro 13 or 12?
-
John Rofrano
October 21, 2014 at 4:36 am
The problem is that the newer NVIDIA cards (600/700 series) use a new architecture, and Vegas Pro doesn’t take advantage of all their processing power, which causes them to actually be slower than the 400/500 series cards. I would recommend you upgrade your 200 series card to a 500 series card (e.g., a 570). That will give you faster rendering.
~jr
http://www.johnrofrano.com
http://www.vasst.com
-
Quincy Berry
October 21, 2014 at 5:43 am
Thank you for your response. I will be testing a 580 that my buddy has for sale. That should be OK since it’s a 5 series, I assume… he has 2 of them for sale. Does it make a difference if I ran both? Would it be even faster at rendering than the one card?
-
Dave Haynie
October 21, 2014 at 9:14 am
Ok… there are a couple of things at work here. You probably won’t like any of them.
1) It’s CUDA, not KUDA. Stands for Compute Unified Device Architecture. If you’re German, I’ll give you a pass on this one. But nVidia calls it CUDA. Use the right name so you don’t sound ignorant.
2) Your CPU. AMD FX 8320. That’s actually a very good value these days, so you’re probably getting more CPU per dollar than most folks. You’re seeing a CPU Passmark of 8079. My six core Intel i7-3930K delivers 12135 on the same benchmark. But it’ll actually do better on video rendering. My old CPU, an AMD Phenom 1090T, did 5703 on the Passmark, on six cores. The problem with the new AMD architecture is that you don’t have an actual 8-core processor, you have a processor composed of four “compute modules”. Each compute module contains two integer processors and one floating point unit that’s shared. They also have individual L1 caches and a shared L2 cache. The idea is that you’ll get better performance than a single core, which is true… but less than two individual cores, also true. Yours is a Piledriver chip, at least, which did make a bunch of improvements over the Bulldozer architecture — an 8-core Bulldozer wasn’t necessarily any faster than the six core 1090T.
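If you want to see this effect yourself, here’s a rough sketch of mine (Python, nothing official) that times a floating-point-heavy job at increasing worker counts. With perfect scaling the wall-clock time stays flat as workers are added; on a 4-module FX chip it tends to climb past 4 workers because each pair of integer cores shares one FPU:

    # Each worker does the same fixed amount of FP math, so with ideal
    # scaling the wall-clock time stays constant as workers increase.
    # On a 4-module FX chip, expect it to climb past 4 workers.
    import time
    from multiprocessing import Pool

    def fp_work(_):
        x = 0.0
        for i in range(10_000_000):
            x += i * 1.000001  # floating-point work, exercises the FPU
        return x

    if __name__ == "__main__":
        for n in (1, 2, 4, 8):
            t0 = time.time()
            with Pool(n) as p:
                p.map(fp_work, range(n))
            print(f"{n} workers: {time.time() - t0:.2f}s")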
So in reality, I can render some stuff faster than realtime on my faster system. Other stuff, not so much. It’s dependent on the job. Even if you add little things, like level or color adjustments to a video, you’ll add what can be very noticeable overhead. And when you do a just-plain render of video to output, there’s not much help that Vegas’ built-in GPGPU stuff can give you (see #3).
So should you have gone Intel? It’s a price-performance thing. Your system was probably pretty close to what I have long described as the “knee of the commodity curve”. If you’re looking for value, once an item has become a commodity, you basically get at least twice as “much” for twice the money… so you spent on an 8-core CPU, and you more or less get 4x the performance of a 2-core CPU (one compute module) from the same architecture. Thing is, as you pass that “knee”, you start having to pay exponentially more money for increased performance. So for example, my CPU cost $500 new in 2013. It’s certainly not twice as fast as yours, but I probably paid several times as much. And more for the motherboard, since it’s a rarer part. You have to be the judge of your cash vs. performance needs, but overall, I can’t say you made the wrong decision without knowing more. Are you under very strict timelines for delivering the final video product? In that case, and particularly when there’s pay involved, you want the faster CPU despite the cost. If it’s more of a casual thing, you did well saving the money.
3) The Vegas Architecture. Vegas is actually a collection of plug-ins plus the main program. When you render a video, you are setting parameters for Vegas itself, and for the particular CODEC that’s doing the rendering. If you find a setting in the CODEC for “CUDA” (and you know that’s the right spelling if you set it), OpenCL, or CPU-Only, you have found the controls for just that CODEC, most likely the Main Concept AVC CODEC. We’ll get to the reason (one of those things you don’t want to hear, and no one’s happy about it), but first consider the other control.
So fire up Vegas, go to Preferences, and click on the Video tab. Look at the third line down, “GPU acceleration of video processing”. That should not say “Off”; it should have your nVidia driver listed. That’s an OpenCL thing… Vegas doesn’t actually use CUDA internally. This is the internal control. Some plug-ins get their GPGPU settings from Vegas (like all of the plug-ins that come with Vegas), others make you adjust parameters independently of Vegas.
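If you want to double-check what OpenCL actually sees on your machine, here’s a quick sketch using the third-party pyopencl package (my choice; a utility like clinfo or GPU-Z reports the same information):

    # List every OpenCL platform/device the installed drivers expose.
    # If your card doesn't show up here, Vegas won't see it either.
    import pyopencl as cl

    for platform in cl.get_platforms():
        for device in platform.get_devices():
            print(f"{platform.name}: {device.name}")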
4) GeForce GTX 750 Ti vs Main Concept. So as I mentioned, there are TWO places to set your GPU. If you go to Vegas’ Preferences and set your GPU, Vegas will use the GPU for anything a GPU can do for Vegas internally, and it’ll pass that GPU selection on to plug-ins that use Vegas’ preference information. So you want to do that. If you don’t see your nVidia card there, get some recent drivers that support OpenCL. This WILL make every video render faster, and it’s also the only thing that’ll enable the GPU to make editing and preview faster.
Now back to that AVC plug-in. Main Concept was a company that just did video CODECs. Sony used their stuff going way back. In Vegas 11, released in 2011, Sony first bought the version of the Main Concept AVC plug-in that did a pretty good job of using the GPU for rendering. The problems were already at work then. While Main Concept was established (in Germany) to make video CODEC technology, they were a little too successful. So they were acquired in 2007 by DivX, at the time a very successful company selling MPEG-4 ASP products, looking to get more advanced video CODEC technology. This wasn’t a huge problem yet. But in 2010, Sonic Solutions, soon afterward a division of consumer video products company Rovi, bought DivX. And they haven’t put much into the company. So Sony engaged with Main Concept/Rovi to get the video CODEC technology for AVC into Vegas. It was pretty good back then. But it was essentially the same for Vegas 12 and Vegas 13.
And none of that would have itself been a problem, except Main Concept did a bad and very evil implementation of the GPGPU computing. The whole point of both CUDA and OpenCL is to allow any kind of device with sufficient math performance, particularly for OpenCL, to do computing with applications that know nothing about the specifics of that processor. In fact, AMD has a version of OpenCL for its CPUs, just to allow OpenCL development/support without a supporting GPU. Intel has a few non-GPU massively parallel processor boards, like the Xeon Phi, that use OpenCL. It’s a very good thing. Any recent nVidia card can do OpenCL in addition to CUDA — CUDA is a proprietary, nVidia-only system.
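That device independence is the whole design: an OpenCL program ships as source and is compiled at run time for whatever device happens to be present. A minimal sketch (pyopencl again, purely for illustration):

    # The same kernel source builds for a GPU, a CPU OpenCL runtime, or
    # a Xeon Phi; the host code never names a specific chip.
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()   # grabs whatever device is available
    queue = cl.CommandQueue(ctx)
    prg = cl.Program(ctx, """
        __kernel void scale(__global float *a) {
            int i = get_global_id(0);
            a[i] *= 2.0f;
        }
    """).build()                     # compiled here, for this device

    data = np.arange(8, dtype=np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                    hostbuf=data)
    prg.scale(queue, data.shape, None, buf)
    cl.enqueue_copy(queue, data, buf)
    print(data)                      # doubled on whatever device ran it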
Here’s the thing: Main Concept hard-wired their CODEC to only work with a list of very specific GPU chips. No good reason… presumably, they did this to force folks like Sony to pony up more cash for a new version of their CODEC for new versions of Vegas. Only, Main Concept was the one that failed to deliver. As a result, the Main Concept CODECs only support GPUs that were around in 2010 and perhaps up to mid-2011. My AMD Radeon HD 6970 helps me render video up to 6x faster than I’d get just using my 6-core i7… usually it’s more like 2-3x. Newer GPUs will do the part of the GPU acceleration that Vegas handles faster than mine will, and that’s both editing performance and the rendering pipeline for any CODEC. But they’ll take longer in the actual rendering component of your video, since Main Concept refuses to use your perfectly good GPU that should be able to render with either CUDA or OpenCL.
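Nobody outside Main Concept has seen their source, but the behavior is consistent with a hard-coded whitelist, something like this hypothetical sketch (names and structure purely illustrative, NOT their actual code):

    # Hypothetical illustration of the gating behavior described above.
    SUPPORTED_GPUS = {                           # roughly the 2010-2011 crop
        "GeForce GTX 470", "GeForce GTX 580",    # Fermi (CUDA path)
        "Radeon HD 5870", "Radeon HD 6970",      # older AMD (OpenCL path)
    }

    def encode_path(gpu_name, requested):
        if requested in ("CUDA", "OpenCL") and gpu_name in SUPPORTED_GPUS:
            return requested   # GPU-assisted encode
        return "CPU"           # any newer card silently falls back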
5) The complexity of your project. That matters, big. It’s not just video rendering… how many files is Vegas loading for your project? Where are they — which HDDs or SSDs? Particularly for HDDs, you don’t want too much media coming from the same disk, or you may thrash. Check your Task Manager/Performance display, and look for around 90% CPU being used during a render. If it’s significantly less, you have a bottleneck that’s not the CPU. You don’t want that, ever… the CPU (and GPU, if it’s helping) need to be the bottleneck, because they’re the fastest things in the system.
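If you’d rather log that than sit watching Task Manager, a tiny sketch using the third-party psutil package (my choice, nothing Vegas-specific) does the job:

    # Sample overall CPU load once a second while a render runs.
    # Sustained readings well below ~90% point to a bottleneck
    # somewhere else (usually disk).
    import psutil

    for _ in range(120):  # watch for two minutes
        print(f"CPU: {psutil.cpu_percent(interval=1):5.1f}%")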
And even finely tuned, some projects are just too big. I had a couple of music videos with animation, composited in Vegas with 40-70 layers, that took 4-8 hours to render. For a 2-3 minute video. Nothing I could do about it but bite the bullet.
-Dave
-
Quincy Berry
October 21, 2014 at 4:59 pm
WOW! Thank you very much for schooling me. I really do appreciate this in-depth response from you. This was amazing. OK, first: I got it now. CUDA! Not kuda. I also wanted to say that I noticed when I had GPU acceleration of video processing set to use, Vegas wasn’t stable; it would crash all the time. Once I disabled it, it didn’t crash. So I am a bit hesitant to set that back on. I understand that rendering times can vary depending on layers, etc., but here’s my example: taking my footage from a GH3 in AVCHD 1080p and adding a 12-second intro that was already rendered, plus 2 lower thirds that last 6 seconds, one at the start, one near the end. The video in total was 18 minutes long, and it took 40 minutes to render using the stock Internet 720p template. I just find that frustrating. But again, that must just be using the CPU, as there is no difference in time when I select CUDA or OpenCL in the options before rendering.
I will get the 580 series card in a couple of days from my buddy and install it, render the same file and see what happens. It’s a shame that the newer cards are not benefiting.
Thanks again for your response. I will be reading over it a few times.
-
Sonic 67
October 22, 2014 at 3:20 pm
“As a result, the Main Concept CODECs only support GPUs that were around in 2010 and perhaps up to mid-2011.”
I have to add to this that for nVidia, that means Fermi-based cards.
The Main Concept CUDA encoding works very well on those cards, but Main Concept OpenGL encoding doesn’t work well on nVidia (several times less GPU utilization).
Also, newer-generation nVidia cards have the FP64 floating-point capability crippled by design, to 1/24 of FP32 in Kepler and 1/32 in Maxwell.
Fermi had 1/8 for gaming cards, up to 1/2 in the professional line.
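To put rough numbers on those ratios (the peak FP32 figures in this sketch are approximate published specs, so treat it as back-of-the-envelope):

    # Theoretical FP64 throughput = peak FP32 / ratio.
    cards = {
        "GTX 580 (Fermi, 1/8)":       (1581, 8),
        "GTX 680 (Kepler, 1/24)":     (3090, 24),
        "GTX 750 Ti (Maxwell, 1/32)": (1306, 32),
    }
    for name, (fp32_gflops, ratio) in cards.items():
        print(f"{name}: ~{fp32_gflops / ratio:.0f} GFLOPS FP64")

That works out to roughly 198, 129, and 41 GFLOPS: a 2010 Fermi gaming card has several times the double-precision throughput of much newer consumer cards.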
I think that makes a difference in the encoding process.
-
Dave Haynie
October 22, 2014 at 7:30 pm
[Sorin C. Nicu] “I have to add to this that for nVidia, that means Fermi-based cards.”
Yup… that’s one big reason that Vegas is faster on renders with the Fermis than with the newer Keplers.
[Sorin C. Nicu] “The Main Concept CUDA encoding works very well on those cards, but Main Concept OpenGL encoding doesn’t work well on nVidia (several times less GPU utilization).”
Actually, no one has any idea how well Main Concept’s OpenCL works on the nVidia cards, because of that whole chip-keying thing I mentioned. They only enable CUDA or OpenCL for specific chips, despite the whole point of CUDA and OpenCL being that nothing chip-specific is needed. They only enable OpenCL for those older AMD/ATi GPUs, the HD5xxx and HD6xxx series. Any other card (any nVidia, the GCN AMDs, etc.) is running on the CPU only when set to OpenCL.
OpenGL is something different… “Open Graphics Library” versus “Open Computing Language”. Vegas will use OpenGL for some 3D graphics plug-ins. OpenCL is used by Vegas internally for some compositing work, and of course, some plug-ins use it for things like rendering.
-Dave
-
Sonic 67
October 22, 2014 at 8:48 pm
You are right, I meant OpenCL. In my tests I see some minimal GPU utilization if I select OpenCL, as opposed to CPU only, but that might be due to playback of the resulting video, not the encoding itself.
Main Concept’s encoders page states that OpenCL works only for ATI.
-
Ty Yang
November 14, 2014 at 4:49 am
I created an account just to add to this discussion. I was searching for answers to the same question many are having: how come I can’t see a noticeable difference in render time with my new PC?
I had a small sample clip that I rendered with my old PC; sadly, my new PC rendered it only a few seconds faster. Same Vegas project file, same Sony AVC 1080p output.

Old Dell PC: 3:25 render time
i7 920
Nvidia GTX 560
24GB RAM
rendering to a separate WD Black HD

New PC: 3:14 render time
i7 4790k
Asus Z97 mobo
Nvidia 750 Ti
16GB RAM
rendering to a separate WD Black HD

New PC: also no noticeable render time difference with GPU on vs CPU only.

In summary, nope… no big difference in render time with a brand spanking new system. =(
-
John Rofrano
November 14, 2014 at 4:06 pm
[Ty Yang] “In summary, nope… no big difference in render time with a brand spanking new system. =(“
What did you specifically upgrade that you thought was going to reduce render times? Simply “buying a new PC” isn’t a plan. So let’s look at your configuration.
Intel i7 920 vs i7 4790k
That is an upgrade in architecture and a bump in clock speed from 2.8GHz to 4.0GHz, so you should see a small boost from that upgrade, but you still only have 4 cores, so don’t expect much.

Nvidia GTX 560 vs Nvidia 750 Ti
Vegas Pro uses OpenCL for timeline GPU acceleration and NVIDIA has poor support for this, so I wouldn’t expect you to see any benefit in timeline GPU performance from this upgrade. A better choice would have been to switch to an ATI Radeon card, which excels at OpenCL support. Sony also seems to have trouble with newer NVIDIA cards for render GPU acceleration, so again, I would not expect you to see any benefit from upgrading your NVIDIA GPU.

24GB RAM vs 16GB RAM
That’s actually a downgrade, but I don’t believe it would have any impact anyway because 16GB is more than enough memory for Vegas Pro to use.

Old Dell PC: 3:25 render time vs New PC: 3:14 render time
It looks like the only thing buying a new PC did for you was increase your CPU clock speed from 2.8GHz to 4.0GHz. Other than that, there is no reason to believe that your new PC would perform any better than your old one.

If you really wanted better performance you should have upgraded to more cores (i.e., 6-core, 8-core, etc.). When I went from 4 cores to 6 cores I saw a pretty good improvement. I’m currently using an 8-core Mac Pro and my next purchase will be a 12-core. Just buying another 4-core computer with a 1GHz bump in processor speed is not going to help much, as you have seen.
~jr
http://www.johnrofrano.com
http://www.vasst.com
-
Sonic 67
November 14, 2014 at 5:16 pm
Sony Vegas supports GPU encoding only for Fermi-generation cards. Your “new” card is therefore unsupported: good for gaming, but crippled in Vegas.
Also, you have to select CUDA (or OpenCL for ATI) manually, otherwise it will default to CPU encoding (rendering). Even if OpenCL is available for nVidia rendering, use CUDA. OpenCL is very useful for the timeline view (supported there by both nVidia and AMD), but your tests measured basically only the encoding (rendering) process.
