Is this lack of difference in CPU vs GPU-enabled render speed a case of the dreaded Kepler chip thing?
Posted by Stephen Crye on January 23, 2014 at 3:12 am
Hi All;
Sorry for doing this, but the original thread was getting old and slipping farther down the list, and Ozzie was hoping for some feedback. Unlike me, a mere hobbyist, he shoots and renders vids for a good portion of his income (in addition to being the operations manager at a local TV station). He would love to turn 12-hour renders into 6-hour renders. He built his computer just two months ago.
Can someone in the know have a look at this post and comment, please?
https://forums.creativecow.net/readpost/24/973597
Thanks, Ozzie and I would really appreciate it.
Win7 Pro X64 on Dell T7500, MultiTB SATA, 8GB RAM, nVidia Quadro 2000, Vegas 12, 11, 10, 9 DVDA 6.0 & 5.2(build 135) Sony HDR-CX550V, Panasonic GH3 with LUMIX G X VARIO 12-35mm / F2.8 ASPH, LUMIX G X VARIO 35-100mm / F2.8
13 Replies
-
Dave Haynie
January 23, 2014 at 7:19 am
Ok… first, let’s see here. He’s running an AMD FX-8350 CPU, which is a kind of hybrid 8/4-core processor. In case you don’t know the story, AMD’s latest Bulldozer/Piledriver architecture builds processor “modules”, each consisting of two integer cores and one shared floating-point unit. So it’s not bad on pure integer code, but it tends to be slower than you’d expect on floating-point code. There’s lots of floating-point code in video. But of course, some of that can be done by the GPU. Sometimes. It really is getting hard to predict these things.
The GPU is the nVidia GeForce GTX 660 Ti, which is based on the new Kepler architecture, runs internally at 915MHz, contains 1344 CUDA cores, and supports a 192-bit GDDR5 memory bus capable of 144.2GB/s of memory bandwidth. In short, it’s a pretty good GPU. It also supports up to four monitors at up to 4K resolution.
So, one question: what was the Vegas setting of GPU acceleration? That’s the setting on the Preferences/Video panel. It would be interesting to change that setting (you have to reboot Vegas) and see how it affects playback.
So you ran MainConcept AVC and got 2:56 with the Main Concept CUDA setting, 2:57 with OpenCL, and 2:56 with CPU-only. Pretty much the same with Sony AVC.
So… hmm. You didn’t specify your render settings, which does matter. For reference, my system did 5:19 for Main Concept rendered to 1080i60, 25Mb/s, with “8-bit best” set in the project settings. That was with GPU disabled in Vegas, and CPU-only as a render setting. And I got 0:57 with GPU enabled and CUDA rendering. I didn’t try a mix.
So your results: given that we KNOW the Main Concept CODEC won’t use OpenCL, and you get the same result with OpenCL, CUDA, and CPU-only alike, it sure looks like Main Concept isn’t using CUDA, either. I mean, it could be, but the Main Concept AVC CODEC does practically everything except entropy encoding (the lossless compression phase) on the GPU; it would be nearly impossible for CUDA vs. non-CUDA to give the same result. It could be that nVidia does the same brain-damaged things in CUDA that it does in OpenGL, only supporting cards it specifically knows about. The 6xx series shipped first in March of 2012, so it would be pretty poor if Main Concept didn’t support them. Also, given that the result on the Sony benchmark suggests the same thing, I’d look at your CUDA support.
My first recommendation is to see if CUDA is actually working at all on this system. Try setting the GPU acceleration setting in Preferences/Video. If there is an OpenCL driver available, you’ll see it as an option there. Or download the Geeks3D GPU Caps Viewer https://www.geeks3d.com/20131113/gpu-caps-viewer-1-19-0-videocard-information-utility-opengl-opencl-geforce-quadro-radeon-gpu/ … this will show you whether OpenCL and/or CUDA are enabled, and let you run a few short demos to prove it’s all working.
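If you’d rather check it completely outside of Vegas, a few lines of C against the OpenCL API will tell you whether the driver exposes any GPU devices at all. This is just a rough sketch typed from memory (untested, and it assumes you have the OpenCL headers and import library from the nVidia CUDA toolkit installed):

/* Quick OpenCL sanity check. Untested sketch; on Windows, build with
   something like: cl check_ocl.c OpenCL.lib */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    cl_uint p, d;

    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS
            || num_platforms == 0) {
        printf("No OpenCL platforms found -- driver/ICD problem.\n");
        return 1;
    }
    for (p = 0; p < num_platforms; p++) {
        char name[256];
        cl_device_id devs[8];
        cl_uint num_devs = 0;

        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("Platform: %s\n", name);
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devs,
                           &num_devs) == CL_SUCCESS) {
            for (d = 0; d < num_devs; d++) {
                clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
                printf("  GPU device: %s\n", name);
            }
        }
    }
    return 0;
}

If that prints no platforms or no GPU device, the problem is at the driver level and no Vegas setting will fix it.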
-Dave
-
Norman Black
January 23, 2014 at 5:22 pm
[Dave Haynie] “It could be that nVidia does the same brain-damaged things in CUDA that it does in OpenGL, only supporting cards it specifically knows about.”
MC did that with the OpenCL AVC encoder for AMD. Odds are they did it with Nvidia CUDA as well.
Looking at the mc_enc_avc_cuda.dll file, one sees a 2010 copyright from MainConcept. The mc_enc_avc_ocl.dll file has a 2011 copyright. That doesn’t necessarily mean anything; MC could simply have missed updating a copyright string somewhere.
-
Dave Haynie
January 23, 2014 at 7:19 pm
[Norman Black] “MC did that with the OpenCL AVC encoder for AMD. Odds are they did it with Nvidia CUDA as well.”
That was supposed to say “Main Concept”, not nVidia. This is the same thing Adobe did with their GPU-accelerated code, early on at least — they looked for a few very specific bits of hardware, and supported only those.
And yeah, I found some online stuff from a while back about the hard-coded chip-family checks in the Main Concept drivers. Really dumb. Hey, if you’re not sure, go ahead and pop up a warning notice. But looking at the actual chip is kind of the opposite of the point of OpenCL. It runs on nVidia, it runs (more slowly, but still) on newer Intel GPUs, and it even runs on the fairly exotic Intel Xeon Phi coprocessor board (Intel’s sort-of answer to nVidia’s Tesla).
-Dave
-
Norman Black
January 23, 2014 at 8:06 pm
[Dave Haynie] “That was supposed to say “Main Concept”, not nVidia.”
Let me try wording that differently.
MainConcept did that with the OpenCL AVC encoder, which only works with AMD. Odds are that they, MainConcept, did it with the CUDA AVC encoder as well.
-
Stephen Crye
January 23, 2014 at 10:33 pm
Thanks for taking the time, Dave!
[Dave Haynie] “So, one question: what was the Vegas setting of GPU acceleration? That’s the setting on the Preferences/Video panel.”
In the drop-down his card was listed; we picked that. We did not have to reboot because it was already selected; he knew that from when he first installed SVP12.
[Dave Haynie] “So… hmm. You didn’t specify your render settings, which does matter.”
I’ve been using the default 1080 30p templates for both the MainConcept AVC/AAC and Sony AVC benchmarks. The only things we change are the various CPU/GPU settings. 8-bit, good.
[Dave Haynie] “It could be that nVidia does the same brain-damaged things in CUDA that it does in OpenGL, only supporting cards it specifically knows about. The 6xx series shipped first in March of 2012, so it would be pretty poor if Main Concept didn’t support them. Also, given that the result on the Sony benchmark suggests the same thing, I’d look at your CUDA support.”
When we do a “Check GPU”, Vegas reports that CUDA is available. GPU-Z puts checkboxes beside both OpenCL and CUDA, just like it does with my Quadro 2000. On my Quadro 2000, OpenCL is way slower than CPU-only; on his GTX 660 Ti, it’s only one second slower. It sure seems like his CUDA is broken from the Kepler thing; I’m not sure why OpenCL is not working for either nVidia card.
[Dave Haynie] “Try setting the GPU acceleration setting in Preferences/Video. If there is an OpenCL driver available, you’ll see it as an option there.”
On my system and his, the only options in that drop-down are “none” or the name of the card. OpenCL is only available as an option in the MainConcept render template; in the Sony render template, there are three choices: Auto, CPU-only, or GPU if available.
We will try that Geeks3D link, thanks!
Steve
Win7 Pro X64 on Dell T7500, MultiTB SATA, 8GB RAM, nVidia Quadro 2000, Vegas 12, 11, 10, 9 DVDA 6.0 & 5.2(build 135) Sony HDR-CX550V, Panasonic GH3 with LUMIX G X VARIO 12-35mm / F2.8 ASPH, LUMIX G X VARIO 35-100mm / F2.8
-
Stephen Crye
January 23, 2014 at 10:34 pm
Thanks Norman; tonight I’m going to try looking at those DLLs myself – see if I can channel the programmer in my former self.
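In case anyone else wants to poke at them too, something like this against the Win32 version APIs should dump the file-version resource from a DLL. A rough, untested sketch (the default filename is just the MainConcept encoder DLL Norman mentioned):

/* Dump the file-version resource of a DLL, e.g. mc_enc_avc_cuda.dll.
   Untested sketch; with MSVC, build with: cl dumpver.c version.lib */
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "mc_enc_avc_cuda.dll";
    DWORD handle = 0;
    DWORD size = GetFileVersionInfoSizeA(path, &handle);
    void *data;

    if (size == 0) {
        printf("No version info found in %s\n", path);
        return 1;
    }
    data = malloc(size);
    if (GetFileVersionInfoA(path, 0, size, data)) {
        VS_FIXEDFILEINFO *ffi = NULL;
        UINT len = 0;
        /* "\\" queries the root block, the fixed file-version numbers */
        if (VerQueryValueA(data, "\\", (LPVOID *)&ffi, &len) && ffi) {
            printf("%s file version: %u.%u.%u.%u\n", path,
                   HIWORD(ffi->dwFileVersionMS), LOWORD(ffi->dwFileVersionMS),
                   HIWORD(ffi->dwFileVersionLS), LOWORD(ffi->dwFileVersionLS));
        }
    }
    free(data);
    return 0;
}

Run it from the Vegas program folder (or pass a full path) and compare what the cuda and ocl DLLs report against those copyright strings.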
Steve
Win7 Pro X64 on Dell T7500, MultiTB SATA, 8GB RAM, nVidia Quadro 2000, Vegas 12, 11, 10, 9 DVDA 6.0 & 5.2(build 135) Sony HDR-CX550V, Panasonic GH3 with LUMIX G X VARIO 12-35mm / F2.8 ASPH, LUMIX G X VARIO 35-100mm / F2.8
-
Stephen Crye
January 23, 2014 at 10:39 pm
[Norman Black] “Odds are that they, MainConcept, did it with the CUDA AVC encoder as well”
Hmmm. But on my Quadro 2000 system, using MainConcept AVC with CUDA selected, it renders almost twice as fast as with CPU-only, and my GPU utilization goes from almost nothing in CPU-only mode to over 80% at times in CUDA mode.
That’s what makes me think that both the MainConcept and Sony AVC codecs are unable to use Kepler architecture, even with the latest nVidia drivers.
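For anyone who wants to watch that utilization without GPU-Z, a small polling loop against nVidia’s NVML library would do it. Another rough, untested sketch; it assumes you have nvml.h and the import library from nVidia’s Tesla Deployment Kit (nvml.dll itself ships with the driver):

/* Poll GPU utilization once a second, e.g. while a render is running.
   Untested sketch. */
#include <stdio.h>
#include <windows.h>
#include <nvml.h>

int main(void)
{
    nvmlDevice_t dev;
    nvmlUtilization_t util;
    int i;

    if (nvmlInit() != NVML_SUCCESS) {
        printf("NVML init failed -- nVidia driver problem?\n");
        return 1;
    }
    nvmlDeviceGetHandleByIndex(0, &dev);   /* first GPU in the system */
    for (i = 0; i < 60; i++) {             /* watch for one minute */
        if (nvmlDeviceGetUtilizationRates(dev, &util) == NVML_SUCCESS)
            printf("GPU %3u%%  memory controller %3u%%\n", util.gpu, util.memory);
        Sleep(1000);
    }
    nvmlShutdown();
    return 0;
}

Kick off a render, run that alongside it, and the CPU-only vs. CUDA difference should be obvious in the numbers.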
Steve
Win7 Pro X64 on Dell T7500, MultiTB SATA, 8GB RAM, nVidia Quadro 2000, Vegas 12, 11, 10, 9 DVDA 6.0 & 5.2(build 135) Sony HDR-CX550V, Panasonic GH3 with LUMIX G X VARIO 12-35mm / F2.8 ASPH, LUMIX G X VARIO 35-100mm / F2.8
-
Norman Black
January 23, 2014 at 11:30 pm
[Stephen Crye] “Hmmm. But on my Quadro 2000 system, using MainConcept AVC with CUDA selected, it renders almost twice as fast as with CPU-only”
Quadro 2000 is Fermi and dates back to late 2010. If MC did silly things with the CUDA AVC encoder like they did with the AMD OpenCL encoder, then that card is likely fully supported and tweaked.
I don’t have multiple generations of Nvidia cards like I do with AMD. I need hardware in hand to do tests.
There is nothing wrong with having custom tuned and tweaked versions of the code. You just need to have the generic one around when the available hardware does not match any of the custom tuned versions.
-
Dave Haynie
January 24, 2014 at 6:25 pm
[Stephen Crye] “On my system and his, the only options in that drop-down are “none” or the name of the card. OpenCL is only available as an option in the MainConcept render template; in the Sony render template, there are three choices: Auto, CPU-only, or GPU if available.”
Just to clarify — when you see your GPU listed in the Preferences/Video tab under GPU acceleration, that IS OpenCL. Vegas only uses OpenCL internally. That setting is for Vegas’ own use: compositing, etc. That setting is also passed on to plug-ins that know OpenCL.
Some 3rd-party plug-ins may have their own GPU settings, which can of course include CUDA or OpenCL. Among those plug-ins are the video encoders. Sony’s AVC plug-in inherits the setting from Vegas (that’s the “use if available” setting), while Main Concept requires an explicit setting of OpenCL or CUDA.
That’s why I was suggesting a test render with/without the Vegas preference setting. From what I gather, you’re always running Vegas in OpenCL mode, but switching between CPU-only, CUDA, and OpenCL when running the Main Concept plug-in. That seems to always do the same thing, suggesting to me that it’s not using the GPU (when I run with the GPU enabled in both places, I see 80-90% peaks from the Main Concept CODEC, as viewed with GPU Shark). So if you set Vegas itself to CPU-only, reboot Vegas, and try a CPU-only Main Concept render, that’ll give you more information about whether OpenCL is helping you or not with the Kepler card.
Right now, from what you’ve told me, all we know is that Main Concept’s AVC encoder doesn’t do anything different with OpenCL or CUDA, and there’s a reasonable expectation that it won’t use any new GPU for either of those things. If OpenCL is really bad on the Kepler, switching that off in Vegas might actually make things faster, particularly on a very fast CPU. Maybe not, but it’s worth knowing.
-Dave
-
Stephen Crye
January 26, 2014 at 1:28 am
[Dave Haynie] “Just to clarify — when you see your GPU listed in the Preferences/Video tab under GPU acceleration, that IS OpenCL. Vegas only uses OpenCL internally. That setting is for Vegas’ own use: compositing, etc. That setting is also passed on to plug-ins that know OpenCL…That’s why I was suggesting a test render with/without the Vegas preference setting… try a CPU-only Main Concept render, that’ll give you more information about whether OpenCL is helping you or not with the Kepler card.”
Roger Wilco, thanks for the clarification. Very enlightening and interesting. It will be a while before I can go over to Ozzie’s house to try that, please stand by.
I’ll do some tests tonight on my Quadro 2000 system just for grins. My preview performance is pretty horrible with GPU enabled; I expect it will get worse. Ozzie’s is great with GPU-enabled preview.
Finally, in another recent post in this forum, Chris found something interesting on the MainConcept site:
https://www.mainconcept.com/products/sdks/gpu-acceleration/cuda-h264avc.html
“NVIDIA graphics card with CUDA support (Professional – Tesla, Quadro 4000-series, FX, CX, NVS, QuadroPlex; Consumer – GeForce 8, 9, 100, 200, 400-series GPUs – with a minimum of 256 MB of local graphics memory card or 512 MB for 1920x1080p encoding). CUDA compute capability support only up to 1.3 (excludes certain GeForce 8800 models – GTS, Ultra. Compute capability 1.0 works in general for encoding, but has known issues. Boards with Kepler architecture are not supported.”
Is it safe to assume that the Sony AVC encoder has the same problem with Kepler?
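If that page is right, it explains everything: Kepler cards like the GTX 660 Ti report CUDA compute capability 3.0, well past MainConcept’s 1.3 cutoff. If I get ambitious and dust off a compiler, a quick, untested sketch like this against the CUDA runtime API would confirm what the 660 Ti actually reports:

/* Print each CUDA device's compute capability. MainConcept's CUDA AVC
   encoder reportedly supports only up to 1.3; Kepler boards are 3.x.
   Untested sketch; build with: nvcc check_cc.cu -o check_cc */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    int i;

    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found.\n");
        return 1;
    }
    for (i = 0; i < count; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute capability %d.%d\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}

Anything that prints 3.0 or higher is outside the range that MainConcept page says the encoder supports.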
Steve
Win7 Pro X64 on Dell T7500, MultiTB SATA, 8GB RAM, nVidia Quadro 2000, Vegas 12, 11, 10, 9 DVDA 6.0 & 5.2(build 135) Sony HDR-CX550V, Panasonic GH3 with LUMIX G X VARIO 12-35mm / F2.8 ASPH, LUMIX G X VARIO 35-100mm / F2.8