NVIDIA cards, CUDA and SLI
Posted by Nigel O’neill on November 17, 2010 at 12:40 pm
Are there any Vegas Pro 10 editors out there using a multiple video card SLI setup in their systems to aid in rendering/previewing performance?
Intel i7 920, 12GB RAM, ASUS P6T, Vegas Pro 10 (X64), Vista x64 Ultimate, Vegas Production Assistant 1.0, VASST Ultimate S 4.1
5 Replies
John Rofrano
November 17, 2010 at 2:58 pm
I thought I saw benchmarks that said that SLI does not give you a 2x improvement for rendering and therefore is not worth spending 2x the money. The article (which unfortunately I can’t find) said that you could get the same improvement by simply buying a better video card that doesn’t cost 2x the price.
~jr
http://www.johnrofrano.com
http://www.vasst.com
Jeff Schroeder
November 17, 2010 at 4:17 pm
There is a lot to be said for twin cards that are not SLI’d. First, I have two systems set up this way: Vegas 10 on one, AE on the other. Both systems run Windows 7 64-bit, and each has two monitors (2× 1920×1080 and 2× 1680×1050). In Win7, each core can offload ‘routine’ math to the GPU, and it doesn’t matter which GPU when it has access to multiple ones. So even before we fire up our favorite video program, we already have an increase in overall system speed.
Second, having both monitors plugged into the same card will definitely impact graphics performance on any machine.
John is absolutely correct in saying that “buying a better video card” is a better choice, if you have a single monitor.
Since I need all the screen real estate I can get, I will never build a system with less than 2 video cards driving 2 monitors.
Jeff
http://www.narrowroadmedia.com
Stephen Mann
November 17, 2010 at 4:53 pm“Since I need all the screen real estate I can get, I will never build a system with less than 2 video cards driving 2 monitors.”
I find editing with one screen extremely limiting. My next system will have four monitors: three 22-inch 1920s and one HDMI LCD TV monitor.
Steve Mann
MannMade Digital Video
http://www.mmdv.com
Nigel O’neill
November 18, 2010 at 9:54 am
Thanks for the responses.
I have a 2 monitor setup based on an NVIDIA GTX 9800 chipset. I could wait for the NVIDIA 580 series with 512 CUDA cores and get the single card, or get 2 ‘run out’ model cards and SLI them for less $$.
Intel i7 920, 12GB RAM, ASUS P6T, Vegas Pro 10 (X64), Vista x64 Ultimate, Vegas Production Assistant 1.0, VASST Ultimate S 4.1
Dave Haynie
November 21, 2010 at 3:31 pm
“Second, having both monitors plugged into the same card will definitely impact graphics performance on any machine.”
It’ll impact PEAK graphics performance. Or, more correctly, it’s exactly the same graphics load as driving a single larger display. For example, the bandwidth requirements for driving both of my 1920×1200 monitors are precisely the same as for driving one 3840×1200 monitor.
Now, for any kind of 2D work whatsoever, that’s not even remotely a concern. The memory bandwidth on all modern graphics cards is pretty insane: my oldish nVidia 8800GT has a memory bandwidth of 57.6GB/s, while a single 1920×1200/60p display needs 415MB/s of bandwidth for screen refresh. That’s enough memory bandwidth to refresh 138 of these screens, so the load of two is hardly measurable against anything else I’m doing, and neither is all the 2D blitting I could ever want. This is why graphics card vendors long ago abandoned dual-ported VRAM (which put no display-refresh load on the main bus) for single-ported but much faster graphics DRAM (GDDR).
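The refresh-bandwidth arithmetic above can be checked quickly. A minimal sketch, using the figures from the post and assuming 24-bit color at 60Hz (which is what reproduces the 415MB/s number):

```python
# Screen-refresh bandwidth vs. card memory bandwidth (illustrative figures from the post).
WIDTH, HEIGHT = 1920, 1200   # one monitor
BYTES_PER_PIXEL = 3          # 24-bit color (assumption; matches the 415MB/s figure)
REFRESH_HZ = 60

per_screen = WIDTH * HEIGHT * BYTES_PER_PIXEL * REFRESH_HZ   # bytes/s for refresh
card_bw = 57.6e9                                             # 8800GT: 57.6 GB/s

print(f"per-screen refresh: {per_screen / 1e6:.0f} MB/s")              # ~415 MB/s
print(f"screens the card could refresh: {int(card_bw // per_screen)}")  # 138
```

Two 1920×1200 monitors consume the same ~830MB/s as one 3840×1200 display, a rounding error against 57.6GB/s.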
Even a full AVCHD decode assist is a small load for your GPU. The fact that 1080/60p video plays just as dandy on my laptop’s relatively weak nVidia 8600M GPU as on my much more powerful machine is a pretty good indication it’s using only a fraction of the available resources.
Of course, once you’re pushing 3D as hard as it will go, two graphics cards will double your performance over a single card of the same kind, assuming you’re doing that kind of work on both screens. Then again, you might also find a single graphics card that delivers double that performance on its own.
For example, that 8800GT (G92 GPU) has 112 stream processors on a 256-bit bus with a 900MHz memory clock, together delivering that 57.6GB/s of bandwidth, a texture fill rate of 33.6 billion texels per second, and about 504 GFLOPS of compute. That’s way faster than your CPU’s memory, but no longer all that interesting. Look at nVidia’s latest GTX580 card (GF110 GPU): it includes 512 much more capable stream processors on a 384-bit bus clocked at 1GHz using much more advanced graphics RAM, for 192.4GB/s of bandwidth and about 1581.1 GFLOPS of compute. And you can get a card with two of these chips on it. Not cheap, of course, but clearly you’d need four 8800GT cards, more than I could fit in any PC, to perform about as well as one of those GPUs.
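The "four 8800GTs per GF110" claim follows from the quoted GFLOPS figures; a quick illustrative check (numbers as given in the post, not independently measured):

```python
import math

# GFLOPS and memory-bandwidth figures as quoted in the post for the two cards.
cards = {
    "8800GT (G92)":    {"gflops": 504.0,  "bw_gb_s": 57.6},
    "GTX 580 (GF110)": {"gflops": 1581.1, "bw_gb_s": 192.4},
}

ratio = cards["GTX 580 (GF110)"]["gflops"] / cards["8800GT (G92)"]["gflops"]
print(f"compute ratio: {ratio:.1f}x")                              # ~3.1x
print(f"8800GTs needed to match one GF110: {math.ceil(ratio)}")    # 4
```

The memory-bandwidth ratio works out similarly (192.4 / 57.6 ≈ 3.3x), so the cards scale together on both axes.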
-Dave