-
James Redmond
December 10, 2016 at 7:07 pm
Aaron, thanks for all the great information! Your advice about setting up a second graphics card just for rendering was amazing!
I am also considering rebuilding my system to speed up rendering in Vegas 13 (haven’t installed 14 yet). Generally I do a lot of editing then have several projects that need to be rendered. So I am always looking at ways to speed up the rendering.
Questions:
Thinking of the Radeon Pro WX-5100 graphics board for rendering only, as you recommended. Would it work well with a Radeon RX 480? I will use the 480 to drive two monitors. I currently have a GTX 970. Would that work, or are you recommending that both graphics cards be AMD?
I currently have an i7-5930K. Thinking of the 10-core i7-6950X instead of the i7-5960X; it looks like it has 40 PCIe lanes. Is it worth a 50% price increase over the 6900K for what looks like a 15% increase in performance?
I currently have 32 GB of DDR4-2400 (1200 MHz). How much RAM would you recommend, and at what speed?
I currently have an Asus X99 Deluxe version 1 motherboard. Thinking of switching to the MSI X99A GODlike Gaming (Intel X99, LGA 2011, DDR4, USB 3.1, Extended ATX). It seems to have more USB 3.1, which is becoming pretty standard.
Thanks for all your advice! James
James Redmond
Dynamic Videos, Inc.
Rogers, AR USA
-
Matthew Jeschke
December 17, 2016 at 5:28 pm
My head is spinning with all this graphics card talk… I understand NVIDIA may be as good as Radeon now for OpenCL, which is the platform SVP uses. I too am shopping for a graphics card… which still leaves me confused by the plethora of Radeon cards available for purchase. How does one determine which card is the best (or right) card to pair with Vegas?
I am looking at:
Number of simultaneous streams.
Onboard memory.
Clock speed.
What else should one consider? FYI, I am not even able to tell which is the latest GPU generation. They seem to have 200s, 400s… then the 7000s, 8000s, FirePro, etc… Which is best for video editing? lol
Any pointers are appreciated. Thx 🙂
————————————–
I do Architectural Photography & Cinematography as part of being a Residential Real Estate Consultant.
Some of my work can be seen at,
https://www.youtube.com/keystoaz/
https://www.vimeo.com/matthewjeschke/
PS. It's an excellent excuse to write off what I love, camera equipment 🙂
-
James Redmond
December 22, 2016 at 6:36 pm
Well, I built the system as described earlier, with the i7-6950X, Radeon RX 480, MSI X99A GODlike Gaming motherboard, 32 GB of RAM, and an AMD Radeon Pro WX-5100 graphics board.
I could not get Vegas to use the Radeon Pro WX-5100. I called AMD about running dual graphics, and their tech support said there would be a conflict between the two different drivers and recommended I use two of the same kind of graphics card. So I am returning the WX-5100 and getting another Radeon 480.
Good luck with it all, James
James Redmond
Dynamic Videos, Inc.
Rogers, AR USA
-
Aaron Star
December 23, 2016 at 9:28 pm
James, sorry for taking so long to get back to you.
What was the reasoning for choosing the WX-5100 in the first place? You pretty much only need a FirePro card if you are trying to set up a 10-bit display path, at least as far as Vegas operation goes. Blackmagic has cards that will give you 10-bit output to a monitor if you need that type of path. If you want to go directly from your GPU, or have other applications that will utilize Pro GPU features, then the FirePro is a good choice too.
The 6950X, or Broadwell-E series, has the benefit of improved stable memory speed (2400 vs. the 5960X series). Your board may be able to overclock the memory past either 2133 or 2400, but make sure to run memtest86 overnight to verify there are zero errors from the controller or memory at the new speeds.
When it comes to price/performance, or whether one chip is worth more than another, that is really up to you to decide based on your own testing and financial circumstances. Is your business writing it off, or is the purchase just an expensive new toy?
When it comes to which series to buy, I would always buy the CPU with the most cores, highest operating frequency, newest instruction sets, and best memory bandwidth. It all depends on how long you plan to use the CPU. One can only assume that Magix and other NLE makers are actually working to take advantage of new instruction sets and OpenCL versions as they come out. Sony in the past was not, however. Buying an older CPU means that software advancements which take advantage of new instructions will require a CPU upgrade. There is some leapfrogging here.
AnandTech has a good article on the 6950X you may want to read.
PCPer has some good info on the 6950X:
“Our synthetic performance tests start off very well for the Core i7-6950X, resulting in scores that are 29% faster than the 8-core 5960X and nearly 50% faster than the Core i7-6700K Skylake processor.”
There does not seem to be much OpenCL benchmarking going on with LuxMark or similar tools. Posting results for a Vegas test file in 32-bit FP mode would clearly be a good judge of performance, but our NLE community is small in reality. Even AnandTech stopped using Vegas as a benchmark. I feel that is because Sony no longer owns it, and because the results had topped out, showing almost no difference between new GPU releases, when in fact the benchmark file was simply not evolving with the times in resolution and color depth.
A new Vegas test file that plays back/renders a 4K EXR-format project with effects in 32-bit full mode, and then also renders out to a 4K 10-bit HEVC format, would be a good new benchmark. This would increase the complexity of the project and show a much greater difference in speed between compute hardware setups.
-
Aaron Star
December 23, 2016 at 10:05 pm
Matthew,
I would look at AMD X-series chips with stream-processor counts that divide evenly by 64. Wikipedia provides a list of which chip each card series is running. Buyer beware: some newer models actually have lower performance than older X-series cards. New models generally add other new features rather than pure compute performance. For example:
RX 480
R9 290X
R9 390X
Fury X
Look at the GFLOP ratings between AMD cards. Most of the GPU effects that need the most acceleration use the kind of math where you want the highest GFLOP rating. The best CPUs do not come close to, say, a Fury X in terms of GFLOP performance. Most of what you want from OpenCL assist is the quick return of complex math and floating-point results while working in 32-bit FP mode.
“Core i7-6950X + DDR4-2133 = 497.1 GFLOPs”
“Fury X = 8,600 GFLOPs”
GPU memory bandwidth normally goes hand in hand with bus width; both are good indicators.
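As a quick sanity check on those ratings (assuming the published shader counts and boost clocks are right), both the compute-unit count and the GFLOP number can be worked out by hand: GCN groups 64 stream processors per compute unit, and each stream processor does 2 FLOPs per cycle via fused multiply-add:
RX 480: 2304 stream processors ÷ 64 = 36 compute units
Fury X: 4096 × 2 FLOPs/cycle × 1.05 GHz ≈ 8,600 GFLOPs
RX 480: 2304 × 2 FLOPs/cycle × 1.27 GHz ≈ 5,800 GFLOPs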
PCIe 3.0 x16 is only roughly 16 GB/s, so most GPUs and system memories are well beyond this, and yet they share large amounts of data between them. Any increase in PCIe speed is therefore a good thing to look for. For example, PCIe 3.0 is not only faster than 2.0, it also has less encoding overhead, which offers a speed increase as well.
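To put rough numbers on that overhead difference (per-lane rates from the PCIe specs):
PCIe 2.0: 5 GT/s with 8b/10b encoding = 500 MB/s per lane, so ~8 GB/s at x16
PCIe 3.0: 8 GT/s with 128b/130b encoding ≈ 985 MB/s per lane, so ~15.8 GB/s at x16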
Make sure your motherboard allows all of your extra interface cards (capture cards, PCIe SSDs, even M.2, etc.) to run on the CPU's PCIe lanes and not through the southbridge (X99 or other chipset designations).
OpenCL versions would matter more, but the software manufacturer (Magix) needs to implement the newer versions of OpenCL in the software. I believe Vegas 13-14 only supports OpenCL 1.2.
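If you want to confirm what your driver actually exposes, the clinfo utility (bundled with some GPU SDKs and driver packages; I'm assuming here it is installed and on your PATH) will list the supported version per device from a command prompt:
clinfo | findstr /i "opencl"
Whatever the driver reports, Vegas will still only use the version it was built against.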
Hope this helps.
-
Matthew Jeschke
December 23, 2016 at 11:12 pm
Hey Aaron,
Thanks again ☺ And shucks, shucks, shucks ☹ I finalized my build. I was EXTREMELY confused by Radeon's graphics chipsets / GPUs. I knew I wanted a Radeon, but NONE of their model numbers make any sense. I bought the newest setup and assumed it would be the best… I bought an RX 480 8GB.
I opened Vegas on the new machine. I added a few effects to a 4K H.264 video from my S6 phone. Specifically, I stabilized using proDAD. It did the analysis somewhat quickly… then played back without a hitch at full quality. I then tacked on a HitFilm noise-reduction effect and tried to play back at full quality. I could see the timeline preview straining; it was dropping frames at that point. The setup feels like it's not silky smooth and super muscular, so to speak. I had thought that was because the processor only has 6 cores (i7-5930K). Could it instead be due to the GPU?
I was extremely confused as to why the older architectures, like the 290X, cost a tad more. I looked at a lot of OpenGL bench tests… then tried to pick out the unit with the most compute units, stream processors, and graphics memory on a single GPU (36 / 2304 / 8 GB respectively).
Here’s my final spec list:
– Radeon RX 480 8GB by MSI (should this have been a 290X, 390X, or Fury X?)
– Intel i7-5930K 6-core 3.5 GHz (would have liked a few more cores but compromised for price).
– X99 MB – ASRock X99M Killer 3.1 (FYI – don't buy ASRock; memory support is limited & hard to set up).
– 64GB DDR4 – 4x 16GB DDR4-2133 / PC4-17000 CL 15-15-15-36 G.Skill memory
– 512GB – Samsung 950 V-NAND PCIe M.2 SSD
– Windows 10 Pro
– Liquid CPU cooler – Deep Cool Captain 240 EX
– Case – Corsair Carbide Air Series 240 Case
– 1TB SSD capture scratch – Samsung 850
If I had it to do over again, I would have bought a MB with integrated graphics to drive the displays, then installed the GPU just for the editing work (computation / rendering / timeline playback / etc.).
FYI ~ If you have further critique… I may sell off or return some of the components and buy others ☺
I had a VERY hard time speccing this system out. The important features are not very well marketed; they sell on gimmicks lol. Thanks a MILLION for your help, you were a lifesaver.
Link to video breakdown of my build 🙂
PS. A few people wanted to see how and why I built the system the way I did. I tried my best to explain, but I have probably misrepresented some items in the video. If so, please don't be shy; let me know my blunder and I will redo the video. It was INSANELY hard to spec out the system, and there is TONS of misinformation on the internet.
-
Aaron Star
December 24, 2016 at 4:14 am
That system build should work very well with Vegas.
The RX 480 should be pretty close to an R9 290X. The 480 has support for more modern features (things like DP1.2a ports), uses less power, and has only slightly less compute power (36 vs. 44 compute units).
Here is a GPU grid I compiled from wiki and other tech sites:
https://1drv.ms/x/s!Au_dLvF4HRtrhP9j2EWb3NDQXNQ3Bw
I need to update it to the latest specs out there.
The CPU is the only thing I would maybe change in your configuration, but only if you were going to add another GPU, PCIe SSD, or a capture interface like a Blackmagic card. You want the extra PCIe lanes from the CPU to support another GPU at 16x. I understand about the CPU cost, however; the 6950X is in high-end Xeon price territory.
Since the newer X99 boards are now doing DDR4-2400, you may want to make sure your BIOS is up to date; you may be able to run stably at the higher frequency. Test your system overnight at the higher frequency to make sure it is stable.
Were you able to get your system loaded in NVMe mode? There should be an NVMe controller under Computer Management; that mode is what you are looking for with that M.2.
Run “winsat mem” from an admin command prompt to verify you are getting the memory bandwidth you think you should be getting.
“winsat disk” will test the M.2 performance; you should be seeing something like 1 GB/s sequential and a huge amount of random performance (much higher than SATA).
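For reference, both tests run from an elevated command prompt (I'm assuming here the 950 M.2 is mounted as C:; point -drive at whichever letter it actually has):
winsat mem
winsat disk -drive c
The summary printed at the end of each run has the bandwidth figures to compare against the rated specs.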
-
Matthew Jeschke
December 24, 2016 at 6:00 am
I have everything set up and am playing with an old project I put together. The timeline has 1080p H.264 clips in it, stored on an SSD. I have a track with the following Sony effects applied:
– Color Corrector
– Color Curves
– Soft Contrast
When I do a render to a Sony .MP4 file, it renders faster than I could actually play the clip! Super awesome.
However, when I do timeline playback at Best (Full) preview quality… the timeline skips.
I am concerned, as the biggest reason for the build was timeline performance. I want to be able to play back the project and preview it without having to do tons of renders (even RAM previews) to see the result.
I'm looking at your spreadsheet… thanks a million for sharing it ☺ Curious, would the Fury X be a much better card? In most of the columns it looks better… If so, what about the Fury X2?
-
Aaron Star
December 24, 2016 at 7:04 am
There have been several threads on the benefits of using an intermediate format for editing. You may want to use that project of yours as a test. There is significant overhead in decoding H.264 group-of-pictures (GOP) media back into individual frames: to show a frame mid-GOP, the decoder has to work forward from the last full keyframe.
Create a new project to convert all your source media to XDCAM-EX .MXF, Cineform .AVI (installed with the GoPro software), or even HDCAM-SR-Lite .MXF. For 4K media, convert to XAVC-Intra or Cineform .AVI.
Move all your H.264 media out of the main folder into a subfolder.
Move all the intermediate files into another subfolder.
Re-open the project and point Vegas at the new intermediate files.
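A minimal sketch of that folder shuffle from a command prompt, assuming everything lives in one media folder and your H.264 sources are .mp4 (adjust the folder names and extensions to match your files):
REM run from inside the project's media folder
mkdir h264_source intermediates
move *.mp4 h264_source
move *.mxf intermediates
move *.avi intermediates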
Do some playback tests to see what works the smoothest for what you want to do. When scrubbing the timeline you should see all the frames play back quickly, versus jumping from frame 1 to frame 50 with the H.264 files.
The Best (Full) setting uses a different scaler than Good, and does full computation on the frames. Preview does only what is needed to display the resolution being shown, and drops info to prioritize frame rate. I pretty much only use Best (Full) when color correcting or doing graphics, then use Preview or even Draft when editing, to maximize frame rate.
With Vegas, you learn that rendering is sort of like baking. You create with a less-than-perfect preview image and bake/render for 30 minutes to see the final result. You may need to re-render, but over time you learn what to check in a Best (Full) RAM preview to lessen the need to re-render finals.
Do the final render using a Best (Full) Sony AVC/XAVC (4K) profile that matches your project. If you are going to YouTube, you can upload XDCAM (smart-rendered for unmodified footage) and Cineform directly. YouTube and other services will throw away any extra bit rate.
-
Matthew Jeschke
December 24, 2016 at 7:14 am
Aaron, you are SUPER helpful, massive understatement ☺ Here are the scores from the tests you mentioned in your previous post.
Here is an extra one I ran with CrystalDiskMark.
I am still super confused about whether the Fury X would have made any difference with the timeline playback.
H.264 transcoding makes sense, as that stuff is so highly compressed. I noticed my higher-bitrate (less compressed) 5D Mark II footage had no trouble playing back with effects. It was my nasty Samsung S6 footage that dropped frames on timeline playback.
What is this a sign of? CPU or GPU power? I think you’re leaning towards CPU power?