Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

VEGAS Pro: I would like to increase my Rendering Speed, it seems pretty low to me

  • Norman Black

    January 30, 2015 at 6:34 pm

    [Sorin Nicu] “So why didn’t Sony develop their own encoders to use the GPU either? Is it lack of interest in investing money, just in taking it?
    nVidia and Intel have free SDKs and libraries for building encoders on their hardware. Other companies use those successfully; even free apps make use of them – maybe they should just hire the guy from Handbrake, for example.”

    It is uncommon for NLE developers to develop their own encoders. They buy from companies like MainConcept; even Adobe does this. SCS has, however, developed its own Sony AVC encoder.

    SCS/Sony has used the Intel QuickSync hardware AVC encoder SDK; the Sony AVC encoder supports it. Maybe SCS will support the Nvidia and/or AMD hardware AVC encoders as well – I have actually entered a feature request with SCS in this regard. For me this is mostly for temp/test encodes. ALL the hardware encoders, and even the MainConcept CPU and Sony AVC encoders, pale in comparison to x264 in visual quality.
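
    If you want to compare them yourself, here is a minimal harness (a sketch only – it assumes an ffmpeg build with the Quick Sync and NVENC encoders enabled, and the file names are made up):

    ```python
    # Encode the same clip with the x264 CPU encoder and the Intel Quick Sync /
    # Nvidia NVENC hardware encoders via ffmpeg, timing each run.
    # Assumes an ffmpeg build with h264_qsv and h264_nvenc; names are made up.
    import subprocess
    import time

    SRC = "test_clip.mp4"  # hypothetical test clip
    ENCODERS = {
        "x264":  ["-c:v", "libx264", "-preset", "medium", "-b:v", "8M"],
        "qsv":   ["-c:v", "h264_qsv", "-b:v", "8M"],
        "nvenc": ["-c:v", "h264_nvenc", "-b:v", "8M"],
    }

    for name, args in ENCODERS.items():
        start = time.time()
        subprocess.run(["ffmpeg", "-y", "-i", SRC, *args, f"out_{name}.mp4"],
                       check=True)
        print(f"{name}: {time.time() - start:.1f} s")
    ```

    Encode at the same bitrate and compare the outputs side by side – that is where the quality gap shows.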

    Which brings me to Handbrake. Handbrake, like SCS, supports QuickSync but not Nvidia/AMD. Nobody at Handbrake has ever developed an encoder; they just use the open-source encoders that are available, which include x264 for AVC/H.264 encoding. SCS cannot include x264 in Vegas, because x264 is GPL-licensed.

    I am not an expert in the legal issues around the GPL, but I believe SCS could write an encoder plug-in for x264, make that plug-in’s source code available for us to download, and let Vegas itself remain proprietary. I have even suggested this to SCS.

  • Sonic 67

    January 30, 2015 at 9:09 pm

    [Norman Black] “It is uncommon for NLE developers to develop their own encoders.”

    Just two examples: Cyberlink and Pinnacle (Corel) have their own encoders, based on the AMD, nVidia and Intel SDKs.
    And even if that were the case, why isn’t Sony asking DivX to update the MainConcept encoders? They don’t want to pay, that’s why.
    As for “GPU doesn’t match CPU quality” – that’s just something people say because they assume it has to be true.
    The same software algorithms can run on a CPU or on CUDA/OpenCL cores, so the result should be identical.

  • Norman Black

    January 31, 2015 at 2:09 am

    [Sorin Nicu] “And even if that were the case, why isn’t Sony asking DivX to update the MainConcept encoders? They don’t want to pay, that’s why.”

    Do you have proof they have not asked MainConcept to update support?
    “Don’t want to pay” – do you have proof of that?
    MainConcept has changed hands a couple of times in the last few years. Not much is going on there at all.

    [Sorin Nicu] “As for ‘GPU doesn’t match CPU quality’ – that’s just something people say because they assume it has to be true.”

    This has been documented time and again.

    https://compression.ru/video/codec_comparison/h264_2012/

    https://www.behardware.com/articles/828-1/h-264-encoding-cpu-vs-gpu-nvidia-cuda-amd-stream-intel-mediasdk-and-x264.html

    Here is a 30-minute talk by the primary x264 developer about the problems of GPU encoding. The short story: video encoding is not a very parallel task, and forcing GPU use to get parallelism forces compromises.

    https://www.youtube.com/watch?v=uOOOTqqI18A

    [Sorin Nicu] “The same software algorithms can run on a CPU or on CUDA/OpenCL cores, so the result should be identical.”

    Not really. Watch the talk by the x264 developer. The real point is that GPU encoders are designed for speed; they make compromises in their algorithms to get the parallelism, and thus the speed, on a GPU.

    Even within the same company, MainConcept’s CPU encoder has better quality than either its OpenCL or CUDA encoders.

    Of course, it has to be said time and again: at high bitrates almost all encoders look the same, even though the SSIM and/or PSNR scores will differ. As one lowers the bitrate, the best encoders start to shine.
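
    If you would rather have numbers than eyeballs, ffmpeg’s ssim and psnr filters will score an encode against its source. A minimal sketch (assumes ffmpeg on the PATH; the file names are made up):

    ```python
    # Score each encode against the original with ffmpeg's ssim filter
    # (the psnr filter is used the same way). The scores are printed to
    # stderr; "-f null -" decodes and discards the video.
    # Assumes ffmpeg on PATH; file names are made up.
    import subprocess

    REFERENCE = "source.mp4"  # hypothetical original clip
    for encoded in ("out_x264.mp4", "out_qsv.mp4", "out_nvenc.mp4"):
        subprocess.run(["ffmpeg", "-i", encoded, "-i", REFERENCE,
                        "-lavfi", "ssim", "-f", "null", "-"],
                       check=True)
    ```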

    Many think GPUs are somehow faster because of all the GPU-accelerated features being added and the speedups being claimed. Core for core, GPUs are *MUCH* slower than CPUs. What a GPU has is massive parallelism, and only if an algorithm maps onto that parallelism in an advantageous manner do you get the speed.

    Image-editing tasks like the effects in Vegas are an obvious example; they are typically very parallel, since each pixel can be processed independently. The toy sketch below shows the contrast.
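
    To make that contrast concrete, here is a toy sketch – pure Python, nothing to do with any real encoder, just the dependency structure:

    ```python
    # Toy contrast: a per-pixel effect has no data dependencies and maps
    # cleanly onto many cores, while P-frame prediction depends on the
    # previous frame, so frames inside a GOP must be processed in order.
    from concurrent.futures import ProcessPoolExecutor

    def brighten(pixel: int) -> int:
        """Per-pixel effect: depends only on its own input value."""
        return min(pixel + 40, 255)

    def main() -> None:
        frame = [10, 200, 128, 255]  # toy "frame" of pixel values
        with ProcessPoolExecutor() as pool:
            # Embarrassingly parallel: every pixel could go to its own core.
            print(list(pool.map(brighten, frame)))

        frames = [[10, 20], [12, 22], [15, 25]]  # toy GOP: I-frame + 2 P-frames
        reference = frames[0]  # the I-frame stands alone
        for current in frames[1:]:
            # Sequential chain: each P-frame's residual needs the frame
            # before it, so this loop cannot simply be fanned out.
            residual = [c - r for r, c in zip(reference, current)]
            print(residual)
            reference = current

    if __name__ == "__main__":
        main()
    ```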

  • Sonic 67

    January 31, 2015 at 4:36 pm

    [Norman Black] “Video encoding is not a very parallel task, and forcing GPU use to get parallelism forces compromises.”
    That’s a fallacy. A video file has recurring points that are independent of the others (the I-frames). The frames between them are not easy to parallelize, but how hard is it to read ahead 200-1000 of those blocks (the GOPs between I-frames) and hand them to the GPU cores to work on in parallel? See the sketch below.
    Parallelism already works across 2, 4, 6, or 12 CPU cores, so why assume it won’t work at the higher core counts of GPUs?
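
    For what it’s worth, the chunked idea is easy to try at file level with any encoder. A rough sketch – it assumes ffmpeg on the PATH, and all file names are made up:

    ```python
    # GOP-level parallelism at file granularity: split the source at
    # keyframes (stream copy, no re-encode), encode every chunk on its own
    # worker, then concatenate. Assumes ffmpeg on PATH; names are made up.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    SRC = "input.mp4"      # hypothetical source clip
    WORK = Path("chunks")  # hypothetical working directory

    def encode(chunk: Path) -> Path:
        """Re-encode one independent chunk with x264."""
        out = chunk.with_suffix(".enc.mp4")
        subprocess.run(["ffmpeg", "-y", "-i", str(chunk),
                        "-c:v", "libx264", "-crf", "20", str(out)], check=True)
        return out

    def main() -> None:
        WORK.mkdir(exist_ok=True)
        # 1. Split: the segment muxer cuts at keyframes when stream-copying.
        subprocess.run(["ffmpeg", "-i", SRC, "-c", "copy", "-f", "segment",
                        "-reset_timestamps", "1",
                        str(WORK / "chunk%04d.mp4")], check=True)
        chunks = sorted(WORK.glob("chunk????.mp4"))

        # 2. Encode the chunks in parallel -- each one is independent.
        with ProcessPoolExecutor() as pool:
            encoded = list(pool.map(encode, chunks))

        # 3. Concatenate the encoded chunks without another re-encode.
        listing = WORK / "list.txt"
        listing.write_text("".join(f"file '{p.resolve()}'\n" for p in encoded))
        subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                        "-i", str(listing), "-c", "copy", "output.mp4"],
                       check=True)

    if __name__ == "__main__":
        main()
    ```

    (You give up a little rate-control efficiency at the chunk boundaries, but each chunk really is independent.)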

    CUDA encoding was different from the beginning; it is basically C++ code running on a different kind of processor. The fact that nVidia didn’t get it perfect in 2011 is irrelevant today – they updated the CUDA encoder constantly until 2014.
    Only Sony is stuck in 2010, and that was my original comment.

    Also, since 2014 the encoders included in Intel Haswell and second-generation nVidia Maxwell chips have focused on quality.
    Assuming that “GPU encoding is bad” just because a certain OpenCL implementation sucked in 2012 is just not OK.
