Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.


  • CIFS mount – poor performance

    Posted by Ahmed Ali on March 29, 2019 at 10:06 am

    We have an Arriscan that is supposed to scan .dpx files directly to a CIFS share from a Windows 10 workstation. The connection between the two is direct 10GigE over Cat 7, with no switch. The NICs on both machines are Intel X550s. Jumbo frames are enabled at 9014 on Windows and 9000 on CentOS 7.
    For some reason the performance is poor. It starts scanning at the normal 6.7 fps (the same speed as when scanning to the internal storage), but then drops to 1.9 fps. Copying from the internal storage through the Linux GUI is really slow at about 10 MB/s. A command-line copy is faster, but still poor. Testing with dd from /dev/zero with bs=24M and count=600 (roughly like writing 600 3K files) gives much better performance at 437 MB/s, which is more than we need but still far from the expected 10GigE speed.
    Any suggestions?
    I checked wsize, and it looks like the default is already the maximum, although I didn't try to set it explicitly.
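
    A sketch of that write test and of checking the negotiated wsize (the mount point /mnt/scans and the file name are placeholders):

        # write 600 x 24 MB blocks of zeros to the CIFS mount
        dd if=/dev/zero of=/mnt/scans/ddtest.bin bs=24M count=600

        # show the rsize/wsize the kernel actually negotiated for the mount
        grep cifs /proc/mounts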

  • 6 Replies
  • Steve Grappone

    March 29, 2019 at 12:46 pm

    You seem to know your technology pretty well, and it seems you've already eliminated the internal storage as the culprit. You'll find testing on Linux a little tricky because of the way the test uses the CPU: dd, for example, is a single-threaded application, so you're bound to one CPU core. I'd look into testing with fio (see the sketch below).
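
    A rough fio invocation for that kind of multi-job sequential-write test (the /mnt/scans mount point, job name, and sizes are placeholders):

        # sequential-write test with 4 parallel jobs and 24 MB blocks
        # against the CIFS mount
        fio --name=cifs-seqwrite --directory=/mnt/scans --rw=write \
            --bs=24M --size=2G --numjobs=4 --ioengine=psync \
            --group_reporting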

    Your network settings seem to be good too. I'd test the connection with iperf. (And get rid of the Intel cards and go with Mellanox, lol.)
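
    A minimal iperf3 run between the two boxes might look like this (192.168.10.2 is a placeholder for the other machine's address):

        # on one box: start the server
        iperf3 -s

        # on the other box: run a 30-second test with 4 parallel streams
        iperf3 -c 192.168.10.2 -t 30 -P 4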

    If iperf proves the networking is good, then we're down to the client or SMB.

    Within your smb.conf file, make sure async I/O is enabled (aio read size = 1 and aio write size = 1).
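
    A quick way to check what smbd is actually using (a sketch; on recent Samba releases these parameters already default to 1):

        # dump the effective Samba configuration, including defaults,
        # and look at the async I/O thresholds
        testparm -sv 2>/dev/null | grep -i "aio"

        # if they come back as 0, add these lines to the [global]
        # section of smb.conf:
        #   aio read size = 1
        #   aio write size = 1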

    Let me know if that helps.

  • Bob Zelin

    March 29, 2019 at 5:36 pm

    I do not know the Arriscan product.
    However, the first thing you should do is directly connect your two PCs via the Intel X550 cards and run a simple speed test between the two. That will tell you in seconds what your transfer speed is, and whether the problem is with the drive array you are trying to write to.

    The Intel X550 cards are excellent, and the Mellanox 10G cards will not outperform the Intel X550 cards. THAT at least I have experience with.

    Bob Zelin

    Bob Zelin
    Rescue 1, Inc.
    bobzelin@icloud.com

  • Ahmed Ali

    March 30, 2019 at 2:55 pm

    Will adjust the AIO settings after passing the physical-layer test. Thanks for the suggestion.

  • Ahmed Ali

    March 30, 2019 at 3:56 pm

    Actually I’m connected directly. No switch.
    The weird thing is that when I tested the direct throughput with iperf3, I only got 3.5 to 4 Gbps!!!
    With everything else the same, I booted the Windows box into Ubuntu, repeated the test, and got the full 10 Gbps!!!
    So the problem is obviously on the Windows box, even though it was all configured by Dell. I thought it might be a driver issue and tried updating the driver and the BIOS, but that changed nothing. For some reason, Windows 10 is unable to use the whole line bandwidth.

  • Ahmed Ali

    March 30, 2019 at 4:06 pm

    Found it!!!

    We disabled the Interrupt Moderation Rate setting on the NIC's configuration page and got the full 10 Gbps bandwidth.
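
    The same setting can usually also be toggled from an elevated PowerShell prompt (a sketch; the adapter name "Ethernet 2" is a placeholder, and the exact display name varies by driver):

        # list the adapter's advanced properties to find the exact name
        Get-NetAdapterAdvancedProperty -Name "Ethernet 2"

        # disable interrupt moderation (Intel drivers typically expose
        # "Interrupt Moderation" and "Interrupt Moderation Rate" as
        # separate properties)
        Set-NetAdapterAdvancedProperty -Name "Ethernet 2" `
            -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"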

  • Bob Zelin

    March 31, 2019 at 12:48 am

    I know the Intel X550 pretty well, and I have NO IDEA where
    "Interrupt Moderation Rate" even is in the Properties tab for the X550 setup.

    Bob Zelin

    Bob Zelin
    Rescue 1, Inc.
    bobzelin@icloud.com
