Forum Replies Created

Page 1 of 2
  • Steve Grappone

    May 31, 2019 at 3:21 pm in reply to: MAM suggestions?

    Cantemo isn’t too bad. Should be a little less than CatDV. Depending on your use Axle isn’t a bad choice.

  • Steve Grappone

    May 24, 2019 at 5:27 pm in reply to: Suggestions for a 16 bay rackmount NAS?

    I’m not what you’d call a QNAP or “prosumer” user. My official role is to design video workflows and build their storage for a specific task.

    I will always try to help anyway. If you add the M.2 NVMe drives as cache and the system uses ZFS, it’s not going to help as much as you’d think.

    Another problem with cache when it comes to video is that you’re typically reading large files straight through, so if the data hasn’t been accessed recently it won’t be in your cache.

    I’m not sure about the unit you’re looking at, but a good alternative would be to have two different data pools:

    1.) HDDs for archives and other non-performance data
    2.) NVMe for your 4K workloads.

    creative.space, the company that I work with, has developed a hybrid system. You can get up to 288TB of HDD (24 × 12TB drives) and 64TB of NVMe (4 × 16TB NSF-1 drives).

    The advantage of creative.space is that you pay for it as a service, not as a device. They set up the system and perform preventive maintenance, rather than the usual reactive technical support that never seems to be open when you need it.

    Hope this helps.
    Steve

  • Steve Grappone

    May 14, 2019 at 4:50 am in reply to: Suggestions for a 16 bay rackmount NAS?

    Hi Randy,

    This solution looks like it could work. Unfortunately, I’ve never worked with this product. But if it has 16 drives and each drive has a max speed of 232MB/sec, then the raw aggregate drive throughput is 3,712MB/sec. There are bottlenecks, however, so expecting that speed is beyond optimistic. After RAID overhead, seek time, various filesystem settings, and a slower processor (per the spec sheet online), I would hope this system gets to 2,000MB/sec.

    Working with 4K ProRes and having 4 artists at a bitrate of approximately 225MB/sec each (225 × 4 = 900MB/sec), I want to say this system is fine.
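    To sanity-check those numbers, here’s a quick sketch. The 232MB/sec per drive and ~225MB/sec per stream figures come from the post above; the ~55% overhead factor is my own rough assumption to account for RAID, filesystem, and CPU losses, chosen to land near the 2,000MB/sec estimate.

```shell
# Back-of-the-envelope storage throughput math (all figures MB/sec)
drives=16; per_drive=232
raw=$(( drives * per_drive ))      # raw aggregate: 16 x 232 = 3712
est=$(( raw * 55 / 100 ))          # assume ~55% survives RAID/FS/CPU overhead
artists=4; stream=225
need=$(( artists * stream ))       # demand: 4 artists x 225 = 900
echo "raw=${raw} estimated=${est} needed=${need}"
```

    Since the estimated real-world throughput (~2,000MB/sec) is more than double the 900MB/sec demand, the box has headroom.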

    I might make one modification. Rather than a switch, I’d add another 10GbE card and make home runs. Get your internet via Wi-Fi or another Cat 6 run.

    Hope this helps
    Steve

  • Steve Grappone

    March 29, 2019 at 12:46 pm in reply to: CIFS mount – poor performance

    You seem to know your technology pretty well, and it seems you’ve eliminated internal storage as the culprit. You’ll find Linux testing a little tricky due to the way each test uses the CPU. For example, dd is a single-threaded app, so you’re bound to one core of the CPU. I’d look into testing with fio.
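    For example, a minimal fio run (the target path is a placeholder; large sequential reads with a few parallel jobs roughly approximate multi-stream video playback):

```shell
# Sequential 1M reads, 4 parallel jobs. --direct=1 bypasses the page
# cache so you measure the disks, not RAM. fio lays out a 4G test file
# first if it doesn't already exist.
fio --name=seqread \
    --filename=/mnt/share/fio-test.dat \
    --rw=read --bs=1M --size=4G \
    --numjobs=4 --direct=1 --ioengine=libaio \
    --group_reporting
```

    Run it once locally on the server and once over the SMB mount; comparing the two tells you whether the loss is in the storage or the protocol/network path.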

    Your network settings seem to be good too. I’d test the connection with iperf, and get rid of the Intel cards and go with Mellanox lol

    If iperf proves the networking is good, then we’re down to the client or SMB.

    Within your smb.conf file, make sure asynchronous I/O is enabled. In Samba this is controlled by the “aio read size” and “aio write size” parameters; setting them to 1 uses async I/O for every request.
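    For reference, a minimal smb.conf fragment, assuming a reasonably recent Samba (the values are byte-size thresholds, so 1 effectively means “always use async I/O”):

```ini
[global]
    # use asynchronous reads/writes for any request of 1 byte or more
    aio read size = 1
    aio write size = 1
```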

    Let me know if that helps.

  • Hi Bob,

    You mentioned that nobody you knew of makes a system with NVMe, so I figured I’d point you to the creative.space system that is all NVMe. The DeusEX has NVMe too, and the top 4 bays in the 24-bay system can also take NVMe (U.2) drives.

    In fact, Local Hero Post in Santa Monica threw out their OpenDrives system due to performance issues.

    https://www.digitalglue.com/case-study-creative-space-drives-local-hero-post/

    Check out the creative.space breathless: 576TB of NVMe in 1RU and 10 million IOPS. Crazy fast!

  • Steve Grappone

    November 30, 2018 at 4:29 pm in reply to: SFP+ Cable Issues, Qbiquit and Mellanox

    Hmmm. Well, the writes being faster actually doesn’t surprise me too much. I think QNAP uses ZFS, which is a copy-on-write filesystem. That means it uses RAM and/or fast SSDs to cache the data, then writes it to the actual disks.

    Do you have anything else plugged in to the iMac that uses TB? If so try testing without any other TB device connected.

    Do you have another system with TB that you could also test with?

  • Steve Grappone

    November 30, 2018 at 1:34 pm in reply to: SFP+ Cable Issues, Qbiquit and Mellanox

    Hi Kevin,

    Well, that was unfortunate. Typically, MTU settings are the issue. Is there a switch involved? If so, send me the make and model, and remember that MTU settings also need to be made on the switch.

    Also, I’d run iperf in both directions: server to client, then client to server. iperf version 2 seems to be best and is also what Mellanox supports, so if you open a case with them they’ll want to see results from that version.

    The point of this test is to see if performance is lost on both send and receive, as some systems are set up to receive and others to send.

    iperf works on most OSes too. Pretty simple to use:

    Server: iperf -s
    Client: iperf -c server_ip -P 4

    The capital -P sets how many parallel streams to run; in this case we’ll run 4. I’d also try 3 and the plain default of 1. Lowercase -p specifies the port if it differs from the default. Adding -r to the client command also runs the reverse (server-to-client) pass, which covers both directions.

  • Steve Grappone

    November 29, 2018 at 9:05 pm in reply to: SFP+ Cable Issues, Qbiquit and Mellanox

    Sometimes it can be as simple as an MTU setting. I typically do not use 1500; I use jumbo frames (9000 MTU).
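    If you want to try jumbo frames, here’s a sketch for Linux (eth0 and the target IP are placeholders; run as root, and remember the switch and every NIC in the path need the same MTU):

```shell
# Set the interface MTU to 9000 (jumbo frames)
ip link set dev eth0 mtu 9000

# Verify end-to-end: payload 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
# -M do forbids fragmentation, so this fails if any hop has a smaller MTU.
ping -c 3 -M do -s 8972 192.168.1.10
```

    If the ping fails with “message too long,” something in the path is still at a smaller MTU.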

    I’d run iperf and post the results.

    For this type of question in today’s world of storage, it’s not how much space you need but rather how fast you can access it. For space alone, you could probably use two 12TB drives and be OK.

    If you want to play back DPX frames and expect spinning drives to gather 24+ files a second, you’re going to be very disappointed.

    So, if you want people to edit RAW and to work collaboratively, then you need a 10Gig network, because 1Gig has a total usable throughput of about 110MB/sec and your bitrate would probably be twice that. Luckily, 10Gig networking is pretty affordable these days.
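    Where that 110MB/sec comes from: a rough rule of thumb is that about 88% of the line rate survives Ethernet/IP/TCP overhead (an approximation, not a spec value):

```shell
# Usable throughput in MB/sec: line rate (Gbit/s) x 1000 x 0.88 / 8 bits
for gbits in 1 10; do
  echo "${gbits}GbE ~ $(( gbits * 1000 * 88 / 100 / 8 )) MB/sec usable"
done
```

    That gives roughly 110MB/sec for 1GbE and 1,100MB/sec for 10GbE, which is why 10Gig is the floor for collaborative RAW work.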

    Next would be the server. You can purchase your own (bare metal) and install NAS software, you could purchase an affordable QNAP, or you could rent a creative.space system and return it when done.

    All are fine options. Hope this helps.

