Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

Storage & Archiving: Is link aggregation the solution?

  • Steve Modica

    June 3, 2011 at 12:20 am

    10Gb itself goes line rate. You can fire up a benchmark like iperf and show that pretty easily. The problem lies in pulling data off of disk, segmenting it and getting it out to the network. There’s significant overhead in that (and reversing the process on the other side).
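
    (For illustration only, a minimal memory-to-memory throughput test in the spirit of iperf; no disk is involved, so it isolates the raw network path. This is just a sketch, not anything from this thread: the port, buffer sizes, and hostnames below are placeholders.)

        # net_throughput.py - minimal memory-to-memory throughput check (Python 3)
        # Illustration only; port and sizes are arbitrary placeholders.
        import socket, sys, time

        PORT = 5201          # arbitrary test port
        CHUNK = 1 << 20      # 1 MiB buffer; data never touches disk
        TOTAL = 2 << 30      # client pushes 2 GiB total

        def server():
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            received, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.time() - start
            print("%.0f MB/s received" % (received / secs / 1e6))

        def client(host):
            buf = b"\0" * CHUNK
            conn = socket.create_connection((host, PORT))
            sent = 0
            while sent < TOTAL:
                conn.sendall(buf)
                sent += CHUNK
            conn.close()

        if __name__ == "__main__":
            # run "python net_throughput.py server" on one station,
            # then "python net_throughput.py client <server-ip>" on the other
            server() if sys.argv[1] == "server" else client(sys.argv[2])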

    Our 10Gb products are doing segmentation offload now and receive side coalescing, so that helps. I have not run new benchmarks to see if things have gotten a lot better.

    I think the main improvements need to be made in the Samba/AFP code. This isn’t an Apple issue either; they all have these limitations.

    10Gb running a block protocol like FCoE or even iSCSI goes pretty close to line rate.

    Steve Modica
    CTO, Small Tree Communications

  • Steve Modica

    June 3, 2011 at 12:22 am

    One more thing:
    AFP (and Samba) do a lot of consistency checking since they are shared protocols. So they stat files and directories very frequently. When we put together blazeFS a long time ago, reducing that was one of our primary goals. That’s a big contributor to the overhead.

    You can watch all that happen with tcpdump.
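
    (A rough way to see that metadata cost without tcpdump is to time a stat of every file on the mounted share. This is only a sketch; the share path below is a placeholder, not a real mount.)

        # stat_overhead.py - rough look at per-file metadata cost on a network share
        # Illustration only; SHARE is a placeholder path to a mounted SMB/AFP volume.
        import os, time

        SHARE = r"\\server\share\media"   # placeholder; point at a real mount

        count = 0
        start = time.time()
        for root, dirs, files in os.walk(SHARE):
            for name in files:
                os.stat(os.path.join(root, name))   # one metadata round trip per file
                count += 1
        elapsed = time.time() - start
        print("%d stats in %.2fs (%.1f ms each)" % (count, elapsed, 1000.0 * elapsed / max(count, 1)))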

    Steve

    Steve Modica
    CTO, Small Tree Communications

  • Jesse, Dijifi

    June 3, 2011 at 12:55 am

    Thank you Bob and Alex,

    Yes, I was just looking to increase our network bandwidth. Capturing and editing directly to an actual SAN supposedly isn’t supported by the machine we use and its accompanying software (which requires that we first capture a raw file, then run frame pulldown on the captured file to produce the final file that we edit): https://moviestuff.tv/8mm_sniper_hd.html

    Plus, we don’t necessarily need a SAN since we are only editing on one machine, not multiple.

    And DiJiFi is the name of my company! (https://www.dijifi.com)

  • Jesse, Dijifi

    June 3, 2011 at 1:01 am

    Wow, thanks so much Steve and Alex. This is really interesting information, though a lot of it is over my head.

    From what I can tell, 10 Gbps products are extremely expensive, so it seems this is not an option for me. I was hoping to spend less than $1,000 on simply speeding up my network between stations, but it seems this is currently too new a technology.

    I will certainly try the jumbo frames, and will consider the other options for a while.

    Thanks again, you’re heroes!

  • Alex Gerulaitis

    June 3, 2011 at 1:28 am

    [Jesse, DiJiFi] “From what I can tell, 10 Gbps products are extremely expensive, so it seems this is not an option for me. I was hoping to spend less than $1,000 on simply speeding up my network between stations, but it seems this is currently too new a technology.”

    From what I understand, you could do it for $1110: two Intel 10Gigabit AT2 Server Adapters for about $550 each, a cross-over Cat6 cable ($10 or so), and you are all set – as long as you are only interested in speeding up a file transfer between two stations. According to Steve, you should see 300MB/s if the drives can handle it. Like Steve said, the weak link (after you upgrade to 10GigE) will be your dual-drive RAID0 (200-280MB/s are most common speeds).

    Or, you could go with Small Tree Comms for about $1000-2000 more and get even higher speeds.

    Alex (DV411)

  • Steve Modica

    June 3, 2011 at 1:37 am

    [Alex Gerulaitis] “From what I understand, you could do it for $1110: two Intel 10Gigabit AT2 Server Adapters for about $550 each, a cross-over Cat6 cable ($10 or so), and you are all set – as long as you are only interested in speeding up a file transfer between two stations. According to Steve, you should see 300MB/s if the drives can handle it. Like Steve said, the weak link (after you upgrade to 10GigE) will be your dual-drive RAID0 (200-280MB/s are most common speeds).”

    Two comments:
    The Intel cards won’t work in a Mac without a driver, so the Small Tree cards are the only option there.
    Crossover cables are no longer required (since gigabit).

    [Alex Gerulaitis] “Or, you could go with Small Tree Comms for about $1000-2000 more and get even higher speeds.”

    I think his limitation would be the RAID0 stripe. Before moving to a faster network, they should consider more drive spindles on the edit machines.

    Steve Modica
    CTO, Small Tree Communications

  • Alex Gerulaitis

    June 3, 2011 at 1:48 am

    [Steve Modica] “The Intel cards won’t work in a Mac without a driver, so the Small Tree cards are the only option there.
    Crossover cables are no longer required (since gigabit).”

    Again, great info. I’ve been using patch cables for direct GigE transfers but never knew until now why they worked. 🙂

    [Steve Modica] “I think his limitation would be the RAID0 stripe. Before moving to a faster network, they should consider more drive spindles on the edit machines.”

    There might be more to it:

    [Jesse, DiJiFi] “We need it to be faster, though, as 100 GB of transfers will still take close to an hour.”

    If my calculations are right, 100GB/hr is about 28MB/s which is roughly 25% of the GigE line rate; he should be able to get much higher speeds by optimizing his NICs, possibly using jumbo frames and a faster switch.
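
    (The arithmetic, spelled out as a quick sketch; the ~110 MB/s figure used below is an assumed real-world GigE ceiling, not something from this thread.)

        # back-of-envelope check of the numbers above
        gigabytes, seconds = 100.0, 3600.0
        observed = gigabytes * 1000.0 / seconds       # ~27.8 MB/s for 100 GB per hour
        practical_gige = 110.0                        # assumed real-world GigE ceiling, MB/s
        print("%.1f MB/s, %.0f%% of practical GigE" % (observed, 100 * observed / practical_gige))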

    Alex (DV411)

  • Steve Modica

    June 3, 2011 at 1:59 am

    [Alex Gerulaitis] “If my calculations are right, 100GB/hr is about 28MB/s which is roughly 25% of the GigE line rate; he should be able to get much higher speeds by optimizing his NICs, possibly using jumbo frames and a faster switch.”

    Good catch. I actually scanned that and just read it as “100MB/sec” assuming he was talking about gigabit speed.

    He should be seeing a pretty solid 90MB/sec if he’s moving a large file.
    30MB/sec is really horrible.

    Steve Modica
    CTO, Small Tree Communications

  • Bob Zelin

    June 3, 2011 at 2:23 am

    Jesse writes in the original post –
    “I’ve read a lot of articles and threads and feel that maybe we could use link aggregation between the 3 stations (2 for transferring and 1 for editing). The editing station has a fast 12 TB setup over 2 G-Speed eS units in RAID 5 with 8 2 TB disks. The transfer stations are simpler setups of 2 1 TB disks in a RAID 0 stripe. We simply share the editing drive (12 TB) over the network to receive the transferred files while we are still editing off the drive.”

    REPLY – to extract your text – “we could use link aggregation between the 3 stations (2 for transferring and 1 for editing).” Maybe I am missing something, but it sounds like you want to have THREE STATIONS that are sharing information. This is called SHARED STORAGE. You have a G-Speed eS. You ain’t gonna do nothing with the G-Speed eS, other than what you are doing right now. You don’t need a “faster switch,” as you will not get better speeds with regular Ethernet unless you simply enable jumbo frames. A “faster switch” is a fantasy. If you want to spend money (which you don’t), you buy 10Gig cards for direct connection to get MUCH faster speeds, but you STILL don’t have shared storage (sharing 3 systems with one drive array). If you want SHARED STORAGE, then follow my other post in this thread.

    Jesse – this is what you want – you want to have a single drive array that all three of your systems can access, at fast speeds. Simple, right? It costs money to do this.

    Bob Zelin

  • Jesse, Dijifi

    June 3, 2011 at 3:23 am

    Again, thanks to you both!

    Interesting, regarding the direct connection (though I need to connect 2 ‘media ingest’ stations to a 3rd editing station, not just one to one here). We are on Windows machines only, by the way.

    And yes, it seems I must have a pretty horrible speed then, even though our switches and desktops support 1 Gbps. I’ll have to try jumbo frames and see what kind of improvement takes place. 90 MB/s might be enough for our purposes, though 100-200 MB/s for a $550 investment in each of the 3 stations may pay for itself in the long run. I am also able to add extra drives I have available to increase the RAID 0 arrays of the two ingest stations, which I will do soon.
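
    (If it helps, one rough way to confirm jumbo frames are actually working end to end, once they are enabled on the NICs and the switch, is a single don't-fragment ping with a near-9000-byte payload. This is only a sketch; the address below is a placeholder, and the flags are the standard Windows ping options.)

        # jumbo_check.py - verify a jumbo-frame path between two Windows stations
        # Illustration only; HOST is a placeholder for the other station's address.
        import subprocess

        HOST = "192.168.1.10"   # placeholder
        PAYLOAD = 8972          # 9000-byte MTU minus 28 bytes of IP/ICMP headers

        # Windows ping: -f sets "don't fragment", -l sets payload size, -n 1 sends one echo
        result = subprocess.run(["ping", "-f", "-l", str(PAYLOAD), "-n", "1", HOST],
                                capture_output=True, text=True)
        print(result.stdout)
        if "TTL=" in result.stdout:
            print("jumbo frames look good end to end")
        else:
            print("no unfragmented reply; check the jumbo/MTU setting on NICs and switch")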

