Is link aggregation the solution?
Posted by Jesse, Dijifi on June 2, 2011 at 8:15 pm
Hi there,
I run a consumer film transfer studio (8mm and 16mm), and the way we transfer film requires that the digital file (1080p Motion JPEG, Blackmagic codec, ~40GB/hr) be captured on one system and then moved to a different system for editing to keep things efficient. Timing is of the essence for us, so the transfer stations need to be transferring and the editing stations need to be editing all day, non-stop. Once a file is captured, we want to move it onto another system ASAP. We used to do this with SATA II drives in an external SATA II dock. The speed was okay, but the drives died too often, so we eventually moved to Gigabit Ethernet, which outpaced the SATA II dock system. We need it to be faster still, though, as 100 GB of transfers can take close to an hour.
I’ve read a lot of articles and threads and feel that maybe we could use link aggregation between the 3 stations (2 for transferring and 1 for editing). The editing station has a fast 12 TB setup across two G-Speed eS units in RAID 5 (eight 2 TB disks). The transfer stations are simpler: two 1 TB disks in a RAID 0 stripe. We simply share the editing drive (12 TB) over the network to receive the transferred files while we are still editing off that same drive.
Anyone have experience with this?
Steve Modica
June 2, 2011 at 8:22 pm
Link aggregation won’t work in a scenario like this. It’s basically a socket-balancing mechanism: when clients connect to a server, each client opens a socket, and that socket gets assigned to one port. As more clients come in, their sockets are spread across the ports and the load is balanced.
On the client side, there’s only ever one socket, so you won’t see any additional bandwidth. Does that make sense?
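To make that concrete, here is a minimal sketch of the per-flow port selection a bonded link performs. The MAC addresses, link names, and hash are invented for illustration (real bonding drivers typically XOR address bits), but the effect is the same: one conversation always lands on one physical link.

```python
# Toy model of link aggregation's port selection: each flow is hashed
# once, so a single client's transfer always rides the same 1Gb link.
import hashlib

LINKS = ["en0", "en1"]  # two aggregated gigabit ports (names assumed)

def port_for_flow(src_mac: str, dst_mac: str) -> str:
    """Deterministically map one conversation to one physical link."""
    digest = hashlib.md5(f"{src_mac}->{dst_mac}".encode()).digest()
    return LINKS[digest[0] % len(LINKS)]

# One client, one flow: every packet takes the same link, so that
# client never sees more than a single link's bandwidth.
print(port_for_flow("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:01"))

# Many clients: flows spread across the links, which is why a *server*
# with many open sockets does see the aggregate bandwidth.
for i in range(4):
    client = f"aa:aa:aa:aa:aa:{i:02x}"
    print(client, "->", port_for_flow(client, "bb:bb:bb:bb:bb:01"))
```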
Steve Modica
CTO, Small Tree Communications
Jesse, Dijifi
June 2, 2011 at 8:40 pm
Oh, okay. I think so.
From what I’d read, it seemed that link aggregation could double or triple the bandwidth of Gigabit Ethernet between a switch and a storage array. I thought maybe that connection could be made between a switch and multiple desktops (and their attached storage arrays), but I guess it has to be a standalone storage array designed for link aggregation?
So just to be clear, there is no way to use link aggregation on the client end in order to increase bandwidth?
Thanks for your help!
Alex Gerulaitis
June 2, 2011 at 9:39 pm
[Steve Modica] “On the client side, there’s only ever one socket, so you won’t see any additional bandwidth. Does that make sense?”
Does this still hold true in a hypothetical scenario where the links are trunked directly from the server to the client (no switch) via LACP?
I read that LACP presents multiple physical links as a single logical channel. I’m not sure how it works in practice, though, i.e. whether something like a file transfer would be able to use multiple physical links at once.
Thanks!
Alex (DV411)
Steve Modica
June 2, 2011 at 10:04 pm
The links are presented as a single logical channel. That is true.
The 802.3ad spec requires that a “conversation” exist on one port. (This is to maintain TCP ordering: TCP stacks cannot deal with lots of out-of-order packets; that’s an exception condition.) So what happens is that a socket opens and gets assigned to a port. The only time it will ever hit another port is if the first port fails.
Many people think LACP acts like a striping utility with the packets. This can’t work, because packets 1, 2, 3, and 4 would end up on different ports and would arrive “out of order”. The stack would go bonkers.
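Here is a toy simulation of that reordering (the latencies are invented): stripe packets round-robin over two links with slightly unequal delays and watch the arrival order scramble.

```python
# Why round-robin packet striping breaks TCP: two links with unequal
# latency deliver packets out of order at the receiver.

LINK_LATENCY_MS = [1.0, 1.6]  # two gigabit links, slightly unequal delay

def striped_arrival_order(num_packets: int) -> list[int]:
    arrivals = []
    for seq in range(num_packets):
        link = seq % 2                 # round-robin striping across links
        send_time = seq * 0.4          # a packet leaves every 0.4 ms
        arrivals.append((send_time + LINK_LATENCY_MS[link], seq))
    return [seq for _, seq in sorted(arrivals)]

print(striped_arrival_order(8))  # [0, 2, 1, 4, 3, 6, 5, 7] -- out of order
```

Every swap in that list is work the receiving stack has to do before it can hand bytes to the application.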
At SGI, we actually ran this experiment and wrote a driver to do it. It took three CPUs to handle two striped gigabit ports, and if we added a third port, it couldn’t go any faster (the reordering caused a bigger slowdown than the new port’s additional bandwidth could make up).
This is a problem people have wanted to solve forever. SGI’s NUMA interconnect and several other protocols that use “scheduled transfers” and RDMA-like buffer splitting were created for it. However, they were all etherNOT, and etherNOT never wins: it requires special hardware, rewritten stacks, etc.
Steve Modica
CTO, Small Tree Communications
Alex Gerulaitis
June 2, 2011 at 10:13 pm
Thanks Steve, that cleared things up.
As far as helping the original poster speed things up: would jumbo frames help him? A faster GigE switch? A direct 10GigE connection between the two stations that need faster file transfers (i.e., install 10GigE NICs in both machines and cable them together)?
(Of course an 8 Gb/s SAN would help, but that’s an order of magnitude more expensive than any of the options above.)
Alex (DV411)
Bob Zelin
June 2, 2011 at 10:56 pm
Does your mother call you DiGiFi? Here is the straight answer to your question. You state:
[Jesse, Dijifi] “The editing station has a fast 12 TB setup across two G-Speed eS units in RAID 5 (eight 2 TB disks). The transfer stations are simpler: two 1 TB disks in a RAID 0 stripe. We simply share the editing drive (12 TB) over the network to receive the transferred files while we are still editing off that same drive.”
Your G-Speed eS unit will not work as a drive array for a shared storage server. You want shared storage? You need a dedicated server, you need a professional VERY FAST RAID array, and THEN you can put a multiport ethernet card or 10Gig card into that server, tie it to a matching ethernet switch, and accomplish what you want (tie your 3 systems together so they can all share the same media). AND you can’t use the server computer as one of your editing systems. Why? Because you will get dropped-frame errors.
You started this thread thinking that you would buy an $800 ethernet card, stick it into one of your Macs, set up link aggregation, and use your existing equipment to have a shared storage system. It’s not going to work.
Do you want to see a drive array from G-Tech, that will be suitable for your work ?
https://www.g-technology.com/products/g-speed-es-pro-xl.cfm
AND a dedicated Mac Pro as a server computer, AND a switch, AND a multiport ethernet card, so you can accomplish what you want to do.
There are LOTS of shared storage systems that will do exactly what you want for a fraction of the price of a full-blown Apple Xsan system. But simply putting a multiport ethernet card in your Mac Pro and using your existing drive array will not do what you want.
Companies like Avid, Facilis, Small Tree, Apace, EditShare, CalDigit, JMR, Studio Network Solutions, and Maxx Digital can all provide you with a working solution. Will you pay over 10 grand for a working system? YES YOU WILL! Can you do it for 800 bucks, using your existing G-Speed eS as the storage? ABSOLUTELY NOT.
Bob Zelin
Alex Gerulaitis
June 2, 2011 at 11:07 pm
[Bob Zelin] “your G-Speed eS unit will not work as a drive array for a shared storage server.”
Bob, I don’t think Jesse needs shared storage for editing, or at least he didn’t seem to ask for it:
[Jesse, DiJiFi] “the way we transfer film requires that the digital file be captured on one system and then transferred to a different system for editing to keep things efficient.”
Jesse appears to be only searching for ways to speed up file transfer over Ethernet, and he asked a specific question about LAG.
Alex (DV411)
Steve Modica
June 3, 2011 at 12:02 am
Jumbo frames could help, although that will mostly be a reduction in CPU overhead.
10Gb can probably help, with a couple of provisos:
1. Using a normal copy, there’s only one CPU thread running to push the data across. That one thread will top out at around 300MB/sec; this is a function of how fast a single CPU can turn the TCP crank. Faster cores could make it a little faster. (FCP and QuickTime use AIO routines, which work in parallel, and servers get the advantage of having many open sockets, so more cores are brought to bear. There’s a rough sketch of the parallel approach after this list.)
2. You might not get 300MB/sec if the storage on both sides can’t handle that bandwidth. You won’t go faster than the slowest element.
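For illustration, here is a rough sketch of that chunked, parallel approach using threads and positional reads/writes. It is not FCP’s or QuickTime’s actual AIO code, and the paths in the usage line are made up.

```python
# Hedged sketch: split a big file copy into chunks and push several
# chunks at once, instead of one thread looping read()/write().
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 8 * 1024 * 1024  # 8 MB per request

def copy_parallel(src_path: str, dst_path: str, workers: int = 4) -> None:
    size = os.path.getsize(src_path)
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    os.ftruncate(dst, size)  # preallocate so chunks can land anywhere
    try:
        def copy_chunk(offset: int) -> None:
            data = os.pread(src, CHUNK, offset)  # positional read, no shared seek
            os.pwrite(dst, data, offset)         # positional write, same offset
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(copy_chunk, range(0, size, CHUNK)))  # wait, surface errors
    finally:
        os.close(src)
        os.close(dst)

# Hypothetical usage:
# copy_parallel("/Volumes/Capture/reel001.mov", "/Volumes/Edit12TB/reel001.mov")
```

Whether this beats a single-threaded copy depends on whether the single CPU thread was actually the limit; if the disks are the bottleneck, it won’t help at all (see the next post).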
Steve
Steve Modica
CTO, Small Tree Communications
Steve Modica
June 3, 2011 at 12:05 am
In answer to my own question:
The two RAID 0 striped drives in the destination stations will be the bottleneck. If they are SATA devices, I think you’ll be topping out at 140MB/sec best case.
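Some rough arithmetic on where the bottleneck sits (the per-device rates are assumptions, e.g. ~70 MB/s per SATA drive):

```python
# Back-of-the-envelope transfer times for 100 GB at each ceiling.
RATES_MBPS = {
    "GigE link":        110,  # realistic single gigabit link
    "10GigE, one copy": 300,  # single-thread TCP ceiling cited above
    "RAID 0 dest":      140,  # 2 x ~70 MB/s SATA drives, best case
}

transfer_mb = 100 * 1000  # 100 GB
for label, rate in RATES_MBPS.items():
    minutes = transfer_mb / rate / 60
    print(f"{label:18s} {rate:4d} MB/s -> {minutes:4.1f} min per 100 GB")

# The copy runs at min(link, CPU, disk): with a 140 MB/s destination,
# 10GigE cuts 100 GB from ~15 minutes to ~12, not to ~5.
```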
Steve Modica
CTO, Small Tree Communications
Alex Gerulaitis
June 3, 2011 at 12:11 am
[Steve Modica] “Using a normal copy, there’s only one CPU thread running to push the data across. That one thread will top out at around 300MB/sec; this is a function of how fast a single CPU can turn the TCP crank. Faster cores could make it a little faster.”
Great info – thanks Steve. I had no idea TCP had that much computational overhead. Are there ways to reduce it and get 10GigE closer to its 10 Gb/s ceiling?
Alex (DV411)