Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.


  • multiple volumes for ethernet SAN

    Posted by Eric Hansen on May 18, 2009 at 5:18 pm

    this is mostly directed at Bob and Walter, but anyone else can jump in.

    i have a question regarding using multiple volumes in an ethernet SAN. bob, in previous emails you told me that you always have the edit systems writing to their own volume on the SAN. for example, in a 4 suite setup, you have 4 different volumes on the SAN. they all read from any volume, but can only write to one. in Walter’s article, there is SAN1 and SAN2 in his Final Share system, but he never explains why. is this for the same reason? and actually, what is the reason? were you guys having issues with different systems writing to the same volume and now the standard practice is to create different volumes?

    my main ethernet installation has not been entirely trouble-free, especially relating to captures. the 2 capture systems are running NFS connections instead of the usual AFP because the captures were aborting at 2GB over AFP. Bob, i asked and you said you’ve never had this issue and i wonder if my single volume is the culprit. i’m still wondering why NFS works, but right now it works and that’s what they have been sticking with. i noticed that the 10.5.7 update has some fixes for network file sharing related to Jumbo Frames and Flow Control, so i’m going to look closer at that.
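For anyone wanting to try the same NFS workaround, a sketch of mounting the export from Terminal on a Mac client might look like this. The server name, export path, and mount point are placeholders, and the TCP/read/write-size options are common tuning suggestions, not a guarantee:

```shell
# Hypothetical server and paths -- substitute your own.
# Mount the SAN's NFS export over TCP with larger read/write sizes.
sudo mkdir -p /Volumes/SAN1
sudo mount -t nfs -o tcp,rsize=32768,wsize=32768 sanserver:/Volumes/SAN1 /Volumes/SAN1
```

Capture to the NFS mount point as you would to any local volume; if captures still abort, the volume layout rather than the protocol may be the issue.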

    thanks guys. i think it would be cool to create a “Best Practices” for Ethernet SANs similar to the one over at Xsanity for Xsan. we’re all figuring out that there’s a very specific way you have to install these things. then again, i think that document might be worth its weight in gold and whoever possesses it might not want to let it go…

    e

    Eric Hansen, The Audio Visual Plumber – http://www.avplumber.com

  • Bob Zelin

    May 19, 2009 at 1:02 am

    replies below, but as an overview statement – we find out the answers to “best practices” as we make mistakes over time.

    bob, in previous emails you told me that you always have the edit systems writing to their own volume on the SAN. for example, in a 4 suite setup, you have 4 different volumes on the SAN. they all read from any volume, but can only write to one.

    REPLY – this is what I try to do all the time, but I have since done installations where everyone is writing to the same volume. As Walter has suffered through, the more “strain” on the system (number of streams, rendering, resolution), the more difficult this can become.
    Walter currently has one large 16 Terabyte drive array – one volume for all the edit rooms (plus local storage on each main FCP system).

    in Walter’s article, there is SAN1 and SAN2 in his Final Share system, but he never explains why. is this for the same reason? and actually, what is the reason?

    REPLY –
    I did this because of my philosophy, but Small Tree and Maxx Digital changed this. Walter’s system started with an Areca 1680x controller card and Seagate 1.5 TB drives. Since that time, his system (and the system that I had trouble with in Orlando) switched to the ATTO R380 card and Hitachi Saturn Enterprise series drives. Because of rendering, both systems had “segmented bonds” created – 2 ethernet ports per “area”. These are the only two systems that have been set up like this, because of problems. All the other systems have one big link aggregate, with all 6 ports tied together. I refused to believe in Walter’s configuration until I had problems in Orlando with one client – and we segmented the ports into three groups of two (bond0, bond1, bond2). In future installations, I will not do this (unless I get into trouble).
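For reference, the “segmented bonds” described above (three two-port aggregates instead of one six-port aggregate) can be created from Terminal on a Mac OS X server roughly like this. The interface names en0–en5 are assumptions; your ports may be numbered differently:

```shell
# Sketch: three two-port link aggregates (bond0, bond1, bond2)
# instead of one six-port aggregate. en0..en5 are assumed names.
sudo ifconfig bond0 create
sudo ifconfig bond0 bonddev en0
sudo ifconfig bond0 bonddev en1

sudo ifconfig bond1 create
sudo ifconfig bond1 bonddev en2
sudo ifconfig bond1 bonddev en3

sudo ifconfig bond2 create
sudo ifconfig bond2 bonddev en4
sudo ifconfig bond2 bonddev en5
```

Each bond then gets its own IP address, and the matching switch ports have to be grouped into corresponding LACP trunks for the aggregation to work.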

    were you guys having issues with different systems writing to the same volume and now the standard practice is to create different volumes?

    REPLY – as an overview answer, it is much easier for me to say that if you have jumbo frames enabled on both the clients and the switch, flow control enabled on both the clients and the switch, and are using an ATTO R380 host controller on your drives, and your drives are Hitachi and NOT Seagate, you should not have any issues. I continue to create separate volumes IF I CAN. If an installation has countless clients, I cannot do this. But even in a small installation, if I have an 8 bay array, I will now create 2 volumes, 4 drives each – volume 1 and volume 2. This is now my typical setup – 2 editors, and one intern who digitizes on the two volumes.
    I find that this works for me, even though others disagree. Of course, if you have 9 clients, and only an 8 bay array with a PEG6 card, you can’t do this.
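As a sketch of the client side of that checklist, jumbo frames can be enabled per interface from Terminal. The interface name en0 is an assumption (use whichever port faces the SAN switch), and flow control is typically toggled on the switch side rather than on the Mac:

```shell
# Set a 9000-byte MTU (jumbo frames) on the client's SAN interface.
# en0 is a placeholder -- use the port that faces the SAN switch.
sudo ifconfig en0 mtu 9000

# Verify the new MTU took effect.
ifconfig en0 | grep mtu
```

The MTU has to match end to end – client, switch, and server – or large frames will be dropped and throughput will get worse, not better.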

    my main ethernet installation has not been entirely trouble-free, especially relating to captures. the 2 capture systems are running NFS connections instead of the usual AFP because the captures were aborting at 2GB over AFP. Bob, i asked and you said you’ve never had this issue and i wonder if my single volume is the culprit. i’m still wondering why NFS works, but right now it works and thats what they have been sticking with. i noticed that the 10.5.7 update has some fixes for network file sharing related to Jumbo Frames and Flow Control, so i’m going to closer at that.

    REPLY – this is the problem with me not being an expert. I never use NFS – I only use AFP. Walter is using AFP, and my “problem client” who is no longer a problem is also using AFP. I have done 15 systems so far, and ALL are running AFP. I didn’t believe the stories about latency in the Seagate drives until I lived through it myself (and was almost killed by my client). It will be a long time before I trust Seagate again.
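For comparison with the NFS workaround, an AFP mount can also be scripted from Terminal instead of using the Finder’s “Connect to Server”. The server name, share name, and account below are placeholders:

```shell
# Sketch: mount an AFP share from Terminal (server, share, and
# account are placeholders -- substitute your own).
mkdir -p /Volumes/SAN1
mount_afp "afp://editor:password@sanserver/SAN1" /Volumes/SAN1
```

Scripting the mount this way makes it easy to remount all the edit suites consistently after a reboot.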

    thanks guys. i think it would be cool to create a “Best Practices” for Ethernet SANs similar to the one over at Xsanity for Xsan. we’re all figuring out that there’s a very specific way you have to install these things. then again, i think that document might be worth it’s weight in gold and whoever possesses it might not want to let it go…

    REPLY –
    many best practices are not practical. For example, many SAN manufacturers say to turn off Spotlight on your shared volume. Easy for them to say – not easy for my clients to live with. It’s easy to tell Walter not to render to his shared volume (as Mark Raudonis at Bunim Murray enforces with his 100-seat Xsan), but when you have a small shop and you have deadlines to meet, rendering is critical on a SAN volume. Best practices are great in theory. My best practice these days – avoid Seagate drives for shared drive arrays, because of the latency issues.
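For what it’s worth, disabling Spotlight indexing on a shared volume is a one-liner on Mac OS X (the volume path is a placeholder), so it is at least easy to try and easy to undo:

```shell
# Turn Spotlight indexing off for the shared volume (placeholder path).
sudo mdutil -i off /Volumes/SAN1

# To turn it back on later:
# sudo mdutil -i on /Volumes/SAN1
```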

    Bob Zelin
