Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.


  • Advice on a SAN

    Posted by Brandon Kraemer on October 19, 2009 at 4:37 pm

    I am looking for advice on building or purchasing a SAN for a 3-5 user environment. I am looking for something that is relatively affordable, scalable, easy to maintain (doesn’t require dedicated IT personnel), has a full maintenance contract, has redundancies built in, and can handle at least 1k proxy (RED) bandwidths. Our workflow uses 1k proxies and 1k ProRes transcodes, as well as 2k DPX sequences for online and color grading.

    I have seen bids from Apple (Xsan, Final Cut Server) and SNS (EVO)… and there are pros and cons to each system. My biggest concern is bandwidth and asset organization. I doubt that a “quad ethernet” kind of connection will suffice; it will probably require Fibre Channel. But I want to make sure the infrastructure is scalable, that the pipe we lay now will work in the future as bandwidths get more demanding.

    If trying to set up for 5 systems crosses a line that makes it cost prohibitive, I could see a 3 or 4 system config as a starting possibility. It would also be great to have a volume that the 2D/3D departments could render to (over a Gigabit connection) and that could be read by the edit systems (via fibre).

    So I’m trying to strike a balance: something that gets a department onto a much-needed SAN platform, is future-proof, and isn’t going to send us into sticker shock, but most of all… MUST WORK RIGHT, NO DROPPED FRAMES.

    I welcome your suggestions and feedback.

    Jim Boas replied 16 years, 6 months ago 7 Members · 14 Replies
  • 14 Replies
  • Matt Geier

    October 19, 2009 at 5:24 pm

    Hi Brandon,

    You could do this with a Fibre Channel SAN. You can also do it with an Ethernet-based network utilizing multi-port Gigabit or 10Gb in your dedicated server.

    As long as you’re not doing anything uncompressed, Gigabit or 10Gb Ethernet can support you. Bandwidth allocation and resources do not become a problem if you scale the network correctly.

    To support 3-5 users, you can get along just fine with a Quad or Six Port Ethernet Card, giving each user their own link…

    Each Gigabit wire can run up to 90MB/sec, which is plenty of bandwidth for editing video stream sizes….
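    As a rough sanity check on the one-link-per-user approach, here is a quick calculation of how many streams fit on a single Gigabit link, assuming the ~90MB/sec usable figure above and some illustrative (assumed, not measured) per-stream codec data rates:

```python
# Streams per Gigabit link, given ~90 MB/s usable payload per wire.
# The codec data rates below are rough illustrative assumptions.
GIGE_USABLE_MB_S = 90

codec_rates_mb_s = {
    "RED 1K proxy": 10,
    "ProRes 422 HQ (1080p)": 28,
    "Uncompressed 10-bit 1080p": 280,
}

for codec, rate in codec_rates_mb_s.items():
    # Integer division: whole real-time streams the link can sustain
    print(f"{codec}: {GIGE_USABLE_MB_S // rate} stream(s) per GigE link")
```

    On those assumed rates, a single link carries several proxy or ProRes streams but zero uncompressed streams, which is why the uncompressed question below matters so much.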

    The real question is going to be whether or not you have enough network bandwidth, AND whether your server AND storage are fast enough to support the real-time requirements.

    If you’d like to call me, I can discuss some Ethernet options that would suit you.

    Regards,

    Matt G
    Small Tree
    651-209-6509 x 1

  • Brandon Kraemer

    October 19, 2009 at 6:19 pm

    Thanks for the information so far… let me follow up with a bit more information.

    We finish Uncompressed 10-bit, at resolutions up to 1080p (23.976 fps). We would need bandwidth to support this workflow.

    We offline with RED proxies and other media formats, up to 1k in size.

    We pipeline 2k DPX sequences and 2k QT media for color corrections, but we would not need to play back at full frame rate.

    There is a gigabit network already in place, but there should be a dedicated network devoted to the SAN, as the main network is too taxed to support SAN traffic. What are the bandwidth limits of 10Gb Ethernet compared to Fibre Channel?

    Thanks in advance. I will collect more info before contacting anyone directly.

  • Nathaniel Cooper

    October 19, 2009 at 6:29 pm

    Hey Brandon,

    I work for SNS, I’m sure you’re working with one of my colleagues already, but wanted to chime in.

    If you’re interested in the EVO hardware (which lets you share the same LUNs/data over FC and GigE seamlessly) but you want the workflow that Xsan offers, just use the Xsan software with the EVO hardware.

    EVO is just our hardware platform. Most users use it with SANmp, as we provide complete solution packages and SANmp is a much simpler solution than most other SAN software. But you can use EVO a la carte with any standard SAN software (Xsan, MetaSAN, Commandsoft, etc.).

    I don’t know where you are located, but, we’ll be at HD Expo in Burbank in a couple of weeks showing EVO using SANmp and Xsan software if you’re interested in seeing it. We’ll be at booth #113.

    Whatever you choose, good luck!

    Nate Cooper
    ncooper@studionetworksolutions.com
    818 209 1331

  • Bob Zelin

    October 19, 2009 at 6:54 pm

    2K DPX files require 293MB/sec of bandwidth. This is A LOT of bandwidth. Ethernet can’t do this. Even 10 Gig Ethernet can only do about 180MB/sec, and this is enough for uncompressed HD, but not 2K.

    RedCode RAW is only 36MB/sec, ProRes422HQ is only 29MB/sec, and ProRes4444 is only 41MB/sec, so an Ethernet-based system will do Red Raw, Red Proxy files, and ProRes422HQ all day long. But it will NOT do 2K DPX files (and neither will 10Gig Ethernet, nor will iSCSI).

    You will not get any dropped frames from any Ethernet-based system running ProRes422, ProRes422HQ or Red files. However, if you think that you will build a system and have zero maintenance issues ever, and just want things to work without any technical support, you are dreaming. Everything, every system (and everything you currently own) has problems. Drives fail, systems crash, AJA cards develop issues, RED software is in “beta” and has issues. Nothing is trouble free. Ethernet-based systems are wonderful, inexpensive, and require almost no administration once your system is set up. However, there is a bandwidth limitation, so if you need 2K or even uncompressed HD, consider a good Fibre Channel based system.
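    For reference, the 2K DPX figure can be reproduced from the frame geometry. A minimal sketch, assuming 2048x1556 frames with 10-bit RGB packed three channels per 32-bit word (the common DPX packing) at 24fps:

```python
# Reconstructing the 2K DPX playback bandwidth from first principles.
WIDTH, HEIGHT = 2048, 1556   # standard full-aperture 2K DPX frame
BYTES_PER_PIXEL = 4          # 3 x 10-bit RGB channels packed into one 32-bit word
FPS = 24

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
rate_mib_s = frame_bytes * FPS / (1024 ** 2)

print(f"2K DPX frame size: {frame_bytes / 1024 ** 2:.1f} MiB")
print(f"Playback bandwidth: {rate_mib_s:.0f} MiB/sec")  # ~292 MiB/sec
```

    That lands at roughly 292MiB/sec, in line with the ~293MB/sec quoted above, and at 29.97fps it would be another 25% on top.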

    Bob Zelin

  • Eric Hansen

    October 19, 2009 at 7:57 pm

    hey Brandon

    I am in the middle of building a setup with demands very similar to yours. i’m going the ethernet route. the one big difference is that we are limiting the use of DPX files and Uncompressed HD to the online/color correction system, which has its own SAS direct attached storage. when footage is ingested, we either use the 1k proxies (in the case of RED), or transcode to ProRes HQ (HDCAM, HDCAM SR, DPX from Phantom HD, H.264 from 7D, etc). we use ProRes for all the editorial on an ethernet-based SAN, serving 4 main edit systems and numerous laptops and iMacs. and use ProRes mostly as our online codec. but if we need to work in uncompressed or DPX for any reason, that is solely in the color correction/online suite which has a Maxx Digital SAS array directly attached.

    if you think you can work in a similar way – only having super high speed on one or 2 systems with their own storage, and the rest of the system running ProRes from an ethernet SAN, you can save a bundle over a full 4Gbs Fibre setup.

    e

  • Brandon Kraemer

    October 21, 2009 at 8:26 pm

    Eric,

    It’s possible that this might work as a solution for us. I am curious to know what kind of ballpark cost you are looking at to deploy this, and what the biggest-ticket hardware items are, if you don’t mind sharing?

    Uncompressed 10-bit is our house online standard, and we often don’t finish out of just one room, though our best suite for finish work does include a SAS array and a direct-attached Xserve RAID. I am thinking that fibre is going to have to be the way we go from a future-proofing standpoint, but it’s possible to build a two-tier system, or expand to fibre in the future.

    Thanks,

    bk

  • Brandon Kraemer

    October 21, 2009 at 8:32 pm

    Bob,

    Thanks for the hard data on bandwidth requirements. So if I am understanding you, assuming we need to work Uncompressed 10-bit, we are definitely looking at a fiber infrastructure?

    I certainly realize that no system is foolproof. My goal is to find a system that doesn’t require a full-time IT person to administer the SAN, or take too much time away from editors editing to do admin work. I didn’t know if you had an opinion on which setups are easier to maintain than others?

    As for dropped frames, I just need a system that can handle the upper limits of our bandwidth requirements. Would it make sense to limit the number of users in order to achieve higher data-rate workflows, versus setting up a system that reaches all six of our users but has to take a data-rate (format) hit to do so? Is there a way to strike that balance?

  • Eric Hansen

    October 21, 2009 at 9:37 pm

    the biggest issue i have with fibre is the different speeds. your Xserve RAID is at 2Gb/s, but the current standard is 4Gb/s, which is quickly giving way to 8Gb/s. you say “future-proof” with fibre, but this just isn’t the case. it’s more dependent on the switches, cards and transceivers used (the optical-to-copper FC converters). the fibre itself is actually cheap in comparison. you would never use an Xserve RAID as the hub of even an ethernet-based SAN because it’s just too slow: 80MB/s per channel for a total of 160MB/s per enclosure. the server in the example setup i gave runs over 650MB/s over SAS. don’t go fibre just because you have an Xserve RAID. it’s OLD

    you say your house finishing codec is Unc10bit, but come on, really? i have worked with post houses before that told me exactly this, but i got them to switch one show to ProRes HQ and they haven’t looked back. ask yourself if you really need Unc10bit for delivery. i understand the need to work with Unc10bit for certain things like graphics, and for DPX. but that’s why the system in my example has one fast online system. with a codec like ProRes 4444, there’s no need for Unc10bit anymore.

    it’s difficult to put together numbers because there are a lot of variables, but here goes with the big-ticket items. i’m using Xsan as an example of a fibre-based SAN because that’s what i have experience with:

    Fibre:
    16TB storage (Promise RAID) – $15,000
    16-port FC switch – $5,000
    FC card for each client and server – $500 each
    Xsan – $999 per client and server (need 2 metadata servers)
    2 servers – roughly $3,000-$4,000 each

    Ethernet:
    16TB storage (MaxxDigital Expando, SAS with expansion) – $8,500
    SAS card for server – $1,000
    Ethernet card for server – $800
    24-port ethernet switch – $1,200
    ethernet cards – not needed for Mac Pros or G5s that already have 2 ethernet ports
    SAN software – free (Apple file sharing)
    1 server – $2,500-$3,000
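    For a quick side-by-side, the two parts lists above can be tallied. A sketch assuming a 4-client shop and midpoint server prices (both are assumptions; the per-item prices come from the list):

```python
# Ballpark totals for the fibre (Xsan) vs. ethernet bills of materials.
CLIENTS = 4  # assumed seat count, not from the original post

fibre_total = (
    15_000                    # 16TB Promise RAID
    + 5_000                   # 16-port FC switch
    + 500 * (CLIENTS + 2)     # FC card per client and per metadata server
    + 999 * (CLIENTS + 2)     # Xsan licence per client and server
    + 2 * 3_500               # two metadata servers, midpoint of $3,000-$4,000
)

ethernet_total = (
    8_500                     # 16TB MaxxDigital Expando (SAS)
    + 1_000                   # SAS card for server
    + 800                     # Ethernet card for server
    + 1_200                   # 24-port ethernet switch
    + 2_750                   # one server, midpoint of $2,500-$3,000
)

print(f"Fibre (Xsan) ballpark: ${fibre_total:,}")
print(f"Ethernet ballpark:     ${ethernet_total:,}")
```

    Under those assumptions the fibre build comes out at roughly 2.5x the ethernet build, before installation and service.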

    of course, there’s also the cost of installation and service. neither one of these is plug and play, although the ethernet based system is a little bit easier to install.

    e

  • Brandon Kraemer

    October 21, 2009 at 10:18 pm

    I think you misunderstood the role of the Xserve RAID… it is not what we are building around. At most it would become a proxy server for Final Cut Server, or a direct-attached box for a finishing suite, which is what it works as now. It would not be mixed into the SAN as first-tier storage. By future-proof I meant that we’re looking for a bandwidth solution that we can grow into as formats continue to grow. Of course, whatever technology one purchases, it’s 50% obsolete once you leave the store with it. Yesterday everything was 1080, today it’s 2k, 4k… tomorrow… ??? So if 4Gb/s or 8Gb/s hardware allows for some of that top-end room to grow into, it might be worth the expense. I hear you though… it’s always changing.

    And yes, Uncompressed 10-bit, really. We work with lots of motion graphics and compositing in our pipeline, and we always want to keep compression out of the mix till the very end. I have seen the ProRes codecs, and to the eye they are very convincing, but that doesn’t always hold up in the pipeline, in my experience. I think ProRes is fantastic for offline or reality TV/corporate/web delivery work, but I wouldn’t online with it for national TV commercial broadcast or push it through a composite pipeline for a film-out. I don’t want the technology to handcuff the format if there is no need to. But the dedicated suite for this might be a cost-effective way to go, as you mentioned.

    Thanks very much for sharing the ballpark costs. This is on par with some of the quotes I have seen, but I can see how the Ethernet option is very attractive, saves some serious dough. Worth considering for sure.

    Best,

    bk

  • Eric Hansen

    October 21, 2009 at 10:41 pm

    OK. Fibre for Unc10bit and 2k…4k??

    then the next thing you need to be prepared for is multiple RAID boxes. in one Xsan setup i’ve done, we have a single Promise RAID serving 3 edit suites. if we want to do Unc10bit HD on one of the systems, we have to quiet the other edit systems; it uses all the available bandwidth. and this is for 23.98fps. forget 29.97. if we want more bandwidth, we need to get another Promise RAID, which also means more ports on the switch. i’m not sure if even 2 RAIDs could feed that much bandwidth to 4 or 5 systems. you might even need 3 RAIDs.
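    That single-RAID saturation checks out on paper. A sketch assuming v210-style 10-bit 4:2:2 packing (6 pixels per 16 bytes) for the uncompressed stream and roughly 200MB/s of usable throughput per RAID (an assumed figure for illustration, not a measured one):

```python
# Why one uncompressed HD stream can monopolize a shared RAID.
RAID_USABLE_MB_S = 200       # assumed usable throughput of one RAID enclosure
WIDTH, HEIGHT, FPS = 1920, 1080, 24

frame_bytes = WIDTH * HEIGHT // 6 * 16   # v210 packs 6 pixels into 16 bytes
stream_mb_s = frame_bytes * FPS / 1e6

print(f"Unc 10-bit 1080p stream: {stream_mb_s:.0f} MB/sec")
print(f"Whole streams per RAID:  {int(RAID_USABLE_MB_S // stream_mb_s)}")
```

    One ~133MB/s stream against ~200MB/s of usable throughput leaves no headroom for a second suite, matching the experience described above.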

    if you need the speed and bandwidth, you need it. with the installations i do, i just try to make sure that the client and i are on the same page as far as requirements. once i start to spell out how much it costs for that much speed, everyone usually comes back with, well, maybe we dont really NEED all that speed. but if you do, you do.

    e

