Creative Communities of the World Forums

The peer to peer support community for media production professionals.

  • SAN discussion – volume vs file based – interesting question at the end

    Posted by Francois Stark on October 2, 2005 at 7:27 pm

    I’ve been using SanMP for about 4 months now, and have a few things on my mind. In this discussion I’m referring to SanMP since it is the only SAN I have personal experience with. It should be pretty close in operation to other volume-based SANs like CommandSoft’s FibreJet.

    SanMP is a high performance SAN system using volume based management. This means all users can mount all volumes for read access at the same time, while only one user can write to each volume at a time. This model has some advantages: It does not need a metadata controller, and does not have a metadata controller’s overheads. Once the volume is mounted, the client is directly reading and writing from the array with no delays.

    This system is quite stable provided you synchronise, or in most cases unmount and remount, the SAN volumes about once a day. Here’s the catch: when a client mounts a volume read/write, it takes control of the file allocation tables and can write and delete files at will. Great.

    However when it mounts the volume in read-only mode, it reads a copy of the file allocation tables, and keeps it. Meanwhile, another client has write access and is merrily changing the volume: adding new files and deleting old ones. The read-only client does not know this, and can even continue reading and using files after the writer has trashed them… Once the writer overwrites a critical part of the data, the reader reads crap and crashes.

    The reader has two ways to get an updated version of what is really happening on the volume: it can “synchronise” or unmount and re-mount a volume. Synchronise is SanMP’s process of reading an updated version of the volume without unmounting – it sometimes does not work without closing all apps (especially FCP) using the volume, and unmounting always needs all apps using the volume closed. There is also a setting to automatically sync every X minutes, but that process always leads to dropped frames and is thus unusable.
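
    To picture what that means in practice, here is a toy sketch of the single-writer / snapshot-reader idea – purely an illustration, not how SanMP actually manages its tables:

        # Toy model of volume-based SAN semantics: one read/write client works on
        # the live allocation table, read-only clients keep a snapshot taken at
        # mount (or last synchronise) time. Purely illustrative.
        class Volume:
            def __init__(self):
                self.files = {}                        # the "real" table on the array

        class Writer:
            def __init__(self, vol):
                self.vol = vol                         # read/write mount: live table
            def write(self, name, data):
                self.vol.files[name] = data
            def delete(self, name):
                self.vol.files.pop(name, None)

        class Reader:
            def __init__(self, vol):
                self.vol = vol
                self.snapshot = dict(vol.files)        # copy taken at mount time
            def read(self, name):
                # Trusts the stale snapshot; if the writer has since deleted or
                # overwritten the file, this is where the reader goes wrong.
                return self.snapshot.get(name, "stale or missing")
            def synchronise(self):
                self.snapshot = dict(self.vol.files)   # re-read the table

        vol = Volume()
        writer = Writer(vol)
        writer.write("capture.mov", "original media")
        reader = Reader(vol)                   # read-only mount: snapshot taken now
        writer.delete("capture.mov")
        writer.write("new.mov", "new media")   # may reuse the old file's blocks
        print(reader.read("capture.mov"))      # still "original media" in the snapshot
        reader.synchronise()
        print(reader.read("capture.mov"))      # now: stale or missing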

    If the reader stays reasonably up to date (in our case about once a day seems fine), the system is really stable. It is also resilient to data transfer spikes: you all know that occasionally after using FCP’s “capture now”, FCP feels the need to read back the complete capture. It can capture merrily away at 20MB/s uncompressed SD, but as soon as you hit escape, it reads back from the drive at 185MB/s solidly for about 30 seconds. Sometimes it writes at that speed as well. I don’t know why this happens, I just see it happen occasionally on the performance monitor. The best part of this is that the other SanMP clients don’t know about it. They can continue editing their uncompressed work without a hiccup – something I’m not sure a six-seat XSAN installation on the same hardware – a single ADTX LH 15 drive array – would handle as well. Any system using Gigabit Ethernet would fall over.

    I have used this SAN for a large project with tight deadlines and it has helped us a lot – digitising in four suites at the same time; playing out a 50 minute broadcast master and protection copy from two FCP suites at the same time 2 hours before broadcast, etc.

    Now for some negatives: We are running four FCP suites and two Pro Tools final mix suites on this SAN. We often need to send a quicktime video file and OMFI audio file from the video suites to the audio suites instantly – this is one of the main reasons for getting the SAN. Using a volume based SAN, the audio suites must always synchronise, and often, when that fails, close all open apps using these volumes, unmount/remount and then re-launch all apps before they can start working. This also happens when we export a reference quicktime movie for immediate playout on another suite – we need to close FCP on the second suite before we can play out.

    There is no workaround on a volume based SAN – it causes frustrating delays that we would not have on a file-based SAN.

    The second issue is more about thinking ahead: we do a fair number of DVDs for approvals and final deliveries, and in many cases the client comes in for a viewing, likes what he sees and wants a DVD made immediately for further approval back at his office. That’s exactly what I would like to use Compressor 2’s distributed rendering for, but with our SanMP setup there is no way.

    I currently use Compressor like this: inside FCP I export the timeline as a reference QuickTime movie, switch to the Finder, drag the exported reference file onto Compressor, choose an MPEG-2 preset and submit.

    The dual G5 machines can then actually continue editing in FCP while Compressor is chugging away in the background – the only problem being that it still takes a while (about 1.5x real time) to do the MPEG-2 compression on a single machine.

    My thinking is this: I have 4 dual G5 machines – if I can let them help compressing in the background, I can cut the compression time by at least two thirds.
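
    Back-of-the-envelope, assuming the job segments evenly over the four machines and ignoring the overhead of splitting and reassembling the file:

        # Rough estimate for distributing the MPEG-2 encode across the G5s.
        programme_minutes = 50
        realtime_factor   = 1.5                  # ~1.5x real time on one dual G5
        machines          = 4

        single_machine = programme_minutes * realtime_factor   # 75 minutes
        distributed    = single_machine / machines              # ~19 minutes
        saving         = 1 - distributed / single_machine       # ~75% saved

        print(single_machine, distributed, saving)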

    My problem with sanMP would be this: After exporting the quicktime reference movie, all other machines that want to help with the render would need to synchronise or re-mount the source volume – impractical already. That takes care of reading the file, but what about writing? All the machines that help with the compressing would need to write to one volume at the same time – and that is not going to happen with any volume based SAN soon.

    The reason it’s on my mind like this is that I suspect the next major revision of FCP will include network rendering in the same fashion as compressor 2. And I don’t want to miss out on that because I’m on a volume based SAN.

    So after baring my soul like this, I have two questions:
    – Is anybody using Apple XSAN, or especially Tiger Technology MetaSAN, on a similar setup: six seats doing 70% DV and 30% uncompressed SD work? How does it behave in general operation, and what advantages or disadvantages do you see day to day? I would especially like to hear from MetaSAN users – how does MetaSAN handle real-world loads without a dedicated metadata controller?
    – Pro Tools will not run on any file-based SAN. If I decide to go to XSan or MetaSAN, with a metadata controller, would it be possible to run SanMP on some LUNs and XSan (or MetaSAN) on other LUNs on the same box?

    Regards
    Francois

  • Floh Peters

    October 5, 2005 at 7:37 pm

    [Francois Stark] “If I decide to go to XSan or MetaSAN, with a metadata controller, would it be possible to run SanMP on some LUNs and XSan (or MetaSAN) on other LUNs on the same box?”

    I think it should be possible to split LUNs of a storage array between different software approaches, but I would never try to run 2 different FC clients on the same computer. So you probably would not gain anything from using SanMP for your Pro Tools and XSan for your FCP suites.

  • Francois Stark

    October 7, 2005 at 10:41 am

    Hi Floh

    [Floh] “it should be possible to split LUNs of a storage array between different software approaches, but I would never try to run 2 different FC clients on the same Computer.”

    Makes sense.

    [Floh] “So you probably would not gain anything from using SanMP for your ProTools and XSan for your FCP suites.”

    I had not thought about that… Running XSan on the FCP suites and SanMP on the Pro Tools suites would mean the FCP suites can work together and the Pro Tools suites can work together, but sending stuff (OMFI files and reference movies) from FCP to Pro Tools would mean transferring it over the network – not very efficient.

    Compressor can render through a network connection, but you said it yourself – the network will be a bottleneck, which is why I thought about using the SAN.

    Regarding MetaSAN, I’ll start a new, simple thread to try to get a response from Metasan users.

    Regards
    Francois

  • Bart Harrison

    October 17, 2005 at 1:59 pm

    Hey Francois,

    Good to hear SanMP is working well for you (with the exception of this one issue).

    On one of my installations I came up with a rather unusual solution to this type of problem. On a twelve-volume SanMP SAN there were two volumes that we needed simultaneous write access to from multiple workstations (graphics, distributed rendering, etc.), so I repurposed one of their old G4s as a “SAN Access Server”. I put a FibreChannel card and a copy of SanMP on the G4, mounted the two volumes read/write and shared them via NFS over Gigabit Ethernet with all the other workstations. This way everyone can write to those two SAN volumes at any time without concern. Since graphics/rendering doesn’t require full-bandwidth access to the SAN, this turned out to be the perfect solution.
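
    For reference, the exports look something like this in BSD exports(5) format – the volume names and addresses here are just placeholders, and on Tiger the entries actually live in NetInfo rather than a flat /etc/exports file, but the format is the same:

        # Hypothetical export entries on the SAN Access Server
        /Volumes/SAN_Graphics  -alldirs -maproot=root -network 192.168.1.0 -mask 255.255.255.0
        /Volumes/SAN_Renders   -alldirs -maproot=root -network 192.168.1.0 -mask 255.255.255.0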

    I also took it one step further by adding a terabyte of Raid 3 SCSI storage to the SAN Access Server (an old Huge array). We also digitised all six of their music libraries (using iTunes) and shared that with all the workstations. Each editing system can search for the desired music via their local copy of iTunes and then drop it right on the timeline without copying it to the SAN.

    NFS is highly efficient. We’ve been able to reliably play a single stream of 10-bit uncompressed NTSC through the SAN Access Server. Of course any SAN client can mount either or both of those SAN volumes via SanMP and have full bandwidth access to the material. Hope this helps !!

    Bart

    – – – – – – – – – – – – –
    Bart Harrison
    MPA – The HD Suite

    America’s VAR
    TurnKey Editing Systems, Storage Area Networks
    HD Consulting, Production & Post, Exhibition & Distribution
    http://www.hdsuite.com
    954-894-1221

  • Christopher Tay

    October 18, 2005 at 5:21 am

    Hey there Bart… which software did you use to share those SANmp volumes via NFS?

    Have you tried Sharepoint? I’ve briefly tried it and it works, but it used Samba instead since I was accessing those SANmp volumes from a Windows machine.

    -chrispy

  • Bart Harrison

    October 18, 2005 at 2:15 pm

    Hey Chrispy,

    Actually I use the NFS client/server capability that’s built right into OS X. For the faint of heart there’s a wonderful little utility called NFS Manager that can help set it up. If I have to share with the PC world I always use Thursby’s DAVE.
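
    From each workstation it’s then just a standard NFS mount – something along these lines (the server name and volume path are only examples):

        # Mount the shared SAN volume from an OS X client over Gigabit Ethernet
        sudo mkdir -p /Volumes/SAN_Graphics
        sudo mount -t nfs sanserver:/Volumes/SAN_Graphics /Volumes/SAN_Graphics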

    Bart

  • Mark Raudonis

    October 24, 2005 at 5:42 am

    [Francois Stark] “- Anybody using Apple XSAN”

    Francois,

    It sounds like you’ve got a handle on working around the peculiarities of volume based storage schemes. If you’d like to discuss how we’re using X-SAN feel free to contact me off-list.

    We’re currently running close to 100 FCP clients on two separate X-SANs. One of the primary reasons we chose X-SAN over the other FCP-compatible SANs out there was X-SAN’s ability to share volumes with all users. At a certain point, workflow becomes more important than the cost of the system. In my opinion, X-SAN gives us a collaborative workflow that’s as good as, and in some ways better than, Avid’s Unity. It’s certainly a lot cheaper. We’ve taken those savings and turned them into terabytes of storage… which significantly improves our workflow.

    Mark

  • Francois Stark

    October 24, 2005 at 2:05 pm

    Thanks

    I looked at XSan closely, and the main reason I went with SanMP was that it could be used as audio storage for Pro Tools. We have two PT suites on the SAN and they exchange audio projects on a daily basis, as well as reading video files from our FCP volumes. That’s besides the fact that they don’t have spare PCI slots for second ethernet cards.

    It’s good to hear about larger SAN systems, and I certainly cannot see SanMP scaling to 50 clients: we have 6 seats and 10 volumes – imagine mounting 70 volumes for 50 clients!

    Regards
    Francois

  • Mark Raudonis

    October 25, 2005 at 6:24 am

    Francois,

    We DO have two audio Pro Tools suites connected to the X-SAN. They regularly transfer projects and media across the network. It’s a bit tricky, but here’s how we do it.

    The X-SAN can “see” clients on either the dedicated fiber network OR basic Gigabit Ethernet. So, because of the “card slot limitations” with the G5 we’re NOT connected via fiber, but we are connected via Ethernet. This means that we can transfer OMFs from the edit suite to our Pro Tools rooms. The audio guys “pull in” the OMFs to work locally. You’re right that the G5s are out of Ethernet slots… but WAIT! The newly announced G5s have DUAL Ethernet ports! No extra slot necessary for Pro Tools access.

    X-SAN is a very complex but versatile system. At this level, it’s definitely NOT a do-it-yourself kind of project.

    Good luck with your set up.

    mark

  • Francois Stark

    October 25, 2005 at 10:16 am

    That’s where the difference lies – our Pro Tools suites actually use the fibre connection for audio sessions. They can exchange sessions without copying: they prepare a session in the smaller room and then just open it in the other room, without duplicating the audio data, by using SanMP. We are using a three-drive RAID 1 in the ADTX box for the two suites’ audio drives. As long as the drives are not striped in Mac OS X, Pro Tools can use the storage for audio.

    I suppose that’s one of the rare ways to have redundancy on Pro Tools’ audio drives…

    Regards
    Francois
