Forums › Storage & Archiving

Storage Options: 100+ TB
Posted by Drew Mortensen on April 13, 2011 at 12:57 am

Hello All,
The educational non-profit that I work for recently decided to make the jump into a modest amount of video production work. By that I mean we have purchased a pair of Sony XDCAMs, a NewTek TriCaster, and a plethora of accessories and software. All very nice equipment for what we need. Now that we are a few months into regularly producing HD, storage, asset management, and transfer times are becoming the nagging holdup of our workflow.

Right now we have four Lenovo T500/510 laptops, one MacBook Pro, and one i5 desktop that we use for editing with Adobe Premiere CS5. We will probably add an i7 desktop in the next few weeks. On average we are chewing through about 3 TB per month, and our schedule for the next six months looks like that might go up another 2 TB. So, looking out over the next 18 months, a system that can accommodate 100+ TB seems a fairly reasonable usage estimate.
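The 18-month estimate can be sanity-checked with a quick back-of-the-envelope calculation. The monthly rates come from the figures above; the amount already on disk is an assumption (a few months of production at the current rate):

```python
# Rough capacity projection: ~3 TB/month now, rising to ~5 TB/month.
existing_tb = 12       # assumed footage already on disk (~4 months at 3 TB/month)
months_at_3tb = 6      # remaining months at the current rate
months_at_5tb = 12     # rest of the 18-month window at the higher rate

projected_tb = existing_tb + months_at_3tb * 3 + months_at_5tb * 5
print(projected_tb)    # 90 -- so "100+ TB" leaves reasonable headroom
```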
The data center in our building is very robust, but it was designed for documents, not for storing and editing HD video. So, after discussing the situation with our network infrastructure folks, I am coming to the Cow community to find specific companies and product lines that I can bring back and suggest that my IT gurus investigate. They are well aware that typical SAN solutions are not the answer, and whatever we do will end up being a video-centric solution.
I would greatly appreciate any insight on:
1. Companies / Product Lines
2. Standards of Performance (Get at least xxx)
3. Asset management

Thank you all!
Drew
Bob Zelin
April 13, 2011 at 5:13 am

This is my suggestion to you.
Your IT gurus know nothing about shared storage for video environments. They know Cisco switches and Dell drive arrays. This is not what you want.

On this forum, there are countless threads about shared storage for video applications. On Creative Cow, there are countless ads for companies that make shared storage systems.
I suggest that either you or your IT gurus read these forums, look at these ads, and contact these companies. But I know in advance that your IT gurus will be resistant: if it is not a Windows Server application that they are familiar with, along with a Cisco switch solution, they will "hate it" and want these companies to justify why you can't just use a Cisco switch and a standard Windows Server for your shared storage.
AND YOU WILL FAIL.
I suggest that you do NOT contact your IT Gurus, and do the research yourself, contact these companies yourself, do your own research, contact the customers of these companies that are using their video shared storage systems, and tell your IT gurus to go screw off, and don’t bother you.
Your support will come from the wonderful companies that are mentioned on Creative Cow, not your IT gurus, who will never learn the specialized products advertised here, and will keep questioning why you just didn't use a Windows Server with a Cisco switch that they already know.

Bob Zelin
Drew Mortensen
April 13, 2011 at 1:02 pm

Chris,

The honest answer is "I don't know." The in-practice answer is that the company spends whatever money is necessary to do it correctly. They understand the cost of wasted time, and my IT Director is fully on board with a video SAN solution, knowing that it could be very, very expensive. I am really not worried about the potential costs.
Drew Mortensen
April 13, 2011 at 1:23 pm

Bob,

Thank you for your response. I am privileged in the sense that I have an IT Director who not only sets up Cisco switches, but edits video (Premiere), mans the camera (XDCAM), produces live events (TriCaster), and has no qualms about running Linux or Mac OS X (server and client). He does all of this while overseeing ten other workers in the department. Although I may be generally in "charge" of the video productions, our department shares in all of the work. Every member is cross-trained. We all have our preferences, but one of our core values is to use the correct tool for the job, regardless of those preferences. That is why we are coming to this community. We know that we aren't the experts and that we have a lot to learn, so we are asking to be pointed in the right direction.

As a department we have read through many of the posts from the past six months. It was only after we discussed those posts that we collectively decided to write the question that I posted. There are a lot of vendors throughout this wonderful Cow community, but we have not been able to find a comprehensive list of ones that are geared towards 100+ TB of video management. We also don't want to miss a great company just because it wasn't mentioned in a previous thread or our eyes didn't land on their banner.
We would truly appreciate the benefit of your experience. Which companies do you gravitate towards?
Mark Raudonis
April 14, 2011 at 12:36 am

Drew,

There are many, many solutions out there. Having just returned from NAB, walking (literally) miles of aisles of vendors, I know that one of them is just right for you. Which one? I can't possibly know. Only you can decide. In your situation, there is no "this is the best" answer. There are at least a dozen companies that can supply you with a workable system.

Having said that, your choice of equipment is driven by your own specific needs, budget, etc. That is why Bob suggested contacting all of the vendors advertising on these pages to hear their pitch. Only then can you decide what's right for you.
If you’re unable/uninterested in doing the homework, then find a trusted VAR to guide you through the process. Expect to pay for that service.
Good luck.
Mark
Drew Mortensen
April 14, 2011 at 2:12 am

Mark,

Thank you for writing. For me, researching a solution involves asking questions of those with experience, which is why I posted to this forum after reading several months' worth of posts. I don't know anyone in this business, so I came to where the experts reside. Many of the people who read this forum probably have experience with large systems and can provide testimonials about specific companies and platforms that have worked well for them, or that haven't for various reasons. Unfortunately, there doesn't seem to be a readily accessible repository of user comments for these systems – at least not that I've found.

I'm not looking for anyone to do my homework, nor for any sort of panacea product. Before I start going to vendors and inviting them to sell me their line of goods, I would like a good idea of what people with real-world experience think about the systems they know well.
Is anyone willing to give a testimonial for a company they have had positive results with?
Mark Raudonis
April 14, 2011 at 4:35 am

Drew,

OK. Here you go.

We use Apple's Xsan tied to Active Storage X-RAIDs. With over five years of uptime, it's been a workhorse for us. Will this be the right choice for you? I have no idea. It depends on what your needs are. So, your question is like "How long is a piece of string?" There are just too many potential solutions for you to ask everyone, "What do you like?" Better to say, "Here are my needs," specifically citing bandwidth, storage, and reliability requirements, and ask, "Who has a similar setup?"

If I were you, I'd really ask, "Here's the city where I live. Who knows a good VAR there?"
Mark
Drew Mortensen
April 14, 2011 at 5:22 am

Mark,

I'll ask both of your questions:

1. Does anyone know of a VAR near Erie, Pennsylvania?
2. Here are my needs; does anyone have a similar setup?

Storage: 100+ TB (over the next 18 months)
Reliability: It needs to work consistently. Some measure of redundancy would be nice, as we don't want to lose files.
Bandwidth: On a regular basis we will be doing at least two simultaneous edits of HD footage. This could spike to four. We have had internal conversations about 3, 6, and 10 Gb/s links, and Fibre Channel.

Again, thank you for your help in pointing us in the right direction.
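For scale, the raw bitrates behind "two to four simultaneous HD edits" are modest. A rough sketch, assuming the XDCAM HD422 codec's 50 Mb/s bitrate; the 2x overhead factor for scrubbing and multi-layer timelines is an assumption:

```python
STREAM_MBITS = 50    # XDCAM HD422 bitrate in megabits/s (assumed codec)
OVERHEAD = 2.0       # assumed headroom for scrubbing and layered timelines

def required_mbytes_per_s(streams: int) -> float:
    """Aggregate client bandwidth in megabytes/s for N simultaneous edits."""
    return streams * STREAM_MBITS * OVERHEAD / 8

print(required_mbytes_per_s(2))   # 25.0 MB/s
print(required_mbytes_per_s(4))   # 50.0 MB/s
```

Even the four-stream spike is well within what a single saturated gigabit link can carry per client; the harder problem is sustaining that to several clients at once from one array.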
Chris Gordon
April 15, 2011 at 1:17 am

First big question to ask: do you need all of this data online and available to all of your editing stations all of the time? If you don't, then you could get a simpler, more manageable solution at a much lower price point.
I'll assume you do need/want everything online and accessible at once. To better frame the problem, let's look at the number of disks you're talking about. This is just some ballpark guesswork, but it will help give you an idea.
– I’ll assume (there I go again) we are using 1 TB disks
– 8+2 RAID6 groups (10 disks total in group, but you effectively have 8 disks for data, the other 2 are consumed by parity)
– 1 hot spare disk for every 4 RAID groups (this allows rebuilds of failed disks onto a disk already in the system instead of waiting for you to get a new disk and replace it)
– We only fill the disks to 75% full. Disk performance tends to drop off significantly as you approach capacity. (This is different from keeping free space in a file system.)
– If I've done my math right, that comes out to about 205 disks to get you 100 TB of usable space (less file system overhead). This will probably end up being at least one full 42U rack of disks.

Now that we have some ballpark of the number of disks we're talking about, some additional things to think about:
– Do you plan to back this up? If so, how? Traditional backups (a machine connected to the storage and a tape library, writing all of the data to tape) can put a hit on your storage and possibly affect performance as they suck up all of the I/O capacity on your array. Some ways around this are:
— Do your backups when no one is using the array. With 100 TB you'll need to break up what gets backed up on which days.
— Make array-based copies of the data and back up the copies. This will increase the number of disks you need. Additionally, you'll need the horsepower in the array to make the copies.
— Replicate the data to another array, preferably in another location. Mid- and high-end arrays have the ability to replicate data between different arrays, both synchronously and asynchronously. This can get your data to another location with little day-to-day effort and avoids the need to always use tape. Or you can use the remote array for backups so your primary array isn't affected.
— I could come up with others but would need to understand your workflow more
– How much time and energy will be invested in managing all of this? There is a balance between the size/number of arrays you have and the effort to manage it all. A bunch of small 16-to-24-disk arrays with an expansion chassis or two on each may keep your price down, but can result in a lot more work to manage, as you have that many more arrays to deal with. Bigger arrays cost a lot more, but you only need one of them, making management much easier. As you said before, people's time and skills cost money. Regardless, at 200+ disks you are going to need to plan for resources (people) to take care of it on a routine basis — either your existing IT staff or contractors/consultants.
– Do the controllers in the array really have the horsepower to keep up with what you want your array to do? I'd be very skeptical of something like a little XScale/ARM processor keeping up with 200+ disks in a single array, especially if you do any array-based copies or replication.

All of that said, I'd go back and really look at what you need to have online on your "performance" array and what you can have in a slower/cheaper "archive" array. Needing 100+ TB of storage online and ready to edit is going to be expensive, maybe not really what you need, and more complex than you'd like.
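The backup-window concern above is easy to quantify. A sketch, assuming a full backup of 100 TB streams to a single LTO-5 drive at its roughly 140 MB/s native rate (both figures are assumptions; compression and multiple drives would shorten this):

```python
TAPE_MB_PER_S = 140          # assumed LTO-5 native throughput, megabytes/s
DATA_TB = 100                # size of a full backup

seconds = DATA_TB * 1_000_000 / TAPE_MB_PER_S   # decimal TB -> MB
days = seconds / 86_400
print(round(days, 1))        # ~8.3 days of continuous streaming
```

This is why the suggestions above favor splitting backups across days, backing up array-side copies, or replicating to a second array instead of relying on one monolithic tape pass.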
Like everyone else has said, talk to multiple vendors and get an idea of the solution they propose. If you really do need storage this big, I'd expect the vendors to put together proposals/bids describing their solution — and I'd expect that to be free (that's what it costs them to get the sale). I deal with designing a lot of infrastructure solutions at work and have vendors propose solutions all the time (showing up on site with loads of people to better understand the problem and our needs), and I've never been asked to pay for that. Ongoing support after the purchase, help installing systems, ongoing professional services, etc. — that's all something I expect to pay for, but not a proposal on what they want to sell me.
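The disk-count arithmetic earlier in this post can be sketched as a small calculator. Same assumptions as above: 1 TB disks, 8+2 RAID6 groups, one hot spare per four groups, 75% maximum fill. This simple version lands below the ~205 ballpark because it ignores decimal-to-binary capacity shrinkage and file-system overhead:

```python
import math

DISK_TB = 1.0        # marketed capacity per disk
DATA_DISKS = 8       # data disks per 8+2 RAID6 group
GROUP_SIZE = 10      # 8 data + 2 parity disks
SPARES_PER = 4       # 1 hot spare per 4 RAID groups
FILL = 0.75          # keep disks no more than 75% full

def disks_for(usable_tb: float) -> int:
    raw_tb = usable_tb / FILL                         # capacity needed before the fill cap
    groups = math.ceil(raw_tb / (DATA_DISKS * DISK_TB))
    spares = math.ceil(groups / SPARES_PER)
    return groups * GROUP_SIZE + spares

print(disks_for(100))   # 175, before formatting losses and overhead
```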
Bob Zelin
April 15, 2011 at 4:51 am

Hi –

Because this is a small industry, and because all the competitors are pretty friendly with each other, you are not going to see us say, "Oh yeah, that brand sucks." They all work. The most expensive solution is the one Mark suggested – Active Storage – and it is fantastic, and the most robust solution. The cheapest solution is the one I represent – Maxx Digital Final Share. Providing 100 TB of storage today is nothing: a 16-bay chassis with 3 TB Hitachi drives is 48 TB in one small 3RU chassis, and these chassis can be daisy-chained up to 384 TB. It's no big deal for any vendor of SAN systems.

I welcome all the pro vendors to post here, to say to this guy "just call us, we can help you" – but in summary:
Apple Xsan
Active Storage
AVID ISIS
Maxx Digital Final Share
Studio Network Solutions
Small Tree GraniteSTOR
Apace Systems
CalDigit SuperShare
Facilis TerraBlock
EditShare
Rorke Data with FibreJet
Sonnet Tech using MetaSAN
Accusys using MetaSAN
ATTO Tech FastStream
JMR using FastStream and MetaSAN, Xsan, or FibreJet

There are a few others, but these are the "big" brands. Which one sucks? NONE of them – they all work. They all have different features, and they all have different prices. All of "us" are tired of saying "call us, we can help you" – it's your turn to make the calls and find out for yourself which one is best for you.

Creative Cow, and specifically this forum, is the best resource to research this information. A generic IT solution is the only solution that will fail for you.
Bob Zelin