David Gagne
Forum Replies Created
-
The question of data growth and timing you bring up is one I think about a lot, but it's hard to predict. I "need" a good chunk now, something like 60-70TB. Add 20% for headroom and it's closer to 80. Our growth rate is even harder to guess, but I think we could be at 150TB in 2-3 years. Of course, with some smart media management those numbers may come down. Who knows? I'd rather plan too big than too small at this point, as our large purchases tend to take 6 months or more to go through, and I'd rather not be caught behind like we are right now.
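To sanity-check those numbers, a quick projection helps. A minimal sketch in Python, where the ~35% annual growth rate is my assumption fitted to "150TB in 2-3 years", not a figure from anyone's planning docs:

```python
# Back-of-the-envelope capacity planning. current_tb and headroom come
# from the post above; annual_growth is an assumed rate chosen so that
# ~70TB today lands near 150TB in 2-3 years.

def projected_need(current_tb, headroom=0.20, annual_growth=0.35, years=3):
    """Project storage need (TB) with headroom plus compound annual growth."""
    need = current_tb * (1 + headroom)
    return [round(need * (1 + annual_growth) ** y) for y in range(years + 1)]

print(projected_need(70))  # [84, 113, 153, 207]
```

At that rate you blow past 150TB inside three years, which is an argument for buying dense now.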
As for fiber vs. Ethernet, the SataBeast has both 8Gb Fibre Channel and 10Gb Ethernet (their brochure only says 1Gb, but they've updated it to 10Gb). We'll probably stick with fiber for the short term but have the option of Ethernet long term. Does XSAN work over 10Gb Ethernet? Haven't heard anyone talk about this yet…
As for LTO, I have a few reasons I don't want to mess with it, but the main ones are the complexity of keeping it going and our lack of staff to deal with it. LTO requires a bit more hand-holding than disk and doesn't offer a big enough advantage to make it worthwhile.
-
XSAN is not just about the Apple branding; it's also Apple support and the Apple "it should just work" mentality. FCP guys don't really want the hassle of SMB/NFS shares, third-party iSCSI initiators, or dealing with Linux servers.
I think for a storage company to do this, they really have to pick up the ball on support and integration. I think Active is doing this best at the moment, but some other companies are starting to get the idea.
-
On the SataBeast specifically:
Good point, Bob, about heat buildup being a potential issue, but with good rack cooling I think it will be fine. As for the spin-up times, etc., I think that's also OK: it's pretty controllable, so we can have it spin down at night and on weekends and run full-bore during the day, or just set it to spin down after certain periods of inactivity.
We’re planning to use something like this for a near-line archive, but it might also be fast enough to edit on (it’s probably faster than our current 20TB Promise Raid).
The NexSAN guys are very familiar with XSAN implementations and support it fully.
Back to the generic question:
As you grow, you add more storage and more clients. Generally, more clients require more speed, and more storage adds more speed (especially with XSAN or similar implementations). But for us, the need for more storage has outgrown the need for more clients, so it seems to make sense to go denser: we don't require the speed boost of more controllers, and I'd rather not take up a whole rack's worth of heat and power.
Thoughts? (Just don't say LTO)
-
Ok, so I tried to pose the question generically, but here’s a real example:
NexSAN SataBeast + Expansion has 204TB (raw) in 8U. It’s dual controller, but it’s just one big array.
Compare that to using something like Active Storage XRAIDs, where you would need 4x 32TB XRAID plus 3x 32TB expansions. That's 28U and a lot more power consumption.
Obviously the XRaids should outperform because they have more controllers/connections to your drives, but I don’t really need that kind of performance.
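To make the density trade-off concrete, here's a tiny Python comparison using only the raw numbers from the example above (capacities and rack units are from this thread; nothing else is implied about either vendor):

```python
# TB-per-rack-unit comparison of the two options discussed above.
systems = {
    "SataBeast + expansion": {"raw_tb": 204, "rack_u": 8},
    "4x XRAID + 3x expansion": {"raw_tb": 7 * 32, "rack_u": 7 * 4},
}

for name, s in systems.items():
    density = s["raw_tb"] / s["rack_u"]
    print(f"{name}: {s['raw_tb']} TB in {s['rack_u']}U = {density:.1f} TB/U")
```

That's roughly 25.5 TB/U versus 8 TB/U, a 3x density gap, before even counting power and cooling.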
There are a bunch of others that fit the dense-storage category, whether it's IBM, EqualLogic, etc., and a bunch that fit the less dense category, so that's why I wanted to keep it generic.
The interesting part is that the faster, less dense storage is typically cheaper than the slower, denser storage, but those costs would probably be offset by power consumption over time.
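Whether the power savings actually offset the price gap is easy to sketch, though every figure below is an assumption (the wattages, electricity rate, and any price premium are placeholders, not vendor specs; plug in real numbers):

```python
# Payback estimate: does lower power draw offset a higher purchase price?
# All numbers are illustrative assumptions; substitute real ones.

def annual_power_cost(watts, usd_per_kwh=0.12):
    """Electricity cost of running a load 24/7 for one year."""
    return watts / 1000 * 24 * 365 * usd_per_kwh

dense_w, sprawl_w = 1300, 3500    # assumed draw: 8U dense box vs 28U of arrays
savings = annual_power_cost(sprawl_w) - annual_power_cost(dense_w)
print(f"~${savings:,.0f}/yr saved on power alone")
```

Cooling roughly doubles that in a typical machine room, so the real-world offset is bigger than raw wattage alone suggests.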
Good points Bob about drive speeds ramping up too.
-
David Gagne
January 16, 2011 at 4:11 pm in reply to: Reviving external RAID in the internal drive bays possible?
R-Studio can do it. You'll need some storage that is twice the size of the RAID (for creating images of the drives and for recovering).
-
David Gagne
January 16, 2011 at 4:08 pm in reply to: Building independent workstation – 3ware Sidecar
Before you go all-in on shared storage as Bob highlighted, you might also take a look at your desired workflow.
If each project:
1. shares nothing in common (library, stock footage, etc)
2. and each project is only worked on from one station,
3. and each project and all its related files can fit in a TB or so…
You might get away with individual RAIDs inside your Mac Pros (assuming you have Mac Pros?).
For this: buy some nice 2TB hard drives (WD Caviar Black or Hitachi UltraStar) and RAID 0 them. Cheap.
The problem with this is backup and archiving. RAID 0 is vulnerable to drive failure, and what do you do with your projects once they're completed? Now you have to invest in storage anyway! You might limp along with something like a Drobo for $3k, but then you need a full-time night person copying data to it while nobody is working!
For the cost of that person's salary, and to relieve all that headache, pony up for real shared storage. (I.e., listen to Bob.)
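How vulnerable RAID 0 actually is falls out of simple probability. A sketch, assuming roughly a 3% annualized failure rate per drive (a made-up but plausible figure; check the real stats for your drives):

```python
# RAID 0 survival: the stripe is lost if ANY member drive fails, so the
# per-drive survival probabilities multiply. 3%/yr failure is an assumption.

def raid0_survival(n_drives, annual_failure_rate=0.03):
    """Probability the whole stripe survives one year."""
    return (1 - annual_failure_rate) ** n_drives

for n in (2, 4, 8):
    print(f"{n} drives: {raid0_survival(n):.1%} chance of lasting the year")
```

With four striped drives you're accepting roughly a 1-in-9 chance of losing everything within a year, which is why the backup question can't be skipped.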
-
David Gagne
January 15, 2011 at 11:41 pm in reply to: RAID- any benefit to having multiple RAID arrays?
1. If speed on your startup volume is an issue, buy a single SSD for it.
2. What kind of RAID controller does your system have? If you have a single drive as your OS, can you still have a 4-drive RAID 5? I wouldn't do RAID 0 for production unless you have realtime backups or your work is amateur-only with no money involved.
-
David Gagne
January 15, 2011 at 11:34 pm in reply to: Best option to build a RAID, Drobo Pro is to slow.
20TB? Uncompressed? Get a real enterprise-grade RAID, like an Active Storage or NexSan SataBoy. You'll need a fiber card or 10Gb NIC. Then you'll get the kind of speeds and storage capacity you need. Somewhere in the $20-30k range.
I'm sure there are some other RAIDs you can get in the $15k range, but I wouldn't mess with anything less than that unless your requirements are lower (compressed footage).
Or! If you have lots of time, keep your drobo, build an internal raid, and then just copy it back and forth in chunks. A horrible workflow, but big money savings.
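To see why uncompressed work pushes you to Fibre Channel or 10Gb Ethernet in the first place, the stream's data rate is easy to estimate. A sketch assuming 10-bit 4:2:2 HD, which averages 20 bits per pixel:

```python
# Data rate of an uncompressed video stream. 20 bits/pixel corresponds
# to 10-bit 4:2:2 chroma sampling; adjust for other formats.

def uncompressed_mbps(width, height, fps, bits_per_pixel=20):
    """Uncompressed stream data rate in megabytes per second."""
    return width * height * fps * bits_per_pixel / 8 / 1e6

rate = uncompressed_mbps(1920, 1080, 29.97)
print(f"~{rate:.0f} MB/s per stream")  # ~155 MB/s
```

One stream alone saturates gigabit Ethernet (~110 MB/s usable in practice), and multi-stream editing is what makes 8Gb fiber or 10GbE the sensible floor.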
-
If you're trying to install Snow Leopard using the disks from your older Mac Pros, that won't work. Those disks are coded to be installed on specific hardware. If you go out and pay $30 for a retail disk, that will probably work.
Depending on what the server will be responsible for, it may or may not make sense to use the older one as the server… what kind of traffic will this server see?
-
Cool, I’ll look into CatDV. I think I was interested in FORK because of the multi-cam ingest/tagging capabilities — are there other products to handle that? Can CatDV do some of that? Also does it work with After Effects at all?