Creative Communities of the World Forums

The peer to peer support community for media production professionals.


  • Maxx Digital Evo 4K 12TB

    Posted by Jeff Smith on May 28, 2009 at 5:22 pm

    Anyone know anything about this product? I can’t find anything on their website that describes how it works. Nothing about RAID levels or connectivity. Although there is a convenient Buy Now! button.

    6 Members · 20 Replies
  • 20 Replies
  • Bob Zelin

    May 28, 2009 at 5:57 pm

    What would you like to know? The Maxx Digital EVO 4K 12TB is
    a single enclosure that holds 12 Hitachi 1 TB Saturn enterprise-series disk drives. The host controller is the ATTO R380 card. This is a RAID 5 (or RAID 6) controller card that is used by other companies as well, like AVID Technology, in their current products.

    The ATTO R380 card supports SAS expansion, so if you buy the 12 terabyte and run out of room, you can buy another 12 terabyte (or 8 terabyte, or 16 terabyte) and simply daisy-chain them to keep adding drive volumes to your current system. You can use up to 128 disk drives. This is a function of the ATTO R380 and the SAS/SATA expansion port that is native to the Maxx Digital drive arrays.

    These drive arrays can be used with the Maxx Digital Final Share system as well, which means that if you purchase this product and then realize that you want shared storage 6 months from now, you can use everything that you own and simply add a managed ethernet switch and a multiport ethernet card. You need to add nothing else to have shared storage (other than a dedicated Mac Pro to act as a server for your shared storage system). You will NOT have to reinitialize your drive arrays for this application – you can just plug it in and keep using your existing media for shared storage.

    What else do you want to know – just ask, and I will tell you.

    Bob Zelin

  • Bob Zelin

    May 28, 2009 at 6:02 pm

    This is info on the ATTO R380 and what it can do.

    https://www.attotech.com/expressSASr380.html

    Please let me know if you need any further information. I can answer all of your questions.

    Bob Zelin

  • Jeff Smith

    May 28, 2009 at 7:06 pm

    Thanks very much Bob, very helpful.

    Jeff

  • Greg Leuenberger

    May 29, 2009 at 6:55 am

    Hi Bob, I’ve been lurking around checking these threads out for a while. Couple quick questions:

    * Do you just use one GigE port from the Switch to each Mac Pro Workstation? If so is it dedicated? (in other words use the Mac Pro’s other GigE port to go to the switch you use for other network activity, email, etc…) Or do you just run everything through the one switch?

    * I noticed the EVO 4K is a 16 bay chassis…are you saying there’s a 12 bay available?

    * I have a 4 Core Xserve acting as a file server right now (3 drive raid 5 with the built in apple raid card). Can I use the Raid card instead of the ATTO R380? If not I’m guessing I can put the R380 in the XServe and the PEG6 in the two open PCI X8 slots..(it’s the previous gen. XServe so 2 x8 PCIe slots…not x16)

    I’m sick of direct attached storage…I use the Xserve to serve 3D projects and compositing files to my workstations and renderfarm and it’s beautiful. I hate how projects get scattered as soon as I go local storage for editing.

    I’m a little hesitant about ProRes HQ…I’ve had issues (gamma shifting, artifacts..) when converting to h264 and flv…and my clients ALWAYS want those formats…for review and delivery. So I need to investigate that a little further.

    A typical scenario would be to have 2 edit stations, a couple render machines and maybe 1 or 2 3D machines (not a lot of bandwidth, but having to save some large 150+MB files every half hour or so) on the shared storage at once (the render machines would be loading 150+MB files..but only saving 5MB images every 5 min or so.

    Lastly….let’s say an affordable 10Gig Switch comes out in two years (fantasy..I know) can I just drop in a 10GB PEG 6, 10GIG cards in the Macs and the new switch and be in business? I always thought I would wait until 10GigE was available (and that it would be available in 2009…..looks like somebody put the brakes on it…) – but I’m sick of waiting.

    best,

    Greg

    Greg Leuenberger
    CEO
    Sabertooth Productions, Inc.
    http://www.sabpro.com

  • Matt Geier

    May 29, 2009 at 7:10 pm

    Hi Greg,

    I participate here with Bob to help assist with questions like yours. Bob knows us Small Tree people well. He can vouch! 🙂

    Question:
    Do you just use one GigE port from the Switch to each Mac Pro Workstation? If so is it dedicated? (in other words use the Mac Pro’s other GigE port to go to the switch you use for other network activity, email, etc…) Or do you just run everything through the one switch?

    Answer:
    The reality is you can do it both ways. Because managed switches are very capable and built specifically for handling a lot of data and traffic, one switch will suffice: you can keep the traffic separate (video editing / administrative, email, DNS, etc.). You can look at Small Tree’s ES4524D Managed Gigabit Ethernet Switch! The best switch for price/performance on the market!

    Question:
    I have a 4 Core Xserve acting as a file server right now (3 drive raid 5 with the built in apple raid card). Can I use the Raid card instead of the ATTO R380? If not I’m guessing I can put the R380 in the XServe and the PEG6 in the two open PCI X8 slots..(it’s the previous gen. XServe so 2 x8 PCIe slots…not x16)

    Answer:
    You run a risk here if you make the choice to do this.
    The RAID cards and storage that people spec are typically built to the specifications required for passing files back and forth, NOT REAL-TIME PERFORMANCE. Many of them go fast (bandwidth). However, the fastest ones tend to have the worst real-time characteristics, and even 2 or 3 streams of low-bandwidth video tip them over (latency).

    Question:
    Lastly….let’s say an affordable 10Gig Switch comes out in two years (fantasy..I know) can I just drop in a 10GB PEG 6, 10GIG cards in the Macs and the new switch and be in business? I always thought I would wait until 10GigE was available (and that it would be available in 2009…..looks like somebody put the brakes on it…) – but I’m sick of waiting.

    Answer:
    10GbE CAT6 cards are out and available for Windows and Linux, and for Mac (Small Tree part number PETG1-C, SRP $1,301.00). All of the current cards are single-port units. I speculate that as 10Gb over CAT6 moves along, multi-port units will become available.

    The real issue here is waiting on switch vendors to release CAT6 10Gb switches. There are a few vendors out there working on some now. Those of us watching this expect to see 10Gb CAT6 really starting to surface closer to the end of Q4 2009.

    Some comments about ProRes HQ:
    It is possible to edit ProRes HQ over Gigabit Ethernet in real time. You can comfortably fit two streams of ProRes HQ on a single Gigabit wire (approx. 60 MB/sec of ProRes HQ data). If you are not able to do this today, there could be several reasons why.

    It’s best to find a solution that will do this for you and that has been engineered to do this kind of work.
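As a rough sanity check on those numbers, here is a back-of-the-envelope sketch. The data rates are assumptions typical for 1080-line ProRes 422 HQ and a tuned GigE link, not figures measured in this thread:

```python
# Back-of-the-envelope: how many ProRes 422 HQ streams fit on one GigE link?
# Assumed figures (not from this thread): ProRes 422 HQ 1080i ~= 220 Mbit/s
# per stream, and a practical GigE payload of ~100 MB/sec after protocol
# overhead.

PRORES_HQ_MBIT = 220           # Mbit/s per stream (approximate)
GIGE_PRACTICAL_MB = 100        # usable MB/sec on a tuned GigE link

stream_mb = PRORES_HQ_MBIT / 8              # ~27.5 MB/sec per stream
streams = int(GIGE_PRACTICAL_MB // stream_mb)

print(f"One ProRes HQ stream: {stream_mb:.1f} MB/sec")
# Two streams ~= 55 MB/sec, close to the "approx. 60 MB" quoted above.
print(f"Two streams: {2 * stream_mb:.1f} MB/sec")
# Arithmetically three fit, but two is the comfortable real-world number.
print(f"Streams per GigE link: {streams}")
```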

    Matt G
    Small Tree
    651-209-6509 x 1

  • Bob Zelin

    May 29, 2009 at 8:46 pm

    Hi Greg –
    I will respond to your questions as well –

    * Do you just use one GigE port from the Switch to each Mac Pro Workstation? If so is it dedicated? (in other words use the Mac Pro’s other GigE port to go to the switch you use for other network activity, email, etc…) Or do you just run everything through the one switch?

    REPLY – although you can hook it up this way (one port per Mac), I NEVER EVER DO THIS. I always link aggregate all 6 ports of the Small Tree PEG6 to 6 ports (19-24) on the Small Tree switch, so I have a nice giant data pipe. Now I can plug my individual clients into the 18 open ports on the switch, and I have 70 MB/sec of bandwidth to each Mac FCP client. I always use the Small Tree ES4524D switch these days. If you choose another switch, it must support jumbo frames, flow control, and dynamic link aggregation.
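The arithmetic behind that trunked-pipe layout can be sketched as follows. The port counts come from the description above; the usable throughput per GigE port and the client count are assumptions for illustration:

```python
# Sketch of the link-aggregation math: a 6-port GigE trunk from the server
# to the switch, with each client on its own single GigE port.
# The per-port usable throughput and client count are assumptions.

PORTS_IN_TRUNK = 6
GIGE_PRACTICAL_MB = 100        # assumed usable MB/sec per GigE port
CLIENTS = 8                    # e.g. 8 FCP clients on the open switch ports

trunk_mb = PORTS_IN_TRUNK * GIGE_PRACTICAL_MB      # aggregate server pipe
# Each client is capped by its own GigE port, or by its share of the trunk,
# whichever is smaller.
per_client_cap = min(GIGE_PRACTICAL_MB, trunk_mb / CLIENTS)

print(f"Aggregate server pipe: {trunk_mb} MB/sec")
# ~75 MB/sec ceiling with 8 busy clients, in line with the ~70 MB/sec
# per-client figure quoted above.
print(f"Worst-case per-client share: {per_client_cap:.0f} MB/sec")
```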

    * I noticed the EVO 4K is a 16 bay chassis…are you saying there’s a 12 bay available?

    REPLY – I hate the Maxx Digital website. Maxx Digital sells everything – 4 bay, 8 bay, 12 bay and 16 bay, all in expandable and non-expandable versions. It is very difficult to navigate their site, so I just call them, and yell at them, and say “where the hell is the damn 12 bay!”. Their products are great, and they don’t ask stupid questions if you have a product failure, so support has been great as well. I still don’t know the difference between the EVO2K and EVO4K, nor do I care. They make expandable drive boxes, in all sizes.

    * I have a 4 Core Xserve acting as a file server right now (3 drive raid 5 with the built in apple raid card). Can I use the Raid card instead of the ATTO R380? If not I’m guessing I can put the R380 in the XServe and the PEG6 in the two open PCI X8 slots..(it’s the previous gen. XServe so 2 x8 PCIe slots…not x16)

    REPLY – this is what I have found. We tried so many RAID cards. The ATTO R380 is the only one so far with the fewest latency issues. What does this mean to you? It means that when you play out a 55 minute show at ProRes422HQ, and it stops at 35 minutes, you get angry and say “my drives are not working”. But then you test them with AJA System Test, and you find out that you are getting greater than 600 MB/sec – so you say “what the hell is going on here?”. Latency issues. That is why we now only use Hitachi Saturn enterprise drives, and the ATTO R380 card. Doing a test on local storage means nothing – I don’t care if it’s showing 620 MB/sec – if the host card can’t provide SUSTAINED PERFORMANCE without dropping out or timing out, your show stops and you get DROPPED FRAME ERROR even if your drives are fast. My clients with “other cards” luckily do 60 second commercials on TV, and short openings for shows, so they are not yelling at me.
    We learned this the hard way, with one of the infamous people here on Creative Cow.
    PS – I don’t think the PEG6 would even run in an x16 slot. All you need are 2 x4 slots.

    I’m sick of direct attached storage…I use the Xserve to serve 3D projects and compositing files to my workstations and renderfarm and it’s beautiful. I hate how projects get scattered as soon as I go local storage for editing.

    REPLY – DO NOT USE THIS SYSTEM AS A RENDER ENGINE OR RENDER FARM – it will fail for you. If you do our solution, render locally. Then transfer your data to the shared volume. We only use an ethernet port for connectivity, so you can keep your local drives right where they are now. We have a way that you can have a couple of systems do heavy renders using the shared volumes, but I HATE THIS METHOD – because it makes the setup more complicated, and I like SIMPLE SOLUTIONS. I don’t want to have to keep track of a complex system. So share away with all your systems – share away using ProRes422HQ and DVCProHD all day long, but try to render – especially long complex renders LOCALLY, and drag your finished effects work to the shared volume.

    I’m a little hesitant about ProRes HQ…I’ve had issues (gamma shifting, artifacts..) when converting to h264 and flv…and my clients ALWAYS want those formats…for review and delivery. So I need to investigate that a little further.

    REPLY – everyone is using ProRes422. If you are hesitant about ProRes422HQ, then you are hesitant about AVID DNxHD220, which is all AVID does. There ain’t no uncompressed HD on AVID – not unless you got a DS. And AVID Unity doesn’t handle uncompressed HD. If it’s good enough for every TV network in the US, it’s good enough for you. Anyway, we can’t do uncompressed HD – not without 10 Gig ethernet, and then it gets too expensive.

    A typical scenario would be to have 2 edit stations, a couple render machines and maybe 1 or 2 3D machines (not a lot of bandwidth, but having to save some large 150+MB files every half hour or so) on the shared storage at once (the render machines would be loading 150+MB files..but only saving 5MB images every 5 min or so.

    REPLY – forget it. We will fail with your two 3D machines rendering away on the single shared volume. We can have everyone share the same HD Media on your 5 or 6 MAC clients, and they can all read the same media at the exact same time, and it can all be ProRes422HQ, but if you start 2 3D machines (Maya, etc.) doing multi hour long renders using the shared volume, we will fail. Better see that now. Still want a system like this, without excuses, and the low cost, and ease of use and setup – RENDER LOCALLY.

    Lastly….let’s say an affordable 10Gig Switch comes out in two years (fantasy..I know) can I just drop in a 10GB PEG 6, 10GIG cards in the Macs and the new switch and be in business? I always thought I would wait until 10GigE was available (and that it would be available in 2009…..looks like somebody put the brakes on it…) – but I’m sick of waiting.

    REPLY – when 10Gb switches are affordable, your current Mac Pros will be useless. I thought that 10Gb would be commonplace by now, but it’s not. I have never built a 10Gb SAN system, so I cannot tell you how it works, and what the bugs will be. We don’t have clients that are asking for uncompressed HD, so it’s not worth my research now – besides, no one could afford it. We have enough people that die when they have to spend $3000 for a Mac Pro as a server (“can’t we use our old G5?”). So will these people buy into 10Gig – absolutely not, not at these prices. And even if it was cheap, uncompressed files are MASSIVE, so you would need multiple large drive arrays, and the single most expensive part of this system is the drive arrays.

    Bob Zelin

  • Greg Leuenberger

    May 29, 2009 at 10:45 pm

    Thanks a lot for the info guys…hmmm, quick follow up. Do you recommend rendering locally and copying back to the array even if we’re talking about a fiber array? What is it about rendering that screws up the flow of data?…when you render a sequence you’re writing data far slower than if you were reading live streams…so does it have something to do with the array being unable to write a rendered file while other users are reading off of it? Just seems a little strange.

    I’ll have to do a little more research…I do like ProRes HQ…it’s the converting the final renders to h264’s and flv’s that I’ve had issues with (they’re pretty well documented..I just need to take another look at it).

    Man…I just want one storage ‘hub’ for all my projects..3D and editing…as soon as editors start saving stuff locally and copying assets back and forth the projects start getting screwy..and a lot of editors aren’t exactly organized (stop saving all the shit to the desktop, we have project directories for a reason!)

    Anyway, thanks for your help.

    best,

    Greg

    Greg Leuenberger
    CEO
    Sabertooth Productions, Inc.
    http://www.sabpro.com

  • Bob Zelin

    May 30, 2009 at 2:08 pm

    Do you recommend rendering locally and copying back to the array even if we’re talking about a fiber array?

    REPLY – if you are using ethernet and link aggregation to connect to your clients, then YES – render locally. The fibre array is not going to do anything for you. You want fast (like 10Gig fast, right now)? Choose a Fibre solution like Facilis TerraBlock. This will perform faster than what we do. People choose our solution because it is INEXPENSIVE and EASY. Maxx Digital drive arrays are as fast as any fibre array; however, our ethernet connection to the clients is NOT as fast as a Fibre connection (or 10Gig connection). If you SPEND MORE MONEY you can get what you want. You want cheap – you choose what we are doing.

    What is it about rendering that screws up the flow of data?…when you render a sequence you’re writing data far slower than if you were reading live streams…so does it have something to do with the array being unable to write a rendered file while other users are reading off of it? Just seems a little strange.

    REPLY – I am not talking about the ideal solution here. If you had unlimited money, you would put in a big fancy fibre channel solution and be done with it. If you are looking at budget solutions that are based on 1Gig ethernet, there are limitations.
    No one ever said that we are the exact same solution as a Fibre Apple Xsan or Facilis TerraBlock solution. We just offer a simple, inexpensive alternative that can do COMPRESSED HD (and uncompressed standard def).

    Man…I just want one storage ‘hub’ for all my projects..3D and editing…as soon as editors start saving stuff locally and copying assets back and forth the projects start getting screwy..and a lot of editors aren’t exactly organized (stop saving all the shit to the desktop, we have project directories for a reason!)

    REPLY – your comment reads to me as “Man… why can’t I just get one damn solution that does everything, that doesn’t cost $200,000?”
    Even with a $200,000 solution (or an even more expensive one) – you know what your real expense is? LABOR – QUALIFIED LABOR – you will NEVER EVER EVER EVER EVER find a solution that can be run by unorganized idiots. Do you actually expect to say “gee, I spent $400,000 with the Apple Enterprise Group for a killer SAN system that can do uncompressed HD – why do I have to hire qualified editors that know how to organize their workflow and keep track of their files”? What kind of statement is that? EVEN WHEN EDITING SYSTEMS AND SAN STORAGE BECOME FREE, qualified people will ALWAYS be the backbone
    of our industry. This is not McDonald’s – and even if it were, it is run by qualified, highly paid managers that know how to keep the low-level employees organized. Just like managing files.

    I am emotional about this subject, because every week I see totally qualified editors being laid off, only to be replaced by kids that have no idea what they are doing, and then companies saying “how come nothing is working?”.

    Bob Zelin

  • Greg Leuenberger

    May 30, 2009 at 8:05 pm

    Lol…goodness, settle down there Beavis : )

    Filling in the blanks, I’m guessing the bottleneck is the switch… since you *should* be able to have 3 computers write files at a couple of MB/s while 2 others read files at (at most) 50 MB/s on an array capable of 500 MB/s. Even then it seems the switch should be able to handle it (easily), but whatever…
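Greg's tally works out like this. All figures are his own estimates restated from the paragraph above, and the point at the end is the latency caveat raised earlier in the thread:

```python
# Rough tally of the scenario above: a few render machines writing slowly
# while two editors read streams, on an array rated around 500 MB/sec.
# All figures are Greg's estimates, not measurements.

writers, write_mb = 3, 2       # ~2 MB/sec each while saving frames
readers, read_mb = 2, 50       # at most ~50 MB/sec each
array_mb = 500                 # array capable of ~500 MB/sec

demand = writers * write_mb + readers * read_mb
print(f"Total demand: {demand} MB/sec of {array_mb} MB/sec available")
# Bandwidth-wise this is trivial. The failure mode described earlier in the
# thread is latency: long renders issue many small writes that can stall
# real-time playback reads even when raw bandwidth is ample.
```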

    Anyway, yes….reading from shared storage but rendering locally is not a good solution…it’s merely a cheap solution. I’m not looking for cheap (never said I was) I want (as you put it) one damn system that does everything. Fiber adds a layer of complexity to the system I want to avoid…I’ll wait for 10GB Ethernet (you also added an extra zero to your fiber SAN estimates)..for the system I described I’d probably be looking at around $30K for fiber instead of $15K for GB Ethernet..

    Regarding editors… yes, I understand where you are coming from. I’m a 38 year 3D guy… and I spend a vast amount of effort staying current with the advancement of 3D software and technology (editing is child’s play compared to 3D in regard to the vast amount of technical knowledge and software expertise required… no offense to any editors here).

    The facts of production are that you are often rendering and re-rendering projects at the last minute… having to copy these back over to the shared storage system (which is where the archives come from) is a bad solution. For what it’s worth, some “kids” are VASTLY more talented than the older, more technically proficient guys they are replacing (this applies more to graphics)… and if you have ever been in the position of hiring (I get the feeling you haven’t) then you have to factor in talent over knowing what a SAN is, and find an IT solution to accommodate your talent until they’re up to speed. The Ethernet solutions are appealing not just because they are cheap, but because they make it drop-dead easy to have all the various assets of a project (3D… motion graphics… AE files… audio… PDF scripts… etc.) in ONE place, available to everybody and easy to archive, on just a shared volume sitting on your desktop… easy to admin… I’ve done it myself with PCs and Macs for the last 8 years with zero problems… I do NOT want to manage a Fiber SAN. That to me is a much more compelling reason to buy Ethernet over Fiber than saving a few bucks (and it’s really not that much).

    FWIW, most of the production houses I work with that primarily do editing have their graphics guys off in a corner working locally…my place is the opposite (mostly 3D and Motion Graphics) with 2 edit stations for finishing..I need one data repository and it has to be easy.

    thanks for the info…I guess I’ll wait for 10GB Ethernet…

    -Greg

    Greg Leuenberger
    CEO
    Sabertooth Productions, Inc.
    http://www.sabpro.com

  • Bob Zelin

    May 30, 2009 at 8:54 pm

    Greg –
    there is a way around this, and we have done it two times, but I don’t like it. When you get a multiport ethernet card, you can dedicate two of the ports to a particular user and create a separate “bond” for that one user who is doing intensive rendering. We did this recently using a Small Tree PEG6. This is a 6 port ethernet card, and we created 4 bonds – pairs of two on the PEG6, and we tied up the two ports of the native Mac Pro as well. These guys are doing rendering all day long. But now I have to create separate static IP addresses for each bond, and assign users to the separate IP address (of the individual bond) so that no one else is on “his” bonded (link aggregated) ethernet ports. So now, that is fast enough for the rendering.
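A per-bond plan along those lines might look like this sketch. The bond names, subnet, and per-port throughput figure are hypothetical examples for illustration, not the actual configuration described above:

```python
# Sketch of a dedicated-bond layout like the one described: four two-port
# bonds (three pairs on the PEG6 plus the Mac Pro's two built-in ports),
# each bond given its own static IP so one render client owns its pipe.
# Bond names, subnet, and per-port throughput are hypothetical assumptions.

import ipaddress

PORTS_PER_BOND = 2
GIGE_PRACTICAL_MB = 100                      # assumed usable MB/sec per port
bonds = ["bond0", "bond1", "bond2", "bond3"]

subnet = ipaddress.ip_network("192.168.10.0/24")   # example private subnet
hosts = list(subnet.hosts())

for i, bond in enumerate(bonds):
    # Each bond gets a distinct static address; clients are pointed at
    # "their" bond's IP so no one shares its aggregated pair of ports.
    print(f"{bond}: static IP {hosts[i]}, "
          f"~{PORTS_PER_BOND * GIGE_PRACTICAL_MB} MB/sec dedicated")
```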

    Does it work – yes. Is it all of a sudden “complex”, and not easy like a single 6 port link-aggregated trunk – you bet it is.
    The whole point of my discussion is to take the expense and complexity out of doing this. IT SHOULD BE SIMPLE. Are there tricks – yes, but I hate that. I want to promote INEXPENSIVE, EASY TO USE, EASY TO SET UP shared storage – not some convoluted nightmare that requires knowledgeable people to set up and operate.

    Bob Zelin

