Forum Replies Created

  • Gavin Greenwalt

    February 15, 2013 at 7:28 pm in reply to: SAN software for every machine? Or just host?

Just a follow-up: I ignored everyone's advice, bought two 20 Gb InfiniBand cards for $30, a $30 cable, and am getting about 300-400 MB/s with IPoIB using the OpenFabrics drivers.

Problem solved. So far all of my performance tests in Nuke and Windows seem to be working exactly as expected.
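For anyone wanting to reproduce this kind of check, a minimal sketch of a sequential-write throughput test is below. It is not the test from the thread; the function name, sizes, and target path are all placeholders, and you would point the path at a file on the IPoIB-mounted share:

```python
import os
import time

def write_throughput_mb_s(path, total_mb=256, chunk_mb=8):
    """Write total_mb of zeros to path and return the achieved MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data out of the page cache before timing stops
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the scratch file
    return total_mb / elapsed

# Example (path is a placeholder for a file on the network share):
# print(write_throughput_mb_s("/mnt/ipoib_share/scratch.bin"))
```

Note that a single large sequential write is only a rough proxy; real compositing playback is read-heavy, so a matching read test would be the other half of the picture.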

  • Gavin Greenwalt

    February 1, 2013 at 5:08 pm in reply to: SAN software for every machine? Or just host?

Re: RAID 10. I agree it's a weird choice, but our system builders have a new option they want to test, and since a standard RAID 5/6 would only give us more storage, I'd rather err on the side of conservative capacity predictions when laying out our needs. I'm more than happy to entertain their curiosity with a demo system and see how it performs under testing.

So what you're saying is, "Don't buy InfiniBand, because the hardware is horrifically unreliable"? Good to know.

As to link aggregation not working between clients: are these people crazy? https://www.thetechrepo.com/main-articles/569-link-aggregation-aka-trunking-or-bonding-directly-between-two-ubuntu-linux-servers
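For reference, direct host-to-host bonding of the kind the linked article describes can be sketched as follows. This is an assumed configuration, not one from the thread: interface names, the round-robin bond mode, and addresses are placeholders, and the same commands (with the peer address) run on the second host:

```shell
# Sketch: round-robin bonding of two directly cabled NICs (Linux, iproute2).
# eth1/eth2 and the 10.0.0.x addresses are placeholders.
modprobe bonding
ip link add bond0 type bond mode balance-rr
ip link set eth1 down
ip link set eth1 master bond0
ip link set eth2 down
ip link set eth2 master bond0
ip link set bond0 up
ip addr add 10.0.0.1/24 dev bond0   # use 10.0.0.2/24 on the peer host
```

Round-robin (balance-rr) is one of the few modes that can push a single flow beyond one link's speed, which is why it comes up for two-host setups; switch-based modes like LACP balance per-flow instead.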

Again, we don't need editing-system levels of bandwidth. We're working as-is; I'm just trying to find a way to get a little more oomph without spending too much more. SuperShare seemed like the perfect solution, but CalDigit doesn't seem interested in pushing it anymore, so I'm waiting to hear back from their sales rep on what the state of the union is there.

  • Gavin Greenwalt

    February 1, 2013 at 6:20 am in reply to: SAN software for every machine? Or just host?

$100k would be well outside our budget. Our studio is 6 workstations and 20 render nodes. We already have one file server, dual-bonded GigE to the switch that feeds everything. This works well enough for 3D animation projects and serves the farm well, but we're looking for something to boost performance for the two compositing workstations. I was thinking about running a small pocket-sized SAN between the file server and the two compositing workstations so that they wouldn't need to sync renders to and from their local RAIDs.

We work on short-form commercial work, so we generally don't have more than 3 TB active at any one time; we're looking to roll out an 8-bay server in RAID 10 for 8 effective TB of storage. We have a second server which will be lower-performance, deeper storage for archived projects. The current plan is to simply run quad-bonded Ethernet to each of the compositing workstations, but we'll need another switch for all of the extra cabling, and I figure if we're already going through the hassle, why not consider a SAN for the two systems that need high-performance access.
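The capacity arithmetic behind that 8-bay plan can be sketched as below. The 2 TB drive size is inferred from the 8-effective-TB RAID 10 figure, not stated in the thread, and the function name is mine:

```python
def usable_tb(bays, drive_tb, level):
    """Rough usable capacity in TB for common RAID levels (ignores filesystem overhead)."""
    if level == "raid10":
        return bays // 2 * drive_tb    # half the drives are mirror copies
    if level == "raid5":
        return (bays - 1) * drive_tb   # one drive's worth of parity
    if level == "raid6":
        return (bays - 2) * drive_tb   # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

print(usable_tb(8, 2, "raid10"))  # 8  (matches the 8 effective TB above)
print(usable_tb(8, 2, "raid5"))   # 14
print(usable_tb(8, 2, "raid6"))   # 12
```

This shows why RAID 5/6 "will only give us more storage": with the same 8 bays, parity layouts yield 12-14 TB usable versus 8 TB for RAID 10, so sizing the plan around RAID 10 is the conservative estimate.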

    I don’t see how it would end up being $100k for those needs unless I’m massively overlooking a bunch of hidden costs to setting up a SAN. Which I very well may be, and hence the nature of my inquiry. 😀
