January 6, 2015 at 10:56 pm
I like ZFS on Solaris but it is a bit of a pain to manage and integrate in a Mac environment.
OpenZFS does run on OS X, and the thought of ZFS shared out via a Mac is appealing, especially over AFP and SMB3. The problem is ZFS LOVES RAM, and the 64GB max on a Mac Pro is not ideal. Our Solaris box has 256GB. The only way to get that much on a Mac is to go Hackintosh.
Anybody tried a zfs Mac server?
January 7, 2015 at 3:46 am
Hi John –
I read your post with great interest. I have just heard from a company in LA that is running a new Solaris install with a Mellanox 10G SFP+ switch (using Premiere CC 2014 on the client Mac computers with NFS) and they are having LOTS of problems.
Apparently Adobe likes this company, but things are still not working properly. Personally, I don’t know a damn thing about Solaris.
Rescue 1, Inc.
January 8, 2015 at 3:37 pm
Interesting comment. So the Solaris system was the NAS head? Was this straight-up Oracle Solaris or OpenSolaris (aka illumos)? What kind of problems?
What was the storage behind it? Oracle has an interesting ZFS-based storage system with killer reporting. The storage shelves are very reasonably priced, believe it or not, most likely due to the lack of RAID controllers, as ZFS handles all of that in software.
Managing Solaris is a bit like managing OS X without a GUI. It’s just another Unix variant without any of Apple’s OS shenanigans.
February 24, 2015 at 1:21 am
Could always try FreeNAS if you want a simple, relatively easy-to-manage ZFS server. As you may know, OpenZFS was forked after Oracle bought Sun Microsystems and the team that developed ZFS there broke up. I built a 180TB raw / 155TB usable system using FreeNAS. Works great.
February 24, 2015 at 2:45 pm
I would like to try FreeNAS. Why did you choose FreeNAS over the turnkey TrueNAS system? Did you have storage chassis already in hand?
February 25, 2015 at 12:22 am
I simply wanted to roll my own. I have a need to know the ‘why’ behind everything and was committed to learning it, however long it might take. And before I used it in my production environment, it was my little science experiment first. However, if I were not so inclined, I would opt for a turnkey system tuned for M&E, with support from companies that do this for a living. If you need suggestions, look to Bob Zelin’s posts, as they are dead-on.

I will also second what Bob has stated: conventional IT knowledge is a good start, but it is not nearly enough to cobble together a properly running storage system for the large, non-compressible files used in M&E. It takes an F*-ton of research and trial-and-error tweaking that most people don’t have the patience or knowledge for. It can be very rewarding for the few who can ‘lab it up’ and afford to tinker and learn for a long time before needing to rely on it.

For anyone that needs reliability out of the box, so to speak, and doesn’t want to be troubleshooting permissions, TCP stack parameters, variations in SMB/AFP/NFS implementations, driver issues, etc., I highly recommend going with a reputable solution and ponying up the cash. You will actually save money in the long run.
My grandmother told me once when I was younger, “Sometimes the cheapest solution ends up being the most expensive.” Being young at the time, I discounted her comment for a while. Only later did I realize the wisdom in it.
EDIT: After having gone through older posts, it looks like you have been knee-deep in the weeds on your ZFS journey for over a year already. So for you specifically, forget my whole “if you’re not up for the challenge, don’t go down that road” speech above. To everyone else reading this, it is sound advice.
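To give a flavor of the TCP-stack tweaking mentioned above, here is a hedged example of FreeBSD/FreeNAS network tunables people commonly raise for 10GbE media workloads. The values are illustrative starting points only, not recommendations; anything like this needs testing against your own hardware and workload:

```
# /etc/sysctl.conf fragment (FreeBSD/FreeNAS) -- illustrative values only
kern.ipc.maxsockbuf=16777216        # allow larger socket buffers overall
net.inet.tcp.sendbuf_max=16777216   # raise the TCP send-buffer auto-tuning cap
net.inet.tcp.recvbuf_max=16777216   # raise the TCP receive-buffer auto-tuning cap
net.inet.tcp.sendspace=262144       # larger default send window
net.inet.tcp.recvspace=262144       # larger default receive window
```

Bigger buffers help keep a 10GbE pipe full on long, fat transfers, but oversized defaults waste memory with many clients, which is exactly the kind of trade-off the trial-and-error above is about.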
February 26, 2015 at 4:12 pm
Thanks for the detailed response. Yes, we have a Coraid system that uses an Oracle server running Solaris. The current 108-drive system can sustain 30 ProRes 140 ingests and 50 growing-file playbacks. That adds up to 1700MB/sec of “do or die” bandwidth. Unfortunately, Coraid is on the ropes, so we are looking for a way to duplicate the success we’ve had using another ZFS implementation.
There are really three approaches to choose from:
1) A turnkey system with limited options, like the Oracle “appliance” or iXsystems.
2) A wide range of supported hardware options using Nexenta, which is based on illumos.
3) A complete roll-your-own with FreeNAS (BSD) or Solaris, but without all the tools the Oracle appliance provides or any commercial support from FreeNAS.
We did a lot of testing with the Coraid POC and found a few settings that unlocked performance, so I do think we could build a nice FreeNAS system, but it is daunting to work without a net, support-wise.
What kind of storage chassis are you using? The Coraid chassis are OEM’d Supermicro: https://www.supermicro.com/products/system/4U/6048/SSG-6048R-E1CR36N.cfm
We are very happy with the performance of these chassis, and I attribute that to the 5 internal SAS controllers Coraid decided to include. I believe the 8-drives-per-SAS-card ratio reduces latency, which is the main cause of dropped frames.
The drawback is the SAS cards are not active/active, or even active/passive like iXsystems’. Because of this, we designed a six-chassis system with 6-drive vdevs, one member from each chassis, meaning we can lose two chassis without loss of data.
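For readers following along, that cross-chassis layout implies raidz2 vdevs: with one disk per chassis in each 6-disk vdev, losing two whole chassis removes only two disks from any vdev, which raidz2 tolerates. A sketch of how such a pool command could be assembled (device names are hypothetical placeholders, not real Coraid/Supermicro paths, and only 3 of the vdevs are shown):

```shell
# Build a zpool create command where each 6-disk raidz2 vdev takes
# exactly one disk from each of the 6 chassis.
cmd="zpool create tank"
for bay in 0 1 2; do                      # 3 vdevs shown; a real build has more
  cmd="$cmd raidz2"
  for chassis in 0 1 2 3 4 5; do
    cmd="$cmd /dev/chassis${chassis}-bay${bay}"
  done
done
echo "$cmd"
```

The printed command would then be run (or reviewed) by the admin; generating it this way makes the one-disk-per-chassis rule explicit.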
How did you architect your FreeNAS system?
Did you consider Nexenta?
March 30, 2015 at 11:31 am
Bumping an old thread here, but it’s still relevant.
Just wondered if you’d tried ZFS on Linux? It is also based on OpenZFS.
I mention this because we’re just finishing development of indiestor-pro, which takes care of user management plus Avid-specific and generic workspace management. Furthermore, the system is trained to recognise ZFS pools, which is kind of cool 🙂
The web interface is designed for editors, not engineers, so it’s actually really simple to operate. Each workspace has ZFS quota support, as well as easy-to-understand read/write lists. Behind the scenes we automatically synchronise each workspace with AFP and SMB, so it all just works.
Version 0.1 should release in April, with a heap of backup/reporting automation to come after that. It’s an open-source tool (GPL-licensed), but we’ll be charging a trivial amount for repository access.
I’d be happy to fire some testing packages over to you if you like…
OS support is for Debian Wheezy and Ubuntu 14.04.
My email is:
indiestor.com – “Avid project sharing, shared!”
October 30, 2015 at 10:42 pm
Looking at the possibility of turning my GB Labs Space into a ZFS server. Sounds counter-intuitive, but it’s 4 years old, bought and paid for, apart from the software upgrade and SLA needed to make it play with the latest OS X, which would cost about £5k. Not worth it.
So thinking… nuke the Space and there’s a nice Supermicro chassis with matched drives, 10Gb card, processors, etc. It would make a nice nearline backup or file server for 3D rendering, that sort of thing.
November 6, 2015 at 5:54 pm
Just read your post, and to be honest the upgrade should cost a lot less than the figure you are mentioning.
I would be very happy to show you the new features in the software, as V3 is way above V2, if that is what you are on. Features like snapshot replication, FTP and HTTP sharing, plus the automation tools. There is so much more than just AFP services. All that, plus a performance boost with the new OS as well.
Hopefully we will hear from you.
VP of Product Management and Sales
GB Labs LLP
duncan (@) gblabs.co.uk