Forum Replies Created

  • Jon Thomasberg

    February 25, 2015 at 12:22 am in reply to: ZFS on OS X

    I simply wanted to roll my own. I have a need to know the ‘why’ behind everything and was committed to learning it, however long it might take; and before I used it in my production environment, it was my little science experiment first. However, if I were not so inclined, I would opt for a turnkey system tuned for M&E, with support, from companies that do this for a living. If you need suggestions, look to Bob Zelin’s posts; they are dead-on.

    I will also second what Bob has stated: conventional IT knowledge is a good start, but it is not nearly enough to cobble together a properly running storage system for the large, non-compressible files used in M&E. It takes an F*-ton of research and trial-and-error tweaking that most people don’t have the patience or the knowledge for. It can be very rewarding for the few who can ‘lab it up’ and afford to tinker and learn for a long time before needing to rely on it. For anyone who needs reliability out of the box, so to speak, and doesn’t want to be troubleshooting permissions, TCP stack parameters, variations in SMB, AFP, and NFS implementations, driver issues, etc., I highly recommend going with a reputable solution and ponying up the cash. You will actually save money in the long run.

    My grandmother once told me when I was younger, “Sometimes the cheapest solution ends up being the most expensive.” Being young at the time, I discounted her comment for a while. Only later did I realize the wisdom in it.

    EDIT: After having gone through older posts, it looks like you have been knee-deep in the weeds on your ZFS journey for over a year already. So for you specifically, forget my whole “if you’re not up for the challenge, don’t go down that road” speech above. To everyone else reading this, it is sound advice.

  • Jon Thomasberg

    February 24, 2015 at 1:59 am in reply to: What’s wrong with our NAS setup?

    Ted,

    Most 3TB 7200rpm drives operate at 80-100MBps individually (SATA toward the lower end, SAS higher). However, when you have 7 of them in a RAID6 (2 parity), the parity overhead means you are only looking at a best-case scenario of ~170MBps on reads, and your writes will be around ~30MBps. To put that into ethernet terms, that is just over 1.3 gigabits per second, best case, on a read operation.
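    As a quick sanity check on the unit conversion (illustrative only; the throughput figures above are rough estimates), here is how MB/s maps to Gb/s on the wire:

```python
def mb_per_s_to_gb_per_s(mb_s):
    """Convert megabytes/second to gigabits/second (decimal units)."""
    return mb_s * 8 / 1000  # 8 bits per byte, 1000 megabits per gigabit

# Best-case RAID6 read estimate from above: ~170 MBps
print(mb_per_s_to_gb_per_s(170))  # ~1.36 Gb/s, i.e. "just over 1.3 gigabits"
print(mb_per_s_to_gb_per_s(30))   # ~0.24 Gb/s on writes
```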

    RAID6 is good for having double parity, so the array can survive two simultaneous drive failures, but that protection carries a write penalty. Also, it might help you to understand that SATA (making an assumption here) can only do one operation at a time (read or write, not both). You will never saturate the 10Gig port with this RAID config.

    Certainly, flashing your switch to the latest stable release to fix bugs would probably help optimize the signal path. The 708 doesn’t have a ton of packet buffers, which will cause issues; I would recommend the 712T model at the very least for a 10gig switch. But your real issue is that you simply don’t have the spindle count (number of drives) to reap the full benefit of 10GBase-T ethernet.
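    To put a rough number on the spindle-count point (a sketch only; the ~85 MBps per-drive figure is an assumption from the range above, and controller, parity, and protocol overhead are ignored):

```python
import math

def drives_to_fill_link(link_gbps, per_drive_mb_s=85, parity_drives=2):
    """Rough count of RAID6 drives needed to saturate a link on reads.
    Ignores controller, parity, and protocol overhead -- illustrative only."""
    link_mb_s = link_gbps * 1000 / 8            # 10 Gb/s ~= 1250 MB/s
    data_spindles = math.ceil(link_mb_s / per_drive_mb_s)
    return data_spindles + parity_drives        # total, incl. two parity drives

print(drives_to_fill_link(10))  # ~17 drives -- far more than the 7 in this array
```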

    Hope this helps.

  • Jon Thomasberg

    February 24, 2015 at 1:30 am in reply to: Looking for a Solid IT/Network Company
  • Jon Thomasberg

    February 24, 2015 at 1:21 am in reply to: ZFS on OS X

    You could always try FreeNAS if you want a simple, relatively easy-to-manage ZFS server. As you may know, this was a fork project started after the breakup of the team that developed ZFS for Sun Microsystems when Oracle bought them. I built a 180TB raw / 155TB usable system using FreeNAS. Works great.

  • Channel bonding is certainly possible if you want to use more than one link between a host and a NAS (which is technically just another host acting as an iSCSI target or a CIFS, AFP, or NFS server), or between a host and a capable enterprise-class switch (we are talking Arista, Cisco Nexus, etc., not some cheap thing from your local big-box retailer). This is accomplished either via specialized software (as already cited) or by configuring channel bonding in ‘balance-rr’ mode (a.k.a. mode 0). Be aware that TCP packets will arrive out of order and require significant retransmits, reducing the aggregate bandwidth to less than the sum of the links, but it is still significantly more than a single link.

    Many of you have tried combining the throughput of two links using LACP and found it no faster than a single link. That is because LACP pins each flow to a single link, regardless of how many links are in the aggregate: traffic is segmented into chunks (sized by the maximum transmission unit, or MTU), each segment is given a sequence number, and keeping a flow on one link ensures the segments arrive in order for reassembly on the other end.

    So, back to ‘balance-rr’ (round-robin load balancing) mode: all the links in the bonded virtual interface must be the same speed, duplex, MTU, buffer size, etc., or it won’t work. This works in Linux and on the Mac. I am not certain how to accomplish it in Windows, or whether it is even possible.

    If you have linux, you can read up on the topic here:

    /usr/src/linux/Documentation/networking/bonding.txt

    …pay special attention to section 12.

    This is the beginning piece. However, if you are savvy in networking and feel comfortable editing .conf files at the Linux CLI, you should be able to accomplish this quite simply. On the other hand, if what you have read above sounds like Greek to you, you can: A) start reading up on it, learn it, and test it (on a sandbox, non-production system); or B) hire someone knowledgeable to do this.
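    For reference, a minimal sketch of what a balance-rr bond might look like in a Debian-style /etc/network/interfaces file. The interface names and addresses here are made-up examples, not from the original post; check bonding.txt and your distro’s docs for the exact syntax.

```
auto bond0
iface bond0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    bond-mode balance-rr
    bond-miimon 100
    bond-slaves eth0 eth1
```

    Remember that balance-rr only behaves as described when every slave link matches in speed, duplex, and MTU, per the post above.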
