Forum Replies Created

  • The eS Pro data sheet says they come with SATA Rev. 3 enterprise drives, which will have the proper SCT ERC setting for this use case. Given the cost of the drives and the effort to go with SAS, I’m slightly perplexed by the choice of SATA drives. But I doubt that’s related.

    The recurring theme is that the failure happens near the end of file copy, regardless of the sequence of the source used. So I’m thinking of three possible causes, only one of which is hardware related:

    a.) Apple has a rather significant file system bug on their hands, in 10.7.4 (at least), that causes it to face plant shortly after free space drops to 20-25% on a 12TB volume. Possible, but it seems the least likely of the three.

    b.) Firmware/driver bug. Something is being translated wrong between physical device LBAs and the logical LBAs presented to the file system. The file system is writing, writing, writing, and then upon reading back some bit of the file system, finds it’s been damaged, per the very first post in the thread. Buggy firmware is a significant cause of silent data corruption at the drive and controller level.

    c.) One of the disks has damage, possibly conveyance related. When sectors go bad due to damage, drive firmware automatically remaps them to reserve sectors. But once the reserves are used up, the drive must report a write error the very next time a persistently bad sector is written, which would be pretty much the same place every time. The controller tends to collapse the array when that happens.

    a.) and b.) are easy to test: borrow a completely different brand of SAS controller and repeat the copy. If it still fails, it’s a file system bug; if it doesn’t, suspect the original controller’s firmware or driver. While the ATTO R680 has a Linux driver, a Linux-based test with XFS or ext4, though interesting, presents an ambiguity: a failure there could still be a driver bug.

    c.) Run smartctl -x on all of the drives to see if there are, or have been, bad sectors (they go by various names in the attribute list). Ideally the drives should have extended offline tests run regularly, and they should have had the conveyance and extended offline tests run before being put into production.
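
    For reference, a minimal sketch of those checks with smartmontools, assuming Linux-style /dev/sdX device names (adjust for your OS and controller):

    # Attributes that track bad sectors, under their various names:
    smartctl -x /dev/sda | grep -iE 'reallocat|pending|uncorrect'

    # Kick off the self-tests (one at a time; starting a new test aborts a running one):
    smartctl -t conveyance /dev/sda
    smartctl -t long /dev/sda

    # Later, read back the self-test log:
    smartctl -l selftest /dev/sda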

    But in any case, it’s a nasty bug.

  • Chris Murphy

    August 26, 2013 at 2:41 pm in reply to: ZFS anyone?

    OpenVault puts 30 drives in 2U of space, and there are other options, all at Open Compute.

    Backblaze Storage Pod 3.0 has also been mentioned.

  • Chris Murphy

    August 24, 2013 at 9:28 pm in reply to: Home Brew NAS…

    If you’re really interested in computer hardware, operating systems, software, networking integration, file systems, etc., i.e. the classic definition of an amateur (you’re really serious about it because you’re willing to do it in limited free time vs other pursuits), then go for it. If you’re frustrated with tinkering to make things work, either suck it up (what doesn’t kill you makes you stronger) or decide this isn’t the hobby for you.

    As for the listed options, all are workable for home NAS, and so is NexentaStor Community. I’d weigh the strength of documentation and community support above all other factors. When something goes wrong, you’ll need help, not just want it. Next, I’d look at which ones best support the hardware you have. Then consider features, and tops on that list is ease of backup and restore. Scheduled backups and solid restores will allow all sorts of mayhem and still let you get back on track without data loss (or maybe minimal data loss). Basically, with a home unit you really don’t need much availability. The thing can be unavailable for a few days while doing a restore. What you don’t want is for your limited free time to be sucked up troubleshooting the restore process for three days.

    RE: ZFS vs ext4 vs XFS, the differences don’t really matter in this usage category. Two more relevant questions, which I haven’t spent the time to answer yet for the listed NAS packages, are whether any of them correctly set drive SCT ERC and device driver timeouts to be compatible with each other, and whether they disable drive write-back caching.
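
    For reference, a minimal sketch of checking both by hand on a Linux-based NAS (the device name is hypothetical):

    # Read the drive's SCT ERC limits (reported in deciseconds):
    smartctl -l scterc /dev/sda

    # Set 7 second recovery limits, safely under the kernel's default
    # 30 second command timeout:
    smartctl -l scterc,70,70 /dev/sda

    # The device driver timeout the ERC limit must stay under:
    cat /sys/block/sda/device/timeout

    # Disable the drive's volatile write-back cache:
    hdparm -W0 /dev/sda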

    Last, don’t forget the first word in NAS. Network. Just pretend I’m writing two more paragraphs on issues/concerns surrounding networking. DAS can be tedious and inelegant with a ton of files that exceed one disk’s capacity. But it’s straightforward, and you probably already know most of what you need to know.

  • Chris Murphy

    August 24, 2013 at 7:36 pm in reply to: ZFS anyone?

    Oh yes, this. I’m not really sure what to think of it yet. On the one hand, Apple should make the UI rather simple. It also ought to mean they feel their home-rolled version has matured enough that they can ditch AFP. (Or it means they need a psychiatrist.) But SMB isn’t simple.

    If the workflow is an all-Mac situation, it Should Just Work, right from the GUI. The cases where it might have problems would be integrating with Windows or Linux or BSD on the other end of the connection. And if I’m out in the weeds without a GUI push-button make-it-work solution, I think I’d rather get Samba built on the Macs, with its significant pile of documentation and support community, than learn Apple’s third flavor of SMB under the hood.

    Also, Microsoft has been contributing to the Samba project for a while now, so that ought to work pretty reliably with either 3.6.16 or 4.0.x. I suspect Apple’s implementation is a distinct subset of SMB capability, which is why there have been so many problems since 10.7.

  • Chris Murphy

    August 24, 2013 at 12:04 am in reply to: ZFS anyone?

    Yeah in terms of price/performance, direct attach storage is unbeatable.

    I think NTFS/JHFS+ over iSCSI on ZFS could confuse which aspects actually benefit from ZFS. If the server drives’ ECC fails to detect a read error, ZFS will catch and correct it, even while hosting a guest file system inside a ZFS sparse volume. But if there’s iSCSI link-level corruption, ZFS can’t protect against that. And since neither NTFS nor JHFS+ does data journaling, or uses checksums in its metadata or metadata journal, they can’t even protect themselves from such events (this isn’t limited to iSCSI, of course; the same issue applies to direct attach storage with these file systems).
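
    As an aside, a scrub is how you surface that kind of silent corruption on the ZFS side; a sketch, with a hypothetical pool name:

    zpool scrub tank        # read and verify every checksummed block
    zpool status -v tank    # the CKSUM column counts corruption ZFS detected
                            # (and, given redundancy, repaired)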

    Another advantage of NFS over iSCSI in a ZFS context is that the NFS export configuration for a dataset is stored in ZFS metadata, as the sharenfs property. So when the file system moves (zpool export/import, or zfs send/receive with properties preserved), the NFS configuration follows. Really though, the two are quite different, and it comes down to what you need it to do rather than performance considerations alone.
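
    A minimal sketch of that property, with hypothetical pool/dataset names and illumos/Solaris share_nfs option syntax:

    zfs set sharenfs=on tank/export                    # export config lives in ZFS itself
    zfs set sharenfs='rw=@192.168.1.0/24' tank/export  # or with specific options
    zfs get sharenfs tank/export                       # confirm the property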

    Interesting that you get 100MB/s via AFP over GigE. On the same physical setup I seem to top out at 55-65MB/s, where NFS async over GigE gets me 100+. But good to know it’s possible. I think I’d rather have oral surgery than configure Samba.

  • Chris Murphy

    August 23, 2013 at 4:44 pm in reply to: ZFS anyone?

    I get read/write speeds of 100+MB/s over a non-special GigE network, not even jumbo frames, using NFS with an async export. I see more downsides to iSCSI than upsides, mainly because iSCSI is easy to configure incorrectly while NFS is easy to configure correctly. If NFS is misconfigured, the client won’t connect, or it’ll be slow, or maybe it’ll have random disconnects. If iSCSI is misconfigured, data can get corrupted and you won’t necessarily be informed of it. Also, iSCSI presents a block device, which implies a SAN (multiple layers of additional complication) if it’s to be shared with other users; formatted as NTFS or JHFS+ it otherwise can’t be shared at all.
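
    As a reference point, a correct-but-minimal async export on a Linux server looks something like this (path and subnet are hypothetical):

    # /etc/exports on the server:
    /srv/export  192.168.1.0/24(rw,async,no_subtree_check)

    sudo exportfs -ra    # reload /etc/exports
    sudo exportfs -v     # confirm what's actually exported

    # Mounting from a Mac client (OS X wants to use a reserved source port):
    mkdir /tmp/nfs
    sudo mount -t nfs -o resvport server:/srv/export /tmp/nfs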

    I’d guess most use cases are: sharing files among users, which points to NFS, not iSCSI; or pushing files to NAS for longer-term storage, in which case the performance differential is unlikely to be worth considering.

  • Chris Murphy

    August 22, 2013 at 3:36 pm in reply to: Clones won’t boot (neither CCC nor SD)

    The Startup Disk panel ought to be smart enough to parse the JHFS+ volume and determine whether it has all the necessary ingredients for a bootable system before it displays an icon + volume label + OS version. So if the Startup Disk panel shows an icon, the volume contents almost certainly qualify for booting. If Startup Manager (the thing that comes up at the boot chime with the option key held) doesn’t show the disk at all while the Startup Disk panel does, that makes me suspicious the firmware isn’t seeing the bus or the enclosure attached to it; or maybe the GPT is wrong or corrupt for some reason.

    Another thing to try: use diskutil list to identify which /dev/ node the drive is on, something like /dev/disk0 or /dev/disk1. The actual thing you want to boot is on a slice (partition), e.g. disk2s2. Then run this command:

    diskutil verifyDisk diskX

    where X is the disk number (without the slice suffix). diskutil verifies everything on the disk: the partition maps (all three kinds), the EFI System partition, CoreStorage metadata, RAID metadata, and all file systems. But it makes no changes. Maybe it finds something.
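
    For example, with a hypothetical disk2 standing in for whatever diskutil list reports for the LaCie:

    diskutil list              # find the whole-disk node
    diskutil verifyDisk disk2  # read-only verification of maps and file systems
    sudo gpt -r show disk2     # read-only dump of the GPT, as a second opinion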

  • Chris Murphy

    August 22, 2013 at 1:19 am in reply to: Clones won’t boot (neither CCC nor SD)

    I just googled the enclosure model and realized it’s USB 3.0. It wouldn’t surprise me if the firmware and enclosure simply aren’t able to agree on a fallback to USB 2, which is what the Pro will need. You could try unplugging the enclosure, booting with option key to the Startup Manager, and then plugging in the USB drive while you’re in the Startup Manager. And, I don’t know, give them a minute maybe.

    Otherwise I definitely think you should talk to LaCie support and ask them if this enclosure supports USB booting on Macs that predate USB 3.0 (by many years). And if it doesn’t, decide if you want to return it.

  • Chris Murphy

    August 22, 2013 at 1:13 am in reply to: Clones won’t boot (neither CCC nor SD)

    Actually I didn’t catch that he has 32-bit EFI firmware. It could be a factor if the firmware isn’t finding a 32-bit version of boot.efi (the OS X bootloader). Yet the system was cloned, and the cloned system works, so I’m not sure why it wouldn’t find the right bootloader.

    It seems more likely there’s a negative interaction between the firmware and the drive enclosure, for whatever reason. I’m under the impression that all Macs can boot via FireWire (since the dawn of FireWire), and pretty much any Mac from the last 10 years can boot OS X via USB. But common USB bridge chipsets annoy the living daylights out of me: so few common products use a chipset that does ATA passthrough, making it impossible to do SMART diagnostics, check the PHY event counters, and so on.
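
    When a bridge does support SAT (SCSI/ATA Translation) passthrough, smartmontools can usually reach the drive behind it; a sketch on a Linux box, with a hypothetical device node:

    smartctl -d sat -x /dev/sdb    # explicitly request SAT passthrough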

    Out of this list from Bombich, I’m uncertain about 1 and 5. I’d like to think LaCie can confirm/deny whether their enclosure supports booting.

  • Chris Murphy

    August 21, 2013 at 5:25 pm in reply to: Clones won’t boot (neither CCC nor SD)

    It’s odd that the Startup Disk panel sees it as a valid enough option to display, but the Startup Manager does not. What port is it being connected to? And what’s the result from these four (read-only) commands after you choose the LaCie in the Startup Disk panel?

    nvram -p
    bless --getBoot
    bless --info
    diskutil list
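
    For instance, to narrow the nvram dump to the variable Startup Disk should have set (efi-boot-device is my assumption of the relevant variable on Intel Macs of that era):

    nvram efi-boot-device    # should reference the volume just chosen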

