Forum Replies Created

Page 2 of 15
  • Chris Murphy

    April 29, 2014 at 12:38 am in reply to: 6 TB drives internal on Mac Pro?

    If these are SMR (shingled magnetic recording) drives, then this applies:
    https://lwn.net/Articles/591782/

    If they’re SMR, they’re device-managed at this point, so while they will function, they may not behave the way you expect when modifying existing files on them. That is, there can be a delay: one that normal users may not notice, or may not find objectionable if they do, but that could cause problems for video. So you’ll just have to test it.

    You shouldn’t have a problem with anything recent, which would have to use GPT as the partition scheme for more than 2TB to be usable. Aligning partitions on 8-sector boundaries is the standard: OS X aligns exactly on 8-sector boundaries, starting the first partition at LBA 40, while Windows and Linux start the first partition at LBA 2048, which is both 8-sector and 1MB aligned.
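    As a quick sanity check of the alignment arithmetic above, a short Python sketch (using the LBA values mentioned, plus LBA 63, the old misaligned CHS-era default):

    ```python
    SECTOR = 512  # bytes per logical sector

    def aligned(lba, boundary_bytes):
        """True if a partition starting at this LBA sits on the given byte boundary."""
        return (lba * SECTOR) % boundary_bytes == 0

    for lba in (40, 2048, 63):
        print(lba,
              aligned(lba, 8 * SECTOR),    # 8-sector (4 KiB) alignment
              aligned(lba, 1024 * 1024))   # 1 MiB alignment
    ```

    LBA 40 comes out 8-sector aligned but not 1 MiB aligned; LBA 2048 is both; LBA 63 is neither.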

  • Chris Murphy

    April 29, 2014 at 12:21 am in reply to: Infortrend SAS RAID6 Running very slow

    Spinning-rust drives get slower as they fill up, assuming the file system allocates sectors from the outside to the inside of the platter, which is usually how it works. And the difference is pretty significant. Some companies include the min and max in the spec; most don’t. I’ve seen the difference be upwards of 40%. What you’re reporting is a lot more than that, though, more like a 70% drop in performance.
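    The percentages above are just arithmetic on outer- versus inner-track throughput; the MB/s figures below are hypothetical, for illustration only:

    ```python
    def percent_drop(outer_mb_s, inner_mb_s):
        """Falloff from outer-track to inner-track sequential throughput."""
        return 100.0 * (outer_mb_s - inner_mb_s) / outer_mb_s

    print(percent_drop(180.0, 108.0))  # 40.0 -> the normal kind of falloff
    print(percent_drop(180.0, 54.0))   # 70.0 -> the size of drop being reported
    ```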

    So that makes me wonder if one or more drives is having problems writing data and it’s having to retry a lot. I’d like to think that the RAID software has some testing options, maybe even a way to issue SMART commands to each drive, and get some idea what’s going on. If not then it means breaking the array and doing both read and write tests to individual drives, and that is a destructive test.

  • Chris Murphy

    April 29, 2014 at 12:10 am in reply to: Pegasus R6 – 1MB Stripe size default?

    “Stripe size” is a routinely mishandled term, to the point that I think it ought to be retired. Some use it to mean chunk/strip size, some mean the full stripe including parity, and some mean the full stripe excluding parity.

    SNIA defines it as stripe depth (same as strip size, same as chunk size which is now deprecated) times member extents less parity extents. The most common synonym for extent that I come across in the Linux storage world is stripe width, which is the number of data “drives”. A six drive RAID 5 array has five data “drives” and one parity “drive.” And for RAID 6 it’s four data “drives” and two parity “drives.” I use “drive” because due to distributed parity, there aren’t dedicated data and parity drives; but functionally you can look at it that way when doing the stripe width (member extent less parity extent) count.
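    The SNIA-style arithmetic above can be sketched in a few lines of Python (sizes in bytes):

    ```python
    def full_stripe_size(strip_size, members, parity):
        """Full data stripe: strip (chunk) size times data extents,
        i.e. member extents minus parity extents."""
        return strip_size * (members - parity)

    MiB = 1024 * 1024
    print(full_stripe_size(MiB, 6, 1) // MiB)  # six-drive RAID 5: 5 (MiB)
    print(full_stripe_size(MiB, 6, 2) // MiB)  # six-drive RAID 6: 4 (MiB)
    ```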

    Since 1MB isn’t divisible by either 5 or 6, I don’t know what Promise means by a 1MB stripe size. Maybe they mean 1MB strip size, which would mean it’s a 5MB stripe size. Or maybe they’re rounding up or down a bit, e.g. a 192KB strip size.

    There is an optimization here that’s workload-specific, so the stripe size is important. Normally I’d say bigger is better for video. But there’s another factor, called alignment, and if that’s not right then a large stripe size can be bad, causing a lot of unnecessary or inefficient read-modify-write. So basically you just have to test it, or ask someone whether they’ve tested it or published results with benchmarks that do a good job of simulating the actual workload.
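    To make the read-modify-write point concrete, here is a deliberately simplified model (an assumption for illustration, not how any particular controller works): a write avoids read-modify-write only if it starts and ends exactly on full-stripe boundaries.

    ```python
    def causes_rmw(offset, length, stripe):
        """Simplified model: a write triggers read-modify-write unless it
        starts and ends exactly on full-stripe boundaries."""
        return offset % stripe != 0 or (offset + length) % stripe != 0

    stripe = 5 * 1024 * 1024  # e.g. a 1 MiB strip across five data drives
    print(causes_rmw(0, stripe, stripe))     # False: one aligned full-stripe write
    print(causes_rmw(4096, stripe, stripe))  # True: misaligned, touches partial stripes
    ```

    The bigger the stripe, the more writes fall into the misaligned case, which is why a large stripe size plus bad alignment hurts.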

  • Chris Murphy

    April 10, 2014 at 12:31 am in reply to: move internal RAID

    The advice to make a current backup before proceeding with any modification of an array is good. Apple software RAID writes quite a healthy amount of metadata to each disk, so it ought to be capable of assembling the array if all of the drives are moved to another OS X system. If it doesn’t, I’d consider it a bug. But before doing it, I’d budget the time necessary for a complete recreation of the RAID and a restore from backup – just in case it comes to that. If that downtime can’t be allocated, then it’s not a good time to do the migration.

  • Chris Murphy

    March 19, 2014 at 1:26 am in reply to: SSD or RAID Controller Issue?

    Without logs, it’s speculation why an array is failing. A generic “failed” error message isn’t helpful. The controller should have something a lot more verbose than that, since it’s the one doing the complaining, if I understand the reporting correctly. And often it requires having a tech support person or engineer look at the log and tell you why the array is imploding. There are all sorts of errors one of the drives could be reporting to the controller; it could be cable-induced, and yes, it could be drive or controller firmware induced, or OS driver induced. Logs.

  • Chris Murphy

    March 19, 2014 at 1:14 am in reply to: Archiving: Keep a great lossless copy or not?

    This is what we get for tolerating proprietary formats.

    Some projects choose to record the final to film for archiving because they’re solidly convinced that in 30 or however many years there will be no way to deal with the original digital data at all, or it will be prohibitively expensive, and therefore why bother paying to store all of that digitally captured original material in the meantime? The original material is a lot bigger than the final digital deliverable. And film is comparatively cheap to archive with a freezer. It’s also somewhat self-describing, in that its encoding doesn’t totally obscure the content.

    Back in the old plate-making days, quite a few artists would break the plates after all prints in an edition were made. The only thing that mattered was the edition prints. So another valid tactic would be to render out the “best practical” deliverable file you’d ever need and obliterate the rest. And even that’s not as aggressive as destroying the plates, because the digital final, such as it is, can be identically duplicated, unlike edition prints.

  • Chris Murphy

    March 19, 2014 at 12:58 am in reply to: Archiving: Keep a great lossless copy or not?

    A 2TB consumer-grade disk costs around $99
    A 2TB enterprise-grade disk costs around $160
    If you have an LTO-5 or LTO-6 tape drive, a 1.5TB LTO-5 tape costs around $30, 2.5TB LTO-6 around $70

    Although this is somewhat esoteric information, the rate of unrecoverable reads for the above options decreases as you descend the list. Consumer SATA should have less than 1 unrecoverable read error (bad sector) per 12TB read. Enterprise SATA should be less than 1 per 120TB read. Enterprise SAS/FC should be less than 1 per 1.2PB read. And LTO tape should be less than 1 per 12PB read. So a consumer SATA hard drive is roughly 1000 times, or three orders of magnitude, more likely to lose a sector of data than LTO tape. And tape’s shelf life is longer as well.
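    Those thresholds follow from the usual unrecoverable-read-error (URE) specs, commonly quoted as 1 error per 10^14, 10^15, 10^16, and 10^17 bits read; a quick conversion:

    ```python
    def bytes_per_ure(exponent):
        """Bytes read per expected unrecoverable read error,
        given a '1 error per 10**exponent bits' spec."""
        return 10**exponent / 8  # 8 bits per byte

    for name, n in [("consumer SATA", 14), ("enterprise SATA", 15),
                    ("enterprise SAS/FC", 16), ("LTO tape", 17)]:
        print(f"{name}: about {bytes_per_ure(n) / 1e12:g} TB per error")
    ```

    That works out to roughly 12.5TB, 125TB, 1.25PB, and 12.5PB per expected error, i.e. a factor of 1000 between consumer SATA and LTO tape.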

  • Chris Murphy

    March 19, 2014 at 12:34 am in reply to: NAS & OSX Mavericks

    Mucking with configuration files without understanding the problem isn’t a good idea. It’s like throwing spaghetti at a wall. Yeah, you might resolve the current problem, but then end up with some other problem, or at least a mess. Certainly the IT guy needs to see the relevant server and client logs to get some idea why the two are becoming disagreeable after the handshake.

    It’s entirely possible the two are already negotiating an SMB 1 connection and yet still having a problem. It’s also possible the Linux Samba server is configured to deny falling back to SMB 1, so the problem may get worse if you make unfounded changes. So ask him what sort of bribery is required to get him to look at server and client logs and suggest the next steps to get to a stable connection. At some point you could let him know that if he gets into a bind and prefers to configure Samba on OS X instead of using Apple’s homegrown SMB, a relatively recent Samba 3 source for OS X is available at MacPorts. That means all of the hard work of getting the source code to compile on OS X is already done. You “only” have to install MacPorts (free) and Xcode (free), and learn how to use the port command (non-obvious but not overly difficult) to download and compile the source, and optionally create an installer package for deployment on multiple machines.

  • Chris Murphy

    March 11, 2014 at 5:56 pm in reply to: Super slow transfer time

    Use Blackmagic Speed Test from the App Store. Either post the result, or if you’re convinced they indicate a problem or departure from the G-Technology specs then I’d call G-Technology and ask them what’s up.

  • One drive missing usually means it has failed or has been disconnected. Since this is RAID 0 (a striped set), that means the entire volume is unusable, which is why it’s offline. Formatting is not going to fix this.

    Disk Utility’s RAID user interface is really screwy, so confusion is normal. But in recent versions you can click on the physical drives (not the volumes or RAID sets, as you have done here), click the Info button in the toolbar, and find the SMART status, which should give you an idea whether the drive has in fact failed or not.

