Forum Replies Created

Page 4 of 15
  • Chris Murphy

    January 31, 2014 at 3:37 am in reply to: Pulled RAID 5 drives have very slow read speeds.

smartmontools: GUI and command line versions for Windows. On Linux, lots of packages in your distro repo. And for Mac OS X, build via MacPorts and Xcode. Also comes in handy for checking and setting SCT ERC. The command you want for a quick check is “smartctl -x /dev/sdX”, which is Linux/BSD notation for the drive; I’m going to guess it’s some letter on Windows.
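As a sketch (the device name is an example, substitute your own; 70 deciseconds is the commonly used RAID timeout, not something from this thread), the quick check and an SCT ERC query/set look like:

```shell
# Quick overall health report, device stats, and error log
smartctl -x /dev/sda

# Query the current SCT Error Recovery Control timeouts
smartctl -l scterc /dev/sda

# Set read/write ERC to 7.0 seconds (values are in deciseconds),
# the usual setting for drives used in RAID arrays
smartctl -l scterc,70,70 /dev/sda
```

Note the ERC setting doesn't persist across power cycles on most drives, so it needs to be reapplied at boot.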

  • Chris Murphy

    January 31, 2014 at 3:26 am in reply to: Pulled RAID 5 drives have very slow read speeds.

10MB/s isn’t OK. Hitachi’s data sheet on this model says that you should get 45-85MB/s. I’m not familiar with HD Tune Pro features, but you want to totally ignore random IO speeds. In fact, the sequential benchmark may be useless depending on what block size is being tested; if you don’t know that, and whether it’s similar to how your apps access storage, it’s a pointless test. I’d think if you were really getting 10MB/s reads you’d have all sorts of problems with video editing, so I’m not sure I trust this test.

6-year-old enterprise drives are at the end of their useful life. If they die in an array, what do you replace them with? Probably not the same thing, they’re that old. So then you replace it with something similar, but probably with totally different performance characteristics. Maybe it should be true that you can mix and match drives, and with some raid implementations you can; I’d say it’s only worth finding out if you’re tinkering. Or if it’s some kind of 2nd tier nearline backup: you spin the thing up, you write all the important stuff to it, you spin it down, and you keep the system that runs it encased in carbonite, so that if you had to you could read it again in 3, 6, 12 or 18 months. But I wouldn’t expect to get a lot more life out of the arrangement. If you have tape backup, I wouldn’t even screw with it. Donate the drives. Heck, I might take ~4, especially because I’d love to have more experience with exploding arrays due to face-planting drives.

  • For it to become degraded in 3 days isn’t normal. If the firmware update explicitly fixes a problem that’s the direct result of the array going degraded, then great. But the only way you’d know this is if you or someone checked the array log, understands why the array went degraded, and is familiar with the changelog for the firmware update. Otherwise, I’m skeptical it’s just magically going to be fixed with a firmware update.

    While the Reds aren’t enterprise drives, they do have the proper error time out that enables bad sector “repair” (via overwrite) on any read (including read only scrubs).

  • Chris Murphy

    January 30, 2014 at 7:21 pm in reply to: How Does Partitioning Impact RAID Arrays?

I wouldn’t do it unless you have a use case that benefits from it. You can point LTO backup software at specific folders, excluding others, the same as you can for Time Machine. Also, if you don’t properly predict their relative consumption requirements, you’re going to run out of space on one before the other and regret the layout. This isn’t a problem with folders. So I don’t see an advantage to partitioning based on what you’ve said so far.

However, since the inner part of the platters has lower performance than the outer part, you could choose to short stroke the array. Normally this means using only up to the first third or so of each disk, “wasting” the last 2/3rds. This is often still cheaper than getting 10K or 15K RPM disks, which are smaller anyway, and not short stroking them. In your case, maybe you’d go 50/50, or even 70/30, which is perhaps a faux short stroke. Instead of taking only the best performing part of the drives, you’re isolating the worst performing part. And you wouldn’t be discarding that worst performing part, you’d use it. If you actively use both partitions at once, then you’ve lost the performance benefit, and possibly made things worse because of the additional seeks from one end of the platter to the other, happening more often than they otherwise would, at least until the array is 2/3+ full.
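To make the tradeoff concrete, here’s a back-of-envelope sketch; the 8TB array size is a hypothetical, not a number from this thread:

```shell
# Hypothetical 8 TB array (in GB), split per the schemes discussed
ARRAY_GB=8000

# Classic short stroke: keep only the fast outer ~1/3, waste the rest
echo "short stroke: $((ARRAY_GB / 3)) GB usable"        # 2666 GB

# Faux short stroke, 70/30: fast outer partition for video,
# slower inner 30% still used for less demanding data
echo "fast partition: $((ARRAY_GB * 70 / 100)) GB"      # 5600 GB
echo "slow partition: $((ARRAY_GB * 30 / 100)) GB"      # 2400 GB
```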

Anyway, to do this in Disk Utility, click on the topmost icon for the array device so that the UI shows a Partition tab. Change the scheme pop-up from Current to 2 Partitions. The first partition will have the best performance because it uses the outer portion of the drive. Have the 2nd partition take up the rest. You’ll put video files on the 1st partition’s volume, and the things that tolerate lower performance on the 2nd volume. Whether you do this depends on your workload. If you really have distinctly separate workloads, this two-partition layout will help overall performance. If you combine them, it’ll make them worse. How much worse just depends on what the usage pattern is, and hence whether the disks will be seeking to death or not.

    More relevant for performance though, is the chunk size for the array. I don’t know what the default is for Promise arrays but for this use case I’d say 512KB or larger is reasonable. If we’re talking about a bunch of emails, mp3s, and JPEGs, well that’s a different story. But you might just accept the write/rewrite performance hit you get with those. Chances are that’s a one time thing, and from then on they just get read.
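The chunk size matters because it sets the full-stripe size, the amount a RAID 5 can write in one go without a read-modify-write cycle. A quick sketch; the 8-drive count here is hypothetical, since the thread doesn’t say how many disks are in the Promise array:

```shell
# Hypothetical 8-drive RAID 5: 7 data chunks + 1 parity chunk per stripe
CHUNK_KB=512
DATA_DISKS=7   # n - 1 for RAID 5

# A full stripe is chunk size x data disks; writes of at least this
# size (and aligned to it) avoid the RAID 5 read-modify-write penalty
echo "full stripe: $((CHUNK_KB * DATA_DISKS)) KB"   # 3584 KB
```

Large video files easily fill a 3.5MB stripe; a pile of small emails and JPEGs won’t, which is why the small-file case takes the write/rewrite hit.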

Certainly the firmware should be updated as a first step, before creating the array. After the array is created, I’m skeptical of hardware RAID firmware updates. They should be safe, but… if they aren’t, it’s a mess. So it’s better to assume it’ll blow up the raid, and therefore make sure the array is suitably backed up and you’re prepared for a complete restore should the upgrade go wrong. And postpone the firmware upgrade to a time when you can afford the rebuild time, like maybe a Friday after lunch: test the raid that afternoon, and if there are unacceptable regressions or it blows up, you have the weekend for an unattended restore to happen.

    I’d also make sure the file system(s) on the array are all unmounted, and I’d follow the manufacturer’s instructions for firmware upgrades exactly.

  • Chris Murphy

    January 28, 2014 at 6:44 am in reply to: RAID 50 loses half the expected speed?

    Oh and for that matter I’d like to know what the chunk size is for the two R380 raid5 sets too.

  • Chris Murphy

    January 28, 2014 at 6:41 am in reply to: RAID 50 loses half the expected speed?

    I agree with the earlier questions and I’ll add some of my own:

    What computer make/model/RAM?

    What benchmarking tool?

Rerun the test on the raid50 volume while running the Terminal command ‘top -s5 -n10 -o cpu’. It takes 5 seconds for this to accumulate data, so give it 10-15 seconds, then take a screen shot (Command-Shift-4, then spacebar, hover-highlight the Terminal window, mouse click).

    Rerun the test on a single raid5 set (i.e. break the raid50, run the test on one of the raid5 arrays without software raid), while running the same Terminal command above and screen shot after 10-15 seconds for settling.

    In Disk Utility when the raid0 set was created, what was the chunk size? I’m pretty sure the default is 32KB?

When you recreate the raid0 set in Disk Utility with the two hw raid5s, make sure the chunk size is a lot bigger or a lot smaller than whatever you used before. Dollars to donuts it was at the default of 32KB, which is normally fine, but 32KB translates into a pile of IOs round-robined between those two raid5s that’s totally unnecessary, and might just be making the kernel do a lot of work it doesn’t need to do. So I’d try something almost obscene like 1MB, if the chunk size goes that high, and then redo your tests. I’m assuming your average file size is well above 1MB anyway for this raid50?
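To see why the small chunk matters, count the round-robin IOs the raid0 layer issues for a single sequential request; the 4MB request size is a hypothetical:

```shell
REQUEST_KB=4096   # hypothetical 4 MB sequential request

# 32 KB chunks: the request is split into many small IOs,
# alternating between the two raid5 sets
echo "32KB chunks: $((REQUEST_KB / 32)) IOs"     # 128 IOs

# 1 MB chunks: far fewer, larger IOs for the same data
echo "1MB chunks:  $((REQUEST_KB / 1024)) IOs"   # 4 IOs
```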

  • Check the array log with ATTOExpressSASRaid log utility. Optionally post the log somewhere like pastebin and provide a link.

  • Chris Murphy

    January 23, 2014 at 4:25 pm in reply to: Inconsistant Error -36 when copying files

    Looks like disk4 is the logical device created from disk0s2, disk1s2, disk2s2 as raid0. The previous error message:

    Jan 23 10:06:54 edits-mac-pro kernel[0]: AppleRAID::completeRAIDRequest – error 0xe00002ca detected for set “Local Video” (E08FB46B-DDE5-43AD-B541-E6484E8F04DF), member E7E30B03-70BC-4244-87D5-079DA684E3BF, set byte offset = 289144029184.

    Use these commands:
    diskutil info disk0
    diskutil info disk1
    diskutil info disk2

    In one of those results you should find the UUID above, “member E7E30B03…” which will implicate a particular /dev/diskX device.
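The lookup can be scripted too; this loop is just a sketch of the same manual check, using the member UUID from the kernel log above:

```shell
# Check each candidate device's diskutil report for the failing
# member UUID reported by AppleRAID in the kernel log
for d in disk0 disk1 disk2; do
    if diskutil info "$d" | grep -q 'E7E30B03-70BC-4244-87D5-079DA684E3BF'; then
        echo "failing member is $d"
    fi
done
```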

    Since /dev/diskX can change between boots, you’d want to use this command:

    system_profiler SPSerialATADataType

    That will return some more detailed information on each physical SATA disk. Look for BSD Name, and find the implicated diskX from above, and about five lines above that you’ll see the serial number for the drive producing the error.

  • Chris Murphy

    January 23, 2014 at 4:10 pm in reply to: Inconsistant Error -36 when copying files

    If you don’t already have a current backup of this raid0 array, do it ASAP.

    Reallocated Sectors : 000000000130
    304 sectors have been reallocated by the drive firmware

    Current Pending Sectors : 00000000059B
    1435 sectors are pending reallocation. Some of these may not be reallocated because they can’t be read without error, and where the source of the problem is. The firmware won’t move the data if it can’t read it. So the data in these locations is already lost, effectively. There are some data recovery techniques to recover that sector despite the read error, but it’s quite tedious and in the realm of specialized data recovery.

    Multi-Zone Error Rate : 0000000132E4
    This is a kind of write error and there have been 78564 of them.
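The raw SMART values above are hex; shell `printf` converts them, as a sanity check of the counts quoted:

```shell
# SMART raw values are reported in hex; convert to decimal
printf 'reallocated:     %d\n' 0x130     # 304
printf 'pending:         %d\n' 0x59B     # 1435
printf 'multi-zone errs: %d\n' 0x132E4   # 78564
```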

    I personally would have this disk replaced under warranty. To keep it for raid0 usage is asking for more trouble, corrupt files, and an untimely death of the array. For any other purpose, at a minimum I would write zeros [1] to the drive, and then run an extended smart self-test [2]. Afterwards, if current pending sectors isn’t zero, then the drive is toast.

    If this drive isn’t in warranty, I’ll gladly accept it as a donation for R&D however. 😀

    [1] I prefer booting linux and using hdparm to issue the ATA Security Erase command. It’s obscure but it’s much faster than Disk Utility write zeros, and it also zeros deallocated sectors, i.e. sectors that contain stale data but can’t be erased any other way because they no longer have an LBA due to reallocation events. Another way to do this within OS X is to use dd with a block size of 1MB, which is option bs=1m. That’s also faster than Disk Utility, but can’t erase data in remapped sectors.
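A sketch of footnote [1]; the device paths are placeholders, both commands destroy all data on the drive, and the hdparm steps require the drive not be in a “frozen” security state:

```shell
# Linux: ATA Security Erase via hdparm (also zeros remapped sectors).
# First confirm the drive is "not frozen" in the Security section:
hdparm -I /dev/sdX | grep -A8 Security

# Set a temporary password, then issue the erase
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX

# OS X alternative: zero the raw device with dd in 1 MB blocks --
# faster than Disk Utility, but can't touch remapped sectors
sudo dd if=/dev/zero of=/dev/rdiskX bs=1m
```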

[2] smartctl -t long; this is part of the smartmontools package, which can be built from MacPorts (using Xcode). It’s not included in OS X. MacPorts can also build an installer package, so it’s installable on other computers without having to compile per machine.
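A sketch of that self-test workflow, with a placeholder device path:

```shell
# Kick off the extended (long) self-test; it runs in the background
smartctl -t long /dev/sdX

# Poll for progress and the final result
smartctl -l selftest /dev/sdX

# Afterward, recheck the attributes: if Current Pending Sector
# still isn't zero after the zero-fill, the drive is toast
smartctl -A /dev/sdX
```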

