Creative Communities of the World Forums

The peer-to-peer support community for media production professionals.

Activity › Forums › Storage & Archiving › Raid 10

  • Ericbowen

    July 12, 2013 at 4:32 pm

    Raid 10 can be run on today’s onboard controllers, or as a software raid, without the larger drawbacks that parity raids have in those same configurations. However, Raid 10 does have some drawbacks that parity raids, especially Raid 6, don’t have. One of the biggest problems today is drives degrading gradually rather than failing all at once, which causes data corruption to accumulate over time. If you have a Raid 10 whose mirror has been silently corrupting and then the primary disk fails, you will rebuild that corruption onto the replacement disk. This seems like something that would happen only once in a blue moon, but the more disks you add, the greater the likelihood. Look online and you’ll find people losing entire raid volumes to this kind of corruption; even Raid 5 arrays are seeing it. Raid 6 gives two levels of parity, which lets parity verification catch the corruption when it occurs and fix it. A SAS raid controller will also often mark the drive as bad when these errors reach a certain level, rather than letting the problem progress further.
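    The “more disks, greater likelihood” point can be put in rough numbers. A back-of-envelope sketch (the 1-per-10^14-bits unrecoverable-read-error rate is an assumed consumer-drive spec-sheet figure, not a number from this post):

```python
# Back-of-envelope odds of hitting an unrecoverable read error (URE)
# while reading a surviving mirror in full during a Raid 10 rebuild.
# Assumes a consumer-drive URE rate of 1 per 1e14 bits read (a common
# spec-sheet figure; an assumption, not a measurement).
URE_PER_BIT = 1e-14

def p_ure(drive_tb: float) -> float:
    """Chance of at least one URE while reading one full drive."""
    bits_read = drive_tb * 1e12 * 8  # TB -> bits
    return 1.0 - (1.0 - URE_PER_BIT) ** bits_read

for tb in (1, 2, 4):
    print(f"{tb} TB mirror: ~{p_ure(tb):.0%} chance of a URE on rebuild")
```

    Bigger drives, and more of them, push these odds up quickly, which is exactly where a second parity level earns its keep.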

    Another major reason to look at Raid 5 or 6 is rebuild time. Contrary to what others are posting, the current algorithms for rebuilding Raid 5 or Raid 6 are by far the most efficient, taking the least time, because the controller pulls rebuild data from ALL the other disks at once instead of from just the mirror. That significantly shortens the rebuild. The largest factor in rebuild times, though, is the controller itself. An onboard raid controller will take 48+ hours to rebuild an 8TB to 16TB raid volume, and a lower-end HighPoint controller takes about the same. A real SAS controller with an LSI chip, good drivers, and good firmware, like the LSI or Intel cards, takes 3 to 6 hours to rebuild a Raid 5 or 6 volume; those same controllers take 9 hours to rebuild a Raid 10. An Areca controller with good firmware takes around 8 to 10 hours to rebuild that same Raid 5 or 6; the previous firmware was taking 20 to 27 hours. All of this seems like a weak reason to pay far more for a raid setup until you see current drive failure rates and have to wait 2+ days for your extremely important raid to rebuild. Often at that point it’s better to just wipe the array and restore from the most recent backup. Excellent raid controllers like the LSI or Intel really alleviate that.
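    For a sense of scale on these numbers: any rebuild, mirror or parity, has to write one full replacement drive, so the hard floor is capacity divided by sustained write rate; parity math, background-priority throttling, and competing I/O only add to it. A rough sketch (the 120 MB/s sustained rate is an assumed figure for a 7200rpm drive of this era, not from the post):

```python
# Floor on rebuild time: one replacement drive must be written in
# full, so time >= capacity / sustained write rate. Everything the
# controller adds (parity math, background-priority throttling,
# other array I/O) pushes the real number above this floor.
def rebuild_floor_hours(drive_tb: float, mb_per_s: float) -> float:
    seconds = (drive_tb * 1e6) / mb_per_s  # TB -> MB, then MB / (MB/s)
    return seconds / 3600.0

# e.g. one 2 TB member at an assumed 120 MB/s sustained:
print(f"~{rebuild_floor_hours(2, 120):.1f} h minimum")  # prints "~4.6 h minimum"
```

    By this yardstick, a 48-hour rebuild of a 2 TB member is running roughly ten times slower than the drive itself allows, which is why the controller matters so much.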

    A final note: raid is never bulletproof. It is one layer of protection, maybe 90% effective, and that is it. There are many ways to lose a raid, so always back up your data offline as well. I hope that helps with the raid questions.

    Eric-ADK
    Tech Manager

  • Alex Gerulaitis

    July 12, 2013 at 8:33 pm

    [EricBowen] “A real SAS controller with an LSI chip, good drivers, and firmware like LSI or Intel cards take 3 to 6 hours to rebuild a Raid 5 or 6 volume. Those same controllers take 9 hours to rebuild a raid 10.”

    Eric, any independent sources on those numbers? They’re quite strange, showing no relation to drive count or capacity, and they’re contrary to my experience. Also, Raid 10 never rebuilds the whole array, just as Raid 0 doesn’t: only the affected mirror is rebuilt on a drive failure. I’ve never heard of that taking 9 hours.

  • Herb Sevush

    July 12, 2013 at 8:38 pm

    Thank you Eric that was very helpful.

    Herb Sevush
    Zebra Productions
    —————————
    nothin’ attached to nothin’
    “Deciding the spine is the process of editing” F. Bieberkopf

  • Ericbowen

    July 12, 2013 at 9:42 pm

    Those are the results of testing I have done here with Intel and Areca raid cards. Clients I deal with who did not buy their system or raid from us have seen the same results. When I say “rebuild” I mean rebuilding the failed drive, which is the same for the parity raids; I say “rebuild the array” because the array stays in a degraded state until the rebuild is finished, hence the term. There have been postings on the Adobe forums with Areca, Intel, and onboard rebuild results, and I believe they were comparable to, or even longer than, what I have reported. As to drive count, the 8TB to 16TB range covered 4 to 8 drive arrays; obviously fewer drives than that won’t work with those raid levels.

    Eric-ADK
    Tech Manager

  • Alex Gerulaitis

    July 12, 2013 at 9:52 pm

    9 hours on a RAID 10 “rebuild” means a drive clone taking 9 hours. Either one drive (or both) is faulty, or something’s really wrong with the RAID controller. I haven’t done a RAID 10 “rebuild,” but I have done many RAID 1 ones, and they take what a drive clone usually does: about two hours per TB.

    Same goes for software mirrors (Win 7, Win 8, Storage Spaces, Linux).

    Also, a software RAID (Linux mdadm based) without any “real” chips does a RAID 6 rebuild on an 8x3TB set in 7 hours. With the RAID write-intent bitmap enabled and re-insertion of the same drive, it’s done in seconds. Bottom line: no need for “real” chips for performance, resiliency, or reliability reasons. If only Storage Spaces supported dual parity…
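    The seconds-long re-add comes from mdadm’s write-intent bitmap: the array records which regions were written while a member was missing, so re-inserting the same drive only resyncs those regions instead of the whole disk. A toy model of the idea (illustrative only; not mdadm’s actual code or on-disk format):

```python
# Toy model of a write-intent bitmap: track which chunks were written
# while a mirror member was out, then resync only those on re-add.
# (Illustrative sketch; mdadm's real data structures differ.)
TOTAL_CHUNKS = 64
dirty: set[int] = set()

def degraded_write(chunk: int) -> None:
    dirty.add(chunk)  # mark the region dirty before writing it

for c in (3, 3, 17, 42):  # a few writes land while the drive is out
    degraded_write(c)

# Re-adding the same drive: resync just the dirty chunks, not all 64.
print(f"resync {len(dirty)}/{TOTAL_CHUNKS} chunks: {sorted(dirty)}")
```

    With no bitmap, the array has no record of what changed, so it must resync every chunk; with one, the work scales with how much was written while degraded, which for a brief drop-out is nearly nothing.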

  • Ericbowen

    July 12, 2013 at 10:54 pm

    Alex, I was as surprised as you are when I was testing this and dealing with clients on Raid 10 rebuild times. I about fell over at the onboard Intel Raid 10 rebuild times; I had always assumed the same as what is normally published. However, the results were as I stated, and I never received a valid answer when I emailed engineering. My assumption was that the rebuild time was extended by the priority percentage assigned to each function for background processing, but when I changed that it did not affect the results much at all. Areca’s Raid 3 rebuild time was 27 hours for a 6-drive Raid 3 with 1TB Samsung drives; when we asked why it took so long, their response was that the drives were not enterprise class. Their 8-drive 16TB raid took 9 hours with enterprise drives. Keep in mind that is still 3 or more hours longer than the same raid on the Intel controllers. I will try to test again next week and see whether the newer controllers/drivers/firmware have changed things; it’s been over a year since I tested this specifically here, outside of dealing with clients.

    I agree the software raids are reporting far better rebuild times than even Areca controllers on current systems. I am starting to consider looking into software options; if you have recommendations for Windows systems that you approve of and support, let me know.

    Eric-ADK
    Tech Manager

