troubles with Areca 1882x and RAID array
-
Paul Joy
September 30, 2012 at 7:57 am
The solution for me was simply the settings provided by Areca; I’ve attached screenshots of every page of the admin interface at the bottom of the post on my blog. Of course, if you’re using different hardware it might not work for you.
https://www.pauljoy.com/2012/08/mac-pro-raid-setup/
-
John Roesli
October 6, 2012 at 3:52 am
FWIW, I’m setting up a scratch RAID 0 with an Areca 1882X and an external CineRAID enclosure (6 x 2TB). Installed the 1882X in PCIe slot 3 of a Mac Pro 5,1, running x4 at 5.0 GT/s, and got some outrageous read numbers, like 4.5 GBytes per second, with writes around 0.9 to 1.2 GBytes per second. Then I read that the 1882X is an 8-lane card, so I moved it to slot 2, where it comes up as x8 at 2.5 GT/s. Set up the same RAID 0 and the numbers averaged 800 to 900 MBytes per second for both read and write, with write maybe just a bit slower. Firmware and BOOT ROM are version 1.51, dated August 8th, 2012. I’ve moved the card back to slot 3 and am building a RAID 50 overnight tonight; will see how it goes.
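For what it’s worth, the lane math explains why slot 3 at x4/5.0 GT/s and slot 2 at x8/2.5 GT/s top out in the same neighborhood: both come to the same theoretical ceiling, and only x8 at 5.0 GT/s doubles it. A quick back-of-the-envelope sketch in Python (assuming the 8b/10b encoding that PCIe 1.x/2.0 links use):

def pcie_bandwidth_mb_s(lanes, gt_per_s):
    """Theoretical per-direction bandwidth (MB/s) of a PCIe 1.x/2.0 link."""
    raw_bits_per_s = lanes * gt_per_s * 1e9       # each transfer is 1 bit per lane
    data_bytes_per_s = raw_bits_per_s * 0.8 / 8   # 8b/10b leaves 80% for data
    return data_bytes_per_s / 1e6

print(pcie_bandwidth_mb_s(4, 5.0))   # slot 3: x4 @ 5.0 GT/s -> 2000.0 MB/s
print(pcie_bandwidth_mb_s(8, 2.5))   # slot 2: x8 @ 2.5 GT/s -> 2000.0 MB/s
print(pcie_bandwidth_mb_s(8, 5.0))   # x8 @ 5.0 GT/s -> 4000.0 MB/s

-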
John Roesli
October 6, 2012 at 5:07 pm
Okay, so with the RAID 50, running AJA with disk cache disabled I’m averaging 500 to 600 MB/s for write and 300 to 400 MB/s for read. With disk cache enabled, writes are about the same, but read goes up to 4.3 GB/s. Running Blackmagic matches the AJA benchmark with cache disabled, except read speeds are a bit slower.
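That 4.3 GB/s cached read is the OS serving the file back out of RAM, not the array, which is why the cache-disabled numbers are the ones that matter for video work. A rough sketch of the same comparison in Python; the /Volumes/RAID path is just a placeholder for wherever the array mounts, and F_NOCACHE is the OS X fcntl that bypasses the buffer cache:

import fcntl, os, time

PATH = "/Volumes/RAID/cachetest.bin"   # placeholder: wherever the array mounts
SIZE = 1024 * 1024 * 1024              # 1 GiB test file
CHUNK = 8 * 1024 * 1024                # 8 MiB per read

# F_NOCACHE (value 48 in OS X's <fcntl.h>) tells the kernel not to buffer
# this file in RAM; fall back to 48 if this Python build doesn't expose it.
F_NOCACHE = getattr(fcntl, "F_NOCACHE", 48)

with open(PATH, "wb") as f:            # lay down a test file first
    for _ in range(SIZE // CHUNK):
        f.write(os.urandom(CHUNK))

def timed_read(nocache):
    fd = os.open(PATH, os.O_RDONLY)
    if nocache:
        fcntl.fcntl(fd, F_NOCACHE, 1)
    start = time.time()
    while os.read(fd, CHUNK):
        pass
    os.close(fd)
    return SIZE / (time.time() - start) / 1e6   # MB/s

print("cached:  %.0f MB/s" % timed_read(False))   # RAM speed, like the 4.3 GB/s
print("nocache: %.0f MB/s" % timed_read(True))    # closer to the array's real rate

-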
John Roesli
October 8, 2012 at 6:10 pm
Update: Rebuilt as a RAID 5 and matched Paul Joy’s settings. Verified I’m using the 1.3.5 driver from ML. Using v1.51 firmware and boot ROM dated 2012-08-08. The only difference I could see was an option for PCIe 3.0, which I disabled, then moved the card back to slot 2 and, voila, x8 at 5.0 GT/s. Not sure that’s what did it, but it’s there now. Verified the CineRAID uses 2 x 4-drive channels. Using 6 x 2TB Seagates for the RAID 5. Here is the result from AJA System Test:
https://images.creativecow.net/244410/screen-capture.png
-
Rob Curley
October 11, 2012 at 11:34 am
Well, thank you Paul Joy. I now have my 8-bay iStoragePro screaming along with RAID 6 and 850/900 MB/s read/write.
-
Rob Curley
January 24, 2013 at 6:10 am
This card may be screaming, but getting it to work for any low-latency needs is… well, I haven’t.
-
Vincent Robinson
February 14, 2013 at 7:04 am
This will be a wordy post, but this thread, and this forum, seem by far the most apropos for my situation of any source I’ve found in scouring the web. Thanks in advance if you make it all the way through.
I’m using a similar setup to folks here, but am a little further back in the process due to a very strange issue. The setup is a Mac Pro 1,1 with a pristine install of 10.6.8, an ARC-1882x, and a 24-bay SAS-expander enclosure. The enclosure is driven by an ARC-8026, with 6 internal connections, one to each of the 6 rows of 4 disks apiece. Populated the enclosure with brand-new WDC Red drives (2TB, 3-year warranty; not desktop, but not entirely enterprise either, and the best I could do on a non-profit budget). The connection from the host is a single mini-SAS cable, as you’d expect.
First spinup, all drives were seen by the host card. Initialized a test array (RAID 5) using all 24 disks. After perhaps 5 minutes of an unresponsive interface (the Areca web interface everyone’s familiar with), I found the RAID set created, but the controller had marked 6 drives as “failed.” It seemed incredible that fully 1/6th of a batch of newly minted drives could truly be bad. Tried swapping drives to different bay positions, marked everything as “unfailed,” power cycled, etc., and got the same result. After several variations with the same outcome, I noticed a bizarre pattern: the failures were always in the same position; the left-most drive of each row (the enclosure has 4 columns and 6 rows) would always “fail.” I could, in the end, successfully create an 18-drive RAID 5 volume if I avoided the 1st column of drives (1 per row, which is also 1 per backplane subcontroller). Filled that volume to the brim with test data without any issues. Tried setting the recalcitrant 6 as pass-throughs; each still failed out.
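What finally made the pattern jump out was mapping each failed bay to its physical position. A throwaway sketch like this shows it (the 1-based, row-major, 4-bays-per-row numbering is my assumption, but any consistent scheme gives the same picture):

# Hypothetical bay numbering: 1-based, row-major, 4 bays per row,
# with each row of 4 hanging off one backplane subcontroller.
FAILED_BAYS = [1, 5, 9, 13, 17, 21]   # the six drives the controller failed

for bay in FAILED_BAYS:
    row = (bay - 1) // 4 + 1   # which 4-drive backplane
    col = (bay - 1) % 4 + 1    # physical column in the enclosure
    print("bay %2d -> backplane %d, column %d" % (bay, row, col))

# Every failure lands in column 1, exactly one per backplane: a shared
# wiring/expander path in common, not six independently bad disks.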
I’ve been talking with Areca, an Areca distributor, and the enclosure manufacturer, all of whom have been reasonably responsive, but no one has had a clear answer. I’ve tried a second ARC-1882x with the same results. I’m now in the midst of trying to bypass the enclosure’s ARC-8026 (the SAS-expander controller) and connect the ARC-1882x directly to each backplane set (4 drives each), one at a time. So far (2 tests of the 6 available rows of drives), it’s working (i.e., I can initialize and read & write to the resulting RAID 5 volumes). If that pattern persists, it points to the ARC-8026 in the enclosure.
But — has anyone come across anything remotely like this? It’s strange — and as you can imagine, been frustrating and incredibly time consuming.
thanks,
Vincent
-
Alex Gerulaitis
February 14, 2013 at 7:21 pm
[Vincent Robinson] “But — has anyone come across anything remotely like this? It’s strange — and as you can imagine, been frustrating and incredibly time consuming.”
Not me – and yes, I can only imagine. Congrats on pinpointing the culprit (expander).
-
Vincent Robinson
February 14, 2013 at 9:04 pm
As it stands, I *hope* it’s the expander controller, though I can’t see that it’s anything else at this point. It’s just such a strange, and consistent, pattern that I’m worried it’s a very obscure incompatibility rather than a bad board. If so, though, I’d be pretty surprised, since we’re talking about (essentially) Areca-to-Areca communication. It does underscore that these systems are not off-the-shelf combinations, and hence the role of folks who do higher-end configuration regularly. The cost in my time, unfortunately, would have paid for the difference. I couldn’t have known that, though; I felt I’d done due diligence in terms of research.
Thank you for the response. I’ll update this when I have a more definitive conclusion.
Vincent