Creative Communities of the World Forums

The peer to peer support community for media production professionals.


  • RAID not mounting!

    Posted by Brian Tario on July 13, 2007 at 3:33 am

    I just finished updating my system to FCS2 and my xRAID won’t mount. Unfortunately I’ve done a lot of updates/upgrades in the past few hours so pinpointing what caused it will be difficult. It doesn’t show up in the Finder or in Disk Utility, but it was working perfectly prior to all the updates (FCS2, OS is now 10.4.10, EFI Firmware, new RAID Admin, etc.). I tested the new RAID Admin after updating it and it worked fine. Is there a new utility I need to download from Apple? New Fibre Channel driver? Anything??? Searching their site didn’t help. I’m sure the xRAID itself is fine…but how do I mount it?? Bummer. Please help! THANKS!!

    Brian Tario replied 18 years, 10 months ago 2 Members · 2 Replies
  • 2 Replies
  • David Bogie

    July 13, 2007 at 3:56 pm

    While I hope you find an easy fix, do you mean the Apple XServe RAID? I think the number of xRAID users here on the cow can be counted with one finger. You.

    Time to call Apple.

    Or drop by apple.com’s support discussions. There is a forum for the xRAID that gets one or two posts a month. From the following interchange, it doesn’t look good.

    https://discussions.apple.com/thread.jspa?threadID=1037965&tstart=0

    Hi, we have an XRaid populated with 14 x 250 GB disks; each controller has six disks formatted as RAID 5 plus a hot spare. The two controllers are then concatenated, and the host machine (G5 Mac via dual fibre) sees this as a single 2.3 TB volume.
    Twice now we have experienced a disk failure (the XRaid is nearly 4 years old) and the XRaid rebuilds itself using the hot spare.
    All then looks great from RAID Admin, but the volume just won’t mount onto the host; we get an error message ‘unable to read disk, initialise or eject’. We can see the volume in Disk Utility, but requesting a mount from there does not work. We have restarted everything multiple times! We have tried to repair from Disk Utility.
    On both occasions the advice from Apple Support has been to reformat the entire RAID (luckily we have a full backup), but it just seems odd to me that RAID Admin tells me everything is OK. Is this happening because of the concatenation? Would we be better off configuring the XRaid differently after this reformat?

    Thanks

    Hi Dan, is this only happening during the rebuild to the hot spare, or also after the rebuild operation has completed? When the hot spare jumps in, have you checked whether it belongs to the same controller where the disk failed?

    I don’t have any experience with this, but when the RAID rebuilds, it’s in degraded mode. That’s normal. With another level of RAID concatenating the degraded RAID with a non-degraded RAID, I can see where problems could develop.
    I keep my two RAID5s separate so that if one dies, I can still use the other one, and if the usage of both is less than 40%, I can run the whole thing off of one until I get the other back up.

    Yes, during a rebuild the LUN would be in degraded mode, but it should still be mounted and accessible by the host, although with a performance impact; at least that’s how it works on most other entry-level storage arrays. I don’t think Apple arrays can handle this type of configuration. Either Apple Support provides you with more useful information than completely destroying the RAID groups and then rebuilding them, or you do what Roger suggests, by having two separate LUNs presented to the host, so you don’t have this issue every time a disk fails. Did you use host software to create the meta LUN? Ultimately it would be useful to get some type of host and array log to see the configuration. Like I said, reading through the data sheet of the XRaid, I don’t think Apple supports high availability and redundancy when the LUNs of both controllers are concatenated.

    Hi Dan, reading a bit about the architecture of the XRaid, I found the following useful information:

    Hot Sparing
    For each RAID controller, any drives not assigned to an array are automatically used as global hot spares. If a drive fails, the RAID controller can automatically rebuild its data on the spare drive without requiring intervention by the administrator. The rebuild operation occurs in the background while the controller processes normal host reads and writes, so that service continues uninterrupted. This gives the administrator ample time to replace the failed drive. Xserve RAID automatically configures the drive as a new hot spare for the array.
    As I mentioned before, typically the rebuild occurs in the background without much impact to the host. If you cannot mount your file system when a rebuild occurs, then there is something quirky about your configuration. I also found the following info:
    Critical Path Eliminated
    The Xserve RAID architecture is designed to reduce vulnerability to a component failure. With this in mind, Apple built Xserve RAID around a midplane with a passive data path, a feature not commonly found in other storage systems of its kind. The midplane is the central connector between the drives, RAID controllers, power supplies and cooling modules. Most RAID systems depend on the midplane to relay data and instruction sets between drives, and a failure in the midplane can impair data availability. In Xserve RAID, all data passes through the independent drive channels, which are simply held in place by the midplane.

    As I understand it, each controller should have access to every single disk in the array. I’m not sure if meta LUNs are supported on this type of array, but if you can create a RAID-5 with all disks but one, which then becomes your hot spare, you would have one large LUN presented to the host through one controller. If that controller fails, the other should take over. That makes the most sense, because that’s how it typically works with other storage arrays. The only problem with RAID-5 is that only one disk can fail. If two fail, you will have a double-faulted RAID group, and the data cannot be rebuilt from the data and parity on the other disks.
    I hope to have pointed you in the right direction.
    Cy
    Hi, thanks for the responses. To answer a couple of the questions, we actually lost the mount at the point the faulty drive changed to ‘amber’ (failing drive). Apple Support then advised us to remove the failing drive to kick-start the rebuild process, which is exactly what happened. We did not try to mount during the rebuild, just left it overnight, and all looked fine in RAID Admin the next morning, except that we still could not mount. The indication from Support seems to be that the concatenation is the problem, causing the combined published volume to be unreadable on the host. As far as I can remember, we just used Disk Utility to concatenate the two arrays together to mount as a single 2.3 TB drive. We’ve restored our data to a big LaCie drive now and are about to reformat the XRaid, and I think we will publish each array as a separate volume this time. It just seems to defeat the purpose of RAID to have to reformat after a drive failure! Thanks

    You have two different types of RAID that you’ve been using: two RAID5s, which are the two sides of the XRAID, and a software RAID1 that Disk Utility created from the two RAID5 volumes. The software RAID is the one causing the problems. Even though I’m sick of Apple at this point, I would bet that the two RAID5s are actually fine.
    Software RAIDs have never really been considered robust, and hardware RAID is preferred for that reason.
    Roger

    Roger, thanks for this. I’m sure I know the answer to this, but can you just ‘turn off’ the RAID 1 and publish the two existing RAID 5 volumes without a reformat? Thanks, Dan

    bogiesan

    This is my standard sigfile, so do not take it personally: “For crying out loud, read the freakin’ manual.”
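    An aside on the RAID-5 point raised in that thread (parity lets the controller rebuild one failed disk, but not two): the mechanism is just XOR across the stripe, and a toy sketch makes it concrete. This is purely illustrative Python, not anything shipped with Xserve RAID or RAID Admin:

    ```python
    from functools import reduce

    def parity(blocks):
        """XOR equal-length blocks byte-by-byte; used both to compute the
        parity block and to rebuild a missing block from the survivors."""
        return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

    # Three "disks" of data plus one parity "disk" (real RAID-5 rotates
    # parity across the members; one stripe is enough for illustration).
    disks = [b"AAAA", b"BBBB", b"CCCC"]
    p = parity(disks)

    # One disk fails: XOR the surviving disks with parity to rebuild it.
    failed = 1
    survivors = [d for i, d in enumerate(disks) if i != failed]
    rebuilt = parity(survivors + [p])
    assert rebuilt == disks[failed]  # single failure is recoverable

    # Two disks fail: XOR of the survivors and parity yields only the XOR
    # of BOTH missing disks, which cannot be separated back into the two
    # originals; this is the "double-faulted RAID group" case.
    ```

    The same arithmetic is why a rebuild can run in the background: every read of the failed disk is answered by XOR-ing the other members on the fly.
    
    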

  • Brian Tario

    July 13, 2007 at 5:16 pm

    Thanks for the info! Last night, after multiple reboots and apparently nothing on my end, it seemed to be working fine. But I just started it up today and it didn’t mount. I checked Disk Utility and it wasn
