Forum Replies Created

  • Vadim Carter

    May 2, 2014 at 4:41 am in reply to: Infortrend SAS RAID6 Running very slow

    Jason, I concur with Chris – you have one or more drives that are going bad, and the RAID controller keeps retrying whatever operation is failing (most likely a write).

    Now would be a great time to back this whole thing up to your LTO tape drive. ASAP. It is your insurance policy.

    After that you can either upgrade the drives to 2TB ones (I recommend Hitachi Enterprise drives), or pull your Seagate drives out and test them one by one by connecting them directly to a PC and running the SeaTools hard drive diagnostics utility on each drive. You should be able to isolate the bad drive this way. You can then put the remaining good drives back in your array, and it should come up in a degraded state. Get a replacement drive, rebuild the RAID volume, and you’ll have your performance back.
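    If the PC you use for testing runs Linux rather than Windows, a SMART health query can serve the same purpose as SeaTools. This is just a sketch – it assumes smartmontools is installed and the drive under test shows up as /dev/sdb (substitute your own device name):

```shell
# Overall SMART health self-assessment (PASSED/FAILED)
smartctl -H /dev/sdb

# Full attribute dump – watch Reallocated_Sector_Ct,
# Current_Pending_Sector and Offline_Uncorrectable
smartctl -a /dev/sdb

# Kick off a long (full-surface) self-test; check the result
# later with 'smartctl -l selftest /dev/sdb'
smartctl -t long /dev/sdb
```

    A drive with a growing pending/reallocated sector count is the usual culprit behind the endless-retry behavior described above.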

    Good Luck!

    Vadim

    Lucid Technology, Inc. / 801 West Bay Dr. Suite 465 / Largo, FL 33770
    “Enterprise Data Storage for Everyone!”
    Ph.: 727-487-2430
    https://www.lucidti.com

  • Vadim Carter

    January 29, 2014 at 10:28 pm in reply to: zfs tuning

    John, I was wondering if you could comment on the level of performance change after you disabled compression. I would actually expect a slight performance increase with lzjb compression enabled on your ZFS filesystem. Unfortunately, Oracle ZFS does not support lz4 compression, which is even better and faster.

    Turning off atime will most certainly bump up the performance. Disabling sync should only affect writes.
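    For reference, all of the settings discussed above are per-dataset ZFS properties. Assuming a dataset named tank/media (substitute your own), they would be toggled like this:

```shell
# Lightweight compression – often a net win on spinning disks
zfs set compression=lzjb tank/media

# Stop recording access times on every read
zfs set atime=off tank/media

# Treat synchronous writes as asynchronous (affects writes only –
# risks losing in-flight data on power failure)
zfs set sync=disabled tank/media

# Verify the current values
zfs get compression,atime,sync tank/media
```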

    Thanks for sharing your findings. ZFS is awesome!


  • Vadim Carter

    October 17, 2013 at 6:44 pm in reply to: Periodic network drop and recovery

    I am going to put my 2c in.

    This issue could be related to caching. When the cache on your storage device gets full, the OS will flush its contents to disk. This can cause your storage to momentarily stop responding to network I/O while it is performing heavy disk write operations. Try disabling caching on your NAS and see if it helps at all.

    Another thing to take a look at is the MTU (Maximum Transmission Unit) setting. The MTU must be the same for all devices on the network: the typical value is 1500, while jumbo frames use an MTU of 9000, and any mismatch will spell trouble due to packet fragmentation. You can check the MTU on the Mac by going to System Preferences > Network (I think you have to click the “Advanced…” button). On your Oracle box you’ll probably have to run the ‘ifconfig’ command to see the MTU setting. Lastly, do not forget to check the MTU configured on the Ethernet switch.
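    To illustrate, here is how the MTU can be checked from the command line on each piece of the chain. Interface names like en0 and e1000g0 are examples – substitute your own:

```shell
# macOS – per-interface MTU, no GUI needed
networksetup -getMTU en0

# ...or via ifconfig (works on the Mac and most Unix systems)
ifconfig en0 | grep -i mtu

# Oracle/Solaris box – MTU is shown in the interface summary
ifconfig e1000g0

# To enable jumbo frames on the Mac side (the switch and the
# server must be set to 9000 as well, or you'll get fragmentation)
networksetup -setMTU en0 9000
```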

    Eric has made a good suggestion of eliminating the switch but it does not look like it is going to be possible in your case given the number of clients.


  • Vadim Carter

    July 3, 2013 at 3:03 pm in reply to: Crowd Cloud

    [Yuval Dimnik] “Will you be willing to co-share a computer in order to backup a colleagues data and vice versa?”

    This has been done before (and it does work great):

    https://www.crashplan.com/consumer/crashplan.html

    One can back up to their friends’ computers or to any other computers they have on their network.


  • Vadim Carter

    June 26, 2013 at 2:17 pm in reply to: RAID level reliability

    Thanks for your post, Andreas. It is nice to get an independent endorsement from a person using RAIDz/zfs in production. While there are still many users who are completely unaware of the benefits zfs-based storage brings to the table, I believe the tide has turned and we’ll see more people jumping aboard.


  • Vadim Carter

    June 1, 2013 at 3:03 am in reply to: RAID level reliability

    [Alex Gerulaitis] “Vadim, I was talking about how the width of a stripe affects performance. Can’t improve performance without widening a stripe one way or the other, correct? I.e. even in RAIDZ, you’d have to re-stripe the array to improve performance. (At least that’s my understanding.)”

    There are a few ways to improve performance, and you are correct, Alex: one of them is increasing the stripe width – simply put, striping across more disk spindles and thus taking advantage of parallelism. When you expand a ZFS pool, ZFS uses dynamic striping to maximize throughput and attempts to include all devices in order to balance the load.

    Below is a quote from Oracle:

    “ZFS dynamically stripes data across all top-level virtual devices. The decision about where to place data is done at write time, so no fixed-width stripes are created at allocation time.

    When new virtual devices are added to a pool, ZFS gradually allocates data to the new device in order to maintain performance and disk space allocation policies. Each virtual device can also be a mirror or a RAID-Z device that contains other disk devices or files. This configuration gives you flexibility in controlling the fault characteristics of your pool. For example, you could create the following configurations out of four disks:

    -Four disks using dynamic striping

    -One four-way RAID-Z configuration

    -Two two-way mirrors using dynamic striping

    Although ZFS supports combining different types of virtual devices within the same pool, avoid this practice. For example, you can create a pool with a two-way mirror and a three-way RAID-Z configuration. However, your fault tolerance is as good as your worst virtual device, RAID-Z in this case. A best practice is to use top-level virtual devices of the same type with the same redundancy level in each device.”
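    The three four-disk layouts from the quote map directly onto zpool create invocations. The disk names below (c1t0d0 and so on) are Solaris-style placeholders:

```shell
# 1. Four disks using dynamic striping (no redundancy)
zpool create tank c1t0d0 c1t1d0 c1t2d0 c1t3d0

# 2. One four-way RAID-Z configuration
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# 3. Two two-way mirrors using dynamic striping
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
```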


  • Vadim Carter

    May 31, 2013 at 2:15 am in reply to: RAID level reliability

    [Alex Gerulaitis] “I could see how “no reformatting” works (legacy RAID expansions also don’t require reformatting) but no re-striping? Say, you added another eight drives to an existing set of eight in RAID-Z – how would the performance (transfer rates) of existing files improve w/o re-striping?”

    This is another cool thing about ZFS – the whole concept of storage pools. A ZFS storage pool can span multiple vdevs (virtual devices), and vdevs themselves consist of block devices, e.g. hard drives or partitions on those hard drives. So, in the example you gave above, a second RAID-Z vdev would be created from the eight new drives, and the original RAID-Z vdev and the new RAID-Z vdev would be pooled together.
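    In command form, the expansion above would look something like this (the pool and disk names are made up for illustration):

```shell
# Existing pool: one 8-drive RAID-Z vdev
zpool status tank

# Add a second 8-drive RAID-Z vdev; ZFS starts striping
# new writes across both vdevs automatically
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
                     c2t4d0 c2t5d0 c2t6d0 c2t7d0
```

    Note that existing data stays on the original vdev; it is new writes that get balanced across both.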


  • Vadim Carter

    May 30, 2013 at 2:29 am in reply to: RAID level reliability

    [Eric Hansen] “the ability to expand a volume without reformatting is another awesome feature.”

    [Alex Gerulaitis] “I’ve done this with RAID5 and RAID6 volumes. Am I missing something? (Thought ZFS’s advantages were centered around resiliency, performance, integration with RAID-Z.)”

    I’ll try to explain and put my 2c in.

    A traditional hardware-controller-based RAID array will typically allow RAID5 or RAID6 expansion by restriping the RAID set to include any newly added drive(s) – an inherently dangerous operation. The end result is an increase in the RAID set size; however, the filesystem residing on the newly expanded RAID set is still the same size. There are three options at this point:

    1. “Stretch” the existing filesystem over the newly added space, if your OS supports this; or

    2. Create a second partition on the newly added space, format it, and mount it; or

    3. Delete everything, reformat, create a new filesystem, and mount it.
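    As a sketch of the “stretch” option on a Linux host with ext3/ext4 (the device names here are hypothetical – check yours with care before resizing anything):

```shell
# After the controller finishes restriping, the block device is
# bigger but the partition and filesystem are not. Grow the
# partition to fill the device, then grow the filesystem into it:
parted /dev/sdb resizepart 1 100%
resize2fs /dev/sdb1   # grows ext3/ext4 to fill the partition
```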

    ZFS is a filesystem and a volume manager “all in one,” so to speak. Adding drives is a very simple operation: there is no need to wait for a ZFS RAID set to restripe, no reformatting involved, and no risk to the data. It just works. Like magic.


  • It looks like you have a dying drive that is developing bad sectors. You have a few options:

    1. If you absolutely must get your files back, then do not attempt anything else yourself – contact a data recovery company (this is EXPENSIVE!!!).

    2. Try installing and running SuperDuper. You might be able to clone your failing drive onto your new drive.

    3. If that does not work, enlist a friend who is good with Linux and use ddrescue to attempt a sector-by-sector copy and recovery of your failing disk: https://www.gnu.org/software/ddrescue/ddrescue.html This may take days, depending on how bad your disk drive is, but you may end up recovering most of your data.

    Lastly, please do not attempt to run any kind of filesystem repair utility on your failing drive – it will only corrupt it further.
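    For the ddrescue route, the usual approach is a two-pass copy into an image file with a mapfile, so the job can be interrupted and resumed. Device and file names below are examples – triple-check that the output is the image file, never the failing disk:

```shell
# Pass 1: grab everything that reads cleanly, skip bad areas (-n)
ddrescue -f -n /dev/sdb disk.img rescue.map

# Pass 2: go back and retry the bad areas a few times (-r3),
# using direct disc access (-d) to bypass the kernel cache
ddrescue -d -f -r3 /dev/sdb disk.img rescue.map
```

    Once the image is as complete as it is going to get, run any filesystem repair tools against the image – consistent with the warning above, never against the original drive.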

    Good Luck!


  • Vadim Carter

    May 25, 2013 at 1:02 am in reply to: RAID level reliability

    [Alex Gerulaitis] “I am hoping nobody does those really wide parity groups without understanding the implications. Personally, I’d do RAID60 on 48-wide group.”

    You’d be surprised, Alex, how many installations do have those very large RAID sets… I concur – RAID 60 is the right way to do it.

    I have briefly looked at the spreadsheet you found and it looks intriguing. I am not an Excel ninja and I cannot attest to the accuracy of the formulas being used. I’ll play with the numbers and try to trace the logic behind all the variables being used. It is certainly a good find and we can build on that 🙂 Thanks, Alex.

