Forum Replies Created

Viewing 11 - 20 of 77 posts
  • Alex Gardiner

    June 8, 2018 at 4:19 pm

    4-bay enclosures are a bit of a pain.

    Mirroring always gives you 50% usable space, which is usually not acceptable. Also, depending on how your RAID is provisioned, it can actually be slower for sequential access.

    Over 4 drives I’d begrudgingly choose RAID5, but I’d suggest you don’t use HDDs that are too large.

    For 6x drives and above choose RAID6, or some kind of double-parity equivalent.

    Beyond that, I’d say 16x is getting wide enough for RAID6. At that point you may want to look at RAID60 (that’s what we do on 24x enclosures).

    Also some people like to allow for a hot spare, but I’ve seen this get out of shape when storage controllers/software are left unattended. Once something fails an experienced engineer will be better at weighing up the next step.

    NB: ZFS rewrites some of these rules, but I doubt that is within the scope of this discussion.
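
    To put rough numbers on the capacity side, here’s a minimal sketch (my own illustration, assuming equal-sized 8 TB drives and a two-group layout for RAID60 – none of these figures come from the thread):

        # Rough usable-capacity sketch for the RAID levels mentioned above.
        # Drive size and layouts are made-up examples, assuming equal-sized drives.
        def usable_tb(drives, drive_tb, level):
            if level == "mirror":    # RAID1/10: half the raw capacity
                return drives * drive_tb / 2
            if level == "raid5":     # one drive's worth of parity
                return (drives - 1) * drive_tb
            if level == "raid6":     # two drives' worth of parity
                return (drives - 2) * drive_tb
            if level == "raid60":    # two RAID6 groups striped together
                return (drives - 4) * drive_tb
            raise ValueError(level)

        for drives, level in [(4, "mirror"), (4, "raid5"), (8, "raid6"), (24, "raid60")]:
            print(f"{drives}x 8 TB as {level}: {usable_tb(drives, 8, level)} TB usable")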

    alex@indiestor.com

  • Alex Gardiner

    June 6, 2018 at 9:15 am

    I’ve done a fair number of these around London.

    Backup first – sounds like you’re on top of that (check).

    The surprising thing I didn’t account for on my first go: time of day. It’s pretty obvious in hindsight, but it’s so much more stressful if you do this at a busy time.

    Also, I hope you have the original boxes… you did keep them, right? 🙂

    alex@indiestor.com

  • Alex Gardiner

    April 27, 2018 at 9:44 pm

    > Trouble is when a gigabit cable is connected it wants to connect to the NAS with gigabit instead of 10gige every time….even if I change the network order in the Apple system prefs so the 10gig e gets priority.

    What IP range are the 1G and 10G networks using?

    Two things…

    1) If the 1G and 10G interfaces are in the same range, this is likely the cause of the problem.

    2) I wonder if your QNAP actually needs internet access? I’m usually happy enough to keep production storage somewhat air-gapped, but it depends on what you’re doing (expectations etc.).

    NB: Most services advertised by Linux-based NAS units export on all network interfaces (Samba/Netatalk are configured this way by default). You can obviously control which IP you connect over, so I’d expect (1) is the answer – you just need to specify it at the point of mounting – i.e. smb://xxx.xxx.xxx.xxx using the relevant IP (quick sketch below).
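
    As a quick illustration of (1), here’s a minimal Python sketch of the same-subnet check I mean – the addresses are placeholders, not taken from your setup:

        # Check whether two interface addresses land in the same IP range.
        # Addresses below are hypothetical placeholders.
        from ipaddress import ip_interface

        one_gig = ip_interface("192.168.1.20/24")   # hypothetical 1G address
        ten_gig = ip_interface("192.168.1.30/24")   # hypothetical 10G address

        if one_gig.network.overlaps(ten_gig.network):
            print("Same range - the OS will pick whichever route it likes")
        else:
            print("Separate ranges - mount via the 10G IP, e.g. smb://" + str(ten_gig.ip))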

  • Alex Gardiner

    April 10, 2018 at 8:40 am

    It’s nice to see that you had some progress with this!

    Storage Engineer
    alex@indiestor.com

  • Alex Gardiner

    April 10, 2018 at 8:35 am

    [Trevor Asquerthian] “it’s nothing like Interplay though, it just replicates bin-locking for shared bins.”

    Trevor hit the nail on the head.

    You can access basic bin-locking and media-sharing workflows in the same fashion as with any other third-party vendor (EditShare, Facilis, SANFusion, etc.).

    The key is in the phrase ‘third-party’. As of MC 8.8 you’ll see this splash screen when opening a project for the first time…

    For the latest workflows you’ll need to invest in NEXIS/Media Central. Considering what these tools can do, the cost is reasonable.

    AVID have the brightest future in this space – my view is that they are without peers at the moment. I’m pleased they have a new direction and somebody new to lead them forward.

    Storage Engineer
    alex@indiestor.com

  • Alex Gardiner

    March 14, 2018 at 4:42 pm

    Long shot: try running the Dell in legacy BIOS mode (if it has one)… I’ve seen HP boxes fail to boot with other brands of RAID controller. It depends on the board etc.

    Sorry I can’t be clearer.

    Storage Engineer
    alex@indiestor.com

  • Alex Gardiner

    February 12, 2018 at 12:52 pm

    At a guess, the bursty/unpredictable performance you’re seeing stems from the fact that there’s no dedicated hardware handling offload, as there would be with a real network card.

    I could be very wrong, but that seems likely.

    PS. Tuning something like sysctl is “evil”. The passage below is a good description (originally from an old ZFS tuning discussion). The idea is that the defaults should already be right, and if they’re not, the problem is likely elsewhere.

    “Tuning is often evil and should rarely be done.

    First, consider that the default values are set by the people who know the most about the effects of the tuning on the software that they supply. If a better value exists, it should be the default. While alternative values might help a given workload, it could quite possibly degrade some other aspects of performance. Occasionally, catastrophically so.

    Over time, tuning recommendations might become stale at best or might lead to performance degradations. Customers are leery of changing a tuning that is in place and the net effect is a worse product than what it could be. Moreover, tuning enabled on a given system might spread to other systems, where it might not be warranted at all.

    Nevertheless, it is understood that customers who carefully observe their own system may understand aspects of their workloads that cannot be anticipated by the defaults. In such cases, the tuning information below may be applied, provided that one works to carefully understand its effects.”

    The original link for this is dead (old Solaris stuff), but you get the idea.
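
    In that spirit, if someone does insist on tuning, the least they can do is record the defaults first so they can be restored. A minimal Linux-only sketch (the sysctl keys are just examples, not a recommendation):

        # Snapshot current sysctl values before anyone "tunes" them.
        # Linux only; the keys listed are examples, not recommendations.
        from pathlib import Path

        keys = ["net.core.rmem_max", "net.core.wmem_max", "net.ipv4.tcp_rmem"]

        for key in keys:
            path = Path("/proc/sys") / key.replace(".", "/")
            try:
                print(key, "=", path.read_text().strip())
            except OSError:
                print(key, "not present on this system")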

    Storage Engineer
    alex@indiestor.com

  • Alex Gardiner

    November 16, 2017 at 7:56 pm

    HGST has always been the brand I gravitate towards, but the IronWolf range is pretty solid in my view.

    If you go for densities above 8x drives then it pays to go pro (or enterprise), but for small enclosures they’re perfectly fine.

    Ultimately you get what you pay for, but I doubt these will cause too many issues when used within the manufacturer’s guidelines.

    Storage Engineer
    alex@indiestor.com

  • Alex Gardiner

    November 15, 2017 at 8:37 pm

    That seems more like an answer to me.

    Hope you get it sorted Eric.

    Storage Engineer
    alex@indiestor.com

  • Alex Gardiner

    November 15, 2017 at 7:52 pm

    Just to clarify…

    A) You created a spanned volume/storage space using Windows? (As in you created some kind of software RAID volume)

    OR…

    B) You pressed some buttons on the enclosure, which created a RAID volume of some description? (stripe/mirror/parity.. whatever)

    In the case of B, I would not expect that you could take drives out of an OWC enclosure, whack them into another brand of enclosure, and have it just work.

    On Mac you might have been passing the drives through to macOS and then creating a RAID volume there. That would generally still work if you moved the drives to another enclosure, provided the OS could see the bare drives.

    Put another way, you can’t normally create a RAID6 on an Areca controller and then later import it using an ATTO/LSI/Adaptec/whatever controller.

    I’m not 100% sure on this, but I’ve chipped in because it sounds like your data is at risk.

    I also may have misunderstood, so apologies if I’m up the wrong tree.

    FWIW the only really good filesystem for this kind of importability is ZFS, but that is still very much a community project.
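
    For example, on a host with the ZFS tools installed, a bare zpool import scans the attached drives and lists any pools it could bring in, regardless of which enclosure or HBA they came out of. A tiny sketch of that check (assumes a Linux/FreeBSD box with ZFS installed, usually needs root – purely illustrative):

        # List ZFS pools that could be imported from whatever drives are attached.
        # Assumes ZFS tools are installed; this is read-only and imports nothing.
        import subprocess

        result = subprocess.run(["zpool", "import"], capture_output=True, text=True)
        print(result.stdout or result.stderr or "no pools available to import")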

    Storage Engineer
    alex@indiestor.com
