Forum Replies Created

  • Chris Murphy

    December 5, 2013 at 9:25 am in reply to: Sonnet Fusion R800 RAID – self-ejecting

    No backup at all? That is a very, very different situation than not having a current backup. I’m not sure where the idea of RAID being a backup came from, but it has certainly convinced all too many people that they can forgo backups, because hey, a drive might die, but the array won’t. Or something like that. But nothing could be further from the truth. RAID is about uptime. It’s about the availability of data. It’s not a backup. Anyway, I suggest you schedule the appropriate flogging for another time, because you really need to better understand the basics: how fragile this stuff is, and how incredibly disproportionate the penalties are for not being prepared for the inevitable disaster. The only thing you don’t know is the scope of the future disaster. This one looks pretty bad, but it’s made worse without backups.

    You’re basically in a disaster recovery situation now. It’s really important that no further changes be made to the data on disk. The more changes that are made, the worse your odds get, statistically. It’s possible to stumble forward in the correct direction, but it’s much more likely a mistake will be made and things will get even worse.

    If the hardware is designed and tested to work together, NCQ shouldn’t be a factor; it’s just a SATA command queueing feature that lets the controller and drive reorder read/write requests to fulfill them most efficiently. If the queue depth is set too high or too low, performance is degraded. But it’s possible there are bugs which can cause read/write errors.
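
    For what it’s worth, on a Linux box with a directly attached SATA drive (not through the Sonnet enclosure’s own driver stack, which I haven’t seen), the NCQ queue depth can be inspected and changed through sysfs. A rough sketch; the device name is just a placeholder:

        cat /sys/block/sdb/device/queue_depth       # show the current NCQ queue depth
        echo 1 > /sys/block/sdb/device/queue_depth  # depth of 1 effectively disables NCQ

    Behind a hardware RAID controller on a Mac that knob is in the controller and driver’s hands, so this is only useful when testing a suspect drive outside the array.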

    Press the company for more help, but make it clear to them that you do not have backups and can’t afford to take risks, including mounting the array read only or rebuilding it. I personally wouldn’t rebuild this array until I had identical sector copies of every drive. If you don’t know how to do that, you’re going to have to learn how, or prepare to budget for a data recovery service. Also ask Sonnet support what RAID metadata format they’re using. Is it proprietary? DDF? IMSM?
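
    If you go the do-it-yourself route on those sector copies, GNU ddrescue is the usual tool. A minimal sketch, assuming each drive is pulled from the enclosure and attached directly to a Linux box with enough scratch space, one drive at a time (device and paths are placeholders):

        ddrescue -d -r3 /dev/sdb /mnt/scratch/drive1.img /mnt/scratch/drive1.map

    The map file lets you resume or retry later, and the image gives you something to hand to a recovery service, or to experiment on, without ever writing to the original drive.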

    When you’ve exhausted your options with Sonnet support, press them for a referral and discount code for a data recovery service. Call them and get a quote. This will be one of the phases of your flogging for not having a backup. RAID data recovery is really f’n expensive, so be prepared for sticker shock. Report back when you have an update.

    At least with Linux software RAID, there is a way to force assemble an array that writes no metadata to any of the drives and does not mount the array. Then it’s possible to mount the degraded assembled array read only and start extracting data. There’s presumably a similar procedure here, but I can’t tell you how agreeable Sonnet is about sharing this information; it may very well be that specialized data recovery companies have had to reverse engineer this process, which is one of the reasons why I’m very skeptical of proprietary formats.
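
    For the Linux case, the rough shape of it is something like this. It’s a sketch only; the device names are placeholders, and you’d want to read the mdadm man page (and ideally work from the ddrescue images rather than the original drives) before running anything:

        mdadm --assemble --force --readonly /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
        mkdir -p /mnt/recovery
        mount -o ro /dev/md0 /mnt/recovery

    Whether Sonnet’s metadata can be assembled this way at all depends on the answer to the metadata format question above.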

    Anyway, you have some homework to do.

  • Chris Murphy

    December 5, 2013 at 8:54 am in reply to: OSX 10.9 and codecs

    Yes, from an archiving standpoint, any data that uses a proprietary encoding means you have to put ongoing effort into managing the content as it ages, to guard against it becoming stuck in time. So long as those encodings are constantly being migrated to new versions of the same encoding (which should be as simple as opening and resaving in the current version of FCP), you’re OK. But the point when you consider migrating out of the Apple universe is when you’ll need to figure out how to re-encode all of that content. You might look at, and keep track of, the libavcodec and FFmpeg projects to see if they meet your requirements.
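
    As a hypothetical example of what that re-encoding pass might look like with FFmpeg (file names and codec choices below are placeholders for illustration, not a recommendation for your material):

        ffmpeg -i master.mov -c:v libx264 -crf 18 -preset slow -c:a aac -b:a 256k migrated.mp4

    For true archival masters you’d more likely pick a lossless or mezzanine codec than a delivery codec like H.264, but the mechanics are the same: one command per file, scriptable across a whole library.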

  • Chris Murphy

    December 5, 2013 at 8:41 am in reply to: Monitor Calibration Across Multiple Systems…

    Ideally what you want is a display that does internal (hardware) calibration, so that you’re not depending on the lower quality calibration produced via curves in the video card. The NEC PA series are quite nice and reasonably priced for this. There are also a number of Eizo displays that support hardware calibration, and there’s the HP DreamColor. The other thing these displays enable is constraining their gamut to the Rec. 709 primaries.

    Apple Cinema Displays are problematic because there’s no way to independently set white and black luminance, so the dynamic range is something you’re simply stuck with. And that means you’ll need viewing environments that are each slightly different to account for the difference in each display. Further, their gamut isn’t exactly Rec. 709 to begin with. This can be dealt with if the application supports ICC profiles or 3D LUTs to compensate for the display, but of course if the primaries of the display are less chromatic than the standard, there’s simply nothing that can be done to enhance the chroma of a primary.

    Dark images on screen imply viewing conditions that are too bright. Improper whites mean a white point (color temperature) that hasn’t been properly set. There are a number of products that can help with this; the X-Rite i1Display Pro is quite a nice instrument for this purpose.

  • Chris Murphy

    December 5, 2013 at 8:29 am in reply to: Different color/gamma in Quicktime Player – MAC OSX

    All three look different to me, but then they also each have different scaling, which affects our perception and hence the color appearance.

    I don’t know what you mean by “both set to Adobe RGB in display settings”; can you be a lot more specific? This doesn’t seem like the right space to use for video, although in Premiere it should be compensated for before it’s displayed on screen or rendered to whatever your chosen final output is. But Adobe RGB isn’t the right encoding for the final output, so I sorta fail to see why it’d be used as an intermediate space for video. It’s more of a print space (used by some photographers too, but largely it’s a prepress oriented color space).

  • Chris Murphy

    December 5, 2013 at 8:16 am in reply to: Sonnet Fusion R800 RAID – self-ejecting

    Updating drivers or flashing firmware must always be preceded by a backup, unless the data is disposable. I’d call support and wait on the phone until they get you to someone more senior who can answer your questions, primarily how to get the array to mount, which should be possible whether it’s degraded or not.

    If the data isn’t disposable and isn’t backed up, then (not having seen the log files) I would avoid mounting the array read-write, and instead mount it read only and start copying files off the thing. If this is a Mac this can be done first with ‘diskutil unmount /dev/diskXsY’ and then with ‘diskutil mount readOnly /dev/diskXsY’, where X and Y are the disk and slice numbers for the volume, which you can get from ‘diskutil list’. Then you can do a file copy and not make any changes to the state of the array or its underlying disks.
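
    A rough sketch of that sequence; disk2s2 and the volume names are placeholders, so check the ‘diskutil list’ output for your actual identifiers first:

        diskutil list
        diskutil unmount /dev/disk2s2
        diskutil mount readOnly /dev/disk2s2
        cp -Rpv /Volumes/YourArray /Volumes/CopyTarget/

    The read only mount is the important part; the copy itself can be done with the Finder, cp, or rsync, whatever you prefer.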

    As for replacing the dead drive and letting it rebuild, yeah, you could do that, but the rebuild will be slow and will slow down the file copy. If you seriously don’t have current backups, your strategy needs to be a lot more conservative, and I think it’s more conservative to mount the array read only and start file copying than to rebuild onto a new drive. Depending on the drive size that’ll take hours, maybe half a day.

    What drives are these?

  • Chris Murphy

    December 5, 2013 at 8:00 am in reply to: RAID 5 Drive failure

    I agree. A scrub (parity check) is a minimum requirement. It’s a needle in a haystack, but it’s typical for drives to spit garbage as they die, and neither RAID nor the file system has any means of disputing the garbage. That garbage just causes confusion and strange OS behavior, including file system problems.

    The other thing that’s possible is that one or more of the surviving drives has bad sectors resulting in transient read failures. Those cause delays as data is rebuilt from parity (on the fly) in the array’s degraded state. When you reboot to the RAID firmware interface and initiate the rebuild on a replacement drive, the remaining drives are permitted to take quite a while (30 seconds, maybe more) to make multiple attempts at reading those marginal sectors. The thing is, without an explicit read failure those sectors aren’t fixed (or remapped). And if they did produce a read error, that’d mean their data isn’t returned, which means a collapsed RAID 5 array. So… yeah. I wouldn’t trust it.

    And that’s why buying cheap drives not designed for use with RAID is shooting yourself in the foot. Those drives are explicitly designed to have long error recovery times. Drives designed for RAID instead produce an error quickly, which causes the RAID to fix (rewrite) that sector, so bad sectors don’t accumulate. Which they do with the wrong kind of drives.
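
    As an aside, on drives that support SCT error recovery control you can check, and sometimes cap, the recovery time yourself. A sketch using smartctl (from smartmontools), run against a drive that’s reachable directly rather than hidden behind the RAID controller; /dev/sdb is a placeholder:

        smartctl -l scterc /dev/sdb          # report the current SCT ERC read/write timeouts
        smartctl -l scterc,70,70 /dev/sdb    # set both timeouts to 7.0 seconds, if the drive allows it

    Cheap desktop drives often refuse the second command, which is exactly the problem being described above.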

  • Maybe I’m missing something in the topology, but I don’t see how this works without SAN software. Otherwise it’s basically like connecting two Macs to one drive via FW 800 (or USB or SCSI for that matter) and writing to the single drive from both. If the connections actually let you do this, and the file system were mounted on the two computers at the same time as a read-write volume, the file system would quickly corrupt itself beyond repair. Hence the need for SAN software.

  • Chris Murphy

    October 22, 2013 at 7:24 am in reply to: Periodic network drop and recovery

    What network protocol is this, over what speed of physical link? It seems really straightforward to narrow down whether it’s the sending computers that are hanging up for some reason, or whether it’s the NAS that’s getting busy. In particular, it’s expected that any spare RAM the NAS has is used for caching; once that fills up, it’s going to behave this way, in which case the array’s write performance is inadequate for the sending streams you’ve got. So anyway, more information is needed to have any idea what’s going on.

    bonnie++ and iozone are useful utilities for narrowing down such things, but you’ve got to have an idea of what your workload needs to be in order to set up any benchmark tool to simulate it correctly. There’s no point getting data indicating problems (or success) for a workload you don’t actually use.
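
    As a starting point only (the mount point and sizes below are placeholders; pick a test size bigger than the NAS’s RAM so its cache can’t hide the real array performance):

        bonnie++ -d /mnt/nas-share -s 16g -n 0
        iozone -a -s 8g -r 1m -f /mnt/nas-share/iozone.tmp

    Run these from one of the sending computers against the NAS share, then against local storage, and compare.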

  • Chris Murphy

    September 30, 2013 at 2:32 am in reply to: Proper storage solution. Promise Pegasus?

    The difference between the 10 Gbps spec and real world experience is due to overhead. The 8b/10b encoding alone causes a 20% hit. Although I’m surprised at some of the benchmarks cited in this article; even accounting for the fact they’re RAID 0, there still should be some overhead for the transport, data, and file system layers.
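
    Back of the envelope, using that 20% figure: 10 Gbps x 0.8 = 8 Gbps, which is roughly 1 GB/s of payload, and that’s before the transport, data, and file system layers take their cut.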

    Decent article on the subject:
    Theoretical vs. Actual Bandwidth: PCI Express and Thunderbolt

    An n+1 member RAID 5 array seems like it could have about the same read performance as an n member RAID 0 array, but chunk size and parity computation add overhead for writes. A particularly good (expensive) controller will have more optimizations for this than average controllers.

  • Chris Murphy

    September 29, 2013 at 1:24 am in reply to: The Incredible Shrinking Hard drive

    OS X 10.9 Server is designed to work this way: it caches software updates locally for its clients.

