Forum Replies Created

Page 10 of 15
  • Chris Murphy

    September 6, 2013 at 11:22 pm in reply to: G Speed Q ESATA Write Speeds in Raid5

What happened to read speeds? 195MB/s writes are still not saturating the SATA 3Gbps link. If the reads are relatively unchanged, then I'd say you don't have a misalignment, nor is there excessive read-modify-write (RMW).

  • Chris Murphy

    September 6, 2013 at 11:10 pm in reply to: mini-SAS on Thunderbolt?

Xserve RAID has two options: verify and rebuild parity. Verify reads both data and parity chunks and reports mismatches. Rebuild reads data chunks and writes new parity chunks. Verify is the one to use regularly; there's no reason to rebuild parity without a specific cause. Verify is sufficient to trigger any needed self-healing resulting from bad sectors.
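The distinction can be sketched with a toy XOR-parity model (illustrative only, not Xserve RAID's actual firmware): verify reads data *and* parity and compares, while rebuild reads only data and overwrites parity, so a rebuild can silently ratify a mismatch that a verify would have reported.

```python
from functools import reduce

def xor_parity(chunks):
    """Parity chunk = byte-wise XOR of all data chunks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def verify(data_chunks, parity_chunk):
    """Read everything, report whether parity matches (no writes)."""
    return xor_parity(data_chunks) == parity_chunk

def rebuild(data_chunks):
    """Read only data chunks, compute fresh parity (never reports mismatches)."""
    return xor_parity(data_chunks)

data = [b"\x01\x02", b"\x0c\x30"]
parity = xor_parity(data)          # b"\x0d\x32"
assert verify(data, parity)        # clean array: verify passes
assert not verify(data, b"\xff\x32")  # verify catches a corrupted parity chunk...
assert rebuild(data) == parity     # ...rebuild would just overwrite and move on
```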

  • Chris Murphy

    September 5, 2013 at 10:24 pm in reply to: CompactFlash Extreme Pro vs SDXC Memory Card Extreme

    Aha!
    [I wrote] getting CF to perform worse than SD

    Reverse that. Somehow getting SD to perform worse than CF.

    Panasonic just came out with a mini version of their P2 cards. If I had a bunch of the very expensive P2 cards I would be a little put off if I couldn’t use them in the next version of their camcorder. Unless of course the new card was significantly faster. Which it’s not.

Which is why it’s totally fair game to reject the product solely on the grounds that the camera and storage medium are effectively inextricably linked. Otherwise it’s just rewarding companies for a nonsensical approach to storage.

I also find it strange that so many pieces of high-end gear use HDMI instead of HD-SDI.

We have the DMCA to thank for this, at least in part, and also backward compatibility with DVI, which in turn post-dates the DMCA. So it’s really about the moneyed content stakeholders wanting control over the consumer’s ability to manipulate that content.

  • Chris Murphy

    September 5, 2013 at 9:06 pm in reply to: mini-SAS on Thunderbolt?

    certainly it wouldn’t hurt to have something better than e-SATA

For what it’s worth, the G-Speed Q is a particular instance of eSATA that uses the SATA Rev 2.0 spec, i.e. 3Gbps. The G-Technology unit that uses mini-SAS pairs with an ExpressSAS R680 card, which is 6Gbps SAS. So it’s a different interface, command set, and bandwidth. I haven’t benchmarked the workloads you’re talking about to see whether there’s a meaningful difference between SATA 6Gbps and SAS 6Gbps, so I have to defer to others. But if you go SATA, definitely specify at least nearline if not enterprise drives. Consumer drives aren’t worth the hassle: their incorrect (and usually unconfigurable) error recovery settings inhibit proper raid5 bad block repair.

Check whether the R680 card has a scrub option. In my opinion, scrubs are mandatory functionality; my personal bias is to disqualify a product that doesn’t offer them. I might even go so far as to disqualify on the basis of being unable to schedule regular scrubs, with an email alert when mismatches occur. I’d seriously rather go with a simpler solution like raid10, or even two independent raid0s and a daily rsync from one to the other, than mess around with junk raid5 that lacks a proper scrubbing function. “Wow cool, I have raid5 but no scrubs” is buying into a fail-safe that has a decent chance of failing dangerously instead. Why bother? You shouldn’t have to wonder whether a raid5 rebuild is actually going to succeed.
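To put a rough number on the “fail danger”: here’s a back-of-envelope sketch of the odds that a raid5 rebuild trips over a latent bad sector that a regular scrub would have caught earlier. It assumes spec-sheet unrecoverable-read-error rates (1 in 10^14 bits for consumer drives, 1 in 10^15 for nearline) and an independent-bit model, so treat the numbers as illustrative only.

```python
def p_ure_during_rebuild(bytes_read, ure_per_bit):
    """Probability of at least one unrecoverable read error
    while reading bytes_read bytes (naive independent-bit model)."""
    bits = bytes_read * 8
    return 1 - (1 - ure_per_bit) ** bits

# Rebuilding a 4-drive raid5 of 2TB drives reads ~6TB from the three survivors.
surviving_data = 3 * 2e12
consumer = p_ure_during_rebuild(surviving_data, 1e-14)   # roughly a 1-in-3 chance
nearline = p_ure_during_rebuild(surviving_data, 1e-15)   # closer to 1-in-20
```

The exact figures depend heavily on how literally you take the spec-sheet URE rate, but the gap between consumer and nearline drives, and the value of finding bad sectors *before* a rebuild, both come through clearly.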

  • Chris Murphy

    September 5, 2013 at 1:09 pm in reply to: mini-SAS on Thunderbolt?

I agree that the PCIe expansion chassis is not elegant. This whole Thunderbolt thing strikes me as benefiting Apple first and customers second, in that at least we’re getting some high-speed connectivity rather than none, because Apple long ago decided to be done with traditional desktop hardware. But this is what’s available. So if you want to use OS X, the Thunderbolt-to-PCIe enclosure is the new paradigm, unless and until companies come up with a totally different form factor that connects to Thunderbolt as the primary interface rather than effectively being adapted to a PCIe slot.

The alternative is to go Windows or Linux for video work. *shrug*

    Anyway I don’t think the G Speed Q meets the OP’s requirements if he’s considering mini-SAS.

  • Chris Murphy

    September 5, 2013 at 1:00 pm in reply to: mini-SAS on Thunderbolt?

If you built a large array with something, e.g. from ATTO, it should work in a Thunderbolt-to-PCIe enclosure. Should. Without someone testing it’s unknown, but someone will test it, and I’m reasonably certain that any problems found will get sorted out. The alternative scenario is just too close to a horror show for video on the Mac, let’s face it.

I think the question is: do you need to actively use a large array now? If you can make good use of it now, I’d get it now and worry about compatibility with the next machine later. If you don’t really need it now, then the question is what to use as a stopgap.

With any “large” RAID my concerns are drive quality and raid card options. That you’re thinking mini-SAS implies SAS drives, which tend to be nearline or enterprise and are more reliable than consumer SATA; that goes a long way toward reducing the urgency for a more resilient file system than HFSJ. Better drives don’t totally obviate my HFSJ/NTFS concerns, but they account for a lot. Next, whatever raid card you get should explicitly support scrubbing; some of the raid products clearly don’t. You want to scrub the array probably once a month. I know people who do it weekly.

Another way to get around this is to look at a non-Mac host for big storage, made available via 10GigE. Then new and old Mac Pros can share the same storage, should you want an extra workstation, which could be useful for transitioning to new hardware. It’s more complex, since you have all the storage concerns of direct-attach plus the network concerns, and both have to be done right for network storage. But once it’s working, you walk away from it until something dies. It isn’t affected by OS updates causing driver conflicts, you can use it concurrently from more than one computer, and you can pick a file system that’s as good as or better than HFSJ or NTFS.

So I’d state how big you want the storage to be and what your usage will be, and hopefully people with more experience will respond. I’d also plan on calling some companies that sell and support such storage and getting quotes; then you’ll be in a better position to make a decision.

  • Chris Murphy

    September 5, 2013 at 6:22 am in reply to: CompactFlash Extreme Pro vs SDXC Memory Card Extreme

    Do you mean SD to perform worse than CF?

No. Although in practice SD does perform worse than CF in some cameras, which is sad.

    In what way is it inferior other than the possible bent pins?

They are more expensive to produce, bigger, heavier, and unsizable; it’s an 18-year-old spec that has been abandoned by its maintainer, which moved on to CFast a while ago and to XQD more recently. But there are these holdouts, like Canon, for some reason still using old things. It’s like the booger we can’t flick off because some companies keep holding onto it.

  • Chris Murphy

    September 5, 2013 at 6:15 am in reply to: G Speed Q ESATA Write Speeds in Raid5

[Rainer Wirth] how is it that we experience huge speed loss with eSATA connections in comparison with, let’s say, SAS connections.

If the comparison is SATA vs SAS, SNIA has more than one slide deck exploring the differences that give SAS better scalability beyond raw bandwidth. The difference between SAS 3Gbps and 6Gbps, and between SATA 3Gbps and 6Gbps, isn’t just one of bandwidth: there are other meaningful protocol changes, including command queueing, that for certain use cases can make a huge difference. So even when bandwidth saturation isn’t the bottleneck, there can be differences simply because of the different command set in use. And since you haven’t stated exactly which products you’re comparing, your question will be difficult to answer; there are a lot of possible differences between SATA and SAS.

But by continuing to ding eSATA specifically, you’re implying a meaningful difference between eSATA and SATA, which is untrue.

    We therefore have decided that SAS and FC are the only solutions for us.

Lots of people have arrived at the same conclusion, which is why it’s good to have SAS as an alternative to SATA. But the way you keep saying eSATA as if it’s different from SATA is sort of annoying.

    In the future Thunderbolt might be an answer, but even thunderbolt doesn’t reach the speed of a 4x8GB/s FC Raid.

And the specs bear this out; you don’t need to test it. But I think you mean 8Gbps (8GFC) rather than 8GB/s. 8GFC is ~800MB/s per direction, ~1.6GB/s full duplex.
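The arithmetic, for what it’s worth: 8GFC signals at 8.5 Gbaud with 8b/10b encoding, so the raw per-direction figure lands around 850MB/s (usable payload is commonly quoted slightly lower, ~800MB/s, after framing overhead).

```python
def effective_MBps(line_rate_gbaud):
    """8b/10b encoding: 10 line bits carry one data byte."""
    return line_rate_gbaud * 1e9 / 10 / 1e6  # MB/s, one direction

gfc8 = effective_MBps(8.5)   # 8GFC: ~850 MB/s raw per direction
four_lane = 4 * gfc8         # a 4-link 8GFC array: ~3.4 GB/s aggregate, one direction
```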

    In this thread someone is coping with the actual speed limits of esata, and we have to accept it.

No, they are not. The bandwidth of the connection is not being saturated, even after accounting for 8b/10b overhead, on reads let alone writes. The problem is elsewhere.
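A quick sanity check of that claim, using the figures reported in this thread:

```python
def sata_effective_MBps(gbps):
    """8b/10b encoding: divide the line rate by 10 to get bytes/s."""
    return gbps * 1e9 / 10 / 1e6

link = sata_effective_MBps(3.0)   # SATA Rev 2.0 -> 300 MB/s effective
reads, writes = 225, 195          # MB/s figures reported in this thread
assert reads < link and writes < link   # neither direction saturates the link
```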

  • Chris Murphy

    September 4, 2013 at 10:43 pm in reply to: G Speed Q ESATA Write Speeds in Raid5

I did not state a like or dislike of SATA or eSATA. I stated that its available bandwidth, in this thread’s context, is a non-factor. Given that the read speed is 225MB/s, below the SATA Rev 2.0 3Gbps bandwidth, it makes no sense to suggest FC alone would solve this problem. The bandwidth between array and host is not the bottleneck; it’s something else.

FC to SATA is apples to oranges: totally different available bandwidths by year; different protocols (SCSI vs ATA) and hence different command queuing and ECC; and FC implies SAS drives, so different mechanisms and possibly faster rotational rates. Almost nothing is the same between them. But it wouldn’t matter if the G-Speed Q had an FC port capable of 8GFC; the host-to-array bandwidth is not being saturated now as it is.

I think your experience is with unreliable SATA cables, drives, cards, and raids. And as for the 70%-full business, again, that has nothing to do with the connector being used. The connector knows nothing about how full a hard drive is; a file system does, and the raid layer’s RMW processing will be affected by this as well.

  • Chris Murphy

    September 4, 2013 at 6:39 pm in reply to: G Speed Q ESATA Write Speeds in Raid5

The issue isn’t eSATA, which is exactly the same thing as SATA except for the physical connector; protocol and bandwidth are identical. eSATA also neither correlates with nor causes slowdown as the array fills up. Array slowdown is a function of file system fragmentation, which also leads to heavier read-modify-write (RMW) penalties at the raid layer.

The bandwidth of the eSATA connection in this case is 300MB/s after accounting for 8b/10b encoding overhead. The drives can do large sequential writes at ~135MB/s per drive, and with a stripe width of 3 that makes for 405MB/s of aggregate drive throughput on full stripe writes: assuming the raid hardware in these units can do that, and the layout is optimized for the workload, either of which may not be true. But the performance hit isn’t due to eSATA or the drives, and probably isn’t due to the Fasta card, the PCIe-TB enclosure, or the bandwidth limits of Thunderbolt compared to a legit 4-lane PCIe slot.

I think it’s either chunks misaligned to the drives’ 4K physical sectors, causing RMW in the drive firmware, and/or a lack of full stripe writes causing excessive RMW of chunks at the raid layer, and the raid hardware isn’t powerful enough to optimize its way out of that given the workload.
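The alignment part is easy to reason about: any chunk or partition start that isn’t a multiple of the 4K physical sector straddles sector boundaries and forces firmware read-modify-write. A trivial check (the 63-sector start is the classic legacy-partitioning culprit):

```python
def is_aligned(offset_bytes, physical_sector=4096):
    """A chunk/partition start that isn't a multiple of the 4K
    physical sector forces drive-firmware read-modify-write."""
    return offset_bytes % physical_sector == 0

assert not is_aligned(63 * 512)   # legacy DOS offset: misaligned on 4K-sector drives
assert is_aligned(2048 * 512)     # modern 1MiB offset: aligned
```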

I just looked at the user manual for this box and it doesn’t show a configuration utility option for setting chunk size or layout. Rather than powering down the unit, pulling a drive, putting it on a separate SATA bus, and examining sector data to figure these things out, it’s easier to just ask G-Technology support what the chunk size is, and to confirm or deny that their firmware properly aligns chunks on 4K-physical-sector drives.

I’d also question the testing method that comes up with 95MB/s writes; it may not reflect the intended real-world usage. If the workload in the test and the real world aren’t the same, the test is useless: it may indicate better or worse performance than the real world.
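For comparison purposes, even a crude sequential-write timing loop at least makes explicit what it’s measuring. This is a sketch, not a substitute for testing the actual application workload; note the fsync, without which you’re benchmarking the page cache rather than the array.

```python
import os
import time

def seq_write_MBps(path, total_mb=256, block_kb=1024):
    """Time a large sequential write, flushed to disk, roughly like
    the streaming writes a video capture workload generates."""
    block = b"\0" * (block_kb * 1024)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # force data to the device, not just the page cache
    elapsed = time.perf_counter() - t0
    os.remove(path)
    return total_mb / elapsed
```

Vary `block_kb` and `total_mb` toward the real workload’s write sizes; small test files and small blocks can report numbers wildly different from what a capture session will actually see.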
