Forum Replies Created
-
Or not… ATI cards and OpenCL are already supported in CC for GPU accel purposes, yet CUDA is faster AOTBE (all other things being equal) – it’s closer to the hardware than OpenCL is. Kinda like assembler vs. C – properly written assembler code is at least as fast as, and often significantly faster than, code in any higher-level language.
-
Alex Gerulaitis
January 30, 2014 at 8:36 pm in reply to: Pulled RAID 5 drives have very slow read speeds
According to the spec sheet, these should do between 40 and 80MB/s, so the 10MB/s read speeds you’re reporting are odd.
How are these drives connected when you’re running benchmarks, and on what system (CPU, OS, memory)?
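If it helps to get comparable numbers – a minimal sequential-read sketch in Python. The path is a placeholder (point it at a large file on the drive under test), and note the OS cache can inflate results on repeat runs:

```python
import time

# Read 1 GiB sequentially in 8 MiB chunks and report throughput.
# PATH is a placeholder – use a large file on the drive under test.
PATH = "/path/to/large/testfile"
CHUNK = 8 * 1024 * 1024
TOTAL = 1024 * 1024 * 1024

with open(PATH, "rb", buffering=0) as f:
    start = time.time()
    done = 0
    while done < TOTAL:
        buf = f.read(CHUNK)
        if not buf:
            break
        done += len(buf)
    elapsed = time.time() - start

print(f"{done / elapsed / 1e6:.0f} MB/s over {done / 1e6:.0f} MB")
```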
— Alex Gerulaitis | Systems Engineer | DV411 – Los Angeles, CA
-
Alex Gerulaitis
January 30, 2014 at 8:08 pm in reply to: ATTO R680 + Sans Digital TR8X-B RAID degraded after 3 days
Second Chris’s idea to check the logs. Has the rebuild finished?
-
Also, it might help to post benchmarks with detailed specs of each configuration, in addition to “seat-of-the-pants” impressions (nothing wrong with those). There’s always a possibility of bottlenecks and configuration problems when the numbers don’t add up.
Check this out, too – Eric has been vocal on that thread, for good reason:
New Mac Pro 8-core / D700 not much faster than an iMac… in PPro CC.
-
[Tim Jones] “the numbers for non-Adobe stuff tell a dramatically different story”
The question is perhaps: are these numbers indicative of anything beyond isolated incidents and configurations? (Not to my knowledge – and Eric is 100% right that the nMP is limited performance-wise, and that the vast majority of apps can’t efficiently use what it sells: dual GPUs.)
The nMP has two things going for it, performance-wise: a ridiculously fast PCIe SSD and dual GPUs. Both can be added to an HP system too. A fast SSD is nice; dual GPUs may be something my apps can’t use efficiently (yet) – and thus the nMP becomes too expensive too fast.
-
[Vishal Pulikottil] “Anyone with any experience using Adobe products on Windows 8.1? How stable is it, compared to a Mac?”
Just as stable, if not more so. If the PC is stable, so will the Adobe apps be. Note though that Ryan, in the (very entertaining – and poignant) video Steve referred to, uses a top-shelf HP workstation. I am not sure how he got it for less money than an nMP – a fully decked-out one can cost more, while having more juice. The real value compared to the nMP is in single-CPU systems (HP Z420, DIY/custom Intel Haswell i7-4700 builds).
[Vishal Pulikottil] “Is there any good reason why we shouldn’t switch?”
If your workflow is all about ProRes, for instance, with deliverables in ProRes as well – that would be one reason to stay. If there’s not much in your workflow that is Mac-specific, then probably no.
— Alex Gerulaitis | Systems Engineer | DV411 – Los Angeles, CA
-
Individual drives’ link speeds (3G or 6G) don’t matter – drives don’t come close to saturating them.
Neither does the backplane: a 4-lane 3G SAS connection means 12Gb/s per direction (full duplex), i.e. roughly 1.2GB/s of usable bandwidth per connection – and he has two of them: 2.4GB/s.
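Spelled out as a quick sanity check (8b/10b line coding assumed, as used by 3G and 6G SAS):

```python
# Back-of-envelope SAS wide-port bandwidth. 3G and 6G SAS use 8b/10b
# line coding: 10 bits on the wire per data byte, so Gb/s / 10 = GB/s.
lanes = 4
per_lane_gbps = 3.0                  # 3G SAS, per lane, per direction
raw_gbps = lanes * per_lane_gbps     # 12 Gb/s per direction
usable_gbs = raw_gbps / 10           # ~1.2 GB/s usable per connection
connections = 2

print(f"{raw_gbps:.0f} Gb/s raw, ~{usable_gbs:.1f} GB/s usable per connection")
print(f"x{connections} connections: ~{connections * usable_gbs:.1f} GB/s aggregate")
```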
Testing this setup with a different (faster) controller will determine whether the bottleneck is in the controller or the host system.
-
Interesting stuff – thanks for sharing, John.
[John Heagy] “We initially set up the vdevs (LUNS in RAID speak) as 20 mirrored pairs but found a 6×6 Z2 (RAID6 in RAID) to be superior.”
Would it be possible to share the test results that showed Z2 to be superior?
(Side note: vdevs in “RAID-speak” are usually called RAID volumes, groups, etc. – I don’t think they’re LUNs.)
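For context – here’s how the two layouts compare on paper. A toy sketch, pure arithmetic, ignoring recordsize, ARC and everything else that matters in practice; the per-drive speed is a placeholder:

```python
# On-paper comparison of 20 mirrored pairs (40 drives) vs. 6x 6-disk
# RAIDZ2 vdevs (36 drives). Pure arithmetic – ignores caching, recordsize,
# and real-world ZFS behavior. Per-drive speed is a placeholder.
drive_mb_s = 100

layouts = {
    # Mirrors: half the raw capacity; writes hit both disks of a pair,
    # reads can be serviced by either side.
    "20x mirror (40 drives)": {"usable": 20, "write": 20, "read": 40},
    # RAIDZ2: 4 data disks per 6-disk vdev carry the payload.
    "6x 6-disk RAIDZ2 (36 drives)": {"usable": 24, "write": 24, "read": 24},
}

for name, l in layouts.items():
    print(f"{name}: usable {l['usable']} drives' worth, "
          f"~{l['write'] * drive_mb_s} MB/s write, "
          f"~{l['read'] * drive_mb_s} MB/s read (theoretical)")
```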
[John Heagy] “All parity calculation is done in software and, despite 35 streams of PRSQ reads and writes across 36 disks, the CPU was only 17% busy exporting NFS to 33 Mac 10.9 clients. “
Have you noticed a significant difference in CPU utilization between reads (no parity calculations) and writes (parity calculations), and which processes were using the most cycles? Is it possible most of those cycles were used servicing TCP/IP and file system requests rather than parity calcs?
Agreed that there isn’t much info on ZFS out there – yet perhaps it’s worth mentioning that ZFS does have significant performance issues and limitations (no dynamic restriping on online vdev expansion, i.e. no performance increase for existing data when adding disks to a zpool) – making it less suitable for applications that need to squeeze every ounce of performance out of the drives.
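To illustrate that last point with placeholder numbers – data already on the pool stays on the vdevs it was written to, so expansion helps new writes but not reads of existing data:

```python
# Toy illustration: ZFS does not rebalance existing data onto a newly
# added vdev, so reads of old data don't speed up. Placeholder numbers.
vdev_mb_s = 400                 # per-vdev streaming rate (placeholder)
old_vdevs, new_vdevs = 6, 1

print(f"reads of pre-expansion data: ~{old_vdevs * vdev_mb_s} MB/s (unchanged)")
print(f"new writes after expansion:  ~{(old_vdevs + new_vdevs) * vdev_mb_s} MB/s")
```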
-
[Bob Zelin] “if you merge both of them as a RAID 50 (or 60), shouldn’t the speed INCREASE when it appears as a single volume ?”
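On paper, yes – the stripe should roughly add the two groups’ throughputs. A back-of-envelope sketch with placeholder numbers:

```python
# RAID0 across two RAID5 groups should roughly add their sequential
# throughputs. Placeholder numbers for illustration.
per_group_mb_s = 800       # what each RAID5 group delivers on its own
overhead = 0.9             # assume ~10% striping/controller overhead

print(f"expected merged volume: ~{2 * per_group_mb_s * overhead:.0f} MB/s")
# If the merged volume is instead *slower* than a single group, the
# bottleneck is upstream of the disks: controller, PCIe link, or host.
```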
The easiest tests would be:
– running them on a different host system, to see if the problem is machine-specific
– borrowing an R680 or 1882x and performing similar tests
Until then, it’s shooting in the dark.
P.S. Understanding the problem was never the issue – getting the relevant information was.
We did find out that 800MB/s is the R380’s performance ceiling, only to later learn that a test had been done on two of them, with similar results – although we do not know whether both cards were running in PCIe 8x mode.
We still don’t know how that RAID0 test was done – a single stripe set done entirely in the R380, or two stripe sets soft-striped in the OS (with one or two R380s). We still don’t know the specs of the host machine. We don’t know what other tests the OP did.
I’ve asked whether the OP contacted ATTO about it, and got no response.
We don’t know if the system hits memory or CPU utilization ceilings during speed testing – possibly pointing to configuration or performance problems.
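That check is simple enough to run – a minimal sketch using the third-party psutil package (my assumption; any system monitor would do), running while the speed test is going:

```python
import psutil  # third-party: pip install psutil

# Sample CPU and memory once a second for 30s while the speed test runs;
# a pegged core or exhausted RAM points at the host, not the RAID.
for _ in range(30):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    mem = psutil.virtual_memory()
    print(f"busiest core: {max(per_core):.0f}%  memory used: {mem.percent:.0f}%")
```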
The root cause of the slowdown is likely the OS – but we won’t know until we have all the relevant info about the system, and until proper tests are done.
-
[Blase Theodore] “I have 2 separate R380 cards”
Not what you said initially:
[Blase Theodore] “each xtore feeds a single SAS connection to the R380
OSX 10.8.5
RAID SETUP:
12 drives > xtore_unit1 > SAS1 > RAID5 > “RaidGroup1”
12 drives > xtore_unit2 > SAS2 > RAID5 > “RaidGroup2””
The above describes a single R380.
You also mentioned that RAID0 exhibits the same behavior, which points to the card as the culprit.
If you actually need help figuring out why striping two RAID5 groups across two R380 cards causes slower performance, perhaps you’d want to list your configuration more precisely?