John Heagy
Forum Replies Created
-
[Erik Lindahl] “two 6-core systems would possibly fair better than this and land in the same ball-park cost”
The thought did cross my mind. The 12-core is $3,000 over the 6-core, but doubling up on machines also means doubling up on Episode Engine licenses, which retail for $3,400. Throw in the fact that the nMP is half as rack-space efficient as the Xserve, plus the cost of additional fiber infrastructure, and more bang per CPU wins.
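For what it’s worth, the license math alone closes most of the gap. A quick sketch using the figures above (base machine, rack, and fiber costs are left out, and they only favor the single bigger box further):

```python
# Quick cost sketch using the figures quoted above.
twelve_core_premium = 3000   # 12-core over the 6-core
episode_license = 3400       # Episode Engine retail, per node

# A second machine needs a second license, so the second box costs
# more in licensing alone than the whole CPU upgrade does.
print(f"One 12-core box: +${twelve_core_premium} over the 6-core")
print(f"Two boxes:       +${episode_license} in licensing alone")
```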
With something like this… https://www.mk1manufacturing.com/store/cart.php
Replacing our 20-Xserve Episode cluster will take us from under half a rack to nearly a full rack. Thankfully it will also be about 60% faster.
It may be possible to use two of these mounts, one facing front and another facing back, but that would require airflow up the center and out the top of the rack.
John
-
Ooops…
*6 core should be 8 core. We also have a 6-core, but I did not test it.
Corrected below…
I tested both a 12-core and an 8-core with Episode, an app that loves cores.
The *8-core converted a 60-min ProRes 1080i file to 720p60 H.264 in 75 min.
The 12-core did the same job in 57 min, about 25% faster.
The last 2.93GHz 8-core Xserve did it in 94 min.
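For the record, here’s the arithmetic behind those timings as a quick sanity check:

```python
# Sanity check on the transcode timings above (minutes of wall time).
times = {"8-core nMP": 75, "12-core nMP": 57, "2.93GHz 8-core Xserve": 94}

baseline = times["8-core nMP"]
for name, minutes in times.items():
    print(f"{name}: {minutes} min ({baseline / minutes:.2f}x the 8-core nMP)")

# 75 -> 57 min is a 24% cut in wall time (the "about 25% faster" above);
# 94 -> 57 min puts the 12-core roughly 65% faster than the old Xserve.
```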
John
-
I tested both a 12-core and an 8-core with Episode, an app that loves cores.
The 6-core converted a 60-min ProRes 1080i file to 720p60 H.264 in 75 min.
The 12-core did the same job in 57 min, about 25% faster.
The last 2.93GHz 8-core Xserve did it in 94 min.
John
-
John Heagy
February 3, 2014 at 10:53 pm in reply to: NEW MAC PRO: Fresh install on Mavericks, won’t upgrade to 7.0.3
The update worked fine for me and I’m running 7.0.3 on an 8-core D500 machine. I did need to open the unknown software publisher setting in Security before it would install.
-
Hi Vadim,
We had compression off from day one, so we never tested with it. The video files we’re recording are already compressed as ProRes, and we’ve found that compressing already-compressed media yields no space savings.
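A minimal illustration of why that is, using random bytes as a stand-in for an already-compressed payload:

```python
# Compressed media looks like random noise to a second codec, so a
# second compression pass buys nothing. os.urandom stands in for a
# ProRes payload here; zlib stands in for the storage compression.
import os
import zlib

text_like = b"the quick brown fox jumps over the lazy dog " * 1000
compressed_like = os.urandom(len(text_like))

for label, payload in [("text-like", text_like),
                       ("already-compressed-like", compressed_like)]:
    ratio = len(zlib.compress(payload)) / len(payload)
    print(f"{label}: {ratio:.1%} of original size")
# Typical result: the text shrinks to a few percent; the random
# payload stays at ~100% (often a hair larger).
```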
John
-
[bryson jones] “The db does not store “online” as a status”
I see that now. I saw that an XML export from the CatDV app did include an online status (“Online movie >>ONLINE>>”), so I assumed the same field would be present in the db. Any reason why it can’t be? Does the Archive add-on add anything to manage “online” status? Knowing whether a file is available would seem to be a fairly basic need. I suppose issuing a command and seeing if it fails would indicate offline status, but that’s a messy way of doing things.
John
-
John Heagy
January 28, 2014 at 10:32 pm in reply to: So networked external storage is NOT cool? Only SANs?
NFS will work if your server supports it. AFP or SMB will not.
-
Hi Bryson,
Thanks for the quick response.
[bryson jones] “”Online” is a very subjective thing… right? ;)”
In this case, no… Is the file where the catalog thinks it is? Yes or no. We don’t have any alternate Hi_Res directories listed.
As far as scanning the entire filesystem… I would only do it by catalog, or group of catalogs, so presumably Worker would only look for files in the catalogs, not the entire filesystem(s).
I wouldn’t need to know online status for a particular file but really for the entire catalog, so scripts would be of little use unless there’s a way of stepping through an entire catalog prior to an action being triggered. The action for this workflow is to send an XML of the catalog to our MAM.
What I’m really shooting for is to have our MAM query the CatDV database directly hence my desire to have the DB indicate “online” status accurately.
Back to my opening question… Is there no command to check the online/offline status of each file in a catalog short of opening it? That would be a good start.
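In the meantime, here’s a rough sketch of the polling approach, assuming the clip paths can be pulled from a catalog XML export (the MEDIAPATH element name is a guess for illustration, not the real CatDV schema):

```python
# Sketch: walk a catalog XML export and flag clips whose media file
# is missing on disk. "MEDIAPATH" is a placeholder element name --
# check an actual export for the real field.
import os
import xml.etree.ElementTree as ET

def offline_clips(xml_path, path_element="MEDIAPATH"):
    tree = ET.parse(xml_path)
    return [elem.text for elem in tree.iter(path_element)
            if elem.text and not os.path.exists(elem.text)]

# e.g. run before the send-to-MAM action fires:
# if offline_clips("catalog_export.xml"):
#     print("catalog has offline media; holding the XML push")
```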
Thanks
John
-
[alex gardiner] “Have you started to drill down into how the cache is performing?”
Other than seeing only 27GB free of 256GB, not really. It does like RAM!
[alex gardiner] “Have you experimented with L2ARC or ZIL?”
We have both. The ZIL made a huge difference early on; the system was nearly unusable without it. The L2ARC did not make any difference except when I moved to playback-only testing. Since I was using the same media over and over, it started caching the playback data. The drives went from 300 reads/sec to less than 50, but the L2ARC went to 600! During normal edit activities I don’t see that making much difference. It would help during an After Effects session where one is using the same files all day; I’d imagine one’s entire project would eventually move to the cache. Of course there’s little need for source-file performance while using After Effects, but every little bit helps I suppose.
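For a rough sense of what those read rates imply (back-of-the-envelope, not appliance analytics):

```python
# Rough L2ARC hit-ratio estimate from the read rates quoted above.
l2arc_reads = 600   # reads/sec served by the L2ARC during playback
disk_reads = 50     # "less than 50" reads/sec still hitting the drives

hit_ratio = l2arc_reads / (l2arc_reads + disk_reads)
print(f"~{hit_ratio:.0%} of playback reads served from cache")
# ~92%, which squares with the drives dropping from 300 reads/sec
# to a trickle.
```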
-
[Alex Gerulaitis] “Would it be possible to share the test results that showed Z2 to be superior?”
The Z2 6×6 was not only faster with fewer disks (36 vs. 40), it had a 63% usable yield versus the mirrors’ 50%.
The zpool of 20 mirrored pairs did 14 ingests and 14 playbacks, all PRSQ, via B4M’s Fork running on 10.9 Xserves. The 40 streams were really pushing it, as the playback buffers were very active trying to stay ahead of the storage.
The 36-disk 6×6 Z2 zpool did 16 ingests and 19 playbacks with calm playback buffers, indicating it still had some headroom. I normally like to keep piling on until it drops, but at that time I had exhausted all available resources.
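Putting the two layouts side by side, the per-disk math looks like this (stream counts are the ingest + playback totals from the tests above; the yield figures are the standard parity fractions, so the 63% quoted presumably includes some extra overhead):

```python
# Per-disk comparison of the two zpool layouts tested above.
configs = {
    # name: (disks, ingests, playbacks, usable fraction of raw space)
    "20 mirrored pairs": (40, 14, 14, 1 / 2),  # each pair: 1 of 2 usable
    "6x6 RAID-Z2":       (36, 16, 19, 4 / 6),  # per vdev: 4 data + 2 parity
}

for name, (disks, ingests, playbacks, usable) in configs.items():
    streams = ingests + playbacks
    print(f"{name}: {streams} streams on {disks} disks = "
          f"{streams / disks:.2f}/disk, raw yield {usable:.0%}")
```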
[Alex Gerulaitis] “(Side note: vdevs in “RAID-speak” are usually called RAID volumes, groups, etc. – don’t think they’re LUNs.)”
Below is what I’m mainly basing my terminology on. Xsan/StorNext also refers to them as stripe groups, which are basically analogous to vdevs, although Xsan also supports grouping LUNs into Storage Pools, and then finally Volumes (Filesystems, in StorNext speak).
[Alex Gerulaitis] “Have you noticed significant difference in CPU utilization between reads (no parity calculations) and writes”
Less than 10%
[Alex Gerulaitis] “ZFS does have significant performance issues and limitations (no dynamic restriping in online vdev expansion – i.e. no performance increase when adding disks to existing zpools)”
No vdev expansion, but one can easily add matching vdevs, which I believe is more valuable performance-wise, even though it doesn’t increase disk yield the way vdev expansion could.
I’m skeptical of comparing the many open-source ZFS-based systems to 100% kosher Oracle ZFS running on Oracle hardware. The system we have is pure, uncut Oracle ZFS!
