Neil Smith
Forum Replies Created
Simon,
I think you'll find that FCP X is only architected to work with shared storage in an Xsan FC SAN environment ... it doesn't use NFS or even SMB2 for shared storage.
Worth double checking with Apple Support but that might be your issue.
Neil
Neil Smith
CEO
LumaForge LLC
high performance workflow
323-850-3550
http://www.lumaforge.com
Hi Bob,
I too was a little surprised at how fast the 8050T2 ran under the AJA speed test, but let me tell you exactly what I did this morning ... this is precisely what happened; no fudging or tweaking at any time.
1) I unpacked the 8050T2 out of the box that arrived yesterday from Taiwan.
2) took out the 8 x drive sleds.
3) went across the room and took out the 8 x 3TB sleds that I already had in an Areca Thunderbolt 1 8050 – these are 3 TB Seagate desktop drives I use for demo purposes.
4) inserted the 8 x 3TB sleds into the 8050T2 ... the 24TB volume already had 13 TBs of footage on it, so these were not empty spindles.
5) plugged in the power supply cable and a copper Thunderbolt cable into the 8050T2.
6) the existing RAID 0 volume came up straight away on the nMP.
7) ran the BMD and AJA disk whack test.
Didn't tweak anything ... didn't adjust anything ... 1300 MB/s on the AJA System Test using 4K files and a 16GB test file, first time ... actually ran it a second time just to make sure I wasn't doing something wrong.
What I do find interesting is the difference in measured I/O speeds between the Blackmagic speed test and the AJA disk whack … I typically use both and they always give different results … any idea what they’re actually measuring and why the difference?
I don’t treat either test as an absolute value more as a relative comparison but would like to know what the underlying difference is.
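For what it's worth, a crude third data point is a plain `dd` run against the volume. It won't replicate whatever access patterns AJA and BMD use internally (frame-sized I/O, different caching behavior), so treat it as a sanity check only; the volume path here is a placeholder that defaults to /tmp so the sketch runs anywhere:

```shell
# Crude sequential-throughput check with dd -- a sanity test only, not a
# substitute for the AJA or BMD tools. VOL is a placeholder: point it at
# the RAID volume under test (defaults to /tmp so this runs anywhere).
VOL="${VOL:-/tmp}"
FILE="$VOL/ddtest.bin"

# Write 128 MB of zeros in 1 MB blocks; dd reports elapsed time and rate.
# (bs=1048576 rather than bs=1m so the same line works on BSD and GNU dd.)
dd if=/dev/zero of="$FILE" bs=1048576 count=128

# Read it back. Note OS X may serve part of this from the buffer cache,
# which is one reason simple tests disagree with the AJA/BMD numbers.
dd if="$FILE" of=/dev/null bs=1048576

rm -f "$FILE"
```

Both commercial tools write much larger files than this precisely to defeat caching, which is likely part of why their numbers differ from each other too.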
And yes, NAB this year is going to be all about Thunderbolt 2 and 4K ... we'll be in the South Hall near you guys so maybe we should get together and organize a Creative Cow Tbolt2 RAVE or something 🙂
Cheers,
Neil

Neil Smith
CEO
LumaForge LLC
high performance workflow
323-850-3550
http://www.lumaforge.com
Neil Smith
January 19, 2014 at 6:47 am in reply to: Two new Mac Pros, two Thunderbolt 2 RAIDs, one Thunderbolt Bridge

Well spotted! ... and yes, it's not attached to the nMPs but to a PABLO RIO PC ... the NEO panel just happens to be in the middle of the desk in front of the HD SDI monitor I'm using for color grading. I have the two new Mac Pros on either side of it due to space limitations.
One of the things I'm testing in the Workflow Integration Lab is the different options for connecting the PC world to the Mac Thunderbolt world using devices like the MAGMA Tbolt 2 PCIe expansion chassis with 10 GbE Myricom cards.
One way to share files and data between the PC world and the Mac world is to use Xsan or metaSAN, which work very well but require an FC switch in the topology ... 10GbE is an efficient means of file transfer between Macs and PCs, but you end up with duplicate data on both the NTFS and HFS+ sides.
What I’m ideally looking for is a way to color correct in the PABLO RIO PC world and then render out directly to ProRes Quicktimes in the Mac world without having to copy the DPX files over to the Apple RAIDs.
All suggestions welcome.
Neil
Neil Smith
CEO
LumaForge LLC
fast data
323-850-3550
http://www.lumaforge.com
Neil Smith
January 19, 2014 at 6:12 am in reply to: Two new Mac Pros, two Thunderbolt 2 RAIDs, one Thunderbolt Bridge

Eli,
If I understand your question correctly, I think you're suggesting something that is not a good idea ... i.e., connecting two Macs at the same time to one Thunderbolt 2 RAID enclosure. The issue you're facing isn't so much a Tbolt connectivity problem; it's more to do with maintaining the integrity of the Directory Structure on the RAID's file system.
I'm assuming you'd have the RAID formatted as HFS+, which is designed to have only one Mac reading and writing to it at a time ... if you connect two Macs to the same RAID, which one is the master in charge of maintaining the Directory Tree Structure? I haven't tried it personally, but I suspect you'll find that the Directory gets corrupted pretty quickly and neither machine will be able to Read or Write to the drive. Data is not written directly to the physical drive but through the logical layer that manages the directory of where all the bits and bytes go ... in this case HFS+, which is not designed to have multiple hosts writing to the same physical drive at the same time.
The safer option is to attach the RAID to the assistant’s MBP and then for you to use IP over Thunderbolt bridging to connect directly to his MBP and then mount the RAID on your desktop … that way you can work on the files on the RAID and his MBP will be maintaining the Directory Structure.
As long as you always unmount the RAID from his machine first, you could then also directly attach the RAID to your own machine safely.
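From your end, that attach-to-his-MBP, mount-over-the-bridge setup might look roughly like this, assuming File Sharing (SMB) is turned on for the RAID volume on his machine. The address and share name below are placeholders, not real values:

```shell
# Sketch of mounting the assistant's RAID over the Thunderbolt bridge.
# PEER and SHARE are placeholders -- check `ifconfig bridge0` on his MBP
# for the real self-assigned address, and use the shared volume's name.
PEER="169.254.10.2"
SHARE="Raid"
MNT="/tmp/raid-over-tbolt"

mkdir -p "$MNT"

# Guarded so the sketch degrades gracefully on a system without the
# OS X SMB mount tool; mount_smbfs is the command-line equivalent of
# Cmd-K "Connect to Server" in Finder.
if command -v mount_smbfs >/dev/null 2>&1; then
    mount_smbfs "//guest@$PEER/$SHARE" "$MNT"
else
    echo "mount_smbfs not available on this system"
fi
```

Either way, his MBP remains the single host maintaining the Directory Structure, which is the point.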
Hope that makes sense.
Neil
Neil Smith
CEO
LumaForge LLC
fast data
323-850-3550
http://www.lumaforge.com
Neil Smith
January 10, 2014 at 7:06 pm in reply to: Two new Mac Pros, two Thunderbolt 2 RAIDs, one Thunderbolt Bridge

Testing was done on mainly empty RAIDs but then also with about 6 TBs of 4K footage ... when I get back from CES I'll fill them up to 80% and see what happens.
ARECA will be delivering a 16 bay Tbolt 2 enclosure next week, so I'll put that through its paces ... I have a feeling we'll see sustained throughput go up to around 1500 MB/s read and write; the more spinning spindles the better.
We’ll publish prices for the ARECA 8 bay Tbolt2 RAID next week and they should start shipping units around the end of the month.
We’ll be demoing the Tbolt2 RAIDs in action with the nMPs at Larry Jordan’s FCP X 10.1 Training Day in Burbank on Tuesday Jan 14th and at Michael Horton’s Jan LACPUG meeting on 22nd:
https://www.larryjordan.biz/powerup-4k-in-fcpx/
https://www.lafcpug.org/user_schedule.html
Saw something of interest at the Intel booth yesterday at CES ... LaCie had a 1TB flash drive attached to a Tbolt2 PC and they were getting around 1000 MB/s read and write on the BMD speed test ... nice small compact unit that will make a nifty shuttle drive from on-set back to post ... plug it into a nMP and Bob's your uncle ... transfer a terabyte of data in under 20 minutes ... will test as soon as they ship me one.
CES is all about 4K/UHDTV this year … UHDTV panels all over the place … all they need now is some engaging 4K content and we should see adoption rates start to ramp up.
One really cool thing I did see at the DisplayPort booth was 3 x 4K TVs attached to a flight simulator! ... they had some young dude flying Spitfires across the Kent countryside ... with three 50 inch UHDTV panels and a 60 Hz refresh rate, man oh man, was it immersive ... I know what Santa needs to bring me for next Christmas 🙂
Neil
Neil Smith
CEO
LumaForge LLC
fast data
323-850-3550
http://www.lumaforge.com
Neil Smith
January 7, 2014 at 3:59 am in reply to: Two new Mac Pros, two Thunderbolt 2 RAIDs, one Thunderbolt Bridge

Valid point, Jack ... but I already knew about the bus config and kept the Thunderbolt Bridge cable on Bus 2, the Tbolt2 RAIDs on Bus 1, and the monitors on Bus 0.
I even tried with the TB2 Bridge and the TB2 DAS on the same bus to see if the nMP 3 x bus config was acting as some kind of internal switch which was adding “choppiness” when transferring packets from one bus to the other but that had no impact on overall consistency.
Also, has anyone managed to get Compressor 4.1 to work over Tbolt2 bridging yet? … that would be a sweet way to set up a nMP render farm if it works … but just can’t seem to get it working between 2 x nMPs and a TB2 MBP.
Very impressed with the performance of the ARECA TB2 8 x bay RAID … consistently getting over 1000 MB/s WRITE and 1100 MB/s READ speeds in RAID 5 … they’re going to get me their 16 x bay Tbolt2 enclosure next week to test … think we should see speeds up around 1500 MB/s if Areca engineers manage to work their magic with their drivers.
Am at CES in Vegas now and it definitely looks like 4K is the next BIG THING (well, according to the TV vendors anyway) ... Apple is well positioned to capitalize on 4K workflow with the nMP and FCP X 10.1 and Logic Pro X, plus Resolve 10.1, which are all optimized for the new 64-bit architecture and dual GPUs ... really hope that Apple puts some engineering resources into fixing the data flow over TB2 bridging.
Neil
Neil Smith
CEO
LumaForge LLC
fast data
323-850-3550
http://www.lumaforge.com
Neil Smith
January 6, 2014 at 11:40 am in reply to: Two new Mac Pros, two Thunderbolt 2 RAIDs, one Thunderbolt Bridge

Just got a reply from Iljitsch ... it would appear that he's having some trouble signing up to the Cow.
Below is a snippet from his email reply ... as soon as he gets signed up he'll join the thread ... agree with his comment in the linked blog post about the need for a 10GbE port on the back of the nMP ... was thinking the same thing yesterday while plugging in a gig E cable for an Xsan config.
I’m waiting for a Tbolt2 to PCIe expansion box to come in next week … when it does, I’ll put a 10GbE Myricom card in it and connect to a 5,1 Mac Pro and Win 7 PC and see how that goes.
From Iljitsch's email:
” …. I tried to sign up for an account but I didn’t get an email.
What I wanted to add to the discussion is seeing if the TSO can be turned off (using ifconfig?) so many small packets are used rather than fewer big ones. Perhaps this will help. I don’t think CPU utilization is an issue here as even on those < 3 GHz dual core machines I was able to get impressive speeds some of the time.
And I’m interested to hear what you guys think about this blog post:
https://www.muada.com/2014/01-03-the-mac-pro-needs-10-gigabit-ethernet.html …”
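For anyone who wants to try his TSO suggestion, this is roughly what it looks like on Mavericks. The sysctl name is my assumption about what OS X exposes for TCP segmentation offload; check `sysctl -a | grep -i tso` on your own machine first:

```shell
# Query the current TSO state; on a kernel without this sysctl the query
# fails, so fall back to a note instead of erroring out.
TSO_STATE=$(sysctl -n net.inet.tcp.tso 2>/dev/null || echo "unavailable")
echo "net.inet.tcp.tso = $TSO_STATE"

# To actually toggle it (needs root), run the following, retest the
# Thunderbolt bridge transfer, then restore the default:
#   sudo sysctl -w net.inet.tcp.tso=0    # force small on-the-wire segments
#   ... rerun the file copy / speed test ...
#   sudo sysctl -w net.inet.tcp.tso=1    # back to offloading
```

If the choppiness really is the un-segmented large TCP segments Iljitsch describes, throughput with TSO off should be slower but steadier.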
Neil
Neil Smith
CEO
LumaForge LLC
faster storage faster finishing
323-850-3550
http://www.lumaforge.com
Neil Smith
January 6, 2014 at 9:46 am in reply to: Two new Mac Pros, two Thunderbolt 2 RAIDs, one Thunderbolt Bridge

Apologies guys, just seen that Bob Z referenced the same Ars Technica article in an earlier thread below ... should have read that before I posted.
Have sent an email to Iljitsch asking him if he’d care to join in our discussion.
Neil
Neil Smith
CEO
LumaForge LLC
faster storage faster finishing
323-850-3550
http://www.lumaforge.com
Neil Smith
January 6, 2014 at 9:20 am in reply to: Two new Mac Pros, two Thunderbolt 2 RAIDs, one Thunderbolt Bridge

I think you're onto something, Chris, and we're getting closer to an explanation of the erratic behavior of 'IP over Thunderbolt' ... just found an insightful article by Iljitsch van Beijnum on Ars Technica, written back in October 2013, where he highlights the "choppy" Tbolt Bridge throughput issue:
” …. The Thunderbolt network interface also indicates that it supports TCP segmentation offloading for both IPv4 and IPv6 (TSO4 and TSO6), but presumably, there’s no actual network hardware in the Thunderbolt interface that could perform this function. The idea behind TSO is that the network software creates one large packet or segment, and the networking hardware splits that packet into pieces that conform to the MTU limit. This allows gigabit-scale networks to operate without using excessive amounts of CPU time. What seems to be happening here is that the system maintains an outward appearance of using the standard MTU size so nothing unexpected happens, but then simply transmits the large TCP segment over Thunderbolt without bothering with the promised segmentation. ….”
Here’s the link to the full article – worth reading for the detailed analysis that Iljitsch provides on the inconsistent throughput of IP over Tbolt:
https://arstechnica.com/apple/2013/10/os-x-10-9-brings-fast-but-choppy-thunderbolt-networking/
Presumably the same issues that Iljitsch identified with Tbolt 1 bridge networking apply equally, if not more so, to Tbolt 2 bridging.
I’ll repeat the testing between the two nMPs and see what I can measure.
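One way to separate the network path from disk I/O in that retest is iperf (installable via Homebrew or MacPorts; iperf 2 syntax shown). The bridge address below is a placeholder; `ifconfig bridge0` shows the real self-assigned one:

```shell
# Measure raw IP-over-Thunderbolt throughput with no disks involved
# (iperf 2 syntax; install via Homebrew or MacPorts if it's missing).
HAVE_IPERF=$(command -v iperf || echo "none")
echo "iperf binary: $HAVE_IPERF"

# On the first nMP, start a listener:
#   iperf -s
# On the second nMP, run the client against the first one's bridge0
# address (169.254.42.1 below is a placeholder) for a 30-second report
# in 2-second intervals:
#   iperf -c 169.254.42.1 -t 30 -i 2
```

If iperf over the bridge shows the same sawtooth pattern as the file copies, that points at the IP stack rather than the RAIDs.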
The other thing I was testing this evening was to try and set up a Compressor 4.1 distributed render farm using Thunderbolt bridging between a Tbolt 2 MBP and the two nMPs … didn’t have much success but if we could get it to work it would be a useful way to edit offline on a MBP with proxies and then connect to a nMP and utilize all the available CPU cores for online conform and deliverables.
Anyone else tried Compressor 4.1 over Tbolt bridging yet?
Neil
Neil Smith
CEO
LumaForge LLC
faster storage faster finishing
323-850-3550
http://www.lumaforge.com
Neil Smith
January 5, 2014 at 3:10 pm in reply to: Two new Mac Pros, two Thunderbolt 2 RAIDs, one Thunderbolt Bridge

Yes, agree on the importance of separating Tbolt 2 networking from Tbolt 2 drive performance to get a better understanding of where the real bottleneck is ... like you, I suspect that the underlying issue is in how Apple implemented the IP stack in Mavericks, and maybe the internal Tbolt 2 bus switch in the nMPs.
And yes, it'd be great to have you come over and put iPerf through its paces and see what we find ... I'm off to CES tomorrow for the week (much joy), so maybe the week after, when I'm back, you can come over to The Lot and we'll roll up our sleeves and see what we can suss out.
I'll also have a 16 x bay Tbolt 2 RAID to test by then, which should saturate the Tbolt2 bandwidth even more ... it was still good to see that even on 8 bay arrays I was getting peak I/O of over 800 MB/s read/write using the BMD speed test between two 6 core nMPs ... which means that if we can find a way to smooth out the IP traffic, then IP over Thunderbolt bridging will be a viable way to connect a small group of Tbolt2 editors together.
For basic file transfer between the two nMPs, Tbolt2 bridging works very well; I transferred half a terabyte of 4K files from one RAID to the other in under ten minutes ... but for editorial work, where we need consistent real-time playback off the timeline, some optimization still needs to be done.
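As a back-of-envelope check, that transfer pencils out to a sustained rate right in line with the BMD readings (assuming a decimal half-terabyte):

```shell
# Back-of-envelope: 500 GB moved in ten minutes, in decimal MB.
DATA_MB=$((500 * 1000))     # half a terabyte = 500,000 MB
ELAPSED_S=600               # ten minutes
RATE=$((DATA_MB / ELAPSED_S))
echo "${RATE} MB/s sustained"   # about 833 MB/s -- consistent with the
                                # ~800 MB/s peaks seen in the BMD test
```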
See you in a week’s time, assuming I survive CES … it’s going to be interesting to see where all this 4K content we’re producing on these spiffing nMPs is going to end up … if 4K UHDTV delivery into the home takes off, then the consumer market for 4K content will be more significant than the DCI cinema 4K opportunity.
One thing's for sure: if we do move to 4K workflows, then the demand for storage and bandwidth is only going to grow rapidly.
Cheers,
Neil

Neil Smith
CEO
LumaForge LLC
faster storage faster finishing
323-850-3550
http://www.lumaforge.com

