Forum Replies Created

Page 3 of 3
  • We can fix everything in post, Alex!

    I'll try to do some more detailed testing, but I'm off to CES on Monday for the week and up to my neck in transcoding footage for UHDTVs … apparently 4K is the next "big thing".

    Just did a 'quick and dirty' test running FCP X 10.1 on both nMPs, with each one accessing the Tbolt 2 RAID on the other nMP, playing 4K ProRes4444 files across the Thunderbridge link … either one would play well, or the other, but not both at the same time … strange, methinks.

    1) 2 x nMPs playing 4K ProRes4444 files over Thunderbridge link with 4K monitoring in background:

    2) Close-up of single Thunderbolt cable connecting 2 x nMPs:

    So I disconnected the two nMPs and put a Thunderbolt 2 MBP laptop in the middle and mounted both RAIDs … ran the BMD system test and got reasonable speeds to each RAID separately:

    3) MBP Thunderbolt 2 laptop connected to 2 x nMPs:

    N.B. the numbers shown in the BMD tests varied greatly when testing Thunderbridge connectivity … please don't take them as gospel … besides which, ARECA is busy tweaking their drivers based on real-life testing … but I still think the real issue is how Apple implemented the SMB IP stack in Mavericks.

    4) MBP connected to ARECA Tbolt2 RAID through Thunderbridge link to nMP:

    5) MBP connected to PROMISE Tbolt2 RAID through ThunderLink bridge to nMP:

    6) FCP X 10.1 playing 4K ProRes4444 files from both Tbolt2 RAIDs with dropped frames:

    So, it looks like, though there is plenty of bandwidth available in Tbolt 2 (theoretically 20 Gb/s), we're not going to be able to utilize IP over Tbolt2 until some smart cookie comes up with some nifty software to optimize sustained throughput and reduce packet contention … over to Quantum, Alex.
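    For what it's worth, here's a rough back-of-envelope sketch of why IP over Tbolt2 falls so far short of the 20 Gb/s link rate. The overhead fractions are purely illustrative assumptions on my part, not measured figures for Apple's stack:

```python
# Back-of-envelope: Thunderbolt 2 link rate vs. usable IP throughput.
# The overhead fractions below are illustrative assumptions only, not
# measured values for Apple's SMB/IP-over-Thunderbolt stack.

link_rate_gbps = 20.0                 # Thunderbolt 2 aggregate link rate
raw_mbps = link_rate_gbps * 1000 / 8  # Gb/s -> MB/s (decimal units)

# Assumed losses: Thunderbolt/PCIe encapsulation, TCP/IP headers, and
# SMB protocol chatter, each as a fraction of the remaining bandwidth.
assumed_overheads = [0.20, 0.06, 0.25]

usable = raw_mbps
for frac in assumed_overheads:
    usable *= 1 - frac

print(f"raw link: {raw_mbps:.0f} MB/s, rough usable estimate: {usable:.0f} MB/s")
```

    Even with generous guesses you land well under the raw number, which is roughly the ballpark we're seeing in practice.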

    Neil Smith
    CEO
    LumaForge LLC
    faster storage faster finishing
    323-850-3550
    http://www.lumaforge.com

  • Thanks for the feedback, Bob … for straightforward data transfer between two new Mac Pros, Thunderbolt 2 Bridging is very useful … I transferred half a terabyte of files between the ARECA and P2R8 in under ten minutes, which was a lot quicker than having to copy to a transfer drive and then copy again.

    But for a couple of editors trying to work constantly off HD or 4K files in realtime it might be a pain even though we were getting 500 MB/s to 600 MB/s in both directions at the same time at peak moments – even had a moment of 800 MB/s in both directions at one point!

    Am going to try with FCP X 10.1 running on both nMPs and see if it's usable … if it is, you could at least have an editor and an assistant working on the same show with different Libraries.

    What's the root cause of the IP stack being so erratic and inconsistent? … maybe the SMB IP stack needs some collision-detection code or a 'Jumbo Frames' option written for it. The other thing I tested was putting the cables on the same and different Tbolt buses, but that didn't seem to make much difference … as you know there are six Tbolt 2 ports but only three buses … was wondering if the internal Tbolt2 switch was adding to the inconsistent data flow?
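    A quick sketch of the arithmetic behind the 'Jumbo Frames' idea, using standard TCP/IPv4 header sizes (the Thunderbolt bridge's own framing cost is ignored here):

```python
# Why a 'Jumbo Frames' option could help: the fixed TCP/IPv4 header cost
# is paid per packet, so a larger MTU means a slightly higher payload
# fraction and, more importantly, far fewer packets per second to process.
# The Thunderbolt bridge's own framing overhead is ignored here.

TCP_IP_HEADERS = 40  # 20-byte IPv4 header + 20-byte TCP header, no options

def payload_efficiency(mtu: int) -> float:
    """Fraction of each packet that carries actual file data."""
    return (mtu - TCP_IP_HEADERS) / mtu

for mtu in (1500, 9000):
    pkts_per_gb = 1_000_000_000 / (mtu - TCP_IP_HEADERS)
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload, "
          f"{pkts_per_gb:,.0f} packets per GB")
```

    The header savings alone are only a few percent; the bigger win from jumbo frames is usually the six-fold drop in packets per second the stack has to handle.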

    Will start testing the new MAGMA Tbolt2-to-PCIe expansion chassis next week with 10 GbE, 8 Gb/s FC and 6 Gb/s SAS cards to see what kind of throughput we get … at the moment we're capped by the 800 MB/s limit of Tbolt1, but Tbolt2 should take that up to over 1200 MB/s.

    Interesting times for sure in the Apple world … I love the quietness of the nMPs … even with two of them on the desk right in front of you, you can hardly hear them purr … but I have to say, when you have six Thunderbolt cables plugged into the I/O ports it really is fiddly to take them in and out … and with the slightest bit of tension they pop out … lost a couple of renders yesterday when the RAIDs dismounted unexpectedly.

    Neil

    Neil Smith
    CEO
    LumaForge LLC
    faster storage faster finishing
    323-850-3550
    http://www.lumaforge.com

  • Been running similar tests all week with a six-core nMP directly attached to a Promise Pegasus2 R8 24TB RAID.

    Also had an i7 MBP with a Thunderbolt 2 connection to the nMP through TCP/IP over Thunderbolt Bridge, mounting the Pegasus2 RAID on the MBP desktop.

    Both the nMP and MBP are also attached to an Areca 8 Gb/s FC Xsan through ATTO 8 Gb/s ThunderLink boxes, but of course the throughput of the Xsan is limited to 800 MB/s by the Thunderbolt 1 link on the ATTO boxes.

    With the nMP in DAS mode to the Pegasus2 R8 I'm getting 1123 MB/s WRITE speed and 928 MB/s READ speed using the AJA DiskWhack Test set to 2K and a 16 GB file size.

    With the MBP in Thunderbolt2 Bridge mode to the nMP/Pegasus2 I’m getting 808 MB/s WRITE speed and 363 MB/s READ speed.

    Attached is a screen grab showing both BMD and AJA speed tests of the nMP in DAS mode to the Pegasus2 R8:

    Attached is a screen grab showing both BMD and AJA speed tests of the MBP in Thunderbolt2 Bridge mode to the nMP/Pegasus2 R8:

    Here’s some photos taken with iPad showing set up:

    However, using the same tool that Apple uses to test drive performance I was actually getting over 1200 MB/s WRITE speed and 1300 MB/s READ speed from the Pegasus2 in RAID 0 DAS mode, so obviously the different disk speed tests are measuring different things.
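    As a sketch of why tools disagree, here's a minimal sequential-write probe in Python. The path, block size and total size are placeholder assumptions; real tools like the BMD and AJA tests use much larger files, test reads as well, and may bypass the OS cache entirely:

```python
# Minimal sequential-write probe in the spirit of the BMD/AJA tests.
# Results depend heavily on block size, file size and caching, which is
# exactly why different tools report different numbers. The target path
# and sizes here are small placeholders; point 'target' at a file on the
# volume under test and use a much larger total size for a real run.
import os
import time

target = "/tmp/speedtest.bin"  # placeholder path
block = 4 * 1024 * 1024        # 4 MB per write
blocks = 16                    # 64 MB total; real tests use multi-GB files

buf = os.urandom(block)
t0 = time.perf_counter()
with open(target, "wb") as f:
    for _ in range(blocks):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())       # force data to disk, not just the page cache
elapsed = time.perf_counter() - t0
os.remove(target)

print(f"write: {block * blocks / elapsed / 1e6:.0f} MB/s over {elapsed:.2f} s")
```

    Change the block size or drop the fsync and the reported MB/s can swing dramatically, which is the whole point: every benchmark bakes in its own choices.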

    The Promise support guys have published an interesting KB on their website that explains the disk caching optimization we went through to achieve the best results:

    https://kb.promise.com/KnowledgebaseArticle10394.aspx

    Individual disk speed is but one factor in overall sustained throughput using Thunderbolt 2 architecture … number of spindles, efficiency of RAID controller’s read-ahead cache and latency in TCP/IP over Tbolt2 bridging are all factors to take into account in overall system performance. Saturating the Tbolt2 pipeline requires that everything be firing on max throughput – 6 core and 12 core nMPs are great for CPU intensive tasks but maximizing Tbolt2 data I/O needs careful tweaking of the entire pipeline.

    We’ll be demonstrating the new Mac Pros in DAS and Xsan mode at the Larry Jordan FCP X event on January 14th in Burbank and also at our X Pro monthly meeting on Saturday Jan 18th on The Lot in West Hollywood. We’ll also be presenting nMP, FCP X 10.1 and Xsan 3 at the January LACPUG meeting.

    I’ve put together a short presentation that covers the different storage options now available using the nMPs in DAS, NAS and SAN configurations – details on our website:

    https://www.lumaforge.com/index.html

    Any questions, let me know or come along to one of the events.

    Cheers and all the best for 2014.
    Neil

    Neil Smith
    CEO
    LumaForge LLC
    faster storage faster finishing
    323-850-3550
    http://www.lumaforge.com

  • Neil Smith

    September 12, 2013 at 6:13 am in reply to: Resolve 10 vs Colorfront for Dailies?

    If you live in the LA area and want to see the Resolve 10 beta in action, we'll be demoing it on Saturday September 21st on The Lot in West Hollywood at 10am … details on the LumaForge website … you need to sign up through the Eventbrite invitation … we're on a movie lot and Security won't let you through if you're not on the list:

    https://www.lumaforge.com/page12/index.html

    We're up to beta build 32 now, so Peter and his team are making good progress … it's very stable and they're getting close to releasing the public beta.

    Resolve 10 is a significant upgrade and worth checking out before you make any strategic decisions … we sell both ColorFront and Resolve systems and each has its own pros and cons … you need to carefully consider the trade-offs between price and performance, functionality and availability, ease of use versus processing power, platform of choice and of course overall budget … it's not a direct apples-to-apples comparison.

    Neil

    Neil Smith
    CEO
    LumaForge LLC
    shoot it. store it. share it
    323-850-3550
    http://www.lumaforge.com

  • Neil Smith

    April 16, 2013 at 10:51 pm in reply to: SAN Latency Issues – 10 gig Copper vs. Fiber Channel

    The latency issue may not be directly related to networking I/O speeds but to how you’ve got your RAID striping and block sizing configured … you should send an email to Tiger-Tech and see what their recommendation is for a metaLAN SAN.
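    To illustrate the striping and block-sizing point, here's a toy calculation; the drive count and stripe unit are hypothetical, not a recommendation for any particular array:

```python
# Toy stripe-geometry arithmetic (hypothetical numbers, not advice for any
# particular array): writes sized and aligned to a full stripe avoid the
# RAID-5 read-modify-write penalty that shows up as latency.

data_drives = 11       # e.g. a 12-bay RAID 5: 11 data drives + 1 parity
stripe_unit_kb = 128   # per-drive chunk size set in the RAID controller

full_stripe_kb = data_drives * stripe_unit_kb
print(f"full-stripe write size: {full_stripe_kb} KB")

# I/O that is smaller than, or misaligned with, this full stripe forces the
# controller to read old data and parity before it can write new parity.
```

    If your application's I/O size doesn't line up with the array's full-stripe size, the controller ends up doing extra reads behind every write, and that surfaces as exactly the kind of latency being described.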

    The other alternative to using a GigE or 10GigE SAN is to consider an external PCI Express topology from ExaSAN … one of their 12-bay RAID 5 arrays delivers 1200 MB/s to the desktop … we were demoing their A12 boxes at NAB last week running XSAN with three Macs attached and were getting realtime performance with very low latency from 5K EPIC files and 4K ProRes4444 QuickTime files.

    If you’re in the LA area come over to our place on the old Warner Hollywood Lot in West Hollywood and we’ll give you a demo of the ExaSAN/XSAN combination … if you haven’t seen a PCI express SAN in action, you’ll be in for a pleasant surprise, both in terms of price and performance!

    And just so you know, we're running a '4K MADE EASY' training day on Saturday April 27th on The Lot, which will feature XSAN running over an ExaSAN RAID with FCP X and DaVinci Resolve round-tripping … the price/performance of XSAN on Mountain Lion with ExaSAN hardware is pretty amazing.

    Details of the training event below:

    https://www.lumaforge.com/styled-2/index.html

    Cheers,
    Neil

    Neil Smith
    CEO
    LumaForge LLC
    shoot it. store it. share it
    323-850-3550
    http://www.lumaforge.com

  • Neil Smith

    February 22, 2013 at 10:38 pm in reply to: Cheap multi-drive clone

    Eric,

    I think (note the word think) that you can use SP Pro to copy a file or folder from a RAID to 3 DAS drives … you don’t need to be just ingesting from a camera card … our workflow guy is out today, otherwise I’d check with him.

    Try it … you just have to set up the source as your RAID drive.

    Re: making a clone of your $9k SAS RAID … if you don't need SAS speeds for the clone, you could always buy an 8-bay eSATA JBOD and stick some 2TB Seagates in there … pretty cost-effective.

    Neil

    Neil Smith
    CEO
    LumaForge LLC
    shoot it. store it. share it
    323-850-3550
    http://www.lumaforge.com

  • Neil Smith

    February 22, 2013 at 8:28 pm in reply to: Cheap multi-drive clone

    Eric,

    You should check out ShotPut Pro … we use it on our DIT carts for something very similar to what you're doing.

    It's very simple to set up, fast and efficient, and does an MD5 checksum as well if you need it.

    https://www.imagineproducts.com/index.php?main_page=index&cPath=5&zenid=b3sibsu43mao7odd7jg9627h06
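    For anyone curious what the checksum step amounts to, here's a bare-bones Python sketch of a copy-and-verify pass in that spirit; the paths are temp-file stand-ins, and this is obviously not ShotPut Pro's actual implementation:

```python
# Bare-bones copy-then-verify pass: copy the file, hash both sides with
# MD5 and compare, which is the essence of what a checksummed offload
# tool does. Paths below are temp-file stand-ins for real media.
import hashlib
import os
import shutil
import tempfile

def md5(path: str, chunk: int = 1 << 20) -> str:
    """Hash a file incrementally so large clips don't need to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

# Demo with a throwaway temp file standing in for a camera-card clip.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"pretend this is a ProRes clip")
src.close()

dst = src.name + ".copy"
shutil.copyfile(src.name, dst)
ok = md5(src.name) == md5(dst)
print("checksum verified" if ok else "MISMATCH: recopy the file!")

os.remove(src.name)
os.remove(dst)
```

    A dedicated tool adds the parts that matter on set: multiple simultaneous destinations, retry on mismatch, and an audit log of every hash.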

    Let us know what you end up with.

    Regards,
    Neil

    Neil Smith
    CEO
    LumaForge LLC
    storage and networking specialists
    323-850-3550
    http://www.lumaforge.com
