Forum Replies Created

  • Troy Williams

    November 17, 2017 at 11:43 pm

    [Bob Zelin] “you still have not explained why you are going to use NFS. What are your client computers ?”

    That was just for testing. What I’ll be “going with” is whatever method Tiger’s client software uses. Tiger Spaces will use NFS between itself and the data repository unless I have a reason to use SMB. Doesn’t matter to me either way. Client computers are all Macs.

  • Troy Williams

    November 17, 2017 at 3:20 pm

    [Bob Zelin] “REPLY – I have seen, that based on the generation of the computer, the results of Thunderbolt 2 (using the same damn thunderbolt to 10G adaptor) varies widely from Mac to Mac. This is very discouraging, and I don’t have an answer for it. I have seen Thunderbolt 2 Mac Book Pro’s dramatically outperform a Mac Pro 6,1 on the same network with the same Tbolt adaptor. I don’t know why.”

    I chewed on this for a day and came up with a hypothesis. I suspect the PCIe-to-Thunderbolt adapter is falling back to PCIe 1.1 x4 rather than negotiating PCIe 2.0 x4. The former tops out at 8Gb/s, whereas the latter reaches 16Gb/s, which would perfectly explain the bandwidth starvation. On the Mac Pro the card sits on an x8 link, which even at PCIe 1.1 speeds delivers full bandwidth. I might drop it into an x4 slot and test how it does then. The card should support PCIe 2.0, though, as do the TBolt chassis, so it’s puzzling. I’m tempted to order an ATTO card just to test how that performs over TBolt 2.
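
    For anyone who wants to sanity-check that hypothesis, the arithmetic fits in a few lines of Python. The per-lane rates below come from the PCIe specs; nothing here is measured from my hardware:

        # Back-of-envelope check of the link-speed hypothesis above.
        # PCIe 1.1 and 2.0 both use 8b/10b encoding, so usable
        # bandwidth is 80% of the raw signaling rate.
        GT_PER_LANE = {"1.1": 2.5, "2.0": 5.0}  # GT/s per lane
        ENCODING_EFFICIENCY = 0.8               # 8b/10b

        def effective_gbps(gen: str, lanes: int) -> float:
            """Usable bandwidth in Gb/s for a PCIe link."""
            return GT_PER_LANE[gen] * lanes * ENCODING_EFFICIENCY

        for gen, lanes in [("1.1", 4), ("2.0", 4), ("1.1", 8)]:
            print(f"PCIe {gen} x{lanes}: {effective_gbps(gen, lanes):.0f} Gb/s")

        # PCIe 1.1 x4:  8 Gb/s -> starves a 10GbE card
        # PCIe 2.0 x4: 16 Gb/s -> plenty of headroom
        # PCIe 1.1 x8: 16 Gb/s -> why the Mac Pro slot is fine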

    Also, a sidenote regarding the MacBook Pro with TBolt 1: I discovered that cold booting with the TBolt unit plugged in makes a difference. If hot plugged, it tops out at 2.5Gb/s, as previously stated. If plugged in before a cold boot, I get 5.0Gb/s. I still think I should see better, but for TBolt 1 I won’t complain.
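
    If anyone wants to confirm what the chassis actually negotiated after a hot plug versus a cold boot, System Profiler’s PCI report includes the link parameters. A minimal sketch, assuming the fields are named “Link Speed” and “Link Width” as on the OS X releases I’ve seen:

        # Dump the negotiated PCIe link speed/width for everything in
        # the PCI tree (Thunderbolt devices included). Run once after
        # a hot plug and once after a cold boot, then compare.
        import subprocess

        report = subprocess.run(
            ["system_profiler", "SPPCIDataType"],
            capture_output=True, text=True, check=True,
        ).stdout

        for line in report.splitlines():
            if "Link Speed" in line or "Link Width" in line:
                print(line.strip())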

    [Bob Zelin] “REPLY – you will find quite a jump when you enable an MTU of 9000. Many people “poo poo” jumbo frames but for 10G performance, it makes a big deal.”

    I understand why they do. Jumbo frames can hurt small-packet traffic such as basic internet use on the same network. The solution is simple: don’t put them on the same network.

    And yes, when testing the two direct-connected workstations, jumbo frames (MTU 9000) were enabled at both ends.
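
    One caveat for anyone replicating this: setting MTU 9000 on both interfaces isn’t the same as proving the path passes unfragmented 9000-byte frames end to end. A quick check, assuming BSD/macOS ping flags; the peer address is a placeholder:

        # 8972 bytes of ICMP payload + 8-byte ICMP header + 20-byte IP
        # header = 9000 bytes on the wire. -D sets the Don't Fragment
        # bit, so the ping fails outright if anything in the path is
        # still at MTU 1500.
        import subprocess
        import sys

        PEER = "10.0.0.2"  # hypothetical address of the far end

        result = subprocess.run(["ping", "-D", "-s", "8972", "-c", "3", PEER])
        sys.exit(result.returncode)  # non-zero means the jumbo path is broken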

    [Bob Zelin] “REPLY – I figured that if you were already dealing with Tiger Spaces, you could simply get a nice Tiger Store or Tiger Serve system, that will work out of the box, without all the fuss you are going thru.”

    Almost did, until Bernard said “NTFS”. An NTFS filesystem for a POSIX house? No thank you. I’ve dealt with the headaches of that before and don’t want a repeat.

    And it’s not fuss; this is my idea of fun. I’d be doing this in my free time if I weren’t being paid to.

  • Troy Williams

    November 15, 2017 at 10:29 am

    Hi Bob. Always a joy to read your rather spicy flavor of advice.

    I intentionally did not go into detail on the storage system because I did not wish to invite critique, nor are such details relevant at this time. The system in question is a small proof-of-concept in a lab to identify problems before going full scale — and yes, in fact, it is all SSD. The final system will also utilize Tiger Spaces. It’s going quite well except for this odd issue with Thunderbolt. That said, I’m well aware of your loathing and scorn for those who wish to pursue the DIY path. Objection noted and lovingly ignored.

    I’ll provide a little further detail, though: the MacPro5,1 can access the test array at 1.1GB/s (roughly 8.8Gb/s) over NFS, according to AJA System Test. The fact that I am not seeing the same results over Thunderbolt 2 indicated a problem, which meant taking steps to isolate it.

    While 6.3Gb/s is “good enough” performance, I have seen multiple screenshots here from people reporting better over Thunderbolt 2. There is a bottleneck somewhere in the configuration, and even if the throughput is acceptable today, that bottleneck could cause other issues down the road.

    The configurations listed in my previous message were strictly troubleshooting setups, with the network adapters direct-connected to each other: no switch and no array. For what it’s worth, I did also test with a switch, specifically a Dell N4032 with jumbo frames enabled.

    The network performance tool I used (iPerf) generates raw TCP traffic and does not read from or write to storage at all. One system runs iPerf in transmit mode, the other in receive mode. Hard drive throughput is not a factor at either end, nor is the protocol overhead inherent to NFS, AFP, or SMB (the last of which, I’m fully aware, is hindered in OS X 10.11 and up).
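
    For anyone unfamiliar with the technique, here is a bare-bones sketch of the memory-to-memory idea in Python. To be clear, this illustrates the approach rather than reproducing iPerf, and the port and buffer size are arbitrary:

        # Minimal TCP throughput test: the sender blasts an in-memory
        # buffer for a fixed duration, the receiver counts bytes and
        # reports Gb/s. No disk is touched at either end.
        import socket
        import sys
        import time

        PORT = 5201
        CHUNK = b"\x00" * 65536  # 64 KiB buffer, generated in memory
        DURATION = 10            # seconds to transmit

        def recv() -> None:
            with socket.create_server(("", PORT)) as srv:
                conn, addr = srv.accept()
                total, start = 0, time.time()
                with conn:
                    while data := conn.recv(len(CHUNK)):
                        total += len(data)
                elapsed = time.time() - start
                print(f"{total * 8 / elapsed / 1e9:.2f} Gb/s from {addr[0]}")

        def send(host: str) -> None:
            with socket.create_connection((host, PORT)) as sock:
                end = time.time() + DURATION
                while time.time() < end:
                    sock.sendall(CHUNK)

        if __name__ == "__main__":
            send(sys.argv[2]) if sys.argv[1] == "send" else recv()

    Run "recv" on one workstation and "send <host>" on the other; since both ends only touch RAM, any shortfall has to come from the NICs, drivers, or the wire.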

    As for why I don’t contact SmallTree directly, the cards were acquired secondhand from a studio no longer in business, and I don’t know whether SmallTree will support secondhand product. I will be ordering a Sonnet Twin 10GbE SFP+ as a control, to determine whether I somehow have flawed cards, though that would surprise me greatly.

    Lastly, I *do* do IT for a living. Please do not make assumptions. You don’t know me, my qualifications, my experience, my plans, or the situation. Also, your entire post seemed to serve little purpose but to promote your turn-key solution friends and to sow fear about even the mere thought of pursuing a DIY path. Though I respect your knowledge and experience, you seem to make this forum an astoundingly hostile environment for people who might wish to learn something about data storage.

  • Troy Williams

    December 2, 2016 at 8:49 am

    I realize I’m necro-posting here, but I appreciate that Maria brought up a good point regarding proprietary drives. I’ve been bitten by that before. Is DDP the same way?

    Within the next year I’m looking to set up a >100TB cluster, and DDP is on my short list, but a strong influence on my decision will be whether I can easily replace a failed drive with an exact unit obtained from common vendors such as Amazon or Newegg.

  • Troy Williams

    December 13, 2014 at 2:02 am

    So indeed, dragging in the MDB works, but there’s an audio track missing.

  • Troy Williams

    December 11, 2014 at 3:53 am

    I tried some of your steps, Michael, but the solution actually turned out to be pretty simple.

    The steps I had taken, as well as my DaVinci settings, seemed to be okay; all I needed to do was add the tape name to the footage in my AMA Imported bin. It’s interesting that the correct tape name made it into the DaVinci footage but was blank on the footage brought in via AMA.

    Step 3b) Highlight the AMA footage, right-click and select “Modify”, choose “Set Source”, and choose the tape name.

    …unless doing it this way will somehow mess something up later down the road?

  • Troy Williams

    December 8, 2014 at 9:16 pm

    Isn’t the metadata in the AAF critical for reconnecting back to the raw footage later for conforming?

  • Troy Williams

    October 6, 2013 at 6:33 pm

    Sorry to necropost, but I’ve been encountering this exact same issue with Adobe Encore CS6.

    There was one frame in our film where Encore seemed to have inserted a frame from an earlier point in the picture. I re-rendered three times, and the problem came back each time. The ProRes 422 source we were using was flawless, so I chalked it up as an Encore glitch, but I wanted to know precisely where it occurred, so I took some time to trace the problem.

    Surprisingly, the problem wasn’t with Encore’s (or AME’s) H.264 transcoding, but rather with its M2TS muxing. I checked the cache directory and tested the H.264 M4V Encore made, and it did NOT have the glitch. The glitch would only show up after Encore muxed the video and audio material into an M2TS file.
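
    If anyone wants to reproduce that kind of trace, one approach is to hash every decoded video frame of the pre-mux file and of the muxed result, then find the first mismatch. A sketch assuming ffmpeg is available; the file paths are hypothetical:

        # Per-frame MD5s of the decoded video stream, via ffmpeg's
        # framemd5 muxer. A lossless mux should yield identical hashes.
        import subprocess

        def frame_hashes(path: str) -> list[str]:
            out = subprocess.run(
                ["ffmpeg", "-v", "error", "-i", path,
                 "-an", "-f", "framemd5", "-"],
                capture_output=True, text=True, check=True,
            ).stdout
            # Data lines end with the hash; header lines start with '#'.
            return [line.rsplit(",", 1)[-1].strip()
                    for line in out.splitlines() if not line.startswith("#")]

        before = frame_hashes("cache/movie.m4v")   # pre-mux H.264
        after = frame_hashes("build/movie.m2ts")   # muxed result

        for i, (a, b) in enumerate(zip(before, after)):
            if a != b:
                print(f"first divergent frame: {i}")
                break
        else:
            print("streams match frame for frame")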

    Ultimately, I ended up dropping our material into an Adobe Premiere timeline, dynamic-linking it into Encore, and rendering from there. That finally worked, thankfully. Unfortunately, it means I can’t have faith in Encore.

  • Troy Williams

    February 15, 2013 at 6:17 pm

    Thank you for your constructive reply. I’ll be glad to show you around the studio I’ve been working at in LA for the past four years. As you may have guessed, assistant editing isn’t my normal job (I normally handle IT), but I’m doing what I can to assist an impoverished project.

    I have no problem doing it one at a time if need be, but I thought there’d be a simple way to do them as a batch, since it’s just syncing one number with another.

  • Troy Williams

    October 26, 2012 at 2:00 am

    Argh… that was a typo in that post.

    Yes, the directory is R:\Avid MediaFiles\MXF\1
