Creative Communities of the World Forums

The peer to peer support community for media production professionals.


  • NFS sharing fcpx projects

    Posted by Simon Blackledge on May 30, 2014 at 8:32 am

    So I did a test for this NFS setup to see if it suits.

    On initial connection it asks for no password?

    Also when I drop a file in over finder there’s a 10-20 second delay before the copy starts?

FCPX seems fine, although I have a timeline that renders over and over with background render. It seems it’s trying to write at the end but can’t.

Tried a slo-mo and that wrote fine :/ maybe it’s just that timeline.

Most concerning is that I’ve had files literally disappear from the server. They’re in the backup server under /archived, so they have definitely been deleted… :/

Seems to happen when I quit FCPX. Open it up the next day and sometimes (not always) something is missing: audio or QuickTimes. Nothing is hard-imported; everything is “leave in place”.

    S
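The no-password behaviour Simon mentions is normal for classic NFS: with AUTH_SYS the server trusts the client host and its numeric UID/GID mapping rather than prompting for credentials. A minimal sketch of a server-side export in BSD/OS X `/etc/exports` syntax (the path and subnet here are illustrative placeholders, not Simon’s actual setup):

```shell
# /etc/exports on the server (illustrative path and subnet)
# Any host on 192.168.1.0/24 may mount; access control is by host and
# UID/GID mapping, so the client never sees a password prompt.
/Volumes/RAID/fcpx  -network 192.168.1.0 -mask 255.255.255.0 -maproot=nobody
```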

  • Simon Blackledge

    May 30, 2014 at 1:22 pm

Oh, plus when exporting to the NFS mount, FCP-X leaves a folder behind after completion :-/

    Name = (A Document Being Saved By StompUtil)

    S

  • Neil Smith

    May 30, 2014 at 2:36 pm

    Simon,

I think you’ll find that FCP X is only architected to work in an Xsan FC SAN environment … it isn’t designed to use NFS or even SMB2 over a shared storage environment.

    Worth double checking with Apple Support but that might be your issue.

    Neil

    Neil Smith
    CEO
    LumaForge LLC
    high performance workflow
    323-850-3550
    http://www.lumaforge.com

  • Simon Blackledge

    May 30, 2014 at 4:30 pm

    Hey Neil

    The NFS workaround does indeed work.

The rendering over and over is a bug in FCP-X when using Fill instead of None or Fit in Spatial Conform, with some codecs or with media external to the FCPX library.

The folder left behind by Compressor (internal share in FCP-X) also appears to be a bug.

Local media exported to the NFS share (so FCP-X isn’t using anything on the NFS-mounted volume) still leaves behind the *tmp folder.

I’m putting the disappearing files down to user error… !

    Nice 4K demo at NAB btw.. caught the vids online.

    S

  • Bob Zelin

    May 31, 2014 at 5:31 pm

    Hi Simon –
    as you may know, this forum’s John Davidson from Magic Feather is using NFS with FCP-X every day.

    And yes – no permissions. It just connects.

    Bob Zelin

    Bob Zelin
    Rescue 1, Inc.
    maxavid@cfl.rr.com

  • Simon Blackledge

    June 4, 2014 at 6:02 pm

Try this:

Mount the server FCPX folder as NFS.

Open FCPX.

Open whatever projects you want.

Don’t quit FCP-X.

Mount the server folder over AFP.

Eject the NFS-mounted one.

Go back to editing in FCPX – export etc. without any of the NFS hidden-folder rubbish showing up.

FCPX only flags the need for the project to be on a SAN when opening… once open, it can be mounted any way you wish…
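Simon’s mount-swap can be sketched as shell commands. Everything here is a placeholder (server name, share path, credentials, library name), and this is a transcription of his steps rather than a supported workflow – whether the NFS unmount succeeds with the library open will depend on exactly what FCPX is holding open:

```shell
# 1. Mount the server's FCPX folder over NFS and open the library from it
sudo mkdir -p /Volumes/fcpx-nfs
sudo mount -t nfs server.local:/Volumes/RAID/fcpx /Volumes/fcpx-nfs
open -a "Final Cut Pro" "/Volumes/fcpx-nfs/MyLibrary.fcpbundle"   # leave FCPX running

# 2. With FCPX still open, mount the same folder a second time, over AFP
mkdir -p /Volumes/fcpx-afp
mount_afp "afp://user:pass@server.local/fcpx" /Volumes/fcpx-afp

# 3. Eject the NFS mount; per Simon, FCPX carries on via the AFP path,
#    since the SAN check only happens when the library is opened
sudo umount /Volumes/fcpx-nfs
```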

    s

  • Bob Zelin

    June 5, 2014 at 12:40 am

    I will try this. If this is true, then it is REALLY clear to me that Apple is intentionally making it difficult to allow for shared storage on a network that they do not manufacture (because you should be using XSAN or iCloud as your storage for FCP-X).

    Simon, you always come up with brilliant things.

    Bob Zelin

    Bob Zelin
    Rescue 1, Inc.
    maxavid@cfl.rr.com

  • John Davidson

    June 5, 2014 at 1:34 am

    Hey guys – stepping out of paternity leave to comment. Very interesting about the NFS mount – also saw the FCP.co discussion about it as well. For us, we just have more problems using NFS than solutions so we have kept to sparse disk images. It’s not elegant, but it’s been rock solid using Sparse over AFP for 2 years now.

Our NFS issues were that it kept crashing, freezing, and beachballing after a while. Add to that that Adobe doesn’t render to NFS mounts (it freezes AE when you try), and we just gave up and decided to stick with what works for now.

    Interesting to see this develop though – maybe we’ll try again one of these days.
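John’s sparse-disk-image approach can be sketched with hdiutil. The size, volume name, and share path below are illustrative placeholders, not his actual setup:

```shell
# Create a growable sparse bundle on the AFP share
# (it only consumes space on the server as it fills)
hdiutil create -type SPARSEBUNDLE -fs HFS+J -size 500g \
    -volname FCPX_Media /Volumes/Share/fcpx_media.sparsebundle

# Attach it on the workstation; FCPX then sees an ordinary
# local journaled HFS+ volume rather than a network filesystem
hdiutil attach /Volumes/Share/fcpx_media.sparsebundle
```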

    John Davidson | President / Creative Director | Magic Feather Inc.

  • Bob Zelin

    June 5, 2014 at 12:05 pm

    Hi John –
can you post the link on fcp.co that discusses this?

    Bob Zelin

    Bob Zelin
    Rescue 1, Inc.
    maxavid@cfl.rr.com

  • Steve Modica

    June 8, 2014 at 12:02 pm

    NFS was working great with 10.1, and then they released 10.1.1 and broke the locking.

    With 10.1, you could enable locallocks on the clients and they would adhere to the lock files in the Library directories. The only possible concern would be if two clients tried to create lock files at the exact same moment.

With 10.1.1, Apple started using NFS locking (via rpc.lockd), and that messed things up.
First, if you use locallocks, the client can no longer see that the files are locked on the server, so it just blows through them and overwrites the locks. Clients will therefore corrupt Libraries if they access one simultaneously.
(You fix this by removing “locallocks”, which makes sure they heed the locking.)

Second, they seem to have broken things when a second client tries to access a locked Library. Now the second client hangs and has to be killed. Nothing gets broken in the Library, but the hang on the client isn’t very nice.

    I’m not sure if this is our implementation of rpc.lockd or what. I’d like to know if anyone has this working.
    Our lockd is pretty standard.
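The lock semantics Steve describes can be seen locally with the util-linux flock(1) tool – a simplified stand-in for the fcntl locks that rpc.lockd arbitrates over NFS, with an illustrative lock file path. Once one process holds an exclusive lock, a second non-blocking attempt fails instead of “blowing through” it; that refusal is exactly what a client loses when locallocks hides the server-side locks:

```shell
lockfile=$(mktemp)

# First "client" takes an exclusive lock and holds it for 2 seconds
flock -n "$lockfile" -c 'sleep 2' &
sleep 0.5

# Second "client" attempts a non-blocking exclusive lock on the same file
if flock -n "$lockfile" -c true; then
    second="acquired"     # what a locallocks client effectively sees
else
    second="lock held"    # correct behaviour when locks are honoured
fi
echo "$second"
wait
```

This prints “lock held”, because the second attempt is correctly refused while the first lock is outstanding.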

    Steve

    Steve Modica
    CTO, Small Tree Communications

  • Neil Smith

    June 8, 2014 at 4:38 pm

[Steve Modica] “NFS was working great with 10.1, and then they released 10.1.1 and broke the locking. …”

That’s what I was trying to hint at in my earlier post … the chaps from Cupertino may have different ideas about how they want shared storage deployed and managed under FCP X 10.1 … I’d highly suggest people talk to Apple Support directly before they get too far down unsupported topologies.

    Neil

    Neil Smith
    CEO
    LumaForge LLC
    high performance workflow
    323-850-3550
    http://www.lumaforge.com

