Forum Replies Created

Page 5 of 98
  • [Greg Leuenberger] “Because if it works (and I’ll definitely be pushing for it to work) you can bet your ass that is exactly what everybody will be doing. I still haven’t heard **anybody** give me a technical reason why doing shared storage over 20Gb TB wires won’t work as well as ethernet wires.”

    Everybody? The new Mac Pro has 6 TB2 ports across 3 TB2 controllers. The best you could ever do, assuming everything else works perfectly, is 5 IP-over-TB Mavericks clients with that Mac Pro as a file server. And even then only if it runs headless, which is a huge waste of the twin GPUs that make up so much of the Mac Pro’s cost. Is that enough clients for everybody? Maybe for many small shops, but even if this idea does work, its ability to scale will be strictly limited. Oh, and everyone has to stay really close to the server, since the longest available Thunderbolt cable right now is only 10 meters. And at $330 per cable, your 5-seat “cheap” workgroup costs $1,650 just for the cables.

    I think you’re also seriously underestimating the work necessary to get such a setup actually working, not to mention the cost of the equipment needed to test it before using it in anger. The back of my napkin has it at over $5,500 before you even price the RAID. Would you spend that kind of money on something just to test whether it works?
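
    Putting rough numbers on that napkin (these are the estimates from this post, not vendor quotes):

```python
# Back-of-napkin math for the 5-seat IP-over-Thunderbolt workgroup
# described above. Figures are this post's estimates, not quotes.
CLIENTS = 5          # max IP-over-TB clients of one new Mac Pro, per above
CABLE_PRICE = 330    # price of the longest (10 m) Thunderbolt cable

cable_cost = CLIENTS * CABLE_PRICE
print(f"Cables alone: ${cable_cost:,}")  # $1,650 before any other hardware
```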

    [Greg Leuenberger] “If something like a TB switch comes out with latency management features it could still be something that is far less expensive than 10GB switches and 10GB NICs on every system.”

    A TB switch would essentially be a PCIe switch on the inside, a technology that is already on the market. Since I can’t find a published price for one, I’m betting they cost something more like a house than a car. Basically, don’t hold your breath on a Thunderbolt switch ever coming to market. The tech is just too expensive and the potential market just too small (and notoriously thrifty).

    We’d be much better off if someone would just put the guts of one of these into a big brother to one of these. Or better yet, if Apple would put a modern NIC on board in the new Mac Pro.

    Best,
    Andy

  • Andrew Richards

    October 26, 2013 at 2:44 pm in reply to: Mac Pro GPU may be replaceable

    [Bernard Newnham] “Are they really different? My Hackintosh, now largely returned to being a PC since the demise of FCP7, runs a standard GeForce card perfectly well. “

    Does your Hackintosh boot from BIOS or UEFI? Even if it is UEFI, it is different EFI code than what ships in Macs, and that’s the key difference. You can put a PC-spec NVIDIA card in a Mac, but it will boot with no gray Apple screen and can only show an image once the OS and the necessary drivers have loaded. The firmware on the GPU needs to support the Mac’s EFI code to work end-to-end.

    Best,
    Andy

  • Andrew Richards

    October 16, 2013 at 3:30 pm in reply to: As usual…

    [Dave Gage] “Someone here mentioned awhile back that a ThunderBolt to USB 3 cable wouldn’t be easy to make for technical reasons (don’t remember what they were), but that would be perfect for me.”

    This thread?

    Best,
    Andy

  • Andrew Richards

    August 31, 2013 at 12:10 am in reply to: Looking Ahead to 10.1

    [Geert van den Berg] “Yes but a project and an event are stored as an SQlite database which is open to only one writing user.”

    Allow me to elaborate on my previous post:

    [Me] “They aren’t using SQLite directly, but rather via Core Data. Core Data has a method for hooking up to other RDBMSes, so it is technically feasible. “

    Final Cut Pro X does not call the SQLite C APIs directly; it calls Core Data, which uses SQLite as one of its available persistent store types. Core Data has an obscure class called NSIncrementalStore that lets a developer hook a Core Data app up to whatever kind of persistent data store back-end they choose. This could be a PostgreSQL database, a NoSQL store of some kind, whatever. They just need a little glue code to get it going.

    This is not to say building such a thing is trivial, only that the way FCPX stores project and event databases already has a built-in method for hooking up to an RDBMS. So from the client-side perspective, the hooks are already there. Building the server side would be a major feat and probably far more expensive than the market would bear.
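
    To make the “hooks are already there” point concrete, here is a minimal sketch of the pluggable-store pattern in Python. This is not Core Data’s actual API; every class and method name below is invented for illustration. The idea is simply that the app codes against one store interface, and the back-end behind it could be SQLite today or a server tomorrow without touching client code:

```python
import sqlite3

# Illustrative sketch of the pluggable-persistence idea behind
# NSIncrementalStore. All names here are hypothetical, not Apple's API.

class ProjectStore:
    """What the editor talks to; it never sees the back-end directly."""
    def save(self, name, data):
        raise NotImplementedError
    def load(self, name):
        raise NotImplementedError

class SQLiteProjectStore(ProjectStore):
    """Today's back-end: a local SQLite database (in-memory for this demo)."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS projects (name TEXT PRIMARY KEY, data BLOB)"
        )

    def save(self, name, data):
        self.db.execute(
            "INSERT OR REPLACE INTO projects VALUES (?, ?)", (name, data)
        )
        self.db.commit()

    def load(self, name):
        row = self.db.execute(
            "SELECT data FROM projects WHERE name = ?", (name,)
        ).fetchone()
        return row[0] if row else None

# A PostgresProjectStore or MAM-backed subclass could be dropped in here;
# nothing the "editor" does below would need to change.
store = SQLiteProjectStore()
store.save("My Event", b"clip metadata")
print(store.load("My Event"))
```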

    [Geert van den Berg] “Maybe Apple will come with a final cut server product again, they have a server product for OS X too. But the main thing is that the databases need to be stored on a server which all clients can access for this to work.”

    I doubt it, but I’d be very pleased if they did. Probably the biggest hurdle to building the server side of such a system is that it doesn’t fit the tidy, shrink-wrapped package Apple wants its products to be. A complex server with highly specific hardware requirements that is necessarily very expensive, not a Mac, and requires a lot of enterprisey support is not a product Apple is ever going to consider building. The best I think we can hope for is some kind of API for FCPX that would let some enterprising third party do the heavy lifting. Throw a switch in FCPX’s preferences and it stops pointing to SQLite files on disk and instead queries a compliant MAM? Could be very cool, but still far too niche to believe it will ever happen.

    [Geert van den Berg] “Mounting and unmounting SAN locations can’t be the end of this. It’s old fashioned.”

    It’s a kludge, I agree.

    Best,
    Andy

  • Andrew Richards

    August 30, 2013 at 8:55 pm in reply to: Looking Ahead to 10.1

    They aren’t using SQLite directly, but rather via Core Data. Core Data has a method for hooking up to other RDBMSes, so it is technically feasible.

    Oliver is right though, it isn’t going to happen. Too small a niche and not worthy of the very considerable effort it would take. Sure would be cool though.

    Best,
    Andy

  • Andrew Richards

    August 30, 2013 at 12:20 pm in reply to: Looking Ahead to 10.1

    I was hoping against hope for live Event sharing pretty much since FCPX launched. Shared timelines are probably much more difficult to pull off, but if done well could be pretty interesting. I was a big proponent of Final Cut Server while it existed, and even tried to build a business around it right around the time it was canned. Shared Events with hooks for developers would bring all that full circle, even if I’m not really part of the scene anymore.

    Best,
    Andy

  • Andrew Richards

    August 19, 2013 at 3:09 pm in reply to: Apple releases ShakeX

    Booooooriiiiing!

    Final Cut Server X or GTFO!

    Best,
    Andy

  • Andrew Richards

    July 26, 2013 at 4:10 pm in reply to: Object storage

    [Neil Sadwelkar] “I’m trying to wrap my head around this but I can’t seem to fathom how (or why) it will be better than what we currently use. Does anyone have any clue on this? So is this the ‘next big thing’ in storage? Or is it one of those things – like multimedia, 3D, holography etc.”

    Object storage is definitely the next big thing for storing a lot of data, just not necessarily your data for the way you use it.

    Think of object storage as the next layer of abstraction for commodity storage. Back in the day, formatting a hard drive involved thinking about sectors and cylinders, aligning them manually, and flipping DIP switches on the hardware to set them up. Now SAS and SATA firmware handles that low-level noise and the OS only has to think about the filesystem. Object storage goes a level further, managing a large number of individual filesystem-formatted physical drives as a single large blob that is typically accessed natively via a RESTful (HTTP-based) interface. Amazon S3 is object storage, and as you may be familiar, it is not accessed the same way as an ordinary single-filesystem NAS.
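
    The access-model difference is easy to show with a toy. This is not any real product’s API, just a sketch of the flat, key-addressed, HTTP-verb-shaped interface that S3-style object stores expose instead of a mounted filesystem:

```python
class ToyObjectStore:
    """A flat namespace of keys mapping to blobs; no directories, no mounts."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        # Analogous to HTTP PUT /bucket/key in a real RESTful object store.
        self._objects[key] = bytes(data)

    def get(self, key):
        # Analogous to HTTP GET /bucket/key.
        return self._objects[key]

    def delete(self, key):
        # Analogous to HTTP DELETE /bucket/key.
        del self._objects[key]

store = ToyObjectStore()
# The "path" is just an opaque key; the store never parses it as folders.
store.put("projects/promo/clip001.mov", b"\x00\x01\x02")
print(store.get("projects/promo/clip001.mov"))
```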

    Object storage seeks to create a storage environment that can scale both in size and geographically, and is usually measured in petabytes. You could build one in the tens of terabytes, but you would not realize the advantages compared to conventional RAID+filesystem storage. Object storage’s sweet spot is at multi-petabyte scale.

    To give a couple of high-level examples of how it works, Ceph and OpenStack’s Swift both aggregate a collection of individual physical drives, usually cheap commodity SATA HDDs, residing in a bunch of Linux servers, each drive formatted with XFS or ext4 (or, soon, Btrfs). Since a Ceph or Swift system is a cluster of many servers, each with many drives inside, the cluster distributes the data across typically three or more locations for durability and fault tolerance. These clusters are typically assembled with 10Gb Ethernet as the network “glue” and can employ SSD cache at each node, and thus in aggregate can provide very good storage performance for the applications that demand that scale and style of storage: large web applications, Hadoop clusters, and other Big Data analytics.

    The cool thing about how Ceph and Swift scale is that neither uses a database to keep track of the objects it stores. Placement is handled algorithmically, from the object’s name and metadata, which makes it much more efficient to scale into the billions of objects without requiring a complex and expensive database to keep track of it all.
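
    A minimal way to see how placement can work without a database: derive each object’s replica locations from a hash of its name. Ceph’s CRUSH and Swift’s ring do this in far more sophisticated forms; the sketch below is simple rendezvous hashing with made-up node names, not either project’s actual algorithm:

```python
import hashlib

# Hypothetical cluster layout for the sketch.
NODES = ["node-a", "node-b", "node-c", "node-d", "node-e"]
REPLICAS = 3

def placement(object_name, nodes=NODES, replicas=REPLICAS):
    """Rank every node by a hash of (node, object name); keep the top N.

    Any client can recompute the same answer from the name alone, so no
    central index has to grow with the object count.
    """
    ranked = sorted(
        nodes,
        key=lambda node: hashlib.md5(f"{node}/{object_name}".encode()).hexdigest(),
    )
    return ranked[:replicas]

print(placement("events/2013-07/clip042.mov"))
```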

    Are they useful for production and post? Maybe as a warm archive, but at the scale where they start to look competitive performance-wise, they are probably much too large and expensive to make any financial sense compared to more conventional RAID-based NAS and SAN solutions. The other catch with using object storage for production and post is that you would need a NAS gateway to share these RESTful HTTP-based storage systems with clients that want to connect via conventional NAS protocols like SMB, AFP, NFS, etc.

    Best,
    Andy

  • [Craig Seeman] “I think the Mni’s target had been “switchers” as a way to buy a Mac for someone with monitor, mouse, keyboard from their Windows computer.”

    That was the original concept, but for the last few years the MacBook Air has probably carried the flag as the most popular entry-point for new Mac users. The mini seems to be most popular as a hobbyist/utility/server/HTPC type thing lately.

    Best,
    Andy

  • Andrew Richards

    July 9, 2013 at 12:29 pm in reply to: Promise Pegasus reliability issues…

    What does it connect to? SAS card? eSATA card? Either way there are Thunderbolt to PCIe card chassis available:

    Sonnet Echo Express
    OWC Helios

    …but they are expensive.

    Best,
    Andy
