Qmaster? distributing rendering across multiple macs on a network
Danny James replied 12 years, 6 months ago 7 Members · 15 Replies
-
Andrew Jenter
December 29, 2010 at 6:51 pm
Hello,
I’m coming across this thread now and think that PDF you created is great! I was wondering if you’ve come across a different issue, or if it’s an issue at all. I’ve connected 5 Mac Pros on our local network here and set up a cluster to use upwards of 100 “cores”. I submitted a job converting an 80-minute QuickTime file to ProRes LT (a very labor-intensive task). Watching the Batch Monitor, I saw that it compressed in about 15 minutes but then took another 25 minutes just to piece everything back together. Is it normal for the reassembly to take that long, or is there anything I can do to speed it up?
Thanks!
-Andrew
-
Loic De lame
December 29, 2010 at 7:36 pm
Hello!
Glad to hear that the guide was of help!
I haven’t dealt with codec conversions that intensive myself, but I have seen similar behavior in Qmaster, where the job finishes and then all the pieces are assembled together.
There may be a few reasons for this.
The first thing is obviously your network. If it’s at least gigabit, it should work fairly well, roughly comparable in speed to an external FW800 drive. If it’s only 100 megabit, you can still work with that, but of course it will take longer to transfer files over the wire.
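To put rough numbers on the network factor, here’s a back-of-envelope sketch. All figures are assumptions for illustration: ~102 Mbit/s is the ballpark bitrate for ProRes 422 LT at 1080p, and the “real-world” link throughputs are typical estimates, not measurements:

```python
# Rough estimate (assumed numbers): how long it takes to move a job's
# output over the network back to the cluster controller.

def transfer_minutes(file_gb: float, link_mb_per_s: float) -> float:
    """Minutes to move file_gb gigabytes at link_mb_per_s megabytes/sec."""
    return file_gb * 1024 / link_mb_per_s / 60

# An 80-minute ProRes 422 LT file at ~102 Mbit/s is roughly 60 GB:
# minutes * 60 s/min * Mbit/s / 8 bits-per-byte / 1024 MB-per-GB
file_gb = 80 * 60 * 102 / 8 / 1024

print(f"File size: {file_gb:.0f} GB")
print(f"Gigabit  (~100 MB/s real-world): {transfer_minutes(file_gb, 100):.0f} min")
print(f"100 Mbit (~10 MB/s real-world):  {transfer_minutes(file_gb, 10):.0f} min")
```

With these assumed numbers, the same file that crosses a gigabit link in about 10 minutes takes over an hour and a half on 100 megabit, so the wire alone can dominate the “piecing back together” phase.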
Another factor is your cluster controller and how it’s set up with regard to the hard drives. From my experience and understanding of Qmaster, the cluster controller gathers all of the pieces and assembles them before transferring the final file to its destination.
What this means is that if your cluster storage is on a single hard drive, that drive is working hard: reading segments arriving from multiple sources and writing everything out at the same time. The size of the resulting file is also a major factor, because the drive has to move that much data through itself.
If your cluster storage were on a RAID array set up so that data could be written to multiple disks at once, you should see a performance boost when assembling the compressed segments. I personally haven’t had the luxury of working with that setup, but that’s the theory.
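As an illustrative sketch of why the storage setup matters (the throughput figures below are assumptions, not measurements): assembling the segments means the cluster storage reads roughly a full file’s worth of segment data and writes it back out as the final file, so the drive moves about twice the file size:

```python
# Back-of-envelope sketch (assumed numbers): time for the controller's
# storage to stitch segments into the final file. Reading the segments
# and writing the result each move roughly the full file size.

def assemble_minutes(file_gb: float, disk_mb_per_s: float,
                     stripe_disks: int = 1) -> float:
    """Minutes to read + write file_gb GB on storage of the given speed."""
    data_mb = 2 * file_gb * 1024  # read segments + write final file
    return data_mb / (disk_mb_per_s * stripe_disks) / 60

# A ~60 GB ProRes file on a single ~80 MB/s drive vs. a 4-disk stripe:
print(f"Single disk:      {assemble_minutes(60, 80):.0f} min")
print(f"4-disk RAID 0:    {assemble_minutes(60, 80, 4):.0f} min")
```

With these assumed numbers, the single-disk figure lands in the same ballpark as the ~25 minutes Andrew observed, and in practice seek contention from simultaneous reads and writes tends to make a single disk slower still, while a stripe spreads the load across spindles.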
In my experience, I remember processing two files in separate jobs with the same priority. Once one of the files completed, the cluster moved on to the next file while it was still gathering the first file’s segments. That put a load on the cluster hard drive and slowed everything down. Once the file was assembled, everything ran up to snuff again. So it really comes down to load balancing and how the data gets moved around.
Hope this helps; let me know how it goes, and of course reach out if you have any other questions.
P.S. I’ve been experiencing strange things on my side, with Qmaster not working at all and service nodes dropping their segments just after starting them. One of the “intricacies” of Qmaster… trying to figure out its quirks. ;~)
~ Loïc
-
Hunter Julius
August 3, 2011 at 4:54 am
Does each computer need to have Compressor on it, or can one have Qmaster only?
-
Danny James
October 15, 2013 at 6:49 am
Hi – I tried setting this up. I’m running FCPX on an iMac with 10.8 and trying to use my old G5 to help render. When I start up Qmaster, it detects 2 services running on each machine, but in Qadministrator the iMac only shows the iMac (not the G5), and the G5 shows only the G5, with no cluster to choose from. I don’t really want to upgrade the G5 to 10.8, as I run Cubase on it, which won’t work without Rosetta. Is there an easier workaround?
Cheers
Dan