-
Nice thread on the future of the Mac Pro on the Videoguy’s forum
Steve Mcgarrigle replied 13 years, 1 month ago 19 Members · 90 Replies
-
Gary Bettan
December 31, 2012 at 4:59 pm
From our perspective, when Apple jumped to the Intel Xeons, they caught up to HP, for both performance and value. When the XP8400 was the workstation of choice on the PC side, the 2nd gen Mac Pro with Intel was just as solid a performer and value. I actually gave the Mac Pro an edge then, because the 3rd party hardware folks had much tighter drivers for Mac Pro than Win.
With the z800 series, HP jumped ahead. Apple gained some ground, but the lack of GPU support was a big issue. The Mac Pro was still a good value, but a z800 offered more performance and options.
As we all know with the z820 and new Dell workstations, the Mac Pro is now very far behind.
I’ll go back to the original post. I don’t see Apple releasing a new Mac Pro this year. I’d love to see them push technology forward, but I just don’t see it happening. Big high end hardware does not match up with their current corporate strategies or goals. Of course, that can change at any time, and with Apple NO ONE knows anything, we are all speculating.
Happy New Year to all!
Gary
COW members get 5% OFF with Coupon COW5OFF
https://www.videoguys.com 800 323-2325 | We are the video editing and production experts!
-
Jeremy Garchow
December 31, 2012 at 5:43 pm
[Walter Soyka] “Jeremy, I don’t think the bit about x86 in the Phi marketing is directed at us. I think it’s directed at the massive computational users that already have x86-specific code running on clusters.”
Sure. It’s not an end user type of situation, I agree, unless you as an end user have software that can take advantage of it.
[Walter Soyka] “The hard part about writing for some kind of massively parallel co-processing like a GPU or a Phi isn’t the instruction set of the hardware — it’s designing your application to exploit parallel processing in the first place. Your application has to be cool with its execution being split up into literally hundreds of pieces that can run concurrently, all separated from the main system by the expansion bus. “
And my personal feeling is that Apple’s flagship, performance-demanding applications in the ProApps department could take advantage of this. You’ve talked of the common renderer; I’ve talked about all the things that can be done concurrently with FCPX. Looking at the render files folder and the way it’s structured, it looks kind of like the Compressor cache folder that holds all the temp transcodes. Compressor is geared to be broken up into little parts and separate renders. Apple themselves have written this down in the release notes of 10.0.6:
“Background Rendering uses the GPU on the graphics card, enabling CPU-based processes like transcoding and proxy creation to continue uninterrupted while effects are rendering.”
[Walter Soyka] “You couldn’t just (say) recompile an old app like FCP7 with the Phi switch turned on and get a co-processor accelerated app. You’d have to actively design and develop the application for parallelism (like FCPX has been with OpenCL, or Pr has been, first with CUDA, now with CUDA/OpenCL).
Speaking of which, if Phi runs OpenCL, FCPX and Pr should be able to make use of it as such almost out of the gate. Maybe.”
Totally. But my feeling, from what I have read, is that it will be much ‘easier’ than, say, CUDA. Using Adobe as an example, CUDA seems very compartmentalized. Some things work with CUDA, and some don’t. It is filter by filter, action by action, app by app. It’s not as if everything is ready for CUDA within the Adobe applications. I think Phi is different in this regard since it’s a part of the underlying OS instructions. Or maybe I’m wrong about that.
[Walter Soyka] “First, I think that Phi, like Thunderbolt before it, is being hyped far beyond reality. Both are very cool technologies, but neither is magic. I’m not trying to be a wet blanket, I don’t dislike Apple, and I’m not trying to diminish the influence Thunderbolt has had on raising the capabilities of otherwise consumer-class portables, but it seems to me that a lot of the discussions about these technologies veer away from their real-world, practical applications and into fantasyland very quickly.”
Phi is new, and I don’t know what it will bring, but Thunderbolt is very tangible. It has brought powerful workflows to “lesser” powered computers, namely portables, that simply weren’t possible before. To me, that’s not a fantasy. Thunderbolt is essentially a PCIe extender with pass-through that you can add to other machines. I think you’re saying that people expect computers to somehow be “ganged” through Thunderbolt, and you’re right in that regard: you won’t be able to double your CPU speed by attaching a Mac Mini to an iMac to create a render farm, or whatever.
[Walter Soyka] “Phi is fast, but NVIDIA’s Tesla K20X is up to 30% faster. Intel will very certainly offer a nice toolchain to support Phi development, but NVIDIA’s CUDA toolchain has a huge lead in both maturity and community. The big news about Phi is that the massively parallel co-processing race has expanded from two horses (NVIDIA and AMD/ATI) to three (adding Intel).”
No question about that. I don’t think anyone is arguing about whether CUDA is faster, but if you read about programming for Phi, the point is that it’s much easier and more convenient. CUDA isn’t as easy, or at least that’s where my reading has taken me.
There’s also a big difference in that this would allow Apple to free themselves from the GPU wars. They are obviously already tied to intel pretty heavily.
Everyone has theorized about modularity. Since external GPUs aren’t part of OSX’s DNA without some hacking, Phi sits a bit differently there. It would allow, via PCIe, an already available instruction set. Of course each application would have to be written to take advantage of it, but it would let Apple keep their computers relatively “hands off”: they could offer a limited range of GPUs to keep the ecosystem under control, plus Xeon processors, and then, if you need more power, a Phi in an external enclosure or perhaps a PCIe slot. It does away with multiple GPUs.
I guess, in a way, I see it as more controllable by Apple.
[Walter Soyka] “Second, I’m questioning the suggestion here that whatever the future Mac Pro will be, it will be some technological marvel the likes of which are unlikely to be matched, like when Craig says “The only reason I can see for such a long period between 2010 and “later in 2013″ is they’re once again pushing technology” or when Rick says “Perhaps that would be optical, not copper, at far greater speeds and/or bandwidth which could disrupt the hardware playing field. If you were announcing to the world that you will be bringing “something really great” to the dinner table, then you have set expectations that it will be something really delicious after all. What we’ve speculated about would be tasty appetizers but we may be surprised by a new satisfying entreé we hadn’t realized how much we craved until it is served on our platter.” “
Yeah, I’m not sure what it will be, or if it will be a technological marvel any more than, say, the Retina is a technological marvel. The “Something Special” MacPro will be there, though, and it will be what Apple has to offer in terms of “high-end”. It will be more expensive and have more capabilities than their current offerings. I think it will go a bit beyond a case redesign. It will be smaller and lighter, it will be more locked down, and it will not have every single option that a PC has. Let’s face it, it will still be a Mac, so what else is new in that regard? I think Craig is right that some will hate it and some will love it.
If you can get past the partisanship, the Retina is a well designed machine for the amount of capability that is packed into a small, light, and efficient portable. My feeling is that the new MacPro channel replacement will be of this ilk, but it will need a little something extra to push it over the edge. Thunderbolt will help, but it won’t be the end-all be-all, as it doesn’t extraordinarily help performance in the desktop class beyond letting you universally connect your Thunderbolt pieces and parts. I guess that shouldn’t be undersold, as it’s actually a pretty big deal, but I don’t think it’s enough to persuade the waning public to buy an expensive Apple desktop.
There has to be more meat on the bone there, and Phi with the release dates, Tim Cook’s “later in 2013” comment, the tea leaves, and watching all the processing instructions FCPX can do concurrently, all of that seems to be in alignment. Perhaps it’s just false hope on my part, but I’m not afraid to be wrong. 😉
-
Craig Seeman
December 31, 2012 at 5:53 pm
Tim Cook has basically said something will be in that “pro space,” by my interpretation. I’m not sure how you could interpret it differently.
“Our pro customers are really important to us…don’t worry as we’re working on something really great for later next year.”
and in Forbes
“An Apple spokesman just told me that new models and new designs of the Mac Pro, as well as the iMac desktop, are in the works and will likely be released in 2013. That confirms what New York Times columnist David Pogue said yesterday, citing an unnamed Apple executive, about Apple’s commitment to its desktop computers.”
and David Pogue in the NYTimes (yes, I know he’s a fanboi):
“An executive did assure me, however, that new MacPro designs are under way, probably for release in 2013.”
Sorry, but I just can’t fathom another non-Xeon system being added to the lineup. Apple doesn’t fragment its product line. Apple’s too guarded about the language used by their staff and by the mainstream media. There’s no economic point to another i7.
-
Jeremy Garchow
December 31, 2012 at 5:55 pm
[Walter Soyka] “Apple did do high-performance hardware. Yes, there were caveats, but any system has caveats. The areas where Macs were limited (RAM capacity, PCIe slots) relative to other workstations were immaterial for the sort of work we’re discussing here. It wasn’t until GPU co-processing started catching on for some specific software that Apple’s poor GPU drivers, limited GPU choices, and lack of slots to put them in started to matter. I think that practically speaking, the concern about slots/GPU on Mac had been forward-looking until maybe last year.
Think back just a few years. How much better could you do than a 8-core Intel workstation with 32 GB of RAM, running UNIX, with an NVIDIA Quadro FX4800, AJA Kona, and fibre channel or RAID card installed?”
Yes, they did high performance hardware, but every time we talk about this, we have to talk about the entire system.
When running the same CPU-intensive application (After Effects, for example) on relatively the same hardware, PCs almost always win the speed tests. This is doubly true now, as Apple has let a generation of Xeons lapse, so the comparisons are no longer on the same scale.
I am saying, Apple systems on the whole, are slower, even if they have the same stats as a PC and you can connect the same devices.
Then, of course, there’s running two or three FX4800s, which just isn’t possible with Macs without bus saturation, weird PCIe-extending dongles, and some serious programming. It’s not as easy to get this done on a Mac as it is on Windows.
I know you disagree with me, but you cannot go as fast on a Mac as you can on Windows.
-
Jeremy Garchow
December 31, 2012 at 5:56 pm
[Walter Soyka] “[Michael Phillips] “I often wonder what the tipping point is for a monopoly… ;)”
Well, apparently it’s somewhere after Bruce Willis’s iTunes collection… :)”
Or buying hotels on Boardwalk and Park Place.
-
Brett Sherman
December 31, 2012 at 6:21 pm
[Walter Soyka] “I’m curious what wishful thinking you see on the other side. I talk about theory here because that’s the way the conversations sometimes go, but I work in the real world and practicality matters a lot to me. If I’m being impractical about something, I would love to re-consider.”
I wouldn’t label any particular person as on one side or another. Neither am I the Apple fanboy that I seem to be labeled. I used PCs until about 5 years ago.
However, as a mostly outsider to this forum who rarely posts, I do notice a tenor to the discussion. I think there is a lot of bitterness towards Apple because some people dislike FCP X. I think that translates into some schadenfreude and negative speculation about Apple.
I only posited a differing theory about why Apple would not abandon the pro market and got called out for “wishful thinking”. Which was perhaps not the best phrasing, but was the phrasing that was thrown at me for some reason.
-
Chris Harlan
January 1, 2013 at 12:10 am
[Brett Sherman] “I only posited a differing theory about why Apple would not abandon the pro market and got called out for “wishful thinking”. Which was perhaps not the best phrasing, but was the phrasing that was thrown at me for some reason.”
Sorry. I apologize for being such a b@st@rd to you there, Brett. As for phrasing, I don’t think wishful thinking is such a bad thing. I do it all the time. I don’t think I “called you out” either; I was simply defending Gary from what I thought of as a vaguely defamatory statement. But, I do concede that that may be wishful thinking on my part.
-
Chris Harlan
January 1, 2013 at 12:21 am
[Gary Bettan] “I’ll go back to the original post. I don’t see Apple releasing a new Mac Pro this year. I’d love to see them push technology forward, but I just don’t see it happening. Big high end hardware does not match up with their current corporate strategies or goals. Of course, that can change at any time, and with Apple NO ONE knows anything, we are all speculating.”
I pretty much agree with you. The one bit of hope I do see for a newer, brighter Mac Pro is that the blowback at the last Developers shindig seemed to get management’s attention. I think Cook and company were genuinely surprised by the damage the lack of attention to the workstation workspace was creating among developers. It may have sunk in that having a strong workstation in the line would pay off in terms of morale.
-
Chris Harlan
January 1, 2013 at 12:29 am
[Jeremy Garchow] “There has to be more meat on the bone there, and Phi with the release dates, Tim Cook’s “later in 2013” comment, the tea leaves, and watching all the processing instructions FCPX can do concurrently, all of that seems to be in alignment. Perhaps it’s just false hope on my part, but I’m not afraid to be wrong. 😉”
I just love it, Jeremy! It’s glass-all-full here. If you’re right, I get to put my hands on an awesome machine. And, if you’re not, I get to point out just how wrong you were. Totally win-win from my POV.
-
Walter Soyka
January 2, 2013 at 3:40 pm
[Jeremy Garchow] “But my feeling from what I have read, is that it will be much ‘easier’ than say, CUDA. If using Adobe as an example, CUDA seems very compartmentalized. Some things work with CUDA, and some don’t. It is filter by filter, action by action, app by app. It’s not as if everything is ready for CUDA within the Adobe applications. I think Phi is different in this regard since it’s a part of the underlying OS instructions. Or maybe I’m wrong about that.”
That’s the thing I disagree with you on. Just because Phi runs x86 doesn’t mean it runs your OS. It is a co-processor, not some kind of bolt-on processing power. OS system calls happen on the main system; these libraries are not available on the co-processor, which is essentially a separate computer within a computer.
In other words, in some very important ways, a 16-core Xeon system plus a 50-core Phi is closer to a Xeon plus a GPU, or to a Xeon networked to a little cluster, than it is to a 66-core Xeon.
Phi shares some of the challenges of GPGPU solutions: it’s segregated from the main system, it has limited memory on the coprocessor card, and there is appreciable transfer overhead (getting code and data from the main system onto the co-processor and back over PCIe/whatever).
The big advantages of Phi being x86 (as I understand them) are that if you are already using supported parallel programming methods for computation, your Fortran, C, or C++ code can compile and run on Phi coprocessors, and that if you tune your code for running on Phi, it will realize performance benefits running on Xeon systems without Phi, too.
The hardest part of all this, whether we’re talking about CUDA, OpenCL, or Phi, is making sure your program is highly parallel. If you want to use a co-processor, you have to specifically develop for it.
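To make that point concrete, here’s a toy sketch (in Python, not OpenCL, CUDA, or Phi code, and with made-up function names): the “rendering” job only parallelizes because it is split up front into chunks that don’t depend on each other, which is exactly the design problem being described.

```python
from concurrent.futures import ThreadPoolExecutor

def render_chunk(frames):
    # Stand-in for per-chunk work. Each chunk is independent of the
    # others, so chunks can be handed to any number of workers (or,
    # in principle, shipped off to a co-processor).
    return sum(f * f for f in frames)

def render_parallel(frames, n_chunks=4):
    # The hard part happens here, before any worker runs: splitting
    # the job into pieces that don't share state or ordering.
    chunks = [frames[i::n_chunks] for i in range(n_chunks)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return sum(pool.map(render_chunk, chunks))

print(render_parallel(list(range(100))))  # prints 328350, same as the serial loop
```

The thread pool here just shows the structure; for real CPU-bound work you’d want processes or an actual co-processor, and you’d also pay the transfer overhead Walter mentions above.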
[Jeremy Garchow] “And my personal feeling is that Apple’s flagship performance demanding applications in the ProApps department, could take advantage of this. You’ve talked of the common renderer, I’ve talked about all the things that can be done concurrently with FCPX. Looking at the render files folder and the way it’s structured, it looks kind of like the Compressor cache folder that holds all the temp transcodes. Compressor is geared to be broken up to little parts and separate render”
I just want to reiterate that I agree with you here. Both Apple and Adobe are very well-positioned to take advantage of co-processing: both FCPX and Pr are nicely multi-threaded, and both FCPX and Pr already support some kind of co-processing via OpenCL/CUDA. Autodesk has been using background processing since before it was cool, but most rendering is OpenGL, not GPGPU. I can’t speak to MC.
[Jeremy Garchow] “If you can get past the partisanship, the Retina is a well designed machine for the amount of capability that is packed in to a small, light, and efficient portable.”
Absolutely. I plan on updating mine with the next rev.
[Jeremy Garchow] “There has to be more meat on the bone there, and Phi with the release dates, Tim Cook’s “later in 2013″ comment, the tea leaves, and watching all the processing instructions FCPX can do concurrently, all of that seems to be in alignment. Perhaps it’s just false hope on my part, but I’m not afraid to be wrong. ;)”
I think you could be on to something. I hope your prediction comes true.
I’m still excited about what you can do with Thunderbolt, I’m excited about what we can do with GPGPU, and I’m excited about what we’ll be able to do someday with Phi. I think these technologies are more evolutionary than revolutionary, but they are very cool and have already provided our industry with big benefits.
Walter Soyka
Principal & Designer at Keen Live
Motion Graphics, Widescreen Events, Presentation Design, and Consulting
RenderBreak Blog – What I’m thinking when my workstation’s thinking
Creative Cow Forum Host: Live & Stage Events