Ian Mapleson
Forum Replies Created
Ian Mapleson
November 8, 2015 at 7:14 pm in reply to: Advice needed on PC workstation for 8K files in After Effects

Not with modern models (like the 850 EVO/Pro), and besides, it’s this sort of task for which SSDs are ideally suited (the performance made possible by SSDs is a natural match for how AE processes data). Otherwise it’d be a bit like saying don’t use a knife to cut meat because it’ll blunt the blade. S’what knives are for. 🙂
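To put cache-drive wear in rough numbers, here’s a back-of-envelope sketch in Python; both figures are assumptions for illustration (a TBW rating on the order of what 512GB-class drives of that era carried, and a guessed daily cache churn), not quoted specs for any particular drive:

```python
# Hedged endurance sketch: both numbers below are assumptions, not quoted specs.
tbw_rating = 150          # terabytes written, order-of-magnitude for a 512GB-class SSD
daily_writes_gb = 100     # assumed AE disk-cache churn per working day

days = tbw_rating * 1000 / daily_writes_gb
years = days / 250        # ~250 working days per year
print(f"~{days:.0f} working days, ~{years:.0f} years of weekdays")
```

Even under heavy daily cache churn the assumed rating lasts years, and TBW ratings tend to be conservative; third-party endurance tests of that SSD generation reportedly ran well into the petabytes before failure.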
Ian.
——–
SGI Guru
Ian Mapleson
November 6, 2015 at 11:13 am in reply to: Advice needed on PC workstation for 8K files in After Effects

In that case I’d suggest a 980 minimum, 980 Ti if possible.
Ian.
——–
Ian Mapleson
November 6, 2015 at 10:53 am in reply to: Advice needed on PC workstation for 8K files in After Effects

Richard Li writes:
> Fair enough, I can’t afford Quadro cards, …

That’s what the used market is for. 😀
Yesterday I won a Quadro K5000 for 205 UKP. That’s quite a lot less than the cost of a new 970, or even most used 970s atm. A few hours later, a Quadro 6000 went for about 260 UKP (I didn’t bid, having won the K5000). And for raw CUDA oomph on a very limited budget, it’s hard to beat multiple 580 3GB cards.
> intend to use GTX970/980 cards, …
Note these can be slower than the 780 Ti or Titan/Black, and remember the significant compromises gamer cards impose on reliability, image quality, etc. Choose carefully. See:
https://www.migenius.com/products/nvidia-iray/iray-benchmarks-2014-5
What will your main application(s) be though? It does seem like a gamer card can be a potent choice for Premiere, and the 980 does use a lot less power, has HDMI 2.0, etc. See:
https://www.pugetsystems.com/labs/articles/Product-Qualification-NVIDIA-GTX-980-4GB-600/
Personally if I was going to opt for the newer models, I’d try to get a 980 Ti instead.
NB: don’t get a 970 if you plan on working with material beyond 2K. Although there is no evidence that the split RAM bus speed design of the 970 has any significant impact on gaming, there is some anecdotal evidence that working with 4K on a 970 (such that RAM usage exceeds 3.5GB) can have issues. Thus, get a 980 at a minimum for working with high-res.
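For a sense of why high-res work stresses VRAM, a quick sketch (assuming uncompressed 32-bit-float RGBA frames, ie. 16 bytes per pixel, the worst case when AE works at 32 bpc; `frame_bytes` is just an illustrative helper):

```python
# Uncompressed frame size at 32 bpc float RGBA: 4 channels x 4 bytes per pixel.
def frame_bytes(width, height, channels=4, bytes_per_channel=4):
    return width * height * channels * bytes_per_channel

for name, (w, h) in {"2K DCI": (2048, 1080),
                     "4K UHD": (3840, 2160),
                     "8K UHD": (7680, 4320)}.items():
    print(f"{name}: {frame_bytes(w, h) / 1024**3:.3f} GiB per frame")
```

A single 8K float frame is roughly half a GiB, so a handful of working buffers plus textures can push past the 970’s 3.5GB fast region very quickly.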
> … do you know some not-too-dear game cards with external exhausts you mentioned above?
Most reference cards are designed like this, so just look for the ones that don’t have aftermarket coolers, eg. item 262107587881. Less of an issue though if you’re not going to use more than two GPUs, eg. I’d be content to use two EVGA GTX 980 Ti ACX 2.0 cards with a gap between, just needs sensible air flow management.
Ian.
——–
Ian Mapleson
November 6, 2015 at 3:44 am in reply to: Advice needed on PC workstation for 8K files in After Effects

Richard Li writes:
> wow, the specification is really good, the only concerns in my mind are the physical
> layout may be short space because GPU cards occupy two slots …

Not really an issue if one is careful; I don’t think Quadros are ever wider than 2 slots, and for gamer cards just choose models which have external exhausts (most reference models are like this). If you do use normal gamer cards with all sorts of open-type coolers, where heat is dumped inside the case, then indeed one must take care with cooling. However, it definitely doesn’t matter if one is only using two GPUs because they would be positioned with a 2-slot gap between them.
> … and lack of USB3.1 connectors, …
Is anyone even making anything for 3.1 yet?
> … this board looks like Micro ATX board from the picture. ..
Heavens no, it uses the CEB form factor (12″ x 10.5″, ie. 30.5 x 26.7cm), right at the other end of the scale. 😀
> … Does Gigabyte have something similar? …
No.
> … My last Asus mobo died after one week of use.
I’ve used a lot of this type of ASUS board; they are very good.
I suppose it’s worth pointing out though that if you don’t ever intend to use more than 2 GPUs, the lesser models like the Deluxe or Pro are worth a look, but I’d just get the WS in order to have the max possible future GPU expansion.
Ian.
——–
Ian Mapleson
November 5, 2015 at 12:07 pm in reply to: Advice needed on PC workstation for 8K files in After Effects

Most welcome! Personally I’d recommend the ASUS X99-E WS and either a 5930K or 5960X. See:
https://www.asus.com/uk/Commercial-Servers-Workstations/X99E_WS/
And note that this board can run the main slots at x16/x16/x16/x16, or all 7 slots at x16/x8/x8/x8/x8/x8/x8, made possible by the use of two PLX chips which are clearly visible in this pic where the chipset heatsink has been removed:
https://www.hardwareluxx.de/media/jphoto/artikel-galerien/asus-x99-e-ws/img-10-950×633.jpg
The above pic is from a site that has a very good summary of this board, including excellent explanations of how the various possible slot arrangements can be exploited, and a diagram showing the PLX connections.
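The lane arithmetic behind those slot modes can be sketched roughly as follows (figures assumed from public board reviews: a 40-lane Haswell-E CPU and two PLX PEX 8747 switches, each taking x16 upstream and fanning out 32 lanes downstream):

```python
# Rough lane budget for a 40-lane CPU plus two PLX switches (assumed figures).
cpu_lanes = 40
plx_upstream = 16      # lanes each switch consumes from the CPU
plx_downstream = 32    # lanes each switch fans out to the slots

electrical = 2 * plx_downstream + (cpu_lanes - 2 * plx_upstream)
seven_slot_mode = 16 + 6 * 8    # x16/x8/x8/x8/x8/x8/x8

print(electrical, seven_slot_mode)
```

The switches multiplex rather than add bandwidth: the 64 electrical lanes of the seven-slot mode fit in the budget, but simultaneous transfers still share the CPU’s 40 upstream lanes, which is usually fine in practice because multiple GPUs rarely saturate their links at the same moment.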
Ian.
——–
Ian Mapleson
November 5, 2015 at 1:28 am in reply to: Advice needed on PC workstation for 8K files in After Effects

NB: high-end to me is multi-socket XEON (with really high-end being an SGI UV 3000 stuffed with a thousand CPUs, 64TB RAM and a dozen Quadros), whereas X99 is only single-socket, but I know what you mean…
Z170/SkyLake: far fewer cores, less max RAM, far fewer PCIe lanes and hence less upgrade potential for multiple GPUs, RAID cards, etc., and SkyLake can’t even run a mere two GPUs at full x16/x16, whereas X99 (or X79 for those on a budget) can. For the kind of pro task being discussed here, a 5960X would demolish a 6700K. The cost difference really isn’t relevant given the payback of faster workflow & time-to-insight, and of course if budget permits then it makes a lot of sense to go for a dual-socket XEON setup if that’s a good match to the target applications (some are not a good fit for this, eg. ProE).
I’d be more forgiving of the mid-range chipset and CPUs if Intel had bothered to expand the no. of PCIe lanes to more like 32 (or more precisely, HSIO), but it’s been stuck at a low amount for a long while. Likewise, the top-end hasn’t moved beyond 40 (should be twice that by now). SkyLake has upped it a little compared to the pitiful 16 present in Z97, but it’s still nowhere near as flexible as the full 40 lanes available with X99 and a relevant CPU (the 5820K irritates me, its restricted 28 lanes mean in some cases a 4-core 4820K would be faster).
Really, even considering the middling chipset & CPUs for this sort of task is most unwise. In order to have a degree of tolerable performance, one would have to completely max out a Z170, and even then it’s still not that good. It would mean using a system which from the start has no room to grow. One could say well ok though, surely it’s an option for those on a budget; that’s true, but for anyone in such a position, that’s where I’d leap in and say forget Z170, just get a used X79 setup for less cost and much more performance potential (there are some compromises involved of course, but they are of a nature which means their impact doesn’t matter anyway, eg. no M.2, but if one can afford a good M.2 SSD without quibble then more than likely one has the budget to buy X99 anyway).
Putting together a system should include thought of future expansion. Restricted & limited mid-range chipsets don’t have that. SkyLake does have decent single-core IPC performance, but that’s not where the bottleneck lies for this sort of work, certainly not as multi-core support and GPU compute grows ever stronger in usage.
And note btw how Z170 has changed from earlier mid-range solutions. Back in the days of P55 and then P67/Z68, it was common for mbd vendors to include PLX or NF200 switches on more costly boards to enable support for x16/x16 SLI/CF, or even beyond that, eg. x8/x16/x16 on the ASUS M4E, or a most impressive x8/x8/x8/x8 on the ASUS P7P55 WS Supercomputer (I hold a number of 3DMark P55 records using one of these fitted with an i7 870 and three GTX 980s). I have Asrock and EVGA Z68 samples which work the same way. The same was true of Z77 and Z87, eg. x8/x16/x8/x8 on the ASUS M6E. But after Z87 things began to change, fewer vendors made use of PLX or similar chips, presumably to aim at lower price points, but it means GPUs cannot operate at x16/x16, and with two cards installed there’s little room for exploiting PCIe RAIDs or suchlike. Z170 is the same, there are very few with PLX chips (any at all, in fact?), so PCIe potential is much more limited compared to earlier mbds. The change in HSIO provision does mean mbd vendors can dedicate more lanes to the GPU setup if they want, but it doesn’t really help that much, there’s still not enough for x16/x16 + storage options.
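As a rough sketch of why the chipset’s HSIO lanes aren’t a substitute for CPU lanes (assuming PCIe 3.0 throughput of 8 GT/s with 128b/130b encoding, and DMI 3.0 being electrically equivalent to a PCIe 3.0 x4 link):

```python
# Approximate one-direction bandwidth, PCIe 3.0: 8 GT/s with 128b/130b
# encoding gives ~0.985 GB/s per lane.
lane_gbps = 8 * 128 / 130 / 8        # GB/s per lane

x16_slot = 16 * lane_gbps            # one full-width GPU slot
dmi_link = 4 * lane_gbps             # DMI 3.0 ~ a PCIe 3.0 x4 link

print(f"x16 slot: {x16_slot:.2f} GB/s, DMI uplink: {dmi_link:.2f} GB/s")
```

Everything hanging off a Z170 chipset (M.2, SATA, USB, extra slots) shares that single ~3.9 GB/s uplink, whereas X99’s 40 CPU lanes feed the slots directly.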
Given the same more limited budget, a used X79 setup that does not have these restrictions, but still allows one to exploit a decent CPU (from 3930K to 4960X), is a way better idea (it’s why I like the ASUS P9X79 WS so much). If budget permits though then X99 is far more sensible, and then beyond that a multi-socket XEON, the ceiling one must break through to afford such a thing being the scary cost of the CPUs. In this industry though, the upfront cost of the hardware often pales in comparison to sw licenses, and the return on investment should quickly make up for the higher cost of a better system anyway. More complex projects can be undertaken, normal workloads get done faster (happier clients), less power is used to achieve the same result which saves money, etc.
Think of using Z170 for this type of work as being a bit like running a CPU at way too high an overclock with too much voltage. It may work ok for a while, but it’s much more likely to fail, less efficient, etc. It’s the tech equivalent of trying to navigate an Autobahn in a Reliant Robin, ie. one must max out the poor beast to breaking point merely to survive. 😀
Ian.
——–
Ian Mapleson
November 4, 2015 at 5:01 pm in reply to: Advice needed on PC workstation for 8K files in After Effects

David Lawrence writes:
> Fantastic post, Ian. Thank you!

Most welcome!

Worth mentioning btw that if one is moving from using RT3D to C4D, then plugins like iRay and Octane do support the newer 900 series NV cards, and of course the Titan X.

> At this point since we anticipate using both After Effects and Premiere Pro equally,
> we’re having discussions about building a monster – i.e. maxing out everything as much
> as possible: number of CPUs, clock speed, core count, RAM. We’ll also spec multiple
> GPUs and want to optimize for both Premiere and AE per the test links in Walter’s post above:

I guess in that case it makes sense to have the optimal primary GPU for Premiere, while selecting extra GPUs that benefit AE, or some tradeoff in between.

Since it’s probably wise to match GPU architectures (and thus driver release streams), one could do something like have a K5200 or K6000 with several Titans (or Titan Black, or 780 Ti; Titans would have better FP64), ie. all Kepler; or an M5000/M6000 with several Titan Xs (or 980 Tis), ie. all Maxwell.

> https://forums.creativecow.net/readpost/2/1066563
Thanks for the link, that was interesting, affirmed some info I’ve found elsewhere.
> inexpensive gamer GPUs will outperform bigger, more expensive boards. …
That can be true, but there are important caveats to this.
Professional cards are designed to last longer, they generate less heat, the drivers are more optimised, the product support is better, they usually have more RAM (in some cases ECC), they support higher colour depths, etc.

Gamer cards often have a compute speed edge (eg. for those on a very low budget, multiple 580s is a good compromise), which makes their low cost look attractive, but they often have less RAM, and most models will dump a lot of heat inside a case, especially since they use higher clock rates than the pro equivalents based on the same GPU design. This is particularly true of highly factory-overclocked models which often provide the best price/performance because they sell better (it’s why I bought a 1266MHz EVGA GTX 980, it was cheaper than all slower models).

To this end, if one is going to employ gamer cards at all, it’s very important to either plan carefully to handle the hot air inside the case, or look for reference models of gamer cards which normally use coolers that expel hot air only out the back of the case (or search for specific models that are designed to work that way, eg. EVGA had its EE line of various cards, which stands for External Exhaust, though these days they don’t tend to name them that way). A typical decent Titan X model would be the EVGA 12G-P4-2992-KR.

I’ve also found that many of what are supposed to be dual-slot gamer cards are in fact somewhat
wider than 2 slots, which can make it a real pain to fit several of them smoothly inside a case. My
test system has four top-end MSI GTX 580 3GB Lightning Xtremes @ 900MHz (total CUDA power that
beats two Titan Blacks), but I had to use small bits of wadding to keep the cards apart (I’d say
the cards are about 2mm wider than they should be to honestly be called 2-slot), and of course
several side fans were essential (I use NDS PWM for their excellent performance and low noise,
plus they’re half the cost of Noctuas, and look nicer, for those who care). Sometimes though,
optimal airflow is not what one might expect, and as I say one must be careful when using a water
cooler for the CPU to ensure that the mbd chipset is kept cool as well. My system has an H110 for
the CPU, plus some smaller fans cooling the mbd chipset and the 64GB RAM. The side fans though,
after some experimenting, worked better as intakes, with the large front 230mm fan acting as an
exhaust (the rear exhaust handles the rest; the four 140mm fans at the top for the H110 are
intakes). See:

https://www.sgidepot.co.uk/misc/3930K_quad580_13.jpg
I had assumed the side fans should be exhausts, but in practice the cards behaved better with fresh
air blowing down on them from the side, which is then pulled away elsewhere, rather than just a front
fan providing an intake of air & side fans sucking it away. And with the side fans as intakes, the
top fans for the H110 worked better as intakes as well. This is just for the case I used though (an
Aerocool XPredator Evil Green), it will vary between cases.

These issues would be a lot less complicated if one used cards that dump all their heat externally
(I’ve built systems with multiple standard 3GB 580s which were less cramped), though Quadro cards
don’t get so hot anyway (lower core clocks) while often providing better performance and features
for non-compute tasks due to driver optimisation differences.

The biggest performance difference though is as you say for GPU acceleration. Gamer cards can be
a great way of obtaining low cost GPU power for supported rendering, but do take note of things
like the warranty duration, exact card width, cooling, customer feedback on model reliability, etc.

> We’re not doing ray-tracing or any CGI related rendering. Our GPU needs are mainly driven by
> what helps accelerate Premiere playback and output rendering. We also need to drive a couple 4K
> screens and a 1080p 3D display. …

I guess whatever card is best for Premiere then; I’d go for an M5000 or M6000 if I could. If you’re using AE as well though, then additional top-end gamer cards would be useful, and from those links I gather that Premiere can make use of multiple GPUs as well in some cases.

> There’s interest and discussion about possibly building a system in-house, but I also want to
> have custom build vendors as an option if we need to go outside. I’ve looked at configuring a
> HP Z840 and am investigating other vendors. Chris has mentioned ADK in Kentucky and I’ve read
> great things about Puget Systems.

There are quite a few, though I’ve not looked into them that much. I guess the biggest attraction
of a prebuilt system is reliability, support, etc. It’ll be a tried & trusted setup. Building your
own can save a heck of a lot of money (HP’s markups seem pretty high given the raw parts costs
involved), but a self-build is inevitably an experimental process, and will likely require a fair
degree of reading up on all sorts of things, time investment, ironing out unexpected build issues,
etc. Still, some may regard it as a worthwhile exercise, if the end result is a more powerful
customised system. If I was building something like that, there’s certainly plenty of scope for
exploiting PCIe SSDs, M.2 NVMe for the C-drive, all sorts of things.

> I get the impression you build your own machines. …
Yes; currently building four for various people, though they’re not XEON setups (three are 6-core
4.8GHz X79, one is 4-core 5GHz Z68). Possibly building a 5th shortly, X99 with 5960X and 980 Ti.

This is a somewhat new venture for me though (just the last 3 years; the main stuff I do is SGIs),
and I’ve not tackled anything seriously high-end yet (probably next year I’ll end up doing my first
multi-XEON build).

For the sort of system you’re talking about, that’s where the extra cost of getting a pro company
like Puget to make it may be worthwhile, though such places might not offer the flexibility of doing
things like exploiting gamer cards for extra GPUs (I suppose one could order a base system and
then modify it, but that probably has warranty implications). Who knows though, you could ask a
place if they can do something like a dual-2687W-v3 XEON with an M6000, but stuff in three extra
Titan Xs instead of their default listed GPU options, fit Samsung NVMe M.2 SSDs, etc.. I’d be
surprised if they said no, given the sale value involved.

> Is this correct or do you work with a custom build vendor? …
Just me atm. One thing I specialise in is making best use of used parts to reduce costs, as that’s
where the main margin lies (though I do tend to obtain RAM, SSDs, DVDRW, media card readers and all
fans new), eg. here’s a system I built for someone in early 2013, but it’s not a suitable m.o. for
everyone, and almost certainly not for the level of system you’re considering, ie. better to get
everything new, in order to have maximum warranty status, though I do obtain new items as well via normal auction (saved 35 UKP on an 850 Pro 512GB this week). Still, the used savings have certainly proved useful to people, eg. I built a system with a K5000 for someone, saved the guy 700 UKP vs. buying it new, but it’s a risk for both parties given the lack of original warranty.

However, sometimes one can be surprised. I obtained a used OCZ Vector 512GB a few years ago, put
it in an AE system I built for someone for the AE cache. A year later (about 3 months ago) it failed,
but much to my surprise, OCZ replaced it without quibble, sending me a new Vector 180 480GB, even
though I did say in my RMA request that the Vector 512GB was bought used from eBay.

In general, the bigger the build & the higher the budget, the less I would suggest anyone make
use of used hardware. You’re probably way over the threshold. 😀 And if you can afford an M6000 with one or more Titan Xs, then Currahee!! 8)

> … Are there any builders you’d especially recommend?
Alas I don’t have any experience of such builders to make any recommendations, sorry.
I was impressed that Puget offer a quad-socket system though. I think Dell used to have something
similar, but not now (or maybe that was just a server, can’t recall).

Ian.
——–
Ian Mapleson
November 3, 2015 at 8:23 pm in reply to: Advice needed on PC workstation for 8K files in After Effects

A few additional points worth noting:
Boards such as the ASUS Z10PE-D8 WS and Z10PE-D16 WS allow for high capacity RAM, with
dual XEONs, and varying degrees of multiple PCIe slot functionality. What you could do
is begin with a single 10-core XEON (eg. E5-2687W V3), add a 2nd later when the budget
permits, which also would expand the max RAM.

Someone mentioned the raytrace GPU renderer in AE. Note that AFAIK this does not
support Maxwell CUDA V2 (or has that changed with the latest CC 2015?), ie. a GTX 900
series card will work fine for OGL and viewport functions, but it won’t work for RT3D.

Also, if budget is an issue, older cards are still very potent for AE as regards GPU
acceleration via CUDA, including the GTX 580 3GB (two of these are faster than a Titan),
GTX 780 Ti 3GB (same speed as the Titan for FP32, but less RAM), Titan and Titan Black.
One can also mix pro cards for viewport precision, etc., with CUDA cards, eg. a Quadro
K5000 with a couple of 780 Tis or Titans is a good match. However, if one isn’t going
to use the RT3D function then the extra CUDA power is of course less relevant. If using
a gamer card as the primary display though, then it makes more sense to use a 780 Ti or
Titan, properly supported as it will be for RT3D, etc. (not so the 970, which isn’t really
that fast compared to the 700s and Titans for CUDA, eg. even a 980 is 10% slower than a 780 Ti
for RT3D). Perhaps I’m out of date though, it’d be great if Maxwell CUDA V2 is supported now,
in which case a Titan X or 980 Ti would be good. In all cases, models which employ external
exhausts are best, as they will prevent waste heat from limiting the cooling that’s viable
for the CPU (many models of gfx card dump some or all of their hot air inside a case, which
can badly affect the CPU cooling).

With respect to CPUs, it does need good cooling to get the most out of a 5960X, but if
one can employ an H110 or somesuch then 4.5GHz+ is possible, though perhaps somewhat less
if a system is fitted with 64GB or 128GB RAM (the ASUS X99-E WS would be better for this
than the lesser Deluxe model, for various reasons). Varies greatly between CPUs, but I’d
expect to get at least 4.3 from a 5960X with max RAM using good cooling. However, there is
of course a tradeoff between how high one pushes the chip and its long term lifespan. XEON
hardware does have one advantage here, ie. longer general lifespan. Oh, if using water
coolers, remember to include active cooling over the mbd chipset components.

For those on a budget, there’s a lot of utility in used hardware, eg. a 3930K will run at
4.7GHz+, and a board like the ASUS X79(-E) WS supports up to 4 GPUs, though the 64GB max RAM
is less than X99, and the native Intel SATA3 support is less. Still a decent setup though.
However, with $5K to $10K available, an X99 or XEON equivalent should be very viable.

Re storage, Samsung 850 Pro for the budget-minded, Samsung SM951 512GB M.2 NVMe if budget
permits. Also a good idea if possible to have separate devices for source, destination and
cache, and I use a separate SSD for the Windows paging file too (850 Pro would be fine for
that, though an EVO would also suffice). X99 or XEON equivalent would be good for using
multiple devices like this, ie. lots of Intel SATA3 ports or M.2 (X99 is probably stronger
for M.2 support though). For general storage, definitely Enterprise SATA or SAS (don’t rely
on consumer SATA), and consider proper LTO for longer term backup (good used deals on LTO3/4,
though of course LTO5/6 would be ideal, just very costly; I bagged an LTO1 recently for just
10 UKP, just to get used to the tech).

As for cases, if you don’t need portability then the distinctly large but incredibly impressive
Nanoxia Deep Silence 6 would be ideal. Loads of space inside, excellent fans, enough room for
multiple water coolers if need be, very reconfigurable. If portability is needed though, the
smaller Corsair C70 is good. Enough space for an H110, four GPUs and a beefy PSU.

Ian.
PS. Gigabyte has an interesting dual-XEON board as well (the MW70-3S0); it has the usual max RAM of 1TB and various PCIe slots, but also has onboard 12Gbit SAS. No M.2 though, whereas ASUS’ Z10PE boards do have M.2.

——–
Ian Mapleson
July 15, 2015 at 7:39 am in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

Yep, when I last tested it I saw the usual not-supported error, with the CUDA section blanked out in the settings panel (though of course OGL stuff is ok). Pity, I’d been looking forward to testing multiple 980s. I thought Adobe would add MW CUDA V2 support when the 980 Ti and Titan X came out, but nothing so far.
Ian.
——–
Ian Mapleson
July 15, 2015 at 1:40 am in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

Your post just reminded me, anyone know if AE yet supports Maxwell CUDA V2? (980, etc.)
I mean proper support, not a lib hack or something.

Ian.
——–