Ian Mapleson
Forum Replies Created
-
Ian Mapleson
January 19, 2015 at 11:25 pm in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

Simon Gomes writes:
> … Cuda it’s for a little part of the AE use. …

Check the release notes, it also accelerates various aspects of the main GUI, including RAM Preview.
> … Cuda are no effect on 90% things we made on AE. …
That’s why I mentioned not ignoring main RAM size, CPU power, SSDs, etc.
> … In this regard it’s not preferable to buy the ASUS 780 6G?
Only if you’re going to work with 4K. I doubt you’d benefit from the larger RAM if you’re
mainly dealing with HD.

> … Yourself said that the 780 6g it’s the only card on 700 series that you consider.

You’ve misread; what I meant was, of the cards below the 780 Ti, the 780 6GB is the
one I’d prefer if one wants a single card and doesn’t like the noise/heat/power issues of
580s. But really, if one has the budget to afford the 780 6GB at all, then IMO one is
better off with the 780 Ti, because anyone dealing with 4K material ought to have at
least two Titans/Blacks to cope with that kind of workload;
remember 4K will likely incur 4X the resource usage of HD/2K.

It’s a bit like those wishing to play games at 4K. Newer GPUs have enough RAM to run the
latest games at 4K with high detail (3GB minimum usually), but just one such card is normally
not good enough to get a decent frame rate. Thus, for 4K gaming with a good frame rate, one
typically needs two newer GPUs to get both the RAM capacity and the performance.

Just my opinion of course, from seeing what people do with the RayTrace3D function.
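To put rough numbers on that 4X figure, here’s a back-of-the-envelope sketch (my own illustration, assuming uncompressed 32-bit-float RGBA frames; actual AE memory usage will vary with project bit depth and caching):

```python
# Back-of-the-envelope: uncompressed frame size at HD vs. 4K/UHD, assuming
# RGBA with 32-bit float per channel (16 bytes per pixel). Illustrative only.

BYTES_PER_PIXEL = 4 * 4  # 4 channels (RGBA) x 4 bytes (32-bit float)

def frame_mb(width, height):
    """Size of one uncompressed frame buffer, in MB."""
    return width * height * BYTES_PER_PIXEL / (1024 ** 2)

hd  = frame_mb(1920, 1080)
uhd = frame_mb(3840, 2160)

print(f"HD frame:  {hd:.1f} MB")   # ~31.6 MB
print(f"UHD frame: {uhd:.1f} MB")  # ~126.6 MB
print(f"Ratio: {uhd / hd:.0f}x")   # 4x -- the same factor hits previews, caches and VRAM
```

Same idea whatever the pixel format: 4K/UHD has exactly four times the pixels of HD, so everything per-frame scales by four.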
> I really hesitate between the asus matrix 780 ti and de asus 780 6g. I don’t know what
> is the good choice ^^

To clarify: since you can consider the 780 Ti at all, then get that one. I would only err
toward the 780 6GB if I was working with 4K, but if so then it would have to be two such
cards, or two Titans, not just one GPU (assuming the budget). IMO the extra speed of the
780 Ti outweighs the lower RAM for the vast majority of users, ie. anyone working with 2K
or below. Or for 4K, get one 780 6GB or Titan and then get a 2nd GPU later asap.

NB: Your comment about AE doing other things as well is why I’ve often built combo systems
where the primary display card is a Quadro (for better OGL) and the other cards are 580s
or whatever. I’ve done one with a K5000 and two 1.5GB 580s, one with a Quadro 4000 and three
3GB 580s, and the most recent system had a Quadro 4000 and one 3GB 580.

Installing the drivers in the correct manner is important, and one should use driver sets
from the same release timeframe/code, but it does work ok, ie. Quadro shown as main output
and OGL accelerator in AE preferences, 580s or whatever shown as the CUDA munchers. Nice
part of this is with a K5000, it means the primary display isn’t limited for OGL ops by
the 580’s lower 3GB (K5000 has 4GB), though since the K5000 is quite good anyway for CUDA,
it can still be included in the CUDA pool (for systems with Quadro 4000s, I don’t include
the Q4K in the CUDA pool).

So, get the 780 Ti. Unless you work with 4K and some heavy data, it’ll serve you well.
All I’d say is stuff in a 2nd card as soon as is practical. 8) That’d be sweet. Most
likely the cost of used 780 Tis will drop a lot when the 980 Ti comes out – at least,
that’s what I’m hoping anyway; I’ve built a 4th system with a K5000 to sell (3970X, etc.),
but not yet added extra CUDA GPUs. This time I want to fill it with one or more used 780 Tis.

Ian.
——–
SGI Guru -
Ian Mapleson
January 19, 2015 at 9:40 pm in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

Teddy writes:
> Ian you’re a mad man. …

I try… 😀
> Thanks for taking over this thread, I can’t believe it’s still going strong!
No prob! I know people are finding it useful. There’s a lot of solo pro AE users
out there who need info.

> And don’t forget the original titan series are getting cheaper, I have two in
> SLI that outpace my 2x gtx 580 setup, with 6 gb vram, and have pretty good compute scores.

That’s a good point, I’ve not really followed what’s been happening with Titan
pricing on the used market. For AE though, AFAIK its only advantage is the RAM
capacity. Hence why I’m hoping that when Adobe does add V2 CUDA support, we’ll also see
the 980 Ti given 8GB.

I’m curious, have you ever tried just using AE in a normal manner with the Titans’
64bit fp mode set on vs. off, to see if you can discern any difference? Even if it has
an effect on your benchmark, it’s the general usage which matters more.

> So I too, am looking for a new setup and wondering about two options
> #1 is a single 780 ti 6 gb
Are you sure there is such a thing? I was under the impression NVIDIA never
released a 6GB 780 Ti.

> #2 is a gtx 780 ti 4 gb + gtx 970 for gaming and 3D work

The 780 Ti would be 3GB. Not sure it’s wise to mix cards with different
CUDA architectures. One thing I do know though: a 580 plus a 980 would
easily be slower than two Titans.

Roughly speaking, a 980 is about 10% slower than two 580s, and from data
elsewhere it seems like a normal Titan is about the same as two good 580s
for CUDA. So, think of your Titans as being like four good 580s, then
compare that to 580 + 980, which is a tad less than three 580s.

Really, the only seriously useful step up from two Titans would be either
more Titans, extra 780 Tis, or just wait for V2 CUDA support and replace
them with 980 Tis.

> #3 is a cheaper gtx 580 3gb + gtx 980 for better future proofing and more vram
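Before getting to #3: collecting the rough equivalences above into one sketch, in units of “one good GTX 580” of AE/CUDA throughput (my own shorthand, ballpark figures from the Arion-derived comparisons in this thread; treat as approximate):

```python
# Rough AE/CUDA throughput in units of "one good GTX 580" (my own shorthand,
# ballpark figures from the Arion comparisons discussed in this thread).

RELATIVE = {
    "GTX 580": 1.0,
    "Titan":   2.0,  # a normal Titan ~= two good 580s
    "GTX 980": 1.8,  # a 980 is ~10% slower than two 580s
}

two_titans = 2 * RELATIVE["Titan"]                      # "like four good 580s"
mixed      = RELATIVE["GTX 580"] + RELATIVE["GTX 980"]  # the #3 combo

print(f"Two Titans ~= {two_titans:.1f} x 580")  # 4.0
print(f"580 + 980  ~= {mixed:.1f} x 580")       # 2.8, "a tad less than three"
```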
If V2 CUDA was supported right now, then yes option 3 would make the most sense
I guess, but until proper support is added… dunno, numerous options in the
meantime.

> … Would there be any driver issues / vram limits by going with this setup? …
There could well be driver issues, since the CUDA version is different. Even if
the system as a whole functioned ok, AE would act weird I expect, and right now
of course the RayTrace3D & other CUDA functions wouldn’t work.

> I know AE can recognize the discrete cards separately, in fact it may ignore
> the 980 altogether

If you had the 980 as the primary display-output card, then the OGL functions in
AE will likely work, but only the 580 will show up in the CUDA pool.

Ian.
——–
SGI Guru -
Ian Mapleson
January 19, 2015 at 4:42 pm in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

I wish people would read my earlier posts… 😉
Ok, I’ll summarise this once again.
The GTX 580 uses the older shader architecture, employing a 2X higher shader
clock and a lot more bandwidth per core.

The GTX 580 is faster for CUDA in AE than ALL 600 series cards.
The GTX 580 is faster for CUDA in AE than all 700 series cards except the 780 Ti.
The 900 series use Maxwell CUDA Version 2, which the RayTrace3D does not support
at the moment, hence why they’re not yet suitable for AE. The OGL functions will probably
work ok, but not anything CUDA-related.

The only advantage of the Titan is RAM capacity, which isn’t necessary if you’re working
with HD or less. Could be useful for 4K work though. Two 580s are faster than a Titan, and
two good 580s are faster than a Titan Black (the latter being the model that has the same number
of shaders as the 780 Ti). I haven’t found any evidence so far that the Titan’s optional
higher 64bit fp mode helps for AE at all, so really any model of Titan for AE is a waste
unless one definitely benefits from the higher RAM. Hopefully in time Adobe can add Maxwell
CUDA V2 support and NVIDIA will release an 8GB card, that would be ideal for AE. Grud knows
when that’ll happen though, but if I had to guess I’d say NVIDIA is probably waiting until
AMD releases its 3K series.

Don’t bother with the 600 series cards at all. Total waste for AE/CUDA. I don’t know why
people keep asking about them, since Teddy’s very first post in this thread showed
how slow a 680 is vs. a 580 for AE/CUDA.

A used 760 may be cheaper than a used 580 (depends), but its performance is nowhere near
as good for AE/CUDA. Unless there were specific power/heat/noise issues involved, I’d
get one or more 580s every time if the choice was between that and 760(s).

If you’re not going to bother with 580s, then try and get a 780 Ti. The 780 is quite a
lot slower than the 780 Ti, not worth the lower cost IMO.

Two 580s will be faster than a 780 no problem, but will use more power, generate
more heat, make more noise. Usually, two 580s will beat a 780 Ti as well (depends
on the models in each case). However, multiple 580s will cost much less if one just
buys 1.5GB editions (for most users, this will be sufficient, but I always recommend
hunting for 3GB 580s if possible).

Any reference 3GB 580 will be a 2-slot card, including the standard 783MHz Palit.
Many top-end overclocked 580s will use more than 2 slots. Even the MSI Lightning
Xtreme, which is sold as a 2-slot card, is actually fractionally wider than that;
I had to use some stubs of paper to hold the four cards apart slightly to stop the
fan blades from hitting the back of the next card in line. By contrast, normal
780 or 780 Ti cards will not suffer from this, and likewise normal 580s will be ok.
Four Palit 3GB cards fit into a 4-way mbd just fine (I normally use the ASUS P9X79 WS).

The tradeoff/judgement about 580s is whether the increased power consumption over their
usage lifetime would offset the lower cost of buying them compared to a single 780 Ti.
For others, noise/heat may be a factor, ie. CPUs with air coolers may be affected by
internally dumped heat from multiple GPUs. Newer cards generate less heat, though some
models (including 580s) have external-only exhausts which are ideal for systems with
air cooled CPUs. Water-cooled systems are less affected, though one must note carefully
how heat is going to flow. I’ve switched entirely to use water coolers, namely the H80,
H100, H100i and H110 (my 4-way 580 system has an H110).

After the 500 series, NVIDIA halved the shader clock speed to make power delivery easier.
This meant the number of shader cores had to be at least twice as high as a 500 series
card in order to provide the same CUDA performance. In reality, it’s more like 2.5X to 3X.
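To illustrate with approximate public specs (my own numbers, a rough sketch): even the more generous “cores x clock” metric still makes Kepler look nearly twice as fast as Fermi on paper, which is how far paper specs can mislead for AE/CUDA:

```python
# Rough sketch using approximate public specs (illustrative only): naive
# "cores x clock" throughput for a Fermi vs. a Kepler card. Fermi ran its
# shaders at a doubled "hot clock"; Kepler cores run at the base clock.

cards = {
    # name: (CUDA cores, shader clock in MHz)
    "GTX 580 (Fermi)":  (512, 1544),
    "GTX 680 (Kepler)": (1536, 1006),
}

for name, (cores, mhz) in cards.items():
    print(f"{name}: {cores} cores x {mhz} MHz = {cores * mhz / 1e6:.2f} units")

# Naive maths says the 680 should be almost 2X the 580, yet in AE/RayTrace3D
# the 580 is the faster card -- per-core work done differs so much across
# generations that neither core count nor cores-x-clock is a usable guide.
```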
Remember this discussion only applies to CUDA in apps like AE. For gaming, newer cards
will be quicker than 580s, as my numerous results show nicely.

Never judge based on the number of cores. They cannot be compared across card generations.
You will make bad decisions if you choose a card based on the number of cores.

Check review links such as those I have linked earlier in this thread. Even for other types
of CUDA test, in most cases a 580 beats all other 600/700 cards except the 780 Ti (and even
then it’s close).

So, if you can afford it, and want maximum efficiency, minimum noise, etc., then try and
get a 780 Ti. If budget is a big problem but you still want maximum performance, then look
for two 580 3GB cards (or 1.5GB if you can’t find any 3GB units). I bought over a dozen 3GB
Palit cards in the last two years. Two good 580s like the MSI LX will be quicker, but still
not as noise efficient, and certainly use more power. Alas, the MSI LXs do dump heat inside
a case, so that’s a factor to consider with any choice.

In between multiple 580s at the cheap end and the luxury world of one or more 780 Tis at the
other, there are all the in-between mishmash options involving the 760, 770 and 780. Out of
these, the only one I would consider if I was doing 4K work would be the 6GB 780, though it’s
probably hard to find. However, if the budget can stretch to multiple 780s, but not multiple
780 Tis, then sure, go for the 780s.

NOTE: in all of these, do not neglect the rest of the system. The maximum possible RAM is
essential, 32GB preferably, certainly not less than 16GB, the more the better. An SSD for
the C drive, another high IOPS SSD for the AE and media cache, and I also use a lesser SSD
for the Windows paging file to ease the paging load on the C-drive SSD (this is because PCs
with a lot of RAM need to have a large paging file, which wastes space, etc.). Atm a good
choice for this would be any decent used model, or if buying new then the SanDisk X300 128GB
is good. I keep hunting for used Vertex4, Vector and Samsung 830/840/etc. SSDs for building
systems; they all work well.

Hope this helps!!
Right, I’m off to collect the P9X79 Deluxe board I won for a snip. 😀
Ian.
——–
SGI Guru -
Ian Mapleson
January 19, 2015 at 11:33 am in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

Marc Gutt writes:
> MSI Lightning XE are rare to find in Germany. The last one was sold in the UK and at
> the moment there is only one auction in the US. But maybe I found two in Austria 😀

True, they don’t show up that often, and they do tend to sell for more than normal
3GB cards, often quite a bit more than 1.5GB cards. Note that with the release of the
900 series, it’s entirely possible for two good 3GB 580s to cost almost the
same or even more than a 780 or 780 Ti, so don’t pay too much when looking for these
cards, beyond a certain point it’s not worthwhile, though if I was going to get any
700 series card at all it would definitely be either a 6GB 780 or any 780 Ti.

> The same problem with Palit cards. In Germany you will find much more Gainward cards,
> but they should be the same (Palit is the parent company of Gainward). Most people
> like the low loudness of the Gainward.

Again though, watch out for the card width, eg. I think the Phantom uses 2.5 slots.
> The very last option would be the Asus DirectCUII, but its huge:
Exactly why I’d never buy it, too big for most mbds.
Ian.
PS. Some good X79 bargains to be had these days. Just bought two Rampage IV Extreme mbds
for 103 UKP each, a P9X79 Deluxe for 75 UKP (all ASUS, good for oc’ing) and a 3930K C2
for 225 UKP. Would love to use X99, but it’s way too expensive atm, especially DDR4.

——–
SGI Guru -
Ian Mapleson
January 18, 2015 at 6:30 pm in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

Simon, yes it’ll work fine, doesn’t matter what brand one uses.
Only thing I’d mention though is to bear in mind your future plans.
If you ever want to increase the GPU power by adding another card,
check whether the model you’re considering would block any relevant
PCIe slots. Some ASUS cards are 2.5 or 3 slot cards (ditto numerous
versions from other vendors). But if you’re content with a single
card, or your mbd has tri-slot spacing, then no problem.

Performance-wise, there’s not much difference between models for AE.
It’s more important to have an SSD for the AE cache, good CPU, RAM
at as high a clock as possible, etc.

Ian.
——–
SGI Guru -
Ian Mapleson
January 17, 2015 at 1:07 pm in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

Marc writes:
> Only for your interest: I will buy two used 580 as they are much cheapier than one 780.

That’s why my system has four good 3GB 580s. 😀 It’s faster than two Titan Blacks.
The downside is heat, noise & power, though the 580s I chose are on the lower side of
noise output.

By ‘good 580’ I mean models such as the MSI Lightning Xtreme 3GB (832MHz default core,
overclocks to between 900 and 1000 no problem, I run mine at 900 or 950 depending on
the ambient temp). Standard 580s include the Palit 3GB with a 783MHz core, these are
louder, with a slower base clock, and they can’t oc beyond about 900MHz (for the Palit,
1075mV vcore worked with mine).

Note that I have two 980s now; can’t test with AE yet of course, but I’ll run them
through Arion, Blender, etc. soonish. Currently doing gaming tests with an older
platform, Futuremark, etc., eg. https://www.3dmark.com/fs/3821168

Ian.
——–
SGI Guru -
Ian Mapleson
January 17, 2015 at 9:36 am in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

Simon, I’ve talked about this a lot in my previous posts, please check them out.
– 900 series cards are not yet officially supported, and adding their names to
the ray trace text file will not help; they use Maxwell CUDA V2. The OGL stuff
will likely work ok, the app will launch ok, but RayTrace3D can’t use them.

– If you care about cost more than anything, then search for two used 3GB GTX 580s.
Combined, these are quicker than a Titan, at the expense of power, heat & noise.

– If you want maximum speed, the fastest card right now is the 780 Ti.
– Between these two are other options like the 770, the 780 (including the 6GB 780)
and of course the Titan.

Or you can combine pro & consumer cards, eg. a K5000 for the primary display output,
two 580s (or whatever) for CUDA.

See previous posts here for performance examples of all these cards.
NOTE: based on Arion data, a 980 is about 10% slower than two 580s, so even if/when
Adobe fully supports Maxwell CUDA V2, the fastest single card for AE/RayTrace3D will
still be the 780 Ti. We won’t see this change until the 980 Ti and/or Titan II come out.

Your choice will impact the rest of the system if you have an overclocked CPU, eg.
air coolers can be affected by any heat dumped inside the case, so external exhaust
GPUs are better if you can find them, though this matters less for water-cooled CPUs.

Ian.
——–
SGI Guru -
Ian Mapleson
January 12, 2015 at 2:54 pm in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

He’s talking about Photoshop. Can’t comment on that, I don’t use it.
Besides, being able to run up the app on a card is not the same as
the app having CUDA support of the right type that allows certain
app functions to operate as expected. OGL functions probably work
ok with a 900 card, but atm RayTrace3D in AE does not. Again, we
need Maxwell CUDA V2 support.

Ian.
——–
SGI Guru -
Ian Mapleson
January 11, 2015 at 10:52 am in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

No proper support yet; please read previous recent posts.
Ian.
——–
SGI Guru -
Ian Mapleson
January 10, 2015 at 4:25 pm in reply to: AE CS6 11.0.1 CUDA BENCHMARK PROJECT – test your graphics cards!

I doubt you’d see a useful performance boost over multiple TBs. More useful to
some would be the drop in power consumption for long duration renders, if you
live somewhere that has costly electricity, though the tradeoff vs. the card
costs might not be worthwhile (depends how long you’d keep them). My Arion test
suggests a good 980 would be almost exactly the same speed as TB for AE, and of
course the TB does have a RAM advantage.

Ian.
——–
SGI Guru