Hi Chris
Sorry to be slow in responding. We do a lot of military work and I was working on an RFI response for a govt agency these last few days.
The size and performance of a server are of course determined by the load you anticipate and by the speed and number of clients you plan to attach. I’ve been asked about a zillion times for specifics, but there are no hard-and-fast rules.
Here are some of mine:
1. The server should be as fast as the fastest client or *at least* of the same CPU generation as the fastest client
We are dealing with 3 broad generations right now, soon to be 4: old Mac Pros, Nehalem, Westmere, and the new guys (Sandy Bridge/Ivy Bridge).
You can’t expect a first gen Intel Mac Pro to handle the data coming from a Westmere client.
Does this mean it won’t work? No, but it does mean that if you decide to do a very taxing render to the server, you might cause other clients to slow down and drop frames. So if you’re asking me to spec things, I have to consider the worst case. A lot of people looking to save money would rather deal with the occasional slowdown.
2. More memory is better
It used to be 1 GB of memory per client was about right. I now tell people 2 GB of memory. I would prefer even more.
The more memory you have on the server, the more the server can cache and the more efficient the IO to the RAID can be. If you are low on memory, tiny IOs like directory queries will hit the RAID, leading to poor performance.
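To make that rule of thumb concrete, here's a trivial sizing sketch. The client count below is a made-up example, and 2 GB per client is just my current guideline, so plug in your own numbers:

```shell
# Rough server-RAM sizing: 2 GB per attached client (per the rule above).
# CLIENTS is an example figure, not a recommendation for any real site.
CLIENTS=12
GB_PER_CLIENT=2
echo "Suggested server RAM: $(( CLIENTS * GB_PER_CLIENT )) GB or more"
```

So a dozen clients would put you at 24 GB, and I would prefer even more.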
3. 10Gb needs the fastest Macs.
As things are today, only the Westmere systems can really drive a significant number of 10Gb ports. So if you want to do 10Gb to clients, make sure the machines with 10Gb cards have fast CPUs. The Westmere system has the memory controller inside the CPU, which makes a huge difference. (Sandy Bridge moves the Northbridge inside the CPU, which will be *another* huge increase in throughput.)
4. Get the slots right
Big IO cards need big slots. The server has two x16 slots: one needs to go to the RAID card and one to the network card. The graphics card will have to move.
5. Tune the system correctly.
You are going to be using a deskside workstation as a high-throughput device, and it’s not tuned for that. Apple doesn’t consider that you’ll have 200,000 open files or that you’ll have 1GB/sec moving from disk to the network all day. So you need lots of nbufs, clusters, vnodes, open file descriptors, etc. (We tune all systems that have our storage attached.) One simple way to get some of this tuning is to load OS X Server; it does some of it.
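As a rough illustration of the kind of tuning I mean, the sketch below bumps a few of the relevant kernel limits. The values are examples only, not a recipe: the exact key names and whether they're writable at runtime vary by OS X release, so check `sysctl -a` on your machine and size the numbers to your own client count and workload before using anything like this.

```shell
# Example sysctl tuning for a deskside Mac pressed into file-server duty.
# All values are illustrative; verify names and limits on your OS X release.

# Raise the system-wide and per-process open-file limits
sudo sysctl -w kern.maxfiles=262144
sudo sysctl -w kern.maxfilesperproc=131072

# Allow more cached vnodes so directory-heavy workloads stay in memory
sudo sysctl -w kern.maxvnodes=500000

# Bigger socket buffers help sustain 10Gb links
sudo sysctl -w kern.ipc.maxsockbuf=8388608

# To persist across reboots, put the same key=value pairs in /etc/sysctl.conf
```

Some limits (like the network mbuf cluster pool) can only be set at boot, which is part of why loading OS X Server, which applies some of this for you, is the easy path.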
Steve
Steve Modica
CTO, Small Tree Communications