slightly ghetto san
Posted by Marcus Lyall on December 24, 2009 at 6:47 pm

We've got a ProCurve 2810 switch that I'm using to run the whole office network. But I'm planning to use it to run a video network. Mostly for AFX rendering, and ProRes editing.

Here's what I'm planning to do (a geeky Xmas by the looks of things):

Use a PCI-e G5 with 4GB of RAM as my video server.
Connect an 8-drive SATA RAID to it via a Highpoint card. So that's the storage.
Connect the server to a ProCurve 2810 Gig-E switch via 2 Ethernet cables.
Configure the ProCurve to enable jumbo frames and whatever needs to be done to make the link aggregation work.

Connect it to:
3 x Mac Pros via dual Ethernet cables with link aggregation.
1 x Mac Mini via a single cable.
1 x PCI-X Mac G5 via a single cable.

Trying to work out how to best connect this network to the office network and internet. I am not an IT person. I've put together RAIDs and done basic networking, but nothing fancy.
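For anyone wondering what "whatever needs to be done" amounts to on the switch itself, here is a rough sketch of the ProCurve side, based on the 2800-series CLI manuals. Port numbers are made up for illustration, and exact syntax varies by firmware, so treat this as a starting point rather than gospel:

```
; hedged sketch for a ProCurve 2810 -- verify against your firmware's manual
configure
trunk 1-2 trk1 lacp      ; bind server ports 1-2 into LACP trunk "trk1"
vlan 1
   jumbo                 ; jumbo frames are enabled per-VLAN on ProCurve
   exit
show trunks              ; confirm the trunk came up
show lacp                ; confirm LACP negotiated on both ports
```

Each client's dual-port link aggregation would get the same treatment, one trunk per machine.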
Do I:
a) Get another small Gig-E hub for the office network, and another Ethernet card for the PCI-X G5, and use it as the link between the video and office networks.
b) Cleverly configure the Procurve to handle both office and video networks.
c) Get a proper IT person round here to do it and let him charge me the normal amounts to tell me I have to upgrade all my computers.
d) Reassess the situation.
???
Any advice much appreciated.
Season's greetings!
M
Ian Liuzzi-fedun replied 14 years, 4 months ago · 7 Members · 27 Replies
27 Replies
Bob Zelin
December 24, 2009 at 7:08 pm

REPLY – it won't work. Replies below.

We've got a ProCurve 2810 switch that I'm using to run the whole office network. But I'm planning to use it to run a video network. Mostly for AFX rendering, and ProRes editing.

REPLY – without setting up link aggregation, you will never get the throughput that you need. You need a switch that supports dynamic link aggregation, flow control, and jumbo frames. And you will need a large, fast drive array that can support multiple workstations pulling data at a very fast rate.
Here's what I'm planning to do (a geeky Xmas by the looks of things):
Use a PCI-e G5 with 4GB of RAM as my video server.

REPLY – you mean the Power Mac Quad G5. It won't work. 4GB of RAM is too small, unless you only have 2 clients.
Connect an 8-drive SATA RAID to it via a Highpoint card. So that's the storage.

REPLY – you will soon see that even if you have the right switch and set up link aggregation, the Highpoint card exhibits severe latency, and you will get dropped-frame errors when playing out from any of your clients using the shared volume. But hey, try it – you will see. You need a PROFESSIONAL SAS/SATA host controller, not the Highpoint.

Connect the server to a ProCurve 2810 Gig-E switch via 2 Ethernet cables. Configure the ProCurve to enable jumbo frames and whatever needs to be done to make the link aggregation work.

REPLY – if you think that 2 Ethernet ports is enough for link agg, you are dreaming. We use 6 now on all systems – even small systems. Even 4 was pushing it (unless you have only a couple of clients).
Connect it to:
3 x Mac Pros via dual ethernet cables with link aggregation.
1 x Mac Mini via a single cable.
1 x PCI-X Mac G5 via a single cable.

REPLY – so you expect to run 5 FCP clients, with a server that has 4GB of RAM and only 2 ports for link aggregation. I hope your job doesn't depend on this project.
Trying to work out how to best connect this network to the office network and internet. I am not an IT person. I’ve put together raids and done basic networking but nothing fancy.
REPLY – even if you were an IT person, this is a difficult process. There are lots of steps to go through. And you CANNOT share the office IT network along with the shared storage – the storage needs a dedicated network, using Ethernet port 2 on your Mac Pros with static IP addresses. Your PCI-X Mac G5 doesn't support jumbo frames, so you won't get any bandwidth.
Do I:
a) Get another small Gig-E hub for the office network, and another Ethernet card for the PCI-X G5, and use it as the link between the video and office networks.
REPLY – they are called SWITCHES these days, not hubs. Do you think that Linksys Ethernet cards from Office Depot are going to work for this application?
b) Cleverly configure the Procurve to handle both office and video networks.
REPLY – so you are not an IT person, but you are going to configure 2 different VLANs and share the single switch? And you are going to create link agg on one of the VLANs?
c) Get a proper IT person round here to do it and let him charge me the normal amounts to tell me I have to upgrade all my computers.
REPLY – your proper IT person will have no clue as to what any of this is.
There are lots of people that you see on these forums – LOTS of manufacturers that do shared storage for video environments. But you try it – you try to get this to work with your IT guy. You let us know how things turn out, with your G5 server with 4GB of RAM, 2 Ethernet ports, a single ProCurve switch that you are going to share with your office network, and a Highpoint card. We are always here – just let us know how things go with your IT expert.

Any advice much appreciated.

REPLY – advice – study what is available to you on Creative COW. Read this forum. Contact the people that do this for a living. Save money – be a hero to your company.

Bob Zelin
Marcus Lyall
December 26, 2009 at 4:37 pm

Replies to your replies. Thanks for the info to date…

REPLY – without setting up link aggregation, you will never get the throughput that you need. You need a switch that supports dynamic link aggregation, flow control, and jumbo frames. And you will need a large, fast drive array that can support multiple workstations pulling data at a very fast rate.
So is the 2810 a switch I can use? Apparently it has….
IEEE 802.3ad Link Aggregation Protocol (LACP) and ProCurve trunking: support up to 24 trunks, each with up to 8 links (ports) per trunk
Jumbo packet support: supports up to 9,216-byte frame size to improve performance of large data transfers

Is this what we're talking about? Or is dynamic link aggregation different? Fast drive array should do about 500 MB/s. So all good?

REPLY – you mean the Power Mac Quad G5. It won't work. 4GB of RAM is too small, unless you only have 2 clients.

So if I put in 8GB, it'll do 4 clients? Only 3 clients will need FCP speed. But 8GB for good measure?
REPLY – you will soon see that even if you have the right switch, and setup link aggregation, the Highpoint card exhibits severe latency
I can play back uncompressed SD without latency over my office Gig-E network from any of the 3 machines I have with Highpoint cards already.
Does this change with multiple clients? I’ve just bought a bunch of SATA drives. Can you recommend a more suitable controller card?
REPLY – if you think that 2 Ethernet ports is enough for link agg, you are dreaming. We use 6 now on all systems – even small systems. Even 4 was pushing it (unless you have only a couple of clients).

So a Small Tree 6-port Ethernet card will make a big improvement?
Thing is that most of the time, the server/network will be used for AFX rendering. So not such an issue. It'll be kinda rare for my system to be playing out to more than 1 or 2 FCP clients. Thus the slightly ghetto nature. I agree that if I had a small edit facility with simultaneous FCP edits, this wouldn't really work. But I don't.

REPLY – so you expect to run 5 FCP clients, with a server that has 4 gig of RAM, and only 2 ports for link aggregation.
Not exactly. Most of the throughput is going to be AFX rendering.
The Mac Mini is for rendering from watch folders. The G5 is for archiving to tape (LTO-4 via a PCI-X ATTO SCSI card). It would be nice to hang other spare machines off this network to use as a makeshift AFX render farm. (There are some other 8-cores we could hook up.)
The 3 Mac Pros will all have fast local storage, but I want to store some assets on a server to avoid duplication and make it easier to back up at the end of projects. But obviously, I want to build it so I can ideally edit from the server where possible. Happy to build this capability up over time. No need for 10-gig Ethernet just yet. Nice to be able to play back single streams of ProRes HD to check renders without dropped frames, though.

REPLY – even if you were an IT person, this is a difficult process. There are lots of steps to go through. And you CANNOT share the office IT network along with the shared storage – the storage needs a dedicated network, using Ethernet port 2 on your Mac Pros with static IP addresses. Your PCI-X Mac G5 doesn't support jumbo frames, so you won't get any bandwidth.
Yep, not planning to share the office network with video. Just wondering whether they can be run as separate VLANs on the same switch.
Can always get a basic Gig-E switch for the office network. No biggie. I think there's one in a drawer somewhere. And yes, a Small Tree PCI-X card that supports jumbo frames for the PCI-X G5. Sound like a plan?
Static IPs in place already.

REPLY – they are called SWITCHES these days, not hubs. Do you think that Linksys Ethernet cards from Office Depot are going to work for this application?

Not quite sure what you mean here? But I'm in London. We don't have Office Depot. I think you're saying that I need a decent Ethernet card, yes?
REPLY – so you are not an IT person, but you are going to configure 2 different VLANs and share the single switch? And you are going to create link agg on one of the VLANs?

No. I was going to get my techy friend to do it. He seemed quite savvy on switches and the like. But wanted to get some advice, thus the posts.

REPLY – your proper IT person will have no clue as to what any of this is. There are lots of people that you see on these forums – LOTS of manufacturers that do shared storage for video environments. But you try it – you try to get this to work with your IT guy.

Kinda depends on who your IT guys are, I guess… Some are better than others…

REPLY – advice – study what is available to you on Creative COW. Read this forum. Contact the people that do this for a living. Save money – be a hero to your company.

It's my company, so no risk of being fired! In fact, the whole set-up is really a thing for me that has grown slightly out of hand.
As I say, the shared storage isn't mission-critical, just useful. The ghetto vibe is because the situation doesn't quite merit a major purchasing plan just yet.

So to recap…
1) Does the Highpoint card give worse performance when serving multiple clients than other cards?
2) Does the Highpoint card give worse performance when serving multiple clients over a network than when used as local storage?
(i.e., we assume the normal network bandwidth issues, but is there a particular issue with a Highpoint card serving multiple clients, rather than just one?)
3) Is the Procurve 2810 switch capable of dynamic link aggregation, and flow control?
4) Can I create two linkable VLANs on the ProCurve 2810, one for video, one for office?
5) Or do I just use the ProCurve for video, get another switch for the office, and use the Small Tree'd PCI-X G5 as a gateway between them?
6) Or, if the ProCurve isn't going to cut the mustard, what's the switch to buy? And why?
7) What are your thoughts on putting one of those 6-port Small Tree PCI-X card bad boys in the maligned G5, after putting some more RAM in it? Jeez, it's tempting. Using the G5 will save me a few quid.
8) Or do I bite the bullet and shell out on a 2nd-hand quad-core Mac Pro? Is there a big difference in performance between the two?
9) Season's greetings!
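Before Bob's answers, one bit of ballpark arithmetic that frames several of these questions: how many ProRes streams one GigE link can carry. The codec rates below are approximate published targets, not measurements, and the "usable GigE" figure is a deliberately conservative assumption:

```python
# Ballpark: ProRes streams per gigabit Ethernet link.
# Codec rates are approximate published targets (1080i/29.97), not measurements.
PRORES_422_MBPS = 147     # ProRes 422, ~147 Mb/s
PRORES_422_HQ_MBPS = 220  # ProRes 422 HQ, ~220 Mb/s

GIGE_USABLE_MB_S = 100    # conservative real-world payload per GigE link, MB/s

def streams_per_link(codec_mbps, link_mb_s=GIGE_USABLE_MB_S):
    """Whole streams that fit on one link (note megabits vs. megabytes)."""
    return int(link_mb_s // (codec_mbps / 8))

print(streams_per_link(PRORES_422_HQ_MBPS))  # -> 3 HQ streams per link
print(streams_per_link(PRORES_422_MBPS))     # -> 5 standard 422 streams
```

So a single link handles the "check one render" case with room to spare; it's several clients hammering the same server at once that eats the headroom, which is Bob's point throughout.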
Bob Zelin
December 26, 2009 at 6:15 pm

So is the 2810 a switch I can use? Apparently it has…
IEEE 802.3ad Link Aggregation Protocol (LACP) and ProCurve trunking: support up to 24 trunks, each with up to 8 links (ports) per trunk
Jumbo packet support: supports up to 9,216-byte frame size to improve performance of large data transfers

REPLY – go ahead and try it. What Ethernet card are you going to link agg to?
Fast drive array should do about 500 MB/s. So all good?

REPLY – it's not the speed of the drives, it's their latency – but go ahead and try it.

So if I put in 8GB, it'll do 4 clients? Only 3 clients will need FCP speed. But 8GB for good measure?

REPLY – again, go ahead and try it. Even with the "right boards and switch and drives" you will see that the G5 Quad will not work – but go ahead and try it. It's a good learning experience, and you will need to do it again, even with the right gear.
I can play back uncompressed SD without latency over my office Gig-E network from any of the 3 machines I have with Highpoint cards already.
Does this change with multiple clients? I've just bought a bunch of SATA drives. Can you recommend a more suitable controller card?

REPLY – if you are able to play out 3 uncompressed SD clients over a Gig-E network with no link agg and no jumbo frames on your office network, then you are doing more than I was ever able to do, so don't listen to anything I am saying. When I started all these experiments, we could not get two Macs to play out single streams of compressed DVCProHD (which is half the bandwidth of uncompressed SD), so if you CAN do this right now, then don't listen to me anymore, as you have accomplished something that I (or anyone else) could not. If you are playing out 3 uncompressed 30MB/sec streams over a standard office Gig-E network, over a standard Gig-E switch with no link agg or jumbo frames set up, and a Highpoint host controller, then this is amazing, and you should proceed as you wish, and ignore my rants. I can't believe this is working for you.
So a Small Tree 6-port Ethernet card will make a big improvement?
Thing is that most of the time, the server/network will be used for AFX rendering. So not such an issue. It'll be kinda rare for my system to be playing out to more than 1 or 2 FCP clients. Thus the slightly ghetto nature. I agree that if I had a small edit facility with simultaneous FCP edits, this wouldn't really work. But I don't.

REPLY – even our "wonderful" system has issues with rendering large files.
Not exactly. Most of the throughput is going to be AFX rendering.
The Mac Mini is for rendering from watch folders. The G5 is for archiving to tape (LTO-4 via a PCI-X ATTO SCSI card). It would be nice to hang other spare machines off this network to use as a makeshift AFX render farm. (There are some other 8-cores we could hook up.)
The 3 Mac Pros will all have fast local storage, but I want to store some assets on a server to avoid duplication and make it easier to back up at the end of projects. But obviously, I want to build it so I can ideally edit from the server where possible. Happy to build this capability up over time. No need for 10-gig Ethernet just yet. Nice to be able to play back single streams of ProRes HD to check renders without dropped frames, though.

REPLY – our "wonderful system" is not a render farm, and doing large multi-hour renders will bog down your system. We have gone thru the trials and tribulations of what does, and does not, work, and we know our current limitations. I can assure you that even if you had the money to buy a non-ghetto system, and get exactly what I told you to buy, you would STILL have issues with long renders. But hey – if you are playing back 3 clients of uncompressed SD over an office network, with nothing – then what the hell do I know.
Yep, not planning to share the office network with video. Just wondering whether they can be run as separate VLANs on the same switch.
Can always get a basic Gig-E switch for the office network. No biggie. I think there's one in a drawer somewhere. And yes, a Small Tree PCI-X card that supports jumbo frames for the PCI-X G5. Sound like a plan? Static IPs in place already.

REPLY – we have failed with the PXG6 card on a G5. But you go ahead and try it. I thought you said earlier that you were already playing out 3 uncompressed streams with nothing – this is amazing. For your G5, if you want to try this, you need the PXG6.
It’s my company, so no risk of being fired! In fact, the whole set-up is really a thing for me that has grown slightly out of hand.
As I say, the shared storage isn't mission-critical, just useful. The ghetto vibe is because the situation doesn't quite merit a major purchasing plan just yet.

REPLY – you certainly can try what you are saying, and at worst case, it's an education in setting up what you will ultimately need.
So to recap…
1) Does the Highpoint card give worse performance when serving multiple clients than other cards?
REPLY – yes, Areca and ATTO are dramatically better.
2) Does the Highpoint card give worse performance when serving multiple clients over a network than when used as local storage?
(i.e., we assume the normal network bandwidth issues, but is there a particular issue with a Highpoint card serving multiple clients, rather than just one?)

REPLY – yes, see above answer.
3) Is the Procurve 2810 switch capable of dynamic link aggregation, and flow control?
REPLY – I don't know – I have not looked at the specs. But you and your IT guy should research what is involved for this switch to do dynamic (active) link agg, flow control, and jumbo frames.
4) Can I create two linkable VLANs on the ProCurve 2810, one for video, one for office?
REPLY – I don't know. The reason we go with Small Tree is because of support. HP support in the US is difficult without a support contract. The online help desk from India doesn't give you very accurate information. This is why we started to look around. A switch is a switch – Small Tree, HP, Netgear, Cisco – it's all down to the support that you can get.
5) Or do I just use the ProCurve for video, get another switch for the office, and use the Small Tree'd PCI-X G5 as a gateway between them?
REPLY – if you have 2 Ethernet ports on each Mac, you don't need a gateway – just create two independent networks. No big deal.
6) Or, if the ProCurve isn't going to cut the mustard, what's the switch to buy? And why?
REPLY – now is the time to call HP support, and see what kind of support they will give you, to answer these specific questions. Ask the ProCurve team exactly how to set up what we have talked about here – exactly where the menus are. Do you have to use the CLI, or can you use the GUI? What is active and passive link agg? Make that call before you proceed.
7) What are your thoughts on putting one of those 6-port Small Tree PCI-X card bad boys in the maligned G5, after putting some more RAM in it? Jeez, it's tempting. Using the G5 will save me a few quid.
REPLY – we tried it – it didn't work – but you may have better results.
8) Or do I bite the bullet and shell out on a 2nd-hand quad-core Mac Pro? Is there a big difference in performance between the two?
REPLY – you will still need a 6-port Small Tree card for the new quad-core.
9) Season's greetings!
REPLY – bah humbug – it's the season, and both of us are working!
Bob Zelin
Marcus Lyall
December 27, 2009 at 2:39 pm

Can play back one stream of uncompressed SD from 1 machine to another. Not 3 at the same time. Or at least, I haven't tried it.
Getting another drive card isn't an issue. I have to say, though, the Highpoint cards have worked pretty well as local storage. Got 4 of them in the studio, all running 8-bay RAIDs. Shipped the drives and cards over to the States, plugged 'em in there and ran them straight away.
Had the normal odd drive fall over, but nothing unrecoverable in 3 years so far. Reason for sticking with Highpoint was more to keep everything compatible, in case the drives needed re-purposing.
Areca card is much the same price as Highpoint…

Sounds like I need the Small Tree PXG6 whether I have a G5 or Mac Pro, correct? So may as well try it out in the G5, just for fun, I guess.
Then stick it in a Mac Pro when, as you predict, that doesn't work. Just so I can get ordering. These things take a while to arrive over here.

REPLY – if you have 2 Ethernet ports on each Mac, you don't need a gateway – just create two independent networks. No big deal.

Ah. There was me thinking I'd need both internal ports as link agg on the client end… No advantage there then?
Bob Zelin
December 27, 2009 at 7:07 pm

The PXG6 is for the G5. The PEG6 is for the Mac Pro. And no, they won't let you return it if the PXG6 does not work with the G5 in a shared storage environment.
You can start learning about all this stuff by doing TESTS, if you are so inclined. You can link agg just 2 ports on your Mac Pro, and use it as a server. Set up your switch (VLAN, link agg) to the 2 internal ports of the Mac Pro "server", and try to get two clients (two other Macs) to play out video at the same time from the server. You can do this before you spend ONE PENNY of money on anything. This way, you will see if things will work for you, or if they don't. If they work – great. If they don't work, then you need to HIRE SOMEONE to assist you.
Anyone can play out a single stream of ProRes from one computer to another without any equipment. The trick is to get two or more computers to do it AT THE SAME TIME. That is shared storage.
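Bob's test-before-you-buy advice is easy to act on. As a hedged sketch (not a calibrated benchmark – a tool like iperf does this properly), a few lines of Python can measure raw TCP throughput between two machines: run the script on one box, then point the client half at that box's LAN IP instead of loopback:

```python
import socket
import threading
import time

CHUNK = 1 << 20          # 1 MiB per send
TOTAL = 16 * CHUNK       # 16 MiB test payload

def sink(server_sock):
    """Accept one connection and drain TOTAL bytes."""
    conn, _ = server_sock.accept()
    with conn:
        remaining = TOTAL
        while remaining > 0:
            data = conn.recv(CHUNK)
            if not data:
                break
            remaining -= len(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # replace loopback with the server's LAN IP
srv.listen(1)
host, port = srv.getsockname()

t = threading.Thread(target=sink, args=(srv,))
t.start()

cli = socket.socket()
cli.connect((host, port))
buf = b"\x00" * CHUNK
start = time.time()
sent = 0
while sent < TOTAL:
    cli.sendall(buf)
    sent += CHUNK
cli.close()
t.join()

elapsed = time.time() - start
mb_per_sec = (TOTAL / (1 << 20)) / elapsed
print(f"{mb_per_sec:.0f} MB/s")
```

Over loopback this only measures the kernel; over the real wire it tells you whether jumbo frames and link agg are actually paying off, before any money changes hands.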
Bob Zelin
Marcus Lyall
December 27, 2009 at 9:32 pm

G5 is a PCI-Express type, so PEG6 is the go, no?
If no joy, then I'll switch to a Mac Pro… Takes a while for this Small Tree stuff to arrive, though. Can aggregate the two ports on the G5 in the meantime… Just to test…
My ghetto theory… start with the most basic setup and work my way up to the inevitable costly solution… therefore proving to myself that it's worth spending the money all the way.
Techy mate is really techy. (He's writing me an app in C++ at the moment.) Hoping a lil' old HP switch won't outfox him.
David Chai
December 28, 2009 at 8:48 pm

Just want to chime in with my experience. I was originally inspired to build this based loosely on Bob's cheap shared storage setup, and was surprised how well it worked.
Using:
Dual 2GHz G5 as file server running OS X Server 10.5.8, 4.5GB RAM.
EnhanceTech UltraStor RS16FS with 16 x Seagate 1.5TB drives, RAID 6. (We paid around $6000 total. We bought the drives separately and installed them ourselves.) You can buy the chassis direct from Enhance or from B&H, CDW, or Newegg. Enhance has great support.
Intel Pro/1000 MT quad-port gigabit server card, PCI-X (10.5.6 or higher has built-in drivers).
Apple dual 2Gb Fibre Channel PCI-X card.

4 ports trunked through a Cisco 2960G switch (not a cheap switch, but excellent quality).
Read/write is around 300MB/sec (limited by the dual 2Gb Fibre card).
The UltraStor chassis can easily do 600MB/sec, but you need dual 4Gb fibre cards to get that. Besides, 4 ports trunked matches the 4Gb fibre throughput of the fibre cards – a good match for the 4 workstations that needed access to the file server.

I'm getting 5-6 streams of XDCAM 35Mb/sec to 3 workstations simultaneously, without dropped frames. Latency will take a hit with so many streams, but it's usable. The G5 is around 50-60% CPU load. Do not run too many other services on it, as the G5 is way slower than the Intel Xeons.
ProRes 422 streams: you can get 2-3 per machine. Figure around 60MB/sec as your top with random access, burstable to around 100MB/sec for a straight file-to-file copy. This is a non-jumbo-frame setup. You can probably get 10-15% more throughput with jumbo; we just have other systems on the switch that are not compatible with jumbo frames.
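(Editorial aside: those numbers hang together once megabits and megabytes are kept straight. A quick sanity check on the aggregate load, using the figures from this post – the per-link GigE estimate is an assumption:)

```python
# Sanity check on the aggregate load in David's setup (figures from the post).
XDCAM_MBPS = 35           # Mb/s per XDCAM stream
STREAMS = 6               # upper end of the reported 5-6 simultaneous streams
ARRAY_MB_S = 300          # array read/write in MB/s (limited by the 2Gb FC card)
TRUNK_MB_S = 4 * 100      # 4 x GigE trunked, ~100 MB/s usable each (assumed)

aggregate_mb_s = STREAMS * XDCAM_MBPS / 8      # megabits -> megabytes
print(f"{aggregate_mb_s:.1f} MB/s aggregate")  # -> 26.2 MB/s aggregate
print(aggregate_mb_s < ARRAY_MB_S and aggregate_mb_s < TRUNK_MB_S)  # -> True
```

The takeaway: six XDCAM streams barely scratch either the array or the trunk, which is why latency, not raw bandwidth, is what bites first in these setups.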
I recommend rendering to local storage and then copying back to the server afterwards, as reading and writing to the file server at the same time definitely kills your performance, but just reading is OK. Saves tons of copying time. This setup has saved me countless hours over the last six months. And the Intel card and Apple fibre card can be found on eBay for really little money, and we had a G5 sitting around, so might as well put it to work. If you only need a few clients, you may be able to get away with using AFP file sharing on a client OS system. We just needed more administrative control over who can access what files, so we used the Server version of the OS.
This setup may be more than you want to spend, but it's expandable (up to 4 additional chassis for 80 HDDs – around 100TB of storage, or more if you use 2TB drives), and it's a hardware RAID controller, so it takes a load off the G5. I have to agree with Bob on a lot of points. You can get cheap direct-attached storage like the Highpoint (which I have used), but when you want to go to a shared environment and be able to rely on it, the price starts to climb very quickly. Good-quality gear – RAID controllers, switches – is absolutely essential if you want a nice, fast, stable working system. Still, this was so much cheaper than Xsan or even the Promise chassis.
Good luck with your ghetto SAN.
David

—————–
David Chai
Director . Camera . Editor
http://www.davidchai.com
dc@davidchai.com
212 363 0159
Marcus Lyall
December 29, 2009 at 10:51 pm

Waiting for some gear now. Will try the G5 first.
Found this on my travels… looks like Areca is the way to go.
https://www.xbitlabs.com/articles/storage/display/6-sas-raid-controllers-roundup.html
Will try the Highpoint for comparison. Keeping it ghetto…
Thanks for the advice so far, guys…
Bob Zelin
December 30, 2009 at 12:02 am

Hi David –
Where did you get the driver for the Intel 4-port card for your G5?

I am amazed that you wanted to build a budget system, yet you chose the most expensive switch and a fibre array! I am also amazed that you are able to get 3 streams of XDCAM or ProRes 422 HQ without jumbo frames, as Ethernet with an MTU of 1500 will max out at about 50MB/sec!
Cisco won't help anyone without a support contract – how did you get the Cisco working? Did you know an IT guy that was Cisco certified?
People ask me why I am on these forums so much – it is because of your excellent post (and others like it). Now I have to try the G5 again, as I did not get the results you did. Please answer my questions above, and thanks for posting your results.
Bob Zelin
Chris Blair
December 30, 2009 at 5:44 am

Bob Zelin: you CANNOT share the office IT network along with the shared storage (which needs to be a dedicated network), using Ethernet port 2 on your Mac Pros, with static IP addresses.

Perhaps this is a Mac thing… but in our Windows-based facility using the Apace vStor, we can share our office IT network right along with the vStor's shared video storage using just one 48-port switch. I don't recall the brand or anything, but it was an $800 switch recommended by Apace. We can also set up one of the 4 edit systems to act as a gateway of sorts for our office PCs (11 of them) to have access to the audio and video on the vStor, with no noticeable impact on editing performance. The vStor also works happily with Final Cut, and the majority of systems they sell are to Final Cut facilities.

Bob Zelin: Ethernet with an MTU of 1500 will max out at about 50MB/sec

The Apace vStor can use jumbo frames, but its default from the factory uses an MTU of 1500, and we consistently got read speeds above 75MB/sec and write speeds above 60MB/sec. We eventually enabled jumbo frames on the vStor (and the $25 Intel GB Ethernet cards from TigerDirect in each workstation) and saw about a 20% increase in read/write speeds. Each workstation has 2 Ethernet cards: one that uses a static IP address to link to the vStor, and the other using a static IP address to connect to the office network.
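The ~20% jumbo-frame gain reported here is bigger than raw framing efficiency alone predicts. A quick calculation of wire efficiency at each MTU makes that visible (header sizes below are standard Ethernet/IP/TCP figures; the extra real-world gain comes mostly from lower per-packet CPU and interrupt load):

```python
# Payload efficiency of a TCP stream at standard vs. jumbo MTU.
# Per frame on the wire: preamble+SFD (8) + interframe gap (12)
# + Ethernet header (14) + FCS (4) = 38 bytes, plus 40 bytes of IP+TCP headers.
WIRE_OVERHEAD = 38
IP_TCP_HEADERS = 40

def payload_efficiency(mtu):
    """Fraction of wire time carrying actual file data."""
    return (mtu - IP_TCP_HEADERS) / (mtu + WIRE_OVERHEAD)

std = payload_efficiency(1500)    # ~0.949
jumbo = payload_efficiency(9000)  # ~0.991
print(f"MTU 1500: {std:.1%}  MTU 9000: {jumbo:.1%}  framing gain: {jumbo/std - 1:.1%}")
```

Framing alone buys about 4%; the rest of a 20% improvement is the NIC and CPU handling roughly six times fewer packets per second.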
Now, I don't know if this is all exactly the same thing as what's being discussed here… but we were told repeatedly by other companies (EditShare, Studio Network Solutions, Tiger Technology, and others) that what we're doing could NOT be done using one switch. I'm here to tell you it CAN be done, and is being done every day in facilities using Apace products. We get 3-4 real-time streams of DVCPro50 video across 3 edit systems, with the fourth getting at least 1 or 2 real-time streams while being used for compositing. We see virtually NO hiccups, no pauses, no issues whatsoever. I can count on one hand the number of times playback has paused or stopped during editing in 18 months of use. I can count on one hand the number of times a render stopped or had corrupt frames. I can count on no hands the amount of down-time or file corruption we've had (since it's zero).
I have no idea how the engineers at Apace achieve this kind of performance when everyone, and I mean EVERYONE, else says you cannot do it. But I'm here to say they do it.
I realize the thread is talking about a DIY sort of scenario with Macs, and perhaps using one switch isn't possible with them. But it IS possible with affordable, turn-key products that just plain work. No fuss, no headaches, no figuring anything out.
So my advice is STOP trying to save $5,000 or $10,000 by building a system from existing components. Just bite the bullet and pay for a system that's gonna work and has a 5-year warranty and FREE technical support. You'll be glad you did.
Chris Blair
Magnetic Image, Inc.
Evansville, IN
http://www.videomi.com