Simon Blackledge
Forum Replies Created
-
Bravo!
How someone can even say they can do something, especially IT people, when they have never done it before, is beyond me.
-
Haha.. What goes around..
Ping you tomorrow.
S
-
I’d expand the working Small Tree setup and daisy-chain a Storage DNA onto that.
-
If they have the budget and don’t want DIY, I’d go to a systems integrator.
Will you just be editing off the iMac connected to all this storage? :-/
If not, you need a server or a NAS.
https://www.object-matrix.com is an option.
If you’re doing it yourself:
Get a Thunderbolt expansion chassis that will take at least 2 cards and put an Areca RAID card in, then daisy-chain these:
https://www.netstor.com.tw/_03/03_02.php?MTE5
8TB drives, and 10TB dropping soon!
You’ll want to be RAID 6.
With that amount of data I’d definitely add an LTO-6 tape library! You can put the HBA for it in the expansion chassis too.
Good luck!
s
-
You’re all Mac based? Or are these PCs pulling also?
Are you 100% sure the server’s 10Gig ports are aggregated with the 1Gig ports on the switch? That’s mad.. surprised it even works.
You’re feeding the Mac Pro to the switch at 10Gig but the workstations are all Gig, yes?
Have you customised the sysctl settings as per this thread?
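(For anyone landing here without the rest of the thread: the usual macOS 10GbE tweaks are bigger socket buffers and disabling delayed ACKs. The values below are illustrative examples only, not the thread’s tested settings.)

```shell
# Illustrative macOS 10GbE tuning sysctls – example values, NOT the
# thread's tested settings. Run as root; changes do not persist across
# reboots unless added to /etc/sysctl.conf.
sudo sysctl -w kern.ipc.maxsockbuf=8388608      # max socket buffer (8 MB)
sudo sysctl -w net.inet.tcp.sendspace=4194304   # default TCP send buffer
sudo sysctl -w net.inet.tcp.recvspace=4194304   # default TCP receive buffer
sudo sysctl -w net.inet.tcp.delayed_ack=0       # send ACKs immediately
```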
I’d delete all the aggregation. Just connect 1 x 10Gig port from the SANLink to the switch and see if that works better to begin with.
Make sure the Promise RAIDs and the SANLink are plugged into their own ports on the Mac Pro.
You ask if you can have 2x SANLink on the same machine. Don’t see why not, but you only have 1 currently, no?
Remember, if you connect another machine at 10Gig and start pulling data from one of those Promise RAIDs from the server at, say, 600MB/s, and the RAID can only do 700MB/s, then it only leaves 100MB/s for everyone else till you stop the read/write. I doubt you’d even see anything close to 100, and it will be very choppy.
Delete your aggregation and just start with 1 port feeding the switch/LAN and see where that gets you.
Baby steps.
s
-
Talk to Ron if he’s local. They know what they are doing.
If you want max speed I’d still go at least RAID 5. If you’re over 16TB, RAID 6, as the possibility of another disk failing during a rebuild is higher.
I’d do a local 8-bay RAID 5 with a larger RAID 6 for backup of that and other data.
ThunderRaid2 32TB as the main, plus the Netstor to backup/offload to.
You should back up incrementally like this: Disk > Disk > Tape.
S
-
Ok, RAID 6 – 8-drive array – you use 2 drives for parity data. So in total your volume is only 6 disks’ worth.
If you’re doing 2 volumes at RAID 6 you use 4 drives for parity.
With RAID 6 you can have 2 drives die before you lose anything.
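The parity maths above, as a quick sketch (8TB drives assumed purely as an example):

```shell
# RAID 6 usable capacity = (drives - 2 parity) * drive size.
# 8TB drives are assumed here just for illustration.
drives=8
parity=2
size_tb=8

usable=$(( (drives - parity) * size_tb ))
raw=$(( drives * size_tb ))
echo "One 8-drive RAID 6 volume: ${usable}TB usable of ${raw}TB raw"

# Splitting the same 8 disks into two 4-drive RAID 6 volumes
# burns 4 drives on parity instead of 2.
two_vol_usable=$(( 2 * (4 - parity) * size_tb ))
echo "Two 4-drive RAID 6 volumes: ${two_vol_usable}TB usable"
```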
When you say high speed. What bandwidth do you need?
Really don’t see the point of sticking Time Machine in the middle and having to jump through another hoop before you can get back to work. And getting back to work is the whole reason for the backup and safety net.
So how fast do you need ?
-
I really wouldn’t bother with 2x RAID 6 volumes.
You’re losing 4 drives from the off. Plus you’ll want a minimum of 1 global hot spare – preferably 1x local hot spare per volume. So that’s 6 disks you lose before you even start.
Also, yes, you have 2 RAIDs, but they are not on separate machines, so you still have a single point of failure.
RAID card breaks.. something in the Mac breaks.. dodgy lead.. and you can’t get to your data.
I’d go with a single RAID 6 with 2 hot spares set to auto rebuild.
I like the Thunderbolt stuff. Means if your Mac breaks you can just plug in another Thunderbolt Mac, install the driver, and away you go.
If you want a backup I’d seriously look at running something totally redundant/separate to the above setup and using something like ChronoSync or a simple rsync script to clone Disk to Disk hourly.
In your scenario – Time Machine to the 2nd RAID. If the 1st RAID goes bye-bye, how do you just start work again? You can’t see all your folders in a Time Machine backup.
S
-
Hey Bob,
The MC install was just to test Media Composer as we finally look to move away from FCP7. I haven’t worked out what we’ll do server-wise with MC yet. I was just aware that, after the install, playing media that was AMA’d from the server seemed stuttery. DPX seqs etc..
So checked the connection speed and was like WTF!
Am actually surprised an uninstall fixed it. You know how some uninstalls leave crap behind.
Have reached out to an Avid contact I was given. Will update you with further findings.
BTW – didn’t realise NFS over 10Gig is limited to 200MB/s on the write :-/ Reads fine at 750MB/s though.
Cheers
s
-
Simon Blackledge
May 14, 2015 at 2:18 pm in reply to: Networking and SAN/NAS for a large number of users
Do yourself a HUGE favour.
If you are indeed looking at this many seats – and they are students..
make sure your SAN can do de-dupe. You will need it! And you can set per-student quotas.
Because all students do is copy.. and copy.. and give to friend to use, and he/she copies etc.. etc…
It will happen.
De-dupe! = $$$$, but not as much $$$$ as the storage you will actually need vs what you think.
Either that, or have a brilliant workflow course that they all must do. Because if it ain’t their money buying the drives, they just won’t think about it… nor care.
s