Forum Replies Created
-
Security: Having multiple file transfers going at the same time shouldn’t pose any additional/different security risk as opposed to a single transfer.
Speed: Depends, but I’m guessing in practical terms it doesn’t matter. Not knowing the specs of everything, I’m going to guess the bottleneck on the copies is probably disk IO: how fast one machine can read and how fast the other can write. The more simultaneous streams you have going, the more contention there is, i.e., how much seeking the disk heads have to do to service the different copies. There are many variables that come into play in determining the optimal solution, and ultimately you’d need to test different combinations to find the “best”. Of course, if you’re sleeping while all of this is going on, does it really matter?
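If you do want to measure it rather than guess, a quick test is easy to script. This is a rough sketch with throwaway files under /tmp; the paths and sizes are placeholders, so adjust them to match your real machines and disks.

```shell
# Rough benchmark sketch: compare a serial copy of two files against
# copying both at once. All paths here are throwaway test locations.
mkdir -p /tmp/cptest/src /tmp/cptest/dst
dd if=/dev/zero of=/tmp/cptest/src/a bs=1048576 count=64 2>/dev/null
dd if=/dev/zero of=/tmp/cptest/src/b bs=1048576 count=64 2>/dev/null

# Serial: one copy after the other
time ( cp /tmp/cptest/src/a /tmp/cptest/dst/ && cp /tmp/cptest/src/b /tmp/cptest/dst/ )

# Parallel: both copies at once, then wait for both to finish
time ( cp /tmp/cptest/src/a /tmp/cptest/dst/a2 & cp /tmp/cptest/src/b /tmp/cptest/dst/b2 & wait )
```

On a single spinning disk the parallel run often loses to the serial one because of head seeking; on SSDs or across separate disks it may win. Small synthetic files like these only hint at the behavior, so test with your real data sizes.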
-
I’d look at Promise, 3Ware and LSI controllers.
As far as internal vs. external goes, that depends on how much expandability you want and how much you want to spend. Fundamentally there is nothing different between internal and external other than the connectors.
-
Chris Gordon
December 18, 2010 at 4:51 pm in reply to: how to slave one computer to use as a monitor while editing
Sorry, but you’re pretty much out of luck. It’s only the newest/current round of iMacs that can be used as monitors for another machine.
-
Chris Gordon
December 18, 2010 at 1:09 am in reply to: how to slave one computer to use as a monitor while editing
I assume by “desktop computer” you mean an iMac. I think only the newest iMacs support being used as a monitor; check the manual for the details on this. Otherwise, can you post the exact details of which computers you have?
-
The other interesting development is Converged Networking, for which 10 GigE over UTP is a strong enabler. With converged networking, you essentially virtualize all of your connections (Ethernet, FC, etc.) over a single large (i.e., 10 GigE) connection, or two connections if you want redundancy. Cisco is already pushing this in their UCS blade servers, and you can get CNAs (converged network adapters) from a number of different vendors. You can do some of this today without CNAs, but not as cleanly. I think this is clearly the direction we will all end up going, but it’s still a couple of years out, especially for affordable CNAs and switches.
Bob, have you looked at any of this and have any opinions/thoughts?
-
IMHO, 10 GigE over copper is still just way too new. You pay a serious premium for anything really new and there are bound to be some “adjustments” to the “standards” as the new tech settles in. I’d at least expect to run into some odd bugs and have to do a number of firmware updates to the switch and NICs. Unless there is some real need to use the very newest tech, 10 GigE over copper in this case, stick with something more tried and true. Let others pay the early adopter tax and figure out all of the bugs for you.
-
I second all of what Bob said. Definitely get some help from someone who specializes in this; they should know what really works and solves your problems and what is vaporware. Another thing is to try to go with a vendor that can sell you everything you need (array, switch, NICs/HBAs, etc.) so you can get better leverage from a volume purchase. If you call up and order just a single switch, you’re pretty much going to pay list price. If you are bundling a whole bunch of equipment together, you have a lot more negotiating power to get discounts. Of course, always shop around and get some comparative prices. Don’t be arrogant or mean with your vendors, but there’s nothing wrong with getting designs and quotes from, say, two different ones and being open with them that you are also considering their competitor. Again, a good consultant should be able to help you with this.
As for 10GigE, it’s expensive. I’m actually working on a project at work with 10 GigE (not video related) and it’s just not cheap.
-
– Is the PC on the same network as your Mac — meaning does it take the same network path to get to the FTP servers?
– Are you behind a firewall? If so, you may need to ensure you are forcing passive FTP from your client.
– Are these public/anonymous FTP servers? I’m guessing they are from your attempt to use “Guest” as the account. The standard account for public FTP servers is “anonymous”, with your email address (or at least something formatted like an email address) as the password. See https://en.wikipedia.org/wiki/File_Transfer_Protocol#Anonymous_FTP
As for clients, I’ve had the best luck just using Firefox for FTP on my Mac. I’m behind a firewall and a squid proxy, and Firefox is the only thing that seems to work reliably (though OpenBSD images and patches seem to be the only things I use FTP for these days).
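As a concrete command-line example of an anonymous, passive-mode download, curl handles both conventions. The host and path below are placeholders, not a real server:

```shell
# Hypothetical server and path; substitute your actual FTP URL.
# curl uses passive mode by default (useful behind a firewall);
# -u supplies the conventional anonymous credentials.
curl --ftp-pasv -u anonymous:me@example.com \
  -o patch.tgz ftp://ftp.example.com/pub/patch.tgz
```

If a transfer stalls right after login, forcing passive mode (or checking your firewall’s FTP handling) is usually the first thing to try.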
Hope that helps.
-
When you say “SQL Server”, I’m assuming you mean Microsoft SQL Server. Have you reviewed Microsoft’s best practices and other guidance documents? Those type of things will often cover what is supported or generally recommended.
Some questions:
– What kind of array do you have?
– How is the server connected to the array?
– You listed 20 disks above. Is that what you have to work with or just an example?
– Do you know the transaction load you need to handle?
– Are you doing any array-based replication?
For a situation with a small number of disks and no array replication, I’d opt to put a RAID10 across all of the disks to allow sharing of all of the IO across all functions, and then cut that up with an LVM and put different file systems on each logical volume to minimize the impact of file system corruption and otherwise help keep the file systems sane/healthy.
The ultimate is to break everything into its own RAID group such that under max load none of them ever has IO problems and contention isn’t an issue. Of course, this means lots of disks, which may not be cost effective.
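To make the RAID10-plus-LVM idea concrete, here is what it could look like on a Linux box using mdadm and LVM2. The device names, sizes, and volume names are all hypothetical; with a hardware array you’d build the RAID10 in the controller instead and only do the LVM carving on the host:

```shell
# Hypothetical sketch: software RAID10 across four disks, then one
# logical volume (and file system) per database function.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
  /dev/sdb /dev/sdc /dev/sdd /dev/sde
pvcreate /dev/md0
vgcreate dbvg /dev/md0
lvcreate -L 200G -n data   dbvg   # data files
lvcreate -L 50G  -n logs   dbvg   # transaction logs
lvcreate -L 50G  -n tempdb dbvg   # tempdb / scratch space
mkfs.xfs /dev/dbvg/data           # one file system per volume
mkfs.xfs /dev/dbvg/logs
mkfs.xfs /dev/dbvg/tempdb
```

All functions share the IO of every spindle, but a corrupt file system on one volume doesn’t take the others down with it.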
-
Chris Gordon
December 10, 2010 at 2:33 am in reply to: Terminal Front-End for Simple Copy/Cut/Paste Operations?
Thanks, now I better understand the issues, so hopefully I can provide some help.
First, in Finder, you can turn off the preview/thumbnail generation. In Finder, go to View -> Show View Options. There is a button in there to disable the icon preview. Now why this isn’t in Finder preferences, I don’t know….
Your next challenge is just the sheer number of files in the directory. When you view the directory in Finder, use some other tool, or even do an “ls” at the CLI, the OS has to crawl the file system and stat every file in there. The more files, the longer it can take. 1,000 shouldn’t be a significant issue, though it may be noticeable; at 10,000 I would expect some slowness. Depending on your exact workflow, adjusting the number of directories and the number of files per directory may help.
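You can see the stat cost directly from the shell. This throwaway test builds a directory of empty files under /tmp; a names-only listing just has to read the directory, while “ls -l” must stat every single entry:

```shell
# Create 2000 empty files in a scratch directory, then compare a
# names-only listing (readdir) with a long listing (readdir + stat).
mkdir -p /tmp/manyfiles
cd /tmp/manyfiles
i=1
while [ "$i" -le 2000 ]; do
  : > "f$i"
  i=$((i + 1))
done
time ls -1 > /dev/null   # names only: just reads the directory
time ls -l > /dev/null   # stats all 2000 files as well
```

The gap widens as the file count grows, and it’s far worse when each stat is a round trip to a network file system.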
Similar to the number of files in a directory, the entire file system (volume) has limits on the number of files you can place in it. There is the absolute limit (for HFS+ it’s somewhere around 4 billion), but the practical limit is smaller, and that’s where you would start to see performance issues doing things such as you are. I don’t know what the practical limit for HFS+ is, but the solution to this type of problem would be to partition your disk into smaller volumes and break the files up across those so that you end up with fewer files per volume. I doubt this is your problem, but I figured I’d throw it out for general education.
If turning off the preview doesn’t solve the problem and you want something GUI-based, you could look into writing an AppleScript as a GUI front end to a cp or mv CLI command: for instance, something that lets you just choose a source and a destination directory, but never lists the actual files in those dirs, and then just calls cp or mv.
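If a plain shell wrapper is enough, the same idea works without AppleScript: take a source and a destination directory and move everything across without ever enumerating the files on screen. “move_dir_contents” is a made-up helper name, and find is used instead of a glob so the command doesn’t blow up on a directory with tens of thousands of entries:

```shell
# Hypothetical helper: move the entire contents of $1 into $2 without
# listing anything. find avoids the shell's argument-length limit that
# a plain 'mv "$src"/*' could hit on very large directories.
move_dir_contents() {
  src=$1
  dst=$2
  mkdir -p "$dst"
  find "$src" -mindepth 1 -maxdepth 1 -exec mv {} "$dst"/ \;
}

# Example usage with throwaway paths:
mkdir -p /tmp/mvtest/src /tmp/mvtest/dst
: > /tmp/mvtest/src/one.txt
: > /tmp/mvtest/src/two.txt
move_dir_contents /tmp/mvtest/src /tmp/mvtest/dst
```

An AppleScript front end could simply collect the two folder choices and hand them to a function like this.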
Hope this helps some. If it doesn’t, let me know any more details or issues so I can see if I can come up with something else.