Forum Replies Created
I couldn’t find anything online talking about SEC requirements for dropping a product.
I’m sure there are contractual obligations that some companies have for things like this (perhaps Apple has support contracts with large customers that need to be met), but other than that, I doubt this is true. I think Apple is doing the same thing I do when I have a product I don’t “want” to support: make it free. 🙂
Steve Modica
CTO, Small Tree Communications
I don’t think “never” is the right answer. I’m sure there will be something viable at some point. It’s just that right now, it doesn’t perform so well. I’m not sure if it’s framing activity or what. I haven’t profiled it.
Steve Modica
CTO, Small Tree Communications
The sysctl settings are OS X-level settings.
The flow control settings are hardware level, so they get set in the driver. Unfortunately, every piece of gear and OS is a little different, so setting flow control sometimes requires a look at the manual. We enable it by default, but it can also be set manually in the Network → Advanced → Hardware tab (it goes with “full duplex”).
Steve Modica
CTO, Small Tree Communications
Steve Modica
June 20, 2014 at 10:43 pm in reply to: Poor AFP read speeds after updating our server to Mavericks
Did you guys resolve this? I would expect when you upgraded the thing, all the network tuning got removed from /etc/sysctl.conf.
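If that’s what happened, re-creating the file is the fix. A sketch of what a 10Gb-tuned /etc/sysctl.conf might contain — the values below are illustrative (they match the buffer tuning I usually suggest) and should be whatever tuning was in the file before the upgrade:

```conf
# /etc/sysctl.conf — example 10GbE socket-buffer tuning, reapplied at boot
kern.ipc.maxsockbuf=8388608
net.inet.tcp.sendspace=4000000
net.inet.tcp.recvspace=4000000
```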
Steve Modica
CTO, Small Tree Communications
I miss Xsan. When a customer would call you with a $60,000 quote in their hand, and you could give them something comparable for $25,000, that was a very pleasant conversation 🙂
Steve Modica
CTO, Small Tree Communications
Since I sell 10Gb stuff and would get some money anyway, I’ll answer! 🙂
If you are copying files around, using 10Gb is a pretty safe bet, provided everything uses flow control and the local disks are all fast enough. (If you are copying from a SATA drive to your Areca, you’re only going to see the SATA drive’s performance.)
If you want people to play out video on all those attached systems, you might be disappointed. I haven’t looked at Areca in a while to know what their latency looks like, but the load difference between one system prefetching and playing video versus 2 or 3 systems trying to play video is huge. It’s not sequential anymore, and there are lots more metadata operations due to the shared protocol the clients are using. (Plus you have to tune the server for the higher network activity.)
There’s tons of stuff online discussing how to tune for TCP performance over NFS/AFP/Samba.
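The copy-speed point above can be sketched as a pipeline-bottleneck calculation. The MB/s figures here are made-up illustrative numbers, not benchmarks:

```python
def copy_throughput_mbps(stages):
    """An end-to-end copy runs at the speed of its slowest stage."""
    return min(stages.values())

# Hypothetical stage speeds in MB/s
stages = {
    "source SATA drive": 150,
    "10GbE link": 1100,
    "Areca RAID target": 900,
}
print(copy_throughput_mbps(stages))  # → 150 (the single SATA drive is the bottleneck)
```

In other words, a 10Gb network only helps once the disks on both ends can outrun the old 1Gb link.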
Steve
Steve Modica
CTO, Small Tree Communications
Hi Dylan
I imagine you got this answer already, but I think Apple’s old 82546 driver (PCI-X) is jumping in and grabbing our card. It seems to grab our Intel chipset because the vendor ID matches (8086 is the Intel vendor ID, which is really funny if you’re old enough to remember the first Intel CPUs 🙂).
Anyhow, we have a KB entry on this problem here with a solution:
https://www.small-tree.com/kb_results.asp?ID=44
Steve
Steve Modica
CTO, Small Tree Communications
Here’s my take, from having watched the SAN schism occur back in the SGI days:
NAS systems had some issues. CPUs were slow, TCP took a lot of horsepower, and we just couldn’t build systems big enough to handle the NAS I/O (think NFS). The solution was to stop moving the data on the server and just move the metadata; clients go get the data themselves. Now I have all my client CPUs and I/O busses working for me as well.
The problem of course is that it means 2 (or 3) networks and lots of complexity. Metadata (inodes and locking tokens) have to move around a TCP network, which can be wonky when clients die and leave things locked or the configuration manager loses control (zombie clients).
A NAS was always the most desirable, easiest solution; it was just too hard to pull off at the time.
Now, it’s not like that anymore. CPUs are fast, but more importantly, you have more of them than you (or any of the OS vendors) know how to use. They just sit there most of the time. So now there are plenty of cores to run TCP and whatever other protocol work has to happen.
About the only legacy thing left over from the NAS days is the “direct attach” requirement some apps have. Some want specific locking calls. Some want inode access (Pro Tools 8, for example). Some have trouble writing extended attributes across NAS protocols (Avid).
The great thing about a SAN is that it fools the apps and they think it’s direct attached, so all those things automatically go away.
All that being said, I think the SAN complexity will always be a burden. Ethernet is easy and it sucks in and adopts all the best elements of SAN. All your metadata stays in one place. Dead clients don’t kill the system and you only need one simple network.
What’s really blown me away in the last year has been the re-rise of iSCSI. The original iSCSI book (which I have a few copies of) predicates iSCSI on TCP offload cards. It clearly states the expectation that some special ASIC will be handling TCP. That didn’t catch on, and a number of vendors doing it went away (Alacritech, NetEffect, Neterion). iSCSI seemed doomed to fall behind things like FCoE (which skips TCP entirely and uses an FC stack with a new form of flow control). However, now that we all have a gozillion cores on our laptops, iSCSI runs just fine and runs *really* fast. So we’re starting to see vendors push it again.
(We give people ready access to iSCSI with our Project Wrangler software.)
Steve
Steve Modica
CTO, Small Tree Communications
A couple comments here:
1. A normal Mac doesn’t have its tuning set up to deal with 10Gb very well. The window sizes are too small. (How it’s tuned can also depend on your destination: BSD doesn’t like to let a lot of packets go unacked, but Linux seems to do much better with unacked packets.)
I personally use these settings:
net.inet.tcp.doautorcvbuf=0
net.inet.tcp.doautosndbuf=0
kern.ipc.maxsockbuf=8388608
net.inet.tcp.sendspace=4000000
net.inet.tcp.recvspace=4000000
net.inet.tcp.maxseg_unacked=8
net.inet.tcp.delayed_ack=0
net.inet.tcp.win_scale_factor=7
2. Flow control (802.3x) has to be enabled everywhere, and hopefully everything negotiates it on. Systems have to both heed and send xon/xoff packets. Some switches (Cisco Catalyst) won’t send xoff. This is bad, since any element in the chain that doesn’t push back will drop packets when congestion occurs, and that leads to some really bad performance (similar to what you’re seeing).
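To see why window sizes in the megabyte range matter at 10Gb, here’s a quick bandwidth-delay-product sketch. The 1 ms round-trip time is an assumed LAN figure, not a measurement:

```python
def bdp_bytes(bandwidth_bps, rtt_ms):
    """Bytes that must be in flight to keep the link busy for one round trip."""
    return bandwidth_bps * rtt_ms / 8000.0

def window_limited_bps(window_bytes, rtt_ms):
    """Throughput ceiling a fixed TCP window imposes: one window per round trip."""
    return window_bytes * 8000.0 / rtt_ms

# 10 Gb/s link with an assumed 1 ms LAN round trip
print(bdp_bytes(10e9, 1.0))            # → 1250000.0 (about 1.25 MB must be in flight)
print(window_limited_bps(65536, 1.0))  # → 524288000.0 (a 64 KB window caps you near 0.5 Gb/s)
```

That’s roughly why sendspace/recvspace get pushed into the megabytes and maxsockbuf to 8 MB: otherwise a small default window, not the wire, becomes the bottleneck.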
Steve
Steve Modica
CTO, Small Tree Communications
Steve Modica
June 8, 2014 at 12:21 pm in reply to: How do you find out which power supply belongs to which drive?
Businesses that integrate products are extremely narrow and focused. One huge advantage of having external power is the ability to avoid UL listing and certification: you get to drop that on the power supply manufacturer. (And you can’t relabel their power supply, because it was tested under their name.)
I agree that it would be wonderful if vendors all branded their power supplies 🙂
Mostly, I just make sure the output and polarity match my device. Vendors *are* very good at putting these labels on, and ever since I blew up my electronic dartboard many years ago, I’ve always been very careful.
Steve
Steve Modica
CTO, Small Tree Communications