Forum Replies Created

Page 30 of 32
  • Steve Modica

    December 30, 2010 at 4:35 pm in reply to: SAN advice

Equallogic is actually high end (recently bought by Dell). They make a good box and I assume they have a good RAID controller in there. Considering that they only have to handle 1Gb iSCSI, how bad can it be? 🙂
    Our first iSCSI targets were equallogic and I have one in the office now we’re tweaking for our iSCSI initiator.
SNS probably has a better audio product. Protools wants really tiny IOs and I imagine Equallogic may have problems with that (as will most targets). Protools is *very* sensitive to this. I can tweak the driver IO size and break it. In fact, we made it tunable for this reason. (It does not help that Protools prevents tracing on the app. I’d love to see how it does its IO, but we haven’t bothered to hack the bits to disable that yet. It’s on the list.)
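For a feel of why tiny IOs hurt so much, here’s a back-of-the-envelope model (the per-IO latency and link speed are invented for illustration, not measurements of any Equallogic or Protools setup): every IO pays a fixed round-trip cost, so effective throughput collapses as the IO size shrinks.

```python
def throughput_mb_s(io_size_kb, per_io_latency_ms=0.5, link_mb_s=100.0):
    """Effective throughput when every IO pays a fixed latency cost.

    per_io_latency_ms and link_mb_s are illustrative numbers, not
    measurements of any particular iSCSI target.
    """
    io_mb = io_size_kb / 1024.0
    transfer_s = io_mb / link_mb_s               # time on the wire
    total_s = per_io_latency_ms / 1000.0 + transfer_s
    return io_mb / total_s

for size in (4, 64, 1024):
    print(f"{size:5d} KB IOs -> {throughput_mb_s(size):6.1f} MB/s")
```

With these made-up numbers, 4KB IOs get well under a tenth of the throughput of 1MB IOs on the same link, which is the kind of cliff a tunable driver IO size lets you walk off of (or step back from).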

    Anyhow, some advice:
    Do not mix drive sizes or speeds. Try not to mix firmware either.
    Do not use desktop drives
    You’d better have flow control on your switches and cards for iSCSI or it will be unhappy.
    You’d better have a newer kernel on your linux target box.
    Apple does not support MPIO via iSCSI (we can do it but we have to lie and say we’re fibrechannel which breaks the disk utility). So you can only use 1 port. So think jumbo frames or 10Gb to get better numbers

    Our Mobile box does all this and supports iSCSI. Most protools guys I’ve talked to are not looking to spend SNS or Mobile dollars tho. They go off and buy direct attached stuff.

    Steve

    Steve Modica
    CTO, Small Tree Communications

  • Steve Modica

    December 30, 2010 at 4:28 pm in reply to: cross-platform SAN/NAS friendly to iMacs, laptops?

Apple’s SMB is a lot better in 10.6.5, and you can always install MacPorts and download the open-source Samba 4 as a backup. There’s Thursby’s Dave as well.

    Steve Modica
    CTO, Small Tree Communications

  • Steve Modica

    December 30, 2010 at 4:27 pm in reply to: Direct connect Fiber-PC ?

    I once had a doctor call us complaining that he spent $20k to upgrade his SGI system to faster graphics, but his 3D modeling was no faster (and in fact, slower). He was pissed.

    We looked at the IO coming from the app he was using. It was reading a record, then writing it to the screen, reading another record, then writing it to the screen (and so on). His bottleneck was his 40MB/sec SATA drive.

    When we had his programmer read many records at once, it was 6 times faster immediately. It would have been 6 times faster on the old machine too.
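The batching fix is easy to sketch. Here’s a toy Python version (record size, record count, and the scratch file are all invented for illustration) that counts OS-level reads with and without a read-ahead buffer, the same effect the programmer got by reading many records at once:

```python
import io
import os
import tempfile

RECORD = 512          # bytes per record (illustrative)
N_RECORDS = 10_000

# Build a scratch data file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(RECORD * N_RECORDS))
    path = tmp.name

class CountingFile(io.FileIO):
    """Raw file object that counts how many reads hit the OS."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.os_reads = 0
    def read(self, size=-1):
        self.os_reads += 1
        return super().read(size)
    def readinto(self, b):
        self.os_reads += 1
        return super().readinto(b)

# Record-at-a-time: one OS read per record.
f = CountingFile(path, "r")
while f.read(RECORD):
    pass
unbatched = f.os_reads
f.close()

# Batched: a 32 KB read-ahead buffer, same record-sized consumer.
raw = CountingFile(path, "r")
buf = io.BufferedReader(raw, buffer_size=64 * RECORD)
while buf.read(RECORD):
    pass
batched = raw.os_reads
buf.close()
os.unlink(path)

print(f"unbatched: {unbatched} OS reads, batched: {batched} OS reads")
```

On this toy file the buffered version makes dozens of times fewer OS reads; against a 40MB/sec drive, that difference is the whole story.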

    So in your case, I think you should profile the IO load before you look for a formula or you will find the wrong answer.

    Steve

    Steve Modica
    CTO, Small Tree Communications

  • Steve Modica

    December 30, 2010 at 4:18 pm in reply to: final share equipment question

    Video editing is a soft realtime function. It means that substantially all of the IOs FCP issues must finish in time, or you will drop frames.
(Many things are hard realtime functions, like fly-by-wire airplane controllers or data collection devices tracking Space Shuttle launches. I’ve supported those too, so Final Cut is pretty tame by comparison.)

    When you buy your system, you are not buying drives and cards. You are buying a complex soft realtime system and computer support. You need engineers that understand realtime, mission-critical systems so when something fails, they can fix it.

    I used to analyze kernel crashes for a living. People were always asking me to teach them how to use dbg or icrash so they could do it too. Truthfully, the tools I use have nothing to do with it. If you don’t understand the kernel, how is a tool going to help you? It’s like listening to the radio in China if you don’t understand Chinese!

    So in this case, once you have your storage, you need people supporting you that understand how it works and can wade through all the little menus, OS changes, hardware errata and tuning parameters to make sure *your* realtime loads work every time.

    If you buy inexpensive stuff from people that don’t understand realtime, you’ll basically have cheap shared storage that can’t handle video.

    Steve

    Steve Modica
    CTO, Small Tree Communications

  • Steve Modica

    December 30, 2010 at 4:09 pm in reply to: General questions about SAN from new user

    Hi Mike,
How this all works is a lot more complicated than you might think.

For example, FCP and QuickTime use AIO (asynchronous IO) and other things like Finder don’t. The AFP daemon uses multiple threads to handle reads, but it uses pread and pwrite rather than AIO. Of course, each application and codec might have its own IO size too.
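Here’s a minimal sketch of that pread pattern (Unix-only; the eight-block scratch file and offsets are invented for illustration): each thread passes an explicit offset, so there is no shared file position to race on, which is what makes a multi-threaded read path workable without AIO.

```python
import os
import tempfile
import threading

# Scratch file: 8 blocks of 4 KB, each filled with its block index.
BLOCK = 4096
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    for i in range(8):
        tmp.write(bytes([i]) * BLOCK)
    path = tmp.name

fd = os.open(path, os.O_RDONLY)
results = [None] * 8

def worker(block):
    # os.pread takes an explicit offset, so threads never fight over
    # the file position the way plain read()+seek() would.
    data = os.pread(fd, BLOCK, block * BLOCK)
    results[block] = data[0]

threads = [threading.Thread(target=worker, args=(b,)) for b in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # each thread read its own block
os.close(fd)
os.unlink(path)
```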

    The story of block access, File based (NAS) access and shared block access (clustered filesystems) is long. I’ve got a number of articles out there. Here’s one that’s recent.

    https://www.postmagazine.com/Publications/Post-Magazine/2010/December-1-2010/The-NAS-vs-SAN-argument.aspx

What you’re going to be doing is asking a server to do something a billion times a second in coordination with a switch and a client computer (also required to do everything a billion times a second). Really small glitches, on the order of 1 in 10,000, kill performance. Since you are editing video, this is unacceptable.
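To put a number on that 1-in-10,000 figure (the IOs-per-frame count here is an assumption for illustration):

```python
glitch_rate = 1e-4      # the 1-in-10,000 per-IO hiccup rate above
ios_per_frame = 4       # assumed IOs needed to deliver one frame
fps = 24

ios_per_hour = ios_per_frame * fps * 3600
expected_glitches = glitch_rate * ios_per_hour
p_clean_hour = (1.0 - glitch_rate) ** ios_per_hour

print(f"{expected_glitches:.0f} expected hiccups per hour of playback")
print(f"P(glitch-free hour) = {p_clean_hour:.1e}")
```

Even a tiny per-IO glitch rate compounds into dozens of stalls per hour, and the chance of a completely clean hour is effectively zero, which is why “it mostly works” isn’t good enough for realtime video.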

    In a way, TCP messes me up. It enables “green light syndrome”. You can hook up any computers over just about any switch and TCP will deal with the errors silently. You won’t know anything is wrong. You’ll get green lights, you’ll see your files and all is right with the world til you drop frames.

    In the tougher world of FCoE or AOE where TCP isn’t involved, you don’t get that easy pass. IO errors occur and problems get figured out quickly.

Poke around for a few more of the articles I have out there and you’ll get a better idea of how all this works. We worked at SGI/Cray during the invention of clustered filesystems like CXFS and Xsan.

    Steve

    Steve Modica
    CTO, Small Tree Communications

  • Steve Modica

    December 30, 2010 at 3:54 pm in reply to: Sharemax

    I’m going to quibble with Bob on this.
ShareMax is definitely *not* the same thing as what Small Tree or Maxx Digital sells.
    It may have the same chassis (bent metal) and it may have the same cards and cables.

The difference is that we set our systems up and poke our noses into how they are tuned and configured. We get right on the system and set all that up. We also deal with each OS upgrade when Apple changes those parameters and performance tanks or changes (like when 10.6 changed the default IO size or AFP gets broken because of a TCP issue).

    So the difference is that phone call when it breaks. We screen share, look at the actual individual IOs coming from the raid and figure out what to do. We know how this works from platter to graphics card. (We even know about all the bugs on the built in Ethernet chips because we actually have (and read) the errata sheets)

    If you want to take your chances, I think Sonnet sells the same chassis. It’s *real* cheap. You can put your 10^14 Read Error rate drives in there and set it up however you want 🙂

    I heard one IT guy in New York put it very succinctly: Some people have hit the realization that they have $200,000 worth of product sitting on a $300 disk drive. Some haven’t. If they haven’t hit that realization, there’s no selling them a good product.

Lots of vendors buy Small Tree cards, but they don’t have my computer engineers. We do this every day. We live and breathe it. And we don’t gouge you for it. We just make sure you can sleep at night.

  • Steve Modica

    December 30, 2010 at 3:45 pm in reply to: SAN configuration question

You should do a lot of small IO latency testing to configure the index file array. That’s a huge point of contention for heavily used databases. We helped GTE with their clone detection stuff many years ago and it was always the index files causing trouble. (especially when you inadvertently put them on your boot drive in your home directory)
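A minimal latency probe in that spirit (the file size, IO size, and sample count are arbitrary; a real test would run against the candidate index array, not a scratch file). The point is to look at the tail of the distribution, not just the average, since it’s the worst-case IOs that hurt a busy database:

```python
import os
import random
import statistics
import tempfile
import time

# Small-IO latency probe: time many 4 KB reads at random offsets.
IO = 4096
N = 2000

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(IO * 256))
    path = tmp.name

fd = os.open(path, os.O_RDONLY)
samples = []
for _ in range(N):
    off = random.randrange(256) * IO
    t0 = time.perf_counter()
    os.pread(fd, IO, off)             # one small positioned read
    samples.append(time.perf_counter() - t0)
os.close(fd)
os.unlink(path)

samples.sort()
print(f"median {statistics.median(samples) * 1e6:.1f} us, "
      f"p99 {samples[int(0.99 * N)] * 1e6:.1f} us")
```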

  • Steve Modica

    December 30, 2010 at 3:43 pm in reply to: Too early for copper 10GbE?

    10Gb ethernet with 10GbaseT phys is still expensive. The PHYs (physical layer chips) are discrete (separate) chips and they use a lot of power. So switches can’t be too dense or they will melt. Cards require extra space and chips so they use a lot of power, have higher failure rates and are expensive. However if you need it, it exists.
    Shortly (as in, we have samples and the driver work is done) there will be integrated PHY chips. The MAC (Media Access Controller) and the PHY are one chip. They use very little power and will allow the prices on both cards and switches to fall.

    Personally, I think gigabit is still a great value and if you run cat6A everywhere, you will have no trouble upgrading. Since gigabit is so cheap, what’s the risk? You might toss $2000-3000 worth of cards and switches when you go out and buy your $10000 10Gb switch? You can always repurpose that stuff for print servers or something 🙂 Additionally, RED keeps rumbling about these super low bit rate codecs. If all that stuff works out, Gigabit has a lot of legs.

    In Q1 or Q2 of next year, all the 10GbaseT stuff will be hitting.

    One caveat. FCoE (we have the driver going up as a free beta on our website soon) requires a certain bit error rate and 10GbaseT does not meet that spec. So it’s likely FCoE will work and be supported (we support it now), but there will be some length limitation to help deal with that bit error rate requirement. FCoE and other storage protocols are very sensitive to errors. This is one thing TCP gets right. It expects errors. FC and SCSI do not.

    Steve

  • Steve Modica

    December 30, 2010 at 3:36 pm in reply to: Real aggregate using built-in Mac Pro ethernet?

Link aggregation is socket balancing. It’s 802.3ad (there are both static and dynamic variants and Apple is dynamic-only. Cheap switches are often static-only).

    Clients only use a single socket to connect to the server. The spec requires that each “conversation” (aka socket) must stay on a single port for its lifetime unless there’s a failover event. So when you connect to the server, the socket is going to use one port.
The socket coming back from the server may use the other port. So in some instances when you have bidirectional traffic (like during a render with source and destination on the server) you *may* see traffic going on both ports. However, I can’t guarantee this since Apple has a frustratingly random 802.3ad assignment algorithm. If they offered more flexibility there, we could lock it down and use this effect.

    Under “normal” circumstances, you might see traffic on one port going out and the acks (lots of tiny packets) coming back on the second port.
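A toy static conversation hash shows why a single connection is stuck on one link (the hash itself is invented; Apple’s actual assignment isn’t documented): every packet of a given socket maps to the same physical port, so one client-server connection can never exceed one link’s bandwidth no matter how many ports are aggregated.

```python
import zlib

def lag_link(src_mac, dst_mac, src_port, dst_port, n_links=2):
    """Toy 802.3ad-style static hash: a conversation's addressing
    tuple always lands on the same physical link."""
    key = f"{src_mac}|{dst_mac}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_links

# Every packet of one AFP socket (client:52000 -> server:548) takes
# the same link, no matter how many times we hash it.
picks = {lag_link("aa:bb:cc", "dd:ee:ff", 52000, 548) for _ in range(1000)}
print(picks)  # a single link
```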

    I haven’t spent any time looking at the 10.6 802.3ad driver. Perhaps I can hack that to offer a way to lock in unidirectional traffic on each port with our edge-core switches. I’ll check.

    Jumbo frames are obviously a good thing to use.
Also, with our new 10Gb cards we have RSC supported. So we can aggregate incoming frames to the server. That’s a nice feature that helps performance (especially with the iMacs that don’t do jumbo frames).

    Steve

  • Steve Modica

    December 30, 2010 at 3:30 pm in reply to: Bob: which one works for me ?

    Hi Gabriele,
Small Tree does stuff like this all the time. As Bob mentions, the 10Gb network will be very fast. It will be the storage that lags.

    You will definitely need a dedicated system if you intend to have 3 stations working at once. A system trying to do FCP and share the storage will cause lots of problems. No one will be happy.

    I would have you get a new mac pro, and we would put a RAID card and 10Gb network card in there. (we have a 4 and 6 port 10Gb Ethernet card). Then you could direct attach your clients with no switch.

    You will definitely want our 6Gb based SATA storage. Nothing else is going to keep the latency numbers low enough to support 10Gb. We can quote those pieces now.

    A word of warning. The 6Gb SATA stuff has a lot of nuances. The drives are still 3Gb and you need a lot of spindles to get the required low latency. You also have to watch the Read Error rate because as you add spindles, you’ll have a higher probability of a double failure (during a rebuild for example).
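The double-failure worry is easy to quantify (the drive size and surviving-drive count here are assumptions; the 10^14 bits-per-error figure is the consumer-drive read-error spec mentioned in another post above):

```python
ure_rate = 1e-14        # unrecoverable read errors per bit (10^14 spec)
drive_tb = 2.0          # assumed drive size
surviving = 7           # assumed drives read in full during a rebuild

bits_read = surviving * drive_tb * 1e12 * 8
p_clean = (1.0 - ure_rate) ** bits_read
print(f"P(at least one URE during rebuild) = {1.0 - p_clean:.1%}")
```

With those assumptions, a rebuild has roughly a two-in-three chance of tripping at least one unrecoverable read error, which is exactly the higher double-failure probability the warning is about, and why the error-rate spec matters as spindle counts grow.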

    We have it working and are very happy with the numbers.

    Steve
