Forum Replies Created

Page 5 of 84
  • Tim Jones

    June 28, 2018 at 3:42 pm in reply to: ‘Offline’ tape backup, or ‘image’ and backup

    Hi Neil,

    Yes – the BRU container format is the same on disk and tape, so you would use the “tapewrite” command that is part of BRU PE.

    There are a lot of more advanced things that you can do, but the simplest mechanism is:
    cat /PathTo/archive.bru | tapewrite -b 128k -f ntape0
    If you shift to BRU Server, it’s an automatic process called “UpStage”. When you set up the BRU Server job for Disk 2 Disk, you can follow up at a later time with an UpStage that automatically transfers the disk file to tape, with the option of deleting the original disk archive or retaining it for local operations while sending the resulting tape(s) offsite.

    As always, contact the support team if you have any specific workflow questions.

    Tim

    Tim Jones
    CTO – TOLIS Group, Inc.
    https://www.tolisgroup.com
    BRU … because it’s the RESTORE that matters!

  • Tim Jones

    June 27, 2018 at 3:27 pm in reply to: ‘Offline’ tape backup, or ‘image’ and backup

    Hi Neil,

    The key to such a mechanism is the format in which the archive container is written. As long as the container format for the two types of containers is the same, it is quite “doable”. You just need a tool that can take the file and write it as raw data to the tape so that the format and any metadata in the container are retained.

    BRU Server already does this (D2D and D2D2T), and you can do it through a script with BRU PE using disk archiving and the included tapewrite command-line tool. Drop our support team a note for more details.

    Tim

    Tim Jones
    CTO – TOLIS Group, Inc.
    https://www.tolisgroup.com
    BRU … because it’s the RESTORE that matters!

  • As we document in many places, BRU’s reliability mechanisms – filesystem info, metadata, block-level CRC, error recovery info – consume approximately 18% extra space when writing to tape, depending on data content, ACLs, extended attributes, Finder Info, and other filesystem-specific data. This is why we provide the “How Many Tapes” button on the Backup panels. When you execute that pass, the result takes into account the actual overhead that your specific data will require. And, with BRU, you don’t have to worry about data spanning tapes – BRU handles that automatically.
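As a back-of-the-envelope illustration of that overhead math (a sketch only – the 18% figure is approximate, the capacity value assumes LTO-6, and the “How Many Tapes” pass measures your actual data), a tape-count estimate looks like:

```python
import math

# Illustrative values only; real overhead varies with ACLs,
# extended attributes, and other filesystem-specific metadata.
OVERHEAD = 0.18          # ~18% BRU container overhead (approximate)
LTO6_NATIVE_TB = 2.5     # native (uncompressed) LTO-6 capacity in TB

def tapes_needed(data_tb, native_tb=LTO6_NATIVE_TB, overhead=OVERHEAD):
    """Estimate how many tapes a backup will span, including overhead."""
    total_tb = data_tb * (1 + overhead)
    return math.ceil(total_tb / native_tb)

print(tapes_needed(10))  # 10 TB of data -> 11.8 TB on tape -> 5 LTO-6 tapes
```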

    Remember, in the grand scheme of things, tape is cheap compared to the time and cost of recreating lost data (if you even can). BRU’s container format is designed to provide the highest level of recoverability – because it’s the RESTORE that matters.

    Tim

    Tim Jones
    CTO – TOLIS Group, Inc.
    https://www.tolisgroup.com
    BRU … because it’s the RESTORE that matters!

  • Data compression in the computing world is NOT like the data compression that you associate with transcoding.

    When you “compress” media by transcoding, you are actually losing data (fidelity). It is unfortunate that the media world has chosen to call this “compression” since it is far more than simple compression and should always be referred to as “lossy” since you are losing data (fidelity) in the resulting file. Once you transcode something to a lesser format, there is no way to recover the information that was lost during the initial transcode.

    In the data world, we use compression that is known as “lossless”. This means that when you compress the data, you get back 100% of what you put in when you decompress the data. Imagine if your bank backed up your records with a lossy algorithm and you discovered that your $30,000 bank account now only has $300 because the bank restored their data after a system glitch …

    Additionally, the compression used in the tape world is known as adaptive, in that the software performing the compression (whether in the application or the drive’s firmware) is aware of the result of the compression algorithm and can recognize when compression is not effective on the data supplied.

    When you use an LTO drive (all the way back to LTO-1), you are using ALDC – Adaptive Lossless Data Compression. This means that only data that IS compressible is compressed (Adaptive), and you get back 100% of what went through the compression algorithm (Lossless) when you restore that data.
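To make the “lossless” and “adaptive” points concrete, here is a small sketch using Python’s zlib (a DEFLATE compressor, not ALDC, but the behavior is analogous): repetitive data compresses well and round-trips exactly, while random-looking data – like already-encoded media – barely shrinks at all:

```python
import os
import zlib

# Highly compressible data: repetitive text.
text = b"the RESTORE that matters! " * 1000
packed = zlib.compress(text)
assert zlib.decompress(packed) == text      # lossless: 100% round-trip
print(len(packed) / len(text))              # well under 1.0

# Incompressible data: random bytes, like already-encoded media.
media_like = os.urandom(100_000)
packed2 = zlib.compress(media_like)
print(len(packed2) / len(media_like))       # ~1.0 -- no real gain
```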

    Taking notice of that statement – “data that IS compressible” – the bulk of the files that we process in an M&E environment are already compressed. That means that the drive is going to pass most data through to the tape as it was received. This is why, when you are reading the specifications for various tape technologies, you should ONLY pay attention to the “Native” capacity and performance values. This means that you should expect to get:

    • 1.5TB on an LTO-5 tape
    • 2.5TB on an LTO-6 tape
    • 6TB on an LTO-7 tape
    • 12TB on an LTO-8 tape

    The magical numbers used by most of the manufacturers are all based on a mythical 2:1 or 2.5:1 compression of the data that you are writing to the tape. Even a normal business data server providing email and business document storage doesn’t achieve 2:1, let alone 2.5:1. We see normal business data hitting 1.3:1 or 1.4:1 on average on a good data run.
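To put numbers on that gap (illustrative arithmetic only, using LTO-6 and a realistic ~1.35:1 ratio versus the marketing 2.5:1):

```python
NATIVE_TB = 2.5               # LTO-6 native capacity

marketed = NATIVE_TB * 2.5    # 6.25 TB "compressed" capacity on the box
realistic = NATIVE_TB * 1.35  # ~3.4 TB on typical business data
media = NATIVE_TB * 1.0       # already-compressed media: plan on native only

print(f"marketed:  {marketed:.2f} TB")
print(f"realistic: {realistic:.2f} TB")
print(f"media:     {media:.2f} TB")
```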

    As far as BRU PE is concerned, we leave the compression decision up to the drive, as it is far more aware of the incoming stream. Also, since the drive has hardware dedicated to the compression task, the result is NO slowdown of your data speed when writing or reading a tape. The compression switch that you see in the Preferences only applies if you are creating BRU archives on disk. We then use the LZOP compressor to compress the data that is added to the BRU container file. Because this IS dependent on your CPU and RAM speeds, you may notice a slowdown when writing archives to disk. But this has no bearing on archives written to tape.

    Tim

    Tim Jones
    CTO – TOLIS Group, Inc.
    https://www.tolisgroup.com
    BRU … because it’s the RESTORE that matters!

  • Tim Jones

    May 23, 2018 at 5:20 pm in reply to: Calling all Super PC and Adobe CS5 Nerds

    You can buy a lot of multi-core goodness for $1,000. I can understand if this is an experiment, but I can tell you from experience that your limiting factor will be CS5. It’s really very “ignorant” of high-end systems like the one your build describes. Also, while there was CUDA support, there was limited ATI GPU support – I actually dumped my ATI cards for an Nvidia 690 with 4GB (and paid through the nose for it).

    Dell and Lenovo make some great desktop PCs with i7, 4GHz CPUs and high end Nvidia GPUs that will come in under $1,000 and deal with most CS5 tasks with aplomb.

    Tim

    Tim Jones
    CTO – TOLIS Group, Inc.
    https://www.tolisgroup.com
    BRU … because it’s the RESTORE that matters!

  • Tim Jones

    May 19, 2018 at 6:37 pm in reply to: Tool on a MAC

    In the Unix world, we refer to that as the “magic” file and it’s been around for 50+ years.

    If you want to determine a file’s type on Mac OS, run this in a Terminal:
    file /path/to/file
    For example:

    $ file /Volumes/ArGest\ Cube/dislocated-boy-backing-track.mp4
    /Volumes/ArGest Cube/dislocated-boy-backing-track.mp4: ISO Media, MP4 v2 [ISO 14496-14]

    No need for any other software unless the file is incredibly “esoteric”. And, since the /etc/magic file is simple text, you can add your own signatures as needed and maintain your own changes, or send them upstream to the maintainers to get them pushed out over time.
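As a sketch of that, a magic entry is just a line of offset, type, test, and message (the “BRUX” signature below is made up purely for illustration – substitute the real bytes of your format):

```
# custom.magic - a hypothetical entry for a made-up "BRUX" format
# columns: offset  type  test  message
0       string  BRUX    BRUX container data (hypothetical)
>4      byte    >0      \b, version %d
```

You can then test files against your private entries with `file -m custom.magic /path/to/file` before merging them into the system magic database.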

    Tim

    Tim Jones
    CTO – TOLIS Group, Inc.
    https://www.tolisgroup.com
    BRU … because it’s the RESTORE that matters!

  • Touching on a bit of FUD there – both media types will hang onto their magnetic domains for many, many years. The real difference (as IBM and HP learned with LTO-5 head assembly destruction) is the “smoothness” of the tape surface, which is what really differentiates the two formulations.

    Unless you’re buying from a questionable reseller, I don’t even think that the major manufacturers offer anything BUT BaFe tapes anymore. Fujifilm, Overland Storage, HPE, IBM, Quantum, and everyone else that we deal with stopped producing the older formulations long ago.

    Again, using (not so?) common sense will be the answer here.

    Tim

    Tim Jones
    CTO – TOLIS Group, Inc.
    https://www.tolisgroup.com
    BRU … because it’s the RESTORE that matters!

  • I wasn’t sharing that photo as a “best practices” shot, but rather to affirm that tapes aren’t as fragile as the CYA (ask if you don’t know) documents that the manufacturers publish would indicate. I was simply sharing that tapes are very robust and reliable over time. Of the tens of thousands of tapes that we’ve worked with in our labs since 1985, we’ve never had a tape fail to restore because of the manner in which we stored them. Yes, dropping a cartridge onto a cement floor is not a good idea, but neither is dropping your infant onto that same concrete floor. Some simple common sense (which I know is in limited supply nowadays) is really all that it takes.

    Alignment within the cartridge, while important to lessen the possibility of edge damage, is not as important with LTO since the path management within the drive is where the tape travel is managed, not within the cartridge like in the old QIC days. If you’ve ever disassembled an LTO cartridge (and we have many times), you would find that the tape wrap is so tight that you can barely move the tape by hitting it with a hammer.

    Common sense and a normal business environment are all that are really required to keep your data around for a very long time when using even the oldest of tape technologies.

    Tim

    Tim Jones
    CTO – TOLIS Group, Inc.
    https://www.tolisgroup.com
    BRU … because it’s the RESTORE that matters!

  • Hi Ian,

    All of that is pretty much old-tech mythology. As one of the original tape design engineers (Archive Corp., 1987–1993), it’s actually embarrassing that those myths are still around 30 years after the fact.

    Modern tapes do not require retensioning and actually ignore the command. That is an old command that was used for the TEAC data cassettes and QIC cartridge tapes of the 1980s. You haven’t needed to use the RETEN command for any device newer than DLT-8, and not at all for any of the helical scan technologies (Exabyte 8MM, VXA, AIT, DAT).

    It is quite safe to leave an LTO tape on the shelf for 50+ years and still retrieve data successfully (provided you used the right software to create the tape in the first place). Most LTO technologies also have a 10-12 year life expectancy, so you don’t need to migrate the data that often, and by simply hanging onto an existing drive and maintaining it, you could stretch that even longer. As for storage, we keep our onsite tapes in plastic boxes in the lab on a metal rack, with no special climate control (the tape drives themselves are in a more controlled environment, though). And here’s a shot of some LTO-2, LTO-4, and DAT tapes that are over 10 years old and restore perfectly, just sitting on the shelf in the closet in our video room:

    The frustration comes with LTO-8 not reading LTO-6 tapes. This means that to keep LTO-6 media available for longer, you would need to hang onto an LTO-6 drive or grab a refreshed LTO-7 drive when the manufacturers EOL the LTO-6 drives. Otherwise, a 6 will read back to 4 and a 7 will read back to 5, so the life of even an LTO-5 tape is still looking towards 15 more years of use.
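That read-compatibility rule (each drive reads two generations back, except LTO-8, which reads only LTO-7) can be sketched as a small lookup – an illustration of the rule as stated above, not a vendor matrix:

```python
def readable_generations(drive_gen):
    """Which LTO generations a drive can read: N and N-1, plus N-2
    for every generation up to LTO-7 (LTO-8 dropped N-2 read support)."""
    back = 1 if drive_gen >= 8 else 2
    return [g for g in range(drive_gen - back, drive_gen + 1) if g >= 1]

print(readable_generations(7))  # [5, 6, 7] -- an LTO-7 drive reads back to LTO-5
print(readable_generations(8))  # [7, 8]   -- an LTO-8 drive reads only LTO-7/8
```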

    Aside from the “green zone” info that I posted earlier, tape media – especially LTO tape media – is very robust.

    Now, will IBM, Quantum, etc. make this same statement? Most likely not since they need to continue selling new technologies to keep their huge corporations running. If you’re settled on LTO-7 and plan to stick with it for 10 years, they’ve lost you as a customer for 8 years.

    Tim

    Tim Jones
    CTO – TOLIS Group, Inc.
    https://www.tolisgroup.com
    BRU … because it’s the RESTORE that matters!

  • Tim Jones

    May 15, 2018 at 10:14 pm in reply to: LTO Tape Technologies update

    I suspect that you’re probably seeing a difference between Fibre Channel and SAS more than FH versus HH. Fibre Channel doesn’t suffer from the causes of the slowdown on SAS due to TLR and buffer issues. You also won’t see the slowdown with SAS if you’re using a 12Gb SAS HBA (which the Mac platform does not support – PCIe-3 x8).

    There really is no difference with the FH to HH comparison when the host environment is equal.

    Tim

    Tim Jones
    CTO – TOLIS Group, Inc.
    https://www.tolisgroup.com
    BRU … because it’s the RESTORE that matters!
