Forum Replies Created

Page 1 of 90
  • Neil Sadwelkar

    September 9, 2023 at 3:21 am in reply to: Problem Import Clips that have been exported

    Davinci Resolve Free has some limitations on the kinds of clips it can import. For example, certain 10-bit MP4 or MXF file types do not open in Resolve Free. You can import them, but they show ‘Media Offline’ when played.

    If you’re using Resolve Studio for the export and Resolve Free for the import, then what you describe is likely. If you’re using Resolve Studio for both, then this is very unusual.

  • Neil Sadwelkar

    August 10, 2023 at 3:48 am in reply to: Dynamic Backup software – opinion’s please


    Like I wrote, if the media files drive and the clone of that drive are both attached to your editing system, then Chronosync can keep them in sync, and make an archive of what’s different. So if you accidentally delete a file on the media drive, Chronosync will also delete it from the clone, but keep a backup in an archive folder.

    Carbon Copy Cloner can also do exactly that, as will SuperDuper.

    If the media drive and its clone are not attached to the same computer, but are on the same premises on a network, then Chronosync or Carbon Copy Cloner will still do the job. They will mount the remote volume as required.

    If the media drive and its clone are on different premises and connected over the Internet, then the free SyncThing, or Resilio Sync, can keep them in sync over the Internet. Of course, copying TBs of data over the Internet takes time.
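    The mirror-with-archive behaviour described above can be sketched in a few lines of Python (purely illustrative; Chronosync and the others are closed-source GUI apps, and the function and folder names here are my own):

```python
import filecmp
import shutil
from pathlib import Path

def mirror_with_archive(src: Path, dst: Path, archive: Path) -> None:
    """One-way mirror of src into dst. Files that exist only in dst
    (e.g. deleted from src) are moved into archive rather than lost,
    mimicking the 'archive folder' behaviour described above."""
    dst.mkdir(parents=True, exist_ok=True)

    # Copy new or changed items from src to dst.
    for item in src.iterdir():
        target = dst / item.name
        if item.is_dir():
            mirror_with_archive(item, target, archive / item.name)
        elif not target.exists() or not filecmp.cmp(item, target, shallow=False):
            shutil.copy2(item, target)

    # Anything in dst that is no longer in src was deleted upstream:
    # move it to the archive instead of deleting it outright.
    src_names = {p.name for p in src.iterdir()}
    for item in dst.iterdir():
        if item.name not in src_names:
            archive.mkdir(parents=True, exist_ok=True)
            shutil.move(str(item), str(archive / item.name))
```

    Run on a schedule (launchd/cron), this gives the same safety net: an accidental deletion propagates to the clone, but the file survives in the archive folder.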

  • Neil Sadwelkar

    August 7, 2023 at 10:57 am in reply to: Dynamic Backup software – opinion’s please

    You’ve not mentioned what editing system you use (PC or Mac; Avid, Premiere Pro, or something else) and whether you have projects and media on the same drive, or projects on an internal drive and media on an external drive.

    Anyway, on an Avid MC system (this would apply to Premiere Pro too), I have Carbon Copy Cloner clone my internal drive to an external drive once a day. Back when external drives could be used to boot a Mac (pre-M1/M2 Macs), an external SSD clone of the internal drive would let you boot from the external drive and have all of your work intact. If once a day was insufficient, that external SSD could be left connected and set to clone once every 2–3 hours.

    As for the external drive with the media files, you can keep a spare drive connected and have Chronosync sync the two, daily or once every few hours. As you add media to the external drive, the Chronosync clone automatically gets that media, with the folder structure intact.

    If your external media drive failed, you could rename the clone with the same name as the original failed drive; once you open your project, all media will relink automatically because the system would ‘see’ that clone as the original.

    For offsite backup of your project files, you could use Google Drive, or Dropbox to keep a continuous cloud backup.

    If you’re not comfortable saving these on the cloud, then you can keep a clone drive at another location (office/home) and have a utility like Resilio Sync or SyncThing, to sync a folder/s on your work system in sync with a folder/s on the remote backup.

  • Rodion,

    As Bob described, it’s possible to use U.3 SSDs for editing, in a RAID form, inside a NAS.

    For your use case, using one 15TB $2000 U.3 SSD may be overkill.

    A U.2 or U.3 SSD will deliver that speed only if the interface supports it: for example, if it’s installed on a PCIe card, or directly in a U.2/U.3-compatible slot inside your PC.

    If you’re on a Mac, you’ll need a Thunderbolt to U.2/U.3 adapter of some kind (OWC makes enclosures that are U.2 SSD compatible). Used in this manner, a U.2/U.3 SSD will work at about 2,500–3,000 MB/sec, which is fast enough for 8K video file processing. But then these speeds are also achievable with M.2 NVMe SSDs.

    You could also take a look at Iodyne, which makes SSD-based storage that can be connected to multiple Macs simultaneously.

    Also, bear in mind that U.2 and U.3 in some situations/enclosures/adapters are not interchangeable.

  • Neil Sadwelkar

    June 29, 2023 at 5:16 am in reply to: Data ‘migration’ as a fact of (data) life


    All the LTO backup I do is on Mac systems. For the past few years, it’s been an Intel i7 Mac Mini.

    I use Yoyotta mostly, as it offers built-in cataloguing and project management. I also have Canister, which I use for quick backups, and for restoring from tapes that were not made by me.

    I have a different, older MacBook Pro with macOS Sierra still on it, and a working Bru install, which I use when I need to retrieve an old Bru tape. Many of my clients have had their backups done by me on Bru, so I need to keep this system operational just for Bru.


  • I have both of them. I use DiskCatalogMaker for its ability to make self-contained catalogs, hold multiple drives in one catalog, copy drive catalogs from one catalog to another, and export a catalog as a .csv so I can do stuff with that data in Numbers/Excel.

    I use NeoFinder to be able to import old Bru catalogs and create a browsable, searchable catalog of Bru tapes.

    For the past couple of years, for some projects, I’ve also maintained a database of assets by importing raw camera files into a Resolve project. Compared to DCM or NF, Resolve stores more detailed metadata like timecode, resolution, codec, and camera details. And a Resolve media pool can be exported as a .csv and imported into Numbers/Excel for further analysis.


  • I wasn’t aware of RapidCopy. I use Hedge for all my copying tasks. It costs more than RapidCopy, but lets you make two or more copies at the same speed as a single copy.

    I like the idea of backup to spinning disks. I use enterprise drives in USB 3 docks, so the drives become almost like ‘cassettes’. For spinning drives, Mac OS Extended (GUID) is my preferred format. Spinning disks are easier to access at a later date than LTO tapes.

    For some tasks I have also used LTO plus a hard drive, and stored the contents of the LTO and the drive in a DiskCatalogMaker catalog. I prefer making plain catalogs (.dcmf) rather than the default catalogs with thumbnails (.dcmd), because a .dcmf is a single file while a .dcmd is a package, which can get corrupted across file systems.

    One issue you could come across as you add backups: while DiskCatalogMaker lets you ‘browse’ backups without mounting the drive, you get no idea of what an asset looks like. I’m now considering creating 1/100th-size H.264 or H.265 proxies of everything I back up, so they can be used for quick ‘browse and select’ shortlisting without touching the backup.

    Resolve lets you create proxies very fast while preserving the original folder structure. So you could have 20 TB of video files in a complex folder structure compressed to 200 GB, with the exact same folder structure, saved away on a local drive. If you have 50 backup drives of 20 TB each (1 petabyte total), then the proxies will take up just 10 TB.
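    The space saving is simple arithmetic; here is a quick Python sketch of the proxy-library maths from the paragraph above (the function name is mine):

```python
def proxy_footprint(drives: int, drive_tb: float, ratio: float = 0.01):
    """Return (total archived TB, proxy library TB) for a set of backup
    drives, at a given proxy-to-original size ratio (1/100th by default)."""
    total_tb = drives * drive_tb
    return total_tb, total_tb * ratio

# 50 backup drives of 20 TB each: 1 PB archived, 10 TB of proxies
print(proxy_footprint(50, 20))  # -> (1000, 10.0)
```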


  • Neil Sadwelkar

    May 22, 2023 at 4:15 am in reply to: Data ‘migration’ as a fact of (data) life

    Your assets are on LTO-5 tapes and LaCie Rugged drives. The LaCie drives will run at about 80 MB/sec, and the LTO tapes at about 100–120 MB/sec, for retrieval.

    Today, bare SATA drives of 20 TB each can run at twice this speed, even over USB 3.0. Bare drives run about $16 per TB. Multiple drives in a RAID enclosure run about $25–40 per TB. This is your cost per TB for 5 years of storage.
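    To put those transfer rates in perspective, a rough back-of-envelope calculation in Python (decimal units, sustained rates assumed; real-world restores will be slower):

```python
def transfer_days(data_tb: float, rate_mb_s: float) -> float:
    """Days needed to move data_tb terabytes at a sustained rate of
    rate_mb_s MB/sec (decimal units: 1 TB = 1,000,000 MB)."""
    return data_tb * 1_000_000 / rate_mb_s / 86_400

# 300 TB read off LTO-5 at ~100 MB/sec: about 35 days of tape time.
# The same 300 TB written to modern drives at ~200 MB/sec: about 17 days.
print(round(transfer_days(300, 100)), round(transfer_days(300, 200)))  # -> 35 17
```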

    Your 200 LTO-5 tapes will become 16 LTO-9 tapes if you migrate them now. So restoring the Retrospect LTO-5 tapes to hard drives, and then writing them to LTFS LTO-9 tapes, will extend their life.

    After 5 years, the cost of hard drives will halve, and they will most likely take up half the physical space as well. So your 300 TB of data, which takes up 15 drives of 20 TB now, will fit in 6 drives of 50 TB, each of which will most likely cost exactly what you paid for the 20 TB drives now. If you paid $5,000 for 15 drives of 20 TB today, 6 drives of 50 TB will cost about $2,000 five years from now, and they will hold the same amount of data. (I’m basing this on the fact that in 2017–18, 8 TB drives cost nearly what 20 TB drives cost today.)
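    The cost projection above works out like this (a Python sketch; the 5-year figures simply extrapolate the halving assumption in the post, they are not a forecast of actual prices):

```python
import math

def drive_cost(data_tb: float, drive_tb: float, usd_per_tb: float):
    """Return (whole drives needed, total cost in USD) for data_tb of storage."""
    drives = math.ceil(data_tb / drive_tb)
    return drives, drives * drive_tb * usd_per_tb

print(drive_cost(300, 20, 16))  # today: -> (15, 4800)
print(drive_cost(300, 50, 8))   # in ~5 years, if $/TB halves: -> (6, 2400)
```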

    As for your clients, your best bet is to catalog their data using something like DiskCatalogMaker and send them an email showing their data as a PDF (made from DiskCatalogMaker), offering them a certain amount per TB for 5 years. I suspect this will be less than what they would have paid for a cloud service. You could also let them pay annually.

    Even if you bill your clients $10 per TB per year, some clients with about 10 TB may be willing to pay for 5 years up front. And if you manage to convince 100 TB worth of clients, you’re looking at recovering part or all of your costs for this storage venture.
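    As a sanity check on those numbers (Python; the $10/TB/year rate and the client sizes are the ones suggested above):

```python
def billed(tb: float, usd_per_tb_year: float = 10, years: int = 5) -> float:
    """Total billed for tb terabytes of storage over the given period."""
    return tb * usd_per_tb_year * years

print(billed(10))   # a 10 TB client paying 5 years up front -> 500
print(billed(100))  # 100 TB worth of clients over 5 years -> 5000
```

    Recovering $5,000 over 5 years roughly offsets the ~$5,000 estimated above for 15 bare 20 TB drives.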

    There is, of course, the cost of the labour of doing this retrieval and migration.

  • Neil Sadwelkar

    May 20, 2023 at 3:31 am in reply to: LTO Archive vs Near line storage

    I re-read this whole thread from 2018. So much has changed. Bru went away, but got revived as Argest. Even if one had Bru licences, one needed a macOS 10.14 or earlier system to run it. Argest’s import of Bru catalogs off tape is a bit iffy, so large quantities of Bru archives are best restored using an older macOS 10.14 system.

    On-prem is being looked at once again, because those who adopted cloud storage realised that, over a long period, the monthly ‘holding’ charges quickly add up. Then there are ‘egress’ fees to get your own data back.

    One big issue with LTO backup is the availability, after extended periods, of tape drives and of the software the tapes were written with. Those who archived to LTO-1 through LTO-3 in the early 2000s may or may not find the software and the system to restore those tapes. For example, NTBackup on Windows was quite popular (in my region), and I know people with stacks of those tapes. They have no easy way to read those tapes now, as current Windows doesn’t support the software. The same goes for many other pre-LTFS software.

    ‘LTO migration’, and in general ‘data migration’ is a fact of life, and no on-prem storage is ‘forever’. I tell clients that any on-prem or off-site archive storage is good for about 5 years. After that it needs to be migrated to something else.

    The next 5 years will see (hopefully not) a few cloud storage players go down, and with that, a scramble to restore the TBs of data that’s on their servers.


  • Neil Sadwelkar

    May 19, 2023 at 5:24 am in reply to: DATA And Storage Genius needed for this one,

    This is something that many media houses with a large quantity of video and audio assets will face.

    I’m intrigued by your multi-tiered approach. Assets some here, some there, some in multiple places, is typical, and it will all get sorted over time.

    But one question, since you’re making projections for the future: how are you going to afford the ‘holding costs’ that most cloud services charge per month, which over many years and many TBs quickly run into six figures (in $$)? Plus, with some providers there are ‘egress fees’, so you pay to get your own data back.

    The question actually is, are the assets worth these costs? Do they return these investments?

    Maybe they are.

    About projecting future data requirements: the ‘formula’ you have seems fine, although data estimation is tricky. One cannot accurately predict how much (more or less) data you’ll create going forward. And with new formats coming out all the time, one cannot predict how small (or large) future assets will be. Back in the day, archives were 10-bit DPX. Then came ProRes and files shrank, but more files were created. Now we have J2K, JPEG XS, and HEVC, so you may have to factor that in.

    The other thing some people have begun to consider is ‘data audit’, ‘data pruning’, and ‘data mining’: making the data stored and archived richer and more usable. But that’s a whole new story.


