I read many posts talking about the importance of having multiple copies, but here is the problem: even if you have multiple copies, how do you make sure that EVERY FILE in each copy is good? For instance, imagine you want to view a photo taken a few years ago. When you check out copy 1 of your backup, you find the photo is already corrupted. You turn to copies 2 and 3 and find it is fine there, so you happily discard copy 1 and keep 2 and 3. The next day you want to view another photo, photo 2, and find it is dead in copy 2 but good in copy 3, so you keep copy 3 and discard copy 2. Then some day you find something wrong in copy 3, and you no longer have any copy with everything intact.

Someone may say: when we find that some files in copy 1 are dead, we make a new copy 4 from copy 2 (or 3). But the problem is that copy 2 may already contain dead files of its own, so the new copy would not solve the issue above.

Just wondering how you guys deal with this issue? Any ideas would be appreciated.

  • Melodic-Look-9428@alien.topB · 11 months ago

    It’s something I need to look into more, if I’m honest. I checked all my media in VLC, looking at the duration, and replaced any file with no duration that wouldn’t play.

    I’ve got an old backup I can refer to, and when I sync to my Synology, deleted items don’t get removed, so if something gets removed by mistake I have a couple of places to refer back to.

    There’s still the risk of ongoing corruption/bit rot, so I installed Checkrr last weekend to try to flag problematic files.

    Take a look: Checkrr
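
    Checkrr automates this kind of scan. For a rough manual pass, a small script can ask ffmpeg to decode each file and flag anything that errors out. This is only a sketch, assuming ffmpeg is installed and on the PATH; the media root and extension list are placeholders:

    ```python
    import subprocess
    from pathlib import Path

    MEDIA_ROOT = Path("/path/to/media")                     # placeholder
    EXTENSIONS = {".mkv", ".mp4", ".avi", ".mp3", ".flac"}  # placeholder list

    for f in sorted(MEDIA_ROOT.rglob("*")):
        if f.suffix.lower() not in EXTENSIONS:
            continue
        # decode the whole file to the null muxer; any stderr output is suspicious
        result = subprocess.run(
            ["ffmpeg", "-v", "error", "-i", str(f), "-f", "null", "-"],
            capture_output=True, text=True)
        if result.returncode != 0 or result.stderr.strip():
            print("SUSPECT:", f)
    ```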

  • Individual_Brick5537@alien.topB · 11 months ago

    Many storage systems have a feature called “data scrubbing” (https://en.wikipedia.org/wiki/Data_scrubbing), which Synology discusses here: https://blog.synology.com/how-data-scrubbing-protects-against-data-corruption

    This will correct errors with drives, and potentially give some early warning that a drive may fail. You will also want to run SMART tests on your drives. Quick tests often (I do daily), extended tests occasionally (I do monthly).

    The backup software should also have a way to verify the accuracy of the data, and check that the data can be restored. On Synology, HyperBackup has backup integrity check https://kb.synology.com/en-us/DSM/tutorial/What_is_backup_integrity_check_for_Hyper_Backup_tasks .

  • hobbyhacker@alien.topB · 11 months ago

    If you use real backups, and not just simple copies, then your backup software has a verify function. For simple copies you should use hash files, or something that can build a hash database and verify it. Btw, you should already be using hash checking for your live data anyway. For archiving you can create WinRAR archives with a 10% recovery record, so they can self-verify and self-repair easily.
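
    For the simple-copies case, a minimal sketch of the hash-database idea might look like this (assumes Python 3.8+; the manifest file name and the choice of SHA-256 are just placeholders, any strong hash and storage format will do):

    ```python
    import hashlib, json, sys
    from pathlib import Path

    def sha256(path, bufsize=1 << 20):
        """Stream the file through SHA-256 so large files don't need to fit in RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def build(root, manifest="hashes.json"):
        """Record a hash for every file under root."""
        hashes = {str(p.relative_to(root)): sha256(p)
                  for p in Path(root).rglob("*") if p.is_file()}
        Path(manifest).write_text(json.dumps(hashes, indent=2))

    def verify(root, manifest="hashes.json"):
        """Re-hash every recorded file and report missing or corrupted ones."""
        hashes = json.loads(Path(manifest).read_text())
        for rel, expected in hashes.items():
            p = Path(root) / rel
            if not p.is_file():
                print("MISSING", rel)
            elif sha256(p) != expected:
                print("CORRUPT", rel)

    if __name__ == "__main__":
        # usage: python hashdb.py build /path/to/copy   or   python hashdb.py verify /path/to/copy
        globals()[sys.argv[1]](sys.argv[2])
    ```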

      • hobbyhacker@alien.topB · 11 months ago

        Dedicated software that can create verifiable historical backup files, like Veeam or Macrium, or the newer generation like Duplicacy, Arq, Borg, etc. All of them have integrity verification built in.

  • Far_Marsupial6303@alien.topB · 11 months ago

    Ideally you would have generated and saved a hash before you copied your files, as a control. Otherwise it’s just a probability game. If the hashes of copies 1 and 2 match but copy 3 doesn’t, then the probability is that 1 and 2 are correct. If none of the three match, you toss a coin.

    If you’re on Windows, I recommend using TeraCopy for all your file copying (always copy, never move!) with verify turned on, which will perform a CRC check and generate a hash you can then save. You can also use it to test your files after the fact and generate a hash.
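
    If no hash was saved up front, the probability game above can at least be automated across the surviving copies. A rough sketch, assuming the copies share the same directory layout (the mount points are placeholders):

    ```python
    import hashlib
    from collections import Counter
    from pathlib import Path

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(1 << 20):
                h.update(chunk)
        return h.hexdigest()

    # placeholder paths to the backup copies being compared
    copies = [Path("/mnt/copy1"), Path("/mnt/copy2"), Path("/mnt/copy3")]

    # use the first copy's file list as the reference set
    for p in copies[0].rglob("*"):
        if not p.is_file():
            continue
        rel = p.relative_to(copies[0])
        present = [c for c in copies if (c / rel).is_file()]
        digests = {c: sha256(c / rel) for c in present}
        counts = Counter(digests.values())
        best, votes = counts.most_common(1)[0]
        if len(counts) == 1:
            continue                                    # every copy agrees
        if votes > len(present) // 2:
            bad = [str(c) for c, d in digests.items() if d != best]
            print(f"{rel}: majority agrees, suspect {', '.join(bad)}")
        else:
            print(f"{rel}: no majority, check manually (or toss a coin)")
    ```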

  • FizzicalLayer@alien.topB · 11 months ago

    I realize there are existing solutions, but I wanted my own for various reasons (a better fit to the peculiar way I store and back up).

    It was straightforward to write a Python script to crawl a directory tree, adding files to an SQLite database. The script has a few commands:

    - “check” computes checksums on files whose modification times have changed since the last check, or on any file whose checksum is older than X days (this is how bit rot is found).

    - “parity” uses par2 to compute parity files for all files in the database, and stores them in a “.par2” directory at the directory tree root so they don’t clutter the tree.

    I like this because I can compute checksums and parity files per directory tree (movies, music, photos, etc.) and per disk (no RAID here, just JBOD + mergerfs). Each disk corresponds exactly to a backup set kept in a Pelican case.

    The SQLite database has the nice side effect that checksum/parity computation can run in the background and be interrupted at any time (it takes a loooooooong time). The commits are atomic, so if the machine crashes or has to shut down, it’s easy to resume from the previous point.

    Surely… SURELY… someone has already written this. But it took me a couple of afternoons to roll my own. Now I have parity and the ability to detect bit rot on all live disks and backup sets.
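
    For anyone who wants to roll their own the same way, here is a stripped-down sketch of the crawl-and-checksum part only (this is not the author's actual script; the table layout, database name, and re-check interval are assumptions, and the par2 step is left out):

    ```python
    import hashlib, sqlite3, time
    from pathlib import Path

    RECHECK_DAYS = 90   # assumed interval for re-hashing unchanged files to catch bit rot

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(1 << 20):
                h.update(chunk)
        return h.hexdigest()

    def check(root, db_path="checksums.sqlite"):
        db = sqlite3.connect(db_path)
        db.execute("""CREATE TABLE IF NOT EXISTS files
                      (path TEXT PRIMARY KEY, mtime REAL, sha256 TEXT, checked REAL)""")
        now = time.time()
        for p in Path(root).rglob("*"):
            if not p.is_file():
                continue
            mtime = p.stat().st_mtime
            row = db.execute("SELECT mtime, sha256, checked FROM files WHERE path = ?",
                             (str(p),)).fetchone()
            if row and row[0] == mtime and now - row[2] < RECHECK_DAYS * 86400:
                continue                               # unchanged and recently verified
            digest = sha256(p)
            if row and row[0] == mtime and row[1] != digest:
                print("POSSIBLE BIT ROT:", p)          # content changed, mtime didn't
            db.execute("REPLACE INTO files VALUES (?, ?, ?, ?)", (str(p), mtime, digest, now))
            db.commit()                                # commit per file, so it's resumable
        db.close()

    if __name__ == "__main__":
        check("/path/to/tree")                         # placeholder directory root
    ```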

  • DTLow@alien.topB · 11 months ago

    Versioning
    My backups are incremental (Mac Time Machine and Arq)
    If I find a file is corrupted, I can restore an earlier version

  • iMainQuake@alien.topB · 11 months ago

    “how do you make sure that EVERY FILE in each copy is good?”

    Checksums.

    Personally, I use TeraCopy to safely copy a folder/file from my main drive to my backups (there’s even an option that will save a checksum of said folder/file on the backup, so that I can later run that checksum and see if anything has become corrupted).

    What do I do if there’s corruption? Simply delete the corrupted files and replace them with good copies from other backups.

  • GNUr000t@alien.topB · 11 months ago

    Good backup software is going to have methods to verify that backed-up data is intact. When backups are stored in (potentially fixed-size) blobs, you have the option of verifying a single blob in one action instead of potentially thousands of individual files.

    By “dead” I’m also assuming you mean bit rot. While that’s a real problem, it’s not something that happens day after day at any scale an individual would be using. If the source is getting corrupted somehow and that corrupted file is being backed up, this is what version history is for.

  • dr100@alien.topB · 11 months ago

    This is why you check your backups periodically and replace the bad ones with good copies. If you’re asking how you know what’s good and bad: traditionally and fundamentally, even if many people here dismiss it, the storage already has checksums. The odds of sneaky bit rot, where the storage gives you slightly altered data instead of reporting an error, are so small that most people will never encounter it. Of course, serious data hoarders will use checksumming file systems and keep extra checksums for any archived data; archiving and backup formats also have their own checksums, if one uses those instead of dropping files onto a regular file system.

  • WikiBox@alien.topB · 11 months ago

    I use snapraid as one of my backup methods, mainly for long-term, mostly static archive backups: things that no longer change but still get added to, and that I still want to keep accessible read-only. Not for daily backups or frequently changing files or folders, nor for “permanent” offline cold storage.

    https://www.snapraid.it/

    I use 8 storage drives and two snapraid parity drives.

    Using snapraid I can then easily verify that all backed up files are 100% OK, exactly as they were when I had just backed them up.

    Snapraid can detect and fix bit rot (which has never happened to me so far), undelete accidentally deleted files or folders, and even recreate up to two failed drives.

    When I back up/archive files, I simply copy them to one of the storage drives and then ask snapraid to update the parity.

    Done!
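
    In script form, that workflow is roughly the following sketch (the paths are placeholders, and snapraid reads the actual drive layout from its own config file; `snapraid sync` and `snapraid scrub` are the relevant commands):

    ```python
    import shutil
    import subprocess

    # copy the new archive files onto one of the snapraid data drives (placeholder paths)
    shutil.copytree("/staging/photos-2023", "/mnt/disk3/photos/2023", dirs_exist_ok=True)

    # update parity to cover the newly added files
    subprocess.run(["snapraid", "sync"], check=True)

    # optionally re-verify a portion of existing data against its checksums
    subprocess.run(["snapraid", "scrub"], check=True)
    ```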
