I want to have a mirror of my local music collection on my server, and a script that periodically updates the server to, well, mirror my local collection.

But crucially, I want to convert all lossless files to lossy, preferably before uploading them.

That’s the one reason why I can’t just use git - or so I believe.

I also want locally deleted files to be deleted on the server.

Sometimes I even move files around (I believe in directory structure) and again, git deals with this perfectly. If it weren’t for the lossless-to-lossy caveat.

It would be perfect if my script could recognize that just like git does, instead of deleting and reuploading the same file to a different location.

My head is spinning round and round and before I continue messing around with find and scp it’s time to ask the community.

I am writing in bash but if some python module could help with it I’m sure I could find my way around it.

TIA


additional info:

  • Not all files in the local collection are lossless. A variety of formats.
  • The purpose of the remote is for listening/streaming with various applications
  • The lossy version is for both reducing upload and download (streaming) bandwidth. On mobile broadband FLAC tends to buffer a lot.
  • The home of the collection (and its origin) is my local machine.
  • The local machine cannot act as a server
  • Kissaki@programming.dev · 1 day ago

    > I also want locally deleted files to be deleted on the server.

    > Sometimes I even move files around (I believe in directory structure) and again, git deals with this perfectly. If it weren’t for the lossless-to-lossy caveat.

    > It would be perfect if my script could recognize that just like git does, instead of deleting and reuploading the same file to a different location.

    If you were to use Git, deleted files get deleted in the working copy, but not in history. They’re still there, taking up disk space, although they don’t get transmitted again.

    I’d look at existing backup and file sync solutions. They may have what you want.

    For an implementation, I would work with an index. If you store path + file size + content checksum, you can match files under different paths. If you compare the local index against the remote one, you can identify file moves and replay the move on the remote side too.
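
    A minimal sketch of that index idea, assuming GNU find and sha256sum are available on both machines (paths and hostname are placeholders):

        # one line per file: checksum and collection-relative path; the
        # checksum lets you match a file that merely changed location
        (cd ~/Music && find . -type f -exec sha256sum {} +) | sort > local.idx
        ssh user@server 'cd /srv/music && find . -type f -exec sha256sum {} +' | sort > remote.idx

        # lines only in local.idx are uploads (or the new path of a move),
        # lines only in remote.idx are deletes (or the old path of a move);
        # the same checksum on both sides under different paths is a move
        diff local.idx remote.idx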

  • Lysergid@lemmy.ml · 1 day ago

    Git is for text files. Your git repo might get very big after some time. Especially if you move files. But it’s your choice. Sounds like your problem can be solved with a pre-commit hook.
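
    For illustration, a hook along these lines could swap staged lossless files for lossy ones before each commit - a rough sketch assuming ffmpeg with the Opus encoder, not a tested implementation:

        #!/usr/bin/env bash
        # .git/hooks/pre-commit (sketch): replace newly staged FLACs
        # with Opus transcodes so only lossy files enter history
        set -euo pipefail
        git diff --cached --name-only --diff-filter=A -z -- '*.flac' |
        while IFS= read -r -d '' f; do
          ffmpeg -n -i "$f" -c:a libopus -b:a 128k "${f%.flac}.opus"
          git rm --cached --quiet -- "$f"
          git add -- "${f%.flac}.opus"
        done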

    • A_norny_mousse@feddit.orgOP · 15 hours ago

      > Your git repo might get very big after some time.

      I was thinking about this; it’s probably true for the origin, but I’m sure it can be mitigated or at least minimized, and maybe avoided completely on the cloning side?
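
      For the cloning side, a shallow clone should skip the history entirely (hostname and path are placeholders):

          # fetch only the latest state, not the full history
          git clone --depth=1 user@server:music.git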

    • Kissaki@programming.dev · 1 day ago

      > Your git repo might get very big after some time. Especially if you move files.

      Moving files does not noticeably increase git repo size. The files are stored as blob objects. Changing their path does not duplicate them.
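
      This is easy to check locally (filenames made up): rename a tracked file, commit, and the object store barely grows because the existing blob is reused:

          git mv 'Artist/track.flac' 'Artist/Album/track.flac'
          git commit -m 'move track'
          git count-objects -vH   # size-pack is essentially unchanged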

  • who@feddit.org · 2 days ago

    Git is for text files and retaining a history of every change and every state that has ever existed. It is the wrong tool for what you want, because it would be wasteful of resources.

    I suggest automating lossy encodings locally (there are quite a few approaches you could use here, such as a cron job with the encoder of your choice), and automating an rsync job to keep your server updated.
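
    As a rough sketch of that combination, assuming ffmpeg with the Opus encoder (paths, bitrate, and hostname are placeholders):

        #!/usr/bin/env bash
        # encode any FLAC that doesn't yet have a lossy sibling,
        # then mirror the collection (minus the originals) to the server
        set -euo pipefail
        find ~/Music -type f -name '*.flac' -print0 |
        while IFS= read -r -d '' f; do
          [ -e "${f%.flac}.opus" ] || ffmpeg -n -i "$f" -c:a libopus -b:a 128k "${f%.flac}.opus"
        done
        rsync -av --delete --exclude='*.flac' ~/Music/ user@server:/srv/music/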

    • A_norny_mousse@feddit.orgOP · 15 hours ago

      > It is the wrong tool for what you want, because it would be wasteful of resources.

      I’m actually coming round to this.

      I guess rsync can be told to remove removed files on the destination, too?

      Then I’d just exclude the lossless file extensions, and deal with them differently.

      • who@feddit.org · 15 hours ago

        > I guess rsync can be told to remove removed files on the destination, too?

        Yes. The --delete family of options is relevant here.
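
        For example (paths and hostname are placeholders):

            # --delete removes destination files that no longer exist locally;
            # --exclude keeps the lossless originals from being uploaded at all
            rsync -av --delete --exclude='*.flac' --exclude='*.wav' ~/Music/ user@server:/srv/music/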

  • kethali@lemmy.ca · 2 days ago

    I’m not sure if Syncthing will do everything you want, but it could be worth taking a look.

  • thejml@lemm.ee · 2 days ago

    Is there a reason not to have the lossless/original files on the server? What I mean is, you could set up one of the myriad of self-hosted music streaming apps, and the vast majority will transcode on the fly to lossy, appropriately compressed files for streaming, or even for downloading to remote devices for offline listening.

  • solrize@lemmy.ml · 2 days ago

    Not sure what you’re asking, but can you use git hooks? What is the purpose of the mirror: for backup, for remote listening, or what? If the mirror is the permanent home for the files, you should keep the lossless version there. Is the lossy conversion just to reduce upload bandwidth? How did you get the lossless files onto the client to begin with?

    If I imagine this setup, the lossless versions would live on the server, lossy compression would also be done on the server, and then the client could download either version.

    I think version control isn’t really what you want, since you normally won’t have multiple revisions of the same file.

    Maybe you could look at git-annex for handling the large binaries in your git repo.
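
    If git-annex looks interesting, the basic workflow is roughly this (untested sketch; the repo path and description string are arbitrary):

        git init music && cd music
        git annex init 'local machine'
        git annex add .            # large files become annexed, not regular blobs
        git commit -m 'add collection'
        git annex sync --content   # sync metadata and file contents with remotes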

    • A_norny_mousse@feddit.orgOP · 2 days ago

      The mirror is for remote listening and streaming, yes.

      The lossy version is for both reducing upload and download (streaming) bandwidth. On mobile broadband FLAC tends to buffer a lot.

      No, the home of the collection (and its origin) is my local machine.

  • hedgehog@ttrpg.network · 1 day ago

    I think the best way to handle this would be to just encode everything and upload all files. If I wanted some amount of history, I’d use some file system with automatic snapshots, like ZFS.
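
    For instance, a cron entry on the server could take a daily ZFS snapshot (dataset name is a placeholder; note that % must be escaped in crontabs):

        # snapshot the music dataset every day at 03:00
        0 3 * * * zfs snapshot tank/music@$(date +\%F)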

    If I wanted to do what you’ve outlined, I would probably use rclone with filtering for the extension types or something along those lines.
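
    Something like this, assuming a remote named server: is already configured in rclone:

        # rclone sync also deletes remote files that no longer exist locally
        rclone sync ~/Music server:/srv/music --exclude '*.flac' --exclude '*.wav'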

    If I wanted to do this with Git specifically, though, this is what I would try first:

    First, add lossless extensions (*.flac, *.wav) to my repo’s .gitignore

    Second, schedule a job on my local machine that:

    1. Watches for changes to the local file system (e.g., with inotifywait or fswatch)
    2. For any new lossless file, if there isn’t already an accompanying lossy file (i.e., identified by being collocated, having the exact same filename sans extension, with an accepted extension, e.g., .mp3, .ogg - possibly also with a confirmation that the codec is up to my standards via a call to ffprobe, avprobe, mediainfo, exiftool, or something similar), encodes the file to the preferred lossy format.
    3. Uses git status --porcelain to see if there have been any changes.
    4. If so, runs git add --all && git commit --message "Automatic commit" && git push
    5. Optionally, crafts a better commit message by checking which files have been changed, generating text like Added album: "Satin Panthers - EP" by Hudson Mohawke or Removed album: "Brat" by Charli XCX; Added album "Brat and it's the same but there's three more songs so it's not" by Charli XCX

    Third, schedule a job on my remote server that runs git pull at regular intervals.
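
    A rough sketch of the local job from the second step, combining the encode and commit stages (untested; extensions, bitrate, and paths are placeholders; run it from cron or wrap it in an inotifywait loop):

        #!/usr/bin/env bash
        set -euo pipefail
        cd ~/Music

        # steps 1-2: encode any new FLAC without a lossy sibling
        find . -type f -name '*.flac' -print0 |
        while IFS= read -r -d '' f; do
          [ -e "${f%.flac}.opus" ] || ffmpeg -n -i "$f" -c:a libopus -b:a 128k "${f%.flac}.opus"
        done

        # steps 3-4: commit and push only if something changed
        if [ -n "$(git status --porcelain)" ]; then
          git add --all
          git commit --message 'Automatic commit'
          git push
        fi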

    One issue with this approach is that if you delete a file (as opposed to moving it), the space is not recovered on your local machine or your server. If space on your server is a concern, you could work around that by running something like the answer here (adjusting the depth to an appropriate amount for your use case):

    # fetch only the most recent commit; older history becomes unreachable
    git fetch --depth=1
    # expire reflog entries so unreachable objects aren't kept alive
    git reflog expire --expire-unreachable=now --all
    # repack and delete unreachable objects regardless of age
    git gc --aggressive --prune=all

    Another potential issue is that what I described above involves having an intermediary Git host to push to and pull from, e.g., a hosted Git forge like GitHub, Codeberg, etc. This could result in copyright complaints or something along those lines, though.

    Alternatively, you could use your server as the Git server (or check out Forgejo if you want a Git forge as well), but then you can’t use the above trick to prune file history and save space from deleted files (on the server, at least - you could on your local machine, I think). If you then check out your working copy in a way such that Git can use hard links, you should at least be able to avoid needing to store two copies on your server.

    The other thing to check out, if you take this approach, is Git LFS. EDIT: Actually, I take that back - you probably don’t want to use Git LFS.