I accidentally untarred an archive intended to be extracted in the root directory, which among other things included some files for the /etc directory.
I went on to run rm -rv ~/etc, but quickly typed rm -rv /etc instead and hit Enter, while using a root account.

  • thespcicifcocean@lemmy.world · ↑1 · 16 minutes ago

    Ohohoho man did you ever fuck up. I did that once too. I can’t remember how I fixed it. I think I had to reinstall the whole OS

  • MonkeMischief@lemmy.today · ↑22 · 3 hours ago

    OOOOOOOOOOOF!!

    One trick I use, because I’m SUPER paranoid about this, is to mv things I intend to delete to /tmp, or make /tmp/trash or something.

    That way, I can move it back if I have a “WHAT HAVE I DONE!?” moment, or it just deletes itself upon reboot.
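
    A minimal sketch of that habit (the function name and trash path here are just examples, not a standard tool):

```shell
# "delete" by moving into a per-call subdirectory under /tmp/trash;
# /tmp is typically cleared on reboot, so mistakes stay recoverable until then
trash() {
    local dir="/tmp/trash/$(date +%s%N)"   # unique subdir avoids name clashes
    mkdir -p "$dir"
    mv -v -- "$@" "$dir/"
}
```

    Anything “deleted” this way can be moved back out of /tmp/trash until the next reboot.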

      • I Cast Fist@programming.dev · ↑1 · 24 minutes ago

        After being bitten by rm a few times, the impulse rises to alias the rm command so that it does an “rm -i”, or, better yet, to replace the rm command with a program that moves the files to be deleted to a special hidden directory, such as ~/.deleted. These tricks lull innocent users into a false sense of security.

  • dunz@feddit.nu · ↑12 · 4 hours ago

    Be happy that you didn’t remember the ~ and put a space between it and etc 😃.

    • SalmiakDragon@feddit.nu · ↑6 ↓1 · 5 hours ago

      The handbook has numbered pages, so why use “page X of the pdf”? I don’t see the page count in my mobile browser - you made me do math.

      (I think it’s page number 22 btw, for anyone else wondering)

      • I Cast Fist@programming.dev · ↑2 · 58 minutes ago

        The handbook has numbered pages, so why use “page X of the pdf”?

        Because the book’s page 1 is the pdf’s page 41, everything before is numbered with roman numerals :)

        I also wasn’t expecting anyone to try and read with a browser or reader that doesn’t show the current page number

      • StellarSt0rm@lemmy.world · ↑4 · 4 hours ago

        I don’t know if you use Firefox on your phone, but I do, and I fucking hate that I can’t jump to a page or see the page number I’m on.

        • SalmiakDragon@feddit.nu · ↑2 · 4 hours ago

          That is what I’m using. I don’t really read enough PDFs to notice it normally, but I guess it’s another reason to get off my ass about switching browsers ¯\_(ツ)_/¯

    • ThanksForAllTheFish@sh.itjust.works · ↑7 ↓2 · 6 hours ago

      The biggest flaw with cars is when they crash. When I crash my car due to user error, because I made a small mistake, this proves that cars are dangerous. Some other vehicles like planes get around this by only allowing trusted users to do dangerous actions, why can’t cars be more like planes? /s

      Always backup important data, always have the ability to restore your backups. If rm doesn’t get it, ransomware or a bad/old drive will.

      A sysadmin deleting /bin is annoying, but it shouldn’t take them more than a few mins to get a fresh copy from a backup or a donor machine. Or to just be more careful instead.

      • I Cast Fist@programming.dev · ↑1 · 48 minutes ago

        Unix aficionados accept occasional file deletion as normal. For example, consider the following excerpt from the comp.unix.questions FAQ:

        6) How do I “undelete” a file?
        Someday, you are going to accidentally type something like:
        % rm * .foo
        and find you just deleted “*” instead of “*.foo”. Consider it a rite of passage.
        Of course, any decent systems administrator should be doing regular backups. Check with your sysadmin to see if a recent backup copy of your file is available.

        “A rite of passage”? In no other industry could a manufacturer take such a cavalier attitude toward a faulty product. “But your honor, the exploding gas tank was just a rite of passage.”

        There’s a reason sane programs ask for confirmation for potentially dangerous commands

    • underscores@lemmy.zip · ↑40 ↓1 · 17 hours ago

      I agree with this take, don’t wanna blame the victim but there’s a lesson to be learned.

      • neatchee@piefed.social · ↑46 · 16 hours ago

        Except, if you read the accompanying text, they already explained the issue: they accidentally unpacked an archive into their user directory that was intended for the root directory. That’s how they got an etc dir in their user directory in the first place.

        • ThanksForAllTheFish@sh.itjust.works · ↑1 ↓2 · 6 hours ago

          Could make one archive intended to be unpacked from /etc/ and one intended to be unpacked from /home/Alice/. That way they wouldn’t need to be root for the user bit, and there would never be an etc directory to delete. And if they ran tar in list mode (t) and pwd first, they could check the intended actions were correct before running the full tar. Some tools can be dangerous, so the user should be aware and have safety measures.
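
          That check might look like this (the archive name and contents are made up; the point is that t lists without extracting):

```shell
# build a stand-in for the downloaded archive
cd "$(mktemp -d)"
mkdir -p etc/ssh
echo 'Port 22' > etc/ssh/sshd_config
tar -cf configs.tar etc

pwd                     # confirm the working directory first
tar -tvf configs.tar    # t = list the contents; nothing is written to disk
# only once the listing and pwd look right, run the full extract:
tar -xvf configs.tar -C "$(mktemp -d)"
```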

          • neatchee@piefed.social · ↑4 · 5 hours ago

            they acquired a tar package from somewhere else. the instructions said to extract it to the root directory (because of its file structure). they accidentally extracted it to their home dir

            that is how this happened. not anything like what you were saying

    • Ephera@lemmy.ml · ↑6 · 15 hours ago

      [OP] accidentally untarred archive intended to be extracted in root directory, which among others included some files for /etc directory.

    • palordrolap@fedia.io · ↑7 · 17 hours ago

      I dunno, ~/bin is a fairly common thing in my experience, not that it ends up containing many actual binaries. (The system started it, miss, honest. A quarter of the things in my system’s /bin are text based.)

      ~/etc is seriously weird though. Never seen that before. On Debians, most of the user copies of things in /etc usually end up under ~/.local/ or at ~/.filenamehere

        • palordrolap@fedia.io · ↑1 · 2 hours ago

          ~/bin is the old-school location from before .local became a thing, and some of us have stuck to that ancient habit.

      • db2@lemmy.world · ↑2 · 16 hours ago

        I use ~/config/* to put directories named the same as system ones. I got used to it in BeOS and brought it to LFS when I finally accepted BeOS wasn’t doing what I needed anymore, and have kept doing it ever since.

    • vapeloki@lemmy.world · ↑1 ↓3 · edited · 12 hours ago

      So, you don’t do backups of /etc? Or parts of it?

      I keep tars of the ssh, pam, and portage dirs for Gentoo systems. Quickest way to set stuff up.

      And before you start whining about Ansible or Puppet or whatever, I need those maybe 3-4 times a year to set up a temporary hardened system.

      But maybe, just maybe, don’t assume everyone is a fucking moron or has no idea.

      Edit: Or just read what OP did, I think that is pretty much the same.
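
      That kind of per-directory snapshot can be sketched like this (a throwaway tree stands in for the real /etc so it runs without root; the paths and directory list are examples):

```shell
# fake config tree standing in for /etc
etc=$(mktemp -d)
mkdir -p "$etc/ssh" "$etc/pam.d"
echo 'Port 22' > "$etc/ssh/sshd_config"

# one tarball per directory; -p keeps permissions, -C keeps paths relative
backup_dir=$(mktemp -d)
for d in ssh pam.d; do
    tar -czpf "$backup_dir/etc-$d.tar.gz" -C "$etc" "$d"
done
ls "$backup_dir"
```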

      • FauxLiving@lemmy.world · ↑2 · 7 hours ago

        But maybe, just maybe, don’t assume everyone is a fucking moron or has no idea.

        Well, OP didn’t say they used Arch, btw so it’s safe to assume.

        (I hate that this needs a /s)

    • Jakeroxs@sh.itjust.works · ↑6 · 14 hours ago

      Wish ZFS didn’t constantly cause my proxmox to need to be forcefully restarted after the ZFS pool crashed randomly.

      • wylinka@szmer.info · ↑7 · 13 hours ago

        I get months of uptime on a ZFS NAS, though I’m not using Proxmox. I don’t think it’s the filesystem’s fault, you might have some hardware issue tbh. Do you have some logs?

        • Jakeroxs@sh.itjust.works · ↑2 · 13 hours ago

          I just reformatted back to ext after messing with it for about a month, been totally fine since.

          I do also assume it was something screwy with how it was handling my consumer M.2 drive.

          • vapeloki@lemmy.world · ↑4 · 12 hours ago

            I am running a ZFS raidz1-0 pool on 3 consumer NVMe drives in my workstation, doing crazy stuff on it.

            Ran ZFS under Proxmox with enterprise NVMe and had the same issue.

            It is Proxmox, not ZFS.

  • woelkchen@lemmy.world · ↑23 ↓1 · 15 hours ago

    Your first mistake was attempting to unarchive to / in the first place. Like WTF. Why would this EVER be a sane idea?

    • lukmly013@lemmy.sdf.org (OP) · ↑9 · 12 hours ago

      I don’t know if it should be a bad thing. Inside the tar archive the configs were already organized into their respective directories, so with --preserve-permissions --overwrite I could just quickly add the desired versions of the configs.
      Some examples of contents:

      -rw-r--r-- root/root      2201 2026-02-18 08:08 etc/pam.d/sshd
      -rw-r--r-- root/root       399 2026-02-17 23:22 etc/pam.d/sudo
      -rw-r--r-- root/root      2208 2026-02-18 09:13 etc/sysctl.conf
      drwx------ user/user         0 2026-02-17 23:28 home/user/.ssh/
      -rw------- user/user       205 2026-02-17 23:29 home/user/.ssh/authorized_keys
      drwxrwxr-x user/user         0 2026-02-18 16:30 home/user/.vnc/
      -rw-rw-r-- user/user        85 2026-02-18 15:32 home/user/.vnc/tigervnc.conf
      -rw-r--r-- root/root      3553 2026-02-18 08:04 etc/ssh/sshd_config
      

      Keeps permissions, keeps ownership, puts things where they belong (or copies from where they were), and you end up with a single file that can be stored on whatever filesystem.
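
      Roughly, the round trip looks like this (a scratch directory stands in for / below so it runs unprivileged; on the real system the extract step would be run as root from /, and the file contents here are made up):

```shell
# build the archive from a source tree laid out like the target system
cd "$(mktemp -d)"
mkdir -p etc/ssh
echo 'PermitRootLogin no' > etc/ssh/sshd_config
tar -cpf configs.tar etc           # relative paths: etc/..., home/...

# on the real system: cd / && tar -xf configs.tar --preserve-permissions --overwrite
root=$(mktemp -d)                  # scratch stand-in for /
tar -xf configs.tar --preserve-permissions --overwrite -C "$root"
cat "$root/etc/ssh/sshd_config"
```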

      • vapeloki@lemmy.world · ↑8 · 12 hours ago

        I assumed something like this. That’s a perfectly valid usecase for a tar extracted to /.

        But I love how people always jump to the assumption that the person on the other end is the stupid one.

    • SkaveRat@discuss.tchncs.de · ↑11 · 14 hours ago

      that was my reaction when I saw a coworker put random files and directories into / of a server

      I feel like some people don’t have a feel for how a file system works

      • Trainguyrom@reddthat.com · ↑2 · 12 hours ago

        It’s a pretty common Windows server practice to just throw random shit in the root directory of the server. I’m guilty of this at times when there isn’t a better option available to me, but I at least use a dedicated directory at the root for dumping random crap and organize the files within that directory (and delete unneeded files when done) so that it doesn’t create more work later.

      • macniel@feddit.org · ↑1 · 13 hours ago

        Maybe they do and just don’t fear the FHS? I mean, do you use the FHS in a Docker container?