My apologies for the long post.

I have a single server running Unraid with about 12 services (Pihole, Wordpress, Heimdall, Jellyfin, etc.) all running on Docker. This server is also acting as my home lab NAS. Everything runs fine for my use case (at least for right now) but I’ve been thinking about creating some type of compute cluster for my services instead of a single server.

Recently, I saw a discussion about Proxmox, Docker, LXD and Incus where a user felt that Incus was a better option than all the others. Curious, I started reading up on Incus, playing around with it, and contemplating switching all my services from Docker on Unraid to an Incus cluster (I’m thinking around 3 nodes), leaving the Unraid server to act as a NAS only.

In a nutshell, Incus/LXD appears (to me) to be a lightweight version of a VM, in which case I would have to manually install and configure each service I have running. Based on the services I’m running, that seems like a ton of work to switch to Incus when I could just do 3 physical servers (Debian) in Docker Swarm mode, or a Proxmox cluster with 3 Debian VMs running Docker in Swarm mode. If at all possible, I would like to keep my services containerized rather than in actual VMs.

What has me thinking a switch to Incus may be worth it is performance. If the performance of my services is significantly better in Incus/LXD compared to Docker, then that’s worth it to me, but I have not been able to find any kind of performance comparison between Incus/LXD and Docker. I also don’t know if there are other reasons to pick “Incus over Proxmox and Docker,” which is why I’m asking the greater community.

Here’s my question:

Based on your experience, and taking into consideration my use case (home lab/home use), do the pros of Incus outweigh its cons compared to accomplishing my goal with a cluster of standalone Docker hosts or a Proxmox cluster?

  • Avid Amoeba@lemmy.ca

    Docker has native compute performance. The processes essentially run on the host kernel with a different set of libs. The only notable overhead is in storing and loading those libs, which takes a bit more disk and RAM. That will be true for any container solution, and for VMs too; VMs have a lot of additional overhead on top of that. At a cursory glance, Incus seems to provide an interface to run Linux containers or VMs. I wouldn’t expect performance differences between containers run through it compared to Docker.
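
    A rough way to sanity-check this yourself is to time the same CPU-bound loop on the host, in Docker, and in an Incus container - the numbers should come out nearly identical. A hedged sketch, not a proper benchmark (the image is just an example, and it assumes a Debian-based Incus container called c1 already exists):

      # directly on the host
      bash -c 'time for ((i=0; i<5000000; i++)); do :; done'

      # same loop in a throwaway Docker container
      docker run --rm debian:12 bash -c 'time for ((i=0; i<5000000; i++)); do :; done'

      # same loop in an existing Incus system container named "c1"
      incus exec c1 -- bash -c 'time for ((i=0; i<5000000; i++)); do :; done'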

      • vegetaaaaaaa@lemmy.world

        VMs have a lot of additional overhead.

        The overhead is minimal: KVM VMs have near-native performance (type 1 hypervisor). There is some memory overhead as each VM runs its own kernel, but a lot of this is cancelled out by KSM [1], which is a memory de-duplication mechanism.
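
        If you want to see whether KSM is actually doing anything on your KVM host, the sysfs counters are enough to check. The paths below are standard, but whether KSM is enabled by default - and whether your VMs' memory is marked mergeable - depends on your distro and libvirt/QEMU setup:

          # is KSM running? (1 = yes)
          cat /sys/kernel/mm/ksm/run

          # enable it if it isn't
          echo 1 | sudo tee /sys/kernel/mm/ksm/run

          # pages currently de-duplicated across VMs
          grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing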

        Each VM runs its own system services (think systemd, logging, etc.), so there is some memory/disk usage overhead there - but it would be the same with Incus/LXC, as system containers do the same thing (they only share the host’s kernel).

        https://serverfault.com/questions/225719/so-really-what-is-the-overhead-of-virtualization-and-when-should-i-be-concerned

        I usually go for bare metal > on top of that, multiple VMs separated by context (think “tenant”, production/testing, public/confidential/secret, etc. - VMs provide strong isolation which containers do not, and at the very minimum it’s good to have separate VMs for “serious business” and “lab” contexts) > applications running inside the VMs, containerized or not - service/application isolation through namespaces/systemd has come a long way, see man systemd-analyze security. For me the benefit of containerization is mostly ease of deployment and… ahem, running inscrutable binary images with out-of-date dependencies made by strangers on the Internet.
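
        The systemd-analyze bit is worth trying on whatever you already run - it scores every unit's sandboxing and tells you exactly which directives would tighten it (nginx.service below is just an example unit name):

          # exposure score for every service on the machine
          systemd-analyze security

          # detailed report for one unit: lists ProtectSystem=, PrivateTmp=, NoNewPrivileges=, etc.
          systemd-analyze security nginx.service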

        If you go for a containerization solution on top of your VMs, I suggest looking into podman as a replacement for Docker (fewer bugs, smaller attack surface, no single point of failure in the form of a 1-million-lines-of-code daemon running as root, more unix-y, better integration with systemd [2]). But be aware of the maintenance overhead caused by containerization: if you’re serious about it, you will probably end up maintaining your own images.
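
        As a concrete example of that systemd integration, a rootless podman container can be turned into a regular user service in a few commands. A hedged sketch - the image, container name and port are placeholders, and newer podman versions prefer quadlet files over podman generate systemd:

          # run a rootless container, then wrap it in a systemd user unit
          podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
          mkdir -p ~/.config/systemd/user
          podman generate systemd --new --name web > ~/.config/systemd/user/web.service
          podman rm -f web                          # the unit will recreate the container on start
          systemctl --user daemon-reload
          systemctl --user enable --now web.service
          loginctl enable-linger "$USER"            # keep user services running after logout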

  • Decronym@lemmy.decronym.xyzB

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    ESXi           VMware virtual machine hypervisor
    IP             Internet Protocol
    LXC            Linux Containers
    NAS            Network-Attached Storage

    4 acronyms in this thread; the most compressed thread commented on today has 3 acronyms.

    [Thread #559 for this sub, first seen 29th Feb 2024, 23:55]

  • thirdBreakfast@lemmy.world

    Your workload (a NAS and a handful of services) is going to be a very familiar one to members of the community, so you should get some great answers.

    My (I guess slightly wacky) solution for this sort of workload has ended up being a single Docker container inside an LXC container for each service on Proxmox: Docker for ease of management with compose, and a separate LXC per service for ease of snapshots/backups.
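
    For anyone copying this setup, the main Proxmox-side tweak is enabling nesting (and usually keyctl) on the unprivileged LXC so Docker will start inside it. A rough sketch run on the Proxmox host - the VMID, template filename and storage names are just examples:

      # unprivileged Debian container with nesting enabled
      pct create 201 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
          --hostname jellyfin --cores 2 --memory 2048 --rootfs local-lvm:16 \
          --net0 name=eth0,bridge=vmbr0,ip=dhcp \
          --unprivileged 1 --features nesting=1,keyctl=1
      pct start 201

      # install docker inside the container
      pct exec 201 -- bash -c 'apt-get update && apt-get install -y docker.io docker-compose'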

    Obviously there’s some overhead, but it doesn’t seem to be significant.

    On the subject of clustering, I actually purchased three machines to do this, but have ended up abandoning that idea - I can move a service (or restore it from a snapshot to a different machine) in a couple of minutes, which provides all the redundancy I need for a home service. Now I keep the three machines as a production server, a backup (that I swap over to for a week or so every month or two) and a development machine. The NAS is separate from these.
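
    For reference, that move is basically just a vzdump backup to shared/NAS storage and a restore on the other node - something like this, with the VMID, storage names and dump filename as examples:

      # on the current node: snapshot-mode backup of container 201
      vzdump 201 --mode snapshot --compress zstd --storage nas-backups

      # on the other node: restore it and start it
      pct restore 201 /mnt/pve/nas-backups/dump/vzdump-lxc-201-<timestamp>.tar.zst --storage local-lvm
      pct start 201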

    I love Proxmox, but most times it gets mentioned here people pop up to boost Incus/LXD, so that’s something I’d like to investigate - but my skills (and Ansible playbooks) are currently built around Proxmox, so I’ve got a bit of inertia.

  • Sethayy@sh.itjust.works

    In theory Incus and LXD by default will be slightly heavier than Docker; they run a lot more bare-metal services (e.g. systemd) in the container, giving them more flexibility and a VM-like feel - which would, 99% of the time, be wasted resources in a Docker container.
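
    You can see that VM-like feel pretty quickly - an Incus system container boots a whole distro userland with systemd as PID 1 (image alias and name below are just examples):

      incus launch images:debian/12 test1
      incus exec test1 -- systemctl status --no-pager   # a full systemd service tree, unlike a typical docker container
      incus list test1                                  # the container gets its own IP, like a small VM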

    They also don’t have nearly as much ‘out of the box’ support as Docker/Podman might, especially for single-process containers.

    That being said, Docker used to run on LXC until not too long ago, so there are still many similarities between the two.

    • lal309@lemmy.worldOP

      Interesting tidbit about the performance. It has been a bit of a challenge getting “up to speed” with Incus/LXD from a guide and walkthrough perspective, although I do find their documentation pretty well organized and useful.

  • Pantherina@feddit.de

    Incus is a weird name lol.

    But jokes aside, I think Docker and Podman have more adoption?

    • lal309@lemmy.worldOP

      You are probably right. Judging by their GitHub repo, their first release was in October of 2023. If I understand correctly, Incus is a fork of Canonical’s LXD, which is not so new??? Idk. Their documentation is quite good but there aren’t a lot of “guides” out there, so yeah.

  • SayCyberOnceMore@feddit.uk

    I am in no way even slightly an “expert” here, but Incus could be considered a lightweight Proxmox…

    They’re both going to run some VMs and / or containers, but with Proxmox you get the overhead of the fancy GUI.

    So if your host(s) aren’t running guests at >90% load, then there won’t be any difference in performance.

    I’ve recently installed Proxmox because everyone else uses it (and VMware’s free ESXi is now dead)… but after pulling my hair out trying to get some things done, I’m seriously looking to move to Incus.

    There’s another post here somewhere (the one about free ESXi being killed off) with someone explaining more about Incus, which seems like it’s the way to go… maybe worth a search.

    • lal309@lemmy.worldOP

      I believe you are referencing the same post that got me curious about Incus and started me playing around with it.

      My biggest gripe is the manual installation of all the services, which I will do if it’s worth it. So far I’m not sure that it is, hence the post to get more opinions.

      There is a GUI you can install for Incus, but it’s optional and not preinstalled.

      I appreciate your input.

      • Lemongrab@lemmy.one

        I think it is a good way to isolate docker containers from the host without the heavier performance cost of a full VM. Each container can easily be given its own IP address, though the same is probably true for docker, idk.
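
        For the IP part, with Incus’s managed bridge you can pin an address from the host without touching the guest - a hedged example that assumes the default incusbr0-style bridge, an eth0 device inherited from the default profile, and a container named c1 (the subnet depends on what the bridge was given):

          incus config device override c1 eth0 ipv4.address=10.0.100.50
          incus restart c1
          incus list c1    # should now show the pinned address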

        Unrelated, but Podman is an unprivileged implementation of docker with full compatibility. You can use docker images with it, which is great, and the syntax is mostly the same.

        • lal309@lemmy.worldOP

          Haven’t really looked into Podman, as I read somewhere (if I remember correctly) that it takes quite a bit of rewriting (from docker compose to podman). Again, I might be speaking out of turn here.

          • Lemongrab@lemmy.one

            I’ve had no problems thus far. It does have a docker compatibility mode as well.
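
            That compatibility mode is essentially podman exposing a Docker-compatible API socket, so even an unchanged compose file can work against it - a hedged rootless sketch (socket path may vary by distro):

              systemctl --user enable --now podman.socket
              export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
              docker compose up -d     # or: podman-compose up -d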

        • Sethayy@sh.itjust.works

          Depending on your threat model, incus/lxd won’t add too much security, as they generally use the same underlying technology as docker, leaving things like kernel exploits just as much of a risk as with plain docker.

      • filister@lemmy.world

        You should also consider the time you will spend configuring and setting up everything in Incus.

        If you are doing this for educational purposes, go for it; otherwise I would advise against it, as Proxmox has wider support and finding information, guides, etc. for it is probably going to be easier.

        Alternatively, why don’t you dedicate one of the hosts to Incus, play around with it, and decide whether it works for you or not?

        • lal309@lemmy.worldOP

          Fair point. I’m most familiar with Docker and Proxmox. I’m sorta doing it for educational purposes, but I also have critical services (critical to me) running that must stay available.

      • SayCyberOnceMore@feddit.uk

        My install is on Arch Linux; I just installed incus and cockpit-machines and (from memory) that was enough.
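
        For anyone else on Arch, the whole bootstrap is roughly this (package and service names as currently shipped in the Arch repos - double-check them, this is from memory too):

          sudo pacman -S incus cockpit cockpit-machines
          sudo systemctl enable --now incus.service cockpit.socket
          sudo usermod -aG incus-admin "$USER"   # optional: manage incus without sudo (re-login afterwards)
          sudo incus admin init                  # storage pool, network bridge, clustering prompts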

        But, yeah, it’s definitely a step away from a full GUI…

        But I think that’s part of the appeal (to me)… there are a lot of things preinstalled with Proxmox (and XCP-ng, etc.) that I’ll just never use… so if I could get that old Raspberry Pi 3 as a 2nd node in the cluster without all the fluff… maybe that’s a good thing?

        • lal309@lemmy.worldOP

          That’s another fair point. I do have a couple of Pis collecting dust. As someone else stated, I need to consider the time it will take me to get up to speed with Incus. Can you elaborate on your experience going “from 0 to hero” with Incus? Just curious.

  • garibaldi@startrek.website

    I would think of Incus and Proxmox as equivalent - both can run containers and VMs. I like the idea of 3 Incus servers, each with a VM, joined in Docker Swarm mode for running your docker services. Then, if you have additional services that aren’t a good fit for docker, you can spin them up as separate containers or VMs in Incus as needed.
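
    A hedged sketch of that layout on the first node (names, sizes and the IP are placeholders; it assumes the VM image’s agent lets incus exec work, otherwise use the console; repeat the launch on the other two nodes and join them with the token that swarm init prints):

      # one Debian VM per Incus node to act as a swarm member
      incus launch images:debian/12 swarm1 --vm -c limits.cpu=4 -c limits.memory=8GiB
      incus exec swarm1 -- bash -c 'apt-get update && apt-get install -y docker.io'

      # on the first VM only: create the swarm (prints the join token for the others)
      incus exec swarm1 -- docker swarm init --advertise-addr <vm-ip>
      # on the other VMs: docker swarm join --token <token> <vm-ip>:2377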

    • lal309@lemmy.worldOP

      Strictly from a container perspective, wouldn’t this workflow create more overhead? For example, an Incus cluster for me would be Debian hosts (layer 1), Incus (layer 2), Incus/LXD container or VM (layer 3), docker (layer 4), app/service (layer 5). A Docker Swarm cluster (for me) would be Debian hosts (layer 1), docker (layer 2), app/service (layer 3).

      Granted, a Docker Swarm cluster would rule out the possibility of VMs without installing something else on the hosts, but I’m asking since I’m trying to keep my services in containers.

  • Nibodhika@lemmy.world

    I’ve never used Incus, but it’s not clear to me why you would choose it over docker. You said it would be preferable if performance were better; I can already tell you it’s not. Best case scenario is equivalent performance (since docker runs natively), and I doubt any VM can match that.

    • lal309@lemmy.worldOP

      Well, that’s kinda why I came here to the greater community, as I wasn’t really sure if there would be any performance gains or other upsides I’m not aware of. Based on the general feedback, it appears that there’s no clear upside to Incus.