I’m working on setting up my first homelab. I have an older Dell OptiPlex with a dual-port PCIe NIC in it. I was wondering if I could set up OPNsense as a Docker container or a virtual machine so that I could also use the extra resources of the box for other things besides just routing. Is this a good idea?

  • tvcvt@lemmy.ml

    Hey, as others have said, you can definitely set up OPNsense in a VM and it works great. I wanted to take a second and answer the first part of your question: it cannot run in Docker. Containers in Docker share their kernel with the Linux host machine. Since OPNsense isn’t a Linux distribution (it’s based on FreeBSD), it can’t make use of the shared Linux kernel.
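
    You can see the kernel sharing for yourself: a container reports the host’s kernel version, not its own. A quick sketch, assuming Docker is installed (the version string is just an example):

        $ uname -r
        6.1.0-18-amd64
        $ docker run --rm alpine uname -r
        6.1.0-18-amd64

    There’s no FreeBSD kernel anywhere in that picture, which is why OPNsense needs a full VM instead.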

  • Solar Bear@slrpnk.net

    Yeah, this is perfectly doable. I ran a very similar setup for a while. I’d recommend passing one of the NICs directly through to the VM and leaving the other for the host to keep it simple, but you can also virtualize the networking if you need something more complex. If you do pass through a single NIC, you’ll need a switch capable of handling VLANs and a bit of knowledge on how to set up what’s called a “router on a stick,” with everything trunked over one connection and separated only by VLANs.
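
    On the switch side, router on a stick boils down to one trunk port carrying all your VLAN tags to the box running OPNsense. A rough sketch in Cisco-style syntax, purely illustrative (the port name and VLAN IDs are made up, and other vendors’ syntax differs):

        interface GigabitEthernet0/1
         description Trunk to OPNsense box
         switchport mode trunk
         switchport trunk allowed vlan 10,20,30

    OPNsense then gets a tagged VLAN interface for each of those IDs on its single physical port.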

    Keep in mind, while this is a great way to save resources, it also means these systems are sharing resources. If you need to reboot, you’re taking everything down. If you have other users, that might be annoying for everyone involved.

    • wiggles@programming.devOP

      I have a managed switch. I’m a little confused how everything would be hooked up if I’m using a VM for pfSense and another VM for some Linux distro. I want the router and that distro to be isolated from my other VLANs. Could I use the onboard NIC, hooked up to the switch, to put the distro on its own VLAN?

      • Solar Bear@slrpnk.net

        You can absolutely attach each VM, and even the host, to separate NICs which each connect back to the switch and have their own VLANs. You can also attach everything to one NIC and just use one or more virtual bridges on the host to connect everything, or any combination thereof. You have complete freedom in how you do it to suit your needs. How it’s done depends on what hypervisor you’re using on the host, though, so I can’t give you exact directions.
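
        As one illustration, if the host happened to be Proxmox (or any Debian-based KVM host using ifupdown2), a single VLAN-aware bridge in /etc/network/interfaces could carry every VLAN over one NIC (the interface name and VLAN range are placeholders):

            auto vmbr0
            iface vmbr0 inet manual
                bridge-ports eno1
                bridge-stp off
                bridge-fd 0
                bridge-vlan-aware yes
                bridge-vids 2-4094

        Each VM’s virtual NIC then attaches to vmbr0 with its own VLAN tag, so the router VM and the Linux VM stay separated even though they share the port.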

        One thing I should have thought of before: if the two NICs are on one single PCI card, you probably can’t pass them through to the VM independently of one another. That would limit you to virtual networking if you want to split them.
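
        One way to check is to look at the IOMMU groups; devices in the same group generally have to be passed through together. A quick sketch (the bus addresses will differ on your machine):

            # find the PCI bus addresses of the two ports
            lspci -nn | grep -i ethernet
            # see which IOMMU group each device landed in
            find /sys/kernel/iommu_groups/ -type l | sort

        If both ports show up under the same group number, they move to the VM as a pair.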

    • teutoburg1@lemmy.ml

      I have OPNsense virtualized on a Proxmox server along with a couple of things that should hardly ever need restarts. It actually works pretty well, because the host almost never needs a reboot, and rebooting a VM is way faster than rebooting bare metal.

    • Arrayrepairman@lemmy.world

      A bit more about mine now that I have a little more time: it’s a VM on VMware with two virtual interfaces, one on my DMZ VLAN and the other a trunk with the rest of my VLANs. Inside the *sense VM, I have those two physical interfaces and then virtual interfaces that correspond to the VLANs. My router is plugged into my switch on an access port for the DMZ, and the ESXi hosts are connected to the switch with VLAN trunks. This allows me to migrate the router to another host for reboots.
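
      For anyone reproducing this on ESXi, the port groups can be configured in the GUI or with esxcli; a rough sketch, assuming the port groups already exist (the names and VLAN IDs are made up):

          # access port group for the DMZ VLAN
          esxcli network vswitch standard portgroup set -p "DMZ" -v 10
          # VLAN ID 4095 turns a port group into a trunk that passes all tags to the VM
          esxcli network vswitch standard portgroup set -p "Trunk-All" -v 4095

      The *sense VM then does its own VLAN tagging on the trunked interface.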

  • Jérôme Flesch@lemmy.kwain.net

    I use OPNsense virtualized on top of Proxmox. Each physical interface of the host system (ethX and friends) is in its own bridge (vmbrX), and for each bridge the OPNsense VM has a virtual interface that is part of it. It has worked flawlessly for months now.
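
    For reference, that layout in Proxmox’s /etc/network/interfaces looks roughly like this (the interface names are examples; modern hosts often use enpXsY-style names instead of ethX):

        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports eth0
            bridge-stp off
            bridge-fd 0

        auto vmbr1
        iface vmbr1 inet manual
            bridge-ports eth1
            bridge-stp off
            bridge-fd 0

    The OPNsense VM then gets one virtio NIC on each bridge, e.g. vmbr0 as WAN and vmbr1 as LAN.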

  • SirNuke@kbin.social

    The only issue I had with a similar setup was that, as it turned out, the old HP desktop I bought didn’t support VT-d on the chipset, only on the CPU. I had to do some crazy hacks to get it to forward a 10GbE NIC plugged into the x16 slot.

    Then I discovered the NIC I had was just old enough (ConnectX-3) that getting it to forward properly was finicky, so I had to buy a much more expensive ConnectX-4. My next task is to see if I can give OPNsense a virtual NIC, have it listen for web requests only on that interface, and use the host’s Nginx reverse proxy container for SSL.
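
    A minimal sketch of what that last part could look like on the Nginx side, assuming the OPNsense web UI answers on a virtual NIC at 10.0.99.2 (the address, hostname, and cert paths are all made up):

        server {
            listen 443 ssl;
            server_name opnsense.example.com;
            ssl_certificate     /etc/nginx/certs/fullchain.pem;
            ssl_certificate_key /etc/nginx/certs/privkey.pem;

            location / {
                # hand requests to the OPNsense UI on the internal virtual NIC
                proxy_pass https://10.0.99.2;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }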

  • HousePanther@lemmy.goblackcat.com

    Yes, you can. You need a hypervisor capable of IOMMU passthrough. I know for a fact that you can do it with libvirt and KVM/QEMU, and I think you can do it with Proxmox. That said, I’ve no experience doing this myself.
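
    The rough shape of it on a Debian-style libvirt/KVM host, as a sketch (Intel CPU assumed; the PCI address is a placeholder):

        # 1. Enable the IOMMU on the kernel command line, then reboot.
        #    In /etc/default/grub:
        #    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
        update-grub
        # 2. Confirm libvirt sees the NIC as an assignable PCI device.
        virsh nodedev-list --cap pci
        # 3. Attach the device (e.g. pci_0000_01_00_0) to the VM, either via
        #    virt-manager's "Add Hardware -> PCI Host Device" or a <hostdev>
        #    entry in the domain XML.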

  • Legarth@lemmy.fmhy.ml

    I’m doing it as a VM running on TrueNAS, and it works perfectly. The LAN NIC is shared between the host and OPNsense, and the WAN NIC is passed through to the VM as hardware.

    It’s much better than my USG 4 Pro, so that is now sitting next to the server, turned off.