Started off by:
1. Enabling unattended updates
2. Allowing SSH login with keys only
3. Creating a user with sudo privileges
4. Disabling root login
5. Enabling ufw with only the necessary ports open
6. Disabling ping
7. Changing the default SSH port (21) to something else (sketched below)
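Roughly what steps 2–5 and 7 look like on Debian/Ubuntu. This is a sketch, not my exact configs; the username "alice" and port 2222 are placeholders.

```
# Sketch of steps 2-5 and 7 on Debian/Ubuntu; "alice" and port 2222 are placeholders
adduser alice && usermod -aG sudo alice        # user with sudo privileges

# Key-only login, no root login, non-default port
sed -i -e 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' \
       -e 's/^#\?PermitRootLogin.*/PermitRootLogin no/' \
       -e 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
systemctl restart ssh

# ufw: deny inbound by default, open only what's needed
ufw default deny incoming
ufw default allow outgoing
ufw allow 2222/tcp                             # the new SSH port
ufw enable
```

Keep an existing SSH session open while restarting sshd and test the new port before closing it, so you don’t lock yourself out.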
Got the ideas from NetworkChuck.
Did this on the Proxmox host as well as all the VMs.
Any suggestions?
-
Don’t bother with disabling ICMP. You’ll use it way more than it’s worth disabling, and something like
nmap -Pn -p- X.X.X.0/24
will find all your servers anyway. (The same can be said for SSH and port 22, but moving that does stop some bots.)
-
As long as you’re not exposing anything to the global internet, you really don’t need a lot. The firewall should already deny all inbound traffic.
The next step is monitoring. It’s one thing to think your stuff is safe and locked down; it’s another thing to know it is. Something like Observium, Nagios, Zabbix, or similar is a great way to make sure everything stays up, as well as giving you insight into what everything is doing. Even Uptime Kuma is a good first step. Then add something like Wazuh to watch for security events, and OpenVAS or Nessus to look for holes. I’d even throw in CrowdSec for host-based intrusion detection. (Warning: this will quickly send you down the rabbit hole of being a SOC analyst for your own home.)
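If you want a low-effort place to start, Uptime Kuma runs as a single container. A sketch, assuming Docker is already installed; the volume name and port mapping are just the defaults I’d pick.

```
# Minimal Uptime Kuma instance for basic up/down checks and notifications
docker run -d \
  --name uptime-kuma \
  --restart unless-stopped \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1
```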
Block outbound traffic too.
Open up just what you need.
Segment internally and restrict access. You don’t need more than SSH to a Linux server, or perhaps its web interface for an application running on it.
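As a concrete example, default-deny in both directions with ufw might look like this. A sketch only; the allowed ports and the management subnet are assumptions, adjust to your network.

```
# Deny everything by default, then open only what's needed
ufw default deny incoming
ufw default deny outgoing

# Outbound: DNS, NTP, and HTTP/HTTPS for updates
ufw allow out 53
ufw allow out 123/udp
ufw allow out 80/tcp
ufw allow out 443/tcp

# Inbound: SSH only from a management subnet (example range)
ufw allow in proto tcp from 192.168.10.0/24 to any port 22

ufw enable
```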
I just set up Wazuh at work and pointed it at a non-domain, vanilla Windows 11 machine to test and it came back with over 300 events immediately. Not trying to scare anyone off as I think it’s a great tool, more just a heads up that the rabbit hole runs very deep.
-
Don’t expose anything to the outside world. If you do, use something like Cloudflare tunnels or Tailscale.
Or host a VPN on it and get in through that. Many of these microservices are insecure, and the real risk comes from opening them up to the Internet. This is important.
Also, set permissions properly where applicable.
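Tailscale is about the lowest-effort way to do the remote access part; the whole setup is two commands and nothing gets port-forwarded. The --ssh flag (Tailscale SSH) is optional, skip it if you’d rather keep plain sshd.

```
# Join the machine to your tailnet; no inbound ports opened on the router
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --ssh   # --ssh is optional (Tailscale SSH)
```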
Take a look at CIS Benchmarks and DoD STIGs. Many companies are starting to harden their infrastructure using these standards, depending on the requirements of the environment. Once you get the hang of it, automate deployment. DO NOT apply ALL of the rules at once. You WILL break shit. Every environment has security exceptions. If you’re running Active Directory, run Ping Castle and remediate any issues. Audit often and make sure everything is being monitored.
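If you want to see where a box stands against CIS before automating anything, OpenSCAP can score it against the SSG content. A sketch only; the package names, content path, and profile ID vary by distro and release, so treat all of these as examples.

```
# Score an Ubuntu host against the SSG CIS Level 1 server profile
sudo apt install -y openscap-scanner ssg-base    # exact package names vary by distro/release
sudo oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis_level1_server \
  --report cis-report.html \
  /usr/share/xml/scap/ssg/content/ssg-ubuntu2204-ds.xml
# Read the report and remediate selectively; don't apply every rule blindly
```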
Honestly, between the home lab being behind a router, NATed, patched and updated, and given the lack of users clicking on random crap and plugging in thumb drives from God Only Knows Where … I’d go out on a limb and say it’s already more secure than most PCs.
There’s also no data besides what I already put on Medium and GitHub, so it’s not a very attractive target.
I watch NetworkChuck on occasion, but some of his ideas are… questionable, I think. Not necessarily wrong, but not the “YOU MUST DO THIS” that his titles suggest (I get it, gotta get clicks, no hate).
Of the ideas you mentioned, (2), (3), (4), and (5) are somewhere between “reasonable” and “definitely”. The rest are either iffy (unattended updates) or security theater (disable ICMP, change ports).
Something to keep in mind for step (2), securing SSH login with a key: this is only as secure as your key. If your own machine, or any machine or service that stores your key, is compromised then your entire network is compromised. Granted, this is kind of obvious, but just making it clear.
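At minimum, give the key a passphrase so a stolen key file isn’t immediately usable, and unlock it via an agent per session instead of storing it decrypted. A sketch:

```
# Ed25519 key with a passphrase (prompted) and extra KDF rounds
ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519 -C "homelab"
# Unlock it once per session via the agent instead of leaving it unprotected
ssh-add ~/.ssh/id_ed25519
```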
As for security theater, specifically step (6): don’t disable ping. It adds nothing to security and makes it harder to troubleshoot. If I’m an attacker in a position for ping to reach an internal resource in the first place, then I’m just going to listen for ARP broadcasts (on the same subnet) or let an internal router do it for me (“request timed out” == host is there but not responding).
SSH shouldn’t be internet accessible. Changing the SSH port won’t slow someone down for more than 15 seconds. Disabling ping is security through obscurity at best.
Internet > firewall, IP whitelist, IPS/IDS, yada yada > DMZ/VLAN > Proxmox w/ FW:$true (rules only for the game ports) > GameServer > deny all traffic from GameServer to anywhere but the internet
The Proxmox server has its firewall on, and all the guests on Proxmox have the firewall enabled (in Proxmox). Only my main device is allowed access. No VLAN crosstalk permitted.
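For reference, the per-VM rules end up in /etc/pve/firewall/<vmid>.fw. This is a sketch: the VM ID, game port, and management IP are placeholders, and the same thing can be done through the GUI.

```
# Example per-VM firewall file; 105, the port, and the source IP are placeholders
cat <<'EOF' > /etc/pve/firewall/105.fw
[OPTIONS]
enable: 1
policy_in: DROP

[RULES]
IN ACCEPT -p udp -dport 27015
IN ACCEPT -source 192.168.10.5 -p tcp -dport 22
EOF
```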
I don’t bother with anything else internally; if they’re already inside, they deserve to SSH in with my default root/password credentials.
Don’t worry about it, no one wants to hack your Plex server xD. Just don’t expose things directly to the internet and you’ll be fine.
Hosted reverse proxy and VPN servers. I have no open ports on my home network.
Unattended updates are a recipe for trouble. I’d never enable that.
I have no public services apart from 2 OpenVPN servers. To access everything else I connect to one of the OpenVPNs and use the services through the VPN routes.
The VPN can only be accessed if you possess a cert and key. I could even add 2FA, but for now TLS cert auth is secure enough.
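The relevant server-side piece is pretty small; something like this, assuming an easy-rsa style PKI. A sketch only: the file names, subnet, and unit name are examples.

```
# Minimal cert-only OpenVPN server config; clients must present a signed cert
cat <<'EOF' > /etc/openvpn/server/homelab.conf
port 1194
proto udp
dev tun
server 10.8.0.0 255.255.255.0
ca ca.crt
cert server.crt
key server.key
dh dh.pem
tls-crypt ta.key
remote-cert-tls client
persist-key
persist-tun
verb 3
EOF
systemctl enable --now openvpn-server@homelab
```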
I run unattended-upgrades on all the Debian/Ubuntu deployments I manage. One of the deployments even has automatic reboots enabled. I still do major upgrades by hand/Terraform, but the process itself works flawlessly in my experience.
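For anyone curious, the auto-reboot part is just a couple of apt.conf directives. A sketch; the drop-in filename is my own choice, the stock settings live in 50unattended-upgrades.

```
# Enable periodic unattended upgrades
cat <<'EOF' > /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF

# Allow automatic reboots at a quiet hour (only on the one deployment)
cat <<'EOF' > /etc/apt/apt.conf.d/52unattended-upgrades-local
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
EOF

# Dry run to see what it would have done
unattended-upgrade --dry-run --debug
```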
My home lab and production network are separated by a firewall.
I have backups and plans to rebuild my lab, and I actually do it regularly.
My labs do risky things; I get comfortable with those things before doing them in production.
The default SSH port is 22, not 21.
Really, unless I’m trying to learn security (valid) or have something to protect, I just do the basic best practices.
Real security is an offline backup.
The SSH port really doesn’t matter. If it’s an exposed SSH port, it will probably get picked up whether it’s on 69 or 22.
Air gapped, no Internet access. I don’t use Internet services for any of my stuff though, so I can get away without direct Internet access.
The UDM’s regular built-in threat filtering, good firewall rules, updated services, and not opening things up to the internet unnecessarily. Be vigilant, but don’t worry too much about it. That’s it.
Automatic updates are a great strategy for breaking the system.
Some would argue that not having them is a great strategy for someone breaking into the system :P
Automatic backups are great for recovering from broken updates lol
Agreed. I do daily backups of everything to S3.
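For what it’s worth, restic makes the S3 part pretty painless. A sketch, assuming restic (the original poster didn’t name a tool); the bucket name, paths, and retention are placeholders.

```
# Credentials and repo password come from the environment
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export RESTIC_PASSWORD=...

restic -r s3:s3.amazonaws.com/my-homelab-backups init        # first run only
restic -r s3:s3.amazonaws.com/my-homelab-backups backup /etc /home /srv
restic -r s3:s3.amazonaws.com/my-homelab-backups forget \
  --keep-daily 7 --keep-weekly 4 --prune
```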