  • I have two GPUs in a single tower.

    A GTX 750 Ti that I share with my LXCs. It does Jellyfin transcode, Frigate NVR for 3 cameras, Kasm accelerated desktops, xfce4 PVE host acceleration, Jupyter TensorFlow, ErsatzTV transcode, and I plan to use it for Immich. At most it is taxed about 25 percent, but I plan to have a lot more NVR and Jellyfin streams.
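    Sharing one GPU across several LXCs usually comes down to bind-mounting the device nodes into each container. A rough sketch of what that can look like in a Proxmox container config (device major numbers and paths are examples of the usual NVIDIA setup, not taken from my system; check `ls -l /dev/nvidia*` on your host):

    ```
    # /etc/pve/lxc/<ctid>.conf -- example only, device majors vary per host
    lxc.cgroup2.devices.allow: c 195:* rwm
    lxc.cgroup2.devices.allow: c 234:* rwm
    lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
    lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
    lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
    ```

    The container also needs the same NVIDIA driver version as the host, installed without the kernel modules.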

    I also have a 1660 Ti passed through to a Windows 11 VM for gaming. I use Sunshine and Moonlight for remote gaming, but I also run Easy Diffusion for some image generation. I had a local LLM (https://github.com/oobabooga/text-generation-webui) but it was too slow for what I'm used to, so I just use Bing Chat (and now Meta on WhatsApp) for personal stuff and an LLM I have access to at work.





  • Not a great forum for this question; you should specify how much traffic you are pushing on your self-hosted home network and whether or not you expect to use the router's tunneling.

    That said, I'm a fan of the GL.iNet stuff, having used the Mango and the Opal. But for any serious hosting you'll need more than the ports provided on those travel routers.


  • Why don't you get a single machine with lots of PCIe slots, slap a hypervisor on it (like Proxmox), and then spin up a router VM that handles all the networking with virtual adapters? You can run any number of x86 operating environments if you want to learn how to secure/penetrate them, and the PCIe slots let you attack/configure physical devices as well, which may have their own peculiarities. And then when you're done, go after the hypervisor itself.

    If you want to learn against specific branded gear (which you should if you are in the industry) then I don't think that can be emulated. But before you go specifically for a Firewalla that might not be in the sector where you are working, see what is out there (like I'd get whatever Cisco network device has an OS similar to what is commonly used right now).

    Unless you are going to secure/attack clustered systems, I wouldn't go for multiple mini systems. If you figure 10 GB per OS (which is a lot for most distros), a single 1 TB NVMe can get you about 90 systems up at the same time.
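    That capacity estimate is easy to sanity check (using the same 10 GB per image figure):

    ```python
    # How many 10 GiB OS images fit on a "1 TB" NVMe drive?
    # Drive makers use decimal units: 1 TB = 10**12 bytes.
    drive_bytes = 1 * 10**12
    # The OS reports binary units: 1 GiB = 2**30 bytes.
    usable_gib = drive_bytes / 2**30       # ~931 GiB after the unit mismatch
    images = int(usable_gib // 10)         # at 10 GiB per OS image
    print(round(usable_gib), images)       # -> 931 93
    ```

    Filesystem overhead eats a little more, which is why ~90 is a safe round number.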


  • I agree with others saying you might only need one computer. $2500 buys you mostly newer stuff, except maybe the GPUs. But one machine saves a lot of headaches. You'd go with multiple machines if you wanted some high availability or redundancy, but you'd need to set that all up (plus a single failure is a single failure). Plus if you go with Windows, you'd need multiple licenses (which is no big deal, maybe $20 a pop).

    In this case, I'd say it's best to stay away from hypervisors since, as a small business, you don't want to devote a lot of time to maintaining your system. Instead of running a complicated storage setup, use a mix of fast NVMe drives and large 5-year-warranty drives, plus a separate NAS located elsewhere in your home (or even better, pay for a cloud-based backup solution) that does INCREMENTAL backups once a month, once a week, and once a day. That saves on bandwidth but gives you enough backups that a daily oopsie can be reversed, and you have an old enough backup to shrug off a ransomware attack (once you delete everything and implement a more hardened setup). If you already pay for Microsoft Office, you have 1 TB of OneDrive storage you can use at no extra cost, depending on how big your critical files are.

    Sounds like you have Windows, but it also depends on what your software requires (access to OpenGL, access to the GPU, etc.), which might make sharing the one computer much more complicated. Assuming it's simple (GPU and OpenGL acceleration), RDP is a good choice; it's sturdy, built in, and doesn't require any command line stuff. Note that Windows Pro only allows 1 user to be logged in at a time; you'll need to use something called rdpwrap to defeat that. Conversely, you can pay a lot of money for Windows Server and have that unlocked, but at that point I'd look at running Ubuntu.

    The other part of the conversation is how they will remote into your home. I highly recommend setting up a tunnel and only giving them access to their computers. The easiest way to do this is to buy a router with a Tailscale client built in, put all the computers they need behind that router, and then have them install Tailscale on their own computers. When you are done with the intern, you can easily revoke their access through the Tailscale web portal.
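    On the Tailscale side, the "only their computers" part can be enforced with tags and ACLs in the admin console. A rough sketch (the tag, group, and account names here are made up for illustration):

    ```json
    {
      "groups": { "group:interns": ["intern1@example.com"] },
      "tagOwners": { "tag:intern-pc": ["you@example.com"] },
      "acls": [
        {
          "action": "accept",
          "src": ["group:interns"],
          "dst": ["tag:intern-pc:3389"]
        }
      ]
    }
    ```

    Tag the machines behind the router with `tag:intern-pc` and the interns can only reach RDP on those boxes, nothing else on your network.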

    Lastly, your Internet provider needs to be up to snuff. I would say 100 mbit up is reasonable if all five people are going to be in there at the same time. That translates to roughly 80 mbit actual performance; take out 20 for your household use and 60/5 = 12 mbit is left for each of their RDP sessions, which is more than enough. I have 10 mbit up in one of my locations and it sucks.
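    The back-of-napkin math, sketched out (the 80 percent real-world factor is an assumption, not a measurement):

    ```python
    # Rough per-intern uplink budget for remote RDP sessions.
    uplink = 100              # advertised upload, mbit/s
    actual = uplink * 0.8     # assume ~80% of advertised in practice
    household = 20            # reserved for your own household traffic
    interns = 5
    per_user = (actual - household) / interns
    print(per_user)           # -> 12.0 mbit/s per RDP session
    ```

    A plain desktop RDP session needs far less than that; it's video playback and file transfers that eat the budget.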

    There are tons of other, more complicated and more expensive/cheaper ways to do this.



  • If you are running your Docker in an unprivileged LXC (which you should be), Proxmox shifts the container's UIDs/GIDs: container root (UID 0) shows up on the host as a high mapped ID (100000 by default).

    I'm assuming that in the Docker LXC you correctly mounted /mnt/plex and you can touch/remove files on there? If not, your folder mount into the LXC is wrong.

    If you can, and are using Docker Compose, there is probably an environment variable to set the UID/GID of the user JDownloader runs as. Set this to root or some other user that has the right access in your LXC.
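    For example, with the jlesage/jdownloader-2 image (just one option; other images use different variable names such as PUID/PGID, so check the docs for whichever image you actually run), a sketch might look like:

    ```yaml
    # Sketch only -- verify the variable names against your actual image.
    services:
      jdownloader:
        image: jlesage/jdownloader-2
        environment:
          USER_ID: "0"     # run as root inside the container, or any UID
          GROUP_ID: "0"    # that can write to the mounted folder
        volumes:
          - /mnt/plex:/output
    ```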



  • If you are trying to reach something on your LAN through an http(s) address that points at your public IP, you are looking for hairpin NAT (also called NAT loopback). This is often not enabled by default.

    But you need to look up how reverse proxies work. In short, you point your A or CNAME record at your router, you forward those ports to your reverse proxy, the reverse proxy matches the requested hostname, and then it forwards to the right internal LAN address and port. That last bit has all sorts of its own problems, including making sure those two IPs can talk to each other (which they should because they are on the same subnet, but idk, firewall?)

    Fwiw, you can always resolve hostnames at your router and skip the external domain name. A public domain is only required if you want to expose it externally and want to get to it easily/use a public SSL authority like Let's Encrypt.
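    The hostname-to-internal-host step could look like this as an nginx sketch (domain, IP, and port are placeholders):

    ```nginx
    # Sketch: route one public hostname to one internal service.
    server {
        listen 443 ssl;
        server_name jellyfin.example.com;  # the A/CNAME pointed at your router

        location / {
            proxy_pass http://192.168.1.50:8096;  # internal LAN IP:port
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```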


  • Almost anything reasonable in specs can do this. See the comment about single server vs. multiple for HA.

    Price is usually a factor and you didn’t give one. I say get a tower server for $15k and then be done with it!

    Second is space: get a 42U rack and be done with it! Hopefully you see where this is going (noise, hard drive space, whatever).

    I will say that since you don't really have anything that needs a GPU for AI/ML or gaming, or a vGPU for your VMs, a modern processor with an iGPU (ideally with AV1) will be more than good enough. The performance and efficiency cores on those modern CPUs will keep watts down. And without the need to shove a bunch of cards in your case, a smaller build will probably be enough.

    I'm a fan of towers because that is what I have: it fit my budget for an older gen, it's quiet, reasonable on power and expansion, and I can keep adding hard drives and cards to it (plus a Blu-ray drive that I wanted).



    I use Easy Diffusion on Windows; it works acceptably. I tried a 7B LLM based on LLaMA on Windows as well and it was terrible compared to cloud hosted GPT-3/4. I have a 1660 Ti to work with, so I'm VRAM limited and GPU speed limited. I have Jupyter with PyTorch but I haven't had a need to use it.

    At work, I've trained models with YOLO and have recently started using GPT-4 for writing and coding starting points.

    The best use/start is to program some useful utilities in CUDA. You don't need a honking big dGPU for that, but you do need tensor cores.