• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: October 29th, 2023

  • YT videos get taken down for any reason these days - fake copyright claims, hacking or just the creator getting fed up with YT’s policies. Entire channels vanish with no warning. Valuable videos that generate income suddenly become private only. It is not an open platform, it’s a monetised platform first and foremost.

    If you have these videos under your control, then even if they’re no longer watchable online, you still have them. That’s exactly what TA is for, and it does a superb job of it. Basically every YT video I watch that I think is useful, I hit the Save button on. Some of them are indeed no longer available online. I have entire channels downloading, so if a creator does close up shop, at least I’ve got their latest uploads.

    Obviously you need a lot of storage space - mine is over 5TB and growing. But it’s worth it.

    Also, it avoids YT’s pre-roll, mid-roll and post-roll ads.
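    TA drives yt-dlp under the hood; if you want similar save-everything behaviour from a plain yt-dlp setup, a config sketch along these lines works (the paths and output template here are just examples, not TA’s actual settings):

```
# ~/.config/yt-dlp/config
# Keep a ledger of downloaded IDs so re-runs skip what you already have
--download-archive archive.txt
# One folder per channel; the template fields are standard yt-dlp ones
-o "%(uploader)s/%(title)s [%(id)s].%(ext)s"
--embed-metadata
--write-thumbnail
```

    Point it at a channel URL and it will work through everything the archive file doesn’t already list.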




  • This. With a proper backup strategy, you are reducing the probability of a catastrophic sequence of events. It becomes P(some unlikely event) × P(some other unlikely event) × … and so on, for as many events as you can think of and/or can afford to mitigate.

    As you say, the risk will never be zero. And even the best-laid plans can fail - in the GitLab incident a few years back, all five of their backup and replication mechanisms failed at once.

    Really, all you can do is back up your data using standard methods, and TEST THE RESTORE before you need to rely on it!
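    The multiplication above is easy to sketch; a toy illustration, assuming the failure events really are independent (the per-layer probabilities are made up):

```python
def combined_failure(probabilities):
    """Probability that every backup layer fails at once,
    assuming the failure events are independent of each other."""
    p = 1.0
    for prob in probabilities:
        p *= prob
    return p

# Hypothetical yearly failure odds: primary disk, local backup, offsite copy.
print(combined_failure([0.05, 0.02, 0.01]))  # around 1e-05, i.e. one in 100,000
```

    Independence is the catch: a house fire takes out the disk and the local backup together, which is exactly why the layers need to be separated.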




  • Some consumer drives aren’t well suited to continuous use - they’re designed and rated for only a few hours a day. Heat and vibration tolerances are lower. I wore out some WD Greens that way - they were throwing errors by 60k hours.

    NAS drives are the opposite: they’re designed to run 24/7. Enterprise drives go a step further, with better vibration tolerance so they can be crammed into a chassis alongside many other spinning disks.

    Basically they’ll work, but longevity is an issue, which is particularly relevant to us hoarders. I use WD Reds in my NAS and enterprise/SAS drives in my servers now. Seems to be a good combination.
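    If you want to track wear yourself, power-on hours live in SMART attribute 9. A rough sketch using smartctl (this assumes smartctl is installed and the drive exposes the classic ATA attribute table - SAS and NVMe devices report in a different format):

```python
import re
import subprocess

def parse_power_on_hours(report):
    """Pull the raw value from a smartctl -A attribute line such as:
      9 Power_On_Hours  0x0032  070  070  000  Old_age  Always  -  22345
    Returns None if the attribute isn't present."""
    m = re.search(r"Power_On_Hours.*?(\d+)\s*$", report, re.MULTILINE)
    return int(m.group(1)) if m else None

def power_on_hours(device):
    """Query a device directly, e.g. power_on_hours('/dev/sda')."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    return parse_power_on_hours(out)
```

    Watching that number against the drive’s rated duty cycle gives you early warning before the errors start.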



  • Heavy computation rack is in an unheated conservatory with a window cracked open. Keeps the HDD temperatures around 30°C. Temperature monitoring from my PDU shows a 3°C rise from the inlet to the exhaust side of the rack. This stuff is mostly powered off when not in use. In summer it can get to 35°C in that room, so I shut everything down at that point.

    24/7 rack is in my lounge and vents the heat into the room (helps a little with heating costs). Top of the rack is about 37°C, but I’ve seen it around 45°C with all my hypervisors doing stuff. Nothing complains. As long as the intake air is within the manufacturer’s stated range, it’s fine.

    Might want to consider redirecting the heat into the house rather than venting it outside.
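    The shutdown rule above is simple enough to automate; a sketch of the decision logic (the 35°C ceiling and 3°C rise are this setup’s own figures, and 60°C is a typical but not universal HDD operating maximum - check your manufacturer’s spec):

```python
def should_power_down(inlet_c, delta_c=3.0, ambient_ceiling_c=35.0, drive_max_c=60.0):
    """True when the rack should be shut down: either the room has hit
    the chosen ambient ceiling, or the estimated exhaust temperature
    (inlet plus the measured inlet-to-exhaust rise) exceeds the drives'
    rated maximum."""
    return inlet_c >= ambient_ceiling_c or inlet_c + delta_c >= drive_max_c

print(should_power_down(30.0))  # False - normal running
print(should_power_down(35.5))  # True - the summer shutdown point
```

    Feed it the inlet reading from a PDU or a cheap temperature probe and you can script the power-off instead of doing it by hand.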


  • Motherboard, CPU and RAM - no problem at all (more accurately, problems are easy to spot with diagnostics and they shouldn’t wear out).

    Chassis - a bit of a wild card. The backplane in one of my systems is faulty.

    PSUs - ideally new.

    HDDs - almost all of mine are secondhand. Enterprise- or NAS-grade drives should have many years of life left. Ideally you’d buy new for the warranty, but my experience with secondhand has been great.

    SSDs - nope, buy new. I bought some secondhand Samsung SSDs and they developed problems - both threw IO errors after a few weeks. SSDs are cheap enough that secondhand isn’t worth the risk.

    Everything else I bought used, including the rack. In fact, the only things I bought new in my entire homelab are my router and WiFi AP.


    1. Domain auth (1 place to set passwords and SSH keys), no root SSH
    2. SSH by key only
    3. Passworded sudo (last line of defence)
    4. Only open firewall hole is OpenVPN with security dialled up high
    5. VLANs - laptops segregated from servers
    6. Strict firewall rules between VLANs
    7. TLS on everything
    8. Daily update check alerts (no automatic updates, but the alert persists until I deal with them)
    9. Separate isolated syslog server for audit trails
    10. Cold backups
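    Items 1–3 mostly boil down to a few sshd_config lines; a minimal sketch (option names are from recent OpenSSH releases - older ones spell KbdInteractiveAuthentication as ChallengeResponseAuthentication):

```
# /etc/ssh/sshd_config - key-only logins, no root over SSH
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
```

    Restart sshd after editing, and keep an existing session open until you’ve confirmed you can still get in with your key.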

  • DIY - No Regrets.

    I built my NAS out of spare parts originally and then it evolved into needing dedicated purchases. I like having full control of the OS and everything on it - it helps me understand what daemons are doing what. It does a lot more than file sharing.

    The likes of QNAP and Synology may make a more polished product with an easy UI, as well as offering support, but as far as I’m concerned, I am the support - I like fixing problems myself.

    If you’re ping-ponging between the two options, from your post it reads like cost is the biggest problem you face. But as you say, storage is a critical part of the infrastructure, and sometimes you do have to spend money on it if you want it to be reliable. I just upgraded my main NAS with a larger chassis and motherboard (from an ITX) so I can expand it further. It cost me a sizeable amount that might have bought a low-end ready-made unit, but this is far more flexible.



  • Yes, this should work fine. SAS does not care what path the signal takes - it doesn’t differentiate between internal and external, so you can run internal connections over external cables without issue. I’ve done something similar by turning my old NAS chassis into a DAS and connecting it to the internal ports of the HBA. And you can connect SAS or SATA drives to the DAS (system 1).





  • Power in the UK has gone through the roof. I’ve downsized my lab as much as I can and have at times wondered if I should shut it down completely.

    Originally I was running an EdgeRouter 4, Zyxel 48-port managed switch and custom-built NAS with an i3-9100T, 32GB ECC and 6x 12TB SAS drives in a zpool. The NAS did everything - VMs, storage, backups etc. but it was pulling quite a lot of power.

    A while back I ran a USFF PC as my server, which idled at 8W. Versus my 200W Xeon machine at the time, it paid for itself in 12 months. I dug that out and moved the VMs onto it, and storage went onto an ARM NAS. I was running too many VMs for a single USFF even maxed out, so I bought another 2 identical ones and now run the three as a Proxmox cluster.

    For the network I use a passively cooled HP 1810 managed switch and an EdgeRouter Lite, plus an Apple Airport with its transmitter dialled down to 25%.

    The ARM machine is much slower than my ZFS NAS, but it’s much lighter on power - at that point the HDDs are the significant draw, so I only run 2 non-redundant spinners and make sure they’re backed up to cold storage. I also power up my ZFS machine once a month or so and sync the data across. Other than that, I keep the big x86 machines shut down until needed.
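    The 12-month payback is easy to sanity-check; a toy calculation (the £500 hardware cost and £0.30/kWh price are made-up illustrative figures, assuming 24/7 running):

```python
def payback_months(old_watts, new_watts, price_per_kwh, hardware_cost):
    """Months until a lower-power replacement pays for itself in
    electricity, assuming it runs 24/7 (30-day months)."""
    saved_kwh_per_month = (old_watts - new_watts) * 24 * 30 / 1000
    return hardware_cost / (saved_kwh_per_month * price_per_kwh)

# 200 W Xeon replaced by an 8 W USFF at £0.30/kWh, £500 outlay:
print(round(payback_months(200, 8, 0.30, 500), 1))  # roughly 12 months
```

    Run your own wattages and tariff through it before downsizing - at low electricity prices the payback stretches out fast.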


  • My homelab got me my last 2 jobs and the one I’m switching to for significantly more money.

    I gave it a passing mention in my resume and a couple of sentences in a cover letter. It got brought up in the interview and I was able to talk through all the tech I had experience with, which sold them on me and got me an offer. For the job I’m moving to, we only had a casual interview where I discussed my lab, and it turned out that 90% of what they use, I’ve played with at home. Got an offer the same week.