  • If it worked that way in the US, it would be sensibly pro-worker while still allowing the existing employer to defend its intellectual property and its investment in employees.

    The reality is that I have a 2-year noncompete that prevents me from working for competitors within 50 miles of any of my job sites unless I want to open myself up to a lawsuit. If I left today, I'd have to travel much farther to reach an acceptable location, and I certainly wouldn't be receiving any compensation for that hassle from my previous employer. Eliminating noncompetes would be a huge boon to me and my colleagues, but this sort of court shenanigans is why I said I'd wait to get excited until it actually took effect.


  • Lots of good advice here. I've got a bunch of older WD Reds still in service (from before the SMR BS). I've also had good luck shucking drives from external enclosures and with decommissioned enterprise drives. If you go that route, depending on your enclosure or power supply you may run into issues with a live 3.3V SATA power pin causing drives to reboot. I've never had this issue on mine, but it can be fixed with a little Kapton tape or a modified SATA adapter. Shucking or buying used enterprise is definitely the cheaper way to get capacity! I'm running at least a dozen shucked drives right now and they've been great for my needs.
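
    One thing I do with every shucked or used enterprise drive before trusting it: a quick SMART sweep. A rough sketch of the idea using smartctl (from smartmontools) -- the device names are placeholders for your own, and it needs root:

    ```python
    import subprocess

    # Placeholder device names -- swap in your actual drives.
    DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]

    for dev in DEVICES:
        result = subprocess.run(
            ["smartctl", "-H", dev], capture_output=True, text=True
        )
        # smartctl prints a line like:
        # "SMART overall-health self-assessment test result: PASSED"
        verdict = next(
            (line.split(":", 1)[1].strip()
             for line in result.stdout.splitlines()
             if "overall-health" in line),
            "UNKNOWN (no SMART verdict -- check the device name)",
        )
        print(f"{dev}: {verdict}")
    ```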

    Also, if you start going beyond the ports available on your motherboard, do yourself a favor and get a quality HBA card flashed to IT mode to connect your drives. The cheapo 4-port cards I originally tried would have random dropouts in Unraid from time to time. Once I got a good HBA, it's been smooth sailing. It needs to be in IT mode to keep hardware RAID from kicking in, so that Unraid can see the individual identifiers of the disks. You can flash it yourself or use an eBay seller like ThArtOfServer who will preflash them to IT mode.
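
    A quick way to confirm the HBA is really passing disks through individually (rather than hiding them behind a RAID volume) is to look at /dev/disk/by-id -- in IT mode every disk shows up with its own model/serial identifier. A minimal Linux-only sketch:

    ```python
    from pathlib import Path

    by_id = Path("/dev/disk/by-id")
    if not by_id.exists():
        raise SystemExit("no /dev/disk/by-id -- is this a Linux box with udev?")

    for link in sorted(by_id.iterdir()):
        if "part" in link.name:  # skip partition entries, keep whole disks
            continue
        print(f"{link.name} -> {link.resolve()}")
    ```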

    Finally, be aware that expanding your array is a slippery slope. You start with 3 or 4 drives, and the next thing you know you have a rack and a 15+ drive array.


  • Well said. I've had hardware that was killed by "upgrades" or by manufacturers discontinuing its cloud features. I now install locally controllable hardware as much as possible, and it has led to a much more stable and reliable smart home over the long term. Everything ties back into Home Assistant. The only things I have left with a cloud-reliant integration are the robovac, our Nest Protect smoke alarms, and the smart vents, and that's only because there wasn't a viable local option that met my feature and price requirements. Everything else (65+ devices) is local Wi-Fi/HomeKit, Zigbee, or Z-Wave.
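
    For anyone wondering what local control looks like in practice: everything goes through Home Assistant's REST API on the LAN. A bare-bones sketch using only the standard library -- the host, the long-lived access token, and the entity_id are placeholders for your own setup:

    ```python
    import json
    import urllib.request

    HA_URL = "http://homeassistant.local:8123"  # assumed local address
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # created under Profile -> Security
    ENTITY = "light.office_lamp"                # hypothetical entity_id

    req = urllib.request.Request(
        f"{HA_URL}/api/services/light/toggle",
        data=json.dumps({"entity_id": ENTITY}).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())
    ```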


  • Even on the Windows side of things they're frustrating. My company took my perfectly working ThinkPad and replaced it last September with an "upgraded" Dell Inspiron laptop. It's a piece of crap. It wakes up in my bag all the time, randomly drops Wi-Fi, and randomly drops ViewSonic monitors. Official IT solution: this happens sometimes, we don't know why, and we're going to send you Dell monitors instead.

    *Edit: I guess it's actually a Precision, not an Inspiron. I don't buy Dells, so I don't know all the names!


  • Great advice from everyone here. For the transcoding side of things, you want an 8th-gen or newer Intel chip to handle Quick Sync with a good level of quality. I've been using a 10th-gen i5 for a couple of years now and it's been great. It regularly handles multiple transcodes and has enough cores to do all the other server stuff without an issue. You need Plex Pass for hardware transcodes if you don't already have it, or you can look at switching to Jellyfin.
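
    Before committing to hardware transcodes, it's worth confirming the iGPU is actually exposed to the OS. A rough check on Linux -- the render node path is the usual default (yours may differ), and vainfo from libva-utils is optional but lists the codec profiles Quick Sync supports:

    ```python
    import shutil
    import subprocess
    from pathlib import Path

    # Typical first render node on Linux; yours may differ.
    node = Path("/dev/dri/renderD128")
    if node.exists():
        print(f"Found render node: {node}")
    else:
        print("No render node -- check iGPU drivers/BIOS settings")

    if shutil.which("vainfo"):
        # Prints the VA-API profiles (H.264/HEVC encode, etc.) available.
        subprocess.run(["vainfo"], check=False)
    else:
        print("vainfo not installed (libva-utils); skipping profile list")
    ```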

    As mentioned elsewhere, an HBA is great once you get to large numbers of drives. I haven't seen the random drops I'd occasionally get on the cheap SATA PCI cards. If you get one that's flashed in "IT mode", the drives appear normally to your OS and you can then build software RAID however you want. If you don't want to flash it yourself, I've had good luck with stuff from The Art of Server.

    I know some people like to use old "real" server hardware for reliability or ECC memory, but I've personally had good luck with quality consumer hardware and keeping everything running on a UPS. I've learned a lot from serverbuilds.net about how compatibility works between some of the consumer gear, and about making sense of the used enterprise gear that's useful for this hobby. They also have good info on doing "budget" build-outs.
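
    If you run NUT for the UPS, polling its status is easy to script. A small sketch, assuming a NUT setup where the UPS is named "myups" on localhost (adjust for yours) -- "OL" means on line power, "OB" means on battery:

    ```python
    import subprocess

    UPS = "myups@localhost"  # assumed NUT name@host

    out = subprocess.run(
        ["upsc", UPS], capture_output=True, text=True, check=True
    ).stdout
    stats = dict(
        line.split(": ", 1) for line in out.splitlines() if ": " in line
    )
    print(f"status={stats.get('ups.status')} "
          f"charge={stats.get('battery.charge')}% "
          f"load={stats.get('ups.load')}%")
    ```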

    Most of the drives in my rack have been running for years and were shucked from external drives to save money. I think the key to success here has been keeping them cool and on consistent UPS power. Some of mine are in a disk shelf, and some are in the Rosewill case with the 12 hot-swap bays. Drives sit at 24-28 degrees Celsius.
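
    If you want to keep an eye on temps, something like this smartctl poll works as a starting point -- the device names are placeholders, it needs root, and some drives report temperature under a different attribute than Temperature_Celsius:

    ```python
    import subprocess

    DEVICES = ["/dev/sda", "/dev/sdb"]  # placeholders -- use your drives

    for dev in DEVICES:
        out = subprocess.run(
            ["smartctl", "-A", dev], capture_output=True, text=True
        ).stdout
        for line in out.splitlines():
            if "Temperature_Celsius" in line:
                # In the attribute table the raw value is the 10th column.
                print(f"{dev}: {line.split()[9]} C")
                break
        else:
            print(f"{dev}: no Temperature_Celsius attribute found")
    ```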

    Moving to the rack is a slippery slope… You start with one rack-mounted server, and soon you're adding a disk shelf and setting up 10-gigabit networking between devices. If you can, give yourself more drive bays than you need now so you have expansion space and don't have to completely rearrange the rack 3 years later.

    Also, if your budget can swing it, it's nice to keep some older hardware around for testing. I leave my "critical" stuff running on one server now so that a reboot while tinkering doesn't take down all the stuff running the house. That one only gets rebooted or has major changes made when it's not in use (and the wife isn't watching Plex). The stuff that doesn't quite need to be 24/7 gets tested on the other server, which is safe to reboot.