  • Probably best to look at it as a competitor to a Xeon D system, rather than any full-size server.

    We use a few of the Dell XR4000 at work (https://www.dell.com/en-us/shop/ipovw/poweredge-xr4510c), as they’re small, low power, and able to be mounted in a 2-post comms rack.

    Our CPU of choice there is the Xeon D-2776NT (https://www.intel.com/content/www/us/en/products/sku/226239/intel-xeon-d2776nt-processor-25m-cache-up-to-3-20-ghz/specifications.html), which features 16 cores @ 2.1GHz, 32 PCIe 4.0 lanes, and is rated at 117W.

    The 4584PX, ostensibly the top of this range, also has 16 cores but at double the clock speed, with 28 PCIe 5.0 lanes and a 120W rating, and seems like it would be a perfectly fine drop-in replacement for that.

    (I will note one significant difference: the Xeon does come with a built-in NIC, in this case the 4-port 25Gb “E823-C”, saving you space and PCIe lanes in your system.)

    As more PCIe 5.0 expansion options land, I’d expect the need for large quantities of PCIe to diminish somewhat. A 100Gb NIC would only require a x4 port, and even a x8 HBA could push more than 15GB/s. Indeed, if you compare the total possible PCIe throughput of those CPUs, 32x 4.0 is ~63GB/s, while 28x 5.0 gets you ~110GB/s.
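
    For a quick sanity check of those figures, here’s a rough sketch (assuming the usual 128b/130b encoding and counting one direction of traffic only):

    ```python
    # Per-lane PCIe throughput in GB/s, one direction, after 128b/130b encoding:
    # 4.0 runs at 16 GT/s per lane, 5.0 at 32 GT/s.
    GB_PER_LANE = {"4.0": 16 * 128 / 130 / 8, "5.0": 32 * 128 / 130 / 8}

    def total_throughput(gen: str, lanes: int) -> float:
        """Aggregate one-direction bandwidth for a lane count, in GB/s."""
        return GB_PER_LANE[gen] * lanes

    print(f"32x 4.0: {total_throughput('4.0', 32):.0f} GB/s")  # ~63  (Xeon D-2776NT)
    print(f"28x 5.0: {total_throughput('5.0', 28):.0f} GB/s")  # ~110 (4584PX)
    print(f"5.0 x4:  {total_throughput('5.0', 4):.1f} GB/s")   # vs 12.5 GB/s for a 100Gb NIC
    ```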

    Unfortunately, we’re now at the mercy of what server designs these wind up in. I have to say though, I fully expect it is going to be smaller designs marketed as “edge” compute, like that Dell system.


  • The realistic competitor here is probably magnetic tape: current-generation (LTO9) media can transfer at around 400MB/s, taking 12 hours and change to fill an 18TB tape.

    Earlier archival optical disc formats (https://news.panasonic.com/global/stories/798) claimed 360MB/s, but I believe that is six double-sided discs writing both sides simultaneously, so 30MB/s per stream. Filling the same six (300GB) discs would take about an hour and a half.

    Building the library to handle and read/write in bulk is always the issue, though. The optical system above fit 1.9PB in the space of a server rack (and I didn’t see any options to expand further when that was current technology), and by the looks of it has 7 units that can each be writing a set of discs (call that ~2.5GB/s total).

    In the same single rack you’d fit 560 LTO tapes (10.1PB for LTO9) and 21 drives (8.4GB/s).
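
    If you want to check the arithmetic, here’s a rough sketch using the vendor-quoted figures above (decimal units throughout):

    ```python
    TB = 1e12  # decimal terabytes, as media vendors quote them

    def fill_hours(capacity_tb: float, mb_per_s: float) -> float:
        """Hours to fill a unit of media at its rated write speed."""
        return capacity_tb * TB / (mb_per_s * 1e6) / 3600

    print(f"18TB LTO9 tape @ 400MB/s: {fill_hours(18, 400):.1f} h")   # ~12.5
    print(f"6x 300GB discs @ 360MB/s: {fill_hours(1.8, 360):.1f} h")  # ~1.4

    # Per-rack totals from above
    print(f"optical: 7 writers x 360MB/s = {7 * 0.36:.2f} GB/s")
    print(f"tape: {560 * 18 / 1000:.2f} PB, 21 drives x 400MB/s = {21 * 0.4:.1f} GB/s")
    ```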

    So optical has a bit of catching up to do, especially with LTO10 (due in the next year or so) doubling the capacity and further increasing the throughput.

    There’s also the small matter that every one of these massive optical disc capacity increases announced in recent years has turned out to be vapourware. I don’t doubt that someone will achieve it someday, but these announcements always seem to go nowhere.


  • From the video description:

    “I have been a Samsung product user for many years, and I don’t plan to stop anytime soon”

    And all sympathy I had for this person just vanished. If you don’t demand better, they will keep doing - and getting away with - shit like this.

    Voting with your wallet might be the one voice you have left in this world; what a way to squander it by continuing to buy products from companies whose representatives behave in this manner.


  • Free for personal use, so yes-ish. That’ll certainly be a deal-breaker for some.

    Realistically, people who are using it for personal use would probably be upgrading to the next LTS shortly after it’s released (or in Ubuntu fashion, once the xxxx.yy.1 release is out). People who don’t qualify to be using it for free anyway are more likely to be the ones keeping the same version for >5 years.


  • Specs look good for the price, and those machines work great with Linux (I’m using Ubuntu 22.04 on the slightly earlier 9310 right now).

    The only slight downside of the 9315 is that the SSD is soldered to the motherboard. Make sure you back up your data regularly, because there might be no way to get anything off the machine if it breaks.

    There’s also something of a lack of I/O: just one USB-C port on each side (which is nice, because you can plug the charger into either side). But I have no issues with Bluetooth headphones, and monitors with USB-C have always worked great for plugging in larger numbers of peripherals.


  • Not in so much detail, but it’s also really hard to define unless you have one specific metric you’re trying to hit.

    Aside from the included power/cooling costs, we’re not (overly) constrained by space in our own datacentre, so there’s no strict requirement to minimise physical footprint other than for our own gratification. With HDD capacities steadily rising, the total possible storage increases accordingly as older systems are retired…

    The performance of the disk system is honestly pretty good too, when adequately provisioned with RAM and SSD cache. Assuming the cache tiers are big enough to hold the working set across the entire storage fleet (you could never have just one multi-petabyte system), the abysmal random performance of HDDs really doesn’t come into it: filesystems like ZFS coalesce random writes into periodic sequential writes, and sequential performance is… adequate.
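
    As a toy illustration of that coalescing idea (not how ZFS is actually implemented, just the effect of its transaction groups):

    ```python
    # Toy sketch: absorb random writes in memory, then flush them to the
    # backing store in one offset-sorted, sequential-ish pass per interval.
    class CoalescingWriter:
        def __init__(self, backing: dict[int, bytes]):
            self.backing = backing               # stand-in for the HDD
            self.pending: dict[int, bytes] = {}  # RAM/SSD write cache

        def write(self, offset: int, data: bytes) -> None:
            """Random writes land in memory; rewrites are absorbed for free."""
            self.pending[offset] = data

        def flush(self) -> None:
            """Periodic flush in ascending offset order - one sequential pass."""
            for offset in sorted(self.pending):
                self.backing[offset] = self.pending[offset]
            self.pending.clear()

    disk: dict[int, bytes] = {}
    w = CoalescingWriter(disk)
    for off in (900, 17, 512, 17):  # scattered writes, one overwrite
        w.write(off, b"x")
    w.flush()  # the "disk" sees offsets 17, 512, 900 in order
    ```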

    Not mentioned either is that support costs - which typically start in the range of 10-15% of the hardware price per year - do eventually curve upward. For one brand we use, the per-terabyte cost bottoms out at 7 years of ownership, then starts to increase again as yearly support costs for older hardware rise. But you always have the option to pay the inflated price and keep it if you’re not ready to replace.
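
    To illustrate that cost curve with purely hypothetical numbers (not our actual pricing; assumes renewals grow by a fixed percentage each year):

    ```python
    # Hypothetical figures for illustration only.
    HARDWARE = 100_000.0  # up-front hardware price
    SUPPORT_Y1 = 0.12     # first-year support, as a fraction of hardware price
    GROWTH = 0.18         # assumed yearly increase in support renewal pricing

    def avg_cost_per_year(years: int) -> float:
        """Total cost of ownership averaged over the years of service.
        For a fixed-capacity system, cost per terabyte follows the same curve."""
        support = sum(HARDWARE * SUPPORT_Y1 * (1 + GROWTH) ** y for y in range(years))
        return (HARDWARE + support) / years

    for years in (3, 5, 7, 9, 11):
        print(f"{years:2d} years: {avg_cost_per_year(years):9,.0f} per year")
    # With these made-up inputs the average bottoms out around year 7,
    # then rising renewals push it back up - the same shape described above.
    ```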

    And again with the QLC, you’re paying for density more than you are for performance. On every fair metric you can imagine aside from the TB/RU density - latency, throughput/capacity, capacity/watt, capacity/dollar - there are a few tens of percent in it at most.