cross-posted from: https://beehaw.org/post/24650125
Because nothing says “fun” quite like having to restore a RAID that just saw 140TB fail.
Western Digital this week outlined its near-term and mid-term plans to increase hard drive capacities to around 60TB and beyond with optimizations that significantly increase HDD performance for the AI and cloud era. In addition, the company outlined its longer-term vision for hard disk drives’ evolution that includes a new laser technology for heat-assisted magnetic recording (HAMR), new platters with higher areal density, and HDD assemblies with up to 14 platters. As a result, WD will be able to offer drives beyond 140 TB in the 2030s.
Western Digital plans to begin volume production of its first commercial HAMR hard drives next year, with capacities starting at 40TB (CMR) or 44TB (SMR) in late 2026 and production ramping in 2027. These drives will use the company’s proven 11-platter platform with high-density media, as well as HAMR heads with edge-emitting lasers that heat the iron-platinum (FePt) alloy on top of the platters to its Curie temperature (the point at which its magnetic properties change), reducing its magnetic coercivity before data is written.
We’ve come a long way:

That was my first USB thumb drive.
IIRC that was 5 MB. It weighed about 2000 lbs.
Fee-fi-fo-fum.
Okay cool, cool, so does this mean ridiculous data centers will use these things, and then can I get another 4TB RED for my NAS so I can fit my whole life on a mirrored total of 8TB without paying 8x what it’s worth, please?
Thaaaaanks…
Is there a Lemmy community for trading surplus hardware yet?
I have a pile of HDDs and servers that I no longer use. I’ve transitioned almost all of mine to 20TB+. I might have 8 or 10 4TB REDs laying around. They’re old, though; they probably have thousands of power-on hours in the SMART data.
Right on!
I don’t know if there’s a hardware trading community yet. I think one challenge is simply that Lemmy seems to aim for more general anonymity than Reddit, and the DM system isn’t really used, to my understanding. (Except by “that fediverse girl” LOL)
Establishing a sense of reasonable trustworthiness to thwart bad actors might take some work.
Are you in Europe by any chance? :)
Ah, no. Sorry. Midwest, USA.
No apologies needed, I’m not even OP :) it was just a long shot :D
deleted by creator
8TB? That’s my ideal RAM configuration lol. ;-)
If not joking, what would you want a huge amount of ram for on a server?
ZFS ARC, baby!
No, I was joking.
Running more multi-box copies of GW2
If you were to ask me a year ago, I’d have told you that HDDs would be the next dead storage medium, but now SSDs cost more than I spent on my rig and HDDs are pushing 140TB.
I just looked up prices for the servers we sell at work.
They saw a price increase of 47%.
The SSDs and RAM saw an increase of about 25% and ~150% respectively.
Absolutely ludicrous and BS (ironically, both the price and the available stock increased, so it’s just preying on the market rather than an actual shortage lol). I wonder if tapes will make any sort of ‘comeback’ in the consumer market.
I just hope smaller sized drives become cheaper. The word “hope” is doing a lot of heavy lifting here.
Ten years from now…
Amazon search: “hard drive”
Result: 4TB $198
BARGAIN!
I think ten years from now you’ll be hard pressed to find anyone even wasting their time on something so small.
Kind of the point of my comment was that drive size/cost is stagnating despite the massive technical progress in the space. I bought my first 4TB drive in 2020 ($89). Going back to 2015, I was buying 2TB at the same price ($86). Here in 2026, what’s the ~same price? 4TB ($99). 8TB is $180.
Well, this is not a tech issue at all; it’s the fact that global economics have become a dumpster fire, particularly in America. I can’t say I’m certain there are no other factors, but economically everything has gotten out of hand.
so you say, but people still collect “antique” hardware.
Well, retro etc., but I wouldn’t consider this to be that. There’s no inherent value in a run-of-the-mill drive with merely lower storage capacity. And it’s certainly not worth a premium.
it’s not antique yet. i still have my 5.25" diskettes with quest for glory 2 on them and they’re almost antique. i think the usb drive that reads them still works. give them another couple years.
do HDDs work better than SSDs in space? because of the cosmic rays and shit? or something about intermittent power? no, really, this is a real problem that they could be already solving, one i know jack shit about.
It depends. For anything going into space, especially microsats, the biggest concerns are space, weight, and power. SSDs are better at all of those, plus they don’t have any gyroscopic effects, and they’re much less susceptible to vibrations (e.g. the absolute earthquake at liftoff and the sudden jolts during each rocket stage). They are more susceptible to high-energy particles, but they can be hardened through shielding and parity/redundancy.
For a datacenter on Mars, you’re less concerned with SWaP, only as much as you need to be to get it there as cargo. Obviously that means space and weight are still concerns, but not power.
The other factor with using fewer larger drives is that when you have a failure, you lose a lot more data, and any recovery takes longer.
So you want to be a hero!!! I only ever played the first one but fell in love with it.
Erana’s Peace. hidengoseke. Meep’s Peep, my friend.
the second was the best in the series, but they all have their charm. i really need to buy the new game the coles made
And how much will that cost? Sounds like something fantastic for my Jellyfin server. I’ll have all the 4k HDR I can get my hands on.
If you have to ask, you can’t afford it 😭
Maybe I can. The only thing you know about me is my username 😂
Who’s Barry Badrinath?
Going by the usual trend of $20+/TB, I’d say fuckin expensive.
Very cheap. Just kidding. Fuck that shit
For now, anyway. It used to be $20+/GB. I’ll settle for flooding the market with refurbished 16+TB drives.
I would not put 130TB on any one piece of hardware, because when it fails, it will be a very sad day.
Don’t even mention it. I have real world experience in that area 😂
This hardware is for those who are storing EB of data.
That’s why this is perfect for a distributed array or as a data mass grave.
You don’t really store anything in there that is needed often. Guess why (deep-)archive S3 is so much cheaper than hot S3 storage.
That’s a reason why.
Holy fuck can you imagine how long it would take to re-stripe a failed drive in a z2 array 😭
Not a clue. Care to eli5?
When you are running a server just to store files (a NAS), you generally set it up so that multiple physical hard disks are joined together into an array; that way, if one fails, none of the data is lost. You can replace a failed drive by taking it out and putting in a new working drive, and then the system has to copy all of the data over from the other drives. This process can take many hours even with the 10-20TB drives you get today, so doing the same thing with a 140TB drive would take days.
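To put rough numbers on that, here is a back-of-the-envelope sketch. The ~250MB/s sustained resilver rate is an assumed, optimistic figure; real resilvers are often slower because of pool load and fragmentation.

```python
# Back-of-the-envelope rebuild time, assuming a sustained resilver rate of
# ~250 MB/s (an optimistic, assumed figure; real-world rates are often lower).
def rebuild_hours(capacity_tb: float, rate_mb_s: float = 250.0) -> float:
    total_bytes = capacity_tb * 1e12            # decimal TB, as drive vendors count
    seconds = total_bytes / (rate_mb_s * 1e6)   # bytes / (bytes per second)
    return seconds / 3600

for tb in (20, 140):
    h = rebuild_hours(tb)
    print(f"{tb} TB at 250 MB/s ~ {h:.0f} hours ({h / 24:.1f} days)")
```

Even in that optimistic case, a full 140TB resilver works out to most of a week.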
@SmoothLiquidation @Telorand They also claim up to 8x speed improvements with HAMR. Obviously that remains to be seen, but if they could roughly match capacity improvements, that would keep restriping in the same ballpark.
Thanks! So, why does it matter? It’s a server; you can have it do the job unattended. Or does it affect other services so you’re unable to use anything else until it finishes?
It will take a long time, and while it runs it will use a lot of resources, so the server can be bogged down. It is also a dangerous time for a NAS, because if you have a drive down and another drive dies, the whole pool can collapse. The process involves reading every bit on every drive, so it does put strain on everything.
Some people will go out of their way to buy drives from different manufacturing batches so if one batch has a problem, not all of their drives will fail.
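To illustrate why reading every bit back is the scary part, here is a hedged sketch. The 1-in-10^14 unrecoverable-read-error (URE) rate is a typical consumer spec sheet value, not a measured rate; enterprise drives are usually rated 1-in-10^15, and with ZFS a URE during a single-parity rebuild typically costs individual files rather than the whole pool.

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while reading
# an entire array back during a rebuild. The URE rate is an assumed spec value.
def p_any_ure(read_tb: float, ure_per_bit: float = 1e-14) -> float:
    bits = read_tb * 1e12 * 8
    return -math.expm1(bits * math.log1p(-ure_per_bit))  # 1 - (1 - p)^bits

# e.g. rebuilding a 4-wide array of 20 TB drives means reading ~60 TB of survivors
print(f"consumer 1e-14 URE spec:   {p_any_ure(60):.0%}")        # roughly 99%
print(f"enterprise 1e-15 URE spec: {p_any_ure(60, 1e-15):.0%}") # roughly 38%
```

That scaling is part of why double parity (Z2/Z3) and real backups matter more as drives get bigger.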
The way striping works (at an eli5 level) is you have a bunch of drives and one is a check for everything else. So let’s say you have four 10tb drives. Three would be data and one would be the check, so you get 30tb of usable space.
In reality you don’t have a single drive working as a check; instead you spread the checks across all of the drives. If you map it out with “d” being data and “c” being a check, it looks like this: dddc ddcd dcdd cddd
This way each drive has the same number of checks on it, and also why we call it striping.
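A minimal sketch of the check (“parity”) idea, shown here as single parity in the style of RAID 5 / RAID-Z1; the Z2 mentioned above keeps two independent checks per stripe so it can survive two failures. The block contents and layout are illustrative, not how ZFS actually stores things on disk.

```python
from functools import reduce

# Single-parity sketch: the check block is the XOR of the data blocks, so any
# one missing block in the stripe can be rebuilt from everything that's left.
def parity(blocks: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # three data blocks in one stripe
c = parity([d1, d2, d3])                 # check block written to the fourth drive

# "Drive 2" dies: reconstruct its block from the survivors plus the check block.
assert parity([d1, d3, c]) == d2
```

Rebuilding a whole drive is just repeating that for every stripe, which is why a resilver has to read everything on the surviving disks.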
My Z2 had a drive failure recently, with 4TB drives. Took me almost 3 days to resilver the array 😅. Fortunately I had a hot spare set up, so it started as soon as the drive failed, but now a second drive is showing signs of failing soon, so I had to pay the AI tax (168€) to get one ASAP (arriving Monday), as well as a second, cheaper one (around 120€), which won’t arrive until the end of April.
Does the increased density mean that the speed also goes up? It would be nice if a 7200 RPM drive could finally saturate SATA3 bandwidth.
Linear density could also boost throughput. Multiple actuators also exist.
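For a rough feel of the numbers (all assumed figures, not WD specs): sequential throughput tracks linear density (bits per track), which grows only about as the square root of the areal-density gain, and extra platters add capacity but no speed.

```python
# Illustrative scaling only; baseline figures below are assumptions, not specs.
baseline_tb = 20          # assumed current-generation drive
baseline_mb_s = 280       # assumed sustained sequential rate at 7200 RPM
sata3_mb_s = 600          # SATA3 ceiling before protocol overhead

platter_gain = 14 / 11                          # more platters: capacity, not speed
areal_gain = (140 / baseline_tb) / platter_gain # per-platter density gain
linear_gain = areal_gain ** 0.5                 # throughput scales roughly with this

print(f"areal density ~x{areal_gain:.1f}, estimated throughput "
      f"~{baseline_mb_s * linear_gain:.0f} MB/s vs SATA3 ~{sata3_mb_s} MB/s")
```

By that crude estimate the interface starts to become the bottleneck, which is where the multiple actuators mentioned above come in.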
No.
What’s the point when the prices for 4-8TB disks have been stable for the last 5 years? (I think they’re even getting higher…)
The point is the need for more and more data storage is never going to stop.
The point is that 8TB are too small, and not enough for my anime.
“Anime”
Retaining that much detail on tentacles takes some drive space
See the problem is the details keep getting higher res, but we also never stopped to ask if 32 tentacles was too much…
If the price per TB is stable, you just buy 2 or 3 disks. It used to be that you’d buy one disk because by the time you needed more space, the price per TB would have dropped a lot (halved, even).
My NAS has a limited number of bays, so buying more low-storage disks isn’t a great option.
Buy a SAS adapter and put them in an external storage rack.
Yep. It’s absurd. Who spends that much on a 4TB?
Probably still with only a 1-year warranty…
And if it breaks at 10 months and they take another 2 to send your replacement, well, by then they no longer need to send one that actually works either
In a pinch the drive can also double as a flywheel battery.
Okay. I want total honesty here. How many of you could actually fill that thing up?
Archive.org, Anna’s archive, Jan 6 footage, Epstein files, there’s plenty to back up.
With useful stuff? Never. With random bullshit I think might be useful some day if only I find the time? Easy
I have a lot of Linux ISOs which are definitely not VR porn. I have 200TB total including parity disks, and 150TB usable. It’s a real pain in the ass to maintain so many disks, and the power bill isn’t fun either. I’d love to replace them with fewer disks.
No sweat, try mirroring a private tracker and you’ll very quickly run out lol. You need a couple of petabytes worth.
The real problem is the price of HDDs not going down due to lower production in light of SSDs.
I fully expect WD to drop this as some stupidly expensive SAS drives that almost no consumer will buy. They should at least apply the dual-actuator (dual heads for speed) tech so we get faster HDDs for the same price.
… or be able to back it up?
I remember Mac OS X having an issue with its Mail app a while back where it would continuously create massive log files that kept growing until they filled the entire drive. You would have to boot to a recovery partition or similar, because the OS partition wouldn’t have enough free room left to boot, and then remove the logs to fix the issue.
Imagine having 130 terabytes of invisible log files
keep the OS partition like 4TB and make a separate partition for the pirated movies
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
NAS: Network-Attached Storage
RAID: Redundant Array of Independent Disks for mass storage
SATA: Serial AT Attachment interface for mass storage
SSD: Solid State Drive mass storage
ZFS: Solaris/Linux filesystem focusing on data integrity
5 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.
[Thread #72 for this comm, first seen 8th Feb 2026, 00:30]
Question: Are failures due to issues on a specific platter? Meaning, could a ZRAID theoretically use specific platters as a way to replicate data and not require 140TB of resilvering on a failure?
@Fmstrat @veeesix Since there are two very different questions there… The first, “where do the failures happen?”: anywhere. It could be the controller dying (in which case the platters themselves are fine if you replace the board, but otherwise the whole thing is toast). It could be the head breaking. It could be issues with a specific platter. It could be something that affects _all_ the platters (like dust getting inside the sealed area). So basically, it very much depends.
@Fmstrat @veeesix The second, could you do RAID across specific platters - yes and no. The drive firmware specifically hides the details of the underlying platter layout. But if you targeted a specific model, you could probably hack something together that would do RAID across the platters. But given the answer to the first question, why would you?
Great answers, thank you.
IIRC, HDDs have some reserved sectors in case some go bad. But in practice, once you start having faulty sectors it’s usually a sign that the drive is dying and you should replace it ASAP.
I think if you know drive topology you can technically create partitions on platter level, but I don’t really see a reason why you’d do it. If the drive is dying you need to resilver the entire drive’s content to a new disk anyway.
I wonder why current consumer HDDs don’t have NVMe connectors on them. Like, I know speeding up the bus isn’t going to make the spinning rust access any faster, but the cache RAM would probably benefit from not being capped at ~550MB/s.
Ya hmar (Arabic for “you donkey”; a pun on HAMR)
Yalla (Arabic for “let’s go”)