cross-posted from: https://beehaw.org/post/24650125

Because nothing says “fun” quite like having to restore a RAID that just saw 140TB fail.

Western Digital this week outlined its near-term and mid-term plans to increase hard drive capacities to around 60TB and beyond with optimizations that significantly increase HDD performance for the AI and cloud era. In addition, the company outlined its longer-term vision for hard disk drives’ evolution that includes a new laser technology for heat-assisted magnetic recording (HAMR), new platters with higher areal density, and HDD assemblies with up to 14 platters. As a result, WD will be able to offer drives beyond 140 TB in the 2030s.

Western Digital plans to volume-produce its first commercial HAMR hard drives next year, starting at capacities of 40TB (CMR) or 44TB (SMR) in late 2026, with production ramping through 2027. These drives will use the company's proven 11-platter platform with high-density media, plus HAMR heads with edge-emitting lasers that heat the iron-platinum alloy (FePt) on top of the platters to its Curie temperature (the point at which its magnetic properties change), reducing its magnetic coercivity before data is written.

  • MonkeMischief@lemmy.today · 7 days ago

    Okay cool, cool, so does this mean ridiculous data centers will use these things, and then can I get another 4TB RED for my NAS so I can fit my whole life on a mirrored total of 8TB without paying 8x what it’s worth, please?

    Thaaaaanks…

  • Ferroto@lemmy.world · 6 days ago

    If you'd asked me a year ago, I'd have told you HDDs would be the next dead storage medium, but now SSDs cost more than I spent on my rig and HDDs are pushing 140TB.

    • Appoxo@lemmy.dbzer0.com · 4 days ago

      I just looked up prices for the servers we sell at work.
      They saw a price increase of 47%.
      The SSDs and RAM saw increases of about 25% and ~150% respectively.
      Absolutely ludicrous and BS (ironically, both the price and the available stock increased, so it's just preying on the market rather than an actual shortage lol).

    • filcuk@lemmy.zip · 5 days ago

      I wonder if tapes will make any sort of 'comeback' in the consumer market.

  • Shady_Shiroe@lemmy.world · 7 days ago

    I just hope smaller sized drives become cheaper. The word “hope” is doing a lot of heavy lifting here.

      • AndrewZabar@lemmy.world · 6 days ago

        I think ten years from now you’ll be hard pressed to find anyone even wasting their time on something so small.

        • Supervisor194@lemmy.world · 6 days ago

          Kind of the point of my comment was that drive size/cost is stagnating despite the massive technical progress in the space. I bought my first 4TB drive in 2020 ($89). Going back to 2015, I was buying 2TB at the same price ($86). Here in 2026, what’s the ~same price? 4TB ($99). 8TB is $180.
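
          Running the per-TB math on just the prices quoted above (nothing official, and ignoring inflation), the stagnation is easy to see:

          ```python
          # $/TB from the prices quoted above
          drives = [
              (2015, 2, 86),   # (year, capacity in TB, price in USD)
              (2020, 4, 89),
              (2026, 4, 99),
              (2026, 8, 180),
          ]
          for year, tb, usd in drives:
              print(f"{year}: {tb} TB at ${usd} -> ${usd / tb:.2f}/TB")
          # 2015: $43.00/TB, 2020: $22.25/TB, 2026: $24.75/TB and $22.50/TB
          # i.e. cost per TB has barely moved since 2020.
          ```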

          • AndrewZabar@lemmy.world · 5 days ago

            Well, this isn't a tech issue at all; it's that global economics have become a dumpster fire, particularly in America. I can't say I'm certain there are no other factors, but economically everything has gotten out of hand.

          • AndrewZabar@lemmy.world · 6 days ago

            Well, retro etc., but I wouldn't consider this to be that. There's no inherent value in a run-of-the-mill drive with merely lower storage capacity. And it's certainly not worth a premium.

            • MinnesotaGoddam@lemmy.world · 6 days ago

              it’s not antique yet. i still have my 5.25" diskettes with quest for glory 2 on them and they’re almost antique. i think the usb drive that reads them still works. give them another couple years.

              do HDDs work better than SSDs in space? because of the cosmic rays and shit? or something about intermittent power? no, really, this is a real problem that they could be already solving, one i know jack shit about.

              • frongt@lemmy.zip · 6 days ago

                It depends. For anything going into space, especially microsats, the biggest concerns are space, weight, and power. SSDs are better at all of those, plus they don’t have any gyroscopic effects, and they’re much less susceptible to vibrations (e.g. the absolute earthquake at liftoff and the sudden jolts during each rocket stage). They are more susceptible to high-energy particles, but they can be hardened through shielding and parity/redundancy.

                For a datacenter on Mars, you're less concerned with SWaP (space, weight, and power), only as much as you need to be to get it there as cargo. Obviously that means space and weight are still concerns, but not power.

                The other factor with using fewer larger drives is that when you have a failure, you lose a lot more data, and any recovery takes longer.

              • AndrewZabar@lemmy.world · 6 days ago

                So you want to be a hero!!! I only ever played the first one but fell in love with it.

                Erana’s Peace. hidengoseke. Meep’s Peep, my friend.

                • MinnesotaGoddam@lemmy.world · 6 days ago

                  the second was the best in the series, but they all have their charm. i really need to buy the new game the coles made

  • DonutsRMeh@lemmy.world · 6 days ago

    And how much will that cost? Sounds like something fantastic for my Jellyfin server. I’ll have all the 4k HDR I can get my hands on.

      • SmoothLiquidation@lemmy.world · 7 days ago

        When you are running a server just to store files (a NAS), you generally set it up so multiple physical hard disks are joined together into an array, so if one fails, none of the data is lost. You can replace a failed drive by taking it out and putting in a new working drive, and then the system has to rebuild all of its data from the other drives. This process can take many hours even with the 10-20TB drives you get today, so doing the same thing with a 140TB drive would take days.
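
        As a rough back-of-the-envelope sketch (the ~250 MB/s sustained throughput is an assumed figure for a modern CMR drive, not a manufacturer spec, and real rebuilds are slower under load):

        ```python
        # Naive rebuild-time estimate: capacity / sustained throughput.
        # 250 MB/s is an assumed number, not a manufacturer spec.
        def rebuild_hours(capacity_tb, throughput_mb_s=250):
            capacity_mb = capacity_tb * 1_000_000  # TB -> MB, decimal units
            return capacity_mb / throughput_mb_s / 3600

        for tb in (10, 20, 140):
            print(f"{tb} TB: ~{rebuild_hours(tb):.0f} h (~{rebuild_hours(tb) / 24:.1f} days)")
        # 10 TB ≈ 11 h, 20 TB ≈ 22 h, 140 TB ≈ 156 h, i.e. roughly 6.5 days
        ```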

        • wltr@discuss.tchncs.de · 6 days ago

          Thanks! So, why does it matter? It's a server, you can have it do the job unattended. Or does it affect other services, leaving you unable to use anything else before it finishes?

          • SmoothLiquidation@lemmy.world · 6 days ago

            It will take a long time, and while it runs it will use a lot of resources, so the server can get bogged down. It is also a dangerous time for a NAS: if you already have a drive down and another drive dies, the whole pool can collapse. The process involves reading every bit on every drive, so it does put strain on everything.

            Some people will go out of their way to buy drives from different manufacturing batches so if one batch has a problem, not all of their drives will fail.

            The way striping works (at an eli5 level) is you have a bunch of drives, and one acts as a check (parity) for everything else. So let's say you have four 10TB drives. Three would be data and one would be the check, so you get 30TB of usable space.

            In reality you don't have a single drive working as the check; instead you spread the checks across all of the drives. If you map it out with "d" being data and "c" being check, it looks like this: dddc ddcd dcdd cddd

            This way each drive holds the same number of checks, which is also why we call it striping.
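
            A minimal sketch of that parity idea (toy single-parity XOR over a few byte strings, nothing like a real RAID implementation):

            ```python
            # Toy stripe: parity = XOR of all data blocks, so any one
            # missing block can be rebuilt from the survivors.
            from functools import reduce

            def xor_blocks(blocks):
                return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), blocks)

            data = [b"aaaa", b"bbbb", b"cccc"]   # three "data drives"
            parity = xor_blocks(data)            # the "check drive"

            # Pretend drive 1 failed; rebuild its block from the others plus parity.
            rebuilt = xor_blocks([data[0], data[2], parity])
            assert rebuilt == data[1]            # the lost block comes back bit-for-bit
            ```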

    • Dremor@lemmy.world · 7 days ago

      My Z2 had a drive failure recently, with 4TB drives. Took me almost 3 days to resilver the array 😅. Fortunately I had a hot spare set up, so the resilver started as soon as the drive failed, but now a second drive is showing signs of failing soon, so I had to pay the AI tax (168€) to get one ASAP (arriving Monday), as well as a second, cheaper one (around 120€), which won't arrive until the end of April.

  • pound_heap@lemmy.dbzer0.com · 6 days ago

    Does the increased density mean that the speed also goes up? It would be nice if a 7200 RPM drive could finally saturate SATA3 bandwidth.

  • Kyden Fumofly@lemmy.world · 6 days ago

    What's the point when prices for 4-8TB disks have been flat for the last 5 years? (I think they're even getting higher…)

    • Grapho@lemmy.ml · 7 days ago

      And if it breaks at 10 months and they take another 2 to send your replacement, well, they no longer need to send one that actually works this time either.

  • Alpha71@lemmy.world · 6 days ago

    Okay. I want total honesty here. How many of you could actually fill that thing up?

    • JasSmith@sh.itjust.works · 5 days ago

      I have a lot of Linux ISOs which are definitely not VR porn. I have 200TB total including parity disks, with 150TB usable. It's a real pain in the ass to maintain so many disks, and the power bill isn't fun either. I'd love to replace them with fewer disks.

    • mlg@lemmy.world · 6 days ago

      No sweat, try mirroring a private tracker and you’ll very quickly run out lol. You need a couple of petabytes worth.

      The real problem is HDD prices not coming down, because production has shrunk as SSDs take over.

      I fully expect WD to release this as some stupidly expensive SAS drive that almost no consumer will buy. They should at least add the dual-actuator heads for speed so we get faster HDDs at the same price.

    • LifeInMultipleChoice@lemmy.world · 6 days ago

      I remember Mac OS X having an issue with its Mail app a while back where it would continuously generate massive log files until they filled the entire drive. You'd have to boot into a recovery partition or the like, because the OS partition no longer had enough free space to boot, and remove them to fix the issue.

      Imagine having 130 terabytes of invisible log files

  • Decronym@lemmy.decronym.xyz [bot] · 2 days ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    NAS            Network-Attached Storage
    RAID           Redundant Array of Independent Disks for mass storage
    SATA           Serial AT Attachment interface for mass storage
    SSD            Solid State Drive mass storage
    ZFS            Solaris/Linux filesystem focusing on data integrity

    5 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

    [Thread #72 for this comm, first seen 8th Feb 2026, 00:30] [FAQ] [Full list] [Contact] [Source code]

  • Fmstrat@lemmy.world · 6 days ago

    Question: Are failures due to issues on a specific platter? Meaning, could a ZRAID theoretically use specific platters as a way to replicate data and not require 140TB of resilvering on a failure?

    • Andres@social.ridetrans.it · 6 days ago

      @Fmstrat @veeesix Since there are two very different questions there… The first, “where do the failures happen?”: anywhere. It could be the controller dying (in which case the platters themselves are fine if you replace the board, but otherwise the whole thing is toast). It could be the head breaking. It could be issues with a specific platter. It could be something that affects _all_ the platters (like dust getting inside the sealed area). So basically, it very much depends.

      • Andres@social.ridetrans.it · 6 days ago

        @Fmstrat @veeesix The second, could you do raid across specific platters - yes and no. The drive firmware specifically hides the details of the underlying platter layout. But if you targeted a specific model, you could probably hack something together that would do raid across the platters. But given the answer to the first question, why would you?

    • Nilz@sopuli.xyz · 6 days ago

      IIRC, HDDs have some reserved sectors in case some go bad. But in practice, once you start having faulty sectors it’s usually a sign that the drive is dying and you should replace it ASAP.

      I think if you know the drive topology you can technically create partitions at the platter level, but I don't really see a reason why you'd do it. If the drive is dying you need to resilver the entire drive's contents to a new disk anyway.

  • zorflieg@lemmy.world · 7 days ago

    I wonder why current consumer HDDs don't have NVMe connectors on them. I know speeding up the bus isn't going to make the spinning rust access any faster, but the cache RAM would probably benefit from not being capped at ~550MB/s.