• hellure@lemmy.org · 3 points · 2 hours ago

    Was looking at NVMe drives: only $1500 for a small no-name Gen 3 with a 3 GB/s max transfer rate.

    Some newer and faster name brand drives were listed as $400-500, but they were also out of stock, so those prices probably weren’t accurate.

  • Buelldozer@lemmy.today · 5 points · edited · 3 hours ago

    Getting a half dozen 24TB NAS drives this morning was painful. They are twice the cost of last fall, and most vendors, even big ones, only had one or two available. This is insanity.

  • UltraGiGaGigantic@lemmy.ml · 57 points · edited · 8 hours ago

    I declare all resources mine, purchased with a fancy loan. Now that all resources are mine, they are all worth 100,000 times more than before. Don’t worry, if you can’t afford to pay 100,000x more you can rent some of my stuff! Also, now that I own everything, I’m Too Big To Fail and will need a bailout when I can’t pay my fancy loan.

    This is the healthiest, most efficient economy possible. To desire an alternative way to live our lives is now added to the DSM and will trigger involuntary institutionalization in a re-education camp.

    Aliens visit earth and you want to know why? To study our highly advanced economic system of course!

  • neonghost@lemmy.ca · 8 points · 8 hours ago

    I really hope this is a temporary supply bottleneck. I understand the constraints of producing chips and highly specialized hardware but AI demand is only going to go up from here.

    I’m optimistic a game changer gets whipped out of thin air.

    • hellure@lemmy.org · 4 points · 2 hours ago

      It’s not really going up…

      I got bored with it last year, and have turned off everything but search summaries, as they often just put the answer right up front, with no click-throughs. Also I use a voice aid to set timers, reminders, and alarms…

      And this seems to be a common story lately.

      I can’t see AI really exploding beyond basic uses like that. Some people are still innovating and playing with AI, but it’ll settle down.

    • BobDole4Prez@lemmy.world · 4 points · 7 hours ago

      Doubt it. Those don’t require infrastructure as crazy as microchips do. If PCB prices start going up, we’re in huge trouble.

  • charles@social.charles.wiki · 52 points · 12 hours ago

    I’m afraid that a lot of the infrastructure will be heavily catered towards DoD computing resources. This means that after the components reach the end of their lifecycle, they aren’t released to the used market on eBay; instead they are shredded and rendered electronic waste.

    • Greyscale@lemmy.sdf.org · 30 points · 10 hours ago

      All of those GPUs will be irrelevant in 24 months, and almost all of them are useless to consumers.

      It’s by design; it’s intentional.

      They want you hooked to their cloud teat.

      • brucethemoose@lemmy.world · 17 points · edited · 8 hours ago

        A lot of scientists, tinkerers, 3D renderers and such would love cheap A100s and up.

        On the contrary, I don’t think they will get cheaper. Somehow they’ll get bought back and trashed (like Nvidia has done in the past), hoarded, tasked with busywork, something like that.

        • Greyscale@lemmy.sdf.org · 10 points · 7 hours ago

          They won’t let them leave because it’d be falling into “the competition’s” hands.

          They’ll shred every single last bit of silicon.

  • nialv7@lemmy.world · 53 points · 12 hours ago

    Three months ago, watching RAM prices skyrocketing and anticipating this exact scenario, I bought five 10TB drives.

    Best decision I’ve made in a while.

    • RamSwamson@lemmy.sdf.org · 5 points · 6 hours ago

      I ordered a couple of NAS drives during the holiday specials thinking the same thing. Received a confirmation email saying they would ship in a few days. 4 weeks passed without a single peep from WD. Started to get nervous my order would be cancelled. Then first week of January I got an email saying they were backordered but should be fulfilled “soon”. Didn’t get my drives till end of January but well worth the wait.

    • harsh3466@lemmy.ml · 6 points · 8 hours ago

      Nice. I got three 14TBs around the same time for the same reason. Glad I did.

    • kamen@lemmy.world · 1 point · 5 hours ago

      Bought a bunch of 20s a while back. My only concern now is if (when) one of them dies, I might not be able to get the same one (or any at all).

    • MadBigote@lemmy.world · 36 points (1 down) · 14 hours ago

      Wdym? Do you believe the manufacturers would try to convince you they’re out of stock to create scarcity and increase prices?!? Do you know how silly that idea is?! /s

      • tal@lemmy.today · 15 points (1 down) · edited · 13 hours ago

        Those datacenters are real. AI companies aren’t using their money to build empty buildings. They’re buying enormous amounts of computer hardware off the market to fill them.

        https://blogs.microsoft.com/blog/2025/09/18/inside-the-worlds-most-powerful-ai-datacenter/

        Today in Wisconsin we introduced Fairwater, our newest US AI datacenter, the largest and most sophisticated AI factory we’ve built yet. In addition to our Fairwater datacenter in Wisconsin, we also have multiple identical Fairwater datacenters under construction in other locations across the US.

        These AI datacenters are significant capital projects, representing tens of billions of dollars of investments and hundreds of thousands of cutting-edge AI chips, and will seamlessly connect with our global Microsoft Cloud of over 400 datacenters in 70 regions around the world. Through innovation that can enable us to link these AI datacenters in a distributed network, we multiply the efficiency and compute in an exponential way to further democratize access to AI services globally.

        An AI datacenter is a unique, purpose-built facility designed specifically for AI training as well as running large-scale artificial intelligence models and applications. Microsoft’s AI datacenters power OpenAI, Microsoft AI, our Copilot capabilities and many more leading AI workloads.

        The new Fairwater AI datacenter in Wisconsin stands as a remarkable feat of engineering, covering 315 acres and housing three massive buildings with a combined 1.2 million square feet under roofs. Constructing this facility required 46.6 miles of deep foundation piles, 26.5 million pounds of structural steel, 120 miles of medium-voltage underground cable and 72.6 miles of mechanical piping.

        Unlike typical cloud datacenters, which are optimized to run many smaller, independent workloads such as hosting websites, email or business applications, this datacenter is built to work as one massive AI supercomputer using a single flat networking interconnecting hundreds of thousands of the latest NVIDIA GPUs. In fact, it will deliver 10X the performance of the world’s fastest supercomputer today, enabling AI training and inference workloads at a level never before seen.

        Hard drives haven’t been impacted nearly as much as memory, which is the real bottleneck. But when just one AI company, OpenAI, rolls up and buys 40% of global memory production capacity’s output, it’d be extremely unlikely that we wouldn’t see memory shortages for at least a while, since it takes years to build new production capacity. And then you have other AI companies who want memory, plus purchases from companies who are, as a one-off, extending their PC upgrade cycle due to the current shortage, who will also be competing for supply.

        If there is less supply relative to demand for a product, the price goes up to the new point where the amount of memory people are willing to buy at that price matches what’s actually available. Everyone else gets priced out. And that won’t change until either demand drops (which is what people talking about a “bubble popping” are thinking might occur, if the AI-infrastructure-building effort stops sooner than expected), or enough new production capacity comes online to provide enough supply. Memory manufacturers are building new factories and expanding existing ones, and we’ve had articles about that. But it takes years to do.

        • Greyscale@lemmy.sdf.org · 11 points · 10 hours ago

          25% of the datacenters being constructed right now will go bankrupt.

          The majority of this AI surge is for datacenters that have neither power nor water.

          It’s all gonna end up being shredded, if it exists at all.

      • llama@lemmy.zip · 12 points · 13 hours ago

        Sort of. There used to be way more HDD manufacturers, and then they all talked each other into dropping them for SSDs. Now a sudden need arises and there are no HDDs.

      • gian@lemmy.grys.it · 5 points · 13 hours ago

        It is not the case here, I agree, but to be honest it would not be the first time that some company created artificial scarcity to keep prices up.

  • Ibuthyr@lemmy.wtf · 48 points (1 down) · 15 hours ago

    Can’t wait for the bubble to pop and the used SAS HDD market to overflow with cheap hardware. Same with RAM.

    • GamingChairModel@lemmy.world · 38 points · 14 hours ago

      Same with RAM.

      Unfortunately, the RAM shortage is caused by RAM production capacity being diverted to specialized packages that can’t easily be converted into normal RAM. So even the bubble bursting won’t bring that RAM back onto the market.

      • Toes♀@ani.social · 13 points (1 down) · 14 hours ago

        My next computer is probably gonna be running ECC RAM because of this concern.

        • tal@lemmy.today · 16 points · edited · 13 hours ago

          I don’t know if you’re saying this, so my apologies if I’m misunderstanding you, but it isn’t principally ECC DIMMs that are being produced.

          I suppose that a small portion of AI-related sales might go to ECC DDR5 DIMMs, because some of that hardware will probably use them, but what they’re really going to be using in bulk is high-bandwidth memory (HBM), which is non-modular and connected directly to the parallel compute hardware.

          HBM achieves higher bandwidth than DDR4 or GDDR5 while using less power, and in a substantially smaller form factor.[13] This is achieved by stacking up to eight DRAM dies and an optional base die which can include buffer circuitry and test logic.[14] The stack is often connected to the memory controller on a GPU or CPU through a substrate, such as a silicon interposer.[15][16] Alternatively, the memory die could be stacked directly on the CPU or GPU chip. Within the stack the dies are vertically interconnected by through-silicon vias (TSVs) and microbumps. The HBM technology is similar in principle but incompatible with the Hybrid Memory Cube (HMC) interface developed by Micron Technology.[17]

          The HBM memory bus is very wide in comparison to other DRAM memories such as DDR4 or GDDR5. An HBM stack of four DRAM dies (4‑Hi) has two 128‑bit channels per die for a total of 8 channels and a width of 1024 bits in total. A graphics card/GPU with four 4‑Hi HBM stacks would therefore have a memory bus with a width of 4096 bits. In comparison, the bus width of GDDR memories is 32 bits, with 16 channels for a graphics card with a 512‑bit memory interface.[18] HBM supports up to 4 GB per package.
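          To make the quoted bus-width arithmetic concrete (using only the numbers from the passage above):

          ```shell
          # 4-Hi HBM stack: 4 DRAM dies x 2 channels per die x 128 bits per channel
          stack_width=$(( 4 * 2 * 128 ))
          echo "$stack_width"            # 1024 bits per stack

          # GPU with four 4-Hi stacks
          echo $(( 4 * stack_width ))    # 4096-bit memory bus

          # GDDR5 comparison: 16 x 32-bit channels for a 512-bit interface
          echo $(( 16 * 32 ))            # 512
          ```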

          I have been in a few discussions as to whether it might be possible to use, say, discarded PCIe-based H100s as swap (something for which there are existing, if imperfect, projects for Linux) or directly as main memory (which apparently there are projects to do with some older video cards using Linux’s HMM, though there’s a latency cost in that case due to needing to traverse the PCIe bus… it’s going to be faster than swap, but will still have some performance hit relative to a regular old DIMM, even if the throughput may be reasonable).

          It’s also possible that one could use the hardware as parallel compute hardware, I guess, but the power and cooling demands will probably be problematic for many home users.

          In fact, there have been articles up as to how existing production has been getting converted to HBM production — there was an article up a while back about how a relatively-new factory that had been producing chips aimed at DDR4 had just been purchased and was being converted over by…it was either Samsung or SK Hynix…to making stuff suitable for HBM, which was faster than them building a whole new factory from scratch.

          It’s possible that there may be economies of scale that will reduce the price of future hardware, if AI-based demand is sustained (instead of just principally being part of a one-off buildout) and some fixed costs of memory chip production are mostly paid by AI users, where before users of DIMMs had to pay them. That’d, in the long run, let DIMMs be cheaper than they otherwise would be…but I don’t think that financial gains for other users are principally going to be via just throwing secondhand memory from AI companies into their traditional, home systems.

          • Toes♀@ani.social · 7 points · 13 hours ago

            Ah, thanks for the information. I was already aware most of it was going to GPU-type hardware. I just naturally assumed all those GPUs need servers with lots of RAM.

    • Reygle@lemmy.world · 5 points (1 down) · 13 hours ago

      AMD platforms with ECC support could be insanely valuable in the future.

      Please pop, PLEASE POP

  • TrackinDaKraken@lemmy.world · 50 points · 15 hours ago

    I guess my combined 12TB across five drives, ranging in age from six to 13 years old, will have to suffice. The only reason I’d need to buy a new drive is if a couple of my current drives die. Which does happen on occasion, of course.

    Also, fuck AI, and the assholes who made it, and everyone who currently, personally profits off it. This bubble popping will be the catalyst to take down the entire world economy. MMW.

    • realitista@lemmus.org · 2 points · 8 hours ago

      Yeah, fortunately mine are all in RAID arrays. Hopefully none die in the next year, or I may have to run degraded.

      • deeferg@lemmy.ca · 1 point · edited · 7 hours ago

        This feels like such a beginner question to be asking on Lemmy, let alone the tech community, but how does one go about setting up a RAID array to have my data mirrored? I only know the basics I remember about raid 1 and raid 0.

        Is this RAID array something you can do without one of those “multi-hard drive units”? I have 2 16TB hard drives that I’d like to have one as a mirror copy of the first as a backup that updates at the same time but they feel too big to fit into one of those units. But maybe setting up a RAID array could be done programmatically.

        I’d love it if anyone could point me in the right direction!

        • WhyJiffie@sh.itjust.works · 3 points · 7 hours ago

          Nowadays RAID is done with software, on Linux if possible. Common choices are ZFS and md-raid. You connect drives with SATA or SAS to a computer, and you can add them to a pool. Drives added to the pool will be formatted once.

          Hardware RAID is discouraged, because if the RAID card fails you need a replacement of the exact same kind, with the same firmware version, and they can have other difficulties too that software RAID solutions don’t.
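          For the md-raid route, a minimal sketch looks like this. The device names /dev/sda and /dev/sdb are placeholders; this wipes both drives, so check yours with lsblk first:

          ```shell
          # Create a RAID 1 (mirror) array from two whole drives
          mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

          # Put a filesystem on the array and mount it
          mkfs.ext4 /dev/md0
          mount /dev/md0 /mnt/storage

          # Check sync/health status
          cat /proc/mdstat
          ```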

          • deeferg@lemmy.ca · 1 point · 7 hours ago

            That’s great, I’d love to not have to buy one of those machines, and I have been running my JF on a laptop just running Linux with a single one of the 16TB drives.

            If the drives added to the pool need to be formatted, is there a possibility that it wipes the data on it? I’ll take a bit of time to read up on some of the options you mentioned.

            Thanks for the help!

              • deeferg@lemmy.ca · 1 point · 2 hours ago

                Haha, it was a year and a half ago, and it was a Canada Computers sale. Just checked and it doesn’t look like it’s TOO much more expensive yet, but who knows for how long.

            • WhyJiffie@sh.itjust.works · 3 points · 6 hours ago

              If the drives added to the pool need to be formatted, is there a possibility that it wipes the data on it?

              That’s what I meant, yes. But you said you have two 16 TB drives, right? At least with ZFS, a mirror can be set up starting with only a single drive. It’s a godsend.

              First, take the empty drive, check that it’s actually empty, and if so, create a ZFS pool of that single drive with zpool create. Copy all your data over. You can use rsync; it has a bunch of options for preserving most filesystem metadata, and for printing progress.

              When done, check that absolutely everything got transferred, and add the other 16 TB drive to the pool with zpool attach. Doing this converts the pool with a single-disk vdev into a pool with a mirror vdev of two disks.

              further recommended reading: https://openzfs.github.io/openzfs-docs/man/master/8/zpool.8.html

              You may want to enable compression from the beginning; if you do it later, existing data won’t be compressed. Media files mostly don’t benefit from this. Compression is enabled at the dataset level, with the zfs command; if you set it to lz4 (recommended algorithm) for the root dataset, everything will be compressed that way.
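              The whole workflow, as a sketch. The device names /dev/sdb, /dev/sdc, the pool name “tank”, and the source path are placeholders; adjust for your system, and double-check which drive is which before running zpool create:

              ```shell
              # 1. Create a single-drive pool on the empty disk,
              #    with lz4 compression enabled from the start
              zpool create -O compression=lz4 tank /dev/sdb

              # 2. Copy everything over, preserving metadata, with progress output
              rsync -aHAX --info=progress2 /mnt/old-drive/ /tank/

              # 3. After verifying the copy, attach the second drive to turn
              #    the single-disk vdev into a two-disk mirror
              zpool attach tank /dev/sdb /dev/sdc

              # 4. Watch the resilver complete
              zpool status tank
              ```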

              • deeferg@lemmy.ca · 1 point · 2 hours ago

                Oh this is perfect, thanks so much! I’ve got one of the hard drives in use but I could definitely wipe the other one to make a proper copy with the zpool.

                Thanks for giving me a fun little week task!

    • I_Am_Lying@lemmy.org · 9 points · 12 hours ago

      Just in case: https://serverpartdeals.com/ Still the same sort of prices you expect, but decent warranties on re-certified enterprise HDDs.

      Oddly, I’ve never had an HDD or SSD die on me. I’ve got old-ass ones that aren’t even a GB that I’ve torn apart and thrown away. My oldest SSD just got removed and put in a cabinet because 256GB is just too small.

      • Blue_Morpho@lemmy.world · 2 points · 6 hours ago

        Their prices seem 2x what they were a few years ago. 2.5 years ago I bought two 16TB HGSTs from them for $170 each.

        • I_Am_Lying@lemmy.org · 3 points · 5 hours ago

          Well, yeah. Everybody’s prices are that much higher. I think the cheapest I saw on there recently is like $17.20/terabyte. And I didn’t see them cheaper anywhere else.

  • relativestranger@feddit.nl · 17 points · 16 hours ago

    Glad I kept all the ones pulled from previous SSD upgrades and e-waste that went through here. I have several I have yet to reuse.

    The shit-tier shingled ones I got a couple years ago to store media files had been relatively stable in price for years at ~$100–110 USD. They’re now $170+.

  • chunes@lemmy.world · 64 points · edited · 20 hours ago

    They could garner goodwill by setting aside a % of their stock to sell to red-blooded people at a lower price…

    If someone walks into a grocery store before a storm and wants to buy 10 pallets of water, the store tells them to fuck off.

      • sibachian@lemmy.ml · 3 points · 10 hours ago

        Limit it to one per customer per day, like most TCG sellers do with Pokémon and Magic.

        • RememberTheApollo_@lemmy.world · 3 points · 10 hours ago

          I’m sure there are ways around that. Different cards, PO boxes, email addresses, names. Even if they had only 4 ways of buying, that’s still almost 30 buys a week, times however many scalpers there are.

          • sibachian@lemmy.ml · 1 point · 9 hours ago

            Obviously there will be a handful of people pulling that shit, but every system basically assumes that 10% of the people using it will use it unethically to their advantage. Just balance around that, as the vast majority aren’t exploitative scumbags.

    • Ulrich@feddit.org · 37 points · 20 hours ago

      That’s 1 day. Guaranteed if someone walked in and said “I want to buy all the water you can sell for the next 9 months”, they’d be singing a very different tune.

    • chiliedogg@lemmy.world · 23 points · 19 hours ago

      That’s because they’re guaranteed to sell all the water when there’s a storm anyway. There’s a reason there are laws against raising prices in an emergency.