The University of Rhode Island’s AI lab estimates that GPT-5 averages just over 18 Wh per query, so serving all of ChatGPT’s reported 2.5 billion requests a day through the model could push daily energy usage as high as 45 GWh.

A daily energy use of 45 GWh is enormous. Spread over 24 hours, it works out to an average draw of roughly 1.9 GW. A typical modern nuclear power plant produces between 1 and 1.6 GW of electricity per reactor, so data centers running OpenAI’s GPT-5 at 18 Wh per query could require the power equivalent of two to three nuclear reactors, an amount that could be enough to power a small country.
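
As a sanity check, the arithmetic behind those figures is easy to reproduce (a minimal sketch; the 18 Wh and 2.5 billion figures are the article’s own estimates):

```python
# Back-of-envelope check of the article's numbers.
wh_per_query = 18        # URI AI lab's estimate for GPT-5
queries_per_day = 2.5e9  # ChatGPT's reported daily request volume

daily_gwh = wh_per_query * queries_per_day / 1e9  # Wh -> GWh
avg_draw_gw = daily_gwh / 24                      # average draw over 24 h

print(f"{daily_gwh:.0f} GWh per day")      # 45 GWh per day
print(f"{avg_draw_gw:.2f} GW on average")  # 1.88 GW on average
```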

  • ThePowerOfGeek@lemmy.world · 15 hours ago

    > BTW, a lot of it seems to be just inefficient coding, as DeepSeek has shown.

    Kind of? Inefficient coding is definitely part of it. But a large part is also just the iterative nature of how these algorithms operate. We might be able to improve that a little via code optimization, but without radically changing how these engines operate, it won’t make a big difference.

    The scope of the data being used and trained on is probably a bigger issue, which is why there’s been a push by some to move from LLMs to SLMs. We don’t need the model to be cluttered with information on geology, ancient history, cooking, software development, sports trivia, etc. if it’s only going to be used for looking up stuff on music and musicians.

    But either way, there’s a big ‘diminishing returns’ factor to this right now that isn’t being appreciated. Typical human nature: give me that tiny boost in performance regardless of the cost, because I don’t have to deal with the consequences. It’s the same short-sighted shit that got us into this looming environmental crisis.

    • kescusay@lemmy.world · 14 hours ago

      Coordinated SLM governors that can redirect queries to the appropriate SLM seem like a good solution.
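
      In rough code, such a governor could be little more than a cheap classifier sitting in front of a registry of domain models. A hypothetical sketch (classify_domain, SLM_REGISTRY, and the keyword lists are illustrative stand-ins, not any real API):

      ```python
      # Hypothetical SLM "governor": cheaply classify the query's domain,
      # then hand the query to the matching small model.

      SLM_REGISTRY = {
          "music": "music-slm",
          "cooking": "cooking-slm",
          "software": "code-slm",
      }

      DOMAIN_KEYWORDS = {
          "music": ("album", "band", "musician", "song"),
          "cooking": ("recipe", "bake", "ingredient"),
          "software": ("bug", "compile", "function"),
      }

      def classify_domain(query: str) -> str:
          """Keyword stand-in for what would really be a tiny classifier model."""
          q = query.lower()
          for domain, words in DOMAIN_KEYWORDS.items():
              if any(w in q for w in words):
                  return domain
          return "music"  # default domain for this deployment

      def route(query: str) -> str:
          """Return the name of the SLM that should field this query."""
          return SLM_REGISTRY[classify_domain(query)]

      print(route("Who drummed on that album?"))  # -> music-slm
      ```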

        • kautau@lemmy.world · edited · 28 minutes ago

          Basically, but with MCP and SLMs interacting rather than a singular model: the coordinator model only does the work of figuring out which SLM to field the question to, and then continuously provides context to the other SLMs in the case of more complex queries.
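
          Something like the sketch below, perhaps, assuming a generic tool-call interface; call_slm is a hypothetical stand-in for an actual MCP request to a hosted model, not the real MCP SDK:

          ```python
          # Coordinator sketch: its only job is to decide which SLM fields each
          # sub-question and to carry the accumulated context forward.

          def call_slm(name: str, prompt: str, context: str) -> str:
              # Placeholder: in a real system this would be an MCP call
              # to a separately hosted small model.
              return f"[{name}] answer to {prompt!r} given {context!r}"

          def coordinate(sub_questions: list[tuple[str, str]]) -> str:
              """Each sub-question is (slm_name, prompt); answers accumulate as shared context."""
              context = ""
              for slm_name, prompt in sub_questions:
                  answer = call_slm(slm_name, prompt, context)
                  context += answer + "\n"  # later SLMs see earlier answers
              return context

          print(coordinate([
              ("music-slm", "Who produced the album?"),
              ("history-slm", "What else was recorded in that studio?"),
          ]))
          ```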