A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

It also includes outtakes on the ‘reasoning’ models.

  • HugeNerd@lemmy.ca · 4 hours ago

    > they’re all just guessing, literally

    They’re literally not.

    • m0darn@lemmy.ca · 3 hours ago

      Isn’t it a probabilistic extrapolation? Isn’t that what a guess is?

      • vii@lemmy.ml · 9 minutes ago

        This gets very murky very fast when you start to think about how humans learn and process information; we’re just meaty pattern-matching machines.

      • Iconoclast@feddit.uk · 42 minutes ago (edited)

        It’s a Large Language Model. It doesn’t “know” anything, doesn’t think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output, not factually correct output.

        It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

        So no, it doesn’t “guess.” It doesn’t even know it’s answering a question. It just talks.
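        A toy sketch of the “patterns and probabilities” point above. This is a bigram sampler, nowhere near a real LLM’s architecture, but it shows the same basic idea: the next word is drawn from a probability distribution learned from text, with no notion of truth anywhere in the loop.

        ```python
        import random

        # Tiny "training corpus": the model only learns which word follows which.
        corpus = "the model picks the next word the model just talks".split()

        # Count continuations: follows[w] lists every word seen right after w.
        follows = {}
        for prev, nxt in zip(corpus, corpus[1:]):
            follows.setdefault(prev, []).append(nxt)

        random.seed(0)
        word, out = "the", ["the"]
        for _ in range(5):
            # Sample the next word from the observed continuations
            # (fall back to the whole corpus if we hit a dead end).
            word = random.choice(follows.get(word, corpus))
            out.append(word)

        print(" ".join(out))  # locally coherent, but nothing here checks facts
        ```

        Scale the same sampling idea up from bigram counts to a neural network over billions of documents and you get output that is fluent, often correct by statistical accident, and never checked against reality.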

        • vii@lemmy.ml · 4 minutes ago

          > It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

          I know some humans that applies to