A screenshot of this question was making the rounds last week, but this article tests it against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • Iconoclast@feddit.uk · 2 hours ago

    It’s a Large Language Model. It doesn’t “know” anything, doesn’t think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output - not factually correct output.

    It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

    So no, it doesn’t “guess.” It doesn’t even know it’s answering a question. It just talks.
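
    For what it’s worth, the mechanism is easy to sketch. Here’s a toy next-token sampler in Python - the bigram table and the words in it are made up for illustration, nothing like a real model’s learned weights - showing why the output comes out fluent rather than fact-checked:

    ```python
    # Toy sketch of next-token sampling. Real LLMs learn probabilities
    # over tokens from training data; this hand-written bigram table
    # stands in for that. The point is the same either way: each step
    # picks a *plausible* continuation, with no notion of truth.
    import random

    # Hypothetical P(next word | previous word), as if estimated from a corpus.
    BIGRAMS = {
        "the":  {"sky": 0.5, "moon": 0.3, "answer": 0.2},
        "sky":  {"is": 1.0},
        "moon": {"is": 1.0},
        "is":   {"blue": 0.6, "green": 0.4},  # "green" is fluent but false
    }

    def next_token(prev: str) -> str:
        """Sample one token from the conditional distribution for `prev`."""
        dist = BIGRAMS.get(prev)
        if not dist:
            return "<eos>"  # nothing follows: stop generating
        tokens, probs = zip(*dist.items())
        return random.choices(tokens, weights=probs)[0]

    def generate(prompt: str, max_len: int = 5) -> str:
        """Extend the prompt one sampled token at a time."""
        out = prompt.split()
        while len(out) < max_len:
            tok = next_token(out[-1])
            if tok == "<eos>":
                break
            out.append(tok)
        return " ".join(out)

    print(generate("the"))  # e.g. "the sky is green" - coherent, never checked for truth
    ```

    Note that nothing in that loop ever asks whether the sentence is true - the only signal is which word tends to follow which. Scale the table up to billions of parameters and you get fluency, not knowledge.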

    • vii@lemmy.ml · 2 hours ago

      > It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

      I know some humans that applies to.

    • KeenFlame@feddit.nu · 1 hour ago

      Yes, it guesstimates. What is wrong with you, arguing about semantics like that?