A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • FaceDeer@fedia.io · ↑6 ↓16 · 6 hours ago

    And that score is matched by GPT-5. Humans are running out of “tricky” puzzles to retreat to.

    • First_Thunder@lemmy.zip · ↑20 · 5 hours ago

      What this shows, though, is that there isn’t actual reasoning behind it. Any improvements from here will likely come because this is a popular problem, with results brute-forced from a bunch of data, rather than from any meaningful change in how they “think” about logic.

      • MangoCats@feddit.it · ↑2 ↓10 · 4 hours ago

        Plenty of people employ faulty reasoning every single day of their lives…

    • realitista@lemmus.org · ↑4 ↓6 · 5 hours ago

      You’re getting downvoted, but it’s true. A lot of people are sticking their heads in the sand, and I don’t think it’s helping.

      • FaceDeer@fedia.io · ↑6 ↓11 · 5 hours ago

        Yeah, “AI is getting pretty good” is a very unpopular opinion in these parts. Popularity doesn’t change the results, though.

          • MangoCats@feddit.it · ↑4 ↓4 · 4 hours ago

            It’s overhyped in many areas, but it is undeniably improving. The real question is: will it “snowball” by improving itself in a positive feedback loop? And if it does, how much snow-covered slope lies in front of it to roll down?