• NateNate60@lemmy.world · 2 days ago

    To be fair, a good proportion of humans would also say “neither” because they did not read correctly. It’s not smarter than humans, but it also isn’t that much dumber (in this instance, anyway).

    • Signtist@bookwyr.me · 2 days ago

      The difference is that the human reached their conclusion through active reasoning but simply misread the question, while the AI was aware of what was being asked but lacks the ability to reason, so it can only give an answer already given by a real person answering a slightly different question somewhere in its training data.

      • NateNate60@lemmy.world · 2 days ago

        A human who says “neither” would say that because they’ve heard this question before and assumed it was the same.

        • Cethin@lemmy.zip · 2 days ago

          That’s the difference: the human made an assumption, and this did not. Its output is just the most likely text to follow the preceding text. It isn’t even a bad assumption, because making an assumption requires thinking about the question. It’s just a wrong result from a prediction machine.

          • NateNate60@lemmy.world · 2 days ago (edited)

            Right, but I’m saying that the process that a mistaken human is using here is actually not that different from what the AI is doing. People would misread the passage because they expect the number 20 to be followed by the word “pounds” based on their previous encounters with similar texts.

            • Cethin@lemmy.zip · 2 days ago

              No, it’s not misreading anything. It isn’t reading at all. It just sees a string that is similar to other strings it was trained on, and outputs the most likely sequence to follow. There is no comprehension, no reading, no thought. The process isn’t similar to what a human might do; only the result is.

              • bbb@sh.itjust.works · 10 hours ago

                If that were true, wouldn’t every AI get the answer wrong? It’s actually around 50/50. The leading “reasoning” models almost always get it right; the others often don’t.

                • Cethin@lemmy.zip · 6 hours ago

                  It depends on what’s asked. What’s “around 50/50”? What is “it” that they almost always get right? I think you’ve bought into their marketing. It can often solve math problems well, provided they’re worded properly. That doesn’t mean it’s intelligent, though. It means the statistical algorithm is useful for solving those problems. It isn’t thinking. Getting correct answers isn’t thought.

                  For the example in the OP, that is the correct answer if “correct” means what you’d expect to follow a string that looks like this. As a statistical model, it did well. As a thinking machine (which it isn’t), it got it wrong. It accurately produced the string expected to follow the previous string; that string just happens not to be the correct answer.

            • Signtist@bookwyr.me · 2 days ago

              But what we’re saying is that the process is totally different; only the result is similar. The AI isn’t “misreading” the question; it understands that it’s comparing pounds of bricks to a distinct number of feathers. The issue is that when it searches its training data for answers to similar questions, it sees that the answer was “they’re the same” and incorrectly assumes the answer is the same for this question. It’s a fundamental problem with the way AI works, one that can’t be solved with a simple correction about how it’s interpreting the question, the way a human misreading the question could be corrected.

    • Cethin@lemmy.zip · 2 days ago

      It isn’t smarter or dumber, since that’s a measure of intelligence. It’s just spitting out the most likely (with some variability) next word. The fact humans also may get it wrong doesn’t matter. People can be dumb. A predictive algorithm can’t.
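
      The “most likely (with some variability) next word” idea can be sketched with a toy bigram model. This is an illustration only, not how a real LLM works (real models use neural networks over tokens, not word counts), and the training text here is made up, but it shows how a pure prediction machine answers with the most familiar continuation regardless of whether the question actually changed:

      ```python
      import random
      from collections import Counter, defaultdict

      # Hypothetical training text: the classic riddle, seen twice.
      training_text = (
          "a pound of bricks and a pound of feathers weigh the same . "
          "a pound of bricks and a pound of feathers weigh the same ."
      ).split()

      # Count which word follows each word in the training text.
      following = defaultdict(Counter)
      for current, nxt in zip(training_text, training_text[1:]):
          following[current][nxt] += 1

      def predict_next(word):
          """Sample the next word in proportion to how often it followed `word`."""
          counts = following[word]
          return random.choices(list(counts), weights=list(counts.values()))[0]

      # The model continues with whatever usually came next in training,
      # even if the actual question swapped 1 pound for 20 pounds.
      print(predict_next("weigh"))  # prints "the" — the only continuation it ever saw
      ```

      No comprehension is involved anywhere: the “answer” is just frequency statistics over strings it has seen before.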

    • Marthirial@lemmy.world · 2 days ago (edited)

      AI should stand for Alien Intelligence. Comparing LLMs to human intelligence is like comparing apples to black holes.

      • tangeli@piefed.social · 2 days ago

        AI is more like dark matter than black holes. Black holes actually exist. There are effects on society and the economy that can be explained by the existence of AI, but no one has directly observed any intelligence yet.