• 🔍🦘🛎@lemmy.world · +9/−5 · 2 days ago

      LLMs do not think. The Plagiarism Machines read a million sentences humans wrote about AI thinking and regurgitated them.

      • Communist@lemmy.frozeninferno.xyz · +8 · 2 days ago

        Yeah, but saying all that is annoying, so I think we should stick with saying “thinking,” with everyone understanding that what we mean isn’t literally identical to thought. Do you have a better solution?

        • Fluke@feddit.uk · +5/−1 · 2 days ago

          Yeah, not conflating intelligent, creative problem solving with a glorified search engine that makes up the answers if it can’t lift them wholesale from another source. That would be a good start, right?

          • Railcar8095@lemmy.world · +5/−2 · 2 days ago

            This doesn’t answer the question of finding a better solution.

            I took the liberty of asking Lumo, and its reasoning seems more useful than your thoughts:

            A better solution is to adopt functionalist terminology that distinguishes between biological consciousness and computational processing without resorting to metaphorical confusion.

            Instead of the binary of “it thinks” (which implies subjective experience) or “it doesn’t think” (which dismisses complex reasoning), we can use precise descriptors based on what the system is actually doing:

            “Reasoning” or “Synthesizing”: Use these terms when the model is connecting disparate data points, performing logical deductions, or generating novel structures based on patterns. This acknowledges the output’s complexity without claiming the machine has an inner life.

            Example: “The model is synthesizing a solution based on its training data,” rather than “The model is thinking about the problem.”

            “Simulating” or “Mimicking”: Use these when the output resembles human thought processes but is strictly algorithmic. This clarifies that the form is human-like, but the mechanism is statistical prediction.

            Example: “It is simulating a debate,” rather than “It is arguing.”

            “Processing” or “Computing”: Reserve these for the raw mechanical act of token generation.

            Example: “The system is processing the query,” rather than “The system is considering the query.”

            Why this works better:

            Precision: It avoids the philosophical baggage of “thought” (qualia, consciousness) while still acknowledging the utility of the output.

            Clarity: It prevents the “Plagiarism Machine” critique from being a total dismissal. Even if the data comes from humans, the recombination and application to new contexts is a distinct computational process worth naming accurately.

            Scalability: As models become more complex, “reasoning” or “synthesizing” scales better than “thinking,” which remains tied to biological definitions that may never apply to silicon.

            So, the compromise isn’t to keep saying “thinking” and hope people understand, nor to insist on “regurgitation,” which ignores the emergent properties of large-scale pattern matching. Instead, we shift the vocabulary to describe the process (reasoning, synthesizing, simulating) rather than the state of being (thinking).

            • Communist@lemmy.frozeninferno.xyz · +1 · 1 day ago

              That doesn’t really work either: it adds “synthesizing” to the terminology but doesn’t describe most of the behaviors they have. It’s not reasoning or simulating either.

                • Communist@lemmy.frozeninferno.xyz · +2 · edited · 1 day ago

                  I don’t find the problem compelling enough to warrant a solution.

                  why should I care about this misunderstanding that can easily be remedied with even the most basic cursory research?

                  there are countless things we do this with: rivers don’t run, they flow

                  even with computers we have called processing “thinking” for ages and nobody ever cared

                  cities aren’t actually capable of sleep either.

                  I think this is a problem that doesn’t matter at all even a little. Can you tell me why we should even try?

      • Samskara@sh.itjust.works · +5/−2 · 2 days ago

        That’s what human minds mostly do as well. The overwhelming majority of things you think and say are things you have heard or read elsewhere. Sometimes you combine two things you learned from the outside. Sometimes you develop a thing you learned a small step further. Actual creative thoughts stemming from yourself are pretty rare.