• ramble81@lemmy.zip
    2 days ago

    Seriously, the sheer number of people who equate coherent speech with sentience is mind-boggling.

    All jokes aside, I have heard some decently educated technical people say “yeah, it’s really creepy that it put a random laugh in what it said” or “it broke the 4th wall when talking”… it’s fucking programmed to do that, and you just walked right into it.

    • Jankatarch@lemmy.world
      2 days ago

      Technical term is the ELIZA effect.

      In 1966, Professor Joseph Weizenbaum made a chatbot called ELIZA that essentially repeated what you said back to you in different terms.

      He then noticed by accident that people kept convincing themselves it was fucking conscious.

      “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

      - Prof. Weizenbaum on ELIZA.

    • Clay_pidgin@sh.itjust.works
      2 days ago

      Of course it’s creepy. Why wouldn’t it be? Someone programmed it to do that, or programmed it in such a way that it weighted those additions. That’s weird.

      • chaogomu@lemmy.world
        2 days ago

        The difference is knowledge. You know what an apple is. An LLM does not. It has training data in which the word apple is associated with the words red, green, pie, and doctor.

        The model then uses a random number generator to mix those words up a bit and checks whether the result looks a bit like the training data; if it does, the model spits out a sequence of words that may or may not be a sentence, depending on the size and quality of the training data.

        At no point is any actual meaning associated with any of the words. The model is just trying to fit different shaped blocks through different shaped holes, and sometimes everything goes through the square hole, and you get hallucinations.
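
        (A rough sketch of that “random number generator” step, in Python. Everything here is invented for illustration: the word list, the counts, and the idea that “apple” has a simple table of followers. A real model gets its probabilities from a neural network over the whole context, but the sampling step looks roughly like this.)

        import random

        # Invented association counts: how often each word follows "apple"
        # in some imaginary training data. Purely illustrative numbers.
        followers_of_apple = {"pie": 40, "red": 30, "green": 20, "doctor": 10}

        def sample_next_word(counts):
            """Pick a next word with probability proportional to its count."""
            words = list(counts)
            weights = [counts[w] for w in words]
            return random.choices(words, weights=weights, k=1)[0]

        # Each call may pick a different continuation, which is where the
        # "may or may not make sense" part comes in.
        print("apple", sample_next_word(followers_of_apple))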

        • CannonFodder@lemmy.world
          2 days ago

          Our brains just get signals coming in from our nerves that we learn to associate with a concept of the apple. We have years of such training data, and we use more than words to tokenize thoughts, and we have much more sophisticated state / memory; but it’s essentially the same thing, just much much more complex. Our brains produce output that is consistent with its internal models and constantly use feedback to improve those models.

          • SparroHawc@lemmy.zip
            1 day ago

            You can tell a person to think about apples, and the person will think about apples.

            You can tell an LLM ‘think about apples’ and the LLM will say ‘Okay’ but it won’t think about apples; it is only saying ‘okay’ because its training data suggests that is the most common response to someone asking someone else to think about apples. LLMs do not have an internal experience. They are statistical models.

            • CannonFodder@lemmy.world
              23 hours ago

              Well, the LLM does briefly ‘think’ about apples in that it activates its ‘thought’ areas relating to apples (the token representing apples in its system). Right now, an LLM’s internal experience is based on its previous training and the current prompt while it’s running. Our brains are always on and circulating thoughts, so of course that’s a very different concept of experience. But you can bet there are people working on building an AI system (with LLM components) that works that way too. The line will get increasingly blurred. Our brain processing is just an organic statistical model with complex state management and chemical-based timing control.
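
              (A hedged sketch of what “the token representing apples in its system” could mean mechanically: the model maps each token to a learned vector, and “activation” is arithmetic on those vectors, with related tokens ending up close together. The three-number vectors and the tiny vocabulary below are invented; real models learn thousands of dimensions.)

              import math

              # Invented 3-dimensional embeddings; a real model learns these values.
              embeddings = {
                  "apple":   [0.9, 0.1, 0.3],
                  "fruit":   [0.8, 0.2, 0.4],
                  "justice": [0.1, 0.9, 0.7],
              }

              def cosine_similarity(a, b):
                  """How aligned two token vectors are (1.0 = same direction)."""
                  dot = sum(x * y for x, y in zip(a, b))
                  norm_a = math.sqrt(sum(x * x for x in a))
                  norm_b = math.sqrt(sum(y * y for y in b))
                  return dot / (norm_a * norm_b)

              # In this toy space "apple" sits much closer to "fruit" than to "justice".
              print(cosine_similarity(embeddings["apple"], embeddings["fruit"]))
              print(cosine_similarity(embeddings["apple"], embeddings["justice"]))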

              • SparroHawc@lemmy.zip
                13 hours ago

                You misunderstand. The outcome of asking an LLM to think about an apple is the token ‘Okay’. It probably doesn’t get very far into even what you claim is ‘thought’ about apples, because when someone says the phrase “Think about X”, the immediate response is almost always ‘Okay’ and never anything about whatever ‘X’ is. That is the sum total of its objective. It does not perform a facsimile of human thought; it performs an analysis of what the most likely next token would be, given what text existed before it. It imitates human output without any of the behavior or thought processes that lead up to that output in humans. There is no model of how the world works. There is no theory of mind. There is only how words are related to each other with no ‘understanding’. It’s very good at outputting reasonable text, and even drawing inferences based on word relations, but anthropomorphizing LLMs is a path that leads to exactly the sort of conclusion that the original comic is mocking.

                Asking an LLM if it is alive does not cause the LLM to ponder the possibility of whether or not it is alive. It causes the LLM to output the response most similar to its training data, and nothing more. It is incapable of pondering its own existence, because that isn’t how it works.
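
                (A toy illustration of “outputs the response most similar to its training data”: score the possible continuations given the text so far and emit the top one. The prompt-to-reply table below is invented; a real model computes such scores with a neural network over its entire context, one token at a time.)

                # Invented conditional frequencies: for a given prompt, how often each
                # reply began the continuation in some imaginary training data.
                continuations = {
                    "Think about apples": {"Okay": 0.7, "Sure": 0.2, "Why?": 0.1},
                    "Are you alive?": {"I am just a program": 0.6, "Yes": 0.3, "No": 0.1},
                }

                def most_likely_reply(prompt):
                    """Return the highest-scoring continuation; no pondering involved."""
                    scores = continuations[prompt]
                    return max(scores, key=scores.get)

                print(most_likely_reply("Are you alive?"))  # prints: I am just a program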

                Yes, our brains actually are immensely complex neural networks, but beyond that the structure is so ridiculously different that it’s closer to comparing apples to the concept of justice than comparing apples to oranges.

                • CannonFodder@lemmy.world
                  12 hours ago

                  I’m well aware of how LLMs work. And I’m pretty sure the apple part in the prompt would trigger significant activity in the areas related to apples. It’s obviously not a thought about apples the way a human would have one. The complexity and the structure of a human brain are very different. But the LLM does have a model of how the world works, from its token-relationship perspective. That’s what it’s doing: following a model. It’s nothing like human thought, but it’s really just a matter of degree. Sure, apples to justice is a good description. And it doesn’t ‘ponder’ because we don’t feed back continuously in a typical LLM setup, although I suspect that’s coming. But what we’re doing with LLMs is a basis of thought. I see no fundamental difference except scale between current LLMs and human brains.

          • Jared White ✌️ [HWC]@humansare.social
            2 days ago

            You think you are saying things that prove you are knowledgeable on this topic, but you are not.

            The human brain is not a computer. And any comparisons between the two are wildly simplistic and likely to introduce more error than meaning into the discourse.

            • CannonFodder@lemmy.world
              2 days ago

              The human brain is exactly like an organic, highly parallel computer system using convolution, just like AI models. It’s just way more complex. We know how synapses work. We know the form of grey matter. It’s too complex for us to model it all artificially at this point, but there’s nothing indicating it requires a magical function to make it work.

            • WorldsDumbestMan@lemmy.today
              2 days ago

              What is this whole “human beings are special and have a soul” thing? You happen to experience things you “feel”; that’s it. Everything else is just like a specialized computer, shaped by nature to act in a certain way.

          • Catoblepas@piefed.blahaj.zone
            2 days ago

            but it’s essentially the same thing, just much much more complex

            If you say that all your statements and beliefs are a slurry of weighted averages depending on how often you’ve seen something without any thought or analysis involved, I will believe you 🤷‍♂️

            • CannonFodder@lemmy.world
              2 days ago

              There’s no reason to think that the thought and analysis you perceive aren’t based on such complex historical weighted averages in your brain. In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.
              What’s funny is people thinking their brain is anything magically different from an organic computer.

              • Catoblepas@piefed.blahaj.zone
                2 days ago

                In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.

                I encourage you to try to find and cite any reputable neuroscientist that believes we can even quantify what thought is, much less believes both A) we ‘know the basic fundamentals of how brains work’ and B) it’s just like an LLM.

                Your argument isn’t a line of reasoning invented by neuroscientists, it’s one invented by people who need to sell more AI processors. I know which group I think has a better handle on the brain.

                • CannonFodder@lemmy.world
                  2 days ago

                  I never said it’s directly like an LLM. That’s a very specific form. The brain has many different structures, and the neural interconnections we can map have been shown to be a form of convolution, in much the same way that many AI systems use it (not by coincidence). Scientists generally avoid metaphysical subjects like consciousness because they’re inherently unprovable. We can look at the results of processing/thought and quantify the complexity and accuracy. We do this for children at various ages and can see how they learn to think with increasing complexity. We can do this for AI systems too. The leaps we’ve seen over the last few years, as the computational power of computers has reached some threshold, show emergent abilities that only a decade ago were thought to be impossible. Since we can never know anyone else’s experience, we can only go on input/output. And so if it looks like intelligence, then it is intelligence. The concept of ‘thought’ in this context is only semantics. There is, so far, nothing to suggest that magic is needed for our brains to think; it’s just a physical process. So as we add more complexity and different structures to AI systems, there’s no reason to think we can’t make them do the same as our brains, or more.

      • petrol_sniff_king@lemmy.blahaj.zone
        2 days ago

        Oh my goddd…

        Honestly, I think we need to take all these solipsistic tech-weirdos and trap them in a Starbucks until they can learn how to order a coffee from the counter without hyperventilating.