• CannonFodder@lemmy.world · +13 −7 · 2 days ago

    Our brains just get signals coming in from our nerves that we learn to associate with the concept of an apple. We have years of such training data, we use more than words to tokenize thoughts, and we have much more sophisticated state / memory; but it’s essentially the same thing, just much, much more complex. Our brains produce output that is consistent with their internal models and constantly use feedback to improve those models.

    • SparroHawc@lemmy.zip · +4 · 1 day ago

      You can tell a person to think about apples, and the person will think about apples.

      You can tell an LLM ‘think about apples’ and the LLM will say ‘Okay’ but it won’t think about apples; it is only saying ‘okay’ because its training data suggests that is the most common response to someone asking someone else to think about apples. LLMs do not have an internal experience. They are statistical models.

      • CannonFodder@lemmy.world · +2 −1 · 1 day ago

        Well, the LLM does briefly ‘think’ about apples in that it activates its ‘thought’ areas relating to apples (the tokens representing apples in its system). Right now, an LLM’s internal experience is based on its previous training and the current prompt while it’s running. Our brains are always on and circulating thoughts, so of course that’s a very different concept of experience. But you can bet there are people working on building an AI system (with LLM components) that works that way too. The line will get increasingly blurred. Our brain processing is just an organic statistical model with complex state management and chemical-based timing control.
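        As a very loose sketch of what ‘activating areas relating to apples’ could mean: tokens live as vectors, and related concepts end up near each other, so an ‘apple’ prompt lights up nearby concepts. The vectors below are made up purely for illustration; real LLM embeddings are learned and have hundreds of dimensions.

```python
# Toy sketch: made-up token vectors; a prompt token "activates" stored
# concepts in proportion to cosine similarity. All numbers are invented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: nearby vectors = related concepts.
embeddings = {
    "apple":   [0.9, 0.1, 0.0],
    "fruit":   [0.8, 0.2, 0.1],
    "orchard": [0.7, 0.3, 0.0],
    "justice": [0.0, 0.1, 0.9],
}

prompt_token = embeddings["apple"]
activations = {w: cosine(prompt_token, v) for w, v in embeddings.items()}
# "fruit" and "orchard" score high; "justice" scores low.
```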

        • SparroHawc@lemmy.zip · +3 · edited · 17 hours ago

          You misunderstand. The outcome of asking an LLM to think about an apple is the token ‘Okay’. It probably doesn’t get very far into even what you claim is ‘thought’ about apples, because when someone says the phrase “Think about X”, the immediate response is almost always ‘Okay’ and never anything about whatever ‘X’ is. That is the sum total of its objective. It does not perform a facsimile of human thought; it performs an analysis of what the most likely next token would be, given what text existed before it. It imitates human output without any of the behavior or thought processes that lead up to that output in humans. There is no model of how the world works. There is no theory of mind. There is only how words are related to each other with no ‘understanding’. It’s very good at outputting reasonable text, and even drawing inferences based on word relations, but anthropomorphizing LLMs is a path that leads to exactly the sort of conclusion that the original comic is mocking.

          Asking an LLM if it is alive does not cause the LLM to ponder the possibility of whether or not it is alive. It causes the LLM to output the response most similar to its training data, and nothing more. It is incapable of pondering its own existence, because that isn’t how it works.
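          To make the ‘most likely next token’ point concrete, here is a toy sketch with a tiny made-up corpus and bigram counts - nothing like a real transformer’s learned weights, but the same ‘most common continuation wins’ spirit:

```python
# Toy sketch: pick the most frequent next word from bigram counts in a
# tiny invented corpus. Real LLMs use learned neural weights over huge
# vocabularies, but the "most likely continuation" idea is analogous.
from collections import Counter, defaultdict

corpus = (
    "think about apples . okay . "
    "think about oranges . okay . "
    "think about apples . sure ."
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev):
    # Greedy choice: the single most common continuation seen so far.
    return bigrams[prev].most_common(1)[0][0]
```

Here `next_token("about")` returns "apples" simply because "apples" followed "about" more often than "oranges" did in the corpus - no apples were ever contemplated.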

          Yes, our brains are actually an immensely complex neural network, but beyond that the structure is so ridiculously different that it’s closer to comparing apples to the concept of justice than comparing apples to oranges.

          • CannonFodder@lemmy.world · +2 · 16 hours ago

            I’m well aware of how LLMs work. And I’m pretty sure the apple part of the prompt would trigger significant activity in the areas related to apples. It’s obviously not a thought about apples the way a human would have one. The complexity and the structure of a human brain are very different. But the LLM does have a model of how the world works, from its token-relationship perspective. That’s what it’s doing - following a model. It’s nothing like human thought, but it’s really just a matter of degree. Sure, apples to justice is a good description. And it doesn’t ‘ponder’ because we don’t feed back continuously in a typical LLM setup, although I suspect that’s coming. But what we’re doing with LLMs is a basis of thought. I see no fundamental difference except scale between current LLMs and human brains.

    • Jared White ✌️ [HWC]@humansare.social · +23 −7 · 2 days ago

      You think you are saying things that prove you are knowledgeable on this topic, but you are not.

      The human brain is not a computer. And any comparisons between the two are wildly simplistic and likely to introduce more error than meaning into the discourse.

      • CannonFodder@lemmy.world · +7 −7 · 2 days ago

        The human brain is exactly like an organic, highly parallel computer system using convolution, just like AI models do. It’s just way more complex. We know how synapses work. We know the form of grey matter. It’s too complex for us to model it all artificially at this point, but there’s nothing indicating it requires a magical function to make it work.

      • WorldsDumbestMan@lemmy.today · +8 −13 · 2 days ago

        What is this whole “human beings are special and have a soul” idea? You happen to experience things you “feel”, and that’s it. Everything else is just like a specialized computer, shaped by nature to act in a certain way.

    • Catoblepas@piefed.blahaj.zone · +11 −2 · 2 days ago

      but it’s essentially the same thing, just much much more complex

      If you say that all your statements and beliefs are a slurry of weighted averages depending on how often you’ve seen something without any thought or analysis involved, I will believe you 🤷‍♂️

      • CannonFodder@lemmy.world · +4 −4 · 2 days ago

        There’s no reason to think that the thought and analysis you perceive isn’t based on such complex historical weighted averages in your brain. In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.
        What’s funny is people thinking their brain is anything magically different from an organic computer.

        • Catoblepas@piefed.blahaj.zone · +12 −1 · 2 days ago

          In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.

          I encourage you to try to find and cite any reputable neuroscientist that believes we can even quantify what thought is, much less believes both A) we ‘know the basic fundamentals of how brains work’ and B) it’s just like an LLM.

          Your argument isn’t a line of reasoning invented by neuroscientists, it’s one invented by people who need to sell more AI processors. I know which group I think has a better handle on the brain.

          • CannonFodder@lemmy.world · +2 −4 · 2 days ago

            I never said it’s directly like an LLM. That’s a very specific form. The brain has many different structures - and the neural interconnections we can map have been shown to be a form of convolution, in much the same way that many AI systems use (not by coincidence).

            Scientists generally avoid metaphysical subjects like consciousness because they’re inherently unprovable. We can look at the results of processing/thought and quantify the complexity and accuracy. We do this for children at various ages and can see how they learn to think with increasing complexity. We can do this for AI systems too. The leaps we’ve seen over the last few years, as the computational power of computers has reached some threshold, show emergent abilities that only a decade ago were thought to be impossible.

            Since we can never know anyone else’s experience, we can only go on input/output. And so if it looks like intelligence, then it is intelligence. The concept of ‘thought’ in this context is only semantics. There is, so far, nothing to suggest that magic is needed for our brains to think; it’s just a physical process - so as we add more complexity and different structures to AI systems, there’s no reason to think we can’t make them do the same as our brains, or more.

              • CannonFodder@lemmy.world · +3 −3 · 1 day ago

                If you don’t see the new things that computers can do with AI, then you are being purposely ignorant. There’s tons of slop, along with useful capabilities; but even that slop generation is clearly a new ability computers didn’t have before.

                And yes, if you can process written Chinese fully and respond to it, you do understand it.

                • Catoblepas@piefed.blahaj.zone · +2 · 1 day ago

                  And yes, if you can process written Chinese fully and respond to it, you do understand it.

                  Understanding is when you follow instructions without any comprehension, got it 👍

              • CannonFodder@lemmy.world · +2 −3 · 1 day ago

                That’s a difficult question. The semantics of ‘understand’ and the metaphysics of how it might apply are rather unclear to me. LLMs have a certain consistent modeling which agrees with their output, so that’s the same as human thought, which I think we’d agree is ‘understanding’; but feeding 1+1 into a calculator will also consistently get the same result. Is that understanding? In some respects it is: the math is fully represented by the inner workings of the calculator. It doesn’t feel to us like real understanding because it’s trivial and very causal. I think that’s just because the problem is so simple. So what we end up with is that, assuming an AI is reasonably correct, whether it is really understanding is more a function of the complexity it handles. And the complexity of human thought is much higher than that of current AI systems, partly because we always hold all sorts of other thoughts and memories that can be independent of a particular task, but are combined at some level.
                So, in a way, the LLM construct understands its limited mapping of a problem. But even though it’s using the same input/output language as humans do, current LLMs don’t understand things at anywhere near the level that humans do.

                  • CannonFodder@lemmy.world · +2 −3 · 1 day ago

                    If you’re going to define it that way, then obviously that’s how it is. But do you really understand what understanding is?