• Catoblepas@piefed.blahaj.zone · 2 days ago

    but it’s essentially the same thing, just much much more complex

    If you say that all your statements and beliefs are a slurry of weighted averages depending on how often you’ve seen something without any thought or analysis involved, I will believe you 🤷‍♂️

    • CannonFodder@lemmy.world · 2 days ago

      There’s no reason to think that the thought and analysis you perceive aren’t based on such complex historical weighted averages in your brain. In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.
      What’s funny is people thinking their brain is anything magically different from an organic computer.
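      As a toy illustration of what “weighted by how often you’ve seen something” could mean, here is a tiny bigram predictor in Python. It is nothing like a real model or a brain; it only shows the weighting-by-frequency idea on a made-up sentence:

      ```python
      import random
      from collections import Counter, defaultdict

      # Toy bigram "predictor": weight each candidate next word by how often it
      # followed the current word in the text seen so far. Purely illustrative;
      # real models (and brains) are vastly more complex.
      counts = defaultdict(Counter)
      words = "the cat sat on the mat and the cat slept".split()
      for prev, nxt in zip(words, words[1:]):
          counts[prev][nxt] += 1

      def next_word(word):
          options = counts[word]
          return random.choices(list(options), weights=list(options.values()))[0]

      print(next_word("the"))  # "cat" comes out twice as often as "mat"
      ```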

      • Catoblepas@piefed.blahaj.zone · 2 days ago

        In fact, since we do know the basic fundamentals of how brains work, it would seem that’s exactly what’s happening.

        I encourage you to try to find and cite any reputable neuroscientist that believes we can even quantify what thought is, much less believes both A) we ‘know the basic fundamentals of how brains work’ and B) it’s just like an LLM.

        Your argument isn’t a line of reasoning invented by neuroscientists, it’s one invented by people who need to sell more AI processors. I know which group I think has a better handle on the brain.

        • CannonFodder@lemmy.world · 2 days ago

          I never said it’s directly like an LLM; that’s a very specific form. The brain has many different structures, and the neural interconnections we can map have been shown to perform a form of convolution, much like the operation many AI systems use (not by coincidence).

          Scientists generally avoid metaphysical subjects like consciousness because they’re inherently unprovable. What we can do is look at the results of processing/thought and quantify their complexity and accuracy. We do this for children at various ages and can see how they learn to think with increasing complexity, and we can do the same for AI systems. The leaps we’ve seen over the last few years, as the computational power of computers has reached some threshold, show emergent abilities that only a decade ago were thought to be impossible.

          Since we can never know anyone else’s experience, we can only go on input/output. So if it looks like intelligence, then it is intelligence, and the concept of ‘thought’ in this context is only semantics. There is, so far, nothing to suggest that magic is needed for our brains to think; it’s just a physical process. So as we add more complexity and different structures to AI systems, there’s no reason to think we can’t make them do the same as our brains, or more.
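          For what it’s worth, “convolution” here just means a sliding weighted sum over an input. A toy Python sketch, with made-up weights purely for illustration and no claim about what neurons literally do:

          ```python
          # Toy 1-D convolution: slide a small set of weights along a signal and
          # take a weighted sum at each position. This is the basic primitive that
          # convolutional neural networks build on; the weights are invented here.
          def conv1d(signal, kernel):
              k = len(kernel)
              return [
                  sum(signal[i + j] * kernel[j] for j in range(k))
                  for i in range(len(signal) - k + 1)
              ]

          print(conv1d([1, 2, 3, 4, 5], [0.25, 0.5, 0.25]))  # [2.0, 3.0, 4.0]
          ```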

            • CannonFodder@lemmy.world · 1 day ago

              If you don’t see the new things that computers can do with AI, then you are being purposely ignorant. There’s tons of slop along with useful capabilities, but even that slop generation is clearly a new ability computers didn’t have before.

              And yes, if you can process written Chinese fully and respond to it, you do understand it.

              • Catoblepas@piefed.blahaj.zone · 23 hours ago

                And yes, if you can process written Chinese fully and respond to it, you do understand it.

                Understanding is when you follow instructions without any comprehension, got it 👍

            • CannonFodder@lemmy.world · 1 day ago

              That’s a difficult question. The semantics of ‘understand’, and the metaphysics of how that might apply, are rather unclear to me. LLMs have a certain consistent internal modeling that agrees with their output, which is much like the human thought we would, I think, agree counts as ‘understanding’; but feeding 1+1 into a calculator will also consistently get the same result. Is that understanding? In some respects it is: the math is fully represented by the inner workings of the calculator. It doesn’t feel to us like real understanding because it’s trivial and very causal, but I think that’s just because the problem is so simple.

              So what we end up with is that, assuming an AI is reasonably correct, whether it is really understanding is more a matter of the complexity it handles. And the complexity of human thought is much higher than that of current AI systems, partly because we always hold all sorts of other thoughts and memories that can be independent of a particular task but are combined at some level.

              So, in a way, the LLM construct understands its limited mapping of a problem. But even though it’s using the same input/output language as humans do, current LLMs don’t understand things at anywhere near the level that humans do.

                • CannonFodder@lemmy.world · 23 hours ago

                  If you’re going to define it that way, then obviously that’s how it is. But do you really understand what understanding is?