• wonderingwanderer@sopuli.xyz · 10 hours ago

    I mostly agree. Machine Learning is AI, and LLMs are trained with a specific form of Machine Learning. It would be more accurate to say LLMs are created with AI, but themselves are just a static predictive model.

    And people also need to realize that “AI” doesn’t mean sentient or conscious. It’s just a really complex computer algorithm. Even AGI won’t be sentient; it would only mimic sentience.

    And LLMs will never evolve into AGI, any more than the Broca’s and Wernicke’s areas can be adapted to replace the prefrontal cortex, the cingulate gyrus, or the vagus nerve.

    Tangent on the nature of consciousness:

    The nature of consciousness is philosophically contentious, but science doesn’t really have any answers there either. The “Best Guess™” is that consciousness is an emergent property of neural activity, but unfortunately that leads to the delusion that “If we can just program enough bits into an algorithm, it will become conscious.” And venture capitalists are milking that assumption for all it’s worth.

    The human brain isn’t merely electrical though, it’s electrochemical. It’s pretty foolish to write off the entire chemical aspect of the brain’s physiology and just assume that the electrical impulses are all that matter. The fact is, we don’t know what’s responsible for the property of consciousness. We don’t even know why humans are conscious rather than just being mindless automatons encased in meat.

    Yes, the brain can detect light and color, temperature and pressure, pleasure and pain, proprioception, sound vibrations, aromatic volatile gases and particles, chemical signals perceived as tastes, other chemical signals perceived as emotions, etc… But why do we perceive what the brain detects? Why is there even an us to perceive it? That’s unanswerable.

    Furthermore, where are “we” even located? In the brainstem? The frontal cortex? The corpus callosum? The amygdala or hippocampus? The pineal or pituitary gland? The occipital, parietal, or temporal lobe? Are “we” distributed throughout the whole system? If so, does that include the spinal cord and peripheral nervous system?

    Where is the center of the “self” responsible for the perception of “selfhood” and “self-awareness”?

    Until science can answer that, there is no path to artificial sentience, and the closest approximation we have to an explanation for our own sentience is simply Cogito ergo sum: I know only that I am sentient, because if I weren’t, then I wouldn’t be able to question my own sentience and be aware of the fact that I am questioning it.

    Why digital circuits will never be conscious:

    The human brain has about 86 billion neurons. The average commercial API-based LLM already has about 150 billion parameters, and at FP32 precision that’s 4 bytes per parameter. If all it took were a complex enough system of digits, it would have already worked.

    It’s just as likely that consciousness doesn’t emerge from electrochemical interactions, but is an inherent property of them. If every electron was conscious of its whirring around, we wouldn’t know the difference. Perhaps when enough of them are concerted together in a common effort, their simple form of consciousness “pools together” to form a more complex, unitary consciousness just like drops of water in a bucket form one pool of water. But that’s just pure speculation. And so is emergent consciousness theory. The difference is that consciousness as a property rather than an effect would explain why it seems to emerge from complex enough systems.

    • Knock_Knock_Lemmy_In@lemmy.world · 15 hours ago

      It’s just a really complex computer algorithm

      Not particularly complex. An LLM is:

      $P_\theta(x) = \prod_t \text{softmax}\big(f_\theta(x_{<t})\big)_{x_t}$

      where $f_\theta$ is a deep Transformer trained by maximum likelihood.
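      That factorization fits in a few lines of NumPy. A minimal sketch, where `f_theta` is a stand-in for the Transformer (here just random logits over a toy vocabulary, purely for illustration — not a real model):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 10  # toy vocabulary size

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def f_theta(prefix):
    """Stand-in for the deep Transformer f_theta: maps a token-id
    prefix to a vector of logits over the vocabulary. A real model
    computes this with billions of learned parameters; this one
    just returns random logits."""
    return rng.standard_normal(VOCAB)

def log_prob(tokens):
    """log P_theta(x) = sum_t log softmax(f_theta(x_<t))[x_t]"""
    return sum(
        np.log(softmax(f_theta(tokens[:t]))[tok])
        for t, tok in enumerate(tokens)
    )

lp = log_prob([3, 1, 4, 1, 5])  # log-likelihood of one toy sequence
```

      Maximum-likelihood training just nudges the parameters to make that log-probability as large as possible over the training corpus.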

      • wonderingwanderer@sopuli.xyz · 14 hours ago

        That “deep Transformer trained by maximum likelihood” is the complex part.

        Billions of parameters organized into weight tensors spread over a dozen or more layers, each layer’s hidden dimension split among multiple attention heads. Every parameter’s weight is algorithmically adjusted during training. For every query, matrix multiplications between query and key vectors score the relevance of each token to every other token. Tens of thousands of tokens may sit in the cached context at a time, each one analyzed relative to all the others.
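        That query–key relevance step is, at its core, a couple of matrix multiplications. A minimal single-head sketch with toy sizes (in a real model, Q, K, and V come from learned projections of each token’s hidden state, and many such heads run in parallel per layer):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every query vector is dotted
    with every key vector to score token-to-token relevance, the
    scores are softmax-normalized row by row, and the value vectors
    are mixed according to those weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # (T, T) relevance matrix
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)       # each row sums to 1
    return w @ V                             # weighted mix of values

T, d = 6, 8                                  # toy: 6 tokens, head dim 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))
out = attention(Q, K, V)                     # one output vector per token
```

        The (T, T) score matrix is why long contexts get expensive: every token attends to every other token.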

        And at standard FP32 precision, each parameter requires four bytes of memory; even 8-bit quantization still needs one byte per parameter. That’s on the order of 12–24 GB of RAM for a model considered small.
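        The arithmetic is easy to check. A quick weights-only estimate, using a hypothetical 7-billion-parameter “small” model (not any specific one; activations and the KV cache cost extra on top):

```python
GiB = 2**30

def weights_gib(n_params, bytes_per_param):
    """Memory for the weights alone, in GiB."""
    return n_params * bytes_per_param / GiB

small = 7e9                    # hypothetical "small" 7B-parameter model
fp32 = weights_gib(small, 4)   # 32-bit floats: roughly 26 GiB
int8 = weights_gib(small, 1)   # 8-bit quantized: roughly 6.5 GiB
```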

        Deep Transformers are not simple systems; if they were, it wouldn’t take such an enormous amount of resources to fully train them.

        • Knock_Knock_Lemmy_In@lemmy.world · 12 hours ago

          The technical implementation, computational effort, and sheer volume of training data are astounding.

          But that doesn’t change the fact that the algorithm is pretty simple. DeepSeek’s code is about 1,400 lines across five .py files.

        • maplesaga@lemmy.world · 11 hours ago

          You’re really breaking the shitting on AI vibe when you make it sound like the height of human capacity and ingenuity. Can I just call it slop and go back to eating glue?

          • wonderingwanderer@sopuli.xyz · 11 hours ago

            You can still shit on AI; being computationally complex doesn’t make it the greatest thing ever. It still has a lot of problems. In fact, one of its main problems is its consumption of resources (water, electricity, RAM, etc.), which follows directly from that computational complexity.

            I’m not defending AI companies, I just think characterizing LLMs as “simple” is misleading.

            • maplesaga@lemmy.world · 10 hours ago

              Our whole economy is geared to consume resources; we have inflation targeting to prevent aggregate demand and prices from ever falling. If you want to lower consumption, you need hard currency. The cheap cash the AI companies are riding on now is most likely still Covid stimulus and QE.

              • wonderingwanderer@sopuli.xyz · 10 hours ago

                And speculation. Venture capitalists think they can create money by betting money they predict they’ll have in the future. It’s how this circular Ponzi scheme between Nvidia and OpenAI is holding itself up for now.

                Those huge numbers that they count in their net worth don’t really exist. It’s money that’s been pledged by a different company based on money they pledged to that company in the first place. It’s speculation all the way down.

                They’re hoping for a pay-off, but it’s a sunk-cost bubble, kicking the can down the road for as long as they can before it bursts.

                • maplesaga@lemmy.world · 10 hours ago

                  I do think QE and artificially low interest rates lead to riskier stocks and commodities like Bitcoin doing better, with growth stocks greatly outpacing value stocks.

                  I think this is a continuation of the Covid stimulus, and it’s up to economic gravity whether there’s a debt bubble that will pop, leading to a dramatic fall in the money supply. I’m pretty sure we’re roughly saying the same thing.

                  • wonderingwanderer@sopuli.xyz · 10 hours ago

                    Yeah, I wasn’t disagreeing with you. Just adding some detail.

                    But yeah, if you look at a graph of corporate profits and net worth, it exploded during Covid and has hardly slowed down since. They got a taste for a greater flow of society’s lifeblood, and now they can’t live without it. Returning to pre-Covid rates of “growth” would be labeled “stagnation” and “failure.”

                    Meanwhile, consumers were/are facing runaway inflation and a ballooning cost of living, workers being laid off by cost-cutting companies, and yet these corporate executives still have the nerve to say “well we need to pinch pennies to stay afloat, we need tax-cuts and simultaneously we also need taxpayer-funded government stimulus/bailouts/subsidies. Inflation isn’t our fault, it’s just a normal part of a working economy, we’re totally not just artificially raising our prices because we can get away with it (don’t talk about deflation though, that’s a dirty word, that would mean we’re in a *gasp* recession). And no, we can’t lower our prices or pay our employees more; that would eat into our profit margins and we have a fiduciary responsibility to our shareholders.”

                    Their profit margins: