• foliumcreations@lemmy.world · 20 hours ago

    I have made the conscious decision to try not to refer to it as AI, but as a predictive LLM or a generative mimic model, to better reflect what these systems are. If we all manage to change our vernacular, perhaps we can make them slightly less attractive to use for everything. Some might even feel less inclined to brag about using them for all their work.

    Other options might be unethical guessing machines, deceptive echo models, or the classic from WH40k: Abominable Intelligence.

    • cloudy1999@sh.itjust.works · 8 hours ago

      “Asking one’s chat bot” sounds so much less impressive than “leveraging AI”. Using the right language throws some cold water on the corporate narrative.

    • wonderingwanderer@sopuli.xyz · 14 hours ago

      I mostly agree. Machine Learning is AI, and LLMs are trained with a specific form of Machine Learning. It would be more accurate to say LLMs are created with AI, but are themselves just a static predictive model.

      And people also need to realize that “AI” doesn’t mean sentient or conscious. It’s just a really complex computer algorithm. Even AGI wouldn’t be sentient; it would only mimic sentience.

      And LLMs will never evolve into AGI, any more than the Broca’s and Wernicke’s areas can be adapted to replace the prefrontal cortex, the cingulate gyrus, or the vagus nerve.

      Tangent on the nature of consciousness:

      The nature of consciousness is philosophically contentious, but science doesn’t really have any answers there either. The “Best Guess™” is that consciousness is an emergent property of neural activity, but unfortunately that leads to the delusion that “If we can just program enough bits into an algorithm, it will become conscious.” And venture capitalists are milking that assumption for all it’s worth.

      The human brain isn’t merely electrical though, it’s electrochemical. It’s pretty foolish to write off the entire chemical aspect of the brain’s physiology and just assume that the electrical impulses are all that matter. The fact is, we don’t know what’s responsible for the property of consciousness. We don’t even know why humans are conscious rather than just being mindless automatons encased in meat.

      Yes, the brain can detect light and color, temperature and pressure, pleasure and pain, proprioception, sound vibrations, aromatic volatile gases and particles, chemical signals perceived as tastes, other chemical signals perceived as emotions, etc. But why do we perceive what the brain detects? Why is there even an us to perceive it? That’s unanswerable.

      Furthermore, where are “we” even located? In the brainstem? The frontal cortex? The corpus callosum? The amygdala or hippocampus? The pineal or pituitary gland? The occipital, parietal, or temporal lobe? Are “we” distributed throughout the whole system? If so, does that include the spinal cord and peripheral nervous system?

      Where is the center of the “self” responsible for the perception of “selfhood” and “self-awareness”?

      Until science can answer that, there is no path to artificial sentience, and the closest approximation we have to an explanation for our own sentience is simply Cogito Ergo Sum: I only know that I am sentient because, if I weren’t, I wouldn’t be able to question my own sentience and be aware of the fact that I am questioning it.

      Why digital circuits will never be conscious:

      The human brain has about 86 billion neurons. The average commercial API-based LLM already has about 150 billion parameters, each stored as a 4-byte FP32 value. If all it took were a complex enough system of numbers, it would have already worked.

      It’s just as likely that consciousness doesn’t emerge from electrochemical interactions, but is an inherent property of them. If every electron was conscious of its whirring around, we wouldn’t know the difference. Perhaps when enough of them are concerted together in a common effort, their simple form of consciousness “pools together” to form a more complex, unitary consciousness just like drops of water in a bucket form one pool of water. But that’s just pure speculation. And so is emergent consciousness theory. The difference is that consciousness as a property rather than an effect would explain why it seems to emerge from complex enough systems.

      • Knock_Knock_Lemmy_In@lemmy.world · 19 hours ago

        It’s just a really complex computer algorithm

        Not particularly complex. An LLM is:

        $P_\theta(x) = \prod_t \text{softmax}\big(f_\theta(x_{<t})\big)_{x_t}$

        where $f_\theta$ is a deep Transformer trained by maximum likelihood.
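        Spelled out, that product is just a per-step softmax lookup. A toy sketch (illustrative only: the hand-written logits stand in for a real Transformer’s output):

```python
import math

# Toy illustration of P_theta(x) = prod_t softmax(f_theta(x_<t))[x_t].
# A real f_theta is a Transformer; here we just hand it made-up logits.

def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sequence_log_prob(logits_per_step, token_ids):
    """Sum of log softmax(logits_t)[x_t] over the sequence."""
    return sum(math.log(softmax(logits)[tok])
               for logits, tok in zip(logits_per_step, token_ids))

# Uniform logits over a 4-token vocab for 2 steps: probability (1/4)^2
lp = sequence_log_prob([[0.0] * 4, [0.0] * 4], [1, 2])
```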

        • wonderingwanderer@sopuli.xyz · 18 hours ago

          That “deep Transformer trained by maximum likelihood” is the complex part.

          Billions of parameters spread across dozens of layers, each layer split into a hidden dimension with multiple attention heads. Every parameter’s weight is adjusted algorithmically during training. For every query, matrix multiplications across multiple vectors approximate the relevance between tokens, with possibly tens of thousands of tokens held in cached memory at a time, each analyzed relative to the others.
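          The relevance-weighting step being described is scaled dot-product attention. A bare-bones sketch in plain Python (single head, no learned projections, purely illustrative):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    Q, K, V are lists of vectors (lists of floats). One head, no
    learned projections -- just the core relevance-weighting step.
    """
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)        # how relevant each token is to q
        out.append([sum(w * v[i] for w, v in zip(weights, V))
                    for i in range(len(V[0]))])
    return out

# A query aligned with the first key attends mostly to the first value.
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```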

          And in a standard FP32 architecture, each parameter requires four bytes of memory; even 8-bit quantization still needs one byte per parameter. That’s 12–24 GB of RAM just for the weights of a model considered small.
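          The memory figures follow from simple arithmetic (the parameter count below is a hypothetical round number, not any specific model):

```python
def weight_memory_gb(n_params, bytes_per_param):
    """Approximate RAM needed just to hold the model weights."""
    return n_params * bytes_per_param / 1e9

small_model = 6_000_000_000                  # a hypothetical ~6B-parameter model
fp32 = weight_memory_gb(small_model, 4)      # 4 bytes/param at full precision
int8 = weight_memory_gb(small_model, 1)      # 1 byte/param with 8-bit quantization
```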

          Deep Transformers are not simple systems; if they were, it wouldn’t take such an enormous amount of resources to fully train them.

          • Knock_Knock_Lemmy_In@lemmy.world · 16 hours ago

            The technical implementation, computational effort, and sheer volume of training data are astounding.

            But that doesn’t change the fact that the algorithm itself is pretty simple. DeepSeek’s model code is about 1,400 lines across five .py files.

          • maplesaga@lemmy.world · 15 hours ago

            You’re really breaking the shitting on AI vibe when you make it sound like the height of human capacity and ingenuity. Can I just call it slop and go back to eating glue?

            • wonderingwanderer@sopuli.xyz · 15 hours ago

              You can still shit on AI; being computationally complex doesn’t make it the greatest thing ever. It still has a lot of problems. In fact, one of the main problems is its consumption of resources (water, electricity, RAM, etc.) precisely because of that computational complexity.

              I’m not defending AI companies, I just think characterizing LLMs as “simple” is misleading.

              • maplesaga@lemmy.world · 14 hours ago

                Our whole economy is geared to consume resources; we have inflation targeting to prevent aggregate demand and prices from ever falling. If you want to lower consumption, you need hard currency. The cheap cash the AI companies are riding on now is most likely still Covid stimulus and QE.

                • wonderingwanderer@sopuli.xyz · 14 hours ago

                  And speculation. Venture capitalists think they can create money by betting money they predict they’ll have in the future. That’s how this circular Ponzi scheme between Nvidia and OpenAI is holding itself up for now.

                  Those huge numbers that they count in their net worth don’t really exist. It’s money that’s been pledged by a different company based on money they pledged to that company in the first place. It’s speculation all the way down.

                  They’re hoping for a pay-off, but it’s a bubble of sunk costs, kicking the can down the road for as long as they can before it bursts.

                  • maplesaga@lemmy.world · 14 hours ago

                    I do think QE and artificially low interest rates lead to riskier stocks and commodities like Bitcoin doing better, with growth stocks greatly outpacing value stocks.

                    I think this is a continuation of the Covid stimulus, and it’s up to economic gravity whether there’s a debt bubble that will pop, leading to a dramatic fall in the money supply. I’m pretty sure we’re roughly saying the same thing.

    • mechoman444@lemmy.world · 12 hours ago

      The Men of Iron are so freaking cool! They’re still around in modern 40k, hiding and biding their time.

      Maybe one day we’ll have a whole new army of AIs in 40k!