We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it is literally just guessing which token – a word or fragment of a word – will come next in the sequence, based on the data it’s been trained on.
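To make that concrete, here is a toy sketch of the same idea at a vastly smaller scale: a bigram model that picks the next word purely from frequencies observed in a made-up ten-word “training corpus”. Real LLMs use neural networks trained on huge corpora rather than raw counts, but the core move – sample the statistically likely next token – is the same.

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus" (invented purely for illustration).
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Guess the next word by sampling from observed frequencies."""
    options = follows.get(prev)
    if not options:                        # dead end: restart anywhere
        return random.choice(corpus)
    words, freqs = zip(*options.items())
    return random.choices(words, weights=freqs)[0]

word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))   # e.g. "the cat sat on the mat and"
```

The model produces plausible-looking text while “knowing” nothing beyond co-occurrence statistics – which is the article’s point in miniature.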

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotions for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, a machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • scarabic@lemmy.world · 2 months ago

    My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes, and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up – is that true “reasoning”?

    It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

    • Puddinghelmet@lemmy.world · 2 months ago (edited)

      Human brains are much more complex than a mirroring script xD AI and supercomputers have only a fraction of the number of neurons in your brain. But you’re right, for you it’s probably not much different than AI

      • TangledHyphae@lemmy.world · 2 months ago

        The human brain contains roughly 86 billion neurons, while GPT-3 – the large language model originally behind ChatGPT – has 175 billion parameters. While that gives ChatGPT more “neurons” in a loose sense, parameters are really weights on connections (closer to biological synapses than to neurons), so the comparison is not straightforward.

        86 billion neurons in the human brain isn’t that much compared to some of the larger 1.7 trillion parameter networks though.
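        For what it’s worth, a quick back-of-the-envelope on a hypothetical toy network (not ChatGPT’s actual architecture) shows why parameters and neurons aren’t even the same unit – parameters count weighted connections, so a net has far more parameters than units:

        ```python
        # A hypothetical fully connected net: 1000 -> 1000 -> 1000 units.
        layers = [1000, 1000, 1000]

        # Each layer contributes fan_in * fan_out weights plus fan_out biases.
        params = sum(fan_in * fan_out + fan_out
                     for fan_in, fan_out in zip(layers, layers[1:]))
        neurons = sum(layers[1:])   # units after the input layer

        print(f"{neurons:,} 'neurons' but {params:,} parameters")
        # -> 2,000 'neurons' but 2,002,000 parameters
        ```

        Counted by connections, the brain is still far ahead: its ~86 billion neurons are wired through an estimated ~100 trillion synapses.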

  • JGrffn@lemmy.world · 2 months ago

    What I never understood about this argument is… why are we fighting over whether something that speaks like us, knows more than us, bullshits and gets shit wrong like us, loses its mind like us, seemingly sometimes seeks self-preservation like us… why all of this isn’t enough to fit the very self-explanatory term “artificial… intelligence”. That name does not claim the entity has a valid experience of the world the way other living beings do, it does not proclaim absolute excellence in all things done by said entity, it doesn’t even really say what kind of intelligence this intelligence would be. It simply says something has an intelligence of some sort, and it’s artificial.

    We’ve had AI in games for decades. It’s not the sci-fi AI, but it’s still code taking in multiple inputs and producing a behavior as an outcome of those inputs alongside other historical data it may or may not have (see the sketch below). This fits LLMs perfectly. As far as I understand, LLMs are essentially at least part of the algorithm we ourselves use in our brains to interpret written or spoken inputs and produce an output. They bullshit all the time and don’t know when they’re lying, so what? Has nobody here run into a compulsive liar or a sociopath? People sometimes have no idea where a random factoid they’re saying came from, or that it’s even a factoid, so why is it so crazy when the machine does it?
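    Something like this hypothetical guard NPC is what decades-old “game AI” looks like in miniature – a handful of inputs plus remembered state mapped to a behavior, no learning involved (the class and its rules are invented here for illustration):

    ```python
    from dataclasses import dataclass

    @dataclass
    class GuardNPC:
        last_seen: tuple | None = None   # remembered player position (history)

        def act(self, player_visible: bool, player_pos: tuple, health: int) -> str:
            """Map current inputs plus stored history to a behavior."""
            if player_visible:
                self.last_seen = player_pos
                return "flee" if health < 20 else "attack"
            if self.last_seen is not None:
                return "investigate_last_seen"
            return "patrol"

    guard = GuardNPC()
    print(guard.act(player_visible=True, player_pos=(4, 7), health=80))   # attack
    print(guard.act(player_visible=False, player_pos=(0, 0), health=80))  # investigate_last_seen
    ```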

    I keep hearing the word “anthropomorphize” being thrown around a lot, as if we can’t be bringing others up into our domain, all the while refusing to even consider that maybe the underlying mechanisms that make us tick are not that special – certainly not special enough to grant us a whole degree of separation from other beings and entities – and maybe we should instead bring ourselves down to the same domain as the rest of reality. Cold hard truth is, we don’t know that consciousness isn’t just an emergent property of various different large models working together to present a cohesive image. If it is, would that be so bad? Hell, we don’t really even know if we actually have free will or if we live in a superdeterministic world, where every single particle moves along a predetermined path given to it since the very beginning of everything. What makes us think we’re so much better than other beings, to the point where we decide whether their existence is even recognizable?