• CannonFodder@lemmy.world
    22 hours ago

    I’m well aware of how LLMs work. And I’m pretty sure the apple part in the prompt would trigger significant activity in the areas related to apples. It’s obviously not a thought about apples the way a human would have one. The complexity and the structure of a human brain are very different. But the LLM does have a model of how the world works from its token-relationship perspective. That’s what it’s doing - following a model. It’s nothing like human thought, but it’s really just a matter of degree. Sure, apples to justice is a good description. And it doesn’t ‘ponder’ because there’s no continuous feedback in a typical LLM setup, although I suspect that’s coming. But what we’re doing with LLMs is a basis for thought. I see no fundamental difference, except scale, between current LLMs and human brains.
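
    A minimal sketch of that “no continuous feedback” point, assuming the Hugging Face transformers library and the public gpt2 checkpoint (neither is named above, they’re just stand-ins): in a typical LLM setup the only “feedback” is the token the model just produced being appended to the context and run through the model again, one step at a time.

    ```python
    # Sketch only: greedy autoregressive decoding with gpt2 as a stand-in model.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # The word "apple" in the prompt shapes every prediction that follows.
    context = tokenizer("I sliced the apple and", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):
            logits = model(context).logits      # scores over the whole vocabulary
            next_id = logits[0, -1].argmax()    # greedy pick of the next token
            # The only "feedback": append the choice to the context and re-run.
            context = torch.cat([context, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(context[0]))
    ```

    There’s no persistent inner state between user turns in this setup - once the loop stops, nothing keeps “pondering” until new text arrives in the context.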