what’s your point? do you believe that llms actually understand their own output?
That’s a difficult question. The semantics of ‘understand’, and the metaphysics of how it might apply here, are rather unclear to me. LLMs have a certain consistent internal modeling that agrees with their output, which in that respect is the same as human thought, and human thought is something I think we’d agree counts as ‘understanding’. But feeding 1+1 into a calculator will also consistently get the same result. Is that understanding? In some respects it is: the math is fully represented by the inner workings of the calculator. It doesn’t feel to us like real understanding because it’s trivial and purely causal, but I think that’s just because the problem is so simple. So what we end up with is that, assuming an AI is reasonably correct, whether it is really understanding is more a function of the complexity it handles. And the complexity of human thought is much higher than that of current AI systems, partly because we always hold all sorts of other thoughts and memories that are independent of any particular task but get combined at some level.
So, in a way, the LLM construct understands its limited mapping of a problem. But even though it uses the same input/output language that humans do, current LLMs don’t understand things at anywhere near the level that humans do.
It’s not a difficult question.
LLMs do not understand things.
If you’re going to define it that way, then obviously that’s how it is. But do you really understand what understanding is?