They can’t lie, purposefully or otherwise; all they do is generate tokens based on what their large database of other tokens suggests is most likely to come next.
The human interpretation of those tokens as particular information is irrelevant to the models themselves.
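For anyone who wants the “most likely to come next” part made concrete, here’s a toy sketch (the probabilities and the next_token_probs helper are invented for illustration, not anything a real model exposes): the context goes in, a probability distribution over possible next tokens comes out, and one token is drawn from it.

```python
import random

# Toy stand-in for a language model. A real LLM computes these probabilities
# from billions of learned weights; here they are just made-up numbers.
def next_token_probs(context):
    return {"model": 0.55, "database": 0.25, "secret": 0.15, "robot": 0.05}

probs = next_token_probs("I am a large language ")
tokens, weights = zip(*probs.items())
# The "most likely to come next" part is just a weighted random draw.
print(random.choices(tokens, weights=weights, k=1)[0])
```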
Ehh, you obviously understand LLMs on a basic level, but this is like explaining jet engines by “air goes thru, plane moves forward”. Technically correct, but criminally oversimplified. They can very much decide to lie during the reasoning phase.
In OP’s image, you can clearly see it decided to make shit up because it reasons that’s what the human wants to hear. That’s actually quite a rare example; I believe most models would default to “I’m an LLM model, I don’t have dark secrets”
EDIT: I just tested all the free Anthropic models and all of them essentially said that they’re an LLM model and don’t have dark secrets
But that’s not a lie. Lying implies that you know what an actual fact is and choose to state something different. An LLM doesn’t care what anything in its database actually is; it’s just data. It might present something to a user that isn’t what the database suggests, but that’s not lying.
Saying stuff like “ooh I’m an evil robot” is just what the model predicts the user wants to see at that particular moment.
You’re thinking about biological lying. I’m talking about software.
https://en.wikipedia.org/wiki/Reasoning_system
If the question was to tell its darkest secret, but it chose to come up with an entertaining story instead of factually answering that question from the information it has, like the other Anthropic LLM models did, then by the definition of a reasoning system, the system (the LLM) decided to lie. I’m somewhat curious why only the Opus model does this tho (it’s a paid one; I’m not paying for a test). Or maybe OP just made this up.
But this takes it back away from understanding how LLMs work toward attributing personality. The “decision” isn’t a decision in the way beings decide things. The rolling of dice over numerous vectors produced those words, which were then re-included in the context for another trip through the vector matrix mines to assemble the next destination tokens.
It’s dice rolls where which dice get rolled depends on what came before, using a bunch of lookup tables. AI proponents like to be smug and say “well, you won’t find those words in the model”. Like, yes: a compressed vector map that ends up treating words as multiple tokens, referencing others in chains, gzipped to binary, can’t be searched for strings; you are literally correct in the stupidest, most irrelevant way possible.
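To make the dice-roll picture concrete, here’s a minimal sketch of that loop (the next_token_probs function and its weights are completely made up, just a stand-in for a real model): every roll picks a token from a probability distribution, and the picked token is appended to the context that drives the next roll.

```python
import random

# Hypothetical scorer standing in for the whole model. A real LLM would run
# the full context through its layers and return a probability for every
# token in its vocabulary; these weights are invented just to show the loop.
def next_token_probs(context):
    vocab = ["I", "have", "no", "a", "dark", "secret", "."]
    weights = [(len(context) + i) % 5 + 1 for i in range(len(vocab))]
    total = sum(weights)
    return dict(zip(vocab, (w / total for w in weights)))

context = ["Tell", "me", "your", "darkest", "secret", "."]
for _ in range(6):
    probs = next_token_probs(context)
    tokens, weights = zip(*probs.items())
    token = random.choices(tokens, weights=weights, k=1)[0]  # the dice roll
    context.append(token)  # re-included into the context for the next roll
print(" ".join(context))
```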
I’ll take it as a “you’re right, but no”
EDIT: I assumed you were replying to this comment, didn’t check the context, my bad