It’s fair to wonder about the specifics, but the word lying implies that the LLM knows it’s providing a falsehood as fact. It doesn’t know. Its “lies” are either hallucinations (where it doesn’t have the information in its data set, can’t return what was requested, and so provides incorrect information because that output is statistically the closest the thing can get), or incorrect information returned because the guardrails set by the company engineering it said to provide Y when queried with X.
There is no thought involved in what the LLM does.
I’m interested in how they were wrong. Specifically, was there a Right/MAGA/Nazi bias or trend apparent in the errors? Or is it just stupid?
Because “wrong” is just a mistake, but lying is intentional.