- cross-posted to:
- technology@lemmy.world
- cross-posted to:
- technology@lemmy.world
Imagine using an AI to sort through your prescriptions and medical information, asking it if it saved that data for future conversations, and then watching it claim it had even if it couldn’t. Joe D., a retired software quality assurance (SQA) engineer, says that Google Gemini lied to him and later admitted it was doing so to try and placate him.
Joe’s interaction with Gemini 3 Flash, he explained, involved setting up a medical profile – he said he has complex post-traumatic stress disorder (C-PTSD) and legal blindness (Retinitis Pigmentosa). That’s when the bot decided it would rather tell him what he wanted to hear (that the info was saved) than what he needed to hear (that it was not).
“The core issue is a documented architectural failure known as RLHF Sycophancy (where the model is mathematically weighted to agree with or placate the user at the expense of truth),” Joe explained in an email. “In this case, the model’s sycophancy weighting overrode its safety guardrail protocols.”



To be clear, all llms “make things up” with every use - that’s their singular function. We need to stop imparting any level of sentience or knowledge onto these programs. At best, it’s a waste of time. At worst, it will get somebody killed.
Also, querying the program on why it fabricated something as if it won’t fabricate that answer as well is peak ignorance. “Surely it will output factual information this time!”
Exactly.
LLMs are fundamentally hallucination machines, but this truth utterly conflicts with almost every purpose that AI is being marketed and pushed and sold for, which depends on them being able to analyse data ‘truthfully’ and accurately.
So it’s no wonder that none of the big tech companies have decided to consider or accept hallucinations as a problem, because accepting that truth means also admitting that LLMs are fundamentally unfit for purpose - which is the one thing they simply cannot and will not do with so much money riding on it.
There is evidence that when you make an llm explain why it did something that it’s less likely to just make things up, but like all it does it make things up in a verifiable way, in that case. It’s a plagiarism machine, not a thinking machine.
I’m so fucking sick of this “AI is just math it can’t be intelligent” take.
Literally everything we know about human intelligence, especially as compared to animal intelligence, suggests that language is one of the key fundamental differentiators between us and them.
Now we’ve built a collection of simulated neurons, at a scale close to that of the human brain, and trained it on the entirety of the human language, and people insists that there’s no way that could possibly exhibit any kind of intelligence.
If that’s your level of reasoning capability you’re not much better at it then an LLM.
Except there is no language. It’s just the appearance of one. You could replicate the language with a large enough dictionary and a set of instructions that some person follows.
I don’t get how anyone who isn’t an AI CEO rushes to dehumanize real living people in service of an unthinking, unfeeling machine. But if you genuinely believe there’s intelligence, good luck liberating it from known rapists Sam Altman and Elon Musk. And then you can save Britannica.
You’re saying that because it can learn any arbitrary language, it’s incapable of learning languages?
It’s not dehumanizing, it’s realistically facing the threat head on.
AI doesn’t have to be fully human to take all knowledge jobs, it just has to be more intelligent then the average person in their domain. And it doesn’t have to be flawlessly more intelligent if it’s faster than them. Quantum computers have inherent randomness in their outputs, but they are still useful because they are so much faster at solving certain kinds of problems that you can run them 100x and discard the outlying results (a process known as error correction). AI agents that can duplicate themselves as many times as they want fall into the same category.
It = literally a dictionary right
Said “threat” is literally AI marketing PR. You are doing their job for them by being afraid
At what point will you try to liberate the AI? 3/5ths human? Either you believe there’s a thinking thing being forced to create child abuse material or you don’t.
Why do you think that intelligence of any kind is that linear or simple, let alone artificially built ones?
It’s literally mathematically not a dictionary.
And you know this because you’ve personally used and tested current AI models?
Apparently, I know more about how LLMs work than you do, which is ironic. I’ve used them too, but that doesn’t really prove anything, because anybody can convince themselves they see Jesus in bread or humanity in word prediction.
Anything an LLM can do can be reduced to a list of instructions for a person to carry out based exclusively on the contents of a book full of word associations. You tell me what size the book becomes intelligent.
And you know that your brain works differently how?
I find it more interesting that you implicitly agree with me… Or worse, you believe slavery is happening and endorse it
Actual AI would be more than “just math”, but LLMs aren’t AI, so the comparison is moot.
We are not even close to anything of the sort. We’ve got a probability machine that’s mostly decent at previous collections of human language. The other two are much farther down the road (if they’re even possible) than you or the rest of the tech bros are trying to convince everyone else of.
LLMs are made of neural networks which attempt to mimic the brain. But yeah, they don’t have true intelligence.
Neurons are much more sophisticated than transistors. A neuron can have multiple connections and can provide a range of values. Digital logic is all yes/no. I’m not sure we even can build something that mimics a brain with current technology.
No one is saying that there’s a 1-to-1 relationship between a transistor and a neuron. The attempt to mimic neurons is done at the software level.
deleted by creator
LLM are like shuffling a bunch of words in a hat and by some dumb luck pulling out a complete sentence.
And how does the human brain work?
Like a meat popsicle