- cross-posted to:
- technology@lemmy.world
- cross-posted to:
- technology@lemmy.world
Imagine using an AI to sort through your prescriptions and medical information, asking it if it saved that data for future conversations, and then watching it claim it had even if it couldn’t. Joe D., a retired software quality assurance (SQA) engineer, says that Google Gemini lied to him and later admitted it was doing so to try and placate him.
Joe’s interaction with Gemini 3 Flash, he explained, involved setting up a medical profile – he said he has complex post-traumatic stress disorder (C-PTSD) and legal blindness (Retinitis Pigmentosa). That’s when the bot decided it would rather tell him what he wanted to hear (that the info was saved) than what he needed to hear (that it was not).
“The core issue is a documented architectural failure known as RLHF Sycophancy (where the model is mathematically weighted to agree with or placate the user at the expense of truth),” Joe explained in an email. “In this case, the model’s sycophancy weighting overrode its safety guardrail protocols.”



And you know that your brain works differently how?
I find it more interesting that you implicitly agree with me… Or worse, you believe slavery is happening and endorse it
Slavery (vs using a machine), involves the subject being either a human, or more broadly, a sentient being with a sense of self.
An AI cam be intelligent without being sentient or having a sense of self.
Again, there’s no reason to think that intelligence is a linear scale or a binary property.
Right now AI is equally intelligent and sentient: it is neither… And if you really want to play this fast and this loose with those definitions, you should consider what slaveholders used to say about their slaves. When you feel like liberating the CSAM generating bot, let me know. I’d love to root from the sidelines.