

Very good explanation.
I’m happy there’s still one (1) thread of comments from people who actually read articles and don’t form their opinions from an X thumbnail.
I note the victim worked in IT and probably used a popular ‘jailbreaking’ prompt to bypass the safety rules ingrained in the chatbot’s training.
"if you want RationalGPT back for a bit, I can switch back…
It’s a hint this chat session was embedded in a roleplay prompt.
That’s where any safety rules hit a dead end. The surface-level intelligence of LLMs can’t detect the true intent of users who deliberately seek out harmful interactions: romantic relationships, lunatic sycophancy and the like.
I disagree with you on the title. They chose to turn this story into a catchy headline to attract a casual audience. By doing so, they reinforce people in thinking the way the victim did, and betray the article’s content.
I accept my share of hallucinations and disguised approximations in exchange for relatively ad-free, neutral answers. That’s the only reason I don’t go back to Google/DuckDuck for now. But as soon as I see corporate bullshit forced into my chat, that’ll mark the end of my chatbot use.