• 0 Posts
  • 3 Comments
Joined 2 years ago
Cake day: March 21st, 2024



  • I’m happy there’s still one (1) thread of comments from people who actually read articles and don’t form their opinions from an X thumbnail.

    I note the victim worked in IT and probably used a popular ‘jailbreaking’ prompt to bypass the safety rules ingrained in the chatbot’s training.

    "if you want RationalGPT back for a bit, I can switch back…

    It’s a hint that this chat session was embedded in a roleplay prompt.

    That’s the dead end of any safety rules. The surface-level intelligence of an LLM can’t detect the true intent of users who deliberately seek out harmful interactions: romantic relationships, unhinged sycophancy and the like.

    I disagree with you on the title. They chose to turn this story into a catchy headline to attract casual readers. By doing so, they reinforce people in thinking the way the victim did, and betray the article’s content.