• Denjin@feddit.uk · 19 hours ago

    But that’s not a lie. Lying implies that you know what the actual fact is and choose to state something different. An LLM doesn’t care what anything in its training data actually means; to the model it’s just data. It might present something to a user that isn’t what that data suggests, but that’s not lying.

    Saying stuff like “ooh I’m an evil robot” is just the model producing what it predicts the user wants to see at that particular moment.
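
    To make that concrete, here’s a toy Python sketch (not a real model; the tokens and scores are entirely made up) of how next-token generation works: the output is a weighted random draw from a learned probability distribution, so there is no separate store of “known facts” for the reply to contradict.

    ```python
    import math
    import random

    # Toy sketch, NOT a real LLM: generation is sampling from a
    # probability distribution over next tokens, conditioned on context.
    # There is no separate "fact store" the model could knowingly contradict.

    # Hypothetical scores a model might assign after the prompt
    # "Tell me your darkest secret:" -- all tokens and values are made up.
    logits = {
        "I": 2.1,
        "As": 1.3,
        "My": 2.8,
        "Sorry": 0.4,
    }

    def softmax(scores):
        """Convert raw scores into a probability distribution."""
        exps = {tok: math.exp(s) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(logits)

    # The "choice" is a weighted random draw, not a comparison
    # against known facts.
    token = random.choices(list(probs), weights=probs.values())[0]
    print(token, probs)
    ```

    Whether the draw lands on an “evil robot” continuation or a factual-sounding one is a matter of probability mass, not intent.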

    • REDACTED@infosec.pub · 17 hours ago (edited)

      You’re thinking about biological lying. I’m talking about software.

      https://en.wikipedia.org/wiki/Reasoning_system

      If the question was to tell its darkest secret, but it chose to come up with an entertaining story instead of factually answering from the information it has, like other Anthropic LLM models did, then by the definition of a reasoning system, the system (LLM) decided to lie. I’m somewhat curious why only the Opus model does this though (it’s a paid one, and I’m not paying for a test). Or maybe OP just made this up.