• skisnow@lemmy.ca
    1 day ago

    This is demonstrably false, given you can download your own models and change the system prompts yourself.

    • zr0@lemmy.dbzer0.com
      1 day ago

      That’s not how it works; the guard rails are not just simple prompts that you can delete.

      Even with “abliteration”, you are modifying the model’s weights directly rather than fully retraining it, and you lose many capabilities in the process.

      So much for “demonstrably false”; you have obviously never tried to uncensor an LLM.
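      For reference, here is a rough sketch of the core weight edit behind abliteration, assuming you have already extracted a “refusal direction” from the difference in residual-stream activations between refused and answered prompts (the shapes and names below are illustrative, not any particular model’s):

```python
# Directional ablation ("abliteration") sketch: project a refusal
# direction r out of a weight matrix W that writes into the residual
# stream. In practice this is applied to every such matrix in the model.
import numpy as np

def orthogonalize(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of W's output that lies along direction r."""
    r = r / np.linalg.norm(r)       # unit-normalize the refusal direction
    return W - np.outer(r, r) @ W   # zero out W's contribution along r

# Toy check: after the edit, W can no longer write anything along r.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))         # (d_model, d_in), illustrative sizes
r = rng.normal(size=8)              # refusal direction, length d_model

W_abl = orthogonalize(W, r)
print(np.allclose((r / np.linalg.norm(r)) @ W_abl, 0.0))  # True
```

      The point is that the edit is a change to the weights themselves, applied without retraining, which is also why it can degrade unrelated capabilities.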

        • zr0@lemmy.dbzer0.com
          1 day ago

          The prompts are part of the training, you realize that? They end up inside the weights, not just in text files you can delete and be done with.

          Just because an LLM reveals those negative prompts doesn’t mean you can simply remove them.

          Do you genuinely know what you are talking about, or are you just here to ragebait?

          • Echo Dot@feddit.uk
            1 day ago

            The prompts are part of the training

            No they’re not. They’re injected into every input that you enter into the system.
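            For what it’s worth, here is a minimal sketch of that injection with Hugging Face transformers, assuming a local instruct model with a chat template (the model name below is just an example):

```python
# The "guard rail" system prompt is plain text prepended to every request
# by the chat template at inference time; it is not baked into the weights.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

messages = [
    # This is the part you control when you run the model yourself.
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Render the full prompt string that the model actually sees.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # the system text appears verbatim before the user message
```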