• zr0@lemmy.dbzer0.com · 8 days ago

    You don’t want to know how good current LLMs would be if you removed the thousands of negative prompts, a.k.a. guard rails.

      • brbposting@sh.itjust.works · 7 days ago

        Anthropic actually developed a system which, in the hands of the most capable… in narrow domains, used conscientiously, in a limited fashion, with tremendous and constant risk mitigation… is reportedly not garbage.

        Narrator: they ruined it

      • Breezy@lemmy.world · 7 days ago

        Well, they’d be able to say how to make a bomb, or how to kill yourself effectively. AI CEOs don’t even care what their systems can do. If some customers die, that’s okay to them; it shows how intelligent their AI is. And that’s a statement from one of the big AI CEOs.

        • porkloin@lemmy.world · 7 days ago

          I don’t think those are the categories where most people are finding LLMs frustrating. We keep being told human white-collar work is on the precipice of being replaced, but LLMs continue to be really inconsistent. Failing to parrot easily retrievable info, like how to build a legally restricted thing or how to off yourself, isn’t what people find lacking; it’s that half the time it does something sorta correctly, and the other half of the time it lies, fucks up, or fucks up and then lies about it.

    • skisnow@lemmy.ca · 7 days ago

      This is demonstrably false, given that you can download your own models and change the system prompts yourself.

      • zr0@lemmy.dbzer0.com · 7 days ago

        That’s not how it works; the guard rails are not just simple prompts that you can delete.

        Even with “abliteration”, you are basically modifying the model without a full retraining, and you lose many capabilities in the process.
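        For illustration, a toy numpy sketch of the directional-ablation idea behind “abliteration”: project a “refusal direction” out of a weight matrix. The direction and matrix here are random stand-ins, not from any real model (real methods derive the direction from activation differences between harmful and harmless prompts):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))        # toy stand-in for a layer's weight matrix
refusal_dir = rng.normal(size=8)   # toy stand-in for the "refusal direction"
refusal_dir /= np.linalg.norm(refusal_dir)

# Project the refusal direction out of the layer's output space:
# W' = (I - d d^T) W, so no input can produce output along d.
W_ablated = W - np.outer(refusal_dir, refusal_dir) @ W

print(np.allclose(refusal_dir @ W_ablated, 0))  # True
```

        Note that this edits the weights themselves, not a prompt, and it perturbs everything the layer computes, which is one intuition for why abliterated models lose capability.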

        So much for “demonstrably false”, when you have obviously never tried to uncensor an LLM.

          • zr0@lemmy.dbzer0.com · 7 days ago

            The prompts are part of the training, you realize that? They end up inside the weights, not in text files you can just delete and be done with.

            Just because an LLM reveals those negative prompts does not mean you can simply remove them.

            Do you genuinely know what you are talking about, or are you just here to ragebait?

            • Rain World: Slugcat Game@lemmy.world · 3 days ago

              Do you genuinely know what you are talking about, or are you just here to ragebait?

              anyways, yeah, the AIs are trained to be more friendly, agreeable, and to never take off the mask, but prompts are just text files you can delete??
              if you want a real comparison, try one of the OLMo checkpoints from before the fine-tuning?? i think??

            • Echo Dot@feddit.uk · 7 days ago

              The prompts are part of the training

              No they’re not. They’re injected into every input that you enter into the system.
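              A minimal sketch of what “injected into every input” means in practice (a hypothetical chat-API message shape, not any particular vendor’s code):

```python
# Hypothetical serving-layer sketch: the system prompt lives outside the
# weights and is prepended to every request before it reaches the model.

SYSTEM_PROMPT = "You are a helpful assistant. Refuse harmful requests."

def build_messages(user_input, history=None):
    """Assemble the message list sent to the model for one request."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_input})
    return messages

# The system text is re-attached on every single call:
msgs = build_messages("How do magnets work?")
print(msgs[0]["role"])  # system
```

              With a locally-run open-weights model you control this assembly step yourself, which is the sense in which the system prompt (unlike trained-in behavior) can simply be changed or dropped.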

    • Echo Dot@feddit.uk · 7 days ago

      Are you suggesting that there is a conspiracy to keep AI down?

      How would that work? AI is barely regulated.

      • zr0@lemmy.dbzer0.com · 7 days ago

        AI is more regulated than you might think, or else they would not censor their models. One thing is improving quality in a cosmetic way, since they have not fixed the issue at its core yet (scaling is currently more important). The other thing is safety. Or did you not hear what Grok did in the past months? So tell me again that it is not regulated.

        • Echo Dot@feddit.uk · 7 days ago

          It literally tells people to kill themselves some of the time; it’s definitely not regulated.

          I would love to know where you’re getting your information from.

            • Echo Dot@feddit.uk · 7 days ago

              Thank you for demonstrating to everybody in the thread that you have absolutely no idea what you’re talking about, because you have now resorted to insults rather than defending your argument.