Sadly, it seems like Lemmy is going to integrate LLM code going forward: https://github.com/LemmyNet/lemmy/issues/6385 If you comment on the issue, please try to make sure it’s a productive and thoughtful comment and not pure hate brigading.

Edit: perhaps I should also mention this one here as a similar discussion: https://github.com/sashiko-dev/sashiko/issues/31 This one concerns the Linux kernel. I hope you’ll forgive me the slight tangent, but this one could benefit from more eyes too.

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social · 16 hours ago

    I mean the lead dev is literally agreeing that LLM code shouldn’t be in the project at all as the first reply to the issue. I’m not seeing how it’s headed toward integration from what you’ve linked.

      • ohshit604@sh.itjust.works · 2 hours ago

        Use of so-called Artificial Intelligence (AI) is allowed only if it is explicitly mentioned. Additionally all LLM-generated text or code must be manually reviewed by the author before submission (no vibe coding allowed).

        Linus Torvalds has the best take regarding using “AI” for software development and documentation.

        As I said in private elsewhere, I do not want any kernel development documentation to be some AI statement. We have enough people on both sides of the “sky is falling” and “it’s going to revolutionize software engineering”, I don’t want some kernel development docs to take either stance.

        It’s why I strongly want this to be that “just a tool” statement.

        source

        Using it in a staging environment should be perfectly acceptable: review and adjust its output before introducing it into a production build, and treat it like a tool, not a one-click magic code machine.

        • ell1e@leminal.space (OP) · 9 minutes ago

          That doesn’t take into account the extensively researched plagiarism concerns. It’s not just that LLMs produce low-quality slop; some of us think the GPL won’t work if you can train an LLM on GPL code and then have it spit out GPL snippets un-GPL’ed.

          Some people literally un-GPL projects via AI in one go. While that’s the egregious version, any LLM use seems to risk a similar effect at a smaller scale.

          This isn’t only a legal question, at least not if you think the GPL has societal and moral value.

      • Zetta@mander.xyz · 4 hours ago

        Better stop using the internet. I always say this, but in the next five years, every single piece of software you use is going to have generated code in it. You may not like it, but it’s happening, so sorry.