The Lutris maintainer has been using AI-generated code for some time now. The maintainer has also removed Claude's co-authorship from the commits, so no one knows which code was AI-generated.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.

sauce 1

sauce 2

  • etherphon@piefed.world · +108/-1 · 15 hours ago

    I don’t get it, why would you take a program (or ANYTHING) you created and let some AI shit all over it. I will never.

    • Serinus@lemmy.world · +48/-3 · 12 hours ago

      Am I allowed to have an unpopular narrative here?

      There are levels of vibe coding, and it’s possible to use AI without vibe coding at all.

      If you’re very targeted in what you’re having the AI do and you carefully review the code, it can be a great tool.

      For example, “make this html grid sortable and add a download button that creates a csv file.” You know exactly what this does, it’s self contained, and it’s something you know can just be copied from stack overflow and applied to your code.

      That works, and works well.

      “Create an app that…” is vibe coded slop.
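A minimal sketch of the kind of targeted, self-contained change described above (all names hypothetical): the CSV half is a pure function that can be reviewed at a glance, and the browser wiring is left as comments since it assumes a DOM.

```typescript
// Hypothetical sketch of "add a download button that creates a csv file".
// toCsv is pure and trivially reviewable; the DOM glue is commented out.

function csvEscape(field: string): string {
  // Quote any field containing a comma, quote, or newline; double embedded quotes.
  return /[",\n]/.test(field) ? `"${field.replace(/"/g, '""')}"` : field;
}

function toCsv(rows: string[][]): string {
  // One line per row, fields comma-separated with RFC-4180-style quoting.
  return rows.map(row => row.map(csvEscape).join(",")).join("\n");
}

// In a browser, a hypothetical download button could use it like this:
// downloadButton.addEventListener("click", () => {
//   const blob = new Blob([toCsv(collectGridRows())], { type: "text/csv" });
//   const a = document.createElement("a");
//   a.href = URL.createObjectURL(blob);
//   a.download = "grid.csv";
//   a.click();
// });
```

Because the reviewable surface is one small pure function plus a few lines of DOM glue, a change like this can be fully understood before it is merged.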

      • raspberriesareyummy@lemmy.world · +27/-2 · 11 hours ago

        For example, “make this html grid sortable and add a download button that creates a csv file.” You know exactly what this does, it’s self contained, and it’s something you know can just be copied from stack overflow and applied to your code.

        Even if this works, you’ll be stealing someone else’s code without authorship attribution for anything that’s a non-trivial algorithm.

        • Jako302@feddit.org · +6/-2 · 4 hours ago

The copyright/licensing issues that come with it, due to the currently unregulated nature of AI, are a completely different issue from the vibe-coded slop allegations.

          • raspberriesareyummy@lemmy.world · +1 · 22 minutes ago

no. it’s one aspect of many. Using slop is ethically wrong AND it produces shitty code with zero innovation while creating technical debt.

      • Kindness is Punk@lemmy.ca · +15/-1 · 12 hours ago

        It can be useful when an experienced programmer knows how to guide it, although you have to be very intentional or you’ll end up wasting your time cleaning up after it.

That being said, I think most people are upset that they’re no longer declaring which parts of the code are AI-assisted.

    • psycotica0@lemmy.ca · +42/-2 · 14 hours ago

I’m going to assume, from the part where they say they were at their lowest, that the option they saw in front of them wasn’t “code with AI or not” but rather “burn out and don’t code, or code with AI”. And they chose to make progress using the crutch rather than stop. That’s my guess.

      • etherphon@piefed.world · +21/-2 · 13 hours ago

Humm, I mean, that happens to every creator. Writer’s block, burnout, etc. I guess it all comes down to what you think is important and what your values are. I usually just walk away and do something else for a while, even a few weeks or months.

        • lime!@feddit.nu · +41 · 13 hours ago

          most writers don’t get growing stacks of bug reports. open source burnout is extremely common, unfortunately.

          • iamthetot@piefed.ca · +7/-1 · 13 hours ago

            Like, I agree with you about open source burnout, but it feels weird to make it a dick measuring contest with writers, as a writer myself.

            • lime!@feddit.nu · +5 · 12 hours ago

              writers are arguably suffering more. not because llms can replace them at all to the degree they can junior programmers, but because the people making the decisions believe they can.

              also, i wasn’t the one who brought it up :P

              • kkj@lemmy.dbzer0.com · +5 · 11 hours ago

                LLMs also aren’t good at replacing junior programmers, but the people in charge believe that they can do that too.

                • lime!@feddit.nu · +2 · 10 hours ago

                  well they are, in that they produce bad code that has to be vetted thoroughly and they don’t know git.

                  • kkj@lemmy.dbzer0.com · +2 · 7 hours ago

It doesn’t take that long to teach a junior dev basic Git, and they can often explain their bad code. Plus, junior devs turn into senior devs, and LLMs don’t.

              • iamthetot@piefed.ca · +1/-1 · 9 hours ago

You didn’t bring it up, but you’re the one who implied it was a contest of who suffers more. Your comment was worded very much in a way that made it sound like they had it worse than writers, when the original commenter was just stating that all creatives experience burnout (not a comparison).

                • lime!@feddit.nu · +2 · 7 hours ago

                  it wasn’t really about suffering more, the point was that it’s more out in the open and more directly connecting with people. i’m sure andy weir had the same issues with the martian since it was written in public.

          • etherphon@piefed.world · +6 · edited · 13 hours ago

Good point there, that sounds like it would be annoying, and I’m sure I would want to fix the bugs as fast as possible too. But then you are using AI and introducing who knows how many new bugs, ones that you will not easily be able to track down since you didn’t write the code, so then you are locked into using AI. Personally I would rather have buggy software; nothing is perfect. Open source developers don’t owe anyone anything, so if people are being assholes about bugs that’s pretty lame.

            • lime!@feddit.nu · +7 · 13 hours ago

              yeah that’s what’s bugging me about all this. “remember the human” is even more important now.

regarding introducing new bugs, both high-profile cases from this past week have been seasoned developers of tools with extensive test suites who claimed to have tested everything thoroughly. when someone with 30 years of experience says they’ve tested something, i tend to trust that judgement. but on the other hand we’ve also seen the cognitive decline that heavy llm usage seems to lead to…

    • yucandu@lemmy.world · +3/-10 · 11 hours ago

      Because you can do a lot more with it, have you ever tried coding? Before AI, if you didn’t know how to do something, it was “Ask a question on Stack Overflow, then get told this question had already been asked/answered, then get linked to a loosely related question”. Now I can ask AI all my random obscure questions.

      I get being cautious around sensitive equipment like banking apps and government databases, but why would you hate LLM-generated code this much?

      • Mniot@programming.dev · +1 · 5 hours ago

        I asked plenty of questions on SO and never had a bad experience. But I put quite a bit of work in. You couldn’t ask “how do i sort a list in JAVA” and get answers, you had to ask “here’s some code I’m writing <working example> and it does <x> but I think it should do <y> because <z> what’s going on?” and people gave some really nice answers. (Or you could put “how do sort list java” into web search and get a fine answer to that; it’s not like SO was the only place to ask low-effort questions.)

        One of the bad things with AI is it’s soooo helpful that when I get questions now it’s like “please create a DNS entry for foo.bar.baz” and they’re asking because the AI got completely stuck on something simple (like making a request to api.github.com) and wandered up and down and eventually decided on some nonsense course of action and the developer has given up on thinking about anything.

      • etherphon@piefed.world · +19/-2 · 11 hours ago

What I don’t get is people’s inability to cope with their own limitations, or find their way out of problems without asking a magic box to do everything for them. Yes, I have done some coding. Asking on Stack Overflow wasn’t even that bad, and eventually you could find an answer to almost anything there if you knew what you were looking for. Paging through programming books looking for answers was relatively a lot more difficult. However, both actually taught you things during the process; you made mistakes, learned, etc. The AI is teaching you nothing; it’s just doing work for you. I don’t respect that. If you use it that’s your business, but it’s not your code and not your product or whatever.

        • yucandu@lemmy.world · +3/-4 · 11 hours ago

What I don’t get is people’s inability to cope with their own limitations, or find their way out of problems without asking a magic box to do everything for them.

          I don’t know who those people are. I coded for 20 years before LLMs, and I coped just fine.

The AI is teaching you nothing; it’s just doing work for you.

          Unless you ask it to explain things to you. Which is often required to fix the things that the AI can’t get right on its own.

If you use it that’s your business, but it’s not your code

          How is it not my code?

          • prole@lemmy.blahaj.zone · +12/-1 · 10 hours ago

            An LLM cannot ever “explain” anything to anyone, because it doesn’t know anything. How are people still trusting anything these fucking things say?

            • Mniot@programming.dev · +2 · 5 hours ago

              Right?? It’s bizarre to me that otherwise-smart-seeming people will think they can write “explain your reasoning” to the AI and it will explain its reasoning.

              Yes, it will write some fluent response that reads as an explanation of its reasoning. But you may not even be talking to the same model that wrote the original text when you get the “explanation”.

            • yucandu@lemmy.world · +2/-1 · 5 hours ago

              Because it’s right more often than google? I swear you AI critics aren’t actually using AI.

              • ilovepiracy@lemmy.dbzer0.com · +1 · 3 hours ago

Agreed. Delusional mindsets stuck in 2023. I’ve never seen more entitled people, piling on FOSS devs over how they use their free time. “We need high quality, human-coded FOSS programs with ZERO AI slop in them!” “Why no, I’ve never contributed to an open source project, nor do I know how to code, why do you ask?”

                Forks exist, get over it.

          • SparroHawc@lemmy.zip · +10/-1 · 10 hours ago

            How is it not my code?

In case you missed it, courts have ruled that works produced by AI cannot have copyright, because they were not made by a human.

            You can make use of AI-generated code, but you didn’t write it. Since you can’t copyright it, it’s not your code - it’s our code, comrade.

            • yucandu@lemmy.world · +2/-2 · 5 hours ago

              In case you missed it, courts have ruled that works produced by AI cannot have copyright, because it was not made by a human.

              Courts have ruled that art that was 100% generated by AI cannot be copyrighted by the AI, because the AI is not a human person.

              The same courts have also ruled that works that were assisted by AI but created by a human can be copyrighted by that human.

              So, can you claim copyright in an AI-generated work in Canada? As of 2025, the safest answer is: only if a human author contributed substantial creative effort to the final work. There needs to be some human “skill and judgment” or creative spark for a work to be protected.

If the AI was just a tool in your hands (for instance, you used it to enhance or assemble content that you guided), then your contributions are protected and you are the author of the overall work. But if an AI truly created the material with you providing little more than a prompt or idea, the law may treat that output as having no human author, and thus no copyright.

              Thankfully real life is far more nuanced than “fuck ai” allows.

      • prole@lemmy.blahaj.zone · +8 · 10 hours ago

        Now I can ask AI all my random obscure questions.

        And get the wrong answer. But you don’t know it’s wrong, because you’re not already an expert on the obscure subject.

        Before AI, yes you had to learn how to do things. Why is that bad?

        • yucandu@lemmy.world · +2/-1 · 5 hours ago

          And get the wrong answer.

          No, it’s right more often than google was.

          If it was the wrong answer, the projects wouldn’t work, now would they?

          Before AI, yes you had to learn how to do things. Why is that bad?

          I’m still learning how to do things, just a lot faster, thanks to this helpful tool. Why is that bad?