The Lutris maintainer has been using AI-generated code for some time now. He also removed Claude's co-authorship from the commits, so no one knows which code was generated by AI.
Anyway, I suspected this “issue” might come up, so I removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.


An LLM cannot ever “explain” anything to anyone, because it doesn’t know anything. How are people still trusting anything these fucking things say?
Right?? It’s bizarre to me that otherwise-smart-seeming people will think they can write “explain your reasoning” to the AI and it will explain its reasoning.
Yes, it will write some fluent response that reads as an explanation of its reasoning. But you may not even be talking to the same model that wrote the original text when you get the “explanation”.
Because it’s right more often than google? I swear you AI critics aren’t actually using AI.
Agreed. Delusional mindsets stuck in 2023. I’ve never seen people more entitled than those piling on FOSS devs over how they use their free time. “We need high quality, human-coded FOSS programs with ZERO AI slop in them!” “Why no, I’ve never contributed to an open source project, nor do I know how to code, why do you ask?”
Forks exist, get over it.