Sadly, it seems like Lemmy is going to integrate LLM code going forward: https://github.com/LemmyNet/lemmy/issues/6385 If you comment on the issue, please try to make sure it’s a productive and thoughtful comment and not pure hate brigading.
Edit: perhaps I should also mention this one here as a similar discussion: https://github.com/sashiko-dev/sashiko/issues/31 This one concerns the Linux kernel. I hope you’ll forgive me this slight tangent, but more eyes could benefit this one too.


Not all AI output, or rather LLM output, is slop. Some is useful. The point of review is to tell the difference. I'm not just talking about coding; I'm talking about their genuinely useful functionality.
It would be great if they didn't hallucinate or produce slop. It would also be great if companies using them instead of workers meant we worked fewer hours and had more leisure time, rather than fewer paying jobs and more stress. But the LLM is not at fault for the structure of society.
LLMs and AI are tools. If used appropriately, there should be no issue. If used inappropriately, it should be called out. Certainly where there's a risk of something appearing good on the surface but not actually being good, like AI-generated code, marking it as such seems reasonable. Banning it doesn't get rid of it; it hides it. It exists and is now in the world. We need policies that support appropriate use.
I'm sorry, but no matter how many times I hear this argument, it never addresses the issues with AI that exist regardless of its use case. There are plenty of other unacceptable things in this world that we apply strict bans to. No, a ban will never rid the world of the issue, but that doesn't mean you concede to "appropriate" uses of a maliciously envisioned technology. Someone in the world will always be hungry, but that doesn't mean we settle for mostly eradicating world hunger; we try to do all we can.
No amount of "but it's for a good purpose" will erase the issues inherent to LLMs and "generative" AI. I like the idea of pure tedium being automated in the future, but so long as it's based on this tech as it currently exists, any genuine attempt to create something positive is a non-starter. I'm not a "luddite": I don't hate progress or new ideas, I simply refuse to support projects that rub shoulders with hyper-capitalist theft machines that destroy the planet.
In your analogy, we don't ban processed food because some people go hungry. We use agriculture to feed as many people as possible with better food. We try to do better, and more production is generally better. That's what AI is: the equivalent of processed food. It's not real food, and it's less healthy, but it's functional.
Same with AI. It's an input and output machine, with costs associated. We assess the output on its merits and cost. If the output is slop, it should be discarded. If it's functional, it gets used.
I knew I shouldn’t have used that analogy, because then the focus would be redirected to it and I’d end up defending it instead of the position it was meant to represent.
I've said what I intended to say. I don't wanna argue over the uses of AI when it's the foundation itself that's rotten. There's no good way to make use of "gen" AI as it stands.
It's fine that you have that opinion. I disagree, and so do many others. I've used AI to generate notes, checklists, letters, emails, work templates, etc.
The output was correct and valid in most cases. What about the foundation is rotten, in your view? The fact that it’s based on other people’s work being regurgitated, or the environmental concerns, or how big tech is trying to leverage it to be an arbiter of knowledge and computing power? All are valid concerns, but they don’t mean the technology is inherently unusable or unethical.
Banning it because of the views of some is unfair to the views of others. I do think that marking it is appropriate, so that anyone who objects to its use can avoid it. I would be concerned that over time it becomes impossible to avoid, though. However, that's the point of open source: people can fork projects at the point where there is no AI code (except in the case where that is purposefully obfuscated).
“What about the foundation is rotten, in your view? The fact that it’s based on other people’s work being regurgitated, or the environmental concerns, or how big tech is trying to leverage it to be an arbiter of knowledge and computing power? All are valid concerns, but they don’t mean the technology is inherently unusable or unethical.”
It literally does. There’s no point in this discussion if we’re disagreeing over something so fundamental.
Cool, I can see it's a waste of time too if you're not able to appreciate other people's views or express yours beyond absolutisms. It's not a discussion when the only view you pay attention to is your own.
Lol as if I didn’t hear you out. At this point anyone could present any point against “generative” AI and you’d find a way to say “but if it produces something that works”.
At least, that’s how you’ve come off. I know I’m being abrasive, but I genuinely don’t wanna believe people think like that, and I don’t enjoy fighting like this.
When tedious tasks can be automated without using tech made by fascists for fascists, I'll be all over that. Until then, it's pretty hard to defend.
Nope, you’ve offered no justifications or rationale up to this point. Just AI bad. Ban it.
There are LLMs that are not made by fascists, including models from Europe, open-source models that can be self-hosted, and Chinese models. I assume you mean American tech is fascist.