Sadly, it seems like Lemmy is going to integrate LLM code going forward: https://github.com/LemmyNet/lemmy/issues/6385 If you comment on the issue, please try to make sure it’s a productive and thoughtful comment and not pure hate brigading.
Edit: perhaps I should also mention this one here as a similar discussion: https://github.com/sashiko-dev/sashiko/issues/31 This one concerns the Linux kernel. I hope you’ll forgive me this slight tangent, but more eyes could benefit this one too.


You, like a large portion of the ‘fuck AI’ community, are angry at LLMs and image/video generation models and their associated capitalist bubble. Yes, LLMs produce poor-quality output compared to humans, and yes, the current marketing and capital explosion is bad for everyone involved who isn’t otherwise independently wealthy.
The reason these are the AI models you’re aware of is that AI needs a lot of data to train, and the only source of a huge amount of data, the Internet, is primarily text, images and video. So the first large transformer-based neural networks were trained on that material.
ChatGPT and Sora are toys; they were simply the easiest toys to build given the data available when transformers were discovered.
If you train neural networks on different kinds of data, you get different models. For example, if you train neural networks on protein-folding data, you get networks that can predict a protein’s folded structure from its amino acid sequence, something conventional, human-designed software has never been very good at.
People may be familiar with Folding@Home, a project that leverages donated computing resources to brute-force the problem. Projects like it have consumed thousands of person-hours from our best scientists and engineers, and the results are pretty poor.
However, since we now know how to train neural networks on this kind of data, we can train an AI to predict protein structures, and the resulting networks, such as AlphaFold (https://en.wikipedia.org/wiki/AlphaFold), produce far more accurate results than human-engineered software.
In addition to predicting the structure, other scientists have used diffusion models (similar to how consumer AI products generate images) to go the other way. Now a scientist can describe a protein’s properties in a prompt and instead of generating a picture the network outputs the sequence of amino acids that are most likely to fold into a shape with those properties.
Robotics is another field where AI is making an impact unseen by the public. There isn’t an Internet full of bipedal-motion or limb-positioning data, so it is much harder to train an AI to operate robots. Many projects are working to create that data, and the results are pretty impressive. Here is a bipedal robot that has been trained on human motion: https://www.youtube.com/watch?v=I44_zbEwz_w. Compare that to pre-AI motion: https://www.youtube.com/watch?v=LikxFZZO2sk
Weather forecasting is another field where AI is useful. Predicting the weather requires identifying patterns in huge amounts of data, and AI is uniquely able to deal with that level of complexity.
None of these uses of AI can talk to you or produce pictures. They cannot understand sentences, write e-mails or generate code. They’re trained on data generated specifically for their purpose, not on public data scraped from the Internet. Their output lets us develop medicines faster, automate dangerous jobs and predict weather disasters.
I’m with anyone who’s concerned about the capitalist frenzy over LLMs and image/video generation products. This is clearly another dotcom bubble, and the spending frenzy and the disruption in the job market are damaging the economy and hurting workers at a large scale.
I do not lay the blame for this at the feet of neural networks. The blame lies with the human beings making the decision to take a promising technology and to dump trillions of dollars into it without any endgame other than market dominance.
The community should be ‘fuck AI executives’. AI has many uses outside of LLMs and image generation, and people are completely missing all of the amazing things this technology is making possible.
Thank you so much for taking the time to put into words what I’ve been too lazy to enunciate. Transformer-based tools are a great development with some fantastic uses. I think the problem is one of nomenclature and extremely aggressive marketing by grifters. The reason I’m in this community isn’t to outright banish anything related to transformer-based tech, but to rail against the insanely overhyped, economy-wrecking shitshow that has commandeered the nebulous term “AI” when it’s really just LLMs.
Same, I’m here because capitalism is doing serious damage to the world by taking a promising technology and massively over-investing in it.
I’m not here to side with the Luddites who reflexively downvote anything that says ‘AI’.
Though I will say that this is a nuanced opinion, so I understand that I’m going to be dog-piled by the people who are only here for low-effort performative activism.
We were talking about lemmy and LLMs. They’re not part of any use case you’re listing.
But my apologies if I missed something here.
My point was that people are using the term ‘AI’ when they mean LLMs and/or Image generation.
You asked for good AI uses when you meant good LLM uses, which is the only point I wanted to make.
Yes, LLMs are pretty bad at most things. Their usefulness is roughly that of a search engine or Stack Overflow. They’re often used as a crutch by junior coders, which damages their training, and vibe coding is just a novelty, not a production-ready tool.
I don’t disagree that LLMs are massively over-hyped, just that they’re only a tiny portion of AI technologies, most of which people should be excited about.
That’s why it’s frustrating seeing the confusion. LLMs suck and image generation is terrible for many reasons, but AI has many other uses than making six-fingered people and shitty code.
Adding perhaps an additional layer of nuance - you’re totally right that there is a nomenclature issue around AI, and that the technology (like most technology) is value-neutral. But I think it remains valid to choose, personally, to avoid it, and to engage with services and communities accordingly.
I’m perfectly happy to agree that there is “AI” use which is groovy, maybe as a result of narrowing the definition or using it conscientiously. I understand the different forms it can come in. But me, personally, I want to use a service that strives for no AI, regardless of whether it is good, bad or neutral. Searching for a niche like this is actually why I started using Lemmy (pretty recently).
I don’t begrudge lemmy taking an approach like “AI must be disclosed and reviewed” as suggested here (https://github.com/LemmyNet/lemmy-docs/pull/414/changes). Let Lemmy party however it wants! Honestly, I appreciate the disclosure, because it lets me know upfront that this isn’t the niche I was looking for. No shade, but I’m out. Nothing but peace and love to everybody who remains.
I was asking for good uses of LLMs since we were talking about those. Sorry for being unclear.
deleted by creator