Since we first got easy access to various LLMs, I’ve been doing the opposite: asking obscure questions I already know the answer to, trying to get a better sense of what various models are really (not) capable of and what data they are (not) trained on. But it seems you’re right and I’m in a minority. Most people treat the only LLM they know of as an oracle, and don’t seem to understand that it can write with confidence and still be incorrect. I’ve seen countless examples of exactly that, some funnier than others, so to me it has always been very obvious. It’s possible that using GPT-2 (back in the talktotransformer days), which wasn’t configured for chat-style conversation but simply generated a continuation of the user’s input text, actually helped me understand LLMs better and avoid that common naive usage, but I’m not sure how to make it just as clear to everyone else…
What bugs me the most is that I’ve pointed this out to people, in conversations that basically go like this:
Me: You used it for X and caught mistakes - why are you trusting it for Y?
Them: That’s a good point.
And then they keep doing it anyway.
I’m not an AI hater at all - it can be a great way to accelerate work you are capable of doing on your own. But using it for things you don’t understand, and/or not double-checking its work, is insanity.
I tried to use an LLM to write a script for me. It confidently told me I could take a substring in OpenSCAD with the [1:] operator. That works in Python, but it isn’t an OpenSCAD feature.
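For reference, the [1:] it suggested is Python’s slice syntax, roughly like this (a minimal sketch with a made-up string, just to show what got conflated):

    # Python slicing: s[start:stop] returns a copy of that range.
    s = "OpenSCAD"
    print(s[1:])   # "penSCAD" - index 1 to the end
    print(s[1:4])  # "pen"     - indices 1, 2 and 3

    # In OpenSCAD, s[1] gives you a single character ("p"),
    # but there is no built-in [1:] slice for strings.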
Fortunately, programming has a good way of letting you know when the LLM is completely wrong.
The worrying part is that LLMs can sometimes produce code that runs, but has massive security issues you won’t notice if you just run it and don’t analyse it closely.
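A made-up example of the kind of thing I mean (hypothetical code, just to illustrate, not from any actual LLM output): both functions below run and return the right rows for normal input, but the first one is a textbook SQL injection.

    import sqlite3

    def find_user(conn, username):
        # Runs fine and looks fine for normal input...
        # ...but a username of  x' OR '1'='1  dumps every row in the table.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn, username):
        # Parameterized query: the driver handles escaping, so no injection.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()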
They’re basically all just Reddit commenter summarizers, imo, so yeah. Garbage.