I think he means that it's a bit pointless to nitpick little things like this when there are bigger and more severe problems with AI.
At least that's how I see it. And isn't it a bit bad to use the slop machine to prove the obvious, when it wastes resources?
Though I hope you share this outside this community too, so people elsewhere see it as well. Whether it's pointless or not depends on how much effect it has on the actual LLM hype; I doubt anyone here needs any convincing.
The little things are indicative of larger-scale problems, though. If an LLM gets simple things wrong, what happens with more complex topics like science or medicine, where the operator doesn't understand the full extent of the result?
Well, yeah. LLMs are unreliable all the way. While they do have some uses, trusting them at all is always a mistake.
The problem is that so many people seem to trust them to the point of developing psychosis.
It’s literally called “Fuck AI” though, so you can’t blame people for being confused.