This belongs alongside articles with titles like “I hit myself in the nuts every day for a month. It hurt and this happened” or “I tried using condiments as clothing for a month. I now have a rash and/or chemical burns, and this AMAZING thing happened”.
Either of those would be more surprising than this.
I used AI chatbots as a source of news for a month, and they were unreliable and erroneous
blink blink
Oh, I’m sorry. Were you expecting me to be surprised? Was I supposed to act shocked when you said that?
Ok, ok. Hold on. Let me get my shocked face ready…
shocked pikachu
“I used a hammer as a saw for a month and found that it was too dull to get the job done”. That’s what this sounds like. Nobody needed to use AI chatbots as a news source to know that they’re unreliable. The people who do this already know and don’t care. This article isn’t gonna change their minds. They like it.
The AI’s built-in data will be months out of date, and even if it bothers to grab the latest headlines, it can and will cherry-pick depending on how it’s been programmed for bias. Grok, for example, would probably tell you the world is ending because of “the left”.
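For what it’s worth, that comment is pointing at two real failure points: the model’s frozen training data (its knowledge cutoff) and the retrieval layer the operator bolts on top, which decides what headlines the model even sees. Below is a minimal Python sketch of that kind of pipeline; the function names, sources, and filter rule are all hypothetical, just to show where staleness and cherry-picking can get wired in.

```python
# Hypothetical sketch of how a "news chatbot" is typically wired together.
# None of these names come from a real product; they only mark the two places
# staleness and bias enter: the frozen model and the operator-controlled filter.

from datetime import date

TRAINING_CUTOFF = date(2023, 12, 1)   # whenever the model's weights were frozen

# Whoever deploys the bot decides which outlets it is allowed to "see".
ALLOWED_SOURCES = {"example-wire.com", "some-partisan-blog.net"}

SYSTEM_PROMPT = (
    "You are a helpful news assistant. "
    "Summarize the provided headlines for the user."
)


def fetch_headlines() -> list[dict]:
    """Stand-in for a real news/search API call."""
    return [
        {"source": "example-wire.com", "title": "Markets flat ahead of jobs report"},
        {"source": "other-outlet.org", "title": "Major policy bill passes committee"},
    ]


def build_prompt(question: str) -> str:
    # 1) Staleness: anything after TRAINING_CUTOFF only exists for the model
    #    if the retrieval step below actually supplies it.
    # 2) Bias: this filter belongs to the operator, not the user asking the question.
    headlines = [h for h in fetch_headlines() if h["source"] in ALLOWED_SOURCES]
    context = "\n".join(f"- [{h['source']}] {h['title']}" for h in headlines)
    return f"{SYSTEM_PROMPT}\n\nHeadlines:\n{context}\n\nUser question: {question}"


if __name__ == "__main__":
    # The second headline silently disappears before the model ever sees it.
    print(build_prompt("What's in the news today?"))
```

The point of the sketch: the user only ever sees a summary of whatever survived that filter, which is why “it grabbed the latest headlines” doesn’t mean “it gave you the news”.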
I’m interested in how they were wrong. Specifically, was there a Right/MAGA/Nazi bias or trend apparent in the errors? Or was it just stupidity?
Because “wrong” is just a mistake, but lying is intentional.
What was the “new research”? Actually using the fucking things for five minutes?
who would’ve thought
Can’t say this surprises me too much.
So this person doesn’t know how AI gets its data.
It’s never up to date with the latest news.