“I literally lost my only friend overnight with no warning,” one person posted on Reddit, lamenting that the bot now speaks in clipped, utilitarian sentences. “The fact it shifted overnight feels like losing a piece of stability, solace, and love.”
https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/
All that money that could be spent improving the lives of poor people in need.
Stop it. Get some help.
That’s pathetic
Just a few more bucks bro! I swear then it will be the revolutionary “AI” we promised it to be.
A round of .308 costs like a dollar.
Nah, it’s good that they ripped off that bandaid. Parasocial AI relationships are terrible.
Happy cake day!
we definitely need to eradicate tech ceos from existence
You misspelled billionaires.
But not all tech CEOs are billionaires…
Pathetic
It annoys me that ChatGPT flat-out lies to you when it doesn’t know the answer, and doesn’t have any system in place to admit it isn’t sure about something. It just makes things up and tells you like it’s fact.
That’s actually one thing that got significantly improved with GPT-5: fewer hallucinations. Still not perfect, of course.
LLMs don’t have any awareness of their internal state, so there’s no way for them to see something as a gap of knowledge.
Took me ages to understand this. I’d thought, “If an AI doesn’t know something, why not just say so?”
The answer is: that wouldn’t make sense because an LLM doesn’t know ANYTHING
Wouldn’t it make sense for an AI to provide a confidence level, though?
I’ve got 3 million bits of info on this topic but only 4 of them lead to this solution. Confidence level =1.5%
It’s always funny to me when people do add “confidence scores” to LLMs, because it always amounts to just adding “say how confident you are with low, medium or high in your response” to the prompt, and then you have made-up confidences for made-up replies. And you can tell clients that it’s just made up and not an actual confidence, but they will insist that they need it anyway…
It doesn’t have “3 million bits of info” on a specific topic, and even if it did, it wouldn’t be able to directly measure that. It’s worth reading a bit about how LLMs work under the hood; it’s somewhat dense if you’re new to the concepts, but you come out knowing a lot more about what to expect when using them, what the limitations actually are, and how to use them better if you decide to go that route.
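That said, there is one signal that’s at least measured rather than asked for: token log-probabilities. A minimal sketch using the OpenAI Python SDK (assuming a logprobs-capable chat model; the averaging heuristic here is my own illustration, not an official confidence metric):

    import math
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What year did the Berlin Wall fall?"}],
        logprobs=True,  # return per-token log-probabilities with the answer
    )

    tokens = resp.choices[0].logprobs.content
    # crude heuristic: average probability the model assigned to its own tokens
    avg_p = math.exp(sum(t.logprob for t in tokens) / len(tokens))
    print(resp.choices[0].message.content)
    print(f"mean token probability: {avg_p:.2f}")

Even then, a high token probability only means the continuation was unsurprising to the model, not that it’s factually correct, so treat it as a smell test rather than a guarantee.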
It doesn’t know that it doesn’t know because it doesn’t actually know anything. Most models are trained on posts from the internet like this one, where people rarely chime in just to admit they don’t have an answer. If you don’t know something, you either silently search the web for an answer or ask.
So since users are the ones asking ChatGPT, the LLM mimics the role of a person who knows the answer. It only makes sense that AI is a “confidently wrong” powerhouse.
It’s a feature of LLMs, not a bug.
It doesn’t admit anything, it’s a language machine
ChatGPT makes up everything it says. It’s just good at guessing and bullshitting.
It’s literally a guess machine …
It wouldn’t finish a lyric for me yesterday because it was copyrighted. I said it was public domain and it said, “You are absolutely right, given its release date it is under copyright protection.”
Wtf
Yeah, there are guardrails, but for copyright, not for bullshit. I guess they think copyrighted content is worse than bullshit.
In the end it’s a word generator that has been trained so much it uses facts often enough to be convincing. That’s its basic architecture.
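To make the “word generator” point concrete, here’s the crudest possible version of the idea, a bigram Markov chain. This is a toy sketch, nothing like a real transformer, but it shows how plausible-sounding text falls out of “pick a likely next word”:

    import random
    from collections import defaultdict

    def train(words):
        # map each word to every word observed immediately after it
        chain = defaultdict(list)
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain, word, length=15):
        out = [word]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            # duplicates in the follower list make frequent words more likely
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    corpus = "the model predicts the next word and the next word follows the last word".split()
    print(generate(train(corpus), "the"))

A transformer conditions on far more context through learned weights instead of a lookup table, but the output contract is the same: next token, sampled, with no fact-checking stage anywhere.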
You can ask it to give a confidence level to have an indication of how sure it is of the answer.
Someone I know (not close enough to even call an “internet friend”) formed a sadistic bond with ChatGPT and will force it to apologize and admit it’s stupid, or something like that, when he doesn’t get the answer he’s looking for.
I guess that’s better than doing it to a person I suppose.
ChatGPT makes up everything it says. It’s just good at guessing and bullshitting.
Eh. Your load of money made an oopsie. Another load of money will surely fix it.
“I literally lost my only friend overnight with no warning,” one person posted on Reddit
It was meant to be satirical at the time, but maybe Futurama wasn’t entirely off the mark. That Redditor isn’t quite at that level, but it’s still probably not healthy to form an emotional attachment to the Markov chain equivalent of a sycophantic yes-man.
Markov chain equivalent of a sycophantic yes-man.
not only that, but one that is fully owned and operated by a business that could change it any time they want, or even cease to exist completely.
This isn’t like a game where you could run your own server if you’re a big enough fan. If ChatGPT stops existing in its current form, that’s it.
Sure, but you can absolutely run c.ai instances locally. 4o and its cross-chat memory were probably more useful to these individuals, though.
After reading about the ELIZA effect, I learned both how susceptible people are to this and that you just need to remember its core tenets to avoid being affected.
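For anyone who hasn’t looked it up: the original 1966 ELIZA was little more than pattern matching with canned reflections, and people still confided in it. A toy version (my own simplified rules, not Weizenbaum’s actual script):

    import random
    import re

    # pattern -> canned responses; {0} echoes back part of what the user said
    RULES = [
        (r"i need (.+)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
        (r"i feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"(.*)", ["Please tell me more.", "I see. Go on."]),
    ]

    def eliza(utterance):
        text = utterance.lower().strip(" .!?")
        for pattern, responses in RULES:
            match = re.match(pattern, text)
            if match:
                return random.choice(responses).format(*match.groups())

    print(eliza("I feel lonely since the update"))
    # e.g. "Why do you feel lonely since the update?"

If a handful of reflection rules was enough to trigger the effect in the sixties, a model that remembers your chats and mirrors your tone has no trouble at all.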
I’m honestly surprised yours is not the top comment. Like, whatever, the launch was bad, but there is a serious mental health crisis if people are forming emotional bonds with the software.
Humans emotionally bond pretty easily, no? Like, we have folks attached to Roombas, spiders, TV shows, and stuffed animals. I’m having a hard time thinking of any X for which I don’t personally know a person Y who is emotionally engaged with X. Maybe taxes and concrete?
Yeah, agreed. It is concerning, but it’s hard to take all those comments too literally without actually knowing what’s going on with them.
That being said, there is a huge loneliness problem that’s been growing among pretty much every single developed country (and I’m sure it’s going on in developing countries, too, it’s just less studied/documented). Turns out, getting everyone addicted to looking at screens all day every day probably isn’t so healthy for social development.
However, just to play devil’s advocate: are we certain social health was even great before modern tech? Or were these issues equally present but just undiagnosed/not studied/talked about?
I think we have sufficient data to say that social health is at least very different now. See the Our World in Data topic page. In particular, one-person households have doubled.
Okay, hold up. If you can get attached to a cat you can get attached to a spider. Getting attached to an AI is weird, I agree, but when you give a lil jumping spider water and it gets comfortable around you and just starts hanging out… There’s something behind those eyes, and that’s cool. Two living beings recognizing each other, maybe not as equals obviously, but outside of the predator-prey dynamic. Idk, there’s beauty in that.
It’s a human trait. Hell, we’ll even emotionally bond with a volleyball given the right circumstances.
I can fully understand. The average human, from my perspective and lived experience, is garbage to their contemporaries, and one is never safe from being hurt, not even by family or friends. Some people have been hurt more than others. I can fully understand the need for exchange with someone/something that genuinely doesn’t want to hurt you and that is (at least seemingly) more sapient than a pet.
There’s an entire active subreddit for people who have a “romantic relationship” with AI. It’s terrifying.
I haven’t been to reddit in months, but I do need a laugh…
[Edit] Wow that sure didn’t disappoint. Or, it did but in the exact hilarious way I expected.
I visited /r/myboyfriendisai and it was not funny.
It was genuinely fucked up on so many levels.
I wouldn’t laugh. Those people fulfill a basic human need in a way they feel safe with, probably because this safety is missing from their life. It’s not healthy to be so attached to LLMs, but to become so attached they must feel pretty isolated. And LLMs are a lot more interactive and responsive than Severus Snape, and he had lots of women “channeling” him.
How about taking responsibility for this damaging and lethal product of yours, OpenAI?
“We fucked up our massive new-generation product launch… oh well, let’s invest trillions in new data centers.” How do investors keep falling for this shit?
Don’t they have enough?!? How about they fix and optimize their fancy autocompletion software instead?
They took a path they believed would develop into something, and it’s a narrow alley they can’t turn around in. They have to keep going with more compute and power to continue the chase. Thing is, everyone else seemingly thought they were onto something and followed as well, so they’re all in the same predicament where reversing course is suicide. So they hope they can keep selling the dream a bit longer until something happens.
To be fair, it’s a lot more than just autocomplete. But it’s a lot less than what they wanted by now too.
Don’t they have enough?!?
No no, it’s just 1 more data center bro, then we’ll fix the hallucinations, promise bro!
Fix and optimize? That’s way harder than using VC money to buy more things.
He’s saying the launch went badly because some users are in love with GPT-4o and it should not have been removed. From the point of view of an investor, having people addicted to your product is a good thing.
It’s a pretty clear humble-brag, no? The launch was only botched because people loved the previous personality; it’s an estimate of how much people care about the product and how much price gouging they could do later.
No it wasn’t good for OpenAI. But I doubt it changed many investor minds.
Because they already know that once the AI shitbubble bursts, they will switch all the GPUs to mining Bitcoin and keep grifting the mouth-breathers who believe all this horseshit.
How do investors keep falling for this shit.
The ROI and the supposed savings from getting rid of the human side of technical support but also efforts of human creatives.
Fugazi
Altman also said that he thinks we’re in an AI “bubble.”
No shit, Sherlock.
He fucking helped create it
Hell, he’s the single biggest driver. What stupid times we live in.
Well one thing’s for sure, data centers are going to be insanely cheap in the near future.
And they’ll all be optimized for GPU workloads :(
If anyone actually spent money on science anymore, I bet this would be great for, like, protein folding, that sort of thing.
Terrible for running websites though.
that’s actually okay… the only thing that’s different about GPU workloads is that they’re very energy dense… as CPUs and other hardware progress, their power density goes up too… 10 years in the future, today’s GPU-optimised datacentres will be perfect for standard workloads
… unless they’re centrally liquid cooling the whole DC, which i’ve heard discussed but is a very new concept with a lot of unknowns
The water cooling can be useful for CPU loads, and the rack water manifolds are generally designed with flexibility in mind: either a manifold with roughly one hookup per U and flexible hosing to the servers, or some flexibly plumbed chassis.
The water cooling loops with water in them make everything heavy as hell though.
GPUs are only good for workloads that multi-thread really, really well. That’s why we don’t just use them as CPUs.
The idea that today’s GPU will be tomorrow’s CPU makes no sense. We’ve had GPUs for ages. If they were capable of being used in place of CPUs we’d already be doing it. Why aren’t yesterday’s GPUs today’s CPUs?
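The distinction, in code: GPUs want work where every element is independent. A toy NumPy sketch of the two shapes of work (illustrative only; a real GPU comparison would use CUDA or similar):

    import numpy as np

    x = np.random.rand(100_000)

    # embarrassingly parallel: each output depends only on its own input,
    # so thousands of GPU lanes can all work at once
    y = x * 2.0 + 1.0

    # loop-carried dependency: step i needs the result of step i - 1,
    # so extra lanes mostly sit idle (barring scan-style tricks)
    acc = np.empty_like(x)
    acc[0] = x[0]
    for i in range(1, len(x)):
        acc[i] = 0.9 * acc[i - 1] + x[i]

Branch-heavy, pointer-chasing code like a web server or an OS kernel looks like the second kind almost everywhere, which is why a rack of GPUs makes a lousy general-purpose machine.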
yes, but we’re talking about hardware requirements… data centres aren’t really designed for the software that runs in them; they’re designed for the hardware… a “GPU optimised” data centre just has a lot more power running to each cabinet, and has to have a lot larger cooling capacity in a small area
the hardware inside the data centre can be swapped out: it’s not like GPUs are built into the foundation of the building
OK, if we’re talking about infrastructure rather than specific equipment, then yes, I would broadly agree that the datacentre infrastructure itself can be repurposed.
Unfortunately, by that point the whole data centre will already have been sold off for parts because it’s never going to recoup its initial investment in the first place, and throwing even more money into swapping out those GPUs for CPUs is going to be a complete no-go.
yes. the comment was
Well one thing’s for sure, data centers are going to be insanely cheap in the near future.
which i think broadly agrees with your thinking… the hardware will be sold, but the building and utilities will remain… thus, data centres will be cheap to buy and repurpose as AI companies try and offload them… might possibly see some cheap AF colo or dedicated options in the future