In every other case of AI bots doing this, the bot affirms whatever the person says to it. So if they say something a little weird, the AI confirms it and feeds it further. These bots are essentially designed to keep the person talking, which makes them sycophantic by design.
I just tried this with ChatGPT three days ago, and there's a chance they have tried to make it slightly less sycophantic.
I was essentially trying to get it to tell me I was the smartest baby born in a given year, like that YouTuber. Different example, but it was very resistant to agreeing that I, or my idea, was unique or exceptional.
Hope this reflects a deliberate direction and not random chance, A/B testing, etc.
Most LLM chatbots don't push back when they should. In situations like these, at large scale, even a 5 percent failure rate is abysmal, let alone 55 percent.
Or you just really, really are not the smartest baby.