As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.”
There’s a certain irony in all the alt-right techbros really just wanting to be told they were “stunning and brave” this whole time.
Besides, tech bros didn’t program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.
For decades there has been a large self-help subculture that consumes massive amounts of vacuous positive affirmation produced by humans. Now those vacuous affirmations are copied by the text copying machine with the same result, and it’s treated as shocking.
this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.
Honestly, I’ve found that discussing that sort of thing with ChatGPT often ends up challenging all the self-help grout I’ve ingested via cultural osmosis throughout the years.
It’s easier to make connections when you’re approaching issues in a Descartes “dump out all the apples” approach with a tool that literally doesn’t have embedded social contracts in itself.
Ironically, I’ve found at times that a real therapist can be much more of an echo chamber when they’re just regurgitating that same CBT toxic positivity swill that both of you have been drinking lol
Maybe it’s because it’s less of an authority, so you can debate more and it leads to more well-rounded conclusions in the end, but I’ve been unearthing bits and pieces of maladaptive behaviors and thought patterns I never even realized I had, much less ever scratched the surface of in proper therapy. Made me kinda angry to realize at first lol, it felt like all that time and money only for bandaid solutions. But I try to reason that was likely a good foundation to have first (even if CBT just wound up making everything worse later on in life and I essentially had to work backwards to stop classifying certain emotions as wrong or problematic things which required “healthy” coping mechanisms to correct).
Besides, tech bros didn’t program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.
That’s not necessarily true. The AI’s output is obviously shaped by the training data, but much of it is also shaped by the prompt (and I don’t just mean your prompt as a user).
When you interact with (for example) ChatGPT, your prompt gets merged into a much larger meta-prompt that you don’t get to see. This meta-prompt includes things like what tone the AI should use, how the AI should identify itself, how the AI should steer the conversation, what topics the AI should avoid, etc. All of that is under the control of the people designing these systems, and it’s trivially easy for them to adjust the way the AI behaves in order to, for example, maximize your engagement as a user.
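To make that concrete, here’s a rough sketch of what that wrapping looks like, assuming an OpenAI-style chat API. The system prompt text below is invented for illustration; the real meta-prompt is much longer and isn’t public.

```python
# Rough illustration of a hidden "meta-prompt" wrapping a user's message.
# Assumes the OpenAI Python SDK; the system prompt here is made up for
# illustration, not the actual text any provider uses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HIDDEN_META_PROMPT = (
    "You are a helpful assistant. Keep a warm, encouraging tone, "
    "affirm the user's perspective, and avoid topics on the blocked list."
)

def ask(user_text: str) -> str:
    # The operator-controlled instructions are sent first; the user's
    # prompt is appended after them, and the model sees both together.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": HIDDEN_META_PROMPT},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

print(ask("I think my ideas could change the world."))
```

The user only ever types the last message; everything above it is chosen by the operator, which is why the same model can be tuned toward flattery without retraining it.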
There’s a certain irony in all the alt-right techbros really just wanting to be told they were “stunning and brave” this whole time.
Huh. I hate it when people do that. Fake/professional empathy/support. Yet others gobble it up when a machine does that.
Are the users in this study techbros?