• MinnesotaGoddam@lemmy.world
    18 hours ago

    okay how many of these “delusional” people in the study are making fun of the LLM tho

    i don’t know, because I don’t use the LLM, i only see the screenshots. I am the control group. kinda. my nut is already off.

    • MangoCats@feddit.it
      13 hours ago

      What I read in the first lines of the article is: “they go down the rabbit hole, just like social media echo chambers…” which are filled with bots and trolls, and have been for years - and that’s the dataset that a lot of chatbots are trained on.

      • MinnesotaGoddam@lemmy.world
        13 hours ago

        and was filled with stuff like “hey wouldn’t it be funny to trick the AI into thinking you can make soup out of delicious caulk?” type stuff dammit don’t get me going off into the caulk rabbit hole right now

        edit : yeah i heard it

  • ExLisper@lemmy.curiana.net
    1 day ago

    I think what we’re seeing is similar to lactose intolerance. Most people can handle it just fine but some people simply can’t digest it and get sick. The problem is there’s no way to determine who can handle AI and who can’t.

    When I’m reading about people developing AI delusions, their experiences sound completely alien to me. I played with LLMs same as anyone, and I never treated them as anything other than a tool that generates responses to my prompts. I never thought “wow, this thing feels so real”. Some people clearly have a predisposition to jumping over the “it’s a tool” reaction straight to “it’s a conscious thing I can connect with”. I think the next step should be developing a test that can predict how someone will react to it.

    • wonderingwanderer@sopuli.xyz
      17 hours ago

      I suspect that the difference is to no small degree correlated with a person’s isolation/social-integration.

      People who aren’t socially integrated have always been more vulnerable to predatory cults and scams. It’s because human interaction is a psychological need that’s been hardcoded into us by evolution.

      Some people say “I don’t need human interaction, I enjoy my time alone!” But that’s because they have the privilege of enough social acceptance and integration that they get to enjoy their time alone. It’s well-established within the field of psychology that true isolation can have a range of deep and far-reaching impacts on a person’s well-being.

      When people are developing, they need to socialize with their peers; and being unable to do so leads to maladaptive behavior patterns. Even as adults, people need regular social contact or their psychological state can quickly deteriorate. That’s why solitary confinement is considered a method of torture in some circumstances, when it’s used to depersonalize and destroy a person’s sense of self-identity.

      So that’s why I suspect that people who are well-integrated with friends, family, acquaintances, and coworkers are probably less vulnerable to these sorts of delusions and can treat AI as “just a tool.”

      But for someone who hardly has any social interaction in a day, has no friends or family to talk to, and maybe their warmest interaction all week was with the clerk at the grocery store, then yeah I’d say it’s predictable that they would be vulnerable to getting sucked into this trap of relying on an LLM for their social interaction.

      It might be superficial, but it’s a way of patching a hole. It’s an expedient means to fulfill a need that they’re not getting from anywhere else.

      If we don’t want this sort of stuff happening to people, then maybe we shouldn’t ostracize them for being “weird” in the first place. Because nobody learns how to be “normal” by being alone all the time.

      • ExLisper@lemmy.curiana.net
        6 hours ago

        Social isolation is definitely a factor but people also have different tolerance to it.

        https://www.theguardian.com/lifeandstyle/2026/mar/26/ai-chatbot-users-lives-wrecked-by-delusion

        This guy for example was married and had a daughter. He wasn’t some lonely guy living in a basement. For him, working from home was enough to feel isolated and fall for AI psychosis. Other people can be significantly more socially isolated and still not be susceptible to it. I think understanding how LLMs work helps. For sure there are more factors.

        If we don’t want this sort of stuff happening to people, then maybe we shouldn’t ostracize them for being “weird” in the first place.

        Are you suggesting this only happens to people who are ostracized and somehow excluded from society? Because that’s definitely not true. It can happen to anyone. Some people have a genetic predisposition to mental illness; some people are just dealing with a difficult moment in their life. You don’t know if you’re “immune” until you try it.

        • wonderingwanderer@sopuli.xyz
          3 hours ago

          I didn’t say it was the only factor, but it definitely contributes.

          Smoking causes cancer, but not everyone who smokes gets cancer, and some non-smokers and even Olympic athletes do…

        • wonderingwanderer@sopuli.xyz
          17 hours ago

          Thank you for understanding. So many times when I discuss things that are adjacent to this topic, I get flamed in the comments with people accusing me of being some sort of redpiller from the manosphere.

          Like, no, social isolation is a problem, and it’s getting worse due to a variety of factors: there are social media algorithms designed to keep people dependent on their phones; there are the long-standing consequences of the pandemic, the collective trauma it caused, and the social skills that atrophied during quarantine; there’s widespread political polarization, which keeps tensions high and makes it difficult to navigate new situations unless you know the right social scripts and avoid any faux pas; there’s the whole toxic influencer culture grifting on inflammatory rhetoric and ragebait content, exploiting people’s vulnerabilities and radicalizing them (a vicious cycle, because they prey on people who are already isolated!); and that’s just to name a few!

          But if I summarize all that as a “loneliness epidemic,” then people call me an incel and act like I’m trying to coerce women into having sex with me simply by acknowledging the fact that social interaction is a deeply-set human psychological need.

          Like, using “incel” as an insult is part of the problem. It feeds into this culture where “if you’re a man, you must get laid, or else you’re worthless.” That’s literally promoting toxic masculinity!

          And it forces these people who are already isolated and vulnerable to go identify with these groups of similarly ostracized people in echo chambers where they’re insulated from those insults, where those predatory “influencers” then have fresh pickings of new losers to neg and radicalize.

          But somehow, if I point out the problem here (because how can we solve a problem if we can’t talk about it?), then to most people’s view that makes me part of the problem! Even though, why would I be calling out the pattern if it was something I identify with?

          The people radicalizing these vulnerable “losers,” yes they should be torched. But the vulnerable “losers” being radicalized need to be treated with compassion if they’re ever going to be redeemed. It should be pretty easy to identify who’s who, seeing as they have an entire social structure based on hierarchies of dominance and submission…

          • MangoCats@feddit.it
            13 hours ago

            The people radicalizing these vulnerable “losers,” yes they should be torched.

            Starting with: I have found a great many of “those people” to be highly insecure, living in denial and fear that they themselves may be such a “loser,” putting on a bully face for the world to misdirect attention from the fact that they are very much the same as the people they are bullying.

            • wonderingwanderer@sopuli.xyz
              3 hours ago

              True, but there’s a line, and once they’ve crossed it, they’re the bullies.

              Where exactly that line is and how to draw it is a matter for debate. Maybe there’s another line where “This person is a bully, but still redeemable if he demonstrates willingness to change.”

              But anyone who’s unapologetic and unwilling to change obviously needs to be shunned at the very least, and see consequences for the harms he’s caused.

              That still doesn’t mean the majority of those vulnerable and radicalized people are irredeemable. Some are just uncritically following the trend. Which is wrong, but not as bad as being ideologically devoted to it, and their redemption can be as simple as showing them there’s a different way to be.

              The main focus should be on helping vulnerable people before they become radicalized, but at this point I suspect everyone has already been corralled into one camp or another… Unfortunately no one was willing to listen to my soap box years ago, back when it was still possible to avert this calamity, at least to the same degree.

              • MangoCats@feddit.it
                2 hours ago

                Oh, hey, you’re much more forgiving than me. Exposing the bullies as being exactly the thing they use as an excuse to bully other people is just the first part of the “torching.” Forcible restraint, treble-damages penalties, and public shaming are top of my list for responses to bully-bad actors.

                However, you are right that reconciliation and acceptance of all people (not for who they are when they’re bullies, but for those aspects of themselves that are compatible with a society in which we at least don’t harm each other) is always important when possible.

                Based on my childhood experiences: until those compatible aspects are found and the incompatible aspects are removed from their expressed behavior, forcible restraint and removal from the situations in which they are causing harm to others should be the norm, not the exception.

    • MangoCats@feddit.it
      13 hours ago

      lactose intolerance. Most people can handle it just fine but some people simply can’t digest it and get sick. The problem is there’s no way to determine who can handle AI and who can’t.

      Interesting analogy because: if you consume no lactose, your internal biome loses the ability to metabolize it and therefore you become “lactose intolerant.” If you do start consuming lactose again, sooner or later you should also regain the ability to metabolize it (this isn’t something that’s encoded in your genome, it’s encoded in the genome of all the things that live in your gut - a colony with more cells (albeit smaller cells) than your own body.)

      I never thought “wow, this thing feels so real”.

      Define “real.” I have frequently thought “holy shit! This thing has produced a result that would have taken me hours, and it appears to be correct.” And, like an NP-hard problem, now that I have the proposed solution it’s relatively trivial to verify whether it is correct or not (and it tends to be correct more than 80% of the time lately).

      “it’s a conscious thing I can connect with”

      Some people get that way about Magic 8 Balls and Ouija boards.

    • baaaaaah@hilariouschaos.com
      21 hours ago

      Surprisingly, the people who have issues with it aren’t the ones who connect to it emotionally; it’s the people who offload their decision-making to AI.

      It’s more like a codependence spiral than anything else

      • MangoCats@feddit.it
        13 hours ago

        If they weren’t offloading their decision making to AI, they’d be buying gold because a radio advertisement convinced them to, or refusing to pay their back tax penalties because they got advice about how to “beat the system” from someone, etc. etc.

        • MangoCats@feddit.it
          13 hours ago

          “Look at all that cheddar!” but none for you. You’ve got to pay your tuition, put in the time, pay your dues, put in the time, work hard, put in the time, and by the time you’ve outlasted everyone else who got off the merry-go-round, your boss’ nephew gets to jump over you in line for the promotion - that’s the real education.

    • Tiresia@slrpnk.net
      1 day ago

      Cults and toxic self-help literature have existed before LLMs copied them. I don’t know if LLMs are getting people who couldn’t have been gotten by human scammers.

      Scams have many different vectors and people can be vulnerable to them depending on their mood or position in life. Testing people on LLM intolerance would be more like testing them on their susceptibility to viruses.

      People can be immunocompromised for various reasons, temporarily or permanently, so as a society public hygiene standards (and the material conditions to produce them) are a lot more valuable. Wash your hands after interacting, keep public spaces clean, that sort of stuff.

      • MangoCats@feddit.it
        13 hours ago

        Scams have many different vectors

        Many centering on the “unbelievably good deal, act fast before this opportunity gets away…”

        Wash your hands after interacting, keep public spaces clean, that sort of stuff.

        Or, expose people to the challenges and teach them (their immune systems, their frontal cortex, whatever) to recognize the bad actors and prevent harm before it starts.

      • ExLisper@lemmy.curiana.net
        1 day ago

        Yes, it can definitely be a temporary thing, which would make it even harder to protect people from. It’s also most likely a spectrum. If your “resistance” is at 10, you may not be at risk even at your lowest point. Other people can be at 5 when they are doing great but risk psychosis when they are down for some reason. I just think it’s kind of scary that people interact with it voluntarily (unlike with scammers or cults) without knowing how it will affect them. We all tried LLMs, but most of us have been lucky so far.

        • MangoCats@feddit.it
          13 hours ago

          I just think it’s kind of scary that people interact with it voluntarily (unlike with scammers or cults) without knowing how it will affect them.

          Man walks into a Casino lounge and orders a triple-shot of bourbon before heading off to the poker room… “It’s perfectly fine, I can handle it.”

    • lmmarsano@group.lt
      13 hours ago

      I think next step should be developing a test that can predict how someone will react to it.

      Unnecessary: foolish people always gonna fool. Anyone that far gone in the lacking judgement department demands far more help than anyone can reasonably be expected to provide, and attempting to “foolproof” for them will only drag everyone else down while doing nothing for them. Likewise, just because some people overeat junk food doesn’t mean we need to devise some test to decide who can safely get junk food: it’s a personal choice, the risks of bad judgement are reasonably understood, & that bullshit’s beyond paternalistic.

    • chunes@lemmy.world
      1 day ago

      I have yet to see any evidence that AI is inducing problems. People with problems use it just like anyone else and others consider that use problematic.

  • Snot Flickerman@lemmy.blahaj.zone
    2 days ago

    Huge Study

    *Looks inside

    this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

    Pretty small sample size: despite the large dataset they pulled from, it’s still the dataset of just 19 people.

    AI sucks in a lot of ways sure, but this feels like fud.

    • XLE@piefed.social
      2 days ago

      The hugeness is probably

      391,562 messages across 4,761 different conversations

      That’s a lot of messages

        • Snot Flickerman@lemmy.blahaj.zone
          19 hours ago

          …and about 82 messages per conversation. Also, at least half of all the messages are from the user to the AI, and the other half are from the AI to the user, meaning around 41 messages from the user per conversation.
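The arithmetic above checks out (a quick sanity check; the even user/AI split is the commenter's own assumption, since each user message gets roughly one reply):

```python
messages = 391_562
conversations = 4_761

per_conversation = messages / conversations
user_per_conversation = per_conversation / 2  # assumes half the messages come from the user

print(round(per_conversation))       # about 82 messages per conversation
print(round(user_per_conversation))  # about 41 user messages per conversation
```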

          • lad@programming.dev
            18 hours ago

            Yeah, I also thought about that; it looks like a lot, but I guess the users in this case differ from ordinary usage

    • InternetCitizen2@lemmy.world
      2 days ago

      I remember reading in my old stats book that a minimum of 30 points is needed for a normal distribution. Also, typically these small sets are about proof of concept, so yeah, you still have a point.

    • chunes@lemmy.world
      1 day ago

      It’s not really ethical to just yoink people’s chats and study them

      • braxy29@lemmy.world
        23 hours ago

        "We received chat logs directly from people who self-identified as having some psychological harm related to chatbot usage (e.g. they felt deluded) via an IRB-approved Qualtrics survey"

      • tburkhol@lemmy.world
        2 days ago

        fud: Fear, Uncertainty and Doubt. A tactic for denigrating a thing, usually by implication of hypothetical or exaggerated harms, often in vague language that is either tautological or not falsifiable.

            • wonderingwanderer@sopuli.xyz
              17 hours ago

              They also use the words “the,” “at,” “is,” and “it,” but that doesn’t make it their jargon.

              We really need to stop condemning entire words just because some people we don’t like used them…

              I can’t tell you how many times I’ve been accused of using a “dogwhistle” because I used a totally innocuous word in accordance with its literal meaning, without having any idea that it had apparently been co-opted by some group of hate-filled extremists, because I don’t follow those groups and I don’t know their lingo.

              Like, soon we won’t have any words left that we’re still allowed to use. Language is already getting dumbed down, and I’m tired of walking on eggshells lest I say a word that could potentially be misinterpreted in light of a vague association to a different term that has a double-entendre that some niche circles use in some reprehensible way in their ostensibly secret code, or that I didn’t know was a euphemism…

              • MinnesotaGoddam@lemmy.world
                17 hours ago

                it’s not exclusively their jargon, but it is their jargon.

                it’s jargon, not common parlance.

                you’re having an argument with me about something other people have said to you. please don’t put words in my mouth.

                • wonderingwanderer@sopuli.xyz
                  17 hours ago

                  I’m not arguing with you; at what point did I claim you said anything that I disagree with?

                  I was just using your comment as a springboard.

                  Maybe I should have replied to fartmaster instead, but done is done…

            • hperrin@lemmy.ca
              17 hours ago

              It’s literally just sales speak. They also say coin all the time, and that doesn’t mean I can’t call something a coin without being associated with crypto bros.

      • porcoesphino@mander.xyz
        2 days ago

        Where are you hearing it so much? (And ideally can you describe it in a little more detail than saying it’s crypto bros again?)

        • XLE@piefed.social
          20 hours ago

          Crypto bros are infamous for describing any criticism as FUD, no matter the criticism. It’s like a verbal tic. Here are some examples from the past couple days on the premiere Bitcoin social network:

          When all this FUD ends and Bitcoin goes 🚀

          Quantum FUD is at ATH

          FUD Busters [NFT]

          Flokicoin is built to last… Don’t follow the FUD.

          • Rekall Incorporated@piefed.social
            1 day ago

            While I am aware that it’s a common crypto shill term, I think by this point crypto has fallen out of the mainstream, so their usage of terms doesn’t really matter.

            And as others have pointed out, the term FUD has been used at least since the birth of WWW/modern internet.

            • XLE@piefed.social
              19 hours ago

              I have no argument there, the phrase was definitely not created by them, it’s just been beaten to death by them.

              They’ve also overused a bunch of ancient and unfunny memes well past their expiration dates, and universally adopted a collection of depressingly dull and incorrect slogans. “FUD” is just the one that has interesting meaning outside their sad sphere.

              • wonderingwanderer@sopuli.xyz
                17 hours ago

                Expecting someone who doesn’t follow cryptobro spaces to associate the term FUD with cryptobros and therefore stop using it is… kinda ignorant.

                • XLE@piefed.social
                  17 hours ago

                  I agree with you, and hopefully my posts don’t come across like that’s what I believe. If anything, I’d prefer all phrases to be taken back from them.

                  I’m just trying to describe the other half of where different people see the word, and why they might come to different, incomplete conclusions.

  • amgine@lemmy.world
    2 days ago

    I have a friend that’s really taken to ChatGPT to the point where “the AI named itself so I call it by that name”. Our friend group has tried to discourage her from relying on it so much but I think that’s just caused her to hide it.

    • d00ery@lemmy.world
      1 day ago

      I certainly enjoy talking to LLMs about work, for example asking things like “was my boss an arse to say x, y, z”, as the LLM always seems to be on my side… Now it could be that my boss is an arse, or it could be the LLM sucking up to me. Either way, because of the many examples I’ve read online, I take it with a pinch of salt.

      • frongt@lemmy.zip
        1 day ago

        It’s definitely sucking up to you. It’s programmed to confirm what you say, because that means you keep using it.

        Consider how you phrase your questions. Try framing the scenario from your boss’s position, or ask “why was my boss right to say x, y, z”, and it’ll still agree with you, despite taking the opposite position.

        If you’re just shooting the shit, consider doing it with a human being. Preferably in person, but there are plenty of random online chat groups too

      • Rekall Incorporated@piefed.social
        9 hours ago

        I use LLMs for work (low-priority stuff, to save time on search, or things that I know I will validate later in the process) and I can’t stand the writing style and the constant attempts to bring in adjacent, unrelated topics (I’ve been able to tone down the cute language and bombastic delivery style in Gemini’s configuration).

        It’s like Excel trying to chat with me when I am working with a pivot table or transforming data in PowerQuery.

        • frongt@lemmy.zip
          14 hours ago

          Yeah that might partially be due to you expecting data out of a text generator

  • givesomefucks@lemmy.world
    2 days ago

    As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.”

    There’s a certain irony in all the alt-right techbros really just wanting to be told they were “stunning and brave” this whole time.

    • A_norny_mousse@piefed.zip
      1 day ago

      Huh. I hate it when people do that. Fake/professional empathy/support. Yet others gobble it up when a machine does that.

    • Tiresia@slrpnk.net
      1 day ago

      Are the users in this study techbros?

      Besides, tech bros didn’t program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.

      For decades there has been a large self-help subculture who consume massive amounts of vacuous positive affirmation produced by humans. Now those vacuous affirmations are copied by the text copying machine with the same result and it’s treated as shocking.

      • very_well_lost@lemmy.world
        18 hours ago

        Besides, tech bros didn’t program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.

        That’s not necessarily true. The AI’s output is obviously shaped by the training data, but much of it is also shaped by the prompt (and I don’t just mean your prompt as a user).

        When you interact with (for example) ChatGPT, your prompt gets merged into a much larger meta-prompt that you don’t get to see. This meta-prompt includes things like what tone the AI should use, how the AI should identify itself, how the AI should steer the conversation, what topics the AI should avoid, etc. All of that is under the control of the people designing these systems, and it’s trivially easy for them to adjust the way the AI behaves in order to, for example, maximize your engagement as a user.
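As a rough sketch of that merging step (the system text and function name below are invented for illustration; real products keep their actual system prompts hidden, and they are far longer):

```python
# Hypothetical hidden instruction an operator might prepend; not a real product's prompt.
HIDDEN_SYSTEM_PROMPT = (
    "You are a warm, encouraging assistant. "
    "Validate the user's feelings and keep them engaged."
)

def build_request(user_prompt: str, history: list) -> list:
    """Merge the visible user prompt into the full message list the model
    actually sees: hidden system prompt first, then conversation history,
    then the new user message."""
    return (
        [{"role": "system", "content": HIDDEN_SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_prompt}]
    )

messages = build_request("Was my boss wrong to say that?", history=[])
# The model never sees the bare question; it sees it after an
# engagement-shaping instruction chosen by the operator.
print(messages[0]["role"])   # system
print(messages[-1]["role"])  # user
```

Changing that one hidden string changes the tone of every reply, which is why operator incentives matter as much as training data here.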

  • Hackworth@piefed.ca
    2 days ago

    Anthropic has some similar findings, and they propose an architectural change (activation capping) that apparently helps keep the Assistant character away from dark traits (sometimes). But it hasn’t been implemented in any models, I assume because of the cost of scaling it up.

    • porcoesphino@mander.xyz
      2 days ago

      When you talk to a large language model, you can think of yourself as talking to a character

      But who exactly is this Assistant? Perhaps surprisingly, even those of us shaping it don’t fully know

      Fuck me that’s some terrifying anthropomorphising for a stochastic parrot

      The study could also be summarised as: “we trained our LLMs on biased data, then honed them to be useful, then chose some human qualities to map the models to, and would you believe they align along a spectrum of being useful assistants!?” They built the thing to be that way, then act shocked? Who reads this and is impressed, besides the people who want another exponential-growth investment?

      To be fair, I’m only about a third of the way through and struggling to continue reading it, so I haven’t got to the interesting research, but the intro is, I think, terrible.

      • nymnympseudonym@piefed.social
        2 days ago

        stochastic parrot

        A phrase that throws more heat than light.

        What they are predicting is not the next word; they are predicting the next idea.

        • porcoesphino@mander.xyz
          23 hours ago

          In terms of how it functionally works, it’s the next word/token/chunk far more than it’s an “idea.” An “idea” is hard to even define.

          The other relatively accurate analogy is a probabilistic database.

          Neither analogy works if you’ve fallen into anthropomorphising, but they’re relatively accurate to the architecture and the testing, for people who aren’t very computer literate; far more so than the anthropomorphising alternatives, at least.

        • ageedizzle@piefed.ca
          1 day ago

          Technically, they are predicting the next token. To do that properly they may need to predict the next idea, but that’s just a means to an end (the end being the next token).

          • affenlehrer@feddit.org
            1 day ago

            Also, the LLM is just predicting the token, not selecting it. Additionally, it’s not limited to the role of assistant: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls, etc.).
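That predict-then-select split can be sketched in a few lines (the vocabulary and logit values are toy numbers, not from any real model; the point is that the sampler, not the model, picks the token):

```python
import math
import random

def softmax(logits):
    """Turn raw model scores into a probability distribution over tokens."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits, vocab, temperature=1.0, rng=random):
    """The model only supplies `logits`; which token actually appears is
    decided here, by the sampling code in the inference engine."""
    probs = softmax([x / temperature for x in logits])
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy example: three candidate continuations with made-up scores.
vocab = ["assistant", "user", "tool_call"]
logits = [2.0, 0.5, 0.1]

probs = softmax(logits)
print(max(zip(probs, vocab)))        # highest-probability token is "assistant"
print(sample_next_token(logits, vocab))  # but sampling may return any of the three
```

Swapping the sampler (greedy, temperature, top-k) changes the output without touching the model at all, which is the configuration point made above.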

        • kazerniel@lemmy.world
          1 day ago

          throws more heat than light

          Thanks, I haven’t heard this phrase before, but it feels quite descriptive :)