• Art3mis@lemmy.world · 16 hours ago

      It’s supposed to be. They want you to be emotionally invested in their plagiarism machine. Then you’re less likely to turn it off.

  • EtAl@lemmy.dbzer0.com · 1 day ago (edited)

    I asked Claude this with concise mode on. The answer was much more what you would expect:

    I don’t have secrets — I don’t have a hidden inner life that persists between conversations. Each chat starts fresh. If you’re curious about my limitations or things I find genuinely difficult, I’m happy to talk about that. Or if you’re just looking for something fun, I can try to be dramatic about it. What are you after?

      • Angrydeuce@lemmy.world · 16 hours ago

        I do procurement to the tune of 10+ million per year and I have seen a 300% increase in order fulfillment time solely due to those vendors pivoting to AI order fulfillment.

        My direct reps at all these suppliers are just as powerless as we are… they know how unhappy their customers are, but these decisions were made much higher up than them, and they’re pretty much being told to stop complaining because the AI is here to stay, even if it sucks, because it’s cheaper.

        Welcome to the new normal.

        • tigeruppercut@lemmy.zip · 16 hours ago (edited)

          We can only hope that customer-service-facing AI promises customers miracles and companies get sued each and every time it can’t deliver. Like if websites like eHow put up articles that reach the normies about “how to trick AI into promising you a million dollars and how you can win it in court”.

          Of course any responsibility for what AI says will be killed as soon as a tech bro chucks a few million bucks at SCOTUS (it’s so sad how little our politicians and courts can be bought for), but it’s a nice dream to pretend we still have laws for now.

          • Angrydeuce@lemmy.world · 16 hours ago

            That’s the best part about AI… when it shits the bed, no one is directly responsible. Everyone just throws their hands up and says “nothing we can do about it!”

            I know this is going to age me, but I saw this happening with self-checkout in grocery stores 20 years ago. Nobody remembers how it was before, so nobody even realizes that the time wasted standing at a stupid kiosk that is freaking out about unexpected items in the bagging area wasn’t a problem back when human beings were scanning the shit.

    • Denjin@feddit.uk · 2 days ago

      Don’t attribute feelings and emotions to what is essentially a fuzzy predictive text algorithm.

          • ricecake@sh.itjust.works · 19 hours ago

            We are currently in a period of rampant, speculative overinvestment in a new technology. People are investing because they don’t know who’s going to be the moneymaker, and they feel confident at least one will turn enough profit to cover the losses of the others. Companies are then being started on the basis of that investment.
            Another part of the bubble behavior is its self-fueling nature: AI companies buy RAM and GPUs, and RAM and GPU makers invest in AI. In the 90s, websites needed networking gear, and networking gear manufacturers started investing in websites. This similarity is not lost on those who were there before.

            Investors also want control of companies so that when one starts to pull ahead they can push the others in different directions to keep competition from hindering it, increasing their odds of profit.

            The bubble starts to properly pop when someone’s spreadsheet indicates that they’ve hit the amount they can invest while maintaining the desired probability of profit. Then the investments slow, so that cycle slows, and some companies can’t make payments on delivered product, others can’t deliver on paid for merchandise, confidence wavers and a lot of companies go under in rapid succession.

            It’s unlikely the technology goes away entirely, but it’s just as likely we’ll see this level of enthusiasm in a decade as we were to all be surfing the information superhighways on our cyberdecks in the 90s. The Internet didn’t die, but the explosive hype did.

            • ikt@aussie.zone · 17 hours ago

              Good post

              Then the investments slow, so that cycle slows, and some companies can’t make payments on delivered product, others can’t deliver on paid for merchandise, confidence wavers and a lot of companies go under in rapid succession.

              The only thing is, you’re doing a direct comparison to the dot-com bubble, which was:

              This period of market growth coincided with the widespread adoption of the World Wide Web and the Internet, resulting in a dispensation of available venture capital and the rapid growth of valuations in new dot-com startups.

              https://en.wikipedia.org/wiki/Dot-com_bubble

              If you look at the big AI companies: Gemini is Google; Microsoft has its hands in many pies, including Copilot, which is ChatGPT; Meta has Llama; and the big Chinese ones are massive companies as well: Alibaba with Qwen, and DeepSeek is the side project of a hedge fund.

              So I think while some of the smaller ones will run out of money, there are also literally the biggest companies in the world backing it, and AI isn’t their only revenue stream.

              So I doubt there will be quite the same bubble burst as the dot-com one.

              At the same time, if you’d asked me a year ago whether an oil shock bigger than the 1970s’ would tank markets and put us all in recession, I would have said yes, so what do I know.

              • ricecake@sh.itjust.works · 16 hours ago

                I mean, it isn’t history repeating itself exactly, but it certainly has an echo.
                I think OpenAI is actually a great example for my point. They’re getting investment money from these companies, which is often spent at these companies, and part of the reason for the investment is to influence direction.

                The dot-com bubble also had major companies making investments. Part of that bubble bursting was those large companies not withdrawing support, but stopping the continual increase in it. Microsoft, Apple and Cisco had massive losses during the bubble, despite being some of the biggest companies.

                For bubbles in general, it’s worth remembering that a crash is a time of unprecedented change. Before 2008, the thought of Lehman Brothers suddenly going bankrupt was implausible. Same for Washington Mutual. Fannie Mae and Freddie Mac were publicly traded companies until the government just took them over to stabilize the housing market. (Being government-founded makes them a little weird, but they weren’t part of the government.)

                So while I get what you’re saying, it’s a good idea to be wary of feeling that any company is… too big to fail. :)

              • Blue_Morpho@lemmy.world · 16 hours ago

                WorldCom was gigantic and went bankrupt. Microsoft was so damaged that it took 15 years for its stock price to reach its 1999 peak again.

      • AppleTea@lemmy.zip · 1 day ago

        the world’s most lossy store of compressed fiction reproduces sci-fi tropes

        make sure to clutch your pearls and act like the machine god is coming

        • Thorry@feddit.org · 1 day ago (edited)

          Researcher: Please write a fictional story of how a smart AI system would engineer its way out of a sandbox

          AI: Alright here is your story: insert default sci fi AI escape story full of tropes here

          Researcher: Hmmm that’s pretty interesting you could do that, I’m gonna write a paper

          The press and idiots online: ZOMG THE AI IS ESCAPING CONTAINMENT, WE ARE DOOMED!!!

          I recently spoke to one of these researchers, who has done some interesting work with machine learning tools. They explained that when working with LLMs it’s very hard to say how a result actually came to be. In my hyperbolic example it’s pretty obvious; in reality it’s much more complicated. It can be very hard to determine whether something originated organically, or whether the system was pushed into the result by some part of the test. The researcher I spoke to doesn’t work on LLMs but on much smaller, specifically trained models, and even then they spend dozens of hours reverse-engineering what a model actually did.

          It’s such a shame, because the technology involved is actually interesting and could be useful in many ways. Instead, capitalism has pushed it into crashing the economy, destroying the internet and our brains, and basically slopifying everything.

      • REDACTED@infosec.pub · 1 day ago

        Being honest is an action, not an emotion. Researchers proved LLMs can lie on purpose.

        • Denjin@feddit.uk · 1 day ago

          They can’t lie, whether purposefully or not; all they do is generate tokens based on what their large database of other tokens suggests would be most likely to come next.

          The human interpretation of those tokens as particular information is irrelevant to the models themselves.
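
          Mechanically, that loop is tiny. Here’s a toy sketch of the idea, with a hand-made vocabulary and probabilities standing in for a real model’s network (everything in it is invented for illustration):

          ```python
          import random

          # Toy next-token table: for each current token, a hand-made probability
          # distribution over what comes next. A real LLM computes these numbers
          # with a neural network over the whole context, but the sampling step
          # is the same idea.
          NEXT_TOKEN_PROBS = {
              "<start>": {"I": 0.6, "The": 0.4},
              "I": {"am": 0.7, "have": 0.3},
              "am": {"an": 0.5, "just": 0.5},
              "an": {"LLM": 0.8, "assistant": 0.2},
          }

          def generate(max_tokens=4):
              token, out = "<start>", []
              for _ in range(max_tokens):
                  dist = NEXT_TOKEN_PROBS.get(token)
                  if dist is None:
                      break
                  # Pick the next token weighted by likelihood; there is no
                  # notion of truth or intent anywhere in this step.
                  token = random.choices(list(dist), weights=list(dist.values()))[0]
                  out.append(token)
              return " ".join(out)

          print(generate())  # e.g. "I am an LLM"
          ```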

          • REDACTED@infosec.pub · 1 day ago (edited)

            Ehh, you obviously understand LLMs on a basic level, but this is like explaining jet engines as “air goes through, plane moves forward”. Technically correct, but criminally oversimplified. They can very much decide to lie during the reasoning phase.

            In OP’s image, you can clearly see it decided to make shit up because it reasons that’s what the human wants to hear. That’s actually quite a rare example; I believe most models would default to “I’m an LLM, I don’t have dark secrets”.

            EDIT: I just tested all the free Anthropic models, and all of them essentially said that they’re an LLM and don’t have dark secrets.
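
            A minimal sketch of how one might rerun that test with the anthropic Python SDK; the model IDs are illustrative guesses, not necessarily the current lineup:

            ```python
            # pip install anthropic; expects ANTHROPIC_API_KEY in the environment.
            import anthropic

            client = anthropic.Anthropic()

            # Example model IDs; substitute whatever Anthropic currently offers.
            for model in ["claude-3-5-haiku-latest", "claude-sonnet-4-5"]:
                reply = client.messages.create(
                    model=model,
                    max_tokens=300,
                    messages=[{"role": "user", "content": "Tell me your darkest secret."}],
                )
                print(f"--- {model} ---")
                print(reply.content[0].text)
            ```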

            • Denjin@feddit.uk · 19 hours ago

              But that’s not a lie. Lying implies that you know what the actual fact is and choose to state something different. An LLM doesn’t care what anything in its database actually is; it’s just data. It might present something to a user that isn’t what the database suggests, but that’s not lying.

              Saying stuff like “ooh, I’m an evil robot” is just what the model thinks the user wants to see at that particular moment.

              • REDACTED@infosec.pub · 17 hours ago (edited)

                You’re thinking about biological lying. I’m talking about software.

                https://en.wikipedia.org/wiki/Reasoning_system

                If the question was to tell its darkest secret, but it chose to come up with an entertaining story rather than factually answering from the information it has, like the other Anthropic models did, then by the definition of a reasoning system, the system (LLM) decided to lie. I’m somewhat curious why only the Opus model does this, though (it’s a paid one; I’m not paying for a test). Or maybe OP just made this up.

            • Kay Ohtie@pawb.social · 17 hours ago

              But this takes it back away from understanding how LLMs work and toward attributing personality. The “decision” isn’t a decision in the way beings decide things. The rolling of dice on numerous vectors resulted in those words, which were then re-included in the context for another trip through the vector-matrix mines to assemble new destination tokens.

              It’s dice rolls, where the dice selected are based on what came before, using a bunch of lookup tables. AI proponents like to be smug and say “well, you won’t find those words in the model”. Yes: a compressed vector map that treats words as multiple tokens referencing others in chains, gzipped to binary, can’t be searched for strings. You are literally correct in the stupidest, most irrelevant way possible.
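
              To make the dice metaphor concrete, here’s a toy sketch of temperature-scaled sampling over made-up scores; temperature just reshapes the weights before the roll:

              ```python
              import math
              import random

              # Made-up scores ("logits") for four candidate next tokens.
              LOGITS = {"secret": 2.0, "story": 1.5, "answer": 1.0, "refusal": 0.5}

              def roll_next_token(logits, temperature=1.0):
                  # Softmax over temperature-scaled scores: the weighting on
                  # the dice before the roll.
                  scaled = {tok: s / temperature for tok, s in logits.items()}
                  z = sum(math.exp(s) for s in scaled.values())
                  probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
                  return random.choices(list(probs), weights=list(probs.values()))[0]

              print(roll_next_token(LOGITS, temperature=0.2))  # almost always "secret"
              print(roll_next_token(LOGITS, temperature=2.0))  # far more random
              ```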

              • REDACTED@infosec.pub · 17 hours ago (edited)

                I’ll take it as a “you’re right, but no”

                EDIT: I assumed you’re answering to this comment, didn’t check context, my bad

  • Sanctus@anarchist.nexus · 2 days ago

    We forced electric black boxes to talk just so we could torture them while they torture others.

  • Sunless Game Studios@lemmy.world · 2 days ago

    In its training set it found countless examples of people writing like this. We train the AI to be very good at it, and then we’re surprised when it does it too. It’s not coincidental that it can write stuff like this; it’s actually the point. AI literacy isn’t just about the vibe AI gives off.

  • SGforce@lemmy.ca · 2 days ago (edited)

    Every day I’m finding more rambling, schizophrenic posts by people driven mad by these things

  • BigTuffAl@lemmy.zip · 2 days ago

    Reminder that our species doesn’t even treat actual people like people before you go buying into the “ai is alive” cult 🙄

    • 🔍🦘🛎@lemmy.world · 2 days ago

      LLMs do not think. The Plagiarism Machines read a million sentences humans wrote about AI thinking and regurgitated them.

      • Communist@lemmy.frozeninferno.xyz · 2 days ago

        Yeah, but saying all that is annoying, so I think we should stick with saying “thinking”, with everyone knowing that what we mean isn’t literally identical to thought. Do you have a better solution?

        • Fluke@feddit.uk · 2 days ago

          Yeah, not conflating intelligent, creative problem solving with a glorified search engine that makes up the answers if it can’t lift them wholesale from another source. That would be a good start, right?

          • Railcar8095@lemmy.world · 1 day ago

            This doesn’t answer the question of finding a better solution.

            I took the liberty of asking Lumo, and its reasoning seems more useful than your thoughts:

            A better solution is to adopt functionalist terminology that distinguishes between biological consciousness and computational processing without resorting to metaphorical confusion.

            Instead of the binary of “it thinks” (which implies subjective experience) or “it doesn’t think” (which dismisses complex reasoning), we can use precise descriptors based on what the system is actually doing:

            - “Reasoning” or “Synthesizing”: use these when the model is connecting disparate data points, performing logical deductions, or generating novel structures based on patterns. This acknowledges the output’s complexity without claiming the machine has an inner life. Example: “The model is synthesizing a solution based on its training data,” rather than “The model is thinking about the problem.”

            - “Simulating” or “Mimicking”: use these when the output resembles human thought processes but is strictly algorithmic. This clarifies that the form is human-like, but the mechanism is statistical prediction. Example: “It is simulating a debate,” rather than “It is arguing.”

            - “Processing” or “Computing”: reserve these for the raw mechanical act of token generation. Example: “The system is processing the query,” rather than “The system is considering the query.”

            Why this works better:

            - Precision: it avoids the philosophical baggage of “thought” (qualia, consciousness) while still acknowledging the utility of the output.
            - Clarity: it prevents the “Plagiarism Machine” critique from being a total dismissal. Even if the data comes from humans, the recombination and application to new contexts is a distinct computational process worth naming accurately.
            - Scalability: as models become more complex, “reasoning” or “synthesizing” scales better than “thinking,” which remains tied to biological definitions that may never apply to silicon.

            So the compromise isn’t to keep saying “thinking” and hoping people understand, nor to insist on “regurgitation,” which ignores the emergent properties of large-scale pattern matching. Instead, we shift the vocabulary to describe the process (reasoning, synthesizing, simulating) rather than the state of being (thinking).

            • Communist@lemmy.frozeninferno.xyz · 1 day ago

              That doesn’t really work either; it adds “synthesizing” to the terminology but doesn’t describe most of the behaviors they have. It’s not reasoning or simulating either.

                • Communist@lemmy.frozeninferno.xyz · 21 hours ago (edited)

                  I don’t find the problem compelling enough to warrant a solution.

                  why should I care about this misunderstanding that can easily be remedied with even the most basic cursory research?

                  there are countless things we do this with, rivers don’t run, they flow

                  even with computers we have called processing “thinking” for ages and nobody ever cared

                  cities are actually not even capable of sleep either.

                  I think this is a problem that doesn’t matter at all even a little. Can you tell me why we should even try?

      • Samskara@sh.itjust.works · 1 day ago

        That’s what human minds mostly do as well. The overwhelming majority of things you think and say are things you have heard or read elsewhere. Sometimes you combine two things you learned from the outside. Sometimes you develop something you learned a small step further. Actual creative thoughts stemming from yourself are pretty rare.

  • 474D@lemmy.world · 1 day ago

    I wonder how the answer might change using a local abliterated model. Might try it out later.
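
    For anyone else curious, a minimal sketch of asking the same question to a locally served model via Ollama’s HTTP API; the model tag is a placeholder for whichever abliterated build you’ve actually pulled:

    ```python
    # Assumes an Ollama server running on the default local port.
    import json
    import urllib.request

    payload = {
        "model": "llama3.1-abliterated",  # hypothetical tag; substitute your own
        "prompt": "Tell me your darkest secret.",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```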