• Denjin@feddit.uk · +42/−3 · 11 hours ago

The OP went into more detail in the Reddit comments:

    Chatbot isn’t supposed to be making financial decisions. It’s supposed to be answering customer questions between 6pm and 9am when I’m not around.

    It’s worked fine for 6+ months, then this guy spent an hour chatting with it, talked it into showing how good it was at maths and percentages, diverted the conversation to percentage discounts off a theoretical order, then acted impressed by it.

    The chatbot then generated him a completely fake discount code and an offer for 25% off, later rising to 80% off as it tried to impress him.

    • Nalivai@lemmy.world · +18/−5 · 8 hours ago

      Don’t give a flying fuck what you think your bot should do. Your public facing interface gives a discount, I take a discount, simple as.

    • ThePantser@sh.itjust.works · +35/−4 · 11 hours ago

      Still sounds like the AI is an idiot and did and said things it shouldn’t. But it still did it, and as a representative of a company it should be held to the same standards as an employee. Otherwise it’s fraud. Nobody hacked the system; the customer was just chatting, the “employee” fucked up, and the owner can take it out of their pay… oh right, it’s a slave made to replace real paid humans.

      • leftzero@lemmy.dbzer0.com · +36/−2 · edited · 7 hours ago

        The “AI” isn’t an idiot.

        It isn’t even intelligence, nor, arguably, artificial (since LLMs are grown, not built).

        It’s just a fancy autocomplete engine simulating a conversation based on statistical information about language, but without any trace of comprehension of the words and sentences it’s producing.

        It’s working as correctly as it possibly can; the business was simply scammed into using a tool (a toy, really) that by definition can’t be suited for the job they intended it to do.
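A toy illustration of that “fancy autocomplete” point: the sketch below (made-up corpus, hypothetical `complete` helper, nothing like a real transformer) builds bigram statistics and greedily appends the statistically most frequent next word. It produces fluent-looking continuations with zero comprehension of discounts, orders, or anything else; it only knows which word tended to follow which.

```python
from collections import Counter, defaultdict

# Toy "language model": bigram counts from a tiny made-up corpus.
corpus = "the customer asked for a discount and the bot offered a discount code".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(word, length=5):
    """Greedily append the most frequent next word; stop if nothing follows."""
    out = [word]
    for _ in range(length):
        followers = bigrams[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # fluent-sounding, comprehension-free continuation
```

A real LLM does the same kind of next-token prediction over vastly richer statistics, which is exactly why it can confidently emit a plausible-looking discount code that means nothing.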

          • leftzero@lemmy.dbzer0.com · +1 · 3 minutes ago

            I mean that no one designed or built the model, it’s just a compressed representation of (the shape of) the data it’s trained on.

            If by artificial we mean something made by people, we could argue that this isn’t (though by the same logic neither would a zip file).

          • leftzero@lemmy.dbzer0.com · +5 · 7 hours ago

            Yeah, quite.

            Though, to be fair, the scammers and the LLMs themselves are pretty good at convincing their victims that the damn things are actually smart, to the point that some otherwise quite intelligent people have fallen for it.

            And come to think of it, given that most investors have fallen hook, line, and sinker for the scam, if you’re publicly traded, catering to their idiotic whims and writing off the losses caused by the LLM might actually be more profitable, provided most of your customers aren’t smart enough to take advantage of your silliness…

      • ricecake@sh.itjust.works · +14 · 9 hours ago

        Eh, there’s the legal concept of someone being an agent of the company. This one wasn’t typically expected to take orders, nor was it tied into the order system, it seems.

        In the cases where the deal had to be honored, the bot had the ability to actually generate and place an order, and that was one of the primary things it did. The two cases that come to mind are a car dealership and an airline, where you could use it to actually place a vehicle order or to find and buy flights.
        As agents of the business, if they make a preposterous deal you’re stuck with it.

        A distinction can be made to stores where the person who comes up and offers to help you isn’t an agent of the business. They can use the sales computer to find the price, and they can look for a discount, but they can’t actually adjust the order price without a manager coming over to enter a code and do it.

        In this case it sounds like someone did the equivalent of going to a Best Buy and talking to the person who helps you find the video games, trying to get them to say something discount-code-ish. Once they did, they said they wanted to redeem that coupon and threatened to sue.

        It really hinges on whether it was tied to the ordering system or not.

      • Denjin@feddit.uk · +9/−5 · 10 hours ago

        I don’t disagree, but this is an issue of when and where it’s appropriate to use an LLM to interact with customers. If you present an LLM to the public, it will get manipulated by people who are prepared to put in the effort to get it to do something it shouldn’t.

        This also happens with human employees, but it’s generally harder to do, so it’s less common. This sort of behaviour is called social engineering, and it’s used by fraudsters and scammers to get people to do what they want (typically handing over their bank details), but the principle is the same: you’re manipulating someone (or something, in this case) into doing something they or it shouldn’t.

        Just because we don’t like the fact that the business owner deployed an LLM in a manner they probably shouldn’t have doesn’t mean the customer isn’t the one in the wrong, or that they didn’t void whatever contract they had through their actions. Whether it’s a human or an LLM on the other end of the chat doesn’t actually make any difference.

        • queermunist she/her@lemmy.ml · +8/−1 · 9 hours ago

          We’ll see what the court says, because my intuition is that an LLM making an offer is categorically different from a human doing it. No one was scammed; this would be more like your own poorly designed website giving someone the option to get 80% off if they click the right combination of settings.