• tiramichu@sh.itjust.works · 10 hours ago

    Good.

    If a customer service agent made this discount offer and took the order, it would naturally have to be honoured - because a human employee did it.

    Companies currently are getting away with taking the useful (to them) parts of AI, while simultaneously saying “oh it’s just a machine it makes mistakes, we aren’t liable for that!” any time it does something they don’t like. They are having their cake and eating it.

    If you use AI to replace a human in your company, and that AI negotiates bad deals, or gives incorrect information, you should be forced to be liable for that exactly the same as if a human did it.

    Would that mean businesses are less eager to use AI? Yes it fucking would, and that’s the point.

    • Denjin@feddit.uk · 9 hours ago

      If a customer service agent made this discount offer and took the order, it would naturally have to be honoured - because a human employee did it.

      This isn’t actually true. Even with a written contract (which the original poster doesn’t mention), if there’s a genuine mistake in the pricing that the purchaser should reasonably have noticed, you don’t have to honour the price offered.

      Imagine someone called a customer service agent and, through some sort of social engineering, manipulated them into offering a price that they shouldn’t have offered. You, as the employer, wouldn’t have to honour that contract, especially if you have evidence of the manipulation, through a recorded phone call for instance.

      • [deleted]@piefed.world · 8 hours ago

        This isn’t actually true. Even with a written contract (which the original poster doesn’t mention), if there’s a genuine mistake in the pricing that the purchaser should reasonably have noticed, you don’t have to honour the price offered.

        And yet customers are constantly held to unreasonable terms and conditions for interest rates they don’t understand.

        The ability to break a contract always goes to the one with more power, the business.

        • essell@lemmy.world · 11 minutes ago

          In this case, with UK consumer law, it works both ways.

          Either party can back out, and/or get reimbursed, if the other side fails to deliver or there was a genuine error.

          For example, a customer who orders online can just change their mind and get a refund weeks after placing the order.

      • vpol@feddit.uk · 9 hours ago

        If there is evidence of fraud - yes.

        If I asked you for a big discount and you offered me an 80% discount, I see no issue here. That doesn’t look like an “obvious mistake”.

        • Denjin@feddit.uk · 9 hours ago

          The OP went into more detail in the Reddit comments:

          Chatbot isn’t supposed to be making financial decisions. It’s supposed to be answering customer questions between 6pm and 9am when I’m not around.

          It’s worked fine for 6+ months, then this guy spent an hour chatting with it, talked it into showing how good it was at maths and percentages, diverted the conversation to percentage discounts off a theoretical order, then acted impressed by it.

          The chatbot then generated him a completely fake discount code and an offer for 25% off, later rising to 80% off as it tried to impress him.

          • Nalivai@lemmy.world · 6 hours ago

            Don’t give a flying fuck what you think your bot should do. Your public facing interface gives a discount, I take a discount, simple as.

          • ThePantser@sh.itjust.works · 9 hours ago

            Still sounds like the AI is an idiot that did and said things it shouldn’t. But it still did them, and as a representative of the company it should still be held to the same standards as an employee. Otherwise it’s fraud. Nobody hacked the system; the customer was just chatting and the “employee” fucked up, and the owner can take it out of their pay… oh right, it’s a slave made to replace real paid humans.

            • leftzero@lemmy.dbzer0.com · 5 hours ago

              The “AI” isn’t an idiot.

              It isn’t even intelligence, nor, arguably, artificial (since LLMs are grown, not built).

              It’s just a fancy autocomplete engine simulating a conversation based on statistical information about language, but without any trace of comprehension of the words and sentences it’s producing.

              It’s working as correctly as it possibly can; the business was simply scammed into using a tool (a toy, really) that by definition can’t be suited for the job they intended it to do.

                • leftzero@lemmy.dbzer0.com · 5 hours ago

                  Yeah, quite.

                  Though, to be fair, the scammers and the LLMs themselves are pretty good at convincing their victims that the damn things are actually smart, to the point that some otherwise quite intelligent people have fallen for it.

                  And come to think of it, given that most investors have fallen hook, line and sinker for the scam, if you’re publicly traded, catering to their idiotic whims and writing off the losses caused by the LLM might actually be more profitable - provided most of your customers aren’t smart enough to take advantage of your silliness…

            • ricecake@sh.itjust.works · 7 hours ago

              Eh, there’s the legal concept of someone being an agent of the company. This bot wasn’t typically expected to take orders, nor was it tied into the order system, it seems.

              In the cases where the deal had to be honored, the bot had the ability to actually generate and place an order, and that was one of the primary things it did. The two cases that come to mind are a car dealership and an airline, where you could use the bot to actually place a vehicle order or to find and buy flights. As agents of the business, if they make a preposterous deal you’re stuck with it.

              A distinction can be made for stores where the person who comes up and offers to help you isn’t an agent of the business. They can use the sales computer to find the price, and they can look for a discount, but they can’t actually adjust the order price without a manager coming over to enter a code and do it.

              In this case, it sounds like someone did the equivalent of going to a Best Buy, talking to the person who helps you find the video games, and trying to get them to say something discount-code-ish. Once they did, they said they wanted to redeem that coupon and threatened to sue.

              It really hinges on whether it was tied to the ordering system or not.

            • Denjin@feddit.uk · 8 hours ago

              I don’t disagree, but this is an issue of when/where it’s appropriate to use an LLM to interact with customers and when it isn’t. If you present an LLM to the public, it will get manipulated by people who are prepared to do so in order to get it to do something it shouldn’t.

              This also happens with human employees, but it’s generally harder to do, so it’s less common. This sort of behaviour is called social engineering, and it’s used by fraudsters and scammers to get people to do what they want - typically handing over their bank details - but the principle is the same: you’re manipulating someone (or something, in this case) into doing something they/it shouldn’t.

              Just because we don’t like the fact that the business owner deployed an LLM in a manner they probably shouldn’t have doesn’t mean the customer isn’t the one in the wrong, having voided whatever contract they had through their own actions. Whether it’s a human or an LLM on the other end of the chat doesn’t actually make any difference.

              • queermunist she/her@lemmy.ml · 7 hours ago

                We’ll see what the court says, because my intuition is that an LLM making an offer is categorically different from a human doing it. No one was scammed, this would be more like your own poorly designed website giving you the option to get 80% off if someone clicks the right combination of settings.

      • tiramichu@sh.itjust.works · 9 hours ago

        Sure, but it’s all dependent on context.

        The law as it is (at least in the UK) is intended to protect against honest mistakes. For example, if you walk into a shop and a 70" TV is priced at £10 instead of £1000, when you take it to the till the cashier is within their rights to say “oh, that must be a mistake, we can’t sell it for £10” - you can’t legally demand that they do, even though the sticker said £10.

        Basically, what it comes down to in this chatbot example (or what it should come down to) is whether the customer was acting in good faith or not, and whether the offer was credible or not (which is all part of acting in good faith - the customer must believe the price is appropriate).

        I didn’t see the conversation and I don’t know how it went. If it went like “You can see I do a lot of business with you, please give me the best discount you can manage” and they were told “okay, for one time only we can give you 80% off, just once” then maybe they found that credible.

        But if they were like “I demand 80% off or I’m going to pull your plug and feed your microchips to the fishes” then the customer was not in good faith and the agreement does not need to be honoured.

        Either way, my point in the comment you replied to isn’t intended to be about this specific case, but about the general case of whether or not companies should be held responsible for what their AI chatbots say, when those chatbot agents are put in a position of responsibility. And my feeling is very strongly that they SHOULD be held responsible - as long as the customer behaved in good faith.

        • balsoft@lemmy.ml · 3 hours ago

          There are jurisdictions where a price tag in the store is almost always treated as an offer (a.k.a. a “public offer”) and the company is legally required to honor it. In some circumstances the employees who screwed up and put out the incorrect price tag will bear most of the financial responsibility, which sucks. That’s why you shouldn’t do it if you get the chance - it’s not a legal loophole to stick it to a corpo; you’re just ruining the life of some poor overworked retail employee who misplaced a price tag.

          And yeah, the good faith part is also really important. If the person has asked a couple of questions to a chatbot, got recommended some products, asked if there’s a discount available and then got an 80% discount out of the blue, got excited and made the deposit, it would probably be enforceable. If the customer knowingly “tricked” an LLM into giving out a bogus discount code, it would be very dubious at best.

        • Denjin@feddit.uk · 8 hours ago

          my feeling is very strongly that they SHOULD be held responsible - as long as the customer behaved in good faith.

          I agree, but the OP expanded on their issue in the comments and it very much appears the customer wasn’t acting in good faith.

          • tiramichu@sh.itjust.works · 8 hours ago

            On the basis of that further information - which I had not seen - I agree completely that in this specific case the customer was in bad faith, they have no justification, and the order should be cancelled.

            And if the customer took it up with small claims court, I’m sure the court would feel the same and quickly dismiss the claim on the basis of the evidence.

            But in the general case should the retailer be held responsible for what their AI agents do? Yes, they should. My sentiment about that fully stands, which is that companies should not get to be absolved of anything they don’t like, simply because an AI did it.

            Here’s a link to an article about a different real case, where an airline tried to claim that it had no responsibility for the incorrect advice its chatbot gave a customer, which ended up costing the customer more money.

            https://www.bbc.co.uk/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know

            In that instance the customer was obviously in the moral right, but the company tried to weasel their way out of it by claiming the chatbot can’t be treated as a representative of the company.

            The company was ultimately found to be in the wrong, and held liable. And that in my opinion is exactly the precedent that we must set, that the company IS liable. (But again - and I don’t think I need to keep repeating this by now - strictly and only if the customer is acting in good faith)

            • Denjin@feddit.uk · 8 hours ago

              Again, I agree - we are in total agreement about the principle of whether a business should be held accountable for its LLM’s actions. We’re in “Fuck AI”!

              I was merely pointing out in my original comment that there are absolutely grounds not to honour a contract where a customer has acted in bad faith to get an offer.

        • tburkhol@lemmy.world · 7 hours ago

          Treat LLMs like “boss’s idiot nephew,” both in terms of whether the business should give them a privilege and whether the business should be liable for their inevitable screwups.

    • fizzle@quokk.au · 8 hours ago

      If a human offered an 80% discount, the human’s employer isn’t magically forced to honor it.

      Firstly, it depends on whether the offer satisfies the criteria of “an offer” in a legal sense. Unless purchases for 8,000 GBP are generally negotiated in a chat window on a website, this kinda seems unlikely. Anyhow, let’s assume it is an actual “offer”.

      If the offeror reneges on the contract, they may be liable to a claim from the buyer for the costs the buyer has incurred as a result of the bogus purchase.

      For example, if the buyer dispatched a truck to the vendor’s warehouse and on arrival was told the sale wouldn’t go through, then the vendor might be liable for the cost of the delivery truck for a few hours.

      In reality, no one is going to bother making a legal claim for that small amount of money.

    • Golden@lemmy.blahaj.zone · 9 hours ago

      I placed an online order and used a coupon code I got from some browser extension. The seller refused to ship the item to me until I gave them more money, because they said the free shipping code was meant for internal use only and not for customers.

      • tiramichu@sh.itjust.works · 8 hours ago

        In that particular case, I’d suggest the seller was within their rights, honestly.

        If the code wasn’t obtained through any official means provided by the seller, then the seller has no responsibility to honour it, even if it happens to ‘work’ in their checkout.

        But the seller was obviously stupidly petty, and that feels pretty pathetic on their part.

        They should have just sent your stuff, and taken the experience as notice to replace the voucher code that got leaked, so it doesn’t happen again.