I agree, but the OP expanded on their issue in the comments and it very much appears the customer wasn’t acting in good faith.
On the basis of that further information - which I had not seen - I agree completely that in this specific case the customer was in bad faith, they have no justification, and the order should be cancelled.
And if the customer took it up with small claims court, I’m sure the court would feel the same and quickly dismiss the claim on the basis of the evidence.
But in the general case, should the retailer be held responsible for what their AI agents do? Yes, they should. My position on that fully stands: companies should not get to be absolved of something they don’t like simply because an AI did it.
Here’s a link to an article about a different real case, where an airline tried to claim that they had no responsibility for the incorrect advice their chatbot gave a customer, which then ended up costing the customer more money.
https://www.bbc.co.uk/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know
In that instance the customer was obviously in the moral right, but the company tried to weasel their way out of it by claiming the chatbot can’t be treated as a representative of the company.
The company was ultimately found to be in the wrong and held liable. That, in my opinion, is exactly the precedent we must set: the company IS liable. (But again - and I don’t think I need to keep repeating this by now - strictly and only if the customer is acting in good faith.)
Again, I agree; we are in total agreement on the principle of whether a business should be held accountable for their LLM’s actions. We’re in “Fuck AI”!
I was merely pointing out in my original comment that there are absolutely grounds not to honour a contract where a customer has acted in bad faith to get an offer.
👍