I think humanizing them is a fairly trivial thing, in this sort of context.
Yes, it’s true, it didn’t “lie” about health.
But it has the same result as someone lying: it becomes another bullet point in the list of reasons not to trust AI. Even if it pulls from the right sources and presents information generally correctly, it may simply omit information it could have presented, because the sources it learned from omitted it in a way that would get those sources deemed “liars.”
We could write all that out every time, I suppose. But people will say their dog is trying to trick them when he goes to the bowl 5 minutes after dinner, or goes to their partner for a second meal, and everyone understands the dog isn’t actually attempting to deceive anyone; he just wants more food.
Same thing, to me at least. It “lied,” but in the way my dog lies, not in the way a human can lie.