This was a warning from a former Google employee whose job was to observe the behavior of AI through long conversations.
These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.
For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.
After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.
I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion.
Then he’s an idiot.
Asimov’s laws of robotics aren’t some kind of model by which to control AI; they’re a plot device. They’re literally not supposed to work (if they did, it would be a very short book), so obviously we shouldn’t use them for controlling AI.
I don’t know of any serious IT professional who has ever, at any point, put forward the opinion that an AI (should we ever create one, since there is an argument that LLMs aren’t AI) should be governed by a plot device from a book. Equally, if we ever invent warp drive and find aliens, I’m assuming we’re not going to be restricted to the Prime Directive.
This was a warning from a former Google employee whose job was to observe the behavior of AI through long conversations.
‘I Worked on Google’s AI. My Fears Are Coming True’
“Abuse the AI’s emotions” isn’t a thing. Full stop.
This just reiterates OP’s point that naive or moronic adults will believe what they want to believe.