This makes total sense to me.
The big models were trained on what might as well be everything public that people have ever written.
So I’d expect their output to be a pretty convincing example of something some random person might have written. And getting fooled by the wording of a joke is something people do all the time. In fact, I bet examples of people getting it wrong are over-represented in the training data, because that’s more worthy of reposts and will DrIvE EnGaGeMeNt!
20-30 years ago, the big question was whether a computer could pass the Turing test.
Little did we realize that was the last thing we wanted! Simulating humans means simulating mistakes.
The problem is the psychos and grifters who want to take this “passable simulation of a random schmuck’s ramblings” and sell it to the business world as a literal deus ex machina that will swoop in and relieve them of their pesky “pay the humans” problem, all while pitching it as a $10-100 trillion piece of IP that we’re going to restructure our world around.