Saying that it’s good at one thing and bad at others.
But that’s exactly the difference between narrow AI and a generally intelligent one. A narrow AI can be “superhuman” at one specific task - like generating natural-sounding language - but that doesn’t automatically carry over to other tasks.
People give LLMs endless shit for getting things wrong, but they should actually get credit for how often they get it right too. Getting facts right is a pure side effect of their training - not something they were ever designed to do.
It’s like cruise control that’s also kinda decent at driving in general. You might be okay letting it take the wheel as long as you keep supervising - but never forget it’s still just cruise control, not a full autopilot.
What does this word mean? Does this refer to something that does not exist? If so, why are we using it as a practical benchmark or distinction to make statements about the world?
> but they should actually get credit for how often they get it right too.
My text compression algorithm for tape gets the facts right to the exact character. Beat that.
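A minimal sketch of the point being made here: a lossless compressor reproduces its input byte for byte, so "getting the facts right to the exact character" is guaranteed by construction rather than learned. zlib stands in below for the commenter's hypothetical tape compressor; any lossless codec would do.

```python
# Lossless compression round-trip: the restored text is guaranteed to be
# identical to the original, character for character.
import zlib

original = b"The Battle of Hastings was fought in 1066."
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original  # exact, character-for-character recall
print(f"{len(original)} bytes -> {len(compressed)} bytes, restored exactly: {restored == original}")
```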