That's well put, and I'm under no naive assumption that LLMs are AI. Though I do think you're discounting the usefulness, as it did give the right answer, which makes it fine for average people doing basic math or whatever project they're working on. I'm under no delusion that it's replacing workers, unless someone's job is writing fancy emails or building spreadsheets, and I do still think it's a massive bubble.
Yeah, I get that it seems like a fine use for average people doing basic math. The nonzero chance of error could end up not mattering. But it could matter very much, depending on the use case. If you're asking an LLM the volume of a bucket, it's not a big deal. If you're asking an LLM "how many milligrams of this drug is the correct dose for an 80 kg man", that's a big fucking deal.
If people don't know LLMs can't be trusted to give the correct answer, they're not going to realize they need to do the math themselves in important use cases. And that is certainly not something Microsoft and Google are encouraging people to learn.
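To make the "do the math themselves" point concrete, here's a minimal Python sketch of the kind of deterministic calculation I mean. The 5 mg/kg rate, the drug, and the function name are all made up for illustration, not real dosing guidance:

```python
# Minimal sketch: weight-based dose arithmetic done deterministically,
# instead of trusting an LLM. The drug, the mg/kg rate, and the function
# name are hypothetical placeholders, not real dosing guidance.

def weight_based_dose_mg(dose_mg_per_kg: float, weight_kg: float) -> float:
    """Total dose in mg for a simple mg-per-kg regimen."""
    if dose_mg_per_kg <= 0 or weight_kg <= 0:
        raise ValueError("dose rate and weight must be positive")
    return dose_mg_per_kg * weight_kg

# Hypothetical drug dosed at 5 mg/kg for an 80 kg patient:
print(weight_based_dose_mg(5.0, 80.0))  # 400.0 mg, same answer every run
```

A calculator or a ten-line script gives the same answer every single time, which is exactly the property an LLM can't promise.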
Then there's the efficiency issue - Big Tech spent trillions of dollars to develop and train machine learning systems, which perform quadrillions of energy-intensive operations per second, and they're being marketed to do a job that a 99-cent solar-powered calculator from the 1980s can do better.
God, I just realized tax season is coming up. And after all the layoffs and political firings and general dogebaggery at the American IRS, they’re going to have to deal with people using AI to do their taxes 😆