Same with the internet. Fuels billionaires, destroys the environment with data centers and cables, kills libraries and textbook research, spreads Nazi propaganda. We need to stop using technology in general.
There are things you can do with the Internet that are impossible to do without it. Everything you mentioned is very real harm the Internet does to humanity and the world - even if you meant it sarcastically - but that harm has to be weighed against the benefits the Internet provides that can’t be replicated by anything else.
There’s nothing an LLM can do that a human can’t. The only thing LLMs are good at is convincing managers to replace human employees with LLMs. Because even though LLMs do a worse job than any human employee, they’re cheaper and won’t unionize.
The cost-benefit analysis for society is very different.
Let’s try a standard problem I’m randomly making up, using a free AI; you tell me if this kind of thing can be useful to someone:
If I have a bucket that is 1 meter tall and 1 meter wide how much volume can it hold?
The volume V of a cylinder can be calculated using the formula:
V = πr²h
Where:
r is the radius, h is the height.
In this case, the bucket is 1 meter tall and 1 meter wide, which means the diameter is 1 meter. Therefore, the radius r is:
r = ½ meter = 0.5 meters
Now substituting the values into the volume formula:
V = π(0.5 m)²(1 m) = π(0.25 m²)(1 m) ≈ 0.7854 m³
Thus, the volume the bucket can hold is approximately 0.785 cubic meters.
What you’ve given is an example of a problem where an LLM is inherently the wrong tool.
See, variation is built into LLMs. They’re programmed to evaluate probable responses and select from them on the basis of probability - to simplify ridiculously, if a particular word follows another 90% of the time, then in 90% of the content it generates the LLM will have that word follow the other, and in the other 10% it won’t.
If you give an LLM the exact same prompt multiple times, you will get multiple different responses. They’ll all be similar responses, but they won’t be exactly the same, because how LLMs generate language is probabilistic and contains randomness.
(And that is why hallucination is an inherent feature of LLMs and can’t be trained out.)
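That probabilistic selection is easy to sketch. The words and percentages below are made up for illustration (no real model assigns exactly these probabilities); the point is only that an unseeded sampler gives different output on different runs, even for the same "prompt":

```python
import random

# Toy sketch of probabilistic next-word selection. Hypothetical
# distribution: suppose "blue" follows "the sky is" 90% of the time.
next_word_probs = {"blue": 0.90, "grey": 0.07, "falling": 0.03}

def pick_next_word(rng):
    # Weighted random draw from the distribution above
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random()  # unseeded: different runs make different picks
samples = [pick_next_word(rng) for _ in range(1000)]
print(samples.count("blue"))  # roughly 900 of 1000, but not exactly
```

Real models sample over tens of thousands of tokens with temperature and other knobs, but the mechanism is this: a weighted draw, not a lookup of the one correct answer.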
But math isn’t language. Math problems have correct answers. When you use a software tool to answer a math problem, you don’t want variation. You want the correct answer every time.
To solve a math problem, you need to find the appropriate formula, which will be the same every time. Then you use a calculator, which always gives the correct result. You plug the numbers into the formula and calculate the result.
What I’m getting at is, if you use a calculator to do the math problem yourself, and you put in the correct formula, you’ll always get the correct result. If you use an LLM to generate the answer to a math problem, there is always a non-zero chance it will give you the wrong answer.
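By contrast, the calculator route is fully deterministic. A minimal sketch of the bucket calculation, with the formula and numbers taken from the example above:

```python
import math

def cylinder_volume(radius_m, height_m):
    # V = pi * r^2 * h: same inputs, same output, every time
    return math.pi * radius_m ** 2 * height_m

# The bucket from the example: 1 m tall, 1 m wide, so radius 0.5 m
volume = cylinder_volume(0.5, 1.0)
print(round(volume, 4))  # 0.7854
```

Run it a thousand times and you get the same answer a thousand times; that determinism is exactly what a probabilistic text generator can’t promise.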
But what if, you might ask, you don’t know the correct formula? What if you’re not good enough at math to calculate the correct answers, even with a calculator? Isn’t this a time when the LLM can be useful, to do something you can’t?
The problem is, the LLM could be wrong. And if you haven’t looked up the formula yourself, from a reliable source that is not an LLM, you have no way to check the LLM’s work. Which means you can’t trust it for anything important and you have to do the math yourself anyway.
(This is true for everything an LLM does, but is especially true for math.)
And if you have looked up the formula yourself, it’s just as easy to use a calculator the first time and skip the LLM.
Right? This is what I’m getting at. An LLM can do some of the same things a human does, but it’s always going to be worse at it than a human, because it’s not conscious, it’s not reasoning its way to a correct answer, it’s just generating a string of linguistic tokens based on probabilities. And math problems might be the clearest possible example of this.
That’s well put; I’m under no naive assumption that LLMs are AI. Though I do think you’re discounting the usefulness, as it did give the right answer, which is a fine use for average people doing basic math or whatever project they’re working on. I’m under no delusion that it’s replacing workers, unless someone’s job is writing fancy emails or building spreadsheets, and I do still think it’s a massive bubble.
Yeah, I get that it seems like a fine use for average people doing basic math. The nonzero chance of error could end up not mattering. But it could matter very much, depending on the use case. If you’re asking an LLM the volume of a bucket, it’s not a big deal. If you’re asking an LLM “how many milligrams of this drug is the correct dose for an 80 kg man”, that’s a big fucking deal.
If people don’t know LLMs can’t be trusted to give the correct answer, they’re not going to realize they need to do the math themselves in important use cases. And that is certainly not something Microsoft and Google are encouraging people to learn.
Then there’s the efficiency issue - Big Tech spent trillions of dollars to develop and train machine-learning processors, which perform quadrillions of energy-intensive operations per second, and they’re being marketed to do a job that a 99-cent solar-powered calculator from the 1980s can do better.
God, I just realized tax season is coming up. And after all the layoffs and political firings and general dogebaggery at the American IRS, they’re going to have to deal with people using AI to do their taxes 😆
Using LLMs for math questions is probably the worst use for LLMs.
And all of this is easily calculated without AI. You can literally Google it and let Google do the math for you, no AI needed.
Perhaps you’re right, though the AI also allows natural language or voice input, and further explanations. For example:
When you visualize a cylinder, think of stacking many thin circular disks (each with a height Δh) to build up the height h. The volume of each individual disk is its area πr² multiplied by its infinitesimally small height Δh. When you aggregate these over the full height h, you arrive at the volume of the cylinder.
It’s also eroding all the bullshit we used to do, like cover letters and things that had no reason to exist besides wasting someone’s time. So truth be told I’m a fan, even if it is a massively unprofitable bubble. I also recognize its limitations, given its hallucinations, so I understand it shouldn’t be relied upon for useful work.
Found the Mennonite.