I’m trying to figure out why everyone is so mad about AI?
I’m still in the “wow” phase, marveling at the reasoning and information it can give me, and I’ve just started testing some programming assistance, which, in a few simple examples, seems fine (using free models for testing).
So I still can’t figure out why there’s so much pushback. Has everyone been using it extensively and reached a dead end in what it can do?
The whole “let me summarise this for you” thing is pointless to me, and it’s EVERYWHERE. It just adds a lot of friction to my everyday work with zero benefits. If I have a 10-page PDF document, every single line and word in it matters to me; a summary is totally pointless and can even be dangerous.
And I also see a world where people use an LLM to polish their emails and develop their ideas, and then the reader of the email uses an LLM to make a summary of the overdeveloped writing they received. It doesn’t help people get straight to the point; it just adds a lot of friction to communication with zero benefits.
Most of the people obsessed with AI in their field of work are tech people. They have a strong bias toward believing AI will solve everything, because it has a very strong impact on their own work.
In my (non-tech) work, I tried some new AI tools and realised they were far from reliable, so I only use them once in a while. And I’d guess that, for many people outside the tech world, LLMs are not that useful overall, despite what they hear every day from tech companies. So there is a huge gap in the perception of AI and LLMs between tech people and everyone else. I think that’s where all the “hate” comes from.
For most people, AI just consumes a lot of energy and raises hardware prices without bringing them any tangible benefits.
I hope my personal point of view helps you understand better why people are upset with the current state of AI. I’m a “tech curious” person; I’ve tried some AI tools over the past 2–3 years and ended up very disappointed by the results.
tl;dr: Most people don’t see the benefits of AI; they only see its costs.
I think the reason tech people are so bought into it is a combination of
their careers depending on them liking the new tech thing,
a general sci-fi inspired enthusiasm for what can be accomplished,
a dash of everyone in this industry being an introvert with no friends,
and the misconception that, because they can engineer something, they are smarter than the people around them.
About that last bullet: I don’t know if you remember the contemptuous rivalry between STEM majors and the arts or humanities (any major that was less “useful”), but that’s the exact smug attitude I’m talking about. There are a lot of people who think they could just program away life’s many problems.
I’m a very techy person, and I’m vastly more fascinated by tech from 1977 to around 2010 than by anything today. Most stuff today is boring and serves only to surveil us and destroy our lives bit by bit. Medical advancements aside.
I like the top comment on the video you suggested. It sums up the current issue: “A computer can never be held responsible, therefore a computer must never make a management decision” - IBM training manual, 1979.
Decisions cannot be automated. Many jobs require a lot of decision-making and taking responsibility for those decisions.
There are good reasons for Luddite, “they tuk er jerrrbs”-style behaviour. There are good reasons for concern about environmental impact. There are good reasons for fears about privacy, cyberpunk dystopian overlords forming, and possibly Terminators; for anger over the theft of intellectual property; for the fact that you didn’t ask for it and it’s happening to you anyway; for the effect it’s having on the hardware market for home computers and gaming; and for the fact that it’s not actually reliable yet.
There’s a lot going on in that space.
You’re right though, it’s pretty incredible, and I’m still impressed by it, but I don’t want to use it because I know it’ll be wrong a lot of the time. The only thing I’d want to use it for is help with things I don’t know, and in exactly that space I can’t tell when it’s wrong. Either I blindly trust it and fuck up my project, or I research everything myself as well to make sure it’s right, and at that point, why bother?
It is a marvel, and it’s pretty great, really. The main issues with AI today are that it’s hardware-intensive, demands cooling and tons of storage and computing capacity, replaces the workforce with subpar performance, and so on.
You see, modern LLMs are generative AI, not really sentient AI. They are trained on tons of text and content, from which the model generates an answer to your prompt. But more often than not, it will generate things that are wrong. The AI’s task is to generate, not to actually think.
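A minimal sketch of what “generate, not think” means, using a hand-built bigram table (an assumption purely for illustration — real LLMs are vastly larger neural networks, but the generation loop has the same shape: sample a likely next token, append it, repeat, with nothing ever checking whether the output is true):

```python
import random

# Made-up next-word probabilities given the current word (toy numbers).
bigrams = {
    "the":  [("cat", 0.6), ("moon", 0.4)],
    "cat":  [("sat", 0.7), ("flew", 0.3)],   # "flew" sounds fluent but is false
    "moon": [("sat", 0.5), ("flew", 0.5)],
    "sat":  [("down", 1.0)],
    "flew": [("down", 1.0)],
}

def generate(start, steps, rng):
    """Repeatedly sample a plausible next token. Note that nothing in this
    loop verifies whether the resulting sentence is actually true."""
    out = [start]
    for _ in range(steps):
        options = bigrams.get(out[-1])
        if not options:
            break
        words, weights = zip(*options)
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 3, random.Random(0)))
```

Every word transition here is statistically plausible, so the output always reads fluently — but a sentence like “the cat flew down” can come out just as easily as a true one, which is the toy version of a hallucination.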
It should have made us work fewer hours and have more free time by enhancing our productivity. Yet we lose jobs and have to work more hours while earning less.
Give me some red pills!
The ideology of AI is consistent with far-right fascist hyper-surveillance regimes, for starters.
Why do you think scam altman was drumpf’s largest donor?
It’s no more useful than a Wikipedia page, and the most damning part is that it consistently just makes shit up.
You are correct that there are reasons tech people are more inclined to like these things, but it’s not really because AI is useful to them.
Actually, here’s a good video about its usefulness.
Yeah… :/
I just like video games.
Just give us more ram bro and we’ll have agi I promise bro just 30000 more acres of farmland and 10 billion gallons of water bro that’s all we need