What worries me is that companies are using “the AI fucked up” as an excuse and just… not fixing the problem. They’re using it as an accountability shield.
That’s what companies always do.
The very purpose of creating most companies is to limit the liability of shareholders and staff.
It’s significantly easier to commit crimes with the knowledge that the system can’t come after your liberty or wealth for those crimes.
In many cases, Alzuhair writes, human supply chain managers are no longer being asked to override automatic shipments or intervene when discrepancies occur under their jurisdiction.
Don’t worry guys, AI will revolutionize everything. You won’t have to think at all!
Except AI is trash at doing what it’s advertised to do, it makes everybody dumber, and its shills will blame you once it inevitably mucks everything up.
“You’re prompting it wrong”
Isn’t it incredible that “AI” is sold as a product that is ‘PhD level smart’ (lol), but if it doesn’t do the straightforward thing you asked of it then it’s your fault.
They don’t provide instructions for it because they can’t provide instructions; what works on one version might not work next week. But it’s still your fault if it doesn’t do what it’s supposed to.
Are you excited yet??
Have you ever tried to get a PhD to do anything?
I have tried; it’s only possible if you butter him up with cookies first, and even that only has a fifty percent success rate!
PhD - Pompous, Hubristic Dickhead. That description fit a lot of college professors I’ve known, but AI is more like a sycophantic intern.
It’s about as smart as the average PhD in my experience.
It’s about as smart sounding as the average PhD in my experience.
As smart as the average PhD … when you ask the PhD something completely outside of their area of expertise and pressure them to make up an answer that sounds plausible, even if they don’t know the actual answer.
With the big difference that e.g. GPT isn’t on an expert level in anything.
Last year McDonald’s ran a test replacing human drive-thru workers with an AI running the speaker board. It was shut down after only 3 weeks.
My favorite bit was a guy trying to order a big mac meal large with a coke.
What the AI heard was 81,000 bottles of Dasani water. Then it asked, “Is this correct?” To which the guy responded, “81,000 bottles of fucking water???”
To which the AI added a big mac meal medium with a water. Then asked if his updated order was correct. He just drove off.
I was at a Bojangles earlier this year and they had an AI doing their drive-thru. I was trying to order a meal, but didn’t want a drink. That confused the heck out of the AI. It kept trying to force a drink on me. I gave up and walked into the store. The guy behind the counter was smiling and said something like, “We can hear what you’re saying to it. Next time just pull around. We got you.”
How do we know that actually happened? Is there a video? Who recorded it?
Why is this even in question? That’s exactly the type of shit that AI does.
deleted by creator
Oh, ya got me! Clearly an AI never makes mistakes, and everyone who tells you otherwise, including me, is clearly lying!
So you can’t trust what people say ever. You need to always see video.
Wait, but now video can be easily manipulated by AI. I can make evidence that never happened.
So you can’t trust people. You can’t trust video. If someone says something happened, you can’t trust the proof either. Guess nothing ever happens.
If AI is “responsible” for the well-being of humans…DEAD humans can’t get sick. DEAD humans don’t have to pay rent. DEAD humans stay dead.
The logic is solid.
The Three Laws strike again.
Well. It would be the zeroth law, first of all, but the three laws would most definitely not allow humans to die.
The whole point of I, Robot was cases where the three laws were circumvented in various ways.
And how many of those circumventions were the result of humans being stupid?
Nobody is programming those laws into AI. It’s not required.
Nobody is programming those laws because it’s not possible with the way that LLMs are currently built and trained. Instead of The Three Laws, which are inviolable but in certain edge cases insufficient, we have Anthropic’s Constitution, which is 23,000 words worth of good intentions which Claude should keep in the back of its mind while it does whatever it wants to do.
i mean i guess total collapse is a form of revolution
The final frontier
The result of all this may be catastrophic. Should a worst-case scenario ever occur — a cyberattack, a natural disaster, an internet outage — there may be no human workers left with the skills that once kept food on the shelves.
Very nerdy of me, but this reminds me of a Stargate SG-1 episode “the Sentinel.” The team travels to a planet whose civilization relies on fully automated technology. The people don’t have to operate or maintain it (normally), so their society has completely forgotten how. In the episode, one set of antagonists comes in and sabotages their defense system, and another set sees the opportunity and invades. The protagonists have to then figure out the defense system and fix it.
We don’t live in a TV series. There aren’t benevolent outsiders who will swoop down and save our systems in the nick of time when they break down. We’re headed in a bad direction.
When smart home thermostats and light switches were still a new thing, I used to talk about “Jurassic Park Tech”: so preoccupied with whether or not they could that nobody stopped to think whether they should. And that’s even more the case with AI.
At some point I think this gets to be like S. M. Stirling’s Emberverse, where modern tech stops working and people who know how to make traditional wooden bows become an extremely valuable resource. Except it’ll be having some old-timer on hand who’s able to handle logistics with just a spreadsheet, a Rolodex, and a calendar that’s going to make or break companies.
I prefer the one where Teal’c drinks a fresh pot of hot coffee straight from the pot.
Also had a civilization that needed robots to help maintain everything.
I prefer the Star Trek TNG episode where they kidnap a dozen children from the Enterprise.
Hey, can we stop calling everything with a computer “AI”? Order management systems have been a thing long before LLMs were invented (I’ve worked on one). This was perhaps one of the first applications of computing. Humans hand writing an order form in a major grocery store hasn’t been a thing since like the 80s.
Also, I’m like 80% sure this article was barfed out by an LLM. The em-dashes be everywhere.
The argument it’s making is against relying on technology (in this case some AI) because it can be disrupted. I don’t think having a single point of failure is unique to technology, though.
I’m also suspicious that the ransomware attack had anything to do with AI, but I didn’t want to say so, because going against the common consensus in threads like this gets me downvoted. I’d rather not say it if people aren’t going to consider it (and then agree or disagree). Heh.
Then again, as a user of em-dashes[1], I suppose I’m under suspicion of being an LLM as well. ;-)
Would you like me to compose responses to any other comments in this thread?[2]
😹 How are we concerned with statistical systems being vulnerable (which is shitty, sure) when they don’t even lead to productivity increases, that is they cannot even do the jobs they’re made to do? Get real. What a clownshow
Yeah this is what bugs me.
There are no trade-offs, only disadvantages.
It’s like a drug that’s not only bad for you, it’s also not fun to do.
Cigarettes after a week?
Really? Just one week? I thought it took months to get numb to it.
Now that I think about it, it’s a good metaphor.
AI could reaaallly use a surgeon general warning on it.
Amen
No productivity increase? What? For waiters maybe.
What are you? An investor?
“AIs” can’t even operate vending machines, let alone recognize handwriting reliably or translate text. I know a few people who work in archives with (pre-)medieval manuscripts, and I myself have broken my teeth on Google Translate™ and DeepL™. That’s how I know. There was also a study done on that vending machine thing. Come to think of it, you could make a simple vending machine that collects usage statistics and sends reports via radio using just a few scripts. Emphasis on “works”.
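For what it’s worth, the “few scripts” version really is small. Here’s a minimal sketch in Python, with all names hypothetical and the radio link abstracted to “serialize some bytes and hand them to whatever transport you have”:

```python
# Minimal sketch of a vending machine usage logger: count each sale
# locally, then serialize a compact daily report for transmission over
# a low-bandwidth link (radio modem, SMS, whatever). Hypothetical names;
# not based on any real vending-machine firmware.

import json
import time
from collections import Counter


class VendingLogger:
    def __init__(self):
        self.sales = Counter()  # slot id -> units sold

    def record_sale(self, slot: str) -> None:
        """Call from the dispense routine after a successful vend."""
        self.sales[slot] += 1

    def build_report(self) -> bytes:
        """Serialize the day's counts; a few dozen bytes per report."""
        report = {
            "timestamp": int(time.time()),
            "sales": dict(self.sales),
        }
        return json.dumps(report).encode("utf-8")


# Usage: the transport is just "ship these bytes somewhere".
logger = VendingLogger()
logger.record_sale("A1")
logger.record_sale("A1")
logger.record_sale("B3")
payload = logger.build_report()
print(payload)
```

No statistical model anywhere in sight, and it does the job deterministically.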
My my my
There’s nothing in this article about problems with AI specifically.