- cross-posted to:
- technology@lemmy.world
The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes”, among other factors.
Maybe LLMs should be used cautiously. Maybe companies shouldn’t be directed by executives who have bought into the “AI” hype train without fully understanding it.



It’s crazy they didn’t before.
That’s the thing: we keep hearing “AI is a tool”, “AI is a tool”, “AI is a tool” in an effort to legitimize and rationalize its use. But in the wild, it’s clearly being used as a human replacement strategy.
It is. For these companies it’s not a tool, it’s a replacement. I contract for a lot of startups and small-to-medium-sized companies, and they all use AI/LLMs as a go-to end-to-end builder. Not a tool; they straight up use it to build from beginning to end, with prompts supplied by whatever junior dev or college intern they have on staff. None of it is verified once the build is complete. It’s just immediately pushed to production. So why don’t they check it? Well, because most places laid off the people who could check it, or who were halfway decent at code review.
So what’s Amazon’s excuse? Pretty much the same: keep senior staff to a minimum and rely on fresh college grads. They’ve always been like this, so them not verifying anything an AI wrote isn’t surprising. This is the same company that will force all new grads to be on call for a week straight, expecting them to solve issues on their own at 3am.
Well. They’re trying to use it as human replacement. They keep finding out that they can’t.
Now they think they can save money by using desperate graduates with little to no experience.
It’s really hard for everyone everywhere to vet AI’s output. It’s impossible if you’re not already an expert in the subject matter.
You think my brother is fact checking ShitGPT when he asks it about taxes? Most of the time, he immediately believes the answer is 100% right. If it’s something particularly important to him, he thinks he’s smart and adds something like “Don’t make stuff up. Don’t lie. Cite your sources.” And then that erases all doubt he had.
My manager recently shat out two VERY large code changes in our codebase, then just said, “Please review to make sure it’s right.” You know how hard it is to verify something you didn’t write? Nobody hit the weird edge cases while writing it. Nobody read the documentation on how to use this tool or library, so we don’t really know how this library works.
Then there’s reviewing for tech debt. Nobody typed out the code, so nobody had the moment of realizing it was shit because it’s super verbose. Nobody swapped out the implementation midway through after realizing there’s a better way to do it.
And I’m supposed to “review” a 1000+ line change? Realistically, not happening.
Yes, the reviewer is supposed to catch stuff. But previously the writer also had some responsibility, and trust built over time: this person is very meticulous; that person tends to make lots of mistakes around X. I trust Alice tested the edge cases. Bob told me he was worried about X part of the code.
But now the writer just lobs some shit over the wall and the reviewer has to do all the work.
Again, realistically, not happening bro. I got my own shit to work on. If someone can’t be bothered to write something, then I can’t be bothered to read it.
LGTM, whatever. I don’t care. Let it blow up.
They should have been doing it that way from the start.