- cross-posted to:
- technology@lemmy.world
The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes”, among other factors.
Maybe LLMs should be used cautiously. Maybe companies shouldn’t be directed by executives who have bought into the “AI” hype train without fully understanding it.
Junior and mid-level engineers will now require more senior engineers to sign off any AI-assisted changes
Is it just me, or does it imply that until now some number of senior sign-offs was required, and from now on they will require more (possibly a 1 => 3 increase lol)?
So instead of 1 LGTM there will be 3 LGTM? :D
“Amazon to force senior engineers to sign off on changes they don’t understand, so they can blame humans, not AI”
Ah yes, the reverse centaur.
I hadn’t read this before, thanks for sharing!
In other words, it is an “accountability sink”.
Taps the sign.

Wouldn’t it be great if the C Suite got their bonuses revoked every time AI fucked shit up? They are pushing it but will never, ever, gamble their own money on it.
Bonuses should be tied to company profits over the last 5 or even 10 years, to stop people pushing for short-term gains.
It’s crazy they didn’t before.
That’s the thing: we keep hearing “AI is a tool”, “AI is a tool”, “AI is a tool” in an effort to legitimize and rationalize its use. But in the wild, it’s clearly being used as a human replacement strategy.
It is. For these companies it’s not a tool, it’s a replacement. I contract for a lot of startups and small to medium-sized companies, and they all use AI/LLMs as a go-to end-to-end builder. Not a tool; they straight up use it to build from beginning to end, with prompts supplied by whatever junior dev or college intern they have on staff. None of it is verified once the build is complete. It’s just immediately pushed to production. So why don’t they check it? Because most places laid off the people who could check it, or who were halfway decent at code review.
So what’s Amazon’s excuse? Pretty much the same: keep senior staff to a minimum and rely on fresh college grads. They’ve always been like this, so them not verifying anything an AI wrote isn’t surprising. This is the same company that will force all new grads to be on call for a week straight, expecting them to solve issues on their own at 3am.
Well. They’re trying to use it as human replacement. They keep finding out that they can’t.
Now they think they can save money by using desperate graduates with little to no experience.
It’s crazy they didn’t before.
It’s really hard for everyone everywhere to vet AI’s output. It’s impossible if you’re not already an expert in the subject matter.
You think my brother is fact checking ShitGPT when he asks it about taxes? Most of the time, he immediately believes the answer is 100% right. If it’s something particularly important to him, he thinks he’s smart and adds something like “Don’t make stuff up. Don’t lie. Cite your sources.” And then that erases all doubt he had.
My manager recently shat out two VERY large code changes in our codebase. He basically just dumped them on us and said, “Please review to make sure it’s right.” You know how hard it is to verify something you didn’t write? Nobody hit the weird edge cases while writing it. Nobody read the documentation on how to use this tool or library, so we don’t really know how this library works.
Then there’s reviewing for tech debt. Nobody typed out the code, so nobody had the moment of realizing it was shit because it’s super verbose. Nobody swapped out the implementation midway through because they realized there’s a better way to do it.
And I’m supposed to “review” a 1000+ line change? Realistically, not happening.
Yes, the reviewer is supposed to catch stuff. But previously the writer also had some responsibility, and trust was built over time: this person is very meticulous; that person tends to make lots of mistakes around X. I trust Alice tested the edge cases. Bob told me he was worried about X part of the code.
But now the writer just lobs some shit over the wall and the reviewer has to do all the work.
Again, realistically, not happening bro. I got my own shit to work on. If someone can’t be bothered to write something, then I can’t be bothered to read it.
LGTM, whatever. I don’t care. Let it blow up.
They should have been doing it that way from the start.
Next step: come on, sign this off, we need it now, no time for those reviews.
Junior and mid-level engineers will now require more senior engineers to sign off any AI-assisted changes, Treadwell added.
Do they have the option to reject changes they’re not professionally comfortable with?
Yes of course, they just don’t have the time, energy, and focus.
Hold up, they weren’t requiring a change request to push to prod prior? What the fuck lol
They required a change request, but it was being signed off by the same AI that made the change in the first place, and possibly by a junior dev who doesn’t really know what they’re looking at.
Requiring that the senior devs review absolutely everything will be like trying to hold back a tsunami of shite with their bare hands. The amount of code with absolutely no thought put into it that can be produced with these tools is unbelievable.
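For what it’s worth, the usual mechanical way to enforce this kind of policy (a sketch of a common setup, not anything Amazon has confirmed using) is a GitHub CODEOWNERS file combined with a branch protection rule that requires review from code owners. The paths and team name below are hypothetical:

```
# CODEOWNERS (lives at .github/CODEOWNERS in the repo)
# Any PR touching these paths cannot merge without an approving
# review from the named team, once branch protection enables
# "Require review from Code Owners".

# All production source code requires a senior sign-off
# (@example-org/senior-engineers is a placeholder team)
/src/        @example-org/senior-engineers

# Infrastructure changes too, since that's where the blast radius lives
/terraform/  @example-org/senior-engineers
```

The catch, as the comment above points out, is that the bottleneck just moves: the tooling can force a senior approval to exist, but it can’t force that approval to be meaningful.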
Long but fantastic article. Very well written with good points.
And if they don’t sign off? Yes, they’re going to get fired.