So much white-collar work is frankly a bit performative, and whether it’s being done well, done badly, or not done at all is sometimes impossible to tell.
Thanks to mismanagement, people are brought in “in case they might be useful,” and a bunch of material is produced that is beyond the ken of management, who just smile and nod because they have no idea.
Witnessed a group manage to coast on doing effectively nothing for over a year on “we are going to do analytics in the cloud” while executive after executive sagely nodded. A new executive came into the fold, got the same pitch, and said “ok, fine, but what analytics, with what data sources, and what do you expect to get out of it?” In a rare moment of competence, an executive actually dared to figure something out instead of just smiling over the buzzwords. That same executive was gone within 3 months, because broadly speaking this was a problem for his peers, who mostly operated by buzzword alignment.
There’s a mountain of internal project documentation that must be created but is never used, because of processes where non-technical executives imagine they can review a technical design as long as it isn’t “code”, or that they can fire their coders and replace them with new ones as long as they can reference some ‘non-code’ document to help.
GenAI may be pretty bad, but depressingly it might not matter given how much pretty bad stuff is already out there.
Makes sense! So your theory is that leadership will fire themselves and replace themselves with GenAI, keeping the rank-and-file workers?

Nah, the rank-and-file workers will go, and leadership will happily let GenAI keep doing performative bullshit that doesn’t matter while claiming it’s super important.
“An evil man will burn his own nation to the ground to rule over the ashes.” ~ Sun Tzu
“AI Slop” is not mutually exclusive with “AI fascism”. Billionaires are already burning down the planet. Clearly they don’t care about killing humanity on the way.
In addition to what the other reply says, the current state of AI isn’t necessarily the best AI could be. Even with just iterative changes to LLM-based models, things are improving so fast that it might soon be safe to shrink the workforce for technical tasks.
But I’m sure I’m not the only one who thinks the LLM-focused approach itself is just a local minimum the industry is stuck trying to optimize. Another approach might not be big-data “throw everything we can at it and hope it spits out useful results” but something more methodical: encode knowledge from experts to give the system a head start, plus robust reasoning strategies and logic that let it improve on that starting point as it seeks out and incorporates relevant data, much the way we do science and engineering.
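For a concrete (if toy) picture of that direction, here’s a minimal sketch of a forward-chaining rule engine: expert-encoded rules plus a mechanical reasoning loop that derives new facts from a starting point. Everything here (the Rule class, the example rules and fact names) is hypothetical illustration, not any real system or library.

    # Toy forward-chaining rule engine: a hypothetical sketch of
    # "encode expert knowledge, then let a reasoning loop extend it".
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        premises: frozenset  # facts that must all hold
        conclusion: str      # fact to add when they do

    def forward_chain(facts: set, rules: list) -> set:
        """Apply rules repeatedly until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for rule in rules:
                if rule.premises <= derived and rule.conclusion not in derived:
                    derived.add(rule.conclusion)
                    changed = True
        return derived

    # Expert-encoded starting knowledge (made-up domain rules):
    rules = [
        Rule(frozenset({"has_fever", "has_cough"}), "suspect_flu"),
        Rule(frozenset({"suspect_flu", "short_of_breath"}), "order_chest_xray"),
    ]
    print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, rules))
    # -> includes the derived facts "suspect_flu" and "order_chest_xray"

The point of the sketch is only the division of labor: the rules are the “head start” from experts, and the loop is the reasoning layer that builds on it, rather than hoping structure falls out of a giant training corpus.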
I believe it’s a race between an AI that can truly outcompete us and societal collapse, because the real reason AI is more difficult to stop than those other three is how easy it is to hide development. The massive data centers are only required to scale the current approach up for the whole world to use; AI research and development can be done on home PCs, especially if you care more about results than speed (in which case you aren’t limited by cores or memory, just by storage and time).
Wut. Rich people will shoot themselves in the foot by firing the proletariat. AI is trash.
The only thing that would save them is a bailout when everything crashes.