I find AI very frustrating. I had a script I wanted to turn into a systemd service, which I’d never done before. I searched the web and didn’t find quite what I wanted, so I asked AI. It gave a great answer to exactly my question and explained what every field was doing. It got me there faster than searching and browsing forums would have.
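For anyone who hasn’t set one up: the minimal version looks roughly like this (the unit name and script path here are placeholders for your own setup):

```ini
# /etc/systemd/system/myscript.service  (hypothetical name)
[Unit]
Description=Run my script at boot
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myscript.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now myscript.service`.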
So, great. I also wanted to set up a watchdog on the Pi to reboot it if it hangs. It told me to get the watchdog package from apt and then edit a systemd conf file. An hour later, with nothing working right, I gave up and found a tutorial in about 30 seconds of web browsing that made it clear the AI was mixing up instructions from two different methods.
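In case it helps anyone else, here are the two methods I believe it was conflating; the paths and option names below are from my own later reading, so double-check them against the man pages:

```ini
# Method 1: the standalone "watchdog" daemon from apt.
# Configure /etc/watchdog.conf, then enable its service.
watchdog-device = /dev/watchdog
max-load-1      = 24

# Method 2: systemd's built-in hardware watchdog (no apt package needed).
# Set this in /etc/systemd/system.conf instead:
RuntimeWatchdogSec=15
```

Both methods open /dev/watchdog, and only one process can hold it at a time, so blending the two sets of instructions gets you exactly nowhere. On a Pi you may also need `dtparam=watchdog=on` in /boot/config.txt, depending on the OS version.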
So it saved me 5 minutes on one thing, cost me an hour on another. I feel like the internet and search engines of 10 years ago were much better than what we have now.
It was better ten years ago.
That touches on the heart of it; search engines have been so enshittified that AI is by default better, because it occasionally gets information from its training data that isn’t easily found through normal searching.
(Some) AI has its place; GANs, for instance, are amazing at finding subtle indicators of patterns that can be extrapolated to new data. But god, it’s just so bad at 99% of the applications it has ever been used for, including the entire concept of LLMs, which are such an inherently flawed technology that they’ll never be passably useful for anyone who isn’t a greedy, shortsighted CEO wanting to replace workers as soon as possible.
Here’s how I do it:
Word it as best as I can. If the AI gives a specific and likely answer, double-check the documentation, Stack Overflow, or its listed sources.
It sucks that a lot of the stuff I’m searching for comes from the same three fucking AI-generated things from 2024 onwards.
Embryonic stem cell research promised better organ transplants, but it made Christians uncomfortable, so Bush II banned federal funding for it.
None of those were paused LOLOLOLOL
Came here to say this, but without the LOLs.
Came here to say this, but with one lol.
this is the way
How is “human cloning” a) a real technology b) a bigger danger than the 8 billion fucking morons already here c) different from twins and triplets?
We can clone a sheep and even nearly bring species back from extinction via cloning, which is vastly more advanced than just cloning a person. As for the other factors, it’s mostly a matter of ethics, what with the potential for cloning celebrities for stupid reasons or making a sapient clone just to harvest their organs. Which, as an aside, wasn’t that a Sliders episode?
making a sapient clone just to harvest their organs
A clone just makes a genetically identical baby, though, and they may be shorter-lived: Dolly only lived about half as long as a typical sheep, and she died of lung disease rather than old age.
Unless you want to wait 15 to 20 years for organs that might, on average, last 15, cloning isn’t practical.
I’m assuming we can solve the telomere issue for this. Frankly, though, it seems to be a stalled-out field, at least until we can figure out how to better use stem cells.
But yeah, if you are in your 20s or even 50s, making a clone baby of yourself and waiting 20 years would be technically viable as a way to get a new set of organs. That’s more what I’m referring to, especially since creating a healthy body you can rip apart would basically require letting it live a relatively healthy life.
It was probably a Sliders episode, since 90s off-brand sci-fi did pretty much everything The Twilight Zone failed to, but it was definitely an entire movie too.
That’s not how this meme format works
Well, in theory we could fix all of society. But we are greedy fucks.
Some of us are greedy fucks; we let them make the decisions for some reason.
I’ve always advocated for a system where the people who are qualified but don’t want to lead are the ones who should lead…
I think it’s the majority that are greedy fucks to be honest.
That’s called sortition.
Yes it is!
I didn’t use that word because no one ever knows wtf it means lol
I kinda like the term randomocracy
Ooo I like that.
ooooohh it’s so dangerous and capable ooohhhhh please we need to be regulated ooooooo we’re not releasing it to the public it’s so dangerous ooooooo
No idea what you’re on about. Mythos is a GAME CHANGER. Completely DESTROYS software security. That’s why we’re going to SAVE THE WORLD by letting our corporate sponsors use it.
If they regulate something they don’t have (AGI), they (corps) can steal it from the small shop that creates it 30 years from now. insert head tapping meme here
The only thing dangerous about AI is people believing the hype and thinking it can actually think and do things it can’t do at all. LLMs, Flock cameras, etc. are just MENACE matchbox computers at their core. And it’s dangerous that governments and CEOs are blindly relying on whatever crap they pump out without human supervision.
But! What if a computer could reproduce all the same phenomena as a brain?
Do we have any reason to think this might be the case? Not really. But. We also (maybe) have no reason to think this isn’t the case. What else are we gonna spend trillions gambling on? An ecosystem capable of supporting mammals? Don’t make me laugh!
It’s dangerous to our future, our ecosystem, and our way of life even if the stupid things can’t think.
I agree that the amount of water and electricity these AI centers gobble up is a concern. But I don’t know what you mean by our way of life. Personally I think it’s very useful when judiciously used. It’s dangerous if NASA haphazardly tosses AI generated code into the OS for a rocket going on a moon mission. But to quickly generate a meme or YT thumbnail is harmless.
Turns out when everyone acts rationally, we can actually accomplish shit.
I mean none of those have immediate and massive financial advantages to the people doing them though.
Human cloning could print a workforce. Or an army.
Human fucking is already doing that.
Yes but they get to experience outside and music and get ideas.
Can’t have any of that nonsense.
Human cloning (media) and human cloning (real life) are two completely different processes that don’t have anything to do with each other. Human cloning in real life is a way to spend too much money to get something that is worse than just two people fucking.
While this is totally true and valid, it’s also a question of refinement. A gen-10 human clone is still going to give you a Dolly-type situation; gen 100? Idk, man.
It really isn’t. It’s a problem of people not understanding what cloning is. You seem to be talking about accelerated aging or something, which is a different concept that gets lumped in with cloning in sci-fi because the reality of cloning itself is ultimately pointless.
As opposed to AI?
More or less the same as AI
I’m not sure what you’re saying. AI can be over hyped and misunderstood too. It’s not like there’s a limit to the number of things that can be underwhelming compared to common depictions.
Human cloning could be used to harvest organs and blood. I can imagine a lot of money could be made offering perfectly compatible transplant organs harvested from cloned people with O-negative blood.
Not without solving the issue of shortened telomeres.
https://thisweekinsciencenews.com/blog/2025/10/28/the-truth-about-human-cloning-and-why-it-fails/
and the issue of how to raise a child to carry the organs and then… harvest them all at once? If you take the heart from the clone, what, do you just trash the kidneys and liver?
Firstly, I would be much more swayed by your argument had you not linked some AI slop article sprinkled in em-dashes and containing zero sources/links, much less reputable ones.
Secondly, we can imagine that such technology would be much more mature if it were legal and considered ethically acceptable to perform on humans.
Third, you could grow multiple people simultaneously and in intervals, such that multiple clients’ needs can be met by taking apart one host. We already have existing variable pricing systems, so a less-efficient scenario would simply cost the customer more. To a multi-billionaire, what is a couple billion dollars if you might live 10 more years? To the service provider, what is ‘wasted [insert organs]’ to the tune of a billion dollars or more in profit?
A lot of these weren’t paused entirely because people chose to pause them, though, so much as because they hit a limitation.
Cloning, for example. The bigger issue is that clones don’t live for very long, and that the clone is basically a new human. If we had a science fiction cloning machine that could copy people, you could easily bet that research would be forging ahead.
And then there’s mirror-image (“antichiral”) bacteria, where the entire scientific community will shoot you if you even breathe wrong adjacent to the idea.
As someone who has family that died from mad cow (a prion disease), fuck everything about that. The fact that there are prion-tainted spaces out in the wild is terrifying enough.
What’s that and what do you mean by breathing wrong at the idea? Is someone trying to breed some sort of supervillain bacteria?
Almost every organic molecule has a mirrored counterpart, like a normal screw and a left-handed screw.
Almost none of the mirrored versions occur in nature.
We now have the technology to synthesize them, and to synthesize a bacterium out of them.
But if you do that, and the bacterium escapes, all your existing medicine will be useless, so you’d need to re-synthesize all your antibiotics in the left-handed configuration.
That typically doesn’t happen with regular bacteria experiments, because most of what you can synthesize in the lab will be a descendant of some other well-known bacterium that already has an appropriate medicine to treat it, and in most cases that medicine will be effective against your new strain.
Though wouldn’t that incompatibility go both ways? Current drugs and antibodies wouldn’t work on them, but wouldn’t they also need the mirrored proteins for energy and functioning, so our bodies would be of no use to them?
I’ve been wondering whether bio-compatibility means one doesn’t stand a chance against the other, or if it’s more like separate worlds that can only interact at a high level (like via the senses) but not at a lower level (sharing infections, food, and other biological processes).
In theory yes. In practice no one wants to try it.
Maybe?
Worth risking life as we know it just to find out, for shiggles?
The truth is, there will be somewhere that they outcompete native fauna for resources but can’t be stopped by what controls the natives, and whoops, there goes the ecosystem.
I think it would be important to know in the context of space exploration. Assuming we can solve the other very hard problems standing in the way of a Star Trek future (though I’m not holding my breath, lol), we’d need to know whether we should stay the fuck away from any planets we find with life, or whether we can make contact without potentially dooming both our planet and theirs to returning to the single-celled stage.
But yeah, it is likely a real world pandora’s box.
ooo, that’s a fun concept to think of. yeah, grab your go bag.
The difference between AI and the other three: AI has the potential to save the rich trillions through the firing of the proletariat, whereas the three numbered items were merely a small group of people trying to make money for themselves.
1 and 3 could easily make a boatload of money, and could allow rich people to “live forever” and edit themselves in the process.
Wut. Rich people would be shooting themselves in the foot by firing the proletariat. AI is trash.
The only thing that would save them is a bail out when everything crashes.
So much white-collar work is frankly a bit performative in general, and doing it well versus doing it badly versus not doing it at all is sometimes impossible to tell apart.
Thanks to mismanagement, people are brought in “in case they might be useful,” and a bunch of material is produced that is beyond the ken of the management, who just smile and nod because they have no idea.
Witnessed a group manage to coast on doing effectively nothing for over a year on “we are going to do analytics in the cloud” as executive after executive sagely nodded. A new executive came into the fold, got the same pitch, and said, “OK, fine, but what analytics, with what data sources, and what do you expect to get out of it?” In a rare moment of competence, an executive actually dared to figure something out instead of just smiling over the buzzwords. That same executive was gone within three months, because broadly speaking this was a problem for his peers, who mostly operated by buzzword alignment.
There’s a mountain of internal project document material that must be created but is never used, because of processes where non-technical executives imagine they can review a technical design as long as it isn’t “code,” or that they can fire their coders and replace them with new coders who can reference some “non-code” document to help.
GenAI may be pretty bad, but depressingly it might not matter given how much pretty bad stuff is already out there.
Makes sense! So your theory is leadership will fire themselves and replace themselves with GenAI, keeping the rank-and-file workers?
Nah, the rank-and-file workers will go, and the leadership will happily let GenAI keep doing performative bullshit that doesn’t matter and claim it’s, like, super important.
“An evil man will burn his own nation to the ground to rule over the ashes.” ~ Sun Tzu
“AI Slop” is not mutually exclusive with “AI fascism”. Billionaires are already burning down the planet. Clearly they don’t care about killing humanity on the way.
In addition to what the other reply says, the current state of AI isn’t necessarily the best AI could be. Even with just iterative changes to the LLM-based model, things are improving so fast that it might soon seem safe to shrink the workforce for technical tasks.
But I’m sure I’m not the only one who thinks the LLM-focused approach itself is just a local minimum the industry is stuck optimizing. The alternative wouldn’t be a big-data “throw everything we can at it and hope it spits out useful results” approach, but something more methodological: encode knowledge from experts to give it a head start, plus robust reasoning strategies and logic to let it improve on that starting point as it seeks out and adds relevant data, similar to how we do science and engineering.
I believe it’s a race between an AI that can truly outcompete us and societal collapse, because the real reason AI is harder to stop than those other three is how easy it is to hide development. The massive data centers are only required to scale the current approach up for the whole world to use. AI research and development can be done on home PCs, especially if you’re more interested in results than speed (in which case you aren’t limited by cores or memory, just by storage and time).
Firing and rehiring at a lower wage. That is, if they’re motivated to continue producing functional products. It’s clear that at this point many aren’t. So maybe this point is moot.
Sure they did. /s
Didn’t we only stop rDNA experiments on humans? Like, nobody is gonna make a “designer baby,” but don’t we still do shit like give flies extra wings?
Yeah, CRISPR came along so now that’s what everyone is into.
Looks like they’re starting it up again
Heed my warning, science: if we start making flying purple babies, we will bring upon us the wrath of the one-eyed, one-horned, flying purple people eater.
China has been doing designer babies (to some extent) for over a decade now.
And we can choose to use AI only in certain domains as well.
By using IVF (in vitro fertilization), you can pick traits for your baby.
That’s incorrect. What is allowed is DNA screening; there is no editing of DNA.