• hardcoreufo@lemmy.world · ↑11 · 6 hours ago

    I find AI very frustrating. I had a script I wanted to turn into a systemd service which I’ve never done. I searched the web, didn’t find quite what I wanted so I asked AI. It gave a great answer to exactly my question and explained what every field was doing. It got me there faster than searching and browsing forums would have.
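    For reference, a minimal unit file for running a script as a service looks roughly like this (the script path and service name below are placeholder assumptions, not details from the comment):

```ini
# /etc/systemd/system/myscript.service  (hypothetical name and path)
[Unit]
Description=Run my script at boot
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myscript.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

    Enable it with `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now myscript.service`.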

    So great, I also wanted to set up a watchdog on the Pi to reboot it if it hangs. The AI told me to get the watchdog package from apt, then edit a systemd conf file. An hour later, with nothing working right, I gave up and found a tutorial in about 30 seconds of web browsing that made it clear the AI was mixing up instructions from two different methods.
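    For what it's worth, the mix-up is easy to make: there are two separate methods that often get blended together. One is systemd's built-in hardware watchdog, configured in /etc/systemd/system.conf with no extra package; the other is the userspace `watchdog` daemon from apt, configured in /etc/watchdog.conf. A sketch of each (the values are illustrative assumptions):

```ini
# Method 1: systemd's built-in hardware watchdog (no apt package needed)
# goes in /etc/systemd/system.conf
[Manager]
RuntimeWatchdogSec=15

# Method 2: the userspace watchdog daemon (sudo apt install watchdog)
# goes in /etc/watchdog.conf
watchdog-device = /dev/watchdog
max-load-1 = 24
```

    Mixing directives from one method into the other's config file gets you nothing that works. On a Raspberry Pi you may also need the hardware watchdog device enabled first (`dtparam=watchdog=on` in /boot/config.txt on some setups) before either method does anything.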

    So it saved me 5 minutes on one thing and cost me an hour on another. I feel like the internet and search engines of 10 years ago were much better than what we have now.

    • marxismtomorrow@lemmy.today · ↑7 · 6 hours ago

      That touches on the heart of it; search engines have been so enshittified that AI is better by default, because it occasionally surfaces information from its training data that isn’t easily found through normal searching.

      (Some) AI has its place; GAN-style AI, for instance, is amazing at finding subtle indicators of patterns that can be extrapolated to new data. But god, it’s just so bad at 99% of the applications it has ever been used for, including the entire concept of LLMs, which are such an inherently flawed technology that they’ll never be passably useful for anyone who isn’t a greedy, shortsighted CEO wanting to replace workers as soon as possible.

    • Skullgrid@lemmy.world · ↑1 · 5 hours ago

      Here’s how I do it:

      Word it as best as I can. If the AI gives a specific and likely answer, double-check the documentation, Stack Overflow, or its listed sources.

      It sucks that a lot of the stuff I’m searching for comes from the same three fucking AI-generated things from 2024 onwards.

  • Duamerthrax@lemmy.world · ↑14 ↓1 · 9 hours ago

    Recombinant DNA promised better organ transplants, but it made Christians uncomfortable, so Bush II banned it.

  • HugeNerd@lemmy.ca · ↑4 ↓1 · 8 hours ago

    How is “human cloning” a) a real technology b) a bigger danger than the 8 billion fucking morons already here c) different from twins and triplets?

    • vaultdweller013@sh.itjust.works · ↑6 · 7 hours ago

      We can clone a sheep and even nearly bring species back from extinction via cloning, which is vastly more advanced than just cloning a person. As for the other factors, it’s mostly a matter of ethics, what with the potential for cloning celebrities for stupid reasons or making a sapient clone just to harvest their organs. Which, as an aside, wasn’t that a Sliders episode?

      • T156@lemmy.world · ↑2 · 1 hour ago

        “making a sapient clone just to harvest their organs”

        A clone just makes a genetically identical baby, though, and they are shorter-lived. Dolly only lived half as long as the sheep she was a clone of, before she died of old age.

        Unless you wanted to wait 15–20 years for organs that might, on average, last 15, cloning isn’t practical.

        • vaultdweller013@sh.itjust.works · ↑1 · 57 minutes ago

          I’m assuming we can solve the telomere issue for this. Frankly, though, it seems to be a stalled-out field, at least until we can figure out how to better use stem cells.

          But yeah, if you are in your 20s or even 50s, making a clone baby of yourself and waiting 20 years would be technically viable as a way to get a new set of organs. Which is more what I’m referring to, especially since creating a healthy body you can rip apart would basically require letting it live a relatively healthy life.

  • Tavi@lemmy.blahaj.zone · ↑9 ↓1 · 12 hours ago

    ooooohh it’s so dangerous and capable ooohhhhh please we need to be regulated ooooooo we’re not releasing it to the public it’s so dangerous ooooooo

    • SirIglooi@sh.itjust.works · ↑13 · 10 hours ago

      No idea what you’re on about. Mythos is a GAME CHANGER. Completely DESTROYS software security. That’s why we’re going to SAVE THE WORLD by letting our corporate sponsors use it.

    • angband@lemmy.world · ↑1 · 8 hours ago

      If they regulate something they don’t have (AGI), they (the corps) can steal it from the small shop that creates it 30 years from now. Insert head-tapping meme here.

  • lastlybutfirstly@lemmy.world · ↑10 ↓1 · 13 hours ago (edited)

    The only thing dangerous about AI is people believing the hype and thinking it can actually think and do things it can’t do at all. LLMs, Flock cameras, etc. are just MENACE matchbox computers at their core. And it’s dangerous that governments and CEOs are just blindly relying on whatever crap they pump out without human supervision.

    • AppleTea@lemmy.zip · ↑7 · 12 hours ago (edited)

      But! What if a computer could reproduce all the same phenomena as a brain?

      Do we have any reason to think this might be the case? Not really. But. We also (maybe) have no reason to think this isn’t the case. What else are we gonna spend trillions gambling on? An ecosystem capable of supporting mammals? Don’t make me laugh!

      • lastlybutfirstly@lemmy.world · ↑1 · 9 hours ago

        I agree that the amount of water and electricity these AI centers gobble up is a concern. But I don’t know what you mean by our way of life. Personally I think it’s very useful when judiciously used. It’s dangerous if NASA haphazardly tosses AI generated code into the OS for a rocket going on a moon mission. But to quickly generate a meme or YT thumbnail is harmless.

  • gmtom@lemmy.world · ↑8 · 14 hours ago

    I mean none of those have immediate and massive financial advantages to the people doing them though.

      • Nalivai@lemmy.world · ↑4 · 12 hours ago

        Human cloning (media) and human cloning (real life) are two completely different processes that don’t have anything to do with each other. Human cloning in real life is a way to spend too much money to get something that is worse than just two people fucking.

        • Alwaysnownevernotme@lemmy.world · ↑1 · 12 hours ago

          While this is totally true and valid, it’s also a question of refinement. Human clone gen 10 is still going to give you a Dolly-type situation; gen 100? Idk man.

          • Havoc8154@mander.xyz · ↑3 ↓1 · 12 hours ago

            It really isn’t. It’s a problem of people not understanding what cloning is. You seem to be talking about accelerated aging or something, which is a different concept that gets lumped into cloning in sci-fi because the reality of cloning itself is ultimately pointless.

      • TheOakTree@lemmy.zip · ↑4 ↓2 · 14 hours ago

        Human cloning could be used to harvest organs and blood. I can imagine a lot of money could be made offering perfectly compatible transplant organs harvested from cloned people with O-negative blood.

          • TheOakTree@lemmy.zip · ↑1 · 8 hours ago

            Firstly, I would be much more swayed by your argument had you not linked some AI-slop article sprinkled with em-dashes and containing zero sources/links, much less reputable ones.

            Secondly, we can imagine that such technology would be much more mature if it were legal and considered ethically acceptable to perform on humans.

            Third, you could grow multiple people simultaneously and in intervals, such that multiple clients’ needs can be met by taking apart one host. We already have existing variable pricing systems, so a less-efficient scenario would simply cost the customer more. To a multi-billionaire, what is a couple billion dollars if you might live 10 more years? To the service provider, what is ‘wasted [insert organs]’ to the tune of a billion dollars or more in profit?

  • T156@lemmy.world · ↑4 · 13 hours ago

    A lot of these weren’t paused entirely because people chose to pause them, so much as because they hit a limitation.

    Cloning, for example. The bigger issue is that clones don’t live very long, and the clone is basically a new human anyway. If we had a science-fiction cloning machine that could copy a person outright, you could easily bet that research would be forging ahead.

  • Baggie@lemmy.zip · ↑34 · 20 hours ago

    And then there’s antichiral bacteria, where the entire scientific community will shoot you if you even breathe wrong adjacent to the idea.

    • dejected_warp_core@lemmy.world · ↑14 · 19 hours ago

      As someone who has family that died from mad cow (prion disease), fuck everything about that. The fact that there are prion-tainted spaces out in the wild is terrifying enough.

    • cornshark@lemmy.world · ↑4 · 18 hours ago

      What’s that and what do you mean by breathing wrong at the idea? Is someone trying to breed some sort of supervillain bacteria?

      • pelya@lemmy.world · ↑9 · 15 hours ago

        Almost every organic molecule has a mirrored counterpart, like a normal screw and a left-handed screw.

        Almost none of them occur in nature.

        We now have the technology to synthesize them, and to synthesize bacteria out of them.

        But if you do that and the bacteria escape, all your existing medicine will be useless, so you would need to re-synthesize all your antibiotics in the left-handed configuration.

        That typically isn’t a problem with regular bacteria experiments, because most of what you can synthesize in the lab will be a descendant of some other well-known bacterium, which already has an appropriate medicine to treat it, one that in most cases will also be effective against your new strain.

        • Buddahriffic@lemmy.world · ↑4 · 13 hours ago

          Though wouldn’t that incompatibility go both ways? Our current drugs and antibodies wouldn’t work on them, but wouldn’t they also rely on mirrored proteins for energy and functioning, meaning our bodies would be of no use to them?

          I’ve been wondering whether bio-incompatibility would mean one doesn’t have a chance against the other, or whether it’s more like separate worlds that can only interact at a high level (like via the senses) but not at a lower level (sharing infections, food, and other biological processes).

          • Fluke@feddit.uk · ↑4 · 13 hours ago (edited)

            Maybe?

            Worth risking life as we know it just to find out, for shiggles?

            The truth is, there will be somewhere they outcompete native organisms for resources but can’t be stopped by whatever keeps the natives in check, and whoops, there goes the ecosystem.

            • Buddahriffic@lemmy.world · ↑2 · 13 hours ago

              I think it would be important to know in the context of space exploration. Assuming we can solve the other very hard problems standing in the way of a Star Trek future (though I’m not holding my breath, lol), we’d need to know whether we should stay the fuck away from any planets we find with life, or whether we can make contact without potentially dooming both our planet and theirs to returning to the single-celled stage.

              But yeah, it is likely a real world pandora’s box.

  • Sunflier@lemmy.world · ↑38 ↓1 · 8 hours ago (edited)

    The difference between AI and the other three: AI has the potential to save all the rich people trillions through the firing of the proletariat, whereas the three numbered items were merely small groups of people trying to make money for themselves.

    • KairuByte@lemmy.dbzer0.com · ↑2 · 15 hours ago

      1 and 3 could easily make a boatload of money, and could allow rich people to “live forever” and edit themselves in the process.

    • quick_snail@feddit.nl · ↑7 ↓1 · 20 hours ago

      Wut. Rich people will shoot themselves in the foot by firing the proletariat. AI is trash.

      The only thing that would save them is a bail out when everything crashes.

      • jj4211@lemmy.world · ↑11 · 19 hours ago

        So much white-collar work is frankly a bit performative, and whether it’s done well, done badly, or not done at all is sometimes impossible to tell.

        Thanks to mismanagement, people are brought in “in case they might be useful”, and a bunch of material gets produced that is beyond the ken of the management, who just smile and nod because they have no idea.

        Witnessed a group manage to coast on doing effectively nothing for over a year on “we are going to do analytics in the cloud” as executive after executive sagely nodded. New executive came into the fold and got the same pitch and said “ok, fine, but what analytics, with what data sources, what do you expect to get out of it?” In a rare moment of competence an executive actually dared to figure out something instead of just smiling over the buzzwords. That same executive was gone within 3 months, because broadly speaking this was a problem for his peers that mostly operated by buzzword alignment.

        There’s a mountain of internal project document material that must be created, but is never used, because of processes where non-technical executives imagine they can review a technical design as long as it isn’t “code”, or that they can fire their coders and replace with new coders if they can reference some ‘non-code’ document to help.

        GenAI may be pretty bad, but depressingly it might not matter given how much pretty bad stuff is already out there.

        • cornshark@lemmy.world · ↑3 · 18 hours ago

          Makes sense! So your theory is leadership will fire themselves and replace themselves with genai, keeping the rank and file workers?

          • jj4211@lemmy.world · ↑2 · 17 hours ago

            Nah, the rank-and-file workers will go, and the leadership will happily let genAI keep doing performative bullshit that doesn’t matter and claim it’s, like, super important.

      • Canaconda@lemmy.ca · ↑5 · 18 hours ago (edited)

        “An evil man will burn his own nation to the ground to rule over the ashes.” ~ Sun Tzu

        “AI Slop” is not mutually exclusive with “AI fascism”. Billionaires are already burning down the planet. Clearly they don’t care about killing humanity on the way.

      • Buddahriffic@lemmy.world · ↑1 · 13 hours ago

        In addition to what the other reply says, the current state of AI isn’t necessarily the best AI could be. Even with only iterative changes to the LLM-based model, things are improving fast enough that it might soon be safe to shrink the workforce for technical tasks.

        But I’m sure I’m not the only one who thinks the LLM-focused approach is just a local minimum the industry is stuck optimizing. A better approach wouldn’t be the big-data “throw everything we can at it and hope it spits out useful results” style, but something more methodological: encode our knowledge from experts to give the system a head start, plus robust reasoning strategies and logic so it can improve on that starting point as it seeks out and adds relevant data, much the way we do science and engineering.

        I believe it’s a race between an AI that can truly outcompete us and societal collapse, because the real reason AI is harder to stop than those other three is how easy it is to hide development. The massive data centers are only required to scale the current approach up for the whole world to use; AI research and development can be done on home PCs, especially if you care more about results than speed (in which case you aren’t limited by cores or memory, just by storage and time).

    • Kommeavsted@lemmy.dbzer0.com · ↑2 · 17 hours ago

      Firing and rehiring at a lower wage. That is, if they’re motivated to keep producing functional products. It’s clear that at this point many aren’t, so maybe this point is moot.