• Thor_Whale@lemmus.org · ↑1 · 5 minutes ago

    Personally I don’t hate it, but I just think there’s really no urgent need for it. If they’re using it to take jobs away from people, what is everyone going to do for work? Do the billionaires think there’s going to be a gigantic human die-off, and they’re going to be an elite class of 100,000 people served by intelligent robots? If that’s their angle, good luck.

  • Auli@lemmy.ca · ↑2 ↓1 · 26 minutes ago

    Sure everyone hates it but they’re all using it still.

  • wizblizz@lemmy.world · ↑55 ↓6 · 4 hours ago

    The fuck are all these comments? AI is shit, fuck AI. It fuels billionaires, destroys the environment, kills critical thinking, confidently tells you to off yourself, praises Hitler, advocates for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing, stop using it.

    • mechoman444@lemmy.world · ↑4 ↓6 · 35 minutes ago

      The fuck are all these comments? The internet is shit, fuck the internet. It fuels billionaires, destroys the environment, kills critical thinking, confidently tells you to off yourself, praises Hitler, advocates for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing it. Stop using it.

      • criscodisco@lemmy.world · ↑16 ↓3 · 1 hour ago

        LLMs are shit, fuck LLMs. They fuel billionaires, destroy the environment, kill critical thinking, confidently tell you to off yourself, praise Hitler, advocate for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing, stop using it.

        And AI is a pipe dream no one is close to fulfilling, won’t be realized by feeding LLMs all of the data in existence, and billionaires are destroying our economy in their pursuit of it.

      • mechoman444@lemmy.world · ↑3 ↓6 · edited 34 minutes ago

        Change this out for any other technology that’s been innovated throughout human history: the printing press, semiconductors, the internet.

        The anti-AI rhetoric on this platform is becoming nonsensical.

        At this point it’s just bandwagon hate. These people don’t even understand the difference between LLMs and AI, or the various applications they have.

    • TractorDuffy@lemmy.world · ↑4 ↓12 · 2 hours ago

      It’s the same as any other commercial tool. As long as it’s profitable the owner will continue to sell it, and users who are willing to pay will buy it. You use commercial tools every day that are harmful to the environment. Do you drive? Heat your home? Buy meat, dairy or animal products?

      I honestly don’t know where this hatred for AI comes from; it feels like a trend that people jump onto because they want to be included in something.

      AI is used in radiology to identify birth defects, cancer signs and heart problems. You’re acting like its only use-case is artwork, which isn’t true. You’re welcome to your opinion but you’re welcome to consider other perspectives as well. Ciao!

      • ClamDrinker@lemmy.world · ↑3 ↓4 · 46 minutes ago

        It’s in part because people aren’t open to contradictions in their world view. In part you can’t blame them for that since everyone has their own valid perspective. But staying willingly ignorant of positives and gray areas is a valid criticism. And sadly there are plenty of influencers peddling a black-white mindset on AI, ignoring all other uses. Not saying intentionally or not, again perspective. I’m sure online content creation has to contend with a lot more AI content compared to the norm. But only on the internet do I encounter rabidly anti AI people, in real life basically nobody cares. Some use it, some don’t, most do so responsibly as a tool. And I work in the creative industry…

        • CreativeShotgun@lemmy.world · ↑4 · 36 minutes ago

          “I’ve never seen it, so it must not exist.”

          I work in a creative industry too and it is the bane of not only my group but every other company I’ve spoken to. Every artist and musician I know hates it too.

          • ClamDrinker@lemmy.world · ↑2 · edited 29 minutes ago

            I never said it doesn’t exist. I’m sorry if people in your area are being negatively affected. But the point still stands: my experience is just as valid.

    • But_my_mom_says_im_cool@lemmy.world · ↑4 ↓11 · 2 hours ago

      Which ai and for which use? It’s a tool. It’s like getting mad cause a guy invented a hammer. It’s not the tool hurting you dude, it’s the people wielding it.

      • starelfsc2@sh.itjust.works · ↑8 ↓1 · 1 hour ago

        If that hammer also had massive environmental impacts and hammers were pushed into every aspect of your life while also stealing massive amounts of copyrighted data, sure. It’s very useful for problems that can be easily verified, but the only reason it’s good at those is from the massive amount of stolen data.

  • vane@lemmy.world · ↑12 ↓1 · 3 hours ago

    That was December 2024:

    The consulting firm McKinsey & Company has agreed to pay $650 million to settle a federal investigation into its work helping the opioid manufacturer Purdue Pharma boost sales of the highly addictive drug OxyContin, according to court papers filed in Virginia on Friday.

    Drug dealer must sell drugs.

    • TractorDuffy@lemmy.world · ↑5 ↓1 · 2 hours ago

      Obama was there, he awarded the medal of honor, my parents were proud of me, the AI chump was instantly killed

    • brucethemoose@lemmy.world · ↑22 ↓1 · edited 4 hours ago

      The research/tinkerer community overwhelmingly agrees. They were making fun of Tech Bros before chatbots blew up.

  • RampantParanoia2365@lemmy.world · ↑2 ↓7 · 59 minutes ago

    Nope. I’ve been using it for preliminary editing of my writing. It’s not creating anything, just giving advice on how to make it clearer.

        • Rekorse@sh.itjust.works · ↑4 ↓5 · 50 minutes ago

          If your analysis of whether AI is good or bad is simply “I use it and I like it”, then you are a child, or an adult with the mental capabilities of a child.

          I’m happy life is so simple for you though!

          • RampantParanoia2365@lemmy.world · ↑2 ↓4 · 31 minutes ago

            I am using a tool that’s available to me. I’m not going to not use it just because some people use it wrong. What the fuck?

            And if it wasn’t, I never would have discovered a new skill I had no clue I could do before, so I’m going to insist that you lick my nutsack. How’s that for maturity?

  • foliumcreations@lemmy.world · ↑17 ↓2 · 4 hours ago

    I have made the conscious decision to try not to refer to it as AI, but as predictive LLMs or generative mimic models, to better reflect what they are. If we all manage to change our vernacular, perhaps we can make them slightly less attractive to use for everything. Some might even feel less inclined to brag about using them for all their work.

    Other options might be unethical guessing machines, deceptive echo models, or the classic from Wh40k Abominable Intelligence.

    • wonderingwanderer@sopuli.xyz · ↑5 ↓1 · 3 hours ago

      I mostly agree. Machine Learning is AI, and LLMs are trained with a specific form of Machine Learning. It would be more accurate to say LLMs are created with AI, but themselves are just a static predictive model.

      And people also need to realize that “AI” doesn’t mean sentient or conscious. It’s just a really complex computer algorithm. Even AGI won’t be sentient; it would only mimic sentience.

      And LLMs will never evolve into AGI, any more than the Broca’s and Wernicke’s areas can be adapted to replace the prefrontal cortex, the cingulate gyrus, or the vagus nerve.

      Tangent on the nature of consciousness:

      The nature of consciousness is philosophically contentious, but science doesn’t really have any answers there either. The “Best Guess™” is that consciousness is an emergent property of neural activity, but unfortunately that leads to the delusion that “If we can just program enough bits into an algorithm, it will become conscious.” And venture capitalists are milking that assumption for all it’s worth.

      The human brain isn’t merely electrical though, it’s electrochemical. It’s pretty foolish to write off the entire chemical aspect of the brain’s physiology and just assume that the electrical impulses are all that matter. The fact is, we don’t know what’s responsible for the property of consciousness. We don’t even know why humans are conscious rather than just being mindless automatons encased in meat.

      Yes, the brain can detect light and color, temperature and pressure, pleasure and pain, proprioception, sound vibrations, aromatic volatile gasses and particles, chemical signals perceived as tastes, other chemical signals perceived as emotions, etc… But why do we perceive what the brain detects? Why is there even an us to perceive it? That’s unanswerable.

      Furthermore, where are “we” even located? In the brainstem? The frontal cortex? The corpus callosum? The amygdala or hippocampus? The pineal or pituitary gland? The occipital, parietal, or temporal lobe? Are “we” distributed throughout the whole system? If so, does that include the spinal cord and peripheral nervous system?

      Where is the center of the “self” responsible for the perception of “selfhood” and “self-awareness”?

      Until science can answer that, there is no path to artificial sentience, and the closest approximation we have to an explanation for our own sentience is simply Cogito Ergo Sum: I only know that I am sentient because, if I wasn’t, I wouldn’t be able to question my own sentience and be aware of the fact that I am questioning it.

      Why digital circuits will never be conscious:

      The human brain has about 86 billion neurons. The average commercial API-based LLM already has about 150 billion parameters, and at FP32 precision that’s 4 bytes per parameter. If all it took were a complex enough system of digits, it would have already worked.

      It’s just as likely that consciousness doesn’t emerge from electrochemical interactions, but is an inherent property of them. If every electron was conscious of its whirring around, we wouldn’t know the difference. Perhaps when enough of them are concerted together in a common effort, their simple form of consciousness “pools together” to form a more complex, unitary consciousness just like drops of water in a bucket form one pool of water. But that’s just pure speculation. And so is emergent consciousness theory.

      • Knock_Knock_Lemmy_In@lemmy.world · ↑1 · edited 3 hours ago

        It’s just a really complex computer algorithm

        Not particularly complex. An LLM is:

        $P_\theta(x) = \prod_t \mathrm{softmax}(f_\theta(x_{<t}))_{x_t}$

        where $f_\theta$ is a deep Transformer trained by maximum likelihood, and the softmax output is indexed at the realized token $x_t$.
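
That factorization can be written out as a minimal sketch in pure Python. The names here are illustrative only, and the constant-logit `toy_logits` is a made-up stand-in for $f_\theta$, which in a real model would compute logits from the prefix:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sequence_log_prob(tokens, logits_fn):
    # log P(x) = sum over t of log softmax(f(x_<t))[x_t],
    # i.e. the log of the product in the formula above.
    log_p = 0.0
    for t in range(len(tokens)):
        probs = softmax(logits_fn(tokens[:t]))
        log_p += math.log(probs[tokens[t]])
    return log_p

def toy_logits(prefix):
    # Toy stand-in for f_theta: fixed logits over a 3-token vocabulary,
    # ignoring the prefix entirely.
    return [0.0, 1.0, -1.0]
```

With the toy model every step assigns the same distribution, so a length-n sequence scores n times one step's log-probability; the entire "complexity" question is about how a real $f_\theta$ computes those logits from the prefix.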

        • wonderingwanderer@sopuli.xyz · ↑1 · 2 hours ago

          That “deep Transformer trained by maximum likelihood” is the complex part.

          Billions of parameters arranged in tensors across dozens of layers, each layer split into a wide hidden dimension and multiple attention heads. Every parameter’s weight is algorithmically adjusted during training. For every query, matrix multiplications over many vectors approximate the relevance between tokens, with possibly tens of thousands of tokens held in cache at a time, each analyzed relative to every other.

          And in standard FP32, each parameter requires four bytes of memory; even 8-bit quantization still requires one byte per parameter. That’s 12-24 GB of RAM just to hold the weights of a model considered small.

          Deep Transformers are not simple systems; if they were, it wouldn’t take such an enormous amount of resources to train them.
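
The memory arithmetic in that comment can be sketched as a rough back-of-the-envelope estimate (the helper name is made up for illustration, and this counts weight storage only):

```python
def weights_gib(n_params, bytes_per_param):
    # Weight storage only. Activations, the KV cache, and (for training)
    # optimizer state all add substantially more on top of this.
    return n_params * bytes_per_param / 1024**3

# A 150B-parameter model at FP32 (4 bytes/param) vs 8-bit (1 byte/param):
fp32_gib = weights_gib(150e9, 4)  # roughly 559 GiB of weights
int8_gib = weights_gib(150e9, 1)  # roughly 140 GiB
# A 13B model with 8-bit quantization lands at the low end of the
# "12-24 GB for a small model" figure quoted above:
small_gib = weights_gib(13e9, 1)  # roughly 12 GiB
```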

  • Damaskox@lemmy.world · ↑13 ↓11 · 3 hours ago

    AI has its uses.
    You can critically think where to use it, and when. E.g. installing an open source AI on your computer that works offline.
    I have used it to get pictures for my story characters.

    • Windex007@lemmy.world · ↑6 · 1 hour ago

      The pictures you got for your story are a result of plagiarism.

      I’m not actually making a value judgement. If they make you happy, that’s great and I’m happy for you.

      Kinda like how diamond engagement rings make people happy. You gotta accept how that sausage was made.

    • Echo Dot@feddit.uk · ↑6 ↓1 · 2 hours ago
      2 hours ago

      The problem is that even the open source models (and they’re not fully open source anyway) aren’t exactly trained ethically. They’re still trained on stolen data, they are still consuming gargantuan quantities of water and using the same amount of power as a small city.

      They are in no way harmless. The only thing that’s better about them is that they don’t steal your private data. The environmental issues are still there though.

    • But_my_mom_says_im_cool@lemmy.world · ↑9 ↓1 · 2 hours ago

      It’s a shiny new tool, so humans want to use it on everything. Hopefully we find a balance. There’s a lot AI can do that a human literally cannot, background work and such, and I’ve seen it implemented in great ways. The problem isn’t the tool, it’s the corporate dirtbags who want to maximize profits out of it.

      • MIDItheKID@lemmy.world · ↑5 · 2 hours ago

        They haven’t even figured out how to make a profit off of it, let alone maximize it. I really don’t understand the current approach. “Let’s slam this unfathomably expensive technology into everything and not turn a profit. Oh, also, it’s destroying the environment and making electronics and power really expensive.”

        • But_my_mom_says_im_cool@lemmy.world · ↑4 ↓1 · 2 hours ago

          People need to explain, in clear detail, the environmental impact of AI and how it causes that impact. You’re telling people that making a funny photo is fucking up the environment; you have to be more clear. I’m not a dummy, but the idea that AI is damaging the actual environment is vague and nebulous. It’s never been explained or shown to people at large how it’s actually happening, so it’s not real to them.

    • midribbon_action@lemmy.blahaj.zone · ↑6 ↓5 · 2 hours ago

      What do those pictures add to the story? Do you believe that because you’re privileged enough to run it locally, that makes it ethical to use for you specifically, but not others that rely on datacenters?

      • TractorDuffy@lemmy.world · ↑4 ↓2 · edited 2 hours ago

        It’s the same as any other commercial tool. As long as it’s profitable the owner will continue to sell it, and users who are willing to pay will buy it. You use commercial tools every day that are harmful to the environment. Do you drive? Heat your home? Buy meat, dairy or animal products?

        I honestly don’t know where this hatred for AI comes from; it feels like a trend that people jump onto because they want to be included in something.

        AI is used in radiology to identify birth defects, cancer signs and heart problems. You’re acting like its only use-case is artwork, which isn’t true. You’re welcome to your opinion but you’re welcome to consider other perspectives as well. Ciao!

        • midribbon_action@lemmy.blahaj.zone · ↑3 ↓1 · 2 hours ago

          I mean, machine learning is great. I was replying to someone talking about using generative ai to make photos, not a doctor or a data scientist, to my knowledge.

          • midribbon_action@lemmy.blahaj.zone · ↑3 · 2 hours ago

            I kinda love that response though, I’m going to save it. You can tell it’s not even from a chatbot, it’s just kinda too unhinged. Like “oh, you’re curious how I view the ethics of my actions? You must hate cancer survivors.” I think AI bros do like to think of themselves as pushing the bounds of science; they don’t see themselves as consumers.

            • TractorDuffy@lemmy.world · ↑1 ↓1 · edited 1 hour ago

              Our paper bills are money. “Truth” or not? If we as a society stop adhering to that system, the paper stops being money. The “truth” of money being paper has changed.

              I think you’re assuming that your philosophical view is objectively correct, and it’s not. Sorry. The world doesn’t revolve around you and you can’t always be right. Hope that makes sense.

              Love and blessings be upon you :))))))))

            • TractorDuffy@lemmy.world · ↑2 ↓2 · 2 hours ago

              Found someone who’s more desperate to fit in than educate themselves about what “AI” means. Welcome to the club buddy, you made it! Congrats!

  • BillyClark@piefed.social · ↑141 ↓7 · 10 hours ago

    I don’t hate AI. That’s pointless. I hate the people who use AI to ruin everything, which is the majority of AI users today.

      • thespcicifcocean@lemmy.world · ↑8 ↓1 · 4 hours ago

        I had an idea for a sci-fi story I wanted to write, where a person’s consciousness is uploaded into a computer. Now I can’t even write it without feeling gross, because LLMs ruined everything for me, even AI sci-fi.

        • edible_funk@lemmy.world · ↑3 · 55 minutes ago

          In fairness, uploading consciousness into a computer is a pretty old sci-fi trope, so it would have been derivative even before the AI bubble. That being said, tropes are tropes for a reason; don’t let shitty real-life “AI” stop you from writing that story. Just don’t use the term artificial intelligence and you’re good.

    • porous_grey_matter@lemmy.ml · ↑58 · 9 hours ago

      I think you’re being too literal, they mean they hate having to use it or they hate being constantly exposed to its shitty output. Obviously pretty much nobody hates, like, Markov chains.

    • MalReynolds@slrpnk.net · ↑16 · edited 8 hours ago

      I reserve the hate (well, severe disdain and contempt; hate is personal in my book, and I haven’t needed it for quite a while) for the C-suites and owners. Users get contempt if they’re using it to think for them, and a pass with some sympathy if they’ve found a way to use it as a tool while retaining executive function. LLMs and broader machine learning are fine, just a tool. You can use a wrench constructively or give someone a concussion; that’s on you.

      SamA is the exception, hate that market cornering fucker (and yes it’s personal, I was going to go AM5 this year).

  • lauha@lemmy.world · ↑70 ↓1 · 9 hours ago

    Do I love my 4-year-old? Yes

    Would I let my precocious 4-year-old full of imagination write my business report? Fuck no. Are you stupid or what?

      • WhatAmLemmy@lemmy.world · ↑8 · 4 hours ago

        If you’ve ever worked with consultants or managers in general, like 50-75% of them are fucking stupid. Just because they can convince other idiots that they’re not, doesn’t mean they aren’t. I’ve watched the blind lead the blind into financial ruin, while getting paid big bucks to do it.

        • StinkyFingerItchyBum@lemmy.ca · ↑2 · 32 minutes ago

          As a former consultant and manager, I wholeheartedly agree, but your % is too low. The culture of consulting is poison and makes monsters out of people.

        • nlgranger@lemmy.world · ↑3 · 3 hours ago

          I don’t believe the people who contract them are being duped, though. They do it to delegate and dilute the chain of responsibility until their decisions become acceptable.