Money quote:

Excel requires some skill to use (to the point where high-level Excel is a competitive sport), and AI is mostly an exercise in deskilling its users and humanity at large.

  • Echo Dot@feddit.uk · 5 hours ago

    In favour of AI, absolutely; against it, no, I can’t. What group would want to devalue AI? After all, most of the big tech companies are developing their own. They would want people to use AI; that’s the only way they make a profit.

    You keep providing these vague justifications for your belief, but you never actually provide a concrete answer.

    Which groups in particular do you think are paying people to astroturf with negative AI comments? Which actual organisations, which companies? Do you have evidence for this beyond “lots of people on a technically inclined forum don’t like it”? Because that seems to be a fairly self-selecting set. You are seeing patterns in the clouds and insisting that they are meaningful.

    • Melvin_Ferd@lemmy.world · 3 minutes ago

      You call it “patterns in the clouds,” but that’s exactly how coordinated media campaigns are meant to look: organic, coincidental, invisible unless you recognize the fingerprints. Spotting those fingerprints isn’t tinfoil-hat stuff; it’s basic media literacy.

      And let’s be real: plenty of groups have motives to discourage everyday people from embracing AI.

      Political think tanks and content farms (Heritage Foundation, Koch networks…) already pay for astroturfing campaigns and troll farms. They do it on issues like immigration, climate, and COVID. Why would AI magically be exempt?

      Reputation management and PR firms (Bent Pixels, marketing shops, crisis comms firms) literally get paid to scrub and reshape narratives online. Their business model depends on you not having the same tools cheaply or for free.

      Established media and gatekeepers survive on controlling distribution pipelines. The more people use AI to generate, remix, and distribute their own content, the less leverage those outlets have.

      Now why does this matter with AI in particular? Because AI isn’t just another app; it’s a force multiplier for individuals.

      A single parent can spin up an online store, write copy, generate images, and market it without hiring an agency.

      A student can build an interactive study tool in a weekend that used to take a funded research lab.

      An activist group can draft policy briefs, make explainer videos, and coordinate messaging with almost no budget.

      These kinds of tools only get created if ordinary people are experimenting, collaborating, and embracing AI. That’s what the “don’t trust AI” narrative is designed to discourage. If you keep people from touching it, you keep them dependent on the existing gatekeepers.

      So flip your own question: who pays for these narratives? The same people who already fund copy-paste headline campaigns like “illegals are taking our jobs and assaulting Americans.” It’s the same yellow-journalism playbook, just aimed at a new target.

      Dismissing this as “cloud patterns” is the exact mindset they hope you have. Because once you acknowledge how coordinated media framing works, you start to see that of course there are groups with the motive and budget to poison the well on AI.