• HalfAFrisbee@lemmy.world
    14 hours ago

    You are fucking insane. By your logic any customer of a company that might one day build a weapon is complicit. That is asinine.

    • Unattributed 𓂃✍︎@feddit.online
      8 hours ago

      That’s not the argument at all. The argument is that there have been warning signs, big flashing warning signs, about the dangers of using AI for years now. Most technology, in general, doesn’t come with anywhere near as many warnings.

      And it’s been a known fact that people using AI are also training the AI. That’s an active choice that people who signed up for accounts are making.

      So yes, users of this technology are taking an active role in training it, which makes them complicit.

      That is a far cry from data brokers going out and harvesting public records, or companies tracking your spending habits and feeding that into a database. If those companies then turned around and made a weapon, no, I wouldn’t point the finger at the people whose information got scraped. OTOH, if you continue to use a platform that you know is using you to gather information (e.g., Facebook, Reddit, Twitter) and let them do it, then yeah…you have some level of complicity.

    • Rekorse@sh.itjust.works
      14 hours ago

      Yeah, we live and learn. We don’t expect perfection; we expect self-improvement. It’s important not to excuse bad decisions/behavior. Be more skeptical of new technology in the future and pay attention to who’s creating/selling it.