For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as violative child sexual abuse materials (CSAM) in the US.

  • recentSlinky@lemmy.ca · 18 days ago

    I agree with you, but since those CEOs have been all over the media saying how reliable their LLMs are, by their own logic that makes the statements their AIs make against them valid.

    If anything, it might make them stop or slow down their push if their AIs keep incriminating them, which would be a win-win. Although I doubt it, since little crybabies never learn or know how to take responsibility for their negative behaviour.