For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, material that could qualify as illegal child sexual abuse material (CSAM) in the US.

  • IchNichtenLichten@lemmy.wtf · 18 days ago

    These companies tend not to say how they train their models, partly because much of the data is stolen, but it covers pretty much everything. An LLM will generate a response to any prompt, so if it can be used to put a celebrity in lingerie, it can also be used to do the same with a child. Of course there are guardrails, but they’re weak, and I hope X gets sued into oblivion.