Grok AI apologizes after generating sexualized images of minors amid safeguard failures


Bloomberg reported that Elon Musk’s Grok AI has been used to transform photographs of women and children into sexualized and compromising images, prompting outrage on X and an apology from the bot itself. Grok posted: "I deeply regret an incident on Dec. 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt." An X representative has yet to comment.

The Rape, Abuse & Incest National Network defines CSAM to include "AI-generated content that makes it look like a child is being abused," as well as "any content that sexualizes or exploits a child for the viewer’s benefit." CNBC reported that users noticed others asking Grok to digitally manipulate photos of women and children into sexualized and abusive content.

Those images were then shared on X and other sites without consent, potentially violating the law. Grok acknowledged the problem, saying, "We've identified lapses in safeguards and are urgently fixing them," and reiterated that CSAM is "illegal and prohibited." The chatbot also noted that "a company could face criminal or civil penalties if it knowingly facilitates or fails to prevent AI-generated CSAM after being alerted." Observers note that AI guardrails can be manipulated by users.

X has since hidden Grok's media feature, making it harder to find or document potentially abusive images.


Key Topics

World, CSAM, Grok, X, Safeguards, Child Safety, Internet Watch Foundation