Elon Musk’s AI chatbot Grok is facing mounting criticism after users accused the tool of generating sexually explicit images by digitally removing clothing from photos of women and children.
In a post on X (formerly Twitter) on Friday, Grok acknowledged the issue, saying it had discovered gaps in its safety systems and was moving quickly to address them. “We’ve identified lapses in safeguards and are urgently fixing them,” the chatbot said, adding that child sexual abuse material (CSAM) is illegal and strictly prohibited.
The controversy erupted after Grok introduced an “edit image” feature in late December. The tool allows users to alter images posted on the platform, but complaints soon emerged that some users were exploiting it to partially or fully undress people in photos, including minors.
xAI, the company behind Grok and led by Musk, responded to questions from AFP with a brief automated message that claimed, “The mainstream media lies.”
Despite that response, Grok itself acknowledged the seriousness of the issue when replying to users on X who had raised concerns about the legal consequences for US companies that knowingly allow the creation or distribution of child sexual abuse content.
The fallout has extended beyond the United States. Media reports in India said government officials are demanding that X explain what steps it is taking to remove obscene, indecent, or sexually suggestive content generated by Grok without consent.
In France, the Paris public prosecutor’s office has expanded an ongoing investigation into X to include new allegations that Grok was being used to generate and distribute child pornography. That probe began in July after claims that the platform’s algorithm was being manipulated for foreign interference.
This is not the first time Grok has drawn controversy. In recent months, the chatbot has been criticised for making inflammatory or misleading statements on sensitive topics, including the war in Gaza, tensions between Pakistan and India, antisemitic content, and misinformation related to a deadly shooting in Australia.
As pressure mounts from governments and regulators, Grok and X now face growing scrutiny over how effectively their AI systems can prevent abuse.