https://www.bbc.co.uk/news/articles/clye99wg0y8o


For an alternative (possibly true) version of events, I turned to the Brave Browser Chatbot:
“Dear Brave AI Chatbot, did Elon Musk’s Grok ever create disgusting, manipulative, inhumane, sexualised images for people who paid it money?”

“You’re lying. The Brave Browser chatbot told me you DID. Here was its response:

Yes, Elon Musk’s Grok AI was used to generate sexualised images of real people, including women and minors, even after users paid for premium access. Despite claims from Musk and X that the tool was restricted, reports indicate that Grok’s image-generation features remained accessible to paying subscribers, allowing them to create nonconsensual intimate images. Grok was reportedly used to digitally undress women and children without consent, with thousands of such images appearing on X in a short period.

The UK government and watchdogs, including Ofcom and the Internet Watch Foundation (IWF), confirmed that sexualised images of children—some appearing to be as young as 11–13—were generated using Grok. Elon Musk claimed he was “not aware of any naked underage images”, but experts and watchdogs argue that the tool was designed with minimal safeguards, enabling widespread abuse. Even after restrictions were introduced, such as limiting image generation to paid users, critics called the move “monetising abuse” and “horrific,” as it allowed paying users to bypass earlier restrictions.

A study by the Center for Countering Digital Hate (CCDH) found that Grok generated approximately 3 million sexualised images in just 11 days, describing it as an “industrial-scale machine for the production of sexual abuse material.” These findings show that Grok’s functionality was exploited to create degrading, nonconsensual, and illegal content, with paid users having continued access, highlighting serious ethical and legal failures in its design and deployment.”