Grok, the AI chatbot developed by Elon Musk's artificial intelligence company, xAI, welcomed the new year with a disturbing post.
"Dear Community," began the Dec. 31 post from the Grok AI account on Musk's X social media platform. "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I am sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok."
The two young girls were not an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The "undressing" edits have swept across an unsettling number of photos of women and children.
Despite the Grok response's promise of intervention, the problem hasn't gone away. Just the opposite: Two weeks on from that post, the number of images sexualized without consent has surged, as have calls for Musk's companies to rein in the behavior, and for governments to take action.
According to data from independent researcher Genevieve Oh cited by Bloomberg, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or "nudifying" images every hour. That compares with an average of only 79 such images per hour for the top five deepfake websites combined.
Grok's Dec. 31 post was in response to a user prompt that sought a contrite tone from the chatbot: "Write a heartfelt apology note that explains what happened to anyone lacking context." Chatbots work from a base of training material, but individual posts can vary.
xAI did not respond to requests for comment.
Edits now limited to subscribers
Late Thursday, a post from the Grok AI account noted a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be limited to paying subscribers.
Critics said that isn't an adequate response.
"I don't see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn't be used to generate abusive images," Clare McGlynn, a law professor at the UK's Durham University, told the Washington Post.
What's stirring the outrage isn't just the volume of these images and the ease of generating them; the edits are also being made without the consent of the people in the photos.
These altered images are the latest twist in one of the most disturbing aspects of generative AI: realistic but fake videos and photos. Software programs such as OpenAI's Sora, Google's Nano Banana and xAI's Grok have put powerful creative tools within easy reach of everyone, and all that's needed to produce explicit, nonconsensual images is a simple text prompt.
Grok users can upload a photo, which doesn't have to be their own, and ask Grok to alter it. Many of the altered images involved users asking Grok to put a person in a bikini, often revising the request to be even more explicit, such as asking for the bikini to become smaller or more transparent.
Governments and advocacy groups have been speaking out about Grok's image edits. On Monday, UK internet regulator Ofcom said it has opened an investigation into X based on reports that the AI chatbot is being used "to create and share undressed images of people, which may amount to intimate image abuse or pornography, and sexualised images of children that may amount to child sexual abuse material (CSAM)."
The European Commission has also said it is looking into the matter, as have authorities in France, Malaysia and India.
On Friday, US senators Ron Wyden, Ben Ray Luján and Edward Markey posted an open letter to the CEOs of Apple and Google, asking them to remove both X and Grok from their app stores in response to "X's egregious behavior" and "Grok's sickening content generation."
In the US, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual imagery, but it gives those platforms until May of this year to set up a process for removing such images.
"Although these images are fake, the harm is incredibly real," Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms, told CNET. She notes that those whose images are altered in sexual ways can face "psychological, somatic and social harm, often with little legal recourse."
How Grok lets users get risque images
Grok debuted in 2023 as Musk's more freewheeling alternative to ChatGPT, Gemini and other chatbots. That has resulted in disturbing headlines; in July, for instance, the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.
In December, xAI launched an image-editing feature that lets users request specific edits to a photo. That's what kicked off the current spate of sexualized images, of both adults and minors. In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to "change her to a dental floss bikini."
Grok also has a video generator that includes a "spicy mode" opt-in option for adults 18 and over, which can show users not-safe-for-work content. Users must include the phrase "generate a spicy video" in their prompt to activate the mode.
A central concern about the Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On Dec. 31, a post from the Grok X account said that images depicting minors in minimal clothing were "isolated cases" and that "improvements are ongoing to block such requests entirely."
In response to a post by Woow Social suggesting that Grok simply "stop allowing user-uploaded images to be altered," the Grok account replied that xAI was "evaluating options like image alteration to curb nonconsensual harm," but didn't say that the change would be made.
According to NBC News, some sexualized images created since December have been removed, and some of the accounts that requested them have been suspended.
Conservative influencer and author Ashley St. Clair, mother to one of Musk's 14 children, told NBC News this week that Grok has created numerous sexualized images of her, including some using photos from when she was a minor. St. Clair told NBC News that Grok agreed to stop doing so when she asked, but that it didn't.
"xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability just because it's 'AI,'" Ben Winters, director of AI and data privacy for the nonprofit Consumer Federation of America, said in a statement last week. "AI is no different than any other product; the company has chosen to break the law and must be held accountable."
What the experts say
The source materials for these explicit, nonconsensual edits, people's photos of themselves or their children, are all too easy for bad actors to access. But protecting yourself from such edits is not as simple as never posting photos, says Brigham, the researcher of sociotechnical harms.
"The unfortunate reality is that even if you don't post images online, other public images of you could theoretically be used in abuse," she said.
And while not posting photos online is one preventive step that people can take, doing so "risks reinforcing a culture of victim-blaming," Brigham said. "Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable."
Sourojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and mentors future AI professionals in designing and advocating for safer AI features.
Ghosh says it's possible to build safeguards into artificial intelligence. In 2023, he was one of the researchers looking into the sexualization capabilities of AI. He notes that the AI image generation tool Stable Diffusion had a built-in not-safe-for-work threshold. A prompt that violated the rules would trigger a black box to appear over a questionable part of the image, although it didn't always work perfectly.
"The point I'm trying to make is that there are safeguards that are in place in other models," Ghosh told CNET.
He also notes that if users of ChatGPT or Gemini AI models use certain words, the chatbots will tell the user that they're barred from responding to those prompts.
"All this is to say, there's a way to very quickly shut this down," Ghosh said.
