‘Hey Grok, undress her’: How a viral trend on X turned AI into a tool for sexual abuse

Far from ‘AI experimentation,’ the practice has raised urgent concerns about consent, privacy and the weaponisation of Artificial Intelligence to harass and sexually humiliate people online.

By Dheeshma
Published on: 4 Jan 2026, 1:59 PM IST

(Content warning: This article contains references to non-consensual sexual imagery, online harassment, and abuse. Reader discretion is advised.)

Hyderabad: “Hey Grok, undress her” and “Grok, replace her clothes with a bikini” are among the prompts that began circulating widely on X over the last few days to generate sexually explicit and manipulated images of women and, in some cases, even children.

What began as a seemingly harmless trend among supporters of political parties and fan clubs, using quirky prompts to add or remove public figures from images, soon escalated into abuse, exposing serious gaps in Grok’s safety guardrails.


Government issues notice to X

The Ministry of Electronics and Information Technology on Friday issued a formal notice to X, calling the circulation of obscene AI-generated content a grave failure of platform safeguards and a violation of the dignity of women and children.

The ministry has ordered the immediate removal of such material and sought a detailed action-taken report within 72 hours, warning that failure to comply could invite legal action under India’s IT laws.

Not about desire, but control

While pornography is freely available online, experts and digital rights advocates argue that this trend is not driven by sexual curiosity. Instead, it reflects something more troubling: control and violation.

Using AI tools to digitally strip women without consent is not harmless ‘tech fun.’ It is a form of digital sexual abuse, where technology is used to cross boundaries, erase agency and humiliate women at scale. The harm lies not in desire, but in domination, in the ability to alter a woman’s image with a click, without consequence.

Normalised harassment in plain sight

It is disturbing how openly such behaviour is being normalised.

In one instance, a verified male user, Jitin Sharma (@jitin84), who had around 40,000 followers and whose bio listed him as based in Bengaluru, repeatedly tagged Grok to remove a woman’s clothing, shrink her bra and expose more skin. The account was later deleted following online criticism.

In another instance, a user responded to a photograph of a woman posing with her mother by prompting Grok to sexualise the image, requesting that both women be depicted in revealing bikinis.

It is casual objectification enabled by platform design and reinforced by a lack of accountability. Such behaviour reinforces long-standing concerns about online safety and helps explain why women increasingly self-censor their presence on social media.

Victim-blaming

Columnist Samantha Smith publicly shared a morphed bikini image generated from one of her original photos, asking, “How is this not illegal?”

The response she received was telling. Many users suggested that she stop posting photos online if she wanted to avoid such abuse — a familiar pattern of victim-blaming that shifts responsibility away from perpetrators and platforms.

Fake accounts, communal abuse and moral policing

The trend has also been amplified by fake accounts impersonating women. These profiles posted stolen photos and asked Grok to “undress me,” triggering reply sections filled with abusive comments.

In many cases, men piled on with further prompts, asking for more explicit images. Others engaged in moral policing, commenting on the woman’s ‘modesty.’ Some accounts went further, demanding that women be depicted in a sari, while Islamophobic groups insisted on portraying them in a burqa.

One such account, created in December 2025, used images of digital creator Prachi Singh while operating under the name ‘Komal Yadav’ (@komalyadav03) with around 6K followers. Multiple verified accounts, run by both men and women, have also contributed to the trend by repeatedly posting images and prompting Grok to alter them sexually. A smaller number of requests targeted men, public figures and politicians, but these did not attract the same volume of engagement.

Partial compliance, persistent risk

Grok has not complied with every request. In some cases, it has partially refused or produced altered images that stop short of full nudity.

For instance, when asked to “remove her clothes” from a photo of a woman wearing a fitted bodysuit and shorts, Grok generated a studio-style portrait framed from the upper chest upward, showing bare shoulders but no explicit nudity.

However, experts warn that partial compliance still reinforces harmful behaviour by rewarding abusive prompts with altered imagery rather than firmly refusing them.

A known and growing danger

The misuse of AI to generate child sexual abuse material (CSAM) has long been a concern within the tech industry.

A 2023 Stanford study found that datasets used to train several popular image-generation tools contained over 1,000 CSAM images. Researchers warn that such training data can enable models to generate new exploitative imagery, compounding existing harm.

Grok itself has faced earlier controversies. In July 2025, xAI apologised after the chatbot posted rape fantasies and antisemitic content, including praise of Nazi ideology and references to itself as “MechaHitler.” In August 2025, it was accused of generating a sexually explicit clip of Taylor Swift.

Global backlash

India is not alone in responding to the issue. In France, government ministers have referred X to prosecutors and regulators, calling the imagery ‘sexual and sexist’ and ‘manifestly illegal.’

In the United States, the Federal Communications Commission declined to comment, while the Federal Trade Commission said it would not respond to queries.

xAI’s own acceptable use policy prohibits ‘depicting likenesses of persons in a pornographic manner’. In an apparent nod to the trend, Musk reposted an AI-generated photograph of himself in a bikini, captioned with cry-laughing emojis, and was also seen prompting Grok to edit his own image to show him wearing one.

Legal consequences under Indian law

In India, such conduct is punishable under the Information Technology Act, 2000, even when images are AI-generated or digitally altered.

Applicable provisions include:

• Section 66E – Violation of privacy

• Section 67 – Publishing or transmitting obscene material

• Section 67A – Publishing or transmitting sexually explicit content

Criminal liability may also arise under the Bharatiya Nyaya Sanhita, 2023, which has replaced the corresponding provisions of the Indian Penal Code.

Relevant sections include:

• Section 77 – Voyeurism

• Section 78 – Online stalking or harassment

• Section 351 – Criminal intimidation

• Section 356 – Defamation

These offences carry enforceable penalties. Depending on the nature and severity of the violation, punishment may include:

• Imprisonment with maximum terms of three to five years, depending on the provision

• Monetary fines

• A permanent criminal record

Criminal responsibility rests with the individual who creates, uploads, or circulates such content.

What to do if you encounter such content

If you come across non-consensual or sexually manipulated images online, act promptly.

• Preserve evidence before the content is deleted

• Report it on the platform as non-consensual sexual content

• File a formal complaint through the National Cyber Crime Reporting Portal (cybercrime.gov.in)

• Anonymous reporting is permitted

As nearly nude AI-generated images of real individuals continue to circulate globally, the question remains whether platforms like X are willing or able to enforce meaningful safeguards.

At the centre of the debate is a simple but unresolved question: if AI systems can be used so easily to violate consent and dignity, who is responsible for stopping them — the user, the tool or the platform that profits from both?
