
Opinion: Grok AI’s nude deepfakes are emblematic of anti-women conservatism

From nude deepfakes to violent imagery, our columnist argues Grok’s misuse shows the danger of unregulated AI. Society must call out misogyny and protect victims from digital abuse, she writes. Emma Soto | Contributing Illustrator


Imagine opening the X app on your phone, planning to scroll through your feed or check a quick direct message. Instead, you find an altered image of yourself. The deepfake depicts you nearly nude, digitally stripped down by X’s AI assistant Grok at the public request of other X users.

From the end of 2025 to the beginning of 2026, this social media nightmare became a shocking reality for many women, children, celebrities and private citizens alike. Nude imagery was not the only request Grok generated in response to publicly posted user prompts. Newsweek reported that women were portrayed in various sexual positions. Futurism found content depicting women being sexually abused, humiliated, injured or killed.

The scandal is far from Grok’s first. The AI assistant has also been caught proliferating antisemitic ideas and pushing a white-focused narrative on South African racial politics.

International outrage over X and the most recent misuse of Grok has been palpable, especially considering children were also targeted. Indonesia and Malaysia temporarily blocked access to Grok, while Canada and the United Kingdom launched formal investigations.

In contrast, Elon Musk and his company’s initial response to media inquiries was quite jarring. The statement was simple: the company claimed legacy media was lying. It’s outrageous to shift blame to news organizations when the evidence is circulating on X’s own platform.

The company changed its tune in light of potential legal ramifications after receiving a cease-and-desist letter from California Attorney General Rob Bonta ordering it to stop distributing nonconsensual sexual imagery.

Access to Grok’s generative image feature was initially limited to paid users on X. Perhaps the company believes the problem is resolved if only wealthy people can generate such images. More recently, after additional backlash, X released a statement that Grok can no longer create this horrific content, regardless of whether someone is a paid subscriber.

It seems odd that the Trump administration has yet to comment on this controversy, especially since the president has already signed the Take It Down Act, which makes publishing this content online illegal. Given that this administration is currently covering up a major scandal involving sexual violence, though, the silence may not be a huge surprise.


Other lawmakers are responding instead, with the Senate sending the DEFIANCE Act to the House. If enacted, the bill would allow victims of deepfakes to seek damages, with a minimum award of $150,000.

The threat of financial repercussions may help address this problem more swiftly if it happens again, since current legislation clearly did little to prevent the latest Grok scandal.

This content is undeniably harmful, and the fact that users are actively requesting such sexually exploitative content in the first place only intensifies my concern. It’s obvious that cultural trends are shifting toward conservatism, from the Sydney Sweeney eugenics controversy to Pantone naming white its 2026 Color of the Year.

While the conservative cultural shift into the mainstream has been slow, the presence of the digital alt-right pipeline shouldn’t be ignored. These conservative ideals often lead to violence against women.

Misogynist content creators like Andrew Tate have also largely avoided accountability for the harm they have caused women online and in real life. This sends a message to online audiences that requesting Grok to undress women online is acceptable.

Elon Musk himself has made numerous attempts to bring conservative culture to the mainstream. While it is absolutely disgusting that Grok was used to perpetrate sexual violence against women, it’s unfortunately not surprising.

Ashley St. Clair, one of several women with whom Musk has had children, was also targeted by these deepfakes. Older photos of St. Clair, taken when she was just 14 years old, were altered, along with more recent ones. When she tried to have the photos removed, she said, X banned her from the app’s premium services.

Musk has willingly cultivated this culture, and it’s quite ironic that he leans into conservative ideals while completely ditching the nuclear family. This noticeable shift is the exact reason why I deleted X the second I learned Musk bought it. I mourned the loss of Twitter, having foreseen the turn the app would, and ultimately did, take.

While legislation against nonconsensual deepfakes is important, how society views and treats these issues matters just as much. Users hiding behind a screen shouldn’t feel that it’s acceptable to exploit or scandalize women and children. Steps must be taken to ensure this type of content is not normalized. We must call on the media to cover violent acts against women and children, and call out misogynistic content online.

Bella Tabak is a senior majoring in magazine journalism. She can be reached at batabak@syr.edu.
