Online abuse fueled by AI is escalating and increasingly turning into real-world harm, putting outspoken women at serious risk.
She spoke and the internet struck back
It begins with a tweet, a comment, a message, but for thousands of women around the world, it never ends there. In 2025, UN Women issued one of its starkest warnings yet: speaking out online, especially in defense of human rights or press freedom, is becoming not just risky but life-threatening. What once looked like “online harassment” has transformed into a global system of digital intimidation, algorithmic abuse, and AI-driven attacks that no longer stay on the screen. They follow women home, into their workplaces, and in far too many cases, into offline harm.
The scale of the crisis is staggering. UN Women’s survey gathered testimony from over 6,900 women across 119 countries, all working in public-facing roles: journalists, human-rights defenders, activists, writers, and other communicators. 70% of them reported experiencing online violence related to their work. The figure climbs higher still in certain professions: 76% of writers and public communicators focused on human-rights issues said they had been targeted, as did 72% of human-rights defenders and activists and 75% of journalists and media workers.
But numbers alone do not capture the severity of what’s happening. For a growing share of these women, online abuse is not a virtual problem; it is a prelude. 41% of respondents said they had suffered offline harm linked directly to online violence: physical assault, sexual harassment, stalking, “swatting,” and verbal abuse. The pattern intensifies in some professions: 44% of human-rights defenders, activists, writers, and public communicators reported offline harm tied to online attacks.
And the long-term trend is even more chilling. In 2020, only 20% of women journalists attributed offline abuse to online violence. By 2025, that number had more than doubled to 42%, a dramatic escalation that reveals how digital aggression is migrating into the physical world with alarming speed.
A major accelerant of this crisis is artificial intelligence. Since the mainstream adoption of generative AI tools in late 2022, the nature of online abuse has shifted. Deepfakes, fabricated audio, doctored images, and AI-generated harassment campaigns have become more common, more convincing, and more accessible. Nearly a quarter of all respondents (23.8%) said they had experienced AI-assisted online violence.
These technologies supercharge online violence, making it faster, more scalable, and far harder to counter. What was once a troll’s insult can now be a viral deepfake designed to discredit, shame, or endanger. The digital realm has become a weapon, turning misinformation into intimidation, and harassment into a coordinated assault on women’s safety and credibility.
The women most vulnerable are those whose work demands visibility: journalists confronting powerful interests, activists challenging injustice, human-rights defenders exposing abuse, and public communicators shaping public debate. UN Women warns that without urgent intervention, many women will be silenced, pushed out of public life, or forced into hiding, a blow not just to individual freedom, but to democracy itself.
The lesson of this global data is unequivocal: online violence is real violence. The digital sphere has become a battlefield, and women who dare to speak are paying the price.
That is why UN Women’s report carries a title that feels less like research and more like a warning. The world is approaching a “Tipping Point” that demands action before the next attack moves from the screen into reality. What happens next depends on whether the world listens… or waits for the danger to cross another line.
