A new report from PEN America and Consumer Reports urges tech companies to treat online abuse like spam—by proactively filtering harm before it reaches users.
Online abuse is a crisis hiding in plain sight. Nearly half of Americans report experiencing some form of harassment on social media platforms—but this harm does not affect all users equally. Women, and especially women of color, bear the brunt, along with LGBTQ+ individuals and members of religious or ethnic minorities. So too do journalists, writers, researchers and content creators who need an online presence to do their work.
The abuse doesn’t always stay online. In a 2021 global study conducted by UNESCO, one in five women journalists reported being attacked offline in incidents connected to online harassment related to their work. The same study found that online abuse damaged the mental health of its targets, with 26 percent of respondents reporting depression, anxiety, PTSD and other stress-related ailments such as sleep loss and chronic pain.
When the online world becomes too hostile to navigate safely, many people—especially women—feel they have no choice but to silence themselves. They self-censor, withdraw from digital spaces, and sometimes even leave their professions altogether.
The most vulnerable voices online are often the very voices we need most in public discourse. As platforms lose diverse voices, our digital spaces become less free, less representative and less equitable.
But despite the clear harms of online abuse, and despite platforms’ purported commitment to free speech, current strategies for contending with it are insufficient.
Most social media platforms rely on tools that are effectively reactive, such as reporting and blocking, which require users to be exposed to abusive content, often repeatedly, before they can take action. This approach is psychologically damaging and, by itself, entirely inadequate to protect free expression online.
A new report from PEN America and Consumer Reports, titled “Treating Online Abuse Like Spam: How Platforms Can Reduce Exposure to Abuse While Protecting Free Expression,” proposes a critical addition to how platforms currently approach online abuse. Drawing inspiration from how email providers filter out spam, the report urges platforms to empower individual users with powerful automated tools that can proactively detect and quarantine abuse. Such a model would allow users to decide for themselves if, when, and how they interact with abusive content.

In this proposal, individual users could switch on a system that automatically detects potentially abusive content and quarantines it in a personalized dashboard, where they could then choose to review the filtered material, ignore it, take action on it, or delegate to a trusted contact. Designed with trauma-informed principles, this system would allow users to fine-tune the degree of toxicity that gets filtered out, flag potentially dangerous content (such as threats or doxing), facilitate documentation to save evidence of abuse, and enable delegation to enlist the help of allies.
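The report describes this model at the level of product design rather than implementation, but a rough sketch can make the mechanics concrete. The Python example below is purely illustrative: the toxicity classifier, class names, threshold values and delegation field are hypothetical assumptions for the sake of explanation, not anything specified in the report.

```python
# Illustrative sketch only. The report proposes the concept; everything below
# (score function, classes, thresholds) is a hypothetical stand-in.
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Message:
    sender: str
    text: str
    toxicity: float = 0.0  # filled in by the (hypothetical) classifier


@dataclass
class QuarantineDashboard:
    """Per-user holding area for content flagged as potentially abusive."""
    items: List[Message] = field(default_factory=list)
    delegate: Optional[str] = None  # trusted contact who may review on the user's behalf

    def review(self) -> List[Message]:
        return self.items


@dataclass
class AbuseFilter:
    """User-controlled filter: each user decides how aggressively to quarantine."""
    threshold: float  # e.g. 0.7 quarantines only high-toxicity content; 0.3 is stricter
    classifier: Callable[[str], float]
    dashboard: QuarantineDashboard = field(default_factory=QuarantineDashboard)

    def process(self, msg: Message) -> Optional[Message]:
        msg.toxicity = self.classifier(msg.text)
        if msg.toxicity >= self.threshold:
            self.dashboard.items.append(msg)  # quarantined, never shown unprompted
            return None
        return msg  # delivered normally


def fake_toxicity_score(text: str) -> float:
    """Stand-in for a real toxicity classifier (an assumption, not the report's method)."""
    return 0.9 if "abusive" in text.lower() else 0.1


if __name__ == "__main__":
    filt = AbuseFilter(threshold=0.7, classifier=fake_toxicity_score)
    for m in [Message("friend", "Great article!"), Message("troll", "abusive insult here")]:
        delivered = filt.process(m)
        print(f"{m.sender}: {'delivered' if delivered else 'quarantined'}")
    # The user (or a trusted delegate) can later review, document, or report quarantined items.
    print("In quarantine:", [m.sender for m in filt.dashboard.review()])
```

The point of the sketch is the control flow, not the classifier: potentially abusive content is diverted before the user sees it, and the user sets the threshold and decides whether, when and with whose help to look at what was filtered.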

Platforms already deploy sophisticated automated technology for their own behind-the-scenes content moderation efforts, which means the building blocks for user-facing filtering tools already exist.
What the report advocates for, in essence, is a system that empowers individuals to set their own boundaries and exert greater control over their online experiences, while protecting free expression. Rather than suppressing content platform-wide, it allows individual users to decide what they see and when. And it offers numerous additional benefits: protecting mental health by reducing exposure to traumatic content, enabling greater transparency around behind-the-scenes platform-driven content moderation, and improving data for refining moderation systems.

Technology alone can’t solve the deeply rooted social issues that fuel online abuse—misogyny, homophobia, transphobia, xenophobia, and other forms of bigotry. No tool, however innovative, can fully prevent the trauma and inequality that are exponentially amplified in online spaces. But that doesn’t mean that we can’t take meaningful steps to mitigate harm—or that platforms get a pass.
If tech companies are serious about protecting free speech and building inclusive online communities on their platforms, as they claim, then they need to put their money where their mouths are. That means investing in proactive measures that prevent abuse and reduce exposure, reactive measures that enable mitigation and redress, and accountability measures that disincentivize abusive behavior.
The model we propose represents a critical step toward safer, more inclusive digital spaces. Technological solutions, however, must be just one part of a broader, holistic approach to addressing online abuse.
Platforms should start by restoring the trust and safety teams they have gutted in recent years, tightening the hate and harassment policies they have recently loosened, and listening to the digital safety advocates who have been sounding the alarm for years.
Online abuse doesn’t have to be inevitable. It’s the result of choices—choices made by individual users, organized groups and state actors that deploy intimidation tactics to get their way. And choices made by social media platforms to profit from attention and engagement at any cost.
We need a new paradigm, one that prioritizes safety, equity and expression for all users, not just the loudest. We need to demand better design, better policy and more rigorous regulation.
Please share our report and join us in calling on tech companies to improve the day-to-day experiences of the most vulnerable users on their platforms. Together, we can build digital spaces where everyone, especially women and nonbinary folks, can speak without fear.