Twitter’s former Trust & Safety head details the challenges facing decentralized social platforms | TechCrunch

Yoel Roth, formerly head of Trust & Safety at Twitter and now at Match, is sharing his concerns about the future of the open social web and its ability to combat misinformation, spam, and illegal content like child sexual abuse material (CSAM). In a recent interview, Roth worried about the lack of moderation tools available to the fediverse — the open social web that includes apps like Mastodon, Threads, and Pixelfed — as well as to other open platforms like Bluesky.

He also reminisced about key moments in Trust & Safety at Twitter, like its decision to ban President Trump from the platform, the misinformation spread by Russian bot farms, and how Twitter’s own users, including CEO Jack Dorsey, fell prey to bots.

On the podcast revolution.social with @Rabble, Roth pointed out that the efforts at building more democratically run online communities across the open social web are also those that have the fewest resources when it comes to moderation tools.

“…looking at Mastodon, looking at other services based on ActivityPub [protocol], looking at Bluesky in its earliest days, and then looking at Threads as Meta started to develop it, what we saw was that a lot of the services that were leaning the hardest into community-based control gave their communities the least technical tools to be able to administer their policies,” Roth said.

He also saw a “pretty big backslide” on the open social web when it came to the transparency and decision legitimacy that Twitter once had. While, arguably, many at the time disagreed with Twitter’s decision to ban Trump, the company explained its rationale for doing so. Now, social media providers are so concerned about preventing bad actors from gaming them that they rarely explain themselves.

Meanwhile, on many open social platforms, users don't receive a notice when their posts are removed; the posts simply vanish, with no indication to others that they ever existed.

“I don’t blame startups for being startups, or new pieces of software for lacking all the bells and whistles, but if the whole point of the project was increasing democratic legitimacy of governance, and what we’ve done is take a step back on governance, then, has this actually worked at all?” Roth wonders.

The Economics of Moderation

He also brought up the issues around the economics of moderation and how the federated approach hasn’t yet been sustainable on this front.

For instance, an organization called IFTAS (Independent Federated Trust & Safety) had been working to build moderation tools for the fediverse, including providing the fediverse with access to tools to combat CSAM, but it ran out of money and had to shut down many of its projects earlier in 2025.

“We saw it coming two years ago. IFTAS saw it coming. Everybody who’s been working in this space is largely volunteering their time and efforts, and that only goes so far, because at some point, people have families and need to pay bills, and compute costs stack up if you need to run ML models to detect certain types of bad content,” he explained. “It just all gets expensive, and the economics of this federated approach to trust and safety never quite added up. And in my opinion, still don’t.”

Bluesky, meanwhile, has chosen to employ moderators and build in-house trust and safety, but it limits itself to moderating its own app. It also provides tools that let people customize their own moderation preferences.

“They’re doing this work at scale. There’s obviously room for improvement. I’d love to see them be a bit more transparent. But, fundamentally, they’re doing the right stuff,” Roth said. However, as the service further decentralizes, Bluesky will face questions about when it is its responsibility to protect the individual over the needs of the community, he notes.

For example, with doxxing, someone might not see that their personal information was being spread online because of how they configured their moderation tools. But Roth argues it should still be someone’s responsibility to enforce those protections, even if the user isn’t on the main Bluesky app.
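To make that gap concrete, here is a minimal sketch, assuming a purely client-side labeling model; the label names, preference values, and `filter_feed` helper are illustrative inventions, not Bluesky's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical per-user label preferences; "hide" removes a post
# client-side only. Nothing is enforced at the network level.
DEFAULT_PREFS = {"doxxing": "hide", "spam": "warn", "nudity": "show"}

@dataclass
class Post:
    author: str
    text: str
    labels: list = field(default_factory=list)  # e.g. ["doxxing"]

def filter_feed(posts, prefs):
    """Apply one user's label preferences to their own feed.
    A 'hidden' post still exists on the network and still harms
    its target; it is merely invisible to this viewer."""
    visible = []
    for post in posts:
        if any(prefs.get(label) == "hide" for label in post.labels):
            continue  # dropped for this viewer only
        visible.append(post)
    return visible

feed = [Post("troll", "here is the victim's home address ...", labels=["doxxing"])]
# The person being doxxed, with these prefs, never sees the post that
# targets them, and no moderator is obligated to act on it.
print(filter_feed(feed, DEFAULT_PREFS))  # -> []
```

Hiding content per viewer protects that viewer's experience, but nobody in this model is tasked with protecting the person the post is about, which is exactly the responsibility question Roth raises.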

Where to Draw the Line on Privacy

Another issue facing the fediverse is that the decision to favor privacy can thwart moderation attempts. While Twitter tried not to store personal data it didn’t need, it still collected things like users’ IP addresses, access times, device identifiers, and more. These helped the company when it needed to do forensic analysis of something like a Russian troll farm.

Fediverse admins, meanwhile, may not even be collecting the necessary logs, or won’t view them if they think doing so would violate user privacy.

But the reality is that without data, it’s harder to determine who’s really a bot.
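As a rough illustration (the `AccessLog` fields and `shares_infrastructure` helper below are hypothetical, not any platform's actual schema), even minimal per-request metadata surfaces signals that content analysis alone cannot:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessLog:
    """One request's worth of the minimal metadata described above."""
    account_id: str
    ip_address: str
    device_id: str
    accessed_at: datetime

def shares_infrastructure(logs_a: list, logs_b: list) -> bool:
    """Two supposedly unrelated accounts posting from the same IPs or
    devices is a classic troll-farm signal. It is invisible to anyone
    analyzing post content alone, and unavailable to an admin who
    never collected these logs in the first place."""
    ips_a = {log.ip_address for log in logs_a}
    devs_a = {log.device_id for log in logs_a}
    return any(log.ip_address in ips_a or log.device_id in devs_a
               for log in logs_b)

a = [AccessLog("alice", "203.0.113.7", "dev-1", datetime(2016, 10, 1, 3, 0))]
b = [AccessLog("bob",   "203.0.113.7", "dev-9", datetime(2016, 10, 1, 3, 5))]
print(shares_infrastructure(a, b))  # -> True: same IP, likely same operator
```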

Roth offered a few examples of this from his Twitter days, noting how it became a trend for users to reply “bot” to anyone they disagreed with. He says that he initially set up an alert and reviewed all these posts manually, examining hundreds of instances of “bot” accusations, and nobody was ever right. Even Twitter co-founder and former CEO Jack Dorsey fell victim, retweeting posts from a Russian actor who claimed to be Crystal Johnson, a Black woman from New York.

“The CEO of the company liked this content, amplified it, and had no way of knowing as a user that Crystal Johnson was actually a Russian troll,” Roth said.

The Role of AI

One timely topic of discussion was how AI was changing the landscape. Roth referenced recent research from Stanford that found that, in a political context, large language models (LLMs) could even be more convincing than humans when properly tuned.

That means a solution that relies on content analysis alone isn’t enough.

Instead, companies need to track other behavioral signals — like whether an entity is creating multiple accounts, using automation to post, or posting at odd times of day that correspond to different time zones, he suggested.

“These are behavioral signals that are latent even in really convincing content. And I think that’s where you have to start this,” Roth said. “If you’re starting with the content, you’re in an arms race against leading AI models and you’ve already lost.”
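As a rough sketch of what scoring those behavioral signals could look like (the `Account` fields and thresholds here are invented for illustration, not anything Roth or any platform described):

```python
from dataclasses import dataclass

@dataclass
class Account:
    accounts_from_same_ip: int   # sibling accounts sharing infrastructure
    posts_per_hour: float        # sustained posting rate
    declared_utc_offset: int     # timezone the profile claims, in hours
    median_post_hour_utc: int    # hour (UTC) when the account usually posts

def behavior_score(acct: Account) -> int:
    """Count behavioral red flags that persist even when the content
    itself is fluent, convincing, LLM-generated text."""
    score = 0
    if acct.accounts_from_same_ip > 3:
        score += 1  # coordinated account creation
    if acct.posts_per_hour > 10:
        score += 1  # a cadence implausible for one human
    # Posting mostly at 3 a.m. "local" time suggests the operator lives
    # in a different timezone than the persona claims.
    local_hour = (acct.median_post_hour_utc + acct.declared_utc_offset) % 24
    if local_hour < 5:
        score += 1
    return score  # higher means more suspicious, regardless of content

suspect = Account(accounts_from_same_ip=12, posts_per_hour=40,
                  declared_utc_offset=-5, median_post_hour_utc=8)
print(behavior_score(suspect))  # -> 3
```

None of these checks look at what the account says, which is the point: they stay informative even when the content itself would pass any quality filter.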
