Twitter is finally taking abuse seriously

Over the years, Twitter has become the unfortunate example of what happens on the internet when you govern your territory like the Wild West. On Twitter, the least civil parts of society not only persist, but flourish. There, even unassuming punctuation marks are just waiting to evolve into viral racial epithets.

But a set of new features may go a long way toward fighting the network’s ever-evolving abuse. On Tuesday, the company officially unveiled a new set of tools that let users mute specific words, phrases, and entire conversations from appearing in their notifications. Eventually the feature will expand beyond notifications, meaning users won’t see certain tweets at all if they don’t want to.
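To make the mechanics concrete, here is a rough sketch of what word-and-phrase muting could look like under the hood: simple, case-insensitive matching against incoming notifications. Twitter hasn’t described how its filter actually works, so everything below, from the function names to the matching logic, is a hypothetical illustration rather than the company’s implementation.

```python
# A minimal, hypothetical sketch of a keyword mute filter.
# Illustration only, not Twitter's actual implementation;
# all names and logic here are invented for the example.

def should_mute(tweet_text: str, muted_phrases: set[str]) -> bool:
    """Return True if the tweet contains any muted word or phrase."""
    text = tweet_text.casefold()  # case-insensitive comparison
    return any(phrase.casefold() in text for phrase in muted_phrases)

def filter_notifications(notifications: list[str], muted_phrases: set[str]) -> list[str]:
    """Drop notifications whose text matches a muted word or phrase."""
    return [n for n in notifications if not should_mute(n, muted_phrases)]

# Example: a user mutes two phrases; matching notifications are suppressed.
muted = {"spoilers", "hot take"}
incoming = ["Great thread!", "No SPOILERS: the ending is..."]
print(filter_notifications(incoming, muted))  # -> ['Great thread!']
```

The appeal of this per-user approach is visible even in the toy version: the platform never has to rule on whether a phrase is acceptable in general, only on whether a particular user has asked not to see it.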

“The forms that abuse can take can vary tremendously,” said Del Harvey, Twitter’s VP of Trust and Safety. “Sometimes exactly the same phrase can be abusive between two people and not abusive between two others. There’s all these layers of meaning and intent. Plus there’s your own sensitivities.”

This new muting feature is similar to one Instagram released earlier this year. It’s an excellent strategy that thoughtfully balances safety and freedom of expression. Letting users decide for themselves what kind of commentary goes too far means that when one user posts something hurtful, others don’t have to see it. Twitter avoids making strict, blanket rules about what speech is acceptable, and its anti-harassment strategies can adapt as quickly as the trolls do.

The company seems to be trending toward a strategy that allows individual users to set more and more rules for themselves. In August, Twitter introduced a quality filter, along with a setting to allow users to limit who they receive notifications from.

Over the past two years, Twitter has taken many steps to curb harassment: It has banned revenge porn, issued new anti-harassment rules, established a trust and safety council, and de-verified high-profile users it considers abusive. But those efforts don’t seem to have worked very well. According to a recent Anti-Defamation League report, Twitter suspended just 21% of the 1,600 accounts that sent thousands of anti-Semitic tweets at journalists over the past year. Frequently, users report that even threats of death and rape—clear violations of Twitter’s policies—are met with inaction.

“You know when you see that tweet going around that says ‘Really Twitter, this wasn’t a violation of your rules?’ with a screenshot of a tweet?” said Harvey. “Usually we look and say, ‘Yep, that’s definitely a violation of our rules.’ All of that doesn’t really do us any good if we’re not catching the conduct that’s actually abusive.”

The company also hopes to bolster its own moderators’ ability to respond appropriately to troublesome content. Twitter has created a new option for reporting hateful conduct and is retraining the staff who review flagged content to help them understand historical and cultural context.

Forthcoming, Harvey said, are tools that will allow Twitter to more easily identify repeat offenders and those trying to get around suspensions through strategies such as creating duplicate accounts.

The big realization here, Harvey told me, is that empowering the user may go the farthest in curbing abuse, something Twitter’s critics have long argued for.

“It is unrealistic that we will be able to predict everything people will consider harassing, every way that they will want their experience to be,” Harvey told me. “The more we can put tools in the hands of users to manage their own experience, the better.”

The necessary tradeoff is that such strategies risk turning Twitter into even more of a self-censored bubble.

But when I asked Harvey how free speech advocates might respond to the new muting tools, she dismissed the concerns.

“Frankly one of the things I’ve come to realize over the years is I am probably striking the right balance if no one is happy with me,” she said.

On Nov. 15, at the Real Future Fair in Oakland, we’ll be discussing strategies like these that might actually help make the internet a nicer place.

 