Civil conversations on social media may seem like an oxymoron on the order of “working vacation” or “jumbo shrimp.”

Yet Twitter—the social media platform built on the tenet of total transparency—continues to advocate for civility at all times.

In an attempt to cut through the abuse, harassment, and escalation of toxic threads while preserving free expression for all users, Twitter rolled out a “hide replies” feature in late 2019.

It offers users the ability to control the tone of a conversation they started by hiding replies deemed irrelevant or offensive and preventing a few troublemakers from creating drama with inappropriate, rude, or hateful comments.

Now, several years and a new owner later, it seems the feature hasn’t made Twitter more civil (I’m not sure I can write that with a straight face).

Has the feature had the effect they expected? Does it truly cut down on abuse for organizations on the social platform?

Let’s talk about that and how to moderate social media in an extremely polarizing world.

Handling Trolls On Social Media

In a community on Facebook, the question was posed, “How are we handling trolls on social media these days?” 

I joked that I live in Chicago so I could hire a hitman for the questioner but was quickly shot down by friends who playfully told me I should never put that in writing.

Clearly, I learned my lesson there.

A hitman notwithstanding, there are different levels of moderation to take into account, policy nuances and context to attend to, and different best practices for monitoring and moderating on each social network.

It is not a one-size or one-situation-fits-all.

It used to be we could attend to the trolls by commenting publicly, “Hey, so-and-so…we see you, we hear you, we’d like to talk to you.” And then take the conversation offline.

But that approach really only works now if it’s a customer complaint and you truly can fix the issue with an offline conversation.

Today there are so many people out there with the sole intent of harming you, your leadership team, your colleagues, your organization, your influencers, your market share, your stock price, and/or your reputation. 

They’re referred to as bad actors or instigators—they’re people who’ve taken the troll of yesteryear to the next level.

It’s no longer just the trolls being a pain in the butt.

There are truly bad actors who are intent on killing the brand.

With these people out there intent on harming your brand, how are the social networks dealing with them?

Twitter, in addition to “hide replies,” allows users to mute, block, and filter comments on their feeds.

While “hide replies” keeps offending comments from showing up in others’ feeds by default, it doesn’t offer the nuclear option of a delete button.

The concealed replies are not entirely removed from Twitter; they’re tucked behind an icon—an icon that might as well be blinking “click me.”

If someone wants to see the cloaked replies, they just click the icon and get a list of all the replies that were previously invisible.
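Brands that moderate at scale rarely do this hiding by hand; their tools do it through Twitter’s developer API.

Here’s a minimal sketch of what that call might look like, assuming the v2 hidden-replies endpoint and a placeholder OAuth token (the function name and values are illustrative, not anything Twitter ships):

```python
# A minimal sketch of hiding a reply programmatically via Twitter's v2 API.
# Assumptions: the hidden-replies endpoint (PUT /2/tweets/:id/hidden) and an
# OAuth 2.0 user-context token with the tweet.moderate.write scope; the
# function name and placeholder values are illustrative, not Twitter's own.
import requests

def hide_reply(reply_id: str, bearer_token: str, hidden: bool = True) -> bool:
    """Hide (or unhide) one reply to the authenticated user's tweet."""
    response = requests.put(
        f"https://api.twitter.com/2/tweets/{reply_id}/hidden",
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
        json={"hidden": hidden},
    )
    response.raise_for_status()
    # The API echoes the new state back, e.g. {"data": {"hidden": true}}
    return response.json()["data"]["hidden"]
```

Note that nothing is deleted: flipping hidden back to False restores the reply, which is exactly why hidden replies stay one click away for anyone curious.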

Although the “hide replies” feature has given some comfort to brands seeking online civility, the tool also gives instigators an opportunity to game the system.

Let’s come back to that in a minute. 

Is Your Social Media Moderation Policy Iron-Clad?

Moderating social media is harder than Sisyphus pushing his rock up the hill for eternity.

You pour your energy into an air-tight content policy that prohibits profanity, hate speech, abusive language, threats of violence, fake news, and more. 

While your social media moderation policy may be the result of months of studied reflection, wrestling with thorny ethical issues and carefully avoiding hyper-politicization, it probably doesn’t take into account different geographies, languages, and cultures from around the world.

It doesn’t take into account nuance or context. And it doesn’t take into account censorship and freedom of speech. So, while automatic filters may delete comments or hide replies that violate your brand’s social media moderation policy, they could also trigger an unexpected backlash.

Let’s go back to the “hide replies” feature on Twitter.

All an instigator has to do is click the hidden replies icon and they’ll see everything you’ve hidden.

If you have software that is automatically filtering and hiding replies based on your social media moderation policy, it’s pretty easy for someone to figure out what you allow—and what you don’t.

Let’s say, for argument’s sake, that you don’t allow the use of the word “gay” because you’ve found in the past that it’s been used in a derogatory way—one that is offensive to your colleagues and customers, alike.

But, as we all know, there are certain uses of the word “gay” that are perfectly acceptable.

For instance, when Marvel announced that their first gay male superhero will be married with kids, there were plenty of us excited for the news. 

Now let’s say I shared that news on Twitter with a comment that said, “Can’t wait to see what Marvel does with its first openly gay, married with kids superhero!”

Benign, right?

Your Organization Is Open to Crisis

Not if your social media moderation policy filters for the word “gay.” Now my tweet is hidden. And an instigator easily discovers that you filter for the word.
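To make the failure concrete, here’s a hedged sketch of the bare keyword matching such a filter might apply (the blocklist and function are hypothetical illustrations, not any vendor’s actual code):

```python
# A naive keyword filter of the sort described above; purely illustrative.
BLOCKED_KEYWORDS = {"gay"}  # added after past derogatory use of the word

def should_hide(reply: str) -> bool:
    """Flag a reply for hiding if any word matches the blocklist."""
    words = (word.strip(".,!?").lower() for word in reply.split())
    return any(word in BLOCKED_KEYWORDS for word in words)

tweet = ("Can't wait to see what Marvel does with its first "
         "openly gay, married with kids superhero!")
print(should_hide(tweet))  # True: the celebratory tweet gets hidden anyway
```

Once an instigator figures out that this is the rule, the exploit writes itself.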

So they begin to post positive tweets with keywords they know you filter, ensuring those tweets will be hidden.

Then the instigator calls out the brand in tweets for being homophobic, racist, uncaring, or insensitive—comments that have the potential to go viral in a matter of seconds—all part of a campaign to create a backlash, amplify existing resentment, or harm the brand.

Despite the best of intentions, you can see how “hide replies” can cause issues for brands whose automated moderation unintentionally silences dissenting opinions (even those expressed thoughtfully) or fact-checked clarifications.

It’s not as easy as saying, “Hey, so-and-so…we see you, we hear you, we’d like to talk to you.”

And then taking the conversation offline.

This is next-level trolling and it can actually create a crisis. 

The solution: your software can absolutely do the first round of filtering, but you still need human beings who understand context, nuance, and geographic, language, and cultural differences.
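In practice, that split might look something like this: software auto-hides only the unambiguous violations and routes anything context-dependent to a human review queue (the term lists, queue, and routing below are assumptions for illustration, not a specific product’s design):

```python
# One possible triage split: software hides only unambiguous violations and
# queues context-dependent terms for a human. All lists and names here are
# assumptions for illustration, not a specific moderation product.
from dataclasses import dataclass, field

OBVIOUS_VIOLATIONS = {"<slur-1>", "<slur-2>"}  # placeholders for unambiguous terms
SENSITIVE_TERMS = {"gay", "riot", "shooting"}  # context-dependent: never auto-hide

@dataclass
class ReviewQueue:
    items: list[str] = field(default_factory=list)

    def add(self, reply: str) -> None:
        # A human moderator reads these with context, nuance, and language in mind.
        self.items.append(reply)

def moderate(reply: str, queue: ReviewQueue) -> str:
    words = {word.strip(".,!?").lower() for word in reply.split()}
    if words & OBVIOUS_VIOLATIONS:
        return "hide"  # first-round filtering the software can safely own
    if words & SENSITIVE_TERMS:
        queue.add(reply)
        return "needs-human"  # ambiguous: a person makes the call
    return "allow"
```

The point of the sketch is the routing, not the lists: anything a keyword can’t judge on its own goes to a person who can.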

Handling trolls today has to go beyond stating your social media policy, responding publicly, and then deleting and blocking. We could do that in much simpler days.

Today, we need a sophisticated understanding of how our policies might affect freedom of speech or the possibility of censoring someone.

We have to be aware that there are bad guys out there who are intent on taking us down—it’s not a matter of if, but when. It’s no longer enough to respond publicly and then shut them down.

That could backfire in the worst possible way. 

Gini Dietrich

Gini Dietrich is the founder, CEO, and author of Spin Sucks, host of the Spin Sucks podcast, and author of Spin Sucks (the book). She is the creator of the PESO Model© and has crafted a certification for it in collaboration with USC Annenberg. She has run and grown an agency for the past 19 years. She is co-author of Marketing in the Round, co-host of Inside PR, and co-host of The Agency Leadership podcast.
