Enjoy the Fruits of User-Generated Content with Social Media Moderation

Most of us have some sort of social media listening or monitoring program set up. From Google or Talkwalker alerts to Brand24, Zignal Labs, or Crisp, the tools range from simple to sophisticated. Some rely on artificial intelligence alone, while others combine AI with human intelligence.

What you use depends on your needs, your goals, and how many online mentions your organization or brand receives. But what happens when you reach the point where conversations need to be moderated because a human can't do it alone?

User-generated content is the pinnacle of a brand’s social media efforts. When consumers buy and share how they use a product or service on their own social platforms, the brand enjoys significant boosts, both in awareness and trust. 

The numbers behind user-generated content are mind-boggling. More than one billion people use Instagram, nearly two billion use Facebook, more than 400 million use Pinterest, 800 million use TikTok, and nearly 500 million use TripAdvisor. All of these users represent a massive opportunity for brands, if their content is harnessed and moderated correctly.

As many as 90% of shoppers report that user-generated content influences their purchase decisions, according to a survey from TurnTo and Ipsos. It outranks all other forms of marketing, including search engines (87%), promotional emails (79%), display ads (76%), and social media (63%).

There are also, of course, potential risks. No brand wants hate speech, profanity, pornography, bullying, or other harmful content on its owned social media pages, let alone in the comments on its user-generated content. In fact, a recent survey found that 43% of consumers would not buy if there were negative comments about a brand online, and 39% would not buy an advertised product or service if the ad drew derogatory, offensive, or hurtful comments.

How does a savvy brand navigate the hidden risks of user-generated content to reach the mecca of consumers singing the praises of its products or services? Two words: moderate well.

There are four types of social media moderation that, when done well, allow a brand to enjoy the fruits of user-generated content while providing a safe online space for its consumers and avoiding censorship: pre-moderation, post-moderation, reactive moderation, and automated moderation.

Pre-Moderation Is the Least Risky

Pre-moderation simply means that user-generated content is moderated before it appears online. Once a user clicks to submit a post, it’s then sent to a team of moderators for review before it goes live.

This is the least risky way to handle social media moderation because it ensures inappropriate content never makes it online, which allows moderators to maintain control of the brand's reputation. Still, it isn't without downsides. Users can become impatient when forced to wait for their comments to appear, not realizing the posts are moving through a virtual moderation queue, and they may respond with negative comments or complaints.
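To make the workflow concrete, here's a minimal sketch of a pre-moderation queue in Python. The Post and PreModerationQueue classes and the blocklist rule are hypothetical stand-ins for whatever tooling, and human judgment, a brand actually uses; nothing here reflects a specific vendor's product.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


class PreModerationQueue:
    """Hypothetical pre-moderation queue: nothing goes live until approved."""

    def __init__(self):
        self.pending = deque()  # posts waiting for a moderator
        self.live = []          # posts approved and published

    def submit(self, post: Post) -> None:
        # The user clicks "post," but the content only enters the queue.
        self.pending.append(post)

    def review_next(self, approve) -> None:
        # A moderator pulls the oldest post and decides its fate.
        if not self.pending:
            return
        post = self.pending.popleft()
        if approve(post):
            self.live.append(post)  # only now does it appear online
        # Rejected posts are simply never published.


# A naive blocklist standing in for human judgment.
BLOCKLIST = {"spamword", "slur"}
queue = PreModerationQueue()
queue.submit(Post("fan42", "Love this product"))
queue.review_next(lambda p: not BLOCKLIST & set(p.text.lower().split()))
print(queue.live)  # [Post(author='fan42', text='Love this product')]
```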

Pre-moderation works best for brands whose products or services aren't time-sensitive, or for communities where children are involved. It typically doesn't work for forums or virtual worlds, where free-flowing, real-time conversation is key.

Post-Moderation Is More Acceptable to Users

With post-moderation, user-generated content appears online instantly while simultaneously joining a virtual queue for review. This allows users to engage in real-time discussion, so the flow of conversation isn't compromised, while still allowing the brand to ensure what's being posted isn't harmful or inappropriate. If a post is found to be either, it is removed or hidden.
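For contrast with the pre-moderation sketch above, here's the same hypothetical queue flipped to post-moderation, reusing the Post and PreModerationQueue classes: content is published immediately and reviewed afterward.

```python
class PostModerationQueue(PreModerationQueue):
    """Hypothetical post-moderation: content goes live immediately,
    then joins the review queue in parallel."""

    def submit(self, post: Post) -> None:
        self.live.append(post)     # appears online right away...
        self.pending.append(post)  # ...and still awaits review

    def review_next(self, approve) -> None:
        if not self.pending:
            return
        post = self.pending.popleft()
        if not approve(post):
            self.live.remove(post)  # remove or hide harmful content
```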

Post-moderation is the type of moderation nearly every brand in the world uses to protect its community and reputation while keeping the content truly user-generated. To safeguard these real-time discussions, post-moderation should be speedy, accurate, and, most importantly, running around the clock.

Reactive Moderation Is Highly Risky

Reactive moderation relies on users to report inappropriate content when they see it. Posts are typically only checked if a complaint is made, usually via the “report abuse” button on the social network itself. This requires the platform to alert the brand, which can take days or even weeks.

This type of social media moderation is scalable: an online community can grow without increasing the marketing team's workload or the cost of moderating. However, posts are only removed once they're reported, which may be too slow in an era of instant gratification. Our survey also found that 34% of consumers expect a brand to respond to negative comments instantly, and 55% expect a response within 24 hours. Those expectations don't align with reactive moderation.

Reactive moderation is a good choice for companies with an adult audience and a fearless brand image. However, it is unsuitable for communities where safety is critical or where your brand's reputation must be protected from harmful content at all times.

Automated Moderation Is Fast and Inexpensive

The more traditional social media moderation tools are automated. They are powered by artificial intelligence, which allows a social media manager to set up triggers based on the brand’s policy. In the past, this was an inexpensive and satisfactory way to moderate user-generated content and comments on a brand’s owned social media pages.
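As an illustration only, here's a minimal sketch of what policy-based triggers might look like, assuming a simple pattern-matching approach in Python. The rules and actions are hypothetical, and real tools layer machine learning on top of logic like this.

```python
import re

# Hypothetical trigger rules a social media manager might configure,
# each mapping a pattern to the action the brand's policy prescribes.
TRIGGERS = [
    (re.compile(r"crypto giveaway|work from home", re.I), "hide"),  # spam
    (re.compile(r"\b(idiot|moron)\b", re.I), "flag_for_review"),    # abuse
    (re.compile(r"https?://", re.I), "flag_for_review"),            # links
]


def moderate(comment: str) -> str:
    """Return the first matching action, or allow the comment through."""
    for pattern, action in TRIGGERS:
        if pattern.search(comment):
            return action
    return "allow"


print(moderate("Join my crypto giveaway now!"))  # hide
print(moderate("Great post, thanks!"))           # allow
```

Note the weakness: a rule like the abuse trigger above would flag a comment quoting an insult in order to condemn it just as readily as one hurling it, which is exactly the context problem described below.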

The risk in today's socially charged environment, though, is that automated moderation doesn't understand context and nuance. It can't reliably distinguish a comment full of hate speech from one that speaks out against social injustice.

Although this is a fast and relatively inexpensive option, it can create a fire hose of alerts, which have to be checked manually to ensure no harmful content is missed and no one is accidentally censored.

Some Kind of Social Media Moderation Is a Must 

All in all, social media moderation helps a brand understand consumer sentiment, control spam comments, handle trolls and other criticism, and, most importantly, scale an online campaign by giving prospective buyers the chance to see how their peers use a product or service.

A brand cannot participate on social media without having some kind of moderation in place. What you choose is up to you, your goals, and your policies. 

Gini Dietrich

Gini Dietrich is the founder and CEO of Spin Sucks, host of the Spin Sucks podcast, and author of Spin Sucks (the book). She is the creator of the PESO Model© and has crafted a certification for it in collaboration with USC Annenberg. She has run and grown an agency for the past 19 years. She is co-author of Marketing in the Round, co-host of Inside PR, and co-host of The Agency Leadership podcast.
