Social injustices force brands to rethink social media moderation

Consumers want brands to take a stand on ending systemic racism and other social injustices. According to a survey by Edelman, the global public relations firm, more than three-quarters of respondents (77%) said it is “deeply important that companies respond to racial injustices to earn or keep their trust.” Almost half (48%) said that how a brand responds to the racial injustice protests has a big effect on their likelihood to purchase from it, and 37% said they will buy or boycott a brand based on its response.
Open the social media platform of your choice and you’ll find images of social justice movements, strong opinions on the forthcoming U.S. election, and heartfelt commentary on how everyone is coping with the global pandemic. All of this content appears on the same platforms where companies promote their sneakers, cars, pizza, and beauty products, often blending together in a single feed.
As this happens, social media managers and communications experts work quickly to engage consumers and craft empathetic, non-polarizing responses. Yet even the most sophisticated brands are struggling. Whether it is the Nike “For Once Don’t Do It” campaign or the Starbucks response to baristas wearing #BlackLivesMatter t-shirts, brands walk a fine line: removing hate speech and racist comments without stifling free expression around sensitive topics their consumers care about.
During the last decade, we’ve all had to figure out how to use social media to build community and engage directly with our consumers, because that is where they are. Experts, influencers, and marketers have developed best practices that, for the most part, have worked across brands, industries, products, and services. This year, though, has thrown a wrench into those best practices, rendering many of them outdated as we navigate uncharted territory. At a time when every consumer touchpoint matters more than ever, one public misstep on social media can wreak havoc on a brand’s reputation.
Brands must rethink social media moderation
According to our own research, 64% of consumers hold brands responsible for addressing inappropriate or harmful comments made on their owned social media pages, and 63% expect brands to address inappropriate or harmful content on their pages within an hour, with half of those people expecting it to be taken care of instantly.
At the same time, brands have to figure out their messaging on social justice movements, topics once considered too risky and taboo for most brands to discuss. Some have walked up to the line of speaking out without stepping over it; others have stumbled while trying to appear supportive. However they’ve approached these issues, major gaps remain between what consumers expect and how brands perform.
This is partly because brands must respond in real time while scrambling internally to shift values and examine diversity, equality, and inclusion policies. It’s also because responding effectively in real time requires an informed 24/7/365 team, which is both labor- and cost-intensive.
Some brands have chosen to rely on automatic word filters to overcome the challenge of real-time moderation, but that can backfire significantly. Automatic word filters lack the nuance to understand context, so they often censor legitimate speech or, worse, miss hate-filled and racist comments. And with massive increases in volume, social media teams simply can’t keep up.
To truly close the gap between what consumers expect and how brands perform, brands must rethink their social media policies and their approach to social media moderation. Each issue, and the content it generates, is as diverse as the people supporting it. The same issue morphs significantly across markets, countries, cultures, and people. All of this makes a “one-size-fits-all” social media moderation policy incredibly difficult to maintain.
Automatic moderation plus traditional policies no longer work
Brands that participate online likely use a social media monitoring tool with automatic word filters. These tools range from simple to sophisticated, but even if a tool is built with artificial intelligence, it is rarely clever enough to understand nuance or context.
For instance, a racist or hate-filled comment may appear on your Facebook page, but the AI doesn’t pick it up. Further down the page, someone else comments that there is a racist comment above and asks why it hasn’t been removed. The AI removes that second comment because of the way it’s worded.
The brand is then eviscerated for leaving the original racist comment up while removing the comment pointing it out, which creates an even bigger backlash, and the cycle quickly spirals out of control.
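To make that failure mode concrete, here is a minimal sketch of a naive keyword-based filter of the kind described above. The blocklist and sample comments are hypothetical, and real moderation tools are more elaborate, but the underlying blind spot is the same:

# A minimal, hypothetical sketch of a naive keyword-based moderation
# filter, illustrating the failure mode described above.

BLOCKLIST = {"racist", "hate"}  # words that trigger removal

def should_remove(comment: str) -> bool:
    # Flag the comment if any of its words matches the blocklist.
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return not words.isdisjoint(BLOCKLIST)

comments = [
    "Go back where you came from.",          # abusive, but uses no listed word
    "Why is that racist comment still up?",  # names the problem, so it matches
]

for c in comments:
    print(f"remove={should_remove(c)} | {c}")

# Output:
# remove=False | Go back where you came from.
# remove=True | Why is that racist comment still up?
#
# The filter misses the actual abuse and deletes the comment calling
# it out, producing exactly the backlash scenario described above.

Because the filter matches words rather than meaning, the comment that names the problem is the one that gets removed; only moderation that accounts for context can tell the two apart.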
However, deleting the original post can cause problems of its own. In the case of #BlackLivesMatter, comments on brands’ social media pages have been deleted en masse because of the strong, passionate language within them. This creates the impression that the brand is censoring discussion of social injustice, which provokes further backlash.
Brands have to figure out new rules of engagement while still promoting free speech, their own values, and their products and services. They have to revise their social media policies with systemic racism and social injustices in mind, and build nuance and context into their social media moderation.