Instagram Launches New Anti-Bullying Features

To kick off National Bullying Prevention Month, Instagram has launched two new features designed to “reduce negative interactions, including bullying and harassment, in comments.”

Over the past 15 months, Instagram has taken measures to reduce harmful content across its platform, including warnings in comments and captions to “ensure people reconsider their words before posting something that is potentially offensive.”

The platform’s latest launch includes a feature that automatically hides comments similar to others that have been reported. Instagram is also expanding its comment warnings to deliver proactive, real-time feedback in multiple languages, prompting users to reflect on the potential consequences before they post.

Since Instagram made this commitment, more than 35 million accounts have used its Restrict feature to “safely control their Instagram experience.”

Harmful Content Is Spreading and Brands Need a Plan

Cyberbullying has been on the rise for years. A Pew Research Center study found that 59% of U.S. teens have been bullied or harassed online, and those numbers continue to climb. As harmful content such as cyberbullying and hate speech continues to spike, so do the risks for brands.

Our recent consumer survey found that nearly 50% of people have seen a significant increase in harmful content online, and consumers are holding brands accountable for managing it.

Whether you’re a social media platform or a consumer goods brand, consumers expect you to lead the way in protecting your audiences, and they will hold you accountable if you don’t have a plan to proactively address harmful content.