Case study: Media Brand
Detecting online abuse for the well-being of talent
When a television network's reality competition program broke new ground on societal issues, the move fueled a serious online clash among the show's fervent fans. The spark that lit the fire was a fan's online review of the show that included controversial commentary about a new contestant. After the comments went viral on Twitter, fans of the show—as well as other former contestants—took sides, directing hurtful and derogatory commentary on Twitter and Instagram at either the past competitor or the new contender. With attacks and abuse escalating quickly, the network needed help to swiftly assess the situation and the individuals involved in order to support the well-being and safety of its VIPs.
Crisp provided the network with the risk intelligence it needed to intervene appropriately. Because the network had limited visibility across brand-owned and closed social media pages where the controversy was playing out, it benefited immediately from Crisp's unique approach. Using a mix of AI, machine learning and human intelligence, Crisp monitored the talents' own Instagram pages and Twitter mentions for a range of issues, including abuse toward specific contestants. Crisp's threat analysts monitored the dialogue and delivered the intelligence the client needed to assess the individuals involved, judge the severity of each threat, and take appropriate action. To identify valid threats often missed by automatic filters, human analysts cut through the usual background noise to detect meaningful trends of abuse. With this actionable intelligence in hand, the network was able to identify which commentary might escalate and which was likely to subside. Crisp now protects the network's VIPs across its music, comedy and entertainment brands from bad actors spreading hate speech and potentially abusive online commentary.
“With attacks and abuse escalating quickly, the network needed help to swiftly assess the situation and the individuals involved.”