What is Crisp’s ‘Safety as a Service’?

Social platforms have traditionally relied on internally built tools and teams of moderators (in-house or outsourced) to handle user-generated content, user reports and bad actors. This approach is expensive and inefficient, and it rarely solves the problem: tools are often not updated to spot the latest risks, and traditional moderation teams simply do not scale.

Crisp’s Safety as a Service combines all of the elements required to deliver a safe social platform. We use the world’s most advanced technology to identify risks in images, videos, text and chat and to detect bad actors, together with an on-demand risk intelligence team of social media risk analysts who intervene, moderate and address issues in minutes. Our team ensures the technology is constantly trained with the latest risk content.

What is the difference between Crisp’s managed service and a moderation tool?

A moderation tool, whether built internally or bought, will help identify issues, but it will need constant updates from your team or the vendor to make sure it continues to identify the latest risks and threats. And a moderation tool alone will not actually address the issue: while many risks can be handled automatically, many others require expert human intervention.

Crisp’s Safety as a Service does not just deliver a moderation tool; we also make sure the tool is constantly trained to identify the risks in images, videos and text, and recognize bad actor behavior. We offer guaranteed levels of accuracy per risk type, and our global on-demand social media risk analysts work alongside your trust and safety and customer service teams to intervene in minutes.

With Crisp’s service, there is no need to train or configure a moderation tool to address the risks, or invest in an external moderation company to review alerts. We take care of everything, leaving you to concentrate on your core business. We also provide quality guarantees on the speed and detection of risks.

How long will it take to integrate with my social platform?

Initial integration takes just a few hours: your social platform connects to our REST APIs, and we provide developer tools and interactive documentation to help you get started. We use webhooks (HTTP callbacks) to return a response on each piece of content.
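As a rough illustration of that round trip, the sketch below submits a piece of content over a REST API and receives the decision on a webhook. The endpoint URL, header, payload fields and callback shape are hypothetical placeholders, not Crisp’s actual API; the interactive documentation defines the real contract.

```python
import requests
from flask import Flask, request, jsonify

# Hypothetical endpoint and credentials for illustration only.
API_URL = "https://api.example.com/v1/content"
API_KEY = "YOUR_API_KEY"

def submit_content(content_id: str, text: str) -> dict:
    """Send one piece of user-generated content for moderation."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "id": content_id,          # your platform's own content ID
            "type": "text",
            "body": text,
            # Webhook URL on your side that receives the moderation decision.
            "callback_url": "https://yourplatform.example.com/webhooks/moderation",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Minimal webhook receiver: the moderation decision arrives asynchronously.
app = Flask(__name__)

@app.route("/webhooks/moderation", methods=["POST"])
def moderation_webhook():
    decision = request.get_json()
    # Illustrative payload shape: {"id": "...", "risk": "hate_speech", "action": "remove"}
    print(f"Apply action {decision.get('action')} to content {decision.get('id')}")
    return jsonify({"status": "received"}), 200
```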

Which languages does Crisp support?

We operate in over 50 different languages. We employ risk professionals across the globe to manage localization and cultural risks. We have dedicated teams for over 15 of our core languages.

Does Crisp deal with hate speech, abuse and trolls?

Absolutely, it’s one of the areas we are incredibly tuned into. We aim to keep everyone safe, and aspire to an internet free from trolls, abuse and hate speech.

How does Crisp deal with harassment?

Our approach to harassment focuses on profiling and identifying the bad actors (the individuals causing the problem) rather than just filtering the language. Crisp’s user and reputation profiling systems work in real time to intervene, identifying bad actors and deciding who sees the content they post.

How does Crisp’s reputation profiling system work?

Crisp has been at the forefront of reputation profiling for over ten years, and has numerous patents granted in the identification of bad actors. A bad actor is an individual or user who demonstrates toxic behavior toward others, such as sexual exploitation, trolling, extremism and online grooming. We have successfully profiled and tracked these individuals with a high degree of accuracy in over 15 different languages. Every action a user takes, whether good or bad, is used as a signal for our profiling systems. Each user has multiple reputation scores, which are utilized in real-time decision making concerning the content they post.
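As a loose illustration of the idea (not Crisp’s actual implementation), a user’s reputation can be modelled as several per-risk scores that are updated as behavioural signals arrive and then consulted when deciding how to handle new content. The class name, risk types and thresholds below are assumptions for the sketch.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class UserReputation:
    """Illustrative per-user reputation record: one score per risk type."""
    user_id: str
    scores: Dict[str, float] = field(
        default_factory=lambda: {"grooming": 0.0, "trolling": 0.0, "extremism": 0.0}
    )

    def record_signal(self, risk_type: str, weight: float) -> None:
        # Risky behaviour pushes the score up; good behaviour (negative weight) lowers it.
        current = self.scores.get(risk_type, 0.0)
        self.scores[risk_type] = min(1.0, max(0.0, current + weight))

    def decision_for(self, risk_type: str, threshold: float = 0.8) -> str:
        # Content from high-risk users can be held or hidden before others see it.
        return "hold_for_review" if self.scores.get(risk_type, 0.0) >= threshold else "allow"

# Usage: repeated trolling signals eventually trigger a hold on that user's posts.
profile = UserReputation(user_id="user-123")
for _ in range(9):
    profile.record_signal("trolling", 0.1)
print(profile.decision_for("trolling"))   # -> "hold_for_review"
```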

Is Crisp’s service secure?

Crisp is the only moderation and safety provider to have achieved SOC 2 compliance for our moderation platform and processes. This professional standard requires certified organizations to maintain strict security controls. We are independently audited every six months, and all our employees (including moderators) undergo extensive vetting and police checks.

How effective is Crisp at detecting sexual predators and online grooming?

We work with leading universities and law enforcement agencies to measure our detection capability. For the last eight years we have maintained 100% accuracy in the identification of online grooming for our customers, with a false positive rate of less than 1%. In 2008, our detection rate for known grooming cases was 100%, with an overall accuracy of 98.4%. Today, we continue to achieve a 100% detection rate, while our overall accuracy rate is now closer to 99.9%.

What types of risky images does Crisp moderate?

Crisp identifies a variety of offensive and graphic images that pose a risk to your social platform, as well as concerns such as poor image quality and advertising suitability. Our typical risk categories include pornography, gore, guns, sexual exploitation, abuse and extremism. We also detect images containing inappropriate language. We moderate images that negatively impact a brand or product, and identify intellectual property violations, copyright and regulatory issues.

Will Crisp identify specific risks in images, videos and text?

We deal with specific risks across all types of content, including images, videos, text and chat, as well as threats posed by bad actors. Once we have established the types of risks you need to identify and manage, our social media risk analysts will create a data set to train our systems. If we cannot achieve the required level of accuracy straight away using automated techniques, we will utilize our on-demand risk analysts to moderate content in near real time, 24/7, so your app or social platform is always safe. Crisp has implemented many types of risk categories, including ad compliance, dating profiles and image relevance.

Will I be able to define my own level of risks and moderation guidelines to protect my social platform?

Absolutely, our service is tailored to your needs and your risk profile, and our platforms have been built to scale. We completely understand that every social platform has different issues and risk tolerances. Our customers have defined hundreds of custom risk categories for text, images and videos, as well as bad actor profiling and reputation management. Our clients range from those with very high tolerances, where only the most extreme content is addressed, to the world’s biggest kids’ brands requiring the most stringent protection.
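Purely as an illustration of how such a tailored policy might be expressed (the category names, thresholds and actions below are hypothetical; the actual configuration is defined with our team during onboarding):

```python
# Hypothetical per-platform moderation policy: custom risk categories with
# their own tolerance thresholds and actions. A kids' brand would typically
# use much lower thresholds than a general-audience platform.
MODERATION_POLICY = {
    "platform": "example-kids-app",
    "risk_categories": {
        "grooming":      {"threshold": 0.10, "action": "remove_and_escalate"},
        "hate_speech":   {"threshold": 0.30, "action": "remove"},
        "profanity":     {"threshold": 0.60, "action": "hide_pending_review"},
        "ad_compliance": {"threshold": 0.50, "action": "flag_for_brand_team"},
    },
}

def action_for(category: str, risk_score: float, policy: dict = MODERATION_POLICY) -> str:
    """Return the configured action once a score crosses that category's threshold."""
    rule = policy["risk_categories"].get(category)
    if rule and risk_score >= rule["threshold"]:
        return rule["action"]
    return "allow"

# Usage: a borderline profanity score is hidden pending review on this platform.
print(action_for("profanity", 0.72))   # -> "hide_pending_review"
```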

Will I be able to use my own moderators?

Yes, we often integrate alongside a customer’s existing moderation and community management teams. On new projects we recommend utilizing our global on-demand social media risk analysts, but we understand that existing social platforms may have their own moderation teams.

Do I need to hire a moderation team?

Crisp is a completely managed service, so there is no need to hire your own moderators. Our experts can escalate and manage issues on your behalf where necessary. Crisp has hundreds of on-demand social media risk analysts around the globe who have been vetted and are continually assessed for quality and speed.

Does Crisp ever crowdsource human moderation?

When human moderation is required we never crowdsource or enlist individuals on an ad hoc basis. Crisp has a global on-demand social media risk analyst workforce. Every member of our team is subject to police vetting checks, and receives specific and ongoing training in risk identification, intervention and management.

We do not recommend crowdsourcing for moderation purposes. Should a risk be identified, you need to know that the person doing the identification and intervention is completely trustworthy. Our social media risk analysts sign strict non-disclosure agreements – if they see risky content they deal with it confidentially.

How scalable is Crisp’s service?

Every month we process billions of pieces of user-generated content and take action in real time. Our platform is capable of processing thousands of moderation requests per second, and we are already trusted by the world’s largest social platforms to protect them.

Keep your social platform safe.
Request a free demo.