#@%*?!!! Word Filters
If you manage a community then chances are you already have word filtering in place, or you’re considering it. It’s seen as the easy, low-cost way to ensure users aren’t saying ‘bad things’ and turning your friendly community into a playground squabble or a bar brawl.
The most common type of filtering is the good old black list: a dictionary of words which are disallowed. This type of filtering works by matching the words a user types against the dictionary it holds; when a match is found, the offending word is either masked in some way when it is displayed, or not displayed at all. Slightly less common are white list filters, which work in the opposite way to black lists: their dictionaries contain the only words that can be published. White list filters tend only to be found on communities for children, since they are obviously fairly restrictive.
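As a rough sketch of how those two approaches might work (the word list, function names and masking style here are purely illustrative, not any particular vendor’s implementation):

```python
import re

# Purely illustrative black list; a real deployment would hold a far larger dictionary.
BLACK_LIST = {"badword", "slur"}

def mask_message(message: str, black_list: set[str] = BLACK_LIST) -> str:
    """Replace any black-listed word with asterisks and leave everything else untouched."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in black_list else word

    # \w+ picks out whole words, so the check runs against complete words rather than substrings.
    return re.sub(r"\w+", mask, message)

# A white list filter inverts the test: only messages made up entirely of approved words get published.
def passes_white_list(message: str, white_list: set[str]) -> bool:
    """Return True only if every word in the message appears in the white list."""
    return all(word.lower() in white_list for word in re.findall(r"\w+", message))
```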
Filters are a massively important tool in the community management kit; however, there’s a danger they become the default tool of choice, get over-used and start to impact negatively on the user experience. The result can be an environment that is free of ‘bad words’ but is also massively frustrating to use. Recently I played about with an online game which was using black list filtering and accidentally discovered they had blocked the word ‘black’, presumably in an attempt to prevent racist speech. On the surface that may seem like a logical thing to do, but step back and consider the situation: how often would the word ‘black’ be used in a racist context, compared with the number of times it’s likely to be used in an innocent way? Anyone else like Amy Winehouse? Yeah, I loved Back to ***** erm…Back to that colour that’s a bit darker than grey.
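To see how easily that kind of entry over-reaches, drop ‘black’ into the illustrative list from the sketch above and the same masking pass happily censors an innocent album title:

```python
>>> mask_message("Yeah, I loved Back to Black", {"black"})
'Yeah, I loved Back to *****'
```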
The fact is, once you take out all the major profanities, racist slurs and naughty explicit words, there are still a myriad of perfectly acceptable words left which can be slung together to convey all manner of abuse, sexual behaviour, hate speech and more. The more lateral thinking you employ to use filters to block words which are normally ‘good’ but can be used in bad ways, the more restrictive speech in your community becomes. The more restrictive the communication, the worse the experience gets, and ultimately the price will be paid in user retention and growth.
So what’s the answer? Well, the trick is to look beyond filtering and consider profiling the behaviour you’re trying to mask in order to stop it. At Crisp we advocate blocking all the obvious ‘nasty words’ to help maintain a friendly environment, but leaving the less obvious combinations of good words used with bad intent to engines that consider the context and the user’s behaviour, and build a profile of that person. These profiles allow Crisp users to quickly identify their community’s troublemakers and take action against them specifically, rather than effectively punishing everyone by over-using filters while still not tackling the root cause: the troublemaker.
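Crisp’s engine itself is proprietary, but as a toy illustration of the general idea of profiling people rather than filtering everyone, a simple per-user tally of flagged messages (the class, threshold and names below are invented purely for this sketch) might look like:

```python
from collections import Counter

class BehaviourProfile:
    """Toy per-user profile: counts how often each user's messages are flagged.

    A sketch of the general idea only, not how Crisp's profiling actually works.
    """

    def __init__(self, flag_threshold: int = 5):
        self.flags = Counter()          # user_id -> number of flagged messages
        self.flag_threshold = flag_threshold

    def record_flag(self, user_id: str) -> None:
        """Note that one of this user's messages was flagged (by a filter, a report, etc.)."""
        self.flags[user_id] += 1

    def troublemakers(self) -> list[str]:
        """Users whose flagged-message count has crossed the threshold."""
        return [user for user, count in self.flags.items() if count >= self.flag_threshold]
```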
In essence, filtering is great at masking the ‘unsightly’: the profanity and casual abuse. What filtering cannot do is consider context, nor can it identify the protagonists behind more subtle, ongoing abuse such as cyberbullying or child grooming. For that you need another, more powerful, tool in your kit, and that’s where Crisp’s behaviour profiling comes into play.
Emma Monks, Head of Community