March 21, 2013 — s4r4hbrown
There is plenty of room for debate about appropriate responses to hate speech in a range of contexts. People might not want to censor Holocaust denial or crude racism on a personal blog, while having zero tolerance for comparatively subtle expressions of antisemitism from elected representatives. Here Robin Shepherd and Mike Whine offer eloquent, but opposing, perspectives on a French court's recent decision to identify antisemitic tweeters.
Facebook has a clear policy on hate speech:
Content that attacks people based on their actual or perceived race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease is not allowed. We do, however, allow clear attempts at humor or satire that might otherwise be considered a possible threat or attack. This includes content that many people may find to be in bad taste (ex: jokes, stand-up comedy, popular song lyrics, etc.).
A report just released by the Online Hate Prevention Institute (OHPI) contends that Facebook staff sometimes lack the expertise to identify antisemitic hate speech – Holocaust denial and sites promoting the Protocols of the Elders of Zion, for example – and thus fail to remove material which would seem to fall foul of Facebook's own guidelines. Andre Oboler, who compiled the report, explained in a piece he wrote back in 2009 why he thinks this issue is important:
The issue here is not about creating UK law to ban Holocaust denial. It is about having companies publish their terms of service and then holding them to those terms. It is about requiring a response in reasonable time when a complaint is made. It is about transparency of process. It is about actively working to prevent not only a spread of racism but a spread of hate more generally including tackling problems such as cyberbullying. With these things in place, adults can decide if they accept those terms and can decide whether a social media site is appropriate for children under their care.
OHPI’s report offers helpful, precise analysis of why certain images are antisemitic, including examples which focus on Israel, and conflate Zionism with Nazism. This is a form of antisemitism identified in the EUMC working definition – which OHPI is urging Facebook to adopt.
The screenshots at the end of the report demonstrate Facebook’s tendency to fail to (fully) recognize the hateful nature of some images and ‘jokes’. I largely avoid Facebook, so am not sure how consistently it polices racist and other hateful content more generally. If it aims to have a zero tolerance policy for such matters it should certainly adopt OHPI’s recommendations. Whatever one’s views are about freedom of expression, it is reasonable that Facebook should set its own standards and implement them consistently.
Some of OHPI's other suggestions, many aimed at tightening up Facebook's reporting procedures, seem very sensible. Apparently people often message the administrator of a dubious page, thinking they are reporting a problem to Facebook; this may lead to those users being targeted individually for further racist abuse. Another practical suggestion is that complaints against users with previous form be prioritised. OHPI also welcomes Facebook's recent adoption of a policy that inhibits the posting of hateful material by making it more difficult to do so anonymously:
In particular we commend Facebook for the new approach to pages which implements our suggestion that page owners be prevented from hiding behind the anonymity a page provides when they post hateful content. The new policy means content that falls short of hate speech (which would require removal), but is nonetheless hateful, must either be removed by the poster, or they must associate their profile with the page that makes the comments. This is done by optionally listing the account of the page administrators on a page's about tab.
There are other areas where Facebook seems to be failing to comply with its own policies and processes too – here’s a recent piece on double standards with regard to images of sexual violence. Even if one would not want to see any of the (hateful) material highlighted in OHPI’s report banned outright from the internet, it seems reasonable for a social networking site to establish and maintain high community standards and take expert advice if it is not equipped to determine how to implement its own policies.