Reddit, Quarantine and the Problem with Vague Policies

Since its creation ten years ago, Reddit has been one of the most liberal social media/networking sites when it comes to moderating unacceptable content; while Facebook has very strict rules about what you can and can’t post, Reddit’s general approach has always been “everything except child pornography, spam and personal information is fine”. This hands-off approach saw Reddit come under fire as a hotbed for extreme racism and misogyny, and top-level employees left in droves as the site’s sheer size and sprawl made it increasingly difficult to manage and maintain.

Just over a month ago, new CEO and site co-founder Steve Huffman proposed a new content policy. This policy bans illegal content, harassment and bullying, the publication of other people’s private information, and anything that might incite harm or violence against other people (on top of the existing bans on spam and sexual content featuring minors); anything that would be considered “adult content” must be tagged NSFW (not safe for work). On top of this, content which violates “a common sense of decency” is to be quarantined, meaning users must log in and opt in to see it. Quarantined and NSFW content is free from advertisements (i.e., it generates no revenue for Reddit) and does not show up in public search results.

While the policy sounds good in theory, allowing Reddit to maintain the freedom of speech which has made it so popular while distancing itself from transgressive content, the vague wording is already causing some problems.

Twice in his official statement, Huffman suggests that you know pornography and transgressive content “when you see it.” What comes across as explicit sexual behaviour in one culture might seem completely benign in another (e.g., a couple kissing); violent, racist speech may seem acceptable (right, even) to a religious minority, even if everyone else finds it abhorrent. Given that Reddit mostly relies on unpaid moderators to keep content in check, any policy those moderators have to enforce should be clear enough to transcend cultural differences and misunderstandings. Further, Reddit needs to ensure it has enough moderators to keep up with the enormous amount of content posted to the site every day, and to apply the new policies to existing subreddits in a timely manner. While some of the most notorious offenders, like the racist subreddit Chimpire, were removed immediately after the new content policy took effect, other deeply disturbing subreddits featuring illegal content (like Watch People Die, which hosts extremely graphic video from car accidents and even murder scenes) are still standing, with only an age restriction in place.

Banning “illegal” content is also mildly problematic, as different geographic regions have different laws; for example, a Redditor based in Colorado should be perfectly within their rights to promote and sell marijuana via the website, whereas a Redditor based in New York should not.

If you’re running your own private social network, you’ll need content policies in place to make sure it’s a safe, welcoming environment for your members; you’ll also have to be mindful that you may need more staff (voluntary or paid) as your community grows. That policy may also need to evolve as your community does. PeepSo will take care of the technical side, with a fantastic admin interface that works right out of the box; it’ll be up to you to come up with a set of rules that is clear, fair, and will allow your community to run smoothly.