Policing your private social network

When your private social network is just getting started and the first members are people you know and the people they know, your community will be a happy, friendly place.

As more people hear about your private social network, though, and interactions expand from friends and acquaintances to strangers with different opinions, that politeness can start to break down.

When that happens, there’s a risk your private social network will start to fall apart.

No one wants to visit a club with boorish members who like to hurl insults and pick fights. Just as a good bar needs a strong bouncer to kick the drunks out, a friendly community needs strict policing to keep things in order.

Lay out the rules:

Your rules should be fair and they should be clear. Make them visible. Prohibit anti-social activities like obscenity, insults and trolling, and state that violations may be punished with a ban. You don’t have to ban everyone who uses a four-letter word if you don’t want to, but you do want to give yourself as much power as possible to protect your private social network.

Enforce the rules:

If you’re reluctant to kick out boorish members of the private social network, give a warning. But no more than one, no matter how much they moan and beg for more chances. You’ll soon find that people who like to insult others in communities will keep doing it. Give them an inch and they’ll take your entire community away.
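To make the one-warning rule concrete, here’s a minimal sketch in Python of how that enforcement logic might be tracked. The Member class and report_violation method are hypothetical placeholders for illustration, not part of any particular platform:

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    warnings: int = 0
    banned: bool = False

class ModerationPolicy:
    """One warning, then a ban, no matter how much they beg."""

    MAX_WARNINGS = 1  # a single warning before the ban

    def report_violation(self, member: Member) -> str:
        if member.banned:
            return f"{member.name} is already banned."
        if member.warnings < self.MAX_WARNINGS:
            member.warnings += 1
            return f"{member.name} warned ({member.warnings}/{self.MAX_WARNINGS})."
        member.banned = True
        return f"{member.name} banned for repeated violations."

# A repeat offender gets exactly one warning, then the boot.
policy = ModerationPolicy()
troll = Member("troll42")
print(policy.report_violation(troll))  # warned (1/1)
print(policy.report_violation(troll))  # banned
```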

There are enough good people around to keep your community thriving. Don’t be afraid to kick out the hooligans with some strict policing. It makes for a much more pleasant neighborhood.

That’s all for now! In the next post, I discuss how to measure your growth.

 

Reddit, Quarantine and the Problem with Vague Policies

Since its creation ten years ago, Reddit has been one of the most liberal social media/networking sites when it comes to moderating unacceptable content; while Facebook has very strict rules around what you can and can’t post, Reddit’s general approach has always been “everything except child pornography, spam and personal information is fine”. This incredibly liberal approach caused Reddit to come under fire as a hotbed for extreme racism and misogyny, and top-level employees left the site in droves as its sheer size and sprawl made it increasingly difficult to manage and maintain.

Just over a month ago, new CEO and site co-founder Steve Huffman proposed a new content policy. This new policy bans illegal content, harassment and bullying, the publication of other people’s private information, and anything that might incite harm or violence against other people (on top of the existing ban on spam and sexual content featuring minors); anything that would be considered “adult content” must be tagged NSFW (not safe for work). On top of this, content which violates “a common sense of decency” is to be quarantined, meaning users must log in and opt in to see it. Quarantined and NSFW content is free from advertisements (i.e., it generates no revenue for Reddit) and does not show up in public search results.
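To illustrate the quarantine mechanics described above (opt-in viewing for logged-in users, no advertising, and exclusion from public search), here’s a rough sketch in Python. The class and field names are assumptions made for illustration, not Reddit’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    quarantined: bool = False
    nsfw: bool = False

@dataclass
class Viewer:
    logged_in: bool = False
    opted_in_to_quarantine: bool = False

def can_view(post: Post, viewer: Viewer) -> bool:
    # Quarantined content is only shown to logged-in users who opted in.
    if post.quarantined:
        return viewer.logged_in and viewer.opted_in_to_quarantine
    return True

def serves_ads(post: Post) -> bool:
    # Quarantined and NSFW content carries no advertisements.
    return not (post.quarantined or post.nsfw)

def appears_in_public_search(post: Post) -> bool:
    # Quarantined and NSFW content stays out of public search results.
    return not (post.quarantined or post.nsfw)
```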

While the policy sounds good in theory, allowing Reddit to maintain the freedom of speech which has made it so popular while distancing itself from transgressive content, the vague wording is already causing some problems.

Twice in his official statement, Huffman suggests that you know pornography and transgressive content “when you see it.” What comes across as explicit sexual behaviour to one culture might seem completely benign to another (e.g., a couple kissing); violent, racist speech may seem acceptable (right, even) to a religious minority, even if everyone else finds it abhorrent. Given that Reddit mostly relies on unpaid moderators to keep content in check, any policy those moderators have to enforce should be clear enough to transcend cultural differences and misunderstandings. Further, Reddit should also make sure that it has enough moderators to keep up with the enormous amount of content posted to the site every day, and apply the new policies to existing subreddits in a timely manner. While some of the most notorious offenders, like the racist subreddit Chimpire, were immediately removed following the implementation of the new content policy, other incredibly disturbing subreddits which feature illegal content (like Watch People Die, which includes incredibly graphic video content from car accidents and even murder scenes) are still standing, with only an age restriction in place.

Banning “illegal” content is also mildly problematic, as different geographic regions have different laws; for example, a Redditor based in Colorado should be perfectly within their rights to promote and sell marijuana via the website, whereas a Redditor based in New York should not.

If you’re running your own private social network, you’ll need to have content policies in place to make sure it’s a safe, welcoming environment for your members; you’ll also have to be mindful that you may need more staff (voluntary or paid) as your community grows. That policy may also need to evolve as your community does. PeepSo will take care of the technical side, with a fantastic admin interface that works right out of the box; it’ll be up to you to come up with a set of rules that is clear, fair, and will allow your community to run smoothly.

Outsourcing Censorship: Who Cleans Up Your Social Network’s Feed?

To keep offensive content out of our newsfeeds, social networking sites can employ one of two strategies: they can actively moderate (screening every single post uploaded), or they can rely on their users to report anything suspicious or unsavory and pass those reports over to content moderators. Larger sites like Twitter and Facebook tend to use the latter strategy and, given the sheer number of posts reported daily, it’s understandable that they’d decide to outsource moderation of reported content.
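As a minimal sketch of that second strategy, here’s what a user-report queue feeding human moderators might look like in Python. The structure and names are illustrative assumptions, not any particular site’s pipeline:

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    post_id: int
    reason: str

class ReportQueue:
    """Reactive moderation: content is only reviewed after a user flags it."""

    def __init__(self) -> None:
        self._queue: deque = deque()

    def flag(self, post_id: int, reason: str) -> None:
        # Nothing is screened up front; a post reaches a moderator
        # only once another user reports it.
        self._queue.append(Report(post_id, reason))

    def next_for_review(self) -> Optional[Report]:
        return self._queue.popleft() if self._queue else None

# Users report posts throughout the day; moderators work through the backlog.
queue = ReportQueue()
queue.flag(101, "graphic content")
queue.flag(102, "hate speech")
report = queue.next_for_review()
while report is not None:
    print(f"Reviewing post {report.post_id}: {report.reason}")
    report = queue.next_for_review()
```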

Image via Tony Adams on Flickr.

Many of the people who spend their days looking through reported content are horrendously underpaid international contractors, making as little as one dollar per hour plus commissions (estimated to bring their average rate of pay up to four dollars an hour). They’re often highly educated and must pass a stringent English test in order to gain the role. Most content moderators end up leaving the role due to the psychological damage caused by hours of looking through incredibly disturbing content, from beheadings to animal torture. Onshore workers are better paid and can have very good physical working conditions, but still end up suffering greatly from what they have to look through each day: in an interview with Wired, a US-based former content moderator describes developing depression and problems with alcohol as a result of the videos he was moderating for YouTube.

While Facebook’s public documentation keeps its content guidelines relatively vague, they’re laid out in explicit detail for its content moderators. A Moroccan contractor recently released his copy to Gawker, and its seventeen pages are divided into sections like “sex and nudity”, “hate content” and “graphic content.” Cartoon urine is okay; real urine is not. Deep flesh wounds and blood are okay; mothers breastfeeding are not. Some posts are judged on their context rather than their content (e.g., videos of animal abuse are okay as long as the person who posted them clearly thinks animal abuse is wrong). Strangely, all photoshopped content (whether positive, negative or neutral) is approved for deletion.

When you think about it, it’s concerning how little most social media users know about the rules they are expected to follow, or about the people and processes involved in enforcing those rules. One of the major benefits of starting your own social network is that you’re playing by your own rules – and you know exactly what those rules are. You decide what is acceptable and what is not, both in terms of common decency and in keeping your community on-message.