How content is moderated on Facebook

Many of us use Facebook every day, but do you know what content is and isn't allowed on the platform? We've been working with Facebook to provide answers to our users' most popular questions and to offer a better understanding of how to enjoy the platform.

Read on to find out how Facebook moderates content and what you can do to stay within their guidelines.

What do you want to know?

Hate speech

Are media organisations ever banned from Facebook for putting up content that attracts hate speech?

There's no place on Facebook for bullying or hate speech. It doesn’t matter if you’re an individual or a media organisation. We pride ourselves on making Facebook a place where people feel welcome, safe and free to express themselves.

So when ideas and opinions cross the line, creating an environment of intimidation and exclusion, we remove that content as quickly as we can. We also have a zero-tolerance policy which makes it clear that individuals and organisations engaging in 'organised hate' are not allowed on the platform, and that praise or support for these figures and groups is also banned.

We always want to find the right balance between giving people a place to express themselves and promoting a welcoming and safe environment for everyone.

Graphic content

I’m disturbed by how quickly graphic content can appear on social media within seconds of an event happening. Are there settings which filter this out so that it does not appear on my children's timeline before it is removed completely?

The ability to share stories, updates and photos instantaneously is one of the biggest appeals of social media. But, as you rightly point out, this can lead to problems when people upload distressing content. Because of this, we take action on offensive content as soon as we're made aware of it. And while it’s not possible to add a filter to your child’s timeline, we have precautions in place to identify and take down graphic content as soon as possible.

These precautions include a safety team of 30,000 people, half of whom are content reviewers. We’ve also made significant investments in artificial intelligence, which is successful in finding offending content, often before it’s reported to us by our community – so things like hate speech, fake accounts, terrorism and graphic violence will be removed faster than ever, often before anyone has seen them at all.

Our Live feature is also being reviewed, and we’re exploring restrictions on who can use it, taking into account factors such as previous Community Standards violations.

I remember a few years ago there was a lot of fuss about breastfeeding pictures being taken down. Where do you draw the line between breastfeeding and soft porn?

Our policies on nudity have definitely become more nuanced over time. Today, we understand that nudity can be shared for a variety of reasons: to raise awareness about a cause, for educational or medical reasons, or as a form of protest. Whenever this intent is clear, we make allowances for the content. Of course, we do restrict some images of female breasts that include the nipple, but we allow images of women actively engaged in breastfeeding and photos of mastectomy scarring. We also allow photographs of paintings, sculptures and other art that depict nude figures.

For more detail on what nudity is and isn’t allowed, take a look at our policy.

What types of content are prohibited? I presume terrorism and drug-related content. What else? And where do I find this information?

Key things we prohibit include hate speech, violence and graphic content, nudity and sexual activity. It is so important for us to strike the right balance between giving people a place to express themselves freely and ensuring that everyone feels safe and welcome on our platform. For that reason, we created Facebook’s Community Standards which clearly outline what is and isn't allowed on the platform. These standards apply around the world to all types of content.

For a full list, see our policy on objectionable content.

My account has been disabled multiple times (without warning) for unexplained violations of Community Standards. If no explanation is given for content removal or account disabling, then how can people abide by the rules of what content is allowed?

Our team of content reviewers make their decisions based on our Standards, and these are published so that everyone can know and (hopefully) abide by them. Of course, we recognise that sometimes we make mistakes, which is why everyone has the right to request a review of our decision to take down content or disable an account.

If you request this extra review, the content will be sent to our team and looked at by one of our team members, normally within 24 hours. If we agree that we made a mistake, we'll notify you, and your post, photo or video will be restored.

Fake profiles

How do you decide what is a fake profile and what is simply someone trying to be less identifiable? For example, teachers or policemen using their first and middle names?

There's no place for fake accounts on Facebook, and we go to great lengths to find and remove them. That said, we don’t want to delete genuine profiles and we recognise the difference between people using different or middle names and those who are creating fake accounts to break our rules. All of this is taken into consideration if an account is reported to us or we identify it as a potentially fake account.

Fake accounts are often the starting point for other violations, and while they can be used to bully and harass people, the majority are created for scams and to spread money-motivated spam. Most of the profiles Facebook takes down are automated accounts, which are easy to spot; manually created fake accounts are a little trickier to identify.

Who is moderating?

Who exactly decides the content guidelines? Who deems what’s appropriate and what isn’t?

Our Community Standards are a clear set of rules showing what is and isn't allowed on Facebook. They apply around the world and to all types of content. We created them with the help of people who use our platform and experts in fields such as technology and public safety. For example, our policies on hate speech were developed alongside academics, human rights organisations and our community.

We’re also planning to create an oversight board to independently review our content decisions, especially the most difficult and contested.

If you use people to decide what is and is not acceptable, do you recruit them from diverse backgrounds and check their opinions first?

Across our business, diversity is a top priority, and that includes who we employ as content reviewers – people who work in all major languages and are based all over the world. We also train them in a way that ensures personal biases aren't part of their decisions. Instead, they follow our Community Standards guidelines.

We’re also planning to develop an external, independent oversight panel to review Facebook's content decisions. The panel will consider the most difficult and contested cases to help ensure outcomes are balanced and fair. It is a key priority of ours to form a panel that is not ideologically skewed but is both balanced and diverse.

Visit your.fb.com to find out more about social media and mental health, safe use of the internet, content governance, and privacy.