Facebook has published its Community Standards Enforcement Report for the fourth quarter of 2020. The report provides metrics on how Facebook enforced its policies from October through December 2020, covering 12 policies on Facebook and 10 on Instagram.
Enforcement action highlights
Last quarter, we shared the prevalence of hate speech on Facebook for the first time to show the percentage of times people see this type of content on our platform. This quarter, hate speech prevalence dropped from 0.10-0.11% to 0.07-0.08%, or seven to eight views of hate speech for every 10 000 views of content. The prevalence of violent and graphic content also dropped, from 0.07% to 0.05%, and adult nudity content from 0.05-0.06% to 0.03-0.04%.
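Prevalence here is simply the share of all content views that were views of violating content. A minimal sketch of that arithmetic (the counts below are hypothetical, chosen only to land in the reported range):

```python
def prevalence(violating_views: int, total_views: int) -> float:
    """Share of content views that were of violating content, as a percentage."""
    return 100.0 * violating_views / total_views

# Hypothetical counts: 7 views of hate speech out of 10 000 total views.
rate = prevalence(violating_views=7, total_views=10_000)
print(f"{rate:.2f}%")  # 0.07% -- the low end of the reported Q4 range
```

Note that prevalence measures how often violating content is *seen*, not how many pieces exist, so a single widely viewed post can move the number more than many obscure ones.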
Our improvements in prevalence are mainly due to changes we made to reduce problematic content in News Feed. Each post is ranked by processes that take into account a combination of integrity signals, such as how likely a piece of content is to violate our policies, as well as signals we receive from people, such as from surveys or actions they take on our platform like hiding or reporting posts.
Improving how we use these signals helps tailor News Feed to each individual’s preferences, and also reduces the number of times we display posts that later may be determined to violate our policies.
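Facebook has not published its ranking model, but the idea described above, blending integrity signals with signals from people into a single score that demotes likely-violating posts, might be sketched as follows. All function names, signal names, and weights here are hypothetical illustrations, not Facebook's actual system:

```python
def rank_score(relevance: float, violation_prob: float,
               hide_rate: float, report_rate: float,
               integrity_weight: float = 2.0) -> float:
    """Hypothetical sketch: demote a post's base relevance score using
    integrity signals.

    violation_prob -- model-estimated chance the post violates policy (0-1)
    hide_rate, report_rate -- fraction of viewers who hid/reported it (0-1)
    """
    # Average the penalty signals, scale by a tunable weight, cap at 1.
    penalty = min(1.0, integrity_weight * (violation_prob + hide_rate + report_rate) / 3)
    return relevance * (1.0 - penalty)

# A post likely to violate policy ranks below a clean post of equal relevance.
clean = rank_score(relevance=1.0, violation_prob=0.01, hide_rate=0.0, report_rate=0.0)
risky = rank_score(relevance=1.0, violation_prob=0.80, hide_rate=0.10, report_rate=0.05)
print(clean > risky)  # True
```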
Our proactive rate, the percentage of content we took action on that we found before a user reported it to us, improved in certain problem areas, most notably bullying and harassment. Our proactive rate for bullying and harassment went from 26% in Q3 to 49% in Q4 on Facebook, and from 55% to 80% on Instagram. Improvements to our AI in areas where nuance and context are essential, such as hate speech or bullying and harassment, helped us better scale our efforts to keep people safe.
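As defined above, the proactive rate is the proactively found share of all actioned content. A small sketch of that ratio (the counts are hypothetical, picked to match the reported Q4 Facebook figure):

```python
def proactive_rate(found_proactively: int, actioned_total: int) -> float:
    """Percentage of actioned content found before any user reported it."""
    return 100.0 * found_proactively / actioned_total

# Hypothetical: 49 of every 100 actioned pieces were found proactively.
print(f"{proactive_rate(49, 100):.0f}%")  # 49%
```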
We’re slowly continuing to bring back our content review workforce globally, though we anticipate our ability to review content will be impacted by Covid-19 until a vaccine is widely available. With limited capacity, we prioritise the most harmful content for our teams to review, such as suicide and self-injury content.
On Facebook in Q4 we took action on:
- 6.3 million pieces of bullying and harassment content, up from 3.5 million in Q3 due in part to updates in our technology to detect comments
- 6.4 million pieces of organised hate content, up from 4 million in Q3
- 26.9 million pieces of hate speech content, up from 22.1 million in Q3 due in part to updates in our technology in Arabic, Spanish and Portuguese
- 2.5 million pieces of suicide and self-injury content, up from 1.3 million in Q3 due to increased reviewer capacity
On Instagram in Q4 we took action on:
- 5 million pieces of bullying and harassment content, up from 2.6 million in Q3 due in part to updates in our technology to detect comments
- 308 000 pieces of organised hate content, up from 224 000 in Q3
- 6.6 million pieces of hate speech content, up from 6.5 million in Q3
- 3.4 million pieces of suicide and self-injury content, up from 1.3 million in Q3 due to increased reviewer capacity
This year, we plan to share additional metrics on Instagram and add new policy categories on Facebook. We’re also working to make our enforcement data easier for people to understand by making these reports more interactive.
Our goal is to lead the technology industry in transparency, and we’ll continue to share more enforcement metrics as part of this effort. We also believe that no company should grade its own homework. Last year, we committed to undertaking an independent, third-party audit of our content moderation systems to validate the numbers we publish, and we’ll begin this process this year.
We will continue building on this progress and improving our technology and enforcement efforts to keep harmful content off of our apps.
“Our goal is to get better and more efficient at enforcing our Community Standards. We do this by increasing our use of Artificial Intelligence (AI), by prioritising the content that could cause the most immediate, widespread, and real-world harm, and by coordinating and collaborating with outside experts.” ~ Kojo Boakye, Director of Public Policy, Africa.
Guy Rosen is Facebook’s VP: Integrity. He oversees Facebook’s work on safety & integrity.
Want to continue this conversation on The Media Online platforms? Comment on Twitter @MediaTMO or on our Facebook page. Send us your suggestions, comments, contributions or tip-offs via e-mail to email@example.com.