Facebook increasing AI usage for content moderation
- While there are still around 15,000 human reviewers across over 50 time zones, Artificial Intelligence (AI) now takes the lead in proactively removing content that goes against Facebook’s policies.
Facebook, the world’s largest social media platform, shared in a recent briefing that technology has started to play a more central role in content moderation on the platform. The shift has been made in order to better prioritise reported content.
Between April and June this year, 99.6% of fake accounts, 99.8% of spam, 99.5% of violent and graphic content, 98.5% of terrorist content, and 99.3% of child nudity and sexual exploitation content that Facebook removed was identified proactively by its technology. In all, 95% of the content Facebook removed was detected and taken down without needing someone to report it.
Furthermore, the company shared that they now prioritize content that needs reviewing based on several factors, such as virality, severity and likelihood of violation. Prioritizing content in this way, regardless of when it was shared on Facebook or whether it was reported by a user or detected by their technology, allows them to get to the highest-severity content first. It also means the reviewers in their Global Operations team spend more time on complex content issues where judgment is required, and less time on lower-severity reports that technology is capable of handling.
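Facebook did not share the exact formula behind this ranking, but the general pattern is a priority score computed from those signals. Below is a minimal sketch in Python, with hypothetical field names and weights chosen purely for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ReportedPost:
    post_id: str
    virality: float        # e.g. predicted reach, normalised to 0..1 (hypothetical)
    severity: float        # how harmful the suspected violation is, 0..1 (hypothetical)
    violation_prob: float  # estimated likelihood the post violates policy, 0..1

def priority_score(post: ReportedPost) -> float:
    # Illustrative weighting only: severity dominates, while virality and the
    # likelihood of violation scale how urgent the review is.
    return post.severity * (0.5 * post.virality + 0.5 * post.violation_prob)

def review_queue(posts: List[ReportedPost]) -> List[ReportedPost]:
    # Highest-priority content is reviewed first, regardless of when it was
    # posted or whether it was user-reported or machine-detected.
    return sorted(posts, key=priority_score, reverse=True)

queue = review_queue([
    ReportedPost("a", virality=0.9, severity=0.2, violation_prob=0.4),
    ReportedPost("b", virality=0.3, severity=0.9, violation_prob=0.8),
])
print([p.post_id for p in queue])  # 'b' outranks 'a' despite being less viral
```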
Although technology is playing an increasing role in the way Facebook moderates content, for certain posts they still use a combination of technology, reports from the community and human review to identify and assess content against their Community Standards. This is done to ensure the context of the post is better understood. The technology built for this is called Whole Post Integrity Embeddings (WPIE).
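Facebook describes WPIE only at a high level: it builds a single representation of an entire post (text, images and other elements) so that possible violations are judged in context rather than signal by signal. The PyTorch sketch below is not Facebook’s model; it is a toy illustration, assuming per-modality embeddings are already available, of fusing them into one whole-post score:

```python
import torch
import torch.nn as nn

class WholePostClassifier(nn.Module):
    """Toy fusion model: combine text and image embeddings of a single post
    and score it against a set of policy categories."""
    def __init__(self, text_dim=768, image_dim=512, hidden=256, num_policies=5):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_policies),
        )

    def forward(self, text_emb, image_emb):
        # The whole post is represented at once, so ambiguous text and imagery
        # are judged together rather than in isolation.
        post_emb = torch.cat([text_emb, image_emb], dim=-1)
        return self.fuse(post_emb)

model = WholePostClassifier()
logits = model(torch.randn(1, 768), torch.randn(1, 512))  # per-policy violation scores
```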
Lastly, Facebook shared details about a newly developed model called XLM-R that can understand text in multiple languages. It is trained for a task in one language and then applied to other languages without the need for additional training data or content examples.
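In practice this is the zero-shot cross-lingual transfer XLM-R is known for. The sketch below uses the publicly available Hugging Face checkpoint as a stand-in; the classification head here is untrained, whereas a real deployment would load a model fine-tuned on labelled examples in a single language and then apply it directly to others:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "xlm-roberta-base" is the public checkpoint; in practice a checkpoint
# fine-tuned on labels from one language (e.g. English) would be loaded.
MODEL = "xlm-roberta-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

texts = [
    "This post looks like spam.",       # English: the language the labels came from
    "Esta publicación parece spam.",    # Spanish: no Spanish training data needed
    "Ce message ressemble à du spam.",  # French
]

with torch.no_grad():
    inputs = tokenizer(texts, padding=True, return_tensors="pt")
    probs = model(**inputs).logits.softmax(dim=-1)

# Because XLM-R shares one vocabulary and encoder across roughly 100 languages,
# the same classifier can score all three sentences with no per-language training.
for text, p in zip(texts, probs):
    print(f"{p[1].item():.2f}  {text}")
```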