In an effort to fight child exploitation, Facebook has removed 8.7 million pieces of content in the last three months that featured child nudity and violated the platform’s rules.
Facebook’s Global Head of Safety, Antigone Davis, published a blog post yesterday highlighting how the company fights child exploitation. She explained that, alongside other sexual content, the company also flags non-sexual child nudity.
According to the post, the social media network has ‘removed 8.7 million pieces of content on Facebook that violated our child nudity or sexual exploitation of children policies, 99% of which was removed before anyone reported it’.
Davis said the company was able to identify such content using new AI and machine-learning technology that Facebook developed and deployed over the past year. The technology examines posts for child nudity and other exploitative content as they are uploaded and, if necessary, reports the photos and the accounts that uploaded them to the National Center for Missing and Exploited Children (NCMEC). NCMEC then works with law enforcement agencies around the world to help the victims.
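The post does not describe the system’s internals, but the flow it outlines (scan at upload time, flag violations, escalate to NCMEC) resembles a standard moderation pipeline. Below is a minimal sketch of that flow; the classifier, the 0.9 threshold, and the function names are hypothetical stand-ins, not Facebook’s actual implementation.

```python
# Hypothetical sketch only: the classifier, threshold, and reporting call are
# assumptions standing in for whatever Facebook actually runs, which the post
# does not describe.
from dataclasses import dataclass

@dataclass
class ScanResult:
    label: str         # e.g. "benign" or a policy-violation label
    confidence: float  # classifier score in [0, 1]

def classify_image(image_bytes: bytes) -> ScanResult:
    # Placeholder for a trained image classifier; a real system would run a
    # machine-learning model here.
    return ScanResult(label="benign", confidence=0.99)

def report_to_ncmec(uploader_id: str, result: ScanResult) -> None:
    # Placeholder for the escalation path the post describes: flagged photos
    # and accounts are reported to NCMEC.
    print(f"reported {uploader_id}: {result.label} ({result.confidence:.2f})")

def handle_upload(image_bytes: bytes, uploader_id: str) -> bool:
    """Scan a photo at upload time; block and escalate if it violates policy."""
    result = classify_image(image_bytes)
    if result.label != "benign" and result.confidence >= 0.9:  # assumed threshold
        report_to_ncmec(uploader_id, result)
        return False  # reject the upload
    return True       # allow the upload

if __name__ == "__main__":
    print(handle_upload(b"...", "user123"))  # prints True for the benign stub
```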
The firm said that to avoid even the potential for abuse, it takes action on ‘nonsexual content as well, like seemingly benign photos of children in the bath’. A similar system catches users engaged in ‘grooming’, or befriending minors for sexual exploitation. “One of our most important responsibilities is keeping children safe on Facebook. We do not tolerate any behavior or content that exploits them online,” read the blog post.
Moreover, Facebook said it also ‘collaborate[s] with other safety experts, NGOs and companies to disrupt and prevent the sexual exploitation of children across online technologies’. Next month, the company will join tech giant Microsoft and ‘other industry partners’ to begin building tools that help smaller firms prevent the grooming of children online.