Facebook on Wednesday said that its software is getting more skilled at spotting banned content on the social network, then working with humans to quickly remove terrorist videos and more.
"While we err on the side of free expression, we generally draw the line at anything that could result in real harm," Facebook chief executive Mark Zuckerberg said during a briefing on the company's latest report on ferreting out posts that violate its policies.
"This is a tiny fraction of the content on Facebook and Instagram, and we remove much of it before anyone sees it." Facebook has been investing heavily in artificial intelligence (AI) to automatically spot banned content, often before it is seen by users, and human teams of reviewers who check whether the software was on target.
Facebook has more than 35,000 people working on safety and security, and spends billions of dollars annually on that mission, according to Zuckerberg. "Our efforts are paying off," Zuckerberg said. "Systems we built for addressing these issues are more advanced."
When it comes to detecting hate speech, Facebook's software now automatically finds 80 percent of the content that is ultimately removed, a massive improvement from two years ago, when nearly all such material went unaddressed until users reported it, according to the California-based firm.
Zuckerberg noted that hate speech is tougher for AI to detect than nudity in images or video because of "linguistic nuances": classification requires context, and even common words can become menacing depending on how they are used.
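To make that context problem concrete, here is a minimal sketch, in Python and emphatically not Facebook's actual system, showing how a naive keyword-based scorer assigns identical scores to a sentence condemning violence and one inciting it. The keyword weights are hypothetical; real classifiers learn from labeled data and model word order and context.

```python
# A toy bag-of-words scorer: it sums weights of matched keywords and
# ignores word order, so it cannot distinguish condemnation from incitement.

from collections import Counter

# Hypothetical keyword weights, for illustration only.
KEYWORD_WEIGHTS = {"attack": 0.8, "them": 0.3}

def bag_of_words_score(text: str) -> float:
    """Score a text by summing weights of matched keywords, ignoring order."""
    counts = Counter(text.lower().split())
    return sum(weight * counts[word] for word, weight in KEYWORD_WEIGHTS.items())

condemnation = "we must never attack them"   # condemns violence
incitement = "we must attack them now"       # incites violence

# Both sentences get the same score because word counts carry no context:
print(bag_of_words_score(condemnation))  # 1.1
print(bag_of_words_score(incitement))    # 1.1
```

A contextual model would have to separate these two sentences, which is exactly the "linguistic nuance" problem Zuckerberg describes.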
Complicating matters further, videos of attacks driven by bias against a race, gender or religion may be shared to condemn such violence rather than glorify it. People continue to try to share video of the horrific mosque attacks in Christchurch, New Zealand, on the social network, whose systems block 95 percent of those attempts, according to executives.
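One common way to block re-uploads of a known video at that scale is to match fingerprints of new uploads against a blocklist of known violating content. The sketch below assumes that approach; it is not Facebook's implementation. Production systems use perceptual hashes that survive re-encoding and cropping, whereas the exact SHA-256 hash here only catches byte-identical copies.

```python
# A minimal hash-matching sketch: fingerprint each upload and reject it
# if the fingerprint is on a blocklist of known violating videos.

import hashlib

# Hypothetical blocklist of fingerprints of known violating videos.
BLOCKED_HASHES = set()

def fingerprint(video_bytes: bytes) -> str:
    """Return an exact content fingerprint (SHA-256 hex digest)."""
    return hashlib.sha256(video_bytes).hexdigest()

def register_violation(video_bytes: bytes) -> None:
    """Add a known violating video's fingerprint to the blocklist."""
    BLOCKED_HASHES.add(fingerprint(video_bytes))

def is_blocked(video_bytes: bytes) -> bool:
    """Check an upload against the blocklist before accepting it."""
    return fingerprint(video_bytes) in BLOCKED_HASHES

original = b"...video bytes..."
register_violation(original)
print(is_blocked(original))            # True: an exact copy is caught
print(is_blocked(original + b"\x00"))  # False: a tiny edit evades exact hashing
```

The last line shows why exact hashing alone is insufficient and why perceptual fingerprints, plus human review, are needed to reach blocking rates like the 95 percent cited above.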