YouTube said Monday it would remove election-related videos that are "manipulated or doctored" to mislead voters, as part of its efforts to stem online misinformation. The Google-owned video service said it was taking the measures as part of an effort to be a "more reliable source" for news and to promote a "healthy political discourse."
Leslie Miller, YouTube's vice president of government affairs and public policy, said in a blog post that the service's community standards prohibit "content that has been technically manipulated or doctored in a way that misleads users ... and may pose a serious risk of egregious harm."
The policy also bans content that aims to mislead people about voting or census processes.
The move comes amid growing concern about so-called "deepfake" videos, which use artificial intelligence to fabricate credible-looking events, as well as "shallow" fakes that use more rudimentary techniques to deceive viewers.
Online platforms have come under pressure to root out misinformation in the wake of a foreign manipulation effort in the 2016 US elections and claims that not enough is being done to curb false claims by candidates themselves.
The latest YouTube statement, which seeks to clarify a policy on misinformation, comes as the US presidential primary season kicks off, with caucuses being held in Iowa on Monday and the first primary next week in New Hampshire.
Google last year said it was stepping up efforts on election misinformation and would remove false claims in ads, including on YouTube.
"The underlying standards YouTube explains and illustrates today do not appear to be brand new, but the company deserves praise for setting them out in clear terms and warning that it intends to enforce them vigorously," said Paul Barrett of the New York University Center for Business and Human Rights and author of a 2019 study on political disinformation.
"YouTube's statement today appears to reiterate its determination not to allow its users to be conned during the 2020 election campaign."
The announcement underscores differing policies by major social networks on disinformation. Twitter has said it would ban all political ads for candidates, while Facebook has maintained a hands-off policy for political speech and ads, with some exceptions for content that misleads users about voting times and places.
"Each platform is weighing free expression against voter manipulation but the information operations work across platforms and exploit these loopholes," said Karen Kornbluh, a German Marshall Fund researcher who follows political disinformation.
"That's why the platforms should come together and develop shared, clear, consistent, enforceable rules to protect voters from becoming easy marks for disinformation campaigns."
Monday's statement offered specific examples of content that would be removed from YouTube.
Banned content includes any video "manipulated to make it appear that a government official is dead" or that "aims to mislead people about voting or the census processes, like telling viewers an incorrect voting date."