Social media executives risk jail for failing to take down violent extremist content quickly, under controversial laws passed in Australia on Thursday - a "world first" in the wake of the Christchurch mosque massacre. Lawmakers voted overwhelmingly in favour of the laws, which hold firms like Facebook and YouTube - and their executives - responsible for the swift removal of "abhorrent material".
The companies face fines of up to 10 percent of their global annual turnover - sums that could run to billions of dollars - for failing to enact the "expeditious removal" of footage of terrorism, murder and other serious crimes, while executives could face up to three years in jail. Technology companies, policy experts and lawyers pilloried the legislation, which was jammed through parliament in two days and faces an uncertain future beyond elections expected in May. Prime Minister Scott Morrison, who is facing a difficult re-election battle, said: "Big social media companies have a responsibility to take every possible action to ensure their technology products are not exploited by murderous terrorists." Attorney-General Christian Porter said the legislation was "most likely a world first." The opposition Labor party expressed serious misgivings but voted in favour of the legislation - a step that echoed the bipartisan passage of a similarly controversial law forcing technology firms to weaken encryption.
With those two reforms, Australia has put itself at the forefront of global efforts to regulate social media giants more closely. But both measures have been roundly condemned by industry and experts as "knee-jerk" and ill-conceived. It will be left to juries to decide whether a platform removed offending content quickly enough, raising questions about how the law will be applied in practice.
"No one wants abhorrent content on their websites, and DIGI members work to take this down as quickly as possible," said Sunita Bose, managing director of the Digital Industry Group, which represents Google, Facebook, Twitter, Amazon and others. "But with the vast volumes of content uploaded to the internet every second, this is a highly complex problem that requires discussion with the technology industry, legal experts, the media and civil society to get the solution right - that didn't happen this week."
She also warned that the law would encourage companies to "proactively surveil" users and slammed parliament's "pass it now, change it later" approach. "This is not how legislation should be made in a democracy like Australia."
Technology companies now face the task of developing fail-safe moderation tools capable of quickly detecting offensive material among the hundreds of billions of media uploads to their platforms. In the immediate aftermath of the Christchurch shootings, Facebook alone said it had taken down 1.5 million videos of the attack. Current tools such as Microsoft's Content Moderator API "cannot automatically classify an image, let alone a video," according to Monash University's Robert Merkel. "Nor can it automatically identify videos similar to another video."
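The kind of similarity matching Merkel refers to can be sketched briefly. One common building block - outside of any particular vendor's products - is a perceptual hash: a compact fingerprint of a frame that changes little under re-encoding or light alteration, so near-duplicates of known footage can be flagged. The Python sketch below is illustrative only; the file names, the 8x8 hash size and the match threshold are assumptions for the example, not details of any platform's system, and extracting frames from video is assumed to happen elsewhere.

    # A minimal perceptual-hashing sketch (requires Pillow).
    # Illustrative only: production systems use far more robust
    # fingerprints and shared industry hash databases.
    from PIL import Image

    def average_hash(path, hash_size=8):
        # Shrink to hash_size x hash_size grayscale, then record for
        # each pixel whether it is brighter than the mean brightness.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return tuple(1 if p > mean else 0 for p in pixels)

    def hamming_distance(h1, h2):
        # Count of differing bits; small distances suggest near-duplicates.
        return sum(a != b for a, b in zip(h1, h2))

    # Usage: flag an upload whose keyframe hash sits close to a known
    # banned frame. File names and the threshold are hypothetical.
    known = average_hash("banned_frame.png")
    candidate = average_hash("uploaded_frame.png")
    if hamming_distance(known, candidate) <= 5:
        print("possible near-duplicate; route to human review")

Even this toy example hints at the difficulty: trivial edits such as cropping, mirroring or overlaying text can push a re-uploaded copy past any fixed threshold, which is reportedly part of why platforms struggled to contain the many altered variants of the Christchurch footage.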