Prime Minister Scott Morrison on Sunday proposed a new law that would force the social media giants to “unmask anonymous online trolls” as part of the government’s ongoing push to crack down on big tech.
It follows a raft of statements, speeches and general chest-beating from Morrison, along with others in the party like Deputy Prime Minister Barnaby Joyce, about the scourge of anonymity as a “mask” for harassers online.
A statement announcing changes to the country’s defamation laws as a means of combating bad actors online provides a vague outline of what the new court powers would do.
But it also leaves unanswered questions about how the new rules will actually better protect Australians in practice, and how platforms will be forced to comply.
In a speech delivered at a press conference also attended by Attorney-General Michaelia Cash, the prime minister said the reforms to the country’s defamation laws would be “some of the strongest powers to tackle online trolls in the world” — rhetoric in line with Morrison’s past statements around the country’s approach to regulating big tech companies.
The prime minister also reiterated his belief that anonymity was central to the free rein of trolls online, and suggested the proposed new legislation would address this problem.
Social media was too often a place where “the anonymous can bully, harass and ruin lives without consequence,” Morrison said.
What will the proposed legislation do?
Under the government’s proposed legislation, social media companies will have to provide the emails and phone numbers of users posting abusive comments online so that legal proceedings can be brought against them.
Part of the reform will also mean defamation liability for comments posted by third-party users on their pages will shift from media outlets and other businesses, to social media platforms themselves.
This seeks to address the High Court’s ruling in the Dylan Voller case heard last year that said people and businesses operating or maintaining social media pages are considered the publishers of third-party content posted on their pages.
Cash said the Voller decision meant “ordinary Australians are at risk of being held legally responsible for defamatory material posted by anonymous online trolls,” and the reforms would make clear that people operating or maintaining social media pages are not publishers of comments made by others.
Core to the federal government’s new proposal to protect Australians from harm on social media is a series of new processes social media companies will need to put in place.
Firstly, the government said multinational social media companies will need to set up user-friendly and efficient complaints systems for use by Australians. These would be used by social media users to request the platforms delete comments they believe are defamatory, or reveal the identity of the people who posted them.
However, the government also said that ‘trolls’ must consent to being identified or having their post removed.
A second means of recourse will be “a new Federal Court order” that will require social media giants to disclose the details of trolls to victims — this time without their consent — so that the social media user could independently lodge a defamation case against the harasser.
The government said that if a social media company “unmasked” a troll, it would have a defence from being held legally accountable for the defamatory comments.
However if the company failed to do this, it would be on the hook; if a defamation suit was brought against a specific user who was unable to be identified, the social platform itself could be sued.
The legislation has not yet been publicly released, but is said to contain “protections” for whistleblowers wanting to preserve their anonymity, and to prevent “vexatious” cases.
An exposure draft will be released in the coming days before a consultation period.
What does this mean for social media platforms?
At the press conference, Morrison put the onus for enforcement on platforms, saying the government wants “the social media companies to fix this” and that exact mechanisms would be “up to them”.
Because the government has only released a brief media statement, at this stage it is unclear whether the legislation will contain rules about what information platforms should collect and how they collect it in order to comply.
Facebook and Twitter have pushed back on government calls to unmask anonymous users in recent weeks, arguing that certain users — including whistleblowers, political activists, and LGBTIQ people — may have good reasons for wanting to preserve their anonymity.
The social media giants have said they don’t want to collect the further databases of personal contact information that could be required by this legislation.
The proposed law also has the potential to benefit platforms including Google and Facebook, because it means that they will not be held liable for defamatory posts when the person responsible is easily identifiable.
What would the proposed changes achieve?
Professor David Rolph, a defamation expert at the University of Sydney, told Business Insider Australia that the reforms only sought to affect “online defamation” in a very narrow way.
The government has a wider agenda devoted to addressing regulating content and reducing harm online, including the Online Safety Act that will come into effect in January 2022.
But Rolph said there was a question around whether “reforming aspects of defamation… and online defamation or defamation by social media will necessarily have the effect of dealing with all kinds of other online harm”.
Dr Belinda Barnet, a lecturer in Media and Communications at Swinburne University focusing on social media and digital culture, told Business Insider Australia the proposed reforms to current defamation laws were unlikely to make it any easier for most Australians to seek redress for alleged defamation online.
“It’s certainly not going to help 99% of victims of trolling in Australia,” Barnet said, adding that many would be unlikely to choose or be able to spend the time or money required to take a defamation case to court.
While social media companies had demonstrated they were unwilling to actively reduce harm on their platforms, Barnet said, the proposal also demonstrated the government’s lack of understanding of the way the internet works.
“The platforms do not want to [do that], because it means employing more human moderators to look at all the toxic stuff that happens on the internet every day,” she said.
“They most certainly would rather just collect identity documents than employing more people, for example.”
Rolph said that another open question was how effective the government would be at enforcing judgements against big tech companies.
“The issue here is not so much whether these social media companies can be liable,” he said, giving the example of current cases against Google.
“The issue is whether an Australian court can exercise jurisdiction over them or whether they could enforce any judgement against them. That’s, I think, the most significant issue.”