In Short

The Situation:  The application of existing principles of defamation law to social media and other online platforms continues to pique the interest of online platform providers and users alike. The Australian courts have recently considered who can be held responsible for defamatory content posted online, particularly in the social media context.  

The Result:  The High Court of Australia has confirmed that news outlets are, for the purposes of defamation law, publishers of comments made by third parties reacting to articles posted by those news outlets to social media. A majority held that the news outlets did not need to know of the relevant defamatory comment or intend to convey the comment. By facilitating and encouraging comments, they had participated in the communication of the defamatory matter. 

Looking Ahead:  Businesses whose Facebook pages or other social media pages allow third parties to comment on their content should review their ability to moderate those comments in real time, including by preventing comments from becoming visible without moderation wherever the content carries any risk of generating potentially defamatory responses.

Background 

Fairfax Media Publications Pty Ltd, Nationwide News Pty Ltd and Australian News Channel Pty Ltd all maintain public Facebook pages allowing comments from members of the public. Dylan Voller commenced proceedings against the news outlets, claiming that particular comments posted by third parties were defamatory of him, and that the news outlets were liable as publishers of the comments. The comments were made by third-party readers reacting to articles posted by the media outlets about Voller's mistreatment in a youth detention centre.

Whether the news outlets had published the comments was heard as a preliminary question. The trial judge ruled that the news outlets were the primary publishers of the comments. That conclusion was driven by findings that the news outlets had the means to delay publication of third-party comments and to monitor whether any were defamatory before releasing them to the general readership. The news outlets were found not to be merely conduits of the comments, as each provided the forum for and encouraged the publication of comments for its own commercial purpose. The Court of Appeal upheld the decision.

Appeal to the High Court

The news outlets appealed to the High Court, contending that, to be a publisher, a person must intend to communicate the defamatory matter or have knowledge of it.

By majority, the High Court held that the acts of the news outlets in facilitating, encouraging and thereby assisting the posting of comments by the third-party Facebook users rendered them publishers of those comments. The majority focused on the tort of defamation being one of strict liability, thereby rendering knowledge of the defamatory posts and an intention to participate in their publication irrelevant.

Edelman J and Steward J, in separate dissenting reasons, expressed concern that the majority's decision would expose social media page owners to liability for defamatory comments posted by third parties which were unconnected with the original content in the posted articles. Edelman J would have limited responsibility for publication to those comments which have a connection to the subject matter posted by the news outlet, while Steward J would have found responsibility only where the comments were procured, provoked or conduced by posts made by the news outlets.

Observations 

The burden now placed upon administrators, such as news media outlets, calls for a review of editorial policies and will likely create new or expanded roles for employees dedicated to social media moderation. Developing those policies, and the steps taken to moderate comments, will require careful balancing: heavy-handed moderation will typically be met with concern from commenters about editorial censorship.

There are many ways in which media outlets may choose to moderate third-party comments, including:

  • Pre-moderation involves reviewing all user-generated content before it appears online; comments are not published until they have been reviewed. This slows engagement and makes discourse less fluid, but it is the only way to be certain that defamatory content will not be published.
  • Post-moderation involves all user-generated content appearing immediately without moderation, with any defamatory or otherwise problematic content removed later. This improves the fluidity of discourse, but carries a heightened risk that defamatory content will be published, at least for a time, and potentially amplified by further discussion.
  • Reactive moderation relies on audience participants to flag undesirable content, which is then reviewed and dealt with by moderators. It is moderation in real time: comments appear immediately and are dealt with as they come to the attention of moderators.
  • Distributed moderation relies on participants rating the comments of other users. Content that receives poor ratings is then referred for moderator action. As with reactive moderation, posts appear immediately and are moderated only when flagged.
  • Automated moderation can draw on a range of analytical tools, but in its simplest form may hide content that contains certain keywords set by the administrator.
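The simplest automated approach in the list above can be illustrated with a short sketch. The blocklist, function name and sample comments below are purely hypothetical illustrations of keyword-based hiding, not the moderation tooling that Facebook or any other platform actually provides:

```python
# Hypothetical keyword-based automated moderation (illustrative only).
# An administrator maintains a blocklist; any comment containing a
# blocked keyword is hidden before it reaches the general readership.

BLOCKED_KEYWORDS = {"thief", "criminal"}  # assumed administrator-chosen terms

def should_hide(comment: str) -> bool:
    """Return True if the comment contains any blocked keyword."""
    words = (word.strip(".,!?") for word in comment.lower().split())
    return any(word in BLOCKED_KEYWORDS for word in words)

comments = [
    "Great article, thanks for sharing.",
    "That man is a thief!",
]
visible = [c for c in comments if not should_hide(c)]
```

Even this trivial filter shows the trade-off discussed above: it acts instantly and without human moderators, but it cannot recognise defamatory meaning expressed without the listed keywords.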

The choice of moderation approach typically depends on the nature of the content and its propensity to elicit controversial commentary, the demographic and past behaviour of the audience, the features available on the platform and the availability of moderators.

A potential difficulty presented by the majority's decision, highlighted in the dissenting reasons of Edelman J and Steward J, is that news outlets may be held liable for third-party comments completely unrelated to the substance of the posted news article: to use Edelman J's example, a news outlet posting an article about the weather would be considered the publisher of a third-party comment accusing an unrelated person of being a thief. If that is so, the choice of moderation technique cannot be guided by the nature of the content, and a more active role will be required (unless comments are hidden or locked entirely).

However, as Steward J explained, the finding at trial was that the defamatory comments were an expected and likely result of posting the relevant articles onto the public Facebook pages. So, whether the scenarios (and resultant risks) will develop in the way contemplated by Edelman J remains to be seen. 

Four Key Takeaways 

  1. The administrators or "owners" of public social media pages will be considered publishers of the comments made by third parties on their content.
  2. A majority of the High Court considered it was enough that the owner of the social media page facilitated and encouraged comments, thereby assisting the posting of comments, to find that the owner of the social media page published the comments. The minority justices considered that there needed to be a connection between the original post and the comments, or that the original post would procure or provoke the comments.
  3. The particular articles considered by the High Court were likely to give rise to controversial comments, and such comments were expected by the news outlets posting the articles. Whether completely unrelated, but defamatory, comments will be taken to be published by the owner of a social media page is yet to be seen, but is open on the majority's reasons.
  4. Owners and administrators should give thought to updating policies and procedures for determining the most appropriate form of moderation to be adopted on social media and other platforms where comments are permitted on their content. Content which carries a high risk of generating potentially defamatory comments should, of course, be actively and intensively moderated, but lower risk content should not be ignored completely.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.