Episode 148

Rebuilding the Town Square: Innocent Dissemination and the New Defamation Regime

Law as stated: 28 February 2025. This episode was published and is accurate as at this date.
Scott Traeger, Partner at Lander & Rogers, joins David to explore the evolving role of publishers in the digital age, the application of the innocent dissemination defence to online platforms, and the challenges of balancing freedom of expression with protecting reputations.
Substantive Law
28 February 2025
Scott Traeger
Lander & Rogers
1 hour = 1 CPD point
What area(s) of law does this episode consider? Defamation law.
Why is this topic relevant? Defamation law has long been a hotly debated topic in Australia and has become increasingly relevant in the digital age. As online platforms become ever more integral to how we communicate and share information, the question of who qualifies as a “publisher” in defamation law – and thus who holds responsibility for defamatory content – has taken on new importance.

In July 2024, amendments came into effect in NSW and the ACT addressing the liability of internet intermediaries, such as social media platforms and search engines, for defamatory material published by third-party users. These reforms introduced updates to the innocent dissemination defence but left open questions about balancing freedom of expression with protecting reputations online.

What legislation is considered in this episode? Defamation Act 2005 (NSW); Civil Law (Wrongs) Act 2002 (ACT); Defamation Act 2005 (Vic); Defamation Act 2005 (Qld); Defamation Act 2005 (Tas) (Defamation Act)

Communications Decency Act, 47 U.S.C. § 230

Securing the Protection of Our Enduring and Established Constitutional Heritage Act, 28 U.S.C. § 4101 (SPEECH Act)

U.S. Const. amend. I

What cases are considered in this episode? Fairfax Media Publications Pty Ltd v Voller; Nationwide News Pty Limited v Voller; Australian News Channel Pty Ltd v Voller [2021] HCA 27

  • Dylan Voller sued several media companies for defamation based on harmful comments made by third parties on their Facebook pages. The issue was whether the media companies could be considered publishers of those third-party comments for the purposes of a defamation claim. The High Court held that the media companies were indeed publishers, as they facilitated and encouraged the comments by maintaining Facebook pages and posting content, even without direct knowledge or intent regarding the defamatory material.

Google LLC v Defteros [2022] HCA 27

  • George Defteros sued Google for defamation, arguing that its search engine results, which displayed a snippet and hyperlink to an article from The Age, made Google liable for defamatory content in the article. The issue was whether Google, by displaying search results, was a publisher of the third-party content it linked to. The High Court held that Google was not a publisher, as its automated search results merely facilitated access to third-party content without endorsing or adopting it, distinguishing its role from that of active publishers.
What are the main points?
  • Western Australia and the Northern Territory are yet to introduce the Stage 1 defamation reforms, which came into force elsewhere in July 2021. This has resulted in delays in the implementation of the Stage 2 reforms, with only New South Wales and Victoria being up to date.
  • The defence of innocent dissemination was created to protect subordinate distributors, such as news agents and booksellers, who did not have the opportunity to review the material they were reselling for defamation. This defence has also been historically used by forum administrators or platform hosts who do not actively vet the content posted by third parties before publication.
  • For those hosting or moderating sites like Facebook pages, there is a challenge in deciding whether to actively review and moderate content to avoid potential liability or to refrain until a complaint is made, risking exposure to defamatory posts.
  • The stage two reforms in Victoria and New South Wales provide a new defence for internet intermediaries, such as social media platforms or individuals moderating online content, against defamation claims.
  • The internet intermediary defence requires platforms and moderators to have an accessible complaints mechanism for defamatory content, and if they promptly act on complaints within seven days to remove or block such content, they will be protected from defamation lawsuits.
  • If an internet intermediary chooses not to remove content after receiving a complaint, they can use the usual defences to defamation claims: truth, honest opinion, the new public interest defence, and qualified privilege.
  • Paid search ads involve a third party paying Google to create content in response to user queries, making it an active process rather than passive dissemination. As a result, it is unlikely that the innocent dissemination defence would apply to these paid search results.
  • Qualified privilege is a defence that allows publishing information to a limited audience of individuals who have a legitimate interest in receiving it, such as in a private Facebook group.
  • The stage one reforms also brought in a requirement to show that there was serious harm to your reputation in addition to proving that the content is defamatory.
  • The public interest defence allows mainstream media to report on stories that are in the public interest without facing defamation claims. The key requirement is to show that the publication was in the public interest and that the author reasonably believed so at the time.
What are the practical takeaways?
  • To stay current and become an expert in a field of interest, it is important to read new cases regularly. With defamation cases, understanding the personal aspects and complexities beyond the truth of statements is crucial in providing clients with appropriate advice and achieving their desired outcomes.
Show notes: Dr Matt Collins AM QC (2019), ‘Nothing to write home about: Australia the defamation capital of the world’, Law Council of Australia

DT = David Turner; ST = Scott Traeger

00:00:00DT:Hello and welcome to Hearsay the Legal Podcast, a CPD podcast that allows Australian lawyers to earn their CPD points on the go and at a time that suits them. I’m your host, David Turner. Hearsay the Legal Podcast is proudly supported by Lext Australia. Lext’s mission is to improve user experiences in the law and legal services, and Hearsay the Legal Podcast is how we’re improving the experience of CPD.

On this episode of Hearsay, we’re talking about defamation and innocent dissemination. Now, defamation law has long been a hotly debated topic in Australia, and defamation cases often capture the national attention and imagination. Defamation’s especially hotly debated where I am, in Sydney. And it’s become increasingly relevant in the digital age. As online platforms become ever more integral to how we communicate and share information, the question of who qualifies as a publisher in defamation jurisprudence – and thus who holds responsibility for publishing defamatory content – has taken on new and unexpected importance.

In July, in New South Wales and the Australian Capital Territory, amendments came into effect addressing the liability of internet intermediaries like social media platforms and search engines for defamatory material published by third party users. These reforms introduced updates to the so-called innocent dissemination defence, but left open questions about balancing freedom of expression and protecting reputations online.

In this episode, we’ll explore the liability of publishers for defamation, especially the role of social media platforms and search engines and their liability. We’ll also unpack the defence of innocent dissemination and how it applies to entities like search engines and social media sites. It’s a delicate balancing act that platforms have to perform, juggling the need for open dialogue with legal responsibilities to limit harm from defamatory speech.

Joining us today on the podcast to talk us through all of these topics is Scott Traeger. Scott is a Defamation and Brand Protection Partner at Lander & Rogers, and he specialises in defamation law and brand protection representing companies, publishers, and individuals across a wide spectrum of industries. Scott’s extensive experience means he’s ideal to inform us of the current state of play of defamation law in Australia.

Scott, thank you so much for joining us on Hearsay.

00:02:24ST:Thanks very much. Great to be here.
00:02:25DT:Now, before we get into the topic for today – innocent dissemination and defamation law, and some of these recent amendments – tell us a bit about how you ended up in this area.
00:02:33ST:It’s an area that I’d always been interested in. I studied media law when I was at university and found that the intersection between the media reporting that you see every day in the news and the law was an interesting topic. Unlike some of the other, sort of drier, areas of law, there was always that public interest element to it. And certainly I’ve found throughout my career that it’s a lot easier to have a chat with a friend or something at a barbie about defamation law than it is about contracts or M&A or something of that nature. And so it’s just an area where, through having that interest, I was fortunate enough to be able to seek out the work where it was available and over the years managed to build it up to be a fairly significant part of my day-to-day practice.
00:03:14DT:Yeah, it’s a fascinating area of law, isn’t it? I think it does capture the imagination of lawyers and non lawyers alike. We see a lot of high profile defamation cases here in Australia, perhaps that might be because of some of the policy settings around liability for defamation here compared to our common law cousins elsewhere in the world.
00:03:33ST:Absolutely. Australia, I think, is often described as the defamation capital of the world, and Sydney and Melbourne are often sort of vying for the top place on the podium as to who’s got the greatest number of defamation cases. And I think that because our laws are so much more heavily focused upon the protection of individual reputations, rather than countries such as the US, where you’ve got this much greater emphasis on freedom of speech, it does tend to see quite a lot of high profile defamation actions taking place in Australia.

TIP: Now Scott and I both agree that Sydney has perhaps earned the title of defamation capital of the world due to our extraordinarily high rate of defamation claims, especially in comparison to other countries like the United Kingdom. Despite Australia’s smaller population, we see twice as many defamation cases as the UK. In fact, New South Wales alone accounts for over half of all cases in Australia. Between 2014 and 2018, there were 577 superior court defamation references in Australia, of which 312 came from New South Wales, a state with a population of only 7.5 million. This means that defamation issues are considered by courts in New South Wales 10 times more often than they are considered by courts in London.

00:04:44DT:And you’ve also got some fascinating and unique features to a defamation case, you know, it’s the only or one of the only civil matters presided over by a jury, for example.
00:04:53ST:Absolutely, yeah. The involvement of juries does add an interesting layer to defamation cases. As you said, you’ve got the right to elect a trial by jury if you want one in a defamation case. Although, interestingly, in recent years there has been a preference amongst some practitioners to commence their defamation proceedings in the Federal Court, where there isn’t the opportunity of jury trials. So there are sort of differing views amongst different people as to whether a jury is or isn’t going to be favourable to them, but now, with the Federal Court having a fairly firmly established jurisdiction in dealing with defamation matters, there is that opportunity for the plaintiff to make a decision at the early stage: if they want a jury, they can go to a state supreme court, or if they definitely don’t want a jury, then they can take refuge in the Federal Court where they know that’s not going to happen.
00:05:37DT:And that brings us on to our first topic. There is no power granted to the Commonwealth under the constitution to regulate defamation, so it is a system of state laws and those can vary. Tell us a little bit about how defamation and the tort of defamation or the action under the defamation acts in the states and territories of Australia, how that differs practically, and also how the Federal Court comes to have jurisdiction to hear that.
00:06:00ST:So in terms of the states, there used to be quite a divergence prior to 2005. In 2005, there was uniform defamation legislation introduced right throughout Australia. And that meant that for the first time there was then that complete uniformity of law and you could commence proceedings in any state or territory, knowing that the law governing it was going to be essentially the same. As time’s gone by, there has been some subtle divergence in the way in which different aspects of the law were interpreted from state to state, but then more significantly, when what were known as the stage one defamation reforms were introduced in July 2021, that meant there was a real break from that uniformity, in that Western Australia and the Northern Territory have still, to this day, not yet adopted those stage 1 reforms. So we’ve now got a scenario where all of the states aside from WA and the Northern Territory have a particular set of laws incorporating Stage 1 – the Northern Territory and WA are a few years behind – and then stage 2, which was scheduled to come into place in July of this year everywhere, has also fallen behind. New South Wales managed to meet the proposed July commencement date and got things started when it was intended to. Victoria was a couple of months behind but has also now caught up. However, the rest of the country is still lagging behind and South Australia at least has indicated that it only supports part of those stage 2 reforms. So we now do have a scenario once again where there are some significant differences from state to state as to how the law applies.
00:07:38DT:It’s an interesting area in which to have differences from Australian jurisdiction to Australian jurisdiction, because historically defamation actions might concern national publications but now, I suppose in theory at least, many of them concern hypothetically global publications online that are accessible by, you know, a really indeterminate but indeterminately large audience.
00:08:01ST:Absolutely. The online publications have really become, at least by volume of cases that we see coming across our desks, the vast majority of the defamation inquiries. Now, it’s certainly still the case that the mainstream media publications are perhaps the ones that are most likely to reach a hearing or most likely to result in proceedings being commenced because the audience for them is so much larger but also so much easier to determine than something that is an online publication where there may be a certain number of likes and comments that you can see but beyond that it’s difficult to pinpoint exactly how many people have seen that particular publication. But the internet and those online publications mean that anyone who has a point of view, which may not necessarily be the most kind point of view, has a forum by which they can get it out to an audience of hundreds, if not thousands of people, at the click of a button.
00:08:53DT:And I was going to ask you about determining the reach or the breadth of the distribution of a defamatory statement online because obviously both parties to the action have an interest in adducing evidence on that front. The plaintiff wants to adduce evidence of a large audience to establish an entitlement to damages. The defendant might want to adduce evidence indicating that the publication is relatively limited but I imagine both of those tasks are pretty challenging with online publications.
00:09:20ST:They certainly can be, yes, but there’s some platforms where people will be able to tell, particularly if they’ve got, for example, a business style account where they’re paying for those analytics as to what the reach has been of each of their posts. They might have a pretty good idea as to how many people have seen particular things. Otherwise, though, you really are initially just relying upon the number of likes, the number of comments, the number of shares, and also the extent to which there has been other people approaching either the person who’s been defamed or the author and inquiring with them about what they’ve read online.
00:09:58DT:Interesting. And I suppose that brings us then to our topic for today, which is the innocent dissemination defence, because online publications will often show up on social media sites. As you’ve said, those likes, comments and shares are all happening on a platform, or being indexed by search engines which are surfacing this content in response to user searches. Before we talk specifically about those platforms, let’s talk about the defence of innocent dissemination more broadly. What is the defence? What does the defendant have to establish or rely on?
00:10:31ST:So the defence of innocent dissemination, it essentially began as a defence to protect the likes of news agents and booksellers who had large volumes of material coming through their doors, written by others, and with no real opportunity to be able to read and vet that material before they were participating in the resale, and therefore publication, of that content. Essentially, if you are what’s known as a subordinate distributor, so not the primary publisher, not the person who had that initial capacity to exercise editorial control of what was going to be published, and also you neither knew nor reasonably ought to have known that the matter was defamatory, then you can potentially avail yourself of a defence. Now, there’s some other elements to it. You need to show, for example, that your lack of knowledge of it being defamatory wasn’t due to any negligence on your part but if you can show that you are just essentially there as a forum administrator, a platform host, a platform itself, something like that, and you are not really participating in the vetting of that material before it goes live, then that’s a defence that you’ll, as your first port of call – or at least prior to the most recent reforms as your first port of call – seek to rely upon to avoid liability for those third party posts.
00:11:49DT:Got it. And so, well, we’ll talk about the position under the new reforms, but I imagine before those reforms, some of the practical things that would have brought the elements of that defence into play were things like content moderation, where platforms have a practical role in vetting or, at least on a negative basis, deciding what remains publicly available.
00:12:07ST:Absolutely. And I think that was a real tension in the likes of the Voller litigation that ended up going the whole way to the High Court, which ultimately didn’t get to consider the question of innocent dissemination, but was focused on the question of; when are you, and aren’t you, a publisher of third party material? But for people who are hosting or moderating these sites, such as Facebook pages or other forums, there’s this real tension, or there was under the previous law, around; to what extent do I actively step into the realm of moderating and reviewing and vetting content, if doing that is going to potentially expose me to liability because I have seen the content and I know that it’s potentially defamatory, versus; to what extent do I simply say, “look, I’m not going to deal with it at all until somebody complains about it.” But then there is also that risk if you adopt the latter course, that if the posts or the content that is being put up by you and which you are aware of is the type of content that’s likely to attract defamatory posts in response, then you may well, in those circumstances, still fall foul. And that was certainly one of the arguments that was being raised in the Voller matter that the media defendants ought to have known that the types of stories they were posting were likely to attract defamatory content and therefore should have been doing more to block that content before it went online.
00:13:32DT:Interesting. And so what’s the position now under the so-called stage two reforms that came into effect in July?
00:13:39ST:So for the stage two reforms, which at the moment are only applicable to Victoria and New South Wales, there is this new defence for what’s known as internet intermediaries, which essentially is anybody who is playing a role as a third party who is either providing a platform or moderating or administering a platform. So it could apply to a company like Facebook, it similarly could apply to an individual who owns a Facebook page or is an administrator of a public Facebook group, or anything of that nature. What the new reforms now say is that, provided you have what the Act calls an accessible complaints mechanism – which can just be an email address or a button you can click to report content that you consider to be defamatory – and provided that, within seven days of receiving any complaint through that channel, you take action to block or essentially take down the defamatory content, then that gives you a complete defence to the defamation claim that would otherwise be brought. And so it really has made it a lot clearer where the bounds are as to the time that you have to respond, and what you need to do in order to respond, to avail yourself of that defence.
00:14:58DT:And so this internet intermediary defence – it hasn’t resulted in the repeal of the innocent dissemination defence, which obviously applies to many other classes of innocent disseminator, but in practice, if you’re an internet intermediary, this is a much clearer, less discretionary, more certain defence to rely on. If you’re saying that you can continue to be an innocent disseminator of defamatory material until you become aware that it’s defamatory, and then you need to cease to do that, well, this new defence gives you a time frame and it gives you a pretty practical set of steps to follow to go from potentially liable to not liable. So in practice, if not in legislation or not in law, it seems like this will replace innocent dissemination insofar as social media platforms and their users are likely to rely on it as a defence, right?
00:15:46ST:I think that’s a fair comment, yes. The new defence really does provide quite a useful roadmap for anybody who is an administrator of some sort of online forum, as to what they need to tick off in order to be able to ensure that they are not liable for third party comments. Now, unlike the previous dissemination defence and the case law around that, there’s some clear guidance in the new legislation to also say that just because you have also implemented other steps to vet comments to ensure that they comply with your terms of service, things like that, that doesn’t disentitle you to rely upon the new internet intermediary defence. But instead, it is just about having the report complaint type button or the report complaint email and having the seven day turnaround. Now, the area where it gets a little bit more nuanced and difficult for people who are administering those types of forums is when there’s content that, on its face, might be defamatory in the sense that it would cause your readers to think less of the person who it’s talking about, but it may also be true. Now, in that instance, the person who’s administering the forum really needs to make a choice if they do receive a complaint about it: “Do I take down the content and therefore ensure that I am not able to be found liable for it? Or do I instead support the right of my platform users to be able to engage in discussion on this platform and to post content that they may feel strongly about?” But if I do choose to stand behind that content on the basis that I think it’s true, you’re then absolutely putting yourself in the firing line that you could get sued. And if you do, then you’ll be in essentially just the same position as the original author in needing to prove to a court that the content was in fact substantially true and therefore able to be defended by a justification defence.
00:17:40DT:Yeah. And that brings us back to the first topic we spoke about at the top of the episode, which is that Australia – and Sydney and Melbourne – are often considered the defamation capitals of the world, and at least one reason is that in Australian defamation law, justification – the defence of truth – is a defence, rather than something the plaintiff has to disprove. The defendant bears the onus of proving that what they said was true, not the plaintiff bearing the onus of proving that what was said about them was false. And so as a content moderator, as a social media platform, as a search engine, you’re almost making an election between two defences, aren’t you? You’re almost making an election; “Well, I can just take the content down and I can rely upon this new defence for internet intermediaries but if I don’t take it down, I need to be comfortable that I can now rely on the justification defence.” It’s not as though you’re able to put the person making the complaint to proof?
00:18:35ST:Absolutely. Yeah. Now, I mean, once you have elected not to take it down, you can rely upon any of the defences that the original author could have relied upon. Truth is always going to be your first and best defence. There’s other things like honest opinion, which will be applicable. There’s the new public interest defence, which may apply from time to time. There is qualified privilege, which can be applicable if you’re dealing with a relatively limited audience to whom you’re publishing the content. But absolutely, once that seven day period has elapsed after you’ve received the complaint, if you decide to keep the content online, you are at risk that you could get sued, and then if you do, then you’ll be very much in the same position as what the original author would be in terms of having to go through those steps of establishing that you’ve got a defence on the facts.
00:19:21DT:So as we were saying, the new defence effectively gives these platforms the opportunity, at least so far as Australian law is concerned, to make this election between; do I want to rely on the new internet intermediary defence, remove the content once it’s been reported, or do I want to stand behind the comment and effectively rely on any other defence that the original publisher might have had available to them, including potentially justification, the defence of truth? But I suppose what’s happening behind the scenes in order to make that decision is some content moderation and platforms like Facebook and X and Instagram already conduct content moderation to some degree. The owners of private groups certainly do that as well. What sort of decision process or what sort of moderation does either the legislation or the common law expect? Or what sort of decision process is already going on when a user clicks that button that says, “I want this removed because it’s defamatory?” Does the legislation really expect that, “well, if it’s been claimed, then we’ll take that at its highest and we have to remove it,” or is there some scope for moderation, for making further inquiries, for deciding whether or not that’s a legitimate claim?
00:20:34ST:So I think as far as legislation is concerned, it really is a binary decision to make of; either I remove the content and avail myself of an additional safe harbour style defence, or alternatively I keep it online and then need to prove it was true or prove there was some other defence that’s applicable. In terms of the decision making that’s going on behind the scenes, all I could really say is that it is very opaque. There is, generally speaking with the large US platforms at least, a much greater emphasis upon whether or not the content complies with their terms of service, rather than whether it complies with defamation law. Now, most of their terms of service will generally have at least a line or two in there about not posting content that is defamatory, but as you might appreciate, the views of what’s defamatory might differ vastly from jurisdiction to jurisdiction, and also in circumstances where something that is unkind or disparaging, but true, is not defamatory or is at least subject to a complete defence under most defamation laws and certainly in Australia. It’s always difficult to know where they draw the line as to what they’re going to allow to remain and what they’re not. I think that there are certainly some platforms that may rely upon or have lists of what they consider to be reputable sources, being media mastheads and the like, where if people are linking to content from there, then they assume that, “well, if this mainstream media masthead has written this, we’re satisfied that they’re a reputable source and we’re not going to take down that content just for fear of getting sued.” Beyond that, it really does seem to be a case-by-case basis. When you report content to particular platforms, you’ll very commonly tend to just get a one line response saying either, “we reviewed the content and decided that it doesn’t comply with our terms of service and it’s been removed,” or “we’ve reviewed the content and we consider that it is compliant and it’s going to remain online.” And often it is a matter of continuing to push them and engage with them to try and convince them that there is a proper basis for a legal claim under Australian law and that it is very clearly defamatory.
00:22:52DT:Yeah. And bringing and then enforcing that claim under Australian law might be a whole other challenge, which we’ll come on to talk about in a moment.
00:22:59ST:Absolutely.
00:22:59DT:Now, I guess one other thing that the stage 2 amendments, well, not the amendments themselves, but what they represent in their being passed into law is the settling of a controversy around who constitutes a publisher online. And I think you mentioned in passing some of the cases concerning this around both the platforms themselves and also users of those platforms who operate, for example, a Facebook group or a messaging list or something like that.
00:23:26ST:Yeah. So I think the thing that a lot of non defamation lawyers don’t perhaps appreciate is that under defamation law, the definition of publisher is incredibly broad and essentially it is anybody who has participated in the bringing about of that publication. Sometimes there’s references to some old fashioned words like whether you “conduced” to the publication. Anyone, whether they are an original author, whether they are someone who is a platform host – if we’re looking at a traditional news media example, you’ve got the journalist who wrote the article, you’ve got the editor who’s reviewed it, you’ve got the masthead who has published it. There’s even an old fashioned case where someone who was literally standing on the side of a road pointing at a defamatory sign was found liable for defamation because they were drawing people’s attention to the existence of that sign.
00:24:19DT:Induced to the publication indeed.
00:24:21ST:Yes.
00:24:22DT:And so I suppose that broad definition means and is consistent with some of the findings in cases like the one you’d referred to earlier that social media platforms and their users can be publishers under the defamation laws, but until those cases and this legislation, which effectively acknowledges that status, that wasn’t so clear cut, right?
00:24:43ST:Yeah, you’re probably right about that. I think that, given the Defamation Act was introduced long before social media became what it is – back in 2005, Facebook was only available to US college students, and the likes of Google reviews weren’t really around – and given there’s been a vast movement in what the online world feels like today compared to when the uniform laws came into play, there has been some querying of where the boundaries of publication lie for those online acts. I think though that if you trace it back to first principles, it’s always been relatively clear that anybody who was having an involvement in the publication was likely to be liable. We did, however, have the Voller case, which went all the way to the High Court, which essentially involved a number of news media outlets who had published stories on their primary website, but then had sought to promote those stories by posting on Facebook about the stories. There were then third parties who were posting Facebook comments in response to those posts, and they were ultimately sued in respect of those third party comments. And there was a question about whether, or at what point, they became a publisher of those posts. Now, interestingly, given the pathway that that dispute took, the primary focus – and certainly the issue that went to the High Court – was whether or not the media mastheads were a publisher of those third party comments, rather than whether they had a defence of innocent dissemination available to them. Now, it may well have been the case that, had that innocent dissemination argument been run – and it wasn’t, for strategic legal reasons; they wanted to get that publication point determined first – they would have had a perfectly applicable innocent dissemination defence available. But instead, the case was all about; at what point do you become a publisher, particularly where you haven’t been specifically put on notice of the publication by receiving a concerns notice or receiving some other complaint about its existence?

TIP: Scott just mentioned the case of Fairfax Media Publications; Nationwide News Pty Ltd; Australian News Channel v Dylan Voller [2021] HCA 27. In that case, Dylan Voller, a former detainee at the Don Dale Youth Detention Centre, sued several media companies for defamation, alleging harm from comments made by third parties on those companies’ Facebook pages in response to news articles about him. Voller claimed that these comments were defamatory, and the media companies disputed their liability, arguing that they weren’t the publishers of those third party comments on their Facebook pages. The High Court’s decision affirmed that media outlets and businesses with social media pages are considered publishers of the third party comments posted on those pages, even if they didn’t directly post them or approve them. The Court clarified that under defamation law, a person can be liable as a publisher without needing to know or intend for defamatory content to be published. Simply providing a platform for such comments, like contracting with Facebook to create a public page, and posting content that invites a public response, is enough to establish publication, at least as the Defamation Act is currently framed. This differs from cases involving physical property, like graffiti on billboards, as the media companies were deemed to have intentionally facilitated public comments by allowing comments on their pages. While the High Court confirmed that a defence of innocent dissemination might shield some publishers from liability if they lacked control over the content before publication, before the recent amendments it was unclear whether this defence applied in this context. As we’ve been discussing with Scott, the defamation reforms will substantially change the practical effect of the decision in Fairfax Media v Voller.

00:28:28DT:And I suppose that question of what constitutes publication – so far, we’ve been talking about social media platforms and their users, and search engines, as a bit of a monolith, but those are, in substance, quite different methods of publication, right? And even within those, there might be different methods of publication. So if I think about search engines, Google indexes the web, and it might innocently disseminate or publish a link to a website because its PageRank algorithm suggests that it’s relevant for a particular search term. It might publish that link because a user has paid them to publish that link in response to particular search terms with Google AdWords. Let’s start there maybe, and then we can work through a couple of other examples, but is there a difference in defamation law between, say, search engine results and other content? And is there a difference when it comes to search engine results between organic and paid search?
00:29:22ST:Yes, there is. Absolutely. So, that’s something that’s been clarified by the recent amendments, but it’s one of those situations where the common law is progressing, albeit slowly, and then ultimately the legislature catches up and puts in place some new law to clarify matters. Prior to the recent amendments coming into place, there had been another case that went to the High Court involving a lawyer by the name of Mr Defteros, who was suing over some defamatory search results on Google. Now, he sued Google in respect of those search results, and the question was whether or not Google was a publisher of those results. The Court ultimately, in that case, found that insofar as organic search was concerned, it wasn’t a publisher. Now, very shortly after that decision was handed down, the new amendments came into place, and that was further confirmed through the introduction of what is essentially a separate defence for Google in relation to organic search. Now, you also mentioned paid search. That has always been – and remains, since the Defteros case and the new amendments to the law – a bit of a different category, because with paid search, Google has some incentive or some involvement in the publication and certainly has knowledge of what is going to show up in response to a search rather than it being solely based upon the algorithm working in the background. So, in the case of paid search, Google is a publisher of that and can be liable for that, but organic search has now become a bit of a defamation-free zone.
00:30:56DT:Got it. So the immunity defence that the stage 2 amendments have brought in applies only to organic search. Paid search can constitute defamation, but I suppose the internet intermediary defence applies as equally to paid search as it would to, say, social media platforms?
00:31:11ST:Well, I’m not sure it would apply in that instance because of the fact that for paid search, it’s not a scenario where somebody who is completely separate as a third party has posted content without your involvement. Instead, it’s that third party coming to Google saying, “I will pay you X dollars per click for you to create this content in response to user inquiries.” So there’s a much more active role in that application. I very much doubt that the dissemination defence would be able to apply to those paid search, AdWords style, results.
00:31:45DT:Interesting. I hadn’t thought about it that way, that it’s a more active form of publication – I mean, it’s an advertisement, and you publish the advertisement in response to the original publisher or the customer paying you to do that – I hadn’t really thought about it as a separate category. That’s interesting.
00:31:59ST:And another separate category perhaps is the autocomplete suggestions that Google provides.
00:32:05DT:Oh, yeah. I hadn’t even thought of that.
00:32:07ST:Because again, that’s something that they are promoting themselves, creating themselves, rather than it being solely in response to a user query.
00:32:14DT:Well, that’s interesting, because I think there’s some mythology and some heuristics we use to try and understand the internet that we all live with. And for me, I had always thought that autocomplete was kind of organic and unmoderated, that it was almost this living beast responding to traffic, kind of reflecting the zeitgeist of the day or week. But is that regarded as a more active, intentional form of publication than organic search?
00:32:39ST:Well, certainly in the Defteros case, that was one of the questions for the Court in that matter. And there was a bit of a distinction drawn between organic search, where there was simply the provision of a hyperlink, possibly with some degree of snippet of what it contained, but the hyperlink basically pointing the person to where they can find further content. As opposed to autocomplete, where someone who may potentially want to go down a different search path is encouraged or enticed by Google to take a different path and enter a different search term. And certainly one of the things Mr Defteros was complaining about in that case was that when you started typing in his name, there were some defamatory autocomplete suggestions that came up.
00:33:26DT:Got it. If I’m searching for “David Turner best lawyer” and something else comes up after my name, that means Google might be disseminating something that I wasn’t originally going to look for in the first place.
00:33:37ST:Exactly.

TIP: Let’s dive a bit deeper on Google LLC v Defteros [2022] HCA 27. George Defteros, a criminal lawyer who represented figures linked to Melbourne’s gangland wars, filed a lawsuit against Google for defamation. In 2004, Defteros and Mario Condello were charged with conspiracy to murder involving individuals such as Carl Williams. Although these charges against Defteros were dropped, his prosecution received significant media attention, especially from The Age newspaper. In 2016, Defteros discovered that searching his name on Google produced a snippet and hyperlink to an article titled ‘Underworld Loses Valued Friend at Court’ published by The Age on the day he was charged. Defteros argued that the article, including the snippet and link, defamed him and held Google responsible for making it accessible through its search results. Initially, both the Supreme Court and the Court of Appeal ruled against Google, but the High Court ultimately overturned the Court of Appeal’s decision. In a majority judgment, the High Court concluded that Google’s act of listing search results did not make it a publisher of the content on those linked third party pages. The Court emphasised that Google merely responded to user search queries by providing links without actively directing or encouraging users to click on any specific results. Therefore, Google was not considered to have lent assistance, in the relevant sense, to the publication of The Age article, as it neither facilitated nor influenced the article’s content or placement on The Age website. The Court ruled that Google is not a publisher of defamatory content hosted on third party sites simply because the pages are hyperlinked in Google’s search results. While this decision appears straightforward, it challenges a traditional defamation principle that any party who assists in disseminating defamatory material can be held liable for it. In traditional publishing contexts, if a defamatory statement is published in a book, defamation claims can theoretically extend to anyone involved in the book’s creation and distribution – the quote’s originator, the author, the editor, the publisher, even the bookstore or library that makes the book available, though there are statutory defences for that last category. But the Court ruled that Google, a bit like a bookstore or library, is not liable for merely facilitating access to content through search results. The High Court’s decision focused specifically on organic search results, where content is generated through automated processes without human intervention. That decision doesn’t apply to Google’s paid advertising platform AdWords, which displays sponsored links based on specific search terms paid for by advertisers. Justice Gageler had distinguished between organic search results, which are automatically generated, and sponsored links, which are created at the direction of the advertiser. Paid ads in search results might expose Google to defamation claims, akin to how newspapers are liable for defamatory advertisements that they publish. In earlier cases, like Google v Duffy, courts analysed Google’s role as a publisher concerning both organic search results and paid links. These cases drew a line between defamation and misleading conduct, suggesting that search engines or platforms might still face liability for defamation when receiving payment to promote third party content. Even with the Defteros ruling, platforms like Google might still be exposed to liability when acting as a paid publisher.
Now, I know businesses often wonder if they can hold Google liable for defamatory user reviews on Google Business Profiles. Negative reviews can harm a business’s reputation profoundly, raising the question of whether Google is liable for publishing those. Unlike organic search, Google directly hosts reviews and retains full control over their content because they’re moderated. Courts have held the hosts of similar platforms – like Facebook page owners – liable for defamatory user comments. In Voller, the High Court found that Facebook page owners could be liable for third party posts on their pages, and the Defteros ruling doesn’t alter this. It’s possible that Google is liable as a publisher of content on the review platforms it controls, like Google Business Profiles – although there hasn’t been a case that’s formally determined that. Google’s autocomplete function, which suggests search terms as users type, has also raised potential defamation concerns. In one case, autocomplete suggested terms associating the plaintiff with crimes, potentially a defamatory publication. And the High Court in that case suggested that Google’s intentional participation in showing autocomplete suggestions could establish publication. While this wasn’t a final ruling on liability, it signalled the potential for Google to be held liable if autocomplete suggestions promote defamatory associations. Of course, we should note that given how Google’s autocomplete function works today, in 2024, it would be difficult to argue that Google intentionally participates in the autocomplete results any more than it intentionally participates in generating search results. They’re generated dynamically and organically, not by Google in an intentional way. Google’s latest search feature – AI-generated answers at the top of search results – is likely to be another flashpoint in defamation cases for internet intermediaries, and will need to be tested, probably before the High Court. Unlike linking to organic search results, or displaying snippets of third party websites, Google’s AI answers are generated by Google themselves, and published without necessarily being clearly linked to a third party publisher, and without necessarily being wholly or partly attributable to a third party publisher. It’ll be interesting to see if Google might be held liable for some of the things it publishes. If so, defamation wouldn’t be Google’s only concern when it comes to its AI-generated answers. At an early stage in that feature’s life, Google’s AI-generated answers, for example, suggested that users eat a rock a day to stay healthy, and that they should put craft glue on their pizza to prevent the cheese from sliding off.

00:38:36DT:We should also talk about – just because you mentioned that in Defteros there were organic search results that might have constituted just a link to third party content and then might also have contained a snippet – I suppose talking about social media platforms now who might well rely on the new intermediary defence, there could be posts on those platforms that only constitute a link to third party content or that contain user generated content. Is there a distinction between user generated content and links on those platforms?
00:39:06ST:Not necessarily. For the intermediary, the administrator, the platform host, whatever you want to call them, their position will be the same in that they are saying, “I didn’t post this content, somebody else did. I wasn’t able to know that it was defamatory until somebody complained about it.” In terms of the actual author themselves who has published the initial post, there’s always been a bit of a question mark about to what extent are readers taken to click through and read what’s in the hyperlink. Some of the amendments to the defences have now confirmed that, for example, under the honest opinion defence where you’re setting out the proper material upon which the opinion is based, you can do so by way of referring to a hyperlink containing further information for things like that. So I think that the courts are beginning to move with the times and accept that there’s different ways in which we do provide information to each other but the question of hyperlinks is still a developing area on the whole.
00:40:05DT:One thing that strikes me in our conversation today is that we’re talking a lot about very large multinationals, almost invariably based in the US, when we’re talking about the prospective defendants to these actions – Google, Facebook… – and look, they have Australian or Asia Pacific subsidiaries, they have offices present here and I assume public offices present in Australia, but in large part, the decisions about what functionality is available for users to report defamatory content or to complain about content are global product decisions. So what impact does Australian legislation have on the decisions and the product design decisions of these large technology companies, and is what we’ve legislated in the stage two reforms reflected in other places around the world?
00:40:56ST:So I think that one of the main determinants of how impactful this legislation is, is: are the defendants – whether they are these US multinationals, whether they are other platforms – are they good corporate citizens? Are they going to seek to abide by Australian law or are they instead going to hide behind whatever protections they may have in their own jurisdiction? And the point you raise about the fact that many of them are domiciled in the US is an important one because there are a couple of pieces of legislation in the States that are vastly different to the position in Australia in respect of defamation law. One of those is the Communications Decency Act, which essentially provides an immunity for online platforms for third party content. Now that immunity is far broader than the new internet intermediary defence in Australia. The second is a piece of legislation called the SPEECH Act, which is an acronym which I think stands for ‘Securing the Protection of our Enduring and Established Constitutional Heritage.’
00:42:02DT:Love a legislative acronym, don’t they?
00:42:04ST:Yeah, they do. And the SPEECH Act essentially says that foreign defamation judgments won’t be enforced in the US unless either the forum in which they were handed down has the same protections on freedom of speech as what the US First Amendment provides, which Australia certainly does not, or alternatively, if the outcome would have been the same had the plaintiff sued in the US. And so the combined effect of those two pieces of legislation is really that if you’ve got an Australian defamation judgment, it’s very unlikely that you’ll be able to enforce it in the US, or certainly very unlikely you’ll be able to easily enforce it in the US, and you’re therefore quite reliant upon the fact that if you are going to a US based company and saying, “you need to take this down because it’s defamatory,” you need them to be a good corporate citizen and abide by Australian laws rather than seek to take refuge under those US protections.
00:43:02DT:That’s interesting. I wasn’t aware of the SPEECH Act, in the sense that there is legislative cover against enforcement of that judgment, even if you were to obtain it in Australia. I’m thinking about not a defamation case, but X’s refusal to take down content that the eSafety Commissioner gave them a notice to take down several months ago – with the disappointing result that the injunction was lifted, because it was embarrassing to the court how flagrantly it was not being observed by the company. That context makes a lot more sense now and it does mean, I suppose, that these amendments targeting US-based multinationals are, in a way, arrangements by consent, in the sense that they are, as you say, predicated on these companies being good corporate citizens – responsible actors who follow the guidance the legislature gives rather than acting out of fear of sanction.
00:43:57ST:Absolutely. And there are certainly some platforms in foreign jurisdictions that really make their money off hosting defamatory content. And there are some of those that I think if you’ve got defamatory content on there, there’s probably very little prospect of really being able to do anything about it in Australia. With the larger, better known ones, certainly my experience is that they will be willing to engage and take steps at appropriate times, but certainly it is still more difficult whenever you are dealing with a US host than if you’re dealing with a company in Australia who is going to be very clearly responsible under the Australian legislation.

TIP: The recent amendments to the Defamation Act are part of a two stage review process aimed at updating and clarifying the responsibilities of digital platforms and protections around defamation claims. Here’s a breakdown of the main changes: 

The first part of stage two of the Model Defamation Provisions Review, led by New South Wales, focuses on the liability of internet intermediaries such as social media platforms and search engines for defamation related to third party publications. On the 22nd of September 2023, the Standing Council of Attorneys-General endorsed these Part A reforms, with most jurisdictions – South Australia being the only exception – committing to implement them by the 1st of July 2024, following approval by their respective cabinets.

Under these new laws, courts can now issue orders to non-party intermediaries to restrict access to defamatory material. Courts are now also required to consider specific factors to ensure a balanced approach when granting orders to identify individuals behind defamatory online content. Also, the requirements for making offers to make amends in relation to online defamation have been updated, clarifying expectations and processes.

There are also some nice quality of life updates for the litigators. Additional provisions now permit electronic service of defamation related documents. These Part A amendments also aim to reduce liability for internet intermediaries that facilitate third party content without actively participating in its publication, establishing clearer guidelines for intermediaries while offering individuals avenues for redress in cases of online defamation. The Australian Government has also announced plans to introduce an exemption in state and territory defamation laws for compliance with section 235(1) of the Online Safety Act.

00:46:09DT:Scott, so far today we’ve been talking about online publication with almost the tacit assumption that online publication is widespread – that, you know, when we talk about online publication, it’s publication to the world at large – I think I even said that at the top of the episode. But one other feature of online publication and the widespread use of social media platforms to get our information and communicate with one another is that we have a lot of closed groups, small groups of users that publish information to one another – Facebook groups, WhatsApp groups; years ago, when I was at the bar, I had a defamation matter about a WeChat group. What impact do these closed groups have? And are there some defences that might be available in circumstances of a publication to that small group that might not be available for a publication at large?
00:46:54ST:Absolutely, and all of those different types of groups that you’ve mentioned are things that have come across my desk as well in terms of potential defamation claims. I think there really is a significant and important difference between publishing something, as you say, to the world at large – which in the online context is essentially where you’ve got your Facebook privacy settings, for example, set to everybody rather than just to friends – and publishing something to a smaller group. Where you are publishing to a smaller group, if the people who are members of that group share some common interest in receiving information about that subject, then you might be able to avail yourself of a defence known as qualified privilege. Now, essentially what the defence of qualified privilege says is that if you are publishing material to a limited audience who have a need to know that information, then that gives you a defence. The legal test is whether they’ve got a legal, social, or moral interest in receiving information on the relevant subject. The classic example would be, say, if you came home from work and found that your house had been robbed. If you call the police and they ask whether you have any ideas as to who it might have been, it’s perfectly reasonable and appropriate to say to them, “oh, actually, yes, before I left for work this morning, my neighbour, Joe Bloggs, was peering over the fence looking quite sus, I think you should go and speak to him.” The police have that interest in receiving information from you about your suspicions so that they can properly discharge their duties and investigate the crime. Conversely, if you jump on Facebook and post, “I’ve just been robbed, pretty sure it was my dodgy neighbour, Joe Bloggs,” then your Facebook friends might be interested in it, but they do not have any legal, social, or moral interest in receiving information on that particular subject. If, though, you take it to a Facebook group where the only people who are members may, for example, be members of a particular sports club, or have some other entitling factor that means they can join that group, then they may be people who have that legal, social, or moral interest in receiving those communications. And if you’re publishing it through those channels, with those appropriate privacy settings, it’s possible at least that you might have a qualified privilege defence available to you. So there are two defences – statutory and common law qualified privilege. Broadly speaking, they’ve both got the same key elements, although there are certainly some subtle but important differences between the two, but essentially a qualified privilege defence entitles you to publish information to a small or limited audience of people who have a need to know that information. And I think that quite often people will not think carefully about the specific forum in which they’re publishing things, and will not think about the privacy settings on the page. So, for example, something that might be able to be defended by a qualified privilege defence if it was published on a private Facebook page run by a particular sporting club, or something of that nature, wouldn’t be defensible if it was on the same page with public settings.
00:49:52DT:Yeah, I remember one from years ago. It was a publication in one of these private groups – one of these sort of private, encrypted messaging groups. It was a business networking group for small business owners and other professionals, and someone published a statement about another member of the group saying, “I wouldn’t work with them again, they didn’t pay me, it was a very unpleasant experience and I don’t think they’re very trustworthy.” That may be – depending on the circumstances, and I can’t say too much more about that – a circumstance where there is a legitimate legal or moral interest in that information within that particular group.
00:50:32ST:Yeah. And I think one area where we as defamation practitioners tend to see quite a few of these cases is large residential apartment complexes, where you’ve got members of the body corporate who might have an owners’ Facebook page. They all tend to have at least some knowledge of each other, and they might have opinions of the contractors working at the building or of other people living in the building. And because anything they post on that page goes out to a fairly significant audience, that can be ripe ground for defamation allegations. Certainly one of the key questions in any of those defamation disputes is whether all of the members of that owners’ group or owners’ Facebook page have an interest in receiving information on the particular subject matter that the post relates to.
00:51:19DT:Yeah, we’ve talked on the show before to some strata lawyers about how often defamation issues can come up when advising the body corporate in a strata scheme. You know, Scott, at the top of the episode we talked about how Australia – and Sydney and Melbourne in particular – are, by some standards, the defamation capitals of the world, and how our policy settings encourage defamation litigation here more so than in other jurisdictions. But the stage two amendments – and the stage one amendments too, I suppose – have introduced some changes that adjust those policy settings a little bit. One of those is the so-called serious harm threshold that’s been introduced to the Act. Tell me a little bit about that.
00:51:55ST:So that’s probably been the most significant change to defamation law, at least since I started practising. Prior to the stage one amendments – which, as I said, came into effect in July 2021 everywhere except for Western Australia and the Northern Territory – all you needed to show was that content was defamatory in the sense that it would cause the ordinary reasonable reader or listener to lower their opinion of you. Since the stage one reforms were enacted, there’s an additional element to the cause of action for defamation: you need to show that the publication either has caused or is likely to cause serious harm to your reputation. That was introduced primarily to weed out what were sometimes described as backyard defamation disputes – small claims where the audience who had seen the defamatory content was quite limited, and where the quantum of damages was always going to be out of all proportion to the legal costs that would be incurred in litigating the matter through to a hearing. Essentially, there was a feeling that there were too many of these cases clogging up the courts and that they should be stopped. Now, the serious harm threshold really has had a significant impact on that, and there’s now the ability for a defendant, upon having defamation proceedings commenced against them, to seek an early hearing on the issue of serious harm and to have the court look just at the question of: has this publication actually caused, or is it likely to cause, serious harm to the reputation of the plaintiff? And in that serious harm analysis, you exclude things like hurt feelings and the personal distress that has been suffered. Those are relevant to the question of damages if you get to that point, but in establishing serious harm in order to have an actionable claim for defamation, it’s purely harm to your reputation.
00:53:48DT:Got it. And I suppose that also requires an examination of the plaintiff’s current reputation. There’s one line of argument I’ve seen run before: that even a false statement containing a defamatory imputation can’t cause material harm, or very much harm, to a person whose existing reputation already reflects the characteristics that the imputation carries. So if the defamatory imputation is that this person is dishonest, or violent, or a criminal, and that person already holds that reputation in the public eye, then perhaps liability is established, but the damage is relatively limited.
00:54:23ST:Absolutely. And when we’re talking about scenarios where there’s already been a lot of media reporting about a particular person, it can be difficult to isolate the specific or additional harm that’s been caused by the one particular article they take issue with. There are also scenarios where a defendant may say that somebody has such a poor reputation that there is really no reputation to defame at all, and therefore no serious harm has been caused. The other interesting thing about the serious harm amendments is how they deal with claims by companies, because under Australian law, companies can’t sue for defamation unless they are either a not-for-profit entity or a small business with fewer than 10 full-time equivalent employees. That still left a fairly significant number of small businesses who were able to pursue defamation claims over the likes of Google reviews and things like that under the old law. Since the introduction of the serious harm threshold, though, the legislation specifically says that for a company, serious harm requires you to show that the publication has caused serious financial loss, and that has really just wiped out those claims by small businesses, because it is very, very difficult in most instances to show that one particular online post or one particular article has caused serious financial loss.
00:55:43DT:And you mentioned this briefly earlier – the public interest defence, which is one that might only rarely be available for online publications. What is that one?
00:55:52ST:So the public interest defence is an entirely new defence. And I think it was intended, in some respects, to do what the old statutory qualified privilege defence had been hoped by some media defendants to do but didn’t, which was to allow mainstream media reporting of stories that are in the public interest. Now, we’ve only seen a very limited number of cases on it so far, but essentially the requirements to establish the public interest defence are to show that the publication of the defamatory matter was in the public interest, and that at the time of publication the author reasonably believed it was in the public interest to publish it. The reasonableness of that belief is the main focus of most of these cases. We have had one piece of litigation involving a claim by a former SAS soldier, Heston Russell, against the ABC in relation to some media reporting about his conduct, where the public interest defence was relied upon. And interestingly, one of the things the court focused on was that when you’re publishing a particular public interest article, it’s important to take into account the changing information as the story develops. The court actually found that although, at the time the first of the stories was published, it may well have been reasonable to believe it was in the public interest to publish it, as time went by and new information came to light – and doubt was cast on the veracity of some of the sources and some of the information – there then needed to be further investigation undertaken by the journalists before continuing to publish, rather than just relying upon that same original material and that same original belief. So I think it’ll be interesting to see, as time goes by, whether the public interest defence becomes one that is successfully relied upon by the media, or whether it goes the way of the statutory qualified privilege defence, which has very rarely been a successful defence for mainstream media defendants.
00:58:02DT:Yeah, even in Russell, I think Justice Lee said that it didn’t help to advance the thinking in the profession or the judiciary about the new defence very much. I think he said it wasn’t a good vehicle for establishing that the defence had any real work to do. So, definitely still a largely untested new defence available under the amendments.
00:58:22ST:Absolutely.
00:58:23DT:Well, we’re nearly out of time, Scott, but before you go, I like to end each interview by asking a question for our younger listeners who might’ve recently joined the profession or are soon to join it. As we said at the top of the show, this is a fascinating area of practice, and I think everyone who does a media law elective, or even reads the newspaper once in a while, thinks about what it might be like to practise in this area. But for those of our listeners who might be hoping or thinking of following in your footsteps and becoming a defamation lawyer, what would your recommendation or your advice to them be?
00:58:53ST:So, I think: just be curious. When you see new cases come out, read them. I’ve found throughout my career that if it’s an area you are interested in, it is a lot easier to keep up with changes in the law and to become an expert in it, because it is something you will enjoy doing that extra research and reading on. In defamation, there are always big personalities involved, and there’s always that personal aspect to the dispute, and it’s important to be able to understand that in order to give your clients appropriate advice. It’s not as simple as: was it true or untrue? It’s not a black and white question. There has often been significant hurt and distress caused regardless of whether or not there is an actionable claim there. And I think that being able to grapple with that, and work with your clients to get an outcome that is going to achieve what they want – both from a legal standpoint and from a broader relationship and reputational standpoint – is really important.
00:59:55DT:Absolutely, and that tip about reading cases as they come out, that’s great advice in many practice areas, but this is one of those areas where you don’t have to look too hard to find an interesting case that you won’t struggle to finish. These are interesting situations to read about and learn about.
01:00:11ST:The one thing about defamation cases is that the media loves reporting on them, particularly when they aren’t the defendants in the case. So, whenever there is a defamation case going through the courts, you can be pretty well assured that there’ll be a fair amount of media reports about it, so you don’t have to delve too deeply into the case reports in order to find out what’s going on. You can just rely upon your favourite news media source.
01:00:32DT:Yeah, absolutely. Well, Scott Traeger, thank you so much for joining me today on Hearsay.
01:00:35ST:Thanks very much. It’s been a pleasure.
01:00:47DT:As always, you’ve been listening to Hearsay, the legal podcast. I’d like to thank my guest today, Scott Traeger, for coming on the show. Now, if you need a professional skills point this year, why not check out our recent episode with Hans Weemaes, Head of Economics and Data Analytics at Vincents. That one’s episode 129 and it’s called ‘Grasping Causation: A Data Science Explanation of Causal Inference and the Role of Counterfactuals’.

If you’re an Australian legal practitioner, you can claim one continuing professional development point for listening to this episode. Whether an activity entitles you to claim a CPD unit is self-assessed, as you well know, but we suggest this episode entitles you to claim a substantive law point. For more information on claiming and tracking your points on Hearsay, please head to our website.

Hearsay the Legal Podcast is brought to you by Lext, a legal technology company that makes the law easier to access and easier to practice, including your CPD.

Finally, I’d like to ask you a favour. If you like Hearsay the Legal Podcast, please leave us a Google review. It helps other listeners to find us and that keeps us in business.

Thanks for listening and I’ll see you on the next episode of Hearsay.