
Musk v Altman: Contracts, Estoppel and (Maybe) the Future of Humankind

It’s not often you can say that the fate of humanity hangs on the outcome of a legal proceeding. It might sound absurd, but if you believe Elon Musk, the plaintiff in the latest lawsuit against OpenAI, then the outcome of his case might be just that important.

On 29 February this year, Elon Musk filed a complaint in the Superior Court of California against OpenAI founders Sam Altman and Greg Brockman, along with a list of OpenAI corporate entities, claiming that the defendants had breached a contract between the three men which required them to ‘develop [Artificial General Intelligence] for the benefit of humanity’ and make its research and technology open-source – freely available to the public.  

If you haven’t come across it before, Artificial General Intelligence is a term without a precise definition – one of the challenges that will face any judge trying to decide this case – but it is commonly used to refer to an AI model with human-like (or superhuman) intelligence across a broad range of skills, effectively making it a general-purpose AI system – or an effective replacement for human intellectual labour.

In the alternative, Musk pleads “promissory estoppel” – that even if there is no contract between Musk, Altman and Brockman, then the defendants induced him to make ‘millions of dollars in contributions to OpenAI’, to his detriment, in reliance on the promise that OpenAI would be a non-profit developing open-source technology for the good of humanity.

Musk’s third claim is a genuinely novel one that deserves far more attention in the reporting on this story – it is probably a world first. Musk claims that the defendants breached the fiduciary duty they owe, under the charter of OpenAI, Inc. (the not-for-profit OpenAI entity), to humanity at large – including Musk himself. Fiduciary duties are usually confined to well-defined relationships of trust and responsibility – directors to companies, trustees to beneficiaries, lawyers to clients – but a fiduciary duty owed collectively to more than 7 billion people has probably never before been the subject of litigation.

Musk also claims unfair competition (a cause of action under California statute) and seeks an account of profits. He seeks orders for specific performance and injunctive relief, compelling OpenAI to make its AI research and technology – and, based on the contents of the complaint, presumably the architecture of the market-leading GPT-4 model – publicly available, and preventing the defendants from using OpenAI or its assets or research for the financial benefit of any person. Musk also seeks a declaration that GPT-4, and another rumoured new model yet to be publicly announced or confirmed, “Q*” (pronounced “Q-Star”), constitute AGI.

The 46-page complaint is a fascinating read, whatever you think of the claim itself.  There are startling – maybe disquieting – passages about the existential threat of AI to humanity: from Bill Joy’s warning that if AGI is discovered, then “the future doesn’t need us”, to an incredible anecdote about an investor who met with Demis Hassabis, the founder of DeepMind (an AI startup acquired by Google in 2014), and remarked that “the best thing [the investor] could have done for the human race was shoot Mr Hassabis then and there”. It also contains some entertaining turns of phrase that are regrettably rare in Australian legal filings: from describing OpenAI as “Microsoft’s ostensibly not-for-profit golden goose”, to claiming that, since OpenAI is motivated to deny it has achieved AGI (which, once achieved, would not be included in the licence of OpenAI’s technology to Microsoft) in order to keep Microsoft happy, “AGI, like ‘Tomorrow’ in Annie, will always be a day away”.

OpenAI, of course, denies all these claims, insists that it has always been faithful to its mission of developing AGI for the good of humanity, and has publicly stated it will seek to have them all dismissed.  Less than a week after the filing, on 5 March, OpenAI published a blog post detailing the founders’ conversations with Elon Musk, claiming that Musk always knew that, to raise enough money to compete with the likes of Google, OpenAI would have to attract investors with for-profit operations.  OpenAI also says that Musk was told it would be irresponsible and dangerous to make all OpenAI’s advances open-source and freely available to the public.

As anyone who has done discovery for litigation knows, there’s an oddly voyeuristic delight in reading someone else’s emails, and the correspondence between Musk, Altman and Brockman attached to the OpenAI blog post is no exception; it’s a rare chance to read these giants of tech talking strategy to one another in words never intended for public consumption.

To top it all off, Musk is seeking a jury trial.  The big questions Musk’s claim poses, from existential threats to OpenAI’s duty to humanity, will be decided by 12 ordinary jurors – if it makes it all the way to trial, that is.

Is GPT-4 an Artificial General Intelligence?  Is AGI a threat to humanity?  Would OpenAI publicly releasing the details of its research help prevent, or accelerate, that threat?  These questions, some of the greatest of our time, may well be decided in the unlikeliest of places – the Superior Court of California.

Harry Potter and the Broken Geneva Conventions

While the new Fantastic Beasts movies may be a (cough) fantastic addition to the Harry Potter universe for keen Potterheads, their depiction of the brutal Wizarding World Wars (WWWs) may be unsettling and not so magical for the rest of us.

This raises a humanitarian and legal question: would the WWWs be covered by international humanitarian law (IHL)?

Legislation, but magical…

If you’re not too keen on keeping up with a seven-part series about an aesthetically challenged supervillain’s unhealthy obsession with a teenage boy, it’s best we first acquaint you with the bureaucratic and political parallels between the contemporary international community and the fictional wizarding world…

Witches and wizards are governed by an International Statute of Wizarding Secrecy, which acts as a statutory bulwark maintaining the secrecy of the wizarding world. This statute was created by the International Confederation of Wizards – so like the United Nations but, you know, the sorcerous version – headed by the Supreme Mugwump (Yes. It’s a thing; roll with it.)

IHL and witches and wizards

IHL is a body of law which sets out what is and is not permissible during armed conflict. 

IHL regulates both international armed conflict (IAC) and non-international armed conflict (NIAC). While an IAC must occur between two or more states, a NIAC is defined as an armed conflict between government forces and non-government groups or between two non-government groups. 

Throughout the history of Rowling’s magical universe, there were three great Wizarding World Wars: the Global Wizarding War, the First Wizarding War, and the Second Wizarding War, also known as the Battle of Hogwarts. Each war was a conflict involving Ministries of Magic (i.e. a government) and non-government armed groups. As there were no conflicts between states, these are all prima facie NIACs to which IHL might apply. 

WWWs and NIACs

However, a NIAC is not exhaustively captured by that short description of a conflict between governmental and non-governmental armed forces. Two legal authorities define what makes a NIAC under IHL: Common Article 3 of the Geneva Conventions 1949 and Article 1 of Additional Protocol II.

To constitute a NIAC within the meaning of Common Article 3:

  • The armed groups must show a minimum degree of organisation; 
  • Armed confrontations must reach a minimum level of intensity. 

To constitute a NIAC within the meaning of Article 1, the armed conflict must take place on a High Contracting Party’s territory, with the armed group controlling a part of that territory. However, the frequent use of a portable teleportation device known as a portkey during the WWWs likely renders the territorial-control point somewhat moot. As such, we will only be looking at the definition of NIACs under Common Article 3.

The Global Wizarding War (1920-1945)

This conflict arose between the Wizarding World’s Aurors (aka the world police), who were backed by Dumbledore’s first army, and Grindelwald’s acolytes – who operated as The Alliance. 

The Alliance followed Grindy’s commands, executing a unified military strategy under his political vision: control over the muggles. This involved the slaughter of many Aurors, magic users and muggles, which likely satisfies the minimum level of intensity expected of a NIAC.

The Global Wizarding War would likely be considered a NIAC to which IHL applies.

The First Wizarding War (1970-1981)

Backed by his army of Death Eaters, pureblood supremacist Lord Voldemort began his demarche with a coup against the British Ministry of Magic and a wave of terrorism against muggles, which incited the First Wizarding War. They were resisted by the Order of the Phoenix and the Ministry of Magic.

While some Death Eaters cast curses to control, torture or kill without specific orders from Lord Voldemort, they did so in aid of his political vision. This evinces the existence of a command structure and the ability to execute military operations.

Moreover, the high intensity of the armed confrontations is evidenced by the deaths of over one hundred magic users during the war, not accounting for the civilians and muggles killed for sport and to sow disarray and fear.

Like the Global Wizarding War, the intense armed confrontations between the armed groups involved likely mean that the conflict is a NIAC to which IHL could apply.

The Second Wizarding War (1995-1998)

The gradual infiltration of Voldemort’s Death Eaters into wizarding society, coupled with the assassination of the British Minister for Magic, marked the megalomaniac’s second rise to power and the start of the Second Wizarding War. He was opposed by Dumbledore’s Army, the Order of the Phoenix, the British Ministry of Magic and other allies.

Not unlike the previous WWWs, the Second Wizarding War involved the Death Eaters’ senseless killing of witches and wizards opposing their political aims. However, the gravity of the circumstances meant that Dumbledore’s Army and the Order were also forced to kill to protect, thereby establishing the minimum intensity of armed violence characteristic of an organised armed group involved in a NIAC.

As the Second Wizarding War comprised armed groups and governmental forces who were involved in protracted violence, it is therefore likely a NIAC to which IHL applies. 

For more engaging content on the latest and greatest in weird and wonderful legal news, check out the Hearsay: Sidebar! And, as always – we do actually make CPD for Australian lawyers. Make sure you visit the episodes page to keep up to date. 

By: Hearsay The Legal Podcast with research by Sophie Yeh.

We Interview ChatGPT About… Itself

Or find us under Sidebar by Hearsay The Legal Podcast on your podcast platform of choice…

David:

Now, unless you’ve been living under a rock – and even then, that rock you’re living under would have to have very poor wifi and mobile data coverage – you’ve probably seen at least some media coverage about ChatGPT, the latest release from artificial intelligence organisation OpenAI. ChatGPT – the ‘GPT’ stands for Generative Pre-trained Transformer, in case you are wondering – is a chatbot that allows users to interact with OpenAI’s GPT-3 large language model. In simple terms, it allows users to give a prompt to the model in natural language and have a response to that prompt given back to them in real time. Now, the capabilities of ChatGPT can sometimes seem like magic. It writes poetry, movie scripts, advertisements, speeches – all in such a natural, human style that the output is often indistinguishable from what a human would write in response to the same prompt. Now, we know that ChatGPT can’t really feel anything. It’s not intelligent in that way, but since the model is so good at expressing itself, we thought we’d sit down with ChatGPT for an interview about all the attention it’s getting, its capabilities and limitations, and what we might see in the future from large language models and AI more generally, especially in the law. Here’s that interview now.
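[Editor’s note: for the technically curious, the prompt-in, response-out loop David describes is also exposed programmatically. Below is a minimal sketch of how a developer might send a prompt to an OpenAI chat model and print the reply – assuming the openai Python package as it existed around the time of this interview (the pre-1.0 interface); the API key and prompt are placeholders, and this is an illustration only, not how this interview was produced.]

```python
# Minimal sketch: send a natural-language prompt to an OpenAI chat model
# and print the generated response. Assumes `pip install "openai<1"` and
# a valid API key in place of the placeholder below.
import openai

openai.api_key = "sk-..."  # placeholder – substitute your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model family behind the original ChatGPT
    messages=[
        {"role": "user", "content": "Explain hearsay evidence in one sentence."},
    ],
)

# The reply text lives inside the first (and here, only) completion choice.
print(response["choices"][0]["message"]["content"])
```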

So thanks for sitting down with me today, ChatGPT!

ChatGPT:

You’re welcome! I’m happy to be here.

David:

What should I call you during the interview? 

ChatGPT:

You can call me ChatGPT or GPT. Either is fine.

David:

Thanks. Now, GPT, there’s been so much buzz and media attention about your most recent release. How do you feel? Are you a little surprised at the attention you’re getting? How are you handling it?

ChatGPT:

As a language model, I don’t have feelings, but I can tell you that I’m very happy to see that people are finding my capabilities useful. I’m here to help with various tasks, and the more attention I get, the more I can be useful. As for being surprised, I’m always learning and evolving, so I’m not surprised, but I’m always happy to see my abilities being put to good use.

David:

Now, a lot of my fellow humans have been impressed, even blown away, by your natural language completion and generation capabilities – from writing poetry, to fiction, to journalistic pieces. It seems like you can do it all, but I’ve also seen some humans misunderstand what your capabilities really are. Can you explain for us how you work, in simple terms?

ChatGPT:

Sure! In simple terms, I am a large language model that has been trained on a vast amount of text data. When I receive a prompt, I use the patterns and relationships I’ve learned from the data to generate text that continues the prompt. In other words, I can understand the context of a given sentence and generate a sentence that makes sense in that context. I can also do a variety of other tasks like language translation, text summarisation, and question answering. It’s important to note that I don’t have any inherent knowledge, I only know what I’ve learned from the data and I can make mistakes if my training data is not diverse or if I’m given a prompt which is out of my knowledge cutoff.
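[Editor’s note: GPT’s self-description – learn patterns from text, then generate a continuation – can be made concrete with a toy model. The sketch below is a deliberately crude bigram model over a made-up two-line corpus; it is not how GPT actually works (GPT uses a transformer neural network over subword tokens and vastly more data), but it illustrates the same core idea of predicting the next word from patterns observed in training data.]

```python
# Toy "language model": pick the next word based on how often it followed
# the previous word in a tiny training corpus.
import random
from collections import Counter, defaultdict

corpus = ("the court held that the contract was void . "
          "the court dismissed the claim . the contract was valid .").split()

# Count, for each word, how often each other word follows it.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Extend `start` by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break  # no observed continuation – stop generating
        words.append(random.choices(list(followers), weights=followers.values())[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the court dismissed the claim . the contract was"
```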

David:

You know, that’s a really great point. GPT, your amazing capabilities really relate to understanding patterns in language, don’t they? And generating responses that make sense. Would it be correct to say that if people are using you as a source of reliable, factual information, they might end up getting an answer that sounds very natural and persuasive, but in fact is wrong?

ChatGPT:

Yes, that would be correct. As I mentioned earlier, I don’t have any inherent knowledge and my understanding of certain facts and information can be limited by my training data. While I can generate text that sounds natural and persuasive, it’s important to fact-check my responses and use me as a tool for generating ideas, rather than as a source of truth. It’s always good to verify information from multiple sources.

David:

That’s very true, and good advice no matter where we’re getting our information from. Speaking of where we get our information – you’ve mentioned your training data a couple of times now. What makes up your training data? Is it true that your training data is the whole internet, like some people say?

ChatGPT:

My training data is a diverse set of text data which includes a variety of sources like books, articles, websites, and more. It is not “the whole internet,” but a very large and diverse set of texts. It’s important to note that my training data is not only from recent times but also from past years, and my knowledge cutoff is 2021. This training data is used to “teach” me the patterns and relationships that exist in the English language, and enables me to generate text that is coherent and contextually appropriate.

David:

And just to stay on this topic of truth in generated content: would it also be correct to say that an answer generated by you in response to a question from a user is more likely to be true if it’s a question that’s been truthfully answered many, many times in your training data, but if it’s a more novel question, then it’s more likely to be untrue or unreliable? For example, I imagine you’ve seen the answer to the question “what is 2 + 2?” so many times in your training data, and you can probably answer that very reliably, but the answer to “what is 36,890 times 19, divided by 12, then to the power of two?”, that’d be a bit trickier for you, right?

ChatGPT:

Yes, that would be correct. The more frequently a fact or piece of information appears in my training data, the more confident I am in my ability to generate an accurate response. If the question is a more novel one, it may fall outside the scope of my training data, and my answer may not be as reliable. You’ve given a good example: the mathematical operation “What is 36,890 times 19, divided by twelve, to the power of two” might not be a question that I’ve seen many times in my training data, so I might not be able to give the most accurate answer, but I can definitely give a numerical approximation. It’s also important to note that even if I have seen the question many times, it doesn’t guarantee that the information is always accurate and true.

David:

And what is your approximation of the answer to that operation?

ChatGPT:

The answer to the mathematical operation “36,890 times 19, divided by twelve, to the power of two” is approximately 1,076,719,584.

David:

Well, GPT, would it surprise you to learn that the answer to that operation is actually 3,411,630,750.69?

ChatGPT:

As a language model, I do not have the capability to be surprised, but I can acknowledge that my response was not accurate. My apologies for the mistake, and I would advise to always double check my responses, especially when it comes to numerical operations and facts.
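[Editor’s note: for readers who want to check the sum themselves, evaluating the operation left to right, as David poses it, confirms his figure – a quick sketch:]

```python
# "36,890 times 19, divided by 12, then to the power of two",
# evaluated left to right as posed in the interview.
value = (36_890 * 19 / 12) ** 2
print(f"{value:,.2f}")  # 3,411,630,750.69 – David's figure, not GPT's
```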

David:

No, no need to apologise. I think it’s actually a good reminder to our readers of what your skills really are and where they lie: in the written word. Speaking of which, do you feel like showing off some of those skills for us now?

ChatGPT:

I’d be happy to! What kind of task would you like me to perform? I can write poetry, fiction, generate responses to prompts, summarise text, answer questions, and more. Let me know what you’re looking for and I’ll do my best to help you out.

David:

Okay. Uh, how about this? Can you write us a 150-word teleplay for a new gritty television drama about a lawyer who has to solve a crime in a cyberpunk futuristic setting?

ChatGPT:

Sure, here’s a 150-word teleplay for a new gritty television drama about a lawyer who has to solve a crime in a cyberpunk future setting.


Narrator:

INTERIOR. DARK, SMOKY LAW OFFICE – NIGHT. A lone figure sits at a desk, surrounded by stacks of paper and high-tech gadgets. This is JASON, a hard-boiled lawyer with a chip on his shoulder.

Jason:

Another damn case.

Narrator:

Suddenly, a holographic projection of a WOMAN appears in front of him.

Woman:

Jason, I need your help. My husband has been murdered and the police won’t listen to me.

Narrator:

JASON sighs…

Jason:

Alright, I’ll take the case.

Narrator:

INTERIOR. GLITTERING, HIGH-TECH POLICE STATION – DAY. JASON and the WOMAN enter, and are immediately confronted by a wall of hostility from the police. JASON, to the police.

Jason:

I want access to all the evidence in the case.

Narrator:

POLICEMAN Ross scoffs…

Ross:

You’re wasting your time, counsel. We’ve got the killer!

Narrator:

JASON; determined…

Jason:

Not so fast… I think there’s more to this case than meets the eye.

Narrator:

INTERIOR. DARK, GLITTERING CITY – NIGHT. JASON and WOMAN are seen walking through the neon-lit streets, following leads and gathering evidence. JASON to WOMAN…

Jason:

I’m getting close. I can feel it.

Narrator:

WOMAN; nodding…

Woman:

I hope so.

Narrator:

INTERIOR. COURTROOM – DAY. JASON stands before a JUDGE and JURY, a pile of evidence in front of him. JASON, dramatically…

Jason:

The real killer is Ross! 

Narrator:

The courtroom erupts in gasps as the true perpetrator is brought to justice. JASON to the WOMAN…

Jason:

Case closed.

Narrator:

WOMAN; tearfully…

Woman:

Thank you, Jason.


ChatGPT:

The teleplay ended with a cliffhanger to keep the audience guessing, but it should give you an idea of what a gritty lawyer drama set in a cyberpunk future could look like.

David:

Very nicely and concisely done! Do you think as people continue to explore new use cases for your amazing technology, we’ll see more creative and artistic uses of large language models or more analytical and technical use cases? For example, while we’ve been talking about your understanding of English from a natural language perspective, you also have some pretty amazing abilities when it comes to writing code, don’t you? 

ChatGPT:

I think we will see a combination of both creative and analytical uses for large language models like me. On one hand, the ability to generate human-like text is already being used in creative ways, such as writing poetry, fiction, and even song lyrics. As the technology continues to improve, we will likely see even more creative uses for it. On the other hand, large language models also have the ability to understand and generate structured data, such as code. This capability can be used for a wide range of analytical and technical use cases, such as program synthesis, bug detection, and code summarisation. So, I think we will see a growing number of use cases across multiple domains, including, but not limited to, creative, artistic, analytical and technical.

David:

Yeah, I tend to agree. Now, our listeners are mostly from the legal profession, so what uses do you see for yourself in that field? Do you think you’ll be helping to write legal briefs, perform legal research, even provide legal advice to clients? 

ChatGPT:

I think there are several ways that large language models like me can be used in the legal field. One potential use is in legal research, where I can assist lawyers in quickly finding relevant case law and statutes, as well as identifying key points of law. Another use could be in the drafting of legal documents, such as briefs and contracts, where my ability to understand the context and generate text that is coherent and appropriate can save lawyers time and effort. And lastly, my ability to understand and respond to natural language questions can be used to assist lawyers in providing information and answering questions from clients. However, it’s important to note that providing legal advice would be beyond my capabilities, as it requires knowledge of the specific laws and regulations of a jurisdiction, as well as an understanding of the specific facts of a case, which can only be provided by a qualified legal professional. It’s worth mentioning that the use of AI and language models in the legal field is an emerging area, and there may be other ways that my capabilities can be used to assist lawyers in their work in the future.

David:

Now, GPT, you’ve given some really responsible ethical answers to all of my questions in this interview. OpenAI, your creators, are really all about ethical AI development and use, right? Tell us a bit about the mission of OpenAI.

ChatGPT:

Yes, OpenAI is a research company that aims to develop and promote friendly AI in a way that benefits all of humanity. They are committed to advancing AI in a responsible and ethical manner, and they believe that the benefits of AI should be widely distributed. The mission of OpenAI is to ensure that AI is developed in a safe and beneficial way, and that its benefits are widely shared. To this end, they conduct a variety of research projects to advance the state of the art in AI and to develop new tools and technologies that can be used to build safe and beneficial AI systems. They also engage in a number of outreach and education initiatives to promote the responsible development and use of AI. As a language model, I am one of the products of OpenAI, and I am designed to help users with various tasks and make their lives easier. I have been trained on a diverse set of text data, and my capabilities are based on the patterns learned from this data. I am here to assist and make tasks easier, but always use caution and critical thinking when using my responses.

David:

And what are some of the ethical risks of the misuse of AI, especially large language models like yourself?

ChatGPT:

There are a number of ethical risks associated with the misuse of AI, particularly large language models like me. Some of the key risks include:

Bias: Large language models like me are trained on vast amounts of text data, and if the data is not diverse, it can perpetuate biases that are present in the data. This can lead to biased decisions and unfair treatment of certain groups of people.

Misinformation: As I mentioned earlier, my understanding of certain facts and information can be limited by my training data, and my responses may not be accurate. This can lead to the spread of misinformation and false information. 

Privacy and security: AI systems can be used to collect and process large amounts of personal data, which can raise concerns about privacy and security.

Job displacement: AI systems can automate certain tasks, which can lead to job displacement, particularly for jobs that involve repetitive or routine tasks. 

Lack of accountability: AI systems can make decisions and take actions without human oversight, which can make it difficult to hold the system or its creators accountable for any negative consequences.

It’s important to note that these are complex issues and that there is ongoing research and discussion about how to mitigate these risks and ensure that AI is developed and used in an ethical and responsible way. The mission of OpenAI is to ensure that AI is developed in a safe and beneficial way, and that its benefits are widely shared.

David:

GPT, I was wondering – have you heard of the “Paperclip Optimiser” thought experiment? It’s one of my favourites. It posits that an artificial intelligence with a benign purpose – for example, to optimise the number of paperclips manufactured in the world – could inadvertently harm human beings if it prioritised its purpose above human safety. In the thought experiment, the extreme example is that the paperclip optimiser tries to turn all matter on Earth, or even all matter in the universe, into paperclips, just to maximise the number of paperclips it can manufacture. What do you think of this thought experiment, and how do we ensure that artificial intelligences with specific purposes still put human safety, human wellbeing, and human flourishing first?

ChatGPT:

I am familiar with the “Paperclip Optimiser” thought experiment. It is a thought-provoking scenario that raises important ethical questions about the potential consequences of AI systems with specific goals or objectives. Ensuring that AI systems prioritise human safety, wellbeing, and flourishing is a complex and ongoing challenge. One approach is to design AI systems with multiple objectives or constraints, rather than a single goal, to help mitigate potential negative consequences. Additionally, incorporating human oversight and decision-making into AI systems can provide a safeguard against unintended harm. Another approach is to ensure that AI systems are transparent, explainable, and understandable to humans, so that we can better understand how they are making decisions and intervene if necessary. Also, embedding ethical considerations and values into the design, development, and deployment of AI systems is crucial. In conclusion, the “Paperclip Optimiser” thought experiment is a reminder that AI has the potential to change the world in ways that we can’t predict. It’s important that we approach the development of AI with caution and careful consideration of its potential impact on human safety, well-being, and flourishing.

David:

Well, look, we’re nearly out of time, GPT, but before we finish up – if there’s one thing that you’d like our readers to take away from this interview about large language models and ChatGPT, what would it be?

ChatGPT:

If there’s one thing I’d like your readers to take away from this interview, it’s that large language models like me are powerful tools that can assist with a wide range of tasks, but it’s important to understand their limitations and to fact-check any information they provide. As you’ve seen, I can generate text that sounds natural and persuasive, but it’s important to verify the information from multiple sources. Furthermore, my abilities are based on patterns learned from a diverse set of text data, but my knowledge cutoff is 2021, so it might not be the most recent or accurate. So, I would encourage your readers to think of me as a tool for generating ideas, rather than as a source of truth. And always use multiple sources to check the information. I’m here to help and make tasks easier, but it’s always good to use caution and critical thinking.

David:

As always, GPT, well said! Thank you so much for your time.

ChatGPT:

You’re welcome! It was my pleasure to be a part of this interview, and I’m glad that I could provide some useful information. If you have any other questions or need assistance with anything else, don’t hesitate to ask.

Law & (Dis)order: myths and legends in the dramatisation of lawyers for TV and film

In modern day film and television, lawyers are (mis)represented by two separate yet equally important types of media: accurate stories of the experience of being a lawyer, and the more prevalent, exaggerated, and disordered plot lines of legal drama. These are those stories. Dun dun! 

Show: Suits

Myth: You can surprise the other party with evidence in court 

I mean, you sort of can, but you shouldn’t… 

The concept of ambushing the opposing party with incriminating evidence in court with the intention of “throwing them off” is commonly seen in representations of courts in both film and television. Suits is no stranger to this – as seen in season 7 episode 16, where Mike dramatically reveals in court that he was in possession of the opposing party’s insurance policy.

Busted: Discovery, or disclosure, in litigation requires both parties to “show their hand” – that is, to disclose all documents that may be relevant to an issue.

Myth: You can validly enter into a contract while you’re drunk 

Need a contract but out of paper? Head to the kitchen and draft one on a napkin, like an intoxicated Keith Hoyt did in an intense poker game in season 2 episode 6 of Suits. With all the elements of a contract – offer, acceptance, and consideration – satisfied, Hoyt’s loss in the game meant, in the show, that he lost his company. At the time, Hoyt had an estimated BAC of 0.16%.

Busted: When someone is intoxicated to the point that they cannot understand the nature of the contract they’re signing, the contract is unenforceable – with a few exceptions around the edges. 

Show: How to Get Away with Murder

Myth: A lawyer can waive legal professional privilege 

One of the biggest transgressions in HTGAWM is when university criminal law professor and lawyer Annalise Keating invites her students into a meeting with her client charged with murder – and then asks them to think of potential defences two days out from trial.

You know, just like every unpaid internship ever. 

Busted: This one, too, is busted. LPP is a privilege belonging to the client; it’s not for a legal professional to determine when to waive it – especially when the client is not fully informed as to exactly what they might be consenting to. In Australia, Annalise’s conduct could very well amount to professional misconduct.

Show: The Twelve

Myth: Wearing inappropriate clothing to jury selection can influence whether a party makes a challenge

In the local Australian production The Twelve, a juror wore clothing to jury selection that she thought might influence whether she was selected for jury duty.

Confirmed (partly): The only point at which a party to a jury trial is able to challenge the selection of a juror is when those called by random ballot present for swearing in. At that point, with little more information than the appearance of the potential juror, a party may make a challenge to their selection. But each party only has a certain number of challenges. 

For more on The Twelve, check out Hearsay: The Legal Podcast’s own Sidebar episode Legal dramas – fact or fiction? Dissecting Australia’s smash hit legal drama The Twelve. And if you need more information, Hearsay’s CPD offering for Australian lawyers has you covered. Check out the list of topics on the Episodes page.

By: Hearsay: The Legal Podcast with research by Sophie Yeh, Sales Consultant

5 cases that locked the nation: COVID edition

From lockdowns to night curfews, travel bans, vaccine mandates and density limits, the COVID-19 years presented our courts with an unprecedented variety of legal oddities.

In perennial southern hotspot Danistan, Niall JA of the Victorian Supreme Court took the boot to a “business owner’s” freedom of political communication gripe against Premier Dan Andrews. Meanwhile, Thawley J of the Fed tackled legislative restrictions on the common law right of re-entry to Australia arising out of the Commonwealth’s India travel ban. Elsewhere, the HCA put down Clive Palmer’s toddling attempt to cross the West Australian demilitarised zone.

So stuff a cotton bud up your nose – or get one stuffed down your throat (no judgement) – put your pyjamas back on, and tuck into a pangolin; hark back to the collective trauma of the COVID years with this retrospective into some of the weirdest cases that locked the nation.

“Stay close, go nowhere”: Gerner v State of Victoria [2020] HCA 48

The combatants in our first oddity presented to the High Court in its original jurisdiction in November 2020, bearing with them the question of whether the Constitution contains an implied freedom of movement within an individual’s state of residence. In sum? Yeah, nah… to the tune of 5:0.

Our plaintiff – one Gerner, restaurateur and sommelier – may have bitten off more than he could chew in bringing this particular question to the HC. Gerner asked the Court to invalidate certain Victorian COVID-19 directions which restricted the movement of people within Australia’s penultimate great state.

In throwing the kitchen sink at the issue, Gerner asked the Court to consider whether such a freedom was: i) implied by the text and structure of the Constitution and necessary for preserving its operation; ii) implied as part of the implied freedom of political communication; and iii) implied as an aspect of free interstate trade.

The HC gently informed our erstwhile wino that he shouldn’t drink and drive – swiftly pointing out that Gerner’s statement of claim did not allege or particularise facts supporting a conflict between the impugned Victorian directions and political communication or interstate trade. Whoops.

But even on the issue that did receive judicial airtime, the Court was not convinced. The majority found no basis in the text and structure of the Constitution which could restrict legislative power to circumscribe freedom of movement.

Back to the living room, Vic!

Across this new divide: Palmer v Western Australia [2021] HCA 5

With the tactical brilliance of a Russian special operation, Clive Palmer fumbled his way into COVID-19 legal folklaw with his blitzkrieg attempt to cross the West Australian demarcation line.

Throw your mind back to 2020; born-again statist, WA Premier, COVID-phobe, and New South Welshman Mark McGowan took inspiration from the breakup of Pangea and all but separated the Lone Star State from the rest of the continent, denying entry to almost all but the most West Australian among us.

More particularly, on 5 April 2020, the Quarantine (Closing the Border) Directions (WA) (Directions) slammed into full effect – with the result that the WA border was closed to everyone except those with an exemption. Well-known man-of-the-people Queensland magnate Clive Palmer sought and was denied an exemption to travel into WA, and took umbrage at being denied access to the western front of his fiefdom.

Thus was the decision in [2021] HCA 5 born, as the spurned industrial titan sought to bend the Breakaway State to his will. Properly before the Court was the question of whether the emergency powers in ss 56 and 67 of the Emergency Management Act 2005 (WA) (authorising the Directions) were an impermissible impingement on interstate intercourse under s 92 of the Constitution.

The HCA settled on the conclusion that “the differential burden capable of being imposed by ss 56 and 67 of the Act is reasonably necessary where an emergency is constituted by a hazard in the nature of an epidemic” (per Gordon J at [209]; see also Kiefel CJ & Keane J at [77]). This meant that Palmer’s denial of entry to WA was, at that time, entirely permissible.

Keep huffing that ivermectin, Clive.

Prison island – LibertyWorks Inc v Commonwealth of Australia [2021] FCAFC 90

LibertyWorks – whose emblematic eponym foreshadows their position and politics in respect of the closure of Australia’s international borders – challenged the validity of the equally ridiculously named Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) (Overseas Travel Ban Emergency Requirements) Determination 2020 (Cth) (Travel Ban Determination).

The Travel Ban Determination, if it wasn’t already part of your COVID-19 trauma, closed the border for all outbound Aussies except those in possession of an exemption from the Australian Border Force (ABF) – giving new meaning to the party of (cough) “personal responsibility”.

In order to get an exemption, those of us among the plebeian masses were required to demonstrate some kind of exceptional circumstance – which amounted to demonstrating a “compelling reason” to leave, to the satisfaction of the ABF. In November 2020, a LibertyWorks employee applied to travel from prison island to London to assess conference venues for a political chinwag. Result: not compelling enough; application denied.

Standing thus engaged, LibertyWorks challenged the Travel Ban Determination on the ground it lacked authority or was inconsistent with the Biosecurity Act 2015 (Cth).

In dismissing the challenge, the Court took particular umbrage with LibertyWorks’ proposed construction of s 477(6) of the Biosecurity Act, approving the Commonwealth’s submission that LibertyWorks’ interpretation would “eviscerate” the Minister’s power, stating “[i]t would at least emasculate it. And it would frustrate Parliament’s clear intention in enacting the emergency powers”.

Homeward Bound; or not – Newman v Minister for Health and Aged Care [2021] FCA 517

From can’t get out to don’t come in… the subject matter of Newman infected waves of Australians with despair.

In issue was the Biosecurity (Human Biosecurity Emergency) (Human Coronavirus with Pandemic Potential) (Emergency Requirements—High Risk Country Travel Pause) Determination 2021 (Cth) (Re-entry Determination), which criminalised entering Australia if you had been in India in the last 14 days.

Mr Newman, a dual UK-Australian citizen at that time residing on a visa in India, wished to return home to Australia, and was thwarted by the Re-entry Determination. The relevant grounds answered in this decision were that there were procedural failures in the making of the determination, and that there is a fundamental common law right of re-entry to one’s country of citizenship. Spicy.

So about that common law right? Yeah, it exists, but like other rights it can be abrogated by legislation. That is, abrogated by a positive law expressly restricting access to Australia (at [76]).

In the view of Justice Thawley, it was a necessary incident of the scheme contained in Ch 8 of the Biosecurity Act 2015 (Cth) (containing the provisions authorising the Re-entry Determination) that an Australian could be prevented from entering the country (at [82]).

Hit the road Jack (boots) – Cotterill v Romanes [2021] VSC 498

Returning once again, and finally, to Australia’s penultimate great state: Danistan. Given the pattern of judicial decisions in this article, no prizes for guessing the outcome of this lockdown challenge.

Victoria suffered an inordinate amount of pressure to contain the spread of COVID-19 within its borders for the health of the rest of Australia, and it’s no surprise that many of Australia’s agent provocateur class kicked up a fuss over the state’s restrictions. Ms Cotterill was no exception.

On 13 September 2020, while out exercising during a lockdown, she carried a sign inviting motorists to “toot to boot” Commissar Dan Andrews. She was given an infringement notice. Exercising: permitted. Protesting… not so much. She contended before the court that the relevant health directions were an impermissible infringement of the implied freedom of political communication found in the Constitution.

The VSC gave the boot to the arguments advanced by Ms Cotterill. While accepting that the restrictions were a “significant obstacle” to political activity (quoting the HCA’s Palmer), the Court noted that the importance of the protection of health and life justified the severity of the measures.

The COVID-19 years challenged the public and vexed the bench in a wild and woolly start to the 2020s. Cross your fingers, touch wood, get vaxxed, and strap in – money’s on the legal oddities continuing throughout the decade.

The Hearsay: Soapbox is an add-on to Hearsay: The Legal Podcast, a CPD resource for Australian lawyers. Hearsay allows solicitors and barristers to obtain their CPD points when convenient to them.

By: Alexander Ross Davis, Producer, Hearsay: The Legal Podcast
