
Musk v Altman: Contracts, Estoppel and (Maybe) the Future of Humankind

It’s not often you can say that the fate of humanity hangs on the outcome of a legal proceeding. It might sound absurd, but if you believe Elon Musk, the plaintiff in the latest lawsuit against OpenAI, then the outcome of his case might be just that important.

On 29 February this year, Elon Musk filed a complaint in the Superior Court of California against OpenAI founders Sam Altman and Greg Brockman, along with a list of OpenAI corporate entities, claiming that the defendants had breached a contract between the three men which required them to ‘develop [Artificial General Intelligence] for the benefit of humanity’ and make its research and technology open-source – freely available to the public.  

If you haven’t come across it before, Artificial General Intelligence is a term without a precise definition – one of the challenges facing any judge trying to decide this case – but it is commonly used to refer to an AI model with human-like (or superhuman) intelligence across a broad range of skills, effectively making it a general-purpose AI system – or an effective replacement for human intellectual labour.

In the alternative, Musk pleads “promissory estoppel” – that even if there is no contract between Musk, Altman and Brockman, the defendants induced him to make ‘millions of dollars in contributions to OpenAI’, to his detriment, in reliance on their promise that OpenAI would be a non-profit developing open-source technology for the good of humanity.

Musk’s third claim is a genuinely novel one that should be getting far more attention in the reporting on this story – it’s probably a world-first, in fact. Musk claims that the defendants breached the fiduciary duty they owe, under the charter of OpenAI, Inc. (the not-for-profit OpenAI entity), to humanity at large – a class that includes Musk himself. Fiduciary duties are usually confined to well-defined relationships of trust and responsibility – directors to companies, trustees to beneficiaries, lawyers to clients – but a fiduciary duty owed collectively to more than 7 billion people has probably never before been the subject of litigation.

Musk also claims unfair competition (a cause of action under California statute) and seeks an account of profits. He seeks orders for specific performance and injunctive relief, compelling OpenAI to make its AI research and technology – and, based on the contents of the complaint, presumably the architecture of the market-leading GPT-4 model – publicly available, and preventing the defendants from using OpenAI or its assets or research for the financial benefit of any person. Musk also seeks a declaration that GPT-4, and another rumoured new model yet to be publicly announced or confirmed, “Q*” (pronounced ‘Q-star’), constitute AGI.

The 46-page complaint is a fascinating read, whatever you think of the claim itself. There are startling – maybe disquieting – passages about the existential threat of AI to humanity: from Bill Joy’s warning that if AGI is discovered, then “the future doesn’t need us”, to an incredible anecdote about an investor who met with Demis Hassabis, the founder of DeepMind (an AI startup acquired by Google in 2014), and remarked that “the best thing [the investor] could have done for the human race was shoot Mr Hassabis then and there”. It also contains some entertaining turns of phrase that are regrettably rare in Australian legal filings, from describing OpenAI as “Microsoft’s ostensibly not-for-profit golden goose”, to claiming that since OpenAI is motivated to deny it has achieved AGI (which, once achieved, would not be included in the licence of OpenAI’s technology to Microsoft) to keep Microsoft happy, “AGI, like ‘Tomorrow’ in Annie, will always be a day away.”

OpenAI, of course, denies all these claims, insists that it has always been faithful to its mission of developing AGI for the good of humanity, and has publicly stated it will seek to have them all dismissed. Less than a week after the filing, on 5 March, OpenAI published a blog post detailing the founders’ conversations with Elon Musk, claiming that Musk always knew that, to raise enough money to compete with the likes of Google, OpenAI would have to attract investors with for-profit operations. OpenAI also says that Musk was told it would be irresponsible and dangerous to make all OpenAI’s advances open-source and freely available to the public.

As anyone who has done discovery for litigation knows, there’s an oddly voyeuristic delight to reading someone else’s emails, and the correspondence between Musk, Altman and Brockman attached to the OpenAI blog post is no exception; it’s a rare chance to read these giants of tech talking strategy with one another in words never intended for public consumption.

To top it all off, Musk is seeking a jury trial.  The big questions Musk’s claim poses, from existential threats to OpenAI’s duty to humanity, will be decided by 12 ordinary jurors – if it makes it all the way to trial, that is.

Is GPT-4 an Artificial General Intelligence? Is AGI a threat to humanity? Would OpenAI publicly releasing the details of its research help prevent, or accelerate, that threat? These questions, some of the greatest of our time, may well be decided in the unlikeliest of places – the Superior Court of California.
