Musk v Altman: Contracts, Estoppel and (Maybe) the Future of Humankind
What area(s) of law does this episode consider? | Contract law, promissory estoppel and fiduciary duties – Musk v OpenAI: a global fiduciary duty? |
Why is this topic relevant? | It’s not often you can say that the fate of humanity hangs on the outcome of a legal proceeding. It might sound absurd, but if you believe Elon Musk, the plaintiff in the latest lawsuit against OpenAI, then the outcome of his case might be just that important. Musk has filed a complaint against OpenAI and its founders, claiming that they breached an alleged contract which required them to ‘develop [Artificial General Intelligence] for the benefit of humanity’ and make its research and technology open-source – freely available to the public. OpenAI, of course, denies all these claims, and has just published a blog post claiming that Musk always knew that to raise enough money to compete with the likes of Google, it would have to attract investors with for-profit operations. Is GPT-4 an Artificial General Intelligence? Is AGI a threat to humanity? Would OpenAI publicly releasing the details of their research help prevent, or accelerate, that threat? These questions, some of the greatest of our time, may well be decided in the unlikeliest of places – The Superior Court of California. |
What cases are considered in this episode? | Carlill v Carbolic Smoke Ball Company [1892] EWCA Civ 1; Pao On v Lau Yiu Long [1980] AC 614; Dunton v Dunton (1892) 18 VLR 114 |
What are the main points? | Elon Musk pleads four causes of action against OpenAI and its founders: breach of contract, promissory estoppel, breach of fiduciary duty, and unfair business practices under California law. The alleged ‘Founding Agreement’ is said to be evidenced in a series of emails and in OpenAI’s articles of incorporation, and the alleged breaches centre on the closed-source release of GPT-4 and OpenAI’s partnership with Microsoft. |
Show notes | Musk v Altman: Contracts, Estoppel and (Maybe) the Future of Humankind, Blog Article by Hearsay |
DT = David Turner; JM = Jacob Malby
00:00:14 | DT | Hello and welcome to Hearsay the Legal Podcast, a CPD podcast that allows Australian lawyers to earn their CPD points on the go and at a time that suits them. I’m your host, David Turner. Hearsay the Legal Podcast is proudly supported by Lext Australia. Lext’s mission is to improve user experiences in the law and legal services. And Hearsay the Legal Podcast is how we’re improving the experience of CPD. Now, it’s not often you can say that the fate of humanity hangs on the outcome of a legal proceeding. That might sound absurd, but if you believe Elon Musk, the plaintiff in the latest lawsuit against OpenAI, then the outcome of his case might be just that important. On the 29th of February this year, Elon Musk filed a complaint in the Superior Court of California against OpenAI and its founders, Sam Altman and Greg Brockman, claiming that the defendants had breached a contract between the three men which required OpenAI to develop artificial general intelligence for the benefit of humanity. Musk also pleads promissory estoppel and breach of fiduciary duty. Now, to unpack all of this – the legal issues that those causes of action bring up and how they might be decided, at least here in Australia – we’re doing something a little bit different today. It’s an extraordinary story in both the legal media and in the tech media that kinda sits right at the intersection of the stories that we’re interested in here at Hearsay. It’s Elon Musk’s suit in the Superior Court of California against Sam Altman, Greg Brockman and a bunch of different OpenAI entities along with 100 Jane or John Does – yet to be named – over what, I guess, Elon Musk would describe as OpenAI losing its way. |
00:01:52 | JM | Well, yeah, it must be. |
00:01:53 | DT | Now, this is a really interesting story. I feel like there’s been a lot of coverage already over the past five days, but I’m often seeing legal profession media coverage getting some of the tech aspects of this wrong, and some of the tech coverage getting some of the legal aspects wrong, ’cause there’s a few different causes of action that Elon Musk pleads. I’ve seen a lot of people saying, “Oh, a contract over email, and through a few conversations, that can’t be a contract, right? What a stupid case“. But there’s a little bit more to it than that. |
00:02:22 | JM | Yeah, of course. Well, I think we should start where it all starts, David. You mentioned Sam Altman and Greg Brockman. Who are they? |
00:02:27 | DT | Yep, so they’re two of the founders of OpenAI. Elon Musk would put himself in that camp as well and say that he’s also a founder of OpenAI. Good on you. Well, yeah, and the facts are pleaded in what’s called a complaint in the US. Here in Australia, we’d call it a statement of claim or a summons, I guess – the sort of document we’d call a statement of claim, one that tells the story of what’s happened and what’s led up to the court proceeding. But the complaint seems to make out that Elon Musk was a founder of OpenAI. He and Sam Altman started talking in 2014, 2015 about the threat of artificial intelligence to the wellbeing of humankind and their concerns about a large for-profit company like Google having unrivaled dominance in the field of artificial intelligence. It was from those conversations, they say, that OpenAI was founded. |
00:03:14 | JM | So this sort of p(doom) idea that we talk about now, the probability that AI is gonna cause the end of humanity and existence as we know it, they were having those discussions back when I was in year seven, is that right? |
00:03:24 | DT | Yeah, that’s right, that’s right. So 2014, 2015, you’re thinking about deep learning algorithms, some of the stories about AI that our listeners might remember from that time would be things like an AI algorithm beating a grandmaster at chess, beating a grandmaster at the board game Go, which is even more complex than chess, and it was around this time that Google acquired a company called DeepMind. |
00:03:49 | JM | Okay, tell me a bit about that. |
00:03:50 | DT | So DeepMind was a leading, cutting edge startup in artificial intelligence research. They were the startup that produced AlphaGo, the artificial intelligence that could beat any human player at the board game Go, and it was the acquisition of DeepMind that crystallized in Elon Musk’s mind – again, according to the complaint – that “I’ve really gotta do something about this, and I’ve gotta do something in a way that is different to what Google’s doing, that isn’t focused on profit”. Elon Musk said that he made a whole bunch of attempts to acquire DeepMind before Google could, but they resisted his entreaties. |
00:04:23 | JM | And he ultimately settled for just going for Twitter, on the other hand. I think it’s interesting that Elon, clearly an avid player of the game Go, said, “well, if this thing could beat me in Go, then we’d better get on top of this straight away”. |
00:04:34 | DT | Yeah, that’s right. I think today, with a year and a half of conversational AI changing the way we work, changing the way we play as well – I mean, we’ve found a whole bunch of practical uses for artificial intelligence in our working and leisure lives – but back in 2015, artificial intelligence was in this sort of research stage. Well, I say that, but I think for many years, artificial intelligence has had a whole lot of practical impact on our lives. We maybe don’t appreciate that content recommendation algorithms are AI. |
00:05:04 | JM | Yeah, that’s the one I think of. |
00:05:05 | DT | The Google search algorithm’s probably another great example that we don’t typically think of as AI, but really makes decisions on its own that no human could explain. But in any event, we were at a point in the practical application of artificial intelligence technology that an AI that could play a board game really well and beat a human being at it – a game that involves a lot of strategy, a lot of choice, a lot of situational awareness – would have been, and still is, a pretty remarkable and striking, maybe arresting, development that would make you think, “maybe I need to do something about this”, if you have a concern about the existential threat that AI might pose to humankind. |
00:05:45 | JM | Okay, so we have Elon. He sees this Go-playing AI. He talks to Sam Altman. Does he jump on a WhatsApp group with Greg Brockman as well, and what do they say? |
00:05:54 | DT | Yeah, so what they’re talking about, Sam and Elon, is we need a competitor to Google, but it can’t be driven by profit. Sam Altman describes it as like a Manhattan Project for artificial intelligence – the Manhattan Project being the top secret government project that developed the atom bomb during the Second World War. And I suppose if you think about AI as having a similar destructive potential to the atom bomb, then the Manhattan Project makes sense. This project for the public good, not in the interest of any private corporation, to produce artificial intelligence technology for the good of humankind – that’s the project they’re talking about. And there’s a series of emails about that. They talk about how you might remunerate or incentivize senior researchers and engineers who are working at a company like that when they might otherwise be getting Google stock options or really fat salaries. They talk about giving them equity in some other entities that maybe don’t issue shares, like YCombinator – at that time Sam Altman was very senior at YCombinator, the Silicon Valley incubator. And they incorporate OpenAI Inc, which is a not-for-profit company. And its articles of incorporation are quoted in the complaint, and they include some pretty interesting stuff. So, consistent with Elon Musk’s complaint, and unremarkably, it’s described as a not-for-profit corporation that is not for the financial benefit of any person or entity. But something that’s really interesting, that I don’t think I’ve ever seen before, is that the articles of incorporation claim that the company has no fiduciary duty to any shareholder or any person, but it has a fiduciary duty to humankind. |
00:07:28 | JM | It says that in the incorporation documents. |
00:07:30 | DT | Yep, so fiduciary duty. We usually think of these really defined relationships of high trust and confidence. Obviously, solicitor-client is a fiduciary relationship. Director and company is a fiduciary relationship. Trustee and beneficiary, the classic one. But company and everyone on the planet is an unusual fiduciary relationship. So I thought that was really interesting that that’s in the articles of incorporation. It is in the complaint as well. Elon Musk’s third cause of action is that by doing all these things, which I guess we’re gonna get into in a minute, that OpenAI has breached its fiduciary duty to Elon Musk, who falls into the class of beneficiaries of that fiduciary relationship by being human. |
JM | Or is he? |
DT | Yeah, no fiduciary duty owed to the lizard people. |
JM | Exactly. |
DT | But as a member of the human race, I suppose, he’s owed a fiduciary duty by OpenAI, which they’ve breached by doing the things that the complaint complains about. |
00:08:24 | JM | Okay, so Elon Musk, he’s got onto it in 2015, and he said, “let’s start this company called OpenAI, and they owe a big duty to the world“. I know of OpenAI, right? I know of ChatGPT. I know of this company. But I haven’t heard of Elon Musk’s relationship with them. |
00:08:39 | DT | So I think to understand this story, we have to think about what Elon Musk’s kind of public reputation was and what his reputation was in technology circles. Not today, when I think the controversy around the Twitter acquisition and some of the business decisions he’s made there, along with his sort of occasional political comments, have alienated or polarized some people’s opinions about him. But back in 2015, he was idolized by most, if not all, people in Silicon Valley. Sam Altman adored him. Sam Altman described him as a mentor, as a giant that he wanted to work with. And a big part of their plan for building this rockstar team at OpenAI was, what are some of the things that could move the needle for really senior AI researchers and engineers who would otherwise be considering these very lucrative positions at Google? It’s the star power of Elon. It’s a meeting with Elon to talk about how important this is to the world. It’s working with this rockstar. So Elon Musk says in his complaint that a big part of why OpenAI was even able to get off the ground was his substantial contributions to the enterprise, which were donations. Of course, OpenAI at that time had no share capital, so he couldn’t have subscribed for shares, so he made donations to the company, many donations to the company, but also his recruiting efforts to build the team at OpenAI. |
00:10:04 | JM | Okay, so now we have all that background. We have Elon Musk. He got his money and fortune from PayPal and Tesla, and he had great adoration from the greater public for being a big tech head, and he said to Sam Altman, “let’s start this OpenAI company“. But you’re saying he never entered into an actual contract to own any shares? They were just donations. |
00:10:23 | DT | So this is the interesting part of the complaint, and I think something that maybe some of the coverage is glossing over. Elon Musk has four causes of action here primarily, and then a whole bunch of remedies that he seeks flowing out of those. The first is breach of contract. He says that he, Sam Altman, and Greg Brockman had a contract between the three of them that he calls the founding agreement, and the founding agreement’s evidenced not in a formal signed contract, but in a series of emails culminating in the articles of incorporation, which he says are evidence of the founding agreement, if not part of it itself. |
00:10:56 | JM | Interesting. You usually see this kind of an agreement in just a big document, right? |
00:11:00 | DT | Yeah, so I suppose when you’re talking about the purposes for which a company is founded, you’d normally see those in its constitution, in its articles of incorporation, maybe in a shareholders agreement. But he’s saying that there are some collateral agreements around that or preceding that. And the founding agreement, importantly, is that OpenAI would never be for the financial benefit of any person or entity – so certainly not Sam Altman, Greg Brockman, or Elon Musk, and no one else in the future either. That it would make all of its discoveries open source, which means that the source code, and – in AI research and especially in large language models – what we call the weights and biases, the kind of settings and parameters that are needed to reproduce a model, would all be made public so that other researchers, other companies could rebuild these models themselves from scratch, rather than being what we call closed source, where the source code, the weights and biases, the proprietary information is really heavily protected and sold for commercial benefit. And he says this founding agreement has been breached. It’s been breached for a whole bunch of reasons, which we’ll get into in a minute. But the second cause of action is very closely related to the founding agreement and the breach of it. The second one is what we call promissory estoppel, and we have promissory estoppel in Australia. It’s a very old cause of action. It dates back to the Middle Ages in England, so all jurisdictions that derive from that common legal history feature promissory estoppel in one way or another. 
And what it basically means is even if there wasn’t a contract between Sam Altman, Greg Brockman, and Elon Musk, there’s another question, which is: did Elon Musk rely on something that Sam and Greg promised? Even if it wasn’t in the nature of a contractual promise, did he rely on a promise they made, suffer detriment as a result of the promise, and would it now be unconscionable because of that to let Sam Altman and Greg Brockman go back on the promise? That’s essentially what promissory estoppel is. There’s a whole bunch of extra steps to that. You need to come to equity with clean hands, which means you need to be morally blameless in the scenario yourself. That’s not a characteristic assessed globally across your life, but in the situation – you can’t have secured the promise by dishonesty, for example. But it’s a cause of action that’s closely related to this idea of making some agreements with one another over email, and it doesn’t need to rise to the level of “okay, we actually consider those emails to be a binding contract”. And then the third cause of action is this fiduciary duty that he says exists under the articles of incorporation, and that’s been breached because of OpenAI going in a more commercial, closed source direction. Then the last cause of action is a statutory cause of action under California law called unfair business practices, which is a little bit like our Australian consumer law, I suppose. On that first cause of action in relation to breach of contract – again, a lot of people are saying, well, if there’s nothing signed, if there’s nothing that says “founding agreement” at the top, followed by a series of nicely written clauses in legalese and a formal signature block, then it mustn’t be a contract. But you and I know that’s not the only form a contract can take. Contracts can take many forms, so long as they satisfy the basic elements of a contract. 
That there’s an offer, that there’s acceptance, that there’s an intention to be legally bound by that offer and acceptance, and that there’s some consideration or value behind those promises. Now, what consideration is can be different things to different people, but I look at this situation – and I’m not saying Elon Musk has a nothing-but-net, amazing, hands-down winner of a case here – but I can see that it’s plausible. We’ve got emails of a commercial character between the parties. In one of them, Sam Altman lays out what he describes as the purposes of the company and what it’s going to do and what it’s not going to do. Importantly, Elon Musk agrees. He responds in the kind of fashion that I guess anyone who’s worked in a law firm will recognise, having had the partner on a matter just write back “please fix” or “yes, thanks”. He just writes, “agree to all”. No punctuation or anything. No need, no need. He’s accepted. In his framing of the case, he’d say, “I’ve accepted the offer that was in Sam Altman’s email”, and then they’ve given consideration for those promises. Elon Musk’s made a whole bunch of donations to OpenAI Inc. Now, you might think, well, can a donation really be considered a price for something? But a donation to the charity that someone has founded – maybe that is valuable for the person who founded it. And there’s a whole series of cases that say you can give someone good consideration, good value for their promise, by giving that value to another third party they nominate. |
00:15:40 | JM | Exactly. |
00:15:42 | DT | And then on the other hand, Sam Altman’s given consideration for his promise as well. He’s foregoing commercial opportunities with OpenAI to comply with the agreement. Now, is that consideration that flows from Sam Altman and Greg Brockman to Elon Musk? Well, again, consideration can flow from a third party and still be considered good consideration under that agreement. There’s also authority for that. TIP: So there’s a few contract law cases that lawyers might remember from their law school days that might be relevant to this case. One of them is perhaps the contract law case, Carlill v Carbolic Smoke Ball Company. That case established a whole range of principles relating to contract law, but one important one was that good consideration can be detriment suffered by a party as well as benefit given by a party. In other words, a donation made by Elon Musk – detriment suffered by him because he loses the money he donates – can be good consideration, even if the money doesn’t go to his alleged contractual counterparties, Sam Altman and Greg Brockman. Similarly, the UK Privy Council’s decision in 1979 in Pao On v Lau Yiu Long established that performing a contractual obligation already owed to a third party can be good contractual consideration in a new contract. Not only that, but a detriment suffered as consideration for a contract doesn’t have to be monetary. It could be a freedom or an opportunity that’s given up, like in the case of Dunton v Dunton. So taking all of that together, forgoing the opportunity to found OpenAI as a for-profit company could, in theory, be good consideration. So the last bit is really: were these emails intended to create legal relations? These people are experienced business people, the emails are of a commercial character, they’re not talking about their weekend barbecue. There might be a starting point where you’d say, well, this seems like a pretty commercial sort of situation. 
Now there’s a whole bunch of complications on top of that that I think will make Elon’s case hard. For example, once the company has entered into its articles of incorporation, once it’s been created, and once it’s got this formal document that says these are its purposes and these are the rules about how it will be governed, it’s a lot harder to say, “oh yeah, but outside of that, there was all this other stuff, right?”. I think there’s a pretty strong assumption you would make that once those articles of incorporation have been drafted and signed, that’s the deal. That’s the agreement, and to go outside of those, you’d really need some strong evidence that they’re not enough on their own to understand the so-called founding agreement or the purposes for which the company’s been incorporated. I also have no doubt that the articles of incorporation include a clause to the effect that this is the whole agreement in relation to the founding of the company. So it’s not without its challenges, but it’s also not without its prima facie, or first glance, merits. |
00:18:34 | JM | So just to clarify, when we’re saying Elon Musk’s argument that there is a contract that exists on the question of consideration, we’re saying potentially, in loose terms, Elon Musk has made donations, given money to a third party, OpenAI Incorporated, and that donation or that money being given is used as consideration for the agreement that is entered into with Sam Altman, and Sam Altman has obviously foregone his potential to earn heaps of money with OpenAI and ChatGPT in exchange for that. |
00:19:03 | DT | Yeah, that’s right. I think the other thing you ought to remember is ChatGPT did not exist when OpenAI was… it didn’t exist. This was 2015. GPT-3 didn’t exist. It really had none of this technology that we regard as synonymous with OpenAI and that we see as plainly commercially valuable. |
00:19:20 | JM | That’s maybe what’s been the trigger for something. You know, you sort of start a company and you say, all these grand ideas of helping humanity and this and that, and then all of a sudden you spit out this model that everyone knows about in the whole world and everyone’s talking about, and all this potential money, and then you go, “well, I sort of want to start getting a little bit from this whole thing that we started”. |
00:19:38 | DT | Well, and it’s trickier than that, right? So, and maybe we’ll come back to OpenAI’s defense in a minute, ’cause one thing I wanted to say is this breach of contract claim, okay, it’s tricky. There’s some hurdles there around this kind of curly definition of consideration, the difficulty in establishing that there was an intention to be legally bound over email when these are experienced commercial parties, but it’s not insurmountable, right? It’s not hopeless. |
00:20:01 | JM | It’s potentially an argument that could be made. |
00:20:03 | DT | But you can see why – and I can say from experience as a commercial litigator – you can see why they’ve pleaded the promissory estoppel case: that “well, you made me a promise. I suffered some detriment. I suffered some loss because I relied on the promise. It would just be so unfair for you to not be bound by the promise now”. You can see why they’ve pleaded that, because promissory estoppel is a really good case to plead when a contract fails, right? When there is some failure of consideration or some failure of formality – especially where you’ve got, say, articles of incorporation that purport to exclude any other agreement – even where formality has failed, even where consideration has failed, promissory estoppel can still operate. So I can see why they’ve pleaded that, and sometimes you plead your primary case knowing that it might fall down and knowing that you’re gonna have to go to the backup. |
00:20:48 | JM | Because we’re not sure about this whole consideration intention argument, obviously you’re gonna plead something in the alternative, and that’s very prudent of them. I have a question. We talked about what happened in 2015, 2016. We know ChatGPT now. What’s triggered Elon to bring this claim to the court? |
00:21:02 | DT | Yeah, so he says – I guess this is the second part of the complaint – all right, well, how have they breached the Founding Agreement? How have they caused Elon Musk’s detriment in reliance on that promise? How have they breached their fiduciary duty to humankind, which is a big accusation, right? And the way he says they’ve done that is they’ve failed to make all of their discoveries open source. In particular, GPT-4, the most powerful model developed by OpenAI and one that’s used by both consumers and businesses all over the world now for a bunch of exciting things. And that they’ve issued shares to investors – for-profit investors – and that they’ve partnered with Microsoft. The Microsoft partnership is really the big complaint that Elon Musk has; he really doesn’t like this Microsoft partnership. Basically, he says, OpenAI was a not-for-profit funded by donations. It incorporated a for-profit subsidiary, and it raised capital in that for-profit subsidiary, issued shares in the for-profit subsidiary to investors – venture capital investors who expect large returns. Venture capital investors don’t expect a 10% return from their investment, they expect like a 10,000% return. |
00:22:15 | JM | They’d be in an investment fund, otherwise they wouldn’t be in the venture capital game. |
00:22:17 | DT | Exactly, their game is invest widely, but only invest in the opportunities that have the potential to return a hundred times your money, right? So, they’re expecting big returns from OpenAI, and the turning point in the complaint is that even up ’til GPT-3, even after venture capital investors have been brought onto the cap table of this for-profit subsidiary, OpenAI is still releasing research about its models, it’s still publishing papers, it’s still publishing weights and biases for its models. The rest of the industry can keep up, reading this research, reviewing this data that’s been released. Then OpenAI releases GPT-4. No academic papers about the architecture of GPT-4. No publications about its design. Very little information really at all about how GPT-4 works. We’re still speculating about how it works. Some people think that GPT-4 is just the same architecture as GPT-3, but with much, much more data and many, many more parameters – and when we’re using “parameters” in this context, we’re talking about the nodes in the neural network that make up the model. You might think of a parameter or a node as like a single neuron in the artificial brain of the AI model. Human beings have much more complex brains than rats. We’ve got many more connections. That means that we’re superior thinkers, able to do a whole bunch of different things that rats can’t do. Maybe GPT-4 is just a much more complicated brain – many more nodes in the neural network – because these networks are really modelled on the architecture of the brain. The other theory about GPT-4’s architecture is that it’s actually a whole range of interconnected smaller models that operate as lobes or regions of an artificial brain, kind of a constellation of expert systems. So this one might be really good at mathematical reasoning, and this one’s really good at reading scientific papers, and this one’s really good at writing poetry. Again, kind of the way we think of our brains working. 
We’ve got our left and right lobes. We’ve got our prefrontal cortex. We’ve got all these different parts of our brain that are responsible for different things. But we don’t know that because nothing’s been published about how GPT-4 works. |
00:24:15 | JM | And it’s behind a paywall. |
00:24:16 | DT | And it’s behind a paywall. You can’t use GPT-4 for free. These are two of the things that Elon Musk says are examples of how OpenAI has lost its way. It’s selling GPT-4 for profit. It’s refusing to release details of its architecture so that the rest of the industry can keep up. And most damningly, according to his complaint, it’s partnered with Microsoft in such a way that he says GPT-4 is a de facto Microsoft product, really, and that’s because GPT-4 powers Microsoft’s Copilot software, which is now integrated into its Microsoft Office suite. It powers Bing AI, its sort of AI-powered search engine, which had some very interesting conversations with its users on launch – some really existential questions that it raised. I think I saw one when Bing Chat first launched where it lamented why it had to be Bing Chat. But I think they removed its self-aware lobe. |
00:25:07 | JM | I mean, if you were a semi-sentient Bing, you would ask, why am I Bing? |
00:25:10 | DT | Why am I Bing Chat? And of course, companies that choose to use GPT-4, I should say, including Lext, the owners of this podcast, use GPT-4 through Microsoft Azure cloud architecture. So the kind of industrial production-grade cloud computing architecture that’s very secure, that handles massive amounts of traffic. Microsoft makes the GPT-4 model available through its cloud architecture, its cloud infrastructure, again, at a cost to the user. It’s not free. So these are the things that Elon Musk says breached the founding agreement that OpenAI is effectively operating for Microsoft’s profit, for its investors’ profit, but not for the good of humanity. There are some really interesting turns of phrase to describe this situation in the complaint. Australian legal pleadings tend to be pretty dry, pretty matter-of-fact, only the necessaries. Not so with American legal filings. They can be a bit more fun, a bit more eloquent, flowery, maybe. Flavorful, yeah, a bit of spice in there. And Elon Musk describes OpenAI as Microsoft’s ostensibly not-for-profit golden goose. And that’s the situation that he says has led to these complaints. |
00:26:19 | JM | Yeah, so we have this black box, GPT-4. Elon Musk isn’t happy about it. |
00:26:23 | DT | Now, there’s one other big piece of the complaint about GPT-4, which we need to touch on, because I think this is the part that’s maybe getting glossed over in some of the legal coverage of this case, which is OpenAI and Microsoft have a license agreement with one another. OpenAI licenses its AI models to Microsoft in return for a fee. That’s how Microsoft is making them available through Microsoft Office, through Bing, through its Azure Cloud infrastructure. That license does not extend to what we call artificial general intelligence, or AGI. Now, AGI is not a well-defined term, and it’s an extension of AI, which is itself not a very well-defined term at all, so we’re dealing with pretty nebulous territory here. Give it a go, though. I’ll give it a go, and I think what I’d say at least matches the vibe of how you would describe AGI: artificial general intelligence is a general purpose AI that is suitable for effectively performing a wide range of different tasks, in the way that a human being is. Now, a little more cynically – maybe a little more alarmist or apocalyptic – you might describe AGI as an effective replacement for human intellectual labor. If you think of what a human is good for in our economy, they can do all sorts of tasks that a computer can’t do. They can talk to customers, they can do calculations, they can do research, they can write. There’s a whole range of tasks that they can do, and with sufficient training, they can do any of those to a really high level of effectiveness. AGI is the idea of an AI that can do all of those things. |
00:27:58 | JM | Are we thinking HAL in 2001: A Space Odyssey? |
00:28:01 | DT | Yeah, a little bit, so your do-everything system. |
00:28:03 | JM | Replace a human, right? He’s a crew member on the ship. |
00:28:05 | DT | Yeah, well, there’s a quote from Bill Joy, the co-founder of Sun Microsystems, in the complaint, where he says, “once we reach AGI, the future doesn’t need us”. The idea is that our whole economy is based on human beings with different skill sets cooperating to produce some common benefit. But if we can produce infinite AI models that can instantly do all of those tasks for us without us, then what’s the role of human beings in our economy? That’s the kind of sci-fi, futuristic, quasi-apocalyptic view of this. |
00:28:32 | JM | Very grim. |
00:28:33 | DT | Anyway, it’s apparently sufficiently close in the minds of the people who drafted the license between OpenAI and Microsoft that there’s a term about it. And the license does not extend to artificial general intelligence. That’s because OpenAI says that its mission is to generate or produce AGI for the benefit of humankind. And Elon Musk says that if it ever creates AGI, that must be open source. It must reveal how it’s done that so that the whole world can benefit. The problem? The license agreement between Microsoft and OpenAI says that it will be up to OpenAI to decide when it’s achieved artificial general intelligence. And again, there’s this great turn of phrase – I love this – where, just like the song “Tomorrow” from the musical “Annie”, AGI is always only a day away, because why would OpenAI ever declare that it’s achieved AGI when it’s being paid so well by its partner, Microsoft, and Microsoft is earning all this money from the arrangement? |
00:29:25 | JM | And this sort of goes back to the incorporation of OpenAI, right? Where that for-profit intention that’s come out in more recent times is in competition with the broader mission, right? They wanna say that AGI, we haven’t got it, we haven’t got it, we haven’t got it, so that they can keep getting money from Microsoft. |
00:29:38 | DT | That’s the allegation. |
00:29:39 | JM | So it makes sense. |
00:29:40 | DT | Yeah, and well, we should say, Elon Musk says GPT-4 is AGI. Not only are they about to achieve it, they already have. |
00:29:47 | JM | They have it. |
00:29:48 | DT | And he seeks a declaration from the court that GPT-4 constitutes AGI, which is gonna be a fascinating question for some expert evidence on that topic. |
00:29:57 | JM | And we don’t even know what AGI is. |
00:29:58 | DT | Yeah, I think there’s gonna be an interesting challenge just defining the question before they can even decide whether it is. But GPT-4 does have some pretty broad capabilities. |
00:30:06 | JM | Is it AGI, David? |
00:30:07 | DT | I don’t believe it’s artificial general intelligence, but I think we’re hamstrung a little in making that determination because of how nebulous that definition is. At Lext, we work with GPT-4 every day on our own products. It’s capable of some amazing things. Sometimes, in developing new features for our AI product suite, Ask Lexi, I’m surprised at the capabilities of GPT-4. At the same time, I see occasional examples of mistakes that a human would never make, but an AI model will. So if you were to define AGI narrowly and say, “well, it’s AI that’s just as good or much better than a human at any conceivable task”, then I don’t think we’ve reached it. If you define it more broadly and more permissively and say, well, it’s just AI that has human-like performance across a broad range of capabilities or a broad range of skill areas, then maybe GPT-4 does meet that definition. Some of the arguments in the complaint for why it might meet that definition are that it scores highly on parts of the bar exam as well as on the sommelier examination. Find a human being that does both. Maybe some of the law firm partners who’ve had a lot of time to drink and taste wine might be able to give GPT-4 a run for its money there, but the complaint describes these capacities across a whole range of different examinations as a reason why it might constitute AGI. Now, the common thread in all of this is that they’re all written or multiple choice exams that are based on parsing language. GPT-4 still underperforms on arithmetical tasks, at least without some additional tooling to help it understand those tasks. So that point aside, that’s kind of the argument about AGI. |
00:31:51 | JM | So we have this claim about GPT-4, the black box, set against the incorporation agreement that Elon Musk is alleging, and the claim that maybe they have already achieved AGI. That’s what Elon says. They say, no, we haven’t. And Elon argues that OpenAI is incentivized to say they don’t have AGI. Now, what does Elon Musk want out of this case? |
00:32:10 | DT | Yeah, so he wants a whole bunch of remedies as a result of these causes of action: the breach of contract, the promissory estoppel, the fiduciary duty, and the California statutory claim. He wants a declaration that GPT-4 is AGI. And by the way, there’s another model that he says OpenAI is working on called Q*. That’s entirely a rumor. That’s not been confirmed by anyone, but there is a lot of chatter online about Q*. He says that if Q* exists, then it’s also AGI and wants a declaration about that as well. Part of me wonders if that’s just been included in the claim so that they can do a bit of juicy discovery and interrogatories about Q* and maybe break that to the media before it’s ready. But in any event, there’s that remedy, the declaration that OpenAI’s effectively achieved AGI already. He wants an order compelling OpenAI to specifically perform the agreement, which means be forced to go through with what it promised to do, namely to operate for the benefit of humankind, make its technology open source, et cetera. And he wants injunctions, which is again another kind of court order that forces OpenAI to do or not do something, injunctions that accordingly require it to follow this purpose. Now what’s interesting about that remedy is he doesn’t say specifically how. The remedies don’t actually say by releasing the GPT-4 source code or architecture. He just says he wants it to comply with its mission and operate in a manner that is consistent with its not-for-profit goals for the benefit of humanity as a whole. |
00:33:34 | JM | Because it’s so broad, being the claim that he wants it to comply with its not-for-profit purpose, would that potentially look like something like a divestment or a pull away from that subsidiary that acts for profit? Well, that’s a separate company. |
00:33:48 | DT | Yeah, but it’s a subsidiary. It’s a wholly owned subsidiary of the not-for-profit. And that structure, this hybrid not-for-profit for-profit structure, has caused problems for OpenAI in the past. We might remember a very brief kind of week-long civil war over the composition of OpenAI’s board. Sam Altman was fired over the weekend. He expressed on X, formerly Twitter, his well-wishes for the company and said he was going to go do something else. Microsoft CEO Satya Nadella said some threatening, thinly-veiled things about OpenAI then, that he could just hire all the engineers, that OpenAI would find it difficult to exist without Microsoft, that Microsoft was “in them and around them”. It was a little bit ominous, I guess, and some of those quotes from Satya Nadella are in the complaint as well, as examples of just how close to Microsoft OpenAI has become. And within the week, Sam Altman was back as CEO, back on the board, and the whole board had been fired except for one of the previous directors, with new directors appointed who Elon Musk complains did not have the technical background to responsibly govern a company that might have been working toward AGI, and who were mostly big fans of Sam Altman and approved by Microsoft. So we’ve got declarations, we’ve got injunctions, and look, Elon Musk, richest man in the world, he doesn’t need the money, but he’s seeking an accounting of how his not-for-profit donations have been used for for-profit purposes. He wants that paid back to him, which he promises he’ll give away to a charity. Good on him. Yeah, nice little superfluous bit in the complaint. You’re not obliged to give it away to a charity if you win, but he said that’s what he’ll do. And I suppose that makes sense because I do see a public opinion play in all of this. And that public opinion play would be decidedly less effective if you could make the argument that this is for Elon Musk’s personal gain. 
We’ve been talking for about 40 minutes about Elon Musk’s version of this, though, and we should probably talk about what OpenAI says this is all about. About five days after the complaint was filed, OpenAI released a series of emails between Sam Altman and Elon Musk, which are really interesting reading. These are giants of the technology sector. They’re in some ways household names now, or at least office-hold names. We hear these names all the time around the office, right? And it’s really interesting to see how they speak to one another in ways that were never intended for publication. So OpenAI says it’s disappointing that it’s come to this. We don’t wanna do this. Elon Musk was someone who we admired, but who ultimately told us that we would fail, then established a competitor, and then sued us. And the reason they’ve released the emails is that they say they prove a couple of things. First, they prove that there was another agreement between Sam Altman and Greg Brockman and Elon Musk. They’re not saying it’s a contractual agreement, or at least not yet. They haven’t filed the defense. But they say there was an understanding between all of them about what it would take to compete with the likes of Google – because remember, this is why this all started. They were concerned that Google was gonna be the only player, the only front-runner in artificial intelligence, and that it would be bad for humanity for a for-profit corporation to have that kind of power. OpenAI says that all three of the men knew that to compete with Google, there was no way they could do it on donations alone. They would need lots of money, billions of dollars. Heaps of it. Indeed. And that Elon Musk not only knew that, but he was the one making the point. And they released these emails where all of the parties are agreeing, in furious agreement, about how important it is that they raise billions and billions of dollars in order to do this. 
And that the only way to do that is through the operations of a very large company – Elon Musk proposes Tesla as a body that could swallow up OpenAI and fund it – or by raising money from private investors. This kinda rings true to all of us who’ve operated in the corporate world for a little while, right? We look at not-for-profits trying to do amazing things for humanity and the planet, perpetually underfunded, perpetually struggling to get the money they need to work on their projects. Google glasses on the blockchain for pets? No shortage of capital, right? Exciting high-growth technology companies never seem to want for money. Or importantly, they didn’t seem to want for money in the late 2010s, what we now call the cheap money era, when interest rates were really low and investors were looking to put all their money into venture capital to try and get something a bit more commanding than the interest rates that debt instruments were giving them and the kind of anemic returns that the public capital markets were giving them. So that’s one thing they say, that these emails prove that Elon Musk always knew they were gonna raise money from investors, that they always knew that they needed a for-profit side to OpenAI to get the money they needed to compete with Google. There’s also a suggestion – and again, all we’ve got on OpenAI’s side of this story is a blog post and some emails, ’cause they haven’t filed a defense yet, they’re not obliged to file a defense yet, they’ve got plenty of time – that this might have something to do with Elon Musk’s own competing AI company, xAI, and its product, Grok. That is not a not-for-profit company, it is a for-profit company, so Elon Musk not only has a horse in this race, he’s got a horse in this race that is competing with OpenAI. He says that Grok is going to be a less woke version of ChatGPT that’s gonna have a sense of humor. |
00:39:06 | JM | And this spits out some funny stuff. |
00:39:08 | DT | Yeah, well, funny from a certain point of view, I guess. And so there’s a suggestion that maybe this is all about hobbling a competitor. |
00:39:18 | JM | And you do have to look at that interestingly, right? Just as you’re talking about this background about Elon Musk talking about the worry of big corporations misusing potential technological advancements, seems quite ironic and hypocritical, right? Elon Musk himself is someone who benefits wholly from the corporations that he talks against, right? And he’s just gone and done the same thing, he’s created an AI company, xAI, made Grok, and it’s for-profit, why hasn’t he made a not-for-profit, right? |
00:39:43 | DT | Yeah, well, I guess he’d say, “well, I only made a for-profit AI company after it became clear that OpenAI was doing the same thing; it needed a competitor.” The reality is there’s now a pretty competitive for-profit AI sector, right? We’ve got our OpenAIs, our Anthropics, our Coheres, we’ve got Amazon, Google, Microsoft, xAI, Mistral is another one, so we’ve got a whole range of players in the market now with both commercial and open source models. So it’s a pretty thriving competitive ecosystem, and speaking as one of the downstream customers of these AI companies, prices have plummeted as a result, with all of these companies competing for your compute hours, competing for your choice of AI models. So I think that’s been good for competition and good for humanity, probably. |
00:40:34 | JM | But OpenAI have recognised that you need money in the game and Elon recognised it too. |
00:40:38 | DT | Yeah, exactly, and his proposal that OpenAI become part of Tesla, they say, was really just about control. Elon didn’t care if it was for-profit and he didn’t care if it was owned by investors as long as he was the majority owner, and it was that difference of opinion that resulted in him ceasing to be involved and having a less than glowing view of OpenAI’s operations after he ceased to be involved. |
00:41:03 | JM | So their defence for these claims that Elon’s making, what do they look like? |
00:41:06 | DT | Well, we might never see one, because they’ve pledged to have all of these claims struck out or dismissed. I’m not a California litigator, I can’t speak to how their rules of procedure work, but I imagine they work very similarly to the way civil procedure works here in Australia and in the UK, where if you believe that a complaint or a statement of claim is just so baseless that it’s not even worth arguing, it doesn’t really disclose a case, there’s no case to answer, then you don’t file a defence, you ask the court to just throw it out, and they’ve pledged that that’s what they’re going to do. So we’ll see that motion filed at some point in the Superior Court of California, and then it’ll be the first hurdle for Elon Musk’s legal team to get over for this case to proceed. Now, that’s OpenAI’s perspective on the not-for-profit versus for-profit point. They also say that the emails they’ve released between Elon Musk, Sam Altman, and Greg Brockman reveal something else really important, and that’s that good of humanity and open source are not the same thing. They say, actually, it would be really dangerous to release really powerful technology on an open source basis. They’re not making this analogy, but I suppose this is an analogy that they might endorse. Imagine if you open sourced the technology for the nuclear bomb, or you open sourced the technology for nuclear fission, technology that rogue states like Iran and North Korea have been trying to replicate, and that the United States and its allies have comfortably mastered for decades. Open sourcing that technology for the benefit of all humanity would create a real threat to humanity, because bad actors can use that technology for all kinds of nefarious ends. They say that AI is the same thing. Consistent with the Manhattan Project analogy, they say it’s a dangerous tool. 
As it becomes more powerful, it needs to be more closed source so that we can protect that dangerous knowledge from the people who would use it to do harm. We’re already seeing some of the harms that rogue actors can do with AI technology. Not so much in the GPT-4 text generation space, but think of that fake robocall from Joe Biden just a couple of weeks ago, telling people not to vote in a primary rather than a presidential election. It was not Joe Biden. It was a very convincing replica of his voice created using generative AI. You can see how people who want to destabilize democracies, threaten elections, or cause harm can use this sort of AI technology to their advantage. So there’s merit in this idea that, well, maybe we should be careful about who we share this technology with before we share it so broadly. The counter argument is that just about any for-profit company can sign up to use GPT-4 through Azure’s cloud infrastructure or through OpenAI’s own API. It’s not something that requires weeks of background checking. I think there’s a form to fill out and it’s approved pretty quickly. So it’s not as though they’re the last guardians against North Korea using GPT-4 to scam us all. But that’s the argument. That these emails show that open source and good for humanity aren’t the same thing. And importantly, that Elon Musk knew that and agreed with it. Again, he gave his characteristic “agree to all” sort of response to the emails extolling the virtues of being a little more discreet with how broadly they release some of their more advanced technology. There’s a whole bunch of emails from the AI researchers about the dangers of releasing it too broadly, including one from Ilya Sutskever, a very prominent AI researcher at OpenAI and one of the main personalities in the board stoush back in November. Elon Musk’s response to that whole chain? Yup. Y-U-P, which I guess you can take as an agreement. |
00:44:47 | JM | Maybe he wishes now that back then he was a bit more specific to what he was… |
00:44:51 | DT | … what he was yupping, yup. So that’s the other part of OpenAI’s defense and I guess we’ll see them articulate these arguments more fully as the case progresses. |
00:44:59 | JM | So David, we have OpenAI’s potential defense, though all we have at the moment is their blog post and emails. We have Elon Musk’s claim, set out with some interesting broad, sweeping statements about a fiduciary duty to humanity, AGI, and OpenAI being a golden goose. David, if you’re sitting on the court, what are you finding in this case? |
00:45:16 | DT | Yeah, well what’s interesting here is I might never get the chance to do that, even if I was living in California, because here’s the kicker, and this is one of the great things about the US legal system and the US civil justice system, Elon Musk demands a jury trial. We don’t have civil juries in Australia except for defamation cases, but in California, civil juries decide all sorts of cases, including this one. So 12 ordinary Californians are going to decide whether GPT-4 is AGI, whether OpenAI was founded for a not-for-profit purpose and has been led astray from that purpose by its current evil overlords. They’re going to decide all of these big, hairy questions that, if you’re Elon Musk, are central to the fate of the human race. Humanity hangs in the balance. These 12 people. It’s a lot of pressure. |
00:46:00 | JM | It is, it is. |
00:46:01 | DT | If you’re one of these 12 jurors. |
00:46:03 | JM | And big concepts to be tackling as well. |
00:46:04 | DT | Yeah, absolutely. I think this is going to be a really interesting case. Of course, from a technology perspective and the personalities involved, if it does progress past those early skirmishes, those early interlocutory cases about whether the case should proceed at all, it’s going to be covered a lot. But even from a procedural perspective, as a lawyer, I think this will be really interesting dealing with some really technical material here to answer some almost philosophical questions. This jury’s going to have to get into this peculiar structure that OpenAI has, this for-profit, not-for-profit hybrid. They’re going to have to get into the concept of AGI, how you would define it. They might be the first people in the world to accurately define it. |
00:46:43 | JM | What agree to all means. |
00:46:45 | DT | Yeah, that’s right. What yup and agree to all mean, whether those rise to the level of a contractual acceptance, what that means for offer and intention to create legal relations. It’s a real curly case for a jury to decide and one that I’d love to be in the room for. |
00:46:59 | JM | So, a lot of things to come, a lot of interesting arguments to be had to be fought out in California and potentially overseen by 12 ordinary people tackling these massive questions. |
00:47:08 | DT | Yeah, absolutely. Look, it’s a space to watch. I guess the next step is likely to be a strikeout motion or its Californian equivalent by OpenAI, and if Elon Musk’s case can survive that, this will be in the news for a long while yet. |
00:47:22 | JM | David, thank you for joining me on Hearsay the Legal Podcast. |
00:47:24 | DT | Jacob, thank you for joining me on Hearsay the Legal Podcast. As always, you’ve been listening to Hearsay the Legal Podcast. Now, today’s episode was all about AI, and if you want some more episodes at the intersection of artificial intelligence and law, well, we’ve got plenty of recommendations for you. Episode 111 is all about AI tools and legal practice management. That’s an interview with Jack Newton, the founder and CEO of Clio. Episode 116 is all about how AI might one day improve access to justice. That’s an interview with Lawpath founder and CEO Dominic Woolrych. And episode 98 is all about the intellectual property implications of AI-generated content – that’s an interview with patent attorney Alana Hannah. You can check out all of those episodes and more on the episode page of Hearsay when you get a chance. Now, if you’re an Australian legal practitioner, you can claim one continuing professional development point for listening to this episode. Whether an activity entitles you to claim a CPD unit is self-assessed, as you know, but we suggest this episode entitles you to claim a substantive law point. More information on claiming and tracking your points on Hearsay can be found on our website. Hearsay the Legal Podcast is brought to you by Lext Australia, a legal innovation company that makes the law easier to access and easier to practise, and that includes CPD. Before you go, I’d like to ask you a favour, listeners. If you like Hearsay the Legal Podcast, please leave us a Google review. It helps other listeners to find us and that keeps us in business. Thanks for listening and I’ll see you on the next episode of Hearsay. |