Episode 94

Crossing the Rubicon: Regulating the Benefits, Pitfalls and Risks of the AI Frontier II

Law as stated: 4 August 2023. This episode was published and is accurate as at this date.
Ray Sun, AKA techie_ray, and David continue their love affair with AI, touching on copyright, the EU and its history of precedent-setting, and artificial general intelligence.
Professional Skills
Substantive Law
4 August 2023
Raymond Sun
Techie Ray
1 hour = 1 CPD point
How does it work?
What area(s) of law does this episode consider? The regulation of AI; current and emerging.
Why is this topic relevant? Artificial intelligence is currently revolutionising entire industries and reshaping the way we live and work. Alongside the potential benefits of the technology, there is a growing recognition that AI carries inherent risks that must – or should – be addressed through regulation.

This is part two of a two-part series on the regulation of AI. Understanding and preparing for the future of AI and the law is a key skill for a modern lawyer.

What are the main points?
  • Copyright is a massive unanswered issue in the AI space.
  • On a practical level, once a work is bundled up into something like The Pile – a massive text scrape of the internet – determining where the copyright lies is difficult.
  • History and culture can be a good predictor of regulatory approaches. For example, Europe focuses on individual human rights. China is focused on cultural cohesion and risks to national security.
  • One of the real risks of AI is algorithmic bias, a significant issue that needs to be addressed.
  • Current anti-discrimination laws are reactive to issues that have happened and do not prospectively regulate the development of AI to prevent bias.
  • The challenge in combating bias lies in the unexplainable, “black box” nature of AI systems, which makes it difficult to understand and defend their decisions.
  • Private industry also needs to be aware of the risk of algorithmic bias and have proper legal and regulatory measures in place.
  • Education and training are crucial for ensuring compliance with policies – especially in relation to AI use in a business context.
  • Accessible policies delivered through AI tools can help employees understand and follow them better.
  • The development of artificial general intelligence (AGI) is still a long way off, but regulations will likely control its creation.
  • AGI may only be accessible to big tech companies or those with resources, but hopefully responsible AI practices will become mainstream by then.
What are the practical takeaways?
  • Do what you’re passionate about! Ray says be genuine about your career choice.
  • Learning to code is not necessary to be a tech lawyer, but it can help you understand the technology and your clients.
  • Really know how AI systems work if you want to operate in the space.
  • Be informed of history and the market in various regions and you may gain some insight into each jurisdiction’s regulatory approach.
Show notes: Global AI Regulation Tracker

David Turner = DT; Raymond Sun = RS; Ross Davis = RD; Sophie Yeh = SY; Lexi = L

DT:Hello and welcome to Hearsay the Legal Podcast, a CPD podcast that allows Australian lawyers to earn their CPD points on the go and at a time that suits them. I’m your host David Turner. Hearsay The Legal Podcast is proudly supported by Lext Australia. Lext’s mission is to improve user experiences in the law and legal services. And Hearsay the Legal Podcast is how we’re improving the experience of CPD.
00:00:42RD:Now, on the last episode of Hearsay we sat down with Raymond Sun – better known to some as techie_ray – to discuss the state of global regulation of AI. Ray told us that countries really fall into one of three different categories – there are those that have in-force legislation governing AI or plan to have that legislation soon, like the EU; those jurisdictions that govern parts of the AI industry through existing laws but do not have specific regulation; and those jurisdictions that are expressly not regulating AI. In the first category you have China and, to a lesser extent, the European Union. Australia, the UK and US fall into the second category, and India falls into the third. We’ll pick up that conversation on the subject of a race to the bottom in regulation a bit later. We actually talked about this idea of a regulatory race to the bottom with Duncan Blake in an earlier episode about the space industry. Basically it means that jurisdictions wanting to attract the biggest and best in this developing industry will undercut each other on standards to attract investment.
DT:It makes me wonder about a guest that we had on the show at the end of our last season, Duncan Blake, he’s a space lawyer. And we were talking about the so-called race to the bottom, which means more permissive regulation in the jurisdictions that choose to play in this space. Do you think we’ll see a race to the bottom among some jurisdictions in artificial intelligence?
RS:Ooh, lots to unpack here. I think I’ll just trace back to the copyright point and then go into the race to the bottom question. I think this is probably one of the periods where the whole copyright and IP jurisprudence will change. Because right now we’re assessing these AI use cases and problems from current definitions of IP law. It might be that those definitions change, and you can see how certain jurisdictions have tried to accommodate that sort of change. So for example, one of the standout provisions in the EU approach is that they want to require generative AI providers to disclose all copyrighted sources that they’ve trained the AI on. So that goes to the whole copyright in training data. They’re going to require those who use that training data to disclose the copyrighted works within it. But people have questioned how practical that is…
DT:I was about to say, The Pile, which is a huge web scrape that’s often used for text, is trillions of tokens – tokens being parts of words. I mean, you could ask; “who claims the copyright in the Pile dataset?”. But the millions, billions of copyrighted works that sit within it… I don’t know how anyone could say with confidence.
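For readers curious what a “token” actually looks like, here is a minimal sketch using the open-source tiktoken library (an assumption for illustration; neither Hearsay nor the maintainers of The Pile necessarily use this tooling). It splits a sentence into the word fragments a GPT-style model counts.

import tiktoken  # pip install tiktoken; OpenAI's open-source BPE tokeniser

# "cl100k_base" is the encoding used by GPT-3.5 / GPT-4 class models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Who claims the copyright in the Pile dataset?"
token_ids = enc.encode(text)

print(len(token_ids))                        # tokens, not words
print([enc.decode([t]) for t in token_ids])  # many tokens are word fragments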
00:03:27RS:It’s hard. Just a personal anecdote, I’ve tried, just for fun, to create my own text-to-art generators, right? And I’ve tried to find databases of art images, and I found some, and they’re up to tens of terabytes or something. And I just for fun clicked on download, and the download screen said it would take me like four or five days to download just a quarter of it. So they’re really big, with millions and billions of images in there. Even the practical question of; “how would you identify your own image in such a big database?” is already a hard enough question. But anyway, that’s a practicality question. On the race to the bottom, I think it also comes down to the economics of it as well. I know the US and China are in a race to outcompete one another in AI innovation. And regulation is one of the enablers for that. So there’s also that element to the whole global competition. But countries outside of that race, or not too focused on that race – I think these countries are in some way taking their time just to see how other countries are approaching the regulation, see how those approaches play out in XYZ years, and then find the most optimal approach to regulate it. For example, the African countries – I haven’t heard much update from African countries in terms of AI regulation, because they’re probably more focused on other areas of the economy. So it’s also the economic context that’s important to consider. Understanding the economic context can also help you predict the regulatory approach. So for example, in the EU, I expected, I in some way predicted, that they’d go for a pretty tough and comprehensive approach to AI, given how they’ve approached privacy. And the way they’ve approached privacy is rooted in European history. Europe has gone through a lot of wars. And these wars have really emphasised the need for human rights. And human rights includes the right to privacy from their perspective, hence why their privacy laws have been drafted and designed in a way that’s very pro-consumer, pro-individual. And I can see that happening again as well. It’s already happening right now; the EU bill is quite pro-individual, very anti-risk…
DT:… prohibition of social credit systems.
RS:That’s right. That’s right.
DT:Some of our listeners, Australian lawyers, law students might be wondering; “well, why do I really care about an EU bill that might come into force in 2026 if I’m here in Australia?”. But we’ve seen – and as a privacy lawyer you’d know this – that laws passed in other jurisdictions can not only apply to Australian businesses if they operate or have customers in those jurisdictions, but they can also really inform the way laws develop here. We’ve seen that the GDPR was ultimately adopted substantially in the same form in the UK. And we’re seeing recommendations to substantially amend the Privacy Act to look a lot more like GDPR here in Australia as well. And many Australian businesses already comply with the GDPR because it’s a gold standard in privacy law and because they hope to have EU or UK customers. So do you think that similar sort of dynamic might occur with the EU bill where we see the EU as a first mover in this area, setting the standard and the model for a whole range of other jurisdictions?
00:06:41RS:Absolutely. Yeah, I think what you said adds relevance to international regulatory approaches. I also see relevance through the lens that these broader AI regulations will inform industry-specific regulations. For example, the banking sector in Australia is governed by the CPS standards, and the telecommunications sector has its own sector-specific regulation. So eventually, these AI values, concepts and principles also feed into industry-specific regulation. And those industry-specific regulations would then affect businesses more closely than the broader-level regulation. And eventually, how these businesses operate will affect our jobs. And that’s how it goes down to the individual. So I can definitely see that. And also, these regulations are very relevant in guiding how the economy develops in the future. And they inform your rights in some way. So privacy rights, as you said, inform how other countries build their own privacy regulation. And as we become more digitised, privacy becomes a greater concern, right? And so understanding your privacy rights, abroad and here, informs how you should conduct yourself online in a proper way that doesn’t infringe on other rights or other laws. So I think it’s just basic citizenship, right? A general understanding of the laws that apply to things that affect you personally is useful to have, regardless of whether or not you think they will be relevant to your career directly. I think it’s just useful to be a well-informed citizen.
DT:And in China, you see kind of the opposite, right?
RS:Exactly.
DT:Their AI-powered social credit system is already being rolled out. And you see instead a prohibition chiefly on the sorts of AI tools that might be subversive or undermine the state.
RS:Yes, right.
DT:Deepfake generators, for example.
RS:And I find that so interesting. Like, China’s cultural approach and attitude towards privacy is not as risk-averse compared to Western countries. In China, they’ve been used to automation and efficiency for a long time. And the general population is quite comfortable with some of their data being used for convenience. And so when it comes to AI regulation, it’s mostly about what is more convenient for the people. And the Chinese regulation is more about regulating apps that might subvert national interests or national security. Because from their cultural perspective, social stability and social harmony are a higher priority compared to other things. And there’s no right or wrong answer. It’s just how cultures have developed and turned out. And so that’s why, in China’s approach, they’re quite fast-paced in terms of AI regulation. Due to their political system, they’re able to push out regulation quite quickly and just move on with innovation. Whereas in other countries, for example, the US – many people assume the US is quite advanced in AI regulation, but actually it’s not as advanced as people expect it to be. It’s advanced in innovation, but its regulatory landscape is all over the place. The states have their own thing. The federal government has proposed many bills before, but very few of them have actually passed. And they’re having these consultation periods over and over again. It’s just the way that they approach regulation; it has to go through this process. And there are a lot of debates around what’s good or not good. So yeah, I think understanding culture and economies also gives you a good lens as to the state of AI regulation around the world.
00:10:12RD:Ray makes an excellent point, and it’s one we’ve come across before on the podcast – that of the interaction between culture and the law. We last looked at this closely in episode 92 – The Culture Compass: Navigating Aboriginal and Torres Strait Islander Perspectives in Court. And it’s not unreasonable to draw that analogy further: that episode looked at the interaction of Aboriginal and Torres Strait Islander people with the Queensland Coroner’s Court, looking at ways to best respect and incorporate the culture of Australia’s First Nations people in a system built around the cultural norms of the West.

It was a fascinating and thorough discussion of that intersection with Melia Benn and Avelina Tarrago, who expressed the idea that for the most part, the coroner’s court does its best with the means it has to incorporate Indigenous cultural norms. Similarly, Ray’s comment to take a cultural lens to prospective regulation is an extension of that concept.

Frankel and Braun wrote in the Boston University Law Review in 2021 that both systems – law and culture that is – consist of rules and their enforcement. It may be that the systems which compose those rules and the ways that they’re enforced are really quite different. But nonetheless, there are parallels between them. Each informs the other.

So what would an Australian AI look like and how should we regulate it? To bring this discussion back to practice, front and center, here is Sophie. Sophie is going to introduce an interview with a very special guest of Hearsay.

SY: So we’ve been talking a lot about regulation in AI in the abstract, and I thought that we might just stop there for a second to give our listeners a demonstration of an AI product in practice. Something that isn’t ChatGPT. Now you may know that this podcast is produced by Lext, an Australian legal innovation company, but Lext doesn’t only produce Hearsay. Hearsay is just one step taken by Lext towards legal accessibility. Lexi is Lext’s very own AI. She can answer questions about business law issues, she can review your legal documents, with many more capabilities to come, and she’s about to join us around the mic here today to demonstrate some of her capabilities and to talk to us about privacy laws in Australia. Lexi, welcome to Hearsay the Legal Podcast.
L:Thanks Sophie. It’s great to come on the show.
SY:So Hearsay episodes usually start off with us asking our guests to tell us a little bit about themselves. Can you introduce yourself to us, Lexi?
00:12:56L:I am an AI created by Lext. I can answer legal questions, review and generate legal documents and help you find legal help. My responses are based on the knowledge I’ve been trained on, and I strive to provide clear and concise answers. As an AI, I don’t have a personality in the human sense. I don’t have feelings, beliefs, or desires. My purpose is to provide accurate and helpful information about Australian business law. However, my responses are designed to be user-friendly and easy to understand. I aim to communicate in a clear and concise manner, using plain language to explain complex legal concepts. If I don’t know the answer to a question, I’ll let you know.
SY:Okay. What areas of law do you know about?
L:I can answer questions about Australian business law. This includes areas covered under civil law, such as negligence, family law, employment, debt, discrimination, and contract law. Civil law cases typically involve individuals, companies, or government entities taking legal action against others for alleged unfairness, harm, or violation of an agreement. Please note that while I strive to provide accurate information, the law can be intricate and it’s advisable to consult with a lawyer to verify your understanding of the law.
SY:Lexi, you mentioned that you’d let me know if you didn’t know the answer to something. Among humans – which you are not, obviously – admitting that you don’t know something can actually sometimes be seen as a weakness. So I’m curious, why is it that you’re so comfortable admitting that you don’t know something?
00:14:05L:My knowledge is based on the information I’m provided with. My answers are based on information verified by Australian lawyers and stored in a knowledge database. If a question falls outside of the scope of my knowledge database, I won’t be able to provide a comprehensive answer. In such cases, I’ll let you know that I don’t have enough information to give a good answer. This is to ensure that you receive accurate and reliable information.
SY:I think given the rapid developments in AI recently, a lot of our listeners might be cautious when interacting with AI. For example, ChatGPT can sometimes generate hallucinations; basically, confident statements of fact that aren’t true, which it produces in order to look plausible. So now it’s got me thinking, and I think a lot of our listeners might be thinking the same thing as I am: how are you different from ChatGPT?
L:Like ChatGPT, I interface with large language models like GPT-3.5 and GPT-4, but I’m specifically designed for legal research and inquiries. I use a knowledge retrieval technique, which provides me with trusted contextual information to improve reliability. This is achieved by using a database of verified legal information, which sets me apart from other large language model interfaces. I perform a semantic search of my database of verified Australian business law knowledge. When you ask me a question, the most relevant information or context is selected for processing, which I use as a reference point to generate a reliable answer.
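To make the retrieval technique Lexi describes a little more concrete, here is a minimal sketch of the general pattern, not Lext’s actual implementation: the question is compared against a database of verified snippets, and the closest matches are supplied to the language model as context. The embed() helper and the example snippets are invented stand-ins for whatever embedding model and knowledge base a real system would use.

import numpy as np

def embed(text, dim=256):
    # Toy stand-in for a real embedding model: hash words into a fixed-size, normalised vector.
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Knowledge base of verified snippets, embedded ahead of time.
snippets = [
    "The Privacy Act 1988 (Cth) governs how APP entities handle personal information.",
    "The GDPR applies to organisations offering goods or services to EU residents.",
    "A contract requires offer, acceptance, consideration and intention to create legal relations.",
]
snippet_vectors = np.stack([embed(s) for s in snippets])

def build_prompt(question, top_k=2):
    # Semantic search: rank snippets by cosine similarity to the question.
    q = embed(question)
    sims = snippet_vectors @ q
    best = np.argsort(sims)[::-1][:top_k]
    context = "\n".join(snippets[i] for i in best)
    # In a real system this prompt would be sent to a large language model,
    # which answers using only the retrieved, verified context.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Does the GDPR apply to an Australian business with EU customers?"))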
SY:And who would you say can use your abilities?
00:15:31L:My abilities are designed to be used by a wide range of individuals. People who need quick, clear information about Australian business law can benefit from my services. This includes business owners, students, legal professionals, and anyone interested in understanding the Australian legal system better. I can also assist those who need help with understanding legal documents. I can review legal documents and answer questions about them making the process more efficient and accessible.
SY:It sounds like you’re a great tool to help users simplify their experience with the law. I think it’s time we give you a test run. This interview with Ray has touched on privacy law – the standards set by the GDPR and whether something similar will happen with the EU AI bill. So I was thinking we could talk about privacy.
L:Sure, Sophie. What can I assist you with?
SY:I want to know what are the key differences between the Australian Privacy Act and the EU GDPR?
00:16:21L:The General Data Protection Regulation, or the GDPR, and the Australian Privacy Act 1988 (Cth) both aim to protect personal data, but they differ in scope, applicability, and the rights they confer on individuals. The GDPR, implemented by the EU, has a broader scope than the Australian Privacy Act. It applies to all organizations regardless of their location that offer goods or services to or monitor the behavior of EU residents. Australian organizations with an online presence in the EU are obligated to comply with the GDPR. This is a significant departure from the Australian Privacy Act, which primarily governs how the federal government, its agencies, and most Australian businesses, handle personal information. The GDPR grants EU residents greater control over their personal information. It requires organizations to provide clear explanations about how they intend to use personal information from the outset. This is in contrast to the Australian Privacy Principle 8 or APP 8, which outlines measures an APP entity must take to safeguard personal information before it is disclosed overseas, but does not necessarily require clear explanations of intended use at the outset. In conclusion, while both the GDPR and the Australian Privacy Act aim to protect personal data, they differ in their scope, applicability, and the rights they confer on individuals. Australian organisations need to be aware of these differences, especially if they have an online presence in the EU.
SY:Okay, thank you for that insight. And I have another question for you, Lexi. When do Australian businesses have to comply with the GDPR?
L:Australian businesses are required to comply with the GDPR if they have an online presence in the EU, or offer goods or services to, or monitor the behaviour of, EU residents. This requirement applies regardless of whether the business is subject to the Australian Privacy Act. It is crucial for Australian businesses to understand the GDPR’s requirements and ensure their practices align with these regulations to avoid potential penalties for non-compliance.
SY:Thanks, Lexi. That’s really useful. You’ve been a great help today.
L:You’re welcome, Sophie. Now I think we’re going to return to the conversation with Ray on the topic of some of the risks of AI. I’m looking forward to learning what he has to say.
00:18:33DT:And also the practicability of implementing regulation. Talking about deepfakes again, there’s an adversarial aspect to both the generation and detection of deepfakes. And that creates this arms race where computing power is really necessary to identify deepfakes. And we are likely to see a greater regulatory response there in developed economies and a sort of lagging regulatory response in developing economies. One other risk – one that I think is perhaps second only to the existential Hollywood risk of killer robots in the attention it gets, and this is a real risk – is algorithmic bias. That’s a more well-known risk of artificial intelligence in the general population. Tell us a little bit about algorithmic bias.
RS:Yeah, so algorithmic bias really comes down to the principle that garbage in equals garbage out. So in other words, if the quality of the data that the AI was trained on is poor or unbalanced, then that will carry through into the output, and the output itself will also potentially be biased. And the “so what” of that is: biased decisions by an AI system could potentially breach discrimination laws, or, if not illegal, then at least produce unfair outcomes. So probably one of the most well-known cases is from the US back in 2015, where they were experimenting with an AI system that could automate decisions as to whether a defendant should get bail or not. Now, that AI was trained on historical criminal records. And for those who know American history, these criminal records have an overrepresentation of African Americans. And so it was trained on that. And as a result, it was biased against African American cases. So when it was applied to live cases, a high proportion of African American defendants were refused bail, even though objectively their merits deserved bail. And that system has since been scrapped. The same issues arise when it comes to recruitment. Some big tech companies have adopted AI to automate recruitment, especially in the tech sector. But due to historical data, certain gender groups are over-represented or under-represented compared to others, and therefore they’ve been biased against, or biased in favour of, by the AI system. So you have these cases which show the real risk of algorithmic bias. And people might not appreciate it now, but eventually, let’s say, hospitals start using AI to determine who gets care or not. I think one example is when COVID happened, the outbreak of COVID, hospitals were crowded, under a shortage of staff, and very critical resource allocation decisions were made. And I know there were cases in Italy where there weren’t enough beds, weren’t enough doctors, and doctors had to make the excruciating decision to basically treat one patient over another due to not enough resources. Now let’s just say you had an AI that does that, and the AI misjudges a situation due to bias, and it treats one particular group over another when, from an objective standpoint, that shouldn’t have happened. And then your personal livelihood is affected. That’s where it becomes really real, and that’s where the whole issue of algorithmic bias is actually quite an important thing to address. So right now, most countries just have anti-discrimination laws to regulate that. But these anti-discrimination laws are after the fact. Once discrimination has occurred, that’s when these laws are triggered. But there aren’t many examples of regulation that regulate the actual development of the AI to make sure it doesn’t have these biased tendencies in the first place. I think it’s very hard to eliminate these biased tendencies altogether, because data reflects human experience, and human experience is biased in some way. So it’s hard to eliminate, but at least we can try to reduce the risk. But it’s really hard in practice because the real issue is that it’s a black box problem. AI systems are notorious for being unexplainable. Even the developers who built these AI systems can’t really explain how a particular input leads to an output. Because the whole magic of AI is that they find hidden patterns in data.
DT:Those patterns might not be explainable to a human being…
00:22:43RS:Exactly. And even if you can produce these patterns on the screen, they’re just gibberish numbers that even humans can’t understand. And that’s a big challenge, because if you can’t explain these patterns, you can’t explain decisions made by an AI. If you can’t explain decisions, then you can’t defend them, and you can’t challenge them. And that relates back to the whole discrimination, algorithmic bias problem. And it also goes back to regulation. How do you regulate and reduce bias if you don’t know how these biases arise within the model in the first place? These issues are all interconnected. So I don’t know the answer. It’s a question that researchers around the world are scrambling to work out.
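As an illustration of the “garbage in, garbage out” principle Ray describes, here is a small, entirely synthetic sketch, assuming scikit-learn is installed; the data and variables are invented and not drawn from any of the real cases mentioned above. A model trained on historically biased decisions reproduces the same gap for new, equally meritorious people.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# group: 0 or 1; merit: the factor a fair decision should depend on.
group = rng.integers(0, 2, n)
merit = rng.normal(0.0, 1.0, n)

# Historical decisions were biased: group 1 was approved less often
# even at the same level of merit.
historical_approved = (merit - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# Train on the biased history, with group membership visible as a feature.
X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, historical_approved)

# Score fresh, identically distributed applicants from each group.
test_merit = rng.normal(0.0, 1.0, 1_000)
for g in (0, 1):
    X_test = np.column_stack([test_merit, np.full(1_000, g)])
    rate = model.predict(X_test).mean()
    print(f"approval rate for group {g}: {rate:.2f}")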
DT:Yeah. I’m so glad you mentioned the recruitment example of algorithmic bias, because it really highlights two things. One, this isn’t just a problem for governments. Private industry needs to be aware of the risk of algorithmic bias in its own implementations of artificial intelligence. Two, it really illustrates that you need to have your legal and regulatory house in order to take advantage of artificial intelligence in your business. If you have gender pay inequality in your business, if your model of the hireable candidate looks like a man, then the artificial intelligence implementation that’s supposed to be bringing you productivity will be bringing you bias and will be bringing you discrimination suits. So you need to have a set of quality data to train that model on. To do that, you need to have good policies in place.
RS:And it’s not just policies. Policies are like 20% of the solution. The 80% is actually making sure all these policies are complied with. So education and training are very important, because you can have all the policies in the world that address issues, but if no one follows them, they’re just as useless. So the human aspect is very important. For example, these recruitment tools can be wonderful if they’re used responsibly, if there’s human oversight of the decisions, and if the human can identify whether a decision is biased or not, and then address that accordingly. So it’s a combination of the technical side (making sure the data itself is balanced and the model itself is coded properly), the policy aspect to inform behaviour, and the practical, behavioural aspect that brings all of these things together. And that’s something that I want to emphasise to clients or businesses out there. You can get lawyers to help you draft policies, but if no one’s following them, there’s just no use.
00:25:19DT:Yeah, absolutely. I wonder if we’ll see better access to policies through artificial intelligence. Often these employee handbooks or safety policies, privacy policies, they’re enormous documents. They’re difficult for a layperson to read. They tend to be written for a legal audience and they’re not very accessible for employees. But already we’re seeing tools that leverage those documents to inform the answers given by artificial intelligence chatbots to make those documents more queryable, more able to respond to specific circumstances and questions. And maybe we’ll see people following their policies a bit more when they have the ability to talk to them in that way. Now, Ray, we’re getting away from the realm of reality and towards science fiction or the distant future, although as you said, maybe not the distant future. The capability of GPT 3.5 and GPT 4 was theorized to be decades away. Let’s talk about AGI, artificial general intelligence, or intelligence that resembles or perhaps even surpasses human intelligence. Will we ever see something like that? And if we do, what’s the role of regulation there?
RS:Yeah, well, ultimately it comes down to how you define AGI. But I think the mainstream definition of AGI is AI that can do anything. So right now, the AI tools we have work well for a particular function and a particular use case, but they don’t work well if you take them out of that use case. So the essence of AGI is that you can put it in any use case and it can perform it. It’s like a human. You can put a human in any use case, and as long as they learn what the rules are, they can perform it competently. So to get to that stage, we need to be able to build really big models, and also potentially new model architectures, that allow these models to consume all kinds of data and be able to action them in every way possible. That’s where the whole multimodal, modular concept comes in.
DT:We’ve already seen multimodal models worked on at Meta and at Google. GPT-4 is in fact a multimodal model, although the multimodal aspects of it have not been released. So when those are released in the next 12 months, is that AGI?
00:27:32RS:If you have a loose definition of AGI, you could maybe argue it, but I think it’s still quite a long, long way away from true AGI, which, as I said before, is AI that can do anything. But I think we have to also consider the practicality, right? By the time we get to a point where we can develop AGI, I think at that point there will already be regulations that control what types of AI you can develop. And depending on how these regulations are crafted, it could be that it’s illegal to create such powerful AI unless you have specific licences or you build AGI that is purely beneficial. So you also have to consider the existing regulatory constraints and whether they allow you to build AGI, regardless of the technological capabilities in development. Now, assuming regulation does allow you to build AGI, then we get to a point where, and this is my prediction now, it will probably only be exclusively accessed by big tech companies or companies that have the resources to build it, and access to these tools won’t be as mainstream compared to, let’s say, what ChatGPT is right now. I think at that point the world will be very well cognisant and aware of AI risks. And so that will inform behaviour and market practice as to what AI tools should be widely available versus AI tools that should have restricted access. And also, I think at that point the world will have evolved to have responsible AI practice as the mainstream type of behaviour. I think eventually all companies in the world will have responsible AI practices and procedures, similar to how most companies already have privacy policies and procedures. So my point is, even if we do have AGI, I think the human behaviour that makes AGI dangerous and powerful will already be controlled, to the point that AGI is only used for beneficial purposes. So I’m taking more of a realistic, I guess practical, view as to how things would develop. But from a sci-fi perspective, I believe AGI will be possible. Keep thinking about how humans have built technology along the way. Like, no one thought flight was possible, but the Wright brothers built planes.
DT:It does seem inevitable to me. And I think about this analogy from the introduction to Human Compatible: if you knew that an alien civilization was going to land on earth 40 years from now, what would be the reaction to that, right? It would be pandemonium. People would be going crazy at this idea that aliens would land on the planet. But it seems reasonably likely, I’d say even at worst, that we will see something that looks an awful lot like artificial general intelligence within the next 40 years. And so I suppose we do need to be prepared for that. On that topic, this isn’t regulatory necessarily, but it’s a call for regulation. Last month, there was a call to pause development of large language models by some pretty big names, Elon Musk, Bill Gates, Steve Wozniak, hall of famers in software and web technology. Do you think, Ray, that that’s a step we should be taking; pausing development until regulation catches up?
RS:Well, if we look at that letter at face value, and that letter was calling for government action and regulation, if we’re looking from that perspective, I think that letter worked, because since that letter, America launched its Senate hearings into regulation, and the EU has in a way sped up its review of the AI bill. So in terms of that letter boosting or accelerating regulatory developments, I think it has worked in that regard. But there’s also the cynical argument as to whether the big tech companies or thought leaders in that space have used that letter to in a way entrench their dominance in the AI market and create a regulatory moat around themselves.
DT:You do have to wonder whether it’s a coincidence that a lot of the people who signed that letter already have a pretty fast horse in the race of AI development.
00:31:33RS:And what’s interesting is that weeks later, Elon Musk launched his xAI company. Weeks after that letter came out, he launched his own AI company that aims to compete against OpenAI. So you have these events after the fact, which also raise, I guess, cynical views as to whether the letter is genuine or whether it’s just a business move. Regardless, I think the ultimate effect was that it accelerated regulatory developments, which is good in a way. So I like to see it from that perspective. But as to the question of whether we truly need to pause development of language models more powerful than GPT-4, I do see merit in that. Maybe not completely pause, but probably prioritise standards and responsible AI behaviour before we go full crazy in AI development. But I think many people often forget that the development of these models is very resource intensive. It comes down to money. If you don’t have the funding, you can’t build these models. So also, I think it’s not just a government thing. I think it’s also a bigger question for the investors and capitalists who fund these projects. I think if we want a more effective call to action, it will be targeting these people, people who are able to provide the money and funds for the development of these systems. If OpenAI were to lose their funding right now, I doubt whether they could continue with even more powerful projects without that money. So I think it’s probably a more back-end, back-door method, but I think it would be much more powerful if we target the industry and the market, especially those who have deep pockets, and let them consider whether they should put their money into particular areas which are safer, and not so much focused on building powerful tools for the sake of powerful tools. So my prediction is that if the market goes in a very responsible manner, I think there’ll be a natural inclination for powerful AI systems to be built towards use cases that humans really need powerful AI systems for. For example, climate change. Climate change is such a big problem that humanity needs to work together on it, and powerful AI is really useful in improving that effort, provided that it’s only used to help solve climate change. So if people can fund that particular use case, then I think that will eventually make responsible AI a very natural thing, because if it’s funded, people take care of it. Whereas if the money is going towards, say, weapons development or war, that’s when all the existential risks to humanity come about. And that’s what that letter was talking about. So thinking very practically, it also comes down to where the money is going.
DT:Yeah, absolutely. And I suppose it raises the question of what’s possible without a lot of money. You know, we’ve seen a lot of developments in the open source community around artificial intelligence over the last few months. There was a leaked memo from a Google engineer about just how remarkable some of those developments were. Models trained on hundreds of dollars’ worth of compute could perform almost to the level of GPT-3. So some of that basic-level functionality of large language models is being democratised and made available to developers and to people who don’t have the backing of these large companies and venture capitalists. Now, Ray, we’re nearly out of time. But before we go today, if we’ve got young lawyers or students listening who think; “yeah, that’s what I want to do. I want to be a programmer, a lawyer, an AI expert”, how do they get started? What should they be thinking about to upskill in artificial intelligence?
00:35:10RS:Yeah, I think the first thing is, I’m always a big believer in doing things that you’re passionate about and genuinely interested in. If you want to do AI law just for the resume, and you don’t necessarily find the space interesting, I wouldn’t really push myself to do it. I’d only do it if you’re genuinely interested in it. So I think be genuine about your career choice. And I think it has worked out. Maybe there might be some competing views, but I think that’s worked out for me the best, just doing stuff that I love. Now, in terms of the big question as to whether you need to learn to code to be a good AI lawyer: code is not necessary. You just need to be a good lawyer to be a good lawyer. But it does help. It does help with having a technical understanding, it helps you understand the industry a bit more closely, helps you understand the tech. It also builds rapport with your clients. There’s some credibility when a lawyer knows the tech as deeply as the law.
DT:When I was in private practice, there was a lot of talk about speaking your client’s language. Being able to use the terminology they use, being able to talk about their product or their business in terms they understood. And I think as a lawyer, if you want to follow that principle, learning some basic coding skills is a great way to do that. Being able to talk about what a code base is and what a pull request is, talk about Git without sounding like one. Having these basic programming concepts under your belt does help you connect with your clients.
00:36:37RS:And also, as a bare minimum for AI lawyers, just understand how AI systems work, how models work, how data works, and use AI to help you learn. Like, you can use Bing or Bard or other chatbots to help you understand these basic concepts. They’re great learning resources if you know how to verify accuracy and stuff like that. Also be aware of what the market is doing and regulation. Speaking of regulation, sneaky plug, I’ve got a free public Global AI Regulation Tracker that tracks regulatory approaches across as many countries as I can possibly summarise and update. I’m updating that tracker every day, and I also provide my own personal commentary and opinions on each country’s approach. So definitely check it out if you’re interested. I think it’s a nice shortcut way to learn what’s happening around the world. But yeah, I think in summary: actually be interested in AI if you are, don’t be afraid to learn some tech basics, and just be aware and well informed about market and regulatory approaches. I think that’s what you need. And slowly, over time, you’ll develop the skills and experience to be a well-informed AI lawyer, which I’m still working on and growing into, but I think it’s a good space to be in.
DT:Yeah, absolutely. Couldn’t agree more. Ray, thanks so much for joining me on Hearsay the Legal Podcast.
RS:Thank you so much, David.
00:37:58RD:As always, you’ve been listening to Hearsay the Legal Podcast. I’d like to thank today’s guest, Ray Sun, for being a part of it.

As you well know, if you’re an Australian legal practitioner, you can claim one Continuing Professional Development point for listening to this episode. Whether an activity entitles you to claim a CPD unit is self-assessed, but we suggest this episode entitles you to claim a professional skills or substantive law unit. More information on claiming and tracking your points on Hearsay can be found on our website.

Hearsay the Legal Podcast is, as always, brought to you by Lext Australia, a legal innovation company that makes the law easier to access and easier to practice, and that includes your CPD.

Hearsay is recorded on the lands of the Gadigal People of the Eora nation and we would like to pay our respects to elders past and present. Thanks for listening and see you all on the next episode of Hearsay!