
Regulating AI: Australia’s Interim Response Unpacked
What area(s) of law does this episode consider? | Use of AI and options for legal regulation. |
Why is this topic relevant? | In January of this year, the Australian Government published its Interim Response to the Safe and Responsible AI in Australia consultation, which is being run through the Department of Industry, Science and Resources. The consultation received 510 online submissions, and the department also held 11 roundtable discussions and one town hall event with 345 participants. The Interim Response highlights the enthusiasm of Australians for AI, its potential transformative benefits when put to good use, and the potential benefits to the economy from the technology. |
What cases are considered in this episode? | Commissioner of Patents v Thaler [2022] FCAFC 62 |
What are the main points? | The Interim Response favours a risk-based, proportionate approach: testing whether existing laws can be adapted before enacting AI-specific regulation; voluntary AI safety standards developed through the National AI Centre; possible watermarking or labelling of AI-generated content in high-risk settings; and a temporary expert advisory body. |
Show notes | Ray’s Global AI Regulation Tracker; Interim Response to the Safe and Responsible AI in Australia consultation |
DT = David Turner; RS = Raymond Sun
00:00:00 | DT: | Hello and welcome to Hearsay the Legal Podcast, a CPD podcast that allows Australian lawyers to earn their CPD points on the go and at a time that suits them. I’m your host, David Turner. Hearsay the Legal Podcast is proudly supported by Lext Australia. Lext’s mission is to improve user experiences in the law and legal services, and Hearsay the Legal Podcast is how we’re improving the experience of CPD. We spoke with Ray Sun on the last season of Hearsay the Legal Podcast, and in that conversation, a theme emerged about the nascency of the regulation of AI in Australia. Artificial intelligence and its uses are still in many ways a burgeoning area of technology, so it’s no surprise that regulation lags both in Australia and abroad. But in January of this year, the Australian government published its interim response to the Safe and Responsible AI in Australia consultation that’s being run through the Department of Industry, Science and Resources. To say that this was a popular consultation process is a bit of an understatement. The consultation received 510 online submissions, and the department held 11 roundtable discussions and one town hall event with 345 participants. The interim response really shows the enthusiasm of Australians about AI and about its potential transformative benefits, but also its potential risks. What the consultation process has not done, however, is propose new specific regulation in Australia, but it has put forward the government’s proposed approach to regulating the technology, at least on a temporary basis. So, who better to return to discuss the interim response than Ray Sun? For those of you who haven’t heard Ray’s first episode, Ray is a technology lawyer at Herbert Smith Freehills and the creator of the Global AI Regulation Tracker. Ray, thank you so much for coming back on Hearsay the Legal Podcast. |
00:02:01 | RS: | Thank you, David. Ready for round two. Let’s go. |
00:02:05 | DT: | Let’s do it. So, we had you on about this time last year. What’s been happening since we last spoke? |
00:02:10 | RS: | Yeah, well, glad that you mentioned my AI Regulation Tracker. That has evolved a lot since the last time we spoke. So, last time, it was just a tracker, just a website that tracks regulations around the world; click on a country and view its summary. But since then, I’ve added a bunch of new features to the point that it’s now a platform. TIP: Ray’s Global AI Regulation Tracker is a free and interactive tool where you can click on any country to see a quick summary of its AI regulatory approach and developments. We’ll leave a link to Ray’s tool in the show notes to this episode. So, I’ve got a chatbot feature in there where you can interrogate the bot on AI-related topics. Also, a new feature that compares different markets and how their regulations stack up against each other. I’ve also set up a live AI law news feed that gives you real-time news updates on AI, and also a governance library, which is basically a repository of templates, policies, and toolkits around AI governance and safety. So, a lot of new features. That’s been my main side project since the last time. |
00:03:20 | DT: | Wow, and that will keep you busy, because on the one hand, as we said at the top of the show, it’s an area that’s largely unregulated, or where regulation is changing very rapidly in the countries that have chosen to introduce some regulation for AI. But, I suppose, you’re keeping track of all this news as it’s coming up, and you must have to update your data set pretty regularly. |
00:03:42 | RS: | Yeah, well, I’ve got a process in place that automates most of that. I just do a final touch review before posting the updates. But yeah, you’re right. A lot of movement around the world since the last time we spoke. I’m happy to dive deep into them. |
00:03:56 | DT: | Yeah well, we’re going to talk about the Australian position in a little bit. |
00:03:59 | RS: | Yeah. |
00:03:59 | DT: | But before we do, any interesting moves internationally that you want to talk about? |
00:04:03 | RS: | Yeah, I think the big two: last year in November, in the US, President Biden issued the executive order on AI, and that was really the big turning point in the American AI regulatory scene. Since then, there have been a bunch of new initiatives and frameworks released within the US, and also in the EU. For those who regularly follow that market, the EU just passed the final text of the AI Act, which is expected to be formally adopted and published in the Official Journal this month. So yeah, that’s quite exciting in that market. Those are probably the big two international developments. |
00:04:39 | DT: | Yeah, I mean, that EU legislation is exciting. I think the European Union has always been a bit of a front runner in technology regulation. It’s a market that is more willing to regulate the technology sector than the US market is, but it de facto sets the international standard on many technology regulatory issues because of the reality that many US-based global technology companies have to operate in EU jurisdictions. So, we saw that with the GDPR becoming the international gold standard for privacy regulation. We saw that with USB-C becoming the de facto standard for charging and data connections after the uniform connector regulation in the EU, and I wonder if we’ll see their regulation of artificial intelligence work in the same way in a few years’ time. Alright, let’s talk about the Australian position now. |
00:05:26 | RS: | Yeah. |
00:05:27 | DT: | Our main topic for today. Before we talk about the interim response from government, let’s talk a bit about the consultation itself. So, what kicked the process off? Why was the government looking for submissions on this subject? |
00:05:38 | RS: | Yeah, so the consultation process started last year, around this time. There was a discussion paper on safe and responsible AI, and actually, this was not the first time the government consulted on AI regulation. In 2022, there was also a broader consultation around the digital economy, and as part of that consultation, there was a discussion around AI and automated decision-making regulation. So, we had an AI regulation consultation in 2022, but the reason the government did it again was that between those two consultations, ChatGPT was released, and that kickstarted the whole generative AI wave. Generative AI poses a lot of new issues compared to traditional AI, and so that is why there was another consultation. The actual discussion period ran for, let’s say, one or two months, during which industry could write submissions, and as you said, there was a bunch of roundtable discussions, and then this year in January, the government released its interim response, which basically consolidated the views of the community and proposed the government’s next steps and actions. |
00:06:45 | DT: | As we said at the top of the episode, a huge number of submissions was received in this consultation process. Having done a little bit of law reform work myself in a previous life, I can say this was both a relatively long consultation process and a popular one, one that sparked a lot of responses. Tell us a bit about who was writing in to this consultation process. |
00:07:04 | RS: | Yeah, pretty much all the big names you can think of: law firms, big tech companies, associations, and, of course, the media. Because AI impacts a lot of sectors of the economy, there was really good representation among the 500 or so submissions. |
00:07:20 | DT: | And what were some of the views being put forward in these submissions? |
00:07:24 | RS: | Yeah, this is a very interesting one. So, I personally haven’t read every single submission, I’m just going off what the government said in their response, but there was a pretty broad and pretty consistent view among many submissions. Mainly, the big two were: first, Australia shouldn’t jump into AI-specific regulation without first understanding our existing laws and whether they can be reformed to accommodate AI; and second, any approach that we take should find a balance between innovation and regulation. So, these were the two broad themes that were quite consistent across many of the submissions. TIP: Now, as we mentioned, the Department of Industry, Science and Resources received 510 submissions to the Safe and Responsible AI discussion paper, and of those, 447 have been published and you can read them on the Department’s website. The consultation sought views on whether Australia has the right governance arrangements in place to support the safe and responsible use and development of AI. And the Department received submissions from businesses and organisations of all sizes operating in a variety of industries. The Australian Federal Police, the Law Council of Australia, Australia’s unicorn startup Canva and the Australian Copyright Council all lodged submissions, as well as our very own company, Lext. We’ll include a link to the published responses in the show notes. |
00:08:44 | DT: | And what sort of topics was the consultation process seeking comment on? I know you’ve mentioned that there was a need to balance innovation with regulation. You also mentioned that there was a point, which I think is well made, about needing to see to what extent our existing laws address some of the risks of artificial intelligence and I’m sure copyright law is one of those that we’ll need to discuss. But what were the topics that government was seeking feedback on? |
00:09:11 | RS: | Yeah, so they were all laid out in the initial discussion paper. I think there were 12 or 15 questions that the government wanted feedback on, and they were categorised in a logical sequence, which was helpful. So, the first few questions were around defining the AI space, “How would you define AI?”; then there was another category of questions around the issues or risks from AI where the government could intervene and regulate; and the third bunch of questions were around the solution. So, looking into a risk-based approach, which is the dominant thinking within the government right now, and also any insights or lessons from international examples. So, I thought it was a pretty intuitive way of inquiring. |
00:09:51 | DT: | And you mentioned that the government’s taking a risk-based approach to this consultation and to any response to the submissions that have been made to it. What does that mean? |
00:09:59 | RS: | Yeah, so a risk-based approach basically means regulating AI systems based on their risk profile. So, the higher the risk, the stricter the regulations can be, and the lower the risk, the more relaxed. The risk-based approach was really popularised by the EU, because the whole EU AI Act is based on the risk tier system. So that’s the genesis of that thinking. |
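To make that tiering concrete, here is a minimal sketch in Python of how a risk-based regime maps AI systems to obligations. The four tier names follow the EU AI Act’s risk levels, but the example use cases, the tier assignments, and the obligation summaries are illustrative assumptions for this sketch, not the Act’s actual text.

```python
from enum import Enum

# Tier names follow the EU AI Act's four risk levels; everything else in this
# sketch (use cases, assignments, obligation text) is an illustrative assumption.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations before deployment
    LIMITED = "limited"            # transparency / labelling duties
    MINIMAL = "minimal"            # largely left alone

# Hypothetical use-case-to-tier mapping, for illustration only.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "law_enforcement_face_matching": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "photo_face_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Illustrative obligations: stricter as the risk tier rises."""
    return {
        RiskTier.UNACCEPTABLE: "banned",
        RiskTier.HIGH: "mandatory risk management, testing and human oversight",
        RiskTier.LIMITED: "disclosure that users are interacting with AI",
        RiskTier.MINIMAL: "no AI-specific obligations",
    }[tier]

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```

Run as-is, it prints each hypothetical use case with its tier and matching obligation summary, which is the essence of “the higher the risk, the stricter the rules”.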
00:10:22 | DT: | So, you think that might have come from a look at some of the international examples, especially in the EU? |
00:10:26 | RS: | Yeah, yeah, that’s right. TIP: The Australian government’s interim response to the Safe and Responsible AI in Australia consultation was published back in January of this year. The interim response summarises the feedback received from a range of stakeholders, including members of the public, academics, industry bodies, government agencies, and small and large corporations. This feedback was received through numerous consultations, which, according to page 7 of the report, included a virtual town hall event with 345 participants and a number of roundtables featuring interested stakeholders, totalling over 200 participants. Many of the submissions identified risks arising from the use of AI, which the report categorises on page 11.
Now, many of the submissions that called for regulatory action put forward proposals about what that action might be. These included the establishment of an expert AI advisory body; regulatory sandboxes, in other words, experimental environments where regulatory approaches could be trialled; and the adoption of international standards for AI development. The report notes that the submissions it received clearly emphasised a need to establish safeguards to protect Australians from the use or misuse of artificial intelligence. But it also stressed the importance of balancing those safeguards against making sure that low-risk applications of artificial intelligence can proceed smoothly, to promote innovation. The government’s report also identifies a need for a classification system to identify the risk level of AI applications, in the same way that the European Union’s draft legislation does. Now, the discussion paper that was circulated before the publication of this interim report did propose a method for classifying whether an application of artificial intelligence could be considered high risk or not: whether the potential impacts of the application were systemic, irreversible, or perpetual. Some listeners might know this as the one-way or two-way door scenario. If a decision is a one-way door, that is, it’s irreversible or perpetual, then it needs to be approached with a much greater degree of scrutiny and care than a two-way door decision, which can be walked back through and reversed if it’s wrong. The report also mentions the European Union’s approach in its draft legislation, which is to provide an exhaustive list of artificial intelligence use cases that it considers high risk, like applications in medical devices, law enforcement, and critical infrastructure. Now, the government’s report isn’t all about risk and danger, though. The government also noted in the report that it’s clear AI has the potential to positively impact society and the economy in a huge way, with applications aiding medical image analysis, engineering design optimisation, and the improved management of natural emergencies, just to name a few use cases where it’s already having an impact. But the benefits of AI extend beyond specific use cases of the technology. It has the capacity to create new jobs, enhance consumer benefits, revolutionise learning for students young and old, stimulate new industries, boost productivity, improve healthcare; the list goes on. So it’s clear Australians see a lot of potential for AI in terms of its social and economic benefits, but there are also obstacles: low trust, skills gaps, inadequate infrastructure, financial constraints, regulatory uncertainty. This has resulted in a widespread call for the government to take more action to both leverage the opportunities and mitigate the risks, although, in the submissions to the consultation, opinions varied greatly on the specific actions required. Now, a lot of the submissions suggested that existing laws appear to fall short in preventing AI-facilitated harms before they occur, and that more work is needed to respond to these harms after they occur, especially given just how quickly AI systems are developing and scaling.
With that in mind, the government’s report suggests that we need to consider mandatory safety obligations for developers and users of high-risk artificial intelligence, and that the government should collaborate with international partners to establish safety mechanisms and test these systems. So, with all of these ideas from all of these different submissions, what is the government going to do next? Well, the report also outlines the government’s immediate next steps. It’s going to develop an AI safety standard, it’s going to consider compulsory watermarking for AI outputs developed in high-risk settings, and it’s going to establish a temporary expert advisory group. There’s also a focus on helping the Australian Communications and Media Authority, ACMA, combat misinformation; reviewing the Online Safety Act 2021; developing a regulatory framework for automated vehicles, an entirely different field of artificial intelligence; and teaching the responsible use of generative AI in Australian schools. The report also mentions that a number of different areas of law will be assessed for gaps regarding the use of AI, in particular copyright and broader intellectual property law, privacy law, and competition and consumer law. We’ve spoken a number of times on this show about how copyright law, patent law, and other areas of intellectual property need to catch up to some of these developments in artificial intelligence. Now, as I’ve already hinted, the government’s report discusses at length Australia’s commitment to working collaboratively with its international partners to mitigate AI risks. And I suppose this is especially important given that some of these risks, and some of these AI products and use cases, aren’t present in Australia yet. In November 2023, Australia participated in the first global AI Safety Summit and signed the Bletchley Declaration alongside the European Union and 27 other signatory states. The Bletchley Declaration affirms a commitment to developing and using AI in a way that is safe and responsible, as well as being proactive in identifying and reducing potential risks that might arise. And this interim report, I suppose, is a step in the right direction. At the summit, Australia also agreed to collaborate on the international State of the Science report on frontier AI. Australia has put forward CSIRO Chief Scientist Bronwyn Fox, who will sit on the expert advisory panel overseeing that international report. Once published, we’re hoping that this report will summarise some of the most up-to-date research available on artificial intelligence, and hopefully provide the public with a better understanding of what the current state of artificial intelligence technology looks like. This kind of international, intergovernmental report on the current state of AI technology is looking increasingly important, as the frontier companies leading the charge in developing cutting-edge AI technology, like OpenAI, become more and more closed and secretive about their innovations as competition in the space heats up. The report also touches on some of the steps the Australian government has already taken to support the adoption of AI in Australia and by Australian businesses. It mentions the 75 million dollars of funding for AI initiatives included in the 2023-2024 budget, which was designed to help small and medium-sized businesses adopt AI, and the expansion of the National AI Centre.
The report also notes that the government is going to consider possible opportunities to invest more money in the development of AI in Australia in the future, rather than just adopting AI technologies developed overseas. Now, if you’d like to read the full interim report, we’ll include a link to it in the show notes. |
00:17:59 | DT: | Okay, let’s talk about the interim response which is informed by this enormous number of consultations. What did that reveal about some of the concerns that Australian businesses and individuals have about AI? What risks are they concerned about? |
00:18:11 | RS: | Yeah, so the risks have been quite well known since the first consultation. General risks around AI include bias, which could then eventuate into discrimination. You have issues around, let’s say, IP, intellectual property, and also around data and privacy, like what the AI was trained on. I think what this recent consultation also focused on was the more novel challenges from generative AI, particularly around misinformation or disinformation, hallucination risks, and also IP and data issues, but from the output side. So previously, traditional AI was more concerned with the input side, whether you had the relevant rights and licences to use the data. But with generative AI, because you can create outputs, there are also issues on the output side. So for example, whether an artwork produced by an AI infringes existing copyright, or whether you can even own IP rights in that output. So, some of these novel issues from generative AI have been added onto the existing thoughts that we saw last time. |
00:19:14 | DT: | Yeah, for some of our listeners who might be out of the loop, this issue around the ownership of AI-generated artworks, copyright, even patentable inventions, arises from the fact that these works have no human author, and a consistent theme in the intellectual property law of the Western tradition has centred on the idea of a human author. We saw the DABUS decision a few years ago, holding that an inventor must be human, and so an AI could not be named as the inventor for the purposes of a patent, and there have been similar decisions more recently holding that copyright cannot vest in AI-generated works, at least under the law as it stands today. So, those are some of the risks, and I guess some of them are not unique to generative AI. Some of them have been around since, if you like, the predictive AI boom of the mid-2010s. But as you say, these risks of misinformation, disinformation, hallucination, and ownership of output have been added as a bit of a gloss arising from the generative AI boom over the last couple of years. As we said at the top of the episode, the interim paper doesn’t really suggest anything concrete in terms of changes to regulation. That might be for a good reason. You mentioned that the government wants to see if our existing laws can be amended or even reinterpreted to address the existing risks or concerns about AI. Tell me a bit more about that. |
00:20:37 | RS: | Yeah, so we’re in a long-term phase right now. We have some laws going under review, notably the Privacy Act. The review report on the Privacy Act was actually released last year, but I’ve yet to see new privacy amendments being drafted, so you have the privacy review going on. The government has also put together an expert group on the copyright law issue, so I think there’ll be a Copyright Act review going on in the process. So, there are a bunch of piecemeal initiatives happening in parallel, some of which may have started since the interim response on AI. They’re very sector-specific, driven by the relevant department. So there are these parallel initiatives going on right now. |
00:21:23 | DT: | What do you think about this approach of responding to the risks of artificial intelligence in this, as you say, sector-specific or issue-specific way? Are the risks that were identified by parties making submissions to the consultation process the sorts of risks that can be addressed in this siloed, here’s-the-copyright-risks, here’s-the-privacy-risks sort of way, or do they really need a sui generis response? |
00:21:50 | RS: | Yeah, that’s the big question, right? And that’s really the key point that differentiates different markets from each other. It really comes down to the market’s current circumstances, right? So generally speaking, if you have an existing law that already covers a certain risk and you only need to widen its scope or clarify its language, then obviously that’s the path of least resistance, right? You try to amend the existing law. With the question of whether we need AI-specific regulation, deep down the hidden question is: what is the unregulated gap? Is there a big enough space that’s unregulated that requires AI regulation? And if you look at Australia’s circumstances, we’ve already got a pretty robust legal system that covers online safety, privacy, intellectual property, consumer law. Whether or not these laws are future-proof, that’s a different question. But we do have all of these laws covering a lot of the ground. So, in Australia’s case, looking at existing laws is probably the more appropriate response. Other jurisdictions don’t necessarily have the same pieces and components. For example, the EU historically has been pretty tech-specific in its approach. So, in addition to this recent AI Act, they’ve also got a new law on digital services. They’ve been targeting areas of technology in an area-by-area focus. And in a similar boat to Australia is the US. The US has also got a bunch of existing laws, their regulators are doing their own thing, and at a state level too. So, I guess the short answer is there is no one correct answer here. You really have to understand what market you’re in and how the issues are currently regulated, and then work from there. So, AI-specific regulation is not always going to be the right response. |
00:23:40 | DT: | We should say this is an interim response as the Department conducts further investigations; that’s the Department of Industry, Science and Resources. It may well form a different view, but what it’s looking like is that the Australian government will come out in a very different place to the European Union, in that it will seek to regulate AI in this issue-specific way of adapting or extending existing bodies of law, rather than responding to AI as its own discrete issue. I suppose in a similar way to the way Australia has addressed risks in social media, for example, by extending the privacy law and extending the criminal code in ways that address specific harms. Who do you think’s right, the EU or Australia? |
00:24:24 | RS: | Let’s go back to first principles, right? So many people misconceive AI as a single technology. AI, in my view, is a field of science that studies how you can automate intelligence, and that field of science is then embedded into so many different applications. So, the face filter on your phone uses the exact same technology as CCTV cameras that try to identify a suspect at a crime scene. AI is just a field of science, right? And it’s really hard to regulate a field of science as a whole. We don’t see governments saying let’s regulate science, let’s regulate maths. That’s the key challenge. |
00:25:03 | DT: | That’s right, let’s pass an act on chemistry. |
00:25:06 | RS: | Exactly. So, this is where, with AI-specific regulation, the scope of the regulation is one of the most challenging questions, because you’re trying to deal with an area that’s always changing and always expanding, which is one of the reasons why the EU Act has gone through three or four years of negotiations and the definition of AI has changed multiple times. |
00:25:27 | DT: | Yeah, we actually spoke the last time you were on the show about the real definitional challenges in regulating AI as its own body of law because I don’t think there’s really even agreement amongst AI researchers on the definition of AI. |
00:25:41 | RS: | Exactly, and so this is why the approach of adapting existing laws might make more sense, because existing laws, by design, are meant to be future-proof. Well, hopefully they’re future-proof, and they target risks that apply within that area of law. So, it doesn’t matter if the harm is caused by a human, a car, an AI, whatever; as long as the risk is there and satisfies the elements of that law, then that law will apply. So, that’s the thinking behind tech-neutral, tech-agnostic laws. It’s really about future-proofness. That’s really what the core question is: future-proofness, and also scope. Is the scope manageable? And is the scope something that can be defined? Going to the EU and Australia approaches, it’s also not just a matter of law. There’s a way bigger picture behind AI regulation: the culture, the economics, the political factors. So, if I were to take the big picture, one of the core differences between the EU and Australia is that Australia is more of an AI importer. Within the whole AI supply chain, most Australian businesses sit on the end-use side, whereas the EU has some strong powerhouse economies, like Germany and France, that are actually producing AI systems. So, they sit on the manufacturing end. |
00:26:55 | DT: | Hugging Face; a notable company with French founders developing open-source platforms. |
00:26:59 | RS: | And also Mistral. And you also have the UK, which is outside the EU now, but which also has strong manufacturing capabilities in the AI space. That context is important, because one of the key benefits of having AI-specific regulation is really around regulating the development of AI. That’s where your transparency obligations come in, whether you need to disclose any copyrighted materials, cybersecurity requirements on AI; they’re more relevant to developers, not so much users. Now, if Australia’s AI development sector is not as big, then there are questions as to whether we really need AI-specific regulations. On the use side, that’s where existing laws might have better relevance or better focus in targeting use. And also, in the interim response paper, one of the government’s key focuses is to have a balanced and proportionate response, those are the words they use, to make sure that whatever regulation we adopt doesn’t stifle innovation. And Australia, as part of its digital economy strategy and its whole AI strategy, really wants to strengthen its manufacturing side, especially in AI. And so, Australia needs to take that economic consideration into account. |
00:28:05 | DT: | Yeah, well, I’m glad you mentioned the innovation side of things and talked a bit about the risks and how we might regulate them. But as you said, the government’s trying to strike a balance between regulation and innovation, and I imagine a lot of those 500 consultation submissions were talking about supporting the Australian technology sector to innovate rather than guarding against harms. We should say, maybe Australia is at the moment a net importer of AI technology, but we do punch above our weight on the international stage. I think Airtree just recently released its list of Australian technology startups worth over a hundred million US and it’s a long list. So, tell me a little bit about the innovation side of this consultation. What are some of the submissions saying about what the government needs to do to promote innovation in the AI space in Australia? |
00:28:51 | RS: | A key question in the innovation side is the copyright question. |
00:28:56 | DT: | Well, Japan is an interesting example there, right? Because I think last year the Japanese government effectively issued some guidance suggesting that AI companies operating in Japan would be shielded from the threat of copyright infringement litigation in respect of any copyrighted materials used in training data, right? |
00:29:16 | RS: | Yeah, exactly. So copyright, I like to call it a frontier, or a proxy battle, in this innovation versus regulation debate. And it’s interesting, because the views you see really depend on the perspective of whoever’s raising them. So, one key question is: should AI developers have a general legal right to use copyrighted material to train AI systems, on the basis that, if the AI system promotes benefits or the public good, then they should have that right? So that’s the key question; is this allowed on the developer side? That’s where the tech companies come in, and also the science and research bodies. They’re more of the view that copyright regulation should be a bit more relaxed when it comes to AI training. Whereas on the other end, you have the creator side, where the media organisations, artists, and other media-based organisations come in. They’re on the heavier, stricter side: either you restrict AI training on copyrighted material, or, if you do want to allow it, there should be some sort of consent regime or some sort of royalties regime around it. So, that’s an example of where you see that innovation versus regulation debate manifested in a particular legal issue, and the views depend on the perspective: which side of the AI supply chain are you on? Which is why, again, the economics is such an important factor in this whole debate. |
00:30:36 | DT: | Yeah, and we’re seeing some of those issues play out in the courts rather than in the legislature. OpenAI is the defendant in a few copyright cases at the moment. |
00:30:43 | RS: | And that’s another big thing, just going back to the whole AI-specific regulation versus existing laws debate. Another fundamental difference between Australia and the EU is that Australia is a common law system, whereas the EU is a civil law system. A civil law system requires its laws to be detailed in full on paper, because its court system doesn’t have as large a role as in a common law system. So, they need to make sure whatever statutes they have cover every ground. Whereas in a common law system like Australia or the UK, we can afford to have gaps in the legislature’s drafting and have the court system fill in those gaps through interpretation. So, that’s another factor in whether we should have specific laws or just use existing laws. The common law system plays a big role in that. |
00:31:27 | DT: | Yeah, absolutely. So, in terms of what the government’s seeking to do to foster innovation, what were some of the suggestions in the submissions, or at least in the interim response? What are some of the things we might expect to see coming in a final response? |
00:31:40 | RS: | It seems to be a staggered process. The interim response, as the name suggests, is only an interim response. So, the actions proposed are really for the interim phase, really targeting the high-risk areas. Actually, the interim response didn’t define what high-risk AI means. So, it’s still up to interpretation, but they did reference how the EU has defined high risk, and it’s really just AI that poses a significant risk of harm to life, society, and health. And then, there are some sectors considered high risk; for example, use of AI in law enforcement, in the workplace, or in the courts and judicial system, stuff like that. So, we could adopt similar thinking, but the thing about Australia is that the government seems to be more focused on voluntary guardrails and guidelines first. You might already know this, but the Australian government has tasked the National AI Centre with developing voluntary standards around AI safety, and there’s also consideration of creating guidelines, whether voluntary or mandatory, around watermarking. So, this goes back to the disinformation and misinformation issue. One popular thought is to require AI-generated outputs to be labelled as AI generated. But whether that label is a requirement or a voluntary thing is still being considered by the government. But again, they’re just interim steps. The government hasn’t yet given a definitive position as to whether it’ll have an AI act or just update existing laws, so I’m still waiting for the final response, yeah. |
00:33:08 | DT: | So I guess the interim response lays out two things. It sets out some guiding principles for potential future regulation, and it proposes a temporary expert advisory body. Let’s deal with the first part first. What are these guiding principles? |
00:33:22 | RS: | Yes. I briefly mentioned these already: the risk-based approach, balanced and proportionate, and also helping Australia improve its international standing in AI. These are the main three. I think there are a bunch more, but they’re all sub-principles of those main three. So what we discussed before basically covers those principles. |
00:33:40 | DT: | So, the last of those three guiding principles, improving Australia’s standing internationally, is addressing one of the issues that you described before that feeds into why Australia’s regulatory approach might be different, which is that at the moment, we’re kind of a net importer. We’re more concerned about being the users of AI systems developed in the United States or in the European Union. This third guiding principle is talking about us becoming more of a producer of artificial intelligence systems or perhaps some of the goods and services that are part of the supply chain for artificial intelligence. So, tell me a bit more about how that guiding principle feeds into the government’s proposed approach. What are they looking to do to support Australia to become a more significant player in the AI supply chain? |
00:34:25 | RS: | Yeah, so that goal is not solely a regulatory goal, it’s a broader economic goal, right? And so again, the big picture is important, right? Because there’s government budget funding within the technology space, including AI, and so you have all these initiatives going on to promote innovation in that space. What the regulatory framework has to do is make sure it doesn’t cut across these innovation programs or frameworks. The law itself can’t cause innovation. Law can only provide an environment for innovation. So, that’s basically what that principle is getting at: making sure that Australia’s approach creates an environment that fosters these other budgetary, economic, and practical initiatives around innovation. But there’s a limit to what the law can do. |
00:35:07 | DT: | Let’s talk about that second proposal in the interim response, a temporary expert advisory body. This body would be part of the Department of Industry, Science and Resources, and would advise the government on options for regulating AI. Now, a lot of the submissions were talking about establishing a permanent body. So, this is more of a hedge-your-bets, sit-on-the-fence kind of approach. What do you think about a temporary as opposed to a permanent body? |
00:35:30 | RS: | It depends on what they do, right? If it’s just an advisory body, well, advisory bodies are often temporary because they just advise on an ad hoc basis on a specific issue. Once the issue has been solved or answered, you don’t really need that advisory body any longer, unless the same issue comes up. I think permanency, in my view, often gets conflated with having a regulatory body. Advisory bodies are different from regulatory bodies, right? The advisory body is just there to advise. Regulatory bodies, that’s where you need some permanency. You need an enforcement body that enforces the law or the framework. So, I think that’s where permanency is more relevant. But advisory bodies, expert committees, et cetera, I think by nature they’re meant to be temporary, or just on an ad hoc basis. |
00:36:15 | DT: | Yeah, yeah. To me it makes sense that the advisory body is temporary, at least for now. If the government doesn’t yet know the approach it’s going to take to regulating AI-related issues, it makes sense not to have too much set in stone. As you say, advisory bodies have a very different role than regulatory bodies; we’re not talking about having the Turing Police from Neuromancer. So, it makes sense to me, and I suppose it’s in keeping with the government’s developing position on artificial intelligence and its regulation. On the theme of advisory bodies and a kind of opt-in, voluntary approach, not setting too much in stone in terms of prohibitions just yet, the interim response also suggests expanding the remit of the National AI Centre, and having the Centre develop a voluntary risk-based safety framework for Australian businesses on the responsible adoption of AI and AI tools. What’s the National AI Centre, to start with? |
00:37:11 | RS: | Yeah, the National AI Centre is basically an organisation within the government that’s responsible for developing AI-related standards and for being the champion for AI innovation within Australia. I think they’re analogous to NIST, the National Institute of Standards and Technology in the US, but the National AI Centre is not the same as the AI Safety Institutes that we see in the US and the UK. The National AI Centre has a pretty flexible scope and profile in this space; they have a pretty flexible remit in what they do for now. |
00:37:48 | DT: | They’re part of the CSIRO, right? I suppose, if you think about what the CSIRO does for Australia, both promoting innovation and doing its own research, the National AI Centre is doing that in this particular field. And what would this voluntary framework look like? I guess that’s almost advice to the public or to Australian businesses from the National AI Centre on some of these risks and how to mitigate them, but not rising to the level of regulation. |
00:38:16 | RS: | Just based on previous examples, we’ve got the Commonwealth AI Ethics Principles. I think the framework will be largely based on those, probably just a more detailed version of those ethics principles. A potential end state could look something like the New South Wales AI Assurance Framework. That’s actually an often overlooked framework in Australia. The New South Wales government actually has a pretty detailed framework, or toolkit, on AI, and I know it’s been updated to account for generative AI. It’s very detailed; it’s got a lot of good checklists in there, a lot of considerations for businesses to think about. It’s actually mandatory for New South Wales government agencies and optional for businesses. But I can see whatever the National AI Centre is developing having a similar feel to that. |
00:39:00 | DT: | Kind of at a Commonwealth level. |
00:39:02 | RS: | Yeah. |
00:39:03 | DT: | Tell me a bit about the New South Wales toolkit. I’ve got to say, we at Lext develop AI tools, and I didn’t actually know it existed. |
00:39:09 | RS: | Yeah, so, just a caveat, the New South Wales AI Assurance Framework is only binding on New South Wales public agencies, and only for certain projects. So, there are criteria; if the project has a particular risk profile, then that’s where the framework applies. Other than that, it’s all voluntary, and it’s based on the five ethics principles within New South Wales around fairness, privacy, security, accountability, et cetera. So, all the usual ethics that we see across the world. Then, for each ethics principle, they build out a checklist, a very specific checklist, ranging from technical steps, like data processing and how you deploy an AI, all the way to governance. For example, do you have an AI officer, do you have a review committee, do you have a board to manage and monitor AI systems? So, it’s pretty comprehensive. It’s one of the more comprehensive frameworks out there in the world. |
00:40:00 | DT: | So it starts from a high level ethical principles standpoint, and then gives some practical tips, checklists on how to implement an AI system, perhaps even a decision making system in government that complies with those ethical principles. |
00:40:15 | RS: | And it’s based on a tally system. So the considerations aren’t just words for you to read. There’s actually a matrix where you can assess your system against each consideration. Then you add up your points, and your total gives you the risk profile of your AI system, and each risk profile has recommendations as to what you can do to mitigate and control risk. It’s a self-help thing. |
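To illustrate the tally mechanics Ray describes, here is a toy sketch in Python: score each consideration, sum the points, and map the total to a risk profile with a recommendation. Every consideration name, weight, and threshold below is invented for illustration; the NSW framework’s actual matrix uses its own criteria and is far more detailed.

```python
# Toy tally-based self-assessment: each consideration carries points, the sum
# is mapped to a risk profile, and each profile has a recommendation.
# All names, weights and thresholds here are hypothetical, not the NSW values.
ASSESSMENT = {
    "handles_personal_data": 3,
    "fully_automated_decision": 4,
    "affects_legal_rights": 5,
    "human_review_in_place": -2,  # mitigations can subtract points
}

PROFILES = [  # (minimum score, profile, illustrative recommendation)
    (8, "high", "escalate to a review committee and appoint an AI officer"),
    (4, "medium", "document mitigations and schedule periodic re-assessment"),
    (0, "low", "proceed with standard monitoring"),
]

def risk_profile(answers: dict[str, bool]) -> tuple[str, str]:
    """Tally the points for every consideration answered 'yes'."""
    score = sum(points for item, points in ASSESSMENT.items() if answers.get(item))
    for threshold, profile, recommendation in PROFILES:
        if score >= threshold:
            return profile, recommendation
    return "low", "proceed with standard monitoring"  # mitigations outweighed risks

print(risk_profile({"handles_personal_data": True, "affects_legal_rights": True}))
# -> ('high', 'escalate to a review committee and appoint an AI officer')
```

The negative weight shows how a mitigation, like human review, can pull a system down a tier, mirroring the self-assessment style Ray mentions.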
00:40:36 | DT: | Yeah, interesting. I mean, well, that does sound like what we might have at the Commonwealth level from the National AI Centre. Well, that’s interesting. As you said, it’s not a mandatory framework, except for certain New South Wales government agencies, but we’ll include a link to the New South Wales framework as some interesting extra reading for our listeners. All right, so what are the next steps in this consultation process? This is the interim response. Plenty more work to do, right? |
00:40:58 | RS: | Yeah, well, the actions we just discussed are pretty much the next steps. We’ll just wait for updates from the National AI Centre, and wait for further updates on what the expert committee has advised government. So it’s really just: stay tuned. |
00:41:13 | DT: | Now you’re tracking the regulation of AI across the globe with your global AI regulation tracker. How are we doing in Australia? Are we on par? Are we behind? We’re behind, right? |
00:41:24 | RS: | Actually, we’re not behind. If I were to use an analogy, and I think I raised this analogy last time: if we’re in a race, everyone’s running, and I think Australia has moved up a few places. I think we’re between the UK and Canada in terms of how similar we are. Just for context, the UK’s position is: we’re not going to go for AI-specific laws, we’re just going to beef up our sector-specific regulators to make sure they enforce a common set of AI principles, which each regulator can adapt and interpret in its own way, fit for its sector, to make sure whatever’s happening in that sector is safe. That’s the UK position; they’re very regulator-focused. I like to call Canada’s position a lighter-weight version of the EU model, because Canada has a draft bill, which is actually being drafted out and considered, to regulate high-risk AI systems. They haven’t regulated the medium risk or the limited risk, it’s only high risk. I see it as a short-form version of the EU model. Australia is in between, because Australia’s interim response says we’ll be considering a risk-based approach, but mostly high risk, and that’s where the Canada analogy comes in. Australia also has this pro-innovation theme, not stifling industry, and potentially leaving it to sector-specific regulators and existing laws to cover the ground. That’s analogous to the UK approach, so I see Australia between the UK and Canada models. |
00:42:52 | DT: | Interesting. Okay, so not at the back of the pack. |
00:42:54 | RS: | Not at the back, no. |
00:42:55 | DT: | Fantastic, and I suppose no one’s won the race yet, right? We’re all developing. |
00:42:59 | RS: | Yeah. I’d actually argue that there is no longer a race now. There’s a distinct fork between the EU, China, and, I’d say, the UK and US. The EU regulates everything, all AI systems, based on risk tiers. Some might not end up regulated, but everything’s captured under that one act. China regulates only specific applications. The UK and US are still on the patchwork-of-laws-and-regulators approach. Last time, we talked about the race when everyone was still thinking along the same lines. Now, it’s a three-way fork and you choose your path. So I find that very interesting now. |
00:43:37 | DT: | Yeah, very interesting. And I guess we’re yet to see how effective those different approaches to regulation are because we’re still so early in seeing the practical impact of these regulatory decisions. |
00:43:47 | RS: | And also, another driving factor is international law around AI. We don’t have one right now, but in the last two weeks the United Nations General Assembly passed a resolution on AI. It was drafted by the US, so it reflects pretty US-centric thinking, but it mainly just restates some of the AI ethics we’ve seen, and encourages governments to enact laws that align with them. So perhaps one way that countries could get a speed boost in this space is to take whatever international framework is out there, ratify it, and mirror it in domestic legislation; that could be a quick, streamlined way to get AI-specific regulation. But again, this is all subject to, as I said, economic and political considerations. TIP: Now, Ray’s just referred to a resolution adopted by the United Nations General Assembly on the 21st of March 2024, a resolution to promote safe, secure, and trustworthy artificial intelligence systems for sustainable development. The resolution, as Ray said, was sponsored by the United States and co-sponsored by another 123 countries. It was adopted by consensus, which means it had the support of all 193 UN member states. We’ll include a link to the full text of this resolution in the show notes. The one big thing about AI regulation is that, unlike other fields, it’s a field where the private sector has a huge role. It’s also a field where, if you don’t do it properly, you run into trouble on both sides: if you under-regulate, there are a lot of risks, people play around with it, and a lot of harm happens; if you regulate too much, because AI is embedded in so many systems, you basically kill the economy. I know it’s the same sort of thinking for other technologies, but AI operates at a larger, wider scale, and so lobbying influences play a huge role here. And as I said before, creators versus developers, you have a huge debate between those two, and it’s tricky. I think, to this stage, whenever a government body or court has had to deal with an issue that involves creator versus developer rights, 90 per cent of the time it’s been a deferral answer. For example, it’s either “this question is out of our scope, so we won’t answer it”, or, if they do have to answer it, the answer is made subject to market practice or market standards. And it’s been that sort of deferral, deferral, deferral. Maybe this pattern will continue until the market, or the international scene, has a really defined answer and everyone just follows it. Or it’ll just be a pattern of deferring until one side gains more strength than the other, or one side’s interests become more important, and they will then drive laws around that. Practically and realistically, that’s how I see things playing out across the world. |
00:46:35 | DT: | Well, Ray, we’re nearly out of time before we let you go. I’ve got to ask you how you use AI in your own life and in your own professional work. You’re investigating the regulatory approach to AI in your spare time. You’re working as a technology lawyer during your working day. How do you use AI? |
00:46:52 | RS: | Yeah, so for work, I find AI has been really helpful for fixing the tone of my emails. Sometimes you want to write an email and it’s hard to find the right words. I find AI really helpful in that field. Also, I’m really excited for the co-pilot stuff, especially a co-pilot that can mark up contracts. That’ll be so fun. On the personal side, again, as I said, I’m building my own AI regulation tracker platform, and I incorporate a lot of AI features in there, whether it’s leveraging language models or developing my own regression analysis AI. Yeah, just implementing AI where the use case or the pain point I’m dealing with requires some sort of automated intelligence; then I’ll try to leverage AI functions. If not, I’ll just use standard algorithms or whatever gets the job done. That’s my philosophy: whatever gets the job done, I’ll just use it. |
00:47:44 | DT: | Is there any part of your practice or personal life that you won’t use AI for? |
00:47:49 | RS: | Maybe just friends. I think I’ll still keep human friends. |
00:47:53 | DT: | I thought you meant writing to friends. But replacing your friends with AI, it’s good to know you won’t be doing that. |
00:47:57 | RS: | Yeah, maybe I over-interpreted that question. But again, if the use case requires AI, I’ll just use it for that. I don’t have a blanket view, like I must use AI for this, I shouldn’t use AI for that. It’s whether it gets the job done, that’s my thinking. Today, we’re talking about AI. In a few years, we’ll probably be talking about something else. I feel like that’s how tech works in general. So, AI is eventually going to become one of those utilities, part of the infrastructure that’s in everything, just like the internet. |
00:48:26 | DT: | Well, we don’t talk about software being cloud based. Everything’s cloud based. We take that for granted. And I suppose the AI layer of our digital lives will maybe be taken for granted in the next ten years. |
00:48:39 | RS: | Exactly, and probably the next thing we’re all going to be talking about is, I don’t know, the metaverse, or the second wave of blockchain tech, something like that. So, it’s just taking a holistic view of things. |
00:48:50 | DT: | Well, Ray Sun, thank you so much for joining me again on Hearsay the Legal Podcast. |
00:48:53 | RS: | Yeah, thanks David. Thanks for having me again. |
00:49:05 | DT: | As always, you’ve been listening to Hearsay the Legal Podcast. I’d like to thank my guest today, Ray Sun from Herbert Smith Freehills, for coming on the show. Now, we talked about the regulation of AI in this episode, and Ray has another episode on this topic that you may want to listen to. But if you’re interested in AI, though not from a regulatory perspective, you might also want to listen to my conversations with Dominic Woolrych from Lawpath or Jack Newton from Clio about how their companies use AI and how you can too in your practice. If you’re an Australian legal practitioner, you can claim one continuing professional development point for listening to this episode. As you know, whether an activity entitles you to claim a CPD unit is self-assessed, but we suggest this episode entitles you to claim a substantive law point. More information on claiming and tracking your points on Hearsay can be found on our website. Hearsay the Legal Podcast is brought to you by Lext Australia, a legal innovation company that makes the law easier to access and easier to practise, and that includes your CPD. Before you go, I’d like to ask you a favour, listeners. If you like Hearsay the Legal Podcast, please leave us a Google review. It helps other listeners to find us, and that helps keep us in business. Thanks for listening, and I’ll see you on the next episode of Hearsay. |