

Episode 93

AI’s Legislation Lexicon: Regulating the Benefits, Pitfalls and Risks of the AI Frontier

Law as stated: 21 July 2023. This episode was published and is accurate as at this date.
Ray Sun, AKA techie_ray, joins an inspired David Turner in Curiosity over a shared love of artificial intelligence and the law. Touching on the state of AI regulation the world over, the pair dive deep into the geekier end of law and tech.
Professional Skills
Substantive Law
Raymond Sun
Techie Ray
1 hour = 1 CPD point
What area(s) of law does this episode consider?
The regulation of AI: current and emerging.
Why is this topic relevant?
Artificial intelligence is currently revolutionising entire industries and reshaping the way we live and work. The next few years will determine whether AI becomes a transformative force like the printing press or steam engine, or if it falls into relative obscurity like MiniDisc or certain cryptocurrencies.

Alongside the potential benefits of the technology, there is a growing recognition that AI carries inherent risks that must – or should – be addressed through regulation. Understanding and preparing for the future of AI and the law is a key skill for a modern lawyer.

What are the main points?
Regulation

  • In Ray’s view there are three broad categories of approach to AI regulation internationally.
  • The first is jurisdictions that are proposing – or have already enacted – direct regulation governing AI. For example, China.
  • The second is jurisdictions that have ethical frameworks, national strategies, or white papers on AI, but do not yet have direct AI regulation. For example, Australia.
  • This category also includes the United States, where there is ongoing debate about the need for federal legislation on AI and how it should be regulated.
  • The third is jurisdictions that have either taken no position on AI at all, or have openly decided not to regulate it. For example, India.
  • The EU has proposed a comprehensive tiered regulatory framework for AI based on risk.
  • The EU has been working on a draft AI bill for almost three years, and the bill was recently approved by the EU Parliament. It divides AI applications into risk categories, banning certain applications outright and imposing strict restrictions on high-risk ones.
  • South Korea and Brazil have followed a similar approach.
  • The first notable category in the EU approach is AI that is banned outright – applications like social credit systems, or those with a real and dangerous effect on human rights or livelihoods.
  • Some jurisdictions, like Japan, have made policy decisions prioritising innovation over copyright protection in artificial intelligence.
  • In Ray’s view, there is no one right approach to AI regulation as it depends on each country’s economic, legal, and political circumstances.

Human-like intelligence?

  • The recent focus on generative AI has prompted some jurisdictions to catch up with regulation that was long overdue.
  • However, there is a misconception that artificial intelligence is a path towards human-like intelligence, rather than something new in its own right.
  • AI applications, such as content recommendation algorithms, are designed for specific tasks and do not necessarily replicate human traits.
  • As AI development progresses, there will likely be more specific applications rather than artificial general intelligence.

Risks

  • There are risks associated with AI, including misinformation and the hallucinations generated by chatbots.
  • These risks arise because the language models behind chatbots predict words statistically, without comprehending their substance.
  • There are legal risks too, such as defamation resulting from misinformation generated by AI tools, as well as privacy and confidentiality breaches when sensitive information is uploaded to third-party systems.
  • There are also concerns about intellectual property rights and copyright infringement in relation to AI-generated works that resemble original artwork used for training.
  • The outcome of ongoing litigation will have significant implications for the industry and could impact innovation in the space.
What are the practical takeaways?
  • The language of the law is formulaic, and AI can pick up on its patterns and expressions very easily. As a verbose profession, the law is susceptible to developments in AI.
  • It is important to build in human accountability when leveraging AI in legal workflows.
  • However, lawyers and law students can be susceptible to misinformation produced by chatbots due to the convincing nature of legal language.
Show notes
techie_ray