

Is the EU on the Cusp of Pioneering AI Regulation?

  • Regulatory & Compliance
  • 4 Mins

Artificial intelligence (AI) is a fascinating tool in the modern world. It can suggest products based on a person’s search history. It can recognize faces to unlock a device. It can help recruiters pick the best candidate to fill a position. It can cull datasets down significantly for a case or investigation. And so much more.

While AI has revolutionized many aspects of business and personal life, many have expressed concerns over inherent bias. Because humans train AI tools before deployment, those tools can absorb human bias and prejudice. If biased results go unchallenged, the technology's decisions become difficult to explain, opening the door to reputational harm and legal liability.

While there is some patchwork regulation in countries like the U.S. and China, no broad AI laws are on the books. The EU has taken a groundbreaking step with the Artificial Intelligence Act, which is currently deep in negotiations. Introduced in April 2021, it has been moving through the legislative process for the past two years.

The European Parliament is expected to vote this spring, and the Act could be formally approved later this year. The U.K. has also released a policy paper on AI that takes a different approach. It is crucial to understand what these laws would change and to start preparing for compliance, as they will set the stage for other countries to follow suit.

The EU’s Artificial Intelligence Act

The EU's AI Act aims to promote transparent and ethical AI usage while safeguarding data. While not yet in final form, it is nearing the end of the legislative process. Here are key features of the proposed law:

  • The definition of AI covers all software developed with machine learning, logic-based, knowledge-based, and/or statistical approaches. Organizations that develop or use such software in the EU would be subject to liability.
  • AI tools would fall into four risk categories: unacceptable, high-risk, limited risk, and minimal risk. Unacceptable systems, such as social scoring by public authorities, would be banned outright. The regulation focuses mainly on the high-risk category, which includes AI used for employment, law enforcement, education, biometric identification, and more.
  • AI providers carry the highest burden. Key obligations would include a conformity assessment before placing a tool on the market; a risk management system that targets bias across design, development, and deployment and carries through the entire usage lifecycle; cybersecurity requirements; recordkeeping; human oversight at every step; quality management; strong AI governance frameworks; and public registration.
  • The term "AI users" would cover individuals and organizations that use AI under their own authority, as opposed to end users; recruitment agencies are a prime example. Their responsibilities would include training with relevant data, monitoring, recordkeeping, data protection impact assessments, and strong AI governance frameworks.
  • Penalties are high: currently up to 30 million euros or six percent of the breaching organization's global annual revenue, whichever is higher.

As the Act continues to move through the process, it is important to take note of any changes or additions. Lawmakers have expressed concerns over how the law will regulate biometrics and whether enough flexibility is embedded to accommodate AI's dynamic nature.

Proposed U.K. Approach

The U.K.'s policy paper on AI governance and regulation came out last summer. It also strives to promote transparency, security, and fairness. However, it departs from the EU regulation in several respects, focusing more on innovation via a sector-based approach.

Put simply, while there would still be standards to follow, each regulator would oversee AI usage in its own sector. This design is meant to avoid over-regulation and to account for differing risks across industries. The U.K. framework would also be technology-agnostic, focusing on outcomes and on whether systems are adaptive and autonomous, since those types of AI are less predictable and carry greater inherent risk.

Although this would give U.K. regulators flexibility, there would be core principles to follow when governing an organization’s AI usage:

  • Ensuring safe AI usage
  • Ensuring the system is technically secure and functions as designed
  • Ensuring transparency and explainability
  • Embedding fairness considerations into the system
  • Designating a legal person as responsible for governance
  • Creating clear protocols around redress and contestability

A white paper further detailing this approach was expected in late 2022 but has not yet been published. When it is released, it should provide further insight into whether the U.K. will move forward with formal regulation, along with a better sense of the timeline.

Next Steps

AI will continue to integrate into society in a multitude of ways as the technology advances. Regulation in this space will help alleviate fears of bias, protect data, encourage innovation, and make decision-making explainable. But will this type of regulation spread globally, as privacy regulation did after the GDPR passed? Will the EU set the global standard, or will there be further movement in the U.K. or in countries like China that have already tested the waters on a smaller scale? Only time will tell. For now, monitoring legislative developments and getting a jumpstart on compliance initiatives is the best way to prepare.

The contents of this article are intended to convey general information only and not to provide legal advice or opinions.
