
NAVIGATING THE EU AI ACT: A STRATEGIC GUIDE FOR LEGAL PROFESSIONALS



The European Union’s Artificial Intelligence Act (EU AI Act) marks a groundbreaking step in global tech regulation, offering the first comprehensive legal framework dedicated to AI. As the Act's obligations phase in and the rules for high-risk systems approach their application dates, legal professionals are at the forefront of ensuring their organizations navigate its complexities effectively. This blog serves as a strategic guide, offering practical insights to help legal teams ensure compliance and manage risk while fostering responsible and trustworthy AI deployment.


This blog is written by Anny Ho, a seasoned attorney specializing in technology, privacy, and data protection law across global industries. She currently leads European data protection and AI governance at an innovative automotive company. As a Board Member of the Data Protection Chapter at the Dutch Association of Company Lawyers, she collaborates with peers to explore emerging trends in technology and data protection law.

 

As the EU AI Act begins to take effect, legal professionals face a pivotal moment in ensuring their organizations' compliance with this landmark legislation, which is designed to balance innovation with safety and ethical use. This guide provides actionable steps for navigating the complex regulatory landscape and building trust in AI systems. By treating the EU AI Act as a strategic advantage rather than a mere compliance burden, organizations can not only minimize legal risks but also position themselves as leaders in responsible AI deployment.


HOW DOES THE EU AI ACT WORK AND WHAT ARE THE MAIN COMPLIANCE REQUIREMENTS?

The EU AI Act adopts a risk-based approach, classifying AI systems according to their potential impact on the health, safety, and fundamental rights of individuals in the EU. It defines four risk categories:

  • Unacceptable Risk AI systems are banned outright. This includes systems such as social scoring, emotion recognition in the workplace, and behavioral manipulation.

  • High-Risk AI systems, which may pose serious risks to health, safety, or fundamental rights, include:

    • AI systems integrated into products that are already regulated because of the risks they pose, such as medical devices, aircraft, and vehicles;

    • AI systems used for essential services, such as credit scoring and health insurance risk assessments;

    • AI systems used for employment and worker management, such as AI tools for recruitment, performance evaluation, or task allocation.

These systems are subject to strict obligations and regulatory oversight to ensure they meet specific safety and ethical standards. These include conformity assessments before the system can be placed on the EU market, a risk management system maintained throughout the AI system’s lifecycle, and human oversight.

  • Limited Risk AI systems, such as chatbots, deepfakes for entertainment, and recommender systems, are subject to transparency requirements. These systems must ensure that users know they are interacting with AI and that AI-generated content is marked as such.

  • Minimal Risk AI systems, such as spam filters and basic video games, are essentially unregulated under the EU AI Act. However, providers are encouraged to maintain transparency by informing users when they are interacting with an AI system.


Additionally, General-Purpose AI (GPAI) models are subject to separate obligations. These include ensuring transparency when AI systems interact with individuals, making summaries of training data publicly available, and respecting EU copyright law. Models posing systemic risk face additional obligations, including adversarial testing, cybersecurity measures, and incident reporting.


One obligation applies across all risk categories: AI literacy. Organizations that develop and/or deploy AI must ensure that their employees possess adequate AI literacy, including an understanding of AI systems' capabilities, limitations, and potential risks.


WHAT ARE THE PENALTIES FOR NON-COMPLIANCE?

Beyond reputational damage and loss of customer trust, non-compliance with the EU AI Act can result in significant penalties:

  • Fines: Organizations may face fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher, for the most serious violations (prohibited AI practices), with lower tiers, such as EUR 15 million or 3%, applying to most other infringements (see the worked sketch after this list).

  • Market Withdrawal: Authorities can order the withdrawal of non-compliant AI systems from the market, which can significantly disrupt business operations.
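
To make the fine structure concrete, the short Python sketch below computes the maximum possible fine for a given company. The two-tier simplification and the EUR 2 billion turnover figure are illustrative assumptions only; the Act contains further tiers, and actual fines are set case by case by the supervisory authorities.

    def fine_cap_eur(annual_turnover_eur: float, prohibited_practice: bool) -> float:
        # Illustrative upper bound on EU AI Act fines, simplified to two tiers:
        # prohibited practices (EUR 35M or 7%) and most other infringements
        # (EUR 15M or 3%). The cap is the HIGHER of the two amounts.
        fixed, pct = (35_000_000, 0.07) if prohibited_practice else (15_000_000, 0.03)
        return max(fixed, pct * annual_turnover_eur)

    # Hypothetical company with EUR 2 billion in global annual turnover:
    print(fine_cap_eur(2_000_000_000, prohibited_practice=True))   # 140000000.0
    print(fine_cap_eur(2_000_000_000, prohibited_practice=False))  # 60000000.0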


WHERE TO START - ACTIONABLE STRATEGIC STEPS FOR IMPLEMENTATION

To navigate the EU AI Act strategically and effectively, legal professionals should take at least the following five steps:

  1. Create an AI inventory containing the details of all AI systems that your organization develops and deploys (a minimal record structure is sketched after this list).

  2. Perform a thorough EU AI Act applicability assessment and risk assessment to evaluate the potential impact of the Act on your organization. Classify all AI systems accurately based on their risk levels, and determine the role(s) of your organization under the Act (e.g., provider, deployer, or importer) and the associated obligations.

  3. Develop and implement a robust AI governance framework and compliance program, encompassing comprehensive policies that ensure transparency, accountability, and human oversight. This program should also include an AI impact assessment questionnaire, along with standard AI-specific clauses to be incorporated into existing service provider template agreements.

  4. Invest in AI literacy: Develop an AI literacy program that provides ongoing training for employees involved in AI development and deployment, enhancing their understanding of AI systems, AI-related risks, and compliance requirements.

  5. Collaborate across departments: Ensure that all relevant departments, such as Legal, Compliance, Strategy, IT, Cybersecurity, and Data, work together to implement the Act's requirements effectively.
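
As a concrete starting point for step 1, the Python sketch below shows one possible shape for an AI inventory record. The field names, enumerations, and example entry are illustrative assumptions, not structures prescribed by the Act.

    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # conformity assessment, risk management, oversight
        LIMITED = "limited"            # transparency obligations
        MINIMAL = "minimal"            # essentially unregulated

    class Role(Enum):
        PROVIDER = "provider"
        DEPLOYER = "deployer"
        IMPORTER = "importer"
        DISTRIBUTOR = "distributor"

    @dataclass
    class AISystemRecord:
        name: str
        purpose: str           # intended use, e.g. shortlisting job applicants
        role: Role             # the organization's role under the Act
        risk_tier: RiskTier
        is_gpai: bool = False  # whether GPAI model obligations also apply
        owner: str = ""        # accountable business owner

    # Example entry: a recruitment tool the organization deploys (not develops),
    # classified as high-risk because it is used for employment decisions.
    inventory = [
        AISystemRecord(
            name="CV Screener",
            purpose="Shortlisting job applicants",
            role=Role.DEPLOYER,
            risk_tier=RiskTier.HIGH,
            owner="HR",
        ),
    ]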


CONCLUSION

The EU AI Act establishes a global precedent for AI regulation, emphasizing trustworthy and human-centric AI. It aims to encourage innovation while safeguarding against potential harms. By understanding this regulation, legal professionals can play a crucial role in helping their organizations navigate the evolving AI landscape: not only ensuring compliance, but also enabling the business to innovate and thrive within legal boundaries.


 

Collaborate with us!


As always, we appreciate you taking the time to read our blog post.

If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog. To explore this opportunity, please contact WAI editors Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co), or Dina Blikshteyn (dina@womeninai.co).


Silvia A. Carretta and Dina Blikshteyn

- Editors


