
Navigating the AI Act & AI Governance Risks



By Sonal Makhija


The EU AI Act (AI Act) entered into force on August 1, 2024. Since its adoption, the AI Act has received significant political, academic, and policy attention as the world’s first law regulating artificial intelligence (AI) - with some lauding the European Union (EU) for being the first to regulate AI and others criticising it for potentially stifling innovation. Given the AI Act’s risk-based approach - where legal obligations are proportional to the potential harm an AI system poses to fundamental rights and societal safety - substantial attention is being devoted to the regulatory burdens placed on developers and users of AI systems, raising concerns about the long-term impact on Europe’s AI innovation and global competitiveness.


This article is by Sonal Makhija, an AI & data compliance lead at the H&M Group, where she has led the AI Act implementation. It explores the legal ambiguity that small and medium enterprises face when navigating the AI Act and the resulting need for greater clarity and practical guidance. It also discusses how AI governance risks arise out of everyday AI use cases that typically fall within the limited- to low-risk classifications under the AI Act, yet pose significant security, privacy, and data challenges for organisations, often deterring AI adoption.





AI Innovation & Regulatory Ambiguity

While the AI Act establishes a comprehensive framework for regulating AI, its practical applicability has proved to be a bigger hurdle than the weight of the obligations themselves. Anyone working in the AI space knows that definitions of AI are widely contested. As a result, organisations collaborating with small and medium enterprises (SMEs) developing AI systems struggle to determine whether their products or systems qualify, or could qualify, as AI systems and fall within the ambit of the AI Act. This, in turn, causes uncertainty, often requiring legal guidance and assessment reviews.


Determining the correct risk classification under the AI Act is another source of interpretative confusion for SMEs, especially when assessing whether an AI system qualifies as high-risk. Further uncertainty arises as to whether incorporating human-in-the-loop features into an AI system to encourage human oversight changes its classification and obligations under the AI Act. More than the stringency of the AI Act, interpreting the precise boundaries of its applicability and determining which types of system updates merit re-classification is challenging for SMEs. At the same time, there is limited understanding of what constitutes meaningful human oversight in practice, highlighting the need for further practical guidance.


Yet, as has been pointed out (for more, read here), these issues are not unique to the AI Act. Companies developing AI systems struggle with overlapping EU regulations and with demonstrating compliance across them. This is particularly challenging for SMEs, which must invest in legal guidance and assessments. Deployers and users of AI systems are likewise wary of using them or encouraging their adoption. For global organisations developing or using AI systems, overlapping regulation means monitoring divergent requirements, resulting in slower AI adoption and global roll-outs.


If the simplification agenda under the Digital Omnibus succeeds (for more, read here), it can offer much-needed clarity. By harmonising requirements, clarifying obligations, and offering implementation support, it can reduce legal uncertainty and make compliance ‘possible’ and predictable. This would, in turn, enable SMEs to allocate resources to innovation rather than diverting them to regulatory interpretation. Clear guidance on the AI Act can establish it as a minimum benchmark for organisations developing, deploying, and using AI systems across different jurisdictions, strengthening competitiveness and encouraging AI adoption without undermining fundamental rights or safety.


AI Risks

Beyond determining whether their AI systems qualify as prohibited or high-risk under the AI Act, many organisations face AI risks that do not necessarily trigger strict legal obligations but may still threaten their security, privacy, and business confidentiality. These risks are amplified by freely available generative AI (GenAI) tools that generate text, images, voice, and other content. The ease and accessibility of GenAI tools further heighten AI governance challenges, such as data leakage, intellectual property risks, lack of oversight, and security vulnerabilities.


To create awareness of these risks, it is imperative that organisations focus on AI literacy to socialise the governance risks associated with the use of GenAI tools. The AI Act attaches transparency obligations to AI-generated content - such as mandatory labelling and disclosure - but the organisational risks associated with GenAI tools go beyond transparency concerns.


The availability of GenAI tools and open-source AI models has increased shadow AI usage, making AI governance risks harder to monitor. Shadow AI is the use of non-approved AI tools within organisations. A 2025 Gartner survey indicates that shadow AI is a top concern for organisations and poses significant security and compliance risks. The potential organisational harms extend beyond the risk categories defined in the AI Act. Increasingly, organisational risks arise from the everyday use of “limited” to “low-risk” GenAI tools and AI models - as seen in the breach at Disney, where an employee unknowingly downloaded a malicious AI image-generation tool. The risk will only increase with the emergence of agentic AI, capable of autonomous planning, execution, and decision-making with minimal human oversight. This underscores the need for adequate organisational controls and AI literacy.


Secure & Responsible AI

While the AI Act does not eliminate all risks and vulnerabilities, it offers organisations an opportunity to take a structured approach to AI governance by adopting the EU Trustworthy AI principles as a standard for responsible AI development and deployment, with a focus on AI literacy. Irrespective of how the AI literacy obligation under Article 4 of the AI Act is enforced, organisation-wide AI literacy is imperative for navigating risks and adopting and scaling AI with confidence in an ever-evolving technological and regulatory landscape.


_____________________________________________________________

Collaborate with us!

As always, we appreciate you taking the time to read our blog post.

If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog in 2026! To explore this opportunity, please contact WAI editors Silvia A. Carretta - WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co) or Dina Blikshteyn (dina@womeninai.co).



Silvia A. Carretta and Dina Blikshteyn

- Editors
