Mind the Gap: AI Literacy Requirements Under the EU AI Act and the Gender Divide
- WAI CONTENT TEAM

By Ayesha Gulley

As AI technologies are integrated into decision-making processes and workflows, businesses face a critical new task: upskilling their workforce. Since February 2, 2025, the AI literacy requirements under Article 4 of the EU AI Act have applied, meaning businesses must put measures in place for effective AI literacy. At the same time, gender gaps in confidence, access, and technical training continue to limit women’s participation in AI-related fields, raising important questions about inclusion, opportunity, and the future of work.
This article explores how the EU AI Act’s new AI literacy requirements intersect with efforts to upskill women for the future of work, a crucial step toward ensuring equitable participation in an AI-driven economy.
It is written by Ayesha Gulley, an AI Governance and Policy Product Manager at Holistic AI. Her research focuses on AI regulation, fairness, and responsible AI practices. Before joining Holistic AI, Ayesha worked at the Internet Society (ISOC), advising policymakers on the importance of protecting and promoting strong encryption.
Literacy is a legal requirement
Since February 2, 2025, the European Union (EU) AI Act has imposed AI literacy requirements on all businesses that provide or deploy AI systems in the EU, regardless of where they are based. Article 4 requires providers and deployers to ensure their staff have a “sufficient level of AI literacy” to understand, use, and interact with AI responsibly and effectively.
The Act defines AI literacy broadly as the “skills, knowledge, and understanding” needed to deploy AI systems responsibly, recognize opportunities and risks, and identify potential harms (Article 3(56)). This extends beyond technical know-how to include the broader context and implications of AI use across an organization.
Article 4’s requirements are intentionally broad, applying to all businesses in scope, whether they are developing or deploying systems, and regardless of the system’s risk level. In designing their literacy program, organizations must consider three key factors: (1) the technical knowledge, experience, education, and training of staff and others operating AI systems on their behalf; (2) the context in which the AI system will be used; and (3) the persons or groups of persons affected by the system.
AI literacy programs must therefore cast a wide net, reaching not only technical teams but also business units such as sales and marketing, as well as contractors, vendors, clients, and individuals affected by AI systems. This means offering multiple training pathways that address different levels of technical knowledge. For instance, training for a software developer deploying a high-risk recruitment tool will differ greatly from training for a marketing team using GenAI for content creation, yet both are required under Article 4.
The Act does not prescribe specific training methods but instead points to best practices through its living AI repository. This gives organizations flexibility to tailor their approach based on staff roles and technical sophistication, specific AI use cases and deployment contexts, and risk levels and business objectives. There is no one-size-fits-all approach to AI literacy, making this flexibility both an opportunity and a challenge.
While Article 4 does not explicitly require documentation of AI literacy efforts for most systems, providers and deployers of high-risk AI must demonstrate compliance with broader obligations, and literacy practices may fall within that scope. Even where documentation is not mandated, maintaining records of literacy programs, training, and competency assessments is strongly recommended. Doing so supports transparency, accountability, and readiness to demonstrate compliance if questioned by regulators.
Mind the gap
These Article 4 requirements take on new urgency when viewed through the lens of existing gender disparities in AI adoption. Driven by automation, market slowdown, and labor shortages, 80% of leaders agree that employees need new skills in the age of AI. But 60% of organizations report an AI literacy skill gap, which is limiting AI’s potential impact, and this gap is not evenly distributed.
The picture is particularly complex for women. Senior women in technical roles are leading the way and are more likely to use generative AI (GenAI) than their male peers. However, broader patterns reveal significant barriers: globally, women are 7-12% less likely than men to use GenAI at work, and 20% less likely to use ChatGPT within the same occupation, despite being equally optimistic about its productivity benefits. Research also shows that men report significantly higher trust in and understanding of GenAI, further widening existing knowledge gaps.
For organizations attempting to meet Article 4’s requirement of “sufficient” AI literacy, these disparities reveal a critical challenge: baseline knowledge and confidence levels vary significantly by gender, meaning a uniform approach to literacy training is unlikely to achieve compliance across the workforce. Article 4 explicitly requires organizations to account for staff's “technical knowledge, experience, education, and training”, and where nearly half of women lack basic awareness of GenAI, this creates a legal imperative to design targeted, accessible literacy pathways.
At the same time, the implications extend beyond compliance: for women, AI presents a double-edged sword. While AI may eliminate some traditional jobs, it also creates new opportunities. The key lies in acquiring the skills needed to thrive in this evolving landscape. This goes beyond technical knowledge; it requires a deep understanding of AI’s capabilities, limitations, and social and ethical impacts.
Using AI literacy to support broader compliance efforts
Organizations subject to Article 4 of the AI Act are likely also bound by other regulatory frameworks, including those governing content and safety, data protection and privacy, cybersecurity and consumer protection. They’re also increasingly accountable for diversity, equity, and inclusion outcomes, making the intersection of AI literacy and gender equity both a legal and strategic priority.
A common mistake is to treat AI literacy as a standalone checkbox. A more effective approach is to integrate it into existing compliance, risk, and training programs. Organizations should evaluate how their AI systems intersect with current obligations and business goals, and what teams need to know to operate effectively and compliantly. AI literacy should not be confined to IT or legal, but should involve collaboration across departments.
Recommended practices include:
Developing a clear inventory and categorization of all AI systems within the organization.
Collaborating with team leaders to organize cross-functional trainings.
Organizing periodic company-wide sessions or workshop days with senior leadership to discuss priorities, objectives, and direction for AI literacy.
Considering how the proper documentation and facilitation of AI literacy efforts can prepare an organization for potential mergers, acquisitions or audits.
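The inventory and training-tracking recommendations above can be sketched in code. The following is a minimal, hypothetical illustration: all names, the risk tiers, and the `literacy_gaps` helper are assumptions for the sketch, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field

# Hypothetical risk tiers, loosely mirroring the EU AI Act's categories.
RISK_LEVELS = ("minimal", "limited", "high", "prohibited")

@dataclass
class AISystem:
    """One entry in an organization's AI system inventory."""
    name: str
    vendor: str
    risk_level: str  # one of RISK_LEVELS
    business_units: list[str] = field(default_factory=list)  # teams using the system
    trained_staff: set[str] = field(default_factory=set)     # staff who completed literacy training

def literacy_gaps(systems: list[AISystem],
                  staff_by_unit: dict[str, set[str]]) -> dict[str, set[str]]:
    """For each system, return the staff in affected units still awaiting training."""
    gaps = {}
    for s in systems:
        users = set()
        for unit in s.business_units:
            users |= staff_by_unit.get(unit, set())
        gaps[s.name] = users - s.trained_staff
    return gaps

# Example: one high-risk recruitment tool, one staff member trained so far.
staff_by_unit = {"recruiting": {"ana", "ben"}, "marketing": {"cara"}}
systems = [AISystem("cv-screener", "ExampleVendor", "high",
                    ["recruiting"], {"ana"})]
print(literacy_gaps(systems, staff_by_unit))  # {'cv-screener': {'ben'}}
```

A record like this, kept alongside training logs and competency assessments, is also the kind of documentation that supports the audit-readiness point above.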
The bottom line
AI is reshaping how we learn, think, and make decisions. The challenge is not only how we use these tools to innovate, but how we align them strategically with business and societal goals. AI literacy is more than a technical skill; it is a cornerstone of responsible digital transformation. However, not everyone is equally positioned to participate in this shift.
AI literacy shouldn't be an isolated compliance exercise. When embedded in a broader digital responsibility strategy, it becomes both a competitive advantage and a pathway to sustainable, compliant AI adoption. Gender disparities make this even more urgent. Women represent half the workforce and innovation potential, but gaps in confidence, access, and training continue to limit participation.
Building AI literacy among women is essential to ensure full participation in an AI-driven world. By investing in initiatives that strengthen understanding of how AI works, how it informs decision-making, and how it applies to real-world challenges, organizations can foster greater inclusion, innovation, and accountability. Closing the AI gap is not only a gender issue; it is a governance imperative, central to ensuring that technological progress benefits everyone.
Collaborate with us!
As always, we appreciate you taking the time to read our blog post.
If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog. To explore this opportunity, please contact WAI editors Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co), or Dina Blikshteyn, (dina@womeninai.co).
Silvia A. Carretta and Dina Blikshteyn
- Editors



