Women have long faced significant obstacles, including bias, stereotypes, and a lack of representation. These obstacles translate into lost opportunities, a lack of role models and mentors, and a complex, if not hostile, work environment. The same biases also find their way into AI tools and large language models trained on real-world, biased data. The EU AI Act encourages innovation and collaboration, but does it have mechanisms to minimise bias in AI?
This blog is brought to you by Nebahat Arslan (Nebs), our newly appointed Chief Partnership Officer!
Nebs is the Group General Counsel and Compliance Officer at Goodnotes, the digital note-taking app. In September 2024, she was elected Chief Partnership Officer at Women in AI, after many years of collaboration with the global legal team and the WAI partnership team. She also holds significant roles in the World Economic Forum's AI Governance Alliance and is a member of the EU AI Alliance.
Here is what she has to say about the upcoming implementation of the AI Act in Europe and its impact on diversity and women in business.
Why is it essential for developers and providers of AI tools to ensure diversity in their development cycle?
Understanding bias in AI requires recognising that biases do not arise suddenly.
Women face significant obstacles, including bias, stereotypes, and a lack of representation. The growing popularity of large language models in AI systems only magnifies these obstacles, raising concerns that built-in biases could deepen existing discrimination against women and minorities. Biases, both implicit and explicit, already shape hiring practices and workplace culture, often resulting in women being underpaid and underrepresented in leadership roles. Stereotypes that question women's technical abilities and suitability for STEM fields discourage many from pursuing careers in these areas. The predominantly male tech industry also perpetuates a cycle in which young women lack role models and mentors. These challenges create an unwelcoming and difficult environment for women to navigate and succeed in, with social ramifications when the same biases are carried into the technologies being built.
Developers and providers of AI tools need to prioritise diversity in their development cycle to reduce algorithmic bias and promote fairness. By including a diverse group in the development process, teams can tackle biases like gender bias, racial prejudice, and age discrimination that might be unintentionally built into AI systems. Robust governance mechanisms are also needed to monitor and address potential biases, which further underlines the importance of diversity within AI development teams.
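To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of check such a governance process might run: measuring the gap in selection rates between groups in a model's outputs. The function names, data, and any review threshold are purely illustrative assumptions, not something prescribed by the EU AI Act.

```python
# Illustrative bias-monitoring check: compare selection rates across groups
# for a (hypothetical) hiring model's outputs. All names and data are made up.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the fraction of positive predictions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = shortlisted) and self-reported gender.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
gender = ["F", "F", "M", "M", "F", "M", "F", "M", "M", "F"]

print("Selection rates:", selection_rates(preds, gender))
print(f"Demographic parity gap: {demographic_parity_gap(preds, gender):.2f}")
# A large gap would be flagged for human review under the governance process.
```

A check like this is only one signal among many; in practice, governance teams would combine several fairness metrics with qualitative review by a diverse group of stakeholders.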
Another significant challenge is the lack of evidence about gender diversity in the AI and data science workforce: the available data is fragmented and inadequate, and when datasets do exist, they often rely on commercial data produced through proprietary analyses and methodologies.
The EU AI Act encourages innovation and collaboration within the open data ecosystem by establishing clear guidelines and standards. This framework fosters the development and deployment of AI systems safely and reliably, promoting greater collaboration among various stakeholders in the open data community and leading to more diverse and inclusive AI development.
The EU AI Act also introduces provisions for companies to test new technologies under regulatory supervision in a sandbox environment, allowing for controlled experimentation and innovation. This helps ensure that diverse perspectives and use cases are considered in AI development.
Furthermore, the EU AI Act aligns with sector-specific regulations. It imposes specific requirements on high-risk AI systems, especially those used in sensitive areas such as education and healthcare. This alignment ensures that AI systems are developed and deployed in a way that upholds diversity and non-discrimination principles.
How can we ensure that developers and providers are held accountable while leaving room for improvement and ensuring that humans remain in charge of these AI agents?
It is crucial to establish effective mechanisms and institutions that will enforce accountability for AI providers and developers who do not meet the requirements outlined in the EU AI Act. Checks and balances must ensure that a diverse group of stakeholders is involved in developing AI tools, and robust governance mechanisms are necessary to monitor and address potential biases. Above all, AI systems and tools should complement human involvement rather than replace it, and the EU AI Act should hold the providers and developers of AI systems liable for achieving this goal.
Collaborate with us!
As always, we are grateful for you taking the time to read our blog post.
If you want to share news relevant to those who read us from all corners of the worldwide WAI community, and you have an appropriate background working in the field of AI and law, reach out to Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or via e-mail silvia@womeninai.co ) or to Dina Blikshteyn for the opportunity to be featured in our WAI Legal insights Blog.
Silvia A. Carretta and Dina Blikshteyn
- Editors