
AEQUITAS Experimenter: A Tool to Build Fair and Legally Aligned AI Systems


The AEQUITAS Experimenter is now available to help organisations, researchers, and developers assess AI systems for fairness and legal compliance. It guides users through a structured process grounded in socio-legal principles and supported by technical automation.


Bridging Socio-Legal Principles and AI Development

The AEQUITAS Experimenter is a software platform developed as part of the Horizon Europe project AEQUITAS – Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy AI Systems (G.A. 101070363). Its purpose is to make fairness in AI operational by connecting socio-legal requirements with technical development practices.

Built on a flexible meta-methodology, the platform supports the co-creation and deployment of fair-by-design AI systems across different application areas. Fairness is treated as a process throughout the AI lifecycle rather than a single metric or outcome.


Key Features

  • Open-source and transparent: The code is available on GitHub, allowing for audit, review, and extension.

  • Modular and scalable: Designed with clean architecture and domain-driven principles, the platform is Docker-ready and can be integrated into a variety of environments.

  • Live service access: Users can explore the tool at http://aequitas.apice.unibo.it.

  • Context-sensitive guidance: A Question–Answering (Q/A) mechanism guides users through a dynamic flow of questions. Each path is tailored to the user’s answers, helping select appropriate fairness metrics and mitigation strategies.

  • Legal-to-technical translation: The tool translates abstract legal and ethical requirements, such as those in the EU AI Act, into actionable technical steps.

  • Automated fairness operations: Supports metrics such as Statistical Parity Difference and Disparate Impact, and mitigation strategies including Disparate Impact Remover and Learned Fair Representations, applied in response to user input.
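To make the two metrics named above concrete, here is a minimal, self-contained sketch of how Statistical Parity Difference and Disparate Impact are typically computed. This is an illustration only, not the AEQUITAS Experimenter's internal API; the function names, the binary encoding of the protected attribute (1 = privileged, 0 = unprivileged), and the sample data are all hypothetical.

```python
# Hypothetical sketch of the two fairness metrics named above.
# y_pred: binary model predictions; group: protected-attribute membership
# (1 = privileged group, 0 = unprivileged group).

def selection_rate(y_pred, group, value):
    """Fraction of positive predictions within one group."""
    members = [p for p, g in zip(y_pred, group) if g == value]
    return sum(members) / len(members)

def statistical_parity_difference(y_pred, group):
    # SPD = P(y_hat=1 | unprivileged) - P(y_hat=1 | privileged);
    # 0 indicates parity, negative values disadvantage the unprivileged group.
    return selection_rate(y_pred, group, 0) - selection_rate(y_pred, group, 1)

def disparate_impact(y_pred, group):
    # DI = P(y_hat=1 | unprivileged) / P(y_hat=1 | privileged);
    # the common "80% rule" flags ratios below 0.8.
    return selection_rate(y_pred, group, 0) / selection_rate(y_pred, group, 1)

# Toy data: privileged group selected at 0.75, unprivileged at 0.25.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(y_pred, group))  # -0.5
print(disparate_impact(y_pred, group))               # ~0.33
```

Mitigation strategies such as Disparate Impact Remover and Learned Fair Representations then transform the data or its representation so that metrics like these move closer to their fair values (SPD near 0, DI near 1).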


Participatory Design

The Experimenter was developed using a participatory design approach involving legal experts, developers, civil society representatives, and members of underrepresented groups. This ensures that fairness is treated not solely as a technical measure but within broader social and ethical contexts.

Validation sessions emphasised key priorities:

  • Human oversight and transparency in automation

  • Support for intersectional fairness

  • The ability to tailor AI models to specific project contexts rather than applying one-size-fits-all solutions


Access the AEQUITAS Experimenter

The AEQUITAS Experimenter provides a practical way to assess and guide AI systems towards fairness and legal alignment. Organisations, researchers, and developers can explore its features and test AI systems at http://aequitas.apice.unibo.it.

Women in AI (WAI) is pleased to be a partner in the AEQUITAS Project, supporting tools that make fairness and compliance more accessible in AI development.



Learn more about the project: www.aequitas-project.eu



Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the Culture Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.

