Eticas' audit of RisCanvi uncovered biases and reliability issues, marking a crucial step toward transparency in AI. Through ethnographic and comparative audits, it highlighted discrepancies in risk assessments and called for fairer practices in criminal justice AI. The DIVERSIFAIR project, in which Women in AI manages the "Dissemination and Impact Maximisation" Work Package, advocates for ethical AI development and rigorous evaluation methodologies.
We are delighted to announce that Eticas, a DIVERSIFAIR project partner, has successfully completed its inaugural audit for the project. This comprehensive examination focused on RisCanvi, an AI tool employed within the criminal justice system of Catalonia, Spain. The audit marks a significant milestone in our mission to enhance transparency and fairness in AI technologies.
Key Findings from the Audit
Titled "Automating (In)justice: An Adversarial Audit of RisCanvi", the audit uncovered critical deficiencies in the tool:
Bias in Risk Classifications: The static factors used in risk assessments showed biases against specific demographic groups, particularly people with challenging backgrounds.
Reliability Issues: Significant shortcomings were identified in RisCanvi's reliability, compromising its ability to provide assurances to inmates, lawyers, judges, and other criminal justice stakeholders.
Regulatory Non-Compliance: Despite Spanish regulations mandating audits for automated systems since 2016, RisCanvi had not undergone scrutiny until Eticas' examination.
Methodology of the Audit
Eticas employed a comprehensive audit methodology comprising two primary components:
Ethnographic Audit: This involved immersive research, including interviews with inmates, legal professionals, and stakeholders within and outside the criminal justice system, providing a holistic view of RisCanvi's impact.
Comparative Output Audit: Using public data on inmate populations and recidivism, Eticas compared RisCanvi's risk factors and behaviours with real-world outcomes. This analysis revealed discrepancies and potential biases within the system; a simplified sketch of this kind of comparison appears below.
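To make the comparative output step more concrete, here is a minimal sketch of the kind of check such an audit might involve: comparing a tool's risk labels against observed outcomes across demographic groups and seeing whether error rates diverge. Everything here is hypothetical, including the data, the column names, and the grouping; the audit report itself describes Eticas' actual methodology.

```python
# Minimal, illustrative sketch: compare hypothetical risk labels against
# observed outcomes, grouped by a demographic attribute. The data and
# column names are invented for demonstration and do not reflect
# Eticas' actual data or methodology.
import pandas as pd

# Hypothetical records: each row is one person, with the tool's risk
# label ("high"/"low") and whether recidivism was actually observed.
records = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "risk_label":  ["high", "high", "low", "low",
                    "high", "high", "high", "low"],
    "recidivated": [True, False, False, False,
                    True, False, False, False],
})

for group, sub in records.groupby("group"):
    # False positive rate: share of non-recidivists labelled high risk.
    non_recidivists = sub[~sub["recidivated"]]
    fpr = (non_recidivists["risk_label"] == "high").mean()
    print(f"Group {group}: false positive rate = {fpr:.2f}")
```

In this toy example the two groups end up with very different false positive rates (0.33 versus 0.67); a persistent gap of that kind in real data would be one signal of the sort of bias the audit looked for.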
Adversarial audits play a crucial role in thoroughly evaluating AI systems. They extend beyond technical assessments to consider broader societal implications, emphasising fairness, transparency, and accountability. The RisCanvi audit underscores the impact of multidisciplinary collaboration in shaping responsible AI practices.
This audit of RisCanvi falls within DIVERSIFAIR's scope by addressing intersectional bias in AI systems used in sensitive areas such as criminal justice. Our objective is to develop, apply, and test tools and methods that identify and mitigate biases across various sectors. This includes conducting internal and external audits of AI solutions such as risk assessment tools, predictive systems, natural language processing, facial recognition, and matching algorithms. Through this approach, we aim to ensure fairness in AI development and use, contributing to the creation of inclusive AI systems and fostering a more equitable digital future. Read the full audit report on the Eticas website.
Results from this audit have also been featured in the Spanish newspaper El País.
About Eticas
Eticas is the world's first algorithmic auditing company. It has conducted adversarial audits of systems used by YouTube, TikTok, Uber, insurance companies, and the Spanish government, examining their impact on radicalisation, migrant representation and discrimination, bias against people with disabilities, workers' rights, and protection for victims of gender violence. Find out more about Eticas.
About the DIVERSIFAIR project
DIVERSIFAIR is an Erasmus+ project that aims to address intersectional bias in AI and mitigate the discriminatory impact of AI on people's lives. Committed to advancing AI technology that is fair, unbiased, and inclusive, we aim to raise social awareness, influence policy-making, and provide future-proof AI training. Find out more about DIVERSIFAIR.
Stay in touch
Follow DIVERSIFAIR on LinkedIn and subscribe to the project's newsletter.
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the EACEA can be held responsible for them.