Building WAI Labs

By Dr. Vinutha Magal Shreenath, Chief Research Officer, Women in AI
Introduction
Women in AI (WAI) launched WAI Labs in January 2022. Our mission is to conduct, facilitate and promote research led by our community in the field of Artificial Intelligence (AI). Our communities across the world are made up mostly of women. We have supported these communities in all types of activities related to AI, not just those related to gender and AI. When it comes to AI research, we similarly support all types of research conducted in the field. Our aspirations, our goals and our ways of working are all aimed at ensuring women are at the center, not on the sidelines, of shaping AI technology. In this blog, I lay out the essence of why and how we carry this out.
Who gets to conduct science, who gets to be a scientist?
Science, as the organised creation of knowledge, is built on continuous reform and openness to examination and debate by other scholars. To a large extent, the people who have been able to participate have come from lineages of learning, from means, or have had patrons for the pursuit of knowledge. The direction of the pursuit of new knowledge has depended on the scholar, and on other factors such as where they lived, who their patrons were, and so forth. An enterprise capable of producing outcomes that advance conditions for everyone is better served by scholars from everywhere and from all walks of life who are capable of sustaining this pursuit. The training, the rigor, the standard needed to establish truth and to build on top of others' work is something any scholar worth their salt takes very seriously.
We know that gender-diverse teams produce more novel and more innovative work (1, 2). For a science that can improve everyone's conditions, we not only need more women scientists in this pursuit, we also need those from all walks of life, and not just those already privileged (3).
AI is a unique technology in many ways. The incredible speed of advancement in AI has given rise to a speaker class whose sole occupation is commentary. Apart from coming from various privileges, many of them have no training in science or engineering, nor any standard of scholarship. The muddied discourse around this technology, and the fear-mongering, have left the public confused about whom they should listen to and why. Sure, everyone has a stake in what is being built, and science is for everyone. Anyone with the right skills has a role in bringing this technology, this knowledge, from nascency to a stage where it is reliably accessible. The field needs more ways to disseminate expert opinions, while also talking about real consequences, hopes, fears, and questions from other stakeholders – those who take seriously their own role and the consequences of their actions. But that does not mean we give space to charlatans – of any gender – who consider themselves above science and openly indulge in speculation and conjecture while profiting from the panic they cause. There is a real need for putting the scientist back in AI science. We are also living through a paradox: enormous amounts of money are being spent on very few bets, i.e. only on directions that have patronage, while the funds for conducting science away from profit incentives have dried up. We need new ways to incentivise coalitions to collaborate on pertinent problems.
Good-humanist-ethical-responsible-trustworthy-safe AI?
There are many areas within AI research that notionally represent different schools of thought. In the physical sciences, different schools of thought meant, quite literally, that the world works entirely differently in one school compared to what another proposes. It meant different experimentation, different equipment, entirely different sets of people involved, and widely different outcomes.
In today’s AI discourse, the above terms are often used as if mutually exclusive, and each is championed by different figures in the AI landscape. To be clear, all of these are important and have made their own distinct points. However, there is not nearly enough separating them to consider them entirely different schools of thought. (There is actually alarming homogenization, especially when it comes to architectures and the scaling hypothesis, with nearly uniform efforts and funds being spent by the largest labs.) Perhaps the distinction between them is better explained not as entirely different schools of thought, but as different levels of abstraction.
Very briefly and incompletely: Trustworthiness is about the extent to which we can know that what an AI system does is correct and reliable. Responsibility is about which stakeholder is responsible for which area, how to manage risk when something goes wrong, and cataloging what can go wrong or right and to what degree. Safety has been about preventing AI systems from causing harm to people, intended or otherwise. Ethical AI, or AI for good, is about how AI systems can be used for “good” (according to whom?) outcomes.
All these topics are relevant to our communities; we will not pick one over another. As stated, we believe in, and have acted for, inclusion in the creation and application of AI technology that is useful for all and has the above characteristics, and we are committed to conducting research in an open and transparent manner. We are interested in (a non-exhaustive list):
Value chain from data to product, preserving privacy and agency
Applied areas important to our stakeholders and communities
Impact of AI systems in the world
Evaluation, from capabilities through to overall needs from AI technology
New paradigms of learning that can be functionally replicated in models
Models that operate and respond to the physical world
This list encompasses topics that range from the deeply technical to the deeply social aspects of the socio-technical. We will pursue research directions that are grounded within our communities, with the involvement of the right skillsets and scholarship at various stages, for knowledge and advocacy.
Larger questions
What is intelligence? How do we measure this different kind of intelligence with the rudimentary and deeply flawed tools that we have? Can one actually separate humanity from intelligence?
Are publications / impact scores the right measures of a scientist?
Many of the “schools of thought” are almost exclusively run by men, many of whom also have ties, through funding or otherwise, to organisations with explicit goals of keeping women out or of claiming we are inferior in some way. To what degree can one separate the people practicing this craft from the craft itself? Doesn’t that make their practice illegitimate?
Are we all losing our jobs? How do we ensure the prosperity being created is for everyone?
These are larger questions to ponder, and they affect us. Many of them are in the realm of wicked problems (4), where opposing views on what is “optimal” are the norm, and where “solving” them requires larger collective processes, not only scientific ones.
However, we as an organisation are just one among the many working tirelessly for voices to be heard, for rights to be honored, and for everyone’s right to participate in creating and enjoying prosperity. While prejudices such as racism and misogyny are acutely known to us, we also know of the oil of humanity between all of us that defies description. There are others, like us, on the right side of the above questions, who want to do better with technology so that it connects us and empowers us, and who believe in the equality of all. We call on such actors to work with us.
Operations as a research collective
Our work of conducting research as a collective started with making known the work of women scientists in AI research from all over the world. We have now showcased ~90 of them, from >20 countries. We have budding conversations about research, as described above, in ~10 countries, and we have collaborated across borders – for short, intense periods – and managed to get into major conferences in AI research, such as IJCAI, EMNLP, AAAI and NeurIPS. We have done all of this as volunteers, in small pockets of time between other jobs. It has been enabled by volunteers in all capacities. All of us, from the board to the chapters, take seriously what is at stake here. I thank each and every one of them for making this work possible.
We are open to showcasing more women scientists through blogs, interviews here, their profiles on other social media (SM), showcases of relevant papers, etc. We are conducting open science on the topics listed above, and more, depending on interest from chapters. We will do this by releasing problem specifications here in blogs and on other SM, calls for volunteers, calls for papers, calls to jointly participate in funding for projects, calls for sponsors for projects, events, panels, etc. As there are so few of us scientists in any single chapter, we act locally and also collaborate globally on important problems. If you are interested in any of the above, reach out to us at wailabs@womeninai.co. If you fit the criteria below and want to volunteer, fill out this form: https://forms.gle/cmfd3SXY7DL2ZTdt5 ; if you want your work to be showcased, fill out this form: https://forms.gle/vruhobBdWGYc6FPE6
Late-stage doctoral researcher or above. Could be anyone with a PhD; need not be an academic.
Must be engaging in research activity in AI or related fields.
Flexible and open working style; able to work cross-functionally with a global team of scientists from various disciplines related to AI.
Commitment of 8-10 hrs / month, for at least 8 months.
References
1. Yang, Y., Tian, T. Y., Woodruff, T. K., Jones, B. F., & Uzzi, B. (2022). Gender-diverse teams produce more novel and higher-impact scientific ideas. Proceedings of the National Academy of Sciences, 119(36), e2200841119.
2. Hofstra, B., Kulkarni, V. V., Munoz-Najar Galvez, S., He, B., Jurafsky, D., & McFarland, D. A. (2020). The diversity–innovation paradox in science. Proceedings of the National Academy of Sciences, 117(17), 9284-9291.
3. Morgan, A. C., et al. (2022). Socioeconomic roots of academic faculty. Nature Human Behaviour, 6(12), 1625-1633.
4. Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155-169.
Acknowledgements
Generative-AI tool use: None
