Global Responsible AI Hackathon 2023:
Moving AI Ethics from Talk to Action
AI is quickly becoming ubiquitous across industries. Unfortunately, even though AI can cause real harm, few organizations act to ensure that their AI is safe and beneficial.
In one recent survey, for example, 74% of companies did not work to reduce unintended bias.
The goal of this hackathon is to help organizations act to make their AI more responsible. To that end, individuals and teams will be invited to create proofs of concept for increasing the implementation of responsible AI practices.
SUBMIT YOUR APPLICATION BY OCT 15, 2023, 11:59 PM ET
Applications Close: Oct 15, 2023
Team Formation: Oct 22, 2023
Challenge Matching and DevPost/Slack Onboarding: Oct 29, 2023
Hackathon Virtual Event: Nov 03, 04 and 05, 2023
Submission of Solutions: Nov 05, 2023 by 12 PM ET
Closing Keynote and Winner Announcements: Nov 05, 2023, 7 PM ET
Participants will develop their proofs of concept between May and October 2023.
Participants will finalize their proofs of concept during a 3-day hackathon in November 2023.
The 3-day hackathon will be hybrid: it will include virtual events for everyone, and regional leaders will organize in-person events in their regions.
To support participants, we will host a series of events between May and October 2023.
These will include speaker series events, talks, workshops, and check-ins for hackathon participants.
Topics will include introductions to AI ethics, deep dives into specific AI ethics themes, and hands-on practice implementing AI ethics concepts.
To further support participants between May and October 2023, mentors and vendors will offer free services to hackathon teams.
The resources may include consultations, usage of responsible AI platforms, cloud resources, datasets for training and testing, engineering support, product development support, and audits.
Three hackathon tracks
Develop solutions to help organizations understand prominent AI ethics themes.
Help organizations understand the risks their AI poses
Diverse Input Collection
Help organizations collect diverse input about the AI risks their technology poses
Help organizations educate their employees on AI ethics
Develop solutions to help organizations introduce AI ethics practices into their workflows.
Strategy and Measures
Help organizations develop an AI strategy, including clear metrics and standards
Help organizations implement AI ethics practices
Help organizations implement incentive structures to support the execution of their AI ethics strategy and procedures
Develop solutions to help organizations improve their AI ethics accountability.
Help organizations report on AI ethics progress to internal stakeholders, e.g. employees
Help organizations report on their AI ethics progress to external stakeholders, such as their board or the public
Periodic External Audits
Help organizations undergo AI ethics audits
Workshop Series Event
The Workshop Series will be focused on Responsible AI topics and moderated by Women in AI community members and volunteers.
Workshops will be 90 minutes, and will include break-out sessions and time for Q&A.
The Workshop Series will be virtual to reach global audiences!
We are looking for expert workshop speakers in the following Areas of Focus:
AI Policy and Privacy
AI Tools
AI Business
WORKSHOP SPEAKER BENEFITS
Your photo, logo, hyperlink, and a short description of your organization, services, or work will appear in the Hackathon Workshop Series online content
Promotion of your talk to Responsible AI Workshop Series participants
A thank you banner from Women in AI for your social media channels
Social media promotion of you and/or your organization across the event series
To express interest in being a Workshop speaker, please reach out to
Participant selection will be based on the following criteria:
Participants may apply as individuals or as teams of up to five people
Participants must commit to attending at least four of the workshops and check-ins
Participants must commit to writing a public description of their project
Applications will be screened based on a one-paragraph description of the problem the team wants to try to solve
Winners will be chosen based on the following criteria:
Problem articulation - How well did the team articulate the problem they are trying to solve, its importance to responsible AI, and the appropriateness of the proposed solution?
Execution - How well did the team execute the proof of concept?
Novelty - To what extent is the proposed solution adding to the current landscape of responsible AI solutions?
Feasibility - How feasible would it be to develop the proof of concept into a full-blown solution that would be adopted by practitioners?
Accessibility - To what extent would the solution be accessible to anyone who needs it?