July 15, 2024
Principal author: Karen Jensen
Contributing authors: Damita Snow, Samantha Wigglesworth
As part of our ongoing Speaker Series, on July 15th the Global Ethics and Culture team welcomed an esteemed panel and an exceptional moderator to discuss actionable steps organizations can take to align their business goals with regulatory and industry frameworks and build Responsible AI by design.
In 2023, the Global Ethics and Culture team launched a Speaker Series that focused on education and awareness of bias in AI, and our Global Hackathon event challenged organizations from around the world to design AI solutions that addressed the ongoing challenges of gender parity and equity.
This year, we’re building on our 2023 successes with a new Speaker Series that identifies actionable solutions organizations can implement to overcome bias and foster change in AI ecosystems and infrastructures.
Our global panelists for today’s session were Dr. Ana Clarke and Emma Moller. (Please see the links below to our speakers’ profiles on LinkedIn.)
Our panelists first discussed their respective journeys into Artificial Intelligence. They were also asked a series of questions, along with questions submitted by our global audience, on how to design policies for Responsible AI. Their responses have been summarized here. For the complete dialogue, please watch the recorded session on our YouTube channel.
Emma’s journey into Artificial Intelligence:
Emma grew up in the mountains of Sweden. Six years ago, she founded Lumiera, a European boutique advisory firm with policy, tech, and business expertise that equips organizations with responsible AI strategies. She has worked with companies like Google in the disciplines of Natural Language Processing and Digital Strategies. Emma’s focus is on compliance, ethics, and responsible AI by design. Some of the policy strategies Emma and Lumiera use are driven by a deep understanding of the organization and how it envisions change management.
Dr. Clarke’s journey into Artificial Intelligence:
Dr. Clarke hails from beautiful Buenos Aires and brings over 25 years of experience to the discipline. She is the CEO and founder of AC Smart Data, offering expertise across business management, academia, and research to organizations seeking regulatory consultancy to navigate technology challenges. Ana believes that designing new revenue streams for technology and understanding the regulatory landscapes of AI deployments are key challenges for organizations.
Our moderator asked our panelists the following questions:
How can organizations define the goals and frameworks of their Artificial Intelligence applications? What is right and necessary to put these goals and frameworks in place?
Design with a strategic approach: Organizations should develop a clear understanding of what they want to achieve with Artificial Intelligence applications. There is no “one size fits all” approach to Responsible AI by design, so organizations should consider a 360-degree scope that includes all stakeholders, industry and regulatory frameworks, and the organizational infrastructure.
Designate an AI champion within the organization: An AI champion can help the organization stay focused on the mission goals, establish communication pathways with stakeholders, and build transparency into organizational solutions.
Develop “fail fast and recover” processes for AI implementations: Smaller, more flexible projects can help organizations measure outputs and make corrections before full-scale deployments. Pilot projects and organizational feedback continue the pathway toward transparency and deployments that consider the human impact.
How can organizations achieve balance with both ethics and business goals?
Include ethics as part of the core strategy: A successful AI strategy is not a one-time effort. Organizations must measure and monitor outputs to ensure that AI deployments are accomplishing organizational goals and aligning with organizational values.
How can organizations ensure that deployments are aligned with their organizational goals?
Build a culture of human-centric deployments: Regulatory frameworks, while important, are not enough. Organizations should define clear examples of how AI applications can enhance the value of their existing products or services, or how they can create new ones. Champion human-in-the-loop practices that contribute to solutions providing order and stability. Early adopters of Responsible AI will have fewer problems as they evolve.
Please share examples of your successful AI strategies.
Dr. Clarke shared her experiences in Supply Chain & Logistics: by building a streamlined AI model, her team was able to increase efficiencies in transportation routes while protecting business and sensitive information. The team defined tools for predictive analytics and decision-making with the goal of commercializing emerging technologies.
Emma shared her experiences in designing for Responsible AI with a value-based approach. She confirmed that regulations by themselves are not enough. Her team welcomes skepticism, even though addressing it can be hard work, and encourages organizations to approach these changes positively to build a culture of human-centric deployments.
Regulations are constantly evolving. How can organizations and experts stay adaptive and nimble in the best practices of AI governance?
Invest in continuous learning: Staying up to date on trends and transparency practices can help keep stakeholders engaged and informed. The only thing we know for sure is that we can’t predict the future. The EU AI Act comes into force on August 1st, so much of its application is still a gray area. We need to continue to ensure that the process remains transparent and efficient.
Harmonize Regulatory Frameworks: As emerging technologies evolve, expect to see regulatory frameworks attempt to harmonize to ensure that Responsible AI develops with adaptations for cultural and geographical differences. Ideally, a unified approach to regulatory frameworks will help ensure the development of Responsible AI.
Questions from the audience
Are there specific checklists that organizations can use to assess impacts?
Scientific validity is important. The UN’s global AI initiatives, along with the frameworks for your industry or country, are great places to start when aligning with Responsible AI frameworks. Additionally, testing and reliability of outcomes, as well as human-to-human interactions, are critical.
How do you see AI enhancing industries?
Improved outcomes. For example, sea ice forecasting is currently a manual process with significant lag time. AI-based sea ice forecasting has substantially reduced that lag and, with improved forecasting capabilities, has the potential to reduce negative impacts on human life.
Takeaways and Learning:
Embrace early adoption of Responsible AI frameworks: While some organizations may express skepticism about regulatory frameworks, proactively building trust into AI deployments is essential for increasing accountability and protecting people from harm. Designating an AI Officer or AI Champion can significantly advance your organization's position at the forefront of Responsible AI.
Design for Iterative Development and Continuous Learning: Regardless of the size of your organization or the industry it represents, start small with AI deployment projects and iterate until you get it right. Use pilot projects to measure successful outcomes, gather feedback, and define business goals that align with organizational values and changing regulatory frameworks.
Establish Human-Centric Policies for AI Deployments: Impact assessments should be both external and internal and should include all stakeholders. Responsible AI is more than just technology outputs; it’s about harnessing the power of emerging technologies to enhance human life while protecting against potential harms. Rigorous, scientific testing of AI outputs ensures that these systems align with human values, respect individual rights, and contribute to a more equitable and just society.
Event recording
You can view the recording of the event using this link.
Ethics & Culture Team
Please see the links below to our Team’s profiles on LinkedIn.