
2025 Women in AI Expert Series: Generative AI, Explicit Content, and Organizational Solutions for Not Safe for Work (NSFW) images


Principal author: Karen Jensen


Welcome to the 2025 Expert Series from the Global Ethics and Culture office of Women in AI.


In 2025, we continue our global initiatives in Education, Entrepreneurship, Innovation, and Research to make AI accessible and inclusive for everyone, with a special focus on women and girls.

Like the 2024 Speaker Series, this year's Expert Series aims to boost opportunities for women and girls in AI. We'll feature global women experts sharing practical AI skills that could help you launch a new AI career or reskill for AI roles.  


In our first session, titled "Prompt Like a Pro: AI Skills for Students and Young Professionals," our expert, Charlotte Tao, offered practical guidance on the essential skill of prompt engineering.

In our second session, titled “Agentic AI: Navigating Autonomy, Accountability, and Ethics,” our expert, Dhivya Nagasubramanian, offered a clear explanation of Agentic AI, how it differs from Generative AI, and where it is being used.

In our third session, titled “Responsible AI in Action: A Look Back at a Winning Hackathon Project,” our expert, Dr. Ja’Nya Jenoch, brought together members of some of the teams from our 2023 Global Hackathon to discuss where they are now, what they’ve learned, and their inspiring message for women and girls moving into careers in emerging technologies.


Throughout the world, women have been storytellers and the keepers of oral histories.  We honor those traditions and welcome a new generation of storytellers, dedicated to the deployment of Ethical and Responsible AI.


Our expert for today is Bobbi Stattelman. Bobbi is the COO of Falcons.ai, whose model is the number one most downloaded AI model on Hugging Face (huggingface.co). Bobbi will be discussing the organizational challenges posed by Not Safe for Work (NSFW) images and how emerging technologies offer promising solutions to this challenge.


Generative AI and the explosion of AI-generated explicit content

While Generative AI offers significant opportunities for solving societal challenges, the darker side of these emerging technologies includes the prolific creation of AI-generated explicit content and non-consensual deepfake images. This content primarily targets women and children, with damaging impacts that have yet to be fully recognized. Billions of images and deepfakes are created with Generative AI, and according to Bobbi’s research, some 90% of these images and deepfakes are pornographic! While many countries are developing legislation to address the impact of these images, there is still much work to be done.


The Workplace and Why It’s Important

The intersection of explicit content and workplace technologies creates significant organizational vulnerabilities and risks:

·       Legal Liabilities: Increasingly, senior leadership at organizations is being held accountable for workplace behaviors. Since 50% of employees have used a work device or workplace technology to access explicit content, it’s clear that organizations have a deeper responsibility to protect their ecosystems from this risk.

·       Hostile work environments: Employees who show explicit content in the workplace create the potential for hostile work environments and their consequences, including increased turnover due to toxic organizational behavior.

·       Reputation and Brand Damage: Social media outrage has the power to significantly damage organizational reputations and brand identities.


Current State: The challenges and limitations of the new threat landscape

The challenges for organizations using today’s content moderation strategies are many. In the current state, AI models work with human collaborators via an external API configuration to review and analyze large volumes of content. Sharing this content via API applications can create significant vulnerabilities in data privacy compliance. Additionally, employees viewing explicit content may suffer psychological impacts, and digital and human context analysis can be inconsistent, resulting in high volumes of false positive and false negative outputs.
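To make the data-privacy exposure concrete, a typical external-API moderation call looks roughly like the sketch below. The endpoint URL, field names, and response shape are hypothetical placeholders rather than any specific vendor's API; the point is simply that the raw image must leave the organization's environment before it can be classified.

```python
import requests

# Hypothetical third-party moderation endpoint -- a placeholder, not a real vendor API.
MODERATION_API_URL = "https://moderation.example.com/v1/classify"
API_KEY = "REPLACE_WITH_YOUR_KEY"


def classify_via_external_api(image_path: str) -> dict:
    """Upload the raw image to an external moderation service.

    The image bytes leave the organization's boundary here, which is the
    data-privacy and compliance exposure described above.
    """
    with open(image_path, "rb") as f:
        response = requests.post(
            MODERATION_API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()  # e.g. {"label": "nsfw", "score": 0.97} in this sketch


if __name__ == "__main__":
    print(classify_via_external_api("example.jpg"))
```

Every call in this pattern ships content to a third party, and flagged items are typically still routed to human reviewers, which is where the compliance, consistency, and psychological costs accumulate.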


Navigating the Future: AI-only models that offer more comprehensive protection

On-device AI detection via the Falcons.ai model, including on BYOD (bring your own device) hardware, represents the next generation of defense. It examines content at the point of rendering, regardless of source or encryption status, and opens a wide array of new detection capabilities across devices and across both digital and physical spaces, including network-level detection, real-time monitoring, client-side encryption, and drone surveillance. These models can be highly effective on significantly less power and can be deployed across all electronic devices.
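As a rough illustration of the on-device alternative, the sketch below runs an image classifier locally with the Hugging Face transformers pipeline. The model identifier Falconsai/nsfw_image_detection and its label names are assumptions to be verified on huggingface.co; the key property is that the weights are downloaded once, inference happens entirely on the device, and the image itself is never sent to an external service.

```python
from PIL import Image
from transformers import pipeline

# Load the classifier once. After the initial download, the weights are cached
# locally and inference runs on-device; no image data is sent to an external API.
# The model ID and label names below are assumptions -- verify them on huggingface.co.
classifier = pipeline("image-classification", model="Falconsai/nsfw_image_detection")


def is_nsfw(image_path: str, threshold: float = 0.5) -> bool:
    """Return True if the locally run model scores the image as NSFW."""
    image = Image.open(image_path)
    results = classifier(image)  # e.g. [{"label": "nsfw", "score": ...}, {"label": "normal", ...}]
    nsfw_score = next((r["score"] for r in results if r["label"].lower() == "nsfw"), 0.0)
    return nsfw_score >= threshold


if __name__ == "__main__":
    print(is_nsfw("example.jpg"))
```

Because the classifier runs locally, the same pattern can sit at the point of rendering in an endpoint agent, a BYOD client, or a network appliance, which is what enables the detection scenarios described above.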


Summary

As emerging technologies continue to evolve, our #AI4Good technologies need to evolve in parallel to ensure that our societies and our vulnerable community members are protected from harm.

Share your comments here on this post and with us on Social Media @WomeninAI to ensure we #MakeitFAIR!

Event recording: You can view the recording of the event using this link.

This Expert Series is presented by the Women in AI Ethics & Culture Office volunteer team, dedicated to A Global Vision for achieving gender parity in emerging technologies through increasing Opportunity, championing inclusive Policies, and fostering practical Action that delivers meaningful and measurable impact.



Ethics & Culture Team

Please see the links below to our Team’s profiles on LinkedIn.

A special welcome to our new member:
