There is no doubt that generative AI will have an impact on workers and jobs. The Hollywood strikes last year pushed back against studios using AI to replace actors and writers. SAG-AFTRA's deal now requires performers' consent and fair pay for using their digital replicas, but there's still confusion around AI-generated characters or "synthetic performers." In the UK, the Trades Union Congress (TUC) proposed a new bill to protect workers from unfair AI decisions like hiring or firing, while also demanding transparency and safety checks for workplace AI. Both efforts show unions stepping up to make sure AI doesn't take jobs or mistreat workers.
Today’s article explores trade unions’ global impact on AI regulation. It is written by Giulia Trojano, an associate at Hausfeld, where she focuses on competition disputes, particularly those with a tech angle. Giulia is also currently undertaking an interdisciplinary Master’s in AI Ethics & Society at the University of Cambridge, aimed at professionals. Let’s read what she writes about this interesting interplay between technology, politics and society.
Just over a year ago, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers’ Guild of America (WGA) strikes came to an end in Hollywood. SAG-AFTRA and WGA union members were particularly concerned with studios using generative AI to replace actors and writers: for example, by employing AI tools to write or re-write scripts, or by leveraging a studio’s copyright in hours of footage to generate digital replicas of real performers or to train a model to produce new “synthetic performers”. If the studios were to engage in such unrestricted conduct, union members would be deprived of potential work and earnings.
Taking the model agreement that ended the SAG-AFTRA strike as an example, performers have greater protections where studios seek to capture their likeness when shooting a film/TV show (“employment-based digital replicas”). Studios must obtain the performer’s explicit consent for the capture; give at least 48 hours’ notice for the performer to consider a proposal; negotiate separately with the performer each time they intend to use the employment-based digital replica; and pay a day rate for its use.
However, where studios use existing materials from performers (i.e. past films in which they hold copyright) to (1) generate “independently created digital replicas”, that is, replicas which depict a performer in a film in which they did not actually act as a natural person; or (2) train generative AI models to then produce “synthetic performers” which do not bear any external likeness to the original performer or to any recognisable performer, the position is less clear-cut. Studios will not be prevented from generating “independently created digital replicas” or “synthetic performers” but, broadly speaking, need to engage in good faith negotiations with the performer or SAG-AFTRA to arrive at equitable compensation.
It remains to be seen how these clauses are enforced in practice, especially given that copyright gives studios greater bargaining power. However, both SAG-AFTRA and WGA were incredibly influential in bringing employment issues arising out of the deployment of generative AI to the fore and in forcing studios to incorporate clauses specifically dealing with the potential consequences of loss of work and earnings due to generative AI.
Another influential union shaping AI regulation in the context of employment law is the UK’s Trades Union Congress (TUC), an umbrella organisation bringing together 5.5 million working people from 48 member unions. The TUC had already published a series of reports on the use of AI for employee surveillance, as well as AI-specific guidance aimed at union officers and representatives for use in collective bargaining processes.
However, in April 2024 it stepped up its efforts and published its draft Artificial Intelligence (Employment and Regulation) Bill (the “TUC Bill”). The aim of the TUC Bill is to protect workers from AI-based decision-making where the outcome of a decision has “legal effects or other similarly significant effects”, such as recruitment and dismissal decisions, which may affect workers at all stages of employment.
In support of its core aim, the TUC Bill also seeks to ensure that AI systems deployed in a workplace are tested for safety and that workers and unions are consulted. Further, in order to redress data asymmetries, the TUC’s proposals for Workplace AI Risk Assessments include frameworks for workers, employees and unions to gain access to information on how AI systems are operating. Likewise, the TUC Bill contains a right for employees and jobseekers to receive a personalised statement explaining how an AI system made high-risk decisions about them. The TUC Bill also seeks to reverse the burden of proof, making it easier for employees to prove that AI-based discrimination under the Equality Act 2010 has taken place, and to introduce a right to disconnect, in recognition of the fact that greater automation and implementation of AI systems may lead to the intensification of work for employees.
While the UK Government has indicated that it will be considering the impact of AI on workers, the TUC Bill is another illustration of trade unions taking the lead and envisaging what AI regulation can and should look like for workers.
If you want to know more, Giulia suggests these additional resources:
TUC proposed AI Bill: Artificial Intelligence (Regulation and Employment Rights) Bill | TUC
SAG-AFTRA model agreement: 2023 SAG-AFTRA TV-Theatrical MOA_F.pdf
Collaborate with us!
As always, we are grateful for you taking the time to read our blog post.
If you want to share news relevant to our community and to those who read us from all corners of the world-wide WAI community, and you have an appropriate background working in the field of AI and law, reach out to Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or via e-mail silvia@womeninai.co), or to Dina Blikshteyn (via dina@womeninai.co) for the opportunity to be featured in our WAI Legal Insights Blog.
Silvia A. Carretta and Dina Blikshteyn
- Editors