
AI & Law: 2024 is (already) a year for the books!



Welcome to the first issue of the W(AI) Legal Insights blog!

Before we parted for the summer, we announced the upcoming launch of our brand-new legal blog, AI Legal Insights. As promised, here we are with its first issue!


Our bi-weekly blog entries will keep you abreast of recent legal and legislative developments in AI worldwide. They will be filled with bite-sized insights into recent AI trends in intellectual property rights, data privacy, societal challenges, and regulatory considerations. We will also publish news about opportunities, upcoming conferences, books to read, calls for papers, and other information you will need and want to navigate your way through the world of AI & Law.


While you all took a well-deserved summer break, we at the WAI Global Legal team were working behind the scenes to bring you a first issue full of interesting discussions, recent developments, and fun facts about the AI & law news of 2024. We hope not to disappoint!

This year, 2024, is a year for the books for AI & Law

With AI's popularity and use skyrocketing, 2024 has seen many legislative efforts proposed, passed, and enacted, and many judges called upon to adjudicate disputes over AI use, training, and creative output.

In the EU, AI Regulation is being implemented…

In 2024, the European Union stayed busy implementing digital and AI regulations. The year started with the two regulations of the Digital Services Act package becoming fully applicable and continued with the much-discussed EU AI Act entering into force, among other legislative developments.


In February 2024, the Digital Services Act (DSA) became applicable to all online platforms. The DSA aims to create a safer digital space in which the fundamental rights of users are protected. It concerns all sorts of digital services, from online marketplaces, social networks, and content-sharing platforms to app stores and online travel and accommodation platforms. The rules specified in the DSA primarily protect users from manipulative algorithmic systems that amplify the spread of disinformation and hate speech, and from other harmful practices such as cyberbullying and online harassment. They also make it easier to report illegal content, goods, or services through ‘Notice and Action’ mechanisms. On this note, the DSA reforms the intermediary liability regime to ensure that online platforms promptly remove or restrict access to illegal content once they become aware of it, and that they keep their terms and conditions updated and consistent with the law.


Around the same time, the Digital Markets Act (DMA) also saw all of its obligations become fully applicable. The DMA is one of the first regulatory tools to comprehensively regulate the market power of the largest digital companies. It establishes a set of clearly defined, objective criteria to identify “gatekeepers”: large digital platforms providing core services such as online search engines, app stores, messenger services, and more. These gatekeepers now have to comply with the do’s (i.e. obligations) and don’ts (i.e. prohibitions) listed in the new regulation. The DMA complements, but does not change, EU competition rules, which continue to apply fully, creating a new framework that fosters innovation, growth, and competitiveness, both in the European Single Market and globally.


Last but not least, the much-discussed EU AI Act, which lays down harmonized rules on artificial intelligence, was published in the Official Journal of the European Union in July 2024 and will become fully applicable on 2 August 2026. A lot has already been said about this new regulation, and more will be said in upcoming AI Legal Insights issues. For now, it suffices to remember that between now (2024) and then (2026), its provisions will apply progressively: the general provisions and the ban on prohibited AI practices from 2 February 2025; the provisions on general-purpose AI models, governance, and penalties from 2 August 2025; and the classification rules for high-risk AI systems embedded in harmonized products from 2 August 2027. The AI Act is now in force, and it will concretely impact the design, development, and deployment of AI systems across the entire EU Internal Market.


…while in the US, several bills have been proposed in Congress and laws enacted at the state level.

Unlike the European Union, the United States has yet to pass AI legislation at the federal level. However, lawmakers are actively working to regulate AI, and several bills have been proposed in Congress. These bills address a broad spectrum of concerns, including copyright, transparency, and ethical issues.


One such bill is the Generative AI Copyright Disclosure Act, introduced by Representative Adam Schiff. This legislation would require developers of generative AI systems to disclose any copyrighted materials used in training datasets for AI models. This move aims to ensure transparency and protect intellectual property rights in the AI development process.


The No FAKES Act of 2024 is another important legislative effort. This Act provides protections for individuals against unauthorized use of their voice or likeness in digital replicas. Digital replicas, which are realistic computer-generated representations of a person’s voice or image in recordings or audiovisual works, may sometimes be created without the individual’s involvement or consent. The Act recognizes a person’s digital replica as a property right that cannot be transferred during their lifetime, though it can be licensed by the individual or their heirs.


Additionally, the 2024 Future of Artificial Intelligence Innovation Act is a bipartisan initiative designed to promote U.S. leadership in AI by fostering innovation and ensuring AI safety. It proposes the establishment of the U.S. Artificial Intelligence Safety Institute (AISI) at the National Institute of Standards and Technology (NIST). This institution, along with national AI labs, would be responsible for developing guidelines and accelerating AI advancements, all while promoting public-private partnerships.


While federal AI legislation is still under consideration, individual states have been proactive in passing AI-related laws. In 2024, states like Colorado, California, and Tennessee enacted their own AI regulations. For example, the Colorado AI Act, enacted in May 2024, mandates that developers avoid algorithmic discrimination when creating high-risk AI systems. California’s legislature passed a package of AI-related bills in August 2024, requiring companies to put safeguards around their large language models and publicly disclose their safety protocols, and banning election-related deepfakes on social media platforms. In March 2024, Tennessee passed the ELVIS Act, updating the state's Personal Rights Protection Act to include protections against the unauthorized use of generative AI to mimic a person’s voice without consent.


So, what can we expect for the rest of 2024?

As AI continues to evolve rapidly, lawmakers in the U.S. and EU are racing to keep pace. It is crucial for anyone involved in AI development or use to stay informed about the latest legislative developments, both domestically and internationally.


What should be of interest to many of our readers is the latest news from the Council of Europe, whose Framework Convention on AI was opened for signature on September 5, 2024. This first-of-its-kind, legally binding international treaty aims to ensure respect for human rights, the rule of law, and democracy across the entire lifecycle of AI systems. The Convention adopts a risk-based approach, similar to that of the AI Act, covering the use of AI systems by public authorities or by private actors acting on their behalf. Signature is also open to non-European countries across the world. From day one, the European Commission (on behalf of the EU), the USA, the UK, and other countries signed the treaty in a show of commitment to protecting people's rights as a priority.


Other AI initiatives have been launched around the world. In March 2024, the UN General Assembly adopted a landmark resolution on promoting “safe, secure and trustworthy” AI for sustainable development, backed by more than 120 Member States. In addition, the AI Seoul Summit of May 2024 produced the Seoul Declaration, which aims to enhance international cooperation on AI governance and advance global discussions on AI. Under Japan's G7 presidency last year, Canada, France, Germany, Italy, Japan, the UK, and the US launched the so-called Hiroshima Process: 11 guiding principles and a voluntary Code of Conduct that promote safe, secure, and trustworthy AI.

These treaties and international agreements aim to complement, at the international level, legally binding legislation (e.g. the EU AI Act) as global efforts to regulate the development of AI gain momentum. They also aim to promote the respect and protection of human rights as a priority in the design, development, deployment, and use of AI. For those involved in AI, staying informed about these evolving frameworks is essential to navigating the dynamic regulatory landscape.


Last-minute opportunity: don’t miss it!

The EU AI Office has opened a consultation on trustworthy general-purpose AI models.

If you want to contribute your insights, please do so by EOD 18 September through this link. Your expertise is crucial in shaping the future of AI in Europe!


Collaborate with us!

If you have read this far, thank you! If you only scrolled down and ended up here, that’s good too!

This first issue of the W(AI) Legal Insights blog is coming to an end. 


But first, one more thing. We want to make sure we are as inclusive and representative of our global community as possible, sharing news that is relevant to you and to those who read us from all corners of the WAI community, from France to South Africa, from Australia to Canada, from Brazil to the Philippines, and all the other WAI countries in between.


To do that, we need your help! If you have a relevant background, work in the field of AI and law, and want to share your expertise or work experience in one of the next issues, reach out to our Chief Legal Officer Silvia A. Carretta (via e-mail at silvia(a)womeninai.co or via LinkedIn) for the opportunity to be featured in our W(AI) Legal Insights blog.


Let’s pave the way for a more inclusive and informed future in AI!


Silvia A. Carretta and Dina Blikshteyn

- Editors
