
Adopting AI in Law Schools: Why Being Human Matters


By Maddison Konway


A second-year law student reflects on entering legal education as generative AI rapidly reshapes learning yet remains only cautiously acknowledged in academia. Drawing on her engineering background, Maddison contrasts her earlier tech-forward training with law school’s more conservative approach, shaped by the school’s new AI policy. That policy illustrates how legal institutions, like many other industries, were forced to respond quickly to an emerging technology. Maddison considers how the policy shapes the way students engage with AI in school, which in turn will shape how they use AI in legal practice.


Maddison Konway is a second-year law student at Berkeley Law, where she is focusing on intellectual property and administrative law. Prior to law school, she graduated with a Bachelor of Engineering and Management with a specialization in Materials Science from McMaster University in Canada. Maddison spent her 1L summer at Haynes and Boone working on patent prosecution, patent litigation, and PTAB proceedings for diverse technological fields.

Born a Zoomer, then studying engineering from 2018 to 2022, I was no stranger to technology. However, I completed my Bachelor’s degree before many of the artificial intelligence tools we now take for granted became mainstream, and most of my exams were in person and on paper. Consequently, commencing law school was a significant shift from my prior educational experience. Despite the prevalence of AI tools, discussion of them was distinctly absent in my first semester of law school; only two of my classes even mentioned AI in the syllabus. Berkeley’s most recent policy, the “Generative AI Rule for Exams and Assessments,” is dated April 7, 2023. It rests on three main principles:


  1. AI may never be used in a way that would constitute plagiarism, wherein generative AI is the author;

  2. AI may be used to perform research; and

  3. AI may not be used in exams.


As I begin the spring semester of my second year of law school, I have seen changes in how generative AI is managed, and yet, the landscape is not substantially different from when I began my degree in August 2024.


Before I get into the changes, I want to take a step back and share where Berkeley’s policy came from (and where I suspect other law schools may get theirs). My Evidence professor, Jonah Gelbach, had remarked in passing that he helped co-write the policy in 2023, and even commented that he did not entirely realize what he was writing at the time. I reached out to him to learn more about the process. The high-level answer was that Berkeley’s AI policy was developed through an informal process: it started as a discussion among faculty and ultimately produced a specific text that now influences the daily life of the more than 1,000 students at Berkeley Law.


So, what is this influence? As I mentioned above, there are three strands of impact: AI-generated content, conducting research, and completing exams. In my doctrinal classes, I have seen minimal applications for AI. As someone who typically buys physical textbooks, I have no easy way to summarize my class readings with AI, and for materials like cases, I have found other commercial tools more beneficial. In contrast, I have also taken a legal writing course, and it was during this class that I began to recognize both the benefits and dangers of AI with greater acuity.


To start, I will highlight the benefits. First, AI was fantastic for research. When I was gathering preliminary information and getting a high-level overview of the topic prior to diving into deeper research, AI increased my efficiency and provided resources in a succinct manner that I could subsequently verify by looking at underlying sources. This was particularly beneficial since I was focused on a rapidly evolving field that was changing weekly, if not daily (at times). As I transitioned into more detailed research, which included thousands of pages of legislative reports, AI tools were helpful for synthesizing information.


In the late stages of my research, AI was also beneficial for finding specific resources, because AI tools could search the web more effectively than I could have done manually. Second, AI was beneficial for refining technical aspects of my writing. For instance, I admittedly do not remember every single comma rule, nor do I have the Bluebook for legal citations committed to memory (yet). As such, AI was beneficial for finding rules in the Chicago Manual of Style and the Bluebook for specific scenarios I encountered. At times, I would even feed a single sentence into ChatGPT to ask whether it was punctuated correctly.


Finally, I used AI tools to identify areas of my writing with the incorrect tone (for instance, an argumentative presentation of factual material in a background section). Once identified, I would ultimately make my own word changes and iteratively provide my revised sentences until ChatGPT identified the tone as being neutral, or at least only subtly argumentative in a way that contributed to my overall argument. While I could have completed all of these steps unaided by AI, when one is working with a fifty-page paper, streamlining editing like this—when used with appropriate caution and oversight—helps expedite the process.



Takeaways

Reflecting on my experience, I have several key takeaways for students and academic institutions as AI becomes part of every sphere of daily life and is increasingly accepted in legal fields.


  • Keep Up with AI Developments: I would encourage students to explore the capabilities of AI and keep up with current developments. There are many legal-focused AI tools, including some integrated into existing platforms like Lexis and Westlaw. Knowing how to use these tools will become essential as law firms adopt them, as they are adapted to the legal field, and as additional safeguards are added. New associates should be familiar with cutting-edge tools while still grounding themselves in core legal concepts and the ethical use of AI. Members of the profession should also be aware of the concerns, particularly as they pertain to confidentiality and well-documented issues like hallucinations. One example area of exploration is case summaries. This is a fairly low-risk way to explore AI tools, provided students are cautious about pulling any direct conclusions or quotes from AI-generated summaries. Ways to stress-test these tools include asking follow-up questions, even something as simple as requesting pincites for information the AI has synthesized, thus enabling verification rather than reliance on AI. Another avenue students should be familiar with is synthesizing large amounts of data, as I discussed in the context of my own writing project.


  • AI Is Only a Tool: It is imperative that students remember what AI is: a tool. As such, it should almost never be treated as a one-stop shop. Rather than taking AI-generated information at face value, students must verify information, push back when AI hallucinates, and review any sources generative AI draws from. In conjunction, I would advocate that education needs to come before automation. In other words, do not ask AI to do anything you would not be capable of doing yourself. Without such an approach, it is extremely difficult to catch errors and scrutinize generative AI outputs.


  • AI Boundaries: Law schools should set clear boundaries (as exemplified in Berkeley Law’s AI policy), including both permitted and prohibited uses, and encourage responsible use of AI in appropriate settings. As someone who errs on the side of caution, I would not have used AI tools absent both explicit permission in the AI policy and encouragement from the professors in my writing course. As part of legal education, law schools should equip students to leverage emerging technologies in an ethical manner while still prioritizing efficiency and a client-centric mindset.


  • Ask Questions: Above all else, I would encourage my fellow law students to ask questions of their professors, supervisors, and—most of all—themselves. While some questions may not have simple answers, or any answer at all, asking questions is the foundation of learning and one of the defining features of being human in an increasingly automated world. 


_____________________________________________________________

Collaborate with us!

As always, we appreciate you taking the time to read our blog post.

If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog in 2026! To explore this opportunity, please contact WAI editors Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co), or Dina Blikshteyn (dina@womeninai.co).



Silvia A. Carretta and Dina Blikshteyn

- Editors
