
AI and Ethics in the Legal Community: A North American Lens


This article explores how AI is transforming the legal industry by increasing efficiency in tasks like legal research and document drafting, while also raising serious ethical concerns. AI tools can enhance speed and productivity, but reliance on systems trained on biased historical data can perpetuate injustice, particularly in areas like sentencing. Institutions are starting to respond, with organizations like the American Bar Association and governments proposing regulations and guidance, yet the law is lagging behind the technology. High-profile misuses, such as lawyers citing fake cases generated by AI, highlight the risks. The core message is that trust, transparency, and human accountability must remain central, and that AI should assist, not replace, legal judgment.


This blog is written by Mishka Nizar. Mishka is a founder of Mirihana Legal, a Toronto-based firm specializing in litigation and small claims, and an author of The Art of AI. With a passion for justice and innovation, Mishka explores the intersection of law, ethics, and emerging technologies, advocating for responsible AI use in legal practice across North America.

AI is rewriting the rules in every industry — and law is no exception. In courtrooms and law offices across the world, from North America to Asia, artificial intelligence is being adopted to streamline everything from legal research to contract review. But as the tools become more powerful, so do the ethical questions.


At first glance, the promise of AI is efficiency. Algorithms can scan hundreds of cases in seconds. Tools like ChatGPT or Harvey.ai can help draft briefs or summarize evidence. For solo practitioners or understaffed firms, that kind of speed is invaluable. But speed without scrutiny is dangerous — and in the legal field, it can be catastrophic.


Bias Is the Elephant in the Courtroom

Every AI system is only as neutral as the data it’s trained on. In law, this is a serious issue. If historical case data reflects racial or socioeconomic bias — and let’s be honest, it often does — then any AI trained on that data could reinforce injustice under the guise of objectivity. We've already seen how predictive policing tools in the U.S. have disproportionately targeted marginalized communities. The same risk exists in courtrooms when AI is used to assess risk, draft sentencing recommendations, or even predict case outcomes.

So what happens when we lean too heavily on these tools? Who’s accountable when the algorithm gets it wrong?


Legal Bodies Are Catching Up — Slowly

The American Bar Association now urges lawyers to understand the AI tools they use — not just how they work, but what risks they carry. In Canada, the federal government has proposed the Artificial Intelligence and Data Act (AIDA), which would regulate “high-impact” AI systems and hold developers accountable for harm. But as of now, most regulations are still in their early stages. The technology is racing ahead of the rules.


Meanwhile, real-world cases are already surfacing. In New York, a lawyer faced disciplinary action after relying on ChatGPT to draft a motion — which cited nonexistent case law fabricated by the tool. In Canada, legal professionals are quietly debating whether AI-generated content should ever be submitted without human review.


It’s Not Just About Ethics — It’s About Trust

The legal system only works if people believe it’s fair. If AI is making decisions that affect someone’s freedom, their business, their children — people deserve to know how and why that decision was made. Black-box algorithms with no transparency or explanation don't belong in court.


That’s not to say we shouldn’t use AI. But we need to stay rooted in first principles: justice, fairness, and accountability.


What’s Our Role?

For lawyers, paralegals, and law students, the responsibility is growing. We need to ask tough questions about the tools we use. We need to explain to clients how AI fits into their case, and when it doesn’t. We need to remain the final decision-makers — not the software.

And for everyone else? Think about it this way: would you want a bot deciding the outcome of your lawsuit? Do you know whether your data might have been used to train an AI that could one day help determine someone’s sentence?


The Future Isn't Set — Yet

We’re in a defining moment. Legal AI is still young. The rules haven’t hardened. That means we have a chance — right now — to shape how it’s used.


Should there be mandatory audits of legal AI tools? Should AI be banned in certain types of cases? Should clients be able to opt out?

These aren’t abstract questions. They’re urgent.


As someone working at the intersection of law and technology here in Toronto, I see the opportunities. But I also see the risks. Let’s not sleepwalk into an automated justice system. Let’s ask the right questions — while we still can.


Collaborate with us!


As always, we appreciate you taking the time to read our blog post.

If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog. To explore this opportunity, please contact WAI editors Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co), or Dina Blikshteyn (dina@womeninai.co).


Silvia A. Carretta and Dina Blikshteyn

- Editors
