Stanford Legal

AI, Liability, and Hallucinations in a Changing Tech and Law Environment


AI's Impact on the Legal Profession

"Legal Hallucinations" and AI Limitations

- Lawyers have submitted fictitious case citations generated by AI
- A Wyoming judge threatened to sanction lawyers for using fabricated legal references
- Chief Justice Roberts has highlighted hallucination as a serious concern

Hallucination rates observed in research:

- Between 58% and 88% in state-of-the-art general-purpose models
- Even specialized legal research engines showed hallucination rates of 20-33%

Why models hallucinate — large language models:

- Are trained on massive textual data that mixes fictional and non-fictional content
- Learn statistical patterns of language use rather than developing true understanding
- Cannot reliably distinguish fictional from real information
- Lack comprehension of legal hierarchies and context
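What "learning statistical patterns" means can be shown with a toy sketch (this example and its training text are illustrative assumptions, not from the episode): a bigram model simply counts which word follows which, with no notion of truth, fiction, or legal authority.

```python
# Toy bigram model: counts word-follower frequencies in a tiny corpus that
# deliberately mixes legal and fictional sentences. The model has no way to
# tell a real court from a fictional wizard -- it only tracks co-occurrence.
from collections import Counter, defaultdict

training_text = ("the court held the motion the court denied the appeal "
                 "the wizard held the wand")

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

# The model continues "the" with whichever word followed it most often.
print(counts["the"].most_common(1)[0][0])  # -> court
```

A real model is vastly larger, but the principle is the same: its "knowledge" is frequency of patterns in training text, which is why fictional sources can be reproduced as confidently as real ones.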

Specific Examples of AI Legal Failures

- Originated from an after-dinner speech
- Was inserted into a law review article as a joke
- Demonstrates how AI can generate false information about non-existent sources

- Treating fictional sources (like Harry Potter) with the same authority as legal documents
- Lacking understanding of basic legal concepts taught to first-year law students
- Producing absurd claims (e.g., suggesting the Nebraska Supreme Court overruled the U.S. Supreme Court)
- Confusing different legal actors (e.g., mixing up Justice Ginsburg with her daughter)

Research Methodology and Findings

- Designing legal research questions
- Manually reading and grading AI-generated essay answers
- Checking legal propositions for accuracy and grounding

- Lacking humility and always attempting to provide an answer
- Inability to acknowledge uncertainty or say "I don't know"
- Inherent design to generate responses rather than assess accuracy
- Lack of nuanced understanding of legal context and precision

Access to Justice Implications

- Struggle with trial-court-level decisions
- Misfire in the scenarios most needed by underrepresented individuals
- Provide potentially incorrect information to pro se litigants
- Risk undermining the promise of improving legal access

- There's value in the process itself, beyond pure accuracy
- Example: Veterans wanting to "be heard" even if they might lose their case
- People feel better about outcomes if they feel they've been properly listened to

Retrieval-Augmented Generation (RAG) Systems

- Potentially favoring recent case law over authoritative case law
- Not accounting for jurisdictional nuances
- Lacking nuanced understanding of legal contexts
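The first pitfall above can be illustrated with a toy retrieval sketch (the corpus, scoring function, and weights are all illustrative assumptions, not from the episode): a retriever that rewards keyword overlap plus a recency bonus, but has no notion of court hierarchy, will surface a recent trial-court opinion ahead of an older controlling precedent.

```python
# Minimal sketch of a naive retrieval ranker for a retrieval-augmented system.
# It scores documents by keyword overlap plus a recency bonus -- nothing in
# the score encodes which court is authoritative.
def score(query_terms, doc, recency_weight=0.5):
    overlap = len(query_terms & set(doc["text"].lower().split()))
    recency = (doc["year"] - 1990) / (2024 - 1990)  # normalize year to [0, 1]
    return overlap + recency_weight * recency

corpus = [
    {"id": "controlling-1995", "year": 1995,
     "text": "supreme court holding on qualified immunity standard"},
    {"id": "trial-2023", "year": 2023,
     "text": "district court discussion of qualified immunity standard"},
]

query = {"qualified", "immunity", "standard"}
ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
print(ranked[0]["id"])  # -> trial-2023: the recency bonus outranks precedent
```

Both documents match the query equally well, so the recency bonus alone decides the ranking, which is exactly the failure mode described above: recent but non-authoritative law is retrieved first.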

AI Assistant Performance Patterns

- Improve the performance of less experienced professionals
- Only minimally boost the performance of already high-performing professionals

- Bar associations focusing on general AI training may miss tool-specific risks
- Example: a Westlaw system potentially generating passages from overruled cases while removing citations
- Lack of transparency in AI-generated legal content

Stanford Law School's AI Integration

- Using AI to generate legal fact patterns
- Hosting AI and law workshops
- Conducting exercises that reveal AI language model limitations
- Planning future workshops on AI usage in legal contexts

Responsible AI Integration in Legal Practice

- E-discovery
- Document review
- Deposition summarization
