When AI Gets Too Real: Courtrooms Grapple with Deepfake Risks—and Legal Hallucinations

Picture this: a highly respected expert warns a court about the dangers of AI-generated misinformation—only to be discredited because he used AI to prepare his testimony, and it produced fake citations. That’s not a law school hypothetical. It’s what happened earlier this year in Kohls v. Ellison, a First Amendment challenge out of Minnesota.
The irony was as sharp as it was instructive: a case about the risks of generative AI undermined by those very risks. And it’s far from an isolated incident. Across the country, courts are increasingly confronted with deepfakes, fabricated evidence, and “hallucinations” generated by AI tools. The legal system is scrambling to adapt.
The Case That Sparked Attention
In Kohls v. Ellison, Minnesota’s Attorney General defended a law aimed at limiting election-related deepfakes. To support the law, the state relied on a declaration from a Stanford professor, an expert on misinformation. But the declaration contained fabricated references—AI-generated articles and citations created by GPT-4o.
The court excluded the declaration and sharply criticized the expert. The judge noted that declarations are signed under penalty of perjury, and reliance on hallucinated sources “shattered credibility.” The ruling underscored that courts expect expert witnesses and attorneys alike to verify everything, no matter how sophisticated the technology.
A Broader Pattern of AI Missteps
This wasn’t the first—and won’t be the last—AI-related courtroom debacle.
– New York, 2025: A damages expert admitted using Microsoft’s Copilot to generate analysis. The court ran the same prompts and got different results, highlighting AI’s inconsistency. The judge warned that attorneys must disclose AI use and thoroughly vet any AI-derived work.
– Federal Courts, 2023–25: Several attorneys, including two from Morgan & Morgan, submitted filings citing non-existent cases generated by ChatGPT. Courts imposed or threatened sanctions, and the firm issued a stark warning to staff: AI can invent case law, and it's your job to catch it.
– Georgia: An appellate court vacated a trial court order after discovering that the underlying pleadings included fabricated case citations. The fallout made national news and raised urgent questions about legal ethics in the AI era.
How Judges and Rulemakers Are Responding
Courts are not standing still. Judicial warnings, procedural reforms, and proposed rule changes are multiplying:
– UK Warning: A High Court judge warned that lawyers who cite fake AI-generated cases could face contempt charges—or even criminal prosecution for perverting the course of justice.
– Federal Rules Advisory Committee: In the U.S., proposed amendments to Rule 901 and related evidence rules would require heightened authentication for machine-generated material, placing the burden on proponents to prove reliability.
– State-Level Task Forces: States like California, Texas, and Delaware are forming commissions to study how AI affects evidence, attorney conduct, and consumer protection. Bar associations are also drafting guidance on responsible AI use in litigation.
– Judicial Practice Guides: Some judges now explicitly ask in pretrial conferences whether AI has been used in drafting or discovery. Disclosure obligations are becoming the norm, not the exception.
Why Even Smart Lawyers and Experts Slip
How do seasoned professionals fall into these traps? Three recurring issues stand out:
1. AI Is a Word Generator, Not a Database
Unlike Westlaw or Lexis, a generative AI model doesn't retrieve from a closed, verified database of cases. It predicts plausible-sounding text, and that text can include citations to cases that do not exist.
2. Hallucinations Hide in Plain Sight
AI often blends accurate information with subtle fabrications. Without line-by-line verification, fabricated references slip by unnoticed.
3. The Speed Trap
Lawyers and experts work under intense deadlines. AI promises efficiency—but in the rush, caution gets sidelined, and verification falls short.
Practical Guardrails for Attorneys
To avoid embarrassment—and sanctions—lawyers should adopt a disciplined approach:
– Verify Every Citation: Treat AI outputs as a draft, not a source. Confirm every case, statute, and article independently.
– Disclose AI Use: If court rules or local orders require it, be upfront about AI assistance.
– Use Checklists: Build review protocols into your drafting process, including a mandatory “source check.”
– Train Your Team: Paralegals, associates, and experts must understand AI’s limits as well as its benefits. AI literacy should be as essential as citation literacy.
– Preserve Chain of Custody: For evidence that could be AI-generated (e.g., video or audio), maintain rigorous custody records to defend authenticity.
Looking Ahead: Deepfakes and the Courtroom of the Future
The Kohls v. Ellison incident is a cautionary tale, but deepfakes loom as an even bigger storm on the horizon. Imagine:
– A fabricated video showing a parent behaving abusively in a custody dispute.
– An altered recording of a business executive admitting fraud.
– A political campaign derailed by AI-generated speech clips.
Courts will need to rely heavily on forensic experts, AI-detection tools, and stricter evidentiary standards. Just as DNA evidence transformed criminal trials, deepfake detection technology may soon become a routine part of courtroom practice.
Final Thoughts
AI can be a powerful tool in law practice, but when misused, it creates risks that ripple far beyond embarrassment—it can derail cases, damage reputations, and undermine justice. The lesson from recent cases is clear: human judgment, diligence, and verification remain the backbone of credible advocacy.
At Adkins Law, PLLC (Huntersville, NC), we stay on the cutting edge of legal technology and evidence law. We understand the dangers of deepfakes and AI-generated misinformation, and we help our clients protect their rights in an era where not everything that looks or sounds real can be trusted. Whether you’re facing a custody battle, a civil dispute, or a case where digital evidence is critical, our team ensures that authenticity, accuracy, and truth remain at the center of your case. Contact us today.
Disclaimer: This website provides general information and discussion about legal topics. The content is not legal advice and should not be relied upon as such. Always seek the advice of a licensed attorney for legal matters.

