Generative AI is rewriting how lawyers work at record speed, but it is also quietly flooding Courts with phantom citations and invented case law along the way. In recent months, a number of judges across the country have sanctioned attorneys for submitting briefs laced with fictitious case law conjured by generative AI tools like ChatGPT. These so-called "hallucinations" aren't just embarrassing; they can constitute professional misconduct, jeopardize your company's interests, and damage the credibility of the legal profession in an already skeptical courtroom environment. In fact, since mid-2023, more than 120 cases of AI-driven legal "hallucinations" have been identified, with at least 58 occurring so far in 2025.
Court filings, from trial court motions to appellate briefs, are increasingly peppered with AI‑generated content. When used responsibly, generative AI can help lawyers draft faster, research more efficiently, and focus on higher-value strategic work. However, judges across the country, from Federal District Courts to State Supreme Courts, are flagging submissions riddled with citations that never existed. In one notable example reported by the ABA Journal, a special master imposed a $31,100 sanction against a firm for relying on bogus AI research in a filed brief. This underscores the growing legal and financial risks of unverified AI use, whether intentional or not.
The surge in AI-generated Court filings reveals something deeper about the legal profession's evolving relationship with technology. Generative AI does not deal in certainties; it is predicated on probabilities, so its mistakes are features, not bugs. That reality demands a deliberate, defensible process for integrating AI tools into legal workflows. Too often, attorneys treat AI content like a finished product rather than what it really is: the work of a young virtual lawyer that still needs supervision, guidance, and a thorough cite check.
The solution to managing AI-hallucinated Court filings isn't to ban AI from legal practice; it's better training. Attorneys must be trained to use AI tools appropriately and responsibly. Too few legal teams are taught how to develop a consistent and repeatable prompting process or how to methodically verify the accuracy of AI-generated citations before including them in a Court filing. In fact, many Courts have taken matters into their own hands by mandating this process and requiring firms to affirmatively certify that they fact-checked all AI-generated sources. These emerging mandates signal a broader shift: using AI is no longer optional, but verifying its output is non-negotiable.
Effective legal training begins with a mindset shift: AI-generated content should be verified, not trusted. This requires a strong prompting process with a foundational principle of skepticism toward AI output.
Successful training programs incorporate three fundamental principles:
- Tailored prompt training for legal-specific use cases, including live examples of hallucinations and examples of manipulation, jailbreaking, or rooting of AI tools;
- Verification techniques that instill muscle memory, making it second nature for legal professionals to fact-check all generative AI output before relying on it; and
- Ongoing, customized education, similar to CLE, ensuring legal teams stay current as AI tools evolve and as different platforms exhibit differing strengths, limitations, and risks.
AI isn't the problem; poor process is. The spike in hallucinated filings is not a technology failure, but a failure to train lawyers to use AI tools effectively and appropriately. This moment presents an opportunity for in-house legal departments to build smart, defensible workflows that make AI an asset, not a liability.
Baker Donelson's AI Team works regularly with in-house counsel across the country to design prompting protocols, verification safeguards, and training programs that reduce risk and reinforce credibility. If your team is ready to embrace AI while protecting your brand, your counsel, and your bottom line, we are here to help. As Justice Cardozo once said, "The law, like the traveler, must be ready for the morrow. It must have a principle of growth." We can train your in-house legal department to grow using AI safely, strategically, and with credibility intact.
The Firm's attorneys are prepared to assist you in navigating the complexities surrounding AI tools and their proper use. Please reach out to Justin Daniels, Matt White, or any member of Baker Donelson's AI Group.