In courtrooms across America, a quiet crisis is unfolding. Lawyers are submitting legal briefs that cite judicial opinions and case law that were never written. These aren't simple typos. They are complete fabrications, generated by artificial intelligence and filed by attorneys who failed to check the work.
The pattern is now familiar. A lawyer, pressed for time, uses a chatbot like ChatGPT to draft a motion. The tool produces polished, convincing legal prose, complete with citations. The lawyer files it. Then a judge or opposing counsel discovers the cited cases don't exist.
New York attorney Steven Schwartz was fined $5,000 in 2023 for submitting a brief with six fake cases invented by AI. In Colorado, Zachariah Crabill was suspended for filing AI-generated motions containing fabricated quotes from judges in a child custody dispute. These are not isolated events. Similar incidents have been documented in federal, state, and bankruptcy courts, leading to sanctions, fines, and malpractice concerns.
The core issue is what developers call "hallucination"—the tendency of AI systems to generate plausible but false information. These systems predict likely sequences of words; they have no internal mechanism for verifying facts. A citation that looks perfect, down to the volume and page number, can be entirely invented.
Judges have begun to react, but the response is a patchwork. Some courts now require lawyers to certify that any AI-generated content has been verified by a human; the Eastern District of Texas was an early adopter. Others have no specific rules at all. The American Bar Association has said that existing ethics rules already apply to AI use, but that hasn't stopped the flow of bogus filings.
New legal tech products from companies like Thomson Reuters and LexisNexis aim to tether AI output to verified case-law databases, but they come with subscription costs. Many solo practitioners and small firms, who initially turned to free AI tools to save money, can't afford them.
The stakes are escalating. Disciplinary actions are mounting, and malpractice insurers have warned that coverage could be denied when a lawyer fails to verify AI-generated work. The problem has also surfaced in criminal and immigration proceedings, where the consequences of a fabricated citation can be far graver.
As the underlying technology improves, the fabrications may become harder to spot. The legal system's foundation relies on the trust that citations are real. That trust is now being tested, one AI-generated brief at a time.
Source: WebProNews