AI-generated legal filings are making a mess of the judicial system

Cal Jeffrey

In context: Large language models have already been used to cheat in school and spread misinformation in news reports. Now they're creeping into the courts, fueling bogus filings that judges must catch amid heavy caseloads – raising new risks for a legal system already stretched thin.

A recent Ars Technica report detailed a Georgia appeals court decision that highlights a growing risk for the US legal system: AI-generated hallucinations creeping into court filings and even influencing judicial rulings. In the underlying divorce dispute, the husband's lawyer submitted a draft order peppered with citations to cases that do not exist – likely invented by generative AI tools such as ChatGPT. The trial court signed off on the document and went on to rule in the husband's favor.

Only when the wife appealed did the fabricated citations come to light. The appellate panel, led by Judge Jeff Watkins, vacated the order, noting that the bogus cases had undermined the court's ability to review the decision. Watkins didn't mince words, calling the citations possible hallucinations produced by generative AI. The court fined the husband's lawyer $2,500.

That might sound like a one-off, but a lawyer was fined $15,000 in February under similar circumstances. Legal experts warn it is likely a sign of things to come. Generative AI tools are notoriously prone to fabricating information with convincing confidence – a behavior labeled "hallucination." As AI becomes more accessible to both overwhelmed lawyers and self-represented litigants, experts say judges will increasingly face filings filled with fake cases, phantom precedents, and garbled legal reasoning dressed up to look legitimate.

The problem is compounded by a legal system already stretched thin. In many jurisdictions, judges routinely rubberstamp orders drafted by attorneys. However, the use of AI raises the stakes.

"I can envision such a scenario in any number of situations where a trial judge maintains a heavy docket," said John Browning, a former Texas appellate judge and legal scholar who has written extensively on AI ethics in law.

Browning told Ars Technica he thinks it's "frighteningly likely" these kinds of mistakes will become more common. He and other experts warn that courts, especially at the lower levels, are ill-prepared to handle this influx of AI-driven nonsense. Only two states – Michigan and West Virginia – currently require judges to maintain a basic level of "tech competence" when it comes to AI. Some judges have banned AI-generated filings altogether or mandated disclosure of AI use, but these policies are patchy, inconsistent, and hard to enforce due to case volume.

Meanwhile, AI-generated filings aren't always obvious. Large language models often invent realistic-sounding case names, plausible citations, and official-sounding legal jargon. Browning notes that judges can watch for telltale signs: incorrect court reporters, placeholder case numbers like "123456," or stilted, formulaic language. However, as AI tools become more sophisticated, these giveaways may fade.

Researchers, like Peter Henderson at Princeton's Polaris Lab, are developing tools to track AI's influence on court filings and are advocating for open repositories of legitimate case law to simplify verification. Others have floated novel solutions, such as "bounty systems" to reward those who catch fabricated cases before they slip through.

For now, the Georgia divorce case stands as a cautionary tale – not just about careless lawyers, but about a court system that may be too overwhelmed to track AI use in every legal document. As Judge Watkins warned, if AI-generated hallucinations continue slipping into court records unchecked, they threaten to erode confidence in the justice system itself.

Image credit: Shutterstock


 
A strange combination of acceptance and fatigue will likely set in before the issue of hallucinations sees meaningful progress.

People will need to decide for themselves whether they want to grapple with the truth or be fed comfortably curated misinformation, specifically designed to resonate with their own internal biases.

A problem as old as communication itself.
Only this time your average Joe can become the entire propaganda machine.
 
First offense - 6 months to a year unable to practice law.
Second offense - disbarred.

Or something of the like, I dunno. As Barney said, "Nip it in the bud!"
It would be nice to see, but the US justice system has a real serious issue with enforcing punishments on people (unless you have w33d). People go from one probation to another and just refuse to fix their lives while the courts keep using a feather pillow to slap their wrists. For those in the system it's even worse. Judges are horrible at keeping each other in line.

I can't see them going hardline "stop this or you will be disbarred with prejudice". Maybe if the AI pollution gets so bad that judges can't do their jobs anymore...
 
First offense - 6 months to a year unable to practice law.
Second offense - disbarred.

Or something of the like, I dunno. As Barney said, "Nip it in the bud!"
Not only that, the client ought to be able to sue the lawyer for malpractice and return of fees. If I found out the person I was paying big dollars to represent me in one of the most important disputes or cases of my life was turning in garbage work from an AI, I'd be the first one asking the local oversight board to disbar them.

Edit: and billing fraud, since I'm betting anyone who is doing this is still charging for the full normal time it takes a human to do the real job.
 
Honestly, this might be a good thing. If you piss off the lawyers and the judges, then they WILL do something about it. They might be exactly who AI needs to make mad in order for regulation to be pushed through, because all this AI slop in every aspect of our lives is getting annoying.
 