ChatGPT found guilty of fabricating cases and citations for a Manhattan lawyer's Federal...

Jimmy2x

Posts: 239   +29
Staff
Cutting corners: Legal fees certainly aren't cheap, so when we retain legal representation, we assume we're paying for that legal professional's time and expertise. Rather than provide the typical services for which he was retained, one Manhattan lawyer tried to shorten the research process by letting ChatGPT generate his case citations for a Federal Court filing. And as he found out the hard way, fact-checking is pretty important, especially when your AI has a penchant for making up facts.

Attorney Steven A. Schwartz was retained by a client to represent them in a personal injury case against Avianca Airlines. According to the claim, Schwartz's client was allegedly struck in the knee with a serving cart during a 2019 flight into Kennedy International Airport.

As one would expect in this type of legal situation, the airline asked a Manhattan Federal judge to toss the case, which Schwartz immediately opposed. So far, it sounds like a pretty typical courtroom exchange. That is, until Schwartz, who admittedly had never before used ChatGPT, decided that it was time to let technology do the talking.

In his opposition to Avianca's request, Schwartz submitted a 10-page brief citing several relevant court decisions. The citations referenced similar cases, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines. According to the New York Times' article, the last citation even provided a lengthy discussion of federal law and "the tolling effect of the automatic stay on a statute of limitations."

While it sounds like Schwartz may have come armed and ready to defend the case, there was one underlying problem: none of those cases are real. Martinez, Zicherman, and Varghese don't exist. ChatGPT fabricated them all with the sole purpose of supporting Schwartz's submission.

When confronted with the error by Judge P. Kevin Castel, Schwartz conceded that he had no intent to deceive the court or the airline. He also expressed regret for relying on the AI service, admitting that he had never used ChatGPT, and was "...unaware of the possibility that its content could be false." According to Schwartz's statements, he at one point attempted to verify the authenticity of the citations by asking the AI if the cases were in fact real. It simply responded with "yes."

Judge Castel has ordered a follow-on hearing on June 8 to discuss potential sanctions related to Schwartz's actions. Castel's order aptly described the strange new situation as "an unprecedented circumstance," littered with "bogus judicial decisions, with bogus quotes and bogus internal citations." And in a cruel twist of fate, Schwartz's case could very well end up as one of the citations used in future AI-related court cases.


 
I think ChatGPT should be programmed to randomly give a 25% false answer rate to teach people to fact-check
I think that if you're paying for a product, it shouldn't have a 25% chance of not giving you that product.
 
Wow, up to a 25% anomaly rate! While I do believe this will plateau closer to zero over time, people becoming too dependent on this technology may result in disastrous outcomes. Some beta testers were criticizing the six-month halt on this technology because they want to contribute more to the beta testing. 🙃
 
A chatbot is only a tool - like most other things - a tool that will get better.
No problem with the lawyer using it to review documents for inconsistencies and mistakes.
No problem with trying to use it to build a case or suggest strategies (though ChatGPT says it won't handle legal cases, there are easy workarounds).

He used it poorly and wrongly - as stated, this is shockingly bad - as any lawyer would have pulled up the cited case in question, printed it out, and referenced it in the submission to the judge.

The story is really about a silly lawyer.
Law firms already have search engines - they are already building purpose-built AI.
Learn and understand your tools and their limitations.
That's what good schools are doing - not banning it.
 
I always understood AI to be independent, but I've read that behind the scenes there has been much human contribution in the form of corrections, modifications, and answers to however this system works for ChatGPT - ChatGPT is powered by a hidden army of contractors making $15 per hour.

At what point is AI actually true AI? Self-learning, perhaps - growing out of the confines of the program? Can we teach it for five years and then it begins to adapt and learn on its own? I don't see it. Have I got the definition of AI completely wrong? People keep spluttering "AI" here and there, but all I see is a coded program or piece of software with confined limitations that it'll never grow out of.
 
With this kind of problem, it sounds like it needs to be blocked by all municipal, state, and federal agencies, as well as their websites; otherwise, the accuracy of the same will be highly questionable ....
 
I'm using a free version - trialware? Anyway, that brings up another point: paying to use chatbots 😂😂😂
You aren't paying to use a chat bot, you are paying to use a tool that creates content. I've used it to make product descriptions for websites. I use chatGPT as an AI assistant and it is well worth the $15/m I pay for it.
 
I think ChatGPT should be programmed to randomly give a 25% false answer rate to teach people to fact-check
If it could be programmed to give a 25% false answer rate, it could be programmed to deliver a very low or 0% false answer rate. The fact is that ChatGPT and other LLMs work by hallucinating 100% of the time, it just so happens that most of the time it gives good results. Many times I have asked it to link to its sources, and it just makes up a URL that looks convincing but doesn't actually work (though, I was using an open source version, not actual ChatGPT). The problem isn't the tool, though, it's the user. Disclaimers that the tool can produce harmful, misleading, or false content should be on the service, but beyond that, it's ultimately up to humans to figure out how to use the tool responsibly, just like any other tool out there.
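The fabricated-URL behavior described above is easy to guard against mechanically. Below is a minimal sketch (the helper names and URL are my own illustrations, not from the thread) of checking whether a link an LLM cites is even structurally valid, and whether it actually resolves, before trusting it:

```python
# Sanity-check URLs an LLM cites before trusting them.
# looks_plausible() and resolves() are hypothetical helper names.
from urllib.parse import urlparse
from urllib.request import Request, urlopen
from urllib.error import URLError

def looks_plausible(url: str) -> bool:
    """Cheap structural check: a real citation link needs a scheme and a host."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def resolves(url: str, timeout: float = 5.0) -> bool:
    """Actually fetch the URL; fabricated citations usually 404 or fail DNS."""
    if not looks_plausible(url):
        return False
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status < 400
    except (URLError, ValueError, OSError):
        return False
```

Of course, a resolving URL only proves the page exists, not that it supports the model's claim - the human still has to read the source, which is the commenter's point.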
 
You aren't paying to use a chat bot, you are paying to use a tool that creates content. I've used it to make product descriptions for websites. I use chatGPT as an AI assistant and it is well worth the $15/m I pay for it.
If you've got a GPU with enough VRAM (the more the better), you can use the oobabooga text-generation web UI, download a model from HuggingFace, and get very good results for free. Plus, the conversation remains local (private), unlike conversations with ChatGPT. If you don't have a GPU, you can still use CPU-only versions of the models, which can produce results just as good; they just run slower and consume a lot of RAM.
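For those who'd rather skip the web UI, a local setup can be sketched in a few lines with the Hugging Face `transformers` library (an assumption on my part - the web UI mentioned above wraps similar machinery; `distilgpt2` is just a small example model, and `pip install transformers torch` is required):

```python
# Minimal local text generation: nothing leaves your machine.
from transformers import pipeline

# Downloads the model from HuggingFace on first run, then runs locally.
# Swap "distilgpt2" for any larger causal LM your hardware can hold.
generator = pipeline("text-generation", model="distilgpt2")

out = generator("The court held that", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```

Larger models give better output but need more VRAM (or RAM, on CPU), which matches the commenter's trade-off.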
 
It's inevitable that low grade lawyer jobs will disappear and be replaced by bots, maybe not ChatGPT, but certainly more accurate systems that will appear. How many people would go to a high priced lawyer when a lawyer bot could evaluate your claim in seconds and charge you peanuts? I don't think it will be long before judges are replaced by bots also. You'd be able to dispute the outcome and go before a "real" judge but there'd be consequences if you're still found guilty.
 
If you've got a GPU with enough VRAM (the more the better), you can use the oobabooga text-generation web UI, download a model from HuggingFace, and get very good results for free. Plus, the conversation remains local (private), unlike conversations with ChatGPT. If you don't have a GPU, you can still use CPU-only versions of the models, which can produce results just as good; they just run slower and consume a lot of RAM.

Thanks, any other references on how to engage this new tech and deploy privately?

How are you using it currently?

Much appreciated,
 