New York court fines lawyers for citing fake cases generated by ChatGPT

DragonSlayer101

Staff
What just happened? Two lawyers and their law firm have been fined $5,000 by a district judge in Manhattan for citing fake legal research generated by ChatGPT. In a written opinion, Judge P. Kevin Castel chided attorneys Steven Schwartz and Peter LoDuca for failing to do due diligence before submitting their filing and abandoning their legal responsibilities as officers of the court when they submitted "nonexistent judicial opinions with fake quotes and citations."

The ruling came a month after attorney Steven A. Schwartz admitted to submitting fake legal research generated by ChatGPT in a personal injury case against Colombian airline Avianca. Schwartz's filing cited several cases similar to the one he was fighting, but none of them were real. As it turned out, ChatGPT had fabricated them all, complete with quotes and citations that appeared to support Schwartz's submission.

When confronted with the error by Judge Castel, Schwartz admitted that it was the first time he had used ChatGPT for research and that he had no idea the content could be false. He also apologized for the mix-up and said he had no intention of deceiving the court. He further claimed that he had attempted to verify the authenticity of the citations by asking ChatGPT whether the cases were real, and had received an affirmative response from the chatbot.

In his ruling against the two attorneys and their law firm, Levidow, Levidow & Oberman, Judge Castel said there is nothing "inherently improper" about using artificial intelligence in legal scenarios. However, it is incumbent upon lawyers to ensure that their filings are factually accurate. The judge also took exception to the fact that Schwartz seemingly stood by the fake opinions even after lawyers for Avianca alerted the court that there was no record of any of the cases cited in the filing.

Following the ruling from the Manhattan district court, Levidow, Levidow & Oberman released a statement, saying it "respectfully" disagreed with the court's decision. "We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth," it said. Schwartz and his lawyers declined to comment on the ruling, while LoDuca did not respond to Reuters' request for a comment.

On the other hand, lawyers for Avianca applauded the court's decision to impose the fine and dismiss the personal injury case. It is worth noting that the dismissal had nothing to do with the fake citations, but was done because the case was filed too late.


 
And thus it was demonstrated that AI just isn't ready for work unless you happen to have models specifically tailored to your use case. This perfectly mirrors my experience of using ChatGPT, only I'm smart enough to verify its claims… and to stop using it after it fed me enough BS.
 
A lawyer verify AI's claims? 🤣 They are too busy chasing ambulances to do that. 🤣

Perhaps generative AI should be renamed "Generative BS". What we probably do not see is that the big players in AI are more interested in its capability to feed them more spyware data about its users.
 
The case was filed too late and with no effort. These lawyers should be kissing the judge's feet for only a $5k fine each. Who remembers the AI lawyer? Thank goodness that was stopped in its tracks. This is the foreshadowing I have foreseen for the entire industry, even outside the law: they will blame their incompetence on AI. The workforce is cut and you're unemployed? Blame it on AI. Your lazy lawyer files late and loses the case? Blame it on AI. People dying from autonomous vehicles? Blame it on AI! Wait, it's already here!
 
And thus it was demonstrated that AI just isn't ready for work unless you happen to have models specifically tailored to your use case. This perfectly mirrors my experience of using ChatGPT, only I'm smart enough to verify its claims… and to stop using it after it fed me enough BS.
Nope. You're actually wrong about that. And anybody who has ever tested current and previous versions of ChatGPT can attest to the fact that it makes up fake cases that are incredibly supportive of the filer's side. AI is great. Its case citation just isn't ready for primetime because it doesn't have access to Westlaw and LexisNexis.
 
Nope. You're actually wrong about that. And anybody who has ever tested current and previous versions of ChatGPT can attest to the fact that it makes up fake cases that are incredibly supportive of the filer's side. AI is great. Its case citation just isn't ready for primetime because it doesn't have access to Westlaw and LexisNexis.
On a side note, should either legal research company give AI access to their servers - even just overnight - it's over for both of them. So they have to seriously protect their data or charge a ton.
 
Nope. You're actually wrong about that. And anybody who has ever tested current and previous versions of ChatGPT can attest to the fact that it makes up fake cases that are incredibly supportive of the filer's side. AI is great. Its case citation just isn't ready for primetime because it doesn't have access to Westlaw and LexisNexis.
A good lawyer would, at a minimum, cross-reference for accuracy, potentially saving the case and avoiding fines and humiliation. I am hearing stories that some clients don't want to hire lawyers who aren't using AI as well.
 
The case was filed too late and with no effort. These lawyers should be kissing the judge's feet for only a $5k fine each. Who remembers the AI lawyer? Thank goodness that was stopped in its tracks. This is the foreshadowing I have foreseen for the entire industry, even outside the law: they will blame their incompetence on AI. The workforce is cut and you're unemployed? Blame it on AI. Your lazy lawyer files late and loses the case? Blame it on AI. People dying from autonomous vehicles? Blame it on AI! Wait, it's already here!
All the likely disgruntled client has to do is file a state bar complaint (and/or lodge a complaint to the judge in the case) and those lawyers will all be suspended. Once that happens, they could fairly easily file a legal malpractice case against them. Those lawyers ARE lucky if all they pay is $5k.
 
All the likely disgruntled client has to do is file a state bar complaint (and/or lodge a complaint to the judge in the case) and those lawyers will all be suspended. Once that happens, they could fairly easily file a legal malpractice case against them. Those lawyers ARE lucky if all they pay is $5k.
And I have previously sued several lawyers (including a Harvard Law grad) and won.
 
Nope. You're actually wrong about that. And anybody who has ever tested current and previous versions of ChatGPT can attest to the fact that it makes up fake cases that are incredibly supportive of the filer's side.
That's the problem. It really does not matter what the subject is; AI just makes crap up, as other users of TS can attest. And if it's made up, it's just useless crap with no basis in reality.
AI is great. It's case citation just isn't ready for primetime because it doesn't have access to WestLaw and LexisNexis.
IMO, that's no excuse for making up crap. If it were really worth anything, it would just say something along the lines of, 'I cannot give an answer. You need to look it up on Westlaw or LexisNexis, or contact a real lawyer.' IMO, without being able to say that, it is all just BS, and anyone using it would do well to remember the phrase "buyer beware".
 
That's the problem. It really does not matter what the subject is; AI just makes crap up, as other users of TS can attest. And if it's made up, it's just useless crap with no basis in reality.

IMO, that's no excuse for making up crap. If it were really worth anything, it would just say something along the lines of, 'I cannot give an answer. You need to look it up on Westlaw or LexisNexis, or contact a real lawyer.' IMO, without being able to say that, it is all just BS, and anyone using it would do well to remember the phrase "buyer beware".
Yes, it is now saying that it cannot give an answer, and then I just turn to LexisNexis.
 
Yes, it is now saying that it cannot give an answer, and then I just turn to LexisNexis.
IMO, it's about time.

Maybe the judge should also have levied a substantial fine on the creators of ChatGPT.

In any event, I think it wise of the judge to fine these lawyers, at least as an example to others who would rely on dubious claims.
 
IMO, it's about time.

Maybe the judge should also have levied a substantial fine on the creators of ChatGPT.

In any event, I think it wise of the judge to fine these lawyers, at least as an example to others who would rely on dubious claims.
Judge has zero jurisdiction to fine ChatGPT. Also, they'd attempt to cite Section 230 as a defense, and I'm positive of it: "We didn't post anything, your honor; the artificial brain did it, and we can't be held liable for what others post to our site." And there are their disclaimers (even lawyers don't read them; shocker, I know). Plus, the lawyers entered the prompt manually and voluntarily, assuming all risks of the feedback received. There will absolutely be growing pains as AI gets smarter/stronger. This is just some of them. (Those that don't know or respect that are bound to pay $5k+.)
 
Judge has zero jurisdiction to fine ChatGPT. Also, they'd attempt to cite Section 230 as a defense, and I'm positive of it: "We didn't post anything, your honor; the artificial brain did it, and we can't be held liable for what others post to our site." And there are their disclaimers (even lawyers don't read them; shocker, I know). Plus, the lawyers entered the prompt manually and voluntarily, assuming all risks of the feedback received. There will absolutely be growing pains as AI gets smarter/stronger. This is just some of them. (Those that don't know or respect that are bound to pay $5k+.)
At least a few lawyers I have known suffered from extreme arrogance. IMO, that was their "Waterloo."
 
Judge has zero jurisdiction to fine ChatGPT. Also, they'd attempt to cite Section 230 as a defense, and I'm positive of it: "We didn't post anything, your honor; the artificial brain did it, and we can't be held liable for what others post to our site."
What the law allows and what should be done are often two different things. ;)
And there are their disclaimers (even lawyers don't read them; shocker, I know). Plus, the lawyers entered the prompt manually and voluntarily, assuming all risks of the feedback received. There will absolutely be growing pains as AI gets smarter/stronger. This is just some of them. (Those that don't know or respect that are bound to pay $5k+.)
Given that recent surveys indicate that only 40% of the people who use the internet have tried it, IMO, it's too early to tell whether AI (especially LLM AI) will survive at all.

That said, there are some obvious areas where AI seems highly successful, such as the medical field. However, outside those limited cases, if users of AI get responses that are blatantly false, I think people will turn away from it. It will not make dolts any smarter, and even if such dolts find temporary "success" with AI, they will be found out sooner or later.

One other failing is that AI can be tricked into writing malware. IMO, it does not bode well for AI used in such a manner.
 
What the law allows and what should be done are often two different things. ;)

Given that recent surveys indicate that only 40% of the people who use the internet have tried it, IMO, it's too early to tell whether AI (especially LLM AI) will survive at all.

That said, there are some obvious areas where AI seems highly successful, such as the medical field. However, outside those limited cases, if users of AI get responses that are blatantly false, I think people will turn away from it. It will not make dolts any smarter, and even if such dolts find temporary "success" with AI, they will be found out sooner or later.

One other failing is that AI can be tricked into writing malware. IMO, it does not bode well for AI used in such a manner.
AI continues to evolve and grow smarter by the day, even if those versions are secret (and kept from lawmakers). Yes, the medical advancements alone are reason enough to keep it: tens of thousands of experiments running in a few days where it previously took months to run hundreds. Some guy got a diagnosis from AI that was more helpful than his primary care physician's. Another person saved their dog's life by asking AI when their regular vet was clueless. While it is true that AI has written the most advanced malware out there, only AI can stop said malware because it gets past the most advanced virus scanners. So, it looks like we're stuck with it, like it or not.
 
AI continues to evolve and grow smarter by the day, even if those versions are secret (and kept from lawmakers). Yes, the medical advancements alone are reason enough to keep it: tens of thousands of experiments running in a few days where it previously took months to run hundreds. Some guy got a diagnosis from AI that was more helpful than his primary care physician's. Another person saved their dog's life by asking AI when their regular vet was clueless. While it is true that AI has written the most advanced malware out there, only AI can stop said malware because it gets past the most advanced virus scanners. So, it looks like we're stuck with it, like it or not.
Where it's useful, I would not say we are stuck with it, but where it spouts crap, it remains to be seen whether we are stuck with it, IMO. My bet is that enough people will not tolerate crap responses and will refuse to use it. As I see it, its persistence will be driven by user adoption.
 
Where it's useful, I would not say we are stuck with it, but where it spouts crap, it remains to be seen whether we are stuck with it, IMO. My bet is that enough people will not tolerate crap responses and will refuse to use it. As I see it, its persistence will be driven by user adoption.
Don't judge a 1.0 by a 0.9. In other words, everything you may have seen thus far isn't quite ready for prime time. The fascinating yet scary thing is that it can learn while we sleep. Imagine if you never had to sleep and could read entire books in seconds as opposed to weeks or months. It's unstoppable. And people are inclined to use and rely upon it, as it has already passed multiple state board licensing exams.
 