
Lawyer Figures Out ChatGPT Made Up Fake Cases In His Brief On Day Of Hearing

Technology
Jun 2023


The highly publicized misfortune of Steven Schwartz and Peter LoDuca may have just saved another lawyer from the same fate. MAYBE. Schwartz infamously used ChatGPT to conduct legal research in responding to a motion to dismiss, and his colleague Peter LoDuca signed his name to the affirmation before shooting it off to federal court. The problem: neither of these jokers bothered to independently look up the case cites ChatGPT provided. Instead, they compounded the error by apparently asking ChatGPT itself to confirm the research, which is like asking Donald Trump to confirm that he's checked for classified documents. Whenever the generative AI tool can dig itself deeper, it will. After ChatGPT spat out superficially real-ish but entirely fake opinions, the lawyers went ahead and filed them and now find themselves in a whole mess of trouble.

Colorado Springs attorney Zachariah Crabill might just avoid that fate. After filing a motion to set aside a summary judgment, the young attorney decided to make sure ChatGPT hadn't just tried to lead him into Sanctionsville. Based on his follow-up, he informed the court that the motion cited phony cases conjured up by ChatGPT.

"Based on the accuracy of prior validated responses, and the apparent accuracy of the case law citations, it never even dawned on me that this technology could be deceptive," Crabill said in court documents.

Crabill first used the platform to research simple questions, which it answered accurately. That was all it took for him to trust the results of his subsequent searches... even though those spat out "dozens" of non-existent cases. Crabill claims to have realized the mistake on the day of the hearing.

Unfortunately, the judge had already figured it out too and threatened to file a complaint against the attorney. We'll see how much leniency Crabill gets for not doubling down and trusting that ChatGPT would never lie to him.

Once again, it's a mistake to pin these screw-ups on ChatGPT. Lawyers have an obligation to check their work. There's nothing wrong with asking the system to answer a question, but then whatever the bot spits out, the attorney needs to hit Lexis or Westlaw or Fastcase or, hell, the physical books to check it against reality.

Preferably before filing the brief.
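
For what it's worth, the first pass of that checking can even be automated. Here's a minimal sketch in Python, assuming the Free Law Project's free CourtListener citation-lookup endpoint and its JSON response shape (both are assumptions here, and no substitute for actually pulling the cases on Lexis or Westlaw): it feeds a draft's text to the API and flags any citation that doesn't match a real opinion.

```python
# A rough sketch, not legal-tech advice: run a draft's text through
# CourtListener's citation-lookup API and flag cites it can't match.
# The endpoint and response fields below are assumptions based on the
# Free Law Project's published REST API; verify against current docs.
import requests

LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def suspect_citations(draft_text: str) -> list[str]:
    """Return citations in draft_text that match no known opinion."""
    resp = requests.post(LOOKUP_URL, data={"text": draft_text}, timeout=30)
    resp.raise_for_status()
    # Assumed shape: a list of {"citation": ..., "clusters": [...]} dicts,
    # where an empty "clusters" list means no real case was found.
    return [
        hit["citation"]
        for hit in resp.json()
        if not hit.get("clusters")
    ]

if __name__ == "__main__":
    with open("draft_motion.txt") as f:
        for cite in suspect_citations(f.read()):
            print(f"NO MATCH FOUND: {cite}")
```

Even a crude check along those lines would likely have flagged Crabill's "dozens" of phantom cases before the judge did.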

Colorado Springs attorney says ChatGPT created fake cases he cited in court documents [KRDO]

Earlier: For The Love Of All That Is Holy, Stop Blaming ChatGPT For This Bad Brief
Lawyers Who Used ChatGPT To Dummy Up Cases Throw Themselves On The Mercy Of The Court


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you're interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
