
For The Love Of All That Is Holy, Stop Blaming ChatGPT For This Bad Brief

Technology
May 2023


"Here's What Happens When Your Lawyer Uses ChatGPT," blasted the New York Times headline to the delight of tech skeptic lawyers everywhere. A seemingly quite irate Judge Kevin Castel of the Southern District of New York issued a show cause order directed at the law firm of Levidow, Levidow & Oberman and its attorneys Peter LoDuca and Steven Schwartz asking why he shouldn't impose sanctions after a number of cases in the firm's recent filing turned out to be made up -- another of ChatGPT's well-documented court case hallucinations.

An airline defendant filed a motion to dismiss a personal injury claim that had wound its way into federal court. The case belonged to Schwartz, but after it was removed to federal court, LoDuca became counsel of record and filed the response to the motion to dismiss under his name. The case deals with a lot of thorny issues -- competing statutes of limitations, the Bankruptcy Code, international treaties -- but the response managed to find on-point citations for every procedural hurdle. Pretty compelling stuff!

The United States Court of Appeals for the Eleventh Circuit specifically addresses the effect of a bankruptcy stay under the Montreal Convention in the case of Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019), stating "Appellants argue that the district court erred in dismissing their claims as untimely. They assert that the limitations period under the Montreal Convention was tolled during the pendency of the Bankruptcy Court proceedings. We agree. The Bankruptcy Code provides that the filing of a bankruptcy petition operates as a stay of proceedings against the debtor that were or could have been commenced before the bankruptcy case was filed. 11 U.S.C. § 362(a)....

And the purported quote from the Eleventh Circuit, detailing the precise result the plaintiff sought under this precise instance of a bankruptcy-impaired Montreal Convention statute of limitations, goes on with multiple internal citations for another half a page.

Unfortunately, this case doesn't exist. And some of the internal citations don't either. Perhaps finding a whole page of directly quotable support for a hyperspecific legal question should've tipped someone off?

Schwartz had asked the buzzy AI application to give him a research assist, unaware that those of us covering generative AI have flagged its propensity to flagrantly make stuff up to please its user. And it's very, very confident in its output regardless of its ability to back it up, which is why the epithet "Mansplaining as a Service" rings so true. That's also why the conversation among the tech-savvy has advanced from what generative AI can accomplish to how to put ethical and professional guardrails on this thing.

But while the media -- and the social media zeitgeist -- spent the weekend ripping ChatGPT, this isn't about generative AI. They can chase the clicks with their GPT headlines, but this is a simple lawyering story.

This isn't any different from turning in a brief with red-flagged cases or just slapping the first 10 results from a database search into the filing and calling it a day. The problem wasn't the search results that ChatGPT produced; it was the lawyer never bothering to read the full opinions in the cases he chose to cite.

That's why I'm not really buying the Schwartz defense that he had never really used the app and "therefore was unaware of the possibility that its content could be false." It doesn't matter whether the results were right or wrong -- you still have to read the frickin' opinions! Back in the day, a Westlaw or Lexis search would rarely turn up the right result on a lawyer's first stab at it, and you had to check to make sure the opinion really was useful.

Adding generative AI to the mix of research tools doesn't alter that calculus. It just -- when deployed with the right protections -- reduces the number of tries an attorney will need to get the right cases. That's going to make a significant improvement in the accuracy and efficiency of lawyering. But it's not replacing the attorney.

Don't blame AI for lawyering fails. Like a lot of things in tech, the source of the error here lies between the keyboard and the chair.

Here's What Happens When Your Lawyer Uses ChatGPT [New York Times]


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you're interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
