Judge Contemplates Whether To Sanction Lawyers Who Used ChatGPT After Hearing Them Out
">
Every lawyer with an internet connection has by now heard about Steven Schwartz, the lawyer who submitted a ChatGPT-prepared brief that contained fake case law.
Schwartz and his partner, Peter LoDuca, filed an opposition to a motion to dismiss their client's case. Opposing counsel filed a response stating that they could neither locate nor verify the cases cited in the opposition. When the presiding judge, Kevin Castel, caught on, he was not pleased. Last Thursday, Schwartz and LoDuca, represented by counsel, appeared before the judge to argue why they should not be sanctioned. Now everyone waits to see what punishment the judge will hand down, if any.
Based on reports by the New York Times, Daily News, Reuters, and the Courthouse News Service, both Schwartz and LoDuca had little to say other than that they made a mistake by relying on ChatGPT, which they initially thought was a kind of search engine. They did not think it was possible that ChatGPT could make up a case.
After hearing their arguments as well as their apologies, Castel said that he would issue a written ruling at a later date. But he said that the lawyers' use of ChatGPT was "the beginning of the narrative, not the end" and was only part of the problem.
What did he mean by that? On April 11, Castel issued an order directing LoDuca to file an affidavit annexing copies of the nonexistent cases cited in his opposition to the defendant's motion to dismiss, warning that failure to do so would result in dismissal of the case. Two weeks later, LoDuca filed the affidavit, stating that he had annexed copies of eight of the cases while admitting that he was unable to locate the others, and noting that one case was an unpublished opinion.
So it would appear that Schwartz and LoDuca were on notice about the nonexistent cases after the April 11 order but continued to rely on them in LoDuca's April 25 affidavit, even going so far as to include copies or excerpts of the fake opinions. Not only could this result in substantial penalties, but they could also have to answer to their state bar, which would put their licenses in jeopardy.
They argue that the ChatGPT cases were not immediately suspect because they included official-looking features such as a citation, a case caption, a docket number, internal citations, and a list of the judges issuing the opinion. Schwartz asked ChatGPT whether one of the cases was real. It replied that the case was real and could be verified on legal research databases such as Lexis and Westlaw. So Schwartz continued to use ChatGPT despite his suspicions that the cases might be fake and despite the court's order.
Their lawyers argued that while Schwartz and LoDuca exercised poor judgment by failing to independently verify the cases, that lapse is not enough to warrant sanctions. In addition, Schwartz has stated that he has already suffered professionally from the widespread publicity, and that sanctions would therefore be unnecessarily punitive.
There is little background as to why Schwartz, a lawyer with 30 years of practice, used and relied so heavily on ChatGPT, especially considering its novelty. He claims that he learned about it through his children and through numerous articles about artificial intelligence tools being used in professional settings, including law firms. The concept of AI-based legal research tools is not new; various tools have been introduced over the years, although none have really taken off.
But it is probably best for them to say as little as possible, as the public will scrutinize and criticize any excuse they offer.
As a silver lining, they performed an indirect public service for the profession. This proves that the only thing artificial intelligence can produce is artificial case law. Lawyers everywhere can breathe a sigh of relief that their jobs will not be in danger anytime soon.
Also, this has led a judge in Texas to require lawyers to certify either that their filings were not produced by artificial intelligence or that a human checked any AI-generated content for accuracy.
So are sanctions warranted for Schwartz and LoDuca? If they had owned up to their mistake immediately after the judge's April 11 order, chances are they would have simply received an earful from the judge and maybe a modest fine. But their response to the order is what could get them into trouble if the judge believes that they intentionally tried to cover up one lie with another.
Steven Chung is a tax attorney in Los Angeles, California. He helps people with basic tax planning and resolving tax disputes. He is also sympathetic to people with large student loans. He can be reached via email at stevenchungatl@gmail.com, or you can connect with him on Twitter (@stevenchung) and on LinkedIn.