A lawyer’s citation of court decisions fabricated by ChatGPT shows the peril of relying on the artificial intelligence chatbot without proper safeguards.
New York lawyers Steven Schwartz and Peter LoDuca face a June 8 hearing on potential sanctions after a court brief they submitted cited six nonexistent cases. Schwartz acknowledged that ChatGPT invented the cases, even though he initially believed the tool had surfaced authentic citations, according to an affidavit he filed May 25 in Manhattan federal court.
“Maybe this is an extreme example” of lawyer over-reliance on ChatGPT, said Drew Simshaw, a Gonzaga University law professor. “This might not be the last time that we see an integration too quickly, and with too high of reliance—and additional regulatory guidance is going to be warranted.”
ChatGPT, a chatbot from the artificial intelligence company OpenAI, is among the buzziest of generative AI tools that promise to ease lawyer workloads with the technology’s ability to carry on human-like conversations, quickly compile and summarize information, and answer questions through machine learning.
Advances in technology are no substitute for checking work, said Bruce Green, a Fordham Law School professor. Attorney professional conduct rules include provisions on technological competence and the responsibilities of supervisory lawyers.
“This isn’t really a new problem; lawyers have offloaded or delegated work for years,” Green said. “You have to make sure if it’s work done by a non-lawyer, by someone who’s not licensed in the jurisdiction, that the work is competently done, and you have to own it.”
Schwartz and LoDuca, who have each practiced at the law firm Levidow, Levidow & Oberman since the 1990s, didn’t immediately respond to requests for comment Tuesday. The New York Times first reported on the case May 27.
‘Bogus Quotes’
Schwartz used ChatGPT while representing Roberto Mata, who alleged that an Avianca Airlines employee injured him on a New York-bound flight by banging his knee with a serving cart.
U.S. District Judge P. Kevin Castel first raised problems in the case on May 4, noting that Mata’s opposition to Avianca’s motion to dismiss included citations to “nonexistent cases” and “bogus quotes.”
Those cases and quotes were provided by ChatGPT, according to Schwartz, who said he did the research for the brief without LoDuca.
Schwartz had never before used ChatGPT for legal research and was “unaware of the possibility that its content could be false,” he said in the affidavit. LoDuca said in a separate court submission that he had no reason to doubt Schwartz’s research.
Federal courts hold the authority to issue sanctions for attorney misconduct. Schwartz could also be referred to state disciplinary authorities, according to a May 26 order.
While the case may be without precedent, it raises bigger questions for the industry, Green said. For instance, should lawyers have to disclose the extent to which they rely on AI tools in their work?
Law firms will have to address the use of generative AI through internal policies, said Tom Sharbaugh, a Penn State Law professor and former Morgan Lewis & Bockius managing partner of operations. Firms need policies “because you have the likelihood that people are going to do things the easy way,” he said.
‘Early Days’
AI is the latest in a long line of tech advancements now assisting lawyers on tasks like document discovery and contract reviews. Many aspects of legal work could become automated through the use of AI, according to a March Goldman Sachs report.
The Schwartz case shows the need for “a lot of care in these early days,” said Simshaw, who writes about the intersection of legal tech and ethics.
“From a regulatory standpoint, there’s a lot of questions out there,” he said. “The legal profession has lagged behind in regulating technology. States usually adopt ethics rules closely modeling the American Bar Association’s model rules of professional conduct. Those really haven’t been updated since 2012 when cloud computing was the new buzzword.”
The ChatGPT snafu is part of a larger story that involves large-scale benefits and potential drawbacks of a new technology, said Ralph Baxter, a former chairman at Orrick Herrington & Sutcliffe who now advises law firms. The technology has the potential to upend the legal profession, with significant downstream effects on law firm staffing and finances, he said.
“All large firms expect their people to become proficient in using generative AI,” Baxter said. “Law firms are going to need to consider, ‘How do we remain mindful of the risks and limitations, but how do we make the most of this?’”
The case is Mata v. Avianca, Inc., S.D.N.Y., 22-01461, 5/26/23.