
Lawyer cites fake cases invented by ChatGPT, judge is not amused

Simon Willison’s Weblog: “Legal Twitter is having tremendous fun right now reviewing the latest documents from the case Mata v. Avianca, Inc. (1:22-cv-01461). Here’s a neat summary: ‘So, wait. They file a brief that cites cases fabricated by ChatGPT. The court asks them to file copies of the opinions. And then they go back to ChatGPT and ask it to write the opinions, and then they file them?’ (Beth Wilensky, May 26, 2023)

Here’s a New York Times story about what happened. I’m very much not a lawyer, but I’m going to dig in and try to piece together the full story anyway. The TLDR version: A lawyer asked ChatGPT for examples of cases that supported an argument they were trying to make.

ChatGPT, as it often does, hallucinated wildly—it invented several supporting cases out of thin air. When the lawyer was asked to provide copies of the cases in question, they turned to ChatGPT for help again—and it invented full details of those cases, which they duly screenshotted and copied into their legal filings. At some point, they asked ChatGPT to confirm that the cases were real… and ChatGPT said that they were. They included screenshots of this in another filing. The judge is furious. Many of the parties involved are about to have a very bad time. A detailed timeline: I pieced together the following from the documents on courtlistener.com…” [Very detailed and well cited evaluation of this matter. The insights are most welcome after the brouhaha to date.]
