OK. Let’s get into the bogus ChatGPT brief debacle.
You might think that certain things don’t need to be said:
- Your superhero cape will not enable you to fly.
- Eating laundry detergent is bad for your health.
- Don’t let ChatGPT write your filing to the court.
You’d be wrong.
Of course, I’m referring to the recent revelation that a lawyer submitted a brief in federal court containing several nonexistent cases — thanks to ChatGPT and some seriously bad judgment.
Here’s What Happened in the ‘Bogus’ Case Citations Snafu
New York attorney Steven Schwartz of Levidow, Levidow & Oberman used ChatGPT to write a brief and submitted it to the U.S. District Court for the Southern District of New York without verifying that the cases ChatGPT cited and quoted were real.
Well, that’s not entirely accurate. Schwartz did ask ChatGPT whether the cases were real. Apparently, he didn’t know that ChatGPT has a track record of confidently making false statements, a phenomenon commonly called “hallucination.” (Artificial intelligence is only as good as the information used to train it.)
When did the lawyer realize his mistake?
Probably after the opposing party submitted a letter to the judge questioning the authenticity of the cases cited in the brief. In May, the judge confirmed that at least six cases cited “appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” and set a date for a sanctions hearing.
Oh, geez. Is he a new lawyer?
Schwartz has over 30 years of experience.
What’s going to happen to him?
We’ll see. Schwartz submitted an affidavit to the court in which he accepted full responsibility for his actions. He also stated that this was the first time he had used ChatGPT for legal research and that he will not use it again without verifying the information.
His sanctions hearing is Thursday, June 8. Time will tell whether this misstep has further repercussions for his employment and his ability to practice law.
The Right Way to Use Generative Artificial Intelligence in Legal Practice
Writer, podcaster and all-around great guy Jay Acunzo suggests that you think of AI as your intern. It can supplement your efforts, but it should never be a replacement for your research, arguments and writing.
And just as with any intern, always verify the accuracy of the information an AI tool gives you.
Remember, clients hire a specific lawyer because they want to benefit from that professional’s education, skills and experience. If clients wanted AI-generated work, they could prompt a chatbot themselves.
If I were going to integrate ChatGPT or a similar generative AI platform into my workflow, I’d write the first draft myself. Then I’d run it through the AI for suggestions on how to make my arguments more persuasive if I were arguing to the court, or on how to make me sound scarier if I were writing a demand letter. (To protect the attorney-client privilege, I’d change the names of the parties and other identifying information as needed.)
As of this writing, I have not used ChatGPT or any generative AI for legal work. I’m actually more interested in using AI to generate the verbiage I use in contracts and letters on a regular basis, so I don’t have to rack my brain trying to remember which client file to look in when I need a particular block of text. (For more ideas on using AI in law practice, read my interview with Paul Roetzer, founder and CEO of Marketing AI Institute.)
Does the Bogus Cases Incident Signal a Bigger Problem?
As I pondered how a lawyer could use AI to write a brief and submit it to the court without double-checking its work, many follow-up questions zipped through my brain:
- How did the lawyer not recognize so many of the cases cited by ChatGPT? Did he do any other research on this matter?
- How little time did he give himself to write this brief that he couldn’t check the cases?
- Does this situation indicate a bigger problem regarding attorney workloads? Did the lawyer have too many pressing client matters to give each one the time and attention it needed?
- Is there a bigger issue to address besides time management and stress, like substance abuse or a mental health issue?
To be clear, I’m not suggesting that Steven Schwartz has a problem with addiction or a mental health disorder. This situation made me wonder what might be happening in a law firm or an attorney’s life that would lead them to look for a quick fix to get their work done. If an attorney doesn’t check an AI’s work before using it, I suspect it’s because they don’t have time to do it.
Let’s hope Schwartz is the last attorney to make the mistake of letting artificial intelligence do his work for him instead of merely assisting him.
Related Reading: Perils and Pitfalls of Generative AI
- “Beware of the Ethical Perils When Using Generative AI” from Sensei Enterprises
- “Ethical Pitfalls When Using ChatGPT” by Mark C. Palmer
- “Should You Use ChatGPT to Generate Your Marketing Content?” by David Arato
- “The Problem with ‘Bogus’ Chat GPT Legal Brief? It’s Not the Tech” by Stephanie Wilkins, at Law.com