Now that OpenAI is being sued for practicing law without a license, what should lawyers be telling clients about using generative AI?

Well, it finally happened. OpenAI got sued for practicing law without a license.
What Triggered the Lawsuit?
Apparently the same thing that has happened for millennia: A client who didn’t like their lawyer’s advice went looking for a second opinion. But this time from ChatGPT.
Instead of hiring new counsel, the client hired the algorithm. That small behavioral shift is already producing some very strange legal situations.
At the center of the lawsuit filed by Nippon Life Insurance Co. of America against OpenAI, the creator of ChatGPT, is a former policyholder, Graciela Dela Torre.
The case reads less like an insurance dispute and more like a preview of the profession’s AI-shaped future.
‘Hey ChatGPT, Is My Lawyer Gaslighting Me?’
The Nippon Life lawsuit brings a new urgency to the conversation surrounding AI and the unauthorized practice of law. Here’s the short version of the dispute.
Dela Torre reached a settlement with Nippon and signed a release. The case was closed. Done. Over. Later, she wanted to reopen negotiations. Her attorney pointed out a minor detail: She had already released the claims. Legally speaking, that tends to end the conversation.
Unconvinced, she uploaded her lawyer’s letter and case materials into ChatGPT and asked a question many professionals have heard in some form: “Am I being gaslighted?”
ChatGPT reportedly said yes.
At that point, things escalated. Dela Torre fired her attorney and began representing herself — with ChatGPT as co-counsel. She drafted and filed 21 motions, one subpoena, and eight notices and statements. In a case that was already closed.
The court denied the motions. Undeterred, she returned to ChatGPT and drafted an entirely new lawsuit.
Eventually, Nippon sued OpenAI, alleging its technology engaged in the unlicensed practice of law.
And the Broader Issue?
Whether that claim succeeds is ultimately up to the courts. But the broader issue is clear: Clients now have access to tools that generate legal arguments instantly, whether or not those arguments are correct, relevant or procedurally viable. And those tools are persuasive.
Large language models produce confident, coherent answers. They do not say, “You signed a release. This is over.” They generate language that sounds like reasoning. They also respond within the framing of the question they are given.
To a frustrated client, that can feel like validation. From the client’s perspective, it’s simple:
My lawyer says I can’t. The AI says I can.
Maybe my lawyer just doesn’t want to fight — or worse, maybe they’re wrong.
The result is more than awkward conversations. It’s filings. Motions. New lawsuits. All of which the courts and opposing counsel must now sort through.
This is likely only the beginning.
So, What Should Lawyers Tell Clients Now?
1. AI is a drafting tool, not a legal advisor
AI is very good at producing language. It can outline, summarize, and draft quickly. What it cannot reliably do is determine whether a claim is viable, whether jurisdiction exists or whether a release ends the matter. It predicts patterns in text. That is not the same thing as practicing law.
Clients should understand the difference.
2. Filing AI-generated documents has consequences
Courts are already seeing AI-drafted filings that cite nonexistent cases or make arguments that do not apply. Judges are not amused. Once a document is filed, it becomes part of the record. A motion built on fictional authority can damage credibility very quickly. What feels empowering in a chat window can be reckless in a courtroom.
3. AI often agrees with the question you ask
AI systems are designed to be responsive and supportive. If someone arrives convinced they’ve been wronged, the model often explores that premise. That can sound like agreement. Ask a GAI platform, “Am I being gaslighted?” and the response may thoughtfully explain why the situation could feel that way. But the model is not weighing evidence or applying procedural rules. It is responding to the narrative embedded in the question.
The AI did not decide the lawyer was wrong. It simply followed the story it was given.
ChatGPT as Co-Counsel
The Dela Torre episode is amusing on the surface. Twenty-one motions in a closed case will do that. But it also signals a shift.
Clients now have instant access to tools that sound authoritative, respond confidently and never send an invoice. For lawyers, that means generative AI has become the newest participant in many client matters.
ChatGPT may not be opposing counsel, but GAI is definitely in the room.
More Law Practice Tips from Brooke Lively
For more tips on building a profitable law firm, read:
- Flat-Fee Pricing for Legal Services: Why You Should Add 20% and Not Feel Bad About It
- Mea Culpa: What I Got Wrong About Private Equity in Law Firms
- Growing Pains: Why Client Service Is Absolutely a Law Firm CFO’s Problem
- Law Firm Metrics: Why Law Firms Struggle with Data Chaos (and What to Do About It)
- The Hidden Cost of Complacency: Why Keeping Underperforming Billers Is Killing Your Firm
- Law Firm Profits: 5 Ways You May Be Sabotaging Your Firm’s Growth
- Are Your Law Firm’s Financial Systems Ready to Scale?
- Law Firm Profits and Year-End Taxes: Avoiding a Surprise Tax Bill
- The Best Compensation Plans Use the Law of Thirds
- Law Firm Overhead: What It Is — and What It Isn’t
- Building a Law Firm That Pays You First
- Understanding Law Firm Profits — And What to Do With Them