Every business should have a consumer AI policy, but a policy alone is not enough: understanding the intersection of AI and attorney-client privilege is now a critical requirement for legal protection.

Key Takeaways:
- Consumer-grade AI tools, such as the free tiers of Claude, Gemini, and ChatGPT, do not guarantee the confidentiality required to maintain privilege.
- Courts may rule that clients using AI without specific direction from legal counsel waive attorney work-product protections.
- For attorneys, entering confidential client information into a consumer AI tool can violate ABA Model Rule 1.6(c).
- To protect attorney-client privilege in the age of AI, firms must counsel clients on the risks of using consumer-grade AI tools.
Protecting Attorney-Client Privilege After the Heppner Ruling
A written prohibition on the use of consumer AI tools is a sticky note on an unlocked door. It does not prevent entry. Contracts, technical controls, training and enforcement must reinforce the policy—or it remains paper.
In United States v. Heppner, No. 25 Cr. 503 (S.D.N.Y. Feb. 17, 2026), a financial-services CEO charged with securities fraud used the free consumer version of Anthropic’s Claude to analyze his legal exposure. He acted on his own, without his lawyers’ direction. Some of his prompts included information he had received from counsel. When prosecutors sought the AI-generated documents, the court ordered them produced, rejecting both privilege and work-product protection. Claude is not an attorney. Anthropic’s consumer terms disclaimed confidentiality. And the materials were not prepared at counsel’s direction.
But the court left room for a different outcome: had counsel directed the defendant to use Claude through a platform with proper confidentiality terms, the result might have been different. That distinction between consumer and enterprise AI frames the advice that follows.
The Risk Extends Beyond Privilege and Work Product
Consumer AI platforms may, under their default terms, retain user inputs, train on them, and disclose data to third parties. The consumer terms that helped defeat privilege in Heppner can also compromise trade secrets, breach NDA obligations, or expose regulated personal data. For lawyers, the concern runs deeper: entering confidential client information into a consumer AI tool may violate ABA Model Rule 1.6(c), which requires reasonable efforts to prevent unauthorized disclosure of information relating to the representation.
How to Help Clients Protect Confidential Information in AI Tools
If your clients use AI for anything involving confidential or sensitive information, a policy marks only the starting point. Here is what to tell them.
- Identify the data going into the tool. Client employees may be entering deal terms, personal information, litigation strategy, financial projections, or regulatory materials, each carrying different legal obligations. Help clients classify what their people are putting into AI tools.
- Review the AI platform terms. Offer to review the terms with the client. Look specifically at provisions on data retention, training on inputs, human review, and third-party disclosure. The terms should bar training on customer data, restrict provider access, and impose confidentiality commitments. Watch for carve-outs that could weaken a confidentiality argument, including provisions for anonymized data, safety review, or service improvement.
- Use enterprise-grade AI for confidential and legal work. Advise clients that consumer-tier plans may lack critical protections. Consumer terms often permit data retention and provider access that enterprise agreements prohibit. Enterprise plans typically add audit logs, custom data-retention controls, and negotiated commercial terms.
- Deploy technical controls. Recommend that clients work with IT to block consumer AI domains on company networks and managed devices and to add data-loss-prevention tools that scan for sensitive data sent to AI platforms. No single control is airtight. Employees on personal devices or off-network connections can bypass corporate gateways. But layered defenses reduce exposure.
- Make approved tools easy to use. Convenience is gravity. If the enterprise tool requires three logins and a VPN, employees will likely turn instead to the consumer alternative.
- Train the people who handle confidential information. Training should reach beyond the legal team. Every employee who handles confidential information should understand the line between enterprise and consumer AI and know that what they type into an AI tool may later be produced in discovery.
- Ask about prior AI use. At the outset of any legal, regulatory, or compliance matter, ask whether anyone used AI tools to analyze or discuss the issue. Prior consumer-AI use may have compromised privilege before the matter reached you. Make this question as routine as the litigation-hold notice.
- Update litigation-hold and preservation protocols. AI interactions are ESI subject to preservation obligations and discovery requests. Chat logs, uploaded documents, exported files, and AI-generated summaries should all be covered by litigation-hold notices and retention policies.
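The data-loss-prevention screening recommended above can be sketched as a simple pattern check run before text leaves the network for an AI platform. The patterns and labels below are illustrative assumptions for this sketch, not a vetted detection ruleset; real DLP products use far richer matching and classification.

```python
import re

# Illustrative patterns only (assumptions for this sketch, not a
# production ruleset): a real DLP tool would use vetted detectors.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "privilege_marker": re.compile(
        r"attorney[- ]client|work product|privileged", re.IGNORECASE
    ),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def is_blocked(text: str) -> bool:
    """Block the outbound request if any sensitive pattern matches."""
    return bool(scan_prompt(text))
```

For example, a prompt containing "attorney-client privileged" or a Social Security number would be flagged and blocked, while an innocuous question would pass. Layered with network-level domain blocking, a check like this reduces, but does not eliminate, the chance that confidential material reaches a consumer AI platform.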
Closing the Gap to Keep Confidential Client Information Safe
AI use raises challenges well beyond data protection, from hallucinations to the risks of agentic tools. The steps above focus on one aspect of AI risk: keeping confidential and sensitive information out of the wrong platforms.
AI use can fall outside the legal protections many users assume exist. Companies that close that gap will be prepared when confidential information is challenged, records are demanded, or regulators come calling. Those that wait may confront the consequences in discovery, in a disciplinary proceeding — or both.
Common Questions About Using AI and Legal Privilege
Can using consumer AI tools waive attorney-client privilege?
As seen in the recent ruling in U.S. v. Heppner, using consumer-grade AI tools (like the free versions of ChatGPT, Gemini, or Claude) may result in a waiver of privilege.
Enterprise AI plans typically offer opt-outs from training on user data and SOC 2 Type II compliance, features that support the duty of confidentiality under ABA Model Rule 1.6. Law firms are now being advised to update engagement letters and litigation-hold notices to explicitly warn clients about the risks of using consumer-facing AI for case-related matters.
What should law firms do to manage their own AI use?
Law firms should follow the advice they would give clients, beginning with the checklist in this article. If your firm has not yet approved a formal Law Firm AI Policy or updated its client engagement documents to include its AI use policy, doing so will force you to address questions and gaps in your AI use. Among other issues, the policy should include clear guidance on the use of consumer and enterprise-level AI tools. It should address "shadow AI" use, requirements for human review, and when to disclose AI use to clients. Catherine Reach's article, Beyond the Ban: Why Your Law Firm Needs a Realistic AI Policy in 2026, is an excellent guide and includes links to bar association AI policy templates and guidelines for law firms.







