Ask the Experts

Ethical Pitfalls When Using ChatGPT

By Mark C. Palmer

QUESTION: I’ve been trying out ChatGPT, and it’s pretty impressive! Before I start using it for legal writing, research or other tasks centered around my law practice, what ethical pitfalls might I watch out for when using such a tool?


ANSWER: I’m glad you’ve taken the first step to test out the capabilities of generative AI — in this case, ChatGPT from OpenAI. As the legal industry continues to embrace technological advancements, many attorneys and legal professionals are exploring the use of ChatGPT, Google’s Bard, and other artificial intelligence language models. These large language models (LLMs) and the applications powered by them have the potential to revolutionize the way we practice law, but their use must be subject to ethical considerations.

Background on ChatGPT

The popularity and uses of ChatGPT are advancing as fast as the technology itself. Like you, I am eager to learn more about how these LLMs work and how they might supercharge the practice of law. For a quick primer on how ChatGPT does its thing and some examples of prompts geared toward legal work, check out my post, “Why ChatGPT Matters for the Future of Legal Services.”

Confidentiality and ChatGPT Ethical Pitfalls

Attorneys have an ethical duty to maintain client confidentiality of current, former and even potential clients. See ABA Model Rules 1.6, 1.9 and 1.18. Much discussion on attorney-client confidentiality is centered around shielding sensitive information from unintended recipients, e.g., cloud-based cybersecurity or email encryption. Prudent attorneys must contemplate how their clients’ information is being received, transmitted, stored and even destroyed.

It’s not unusual for an attorney to utilize legal research tools such as Westlaw or Fastcase by inputting their clients’ legal issues or even specific facts at issue. So, what about sharing those similar issues and facts with ChatGPT? The comments to Rule 1.6 (Confidentiality of Information) offer some reminders:

  • [18] Paragraph (c) requires a lawyer to act competently to safeguard information relating to the representation of a client against unauthorized access by third parties and against inadvertent or unauthorized disclosure by the lawyer or other persons who are participating in the representation of the client or who are subject to the lawyer’s supervision. See Rules 1.1, 5.1, and 5.3. The unauthorized access to, or the inadvertent or unauthorized disclosure of, information relating to the representation of a client does not constitute a violation of paragraph (c) if the lawyer has made reasonable efforts to prevent the access or disclosure. …
  • [19] When transmitting a communication that includes information relating to the representation of a client, the lawyer must take reasonable precautions to prevent the information from coming into the hands of unintended recipients. This duty, however, does not require that the lawyer use special security measures if the method of communication affords a reasonable expectation of privacy. …

When using ChatGPT, it is important to ensure that confidential client information is not disclosed. The terms of use for ChatGPT at the time of this post expressly state that any content shared using ChatGPT (i.e., the “Non-API Content” referred to below) may be reviewed and is not private:

3(c) Use of Content to Improve Services. We do not use Content that you provide to or receive from our API (“API Content”) to develop or improve our Services. We may use Content from Services other than our API (“Non-API Content”) to help develop and improve our Services. You can read more here about how Non-API Content may be used to improve model performance. If you do not want your Non-API Content used to improve Services, you can opt out by filling out this form. Please note that in some cases this may limit the ability of our Services to better address your specific use case.

The ChatGPT General FAQ page further emphasizes, “Please don’t share any sensitive information in your conversations.”

Even so, you will want to maintain the privacy of any ChatGPT inputs and outputs related to your representation of clients. Section 5 of the terms of use puts all obligations regarding data security and privacy on the user:

5(b) Security. You must implement reasonable and appropriate measures designed to help secure your access to and use of the Services.

As with information stored or transmitted in the cloud, information you share with ChatGPT must be properly safeguarded; you must “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Model Rule 1.6(c).

Attorneys and legal professionals may use ChatGPT for general legal research, writing and brainstorming, provided they do not share specific details of a client’s case or disclose personal or confidential information. Additionally, any output generated by ChatGPT that contains confidential client information should be treated as confidential and protected accordingly.

Supervision: People and AI

Model Rules 5.1 and 5.3 state that attorneys have a duty to supervise lawyers and non-lawyers working with them. Attorneys should ensure that those in their organization using ChatGPT, lawyers and non-lawyers alike, are properly trained and understand the ethical considerations surrounding its use. As I’ve discussed before regarding attorneys’ use of chatbots, this duty extends both to others’ use of ChatGPT and to ChatGPT itself.

The responses generated by ChatGPT can be imperfect and even problematic. LLMs such as ChatGPT are trained on vast amounts of text data with a fixed cutoff date, so they may not always provide the most up-to-date or relevant information on a given legal topic, even when the prompt directs them to focus on a particular context.

For example, while it can quickly generate a step-by-step guide for a simple legal problem such as returning a security deposit, the jurisdictional nuances such as local ordinances or court document requirements are more error-prone. But you cannot blame the bot, as ChatGPT can only generate text based on patterns it learned from the data it was trained on. And when it isn’t trained on the data you need, it’s pretty darn good at fabricating responses instead.

Attorneys know to closely examine the subsequent treatment of a case (i.e., Shepardize) to ensure its authority before relying on it. Likewise, attorneys should supervise and review any output generated by ChatGPT (e.g., see this ChatGPT hallucinations example) and train their legal professionals to verify outputs before using them.

Embrace Your Technology Competency Requirement

ChatGPT can be a powerful tool for the legal profession to improve efficiency and productivity. Still, its use must be subject to ethical considerations as with any novel tool or process, particularly regarding confidentiality and supervision.

As ChatGPT and similar applications provide new ways to enhance the practice of law and the delivery of legal services, don’t fear the replacement of lawyers by robots. Instead, embrace your technology competency requirement to understand “the benefits and risks associated with relevant technology.”

Laws, rules, regulations and opinions vary by jurisdiction. The information provided in this post does not, and is not intended to, constitute legal advice; instead, all information, content and materials are for general informational purposes only.

About the Illinois Supreme Court Commission on Professionalism

The Illinois Supreme Court Commission on Professionalism was established by the Illinois Supreme Court in 2005 under Supreme Court Rule 799(c) to foster increased civility, professionalism and inclusiveness among lawyers and judges in Illinois. By advancing the highest standards of conduct among lawyers and judges, the Commission works to better serve clients and society alike. For more information, please visit and follow @2CivilityOrg.

More on AI and ChatGPT on Attorney at Work

“Should You Use AI-Generated Content in Your Law Firm Marketing?” by David Arrata



Mark C. Palmer

Mark C. Palmer is Chief Counsel at the Illinois Supreme Court Commission on Professionalism. Mark writes on civility, professionalism and future law for the Commission’s 2Civility blog and delivers statewide professionalism programming, including a lawyer mentoring program, to attorneys and law students across Illinois. Follow him @palmerlaw.
