Sensei’s experts share takeaways from OpenAI’s prompt guide and offer tips on keeping out of ethical trouble when using ChatGPT.
OpenAI Releases a Prompt Guide for ChatGPT
We got a holiday gift in December when OpenAI released a prompt guide for ChatGPT and other large language models. There is a wealth of knowledge in the guide. What follows are some examples that are sure to “up your game” when using ChatGPT.
Make Sure You Give Clear Instructions
Add details to your query to get good answers. Example: “I am writing an article for lawyers about how to avoid getting into ethical trouble when using AI — what should I suggest?” You can tell ChatGPT how long you want the output to be. You can also provide examples of what you are looking for — or give ChatGPT a specific role, e.g., “You are an expert in legal ethics.”
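For readers who reach ChatGPT through OpenAI's API rather than the web interface, the same advice applies: the role and the detailed instructions travel inside the request itself. A minimal sketch of packaging the guide's tips into a chat-style request — the helper function, model-agnostic message format and word limit are our own illustration, not part of OpenAI's guide:

```python
# Sketch: bundle the guide's advice (a role, a detailed task, a length
# target) into the "messages" list format used by chat-style LLM APIs.

def build_prompt(role: str, task: str, max_words: int) -> list[dict]:
    """Return a messages list: a system message setting the role,
    then a user message with the task and a length instruction."""
    system = f"You are {role}."
    user = f"{task}\nKeep the answer under {max_words} words."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_prompt(
    role="an expert in legal ethics",
    task=("I am writing an article for lawyers about how to avoid "
          "getting into ethical trouble when using AI. "
          "What should I suggest?"),
    max_words=500,
)
```

The resulting `messages` list is what you would pass to a chat completion call; in the web interface, the equivalent is simply typing the role and the details into a single prompt.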
Provide Reference Texts and Break Complex Tasks into Subtasks
Instruct ChatGPT to respond based on a text that you reference. You can also instruct it to respond with quotes from a reference text.
Complex tasks, as you might imagine, have higher error rates than simpler tasks. You can often avoid errors by breaking a complex task into a series of tasks. Further instructions can be found in the guide.
Give ChatGPT “Time to Think”
Sounds a bit peculiar, doesn’t it? AI tends to make more errors when it tries to respond immediately, but you can instruct ChatGPT to think step by step before answering your request. The guide has an extensive explanation of how to do this.
For the record, as much as we have used ChatGPT, we have not yet run into a situation where we needed to give ChatGPT time to think.
Using External Tools
The guide has a long section on this, suggesting that you compensate for ChatGPT’s weaknesses by feeding it the output of other tools, such as text search systems or code execution engines, which make it more powerful than a pure language model.
Of more use to lawyers is systematically testing the prompts you use frequently to assess the quality of their results. Results can be evaluated by people, by computers or by both. OpenAI offers open-source software called Evals for this task.
Again, from the perspective of the average lawyer, this may not be necessary. We have not had much difficulty figuring out when our prompts are flawed, or how to get better, more useful, responsive answers. We are specific and detailed:
- If you need a list (as opposed to an article), ask for one.
- Stating the purpose of the inquiry is helpful.
- Specify the relevant jurisdiction you are interested in.
- You can ask in the prompt to make sure the response complies with legal and ethical requirements.
Bonus Prompt Suggestion
Ask ChatGPT, “What are the best prompt engineering tips for lawyers?” If you have a specific area of practice, use that as part of the question. The suggestions you will get should be quite good.
Bonus Gift: Keeping Out of Trouble With ChatGPT
OpenAI has been transparent about the limitations of ChatGPT. Its website states: “GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts.” (To recap: GPT-4 is the paid version of ChatGPT and GPT-3.5 is the free version.)
Anyone who has worked with ChatGPT has run across biases, largely derived from historical training data. When author Nelson challenged one such bias, ChatGPT was downright rueful, apologizing and explaining that historical data is a known source of bias.
We’ve all heard about AI hallucinations. We’ve encountered bogus cases, real judges named as overseeing bogus cases, books, articles and links that didn’t exist, false allegations of criminal conduct by real people, and the list goes on.
ChatGPT delivers its answers with utter confidence, and lawyers have proved time and again that they are bad at checking such a self-assured resource. You certainly can’t ask a known liar whether it is telling the truth, so you must validate its information through other reputable sources.
How Do You Validate Information From ChatGPT?
From a lawyer’s perspective, validation will come from reputable legal sources. ChatGPT recommends that you consult official court websites, Westlaw, LexisNexis and Bloomberg. We queried ChatGPT about attorneys who can’t afford some of the paid resources and asked it why it hadn’t recommended Google Scholar.
To our amusement, it apologized for overlooking that some lawyers might not have access to expensive resources and agreed that Google Scholar could be an excellent one. Unasked, it then offered a bulleted list of ways lawyers could use Google Scholar effectively for validation, which we found most impressive.
Guardrails for the Use of Any AI
As many experts have concluded, we need guardrails for safety when using AI. In many law firms, all kinds of AI may be in use. It’s called “shadow AI” because, frequently, no one in the firm knows who is using which AI. So, the first step is to create an AI usage policy:
- Create a policy for acceptable AI use. This should obviously include the requirement to verify information provided by the AI against an authoritative source. Templates are everywhere; start with one and customize it for your law firm.
- Train your employees on AI usage. To most of them, AI is a vast unknown and they are stumbling around trying to determine how it can help them in the practice of law. AI training will likely be mainstream in 2024.
- Disclose your firm’s AI usage to clients and get their consent to use it. If AI shortens labor hours, this should be reflected in the invoice. You can probably count on your clients asking about that!
- Make sure you pay close attention to legal and regulatory requirements. There are a limited number of such requirements now, but there will be a flood of them within the next several years.
Final Words: Verify, Verify, Verify
Getting into trouble with AI is easy — all you have to do is ignore the advice above. If you fail to verify the truth of the information that AI gives you, you may earn the wrath of judges, clients and colleagues. More than one attorney has earned a pink slip for failure to validate. “Verify, verify, verify” should be your mantra.
Sharon D. Nelson is a practicing attorney and the president of Sensei Enterprises, Inc. She is a past president of the Virginia State Bar, the Fairfax Bar Association and the Fairfax Law Foundation. She is a co-author of 18 books published by the ABA.
John W. Simek is vice president of Sensei Enterprises. He is a Certified Information Systems Security Professional (CISSP), Certified Ethical Hacker (CEH) and a nationally known expert in digital forensics. He and Sharon provide legal technology, cybersecurity and digital forensics services from their Fairfax, Virginia, firm.
Michael C. Maschke is the CEO/Director of Cybersecurity and Digital Forensics of Sensei Enterprises. He is an EnCase Certified Examiner and a Certified Computer Examiner.
More on ChatGPT for Lawyers
- “Harnessing ChatGPT: A Primer for Lawyers” by Mark C. Palmer
- “Is Your Firm Generative AI Ready?” by Alex Smith
- “Make AI Your Intern, Not Your Replacement as a Lawyer” by Ruth Carter
- “Beware of Ethical Perils When Using Generative AI” by Sharon Nelson, John Simek & Michael Maschke