Now that ChatGPT has been shining its ever-present spotlight on AI for more than six months, it’s a good time to take stock of the current state of affairs. Here’s a status report on using generative AI for law firm marketing.
A Midyear Update on Generative AI
AI’s role in legal marketing is evolving quickly, and it’s vital for anyone using it to stay on top of recent developments. Here are some things to consider with respect to generative AI, and how to use it in marketing a legal practice, as we enter the second half of 2023.
Google Has Made It Clear That AI-Generated Content Does Not Violate Its Guidelines, Per Se
Google has said varying things about AI over the years. As recently as April 2022, Google’s John Mueller said that automatically generated content violates Google’s Webmaster Guidelines, will be treated as spam, and could be subject to a manual penalty. Since then, Google has changed its tune, stating that “however content is produced, those seeking success in Google Search should be looking to produce original, high-quality, people-first content demonstrating qualities E-E-A-T.”
For the uninitiated, E-E-A-T refers to experience, expertise, authoritativeness, and trustworthiness. This is particularly important for websites that deal with “your money or your life” (YMYL) topics, like legal websites.
Generative AI Still “Hallucinates”
Hallucinations, instances in which generative AI confidently spits out incorrect information, are an ongoing issue with ChatGPT and other AI models. The fact is that AI does not “know” anything; it operates by predicting what the next word should likely be based on its vast amount of training data. As a result, if it lacks the information needed to answer accurately, it will simply produce something that sounds plausible. According to Elena Alston of Zapier:
Ask any AI chatbot a question, and its answers are either amusing, helpful, or just plain made up.
Because AI tools like ChatGPT work by predicting strings of words that it thinks best match your query, they lack the reasoning to apply logic or consider any factual inconsistencies they’re spitting out. In other words, AI will sometimes go off the rails trying to please you. This is what’s known as a “hallucination.”
The Idea of Watermarking AI-Generated Content is Gaining Traction
While Google has made it clear that it’s OK with AI-generated content so long as it’s helpful, there is still the issue of AI being able to mass-produce misinformation at a scale never before seen. Vast amounts of misinformation have the potential to wreak havoc throughout society, including in the realms of politics and finance.
In fact, there have already been real-world issues caused by AI-generated misinformation. For example, when purportedly AI-generated images of an attack on the Pentagon started to circulate on social media in May of this year, it caused ripples in various markets, including the S&P 500 and U.S. Treasuries.
Similarly, generative AI is capable of creating realistic deepfakes that could have far-reaching implications for candidates running for political office. On June 8, the New York Times reported that the campaign of Florida Governor Ron DeSantis spread images of former President Donald Trump embracing and kissing Dr. Anthony Fauci, a figure disliked by many members of the Republican Party.
The risks of misinformation posed by generative AI have resulted in calls for watermarking and other tools that allow people to determine the provenance of an image or piece of content. In response, at a White House meeting earlier this summer, seven AI companies pledged to implement safety measures such as watermarking. According to a release from the White House:
The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.
What Does This Mean for Legal Marketing?
With these developments in mind, what are the takeaways for law firms and marketing teams that are considering using AI in legal marketing?
You Can Use Generative AI … But Use It With Caution
Google has made it clear that you can use AI in the content creation process, but it also cautions that using it to create content designed to “game” SEO is against its guidelines. So, for example, what you shouldn’t do is write a practice area page titled “Los Angeles Car Accident Attorney” and then have ChatGPT rewrite it for every municipality in the area.
Some ways you can safely use ChatGPT or other generative AI models in your marketing efforts include:
- Topic ideation
- Keyword research
- Outlining blog posts or content pages
- Rewriting boilerplate sections such as calls-to-action
- Coming up with headlines and titles
- Summarizing content for social media posts
It’s Critical to Verify Any Output
If you use generative AI for any part of the content creation process, you need to verify any statements of law or fact the models spit out. In addition, you can’t just ask ChatGPT to verify its own statements, as it will very likely double down on false information. Instead, head over to Google (or the search engine of your choice) to verify any statements by checking with an authoritative source.
To see what can happen to lawyers who rely on ChatGPT for legal research, consider what happened to New York lawyers Steven A. Schwartz and Peter LoDuca earlier this year. When the attorneys used ChatGPT to create a legal brief, it cited six non-existent cases. Perhaps most frighteningly, when Schwartz asked whether the cases were real, the AI doubled down and assured him they were. In addition, ChatGPT told Schwartz that the fake cases could be found on “reputable legal databases” like Westlaw and LexisNexis.
Ultimately, Schwartz and LoDuca were sanctioned and ordered to pay $5,000 and send a copy of the presiding judge’s sanctions opinion to all of the judges named in their brief.
They aren’t the only lawyers misusing generative AI.
AI Content May Be Watermarked in the Future
If you are using AI to generate content, keep in mind that it may eventually be watermarked. That means AI companies like OpenAI may embed some sort of cryptographic signal into the content their models generate, allowing search engines (or other software) to determine whether it was generated by AI.
If that does happen, it remains to be seen whether Google and other search engines will treat AI content differently than human-generated content.
There are certainly use cases for AI content where the stakes are not particularly high. For example, if you use AI to generate product descriptions for your online apparel store, it’s very unlikely anyone will suffer harm. On the other hand, if medical or legal sites are overrun by unchecked and misleading AI-generated content, users could suffer significant harm. As a result, it stands to reason that Google and other search engines have an incentive to identify AI-generated content, especially in YMYL areas like law, medicine, and finance.
In Conclusion …
The future of AI in law firm marketing holds great promise, but it demands responsible usage, constant verification and safeguards to maintain the integrity and credibility of the content you generate. By navigating the evolving landscape of generative AI with a cautious and ethical approach, law firms can harness its benefits to enhance marketing strategies and provide valuable, reliable information to their audience.
Related: “Should You Use ChatGPT to Generate Your Law Firm’s Blog Content?” by David Arato
Image © iStockPhoto.com