Legal Tech

Is Your Law Firm Generative AI Ready?

By Alex Smith

Generative AI clearly has significant potential for lawyers and their firms. But before adopting it, you will need to carefully work through some issues to mitigate business risk.

Just when you’ve covered your bases in one area — practice management, perhaps, or document management — along comes a “new kid on the block” that you need to evaluate, lest your firm fall behind. The latest entry is generative AI. It burst onto the scene less than a year ago but has already stirred up a decade’s worth of buzz — almost as fast as ChatGPT can generate words. 


Generative AI Is Out There — But Is Your Firm Ready to Start Using It?

Generative AI can supercharge the important daily administrative tasks that lawyers perform. It can help inform their roles as trusted advisors, too. Additionally, generative AI has significant potential to enable law firms to change their business model and offer new pricing models by passing on time-savings to clients. 

At this point, you might be saying to yourself: “This sounds great. What’s not to like?”   

Because of the way generative AI works — more on this in a bit — adopting it is not a simple matter of rolling out a ChatGPT-style bot and letting it perform all your legal tasks for you. First, there are issues around ensuring accuracy while maintaining security and confidentiality that you will need to work through carefully to mitigate business risk.  

Creating a Realistic and Practical Deployment Road Map

At the heart of every generative AI product is a large language model, or LLM. You can think of this as the engine that gives generative AI its intelligence. Successfully implementing generative AI in your firm depends on how well you train your LLM.

Train Your LLM on Trusted Material

An LLM is trained by feeding it enough trusted data that it starts to build up a “best worldview” that it can draw on to answer questions or generate new content when an end user enters a prompt, such as “What are the most important clauses to include in a prenuptial agreement?” 

Like humans, however, LLMs can be trained to develop a worldview that is slightly skewed or not entirely aligned with reality. This can lead to generative AI producing some inaccurate or just plain bizarre results. 

So, it is critical to make sure that the LLM has been trained on quality, trusted material that will produce quality results. This process is known as “grounding.” 
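For readers who want to see what grounding can look like in practice, here is a minimal sketch in Python. The `KnowledgeAsset` record and the `build_grounded_prompt` helper are hypothetical, not part of any particular product; the point is simply that the model is shown only vetted firm material alongside the lawyer's question, rather than answering from its general training data alone.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeAsset:
    title: str
    text: str
    vetted: bool  # reviewed and cleared of confidential client details

def build_grounded_prompt(question: str, assets: list[KnowledgeAsset]) -> str:
    # Assemble a prompt that grounds the model in trusted firm material only.
    trusted = [a for a in assets if a.vetted]
    context = "\n\n".join(f"### {a.title}\n{a.text}" for a in trusted)
    return (
        "Answer using only the firm-approved material below. "
        "If the material does not cover the question, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

assets = [
    KnowledgeAsset("Model prenuptial agreement (2023)", "…clause text…", vetted=True),
    KnowledgeAsset("Draft lease, unreviewed matter file", "…", vetted=False),  # excluded
]
prompt = build_grounded_prompt(
    "What are the most important clauses to include in a prenuptial agreement?",
    assets,
)
# The assembled prompt would then be sent to whichever LLM endpoint
# the firm has approved (not shown here).
```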

You Need to Share the Knowledge

If you’re a solo practitioner who keeps a fair amount of institutional knowledge in your head, then you likely have an ideal prenuptial agreement in mind, one you worked on a year or so ago, that you could use to train the model. Or perhaps you could come up with a couple of examples of “good” real estate leases or share purchase agreements off the top of your head. 

But what if there are five other attorneys in the firm? Or 15? Or 50? Then training the model becomes an entirely different matter. 

For starters, it’s essential to have a centralized location for work product, like a document management system. Otherwise, locating the trusted data sets that can be used to train the model is a matter of tracking down files scattered across the organization. 

It’s also important to have some kind of knowledge management function within the firm to determine what exactly a “good” real estate lease or prenuptial agreement looks like — and what the best examples are. Crucially, someone should be in charge of maintaining those knowledge assets on an ongoing basis so that the model can draw on the most up-to-date resources from the firm’s trusted data sets. 
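To make that idea of ongoing ownership concrete, here is a small Python sketch. The `BestExample` record, the DMS-style locations, and the one-year review window are all assumptions made for illustration; the point is that every “best example” has a named owner and a review date, so stale precedents can be flagged before they are used to ground the model.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class BestExample:
    doc_type: str       # e.g., "real estate lease"
    location: str       # reference to the source document in the DMS
    owner: str          # person responsible for keeping this example current
    last_reviewed: date

def stale_examples(catalog: list[BestExample], max_age_days: int = 365) -> list[BestExample]:
    # Flag best examples that nobody has reviewed recently,
    # so the model is never grounded on out-of-date precedents.
    cutoff = date.today() - timedelta(days=max_age_days)
    return [ex for ex in catalog if ex.last_reviewed < cutoff]

catalog = [
    BestExample("real estate lease", "dms://knowledge/leases/standard-lease", "K. Patel", date(2022, 3, 1)),
    BestExample("prenuptial agreement", "dms://knowledge/family/model-prenup", "J. Ortiz", date(2024, 6, 15)),
]
for ex in stale_examples(catalog):
    print(f"Review needed: {ex.doc_type} (owner: {ex.owner})")
```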

Confidentiality Counts — and So Does Consistency 

A further complication in deploying generative AI is establishing consistency in its responses. For security and governance reasons, not everyone in a firm has access to every document, given the confidential and privileged nature of particular files.  

So, when you pose a question to generative AI or ask it for the best example of a particular document, how can you be sure you won’t get a completely different answer than a colleague whose access to the firm’s matter files differs slightly from yours? 

One way to navigate this problem is to establish a separate, more open security posture for knowledge assets and best-practices content. The overall goal is to have a process in place that supplies the generative AI engine with examples of the firm’s best work, knowing those examples have been vetted and are free of confidential client information. 
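As an illustration only, here is a short Python sketch of what such a process might look like; the `Document` record and its flags are hypothetical, not features of any specific document management system. The idea is that the engine is grounded on a single shared pool of promoted, vetted, anonymized content, while the underlying matter files keep their normal access controls.

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    is_knowledge_asset: bool  # promoted out of the matter file as a best-practice example
    vetted: bool              # reviewed by the knowledge management function
    anonymized: bool          # confidential client details removed

def shared_grounding_pool(docs: list[Document]) -> list[Document]:
    # Only promoted, vetted, anonymized content is exposed to the AI engine,
    # so every lawyer's question is answered from the same material.
    return [d for d in docs if d.is_knowledge_asset and d.vetted and d.anonymized]

docs = [
    Document("Model share purchase agreement", True, True, True),                           # included
    Document("Client X purchase agreement (privileged matter file)", False, False, False),  # excluded
]
print([d.title for d in shared_grounding_pool(docs)])
```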

Explore Generative AI Responsibly 

Generative AI in law has significant potential, but it needs to be applied responsibly. This means taking a thoughtful approach that addresses accuracy, security and confidentiality concerns. In this way, you can create a realistic and practical deployment road map that readies your firm to explore generative AI’s possibilities further — helping to minimize overall business risk while optimizing potential outcomes.  


Alex Smith

Alex Smith is Global Product Lead for Knowledge, Search and AI at iManage, a knowledge work platform that helps organizations uncover and activate the knowledge that exists in their business content and communications. As a senior director, Alex works on the emerging redesign of legal services, showing the art of the possible and getting lawyers excited about data. Before joining iManage in 2019, Smith served as Innovation Manager at Reed Smith LLP.
