Machine learning was once groundbreaking for law firms, but with the rapid pace of innovation, the process of creating and maintaining an ML model has already become inefficient and costly. Large Language Models (LLMs) offer a path to true, long-term legal AI cost-effectiveness, eliminating the rising fees and vendor dependency associated with legacy technology.

Machine learning-powered technology has been commonplace for legal professionals for over 20 years. Instead of spending hours parsing forms to find key data, users could leverage machine learning (ML) to quickly classify documents and identify useful cases and statutes otherwise buried within their repository. Law firms that invested in machine learning models could even automate administrative tasks, such as billing, thereby freeing their legal teams to focus on more productive tasks.
As costs flattened for machine-learning tools, they became more attractive even for smaller firms. Vendors promised cheaper, pre-trained models that delivered efficiency gains. Over time, however, the costs of the once-promising technology began to pile up. First, let's look at what went wrong and why, and then at what the future of automation technology looks like.
What Went Wrong With Machine Learning Models?
The same design that made all these advancements possible is also why legacy machine-learning costs are rising. ML models are trained to do one thing well. A model may be able to extract pertinent clauses from thousands of legal documents quickly, but only because it has been built around a fixed set of keywords to flag.
However, these models don’t understand context, and they aren’t automatically updated to match evolving terminology. For example, a user searching for documentation related to “precedents” may find that otherwise useful documents using the term “stare decisis” are excluded from the results.
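To make that limitation concrete, here is a toy sketch in Python. The documents and keyword list are hypothetical and purely illustrative; the point is simply that a filter built around a literal keyword skips a relevant document that uses different terminology.

```python
# Hypothetical illustration: a fixed-keyword filter misses a relevant document
# that uses "stare decisis" instead of the configured term "precedent".
documents = {
    "brief_a.txt": "The court relied on binding precedent from the Second Circuit.",
    "memo_b.txt": "Under stare decisis, the earlier ruling controls this question.",
}

keywords = ["precedent"]  # fixed when the model was configured

matches = [
    name for name, text in documents.items()
    if any(keyword in text.lower() for keyword in keywords)
]
print(matches)  # ['brief_a.txt'] -- memo_b.txt is missed despite being relevant
```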
As legal terminology evolves, rigid ML models require costly retraining to remain relevant. Because a firm might use multiple ML models, each for a different use case, those fees quickly multiply. Firms that outsource maintenance are also likely to wait in a long queue for the vendor to make changes, and they remain at the vendor’s mercy for software updates and patches.

The introduction of ChatGPT and other large language models (LLMs) into legal work has also created new opportunity costs. Generative AI is transforming legal work: lawyers are now asking complex questions, uncovering insights, and embedding timely intelligence directly into their content. And the capabilities are evolving rapidly. Firms contracted with an ML vendor remain locked into legacy patterns while their competition is free to innovate.
What About Dipping Your Toes Into LLMs?
Legal professionals comparing the costs of legacy ML to LLMs sometimes look for ways to test the new technology without committing fully. That can seem like a good idea through the lens of ML models: if a firm already runs a different model for each use case, vendors might suggest piloting an LLM on a single subset of documents, such as contract templates.
In practice, however, this approach prevents an LLM from reaching its potential. Rather than working with a subset of content, LLMs work best when they serve as a fabric of knowledge across the firm. Running searches through a limited LLM is likely to deliver partial answers, and poor experiences will create distrust in the technology. Firms will also run into the same problem they face with ML — even if the pilot works, additional training costs mount to incorporate the full knowledge base.
With the rapid pace of LLM advancement, costs are continuing to drop. The more ML models a firm can replace with a single LLM, the more cost-effective its legal AI becomes.
Why Do LLMs Make a Difference?
When an LLM has access to a firm’s full knowledge base, users can fine-tune their prompts to get more accurate responses rather than fine-tuning the data within the model itself. For example, when asking an LLM to review and summarize a statute, offering a statute example and an accompanying summary can help the model understand how to properly execute the task.
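As a rough sketch of that example-driven prompting, the snippet below assumes access to an LLM through the OpenAI Python SDK; the model name, statute text, and summary are placeholders rather than a recommendation of any particular vendor or product.

```python
# A minimal few-shot prompting sketch. The model name and all text fields
# are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

example_statute = "Full text of a statute the firm has already summarized."
example_summary = "The firm's approved plain-language summary of that statute."
new_statute = "Full text of the statute that now needs a summary."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You summarize statutes in plain language for attorneys."},
        # The worked example shows the model the expected format and depth.
        {"role": "user", "content": f"Summarize this statute:\n{example_statute}"},
        {"role": "assistant", "content": example_summary},
        # The actual request follows the same pattern as the example.
        {"role": "user", "content": f"Summarize this statute:\n{new_statute}"},
    ],
)
print(response.choices[0].message.content)
```

The key point is that the improvement comes from a better prompt, not from retraining the model, so no vendor engagement is required to adapt the workflow.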
Training employees to properly leverage this broad-reaching technology is considerably cheaper than hiring a vendor to continually update a legacy ML model. That inherent efficiency is the core of legal AI cost-effectiveness. Furthermore, LLMs offer smart cost governance, with usage caps that let a firm access the model’s full power, across all of its content, while remaining on budget.
LLMs also grow and change with the firm in real time. Through retrieval-augmented generation (RAG), the model can draw from the firm’s existing knowledge base while pulling in external sources to keep outputs accurate and up to date. Because LLMs operate in the cloud, the vendor can push updates and new capabilities automatically, eliminating the delays and costs of manual reconfiguration.
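Here is a minimal sketch of that retrieval-augmented pattern, assuming the OpenAI Python SDK for both embeddings and chat; the documents, model names, and question are illustrative stand-ins for a firm’s real knowledge base, not a production pipeline.

```python
# A minimal retrieval-augmented generation (RAG) sketch: embed the firm's
# documents, retrieve the closest match to a question, and answer from it.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Stand-in for the firm's knowledge base; in practice this would be a vector store.
documents = [
    "Engagement letter template: fees are billed monthly at the agreed hourly rate.",
    "Master services agreement: either party may terminate with 30 days written notice.",
    "Data processing addendum: the vendor must report breaches within 72 hours.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(documents)

question = "How quickly must a vendor report a data breach under our standard terms?"
query_vector = embed([question])[0]

# Rank documents by cosine similarity and keep the best match as context.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
context = documents[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer using only the provided firm documents."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

In a real deployment, the brute-force similarity search would be replaced by a vector database, but the flow is the same: retrieve the firm’s own content first, then let the model answer from it.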
Looking at the Real Costs of ML vs. LLM
Firms looking to future-proof their AI-powered processes need models that seamlessly evolve with their operations, because every day spent waiting on updates makes the ongoing investment harder to justify. LLMs are the answer, but shifting to new technologies requires a change in mindset about how to work with AI.
By adopting LLMs across a complete pool of content — rather than just a sliver — firms can unlock the technology’s full power while reducing costs in the process.