Lawyers are well aware of their ethical responsibilities. Those responsibilities permeate relationships with clients and extend to every aspect of lawyers’ professional lives — including the technology they use.
It’s been six years since the American Bar Association updated Model Rule 1.1 (competent representation) to state that lawyers have an ethical responsibility to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” Since then, 30 states have updated their rules to follow suit and the Canadian bar is debating revising its Code of Professional Conduct to include technical competency. (See LawSites’ updated tally here.)
Few would dispute lawyers’ ethical responsibility to communicate with clients by email: it’s fast, cheap and provides a digital paper trail. Nor would many dispute that lawyers have a duty to safeguard clients’ private information, which means understanding the risks associated with various tools, whether email, text messages or document storage in the cloud.
How do these ethics standards apply to the use of emerging technologies like artificial intelligence?
Recently, I participated in a thoughtful panel discussion on “The Impact of New Legal Technologies on the Legal Profession,” held during the International Institute of Communications annual event in Ottawa. Several panelists, including Professor Monica Goyal of Osgoode Hall Law School, Mat Goldstein, a business lawyer at Dentons, and Jordan Furlong, a legal market analyst, discussed determining an ethical responsibility to use AI in the practice of law.
AI continues to gain momentum as it promises to transform the future of work. In the legal industry, one of the fastest-growing use cases for AI is machine learning that helps lawyers review and summarize contracts. By automating this tedious work, lawyers can quickly sort and summarize contracts, finding and flagging critical provisions twice as fast and up to 80 percent more accurately than manual review.
Proponents assert that by automating specific tasks, this technology increases the quality of service lawyers deliver to clients. Skeptics counter that machines cannot replace the critical thinking of lawyers, and their voices cannot be ignored.
Does entrusting technology with a greater role in the practice of law create or mitigate risk?
“People have gotten comfortable with the risks in processes they know,” says Goldstein. “When you show them a new process, they suddenly see more risk. One battle to convert the legal profession to use AI is reminding people of risks in current practice. Young associates summarizing documents at 2 a.m. to meet a deadline are making mistakes. Because lawyers are used to that practice, they don’t view it as a risk.”
Each panelist had a refreshing take on using AI in the practice of law. One unanimous view was that clients don’t care how lawyers work — they want the right level of service, at the right cost, to produce the desired outcome.
While there is certainly a lot of chatter and “visioneering” around AI, said Furlong, “AI is just a technology and technology is a tool to help do a job more effectively.”
Goldstein’s view is that the role of legal counsel is changing. The highest value lawyers can provide comes from their experience and knowledge. Increasingly, having lawyers handle tasks such as reading and summarizing documents will be considered a misuse of time and money. AI has huge potential to put lawyers back in the advisory role by eliminating low-level, time-intensive reading and summarizing of documents, work that does not require years of specialized training.
As with many emerging technologies, lawyers need education on the benefits of AI at several levels.
“Today it’s AI. Tomorrow it will be blockchain,” said Goyal. “Law schools need to get on board with teaching not just the technology, but how to implement it. Lawyers need to understand how to use it efficiently. Continuing professional development needs to play a role.”
In 2017, the ABA adopted a new recommendation for accrediting technology continuing education credits. In “How to Meet the Duty of Technical Competence” (Law Technology Today), Ivy B. Grey notes: “The revised MCLE requirements are important because they reinforce the fact that the duty is continuing, training is important, and mere exposure to technology is not enough.”
Clearly, the panelists are proponents of using AI in the practice of law. But is there an ethical obligation for lawyers to use emerging technologies?
Goldstein said that while he doesn’t feel it is currently an ethical obligation, it is a matter of professional responsibility: “The service I provide to clients is based on presenting the latest, most efficient and cost-effective options that allow me to maximize my experience and knowledge as a lawyer with individual clients.”
The conversations around AI in the practice of law are well underway, and it’s clear the trend line will continue.
Laura van Wyngaarden is Chief Operating Officer and Co-Founder of Diligen Software, a leading intelligent contract assistant. Diligen uses AI to deliver faster, higher-quality contract review. Laura is a leading expert on legal tech and AI. She holds undergraduate and honors degrees from the University of Cape Town and a master’s from Oxford. Follow her on Twitter @laurawyngaarden.
“Artificial Intelligence for Legal Marketers” by Mark Greene
“Running With the Experts: Takeaways from the College of Law Practice Management Futures Conference” — Fellows Andy Daws, Susan Hackett, Patrick Lamb, Marc Lauritsen, Sharon Nelson, Mark Tamminga, Courtney Troutman, John Simek and Greg Siskind share their perspectives on artificial intelligence and the legal industry
February 21, 2019