Last week in Atlanta, the College of Law Practice Management’s (COLPM) 2017 Futures Conference, “Running With the Machines,” explored the current state of artificial intelligence in law practice — and provided a glimpse into the future. We asked College Fellows Andy Daws, Susan Hackett, Patrick Lamb, Marc Lauritsen, Sharon Nelson, Mark Tamminga, Courtney Troutman, John Simek and Greg Siskind to share their perspectives and top takeaways. Lace up!
We are in the earliest stage of the implementation of this new class of tools. And while a number of use cases are apparent (e-disco, due diligence, research, advice bots), we have barely scratched the surface of how cognitive computing and learning machines will affect our professional lives. What is clear is that “artificial intelligence” is rapidly improving in capability and reach, sometimes spectacularly (translation services come to mind).
The problem is that we are at the bottom of the classic S-curve. Despite all the hyperventilation, a lot of the legal-specific AI software isn’t really ready for prime time, takes a long time to “teach,” and is restricted to highly limited domains. But S-curves are sneaky things and have these Moore’s Law-style inflection points where the world changes from one moment to the next. Those inflection points are near, moving quickly, and very, very disruptive. Law firms will struggle to cope, but those who manage best will do so by engaging meaningfully with the technology. So:
Mark Tamminga (@MarkTamminga) is a partner and leader of the Innovation Initiatives at Gowling WLG.
Algorithm. Not just one of the most-used words at the Futures Conference, but also one of the keys to artificial intelligence. At times, my brain needed an algorithm to process the firehose of information, delivered by some of the brightest and best minds in the world of legal technology and data management. The conference was one of the best-planned I’ve attended, with top-notch speakers and sometimes mind-bending sessions. I’m not ashamed to say that, at times, running with the experts was as challenging as running with the machines. Here are just a few of the high spots:
During “Speed Networking” — an organized round of “musical chairs” for a diverse group of professionals — participants were divided into groups and paired off, with two minutes apiece to sketch out their work and explain how they were using AI, or anticipated using it, in that work.
Day two sessions focused not only on what is currently happening with AI, but also what is possible — all at once hopeful, fascinating and frightening.
Courtney Troutman (@SCBar_PMAP) is Director of the South Carolina Bar Practice Management Assistance Program, which she founded in 2002.
While there are any number of great takeaways, one thing really stood out for me: the implicit biases inherent in the algorithms on which AI is based.
During a discussion, Fellow Ken Grady (@LeanLawStrategy) referenced the Loomis case in Wisconsin. The New York Times reported in an unnerving story headlined, “Sent to Prison by a Software Program’s Secret Algorithms”:
“Eric L. Loomis … was sentenced to six years in prison based in part on a private company’s proprietary software. Mr. Loomis says his right to due process was violated by a judge’s consideration of a report generated by the software’s secret algorithm, one Mr. Loomis was unable to inspect or challenge.”
The Wisconsin Supreme Court affirmed the sentence and the U.S. Supreme Court declined to hear the case.
Grady reported that questions were raised about the implicit biases in the private company’s software and that the Wisconsin court clumsily avoided the problem. There was palpable unease at the Futures Conference about this use of AI, and for many, including me, this was an introduction to the biases within AI algorithms. It seemed that many, like me, had naively believed that math was value-agnostic.
The discussion became even more interesting in the final session, which came to be known as the “Gee, What Could Possibly Go Wrong” session. One example the panel provided was the programming of autonomous vehicles. The panel focused on whether, under certain circumstances, a vehicle could choose to injure others in order to reduce the risk to its own driver or occupants. And what could happen if three vehicles, all powered by Waymo AI, were about to collide? Would the AI choose the course best for each driver? Or would it choose the course that created the least financial risk to Waymo? A number of examples were discussed, but these illustrate why the session became known by that name.
The discussion raised more questions than it answered. While the frequency of accidents might be reduced, the complexity of those that do occur may well increase dramatically. What that means for lawyers’ duties, including the duty of competence, raises important as well as interesting questions. What it means for things like the unauthorized practice of law also generated discussion and will remain an important issue for years to come.
Artificial intelligence seems like the big new thing, but it has been around as an area of research since the 1950s. Still, with Siri, self-driving cars and IBM’s Watson wonder-computer, we do seem to be at a point where people are starting to get excited about how the technology can transform society. The legal profession is usually seen as a laggard when it comes to adopting cutting-edge tech, but this time things may be different. The 2017 Futures Conference made that clear: Attendees were able to take a deep dive into the many AI products already changing how lawyers practice, and into why we need to begin planning now for how these products will shape our law practices.
The lawyers who will thrive in the years to come are the ones with the vision to see how they can use these tools to produce better results at lower costs.
Greg Siskind (@gsiskind) is one of America’s best-known immigration lawyers and is founding partner of Siskind Susser, PC.
While COLPM can sometimes feel like an exclusive “cool kids club,” it does a great job assembling wonderfully diverse people. This year’s conference was one of the best. Some takeaways:
Marc Lauritsen (@marclauritsen) is a legal knowledge systems architect and president of Capstone Practice Systems.
The Futures Conference offered great programs, of course, and a welcome focus on a single topic that allowed us to drill down much more usefully than when a conference tries to cover the entire horizon of issues. Here are the conversations that resonated most with me:
AI is good at a number of important tasks, but it occurs to me that law firms and legal departments must first get a better handle on basic automation (and how it supports, not replaces, the kinds of value-driven internal workflow that needs to be in place). AI’s uses are more sophisticated, and we should be reminded to crawl before we walk and walk before we run (with the machines).
Susan Hackett (@HackettInHouse) is CEO of Legal Executive Leadership, LLC.
The kick-off session at this year’s Futures Conference identified the tension between AI hype and current reality, helping recalibrate expectations for what’s possible now versus what may or may not be on the horizon anytime soon. Other tensions emerged as we dug deeper. AI automation yields efficiency, but efficiency has always been the enemy of the billable hour and the traditional law firm model. Firms need to maintain utilization rates for junior associates, but much of their routine work is already being automated. Risk aversion is part of what makes lawyers good at their job, but it conflicts with the need to embrace change and build disruption into delivery models in the “second machine age.” There is a fundamental tension between the commoditization and widespread adoption of these technologies and the traditional supply chain for legal services. Now is the time to be grappling with these challenges — and if not “running with the machines,” then at least learning to walk with them!
Andy Daws (@dawsandy) is Chief Customer Officer at Kim Technologies.
We were most struck by how many people were both excited about the future of AI and afraid of it. There were those who sided with the AI enthusiasm of Mark Zuckerberg and Bill Gates, and those who shared the dark fears of Elon Musk and Stephen Hawking — and a large crossover between the two groups.
In terms of a takeaway tip, we believe that lawyers who wish to survive the transformation in law practice need to have some understanding of AI, how it is now being used in law firms, and how it will be used in the not-too-distant future. This was really a breakout year for AI in law practice — what we once dreamed of is becoming a reality. As the train has now inexorably left the station, lawyers need to run to get aboard — or at least to get educated about artificial intelligence quickly, so they are ready to board the next train!
Sharon D. Nelson (@SharonNelsonEsq) and John W. Simek (@SenseiEnt) are President and Vice President of Sensei Enterprises, Inc., a digital forensics, legal technology and information security firm.
How do you believe AI will transform the legal profession? Now? Over the next three to five years? Ten years out? Tell us in the comments below or email firstname.lastname@example.org.
Next year, the College of Law Practice Management’s annual conference heads to Boston, October 25-26, to tackle the theme “Cybersecurity: This Way There Be Dragons!”
Sign up for Attorney at Work and help us grow! Subscribe to the Daily Dispatch and the Weekly Wrap (same price: free). Follow us on LinkedIn, Facebook and Twitter @attnyatwork.