Spotlight on the Future

Running With the Experts: Takeaways From the 2017 Futures Conference

By Joan Feldman

Last week in Atlanta, the College of Law Practice Management’s (COLPM) 2017 Futures Conference, “Running With the Machines,” explored the current state of artificial intelligence in law practice — and provided a glimpse into the future. We asked College Fellows Andy Daws, Susan Hackett, Patrick Lamb, Marc Lauritsen, Sharon Nelson, Mark Tamminga, Courtney Troutman, John Simek and Greg Siskind to share their perspectives and top takeaways. Lace up!

Mark Tamminga: Engaging Meaningfully With Technology

We are in the earliest stage of the implementation of this new class of tools. And while a number of use cases are apparent (e-disco, due diligence, research, advice bots), we have barely scratched the surface of how cognitive computing and learning machines will affect our professional lives. What is clear is that “artificial intelligence” is rapidly improving in capability and reach, sometimes spectacularly (translation services come to mind).

The problem is that we are at the bottom of the classic S-curve. Despite all the hyperventilation, a lot of the legal-specific AI software isn’t really ready for prime time, takes a long time to “teach,” and is restricted to highly limited domains. But S-curves are sneaky things and have these Moore’s Law-style inflection points where the world changes from one moment to the next. Those inflection points are near, moving quickly, and very, very disruptive. Law firms will struggle to cope, but those who manage best will do so by engaging meaningfully with the technology. So:

  • Someone internal needs to know this stuff — someone who has gone through the exercise of training various algorithms and who can pinpoint appropriate places to experiment. If you have a KM function in your firm, it needs to be all over this.
  • Talk to the vendors. They are full of missionary zeal. They’re trying to sell you their wares, for sure, but they’re also generally excellent teachers and have no interest in selling you on a tool that has no chance of working for your problem — mainly because the AI revenue model tends to be use-, subscription-, or document-based, so if it doesn’t work quickly, they won’t get paid.
  • Read. Keep up. This technology is moving quickly and breakthroughs are happening on a daily basis.
  • Get actual AI to help you! And it’s free. Just tell Google Alerts (by most definitions most of Google is run by AI algorithms) to pay attention for your specific problem or area of interest, along with the words “artificial intelligence.” You’ll get really interesting stuff, really quickly.

Mark Tamminga (@MarkTamminga) is a partner and leader of the Innovation Initiatives at Gowling WLG.

Courtney Troutman: The Hopeful, Fascinating and Frightening

Algorithm. Not just one of the most-used words at the Futures Conference, but also one of the keys to artificial intelligence. At times, my brain needed an algorithm to process the firehose of information, delivered by some of the brightest and best minds in the world of legal technology and data management. The conference was one of the best planned I’ve attended, with top-notch speakers and sometimes mind-bending sessions. I’m not ashamed to say that, at times, running with the experts was as challenging as running with the machines. Here are just a few of the high spots:

During “Speed Networking” — an organized round of “musical chairs” for a diverse group of professionals — participants were divided into pairs and given two minutes apiece to sketch out their work and explain how they were using AI, or anticipated using it, in that work.

Day two sessions focused not only on what is currently happening with AI, but also what is possible — all at once hopeful, fascinating and frightening.

  • The hopeful. We heard how legal departments are already using AI (or at least advanced automation and algorithms) to streamline work and lessen errors and omissions in culling huge amounts of data.
  • The fascinating. Demonstrations via videos showed how AI is currently in use — from lifelike robots capable of processing conversations and responding in human-like ways (including a dark sense of humor), to a demonstration of Jill Watson, the teaching assistant that turned out to be a bot (demonstrated by Georgia Tech professor Ashok Goel).
  • The frightening. The ethics of AI. Isaac Asimov’s three laws of robotics were cited several times. Sharon Nelson stated that Asimov’s “I, Robot” stories were prescient and ring even truer today, more than half a century after they were first penned. Real-life examples of how AI can run amok included Fastcase CEO Ed Walters recounting the 2010 stock market flash crash, caused by a lone man employing a conflicting algorithm. The discussion of what could happen if machines surpassed human intelligence and circumvented Asimov’s laws of robotics, to the extent of even starting a war, was chilling. Perhaps the most comforting words came from Walters, who reminded us that AI can be demystified: “It’s just tools. It may be different, but you still have to be safe with it, like any other tools.”

Courtney Troutman (@SCBar_PMAP) is Director of the South Carolina Bar Practice Management Assistance Program, which she founded in 2002.

Patrick Lamb: Greater Complexity Underlies Great Complexity

While there are any number of great takeaways, one thing really stood out for me: the implicit biases inherent in the algorithms on which AI is based.

During a discussion, Fellow Ken Grady (@LeanLawStrategy) referenced the Loomis case in Wisconsin. The New York Times reported in an unnerving story headlined, “Sent to Prison by a Software Program’s Secret Algorithms”:

“Eric L. Loomis … was sentenced to six years in prison based in part on a private company’s proprietary software. Mr. Loomis says his right to due process was violated by a judge’s consideration of a report generated by the software’s secret algorithm, one Mr. Loomis was unable to inspect or challenge.”

The Wisconsin Supreme Court affirmed the sentence, and the U.S. Supreme Court declined to hear the case.

Grady reported that questions were raised about the implicit biases in the private company’s software and that the Wisconsin court clumsily avoided the problem. There was palpable unease at the Futures Conference about this use of AI, and for many, including me, this was an introduction to the biases within AI algorithms. It seemed that many of us had naively believed that math was value-agnostic.

The discussion became even more interesting in the final session, which came to be known as the “Gee, What Could Possibly Go Wrong” session. One example the panel provided was the programming of autonomous vehicles. The panel focused on whether, under certain circumstances, a vehicle could choose to injure others in order to reduce the risk to its own driver or occupants. And what could happen if three vehicles, all powered by Waymo AI, were about to collide? Would the AI choose the course best for each driver? Or would it choose the course that created the least financial risk to Waymo? A number of examples were discussed, but these alone illustrate why the session earned its nickname.

The outcome of the discussion was that more questions were asked than answered. While the frequency of events might be reduced, the complexity of events may well increase dramatically. What that means for lawyers’ duties, including the duty of competence, raises important as well as interesting questions. What it means for things like the unauthorized practice of law also generated discussion and will remain an important issue for years to come.

Kudos to conference co-chairs Sharon Nelson and Mark Tamminga for developing an outstanding program.

Patrick Lamb (@ValoremLamb) is a founding partner of Valorem Law Group and a founding member at ValoremNext.

Greg Siskind: This Time Things Might Be Different

Artificial Intelligence seems like the big new thing, but it’s been around as an area of research since the 1950s. Still, we do seem to be at a point with Siri, self-driving cars and IBM’s Watson wonder-computer where people are starting to become excited about how the technology can transform society. The legal profession is usually seen as a laggard when it comes to adopting cutting-edge tech, but it seems this time things may be different. The 2017 Futures Conference really made that clear, as attendees were able to deep dive into how many AI products are already changing how lawyers practice and how we need to begin planning now for how these products will shape our law practices.

The lawyers who will thrive in the years to come are the ones with the vision to see how they can use these tools to produce better results at lower costs. 

Greg Siskind (@gsiskind) is one of America’s best-known immigration lawyers and a founding partner of Siskind Susser, PC.

Marc Lauritsen: Let’s Not Lull Ourselves into Nonchalance

While COLPM can sometimes feel like an exclusive “cool kids club,” it does a great job assembling wonderfully diverse people. This year’s conference was one of the best. Some takeaways:

  • Lawyers have dwindling excuses not to use AI. But, one drag on progress is the opacity of pricing. Too many licensing and subscription arrangements are unnecessarily bespoke.
  • We humans need to up our game if we hope to remain competitive with our nonbiological progeny. (Let’s not lull ourselves into nonchalance by supposing that “general” intelligence will elude them much longer.)
  • Speakers Ed Walters and Sharon Nelson reminded me that I have to re-read Isaac Asimov’s “I, Robot.”

Marc Lauritsen (@marclauritsen) is a legal knowledge systems architect and president of Capstone Practice Systems.

Susan Hackett: Learning to Walk Before We Run

The Futures Conference offered great programs, of course, and a welcome focus on a single topic that allowed us to drill down much more usefully than when a conference tries to cover the entire horizon of issues. Here are the conversations that resonated most with me:

  • Lawyers need to recognize that sometimes AI (and other technologies, not to mention well-trained workers and non-law-firm legal service providers) can do the job of lawyers better than lawyers themselves. That’s a demonstrable fact. Rather than dispute it, what are we going to do to leverage technologies that make us better and improve service to our clients?
  • While we’re all very excited by how data, tech, and particularly AI can empower us to be better lawyers and problem-solvers for clients, we need to be mindful of their faults as well, including the inherent bias that can exist in data pulls and algorithms. How do we identify bias? Can we “correct” it? How do we adjust our thinking to avoid bias? This is the most recent iteration of “junk in/junk out.” 
  • Every conversation reminded the lawyers in the room how little law school or previous practice prepared us for the challenges of integrating technology into law practice — and the extremely high behavioral hurdles we have to get over in order to improve. While frustrating, we kept coming back to the best way forward: through close and respectful collaboration with colleagues who do have that vital education and experience. The lesson: Lawyers need to embrace professional staff in law firms, legal ops leaders in law departments, and experts in supporting service companies, and find ways to drive the value of everyone’s contribution to tomorrow’s high-performing team. That requires lawyers to get past their false dichotomy of “lawyer/non-lawyer” and acknowledge the value of others’ contributions.

AI is good at a number of important tasks, but it occurs to me that law firms and legal departments must first get a better handle on basic automation (and how it supports, not replaces, the kinds of value-driven internal workflow that needs to be in place). AI’s uses are more sophisticated, and we should be reminded to crawl before we walk and walk before we run (with the machines).

Susan Hackett (@HackettInHouse) is CEO of Legal Executive Leadership, LLC.

Andy Daws: Recalibrating Expectations

The kick-off session at this year’s Futures Conference identified the tension between AI hype and current reality, helping recalibrate expectations for what’s possible now versus what may or may not be on the horizon anytime soon. Other tensions emerged as we dug deeper. AI automation yields efficiency, but that has always been the enemy of the billable hour and the traditional law firm model. Firms need to maintain utilization rates for junior associates, but much of their routine work is already being automated. Risk-aversion is part of what makes lawyers good at their job, but is in conflict with the need to embrace change and build disruption into delivery models in the “second machine age.” There is a fundamental tension between the commoditization and widespread adoption of these technologies and the traditional supply chain for legal services. Now is the time to be grappling with these challenges, and if not “running with the machines,” then at least learning to walk with them! 

Andy Daws (@dawsandy) is Chief Customer Officer at Kim Technologies.

Sharon Nelson and John Simek: Run to Get Onboard

We were most struck by how many people were both excited about the future of AI and afraid of it. There were those who sided with the AI enthusiasm of Mark Zuckerberg and Bill Gates, and those who shared the dark fears of Elon Musk and Stephen Hawking — and a large crossover between the two groups. 

In terms of a takeaway tip, we believe that lawyers who wish to survive the transformation in law practice need to have some understanding of AI, how it is now being used in law firms, and how it will be used in the not-too-distant future. This was really a breakout year for AI in law practice — what we once dreamed of is becoming a reality. As the train has now inexorably left the station, lawyers need to run to get aboard — or at least to get educated about artificial intelligence quickly, so they are ready to board the next train!

Sharon D. Nelson (@SharonNelsonEsq) and John W. Simek (@SenseiEnt) are President and Vice President of Sensei Enterprises, Inc., a digital forensics, legal technology and information security firm.

Are You Using AI in Your Practice?

How do you believe AI will transform the legal profession? Now? Over the next three to five years? Ten years out? Tell us in the comments below or email editor@attorneyatwork.com.

Save the Date for the 2018 Futures Conference

Next year, the College of Law Practice Management’s annual conference heads to Boston, October 25-26, to tackle the theme “Cybersecurity: This Way There Be Dragons!”

Illustration ©iStockPhoto.com


Joan Feldman

Joan Feldman is Editor-in-Chief and a co-founder of Attorney at Work, publishing “one really good idea every day” since 2011. She has created and steered myriad leading practice management and trade publications, including the ABA’s Law Practice magazine, where she served as managing editor for a dozen years. Joan is a Fellow and served as a Trustee of the College of Law Practice Management. Follow her on LinkedIn and @JoanHFeldman.
