“Alexa, Call the Statistician!”


In this special guest feature, Julia Brickell, Executive Managing Director and General Counsel at H5, believes that the legal profession is on a collision course with artificial intelligence, given AI’s extraordinary growth and its massive potential to disrupt how the law is practiced. AI’s ability to transform electronic discovery, draw up contracts, analyze judicial bias, and predict legal outcomes has many in the profession calling for the development of standards around how AI is used. Julia oversees the legal affairs of the company, including corporate governance and legal compliance issues, and also advises on corporate strategy. She serves on the faculty of Columbia University’s Executive Master of Science in Technology Management program, where she builds awareness and skills for close collaboration between technologists and lawyers in the digital era. Julia is an active participant in committees addressing artificial intelligence and technology in DRI, FDCC, IADC, and other organizations, and serves on the board of Lawyers for Civil Justice. She obtained her B.A. from Smith College and her J.D. from Columbia University School of Law.

As smart technologies and artificial intelligence capabilities are increasingly incorporated into our lives, taking over tasks previously done by humans, the question keeps arising: “Will AI affect the practice of law?” The answer, of course, is “absolutely!” But the biggest change in our practice will not come from the tools our clients are creating, though to be sure, those will cause us to ask our usual questions about new products in order to provide legal advice. Nor will it come from the tools we lawyers can use to create contracts, draft briefs, do research, and otherwise speed our practice, though those will indeed reduce repetitive or mundane work (perhaps offsetting reduced fees with higher profits and increased job satisfaction). Rather, the most comprehensive change to the practice of law is likely to be lawyers’ need to embrace the idea that new scientific competencies will be required to understand these technologies, deploy them competently, and ascertain whether they have produced the intended results.

After artificial intelligence entered the legal arena in the form of technology-assisted review some 15 years ago, it became apparent that the quality of the results obtained from these tools 1) is measurable, 2) varies with the tool, and 3) varies with the expertise of those who deploy the technology. The methods for making those measurements have been known to those in the field of information retrieval for decades, but not to most lawyers. Indeed, research conducted under the auspices of the National Institute of Standards and Technology a decade ago shows the wild variation in results, the benefits of expertise, and participants’ lack of understanding of how well or poorly they in fact performed. And while we lawyers have been slow to adopt the technologies, we have been slower still to acknowledge the expertise needed to produce quality results and to embrace the scientists who can properly deploy the statistics needed to assess the results achieved.
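To make the measurement point concrete, below is a minimal sketch, in Python, of the precision/recall arithmetic that information-retrieval scientists have long used to assess how well a review actually performed. The counts and the `precision_recall` helper are hypothetical illustrations, not figures from the NIST research; real assessments also involve careful sampling design and confidence intervals.

```python
# A minimal sketch (hypothetical figures) of the kind of measurement long used in
# information retrieval: estimating precision and recall of a review tool's output
# by comparing its relevance calls against a human-adjudicated sample.

def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Return (precision, recall, F1) computed from counts in an adjudicated sample."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: in an adjudicated sample, the tool flagged 120 documents as responsive;
# 90 of those were truly responsive, and it missed another 60 responsive documents.
p, r, f1 = precision_recall(true_positives=90, false_positives=30, false_negatives=60)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
# precision=0.75 recall=0.60 F1=0.67
```

Two tools, or two teams using the same tool, can land far apart on these numbers; without measuring, no one knows which.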

Why must that change? The tools lawyers are encountering are built around opaque models that take often disparate and hard-to-interpret input data and generate seemingly coherent and actionable decision guidance (this document is likely responsive; this suspect is a flight risk; this street corner has a high probability of being the scene of a gang-related crime between midnight and 1:00 a.m.). Some tools are rule-based: if they encounter defined data (e.g., a search term or phrase appearing in a prescribed way), they will return a result. Some are statistical, counting and weighting the data they encounter against a mathematical model that has been learned from training data. The output will depend on the inputs and will change as new data is encountered. This may result in different calls of relevance and different predictions or autonomous actions as new information is added to the data pool. Algorithms have long been counting our computer clicks and voice requests and documenting our locations. They predict preferences and prompt us to buy or act. They may aggregate data about us and compare and contrast us to others in our imputed demographic. Algorithms are embedding values we can’t see and making recommendations (think hiring, credit scoring, and recidivism predictions) or even directing actions (think smart home assistants and autonomous vehicles).
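The contrast between the two kinds of tools can be shown in a deliberately simplified sketch, assuming Python with scikit-learn and entirely made-up documents: the rule-based check returns a result only when a prescribed phrase appears, while the statistical model weighs the words it sees against what it learned from training examples.

```python
# A simplified, hypothetical illustration of the two approaches described above:
# a rule-based check that fires only on a prescribed phrase, versus a statistical
# model whose relevance call is learned from (and can shift with) its training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def rule_based(document: str) -> bool:
    # Returns a hit only if the defined phrase appears, however else the idea is worded.
    return "breach of contract" in document.lower()

training_docs = [
    "notice of breach and demand to cure",       # responsive
    "failure to perform under the agreement",    # responsive
    "lunch order for the quarterly offsite",     # not responsive
    "parking validation for visiting counsel",   # not responsive
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(training_docs, labels)

new_doc = "counterparty failed to perform and we intend to terminate"
print(rule_based(new_doc))           # False -- the exact phrase never appears
print(model.predict([new_doc])[0])   # 1 -- learned weights on words like "perform" favor responsive
```

Add a new training document or change a few labels, and the statistical call on the same document can flip; the rule-based check stays fixed until someone rewrites the rule.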

Understanding the capabilities, choices, efficacy, and impacts of artificial intelligence matters. Often, understanding the output will suffice: what does it actually contain or represent, what has been missed or ignored, what are its biases? What we or our clients say or do based on a misunderstood output may create liability. At other times, to render advice to clients building AI products, we’ll need to know more: what complex array of choices and preferences is being embedded in the autonomous product? Not only are we constrained by the Rules of Professional Conduct to be competent in our advice about technology and its use (ABA Model Rule of Professional Conduct 1.1), but we are also required to understand it sufficiently to explain the impacts of our use or nonuse of it to our clients (ABA Model Rule 1.4 (communication) and ABA Model Rule 1.5 (fees)) and to be accurate in what we say to others (ABA Model Rules 3.3 (candor), 3.4 (fairness), 4.1 (truthfulness), 5.3 (supervision)). Gaining the requisite understanding requires competencies and assessment protocols that are not taught in law school and are not readily acquired in the field; we need to call upon the relevant fields of expertise.

 
