Renowned legal educator Roscoe Pound stated, “Law must be stable and yet it cannot stand still.” Yet, as Susan Nevelow Mart has demonstrated in a seminal article, the different online research services (Westlaw, Lexis Advance, Fastcase, Google Scholar, Ravel, and Casetext) produce significantly different results when researching case law. Furthermore, a recent study of 325 federal courts of appeals decisions revealed that only 16% of the cases cited in appellate briefs make it into the courts’ opinions. This does not exactly inspire confidence in legal research or its tools to maintain stability of the law. As Robert Berring foresaw, “The world of established sources and sets of law books that has been so stable as to seem inevitable suddenly has vanished. The familiar set of printed case reporters, citators, and secondary sources that were the core of legal research are being minimized before our eyes.”
In this article I focus on Artificial Intelligence (AI) and natural language processing with respect to searching. My article proceeds as follows. To understand how effective natural language processing is in current legal research, I build a model of a legal information retrieval system that incorporates natural language processing. I have had to build my own model because we do not know very much about how the proprietary systems of Westlaw, Lexis, Bloomberg, Fastcase, and Casetext work. However, there are descriptions in information science literature and on the Internet of how systems with advanced programming techniques actually work or could work. Next, I compare such systems with the features and search results produced by the major vendors to illustrate the probable use of natural language processing, similar to the models. In addition, I examine the use of word prediction, or type-ahead, techniques in the major research services, particularly how such techniques can be used to bring secondary resources to the forefront of a search. Finally, I explore how the knowledge gained may help us better instruct law students and attorneys in the use of the major legal information retrieval systems.
My conclusion is that the adeptness of natural language processing is uneven among the various vendors and that what we receive in search results from such systems varies widely depending on a host of unknown variables. Natural language processing has introduced uncertainty to the law. We are a long way from AI systems that understand, let alone search, legal texts in a stable and consistent way.
Callister, Paul D., Law, Artificial Intelligence, and Natural Language Processing: A Funny Thing Happened on the Way to My Search Results (October 14, 2020). 112 Law Library Journal 161-212 (2020).
How will artificial intelligence (AI) and associated digital technologies reshape the work of lawyers and structure of law firms? Legal services are traditionally provided by highly-skilled humans — that is, lawyers. Dramatic recent progress in AI has triggered speculation about the extent to which automated systems may come to replace humans in legal services. A related debate is whether the legal profession’s adherence to the partnership form inhibits capital-raising necessary to invest in new technology. This Article presents what is to our knowledge the most comprehensive empirical study yet conducted into the implementation of AI in legal services, encompassing interview-based case studies and survey data. We focus on two inter-related issues: how the nature of legal services work will change, and how the firms that co-ordinate this work will be organized. A central theme is that prior debate focusing on the “human vs technology” aspect of change overlooks the way in which technology is transforming the human dimensions of legal services.
Our analysis of the impact of AI on legal services work suggests that while it will replace humans in some tasks, it will also change the work of those who are not replaced. It will augment the capabilities of human lawyers who use AI-enabled services as inputs to their work and generate new roles for legal experts in producing these AI-enabled services. We document these new roles being clustered in multidisciplinary teams (“MDTs”) that mix legal with a range of other disciplinary inputs to augment the operation of technical systems. We identify challenges for traditional law firm partnerships in implementing AI. Contrary to prior debate, these do not flow from constraints on finance to invest in technical assets. Rather, the central problems have to do with human capital: making necessary strategic decisions; recruiting, coordinating, and motivating the necessary MDTs; and adjusting professional boundaries. These findings have important implications for lawyers, law firms, and the legal profession.
Armour, John and Parnham, Richard and Sako, Mari, Augmented Lawyering (August 21, 2020).
Read the full article on SSRN.
This paper discusses models of law and regulation of Artificial Intelligence (“AI”). The discussion focuses on four models: the black letter model, the emergent model, the ethical model, and the risk regulation model. All four models currently inform, individually or jointly, integrally or partially, consciously or unconsciously, law and regulatory reform towards AI. We describe each model’s strengths and weaknesses, discuss whether technological evolution deserves to be accompanied by existing or new laws, and propose a fifth model based on externalities with a moral twist.
Petit, Nicolas and De Cooman, Jerome, Models of Law and Regulation for AI (October 2020). Robert Schuman Centre for Advanced Studies Research Paper No. RSCAS 2020/63.
Read the full paper at SSRN.
The office of a judge is nowadays an indispensable part of the system of governance. However, this does not mean that the legal regulation of this area is optimal or that it poses no challenges for lawyers. Moreover, there is no general consensus on how state power, including that of the courts, should be exercised. Judicial power is usually one of the balancing powers in democratic countries, independent of the executive and legislative powers. This power has its problems, such as the length of judicial proceedings and the inefficiency of the entire judicial system. For some time now, therefore, various mechanisms have been sought to solve the existing problems of this authority. In the world of new technologies, i.e. the world in which we live, more and more instruments are responsible for mechanising certain elements of our lives. In this connection a dilemma arises as to whether some of the tasks of the judiciary can be realized in a mechanized, automated way. Technological achievements may already allow for their application in the justice system. One wonders whether at least some court cases could be resolved in an automated way, i.e. without the participation of a judge and with the use of algorithms and artificial intelligence. The author looks at this area and considers the technological possibilities created by the use of artificial intelligence mechanisms to resolve some court disputes.
Załucki, Mariusz, AI and Dispute Resolution (June 24, 2020). [in:] El derecho público y privado ante las nuevas tecnologías, J. Garcia Gonzalez, A. Alzina Lozano, G. Martin Rodriguez (eds.), Madrid 2020, Available at SSRN: https://ssrn.com/abstract=3636187 or http://dx.doi.org/10.2139/ssrn.3636187
On April 26-27, 2019, the Duquesne University School of Law hosted a conference titled “Artificial Intelligence: Thinking About Law, Law Practice, and Legal Education.” Over those two days, more than 100 attendees were able to listen to nineteen presentations offered by thirty-one professors, educators, technology experts, and lawyers. The four articles in this symposium issue of the Duquesne Law Review resulted from that conference. All of the presentations from the conference are available on the Duquesne website, at: https://www.duq.edu/academics/schools/law/academics/legal-research-and-writing/2019-artificial-intelligence-conference.
Levine, Jan M., Artificial Intelligence: Thinking About Law, Law Practice, and Legal Education (January 1, 2020). Duquesne University Law Review, Vol. 58, No. 1, 2020, Duquesne University School of Law Research Paper No. 2020-06.
Available from the SSRN site.
One major challenge facing humankind in the 21st century is the widespread use of Artificial Intelligence (AI). Hardly a day passes without news about the disruptive force of AI – both good and bad. Some warn that AI could be the worst event in the history of our civilization. Others stress the chances of AI diagnosing, for instance, cancer, or supporting humans in the form of autonomous cars. However, because AI is so disruptive, the call for its regulation is widespread, including the call by some actors for international treaties banning, for instance, so-called “killer robots.” Nevertheless, until now, there is no consensus on how and to what extent we should regulate AI. This paper examines whether we can identify key elements of responsible AI, spells out what exists as part of “top-down” regulation, and considers how new guidelines, such as the 2019 OECD Recommendations on AI, can be part of a solution to regulate AI systems. In the end, a solution is proposed that is coherent with international human rights to frame the challenges posed by AI that lie ahead of us without undermining science and innovation; reasons are given why and how a human rights–based approach to responsible AI should inspire a new declaration at the international level.
Voeneky, Silja, Key Elements of Responsible Artificial Intelligence – Disruptive Technologies and Human Rights (January 1, 2020). Freiburger Informationspapiere, January 2020.
Available from SSRN.
Given the ubiquity of artificial intelligence (AI) in modern societies, it is clear that individuals, corporations, and countries will be grappling with the legal and ethical issues of its use. As global problems require global solutions, we propose the establishment of an international AI regulatory agency that – drawing on interdisciplinary expertise – could create a unified framework for the regulation of AI technologies and inform the development of AI policies around the world. We urge that such an organization be developed with all deliberate haste, as issues such as cryptocurrencies, personalized political ad hacking, autonomous vehicles and autonomous weaponized agents, are already a reality, affecting international trade, politics, and war.
Paper Available Here
Olivia J. Erdelyi, University of Canterbury – College of Business and Law & Judy Goldsmith, University of Kentucky – Department of Computer Science