Recent theoretical writings on the possibility that algorithms might someday create law have deferred algorithmic law-making, and the need to decide on its legitimacy, to some future time in which algorithms will be able to replace human lawmakers. This Article argues that such discussions risk essentializing an anthropomorphic image of the algorithmic lawmaker as a unified decision-maker and divert attention from algorithmic systems that are already performing functions that together have a profound effect on legal implementation, interpretation, and development. Adding to the rich scholarship on the distortive effects of algorithmic systems, the Article suggests that state-of-the-art algorithms capable of limited legal analysis can have the effect of preventing legal development. Such algorithm-induced ossification, the Article argues, raises questions of legitimacy no less consequential than those raised by futuristic algorithms that could actively create norms.
To demonstrate this point, the Article puts forward a hypothetical example of algorithms performing limited legal analysis to assist healthcare professionals in reporting suspected child maltreatment. Already in use are systems performing risk analysis to aid child protective services in screening maltreatment reports. Drawing on the example of algorithms increasingly used today in social media content moderation, the Article suggests that similar systems could be used for flagging cases that show signs of suspected abuse. Such assistive systems, the Article argues, will likely cement the prevailing legal meaning of maltreatment. As mandated reporters increasingly rely on such systems, the result would be the absence of legal evolution, preventing changes to contentious elements in the legal definition of reportable suspicion, including the scope of acceptable physical discipline. Combined with the familiar influence of existing systems, this hypothetical system could have a profound effect on the path of the law on child maltreatment, equivalent in its significance to the effect autonomous algorithmic adjudication would have.
Maggen, Daniel, Predict and Suspect: The Emergence of Artificial Legal Meaning (March 1, 2021). North Carolina Journal of Law and Technology, Vol. 23, No. 1, 2021.
This article identifies the five large-scale changes that have happened or are happening to the legal profession:
1. How technology solutions have moved law from a wholly bespoke service to one that resembles an off-the-shelf commodity;
2. How globalisation and outsourcing upend traditional expectations that legal work is performed where the legal need arises, and shift production away from high-cost centres to low-cost centres;
3. How managed legal service providers – who are low-cost, technology-enabled, and process-driven – threaten traditional commercial practice;
4. How technology platforms will diminish the significance of the law firm; and
5. How artificial intelligence and machine learning systems will take over a significant portion of lawyers’ work by the end of the 2020s.
The article discusses how these changes have transformed or are transforming the practice of law, and explains how institutions within the law will need to respond if they are to remain relevant (or even to survive). More broadly, it examines the social implications of a legal environment where a large percentage of the practice of law is performed by institutions that sit outside the legal profession.
Hunter, Dan, The Death of the Legal Profession and the Future of Law (March 17, 2020). 43(4) University of New South Wales Law Journal 1199 (2020),
Read the full article on SSRN.
How will artificial intelligence (AI) and associated digital technologies reshape the work of lawyers and the structure of law firms? Legal services are traditionally provided by highly skilled humans — that is, lawyers. Dramatic recent progress in AI has triggered speculation about the extent to which automated systems may come to replace humans in legal services. A related debate is whether the legal profession’s adherence to the partnership form inhibits the capital-raising necessary to invest in new technology. This Article presents what is to our knowledge the most comprehensive empirical study yet conducted into the implementation of AI in legal services, encompassing interview-based case studies and survey data. We focus on two inter-related issues: how the nature of legal services work will change, and how the firms that co-ordinate this work will be organized. A central theme is that prior debate focusing on the “human vs technology” aspect of change overlooks the way in which technology is transforming the human dimensions of legal services.
Our analysis of the impact of AI on legal services work suggests that while it will replace humans in some tasks, it will also change the work of those who are not replaced. It will augment the capabilities of human lawyers who use AI-enabled services as inputs to their work and generate new roles for legal experts in producing these AI-enabled services. We document these new roles being clustered in multidisciplinary teams (“MDTs”) that mix legal with a range of other disciplinary inputs to augment the operation of technical systems. We identify challenges for traditional law firm partnerships in implementing AI. Contrary to prior debate, these do not flow from constraints on finance to invest in technical assets. Rather, the central problems have to do with human capital: making necessary strategic decisions; recruiting, coordinating, and motivating the necessary MDTs; and adjusting professional boundaries. These findings have important implications for lawyers, law firms and the legal profession.
Armour, John and Parnham, Richard and Sako, Mari, Augmented Lawyering (August 21, 2020).
Read the full article on SSRN.
A new article by Will Douglas Heaven, senior AI editor at the MIT Technology Review, has called for an end to the use of predictive policing and justice tools powered by AI algorithms. The article looks at a number of ways that race feeds into AI algorithms, and how this can disadvantage minorities. It suggests that current AI systems, when applied to justice, end up reinforcing existing systemic racism and can even amplify bias, as judgments formed by a supposedly objective system in turn reinforce existing prejudice.
Heaven therefore suggests that, until AI has been developed to the point where it can be genuinely objective, it should not be used in such an important decision-making capacity, particularly as discussions continue in the US and globally about racism and bias in the justice system.
Visit the MIT Technology Review to read the full argument.
The Bar Standards Board announced on 12 May 2020 that the Bar Professional Training Course and Bar Transfer Test assessments, which were delayed from April to August, will be carried out online with the assistance of Pearson’s OnVUE secure global online proctoring solution, which allows for remote invigilation. Holding the exams within this timeframe will allow students with pupillage offers to take them up in the autumn, rather than causing further delays.
The BSB has said that the “OnVUE system uses a combination of artificial intelligence and live monitoring to ensure the exam is robustly guarded, deploying sophisticated security features such as face-matching technology, ID verification, session monitoring, browser lockdown and recordings.” However, critics have suggested that the system may prejudice students with young children, as it automatically ends the test if another person is detected in the room with the examinee.
BSB director-general Mark Neale said: “Since the current health emergency began… students and transferring qualified lawyers have had to face considerable uncertainty, which we very much regret, and I am delighted that we can now deliver centralised assessments remotely in August with Pearson VUE’s state-of-the-art online proctoring system.”
For more information see the full article on the BSB site.
With the regulation of Artificial Intelligence (AI), the European Commission is addressing one of the central issues of our time. However, a number of core legal questions are still unresolved. Against this background, the article, in a first step, lays regulatory foundations by examining the possible scope of a future AI regulation and by discussing legal strategies for implementing a risk-based approach.
In this respect, I suggest an adaptation of the Lamfalussy procedure, known from capital markets law, which would combine horizontal and vertical elements of regulation at several levels. This should include, at Level 1, principles for AI development and application, as well as sector-specific regulation, safe harbors and guidelines at Levels 2-4. In this way, legal flexibility for covering novel technological developments can be effectively combined with a sufficient amount of legal certainty for companies and AI developers.
In a second step, the article implements this framework by addressing key specific issues of AI regulation at the EU level, such as: documentation and access requirements; a regulatory framework for training data; a revision of product liability and safety law; strengthened enforcement; and a right to a data-free option.
Hacker, Philipp, AI Regulation in Europe (May 7, 2020).
Download the full paper from SSRN.
One major challenge facing humankind in the 21st century is the widespread use of Artificial Intelligence (AI). Hardly a day passes without news about the disruptive force of AI – both good and bad. Some warn that AI could be the worst event in the history of our civilization. Others stress the promise of AI in, for instance, diagnosing cancer or supporting humans in the form of autonomous cars. However, because AI is so disruptive, calls for its regulation are widespread, including calls by some actors for international treaties banning, for instance, so-called “killer robots”. Nevertheless, there is as yet no consensus on how, and to what extent, we should regulate AI. This paper examines whether we can identify key elements of responsible AI, spells out what exists in the way of “top-down” regulation, and asks how new guidelines, such as the 2019 OECD Recommendation on AI, can be part of a solution for regulating AI systems. In the end, a solution coherent with international human rights is proposed to frame the challenges posed by AI that lie ahead of us without undermining science and innovation; reasons are given why and how a human rights-based approach to responsible AI should inspire a new declaration at the international level.
Voeneky, Silja, Key Elements of Responsible Artificial Intelligence – Disruptive Technologies and Human Rights (January 1, 2020). Freiburger Informationspapiere, January 2020.
Available from SSRN.
Machine learning has entered the world of the professions with differential impacts. Engineering, architecture, and medicine are early and enthusiastic adopters. Other professions, especially law, are late and in some cases reluctant adopters. In wider society, automation will have huge impacts on the nature of work. This paper examines the effects of artificial intelligence and blockchain on professions and their knowledge bases. We start by examining the nature of expertise in general and then how it functions in law. Using examples from law, such as Gulati and Scott’s analysis of how lawyers create (or don’t create) legal agreements, we show that even non-routine and complex legal work is potentially amenable to automation. However, professions are different because they include both indeterminate and technical elements that make pure automation difficult to achieve. We go on to consider the future prospects of AI and blockchain for the professions and hypothesise that as the technologies mature they will incorporate more human work through neural networks and blockchain applications such as the DAO. For law, and the legal profession, the role of lawyer as trusted advisor will again emerge as the central point of value.
Flood, John A. and Robb, Lachlan, Professions and Expertise: How Machine Learning and Blockchain are Redesigning the Landscape of Professional Knowledge and Organisation (August 9, 2018). Griffith University Law School Research Paper No. 18-20.
Available from the SSRN site.
What will happen to law firms and the legal profession when the use of artificial intelligence (AI) becomes prevalent in legal services? This paper addresses this question by considering specific AI use cases in legal services, and by identifying four AI-enabled business models (AIBM) which are relatively new to legal services (if not new to the world). These AIBMs are different from the traditional professional service firm (PSF) business model at law firms, and require complementary investments in human resources, intra-firm governance and inter-firm governance. Law firms are experimenting with combinations of business models. We identify three patterns in law firm experimentation: first, combining the traditional PSF business model with the legal process and/or consulting business models; second, vertically integrating the software vendor business models; and third, accessing AIBMs from third-party vendors to take advantage of contracting for innovation. While predicting the future is not possible, we conclude that how today’s law firms transform themselves into tomorrow’s next generation law companies depends on their willingness and ability to invest in necessary complements.
Armour, John and Sako, Mari, AI-Enabled Business Models in Legal Services: From Traditional Law Firms to Next-Generation Law Companies? (July 12, 2019). Available at SSRN.