Introduction
There is some debate over which jobs are most likely to be taken over by robots in the coming years. The well-known axiom of microprocessor development enshrined as Moore's law suggests that computing power doubles every 18 months. This, combined with developments in big data, the internet of things, machine learning and a rapidly expanding robotics industry,Footnote 1 is already transforming the world of work. Many people believe that routine and repetitive jobs will be the first to go: car plant assembly workers already replaced by robots may be followed by delivery drivers supplanted by drones, and Uber or Lyft drivers superseded by driverless cars. However, the white-collar, professional and creative occupations are far from safe.Footnote 2 While occupations that involve judgment and human interaction may be among the last to be taken over by machines, a significant study of the impact of technology on work ranked 702 occupations from most likely to be replaced to least likely. In terms of the legal professions, paralegals were placed in the first quartile of those to be superseded. Software that can scan documents for key words and phrases has already transformed the role of paralegals and legal assistants.Footnote 3 Lawyers in general – perhaps as a consequence of their interpersonal, advisory roles – were safely in the fourth quartile of least likely to be replaced. Judges, however, were placed at 271 – just above the midpoint – and only a little safer than locker room and coatroom attendants.Footnote 4 It is certainly possible that the role of lawyers might be augmented by machines, but could they – or, more particularly, judges – be replaced by robots? There are certainly attractions to this idea. The features of economy, accessibility, reach, speed and enhanced information management are valuable.
Some may even think that robot judges could eliminate human biases – either accidentally or intentionally – while being able to dedicate unlimited processing and learning capacity to deal with cases in parallel rather than in series, with all the savings of time and money that this may entail.Footnote 5
Of course new technology does not only promise solutions – it also multiplies the number of disputes and their complexity as online trading across jurisdictions, internet shopping and the internet-enabled gig or sharing economy provide occasions for increased dispute as well as offering new criminal opportunities. Perhaps the new technology-enhanced world requires equivalent appropriate technology to match. Here we consider the possibility of this and examine some of the issues that are raised. This involves looking at both what it is that judges do in their various roles across the range of courts and how technology might replace (rather than simply assist) the judge. While much of the focus will be on the machine-learning algorithms which are becoming increasingly current, we do acknowledge that artificial intelligence (AI) in this context covers a range of computational models of legal reasoning, including natural language processing, that are of a different nature to machine learning approaches.Footnote 6 However, we must declare an initial and strong scepticism that the essentially social nature of law can be reproduced by machines, no matter how sophisticated.
We take the view that most, if not indeed all, approaches that seek to bring AI to the activity of judging mistake the nature of law. Law is generally seen there simplistically, as a traditional Austin-style ‘command backed by sanction’,Footnote 7 capable of being applied to a straightforward ‘fact’ situation, rather than the complex social process that it actually is.Footnote 8 We argue that AI approaches cannot yet, and probably cannot ever, develop the complexity to reproduce the essentially social activity of delivering justice. To do so would require the reduction, concealment, or distortion of underlying social relations and interactions.Footnote 9 Furthermore (and related to this), we argue, machine-based approaches are unable to accommodate the idea of resistance that must inevitably be present in any exercise of power, including judicial power, as a result of the inescapable reality that, as Foucault reminds us, ‘where there is power, there is resistance, and yet, or rather consequently, this resistance is never in a position of exteriority in relation to power’.Footnote 10 Such resistance is pluralistic, ranging from the mundane to the grand, and is important for the law as a productive and contingent technical process, intended to direct social action and resolve conflicts, and one that is continuously responsive to outside influences and open to new possibilities.Footnote 11 Algorithmic technologies provide only one such set of influences.
This is not to rule out that the implementation of such technologies may have a significant effect on some aspects of the operation and function of legal systems. For example, as this paper discusses below in the specific context of dispute resolution, there are particular opportunities for digital technologies that may not be available so directly in the context of more formal hearings or trials in either a civil or criminal context. Developments here are reviewed, as we note the evolution of ‘alternative dispute resolution’ (ADR) into ‘online dispute resolution’ (ODR), and consider briefly how the component tasks of mediation might be thought to lend themselves to technological enhancement. However, we argue that these developments will be ever contingent upon the relations and needs of, and conflicts between, actors at the social level, meaning that humans will retain a central place in determining the direction of legal processes.
We will be particularly alive to the issue of whether any of this technology has the potential to amount to a new system of dispute resolution, as opposed to simply being a tool to augment existing processes. Then attention will turn to speculation as to whether the sort of patterns that might be gathered from big data and sorted by machine learning algorithms could provide the basis of a new approach, what this might mean in terms of legal subjectivity, and the ways in which decisions are governed and mediated by technology.Footnote 12 Here a number of systems used in criminal justice are reviewed before the account turns to further analysis of how the implementation of algorithmic tools may usher in new forms of semi-automated justice that, so far, are not accounted for in the current regulatory frameworks. Rather than framing this process of change solely as one where human actors may be replaced by technological equivalents, it is argued that it is more beneficial to consider how this questions the purpose and role of human actors in legal procedures, and whether this will change the nature and purpose of ‘justice’ more generally.
1. Dispute resolution – a move from ADR to ODRFootnote 13
There is no doubting both the policy push for technology and the roll-out of various pioneering examples within dispute resolution in the UK and beyond.Footnote 14 As stated in the Ministry of Justice's Transforming our Justice System report, digitisation of proceedings is intended to play a major role in ensuring that the legal system of England and Wales provides ‘swift and certain justice’, in a manner that ‘[saves] people time and money, and [shrinks] the impact of legal proceedings on their lives’.Footnote 15 Behind this goal is the belief that the current, political culture of continuing austerity lays the platform and provides the opportunity to transform and re-engineer how the courts and tribunal systems work in the UK.Footnote 16
In contrast, concerns have been raised regarding such changes by, among others, the National Audit Office and the House of Commons Committee of Public Accounts, who stress the difficulties in implementing such ambitious reforms within a legal system that previously significantly lagged behind those of other European states in terms of technological innovation.Footnote 17 Furthermore, the Law Society of England and Wales has launched the Public Policy Technology and Law Commission to investigate the use of algorithmic tools for the purposes of decision-making in the justice system, and how this may affect current rights protections, fairness and trust.Footnote 18
Nonetheless, in a context where the number of disputes is rising in a way not matched by the capacity of the current formal system to provide effective access to justice, the promise of information and communication technology (ICT) is irresistible – particularly so in terms of providing low-cost dispute resolution.Footnote 19 Indeed, as Katsh and Rabinovich-Einy point out, some of the problem features of ICT – such as the lack of face-to-face interaction, large-scale collection and storage of all available data, the side-lining of privacy concerns, and a belief in, and reliance upon, the ‘intelligence’ of the machine – can be beneficial in the context of ADR: asynchronous communication allows time to consult and research specific problems; more complete data sets help to build up a wider picture; a decrease in privacy can assist in quality control and prevention strategies; and the intelligence of the machine can enhance efficiency through automation of large numbers of small-scale disputes.Footnote 20
Certainly there are many examples of simple and effective dispute resolution mechanisms utilising ICT.Footnote 21 Providers such as The Mediation RoomFootnote 22 and BenoamFootnote 23 have developed online platforms to allow mediators and arbitrators to exchange documents and communicate online. Cybersettle and, more recently, TryToSettle.com, offer a blind bidding system where the parties to a dispute can attempt to find a match between offer and demand, while Smartsettle encourages parties to list their interests and assign them a value to allow a more complex spectrum of agreement to be achieved. Perhaps the most widely used dispute resolution format in the world is eBay's ODR system, developed by SquareTrade, which handles more than 60 million disputes every year. Online forms are used initially to make claims and demands, and an online mediation with human mediators is available if no early resolution is made.Footnote 24 In the Netherlands a start-up called Justice42 has developed an online collaborative platform to assist divorcing couples.Footnote 25
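The blind-bidding mechanism offered by providers such as Cybersettle can be sketched in a few lines. The tolerance threshold and midpoint settlement rule below are illustrative assumptions about a common design for such systems, not the published mechanics of any particular provider:

```python
def blind_bid_round(offer: float, demand: float, tolerance: float = 0.30):
    """One round of blind bidding.

    Neither party sees the other's figure. If the defendant's offer
    meets the claimant's demand in full, settle at the demand; if it
    comes within `tolerance` of the demand, settle at the midpoint.
    Otherwise no match is declared and the parties may rebid.
    (The 30% tolerance and midpoint rule are invented for illustration.)
    """
    if offer >= demand:
        return float(demand)                    # demand met in full
    if offer >= demand * (1 - tolerance):
        return round((offer + demand) / 2, 2)   # settle at the midpoint
    return None                                 # no match this round

print(blind_bid_round(8_000, 10_000))   # → 9000.0 (within tolerance)
print(blind_bid_round(5_000, 10_000))   # → None (gap too wide)
```

The point of the design is that neither party's figure is ever disclosed to the other, so an unmatched round reveals nothing that could weaken a later negotiating position.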
These innovations are also being replicated as part of the service delivery remit of state bodies. For example, as part of the Ministry of Justice's reform package mentioned above, Her Majesty's Courts and Tribunals Service is producing new online platforms for divorce and probate applications, small money claims, and traffic penalty appeals, among others, so that issues can be dealt with by individuals in the first instance through a form of ‘do-it-yourself’ justice.Footnote 26 This move towards so-called ‘online court’ processes has been accompanied by 86 court closures across England and Wales, with a further 15 identified for future action.Footnote 27 Such transformations are also occurring elsewhere. In Germany there is a government-funded system, Online Schlichter, which is used as an online mediation service for business-to-consumer, e-commerce and direct selling disputes between states inside Germany and countries within the EU.Footnote 28
As highlighted by Susskind, this potentially produces a social renegotiation as to whether a court should be defined more broadly as a service, rather than a place, or physical space.Footnote 29 This renegotiation, according to Donoghue, may potentially erode the ‘important symbolic function of the courthouse as the home of justice’, thus opening up new possibilities for understanding the meaning of justice.Footnote 30 Either way, it certainly alters the ways in which individuals must navigate their interactions with the justice system, re-casting them as ‘users’ around which these tools must be designed.
While all of these examples are no doubt useful, the technology seems to act mainly as a tool to assist in dispute resolution rather than an autonomous system which can actually process, adjudicate or settle disputes independently. Katsh and Rabinovich-Einy identify three major phases in the development of ODR, whereby services have moved from simply putting online the various elements of the dispute resolution triad (ie the two disputing parties and the moderator), through systems which deploy software to support and assist resolution, to the current (or indeed next) generation where the emphasis is on algorithms and smart machines using and re-using data to inform and underwrite systems that prevent disputes, or find easy ways to resolve them.Footnote 31 This introduces an important distinction.
There are those ICT elements that contribute to dispute resolution in rather the same way as a complaints form; this is at the first level of evolution – simply making (a sometimes complex) account of the dispute available to be compiled and addressed by the parties online. At the second level not only is there software that may, for example, manage blind bids in an effort to reach settlement, but also relatively straightforward algorithms that may apply various rules in relation to multiple factors. For example, in the context of an online shopping forum such as Amazon, if the buyer is a frequent purchaser or Amazon Prime member, an infrequent returner of goods, or if the goods are of low value or the subject of many complaints, then a particular outcome – a refund, replacement or other outcome – may be produced by the algorithm without the intervention of any costly human resources. This again is useful: it may improve the consumer experience and is certainly a more economically efficient business model than using human mediators in a telephone complaints department. However, it is not really replicating the work of a court, or even necessarily a mediator.
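A second-level algorithm of the kind just described can be sketched as a simple rule cascade over stored facts about the buyer and the goods. All field names, thresholds and outcomes below are invented for illustration; any real retailer's rules are proprietary:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    order_value: float          # value of the disputed order
    buyer_annual_orders: int    # purchase frequency
    buyer_return_rate: float    # fraction of past orders returned
    item_complaint_count: int   # prior complaints about this product

def triage(claim: Claim) -> str:
    """Second-level ODR: fixed rules applied to multiple factors.

    Note that nothing here weighs argument or evidence -- the
    algorithm simply branches on recorded facts, which is why it
    falls short of replicating a court or even a mediator.
    """
    if claim.order_value < 20:
        return "automatic refund"        # cheaper than any human review
    if claim.item_complaint_count > 50:
        return "refund and flag product"
    if claim.buyer_annual_orders > 12 and claim.buyer_return_rate < 0.05:
        return "replacement offered"     # trusted frequent buyer
    return "escalate to human agent"

print(triage(Claim(9.99, 2, 0.50, 0)))    # → automatic refund
print(triage(Claim(100.0, 20, 0.01, 0)))  # → replacement offered
```

The final branch is telling: whenever the rules run out, the system's only move is to hand the dispute back to a human.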
At the third level, data is collected in bulk quantities and examined and re-used by algorithms so as to analyse patterns and produce predictions or decisions regarding the outcome of a particular case. Again this may be valuable in ascertaining ways of avoiding disputes – keep the terms and conditions clear, provide a better description of the goods, offer a faster delivery service or whatever – but it does not amount to the sort of exercise in achieving third party agreement, with all the elements of discretion, appeal to authoritative determination or middle way arbitration that characterises the classic triad of dispute resolution.Footnote 32 Algorithms here are being used as an aid or tool within a wider process.
It is, of course, important to be careful to distinguish between the various elements of arbitration, negotiation, mediation and the various hybrid forms of ADR as well as straightforward adjudication.Footnote 33 However, if we parse out the various elements of dispute resolution it can be seen that most ICT-enhanced processes are some way off replicating the human umpire. A dispute will involve variations of the following steps: identifying the issues; establishing ‘facts’ – with varying degrees of evidential formality; ascertaining the relevant legal framework; providing an opportunity for venting feelings; evaluating the parties’ interests; disaggregating issues; establishing positions; exchanging information; suggesting options for resolution; setting out a time frame for actions; seeking agreement and creating binding resolutions. Routine civil disputes or consumer matters may sometimes be reducible to such steps. The high volume of cases in administrative tribunals too may be capable of analysis into a number of steps, and certainly the Transforming our Justice System report mentioned above contains a vision for tribunals to include online hearings, traditional in-person hearings, and a mixture of the two.Footnote 34 It envisages a new, simpler procedure occurring online where lay users can be guided through the system in areas such as social security and child support.Footnote 35 It is certainly possible that in areas such as these, various decision-making stages of the dispute can be assisted by ICT, alongside the augmentation of the wider process. This does not, however, amount to machines solely resolving disputes. The input of human actors is still required at various stages to facilitate the process, and finalise decisions and agreement.Footnote 36
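The component steps listed above can be laid out as a simple pipeline, marking which ones current ODR tools plausibly automate. The human/machine split assigned to each step is our own illustrative judgement, not a taxonomy from the literature:

```python
# Each dispute step paired with the actor that plausibly performs it
# in current ODR systems (an illustrative, contestable assignment).
STEPS = [
    ("identify the issues",             "human"),
    ("establish the facts",             "human"),
    ("ascertain the legal framework",   "machine"),  # rule/precedent lookup
    ("allow venting of feelings",       "human"),
    ("evaluate the parties' interests", "human"),
    ("disaggregate issues",             "human"),
    ("establish positions",             "human"),
    ("exchange information",            "machine"),  # document platforms
    ("suggest options for resolution",  "machine"),  # e.g. blind bidding
    ("set a time frame for actions",    "machine"),  # automated scheduling
    ("seek agreement",                  "human"),
    ("create binding resolutions",      "human"),
]

human_steps = [name for name, actor in STEPS if actor == "human"]
print(f"{len(human_steps)} of {len(STEPS)} steps still need a human")
# → 8 of 12 steps still need a human
```

Even on this generous view of automation, the majority of steps remain irreducibly human, which is the substance of the argument above.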
2. Towards the courtroom proper
If we move along from ADR and other forms of more informal dispute resolution towards other applications in the legal system, it is clear that new technology is already having a considerable impact.Footnote 37 There are a number of examples of machines assisting with, if not entirely taking over, human functions of adjudications.
For example, the Traffic Penalty Tribunal (TPT) in England and Wales allows drivers to appeal online against tickets handed out by local authorities.Footnote 38 The idea behind this is that it follows what Shapiro refers to as the ideal prototype of courts, whereby an independent adjudicator applies the relevant law to the facts at hand, within adversarial proceedings, to produce a dichotomous decision that announces one party as being legally right, and one as legally wrong (without any need for legal representation in this case).Footnote 39 Applicants must enter the relevant Penalty Charge Notice (PCN) number and provide reasons for their appeal, before their case is evaluated by an adjudicator. The realities of how well this system fits such an analysis could be debated, but the crucial point is that it has allowed large numbers of relatively simple cases to be dealt with online, and has been deemed a successful case of digitisation in the UK courts by the judiciary.Footnote 40
While tools like the TPT allow drivers to appeal tickets through an online platform, the actual process of judgment or resolution is still carried out by a human actor who, ideally, neutrally assesses the evidence and arguments at hand, and fulfils the various steps of a dispute laid out above. A system of algorithmic dispute resolution, or robot judgment, would require something much more than this. The independent adjudicators would no longer be ‘lawyers with a minimum of five years’ legal experience’,Footnote 41 but sophisticated algorithms with machine learning capabilities, operating with a powerfully adaptive and ‘mindless agency’ to produce both decisions and predictions.Footnote 42 In such a situation, those five years of human experience could arguably be outstripped or rendered irrelevant by a robot judge, in much less time than it took a human to gather them.
There are other examples of machines taking decisions. In 2016, President Obama's Data-Driven Justice initiative committed 67 city, county, and state governments across the USA to using data-driven strategies to divert low-level offenders with mental illness out of the criminal justice system.Footnote 43 This took place within the context of wider systematic attempts to ‘rationalise’ criminal justice so as to make it ‘smarter’ and more ‘evidence based’.Footnote 44 To make criminal justice decision-making smart in this sense requires the use of algorithmic risk assessment tools, acting as a decision support tool, to predict a defendant's ‘riskiness’ and potential for future offending. This enables decisions to be made as to how individuals should be processed through the system, and judged in relation to decisions on bail, sentencing, probation and parole, among others.Footnote 45 In contrast to ‘tough on crime’ approaches towards criminal justice, this style of ‘actuarial justice’ focuses on the control of population behaviour, and the appropriate allocation of state resources, in line with an individual's predicted risk (of recidivism) level – as opposed to punishment through mass incarceration holding pride of place in penal policy.Footnote 46
Two algorithmic tools used for these purposes have gained significant academic attention in recent times: COMPAS (or Correctional Offender Management Profiling for Alternative Sanctions) in the USA, and HART (Harm Assessment Risk Tool) in the UK.Footnote 47 These are useful and instructive examples in that while they both follow and demonstrate general trends towards the more prevalent use of algorithmic tools in justice decisions, they are also different in methods of analysis and implementation – showing that algorithmic risk assessment can produce very different results depending on how its procedures are designed. For example, while HART makes use of official record data on relevant defendants, including their age, gender, postcode, criminal history and the types of offences committed, COMPAS takes a much more complex route, by taking similar record data, then adding to this the statistical evaluation of an offender interview and ‘self-report’, amounting to an analysis of the defendant's socio-economic and psychological circumstances and needs.Footnote 48 In addition, the use of HART is restricted to the period immediately following an arrest, where a decision must be made about the custody of a given arrestee in the custody suite of their relevant police station, whereas COMPAS plays a part throughout the ‘offender processing continuum’, informing pre-trial, sentencing, and post-conviction decisions to produce ‘behavioural change’ and therefore ‘treat’ and ‘correct’ the offender by reducing risk.Footnote 49
The use of both tools combines human in-the-loop and human on-the-loop decisions – meaning those where humans are required to select and guide inputs, and those where the tool generally works in an automated fashion and where humans are only needed for final execution or intervention respectively.Footnote 50 COMPAS can be described as a human in-the-loop tool. It is designed in such a way that humans both take part in data collection through the self-report questionnaire, and are able to operate final discretion over the risk prediction by using built-in overrides.Footnote 51 It is only in between these two points where the process is fully automated, as the algorithm constructs a profile through machine learning techniques. HART, by contrast, can be seen as a human on-the-loop tool. As mentioned, demographic information and official record data are collected, then combined to produce a risk score from 509 separate decision tree algorithms voting on the information at hand, in what is known as a ‘random forest model’.Footnote 52 Human supervision takes place only at the final moment of decision, to decide whether to follow the tool's recommendation that the individual is low, medium, or high risk.Footnote 53 The final decisions arrived at, following human oversight, can therefore be considered as semi-automated. Humans are not replaced within this process, but take on a different role, as the overseer and correcting mechanism for the algorithmic predictions.
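The ‘random forest’ voting just described can be sketched with scikit-learn: many decision trees each cast a vote on the case at hand, and the ensemble returns the favoured label. The features, training data and three-way labels below are synthetic stand-ins; HART's actual inputs and training set are not public in full:

```python
# Sketch of a 509-tree random-forest risk classifier in the style of
# HART. Everything about the data here is invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((600, 4))         # stand-in features (age, priors, ...)
y = rng.integers(0, 3, 600)      # 0 = low, 1 = medium, 2 = high risk

forest = RandomForestClassifier(n_estimators=509, random_state=0).fit(X, y)

arrestee = rng.random((1, 4))    # one new (synthetic) case
# Tally each individual tree's vote for the three risk bands.
votes = np.bincount(
    [int(tree.predict(arrestee)[0]) for tree in forest.estimators_],
    minlength=3,
)
print(dict(zip(["low", "medium", "high"], votes.tolist())))
print("ensemble says:", ["low", "medium", "high"][forest.predict(arrestee)[0]])
```

The significant point for the argument above is that the ‘decision’ is nothing more than this tally: the human on the loop sees only the final band, not the 509 votes behind it.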
This overseer role, where human insights must be combined with those of the machine, introduces a new tension into the decision-making process, between the values of human actors, and those of purely statistically-focused algorithms – thus demonstrating the difficulty of capturing the social element of a decision. This can be viewed in debate over the bias of algorithmic tools, particularly in questions over the racist predictions of COMPAS, whereby black defendants in Florida were two times more likely to be misclassified as high risk by the tool, and the resultant statistical explanation for this – which the developers viewed as legitimate – in that black prisoners demonstrated a higher base rate of offending due to the institutional data available.Footnote 54 What constitutes a fair or unbiased decision is ultimately a question of ‘trading off’ between various possible risk factors and statistical predictors.Footnote 55 What the developer of a tool may view as legitimate and fair, may clash with the operational needs and experience of a human decision-maker, who may be more inclined to place an individual within a lower risk category.Footnote 56 Although it may be viewed as an inefficiency, maintaining this contestation over the values and purpose of such a process is important, as it enables conscious debate over the utility and suitability of such tools. Automatically following the predictions of algorithms may ensure that decision-makers can optimise their output and performance, by reducing offending and rates of incarceration at a statistical level.Footnote 57 It does not directly follow, however, that this provides an immediate social benefit.
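The ‘trading off’ point has a precise statistical core: by Chouldechova's impossibility result, a risk score that is equally well calibrated for two groups with different base rates of reoffending must produce different false positive rates. A small numeric sketch, with figures invented for arithmetic clarity:

```python
# Why calibration and equal error rates cannot both hold when base
# rates differ (the nub of the COMPAS debate). Numbers are invented.
def false_positive_rate(base_rate: float, ppv: float = 0.6,
                        fnr: float = 0.3) -> float:
    """FPR implied by fixing calibration (PPV) and FNR.

    Chouldechova's identity:
      FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * (1 - FNR)
    where p is the group's base rate of reoffending.
    """
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

fpr_a = false_positive_rate(base_rate=0.5)  # higher-base-rate group
fpr_b = false_positive_rate(base_rate=0.3)  # lower-base-rate group
print(round(fpr_a, 3), round(fpr_b, 3))     # → 0.467 0.2
```

With the same calibration and the same false negative rate, the higher-base-rate group is more than twice as likely to be wrongly labelled high risk – which is why ‘fairness’ here is a value choice between competing statistical criteria, not a property the developer can simply engineer in.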
3. Robots in the courtroom
While there are many who are sceptical about the possibility of humans being replaced fully in the courtroom, there is a significant body of opinion which does see a long-term possibility where lawyers become policy experts rather than individual advisers advising individual clients, and the human decision-making of judges is replaced by AI algorithms.Footnote 58 This involves not just computers helping lawyers to do much the same as they do now but instead entails a wholly different view of how the system will work. For example, Richard Susskind has predicted that
online courts and ODR will prove to be a disruptive technology that fundamentally challenges the work of traditional litigators and of judges. In the long run I expect them to become the dominant way to resolve all but the most complex and high-value disputes.Footnote 59
Alarie, Niblett and Yoon too have an understanding of a wholly changed legal process in the future where
Once given facts relevant to the question, a machine can situate these facts within the domain of applicable legal precedents. In addition to being less susceptible to various kinds of biases, machines do not suffer from other problems affecting human lawyers exercising such judgment. Algorithms can generate predictions that others can replicate. Moreover, algorithms do not tire. Computers do not need to take time off.Footnote 60
This brave new world will of course require technology that presently does not fully exist. In a recent and authoritative account of the developments in the current legal tech boom that have built upon earlier research in AI and law, Kevin AshleyFootnote 61 identifies the task for AI and law researchers as being to develop computational models to perform legal reasoning, as opposed to being able to simply answer legal questions in a superficial way, as programs like Watson or Debater may be able to do.Footnote 62 This requires that models can generate arguments for and against particular outcomes in problems input as text, predict a problem's outcome, and explain their predictions with reasons that legal professionals will recognise and be able to evaluate for themselves. In order for this to happen, Ashley argues, researchers will need to answer two questions: ‘How can text analytic tools and techniques extract the semantic information necessary for [argument retrieval] and how can that information be applied to achieve cognitive computing’.Footnote 63 His account reviews how computational models of legal reasoning (CMLRs) have been developed to model the various legal reasoning techniques involved with statutes and cases, and integrate them with values to predict and construct legal arguments. While to date these CMLRs have not dealt directly with legal texts, there are developments in text analytics that may change this, allowing conceptual information to be extracted automatically from a range of legal sources with tools developed to process some aspects of the semantics or meanings of legal texts. This in turn may lead to the development of applications which integrate the ‘question answering’ and ‘information extraction’ functions with argument mining techniques and particular CMLRs to yield new tools for conceptual legal information retrieval, including argument retrieval.
Ashley accepts that text analytic techniques may not ever be able to extract all the conceptual information, as some conceptual inferences may remain too indirect or require too much background context. However, in his view this general approach may lead to the development of a new breed of legal apps allowing humans and computers to collaborate. Such a collaboration could, it is argued, amount to what Susskind has described as a significant evolutionary stage in the legal services: this is where legal work changes from a bespoke or customised service to a standardised, systematised, packaged, and, ultimately, commoditised format available directly for the end-user.Footnote 64 There are clearly implications for judges too.
While our argument is not a technical one, and does not depend on the possibility of the technology failing to develop, it does perhaps seize hold of the admission by Ashley that some conceptual inferences may never yield to textual analytic techniques, as a result of their complexity or rootedness in wider structures and beliefs. Indeed we argue that the practice of law, and the role of judges, is fundamentally socially produced and acted on by dynamic processes within the wider legal system which are complex, and contingent on a social context in ways that it is difficult to imagine ICT capturing in full or accurately. This argument will be returned to below. Now, however, as the focus turns to the role of technology within the courtroom, sensu stricto, and to its potential to replace judges as arbiters of law and fact in a formal courtroom setting, it is important that the judicial role is explored further in this context.
4. The judicial role
There has, of course, been considerable research on the judicial role. Indeed, as Cranston points out, there is ‘an academic industry’ considering what is involved with the business of courts.Footnote 65 There are various branches to this area of study considering the adjudication and law-making aspects of a court, covering issues such as the use of discretion in decision-making, resolving hard cases, and factoring in the role of morality and policy arguments. Cranston also distinguishes a strand of legal anthropological writing focusing on disputes and their resolution, and the work which distinguishes courts from mediation or arbitration by concentrating on the defining characteristic involving the application of doctrine and the use of legal method. This is what characterises judgment: as Fuller and Winston express it, in contrast to an administrative decision, ‘adjudication is a device which gives formal and substantive expression to the influence of reasoned argument’.Footnote 66
This would seem to take us into the world of legal reasoning, justification, argumentation, and the weighty literature here.Footnote 67 Indeed it is this approach to law (or, more accurately, a misreading of itFootnote 68) that provided the initial attraction for the early pioneers of AI who took up law as a potentially fruitful area, believing that they could model what they saw as straightforward ‘rules’ as applied to clear-cut ‘facts’ into machine code.Footnote 69 These ideas of argumentation, automatisation and AI are again beginning to regain currency among some formalists.Footnote 70 Some of this is becoming quite sophisticated in terms of taking into account values and social purposes.Footnote 71
Nevertheless, our difficulty here is that this often seems rather essentialist, or abstract. Much of the legal theorists’ approach – and certainly almost all of the AI researchers’ work – does not seem to say much about what courts and judges actually do.Footnote 72 As the extensive debates on judicial appointments, and the qualities that are needed to show ‘merit’ here, can attest, this involves much more than ‘judging’, and will include a range of interpersonal skills around how judges organise their court and manage cases, as well as how they deal with the personnel in the courtroom and the public.Footnote 73 The importance of these elements, and the difficulties around capturing them in algorithmic form, should not be overlooked. However, this should not distract us from the wider contexts of judging which also may be difficult to reproduce by machines.
First of all, it is important to acknowledge that judging is not a single activity with a fundamental method that remains unchanged across every context in which it is employed. It is necessary to distinguish clearly between minor civil disputes, major cases, and those with complex law or facts. Obviously the various constituent elements or procedural requirements will vary in their formality as the stakes rise. In more serious criminal cases the system is perhaps at its most complex, with strict rules of evidence and full opportunity to participate in the process. Indeed it is this element of participation in a decision that affects an individual – with all the rights that this engages – that is centrally characteristic of a formal legal dispute.Footnote 74 This is linked to issues of legitimacy and trust. Simmons develops arguments from the area of procedural justice relating to the various factors that participants use to determine whether the process is fair, to suggest that predictive algorithms do not succeed very well on these factors.Footnote 75 In particular, he argues, there may be specific difficulties about trust, neutrality and bias in view of the opacity of the machine process.Footnote 76 Indeed this raises issues about the capacity of machines to justify any decision they may make in clear terms. This sort of transparency is, arguably, an important element of judgingFootnote 77 (although perhaps not all human judges do this adequately all of the time) and questions need to be asked as to whether and how such justificatory elements in a decision may be formalised for capture and subsequent development in computational models.Footnote 78 This connects too to ideas about human dignity and the need to recognise this among all parties in the courtroom.
As Sourdin and Cornes argue, human judges are not merely data processors, and if the judicial role is reduced to this, without factoring in wider psychological insights, this would involve rejecting not only the humanity of the judge but that of all who come before them.Footnote 79
It is also important to see courts within their context as part of a wider political process, reinforcing allocations of wealth and power, restating the rights, rules, entitlements and obligations underpinning society, and supervising markets. Courts are part of the wider constitutional landscape too. This may be true not only in a general sense, but as they operate in relation to individual cases. In a classic account, Summers refers to ‘process values’ within court systems, involving participatory governance, procedural rationality and humaneness.Footnote 80 This sort of approach was affirmed in a recent landmark decision about the costs of justice by the UK Supreme Court, which acknowledged that actual and effective access to justice, and the procedure that the courts and tribunals provide, is not merely a public service like any other but a key part of the rule of law and the fabric of rights.Footnote 81
These arguments about the wider context of the judicial role need to be added to the argument mentioned earlier which suggests that we consider the actual practice of law in the real world to be a highly social activity occurring within the complex milieu of legal practice. As one empirical study drawing upon the legal realism of Jerome Frank argues,Footnote 82 in the real world of legal practice lawyers work with ‘legal information’, which is a wider category than simply law and facts, and may include knowledge about what arguments work in particular courts. Here ‘facts’ are negotiated among the parties, who must of course agree about what they are disagreeing about, before entering into the complex social interactions of the courtroom. Law too is selected in a way that reflects the wider social and professional context (including, of course, the need to win cases, keep clients happy and develop an individual career). From the huge range and volume of ‘raw’ legal material (comprising not only statutory material from the domestic legislatures and Europe, but also the 250,000 cases processed annually and 5000 heard in the higher courts) relevant legal arguments must be selected. The potential here for almost limitless indeterminacy, where novel arguments could be deployed almost indefinitely, is controlled by the wider context of legal practice in a social process, mediated by judges and conditioned by a whole range of broader professional, social and economic factors within the overall legal system. This remains the case even with more specialist, upper courts where one might imagine that facts are more settled and the law more specialist.Footnote 83
In order for algorithms to entirely permeate this socio-legal milieu, AI technologies would need to reach the point whereby a judge, or at least some form of surrogate judging system, could be produced either semi- or fully autonomously using big data analytics, and that this could act independently to adjudicate on, and intervene in, the governance of human relations and interactions.Footnote 84 To create a system like this would require a complete instrumentalisation of all these social aspects of law into something that could be technologically calculated and predicted. Even in something as relatively straightforward as the ADR example discussed earlier this would require an algorithmic actor identifying the issues and legal framework, establishing ‘facts’, evaluating the parties’ interests, disaggregating issues, establishing positions, exchanging information, suggesting options for resolution, setting out a time frame for actions, seeking agreement and creating binding resolutions.
When the fuller and more dynamic context of a formal courtroom hearing is considered, it becomes still less likely that even the most sophisticated machine learning would work. The current level of technological ability suggests that such a situation is a long way off – not to mention the potential for judicial and political opposition, the threat from the powerful lobby that is the legal profession, and the dangers associated with opening up new opportunities for private sector technical developers who may come to dominate the market and unduly influence what should be a state function. For now at least, ICTs represent ‘disruptive technologies’, in that they may disrupt current working patterns and flows, but they cannot produce a new kind of justice system alone. As such they remain tools for current legal actors to augment their actions in specific tasks and processes. Crucially, in common law systems, such technology would also need to afford legal representatives the opportunity to assess and contest the evidence and arguments of their opposing parties, in line with disclosure requirements. In practice, therefore, while an algorithmic tool may theoretically be capable of assessing a given data set on a legal dispute – and providing a calculated, predictive result, in a similar role to that of a judge – space will probably remain for the exercise of social contestation as an essential element of law and its functions.Footnote 85
In other words, while the technical capabilities of algorithmic tools can enable a closer working relationship between humans and machines within the justice system, it will remain a social process, although with potential new forms of interactions between individuals and technologies, producing new ways of resolving legal problems. Marshall McLuhan has observed that, ‘when a new technology comes into a social milieu it cannot cease to permeate that milieu until every institution is saturated’.Footnote 86 What saturation would probably mean in the context of algorithmic tools in the justice system is that working practices may be adjusted, legal procedures may be altered, and new types of dispute and trial may be possible, but the practice of law would remain fundamentally social, and tied to the needs of human actors – including those mentioned in the above paragraph, who may express opposition or scepticism. It is here that the issue of resistance becomes important, as discussed earlier in this paper.
Resistance in relation to the potential instrumentalisation of the law by algorithmic tools does not refer to, or require, a grand political movement taking a stance against the implementation of technology. Rather, it involves the changes produced by everyday social conflicts, disagreements, and struggles of legal practice, whereby legal rules and legal meaning are ever-moving and transforming.Footnote 87 This can also include struggles surrounding innovation policies, such as those in the UK discussed above. Where technologies become involved in legal processes, their use is as much saturated by these conflicts and disagreements surrounding implementation and design, as the processes of the law are saturated by the capabilities of algorithmic tools. This can already be witnessed in operational use.
5. Semi-automated justice
This paper has dealt directly with the role of the judge, and specifically whether judicial reasoning – or the activity of judging – can be replicated by an algorithm, operating with machine learning capabilities. To be clear, this is again distinct from the role of lawyers and legal assistants, where, as we observed above, algorithms are having a significant effect on employment and working practices. However, as discussed above, such technology can influence the role of other human actors, who must make important procedural decisions, such as the individual police officers who decide, on the basis of an algorithmic prediction, whether an individual should be bailed.Footnote 88 The important distinction here is that both judges and officers alike take on an institutional role, whereby they must judge individuals based upon institutional methods of decision and control.
Attempts to model judicial decision-making, for the purposes of automating courts and other judging activities – in a manner that remains fair, accurate and transparent – demonstrate the difficulty of fully automating legal procedures. This can be witnessed in recent attempts to algorithmically predict case outcomes, of which there are two prime examples. The first is the attempt by Aletras et al to predict decisions of the European Court of Human Rights (ECtHR); the second is work by Katz et al to predict decisions of the United States Supreme Court.Footnote 89 The former's model predicted 584 decisions with an average of 79% accuracy (as high as 84% for violations of Art 6 of the European Convention on Human Rights), and the latter predicted 28,000 case outcomes with 70.2% accuracy, and 240,000 judicial votes with 71.9% accuracy. Both studies used the available databases of judgments for each court to construct their model, applying natural language processing and machine learning algorithms to text-based material. The sheer scale of cases involved demonstrates the significant computing power of big data analytics, previously unavailable with other forms of analysis.
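To make the general approach concrete, the following is a minimal, hypothetical sketch of the kind of text-based outcome prediction these studies describe: a model is trained on the textual ‘facts’ of past judgments and asked to classify a new one. It is not the authors' actual method (which used more sophisticated feature extraction and classifiers); it is a simple naive Bayes bag-of-words classifier, and all of the example texts and labels below are invented for illustration.

```python
# Hypothetical sketch: predicting case outcomes from judgment text using a
# simple bag-of-words naive Bayes classifier. Training texts and labels are
# invented; this is illustrative, not the model used by Aletras et al.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(docs):
    """docs is a list of (text, label) pairs; returns the fitted model."""
    word_counts = {}            # label -> Counter of word occurrences
    label_counts = Counter()    # label -> number of training documents
    for text, label in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(tokenize(text))
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, label_counts, vocab

def predict(model, text):
    """Return the label with the highest (log) posterior probability."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label, n in label_counts.items():
        score = math.log(n / total)  # class prior
        counts = word_counts[label]
        denom = sum(counts.values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((counts[w] + 1) / denom)  # Laplace smoothing
        if score > best_score:
            best, best_score = label, score
    return best

# Invented fact summaries standing in for the 'facts' section of judgments.
training = [
    ("applicant detained without judicial review for months", "violation"),
    ("prolonged detention no effective remedy provided", "violation"),
    ("hearing held promptly before an independent tribunal", "no-violation"),
    ("proceedings concluded within a reasonable time", "no-violation"),
]
model = train(training)
print(predict(model, "detention continued without review or remedy"))  # → violation
```

Even this toy example makes the limitation discussed below visible: the classifier can only operate once the ‘facts’ have been settled and rendered as text, a precondition that is itself the product of human procedure and negotiation.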
The study by Aletras et al, in particular, is useful because the algorithm used was significantly more accurate at predicting outcomes based on facts and procedure than on the relevant law (and better again when the two were combined). Given that facts and procedure make up a large part of the social reality of a case, this is a significant development, particularly as such technology is only in its infancy. Despite this, it should be noted that since the ECtHR is an appeals court, the facts were not in dispute at this stage. This means that the tools were reliant on a number of set givens, already defined by a combination of procedural norms and human input and interaction. It would no doubt be interesting to see how an algorithm would perform at the level of a lower court, to discern whether it could function as something beyond a support tool or assistant. Further, while such models may not fully replace the role of the judge in coming years, they could potentially be usefully adapted as a method of triaging cases, in an effort to deal with rising caseloads.Footnote 90
Framing the automation of such judging activities through the implementation of algorithmic tools in precisely this manner, as tools or assistants, rather than newly functioning independent systems, recognises that at least for the present moment, fully automated justice as a practical reality is unlikely. While future technological developments may change this – or indeed policy wants and needs may give such tools overriding decision-making powers in preference to human actors – the present situation is at least one of transition, where justice provided through the use of these tools and platforms can be classified as semi-automated. This acknowledges that while the tools themselves do not provide a full system of judgment, they are increasingly being incorporated into wider legal systems and procedures, which are influenced by their capabilities, as well as the new functions they allow. This new way of ‘doing’ justice does not prioritise human judgment over that of the machine in general, but instead allows new institutional configurations of human-machine interaction, which are augmented in comparison with existing methods of decision-making. The ‘best’ configuration, as viewed from the institutional perspective, is not therefore necessarily that with the most accurate algorithm, but the one which enables specific and intended forms of government and control. As mentioned above, humans can act in an overseer role, but the purpose of this role is entirely flexible and open to change.
The practical importance of acknowledging and articulating this situation is that under current data protection frameworks, as applicable to the UK as a whole, solely automated decision-making is prohibited by both the Data Protection Act 2018 and the European Union's General Data Protection Regulation – with a limited number of derogations.Footnote 91 Through this same regime, semi-automated decision-making is identified and preferred, as a method of counteracting issues associated with algorithmic analysis. Solely automated decisions are therefore highlighted as the primary danger to a fair, transparent, and accountable legal system – and through this lens, so long as human decision-makers are actively assessing algorithmic predictions, these issues will be avoided.Footnote 92 Meanwhile, however, this ignores the fact that similarly transformative institutional changes can be produced through the use of semi-automated procedures. These bring new challenges for actors and judged individuals alike, who must navigate algorithmic predictions, and deal with the final decisions formed from new configurations of human and machine.
In turn, as discussed in relation to COMPAS and HART above, these new configurations pose new possibilities as to what an appropriate decision is within a given situation, and also how individuals – and groups – should be appropriately judged and governed. Indeed it may be argued that algorithmic risk assessment provides a new episteme, or a new way of thinking and producing knowledge about the world.Footnote 93 For example, in the context of algorithmic risk assessment in criminal justice, algorithms allow responses to crime which frame its existence as a regular and predictable event, with interventions focused on the regulation and scientific management of its occurrence.Footnote 94 This scientific management entails the control of activity at the population level, with a particular focus on the efficiency of decision-making and resource allocation.Footnote 95 Individuals are attributed a given risk profile, based on how their data compares to that of the wider population, meaning that groups can be sorted and classified in more ‘rational’ and ‘efficient’ ways.Footnote 96 Defendants are therefore not judged purely upon their present situation and past behaviour, but also by algorithmic predictions which compare them to a wider aggregate, and ‘bring the subject into being’ through new calculations of behavioural patterns, and the pre-emption of reality.Footnote 97 This can be further augmented by the insights, experience and working practices of a human decision-maker.
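The logic of attributing a risk profile by comparing an individual's data to a wider population can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration (real tools such as COMPAS and HART use many weighted factors and proprietary models): here the individual's score is simply ranked against a reference population and sorted into a band, showing how the judgment is relative to the aggregate rather than to the individual's circumstances alone.

```python
# Hypothetical sketch of population-relative risk banding: an individual is
# classified not on their own data alone, but on how that data ranks against
# a reference population. All numbers below are invented.
def risk_band(individual_score, population_scores,
              bands=("low", "medium", "high")):
    """Band an individual by the share of the population scoring below them."""
    below = sum(1 for s in population_scores if s < individual_score)
    percentile = below / len(population_scores)
    if percentile < 1 / 3:
        return bands[0]
    if percentile < 2 / 3:
        return bands[1]
    return bands[2]

# Invented prior-event counts for a reference population of ten people.
population = [2, 3, 5, 5, 6, 8, 9, 11, 12, 14]
print(risk_band(10, population))  # → high: 7 of 10 in the population score lower
```

The sketch makes the epistemic shift visible: the same individual score yields a different band if the surrounding population changes, which is precisely the sense in which the subject is ‘brought into being’ by aggregate comparison.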
Neither COMPAS nor HART, as algorithmic tools, is able to make the final decision on an individual. This requires the interpretation of a human actor, whether a police officer, judge, or parole caseworker, to analyse and assess the data provided. Both tools provide an override option, allowing the predictions to be altered or ignored, and a different decision-route to be taken.Footnote 98 Such overrides are often put in place to empower users, to both increase uptake, and improve decision-making quality.Footnote 99 Nonetheless, this decision whether to override or not rests within a changing policy context, which can alter the meaning of what an ‘appropriate’ decision is – as enabled by the capacities of algorithmic tools, and new institutional configurations, that emphasise the ‘scientific’ and ‘rational’ management of risky individuals. This differing approach to governing, and its specific kind of algorithmic governmentality, is a complex and nuanced transformation of government. It is captured neither by a focus on full automation, nor by seeing semi-automated decision-making as a solution to the problems it created. It is important to remember here that analysing changes through this frame does not require one to place the analytical and justificatory abilities of humans against those of algorithmic tools, but simply encourages the identification of evolving configurations, as well as points of contestation and possibility.
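The human-in-the-loop pattern described here can be expressed as a minimal sketch. The names and structure below are illustrative only (they do not reflect the actual interfaces of COMPAS or HART): the tool emits a recommendation, and the human actor either accepts it or substitutes their own outcome, with the divergence recorded.

```python
# Hedged sketch of semi-automated decision-making with a human override:
# the algorithm recommends, the human decides, and the system records
# whether the two diverged. Illustrative only, not COMPAS/HART code.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    recommendation: str   # what the algorithmic tool suggested
    final: str            # what the human decision-maker settled on
    overridden: bool      # whether the human departed from the tool

def decide(recommendation: str,
           human_review: Callable[[str], Optional[str]]) -> Decision:
    """The reviewer returns None to accept the recommendation, or a
    replacement outcome to override it."""
    override = human_review(recommendation)
    final = override if override is not None else recommendation
    return Decision(recommendation, final, override is not None)

# A (hypothetical) reviewer who accepts bail recommendations but
# softens refusals into conditional grants.
reviewer = lambda rec: None if rec == "grant bail" else "grant bail with conditions"
print(decide("refuse bail", reviewer).final)  # → grant bail with conditions
```

The point the sketch illustrates is that the override itself is a policy variable: the `reviewer` function encodes what counts as an ‘appropriate’ decision, and changing that function changes the system's behaviour without any change to the algorithm.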
To return to McLuhan, the ‘saturation’ of the ‘social milieu’ of legal decision-making by algorithms may not best be conceived of as robots automatically making decisions regarding human lives in a deterministic fashion, but rather the gradual re-engineering of legal systems and policy goals, to come more into line with the capacities, capabilities, and predictions of algorithmic tools – which are themselves altered to align with social needs and conflicts. In this context, while neither the human nor the machine methods of analysis and decision-making are prioritised at the expense of the other, the overall justice system will retain an element of social contestation and conflict that cannot be accounted for purely through the technical frame of algorithmic tools and technologies.Footnote 100
Conclusions
There is much about technology that is destabilising and redefining existing ways of living, and the organisation of social processes. It should, therefore, perhaps be no surprise that even the operation of law, at both the level of the courts and below, is challenged by technologies that revolutionise how we collect, analyse, and make use of the ever-increasing volumes of data and information, which can be translated into operational institutional knowledge.
In light of this, we share, in part, the viewpoint of those predictions that envisage radical change in the nature of legal practice and judging – especially around settlement systems and dispute avoidance mechanisms. In particular, it is likely that judges will increasingly adopt a ‘managerial stance’ towards civil disputes and criminal cases that expands to encapsulate the umpiring of algorithmic functions in courts. However, we maintain reservations about the presence of new and emerging algorithmic technologies in the context of judging – taken in the broadest conception, to include each decision-making milestone in a dispute or criminal case.
In particular, we feel it is important to resist the wilder claims about the coupling of the analytical capabilities of algorithms to the wider data deluge that we live in, whereby organisations will rely ever more greatly upon machines to infer motivations and narratives through the fully automatic analysis of data patterns and correlations. As argued above, it is more likely that we will find ourselves facing new situations of semi-automated justice, caused by new institutional configurations of human and machine, where human decision-makers take on the managerial role of an overseer or umpire. Rather than a technical solution to the complex problems facing justice systems in a digital world, this situation will bring forth its own new challenges, and these new challenges will primarily affect those individuals who both pursue and receive the provision of justice. Law, perhaps above all forms of social interaction, must remain a site of struggle for essentially human values. And in practice, it maintains a central social element that not only tolerates, but produces, resistance and contestation.Footnote 101 The effect of this is that it is unlikely that we will ever see it fully converted into an automated algorithmic system. Yet it is likely that it will be re-engineered into new configurations of man and machine, both intentionally and through emergent processes, which will look very different through contemporary eyes.