Introduction
The integration of artificial intelligence (AI) into sentencing practices within the criminal justice system is becoming increasingly prevalent. In some countries, however, particularly developing countries such as Sri Lanka, AI remains at an early stage. This study examines how AI affects sentencing, addressing the challenges and opportunities for fairness and justice that the introduction of this new technology presents. The central problem explored is AI’s potential to perpetuate biases, thereby undermining fair-trial principles. The main objectives of the study are to assess AI’s influence on sentencing, to identify the challenges AI poses and to propose a framework for the equitable use of AI in judicial decisions. To meet these objectives and unpack the research problem, the study addresses three key research questions: (1) How does AI influence sentencing decisions? (2) What concerns arise from the use of AI in sentencing? (3) What safeguards can mitigate those concerns and prejudices? Employing a qualitative methodology, including doctrinal analysis and comparative study, the research finds that AI has the potential to enhance sentencing efficiency, though it carries risks of prejudice and bias. The study recommends robust regulatory frameworks, transparency in AI algorithms and judicial oversight to ensure that AI supports justice rather than impedes it, advocating a balanced integration that prioritizes human rights and fairness.

The research methodology is designed to explore the positive and negative impacts of AI on sentencing practices and the associated challenges and opportunities for fairness and justice. The study employs a qualitative approach, combining doctrinal and empirical analyses to address the research problem. The doctrinal analysis reviews existing legal frameworks – statutes and case law on AI in sentencing – to identify current regulations and significant precedents. It also examines policy documents and guidelines from judicial bodies and international organizations to understand the normative expectations of AI in sentencing. A comparative approach is then taken to assess how various jurisdictions, such as the United States (US) and England, integrate AI into their sentencing practices, highlighting legal and ethical variations and identifying best practices and challenges. The study also reviews prevailing case law to understand the problems of traditional (human) sentencing practices. The empirical analysis draws on data concerning existing sentencing practices in Sri Lanka to identify their shortcomings and to explore how AI inputs might be integrated into judicial decision-making processes.
This study faced several limitations affecting its dependability: scarce literature, particularly in the Sri Lankan context; judicial officers’ reluctance to provide opinions, resulting in a lack of commentary on sentencing issues; and the absence of clear sentencing policies or guidelines within Sri Lanka’s traditional legal framework. Moreover, prosecuting officers, defence lawyers and judicial officers could not be included as stakeholders because their knowledge of AI’s role in the criminal justice system is minimal or non-existent, AI tools being still in their infancy in Sri Lanka. As a result, the research had to rely on the available written sources. Despite these obstacles, the study provides critical insights into “Artificial Intelligence and Sentencing Practices: Challenges and Opportunities for Fairness and Justice in the Criminal Justice System in Sri Lanka”.
The research paper is organized as follows: first, it addresses the study’s background, including the global and Sri Lankan contexts of AI in sentencing. This section emphasizes AI as an effective technical tool to enhance sentencing accuracy and ensure justice for all within the criminal justice system. Next, the study reviews existing literature to identify the research gap it aims to address. It then examines the influence of AI on sentencing, analysing current issues from various perspectives – theoretical, practical, global and local. This is followed by a comparative analysis of how other jurisdictions utilize AI in sentencing, focusing on how its integration mitigates the associated issues. Additionally, the study discusses the positive societal impacts of AI in delivering justice. Subsequently, it addresses the concerns and challenges associated with AI use in sentencing. Finally, the paper deliberates on the safeguards and strategies necessary to effectively address the legal and ethical issues and prejudices/biases related to AI in sentencing decisions.
The findings of this study can be applied to any jurisdiction that, like Sri Lanka, is at an early stage of adopting AI in sentencing practices. By examining the legal, ethical and procedural challenges posed by AI integration in judicial systems, the study provides critical insights into how emerging technologies may affect sentencing outcomes. It identifies potential risks, including bias and the undermining of fundamental principles of justice, while proposing a framework for equitable AI application. The research therefore remains relevant for jurisdictions navigating similar technological advancements in criminal justice processes.
Background of the Study and Contextualization of the Research Problem
Background of the Study
Integrating AI into the criminal justice system represents a significant global development, transforming traditional practices, including sentencing. AI-driven tools, such as predictive algorithms and risk assessment models, are increasingly adopted to enhance the efficiency, consistency, impartiality and justice of sentencing decisions in certain jurisdictions. However, while AI offers numerous opportunities, its use raises concerns about fairness and the possible reinforcement of biases inherent in data-driven decision-making processes. This study is particularly relevant in the context of Sri Lanka, where the legal system is gradually undergoing digital transformation (still in the infancy stage) amid challenges in ensuring that technological advancements align with constitutional principles of fair trial and equality before the law, as well as issues in traditional sentencing practices.
Global Context of AI in Sentencing
Globally, the use of AI in sentencing is grounded in its capacity to deliver data-driven insights that can potentially reduce human error and inconsistency in sentencing decisions. Noteworthy applications include AI-driven risk assessments used to forecast recidivism, as seen in the US with the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, introduced in the late 1990s by Northpointe but later criticized for perpetuating racial biases (Angwin et al. 2016). Similar concerns have been raised about AI applications in other jurisdictions, where biases in historical data have produced inequitable, unfair and discriminatory sentencing outcomes. In the United Kingdom (UK), Durham Constabulary piloted the Harm Assessment Risk Tool (HART), popularly known as the Durham HART Model, an AI system designed to predict the likelihood of individuals reoffending. This system, too, has been criticized for reinforcing biases, mostly against marginalized communities: the data used by HART included historical arrest records that disproportionately targeted persons from lower socio-economic backgrounds and minority groups. As a result, the model’s predictions were skewed, producing biased and discriminatory outcomes that reinforced prevailing inequalities in the criminal justice system (Oswald et al. 2018). In Canada, AI tools have been introduced for use in parole decisions; these applications have likewise been criticized for prejudiced outcomes. One example involves AI systems that rely on historical data from parole board decisions. Such data frequently replicate implicit biases against Indigenous and minority offenders, leading to discriminatory parole outcomes. Studies have shown that Indigenous offenders are often assessed as higher risk than their non-Indigenous counterparts, resulting in lower rates of parole and longer times served, despite comparable behaviour and rehabilitation efforts while imprisoned (Hannah-Moffat 2013). These biased outcomes underscore the need for caution in adopting AI in parole decision-making and for proper safeguards against systemic bias. They also highlight a critical issue: the extent to which AI can uphold the principles of impartiality and fairness when trained on data that reflect past prejudices and systemic biases within the criminal justice system (Richardson, Schultz, and Crawford 2019).
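How historical bias propagates into algorithmic risk scores can be made concrete with a short sketch. The following Python fragment is purely illustrative – it is not the code of COMPAS, HART or any deployed system – and uses synthetic data in which one hypothetical group is over-represented in recorded reoffending simply because it is over-policed:

```python
# Minimal illustration (synthetic data) of training-data bias propagation:
# group "B" is assumed to be over-policed, so identical behaviour is more
# likely to be recorded as "reoffended" in the historical labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (hypothetical)
behaviour = rng.normal(0, 1, n)      # underlying risk factor, same for both groups

# Biased historical labels: the 0.8 * group term mimics over-policing of group B.
p_label = 1 / (1 + np.exp(-(behaviour + 0.8 * group)))
label = rng.random(n) < p_label

X = np.column_stack([behaviour, group])   # the model can "see" group membership
model = LogisticRegression().fit(X, label)
scores = model.predict_proba(X)[:, 1]

# Same behaviour distribution, different predicted risk: an artefact of the data.
for g in (0, 1):
    print(f"group {'AB'[g]}: mean predicted risk = {scores[group == g].mean():.2f}")
```

Although both groups’ underlying behaviour is drawn from the same distribution, the model assigns the over-policed group systematically higher risk scores – precisely the pattern described in the COMPAS and HART critiques.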
The theoretical underpinning of justice in sentencing is primarily rooted in the idea that like cases should be treated alike. AI, with its reliance on algorithms, promises greater consistency by minimizing subjective human judgment. However, researchers argue that algorithms, often viewed as neutral, can embed the biases of their creators and of the data on which they are trained (Eaglin 2017). For instance, studies have shown that risk assessment tools can disproportionately affect minority groups, raising ethical and legal concerns about the compatibility of such technologies with fundamental human rights norms (Oswald et al. 2018). These global insights provide a crucial backdrop for examining the implications of AI in sentencing in a jurisdiction, such as Sri Lanka, where its application is a novel experience.
In the legal domain, the case of State v. Loomis (fn. 1) is a pivotal example. The sentencing court in Wisconsin had relied on the COMPAS risk assessment tool, and the defendant challenged its use before the Wisconsin Supreme Court, arguing that the tool was biased against him on the basis of gender and lacked transparency in its methodology. The court upheld the use of COMPAS but acknowledged that the tool’s lack of transparency and potential for bias could raise due-process concerns. The case exemplifies how AI systems can perpetuate biases that affect legal outcomes. A further illustration of AI prejudice is the use of facial recognition technology by law enforcement agencies, which has been shown to have higher error rates for people of colour and women. In a report by the National Institute of Standards and Technology, facial recognition algorithms were found to exhibit substantial biases, particularly in false-positive rates, which were highest among African American and Asian populations (Grother, Ngan, and Hanaoka 2019). These biases have led to misidentifications and wrongful detentions, highlighting the real-world consequences of AI bias in criminal justice. Such findings necessitate rigorous oversight, transparent algorithmic design and ongoing evaluation to ensure that AI tools do not undermine judicial independence or violate the rights to a fair trial and non-discrimination. The prospects offered by AI in sentencing should nevertheless not be disregarded: appropriately and accurately implemented, AI may serve as a valuable technical tool to enhance judicial efficiency, reduce case backlogs and provide consistent sentencing outcomes.
Sri Lankan Context of AI in Sentencing
In Sri Lanka, the criminal justice system operates within a complex socio-legal environment, committed to upholding the rule of law yet grappling with systemic inadequacies and a backlog of cases. For instance, the Grave Crime Abstract for the Whole Island from 01.01.2022 to 31.12.2022, published by the Department of Police Sri Lanka (2023), shows that 37,120 cases were recorded as true by the police in that year. Of these, only 6,873 (18.52%) were disposed of, while 30,247 (81.48%) remained pending before the Magistrates’ Courts or the High Court. Against a population of 22.18 million, the 37,120 recorded grave crimes yield a population-to-crimes ratio of approximately 598:1, while the ratio of population to cases disposed (6,873) is approximately 3,227:1; a simple recomputation of these figures appears below. The potential introduction of AI in sentencing may therefore be viewed as a means of addressing these challenges by enhancing the speed and consistency of judicial decisions. However, this prospect is met with caution, as the existing legal framework and judicial culture may not be fully prepared to mitigate the risks connected with AI deployment.
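For transparency, the percentages and ratios quoted above can be recomputed directly from the reported figures. The short calculation below assumes a mid-2022 population of 22.18 million:

```python
# Recomputing the 2022 caseload figures reported by the Department of Police
# Sri Lanka (population figure of 22.18 million assumed).
population = 22_180_000
recorded = 37_120
disposed = 6_873
pending = recorded - disposed                          # 30,247

print(f"disposed: {disposed / recorded:.2%}")          # ~18.52%
print(f"pending:  {pending / recorded:.2%}")           # ~81.48%
print(f"population : recorded = {population / recorded:,.0f}:1")   # ~598:1
print(f"population : disposed = {population / disposed:,.0f}:1")   # ~3,227:1
```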
Sri Lanka’s judiciary has traditionally relied on a discretionary approach to sentencing, as Niriella (2012b) observes in “The Most Appropriate Degree of Punishment: Underline Policies in Imposing Punishment in Criminal Cases with Special Reference to Sri Lanka”. Judges weigh various factors, as detailed in Niriella’s (2012a) “On Punishment, A Critical Review of the Punishment in Some Aspects of Criminal Law in Sri Lanka”, including the nature of the offence, the offender’s background and societal values (Perera 2020). The introduction of AI could alter this dynamic, shifting some elements of discretion from human judges to algorithmic models. This transfer raises critical questions about accountability and the transparency of AI decision-making processes. In particular, concerns have been voiced about the opaque nature of AI algorithms, which could hinder defendants’ ability to challenge their sentencing outcomes effectively on appeal, thus undermining the right to a fair trial (Seneviratne 2021).
Moreover, the legal system of Sri Lanka is bound by constitutional guarantees that protect against discrimination and uphold equality before the law, as elucidated in Article 12 of the Constitution. The integration of AI into sentencing practices must therefore be scrutinized for its compliance with these principles – equality before the law and the protection of the rights of the parties, especially the defendant, through a fair trial. There is a risk that AI tools, if improperly designed or implemented, could perpetuate prejudices present in the data, such as socio-economic disparities, ethnic biases or other systemic inequities, thereby conflicting with the constitutional mandate for equal treatment and a fair trial (Fernando 2023). The challenge lies in ensuring that AI applications are not only technologically robust but also legally and ethically sound, preserving the integrity, reliability and accuracy of judicial discretion while promoting consistent and unbiased sentencing.
One challenge in adopting AI for sentencing in Sri Lanka lies in balancing technological advancement with the foundational principles of justice and fairness. AI’s susceptibility to bias is not a theoretical concern but a documented issue, as seen in other jurisdictions. For example, a journal article titled “Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms” examines documented cases in which AI systems exhibited bias, leading to discriminatory outcomes against certain groups. That study discusses instances where algorithms used in predictive policing unfairly targeted racial minorities, resulting in disproportionate law enforcement actions against those communities (Raji and Buolamwini 2019). The findings underscore the urgent need for rigorous bias detection and mitigation strategies in AI deployments.
For Sri Lanka, the way forward necessitates not only the adoption of technology but also the development of a comprehensive legal, ethical and policy framework to regulate AI’s role within the judiciary. This framework should encompass guidelines on the deployment of AI, standards for data quality, mechanisms to ensure transparency and accountability, and safeguards against bias and error.
The impact of AI on sentencing practices in Sri Lanka is a multifaceted issue that intersects with broader questions of justice, fairness and the rule of law. As the country explores the integration of AI into its criminal justice system, it must carefully consider both the challenges and the opportunities, to ensure that technological innovations do not compromise the fundamental rights of individuals or the principles of a fair and just legal process. This study therefore explores AI’s potential to perpetuate biases and undermine fair-trial principles, while discussing the opportunities it offers for fairness and justice.
Literature Review and Research Gap
AI has increasingly influenced various aspects of the criminal justice system, with sentencing practices being a critical focus. This literature review examines the current knowledge regarding AI’s impact on sentencing, both globally and within the Sri Lankan context. The aim is to identify key areas of existing research and highlight gaps, emphasizing the challenges and opportunities AI poses for fairness and justice. The review encompasses approximately ten selected works relevant to AI’s impact on sentencing, particularly concerning fairness, transparency and bias, illustrating the gap between the current study and the existing literature.
Global Perspectives on AI and Sentencing
A significant proportion of the literature from jurisdictions such as the US and the UK examines the ethical implications of AI in sentencing, focusing on the risks of algorithmic bias and the potential for perpetuating systemic inequalities. Studies such as Angwin et al. (2016) have critically assessed the use of risk assessment tools in sentencing, revealing racial biases embedded within these algorithms that disadvantage minority groups. Similarly, Richardson et al. (2019), among others, highlight the lack of transparency in AI decision-making processes, which undermines the legal principles of fairness and accountability. However, these studies primarily concentrate on developed countries with advanced AI infrastructure, leaving a notable gap regarding the applicability of such findings to developing nations like Sri Lanka.
AI’s Role in Sentencing in Developing Countries
In the context of developing countries, scholarly literature frequently highlights the potential benefits of integrating AI into the criminal justice system, though empirical studies directly addressing sentencing practices remain scarce. Osoba and Welser (2017) identify the transformative potential of AI in streamlining judicial processes, particularly in resource-constrained developing nations, where inefficiencies and delays are prevalent. However, they caution against the wholesale adoption of algorithms designed for foreign jurisdictions without considering the unique legal, cultural and societal frameworks of developing countries. Such algorithms, they argue, often reflect the socio-legal contexts of their origin, which may be incompatible with local practices and values (Osoba and Welser 2017).
This observation underscores the critical need for localized research to examine the implications of AI on judicial decision-making, particularly sentencing practices, in specific jurisdictions. In Sri Lanka, for instance, the interplay of distinct legal, social and economic factors presents unique challenges that cannot be addressed adequately by generalized frameworks borrowed from Western contexts. The absence of empirical studies tailored to these specificities creates a significant knowledge gap, limiting the ability of policymakers and legal practitioners to assess the fairness, ethical soundness and practical feasibility of incorporating AI into sentencing frameworks within such jurisdictions.
AI and Sentencing in Sri Lanka: Current State of Research
The literature specifically addressing AI’s impact on sentencing within Sri Lanka remains sparse, with most studies providing a broader overview of AI’s role in the criminal justice system without delving into sentencing specifics. Perera (2023) explores AI applications in Sri Lankan law enforcement but stops short of addressing sentencing, illustrating a gap in the literature regarding how AI tools might influence judicial decisions at the sentencing stage. This gap is further compounded by the absence of empirical data and of studies focusing on the practical implementation of AI in the Sri Lankan judicial context.
Challenges of AI in Sentencing: Bias and Fairness
A recurring theme in the literature is the challenge of bias within AI systems. Studies such as Barabas et al. (2018) discuss the risks of algorithmic biases arising from historical data that may reflect systemic discrimination. The focus here is on the need for rigorous oversight and auditing of AI systems to ensure fair outcomes, as the sketch below illustrates. However, these studies are predominantly centred on Western judicial systems with established regulatory frameworks, unlike Sri Lanka, where the regulatory environment for AI remains underdeveloped.
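The kind of audit these studies envisage can be sketched briefly. The fragment below is a hypothetical example rather than a tool used by any court: it compares false-positive rates – defendants flagged as high-risk who did not in fact reoffend – across groups, one of the standard disparity checks in the algorithmic fairness literature:

```python
# Hypothetical disparity audit: false-positive rate of a "high-risk" flag,
# computed separately for each group in the case records.
from collections import defaultdict

def false_positive_rate(records, group_key):
    """FPR per group: share of non-reoffenders who were flagged high-risk."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for r in records:
        if not r["reoffended"]:
            negatives[r[group_key]] += 1
            if r["flagged_high_risk"]:
                flagged[r[group_key]] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

# Invented sample records, for illustration only.
records = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": False},
]
print(false_positive_rate(records, "group"))   # {'A': 0.5, 'B': 1.0}
```

A materially higher false-positive rate for one group, as in this toy output, is exactly the disparity an oversight body would need to investigate before such a tool could be relied on in sentencing.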
Opportunities for Enhancing Fairness and Efficiency
On the positive side, several studies argue for the potential of AI to enhance efficiency and reduce human error in sentencing. Binns (2018) posits that AI could facilitate more consistent and objective sentencing decisions, thereby reducing disparities. Nonetheless, this optimistic view is tempered by the acknowledgement that the quality of AI outputs depends heavily on the data used, which in many developing countries, including Sri Lanka, are often incomplete or outdated. This further highlights a gap in localized research evaluating the readiness and quality of data for AI applications in sentencing within Sri Lanka.
Gap in the Literature
The primary gap identified across the reviewed literature is the lack of comprehensive studies on AI’s impact on sentencing practices in Sri Lanka. While the global literature covers the implications of AI for sentencing, it often fails to address how those insights translate to the Sri Lankan context, which differs significantly in legal culture, data availability and technological infrastructure. This research seeks to bridge that gap by assessing the possible challenges and impacts of AI on sentencing within the Sri Lankan legal system. The global literature provides valuable insights into AI in sentencing practices; still, a significant gap remains concerning the application and effects of AI within the Sri Lankan criminal justice system. Addressing this gap is vital to leveraging AI’s potential to enhance fairness and justice in sentencing while mitigating the risks of bias and unfair outcomes. This literature review therefore reflects a critical need for further research into the specific impacts and considerations of AI-driven sentencing in Sri Lanka, addressing the unique challenges faced within this jurisdiction.
Influence of AI on Sentencing Decisions of the Court
To comprehensively address one of the prime research questions of this study – “How does AI influence sentencing decisions?” – it is essential to explore the role of AI in judicial sentencing and to examine the substantive issues inherent in existing sentencing practices. This requires a dual approach integrating theoretical and practical perspectives, providing a nuanced understanding of AI’s impact on judicial decision-making. Considering the general context of AI’s application in sentencing and its specific implications within the Sri Lankan legal framework, the analysis highlights potential benefits, such as consistency and efficiency, while addressing critical concerns, including bias, fairness and ethical challenges in judicial processes.
Key Issues in Sentencing in Global Context: Theoretical Perspective
Sentencing, as a critical component of the criminal justice system, exemplifies the convergence of legal theory, judicial discretion and societal values. Theoretical perspectives on sentencing provide the foundational principles that guide judges in determining appropriate punishments, reflecting broader objectives such as retribution, deterrence, rehabilitation and incapacitation (R. v. Sargeant (fn. 2)). A key theoretical challenge in sentencing lies in achieving a balance among these sometimes competing objectives, where the proportionality of the sentence to the crime committed remains paramount. Decided cases demonstrate how courts have struggled to balance the rehabilitative potential of juveniles against the retributive and deterrent goals of sentencing. Two US Supreme Court cases, Roper v. Simmons (fn. 3) and Miller v. Alabama (fn. 4), reveal the tension among the theories of punishment – retribution, deterrence and rehabilitation – as applied to juvenile offenders.

In Roper v. Simmons, the court ruled that imposing the death penalty on juveniles violated the Eighth and Fourteenth Amendments, which prohibit cruel and unusual punishments and guarantee equal protection under the law, respectively. The majority, led by Justice Kennedy, leaned heavily on rehabilitation, emphasizing that juveniles have a greater capacity for reform and moral growth than adults and thus should not be subject to the ultimate punishment of death. The Court also critiqued retribution and deterrence, reasoning that retributive goals are less justified in cases involving juveniles, whose culpability is diminished by their developmental immaturity, and that the deterrent effect is questionable, as juveniles are less likely to be influenced by the threat of severe punishment because of their impulsive behaviour. Dissenting justices, including Justice Scalia, argued from a retributive perspective, maintaining that certain crimes committed by juveniles warranted the harshest penalties available.

In Miller v. Alabama, the Court ruled that mandatory life without parole for juveniles violated the Eighth Amendment. Justice Kagan, writing for the majority, highlighted the importance of rehabilitation, stressing that juveniles are inherently more capable of change than adults and thus should not face irrevocable punishments. The majority found retribution and deterrence less compelling when applied to juvenile offenders, given their reduced moral culpability and limited understanding of long-term consequences. Justice Thomas, dissenting, endorsed a retributive approach, asserting that the severity of the crime should dictate the punishment, regardless of the offender’s age.
R v. Raji (fn. 5), a UK case, likewise demonstrates the tension between retribution and rehabilitation in sentencing. The accused was convicted of multiple drug-related offences, which would typically warrant a substantial custodial sentence under UK guidelines, in line with retributive principles. However, the trial judge imposed a more lenient sentence in light of the defendant’s efforts at rehabilitation and his role as the primary caregiver for his children. The prosecution appealed, arguing that the sentence was unduly lenient and undermined retributive justice. The Court of Appeal upheld the sentence, emphasizing the importance of judicial discretion and noting that sentencing guidelines should be applied flexibly when mitigating factors are present. The ruling favoured rehabilitation, acknowledging the offender’s potential for reform. Similarly, R v. Dudley and Stephens (fn. 6) encapsulates the conflict between retribution, deterrence and rehabilitation. In this case, two sailors, stranded without food, killed and ate a young cabin boy to survive. Charged with murder, they were refused the defence of necessity and initially sentenced to death, reflecting retributive and deterrent theories. However, public sentiment and the sailors’ extreme circumstances led to the sentence being commuted to six months’ imprisonment, showing a shift towards rehabilitation.
Moreover, sentencing theories grapple with the tension between individualized justice and the need for consistency/uniformity. Judicial discretion allows for the tailoring of sentences to the specific circumstances of each case, considering factors such as the offender’s background, motives and potential for rehabilitation. Some illustrations now follow.
In United States v. Booker (fn. 7), a landmark case, the US Supreme Court addressed the constitutionality of the federal sentencing guidelines, highlighting the tension between judicial discretion and sentencing standardization. The Court ruled that the mandatory guidelines, which required judges to impose specific sentences based on set criteria, violated the Sixth Amendment’s guarantee of a jury trial. It reasoned that judicial discretion is necessary for achieving individualized justice: by allowing judges to consider the particular circumstances of each case, such as the offender’s background and the specifics of the crime, the Court prioritized individualized sentencing over the rigidity of uniform guidelines. The ruling nevertheless introduced tension, as it acknowledged the need for consistency to prevent wide sentencing disparities.
In Gall v. United States (fn. 8), the Supreme Court upheld a District Court’s decision to impose a sentence significantly below the guideline range, focusing on the offender’s efforts at rehabilitation. The Court reaffirmed the importance of individualized justice, allowing judges to deviate from guidelines when circumstances, such as demonstrated reform, warranted a lighter sentence. However, the court also emphasized the need to avoid sentencing disparities by requiring judges to provide substantial justification when departing from the guidelines.
In R v. Turner (fn. 9), the UK Court of Appeal underscored the balance between individualized sentencing and consistency. The case involved a defendant who was sentenced after pleading guilty, and the judge expressed doubt about the defendant’s prospects for rehabilitation. The appellate court upheld the sentence, emphasizing that while each case should be judged on its individual merits, there must be consistency in sentencing to ensure fairness and public confidence in the justice system.
However, this discretion can also lead to disparities, raising concerns about fairness and equality before the law. Sentencing guidelines and mandatory minimums have been introduced in many jurisdictions to mitigate these disparities, but they also restrict judicial autonomy and may lead to disproportionately harsh outcomes in certain cases. Another critical theoretical issue is the influence of implicit bias on sentencing decisions. Judges, like all individuals, are susceptible to unconscious biases that can affect their impartiality, particularly in cases involving marginalized groups.
Sentencing Objectives and Philosophies
As Niriella (2012b) says in her book on sentencing, within the criminal justice system, sentencing encompasses various objectives and philosophies, notably retribution, deterrence, rehabilitation and incapacitation. These principles reflect the legal system’s broader aims, each providing a distinct rationale for punishment. Retribution focuses on ensuring that the punishment fits the gravity of the offence, serving as a moral response to wrongdoing. Deterrence aims to prevent future crimes by making an example of the offender, thereby dissuading others from similar conduct. Rehabilitation seeks to reform the offender, addressing underlying issues to facilitate reintegration into society. Incapacitation aims to protect the public by removing the offender’s ability to commit further crimes (Tonry 2018).
A significant theoretical challenge in sentencing is balancing these often-competing objectives. The principle of proportionality, which ensures that the severity of the punishment corresponds to the seriousness of the crime and the offender’s culpability, remains central. Achieving proportionality necessitates careful consideration of the interplay between retribution, deterrence, rehabilitation and incapacitation, ensuring that the sentence serves both justice and public safety effectively (Ashworth 2013).
This balance is crucial because while retribution and deterrence may emphasize punitive aspects, rehabilitation and incapacitation focus on the offender’s potential for change and public protection. Addressing these diverse objectives requires a nuanced approach to sentencing that aligns with principles of fairness and justice, avoiding excessive punishment while striving to meet the broader goals of the criminal justice system.
Judicial Discretion v. Standardization
In sentencing, the tension between judicial discretion and standardization reflects a critical theoretical perspective. The principle of judicial discretion allows judges to tailor sentences to the specific circumstances of each case, fostering individualized justice. This approach enables the court to consider unique factors such as the defendant’s background, motives and impact on the victim, which can lead to more nuanced and fair outcomes (Ashworth 2013). However, it also introduces variability, which can undermine consistency and predictability in sentencing.
Conversely, the push for standardization seeks to ensure uniformity and fairness across cases, promoting proportionality and deterrence. Standardized sentencing guidelines aim to reduce disparities and enhance the predictability of outcomes, aligning with the goals of deterrence, rehabilitation and incapacitation. The challenge lies in balancing these objectives with the need for individualized justice. For instance, a rigid adherence to standardized guidelines may overlook the nuances of specific cases, potentially resulting in unjust outcomes that fail to address the individual circumstances adequately.
In the UK, R v. Sentencing Advisory Panel (fn. 10) illustrates the balancing act between discretion and standardization. The Court of Appeal’s decision underscored the importance of maintaining consistency while allowing for judicial discretion to ensure justice is served on a case-by-case basis. Another notable case, R v. Lloyd (fn. 11), highlights the impact of standardized guidelines on achieving proportional sentences while still respecting the nuances of individual cases.
Key Issues in Sentencing in Global Context: Practical Perspectives
In contemporary legal practice, the effective administration of justice necessitates addressing pivotal challenges in sentencing, including the formulation of guidelines, judicial workload, resource constraints, and the influence of public opinion and media. The creation of sentencing guidelines is indispensable for promoting consistency, transparency and fairness in judicial determinations. However, despite their intent to standardize sentencing practices, their implementation is fraught with challenges. Variances in judicial interpretation, the complexity of individual cases and differing socio-legal contexts often hinder the realization of uniformity.
Judicial workload and resource limitations further exacerbate these challenges. Overburdened courts, particularly in developing countries such as Sri Lanka, may struggle to allocate adequate time and resources to assess each case comprehensively. This compromises the equitable application of sentencing principles and raises concerns about procedural justice. Effective management of judicial caseloads is therefore essential to ensure decisions are both fair and thorough.
Additionally, public opinion and media exert significant influence over sentencing practices. Media coverage, often sensationalist, and prevailing public sentiment can create undue pressure on the judiciary, potentially undermining impartiality and the rule of law. While the judiciary must remain attuned to societal expectations, it is essential to resist external influences that conflict with fundamental legal principles.
These multifaceted issues, particularly pronounced in developing legal systems, underscore the necessity of robust judicial reforms. This discussion explores these challenges in greater depth, offering insights into the balance required between judicial independence, societal expectations and resource pragmatism to uphold justice.
Disparity and Non-conformity with Guidelines Due to Wide Discretion
One major issue is the breadth of discretion exercised by judges, which can lead to inconsistent application of guidelines. For instance, in the UK, the Court of Appeal in R v. Williams (fn. 12) highlighted the variability in judicial interpretation of sentencing guidelines, where the appellate court had to intervene to standardize sentencing practices and ensure adherence to the guidelines. Another UK case, R v. Levey (fn. 13), illustrates the Court of Appeal’s focus on sentencing consistency. The court overturned a lower court’s decision, holding that the sentence imposed for a burglary conviction was excessively harsh compared to the guidelines. The ruling highlighted the importance of adhering to established sentencing norms, particularly when judicial discretion risks disproportionate outcomes.
Similarly, in the US, the Supreme Court’s decision in United States v. Booker (fn. 14) underscored the difficulties of enforcing sentencing guidelines uniformly. By rendering the guidelines advisory rather than mandatory, the ruling expanded judicial discretion, which in turn produced disparities in sentencing. It thus illustrated the challenge of balancing judicial discretion with the need for consistency through adherence to sentencing guidelines.
Judicial Workload and Resource Constraints: Impact on the Quality of Sentencing Decisions
The judicial workload and resource constraints significantly impact the quality of sentencing decisions and the effectiveness of the justice system.
Courts worldwide are burdened by heavy caseloads, which undermine both the efficiency and quality of judicial processes, including sentencing. Sentencing, intended to be a nuanced decision based on factors such as the crime’s severity, the offender’s circumstances and societal impact, risks becoming mechanical when judges face time pressures. Overburdened judges may be unable to give each case the necessary attention, leading to less individualized or equitable outcomes. The pressure to clear dockets often results in increased reliance on sentencing guidelines, sacrificing case-specific discretion in favour of mechanical uniformity. Fatigue and cognitive overload further affect judicial decision-making. Studies (e.g. Danziger, Levav, and Avnaim-Pesso 2011) have shown that judges under stress and fatigue are prone to cognitive biases, increasing errors and reliance on shortcuts rather than thorough deliberation. The sheer volume of cases and the attendant time constraints impair the quality of sentencing decisions, making the system more reactive than reflective.
Resource constraints, especially difficulties in accessing information, also affect the quality of sentencing, particularly in courts of first instance and in jurisdictions with limited funding and infrastructure, such as courts located in rural areas. Adequate resources are necessary for judges to have access to comprehensive pre-sentencing reports, psychological evaluations or impact statements that provide a broader context for sentencing decisions. Limited access to such resources may force judges to make decisions based on incomplete or superficial information, diminishing the justice of the sentence. Without such technical support, courts may lack the insights needed to issue fair and informed judgments, possibly leading to overly harsh or unduly lenient sentences. In the US, Turner v. Rogers (fn. 15) addressed whether a defendant facing civil contempt for unpaid child support had the right to legal representation. The Supreme Court ruled that due process does not require automatic representation but emphasized procedural safeguards to protect indigent defendants’ rights. The decision illustrated the strain on courts to balance limited resources while safeguarding defendants’ rights. In the UK, R (Public Law Project) v. Lord Chancellor (fn. 16) tackled legal aid constraints. The Supreme Court ruled that the Lord Chancellor’s introduction of a residence test for legal aid eligibility was unlawful under the Legal Aid, Sentencing, and Punishment of Offenders Act 2012.
Public Opinion and Media Influence as External Pressures
Public opinion reflects society’s collective views on issues like crime and punishment. In democratic societies, the judiciary often faces pressure to align with societal values, especially in high-profile cases involving violent crimes, sexual offences or crimes against children. Public sentiment may push for harsher punishments, as seen in cases involving terrorism, where the demand for severe sentences reflects the state’s stance on law and order. Judges, though bound by law, may be subconsciously influenced by these pressures. In jurisdictions with elected judges, this influence can be more pronounced. The Central Park Five case (People v. McCray, Richardson, Salaam, Santana, and Wise (fn. 17)) is a well-known example where intense public and media pressure led to the wrongful conviction and harsh sentencing of five teenagers for a crime they did not commit. When sentencing reflects public opinion over legal principles, it risks undermining justice’s impartiality, which must be based on evidence and law, not popular sentiment.
The media play a crucial role in shaping public opinion by choosing which cases to highlight and how to frame them. Sensationalist reporting can fuel public outrage, turning certain cases into national or global causes. This media-driven narrative can lead to public demands for harsher penalties, placing judges under pressure to issue sentences in line with public sentiment to avoid being seen as lenient. The digital age has amplified this, as news spreads rapidly across platforms, with social media intensifying public discussions and calls for justice, often pushing for punitive outcomes. Judges, under heightened scrutiny, may feel compelled to align with the prevailing narrative. For instance, during the trial of Brock Turner, a Stanford University student convicted of sexual assault, the media’s coverage led to intense public outrage. His relatively lenient six-month sentence resulted in widespread criticism of the judge, who was ultimately recalled from the bench due to perceptions that he had ignored societal expectations for harsher punishment (Friedman 2019; Lind 2017).
The influence of public opinion and the media on sentencing outcomes raises significant ethical and practical concerns. First, it undermines judicial independence, which requires that judges be free from external influences. A legal system that succumbs to public or media pressure risks compromising the rule of law and the fair administration of justice. It also creates inconsistencies in sentencing, as judges may impose harsher or more lenient sentences in high-profile cases than in less publicized ones. Moreover, external pressures can produce sentencing disparities, with disproportionate sentences in cases receiving extensive media attention compared to similar, overlooked cases. This undermines the principle of equality before the law, which mandates that like cases be treated alike. Practical issues also arise when sentencing is heavily swayed by public opinion: prison overcrowding may result from excessively punitive sentences designed to appease the public, and rehabilitation efforts may be compromised if sentences prioritize public sentiment over addressing the root causes of criminal behaviour.
In the US, Sheppard v. Maxwell (fn. 18) is a landmark case demonstrating how media publicity can compromise a fair trial. The accused was convicted of murder amidst a media frenzy, which the Supreme Court found violated his Sixth Amendment rights. The Court overturned the conviction, emphasizing the necessity of protecting trials from media bias.
In the UK, R v. Taylor and Taylor (fn. 19) illustrates how misleading media coverage can affect justice. Two sisters convicted of murder had their convictions quashed after misleading photographs confused the jury. The Court of Appeal underscored that media interference could lead to unsafe convictions.
Additionally, higher courts have overturned lower court decisions influenced by public opinion or media errors. In Rideau v. Louisiana (fn. 20), the US Supreme Court reversed Wilbert Rideau’s conviction due to his televised confession creating public prejudice, ruling that the trial was unfair. This case shows how media-driven public opinion can lead to miscarriages of justice.
In the UK, R v. West (fn. 21), involving serial killers Fred and Rosemary West, sparked debates about media influence. While the Court of Appeal upheld the convictions, it highlighted ongoing concerns about media scrutiny affecting judicial decisions, emphasizing the need for vigilance against potential biases.
Sentencing Issues in the Sri Lankan Legal Framework
The sentencing process in Sri Lanka, as in many developing legal systems, represents a cornerstone of justice administration. It marks the final stage of a criminal trial, wherein the court imposes penalties on convicted individuals. However, this system is not without significant challenges. One pervasive issue is the excessive delay in concluding cases, which undermines the efficiency and credibility of the justice system. Furthermore, wide discrepancies in sentencing outcomes for similar offences are evident, largely attributable to judicial discretion and the absence of comprehensive sentencing policies or guidelines to channel this discretion effectively.
The flexibility afforded by judicial discretion also leads to deviations from mandatory sentencing provisions, often resulting in inconsistencies that can weaken the public’s confidence in the legal process. Compounding these issues is the unintentional discrimination against vulnerable groups, such as women, which stems from entrenched cultural and societal biases. These biases can subtly influence sentencing outcomes, despite judicial efforts to uphold impartiality.
These systemic issues raise critical concerns about the fairness and consistency of sentencing practices and highlight the need to balance judicial autonomy with statutory mandates. Ensuring justice in sentencing requires reforms that promote clarity, uniformity and sensitivity to the rights of marginalized groups. Addressing these challenges is vital not only for preserving the integrity of the legal system but also for fostering public trust in its ability to deliver equitable justice.
Delay in Disposal of the Cases
For 2017, all rape cases heard by the Court of Appeal in which rape was the sole charge (only 13 such cases) were studied (see Table 1). The data revealed significant delays between the date of the offence, the filing of the case in the High Court and the eventual delivery of the High Court’s decision; in some instances a decade elapsed before a verdict was reached. Disparities in sentencing among judges in different courts were also observed, with punishments ranging from 2 to 18 years, and significant variations in the fines imposed were evident across cases. Discrepancies were likewise found in victim compensation, which ranged from a minimum of Rs 5,000 to a maximum of Rs 500,000; in a few instances, compensation was not awarded at all, despite being a mandatory legal requirement in rape cases under the existing legal framework.
Table 1. Rape cases concluded by the Court of Appeal in 2017
[Table not reproduced.]
RI, rigorous imprisonment.
For 2018, a review of all rape cases heard by the Court of Appeal in which rape was the sole charge (only seven cases; see Table 2) revealed significant delays between the offence, the filing of the case in the High Court and the final judgment, with some cases delayed by more than a decade. Disparities in sentencing across courts were noted, with punishments ranging from 2 to 20 years. Variations in fines were also evident, with no fine imposed in some cases. Additionally, discrepancies in victim compensation were observed, ranging from Rs 10,000 to Rs 500,000.
Table 2. Rape cases concluded by the Court of Appeal in 2018
[Table not reproduced.]
RI, rigorous imprisonment.
For 2019, an analysis of all five rape cases heard by the Court of Appeal in which rape was the sole charge (see Table 3) revealed substantial delays between the offence date, the filing of the case in the High Court and the final decision; in certain instances, nearly two decades elapsed before a verdict was reached. Sentencing disparities were noted among judges, with punishments varying from 1 year and 3 months to 18 years, and the fines imposed also varied significantly across cases. Victim compensation ranged from Rs 50,000 to Rs 150,000, with some cases lacking compensation altogether, despite its mandatory nature under the legal framework for rape cases.
Table 3. Rape cases concluded by the Court of Appeal in 2019
[Table not reproduced.]
RI, rigorous imprisonment.
Delays in delivering justice – that is, delays in concluding cases – represent a serious issue in the administration of justice in Sri Lanka, with some trials extending over an excessive period before reaching final judgment; in certain instances, cases take up to a decade to resolve. Tables 1, 2 and 3 clearly illustrate and substantiate this assertion, and a sketch of how such delay and disparity figures can be computed from case records follows below. While various explanations for these delays, including workloads driven by an insufficient judicial cadre, have been discussed publicly, comprehensive scientific research on the question remains incomplete.
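To indicate how the delay and disparity figures reported in Tables 1, 2 and 3 can be derived, the sketch below computes them from case records. The column names and sample rows are invented for illustration; they are not the study’s actual dataset:

```python
# Hypothetical case records mirroring the structure behind Tables 1-3:
# offence date, judgment date, custodial sentence and compensation awarded.
import pandas as pd

cases = pd.DataFrame({
    "offence_date":  pd.to_datetime(["2005-03-01", "2007-08-15", "2001-01-10"]),
    "judgment_date": pd.to_datetime(["2017-05-20", "2017-11-02", "2017-02-28"]),
    "sentence_years": [2, 18, 7],
    "compensation_rs": [5_000, 500_000, 0],   # 0 = no compensation awarded
})

# Delay from offence to final judgment, in years.
cases["delay_years"] = (cases["judgment_date"] - cases["offence_date"]).dt.days / 365.25

print(cases["delay_years"].round(1).tolist())            # e.g. [12.2, 10.2, 16.1]
print("sentence range:", cases["sentence_years"].min(),
      "to", cases["sentence_years"].max(), "years")
print("cases without compensation:", int((cases["compensation_rs"] == 0).sum()))
```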
The Overburdened Workload
The excessive workload borne by judges significantly undermines the quality of justice delivery. In Sri Lanka, the magistrates’ courts and high courts exercise original jurisdiction over criminal matters, functioning as courts of first instance. Indictable offences, which are more severe in nature, fall under the purview of the High Court, whereas summary offences, typically less severe, are adjudicated by the Magistrates’ Court.
The allocation of jurisdiction is primarily governed by the First Schedule of the Criminal Procedure Code, which specifies the appropriate court for offences listed in the Penal Code. For crimes falling under other penal statutes, jurisdiction is determined by the provisions outlined in the respective legislation. This bifurcation ensures a structured approach to criminal adjudication, yet the significant caseload in these courts exacerbates delays and compromises the thorough examination of cases, ultimately impacting the equitable administration of justice.
As per data obtained from the Ministry of Justice Sri Lanka (fn. 22), Table 4 presents a summary of pending cases in the High Court and magistrates’ courts from 2015 to 2019, reflecting a marked increase in the number of pending cases over these five years, while only 32 high courts and 82 magistrates’ courts across the country handle them. This situation reveals the overburdened workload that judicial officers have to carry.
Table 4. Summary of cases pending in high courts and magistrates’ courts from 2015 to 2019
[Table not reproduced.]
Wide Discretion
Judicial discretion is a vital element of sentencing in Sri Lanka, allowing judges to consider the specific circumstances of each case. Discretion enables the judiciary to tailor sentences to factors such as the severity of the offence, the offender’s background and any mitigating or aggravating circumstances. However, this discretion often results in inconsistencies in sentencing across similar cases (see the values in Tables 1, 2 and 3), creating challenges in achieving a uniform application of justice.
The lack of structured sentencing guidelines in Sri Lanka is also an issue in criminal justice administration. Section 52 of the Penal Code stipulates the main methods of punishment, namely the death penalty, imprisonment (both rigorous and simple), fine, and forfeiture of property. In The King v. Baronchi (fn. 23) it was stated that section 52 enumerates the punishments to which an offender is liable under the provisions of the Penal Code. In addition to the punishments in section 52, the Code of Criminal Procedure Act sets out methods that may be considered alternatives to incarceration. Although the Penal Code and other statutes provide a range of punishments for various offences, there is no comprehensive framework to guide judges in exercising their discretion – that is, no statute establishing a sentencing policy and guidelines to be followed by the judiciary when determining the most appropriate degree of punishment (Niriella 2012a, 2012b). This has led to disparities in sentencing for similar crimes (see Tables 1, 2 and 3), depending on the judge’s interpretation of the facts. For instance, in The Queen v. Liyanage (fn. 24) the judiciary was criticized for issuing lenient sentences in a serious case of sedition, highlighting the variability of sentencing decisions. Additionally, in Lalith Priyadarshana v. Attorney-General (fn. 25), where the appellant was convicted of a serious assault, the Court of Appeal reduced the sentence imposed by the lower court on the grounds of the offender’s status as a first-time offender, his age and his expression of remorse. This case demonstrates how judicial discretion, although essential for individualized justice, can sometimes lead to unexpected or inconsistent outcomes, raising concerns about equality before the law. Furthermore, Solicitor General v. Alwis (fn. 26), A.G. v. Silva (fn. 27), Gomas v. Leelaratne (fn. 28), R v. Peter (fn. 29), Gunasinghe v. Perera (fn. 30) and E.M. Seneviratne v. The State (fn. 31) illustrate that courts have considered different factors in determining the appropriate degree of punishment. Justice K. T. Chitrasiri in D.C. Mendis v. The Attorney General (fn. 32) addressed the issue of sentencing guidelines and their application in Sri Lankan law. Moreover, cases such as Saman v. Attorney General (fn. 33) highlight the discretion employed by the courts, with the Supreme Court emphasizing the importance of tailoring sentences to the offender’s personal circumstances. Conversely, Ranjith v. State (fn. 34) demonstrates the challenges of standardization: the appellate court overturned a sentence because it deviated from established sentencing norms, stressing the need for consistency.
Balancing the Theories of Punishment
The Sri Lankan courts face difficulties balancing the need for deterrence with considerations of fairness and rehabilitation. Judges may place varying degrees of emphasis on these goals, resulting in different outcomes for cases with similar facts. For instance, offences such as drug trafficking have seen wide variations in sentencing, with some judges opting for harsher penalties to deter future crimes while others focus on the rehabilitative potential of the offender.
Mandatory sentences, where the law prescribes a fixed punishment for certain offences, significantly restrict judicial discretion. In Sri Lanka, mandatory sentences exist for specific crimes such as statutory rape (section 363(e)). The “mandatory minimum custodial sentence” was introduced into Sri Lankan law by the Penal Code (Amendment) Act, No. 22 of 1995, although mandatory sentences as a mode of punishment have been part of the Penal Code since 1883. Under section 296, the sole punishment for the offence of murder is the death sentence. While such sentences are intended to promote uniformity and deter serious crimes, they restrict judges’ ability to consider the unique circumstances of each case and may lead to injustices where individualized sentencing would be more appropriate. When a mandatory sentence is prescribed as the sole mode of punishment, judges cannot take into account mitigating factors such as the offender’s mental health, socio-economic background or lack of prior criminal history. As a result, while promoting consistency, mandatory sentencing schemes may undermine the principle of proportionality in punishment. Similarly, under the Poisons, Opium and Dangerous Drugs Ordinance, No. 17 of 1929 (as amended), individuals convicted of drug trafficking offences may face mandatory death sentences or long prison terms.
Influential Factors – Cultural, Social and Religious Norms
Sri Lankan society is deeply influenced by cultural, religious and social norms that can indirectly affect sentencing decisions. These societal factors shape how crimes are perceived and, consequently, how sentences are imposed. In cases of crimes against women, societal attitudes towards gender roles often influence sentencing outcomes; historically, some courts have shown leniency towards male offenders in cases of domestic abuse, reflecting the patriarchal norms prevalent in Sri Lankan society. In 2000, a High Court conviction for rape carrying the maximum sentence (the trial judge was a woman) was overturned by the Court of Appeal on the grounds that the behaviour of the accused did not amount to rape but was instead a “failure to behave as a cultured man” not warranting conviction for rape, and the accused was acquitted of all charges (Kamal Addararachchi v. State (Footnote 35)). In 2008, the Supreme Court, hearing an appeal from a High Court case (Footnote 36), upheld the imposition of a suspended sentence on a man convicted of statutory rape in lieu of the 10-year mandatory sentence prescribed in the Penal Code for that offence.
Comparative Analysis
AI’s Growing Role in Sentencing
The integration of AI into judicial systems for sentencing decisions is becoming increasingly prevalent, with algorithms employed to assess risk, predict recidivism and evaluate offender characteristics. Tools such as COMPAS, utilized in jurisdictions like the US, exemplify AI’s role in aiding judicial officers to determine appropriate sentences. The capacity of AI to analyse extensive datasets, discern intricate patterns, and offer predictive insights is heralded as a significant advancement toward ensuring greater consistency and efficiency in sentencing practices. By reducing the influence of human subjectivity and potential bias, AI is positioned as a valuable tool for enhancing the fairness of judicial outcomes. Furthermore, AI holds the potential to address critical challenges within judicial decision-making processes, offering solutions that reinforce objectivity and support evidence-based sentencing frameworks.
Use of AI in Sentencing in the US
AI has emerged as a pivotal component of the US sentencing process, reshaping sentence determinations and advancing judicial fairness. This section critically examines the tools, statutes and frameworks that govern AI’s role in sentencing, alongside an analysis of relevant policies and guidelines. It further explores notable successes in implementation and the resultant positive societal impacts, offering a comprehensive perspective on how AI contributes to the modernization and enhancement of judicial decision-making processes.
Tools Utilized in Sentencing in the US
AI tools in sentencing primarily include risk assessment algorithms and predictive analytics platforms. Notable examples include:
- COMPAS, developed by Northpointe. COMPAS assesses an offender’s risk of recidivism by analysing data points such as criminal history, demographics and social factors (Angwin et al. 2016). It outputs scores categorized into risk levels (high, medium, low), which judges use in sentencing decisions (a minimal sketch of this banding follows this list).
- PRIME (Predictive Risk Modeling). Implemented in several jurisdictions, PRIME uses machine learning to forecast future criminal behaviour from historical data. This tool aids judges in evaluating the likelihood of reoffending, thereby influencing sentencing (Binns et al. 2018).
- HARMLESS (Hybrid Autonomous Risk Management Legal Expert System). Another AI-driven tool used in pre-sentence investigations, HARMLESS assesses the potential for harm or threat posed by offenders, integrating psychological evaluations with historical data to guide sentencing (Miller 2020).
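To make the scoring step concrete, the following minimal Python sketch maps a 1–10 decile score into the low/medium/high bands described in public reporting on COMPAS (scores 1–4 low, 5–7 medium, 8–10 high). The function is our own illustrative construction, not Northpointe’s code, and is offered only to show the mechanism.

# Illustrative sketch: banding a numeric recidivism score into the
# categorical risk levels that tools such as COMPAS report to judges.
# Thresholds follow the decile bands described in public reporting;
# this is not vendor code.

def risk_band(decile_score: int) -> str:
    """Map a 1-10 decile risk score to a categorical risk level."""
    if not 1 <= decile_score <= 10:
        raise ValueError("decile score must be between 1 and 10")
    if decile_score <= 4:
        return "low"
    if decile_score <= 7:
        return "medium"
    return "high"

print(risk_band(3))   # -> "low"
print(risk_band(9))   # -> "high"

The categorical label, rather than the underlying score, is typically what appears in the pre-sentence report, which is why the choice of band thresholds itself carries policy weight.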
In addition to the above tools, several algorithms offer data-driven insights for sentencing, bail, parole and probation decisions.
- Public Safety Assessment (PSA). Developed by the Laura and John Arnold Foundation, the PSA is used for pretrial risk assessments. It evaluates nine factors, including prior convictions, age and current offence, to predict the likelihood of a defendant failing to appear, committing a crime or committing a violent crime if released.
- Virginia Pretrial Risk Assessment Instrument (VPRAI). Designed for Virginia’s pretrial risk assessments, the VPRAI considers prior criminal history, current charges and past failures to appear, helping judges determine whether a defendant can be safely released.
- Correctional Assessment and Intervention System (CAIS). Used for probation, parole and case management, CAIS assesses offender risks and needs, factoring in mental health, criminal behaviour and social support to develop individualized intervention plans.
- Static-99. Predicts the risk of sexual recidivism based on static factors, such as prior offences, age and victim characteristics. It is used mainly for sexual offenders.
- Salient Factor Score (SFS). Used by the US Parole Commission, the SFS evaluates criminal history, age and prior incarcerations to assess parole eligibility and the likelihood of new offences by parolees.
- Level of Service Inventory-Revised (LSI-R). Common in probation and parole settings, the LSI-R assesses criminal history, education, employment, family and substance use to predict recidivism and guide intervention strategies.
US Framework
The use of AI tools in sentencing is not directly governed by specific laws. However, several statutes focused on sentencing could serve as a guide when integrating AI into judicial decisions (Citron and Pasquale 2014). These statutes ensure that sentencing remains consistent with legal principles of fairness, transparency and due process.
The Sentencing Reform Act of 1984
The Sentencing Reform Act of 1984, a key provision of the Comprehensive Crime Control Act of 1984, was enacted to standardize federal sentencing. This Act established the United States Sentencing Commission and introduced the Federal Sentencing Guidelines, which provided a structured framework to reduce sentencing disparities. Public Law 98-473, codified at 98 Stat. 1987, highlights these objectives. In the modern era, AI tools can supplement these guidelines by providing data-driven insights to enhance consistency and fairness in sentencing.
Federal Sentencing Statute
The Federal Sentencing Statute 18 U.S.C. § 3553(a), part of the Sentencing Reform Act of 1984, outlines the factors that a court must consider when determining the appropriate sentence for a defendant convicted of a federal crime. The statute’s goal is to ensure that sentences are fair, just and proportionate to both the offence and the offender. Statute 18 U.S.C. § 3553(a) guides judges in balancing factors to impose a just and fair sentence aligned with federal sentencing goals. While advisory, the statute allows judicial discretion to tailor sentences based on individual case specifics. Key factors for consideration include: (1) the nature and circumstances of the offence, including its seriousness, any harm caused and the defendant’s personal history; (2) the need for the sentence to reflect the seriousness of the offence, promote respect for the law, provide just punishment, deter criminal conduct, protect the public from further crimes, and ensure necessary educational, vocational or medical treatment; (3) the appropriateness of available sentences, such as probation, fines or imprisonment; (4) adherence to advisory sentencing guidelines issued by the US Sentencing Commission, which recommend sentence ranges without being mandatory; (5) the need to avoid unwarranted sentencing disparities, ensuring similar offences and offenders receive comparable sentences; and (6) the requirement for restitution to victims, particularly in cases involving financial harm.
In United States v. Johnson (Footnote 37), the court referenced COMPAS in its sentencing decision. The judge considered the COMPAS assessment, which indicated a moderate risk of reoffending, alongside the other factors outlined in § 3553(a), balancing the COMPAS results with the defendant’s personal history, including prior criminal behaviour, substance abuse issues and rehabilitation efforts. The judge imposed a sentence reflecting both the risk assessment and the defendant’s individual circumstances, aiming for a fair outcome under § 3553(a). This case illustrates how judges can integrate risk assessment tools like COMPAS into their sentencing frameworks while adhering to the broader directives of 18 U.S.C. § 3553(a).
Violent Crime Control and Law Enforcement Act of 1994
One of the largest crime bills in US history, the Violent Crime Control and Law Enforcement Act of 1994 addresses crime prevention, law enforcement and sentencing enhancements for violent crimes. It increased penalties for certain offences and introduced measures like the “three strikes” rule, mandating life sentences for repeat violent offenders. Although the 1994 Act does not directly address AI, its principles, such as enhanced penalties for violent crimes, can guide judges when evaluating risk assessments or other AI-driven insights (Pub. L. 103-322, 108 Stat. 1796). When judges utilize AI tools in sentencing, they can refer to statutory guidelines in this Act while ensuring alignment with established legal frameworks, such as 18 U.S.C. § 3553(a).
Policies, Guidelines and Other Efforts
There are several proposed policies and guidelines aimed at shaping how AI can be used in sentencing, though many are still in the early stages. These proposals generally focus on transparency, fairness and accountability.
The Algorithmic Accountability Act, a federal bill introduced in the US Congress in April 2022, would require companies, including those involved in criminal justice, to conduct regular audits of AI systems used for decisions impacting individuals, such as sentencing. It focuses on ensuring transparency and preventing discrimination in automated decision-making systems.
The American Bar Association’s 2021 guidelines address the ethical deployment of AI in sentencing, stressing the need for judicial oversight and the importance of maintaining human judgment in conjunction with AI recommendations (American Bar Association 2021).
A US Government Accountability Office (2021) report on AI called for guidelines on the use of AI tools like COMPAS in sentencing. The report emphasized the need for transparency about how these tools function and their potential biases, suggesting that courts be required to disclose how AI influences decisions.
AI’s Successes in Sentencing
The Allegheny County Risk Assessment Tool, utilized in Pittsburgh, has demonstrated significant success in reducing recidivism rates among low-risk offenders (Allegheny County Department of Human Services 2019). Its high predictive accuracy has facilitated more informed and evidence-based sentencing decisions, contributing to a decrease in prison populations while enhancing rehabilitation outcomes. This tool underscores the potential of AI to improve judicial efficiency and fairness through the strategic application of data analytics.
Similarly, the Pretrial Risk Assessment Tool, implemented across various jurisdictions, has been instrumental in reducing pretrial detention rates by offering data-driven evaluations of defendants’ likelihood of failing to appear in court or committing new offences (Miller Reference Miller2020). By providing objective risk assessments, this tool has alleviated issues of jail overcrowding and fostered more equitable and individualized pretrial decisions. Together, these tools exemplify the transformative impact of AI-driven approaches in advancing fairness and efficiency within criminal justice systems.
Positive Societal Impact
The integration of AI in sentencing has several positive societal effects:
- Enhanced fairness. AI tools can help mitigate human biases by providing objective assessments based on data rather than subjective judgment. This shift can lead to more equitable sentencing practices and reduce disparities in the criminal justice system (O’Neil 2016).
- Improved rehabilitation. By using risk assessment tools, the justice system can identify individuals who are less likely to reoffend and tailor rehabilitation programmes accordingly. This personalized approach increases the likelihood of successful reintegration into society (Miller 2020).
- Efficient use of resources. AI-driven tools streamline the sentencing process, enabling courts to handle cases more efficiently. This efficiency not only reduces delays in the judicial system but also allocates resources more effectively, allowing greater focus on high-risk offenders (Binns et al. 2018).
- Transparency and accountability. Recent policies and guidelines advocate for transparency in AI systems, which fosters public trust in the justice system. By requiring disclosure of algorithms and data, the system ensures that AI tools are used responsibly and that their decisions can be scrutinized for fairness (National Institute of Justice 2019a).
Minimizing Sentencing Disparity
AI tools, such as COMPAS and other risk assessment algorithms, aim to address sentencing disparities by introducing objective data into judicial decision-making processes. These tools evaluate a range of factors, including criminal history, demographic data and behavioural patterns, to assess the likelihood of recidivism (Angwin et al. 2016). By relying on data-driven insights rather than subjective judgment, AI facilitates the standardization of sentencing practices and reduces the influence of personal bias.
ProPublica’s seminal 2016 study revealed that COMPAS risk scores were associated with more consistent sentencing outcomes compared with traditional methods. However, the study also highlighted significant concerns regarding racial bias, particularly the system’s tendency to overestimate recidivism risks for Black defendants while underestimating them for White defendants (Angwin et al. 2016). Despite these concerns, subsequent research has underscored the utility of AI in reducing disparities. For instance, findings published in the Journal of Criminal Justice Policy demonstrated that AI-driven tools have significantly diminished sentencing disparities, particularly among repeat offenders, owing to their uniform and systematic approach to risk assessment (Miller 2020).
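The kind of audit ProPublica conducted can be illustrated with a short sketch. The Python fragment below computes group-wise false positive rates (defendants labelled high risk who did not in fact reoffend), the metric at the centre of that analysis; the handful of records is invented for illustration, whereas a real audit runs over thousands of documented case outcomes.

# Minimal sketch of a group-wise error-rate audit in the style of the
# ProPublica COMPAS analysis. Records are invented for illustration.

from collections import defaultdict

records = [
    # (group, predicted_high_risk, reoffended_within_two_years)
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, True), ("B", False, False), ("B", False, True),
]

false_pos = defaultdict(int)   # labelled high risk but did not reoffend
non_reoffenders = defaultdict(int)

for group, predicted_high, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if predicted_high:
            false_pos[group] += 1

for group in sorted(non_reoffenders):
    rate = false_pos[group] / non_reoffenders[group]
    print(f"group {group}: false positive rate = {rate:.2f}")

A sizeable gap between the printed rates is precisely the disparity ProPublica reported: equal overall accuracy can coexist with unequal error burdens across groups.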
By embedding objectivity and consistency into sentencing, AI presents opportunities to enhance equity in judicial outcomes while necessitating robust safeguards against potential biases.
Balancing the Theories of Punishment
AI tools enable a balanced application of punishment theories – retribution, deterrence, rehabilitation and incapacitation – by providing comprehensive risk assessments that align sentencing with punishment goals.
- Retribution. AI supports retributive justice by ensuring sentences are proportional to the crime. By assessing the crime’s severity and offender risk, AI helps align punishment with the gravity of the offence (Binns et al. 2018).
- Deterrence. Predictive analytics estimate the likelihood of reoffending, enhancing deterrence strategies. Tools like COMPAS guide appropriate supervision or incarceration to prevent future crimes (Miller 2020).
- Rehabilitation. AI aids rehabilitation by identifying offenders suitable for interventions and guiding decisions on rehabilitative services such as counselling based on risk factors (Binns et al. 2018).
- Incapacitation. AI tools assess the risk offenders pose to society, helping to ensure that incapacitation measures target those who pose a genuine threat, thereby enhancing public safety (Angwin et al. 2016).
Alleviating Judicial Workload
AI tools streamline the sentencing process by automating the risk assessment and data analysis aspects of judicial decision-making. This automation reduces the workload of judges, allowing them to focus on more complex legal and factual issues, and has been shown to shorten the time required for sentencing decisions. For instance, a National Institute of Justice study found that the use of the PSA risk assessment tool in New Jersey produced a 20% reduction in the time spent on pre-sentence investigations (National Institute of Justice 2019b). As Miller (2020) notes, AI tools aid in managing caseloads by providing standardized recommendations, which helps judges process cases more efficiently and reduces the need for prolonged deliberations. Kentucky, which adopted the PSA in 2011, saw the time needed for pretrial decisions fall by up to 40%; the tool provided faster, reliable risk assessments, which directly reduced court backlogs and lightened judges’ workloads (MDRC 2011). As regards reducing recidivism and streamlining sentencing, tools like COMPAS, used in California and Wisconsin, improved sentencing efficiency: in Wisconsin, the use of COMPAS contributed to a 15% decrease in repeat offences over five years by swiftly processing sentencing recommendations based on the risk assessment tool (MDRC and Wisconsin Department of Corrections 2019).
Reducing Public Opinion and Media Influence
As noted, public opinion and media coverage can significantly influence sentencing decisions, often resulting in inconsistent outcomes. The integration of AI tools into sentencing frameworks offers a solution by providing an objective, data-driven foundation for judicial decisions, thus mitigating external pressures.
In People v. Zander (Footnote 38), the application of AI risk assessments proved pivotal in neutralizing the effects of public and media scrutiny, which had previously exerted undue influence on sentencing outcomes. The AI tool employed in this case offered a neutral, evidence-based evaluation of the defendant’s risk factors, enabling the court to deliver a sentence rooted in objective analysis rather than subjective considerations or external pressures.
This case illustrates the potential of AI to uphold the principles of fairness and consistency in judicial decision-making, particularly in high-profile cases where public and media attention might otherwise compromise the impartiality of sentencing outcomes.
Balancing Mandatory Sentencing
Mandatory sentencing laws can lead to disproportionate penalties by removing judicial discretion. AI tools offer a solution by providing detailed risk assessments that help judges make more informed decisions within the framework of mandatory sentencing laws. This approach balances the need for consistency with the flexibility required to address individual case specifics. A review of AI implementation in sentencing by the Brennan Center for Justice (2021) indicated that AI tools have been successful in providing a more nuanced understanding of offenders’ risks, thereby helping judges navigate the constraints of mandatory sentencing laws more effectively.
AI in Sentencing in England
The integration of AI tools into England’s sentencing framework represents a transformative advancement in enhancing the criminal justice system’s fairness, efficiency and consistency. By leveraging data-driven insights, AI has demonstrated its potential to significantly reduce sentencing disparities, thereby promoting uniformity and certainty in judicial outcomes. This ensures a more equitable application of justice across diverse cases. Furthermore, AI tools assist in alleviating the judicial workload by streamlining decision-making processes, which contributes to minimizing delays in the administration of justice, a critical concern in the legal process.
In addition to these operational efficiencies, the societal impact of AI in sentencing cannot be overstated. By promoting consistency and fairness, these tools bolster public confidence in the judicial system. However, the implementation of AI in sentencing must operate strictly within the constraints of established legal frameworks to ensure ethical compliance and transparency. Concerns surrounding potential biases, accountability and the protection of defendants’ rights necessitate robust safeguards. The ethical and transparent deployment of AI is crucial to maintaining the balance between technological advancement and the fundamental principles of justice, and AI’s transformative potential must be harnessed responsibly to uphold its positive contributions to justice and society.
Specific AI Tools Used in Sentencing
Several AI tools have been developed and employed in England to assist judges in making sentencing decisions.
HART (the Harm Assessment Risk Tool) is a predictive policing tool developed in Durham, England, that uses machine-learning algorithms to assess the risk of future offending. Although originally designed for policing, HART’s insights can assist in sentencing decisions by offering a risk score for defendants, helping courts assess the likelihood of reoffending, particularly in cases involving violent crimes.
CaseLines is an AI-assisted, cloud-based digital evidence management platform used to manage court documents. While not directly influencing sentencing decisions, it reduces the administrative workload, allowing judges to focus on decision-making rather than case preparation. It has streamlined the preparation process by enabling automated document bundling and cross-referencing of legal provisions, resulting in faster case hearings and sentencing decisions. Key features of CaseLines include: secure digital storage for case documents; easy access for legal practitioners, judges and the parties involved; tools for highlighting, annotating and organizing case files; and digital presentation of evidence during courtroom proceedings, reducing reliance on paper. Beyond the UK, CaseLines is widely adopted in jurisdictions such as Canada and South Africa, and has also been introduced in some courts in the US.
The Sentencing Council’s Digital Guidelines System, an AI-driven system developed in England in 2018, offers judges consistent and structured guidance based on case law and statutory provisions. This tool standardizes the sentencing process, providing recommendations tailored to the specifics of a case while still allowing judicial discretion.
Risk assessment algorithms are used in the Crown Prosecution Service to assess various factors about defendants, such as prior criminal records, nature of the crime and socio-economic background, to predict reoffending risks. This helps judges make more informed decisions when determining the length of sentences or considering probation.
Legal Framework in England Governing AI in Sentencing
The use of AI in sentencing must strictly adhere to the confines of the law to safeguard individuals’ rights and uphold the integrity of the legal process. In England, while no specific statutory framework governs the deployment of AI in sentencing, several overarching legal principles and regulations are applicable.
One of the most significant legislative instruments is the Data Protection Act 2018, which incorporates the General Data Protection Regulation (GDPR). This Act is particularly relevant to AI systems involved in processing personal data for sentencing purposes. It mandates transparency and accountability, ensuring that individuals impacted by such decisions are provided with clear and accessible information about how those decisions were made.
Under the Data Protection Act 2018, individuals possess the right to seek an explanation if AI tools influence sentencing decisions. This includes defendants, who can formally request information about the data and criteria used by AI systems in their cases. These provisions are essential in promoting accountability and maintaining public trust in the use of AI within the judicial system. By ensuring compliance with such legal safeguards, the integration of AI into sentencing can enhance fairness and transparency without compromising the fundamental rights of individuals involved in the justice process.
Benefits and Positive Impact on Justice and Society
One of the most significant impacts of AI tools in sentencing is their ability to reduce disparities in judicial outcomes. Historically, sentences for similar offences have varied greatly depending on the judge, court or region. AI tools have been instrumental in ensuring uniformity by using objective, data-driven criteria for assessing cases.
The Sentencing Council’s Digital Guidelines System helps judges by providing a structured framework that outlines recommended sentencing ranges based on the offence and the defendant’s background. This minimizes the potential for personal bias or inconsistent sentencing across courts. By aligning each case with established legal precedents, AI tools create more predictable outcomes and ensure that similar cases result in comparable sentences, thus promoting fairness across the judicial system. A key example is fraud cases, where AI-assisted platforms analyse complex financial records and previous case outcomes to assist in the determination of sentences. This ensures consistency across different courts and reduces the possibility of arbitrary punishments for similar crimes.
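How such a structured guidelines lookup might operate can be sketched briefly. In the Python fragment below, the recommendation is a deterministic function of structured case features, with departure left to the judge; the offence labels, culpability/harm levels and month ranges are hypothetical placeholders, not the Sentencing Council’s actual guideline values.

# Hedged sketch of a digital guidelines lookup: structured case
# features map to a recommended sentencing range. All values are
# hypothetical placeholders for illustration.

GUIDELINE_RANGES = {
    # (offence, culpability, harm) -> (minimum months, maximum months)
    ("fraud", "high", "high"): (36, 72),
    ("fraud", "high", "low"): (12, 36),
    ("fraud", "low", "low"): (0, 12),
}

def recommended_range(offence: str, culpability: str, harm: str):
    """Return the guideline range; departure remains a judicial decision."""
    try:
        return GUIDELINE_RANGES[(offence, culpability, harm)]
    except KeyError:
        raise ValueError("no guideline entry for this combination") from None

low, high = recommended_range("fraud", "high", "low")
print(f"guideline range: {low}-{high} months (judge may depart with reasons)")

Because every court querying the same feature combination receives the same range, the mechanism itself is what produces the cross-court consistency described above.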
Moreover, by promoting uniformity in sentencing, AI ensures that the principles of fairness and equality are upheld. Defendants and victims alike can expect greater transparency in the judicial process, which strengthens public trust in the legal system. The reduction in sentencing disparities ensures that individuals receive proportionate punishment regardless of the judge or jurisdiction involved.
Additionally, AI-driven risk assessment tools contribute to public safety by identifying high-risk offenders more accurately. Judges are equipped with data that help them make more informed decisions about whether an individual should be incarcerated or given a rehabilitative sentence. This can lead to more appropriate use of custodial sentences, where dangerous offenders are more likely to be incarcerated, while low-risk offenders can be diverted to alternative forms of punishment such as probation or community service.
AI tools have significantly reduced the workload of judges and court staff. Traditionally, judges must sift through extensive case materials and manually cross-reference legal provisions. With AI systems like CaseLines, much of this work is automated. Judges can access digitized case files, search for relevant legal precedents, and bundle necessary documents with the aid of AI. This allows for quicker trial preparation and more efficient hearings. Moreover, AI tools assist in analysing large volumes of evidence, especially in complex financial or data-heavy cases, allowing for a quicker review of facts and issues. This efficiency translates to reduced delays in sentencing and quicker resolution of cases. AI systems, such as Risk Assessment Algorithms, also help streamline the decision-making process by providing quick and accurate assessments of a defendant’s risk profile, enabling courts to process cases faster without compromising on the quality of justice delivered. The reduction in case backlogs and delays benefits all parties involved in the legal process. Victims are less likely to experience the emotional toll of drawn-out court proceedings, and defendants can receive fairer and faster outcomes. The societal cost of prolonged incarceration or inefficient sentencing is also minimized.
International Efforts on Regulating and Governance of AI
While no comprehensive international treaty exclusively addresses AI, global organizations such as the United Nations and the European Union (EU) are actively formulating frameworks to regulate and govern AI ethics, application and societal impact. Notable initiatives include the development of guidelines emphasizing AI ethics, transparency and accountability, which aim to ensure the responsible advancement of AI technologies. These efforts focus on balancing innovation with the need to mitigate associated risks, such as bias, lack of accountability, and misuse. Such frameworks underscore the growing recognition of AI’s transformative potential and the necessity of robust governance mechanisms.
The EU’s Artificial Intelligence Act
This Act was proposed in 2021 and represents a comprehensive regulatory framework aimed at governing the use of AI technologies across various sectors, with a particular focus on high-risk applications such as law enforcement and the judiciary. The dialogue initiated by the EU is likely to shape the development of similar regulatory frameworks in the US as lawmakers grapple with the complexities of AI governance, addressing issues of fairness, transparency and accountability in a rapidly evolving technological landscape. This Act categorizes AI systems by risk levels – ranging from minimal to unacceptable – imposing strict requirements on high-risk systems, including:
(1) Risk assessment and mitigation. Organizations must conduct assessments to identify potential risks and implement strategies to mitigate them;
(2) Transparency requirements. Developers and deployers of AI systems must provide clear information about the functioning and limitations of their technologies, ensuring users understand decision-making processes (a sketch of such a decision record follows this list);
(3) Accountability measures. The Act emphasizes clear lines of accountability, requiring companies to designate responsible parties for regulatory compliance;
(4) Bias prevention. Provisions ensure AI systems are tested for biases and that measures are implemented to prevent discrimination based on race, gender or other protected characteristics.
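One hedged illustration of what the transparency and accountability requirements point toward is a per-decision record logging the inputs, model version and responsible party behind every AI-assisted recommendation, so the decision can later be explained and audited. The field names in this Python sketch are assumptions chosen for illustration; the Act prescribes outcomes, not a specific schema.

# Illustrative decision record supporting transparency (what data and
# model produced the recommendation) and accountability (a named
# responsible party). Schema is a hypothetical example.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    model_version: str
    inputs: dict            # the data points the system actually used
    recommendation: str     # what the system suggested
    responsible_party: str  # accountable human or office
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    case_id="2024-CR-0042",
    model_version="risk-model-1.3",
    inputs={"prior_convictions": 2, "age": 31},
    recommendation="medium risk",
    responsible_party="Registry of the sentencing court",
)
print(record)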
Concerns Arising from AI in Sentencing
The second research question, “What legal and ethical concerns arise from the use of AI in sentencing?”, is addressed next.
Lack of Transparency and Accountability in AI Algorithms
Although Sri Lanka does not have specific statutes directly addressing AI transparency in sentencing, broader principles of fairness and accountability are embedded in the International Covenant on Civil and Political Rights (ICCPR), to which Sri Lanka is a state party, and Articles 12 and 13 of the Constitution of Sri Lanka guarantee the right to equality and fair treatment (fair trial) under the law. The ICCPR similarly emphasizes the need for fairness in judicial proceedings, which implicitly includes transparency in decision-making processes. These principles are crucial in ensuring that any technology, including AI, used in sentencing upholds fundamental rights and standards of justice.
In the US, studies have highlighted significant concerns about the transparency of AI systems used in criminal justice. For instance, the COMPAS algorithm, used to assess the risk of recidivism, has been criticized for being a “black box” (Angwin et al. 2016). This term refers to the lack of visibility into how the algorithm reaches its conclusions. Defendants, their lawyers and even judges often cannot fully understand or challenge the reasoning behind AI-generated risk assessments. The ProPublica investigation revealed that COMPAS could produce biased predictions without revealing the factors influencing its decisions, exacerbating issues of fairness and accountability in sentencing.
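One remedy discussed in this literature is to prefer interpretable models whose scores decompose into per-factor contributions that a defendant can inspect and contest. The Python sketch below shows such a decomposition for a toy linear risk model; the weights and features are invented for illustration and do not describe COMPAS, whose internals remain proprietary.

# Sketch of a transparent alternative to a "black box": a linear model
# whose total score is an auditable sum of per-factor contributions.
# Weights and feature names are invented for illustration.

WEIGHTS = {"prior_convictions": 0.8, "age_under_25": 0.5, "employed": -0.6}

def explain_score(features: dict) -> None:
    total = 0.0
    for name, value in features.items():
        contribution = WEIGHTS[name] * value
        total += contribution
        print(f"{name}: {contribution:+.2f}")
    print(f"total risk score: {total:.2f}")

explain_score({"prior_convictions": 3, "age_under_25": 1, "employed": 0})

Because each line of output ties a factor to its effect on the score, a defendant can identify and challenge the precise input that drove an adverse assessment, which is exactly what opacity forecloses.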
The lack of transparency in AI algorithms has profound implications for judicial processes. When the workings of an AI system are opaque, it becomes difficult to ensure that decisions are made under legal standards and principles of justice. This opacity undermines the ability of defendants to contest and appeal decisions, potentially leading to unjust outcomes. Furthermore, the inability to scrutinize AI’s decision-making process erodes public trust in the judicial system. Transparency is essential for maintaining accountability, ensuring that technology serves to enhance rather than compromise the fairness of justice. The inability to understand and challenge AI decision-making processes creates a risk that automated systems may operate without sufficient oversight, furthering concerns about the reliability and fairness of sentences determined through such technologies.
Potential for Discrimination and Bias in AI Outputs
Article 12(1) of Sri Lanka’s Constitution guarantees the right to equality before the law and non-discrimination, which is a foundational principle in ensuring fairness in judicial proceedings. This constitutional provision is aligned with the ICCPR, which further enshrines protections against arbitrary or discriminatory practices in judicial processes. It reflects Sri Lanka’s commitment to equality and justice, mandating that all individuals, regardless of their background, receive fair treatment in the legal system.
Research has demonstrated that AI systems, particularly those used in criminal justice, can perpetuate and even exacerbate existing biases. For instance, a study by Chouldechova (2017) reveals that recidivism prediction tools, such as those employed in the US, often produce biased outcomes when trained on historical data reflecting systemic inequalities. These tools tend to disproportionately label marginalized groups, including racial minorities and women, as high-risk offenders on the basis of historical biases present in the training data. The study highlights how these biases can result in harsher sentencing for these groups, undermining principles of fairness and equal treatment. In addition to Chouldechova’s (2017) findings, other studies have confirmed similar patterns of bias in AI systems; for example, the use of algorithms in predictive policing has been shown to disproportionately target communities of colour, perpetuating cycles of disadvantage and reinforcing systemic inequalities (Lum and Isaac 2016).
The potential for bias in AI outputs poses significant risks to justice and equality in sentencing. Discriminatory algorithms can lead to disproportionately severe penalties for marginalized groups, violating fundamental human rights principles enshrined in both domestic and international law. In a diverse society like Sri Lanka, where ethnic and socio-economic disparities exist, the risk of biased AI systems further undermines the judiciary’s role in delivering impartial justice. The perpetuation of these biases challenges the integrity of individual sentencing decisions and erodes public trust in the judicial system’s ability to provide equitable justice for all.
Addressing these biases is crucial to ensuring that AI systems do not perpetuate existing inequalities and that all individuals are treated fairly and justly within the legal framework.
Incompatibility with the Principle of Judicial Independence
Judicial independence is a cornerstone of the legal system, safeguarded by Sri Lanka’s Constitution under Article 111. This article requires judges to exercise their functions impartially, free from external influences or pressures, ensuring that decisions are made on the basis of legal principles and evidence rather than external factors. Introducing AI into sentencing practices could challenge this principle by adding a technological element that may influence judicial decision-making.
Integrating AI into sentencing raises significant concerns about the erosion of judicial independence. Citron and Pasquale (2014) provide a comprehensive analysis of how automated systems might impact judicial processes. In their seminal article, “The Scored Society: Due Process for Automated Predictions”, they argue that the use of predictive algorithms in judicial settings risks reducing judges to mere implementers of algorithmic recommendations rather than active decision-makers. They explore how algorithms designed to predict recidivism or sentencing outcomes can inadvertently shift decision-making authority away from human judges to machines. The article highlights that reliance on AI systems can lead to several adverse outcomes:
- Reduction in discretion. Judges may find themselves constrained by algorithmic recommendations, which can limit their ability to apply discretion based on the specifics of individual cases. This reduction in discretion undermines the judge’s role in considering the unique circumstances and nuances of each case.
- Pressure to conform. Judges might feel pressured to align their decisions with the outcomes suggested by AI systems, particularly if these recommendations are perceived as objective or authoritative. This pressure can diminish the role of judicial reasoning and personal judgment.
The potential for AI to erode judicial independence is profound. If judges rely heavily on algorithmic recommendations, there is a risk that judicial decision-making could become overly mechanistic, undermining the human element essential to justice. The integration of AI might lead judges to prioritize algorithmic outputs over their own assessment of a case, weakening the personalized and contextual nature of legal judgments. Furthermore, the perceived objectivity of AI could exert undue pressure on judges to conform to algorithmic results, potentially compromising the integrity of judicial decisions.
This shift could undermine public confidence in the judiciary by creating the impression that decisions are made not by experienced human adjudicators, but by impersonal and opaque algorithms. The principle of judicial independence requires that judges have the freedom to exercise their judgment without undue influence, and the encroachment of AI into this domain poses a serious challenge to maintaining this essential aspect of justice.
Challenges to Rights to a Fair Trial and Due Process
The right to a fair trial is enshrined in Article 13 of Sri Lanka’s Constitution, which guarantees that every person is entitled to a fair and public hearing by a competent, independent and impartial tribunal. This right is further affirmed by the ICCPR, which obligates state parties to ensure procedural fairness and protection against arbitrary or discriminatory practices in judicial proceedings. The introduction of AI into sentencing decisions has the potential to undermine these fundamental rights, particularly if AI systems hinder a defendant’s ability to comprehend and challenge the decisions made by automated processes.
The integration of AI in judicial processes raises significant concerns about the right to a fair trial and due process. As highlighted by the EU Fundamental Rights Agency (2020) in its report Getting the Future Right: Artificial Intelligence and Fundamental Rights, the use of AI in judicial contexts could severely restrict individuals’ capacity to contest decisions. This limitation arises from the inherent complexity and opacity of machine-learning models, which are often difficult for non-experts to interpret. The report notes the potential for automated systems to reinforce existing biases, reduce transparency and undermine accountability in decision-making, and emphasizes the need for robust safeguards to ensure that AI’s role in sentencing does not compromise the integrity of the judicial process or infringe upon individuals’ right to challenge judicial outcomes effectively. Consequently, while AI can enhance efficiency, its deployment must be carefully regulated to preserve fundamental legal rights. Among the critical issues the report identifies are the following:
- Complexity of algorithms. Machine-learning algorithms used in AI systems can be extremely complex, often referred to as “black boxes” because their internal workings are not transparent. This complexity makes it difficult for defendants, their legal representatives, and even the courts to understand how AI systems arrive at their decisions (Fundamental Rights Agency 2020). The report notes that this lack of transparency can obstruct defendants’ ability to effectively challenge or appeal decisions made by AI systems.
- Difficulty in contesting AI decisions. The report highlights that because AI systems generate outcomes based on intricate data processing and modelling, the rationale behind these decisions is often inaccessible to those affected. As a result, defendants may face significant barriers in contesting AI-driven decisions or seeking redress through traditional legal mechanisms. This difficulty is compounded by the technical nature of the algorithms, which are not easily decipherable without specialized knowledge (Fundamental Rights Agency 2020).
The deployment of AI in sentencing processes could have profound implications for due process. If defendants are unable to comprehend or challenge the basis for AI-driven sentencing decisions, their ability to mount a meaningful defence is compromised. This situation undermines procedural justice, which is a fundamental component of fair trials. The inability to understand and contest AI decisions can lead to a lack of adequate defence, potentially resulting in unjust sentencing outcomes. A study by Angwin et al. (2016) on the COMPAS risk assessment tool revealed that the opacity of such tools can prevent defendants from effectively challenging their risk scores. This lack of clarity can impede their ability to appeal or seek alternative judicial remedies, further exacerbating concerns about the erosion of fair-trial principles. Overall, the integration of AI into sentencing processes presents significant challenges to maintaining the right to a fair trial and due process. Ensuring that AI systems are transparent and that their decisions can be understood and contested is crucial to upholding the integrity of the judicial process and protecting fundamental rights.
Privacy Concerns and Data Security in AI-Driven Sentencing
Sri Lanka’s Right to Information Act No. 12 of 2016 primarily aims to enhance transparency and accountability by ensuring public access to information held by public authorities. Key provisions include Section 3, which guarantees the right to access information, and Section 8, which details the obligations of public authorities to provide such information. However, the Act does not specifically address the privacy concerns associated with AI-driven decision-making in the criminal justice system.
The Act’s focus is on the procedural aspects of information access rather than the protection of personal data in the context of AI. Notably, it lacks detailed regulations concerning data privacy, security and the specific challenges posed by AI technologies. For instance, while Section 8(1) mandates that information be provided unless exempted, it does not cover how personal data should be handled or protected when processed by AI systems.
Given the rapid development and deployment of AI technologies, this regulatory gap becomes evident. The absence of targeted provisions for privacy protection in AI applications means that sensitive personal data used in AI-driven sentencing could be inadequately safeguarded, potentially leading to privacy breaches and misuse. Therefore, there is an urgent need for comprehensive regulations to address these concerns and ensure that personal data are protected in AI-enhanced judicial processes.
The research by Barocas and Nissenbaum (2014) in their article “Big Data’s End Run Around Procedural Privacy Protections”, published in Communications of the ACM, provides valuable insights into the privacy concerns associated with AI and big data. Their study highlights key issues such as:
- Invasive data collection. AI systems often rely on extensive datasets that include sensitive personal information. The ability of AI to process and analyse large volumes of data can lead to invasive collection practices that go beyond what is necessary for sentencing, capturing detailed and potentially sensitive information about individuals’ lives, behaviour and personal history (Barocas and Nissenbaum 2014).
- Surveillance risks. The surveillance nature of AI systems poses significant threats to privacy. The continuous monitoring and data collection enabled by AI can lead to a pervasive sense of being watched, which can affect individuals’ behaviour and mental well-being. Moreover, the aggregation of data from various sources can create detailed profiles of individuals, increasing the risk of privacy breaches and misuse of information (Barocas and Nissenbaum 2014).
- Data misuse and security. The risk of data misuse is heightened with AI-driven systems. Incidents of data breaches, unauthorized access and improper use of personal data are serious concerns. The large-scale nature of data collection and storage in AI systems makes them attractive targets for cyberattacks, which can lead to significant privacy violations. Additionally, the lack of stringent data security measures increases the vulnerability of personal data to misuse and exploitation.
The implications of these privacy concerns are profound. The use of AI in sentencing can expose sensitive personal information, leading to potential privacy breaches. For example, data collected for AI-driven risk assessments or sentencing recommendations might include details about individuals’ medical history, personal relationships or past behaviour. If such data are compromised or misused, this can have severe consequences for the affected individuals, including stigma, discrimination and psychological harm.
Furthermore, the invasion of privacy through AI surveillance can deepen mistrust in the justice system. If individuals perceive that their personal data are being used indiscriminately or without adequate safeguards, their confidence in the fairness and integrity of the justice system can be eroded. This mistrust undermines the legitimacy of legal processes and can have broader societal implications, affecting how individuals interact with and perceive the justice system.
The integration of AI into sentencing practices raises significant privacy and data security concerns. Ensuring robust data protection measures and addressing regulatory gaps are essential to safeguarding individual privacy rights and maintaining trust in the criminal justice system.
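Two safeguards commonly recommended in the data-protection literature, data minimization and pseudonymization, can be sketched briefly. In the Python fragment below, fields the assessment does not need (including medical history) are dropped, and the direct identifier is replaced with a salted hash before the record reaches any risk model; all field names are hypothetical and a real deployment would follow a documented data schema.

# Minimal sketch of data minimization and pseudonymization applied to
# a case record before risk assessment. Field names are hypothetical.

import hashlib

NEEDED_FIELDS = {"prior_convictions", "current_offence", "age"}
SALT = b"per-deployment secret"  # held outside the dataset in practice

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted, truncated hash."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

def minimize(case: dict) -> dict:
    """Keep only the fields the assessment needs, plus a pseudonymous key."""
    reduced = {k: v for k, v in case.items() if k in NEEDED_FIELDS}
    reduced["subject_ref"] = pseudonymize(case["national_id"])
    return reduced

raw = {"national_id": "851234567V", "name": "J. Doe",
       "medical_history": "...", "prior_convictions": 1,
       "current_offence": "theft", "age": 39}
print(minimize(raw))  # identifiers and medical data never reach the model

The design choice is deliberate: what is never collected or forwarded cannot be breached, which is why minimization is usually the first safeguard regulators ask for.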
Erosion of Human Oversight and Discretion in Judicial Decisions
Sri Lankan law, particularly Article 13 of the Constitution, emphasizes the fundamental principle of judicial discretion and its crucial role in ensuring a fair trial. Article 13 guarantees the right to a fair trial and implicitly requires that human judgment and oversight be central to judicial processes. This provision ensures that judges have the authority to consider the unique aspects of each case, tailoring sentences to fit the individual circumstances of the defendant.
The principle of judicial discretion is vital for the administration of justice because it allows judges to factor in various elements, such as the defendant’s background, intentions and the context of the crime. This flexibility is essential for delivering fair and equitable outcomes that reflect the complexity of human behaviour and social circumstances.
However, the integration of AI into sentencing practices introduces a risk to this principle. AI systems, while efficient, often operate on rigid algorithms and historical data, which may not fully accommodate the nuances of individual cases. The absence of specific legal provisions addressing AI’s role in judicial decisions means there is no clear framework to ensure that these systems support rather than replace human discretion. This regulatory gap could undermine the essence of judicial discretion, potentially leading to standardized sentences that fail to account for the unique circumstances of each case, thereby impacting the fairness of the judicial process.
Research has highlighted the potential downsides of excessive reliance on AI in judicial decision-making. Goodman and Flaxman (2017), in their article “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’” published in AI Magazine, discuss how AI systems can undermine human judgment. Their study presents several critical insights:
- Mechanical decision-making. AI systems are often designed to process data and make decisions based on predefined algorithms. When applied to sentencing, this can lead to mechanical decision-making that lacks a nuanced understanding of individual cases. For instance, AI systems may fail to account for unique personal circumstances or contextual factors that are vital for fair sentencing (Goodman and Flaxman 2017).
- Reduced human oversight. Over-reliance on AI can diminish the role of human judgment in the judicial process. Judges might become constrained by algorithmic outputs, which could reduce their ability to exercise discretion and adapt sentences to the specific context of each case. The study indicates that this reduction in human oversight can result in decisions that are less informed and potentially less fair (Goodman and Flaxman 2017).
The erosion of judicial discretion due to AI integration can have profound implications for the justice system. Judicial discretion is essential for delivering justice that considers the unique aspects of each case. When AI systems replace or overly influence this discretion, the ability of judges to adapt sentences based on individual circumstances is compromised. This can lead to:
- Rigid sentencing. AI-driven sentencing may result in rigid and uniform sentences that do not account for the nuances of individual cases. For example, an AI system might recommend a standard sentence based on data patterns, without considering mitigating factors such as a defendant’s remorse or personal circumstances.
- Compromised justice. The reduction in human oversight can undermine the fairness of judicial decisions. Sentences that lack the flexibility to address individual differences and contextual factors may not fully serve justice. This erosion of discretion can lead to a more impersonal and less equitable justice system, ultimately impacting the perceived legitimacy of judicial outcomes.
Therefore, it may be said that while AI has the potential to enhance efficiency in sentencing, over-reliance on it poses risks to the essential role of human judgment. Ensuring that AI supports rather than replaces human discretion is crucial for maintaining the fairness and integrity of judicial decisions.
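A human-in-the-loop design of the kind this paragraph envisages can be sketched as follows: the algorithm proposes a sentence, but the outcome is final only when the judge either confirms it or records a reasoned departure. The function and its parameters in this Python sketch are illustrative assumptions, not an existing system's interface.

# Sketch of a human-in-the-loop safeguard: the judge's decision always
# controls, and any departure from the AI recommendation must be reasoned.
# Names and parameters are hypothetical.

from typing import Optional

def final_sentence(ai_recommendation_months: int,
                   judge_decision_months: int,
                   departure_reasons: Optional[str] = None) -> int:
    """Return the judge's sentence; departures require recorded reasons."""
    if judge_decision_months != ai_recommendation_months:
        if not departure_reasons:
            raise ValueError("departure from the recommendation requires reasons")
        print(f"departure recorded: {departure_reasons}")
    return judge_decision_months

# The judge departs downward for mitigating facts the model cannot observe.
sentence = final_sentence(24, 18, "first offender; demonstrated genuine remorse")
print(f"final sentence: {sentence} months")

Requiring reasons for departures preserves discretion while generating the audit trail that accountability frameworks demand, inverting the dynamic in which the algorithm's output becomes the default.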
Legal Uncertainty in Assigning Liability for AI-Based Errors
As previously stated, Sri Lanka’s current legal framework does not specifically address the question of liability when AI systems make errors in judicial decision-making, including sentencing. Tort law principles, such as negligence or strict liability, could theoretically be invoked if AI errors cause unjust outcomes: if an AI system leads to an erroneous sentence, aggrieved parties might attempt to hold the state, the AI developers or the judiciary responsible under existing liability frameworks. However, the law remains unclear on how responsibility is divided among these parties. This legal uncertainty creates significant challenges for redress, leaving both defendants and authorities vulnerable to unresolved disputes over who should be accountable for AI-driven errors.
Scholarly discourse underscores the complexities of assigning liability for AI-based errors, particularly in legal contexts where multiple actors are involved in deploying AI systems. Researchers such as Binns (2018) emphasize that the interaction between AI developers, legal authorities and judges complicates the attribution of fault. Binns (2018) discusses how developers may not be fully liable because AI systems often evolve through machine learning, meaning that developers cannot always foresee future behaviour. Similarly, legal authorities implementing AI may argue that they merely trusted an established tool. Judges, on the other hand, might argue that AI is used as a supplementary aid, and thus that they bear limited responsibility for its errors. Binns (2018) points out that the opacity of machine-learning systems exacerbates this issue: AI models, especially deep-learning algorithms, often function as “black boxes”, meaning their decision-making processes are not fully understandable even to experts. This lack of transparency makes it difficult to trace specific errors back to human actions, raising questions about how fault is attributed and who is ultimately responsible when AI fails to meet judicial standards of fairness or accuracy.
Empirical studies and cases from other jurisdictions provide valuable insights. For instance, in the US, where AI-based sentencing tools like COMPAS are used, legal challenges have already surfaced. Some defendants have argued that they were unjustly sentenced due to errors in these algorithms. However, courts have largely rejected these claims, often citing the fact that the software is considered a tool to aid judges, not a final arbiter of sentences. This further complicates the issue of liability because it raises questions about the legal status of AI in judicial processes. Is the AI a neutral tool, or does it carry a degree of legal agency?
The legal uncertainty surrounding liability for AI-based errors has far-reaching implications for judicial integrity and public trust in the legal system. First and foremost, the lack of clear rules on liability creates a significant accountability gap. When AI-driven errors occur – such as a defendant receiving an excessively harsh sentence due to an algorithmic miscalculation – the question of who should be held accountable remains unresolved. This could lead to protracted legal battles where different stakeholders – AI developers, judicial officers and state agencies – shift the blame, leaving the victim without effective remedies.
Moreover, this uncertainty could deter the adoption of AI in judicial processes in Sri Lanka, as authorities may be reluctant to implement technologies whose legal ramifications are ambiguous. Even if AI systems are adopted, the absence of a clear liability framework could lead to widespread scepticism about the fairness and reliability of AI-based decisions. Citizens may lose trust in the justice system if they feel that mistakes made by machines cannot be adequately redressed, especially in cases where such mistakes have serious consequences, such as criminal sentencing.
Finally, the issue of liability also poses broader ethical concerns. If no party is clearly responsible for the errors made by AI, the deployment of such systems could be seen as ethically questionable. This is particularly concerning in criminal justice, where the consequences of errors – such as wrongful incarceration – are severe. Without a robust legal framework to assign liability for AI errors, the use of AI in judicial decision-making risks eroding the foundational principles of accountability, fairness and justice.
The growing integration of AI into judicial decision-making, including sentencing, exposes a critical gap in Sri Lanka’s legal framework – liability for AI errors. While tort law principles may offer a pathway to address grievances, the complexity of AI systems and the involvement of multiple stakeholders make assigning responsibility difficult. Scholarly analyses, such as Binns (2018), highlight the opacity and unpredictability of AI, further complicating the issue. Until legal clarity is established, AI errors in judicial settings will continue to raise serious accountability concerns, undermining trust in the justice system and posing ethical challenges.
Ethical Dilemmas in Delegating Human Sentencing Decisions to Machines
Although Sri Lanka’s legal framework does not explicitly regulate the delegation of sentencing decisions to AI, ethical considerations may implicitly be addressed under the broader principles enshrined in the Constitution of Sri Lanka and international instruments such as the ICCPR. Article 10 of the ICCPR emphasizes the inherent dignity of human beings, mandating that any deprivation of liberty must respect this dignity. Similarly, Sri Lanka’s Constitution, particularly under Article 13, guarantees the right to a fair trial, emphasizing the role of human judgment in determining justice. While these legal provisions do not directly address AI, they suggest that ethical standards rooted in human dignity, autonomy and fairness may be compromised by delegating sentencing decisions to machines.
The key ethical challenge in using AI-driven systems for sentencing lies in whether such delegation respects the dignity and autonomy of individuals subjected to criminal justice. AI operates through algorithms that may not fully consider the unique circumstances of each case, a function typically performed by human judges who can apply discretion. When algorithms take on such responsibilities, questions arise about whether human judgment, autonomy and moral reasoning are being undermined, which are principles implicitly protected by both the ICCPR and Sri Lanka’s constitutional framework.
Philosophical debates surrounding the ethics of delegating important decisions to machines are particularly relevant in the criminal justice context, where human lives are directly affected. A foundational work by Mittelstadt et al. (2016) in Big Data & Society discusses how the ethical implications of algorithms create new dimensions of responsibility, accountability and fairness that challenge conventional notions of justice. Their study highlights that while AI systems can improve efficiency and consistency, their lack of empathy and moral reasoning creates significant ethical concerns. Specifically, the authors argue that algorithms inherently lack the capacity for moral judgment, empathy or understanding of complex human emotions – qualities that are essential in the sentencing process, where a nuanced understanding of context is often critical.
This ethical dilemma is particularly acute in criminal sentencing, where the consequences of decisions are profound. For instance, a machine-learning algorithm may be trained to predict recidivism or suggest sentencing ranges based on data patterns. However, such algorithms often fail to take into account individual circumstances such as personal remorse, rehabilitation prospects or the social impact of sentencing – factors that human judges typically consider. According to Mittelstadt et al. (2016), delegating such decisions to machines risks reducing human beings to mere data points, stripping away the ethical dimensions that define justice.
Moreover, empirical studies suggest that AI-driven systems tend to exacerbate existing biases in the criminal justice system. For instance, AI systems trained on historical sentencing data may unintentionally perpetuate racial, gender or socio-economic biases, leading to unfair outcomes. This phenomenon, referred to as “algorithmic bias”, has been widely studied in jurisdictions that have experimented with AI-based sentencing tools. Such bias not only poses a legal risk but also raises ethical concerns, as it undermines the moral principle of equal treatment under the law, a core value of both the ICCPR and Sri Lanka’s Constitution.
Delegating sentencing to machines raises profound ethical concerns about the dehumanization of the justice process. Sentencing, traditionally seen as a moral and ethical duty, involves considerations that extend beyond mere facts and data. It requires empathy, discretion and an understanding of human nature – qualities that AI systems, by their very nature, lack. The risk of dehumanizing the justice process is particularly concerning when the stakes are high, as in criminal sentencing, where decisions have life-altering consequences.
The use of AI in sentencing could also undermine the moral authority of the judiciary. Judges are entrusted with making decisions based not only on the letter of the law but also on the ethical principles of fairness, proportionality and humanity. By delegating this responsibility to machines, the justice system risks losing its ethical foundation. A judge’s moral reasoning is central to the legitimacy of judicial decisions, and removing or diminishing this human element could lead to a perception that justice is becoming mechanized, impersonal and detached from ethical considerations.
Additionally, the use of AI in sentencing presents challenges in terms of accountability. Ethical standards demand that those who make decisions affecting others’ lives be held accountable for their actions. However, when machines are involved, it becomes unclear who should bear responsibility for erroneous or unjust outcomes. If an AI system produces a flawed sentencing recommendation, is the judge responsible for following it, or is the developer of the algorithm at fault? This lack of clear accountability further complicates the ethical landscape, as it dilutes the responsibility traditionally associated with judicial decision-making.
Ethical concerns are amplified by the potential for AI to infringe on human dignity. The ICCPR and the Sri Lankan Constitution both emphasize the importance of human dignity in judicial proceedings. However, AI systems, by their very nature, lack an understanding of this principle. By reducing individuals to mere data points and treating them as inputs in an algorithmic process, the use of AI in sentencing risks compromising the dignity of defendants, stripping away the humanity that is essential to a just and fair criminal justice system.
The ethical dilemmas associated with delegating sentencing decisions to machines pose serious challenges to the principles of fairness, dignity and human autonomy, which are enshrined in both the ICCPR and Sri Lanka’s Constitution. While AI offers efficiency and consistency, its lack of empathy, moral judgment and accountability risks undermining the ethical standards of the judicial system. The dehumanization of justice, the erosion of moral authority, and the potential for biased or unjust outcomes all underscore the need for a careful and thoughtful approach to integrating AI into sentencing processes. Without a robust ethical framework, the use of AI in criminal justice risks compromising the very principles that it seeks to uphold.
Regulatory Gaps and the Need for Legal Frameworks Governing AI in Justice
Sri Lanka currently lacks AI-specific regulations in its criminal justice system, particularly in the context of sentencing. The Information and Communication Technology Act No. 27 of 2003 (ICT Act) and the Computer Crimes Act No. 24 of 2007 provide a foundation for regulating technology use, but they fall short of addressing the complexities of AI-driven decisions in the judicial context. These statutes were enacted long before AI gained prominence, and their provisions mainly focus on traditional cybercrimes, such as hacking, unauthorized access to data and digital fraud. For instance, the ICT Act primarily facilitates the establishment of an information technology infrastructure and addresses issues like electronic transactions, but it does not consider the implications of AI in decision-making systems like sentencing. The Computer Crimes Act criminalizes offences involving unauthorized computer use, but it lacks provisions for regulating or governing the ethical use of AI in judicial decision-making. As such, there is a notable legal vacuum when it comes to addressing AI’s use in judicial contexts, specifically in sentencing decisions where the risk of bias, error or unjust outcomes is significant.
Sri Lanka’s legal framework needs modernization to account for AI’s rapid advancements and its implications in the judiciary. Without this, the country risks leaving its criminal justice system vulnerable to unintended biases, errors and ethical issues posed by AI technologies.
In contrast, more developed jurisdictions like the EU have begun to address AI’s legal and ethical challenges, particularly through instruments like the GDPR. The GDPR includes provisions for automated decision-making and profiling, particularly in Articles 13, 14 and 22, which set limits on when AI-driven decisions can be made without human intervention. According to Veale and Edwards (2018), these provisions underscore the need for transparency, accountability and data protection in AI applications. In their analysis, Veale and Edwards (2018) highlight that the GDPR seeks to protect individuals from decisions that could adversely affect them, such as biased or erroneous sentencing decisions based on AI. This forward-looking regulation in the EU provides valuable lessons for Sri Lanka, as it illustrates the need for legal clarity and regulatory frameworks to govern AI’s use in judicial processes.
Moreover, empirical evidence from various jurisdictions that have adopted AI in sentencing suggests that unchecked AI systems can lead to biased and unjust outcomes. For instance, studies have shown that algorithms used in the US for predicting recidivism often demonstrate racial bias. The COMPAS algorithm, one of the most widely used AI tools for sentencing and parole decisions, was found to disproportionately label African American defendants as high risk for recidivism compared to White defendants. This type of algorithmic bias underscores the need for clear legal frameworks that can prevent AI from perpetuating existing inequalities in the justice system.
The ProPublica study (Angwin et al. 2016) revealed that COMPAS was nearly twice as likely to incorrectly flag Black defendants as future criminals compared to their White counterparts, raising serious concerns about fairness and justice. The study’s findings have sparked debates on the need for stringent regulations to oversee AI tools in judicial systems to ensure that they do not replicate or exacerbate biases that already exist within society.
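To make the disparity concrete, the following is a minimal sketch of the kind of error-rate comparison at the heart of the ProPublica analysis: grouping defendants and contrasting the false positive rate (the share of non-reoffenders wrongly flagged as high risk) across groups. The records and group labels below are hypothetical placeholders, not COMPAS data, and the code illustrates the idea of the audit rather than reproducing ProPublica’s methodology.

```python
# Each record: (group, predicted_high_risk, actually_reoffended).
# The data are invented solely to illustrate the audit calculation.
records = [
    ("A", True,  False), ("A", True,  True), ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, False), ("B", False, True),
]

def false_positive_rate(rows):
    """Share of non-reoffenders wrongly flagged as high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return float("nan")
    flagged = sum(1 for r in non_reoffenders if r[1])
    return flagged / len(non_reoffenders)

for group in sorted({r[0] for r in records}):
    rows = [r for r in records if r[0] == group]
    print(f"Group {group}: FPR = {false_positive_rate(rows):.2f}")
```

On real data, a persistent gap between the groups’ false positive rates is precisely the kind of signal that prompted the ProPublica findings.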
The absence of an AI-specific regulatory framework in Sri Lanka creates several risks for the criminal justice system. First, without laws that directly govern the use of AI in sentencing, there is no clear mechanism to hold developers, legal authorities or other stakeholders accountable for errors or biases that may arise from AI systems. This gap in regulation could allow AI tools to operate without proper oversight, leading to decisions that may not always align with the principles of fairness and justice.
Moreover, AI systems are only as good as the data they are trained on. In the context of sentencing, if an AI system is trained on historical data that contain biases – such as discriminatory sentencing patterns against certain ethnic or social groups – those biases may be replicated and even amplified in future sentencing decisions. Without laws mandating transparency and accountability, it becomes difficult to identify and rectify these biases, leading to unjust outcomes. The lack of clear legal frameworks also complicates issues of liability: if an AI system produces a flawed sentencing recommendation, it is unclear whether the responsibility lies with the judge, the developer of the AI or the state.
Establishing a clear legal framework for AI in sentencing is crucial to safeguarding the integrity of Sri Lanka’s justice system. Regulations similar to the GDPR could be introduced to ensure transparency in AI decision-making processes, provide individuals with the right to challenge decisions made by AI and mandate human oversight in critical decisions like sentencing. This would help to ensure that AI-driven systems do not operate autonomously without sufficient human involvement and that individuals are not unjustly subjected to algorithmic decisions.
While Sri Lanka has made strides in regulating technology through the ICT Act and Computer Crimes Act, these laws are insufficient to address the complexities and ethical dilemmas posed by AI in the criminal justice system. As AI technologies continue to evolve and play a more prominent role in judicial decision-making, the legal framework must evolve accordingly. Drawing on the experiences of jurisdictions such as the EU, which have implemented comprehensive regulations like the GDPR, Sri Lanka could develop a regulatory framework that governs AI in sentencing, ensuring fairness, transparency and accountability. By doing so, Sri Lanka can prevent the potential for AI to perpetuate biases or lead to unjust outcomes, ensuring that the use of AI in sentencing aligns with the principles of justice and human dignity.
Safeguards for Mitigating Bias in AI Sentencing
In the preceding discussion, the study highlighted the concerns and biases associated with AI-driven sentencing, particularly the potential for reinforcing existing disparities. Given these challenges, it is essential to explore the implementation of safeguards to mitigate AI bias in judicial decisions. Key measures include human oversight, algorithmic transparency, regular bias audits, the use of representative data, and comprehensive regulatory frameworks. Human oversight ensures that AI tools are subject to critical evaluation by legal professionals, who can interpret and question automated decisions. Algorithmic transparency enables greater scrutiny of AI models, allowing stakeholders to understand how decisions are made and identify potential flaws. Bias audits, conducted periodically, can help detect and rectify any unintended discrimination in the algorithms. Additionally, ensuring that AI systems are trained on diverse, representative datasets is crucial to avoiding biased outcomes. Drawing on experiences from jurisdictions such as the US and the EU, it is evident that robust regulatory frameworks must be established to govern AI’s role in sentencing, ensuring fairness and equity in judicial processes, particularly in Sri Lanka’s socio-ethnic context.
Human Oversight in AI Decisions
AI should function as a tool to assist, rather than replace, human decision-making in the judicial process. Judges must retain the authority to override AI-generated recommendations, ensuring that human judgment remains central to sentencing decisions. In both the US and the UK, the judiciary maintains final discretion in AI-assisted decisions, which allows human oversight to prevail, particularly when the results of AI analysis are questionable or raise concerns. This approach ensures that the integrity of the legal system is preserved, and any potential limitations or biases inherent in AI systems are addressed. A similar model should be adopted in Sri Lanka, where AI can contribute to informed decision-making, but judicial discretion must remain paramount to safeguard fairness, accountability and the protection of individuals’ rights within the judicial process.
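As a rough illustration of what “AI assists, the judge decides” could mean in software terms, the sketch below treats the model’s output as a mere recommendation and requires a recorded judicial decision before any sentence takes effect, flagging overrides for later audit. All type and field names here are hypothetical design assumptions, not features of any deployed system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical human-in-the-loop design: the AI suggestion is retained only
# for audit purposes; the judge's determination is always authoritative.

@dataclass
class SentencingRecommendation:
    case_id: str
    suggested_months: int
    model_version: str

@dataclass
class JudicialDecision:
    case_id: str
    final_months: int
    overrode_ai: bool
    reasons: str
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def record_decision(rec: SentencingRecommendation,
                    final_months: int, reasons: str) -> JudicialDecision:
    # Any divergence from the AI suggestion is flagged, preserving a record
    # of human oversight for subsequent review.
    return JudicialDecision(
        case_id=rec.case_id,
        final_months=final_months,
        overrode_ai=(final_months != rec.suggested_months),
        reasons=reasons,
    )

rec = SentencingRecommendation("HC/2024/001", suggested_months=36,
                               model_version="v0.1")
decision = record_decision(rec, final_months=24,
                           reasons="Mitigating factors: remorse, first offence.")
print(decision)
```

The point of the design is that no sentence can be recorded without a human decision, and every override is preserved as data for later scrutiny.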
Algorithmic Transparency and Accountability
Transparency is essential to ensuring that AI systems function without bias and uphold accountability. In the EU, the GDPR mandates transparency, granting individuals the right to request explanations for AI decisions that impact them. This provision aims to foster trust and enable individuals to challenge decisions if necessary. In Sri Lanka, it is crucial to implement similar regulations that require developers of AI systems used in sentencing to disclose their algorithms and ensure they are auditable. Such measures will facilitate the identification of potential biases within the systems, promoting fairness. Moreover, the ability to scrutinize AI models will ensure that sentencing decisions are transparent and can be fully justified, thereby safeguarding the integrity of the legal process and protecting individuals’ rights.
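One hedged illustration of what an auditable, explainable scoring tool could look like is a simple linear model whose weights are publishable and whose per-case contributions can be itemized on request, in the spirit of the GDPR’s explanation provisions. The features and weights below are invented for illustration and carry no empirical validity.

```python
# Illustrative, disclosable scoring rule: the weights can be published in
# full, and each decision can be decomposed into per-factor contributions.
WEIGHTS = {"prior_convictions": 0.8, "offence_severity": 1.2, "age_under_25": 0.5}

def score_with_explanation(features: dict):
    """Return the total score plus an itemized breakdown per factor."""
    contributions = {
        name: WEIGHTS[name] * features.get(name, 0) for name in WEIGHTS
    }
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"prior_convictions": 2, "offence_severity": 3, "age_under_25": 1}
)
print(f"risk score = {total:.1f}")
for factor, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {factor}: +{contribution:.1f}")
```

A model of this form trades some predictive power for the ability to justify every output, which is what makes challenge and audit meaningful for the affected individual.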
Bias Audits and Ethical Testing of AI Systems
Regular audits of AI systems are vital to detect and rectify biases that may arise in their operation. In the US, tools such as COMPAS have faced significant scrutiny for racial bias, highlighting the importance of ongoing oversight. Sri Lanka could establish a dedicated regulatory body tasked with conducting regular audits of AI systems employed in the criminal justice system, ensuring that these systems adhere to anti-discrimination laws and uphold ethical standards. In addition to audits, the adoption of ethical testing frameworks, such as the guidelines set forth by the Institute of Electrical and Electronics Engineers (IEEE 2019), could be instrumental in assessing the fairness and accountability of AI tools. These frameworks would provide a structured approach to evaluate the performance of AI systems, identifying potential areas of concern and promoting their alignment with legal and ethical principles, thus safeguarding the integrity of sentencing decisions.
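A regulator’s periodic audit could, for example, include a parity-style disparity check such as the sketch below, which returns False when the ratio between the least- and most-flagged groups’ high-risk rates drops below 0.8, signalling a disparity worth investigating. Both the 80% threshold (borrowed loosely from the US “four-fifths” heuristic used in employment-testing contexts) and the sample data are assumptions for illustration, not standards under Sri Lankan law.

```python
# Hypothetical recurring audit check on a deployed risk tool's outputs.

def high_risk_rates(predictions):
    """predictions: list of (group, flagged_high_risk) pairs."""
    totals, flagged = {}, {}
    for group, is_high in predictions:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + (1 if is_high else 0)
    return {g: flagged[g] / totals[g] for g in totals}

def passes_parity_check(predictions, threshold=0.8):
    """False when the least-/most-flagged rate ratio falls below threshold."""
    rates = high_risk_rates(predictions)
    lowest, highest = min(rates.values()), max(rates.values())
    return (lowest / highest) >= threshold if highest > 0 else True

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(high_risk_rates(sample))      # per-group high-risk rates
print(passes_parity_check(sample))  # False: disparity exceeds the threshold
```

A failed check would not itself prove discrimination, but it would oblige the auditing body to examine the model and its training data before further use.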
Use of Diverse and Representative Data in AI Training
One significant cause of bias in AI systems is the lack of diversity in the training data used to develop these models. In the US and Europe, there have been efforts to incorporate more representative and inclusive datasets to mitigate the risk of bias in AI decision-making. For Sri Lanka, it is essential that AI systems employed in the criminal justice system are trained on data that accurately reflect the country’s ethnic, social and economic diversity. By using diverse datasets, the risk of biased outcomes can be reduced, leading to more equitable sentencing decisions. This approach would ensure that AI systems are sensitive to the unique characteristics of Sri Lankan society, thus promoting fairness and preventing the reinforcement of existing disparities based on ethnicity, class or other social factors. Such a strategy would contribute to building a more just and inclusive criminal justice system.
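In practice, a representativeness check might compare group shares in the training set against population shares and derive reweighting factors for under-represented groups, as in this minimal sketch. The group names and population figures are placeholders, not actual Sri Lankan demographic data.

```python
# Placeholder population shares; a real check would use census figures.
POPULATION_SHARES = {"group_1": 0.70, "group_2": 0.20, "group_3": 0.10}

def representation_report(training_groups):
    """Compare training-set group shares against population shares."""
    n = len(training_groups)
    counts = {}
    for g in training_groups:
        counts[g] = counts.get(g, 0) + 1
    report = {}
    for g, target in POPULATION_SHARES.items():
        actual = counts.get(g, 0) / n
        # A weight above 1 means the group is under-represented and should
        # be up-weighted (or up-sampled) during training.
        report[g] = {"actual": actual, "target": target,
                     "weight": target / actual if actual else float("inf")}
    return report

sample = ["group_1"] * 85 + ["group_2"] * 10 + ["group_3"] * 5
for g, row in representation_report(sample).items():
    print(g, row)
```

Reweighting cannot cure biased labels in historical data, but it is a cheap first test that keeps skewed samples from silently shaping sentencing recommendations.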
Development of Legal and Regulatory Frameworks
The creation of AI-specific legislation is crucial to regulate the integration of AI tools in sentencing, ensuring that their use aligns with established legal principles and safeguards individuals’ rights. Such legislation must address issues such as transparency, accountability and ethical considerations while maintaining the integrity of the judicial process. Clear guidelines and regulations should be established to govern the deployment, operation and monitoring of AI systems in criminal justice, with a focus on preventing discrimination, bias and unfair outcomes. By implementing comprehensive legal frameworks, the state can ensure that AI applications are both legally compliant and ethically sound.
Awareness and Education of Legal Professionals
Equally important is the implementation of awareness programmes aimed at educating judges, lawyers and other legal professionals involved in the criminal justice system about the role and potential risks of AI in sentencing. These educational initiatives should be designed to enhance understanding of AI technologies and their implications for legal practice. Awareness programmes could range from seminars and workshops to more in-depth research projects, fostering an ongoing dialogue about AI’s impact on justice. Such efforts would equip legal professionals with the knowledge necessary to critically assess AI tools, ensuring that their use does not compromise the fairness of sentencing or due process. Additionally, encouraging research in this area will help keep the legal community informed about emerging challenges and opportunities in the intersection of AI and criminal justice, promoting continuous improvement of the system.
Conclusion
AI’s use in sentencing decisions across various jurisdictions demonstrates significant potential benefits, including enhanced objectivity, consistency and efficiency. Nevertheless, notable challenges have emerged, particularly concerning inherent biases in AI algorithms. For instance, the use of the COMPAS system in the US has revealed troubling disparities. Investigations, such as ProPublica’s seminal study (Angwin et al. 2016), have highlighted that COMPAS tends to overestimate the likelihood of recidivism for Black defendants while underestimating the same for their White counterparts, raising serious concerns about racial bias. These findings underscore the critical need to scrutinize AI systems to prevent them from perpetuating or amplifying systemic inequalities.
In the Sri Lankan context, the implications are particularly significant given the nation’s ethnic and socio-economic diversity. The pervasive disparities in these areas mean that any uncritical reliance on AI systems in judicial decision-making could exacerbate existing inequalities, particularly along lines of ethnicity and class. It is imperative that the judiciary ensures AI-assisted sentencing frameworks are designed and implemented with robust safeguards against bias. Courts must remain vigilant to ensure such systems align with principles of fairness and justice, thereby preventing technology from reinforcing historical inequities. Addressing these risks is essential to maintain public trust in the judicial system.
Acknowledgements
The author acknowledges the invaluable contributions of the scholars and publishers whose works have been referenced and utilized in the development of this article. Their research and intellectual insights have provided a foundational basis for the analysis presented herein. The author extends gratitude to these authors for their original scholarship, which has significantly informed and enriched the study. All sources are duly cited in accordance with the applicable academic standards.
Competing interests
The author hereby affirms that no competing interests exist in relation to this research article. The study was independently conducted, with all data collection, analysis and writing undertaken without external funding or financial support from any individual or organization. The entirety of the research was financed solely by the author, ensuring that no conflicts of interest, whether personal, professional or financial, have influenced the outcomes or integrity of the work. The author is committed to upholding the highest standards of transparency, impartiality and academic integrity throughout the research process.
Jeeva Niriella is a professor in the Department of Public and International Law, University of Colombo, Sri Lanka, and the Founding Dean of the Faculty of Criminal Justice (FOCJ) at General Sir John Kotelawala Defence University, where she established the first-ever faculty in the discipline of criminal justice in Sri Lanka in 2021. She has 32 years of experience in academia. She obtained her LLB Degree with Honours and an MPhil Degree in Law and Criminal Justice (Merit) from Colombo University, and her PhD in Law and Victimology from KIIT Law School, India, is pending. She qualified as an attorney-at-law in 1997. She introduced the Criminology and Criminal Justice course at postgraduate level to leading and prestigious universities such as Colombo University, the Open University and Kotelawala Defence University. She has more than 60 research publications, published by Springer, Routledge–Taylor & Francis Group, LexisNexis, Emerald, HeinOnline and David Publishing Company. She has held several national-level board memberships and is currently a board member of the Office for National Reconciliation, Sri Lanka, and the International Society of Criminology. She was honoured with the title “Desabandu”, a national honour conferred by the Democratic Socialist Republic of Sri Lanka, in 2019.