Chapter 8 analyses the use of AI and ADM tools in welfare and surveillance through the lens of critical race studies. Aitor Jiménez and Ainhoa Nadia Douhaibi point to the necessity of building a non-Anglocentric theoretical framework from which to study a new global phenomenon: the digital welfare and surveillance state. Accordingly, the authors frame its rise within the wider context of the Southern European iteration of racial neoliberalism, which they term the Islamophobic Consensus. As the chapter demonstrates, the digital welfare and surveillance state does not rely on the same technologies, focus on the same subjects, or pursue the same objectives in every context. On the contrary, it draws on contextual genealogies of domination, specific socioeconomic structures, and distinctive forms of distributing power. The authors provide an empirical analysis of the ways the Islamophobic Consensus is being operationalised in Catalonia and expose the overlapping racist mechanisms governing the lives of racialised black and brown young adults. The chapter demonstrates how ADM technologies designed to govern “deviated”, “risky”, and “dangerous” Muslim youth “radicals” connect with colonial punitive governmental strategies.
The chapter is structured in two parts. The first part analyses the automated surveillance and governance apparatus deployed over Islamic communities in Catalonia. The second part frames the ideological, epistemological, and historical foundations of the Southern European path to racial neoliberalism, here labelled the Islamophobic Consensus. Drawing on surveillance and critical race studies, the authors synthesise the defining features that distinguish this model of domination from other iterations of neoliberal racism.
Chapter 10 explores the increasingly blurred line between public and private authority in designing and applying AI tools, and searches for appropriate safeguards to ensure the rule of law and the protection of fundamental rights. ADM tools increasingly sort individuals, with important consequences. Governments use such tools to rank and rate their citizens, creating a data-driven infrastructure of preferences that conditions people’s behaviours and opinions. Some commentators point to rule of law deficits in the automation of government functions, others emphasise how such technologies systematically exacerbate inequalities, and still others argue that a society constantly being scored, profiled, and predicted threatens due process and justice generally. Using the case of Houston Federation of Teachers v. Houston Independent School District as a starting point, Lin asks some critical questions that remain unanswered. How are AI and ADM tools reshaping professions such as education? Does the increasingly blurred line between public and private authority in designing and applying these algorithmic tools pose new threats? Premised upon these scholarly and practical inquiries, the chapter seeks to identify appropriate safeguards necessary to ensure rule of law values, protect fundamental rights, and harness the power of automated governments.
In the future, administrative agencies will rely increasingly on digital automation powered by AI. Can U.S. administrative law accommodate such a future? Not only might an automated state readily meet longstanding administrative law principles, but the responsible use of AI might fulfil administrative law’s core values of expert decision-making and democratic accountability even better than the status quo. AI governance promises more accurate, data-driven decisions. Moreover, due to their mathematical properties, AI and ADM tools might well prove to be more faithful agents of democratic institutions. Yet even if an automated state were smarter and more accountable, it might risk being less empathic. Although the degree of empathy in existing human-driven bureaucracies should not be overstated, a large-scale shift to the use of AI tools by government will pose a new challenge for administrative law: ensuring that an automated state is also an empathic one.
Social welfare has long been a priority area for digitisation and, more recently, for ADM. Digitisation and ADM can either advance or threaten the socio-economic rights of the marginalised. Current Australian examples include the roll-out of online and app-based client interfaces and compliance technologies in Centrelink. Others include work within the National Disability Insurance Scheme (NDIS) on the development of virtual assistants and the use of AI to leverage existing data sets to aid or displace human decision-making. Drawing on these examples and other recent experience, this chapter reviews the adequacy of traditional processes of public policy development, public administration, and legal regulation and redress in advancing and protecting the socio-economic rights of the marginalised in the rapidly emerging automated welfare state. It is argued that protections are needed against the power of ADM to collapse program design choices so that outliers, individualisation, complexity, and discretion are excluded or undervalued. It is suggested that innovative new processes may be needed, such as genuine co-design and collaborative fine-tuning of ADM initiatives, new approaches to (re)building citizen trust and empathy in an automated welfare state, and creative new ways of ensuring both equal protection of the socio-economic rights of the marginalised in social services and responsiveness to user interests.
There is a broad consensus that human supervision holds the key to sound automated decision-making: if a decision-making policy uses the predictive outputs of a statistical algorithm, but those outputs form only part of a decision that is ultimately made by a human actor, use of those outputs will not (per se) fall foul of the requirements for due process in public and private decision-making. Thus, the focus in academic and judicial spheres has been on making sure that humans are equipped and willing to wield this ultimate decision-making power. Yet proprietary software obscures the reasons for any given prediction; this is true of machine learning and deterministic algorithms alike. And without these reasons, the decision-maker cannot accord appropriate weight to that prediction in their reasoning process. Thus, a policy of using opaque statistical software to make decisions about how to treat others is unjustified, however involved humans are along the way.
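To make the weighting problem concrete, consider a minimal sketch in Python; every name and number here is hypothetical rather than drawn from the chapter. A human reviewer combines an opaque, proprietary score with their own assessment, yet has no principled basis for deciding how much weight the score deserves.

    def proprietary_risk_score(applicant_id: str) -> float:
        # Stand-in for closed-source vendor software: it returns a risk
        # score in [0, 1] but exposes none of the reasons behind it.
        return 0.8  # dummy value in place of a real black-box output

    def human_decision(applicant_id: str, reviewer_assessment: float,
                       score_weight: float) -> bool:
        # The human decides "ultimately", but without the reasons behind
        # the score, a weight of 0.2 is no more defensible than 0.9.
        score = proprietary_risk_score(applicant_id)
        combined = score_weight * score + (1 - score_weight) * reviewer_assessment
        return combined < 0.6  # approve when combined risk is below threshold

    # Same applicant, same reviewer judgement; only the arbitrary weight
    # changes, and with it the outcome: True at weight 0.2, False at 0.9.
    print(human_decision("A-1024", reviewer_assessment=0.3, score_weight=0.2))
    print(human_decision("A-1024", reviewer_assessment=0.3, score_weight=0.9))

The sketch illustrates the argument's crux: the "ultimate" human decision turns on a parameter the human has no rational means of setting.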
This chapter closes Part 1 by analysing how the opacity surrounding the use of AI and ADM tools by financial corporations is enabled, and even encouraged, by the law. As other chapters in the book demonstrate, such opacity brings significant risks to fundamental rights, consumer rights, and the rule of law. Analysing examples from jurisdictions including the US, UK, EU, and Australia, Bednarz and Przhedetsky unpack how financial entities often rely on rules and market practices protecting corporate secrecy, such as complex credit scoring systems, proprietary rights to AI models and data, and the carve-out of ‘non-personal’ information from data and privacy protection laws. The authors then focus on the rules incentivising the use of AI and ADM tools by financial entities, showing how they provide a shield behind which corporations can hide their consumer scoring and rating practices. The authors also explore potential regulatory solutions that could break this opacity: ensuring transparency, introducing direct accountability and scrutiny of ADM and AI tools, and reducing the control of financial corporations over people’s data.
Chapter 7 analyses the legal challenges that the incorporation of AI systems into the Automated State will bring. The starting point is that legal systems have so far coped relatively well with the use of computers by public authorities. The critical disruption of the Automated State predicted by Robert McBride in 1967 has not materialised and, therefore, we have not been forced to substantively rethink the adequacy of how administrative law deals with machines. However, the incorporation of AI into automation may be that disruption. In this chapter, Bello y Villarino offers a counterpoint to those who believe that existing principles and rules can be easily adapted to address the use of AI in the public sector. He discusses the distinct elements of AI through an exploration of the dual role of public authorities: a state that executes policy and a state that designs policy. The use of AI systems in these two contexts is of a different regulatory order. Until now there has been an assumption that policy design should be allowed a broad margin of discretion, especially when compared to the state as an executor of policies and rules. Yet the automation of policy design will require public authorities to make explicit decisions about objectives, boundary conditions, and preferences. Discretion for humans can remain, but AI systems analysing policy choices may suggest that certain options are superior to others. This could justify employing different legal lenses to approach the regulation of automated decision-making and decision-support systems used by the State. The reasoning, to some extent, could also be extrapolated to Automated Banks. Each perspective is analysed in reference to the activity of modern states. The main argument is that the AI-driven Automated State is not suited to the one-size-fits-all approach often claimed to apply to administrative law. The final part of the chapter explores some heuristics that could facilitate the regulatory transition.
Artificial intelligence (AI) and automated decision-making (ADM) tools promise money and unmatched power to banks and governments alike. Supposedly, they will know everything about their citizens and customers and will also be able to predict their behaviour, preferences, and opinions. Global consulting firm McKinsey estimates that AI technologies will unlock $1 trillion in additional value for the global banking industry every year. Governments around the world are jumping on the AI bandwagon, expecting increased efficiency, reduced costs, and better insights into their populations.
The potential of AI solutions to enhance effective decision-making, reduce costs, personalise offers and products, and improve risk management has not gone unnoticed by the financial industry. On the contrary, the characteristics of AI systems seem perfectly suited to the features of financial services and to their most distinctive and challenging needs. The financial industry thus provides a receptive and conducive environment for the growing application of AI solutions across a variety of tasks, activities, and decision-making processes. The aim of this chapter is to examine the current state of the legal regime applicable in the European Union to the use of AI systems in the financial sector, and to reflect on the need to formulate principles and rules that ensure the responsible automation of decision-making and serve as a guide for the wide and extensive implementation of AI solutions in banking activity.
Governments are increasingly adopting artificial intelligence (AI) tools to assist, augment, and even replace human administrators. In this chapter, Paul Miller, the NSW Ombudsman, discusses how the well-established principles of administrative law and good decision-making apply, or may be extended, to control the use of AI and other automated decision-making (ADM) tools in administrative decision-making. The chapter highlights the importance of careful design, implementation, and ongoing monitoring to mitigate the risk that ADM in the public sector could be unlawful or otherwise contravene principles of good decision-making, including consideration of whether express legislative authorisation for the use of ADM technologies may be necessary or desirable.