Amoore’s Cloud Ethics opens with the most timely of scenes, a public protest against the murder of a Black man by police in an American city. It was 2015, the man was Freddie Gray, and the city was Baltimore. The incident was filmed by citizens, the latest documentary evidence of powerful State forces inflicting violence on the public. In the aftermath, as the city exploded in protest, police used new tools of public surveillance, scanning social media accounts in a geofenced region as part of their extended situational awareness, facilitated by the software services of the tech firm Geofeedia. The social media chatter of some teens at a local high school was brought to the attention of police. As they boarded a bus, these young prospective protestors were detained by police, preemptively assessed, and prohibited from participating. Amoore’s point is not about the substance of this particular protest and its political salience or worthiness. Instead she points to the general act of making political claims (attending a protest, voting in an election, crossing a port of entry). The use of new algorithmic techniques intentionally disrupts such acts. She warns of a future where political rights are increasingly subject to soft prohibition through the assessment of an anticipated outcome, a calculated probability of harm.
Cloud Ethics focuses particularly on the way algorithms are deployed by the State or its agents. The sousveillance (scrutiny from below) of citizens wielding camera phones to expose systemic police violence is met by countersystems deployed by the police to detect the qualities of a protest, the “sentiment” of the crowd, the nature of social media chatter. The Baltimore case lends itself to a discussion of surveillance and the inhibiting effect of being observed and datafied in the present day. However, instead of treading these well-worn paths, Amoore focuses on the implications of the complex algorithmic logics applied to this data. Algorithmic systems shape the possibilities of what one can and cannot do, based on the logics of probability. This logic, in turn, is shaped by what is currently known, nameable and measurable, and by datasets composed of past actions. From these givens, how might the unimagined political claims of the future be made? And will they be allowed to be made?
To make this critical point, Amoore first identifies and rejects a prevailing theory regarding algorithmic harms. It rests on an inside/outside logic: the notion that there is a clear boundary between the algorithm and the surrounding politico-legal world. In this paradigm, the algorithms are “infringing or undercutting a precisely legible world of rights belonging to human subjects” [5]. Algorithmic violations become a special problem by virtue of their opacity. The answer, it follows, is greater transparency, a way to see more clearly where the algorithmic mechanisms go astray and to hold accountable those who have control over them. A very clear account of this thinking, for example, is Frank Pasquale’s The Black Box Society (footnote 1). This is an “encoded ethics,” but it is not a cloud ethics.
Amoore instead pushes further into the implications of algorithmic regimes, offering a novel reiteration of an old debate about the (non)neutrality of technology. Algorithms are inherently “ethicopolitical arrangements of values, assumptions, and propositions about the world” [6]. A cloud ethics stems from this foundational notion. We cannot hold out for the false promise of an unbiased algorithm that would overcome opacity with clarity and precision. Nor can we expect to fix the algorithms by making them conform to legal determinations. Instead, an algorithmic ethicopolitics must be considered for what it brings into the world, and for the world it creates.
Cloud Ethics joins a growing body of work that employs interpretivist, philosophical, and ethical approaches to the analysis of algorithms. It draws technical terms from computer science and its artificial intelligence subfield in a way that is mindful of their technical meaning, but without being beholden to such definitions. Such an approach reworks these terms, treating them as suggestive grist for the mill. Amoore talks of neural networks, recursive functions, random forest algorithms, hidden layers, and tuning algorithms. From time to time these terms seem slightly askew in translation, overgeneralized or assumed in places, as if the vocabulary were a foreign language and the activity of devising algorithms or writing code an unfamiliar work practice. Yet the cross-field translation attempted in Cloud Ethics is creatively productive. This book is not a primer on algorithms for scholars in non-CS fields, as other books in this genre attempt to be. The right approach for readers, I believe, is to allow a semiotic understanding of these terms to emerge from within the text. As they are originally understood on their home turf of computer science, such terms would resist what Amoore requires of them, and what she is able to draw out of them. She is wise to the politics and power underlying cross-field borrowing, arguing that, as computer science draws on philosophy and ethics to make assertions about “machine reasoning,” so too may other fields “play with the arrangements, thresholds, and assumptions of algorithms” [158].
This is also to say that Cloud Ethics is not structured to reward impatient readers with a quick payoff—although the introduction offers enough of the argument to read as a stand-alone piece. Key themes emerge as a surprise mid-chapter or scattered in bits and pieces across the book. For example, what exactly is a “cloud ethics”? Amoore engages with the cloud-as-metaphor in chapter one, at first through a materialist reading. “Cloud computing” evokes the ethereal—a cloud icon is used in system diagrams to represent abstracted computing capacity—but it exists materially in particular locales, determined by tax breaks and data sovereignty legislation. It is visible in the peculiar architecture of data centers. The cloud also evokes a “cloud chamber,” the instrument of scientific observation that makes atomic behaviors perceptible to humans. Amoore describes algorithms as an “instrument of mattering” that reveals by condensing, that makes visible while skewing what is seen. The cloud metaphor also plays on the notion of ambiguity. It alludes to uncertainty, incalculability, incompleteness. A few chapters on, in chapter four to be precise, Amoore’s “cloud ethics” is delivered more completely in contrastive terms. In a chapter that considers the claim of algorithmic “madness,” she shows that, once in the world, algorithms do not slip free of their logics so much as apply them relentlessly, with surprising effects under unanticipated circumstances. It is the insistence on reining in the algorithm and expecting to reinstitute control (through an encoded ethics) that is misguided; it is the embrace of uncertainty that defines a cloud ethics.
In the book’s conclusion, Amoore considers a “scene understanding” algorithm. It assesses context, but only to surface the foreground objects that, once identified and labeled, render the rest superfluous, to be discarded. What surrounding context of this book review shapes it, what would normally be left unremarked upon, implicitly shaping the text but deleted from view? The COVID-19 pandemic, for one, has almost immediately altered the world. So much so that it has “broken” many algorithms of routinized daily life—systems of supply chain management and pricing, for example. The way algorithms are gamed to amplify disinformation campaigns threatens, in turn, to “break” democracy. This profoundly highlights the hubris of both mechanistic control through algorithms and the response of making them more transparent and more compliant. The rise in military-style occupation by State forces in some US cities underlines the way seemingly anodyne pursuits of algorithmic policing become sinister in certain hands. New forms of surveillance (facial recognition of protestors, crowdsourcing used for photo identification) and sousveillance (protestors collecting and identifying munitions used against them) have emerged in protest actions. The privacy intrusions that interfere with the right to assemble and protest are now disrupted by masks and umbrellas. In Portland, Oregon, tear gas was turned back on police and Federal agents with hockey sticks and leaf blowers. Protest itself evolves creatively. Amoore’s assertion, I believe, is that the context always matters, not just at the discrete decision-making point where the algorithmic “aperture” narrows toward a finite output measure.
Through a tour of algorithms, their technical attributes, and their practical applications, Amoore arrives at a critical conclusion—a defense of uncertainty. Or perhaps, more accurately, the book’s argument is not about the value of uncertainty but about its inescapability in human relations and in decision-making. She critiques the ethicopolitics of algorithms as one that “reduces the intractable difficulties and duress of living, the undecidability of what could be happening in a scene, into a single human-readable and actionable meaning” [156]. Yet this promised certainty is a mirage. In the insistent application of tools to eliminate uncertainty, what are the consequences?
When an algorithmic logic is deployed to bring order to the world, it colonizes the future by its relentless predictive logic. Alternative political claims are foreclosed. The question of what the future holds is heavy in this moment. From my corner of the world, a looming presidential election and democratic crisis, along with rising street protests, point to an overwhelming desire to know, to alleviate uncertainty; at the same time, they point to the importance of protecting the rights of citizens against the impositions of non-human optimization, no matter how much data can be obtained, no matter the accuracy rating of the assessment. Amoore suggests that a “cloud ethics must be capable of asking questions and making political claims that are not already recognized on the existing terrain of rights to privacy and freedoms of association and assembly” [170]. If society accepts the outputs of algorithmic logics, allowing the world to be dominantly guided by them, will we ever know what counterfactual future we have surrendered? The problem of algorithmic systems is not principally that their internal mechanisms are opaque, Amoore suggests; it is the futures they preclude. A greater challenge than the call for transparency and accountability is to refuse, more fundamentally, a world remade algorithmically.
The outlines of an “ethical practice” are gestured toward in Cloud Ethics, but Amoore admits that a “definitive method for critique for resistance” is not what she has to offer [157]. Yet the urgency of working toward such a method ultimately comes together in the book’s concluding sentence: “ethicopolitical life is about irresolvable struggles, intransigence, duress, and opacity, and it must continue to be if a future possibility for politics is not to be eclipsed by the output signals of algorithms” [172]. There is a set of values and concerns easily swept away by the promise of algorithmic precision but, at least in being named, Cloud Ethics gives us a fighting chance to resist their casual obliteration.