1. Introduction
Here is a worrying possibility: there is a significant gap between our feeling that something is clear and our actually understanding it. The sense of clarity can be a marker of cognitive success, but it can also be seductive. Oversimplifications slip easily into our minds and insinuate themselves into our deliberative processes.
In that case, the sense of clarity might be intentionally exaggerated for exploitative ends. Outside forces, with an interest in manipulating our beliefs and actions, can make use of clarity’s appeal. Seduction, after all, often involves a seducer. Romantic seduction, in its more malicious form, involves manipulating the appearances of intimacy and romance in order to subvert the aims of the seduced. There is an analogous form of cognitive seduction, where hostile forces play with the signals and appearances of clarity in order to lead our thinking astray.
The sense of clarity is a potent focal point for manipulation because of its crucial role in managing our cognitive resources. After all, we only have so much mental energy to go around; we need to prioritize our inquiries. In particular, we need some way to estimate that we’ve probably thought enough on some matter for the moment – that it’s probably safe to move on to more pressing matters, even if we haven’t gotten to the absolute rock bottom of it. Our sense of clarity, and its absence, plays a key role in our cognitive self-regulation. A sense of confusion is a signal that we need to think more. But when things feel clear to us, we are satisfied. A sense of clarity is a signal that we have, for the moment, thought enough. It is an imperfect signal, but it is one we often actually use in the quick-and-dirty of everyday practical deliberation. This shows why manipulative interests might be particularly drawn to aping clarity. If the sense of clarity is a thought-terminator, then successful imitations of clarity will be quite powerful. If somebody else can stimulate our sense of clarity, then they can gain control of a particular cognitive blind spot. They can hide their machinations behind a veil of apparent clarity.
Here’s another way to put it: the moment when we come to understand often has a particular feel to it – what some philosophers have called the ‘a-ha!’ moment. The moment when we come to understand, says Alison Gopnik, is something like an intellectual orgasm (Gopnik, 1998). And, as Jonathan Kvanvig suggests, it is our internal sense of understanding – our sense of ‘a-ha!’ and ‘Eureka!’ – that provides a sense of closure to an investigation (Kvanvig, 2011, p. 88). The ‘a-ha’ feeling is both pleasurable and a signal that a matter has been investigated enough. If hostile forces can learn to simulate that ‘a-ha’ feeling, then they have a very powerful weapon for epistemic manipulation.
I offer two sustained case studies of cognitive subversion through the seductions of clarity. First, I will look at the sorts of belief systems often promulgated by moral and political echo chambers, which offer simplistic pictures of a world full of hostile forces and conspiracy theories. Such belief systems can create an exaggerated sense of clarity, in which every event can be easily explained and every action easily categorized. Second, I will look at the seductive clarity of quantification. I borrow my use of ‘seduction’ from Sally Engle Merry’s The Seductions of Quantification (2016), a study of how global institutions deploy metrics and indicators in the service of political influence. Merry focuses on the generation of indicators and metrics on the global stage, such as the Human Development Index, which attempts to sum up the quality of life across each country’s entire citizenry in a single, numerical score. The HDI then compiles these scores to offer a single, apparently authoritative ranking of all countries by their quality of life. Such systems of quantification can offer an exaggerated sense of clarity without an accompanying amount of understanding or knowledge. Their cognitive appeal can outstrip their cognitive value.
It is striking how quantified presentations of value seem to have a profound cognitive stickiness. The motivational draw of quantified values has been well-documented across many terrains (Porter, 1996; Merry, 2016; Espeland and Sauder, 2016). This motivational power is why so many companies and governments have become interested in the technologies of gamification. Gamification attempts to incorporate the mechanics of games – points, experience points, and leveling up – into non-game activities, in order to transform apparently ‘boring’ activities such as work and education into something more engaging, compelling, and addictive (McGonigal, 2011; Walz and Deterding, 2015; Lupton, 2016). I am worried, however, that gamification might increase motivation, but only at the cost of changing our goals in problematic ways. After all, step counts are not the same as health, and citation rates are not the same as wisdom (Nguyen, 2020, pp. 189–215; forthcoming). The seductions of clarity are, I believe, one important mechanism through which gamification works.
Let me be clear: the present inquiry is not a study in ideal rationality, nor is it a study of epistemic vice and carelessness. It is a study in the vulnerabilities of limited, constrained cognitive agents, and how environmental features might exploit those vulnerabilities. It is a foray into what we might call hostile epistemology. Hostile epistemology includes the intentional efforts of epistemic manipulators, working to exploit those vulnerabilities for their own ends. We might call the study of these intentional epistemic hostilities combat epistemology. Hostile epistemology also includes the study of environmental features which endanger epistemic agents through those vulnerabilities, even when no hostile intent lies behind them. Hostile environments, after all, don’t always arise from hostile intent. Hostile environments include intentionally placed minefields, but also crumbling ruins, the deep sea, and Mars. An epistemically hostile environment contains features which, whether by accident, evolution, or design, attack our vulnerabilities.
I will focus for the early parts of this paper on cases of combat epistemology. I think this is the easiest place to see how certain sorts of systems have a hostile epistemic function. The cases of intentionally manufactured hostile environments will then help us to recognize cases of the unintentional formation of hostile epistemic environments. Hostile epistemic environments can arise from entirely well-intentioned, and even successful, pursuits of other purposes. A culinarily extraordinary pastry shop also presents an environment hostile to my attempts at healthy eating. In many bureaucratic cases, as we will see, systems of quantification often arise for very good reason: to efficiently manage large and complex institutional data-sets, or to increase accountability (Scott, 1998; Perrow, 2014). But these very design features also make them into epistemically hostile environments. Because of the magnetic motivational pull of quantification, the very features which render them good for efficient administration also function to imbue them with seductive clarity.
Other recent inquiries into hostile epistemology include discussions of epistemic injustice, propaganda, echo chambers, fake news, and more (Fricker, 2007; Medina, 2012; Dotson, 2014; Stanley, 2016; Rini, 2017; Nguyen, 2018b). Importantly, the study of hostile epistemology is distinct from the study of epistemic vice. The study of the epistemic vices – such as closed-mindedness, gullibility, active ignorance, and cynicism – is a study of epistemically problematic character traits. It is the study of failings in the epistemic agents themselves (Sullivan and Tuana, 2007; Proctor and Schiebinger, 2008; Cassam, 2016; Battaly, 2018). Hostile epistemology, on the other hand, is the study of how external features might subvert the efforts of epistemic agents. Of course, vice and hostility are often entangled. Hostile environments press on our vices and make it easier for us to fall more deeply into them. But vice and hostility represent two different potential loci of responsibility for epistemic failure.
This all might just seem like common sense. Of course people are drawn to oversimplifications; what’s new in that? But there are important questions here, about why we’re drawn to oversimplification and how culpable we are for giving in to it. Importantly, many theorists treat our interest in oversimplification as straightforwardly irrational. In the psychological and social sciences, the appeal of oversimplification is usually explained as a mistake which can be understood in terms of individual psychological tendencies, such as motivated reasoning or the undue influence of the emotions. We accept oversimplifications, it is thought, because they make us feel smug, they comfort us, or they reinforce our sense of tribal identity (Kahan and Braman, 2006; Sunstein, 2017). Similarly, many philosophical accounts treat our susceptibility to oversimplification as a problem arising wholly from an individual’s own personal failures of character – from their epistemic vices. Quassim Cassam, for example, tells the story of Oliver the conspiracy theorist, who believes that 9/11 was an inside job. Says Cassam, there isn’t a good rational explanation for Oliver’s beliefs. The best explanation is a failure of intellectual character. Oliver, says Cassam, is gullible and cynical; he lacks discernment (Cassam, 2016, pp. 162–63).
I will present a picture that is far more sympathetic to the seduced. It is a picture in which exaggerated clarity plays upon specific structural weaknesses in our cognition. As cognitively limited beings, we need to rely on various heuristics, signals, and short-cuts to manage the cognitive barrage. But these strategies also leave us vulnerable to exploitation. Seductive clarity takes advantage of our cognitive vulnerabilities, which arise, in turn, from our perfectly reasonable attempts to cope with the world using our severely limited cognitive resources. And, certainly, the pull of seductive clarity will be worse if we give in to various epistemic vices. And, certainly, once we realize all this, we will want to act more vigorously to secure the vulnerable backdoors to our cognition. The general point, however, is that giving in to the seductions of clarity isn’t just some brute error, or the result of sheer laziness and epistemic negligence. Rather, it is driven, to a significant degree, by systems and environments which function to exploit the cognitive vulnerabilities generated by the coping strategies of cognitively finite beings.
2. Clarity as thought-terminator
I have been speaking loosely so far; let me now stipulate some terminology. On the one hand, there are epistemically positive states: knowledge, understanding, and the like. On the other hand, there are the phenomenal states that are connected to those epistemically positive states. These are the experiences of being in an epistemically positive state – like the sense of understanding, the feeling of clarity. Loosely: understanding is our successful grasp of parts of the world and their relationships, and the sense of clarity is the phenomenal state associated with understanding. For brevity’s sake, let me use the terms ‘clarity’ and ‘the sense of clarity’ interchangeably, to refer to the phenomenal experience associated with understanding. I do not mean to be using ‘clarity’ in the Cartesian sense, where it is a perfect guarantee of knowledge. Clarity, in my usage, is merely an impression of a certain kind of cognitive success – what J.D. Trout has called the sense of understanding (Trout, 2002). Clarity may often accompany genuine understanding, but it is by no means a perfect indicator that we do, in fact, genuinely understand. So external forces can exploit the gap between genuine understanding and the feeling of understanding – that sense of clarity.
There are two general strategies for epistemic manipulation. There is epistemic intimidation: the strategy of trying to get an epistemic agent to accept something by making them afraid or uncomfortable to think otherwise. There is also epistemic seduction: the strategy of manipulating positive cognitive signals to get an epistemic agent to accept something. The manipulation of clarity is a form of epistemic seduction. It is the attempt to use our own cognitive processes against us, whispering pleasantly all the while.
How might clarity seduce? There are many potential pathways. For one thing, clarity seduces because it is pleasurable. But for the remainder of this discussion, I’ll focus on another, even more dangerous feature: that the sense of clarity can bring us to end our inquiries into a topic too early. This possibility arises because of the profoundly quick-and-dirty nature of daily decision-making. We are finite beings with limited cognitive resources. In daily life, we need to figure out what to do: where to spend our money, who to vote for, which candidate to back. We face a constant barrage of potentially relevant information, evidence, and argument – far more than we could assess in any conclusive manner. So we need to figure out the best way to allocate our cognitive resources while leaving most of our investigations unfinished, in some cosmic sense.
When practically reasoning about the messy complexities of the real world, we are unlikely to arrive at any conclusive ground-floor, where we can know with any certainty that we’re done. So, for everyday practical deliberation, we need some method for determining that we’ve thought enough. And that basis often needs to be fast and loose, to cope with the fast and loose manner of everyday practical deliberation. We need some basis for estimating that our understanding is probably good enough, so that we can make a decision and move on. We need something like a heuristic for terminating thought.
Here, then, is the ruling supposition for my inquiry: the sense of clarity is one of the signals we typically use to allocate our cognitive resources. (I do not claim that it is the only signal, though I do claim it is a significant one.) We often use our sense of confusion as a signal that we need to keep investigating, and our sense of clarity as a signal that we’ve thought enough. Our sense of clarity is a signal that we can terminate an investigation. When a system of thought seems clear to us, then we have a heuristic reason to stop inquiring into it.
I’m not claiming that this heuristic is a necessary part of all practical reasoning – only that the heuristic is currently under common usage. After all, heuristics are usually contingent tendencies and not necessary parts of our cognitive architecture. In fact, some research suggests that we can slowly change the heuristics we use (Reber and Unkelbach, 2010).
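To fix ideas, here is a deliberately toy sketch of the supposition in code; every function and number in it is invented for illustration, and nothing about it is an empirical model:

```python
import random

def investigate_further(evidence):
    """Stand-in for one costly step of inquiry: gather one more datum."""
    return evidence + [random.random()]

def felt_clarity(evidence):
    """A subjective signal of clarity -- here, just a toy function of how
    much evidence has been gathered. Crucially, it is NOT a measure of truth."""
    return min(1.0, len(evidence) / 10)

def inquire(clarity_signal, threshold=0.8, budget=20):
    """Terminate inquiry when things *feel* clear, or when the budget runs out."""
    evidence, steps = [], 0
    while clarity_signal(evidence) < threshold and steps < budget:
        evidence = investigate_further(evidence)
        steps += 1
    return steps

# A manipulator need not touch the evidence at all -- only the signal.
def inflated_clarity(evidence):
    return min(1.0, felt_clarity(evidence) + 0.5)  # the seductive-clarity bonus

print(inquire(felt_clarity))      # 8 steps before things feel clear
print(inquire(inflated_clarity))  # 3 steps: the seduced agent stops much earlier
```

The point of the sketch is purely structural: the stopping rule consults the felt signal rather than the underlying facts, so anyone who can inflate that signal can end inquiry early.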
Here’s my plan. First, we’ll start to think about how powerful it would be if this supposition were true, and there were such a pleasurable and thought-terminating heuristic. I’ll look at some evidence from the empirical literature on cognitive heuristics that supports something in the vicinity of my supposition. I’ll show how the supposition, which concerns how we use our feeling of understanding, emerges from a recent discussion in the philosophy of science about the nature of genuine understanding. Then, I’ll use the supposition to think about what sorts of systems and environments might successfully exploit the sense of clarity. I’ll dig into some historical and sociological literature on echo chambers and on the social effect of simplistic quantification. The supposition will turn out to provide a unifying explanation for many of the documented effects of echo chambers and quantification. My argument in favor of the supposition, then, will be that it provides a unifying explanation for various observations from cognitive science, sociology, and history, while integrating neatly with a standard account of the nature of understanding. But this mode of argumentation can only render the supposition a plausible hypothesis; more empirical investigation is certainly called for.
3. Clarity as vulnerability
Suppose, then, that the sense of clarity plays a crucial role in the regulation of our cognitive resources, functioning as a signal that we can safely terminate a particular line of inquiry. Obviously, the sense of clarity can come apart from actual full understanding. It must, in order for it to play a heuristic role in quick-and-dirty daily deliberation. In order to know that we fully understood something, we would need to conduct an exhaustive and thorough investigation. The sense of clarity is far more accessible to us, so we can use it to make rough estimates about whether we’ve inquired enough.
If a hostile force could ape such clarity, then they would have a potent tool for getting us to accept their preferred systems of thought. This is because false clarity would provide an excellent cover for intellectual malfeasance. A sense of clarity could bring us to terminate our inquiry into something before we could discover its flaws. It would be something like an invisibility cloak – one that works by manipulating our attention. Our attention, after all, is narrow. We barely notice what's outside the focused spotlight of our attention. We can make something effectively disappear simply by directing a person's attention elsewhere. One way to make something cognitively invisible, then, is by making it signal unimportance. The spy novelist John le Carré – who had actually worked in British intelligence – describes, in his novel Tinker Tailor Soldier Spy, what a genuinely effective spy looks like. They aren't dashing and handsome, like some James Bond figure. An effective spy presents as entirely normal, bland, and dull. They can disappear because they have learned to magnify the signals of boringness. Similarly, the techniques of stage magic involve attentional misdirection. Stage magicians learn to signal boringness with the active hand while directing signals of interestingness elsewhere, in order to control their audience’s attention. The sense of clarity can be used in an analogous strategy of attentional misdirection. An epistemic manipulator who wants us to accept some system of thought should imbue that system with a sense of clarity, so that cognitive resources will be less likely to be directed towards it. The strategy will be even more effective if they simultaneously imbue some other target with a sense of confusion. The confusing object seizes our attention by signaling that we need to investigate it, which makes it easier for the clear-seeming system to recede into the shadows. The manipulator can thus gain control of their target’s attention by manipulating that target’s priority queue for investigation.
Thus, hostile forces can manipulate the cognitive architecture of resource-management in order to bypass the safeguards provided by the various processes of cognitive inquiry. In the movies, the crooks are always hacking the system which controls the security cameras. Epistemic criminals will want to hack the cognitive equivalent.
4. Ease and fluency
The experience of clarity is complex and its phenomenal markers many. Let’s start with a case study in one small and simple aspect of clarity – one which has been relatively well-studied in the psychological sciences. Consider the experience of cognitive ease – the relative degree to which it is easy to think about something. In the literature on cognitive heuristics, cognitive ease is part of the study of ‘cognitive fluency’, which is the ‘subjective experience of ease or difficulty with which we are able to process information’ (Oppenheimer, 2008, p. 237). Research has demonstrated that we do, in fact, often use fluency as a cognitive heuristic. If we comprehend an idea easily, we will be more likely to accept it. Cognitive difficulty, on the other hand, makes it more likely that we will reject an idea. This heuristic is not entirely unreasonable: we often experience cognitive ease in a domain precisely because we have a lot of experience with it. Cognitive ease often correlates with experience, which correlates with skill and accuracy. But, obviously, ease is separable from accuracy. Studies have demonstrated that one’s mere familiarity with an idea makes one more likely to accept it. Familiarity creates a sense of cognitive ease, but without the need for any relevant skill or expertise. Studies have also shown that we are more likely to believe something written in a more legible font. Legibility leads to easier processing, which leads to readier acceptance. In other words: we are using our cognitive ease with some proposition or domain as a heuristic for our accuracy with that proposition or domain. Rolf Reber and Christian Unkelbach have argued that fluency heuristics are, in fact, often quite useful. Through a Bayesian analysis, they conclude that fluency is a good heuristic when the user’s environment contains more true propositions than false ones – and the better the ratio of true to false propositions in their environment, the better the fluency heuristic will work (Reber and Unkelbach, 2010). But that heuristic can be gamed.
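Here is a minimal sketch of the Bayesian structure of that point. The particular likelihoods are invented for illustration and are not Reber and Unkelbach's figures; only the shape of the result matters:

```python
def p_true_given_fluent(base_rate, p_fluent_true=0.7, p_fluent_false=0.3):
    """Bayes' rule applied to 'it feels easy to process, so it's true'.
    base_rate is the proportion of true propositions in the environment."""
    numerator = p_fluent_true * base_rate
    denominator = numerator + p_fluent_false * (1 - base_rate)
    return numerator / denominator

for base_rate in (0.9, 0.5, 0.2):
    print(base_rate, round(p_true_given_fluent(base_rate), 2))
# 0.9 -> 0.95  in a mostly truthful environment, fluency is a decent guide
# 0.5 -> 0.70
# 0.2 -> 0.37  in an environment stocked with familiar falsehoods, the very
#              same heuristic becomes badly misleading
```

Nothing about the heuristic itself changes across these rows; only the environment does. That is exactly the opening a manipulator exploits.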
Suppose that the usual fluency heuristic is in place. How might it be exploited? To game the fluency heuristic, a manipulator would want to offer their targets ideas expressed in some familiar manner, by using well-worn patterns of thought and forms of expression. This exploitative methodology should be quite familiar: it explains the rhetorical power of clichéd slogans and Internet memes.
Suppose that the world has many such epistemic manipulators in it, and has become chock full of misleading ideas that have been engineered to seem familiar. Our best strategy to avoid manipulation would be to update our heuristics to close off this cognitive backdoor. As Reber and Unkelbach showed, we are capable of changing and updating our heuristics when we receive evidence that they have led us astray. The manipulators, then, would want to mask from us any evidence that our use of the fluency heuristic was leading us astray. This is, however, easier to do in some domains than others. Some epistemic domains have obvious litmus tests. It is easy to check for mistaken reasoning in them because successes and failures are obvious to any onlooker. For example, we can tell that our theory of bridge-building has gone wrong if our new bridges keep falling down. But other epistemic domains have no such easy litmus tests – like the moral and aesthetic domains. If one’s reasoning has been systematically subverted in such a subtle domain, there is no obvious error result that could function as a check. So if manipulators wanted to gain control via the fluency heuristic, one good strategy would be to perform their fluency-manipulations over, say, claims about morality and value. Alternatively, they may want to devote their fluency-manipulations to complex and diffuse social phenomena or more esoteric scientific phenomena. Some empirical claims cannot be straightforwardly checked by the layperson, such as scientific arguments for climate change or sociological claims about how oppression perpetuates itself. If the manipulators’ targets have been given a seductively clear explanation which dismisses, say, sociologists and climate change scientists as corrupt, those explanations will be quite hard to dislodge. Most targets will be unable to see that they have been led astray, and so won’t update their heuristics (Nguyen, 2018b; 2018c).
5. Aping understanding
Perhaps it seems implausible to you that somebody would terminate a really important inquiry just because of fluency. There is, however, another much more sophisticated form of epistemic seduction which will more plausibly trigger the thought-terminating function. Hostile epistemic manipulators can try to imitate, not just ease, but a full feeling of understanding. They can present the phenomena associated with a positive and rich experience of clarity.
In order to see how one might fake the feeling of understanding, let’s start by thinking about the nature of genuine understanding. For that, let’s turn to a recent discussion of the nature of understanding in the philosophy of science. According to a recent strand of thinking, knowledge isn’t actually the primary goal of much of our epistemic efforts. Knowledge is usually conceived of as something like the possession of true facts. Having knowledge, by the usual accounts, doesn’t require any particular integration of those facts. But many of our intellectual efforts are aimed at getting something more than just knowing some disparate facts. We aim at something more holistic: understanding. The precise nature of understanding is still under some debate, but we can extract some common and largely uncontroversial ideas. First, when we understand something, we not only possess a lot of independent facts, but we see how those facts connect. Understanding is of a system; it involves grasping a structure and not just independent nodes. Second, when we understand something, we possess some internal model or account of it which we can use to make predictions, conduct further investigations, and categorize new phenomena.
That is an account of what it means to actually have understanding. So what are the experiential phenomena associated with understanding? What does it feel like to understand something? There are several distinct phenomena to consider here. First, there are the experiences associated with coming to understand. As Catherine Elgin puts it, when we come to understand, our way of looking at things suddenly shifts to accommodate new information. Understanding, she says, ‘comes not through passively absorbing new information, but through incorporating it into a system of thought that is not, as it stands, quite ready to receive it’ (Elgin, 2002, p. 14). When we come to understand, our system of thought changes and pieces of information that we could not accommodate before suddenly find a place. Kvanvig offers a similar account: to understand, he says, is to grasp a coherence relationship. It is to be aware of how the information fits together (Kvanvig, 2003, p. 202). The experience of coming to understand, then, involves an experience of grasping a new and improved coherence. Let us call this the phenomenon of cognitive epiphany. And, as Gopnik points out, cognitive epiphanies are incredibly pleasurable.
Next, there are phenomena associated with having an understanding. Understanding involves a certain facility with the terrain. As Kvanvig puts it,
…To have mastered such explanatory relationships is valuable not only because it involves the finding of new truths but also because finding such relationships organizes and systematizes our thinking on a subject matter in a way beyond the mere addition of more true beliefs or even justified true beliefs. Such organization is pragmatically useful because it allows us to reason from one bit of information to another related information that is useful as a basis for action, where unorganized thinking provides no such basis for inference. Moreover, such organized elements of thought provide intrinsically satisfying closure to the process of inquiry, yielding a sense or feeling of completeness to our grasp of a particular subject matter. (p. 202)
When we understand a cognitive terrain, we can move between its nodes more quickly and easily. We can use our understanding to easily and powerfully generate relevant explanations. And if our understanding is fecund, these new explanations will serve to create even more useful connections. And, as Michael Strevens says, having an understanding also involves having the capacity to communicate that understanding – to explain how the connections work (Strevens, 2013). Let’s call all these the phenomena of cognitive facility. And, at least in my own experience, the pleasure of clarity lies not only in Gopnik’s moment of coming to understand, but also in the continuing joys of apparent facility and intellectual power. It feels incredibly good to be able to swiftly explain complex phenomena. It is the pleasure of engaging our skills and capacities to powerful effect.
Let’s enter into the mindset of the hostile epistemic manipulator. Our goal is to seduce with apparent clarity – to game other people’s cognitive processes and heuristics so that they will accept our preferred system of thought. We’ll want to engineer that system, then, to create the feeling of cognitive epiphany. We’ll want to maximize, for our system’s adopters, the sense that unexplained information is sliding into place, the feeling of newfound coherency. So we’ll want to give the system easy-to-apply categorizations which are readily connected into a coherent network. And, once that system has been adopted, we’ll want it to create the feeling of cognitive facility. We’ll want to engineer it so that, once somebody adopts the system, thinking in its terrain will seem distinctly easier and more effective than before. We’ll want it to give adopters a heightened sensation of forming connections and moving easily between them. We’ll want it to create the impression of explanatory power, quickly and easily explaining any new phenomena that come up. And we would want to do all that while simultaneously masking its epistemic faults.
This might seem like an overwhelmingly difficult task for the aspiring manipulator. We manipulators, however, have some very significant advantages. First, we don’t need to successfully imitate understanding all the way down. We simply need for our system to trigger the clarity heuristic early enough, before its adopters stumble across any of the flaws. If you’re building a Potemkin village, you don’t need to build any actual houses. You just need to build the facades – so long as those facades convince people not to try and enter the buildings. We manipulators, then, can hide our system’s weakness and inferior performance behind a veil of apparent clarity.
But our most significant advantage is that we are unburdened by the constraints of truth in engineering our extra-tasty system of thought. Epistemically sincere systems – that is, systems of thought generated for the sake of real knowledge and genuine understanding – are heavily constrained by their allegiance to getting things right. We manipulators are unbound by any such obligations. We are free to tweak our system to maximize its appealing clarity. This is similar, in a way, to how unhealthy restaurants are free to appeal more directly to our sense of deliciousness, because they are freed from considerations of health. (Or, at least, that’s how my mother saw it.) We manipulators, then, can optimize our system to offer the sense of easily made connections and explanations. We can build a cartoon of understanding. And that cartoon will have a competitive advantage in the cognitive marketplace. It can be engineered for the sake of pleasure, and it will carry with it a signal that inquiry is finished, and that we should look elsewhere.
6. Two systems of cognitive seduction
Let’s look at two case studies of the seductions of clarity: echo chambers and institutional quantification. The first case study of echo chambers will strike many, I suspect, as a plausible and familiar case of the seductions of clarity. The discussion of quantification may prove more surprising. And I hope that the differences between these two case studies will help us to home in on the phenomenon’s more general qualities.
Let’s start with echo chambers. Most social scientists and journalists use the terms ‘echo chamber’ and ‘epistemic bubble’ synonymously. But, as I’ve argued, if we look at the original sources of these terms, we find two very different phenomena. An epistemic bubble is a social phenomenon of simple omission. It’s bad connections in your information network – like if all your friends on Facebook share your politics, and you simply never run across the arguments presented by the other side. An echo chamber, on the other hand, is a social structure which discredits all outsiders. When you are in a bubble, you don’t hear the other side. When you’re in an echo chamber, you don’t trust the other side. Echo chambers don’t cut off lines of communication from the outside world; rather, they isolate their members by manipulating their members’ trust (Nguyen, 2018b).
What matters for the present study is the particular content of the systems of thought which echo chambers use to manipulate trust. I’m drawing here on Kathleen Hall Jamieson and Joseph Cappella’s empirical analysis of the echo chamber around Rush Limbaugh and the Fox News ecosystem (Jamieson and Cappella, 2010). According to Jamieson and Cappella, Rush Limbaugh offers a world-view with some very distinctive features. First, Limbaugh presents a world of sharply divided forces locked in a life-or-death struggle. There are no onlookers or reasonable moderates. Either you’re a Limbaugh follower – and so on the side of right – or you are one of the malevolent forces out to undermine the side of right. Limbaugh then offers an explanatory system in which most moral and political action can be understood in terms of that all-consuming struggle. Disagreement with Limbaugh’s world view can be readily explained as the product of some organized, malevolent action to block the side of right. Most importantly, for our present purposes, the undermining function and the explanatory function are often accomplished with the help of conspiracy theories, which provide a ready explanation for disagreement from outsiders. The liberal media is in the grip of a nefarious network of elites, as are the universities and the academic sciences. These conspiracy theories offer to explain complex features of the world in terms of a single coherent narrative.
This is an obvious deployment of the seductions of clarity. First, Limbaugh’s world-view offers the sensations of epiphany. Once his world-view is accepted, difficult-to-categorize actions suddenly become easily categorized. Previously hard-to-explain facts – like the existence of substantive moral disagreement between apparently sincere people – suddenly become easily explicable in terms of a secret war between good and evil. Second, the world-view offers the sensations of cognitive facility. The conspiracy theory offers a ready and neatly unified explanation for all sorts of behavior. And those explanations are easy to create. The world suddenly becomes more intellectually manageable. This is particularly vivid in some of the communities around the wilder conspiracy theories. CNN recently conducted some quite telling interviews with members of the fast-growing community of Flat Earth conspiracy theorists. Many theorists describe the satisfactions of being a Flat Earth theorist in terms of cognitive facility. As Flat Earth theorist and filmmaker Mark Sargent puts it, ‘You feel like you've got a better handle on life and the universe. It's now more manageable’. And Flat Earth theorist David Weiss says, ‘When you find out the Earth is flat… then you become empowered’ (Picheta, 2019).
Furthermore, well-designed echo chambers typically have systems of belief which can reinterpret incoming evidence in order to avoid refutation. For example, many echo chambers include sweeping scientific claims, such as denying the existence of climate change. Echo chamber members may have adopted belief systems with the help of the clarity heuristic. But, one might think, heuristics are defeasible – and contrary scientific evidence should surely bring members to abandon their settled acceptance of their belief system. However, a clever echo chamber can preemptively defuse such contrary evidence. A well-designed echo chamber can include, in its belief system, a conspiracy theory about how the media and the institutions of science are entirely corrupt and in the grip of a vast malicious conspiracy. This explanation performs a kind of intellectual judo. As Endre Begby (2020) points out, such a belief system transforms apparently contrary evidence into confirmations of the belief system – a process which he calls ‘evidential pre-emption’. If Limbaugh predicts that the liberal media will accuse him of falsifying information, then when his followers hear such accusations from the liberal media, they will have reason to increase their trust in Limbaugh – since his predictions have been fulfilled! But notice that there is a secondary effect, beyond the simple confirmation Begby describes – an effect that arises from the seductions of clarity. The belief system makes it easy to create an explanation for incoming contrary evidence and to provide explanations that unify and connect that event with many others. This provides an experience of cognitive facility – which should trigger the clarity heuristic. This is an extremely well-designed epistemic trap, in which contrary evidence triggers two different defense mechanisms. First, the conspiracy theory preemptively predicts the presence of contrary evidence, and so confirms itself in the process of dismissing that contrary evidence. Second, the ease with which the conspiracy theory performs that prediction and dismissal is an experience of cognitive facility – which creates the sense of clarity, which, in turn, triggers the thought-terminating heuristic.
Such defensive conspiracy theories are an obvious case of the seductive, manipulative use of clarity. Let’s now turn to a less obvious case. Consider the appeal of quantified systems. Consider, especially, the way in which large-scale institutions try to reduce complex, value-laden qualities to simple metrics and measures. In Trust in Numbers, a history of the culture of quantification, Theodore Porter notes that quantified systems are powerfully attractive. This is why, he says, politicians and bureaucrats love to cite the authority of quantified systems of analysis. Numbers, he says, smell of science. They have the ring of objectivity, and so they will be used in inappropriate circumstances in attempts to gain political control (Porter, 1996, p. 8). I think Porter is entirely right about the credibility advantage of numbers and their scientific feel – but I don’t think this is the whole story. The details of his study offer us the opportunity to build a second account of the appeal of numbers, alongside his credibility account, in terms of the seductions of clarity.
There are, says Porter, qualitative ways of knowing and quantitative ways of knowing. Porter is not here making the crude claim that quantitative ways of knowing are inherently bad. Rather, he is interested in the relative advantages and disadvantages of each way of knowing. Qualitative ways of knowing, he says, are typically nuanced, sensitive, and rich in contextual detail, but they are not portable or aggregable. When we transition from qualitative to quantitative ways of knowing, we strip out much of the nuance and many of the contextual details. In return for this loss of informational richness, we get to express our knowledge in neat packages: in the form of numbers, whose meanings are portable, and which can be easily aggregated with other numerical results. This can be very valuable. Obviously, quantification is vital for modern science. And there are many administrative functions which quantification makes far more efficient. But, says Porter, contemporary culture seems to have lost sight of the distinctive value of qualitative ways of knowing. We tend to reach for quantitative ways of knowing compulsively, even when they aren’t the most appropriate for the task at hand.
In The Seductions of Quantification, Merry applies Porter’s analysis to the recent rise of quantified metrics in international governance. She is interested in indicators – simple, quantified representations of complex global phenomena. One indicator is the UN’s Human Development Index, which gives countries a single score for their performance in supporting the quality of life of their citizens. Another indicator is the US State Department’s Trafficking in Persons Report, which gives countries a score on their performance in reducing sex trafficking. Indicators present themselves in the form of a single, easy-to-use, easy-to-understand numerical score. These indicators, she says, hide the complexity and subjectivity of their manufacture. And that concealment is much of the point. Their power, says Merry, comes in significant part from their appearance of unambiguity. And once these indicators have been manufactured, they invariably become central in various governments’ and politicians’ decision-making processes. The very qualities which make them so powerful also make them blunt instruments, missing much subtlety and detail. But, says Merry, they are incredibly hard to dislodge from the minds of the public and of policy-makers (Merry, 2016, pp. 1–43, 112–60).
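To see how much gets stripped away, consider a minimal sketch of the kind of aggregation behind an indicator like the HDI. The geometric-mean structure follows the UN's published post-2010 methodology, but the goalpost values and example inputs here are illustrative rather than official figures:

```python
import math

def dimension_index(value, lo, hi):
    """Normalize a raw value onto [0, 1] between fixed 'goalposts'."""
    return (value - lo) / (hi - lo)

def toy_hdi(life_expectancy, mean_schooling, expected_schooling, gni_per_capita):
    """Rough sketch of the HDI: three dimension indices, combined by a
    geometric mean. All nuance about *how* people live is already gone."""
    health = dimension_index(life_expectancy, 20, 85)
    education = (dimension_index(mean_schooling, 0, 15)
                 + dimension_index(expected_schooling, 0, 18)) / 2
    income = dimension_index(math.log(gni_per_capita),
                             math.log(100), math.log(75_000))
    return (health * education * income) ** (1 / 3)

print(round(toy_hdi(78, 12, 16, 45_000), 3))  # one number stands in for a society
```

Everything contested – which dimensions count, where the goalposts sit, why a geometric mean – is buried inside the function; what travels is the single clean number at the end.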
Why are quantifications so sticky? The seductions of clarity offer an explanation. Quantified systems are, by design, highly usable and easily manipulable. They provide a powerful experience of cognitive facility. It is much easier to do things with grades and rubrics than it is with qualitative descriptions. We can offer justifications (‘I averaged it according to the syllabus’ directives’; ‘I applied the rubric’). We can generate graphs and quantified summaries. And the sense of facility is even stronger in large-scale institutions, where the use of numbers has been stringently regularized. Because of the portability of numbers and the constancy and enforced regularity of typical institutional deliberation procedures, it is vastly easier inside such institutions to use numbers to produce powerful and effective communications. And they are communications in terms which we know will be understood and acted upon – because the meanings and uses of these institutional terms have been so aggressively regularized.
In a university for which I once worked, all departments had to produce yearly assessment data which was supposed to demonstrate, in quantitative form, the quality of education that our students had received. Our assessment results had to be coded according to certain institutionally specified Educational Learning Outcomes (ELOs). So, the fact that our students scored well this year on their critical thinking multiple choice tests gets coded and entered into the system. Those scores now support our claim that a particular class succeeds in supporting certain university-wide learning outcomes: the Critical Thinking ELO, the Writing Skills ELO, the Moral Reflection ELO, and the Mathematical Reasoning ELO. And the data for each particular class, in turn, is used to support the claim that our department as a whole supports the university-wide learning outcomes. And that claim, in turn, is used to support the claim that the University is succeeding in its mission, and achieving its stated Core Values: like Communication, Community, and Engagement. And the way in which class, departmental, and university ELOs link up is coded explicitly into our databasing system, so that new data can travel automatically up the chain. When I enter the latest batch of scores for my students, it produces an immediate effect in the system: all the reported ELOs up the chain will change. And this is possible precisely because the data I’ve entered has been rendered portable and because our outcomes reporting system has been set up to automatically take advantage of that portability.
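Here is a hypothetical sketch of that roll-up structure; the node names, the unweighted averaging, and the scores are all invented for illustration, not a description of any actual assessment database:

```python
class OutcomeNode:
    """One level in the reporting hierarchy: a class, a department, a university."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # e.g. a department's classes
        self.scores = []                # scores entered directly at this node

    def reported_elo(self):
        """Average over this node's own entries and every child's report.
        New data 'travels up the chain' simply because each ancestor
        recomputes over its descendants."""
        values = list(self.scores) + [c.reported_elo() for c in self.children]
        return sum(values) / len(values)

my_class = OutcomeNode('PHIL 101')
department = OutcomeNode('Philosophy Dept', [my_class])
university = OutcomeNode('University Critical Thinking ELO', [department])

my_class.scores.append(0.82)      # I enter this year's test results...
print(university.reported_elo())  # ...and the university-level number changes at once
```

The sketch makes vivid why the experience is so frictionless: every entry is already in the system's terms, so nothing needs interpreting on the way up.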
Notice that all this gives me the experience of an enormous amount of apparently effective cognitive and communicative activity. I have a sense of grasping connections. I can see exactly how my class’s ELOs support my department’s ELOs, which in turn support my college’s ELOs, which in turn support the university’s ELOs and, in turn, the University Core Values. And my grasp of this system can give me a certain sense of cognitive facility. I can easily generate explanations of course content and generate evidence of teaching success. And I can know that they will be understood, since they have been expressed in the pre-prepared, standardized, and explicitly interconnected language of the institution. I know that my justifications will be incorporated into larger institutional aggregates, because my justifications occur in those intentionally stabilized terms. And I know that when I give justifications in those designated terms, they will usually generate pre-specified sorts of actions – ones which I can usually predict with some success. A stabilized, explicit system of quantified and systemized institutional value is designed so that its users can make themselves easily understood and their pronouncements quickly integrated into institutional systems of information processing and decision-making. In short, by using the provided terms of institutional discourse inside the institution, my speech and thinking will seem clear, precisely because they fit so well into a pre-established network of communication and justification. That pre-engineered fit creates a sense of cognitive facility, with all its associated pleasures. And the ring of clarity can trigger the thought-terminating heuristic in others who have also bought into the provided system of institutional discourse – ending inquiry into the apparently clear claim.
Of course, I’ll have genuine cognitive facility if my various mental efforts actually track real elements in the world and process them in some epistemically valuable way. And, as Charles Perrow and Paul du Gay have argued, bureaucracies certainly need regular methods and quantified systems in order to function and to administrate fairly (du Gay, 2000; Perrow, 2014). The worry, though, is that we might set up systems that are useful for certain very specific data-collection and managerial functions – but that can also exert a magnetic pull on our thinking in nearby domains. For example: GPAs and citation rates might be useful for certain particular tasks of bureaucratic administration. But, because they are so seductive, students and scholars may start using them as the primary lens through which they evaluate their own education and output. And surely GPAs are not perfect indicators of a good education, and citation rates are not perfect indicators of good scholarship. A particular quantification can get an excess grip on our reasoning, even in contexts where it is less appropriate, by presenting an appealing sense of clarity. And we will fail to investigate whether this quantified metric is the most appropriate form of evaluation to use, precisely because its clarity terminates our investigations into its appropriateness.
So far, we’ve been concentrating on systems of thought whose contents themselves are seductively clear. But the seductions of clarity can also affect our judgments of the expertise and authority of the sources of those contents. The seductions of clarity can get us to accept a system by making its users and authors seem more credible or expert, precisely because they seem more clear. Recall that one of the standard signals of expertise is communicative facility. Non-experts trust purported experts when those experts are able to communicate their understanding – when the purported experts can explain to their audiences the connections between nodes, generate justifications, and the like. But consider what happens to the appearance of communicative facility inside a bureaucratized system of educational assessment. Those users willing to express themselves in the designated terms of that system have a considerable advantage in displaying communicative facility. They can easily generate justifications. They can easily make their reasons and requests understood and acted upon in institutional settings. They will seem clear because their communication will be readily taken up and acted upon. Their apparent facility will seem especially impressive to outsiders, who are out of contact with the subtler values involved with education. This is, obviously, a form of epistemic injustice (Fricker, 2007). Here, it is a form of epistemic injustice which gives a significant credibility advantage to anybody willing to speak in the terms provided by bureaucracies and institutions, which provide regularized systems of justification and languages of evaluation. And since the ability to create and disseminate such systems is usually held by those already in power, the bureaucratization of language will typically serve to amplify power differentials by granting more credibility to those who accept those bureaucratic terms of discourse.
To put it in Kristie Dotson’s terms, epistemic oppression occurs when agents are denied the opportunity to use shared epistemic resources to participate in knowledge production (Dotson, 2014). Bureaucratic and institutionalized language can enable a particular kind of epistemic oppression. Ideas that can be easily expressed in the institutional language are readily entered into the shared knowledge base. But the standardization of language puts a special oppressive power in the hands of whoever creates the standardization. Once the standardization is in place and widely accepted, anybody who uses it will demonstrate both cognitive and communicative facility. They will seem clear precisely because they are using language for which a system of reception has been pre-prepared.
The sense of clarity is a terminator for inquiry, and ideas expressed in that regularized institutional language will bear that sense of clarity. So ideas expressed in that language are more likely to be accepted without question. Information that isn’t placed into institutional language, on the other hand, will tend to disappear. Such recalcitrant expressions will be less likely to be accepted, transmitted, and remembered within the system. At the very least, since they seem confusing rather than clear, those recalcitrant expressions will be subject to constant questioning and inquiry, rather than quickly accepted. In a standardized system, non-standardized information will be subject to incredible friction. This creates a further competitive disadvantage. By the very fact that such information transmits slowly and poorly, the information and its authors will seem to have less communicative facility and so seem less credible. Those whose ideas don’t fit comfortably into the regularized institutional language are at a significant disadvantage in participating in the production and dissemination of knowledge.
7. Nuance and closure
The point here is not to claim that quantified systems and conspiracy theories are always bad. Science and bureaucracy need quantification, and we certainly should accept conspiracy theories when there are actually conspiracies. The point is, rather, that these sorts of ideas and methodologies are among the choicest tools for epistemic subversion. A ruthless epistemic manipulator, freed from the constraints of genuine inquiry, can reformulate these sorts of systems to maximize their potential for seductiveness.
And this also offers us insights into unintentional cognitive seduction. Bureaucracies and institutions have very good reason to develop internally consistent and quantified systems of evaluation. Such systems make the administration of complex organizations possible. But insofar as such systems share a significant number of traits and effects with systems made for intentional manipulation – and especially insofar as such systems persist because of their seductive effects – then such systems also function as seductively clear.
This suggests another reason to resist the seductions of clarity. Sometimes, we need to dwell in unclear systems of thought because we have not yet earned the right to clarity. In her study of metaphors, Elisabeth Camp (2006) suggests that metaphors are most appropriate when we are still in the process of coming to understand. Metaphors are unclear by design. They are, says Camp, a special way of pointing to the world. We define simple nouns through simpler forms of pointing. ‘Red’ we define as looking like that. Metaphors let us point with a rough, waving gesture.
The reason we might want to do so, says Camp, is that such pointing lets us access the richness of the world in our talk. When I say, ‘I don’t understand what’s going on with Robert very much, but his neurosis seems a lot like Liza’s,’ I’m not using some well-defined abstract predicate to describe Robert. I am pointing to Liza and to all the rich features of reality that are bound up with her. I am saying that I don’t know what it is about Liza that matters, exactly, but it’s something over there, where ‘there’ is a gesture in the direction of all the richness of Liza’s actual self. And this sort of vague gesture is especially useful, says Camp, when we are trying to grapple with things we do not yet adequately understand. With metaphors, she says, we are gesturing vaguely at parts of the world.
Intentionally and openly vague forms of communication are very important. They remind us that our thinking – our concepts, our inquiries, our understanding – is not yet finished. Clarity is compelling, but signals us to end our inquiries. Seductively clear systems mask the fact that we should, in fact, be confused, and should be pressing on with our inquiries. They present themselves as finalized. On the other hand, metaphors and their kin wear their unfinishedness plainly on their faces. They are hard to use, and that difficulty reminds us that there is more work to be done. They leave the basement door open, so we know there is more to explore down there. When clarity seduces, it can prevent us from pushing on, from finding and dwelling on our confusions. Seductive clarity presents us with a false floor for our investigations into the world.
How do we resist the seductions of clarity? One possible defensive strategy is to develop new counter-heuristics, designed to sniff out the seductive manipulation of our original heuristics. Here’s a rough analogy: a certain kind of culinary yumminess was once a decent heuristic for nutritious eating. But our nutritive environment changed, especially when various corporate forces figured out our heuristics and tendencies and started to aggressively game them. In response, we have had to adapt our heuristics. We have needed to become suspicious of too much yumminess. Many of us have already trained ourselves to notice when things are just a little too delicious. The crunchy, sweet, salty stuff that hits us just so – we have learned to taste in them the engineer’s manipulative touch. We have developed an intuitive feel for designed craveability. This is a counter-heuristic, designed to trigger in response to signals that outside forces are trying to manipulate our more primitive heuristics. Sweetness, crunchiness, saltiness – our counter-heuristic makes us immediately suspicious when we find these in plenty.
In fighting the seductions of clarity, we need to develop new counter-heuristics in a similar key. The sense of clarity is something like cognitive sugar. Once upon a time, using our sense of clarity as a signal to terminate our inquiries might have been a good and useful heuristic. But now we live in an environment where we are surrounded by seductive clarity, much of it designed to exploit our heuristics. We now need to train ourselves to become suspicious of ideas and systems that go down just a little too sweetly – that are pleasurable and effortless and explain everything so wonderfully. Systems of thought that feel too clear should make us step up our investigative efforts instead of ending them. We need to learn to recognize, by feel, the seductions of clarity.