
The Seductions of Clarity

Published online by Cambridge University Press:  24 May 2021

C. Thi Nguyen*
Affiliation:
University of Utah

Abstract

The feeling of clarity can be dangerously seductive. It is the feeling associated with understanding things. And we use that feeling, in the rough-and-tumble of daily life, as a signal that we have investigated a matter sufficiently. The sense of clarity functions as a thought-terminating heuristic. In that case, our use of clarity creates significant cognitive vulnerability, which hostile forces can try to exploit. If an epistemic manipulator can imbue a belief system with an exaggerated sense of clarity, then they can induce us to terminate our inquiries too early – before we spot the flaws in the system. How might the sense of clarity be faked? Let’s first consider the object of imitation: genuine understanding. Genuine understanding grants cognitive facility. When we understand something, we categorize its aspects more easily; we see more connections between its disparate elements; we can generate new explanations; and we can communicate our understanding. In order to encourage us to accept a system of thought, then, an epistemic manipulator will want the system to provide its users with an exaggerated sensation of cognitive facility. The system should provide its users with the feeling that they can easily and powerfully create categorizations, generate explanations, and communicate their understanding. And manipulators have a significant advantage in imbuing their systems with a pleasurable sense of clarity, since they are freed from the burdens of accuracy and reliability. I offer two case studies of seductively clear systems: conspiracy theories; and the standardized, quantified value systems of bureaucracies.

Type
Papers
Copyright
Copyright © The Royal Institute of Philosophy and the contributors 2021

1. Introduction

Here is a worrying possibility: there is a significant gap between our feeling that something is clear and our actually understanding it. The sense of clarity can be a marker of cognitive success, but it can also be seductive. Oversimplifications slip easily into our minds and insinuate themselves into our deliberative processes.

In that case, the sense of clarity might be intentionally exaggerated for exploitative ends. Outside forces, with an interest in manipulating our beliefs and actions, can make use of clarity’s appeal. Seduction, after all, often involves a seducer. Romantic seduction, in its more malicious form, involves manipulating the appearances of intimacy and romance in order to subvert the aims of the seduced. There is an analogous form of cognitive seduction, where hostile forces play with the signals and appearances of clarity in order to lead our thinking astray.

The sense of clarity is a potent focal point for manipulation because of its crucial role in managing our cognitive resources. After all, we only have so much mental energy to go around; we need to prioritize our inquiries. In particular, we need some way to estimate that we’ve probably thought enough on some matter for the moment – that it’s probably safe to move on to more pressing matters, even if we haven’t gotten to the absolute rock bottom of things. Our sense of clarity, and its absence, plays a key role in our cognitive self-regulation. A sense of confusion is a signal that we need to think more. But when things feel clear to us, we are satisfied. A sense of clarity is a signal that we have, for the moment, thought enough. It is an imperfect signal, but it is one we often actually use in the quick-and-dirty of everyday practical deliberation. This suggests why manipulative interests might be particularly interested in aping clarity. If the sense of clarity is a thought-terminator, then successful imitations of clarity will be quite powerful. If somebody else can stimulate our sense of clarity, then they can gain control of a particular cognitive blind spot. They can hide their machinations behind a veil of apparent clarity.

Here’s another way to put it: the moment when we come to understand often has a particular feel to it – what some philosophers have called the ‘a-ha!’ moment. The moment when we come to understand, says Alison Gopnik, is something like an intellectual orgasm (Gopnik, 1998). And, as Jonathan Kvanvig suggests, it is our internal sense of understanding – our sense of ‘a-ha!’ and ‘Eureka!’ – that provides a sense of closure to an investigation (Kvanvig, 2011, p. 88). The ‘a-ha’ feeling is both pleasurable and a signal that a matter has been investigated enough. If, then, hostile forces can learn to simulate that ‘a-ha’ feeling, then they have a very powerful weapon for epistemic manipulation.

I offer two sustained case studies of cognitive subversion through the seductions of clarity. First, I will look at the sorts of belief systems often promulgated by moral and political echo chambers, which offer simplistic pictures of a world full of hostile forces and conspiracies. Such belief systems can create an exaggerated sense of clarity, in which every event can be easily explained and every action easily categorized. Second, I will look at the seductive clarity of quantification. I borrow my use of ‘seduction’ from Sally Engle Merry’s The Seductions of Quantification (2016), a study of how global institutions deploy metrics and indicators in the service of political influence. Merry focuses on the generation of indicators and metrics on the global stage, such as the Human Development Index, which attempts to sum up the quality of life across each country’s entire citizenry in a single, numerical score. The HDI then compiles these scores to offer a single, apparently authoritative ranking of all countries by their quality of life. Such systems of quantification can offer an exaggerated sense of clarity without an accompanying amount of understanding or knowledge. Their cognitive appeal can outstrip their cognitive value.

It is striking how quantified presentations of value seem to have a profound cognitive stickiness. The motivational draw of quantified values has been well-documented across many terrains (Porter, 1996; Merry, 2016; Espeland and Sauder, 2016). This motivational power is why so many companies and governments have become interested in the technologies of gamification. Gamification attempts to incorporate the mechanics of games – points, experience points, and leveling up – into non-game activities, in order to transform apparently ‘boring’ activities such as work and education into something more engaging, compelling, and addictive (McGonigal, 2011; Walz et al., 2015; Lupton, 2016). I am worried, however, that gamification might increase motivation, but only at the cost of changing our goals in problematic ways. After all, step counts are not the same as health, and citation rates are not the same as wisdom (Nguyen, 2020, pp. 189–215; forthcoming). The seductions of clarity are, I believe, one important mechanism through which gamification works.

Let me be clear: the present inquiry is not a study in ideal rationality, nor is it a study of epistemic vice and carelessness. It is a study in the vulnerabilities of limited, constrained cognitive agents, and how environmental features might exploit those vulnerabilities. It is a foray into what we might call hostile epistemology. Hostile epistemology includes the intentional efforts of epistemic manipulators, working to exploit those vulnerabilities for their own ends. We might call the study of these intentional epistemic hostilities combat epistemology. Hostile epistemology also includes the study of environmental features which present a danger to those vulnerabilities, made without hostile epistemic intent. Hostile environments, after all, don’t always arise from hostile intent. Hostile environments include intentionally placed minefields, but also crumbling ruins, the deep sea, and Mars. An epistemically hostile environment contains features which, whether by accident, evolution, or design, attack our vulnerabilities.

I will focus for the early parts of this paper on cases of combat epistemology. I think this is the easiest place to see how certain sorts of systems have a hostile epistemic function. The cases of intentionally manufactured hostile environments will then help us to recognize cases of the unintentional formation of hostile epistemic environments. Hostile epistemic environments can arise from entirely well-intentioned, and even successful, pursuits of other purposes. A culinarily extraordinary pastry shop also presents an environment hostile to my attempts at healthy eating. In many bureaucratic cases, as we will see, systems of quantification often arise for very good reason: to efficiently manage large and complex institutional data-sets, or to increase accountability (Scott, 1998; Perrow, 2014). But these very design features also make them into epistemically hostile environments. Because of the magnetic motivational pull of quantification, the very features which render them good for efficient administration also function to imbue them with seductive clarity.Footnote 1

Other recent inquiries into hostile epistemology include discussions of epistemic injustice, propaganda, echo chambers, fake news, and more (Fricker, 2007; Medina, 2012; Dotson, 2014; Stanley, 2016; Rini, 2017; Nguyen, 2018b). Importantly, the study of hostile epistemology is distinct from the study of epistemic vice. The study of the epistemic vices – such as closed-mindedness, gullibility, active ignorance, and cynicism – is a study of epistemically problematic character traits. It is the study of failings in the epistemic agents themselves (Sullivan and Tuana, 2007; Proctor and Schiebinger, 2008; Cassam, 2016; Battaly, 2018). Hostile epistemology, on the other hand, is the study of how external features might subvert the efforts of epistemic agents. Of course, vice and hostility are often entangled. Hostile environments press on our vices and make it easier for us to fall more deeply into them. But vice and hostility represent two different potential loci of responsibility for epistemic failure.

This all might just seem like common sense. Of course people are drawn to oversimplifications; what’s new in that? But there are important questions here, about why we’re drawn to oversimplification and how culpable we are for giving in to it. Importantly, many theorists treat our interest in oversimplification as straightforwardly irrational. In the psychological and social sciences, the appeal of oversimplification is usually explained as a mistake which can be understood in terms of individual psychological tendencies, such as motivated reasoning or the undue influence of the emotions. We accept oversimplifications, it is thought, because they make us feel smug, they comfort us, or they reinforce our sense of tribal identity (Kahan and Braman, 2006; Sunstein, 2017). Similarly, many philosophical accounts treat our susceptibility to oversimplification as a problem arising wholly from an individual’s own personal failures of character – from their epistemic vices. Quassim Cassam, for example, tells the story of Oliver the conspiracy theorist, who believes that 9/11 was an inside job. Says Cassam, there isn’t a good rational explanation for Oliver’s beliefs. The best explanation is a failure of intellectual character. Oliver, says Cassam, is gullible and cynical; he lacks discernment (Cassam, 2016, pp. 162–63).

I will present a picture that is far more sympathetic to the seduced. It is a picture in which exaggerated clarity plays upon specific structural weaknesses in our cognition. As cognitively limited beings, we need to rely on various heuristics, signals, and short-cuts to manage the cognitive barrage. But these strategies also leave us vulnerable to exploitation. Seductive clarity takes advantage of our cognitive vulnerabilities, which arise, in turn, from our perfectly reasonable attempts to cope with the world using our severely limited cognitive resources. And, certainly, the pull of seductive clarity will be worse if we give in to various epistemic vices. And, certainly, once we realize all this, we will want to act more vigorously to secure the vulnerable backdoors to our cognition. The general point, however, is that giving in to the seductions of clarity isn’t just some brute error, or the result of sheer laziness and epistemic negligence. Rather, it is driven, in significant degree, by systems and environments which function to exploit the cognitive vulnerabilities generated by the coping strategies of cognitively finite beings.

2. Clarity as thought-terminator

I have been speaking loosely so far; let me now stipulate some terminology. On the one hand, there are epistemically positive states: knowledge, understanding, and the like. On the other hand, there are the phenomenal states that are connected to those epistemically positive states. These are the experiences of being in an epistemically positive state – like the sense of understanding, the feeling of clarity. Loosely: understanding is our successful grasp of parts of the world and their relationships, and the sense of clarity is the phenomenal state associated with understanding. For brevity’s sake, let me use the terms ‘clarity’ and ‘the sense of clarity’ interchangeably, to refer to the phenomenal experience associated with understanding. I do not mean to be using ‘clarity’ in the Cartesian sense, where it is a perfect guarantee of knowledge. Clarity, in my usage, is merely an impression of a certain kind of cognitive success – what J.D. Trout has called the sense of understanding (Trout, 2002). Clarity may often accompany genuine understanding, but it is by no means a perfect indicator that we do, in fact, genuinely understand. So external forces can exploit the gap between genuine understanding and the feeling of understanding – that sense of clarity.

There are two general strategies for epistemic manipulation. There is epistemic intimidation: the strategy of trying to get an epistemic agent to accept something by making them afraid or uncomfortable to think otherwise. There is also epistemic seduction: the strategy of manipulating positive cognitive signals to get an epistemic agent to accept something. The manipulation of clarity is a form of epistemic seduction. It is the attempt to use our own cognitive processes against us, whispering pleasantly all the while.

How might clarity seduce? There are many potential pathways. For one thing, clarity seduces because it is pleasurable. But for the remainder of this discussion, I’ll focus on another, even more dangerous feature: that the sense of clarity can bring us to end our inquiries into a topic too early. This possibility arises because of the profoundly quick-and-dirty nature of daily decision-making. We are finite beings with limited cognitive resources.Footnote 2 In daily life, we need to figure out what to do: where to spend our money, who to vote for, which candidate to back. We face a constant barrage of potentially relevant information, evidence, and argument – far more than we could assess in any conclusive manner. So we need to figure out the best way to allocate our cognitive resources while leaving most of our investigations unfinished, in some cosmic sense.

When practically reasoning about the messy complexities of the real world, we are unlikely to arrive at any conclusive ground-floor, where we can know with any certainty that we’re done.Footnote 3 So, for everyday practical deliberation, we need some method for determining that we’ve thought enough.Footnote 4 And that basis often needs to be fast and loose, to cope with the fast and loose manner of everyday practical deliberation. We need some basis for estimating that our understanding is probably good enough, so that we can make a decision and move on. We need something like a heuristic for terminating thought.

Here, then, is the ruling supposition for my inquiry: the sense of clarity is one of the signals we typically use to allocate our cognitive resources. (I do not claim that it is the only signal, though I do claim it is a significant one.) We often use our sense of confusion as a signal that we need to keep investigating, and our sense of clarity as a signal that we’ve thought enough.Footnote 5 Our sense of clarity is a signal that we can terminate an investigation. When a system of thought seems clear to us, then we have a heuristic reason to stop inquiring into it.Footnote 6
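The supposition can be given a quasi-algorithmic rendering. Below is a minimal toy model in Python – every name, number, and threshold in it is my own illustrative stipulation, not an empirical claim – of an inquiry loop that uses felt clarity as its stopping rule, and of what happens when a manipulator inflates that feeling:

    import random

    def felt_clarity(understanding, engineered_boost):
        # A cheap, fallible signal of understanding. It loosely tracks
        # genuine understanding, but can be inflated by a system
        # engineered to *feel* clear -- the manipulator's lever.
        noise = random.uniform(-0.05, 0.05)
        return min(1.0, understanding + engineered_boost + noise)

    def inquire(engineered_boost=0.0, threshold=0.8, budget=50):
        # Keep investigating until the topic feels clear or resources
        # run out. Returns the genuine understanding reached at the
        # moment inquiry terminates.
        understanding = 0.0
        for step in range(budget):
            if felt_clarity(understanding, engineered_boost) >= threshold:
                return understanding, step   # clarity terminates thought
            understanding += 0.02            # slow, genuine progress
        return understanding, budget         # out of cognitive budget

    random.seed(0)
    print(inquire(engineered_boost=0.0))   # honest system: stops late
    print(inquire(engineered_boost=0.6))   # seductively clear: stops early

With no engineered boost, the loop terminates only once genuine understanding approaches the threshold; with the boost, it terminates almost immediately, with little genuine understanding in hand. That early termination is the vulnerability at issue.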

I’m not claiming that this heuristic is a necessary part of all practical reasoning – only that the heuristic is currently in common usage. After all, heuristics are usually contingent tendencies and not necessary parts of our cognitive architecture. In fact, some research suggests that we can slowly change the heuristics we use (Reber and Unkelbach, 2010).

Here’s my plan. First, we’ll start to think about how powerful it would be if this supposition were true, and there were such a pleasurable and thought-terminating heuristic. I’ll look at some evidence from the empirical literature on cognitive heuristics that supports something in the vicinity of my supposition. I’ll show how the supposition, which concerns how we use our feeling of understanding, emerges from a recent discussion in the philosophy of science about the nature of genuine understanding. Then, I’ll use the supposition to think about what sorts of systems and environments might successfully exploit the sense of clarity. I’ll dig into some historical and sociological literature on echo chambers and on the social effects of simplistic quantification. The supposition will turn out to provide a unifying explanation for many of the documented effects of echo chambers and quantification. My argument in favor of the supposition, then, will be that it provides a unifying explanation for various observations from cognitive science, sociology, and history, while integrating neatly with a standard account of the nature of understanding. But this mode of argumentation can only render the supposition a plausible hypothesis; more empirical investigation is certainly called for.

3. Clarity as vulnerability

Suppose, then, that the sense of clarity plays a crucial role in the regulation of our cognitive resources, functioning as a signal that we can safely terminate a particular line of inquiry. Obviously, the sense of clarity can come apart from actual full understanding.Footnote 7 It must, in order for it to play a heuristic role in quick-and-dirty daily deliberation.Footnote 8 In order to know that we fully understood something, we would need to conduct an exhaustive and thorough investigation. The sense of clarity is far more accessible to us, so we can use it to make rough estimates about whether we’ve inquired enough.

If a hostile force could ape such clarity, then they would have a potent tool for getting us to accept their preferred systems of thought. This is because false clarity would provide an excellent cover for intellectual malfeasance. A sense of clarity could bring us to terminate our inquiry into something before we could discover its flaws. It would be something like an invisibility cloak – one that works by manipulating our attention. Our attention, after all, is narrow. We barely notice what's outside the focused spotlight of our attention. We can make something effectively disappear simply by directing attention elsewhere.Footnote 9 One way to make something cognitively invisible, then, is by making it signal unimportance. The spy novelist John le Carré – who had actually worked in British intelligence – describes, in his novel Tinker Tailor Soldier Spy, what a genuinely effective spy looks like. They aren't dashing and handsome, like some James Bond figure. An effective spy presents as entirely normal, bland, and dull. They can disappear because they have learned to magnify the signals of boringness. Similarly, the techniques of stage magic involve attentional misdirection. Stage magicians learn to signal boringness with the active hand while directing signals of interestingness elsewhere, in order to control their audience’s attention. The sense of clarity can be deployed in an analogous strategy of attentional misdirection. An epistemic manipulator who wants us to accept some system of thought should imbue that system with a sense of clarity, so that cognitive resources will be less likely to be directed towards it. The strategy will be even more effective if they simultaneously imbue some other target with a sense of confusion. The confusing object seizes our attention by signaling that we need to investigate it, which makes it easier for the clear-seeming system to recede into the shadows. The manipulator can thus gain control of their targets’ attention by manipulating their targets’ priority queue for investigation.

Thus, hostile forces can manipulate the cognitive architecture of resource-management in order to bypass the safeguards provided by the various processes of cognitive inquiry. In the movies, the crooks are always hacking the system which controls the security cameras. Epistemic criminals will want to hack the cognitive equivalent.

4. Ease and fluency

The experience of clarity is complex and its phenomenal markers many. Let’s start with a case study in one small and simple aspect of clarity – one which has been relatively well-studied in the psychological sciences. Consider the experience of cognitive ease – the relative degree to which it is easy to think about something. In the literature on cognitive heuristics, cognitive ease is part of the study of ‘cognitive fluency’, which is the ‘subjective experience of ease or difficulty with which we are able to process information’ (Oppenheimer, 2008, p. 237). Research has demonstrated that we do, in fact, often use fluency as a cognitive heuristic. If we comprehend an idea easily, we will be more likely to accept it. Cognitive difficulty, on the other hand, makes it more likely that we will reject an idea. This heuristic is not entirely unreasonable: we often experience cognitive ease in a domain precisely because we have a lot of experience with it. Cognitive ease often correlates with experience, which correlates with skill and accuracy. But, obviously, ease is separable from accuracy. Studies have demonstrated that one’s mere familiarity with an idea makes one more likely to accept it. Familiarity creates a sense of cognitive ease, but without the need for any relevant skill or expertise. Studies have also shown that we are more likely to believe something written in a more legible font. Legibility leads to easier processing, which leads to readier acceptance. In other words: we are using our cognitive ease with some proposition or domain as a heuristic for our accuracy with that proposition or domain. Rolf Reber and Christian Unkelbach have argued that fluency heuristics are, in fact, often quite useful. Through a Bayesian analysis, they conclude that fluency is a good heuristic when the user’s environment contains more true propositions than false ones – and the better the ratio of true to false propositions in their environment, the better the fluency heuristic will work (Reber and Unkelbach, 2010). But that heuristic can be gamed.Footnote 10
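The Bayesian point can be made concrete with a small worked computation. The likelihoods below are illustrative stipulations of my own (fluency is assumed to be somewhat more common for true propositions than for false ones); the sketch simply applies Bayes’ theorem to show how the heuristic’s reliability depends on the environment’s ratio of true to false propositions:

    def p_true_given_fluent(base_rate, p_fluent_if_true=0.8, p_fluent_if_false=0.4):
        # Bayes' theorem: how reliable is 'it feels fluent, so it's true'?
        # base_rate is the share of true propositions in the environment.
        numerator = p_fluent_if_true * base_rate
        return numerator / (numerator + p_fluent_if_false * (1 - base_rate))

    for base_rate in (0.9, 0.5, 0.2):
        print(f"{base_rate:.0%} true -> P(true | fluent) = "
              f"{p_true_given_fluent(base_rate):.2f}")
    # 90% true -> P(true | fluent) = 0.95
    # 50% true -> P(true | fluent) = 0.67
    # 20% true -> P(true | fluent) = 0.33

In a mostly honest environment the heuristic is excellent; in an environment flooded with falsehoods engineered to feel familiar, the very same heuristic becomes a liability. That is exactly the gameability at issue in what follows.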

Suppose that the usual fluency heuristic is in place. How might it be exploited? To game the fluency heuristic, a manipulator would want to offer their targets ideas expressed in some familiar manner, by using well-worn patterns of thought and forms of expression. This exploitative methodology should be quite familiar: it explains the rhetorical power of cliched slogans and Internet memes.

Suppose that the world has many such epistemic manipulators in it, and has become chock full of misleading ideas that have been engineered to seem familiar. Our best strategy to avoid manipulation would be to update our heuristics to close off this cognitive backdoor. As Reber and Unkelbach showed, we are capable of changing and updating our heuristics when we receive evidence that they have led us astray. The manipulators, then, would want to mask from us any evidence that our use of the fluency heuristic was leading us astray. This is, however, easier to do in some domains than others. Some epistemic domains have obvious litmus tests. It is easy to check for mistaken reasoning in them because successes and failures are obvious to any onlooker. For example, we can tell that our theory of bridge-building has gone wrong if our new bridges keep falling down. But other epistemic domains have no such easy litmus tests – like the moral and aesthetic domains. If one’s reasoning has been systematically subverted in such a subtle domain, there is no obvious error result that could function as a check.Footnote 11 So if manipulators wanted to gain control via the fluency heuristic, one good strategy would be to perform their fluency-manipulations over, say, claims about morality and value. Alternatively, they may want to devote their fluency-manipulations to complex and diffuse social phenomena or more esoteric scientific phenomena. Some empirical claims cannot be straightforwardly checked by the layperson, such as scientific arguments for climate change or sociological claims about how oppression is perpetuated. If the manipulators’ targets have been given a seductively clear explanation which dismisses, say, sociologists and climate change scientists as corrupt, that explanation will be quite hard to dislodge. Most targets will be unable to see that they have been led astray, and so won’t update their heuristics (Nguyen, 2018b; 2018c).

5. Aping understanding

Perhaps it seems implausible to you that somebody would terminate a really important inquiry just because of fluency. There is, however, another much more sophisticated form of epistemic seduction which will more plausibly trigger the thought-terminating function. Hostile epistemic manipulators can try to imitate, not just ease, but a full feeling of understanding. They can present the phenomena associated with a positive and rich experience of clarity.

In order to see how one might fake the feeling of understanding, let’s start by thinking about the nature of genuine understanding. For that, let’s turn to a recent discussion of the nature of understanding in the philosophy of science. According to a recent strand of thinking, knowledge isn’t actually the primary goal of much of our epistemic efforts. Knowledge is usually conceived of as something like the possession of true facts. Having knowledge, by the usual accounts, doesn’t require any particular integration of those facts. But many of our intellectual efforts are aimed at getting something more than just knowing some disparate facts. We aim at something more holistic: understanding. The precise nature of understanding is still under some debate, but we can extract some common and largely uncontroversial ideas.Footnote 12 First, when we understand something, we not only possess a lot of independent facts, but we see how those facts connect. Understanding is of a system; it involves grasping a structure and not just independent nodes. Second, when we understand something, we possess some internal model or account of it which we can use to make predictions, conduct further investigations, and categorize new phenomena.Footnote 13

That is an account of what it means to actually have understanding. So what are the experiential phenomena associated with understanding? What does it feel like to understand something? There are several distinct phenomena to consider here. First, there are the experiences associated with coming to understand. As Catherine Elgin puts it, when we come to understand, our way of looking at things suddenly shifts to accommodate new information. Understanding, she says, ‘comes not through passively absorbing new information, but through incorporating it into a system of thought that is not, as it stands, quite ready to receive it’ (Elgin, 2002, p. 14). When we come to understand, our system of thought changes and pieces of information that we could not accommodate before suddenly find a place. Kvanvig offers a similar account: to understand, he says, is to grasp a coherence relationship. It is to be aware of how the information fits together (Kvanvig, 2003, p. 202). The experience of coming to understand, then, involves an experience of grasping a new and improved coherence. Let us call this the phenomenon of cognitive epiphany. And, as Gopnik points out, cognitive epiphanies are incredibly pleasurable.

Next, there are phenomena associated with having an understanding. Understanding involves a certain facility with the terrain. As Kvanvig puts it,

…To have mastered such explanatory relationships is valuable not only because it involves the finding of new truths but also because finding such relationships organizes and systematizes our thinking on a subject matter in a way beyond the mere addition of more true beliefs or even justified true beliefs. Such organization is pragmatically useful because it allows us to reason from one bit of information to another related information that is useful as a basis for action, where unorganized thinking provides no such basis for inference. Moreover, such organized elements of thought provide intrinsically satisfying closure to the process of inquiry, yielding a sense or feeling of completeness to our grasp of a particular subject matter. (p. 202)

When we understand a cognitive terrain, we can move between its nodes more quickly and easily. We can use our understanding to easily and powerfully generate relevant explanations. And if our understanding is fecund, these new explanations will serve to create even more useful connections. And, as Michael Strevens says, having an understanding also involves having the capacity to communicate that understanding – to explain how the connections work (Strevens, 2013). Let’s call all these the phenomena of cognitive facility.Footnote 14 And, at least in my own experience, the pleasure of clarity lies not only in Gopnik’s moment of coming to understand, but also in the continuing joys of apparent facility and intellectual power. It feels incredibly good to be able to swiftly explain complex phenomena. It is the pleasure of engaging our skills and capacities to powerful effect.Footnote 15

Let’s enter into the mindset of the hostile epistemic manipulator. Our goal is to seduce with apparent clarity – to game other people’s cognitive processes and heuristics so that they will accept our preferred system of thought. We’ll want to engineer that system, then, to create the feeling of cognitive epiphany. We’ll want to maximize, for our system’s adopters, the sense that unexplained information is sliding into place, the feeling of newfound coherency. So we’ll want to give the system easy-to-apply categorizations which are readily connected into a coherent network. And, once that system has been adopted, we’ll want it to create the feeling of cognitive facility. We’ll want to engineer it so that, once somebody adopts the system, thinking in its terrain will seem distinctly easier and more effective than before. We’ll want it to give adopters a heightened sensation of forming connections and moving easily between them. We’ll want it to create the impression of explanatory power, quickly and easily explaining any new phenomena that come up. And we would want to do all that while simultaneously masking its epistemic faults.

This might seem like an overwhelmingly difficult task for the aspiring manipulator. We manipulators, however, have some very significant advantages. First, we don’t need to successfully imitate understanding all the way down. We simply need for our system to trigger the clarity heuristic early enough, before its adopters stumble across any of the flaws. If you’re building a Potemkin village, you don’t need to build any actual houses. You just need to build the facades – so long as those facades convince people not to try and enter the buildings. We manipulators, then, can hide our system’s weakness and inferior performance behind a veil of apparent clarity.Footnote 16

But our most significant advantage is that we are unburdened by the constraints of truth in engineering our extra-tasty system of thought. Epistemically sincere systems – that is, systems of thought generated for the sake of real knowledge and genuine understanding – are heavily constrained by their allegiance to getting things right.Footnote 17 We manipulators are unbound by any such obligations. We are free to tweak our system to maximize its appealing clarity. This is similar, in a way, to how unhealthy restaurants are free to appeal more directly to our sense of deliciousness, because they are freed from considerations of health. (Or, at least, that’s how my mother saw it.) We manipulators, then, can optimize our system to offer the sense of easily made connection and explanations. We can build a cartoon of understanding. And that cartoon will have a competitive advantage in the cognitive marketplace. It can be engineered for the sake of pleasure, and it will carry with it a signal that inquiry is finished, and that we should look elsewhere.

6. Two systems of cognitive seduction

Let’s look at two case studies of the seductions of clarity: echo chambers and institutional quantification. The first case study of echo chambers will strike many, I suspect, as a plausible and familiar case of the seductions of clarity. The discussion of quantification may prove more surprising. And I hope that the differences between these two case studies will help us to home in on the phenomenon’s more general qualities.

Let’s start with echo chambers. Most social scientists and journalists use the terms ‘echo chamber’ and ‘epistemic bubble’ synonymously. But, as I’ve argued, if we look at the original sources of these terms, we find two very different phenomena. An epistemic bubble is a social phenomenon of simple omission. It’s bad connections in your information network – like if all your friends on Facebook share your politics, and you simply never run across the arguments presented by the other side. An echo chamber, on the other hand, is a social structure which discredits all outsiders. When you are in a bubble, you don’t hear the other side. When you’re in an echo chamber, you don’t trust the other side. Echo chambers don’t cut off lines of communication from the outside world; rather, they isolate their members by manipulating their members’ trust (Nguyen, 2018b).

What matters for the present study is the particular content of the systems of thought which echo chambers use to manipulate trust. I’m drawing here on Kathleen Hall Jamieson and Joseph Cappella’s empirical analysis of the echo chamber around Rush Limbaugh and the Fox News ecosystem (Jamieson and Cappella, 2010). According to Jamieson and Cappella, Rush Limbaugh offers a world-view with some very distinctive features. First, Limbaugh presents a world of sharply divided forces locked in a life-or-death struggle. There are no onlookers or reasonable moderates. Either you’re a Limbaugh follower – and so on the side of right – or you are one of the malevolent forces out to undermine the side of right. Limbaugh then offers an explanatory system in which most moral and political action can be understood in terms of that all-consuming struggle. Disagreement with Limbaugh’s world view can be readily explained as the product of some organized, malevolent action to block the side of right. Most importantly, for our present purposes, the undermining function and the explanatory function are often accomplished with the help of conspiracy theories, which provide a ready explanation for disagreement from outsiders. The liberal media is in the grip of a nefarious network of elites, as are universities and the academic sciences. These conspiracy theories offer to explain complex features of the world in terms of a single coherent narrative.

This is an obvious deployment of the seductions of clarity. First, Limbaugh’s world-view offers the sensations of epiphany. Once his world-view is accepted, difficult-to-categorize actions suddenly become easily categorized. Previously hard-to-explain facts – like the existence of substantive moral disagreement between apparently sincere people – suddenly become easily explicable in terms of a secret war between good and evil. Second, the world-view offers the sensations of cognitive facility. The conspiracy theory offers a ready and neatly unified explanation for all sorts of behavior. And those explanations are easy to create. The world suddenly becomes more intellectually manageable. This is particularly vivid in some of the communities around the wilder conspiracy theories. CNN recently conducted some quite telling interviews with members of the fast-growing community of Flat Earth conspiracy theorists. Many theorists describe the satisfactions of being a Flat Earth theorist in terms of cognitive facility. As Flat Earth theorist and filmmaker Mark Sargent puts it, ‘You feel like you've got a better handle on life and the universe. It's now more manageable’. And Flat Earth theorist David Weiss says, ‘When you find out the Earth is flat… then you become empowered’ (Picheta, 2019).

Furthermore, well-designed echo chambers typically have systems of belief which can reinterpret incoming evidence in order to avoid refutation. For example, many echo chambers include sweeping scientific claims, such as denying the existence of climate change. Echo chamber members may have adopted belief systems with the help of the clarity heuristic. But, one might think, heuristics are defeasible – and contrary scientific evidence should surely bring members to abandon their settled acceptance of their belief system. However, a clever echo chamber can preemptively defuse such contrary evidence. A well-designed echo chamber can include, in its belief system, a conspiracy theory about how the media and the institutions of science are entirely corrupt and in the grip of a vast malicious conspiracy. This explanation performs a kind of intellectual judo. As Endre Begby (2020) points out, such a belief system transforms apparently contrary evidence into confirmations of the belief system – a process which he calls ‘evidential pre-emption’. If Limbaugh predicts that the liberal media will accuse him of falsifying information, then when his followers hear such accusations from the liberal media, they will have reason to increase their trust in Limbaugh – since his predictions have been fulfilled! But notice that there is a secondary effect, beyond the simple confirmation Begby describes – an effect that arises from the seductions of clarity. The belief system makes it easy to create an explanation for incoming contrary evidence and to provide explanations that unify and connect that event with many others. This provides an experience of cognitive facility – which should trigger the clarity heuristic. This is an extremely well-designed epistemic trap, in which contrary evidence triggers two different defense mechanisms. First, the conspiracy theory preemptively predicts the presence of contrary evidence, and so confirms itself in the process of dismissing that contrary evidence. Second, the ease with which the conspiracy theory performs that prediction and dismissal is an experience of cognitive facility – which creates the sense of clarity, which, in turn, triggers the thought-terminating heuristic.
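The structure of this trap can be rendered in a toy Bayesian sketch. All the numbers below are stipulations meant to model a follower who has already accepted the system’s framing – who treats the accusation as strongly predicted by the conspiracy theory, and as less likely otherwise. Under those assignments, the arrival of the contrary evidence mechanically raises confidence in the theory:

    def posterior(prior, p_e_if_theory, p_e_if_not):
        # Bayes' theorem for P(theory | accusation observed).
        num = p_e_if_theory * prior
        return num / (num + p_e_if_not * (1 - prior))

    prior = 0.6               # follower's initial confidence in the theory
    p_accuse_if_theory = 0.95 # the accusation was explicitly predicted
    p_accuse_if_not = 0.50    # follower's (manipulated) estimate of how
                              # likely the accusation would be otherwise

    print(round(posterior(prior, p_accuse_if_theory, p_accuse_if_not), 2))
    # 0.74 -- the accusation *raises* confidence from 0.60

The trick lies in the likelihood assignments: an honest press would also accuse a genuine falsifier, so the second likelihood ought to be high. The pre-emption works by getting followers to treat the accusation primarily as a fulfilled prediction, and only secondarily, if at all, as what honest accusers would also say.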

Such defensive conspiracy theories are an obvious case of the seductive, manipulative use of clarity. Let’s now turn to a less obvious case. Consider the appeal of quantified systems. Consider, especially, the way in which large-scale institutions try to reduce complex, value-laden qualities to simple metrics and measures. In Trust in Numbers, a history of the culture of quantification, Theodore Porter notes that quantified systems are powerfully attractive. This is why, he says, politicians and bureaucrats love to cite the authority of quantified systems of analysis. Numbers, he says, smell of science. They have the ring of objectivity, and so they will be used in inappropriate circumstances in attempts to gain political control (Porter, 1996, p. 8). I think Porter is entirely right about the credibility advantage of numbers and their scientific feel – but I don’t think this is the whole story. The details of his study offer us the opportunity to build a second account of the appeal of numbers, alongside his credibility account, in terms of the seductions of clarity.

There are, says Porter, qualitative ways of knowing and quantitative ways of knowing. Porter is not here making the crude claim that quantitative ways of knowing are inherently bad. Rather, he is interested in the relative advantages and disadvantages of each way of knowing. Qualitative ways of knowing, he says, are typically nuanced, sensitive, and rich in contextual detail, but they are not portable or aggregable. When we transition from qualitative to quantitative ways of knowing, we strip out much of the nuance and many of the contextual details. In return for this loss of informational richness, we get to express our knowledge in neat packages: in the form of numbers, whose meanings are portable, and which can be easily aggregated with other numerical results. This can be very valuable. Obviously, quantification is vital for modern science. And there are many administrative functions which quantification makes far more efficient. But, says Porter, contemporary culture seems to have lost sight of the distinctive value of qualitative ways of knowing. We tend to reach for quantitative ways of knowing compulsively, even when they aren’t most appropriate for the task at hand.

In The Seductions of Quantification, Merry applies Porter’s analysis to the recent rise of quantified metrics in international governance. She is interested in indicators – simple, quantified representations of complex global phenomena. One indicator is the UN’s Human Development Index, which gives countries a single score for their performance in supporting the quality of life of their citizens. Another indicator is the US State Department’s Trafficking in Persons Reports, which gives countries a score on their performance in reducing sex trafficking. Indicators present themselves in the form of a single, easy-to-use, easy-to-understand numerical score. These indicators, she says, hide the complexity and subjectivity of their manufacture. And that concealment is much of the point. Their power, says Merry, comes in significant part from their appearance of unambiguity. And once these indicators have been manufactured, they invariably become central in various governments’ and politicians’ decision-making processes. The very qualities which make them so powerful also make them blunt instruments, missing much subtlety and detail. But, says Merry, they are incredibly hard to dislodge from the minds of the public and of policy-makers (Merry, 2016, pp. 1–43, 112–60).
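A toy computation can make vivid how much manufacture hides inside a single indicator. The sketch below builds a crude composite index for three fictional countries from made-up sub-scores; it does not reflect the actual construction of the HDI or the Trafficking in Persons Reports. Shifting between two equally defensible weightings reorders the ranking entirely, yet each run still emits one clean, authoritative-looking result:

    # Fictional countries and made-up sub-scores, each normalized to 0-1.
    countries = {
        "Alandia":  {"health": 0.95, "education": 0.90, "income": 0.30},
        "Borduria": {"health": 0.55, "education": 0.60, "income": 0.95},
        "Cestonia": {"health": 0.72, "education": 0.72, "income": 0.72},
    }

    def composite(scores, weights):
        # Collapse multi-dimensional quality of life into one number.
        return sum(scores[k] * w for k, w in weights.items())

    for weights in ({"health": 1/3, "education": 1/3, "income": 1/3},
                    {"health": 0.2, "education": 0.2, "income": 0.6}):
        ranking = sorted(countries, reverse=True,
                         key=lambda c: composite(countries[c], weights))
        print(ranking)
    # ['Cestonia', 'Alandia', 'Borduria']  (equal weights)
    # ['Borduria', 'Cestonia', 'Alandia']  (income-weighted)

Each weighting encodes a substantive value judgment about what matters to a good life, but the judgment disappears into the output: the consumer of the indicator sees only the final, unambiguous-looking ranking.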

Why are quantifications so sticky? The seductions of clarity offer an explanation. Quantified systems are, by design, highly usable and easily manipulable. They provide a powerful experience of cognitive facility. It is much easier to do things with grades and rubrics than it is with qualitative descriptions. We can offer justifications (‘I averaged it according to the syllabus’s directives’; ‘I applied the rubric’). We can generate graphs and quantified summaries. And the sense of facility is even stronger in large-scale institutions, where the use of numbers has been stringently regularized. Because of the portability of numbers and the constancy and enforced regularity of typical institutional deliberation procedures, inside such institutions it is vastly easier to use numbers to produce powerful and effective communications. And they are communications in terms which we know will be understood and acted upon – because the meanings and uses of these institutional terms have been so aggressively regularized.

In a university for which I once worked, all departments had to produce yearly assessment data which was supposed to demonstrate, in quantitative form, the quality of education that our students had received. Our assessment results had to be coded according to certain institutionally specified Educational Learning Outcomes (ELOs). So, the fact that our students scored well this year on their critical thinking multiple choice tests gets coded and entered into the system. Those scores now support our claim that a particular class succeeds in supporting certain university-wide learning outcomes: the Critical Thinking ELO, the Writing Skills ELO, the Moral Reflection ELO and the Mathematical Reasoning ELO. And the data for each particular class, in turn, is used to support the claim that our department as a whole supports the university-wide learning outcomes. And that claim, in turn, is used to support the claim that the University is succeeding in its mission, and achieving its stated Core Values: like Communication, Community, and Engagement. And the way in which class, departmental, and university ELOs link up is coded explicitly into our databasing system, so that new data can travel automatically up the chain. When I enter the latest batch of scores for my students, it produces an immediate effect in the system: all the reported ELOs up the chain will change. And this is possible precisely because the data I’ve entered has been rendered portable and because our outcomes reporting system has been set up to automatically take advantage of that portability.
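To see how such a system produces its sensation of effortless connection, here is a minimal sketch of the kind of outcomes database just described. Every name in it – the course, the ELOs, the Core Value, the linkages – is invented for illustration; the point is just that once the links are coded explicitly, one data entry propagates up the entire chain automatically:

    # Hypothetical linkage: which higher-level outcome each outcome supports.
    SUPPORTS = {
        "PHIL101: Critical Thinking": "Dept: Critical Thinking ELO",
        "Dept: Critical Thinking ELO": "University: Critical Thinking ELO",
        "University: Critical Thinking ELO": "Core Value: Communication",
    }

    scores = {outcome: [] for outcome in set(SUPPORTS) | set(SUPPORTS.values())}

    def report(outcome, score):
        # Enter one score; the pre-coded linkage carries it up the chain.
        while outcome is not None:
            scores[outcome].append(score)
            outcome = SUPPORTS.get(outcome)

    report("PHIL101: Critical Thinking", 0.85)  # one instructor, one entry...
    for outcome, vals in sorted(scores.items()):
        print(outcome, "->", vals)              # ...updates every level at once

The pre-coded portability does all the work: because the system of reception is already in place, every entry is guaranteed uptake all the way to the top. That guaranteed uptake is the experience described next.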

Notice that all this gives me the experience of an enormous amount of apparently effective cognitive and communicative activity. I have a sense of grasping connections. I can see exactly how my class’s ELOs support my department’s ELOs, which in turn support my college’s ELOs, which in turn support the university’s ELOs and, in turn, the University Core Values. And my grasp of this system can give me a certain sense of cognitive facility. I can easily generate explanations of course content and generate evidence of teaching success. And I can know that they will be understood, since they have been expressed in the pre-prepared, standardized, and explicitly interconnected language of the institution. I know that my justifications will be incorporated into larger institutional aggregates, because my justifications occur in those intentionally stabilized terms. And I know that when I give justifications in those designated terms, they will usually generate pre-specified sorts of actions – ones which I can usually predict with some success. A stabilized, explicit system of quantified and systemized institutional value is designed so that its users can make themselves easily understood and their pronouncements quickly integrated into institutional systems of information processing and decision-making. In short, by using the provided terms of institutional discourse inside the institution, my speech and thinking will seem clear, precisely because they fit so well into a pre-established network of communication and justification. That pre-engineered fit creates a sense of cognitive facility, with all its associated pleasures. And the ring of clarity can trigger the thought-terminating heuristic in others who have also bought into the provided system of institutional discourse – ending inquiry into the apparently clear claim.

Of course, I’ll have genuine cognitive facility if my various mental efforts actually track real elements in the world and process them in some epistemically valuable way. And, as Charles Perrow and Paul du Gay have argued, bureaucracies certainly need regular methods and quantified systems in order to function and to administrate fairly (du Gay, 2000; Perrow, 2014). The worry, though, is that we might set up systems that are useful for certain very specific data-collection and managerial functions – but that can also exert a magnetic pull on our thinking in nearby domains. For example: GPAs and citation rates might be useful for certain particular tasks of bureaucratic administration. But, because they are so seductive, students and scholars may start using them as the primary lens through which they evaluate their own education and output.Footnote 18 And surely GPAs are not perfect indicators of a good education, and citation rates are not perfect indicators of good scholarship. A particular quantification can get an excess grip on our reasoning, even in contexts where it is less appropriate, by presenting an appealing sense of clarity. And we will fail to investigate whether this quantified metric is the most appropriate form of evaluation to use, precisely because its clarity terminates our investigations into its appropriateness.

So far, we’ve been concentrating on systems of thought whose contents themselves are seductively clear. But the seductions of clarity can also affect our judgments of the expertise and authority of the sources of those contents. The seductions of clarity can get us to accept a system by making its users and authors seem more credible or expert, precisely because they seem more clear. Recall that one of the standard signals of expertise is communicative facility. Non-experts trust purported experts when those experts are able to communicate their understanding – when the purported experts can explain to their audiences the connections between nodes, generate justifications, and the like. But consider what happens to the appearance of communicative facility inside a bureaucratized system of educational assessment. Those users willing to express themselves in the designated terms of that system have a considerable advantage in displaying communicative facility. They can easily generate justifications. They can easily make their reasons and requests understood and acted upon in institutional settings. They will seem clear because their communication will be readily taken up and acted upon. Their apparent facility will seem especially impressive to outsiders, who are out of contact with the subtler values involved with education. This is, obviously, a form of epistemic injustice (Fricker, 2007). Here, it is a form of epistemic injustice which gives a significant credibility advantage to anybody willing to speak in the terms provided by bureaucracies and institutions, which provide regularized systems of justification and languages of evaluation. And since the ability to create and disseminate such systems is usually held by those already in power, the bureaucratization of language will typically serve to amplify power differentials by granting more credibility to those who accept those bureaucratic terms of discourse.

To put it in Kristie Dotson’s terms, epistemic oppression occurs when agents are denied the opportunity to use shared epistemic resources to participate in knowledge production (Dotson, 2014). Bureaucratic and institutionalized language can enable a particular kind of epistemic oppression. Ideas that can be easily expressed in the institutional language are readily entered into the shared knowledge base. But the standardization of language puts a special oppressive power in the hands of whoever creates the standardization. Once the standardization is in place and widely accepted, anybody who uses it will demonstrate both cognitive and communicative facility. They will seem clear precisely because they are using language for which a system of reception has been pre-prepared.

The sense of clarity is a terminator for inquiry, and ideas expressed in that regularized institutional language will bear that sense of clarity. So ideas expressed in that language are more likely to be accepted without question. Information that isn’t placed into institutional language, on the other hand, will tend to disappear. Such recalcitrant expressions will be less likely to be accepted, transmitted and remembered within the system. At the very least, since they seem confusing rather than clear, those recalcitrant expressions will be subject to constant questioning and inquiry, rather than quickly accepted. In a standardized system, non-standardized information will be subject to incredible friction. This creates a further competitive disadvantage. By the very fact that such information transmits slowly and poorly, the information and its authors will seem to have less communicative facility and so seem less credible. Those whose ideas don’t fit comfortably into the regularized institutional language are at a significant disadvantage in participating in the production and dissemination of knowledge.

7. Nuance and closure

The point here is not to claim that quantified systems and conspiracy theories are always bad. Science and bureaucracy need quantification, and we certainly should accept conspiracy theories when there are actually conspiracies.Footnote 19 The point is, rather, that these sorts of ideas and methodologies are among the choicest tools for epistemic subversion. A ruthless epistemic manipulator, freed from the constraints of genuine inquiry, can re-formulate these sorts of systems to maximize their potential for seductiveness.

And this also offers us insights into unintentional cognitive seduction. Bureaucracies and institutions have very good reason to develop internally consistent and quantified systems of evaluation. Such systems make the administration of complex organizations possible. But insofar as such systems share a significant number of traits and effects with those systems made for intentional manipulation – and especially insofar as such systems perpetuate themselves because of their seductive effects – then such systems also function as seductively clear.

This suggests another reason to resist the seductions of clarity. Sometimes, we need to dwell in unclear systems of thought because we have not yet earned the right to clarity. In her study of metaphors, Elizabeth Camp (2006) suggests that metaphors are most appropriate when we are still in the process of coming to understand. Metaphors are unclear by design. They are, says Camp, a special way of pointing to the world. We define simple nouns through simpler forms of pointing. ‘Red’ we define as looking like that. Metaphors let us point with a rough, waving gesture.

The reason we might want to do so, says Camp, is that such pointing lets us access the richness of the world in our talk. When I say, ‘I don’t understand what’s going on with Robert very much, but his neurosis seems a lot like Liza’s,’ I’m not using some well-defined abstract predicate to describe Robert. I am pointing to Liza and to all the rich features of reality that are bound up with her. I am saying that I don’t know what it is about Liza that matters, exactly, but it’s something over there, where ‘there’ is a gesture in the direction of all the richness of Liza’s actual self. And this sort of vague gesture is especially useful, says Camp, when we are trying to grapple with things we do not yet adequately understand. With metaphors, she says, we are gesturing vaguely at parts of the world.

Intentionally and openly vague forms of communication are very important. They remind us that our thinking – our concepts, our inquiries, our understanding – is not yet finished. Clarity is compelling, but signals us to end our inquiries. Seductively clear systems mask the fact that we should, in fact, be confused, and should be pressing on with our inquiries. They present themselves as finalized. On the other hand, metaphors and their kin wear their unfinishedness plainly on their faces. They are hard to use, and that difficulty reminds us that there is more work to be done. They leave the basement door open, so we know there is more to explore down there. When clarity seduces, it can prevent us from pushing on, from finding and dwelling on our confusions. Seductive clarity presents us with a false floor for our investigations into the world.

How do we resist the seductions of clarity? One possible defensive strategy is to develop new counter-heuristics, designed to sniff out the seductive manipulation of our original heuristics. Here’s a rough analogy: a certain kind of culinary yumminess was once a decent heuristic for nutritious eating. But our nutritive environment changed, especially when various corporate forces figured out our heuristics and tendencies and started to aggressively game them. In response, we have had to adapt our heuristics. We have needed to become suspicious of too much yumminess. Many of us have already trained ourselves to notice when things are just a little too delicious. The crunchy, sweet, salty stuff that hits us just so – we have learned to taste in it the engineer’s manipulative touch. We have developed an intuitive feel for designed craveability. This is a counter-heuristic, designed to trigger in response to signals that outside forces are trying to manipulate our more primitive heuristics. Sweetness, crunchiness, saltiness – our counter-heuristic makes us immediately suspicious when we find these in plenty.

In fighting the seductions of clarity, we need to develop new counter-heuristics in a similar key. The sense of clarity is something like cognitive sugar. Once upon a time, using our sense of clarity as a signal to terminate our inquiries might have been a good and useful heuristic. But now we live in an environment where we are surrounded by seductive clarity, much of it designed to exploit our heuristics. We now need to train ourselves to become suspicious of ideas and systems that go down just a little too sweetly – that are pleasurable and effortless and explain everything so wonderfully. Systems of thought that feel too clear should make us step up our investigative efforts instead of ending them. We need to learn to recognize, by feel, the seductions of clarity.[20]

Footnotes

1 I am influenced here by A.W. Eaton's discussion of artifact function, which draws on and develops Ruth Millikan's notion of function (Millikan, 1984; Eaton, 2020). Eaton argues that the intent of an artifact's designer does not determine that artifact's function. She suggests a more evolutionary model: an artifact may be unintentionally imbued with a trait, but insofar as that trait is selectively reproduced in future artifacts, its effect is part of those artifacts’ function. So, if a bureaucracy generates a quantified metric for accounting purposes, but that quantified metric survives and is reproduced in further bureaucratic systems because of its seductive effect, then the seductiveness is part of those systems’ function.

2 Two particularly relevant discussions of cognitive limitation and epistemology are Wimsatt (2007) and Dallmann (2017).

3 As Elijah Millgram puts it, practical reasoning doesn't result in settled arguments to finalized conclusions. Practical reasoning produces only tentative conclusions. Practical conclusions are always open to defeat from unexpected angles, and new forms of defeat may always surprise us (Millgram, 1997). The closest we can get to conclusiveness is to think that a certain piece of practical reasoning seems good enough, so far as we can tell. And even if you reject Millgram's view and believe that there are firm practical conclusions that we might eventually reach – surely, finding such firm conclusions is well beyond the reach of most human-scale practical deliberation in everyday circumstances.

4 Very little has been written on how we decide to end our inquiries in practical deliberation. And much of that work has focused, not on fast-and-loose daily heuristics for terminating inquiry, but on when we can conclusively terminate inquiry. See, for example, Alan Millar and Jonathan Kvanvig's debate about whether we merely need knowledge to conclusively terminate inquiry, or whether we need to reflectively know that we know (Millar, 2011; Kvanvig, 2011). J.D. Trout argues that the ‘sense of understanding’ – that ‘a-ha’ feeling – is not of particular use in the sciences because it is quite vulnerable to cognitive biases and other corrupting psychological influences. In Trout's terms, the mere sense of understanding doesn't grant us what we really want in science, which is good explanations. We have other ways of recognizing good explanations, far more accurate than mere internal feelings: we know we have a good scientific explanation when our scientific model makes good predictions. We should, says Trout, therefore largely ignore the various internal signals of understanding, which will simply lead us astray. We should, instead, remain firmly fixed on the evidence that our scientific model provides good explanations, as measured by the usual scientific methods: prediction, testing, and the like (Trout, 2002; 2017). Notice, however, that this sort of approach imagines the relevant epistemic agents to be cognitively ideal beings with essentially unlimited resources. It then asks how such beings should go about getting things right once and for all. And that might be the right idealization for thinking about how we should pursue long-term epistemic projects as parts of intergenerational communities, as we do in philosophy and science. But things look very different for cognitively limited beings in the quick-and-dirty of day-to-day decision-making. Sometimes we might be able to adopt some methodology with a pre-established threshold for terminating thought. Consider, for example, the cognitive strategy of satisficing: taking the first solution which crosses some pre-established minimal threshold (Simon, 1956). But what do we do when we aren't satisficing? In many cases, our investigations are more open-ended, without any sort of pre-established minimal threshold. For those sorts of investigations, we need some heuristic basis for attentional management.
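For concreteness, here is a minimal sketch of the satisficing strategy in Python: take the first option whose score crosses a pre-established threshold. The option list, the scoring function, and the threshold are hypothetical illustrations of Simon's idea, not anything drawn from his text.

    def satisfice(options, score, threshold):
        # Return the first option whose score meets the threshold.
        for option in options:
            if score(option) >= threshold:
                return option  # inquiry ends here, by a rule fixed in advance
        return None  # nothing was good enough; inquiry stays open

    # Hypothetical usage: take the first restaurant rated at least 4.0.
    restaurants = [("Cafe A", 3.2), ("Cafe B", 4.1), ("Cafe C", 4.8)]
    choice = satisfice(restaurants, score=lambda r: r[1], threshold=4.0)
    # -> ("Cafe B", 4.1), even though Cafe C scores higher

The point of the sketch is that the stopping rule is set before the search begins; open-ended inquiry, by contrast, has no such pre-fixed threshold.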

5 My discussion here heavily borrows structural features from Elijah Millgram's discussion of the function of boredom and interest in practical reason and agency. Millgram argues that a sense of interest is our signal that our values are good ones for us to have, and a sense of boredom is our signal that our values are bad for us to have, so we should change them (Millgram, 2004).

6 As far as I know, Justin Dallmann offers the only contemporary account of how our cognitive limitations force us to manage our efforts of inquiry. The best procedure for coping with cognitive limitation, he says, is to set up a priority queue. We assign priority levels to our various outstanding investigations, and then we proceed in order from highest priority to lowest (Dallmann, 2017). But what basis do we have for assigning priority levels? To put my suggestion into Dallmann's terms, we need some heuristic for quickly estimating priorities, and our sense of clarity functions as a heuristic basis for assigning a low priority to a line of investigation. A sense of clarity can thus terminate a line of inquiry – not conclusively, but by lowering its priority below the barrage of other, more pressing matters.
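To make the suggestion concrete, here is a minimal sketch of a Dallmann-style priority queue in Python, with felt clarity acting as a discount on an inquiry's priority. The numerical ‘urgency’ and ‘felt clarity’ scores, and the particular discounting rule, are my own illustrative assumptions, not part of Dallmann's account.

    import heapq

    def order_inquiries(inquiries):
        # inquiries: list of (topic, urgency, felt_clarity), scores in [0, 1].
        # Felt clarity discounts urgency, so clear-seeming topics sink in the queue.
        heap = []
        for topic, urgency, felt_clarity in inquiries:
            priority = urgency * (1.0 - felt_clarity)
            # heapq is a min-heap, so negate to pop the highest priority first.
            heapq.heappush(heap, (-priority, topic))
        return [heapq.heappop(heap)[1] for _ in range(len(heap))]

    # A seductively clear system sinks to the bottom of the queue,
    # even when the stakes are high.
    order_inquiries([
        ("office rankings metric", 0.9, 0.95),  # high stakes, feels perfectly clear
        ("colleague's vague worry", 0.5, 0.2),  # feels confusing
        ("weekend plans", 0.3, 0.5),
    ])
    # -> ["colleague's vague worry", "weekend plans", "office rankings metric"]

On this picture, an exaggerated sense of clarity never refutes a line of inquiry; it just quietly starves it of attention.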

7 For an in-depth discussion of this point, see Trout's discussion of the gap between the sense of understanding in science and actually possessing genuine understanding (Trout, 2002). There is a useful further discussion in Grimm (2012, pp. 106–109), which defends Trout's claims against Linda Zagzebski's claim that we always know when we understand (Zagzebski, 2001, p. 247). See also Strevens (2013).

8 I am drawing here from the cognitive science literature on heuristics. Key relevant moments in that literature include Gigerenzer and Goldstein (1996) and Kahneman (2013).

9 The locus of the modern discussion of this sort of attentional blindness is Christopher Chabris and Daniel Simons's influential experiments – including, famously, an experiment in which half of the study subjects, instructed to perform a relatively simple counting task, failed to notice a person in a gorilla suit walking across the room and pounding their chest (Chabris and Simons, 2011).

10 Trout makes a similar point about fluency and the sense of understanding (Trout, 2017), although his concern is largely with attacking other accounts of understanding, not with providing a full picture of exploitation. I take myself to be filling in the details of his suggestion.

11 For an extensive discussion of litmus tests and expert-vetting, see Nguyen (2018a).

12 Much of the debate in that literature has turned on what is constitutive of understanding, and what is merely typically associated with understanding. For example, according to Stephen Grimm and Henk de Regt, the skill of practical application is partially constitutive of understanding (Grimm, 2006; de Regt, 2009; Wilkenfeld, 2013; 2017). Michael Strevens, on the other hand, denies this constitutive relationship; skill typically follows from understanding, but isn't constitutive of it (Strevens, 2013). Note that we don't need to resolve debates like this for the current inquiry. Since we're interested in what signs are associated with understanding, we don't really need to distinguish carefully between what is constitutive of understanding and what follows from it. Finally, Kareem Khalifa has argued that these accounts of understanding can be reduced to the idea of knowing an explanation (Khalifa, 2012). My account here should be compatible with Khalifa's view – though, in his language, I would be talking about faking the feel of knowing an explanation.

13 This discussion draws on a fast-growing literature. I am particularly influenced by Catherine Elgin's account, Stephen Grimm's useful survey, and Michael Strevens's and Michael Patrick Lynch's discussions (Elgin, 2002; 2017; Grimm, 2012; Strevens, 2013; Lynch, 2018).

14 I owe my framing to Laura Callahan's (2018, p. 442) useful discussion of understanding.

15 For more on the aesthetic pleasure of one's own skillful action, see Nguyen (2020, pp. 101–120).

16 This strategy exploits a cognitive error of over-weighting early evidence. For a discussion of why this is a cognitive error, see Kelly (2008). For an application of that discussion to conspiracy theories and echo chambers, see Nguyen (2018b).

17 Elgin (2017) defends the use of idealizations and non-truths as parts of the models that help us to understand. However, the choice of models is still driven by an orientation towards getting the world right, in a more holistic way.

18 I offer a fuller discussion of how simplified and quantified systems of value can give their adopters the game-like pleasures of value clarity in Nguyen (2020).

19 There is a very useful discussion of the occasional usefulness of conspiracy theories in Coady (2012, pp. 110–37) and in Dentith (2018; 2019).

20 I'd like to thank, for all their help with this paper, Andrew Buskell, Josh DiPaolo, A.W. Eaton, Caitlin Dolan, Jon Ellis, Melinda Fagan, Keren Gorodeisky, Arata Hamakawa, Rob Hopkins, Jenny Judge, Samantha Matherne, Jay Miller, Stephanie Patridge, Antonia Peacocke, Geoff Pynn, Nick Riggle, David Spurrett, Madelaine Ransom, Jonah Schupbach, Tim Sundell, and Matt Strohl.

References

Battaly, Heather, ‘Closed-Mindedness and Dogmatism’, Episteme 15 (2018), 261–82.
Begby, Endre, ‘Evidential Preemption’, Philosophy and Phenomenological Research (2020), https://doi.org/10.1111/phpr.12654.
Callahan, Laura Frances, ‘Moral Testimony: A Re-Conceived Understanding Explanation’, Philosophical Quarterly 68 (2018), 437–59.
Camp, Elisabeth, ‘Metaphor and That Certain ‘Je Ne Sais Quoi’’, Philosophical Studies 129 (2006), 1–25.
Cassam, Quassim, ‘Vice Epistemology’, The Monist 99 (2016), 159–80.
Chabris, Christopher and Simons, Daniel, The Invisible Gorilla: How Our Intuitions Deceive Us (New York: Harmony, 2011).
Coady, David, What to Believe Now: Applying Epistemology to Contemporary Issues (Hoboken: Wiley-Blackwell, 2012).
Dallmann, Justin, ‘When Obstinacy Is a Better Policy’, Philosophers’ Imprint 17 (2017).
Dentith, Matthew R. X., ‘The Problem of Conspiracism’, Argumenta 3 (2018), 327–43.
Dentith, Matthew R. X., ‘Conspiracy Theories on the Basis of Evidence’, Synthese 196 (2019), 2243–61.
Dotson, Kristie, ‘Conceptualizing Epistemic Oppression’, Social Epistemology 28 (2014), 115–38.
Eaton, A. W., ‘Artifacts and Their Functions’, in The Oxford Handbook of History and Material Culture, edited by Ivan Gaskell and Sarah Anne Carter (Oxford: Oxford University Press, 2020).
Elgin, Catherine Z., ‘Creation as Reconfiguration: Art in the Advancement of Science’, International Studies in the Philosophy of Science 16 (2002), 13–25.
Elgin, Catherine Z., True Enough (Cambridge, Mass.: MIT Press, 2017).
Espeland, Wendy Nelson and Sauder, Michael, Engines of Anxiety: Academic Rankings, Reputation, and Accountability (New York: Russell Sage Foundation, 2016).
Fricker, Miranda, Epistemic Injustice: Power and the Ethics of Knowing (Oxford: Clarendon Press, 2007).
du Gay, Paul, In Praise of Bureaucracy: Weber – Organization – Ethics (London: SAGE Publications, 2000).
Gigerenzer, Gerd and Goldstein, Daniel G., ‘Reasoning the Fast and Frugal Way: Models of Bounded Rationality’, Psychological Review 103 (1996), 650–69.
Gopnik, Alison, ‘Explanation as Orgasm’, Minds and Machines 8 (1998), 101–118.
Grimm, Stephen R., ‘Is Understanding a Species of Knowledge?’, British Journal for the Philosophy of Science 57 (2006), 515–35.
Grimm, Stephen R., ‘The Value of Understanding’, Philosophy Compass 7 (2012), 103–117.
Jamieson, Kathleen Hall and Cappella, Joseph, Echo Chamber: Rush Limbaugh and the Conservative Media Establishment (Oxford: Oxford University Press, 2010).
Kahan, Dan M. and Braman, Donald, ‘Cultural Cognition and Public Policy’, Yale Law & Policy Review 24 (2006), 147–70.
Kahneman, Daniel, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2013).
Kelly, Thomas, ‘Disagreement, Dogmatism, and Belief Polarization’, Journal of Philosophy 105 (2008), 611–33.
Khalifa, Kareem, ‘Inaugurating Understanding or Repackaging Explanation?’, Philosophy of Science 79 (2012), 15–37.
Kvanvig, Jonathan L., The Value of Knowledge and the Pursuit of Understanding (Cambridge: Cambridge University Press, 2003).
Kvanvig, Jonathan L., ‘Millar on the Value of Knowledge’, Aristotelian Society Supplementary Volume 85 (2011), 83–99.
Lupton, Deborah, The Quantified Self (Cambridge: Polity, 2016).
Lynch, Michael, ‘Understanding and Coming to Understand’, in Making Sense of the World: New Essays on the Philosophy of Understanding, edited by Stephen Grimm (Oxford: Oxford University Press, 2018), 194–208.
McGonigal, Jane, Reality Is Broken: Why Games Make Us Better and How They Can Change the World (New York: Penguin Books, 2011).
Medina, José, The Epistemology of Resistance: Gender and Racial Oppression, Epistemic Injustice, and Resistant Imaginations (New York: Oxford University Press, 2012).
Merry, Sally Engle, The Seductions of Quantification: Measuring Human Rights, Gender Violence, and Sex Trafficking (Chicago: University of Chicago Press, 2016).
Millar, Alan, ‘Why Knowledge Matters’, Aristotelian Society Supplementary Volume 85 (2011), 63–81.
Millgram, Elijah, Practical Induction (Cambridge, Mass.: Harvard University Press, 1997).
Millgram, Elijah, ‘On Being Bored Out of Your Mind’, Proceedings of the Aristotelian Society 104 (2004), 163–84.
Millikan, Ruth Garrett, Language, Thought and Other Biological Categories: New Foundations for Realism (Cambridge, Mass.: MIT Press, 1984).
Nguyen, C. Thi, ‘Cognitive Islands and Runaway Echo Chambers: Problems for Epistemic Dependence on Experts’, Synthese (2018a), https://doi.org/10.1007/s11229-018-1692-0.
Nguyen, C. Thi, ‘Echo Chambers and Epistemic Bubbles’, Episteme (2018b), https://doi.org/10.1017/epi.2018.32.
Nguyen, C. Thi, ‘Expertise and the Fragmentation of Intellectual Autonomy’, Philosophical Inquiries 6 (2018c), 107–124.
Nguyen, C. Thi, Games: Agency as Art (New York: Oxford University Press, 2020).
Nguyen, C. Thi, ‘How Twitter Gamifies Communication’, in Applied Epistemology, edited by Jennifer Lackey (New York: Oxford University Press, forthcoming).
Oppenheimer, Daniel M., ‘The Secret Life of Fluency’, Trends in Cognitive Sciences 12 (2008), 237–41.
Perrow, Charles, Complex Organizations: A Critical Essay, 3rd edition (Brattleboro, Vermont: Echo Point Books & Media, 2014).
Picheta, Rob, ‘The Flat-Earth Conspiracy Is Spreading Around the Globe. Does It Hide a Darker Core?’, CNN (November 18, 2019), https://www.cnn.com/2019/11/16/us/flat-earth-conference-conspiracy-theories-scli-intl/index.html (accessed July 10, 2020).
Porter, Theodore, Trust in Numbers (Princeton: Princeton University Press, 1996).
Proctor, Robert and Schiebinger, Londa L., Agnotology: The Making and Unmaking of Ignorance (Stanford: Stanford University Press, 2008).
Reber, Rolf and Unkelbach, Christian, ‘The Epistemic Status of Processing Fluency as Source for Judgments of Truth’, Review of Philosophy and Psychology 1 (2010), 563–81.
de Regt, Henk W., ‘The Epistemic Value of Understanding’, Philosophy of Science 76 (2009), 585–97.
Rini, Regina, ‘Fake News and Partisan Epistemology’, Kennedy Institute of Ethics Journal 27 (2017), 43–64.
Scott, James C., Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (New Haven: Yale University Press, 1998).
Simon, Herbert, ‘Rational Choice and the Structure of the Environment’, Psychological Review 63 (1956), 129–38.
Stanley, Jason, How Propaganda Works (Princeton: Princeton University Press, 2016).
Strevens, Michael, ‘No Understanding Without Explanation’, Studies in History and Philosophy of Science Part A 44 (2013), 510–15.
Sullivan, Shannon and Tuana, Nancy, Race and Epistemologies of Ignorance (Albany: SUNY Press, 2007).
Sunstein, Cass, #Republic (Princeton: Princeton University Press, 2017).
Trout, J. D., ‘Scientific Explanation and the Sense of Understanding’, Philosophy of Science 69 (2002), 212–33.
Trout, J. D., ‘Understanding and Fluency’, in Making Sense of the World: New Essays on the Philosophy of Understanding, edited by Stephen Grimm (Oxford: Oxford University Press, 2017).
Wilkenfeld, Daniel A., ‘Understanding as Representation Manipulability’, Synthese 190 (2013), 997–1016.
Wilkenfeld, Daniel A., ‘MUDdy Understanding’, Synthese 194 (2017), 1273–93.
Wimsatt, William C., Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality (Cambridge, Mass.: Harvard University Press, 2007).
Zagzebski, Linda, ‘Recovering Understanding’, in Knowledge, Truth, and Duty: Essays on Epistemic Justification, Responsibility, and Virtue, edited by M. Steup (Oxford: Oxford University Press, 2001).
Zimmerman, Eric, Bogost, Ian, Linehan, Conor, Kirman, Ben, Roche, Bryan, Pesce, Mark, Rigby, Scott, et al., The Gameful World: Approaches, Issues, Applications, edited by Steffen P. Walz and Sebastian Deterding (Cambridge, Mass.: MIT Press, 2015).