
THE FOG OF DEBATE

Published online by Cambridge University Press: 15 June 2022

Nathan Ballantyne*
Affiliation:
Philosophy, Cognition, and Culture, Arizona State University, USA

Abstract

The fog of war—poor intelligence about the enemy—can frustrate even a well-prepared military force. Something similar can happen in intellectual debate. What I call the fog of debate is a useful metaphor for grappling with failures and dysfunctions of argumentative persuasion that stem from poor information about our opponents. It is distressingly easy to make mistakes about our opponents’ thinking, as well as to fail to comprehend their understanding of and reactions to our arguments. After describing the fog of debate and outlining its sources in cognition and communication, I consider a few policies we might adopt upon learning we are in this fog.

Research Article

© 2022 Social Philosophy & Policy Foundation. Printed in the USA

“What do you say? What? I do not understand you. Will you be kind enough to say it again? I understand you still less.”

– Jean de La Bruyère (1645–1696)

“I ceased [ … ] to believe that one can convince one’s opponents with arguments printed in books. It is not to do that, therefore, that I have taken up my pen, but merely so as to annoy them, and to bestow strength and courage on those on our own side, and to make it known to others that they have not convinced us.”

– Georg Christoph Lichtenberg (1742–1799)

“Machines don’t fight wars. Terrain doesn’t fight wars. Humans fight wars. You must get into the mind of humans. That’s where the battles are won.”

– Colonel John Boyd, United States Air Force (1927–1997)

Shrouded in the fog of war, military commanders lack good intelligence about the enemy. They are uncertain about their enemy’s positions and strategic capabilities. “Many intelligence reports in war are contradictory; even more are false, and most are uncertain,” observed Carl von Clausewitz, the Prussian military theorist and general. 1 In the nineteenth century, the expression “fog of war” would have evoked “the opacity of the black powder battlefield” 2—thick clouds of smoke, concealing enemy movements, burning observers’ eyeballs. In the theater of war, commanders experience the fog as uncertainty in judgment.

The fog-of-war idea hints at a model for thinking about intellectual debate—any communication where we aim to persuade others using argument. As I’ll show, the fog of debate is a useful metaphor for grappling with some failures and dysfunctions of argumentative persuasion. 3

In ideal circumstances, we want to persuade other people by rational means. We share arguments—premises linked to a conclusion, where the former are offered in support of the latter. But often we lack accurate information about others’ actual thinking and do not recognize what evidence would move them rationally. And when our opponents react outwardly after hearing our arguments, we do not always know how to interpret those reactions: their behavior does not reveal their thinking. Because of our ignorance, we might do a poor job calibrating our arguments to our audience. We use premises or inferences they will not accept. Our efforts to persuade are like a cannonball sailing off and landing in an empty field. Unaware that the angle of argument needs adjustment, we reload and keep firing away, feeling sure that victory in a dialectical war of attrition will be ours.

The claim that armed conflict is obscure and confusing is intuitive, but the idea that persuasion has similar qualities could seem less compelling. As it happens, people argue about policy, ethics, and philosophy in the grips of “naïve realism,” the tendency to believe we view the world objectively. 4 We presume we know who our opponents are, what they believe, and which arguments should convince them. We know how to set them straight. But if we are in the fog of debate, we had better not take so much for granted about our opponents. The way things appear to us cannot be trusted and so some other strategy for persuading them is advisable. Reflecting on the fog could lead us to better strategies.

This essay has three parts: I articulate an account of the fog of debate (Section I), describe some of its sources (Section II), and then consider the practical matter of how to deal with situations where we know we are fogged in (Section III).

I. Fog

Military commanders in the fog of war lack certain kinds of knowledge about enemy forces and the battlefield. Thinkers in the fog of debate lack certain kinds of knowledge about the people they want to persuade. To understand what the fog of debate is, we must identify what kinds of facts the fog obscures.

Let me begin with an analogy. You stack tin cans in a row atop a fence and try to shoot them off with your Red Ryder BB gun. You squeeze off a shot, hear a metallic ping, and see a can drop—direct hit! After perfecting your shot for hours, you decide to make target practice somewhat more interesting. You put on a blindfold and earplugs, and then aim in the vicinity of the cans. What happens when you pull the trigger? Perhaps your shot nailed a can, grazed a can, or ricocheted and stunned a neighbor’s cat. Perhaps you are not even shooting in the right direction. You just can’t tell because you lack access to certain facts. Any judgment you make about the shot is unreliably formed, though not necessarily wrong.

The Red Ryder example illuminates the causal chain of argumentative persuasion. In a prototypical debate situation, we know where our audience stands on an issue; we share an argument with them to change their minds; we know they comprehend our argument; we know they should accept its conclusion; and then we find out how they react. The fog of debate prevents us from seeing how all of this unfolds and where the chain might break down.

Consider what facts the fog obscures. When trying to persuade others, we normally believe or presume some answers to these four questions: (i) What does our audience think about an issue? (ii) Do they understand our argument? (iii) Would they be justified in accepting its conclusion after they understand it? (iv) What does their behavior indicate about their thinking about the issue after they hear our argument? When our answers—believed or presumed—are significantly wrong or unreliably formed, we are in the fog of debate, as I use the term: we lack certain pieces of knowledge about our audience and our argument.

Let me try to be more precise about these ideas. For any argument we share with an audience, the following four conditions may or may not hold:

Standpoint: We accurately estimate our audience’s thinking about the issue before we share our argument.

Comprehension: Our audience understands our argument.

Force: Our audience would be justified in accepting our argument’s conclusion after they understand it.

Feedback: We accurately interpret our audience’s behavioral reactions to our argument.

These are the clarity conditions. The term “fog of debate” picks out circumstances in which the clarity conditions, whether all or in significant part, are not satisfied, making effective persuasion by argument fraught. One implication is that the fog of debate is heterogeneous; there are fogs of debate, we might say. And we can think of the fog as a gradable or spectral state, coming in degrees. This means there are marginal or borderline cases in which some clarity conditions hold but others do not. 5

An important point about the meaning of “fog of debate”: it has both an objective and a subjective reading. The fog can involve conscious awareness of the fog or not. The term is similar to “confused” in that respect. I can be said to be confused because I’m making a mistake, even if I am totally unaware that I’m making a mistake. Or I can be called confused because I’m feeling unsure and I’m aware that I could be making a mistake. So too with the fog of debate. Someone can be in a fog with or without awareness. That dual aspect of the term is worth noting in connection with the attributor’s perspective. From the first-person perspective, someone says “I am in the fog.” Self-attributing that state normally implies someone is aware of the fog. But neither a second-personal attribution (“You are in the fog”) nor a third-personal attribution (“They are in the fog”) implies that anyone is aware of the fog. Again, comparison with the term “confused” is useful. Saying “I am confused” normally implies that someone experiences confusion, whereas attributing confusion to others (“You are confused” or “They are confused”) does not.

The word “debate” is meant to pick out someone’s attempt to show using argument that they are right and their opponent is wrong. 6 This is the familiar sort of adversarial or antagonistic debate. There are other kinds of debate, to be sure. We can debate to uncover new ideas or perspectives, in which case nobody wins or loses. In a fact-finding context, debate can be a means to bring out the truth, not to overcome an adversary. I will focus on debates where our goal is winning—changing an opponent’s mind—even though we may want to adjust our purposes after we learn of the fog. (In Section III, I consider that possibility.)

To illustrate my account of the fog of debate, consider a case with no wisp of fog. Imagine we are trying to persuade an ideal audience: Socrates. He always asks for clarification when he does not understand a premise or an inference, using language that leaves us with no doubt about his thinking. He always reacts honestly to our arguments. Let’s also suppose we share with Socrates all relevant background evidence on whatever issues we discuss, and we know that. This means we can evaluate our own evidence in order to figure out whether Socrates is justified to accept an argument’s conclusion. All of this being so, whenever we try to persuade Socrates, we are well positioned to know when Standpoint, Comprehension, Force, and Feedback are met.

When there are deviations from a pristine sort of situation like our debate with Socrates, the fog of debate can rise. But how does that happen, exactly? Which factors produce and exacerbate the fog?

II. Inclement Conditions

As noted, the fog comes when all or some clarity conditions fail to hold. Understanding what produces the fog is a matter of understanding which situations and processes erode or undercut the clarity conditions. That calls for empirical study, but first the conceptual and theoretical terrain needs to be mapped out. Philosophical effort can help. We can begin to make sense of the phenomena by practicing what Paul Rozin calls “informed curiosity.” 7 In view of various examples and different types of data, we can try to make sense of the fog’s manifestations and significance in epistemic life. To that end, I will note findings from the social sciences and observations about persuasion, drawing distinctions and working through cases, in order to illuminate how the clarity conditions can fail. I will also explain why people are often poorly positioned to recognize when those conditions fail. Being in the fog is bad enough. But things might become worse if we tend to deny that we are in it, when we are. Military commanders receive fragmentary and ambiguous reports but nevertheless feel confident that their perspective on the battlefield is luminously clear. That is how battles are lost.

Begin with Standpoint—the idea that we accurately estimate our audience’s thinking about an issue before we share our argument. Psychologists and political scientists have asked whether partisans have accurate views of the attitudes and positions of their ideological opponents. One study from the early 1990s examined the attitudes of English professors at public and private universities in California. 8 The survey queried the professors’ views on issues concerning the literary canon, such as “The aim of a liberal arts education should be to teach students a common cultural heritage,” “A literary text has several, equally correct interpretations,” and “Literature is best understood and taught when the instructor is of the same ethnicity and/or gender as the author.” Unsurprisingly, “traditionalist” and “revisionist” scholars were divided on the issues. What’s interesting is that partisans on both sides exaggerated the difference between themselves and their opponents. They even underestimated their significant common ground about which books to select for the syllabus of an introductory course. The English professors thought they disagreed more than they did.

Other studies show that partisans tend to have inaccurate views about the attitudes of out-group members. 9 Political partisans tend to overestimate how extreme their opponents’ views are. 10 One tendency in interpersonal judgment, named false polarization, leads opponents to overestimate the gap between their positions. 11 Partisans have also been found to believe they are more certain than members of the opposing side and to underestimate their opponents’ actual level of certainty. 12 Some researchers found that the partisans who had the most inaccurate judgments about opponents’ attitudes were the ones who most strongly identified with their ideology and were most involved in persuading others. 13

Failures of Standpoint can undermine our attempts to persuade. First, our arguments might flop. Being wrong about where other people stand makes us less likely to select arguments they will accept. We might commit the “straw man” fallacy, misrepresenting the audience’s position and then attacking the misrepresentation. 14 Second, if Standpoint doesn’t hold, we might inadvertently make our audience less receptive to arguments we put forward, even perfectly good ones they would otherwise accept. Signaling to our audience that we believe they hold a more extreme position than they actually do could make them less receptive to what we say. It is like beginning a negotiation with an insulting opening offer. Attributing overly extreme views to others can fuel suspicion, distrust, and hostility. They will not see us as fair-minded or sincerely invested in discussion, but rather as uninformed, combative, or biased. When Standpoint fails in a debate, recovery can be difficult.

But even if we start by judging correctly where our audience stands, that is no guarantee they will grasp our well-targeted arguments. This brings us to the matter of Comprehension—the condition that says the audience understands our argument. Comprehension can fail for many reasons. Audiences can be distracted, inattentive, or otherwise lack the resources of memory to keep the argument’s crucial details in mind. They may be unfamiliar with the technical terms or concepts the argument deploys. They may be listening only for the purpose of devising a sharp rejoinder to “own” us; but then they are not engaging, in the sense of seeing how our argument intersects with their own thinking. 15

Setting aside the audience’s limitations, we might inadvertently miscommunicate and share the “wrong” argument. One reason Comprehension fails turns on the subtle difference between the argument we rationally base our belief upon and the argument we use to persuade others to adopt that belief. We almost always express “argument sketches,” which function as compressed versions of more detailed arguments. John Pollock illustrates:

Someone might think to himself, “Should I believe in God? Well, something had to create the universe, so there must be a God.” Someone else might think, “Well, if there were a God, there wouldn’t be so much evil in the world, so there can’t be a God.” As arguments, these are grossly incomplete, and notoriously difficult to complete. 16

An argument sketch, according to Pollock, “asserts that certain things are inferable on the basis of other things without actually working through the argument.” 17 When going through an argument sketch, we are “meta-reasoning about the possibility of constructing [more detailed] arguments”: we are reasoning about what further reasoning we could do. 18 This allows us to move rapidly toward a conclusion without getting bogged down in all the premises, sub-arguments for premises, and so on. Sketched arguments, as I think of them, include an implied “promissory note” or “IOU.” If I give you an argument sketch, it’s like I hand you an IOU. When you cash in the IOU, you can demand “payment”—in the form of a filled-out argument. Argument sketches can sometimes justify our beliefs. But at other times, we will need to work through more details in order to have sufficient justification to accept a conclusion, especially if we have doubts about whether some sketch can be completed successfully.

Comprehension can fail in predictable ways when we share argument sketches. Here’s why. If our audience has different evidence from ours, they might not be able to see what further parts of the argument could be filled in later or how to do the filling in. We might incorrectly assume they will know how to add the expansion themselves. It is a little like we are speaking in code, wrongly supposing the other party has the right decryption key. Or we could be unaware that they even need more information to grasp our argument, because we fail to see how their evidence diverges from ours. Sometimes our audience might wrongly presume our argument has a significant “hole” in it, not realizing it is a sketch. Since they do not see how the further details can work out, our argument looks like a non sequitur to them. In other cases, they might recognize that an argument is a sketch, but then “incorrectly” fill it in with uncharitable or dubious claims. Ideological opponents have a knack for misconstruing each other’s arguments. Being misinterpreted is frustrating—That’s not what I meant!—but since we nearly always present argument sketches in debate, failures of Comprehension should not be surprising. 19

When Comprehension goes sideways, will we know that? Not always, and sometimes we might be inclined to believe Comprehension holds when it doesn’t. Seeing how our audience sees our argument is tricky. One important obstacle to perspective-taking is called the “curse of knowledge,” the inability to think about an issue from a less informed perspective. 20 Knowing more means knowing less about what it means to know less. And our knowledge of a topic blinds us to the ways in which uninformed people miss what is obvious to us. Because we see our argument as lucid, well structured, and easy to grasp, we might overlook the ways in which our audience simply does not get it.

Despite the barriers to understanding, people sometimes receive our arguments in the intended form. Enter the third clarity condition, Force—the idea that our audience is justified to accept our argument’s conclusion after understanding it. Someone can grasp an argument perfectly well and yet not be justified to adopt its conclusion. Many arguments leave the audience discretion for how to react properly in this sense: after grasping our arguments, they may reject a premise or inference by appealing to further evidence. Insofar as they have different total evidence than we do, we can expect they will sometimes rationally resist our arguments. What is justified for us could differ from what is justified for them, given differences between our total evidence and theirs. 21

But don’t some arguments foreclose all potential evasion? Can’t our reasoning occasionally force others to accept conclusions on pain of irrationality? One special brand of argument that rationally compels assent has been called a “knockdown argument.” 22 If we share such an argument with our audience and Comprehension is satisfied, then Force is satisfied. But how common are knockdown arguments? Broad consensus among experts from many fields of science, mathematics, engineering, medicine, and history suggests that all sorts of claims are conclusively established—that electrons exist, that mosquitoes spread malaria, that the continents are in motion, that the Pythagorean theorem is correct, that human beings landed on the moon in 1969, that Oscar Peterson played the piano, and so forth. When experts reach a consensus, oftentimes that is because there is a knockdown argument. But are there such arguments for all positions concerning contentious questions of policy, morality, and philosophy? That is unclear. And even when contentious claims are supported by some unquestionably compelling argument, adequately sharing that argument with an audience is not always simple. Quick sketches won’t do. Consequently, in many debates, knockdown arguments are either unavailable or have not been fully shared with our audience in a form that would force assent. In those debates, our audience may evade our conclusion if their total evidence allows them to reject a premise or inference.

So, Comprehension can hold even when Force does not—our audience could grasp our argument perfectly well yet be justified to reject its conclusion. But notice something interesting about our perception of Force: even when that condition does not hold, we could be biased to believe it does. The trouble lies in the fact that, in general, our judgments about other people are influenced by self-judgment. The Marquis de Vauvenargues, an eighteenth-century French writer, tersely depicted our predicament: “We are too inattentive or too much occupied with ourselves to understand each other.” 23 Contemporary psychologists offer models of interpersonal judgment according to which “people make judgments about others by making judgments about themselves, and only after the fact recognize potential differences between themselves and others.” 24 Briefly, when trying to calculate what others think and feel, we mentally “trade places,” imagining ourselves in their situation. Instead of answering a question about them (“What are others justified to believe?”), we answer a question about ourselves (“What would I be justified to believe if I were in others’ situation?”). This shift leads to what psychologists call empathy gaps in perspective-taking, which conceal from us our opponents’ thinking and reasoning. 25 An empathy gap opens when we project our actual thinking or feeling onto others who are in different states altogether. Even when there is no possible way for us to experience or feel what others do, we tend to lean on our personal experience for constructing others’ perspectives. 26 Empathy gaps are commonplace in interpersonal judgment, but we are not adept at anticipating or overcoming them. In egocentric minds like ours, self-judgment becomes social judgment, though we do not necessarily see the difference.

How could we improve? Our judgments concerning what others should think about our arguments need to be more sensitive to relevant information. One source of information is what people report or reveal about their thinking and reasoning. And so, at last, we have arrived at Feedback—the clarity condition that says we accurately interpret our audience’s behavioral reactions to our argument.

Noticing how Feedback fails is important for understanding the fog of debate. Suppose we share a well-targeted argument that our audience grasps. And suppose we also believe they should accept the conclusion. But now we want to know their private, internal response. Did their thinking change because of the argument? If they resist the conclusion, why do they do so? Do they have good reasons? If they accept the conclusion, is their opinion based on the premises or did something else move them? Answering these questions well depends on whether Feedback holds.

An analogy underscores the significance of Feedback. In military conflict, there are signs of victory. The fog of war may lift, revealing abandoned bunkers, wrecked artillery pieces, shell-shocked POWs, and bodies of the dead. These signs can mislead, but sometimes they let us accurately reconstruct a battle. In intellectual debate, however, the consequences of our argumentative efforts may remain entirely unknown. That is precisely what can happen when Feedback fails.

I will note three types of cases in which Feedback does not hold. First, our audience might provide false or misleading feedback. People normally do not like to admit when they get a weighty argument to which they have no good reply. To save face, they might dishonestly report they think our argument is a flop. They could be unsettled in their thinking while outwardly projecting confidence. Consider how Emily Pronin, Carolyn Puccio, and Lee Ross describe participants in their laboratory:

During contentious discussions, many individuals choose to remain silent [ … ]; those who do not remain silent generally hesitate to reveal any ambivalence in their beliefs. When addressing peers who seem to be on the other side of the issue, partisans seek mainly to defend their position rather than share doubts or complexities in their beliefs, lest their “concessions” give their adversaries “ammunition.” 27

Our audience may not reveal that they are moved by our arguments, but they can also hide the fact that they are rationally unmoved. Consider one way that could happen. By sharing an argument, we signal to others that we care about an issue—perhaps much more than they do. And they know that by disclosing their objections they might get caught in an awkward or unwanted debate. Decorum demands faux agreement, at least in some cultures. Polite people smile and nod, pretending to be impressed with arguments they don’t accept, even ones they know are flawed. 28

People are adept at calculating what others want to hear. In one revealing study, Ara Norenzayan and Norbert Schwarz investigated how subjects’ tacit beliefs about a researcher’s epistemic goals influenced judgment. 29 The study involved presenting subjects with a questionnaire that asked for explanations of a target person’s behavior. Norenzayan and Schwarz found the subjects gave more “situational” explanations when the letterhead of the questionnaire identified the researcher as a social scientist, compared to when the letterhead indicated the researcher was a personality psychologist. What they call the “letterhead effect” reveals that subjects adjusted their answers to the researcher’s epistemic goals—subjects believed a social scientist would be interested in situational causes of behavior, and a personality psychologist would be interested in dispositional ones. For better or worse, the norms that govern the sharing of feedback in debate are not focused merely on accuracy and truth.

Social dynamics can distort or suppress feedback in many ways. For example, people might not say what they believe if they fear censure or rejection for expressing unpopular dissent. 30 And powerful people, it is often observed, tend to attract proverbial Yes Men, who might not be men at all. “Of all forms of monotony,” wrote Joseph Joubert, “the monotony of affirmation is the worst.” 31 People with authority and influence, including emperors with no clothes, too seldom get truthful negative feedback. The power they wield over others sets up a special type of “echo chamber” or “epistemic bubble” 32 inside which misleading feedback reigns. The Yes Men must be managed. A twentieth-century historian, Arthur M. Schlesinger, Jr., who studied political power and the U.S. Presidency, noted that the “intoxications of the office” should prompt a wise President to seek “passports to reality.” 33 It is not only Presidents and other elites who need such passports. The monotony of affirmation is available today for anyone with an Internet connection and a social media account; your online followers can become a cyber militia of Yes Men. Insofar as we do not correctly understand the dynamics that shape feedback, our interpretations might go wildly off track.

In a second type of case where Feedback does not hold, our audience can provide us with poor feedback. Teachers of critical thinking and informal logic know how their students, at the outset of a course, toss around unclear and imprecise language to evaluate arguments. Someone might be thinking of a good objection to our argument but her description does not enlighten us. Expressions like “bad argument,” “it’s wrong,” “that can’t be right,” or “that doesn’t flow” are ambiguous in many contexts. Maybe the speaker means that an argument’s conclusion does not follow from the premises, that an explicitly stated premise is false, that an implicit premise is false, and so on. Poor feedback makes it hard for us to judge others’ reactions accurately. Sometimes, someone might not have any principled resistance to our argument but only feels its conclusion is wrong. 34 Plausibly, a great deal of moral and political disagreement among partisans arises from “affective differences [that] are difficult to identify and verbalize.” 35 Our audience’s stated objection to our argument may or may not reveal their real hang-up with the argument. This is because expressed objections do not always indicate the actual source of doubt, as we see when our audience’s objections are addressed but their doubt remains.

So far, I have noted how Feedback can fail due to false and poor feedback from our audience. A third possibility is that we process our audience’s feedback in biased ways and do not receive the right message. Here are a few possibilities: (i) we uncharitably interpret their words, misconstruing their good objection as a bad one; (ii) we cynically interpret their negative reactions as an indication they are in fact moved by our reasoning and trying to save face; or (iii) we mistake their ambivalent or lukewarm reactions as indicating our argument is compelling.

Take one example of the third error. Suppose you are trying to gauge how persuasive your arguments should be for others. You might rely too much on your own sense that your arguments are compelling. Those arguments make the conclusions seem obvious or even self-evident—for you. But your prior belief about the arguments’ persuasiveness leads you to decipher feedback, especially ambiguous feedback, in a way that confirms your prior belief. Someone could remark, “Huh, interesting argument,” “I hadn’t thought about that before,” or “Let me think more about it.” These reactions could mean many things. For one, the feedback may be false because your audience wants to move on with the conversation or avoid conflict. But if you assume your arguments are forceful, you are prone to construe such feedback as more positive than the evidence warrants. As a result, you will be more likely to see arguments as “direct hits” even when they were only marginally effective or not effective at all. As noted above, self-judgment can transmute into social judgment when we evaluate what others should think. Similarly, our personal evaluation of our arguments can shape what we think about how others evaluate them.

The fog of debate is not inevitable. Communication about arguments often works well enough; I hypothesize that it works in those cases because the clarity conditions hold, more or less. But various aspects of conversation and cognition easily undermine the clarity conditions and, unfortunately, we cannot always recognize when those conditions fail. Discussions of complex questions or value-laden issues invite trouble.

Before I turn to the practical matter of what we might do, let me note a topic so far unmentioned: the role of the medium of communication in thickening the fog. Here is a natural question for denizens of an Internet-connected world. Do certain forms of online exchange contribute to the fog? Perhaps under the influence of technologists’ hype, many people embrace the idea that computer networks “bring us closer together.” That may be true in a sense—but only because the Internet introduces much greater “distance” in other respects.

Information available in face-to-face exchange is missing on social media platforms: tone of voice, acoustical properties of speech such as pitch and volume, facial expressions, body language, situational explanations for users’ behavior, and so on. Cyberspace was not built to help us understand others’ minds. Researchers have found that people readily use paralinguistic cues in speech to infer others’ mental processing; because these cues are absent in text, hearing people explain their beliefs makes the speakers seem more mentally capable and more human than reading identical explanations does. 36 Further, the ties between online communicators can be quite “weak” 37 and some users end up “alone together,” feeling alienated, anxious, and emotionally exhausted despite—and because of—near-constant “connection” to other users. 38 Users can cloak themselves in anonymity, misleading others about their real beliefs and identities.

Questions about how different communication mediums influence argumentative persuasion are beyond the scope of this essay. Even so, it appears some online forums provide ideal circumstances for the fog of debate. The more we constrict our argumentative communication by making it increasingly less similar to conversation or long-form writing, the more the audience becomes hidden from our view. Some of these platforms exhibit a distressing mixture of elements: ultra-brief, non-face-to-face communication shaped by social networks where the inducements for rhetoric, signaling, and outrage are everything. Careful thinking, evidence, and the truth are left behind. “We have asked for truth; we have been given gadgets,” quipped the philosopher T. V. Smith. 39

III. Waiting for the Fog to Clear

The fog of debate arises through the routine workings of communication and cognition. We should consider what to do about it. As much as I am bothered by the fog and would like it to dissipate, significantly changing our practices of argument-giving might not be sensible in some circumstances. So, let me note two policies that would allow us to continue our standard approach to debate even after we become aware of the fog.

One policy is suggested by Georg Christoph Lichtenberg, in the quotation in the Introduction of this essay. An eighteenth-century German physicist and aphorist, Lichtenberg said that the purpose of his writing was not to convince his opponents with arguments, but to “annoy them” and “make it known to others that they have not convinced us.” 40 According to what I will call the Lichtenbergian policy, debate is for stirring up feelings in other people, not for persuasion; so, the fog of debate is nothing to worry about.

The basic idea can be clarified. Argumentative communication is a type of speech act. Expressing an argument can have different effects: persuading one’s opponents, annoying one’s opponents, rallying one’s allies, making oneself feel good, and so on. Following the taxonomy in J. L. Austin’s work on speech acts, those effects are perlocutions: acts done by saying something. Here is a way to describe what goes awry when we haphazardly fling an argument at our opponents through the fog: the perlocution of persuading our opponents is not a plausible outcome of communication. But if we know in advance that the perlocution is ill fated, we could intend a different perlocution—an effect that is easier for us to bring about from inside the fog. That is the Lichtenbergian policy. Persuading opponents is an unrealistic goal. Instead, we argue to annoy our opponents or to rally our allies. Our argumentative communication is a signal to others: “No, I’m not joining the Dark Side!” and “You foolish deplorables have the stupidest arguments!” 41

In fairness, the Lichtenbergian policy may be an apt description of what is sometimes called “argument.” But the policy will not satisfy us if we are committed to using reasons and evidence to persuade. From the mere fact that our efforts to persuade are ineffective in a debate, it does not follow that we should now use argument merely to stir up feelings in our opponents or allies. Setting that point aside, even if we feel the pull of this policy, we may worry whether it universalizes. Ask yourself: Would you want your opponents to treat their arguments as tools for annoying you? Perhaps not. If they could possibly teach you something, you would presumably want them to persist intelligently and creatively to try to get the message through. And since you believe you have something they ought to know, why not continue trying, as you think they should do unto you?

Consider a second policy meant to maintain the argumentative status quo. We can continue to share arguments, intending to persuade, because we think that engaging in debate is our best means to understand our audience’s thinking. Knowing about the fog calls us to carry on as before, because arguing might lift the fog. Call this the persistence policy. It reminds me of a sentiment expressed by G. E. Moore. In his Principia Ethica, Moore lamented the “peculiarly unsatisfactory state” of philosophy, owing to the fact that there is no consensus about philosophical questions “as there is about the existence of chairs and lights and benches.” 42 Moore continued:

I should therefore be a fool if I hoped to settle one great point of controversy, now and once for all. It is extremely improbable I shall convince. [ … ] Philosophical questions are so difficult, the problems they raise are so complex, that no one can fairly expect, now, any more than in the past, to win more than a very limited assent. And yet I confess that the considerations which I am about to present appear to me to be absolutely convincing. I do think that they ought to convince, if only I can put them well. In any case, I can but try. I shall try now to put an end to that unsatisfactory state of things, of which I have been speaking. 43

Though he did not say it, Moore might have thought something like the fog of debate contributed to the lack of consensus among philosophers. He was acutely aware of the dangers of merely verbal disputes, 44 one possible consequence of the fog, and that awareness led him tirelessly to disambiguate the claims he was making from those he wasn’t. I would suspect that finding out he was fogged in would not have stopped Moore from trying to convince philosophical colleagues.

In the right circumstances, arguing with people can certainly reveal to us their thinking. But recall we have significant doubts about our capacities to know what our audience thinks. We could keep launching our arguments at them, but what will we learn, given our doubts? The persistence policy seems to ignore the difficulty of carrying on while aware of a fog. Suppose we are prospectively considering whether to share an argument with our audience. Familiar questions arise. Is this argument well aimed at our audience? Would they understand it? Would they reasonably accept the conclusion after grasping it? Would we interpret their feedback accurately? Importantly, being aware of the fog means we will not have reasons for affirmative answers to all or some of these questions. But insofar as we cannot answer them affirmatively, we will often be unsure whether, and how, to proceed. After all, we know that lobbing another argument could jeopardize our persuasive efforts in the future. We should slow down and consider our options. Just as military commanders operate differently once they recognize the obscurity of battlefield judgment, so it goes for thinkers who are aware of the fog of debate.

This hints at a rule of thumb. Whenever we have expended considerable effort trying to convince our opponents, without success, yet feel confident they are finally in the crosshairs of our arguments, we should think again. On careful reflection, we may decide, like Moore, that we should persevere in trying to persuade others with our arguments—“if only I can put them well.” Or we may decide, like Lichtenberg, that annoying our opponents or rallying our allies is the best we can accomplish for now. 45

But I want to draw attention to a further possibility. While thinking again, we should see the difference between arguing and preparing to argue. We should sometimes ask how best to devise, deliver, and defend arguments to our audience. A warfare analogy is illuminating here. One lesson of the fog of war might be to stand down. When the enemy is elusive, a military force can deploy spies, listening stations, and high-altitude reconnaissance aircraft to identify strategic opportunities and tactical vulnerabilities. Commanders take counsel from intelligence officers, run war-game scenarios, and study the history of war. To prepare for conflict, commanders gather intelligence.

In the shift from arguing to preparing to argue, I envision various kinds of “intelligence gathering ops” and “tradecraft.” These would help thinkers overcome the fog of debate. How would this work? What are effective techniques? Well, I don’t pretend to know—I would just like to find out. We could benefit from knowing some empirically tested techniques.

Fog reduction requires knowing how to change social and cognitive processes and activities. It is unclear how optimistic we can be about large improvements. Some circumstances plunge us into impenetrable fog. Although we know that the fog is to some extent caused and sustained by different cognitive biases, our knowledge about how de-biasing works is limited. 46 But if researchers can understand how to mitigate fog, small improvements might pay off. To the best of my knowledge, there is no established body of theory that we can take directly “off the shelf” for application in fog elimination. Researchers studying the social dynamics of knowledge production and social conflict could have useful insights, including researchers from social psychology, counseling psychology, conflict resolution, sociology of knowledge, and history of science. Human beings frequently seek to overcome uncertainty and confusion in interpersonal judgment and argumentative communication. What has worked for them? How can we find out what works for us?

If researchers can tell us how to lift the fog, we should scrutinize our academic institutions and research practices to ensure they are set up to minimize fog. I strongly suspect that some relevant knowledge takes the form of know-how and practical wisdom. Reducing the fog is a matter of repositioning ourselves, seeing situations differently. Here are a few examples for further reflection.

For all of their talking and typing, some intellectuals are less skilled at listening than would be desirable. Arguing in a confident and decisive style is celebrated more than listening well. Brenda Ueland, a twentieth-century American journalist, described “censorious listening,” where a listener is an “ungenerous eavesdropper who mentally (or aloud) keeps saying as you talk, ‘Bunk … Bunk … Hokum’.” 47 Some academics, maybe philosophers especially, are trained to be censorious listeners; their job is to detect, and root out, error and illogic. 48 Listening will not get you tenure. At any rate, hearing what people are saying about their standpoint and their ways of thinking might clear away some fog. In their work on false polarization, David Sherman, Leif Nelson, and Lee Ross found that “when partisans are confronted with the actual views of their opponents (which are more similar to their own than they think), they are apt to see much more common ground between the sides and be much more optimistic about negotiating their differences.” 49

But, as noted above, social dynamics can block accurate feedback. Even when we can hear others, they might not dish. “Are you listening,” asked Joseph Joubert, “to the ones who keep quiet?” 50 An unforthcoming audience could call for something not unlike a spy’s tradecraft. I will describe a technique used by a friend of mine. He is an academic in the United States and, like most American academics, he has liberal political opinions. But unlike most, my friend has a non-American accent, which lets him pass as a tourist. Sometimes, he casually asks people for their thoughts on political issues, letting them assume he is unfamiliar with some divisive, hot-button affair. In return, he receives reports that other liberal academics are unlikely ever to hear from their conservative neighbors. People are candid with my friend because they do not attribute to him the biases and defects they reactively attribute to PhDs who work in universities.

Slipping into the ranks of an out-group can give us insight into how the “other side” really thinks. Let me share two examples about a philosophical debate concerning the nature of the mind. Note that latter-day academic philosophers as a group strongly tend to accept physicalism—the thesis that mental properties are, or are reducible to, physical properties. A recent survey found more than half of respondents accepted physicalism while only about one quarter accepted non-physicalist views. 51 In philosophy, as elsewhere, thinking carefully and creatively about the alternatives to a dominant viewpoint can be a struggle. Conventional notions come to seem obvious or inevitable.

Jaegwon Kim, who taught in the Philosophy Department at Brown University for many years, affirmed physicalism for part of his career. But then Kim began to worry that physicalism could not accommodate facts about consciousness and mental causation. What could a sensible departure from physicalism look like? As he continued to patiently explore physicalism in his work, he decided to visit the University of Notre Dame, where he taught a series of graduate seminars between 1999 and 2005. One of Kim’s motivations for visiting Notre Dame was the eclectic group he found in its Philosophy Department: some philosophers there accepted unpopular non-physicalist views concerning the metaphysics of mind, such as substance dualism, neutral monism, and idealism, as well as an off-beat physicalist view, eliminative materialism. Kim told his friends and colleagues that he found the diversity of thought refreshing and stimulating for his ongoing research. 52

Getting inside other theories does not always require talking to people who hold them. It could be that we need to do some talking ourselves. At the same time Jaegwon Kim was visiting Notre Dame, William Lycan was teaching at the University of North Carolina. Like Kim, Lycan had invested considerable energy in the philosophy of mind. Lycan reports that he has found physicalism compelling “all my adult life, since first I considered the mind-body issue.” 53 Back in the 1980s, Lycan had written a book that dismissed dualism by rehearsing stock counterarguments. But then in the spring semester of 2006, Lycan taught a graduate seminar on the mind-body problem and decided to play the role “of a committed dualist as energetically as I could.” Taking on the mantle of the dualist was, he says, “a strange feeling, something like being a cat burglar for a few months.” 54 Lycan reports that his dialectical role-playing showed him the defects of arguments against dualism. Those arguments he had originally thought were so compelling were not persuasive on closer inspection. He was ultimately led to compare his physicalist perspective to a political or religious ideology: “my own faith in materialism,” Lycan confesses, “is based on science-worship.” 55 , 56

To prepare for debate, we can surround ourselves with opponents or pretend to be those opponents. We can listen carefully to them or study their writings rigorously and generously. One possibility is that we will come to think anew about not only their thinking but ours as well. Our preparation could be humbling, teaching us how tenuous our arguments are and how elusive the truth can be. We see dimly through the fog. Even our confidence about who our opponents are might become unsettled, as when our debates turn out to be merely verbal. “When two persons sit and converse in a thoroughly good understanding,” Ralph Waldo Emerson observed, “the remark is sure to be made, See how we have disputed about words!” 57 By holding back on debate for a little while, by preparing our best arguments to share, we might make peace with our adversaries—even if the fog never subsides, even if our arguments never persuade.

Footnotes

*

School of Historical, Philosophical, and Religious Studies, Arizona State University. Competing Interests: The author declares none. For helpful conversations and comments, I am grateful to Ian Axel Anderson, Andrew Bailey, Jared Celniker, David Christensen, Carlo DaVia, Peter Ditto, Xingming Hu, Madeline Jalbert, Samuel Kampa, Charlie Lassiter, William Lycan, Tom Noah, Andrew Rotondo, Peter Seipel, Eric Schwitzgebel, Claudia Vanney, Joseph Vukov, Peter Andrey Smith, Shane Wilkins, Benjamin Wilson, and an anonymous referee. I am especially grateful to E. J. Coffman, Peter Ditto, Brett Mercier, and Norbert Schwarz for insightful conversations and written comments. I want to thank audiences at Universidad Austral in October 2020 and Nanjing University in May 2021 for discussions of earlier versions of this essay. Finally, I acknowledge the John Templeton Foundation for its generous support of my research (grant 61014).

References

1 von Clausewitz, Carl, On War, ed. and trans. Howard, Michael and Paret, Peter (Princeton, NJ: Princeton University Press, 1976 [1816–1830]), 117.

2 Kiesling, Eugenia C., “On War without the Fog,” Military Review 81, no. 5 (2001): 85–87 [at 85].

3 George Lakoff and Mark Johnson (Metaphors We Live By [Chicago: University of Chicago Press, 1980]) discuss the metaphor “argument is war,” which they point out is reflected in all sorts of language: “Your claims are indefensible,” “I demolished their argument,” “He attacked every weak point in my argument,” “She shot down all of my arguments,” “My objection will blow up your claim,” and so on. Lakoff and Johnson contend in general that metaphor shapes how people think and act. Although I reject the idea that we should literally treat debate as a war, I have certainly ended up thinking that the “fog of debate” concept illuminates some facets of argumentative persuasion for the reason that the argument-is-war metaphor is entrenched deeply in language and culture—just as Lakoff and Johnson say.

4 Ross, Lee and Ward, Andrew, “Naive Realism in Everyday Life: Implications for Social Conflict and Misunderstanding,” in Reed, E. S., Turiel, E., and Brown, T., eds., Values and Knowledge (Hillsdale, NJ: Lawrence Erlbaum Associates, 1996), 103–35.

5 The possibility of marginal cases is worth a note. Suppose, for example, that Standpoint does not hold but the other clarity conditions do. Then it may be plausible to say that someone is not in a fog. To see why, suppose I don’t know how to accurately estimate where others stand on an issue before I share my argument. But I can potentially overcome that obstacle to effective debate if Comprehension, Force, and Feedback hold. As a further example, someone may be in a fog even if all of the clarity conditions hold except for Feedback. If all the information you receive about your audience’s reaction to your argument is systematically biased, you seem to be in a fog.

6 Thanks to Madeline Jalbert, Norbert Schwarz, and an anonymous referee for questions here.

7 Rozin, Paul, “Social Psychology and Science: Some Lessons From Solomon Asch,” Personality and Social Psychology Review 5, no. 1 (2001): 2–14.

8 Keltner, Dacher and Robinson, Robert J., “Defending the Status Quo: Power and Bias in Social Conflict,” Personality and Social Psychology Bulletin 23, no. 10 (1997): 1066–77.

9 Robinson, Robert J., Keltner, Dacher, Ward, Andrew, and Ross, Lee, “Actual Versus Assumed Differences in Construal: Naive Realism in Intergroup Perception and Conflict,” Journal of Personality and Social Psychology 68, no. 3 (1995): 404–417; Chambers, John R., Baron, Robert S., and Inman, Mary L., “Misperceptions in Intergroup Conflict: Disagreeing about What We Disagree About,” Psychological Science 17, no. 1 (2006): 38–45.

10 Westfall, Jacob, Van Boven, Leaf, Chambers, John R., and Judd, Charles M., “Perceiving Political Polarization in the United States: Party Identity Strength and Attitude Extremity Exacerbate the Perceived Partisan Divide,” Perspectives on Psychological Science 10, no. 2 (2015): 145–58.

11 Pronin, Emily, Puccio, Carolyn, and Ross, Lee, “Understanding Misunderstanding: Social Psychological Perspectives,” in Gilovich, T., Griffin, D., and Kahneman, D., eds., Heuristics and Biases: The Psychology of Intuitive Judgment (New York: Cambridge University Press, 2002), 636–65 [at 651–53]; Sherman, David K., Nelson, Leif D., and Ross, Lee D., “Naïve Realism and Affirmative Action: Adversaries Are More Similar Than They Think,” Basic and Applied Social Psychology 25, no. 4 (2003): 275–89; Kenyon, Tim, “False Polarization: Debiasing as Applied Social Epistemology,” Synthese 191 (2014): 2529–47.

12 Blatz, Craig W. and Mercier, Brett, “False Polarization and False Moderation: Political Opponents Overestimate the Extremity of Each Other’s Ideologies but Underestimate Each Other’s Certainty,” Social Psychological and Personality Science 9, no. 5 (2018): 521–29.

13 Westfall, Van Boven, Chambers, and Judd, “Perceiving Political Polarization in the United States,” 155.

14 Walton, Douglas, “The Straw Man Fallacy,” in van Benthem, J., van Eemeren, F., Grootendorst, R., and Veltman, F., eds., Logic and Argumentation (Amsterdam: Royal Netherlands Academy of Arts and Sciences, 1996), 115–28.

15 Kwong, Jack M. C., “Open‐Mindedness as Engagement,” The Southern Journal of Philosophy 54, no. 1 (2016): 70–86.

16 Pollock, John, “Irrationality and Cognition,” in Smith, Quentin, ed., Epistemology: New Essays (New York: Oxford University Press, 2008), 249–75 [at 260].

17 Ibid., 259.

18 Ibid.

19 Thinking about how Comprehension fails reminded me of some advice I picked up years ago from a philosophy teacher. There is a useful rule for reconstructing arguments called the charity principle: “When clarifying an argument, make the argument as sensible as you possibly can, given what its author said when presenting it” (E. J. Coffman, “Finding, Clarifying, and Evaluating Arguments,” no date, Philosophy Department, University of Tennessee, Knoxville, https://philpapers.org/archive/COFFCA.pdf [at 5]). That principle is not the advice I got from my teacher; he shared what I call the anti-charity principle. When preparing a draft manuscript for submission to a professional journal, invite some friends to interpret your arguments anti-charitably, thereby helping you forestall bad objections from unsympathetic referees. (I am grateful to Klaas Kraay for his help and advice over the years.)

20 Camerer, Colin, Loewenstein, George, and Weber, Martin, “The Curse of Knowledge in Economic Settings: An Experimental Analysis,” Journal of Political Economy 97, no. 5 (1989): 1232–54.

21 I assume here that insofar as what conclusions people are justified to accept depends on evidence, their total evidence matters (Kelly, Thomas, “Evidence: Fundamental Concepts and the Phenomenal Conception,” Philosophy Compass 3, no. 5 [2008]: 933–55 [at 937–39]). Potentially, justification depends on factors over and above evidence, such as whether someone’s cognitive faculties are functioning properly (Michael Bergmann, Justification Without Awareness: A Defense of Epistemic Externalism [Oxford: Oxford University Press, 2006]). But non-evidentialist theories of justification are compatible with my assumption that insofar as evidence matters for the justification of argument-based beliefs, it is total evidence that matters.

22 Ballantyne, Nathan, “Knockdown Arguments,” Erkenntnis 79, no. 3 (2014): 525–43.

23 de Vauvenargues, Marquis, Selections from the Characters, Reflexions and Maxims, translated with introductory notes and memoirs by Lee, Elizabeth (Westminster, London: Archibald Constable and Co., 1903 [1746]), 184–85.

24 Van Boven, Leaf, Loewenstein, George, Dunning, David, and Nordgren, Loran F., “Changing Places: A Dual Judgment Model of Empathy Gaps in Emotional Perspective Taking,” in Advances in Experimental Social Psychology, Volume 48 (Cambridge, MA: Academic Press, 2013), 117–71 [at 120].

25 Loewenstein, George, “Hot-Cold Empathy Gaps and Medical Decision Making,” Health Psychology 24, no. 4 (2005): 49–56; Ditto, Peter H. and Koleva, Spassena P., “Moral Empathy Gaps and the American Culture War,” Emotion Review 3, no. 3 (2011): 331–32; and Van Boven, Loewenstein, Dunning, and Nordgren, “Changing Places.”

26 Van Boven, Loewenstein, Dunning, and Nordgren, “Changing Places,” 124–27.

27 Pronin, Emily, Puccio, Carolyn, and Ross, Lee, “Understanding Misunderstanding: Social Psychological Perspectives,” in Gilovich, T., Griffin, D., and Kahneman, D., eds., Heuristics and Biases: The Psychology of Intuitive Judgment (New York: Cambridge University Press, 2002), 636–65 [at 652].

28 While traveling on airplanes, I learned the value of sharing false feedback. Seated and buckled in next to talkative cranks or extroverted ideologues, honesty is not necessarily the best policy. (“Well, thanks—I’ve always wanted to know how the Egyptian pyramids were built. I should get a bit of work wrapped up before we land in Chicago, but I’ll definitely check out that book you recommended.”)

29 Norenzayan, Ara and Schwarz, Norbert, “Telling What They Want To Know: Participants Tailor Causal Attributions to Researchers’ Interests,” European Journal of Social Psychology 29, no. 8 (1999): 1011–20.

30 Noelle-Neumann, Elisabeth, “The Spiral of Silence: A Theory of Public Opinion,” Journal of Communication 24, no. 2 (1974): 43–51.

31 Joubert, Joseph, Pensées and Letters of Joseph Joubert, translated with an introduction by Collins, H. P. (Freeport, New York: Books for Libraries Press, 1972 [1842]), 73.

32 Nguyen, C. Thi, “Echo Chambers and Epistemic Bubbles,” Episteme 17, no. 2 (2020): 141–61.

33 Schlesinger, Arthur M., The Imperial Presidency (Boston, MA: Houghton Mifflin, 2004), 408.

34 On other occasions, metacognitive feelings come prior to any effortful analysis but the feelings are accurate indicators of validity, as demonstrated by Morsanyi, Kinga and Handley, Simon J., “Logic Feels So Good—I like it! Evidence for Intuitive Detection of Logicality in Syllogistic Reasoning,” Journal of Experimental Psychology: Learning, Memory, and Cognition 38, no. 3 (2012): 596–616.

35 Ditto and Koleva, “Moral Empathy Gaps and the American Culture War,” 332.

36 Schroeder, Juliana, Kardas, Michael, and Epley, Nicholas, “The Humanizing Voice: Speech Reveals, and Text Conceals, a More Thoughtful Mind in the Midst of Disagreement,” Psychological Science 28, no. 12 (2017): 1745–62.

37 Gilbert, Eric and Karahalios, Karrie, “Predicting Tie Strength with Social Media,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2009): 211–20.

38 Turkle, Sherry, Alone Together: Why We Expect More from Technology and Less from Each Other (New York: Basic Books, 2011).

39 Smith, T. V., “The Tragic Realm of Truth,” The Philosophical Review 45, no. 2 (1936): 111–25 [at 113].

40 Lichtenberg, Georg Christoph, The Waste Books, translated and with an introduction by Hollingdale, R. J. (New York: New York Review Book, 1990 [1775–1776]), 67.

41 For discussion of attributions of malice and stupidity in conflicts, see Nathan Ballantyne and Peter H. Ditto, “Hanlon’s Razor,” Midwest Studies in Philosophy (forthcoming), https://doi.org/10.5840/msp2021933.

42 Moore, G. E., Principia Ethica (Cambridge: Cambridge University Press, 1903), 45.

43 Ibid., 45.

44 Ibid., vii.

45 Do I think anyone should follow the Lichtenbergian policy? Maybe occasionally, though only cautiously. Here is one type of situation that may justify the policy. Sometimes our adversaries do not care one whit about the truth; we know this because they tell us so. They fling mud, not arguments. Intellectuals are naturally uncomfortable with the sophists’ dirty tricks. But we who care deeply about reason and evidence can use rhetoric and passion to inoculate other people against the sophists. We could be called upon to safeguard the pursuit of truth in our community using every rhetorical weapon available. (Thanks to Shane Wilkins for discussion.)

46 Schwarz, Norbert, Sanna, Lawrence J., Skurnik, Ian, and Yoon, Carolyn, “Metacognitive Experiences and the Intricacies of Setting People Straight: Implications for Debiasing and Public Information Campaigns,” Advances in Experimental Social Psychology 39 (2007): 127–61; Lilienfeld, Scott O., Ammirati, Rachel, and Landfield, Kristin, “Giving Debiasing Away: Can Psychological Research on Correcting Cognitive Errors Promote Human Welfare?” Perspectives on Psychological Science 4, no. 4 (2009): 390–98; Lewandowsky, Stephan, Ecker, Ullrich K. H., Seifert, Colleen M., Schwarz, Norbert, and Cook, John, “Misinformation and Its Correction: Continued Influence and Successful Debiasing,” Psychological Science in the Public Interest 13, no. 3 (2012): 106–31.

47 Ueland, Brenda, “Tell Me More: On the Fine Art of Listening,” in Strength to Your Sword Arm: Selected Writings (Duluth, MN: Holy Cow! Press, 1993), 205–210 [at 210].

48 E. J. Coffman shared Thomas Senor’s remarks (Thomas D. Senor, “Still More Advice to Christians in Philosophy,” Logoi [Spring 2015]: 6–8. https://philreligion.nd.edu/assets/280358/logoi_spring.2015.pdf) on the theme of “censorious listening” at academic philosophy conferences:

We go to philosophy talks to poke holes in the speaker’s main argument, or to show that something important was overlooked. We are there as much to instruct as we are to learn—and this is so even if we don’t take ourselves to know as much about the subject of the talk than the speaker does. Our hands shoot up when the Q&A starts because we want to get in our own clever objection before someone beats us to it. (7)

49 Sherman, Nelson, and Ross, “Naïve Realism and Affirmative Action,” 276.

50 Joubert, Joseph, The Notebooks of Joseph Joubert, translated and with an introduction by Auster, Paul (New York: New York Review Book, 2005 [1791]), 13.

51 Bourget, David and Chalmers, David J., “What Do Philosophers Believe?” Philosophical Studies 170, no. 3 (2014): 465–500.

52 Thanks to Fritz Warfield for email correspondence (September 2020) about Jaegwon Kim’s visits to South Bend, Indiana.

53 Lycan, William G., On Evidence in Philosophy (Oxford: Oxford University Press, 2019), 66.

54 Ibid., 66, n. 8.

55 Ibid., 68, n. 11.

56 Some of the material in this paragraph is adapted from Nathan Ballantyne, “Review of William G. Lycan’s On Evidence in Philosophy,” Notre Dame Philosophical Reviews, January 4, 2020, https://ndpr.nd.edu/news/on-evidence-in-philosophy/

57 Emerson, Ralph Waldo, “New England Reformers,” in The Essential Writings of Ralph Waldo Emerson, Modern Library Paperback Edition (New York: Random House, 2000 [1844]), 402–420, at 416.