
EPISTEMIC AKRASIA AND EPISTEMIC REASONS

Published online by Cambridge University Press:  12 April 2018


Abstract

It seems that epistemically rational agents should avoid incoherent combinations of beliefs and should respond correctly to their epistemic reasons. However, some situations seem to indicate that such requirements cannot be simultaneously satisfied. In such contexts, assuming that there is no unsolvable dilemma of epistemic rationality, either (i) it could be rational that one's higher-order attitudes do not align with one's first-order attitudes or (ii) requirements such as responding correctly to epistemic reasons that agents have are not genuine rationality requirements. This result doesn't square well with plausible theoretical assumptions concerning epistemic rationality. So, how do we solve this puzzle? In this paper, I will suggest that an agent can always reason from infallible higher-order reasons. This provides a partial solution to the above puzzle.

Copyright © Cambridge University Press 2018

Meet Doctor Watson, Sherlock Holmes's assistant. While he rarely matches Holmes's reasoning skills, Watson is an epistemically rational reasoner.Footnote 1 Now, imagine that Watson finds himself in the following situations:

Clear Evidence. Watson has sufficient evidence of numerous distinctive features X (the type of murder, the type of victim, the crime scene's location, etc.). Given features X, it seems highly probable to Watson that the killer is Jack the Ripper.

Fallible Reasons. Watson analyzes numerous distinctive features X (the type of murder, the type of victim, the crime scene's location, etc.). He finds a justificatory chain leading to the conclusion that the killer is Jack the Ripper. However, he is aware that the reasons he responded to are fallible to a certain degree.

Bad Reasoning. Watson concludes that the killer is Jack the Ripper on the basis of numerous distinctive features X (the type of murder, the type of victim, the crime scene's location, etc.). However, he also has evidence (i) that Holmes thinks that he (Watson) made a mistake in processing the evidence and (ii) that Holmes is almost always reliable. For example, Holmes could suggest that, on that particular occasion, Watson reached a conclusion through incorrect reasoning.

Let's assume that, in cases like Clear Evidence, Watson is epistemically rational in concluding that Jack the Ripper is the killer. However, in cases like Bad Reasoning or Fallible Reasons, things get complicated. In such cases, it isn't clear how Watson will rationally weight the evidence he has or evaluate his own reasoning.

Several authors have recently suggested that, in cases like Bad Reasoning or Fallible Reasons, it is rational for Watson to hold an akratic combination of attitudes (Coates 2012; Lasonen-Aarnio 2014, Forthcoming). Others have suggested that such cases show that responding to epistemic reasons is not a genuine requirement of epistemic rationality, or at least that responding to epistemic reasons can conflict with coherence requirements (Worsnip 2015). Let's call this Rational Puzzle:

Rational Puzzle. At least one of the following verdicts is correct: (i) epistemic akrasia can be rational, or (ii) requirements such as responding correctly to epistemic reasons are not genuine rationality requirements.

Rational Puzzle is problematic because it does not cope well with plausible assumptions concerning epistemic rationality. In particular, it is hard to imagine that an epistemically rational agent sometimes has to choose between responding correctly to his or her reasons and maintaining internal coherence.

In this paper, I shed light on the above puzzle. First, it is sometimes helpful to determine that what appears to be a new problem is, in fact, very similar to a well-known one. I will suggest that Rational Puzzle is essentially related to traditional problems of responding to fallible reasons such as the lottery paradox. Specifically, if the fallibilist solution to the lottery paradox is correct, then it could be rational for an agent to hold an akratic combination of attitudes. Nevertheless, I will suggest that an agent never has to choose between responding to his or her reasons and avoiding akratic combinations of attitudes, because he or she is always in a position to satisfy both.

In Section 1, I will clarify what I mean by requirements of rationality, epistemic reasons and the enkratic requirements. I will also present Rational Puzzle and explain why cases like Bad Reasoning or Fallible Reasons are closely related to this puzzle. In Section 2, I will argue that Rational Puzzle holds only if a rational agent can have sufficient epistemic reason to believe that “he or she has sufficient epistemic reason to believe P,” while having sufficient epistemic reason against believing P. I will then explain that such situations are possible only if higher-order epistemic reasons are sometimes fallible.

This will lead me, in Section 3, to analyze the possibility of fallible higher-order epistemic reasons. I will argue that, while there can be fallible higher-order epistemic reasons, an agent can always respond to infallible higher-order epistemic reasons. Furthermore, relative to rational reasoning, responding to infallible higher-order epistemic reasons appears to be preferable. In other words, I will argue that a rational agent would prefer responding to infallible higher-order reasons. This provides a partial solution to Rational Puzzle: while this paper does not rule out the possibility of rational epistemic akrasia, (i) no epistemically rational agent is required to maintain such a combination of attitudes and (ii) remaining in such a state seems undesirable.

1. RATIONAL BELIEVERS, ENKRATIC REQUIREMENT(S) AND RATIONAL PUZZLE

1.1 Rational believers and epistemic reasons

An ideally rational agent satisfies all state and process rationality requirements.Footnote 2 State requirements govern relations among multiple attitudes. They are, for the most part, coherence requirements. Here are two putative coherence requirements of rationality:Footnote 3

Consistency. Rationality requires that, if A believes that P, then it is false that A believes that ~P.

Intra-Level Coherence. Rationality requires that, if A believes that P1, believes that P2, … and believes that Pn, then it is false that A believes that ~(P1 ^ P2 ^ … ^ Pn).

Consistency is logically weaker than Intra-Level Coherence. For example, simultaneously believing P, believing Q and believing ~(P^Q) violates Intra-Level Coherence, but such a combination of beliefs does not necessarily violate Consistency. For the moment, I will only assume that Consistency is correct, and I will come back to Intra-Level Coherence in Section 3 when discussing lottery cases.

Process requirements govern how agents form and revise their attitudes over time. For example, when an agent has sufficient epistemic reason to believe P, this seems to put him or her under a normative pressure to come to believe P, as in the following:

Reasons-Responsiveness. Rationality requires that, if A has sufficient epistemic reason to believe P, A believes that P.

In Reasons-Responsiveness, the notions of sufficiency and reasons remain to be clarified. First, sufficiency. Some authors prefer to say that agents ought to respond to conclusive reasons. A conclusive epistemic reason to believe P puts agents under a normative pressure to believe P. I prefer the notion of sufficient epistemic reason, since conclusive reasons are sometimes assumed to be infallible. If conclusive reasons are infallible, then having conclusive reason to believe P is incompatible with P's being false. Since this is not what I have in mind, I prefer to avoid using the notion of conclusive reason (but if conclusive reasons can be fallible, one could replace my "sufficient epistemic reason" with "conclusive epistemic reason").

Now, reasons. There are many substantial debates surrounding the nature of epistemic reasons that I do not wish to address here. For instance, I will not take a stand in the objectivism-perspectivism debate on reasons. According to objectivism, what you have sufficient reason for believing depends on the facts of your situation. According to perspectivism, it depends on your perspective (what you are in a position to know, what appears true from your standpoint, and so forth).

However, we can remain neutral on these substantial issues surrounding reasons while representing them in a particular way. For the purposes of this paper, what matters most is representing the distinction between fallible and infallible reasons. Fallible reasons to believe P are reasons compatible with P's being false or reasons that could be misleading concerning P (Moretti and Piazza 2013: sec. 3.2).

Reasons can be represented through possibility theory, subjective levels of confidence, probabilities, ranking theory and so forth.Footnote 4 In this paper, I will limit my argument and examples to a probabilistic representation of reasons. Specifically, I will assume that epistemic reasons are represented by epistemic probabilities, understood as the probabilities warranted by an agent's body of epistemic reasons. In such a context, fallible reasons to believe P warrant an epistemic probability of less than 1 in P, and infallible reasons to believe P warrant an epistemic probability of 1 in P. Also, while rational credences are not identical to epistemic probabilities, they track epistemic probabilities. For example, if P's epistemic probability is 0.9 relative to a body of epistemic reasons, then it is rational for an agent who has such a body of epistemic reasons to entertain a credence of 0.9 in P.
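To keep the representational assumptions of this section explicit, here is a minimal formalization. The notation is mine (PrR for the epistemic probability function warranted by a body of reasons R, CrA for agent A's credence), and stating the tracking relation as an identity is a simplifying assumption; the paper itself only requires that rational credences track epistemic probabilities:

$$\Pr\nolimits_R(P) < 1 \;\; \hbox{(fallible reasons to believe P)}, \qquad \Pr\nolimits_R(P) = 1 \;\; \hbox{(infallible reasons to believe P)}, \qquad Cr_A(P) = \Pr\nolimits_R(P).$$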

The probabilistic representation of reasons raises methodological difficulties. It is not always clear how we should represent perceptual learning, defeaters and undermining evidence in a probabilistic framework (Christensen 1992; Pryor 2013; Weisberg 2015). While we should take these difficulties seriously, there are two reasons why I maintain a probabilistic representation of reasons. First, as we will see in Section 3, some authors defending rational epistemic akrasia make use of a probabilistic representation of fallible reasons.Footnote 5 Since my goal is to address arguments found in the literature, it seems justified to make use of the probabilistic representation of reasons. Second, even if the probabilistic representation of reasons is limited and problematic, understanding the type of results we can get in this framework could eventually help us to develop similar arguments in other frameworks. So, even if this is not the most adequate representation of reasons, it is worth considering what results we reach through such a representation.

1.2 Formulating the enkratic requirement(s)

Akratic agents seem to be irrational. Many people have suggested that akrasia reveals inter-level incoherence – that is, incoherence between an agent's first and higher-order attitudes.Footnote 6 The “anti-akrasia constraint” can be defined as follows:

Reasons Enkrasia. Rationality requires that (if A believes that he or she has sufficient epistemic reason to believe P, then A believes that P).

However, we find many variants of this thesis in the literature, as in the following:Footnote 7

Evidence Enkrasia. Rationality requires that (if A believes that his or her evidence sufficiently supports the belief that P, then A believes that P).

Ought Enkrasia. Rationality requires that (if A believes that he or she ought to believe that P, then A believes that P).

Justification Enkrasia. Rationality requires that (if A believes that he or she is epistemically justified in believing that P, then A believes that P).

“Rational” Enkrasia. Rationality requires that (if A believes that rationality requires of him or her to believe that P, then A believes that P).

Obviously, claims concerning epistemic rationality, knowledge, justification, epistemic obligations and evidence are related to epistemic reasons in some ways. However, we cannot assume that all the above claims are equivalent. Since I will not assume that claims concerning justification, rationality, obligations, epistemic reasons and evidence are equivalent, I will focus on Reasons Enkrasia and leave the other variants behind.Footnote 8

Historically, philosophers have been concerned with the possibility of holding an akratic combination of attitudes.Footnote 9 More recently, philosophers have focused on the normative issue of whether an epistemically akratic combination of attitudes can be rational. These two issues are related. If agents cannot hold an akratic combination of attitudes, determining whether an epistemically akratic combination of attitudes can be rational seems pointless, since such a situation can never happen. In this paper, I will assume that akratic combinations of attitudes are possible. I will focus on whether such combinations of attitudes are necessarily irrational.

1.3 The case for Rational Puzzle

I now wish to explain the case for Rational Puzzle, which is a reconstruction from two distinct positions that can be found in the literature. Since these two positions were developed independently of each other, I want to explain why, taken together, they constitute a puzzle.

First, suppose that there are process requirements of rationality, such as responding to the epistemic reasons agents have, and that there is no unsolvable dilemma of rationality. In cases like Bad Reasoning, it could be suggested that one way to respond correctly to the evidence agents have is to transgress Reasons Enkrasia. According to Allen Coates, if Holmes tells Watson that he is irrational in concluding that Jack the Ripper is the killer, Watson's rational response to such higher-order evidence is to believe that his epistemic reasons (including deductive reasoning and evidence) do not support the conclusion that Jack the Ripper is guilty. However, recall that there are rational false beliefs. So, perhaps Watson is rational in concluding that Jack the Ripper is the killer. In such a case, Watson could be rational in believing that Jack the Ripper is guilty while also responding correctly to his evidence by concluding that his epistemic reasons do not support that conclusion (Coates 2012: 113–15). According to Coates:

Before he spoke to Holmes, Watson's belief was, by hypothesis, perfectly rational. And the only change in his epistemic circumstances is that he has heard Holmes's assessment. So any objection which claims that his belief is irrational must show that Holmes's assessment of it somehow explains why it is irrational. (Coates 2012: 115)

Therefore, Watson could be rational in having an akratic combination of attitudes. Now, what about the fact that violating Reasons Enkrasia appears deeply incoherent? According to Maria Lasonen-Aarnio, when an agent has higher-order evidence concerning his or her own rationality, it is not always possible to identify a single coherent combination of attitudes that he or she could hold (Lasonen-Aarnio 2014, Forthcoming). For example, she argues that "recommending that one believe that a rule is flawed is not tantamount to recommending that one stop following the rule. That one should believe that one shouldn't φ doesn't entail that one shouldn't φ" (Lasonen-Aarnio 2014: 343). In agreement with Coates, Lasonen-Aarnio concludes that it is sometimes rational for an agent to maintain incoherent combinations of beliefs, and thus to transgress Reasons Enkrasia.

Alex Worsnip agrees that, in some situations, Watson's evidence can support (i) that Jack the Ripper is the killer and also support (ii) that his evidence does not support that conclusion. What Worsnip rejects is that responding correctly to the evidence agents have is rationally required. Indeed, while Coates and Lasonen-Aarnio's conclusion presupposes that responding correctly to the evidence agents have is a requirement of rationality, Worsnip denies that if Watson's evidence supports P, then rationality requires of Watson that he believe that P, especially in cases where this means having an incoherent combination of attitudes. According to him, evidence-responsiveness and inter-level coherence "are, properly understood, fundamentally different kinds of normative claim, such that they should not be stated using the same normative concept" (Worsnip 2015: 6). As I indicated in the previous section, for the sake of comparability between arguments found in the literature, I'll reinterpret Worsnip's claim in terms of epistemic reasons. A plausible reinterpretation of Worsnip's conclusion is to deny that Reasons-Responsiveness necessarily has to do with rationality.Footnote 10

In summary, it seems that we must accept the puzzle. On the one hand, we can admit that Reasons-Responsiveness is a requirement of rationality and that there is no dilemma of epistemic rationality, but then we must give up Reasons Enkrasia. On the other hand, we can admit that there is no dilemma of epistemic rationality and that Reasons Enkrasia is a rationality requirement, but then we must give up Reasons-Responsiveness. Rational Puzzle seriously affects how rationality is canonically understood. Contra Lasonen-Aarnio and Coates, it is plausible that coherence requirements are genuine requirements of rationality, including coherence between an agent's first and higher-order attitudes.Footnote 11 Contra Worsnip, it seems that epistemic rationality has to do with more than mere coherence. Otherwise, if conspiracy theorists and hard-core skeptics are fully coherent, they would also be fully rational, and that doesn't seem correct.Footnote 12 A priori, no position is comfortable or copes well with other plausible theoretical assumptions regarding epistemic rationality.Footnote 13

2. RATIONAL PUZZLE AND LEVEL-SPLITTING

In this section, I will argue that Rational Puzzle holds only if an agent can have sufficient epistemic reason to believe that “he or she has sufficient epistemic reason to believe P,” while not having sufficient epistemic reason to believe P.Footnote 14 I will refer to these situations as cases of level-splitting.

A key feature of Rational Puzzle is that Reasons-Responsiveness and Reasons Enkrasia sometimes lead to incompatible verdicts. As long as higher-order epistemic reasons are coherent with first-order epistemic reasons, Reasons Enkrasia and Reasons-Responsiveness are compatible. For example, suppose that an agent has sufficient epistemic reason to believe that “he or she has sufficient epistemic reason to believe P” and sufficient epistemic reason to believe P. In such a case, Reasons-Responsiveness requires of that agent to believe that he or she has sufficient epistemic reason to believe P and to believe P. Such a combination of attitudes satisfies Reasons Enkrasia. So, if an agent's first and higher-order epistemic reasons are coherent, Reasons-Responsiveness and Reasons Enkrasia do not lead to incompatible verdicts.

2.1 Level-splitting and incommensurability

I see two possible explanations of why, in some situations, first-order reasons and higher-order reasons come apart. The first explanation is that higher-order reasons are of a special kind and cannot be compared to first-order reasons. Let's call this the argument from incommensurability, as in the following:

Incommensurability. Epistemic reasons to believe P and epistemic reasons concerning what one has sufficient reason to believe are incommensurable. In such a case, the balance of epistemic reasons to believe P differs from the balance of reasons for believing that one has sufficient epistemic reason to believe P.

Here is another way to put it. Let's suppose that first-order reasons are always commensurable with higher-order reasons. In view of the foregoing, reasons to believe that there are reasons to believe P are reasons for believing P, and reasons for believing P are reasons to believe that there are reasons to believe P. So, in a case like Bad Reasoning, Watson should not judge that he has two distinct sets of epistemic reasons (one set of epistemic reasons concerning P and one set of epistemic reasons concerning whether it is rational to conclude that P). He should consider that Holmes's claim that he made a mistake in processing his epistemic reasons is a new reason affecting (to a certain degree) his conclusion that Jack the Ripper is guilty.Footnote 15 But now, suppose that Holmes's testimony is not a reason against the conclusion that Jack the Ripper is guilty, but only a reason to believe that such a conclusion is not supported by epistemic reasons.Footnote 16 In such a case, sufficient epistemic reasons could lead to level-splitting. Thus if Incommensurability is true, we would learn something from cases like Bad Reasoning. Indeed, from Watson's perspective, Holmes's testimony could be sufficient evidence to draw a higher-order conclusion, while the various pieces of evidence he gathered could lead him to conclude that Jack the Ripper is the killer. Each type of epistemic reasons could play distinct roles.

Following many others, I find the Incommensurability argument highly implausible.Footnote 17 Indeed, suppose that there are cases where higher-order epistemic reasons are not commensurable with reasons for believing P or against believing P. Now, let's assume that an agent has an infallible reason to believe that he or she has sufficient reason to believe P and an infallible reason against believing P. Such a situation would not be impossible, since the incommensurability argument implies that higher-order epistemic reasons and first-order reasons can be of a different kind. So, an epistemically rational agent could be perfectly confident that he or she has sufficient epistemic reason to believe P, but also be perfectly confident that P is false. As Horowitz rightly stresses, the agent would conclude that whether P and whether he or she has epistemic reasons to believe P are entirely separate issues, which appears nonsensical (Horowitz 2014a: 726). Specifically, it is highly implausible that, in some cases, reasons to believe that there are reasons to believe P do not have even the slightest impact on reasons to believe P.

2.2 Level-splitting and fallible reasons

If reasons for believing P and reasons for believing that there are reasons for believing P are commensurable, this means that higher-order reasons can somehow count as first-order reasons. In such a context, the denial of Incommensurability paves the way for various principles connecting higher-order reasons and first-order reasons. Nevertheless, such principles could be correct while cases of level-splitting are possible.Footnote 18 So, there must be another explanation of why first-order reasons and higher-order reasons can come apart.

A second explanation of why there could be cases of level-splitting is that higher-order epistemic reasons are fallible. We can imagine how higher-order fallible reasons can open the door to cases of level-splitting, as in the following:

Higher-Order Fallibilism. One can have fallible sufficient reason for believing that one has sufficient reason to believe P. In a case where such a reason is misleading, it is possible that one is rational to conclude that he or she has sufficient epistemic reason to believe P while lacking sufficient reason for the belief that P.

Suppose that an agent has fallible reasons for believing that he or she has sufficient reason to believe P. If higher-order reasons are fallible, having sufficient reason for believing that one has sufficient reason to believe P does not entail the conclusion that one has sufficient reason to believe P, since these reasons could be misleading. So, it is possible that fallible higher-order reasons lead to level-splitting. It seems that Rational Puzzle could be explained by Higher-Order Fallibilism, since one could be rational to believe that he or she has sufficient epistemic reason to believe P while not believing P (either by withholding judgment on whether P or by disbelieving P).

If Higher-Order Fallibilism is true, we will learn something from cases like Fallible Reasons. Suppose that Watson believes that he has sufficient reason to conclude that Jack the Ripper is the killer. Watson's belief can be based on sufficient epistemic reasons, but not necessarily on infallible epistemic reasons. While such a belief can be rational, it could be based on fallible and misleading reasons. This means that Watson could lack sufficient reasons to draw the conclusion that Jack the Ripper is the killer. In such a context, Watson would be rational not to conclude that Jack the Ripper is the killer.

It seems that, apart from Incommensurability and Higher-Order Fallibilism, there is no third possible explanation of why Rational Puzzle holds. Indeed, if higher-order sufficient reasons are infallible, having sufficient reason for believing that one has sufficient reason to believe P means that one inevitably has sufficient reason to believe P, and so there cannot be cases of level-splitting. Consequently, if Rational Puzzle holds, the culprit is Higher-Order Fallibilism.

3. HIGHER-ORDER FALLIBILISM

In this section, I start by suggesting that Rational Puzzle is closely related to other well-known issues concerning fallible reasons. We cannot give a definitive answer to Rational Puzzle without solving traditional problems of responding to fallible reasons, such as the lottery paradox. However, I will argue that, under one interpretation of higher-order reasons, there is no obstacle to eliminating higher-order fallible reasons. My argument relies on the probabilistic representation of reasons introduced in Section 1.1 and can be roughly summarized as follows:

(1) There can be cases of level-splitting only if agents respond to higher-order fallible reasons.

(2) Relative to the probabilistic representation of reasons, higher-order fallible reasons can be represented as conditional probabilities and higher-order infallible reasons can be represented as unconditional probabilities.

(3) But conditional probabilities can be replaced by unconditional probabilities.

(4) So, relative to the probabilistic representation of reasons, fallible higher-order reasons can be replaced by infallible higher-order reasons, and agents can avoid responding to fallible higher-order reasons.

(C) So, relative to the probabilistic representation of reasons, cases of level-splitting can be avoided.

Consequently, there is no reason why a rational agent would necessarily have to choose between satisfying Reasons-Responsiveness and satisfying Reasons Enkrasia. Furthermore, it seems plausible that a rational agent would prefer to ground his or her beliefs concerning what he or she has sufficient reason to believe on infallible reasons. In summary, I do not rule out the possibility that a rational agent can maintain an akratic combination of attitudes while responding correctly to his or her epistemic reasons, but I claim that this would be an odd preference.

3.1 Canonical problems related to responding to fallible reasons

The possibility of responding correctly to fallible reasons is problematic. On one hand, it seems perfectly plausible that rational beliefs are sometimes false (Greco 2014: 203). It seems that an agent can be rational in believing P when P's epistemic probability is smaller than 1. For example, if one is certain that P has a 0.95 chance (or any other high but imperfect threshold) of obtaining, then one is rationally permitted to believe P. On the other hand, responding to fallible reasons leads to numerous puzzles. Specifically, rational reasoning should have some logical properties, such that if you reason correctly from rational attitudes, your conclusion should also be rational. These two demands sometimes conflict, as in the following examples:

Lottery. Imagine a lottery with a sufficiently high number of tickets. Only one ticket is a winner. Each ticket is equally likely to win. Since the probability that each ticket will lose is more than 0.95 (or any other probability that you like), an agent should rationally believe that each ticket is a loser. Indeed, the agent's beliefs concerning chances of winning reflect his or her knowledge of the objective probabilities. However, it is rational to believe that one ticket will win. So, one should believe that each ticket is a loser and that one ticket is a winner, which is inconsistent.

Cheap Justification. Imagine that the sufficient threshold for believing any proposition is 0.95. An agent rationally believes that there is a 0.96 chance that there is a 0.96 chance that P (and a 0.04 chance that there is 0 chance that P). Indeed, the agent's rational beliefs concerning chances reflect his or her knowledge of the objective probabilities. Since the sufficient threshold for believing a proposition is 0.95, the agent then comes to the conclusion that there is a 0.96 chance that P (since, from the agent's perspective, such a proposition has a 0.96 chance of obtaining). The agent then comes to the conclusion that P, since (again) the sufficient threshold for believing a proposition is 0.95. However, since 0.96·0.96 ≈ 0.92, the agent is irrational in believing P (since 0.92 < 0.95).Footnote 19 So, one is rationally prohibited from believing that P, but can still manage to identify a justificatory chain to the conclusion that P, which is nonsensical.
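To make the arithmetic behind Cheap Justification explicit, here is the calculation of P's unconditional epistemic probability (a worked step added for clarity, not part of the original case description):

$$\Pr(P) = 0.96 \cdot 0.96 + 0.04 \cdot 0 \approx 0.92 < 0.95.$$

The "cheap" justificatory chain exploits the fact that each individual step clears the 0.95 threshold (0.96 ≥ 0.95), even though P's unconditional probability does not.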

Various solutions to cases like Lottery and Cheap Justification have been suggested. A first solution is to argue that sufficient reasons are infallible (or may not saliently appear fallible).Footnote 20 An agent can rationally believe that P only if, relative to his or her evidence, P could not be false. In cases like Lottery, such a solution prohibits a rational agent from believing that each ticket is a loser, since it is possible that one ticket is a winner. In cases like Cheap Justification, if P is uncertain, no infallible justificatory chain leading to the conclusion that P can be identified, since some “residual” uncertainty will remain in any justificatory chain.

Another solution is to argue that rational beliefs do not necessarily ground rational reasoning.Footnote 21 While P and Q logically imply (P^Q), rationally believing that P and rationally believing that Q are not necessarily sufficient for rationally concluding that (P^Q). In cases like Lottery, this solution implies that, while I rationally believe that ticket 1 is a loser, that ticket 2 is a loser and so forth, I am not rationally permitted to believe that (ticket 1 is a loser and ticket 2 is a loser and … ticket n is a loser). In fact, this solution to Lottery entails the denial of Intra-Level Coherence, which roughly states that if an epistemically rational agent believes that P and believes that Q, it is false that he or she believes that ~(P^Q). In cases like Cheap Justification, I may rationally believe that there is a high chance that P, but that does not necessarily entail the rational conclusion that P, since my belief that there is a high chance that P is based on fallible reasons.

3.2 Rational Puzzle and fallible reasons

The above analysis of fallible reasons sheds light on Rational Puzzle. Let's assume for a moment that the first solution to Lottery and Cheap Justification is correct and that sufficient reasons are infallible. This would solve Rational Puzzle, since rational agents would be required to respond only to infallible reasons. Having sufficient reason to believe that one has sufficient reason to believe P would amount to having infallible reason to believe that one has infallible reason to believe P, which would necessarily secure the rational conclusion that P. Thus, there could never be sufficient reason to believe that one has sufficient reason to believe P without there being sufficient reason to believe P.

Now, let's assume that the second solution to Lottery and Cheap Justification is correct, and so that rational beliefs do not necessarily ground rational reasoning. In such a context, the incoherentist solution to Rational Puzzle would then be correct. According to incoherentism, Reasons Enkrasia is not a genuine rationality requirement, since one can be rational in believing that one has sufficient reason to believe P, while not believing that P. Consider cases like Cheap Justification. One is rational in believing that there is a 0.96 chance that P. A 0.95 chance that P would constitute a sufficient reason to believe P. Nevertheless, it would be irrational for him or her to believe P, since relative to that agent's epistemic reasons, P has a 0.92 chance of being the case. Interestingly, some of Lasonen-Aarnio's examples in favour of the conflict between an agent's rational expectations of the rational credence in P and enkratic requirements are very close to cases like Cheap Justification, as she indicates in the following:

Assume that the threshold for belief is 0.9, and that you know this. Assume that you have the following rational credences: your credence that the rational credence in p is 0.89 is 0.9, and your credence that the rational credence in p is 0.99 is 0.1. Then, your expectation of the rational credence is 0.9 … Given the 0.9 threshold for belief, you believe p. But you also believe that it is not rational to believe p. Hence, you are in a state of epistemic akrasia. (Lasonen-Aarnio 2015: 169)Footnote 22
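For clarity, the expectation in Lasonen-Aarnio's example works out as follows (the numbers are hers; the explicit calculation is added here):

$$E[\hbox{rational credence in } p] = 0.9 \cdot 0.89 + 0.1 \cdot 0.99 = 0.801 + 0.099 = 0.9.$$

The expectation just meets the 0.9 threshold, licensing belief in p, while the most probable hypothesis about the rational credence (0.89, with credence 0.9) falls below the threshold, licensing the belief that believing p is not rational.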

Offering a full solution to Rational Puzzle boils down to determining the constraints on responding to fallible reasons. Rather than being a brand new puzzle, Rational Puzzle seems to be a consequence of latent issues concerning fallible reasons. If sufficient reasons are infallible, then there cannot be a dilemma between Reasons Enkrasia and Reasons-Responsiveness. But if rational beliefs do not necessarily ground rational reasoning, then Reasons Enkrasia could not be a genuine rationality requirement. Thus, as long as we do not have a clear picture of the constraints limiting how agents respond to fallible reasons, we will not be in a position to give a full answer to Rational Puzzle.

3.3 The possibility of always responding to higher-order infallible reasons

Let's now assume that the rational status of Reasons Enkrasia is uncertain and that we cannot give a full answer to Rational Puzzle. In view of the foregoing, what are we in a position to defend? I previously argued that if all higher-order reasons are infallible, then there cannot be cases of level-splitting. This means that there are two ways to offer a partial solution to Rational Puzzle, as in the following:

(1) While there are first-order fallible reasons, higher-order reasons concerning facts about reasons or rationality are infallible.Footnote 23

(2) While it is possible for an epistemically rational agent to respond to higher-order fallible reasons, he or she is always in a position to respond to higher-order infallible reasons.

I will now provide an argument for (2), the claim that an agent is always in a position to respond to higher-order infallible reasons. This provides a partial solution to Rational Puzzle, since if one can avoid responding to higher-order fallible reasons, then one is always in a position to satisfy both Reasons Enkrasia and Reasons-Responsiveness.

I previously assumed that epistemic reasons warrant epistemic probabilities, understood as the probabilities warranted by an agent's body of epistemic reasons. With respect to Rational Puzzle, we can learn something from such a representation of reasons.

There are two main types of probability assessments – namely, conditional probabilities and unconditional probabilities. In other words, we can wonder what P's unconditional probability is, but we can also wonder what P's probability is on the condition that some states of affairs (Q, R, S …) obtain.Footnote 24 Relative to the probabilistic representation of reasons, fallible higher-order reasons can be represented by conditional epistemic probabilities. If the probability that [P's probability is 0.9] is 0.9, then P's probability is 0.9 on the condition that Q obtains, and Q's probability is 0.9. In such a case, it could be false that P's probability is 0.9, since such a claim is conditional on Q obtaining, and Q is uncertain. By way of contrast, infallible higher-order reasons can be represented by unconditional epistemic probabilities. If it is certain that P's probability is 0.9, then such an evaluation of P's probability is not conditional on some merely probable event Q obtaining.

One reason why it seems appropriate to represent higher-order reasons by conditional and unconditional epistemic probabilities is that such a representation is compatible with the Commensurability constraint discussed in Section 2 (according to such a constraint, higher-order reasons can count as first-order reasons). Here is why. Suppose that P's epistemic probability is 0.9 on the condition that Q obtains and that P's epistemic probability is 0 on the condition that ~Q obtains. In such a context P's probability will vary depending on Q's obtaining. In particular, if Q were certain, this would entail that P's probability is 0.9. Similarly, if ~Q were certain, this would entail that P's probability is 0. As we can see, the existence of reasons for or against the conclusion that Q can affect the probability of first-order conclusions such as P. Since epistemic reasons are represented by epistemic probabilities, we can conclude that acquiring higher-order epistemic reasons can somehow count as acquiring first-order reasons. Hence, the Commensurability condition discussed in Section 2 is satisfied.
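The point can be put compactly with the law of total probability; assuming, as in the example just given, that Q and ~Q are the only relevant conditions:

$$\Pr(P) = \Pr(P \mid Q)\Pr(Q) + \Pr(P \mid {\sim}Q)\Pr({\sim}Q) = 0.9 \cdot \Pr(Q) + 0 \cdot \Pr({\sim}Q).$$

Any reason bearing on Q thereby shifts the probability of P, which is exactly what the Commensurability constraint demands.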

Here is the trick: as long as chains of conditional probabilities end with an unconditional probability, a conditional probability can be replaced by an unconditional probability. For example, if the epistemic probability that [the epistemic probability that P is 0.9] is 0.9 and the epistemic probability that [the epistemic probability that P is 0] is 0.1, it is possible to determine P's unconditional epistemic probability. In this specific case, P's unconditional epistemic probability would be 0.81.Footnote 25 In other words, the epistemic probability that [the epistemic probability that P is 0.81] is 1. Now, recall that infallible higher-order reasons can be represented by unconditional epistemic probabilities. This means that, all things being equal, we can pass from higher-order fallible reasons (as represented by conditional epistemic probabilities) to higher-order infallible reasons (as represented by unconditional epistemic probabilities). That is, the same body of epistemic reasons can be understood as providing higher-order fallible reasons and higher-order infallible reasons.
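Spelled out with the numbers just given, the replacement is a straightforward application of the law of total probability (the calculation presumably corresponds to the one referenced in footnote 25):

$$\Pr(P) = 0.9 \cdot 0.9 + 0.1 \cdot 0 = 0.81, \qquad \hbox{and so} \qquad \Pr[\Pr(P) = 0.81] = 1.$$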

We can move from conditional epistemic probabilities to unconditional epistemic probabilities as long as chains of conditional probabilities end with an unconditional probability. What about the cases where P's epistemic probability is infinitely conditional? For example, there could be cases where P's probability is conditional on Q, Q is conditional on R, and such a regress does not stop with a "final" unconditional probability. Even in such situations, there is a modest sense in which we can move from higher-order fallible reasons to higher-order infallible reasons. Indeed, imagine that P's probability is determined by the following series:Footnote 26

$$P(P) = P(A_1) - P(A_2) - P(A_3) - \ldots - P(A_n), \quad \hbox{where } P(A_n) = 0.9 \cdot 10^{1-n} \hbox{ and } n \hbox{ tends to infinity}$$
$$\hbox{If } P(P) = 0.9 - (0.9 \cdot 0.1) - (0.9 \cdot 0.01) - \ldots - 0.9 \cdot 10^{1-n}, \hbox{ then } P(P) \hbox{ converges to } 0.8.$$

As we can see, P's probability is here defined by an infinite series of merely probable events, but still converges to 0.8. The lesson here is that while P's probability is conditional on a series of merely probable events, there is a modest sense in which we can determine P's unconditional probability, since P's unconditional probability converges to 0.8. If such an infinite probabilistic chain converges, then there is a modest sense in which P's unconditional probability can be determined.Footnote 27
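The convergence claim can be checked with the closed form of the geometric series (a routine verification added here, not in the original text):

$$\sum_{n=2}^{\infty} 0.9 \cdot 10^{1-n} = 0.9 \cdot \frac{0.1}{1-0.1} = 0.1, \qquad \hbox{so} \qquad P(P) = 0.9 - 0.1 = 0.8.$$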

This is an important step toward solving Rational Puzzle. Relative to the probabilistic representation of reasons, higher-order fallible reasons can be represented by conditional epistemic probabilities and higher-order infallible reasons can be represented by unconditional epistemic probabilities. Since conditional probabilities can be replaced by an unconditional probability, fallible higher-order reasons can be replaced by infallible higher-order reasons, and so it is rational for agents to avoid responding to fallible higher-order reasons. In such a context, there is no specific reason why it would be necessary for agents to respond to higher-order fallible reasons. Furthermore, if agents can avoid responding to higher-order fallible reasons, cases of level-splitting can also be avoided. This provides a partial solution to Rational Puzzle.

3.4 A step further: the conflict between the Rational Reflection principle and Enkrasia

The argument I just offered can shed light on the putative conflict between the Rational Reflection principle and enkratic requirements. The Rational Reflection principle roughly states that an agent's rational expectations of the rational credence in P constrain his or her rational credence in P. Lasonen-Aarnio (2015: 169) claims that satisfying the Rational Reflection principle can lead to forming akratic combinations of attitudes. This is so because one can rationally believe P while rationally believing that one's own belief is irrational, as in the following line of reasoning:

(1) It is rational for A to believe P if and only if A has a rational credence of at least 0.9 in P.

(2) The rational credence that [the rational credence in P is 0.89] is 0.9, and the rational credence that [the rational credence in P is 0.99] is 0.1.

(3) Following the Rational Reflection principle, Cr(P)=(0.99·0.1)+(0.89·0.9) = 0.9, and so A rationally believes P.

(4) But the credence in [the rational credence in P is 0.89] is 0.9. So, A rationally believes that the rational credence in P is 0.89 and that believing P is irrational.

However, Lasonen-Aarnio assumes that credence assignments are rational only insofar as they track (or reflect) epistemic probabilities (Lasonen-Aarnio Forthcoming: 2). This means that, in the above situation, the epistemic probability that [P's epistemic probability is 0.89] is 0.9 and the epistemic probability that [P's epistemic probability is 0.99] is 0.1. Now, if the epistemic probability that [P's epistemic probability is 0.89] is 0.9, this means that P's epistemic probability is 0.89 conditional on an event Q obtaining, and Q's epistemic probability is 0.9. Similarly, if the epistemic probability that [P's epistemic probability is 0.99] is 0.1, this means that P's epistemic probability is 0.99 conditional on an event Q not obtaining, and ~Q's epistemic probability is 0.1. Finally, we can use P's conditional probabilities to calculate P's unconditional probability. In the above case, P's unconditional epistemic probability is 0.9 (since (0.89·0.9)+(0.99·0.1) = 0.9). This means that the epistemic probability that [P's epistemic probability is 0.9] is 1.

Now, recall that infallible higher-order reasons are represented by unconditional epistemic probabilities. In such a context, since 0.9 is P's unconditional epistemic probability, it would be rational for an agent to be certain that 0.9 is the rational credence in P. In other words, he or she has an infallible reason to conclude that 0.9 is the rational credence in P, and so being certain that 0.9 is the rational credence in P would be an appropriate response to his or her epistemic reasons. There is no need for the agent to believe that such a credence assignment is irrational relative to his or her epistemic reasons. The agent has all the information required not to be mistaken about his or her own epistemic rationality.

Here is another way to put it. In the described case, an agent's rational credences can track the following epistemic probabilities: the epistemic probability that [P's epistemic probability is 0.89] is 0.9 and the epistemic probability that [P's epistemic probability is 0.99] is 0.1. As long as sufficient reasons can be fallible, tracking these epistemic probabilities can lead to a conflict between the Rational Reflection principle and enkratic requirements. However, an agent's rational credences can also track the following epistemic probability: the epistemic probability that [P's epistemic probability is 0.9] is 1. If the agent's rational credences track this epistemic probability, we get the following result:

(5) It is rational for A to believe P if and only if A has a rational credence of at least 0.9 in P.

(6) The rational credence that [the rational credence in P is 0.9] is 1.

(7) Following the Rational Reflection principle, Cr(P)=(0.9·1) = 0.9, and so A rationally believes P.

(8) Since the credence in [the rational credence in P is 0.9] is 1, A rationally believes that believing P is rational.

As we can see, when tracking higher-order infallible reasons (as represented by unconditional epistemic probabilities), the Rational Reflection principle does not lead to forming an akratic combination of beliefs.

Now, perhaps we should not accept the Rational Reflection principle (Lasonen-Aarnio (2015) ultimately rejects such a principle). I am not defending such a principle here. What I wish to stress is that, when taking the possibility of responding to higher-order infallible reasons into account, the conflict between the Rational Reflection principle and enkratic requirements is a lot less clear. Surely, when agents respond to higher-order fallible reasons, the Rational Reflection principle can conflict with enkratic requirements. However, as long as it is possible for the agent to avoid responding to higher-order fallible reasons (which is always the case), such a conflict is resolved.

3.5 The relevance of responding to higher-order infallible reasons

If agents are always in a position to respond to higher-order infallible reasons, this means that, minimally, it is always possible to simultaneously satisfy Reasons Enkrasia and Reasons-Responsiveness. I will now go a step further and suggest that rational agents prefer responding to infallible higher-order reasons. While this will not prove that Reasons Enkrasia is a genuine rationality requirement, such an argument will make it plausible that Reasons Enkrasia is a requirement of rationality, since an agent would have no reason to entertain an epistemically akratic combination of attitudes.

Responding to higher-order infallible reasons provides a better answer to cases like Cheap Justification. Recall that, in Cheap Justification, an agent rationally believes that there is a 0.96 chance that there is a 0.96 chance that P (and a 0.04 chance that there is 0 chance that P), and such rational beliefs concerning chances reflect his or her knowledge of the objective probabilities. If cases like Cheap Justification support incoherentism, it must be admitted that a rational agent can frequently figure out a misleading chain of justification in favour of numerous higher-order beliefs concerning sufficient reasons. For example, in some situations where I know that P's objective probability is 0.75, I could believe that there is a ≈0.87 chance that there is a ≈0.87 chance that P, since 0.87·0.87 ≈ 0.75.Footnote 28 Assuming that 0.85 is a sufficient probabilistic threshold, I could then come to the conclusion that there is a ≈0.87 chance that P. But there's something quite wrong with such a result. I take it as a datum that no rational agent would want to have such a misleading justificatory chain of attitudes concerning sufficient reasons. Plausibly, if I know that P's objective probability is 0.75, I am better off believing that there is a 0.75 chance that P, and this seems best explained by the fact that I should respond to infallible higher-order reasons.
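For comparison, here is the arithmetic behind the misleading chain (rounding √0.75 ≈ 0.866 to 0.87, as the text does; the explicit calculation is added here):

$$\Pr(P) = 0.87 \cdot 0.87 + 0.13 \cdot 0 \approx 0.75 < 0.85, \qquad \hbox{even though } 0.87 \geq 0.85 \hbox{ at each step of the chain.}$$

Responding to the infallible higher-order reason amounts to reasoning directly from Pr(P) = 0.75, which blocks the misleading intermediate conclusion that there is a ≈0.87 chance that P.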

Here is another way to understand my point. Allowing fallible higher-order reasons can lead to patently strange situations that no rational agent would want to be in (especially since they can easily be avoided). Consider the following conversation:

Watson: What are the odds that Jack the Ripper did it?

Holmes: You may rationally believe that there is a ≈0.87 chance that Jack the Ripper did it.

Watson: Why would it be rational for me to believe that there is a ≈0.87 chance that Jack the Ripper is guilty?

Holmes: Well, let's see. Undoubtedly, there is a 0.75 chance that Jack the Ripper did it, but in this specific case there is a ≈0.87 chance that there is a ≈0.87 chance that Jack the Ripper did it (and a 0.13 chance that there is 0 chance that Jack the Ripper did it). The sufficient threshold for rationally believing a proposition is 0.85. In such a context, it is rational for you to conclude that there is a ≈0.87 chance that Jack the Ripper is guilty.

Watson: Okay, and so following the same explanation you just provided, I am also rational in concluding that Jack the Ripper did it.

Holmes: No! Your belief that there is a ≈0.87 chance that Jack the Ripper did it is not a sufficient reason for believing that Jack the Ripper did it. You see, since there is no doubt that the objective probability that Jack the Ripper did it is 0.75, you are not permitted to believe that Jack the Ripper is the killer.

Watson: Oh, so you first gave me information from which I cannot rationally reason, even though you had information from which I could have reasoned correctly. You gave me a sufficient reason to believe something from which I would then reason badly.

Holmes: Exactly!

What do we learn from the above conversation? Even if we assume that Watson did not violate any rule of rationality in believing that there is a ≈0.87 chance that P, it is patently clear to him that, in believing that there is a 0.75 chance that P, he has access to a more informative and useful way to reason. Believing that there is a 0.75 chance that P would ground correct reasoning, while believing that there is a ≈0.87 chance that P will not. Also, while Holmes is not making any rational mistake in presenting the chances differently, there is a better way for him to inform Watson of P's likelihood. Thus, in situations where fallible reasons concerning what is probable can be replaced with infallible reasons concerning what is probable, the latter appears preferable.

In summary, since beliefs concerning sufficient reasons often aim at reasoning correctly, a rational agent would prefer responding to infallible reasons concerning what he or she has sufficient reason to believe. Furthermore, there seems to be no structural obstacle to avoiding responding to higher-order fallible reasons. In such a context, it is possible that an epistemically akratic combination of attitudes is rational, but the higher-order belief that one has sufficient reason to believe P would play no role in an agent's reasoning (or a potentially misleading role). Even if, strictly speaking, it would not be irrational to respond to fallible higher-order reasons, I see no reason why an agent would prefer responding to fallible higher-order reasons.

4. CONCLUSION AND DISCUSSION

This paper offers a partial solution to Rational Puzzle, the view that at least one of Reasons-Responsiveness and Reasons Enkrasia is not a genuine rationality requirement. I first argued that Rational Puzzle holds only if level-splitting can be rational – that is, only if a rational agent can have sufficient epistemic reason to conclude that "he or she has sufficient epistemic reason to believe P," while having sufficient epistemic reason against believing P. I then explained why level-splitting is possible only if higher-order epistemic reasons are sometimes fallible and misleading.

Since an agent is always in a position to respond to higher-order infallible reasons, he or she never has to choose between satisfying Reasons-Responsiveness and Reasons Enkrasia. Furthermore, since reasoning from infallible higher-order reasons appears preferable to an epistemically rational reasoner, I see no reason why an agent would reason from fallible higher-order reasons and end up with an akratic combination of attitudes. This is why I partially solved Rational Puzzle: I offered an argument that we can always satisfy both Reasons Enkrasia and Reasons-Responsiveness.

Nevertheless, Reasons Enkrasia could fail to be a genuine rationality requirement, since strictly speaking, I did not prove that inter-level incoherence is necessarily irrational. As I argued, proving that incoherence is irrational would also require solving problems such as the lottery paradox. This is why I did not offer a full answer to Rational Puzzle, which would include a principled vindication of Reasons Enkrasia and Reasons-Responsiveness.

The argument of this paper has clear limits. I assumed that a probabilistic representation of reasons was correct and that we could reach the same results through other theories, such as possibility theory or ranking theory. But as I indicated in Section 1.1, the probabilistic representation of reasons raises methodological difficulties. Also, assuming such an equivalence between representations of fallible reasons will be unsatisfactory to many philosophers. We should either prove that the results of this paper can be reached through any representation of reasons or adapt the argument to other frameworks.Footnote 29

Footnotes

1 I borrowed these "Watson cases" from Coates (2012) and Horowitz (2014a).

2 Some authors have suggested that there are no distinct state requirements of rationality. Specifically, process requirements of rationality, which govern how rational agents form and revise beliefs, could secure putative state requirements such as Consistency (Kolodny 2007). I do not wish to address that debate here.

3 See notably Broome (2005: 322; 2007: 355; 2013: sec. 9.2).

5 For instance, Lasonen-Aarnio indicates that "a doxastic state in a proposition p is epistemically permitted if and only if it tracks the probability of p on one's evidence, or the evidential probability of p" (Lasonen-Aarnio Forthcoming: 2).

6 Alexander (Reference Alexander2013) suggests that, when agents have a higher-order doubt about P, they should not take a higher-order attitude towards P. Broome (Reference Broome2013: 22–23, 170–71) roughly suggests that, in practical cases, failure to conform to the enkratic requirement is an internal failure, a failure with respect to your own deliberation and standards. However, he suggests that the epistemic version of Enkrasia brings more difficulties (Broome Reference Broome2013, 170–2, 216–19). Greco (Reference Greco2014) argues that epistemic akrasia leads to a kind of fragmentation or irrational inner conflict. Hinchman (Reference Hinchman2013) defends the claim that epistemically akratic agents end up in a situation of self-mistrust. According to Horowitz (Reference Horowitz2014a), epistemically akratic combinations of attitudes lead to patently bad reasoning. Reisner (Reference Reisner2013) suggests that, while the enkratic requirement is not a rationality requirement, it is strongly connected with agentivity. According to Titelbaum, mistakes concerning rationality requirements are necessarily irrational, which implies that “no situation rationally permits any overall state containing both an attitude A and the belief that A is rationally forbidden in one's current situation” (Titelbaum Reference Titelbaum, Gendler and Hawthorne2015: 261). Titelbaum's argument is premised on the assumption that akrasia is irrational. See also Littlejohn (Reference Littlejohn2015), who endorses Titelbaum's view and adds that inter-level incoherence is the sign of an opaque mindset.

Finally, many philosophers defend the claim that akrasia is similar to Moore-paradoxical doxastic states, that is, deeply incoherent combinations of attitudes. See notably Feldman (Reference Feldman2005), Huemer (Reference Huemer, Nucceteli and Seays2007), Smithies (Reference Smithies2012) and Chislenko (Reference Chislenko2014).

7 For example, Horowitz (Reference Horowitz2014a) analyzes the converse of Evidence Enkrasia, Broome (Reference Broome2013) considers Ought Enkrasia, Feldman (Reference Feldman2005) is concerned with Justification Enkrasia, and Lasonen-Aarnio (Reference Lasonen-Aarnio2015) addresses “Rational” Enkrasia. Also, some putative requirements of rationality like the “RR principle” of the Fixed Point thesis are very close to “Rational” Enkrasia. See notably Conee (Reference Conee, Warfield and Feldman2010: sec. 3), Littlejohn (Reference Littlejohn2015: 5), Titelbaum (Reference Titelbaum, Gendler and Hawthorne2015) and Lasonen-Aarnio (Reference Lasonen-AarnioForthcoming: sect. II).

It should also be noted that many philosophers are concerned with the oddity of combinations of attitudes like the following: “P, but it is false that my epistemic reasons sufficiently support the conclusion that P” (see Horowitz Reference Horowitz2014a on this case and see Lasonen-Aarnio Reference Lasonen-AarnioForthcoming for discussion). I am not convinced that this variant of epistemic akrasia is necessarily irrational. There could be cases where an epistemically rational agent believes P while believing that his or her epistemic reasons do not sufficiently support P. For example, one could be in an epistemically permissive situation where, relative to a body of evidence, incompatible doxastic attitudes towards P are rationally permitted (see notably White (Reference White, Steup, Turri and Sosa2014) and Kelly (Reference Kelly, Steup, Turri and Sosa2014) on epistemic permissiveness). To avoid the debate surrounding permissiveness, the only counterexamples to Reasons Enkrasia I will consider look like the following: “I don't believe that P, but my epistemic reasons sufficiently support the conclusion that P.”

8 This generates a methodological difficulty, since the enkratic requirements discussed in the literature take distinct, incompatible forms. Nevertheless, as long as it does not lead to straightforwardly nonsensical results, I will engage with the literature as if other authors had discussed Reasons Enkrasia.

9 See notably Davidson (Reference Davidson, Wollheim and Hopkins1982: 302–4), Pears (Reference Pears1984: ch. 9), Mele (Reference Mele1988: ch. 2-3), Zheng (Reference Zheng2001) and Ribeiro (Reference Ribeiro2011).

10 Strictly speaking, Worsnip never said such a thing. However, this strikes me as a plausible consequence of his view, since he associates coherence with rationality and argues that Reasons-Responsiveness is best captured by different normative claims. In view of the foregoing, it seems that Reasons-Responsiveness would be best captured by claims outside the realm of rationality. Also, Worsnip's view is compatible with the claim that Reasons-Responsiveness is a source of normative pressure on agents, but such normative pressure would not come from rationality. See also Worsnip (Reference Worsnip2016).

11 See Broome (Reference Broome2013: ch. 9) or Gibbons (Reference Gibbons2013: 229–34). See also note 6.

12 See Dogramaci and Horowitz (Reference Dogramaci and Horowitz2016) and Horowitz (Reference Horowitz2014b).

13 A third possibility would be to maintain the Reasons Enkrasia and Reasons-Responsiveness requirements, but to conclude that, in some situations, agents will necessarily fall short of the ideals of epistemic rationality. If Watson concludes that he cannot rationally respond to his epistemic reasons, he could withhold judgment on whether Jack the Ripper is guilty. However, he has sufficient evidence that Jack the Ripper is the killer, so withholding would mean that he does not respond correctly to the evidence he has. If, on the other hand, Watson maintains that he can rationally respond to his epistemic reasons, he fails to respond correctly to Holmes's testimony that he is currently unable to do so. According to David Christensen, in such a case, regardless of how Watson responds to his evidence, he could be “doomed to fall short of the rational ideal” (Christensen Reference Christensen2010: 212). Such a claim is controversial. Chang (Reference Chang1997, Reference Chang2001) and Bélanger (Reference Bélanger2011) argue that all normative dilemmas can be solved. Plausibly, if rationality is supposed to offer guidance, or to consistently determine an agent's permissions and obligations, then every apparent dilemma of rationality should be solvable. This is why I here assume that putative dilemmas between Reasons Enkrasia and Reasons-Responsiveness are solvable. On the other hand, Sinnott-Armstrong (Reference Sinnott-Armstrong and Mason1996) and Williams (Reference Williams1965) defend the claim that there are unsolvable normative dilemmas.

14 Horowitz (Reference Horowitz2014a), Worsnip (Reference Worsnip2015) and Lasonen-Aarnio (Reference Lasonen-AarnioForthcoming) reach similar conclusions.

15 There is ample debate on how much weight Watson should give to Holmes's testimony. This issue is related to recent works on conciliationism in cases of peer disagreement. For arguments in favour of conciliationism, see Christensen (Reference Christensen2014) and Feldman (Reference Feldman2005). For arguments in favour of the steadfast view, see Kelly (Reference Kelly, Gendler and Hawthorne2005) and Schoenfield (Reference Schoenfield2014). See Christensen (Reference Christensen2009) for an overview of the debate.

16 Coates (Reference Coates2012) endorses such a view.

17 See notably Horowitz (Reference Horowitz2014a: sec. 3) and Littlejohn (Reference Littlejohn2015: sec. 5).

18 As I will explain in Sections 3.2 and 3.4, Lasonen-Aarnio (Reference Lasonen-Aarnio2015: 169) argues that the Rational Reflection principle, which roughly states that an agent's rational expectations of the rational credence in P constrains his or her rational credence in P, can lead to rational epistemic akrasia (see also Elga (Reference Elga2013) on the Rational Reflection principle). However, this principle presupposes that whether P and whether there are epistemic reasons to believe P are not separate issues. So, even if we admit that higher-order reasons and first-order reasons are commensurable, this doesn't seem sufficient to rule out the possibility of level-splitting.

19 At least in some situations, such an equivalence is correct. Imagine that an agent is about to roll two dice and that there is a 0.92 chance that he or she will not roll a six twice. However, he or she could consider that there are two probabilities here (one for the first die and one for the second). The agent could believe that there is a 0.96 chance that there is a 0.96 chance that he or she will not roll a six twice. Formally, there are different ways to understand this equivalence, but here is a straightforward one. Since P(B)·P(C|B) amounts to P(B^C), it suffices to say that A=(B^C) for it to be rationally permitted to replace P(A) with P(B)·P(C|B). For example, if P(B)·P(C|B)=0.92, P(B)≈0.96, and A=(B^C), then it is correct to conclude that P(A)≈0.96·0.96≈0.92. See also Worsnip (Reference Worsnip, Jackson and JacksonForthcoming: sec. 2) on a similar problem.
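To make the replacement concrete, here is a minimal numeric check using only the footnote's illustrative values (P(B) = 0.96 and P(C|B) = 0.96); the variable names are mine and purely presentational.

```python
# Minimal check of the replacement discussed in footnote 19, using its
# illustrative values (assumed here purely for illustration).
p_B = 0.96           # P(B): the "outer" probability
p_C_given_B = 0.96   # P(C|B): the nested probability

# Chain rule: P(B ^ C) = P(B) * P(C|B). If A just is (B ^ C), the nested
# pair of probabilities can be replaced by the single unconditional value P(A).
p_A = p_B * p_C_given_B
print(round(p_A, 2))  # 0.92, matching the unconditional figure in the example
```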

20 See Littlejohn, who argues that there are no justified false beliefs (Littlejohn Reference Littlejohn2012: 99–102, 121–7). It should be noted that this solution does not exclude degrees of belief. Probabilism, for example, is compatible with this view. Under some interpretations of probabilism, a credence is just a percentage of certainty (Sturgeon Reference Sturgeon2008: 162, n.1). Also, the saliency condition can be interpreted in different ways. Clarke (Reference Clarke2013) argues that, while rationally believing P is having a rational credence of 1 in P, rational credences are determined by the alternative possibilities one entertains. Leitgeb (Reference Leitgeb2014) defends the claim that an agent's rational credence in P and the partitioning of possibilities he or she entertains determine the sufficient threshold for believing P. In a lottery case where an agent has rational attitudes concerning every ticket, this solution amounts to fixing the sufficient threshold for believing that “ticket n will lose” at 1.

21 Sturgeon (Reference Sturgeon2008), Foley (Reference Foley, Huber and Schmidt-Petri2009) and Demey (Reference Demey2013) reject closure under conjunction and argue that, while agents can rationally believe P and rationally believe Q, it can be rational for them to withhold judgment on or disbelieve (P^Q). Kroedel (Reference Kroedel2011) argues that epistemic justification has to do with permissibility and that, since permissions do not agglomerate (being permitted to drink and being permitted to drive does not imply that one is permitted to drink and drive simultaneously), the permission to believe P and the permission to believe Q do not agglomerate into a permission to believe (P^Q). Relatedly, Easwaran and Fitelson (Reference Easwaran and Fitelson2015) argue that, from an accuracy-centered perspective, it can be rational to believe P and to believe Q, but to disbelieve (P^Q). Specifically, believing P, believing Q and disbelieving (P^Q) can maximize expected accuracy.

22 Elsewhere, she offers another example close to Cheap Justification: “Assume, for instance, that p is sufficiently likely, and it is only likely to degree 0.3 that p is not sufficiently likely (and hence, likely to degree 0.7 that p is sufficiently likely). Nevertheless, one has misleading evidence about how likely it is that p is not sufficiently likely: in fact, it is very likely (say to degree 0.95) that it is likely that p is not sufficiently likely … For all that has been said, the belief that she is not rationally permitted to believe p can satisfy the entirety of the above condition” (Lasonen-Aarnio Reference Lasonen-AarnioForthcoming: 5).

23 This view is very close to Titelbaum's (Reference Titelbaum, Gendler and Hawthorne2015) Fixed Point thesis, which roughly states that mistakes concerning the requirements of rationality are mistakes of rationality. However, Titelbaum's Fixed Point thesis relies on the premise that akrasia is irrational (Titelbaum Reference Titelbaum, Gendler and Hawthorne2015: 254), an assumption that I question in this paper. Also, the claim that mistakes concerning the requirements of rationality are mistakes of rationality is compatible with the rejection of Reasons-Responsiveness. Consider the following argument: (1) Rational agents cannot be mistaken concerning what rationality requires of them; (2) however, in responding correctly to their reasons, agents can form rational false beliefs concerning what they have sufficient reason to believe; (C) so, responding correctly to reasons an agent has is not a genuine requirement of rationality, or claims concerning Reasons-Responsiveness are outside the realm of rationality. For these reasons, I will not explore Titelbaum's line of reasoning here. However, I acknowledge that exploring such a line of reasoning could eventually solve Rational Puzzle.

24 We could also say that an unconditional probability is a probability conditional on a necessarily true event or proposition. For example, if (Bv~B) is necessarily true, then P(A)=P(A|(Bv~B)).
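Spelled out, the equivalence is a one-line derivation (my own gloss, not the author's), using only the ratio definition of conditional probability and the fact that a necessary truth receives probability 1:

$$P(A \mid B \lor \lnot B) = \frac{P(A \land (B \lor \lnot B))}{P(B \lor \lnot B)} = \frac{P(A)}{1} = P(A).$$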

25 We can express such a result formally. Suppose that, conditional on A, P's probability is X, but conditional on ~A, P's probability is Y. Conditions A and ~A are also merely probable. Let's assume that P(P|A)=X, P(P|~A)=Y, P(A)=C and P(~A)=(1−C). In such a context, we can determine P's conditional probability, but we can also determine P's unconditional probability. Indeed, P(J)=P(J^K)+P(J^~K) and P(J^K)=P(K)·P(J|K) are familiar probability rules. Since P(J^K)=P(K)·P(J|K), we can conclude that X·C=P(P^A) and Y·(1−C)=P(P^~A). Since P(J)=P(J^K)+P(J^~K), we can conclude that P(P)=(Y·(1−C))+(X·C). In the situation described, since P(P|A)=0.9, P(P|~A)=0, P(A)=0.9 and P(~A)=0.1, we get the result that P(P)=(0·0.1)+(0.9·0.9)=0.81. Hence, at least in the situation described, combinations of conditional probabilities can be replaced by an unconditional one.
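The arithmetic in this footnote can be checked mechanically. Here is a minimal sketch using the footnote's own values; the variable names are mine.

```python
# Numeric check of the worked example in footnote 25, using its stated values.
p_P_given_A = 0.9       # P(P|A) = X
p_P_given_not_A = 0.0   # P(P|~A) = Y
p_A = 0.9               # P(A) = C
p_not_A = 1 - p_A       # P(~A) = 1 - C

# Law of total probability: P(P) = P(P|A)*P(A) + P(P|~A)*P(~A)
p_P = p_P_given_A * p_A + p_P_given_not_A * p_not_A
print(round(p_P, 2))  # 0.81, the unconditional probability derived in the footnote
```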

26 This example is largely inspired by Atkinson and Peijnenburg's (Reference Atkinson and Peijnenburg2006, Reference Atkinson and Peijnenburg2009) result that an infinite probabilistic chain can ground P's probability.

27 For the sake of simplicity, I here limit myself to cases where an infinite chain of conditional probabilities is represented by a convergent series, not a divergent one.
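To give a concrete sense of the convergent case, here is a minimal sketch. It is not Atkinson and Peijnenburg's own computation; the uniform link values P(A_n|A_{n+1}) = 0.9 and P(A_n|~A_{n+1}) = 0.1 are assumptions chosen only for illustration.

```python
# A toy convergent chain of conditional probabilities, in the spirit of
# Atkinson and Peijnenburg (2006, 2009). The uniform link values below are
# assumed purely for illustration.
alpha = 0.9  # P(A_n | A_{n+1}) for every n
beta = 0.1   # P(A_n | ~A_{n+1}) for every n

# Unfolding P(A_0) = beta + (alpha - beta) * P(A_1) = ... yields the series
# beta * sum_k (alpha - beta)^k; its partial sums converge, so the infinite
# chain fixes a determinate unconditional probability for A_0.
partial_sum, weight = 0.0, 1.0
for _ in range(200):
    partial_sum += weight * beta
    weight *= alpha - beta
print(round(partial_sum, 6))  # 0.5: the value the chain converges to
```

Because (alpha - beta) is strictly smaller than 1, the remainder term vanishes in the limit; this is exactly what restricting attention to convergent series guarantees.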

28 See note 19.

29 This research was supported by the Groupe de Recherche Interuniversitaire sur la Normativité (GRIN) and the Social Sciences and Humanities Research Council (#767-2016-1771). An earlier version of this paper was presented at the Journées du GRIN 2017 (Montréal, March 2017) and at the Higher Seminar in Theoretical Philosophy at Uppsala University (April 2017). Thanks to two anonymous referees, Aude Bandini, Karl Bergman, Charles Côté-Bouchard, Samuel Dishaw, Daniel Fogal, Jens Gillessen, Ulf Hlobil, Carl Montan, David Montminy, Samuel Montplaisir, Andrew Reisner, Olle Risberg, Henrik Rydéhn, Xander Selene, Alain Voizard and Alex Worsnip for helpful comments. I am also very grateful to Daniel Laurier, who has provided invaluable feedback on earlier versions of this paper.

References

Alexander, D. J. 2013. ‘The Problem of Respecting Higher-Order Doubt.’ Philosopher's Imprint, 13: 1–12.
Atkinson, D. and Peijnenburg, J. 2006. ‘Probability without Certainty: Foundationalism and the Lewis–Reichenbach Debate.’ Studies in History and Philosophy of Science Part A, 37(3): 442–53.
Atkinson, D. and Peijnenburg, J. 2009. ‘Justification by an Infinity of Conditional Probabilities.’ Notre Dame Journal of Formal Logic, 50(2): 183–93. doi: 10.1215/00294527-2009-005.
Bélanger, M. 2011. Existe-t-Il Des Dilemmes Moraux Insolubles? Paris: Editions L'Harmattan.
Broome, J. 2005. ‘Does Rationality Give Us Reasons?’ Philosophical Issues, 15(1): 321–37.
Broome, J. 2007. ‘Does Rationality Consist in Responding Correctly to Reasons?’ Journal of Moral Philosophy, 4(3): 349–74.
Broome, J. 2013. Rationality through Reasoning. Oxford: Wiley.
Chang, R. (ed.) 1997. Incommensurability, Incomparability, and Practical Reason. Cambridge, MA: Harvard University Press.
Chang, R. 2001. ‘Against Constitutive Incommensurability or Buying and Selling Friends.’ Philosophical Issues, 11(1): 33–60.
Chislenko, E. 2014. ‘Moore's Paradox and Akratic Belief.’ Philosophy and Phenomenological Research. doi: 10.1111/phpr.12127.
Christensen, D. 1992. ‘Confirmational Holism and Bayesian Epistemology.’ Philosophy of Science, 59(4): 540–57.
Christensen, D. 2009. ‘Disagreement as Evidence: The Epistemology of Controversy.’ Philosophy Compass, 4(5): 756–67.
Christensen, D. 2010. ‘Higher-Order Evidence.’ Philosophy and Phenomenological Research, 81(1): 185–215.
Christensen, D. 2014. ‘Conciliation, Uniqueness and Rational Toxicity.’ Noûs. doi: 10.1111/nous.12077.
Clarke, R. 2013. ‘Belief Is Credence One (in Context).’ Philosopher's Imprint, 13(11): 1–18.
Coates, A. 2012. ‘Rational Epistemic Akrasia.’ American Philosophical Quarterly, 48(2): 113–24.
Conee, E. 2010. ‘Rational Disagreement Defended.’ In Warfield, T. and Feldman, R. (eds), Disagreement, pp. 69–90. Oxford: Oxford University Press.
Davidson, D. 1982. ‘Paradoxes of Irrationality.’ In Wollheim, R. and Hopkins, J. (eds), Philosophical Essays on Freud, pp. 289–305. Cambridge: Cambridge University Press.
Demey, L. 2013. ‘Contemporary Epistemic Logic and the Lockean Thesis.’ Foundations of Science, 18(4): 599–610.
Dogramaci, S. and Horowitz, S. 2016. ‘An Argument for Uniqueness About Evidential Support.’ Philosophical Issues, 26(1): 130–47.
Dubois, D. and Prade, H. 2009. ‘Accepted Beliefs, Revision and Bipolarity in the Possibilistic Framework.’ In Huber, F. and Schmidt-Petri, C. (eds), Degrees of Belief, pp. 161–84. Dordrecht: Springer.
Easwaran, K. and Fitelson, B. 2015. ‘Accuracy, Coherence, and Evidence.’ Oxford Studies in Epistemology, 5: 61–96.
Elga, A. 2013. ‘The Puzzle of the Unmarked Clock and the New Rational Reflection Principle.’ Philosophical Studies, 164(1): 127–39. doi: 10.1007/s11098-013-0091-0.
Feldman, R. 2005. ‘Respecting the Evidence.’ Philosophical Perspectives, 19(1): 95–119.
Foley, R. 2009. ‘Beliefs, Degrees of Belief, and the Lockean Thesis.’ In Huber, F. and Schmidt-Petri, C. (eds), Degrees of Belief, pp. 37–47. Dordrecht: Springer.
Gibbons, J. 2013. The Norm of Belief. Oxford: Oxford University Press.
Greco, D. 2014. ‘A Puzzle about Epistemic Akrasia.’ Philosophical Studies, 167(2): 201–19.
Hinchman, E. S. 2013. ‘Rational Requirements and ‘Rational’ Akrasia.’ Philosophical Studies, 166(3): 529–52.
Horowitz, S. 2014a. ‘Epistemic Akrasia.’ Noûs, 48(4): 718–44.
Horowitz, S. 2014b. ‘Immoderately Rational.’ Philosophical Studies, 167(1): 41–56.
Huemer, M. 2007. ‘Moore's Paradox and the Norm of Belief.’ In Nucceteli, S. and Seays, G. (eds), Themes from GE Moore, pp. 142–57. New York, NY: Oxford University Press.
Kelly, T. 2005. ‘The Epistemic Significance of Disagreement.’ In Gendler, T. Szabó and Hawthorne, J. (eds), Oxford Studies in Epistemology, Vol. 1, pp. 167–96. Oxford: Oxford University Press.
Kelly, T. 2014. ‘Evidence Can Be Permissive.’ In Steup, M., Turri, J. and Sosa, E. (eds), Contemporary Debates in Epistemology, pp. 298–312. Chichester: Wiley.
Kolodny, N. 2007. ‘How Does Coherence Matter?’ Proceedings of the Aristotelian Society, 107: 229–63.
Kroedel, T. 2011. ‘The Lottery Paradox, Epistemic Justification and Permissibility.’ Analysis, 72(1): 57–60.
Lasonen-Aarnio, M. 2014. ‘Higher-Order Evidence and the Limits of Defeat.’ Philosophy and Phenomenological Research, 88(2): 314–45.
Lasonen-Aarnio, M. 2015. ‘New Rational Reflection and Internalism about Rationality.’ Oxford Studies in Epistemology, 5: 145–71.
Lasonen-Aarnio, M. Forthcoming. ‘Enkrasia or Evidentialism? Learning to Love Mismatch.’ Philosophical Studies.
Leitgeb, H. 2014. ‘The Stability Theory of Belief.’ Philosophical Review, 123(2): 131–71.
Littlejohn, C. 2012. Justification and the Truth-Connection. Cambridge: Cambridge University Press.
Littlejohn, C. 2015. ‘Stop Making Sense? On a Puzzle about Rationality.’ Philosophy and Phenomenological Research. doi: 10.1111/phpr.12271.
Mele, A. 1988. Irrationality. Oxford: Oxford University Press.
Moretti, L. and Piazza, T. 2013. ‘Transmission of Justification and Warrant.’ In Zalta, E. N. (ed.), The Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/transmission-justification-warrant/.
Pears, D. F. 1984. Motivated Irrationality. Oxford: Oxford University Press.
Pryor, J. 2013. ‘Problems for Credulism.’ In Tucker, C. (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism, pp. 89–131. Oxford: Oxford University Press.
Reisner, A. 2013. ‘Is the Enkratic Principle a Requirement of Rationality?’ Organon F, 20(4): 437–63.
Ribeiro, B. 2011. ‘Epistemic Akrasia.’ International Journal for the Study of Skepticism, 1(1): 18–25.
Schoenfield, M. 2014. ‘Permission to Believe: Why Permissivism Is True and What It Tells Us about Irrelevant Influences on Belief.’ Noûs, 48(2): 193–218.
Sinnott-Armstrong, W. 1996. ‘Moral Dilemmas and Rights.’ In Mason, H. E. (ed.), Moral Dilemmas and Moral Theory, pp. 48–51. New York, NY: Oxford University Press.
Smithies, D. 2012. ‘Moore's Paradox and the Accessibility of Justification.’ Philosophy and Phenomenological Research, 85(2): 273–300.
Spohn, W. 2009. ‘A Survey of Ranking Theory.’ In Huber, F. and Schmidt-Petri, C. (eds), Degrees of Belief, pp. 185–228. Dordrecht: Springer.
Sturgeon, S. 2008. ‘Reason and the Grain of Belief.’ Noûs, 42(1): 139–65.
Titelbaum, M. 2015. ‘Rationality's Fixed Point (or: In Defense of Right Reason).’ In Gendler, T. Szabó and Hawthorne, J. (eds), Oxford Studies in Epistemology, Vol. 5, pp. 253–94. Oxford: Oxford University Press.
Weisberg, J. 2015. ‘Updating, Undermining, and Independence.’ British Journal for the Philosophy of Science, 66(1): 121–59.
White, R. 2014. ‘Evidence Cannot Be Permissive.’ In Steup, M., Turri, J. and Sosa, E. (eds), Contemporary Debates in Epistemology, pp. 312–23. Chichester: Wiley.
Williams, B. A. O. 1965. ‘Ethical Consistency.’ Proceedings of the Aristotelian Society, Suppl. Vol. 35: 103–24.
Worsnip, A. 2015. ‘The Conflict of Evidence and Coherence.’ Philosophy and Phenomenological Research. doi: 10.1111/phpr.12246.
Worsnip, A. 2016. ‘Moral Reasons, Epistemic Reasons and Rationality.’ Philosophical Quarterly, 66(263): 341–61.
Worsnip, A. Forthcoming. ‘Isolating Correct Reasoning.’ In Jackson, M. Balcerak and Jackson, B. Balcerak (eds), Reasoning: New Essays on Theoretical and Practical Thinking. Oxford: Oxford University Press.
Zheng, Y. 2001. ‘Akrasia, Picoeconomics, and a Rational Reconstruction of Judgment Formation in Dynamic Choice.’ Philosophical Studies, 104(3): 227–51.