The earliest relevant logics were put forward in the form of Hilbert-style axiom systems.Footnote 1 Alan Ross Anderson and Nuel Belnap provided Fitch-style cascade natural deduction systems for what they regarded as the central relevant logics, the logics R, E, and T.Footnote 2 The idea behind the systems was that when one introduces a conditional, the antecedent should be relevant to the consequent. The usual conditional rules are the following.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_eqnu4.png?pub-status=live)
In the $\rightarrow $I rule, one only needs to reach the conclusion B under the assumption of A. Standard cascade systems have a rule of reiteration that permits one to copy sentences from superproofs to subproofs, permitting the following derivation.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_eqnu5.png?pub-status=live)
To prevent this sort of derivation, Anderson and Belnap required that for a conditional to be introduced, the antecedent had to be used to obtain the consequent. They employed indices to track use, in this sense. The indices are sets of natural numbers. Hypotheses have singleton indices $\{ k\}$, where k is the depth of nesting of subproofs at that hypothesis. The Anderson–Belnap rules for the conditional are the following.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_eqnu6.png?pub-status=live)
In $\rightarrow $I, it is required that $k\in a$. This requirement prevents the derivation of the (K) axiom, sometimes called the weakening axiom or the law of irrelevance, $A\mathbin {\rightarrow }(B\mathbin {\rightarrow } A)$, above, as the index on A will be $\{ 1\}$, whereas the index on B will be $\{ 2\}$. These rules are the conditional rules for the logic R. To obtain the logics E and T, Anderson and Belnap place restrictions on either the reit rule or $\rightarrow $E. Indexed cascade systems can be extended from the arrow fragment to the vocabulary $\{ \mathbin {\rightarrow }, \mathord {\sim },\&,\lor \}$ without difficulty. Brady (Reference Brady1984) extended the Anderson–Belnap systems to cover a wide range of relevant logics without any restriction on reit.
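To make the index bookkeeping concrete, here is a minimal sketch in Python of the R conditional rules under the Anderson–Belnap discipline. The representation of formulas as strings and all helper names are my own illustration, not part of the Anderson–Belnap formulation.

```python
# A sketch of Anderson-Belnap index tracking for the R conditional rules.
# Formulas-as-strings and the helper names here are illustrative only.

def hyp(formula, depth):
    """A hypothesis receives the singleton index {depth}."""
    return (formula, frozenset({depth}))

def arrow_elim(major, minor):
    """From (A -> B)_a and A_b, infer B with index a ∪ b (logic R)."""
    (maj, a), (mnr, b) = major, minor
    antecedent, consequent = maj.split(" -> ", 1)
    assert antecedent == mnr, "minor premise must match the antecedent"
    return (consequent, a | b)

def arrow_intro(hypothesis, conclusion):
    """From B_a in a subproof with hypothesis A_{k}, infer (A -> B) with
    index a - {k}, provided k ∈ a, i.e., the hypothesis was used."""
    (a_form, k_set), (b_form, a) = hypothesis, conclusion
    (k,) = k_set
    assert k in a, "hypothesis unused: relevance condition violated"
    return (a_form + " -> " + b_form, a - k_set)

# The (K) derivation is blocked: the index on A is {1} and on B is {2},
# so discharging B to obtain B -> A fails the k-in-a requirement.
A, B = hyp("A", 1), hyp("B", 2)
try:
    arrow_intro(B, A)       # would yield (B -> A)_{1}, but 2 ∉ {1}
except AssertionError as err:
    print("blocked:", err)
```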
Prawitz (Reference Prawitz1965, 81ff.) gave a tree natural deduction system for a relevant logic. His system did not use indices. Rather, he used side conditions on the rule $(\mathbin {\rightarrow } I)$. First, he required that each instance of the rule discharge at least one occurrence of an assumption. This no-vacuous-discharge policy was sufficient to rule out the derivation of $A\mathbin {\rightarrow } (B\mathbin {\rightarrow } A)$, but the addition of $\&$ and $\lor $ created some difficulties.
To accommodate the truth-functional connectives, Prawitz imposed a side condition on derivations, in addition to the condition banning vacuous discharge for conditionals. The condition uses the concept of a sentence occurrence B originating from a sentence occurrence A. B originates from A just in case there is a sequence of sentence occurrences $C_1,\ldots ,C_n$, where $C_1=A$, $C_n=B$, A is not discharged by $(\lor E)$, and $C_{i+1}$ is either immediately below $C_i$ or $C_{i+1}$ is an assumption discharged by an application of $(\lor E)$. The side condition is on applications of $(\mathbin {\rightarrow } I)$ that are below applications of $(\& I)$ or $(\lor E)$. Let such an application of $(\mathbin {\rightarrow } I)$ have B as its premise and discharge occurrences of A. If one premise of $(\&I)$, or one minor premise of $(\lor E)$, originates from A in the subderivation determined by B, then the other premise, respectively minor premise, must also originate from an occurrence of A that is also discharged by that application of $(\mathbin {\rightarrow } I)$.Footnote 3
Prawitz’s system has the benefit of using the familiar tree natural deduction rules for $\&$ and $\lor $.Footnote 4 The system normalizes, and it permits a derivation of distribution of conjunction over disjunction: $A\,\&\,(B\lor C)\mathbin {\rightarrow } (A\,\&\, B)\lor (A\,\&\, C)$.Footnote 5 It trades indices for the side condition above. The downside, depending on one’s view, is that the logic is not that of R, but rather the positive logic obtained from Urquhart’s semilattice semantics.Footnote 6 This is a proper strengthening of positive R, as it permits derivation of $(A\mathbin {\rightarrow } B\lor C)\,\&\,(C\mathbin {\rightarrow } D)\mathbin {\rightarrow } (A\mathbin {\rightarrow } B\lor D)$, which is invalid in R. The derivation
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_eqnU7.png?pub-status=live)
can be extended to a derivation of the R-invalid formula by two applications of $(\mathbin {\rightarrow } I)$. The interested reader can check that Prawitz’s side condition holds for this derivation.
Since the publication of Prawitz’s work, there appears to have been comparatively little interest in developing tree natural deduction systems for relevant logics.Footnote 7 I will briefly mention a few exceptions. First, there has been some interest in tree systems in connection with Urquhart’s semilattice semantics. Giambrone & Urquhart (Reference Giambrone and Urquhart1987) and Meyer et al. (Reference Meyer, Martin, Giambrone and Urquhart1988) gave systems for the positive logics of the semilattice semantics. These systems use indices rather than Prawitz’s side condition on $(\mathbin {\rightarrow } I)$.Footnote 8 Second, Slaney (Reference Slaney1990), Read (Reference Read1988), and Restall (Reference Restall2000) present tree natural deduction systems in which nodes are sequents. These approaches cover a large range of relevant logics, and include structural rules operating on the left-hand side of the sequents. I will discuss these systems more in the final section. Finally, Francez (Reference Francez2014) investigates an index-free, signed tree natural deduction system for a positive fragment of R.
In this paper, I will provide indexed tree natural deduction systems for a range of relevant logics (§2). I will first present the cascade systems developed by Brady (§1). I will show that the tree systems are sound and complete for the target logics by showing how to translate cascade proofs into tree proofs (§3), and vice versa (§4).Footnote 9 I will conclude by indicating some possible payoffs of these new systems (§5).
1 Cascade systems
Anderson and Belnap present cascade systems for the relevant logics R, E, and T, which are relatively strong logics. There are a host of well-known, weaker relevant logics.Footnote 10 Brady (Reference Brady1984) provided cascade systems for many of these weaker systems.Footnote 11 Before getting into the cascade systems, it will be useful to lay out Hilbert-style axioms for the positive logics. I will return to negation later in this section.
The axiom systems are for logics as sets of theorems.Footnote 12 All of the logics I will look at have the following axioms and rules.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_eqnu1.png?pub-status=live)
The logics are distinguished by the adoption of some or all of the following axioms.
- (C1) $(A\mathbin {\rightarrow } B)\,\&\,(B\mathbin {\rightarrow } C)\mathbin {\rightarrow } (A\mathbin {\rightarrow } C)$
- (C2) $(A\mathbin {\rightarrow } B)\mathbin {\rightarrow } ((B\mathbin {\rightarrow } C)\mathbin {\rightarrow } (A\mathbin {\rightarrow } C))$
- (C3) $(A\mathbin {\rightarrow } B)\mathbin {\rightarrow } ((C\mathbin {\rightarrow } A)\mathbin {\rightarrow } (C\mathbin {\rightarrow } B))$
- (C4) $(A\mathbin {\rightarrow }(A\mathbin {\rightarrow } B))\mathbin {\rightarrow } (A\mathbin {\rightarrow } B)$
- (C5) $(A\mathbin {\rightarrow }(B\mathbin {\rightarrow } C))\mathbin {\rightarrow }(B\mathbin {\rightarrow }(A\mathbin {\rightarrow } C))$

Different positive logics are obtained by adopting some or all of these axioms.Footnote 13
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_eqnu2.png?pub-status=live)
With the target logics in view, we can now turn to the cascade systems.
The cascade systems have an intuitive graphical presentation. For the development of this paper, it will be useful to present them in a nongraphical form, which will permit precise definitions.Footnote 14 To settle some notation, X and Y will be used for sequences of formulas, $X,A$ will be used for the concatenation of X and $\langle A\rangle $, $X^{\frown }\langle A\rangle $, and $X,Y$ will be used for the concatenation $X^{\frown }Y$. The lower case letters $a,b,c,d,$ and e are used for sets of natural numbers, and lower case letters from later in the alphabet, $i,j,k,$ etc., as well as $\lambda $, are used for natural numbers.
Definition 1. A sequent is a pair of a sequence, $\langle\kern-1pt A^{1}_{\{k_{1}\}},\ldots ,A^{n}_{\{k_{n}\}}\kern-1pt\rangle $, of indexed formulas and an indexed formula $B_{b}$.Footnote 15 If $n=0$, then the sequence is $\langle \rangle $.

The length $\mathfrak {L}(X)$ of a sequence X is defined recursively as $\mathfrak {L}(\langle \rangle )=0$ and $\mathfrak {L}(X^{\frown }\langle A\rangle )=\mathfrak {L}(X)+1$.

The depth $\mathfrak {D}(s)$ of a sequent s with antecedent sequence X is $\mathfrak {L}(X)$.

An S-sequence is a sequence of sequents.Footnote 16
Let $\Sigma $ be an S-sequence, $\langle s_1,\ldots ,s_k\rangle $. The depth of the nth member of $\Sigma $, $\mathfrak {D}(n)$, is $\mathfrak {D}(s_{n})$.
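As a quick illustration of Definition 1, the following sketch encodes sequents and computes lengths and depths; the class names and representation choices are mine, not the paper's.

```python
# An illustrative encoding of Definition 1; all names are my own.
from dataclasses import dataclass

@dataclass(frozen=True)
class IndexedFormula:
    formula: str
    index: frozenset             # a set of natural numbers

@dataclass(frozen=True)
class Sequent:
    antecedent: tuple            # the sequence X of indexed formulas
    succedent: IndexedFormula    # B_b

def length(X):
    """L(<>) = 0 and L(X^<A>) = L(X) + 1."""
    return 0 if X == () else length(X[:-1]) + 1

def depth(s):
    """The depth of a sequent is the length of its antecedent sequence."""
    return length(s.antecedent)

A1 = IndexedFormula("A", frozenset({1}))
assert depth(Sequent((A1,), A1)) == 1
```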
Definition 2. An S-sequence, $\langle s_1,\ldots ,s_n\rangle $, is nested iff for all $m< n$, $s_{m}$ and $s_{m+1}$ have one of the following forms, provided $n\geq 2$: either

- $s_{m}$ is  and $s_{m+1}$ is ,
- $s_{m}$ is  and $s_{m+1}$ is , or
- $s_{m}$ is  and $s_{m+1}$ is .

If $n=1$, then $s_{1}$ is of the form .
The third case in the definition deserves comment. The third case is the one in which a hypothesis is no longer available. For the purposes of this paper, it is sufficient to consider proofs in which a hypothesis is discharged via a conditional introduction rule. This is not the most general form of the definition, but it will work for present purposes. The addition of quantifiers or a modal operator, or the use of a different disjunction elimination rule or another rule that discharges assumptions, would require altering this definition.
For many purposes, it is best to picture the S-sequences written out vertically, rather than horizontally. I will refer to the individual members of an S-sequence as lines. The indexed formula $A_{a}$ in a line is the formula of the line. For reasons that will become clear soon, I will call a sequent of the form , where no $k'\geq k$ occurs in X, a hyp step. I will also use the notation $n\in [i,j]$ for $i\leq n\leq j$, with parentheses replacing one or both brackets to mean the corresponding ordering is strict, e.g., $n\in (i,j]$ is $i<n\leq j$.
Definition 3 (Subproof, availability). Let $\Sigma $ be a nested S-sequence. A subproof $\sigma $ of $\Sigma $ is a subsequence $\langle s_{i},\ldots ,s_{j}\rangle $, where $1\leq i\leq j\leq \mathfrak {L}(\Sigma )$, such that

- $s_{i}$ is a hyp step,
- for all $n\in [i,j]$, $\mathfrak {D}(i)\leq \mathfrak {D}(n)$, and
- either $\mathfrak {D}(j+1)=\mathfrak {D}(i)-1$ or $j=\mathfrak {L}(\Sigma )$.

Such a subproof will be denoted $\sigma ^{i}_{j}$.
A subproof $\sigma ^{i}_{j}$ is closed at line n iff $j<n$, and otherwise $\sigma ^{i}_{j}$ is open at line n. For a closed subproof $\sigma ^{i}_{j}$, j is the line that closes the subproof.

The innermost subproof at line n, $\sigma (n)$, is the subproof $\sigma ^{i}_{j}$ where $n\in [i,j]$ and $\mathfrak {D}(i)=\mathfrak {D}(n)$.

One subproof $\sigma $ contains another subproof $\sigma '$, $\sigma \sqsupset \sigma '$, iff for some $n,m,j,k$, $\sigma $ is $\sigma ^{n}_{m}$ and $\sigma '$ is $\sigma ^{j}_{k}$, $n<j$, and either $k<m$ or both $m=k$ and m is the last line of $\Sigma $. The reflexive closure of $ \sqsupset $ is $ \sqsupseteq $.

A line n is available to a line m, $n\succ m$, iff $n<m$ and either $\mathfrak {D}(n)=0$ or $\sigma (n) \sqsupseteq \sigma (m)$.

A subproof $\sigma ^{i}_{j}$ is available to a line n iff either $j+1=n$ or $j+1 \succ n$.
These definitions are meant to capture, formally, the usual features of the graphical presentation of cascade proofs. A subproof is a sequence of lines beginning with a hyp step and ending when the formula assumed in the hyp step is no longer an assumption. One subproof $\sigma $ contains another $\sigma '$ when $\sigma '$ begins in the scope of $\sigma $ and either ends in the scope of $\sigma $ or both continue until the end of the proof. One line is available to another just in case the former happens earlier, either categorically, i.e., not in a subproof, or with an appropriate subproof containment.

The definition of a closed subproof being available to a line relies on a feature of the definition of nesting: in stepping out of a closed subproof to the line following the end of that subproof, the depth must decrease by exactly $1$. While there are more flexible definitions one could use, it turns out that this will be enough for our purposes.
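The following sketch computes subproofs, innermost subproofs, and availability directly from the depth profile of a nested S-sequence. It is one reading of Definition 3, with 1-based line numbers and a hyp step identified as a line where the depth increases by one; it is not code from the paper.

```python
# A sketch of Definition 3 over the depth profile of a nested S-sequence.
# Lines are 1-based; depths[n - 1] is D(n). The representation is mine.

def subproofs(depths):
    """All (i, j) with s_i a hyp step (the depth increases at line i) and
    j the last line before the depth drops below D(i)."""
    result, L = [], len(depths)
    for i in range(1, L + 1):
        prev = depths[i - 2] if i > 1 else 0
        if depths[i - 1] == prev + 1:       # a hyp step opens a subproof
            j = i
            while j < L and depths[j] >= depths[i - 1]:
                j += 1
            result.append((i, j))
    return result

def innermost(depths, n):
    """sigma(n): the subproof (i, j) with n in [i, j] and D(i) = D(n)."""
    return next((i, j) for (i, j) in subproofs(depths)
                if i <= n <= j and depths[i - 1] == depths[n - 1])

def available(depths, n, m):
    """n is available to m iff n < m and either D(n) = 0 or
    sigma(n) contains-or-equals sigma(m)."""
    if not n < m:
        return False
    if depths[n - 1] == 0:
        return True
    (i1, j1), (i2, j2) = innermost(depths, n), innermost(depths, m)
    contains = i1 < i2 and (j2 < j1 or j1 == j2 == len(depths))
    return (i1, j1) == (i2, j2) or contains

# A hypothesis at line 1, a nested subproof at lines 2-3, and a return
# to depth 1 at line 4:
depths = [1, 2, 2, 1]
assert innermost(depths, 3) == (2, 3)
assert available(depths, 1, 4) and not available(depths, 2, 4)
```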
A proof will be a nested S-sequence labeled with rules and justification numbers. The rules are the following. The conventions on justification numbers will be explained following the definition of a proof.
- Hyp: , where $k>k'$, for any number $k'$ occurring in an index in X.
- Reit: From  to infer .
- →E: From  and , to infer .
- →I: From a subproof starting with  and ending with , to infer , provided $k\in b$.
- &E: From , to infer  or .
- &I: From  and , to infer .
- ∨E: From , , and , to infer .
- ∨I: From  or , to infer .
- Dist: From , to infer .
Call the index $\{ k\}$ on the displayed formula $A_{\{ k\}}$ in the hyp rule the hyp index. The hyp rule above is more flexible than Brady’s formulation of the rule, a point to which I will return following the definition of a cascade proof.

Now, for the conventions on justification numbers. The rule hyp does not require any justification numbers. A step justified by the rule $\rightarrow $I is annotated with the range $i-j$, where $\sigma ^{i}_{j}$ is the subproof indicated in the rule. The justification numbers for the remaining rules are those for lines containing the premise sequents of the rules, in the order presented in the rule statement.
Definition 4. A proof $\Sigma $ is a nested S-sequence labeled with rules and justification numbers such that

- line 1 is a hyp step with depth $1$,
- each member of $\Sigma $ is the conclusion of one of the rules, which labels it, and whose justification numbers are those of the displayed premise sequents or subproof, and
- all premise sequents or subproofs cited in justification numbers for a line $\lambda $ are available to $\lambda $.
It will be useful in the translations to treat the lines of a proof as including the rules and justifications, which I will do.
I can now proceed to Brady’s side conditions. These are conditions on the $\rightarrow $E and $\lor $E rules, with ${\text {max}(\emptyset )}\mathbin {=_{Df}}-1$; a small sketch of the conditions as a checking function follows the list.
- B: If $b\neq \emptyset $, then $a=\{ k\}$ such that ${\text {max}(b)}<k$.
- DJ: If $b\neq \emptyset $, then $a\neq \emptyset $, ${\text {max}(b)}<{\text {max}(a)}$, and $a-\{ {\text {max}(a)}\}\in \{ b,\emptyset \}$.
- TW: If $b\neq \emptyset $, then $a\neq \emptyset $, ${\text {max}(b)}<{\text {max}(a)}$, and $a\cap b=\emptyset $.
- TJ: If $b\neq \emptyset $, then $a\neq \emptyset $, ${\text {max}(b)}<{\text {max}(a)}$, and either $a\cap b=\emptyset $ or $a-\{ {\text {max}(a)}\}=b$.
- T: If $b\neq \emptyset $, then $a\neq \emptyset $ and ${\text {max}(b)}\leq {\text {max}(a)}$.
- RW: $a\cap b=\emptyset $.
- R: No restriction.
Brady requires the hyp index to be the rank of the formula, defined as the depth of the line of the hyp rule. The additional flexibility of the hyp rule adopted here does not lead to new theorems being provable. Any proof using this hyp rule can be transformed into a proof using Brady’s hyp rule by shifting hyp indices that are greater than the rank to the rank throughout the proof and percolating the results through the rules. That the result satisfies the conditions is a consequence of the following lemma, for which it will be useful to employ a definition.
Definition 5. A partial function $f:\omega \rightharpoonup \omega $ is

- monotonic iff $n<m$ implies $f(n)\leq f(m)$, for all $n,m$ in the domain of f;
- injective iff $f(n)=f(m)$ implies $n=m$, for all $n,m$ in the domain of f.

For a set of natural numbers b contained in the domain of a partial function f, $f[b]=\{ f(n):n\in b\}$.
Lemma 1. Suppose a monotonic, injective partial function $f:\omega \rightharpoonup \omega $ is defined on a finite set $a\cup b$, where $a,b\subseteq \omega $. Then the following hold.

1. If $a\neq \emptyset $ then $f[a]\neq \emptyset $.
2. If a is a singleton, then $f[a]$ is a singleton.
3. If ${\text {max}(a)}<{\text {max}(b)}$, then ${\text {max}(\,f[a])}<{\text {max}(\,f[b])}$.
4. If $a\cap b=\emptyset $, then $f[a]\,\cap\, f[b]=\emptyset $.
5. If $a-\{{\text {max}(a)}\}=b$, then $f[a]-\{{\text {max}(\,f[a])}\}=f[b]$.
Proof. Suppose a monotonic, injective partial function f is defined on a finite set $a\cup b$, where $a,b\subseteq \omega $.

Both 1 and 2 are immediate.

For 3, suppose ${\text {max}(a)}<{\text {max}(b)}$. Since f is monotonic, ${\text {max}(\,f[a])}\leq {\text {max}(\,f[b])}$. By the injective condition on f, ${\text {max}(\,f[a])}\neq {\text {max}(\,f[b])}$.

For 4, suppose $a\cap b=\emptyset $, but $f[a]\cap f[b]\neq \emptyset $. Then there is some $n\in f[a]\cap f[b]$, and so $n\in f[a]$ and $n\in f[b]$. But then there are m and k such that $m\in a$, $k\in b$, and $f(m)=f(k)=n$. By the injective condition, $m=k$, so $a\cap b\neq \emptyset $, contradicting the assumption.

For 5, suppose $a-\{{\text {max}(a)}\}=b$. Suppose $f[a]-\{{\text {max}(\,f[a])}\}\neq f[b]$. There are two cases. Suppose there is $n\in f[a]-\{{\text {max}(\,f[a])}\}$ but $n\not \in f[b]$. Then for the m with $f(m)=n$, $m\in a-\{{\text {max}(a)}\}$ but $m\not \in b$, contradicting the assumption. Suppose there is $n\not \in f[a]-\{{\text {max}(\,f[a])}\}$ and $n\in f[b]$. Then for the m with $f(m)=n$, $m\not \in a-\{{\text {max}(a)}\}$ but $m \in b$, contradicting the assumption.□
Shifting indices in a sequent so that hyp indices match the rank of a step defines a monotonic, injective partial function on the indices in the sequent. The indices on the steps in the original cascade proof satisfy the required conditions. This is sufficient to apply Lemma 1, as appropriate for the target logic, to the corresponding steps in the proof with shifted indices.
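The following is a small sanity check of Lemma 1 for one illustrative monotonic, injective partial function, of the kind induced by shifting hyp indices down to ranks; the dict representation is mine.

```python
# An illustrative check of Lemma 1 for one monotonic, injective partial
# function, represented as a dict (my own encoding).

def image(f, s):
    """f[s] = { f(n) : n in s }."""
    return {f[n] for n in s}

f = {1: 1, 3: 2, 4: 3}        # monotonic and injective on its domain
a, b = {3, 4}, {1}

assert max(image(f, b)) < max(image(f, a))     # item 3: order of maxima
assert not (image(f, a) & image(f, b))         # item 4: disjointness
# item 5: a - {max(a)} = {3}, and f[{3}] = f[a] - {max(f[a])}
assert image(f, a - {max(a)}) == image(f, a) - {max(image(f, a))}
```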
I will focus on the positive logics for this paper. The addition of negation adds no new challenges, just a proliferation of straightforward cases. The techniques of this paper can easily be extended to cover negation rules. The issue, rather, is that the addition of negation would somewhat clutter the presentation. The full logics, along with other logics, such as DW and DK, that differ from ones above only in their negation axioms can be accommodated by the techniques below.
Next, I need to introduce some notation. I will use the notation $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\mathbin {\vdash _{\mathfrak {C}}} B_{b}$ to indicate that there is a proof whose final line is the corresponding sequent, with appropriate rule and justification annotations.Footnote 17 I will say that $\Sigma $ is a proof of $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\mathbin {\vdash _{\mathfrak {C}}} B_{b}$ just in case $\Sigma $ witnesses the claim that $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\mathbin {\vdash _{\mathfrak {C}}} B_{b}$.
Let us proceed to the tree natural deduction proof systems.
2 Trees
The tree natural deduction proof systems I will give use indices, in much the same way as the Anderson–Belnap–Brady systems of the previous section. The technical development of the tree systems will follow that of Prawitz (Reference Prawitz1965), which will largely be suppressed here. There is a rule of assumption, $A_{\{ k\}}$, whose index is a singleton $\{ k\}$, where k is a natural number. The other rules are given in Figure 1.Footnote 18
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_fig1.png?pub-status=live)
Fig. 1 Rules of the tree systems.
The rule $(\mathbin {\rightarrow } I)$ has three conditions: (i) $k\in a$, (ii) all occurrences of open assumptions of $A_{\{ k\}}$ in the subtree ending with the premise of the rule are discharged, and (iii) no other assumption in the subtree ending with the premise of the rule has an index $\{ k'\}$ with $k'>k$. The third condition can be dropped in the cases of R and RW, but its inclusion does not cause any problems. The condition is needed in the other cases.
Condition (ii) can be motivated by reflection on some ideas of Leivant (Reference Leivant1979). In the tree systems, the indices on assumptions mark out assumption classes, in the sense of Leivant (Reference Leivant1979). An assumption class is a set of occurrences of an indexed formula in a proof. Leivant was working with unindexed formulas and so used Prawitz’s notion of a discharge function to distinguish members of a particular assumption class. In this setting, the indices on assumption formulas mark out the assumption classes and so carry the information needed for a discharge function for a proof.
Condition (iii) may seem slightly ad hoc, so a few words motivating it are in order. The condition is actually in force in cascade proofs, although it is hidden in plain sight. In graphical form, the rule for introducing the conditional is the following.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_eqnu8.png?pub-status=live)
The hypothesis discharged, $A_{\{ k\}}$, is the assumption of the innermost subproof at line m. Since the hyp index for line n is k, and the subproof started on line n is the innermost, there is no $k'>k$ such that an undischarged hypothesis has index $\{ k'\}$. There is no need to postulate an analog of (iii) for cascade proofs because the subproof structure builds it in.Footnote 19
Finally, we must supply global constraints on tree proofs for the index discipline we adopt. First, assumptions have indices that are singletons, $\{ k\}$, where k is a natural number. No open assumptions of distinct formulas can have the same index. An open assumption may have the same index as a discharged assumption, depending on the construction of the proof. Different occurrences of a single formula as open assumptions may have the same index, but they need not.
These are the rules and conditions common to all of the tree systems. To obtain the different logics of the previous section, one imposes Brady’s conditions on the $(\lor E)$ and $(\mathbin {\rightarrow } E)$ rules.
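For concreteness, here is a sketch of checking the three $(\mathbin {\rightarrow } I)$ conditions against the assumptions of the subtree ending in the premise; the representation of assumptions as (formula, k) pairs is my own.

```python
# A sketch of the three (->I) conditions, checked against the open
# assumptions of the subtree ending in the premise B_a. The encoding
# of assumptions as (formula, index) pairs is illustrative only.

def arrow_intro_ok(k, a, open_assumptions):
    """open_assumptions: the (formula, index) pairs still open in the
    subtree once the assumption class with index {k} is discharged."""
    cond_i = k in a                                           # k ∈ a
    cond_ii = all(idx != k for _, idx in open_assumptions)    # A_{k} gone
    cond_iii = all(idx <= k for _, idx in open_assumptions)   # none newer
    return cond_i and cond_ii and cond_iii

# Discharging A_{1} while B_{2} is still open violates condition (iii):
assert not arrow_intro_ok(1, {1, 2}, [("B", 2)])
assert arrow_intro_ok(2, {1, 2}, [("A", 1)])
```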
As in the previous section, I will define some notation for being a proof of a formula under certain assumptions. The notation $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\mathbin {\vdash _{\mathfrak {T}}} B_{b}$ means that there is a tree proof of $B_{b}$ whose sequence of open assumption classes is $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}$. I will say that $\Pi $ is a proof of $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\mathbin {\vdash _{\mathfrak {T}}} B_{b}$ just in case $\Pi $ is a proof witnessing that $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\mathbin {\vdash _{\mathfrak {T}}} B_{b}$.
Now, I will move on to the translations.
3 Translations: from cascades to trees
In this section, I will show how to translate cascade proofs to tree proofs, and conversely. I will begin with the translation of cascade proofs to tree proofs. As noted in §1, the justification annotations play an important role in this paper. They enable the definition of trace trees, which are key to this direction of the translation.
Given such a proof, we will construct a tree, called the trace tree. The nodes of a trace tree will have one of three forms: $A_{a}$, $[A_{\{ k\}}, n]$, and $(A_{\{ k\}},n)$. The latter two are intermediaries that help deal with the discharging of assumptions and will be removed once the initial translation is completed. In the translation, assumptions are translated as $[A_{\{ k\}}, n]$, and $(A_{\{ k\}},n)$ marks which occurrences of $[A_{\{ k\}}, n]$ will be replaced by discharged assumptions, with the other occurrences of the bracketed pairs replaced by undischarged assumptions. The trace tree $\mathfrak {t}(\lambda )$ of a line $\lambda $ of a proof $\Sigma $ is defined inductively as follows.
- Case: Hyp. Line $\lambda $ is , hyp. Then, $\mathfrak {t}(\lambda )$ is the single node $[A_{\{ k\}},\lambda ]$.
- Case: Reit. Line $\lambda $ is , reit k. Then, $\mathfrak {t}(\lambda )$ is $\mathfrak {t}(k)$.
- Case: $\rightarrow $I. Line $\lambda $ is , $\rightarrow $I $i-j$. Then, $\mathfrak {t}(\lambda )$ is the following.
- Case: $\&$E, $\lor $I, Dist. Suppose $\rho \in \{ {\&\text {E}}{}, {\lor \text {I}}{}, \text {Dist}\}$. Line $\lambda $ is , $\rho n$. In that case, $\mathfrak {t}(\lambda )$ is the following.
- Case: $\rightarrow $E, $\&$I. Suppose $\rho \in \{ {\rightarrow \text {E}{},{\&\text {I}}{}}\}$. Line $\lambda $ is , $\rho m, n$. Then, $\mathfrak {t}(\lambda )$ is the following.Footnote 20
- Case: $\lor $E. Line $\lambda $ is , $\lor $E $m, n, p$. In that case, $\mathfrak {t}(\lambda )$ is the following.
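A skeleton of the construction, under my own schematic representation of proof lines (each carrying a rule name and justification numbers), may help fix the shape of the recursion; the $\rightarrow $I and $\lor $E cases, which introduce the parenthesized discharge markers, are elided.

```python
# A skeleton of the trace-tree construction. Node forms follow the text:
# plain indexed formulas, bracketed pairs [A_{k}, n] for hypotheses, and
# parenthesized pairs (A_{k}, n) marking discharges. The proof-line
# representation (dicts with rule names and justifications) is schematic
# and my own, not the paper's.

def trace(proof, lam):
    line = proof[lam]
    if line["rule"] == "hyp":
        return {"node": ("bracket", line["formula"], lam), "children": []}
    if line["rule"] == "reit":
        return trace(proof, line["just"][0])
    if line["rule"] in ("&E", "vI", "Dist"):
        return {"node": ("formula", line["formula"]),
                "children": [trace(proof, line["just"][0])]}
    if line["rule"] in ("->E", "&I"):
        return {"node": ("formula", line["formula"]),
                "children": [trace(proof, n) for n in line["just"]]}
    # The ->I and vE cases, which add the parenthesized discharge
    # markers next to the relevant subtrees, are elided here.
    raise NotImplementedError(line["rule"])
```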
The trace tree $\mathfrak {t}(\lambda )$ for a line $\lambda $ of a cascade proof may not be a tree proof. There may be nodes of the form $(A_{\{ k\}}, i)$ or $[A_{\{ k\}},i]$. The tree must be operated upon to remove these nodes. Further, the tree may not satisfy the global constraints. These will be addressed in turn.
To deal with the nonformula nodes, we will construct a sequence of trees. First note that each node $\alpha $ of the form $(A_{\{ k\}},i)$ occurs next to a subtree $\pi _{\alpha }$ ending in the premise $B_{a}$ of a $(\mathbin {\rightarrow } I)$ rule. This will be used to define an ordering on nodes $\alpha $ of the form $(A_{\{ k\}},i)$. Say that for nodes $\alpha $ and $\beta $, $\alpha <\beta $ iff $\beta $ occurs in $\pi _{\alpha }$.
Let $\Pi _{0}$ be the initial tree, $\mathfrak {t}(\lambda )$. To form $\Pi _{j+1}$, for $j\geq 0$, we do the following. Let $\alpha $ be a $<$-minimal node in $\Pi _{j}$. In the subtree $\pi _{\alpha }$ ending with $B_{a}$, there are nodes of the form $[A_{\{ k\}},i]$, where $A_{\{ k\}}$ is discharged by the corresponding application of the rule $\rightarrow $I in the cascade proof. Replace all occurrences of $[A_{\{ k\}},i]$ with $A_{\{ k\}}$, and mark them as discharged by the $(\mathbin {\rightarrow } I)$ following $\pi _{\alpha }$. Finally, delete the node $(A_{\{ k\}},i)$ and add $(i)$ to the rule label $(\mathbin {\rightarrow } I)$. The result is $\Pi _{j+1}$. There are fewer nodes of the form $(A_{\{ k\}}, i)$ in $\Pi _{j+1}$ than in $\Pi _{j}$, so there must be a $j\geq 0$ such that $\Pi _{j}$ contains no nodes of that form. Call this tree $\Pi '$.
The tree $\Pi '$ may contain nodes of the form $[A_{\{ k\}},i]$. Replace each node of the form $[A_{\{ k\}},i]$ with $A_{\{ k\}}$. These are the undischarged assumptions of the proof. Call the resulting tree $\Pi $. It remains to check that $\Pi $ satisfies the conditions on the rules and the global conditions on indices.
To begin, there are two observations concerning $\Pi $. First, if $a\neq \emptyset $, then an occurrence of an indexed formula $A_{a}$ in $\Pi $ has above it, for each $k\in a$, an undischarged assumption whose index is $\{ k\}$. Second, observe that each inference labeled with one of $(\lor I)$, $(Dist)$, $(\& I)$, and $(\& E)$ is an application of that rule.
Next, we show that $\Pi $ satisfies the restrictions on $(\mathbin {\rightarrow } I)$. Conditions (i) and (ii) are immediate from the construction. For condition (iii), note that in the $\rightarrow $I case of the construction of the trace tree, all hyp steps occurring in $\sigma ^{i}_{j}$, apart from the initial hyp step of $\sigma ^{i}_{j}$, have hyp indices greater than that of line i, and the construction of $\Pi $ does not change any indices. Suppose that there is a violation of condition (iii) for some application $\alpha $ of $(\mathbin {\rightarrow } I)$ such that $A_{\{ k_{\alpha }\}}$ is discharged but for some $k>k_{\alpha }$, $B_{\{ k\}}$ is an open assumption of $\pi _{\alpha }$. Suppose the former is the translation of hyp line n and the latter of hyp line m. There are two cases, $n<m$ and $n>m$, since $n=m$ is impossible. Suppose $n<m$. As $B_{\{ k\}}$ is an open assumption in $\pi _{\alpha }$ but $A_{\{ k_{\alpha }\}}$ is discharged by $\alpha $, the subproof $\sigma (m)$ cannot close before $\sigma (n)$ does. But this violates the conditions on nesting in $\Sigma $. If $n>m$, then the restriction of the hyp rule that hyp indices increase with depth is violated. Therefore, $\alpha $ cannot be a violation of (iii). So, there are no violations of (iii).
Next, we argue that $\Pi $ satisfies the restrictions on $(\mathbin {\rightarrow } E)$ and $(\lor E)$. $\Pi $ satisfies the restrictions because the corresponding steps in $\Sigma $ did, and the construction of $\Pi $ does not alter any indices.
All that remains is to check that the global conditions on indices are satisfied. The condition that assumptions have singleton indices holds, as no step in the translation added additional numbers to indices. No open assumptions of distinct formulas in $\Pi $ have the same index. The construction did not change the indices on the open assumptions of the cascade proof, which were all distinct as required by the hyp rule. As all the conditions are satisfied, I conclude that $\Pi $ is the desired tree proof.
To translate a cascade proof $\Sigma $ of $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\mathbin {\vdash _{\mathfrak {C}}} B_{b}$, one constructs $\mathfrak {t}(\mathfrak {L}(\Sigma ))$. One then constructs a series of proofs, $\Pi _{0},\ldots ,\Pi _{n}$, marks assumptions discharged, removes certain nodes, and finally changes the open assumption nodes from the form $[A_{\{ k\}}, i]$ to $A_{\{ k\}}$. The resulting proof ${\Pi }$ is a tree proof of $A^{i_{1}}_{a_{i_{1}}},\ldots ,A^{i_{m}}_{a_{i_{m}}}\mathbin {\vdash _{\mathfrak {T}}} B_{b}$. Call the translation just described $\mathbb {T}$. So $\mathbb {T}(\Sigma )=\Pi $. This gives us the following theorem.
Theorem 1. Let $\Sigma $ be a cascade proof of $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\mathbin {\vdash _{\mathfrak {C}}} B_{b}$. Then $\mathbb {T}(\Sigma )=\Pi $ is a tree proof of $A^{i_{1}}_{a_{i_{1}}},\ldots ,A^{i_{m}}_{a_{i_{m}}}\mathbin {\vdash _{\mathfrak {T}}} B_{b}$.
The proof $\Pi $ may leave out some steps of $\Sigma $, including some open assumptions of $\Sigma $. These are steps, and hypotheses, that do not show up in the trace tree of the conclusion. Such steps are, in a sense, otiose for obtaining the conclusion, so it is not, I think, a problem that they are omitted by the translation.
Let us turn to translations in the other direction, from trees to cascades.
4 Translations: from trees to cascades
The translation from tree proofs to cascade proofs uses a technique that I will call weaving, which I will define following some auxiliary definitions.
Definition 6. Suppose the cascade proofs $\Sigma _1,\ldots ,\Sigma _n$ are, respectively, proofs of . Then the proofs are compatible iff for formulas $B_{\{ k_1\}}$ and $C_{\{ k_2\}}$ occurring in the $X_{i}$’s, if $k_{1}=k_{2}$ then $B=C$.
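Definition 6 admits a direct reading as a check on the hyp steps of the proofs: a shared index must label a single formula. The encoding of hypotheses as (formula, k) pairs below is my own.

```python
# A direct reading of Definition 6: hyp indices shared between proofs
# must label the same formula. The (formula, index) encoding is mine.

def compatible(*hypothesis_lists):
    seen = {}
    for hyps in hypothesis_lists:
        for formula, k in hyps:
            if seen.setdefault(k, formula) != formula:
                return False
    return True

assert compatible([("A", 1), ("B", 2)], [("A", 1), ("C", 3)])
assert not compatible([("A", 1)], [("B", 1)])
```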
The idea for the translation is that subtrees of the tree proof will translate to compatible cascade proofs, which will in turn be woven together. The index discipline on tree proofs will enforce the compatibility condition. To proceed, we will need a definition.
Definition 7. Let $\Sigma $ be a cascade proof of $A^1_{\{ k_1\}},\ldots ,A^n_{\{ k_n\}}\mathbin {\vdash _{\mathfrak {C}}} B_{b}$ where $\lambda _{i}$ and $\lambda _{j}$ are the lines for the hyp steps for $A^{i}_{\{ k_{i}\}}$ and $A^{j}_{\{ k_{j}\}}$, where $j= i+1$ and $1\leq i<j\leq n$.

The top of $\sigma (\lambda _{i})$ with respect to $\lambda _{j}$ is the sequence of lines $\langle s_{\lambda _{i}},\ldots , s_{m}\rangle $, where $\lambda _{j}=m+1$.

The truncated top of $\sigma (\lambda _{i})$ with respect to $\lambda _{j}$ is the sequence of lines $\langle s_{\lambda _{i}+1},\ldots , s_{m}\rangle $, where $\lambda _{j}=m+1$. If $\lambda _{i}=m$, then the truncated top is $\langle \rangle $.
In the top of an open subproof, there may be hyp steps, although these will be for subproofs that close in the top of that subproof. So, the depth of the top of an open subproof may increase before decreasing again to its initial depth.
Suppose one has a pair of compatible cascade proofs, $\Sigma _1$ of $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\mathbin {\vdash _{\mathfrak {C}}} B_{b}$ and $\Sigma _2$ of $C^1_{c_{1}},\ldots ,C^m_{c_{m}}\mathbin {\vdash _{\mathfrak {C}}} D_{d}$. Let $(E^1_{e_1}, p_1),\ldots ,(E^{\ell }_{e_{\ell }},p_{\ell })$ be a sequence of pairs of the open hypotheses of $\Sigma _1$ and $\Sigma _2$ together with natural numbers such that $e_i\leq e_{i+1}$, for $1\leq i<\ell $, and $p_i$ is 1 or 2, depending on whether $E^{i}_{e_{i}}$ is an open assumption of exactly one of the two proofs or of both. It will be notationally convenient to have a way to refer to the E formulas in order of their subscripts. Let X be $\langle A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\rangle $ and let Y be $\langle C^1_{c_{1}},\ldots ,C^m_{c_{m}}\rangle $. Then $\mu (X, Y)$ is $\langle E^1_{e_1} ,\ldots ,E^{\ell }_{e_{\ell }}\rangle $.
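A sketch of the merged assumption sequence with its tags, under my own encoding of open hypotheses as (formula, index) pairs, is the following; compatibility guarantees that the index-to-formula maps agree where they overlap.

```python
# A sketch of the merged assumption sequence mu and its tags. Open
# hypotheses are (formula, index) pairs; the encoding is my own.

def merged_assumptions(X, Y):
    """Order the open hypotheses of two compatible proofs by index,
    tagging each with 2 when open in both proofs and 1 otherwise."""
    xs = {k: f for f, k in X}
    ys = {k: f for f, k in Y}
    out = []
    for e in sorted(xs.keys() | ys.keys()):
        formula = xs.get(e, ys.get(e))
        p = 2 if e in xs and e in ys else 1
        out.append(((formula, e), p))
    return out

X = [("A", 1), ("B", 3)]
Y = [("A", 1), ("C", 2)]
assert merged_assumptions(X, Y) == [(("A", 1), 2), (("C", 2), 1), (("B", 3), 1)]
```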
We construct a sequence of proofs, $\Sigma ^{i}$, where $\Sigma ^{0}=\langle \rangle $, and a series of partial functions defined on initial segments of the natural numbers, $f^{i}:\omega \times \{ 1,2\}\rightharpoonup \omega $, $0\leq i<\ell $. The notation $\sigma _{j}$ will be used for subproofs from $\Sigma _{j}$, with $j\in \{ 1,2\}$. Given the proof constructed up to stage i, $\Sigma ^{i}$, further lines are appended as indicated below. Let X be the sequence $E^{1}_{e_{{1}}},\ldots ,E^{i}_{e_{{i}}}$, and if $i=0$, $X=\langle \rangle $. Where $s(Y)$ is a sequent with the displayed sequence Y of indexed formulas, $s(X)$ is $s(Y)$, replacing Y with X. There are two basic cases, depending on whether $p_{i+1}=1$ or $p_{i+1}=2$.
- Case 1: Embed. In this case, $p_{i+1}=1$, so $E^{i+1}_{e_{{i+1}}}$ is an assumption of only one proof, $\Sigma _{j}$, and it is the hyp step on line $\lambda $, where Y is a possibly empty sequence of indexed formulas. Append to $\Sigma ^{i}$ the sequence $\langle s_{r}(X),\ldots ,s_{p}(X)\rangle $, where $\langle s_{r}(Y),\ldots ,s_{p}(Y)\rangle $ is the top of $\sigma _{j}(\lambda )$. Let $\lambda _{j}$ be the greatest line number from $\Sigma _{j}$ such that $f^{i}(\lambda _{j},j)$ is defined.Footnote 21 Set $f^{i+1}(n,j)=f^{i}(n,j)$, where $n\leq \lambda _{j}$. For each line $s_{q}(Y)$ in the top of $\sigma _{j}(\lambda )$, set $f(q,j)=m$, where m is the line number of the newly added $s_{q}(X)$ in $\Sigma ^{i+1}$. Finally, each justification number n in $\Sigma ^{i+1}$ is replaced by $f(n,j)$, with $j\in \{ 1,2\}$ as appropriate.
- Case 2: Merge. In this case, $p_{i+1}=2$, so $E^{i+1}_{e_{i+1}}$ is a hyp formula of both $\Sigma _{1}$ and $\Sigma _{2}$. The respective lines are $\lambda _{1}$ and $\lambda _{2}$. Append to $\Sigma ^{i}$ the sequence $\langle s_{r_{1}}(X),\ldots ,s_{p_{1}}(X)\rangle $, where $\langle s_{r_{1}}(Y),\ldots ,s_{p_{1}}(Y)\rangle $ is the top of $\sigma _{1}(\lambda _{1})$. Then, append the sequence $\langle s_{r_{2}}(X),\ldots ,s_{p_{2}}(X)\rangle $, where $\langle s_{r_{2}}(Z),\ldots ,s_{p_{2}}(Z)\rangle $ is the truncated top of $\sigma _{2}(\lambda _{2})$. As in the previous case, let $\lambda _{j}$ be the greatest line number from $\Sigma _{j}$ such that $f^{i}(\lambda _{j},j)$ is defined. Set $f^{i+1}(n,j)=f^{i}(n,j)$, where $n\leq \lambda _{j}$. For each line $s_{q}(Y)$ in the top of $\sigma _{1}(\lambda _{1})$, set $f(q,1)=m$, where m is the line number of the newly added $s_{q}(X)$ in $\Sigma ^{i+1}$. For each line $s_{q}(Z)$ in the truncated top of $\sigma _{2}(\lambda _{2})$, set $f(q,2)=m$, where m is the line number of the newly added $s_{q}(X)$ in $\Sigma ^{i+1}$. Additionally, set $f(\lambda _{2}+1,2)=f(\lambda _{1}+1,1)$. Finally, each justification number n in $\Sigma ^{i+1}$ is replaced by $f(n,j)$, with $j\in \{ 1,2\}$ as appropriate.
The preceding construction is the weave of $\Sigma _{1}$ and $\Sigma _{2}$, $\mathcal {W}(\Sigma _1,\Sigma _2)$, yielding $\Sigma ^{\ell }$ as the output.

There are two important features of $\mathcal {W}(\Sigma _{1},\Sigma _{2})$. First, it is a cascade proof: each stage produces a sequence of lines that satisfies the conditions on being a proof with appropriate rules and justifications. Second, since no indices are changed, if the proofs $\Sigma _1,\ldots ,\Sigma _n$ are compatible, then $\mathcal {W}(\Sigma _{1},\Sigma _{2}),\Sigma _3,\ldots ,\Sigma _n$ are compatible.
While the details of weaving can get a bit involved, the idea behind the construction is fairly straightforward. Given two cascade proofs, one orders their open assumptions so that the hyp indices are increasing. One proceeds down the sequence, pulling in steps, with modifications to the sequence of assumptions, from one or both proofs, according to the current member of the sequence of assumptions, $(E_{e},p)$. In the first case, one inserts the steps in the top of the subproof beginning with $E_{e}$, with the possible addition of more hypotheses. In the second, one inserts the steps in the top of the subproof beginning with $E_{e}$ in one proof, and then the steps in the truncated top of the subproof starting with $E_{e}$ in the other, “merging” the two hyp steps for $E_{e}$ in the weave. Figure 2 presents the graphical form of case $1$, with the leftmost proof being $\Sigma ^{i}$, the middle displaying the relevant subproof top from $\Sigma _{j}$, with $j\in \{ 1,2\}$, and the rightmost presenting $\Sigma ^{i+1}$. Figure 3 presents the graphical form of case $2$, with the tops of the two displayed subproofs from $\Sigma _{1}$ and $\Sigma _{2}$ on the left merged to produce the subproof of $\Sigma ^{i}$ whose top is on the right.Footnote 22
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_fig2.png?pub-status=live)
Fig. 2 Weave case $1$ in graphical form.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_fig3.png?pub-status=live)
Fig. 3 Weave case $2$ in graphical form.
I can now proceed to the translation. One translates the tree proof $\Pi $ from the leaves downwards to the conclusion. There are several cases, depending on the rule used. It will be helpful to bear in mind that, since $\Pi $ is a tree proof, the proof as a whole obeys the index discipline on assumptions.
- Case: Assumption. $\Pi $ is an assumption, $A_{\{ k\}}$. This translates to a hyp step.
- Case: One premise, no discharge. This case covers $(\& E)$, $(\lor I)$, and $(Dist)$. The case is the following. By the inductive hypothesis, there is a translation $\Sigma $ of $\Pi $, ending with the following line. This is then extended as follows.
- Case: $(\mathbin {\rightarrow } I)$. The case is the following. By the inductive hypothesis, there is a translation $\Sigma '$ of $\Pi $, ending with the following. Note that since k is the greatest integer among the indices on open assumptions, $A_{\{ k\}}$ heads the innermost subproof of $\Sigma '$. One can then extend $\Sigma '$ with the following line. Let this new proof be $\Sigma $, which is the desired translation.
- Case: $(\mathbin {\rightarrow } E)$. The case is the following. By the inductive hypothesis, there are translations $\Sigma _1$ and $\Sigma _2$ of $\Pi _1$ and $\Pi _2$, respectively, ending with the following. By the index discipline on the trees, the indices on open hypotheses of $\Sigma _1$ and $\Sigma _2$ satisfy the conditions on indices and they are compatible.Footnote 23 Construct the weave $\mathcal {W}(\Sigma _1,\Sigma _2)$. There are then two subcases, depending on whether ${\text {max}(b)}<{\text {max}(a)}$. In case ${\text {max}(b)}<{\text {max}(a)}$, $\mathcal {W}(\Sigma _1,\Sigma _2)$ ends with the following, where U is the initial subsequence of $\mu (X, Y)$ obtained at line i. This is then extended with the following. The step satisfies Brady’s conditions, if any, because the step in the tree proof satisfied them. The subcase in which ${\text {max}(b)}\geq {\text {max}(a)} $ is similar, with a different reit step.
- Case: $(\& I)$. The case is the following. By the inductive hypothesis, there are translations $\Sigma _1$ and $\Sigma _2$ of $\Pi _1$ and $\Pi _2$, respectively, ending in the following. Then, $\mathcal {W}(\Sigma _1,\Sigma _2)$ ends with the following. $\mathcal {W}(\Sigma _1,\Sigma _2)$ is then extended with the following. The result is the desired translation.
- Case: $(\lor E)$. The case is the following. By the inductive hypothesis, there are translations $\Sigma _1$, $\Sigma _2$, and $\Sigma _3$ of $\Pi _1$, $\Pi _2$, and $\Pi _3$, respectively, ending with the following. $\mathcal {W}(\Sigma _1,\Sigma _2)$ ends with the following. There are then two cases, depending on whether ${\text {max}(b)}<{\text {max}(a)}$ or not. Only one, ${\text {max}(b)}<{\text {max}(a)}$, will be presented here, as they are similar. ${\mathcal {W}(\mathcal {W}(\Sigma _1,\Sigma _2),\Sigma _3)}$ then ends with the following lines, where U is the initial subsequence of $\mu (X, Y)$ obtained at lines $m_{1}$ and $m_{2}$. This is extended by the following lines. The result is the desired translation. The translation satisfies Brady’s conditions, if any, because the original tree proof satisfied them.

Call the translation just described $\mathbb {C}$. So $\mathbb {C}(\Pi )=\Sigma $. We have the following theorem.
Theorem 2. Let $\Pi $ be a tree proof of $A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}\mathbin {\vdash _{\mathfrak {T}}} B_{b}$. Then $\mathbb {C}(\Pi )=\Sigma $ is a cascade proof of ${A^{1}_{a_{1}},\ldots ,A^{n}_{a_{n}}} \mathbin {\vdash _{\mathfrak {C}}} {B_{b}}$.
Tree proofs translate to cascade proofs, and cascade proofs translate to tree proofs. Brady (Reference Brady1984) shows that the cascade systems are sound and complete for the relevant logic X, where X is a logic listed in §1. As Brady’s results carry over to the cascade systems of this paper, it follows that the tree system for the relevant logic X is sound and complete for X.
5 Conclusions
In this paper, I have provided indexed tree natural deduction proof systems for a range of relevant logics with cascade proof systems. The translations above demonstrate the adequacy of these systems and, I hope, shed some light on the mechanisms of the index discipline and restrictions in the Anderson–Belnap–Brady systems. I will close with some comments on the work so far, beginning with a discussion of the trade-offs of the two kinds of natural deduction systems, followed by accommodating negation and the logic E, and ending with a discussion of dependency as found in the indexed systems, contrasted with Lemmon-style systems.
There is a trade-off in the move from tree to cascade or conversely: In the cascade systems, the dependencies of the assumptions are made clear by the subproof structure, whose linearity requires abandoning the immediate connection between premises and conclusions of rules; in contrast, the tree systems enforce an immediate connection between premises and conclusions but obscure the dependencies of assumptions by spreading them throughout the tree structure.Footnote 24 Given these trade-offs, one might wonder what advantages are to be gained from moving to a tree proof system. There are two potential payoffs that I would like to highlight. First, fusion is a very natural connective in the context of trees whose inferential role is, perhaps, less clear in cascade systems. The rules for fusion are the following.Footnote 25
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211028155520724-0816:S1755020319000133:S1755020319000133_eqnu3.png?pub-status=live)
For logics weaker than R, conditions will need to be placed on the indices in both rules, which I leave open here. In the context of tree proofs, the contribution of fusion may be easier to see than in the context of cascade proofs.
Second, techniques for proving normalization theorems are well known for tree natural deduction systems.Footnote 26 Normalization for cascade systems, while not unknown, is not as common.Footnote 27 These systems may facilitate normalization proofs for some of these relevant logics, particularly if $(Dist)$ is dropped. Dropping $(Dist)$ results in the distribution-less logic corresponding to dropping the distribution axiom from the Hilbert-style system or the distribution rule from the cascade system.
Next, a comment on extensions of the systems. In §1, I indicated that negation rules can be added without any problems. The translations extend to negation rules in a straightforward way. Thus, a large swath of the propositional logics studied by Brady (Reference Brady1984) can be given tree systems.Footnote 28
Returning to the motivations laid out in the introduction, one might wonder about E, which is conspicuous in its absence from the bulk of this paper. Anderson and Belnap’s cascade system for E is obtained by imposing a restriction on the rule of reiteration, which captures the modal aspect of E’s conditional. Brady (Reference Brady1984) does not offer a cascade system for E, since those systems are obtained by restrictions on the rules $\rightarrow $E and $\lor $E rather than on reit. There are two ways one can proceed for giving a tree natural deduction system for E. The first is to use one of the systems discussed by Standefer & Brady (Reference Standefer and Brady2019) as the basis for a tree system. The second is to use the tree system of Standefer (Reference Standefer2018). These offer different views of natural deduction for E.
Finally, I will close with a comment about the indices, which play such an important role in Anderson–Belnap–Brady systems, as well as the tree systems presented here. Some logicians, such as Prawitz, give systems that do not use indices, opting for side conditions on rules. The side conditions can, sometimes, be complex and nonlocal. The indices have the advantage of being local annotations, in the sense of going along with each formula, and permitting simple rules. To check whether a rule can be applied, one checks the indices on the premises of the rule, rather than checking a global condition on the proofs. The indices are simple, in the sense that they are just sets of natural numbers. Other approaches to relevant logic tend to use more general structures to indicate dependencies on premises.Footnote 29 The indices are, further, not model-theoretically loaded. They do not need to be understood as representing features of a model, as usually happens in indexed tableau systems.Footnote 30 This is not to say that the indices must be treated purely instrumentally and without philosophical import.Footnote 31 The indices here track use and dependence, important concepts in relevant logic.
The main advantage of the indices over the Lemmon-style systems of, say, Slaney (Reference Slaney1990) is that the structure of dependencies is simpler. A set of natural numbers has a natural representation as the sequence of those numbers in increasing order, as opposed to the more general groupoid structures used by Slaney. Assumptions corresponding to later numbers depend on the assumptions corresponding to earlier numbers; newer assumptions come later. With that in mind, we can revisit Brady’s conditions from §1.
The rule $\rightarrow $E will be the rule of interest below, with $A\mathbin {\rightarrow } B_{b}$ the major premise and $A_{a}$ the minor. The condition for B requires that the minor premise depend on only one assumption, which is newer than anything the major premise depends on, so the dependencies of the conclusion extend those of the major premise by one. The condition added for DJ permits the dependencies of the major and minor premises to overlap significantly, requiring that the minor premise depends on what the major premise does, with the addition of one newer assumption. The dependencies of the conclusion are then those of the minor premise. With both B and DJ, the dependencies of the conclusion extend those of the major premise by one. The condition for T says that the minor premise must depend on newer assumptions than the major premise. The dependencies of the conclusion are interlaced in the manner of the weave construction.Footnote 32 The condition for RW requires that the dependencies be, in a sense, disjoint, and the conclusion gathers together these dependencies. Finally, R eschews additional restrictions, relying only on the strictures of $\rightarrow $I.
For the indexed natural deduction systems of this paper, there is a sense in which it is most natural to start with R, obtaining weaker logics by adding conditions on the dependencies being appropriate for $\rightarrow $E and $\lor $E. In contrast, the Lemmon-style systems “start” with the weaker logics, around B. They have rules operating on premise structures, which are not part of the cascade or tree systems of this paper. Stronger logics are obtained through the adoption of additional rules dealing with premise structure. These systems appear to provide a different view of dependence than that of the indexed systems, although a fuller discussion is beyond the scope of this paper.
Acknowledgments
I would like to thank Greg Restall, Lloyd Humberstone, Ross Brady, and the audience at the Melbourne Logic Seminar for comments and discussion that have improved this paper. Thanks also to an anonymous referee at this journal for constructive feedback that greatly improved this paper. This research was supported by the Australian Research Council, Discovery Grant DP150103801.