
ON EQUATIONAL COMPLETENESS THEOREMS

Published online by Cambridge University Press:  13 September 2021

TOMMASO MORASCHINI*
Affiliation:
DEPARTMENT OF PHILOSOPHY, UNIVERSITY OF BARCELONA, CARRER DE MONTALEGRE 6, 08001 BARCELONA, SPAIN

Abstract

A logic is said to admit an equational completeness theorem when it can be interpreted into the equational consequence relative to some class of algebras. We characterize logics admitting an equational completeness theorem that are either locally tabular or have some tautology. In particular, it is shown that a protoalgebraic logic admits an equational completeness theorem precisely when it has two distinct logically equivalent formulas. While the problem of determining whether a logic admits an equational completeness theorem is shown to be decidable both for logics presented by a finite set of finite matrices and for locally tabular logics presented by a finite Hilbert calculus, it becomes undecidable for arbitrary logics presented by finite Hilbert calculi.

© The Author(s), 2021. Published by Cambridge University Press on behalf of The Association for Symbolic Logic

1 Introduction

A propositional logic $\vdash $ admits an equational (soundness and) completeness theorem if there are a set of equations $\boldsymbol {\tau }(x)$ and a class of algebras $\mathsf {K}$ such that for every set of formulas $\varGamma \cup \{ \varphi \}$ ,

$$ \begin{align*} \varGamma \vdash \varphi \Longleftrightarrow& \text{ for every }\boldsymbol{A} \in \mathsf{K} \text{ and } \vec{a} \in A,\\ &\text{ if }\boldsymbol{A} \vDash \boldsymbol{\tau}(\gamma^{\boldsymbol{A}}(\vec{a}))\quad \text{for all }\gamma \in \varGamma, \text{ then }\boldsymbol{A} \vDash \boldsymbol{\tau}(\varphi^{\boldsymbol{A}}(\vec{a})). \end{align*} $$

In this case, $\mathsf {K}$ is said to be an algebraic semantics for $\vdash $ (or a $\boldsymbol {\tau }$ -algebraic semantics for $\vdash $ ) [Reference Blok and Pigozzi4]. Accordingly, a logic admits an equational completeness theorem precisely when it has an algebraic semantics.Footnote 1

For instance, in view of the well-known equational completeness theorem of classical propositional logic ${\textbf { CPC}}$ with respect to the variety of Boolean algebras, stating that for every set of formulas $\varGamma \cup \{ \varphi \}$ ,

$$ \begin{align*} \varGamma \vdash_{{\textbf{ CPC}}} \varphi \Longleftrightarrow& \text{ for every Boolean algebra }\boldsymbol{A} \text{ and } \vec{a} \in A,\\ &\text{ if }\boldsymbol{A} \vDash \gamma^{\boldsymbol{A}}(\vec{a}) \thickapprox 1\quad \text{for all }\gamma \in \varGamma, \text{ then }\boldsymbol{A} \vDash \varphi^{\boldsymbol{A}}(\vec{a}) \thickapprox 1, \end{align*} $$

the variety of Boolean algebras is an algebraic semantics for ${\textbf { CPC}}$ .

Despite the apparent simplicity of the concept, intrinsic characterizations of logics with an algebraic semantics have proved elusive, partly because equational completeness theorems can take nonstandard forms. For instance, by Glivenko’s theorem [Reference Glivenko24], for every set of formulas $\varGamma \cup \{ \varphi \}$ ,

$$ \begin{align*} \varGamma \vdash_{{\textbf{ CPC}}} \varphi \Longleftrightarrow\text{ } \{ \lnot \lnot \gamma \colon \gamma \in \varGamma \} \vdash_{{\textbf{ IPC}}} \lnot \lnot \varphi, \end{align*} $$

where ${\textbf { IPC}}$ stands for intuitionistic propositional logic. Since Heyting algebras form an $\{ x \thickapprox 1 \}$ -algebraic semantics for ${\textbf { IPC}}$ , one obtains

$$ \begin{align*} \varGamma \vdash_{{\textbf{ CPC}}} \varphi \Longleftrightarrow& \text{ for every Heyting algebra }\boldsymbol{A} \text{ and } \vec{a} \in A,\\ &\text{ if }\boldsymbol{A} \vDash \lnot \lnot \gamma^{\boldsymbol{A}}(\vec{a}) \thickapprox 1\quad \text{for all }\gamma \in \varGamma, \text{ then }\boldsymbol{A} \vDash \lnot \lnot \varphi^{\boldsymbol{A}}(\vec{a})\thickapprox 1. \end{align*} $$

Consequently, the variety of Heyting algebras is also an algebraic semantics for ${\textbf { CPC}}$ , although certainly not the intended one [Reference Blok and Rebagliato7, Proposition 2.6].

As it happens, not only is there no easy escape from nonstandard equational completeness theorems, but, sometimes, these are the sole possible ones. It might be convenient to illustrate this point with a simple example, namely the $\langle \land , \lor \rangle $ -fragment ${\textbf { CPC}}_{\land \lor }$ of ${\textbf { CPC}}$ . As we proceed to explain, this fragment lacks any standard equational completeness theorem, i.e., one with respect to the variety of distributive lattices $\mathsf {DL}$ , but admits a nonstandard one.

Proof sketch. Suppose, with a view to contradiction, that ${\textbf { CPC}}_{\land \lor }$ admits a standard equational completeness theorem. Then there exists a set of equations $\boldsymbol {\tau }(x)$ such that $\mathsf {DL}$ is a $\boldsymbol {\tau }$ -algebraic semantics for ${\textbf { CPC}}_{\land \lor }$ . As every equation in which only a single variable occurs is valid in $\mathsf {DL}$ , we obtain $\mathsf {DL} \vDash \boldsymbol {\tau }(x)$ . Together with the assumption that $\mathsf {DL}$ is a $\boldsymbol {\tau }$ -algebraic semantics for ${\textbf { CPC}}_{\land \lor }$ , this implies $\emptyset \vdash _{{\textbf { CPC}}_{\land \lor }} x$ , a contradiction. Consequently, ${\textbf { CPC}}_{\land \lor }$ lacks any standard equational completeness theorem.

Nonetheless, it admits a nonstandard one. For consider the three-element algebra $\boldsymbol {A} = \langle \{ 0^{+}, 0^{-}, 1 \}; \land , \lor \rangle $ whose binary commutative operations are defined by the following tables:

$$ \begin{align*} \begin{array}{ c | c | c | c |} \land & 0^{-} & 0^{+} & 1 \\ \hline 0^{-} & 0^{+} & 0^{+} & 0^{+} \\ \hline 0^{+} & & 0^{-} & 0^{+} \\ \hline 1 & & & 1 \\ \hline \end{array} \quad \begin{array}{ c | c | c | c |} \lor & 0^{-} & 0^{+} & 1 \\ \hline 0^{-} & 0^{+} & 0^{+} & 1 \\ \hline 0^{+} & & 0^{-} & 1 \\ \hline 1 & & & 1 \\ \hline \end{array} \end{align*} $$

Let also $\boldsymbol {D}_{2}$ be the two-element distributive lattice with universe $\{ 0, 1 \}$ . Notice that the map $f \colon \boldsymbol {A} \to \boldsymbol {D}_{2}$ such that $f(1) = 1$ and $f(0^{+}) = f(0^{-}) = 0$ is a surjective homomorphism. Bearing in mind that ${\textbf { CPC}}_{\land \lor }$ is complete with respect to the logical matrix $\langle \boldsymbol {D}_{2}, \{ 1 \} \rangle $ , this implies that it is also complete with respect to the matrix $\langle \boldsymbol {A}, f^{-1}(\{ 1 \})\rangle $ , that is $\langle \boldsymbol {A}, \{ 1 \} \rangle $ . Finally, let $\boldsymbol {\tau }(x) := \{ x \land x \thickapprox x \}$ . Since $1 \land 1 = 1$ , while $0^{+} \land 0^{+} = 0^{-}$ and $0^{-} \land 0^{-} = 0^{+}$ , the set $\{ 1 \}$ is precisely the set of solutions of the equations in $\boldsymbol {\tau }(x)$ in $\boldsymbol {A}$ . We conclude that the class $\{ \boldsymbol {A} \}$ constitutes a $\boldsymbol {\tau }$ -algebraic semantics for ${\textbf { CPC}}_{\land \lor }$ .⊣

The nonstandard equational completeness theorem for ${\textbf { CPC}}_{\land \lor }$ described above is a special instance of a general construction, devised in [Reference Blok and Rebagliato7, Theorem 3.1], that produces a (possibly nonstandard) algebraic semantics for every logic possessing an idempotent connective. While most familiar logics have such a connective and, therefore, an algebraic semantics, the existence of logics that lack any algebraic semantics has been known since [Reference Blok and Pigozzi4]; see also [Reference Blok and Rebagliato7, Reference Font18, Reference Font19, Reference Raftery38]. It is therefore sensible to wonder whether an intelligible characterization of logics with an algebraic semantics could possibly be obtained [Reference Raftery39]. In this paper we provide a positive answer to this question for a wide family of logics.

To this end, it is convenient to isolate some limit cases: a logic is said to be graph-based when its language comprises only constant symbols and, possibly, a single unary connective. The algebraic models of graph-based logics are suitable directed graphs with distinguished elements. This makes them amenable to a series of combinatorial observations, culminating in a classification of graph-based logics with an algebraic semantics (Theorem 10.2 and Section 11).

To tackle the case of logics that are not graph-based, we first introduce a new method for constructing nonstandard algebraic semantics based on a universal algebraic trick known as Maltsev’s Lemma, which provides a description of congruence generation (Theorem 6.2). As a consequence, we derive a characterization of logics with an algebraic semantics that, moreover, have at least one tautology in which a variable occurs (Theorem 8.1).

This result is subsequently specialized to the case of protoalgebraic logics, i.e., logics $\vdash $ for which there exists a set of formulas $\varDelta (x, y)$ that globally behaves as a very weak implication, in the sense that $\emptyset \vdash \varDelta (x, x)$ and $x, \varDelta (x, y) \vdash y$ . More precisely, we prove that a nontrivial protoalgebraic logic has an algebraic semantics if and only if syntactic equality differs from logical equivalence, that is, there are two distinct logically equivalent formulas (Theorem 9.3). It follows that most familiar protoalgebraic logics have an algebraic semantics which, however, can be nonstandard. For instance, while the local consequences of the modal systems ${\textbf { K}}$ , ${\textbf {K4}}$ , and ${\textbf { S4}}$ possess a nonstandard algebraic semantics, the classes of algebras naturally associated with them, namely the varieties of modal, K4, and interior algebras, are not an algebraic semantics for them (Corollary 9.7).

Lastly, a logic is called locally tabular when, up to logical equivalence, it has only finitely many n-ary formulas for every nonnegative integer n. A detailed characterization of locally tabular logics with an algebraic semantics is established in Section 11. In particular, every locally tabular logic whose language comprises at least one n-ary connective with $n \geqslant 2$ has an algebraic semantics (Proposition 11.3).

The paper ends by considering the computational aspects of the problem of determining whether a logic has an algebraic semantics. We show that this problem is decidable both for logics presented by a finite set of finite logical matrices and for locally tabular logics presented by a finite Hilbert calculus. It becomes undecidable, however, when formulated for arbitrary (i.e., not necessarily locally tabular) logics presented by a finite Hilbert calculus (Theorem 12.1).

2 Matrix semantics

For a systematic presentation of matrix semantics, we refer the reader to [Reference Blok and Pigozzi3–Reference Blok and Pigozzi5, Reference Cintula and Noguera11, Reference Czelakowski14, Reference Font20–Reference Font, Jansana and Pigozzi22], while for a basic introduction to universal algebra we recommend [Reference Bergman2, Reference Burris and Sankappanavar8]. Given an algebraic language $\mathscr {L}$ , we denote by $Fm_{\mathscr {L}}$ the set of its formulas built up from a denumerable set $\textit {Var}$ of variables and by $\boldsymbol {Fm}_{\mathscr {L}}$ the corresponding algebra. When $\mathscr {L}$ is clear from the context, we shall write $Fm$ and $\boldsymbol {Fm}$ instead of $Fm_{\mathscr {L}}$ and $\boldsymbol {Fm}_{\mathscr {L}}$ . Generic elements of $\textit {Var}$ will be denoted by $x, y, z, \dots $ .

A logic $\vdash $ is then a consequence relation on the set $Fm_{\mathscr {L}}$ , for some algebraic language $\mathscr {L}$ , that is substitution invariant in the sense that for every substitution $\sigma $ on $Fm_{\mathscr {L}}$ (a.k.a. endomorphism of $\boldsymbol {Fm}_{\mathscr {L}}$ ) and every $\varGamma \cup \{ \varphi \} \subseteq Fm_{\mathscr {L}}$ ,

$$ \begin{align*} \text{if }\varGamma \vdash \varphi \text{, then }\sigma[\varGamma] \vdash \sigma(\varphi). \end{align*} $$

Given $\varGamma , \varDelta \subseteq Fm$ , we write $\varGamma \vdash \varDelta $ when $\varGamma \vdash \delta $ for all $\delta \in \varDelta $ . Accordingly, $\varGamma \mathrel {\dashv \mkern 1.5mu\vdash } \varDelta $ stands for $\varGamma \vdash \varDelta $ and $\varDelta \vdash \varGamma $ . For $\varphi , \psi \in Fm$ , we abbreviate $\{ \varphi \} \mathrel {\dashv \mkern 1.5mu\vdash } \{ \psi \}$ by $\varphi \mathrel {\dashv \mkern 1.5mu\vdash } \psi $ .

The language in which a logic $\vdash $ is formulated is denoted by $\mathscr {L}_{\vdash }$ . If $\mathscr {L} \subseteq Fm_{\mathscr {L}_{\vdash }}$ , the $\mathscr {L}$ -fragment of $\vdash $ is the restriction of $\vdash $ to formulas in $Fm_{\mathscr {L}}$ . Moreover, a logic $\vdash '$ is said to be an extension of $\vdash $ if it is formulated in the same language as $\vdash $ and ${\vdash } \subseteq {\vdash '}$ .

Given a logic $\vdash $ , we denote by $\textrm {Cn}_{\vdash } \colon \mathcal {P}(Fm) \to \mathcal {P}(Fm)$ the closure operator naturally associated with $\vdash $ , i.e., the map defined by the rule

$$ \begin{align*} \textrm{Cn}_{\vdash}(\varGamma) := \{ \gamma \in Fm \colon \varGamma \vdash \gamma \}. \end{align*} $$

The elements of $\textrm {Cn}_{\vdash }(\emptyset )$ are said to be the theorems of $\vdash $ . By a rule we understand an expression of the form $\varGamma \rhd \varphi $ where $\varGamma \cup \{ \varphi \} \subseteq Fm$ . A rule $\varGamma \rhd \varphi $ is valid in $\vdash $ if $\varGamma \vdash \varphi $ .

Given an algebra $\boldsymbol {A}$ , a map $p \colon A^{n} \to A$ is said to be a polynomial function of $\boldsymbol {A}$ if there are a formula $\varphi (x_{1},\dots , x_{n}, \vec {y})$ and a tuple $\vec {c} \in A$ such that

$$ \begin{align*} p(a_{1}, \dots, a_{n}) = \varphi^{\boldsymbol{A}}(a_{1}, \dots, a_{n}, \vec{c}) \end{align*} $$

for every $a_{1}, \dots , a_{n} \in A$ .
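
For instance, if $\boldsymbol {A}$ is a Boolean algebra and $c \in A$ , then the map $p \colon A \to A$ defined by

$$ \begin{align*} p(a) := a \to^{\boldsymbol{A}} c, \quad \text{for all }a \in A, \end{align*} $$

is a unary polynomial function of $\boldsymbol {A}$ , as witnessed by the formula $\varphi (x_{1}, y) = x_{1} \to y$ and the tuple $\vec {c} = \langle c \rangle $ .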

Algebras whose language is $\mathscr {L}$ are called $\mathscr {L}$ -algebras. Given a logic $\vdash $ and an $\mathscr {L}_{\vdash }$ -algebra $\boldsymbol {A}$ , a set $F \subseteq A$ is said to be a deductive filter of $\vdash $ on $\boldsymbol {A}$ when for every $\varGamma \cup \{ \varphi \} \subseteq Fm$ such that $\varGamma \vdash \varphi $ and every homomorphism $h \colon \boldsymbol {Fm} \to \boldsymbol {A}$ , if $h[\varGamma ] \subseteq F$ , then $h(\varphi ) \in F$ . The set of deductive filters of $\vdash $ on $\boldsymbol {A}$ is a closure system, whose closure operator is denoted by $\textrm {Fg}_{\vdash }^{\boldsymbol {A}}(\cdot ) \colon \mathcal {P}(A) \to \mathcal {P}(A)$ . When $a \in A$ , we shall write $\textrm {Fg}_{\vdash }^{\boldsymbol {A}}(a)$ as a shorthand for $\textrm {Fg}_{\vdash }^{\boldsymbol {A}}(\{ a \})$ .
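
For instance, the deductive filters of classical propositional logic ${\textbf { CPC}}$ on a Boolean algebra $\boldsymbol {A}$ are precisely the lattice filters of $\boldsymbol {A}$ . Accordingly, the deductive filter of ${\textbf { CPC}}$ generated on $\boldsymbol {A}$ by a set $X \subseteq A$ is

$$ \begin{align*} \{ a \in A \colon b_{1} \land \dots \land b_{n} \leqslant a, \text{ for some }b_{1}, \dots, b_{n} \in X \cup \{ 1 \} \}. \end{align*} $$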

Every logic $\vdash $ can be associated with a distinguished class of algebras, as we proceed to explain. Given an $\mathscr {L}_{\vdash }$ -algebra $\boldsymbol {A}$ , let $\equiv _{\vdash }^{\boldsymbol {A}}$ be the congruenceFootnote 2 of $\boldsymbol {A}$ defined for every $a, c \in A$ as

(1) $$ \begin{align} a \equiv_{\vdash}^{\boldsymbol{A}}c \Longleftrightarrow& \text{ for every unary polynomial function}\ p\ \text{of}\ \boldsymbol{A},\nonumber\\ & \text{ }\textrm{Fg}_{\vdash}^{\boldsymbol{A}}(p(a)) = \textrm{Fg}_{\vdash}^{\boldsymbol{A}}(p(c)). \end{align} $$

The algebraic counterpart of $\vdash $ is the class of algebras

$$ \begin{align*} \mathsf{Alg}(\vdash) := \{ \boldsymbol{A} \colon \! \boldsymbol{A} \text{ is an}\ \mathscr{L}_{\vdash}\text{-algebra and}\equiv_{\vdash}^{\boldsymbol{A}} \text{is the identity relation on }A\}. \end{align*} $$

If $\vdash $ is classical propositional logic, $\mathsf {Alg}(\vdash )$ is the variety (a.k.a. equational class) of Boolean algebras.

A canonical way to present logics is by means of classes of (logical) matrices. By a matrix we understand a pair $\langle \boldsymbol {A}, F \rangle $ where $\boldsymbol {A}$ is an algebra and $F \subseteq A$ . In this case, $\boldsymbol {A}$ is called the algebraic reduct of the matrix. The logic induced by a class of similar matrices $\mathsf {M}$ is the consequence relation $\vdash _{\mathsf {M}}$ on $Fm$ defined for every $\varGamma \cup \{ \varphi \} \subseteq Fm$ as

$$ \begin{align*} \varGamma \vdash_{\mathsf{M}} \varphi \Longleftrightarrow&\text{ for every}\ \langle \boldsymbol{A}, F \rangle \in \mathsf{M}\ \text{and homomorphism }h \colon \boldsymbol{Fm} \to \boldsymbol{A}, \\ &\text{ if }h[\varGamma] \subseteq F\text{, then }h(\varphi)\in F. \end{align*} $$

Accordingly, a class of matrices $\mathsf {M}$ is said to be a matrix semantics for a logic $\vdash $ when the latter coincides with $\vdash _{\mathsf {M}}$ . By the classical Lindenbaum–Tarski method, every logic $\vdash $ is induced by the class of matrices $\{ \langle \boldsymbol {Fm}, \varGamma \rangle \colon \varGamma \subseteq Fm \text { and }\textrm {Cn}_{\vdash }(\varGamma ) = \varGamma \}$ , whence

Theorem 2.1. Every logic has a matrix semantics.

A congruence $\theta $ of an algebra $\boldsymbol {A}$ is compatible with a set $F \subseteq A$ when for every $a, c \in A$ ,

$$ \begin{align*} \text{if }\langle a, c \rangle \in \theta \text{ and }a \in F\text{, then }c \in F. \end{align*} $$

The Leibniz congruence $\boldsymbol {\varOmega }^{\boldsymbol {A}}F$ is the largest congruence on $\boldsymbol {A}$ compatible with F and can be described as follows:

Proposition 2.2 [Reference Font20, Theorem 4.23]

Let $\boldsymbol {A}$ be an algebra, $F \subseteq A$ , and $a, c\in A$ .

$$ \begin{align*} \langle a, c \rangle \in \boldsymbol{\varOmega}^{\boldsymbol{A}}F\Longleftrightarrow& \text{ for every unary polynomial function}\ p\ \text{of}\ \boldsymbol{A},\\ & \text{ }p(a)\in F\text{ if and only if }p(c)\in F. \end{align*} $$

This description of $\boldsymbol {\varOmega }^{\boldsymbol {A}}F$ can be often simplified. For instance, if $\boldsymbol {A}$ is a Boolean algebra and F one of its filters, then

$$ \begin{align*} \langle a, c \rangle \in \boldsymbol{\varOmega}^{\boldsymbol{A}}F \Longleftrightarrow a \to^{\boldsymbol{A}} c, c \to^{\boldsymbol{A}} a \in F. \end{align*} $$

Given a matrix $\langle \boldsymbol {A}, F \rangle $ , we denote by ${\mathbb {H}_{\textrm {s}}^{-1}}(\langle \boldsymbol {A}, F \rangle )$ the class of all matrices $\langle \boldsymbol {B}, G \rangle $ for which there exists a surjective homomorphism $h \colon \boldsymbol {B} \to \boldsymbol {A}$ such that $G = h^{-1}[F]$ .

Lemma 2.3 [Reference Font20, Proposition 4.35.1]

If $\langle \boldsymbol {A}, F \rangle $ is a matrix and $\langle \boldsymbol {B}, G \rangle \in {\mathbb {H}_{\textrm {s}}^{-1}}(\langle \boldsymbol {A}, F \rangle )$ , then $\langle \boldsymbol {A}, F \rangle $ and $\langle \boldsymbol {B}, G \rangle $ induce the same logic.

A matrix $\langle \boldsymbol {A}, F \rangle $ is said to be reduced when $\boldsymbol {\varOmega }^{\boldsymbol {A}}F$ is the identity relation. The matrix

$$ \begin{align*} \langle \boldsymbol{A}, F \rangle^{\ast} := \langle \boldsymbol{A} / \boldsymbol{\varOmega}^{\boldsymbol{A}}F, F / \boldsymbol{\varOmega}^{\boldsymbol{A}}F\rangle \end{align*} $$

is always reduced. Similarly, given a class of matrices $\mathsf {M}$ , set

$$ \begin{align*} \mathsf{M}^{\ast} := \{ \langle \boldsymbol{A}, F \rangle^{\ast} \colon \langle \boldsymbol{A}, F \rangle \in \mathsf{M} \}. \end{align*} $$

As $\langle \boldsymbol {A}, F \rangle \in {\mathbb {H}_{\textrm {s}}^{-1}}(\langle \boldsymbol {A}, F \rangle ^{\ast })$ for every matrix $\langle \boldsymbol {A}, F \rangle $ , from Lemma 2.3 we obtain:

Corollary 2.4. The logics induced by $\mathsf {M}$ and $\mathsf {M}^{\ast }$ coincide, for every class of matrices $\mathsf {M}$ .

By Theorem 2.1 and Corollary 2.4, every logic $\vdash $ has a matrix semantics of the form $\mathsf {M}^{\ast }$ . Moreover, if $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}^{\ast }$ , then F is a deductive filter of $\vdash $ on $\boldsymbol {A}$ and $\boldsymbol {\varOmega }^{\boldsymbol {A}}F$ is the identity relation. In this case, by Proposition 2.2, for every pair of distinct $a, c \in A$ , there is a unary polynomial function p of $\boldsymbol {A}$ such that

$$ \begin{align*} \textrm{Fg}_{\vdash}^{\boldsymbol{A}}(F \cup \{ p(a) \}) \ne \textrm{Fg}_{\vdash}^{\boldsymbol{A}}(F \cup \{ p(c) \}). \end{align*} $$

In particular, this implies

$$ \begin{align*} \textrm{Fg}_{\vdash}^{\boldsymbol{A}}(p(a)) \ne \textrm{Fg}_{\vdash}^{\boldsymbol{A}}(p(c)). \end{align*} $$

By (1), we conclude that $\equiv _{\vdash }^{\boldsymbol {A}}$ is the identity relation, whence $\boldsymbol {A} \in \mathsf {Alg}(\vdash )$ . We deduce:

Corollary 2.5. Every logic $\vdash $ has a matrix semantics whose algebraic reducts belong to $\mathsf {Alg}(\vdash )$ .

When $\boldsymbol {A} = \boldsymbol {Fm}$ , we shall drop the superscript $\boldsymbol {Fm}$ and write $\equiv _{\vdash }$ instead of $\equiv _{\vdash }^{\boldsymbol {Fm}}$ . Two formulas $\varphi $ and $\psi $ are said to be logically equivalent in $\vdash $ when $\varphi \equiv _{\vdash } \psi $ , i.e., if

$$ \begin{align*} \delta(\varphi, \vec{z}) \mathrel{\dashv\mkern1.5mu\vdash} \delta(\psi, \vec{z}),\quad \text{for all }\delta(x, \vec{z}) \in Fm. \end{align*} $$

In the case of classical propositional logic, two formulas $\varphi $ and $\psi $ are logically equivalent in the above sense if and only if $\varphi \leftrightarrow \psi $ is a classical tautology.

Lastly, the free Lindenbaum–Tarski algebra of $\vdash $ is $\boldsymbol {Fm}(\vdash ) := \boldsymbol {Fm} / {\equiv }_{\vdash }$ . In the case of classical propositional logic, $\boldsymbol {Fm}(\vdash )$ is the free Boolean algebra with a denumerable set of free generators. More generally, the following holds.

Lemma 2.6 [Reference Font20, Proposition 5.75.2]

For every logic $\vdash $ , $\boldsymbol {Fm}(\vdash )$ is the free algebra of $\mathsf {Alg}(\vdash )$ with a denumerable set of free generators.

Validity of equations in the algebraic counterpart of a logic and logical equivalence are related as follows:

Lemma 2.7. The following conditions are equivalent for a logic $\vdash $ and $\varepsilon , \delta \in Fm$ :

  1. (i) $\mathsf {Alg}(\vdash ) \vDash \varepsilon \thickapprox \delta ;$

  2. (ii) $\vdash $ is induced by a class of matrices $\mathsf {M}$ such that $\mathsf {M} \vDash \varepsilon \thickapprox \delta ;$

  3. (iii) $\varepsilon $ and $\delta $ are logically equivalent;

  4. (iv) $\boldsymbol {Fm}(\vdash ) \vDash \varepsilon \thickapprox \delta $ .

In the above result, (i) $\Rightarrow $ (ii) follows from Corollary 2.5, (ii) $\Rightarrow $ (iii) is obvious, (iii) $\Rightarrow $ (iv) is a consequence of the definition of $\equiv _{\vdash }$ , and (iv) $\Rightarrow $ (i) follows from Lemma 2.6.

3 Algebraic semantics

A logic is said to have an algebraic semantics when it admits an equational completeness theorem. This concept originated in Blok and Pigozzi’s monograph on algebraizable logics [Reference Blok and Pigozzi4, Section 2] and was further investigated by Blok and Rebagliato [Reference Blok and Rebagliato7] and Raftery [Reference Raftery38]. In order to review it, let $\mathsf {K}$ be a class of similar algebras.

The equational consequence relative to $\mathsf {K}$ is the consequence relation $\vDash _{\mathsf {K}}$ on the set of equations $Eq$ in variables $\textit {Var}$ defined for every $\varTheta \cup \{ \varepsilon \thickapprox \delta \} \subseteq Eq$ as follows:

$$ \begin{align*} \varTheta \vDash_{\mathsf{K}} \varepsilon \thickapprox \delta \Longleftrightarrow&\text{ for every}\ \boldsymbol{A} \in \mathsf{K}\ \text{and homomorphism }h \colon \boldsymbol{Fm} \to \boldsymbol{A}, \\ &\text{ if }h(\varphi) = h(\psi)\quad \text{for all }\varphi \thickapprox \psi \in \varTheta\text{, then }h(\varepsilon) = h(\delta). \end{align*} $$

For every set of equations $\boldsymbol {\tau }(x)$ and set of formulas $\varGamma \cup \{\varphi \}$ , we shall abbreviate

$$ \begin{align*} \{ \varepsilon(\varphi) \thickapprox \delta(\varphi) \colon \varepsilon \thickapprox \delta \in \boldsymbol{\tau} \} \text{ as }\boldsymbol{\tau}(\varphi), \text{ and } \bigcup_{\gamma \in \varGamma} \boldsymbol{\tau}(\gamma) \text{ as }\boldsymbol{\tau}[\varGamma]. \end{align*} $$
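
For instance, for the set of equations $\boldsymbol {\tau }(x) = \{ \lnot \lnot x \thickapprox 1 \}$ underlying the nonstandard completeness theorem for ${\textbf { CPC}}$ described in the introduction, these abbreviations amount to

$$ \begin{align*} \boldsymbol{\tau}(\varphi) = \{ \lnot\lnot\varphi \thickapprox 1 \} \quad \text{and} \quad \boldsymbol{\tau}[\varGamma] = \{ \lnot\lnot\gamma \thickapprox 1 \colon \gamma \in \varGamma \}. \end{align*} $$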

Definition 3.1. A logic $\vdash $ is said to have an algebraic semantics if there are a set of equations $\boldsymbol {\tau }(x)$ and a class $\mathsf {K}$ of $\mathscr {L}_{\vdash }$ -algebras such that for all $\varGamma \cup \{ \varphi \} \subseteq Fm$ ,

$$ \begin{align*} \varGamma \vdash \varphi \Longleftrightarrow \boldsymbol{\tau}[\varGamma] \vDash_{\mathsf{K}} \boldsymbol{\tau}(\varphi). \end{align*} $$

In this case $\mathsf {K}$ is said to be a $\boldsymbol {\tau }$ -algebraic semantics (or simply an algebraic semantics) for $\vdash $ . An algebraic semantics $\mathsf {K}$ for a logic $\vdash $ is called standard when $\mathsf {K} \subseteq \mathsf {Alg}(\vdash )$ . It is said to be nonstandard otherwise.

The completeness theorem of classical propositional logic ${\textbf { CPC}}$ with respect to Boolean algebras states that these form a $\boldsymbol {\tau }$ -algebraic semantics for ${\textbf { CPC}}$ where $\boldsymbol {\tau } = \{ x \thickapprox 1 \}$ . A logic, however, can have nonstandard algebraic semantics. For instance, Glivenko’s Theorem [Reference Glivenko24] implies that Heyting algebras form a $\boldsymbol {\tau }$ -algebraic semantics for ${\textbf { CPC}}$ where $\boldsymbol {\tau } = \{ \lnot \lnot x \thickapprox 1 \}$ , as noticed in the introduction.

Given a set of equations $\boldsymbol {\tau }(x)$ and an algebra $\boldsymbol {A}$ , we set

$$ \begin{align*} \boldsymbol{\tau}(\boldsymbol{A}) := \{ a \in A \colon \boldsymbol{A} \vDash \boldsymbol{\tau}(a) \}. \end{align*} $$
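
For instance, if $\boldsymbol {\tau } = \{ x \thickapprox 1 \}$ and $\boldsymbol {A}$ is a Boolean algebra, then $\boldsymbol {\tau }(\boldsymbol {A}) = \{ 1 \}$ . Similarly, if $\boldsymbol {\tau } = \{ \lnot \lnot x \thickapprox 1 \}$ and $\boldsymbol {A}$ is a Heyting algebra, then

$$ \begin{align*} \boldsymbol{\tau}(\boldsymbol{A}) = \{ a \in A \colon \lnot\lnot a = 1 \} = \{ a \in A \colon \lnot a = 0 \}, \end{align*} $$

that is, the set of dense elements of $\boldsymbol {A}$ .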

The following observations are immediate consequences of the definition of an algebraic semantics.

Proposition 3.2 [Reference Blok and Pigozzi4, Theorem 2.4]

A class of algebras $\mathsf {K}$ is a $\boldsymbol {\tau }$ -algebraic semantics for a logic $\vdash $ if and only if $\{ \langle \boldsymbol {A}, \boldsymbol {\tau }(\boldsymbol {A}) \rangle \colon \boldsymbol {A} \in \mathsf {K} \}$ is a matrix semantics for $\vdash $ .

Proposition 3.3 [Reference Blok and Rebagliato7, Theorem 2.16]

If $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics, then

$$ \begin{align*} x, \varphi(\varepsilon, \vec{z}) \mathrel{\dashv\mkern1.5mu\vdash} \varphi(\delta, \vec{z}), x \end{align*} $$

for all $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ and $\varphi (v, \vec {z}) \in Fm$ .
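
For instance, since the variety of Heyting algebras is a $\{ \lnot \lnot x \thickapprox 1 \}$ -algebraic semantics for ${\textbf { CPC}}$ , Proposition 3.3 yields that, in ${\textbf { CPC}}$ ,

$$ \begin{align*} x, \varphi(\lnot\lnot x, \vec{z}) \mathrel{\dashv\mkern1.5mu\vdash} \varphi(1, \vec{z}), x, \quad \text{for every }\varphi(v, \vec{z}) \in Fm. \end{align*} $$

This instance can also be verified directly, since in the presence of the premise $x$ the formulas $\lnot \lnot x$ and $1$ are interchangeable in ${\textbf { CPC}}$ .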

In the remaining part of this section, we shall rephrase in purely algebraic parlance the problem of determining whether a logic has a $\boldsymbol {\tau }$ -algebraic semantics. To this end, given an algebra $\boldsymbol {A}$ and $X \subseteq A^{2}$ , we denote by $\textrm {Cg}^{\boldsymbol {A}}(X)$ the congruence of $\boldsymbol {A}$ generated by X.

Definition 3.4. For every $\varGamma \subseteq Fm$ and set of equations $\boldsymbol {\tau }(x)$ , define

$$ \begin{align*} \theta(\varGamma, \boldsymbol{\tau}) := \textrm{Cg}^{\boldsymbol{Fm}}(\{ \langle \varepsilon(\gamma), \delta(\gamma)\rangle \colon \varepsilon \thickapprox \delta \in \boldsymbol{\tau} \text{ and }\gamma \in \varGamma \}). \end{align*} $$

In the statement of the next result we identify equations with pairs of formulas, whence the notation $\varepsilon \thickapprox \delta $ stands for $\langle \varepsilon , \delta \rangle $ . Notice that, under this convention, expressions of the form $\boldsymbol {\tau }(\varphi ) \subseteq \theta (\varGamma , \boldsymbol {\tau })$ make sense.

Proposition 3.5. A logic $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics if and only if $\varGamma \vdash \varphi $ for all $\varGamma \cup \{ \varphi \} \subseteq Fm$ such that $\boldsymbol {\tau }(\varphi ) \subseteq \theta (\varGamma , \boldsymbol {\tau })$ .

Proof. We begin by proving the “if” part. Define

$$ \begin{align*} \mathsf{K} := \{ \boldsymbol{Fm}/ \theta(\varGamma, \boldsymbol{\tau}) \colon \varGamma \subseteq Fm \text{ and }\textrm{Cn}_{\vdash}(\varGamma) = \varGamma \}. \end{align*} $$

By assumption, for every $\varphi \in Fm$ and $\varGamma \subseteq Fm$ such that $\textrm {Cn}_{\vdash }(\varGamma ) = \varGamma $ ,

$$ \begin{align*} \varphi / \theta(\varGamma, \boldsymbol{\tau}) \in \boldsymbol{\tau}(\boldsymbol{Fm}/ \theta(\varGamma, \boldsymbol{\tau})) \Longleftrightarrow \boldsymbol{\tau}(\varphi) \subseteq \theta(\varGamma, \boldsymbol{\tau}) \Longleftrightarrow \varphi \in \varGamma. \end{align*} $$

Consequently, for every $\varGamma \subseteq Fm$ such that $\textrm {Cn}_{\vdash }(\varGamma ) = \varGamma $ ,

$$ \begin{align*} \langle \boldsymbol{Fm}, \varGamma \rangle \in {\mathbb{H}_{\textrm{s}}^{-1}}(\langle \boldsymbol{Fm} / \theta(\varGamma, \boldsymbol{\tau}), \boldsymbol{\tau}(\boldsymbol{Fm}/ \theta(\varGamma, \boldsymbol{\tau})) \rangle). \end{align*} $$

Hence, by Lemma 2.3, the following classes of matrices induce the same logic:

$$ \begin{align*} \mathsf{M}_{1} &:= \{ \langle \boldsymbol{Fm}, \varGamma \rangle \colon \varGamma \subseteq Fm \text{ and }\textrm{Cn}_{\vdash}(\varGamma) = \varGamma \},\\ \mathsf{M}_{2} & := \{ \langle \boldsymbol{A}, \boldsymbol{\tau}(\boldsymbol{A}) \rangle \colon \boldsymbol{A} \in \mathsf{K} \}. \end{align*} $$

As $\mathsf {M}_{1}$ is a matrix semantics for $\vdash $ , so is $\mathsf {M}_{2}$ . By Proposition 3.2 we conclude that $\mathsf {K}$ is a $\boldsymbol {\tau }$ -algebraic semantics for $\vdash $ .

To prove the “only if” part, let $\mathsf {K}$ be a $\boldsymbol {\tau }$ -algebraic semantics for $\vdash $ . Then consider $\varGamma \cup \{ \varphi \} \subseteq Fm$ such that $\boldsymbol {\tau }(\varphi ) \subseteq \theta (\varGamma , \boldsymbol {\tau })$ . In order to prove that $\varGamma \vdash \varphi $ , it suffices to show that for every $\boldsymbol {A} \in \mathsf {K}$ and homomorphism $h \colon \boldsymbol {Fm} \to \boldsymbol {A}$ , if $h[\varGamma ] \subseteq \boldsymbol {\tau }(\boldsymbol {A})$ , then $h(\varphi ) \in \boldsymbol {\tau }(\boldsymbol {A})$ . To this end, consider $\boldsymbol {A} \in \mathsf {K}$ and a homomorphism $h \colon \boldsymbol {Fm} \to \boldsymbol {A}$ such that $h[\varGamma ] \subseteq \boldsymbol {\tau }(\boldsymbol {A})$ . Observe that the kernel $\textrm {Ker}(h)$ of h contains the generators of $\theta (\varGamma , \boldsymbol {\tau })$ , whence

$$ \begin{align*} \boldsymbol{\tau}(\varphi) \subseteq \theta(\varGamma, \boldsymbol{\tau}) \subseteq \textrm{Ker}(h). \end{align*} $$

But this amounts to $h(\varphi ) \in \boldsymbol {\tau }(\boldsymbol {A})$ . Thus, we conclude that $\varGamma \vdash \varphi $ , as desired.⊣

Corollary 3.6 [Reference Blok and Rebagliato7, Theorem 2.15]

The property of having an algebraic semantics persists in extensions of a logic.

4 Tame examples: assertional logics

The notion of an assertional logic was introduced by Pigozzi in [Reference Pigozzi37] and further investigated by Blok and Raftery among others [Reference Blok and Raftery6]. For the present purpose, the interest of assertional logics comes from the fact that they are typical examples of logics with an algebraic semantics. To review these facts, it is convenient to recall some basic definitions:

Definition 4.1. A class of matrices $\mathsf {M}$ is said to be unital if F is either empty or a singleton, for every $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}$ .

Proposition 4.2. A logic $\vdash $ has a unital matrix semantics if and only if for every $\varphi (v, \vec {z}) \in Fm$ ,

$$ \begin{align*} x, y, \varphi(x, \vec{z}) \vdash \varphi(y, \vec{z}). \end{align*} $$

Proof. This is attributed to Suszko in [Reference Rautenberg40, p. 67]. For a detailed argument, however, the reader may consult the proof of [Reference Albuquerque, Font, Jansana and Moraschini1, Theorem 10].⊣
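
For instance, ${\textbf { CPC}}$ satisfies the condition of Proposition 4.2: as $x, y \vdash _{{\textbf { CPC}}} x \leftrightarrow y$ and ${\textbf { CPC}}$ admits replacement of equivalents, we obtain

$$ \begin{align*} x, y, \varphi(x, \vec{z}) \vdash_{{\textbf{ CPC}}} \varphi(y, \vec{z}),\quad \text{for every }\varphi(v, \vec{z}) \in Fm. \end{align*} $$

Accordingly, ${\textbf { CPC}}$ has a unital matrix semantics, for instance the class of matrices of the form $\langle \boldsymbol {A}, \{ 1 \} \rangle $ with $\boldsymbol {A}$ a Boolean algebra.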

Let $\boldsymbol {A}$ be an algebra and $\varphi (x)$ a formula. If the term-function $\varphi ^{\boldsymbol {A}} \colon A \to A$ is a constant function, we denote by $\varphi ^{\boldsymbol {A}}$ both the term-function and its unique value.

Definition 4.3. A logic $\vdash $ is said to be assertional if it is induced by a class of matrices $\mathsf {M}$ for which there is a formula $\varphi (x)$ such that $\varphi ^{\boldsymbol {A}} \colon A \to A$ is a constant function and $F = \{ \varphi ^{\boldsymbol {A}} \}$ , for all $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}$ .

Observe that, in this case, $\mathsf {M}$ is unital and $\varphi $ a theorem of $\vdash $ . Moreover, the class $\mathsf {K}$ of algebraic reducts of matrices in $\mathsf {M}$ is a $\boldsymbol {\tau }$ -algebraic semantics for $\vdash $ where $\boldsymbol {\tau } = \{ x \thickapprox \varphi (x) \}$ . Consequently, we obtain the following:

Proposition 4.4. Assertional logics have an algebraic semantics.

Example 4.5 (Intermediate logics)

A Heyting algebra is a structure $\boldsymbol {A} = \langle A; \land , \lor , \to , 0, 1 \rangle $ comprising a bounded lattice $\langle A; \land , \lor , 0, 1 \rangle $ such that for every $a, b, c \in A$ ,

$$ \begin{align*} a \land b \leqslant c \Longleftrightarrow a \leqslant b \to c, \end{align*} $$

where $x \leqslant y$ is a shorthand for $x = x \land y$ . The assertional logic $\vdash _{\mathsf {K}}$ , associated with a variety $\mathsf {K}$ of Heyting algebras, is defined by the rule

(2) $$ \begin{align} \varGamma \vdash_{\mathsf{K}} \varphi \Longleftrightarrow \boldsymbol{\tau}[\varGamma] \vDash_{\mathsf{K}} \boldsymbol{\tau}(\varphi), \end{align} $$

where $\boldsymbol {\tau } = \{ x \thickapprox 1 \}$ . Logics arising in this way are called intermediate or superintuitionistic [Reference Chagrov and Zakharyaschev10]. Notice that if $\mathsf {K}$ is the variety of Boolean algebras (resp. of all Heyting algebras), then $\vdash _{\mathsf {K}}$ is classical (resp. intuitionistic) propositional logic.⊣

Example 4.6 (Global modal logics)

A modal algebra is a structure $\boldsymbol {A} = \langle A; \land , \lor , \lnot , \square , 0, 1 \rangle $ comprising a Boolean algebra $\langle A; \land , \lor , \lnot , 0, 1 \rangle $ and a unary operation $\square $ such that for every $a, c \in A$ ,

$$ \begin{align*} \square 1 = 1 \text{ and }\square(a \land c) = \square a \land \square c. \end{align*} $$

The assertional logic $\vdash _{\mathsf {K}}$ , associated with a variety $\mathsf {K}$ of modal algebras, is defined by the rule (2). Logics arising in this way have been called global modal logics [Reference Kracht32].

Notice that assertionality persists in fragments of intermediate and global modal logics in which $1$ is term-definable.⊣

As we mentioned, assertional logics necessarily have theorems. The next definition extends this concept beyond logics with theorems.

Definition 4.7. A logic lacking theorems is said to be almost assertional if it has a matrix semantics $\mathsf {M}$ for which there is a formula $\varphi (x)$ such that $\varphi ^{\boldsymbol {A}} \colon A \to A$ is a constant function and $F = \{ \varphi ^{\boldsymbol {A}} \}$ , for all $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}$ with $F \ne \emptyset $ .

In general, almost assertional logics need not have an algebraic semantics. In order to explain when this is the case, we rely on the following (see [Reference Font20, Lemma 4.3], if necessary):

Lemma 4.8. Let $\vdash $ be a logic lacking theorems. If $\mathsf {M}$ is a matrix semantics for $\vdash $ , then so is

$$ \begin{align*} \mathsf{N} := \{ \langle \boldsymbol{A}, F \rangle \in \mathsf{M} \colon F \ne \emptyset \} \cup \{ \langle \boldsymbol{Fm}, \emptyset \rangle \}. \end{align*} $$

Almost assertional logics with an algebraic semantics can be characterized as follows:

Proposition 4.9. An almost assertional logic $\vdash $ has an algebraic semantics if and only if there exists a set of equations $\boldsymbol {\tau }(x)$ for which the following conditions hold:

  1. (i) There is no substitution $\sigma $ such that $\sigma (\varepsilon ) = \sigma (\delta )$ for all $\varepsilon \thickapprox \delta \in \boldsymbol {\tau };$

  2. (ii) For every $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ and $\varphi (v, \vec {z}) \in Fm$ ,

    $$ \begin{align*} x, \varphi(\varepsilon, \vec{z}) \mathrel{\dashv\mkern1.5mu\vdash} \varphi(\delta, \vec{z}), x. \end{align*} $$

Proof. Let $\vdash $ be an almost assertional logic. Then $\vdash $ has a matrix semantics $\mathsf {M}$ for which there is a formula $\varphi (x)$ such that $\varphi ^{\boldsymbol {A}} \colon A \to A$ is a constant function and $F = \{ \varphi ^{\boldsymbol {A}} \}$ , for all $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}$ with $F \ne \emptyset $ . Since $\mathsf {M}$ is a matrix semantics for $\vdash $ , we get

(3) $$ \begin{align} y \vdash \varphi(x). \end{align} $$

To prove the “only if” part, suppose that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics $\mathsf {K}$ . As $\vdash $ lacks theorems, there are $\boldsymbol {A} \in \mathsf {K}$ and $a \in A$ such that $\varphi ^{\boldsymbol {A}}(a) \notin \boldsymbol {\tau }(\boldsymbol {A})$ . Together with (3), this implies $\boldsymbol {\tau }(\boldsymbol {A}) = \emptyset $ . Consequently, $\boldsymbol {\tau }$ satisfies condition (i). Moreover, $\boldsymbol {\tau }$ satisfies condition (ii) by Proposition 3.3.

Then we turn to prove the “if” part. As $\mathsf {M}$ is a matrix semantics for $\vdash $ , the same holds for $\mathsf {M}^{\ast }$ by Corollary 2.4. Furthermore, for every $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}^{\ast }$ ,

(4) $$ \begin{align} \text{if }a \in F \text{, then }a \in \boldsymbol{\tau}(\boldsymbol{A}). \end{align} $$

To prove this, consider $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}^{\ast }$ , $a \in F$ , and $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ . From condition (ii) it follows that for every formula $\varphi (v,\vec {z})$ and $\vec {c} \in A$ ,

$$ \begin{align*} \varphi^{\boldsymbol{A}}(\varepsilon^{\boldsymbol{A}}(a), \vec{c}) \in F \Longleftrightarrow \varphi^{\boldsymbol{A}}(\delta^{\boldsymbol{A}}(a), \vec{c}) \in F. \end{align*} $$

This means that for every unary polynomial function p of $\boldsymbol {A}$ ,

$$ \begin{align*} p(\varepsilon^{\boldsymbol{A}}(a)) \in F \Longleftrightarrow p(\delta^{\boldsymbol{A}}(a)) \in F. \end{align*} $$

By Proposition 2.2, we conclude that $\langle \varepsilon ^{\boldsymbol {A}}(a), \delta ^{\boldsymbol {A}}(a) \rangle \in \boldsymbol {\varOmega }^{\boldsymbol {A}}F$ . Since $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}^{\ast }$ , the matrix $\langle \boldsymbol {A}, F \rangle $ is reduced, i.e., $\boldsymbol {\varOmega }^{\boldsymbol {A}}F$ is the identity relation on A. Consequently, $\varepsilon ^{\boldsymbol {A}}(a) = \delta ^{\boldsymbol {A}}(a)$ , establishing (4).

Then set

$$ \begin{align*} \boldsymbol{\rho} := \boldsymbol{\tau} \cup \{ x \thickapprox \varphi(x) \}, \end{align*} $$

and observe that for every $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}^{\ast }$ ,

(5) $$ \begin{align} \text{if }F \ne \emptyset, \text{ then }\boldsymbol{\rho}(\boldsymbol{A}) = F. \end{align} $$

To prove this, consider $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}^{\ast }$ such that $F \ne \emptyset $ . Then $\langle \boldsymbol {A}, F \rangle = \langle \boldsymbol {B} / \boldsymbol {\varOmega }^{\boldsymbol {B}}G, G / \boldsymbol {\varOmega }^{\boldsymbol {B}}G\rangle $ for some $\langle \boldsymbol {B}, G \rangle \in \mathsf {M}$ such that $G \ne \emptyset $ . By assumption, $\varphi ^{\boldsymbol {B}} \colon B \to B$ is a constant function and $G = \{ \varphi ^{\boldsymbol {B}} \}$ . This easily implies

$$ \begin{align*} F = \{ a \in A \colon a = \varphi^{\boldsymbol{A}}(a) \}. \end{align*} $$

Together with (4), the above display implies (5).

Recall that $\mathsf {M}^{\ast }$ is a matrix semantics for $\vdash $ and that $\vdash $ lacks theorems. Then by Lemma 4.8 the following is also a matrix semantics for $\vdash $ :

$$ \begin{align*} \mathsf{N} := \{ \langle \boldsymbol{A}, F \rangle \in \mathsf{M}^{\ast} \colon F \ne \emptyset \} \cup \{ \langle \boldsymbol{Fm}, \emptyset \rangle \}. \end{align*} $$

Moreover,

(6) $$ \begin{align} \text{if }\langle \boldsymbol{A}, F \rangle \in \mathsf{N}\text{, then }F = \boldsymbol{\rho}(\boldsymbol{A}). \end{align} $$

To prove this, consider $\langle \boldsymbol {A}, F \rangle \in \mathsf {N}$ . If $F \ne \emptyset $ , then $F = \boldsymbol {\rho }(\boldsymbol {A})$ by (5). Then we consider the case where $F = \emptyset $ . By definition of $\mathsf {N}$ , in this case $\langle \boldsymbol {A}, F \rangle = \langle \boldsymbol {Fm}, \emptyset \rangle $ . By condition (i) we get $\boldsymbol {\rho }(\boldsymbol {Fm}) \subseteq \boldsymbol {\tau }(\boldsymbol {Fm}) = \emptyset $ , whence $\boldsymbol {\rho }(\boldsymbol {Fm}) = \emptyset $ as desired. This establishes (6).

As a consequence,

$$ \begin{align*} \mathsf{N} = \{ \langle \boldsymbol{A}, \boldsymbol{\rho}(\boldsymbol{A}) \rangle \colon \boldsymbol{A} \in \mathsf{K} \}, \end{align*} $$

where $\mathsf {K}$ is the class of algebraic reducts of matrices in $\mathsf {N}$ . Since $\mathsf {N}$ is a matrix semantics for $\vdash $ , we can apply Proposition 3.2 obtaining that $\mathsf {K}$ is a $\boldsymbol {\rho }$ -algebraic semantics for $\vdash $ .⊣

The following result will be used later on.

Proposition 4.10. A logic $\vdash $ is either assertional or almost assertional if and only if there is a formula $\psi (x)$ such that $y \vdash \psi (x)$ and for every $\varphi (v, \vec {z}) \in Fm$ ,

$$ \begin{align*} x, y, \varphi(x, \vec{z}) \vdash \varphi(y, \vec{z}). \end{align*} $$

Proof. Observe that a logic $\vdash $ is either assertional or almost assertional if and only if it has a matrix semantics $\mathsf {M}$ for which there is a formula $\psi (x)$ such that $\psi ^{\boldsymbol {A}} \colon A \to A$ is a constant function and $F = \{ \psi ^{\boldsymbol {A}} \}$ , for all $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}$ with $F \ne \emptyset $ .

To prove the “only if” part, consider a logic $\vdash $ that is either assertional or almost assertional. By the above remarks, $\vdash $ has a unital matrix semantics $\mathsf {M}$ and a formula $\psi (x)$ such that $y \vdash \psi (x)$ . Hence, with an application of Proposition 4.2, we obtain $x, y, \varphi (x, \vec {z}) \vdash \varphi (y, \vec {z})$ for every $\varphi (v, \vec {z}) \in Fm$ , as desired.

Then we turn to prove the “if” part. Consider a logic $\vdash $ for which there is a formula $\psi (x)$ such that $y \vdash \psi (x)$ and $x, y, \varphi (x, \vec {z}) \vdash \varphi (y, \vec {z})$ for every $\varphi (v, \vec {z}) \in Fm$ . By Proposition 4.2, $\vdash $ has a unital matrix semantics $\mathsf {M}$ . The fact that $\mathsf {M}$ is unital, together with $y \vdash \psi (x)$ , implies that $\psi ^{\boldsymbol {A}} \colon A \to A$ is a constant function and $F = \{ \psi ^{\boldsymbol {A}} \}$ , for all $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}$ with $F \ne \emptyset $ . Thus, by the remark at the beginning of the proof, $\vdash $ is either assertional or almost assertional.⊣

5 Getting rid of limit cases

While characterizing logics with an algebraic semantics, it is convenient to treat separately some limit cases. These do not present special difficulties and will be examined briefly in this section.

Definition 5.1. Let $\vdash $ be a logic.

  1. 1. $\vdash $ is said to be inconsistent if $\varGamma \vdash \varphi $ for all $\varGamma \cup \{ \varphi \} \subseteq Fm$ .

  2. 2. $\vdash $ is said to be almost inconsistent if it lacks theorems and $\varGamma \vdash \varphi $ for all $\varGamma \cup \{ \varphi \} \subseteq Fm$ such that $\varGamma \ne \emptyset $ .

  3. 3. $\vdash $ is said to be trivial if it is either inconsistent or almost inconsistent.

Notice that a logic $\vdash $ is trivial precisely when $x \vdash y$ .

Proposition 5.2. The following conditions hold for a logic $\vdash $ .

  1. (i) If $\vdash $ is inconsistent, then it has an algebraic semantics.

  2. (ii) If $\vdash $ is almost inconsistent, then it has an algebraic semantics if and only if $\mathscr {L}_{\vdash }$ comprises either two distinct constants or a non-constant connective.

Proof. (i): Observe that if $\vdash $ is inconsistent, then any class of algebras $\mathsf {K}$ is a $\boldsymbol {\tau }$ -algebraic semantics for $\vdash $ where $\boldsymbol {\tau } = \emptyset $ .

(ii): Let $\vdash $ be almost inconsistent. To prove the “only if” part, suppose, with a view to contradiction, that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics $\mathsf {K}$ , but that $\mathscr {L}_{\vdash }$ is either empty or comprises only a constant symbol $1$ . Due to the poor language in which $\vdash $ is formulated,

$$ \begin{align*} \boldsymbol{\tau} \subseteq \{ x \thickapprox x, x \thickapprox 1, 1 \thickapprox x, 1 \thickapprox 1 \}. \end{align*} $$

We can assume without loss of generality that $\boldsymbol {\tau }$ does not contain any trivially valid equation, whence $\boldsymbol {\tau } \subseteq \{ x \thickapprox 1, 1 \thickapprox x \}$ . Furthermore, by symmetry we can assume that $\boldsymbol {\tau } \subseteq \{ x \thickapprox 1 \}$ . Thus, either $\boldsymbol {\tau } = \emptyset $ or $\boldsymbol {\tau } = \{ x \thickapprox 1 \}$ . In both cases, the fact that $\mathsf {K}$ is a $\boldsymbol {\tau }$ -algebraic semantics for $\vdash $ implies $\emptyset \vdash 1$ , contradicting the fact that $\vdash $ lacks theorems.

To prove the “if” part, suppose first that $\mathscr {L}_{\vdash }$ comprises a non-constant connective $f(x_{1}, \dots , x_{n})$ . Then define

$$ \begin{align*} \boldsymbol{\tau} := \{ f(x, \dots, x) \thickapprox f(f(x, \dots, x), \dots, f(x, \dots, x)) \}. \end{align*} $$

Clearly, $\boldsymbol {\tau }(\boldsymbol {Fm}) = \emptyset $ . As $\vdash $ is almost inconsistent, this implies that $\mathsf {K} := \{ \boldsymbol {Fm} \}$ is a $\boldsymbol {\tau }$ -algebraic semantics for $\vdash $ .

The case where $\mathscr {L}_{\vdash }$ comprises two constant symbols $1$ and $0$ is treated similarly, taking $\boldsymbol {\tau } := \{ 1 \thickapprox 0 \}$ .⊣

6 A sufficient condition for algebraic semantics

Definition 6.1. An algebraic language $\mathscr {L}$ is said to be graph-based if the arities of its connectives are $\leqslant 1$ and, moreover, $\mathscr {L}$ comprises at most one unary connective.Footnote 3 Similarly, a logic $\vdash $ is said to be graph-based when $\mathscr {L}_{\vdash }$ is.

Given a formula $\varphi $ , we denote by $\textit {Var}(\varphi )$ the set of variables really occurring in $\varphi $ . The aim of this section is to establish the following result:

Theorem 6.2. Let $\vdash $ be a logic that is not graph-based. If there are two distinct logically equivalent formulas $\varphi $ and $\psi $ such that $\textit {Var}(\varphi ) \cup \textit {Var}(\psi ) = \{ x \}$ , then $\vdash $ has an algebraic semantics.

One of the consequences of the above result is that every logic is “essentially” equivalent to one with an algebraic semantics. In order to make this observation mathematically precise, we shall recall some basic concepts.

Given two algebraic languages $\mathscr {L}$ and $\mathscr {L}'$ , a translation $\boldsymbol {\tau }$ of $\mathscr {L}$ into $\mathscr {L}'$ is a map that associates a (possibly complex) formula $f_{\boldsymbol {\tau }}(x_{1}, \dots , x_{n})$ of $\mathscr {L}'$ with every n-ary connective f of $\mathscr {L}$ . Furthermore, given an algebraic language $\mathscr {L}$ , let $\mathscr {L}^{\dagger }$ be the algebraic language obtained by replacing every constant symbol $\textbf {c}$ of $\mathscr {L}$ with a new unary connective $\textbf {c}(x)$ . Every $\mathscr {L}$ -algebra $\boldsymbol {A}$ can then be transformed into an $\mathscr {L}^{\dagger }$ -algebra $\boldsymbol {A}^{\dagger }$ by interpreting each new operation $\textbf {c}(x)$ as the constant function with value $\textbf {c}^{\boldsymbol {A}}$ .

Lastly, two logics $\vdash $ and $\vdash '$ are said to be term-equivalent if there are matrix semantics $\mathsf {M}$ and $\mathsf {M}'$ for $\vdash $ and $\vdash '$ , respectively, and translations $\boldsymbol {\tau }$ of $\mathscr {L}_{\vdash }^{\dagger }$ into $\mathscr {L}_{\vdash '}^{\dagger }$ and $\boldsymbol {\rho }$ of $\mathscr {L}_{\vdash '}^{\dagger }$ into $\mathscr {L}_{\vdash }^{\dagger }$ such that

$$ \begin{align*} \langle \boldsymbol{A}^{\dagger}, F \rangle =\langle \boldsymbol{A}^{\dagger\boldsymbol{\tau}\boldsymbol{\rho}}, F \rangle \; \text{ and } \; \langle \boldsymbol{B}^{\dagger}, G \rangle = \langle \boldsymbol{B}^{\dagger\boldsymbol{\rho}\boldsymbol{\tau}}, G \rangle \end{align*} $$

for every $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}'$ and $\langle \boldsymbol {B}, G \rangle \in \mathsf {M}$ .Footnote 4

Corollary 6.3. Every logic is term-equivalent to one with an algebraic semantics.

Proof. Consider an arbitrary logic $\vdash $ . Furthermore, let $\mathsf {M}$ be a matrix semantics for $\vdash $ (it exists by Theorem 2.1). For every $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}$ , let also $\boldsymbol {A}^{+}$ be the expansion of $\boldsymbol {A}$ with a new binary operation $+ \colon A^{2} \to A$ that is interpreted as the projection of the first coordinate, i.e.,

$$ \begin{align*} a + c := a,\quad \text{for all }a, c \in A. \end{align*} $$

Then let $\vdash ^{+}$ be the logic induced by the class of matrices

$$ \begin{align*} \mathsf{M}^{+} := \{ \langle \boldsymbol{A}^{+}, F \rangle : \langle \boldsymbol{A}, F \rangle \in \mathsf{M} \}. \end{align*} $$

From the definition of $\mathsf {M}^{+}$ it follows that $\vdash $ and $\vdash ^{+}$ are term-equivalent.

Now, from the definition of $\vdash ^{+}$ it follows that the formulas x and $x + x$ are logically equivalent in $\vdash ^{+}$ . Since $\vdash ^{+}$ is not graph-based, we can apply Theorem 6.2 obtaining that it has an algebraic semantics.⊣

The above result implies that “having an algebraic semantics” is not a property of clones or free algebras, thus confirming the fragility of the notion.

The remaining part of the section is devoted to proving Theorem 6.2. We rely on the description of congruence generation given by Maltsev’s Lemma [Reference Bergman2, Theorem 4.17], which we proceed to recall:

Theorem 6.4. Let $\boldsymbol {A}$ be an algebra, $X \subseteq A \times A$ , and $a, c \in A$ . Then $\langle a, c \rangle \in \textrm {Cg}^{\boldsymbol {A}}(X)$ if and only if there are $e_{0}, \dots , e_{n} \in A$ , $\langle b_{0}, d_{0}\rangle , \dots , \langle b_{n-1}, d_{n-1}\rangle \in X$ , and unary polynomial functions $p_{0}, \dots , p_{n-1}$ of $\boldsymbol {A}$ such that

$$ \begin{align*} a = e_{0}, c = e_{n} \text{, and }\{ e_{i}, e_{i+1} \} = \{ p_{i}(b_{i}), p_{i}(d_{i}) \},\quad \text{for every }i < n. \end{align*} $$

Lastly, given a formula $\gamma $ , the subformula tree $T(\gamma )$ of $\gamma $ is defined by induction on the construction of $\gamma $ as follows. If $\gamma $ is a variable or a constant, then $T(\gamma )$ is the trivial tree whose unique element is labeled by $\gamma $ . Moreover, if $\gamma = f(\delta _{1}, \dots , \delta _{n})$ for some n-ary connective f and formulas $\delta _{1}, \dots , \delta _{n}$ , then $T(\gamma )$ is obtained by adding a common bottom element labeled by f to the disjoint union of $T(\delta _{1}), \dots , T(\delta _{n})$ .

Convention 6.5. Given a formula $\gamma $ , we denote by $T(\gamma )^{-x}$ the tree obtained by removing from $T(\gamma )$ the leaves labeled by the variable x.
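
For instance, if $\gamma = \square \Diamond x$ , then $T(\gamma )$ is the three-element chain whose bottom element is labeled by $\square $ , whose middle element is labeled by $\Diamond $ , and whose unique leaf is labeled by $x$ , while $T(\gamma )^{-x}$ is the two-element chain whose elements are labeled by $\square $ and $\Diamond $ .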

We are now ready to prove the main result of this section:

Proof of Theorem 6.2. Since $\vdash $ is not graph-based, either $\mathscr {L}_{\vdash }$ comprises two distinct unary connectives $\square $ and $\Diamond $ or it comprises at least an n-ary connective with $n \geqslant 2$ . We shall detail the case where $\mathscr {L}_{\vdash }$ comprises $\square x$ and $\Diamond x$ , as the other one is analogous.

Suppose that $\vdash $ has two distinct logically equivalent formulas $\varphi $ and $\psi $ such that $\textit {Var}(\varphi ) \cup \textit {Var}(\psi ) = \{ x \}$ . By Lemma 2.7, the logic $\vdash $ has a matrix semantics $\mathsf {M}$ validating the equation $\varphi \thickapprox \psi $ . Notice that we can safely assume that

(7) $$ \begin{align} \textit{Var}(\varphi) = \textit{Var}(\psi) = \{ x \}. \end{align} $$

For suppose that (7) does not hold. Then by symmetry we can assume that $\textit {Var}(\varphi ) = \{ x \}$ and $\textit {Var}(\psi ) = \emptyset $ . Now, from $\mathsf {M} \vDash \varphi (x) \thickapprox \psi (x)$ it follows $\mathsf {M}\vDash \varphi (\varphi (x)) \thickapprox \psi (\varphi (x))$ . Since $\textit {Var}(\psi ) = \emptyset $ , the formula $\psi $ is closed, whence $\psi (\varphi (x)) = \psi (x)$ . Thus, $\mathsf {M}\vDash \varphi (\varphi (x)) \thickapprox \psi (x)$ . Together with $\mathsf {M} \vDash \varphi (x) \thickapprox \psi (x)$ , this implies

$$ \begin{align*} \mathsf{M} \vDash \varphi(x) \thickapprox \varphi(\varphi(x)). \end{align*} $$

We have two cases: either $\varphi = x$ or $\varphi \ne x$ . First suppose that $\varphi = x$ . As $\psi $ is a closed formula, $\mathsf {M} \vDash x \thickapprox \psi $ implies that the algebraic reducts of the matrices in $\mathsf {M}$ are trivial. Because $\mathsf {M}$ is a matrix semantics for $\vdash $ , this yields $x \vdash y$ which, in turn, means that $\vdash $ is trivial. As the language of $\vdash $ comprises a non-constant symbol, from Proposition 5.2 it follows that $\vdash $ has an algebraic semantics, thus concluding the proof of the theorem. Then it only remains to consider the case where $\varphi \ne x$ . In this case, $\varphi (\varphi (x)) \ne \varphi (x)$ . Thus we can replace $\psi $ by $\varphi (\varphi (x))$ , validating both the assumption of the theorem and (7). Accordingly, from now on we shall assume that (7) holds.

Let $k \in \omega $ be greater than the length of all branches in $T(\varphi )$ and $T(\psi )$ . As $\varphi = \varphi (x)$ and $\psi = \psi (x)$ , it makes sense to define

$$ \begin{align*} \varphi' := \square^{2k}\Diamond \varphi(\square^{k} \Diamond x) \text{ and }\psi':= \square^{2k}\Diamond \psi(\square^{k} \Diamond x). \end{align*} $$

Furthermore, since $\mathsf {M} \vDash \varphi \thickapprox \psi $ ,

(8) $$ \begin{align} \mathsf{M} \vDash \varphi' \thickapprox \psi'. \end{align} $$

Now, let m be the number of occurrences of x in $\varphi '(x)$ . Then, consider distinct variables $x_{1}, \dots , x_{m}$ and let $\hat {\varphi }'(x_{1}, \dots , x_{m})$ be the formula obtained by replacing the j-th occurrence of x in $\varphi '$ by $x_{j}$ for all $j \leqslant m$ . Clearly,

$$ \begin{align*} \varphi' = \hat{\varphi}'(x, \dots, x). \end{align*} $$

Claim 6.6. There are no $\alpha _{1}, \dots , \alpha _{m}, \beta \in Fm$ such that

$$ \begin{align*} \hat{\varphi}'(\alpha_{1}, \dots, \alpha_{m}) = \psi'(\beta). \end{align*} $$

Proof. Given a tree T and two nodes $w < u$ , we denote by $[w, u]$ the set of nodes of T in between $w$ and u (including $w$ and u). Bearing this in mind, suppose, with a view to contradiction, that $\hat {\varphi }'(\alpha _{1}, \dots , \alpha _{m}) = \psi '(\beta )$ for some $\alpha _{1}, \dots , \alpha _{m}, \beta \in Fm$ . Consequently,

(9) $$ \begin{align} T(\hat{\varphi}'(\alpha_{1}, \dots, \alpha_{m})) = T(\psi'(\beta)). \end{align} $$

Notice that, because of $\varphi ' = \square ^{2k}\Diamond \varphi (\square ^{k} \Diamond x)$ , the tree $T(\varphi )^{-x}$ can be identified with a unique downset of the tree obtained by removing the smallest $2k+1$ nodes from $T(\hat {\varphi }'(\alpha _{1}, \dots , \alpha _{m}))$ . Working under this identification, a node $w$ of $T(\hat {\varphi }'(\alpha _{1}, \dots , \alpha _{m}))$ belongs to $T(\varphi )^{-x}$ if and only if it satisfies the following conditions:

  1. (i) The downset of $w$ in $T(\hat {\varphi }'(\alpha _{1}, \dots , \alpha _{m}))$ has at least $2k+2$ and at most $3k$ elements;

  2. (ii) There are no nodes $u < v$ in $T(\hat {\varphi }'(\alpha _{1}, \dots , \alpha _{m}))$ such that $w \in [u, v]$ and $[u, v] = \{ b_{1}, \dots , b_{k}, d \}$ , where

    $$ \begin{align*} b_{1} < \dots < b_{k} < d \end{align*} $$
    and the nodes $b_{i}$ and d are labeled, respectively, with $\square $ and $\Diamond $ .

This observation is a consequence of the fact that k is greater than the length of branches in $T(\varphi )$ and $\varphi ' = \square ^{2k}\Diamond \varphi (\square ^{k}\Diamond x)$ .

A similar argument shows that $T(\psi )^{-x}$ can be identified with the subtree of $T(\psi '(\beta ))$ comprising the nodes $w$ of $T(\psi '(\beta ))$ satisfying conditions (i) and (ii). Together with (9), this implies $T(\varphi )^{-x} = T(\psi )^{-x}$ and, therefore, $\varphi = \psi $ . But this contradicts the assumption that the formulas $\varphi $ and $\psi $ are distinct.⊣

Our aim is to show that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics, where

$$ \begin{align*} \boldsymbol{\tau}(x) := \{ \varphi' \thickapprox \psi' \}. \end{align*} $$

Suppose the contrary, with a view to contradiction. By Proposition 3.5 there is $\varGamma \cup \{ \gamma \} \subseteq Fm$ such that $\boldsymbol {\tau }(\gamma ) \subseteq \theta (\varGamma , \boldsymbol {\tau })$ and $\varGamma \nvdash \gamma $ . We can assume without loss of generality $\varGamma = \textrm {Cn}_{\vdash }(\varGamma )$ .

Claim 6.7. The congruence $\theta (\varGamma , \boldsymbol {\tau })$ is compatible with $\varGamma $ .

Proof. Consider $\alpha , \beta \in Fm$ such that $\alpha \in \varGamma $ and $\langle \alpha , \beta \rangle \in \theta (\varGamma , \boldsymbol {\tau })$ . We need to prove that $\beta \in \varGamma $ . As $\varGamma = \textrm {Cn}_{\vdash }(\varGamma )$ , it suffices to show $\varGamma \vdash \beta $ . Bearing in mind that $\mathsf {M}$ is a matrix semantics for $\vdash $ , consider $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}$ and a homomorphism $h \colon \boldsymbol {Fm} \to \boldsymbol {A}$ such that $h[\varGamma ] \subseteq F$ . From (8) it follows that the generators of $\theta (\varGamma , \boldsymbol {\tau })$ belong to the kernel $\textrm {Ker}(h)$ . Consequently, $\langle \alpha , \beta \rangle \in \theta (\varGamma , \boldsymbol {\tau }) \subseteq \textrm {Ker}(h)$ and, therefore, $h(\beta ) = h(\alpha ) \in h[\varGamma ] \subseteq F$ .⊣

Since $\boldsymbol {\tau }(\gamma ) \subseteq \theta (\varGamma , \boldsymbol {\tau })$ , by Theorem 6.4 there are

$$ \begin{align*} \alpha_{0}, \dots, \alpha_{n} &\in Fm,\\ \beta_{0}, \dots, \beta_{n-1} &\in \varGamma,\\ \delta_{0}(x, \vec{z}), \dots, \delta_{n-1}(x, \vec{z}) &\in Fm, \end{align*} $$

such that $\varphi '(\gamma ) = \alpha _{0}$ , $\psi '(\gamma ) = \alpha _{n}$ , and

(10) $$ \begin{align} \{ \alpha_{i}, \alpha_{i+1} \} = \{ \delta_{i}(\varphi'(\beta_{i}), \vec{z}), \delta_{i}(\psi'(\beta_{i}), \vec{z}) \},\quad \text{for every }i < n. \end{align} $$

We shall prove by induction on i that for all $\alpha _{i}$ there are $\gamma _{1}^{i}, \dots , \gamma _{m}^{i} \in Fm$ such that

(11) $$ \begin{align} \alpha_{i} = \hat{\varphi}'(\gamma_{1}^{i}, \dots, \gamma_{m}^{i}) \text{ and }\gamma \equiv \gamma_{1}^{i} \equiv \cdots \equiv \gamma_{m}^{i} \quad\mod \theta(\varGamma, \boldsymbol{\tau}). \end{align} $$

For the base case, it suffices to take

$$ \begin{align*} \gamma = \gamma_{1}^{0} = \cdots = \gamma_{m}^{0}. \end{align*} $$

For the induction step, suppose that $\alpha _{i} = \hat {\varphi }'(\gamma _{1}^{i}, \dots , \gamma _{m}^{i})$ for some $\gamma _{1}^{i}, \dots , \gamma _{m}^{i} \in Fm$ such that $\gamma \equiv \gamma _{1}^{i}\equiv \cdots \equiv \gamma _{m}^{i} \quad\mod \theta (\varGamma , \boldsymbol {\tau })$ . Moreover, from (10) it follows $\{ \alpha _{i}, \alpha _{i+1} \} = \{ \delta _{i}(\varphi '(\beta _{i}), \vec {z}), \delta _{i}(\psi '(\beta _{i}), \vec {z}) \}$ . There are two cases:

  1. (i) either $\hat {\varphi }'(\gamma _{1}^{i}, \dots , \gamma _{m}^{i}) = \alpha _{i} = \delta _{i}(\varphi '(\beta _{i}), \vec {z})$ and $\alpha _{i+1} = \delta _{i}(\psi '(\beta _{i}), \vec {z})$ , or

  2. (ii) $\hat {\varphi }'(\gamma _{1}^{i}, \dots , \gamma _{m}^{i}) = \alpha _{i} = \delta _{i}(\psi '(\beta _{i}), \vec {z})$ and $\alpha _{i+1} = \delta _{i}(\varphi '(\beta _{i}), \vec {z})$ .

(i): We shall prove that there exist $\varepsilon _{1}(x, \vec {z}), \dots , \varepsilon _{m}(x, \vec {z}) \in Fm$ such that

(12) $$ \begin{align} \delta_{i}(x, \vec{z}) = \hat{\varphi}'(\varepsilon_{1}(x, \vec{z}), \dots, \varepsilon_{m}(x, \vec{z})). \end{align} $$

Suppose the contrary, with a view to contradiction. By (i), $\hat {\varphi }'(\gamma _{1}^{i}, \dots , \gamma _{m}^{i}) = \delta _{i}(\varphi '(\beta _{i}), \vec {z})$ . Thus, $T(\delta _{i}(x, \vec {z}))^{-x}$ and $T(\varphi ')^{-x}$ can be identified with two unique (possibly empty) downsets of $T(\hat {\varphi }'(\gamma _{1}^{i}, \dots , \gamma _{m}^{i}) )$ . Accordingly, from now on we shall work under this identification.

As $\delta _{i}(x, \vec {z}) \ne \hat {\varphi }'(\varepsilon _{1}, \dots , \varepsilon _{m})$ for any $\varepsilon _{1}, \dots , \varepsilon _{m} \in Fm$ , the tree $T(\varphi ')^{-x}$ is not a downset of $T(\delta _{i}(x, \vec {z}))^{-x}$ . Let then $w$ be a node of $T(\varphi ')^{-x}$ that is minimal in $T(\varphi ')^{-x} \smallsetminus T(\delta _{i}(x, \vec {z}))^{-x}$ . Let also $\chi $ be the unique formula such that $T(\chi )$ is the upset of $w$ in $T(\hat {\varphi }'(\gamma _{1}^{i}, \dots , \gamma _{m}^{i}))$ . As $\delta _{i}(\varphi '(\beta _{i}), \vec {z}) = \hat {\varphi }'(\gamma _{1}^{i}, \dots , \gamma _{m}^{i})$ , the minimality of $w$ implies that

(13) $$ \begin{align} \chi = \varphi'(\beta_{i}) = \square^{2k}\Diamond \varphi(\square^{k} \Diamond \beta_{i}). \end{align} $$

As k is greater than the length of all branches in $T(\varphi )$ and $\varphi ' = \square ^{2k}\Diamond \varphi (\square ^{k} \Diamond x)$ , there cannot be any point u in $T(\varphi ')^{-x}$ other than the root whose upset in $T(\hat {\varphi }'(\gamma _{1}^{i}, \dots , \gamma _{m}^{i}))$ is the subformula tree of a formula starting with the prefix $\square ^{2k}$ . Thus we conclude that $w$ is the root of $T(\varphi ')^{-x}$ and, therefore, of $T(\hat {\varphi }'(\gamma _{1}^{i}, \dots , \gamma _{m}^{i}))$ . As $T(\chi )$ is the upset of the root $w$ of $T(\hat {\varphi }'(\gamma _{1}^{i}, \dots , \gamma _{m}^{i}))$ , together with (13), this implies

(14) $$ \begin{align} \hat{\varphi}'(\gamma_{1}^{i}, \dots, \gamma_{m}^{i}) = \chi = \varphi'(\beta_{i}). \end{align} $$

Since $\textit {Var}(\varphi ') = \{ x \}$ by (7), this yields

$$ \begin{align*} \beta_{i} = \gamma_{1}^{i} = \cdots = \gamma_{m}^{i}. \end{align*} $$

As by inductive hypothesis $\gamma \equiv \gamma _{1}^{i}\equiv \cdots \equiv \gamma _{m}^{i} \quad\mod \theta (\varGamma , \boldsymbol {\tau })$ , the above display guarantees $\langle \gamma , \beta _{i} \rangle \in \theta (\varGamma , \boldsymbol {\tau })$ . Lastly, since $\beta _{i} \in \varGamma $ and $\theta (\varGamma , \boldsymbol {\tau })$ is compatible with $\varGamma $ by Claim 6.7, this implies $\gamma \in \varGamma $ , a contradiction. This establishes that there exist $\varepsilon _{1}(x, \vec {z}), \dots , \varepsilon _{m}(x, \vec {z}) \in Fm$ validating (12).

From (12) it follows

$$ \begin{align*} \hat{\varphi}'(\gamma_{1}^{i}, \dots, \gamma_{m}^{i}) = \delta_{i}(\varphi'(\beta_{i}), \vec{z}) = \hat{\varphi}'(\varepsilon_{1}(\varphi'(\beta_{i}), \vec{z}), \dots, \varepsilon_{m}(\varphi'(\beta_{i}), \vec{z})). \end{align*} $$

Consequently,

$$ \begin{align*} \gamma_{1}^{i} = \varepsilon_{1}(\varphi'(\beta_{i}), \vec{z}), \dots, \gamma_{m}^{i} = \varepsilon_{m}(\varphi'(\beta_{i}), \vec{z}). \end{align*} $$

Thus, defining

$$ \begin{align*} \gamma_{1}^{i+1} := \varepsilon_{1}(\psi'(\beta_{i}), \vec{z}), \dots, \gamma_{m}^{i+1} := \varepsilon_{m}(\psi'(\beta_{i}), \vec{z}), \end{align*} $$

we obtain

$$ \begin{align*} \alpha_{i+1} = \delta_{i}(\psi'(\beta_{i}), \vec{z}) &= \hat{\varphi}'(\varepsilon_{1}(\psi'(\beta_{i}), \vec{z}), \dots, \varepsilon_{m}(\psi'(\beta_{i}), \vec{z}))\\ &= \hat{\varphi}'(\gamma_{1}^{i+1}, \dots, \gamma_{m}^{i+1}). \end{align*} $$

Furthermore, for every $m \geqslant j \in \omega $ ,

$$ \begin{align*} \gamma \equiv \gamma_{j}^{i} = \varepsilon_{j}(\varphi'(\beta_{i}), \vec{z}) \equiv \varepsilon_{j}(\psi'(\beta_{i}), \vec{z}) = \gamma_{j}^{i+1} \quad\mod \theta(\varGamma, \boldsymbol{\tau}). \end{align*} $$

The first of the relations above follows from the inductive hypothesis, while the third follows from $\beta_{i} \in \varGamma$, which guarantees that $\langle \varphi'(\beta_{i}), \psi'(\beta_{i}) \rangle \in \theta(\varGamma, \boldsymbol{\tau})$. This concludes the proof of the induction step in case (i).

(ii): Replicating the proof described for case (i) up to (14), we obtain

$$ \begin{align*} \hat{\varphi}'(\gamma_{1}^{i}, \dots, \gamma_{m}^{i}) = \psi'(\beta_{i}), \end{align*} $$

a contradiction with Claim 6.6. This concludes the inductive proof.

Finally, taking $i=n$ in (11), we obtain

$$ \begin{align*} \hat{\varphi}'(\gamma_{1}^{n}, \dots, \gamma_{m}^{n}) = \alpha_{n} = \psi'(\gamma). \end{align*} $$

But this is impossible by Claim 6.6. Hence we reached the desired contradiction. We conclude that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics.

This establishes the theorem for logics whose language comprises two distinct unary symbols. To conclude the proof, suppose that $\mathscr {L}_{\vdash }$ comprises an n-ary connective f with $ n \geqslant 2$ . Then define

$$ \begin{align*} \square x := f(f(x, \dots, x), x, \dots, x) \text{ and }\Diamond x := f(x, f(x, \dots, x), x, \dots, x). \end{align*} $$
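For instance, when $f$ is binary, these definitions specialize to

$$ \begin{align*} \square x = f(f(x, x), x) \text{ and }\Diamond x = f(x, f(x, x)), \end{align*} $$

which are two syntactically distinct formulas in the single variable $x$, as the argument requires.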

Using this definition of $\square $ and $\Diamond $ , it is possible to reproduce the argument detailed in the case where $\mathscr {L}_{\vdash }$ comprises two distinct unary connectives.⊣

Remark 6.8. Interesting conditions implying the existence of an algebraic semantics were first discovered in [Reference Blok and Rebagliato7, Theorems 3.1 and 3.8]. In particular, [Reference Blok and Rebagliato7, Theorem 3.8] is a consequence of Theorem 6.2, while [Reference Blok and Rebagliato7, Theorem 3.1] can be derived from a simple variant of the argument detailed above.⊣

7 Examples: modal and substructural logics

From Theorem 6.2 it follows that most well-known logics have an algebraic semantics, as we proceed to explain.

Example 7.1 (Local modal logics)

Given a variety $\mathsf{K}$ of modal algebras, let $\vdash^{\ell}_{\mathsf{K}}$ be the logic defined by the rule

$$ \begin{align*} \varGamma \vdash^{\ell}_{\mathsf{K}}\varphi \Longleftrightarrow& \text{ for all }\boldsymbol{A} \in \mathsf{K}, a \in A, \text{and homomorphism }h \colon \boldsymbol{Fm} \to \boldsymbol{A},\\ &\text{ if }a \leqslant h(\gamma)\quad \text{for all }\gamma \in \varGamma \text{, then }a \leqslant h(\varphi). \end{align*} $$

Logics arising in this way have been called local modal logics [Reference Kracht32].⊣

Example 7.2 (Substructural logics)

A commutative FL-algebra is a structure $\boldsymbol {A} = \langle A; \land , \lor , \cdot , \to , 1, 0 \rangle $ comprising a lattice $\langle A; \land , \lor \rangle $ , an Abelian monoid $\langle A; \cdot , 1 \rangle $ , a constant $0$ , and a binary operation $\to $ such that for every $a, b, c \in A$ ,

$$ \begin{align*} a \cdot b \leqslant c \Longleftrightarrow a \leqslant b \to c. \end{align*} $$

Notice that if $\cdot $ coincides with $\land $ and $0$ is the minimum of $\leqslant $ , then $\boldsymbol {A}$ is a Heyting algebra.

The logic $\vdash_{\mathsf{K}}$ associated with a variety $\mathsf{K}$ of commutative FL-algebras is defined by the rule

$$ \begin{align*} \varGamma \vdash_{\mathsf{K}} \varphi \Longleftrightarrow \boldsymbol{\tau}[\varGamma] \vDash_{\mathsf{K}} \boldsymbol{\tau}(\varphi), \end{align*} $$

where $\boldsymbol {\tau } = \{ x \land 1 \thickapprox 1 \}$ . Logics arising in this way have been called substructural logics with exchange [Reference Galatos, Jipsen, Kowalski and Ono23]. Notice that when $\mathsf {K}$ is a variety of Heyting algebras, $\vdash _{\mathsf {K}}$ is the intermediate logic associated with $\mathsf {K}$ .⊣
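In concrete terms, since $a \land 1 = 1$ amounts to $1 \leqslant a$ in a commutative FL-algebra, the rule defining $\vdash_{\mathsf{K}}$ can be spelled out as follows: for every set of formulas $\varGamma \cup \{ \varphi \}$,

$$ \begin{align*} \varGamma \vdash_{\mathsf{K}} \varphi \Longleftrightarrow& \text{ for every }\boldsymbol{A} \in \mathsf{K} \text{ and homomorphism }h \colon \boldsymbol{Fm} \to \boldsymbol{A},\\ &\text{ if }1 \leqslant h(\gamma)\quad \text{for all }\gamma \in \varGamma, \text{ then }1 \leqslant h(\varphi). \end{align*} $$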

An equation $\varphi \thickapprox \psi $ is said to be nontrivial if $\varphi \ne \psi $ .

Theorem 7.3. The following holds:

  1. (i) Extensions of fragments of local and global modal logics in which a connective among $\{ \land , \lor , \to \}$ is term-definable have an algebraic semantics.

  2. (ii) Extensions of fragments of substructural logics with exchange in which a connective among $\{ \land , \lor , \cdot , \to \}$ is term-definable have an algebraic semantics.

Proof. We detail the proof of (ii) only, as that of (i) is analogous. By Corollary 3.6 it suffices to show that fragments of substructural logics with exchange in which a connective among $\{ \land , \lor , \cdot , \to \}$ is term-definable have an algebraic semantics. Accordingly, let $\vdash $ be one of these fragments. Then there is a variety $\mathsf {K}$ of commutative FL-algebras such that $\vdash $ is the logic induced by the class of matrices

$$ \begin{align*} \mathsf{M} := \{ \langle \boldsymbol{A}, F \rangle \colon \boldsymbol{A} \text{ is the }\mathscr{L}_{\vdash}\text{-reduct of an algebra } \boldsymbol{A}^{+} \in \mathsf{K} \text{ and } F = \boldsymbol{\tau}(\boldsymbol{A}^{+}) \}, \end{align*} $$

where $\boldsymbol{\tau} = \{ x \land 1 \thickapprox 1 \}$. Thus, by Lemma 2.7 and Theorem 6.2, it only remains to prove that $\mathsf{M}$ validates a nontrivial equation $\varepsilon \thickapprox \delta$ such that $\textit{Var}(\varepsilon) \cup \textit{Var}(\delta) = \{ x \}$. To this end, recall that in $\vdash$ a connective $\ast \in \{ \land, \lor, \cdot, \to \}$ is term-definable by means of a, possibly complex, formula $x + y$.

  1. (i) If $\ast \in \{ \land , \lor \}$ , take $\varepsilon := x$ and $\delta := x + x$ ;

  2. (ii) If $\ast = \cdot $ , take $\varepsilon := x + (x + x)$ and $\delta := (x + x) + x$ ;

  3. (iii) If $\ast = {\to }$ , take $\varepsilon := (x + x) + (x + x)$ and $\delta := x + ((x + x) + x)$ .
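For instance, in case (iii) the validity of $\varepsilon \thickapprox \delta$ in $\mathsf{M}$ can be checked directly. Reading $+$ as the operation $\to$ it defines, the equation amounts to

$$ \begin{align*} (x \to x) \to (x \to x) \thickapprox x \to ((x \to x) \to x), \end{align*} $$

which holds in every commutative FL-algebra: residuation and the commutativity of $\cdot$ yield $a \to (b \to c) \thickapprox b \to (a \to c)$, and instantiating $a := x$, $b := x \to x$, and $c := x$ gives the equation above. The remaining cases follow similarly from the idempotency of $\land$ and $\lor$ and the associativity of $\cdot$.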

In all cases, $\varepsilon \thickapprox \delta $ is nontrivial, valid in $\mathsf {M}$ , and such that $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \{ x \}$ . ⊣

Remark 7.4. Theorem 7.3 was essentially discovered in [Reference Blok and Rebagliato7, Corollaries 3.4, 3.7, and 3.9]. With respect to its first incarnation, the current formulation provides a small improvement by admitting the presence of the operation $\cdot$ in condition (ii) and, more generally, by allowing the operations $\land, \lor, \to, \cdot$ to be term-definable, as opposed to basic connectives.⊣

8 Logics with theorems

The aim of this section is to establish the following characterization of logics with theorems possessing an algebraic semantics.

Theorem 8.1. Let $\vdash $ be a nontrivial logic with a theorem $\varphi $ such that $\textit {Var}(\varphi ) \ne \emptyset $ . The following conditions are equivalent:

  1. (i) $\vdash $ has an algebraic semantics;

  2. (ii) Either $\vdash $ is assertional and graph-based or it is not graph-based and there are two distinct logically equivalent formulas $\varepsilon $ and $\delta $ such that $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \{ x \};$

  3. (iii) Either $\vdash $ is assertional and graph-based or it is not graph-based and $\mathsf {Alg}(\vdash ) \vDash \varepsilon \thickapprox \delta $ for some nontrivial equation $\varepsilon \thickapprox \delta $ such that $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \{ x \};$

  4. (iv) Either $\vdash $ is assertional and graph-based or it is not graph-based and has a matrix semantics validating a nontrivial equation $\varepsilon \thickapprox \delta $ such that $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \{ x \}$ .

We restricted the statement of Theorem 8.1 to the case of nontrivial logics for the sake of readability. However, the reader may consult Proposition 5.2 for a catalogue of trivial logics with an algebraic semantics.

Convention 8.2. From now on we shall denote by $\square $ the unary connective (if any) of an arbitrary graph-based logic and its set of constants by $\{ \textbf {c}_{i} \colon i < \alpha \}$ where $\alpha $ is a suitable ordinal.

Proposition 8.3. Graph-based logics with an algebraic semantics have a unital matrix semantics.

Proof. Let $\vdash $ be a graph-based logic with an algebraic semantics. By Proposition 4.2, to prove that $\vdash $ has a unital matrix semantics, it suffices to show that $x, y, \varphi (x, \vec {z}) \vdash \varphi (y, \vec {z})$ for all $\varphi (v, \vec {z}) \in Fm$ . Since $\vdash $ is graph-based, if it does not have unary connectives, this condition holds vacuously. Then we can assume that $\vdash $ has a unary connective $\square $ . Since $\vdash $ is graph-based, it will be enough to show that for every $n \in \omega $ ,

$$ \begin{align*} x, y, \square^{n}x \vdash \square^{n}y. \end{align*} $$

To this end, consider $n \in \omega $ . By assumption $\vdash $ has an algebraic semantics given by a class of algebras $\mathsf {K}$ and a set of equations $\boldsymbol {\tau }(x)$ . Consider $\boldsymbol {A} \in \mathsf {K}$ and $a, b \in A$ such that

$$ \begin{align*} a, b, \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}a \in \boldsymbol{\tau}(\boldsymbol{A}). \end{align*} $$

Then consider $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ . We need to prove

(15) $$ \begin{align} \varepsilon^{\boldsymbol{A}}(\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}b) = \delta^{\boldsymbol{A}}(\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}b). \end{align} $$

As $\vdash $ is graph-based, one of the following conditions holds:

  1. (i) $\varepsilon = \square ^{m}x$ and $\delta = \square ^{k}x$ for some $m, k \in \omega $ ;

  2. (ii) $\varepsilon = \square ^{m}\textbf {c}_{i}$ and $\delta = \square ^{k}x$ for some $m, k \in \omega $ and $i < \alpha $ ;

  3. (iii) $\varepsilon = \square ^{m}x$ and $\delta = \square ^{k}\textbf {c}_{i}$ for some $m, k \in \omega $ and $i < \alpha $ ;

  4. (iv) $\varepsilon = \square ^{m} \textbf {c}_{i}$ and $\delta = \square ^{k}\textbf {c}_{j}$ for some $m, k \in \omega $ and $i, j < \alpha $ .

(i): From $b \in \boldsymbol {\tau }(\boldsymbol {A})$ and $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ , it follows

$$ \begin{align*} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{m\text{-times}}b = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{k\text{-times}}b. \end{align*} $$

In particular, this implies

$$ \begin{align*} \varepsilon^{\boldsymbol{A}}(\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}b) = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{m+n\text{-times}}b = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{k+n\text{-times}}b = \delta^{\boldsymbol{A}}(\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}b). \end{align*} $$

(ii): From $a, \underbrace {\square ^{\boldsymbol {A}} \dots \square ^{\boldsymbol {A}}}_{n\text {-times}}a \in \boldsymbol {\tau }(\boldsymbol {A})$ and $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ it follows

$$ \begin{align*} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n+m\text{-times}}\textbf{c}_{i}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n+k\text{-times}}a = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{k+n\text{-times}}a = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{m\text{-times}}\textbf{c}_{i}^{\boldsymbol{A}}. \end{align*} $$

Moreover, from $b \in \boldsymbol {\tau }(\boldsymbol {A})$ and $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ it follows

$$ \begin{align*} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{m\text{-times}} \textbf{c}_{i}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{k\text{-times}}b. \end{align*} $$

From the two displays above we obtain

$$ \begin{align*} \varepsilon^{\boldsymbol{A}}(\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}b) = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{m\text{-times}} \textbf{c}_{i}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n+m\text{-times}}\textbf{c}_{i}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{k+n\text{-times}}b = \delta^{\boldsymbol{A}}(\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}b). \end{align*} $$

Case (iii) is handled analogously to case (ii).

(iv): As $\varepsilon $ and $\delta $ are closed formulas, the fact that $\varepsilon ^{\boldsymbol {A}}(a) = \delta ^{\boldsymbol {A}}(a)$ implies

$$ \begin{align*} \varepsilon^{\boldsymbol{A}}(\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}b) = \delta^{\boldsymbol{A}}(\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}b). \end{align*} $$

This establishes (15) and, therefore, that $\underbrace {\square ^{\boldsymbol {A}} \dots \square ^{\boldsymbol {A}}}_{n\text {-times}}b \in \boldsymbol {\tau }(\boldsymbol {A})$ . As $\mathsf {K}$ is a $\boldsymbol {\tau }$ -algebraic semantics for $\vdash $ , we conclude that $x, y, \square ^{n}x \vdash \square ^{n}y$ , as desired.⊣

Corollary 8.4. A graph-based logic with theorems has an algebraic semantics if and only if it is assertional.

Proof. Assertional logics have an algebraic semantics by Proposition 4.4. Then consider a logic $\vdash $ that is graph-based and has an algebraic semantics. By Proposition 8.3 $\vdash $ has a unital matrix semantics. Therefore, if $\vdash $ has theorems, it is also assertional.⊣

We shall also rely on the following technical observation:

Lemma 8.5. Let $\vdash $ be a logic with a $\boldsymbol {\tau }$ -algebraic semantics. If $\ \vdash $ is nontrivial, then $\boldsymbol {\tau }$ contains a nontrivial equation $\varepsilon \thickapprox \delta $ such that $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \{ x \}$ .

Proof. We reason by contraposition. Suppose that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics $\mathsf {K}$ and that $\boldsymbol {\tau }$ contains no nontrivial equation $\varepsilon \thickapprox \delta $ such that $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \{ x \}$ . We shall prove that $x \vdash y$ , i.e., that $\vdash $ is trivial. As $\mathsf {K}$ is a $\boldsymbol {\tau }$ -algebraic semantics for $\vdash $ , it suffices to show that for all $\boldsymbol {A} \in \mathsf {K}$ and $a, c \in A$ , if $a \in \boldsymbol {\tau }(\boldsymbol {A})$ , then $c \in \boldsymbol {\tau }(\boldsymbol {A})$ .

To this end, consider $\boldsymbol {A} \in \mathsf {K}$ and $a, c \in A$ such that $a \in \boldsymbol {\tau }(\boldsymbol {A})$ . We need to show that $\varepsilon ^{\boldsymbol {A}}(c) = \delta ^{\boldsymbol {A}}(c)$ for all $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ . Then consider an equation $\varepsilon \thickapprox \delta $ in $\boldsymbol {\tau }$ . By the assumption, we have two cases: either $\varepsilon \thickapprox \delta $ is trivial or $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \emptyset $ . If $\varepsilon \thickapprox \delta $ is trivial, then $\varepsilon = \delta $ , whence $\varepsilon ^{\boldsymbol {A}}(c) = \delta ^{\boldsymbol {A}}(c)$ . Then we consider the case where $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \emptyset $ , i.e., $\varepsilon $ and $\delta $ are closed formulas. In this case, $a \in \boldsymbol {\tau }(\boldsymbol {A})$ implies $\varepsilon ^{\boldsymbol {A}}(a) = \delta ^{\boldsymbol {A}}(a)$ . But, since $\varepsilon $ and $\delta $ are closed formulas, $\varepsilon ^{\boldsymbol {A}}(a) = \varepsilon ^{\boldsymbol {A}}(c)$ and $\delta ^{\boldsymbol {A}}(a) = \delta ^{\boldsymbol {A}}(c)$ , whence $\varepsilon ^{\boldsymbol {A}}(c) = \delta ^{\boldsymbol {A}}(c)$ . We conclude that $c \in \boldsymbol {\tau }(\boldsymbol {A})$ , as desired.⊣

We are now ready to prove the main result of this section:

Proof of Theorem 8.1. Let $\vdash $ be a nontrivial logic with a theorem $\varphi $ such that $\textit {Var}(\varphi ) \ne \emptyset $ . As $\vdash $ is substitution invariant, we can assume without loss of generality $\textit {Var}(\varphi ) = \{ x \}$ .

(i) $\Rightarrow $ (iv): Suppose that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics $\mathsf {K}$ . First if $\vdash $ is graph-based, then it is assertional by Corollary 8.4. Then we consider the case where $\vdash $ is not graph-based.

As $\vdash $ is nontrivial, we can apply Lemma 8.5, obtaining that $\boldsymbol {\tau }$ contains a nontrivial equation $\varepsilon \thickapprox \delta $ such that $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \{ x \}$ . Recall from Proposition 3.2 that

$$ \begin{align*} \mathsf{M} := \{ \langle \boldsymbol{A}, \boldsymbol{\tau}(\boldsymbol{A}) \rangle \colon \boldsymbol{A} \in \mathsf{K} \} \end{align*} $$

is a matrix semantics for $\vdash $ . As $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ and $\varphi $ is a theorem of $\vdash $ ,

$$ \begin{align*} \mathsf{M} \vDash \varepsilon(\varphi) \thickapprox \delta(\varphi). \end{align*} $$

Moreover, as $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \{ x \} = \textit {Var}(\varphi )$ ,

$$ \begin{align*} \textit{Var}(\varepsilon(\varphi)) \cup \textit{Var}(\delta(\varphi)) = \{ x \}. \end{align*} $$

In view of the two displays above, since $\mathsf {M}$ is a matrix semantics for $\vdash $ , it only remains to prove that the equation $\varepsilon (\varphi ) \thickapprox \delta (\varphi )$ is nontrivial. Suppose the contrary, with a view to contradiction. As $\textit {Var}(\varphi ) = \{ x \}$ , the tree $T(\varepsilon )^{-x}$ can be identified with the subtree of $T(\varepsilon (\varphi ))$ obtained by removing from $T(\varepsilon (\varphi ))$ the upsets equal to $T(\varphi )$ . Similarly, $T(\delta )^{-x}$ can be identified with the subtree of $T(\delta (\varphi ))$ obtained by removing from $T(\delta (\varphi ))$ the upsets equal to $T(\varphi )$ . Since $\varepsilon (\varphi ) = \delta (\varphi )$ , this implies $T(\varepsilon )^{-x} = T(\delta )^{-x}$ and, therefore, $\varepsilon = \delta $ . But this contradicts the fact that the equation $\varepsilon \thickapprox \delta $ is nontrivial.

(ii) $\Rightarrow $ (i): If $\vdash $ is graph-based, then by assumption it is also assertional. Thus, in this case, $\vdash $ has an algebraic semantics by Proposition 4.4. Then we consider the case where $\vdash $ is not graph-based. Together with the assumption, this implies that $\vdash $ has an algebraic semantics by Theorem 6.2.

Lastly, conditions (ii), (iii), and (iv) are equivalent by Lemma 2.7.⊣

Problem 1. Extend the characterization of logics with an algebraic semantics given in Theorem 8.1 beyond the setting of logics with theorems.

9 Protoalgebraic logics

In this section, Theorem 8.1 is specialized to the class of protoalgebraic logics [Reference Czelakowski14], introduced by Czelakowski [Reference Czelakowski12, Reference Czelakowski13] and refined by Blok and Pigozzi [Reference Blok and Pigozzi3].

Definition 9.1. A logic $\vdash $ is protoalgebraic if there exists a set of formulas $\varDelta (x, y)$ such that

(16) $$ \begin{align} \emptyset \vdash \varDelta(x, x) \text{ and } x, \varDelta(x, y) \vdash y. \end{align} $$

The class of protoalgebraic logics embraces all logics $\vdash $ possessing a term-definable operation $x \to y$ such that

$$ \begin{align*} \emptyset \vdash x \to x\text{ and }x, x \to y \vdash y \end{align*} $$

as, in this case, the set $\varDelta (x, y) := \{ x \to y \}$ satisfies condition (16). In particular, protoalgebraic logics comprise all fragments of substructural logics and of local and global modal logics containing $\to $ . On the other hand, examples of nonprotoalgebraic logics include Visser’s logic [Reference Visser43], as shown in [Reference Suzuki, Wolter and Zakharyashev41, Theorem 14] (see also [Reference Celani and Jansana9]), as well as many implication-less fragments of familiar logics such as positive modal logic [Reference Dunn17], as proved in [Reference Jansana26, Theorem 9].

Lemma 9.2 [Reference Font20, Proposition 6.11(5–7)]

If $\vdash $ is a nontrivial protoalgebraic logic, then $\vdash $ has an n-ary connective with $n \geqslant 2$ and a theorem $\varphi $ such that $\textit {Var}(\varphi ) \ne \emptyset $ .

Theorem 9.3. The following conditions are equivalent for a nontrivial protoalgebraic logic $\vdash :$

  1. (i) $\vdash $ has an algebraic semantics;

  2. (ii) There are two distinct logically equivalent formulas;

  3. (iii) $\mathsf {Alg}(\vdash )$ validates a nontrivial equation;

  4. (iv) $\vdash $ has a matrix semantics validating a nontrivial equation;

  5. (v) The free Lindenbaum–Tarski algebra $\boldsymbol {Fm}(\vdash )$ is not isomorphic to $\boldsymbol {Fm}$ .

Proof. Let $\vdash $ be a nontrivial protoalgebraic logic. By Lemma 9.2, $\vdash $ has a theorem $\varphi $ such that $\textit {Var}(\varphi ) \ne \emptyset $ and an n-ary connective $f(x_{1}, \dots , x_{n})$ with $n \geqslant 2$ .

(i) $\Rightarrow $ (iv): Suppose that $\vdash $ has an algebraic semantics. As $\vdash $ has a theorem $\varphi $ such that $\textit {Var}(\varphi ) \ne \emptyset $ , it falls in the scope of Theorem 8.1. Therefore, the implication (i) $\Rightarrow $ (iv) in Theorem 8.1 yields that either $\vdash $ is graph-based or it has a matrix semantics validating a nontrivial equation. Thus, to conclude the proof, it suffices to show that $\vdash $ is not graph-based. But this follows from the fact that $\vdash $ has an n-ary connective $f(x_{1}, \dots , x_{n})$ with $n \geqslant 2$ .

(iv) $\Rightarrow $ (i): Suppose that $\vdash $ has a matrix semantics $\mathsf {M}$ validating a nontrivial equation $\varepsilon (x_{1}, \dots , x_{n}) \thickapprox \delta (x_{1}, \dots , x_{n})$ . By the implication (iv) $\Rightarrow $ (i) in Theorem 8.1, to conclude the proof it suffices to show that there are formulas $\varepsilon '$ and $\delta '$ such that $\textit {Var}(\varepsilon ') \cup \textit {Var}(\delta ') = \{ x \}$ and the equation $\varepsilon ' \thickapprox \delta '$ is both nontrivial and valid in $\mathsf {M}$ .

To this end, recall that f is n-ary with $n \geqslant 2$ . Therefore, if $\varepsilon $ and $\delta $ are closed formulas, we are done taking

$$ \begin{align*} \varepsilon' := f(\varepsilon, x, \dots, x) \text{ and }\delta' := f(\delta, x, \dots, x). \end{align*} $$

Then we can assume by symmetry that $x_{1} \in \textit {Var}(\varepsilon )$ . For every $n \in \omega $ , we define recursively a formula $\varphi _{n}(x)$ by the rule

$$ \begin{align*} \varphi_{0}:= f(x, \dots, x) \text{ and }\varphi_{m+1} := f(\varphi_{m}, x, \dots, x). \end{align*} $$
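For instance, if $f$ is binary, the first members of this sequence are

$$ \begin{align*} \varphi_{0} = f(x, x), \quad \varphi_{1} = f(f(x, x), x), \quad \varphi_{2} = f(f(f(x, x), x), x), \end{align*} $$

so the formulas $\varphi_{k}, \varphi_{2k}, \dots, \varphi_{nk}$ substituted below have strictly increasing depths, which is what the pruning argument exploits.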

Clearly $\textit {Var}(\varphi _{n}) = \{ x \}$ for all $n \in \omega $ . Let then $k \in \omega $ be greater than the length of all branches in the trees $T(\varepsilon )$ and $T(\delta )$ , and define

$$ \begin{align*} \varepsilon' := \varepsilon(\varphi_{k}, \varphi_{2k}, \dots, \varphi_{nk}) \text{ and }\delta' := \delta(\varphi_{k}, \varphi_{2k}, \dots, \varphi_{nk}). \end{align*} $$

Notice that $\textit {Var}(\varepsilon ') \cup \textit {Var}(\delta ') = \{ x \}$ as $x_{1} \in \textit {Var}(\varepsilon )$ and $\textit {Var}(\varphi _{m}) = \{ x \}$ for all $m \in \omega $ . Furthermore, $\varepsilon ' \thickapprox \delta '$ is valid in $\mathsf {M}$ , because $\varepsilon \thickapprox \delta $ is.

Therefore, it only remains to prove that the equation $\varepsilon' \thickapprox \delta'$ is nontrivial. To this end, observe that, since k is greater than the length of all branches in $T(\varepsilon)$, the tree $T(\varepsilon)$ can be recovered from $T(\varepsilon')$ as follows. First, let $S_{n}$ be the tree obtained by replacing in $T(\varepsilon')$ the principal upsets whose longest branch has length $nk + 1$ by a point labeled by $x_{n}$. Then let $S_{n-1}$ be the tree obtained by replacing in $S_{n}$ the principal upsets whose longest branch has length $(n-1)k + 1$ by a point labeled by $x_{n-1}$. Iterating this process, we eventually get a tree $S_{1}$ which coincides with $T(\varepsilon)$. Notice that $T(\delta)$ can be obtained by pruning $T(\delta')$ in exactly the same way. Consequently, if $T(\varepsilon') = T(\delta')$, then $T(\varepsilon) = T(\delta)$ and, therefore, $\varepsilon = \delta$. As by assumption $\varepsilon \ne \delta$, we conclude that $T(\varepsilon') \ne T(\delta')$, i.e., $\varepsilon' \ne \delta'$.

(ii) $\Leftrightarrow $ (v): It is easy to see that $\boldsymbol {Fm}(\vdash )$ is isomorphic to $\boldsymbol {Fm}$ if and only if $\equiv _{\vdash }$ is the identity relation on $Fm$ . Bearing this in mind, the equivalence between conditions (ii) and (v) follows from the definition of $\equiv _{\vdash }$ .

Lastly, conditions (ii)–(iv) are equivalent by Lemma 2.7.⊣

Remark 9.4. The above proof shows that Theorem 9.3 can be generalized to nontrivial logics with a theorem $\varphi $ such that $\textit {Var}(\varphi ) \ne \emptyset $ and an n-ary connective with $n \geqslant 2$ .⊣

By Theorem 9.3, a nontrivial protoalgebraic logic has an algebraic semantics if and only if its algebraic counterpart satisfies a nontrivial equation. It follows that almost all reasonable protoalgebraic logics have an algebraic semantics. Because of this, it makes sense to review some exceptions to this general rule.

Example 9.5. To the best of our knowledge, the first example of a protoalgebraic logic lacking an algebraic semantics was discovered in [Reference Blok and Rebagliato7, Theorem 2.19] (see also [Reference Font18, Proposition 5.2]) and subsequently generalized as follows. Let $\mathscr {L}$ be an algebraic language comprising at least an n-ary connective with $n \geqslant 2$ . A set $\varDelta (x, y) \subseteq Fm_{\mathscr {L}}$ is said to be coherent if $\varphi (x, x) = \psi (x, x)$ , for all $\varphi , \psi \in \varDelta (x, y)$ . Then, given a coherent set $\varDelta (x, y) \subseteq Fm_{\mathscr {L}}$ , we denote by ${\textbf { I}}(\varDelta )$ the logic, formulated in $\mathscr {L}$ , axiomatized by the rules

$$ \begin{align*} \emptyset \rhd \varDelta(x, x) \text{ and }x, \varDelta(x, y) \rhd y. \end{align*} $$

The logic ${\textbf { I}}(\varDelta )$ is protoalgebraic and lacks an algebraic semantics [Reference Font19, Proposition 5.5].

Another protoalgebraic logic without an algebraic semantics comes from the field of relevance logic. The logic $\textbf{P}$-$\textbf{W}$ [Reference Komori30, Reference Martin and Meyer33] is formulated in the language comprising a binary symbol $\to$ only and is axiomatized by the rules

$$ \begin{align*} \emptyset \rhd (x \to y) &\to ((z \to x) \to (z \to y)),\\ \emptyset \rhd (x \to y) &\to (( y \to z) \to (x \to z)),\\ \emptyset \rhd x &\to x,\\ x, x &\to y \rhd y. \end{align*} $$

Observe that the last two rules in the above display guarantee that $\textbf{P}$-$\textbf{W}$ is protoalgebraic. The fact that $\textbf{P}$-$\textbf{W}$ lacks an algebraic semantics was established in [Reference Raftery38, Proposition 38].⊣

As we mentioned, most familiar protoalgebraic logics have an algebraic semantics. However, these algebraic semantics were constructed ad hoc and have little to do with the intuitive interpretation of these logics. For this reason, it is natural to wonder whether protoalgebraic logics $\vdash$ with an algebraic semantics also have a standard one. We close this section by showing that this is not the case, even in the well-behaved setting of local modal logics.

To this end, we assume that the reader is familiar with the basics of modal logic [Reference Chagrov and Zakharyaschev10, Reference Kracht31]. Given a class $\mathfrak {F}$ of Kripke frames, let $\vdash ^{\ell }_{\mathfrak {F}}$ be the logic defined by the rule

$$ \begin{align*} \varGamma \vdash^{\ell}_{\mathfrak{F}} \varphi \Longleftrightarrow& \text{ for every }\langle W, R \rangle \in \mathfrak{F}, w \in W, \text{and evaluation }v \text{ into }\langle W, R \rangle,\\ &\text{ if }w, v \Vdash \varGamma \text{, then }w, v \Vdash \varphi. \end{align*} $$

As it happens, $\vdash ^{\ell }_{\mathfrak {F}}$ is always a protoalgebraic logic.

Furthermore, given a Kripke frame $\mathcal{F} = \langle W, R \rangle$, we denote by $\mathcal{F}^{+}$ the Kripke frame obtained by adding to $\mathcal{F}$ a new point $w^{+}$ that is related to everything in $W \cup \{ w^{+} \}$.
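For instance, if $\mathcal{F} = \langle \{ u \}, \emptyset \rangle$ is a frame consisting of a single irreflexive point, then $\mathcal{F}^{+} = \langle W^{+}, R^{+} \rangle$, where

$$ \begin{align*} W^{+} = \{ u, w^{+} \} \text{ and } R^{+} = \{ \langle w^{+}, u \rangle, \langle w^{+}, w^{+} \rangle \}. \end{align*} $$

In particular, no point of the original frame sees $w^{+}$, so the truth of modal formulas at the points of $\mathcal{F}$ is unaffected by the construction, a fact exploited in the proof below.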

Proposition 9.6. Let $\mathfrak {F}$ be a nonempty class of Kripke frames such that if $\mathcal {F} \in \mathfrak {F}$ , then $\mathcal {F}^{+} \in \mathfrak {F}$ . Then $\vdash ^{\ell }_{\mathfrak {F}}$ does not have a standard algebraic semantics.

Proof. We will make use of the following well-known fact: for all $\varphi , \psi \in Fm$ ,

$$ \begin{align*} \mathsf{Alg}(\vdash^{\ell}_{\mathfrak{F}}) \vDash \varphi \thickapprox \psi \Longleftrightarrow \varphi \mathrel{\dashv\mkern1.5mu\vdash}^{\ell}_{\mathfrak{F}} \psi. \end{align*} $$

Suppose, with a view to contradiction, that $\vdash ^{\ell }_{\mathfrak {F}}$ has a $\boldsymbol {\tau }$ -algebraic semantics based on $\mathsf {Alg}(\vdash ^{\ell }_{\mathfrak {F}})$ . As $\mathfrak {F}$ is nonempty, the logic $\vdash ^{\ell }_{\mathfrak {F}}$ is nontrivial. Together with the fact that $\mathsf {Alg}(\vdash ^{\ell }_{\mathfrak {F}})$ is a $\boldsymbol {\tau }$ -algebraic semantics, this implies that there exists an equation $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ such that $\mathsf {Alg}(\vdash ^{\ell }_{\mathfrak {F}}) \nvDash \varepsilon \thickapprox \delta $ . Thus, in view of the above display, we can assume by symmetry that $\varepsilon \nvdash ^{\ell }_{\mathfrak {F}} \delta $ . This means that there are $\mathcal {F} \in \mathfrak {F}$ , a world $w \in \mathcal {F}$ , and an evaluation v in $\mathcal {F}$ such that $w, v \Vdash \varepsilon $ and $w, v \nVdash \delta $ .

Recall that, by the assumptions, $\mathcal {F}^{+} \in \mathfrak {F}$ . Let $v^{+}$ be the unique evaluation on $\mathcal {F}^{+}$ such that for every $y \in \textit {Var}$ and $q \in \mathcal {F}^{+}$ :

$$ \begin{align*} q, v^{+} \Vdash y \Longleftrightarrow \text{ either }(q \in \mathcal{F} \text{ and }q, v \Vdash y)\text{ or }q = w^{+}. \end{align*} $$

From the definition of $\mathcal {F}^{+}$ and $v^{+}$ it follows that

$$ \begin{align*} q, v^{+} \Vdash \varphi \Longleftrightarrow q, v \Vdash \varphi \end{align*} $$

for all $\varphi \in Fm$ and $q \in \mathcal {F}$ . Consequently, as $w, v \Vdash \varepsilon $ and $w, v \nVdash \delta $ ,

$$ \begin{align*} w^{+}, v^{+} \Vdash x \text{ and } w^{+}, v^{+} \nVdash \square(\varepsilon \to \delta). \end{align*} $$

This implies

$$ \begin{align*} x \nvdash^{\ell}_{\mathfrak{F}} \square(\varepsilon \to \delta). \end{align*} $$

On the other hand, clearly $\emptyset \vdash ^{\ell }_{\mathfrak {F}} \square (\delta \to \delta )$ . Consequently,

$$ \begin{align*} x, \square(\delta \to \delta) \nvdash^{\ell}_{\mathfrak{F}} \square(\varepsilon \to \delta). \end{align*} $$

Together with Proposition 3.3 and the fact that $\vdash^{\ell}_{\mathfrak{F}}$ has a $\boldsymbol{\tau}$-algebraic semantics, this implies $\varepsilon \thickapprox \delta \notin \boldsymbol{\tau}$, a contradiction.⊣

Corollary 9.7. If $\mathfrak {F}$ is the class of all (resp. transitive, resp. reflexive and transitive) Kripke frames, then $\vdash ^{\ell }_{\mathfrak {F}}$ does not have a standard algebraic semantics, namely one based on the variety of modal algebras (resp. of K4-algebras, resp. of interior algebras).

Proof. We detail the case where $\mathfrak {F}$ is the class of all Kripke frames, as the remaining ones are analogous. First observe that if $\mathcal {F} \in \mathfrak {F}$ , then $\mathcal {F}^{+} \in \mathfrak {F}$ . Hence we can apply Proposition 9.6, obtaining that $\vdash ^{\ell }_{\mathfrak {F}}$ does not have an algebraic semantics based on $\mathsf {Alg}(\vdash ^{\ell }_{\mathfrak {F}})$ . As $\mathsf {Alg}(\vdash ^{\ell }_{\mathfrak {F}})$ is the variety of modal algebras, we are done.⊣

Problem 2. Characterize the local modal logics $\vdash$ possessing a standard algebraic semantics. More generally, while this paper focuses mostly on nonstandard equational completeness theorems, it would be interesting to obtain similar descriptions of logics admitting standard equational completeness theorems.

10 Intermezzo: graph-based logics

This section contains a useful characterization of graph-based logics with an algebraic semantics. The reader may safely skip to the next section, after having absorbed the statement of Theorem 10.2.

In the next definition, given a set $X \subseteq \omega$, we denote by $\textrm{gcd}(X)$ the greatest common divisor of the elements of X and assume that $\textrm{gcd}(\emptyset) := 0$.

Definition 10.1. Let $\mathscr {L}$ be a graph-based language comprising a unary symbol $\square $ and a constant $\textbf {c}_{i}$ , and let k be a positive integer.

  1. 1. Let $\mathcal {U}$ be the set of rules of the following form, where $n \in \omega $ ,

    $$ \begin{align*} x, y, \square^{n}x \rhd \square^{n}y. \end{align*} $$
  2. 2. Let $\mathcal {S}_{k, i}$ be the set of rules of the following form, where $n \in \omega $ ,

    $$ \begin{align*} x, \square^{k+ n}x \rhd \square^{n} \textbf{c}_{i} \text{ and }x, \square^{n} \textbf{c}_{i} \rhd \square^{k+n} x. \end{align*} $$
  3. 3. Let $\mathcal {R}_{k, i}$ be the set of rules of the form

    $$ \begin{align*} \{ \square^{t}x \} \cup \{ \square^{u_{j}}x \colon s> j \in \omega\} \cup \{ \square^{v_{j}}x \colon s > j \in \omega \} \rhd \square^{t+g}x, \end{align*} $$
    where $t, s, g \in \omega $ and $\{ u_{j} \colon s> j \in \omega \} \cup \{ v_{j} \colon s > j \in \omega \} \subseteq \omega $ are such that $u_{j} < v_{j}$ for all $s> j \in \omega $ , and g is a multiple of $\textrm {gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \})$ .
  4. 4. Let $\mathcal {I}_{k, i}$ be the set of rules of the form

    $$ \begin{align*} \{ \square^{u_{j}}x \colon s> j \in \omega \} \cup \{ \square^{v_{j}}x \colon s > j \in \omega \} \rhd \square^{g}\textbf{c}_{i}, \end{align*} $$
    where $s, g \in \omega $ and $\{ u_{j} \colon s> j \in \omega \} \cup \{ v_{j} \colon s > j \in \omega \} \subseteq \omega $ are such that $u_{j} < v_{j}$ for all $s> j \in \omega $ , and $g + k$ is a multiple of $\textrm {gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \})$ .
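For instance, taking $t = 0$, $s = 1$, $u_{0} = 1$, and $v_{0} = 3$ in item (3), we have $\textrm{gcd}(\{ v_{0} - u_{0} \}) = 2$, so the choice $g = 2$ yields the rule

$$ \begin{align*} x, \square x, \square^{3}x \rhd \square^{2}x \end{align*} $$

in $\mathcal{R}_{k, i}$. Similarly, for $k = 2$ the same choice of $u_{0}$ and $v_{0}$ together with $g = 0$ yields the rule $\square x, \square^{3}x \rhd \textbf{c}_{i}$ in $\mathcal{I}_{2, i}$.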

The aim of this section is to establish the following:

Theorem 10.2. A graph-based logic $\vdash $ has an algebraic semantics if and only if one of the following conditions holds:

  1. (i) $\vdash $ is assertional;

  2. (ii) $\vdash $ has a unary connective $\square $ and $x \vdash \square x;$

  3. (iii) $\vdash $ has a unary connective $\square $ and the rules in $\mathcal {U} \cup \mathcal {S}_{k, i} \cup \mathcal {R}_{k, i} \cup \mathcal {I}_{k, i}$ are valid in $\vdash $ for some positive integer k and some ordinal $i < \alpha ;$

  4. (iv) $\vdash $ is almost assertional and there is a set of equations $\boldsymbol {\tau }(x)$ satisfying conditions (i) and (ii) in Proposition 4.9.

Moreover, when condition (iii) holds, k can be taken to be the least positive integer n for which the rules in $\mathcal {S}_{n, i}$ are valid in $\vdash $ .

The remaining part of the section is devoted to proving Theorem 10.2. We begin with the following observation:

Proposition 10.3. Let $\vdash $ be a graph-based logic with a unary connective. The following conditions are equivalent:

  1. (i) $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics for some $\boldsymbol {\tau }$ such that

    $$ \begin{align*} \boldsymbol{\tau} \subseteq \{ \square^{n}x \thickapprox \square^{m}x \colon n, m \in \omega \} \cup \{ \square^{n}\textbf{c}_{i} \thickapprox \square^{m}\textbf{c}_{j} \colon i, j < \alpha \text{ and }n, m \in \omega \}; \end{align*} $$
  2. (ii) $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics where $\boldsymbol {\tau } := \{ x \thickapprox \square x \};$

  3. (iii) $x \vdash \square x$ .

Proof. The implication (ii) $\Rightarrow $ (i) is obvious. To prove the implication (i) $\Rightarrow $ (iii), let $\mathsf {K}$ be the algebraic semantics given by condition (i). To show that $x \vdash \square x$ , it suffices to prove that $\square ^{\boldsymbol {A}}a \in \boldsymbol {\tau }(\boldsymbol {A})$ for all $\boldsymbol {A} \in \mathsf {K}$ and $a \in \boldsymbol {\tau }(\boldsymbol {A})$ . To this end, consider $\boldsymbol {A} \in \mathsf {K}$ , $a \in \boldsymbol {\tau }(\boldsymbol {A})$ , and an equation $\varepsilon \thickapprox \delta $ in $\boldsymbol {\tau }$ . We need to establish

(17) $$ \begin{align} \varepsilon^{\boldsymbol{A}}(\square^{\boldsymbol{A}}a) = \delta^{\boldsymbol{A}}(\square^{\boldsymbol{A}}a). \end{align} $$

Now, recall from the assumption that either $\varepsilon = \square ^{n}\textbf {c}_{i}$ and $\delta = \square ^{m}\textbf {c}_{j}$ for some $i, j < \alpha $ and $n, m \in \omega $ , or $\varepsilon = \square ^{n}x$ and $\delta = \square ^{m}x$ for some $n, m \in \omega $ . Observe that if $\varepsilon = \square ^{n}\textbf {c}_{i}$ and $\delta = \square ^{m}\textbf {c}_{j}$ , then $a \in \boldsymbol {\tau }(\boldsymbol {A})$ implies $\underbrace {\square ^{\boldsymbol {A}} \dots \square ^{\boldsymbol {A}}}_{n\text {-times}}\textbf {c}_{i}^{\boldsymbol {A}} = \underbrace {\square ^{\boldsymbol {A}} \dots \square ^{\boldsymbol {A}}}_{m\text {-times}}\textbf {c}_{j}^{\boldsymbol {A}}$ and, therefore,

$$ \begin{align*} \varepsilon^{\boldsymbol{A}}(\square^{\boldsymbol{A}} a) = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}\textbf{c}_{i}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{m\text{-times}}\textbf{c}_{j}^{\boldsymbol{A}} = \delta^{\boldsymbol{A}}(\square^{\boldsymbol{A}} a). \end{align*} $$

Then we consider the case where $\varepsilon = \square ^{n}x$ and $\delta = \square ^{m}x$ . From $a \in \boldsymbol {\tau }(\boldsymbol {A})$ it follows

$$ \begin{align*} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}a = \varepsilon^{\boldsymbol{A}}(a) = \delta^{\boldsymbol{A}}(a) = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{m\text{-times}}a. \end{align*} $$

Consequently,

$$ \begin{align*} \varepsilon^{\boldsymbol{A}}(\square^{\boldsymbol{A}}a) = \square^{\boldsymbol{A}}\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}}a = \square^{\boldsymbol{A}}\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{m\text{-times}}a = \delta^{\boldsymbol{A}}(\square^{\boldsymbol{A}}a). \end{align*} $$

This establishes (17) and, therefore, $x \vdash \square x$ .

It only remains to prove the implication (iii) $\Rightarrow $ (ii). To this end, set $\boldsymbol {\tau } := \{ x \thickapprox \square x \}$ . By Proposition 3.5, in order to show that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics, it suffices to check that for every $\varGamma \cup \{ \varphi \} \subseteq Fm$ ,

(18) $$ \begin{align} \text{if }\boldsymbol{\tau}(\varphi) \subseteq \theta(\varGamma, \boldsymbol{\tau})\text{, then }\varGamma \vdash \varphi. \end{align} $$

To this end, consider $\varGamma \cup \{ \varphi \} \subseteq Fm$ such that $\boldsymbol {\tau }(\varphi ) \subseteq \theta (\varGamma , \boldsymbol {\tau })$ . Then define

$$ \begin{align*} \phi := \{ \langle \varepsilon, \delta\rangle \in Fm \times Fm : \text{either }\varepsilon, \delta \in \textrm{Cn}_{\vdash}(\varGamma) \text{ or }\varepsilon = \delta \}. \end{align*} $$

We shall see that $\phi $ is a congruence of $\boldsymbol {Fm}$ . As $\phi $ is clearly an equivalence relation on $Fm$ , it suffices to show that $\phi $ preserves the operation $\square $ . To this end, consider $\langle \varepsilon , \delta \rangle \in \phi $ . There are two cases: either $\varepsilon = \delta $ or $\varepsilon \ne \delta $ . If $\varepsilon = \delta $ , then $\square \varepsilon = \square \delta $ and, therefore, $\langle \square \varepsilon , \square \delta \rangle \in \phi $ . Then we consider the case where $\varepsilon \ne \delta $ . As $\langle \varepsilon , \delta \rangle \in \phi $ , we get $\varepsilon , \delta \in \textrm {Cn}_{\vdash }(\varGamma )$ . Now, recall from the assumption that $x \vdash \square x$ and, therefore, $\varepsilon \vdash \square \varepsilon $ and $\delta \vdash \square \delta $ . Together with $\varepsilon , \delta \in \textrm {Cn}_{\vdash }(\varGamma )$ , this implies $\square \varepsilon , \square \delta \in \textrm {Cn}_{\vdash }(\varGamma )$ , whence $\langle \square \varepsilon , \square \delta \rangle \in \phi $ . We conclude that $\phi $ is a congruence of $\boldsymbol {Fm}$ .

We shall prove that $\phi $ extends $\theta (\varGamma , \boldsymbol {\tau })$ . As $\phi $ is a congruence of $\boldsymbol {Fm}$ , it will suffice to show that the generators of $\theta (\varGamma , \boldsymbol {\tau })$ belong to $\phi $ , i.e., that $\langle \gamma , \square \gamma \rangle \in \phi $ for every $\gamma \in \varGamma $ . To this end, consider $\gamma \in \varGamma $ . By $x \vdash \square x$ , we obtain $\square \gamma \in \textrm {Cn}_{\vdash }(\varGamma )$ , whence $\langle \gamma , \square \gamma \rangle \in \phi $ . As a consequence, $\theta (\varGamma , \boldsymbol {\tau }) \subseteq \phi $ . In particular, this implies

$$ \begin{align*} \{ \langle \varphi, \square \varphi \rangle \} = \boldsymbol{\tau}(\varphi) \subseteq \theta(\varGamma, \boldsymbol{\tau}) \subseteq \phi. \end{align*} $$

As $\varphi \ne \square \varphi $ , the fact that $\langle \varphi , \square \varphi \rangle \in \phi $ implies $\varphi , \square \varphi \in \textrm {Cn}_{\vdash }(\varGamma )$ . Consequently, $\varGamma \vdash \varphi $ , establishing (18). Hence, we conclude that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics.⊣

A logic is said to be mono-unary if its language consists of a unary connective only. As a consequence of Proposition 10.3 we obtain a new proof of the following:

Corollary 10.4 [Reference Blok and Rebagliato7, Theorem 2.20]

A mono-unary logic $\vdash $ has an algebraic semantics if and only if $x \vdash \square x$ .

Proposition 10.5. Let $\vdash $ be a graph-based logic with an algebraic semantics. One of the following conditions holds:

  1. (i) $\vdash $ has a unary connective and $x \vdash \square x;$

  2. (ii) $\vdash $ is assertional;

  3. (iii) $\vdash $ is almost assertional;

  4. (iv) $\vdash $ has a unary connective and a $\boldsymbol {\tau }$ -algebraic semantics where $\boldsymbol {\tau } = \{ \square ^{k} x \thickapprox \square ^{n} \textbf {c}_{i} \}$ , $0 \leqslant n < k \in \omega $ , and $i < \alpha $ .

Proof. By assumption, $\vdash $ has a $\boldsymbol {\rho }$ -algebraic semantics $\mathsf {K}$ . There are two cases: either $\vdash $ has a unary connective or it does not. First, suppose that $\vdash $ has no unary connective, i.e., its language comprises constant symbols only. If $\vdash $ is trivial, then it is either assertional or almost assertional and we are done. Then, we consider the case where $\vdash $ is nontrivial. By Lemma 8.5, there is a nontrivial equation $\varepsilon \thickapprox \delta \in \boldsymbol {\rho }$ such that $\textit {Var}(\varepsilon ) \cup \textit {Var}(\delta ) = \{ x \}$ . As $\mathscr {L}_{\vdash }$ comprises constant symbols only, we can assume by symmetry that there is an $i < \alpha $ such that $\varepsilon = x$ and $\delta = \textbf {c}_{i}$ . The fact that $\vdash $ has a $\boldsymbol {\rho }$ -algebraic semantics where $x \thickapprox \textbf {c}_{i} \in \boldsymbol {\rho }$ easily implies that $y \vdash \textbf {c}_{i}$ and $x, y, \varphi (x, \vec {z}) \vdash \varphi (y, \vec {z})$ for all $\varphi (v, \vec {z}) \in Fm$ . By Proposition 4.10, we conclude that $\vdash $ is either assertional or almost assertional, as desired.

Therefore, it only remains to consider the case where $\vdash$ has a unary connective $\square$. First, if condition (i) holds, we are done. Then we consider the case where condition (i) fails. By the implication (i) $\Rightarrow$ (iii) in Proposition 10.3, this yields that there are $s, t \in \omega$ and an ordinal $j < \alpha$ such that

$$ \begin{align*} \boldsymbol{\rho} \cap \{ \square^{s} x \thickapprox \square^{t} \textbf{c}_{j}, \square^{t} \textbf{c}_{j} \thickapprox \square^{s} x \} \ne \emptyset. \end{align*} $$

Consequently, for every $\boldsymbol {A} \in \mathsf {K}$ and $a \in \boldsymbol {\rho }(\boldsymbol {A})$ ,

$$ \begin{align*} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{s\text{-times}}a = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{t\text{-times}} \textbf{c}_{j}^{\boldsymbol{A}}. \end{align*} $$

Accordingly, the set

$$ \begin{align*} S := \{ m \in \omega \colon&\text{there are }n \in \omega \text{ and an ordinal }i < \alpha \text{ such that for all }\boldsymbol{A} \in \mathsf{K},\\ &\text{if }a \in \boldsymbol{\rho}(\boldsymbol{A}),\text{ then } \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{m\text{-times}}a = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}} \textbf{c}_{i}^{\boldsymbol{A}} \} \end{align*} $$

is nonempty. Let then k be the minimum of S. There are $n \in \omega $ and an ordinal $i < \alpha $ such that

(19) $$ \begin{align} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{k\text{-times}}a = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}} \textbf{c}_{i}^{\boldsymbol{A}} \end{align} $$

for all $\boldsymbol {A} \in \mathsf {K}$ and $a \in \boldsymbol {\rho }(\boldsymbol {A})$ . Furthermore, define

$$ \begin{align*} \boldsymbol{\tau} := \{ \square^{k}x \thickapprox \square^{n} \textbf{c}_{i} \}. \end{align*} $$

Claim 10.6. If $\boldsymbol {A} \in \mathsf {K}$ and $\boldsymbol {\rho }(\boldsymbol {A}) \ne \emptyset $ , then $\boldsymbol {\tau }(\boldsymbol {A}) = \boldsymbol {\rho }(\boldsymbol {A})$ .

Proof. Consider $\boldsymbol {A} \in \mathsf {K}$ such that there is some $b \in \boldsymbol {\rho }(\boldsymbol {A})$ . From the definition of $\boldsymbol {\tau }$ it follows immediately that $\boldsymbol {\rho }(\boldsymbol {A}) \subseteq \boldsymbol {\tau }(\boldsymbol {A})$ . To prove the other inclusion, consider an element $a \in \boldsymbol {\tau }(\boldsymbol {A})$ and an equation $\varepsilon \thickapprox \delta $ in $\boldsymbol {\rho }$ . We need to show $\varepsilon ^{\boldsymbol {A}}(a) = \delta ^{\boldsymbol {A}}(a)$ . As $\vdash $ is graph-based, one of the following conditions holds:

  1. (i) $\varepsilon = \square ^{s}x$ and $\delta = \square ^{t}x$ for some $s, t \in \omega $ ;

  2. (ii) $\varepsilon = \square ^{s}\textbf {c}_{j}$ and $\delta = \square ^{t}x$ for some $s, t \in \omega $ and $j < \alpha $ ;

  3. (iii) $\varepsilon = \square ^{s}x$ and $\delta = \square ^{t}\textbf {c}_{j}$ for some $s, t \in \omega $ and $j < \alpha $ ;

  4. (iv) $\varepsilon = \square ^{s} \textbf {c}_{h}$ and $\delta = \square ^{t}\textbf {c}_{j}$ for some $s, t \in \omega $ and $h, j < \alpha $ .

(i): If $s = t$ , then clearly $\varepsilon ^{\boldsymbol {A}}(a) = \delta ^{\boldsymbol {A}}(a)$ . Then by symmetry we can assume that $s < t$ . There are two cases: either $k \leqslant t$ or $t < k$ . First suppose that $k \leqslant t$ . Observe that for every $\boldsymbol {B} \in \mathsf {K}$ and $d \in \boldsymbol {\rho }(\boldsymbol {B})$ ,

$$ \begin{align*} \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{s\text{-times}}d = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{t\text{-times}}d = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(t-k)\text{-times}}\underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{k\text{-times}}d = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(t-k)\text{-times}}\underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{n\text{-times}}\textbf{c}_{i}^{\boldsymbol{B}}. \end{align*} $$

The first equality in the above display follows from $\varepsilon \thickapprox \delta \in \boldsymbol {\rho }$ and the third one from (19). Furthermore, together with the minimality of k, the above display implies $k \leqslant s$ . Thus, $k \leqslant s, t$ . Bearing this in mind, observe that from $b \in \boldsymbol {\rho }(\boldsymbol {A})$ and (19) it follows

$$ \begin{align*} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(s-k+n)\text{-times}}\textbf{c}_{i}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(s-k)\text{-times}}\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{k\text{-times}}b = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(t-k)\text{-times}}\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{k\text{-times}}b = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(t-k+n)\text{-times}}\textbf{c}_{i}^{\boldsymbol{A}}. \end{align*} $$

Together with $a \in \boldsymbol {\tau }(\boldsymbol {A})$ , this implies

$$ \begin{align*} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(s-k)\text{-times}}\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{k\text{-times}}a = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(s-k+n)\text{-times}}\textbf{c}_{i}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(t-k+n)\text{-times}}\textbf{c}_{i}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(t-k)\text{-times}}\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{k\text{-times}}a. \end{align*} $$

Hence, we conclude $\varepsilon ^{\boldsymbol {A}}(a) = \delta ^{\boldsymbol {A}}(a)$ , as desired.

Then we consider the case where $t < k$. We shall see that this case cannot occur, as it leads to a contradiction. To prove this, notice that for every $\boldsymbol {B} \in \mathsf {K}$ and $d \in \boldsymbol {\rho }(\boldsymbol {B})$,

$$ \begin{align*} \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(k-t+s)\text{-times}}d = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{k\text{-times}} d = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{n\text{-times}} \textbf{c}_{i}^{\boldsymbol{B}}. \end{align*} $$

The first equality above follows from $\varepsilon \thickapprox \delta \in \boldsymbol {\rho }$ and the second one from (19). Now, together with the minimality of k, the above display implies $k - t+ s \geqslant k$. But this contradicts the fact that $s < t$. Hence, the case where $t < k$ never happens.

(ii): As $\square ^{s} \textbf {c}_{j} \thickapprox \square ^{t}x \in \boldsymbol {\rho }$ , by minimality of k we get $k \leqslant t$ . Bearing this in mind,

$$ \begin{align*} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(t-k+n)\text{-times}}\textbf{c}_{i}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{t\text{-times}}b = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{s\text{-times}} \textbf{c}_{j}^{\boldsymbol{A}}. \end{align*} $$

The first equality above follows from (19) and $b \in \boldsymbol {\rho }(\boldsymbol {A})$ and the second one from $\varepsilon \thickapprox \delta \in \boldsymbol {\rho }$ and $b \in \boldsymbol {\rho }(\boldsymbol {A})$ . From the above display and $a \in \boldsymbol {\tau }(\boldsymbol {A})$ it follows

$$ \begin{align*} \varepsilon^{\boldsymbol{A}}(a) = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{s\text{-times}} \textbf{c}_{j}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(t-k+n)\text{-times}}\textbf{c}_{i}^{\boldsymbol{A}} = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{t\text{-times}}a = \delta^{\boldsymbol{A}}(a). \end{align*} $$

Case (iii) is analogous to case (ii). Finally, in case (iv) both $\varepsilon $ and $\delta $ are closed formulas. Since $b \in \boldsymbol {\rho }(\boldsymbol {A})$ and $\varepsilon \thickapprox \delta \in \boldsymbol {\rho }$, we have $\varepsilon ^{\boldsymbol {A}} = \delta ^{\boldsymbol {A}}$ and, therefore, $\varepsilon ^{\boldsymbol {A}}(a) = \delta ^{\boldsymbol {A}}(a)$.⊣

Now, if $\vdash $ has theorems, then it is assertional by Corollary 8.4. In this case, condition (ii) of Proposition 10.5 holds and we are done. Then we consider the case where $\vdash $ lacks theorems. Since $\mathsf {K}$ is a $\boldsymbol {\rho }$ -algebraic semantics for $\vdash $ , we can apply Proposition 3.2 obtaining that $\{ \langle \boldsymbol {A}, \boldsymbol {\rho }(\boldsymbol {A})\rangle \colon \boldsymbol {A} \in \mathsf {K}\}$ is a matrix semantics for $\vdash $ . As $\vdash $ lacks theorems, by Lemma 4.8 the following is also a matrix semantics for $\vdash $ :

$$ \begin{align*} \mathsf{N} := \{ \langle \boldsymbol{A}, \boldsymbol{\rho}(\boldsymbol{A})\rangle \colon \boldsymbol{A} \in \mathsf{K} \text{ such that } \boldsymbol{\rho}(\boldsymbol{A}) \ne \emptyset \} \cup \{ \langle \boldsymbol{Fm}, \emptyset \rangle \}. \end{align*} $$

Lastly, with an application of Claim 10.6,

(20) $$ \begin{align} \mathsf{N} = \{ \langle \boldsymbol{A}, \boldsymbol{\tau}(\boldsymbol{A})\rangle \colon \boldsymbol{A} \in \mathsf{K} \text{ such that }\boldsymbol{\rho}(\boldsymbol{A}) \ne \emptyset \} \cup \{ \langle \boldsymbol{Fm}, \emptyset \rangle \}. \end{align} $$

There are two cases: either $n < k$ or $k \leqslant n$ . If $n < k$ , there is no formula $\varphi $ such that $\square ^{k}\varphi = \square ^{n}\textbf {c}_{i}$ . Consequently, $\boldsymbol {\tau }(\boldsymbol {Fm}) = \emptyset $ . Together with (20), this yields

$$ \begin{align*} \mathsf{N} = \{ \langle \boldsymbol{A}, \boldsymbol{\tau}(\boldsymbol{A})\rangle \colon \boldsymbol{A} \in \mathsf{K} \text{ such that }\boldsymbol{\rho}(\boldsymbol{A}) \ne \emptyset \} \cup \{ \langle \boldsymbol{Fm}, \boldsymbol{\tau}(\boldsymbol{Fm}) \rangle \}. \end{align*} $$

Since $\mathsf {N}$ is a matrix semantics for $\vdash $ , this implies that the class of algebraic reducts of $\mathsf {N}$ is a $\boldsymbol {\tau }$ -algebraic semantics for $\vdash $ . Thus, condition (iv) of Proposition 10.5 holds and we are done.

It only remains to consider the case where $k \leqslant n$ . In this case, from the fact that $\mathsf {N}$ is a matrix semantics for $\vdash $ and (20) it follows $x \vdash \square ^{n-k}\textbf {c}_{i}$ . Recall from Proposition 8.3 that $\vdash $ has a unital matrix semantics $\mathsf {M}$ . Together with $x \vdash \square ^{n-k}\textbf {c}_{i}$ , this implies

$$ \begin{align*} F = \{ \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(n-k)\text{-times}} \textbf{c}_{i}^{\boldsymbol{A}} \} \end{align*} $$

for all $\langle \boldsymbol {A}, F \rangle \in \mathsf {M}$ such that $F \ne \emptyset $ . As $\vdash $ lacks theorems, we conclude that $\vdash $ is almost assertional and, therefore, that condition (iii) of Proposition 10.5 holds.⊣

The proof of the next result relies on two technical lemmas which, for the sake of readability, are established in the Appendix.

Proposition 10.7. The following conditions are equivalent for a graph-based logic $\vdash $ with a unary connective:

  (i) $\vdash $ has a $\boldsymbol {\tau }$-algebraic semantics where $\boldsymbol {\tau } = \{ \square ^{k}x \thickapprox \square ^{n} \textbf {c}_{i} \}$ for some nonnegative integers $n < k$ and ordinal $i < \alpha ;$

  (ii) $\vdash $ has a $\boldsymbol {\tau }$-algebraic semantics where $\boldsymbol {\tau } = \{ \square ^{k}x \thickapprox \textbf {c}_{i} \}$ for some positive integer k and ordinal $i < \alpha ;$

  (iii) The rules in $\mathcal {U} \cup \mathcal {S}_{k, i} \cup \mathcal {R}_{k, i} \cup \mathcal {I}_{k, i}$ are valid in $\vdash $ for some positive integer k and ordinal $i < \alpha $.

Moreover, when condition (iii) holds, k can be taken to be the least positive integer n for which the rules in $\mathcal {S}_{n, i}$ are valid in $\vdash $ .

Proof. The implication (ii) $\Rightarrow $ (i) is obvious (just take $n := 0$ ).

(i) $\Rightarrow $ (iii): Assume that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics where $\boldsymbol {\tau } = \{ \square ^{k}x \thickapprox \square ^{n} \textbf {c}_{i} \}$ for some nonnegative integers $n < k$ and ordinal $i < \alpha $ . We begin with the following observation:

Claim 10.8. The rules in $\mathcal {S}_{k-n, i}$ are valid in $\vdash $ .

Proof. We need to prove that for every $m \in \omega $ ,

$$ \begin{align*} x, \square^{k-n+ m}x \mathrel{\dashv\mkern1.5mu\vdash} \square^{m}\textbf{c}_{i}, x. \end{align*} $$

To this end, consider $m \in \omega $ . As $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics, by Proposition 3.5 it suffices to establish

$$ \begin{align*} \langle \square^{k+k - n+m}x, \square^{n}\textbf{c}_{i}\rangle &\in \theta(\{ x, \square^{m}\textbf{c}_{i}\}, \boldsymbol{\tau}),\\ \langle \square^{k+m}\textbf{c}_{i}, \square^{n}\textbf{c}_{i}\rangle &\in \theta(\{ x, \square^{k-n+m}x\}, \boldsymbol{\tau}). \end{align*} $$

The above conditions follow, respectively, from Lemmas A.1 and A.3, which are proved in the Appendix.⊣

Observe that $k -n$ is a positive integer, since by assumption $k> n$. Thus, by Claim 10.8, there exists the least positive integer m such that the rules in $\mathcal {S}_{m, i}$ are valid in $\vdash $. We shall prove that the rules in $\mathcal {U} \cup \mathcal {R}_{m, i} \cup \mathcal {I}_{m, i}$ are valid in $\vdash $. This will establish both condition (iii) and the remark at the end of the statement of Proposition 10.7.

First notice that, as $\vdash $ has an algebraic semantics, by Propositions 8.3 and 4.2 the rules in $\mathcal {U}$ are valid in $\vdash $ . Then, consider a typical rule in $\mathcal {R}_{m, i}$ :

(21) $$ \begin{align} \{ \square^{t}x \} \cup \{ \square^{u_{j}}x \colon s> j \in \omega\} \cup \{ \square^{v_{j}}x \colon s > j \in \omega \} \rhd \square^{t+g}x. \end{align} $$

Define

$$ \begin{align*} \varGamma := \{ \square^{t}x \} \cup \{ \square^{u_{j}} x \colon s> j \in \omega \} \cup \{ \square^{v_{j}}x \colon s > j \in \omega \}. \end{align*} $$

By Proposition 3.5, as $\vdash $ has a $\boldsymbol {\tau }$-algebraic semantics, in order to prove that the rule (21) is valid in $\vdash $, it suffices to show that

(22) $$ \begin{align} \langle \square^{k}\square^{t+g}x, \square^{n}\textbf{c}_{i}\rangle \in \theta(\varGamma, \boldsymbol{\tau}). \end{align} $$

But this is a consequence of the “if” part of Lemma A.1. Hence we conclude that the rules in $\mathcal {R}_{m, i}$ are valid in $\vdash $ . The fact that the rules in $\mathcal {I}_{m, i}$ are valid in $\vdash $ is proved similarly, by replacing the only application of Lemma A.1 with one of Lemma A.3.

(iii) $\Rightarrow $ (ii): Suppose that the rules in $\mathcal {U} \cup \mathcal {S}_{k, i} \cup \mathcal {R}_{k, i} \cup \mathcal {I}_{k, i}$ are valid in $\vdash $ for some positive integer k and some ordinal $i < \alpha $ . Then let $\mathcal {R}_{k, i}^{+}$ be the set of rules of the form

$$ \begin{align*} \{ \square^{t}x \} \cup \{ \square^{u_{j}}y_{j} \colon s> j \in \omega \} \cup \{ \square^{v_{j}}y_{j} \colon s > j \in \omega \} \rhd \square^{t+g}x, \end{align*} $$

where $t, s, g \in \omega $ and $\{ u_{j} \colon s> j \in \omega \} \cup \{ v_{j} \colon s > j \in \omega \} \subseteq \omega $ are such that $u_{j} < v_{j}$ for all $s> j \in \omega $ , and g is a multiple of $\textrm {gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \})$ .

Let also $\mathcal {I}_{k, i}^{+}$ be the set of rules of the form

$$ \begin{align*} \{ \square^{u_{j}}y_{j} \colon s> j \in \omega \} \cup \{ \square^{v_{j}}y_{j} \colon s > j \in \omega \} \rhd \square^{g}\textbf{c}_{i}, \end{align*} $$

where $s, g \in \omega $ and $\{ u_{j} \colon s> j \in \omega \} \cup \{ v_{j} \colon s > j \in \omega \} \subseteq \omega $ are such that $u_{j} < v_{j}$ for all $s> j \in \omega $ , and $g + k$ is a multiple of $\textrm {gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \})$ .

As the rules in $\mathcal {U} \cup \mathcal {R}_{k, i} \cup \mathcal {I}_{k, i}$ are valid in $\vdash $, it is not hard to see that so are those in $\mathcal {R}_{k, i}^{+} \cup \mathcal {I}_{k, i}^{+}$. Furthermore, notice that

(23) $$ \begin{align} \square^{w}\textbf{c}_{i} \vdash \square^{2w+k}\textbf{c}_{i},\quad \text{for all }w \in \omega. \end{align} $$

To justify the above display, consider $w \in \omega $ . As the rules in $\mathcal {S}_{k, i}$ are valid in $\vdash $ , we have $x, \square ^{w}\textbf {c}_{i} \vdash \square ^{w+k}x$ . Hence, by substitution invariance, we obtain $\square ^{w}\textbf {c}_{i}, \square ^{w}\textbf {c}_{i} \vdash \square ^{2w+k}\textbf {c}_{i}$ and, therefore, $\square ^{w}\textbf {c}_{i} \vdash \square ^{2w+k}\textbf {c}_{i}$ .

Now, we shall prove that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics for

$$ \begin{align*} \boldsymbol{\tau} := \{ \square^{k}x \thickapprox \textbf{c}_{i} \}. \end{align*} $$

By Proposition 3.5 it suffices to check that

(24) $$ \begin{align} \text{if }\boldsymbol{\tau}(\varphi) \subseteq \theta(\varGamma, \boldsymbol{\tau})\text{, then }\varGamma \vdash \varphi \end{align} $$

for all $\varGamma \cup \{ \varphi \} \subseteq Fm$ .

To this end, consider $\varGamma \cup \{ \varphi \} \subseteq Fm$ such that $\boldsymbol {\tau }(\varphi ) \subseteq \theta (\varGamma , \boldsymbol {\tau })$ , i.e., such that

(25) $$ \begin{align} \langle \square^{k}\varphi, \textbf{c}_{i}\rangle \in \theta(\varGamma, \boldsymbol{\tau}). \end{align} $$

We denote by $At(\mathscr {L}_{\vdash })$ the set of atomic formulas of $\vdash $ , i.e., the union of $\textit {Var}$ and the set of constants of $\mathscr {L}_{\vdash }$ . As $\vdash $ is graph-based, there are $p \in At(\mathscr {L}_{\vdash })$ and $h \in \omega $ such that $\varphi = \square ^{h}p$ . There are two cases: either $p \ne \textbf {c}_{i}$ or $p = \textbf {c}_{i}$ .

First we consider the case where $p \ne \textbf {c}_{i}$ . By applying the “only if” part of Lemma A.1 to (25), we obtain

$$ \begin{align*} s, s^{\ast} \in \omega \text{ and }\{ q_{j} \colon &s> j \in \omega \} \subseteq At(\mathscr{L}_{\vdash}) \text{ and}\\ \{ u_{j} \colon s > j \in \omega \} \cup \{ v_{j} \colon & s > j \in \omega \} \cup \{ w_{j^{\ast}} \colon s^{\ast} > j^{\ast} \in \omega \} \subseteq \omega \end{align*} $$

such that

  1. $u_{j} < v_{j}$, for all $s> j \in \omega $;

  2. $\square ^{u_{j}}q_{j}, \square ^{v_{j}}q_{j}, \square ^{w_{j^{\ast }}}\textbf {c}_{i} \in \varGamma $, for all $s> j \in \omega $ and $s^{\ast }> j^{\ast } \in \omega $;

  3. $h = t+g$ for some $t, g \in \omega $ such that $\square ^{t}p \in \varGamma $ and g is a multiple of

    $$ \begin{align*} d := \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \} \cup \{ k + w_{j^{\ast}} \colon s^{\ast} > j^{\ast} \in \omega \}). \end{align*} $$

Define

$$ \begin{align*} \varDelta := &\{ \square^{t}p \} \cup \{ \square^{u_{j}}q_{j} \colon s> j \in \omega \} \cup \{ \square^{v_{j}}q_{j} \colon s > j \in \omega \} \cup{} \\ & \{ \square^{w_{j^{\ast}}}\textbf{c}_{i} \colon s^{\ast} > j^{\ast} \in \omega \} \cup \{ \square^{2w_{j^{\ast}}+k}\textbf{c}_{i} \colon s^{\ast} > j^{\ast} \in \omega \}. \end{align*} $$

From conditions (2) and (3) above and (23) it follows that $\varGamma \vdash \varDelta $ . Hence, to prove $\varGamma \vdash \varphi $ , it suffices to show $\varDelta \vdash \varphi $ . From the fact that g is a multiple of d, it follows that it is also a multiple of

$$ \begin{align*} \textrm{gcd}(\{ r - r' \colon \square^{r}q, \square^{r'}q \in \varDelta \text{ for some }q \in At(\mathscr{L}_{\vdash}) \text{ such that } r> r'\}). \end{align*} $$

Together with the fact that $2w_{j^{\ast }}+ k> w_{j^{\ast }}$ for all $s^{\ast }> j^{\ast } \in \omega $ (as k is positive) and that the rules in $\mathcal {R}_{k, i}^{+}$ are valid in $\vdash $ , this implies $\varDelta \vdash \square ^{t+ g}p$ . As, by condition (3), $h = t+g$ , this amounts to $\varDelta \vdash \square ^{h}p$ . Since $\varphi = \square ^{h}p$ , we conclude $\varDelta \vdash \varphi $ and, therefore, $\varGamma \vdash \varphi $ .

Then we consider the case where $p = \textbf {c}_{i}$ . By applying the “only if” part of Lemma A.3 to (25), we obtain

$$ \begin{align*} s, s^{\ast} \in \omega \text{ and }\{ q_{j} \colon &s> j \in \omega \} \subseteq At(\mathscr{L}_{\vdash}) \text{ and}\\ \{ u_{j} \colon s > j \in \omega \} \cup \{ v_{j} \colon & s > j \in \omega \} \cup \{ w_{j^{\ast}} \colon s^{\ast} > j^{\ast} \in \omega \} \subseteq \omega \end{align*} $$

validating conditions (1) and (2) above and such that

  4. $h+k$ is a multiple of

    $$ \begin{align*} d := \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \} \cup \{ k + w_{j^{\ast}} \colon s^{\ast} > j^{\ast} \in \omega \}). \end{align*} $$

Defining $\varDelta $ as in the previous case (but omitting $\square ^{t}p$ ), we get $\varGamma \vdash \varDelta $ . Thus, to prove $\varGamma \vdash \varphi $ , it suffices to show $\varDelta \vdash \varphi $ . Now, as $h+k$ is a multiple of d, it is also a multiple of

$$ \begin{align*} \textrm{gcd}(\{ r - r' \colon \square^{r}q, \square^{r'}q \in \varDelta \text{ for some }q \in At(\mathscr{L}_{\vdash}) \text{ such that } r> r'\}). \end{align*} $$

Together with the fact that $2w_{j^{\ast }}+ k> w_{j^{\ast }}$ for all $s^{\ast }> j^{\ast } \in \omega $ (as k is positive) and that the rules in $\mathcal {I}_{k, i}^{+}$ are valid in $\vdash $ , this implies $\varDelta \vdash \square ^{h}p$ . Hence, we conclude that $\varGamma \vdash \varphi $ , establishing (24).⊣

We are now ready to prove the main result of this section:

Proof of Theorem 10.2. To prove the “only if” part, suppose that $\vdash $ has an algebraic semantics and that conditions (i), (ii), and (iv) fail. As $\vdash $ has an algebraic semantics, Proposition 4.9 and the assumption that condition (iv) fails imply that $\vdash $ is not almost assertional. Therefore, from Proposition 10.5 it follows that $\vdash $ has a $\boldsymbol {\tau }$ -algebraic semantics where $\boldsymbol {\tau } = \{ \square ^{k} x \thickapprox \square ^{n} \textbf {c}_{i} \}$ for some nonnegative integers $n < k$ and ordinal $i < \alpha $ . By Proposition 10.7 this implies that the rules in $\mathcal {U} \cup \mathcal {S}_{m, i} \cup \mathcal {R}_{m, i} \cup \mathcal {I}_{m, i}$ are valid in $\vdash $ for some positive integer m, i.e., that condition (iii) holds, as desired.

Then we turn to prove the “if” part. The fact that each condition in the list (i)–(iv) implies that $\vdash $ has an algebraic semantics follows respectively from Propositions 4.4, 10.3, 10.7, and 4.9.

Finally, the last sentence in the statement of Theorem 10.2 follows from the last part of Proposition 10.7.⊣

11 Locally tabular logics

Definition 11.1. A logic $\vdash $ is said to be locally tabular if for every integer $n \geqslant 1$ there are only finitely many formulas in variables $x_1, \dots , x_n$ up to logical equivalence in $\vdash $ .Footnote 5

An algebra is said to be locally finite when its finitely generated subalgebras are finite. Similarly, a class of algebras $\mathsf {K}$ is locally finite when its members are.

Proposition 11.2. The following conditions are equivalent for a logic $\vdash :$

  (i) $\vdash $ is locally tabular;

  (ii) $\mathsf {Alg}(\vdash )$ is locally finite;

  (iii) $\vdash $ has a matrix semantics whose algebraic reducts generate a locally finite variety.

Proof. The implications (i) $\Rightarrow $ (ii) and (iii) $\Rightarrow $ (i) follow immediately from Lemma 2.7.

(ii) $\Rightarrow $ (iii): Suppose that $\mathsf {Alg}(\vdash )$ is locally finite. Since $\mathsf {Alg}(\vdash )$ contains a free algebra with a denumerable set of free generators by Lemma 2.6, the fact that $\mathsf {Alg}(\vdash )$ is locally finite implies that the variety generated by $\mathsf {Alg}(\vdash )$ is also locally finite. Since $\vdash $ has a matrix semantics whose algebraic reducts belong to $\mathsf {Alg}(\vdash )$ by Corollary 2.5, we are done.⊣

In this section we present a characterization of locally tabular logics with an algebraic semantics. For the sake of readability, this result is split into the next three propositions.

Proposition 11.3. Every locally tabular logic that is not graph-based has an algebraic semantics.

Proof. Let $\vdash $ be a locally tabular logic that is not graph-based. The latter assumption guarantees that there is a complex formula $\varphi $ such that $\textit {Var}(\varphi ) = \{ x \}$ . As $\vdash $ is locally tabular, there are two nonnegative integers $m \leqslant n$ such that

$$ \begin{align*} \varphi^{m}(x) \equiv_{\vdash} \varphi^{n+1}(x). \end{align*} $$

Since $\varphi $ is a complex formula, the two formulas in the above display are different. Moreover, from $\textit {Var}(\varphi ) = \{ x \}$ it follows

$$ \begin{align*} \textit{Var}(\varphi^{m}(x)) \cup \textit{Var}(\varphi^{n+1}(x)) = \{ x \}. \end{align*} $$

As $\vdash $ is not graph-based, we can apply Theorem 6.2 obtaining that $\vdash $ has an algebraic semantics.⊣

Corollary 11.4. A locally tabular logic with theorems has an algebraic semantics if and only if either it is graph-based and assertional or it is not graph-based.

Proof. The “only if” part follows from Corollary 8.4, while the “if” one is a consequence of Propositions 4.4 and 11.3.⊣

In view of Proposition 11.3, it only remains to describe graph-based locally tabular logics with an algebraic semantics. To some extent, this has already been done in Theorem 10.2. However, in this section we aim for a more computational characterization, which will be instrumental to derive some decidability results (Theorem 12.1).

To this end, notice that for every locally tabular graph-based logic $\vdash $ with a unary connective there are two nonnegative integers $m \leqslant n$ such that $\square ^{m}x \equiv _{\vdash } \square ^{n+1}x$ . The next result explains when such a logic has an algebraic semantics.

Proposition 11.5. Let $\vdash $ be a graph-based logic with a unary connective for which there are two nonnegative integers $m \leqslant n$ such that $\square ^{m}x \equiv _{\vdash } \square ^{n+1}x$ . Then $\vdash $ has an algebraic semantics if and only if $x, y, \square ^{t}x \vdash \square ^{t}y$ for all nonnegative integers $t \leqslant n$ and one of the following conditions holds:

  (i) $x \vdash \square x;$

  (ii) $y \vdash \square ^{t}p$ for some $p \in \{ x \} \cup \{ \textbf {c}_{i} \colon i \text { is an ordinal} < \alpha \}$ and nonnegative integer $t \leqslant n;$

  (iii) The following conditions hold for some ordinal $i < \alpha $ and positive integer $k \leqslant n:$

    (a) $x, \square ^{t+k}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{t}\textbf {c}_{i}, x$ for all nonnegative integers $t \leqslant n;$

    (b) For all $s, g, h, t \in \omega $ such that $s \leqslant (2n-m+1)^{2}$ and $g, h, t \leqslant 2n-m+1$, and for all $\{ u_{j} \colon s> j \in \omega \} \cup \{ v_{j} \colon s > j \in \omega \} \subseteq \omega $,

      $$ \begin{align*} \{ \square^{t}x \} \cup \{ \square^{u_{j}}x \colon s> j \in \omega\} \cup \{ \square^{v_{j}}x \colon s > j \in \omega \} &\vdash \square^{t+g}x,\\ \{ \square^{u_{j}}x \colon s > j \in \omega \} \cup \{ \square^{v_{j}}x \colon s > j \in \omega \} &\vdash \square^{h}\textbf{c}_{i}, \end{align*} $$
      provided that $u_{j} < v_{j} \leqslant n + n-m+1$ for all $s> j \in \omega $, and that g and $h+k$ are multiples of $\textrm {gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \})$. (Since all the parameters here are bounded, condition (b) comprises only finitely many rule instances; see the sketch below.)
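
Observe that all the parameters occurring in condition (b) are bounded in terms of m, n, and k, so (b) amounts to the validity of finitely many rules and can, in principle, be checked mechanically. The following minimal sketch (in Python, with an assumed string rendering of formulas) merely enumerates these rule instances as pairs consisting of a list of premises and a conclusion; deciding their validity requires, of course, a presentation of $\vdash $ and is not attempted here.

from itertools import chain, combinations, islice
from functools import reduce
from math import gcd

def box(e, atom):
    # render the formula box^e(atom) as a string (an assumed convention)
    return "box(" * e + atom + ")" * e

def is_multiple(x, d):
    # "x is a multiple of d", with the convention gcd(empty set) = 0
    return x == 0 if d == 0 else x % d == 0

def instances_of_b(m, n, k, const="c_i"):
    # Enumerate the rule instances mentioned in condition (b) of
    # Proposition 11.5.  The bound s <= (2n-m+1)^2 is automatic, since
    # there are at most that many pairs u < v <= n + n - m + 1.  The
    # enumeration is exponential in n and only intended for small values.
    bound = 2 * n - m + 1
    pairs = [(u, v) for v in range(1, n + (n - m + 1) + 1) for u in range(v)]
    for S in chain.from_iterable(combinations(pairs, r) for r in range(len(pairs) + 1)):
        d = reduce(gcd, (v - u for (u, v) in S), 0)
        prem = sorted({box(e, "x") for (u, v) in S for e in (u, v)})
        for t in range(bound + 1):
            for g in range(bound + 1):
                for h in range(bound + 1):
                    # both divisibility provisos of condition (b)
                    if is_multiple(g, d) and is_multiple(h + k, d):
                        yield ([box(t, "x")] + prem, box(t + g, "x"))
                        yield (prem, box(h, const))

# list a few instances for the (hypothetical) values m = 0, n = 1, k = 1
for rule in islice(instances_of_b(0, 1, 1), 3):
    print(rule)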

Proof. First observe that $\square ^{m}x \equiv _{\vdash } \square ^{n+1}x$ implies $\square ^{m}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+1}x$ . Consequently, for every formula $\varphi $ there are some $p \in \textit {Var} \cup \{ \textbf {c}_{i} \colon i \text { is an ordinal}< \alpha \}$ and some nonnegative integer $t \leqslant n$ such that $\varphi \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{t}p$ . In what follows, we will make repeated use of this observation.

To prove the “only if” part, suppose that $\vdash $ has an algebraic semantics. From Propositions 4.2 and 8.3 it follows $x, y, \square ^{t}x \vdash \square ^{t}y$ for all nonnegative integers $t \leqslant n$. Furthermore, as $\vdash $ is a graph-based logic with an algebraic semantics, one of conditions (i)–(iv) in Theorem 10.2 holds.

If condition (i) of Theorem 10.2 holds, then $x \vdash \square x$ and we are done.

Then suppose that condition (ii) or (iv) in Theorem 10.2 holds. In this case, $\vdash $ is either assertional or almost assertional. Thus, by Proposition 4.10, there is a formula $\psi (x)$ such that $y \vdash \psi (x)$ . As $\psi $ can be chosen of the form $\square ^{t}p$ for some $p \in \{ x \} \cup \{ \textbf {c}_{i} \colon i \text { is an ordinal}< \alpha \}$ and some nonnegative integer $t \leqslant n$ , condition (ii) in the statement holds.

It only remains to consider the case where condition (iii) in Theorem 10.2 holds. In this case, the rules in $\mathcal {S}_{k, i} \cup \mathcal {R}_{k, i} \cup \mathcal {I}_{k, i}$ are valid in $\vdash $ for some positive integer k and some ordinal $i < \alpha $ . Accordingly, conditions (a) and (b) in the statement hold. Therefore, to establish condition (iii) in the statement, it only remains to prove that $k \leqslant n$ . To this end, in view of Theorem 10.2, we can safely assume that k is the least positive integer t for which the rules in $\mathcal {S}_{t, i}$ are valid in $\vdash $ . Furthermore, let $n \geqslant s \in \omega $ be such that $\square ^{s}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{k}x$ . By substitution invariance, for every $t \in \omega $ ,

$$ \begin{align*} \square^{s+t}x \mathrel{\dashv\mkern1.5mu\vdash} \square^{k+t}x. \end{align*} $$

Then consider $t \in \omega $ . As the rules in $\mathcal {S}_{k, i}$ are valid in $\vdash $ , we obtain $x, \square ^{k+t}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{t} \textbf {c}_{i}, x$ . Together with the above display, this yields $x, \square ^{s+t}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{t} \textbf {c}_{i}, x$ . Consequently, the rules in $\mathcal {S}_{s, i}$ are also valid in $\vdash $ . By the minimality of k, we conclude that $k \leqslant s \leqslant n$ .

Then we turn to prove the “if” part. First, suppose that $x, y, \square ^{t}x \vdash \square ^{t}y$ for all nonnegative integers $t \leqslant n$. Since $m \leqslant n$ and $\square ^{m}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+1}x$, this implies that the rules in $\mathcal {U}$ are valid in $\vdash $. To conclude the proof, it suffices to show that each condition in the list (i)–(iii) forces $\vdash $ to possess an algebraic semantics.

By Theorem 10.2 this is true for (i). Then suppose that condition (ii) holds. As the rules in $\mathcal {U}$ are valid in $\vdash $ , (ii) and Proposition 4.10 imply that $\vdash $ is either assertional or almost assertional. Recall that assertional logics have an algebraic semantics by Proposition 4.4. Therefore we only need to consider the case where $\vdash $ is almost assertional. Since $\square ^{m}x \equiv _{\vdash } \square ^{n+1}x$ , for every $\varphi (v, \vec {z}) \in Fm$ ,

$$ \begin{align*} x, \varphi(\square^{m}x, \vec{z}) \mathrel{\dashv\mkern1.5mu\vdash} \varphi(\square^{n+1}x, \vec{z}), x. \end{align*} $$

Moreover, as $m \leqslant n$ , there is no formula $\varphi $ such that $\square ^{m}\varphi = \square ^{n+1}\varphi $ . Thus the set $\boldsymbol {\tau } := \{ \square ^{m}x \thickapprox \square ^{n+1}x \}$ satisfies conditions (i) and (ii) in Proposition 4.9. We conclude that condition (iv) in Theorem 10.2 holds, whence $\vdash $ has an algebraic semantics.

It only remains to consider the case where condition (iii) in the statement holds. By Theorem 10.2, in order to prove that $\vdash $ has an algebraic semantics, it suffices to show that the rules in $\mathcal {U} \cup \mathcal {S}_{k, i} \cup \mathcal {R}_{k, i} \cup \mathcal {I}_{k, i}$ are valid in $\vdash $ . We know that this is the case for the rules in $\mathcal {U}$ . Moreover, from (a) and $\square ^{m}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+1}x$ it follows that also the rules in $\mathcal {S}_{k, i}$ are valid in $\vdash $ . Therefore, it only remains to prove that the rules in $\mathcal {R}_{k, i} \cup \mathcal {I}_{k, i}$ are valid in $\vdash $ .

To this end, consider a generic rule in $\mathcal {R}_{k, i}$ :

$$ \begin{align*} \{ \square^{t}x \} \cup \{ \square^{u_{j}}x \colon s> j \in \omega\} \cup \{ \square^{v_{j}}x \colon s > j \in \omega \} \rhd \square^{t+g}x. \end{align*} $$

From the definition of $\mathcal {R}_{k, i}$ it follows that g is a multiple of

$$ \begin{align*} d := \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \}). \end{align*} $$

Define

$$ \begin{align*} \varGamma &:= \{ \square^{t}x \} \cup \{ \square^{u_{j}}x \colon s> j \in \omega\} \cup \{ \square^{v_{j}}x \colon s > j \in \omega \},\\ \varDelta & := \{ \square^{j}x \colon n + n-m+1 \geqslant j \in \omega \text{ and }\square^{j}x \mathrel{\dashv\mkern1.5mu\vdash} \gamma \text{ for some }\gamma \in \varGamma \}. \end{align*} $$

Since $\varGamma \vdash \varDelta $ , to conclude the proof, it suffices to prove $\varDelta \vdash \square ^{t+g}x$ . To this end, define

$$ \begin{align*} d^{\ast} := \textrm{gcd}(\{ v - u \colon \square^{u}x, \square^{v}x \in \varDelta \text{ and }u < v\}). \end{align*} $$

Claim 11.6. d is a multiple of $d^{\ast }$ .

Proof. By the definition of d and $d^{\ast }$ , it suffices to prove that $v_{j} - u_{j}$ is a multiple of $d^{\ast }$ , for every $s> j \in \omega $ . To this end, consider $s> j \in \omega $ . If $v_{j} \leqslant n + n-m+1$ , then $u_{j}, v_{j} \leqslant n + n-m+1$ , since $u_{j} < v_{j}$ . Consequently, $\square ^{u_{j}}x, \square ^{v_{j}} x \in \varDelta $ . Thus, $v_{j} - u_{j}$ is a multiple of $d^{\ast }$ by definition of $d^{\ast }$ .

Then suppose that $v_{j}> n + n - m + 1$ . In this case, there are $1 \leqslant p_{v} \in \omega $ and $n - m \geqslant q_{v} \in \omega $ such that

$$ \begin{align*} v_{j} = n + q_{v} + 1 + p_{v}( n- m+ 1 ). \end{align*} $$

Notice that $\square ^{m} x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+1}x$ yields $\square ^{v_{j}}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{m+q_{v}}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+q_{v}+1}x$ . Together with $\square ^{v_{j}}x \in \varGamma $ and $m+q_{v}, n+q_{v}+1 \leqslant n + n - m + 1$ , this implies

$$ \begin{align*} \square^{m + q_{v}}x, \square^{n+q_{v}+1}x \in \varDelta. \end{align*} $$

Notice that $m + q_{v} < n+q_{v}+1$ , since $m \leqslant n$ . Consequently,

$$ \begin{align*} n-m+1 = (n+q_{v}+1)-(m+q_{v}) \end{align*} $$

is a multiple of $d^{\ast }$ , by definition of $d^{\ast }$ . Thus, there is an $r_{1} \in \omega $ such that

(26) $$ \begin{align} d^{\ast} r_{1} = n-m+1. \end{align} $$

Now, if $u_{j} \leqslant n$ , then $\square ^{u_{j}}x \in \varDelta $ . Together with $\square ^{n+q_{v}+1}x \in \varDelta $ and $u_{j} \leqslant n < n + q_{v} +1$ , this implies that $d^{\ast }$ is a divisor of $n+ q_{v} + 1 - u_{j}$ . Thus, there is an $r_{2} \in \omega $ such that $d^{\ast }r_{2} = n+ q_{v} + 1 - u_{j}$ . Consequently,

$$ \begin{align*} v_{j} - u_{j} &= n + q_{v} + 1 + p_{v}( n- m+ 1 ) - u_{j} \\ &= (n + q_{v} + 1 - u_{j}) + p_{v}( n- m+ 1 ) \\ &= d^{\ast} (r_{2} + r_{1}p_{v}). \end{align*} $$

Hence, we conclude that $v_{j} - u_{j}$ is a multiple of $d^{\ast }$ , as desired.

Then we consider the case where $u_{j}> n$ . There are $1 \leqslant p_{u} \in \omega $ and $n - m \geqslant q_{u} \in \omega $ such that

$$ \begin{align*} u_{j} = n + q_{u} + 1 + p_{u}( n- m+ 1 ). \end{align*} $$

Furthermore, from $\square ^{u_{j}}x \in \varGamma $ it follows

$$ \begin{align*} \square^{m + q_{u}}x, \square^{n+q_{u}+1}x \in \varDelta. \end{align*} $$

Since $u_{j} < v_{j}$ , clearly $p_{u} \leqslant p_{v}$ . There are two cases: either $q_{u} \leqslant q_{v}$ or $q_{v} < q_{u}$ .

If $q_{u} \leqslant q_{v}$ , then

$$ \begin{align*} v_{j} - u_{j} = (p_{v} - p_{u}) (n - m + 1) + q_{v} - q_{u}. \end{align*} $$

We shall see that $q_{v} - q_{u}$ is a multiple of $d^{\ast }$ . If $q_{v} = q_{u}$ , this is obvious. Then suppose that $q_{v}> q_{u}$ . In this case, $n+q_{u}+1 < n+q_{v}+1$ . Together with $\square ^{n+q_{v}+1}x, \square ^{n+q_{u}+1}x \in \varDelta $ and $n+q_{v}+1 \leqslant n + n - m +1$ , this implies that $q_{v} - q_{u}$ is a multiple of $d^{\ast }$ by definition of $d^{\ast }$ . Together with (26) and the above display, this guarantees that $v_{j} - u_{j}$ is a multiple of $d^{\ast }$ .

It only remains to consider the case where $q_{v} < q_{u}$ . As $u_{j} < v_{j}$ , in this case $p_{u} < p_{v}$ . Consequently,

$$ \begin{align*} v_{j} - u_{j} = (p_{v}- p_{u}- 1)(n - m+1) + (n - m + 1 + q_{v} - q_{u}), \end{align*} $$

where $(n - m + 1 + q_{v} - q_{u}) \geqslant 0$ , since $q_{u} \leqslant n-m$ . In view of (26) and the above display, to conclude the proof it suffices to show that $n - m + 1 + q_{v} - q_{u}$ is a multiple of $d^{\ast }$ . To this end, recall that $\square ^{n+q_{v}+ 1}x, \square ^{m + q_{u}}x \in \varDelta $ . Together with $q_{u} \leqslant n-m \leqslant n < n +1$ , this implies that $n - m + 1 + q_{v} - q_{u} = (n+q_{v}+ 1)- (m + q_{u})$ is a multiple of $d^{\ast }$ .⊣
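
As a numerical sanity check of the arithmetic in Claim 11.6, d and $d^{\ast }$ can be computed directly. The following sketch makes the simplifying assumption that interderivability between formulas of the form $\square ^{j}x$ is exactly the relation generated by $\square ^{m}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+1}x$; the sample values in the example are hypothetical.

from functools import reduce
from math import gcd

def equivalent(e, f, m, n):
    # box^e(x) -||- box^f(x), assuming interderivability is generated by
    # box^m(x) -||- box^{n+1}(x), so exponents >= m may be shifted by n - m + 1
    return e == f or (e >= m and f >= m and (e - f) % (n - m + 1) == 0)

def check_claim(m, n, t, pairs):
    # Gamma consists of box^t(x) and the formulas box^{u_j}(x), box^{v_j}(x);
    # Delta collects the exponents j <= n + n - m + 1 equivalent to one of them
    exps = {t} | {e for (u, v) in pairs for e in (u, v)}
    delta = {j for j in range(n + (n - m + 1) + 1)
             if any(equivalent(j, e, m, n) for e in exps)}
    d = reduce(gcd, (v - u for (u, v) in pairs), 0)
    d_star = reduce(gcd, (v - u for u in delta for v in delta if u < v), 0)
    return d_star == 0 or d % d_star == 0   # "d is a multiple of d*"

# hypothetical sample values: m = 1, n = 3, t = 0, pairs (2, 9) and (0, 4)
print(check_claim(1, 3, 0, [(2, 9), (0, 4)]))   # prints True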

Now, let $n \geqslant t^{\ast } \in \omega $ be such that $\square ^{t} x\mathrel {\dashv \mkern 1.5mu\vdash } \square ^{t^{\ast }}x$ . As $\square ^{t}x \in \varGamma $ , we get $\square ^{t^{\ast }}x \in \varDelta $ . Furthermore, from the definition of $\varDelta $ it follows that the set

$$ \begin{align*} Z := \{ v - u \colon \square^{u}x, \square^{v}x \in \varDelta \text{ and }u < v\} \end{align*} $$

has cardinality $\leqslant (2n-m+2)^{2}$ . Lastly, $d^{\ast } \leqslant 2n - m +1$ . To prove this, observe that if $Z = \emptyset $ , then $d^{\ast } = 0$ and we are done. Otherwise, $d^{\ast }$ is a divisor of $v - u$ for some $u, v \leqslant 2n -m +1$ . Consequently, $d^{\ast } \leqslant 2n - m +1$ , as desired.

Then we can apply assumption (b), obtaining $\varDelta \vdash \square ^{t^{\ast }+ d^{\ast }}x$ . Recall that g is a multiple of d. Thus, by Claim 11.6 there is an $r \in \omega $ such that $rd^{\ast } = g$ . Since the rules in $\mathcal {U}$ are valid in $\vdash $ , we have $\square ^{t^{\ast }}x, \square ^{t^{\ast }+ d^{\ast }}x \vdash \square ^{t^{\ast }+rd^{\ast }}x$ and, therefore, $\varDelta \vdash \square ^{t^{\ast }+g}x$ . Together with $\square ^{t} x\mathrel {\dashv \mkern 1.5mu\vdash } \square ^{t^{\ast }}x$ , this implies $\varDelta \vdash \square ^{t+g}x$ . We conclude that the rules in $\mathcal {R}_{k, i}$ are valid in $\vdash $ .

It only remains to prove that the same holds for the rules in $\mathcal {I}_{k, i}$ . To this end, consider a generic rule in $\mathcal {I}_{k, i}$ :

$$ \begin{align*} \{ \square^{u_{j}}x \colon s> j \in \omega\} \cup \{ \square^{v_{j}}x \colon s > j \in \omega \} \rhd \square^{h}\textbf{c}_{i}. \end{align*} $$

From the definition of $\mathcal {I}_{k, i}$ it follows that $h+k$ is a multiple of

$$ \begin{align*} d := \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \}). \end{align*} $$

As k is positive, this implies that $d \geqslant 1$ .

Define

$$ \begin{align*} \varGamma &:= \{ \square^{u_{j}}x \colon s> j \in \omega\} \cup \{ \square^{v_{j}}x \colon s > j \in \omega \},\\ \nabla & := \{ \square^{j}x \colon n + n-m+1 \geqslant j \in \omega \text{ and }\square^{j}x \mathrel{\dashv\mkern1.5mu\vdash} \gamma \text{ for some }\gamma \in \varGamma \},\\ Z &:= \{ v - u \colon \square^{u}x, \square^{v}x \in \nabla \text{ and }u < v\},\\ d^{\ast} &:= \textrm{gcd}(Z). \end{align*} $$

Notice that the proof of Claim 11.6 can be repeated with these definitions, thus d is a multiple of $d^{\ast }$ . Consequently, as d is positive, $d^{\ast }$ is as well. In particular, this implies that $Z \ne \emptyset $ and, therefore, $\nabla \ne \emptyset $ .

We shall define a set of formulas $\varDelta $ as follows. If there is a $\square ^{u}x \in \nabla $ such that $u \geqslant m$ , then just set $\varDelta := \nabla $ . Otherwise,

$$ \begin{align*} \nabla \subseteq \{ \square^{0}x, \square^{1} x, \dots, \square^{m-1}x \}. \end{align*} $$

Since $\nabla \ne \emptyset $ , we can choose a formula $\square ^{t}x \in \nabla $ . Because of $Z \ne \emptyset $ and of the above display,

(27) $$ \begin{align} 1 \leqslant d^{\ast} \leqslant m-1 \leqslant n \text{ and }t \leqslant m-1. \end{align} $$

As the rules in $\mathcal {R}_{k, i}$ are valid in $\vdash $ , we get $\nabla \vdash \square ^{d^{\ast }+ t}x$ . Notice that $\square ^{t}x, \square ^{d^{\ast }+t}x \vdash \square ^{rd^{\ast }+t}x$ for all $r \in \omega $ , since the rules in $\mathcal {U}$ are valid in $\vdash $ . Consequently,

(28) $$ \begin{align} \nabla \vdash \square^{rd^{\ast} +t}x,\quad \text{for all }r \in \omega. \end{align} $$

By (27) and (28) there is a $u \in \omega $ such that $m \leqslant u \leqslant n+ n - m +1$ and $\nabla \vdash \square ^{u}x$ . Then set

$$ \begin{align*} \varDelta := \nabla \cup \{ \square^{r}x \colon n + n - m + 1 \geqslant r \in \omega \text{ and }\square^{u}x \vdash \square^{r}x \}. \end{align*} $$

This completes the definition of $\varDelta $ .

Observe that in both cases $\varGamma \vdash \nabla \vdash \varDelta $ . Thus, to conclude the proof, it suffices to show that $\varDelta \vdash \square ^{h}\textbf {c}_{i}$ . To this end, define

$$ \begin{align*} d^{\ast\ast} := \textrm{gcd}(\{ v - u \colon \square^{u}x, \square^{v}x \in \varDelta \text{ and }u < v\}). \end{align*} $$

Recall that d is a multiple of $d^{\ast }$ . Furthermore, as $\nabla \subseteq \varDelta $ , $d^{\ast }$ is clearly a multiple of $d^{\ast \ast }$ . Thus, d is also a multiple of $d^{\ast \ast }$ . Furthermore, $d^{\ast \ast }$ is positive, as d is.

Since $h+k$ is a multiple of d and $d^{\ast \ast }$ is a divisor of d, there is an $r_{1} \in \omega $ such that

(29) $$ \begin{align} h + k = r_{1}d^{\ast\ast}. \end{align} $$

Now, if $h \leqslant n + n - m + 1$, we can apply condition (b) to $\varDelta $, obtaining that $\varDelta \vdash \square ^{h}\textbf {c}_{i}$, as desired. Then suppose that $n + n - m + 1 < h$. In this case, there are $1 \leqslant p \in \omega $ and $n- m\geqslant q \in \omega $ such that

(30) $$ \begin{align} h = p(n-m+1) + n + q. \end{align} $$

By definition of $\varDelta $ there is an $m \leqslant u \in \omega $ such that $\square ^{u}x \in \varDelta $ . Since $\square ^{m}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+1}x$ and $m \leqslant u$ , there is an $n \geqslant r \in \omega $ such that

$$ \begin{align*} \square^{u}x \mathrel{\dashv\mkern1.5mu\vdash} \square^{r}x \mathrel{\dashv\mkern1.5mu\vdash} \square^{r + n - m + 1}x. \end{align*} $$

By the definition of $\varDelta $ , we obtain $\square ^{r}x, \square ^{r + n - m + 1}x \in \varDelta $ . Consequently, $n - m +1$ is a multiple of $d^{\ast \ast }$ , i.e., there is an $r_{2} \in \omega $ such that

$$ \begin{align*} r_{2}d^{\ast\ast} = n- m +1. \end{align*} $$

Together with (29) and (30) this implies

$$ \begin{align*} r_{1}d^{\ast\ast} = pr_{2}d^{\ast\ast} + n + q + k. \end{align*} $$

Consequently, $n+q+k$ is a multiple of $d^{\ast \ast }$ . Since $q \leqslant n-m$ , we can apply condition (b) to $\varDelta $ , obtaining that $\varDelta \vdash \square ^{n+q}\textbf {c}_{i}$ . Lastly, from $\square ^{m}x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+1}x$ and (30) it follows $\square ^{n+q}\textbf {c}_{i} \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{h}\textbf {c}_{i}$ , whence $\varDelta \vdash \square ^{h}\textbf {c}_{i}$ , as desired.⊣

We conclude with a characterization of (locally tabular) logics, formulated in languages comprising constant symbols only, with an algebraic semantics.

Proposition 11.7. Let $\vdash $ be a logic such that $\mathscr {L}_{\vdash }$ comprises constant symbols only. Then $\vdash $ has an algebraic semantics if and only if one of the following conditions holds:

  (i) Either $\emptyset \vdash x$ or $\emptyset \vdash \textbf {c}_{i}$ for some ordinal $i < \alpha ;$

  (ii) $x \vdash \textbf {c}_{i}$ and $x \vdash \textbf {c}_{j}$ for some ordinals $i < j < \alpha ;$

  (iii) $x \vdash \textbf {c}_{k}$ and $\textbf {c}_{i} \mathrel {\dashv \mkern 1.5mu\vdash } \textbf {c}_{j}$ for some ordinals $i, j, k < \alpha $ such that $i \ne j$.

Proof. To prove the “only if” part, suppose that $\vdash $ has an algebraic semantics. Then $\vdash $ is either assertional or almost assertional by Theorem 10.2. If $\vdash $ is assertional, then it has theorems, whence condition (i) holds. Then suppose that $\vdash $ is almost assertional. By Proposition 4.9 there is a set of equations $\boldsymbol {\tau }(x)$ such that

(31) $$ \begin{align} x, \varepsilon \mathrel{\dashv\mkern1.5mu\vdash} \delta, x,\quad \text{for all }\varepsilon \thickapprox \delta \in \boldsymbol{\tau}, \end{align} $$

and for which there is no substitution $\sigma $ such that $\sigma (\varepsilon ) = \sigma (\delta )$ for all $\varepsilon \thickapprox \delta \in \boldsymbol {\tau }$ . Because $\mathscr {L}_{\vdash }$ comprises constant symbols only, the latter condition implies that there are two ordinals $i < j < \alpha $ such that either $\textbf {c}_{i} \thickapprox \textbf {c}_{j} \in \boldsymbol {\tau }$ or, by symmetry, $\{ x \thickapprox \textbf {c}_{i}, x \thickapprox \textbf {c}_{j} \} \subseteq \boldsymbol {\tau }$ .

If $\{ x \thickapprox \textbf {c}_{i}, x \thickapprox \textbf {c}_{j} \} \subseteq \boldsymbol {\tau }$, then (31) yields $x \vdash \textbf {c}_{i}$ and $x \vdash \textbf {c}_{j}$. Therefore, condition (ii) holds and we are done. Then we consider the case where $\textbf {c}_{i} \thickapprox \textbf {c}_{j} \in \boldsymbol {\tau }$. By (31) we get $x, \textbf {c}_{i} \mathrel {\dashv \mkern 1.5mu\vdash } \textbf {c}_{j}, x$. Notice that, by substitution invariance, this yields $\textbf {c}_{i} \mathrel {\dashv \mkern 1.5mu\vdash } \textbf {c}_{j}$. Now, if $\vdash $ is trivial, then $x \vdash y$ and, therefore, $x \vdash \textbf {c}_{i}$. Accordingly, taking $k := i$, condition (iii) holds and we are done. Then suppose that $\vdash $ is nontrivial, i.e., that $x \nvdash y$. Since $\vdash $ is almost assertional, by Proposition 4.10 there is a formula $\psi (y)$ such that $x \vdash \psi (y)$. Since $x \nvdash y$, the formula $\psi (y)$ is not a variable. As $\mathscr {L}_{\vdash }$ comprises constant symbols only, we conclude that $\psi (y) = \textbf {c}_{k}$ for some ordinal $k < \alpha $. Therefore, $x \vdash \textbf {c}_{k}$, whence condition (iii) holds, as desired.

Then we turn to prove the “if” part. Suppose that one of conditions (i)–(iii) holds. Because of the poverty of the language in which $\vdash $ is formulated, it is easy to see that $x, y, \varphi (x, \vec {z}) \vdash \varphi (y, \vec {z})$ for every formula $\varphi (v, \vec {z})$. Moreover, by assumption there is a formula $\psi (y)$ such that $x \vdash \psi (y)$. Consequently, we can apply Proposition 4.10 obtaining that $\vdash $ is either assertional or almost assertional. If $\vdash $ is assertional, then it has an algebraic semantics by Proposition 4.4. Then suppose that $\vdash $ is almost assertional. As $\vdash $ lacks theorems, condition (i) fails. Then either (ii) or (iii) holds. In both cases, define $\boldsymbol {\tau } := \{ \textbf {c}_{i} \thickapprox \textbf {c}_{j} \}$. It is easy to see that conditions (i) and (ii) in Proposition 4.9 hold, whence $\vdash $ has an algebraic semantics.⊣

12 Computational aspects

Two of the most common ways to present a logic are by exhibiting either a class of matrices that induces the logic or a set of rules that axiomatizes it. In the latter case, the set of rules is sometimes called a Hilbert-style axiomatization of the logic. We close our journey by studying the decidability of the problem of determining whether a logic presented by any of these two methods has an algebraic semantics, cf. [Reference Moraschini34, Reference Moraschini36]. To this end, a rule $\varGamma \rhd \varphi $ is said to be finite when $\varGamma $ is finite. The aim of this section is to establish the following:

Theorem 12.1.

  (i) The problem of determining whether a logic presented by a finite set of finite matrices in a finite language has an algebraic semantics is decidable.

  (ii) The problem of determining whether a locally tabular logic presented by a finite set of finite rules in a finite language has an algebraic semantics is decidable.

  (iii) The problem of determining whether a logic presented by a finite set of finite rules in a finite language has an algebraic semantics is undecidable.

Proof of Theorem 12.1 (i). Let $\mathsf {M}$ be a finite set of finite matrices in a finite language. As the variety generated by the algebraic reducts of $\mathsf {M}$ is finitely generated, it is also locally finite [Reference Burris and Sankappanavar8, Theorem II.10.16]. Thus $\vdash _{\mathsf {M}}$ is locally tabular by Proposition 11.2.

In order to determine whether $\vdash _{\mathsf {M}}$ has an algebraic semantics, our algorithm checks first whether $\vdash _{\mathsf {M}}$ is graph-based or not. If $\vdash _{\mathsf {M}}$ is not graph-based, then it has an algebraic semantics in view of Proposition 11.3 and we are done. Otherwise, $\vdash _{\mathsf {M}}$ is graph-based and our algorithm checks whether its language comprises a unary symbol or not. If it does not, then $\vdash _{\mathsf {M}}$ has an algebraic semantics if and only if one of conditions (i)–(iii) in Proposition 11.7 holds. Notice that this can be checked mechanically, as $\mathsf {M}$ is a finite set of finite matrices in a finite language. Therefore, it only remains to consider the case where the language of $\vdash _{\mathsf {M}}$ comprises a unary symbol. As $\mathsf {M}$ is a finite set of finite matrices, our algorithm can find two nonnegative integers $m \leqslant n$ such that $\mathsf {M} \vDash \square ^{m}x \thickapprox \square ^{n+1}x$. Consequently, $\square ^{m}x \equiv _{\vdash _{\mathsf {M}}} \square ^{n+1}x$ and, by Proposition 11.5, $\vdash _{\mathsf {M}}$ has an algebraic semantics if and only if $x, y, \square ^{t}x \vdash _{\mathsf {M}} \square ^{t}y$ for all nonnegative integers $t \leqslant n$ and one of conditions (i)–(iii) of that proposition holds. But, again, this can be checked mechanically, as $\mathsf {M}$ is a finite set of finite matrices in a finite language.⊣
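
For instance, in the case where the language comprises a unary symbol, the search for the integers m and n amounts to computing the eventual period of the operation $\square ^{\boldsymbol {A}}$ on each finite algebra in $\mathsf {M}$. A minimal sketch follows, under the assumption that each such operation is given as a Python dictionary.

from math import gcd

def box_exponents(box_tables):
    # Given the table of the unary connective "box" of each algebra in M
    # (as a dict element -> element), return nonnegative integers m <= n
    # with box^m(x) = box^{n+1}(x) valid in every algebra: m is the largest
    # preperiod of an orbit and n + 1 - m the least common period.
    m, period = 0, 1
    for table in box_tables:
        for a in table:
            seen, b, i = {}, a, 0
            while b not in seen:
                seen[b] = i
                b, i = table[b], i + 1
            tail, cycle = seen[b], i - seen[b]
            m = max(m, tail)
            period = period * cycle // gcd(period, cycle)   # lcm of cycles
    return m, m + period - 1

# toy example: a three-element algebra where box maps 0 -> 1 -> 2 -> 1
print(box_exponents([{0: 1, 1: 2, 2: 1}]))   # prints (1, 2)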

Proof of Theorem 12.1 (ii). In order to determine whether a locally tabular logic $\vdash $ , presented by a finite set of finite rules in a finite language, has an algebraic semantics, our algorithm checks first whether $\vdash $ is graph-based. If $\vdash $ is not graph-based, then it has an algebraic semantics by Proposition 11.3.

Suppose then that $\vdash $ is graph-based. In view of Theorem 12.1(i), to determine whether $\vdash $ has an algebraic semantics, our algorithm needs only to produce a finite set of finite matrices that induces $\vdash $ . To this end, we shall first detail the case where $\vdash $ has a unary connective $\square $ . In this case, the algorithm finds two nonnegative integers $m \leqslant n$ such that $\square ^{m} x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+1}x$ . Notice that n and m exist because $\vdash $ is locally tabular, and that we can find them mechanically by enumerating all proofs obtained from the Hilbert-style calculus axiomatizing $\vdash $ .

Now, observe that, up to isomorphism, there are only finitely many $(2^{n+1}+1)$ -generated $\mathscr {L}_{\vdash }$ -algebras $\boldsymbol {A}$ such that $\boldsymbol {A} \vDash \square ^{m}x \thickapprox \square ^{n+1}x$ . Furthermore, these algebras $\boldsymbol {A}_{1}, \dots , \boldsymbol {A}_{m}$ can be produced mechanically. As $\vdash $ is presented by a finite set of finite rules, our algorithm can construct the following finite set of finite matrices:

$$ \begin{align*} \mathsf{M} := \{ \langle \boldsymbol{A}_{i}, F \rangle \colon m \geqslant i \in \omega \text{ and the logic induced by }\langle \boldsymbol{A}_{i}, F \rangle \text{ extends }\vdash \}. \end{align*} $$

Therefore, to conclude the proof, it only remains to show that $\vdash $ is induced by $\mathsf {M}$ .

That $\vdash _{\mathsf {M}}$ is an extension of $\vdash $ follows immediately from the definition of $\mathsf {M}$ . To prove the converse, consider $\varGamma \cup \{ \varphi \} \subseteq Fm$ such that $\varGamma \nvdash \varphi $ . Observe that every formula $\delta \in Fm$ has the form $\square ^{t}p$ for some $t \in \omega $ and some p that is either a constant or a variable. As $\square ^{m} x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+1}x$ , there is an $n \geqslant k \in \omega $ such that $\delta \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{k}p$ . Accordingly, set $\delta ^{\ast } := \square ^{k}p$ where k is the least natural number such that $\delta \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{k}p$ . Clearly, $\delta \mathrel {\dashv \mkern 1.5mu\vdash } \delta ^{\ast }$ . Similarly, given a set of formulas $\varGamma $ , define

$$ \begin{align*} \varGamma^{\ast} := \{ \gamma^{\ast} \colon \gamma \in \varGamma \}. \end{align*} $$

From $\varGamma \nvdash \varphi $ it follows $\varGamma ^{\ast } \nvdash \varphi ^{\ast }$ .

Now, we define an equivalence relation R on the set

$$ \begin{align*} \textit{Var}(\varGamma^{\ast}) := \bigcup_{\gamma \in \varGamma^{\ast}} \textit{Var}(\gamma), \end{align*} $$

by the rule

$$ \begin{align*} \langle x, y \rangle \in R \Longleftrightarrow \{ k \in \omega \colon \square^{k}x \in \varGamma^{\ast} \} = \{ k \in \omega \colon \square^{k}y \in \varGamma^{\ast} \}. \end{align*} $$

Observe that $\textit {Var}(\varGamma ^{\ast }) / R$ has at most $2^{n+1}$ equivalence classes: since $\varGamma ^{\ast }$ contains only formulas of the form $\square ^{t}p$ where $n \geqslant t \in \omega $ and p is either a constant or a variable, each equivalence class is determined by a subset of $\{ 0, \dots , n \}$, and there are $2^{n+1}$ such subsets. Accordingly, we can choose representatives $v_{1}, \dots , v_{k}$ with $k \leqslant 2^{n+1}$ of the equivalence classes in $\textit {Var}(\varGamma ^{\ast }) / R$ in such a way that if $\textit {Var}(\varphi ) = \{ x \} \subseteq \textit {Var}(\varGamma ^{\ast })$ for some variable x, then the representative of $x / R$ is x. Furthermore, we consider the unique substitution $\sigma $ defined for every $y \in \textit {Var}$ as

$$ \begin{align*} \sigma(y) := \begin{cases} y & \text{if}\ y \notin \textit{Var}(\varGamma^{\ast}),\\ v_{i} & \text{if}\ y \in \textit{Var}(\varGamma^{\ast})\ \text{and}\ \langle y, v_{i}\rangle \in R.\\ \end{cases} \end{align*} $$

Notice that $\sigma (\varphi ^{\ast }) = \varphi ^{\ast }$ .
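
The relation R and the substitution $\sigma $ can be computed directly from $\varGamma ^{\ast }$. The following is a minimal sketch, under the assumption that a formula $\square ^{k}p$ is encoded as a pair (k, p) and atomic formulas as strings; listing the variable of $\varphi ^{\ast }$ first guarantees that it represents its own R-class.

def canonical_substitution(gamma_star, variables):
    # two variables are R-related when they occur under exactly the same
    # box-exponents in Gamma*; sigma sends each variable to the chosen
    # representative of its R-class (the first variable with that profile)
    profile = {x: frozenset(k for (k, p) in gamma_star if p == x)
               for x in variables}
    representative = {}            # profile -> chosen representative
    sigma = {}
    for x in variables:
        sigma[x] = representative.setdefault(profile[x], x)
    return sigma

# toy example: Gamma* = { box^0 x, box^2 x, box^0 y, box^2 y, box^1 z }
gamma_star = [(0, "x"), (2, "x"), (0, "y"), (2, "y"), (1, "z")]
print(canonical_substitution(gamma_star, ["x", "y", "z"]))
# prints {'x': 'x', 'y': 'x', 'z': 'z'}: x and y are R-related, z is alone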

We shall prove that

(32) $$ \begin{align} \sigma[\varGamma^{\ast}] \subseteq \varGamma^{\ast}. \end{align} $$

To this end, consider $\gamma \in \varGamma ^{\ast }$ . As $\vdash $ is graph-based, there are two cases: either $\textit {Var}(\gamma ) = \emptyset $ or $\textit {Var}(\gamma )$ is a singleton. If $\textit {Var}(\gamma ) = \emptyset $ , then $\gamma $ is a closed formula, whence $\sigma (\gamma ) = \gamma \in \varGamma ^{\ast }$ , as desired. Then we consider the case where $\textit {Var}(\gamma ) = \{ x \}$ for some $x \in \textit {Var}$ . By definition of $\varGamma ^{\ast }$ , there is an $n \geqslant t \in \omega $ such that $\gamma = \square ^{t}x$ . Considering the unique $k \geqslant i \in \omega $ such that $\langle x, v_{i}\rangle \in R$ , we obtain $\square ^{t}v_{i} \in \varGamma ^{\ast }$ . Consequently, $\sigma (\gamma ) = \square ^{t}\sigma (x) = \square ^{t}v_{i} \in \varGamma ^{\ast }$ , establishing (32).

Furthermore, as $\vdash $ is graph-based and substitution invariant, from $\square ^{m} x \mathrel {\dashv \mkern 1.5mu\vdash } \square ^{n+1}x$ we obtain $\varphi (\square ^{m}x, \vec {z}) \mathrel {\dashv \mkern 1.5mu\vdash } \varphi (\square ^{n+1}x, \vec {z})$ for all $\varphi (v, \vec {z}) \in Fm$ and, therefore,

$$ \begin{align*} \square^{m}x \equiv_{\vdash}\square^{n+1}x. \end{align*} $$

Consequently, $\vdash $ is induced by a class of matrices $\mathsf {N}$ validating the equation $\square ^{m}x \thickapprox \square ^{n+1}x$ by Lemma 2.7.

From $\varGamma ^{\ast } \nvdash \varphi ^{\ast }$ and (32) it follows $\sigma [\varGamma ^{\ast }] \nvdash \varphi ^{\ast }$ . Therefore, since $\mathsf {N}$ is a matrix semantics for $\vdash $ , there are a matrix $\langle \boldsymbol {B}, G \rangle \in \mathsf {N}$ and a homomorphism $h \colon \boldsymbol {Fm} \to \boldsymbol {B}$ such that $h[\sigma [\varGamma ^{\ast }]] \subseteq G$ and $h(\varphi ^{\ast }) \notin G$ . Let $\boldsymbol {A}$ be the subalgebra of $\boldsymbol {B}$ generated by $h(v_{1}), \dots , h(v_{k})$ if $\textit {Var}(\varphi ^{\ast }) = \emptyset $ and by $h(v_{1}), \dots , h(v_{k}), h(x)$ if $\textit {Var}(\varphi ^{\ast }) = \{ x \}$ for some variable x. As $k \leqslant 2^{n+1}$ , in both cases $\boldsymbol {A}$ is a $(2^{n+1}+1)$ -generated algebra. Moreover, as $\langle \boldsymbol {B}, G \rangle \in \mathsf {N}$ and $\vdash $ is the logic induced by $\mathsf {N}$ , the logic induced by $\langle \boldsymbol {A}, G \cap A \rangle $ is an extension of $\vdash $ . Lastly, $\boldsymbol {A}$ validates $\square ^{m}x \thickapprox \square ^{n+1}x$ , as $\boldsymbol {B}$ does. Consequently, $\langle \boldsymbol {A}, G \cap A \rangle \cong \langle \boldsymbol {A}_{i}, F_{i}\rangle $ for some $\ m \geqslant i \in \omega $ . For the sake of simplicity, we shall assume that $\langle \boldsymbol {A}, G \cap A \rangle = \langle \boldsymbol {A}_{i}, F_{i}\rangle $ and, therefore, $\langle \boldsymbol {A}, G \cap A \rangle \in \mathsf {M}$ . By considering any homomorphism $g \colon \boldsymbol {Fm} \to \boldsymbol {A}$ that coincides with h on the variables occurring in $\sigma [\varGamma ^{\ast }]$ and $\varphi ^{\ast }$ , we obtain $g[\sigma [\varGamma ^{\ast }]] \subseteq G \cap A$ and $g(\varphi ^{\ast }) \notin G \cap A$ . Since $\langle \boldsymbol {A}, G \cap A \rangle \in \mathsf {M}$ , this yields

$$ \begin{align*} \sigma[\varGamma^{\ast}] \nvdash_{\mathsf{M}} \varphi^{\ast}. \end{align*} $$

Now, recall that $\sigma (\varphi ^{\ast }) = \varphi ^{\ast }$ . Thus, from the above display it follows $\sigma [\varGamma ^{\ast }] \nvdash _{\mathsf {M}} \sigma (\varphi ^{\ast })$ . As $\vdash _{\mathsf {M}}$ is substitution invariant, this implies $\varGamma ^{\ast } \nvdash _{\mathsf {M}} \varphi ^{\ast }$ . Finally, by $\mathsf {M} \vDash \square ^{m}x \thickapprox \square ^{n+1}x$ we obtain $\square ^{m}x \mathrel {\dashv \mkern 1.5mu\vdash }_{\mathsf {M}} \square ^{n+1}x$ , whence $\varGamma \mathrel {\dashv \mkern 1.5mu\vdash }_{\mathsf {M}} \varGamma ^{\ast }$ and $\varphi \mathrel {\dashv \mkern 1.5mu\vdash }_{\mathsf {M}} \varphi ^{\ast }$ . Together with $\varGamma ^{\ast } \nvdash _{\mathsf {M}} \varphi ^{\ast }$ , this implies $\varGamma \nvdash _{\mathsf {M}}\varphi $ . Hence, $\vdash $ is an extension of $\vdash _{\mathsf {M}}$ , as desired. This concludes the proof that $\vdash $ is the logic induced by $\mathsf {M}$ .

It only remains to consider the case where $\vdash $ is a graph-based logic whose language comprises only constant symbols. But this case is handled similarly to the previous one (hint: repeat the proof by taking $n = m = 0$ ).⊣

Problem 3. Investigate the complexity of the decision problems mentioned in conditions (i) and (ii) of Theorem 12.1.Footnote 6

The remaining part of this section is devoted to the proof of condition (iii) in Theorem 12.1. To this end, we assume the reader is familiar with computability theory, and sketch the basic definitions only to fix some terminology and conventions.

By a Turing machine ${\textbf { M}}$ we understand a tuple $\langle P, Q, q_{0}, \delta \rangle $ where P and Q are sets of “states,” $q_{0} \in Q$ is the initial state, Q the set of nonfinal states, P the set of final states, and

$$ \begin{align*} \delta \colon Q \times \{ 0, 1, \emptyset \} \to (Q \cup P) \times \{ 0, 1 \} \times \{ L, R \}. \end{align*} $$

Instructions of the form $\delta (q, a) = \langle q', b, L\rangle $ (resp. $\delta (q, a)=\langle q', b, R\rangle $) should be understood as follows: if the machine ${\textbf { M}}$ reads a at state q, then it replaces a with b, moves left (resp. right), and switches to state $q'$.

Our Turing machines work on tapes that are infinite both to the left and to the right, can write only zeros and ones, but can read $0$ , $1$ , and the empty symbol $\emptyset $ . At the beginning of the computation, the empty symbol $\emptyset $ occupies all cells in the tape.

A configuration for a Turing machine ${\textbf { M}}$ is a tuple $\langle q, \vec {w}, v, \vec {u}\rangle $ where $q \in Q \cup P$, $\vec {w}$ and $\vec {u}$ are either finite nonempty sequences of zeros and ones or the one-element sequence $\langle \emptyset \rangle $, and $v \in \{ \langle 0 \rangle , \langle 1 \rangle , \langle \emptyset \rangle \}$ which, in addition, satisfies the following requirement: if $\vec {w}$ and $\vec {u}$ are different from $\langle \emptyset \rangle $, then so is v. Intuitively, the configuration $\langle q, \vec {w}, v, \vec {u}\rangle $ represents the instant in which ${\textbf { M}}$ is in state q, is reading the unique symbol in v, and the tape contains exactly the concatenation

$$ \begin{align*} \langle \dots, \emptyset, \emptyset, \emptyset \rangle^{\frown} \vec{w}^{\frown}v^{\frown}\vec{u}^{\frown} \langle \emptyset, \emptyset, \emptyset, \dots \rangle. \end{align*} $$

Given two configurations $\texttt {c}$ and $\texttt {d}$ for ${\textbf { M}}$, we say that $\texttt {c}$ yields $\texttt {d}$ if ${\textbf { M}}$ can move from $\texttt {c}$ to $\texttt {d}$ in a single step of computation.

An input is a finite sequence $\vec {t} = \langle t_{1}, \dots , t_{m} \rangle $ of zeros and ones such that $m \geqslant 2$ .Footnote 7 The initial configuration for ${\textbf { M}}$ under $\vec {t}$ is the tuple

$$ \begin{align*} \textrm{In}({\textbf{M}}, \vec{t}) := \langle q_{0}, \langle \emptyset \rangle, \langle t_{1}\rangle, \langle t_{2}, \dots, t_{m}\rangle \rangle. \end{align*} $$

Then ${\textbf { M}}$ is said to halt on $\vec {t}$ if there is a finite sequence $\texttt {c}_{1}, \dots , \texttt {c}_{n}$ of configurations for ${\textbf {M}}$ such that $\texttt {c}_{1} = \textrm {In}({\textbf {M}}, \vec {t}{\kern1pt})$ , the state in $\texttt {c}_{n}$ belongs to the set P of final states, and $\texttt {c}_{i}$ yields $\texttt {c}_{i+1}$ for every positive integer $i < n$ .

The halting problem asks to determine, given a Turing machine ${\textbf { M}}$ and an input $\vec {t}$ , whether ${\textbf { M}}$ halts on $\vec {t}$ . It is well known that this problem is undecidable, as shown by Turing in [Reference Turing42]. In the remaining part of the section we shall see that the halting problem reduces to that of determining whether a logic presented by a finite set of finite rules in a finite language has an algebraic semantics. It follows that the latter problem is also undecidable.

To this end, we shall associate a logic with each Turing machine as follows:

Definition 12.2. Let ${\textbf { M}} = \langle P, Q, q_{0}, \delta \rangle $ be a Turing machine.

  1. Let $\mathscr {L}({\textbf {M}})$ be the algebraic language whose set of constant symbols is

    $$ \begin{align*} P \cup Q \cup \{ 0, 1, \emptyset \}, \end{align*} $$
    and which comprises a binary connective $x \cdot y$ , and a ternary connective $\lambda (x, y, z)$ .
  2. Let $\vdash _{{\textbf { M}}}$ be the logic in $\mathscr {L}({\textbf {M}})$ axiomatized by the following Hilbert-style calculus:

    (R1) $$ \begin{align} q \cdot \lambda(x \cdot y, a, z) &\rhd q' \cdot \lambda(x, y, b \cdot z), \end{align} $$

    (R2) $$ \begin{align} \hat{q} \cdot \lambda(x, \hat{a}, y \cdot z) &\rhd \hat{q}' \cdot \lambda( x \cdot \hat{b}, y, z), \end{align} $$

    (R3) $$ \begin{align} p \cdot \lambda(x, y, z) \lhd &\rhd p \cdot \lambda(\emptyset \cdot x, y, z), \end{align} $$

    (R4) $$ \begin{align} p \cdot \lambda(x, y, z) \lhd &\rhd p \cdot \lambda(x, y, z \cdot \emptyset) \end{align} $$
    for every $p, q, q', \hat {q}, \hat {q}' \in P \cup Q$ and every $a, \hat {a}, b, \hat {b} \in \{ 0, 1, \emptyset \}$ such that
    $$ \begin{align*} \delta(q, a) = \langle q', b, L\rangle \text{ and } \delta(\hat{q}, \hat{a}) = \langle \hat{q}', \hat{b}, R\rangle. \end{align*} $$
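For instance, if $\delta (q, 1) = \langle q', 0, L\rangle $ , then among the instances of (R1) one finds the rule

$$ \begin{align*} q \cdot \lambda(x \cdot y, 1, z) \rhd q' \cdot \lambda(x, y, 0 \cdot z), \end{align*} $$

while, intuitively, the bidirectional rules (R3) and (R4) allow one to pad the first and third arguments of $\lambda $ , which encode the portions of the tape to the left and to the right of the head, with the empty symbol $\emptyset $ .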

Let $\texttt {c} = \langle q, \vec {w}, v, \vec {u}\rangle $ be a configuration for a Turing machine ${\textbf { M}}$ , where

$$ \begin{align*} \vec{w} = \langle w_{1}, \dots, w_{n}\rangle, v = \langle a \rangle, \text{ and }\vec{u} = \langle u_{1}, \dots, u_{m}\rangle. \end{align*} $$

We associate a formula of $\mathscr {L}({\textbf {M}})$ with $\texttt {c}$ as follows:

$$ \begin{align*} \varphi_{\texttt{c}} := q \cdot \lambda( ( \cdots (( w_{1} \cdot w_{2}) \cdot w_{3})\cdots ) \cdot w_{n}, a, u_{1}\cdot ( u_{2} \cdot ( \dots (u_{m-1} \cdot u_{m}) \dots ))). \end{align*} $$

If $n =1$ (resp. $m =1$ ), the expression $( \cdots (( w_{1} \cdot w_{2}) \cdot w_{3})\cdots ) \cdot w_{n}$ (resp. $u_{1}\cdot ( u_{2} \cdot ( \dots (u_{m-1} \cdot u_{m}) \dots ))$ ) in the above display should be understood as $w_{1}$ (resp. $u_{1}$ ).
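To illustrate the encoding, if $\vec {t} = \langle 1, 0, 1\rangle $ , then $\textrm {In}({\textbf {M}}, \vec {t}{\kern1pt}) = \langle q_{0}, \langle \emptyset \rangle , \langle 1\rangle , \langle 0, 1\rangle \rangle $ and the formula associated with this configuration is

$$ \begin{align*} \varphi_{\textrm{In}({\textbf{M}}, \vec{t}{\kern1pt})} = q_{0} \cdot \lambda(\emptyset, 1, 0 \cdot 1). \end{align*} $$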

Lemma 12.3. Let ${\textbf { M}}$ be a Turing machine with configurations ${\texttt {c}}$ and ${\texttt {d}}$ . If ${\texttt {c}}$ yields ${\texttt {d}}$ , then $\varphi _{{\texttt {c}}} \vdash _{{\textbf { M}}} \varphi _{{\texttt {d}}}$ .

Proof. Consider two configurations $\texttt {c}$ and $\texttt {d}$ for ${\textbf { M}}$ such that $\texttt {c}$ yields $\texttt {d}$ . By definition of configuration, $\texttt {c}$ has the form $\langle q, \vec {w}, v, \vec {u}\rangle $ for some state q and sequences

$$ \begin{align*} \vec{w} = \langle w_{1}, \dots, w_{n}\rangle, v = \langle a \rangle, \text{ and }\vec{u} = \langle u_{1}, \dots, u_{m}\rangle. \end{align*} $$

By symmetry, we can assume that $\delta (q, a) = \langle q', b, L\rangle $ for some state $q'$ and $b \in \{ 0, 1 \}$ . Since $\texttt {c}$ yields $\texttt {d}$ , the following holds:

  (i) If $n \geqslant 2$ and $u_{1} \ne \emptyset $ , then $\texttt {d} = \langle q', \langle w_{1}, \dots , w_{n-1}\rangle , \langle w_{n}\rangle , \langle b, u_{1}, \dots , u_{m}\rangle \rangle $ ;

  (ii) If $n = 1$ and $u_{1} \ne \emptyset $ , then $\texttt {d} = \langle q', \langle \emptyset \rangle , \langle w_{1}\rangle , \langle b, u_{1}, \dots , u_{m}\rangle \rangle $ ;

  (iii) If $n \geqslant 2$ and $u_{1} = \emptyset $ , then $\texttt {d} = \langle q', \langle w_{1}, \dots , w_{n-1}\rangle , \langle w_{n}\rangle , \langle b\rangle \rangle $ ;

  (iv) If $n =1$ and $u_{1} = \emptyset $ , then $\texttt {d} = \langle q', \langle \emptyset \rangle , \langle w_{1}\rangle , \langle b\rangle \rangle $ .

Accordingly, first suppose that $n \geqslant 2$ and $u_{1} \ne \emptyset $ . By (i) we can apply rule (R1) to $\varphi _{\texttt {c}}$ , obtaining $\varphi _{\texttt {c}} \vdash _{{\textbf { M}}} \varphi _{\texttt {d}}$ .

Then we consider the case where $n = 1$ and $u_{1} \ne \emptyset $ . In this case,

$$ \begin{align*} \varphi_{\texttt{c}} = q \cdot \lambda( w_{1}, a, u_{1}\cdot ( u_{2} \cdot ( \dots (u_{m-1} \cdot u_{m}) \dots ))). \end{align*} $$

Therefore, applying rule (R3) to $\varphi _{\texttt {c}}$ , we get

$$ \begin{align*} \varphi_{\texttt{c}} \vdash_{{\textbf{ M}}}q \cdot \lambda( \emptyset \cdot w_{1}, a, u_{1}\cdot ( u_{2} \cdot ( \dots (u_{m-1} \cdot u_{m}) \dots ))). \end{align*} $$

Furthermore, by (ii) we can apply rule (R1) to the right-hand side of the above display, obtaining

$$ \begin{align*} q \cdot \lambda( \emptyset \cdot w_{1}, a, u_{1}\cdot ( u_{2} \cdot ( \dots (u_{m-1} \cdot u_{m}) \dots ))) \vdash_{{\textbf{ M}}} \varphi_{\texttt{d}}. \end{align*} $$

Consequently, $\varphi _{\texttt {c}} \vdash _{{\textbf { M}}} \varphi _{\texttt {d}}$ , as desired.

Consider then the case where $n \geqslant 2$ and $u_{1} = \emptyset $ . Since $u_{1} = \emptyset $ , the fact that $\texttt {c}$ is a configuration implies $m = 1$ . Consequently,

$$ \begin{align*} \varphi_{\texttt{c}} = q \cdot \lambda( ( \cdots (( w_{1} \cdot w_{2}) \cdot w_{3}) \cdots ) \cdot w_{n}, a, \emptyset). \end{align*} $$

By applying to the above formula the rule (R1), we obtain

$$ \begin{align*} \varphi_{\texttt{c}} \vdash_{{\textbf{ M}}} q' \cdot \lambda( ( \cdots (( w_{1} \cdot w_{2}) \cdot w_{3})\cdots ) \cdot w_{n-1}, w_{n}, b \cdot \emptyset). \end{align*} $$

By (iii), applying the rule (R4) to the right-hand side of the above display, we get

$$ \begin{align*} q' \cdot \lambda( ( \cdots (( w_{1} \cdot w_{2}) \cdot w_{3}) \cdots ) \cdot w_{n-1}, w_{n}, b \cdot \emptyset) \vdash_{{\textbf{ M}}} \varphi_{\texttt{d}}. \end{align*} $$

Consequently, $\varphi _{\texttt {c}} \vdash _{{\textbf { M}}} \varphi _{\texttt {d}}$ .

It only remains to consider the case where $n =1$ and $u_{1} = \emptyset $ . As in the previous case, we get $m = 1$ . Consequently,

$$ \begin{align*} \varphi_{\texttt{c}} = q \cdot \lambda(w_{1}, a, \emptyset). \end{align*} $$

By applying to the above formula the rule (R3), we obtain

$$ \begin{align*} \varphi_{\texttt{c}} \vdash_{{\textbf{ M}}}q \cdot \lambda(\emptyset \cdot w_{1}, a, \emptyset). \end{align*} $$

Furthermore, applying the rule (R1) to the right-hand side of the above display, we get

$$ \begin{align*} q \cdot \lambda(\emptyset \cdot w_{1}, a, \emptyset) \vdash_{{\textbf{ M}}} q' \cdot \lambda( \emptyset, w_{1}, b \cdot \emptyset). \end{align*} $$

By (iv), applying the rule (R4) to the right-hand side of the above display, we have

$$ \begin{align*} q' \cdot \lambda( \emptyset, w_{1}, b \cdot \emptyset) \vdash_{{\textbf{ M}}}\varphi_{\texttt{d}}. \end{align*} $$

Hence we conclude that, also in this case, $\varphi _{\texttt {c}} \vdash _{{\textbf { M}}} \varphi _{\texttt {d}}$ .⊣

We shall also associate a logic with every pair consisting of a Turing machine and an input.

Definition 12.4. Let ${\textbf { M}}$ be a Turing machine and $\vec {t}$ an input.

  1. Let $\mathscr {L}({\textbf {M}}, \vec {t}{\kern1pt})$ be $\mathscr {L}({\textbf {M}})$ extended with a new binary connective $x \to y$ .

  2. Let also $\vdash _{{\textbf { M}}}^{\vec {t}}$ be the logic axiomatized by the sets of rules (R1), …, (R4) plus

    (R5) $$ \begin{align} \emptyset &\rhd \varphi_{\text{In}({\textbf{M}}, \vec{t})}, \end{align} $$

    (R6) $$ \begin{align} p \cdot y & \rhd x \to (x \cdot x), \end{align} $$

    (R7) $$ \begin{align} \emptyset & \rhd x \to x, \end{align} $$

    (R8) $$ \begin{align} x, x \to y & \rhd y, \end{align} $$

    (R9) $$ \begin{align} x_{1} \to y_{1}, \dots, x_{n} \to y_{n} & \rhd \ast(x_{1}, \dots, x_{n}) \to \ast(y_{1}, \dots, y_{n}) \end{align} $$
    for every $p \in P$ , every positive integer n, and every n-ary connective $\ast $ .

Lemma 12.5. Let ${\textbf { M}}$ be a Turing machine and $\vec {t}$ an input. For every pair of formulas $\varepsilon , \delta \in Fm$ ,

$$ \begin{align*} \emptyset \vdash_{{\textbf{ M}}}^{\vec{t}} \varepsilon \to \delta \Longleftrightarrow \varepsilon \text{ and }\delta \text{ are logically equivalent in }\vdash_{{\textbf{ M}}}^{\vec{t}}. \end{align*} $$

Proof. To prove the implication from right to left, observe that, taking $\varphi := \varepsilon \to v$ , the assumption implies $\varepsilon \to \varepsilon \vdash _{{\textbf { M}}}^{\vec {t}} \varepsilon \to \delta $ . By (R7) this yields $\emptyset \vdash _{{\textbf { M}}}^{\vec {t}} \varepsilon \to \delta $ , as desired.

Then we turn to prove the implication from left to right. First observe that from the rule (R9) it follows

$$ \begin{align*} x \to y, x \to x \vdash_{{\textbf{ M}}}^{\vec{t}} (x \to x) \to (y \to x). \end{align*} $$

By rule (R7) this simplifies to

$$ \begin{align*} x \to y \vdash_{{\textbf{ M}}}^{\vec{t}} (x \to x) \to (y \to x). \end{align*} $$

Finally, from (R8) we get $x \to x, (x \to x) \to (y \to x) \vdash _{{\textbf { M}}}^{\vec {t}} y \to x$ , which simplifies to $(x \to x) \to (y \to x) \vdash _{{\textbf { M}}}^{\vec {t}} y \to x$ by (R7). Together with the above display, this implies

(33) $$ \begin{align} x \to y \vdash_{{\textbf{ M}}}^{\vec{t}} y \to x. \end{align} $$

Now, suppose that $\emptyset \vdash _{{\textbf { M}}}^{\vec {t}} \varepsilon \to \delta $ and consider a formula $\varphi (v, \vec {z})$ . By repeatedly applying rule (R9) and using the assumption $\emptyset \vdash _{{\textbf { M}}}^{\vec {t}} \varepsilon \to \delta $ , it is not hard to see that $\emptyset \vdash _{{\textbf { M}}}^{\vec {t}} \varphi (\varepsilon , \vec {z}) \to \varphi (\delta , \vec {z})$ . By (33) we get

$$ \begin{align*} \emptyset \vdash_{{\textbf{ M}}}^{\vec{t}} \varphi(\varepsilon, \vec{z}) \to \varphi(\delta, \vec{z}), \varphi(\delta, \vec{z}) \to \varphi(\varepsilon, \vec{z}). \end{align*} $$

Together with (R8) this implies $\varphi (\varepsilon , \vec {z})\mathrel {\dashv \mkern 1.5mu\vdash }_{{\textbf { M}}}^{\vec {t}} \varphi (\delta , \vec {z})$ . It follows that $\varepsilon $ and $\delta $ are logically equivalent in $\vdash _{{\textbf { M}}}^{\vec {t}}$ .⊣

Corollary 12.6. Let ${\textbf { M}}$ be a Turing machine and $\vec {t}$ an input. The logic $\vdash _{{\textbf { M}}}^{\vec {t}}$ has an algebraic semantics if and only if $\emptyset \vdash _{{\textbf { M}}}^{\vec {t}} \varepsilon \to \delta $ for some distinct $\varepsilon ,\delta \in Fm$ .

Proof. Suppose first that $\vdash _{{\textbf { M}}}^{\vec {t}}$ is trivial. As $\vdash _{{\textbf { M}}}^{\vec {t}}$ has theorems by rule (R7), this means that it is inconsistent. Consequently, it has an algebraic semantics by Proposition 5.2. Moreover, $\emptyset \vdash _{{\textbf { M}}}^{\vec {t}} \varepsilon \to \delta $ for some (every) pair of distinct $\varepsilon ,\delta \in Fm$ .

Then we consider the case where $\vdash _{{\textbf { M}}}^{\vec {t}}$ is nontrivial. Observe that $\vdash _{{\textbf { M}}}^{\vec {t}}$ is protoalgebraic by rules (R7) and (R8). Therefore, in view of Theorem 9.3, $\vdash _{{\textbf { M}}}^{\vec {t}}$ has an algebraic semantics if and only if there are distinct logically equivalent formulas $\varepsilon $ and $\delta $ . By Lemma 12.5 this happens precisely when $\emptyset \vdash _{{\textbf { M}}}^{\vec {t}} \varepsilon \to \delta $ .⊣

The next result is the cornerstone of the argument.

Proposition 12.7. A Turing machine ${\textbf { M}}$ halts on an input $\vec {t}$ if and only if the logic $\vdash _{\textbf { M}}^{\vec {t}}$ has an algebraic semantics.

Proof. First suppose that ${\textbf { M}}$ halts on $\vec {t}$ . Then there is a finite sequence of configurations $\texttt {c}_{1}, \dots , \texttt {c}_{n}$ such that $\texttt {c}_{1} = \textrm {In}({\textbf {M}}, \vec {t}{\kern1pt})$ , the state in $\texttt {c}_{n}$ is final, and $\texttt {c}_{i}$ yields $\texttt {c}_{i +1}$ for every positive integer $i < n$ . By Lemma 12.3 we get

$$ \begin{align*} \varphi_{\texttt{c}_{1}} \vdash_{\textbf{ M}}^{\vec{t}} \varphi_{\texttt{c}_{2}} \vdash_{\textbf{ M}}^{\vec{t}} \dots \vdash_{\textbf{ M}}^{\vec{t}} \varphi_{\texttt{c}_{n}}. \end{align*} $$

Together with (R5) and $\texttt {c}_{1} = \textrm {In}({\textbf {M}}, \vec {t} )$ , this implies $\emptyset \vdash _{\textbf { M}}^{\vec {t}} \varphi _{\texttt {c}_{n}}$ . Thus, there is a formula $\psi $ and a final state $p \in P$ such that $\emptyset \vdash _{\textbf { M}}^{\vec {t}} p \cdot \psi $ . By rule (R6) we obtain

$$ \begin{align*} \emptyset \vdash_{\textbf{ M}}^{\vec{t}} x \to (x \cdot x). \end{align*} $$

Hence, Corollary 12.6 implies that $\vdash _{\textbf { M}}^{\vec {t}}$ has an algebraic semantics.

To prove the “if” part, we reason by contraposition. Suppose that ${\textbf { M}}$ does not halt on $\vec {t}$ . Then there is an infinite sequence of configurations $\texttt {c}_{1}, \texttt {c}_{2}, \texttt {c}_{3}, \dots , \texttt {c}_{n}, \dots $ whose states are not final such that $\texttt {c}_{1} = \textrm {In}({\textbf {M}}, \vec {t}{\kern1pt})$ , and $\texttt {c}_{i}$ yields $\texttt {c}_{i+1}$ for every positive integer i.

We shall associate a set of formulas $C_{n}$ to every $\texttt {c}_{n}$ . To this end, consider a positive integer n and recall that $\texttt {c}_{n}$ has the form $\langle q, \vec {w}, v, \vec {u}\rangle $ for some state q and sequences

$$ \begin{align*} \vec{w} = \langle w_{1}, \dots, w_{k}\rangle, v = \langle a \rangle, \text{ and }\vec{u} = \langle u_{1}, \dots, u_{m}\rangle. \end{align*} $$

For every $i \in \omega $ , we define two formulas

$$ \begin{align*} \alpha_{i} &:= \underbrace{\emptyset \cdot (\emptyset \cdot (\cdots (\emptyset}_{i\text{-times}} \cdot (w_{1} \cdot ( \cdots ( w_{k-1} \cdot w_{k}) \cdots ))) \cdots )),\\ \beta_{i} &:= u_{1} \cdot (u_{2} \cdot (\cdots (u_{m} \cdot (\underbrace{\emptyset \cdot ( \cdots ( \emptyset \cdot \emptyset}_{i\text{-times}}) \cdots ))) \cdots )). \end{align*} $$

Furthermore, let $[\alpha _{i}]$ and $[\beta _{i}]$ be the sets of formulas equivalent, respectively, to $\alpha _{i}$ and $\beta _{i}$ under the assumption that $\cdot $ is associative. In other words, $[\alpha _{i}]$ (resp. $[\beta _{i}]$ ) is the set of formulas obtained by reordering the parentheses in $\alpha _{i}$ (resp. $\beta _{i}$ ). Bearing this in mind, we set

$$ \begin{align*} C_{n} := \{ q \cdot \lambda(\varepsilon, a, \delta ) \colon \text{there are }i, j \in \omega \text{ such that }\varepsilon \in [\alpha_{i}] \text{ and }\delta \in [\beta_{j}] \}. \end{align*} $$

Notice that $\varphi _{\texttt {c}_{n}} \in C_{n}$ .

Now, consider the sets

$$ \begin{align*} \varDelta := \{ q \cdot \lambda(\varepsilon, \gamma, \delta) \colon q \in Q \text{ and } \varepsilon, \gamma, \delta \in Fm \text{ are such that }\gamma \notin \{ 0, 1, \emptyset \} \} \end{align*} $$

and

$$ \begin{align*} \varGamma := \bigcup_{1 \leqslant n \in \omega} C_{n} \cup \varDelta \cup \{ \gamma \to \gamma \colon \gamma \in Fm \}. \end{align*} $$

Claim 12.8. The logic induced by $\langle \boldsymbol {Fm}, \varGamma \rangle $ extends $\vdash _{\textbf { M}}^{\vec {t}}$ .

Proof. To prove the claim, it suffices to show that the rules axiomatizing $\vdash _{\textbf { M}}^{\vec {t}}$ are valid in $\langle \boldsymbol {Fm}, \varGamma \rangle $ , as we proceed to do.

(R1–R2): We detail only the case of (R1) as the other one is analogous. Among the formulas in $\varGamma $ , the rules (R1) can be applied only to those in $\bigcup _{1 \leqslant n \in \omega }C_{n}$ . Then suppose that a rule $q \cdot \lambda (x \cdot y, a, z) \rhd q' \cdot \lambda (x, y, b \cdot z)$ in (R1) is applied to a formula in $C_{n}$ . In particular, this implies that q is the state in $\texttt {c}_{n}$ and $q'$ the state in $\texttt {c}_{n+1}$ . Consequently, $q' \in Q$ , i.e., $q'$ is not a final state. Moreover, observe that the result of the application of the rule is a formula of the form $q' \cdot \lambda (\varepsilon , \gamma , b \cdot \delta )$ where $q \cdot \lambda (\varepsilon \cdot \gamma , a, \delta ) \in C_{n}$ . Since $q'$ is not a final state, if $\gamma \notin \{ 0, 1, \emptyset \}$ , then $q' \cdot \lambda (\varepsilon , \gamma , b \cdot \delta ) \in \varDelta \subseteq \varGamma $ , and we are done. Then suppose that $\gamma \in \{ 0, 1, \emptyset \}$ . But in this case, $q' \cdot \lambda (\varepsilon , \gamma , b \cdot \delta ) \in C_{n+1} \subseteq \varGamma $ , since $q \cdot \lambda (\varepsilon \cdot \gamma , a, \delta ) \in C_{n}$ . Thus, we conclude that the rules (R1) are valid in $\langle \boldsymbol {Fm}, \varGamma \rangle $ .

(R3–R4): Among formulas in $\varGamma $ , these rules can be applied only to formulas in $\bigcup _{1 \leqslant n \in \omega }C_{n} \cup \varDelta $ . But this set is closed under the application of the rules by construction.

(R5): Recall that $\varphi _{\textrm {In}({\textbf {M}}, \vec {t} )} = \varphi _{\texttt {c}_{1}} \in C_{1} \subseteq \varGamma $ . Thus, the rule (R5) is valid in $\langle \boldsymbol {Fm}, \varGamma \rangle $ .

(R6): Recall that the states in $\texttt {c}_{1}, \texttt {c}_{2}, \texttt {c}_{3}, \dots , \texttt {c}_{n}, \dots $ are not final. Therefore, this rule cannot be applied to formulas in $\varGamma $ .

(R7): This rule is valid in $\langle \boldsymbol {Fm}, \varGamma \rangle $ because $\{ \gamma \to \gamma \colon \gamma \in Fm \} \subseteq \varGamma $ .

(R8–R9): Among formulas in $\varGamma $ , these rules can be applied only to formulas in $\{ \gamma \to \gamma \colon \gamma \in Fm \}$ . This makes the validity of both rules in $\langle \boldsymbol {Fm}, \varGamma \rangle $ straightforward.⊣

By Claim 12.8, if a formula is a theorem of $\vdash _{\textbf { M}}^{\vec {t}}$ , then it must belong to $\varGamma $ . In particular, by definition of $\varGamma $ , formulas of the form $\varepsilon \to \delta $ can be theorems of $\vdash _{\textbf { M}}^{\vec {t}}$ only if $\varepsilon = \delta $ . Hence, by Corollary 12.6 we conclude that $\vdash _{\textbf { M}}^{\vec {t}}$ does not have an algebraic semantics.⊣

Proof of Theorem 12.1 (iii). Observe that for every Turing machine ${\textbf { M}}$ and input $\vec {t}$ , the logic $\vdash _{\textbf { M}}^{\vec {t}}$ is presented by a finite set of finite rules in a finite language. Thus, the result follows from the undecidability of the halting problem and Proposition 12.7.⊣
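Implicit in this observation is the fact that a Hilbert-style presentation of $\vdash _{\textbf { M}}^{\vec {t}}$ can be effectively computed from the pair $\langle {\textbf {M}}, \vec {t}{\kern1pt}\rangle $ : the instances of (R1) and (R2) are read off from $\delta $ , the rule (R5) is determined by the initial configuration $\textrm {In}({\textbf {M}}, \vec {t}{\kern1pt})$ , and the remaining rules depend only on the finite language $\mathscr {L}({\textbf {M}}, \vec {t}{\kern1pt})$ . As a purely informal illustration of this effectiveness, and not as part of the proof, the following Python fragment lists the instances of (R1) and (R2) determined by a transition function `delta`, rendered in an ad hoc string notation; the function name and the encoding of `delta` are assumptions of the sketch.

```python
def rules_R1_R2(delta):
    """List the instances of (R1) and (R2) determined by `delta`, as strings.
    An entry delta[(q, a)] = (q2, b, "L") encodes delta(q, a) = <q', b, L>."""
    rules = []
    for (q, a), (q2, b, move) in delta.items():
        if move == "L":
            # (R1): q . l(x . y, a, z)  |-  q' . l(x, y, b . z)
            rules.append(f"{q} . l(x . y, {a}, z)  |-  {q2} . l(x, y, {b} . z)")
        else:
            # (R2): q . l(x, a, y . z)  |-  q' . l(x . b, y, z)
            rules.append(f"{q} . l(x, {a}, y . z)  |-  {q2} . l(x . {b}, y, z)")
    return rules
```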

Remark 12.9. The reader familiar with abstract algebraic logic might have noticed that the logic $\vdash _{\textbf { M}}^{\vec {t}}$ is finitely equivalential [Reference Czelakowski14, Reference Font20]. Consequently, Theorem 12.1(iii) remains true when restricted to the family of finitely equivalential logics.⊣

Appendix A

Given a graph-based language $\mathscr {L}$ , we denote by $At(\mathscr {L})$ the set of its atomic formulas, i.e., the union of $\textit {Var}$ and the set of constants of $\mathscr {L}$ .

Lemma A.1. Let $\mathscr {L}$ be a graph-based language, $\varGamma \subseteq Fm_{\mathscr {L}}$ , and

$$ \begin{align*} \boldsymbol{\tau}(x) = \{ \square^{k}x \thickapprox \square^{n} \textbf{c}_{i} \} \end{align*} $$

for some nonnegative integers $n < k$ and ordinal $i < \alpha $ . Moreover, let $p \in At(\mathscr {L}) \smallsetminus \{ \textbf {c}_{i} \}$ and $h \in \omega $ . Then $\langle \square ^{k}\square ^{h}p, \square ^{n}\textbf {c}_{i}\rangle \in \theta (\varGamma , \boldsymbol {\tau })$ if and only if there are $s, s^{\ast } \in \omega $ , $\{ q_{j} \colon s> j \in \omega \} \subseteq At(\mathscr {L})$ , and $\{ u_{j} \colon s> j \in \omega \} \cup \{ v_{j} \colon s > j \in \omega \} \cup \{ w_{j^{\ast }} \colon s^{\ast } > j^{\ast } \in \omega \} \subseteq \omega $ such that

  (i) $u_{j} < v_{j}$ , for all $s> j \in \omega ;$

  (ii) $\square ^{u_{j}}q_{j}, \square ^{v_{j}}q_{j}, \square ^{w_{j^{\ast }}}\textbf {c}_{i} \in \varGamma $ , for all $s> j \in \omega $ and $s^{\ast }> j^{\ast } \in \omega ;$

  (iii) $h = t+g$ for some $t, g \in \omega $ such that $\square ^{t}p \in \varGamma $ and g is a multiple of

    $$ \begin{align*} \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \} \cup \{ k + w_{j^{\ast}} - n \colon s^{\ast} > j^{\ast} \in \omega \}). \end{align*} $$

Proof. We begin by proving the “only if” part. To this end, suppose that $\langle \square ^{k}\square ^{h}p, \square ^{n}\textbf {c}_{i}\rangle \in \theta (\varGamma , \boldsymbol {\tau })$ . There are two cases: either $\square ^{h}p \in \varGamma $ or $\square ^{h}p \notin \varGamma $ . In the case where $\square ^{h}p \in \varGamma $ , we define $s := 0$ and $s^{\ast } := 0$ . Consequently, conditions (i) and (ii) are vacuously satisfied. By the same token,

$$ \begin{align*} d := \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \} \cup \{ k + w_{j^{\ast}} - n \colon s^{\ast} > j^{\ast} \in \omega \}) = \textrm{gcd}(\emptyset) = 0. \end{align*} $$

Let also $g := 0$ and $t := h$ . Clearly $h = t+g$ and $\square ^{t}p = \square ^{h}p \in \varGamma $ . Moreover, by the above display, g is a multiple of d. This establishes condition (iii), thus settling this case.

Then we consider the case where $\square ^{h}p \notin \varGamma $ . By Theorem 6.4 there are $\alpha _{0}, \dots , \alpha _{r} \in Fm$ , $\gamma _{0}, \dots , \gamma _{r-1} \in \varGamma $ , and $\delta _{0}(x, \vec {z}), \dots , \delta _{r-1}(x, \vec {z}) \in Fm$ such that

$$ \begin{align*} \square^{k}\square^{h}p &= \alpha_{0}, \square^{n}\textbf{c}_{i} = \alpha_{r} \text{, and }\{ \alpha_{j}, \alpha_{j+1} \}\\ & = \{ \delta_{j}(\square^{k}\gamma_{j}, \vec{z}), \delta_{j}(\square^{n}\textbf{c}_{i}, \vec{z}) \},\quad \text{for every }r> j \in \omega. \end{align*} $$

This implies

(34) $$ \begin{align} \langle \square^{k}\square^{h}p, \square^{n}\textbf{c}_{i}\rangle \in \theta(\varGamma^{-}, \boldsymbol{\tau}), \end{align} $$

where

$$ \begin{align*} \varGamma^{-} := \{ \gamma_{0}, \dots, \gamma_{r-1} \}. \end{align*} $$

Observe that we can assume without loss of generality that $x \in \textit {Var}(\delta _{0})$ . As $\mathscr {L}$ is graph-based, this implies the existence of a $g^{\ast } \in \omega $ such that $\delta _{0}(x, \vec {z}) = \square ^{g^{\ast }}x$ . Bearing this in mind, we obtain

$$ \begin{align*} \{\square^{k}\square^{h}p, \alpha_{1}\} = \{ \alpha_{0}, \alpha_{1} \} = \{ \delta_{0}(\square^{k}\gamma_{0}, \vec{z}), \delta_{0}(\square^{n}\textbf{c}_{i}, \vec{z}) \} = \{ \square^{g^{\ast}}\square^{k}\gamma_{0}, \square^{g^{\ast}}\square^{n} \textbf{c}_{i} \}. \end{align*} $$

As $p \ne \textbf {c}_{i}$ by assumption, necessarily $\square ^{k}\square ^{h}p = \square ^{g^{\ast }}\square ^{k}\gamma _{0}$ . Consequently, there is a $t^{\ast } \in \omega $ such that $\square ^{t^{\ast }}p = \gamma _{0} \in \varGamma ^{-}$ . Then let $t := \min \{ s \in \omega \colon \square ^{s}p \in \varGamma ^{-} \}$ . Clearly $t \leqslant t^{\ast }$ . Consequently, there is a $g \in \omega $ such that

$$ \begin{align*} \square^{k}\square^{h}p = \square^{g^{\ast}}\square^{k}\gamma_{0} = \square^{g^{\ast}} \square^{k} \square^{t^{\ast}}p = \square^{g} \square^{k} \square^{t}p. \end{align*} $$

We conclude

(35) $$ \begin{align} h = t + g \text{ and }\square^{t}p \in \varGamma^{-}. \end{align} $$

Now, consider the set

$$ \begin{align*} \varDelta := \{ \langle \gamma, \delta \rangle \colon & \gamma, \delta \in \varGamma^{-} \text{ and }\gamma = \square^{u}q \text{ and } \delta = \square^{v}q,\\ & \text{for some }q \in At(\mathscr{L}) \text{ and nonnegative integers }u < v \}. \end{align*} $$

Then let $s \in \omega $ , $\{ q_{j} \colon s> j \in \omega \} \subseteq At(\mathscr {L})$ , and $\{ u_{j} \colon s> j \in \omega \} \cup \{ v_{j} \colon s > j \in \omega \}\subseteq \omega $ be such that

$$ \begin{align*} \varDelta = \{ \langle \square^{u_{j}}q_{j}, \square^{v_{j}}q_{j}\rangle \colon s> j \in \omega \}. \end{align*} $$

Similarly, consider

$$ \begin{align*} \nabla := \{ \square^{w} \textbf{c}_{i} \in \varGamma^{-} \colon w \in \omega \} \end{align*} $$

and let $s^{\ast } \in \omega $ and $\{ w_{j^{\ast }} \colon s^{\ast }> j^{\ast } \in \omega \} \subseteq \omega $ be such that

$$ \begin{align*} \nabla = \{ \square^{w_{j^{\ast}}}\textbf{c}_{i} \colon s^{\ast}> j^{\ast} \in \omega \}. \end{align*} $$

Observe that conditions (i) and (ii) hold for these definitions of $u_{j}, v_{j}, w_{j^{\ast }}$ , and $q_{j}$ .

Thus, to conclude the proof, it suffices to establish condition (iii). By (35) it will be enough to show that

$$ \begin{align*} d := \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \} \cup \{ k + w_{j^{\ast}} - n \colon s^{\ast} > j^{\ast} \in \omega \}) \end{align*} $$

is a divisor of g. To this end, with each $q \in At(\mathscr {L})$ we associate a set

$$ \begin{align*} M_{q} := \{ m \in \omega \colon \square^{m}q \in \varGamma^{-} \}. \end{align*} $$

If $M_{q} \ne \emptyset $ , we denote by $m_{q}$ its minimum element. Otherwise, we set $m_{q} := \infty $ . Observe that

(36) $$ \begin{align} t = m_{p}. \end{align} $$

Claim A.2. $d \geqslant 1$ .

Proof. Suppose the contrary, with a view to contradiction. Then $d = 0$ . By definition of $\textrm {gcd}$ , this implies $\varDelta = \nabla = \emptyset $ . As $\varDelta = \emptyset $ , for every $q \in At(\mathscr {L})$ ,

(37) $$ \begin{align} \text{either }M_{q} = \{ m_{q} \} \text{ or }m_{q} = \infty. \end{align} $$

Moreover, as $\nabla = \emptyset $ ,

(38) $$ \begin{align} m_{\textbf{c}_{i}} = \infty. \end{align} $$

Consider the sets

$$ \begin{align*} A_{1} &:= \{ \square^{r}q \colon q \in At(\mathscr{L}), r \in \omega, \text{ and }m_{q} = \infty \},\\ A_{2} &:= \{ \square^{r}q \colon q \in At(\mathscr{L}) \smallsetminus \{ \textbf{c}_{i} \}, m_{q} \in \omega, \text{ and }k + m_{q} -1 \geqslant r \in \omega \}. \end{align*} $$

Moreover, define $A := A_{1} \cup A_{2}$ . Observe that

(39) $$ \begin{align} At(\mathscr{L}) \subseteq A. \end{align} $$

To prove this, consider $q \in At(\mathscr {L})$ . If $m_{q} = \infty $ , then $q \in A_{1} \subseteq A$ . Then suppose that $m_{q} \ne \infty $ . By (38) $q \ne \textbf {c}_{i}$ , whence $q \in At(\mathscr {L}) \smallsetminus \{ \textbf {c}_{i} \}$ . Moreover, $k \geqslant 1$ as by assumption $k> n \geqslant 0$ . Thus $0 \leqslant k + m_{q} -1$ . Consequently, $q = \square ^{0}q \in A_{2} \subseteq A$ . This establishes (39).

We shall endow the set A with the structure of an algebra $\boldsymbol {A}$ of type $\mathscr {L}$ . For every ordinal $j < \alpha $ we define $\textbf {c}_{j}^{\boldsymbol {A}} := \textbf {c}_{j} \in A$ . This can be done by (39). Moreover, we define $\square ^{\boldsymbol {A}} \colon A \to A$ as

$$ \begin{align*} \square^{\boldsymbol{A}}\gamma := \left\{\begin{array}{@{\,}ll} \square \gamma & \text{if}\ \square \gamma \in A,\\ \square^{n} \textbf{c}_{i} & \text{otherwise}\\ \end{array} \right. \end{align*} $$

for every $\gamma \in A$ . Observe that $\square ^{\boldsymbol {A}}$ is well-defined by (38). Moreover, from (38) it follows

(40) $$ \begin{align} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{r\text{-times}}\textbf{c}_{i} = \square^{r}\textbf{c}_{i},\quad \text{for all }r \in \omega. \end{align} $$

Similarly, from (37) it follows that for all $q \in At(\mathscr {L})$ ,

(41) $$ \begin{align} \text{if }m_{q} \in \omega\text{, then }\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(k + m_{q})\text{-times}}q = \square^{n}\textbf{c}_{i}. \end{align} $$

Now, recall by (39) that $\textit {Var} \subseteq A$ . Bearing this in mind, let $h \colon \boldsymbol {Fm} \to \boldsymbol {A}$ be the unique homomorphism such that $h(x) = x$ for every $x \in \textit {Var}$ . We shall prove that

(42) $$ \begin{align} h(\square^{k}\gamma) = h(\square^{n} \textbf{c}_{i}) \end{align} $$

for every $\gamma \in \varGamma ^{-}$ . To this end, consider $\gamma \in \varGamma ^{-}$ . There are $q \in At(\mathscr {L})$ and $j \in \omega $ such that $\gamma = \square ^{j}q$ . Together with $\square ^{j}q \in \varGamma ^{-}$ , (37) and (38) imply $m_{q} = j \in \omega $ . Hence, applying (40) and (41), we get

$$ \begin{align*} h(\square^{k} \gamma)= h(\square^{k}\square^{m_{q}}q) =\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(k+m_{q})\text{-times}}h(q) = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(k+m_{q})\text{-times}}q = \square^{n} \textbf{c}_{i} = h(\square^{n}\textbf{c}_{i}), \end{align*} $$

establishing (42).

By (42), the kernel $\textrm {Ker}(h)$ contains the generators of $\theta (\varGamma ^{-}, \boldsymbol {\tau })$ , whence $\theta (\varGamma ^{-}, \boldsymbol {\tau }) \subseteq \textrm {Ker}(h)$ . Together with (34), this yields $\langle \square ^{k}\square ^{h}p, \square ^{n}\textbf {c}_{i}\rangle \in \theta (\varGamma ^{-}, \boldsymbol {\tau }) \subseteq \textrm {Ker}(h)$ , i.e., $h(\square ^{k}\square ^{h}p) = h(\square ^{n}\textbf {c}_{i})$ . Now, recall that $\square ^{t}p \in \varGamma ^{-}$ . By (37) this implies $m_{p} = t \in \omega $ and, therefore, $h = m_{p} + g$ (as $h = t + g$ ). By the definition of $\square ^{\boldsymbol {A}}$ , (40), and (41) we obtain

$$ \begin{align*} h(\square^{h}\square^{k}p) &= h(\square^{g + k + m_{p}}p) = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{g\text{-times}}\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(k + m_{p})\text{-times}}p\\ &=\underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{g\text{-times}}\square^{n}\textbf{c}_{i} = \square^{g+n}\textbf{c}_{i}. \end{align*} $$

Together with $h(\square ^{k}\square ^{h}p) = h(\square ^{n}\textbf {c}_{i})$ and (40), this implies $\square ^{n}\textbf {c}_{i} = \square ^{g+n}\textbf {c}_{i}$ , whence $g = 0$ . Thus $\square ^{h}p = \square ^{t+g}p = \square ^{t}p \in \varGamma ^{-} \subseteq \varGamma $ . But this contradicts the assumption that $\square ^{h}p \notin \varGamma $ . Hence we reached a contradiction, as desired.⊣

Now, consider the sets

$$ \begin{align*} B_{1} &:= \{ \square^{r}q \colon q \in At(\mathscr{L}) \smallsetminus \{ \textbf{c}_{i} \}, r \in \omega, \text{ and }m_{q} = \infty \},\\ B_{2} &:= \{ \square^{r}q \colon q \in At(\mathscr{L}) \smallsetminus \{ \textbf{c}_{i} \}, m_{q} \in \omega, \text{ and } k + m_{q} -1 \geqslant r \in \omega \},\\ B_{3} &:= \{ \square^{r}\textbf{c}_{i} \colon n+d-1 \geqslant r \in \omega \}. \end{align*} $$

Moreover, define $B := B_{1} \cup B_{2} \cup B_{3}$ . Observe that

(43) $$ \begin{align} At(\mathscr{L}) \subseteq B. \end{align} $$

To prove this, consider $q \in At(\mathscr {L})$ . We have two cases: either $q \ne \textbf {c}_{i}$ or $q = \textbf {c}_{i}$ . If $q \ne \textbf {c}_{i}$ , we repeat the argument used to establish (39). Then we consider the case where $q = \textbf {c}_{i}$ . By Claim A.2 we get $0 \leqslant n + d -1$ , whence $\textbf {c}_{i} \in B_{3} \subseteq B$ . This establishes (43).

Let $\boldsymbol {B}$ be the algebra of type $\mathscr {L}$ obtained by endowing B with the interpretation of the symbols in $\mathscr {L}$ defined in the proof of Claim A.2 for the algebra $\boldsymbol {A}$ . Observe that the interpretation of constant symbols is well-defined by (43). Similarly, from Claim A.2 it follows that $n \leqslant n + d - 1$ and, therefore, that $\square ^{n} \textbf {c}_{i} \in B_{3} \subseteq B$ . This guarantees that the interpretation of $\square $ in $\boldsymbol {B}$ is also well-defined.

Observe that for all $r \in \omega $ ,

(44) $$ \begin{align} \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{r\text{-times}}\textbf{c}_{i} = \begin{cases} \square^{r}\textbf{c}_{i} & \text{if}\ r \leqslant n,\\ \square^{n + a} \textbf{c}_{i} & \text{if}\ r> n\ \text{and }\ a\ \text{is the remainder of}\ \frac{r - n}{d}.\\ \end{cases} \end{align} $$

Similarly, for every $q \in At(\mathscr {L}) \smallsetminus \{ \textbf {c}_{i} \}$ such that $m_{q} \in \omega $ , and every $r \in \omega $ ,

(45) $$ \begin{align} \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{r\text{-times}}q = \begin{cases} \square^{r}q & \text{if}\ r \leqslant k + m_{q} - 1,\\ \square^{n + a} \textbf{c}_{i} & \text{if}\ r \geqslant k + m_{q}\ \text{and}\ a\ \text{is the remainder of}\ \frac{r - k - m_{q}}{d}.\\ \end{cases} \end{align} $$
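To illustrate (44) with concrete numbers: if $n = 0$ and $d = 3$ , then applying $\square ^{\boldsymbol {B}}$ seven times to $\textbf {c}_{i}$ yields $\square ^{0+1}\textbf {c}_{i} = \square \textbf {c}_{i}$ , since $1$ is the remainder of $\frac {7 - 0}{3}$ . In other words, on the elements $\square ^{n}\textbf {c}_{i}, \dots , \square ^{n+d-1}\textbf {c}_{i}$ the operation $\square ^{\boldsymbol {B}}$ cycles with period d, while (45) describes the analogous eventual behaviour on the atoms $q \ne \textbf {c}_{i}$ with $m_{q} \in \omega $ , after an initial segment of length $k + m_{q}$ .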

Now, recall by (43) that $\textit {Var} \subseteq B$ . Bearing this in mind, let $h \colon \boldsymbol {Fm} \to \boldsymbol {B}$ be the unique homomorphism such that $h(x) = x$ for every $x \in \textit {Var}$ . We shall prove that

(46) $$ \begin{align} h(\square^{k}\gamma) = h(\square^{n} \textbf{c}_{i}) \end{align} $$

for every $\gamma \in \varGamma ^{-}$ . To this end, consider $\gamma \in \varGamma ^{-}$ . There are $q \in At(\mathscr {L})$ and $j \in \omega $ such that $\gamma = \square ^{j}q$ . There are two cases: either $q \ne \textbf {c}_{i}$ or $q = \textbf {c}_{i}$ .

First suppose that $q \ne \textbf {c}_{i}$ . As $\square ^{j}q = \gamma \in \varGamma ^{-}$ , necessarily $m_{q} \leqslant j$ . Observe that

(47) $$ \begin{align} j - m_{q} = r \cdot d\text{, for some }r \in \omega. \end{align} $$

To prove this, notice that if $j = m_{q}$ , we are done taking $r := 0$ . Then suppose $j \ne m_{q}$ , i.e., $m_{q} < j$ . In this case, $\langle \square ^{m_{q}}q, \square ^{j}q\rangle \in \varDelta $ . Consequently, $j - m_{q}$ is a multiple of d, as desired. This establishes (47). Together with (44) and (45), this yields

$$ \begin{align*} h(\square^{k}\gamma) &= h(\square^{k}\square^{j}q) = h(\square^{j-m_{q}}\square^{k + m_{q}}q) = h(\square^{r \cdot d}\square^{k + m_{q}}q)\\ & = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(r \cdot d)\text{-times}}\underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(k + m_{q})\text{-times}}h(q) = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(r \cdot d)\text{-times}}\underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(k + m_{q})\text{-times}}q\\ &= \square^{n} \textbf{c}_{i} = h(\square^{n} \textbf{c}_{i}). \end{align*} $$

Then we consider the case where $q = \textbf {c}_{i}$ . In this case, an argument similar to the one detailed above (where $\nabla $ takes the role of $\varDelta $ ) shows that there is an $r \in \omega $ such that $r \cdot d = k + j - n$ . Consequently, by (44), we obtain

$$ \begin{align*} h(\square^{k}\gamma) &= h(\square^{k}\square^{j}\textbf{c}_{i}) = h(\square^{k+j-n}\square^{n}\textbf{c}_{i}) = h(\square^{r \cdot d}\square^{n}\textbf{c}_{i})\\ &=\underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(r \cdot d)\text{-times}}\underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{n\text{-times}}h(\textbf{c}_{i}) = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(r \cdot d)\text{-times}}\underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{n\text{-times}}\textbf{c}_{i}\\ &= \square^{n} \textbf{c}_{i} = h(\square^{n} \textbf{c}_{i}), \end{align*} $$

thus establishing (46).

Lastly, by (46) the kernel $\textrm {Ker}(h)$ contains the generators of $\theta (\varGamma ^{-}, \boldsymbol {\tau })$ , whence $\theta (\varGamma ^{-}, \boldsymbol {\tau }) \subseteq \textrm {Ker}(h)$ . By (34) this yields $\langle \square ^{k}\square ^{h}p, \square ^{n}\textbf {c}_{i}\rangle \in \theta (\varGamma ^{-}, \boldsymbol {\tau }) \subseteq \textrm {Ker}(h)$ , i.e., $h(\square ^{k}\square ^{h}p) = h(\square ^{n}\textbf {c}_{i})$ . We obtain

$$ \begin{align*} \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(g + m_{p} + k)\text{-times}}p = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(g + t + k)\text{-times}}p = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(h + k)\text{-times}}p = h(\square^{k}\square^{h}p) = h(\square^{n}\textbf{c}_{i})= \square^{n} \textbf{c}_{i}. \end{align*} $$

The equalities above are justified as follows. The first one follows from (36), the second from (35), the third is obvious, the fourth was justified right before the above display, and the last one follows from (44).

Now, as $m_{p} = t \in \omega $ and $p \ne \textbf {c}_{i}$ , the value of $\underbrace {\square ^{\boldsymbol {B}} \dots \square ^{\boldsymbol {B}}}_{(g + m_{p} + k)\text {-times}}p$ can be computed according to condition (45). Together with the above display, this guarantees that g is a multiple of d. Hence, condition (iii) holds, as desired.

To prove the “if” part, suppose that conditions (i)–(iii) hold. Consider $s> j \in \omega $ . Observe that $\square ^{u_{j}}q_{j}, \square ^{v_{j}}q_{j} \in \varGamma $ . Consequently,

$$ \begin{align*} \langle \square^{k}\square^{u_{j}}q_{j}, \square^{n} \textbf{c}_{i}\rangle, \langle \square^{k}\square^{v_{j}}q_{j}, \square^{n} \textbf{c}_{i}\rangle \in \theta(\varGamma, \boldsymbol{\tau}). \end{align*} $$

Bearing in mind that $u_{j} < v_{j}$ , this easily implies

(48) $$ \begin{align} \square^{v_{j}- u_{j}}\square^{n} \textbf{c}_{i} \equiv \square^{v_{j}-u_{j}}\square^{k}\square^{u_{j}}q_{j} = \square^{v_{j}}\square^{k}q_{j} \equiv \square^{n} \textbf{c}_{i}\quad\mod{\theta(\varGamma, \boldsymbol{\tau})}. \end{align} $$

Furthermore, consider any $s^{\ast }> j^{\ast } \in \omega $ and recall that $\square ^{w_{j^{\ast }}} \textbf {c}_{i} \in \varGamma $ . Consequently,

$$ \begin{align*} \langle \square^{k} \square^{w_{j^{\ast}}} \textbf{c}_{i}, \square^{n} \textbf{c}_{i}\rangle \in \theta(\varGamma, \boldsymbol{\tau}). \end{align*} $$

Again, this implies

(49) $$ \begin{align} \square^{k + w_{j^{\ast}} - n} \square^{n} \textbf{c}_{i} = \square^{k + w_{j^{\ast}}} \textbf{c}_{i} \equiv \square^{n} \textbf{c}_{i}\quad\mod{\theta(\varGamma, \boldsymbol{\tau})}. \end{align} $$

Now, from (48) and (49) it follows that for all $r \in \omega $ ,

(50) $$ \begin{align} \langle\square^{n} \textbf{c}_{i}, \square^{r \cdot d}\square^{n} \textbf{c}_{i}\rangle \in \theta(\varGamma, \boldsymbol{\tau}), \end{align} $$

where

$$ \begin{align*} d := \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \} \cup \{ k + w_{j^{\ast}} - n \colon s^{\ast} > j^{\ast} \in \omega \}). \end{align*} $$

To see this, observe that the set of those $p \in \omega $ for which $\langle \square ^{n} \textbf {c}_{i}, \square ^{p}\square ^{n} \textbf {c}_{i}\rangle \in \theta (\varGamma , \boldsymbol {\tau })$ contains the numbers $v_{j} - u_{j}$ and $k + w_{j^{\ast }} - n$ by (48) and (49), and is easily seen to be closed under addition and under nonnegative differences of its elements, whence it contains every multiple of d. Hence there is an $r \in \omega $ such that

$$ \begin{align*} \square^{k}\square^{h}p = \square^{g} \square^{t} \square^{k}p \equiv \square^{g} \square^{n} \textbf{c}_{i} = \square^{r \cdot d} \square^{n} \textbf{c}_{i} \equiv \square^{n} \textbf{c}_{i}\quad\mod{\theta(\varGamma, \boldsymbol{\tau})}. \end{align*} $$

The steps in the above display are justified as follows. The first follows from the assumption that $h = t+g$ . The second from the assumption that $\square ^{t}p \in \varGamma $ and, therefore, $\langle \square ^{k}\square ^{t}p, \square ^{n} \textbf {c}_{i}\rangle \in \theta (\varGamma , \boldsymbol {\tau })$ . The third from the assumption that there is some $r \in \omega $ such that $g = r \cdot d$ , and the last one from (50). The above display concludes the proof.⊣
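The characterization in Lemma A.1 can be illustrated with a small example. Suppose that $\boldsymbol {\tau }(x) = \{ \square ^{2}x \thickapprox \textbf {c}_{i} \}$ (so $k = 2$ and $n = 0$ ), that $p \ne \textbf {c}_{i}$ , and that $\varGamma = \{ p, \square ^{3}p \}$ . Since the only elements of $\varGamma $ of the form $\square ^{u}q, \square ^{v}q$ with $u < v$ are $p$ and $\square ^{3}p$ , and $\varGamma $ contains no formula of the form $\square ^{w}\textbf {c}_{i}$ , the gcd in condition (iii) can only take the values $0$ and $3$ , and the lemma yields $\langle \square ^{2}\square ^{h}p, \textbf {c}_{i}\rangle \in \theta (\varGamma , \boldsymbol {\tau })$ if and only if h is a multiple of $3$ . This agrees with a direct computation: the generators of $\theta (\varGamma , \boldsymbol {\tau })$ give $\square ^{2}p \thickapprox \textbf {c}_{i} \thickapprox \square ^{5}p$ and, hence, $\square ^{3}\textbf {c}_{i} \thickapprox \textbf {c}_{i}$ modulo $\theta (\varGamma , \boldsymbol {\tau })$ , whence $\square ^{2}\square ^{h}p$ is congruent to $\square ^{h}\textbf {c}_{i}$ and, when h is divisible by $3$ , to $\textbf {c}_{i}$ .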

Lemma A.3. Let $\mathscr {L}$ be a graph-based language, $\varGamma \subseteq Fm_{\mathscr {L}}$ , and

$$ \begin{align*} \boldsymbol{\tau}(x) = \{ \square^{k}x \thickapprox \square^{n} \textbf{c}_{i} \} \end{align*} $$

for some nonnegative integers $n < k$ and ordinal $i < \alpha $ . Moreover, let $h \in \omega $ . Then $\langle \square ^{k}\square ^{h} \textbf {c}_{i}, \square ^{n}\textbf {c}_{i}\rangle \in \theta (\varGamma , \boldsymbol {\tau })$ if and only if there are $s, s^{\ast } \in \omega $ , $\{ q_{j} \colon s> j \in \omega \} \subseteq At(\mathscr {L})$ , and $\{ u_{j} \colon s> j \in \omega \} \cup \{ v_{j} \colon s > j \in \omega \} \cup \{ w_{j^{\ast }} \colon s^{\ast } > j^{\ast } \in \omega \} \subseteq \omega $ such that

  (i) $u_{j} < v_{j}$ , for all $s> j \in \omega ;$

  (ii) $\square ^{u_{j}}q_{j}, \square ^{v_{j}}q_{j}, \square ^{w_{j^{\ast }}}\textbf {c}_{i} \in \varGamma $ , for all $s> j \in \omega $ and $s^{\ast }> j^{\ast } \in \omega ;$

  (iii) $k + h - n$ is a multiple of

    $$ \begin{align*} \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \} \cup \{ k + w_{j^{\ast}} - n \colon s^{\ast} > j^{\ast} \in \omega \}). \end{align*} $$

Proof. We begin by proving the “only if” part. To this end, suppose that $\langle \square ^{k}\square ^{h} \textbf {c}_{i}, \square ^{n}\textbf {c}_{i}\rangle \in \theta (\varGamma , \boldsymbol {\tau })$ . We define $\varDelta $ , $\nabla $ , and $m_{q}$ as in the proof of Lemma A.1. Similarly, we enumerate $\varDelta $ and $\nabla $ as in the proof of Lemma A.1. Observe that conditions (i) and (ii) hold for these definitions of $u_{j}, v_{j}, w_{j^{\ast }}$ , and $q_{j}$ .

Then define

$$ \begin{align*} d := \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \} \cup \{ k + w_{j^{\ast}} - n \colon s^{\ast} > j^{\ast} \in \omega \}). \end{align*} $$

To conclude this part of the proof, it suffices to establish condition (iii), i.e., to prove that $k + h - n$ is a multiple of d. To this end, we rely on the following:

Claim A.4. $d \geqslant 1$ .

Proof. Suppose the contrary, with a view to contradiction. We replicate the first part of the proof of Claim A.2 by defining an algebra $\boldsymbol {A}$ and a homomorphism $h \colon \boldsymbol {Fm} \to \boldsymbol {A}$ such that $\textrm {Ker}(h)$ extends $\theta (\varGamma , \boldsymbol {\tau })$ . Together with the assumption $\langle \square ^{k}\square ^{h} \textbf {c}_{i}, \square ^{n}\textbf {c}_{i}\rangle \in \theta (\varGamma , \boldsymbol {\tau })$ , the fact that $\theta (\varGamma , \boldsymbol {\tau }) \subseteq \textrm {Ker}(h)$ implies

$$ \begin{align*} \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{(k+h)\text{-times}} \textbf{c}_{i} = h(\square^{k}\square^{h} \textbf{c}_{i}) = h(\square^{n}\textbf{c}_{i}) = \underbrace{\square^{\boldsymbol{A}} \dots \square^{\boldsymbol{A}}}_{n\text{-times}} \textbf{c}_{i}. \end{align*} $$

By (40) this yields $k + h = n$ . But this contradicts the fact that $k> n$ . Hence we reached a contradiction, as desired.⊣

Using Claim A.4, we replicate the construction of the algebra $\boldsymbol {B}$ and of the homomorphism $h \colon \boldsymbol {Fm} \to \boldsymbol {B}$ in the proof of Lemma A.1, and obtain $\textrm {Ker}(h) \supseteq \theta (\varGamma , \boldsymbol {\tau })$ . Together with the assumption $\langle \square ^{k}\square ^{h} \textbf {c}_{i}, \square ^{n}\textbf {c}_{i}\rangle \in \theta (\varGamma , \boldsymbol {\tau })$ , this implies

$$ \begin{align*} \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{(k+h)\text{-times}} \textbf{c}_{i} = h(\square^{k}\square^{h} \textbf{c}_{i}) = h(\square^{n}\textbf{c}_{i}) = \underbrace{\square^{\boldsymbol{B}} \dots \square^{\boldsymbol{B}}}_{n\text{-times}} \textbf{c}_{i}. \end{align*} $$

As $k> n$ , we get $h+k> n$ . Thus, together with (44), the above display implies that $k+h-n$ is a multiple of d.

To prove the “if” part, suppose that conditions (i)–(iii) hold. As in the proof of Lemma A.1, we obtain that for all $r \in \omega $ ,

(51) $$ \begin{align} \langle\square^{n} \textbf{c}_{i}, \square^{r \cdot d}\square^{n} \textbf{c}_{i}\rangle \in \theta(\varGamma, \boldsymbol{\tau}), \end{align} $$

where

$$ \begin{align*} d := \textrm{gcd}(\{ v_{j} - u_{j} \colon s> j \in \omega \} \cup \{ k + w_{j^{\ast}} - n \colon s^{\ast} > j^{\ast} \in \omega \}). \end{align*} $$

Hence we obtain that for some $r \in \omega $ ,

$$ \begin{align*} \square^{k}\square^{h}\textbf{c}_{i} = \square^{k + h - n}\square^{n}\textbf{c}_{i} = \square^{r \cdot d} \square^{n} \textbf{c}_{i} \equiv \square^{n} \textbf{c}_{i} \quad\mod \theta(\varGamma, \boldsymbol{\tau}). \end{align*} $$

The steps in the above display are justified as follows. The first is sound because by assumption $k> n$ . The second follows from the assumption that there is some $r \in \omega $ such that $k + h - n = r \cdot d$ , and the last one from (51). The above display concludes the proof.⊣

Thanks are due to R. Jansana and M. Stronkowski for inspiring conversation on this topic and for many useful suggestions which helped to improve the presentation of the paper. Moreover, we gratefully thank the anonymous referee for the careful reading of the manuscript and I. Shillito for raising a question which inspired Corollary 9.7. The author was supported by the research grant 2017 SGR 95 of the AGAUR from the Generalitat de Catalunya, by the I+D+i research project PID2019-110843GA-I00 La geometria de las logicas no-clasicas funded by the Ministry of Science and Innovation of Spain, and by the Beatriz Galindo grant BEAGAL 18/00040 funded by the Ministry of Science and Innovation of Spain.

Footnotes

Dedicated to Professor Ramon Jansana on the occasion of his retirement

1 For related studies of completeness theorems involving equations, see for instance [Reference Blok and Pigozzi4, Reference Czelakowski and Jansana16, Reference Herrmann25, Reference Moraschini35, Reference Raftery38].

2 This congruence is named after Tarski in [Reference Font20–Reference Font, Jansana and Pigozzi22].

3 This terminology comes from the fact that algebras in graph-based languages can be naturally identified with certain directed graphs with a set of distinguished elements.

4 This notion of term-equivalence appeared first in [Reference Jansana and Moraschini27–Reference Jansana and Moraschini29]. Even if in these papers term-equivalence was defined by means of the so-called Suszko reduced models of a logic [Reference Czelakowski15], the present definition is easily seen to be equivalent to the original one.

5 The expression locally tabular originates in the field of modal and intermediate logics.

6 The naive procedure for problem (i) described above runs in exponential time in the length of the input.

7 The assumption that $m \geqslant 2$ is inessential and is intended to simplify the proof of Theorem 12.1(iii) only.

References

Albuquerque, H., Font, J. M., Jansana, R., and Moraschini, T., Assertional logics, truth-equational logics, and the hierarchies of abstract algebraic logic, Don Pigozzi on Abstract Algebraic Logic and Universal Algebra (J. Czelakowski, editor), Outstanding Contributions, vol. 16, Springer, Cham, 2018.
Bergman, C., Universal Algebra: Fundamentals and Selected Topics, Chapman and Hall/CRC, Boca Raton, FL, 2011.
Blok, W. J. and Pigozzi, D., Protoalgebraic logics. Studia Logica, vol. 45 (1986), pp. 337–369.
Blok, W. J. and Pigozzi, D., Algebraizable Logics, Memoirs of the American Mathematical Society, vol. 396, American Mathematical Society, Providence, 1989.
Blok, W. J. and Pigozzi, D., Algebraic semantics for universal Horn logic without equality, Universal Algebra and Quasigroup Theory (A. Romanowska and J. D. H. Smith, editors), Heldermann, Berlin, 1992, pp. 1–56.
Blok, W. J. and Raftery, J. G., Assertionally equivalent quasivarieties. International Journal of Algebra and Computation, vol. 18 (2008), no. 4, pp. 589–681.
Blok, W. J. and Rebagliato, J., Algebraic semantics for deductive systems. Studia Logica, Special Issue on Abstract Algebraic Logic, Part II, vol. 74 (2003), no. 5, pp. 153–180.
Burris, S. and Sankappanavar, H. P., A Course in Universal Algebra, Springer, New York, 1981.
Celani, S. and Jansana, R., A closer look at some subintuitionistic logics. Notre Dame Journal of Formal Logic, vol. 42 (2001), no. 4, pp. 225–255.
Chagrov, A. and Zakharyaschev, M., Modal Logic, Oxford Logic Guides, vol. 35, Oxford University Press, Oxford, 1997.
Cintula, P. and Noguera, C., Logic and Implication: An Introduction to the General Algebraic Study of Non-Classical Logics, Trends in Logic, Springer, Cham, to appear.
Czelakowski, J., Algebraic aspects of deduction theorems. Studia Logica, vol. 44 (1985), pp. 369–387.
Czelakowski, J., Local deductions theorems. Studia Logica, vol. 45 (1986), pp. 377–391.
Czelakowski, J., Protoalgebraic Logics, Trends in Logic—Studia Logica Library, vol. 10, Kluwer Academic, Dordrecht, 2001.
Czelakowski, J., The Suszko operator. Part I. Studia Logica, Special Issue on Abstract Algebraic Logic, Part II, vol. 74 (2003), no. 5, pp. 181–231.
Czelakowski, J. and Jansana, R., Weakly algebraizable logics, this Journal, vol. 65 (2000), no. 2, pp. 641–668.
Dunn, J. M., Positive modal logic. Studia Logica, vol. 55 (1995), no. 2, pp. 301–317.
Font, J. M., The simplest protoalgebraic logic. Mathematical Logic Quarterly, vol. 59 (2013), no. 6, pp. 435–451.
Font, J. M., Ordering protoalgebraic logics. Journal of Logic and Computation, vol. 26 (2014), no. 5, pp. 1395–1419.
Font, J. M., Abstract Algebraic Logic—An Introductory Textbook, Studies in Logic—Mathematical Logic and Foundations, vol. 60, College Publications, London, 2016.
Font, J. M. and Jansana, R., A General Algebraic Semantics for Sentential Logics, second ed., Lecture Notes in Logic, vol. 7, Association for Symbolic Logic, 2017.
Font, J. M., Jansana, R., and Pigozzi, D., A survey on abstract algebraic logic. Studia Logica, Special Issue on Abstract Algebraic Logic, Part II, vol. 74 (2003), nos. 1–2, pp. 13–97. With an “Update” in vol. 91 (2009), pp. 125–130.
Galatos, N., Jipsen, P., Kowalski, T., and Ono, H., Residuated Lattices: An Algebraic Glimpse at Substructural Logics, Studies in Logic and the Foundations of Mathematics, vol. 151, Elsevier, Amsterdam, 2007.
Glivenko, V. I., Sur quelques points de la logique de M. Brouwer. Academie Royale de Belgique Bulletin, vol. 15 (1929), pp. 183–188.
Herrmann, B., Algebraizability and Beth’s theorem for equivalential logics. Bulletin of the Section of Logic, vol. 22 (1993), no. 2, pp. 85–88.
Jansana, R., Full models for positive modal logic. Mathematical Logic Quarterly, vol. 48 (2002), no. 3, pp. 427–445.
Jansana, R. and Moraschini, T., The poset of all logics I: Interpretations and lattice structure, this Journal, 2021, to appear.
Jansana, R. and Moraschini, T., The poset of all logics II: Leibniz classes and hierarchy, this Journal, 2021, to appear.
Jansana, R. and Moraschini, T., The poset of all logics III: Finitely presentable logics. Studia Logica, vol. 109 (2021), pp. 539–580.
Komori, Y., Syntactical investigations into BI logic and BB’I logic. Studia Logica, vol. 53 (1994), pp. 397–416.
Kracht, M., Tools and Techniques in Modal Logic, Studies in Logic and the Foundations of Mathematics, vol. 142, North-Holland, Amsterdam, 1999.
Kracht, M., Modal consequence relations, Handbook of Modal Logic, vol. 3 (P. Blackburn, J. van Benthem, and F. Wolter, editors), Elsevier, New York, 2006, Chapter 8.
Martin, E. P. and Meyer, R. K., Solution to the P–W problem, this Journal, vol. 47 (1982), pp. 869–886.
Moraschini, T., A computational glimpse at the Leibniz and Frege hierarchies. Annals of Pure and Applied Logic, vol. 169 (2018), no. 1, pp. 1–20.
Moraschini, T., A study of the truth predicates of matrix semantics. The Review of Symbolic Logic, vol. 11 (2018), no. 4, pp. 780–804.
Moraschini, T., On the complexity of the Leibniz hierarchy. Annals of Pure and Applied Logic, vol. 170 (2019), no. 7, pp. 805–824.
Pigozzi, D., Fregean algebraic logic, Algebraic Logic (H. Andréka, J. D. Monk, and I. Németi, editors), Colloquia Mathematica Societatis János Bolyai, vol. 54, North-Holland, Amsterdam, 1991, pp. 473–502.
Raftery, J. G., The equational definability of truth predicates. Reports on Mathematical Logic, vol. 41 (2006), pp. 95–149.
Raftery, J. G., A perspective on the algebra of logic. Quaestiones Mathematicae, vol. 34 (2011), pp. 275–325.
Rautenberg, W., On reduced matrices. Studia Logica, vol. 52 (1993), pp. 63–72.
Suzuki, Y., Wolter, F., and Zakharyashev, M., Speaking about transitive frames in propositional languages. Journal of Logic, Language and Information, vol. 7 (1998), pp. 317–339.
Turing, A. M., On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, vol. 42 (1936–1937), no. 2, pp. 230–265. A correction, vol. 43 (1937), pp. 544–546.
Visser, A., A propositional logic with explicit fixed points. Studia Logica, vol. 40 (1981), pp. 155–175.