1 Introduction
Moment-cumulant relations and Wick products play a central role in probability theory and related fields [Reference Akhiezer1, Reference Peccati and Taqqu22]. In classical probability, cumulant sequences $(c_n)_{n\in {\mathbf{N}}^\ast }$ linearize the notion of independence of random variables: if two random variables, $X,Y$ , with moments of all orders are independent, then for $n\geq 1$ , $c_n(X+Y)=c_n(X)+c_n(Y)$ . Wick polynomials, Wick products, and chaos expansions are related to cumulants. Indeed, recall, for example, that given a random variable X with moments of all orders, the Wick polynomial $W(X^n)$ is the coefficient of $\frac {t^n}{n!}$ in the expansion of $\exp (tX-K(t))$ , where $K(t)$ is the exponential generating series of cumulants.
Voiculescu’s theory of free probability [Reference Voiculescu28, Reference Voiculescu, Dykema and Nica29] provides the paradigm of a noncommutative probability theory, where the notion of freeness replaces the classical concept of probabilistic independence. Speicher showed that free cumulants linearize Voiculescu’s notion of freeness. See [Reference Nica and Speicher21, Reference Sarah and Schurmann24] for detailed introductions. Following Voiculescu’s ideas, various authors [Reference Bożejko8, Reference Hasebe and Saigo19, Reference Muraki20, Reference Speicher26, Reference Speicher, Woroudi and Voiculescu27] considered different types of independences (Boolean, monotone, and others), each characterized by particular moment-cumulant relations with explicit combinatorial descriptions given in terms of different types of set partitions. Relations between the different brands of cumulants were thoroughly explored by Arizmendi et al. in [Reference Arizmendi, Hasebe, Lehner and Vargas6]. Free and Boolean Wick polynomials have been introduced in this setting by Anshelevich [Reference Anshelevich2–Reference Anshelevich4].
In a previous paper [Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15], the authors presented a Hopf-algebraic framework describing both the combinatorial structure of the classical moment-cumulant relations as well as the related notions of Wick polynomials and Wick products. The approach is based on convolution products of linear functionals defined on a coalgebra and encompasses the multidimensional extension of the moment-cumulant relations. In this framework, classical Wick polynomials result from a Hopf-algebraic deformation under the action of linear automorphisms induced by multivariate moments associated to an arbitrary family of random variables with moments of all orders.
In a series of recent papers [Reference Ebrahimi-Fard and Patras10, Reference Ebrahimi-Fard and Patras12, Reference Ebrahimi-Fard and Patras14], two of us explored relations between multivariate moments and free, Boolean, and monotone cumulants, as well as relations among the latter, in noncommutative probability theory by studying a particular graded connected Hopf algebra H defined on the double tensor algebra over a noncommutative probability space $(A,\varphi )$ . In this approach, the associated set partitions (noncrossing, interval, and monotone, respectively) appear through the evaluation of elements of the group G (Lie algebra $\mathfrak g$ ) of (infinitesimal) Hopf-algebraic characters on words.
In the paper at hand, we revisit from a Hopf-theoretic point of view the theory of free, Boolean, and conditionally free Wick polynomials. The relevance of shuffle group actions and structures in the sense of [Reference Ebrahimi-Fard and Patras14] is also emphasized.
The article is organized as follows. In Section 2, we recall the definitions of classical cumulants and Wick polynomials. In Section 3, we do the same for free and Boolean cumulants. Section 4 defines free Wick polynomials using the Hopf-algebraic approach. The new definition is shown to extend Anshelevich’s definition of multivariate free Appell polynomials. At the beginning of Section 5, we introduce the shuffle-theoretic framework that allows us to deal with noncommutative moment-cumulant relations and the corresponding noncommutative Wick polynomials. Section 5.1 accordingly revisits moment-cumulant relations in noncommutative probability theory following mainly the references [Reference Ebrahimi-Fard and Patras10–Reference Ebrahimi-Fard and Patras12]. Section 5.2 develops shuffle calculus for free Wick polynomials. In Section 6, Boolean Wick polynomials are also introduced and analyzed from this point of view. Section 7 uses the same approach to define conditionally free Wick polynomials. In Section 8, we show how the three notions of noncommutative Wick polynomials can be related through comodule structures and the induced group actions. Section 9 shows how the classical notion of Wick products generalizes naturally to the noncommutative setting, inducing three new associative algebra structures on the tensor algebra over a noncommutative probability space. Finally, in Section 10, we show using a Hopf-algebraic approach how the definition of classical cumulants lifts to the notion of tensor cumulants for random variables in a noncommutative probability space. In Section 10.1, we explain how this leads to the definition of tensor Wick polynomials. These two sections extend the results of [Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15] from the classical to the tensor framework.
Below, $\mathbb {K}$ denotes the base field of characteristic zero over which all algebraic structures are defined. All (co)algebras are (co)associative and (co)unital unless otherwise stated.
2 Cumulants and Wick polynomials
Let us first recall briefly the definition of classical cumulants and Wick polynomials. Let X be a real-valued random variable, defined on a probability space $(\Omega ,\mathcal F,\mathbb P)$ , with finite moments of all orders, i.e., such that $m_n := \mathbb EX^n<\infty $ for all $n>0$ . Its exponential moment-generating function is defined as a power series in t
If we assume suitable growth conditions on the coefficients $m_n$ so that the above series has a positive radius of convergence, then this power series defines a function of class $C^\infty $ around the origin, and the moments $m_n$ can be recovered from it by differentiation.
The exponential cumulant-generating function
is a power series in t defined through the classical exponential relation between moments and cumulants
Using standard power series manipulations, this equation can be rewritten as:
Here, $P(n)$ denotes the collection of all set partitions, $\pi :=\{B_1,\ldots ,B_l\}$ , of the set $[n]:=\{1,\dotsc ,n\}$ , where the block $B_i \in \pi $ contains $|B_i|$ elements. In general, for a finite subset $U \subset \mathbb N$ , we denote by $P(U)$ the collection of all set partitions of U.
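For illustration only, here is a minimal computational sketch (in Python, not part of the mathematical development; all function names are ours) that enumerates the set partitions of $[n]$ and evaluates the right-hand side of the moment-cumulant relation, i.e., the sum over partitions of products of cumulants over blocks.

```python
import math

def set_partitions(elements):
    """Enumerate all set partitions of a list, as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        # insert `first` into each existing block, or open a new singleton block
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def moment_from_cumulants(n, c):
    """m_n = sum over set partitions pi of [n] of the product of c[|B|] over blocks B."""
    return sum(math.prod(c[len(B)] for B in pi)
               for pi in set_partitions(list(range(1, n + 1))))

# sanity check: a standard Gaussian has c_2 = 1 and all other cumulants zero,
# so m_4 = 3 (the number of pair partitions of [4])
c = {k: (1 if k == 2 else 0) for k in range(1, 5)}
print([moment_from_cumulants(n, c) for n in range(1, 5)])  # [0, 1, 0, 3]
```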
Let $(X_1,\dotsc ,X_p)$ be a finite collection of real-valued random variables defined on a common probability space, such that all the moments $m_{\mathbf{n}}:=\mathbb E[X_1^{n_1}\dotsm X_p^{n_p}]$ exist, where $\mathbf{n}:=(n_1,\dotsc ,n_p)\in \mathbb N^p$ is a multi-index. We may consider a multivariate extension of (2.1), namely
where $t^{\mathbf{n}}:= t_1^{n_1}\dotsm t_p^{n_p}$ and $\mathbf{n}!:= n_1!\dotsm n_p!$ . As before, the cumulant-generating function is defined by a relation analogous to (2.2), and its coefficients are related to the moments in a way analogous to (2.3). This relation will be revisited in the following sections.
There exists a particular family of polynomials associated to a random variable X with finite moments of all orders, called Wick polynomials and denoted here by $W_n(x)$ , $n\ge 0$ . It turns out to be the unique family of polynomials such that $W_0(x)=1$ and
for all $n>0$ . The latter defining property means that $(W_n)_{n\ge 0}$ qualifies as a sequence of Appell polynomials [Reference Appell5]. For example, if X is a standard Gaussian random variable, this family coincides with the Hermite polynomials. These polynomials play an important role in physics, notably in quantum field theory. In particular, the Wick exponential
is closely related to moment- and cumulant-generating functions. In fact, this relation can be used to define Wick polynomials, because the exponential power series in t serves as a generating function.
The polynomial
is called the nth Wick power of X. For example,
In general, these explicit expansions can be recursively obtained from the change of basis relation
The latter can be generalized to finite collections $(X_1,\dotsc ,X_p)$ of random variables in a way analogous to (2.4).
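Since the Wick exponential serves as a generating function for the $W_n$, these polynomials can be computed symbolically by expanding $\exp(tx-K(t))$. The sketch below (ours, using sympy; the truncation order is arbitrary) does so for a standard Gaussian, where $K(t)=t^2/2$, and recovers the Hermite polynomials mentioned above.

```python
import sympy as sp

x, t = sp.symbols('x t')
N = 5                      # truncation order (illustrative choice)

K = t**2 / 2               # cumulant-generating function of a standard Gaussian
series = sp.series(sp.exp(t * x - K), t, 0, N).removeO()

# W_n(x) is the coefficient of t^n/n! in exp(t*x - K(t))
wick = [sp.expand(sp.factorial(n) * series.coeff(t, n)) for n in range(N)]
print(wick)                # [1, x, x**2 - 1, x**3 - 3*x, x**4 - 6*x**2 + 3]
```

Any other cumulant sequence could be substituted for $K$, giving the Wick polynomials of the corresponding random variable.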
3 Free and Boolean cumulants
Voiculescu introduced free probability theory in the 1980s [Reference Voiculescu28, Reference Voiculescu, Dykema and Nica29]. In this theory, the classical notion of independence is replaced by the algebraic notion of freeness. A family of unital subalgebras $(B_i:i\in I)$ of a noncommutative probability space $(A,\varphi )$ is called freely independent (or free) if $\varphi (a_1 \cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_n)=0$ whenever $\varphi (a_j)=0$ for all $j=1,\dotsc ,n$ and $a_j\in B_{i_j}$ for some indices $i_1\neq i_2 \neq \dotsb \neq i_n$ .
Speicher introduced the notion of free cumulants [Reference Speicher26] as the right analogue of the classical cumulants in the theory of free probability, allowing for a more tractable characterization of Voiculescu’s notion of freeness. Free cumulants are defined by a formula analogous to (2.3) where the lattice P of set partitions is replaced by the lattice $\operatorname {NC}$ of noncrossing partitions:
As above, we set $k (a_B):= k (a_{i_1}, \ldots , a_{i_{|B|}})$ , for $B=\{i_1 < \cdots < i_{|B|}\}$ , to be the multivariate free cumulant of order $|B|$ . Free cumulants reflect freeness in the sense that they vanish whenever the involved random variables belong to different freely independent subalgebras.
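Relation (3.1) can be tested on small examples by enumerating noncrossing partitions. The sketch below (ours; function names are illustrative) filters the set partitions of $[n]$ for the noncrossing ones and evaluates the products of univariate free cumulants; for a semicircular element ($k_2=1$, all other free cumulants zero), the even moments are the Catalan numbers.

```python
import math
from itertools import combinations

def set_partitions(elements):
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def is_noncrossing(pi):
    """No a < b < c < d with a, c in one block and b, d in another."""
    for B1, B2 in combinations(pi, 2):
        for a, c in combinations(sorted(B1), 2):
            for b, d in combinations(sorted(B2), 2):
                if a < b < c < d or b < a < d < c:
                    return False
    return True

def free_moment(n, kappa):
    """phi(a^n) = sum over noncrossing partitions of [n] of prod kappa[|B|]."""
    return sum(math.prod(kappa[len(B)] for B in pi)
               for pi in set_partitions(list(range(1, n + 1))) if is_noncrossing(pi))

kappa = {k: (1 if k == 2 else 0) for k in range(1, 7)}   # semicircular element
print([free_moment(n, kappa) for n in range(1, 7)])       # [0, 1, 0, 2, 0, 5]
```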
Relation (3.1) between moments and free cumulants can be concisely expressed in terms of their ordinary generating functions. Indeed, given $a_1,\dots ,a_n$ in A, introduce noncommuting variables $w_1,w_2,\dots ,w_n$ and the generating functions
Here, we define $\varphi (a_{\mathbf{n}}):=\varphi (a_{n_1} \cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_{n_p})$ for the multi-index $\mathbf{n} := (n_1,\dotsc ,n_p)\in [n]^p,\ p\in {\mathbb N}^\ast $ , and similarly for $k(a_{\mathbf{n}})=k(a_{n_1}, \ldots , a_{n_p})$ . Then, (3.1) is summarized by the intriguing identity [Reference Anshelevich2, Reference Nica and Speicher21]
where the substitution
is in place on the right-hand side.
The fact that the random variables under consideration do not commute makes it possible to consider several other notions of independence in addition to Voiculescu’s freeness. For example, the notion of Boolean cumulants appears naturally in the context of the study of stochastic differential equations [Reference von Waldenfels, Behara, Krickeberg and Wolfowitz30]. Speicher and Woroudi [Reference Speicher, Woroudi and Voiculescu27] defined the multivariate Boolean cumulants, $ b(a_1, \ldots , a_n)$ , and the corresponding relations with moments in the context of noncommutative probability theory in terms of the following recursion:
While the combinatorics of free cumulants is described by the lattice of noncrossing partitions, the relation between moments and Boolean cumulants can be expressed by using the lattice $\operatorname {Int}$ of interval partitions:
Using the multi-index notation from above, these relations can be encapsulated in a single identity by introducing the generating function
yielding the simple expression [Reference Anshelevich3, Reference Nica and Speicher21]
Observe that in this case, as opposed to the functional equation describing the relation between moments and free cumulants, there is no substitution such as (3.2) to be made.
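Interval partitions of $[n]$ correspond to compositions of $n$, so the Boolean moment-cumulant relation is particularly easy to evaluate. The following sketch (ours; the cumulant sequence is a toy example) sums over compositions recorded by their block sizes.

```python
import math

def compositions(n):
    """Compositions of n, i.e., interval partitions of [n] recorded by block sizes."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def boolean_moment(n, b):
    """m_n = sum over interval partitions of [n] of the product of b[|block|]."""
    return sum(math.prod(b[k] for k in comp) for comp in compositions(n))

b = {1: 1, 2: 1, 3: 0, 4: 0}                         # a toy Boolean cumulant sequence
print([boolean_moment(n, b) for n in range(1, 5)])   # [1, 2, 3, 5]
```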
Surprisingly, the relation between moments and the different types of cumulants can be described concisely as the action of linear maps on the double tensor algebra. For this, two of us introduced, in [Reference Ebrahimi-Fard and Patras10], a different coproduct which allows one to express these relations in a way similar to the presentation of the preceding sections.
4 Free Wick polynomials
In [Reference Ebrahimi-Fard and Patras10, Reference Ebrahimi-Fard and Patras12, Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15], an approach in terms of Hopf algebras to the moment-cumulant relations in both classical and noncommutative probability was introduced. It permits one to describe moment-cumulant relations in a rather different way, avoiding the use of generating functions.
Definition 4.1 A noncommutative probability space $(A,\varphi )$ consists of a unital algebra A together with a unital linear map $\varphi \colon A \to \mathbb {K}$ , i.e., $\varphi (1_A)=1$ .
To avoid ambiguities, we also denote the product of elements $a,b$ in the algebra A by $m_{A}(a \otimes b)=: a \cdot _{\!\scriptscriptstyle {A}} b$ . We still write $m_A$ for the iterates
Notice that we do not require the algebra A to be commutative. The elements of A should be thought of in general as noncommutative random variables, and the map $\varphi $ plays then the role of the expectation map. Elements in A can represent, for example, operator-valued random variables such as those appearing in the Fock space approach to Quantum Field Theory [Reference Effros and Popa16].
We consider the nonunital tensor algebra over A
and we denote elements of $T(A)$ using word notation ( $a_1\cdots a_n=a_1\otimes \dots \otimes a_n$ ). It is graded by the number of letters, i.e., the length of a word. The unitalization of $T(A)$ is obtained by adjoining the empty word $\mathbf{1}$ and is denoted by $\overline T(A)=T_0(A)\oplus T(A):= \mathbb K\mathbf{1}\oplus T(A)$ . The product on $T(A)$ (resp. $\overline T(A)$ ) is given by concatenation of words, $\mathrm{conc}(w_1 \otimes w_2):= w_1w_2$ , for $w_1,w_2 \in T(A)$ (with the empty word $\mathbf{1}$ being the unit). Consider now the double tensor algebra $\overline T(T(A))$ over A. On $\overline T(T(A))$ , we also consider the concatenation product, but we denote it with a vertical bar in order to distinguish it from concatenation in $T(A)$ , i.e., $\mathrm{conc}(w_1\otimes w_2)=w_1|w_2$ for $w_1,w_2\in \overline T(T(A))$ .
Given a subset $U\subset \mathbb N$ , an interval or connected component of U is a maximal sequence of successive elements in U. For a subset $S\subseteq [n]$ , we denote by $J_1^S,\dotsc ,J_{k(S)}^S$ the connected components of $[n]\setminus S$ , ordered in increasing order of their minimal element. For notational convenience, we will often omit making explicit the dependency on S of the number of these connected components and, when there is no risk of confusion, will write simply $J_1^S,\dotsc ,J_k^S$ for $J_1^S,\dotsc ,J_{k(S)}^S$ .
Definition 4.2 The map $\Delta \colon T(A)\to \overline {T}(A)\otimes \overline T(T(A))$ is defined by
It has a unique multiplicative extension $\Delta \colon \overline T(T(A))\to \overline T(T(A))\otimes \overline T(T(A))$ such that $\Delta (\mathbf{1})=\mathbf{1}\otimes \mathbf{1}$ .
Note that in the sum on the right-hand side of (4.1), we have inserted the concatenation product in $\overline T(T(A))$ between the words corresponding to the connected components ${J^S_1},\dotsc ,{J^S_k}$ associated to the nonempty set $S\subsetneq [n]$ , that is, whereas $a_S \in {T}(A)$ , we have $a_{J^S_1}\vert \dotsm \vert a_{J^S_k} \in T(T(A))$ .
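To make Definition 4.2 concrete, the following sketch (ours; names and the tensor marker are purely illustrative) lists the terms of the coproduct (4.1) for a word of length three: each subset $S$ of positions contributes $a_S$ on the left and the bar-product of the words over the connected components of the complement on the right.

```python
from itertools import combinations

def connected_components(indices):
    """Maximal runs of consecutive integers in a sorted list of positions."""
    comps, current = [], []
    for i in indices:
        if current and i != current[-1] + 1:
            comps.append(current)
            current = []
        current.append(i)
    if current:
        comps.append(current)
    return comps

def coproduct_terms(word):
    """Terms (a_S, a_{J_1^S} | ... | a_{J_k^S}) of Delta(a_1...a_n) from (4.1)."""
    n = len(word)
    for r in range(n + 1):
        for S in combinations(range(n), r):
            left = ''.join(word[i] for i in S) or '1'
            comps = connected_components([i for i in range(n) if i not in S])
            right = '|'.join(''.join(word[i] for i in J) for J in comps) or '1'
            yield left, right

for left, right in coproduct_terms(['a', 'b', 'c']):
    print(left, '(x)', right)   # e.g. the subset S = {2} contributes  b (x) a|c
```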
Theorem 4.3 [Reference Ebrahimi-Fard and Patras10]
The unital double tensor algebra $\overline T(T(A))$ equipped with $\Delta $ is a noncommutative noncocommutative connected graded Hopf algebra.
Extending our approach to classical Wick polynomials into the noncommutative realm, we introduce an endomorphism of the double tensor algebra $\overline T(T(A))$ . This provides, among other things, a new way of introducing the noncommutative Wick (a.k.a. free Appell) polynomials appearing in the work of Anshelevich [Reference Anshelevich2], as explained below.
Suppose that $(A,\varphi )$ is a noncommutative probability space. We define the map $\Phi \colon \overline T(T(A))\to \mathbb {K}$ as the unique unital multiplicative extension of the linear map $\phi $ defined on $T(A)$ by $\phi (a_1\dotsm a_n) := \varphi (a_1\cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_n)$ . Because $\Phi $ is—by definition—a Hopf-algebraic character, it is an invertible element in the corresponding convolution algebra. Its convolution inverse, denoted $\Phi ^{-1}$ , is the unique character on the double tensor algebra such that $\Phi ^{-1}*\Phi =\Phi *\Phi ^{-1}=\varepsilon $ . Here, $\varepsilon \colon \overline T(T(A))\to \mathbb K$ denotes the counit, defined as the unique multiplicative map such that $\ker \varepsilon =T(T(A))$ , and which acts as the neutral element for the convolution product. In other words, the map $\varepsilon $ is such that $\varepsilon (\mathbf{1})=1$ and vanishes otherwise, and $\varepsilon (w_1|w_2)=\varepsilon (w_1)\varepsilon (w_2)$ for all $w_1,w_2\in \overline T(T(A))$ .
Definition 4.4 The free Wick map $\mathrm{W} \colon \overline T(T(A))\to \overline T(T(A))$ is defined by
or, implicitly, by
We call the family $\{\mathrm{W}(a_1\cdots a_n)$ , $a_i\in A$ , $i=1,\ldots , n\}$ the free Wick polynomials.
Proposition 4.5 The free Wick map is multiplicative, i.e., for words $w,w'\in T(A)$ ,
We recall that $a|b$ denotes the concatenation of a and b in $\overline T(T(A))$ .
Proof As the identity map $\mathrm{id}$ and $\Phi ^{-1}$ are both multiplicative, using Sweedler’s notation, $\Delta (w)= \sum w^{(1)}\otimes w^{(2)}$ , for the coproduct defined in (4.1):
▪
The compositional inverse of $\mathrm{W}$ , denoted $\mathrm{W}^{\circ -1}$ , is given by
From Definition 4.4, we also obtain that the usual monomials in $\overline T(A)$ can be expressed in terms of free Wick polynomials:
Note that $\mathrm{W}$ restricts to an automorphism of $\overline T(A)$ . By [Reference Anshelevich2, Proposition 3.12], our free Wick polynomials agree with Anshelevich’s free Appell polynomials, because our formula (4.2) coincides with formula [Reference Anshelevich2, Formula (3.42)].
Here are some low-degree computations:
The computation of the third order polynomial (4.3) is somewhat subtle and should be compared with the expression (10.5) below.
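The expression of monomials in terms of free Wick polynomials recalled above (see also the proof of Proposition 9.2 below) yields the recursion $\mathrm{W}(a_1\cdots a_n)=a_1\cdots a_n-\sum_{S\subsetneq[n]}\mathrm{W}(a_S)\,\Phi(a_{J_1^S})\cdots\Phi(a_{J_k^S})$. The following sketch (ours) implements it symbolically; the symbols phi_1, phi_12, ... are hypothetical placeholders for the moments $\varphi(a_1)$, $\varphi(a_1\cdot_{\!\scriptscriptstyle{A}} a_2)$, and so on, and the output reproduces the low-degree computations above.

```python
import sympy as sp
from itertools import combinations
from functools import lru_cache

def phi(letters):
    """Symbolic moment phi(a_{i1}...a_{ik}); the naming phi_12... is purely illustrative."""
    return sp.Symbol('phi_' + ''.join(map(str, letters)))

def connected_components(positions):
    comps, cur = [], []
    for p in positions:
        if cur and p != cur[-1] + 1:
            comps.append(cur)
            cur = []
        cur.append(p)
    if cur:
        comps.append(cur)
    return comps

@lru_cache(None)
def wick_coeffs(word):
    """Coefficients of W(a_word) on the subword basis a_S, via
       W(a_1...a_n) = a_1...a_n - sum over proper S of W(a_S) * prod phi(components)."""
    n = len(word)
    coeffs = {word: sp.Integer(1)}
    for r in range(n):                                   # proper subsets of positions
        for kept in combinations(range(n), r):
            S = tuple(word[p] for p in kept)
            comp = [p for p in range(n) if p not in kept]
            weight = sp.Integer(1)
            for J in connected_components(comp):
                weight *= phi(tuple(word[p] for p in J))
            for sub, c in wick_coeffs(S).items():
                coeffs[sub] = sp.expand(coeffs.get(sub, 0) - weight * c)
    return coeffs

print(wick_coeffs((1,)))     # {(1,): 1, (): -phi_1}
print(wick_coeffs((1, 2)))   # constant term: 2*phi_1*phi_2 - phi_12
```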
The free Wick polynomials inherit immediately from their Hopf-algebraic definition a key property of classical Wick polynomials.
Lemma 4.6 The Wick polynomials $\mathrm{W}$ in Definition 4.4 are centered. That is,
Definition 4.7 Let us call universal polynomial $P=P(x_1,\ldots ,x_n;\gamma )$ for noncommutative probability spaces any linear combination of symbols
where $I\coprod J_1\coprod \cdots \coprod J_p$ is a partition of $[n]$ and $\gamma $ takes values in $\mathbb {K}$ .
To a universal polynomial P together with a noncommutative probability space $(A,\varphi )$ and elements $a_1,\ldots ,a_n \in A$ , we associate the element $P(a_1,\ldots ,a_n;\varphi )\in \overline {T}(A)$ obtained from P by replacing $X_I$ with the tensor monomial $a_{i_1}\cdots a_{i_k}$ , where $I=\{i_1,\ldots ,i_k\}$ , and $X_J^{\bullet }$ with $a_{j_1}\cdot _{\!\scriptscriptstyle {A}} \cdots \cdot _{\!\scriptscriptstyle {A}} a_{j_l}$ , where $J=\{j_1,\ldots ,j_l\}$ .
A family $(f_{(A,\varphi )})$ of linear endomorphisms of $\overline {T}(A)$ , where $(A,\varphi )$ runs over noncommutative probability spaces, is called universal if its action on words $a_1\cdots a_n$ is given by universal polynomials. The Wick map, $\mathrm{W}$ , the inverse Wick map, $\mathrm{W}^{\circ -1}$ , the moment map, and the cumulant maps are examples of universal families.
Now, given $(A,\varphi )$ , we define a formal derivation with respect to an element $a \in A$ as follows. Fix a decomposition $A=\mathbb {K}a \oplus A^{\prime }$ . Denote by $\zeta _a \colon T(A)\to \mathbb {K}$ the linear map defined by $\zeta _a(a):=1$ , $\zeta _a(b):=0$ for $b\in A^{\prime }$ , and $\zeta _a(w):=0$ for every word $w=a_1\cdots a_n$ , $a_i \in A$ , $n\geq 2$ . This map (which depends on the chosen direct sum decomposition of A) is then extended as an infinitesimal character to the double tensor algebra. We set
Observe that for any word $w=a_1\cdots a_n \in T(A)$ where $a_j=a$ or $a_j \in A^{\prime }$ , we then get
For example, if $w,w_1,w_2\in T(A^{\prime })$ , then
Because $\zeta _a$ is infinitesimal, $\partial _a$ turns out to be a derivation on $\overline T(T(A))$ .
Theorem 4.8 The Wick map $\mathrm{W}$ is the unique family of algebra automorphisms of $\overline T(T(A))$ , where $(A,\varphi )$ runs over noncommutative probability spaces, such that
-
• The restrictions of $\mathrm{W}$ to $\overline {T}(A)$ form a universal family.
-
• The map $\mathrm{W}$ is centered, $\Phi \circ \mathrm{W}=\varepsilon $ , with $\mathrm{W}(\mathbf{1})=\mathbf{1}$ in particular.
-
• For any $a \in A$ and any direct sum decomposition $A={\mathbb K} a \oplus A^{\prime }$ ,
$$ \begin{align*}\partial_a\circ \mathrm{W}=\mathrm{W}\circ\partial_a. \end{align*} $$
Proof The first two statements were already shown. The third one follows from the coassociativity of the coproduct:
Uniqueness follows from the fact that these three properties define the universal family $\mathrm{W}$ by induction. Given an integer n, choose, for example, a family $a_1,\ldots ,a_n$ of linearly independent free random variables in a noncommutative probability space $(A,\varphi )$ . Use then an adapted direct sum decomposition $A={\mathbb K}a_1\oplus \dots \oplus {\mathbb K}a_n\oplus A^{\prime \prime }$ to define the derivations. The knowledge of the
and the centering property determine then uniquely $\mathrm{W}(a_1\cdots a_n)$ . The identities
ensure the consistency of the formulas.▪
5 Shuffle algebra
In this section, we briefly recall the definition of shuffle algebra, thereby setting the notation used in the rest of the paper. We follow references [Reference Ebrahimi-Fard and Patras10, Reference Ebrahimi-Fard and Patras12] and refer to these articles for further bibliographical indications on the subject. We use in the present article the topologists’ convention and use the term shuffle product also for products that are not necessarily commutative (see the definitions below). See also the recent survey [Reference Ebrahimi-Fard and Patras13] on the appearance of shuffle algebras (a.k.a. dendriform algebras) and related structures in the theory of iterated integrals and more generally chronological calculus.
Definition 5.1 A shuffle algebra is a vector space D endowed with two bilinear products ${\prec }\colon D\otimes D\to D$ and ${\succ }\colon D\otimes D\to D$ , called the left and right half-shuffles, respectively, satisfying the shuffle relations
where we have set $a*b := a\succ b+a\prec b$ .
These relations imply that $(D,*)$ is a nonunital associative algebra. We also consider its unitization $\overline D:= \mathbb {K}\mathbf{1}\oplus D$ by extending the half-shuffles: $\mathbf{1}\prec a:= 0=: a\succ \mathbf{1}$ and $\mathbf{1}\succ a:= a=: a\prec \mathbf{1}$ for all $a\in D$ . This entails that $\mathbf{1}*a=a=a*\mathbf{1}$ for all $a\in D$ . Note that the products $\mathbf{1}\prec \mathbf{1}$ and $\mathbf{1}\succ \mathbf{1}$ are not defined; we nevertheless set $\mathbf{1}*\mathbf{1}:=\mathbf{1}$ .
Definition 5.2 A commutative shuffle algebra is a shuffle algebra where the left and right half-shuffles are identified by the identity:
so that, in particular, $(\overline D,*)$ becomes a commutative algebra and the knowledge of the left half-shuffle $\prec $ (or the right half-shuffle $\succ $ ) is enough to determine the full structure.
Shuffle products are frequently denoted by the symbol ⧢ , as we do further below in this article (10.2). Fundamental examples of such products are provided by the shuffle product of simplices in geometry and topology (see the first part of [Reference Ebrahimi-Fard and Patras13] for a modern account) as well as the commutative shuffle product of words defined inductively on $\overline {T}(X)$ :
The latter is dual to the unshuffle coproduct. This example is generic in the sense that the tensor algebra over an alphabet B equipped with this product is the free commutative shuffle algebra over B [Reference Schützenberger25]. The shuffle algebras we will study in the present article are noncommutative variants of the tensor algebra.
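For concreteness, here is a minimal recursive implementation (ours) of the commutative shuffle product of two words, returning the resulting words with multiplicities.

```python
def shuffle(u, v):
    """All words of the shuffle product u ⧢ v, with multiplicity, via the recursion
       (x u') ⧢ (y v') = x (u' ⧢ y v') + y (x u' ⧢ v')."""
    if not u:
        return [v]
    if not v:
        return [u]
    return [u[0] + w for w in shuffle(u[1:], v)] + [v[0] + w for w in shuffle(u, v[1:])]

print(shuffle('ab', 'c'))          # ['abc', 'acb', 'cab']
print(len(shuffle('ab', 'cd')))    # 6 = binomial(4, 2)
```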
Dual to the notion of shuffle algebra is the concept of unshuffle coalgebra [Reference Foissy17]. An unshuffle coalgebra is a vector space C equipped with two linear maps $\Delta _\prec \colon C\to C\otimes C$ and $\Delta _\succ \colon C\to C\otimes C$ , called the left and right half-unshuffles, such that
where $\overline \Delta :=\Delta _\prec +\Delta _\succ $ . As before, these axioms imply that $(C,\overline \Delta )$ is a noncounital coassociative coalgebra.
Definition 5.3 An unshuffle bialgebra is a vector space $\overline B=\mathbb {K}\mathbf{1}\oplus B$ together with linear maps $\Delta _\prec \colon B\to B\otimes B$ , $\Delta _\succ \colon B\to B\otimes B$ , and $m\colon \overline B\otimes \overline B\to \overline B$ such that:
-
(1) $(B,\Delta _\prec ,\Delta _\succ )$ is an unshuffle coalgebra,
-
(2) $(\overline B,m)$ is an associative algebra, and
-
(3) the following compatibility relations are satisfied:
$$ \begin{align*} \Delta^+_\succ(ab) =\Delta^+_\succ(a)\Delta(b), \quad \Delta^+_\prec(ab) =\Delta^+_\prec(a)\Delta(b), \end{align*} $$where we have set$$ \begin{align*} \Delta_\prec^+(a) :=\Delta_\prec(a)+a\otimes\mathbf{1},\quad\Delta_\succ^+(a):=\Delta_\succ(a)+\mathbf{1}\otimes a \end{align*} $$and$$ \begin{align*} \Delta(a) :=\Delta^+_\prec(a)+\Delta^+_\succ(a) =\overline\Delta(a)+a\otimes\mathbf{1}+\mathbf{1}\otimes a. \end{align*} $$
Given an unshuffle bialgebra, we adjoin a counit $\varepsilon \colon \overline B\to \mathbb {K}$ , which is the unique linear map such that $\ker \varepsilon =B$ and $\varepsilon (\mathbf{1})=1$ . We observe that, in particular, for any unshuffle bialgebra, the triple $(\overline B,m,\Delta )$ becomes a bialgebra in the usual sense. Thus, its graded dual space $\overline D:=\overline B^*$ becomes an algebra under the convolution product
Moreover, (5.2)–(5.4) imply that $\overline D=\mathbb {K}\mathbf{1} \oplus B^\ast $ is a unital shuffle algebra, because the convolution product splits
where $\varphi (\mathbf{1})=\psi (\mathbf{1})=0$ , $\varphi \prec \psi :=(\varphi \otimes \psi )\Delta _\prec ^+$ , and $\varphi \succ \psi :=(\varphi \otimes \psi )\Delta _\succ ^+$ . The counit of $\overline B$ plays the role of the unit for this shuffle product, and one sets for $\varphi \in \overline D, \ \varphi (\mathbf{1})=0$ ,
By definition, an unshuffle coalgebra is cocommutative if $\tau \circ \Delta _\prec =\Delta _\succ $ , where $\tau $ is the usual switch map $\tau (x\otimes y):= y\otimes x$ . An example is given by the algebra $\overline {T}(A)$ equipped with the unshuffle coproduct defined in (10.1) below.
5.1 Shuffle approach to moments and cumulants
We consider an example of Definition 5.3, which is also the main setting for the shuffle algebra approach to moment-cumulant relations in noncommutative probability theory.
We note that the coproduct $\Delta $ can be split into two parts: the left half-coproduct
and we set
The right half-coproduct is defined by
and we define
This is extended to the double tensor algebra by defining
Theorem 5.4 [Reference Ebrahimi-Fard and Patras10]
The bialgebra $\overline T(T(A))$ equipped with $\Delta _\prec $ and $\Delta _\succ $ is an unshuffle bialgebra.
We recall now from reference [Reference Ebrahimi-Fard and Patras10, Reference Ebrahimi-Fard and Patras12] how the unshuffle bialgebra $\overline T(T(A))$ provides an algebraic structure for encoding the relation between free, Boolean, and monotone cumulants and moments in noncommutative probability theory from the point of view of shuffle products.
The group of characters is denoted by G, and its Lie algebra of infinitesimal characters $\mathfrak g$ consists of linear maps that send $\mathbf{1} \in \overline T(T(A))$ as well as any nontrivial product in $ \overline T(T(A))$ to zero. The convolution exponential $\exp ^*$ defines a bijection between $\mathfrak g$ and G. We recall that the map $\Phi \colon \overline T(T(A))\to \mathbb {K}$ is the unique unital multiplicative extension of the linear map $\phi $ defined on $T(A)$ by $\phi (a_1\dotsm a_n) := \varphi (a_1\cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_n)$ . We are going to define three different exponential-type bijections between the group G and its Lie algebra $\mathfrak g$ , corresponding, respectively, to the convolution product $*$ and to the right and left half-shuffles (see equations (5.5) and (5.6)). As a result, we can associate to the character $\Phi \in G$ three different infinitesimal characters $\rho , \kappa ,\beta \in \mathfrak g$ . The three exponential-type bijections encode the three moment-cumulant relations (monotone, free, and Boolean).
The free and Boolean cumulants can be represented in terms of infinitesimal characters as the unique maps satisfying the so-called left and, respectively, right half-shuffle fixed point equations
These equations define bijections between the Lie algebra $\mathfrak g$ and the group G, i.e., the so-called left and right half-shuffle exponentials such that
Hence, we see that $\Phi $ is the left (or free) half-shuffle exponential of the infinitesimal character $\kappa \in \mathfrak g$ . Analogously, $\Phi $ is the right (or Boolean) half-shuffle exponential of the infinitesimal character $\beta \in \mathfrak g$ . It can be shown [Reference Ebrahimi-Fard and Patras10, Theorem 5.2] that the free moment-cumulant relation of order n is given by computing
Analogously, $\mathcal E_\succ (\beta )$ gives the Boolean moment-cumulant relations [Reference Ebrahimi-Fard and Patras12, Theorem 4]
due to the fact that $\beta \in \mathfrak g$ , which, together with the right half-shuffle operation defined in terms of (5.7), implies that
Shuffle algebra permits one to show that the half-shuffle exponentials admit the following left and right half-shuffle logarithms:
as well as the relation between the Boolean and free cumulants through the shuffle adjoint action
With these notations in place, one can show that the convolutional inverse of $\Phi $ can be also described in terms of the half-shuffle exponentials
yielding solutions to the half-shuffle fixed point equations
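Evaluated on words (with the convention, recalled in Section 5.2 below, that the left half-coproduct keeps the first letter in its left tensor factor), the two fixed point equations unfold into explicit recursions: the left half-shuffle equation gives $\Phi(a_1\cdots a_n)=\sum_{1\in S\subseteq[n]}\kappa(a_S)\,\Phi(a_{J_1^S})\cdots\Phi(a_{J_k^S})$, while the right half-shuffle equation gives the interval-type recursion of Speicher and Woroudi for Boolean cumulants. The univariate sketch below (ours; dictionary keys are cumulant orders) implements both recursions.

```python
from functools import lru_cache
from itertools import combinations

def connected_components(positions):
    comps, cur = [], []
    for p in positions:
        if cur and p != cur[-1] + 1:
            comps.append(cur)
            cur = []
        cur.append(p)
    if cur:
        comps.append(cur)
    return comps

def free_moments(kappa, N):
    """m_n from free cumulants: sum over subsets S of [n] containing 1 of
       kappa_{|S|} times the product of moments over the components of the complement."""
    @lru_cache(None)
    def m(n):
        if n == 0:
            return 1
        total = 0
        for r in range(n):
            for rest in combinations(range(2, n + 1), r):
                S = (1,) + rest
                comp = [i for i in range(1, n + 1) if i not in S]
                term = kappa[len(S)]
                for J in connected_components(comp):
                    term *= m(len(J))
                total += term
        return total
    return [m(n) for n in range(1, N + 1)]

def boolean_moments(b, N):
    """m_n from Boolean cumulants via m_n = sum_{k=1}^{n} b_k m_{n-k}."""
    m = [1]
    for n in range(1, N + 1):
        m.append(sum(b[k] * m[n - k] for k in range(1, n + 1)))
    return m[1:]

kappa = {k: (1 if k == 2 else 0) for k in range(1, 7)}
print(free_moments(kappa, 6))                            # [0, 1, 0, 2, 0, 5] (semicircle)
print(boolean_moments({1: 1, 2: 1, 3: 0, 4: 0}, 4))      # [1, 2, 3, 5]
```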
5.2 Shuffle calculus for free Wick polynomials
The Wick map $\mathrm{W}$ can be related to the free cumulants by using (5.10), whence we obtain from Definition 4.4
Evaluating both sides on a word from $T(A)$ yields
Hence, from Definition 4.2 of the coproduct, we obtain an explicit formula for the Wick polynomial $\mathrm{W}(a_1\dotsm a_n)$ , in terms of free cumulants (cf. [Reference Anshelevich2])
which coincides with [Reference Anshelevich2, Formula (3.44)]. Note that the combination of the factor $(-1)^{|\pi |}$ and the sum over interval partitions on the right-hand side stems from the fact that $\Phi ^{-1}$ is expressed in terms of the right (or Boolean) half-shuffle exponential evaluated on the infinitesimal character $-\kappa $ corresponding to negative values of free cumulants. This is the reason for calling these polynomials free Wick polynomials, and $\mathrm{W}$ is called the free Wick map.
Proposition 5.5 The free Wick polynomials satisfy the following recursion in terms of the free cumulants:
where $\mathrm{e}:=\eta \circ \varepsilon $ and $\eta $ is the unit map on $\overline {T}(T(A))$ .
Proof This follows from the relations satisfied by the shuffle operations and (5.11):
▪
Observing that the left half-coproduct, $\Delta _\prec $ , can be expressed in terms of the coproduct $\Delta $ , i.e., $\Delta _\prec (a_1\dotsm a_n)=(a_1{\cdot }\otimes {\mathrm{id}})\Delta (a_2\dotsm a_n)$ , we recover from (5.12) the elegant recursive formula [Reference Anshelevich2, Formula (3.43)]
6 Boolean Wick polynomials
It is natural to ask whether one could also relate the Wick map $\mathrm{W}$ to Boolean cumulants. Indeed, by using once again (5.10) and Definition 4.4, we obtain
Expanding the left half-shuffle exponential, $\mathcal E_\prec (-\beta )$ , on the right-hand side, we see that
where we have used, in the last identity, the recursion (5.11) and relations (5.1) to rearrange the iterated half-shuffle products. This argument can be made precise with the help of Proposition 5.5.
Proposition 6.1 The Wick map can be expressed in terms of Boolean cumulants as
Proof From Proposition 5.5, we have the identity
But (5.9) implies that $\Phi ^{-1}\succ \kappa =\beta \prec \Phi ^{-1}$ , so that
Because $a\succ (b\prec c)=(a\succ b)\prec c$ from (5.1), we get
▪
We now introduce another map, which allows us to recover in a similar way the Boolean Appell polynomials [Reference Anshelevich3, Section 3].
Definition 6.2 The Boolean Wick map $\mathrm{W}'\colon \overline T(T(A))\to \overline T(T(A))$ is defined by
As usual, we call the $\mathrm{W}'(a_1\cdots a_n)$ , $a_i\in A$ , $i=1,\ldots , n$ , Boolean Wick polynomials. In particular, we immediately obtain the explicit expression [Reference Anshelevich3, Formula (3.1)]
Proposition 6.3 The Boolean Wick polynomials are centered.
Proof By definition, we have that
using (5.8).▪
Proposition 6.1 entails the relation
between the Boolean and free Wick maps. This gives the following rewriting rule for the corresponding polynomials:
From (6.1), we deduce that
which leads to the expansion
Observe that the expansion terminates after $n+1$ terms when applied to a word $w\in T(A)$ with $|w|=n$ letters, thanks to $\beta $ being an infinitesimal character, i.e.,
where $R^{(i)}_{\succ \beta }(\mathrm{W}^{\prime }):= R^{(i-1)}_{\succ \beta }(\mathrm{W}^{\prime }) \succ \beta $ and $R^{(0)}_{\succ \beta }(\mathrm{W}^{\prime })=\mathrm{W}^{\prime }$ . The first few terms are
Here, we used the Boolean moment-cumulant relations, which say that $\Phi (a_1 \cdots a_n) = \sum _{I \in \mathrm{Int}([n])} \prod_{\pi \in I}\beta (a_\pi )$ .
Proposition 6.4 Let $w=a_1\cdots a_n \in T(A)$ ,
Proof For the word $w=a_1\dotsm a_n \in T(A)$ , we find from (6.5)
The essential input here is that the Boolean cumulants are given by $\beta $ , which is an infinitesimal character.▪
Eventually, from (6.4), we deduce the inverse Boolean Wick map.
Proposition 6.5 The inverse Boolean Wick map is given as solution to the fixed point equation
Proof Note that the definition of the Boolean Wick map (6.1) implies that it is invertible. We show explicitly that ${\mathrm{W}^{\prime }}^{\circ -1} \circ \mathrm{W}^{\prime } = \mathrm{W}^{\prime } \circ {\mathrm{W}^{\prime }}^{\circ -1} = \mathrm{id}.$ Indeed, we see that
Induction on the length of words in $T(A)$ gives for $a \in A$
On a word $w=a_1\dotsm a_n \in T(A)$ , $n>1$ , we find
Here, we used the induction hypothesis, $(\mathrm{W}' \circ {\mathrm{W}^{\prime }}^{\circ -1}) (a_{i+1} \cdots a_n) = a_{i+1} \cdots a_n$ , for $i>0$ . An analogous computation gives the other identity, i.e., ${\mathrm{W}^{\prime }}^{\circ -1} \circ \mathrm{W}' =\mathrm{id}$ .▪
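On a word, the fixed point equation for ${\mathrm{W}'}^{\circ-1}$ unfolds into the suffix expansion ${\mathrm{W}'}^{\circ-1}(a_1\cdots a_n)=\sum_{j=0}^{n}\Phi(a_1\cdots a_j)\,a_{j+1}\cdots a_n$ (recalled in the proof of Proposition 9.2 below). Inverting this triangular relation gives the recursion $\mathrm{W}'(a_1\cdots a_n)=a_1\cdots a_n-\sum_{j=1}^{n}\Phi(a_1\cdots a_j)\,\mathrm{W}'(a_{j+1}\cdots a_n)$, which the sketch below (ours, with illustrative moment symbols phi_1, phi_12, ...) evaluates symbolically.

```python
import sympy as sp
from functools import lru_cache

def phi(letters):
    """Symbolic moment phi(a_{i1}...a_{ik}); naming is purely illustrative."""
    return sp.Symbol('phi_' + ''.join(map(str, letters)))

@lru_cache(None)
def boolean_wick(word):
    """Coefficients of W'(a_word) on the suffix words a_{j+1}...a_n and the empty word,
       from W'(a_1...a_n) = a_1...a_n - sum_{j>=1} phi(a_1...a_j) W'(a_{j+1}...a_n)."""
    coeffs = {word: sp.Integer(1)}
    for j in range(1, len(word) + 1):
        prefix, suffix = word[:j], word[j:]
        for sub, c in boolean_wick(suffix).items():
            coeffs[sub] = sp.expand(coeffs.get(sub, 0) - phi(prefix) * c)
    return coeffs

print(boolean_wick((1,)))      # {(1,): 1, (): -phi_1}
print(boolean_wick((1, 2)))    # {(1, 2): 1, (2,): -phi_1, (): phi_1*phi_2 - phi_12}
```

One checks directly that applying $\Phi$ to these expansions gives zero, in accordance with Proposition 6.3.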
From (6.6), it follows that
Remark 6.6 In [Reference Anshelevich3], the Boolean cumulants were defined by the relation between generating functions G of Boolean Wick polynomials and Boolean cumulants $\eta $ ,
which implies an expression similar to (6.2) but with $\beta $ applied to the other half of the word. In principle, one could take either relation as a starting point, because there is a choice here due to the noncommutativity of the series, and neither choice seems to be more natural than the other. However, we decided to work with (6.2) instead, because the polynomials so obtained are more naturally described from the shuffle algebra point of view. The relation (6.3) also has its counterpart in terms of generating functions, which involves a particular kind of variable substitution.
7 Conditionally free Wick polynomials
Note the apparent asymmetry in the definitions of the free and Boolean Wick polynomials. There is a third family of polynomials that generalizes both the free and Boolean cases. Indeed, we may consider the notion of conditional freeness [Reference Bożejko, Leinert and Speicher9] which generalizes Voiculescu’s notion of freeness in the context of two states. Recall that a two-state noncommutative probability space $(A,\varphi ,\psi )$ is a noncommutative probability space $(A,\varphi )$ endowed with a second unital linear map $\psi \colon A \to \mathbb {K}$ . We denote by $\Psi $ the canonical character extension of $\psi $ to the double tensor algebra $\overline T(T(A))$ . We denote by $\beta ^\varphi $ the Boolean infinitesimal character associated to $\varphi $ (and define similarly $\beta ^\psi ,\kappa ^\varphi ,\kappa ^\psi $ ).
In the shuffle algebra approach, we have the following characterization of conditionally (or c-)free cumulants [Reference Ebrahimi-Fard and Patras11]: the corresponding infinitesimal character $R^{\varphi ,\psi } \in \mathfrak {g}$ is defined through shuffle adjoint action:
This means that $\beta ^\varphi =\Psi ^{-1}\succ R^{\varphi ,\psi }\prec \Psi $ , such that
Following [Reference Ebrahimi-Fard and Patras11, Proposition 6.1], the evaluation of formula (7.2) on a word, i.e., computing $\Phi (a_1 \cdots a_n) = \varphi (a_1\cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_n)$ ,
gives back the formula discovered in reference [Reference Bożejko, Leinert and Speicher9] and recalled in the next theorem.
Theorem 7.1 [Reference Bożejko, Leinert and Speicher9]
The following relation between moments and conditionally free cumulants holds:
Here, a block $\pi _i$ of a noncrossing partition $\pi \in NC_n$ is called “inner” if there exist a block $\pi _j$ and $a,b\in \pi _j$ such that $a<c<b$ for all $c\in \pi _i$ . A block which is not inner is called “outer.”
Conditionally free cumulants contain both free and Boolean cumulants as limiting cases. More precisely, if we consider the case $\psi =\varphi $ , then (7.1) entails
by (5.9). On the other hand, if $\psi =\varepsilon $ is the trivial state, then
Theorem 7.2 [Reference Ebrahimi-Fard and Patras14]
Let $\alpha _1,\alpha _2$ be two infinitesimal characters of the double tensor algebra and denote by $\mathcal E_\succ (\alpha _1)$ and $\mathcal E_\succ (\alpha _2)$ the corresponding right half-shuffle exponentials. The right half-shuffle Baker–Campbell–Hausdorff formula holds:
where $\Theta $ stands for the (shuffle) adjoint action:
Proof Let $X=\mathcal E_\succ (\alpha _1)$ and $Y=\mathcal E_\succ (\alpha _2)$ . By definition of the shuffle product, we have that
Now, observe that
This implies the result using the definition of $\mathcal {L}_\succ $ .▪
Returning to Definition 4.4, because $\Phi =\mathcal E_\succ (\Theta _{\Psi }(R^{\varphi ,\psi }))$ and $\Phi ^{-1}=\mathcal E_\prec (-\Theta _{\Psi }(R^{\varphi ,\psi }))$ , we may now express the free Wick map $\mathrm{W}=\mathrm{id}\ast \Phi ^{-1}$ in terms of the conditionally free cumulants $R^{\varphi ,\psi }$ as
A computation similar to the Boolean case yields
Definition 7.3 The conditionally free Wick polynomials are defined to be
This means
From (7.4), we deduce a somewhat intricate recursion for the inverse of the conditionally free Wick map:
Starting again from the identity
we obtain, after some simple manipulations,
where we have used Theorem 7.2 in the last equality. Hence, we have that the free Wick maps $\mathrm{W}$ and $\mathrm{W}^\psi :=(\mathrm{id} * \Psi ^{-1}) $ are related:
Finally, we observe from (7.4) that, in the cases $\Psi =\Phi $ and $\Psi =\varepsilon $ , we recover the free and Boolean Wick maps $\mathrm{W}$ and $\mathrm{W}^{\prime }$ , respectively.
8 Wick polynomials as group actions
Observe that the coproduct defined in Definition 4.2 is linear on the left and polynomial on the right factor when restricted to $T(A)$ , i.e., $\Delta \colon T(A)\to \overline T(A)\otimes \overline T(T(A))$ . This means, in particular, that $\overline T(A)$ is a right comodule over $\overline T(T(A))$ , simply by coassociativity. Thus, we can induce an action of the group G of characters over $\overline T(T(A))$ on the space $\operatorname {End}(\overline T(A))$ of linear endomorphisms of $\overline T(A)$ by setting
More precisely, we have the following proposition.
Proposition 8.1 Given $\Psi \in G$ and $L \in \operatorname {End}(\overline T(A))$ , define $L. \Psi \in \operatorname {End}(\overline T(A))$ as above. Then, $(\Psi ,L) \mapsto L.\Psi $ defines a (right) action of G on $\operatorname {End}(\overline T(A))$ .
Proof Let $\Psi _1,\Psi _2 \in G$ and $L \in \operatorname {End}(\overline T(A))$ . Clearly, $L.\Psi \in \operatorname {End}(\overline T(A))$ and
so the mapping $(\Psi _1,L)\mapsto L.\Psi _1$ is an action of G on $\operatorname {End}(\overline T(A))$ .▪
In the following, we identify implicitly the (various) notions of Wick polynomials with the (various) restrictions of the Wick maps to $\overline T(A)$ . So, in this section and the following, $\mathrm{W}$ denotes the restriction of $\mathrm{W}$ to $\overline T(A)$ , and so on (as should be anyway clear from the context).
As we have seen above, the orbit of the identity map ${\mathrm{id}}\in \operatorname {Aut}(\overline T(A))$ consists only of automorphisms of $\overline T(A)$ and we have the inversion formula for the composition of endomorphisms $({\mathrm{id}}.\Psi )^{-1}={\mathrm{id}}.\Psi ^{-1}$ where, on the right-hand side, $\Psi $ is inverted with respect to convolution. The free Wick polynomials $\mathrm{W}=\mathrm{id}.\Phi ^{-1}$ are elements in the orbit of the identity endomorphism by the group action of G on $\operatorname {End}(\overline T(A))$ .
Regarding the left half-unshuffle coproduct $\Delta _\prec ^+$ , we get from (5.2) that $(T(A),\Delta _\prec )$ is also a right-comodule over $(\overline T(T(A)), \Delta )$ . At the level of endomorphisms, we obtain the following proposition.
Proposition 8.2 Let $L\in \operatorname {End}(T(A))$ and $\Psi \in G$ . The composition $(\Psi ,L)\mapsto L^\Psi :=(L\otimes \Psi )\Delta _\prec $ defines a (right) action.
Thus, we might reinterpret the Boolean Wick polynomials $\mathrm{W}^{\prime }=\mathrm{e}+(\mathrm{W}-\mathrm{e})\prec \Phi $ as being given on $T(A)$ by a combined action $\mathrm{W}^{\prime }=\mathrm{e}+({\mathrm{id}}.\Phi ^{-1}-\mathrm{e})^{\Phi }$ . More generally, the relation between the conditionally free and free Wick polynomials can be re-expressed on $T(A)$ as
Neglecting the degree zero (that is, the $\mathrm{e}$ ) terms, the relations between free, Boolean, and conditionally free Wick polynomials are encoded by the following diagram:
9 Free, Boolean, and conditionally free Wick products
Let $(A,\varphi )$ be a noncommutative probability space. Let $F \colon \overline T(A)\to \overline T(A)$ be an invertible linear map such that $F(\mathbf{1})=\mathbf{1}$ . One can induce a modified product ${\bullet }$ on $\overline T(A)$ by conjugacy, that is, setting $w\bullet w' := F(F^{-1}(w)F^{-1}(w'))$ . Associativity follows from associativity of the concatenation product on $\overline T(A)$ . Therefore, F becomes a unital algebra morphism from $(\overline T(A),\otimes )$ to $(\overline T(A),\bullet )$ .
Because the maps $\mathrm{W}$ , $\mathrm{W}^{\prime }$ , and $\mathrm{W}^c$ are all invertible when acting on $\overline T(A)$ , we obtain from this construction three new products on $\overline T(A)$ .
Definition 9.1 The three associative products on $\overline T(A)$ induced by the three Wick maps $\mathrm{W}$ , $\mathrm{W}^{\prime }$ , and $\mathrm{W}^c$ are denoted by $\bullet $ , $\odot $ , and $\times $ and called the free, Boolean, and conditionally free Wick products, respectively. The Wick maps are morphisms of algebras when $\overline T(A)$ is equipped with either of these new products. In particular, for $a\in A$ ,
and similarly for the other cases.
The conjugacy formula gives the rule for computing the new products. For example, in the free and Boolean cases, we find the following:
Proposition 9.2
-
(1) The free Wick product $\bullet $ admits the following closed-form formula: for words $w=a_1\dotsm a_n$ and $w'=a_{n+1}\dotsm a_{n+m}$ in $T(A)$ , we find
$$ \begin{align*} w\bullet w'=\sum_{S\subseteq[n+m]}\mathrm{W}(a_S)\Phi(a_{K^S_1})\dotsm\Phi(a_{K_l^S}), \end{align*} $$where the $K_i^S,i=1,\dots , l$ , run over the connected components of $[n]\setminus([n]\cap S)$ and $(n+[m])\setminus((n+[m])\cap S)$ .
(2) The Boolean Wick product $\odot $ admits the following closed-form formula: for words $w=a_1\dotsm a_n$ and $w'=b_{1}\dotsm b_{m}$ in $T(A)$ , we find
$$ \begin{align*} w \odot w'= \sum_{\substack{0 \leq i \leq n\\0 \leq j \leq m}} \Phi(a_1 \cdots a_i \vert b_{1} \cdots b_j) \mathrm{W}'(a_{i+1} \cdots a_n b_{j+1} \cdots b_m). \end{align*} $$
Proof
-
(1) Set $b_i:= a_{n+i}, \ i=1,\dots , m$ . Because the inverse free Wick map is the map $\mathrm{W}^{\circ -1}=({\mathrm{id}}\otimes \Phi )\Delta $ , we have that
$$ \begin{align*} \mathrm{W}^{\circ -1}(w)\mathrm{W}^{\circ -1}(w') &=\sum_{S\subseteq[n]}\sum_{S'\subseteq [m]} a_S\,b_{S'}\,\Phi(a_{J_1^S})\dotsm\Phi(a_{J_{k(S)}^S})\Phi(b_{J_1^{S'}})\dotsm\Phi(b_{J_{k(S')}^{S'}}). \end{align*} $$By re-expressing in terms of the $a_i$ , we get$$ \begin{align*}\mathrm{W}^{\circ -1}(w)\mathrm{W}^{\circ -1}(w') =\sum_{S\subseteq[n+m]}a_S\,\Phi(a_{K^S_1})\dotsm\Phi(a_{K_l^S}). \end{align*} $$The conclusion then follows by applying $\mathrm{W}$ to both sides of this identity. -
(2) Recall Proposition 6.5 stating that the inverse Boolean Wick map is given recursively ${\mathrm{W}^{\prime }}^{\circ -1}=\mathrm{id} + {\mathrm{W}^{\prime }}^{\circ -1} \succ \beta $ such that
$$ \begin{align*}{\mathrm{W}^{\prime}}^{\circ -1}(a_1\dotsm a_n) = \sum_{j=0}^n \Phi(a_1\dotsm a_j) a_{j+1}\dotsm a_n. \end{align*} $$Then, we have$$ \begin{align*}{\mathrm{W}^{\prime}}^{\circ -1}(a_1\dotsm a_n) {\mathrm{W}^{\prime}}^{\circ -1}(b_1\dotsm b_m)= \sum_{\substack{0 \leq i \leq n\\ 0 \leq j \leq m}} \Phi(a_1 \cdots a_i \vert b_{1} \cdots b_j) a_{i+1} \cdots a_n b_{j+1} \cdots b_m. \end{align*} $$The conclusion then follows by applying $\mathrm{W}'$ to both sides of this identity.▪
Remark 9.3 A closed formula for the conditionally free Wick products follows from using the recursion (7.5):
Applying ${\mathrm{W}^c}$ on both sides gives the conditionally free Wick product
10 Tensor cumulants
We now briefly show how our approach allows us to lift the classical notion of cumulants to the noncommutative setting and to revisit the notion of tensor cumulants [Reference Nica and Speicher21] as a warm-up for the definition of tensor Wick polynomials.
As before, we work on a noncommutative probability space $ (A,\varphi )$ (see Definition 4.1). On $\overline T(A)$ , the unshuffle coproduct is defined by declaring elements in $A \hookrightarrow \overline T(A)$ to be primitive and extending it multiplicatively to all of $\overline T(A)$ . As a result, one gets that for any $a_1,\dotsc , a_n \in A$ ,
where we have set $a_\emptyset :=\mathbf{1}$ and
for $U=\{u_1 < \cdots < u_p\}\subseteq [n]$ . This endows the unital tensor algebra with the structure of a cocommutative graded connected Hopf algebra. The antipode reverses the order of the letters in a word and multiplies it by a minus sign if the word has odd length.
Its dual $\overline T(A)^*$ is a commutative algebra with the convolution product defined for linear maps $\mu ,\nu \colon \overline T(A) \to \mathbb {K}$ by the commutative shuffle product
The unit for this product is the counit $\varepsilon \colon \overline T(A) \to \mathbb {K}$ , which is uniquely defined by $\ker \varepsilon =T(A)$ and $\varepsilon (\mathbf{1})=1$ . See, e.g., [Reference Reutenauer23] for details.
The generalized expectation map $\varphi $ permits us to define a linear map $\phi \colon \overline T(A)\to \mathbb {K}$ by setting $\phi (a_1\dotsm a_n) = \varphi (a_1\cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_n)$ and $\phi (\mathbf{1})=1$ .
The grading on $\overline T(A)$ permits us to think of $\phi $ as a graded series
where $\phi _n\colon T(A) \to \mathbb {K}$ is a linear map vanishing outside $T_n(A)$ , the degree n component of $T(A)$ . In this way, we may regard the map $\phi $ as being some kind of generalized moment-generating function. Because the algebra $T(A)$ is graded by the length of words and connected ( $T_0(A)=\mathbb K\mathbf{1}$ ), the exponential and logarithm maps define inverse bijections between unital linear maps on $\overline {T}(A)$ and reduced maps (maps that vanish on $\mathbb {K}$ , the degree zero component). In particular, there exists a unique linear map $c \in \overline T(A)^*$ with $c(\mathbf{1})=0$ such that
where $\phi ^{-1}$ is the inverse of $\phi $ for the shuffle product (10.2).
Definition 10.1 The tensor cumulant map associated to $\phi $ is the linear map $c\colon \overline T(A) \to \mathbb {K}$ defined by
Its evaluations $c(a_1\cdots a_n) \in \mathbb {K}$ are also written $c(a_1,\dots ,a_n)$ and are called the multivariate tensor cumulants associated to the sequence $(a_1,\dots ,a_n)$ of noncommutative random variables.
The defining relation
is a version in a noncommutative context of the usual formula relating the moment- and cumulant-generating functions (see (2.3)). From (10.1), we see that for any $j>0$ , the iterated reduced coproduct
is given by
and
where $P_j(n)$ is the collection of all set partitions $\pi =\{B_1,\ldots ,B_j\}$ of $[n]:=\{1,\dotsc ,n\}$ into j disjoint subsets and $\mathbb S_j$ is the jth symmetric group (recall that for $x\in T(A)$ ,
). From $c(\mathbf{1})=0$ , we deduce
giving the multidimensional version of formula (2.3):
Recall that $c (a_B):=c (a_{i_1}, \ldots , a_{i_{|B|}})$ , for $B=\{i_1 < \cdots < i_{|B|}\}$ , is the multivariate cumulant of order $|B|$ and $P(n)$ is the collection of all set partitions of $[n]$ . In fact, many other versions of this relation can be recovered from the properties of the underlying Hopf algebra. See [Reference Ebrahimi-Fard and Patras10, Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15] for further details. The important point here is that set partitions appear naturally in (10.4) through formula (10.3) due to the definition of the coproduct in (10.1).
10.1 Tensor Wick polynomials
It turns out that the same Hopf-algebraic framework used for describing the tensor moment-cumulant relations allows us to obtain an explicit description of tensor Wick polynomials (which can be understood as a natural noncommutative lift of the classical Wick polynomials).
Definition 10.2 The tensor Wick map $W_T\colon \overline T(A)\to \overline T(A)$ is defined by
Its inverse $W_T^{-1}\colon \overline T(A)\to \overline T(A)$ is given by
Given a sequence $(a_1,\dots ,a_n)\in A$ , $W_T(a_1\cdots a_n)$ is called the tensor Wick polynomial associated to this sequence.
Let us compute a few examples using the reduced unshuffle coproduct (10.3) and the fact that the inverse $\phi ^{-1} \colon \overline T(A)\to \mathbb {K}$ is given by the Neumann series
Then, the first three tensor Wick polynomials in $\overline T(A)$ are
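These low-degree computations can be reproduced with a short symbolic sketch (ours). It computes the coefficients of $W_T(a_1\cdots a_n)$ on subwords $a_U$, with hypothetical symbols phi_1, phi_12, ... standing for the moments, using the inverse $\phi^{-1}$ determined recursively by $\phi * \phi^{-1}=\varepsilon$ and, in parallel with Definition 4.4, the expansion of $W_T$ over the terms of the unshuffle coproduct; the letters are assumed pairwise distinct and labeled by their positions.

```python
import sympy as sp
from itertools import combinations
from functools import lru_cache

def phi(letters):
    """Symbolic moment phi(a_{i1}...a_{ik}); naming is purely illustrative."""
    return sp.Symbol('phi_' + ''.join(map(str, letters)))

@lru_cache(None)
def phi_inv(letters):
    """Inverse of phi for the convolution dual to the unshuffle coproduct,
       determined recursively by (phi * phi_inv)(w) = 0 for nonempty words w."""
    if not letters:
        return sp.Integer(1)
    total = sp.Integer(0)
    for r in range(1, len(letters) + 1):
        for U in combinations(letters, r):
            rest = tuple(i for i in letters if i not in U)
            total -= phi(U) * phi_inv(rest)
    return sp.expand(total)

def tensor_wick(letters):
    """Coefficients of W_T(a_1...a_n) on the subwords a_U (letters assumed distinct):
       each subset U of letters receives the coefficient phi_inv(complement of U)."""
    return {U: phi_inv(tuple(i for i in letters if i not in U))
            for r in range(len(letters) + 1)
            for U in combinations(letters, r)}

print(tensor_wick((1,)))     # {(): -phi_1, (1,): 1}
print(tensor_wick((1, 2)))   # constant term: 2*phi_1*phi_2 - phi_12
```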
Remark 10.3 The tensor Wick map $W_T$ associates to each $w \in \overline T(A)$ a noncommutative polynomial $W_T(w)$ in $\overline T(A)$ . That said, if the algebra A is commutative, then those noncommutative polynomials map under the evaluation $ev\colon \ a_1 \cdots a_n\longmapsto a_1 \cdot _{\!\scriptscriptstyle {A}} \dots \cdot _{\!\scriptscriptstyle {A}} a_n$ to the classical multivariate Wick polynomials. In particular, we have that, in this case,
for a single element $a\in A$ [Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15].
Observe that, by definition, we have that , so we get a tensor version of relation (2.5):
Applying the evaluation map, the resulting relation is sometimes used as a recursive definition of the Wick polynomials [Reference Hairer and Shen18] in terms of moments or cumulants.
Because $\phi $ is not a character on $\overline {T}(A)$ (it is not multiplicative: $\phi (a_1a_2)=\phi (a_1 \cdot _{\!\scriptscriptstyle {A}} a_2)$ is different from the product $\phi (a_1)\phi (a_2)$ in general), it is not an element in the group of characters, i.e., the group-like elements in the completion of the dual graded Hopf algebra. Therefore, the $\exp /\log $ correspondence between tensor cumulants and moments cannot be analyzed from a Lie theoretic point of view. We refer the reader to [Reference Ebrahimi-Fard and Patras14] for a discussion of the group and Lie algebra correspondence in the context of free probability. The map $\phi $ has then a unique extension $\Phi \colon \overline T(T(A))\to \mathbb {K}$ as an algebra character. The unshuffle coproduct on the tensor algebra $\overline {T}(A)$ also admits a unique extension
as an algebra morphism:
where the unit of $\overline {T}(A)$ is implicitly identified with the unit of $\overline {T}(T(A))$ .
The following proposition and theorem are variants of the corresponding results in [Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15], where they were obtained in the case where the algebra A is commutative.
Proposition 10.4 The double tensor algebra $\overline T(T(A))$ with the coproduct is a graded connected Hopf algebra, where $\deg (w_1\vert \dotsm \vert w_n)= \deg (w_1)+\dotsb +\deg (w_n)$ . Its antipode $\mathcal S$ is the unique algebra anti-automorphism of $\overline T(T(A))$ such that
for all $a_1,\dotsc ,a_n\in A$ .
As a consequence, we obtain the following result, using either Proposition 10.4 and lifting the computation of $W_T$ to $\overline T(T(A))$ , or directly the definition of $W_T$ .
Theorem 10.5 The tensor Wick map admits the explicit expansion
Another point addressed in [Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15] is the fact that Wick powers do not satisfy the usual rules of calculus: for example, because $:\!X\!:=X-\mathbb EX$ and $:\!X^2\!:=X^2-2X\mathbb EX+2(\mathbb EX)^2-\mathbb EX^2$ , we see that $:X:\!\!\cdot :X: \neq :X^2:\!\!$ . Nonetheless, using Hopf-algebraic techniques, the invertibility of the Wick map allowed us to define a modified product $\cdot _{\!\scriptscriptstyle {\varphi }}$ on polynomials such that
and a similar formula holds in the multivariate case. Because $W_T$ is a linear automorphism of $\overline {T}(A)$ , these observations can be adapted to the tensor case as in Section 9.