
Wick polynomials in noncommutative probability: a group-theoretical approach

Published online by Cambridge University Press:  25 August 2021

Kurusch Ebrahimi-Fard
Affiliation:
Department of Mathematical Sciences, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway e-mail: kurusch.ebrahimi-fard@ntnu.no
Frédéric Patras
Affiliation:
Laboratoire J.A. Dieudonné, Université Côte d’Azur, CNRS, UMR 7351, Parc Valrose, Nice, France e-mail: frederic.patras@unice.fr
Nikolas Tapia*
Affiliation:
Weierstrass Institute, Mohrenstraße 39, 10117 Berlin, Germany and Institute of Mathematics, Technische Universität Berlin, Straße des 17. Juni 136, 10587 Berlin, Germany
Lorenzo Zambotti
Affiliation:
Laboratoire de Probabilités, Statistiques et Modélisation, Sorbonne Université, Université de Paris, 4 Place Jussieu, 75005 Paris, France e-mail: zambotti@lpsm.paris

Abstract

Wick polynomials and Wick products are studied in the context of noncommutative probability theory. It is shown that free, Boolean, and conditionally free Wick polynomials can be defined and related through the action of the group of characters over a particular Hopf algebra. These results generalize our previous developments of a Hopf-algebraic approach to cumulants and Wick products in classical probability theory.

Type: Article
Copyright: © Canadian Mathematical Society 2021

1 Introduction

Moment-cumulant relations and Wick products play a central role in probability theory and related fields [Reference Akhiezer1, Reference Peccati and Taqqu22]. In classical probability, cumulant sequences $(c_n)_{n\in {\mathbf{N}}^\ast }$ linearize the notion of independence of random variables: if two random variables, $X,Y$ , with moments of all orders are independent, then for $n\geq 1$ , $c_n(X+Y)=c_n(X)+c_n(Y)$ . Wick polynomials, Wick products, and chaos expansions are related to cumulants. Indeed, recall, for example, that given a random variable X with moments of all orders, the Wick polynomial $W(X^n)$ is the coefficient of $\frac {t^n}{n!}$ in the expansion of $\exp (tX-K(t))$ , where $K(t)$ is the exponential generating series of cumulants.

Voiculescu’s theory of free probability [Reference Voiculescu28, Reference Voiculescu, Dykema and Nica29] provides the paradigm of a noncommutative probability theory, where the notion of freeness replaces the classical concept of probabilistic independence. Speicher showed that free cumulants linearize Voiculescu’s notion of freeness. See [Reference Nica and Speicher21, Reference Sarah and Schurmann24] for detailed introductions. Following Voiculescu’s ideas, various authors [Reference Bożejko8, Reference Hasebe and Saigo19, Reference Muraki20, Reference Speicher26, Reference Speicher, Woroudi and Voiculescu27] considered different types of independence (Boolean, monotone, and others), each characterized by particular moment-cumulant relations with explicit combinatorial descriptions given in terms of different types of set partitions. Relations between the different brands of cumulants were thoroughly explored by Arizmendi et al. in [Reference Arizmendi, Hasebe, Lehner and Vargas6]. Free and Boolean Wick polynomials have been introduced in this setting by Anshelevich [Reference Anshelevich2–Reference Anshelevich4].

In a previous paper [Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15], the authors presented a Hopf-algebraic framework describing both the combinatorial structure of the classical moment-cumulant relations as well as the related notions of Wick polynomials and Wick products. The approach is based on convolution products of linear functionals defined on a coalgebra and encompasses the multidimensional extension of the moment-cumulant relations. In this framework, classical Wick polynomials result from a Hopf-algebraic deformation under the action of linear automorphisms induced by multivariate moments associated to an arbitrary family of random variables with moments of all orders.

In a series of recent papers [Reference Ebrahimi-Fard and Patras10, Reference Ebrahimi-Fard and Patras12, Reference Ebrahimi-Fard and Patras14], two of us explored relations between multivariate moments and free, Boolean, and monotone cumulants, as well as relations among the latter, in noncommutative probability theory by studying a particular graded connected Hopf algebra H defined on the double tensor algebra over a noncommutative probability space $(A,\varphi )$ . In this approach, the associated set partitions (noncrossing, interval, and monotone, respectively) appear through the evaluation of elements of the group G (Lie algebra $\mathfrak g$ ) of (infinitesimal) Hopf-algebraic characters on words.

In the paper at hand, we revisit from a Hopf-theoretic point of view the theory of free, Boolean, and conditionally free Wick polynomials. The relevance of shuffle group actions and structures in the sense of [Reference Ebrahimi-Fard and Patras14] is also emphasized.

The article is organized as follows. In Section 2, we recall the definitions of classical cumulants and Wick polynomials. In Section 3, we do the same for free and Boolean cumulants. Section 4 defines free Wick polynomials using the Hopf-algebraic approach. The new definition is shown to extend Anshelevich’s definition of multivariate free Appell polynomials. At the beginning of Section 5, we introduce the shuffle-theoretic framework that allows us to deal with noncommutative moment-cumulant relations and the corresponding noncommutative Wick polynomials. Section 5.1 accordingly revisits moment-cumulant relations in noncommutative probability theory, following mainly the references [Reference Ebrahimi-Fard and Patras10–Reference Ebrahimi-Fard and Patras12]. Section 5.2 develops shuffle calculus for free Wick polynomials. In Section 6, Boolean Wick polynomials are also introduced and analyzed from this point of view. Section 7 uses the same approach to define conditionally free Wick polynomials. In Section 8, we show how the three notions of noncommutative Wick polynomials can be related through comodule structures and the induced group actions. Section 9 shows how the classical notion of Wick products generalizes naturally to the noncommutative setting, inducing three new associative algebra structures on the tensor algebra over a noncommutative probability space. Finally, in Section 10, we show using a Hopf-algebraic approach how the definition of classical cumulants lifts to the notion of tensor cumulants for random variables in a noncommutative probability space. In Section 10.1, we explain how this leads to the definition of tensor Wick polynomials. These two sections extend the results of [Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15] from the classical to the tensor framework.

Below, $\mathbb {K}$ denotes the base field of characteristic zero over which all algebraic structures are defined. All (co)algebras are (co)associative and (co)unital unless otherwise stated.

2 Cumulants and Wick polynomials

Let us first recall briefly the definition of classical cumulants and Wick polynomials. Let X be a real-valued random variable, defined on a probability space $(\Omega ,\mathcal F,\mathbb P)$ , with finite moments of all orders, i.e., such that $m_n := \mathbb EX^n<\infty $ for all $n>0$ . Its exponential moment-generating function is defined as a power series in t

(2.1) $$ \begin{align} M(t):=\mathbb E\, \exp({tX}) = 1+ \sum_{n> 0}m_n\frac{t^n}{n!}. \end{align} $$

If we assume suitable growth conditions on the coefficients $m_n$ so that the above series has a positive radius of convergence, then this power series defines a function of class $C^\infty $ around the origin, and the moments $m_n$ can be recovered from it by differentiation.

The exponential cumulant-generating function

$$ \begin{align*} K(t):= \sum_{n>0} c_n \frac{t^n}{n!} \end{align*} $$

is a power series in t defined through the classical exponential relation between moments and cumulants

(2.2) $$ \begin{align} M(t)=\exp\left( K(t) \right). \end{align} $$

Using standard power series manipulations, this equation can be rewritten as:

(2.3) $$ \begin{align} m_n=\sum_{\pi\in P(n)}\prod_{B\in\pi}c_{|B|}. \end{align} $$

Here, $P(n)$ denotes the collection of all set partitions, $\pi :=\{B_1,\ldots ,B_l\}$ , of the set $[n]:=\{1,\dotsc ,n\}$ , where the block $B_i \in \pi $ contains $|B_i|$ elements. In general, for a finite subset $U \subset \mathbb N$ , we denote by $P(U)$ the collection of all set partitions of U.
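As a sanity check, relation (2.3) can be evaluated by brute-force enumeration of set partitions. The following Python sketch (our own illustration, not code from the paper) recovers the moments of a standard Gaussian, where $c_2=1$ and all other cumulants vanish, so that the even moments are the double factorials $1,3,15,\dots$:

```python
def set_partitions(elements):
    """Enumerate all set partitions of a list, as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # put `first` in its own block ...
        yield [[first]] + partition
        # ... or insert `first` into each existing block in turn
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]

def moment(n, c):
    """m_n = sum over partitions pi of [n] of prod_{B in pi} c_{|B|};
    c[k] holds the k-th cumulant (c[0] is unused)."""
    total = 0
    for pi in set_partitions(list(range(1, n + 1))):
        prod = 1
        for block in pi:
            prod *= c[len(block)]
        total += prod
    return total

# standard Gaussian: c_2 = 1, all other cumulants vanish
c = [0, 0, 1, 0, 0, 0, 0]
print([moment(n, c) for n in range(1, 7)])  # [0, 1, 0, 3, 0, 15]
```

For $n=4$ only the three pair partitions contribute, giving $m_4=3$, in agreement with the Gaussian moments.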

Let $(X_1,\dotsc ,X_p)$ be a finite collection of real-valued random variables defined on a common probability space, such that all the moments $m_{\mathbf{n}}:=\mathbb E[X_1^{n_1}\dotsm X_p^{n_p}]$ exist, where $\mathbf{n}:=(n_1,\dotsc ,n_p)\in \mathbb N^p$ is a multi-index. We may consider a multivariate extension of (2.1), namely

(2.4) $$ \begin{align} M(t_1,\dotsc,t_p) &:=\mathbb E\, \exp({t_1X_1+\dotsb+t_pX_p}) \nonumber \\ &=:\sum_{\mathbf{n}}m_{\mathbf{n}}\frac{t^{\mathbf{n}}}{\mathbf{n}!}, \end{align} $$

where $t^{\mathbf{n}}:= t_1^{n_1}\dotsm t_p^{n_p}$ and $\mathbf{n}!:= n_1!\dotsm n_p!$ . As before, the cumulant-generating function is defined by a relation analogous to (2.2), and its coefficients are related to the moments in a way analogous to (2.3). This relation will be revisited in the following sections.

There exists a particular family of polynomials associated to a random variable X with finite moments of all orders, called Wick polynomials and denoted here by $W_n(x)$ , $n\ge 0$ . It turns out to be the unique family of polynomials such that $W_0(x)=1$ and

$$ \begin{align*} \mathbb E\, W_n(X)=0, \qquad \frac{\mathrm{d}}{\mathrm{d}x}W_n(x)=nW_{n-1}(x), \end{align*} $$

for all $n>0$ . The latter defining property means that $(W_n)_{n\ge 0}$ qualifies as a sequence of Appell polynomials [Reference Appell5]. For example, if X is a standard Gaussian random variable, this family coincides with the Hermite polynomials. These polynomials play an important role in physics. In particular, the Wick exponential

$$ \begin{align*} :\exp:(tX):=\sum_{n\ge0}W_n(X)\frac{t^n}{n!} =\frac{\exp({tX})}{\mathbb E \exp({tX})}=\exp({tX-K(t)}) \end{align*} $$

is closely related to moment- and cumulant-generating functions. In fact, this relation can be used to define Wick polynomials, because the exponential power series in t serves as a generating function.

The polynomial

$$ \begin{align*}:X^n::= W_n(X) \end{align*} $$

is called the nth Wick power of X. For example,

$$ \begin{align*} :X:=X-\mathbb EX, \quad :X^2:=X^2-2X\,\mathbb EX+2(\mathbb EX)^2-\mathbb EX^2, \quad \dotsc. \end{align*} $$

In general, these explicit expansions can be recursively obtained from the change of basis relation

(2.5) $$ \begin{align} x^n=\sum_{j=0}^n\binom{n}{j}W_j(x)\,m_{n-j}. \end{align} $$

The latter can be generalized to finite collections $(X_1,\dotsc ,X_p)$ of random variables in a way analogous to (2.4).
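The change-of-basis relation (2.5) determines the $W_n$ recursively from the moments. A short Python sketch (ours, for illustration; polynomials are represented as coefficient lists indexed by the power of x) recovers the probabilists' Hermite polynomials from the standard Gaussian moments:

```python
from math import comb

def wick_polys(m, N):
    """Compute W_0,...,W_N as coefficient lists (index = power of x)
    from the moment sequence m, via x^n = sum_j C(n,j) W_j(x) m_{n-j}."""
    W = [[1]]  # W_0(x) = 1
    for n in range(1, N + 1):
        coeffs = [0] * (n + 1)
        coeffs[n] = 1                       # start from x^n ...
        for j in range(n):                  # ... and subtract C(n,j) W_j m_{n-j}
            for p, cj in enumerate(W[j]):
                coeffs[p] -= comb(n, j) * cj * m[n - j]
        W.append(coeffs)
    return W

# standard Gaussian moments 1, 0, 1, 0, 3, ...: the Wick polynomials are
# the (probabilists') Hermite polynomials He_n
m = [1, 0, 1, 0, 3, 0, 15]
W = wick_polys(m, 4)
print(W[3])  # He_3(x) = x^3 - 3x, i.e. [0, -3, 0, 1]
```

The recursion also makes the centering $\mathbb E\,W_n(X)=0$ transparent: taking expectations in (2.5) forces it degree by degree.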

3 Free and Boolean cumulants

Voiculescu introduced free probability theory in the 1980s [Reference Voiculescu28, Reference Voiculescu, Dykema and Nica29]. In this theory, the classical notion of independence is replaced by the algebraic notion of freeness. A family of unital subalgebras $(B_i:i\in I)$ of a noncommutative probability space $(A,\varphi )$ is called freely independent (or free), if $\varphi (a_1 \cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_n)=0$ whenever $\varphi (a_j)=0$ for all $j=1,\dotsc ,n$ and $a_j\in B_{i_j}$ for some indices $i_1\neq i_2 \neq \dotsb \neq i_n$ .

Speicher introduced the notion of free cumulants [Reference Speicher26] as the right analogue of the classical cumulants in the theory of free probability, allowing for a more tractable characterization of Voiculescu’s notion of freeness. Free cumulants are defined by a formula analogous to (2.3) where the lattice P of set partitions is replaced by the lattice $\operatorname {NC}$ of noncrossing partitions:

(3.1) $$ \begin{align} \varphi(a_1 \cdot_{\!\scriptscriptstyle{A}} \dotsm \cdot_{\!\scriptscriptstyle{A}} a_n) =\sum_{\pi\in \operatorname{NC}([n])}\prod_{B\in\pi}k(a_B). \end{align} $$

As above, we set $k (a_B):= k (a_{i_1}, \ldots , a_{i_{|B|}})$ , for $B=\{i_1 < \cdots < i_{|B|}\}$ , to be the multivariate free cumulant of order $|B|$ . Free cumulants reflect freeness in the sense that they vanish whenever the involved random variables belong to different freely independent subalgebras.

Relation (3.1) between moments and free cumulants can be concisely expressed in terms of their ordinary generating functions. Indeed, given $a_1,\dots ,a_n$ in A, introduce noncommuting variables $w_1,w_2,\dots ,w_n$ and the generating functions

$$ \begin{align*} M(w):=1+\sum_{\mathbf{n}}\varphi(a_{\mathbf{n}})\,w_{\mathbf{n}}, \quad R(w):=\sum_{\mathbf{n}} k(a_{\mathbf{n}})\,w_{\mathbf{n}}. \end{align*} $$

Here, we define $\varphi (a_{\mathbf{n}}):=\varphi (a_{n_1} \cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_{n_p})$ for the multi-index $\mathbf{n} := (n_1,\dotsc ,n_p)\in [n]^p,\ p\in {\mathbb N}^\ast $ , and similarly for $k(a_{\mathbf{n}})=k(a_{n_1}, \ldots , a_{n_p})$ . Then, (3.1) is summarized by the intriguing identity [Reference Anshelevich2, Reference Nica and Speicher21]

$$ \begin{align*}M(w)=1+R(z), \end{align*} $$

where the substitution

(3.2) $$ \begin{align} z_i:= w_iM(w) \end{align} $$

is in place on the right-hand side.
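In the one-variable case, relation (3.1) can again be checked by brute force, now summing over noncrossing partitions only. The sketch below (our illustration; the crossing test is the textbook one) recovers the moments of the semicircle law, whose only nonvanishing free cumulant is $k_2=1$ and whose even moments are the Catalan numbers $1,2,5,\dots$:

```python
def set_partitions(elements):
    """Enumerate all set partitions of a list, as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for p in set_partitions(rest):
        yield [[first]] + p
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]

def noncrossing(pi):
    """pi crosses iff there are i < j < k < l with i,k in one block
    and j,l in another."""
    for A in pi:
        for B in pi:
            if A is B:
                continue
            for i in A:
                for k in A:
                    for j in B:
                        for l in B:
                            if i < j < k < l:
                                return False
    return True

def block_product(pi, k):
    p = 1
    for B in pi:
        p *= k[len(B)]
    return p

def free_moment(n, k):
    """phi(a^n) = sum over noncrossing partitions of [n] of prod k_{|B|}."""
    return sum(block_product(pi, k)
               for pi in set_partitions(list(range(1, n + 1)))
               if noncrossing(pi))

# semicircle law: k_2 = 1, all other free cumulants vanish
k = [0, 0, 1, 0, 0, 0, 0]
print([free_moment(n, k) for n in range(2, 7)])  # [1, 0, 2, 0, 5]
```

For $n=4$, the pairing $\{1,3\}\{2,4\}$ is excluded as crossing, leaving $m_4=2$ rather than the classical value $3$.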

The fact that the random variables under consideration do not commute entails that we are able to consider several other notions of independence in addition to Voiculescu’s freeness. For example, the notion of Boolean cumulants appears naturally in the context of the study of stochastic differential equations [Reference von Waldenfels, Behara, Krickeberg and Wolfowitz30]. Speicher and Woroudi [Reference Speicher, Woroudi and Voiculescu27] defined the multivariate Boolean cumulants, $ b(a_1, \ldots , a_n)$ , and the corresponding relations with moments in the context of noncommutative probability theory in terms of the following recursion:

$$ \begin{align*} \varphi(a_1 \cdot_{\!\scriptscriptstyle{A}} \dotsm \cdot_{\!\scriptscriptstyle{A}} a_n) =\sum_{j=1}^n b(a_1, \ldots , a_j) \, \varphi(a_{j+1} \cdot_{\!\scriptscriptstyle{A}} \dotsm\cdot_{\!\scriptscriptstyle{A}} a_n). \end{align*} $$

While the combinatorics of free cumulants is described by the lattice of noncrossing partitions, the relation between moments and Boolean cumulants can be expressed by using the lattice $\operatorname {Int}$ of interval partitions:

$$ \begin{align*} \varphi(a_1 \cdot_{\!\scriptscriptstyle{A}} \dotsm \cdot_{\!\scriptscriptstyle{A}} a_n) =\sum_{\pi\in \operatorname{Int}([n])}\prod_{B\in\pi} b(a_B), \qquad b(a_B):= b(a_{i_1}, \ldots, a_{i_{|B|}}). \end{align*} $$

Using the multi-index notation from above, these relations can be encapsulated in a single identity by introducing the generating function

$$ \begin{align*} \eta(w):=\sum_{\mathbf{n}}b(a_{\mathbf{n}})w_{\mathbf{n}}, \end{align*} $$

yielding the simple expression [Reference Anshelevich3, Reference Nica and Speicher21]

$$ \begin{align*}M(w)=1+\eta(w)M(w). \end{align*} $$

Observe that in this case, as opposed to the functional equation describing the relation between moments and free cumulants, there is no substitution such as (3.2) to be made.
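In one variable, the Speicher–Woroudi recursion reads $m_n=\sum_{j=1}^n b_j\,m_{n-j}$ with $m_0=1$, which the following sketch (ours) implements. Taking $b_2=1$ and all other Boolean cumulants zero, one obtains the moments of a symmetric Bernoulli variable (values $\pm 1$ with probability $1/2$), whose even moments all equal $1$:

```python
def boolean_moments(b, N):
    """Univariate Speicher-Woroudi recursion m_n = sum_{j=1}^n b_j m_{n-j},
    with m_0 = 1; b[j] holds the j-th Boolean cumulant."""
    m = [1]
    for n in range(1, N + 1):
        m.append(sum(b[j] * m[n - j] for j in range(1, n + 1)))
    return m

# symmetric Bernoulli: b_2 = 1, all other Boolean cumulants vanish
b = [0, 0, 1, 0, 0, 0, 0]
print(boolean_moments(b, 6))  # [1, 0, 1, 0, 1, 0, 1]
```

This matches the interval-partition formula: for $n=4$, only $\{1,2\}\{3,4\}$ contributes, since $\{1,4\}\{2,3\}$ is not an interval partition.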

Surprisingly, the relation between moments and the different types of cumulants can be described concisely as the action of linear maps on the double tensor algebra. For this, two of us introduced, in [Reference Ebrahimi-Fard and Patras10], a different coproduct which allows one to express these relations in a way similar to the presentation of the preceding sections.

4 Free Wick polynomials

In [Reference Ebrahimi-Fard and Patras10, Reference Ebrahimi-Fard and Patras12, Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15], a Hopf-algebraic approach to the moment-cumulant relations in both classical and noncommutative probability was introduced. It permits describing moment-cumulant relations in a rather different way, avoiding the use of generating functions.

Definition 4.1 A noncommutative probability space $(A,\varphi )$ consists of a unital algebra A together with a unital linear map $\varphi \colon A \to \mathbb {K}$ , i.e., $\varphi (1_A)=1$ .

To avoid ambiguities, we also denote the product of elements $a,b$ in the algebra A by $m_{A}(a \otimes b)=: a \cdot _{\!\scriptscriptstyle {A}} b$ . We still write $m_A$ for the iterates

$$ \begin{align*}m_A\colon a_1\otimes\dots \otimes a_n\longmapsto a_1 \cdot_{\!\scriptscriptstyle{A}} \dots \cdot_{\!\scriptscriptstyle{A}} a_n. \end{align*} $$

Notice that we do not require the algebra A to be commutative. The elements of A should be thought of in general as noncommutative random variables, and the map $\varphi $ plays then the role of the expectation map. Elements in A can represent, for example, operator-valued random variables such as those appearing in the Fock space approach to Quantum Field Theory [Reference Effros and Popa16].

We consider the nonunital tensor algebra over A

$$ \begin{align*} T(A):=\bigoplus_{n>0} A^{\otimes n}, \end{align*} $$

and we denote elements of $T(A)$ using word notation ( $a_1\cdots a_n=a_1\otimes \dots \otimes a_n$ ). It is graded by the number of letters, i.e., the length of a word. The unitalization of $T(A)$ follows from adding the empty word $\mathbf{1}$ and is denoted by $\overline T(A)=T_0(A)\oplus T(A):= \mathbb K\mathbf{1}\oplus T(A)$ . The product on $T(A)$ (resp. $\overline T(A)$ ) is given by concatenation of words, $\mathrm{conc}(w_1 \otimes w_2):= w_1w_2$ , for $w_1,w_2 \in T(A)$ (with the empty word $\mathbf{1}$ being the unit). Next, we consider the double tensor algebra $\overline T(T(A))$ over A. On $\overline T(T(A))$ , we also consider the concatenation product, but we denote it with a vertical bar in order to distinguish it from concatenation in $T(A)$ , i.e., $\mathrm{conc}(w_1\otimes w_2)=w_1|w_2$ for $w_1,w_2\in \overline T(T(A))$ .

Given a subset $U\subset \mathbb N$ , an interval or connected component of U is a maximal sequence of successive elements in U. For a subset $S\subseteq [n]$ , we denote by $J_1^S,\dotsc ,J_{k(S)}^S$ the connected components of $[n]\setminus S$ , ordered in increasing order of their minimal element. For notational convenience, we will often omit making explicit the dependency on S of the number of these connected components and, when there is no risk of confusion, will write simply $J_1^S,\dotsc ,J_k^S$ for $J_1^S,\dotsc ,J_{k(S)}^S$ .

Definition 4.2 The map $\Delta \colon T(A)\to \overline {T}(A)\otimes \overline T(T(A))$ is defined by

(4.1) $$ \begin{align} \Delta(a_1\cdots a_n) := a_1\cdots a_n \otimes \mathbf{1} + \mathbf{1} \otimes a_1\cdots a_n + \sum_{\substack{S\subsetneq[n]\\S \neq \emptyset}} a_S\otimes a_{J^S_1}\vert\dotsm\vert a_{J^S_k}. \end{align} $$

It has a unique multiplicative extension $\Delta \colon \overline T(T(A))\to \overline T(T(A))\otimes \overline T(T(A))$ such that $\Delta (\mathbf{1})=\mathbf{1}\otimes \mathbf{1}$ .

Note that in the sum on the right-hand side of (4.1), we have inserted the concatenation product in $\overline T(T(A))$ between the words corresponding to the connected components ${J^S_1},\dotsc ,{J^S_k}$ associated to the nonempty set $S\subsetneq [n]$ , that is, whereas $a_S \in {T}(A)$ , we have $a_{J^S_1}\vert \dotsm \vert a_{J^S_k} \in T(T(A))$ .
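The coproduct (4.1) is easy to enumerate mechanically. The following Python sketch (our own; words are tuples of letters, and the right tensor leg is a list of words separated by the bar product) lists the terms of $\Delta(a_1\cdots a_n)$:

```python
from itertools import combinations

def connected_components(positions):
    """Split a sorted list of integers into maximal runs of consecutive ones."""
    comps, current = [], []
    for p in positions:
        if current and p != current[-1] + 1:
            comps.append(current)
            current = []
        current.append(p)
    if current:
        comps.append(current)
    return comps

def coproduct(word):
    """Terms of Delta(a_1...a_n) as pairs (a_S, [a_{J_1^S}, ..., a_{J_k^S}])."""
    n = len(word)
    terms = [(word, []), ((), [word])]  # w (x) 1  and  1 (x) w
    for r in range(1, n):
        for S in combinations(range(n), r):
            left = tuple(word[i] for i in S)
            rest = [i for i in range(n) if i not in S]
            right = [tuple(word[i] for i in J)
                     for J in connected_components(rest)]
            terms.append((left, right))
    return terms

t = coproduct(('a1', 'a2', 'a3'))
print(len(t))  # 2 primitive terms + (2^3 - 2) proper subsets = 8
```

For instance, the subset $S=\{2\}$ produces the term $a_2\otimes a_1\vert a_3$, reflecting that $\{1,3\}$ splits into two connected components.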

Theorem 4.3 [Reference Ebrahimi-Fard and Patras10]

The unital double tensor algebra $\overline T(T(A))$ equipped with $\Delta $ is a noncommutative noncocommutative connected graded Hopf algebra.

Extending our approach to classical Wick polynomials into the noncommutative realm, we introduce an endomorphism of the double tensor algebra $\overline T(T(A))$ . This provides, among others, a new way of introducing the noncommutative Wick (a.k.a. free Appell) polynomials appearing in the work of Anshelevich [Reference Anshelevich2], as explained below.

Suppose that $(A,\varphi )$ is a noncommutative probability space. We define the map $\Phi \colon \overline T(T(A))\to \mathbb {K}$ as the unique unital multiplicative extension of the linear map $\phi $ defined on $T(A)$ by $\phi (a_1\dotsm a_n) := \varphi (a_1\cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_n)$ . Because $\Phi $ is—by definition—a Hopf-algebraic character, it is an invertible element in the corresponding convolution algebra. Its convolution inverse, denoted $\Phi ^{-1}$ , is the unique character on the double tensor algebra such that $\Phi ^{-1}*\Phi =\Phi *\Phi ^{-1}=\varepsilon $ . Here, $\varepsilon \colon \overline T(T(A))\to \mathbb K$ denotes the counit, defined as the unique multiplicative map such that $\ker \varepsilon =T(T(A))$ , and which acts as the neutral element for the convolution product. In other words, the map $\varepsilon $ is such that $\varepsilon (\mathbf{1})=1$ and vanishes otherwise, and $\varepsilon (w_1|w_2)=\varepsilon (w_1)\varepsilon (w_2)$ for all $w_1,w_2\in \overline T(T(A))$ .

Definition 4.4 The free Wick map $\mathrm{W} \colon \overline T(T(A))\to \overline T(T(A))$ is defined by

$$ \begin{align*} \mathrm{W} :=({\mathrm{id}}\otimes\Phi^{-1})\Delta, \end{align*} $$

or, implicitly, by

$$ \begin{align*} \mathrm{id} =(\mathrm{W}\otimes\Phi)\Delta. \end{align*} $$

We call free Wick polynomials the family $\{\mathrm{W}(a_1\cdots a_n)$ , $a_i\in A$ , $i=1,\ldots , n\}$ .

Proposition 4.5 The free Wick map is multiplicative, i.e., for words $w,w'\in T(A)$ ,

$$ \begin{align*}\mathrm{W}(w|w')=\mathrm{W}(w)|\mathrm{W}(w'). \end{align*} $$

We recall that $a|b$ denotes the concatenation of a and b in $\overline T(T(A))$ .

Proof As the identity map $\mathrm{id}$ and $\Phi ^{-1}$ are both multiplicative, using Sweedler’s notation, $\Delta (w)= \sum w^{(1)}\otimes w^{(2)}$ , for the coproduct defined in (4.1):

$$ \begin{align*} \mathrm{W}(w|w') &=(\mathrm{id}\otimes \Phi^{-1})\Delta(w|w')\\ &=(\mathrm{id}\otimes \Phi^{-1})(\Delta(w)\Delta(w'))\\ &=\sum \sum (w^{(1)}|{w'}^{(1)})\,\Phi^{-1}(w^{(2)})\,\Phi^{-1}({w'}^{(2)}) =\mathrm{W}(w)|\mathrm{W}(w').\\[-31pt] \end{align*} $$

The compositional inverse of $\mathrm{W}$ , denoted $\mathrm{W}^{\circ -1}$ , is given by

$$ \begin{align*} \mathrm{W}^{\circ -1}=({\mathrm{id}}\otimes\Phi)\Delta. \end{align*} $$

From Definition 4.4, we also obtain that the usual monomials in $\overline T(A)$ can be expressed in terms of free Wick polynomials:

(4.2) $$ \begin{align} a_1\dotsm a_n=\sum_{S\subseteq[n]} \mathrm{W}(a_S)\Phi(a_{J_1^S})\dotsm\Phi(a_{J_k^S}). \end{align} $$

Note that $\mathrm{W}$ restricts to an automorphism of $\overline T(A)$ . By [Reference Anshelevich2, Proposition 3.12], our free Wick polynomials agree with Anshelevich’s free Appell polynomials, because our formula (4.2) coincides with formula [Reference Anshelevich2, Formula (3.42)].

Here are some low-degree computations:

(4.3) $$ \begin{align} \mathrm{W}(a_1) &= a_1-\varphi(a_1)\mathbf{1}, \nonumber\\ \mathrm{W}(a_1a_2) &= a_1a_2-\varphi(a_2)a_1-\varphi(a_1)a_2 - \big(\varphi(a_1\cdot_{\!\scriptscriptstyle{A}} a_2)-2\varphi(a_1)\varphi(a_2)\big)\mathbf{1}, \nonumber\\ \mathrm{W}(a_1a_2a_3) &= a_1a_2a_3-\varphi(a_3)a_1a_2-\varphi(a_2)a_1a_3-\varphi(a_1)a_2a_3\nonumber\\ &\quad - \big(\varphi(a_2\cdot_{\!\scriptscriptstyle{A}} a_3) - 2\varphi(a_2)\varphi(a_3)\big)a_1 + \varphi(a_1)\varphi(a_3)a_2 - \big(\varphi(a_1\cdot_{\!\scriptscriptstyle{A}} a_2)\nonumber\\ &\quad -2\varphi(a_1)\varphi(a_2)\big)a_3 - \big(\varphi(a_1\cdot_{\!\scriptscriptstyle{A}} a_2\cdot_{\!\scriptscriptstyle{A}} a_3) - 2\varphi(a_1)\varphi(a_2\cdot_{\!\scriptscriptstyle{A}} a_3)\nonumber\\ &\quad -2\varphi(a_3)\varphi(a_1\cdot_{\!\scriptscriptstyle{A}} a_2) - \varphi(a_2)\varphi(a_1\cdot_{\!\scriptscriptstyle{A}} a_3) + 5\varphi(a_1)\varphi(a_2)\varphi(a_3)\big)\mathbf{1}. \end{align} $$

The computation of the third-order polynomial (4.3) is somewhat subtle and should be compared with the expression (10.5) below.

The free Wick polynomials inherit immediately from their Hopf-algebraic definition a key property of classical Wick polynomials.

Lemma 4.6 The Wick polynomials $\mathrm{W}$ in Definition 4.4 are centered. That is,

$$ \begin{align*} \Phi \circ \mathrm{W} =(\Phi\otimes\Phi^{-1})\Delta =\Phi*\Phi^{-1}=\varepsilon. \end{align*} $$

Definition 4.7 Let us call universal polynomial $P=P(x_1,\ldots ,x_n;\gamma )$ for noncommutative probability spaces any linear combination of symbols

$$ \begin{align*}\gamma(X^{\bullet}_{J_1}) \cdots \gamma(X^{\bullet}_{J_p})X_I, \end{align*} $$

where $I\coprod J_1\coprod \cdots \coprod J_p$ is a partition of $[n]$ and $\gamma $ takes values in $\mathbb {K}$ .

To a universal polynomial P together with a noncommutative probability space $(A,\varphi )$ and elements $a_1,\ldots ,a_n \in A$ , we associate the element $P(a_1,\ldots ,a_n;\varphi )\in \overline {T}(A)$ obtained from P by replacing $X_I$ with the tensor monomial $a_{i_1}\cdots a_{i_k}$ , where $I=\{i_1,\ldots ,i_k\}$ , and $X_J^{\bullet }$ with $a_{j_1}\cdot _{\!\scriptscriptstyle {A}} \cdots \cdot _{\!\scriptscriptstyle {A}} a_{j_l}$ , where $J=\{j_1,\ldots ,j_l\}$ .

A family $(f_{(A,\varphi )})$ of linear endomorphisms of $\overline {T}(A)$ , where $(A,\varphi )$ runs over noncommutative probability spaces, is called universal if its action on words $a_1\cdots a_n$ is given by universal polynomials. The Wick map, $\mathrm{W}$ , the inverse Wick map, $\mathrm{W}^{\circ -1}$ , the moment map, and the cumulant maps are examples of universal families.

Now, given $(A,\varphi )$ , we define a formal derivation with respect to an element $a \in A$ as follows. Fix a decomposition $A=\mathbb {K}a \oplus A^{\prime }$ . Denote by $\zeta _a \colon T(A)\to \mathbb {K}$ the linear map defined by $\zeta _a(a):=1$ , $\zeta _a(b):=0$ for $b\in A^{\prime }$ , and $\zeta _a(w):=0$ for every word $w=a_1\cdots a_n$ , $a_i \in A$ , $n\geq 2$ . This map (which depends on the chosen direct sum decomposition of A) is then extended as an infinitesimal character to the double tensor algebra. We set

$$ \begin{align*}\partial_a\colon\overline T(T(A))\to\overline T(T(A)), \qquad \partial_a:=(\zeta_a\otimes\mathrm{id})\Delta. \end{align*} $$

Observe that for any word $w=a_1\cdots a_n \in T(A)$ where $a_j=a$ or $a_j \in A^{\prime }$ , we then get

$$ \begin{align*} \partial_a(w)=\sum_{j:a_j=a} a_1\dotsm a_{j-1}\vert a_{j+1}\dotsm a_n. \end{align*} $$

For example, if $w,w_1,w_2\in T(A^{\prime })$ , then

$$ \begin{align*} \partial_a(aw)=w=\partial_a(wa),\quad \partial_a(w_1aw_2) =w_1\vert w_2,\quad\partial_a(awa)=aw+wa. \end{align*} $$

Because $\zeta _a$ is infinitesimal, $\partial _a$ turns out to be a derivation on $\overline T(T(A))$ .
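On words, the derivation $\partial_a$ acts by deleting one occurrence of the letter a at a time and inserting a bar at that position. A minimal Python sketch (ours; a bar-word is represented as a pair of words, and an empty factor plays the role of the unit $\mathbf{1}$):

```python
def partial(a, word):
    """partial_a(a_1...a_n) = sum over j with a_j = a of
    a_1...a_{j-1} | a_{j+1}...a_n, returned as a list of pairs of words."""
    return [(word[:j], word[j + 1:]) for j, x in enumerate(word) if x == a]

print(partial('a', ('a', 'b', 'c')))  # [((), ('b','c'))]   i.e. d_a(aw) = w
print(partial('a', ('b', 'a', 'c')))  # [(('b',), ('c',))]  i.e. w1 | w2
print(partial('a', ('a', 'b', 'a')))  # two terms: d_a(awa) = wa + aw
```

Dropping the empty factors (the unit of $\overline T(T(A))$) in the last line recovers the example $\partial_a(awa)=aw+wa$ from the text.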

Theorem 4.8 The Wick map $\mathrm{W}$ is the unique family of algebra automorphisms of $\overline T(T(A))$ , where $(A,\varphi )$ runs over noncommutative probability spaces, such that

  • The restrictions of $\mathrm{W}$ to $\overline {T}(A)$ form a universal family.

  • The map $\mathrm{W}$ is centered, $\Phi \circ \mathrm{W}=\varepsilon $ , with $\mathrm{W}(\mathbf{1})=\mathbf{1}$ in particular.

  • For any $a \in A$ and any direct sum decomposition $A={\mathbb K} a \oplus A^{\prime }$ ,

    $$ \begin{align*}\partial_a\circ \mathrm{W}=\mathrm{W}\circ\partial_a. \end{align*} $$

Proof The first two statements were already shown. The third one follows from the coassociativity of the coproduct:

$$ \begin{align*} \partial_a\circ \mathrm{W}&=(\partial_a\otimes\Phi^{-1})\Delta\\ &=(\zeta_a\otimes{\mathrm{id}}\otimes\Phi^{-1})(\Delta\otimes{\mathrm{id}})\Delta\\ &=(\zeta_a\otimes{\mathrm{id}}\otimes\Phi^{-1})({\mathrm{id}}\otimes\Delta)\Delta\\ &=\mathrm{W}\circ\partial_a. \end{align*} $$

Uniqueness follows from the fact that these three properties define the universal family $\mathrm{W}$ by induction. Given an integer n, choose, for example, a family $a_1,\ldots ,a_n$ of linearly independent free random variables in a noncommutative probability space $(A,\varphi )$ . Then use an adapted direct sum decomposition $A={\mathbb K}a_1\oplus \dots \oplus {\mathbb K}a_n\oplus A^{\prime \prime }$ to define the derivations. The knowledge of the

$$ \begin{align*}\partial_{a_i}\mathrm{W}(a_1\cdots a_n) =\mathrm{W}(a_1\cdots a_{i-1})|\mathrm{W}(a_{i+1}\cdots a_n) \end{align*} $$

and the centering property then uniquely determine $\mathrm{W}(a_1\cdots a_n)$ . The identities

$$ \begin{align*}\partial_{a_i}\partial_{a_j}\mathrm{W}(a_1\cdots a_n) =\partial_{a_j}\partial_{a_i}\mathrm{W}(a_1\cdots a_n) \end{align*} $$

ensure the consistency of the formulas.▪

5 Shuffle algebra

In this section, we briefly recall the definition of shuffle algebra, thereby setting the notation used in the rest of the paper. We follow references [Reference Ebrahimi-Fard and Patras10, Reference Ebrahimi-Fard and Patras12] and refer to these articles for further bibliographical indications on the subject. In the present article, we use the topologists’ convention and apply the term shuffle product also to products that are not necessarily commutative (see the definitions below). See also the recent survey [Reference Ebrahimi-Fard and Patras13] on the appearance of shuffle algebras (a.k.a. dendriform algebras) and related structures in the theory of iterated integrals and, more generally, chronological calculus.

Definition 5.1 A shuffle algebra is a vector space D endowed with two bilinear products ${\prec }\colon D\otimes D\to D$ and ${\succ }\colon D\otimes D\to D$ , called the left and right half-shuffles, respectively, satisfying the shuffle relations

(5.1) $$ \begin{align} \begin{aligned} (a\prec b)\prec c =a\prec(b*c), & \quad a\succ(b\succ c)=(a*b)\succ c,\\ (a\succ b)\prec c&=a\succ(b\prec c), \end{aligned} \end{align} $$

where we have set $a*b := a\succ b+a\prec b$ .

These relations imply that $(D,*)$ is a nonunital associative algebra. We also consider its unitization $\overline D:= \mathbb {K}\mathbf{1}\oplus D$ by extending the half-shuffles: $\mathbf{1}\prec a:= 0=: a\succ \mathbf{1}$ and $\mathbf{1}\succ a:= a=: a\prec \mathbf{1}$ for all $a\in D$ . This entails that $\mathbf{1}*a=a=a*\mathbf{1}$ for all a in D. Note, however, that the products $\mathbf{1}\prec \mathbf{1}$ and $\mathbf{1}\succ \mathbf{1}$ are not defined; we set $\mathbf{1}*\mathbf{1}:=\mathbf{1}$ .

Definition 5.2 A commutative shuffle algebra is a shuffle algebra where the left and right half-shuffles are identified by the identity:

$$ \begin{align*} a \succ b - b \prec a=0, \end{align*} $$

so that, in particular, $(\overline D,*)$ becomes a commutative algebra and the knowledge of the left half-shuffle $\prec $ (or the right half-shuffle $\succ $ ) is enough to determine the full structure.

Shuffle products are frequently denoted by the symbol $\shuffle$ , as we do further below in this article (10.2). Fundamental examples of such products are provided by the shuffle product of simplices in geometry and topology (see the first part of [Reference Ebrahimi-Fard and Patras13] for a modern account) as well as the commutative shuffle product of words defined inductively on $\overline {T}(X)$ :

$$ \begin{align*} \mathbf{1}\shuffle w := w =: w\shuffle\mathbf{1}, \qquad ua\shuffle vb := (u\shuffle vb)a+(ua\shuffle v)b. \end{align*} $$

The latter is dual to the unshuffle coproduct

$$ \begin{align*} \Delta_{\shuffle}(a_1\cdots a_n) := \sum_{S\subseteq[n]} a_S\otimes a_{[n]\setminus S}. \end{align*} $$

This example is generic in the sense that the tensor algebra over an alphabet B equipped with this product is the free commutative shuffle algebra over B [Reference Schützenberger25]. The shuffle algebras we will study in the present article are noncommutative variants of the tensor algebra.
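The half-shuffle splitting is easy to experiment with on words. In the sketch below (our own; `prec`/`succ` are hypothetical helper names), a word's left half-shuffle keeps its own last letter last, the right half-shuffle keeps the other word's last letter last, and linear combinations are modeled as multisets; the first shuffle axiom of (5.1) is then checked on single-letter words:

```python
from collections import Counter

def shuffle(u, v):
    """Shuffle product of words as a multiset of words, via the recursion
    ua sh vb = (u sh vb)a + (ua sh v)b on the last letters."""
    if not u:
        return Counter({v: 1})
    if not v:
        return Counter({u: 1})
    out = Counter()
    for w, c in shuffle(u[:-1], v).items():
        out[w + u[-1:]] += c
    for w, c in shuffle(u, v[:-1]).items():
        out[w + v[-1:]] += c
    return out

def prec(u, v):
    """Left half-shuffle: ua < v := (u sh v)a (u nonempty)."""
    return Counter({w + u[-1:]: c for w, c in shuffle(u[:-1], v).items()})

def succ(u, v):
    """Right half-shuffle: u > vb := (u sh v)b (v nonempty)."""
    return Counter({w + v[-1:]: c for w, c in shuffle(u, v[:-1]).items()})

star = lambda u, v: prec(u, v) + succ(u, v)

a, b, c = ('x',), ('y',), ('z',)
# first axiom of (5.1): (a < b) < c = a < (b * c)
lhs = sum((prec(w, c) for w in prec(a, b).elements()), Counter())
rhs = sum((prec(a, w) for w in star(b, c).elements()), Counter())
print(lhs == rhs)  # True
```

Since this product is commutative ( $u\shuffle v=v\shuffle u$ ), it realizes the commutative case of Definition 5.2; the shuffle algebras studied in the paper replace it with noncommutative half-shuffles on the double tensor algebra.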

Dual to the notion of shuffle algebra is the concept of unshuffle coalgebra [Reference Foissy17]. An unshuffle coalgebra is a vector space C equipped with two linear maps $\Delta _\prec \colon C\to C\otimes C$ and $\Delta _\succ \colon C\to C\otimes C$ , called the left and right half-unshuffles, such that

(5.2) $$ \begin{align} (\Delta_\prec\otimes{\mathrm{id}})\Delta_\prec&=({\mathrm{id}}\otimes\overline\Delta)\Delta_\prec, \end{align} $$
(5.3) $$ \begin{align} (\Delta_\succ\otimes{\mathrm{id}})\Delta_\prec&=({\mathrm{id}}\otimes\Delta_\prec)\Delta_\succ, \end{align} $$
(5.4) $$ \begin{align} (\overline\Delta\otimes{\mathrm{id}})\Delta_\succ&=({\mathrm{id}}\otimes\Delta_\succ)\Delta_\succ, \end{align} $$

where $\overline \Delta :=\Delta _\prec +\Delta _\succ $ . As before, these axioms imply that $(C,\overline \Delta )$ is a noncounital coassociative coalgebra.

Definition 5.3 An unshuffle bialgebra is a vector space $\overline B=\mathbb {K}\mathbf{1}\oplus B$ together with linear maps $\Delta _\prec \colon B\to B\otimes B$ , $\Delta _\succ \colon B\to B\otimes B$ , and $m\colon \overline B\otimes \overline B\to \overline B$ such that:

  (1) $(B,\Delta _\prec ,\Delta _\succ )$ is an unshuffle coalgebra,

  (2) $(\overline B,m)$ is an associative algebra, and

  (3) the following compatibility relations are satisfied:

    $$ \begin{align*} \Delta^+_\succ(ab) =\Delta^+_\succ(a)\Delta(b), \quad \Delta^+_\prec(ab) =\Delta^+_\prec(a)\Delta(b), \end{align*} $$
    where we have set
    $$ \begin{align*} \Delta_\prec^+(a) :=\Delta_\prec(a)+a\otimes\mathbf{1},\quad\Delta_\succ^+(a):=\Delta_\succ(a)+\mathbf{1}\otimes a \end{align*} $$
    and
    $$ \begin{align*} \Delta(a) :=\Delta^+_\prec(a)+\Delta^+_\succ(a) =\overline\Delta(a)+a\otimes\mathbf{1}+\mathbf{1}\otimes a. \end{align*} $$

Given an unshuffle bialgebra, we adjoin a counit $\varepsilon \colon \overline B\to \mathbb {K}$ , which is the unique linear map such that $\ker \varepsilon =B$ and $\varepsilon (\mathbf{1})=1$ . We observe that, in particular, for any unshuffle bialgebra, the triple $(\overline B,m,\Delta )$ becomes a bialgebra in the usual sense. Thus, its graded dual space $\overline D:=\overline B^*$ becomes an algebra under the convolution product

(5.5) $$ \begin{align} \varphi * \psi :=(\varphi\otimes\psi)\Delta. \end{align} $$

Moreover, (5.2)–(5.4) imply that $\overline D=\mathbb {K}\varepsilon \oplus B^\ast $ is a unital shuffle algebra, because the convolution product splits

(5.6) $$ \begin{align} \varphi*\psi=\varphi\prec\psi + \varphi\succ\psi, \end{align} $$

where $\varphi (\mathbf{1})=\psi (\mathbf{1})=0$ , $\varphi \prec \psi :=(\varphi \otimes \psi )\Delta _\prec ^+$ , and $\varphi \succ \psi :=(\varphi \otimes \psi )\Delta _\succ ^+$ . The counit of $\overline B$ plays the role of the unit for this shuffle product, and one sets for $\varphi \in \overline D, \ \varphi (\mathbf{1})=0$ ,

$$ \begin{align*}\begin{array}{c} \varepsilon\prec\varphi=(\varepsilon\otimes\varphi)\Delta^+_\prec=0,\\ \varphi\succ\varepsilon=(\varphi\otimes\varepsilon)\Delta^+_\succ=0, \end{array} \qquad \begin{array}{c} \varphi\prec\varepsilon=(\varphi\otimes\varepsilon)\Delta^+_\prec=\varphi,\\ \varepsilon\succ\varphi=(\varepsilon\otimes\varphi)\Delta^+_\succ=\varphi. \end{array} \end{align*} $$

By definition, an unshuffle coalgebra is cocommutative if $\tau \circ \Delta _\prec =\Delta _\succ $ , where $\tau $ is the usual switch map $\tau (x\otimes y):= y\otimes x$ . An example is given by the algebra $\overline {T}(A)$ equipped with the unshuffle coproduct defined in (10.1) below.

5.1 Shuffle approach to moments and cumulants

We consider an example of Definition 5.3, which is also the main setting for the shuffle algebra approach to moment-cumulant relations in noncommutative probability theory.

We note that the coproduct $\Delta $ of Definition 4.2 can be split into two parts: the left half-coproduct

$$ \begin{align*} \Delta_\prec^+(a_1\dotsm a_n) :=\sum_{1\in S\subseteq[n]}a_S\otimes a_{J_1^S}\vert\dotsm\vert a_{J_k^S}, \end{align*} $$

and we set

$$ \begin{align*} \Delta_\prec(a_1\dotsm a_n):=\Delta_\prec^+(a_1\dotsm a_n)-a_1\dotsm a_n\otimes\mathbf{1}. \end{align*} $$

The right half-coproduct is defined by

(5.7) $$ \begin{align} \Delta_\succ^+(a_1\dotsm a_n):=\sum_{1\not\in S\subseteq[n]}a_S\otimes a_{J_1^S}\vert\dotsm\vert a_{J_k^S}, \end{align} $$

and we define

$$ \begin{align*} \Delta_\succ(a_1\dotsm a_n):=\Delta_\succ^+(a_1\dotsm a_n)-\mathbf{1}\otimes a_1\dotsm a_n. \end{align*} $$

This is extended to the double tensor algebra by defining

$$ \begin{align*} \Delta_\prec^+(w_1\vert\dotsm\vert w_m)&:=\Delta_\prec^+(w_1)\Delta(w_2)\dotsm\Delta(w_m),\\ \Delta_\succ^+(w_1\vert\dotsm\vert w_m)&:=\Delta_\succ^+(w_1)\Delta(w_2)\dotsm\Delta(w_m). \end{align*} $$

Theorem 5.4 [Reference Ebrahimi-Fard and Patras10]

The bialgebra $\overline T(T(A))$ equipped with $\Delta _\prec $ and $\Delta _\succ $ is an unshuffle bialgebra.

We now recall from [Reference Ebrahimi-Fard and Patras10, Reference Ebrahimi-Fard and Patras12] how the unshuffle bialgebra $\overline T(T(A))$ provides an algebraic structure for encoding the relations between free, Boolean, and monotone cumulants and moments in noncommutative probability theory from the point of view of shuffle products.

The group of characters is denoted by G, and its Lie algebra of infinitesimal characters $\mathfrak g$ consists of linear maps that send $\mathbf{1} \in \overline T(T(A))$ as well as any nontrivial product in $ \overline T(T(A))$ to zero. The convolution exponential $\exp ^*$ defines a bijection between $\mathfrak g$ and G. We recall that the map $\Phi \colon \overline T(T(A))\to \mathbb {K}$ is the unique unital multiplicative extension of the linear map $\phi $ defined on $T(A)$ by $\phi (a_1\dotsm a_n) := \varphi (a_1\cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_n)$ . We are going to define three different exponential-type bijections between the group G and its Lie algebra $\mathfrak g$ , corresponding, respectively, to the convolution product $*$ and to the right and left half-shuffles (see equations (5.5) and (5.6)). As a result, we can associate to the character $\Phi \in G$ three different infinitesimal characters $\rho , \kappa ,\beta \in \mathfrak g$ . The three exponential-type bijections encode the three moment-cumulant relations (monotone, free, and Boolean).

The free and Boolean cumulants can be represented in terms of infinitesimal characters as the unique maps satisfying the so-called left and, respectively, right half-shuffle fixed point equations

(5.8) $$ \begin{align} \Phi=\varepsilon+\kappa\prec\Phi \qquad \mathrm{and} \qquad \Phi=\varepsilon+\Phi\succ\beta. \end{align} $$

These equations define bijections between the Lie algebra $\mathfrak g$ and the group G, i.e., the so-called left and right half-shuffle exponentials such that

$$ \begin{align*} \Phi=\mathcal E_\prec(\kappa)=\mathcal E_\succ(\beta). \end{align*} $$

Hence, we see that $\Phi $ is the left (or free) half-shuffle exponential of the infinitesimal character $\kappa \in \mathfrak g$ . Analogously, $\Phi $ is the right (or Boolean) half-shuffle exponential of the infinitesimal character $\beta \in \mathfrak g$ . It can be shown [Reference Ebrahimi-Fard and Patras10, Theorem 5.2] that the free moment-cumulant relation of order n is given by computing

$$ \begin{align*}\mathcal E_\prec(\kappa)(a_1 \cdots a_n)=\sum_{\pi \in \operatorname{NC}([n])}\prod_{B\in\pi}\kappa(a_B). \end{align*} $$
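As a concrete illustration (ours, not part of the paper's formalism), the free moment–cumulant relation can be checked numerically in the single-variable specialization $m_n=\Phi (a^n)$ , $\kappa _n=\kappa (a^n)$ , where it reads $m_n=\sum _{\pi \in \operatorname {NC}([n])}\prod _{B\in \pi }\kappa _{|B|}$ ; all helper names below are ad hoc.

```python
from itertools import combinations
from math import prod

def set_partitions(elements):
    """Enumerate all set partitions of a list of distinct integers."""
    if not elements:
        return [[]]
    first, rest = elements[0], elements[1:]
    result = []
    for p in set_partitions(rest):
        for i in range(len(p)):
            result.append(p[:i] + [[first] + p[i]] + p[i + 1:])
        result.append([[first]] + p)
    return result

def is_noncrossing(p):
    """No pattern a < c < b < d with a, b in one block and c, d in another."""
    for A, B in combinations(p, 2):
        for a, b in combinations(sorted(A), 2):
            if any(a < c < b for c in B) and any(not a < c < b for c in B):
                return False
    return True

def moments_from_free_cumulants(kappa, N):
    """m_n = sum over pi in NC([n]) of prod over blocks B of kappa_{|B|}."""
    moments = []
    for n in range(1, N + 1):
        nc = (p for p in set_partitions(list(range(1, n + 1))) if is_noncrossing(p))
        moments.append(sum(prod(kappa[len(B) - 1] for B in p) for p in nc))
    return moments

# Semicircle law: kappa_2 = 1 and all other free cumulants vanish, so the
# even moments are the Catalan numbers 1, 2, 5, ...
print(moments_from_free_cumulants([0, 1, 0, 0, 0, 0], 6))  # [0, 1, 0, 2, 0, 5]
```

Taking instead all free cumulants equal to 1 (a free Poisson element of parameter 1) returns the Catalan numbers themselves, since every noncrossing partition then contributes 1.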

Analogously, $\mathcal E_\succ (\beta )$ gives the Boolean moment-cumulant relations [Reference Ebrahimi-Fard and Patras12, Theorem 4]

$$ \begin{align*}\mathcal E_\succ(\beta)(a_1 \cdots a_n) = \sum_{\pi \in \operatorname{Int}([n])} \prod_{B\in\pi} \beta(a_B), \end{align*} $$

which follows because $\beta \in \mathfrak g$ , together with the right half-shuffle operation defined in terms of (5.7), implies that

$$ \begin{align*}\mathcal E_\succ(\beta)(a_1 \cdots a_n) = \sum_{j=1}^n \Phi(a_{j+1} \cdots a_n) \, \beta(a_1 \cdots a_j). \end{align*} $$
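In the single-variable specialization $m_n=\Phi (a^n)$ , $\beta _n=\beta (a^n)$ (our illustration; helper names are ad hoc), the last display becomes the recursion $m_n=\sum _{j=1}^n\beta _j m_{n-j}$ , which can be solved for the Boolean cumulants and cross-checked against the interval-partition sum, interval partitions of $[n]$ being in bijection with compositions of n:

```python
from math import prod

def boolean_cumulants(m):
    """Solve m_n = sum_{j=1}^n beta_j m_{n-j} for the beta_n (m[0] is m_1)."""
    full = [1] + list(m)              # prepend m_0 = 1
    beta = []
    for n in range(1, len(full)):
        s = sum(beta[j - 1] * full[n - j] for j in range(1, n))
        beta.append(full[n] - s)      # the j = n term is beta_n * m_0
    return beta

def moments_from_boolean(beta, N):
    """m_n = sum over compositions (c_1,...,c_k) of n of prod_i beta_{c_i}."""
    def compositions(n):
        if n == 0:
            return [[]]
        return [[k] + c for k in range(1, n + 1) for c in compositions(n - k)]
    return [sum(prod(beta[c - 1] for c in comp) for comp in compositions(n))
            for n in range(1, N + 1)]

# A projection (phi(a^n) = 1 for all n) has beta_1 = 1 and beta_n = 0 otherwise.
print(boolean_cumulants([1, 1, 1, 1, 1]))        # [1, 0, 0, 0, 0]
print(moments_from_boolean([1, 0, 0, 0, 0], 5))  # [1, 1, 1, 1, 1]
```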

Within the shuffle algebra framework, one shows that the half-shuffle exponentials are inverted by the following left and right half-shuffle logarithms:

$$ \begin{align*} \kappa=\mathcal{L}_\prec(\Phi):=(\Phi-\varepsilon)\prec\Phi^{-1}, \quad \beta=\mathcal{L}_\succ(\Phi):=\Phi^{-1}\succ(\Phi-\varepsilon) \end{align*} $$

as well as the relation between the Boolean and free cumulants through the shuffle adjoint action

(5.9) $$ \begin{align} \beta=\Theta_\Phi(\kappa):=\Phi^{-1} \succ\kappa\prec\Phi. \end{align} $$

With these notations in place, one can show that the convolutional inverse of $\Phi $ can be also described in terms of the half-shuffle exponentials

(5.10) $$ \begin{align} \Phi^{-1}= \mathcal E_\succ(-\kappa)=\mathcal E_\prec(-\beta), \end{align} $$

yielding solutions to the half-shuffle fixed point equations

(5.11) $$ \begin{align} \Phi^{-1}=\varepsilon-\Phi^{-1}\succ \kappa, \qquad \Phi^{-1}=\varepsilon-\beta\prec\Phi^{-1}. \end{align} $$

5.2 Shuffle calculus for free Wick polynomials

The Wick map $\mathrm{W}$ can be related to the free cumulants by using (5.10), whence we obtain from Definition 4.4

$$ \begin{align*} \mathrm{W}=({\mathrm{id}}\otimes\mathcal E_\succ(-\kappa))\Delta. \end{align*} $$

Evaluating both sides on a word from $T(A)$ yields

$$ \begin{align*} \mathrm{W}(a_1\dotsm a_n) =({\mathrm{id}}\otimes\mathcal E_\succ(-\kappa))\Delta(a_1\dotsm a_n). \end{align*} $$

Hence, from Definition 4.2 of the coproduct, we obtain an explicit formula for the Wick polynomial $\mathrm{W}(a_1\dotsm a_n)$ , in terms of free cumulants (cf. [Reference Anshelevich2])

$$ \begin{align*} \mathrm{W}(a_1\dotsm a_n)=\sum_{S\subseteq[n]}a_S\sum_{\substack{\pi\in\operatorname{Int} ([n]\setminus S)\\\pi\cup S\in \operatorname{NC}([n])}}(-1)^{|\pi|}\prod_{B\in\pi}\kappa(a_B), \end{align*} $$

which coincides with [Reference Anshelevich2, Formula (3.44)]. Note that the combination of the factor $(-1)^{|\pi |}$ and the sum over interval partitions on the right-hand side stems from the fact that $\Phi ^{-1}$ is expressed in terms of the right (or Boolean) half-shuffle exponential evaluated on the infinitesimal character $-\kappa $ corresponding to negative values of free cumulants. This is the reason for calling these polynomials free Wick polynomials, and $\mathrm{W}$ is called the free Wick map.

Proposition 5.5 The free Wick polynomials satisfy the following recursion in terms of the free cumulants:

(5.12) $$ \begin{align} \mathrm{W}=\mathrm{e} + {(\mathrm{id}-\mathrm{e})}\prec\Phi^{-1} - \mathrm{W}\succ\kappa, \end{align} $$

where $\mathrm{e}:=\eta \circ \varepsilon $ and $\eta $ is the unit map on $\overline {T}(T(A))$ .

Proof This follows from the relations satisfied by the shuffle operations and (5.11):

$$ \begin{align*} \mathrm{W}&=({\mathrm{id}}\otimes\Phi^{-1})\Delta\\ &=\mathrm{e}+{(\mathrm{id}-\mathrm{e})}\prec\Phi^{-1}+{\mathrm{id}}\succ(\Phi^{-1}-\varepsilon)\\ &=\mathrm{e}+{(\mathrm{id}-\mathrm{e})}\prec\Phi^{-1}-{\mathrm{id}}\succ(\Phi^{-1}\succ\kappa)\\ &=\mathrm{e}+{(\mathrm{id}-\mathrm{e})}\prec\Phi^{-1}-\mathrm{W}\succ\kappa. \end{align*} $$ ▪

We remark that by observing that the left half-coproduct, $\Delta ^+_\prec $ , can be expressed in terms of the coproduct $\Delta $ , i.e., $\Delta ^+_\prec (a_1\dotsm a_n)=(a_1{\cdot }\otimes {\mathrm{id}})\Delta (a_2\dotsm a_n)$ , we recover from (5.12) the elegant recursive formula [Reference Anshelevich2, Formula (3.43)]

$$ \begin{align*} \mathrm{W}(a_1\cdots a_n)=a_1\mathrm{W}(a_2\dotsm a_n)-\sum_{j=1}^{n} \kappa(a_1\cdots a_j)\,\mathrm{W}(a_{j+1}\dotsm a_n). \end{align*} $$
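This recursion is easy to test numerically in the single-variable case (our illustration: an indeterminate x stands for a, $w_n=\mathrm{W}(a^n)$ , $\kappa _j=\kappa (a^j)$ , $m_d=\Phi (a^d)$ , with the sum over j running from 1 to n). The helper below also computes moments from free cumulants via the standard recursion $m_n=\sum _s\kappa _s\sum _{i_1+\dotsb +i_s=n-s}m_{i_1}\dotsm m_{i_s}$ , which is (5.8) evaluated on $a^n$ , and checks that the resulting polynomials are centered, as $\Phi \circ \mathrm{W}=\Phi *\Phi ^{-1}=\varepsilon $ ; for the semicircle law one recovers (rescaled) Chebyshev polynomials of the second kind.

```python
from math import prod

def weak_compositions(n, k):
    """All (i_1, ..., i_k) of nonnegative integers summing to n."""
    if k == 1:
        return [[n]]
    return [[i] + rest for i in range(n + 1)
            for rest in weak_compositions(n - i, k - 1)]

def moments_from_free_cumulants(kappa, N):
    """m_n = sum_{s=1}^n kappa_s * sum_{i_1+...+i_s = n-s} m_{i_1} ... m_{i_s}."""
    m = [1]                                   # m_0 = 1
    for n in range(1, N + 1):
        m.append(sum(kappa[s - 1] * prod(m[i] for i in comp)
                     for s in range(1, n + 1)
                     for comp in weak_compositions(n - s, s)))
    return m

def free_wick(kappa, N):
    """w_n = x * w_{n-1} - sum_{j=1}^n kappa_j * w_{n-j}, as coefficient lists."""
    w = [[1]]                                 # w_0 = 1
    for n in range(1, N + 1):
        coeffs = [0] + w[n - 1][:]            # multiplication by x
        for j in range(1, n + 1):
            for d, c in enumerate(w[n - j]):
                coeffs[d] -= kappa[j - 1] * c
        w.append(coeffs)
    return w

kappa = [0, 1, 0, 0, 0, 0]                    # semicircle: only kappa_2 = 1
m = moments_from_free_cumulants(kappa, 6)
w = free_wick(kappa, 6)
print(w[3])                                    # [0, -2, 0, 1], i.e., x^3 - 2x
# Centering: Phi(W(a^n)) = sum_d w[n][d] * m_d = 0 for every n >= 1.
print(all(sum(c * m[d] for d, c in enumerate(w[n])) == 0 for n in range(1, 7)))  # True
```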

6 Boolean Wick polynomials

It is natural to ask whether one could also relate the Wick map, $\mathrm{W}$ , to Boolean cumulants. Indeed, by using once again (5.10) and Definition 4.4, we obtain

$$ \begin{align*} \mathrm{W}=({\mathrm{id}} \otimes \mathcal E_\prec(-\beta))\Delta. \end{align*} $$

Expanding the left half-shuffle exponential, $\mathcal E_\prec (-\beta )$ , on the right-hand side, we see that

$$ \begin{align*} \mathrm{W} &={\mathrm{id}}-{\mathrm{id}}\prec\beta-{\mathrm{id}}\succ\beta+{\mathrm{id}}\succ(\beta\prec\beta)+{\mathrm{id}}\prec(\beta\prec\beta)+\dotsb\\ &=\mathrm{e}+{(\mathrm{id}-\mathrm{e})}\prec(\varepsilon-\beta+\beta\prec\beta+\dotsb)-({\mathrm{id}}\succ\beta)\prec(\varepsilon-\beta+(\beta\prec\beta)+\dotsb)\\ &=\mathrm{e}+({\mathrm{id}-\mathrm{e}}-{\mathrm{id}}\succ\beta)\prec\Phi^{-1}, \end{align*} $$

where we have used, in the last identity, the recursion (5.11) and relations (5.1) to rearrange the iterated half-shuffle products. This argument can be made precise with the help of Proposition 5.5.

Proposition 6.1 The Wick map can be expressed in terms of Boolean cumulants as

$$ \begin{align*} \mathrm{W}=\mathrm{e}+({\mathrm{id}-\mathrm{e}}-{\mathrm{id}}\succ\beta)\prec\Phi^{-1}. \end{align*} $$

Proof From Proposition 5.5, we have the identity

$$ \begin{align*} \mathrm{W}=\mathrm{e}+({\mathrm{id}-\mathrm{e}})\prec\Phi^{-1}-\mathrm{W}\succ\kappa =\mathrm{e}+({\mathrm{id}-\mathrm{e}})\prec\Phi^{-1}-{\mathrm{id}}\succ(\Phi^{-1}\succ\kappa). \end{align*} $$

But (5.9) implies that $\Phi ^{-1}\succ \kappa =\beta \prec \Phi ^{-1}$ , so that

$$ \begin{align*} \mathrm{W}=\mathrm{e}+({\mathrm{id}-\mathrm{e}})\prec\Phi^{-1}-{\mathrm{id}}\succ(\beta\prec\Phi^{-1}). \end{align*} $$

Because $a\succ (b\prec c)=(a\succ b)\prec c$ from (5.1), we get

$$ \begin{align*} \mathrm{W}=\mathrm{e}+({\mathrm{id}-\mathrm{e}}-{\mathrm{id}}\succ\beta)\prec\Phi^{-1}. \end{align*} $$ ▪

We now introduce another map, which allows us to recover in a similar way the Boolean Appell polynomials [Reference Anshelevich3, Section 3].

Definition 6.2 The Boolean Wick map $\mathrm{W}'\colon \overline T(T(A))\to \overline T(T(A))$ is defined by

(6.1) $$ \begin{align} \mathrm{W}^{\prime}:={\mathrm{id}}-{\mathrm{id}}\succ\beta. \end{align} $$

As usual, we call the elements $\mathrm{W}'(a_1\cdots a_n)$ , $a_i\in A$ , $i=1,\ldots , n$ , Boolean Wick polynomials. In particular, we immediately obtain the explicit expression [Reference Anshelevich3, Formula (3.1)]

(6.2) $$ \begin{align} \mathrm{W}^{\prime}(a_1\dotsm a_n) =a_1\dotsm a_n-\sum_{j=1}^{n}\beta(a_1\dotsm a_{j}) \, a_{j+1}\dotsm a_n. \end{align} $$

Proposition 6.3 The Boolean Wick polynomials are centered.

Proof By definition, we have that

$$ \begin{align*} \Phi\circ \mathrm{W}^{\prime}=\Phi-\Phi\succ\beta=\varepsilon \end{align*} $$

using (5.8).▪
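In the single-variable specialization used above (our illustration: $m_n=\Phi (a^n)$ , $\beta _n=\beta (a^n)$ , helper names ad hoc), formula (6.2) reads $\mathrm{W}'(a^n)=x^n-\sum _{j=1}^n\beta _j x^{n-j}$ , and centeredness amounts precisely to the Boolean recursion $m_n=\sum _j\beta _j m_{n-j}$ :

```python
def boolean_cumulants(m):
    """Invert the recursion m_n = sum_{j=1}^n beta_j m_{n-j} (with m_0 = 1)."""
    full = [1] + list(m)
    beta = []
    for n in range(1, len(full)):
        beta.append(full[n] - sum(beta[j - 1] * full[n - j] for j in range(1, n)))
    return beta

def boolean_wick(beta, n):
    """Coefficient list of W'(a^n) = x^n - sum_{j=1}^n beta_j x^{n-j}."""
    coeffs = [0] * (n + 1)
    coeffs[n] = 1
    for j in range(1, n + 1):
        coeffs[n - j] -= beta[j - 1]
    return coeffs

m = [1, 3, 9, 31, 113]           # an arbitrary moment sequence m_n = Phi(a^n)
beta = boolean_cumulants(m)
full = [1] + m                   # prepend m_0 = 1
# Proposition 6.3: Phi(W'(a^n)) = 0 for every n >= 1.
print([sum(c * full[d] for d, c in enumerate(boolean_wick(beta, n)))
       for n in range(1, 6)])    # [0, 0, 0, 0, 0]
```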

Proposition 6.1 entails the relation

$$ \begin{align*}\mathrm{W}^{\prime}=\mathrm{e}+(\mathrm{W}-\mathrm{e})\prec\Phi \end{align*} $$

between the Boolean and free Wick maps. This gives the following rewriting rule for the corresponding polynomials:

(6.3) $$ \begin{align} \mathrm{W}^{\prime}(a_1\cdots a_n) =\sum_{1\in S\subseteq[n]}\mathrm{W}(a_S)\,\Phi(a_{J_1^S})\dotsm\Phi(a_{J_k^S}). \end{align} $$

From (6.1), we deduce that

(6.4) $$ \begin{align} {\mathrm{id}}=\mathrm{W}^{\prime} + {\mathrm{id}}\succ\beta, \end{align} $$

which leads to the expansion

$$ \begin{align*}{\mathrm{id}} = \mathrm{W}^{\prime} + \mathrm{W}^{\prime}\succ\beta + (\mathrm{W}^{\prime}\succ\beta) \succ \beta + ((\mathrm{W}^{\prime}\succ\beta) \succ\beta) \succ \beta + \cdots. \end{align*} $$

Observe that the expansion terminates after $n+1$ terms when applied to a word $w\in T(A)$ with $|w|=n$ letters, thanks to $\beta $ being an infinitesimal character, i.e.,

(6.5) $$ \begin{align} w=\mathrm{W}^{\prime}(w) + \sum_{i=1}^{|w|} R^{(i)}_{\succ\beta}(\mathrm{W}^{\prime})(w), \end{align} $$

where $R^{(i)}_{\succ \beta }(\mathrm{W}^{\prime }):= R^{(i-1)}_{\succ \beta }(\mathrm{W}^{\prime }) \succ \beta $ and $R^{(0)}_{\succ \beta }(\mathrm{W}^{\prime })=\mathrm{W}^{\prime }$ . The first few terms are

$$ \begin{align*} a_1 &= \mathrm{W}^{\prime}(a_1) + \beta(a_1),\\ a_1a_2 &= \mathrm{W}^{\prime}(a_1a_2 ) + \mathrm{W}^{\prime}(a_2 ) \beta(a_1) + \beta(a_1a_2 ) + \beta(a_2) \beta(a_1) ,\\ a_1a_2a_3 &= \mathrm{W}^{\prime}(a_1a_2a_3) + \mathrm{W}^{\prime}(a_2 a_3) \beta(a_1) +\mathrm{W}^{\prime}(a_3 ) \beta(a_1a_2) + \mathrm{W}^{\prime}(a_3 ) \beta(a_2) \beta(a_1) \\ &\quad + \beta(a_1a_2a_3) + \beta(a_1a_2) \beta(a_3) + \beta(a_1) \beta(a_2a_3) + \beta(a_1) \beta(a_2) \beta(a_3)\\ &= \mathrm{W}^{\prime}(a_1a_2a_3) + \mathrm{W}^{\prime}(a_2 a_3) \beta(a_1) +\mathrm{W}^{\prime}(a_3 ) \big(\beta(a_1a_2) + \beta(a_2) \beta(a_1)\big) \\ &\quad +\Phi(a_1 a_2 a_3) \\ &= \mathrm{W}^{\prime}(a_1a_2a_3) + \sum_{j=1}^{3}\Phi(a_1\cdots a_{j}) \mathrm{W}^{\prime}(a_{j+1}\cdots a_3),\\ a_1a_2a_3a_4 &= \mathrm{W}^{\prime}(a_1a_2a_3a_4) + \mathrm{W}^{\prime}(a_4)\Big(\beta(a_1a_2a_3) + \beta(a_1a_2)\beta(a_3)+ \beta(a_1)\beta(a_2a_3)\\ &\quad+ \beta(a_1)\beta(a_2)\beta(a_3)\Big)+ \mathrm{W}^{\prime}(a_3a_4) \Big(\beta(a_1a_2) + \beta(a_1)\beta(a_2)\Big)\\ &\quad + \mathrm{W}^{\prime}(a_2a_3a_4) \beta(a_1) +\Phi(a_1 a_2 a_3a_4). \end{align*} $$

Here, we used the Boolean moment-cumulant relations, which say that $\Phi (a_1 \cdots a_n) = \sum _{\pi \in \operatorname{Int}([n])} \prod_{B\in\pi}\beta (a_B)$ .

Proposition 6.4 Let $w=a_1\cdots a_n \in T(A)$ . Then

$$ \begin{align*} w=\mathrm{W}^{\prime}(w) + \sum_{j=1}^{ n}\Phi(a_1\dotsm a_{j}) \, \mathrm{W}^{\prime}(a_{j+1}\dotsm a_n). \end{align*} $$

Proof For the word $w=a_1\dotsm a_n \in T(A)$ , we find from (6.5)

$$ \begin{align*} w&=\mathrm{W}^{\prime}(w) + \sum_{j=1}^{ n} \mathrm{W}^{\prime}(a_{j+1} \cdots a_n) \sum_{\pi \in\operatorname{Int}([j])} \prod_{B \in \pi}\beta(a_B)\\ &=\mathrm{W}^{\prime}(w) + \sum_{j=1}^{ n} \Phi(a_{1}\cdots a_j)\,\mathrm{W}^{\prime}(a_{j+1} \cdots a_n). \end{align*} $$

The essential input here is that the Boolean cumulants are given by $\beta $ , which is an infinitesimal character.▪

Finally, from (6.4), we deduce a formula for the inverse Boolean Wick map.

Proposition 6.5 The inverse Boolean Wick map is given as the solution to the fixed point equation

(6.6) $$ \begin{align} {\mathrm{W}^{\prime}}^{\circ -1} = \mathrm{id} + {\mathrm{W}^{\prime}}^{\circ -1} \succ \beta. \end{align} $$

Proof Note that the definition of the Boolean Wick map (6.1) implies that it is invertible. We show explicitly that ${\mathrm{W}^{\prime }}^{\circ -1} \circ \mathrm{W}^{\prime } = \mathrm{W}^{\prime } \circ {\mathrm{W}^{\prime }}^{\circ -1} = \mathrm{id}.$ Indeed, we see that

$$ \begin{align*}\mathrm{W}^{\prime} \circ {\mathrm{W}^{\prime}}^{\circ -1} = \mathrm{W}^{\prime} + (\mathrm{W}^{\prime}\circ {\mathrm{W}^{\prime}}^{\circ -1}) \succ \beta. \end{align*} $$

Induction on the length of words in $T(A)$ gives for $a \in A$

$$ \begin{align*}\mathrm{W}' \circ {\mathrm{W}^{\prime}}^{\circ -1}(a) = \mathrm{W}'(a) + \beta(a) = a. \end{align*} $$

On a word $w=a_1\dotsm a_n \in T(A)$ , $n>1$ , we find

$$ \begin{align*} \mathrm{W}^{\prime} \circ {\mathrm{W}^{\prime}}^{\circ -1}(w) &= \mathrm{W}^{\prime}(w) + \sum_{i=1}^{n}(\mathrm{W}' \circ {\mathrm{W}^{\prime}}^{\circ -1}) (a_{i+1} \cdots a_n) \beta(a_{1} \cdots a_{i}) \\ &= \mathrm{W}^{\prime}(w) + \sum_{i=1}^{n}a_{i+1} \cdots a_n \, \beta(a_{1} \cdots a_{i}) \\ &=w. \end{align*} $$

Here, we used the induction hypothesis, $(\mathrm{W}' \circ {\mathrm{W}^{\prime }}^{\circ -1}) (a_{i+1} \cdots a_n) = a_{i+1} \cdots a_n$ , for $i>0$ . An analogous computation gives the opposite composition, i.e., ${\mathrm{W}^{\prime }}^{\circ -1} \circ \mathrm{W}' =\mathrm{id}$ .▪

From (6.6), it follows that

$$ \begin{align*}{\mathrm{W}^{\prime}}^{\circ -1}(a_1\dotsm a_n ) = \sum_{j=0}^n \Phi(a_1\dotsm a_j)a_{j+1}\dotsm a_n. \end{align*} $$
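Continuing the single-variable illustration (ours, with $m_j=\Phi (a^j)$ and ad hoc helper names), the last display reads ${\mathrm{W}^{\prime }}^{\circ -1}(a^n)=\sum _{j=0}^n m_j\,x^{n-j}$ ; applying $\mathrm{W}'$ linearly and invoking Proposition 6.4, one recovers $x^n$ , which checks $\mathrm{W}'\circ {\mathrm{W}^{\prime }}^{\circ -1}=\mathrm{id}$ numerically:

```python
def boolean_cumulants(m):
    """Invert m_n = sum_{j=1}^n beta_j m_{n-j} (with m_0 = 1)."""
    full = [1] + list(m)
    beta = []
    for n in range(1, len(full)):
        beta.append(full[n] - sum(beta[j - 1] * full[n - j] for j in range(1, n)))
    return beta

def boolean_wick(beta, n, size):
    """W'(a^n) as a coefficient list padded to length size + 1."""
    coeffs = [0] * (size + 1)
    coeffs[n] = 1
    for j in range(1, n + 1):
        coeffs[n - j] -= beta[j - 1]
    return coeffs

def inverse_then_forward(m, n, size):
    """Apply W' to W'^{-1}(a^n) = sum_{j=0}^n m_j a^{n-j}; should give x^n."""
    full = [1] + list(m)
    beta = boolean_cumulants(m)
    total = [0] * (size + 1)
    for j in range(n + 1):
        wk = boolean_wick(beta, n - j, size)
        total = [t + full[j] * c for t, c in zip(total, wk)]
    return total

m = [2, 5, 14, 42, 132]          # an arbitrary integer moment sequence
for n in range(6):
    expected = [0] * 6
    expected[n] = 1
    assert inverse_then_forward(m, n, 5) == expected
print("W' o W'^{-1} = id on a^n for n <= 5")
```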

Remark 6.6 In [Reference Anshelevich3], the Boolean cumulants were defined by the relation between generating functions G of Boolean Wick polynomials and Boolean cumulants $\eta $ ,

$$ \begin{align*} G(x,z)=(1-x\cdot z)^{-1}(1-\eta(z)), \end{align*} $$

which implies an expression similar to (6.2) but with $\beta $ applied to the other half of the word. In principle, one could take either relation as a starting point, because there is a choice here due to the noncommutativity of the series, and neither choice seems to be more natural than the other. However, we decided to work with (6.2) instead, because the polynomials so obtained are more naturally described from the shuffle algebra point of view. The relation (6.3) also has its counterpart in terms of generating functions, which involves a particular kind of variable substitution.

7 Conditionally free Wick polynomials

Note the apparent asymmetry in the definitions of the free and Boolean Wick polynomials. There is a third family of polynomials that generalizes both the free and Boolean cases. Indeed, we may consider the notion of conditional freeness [Reference Bożejko, Leinert and Speicher9] which generalizes Voiculescu’s notion of freeness in the context of two states. Recall that a two-state noncommutative probability space $(A,\varphi ,\psi )$ is a noncommutative probability space $(A,\varphi )$ endowed with a second unital linear map $\psi \colon A \to \mathbb {K}$ . We denote by $\Psi $ the canonical character extension of $\psi $ to the double tensor algebra $\overline T(T(A))$ . We denote by $\beta ^\varphi $ the Boolean infinitesimal character associated to $\varphi $ (and define similarly $\beta ^\psi ,\kappa ^\varphi ,\kappa ^\psi $ ).

In the shuffle algebra approach, we have the following characterization of conditionally free (or c-free) cumulants [Reference Ebrahimi-Fard and Patras11]: the corresponding infinitesimal character $R^{\varphi ,\psi } \in \mathfrak {g}$ is defined through the shuffle adjoint action:

(7.1) $$ \begin{align} R^{\varphi,\psi}:=\Psi\succ\beta^\varphi\prec\Psi^{-1}. \end{align} $$

This means that $\beta ^\varphi =\Psi ^{-1}\succ R^{\varphi ,\psi }\prec \Psi $ , so that

(7.2) $$ \begin{align} \Phi=\varepsilon+\Phi\succ(\Psi^{-1}\succ R^{\varphi,\psi}\prec\Psi). \end{align} $$

Following [Reference Ebrahimi-Fard and Patras11, Proposition 6.1], the evaluation of formula (7.2) on a word, i.e., computing $\Phi (a_1 \cdots a_n) = \varphi (a_1\cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_n)$ ,

$$ \begin{align*}\Phi(a_1 \cdots a_n) =\Phi\succ(\Psi^{-1}\succ R^{\varphi,\psi}\prec\Psi)(a_1 \cdots a_n), \end{align*} $$

gives back the formula discovered in [Reference Bożejko, Leinert and Speicher9] and recalled in the next theorem.

Theorem 7.1 [Reference Bożejko, Leinert and Speicher9]

The following relation between moments and conditionally free cumulants holds:

(7.3) $$ \begin{align} \varphi(a_1\cdot_{\!\scriptscriptstyle{A}} \dotsm\cdot_{\!\scriptscriptstyle{A}} a_n) =\sum_{\pi\in \operatorname{NC}([n])}\prod_{B\in\operatorname{Outer}(\pi)} R^{\varphi,\psi}(a_B)\prod_{B\in\operatorname{Inner}(\pi)}\kappa^\psi(a_B). \end{align} $$

Here, a block $\pi _i$ of a noncrossing partition $\pi \in \operatorname{NC}([n])$ is inner if there exist a block $\pi _j$ and $a,b\in \pi _j$ such that $a<c<b$ for all $c\in \pi _i$ . A block which is not inner is outer.

Conditionally free cumulants contain both free and Boolean cumulants as limiting cases. More precisely, if we consider the case $\psi =\varphi $ , then (7.1) entails

$$ \begin{align*} R^{\varphi,\varphi} =\Phi\succ\beta^\varphi\prec\Phi^{-1} =\kappa^\varphi \end{align*} $$

by (5.9). On the other hand, if $\psi =\varepsilon $ is the trivial state, then

$$ \begin{align*} R^{\varphi,\varepsilon}=\beta^\varphi. \end{align*} $$
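Both degenerations can be observed numerically in the single-variable setting (our helper code; here R[j-1] stands for $R^{\varphi ,\psi }(a^j)$ and k[j-1] for $\kappa ^\psi (a^j)$ ): taking k = R in (7.3) reproduces the free moment–cumulant sum over all of $\operatorname {NC}([n])$ , while k = 0 kills every partition with an inner block, leaving exactly the interval partitions of the Boolean relation.

```python
from itertools import combinations
from math import prod

def set_partitions(elements):
    """Enumerate all set partitions of a list of distinct integers."""
    if not elements:
        return [[]]
    first, rest = elements[0], elements[1:]
    result = []
    for p in set_partitions(rest):
        for i in range(len(p)):
            result.append(p[:i] + [[first] + p[i]] + p[i + 1:])
        result.append([[first]] + p)
    return result

def is_noncrossing(p):
    """No pattern a < c < b < d with a, b in one block and c, d in another."""
    for A, B in combinations(p, 2):
        for a, b in combinations(sorted(A), 2):
            if any(a < c < b for c in B) and any(not a < c < b for c in B):
                return False
    return True

def is_inner(block, p):
    """A block is inner if some other block has elements on both sides of it."""
    lo, hi = min(block), max(block)
    return any(B is not block and min(B) < lo and max(B) > hi for B in p)

def cfree_moment(R, k, n):
    """(7.3): sum over NC([n]), weighting outer blocks by R, inner blocks by k."""
    total = 0
    for p in set_partitions(list(range(1, n + 1))):
        if is_noncrossing(p):
            total += prod(k[len(B) - 1] if is_inner(B, p) else R[len(B) - 1]
                          for B in p)
    return total

R = [1, 1, 1, 1]
# psi = phi limit (k = R): the free relation, giving the Catalan numbers.
print([cfree_moment(R, R, n) for n in range(1, 5)])             # [1, 2, 5, 14]
# psi = eps limit (k = 0): only interval partitions survive, the Boolean relation.
print([cfree_moment(R, [0, 0, 0, 0], n) for n in range(1, 5)])  # [1, 2, 4, 8]
```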

Theorem 7.2 [Reference Ebrahimi-Fard and Patras14]

Let $\alpha _1,\alpha _2$ be two infinitesimal characters of the double tensor algebra and denote by $\mathcal E_\succ (\alpha _1)$ and $\mathcal E_\succ (\alpha _2)$ the corresponding right half-shuffle exponentials. The right half-shuffle Baker–Campbell–Hausdorff formula holds:

$$ \begin{align*} \mathcal{L}_\succ(\mathcal E_\succ(\alpha_1)*\mathcal E_\succ(\alpha_2)) =\alpha_2+\Theta_{\mathcal E_\succ(\alpha_2)}(\alpha_1), \end{align*} $$

where $\Theta $ stands for the (shuffle) adjoint action:

$$ \begin{align*}\Theta_{\mathcal E_\succ(\alpha_2)}(\alpha_1) = \mathcal E^{-1}_\succ(\alpha_2)\succ \alpha_1\prec \mathcal E_\succ(\alpha_2). \end{align*} $$

Proof Let $X=\mathcal E_\succ (\alpha _1)$ and $Y=\mathcal E_\succ (\alpha _2)$ . By definition of the shuffle product, we have that

$$ \begin{align*} X*Y-\varepsilon&=(X-\varepsilon)\prec Y+X\succ (Y-\varepsilon)\\ &=(X\succ \alpha_1)\prec Y+X\succ(Y\succ \alpha_2)\\ &=(X\succ \alpha_1)\prec Y+(X*Y)\succ \alpha_2. \end{align*} $$

Now, observe that

$$ \begin{align*} (X\succ \alpha_1)\prec Y&=((X*Y*Y^{-1})\succ \alpha_1)\prec Y\\ &=(((X*Y)\succ (Y^{-1}\succ \alpha_1))\prec Y\\ &=(X*Y)\succ (Y^{-1}\succ \alpha_1 \prec Y). \end{align*} $$

This implies the result using the definition of $\mathcal {L}_\succ $ .▪

Returning to Definition 4.4, because $\Phi =\mathcal E_\succ (\Theta _{\Psi }(R^{\varphi ,\psi }))$ and $\Phi ^{-1}=\mathcal E_\prec (-\Theta _{\Psi }(R^{\varphi ,\psi }))$ , we may now express the free Wick map $\mathrm{W}=\mathrm{id}\ast \Phi ^{-1}$ in terms of the conditionally free cumulants $R^{\varphi ,\psi }$ as

$$ \begin{align*} \mathrm{W}=\big({\mathrm{id}} \otimes \mathcal E_\prec(-\Theta_{\Psi}(R^{\varphi,\psi}))\big)\Delta. \end{align*} $$

A computation similar to the Boolean case yields

$$ \begin{align*} \mathrm{W}&=\big({\mathrm{id}} \otimes \mathcal E_\prec(-\Theta_{\Psi}(R^{\varphi,\psi}))\big)\Delta\\ &=\mathrm{e}+\big(\mathrm{id} - \mathrm{e} - \mathrm{id} \succ \Theta_{\Psi}(R^{\varphi,\psi}) \big)\prec \Phi^{-1}\\ &=\mathrm{e}+\big((\mathrm{id} - \mathrm{e}) \prec \Psi^{-1} - \mathrm{id} \succ( \Psi^{-1} \succ R^{\varphi,\psi} \big)\big)\prec (\Phi*\Psi^{-1})^{-1}. \end{align*} $$

Definition 7.3 The conditionally free Wick polynomials are defined to be

(7.4) $$ \begin{align} \mathrm{W}^c := \mathrm{e}+(\mathrm{W} - \mathrm{e}) \prec (\Phi*\Psi^{-1}). \end{align} $$

This means

$$ \begin{align*} \mathrm{W}^c &= \mathrm{e}+(\mathrm{id}-\mathrm{e}) \prec \Psi^{-1} - \mathrm{id} \succ( \Psi^{-1} \succ R^{\varphi,\psi})\\ &= \mathrm{e}+(\mathrm{id}-\mathrm{e}) \prec \Psi^{-1} - (\mathrm{id} * \Psi^{-1}) \succ R^{\varphi,\psi}\\ &= \mathrm{e} +\big(\mathrm{id} - \mathrm{e} - \mathrm{id} \succ \Theta_{\Psi}(R^{\varphi,\psi})\big)\prec \Psi^{-1}. \end{align*} $$

From (7.4), we deduce a somewhat intricate recursion for the inverse of the conditionally free Wick map:

(7.5) $$ \begin{align} \begin{split} &{{\mathrm{W}^c}^{\circ -1}(a_1 \cdots a_n) = a_1 \cdots a_n - {\mathrm{W}^c}^{\circ -1}\circ(\mathrm{W} - \mathrm{id})(a_1 \cdots a_n) } \\ &\qquad\qquad\qquad\quad - \sum_{1\in S \subsetneq [n]} {\mathrm{W}^c}^{\circ -1}\circ\mathrm{W}(a_S) (\Phi*\Psi^{-1})(a_{J^S_1}) \dotsm (\Phi*\Psi^{-1})(a_{J_l^S}). \end{split} \end{align} $$

Starting again from the identity

$$ \begin{align*} \mathrm{W}={\mathrm{id}}*\Phi^{-1}={\mathrm{id}}*\mathcal E_\prec(-\Theta_\Psi(R^{\varphi,\psi})), \end{align*} $$

we obtain, after some simple manipulations,

$$ \begin{align*} \mathrm{W} &={\mathrm{id}}*\Psi^{-1}*\Psi*\mathcal E_\prec(-\Theta_\Psi(R^{\varphi,\psi}))\\ &=\mathrm{W}^\psi*\mathcal E_\prec(\kappa^\psi)*\mathcal E_\prec(-\Theta_\Psi(R^{\varphi,\psi}))\\ &=\mathrm{W}^\psi*\mathcal E_\prec(\kappa^\psi-R^{\varphi,\psi}), \end{align*} $$

where we have used the left half-shuffle analogue of Theorem 7.2 in the last equality. Hence, we have that the free Wick maps $\mathrm{W}$ and $\mathrm{W}^\psi :=(\mathrm{id} * \Psi ^{-1}) $ are related:

$$ \begin{align*} \mathrm{W} &= \mathrm{W}^\psi * (\Psi* \Phi^{-1})\\ &= \mathrm{W}^\psi*\mathcal E_\prec(\kappa^\psi-R^{\varphi,\psi}). \end{align*} $$

Finally, we observe from (7.4) that, in the cases $\Psi =\Phi $ and $\Psi =\varepsilon $ , we recover the free and Boolean Wick maps $\mathrm{W}$ and $\mathrm{W}^{\prime }$ , respectively.

8 Wick polynomials as group actions

Observe that the coproduct defined in Definition 4.2 is linear on the left and polynomial on the right factor when restricted to $T(A)$ , i.e., $\Delta \colon T(A)\to \overline T(A)\otimes \overline T(T(A))$ . This means, in particular, that $\overline T(A)$ is a right comodule over $\overline T(T(A))$ , simply by coassociativity. Thus, we can induce an action of the group G of characters over $\overline T(T(A))$ on the space $\operatorname {End}(\overline T(A))$ of linear endomorphisms of $\overline T(A)$ by setting

$$ \begin{align*} L. \Psi=(L\otimes \Psi)\Delta. \end{align*} $$

More precisely, we have the following proposition.

Proposition 8.1 Given $\Psi \in G$ and $L \in \operatorname {End}(\overline T(A))$ , define $L. \Psi \in \operatorname {End}(\overline T(A))$ as above. Then, $(\Psi ,L) \mapsto L.\Psi $ defines a (right) action of G on $\operatorname {End}(\overline T(A))$ .

Proof Let $\Psi _1,\Psi _2 \in G$ and $L \in \operatorname {End}(\overline T(A))$ . Clearly, $L.\Psi _1 \in \operatorname {End}(\overline T(A))$ and

$$ \begin{align*} (L.\Psi_1).\Psi_2 &= (L\otimes \Psi_1 \otimes \Psi_2) \circ(\Delta \otimes {\mathrm{id}})\circ\Delta\\ &= (L\otimes \Psi_1\otimes \Psi_2)\circ({\mathrm{id}}\otimes\Delta)\circ\Delta\\ &= (L\otimes \Psi_1*\Psi_2)\circ\Delta\\ &= L.(\Psi_1*\Psi_2), \end{align*} $$

so the mapping $(\Psi _1,L)\mapsto L.\Psi _1$ is an action of G on $\operatorname {End}(\overline T(A))$ .▪

In the following, we identify implicitly the (various) notions of Wick polynomials with the (various) restrictions of the Wick maps to $\overline T(A)$ . So, in this section and the following, $\mathrm{W}$ denotes the restriction of $\mathrm{W}$ to $\overline T(A)$ , and so on (as should be anyway clear from the context).

As we have seen above, the orbit of the identity map ${\mathrm{id}}\in \operatorname {Aut}(\overline T(A))$ consists only of automorphisms of $\overline T(A)$ and we have the inversion formula for the composition of endomorphisms $({\mathrm{id}}.\Psi )^{-1}={\mathrm{id}}.\Psi ^{-1}$ where, on the right-hand side, $\Psi $ is inverted with respect to convolution. The free Wick polynomials $\mathrm{W}=\mathrm{id}.\Phi ^{-1}$ are elements in the orbit of the identity endomorphism by the group action of G on $\operatorname {End}(\overline T(A))$ .

Regarding the left half-unshuffle coproduct $\Delta _\prec ^+$ , we get from (5.2) that $(T(A),\Delta _\prec )$ is also a right-comodule over $(\overline T(T(A)), \Delta )$ . At the level of endomorphisms, we obtain the following proposition.

Proposition 8.2 Let $L\in \operatorname {End}(T(A))$ and $\Psi \in G$ . Then the map $(\Psi ,L)\mapsto L^\Psi :=(L\otimes \Psi )\Delta _\prec $ defines a (right) action.

Thus, we might reinterpret the Boolean Wick polynomials $\mathrm{W}^{\prime }=\mathrm{e}+(\mathrm{W}-\mathrm{e})\prec \Phi $ as being given on $T(A)$ by a combined action $\mathrm{W}^{\prime }=\mathrm{e}+({\mathrm{id}}.\Phi ^{-1}-\mathrm{e})^{\Phi }$ . More generally, the relation between the conditionally free and free Wick polynomials can be re-expressed on $T(A)$ as

$$ \begin{align*} \mathrm{W}^c =\mathrm{e}+({\mathrm{id}}.\Phi^{-1}-\mathrm{e})^{\Phi*\Psi^{-1}} =\mathrm{e}+\left[ \left( {\mathrm{id}}.\Phi^{-1} -\mathrm{e}\right)^{\Phi} \right]^{\Psi^{-1}}. \end{align*} $$

Neglecting the degree zero (that is, the $\mathrm{e}$ ) terms, the relations between free, Boolean, and conditionally free Wick polynomials are encoded by the following diagram:

9 Free, Boolean, and conditionally free Wick products

Let $(A,\varphi )$ be a noncommutative probability space. Let $F \colon \overline T(A)\to \overline T(A)$ be an invertible linear map such that $F(\mathbf{1})=\mathbf{1}$ . One can induce a modified product ${\bullet }$ on $\overline T(A)$ by conjugacy, that is, by setting $w\bullet w' := F(F^{-1}(w)F^{-1}(w'))$ . Associativity follows from associativity of the concatenation product on $\overline T(A)$ . Therefore, F becomes a unital algebra morphism from $(\overline T(A),\otimes )$ to $(\overline T(A),\bullet )$ .

Because the maps $\mathrm{W}$ , $\mathrm{W}^{\prime }$ , and $\mathrm{W}^c$ are all invertible when acting on $\overline T(A)$ , we obtain from this construction three new products on $\overline T(A)$ .

Definition 9.1 The three associative products on $\overline T(A)$ induced by the three Wick maps $\mathrm{W}$ , $\mathrm{W}^{\prime }$ , and $\mathrm{W}^c$ are denoted by $\bullet $ , $\odot $ , and $\times $ and called the free, Boolean, and conditionally free Wick products, respectively. Each Wick map is a morphism of algebras when $\overline T(A)$ is equipped with the corresponding new product. In particular, for $a\in A$ ,

$$ \begin{align*}\mathrm{W}(a^n)=\mathrm{W}(a)^{\bullet n}, \end{align*} $$

and similarly for the other cases.

The conjugacy formula gives the rule for computing the new products. For example, in the free and Boolean cases, we find the following:

Proposition 9.2

  1. (1) The free Wick product $\bullet $ admits the following closed-form formula: for words $w=a_1\dotsm a_n$ and $w'=a_{n+1}\dotsm a_{n+m}$ in $T(A)$ , we find

    $$ \begin{align*} w\bullet w'=\sum_{S\subseteq[n+m]}\mathrm{W}(a_S)\Phi(a_{K^S_1})\dotsm\Phi(a_{K_l^S}), \end{align*} $$
    where the $K_i^S,i=1,\dots , l$ , run over the connected components of $[n]-([n]\cap S)$ and $(n+[m])-((n+[m])\cap S)$ .
  2. (2) The Boolean Wick product $\odot $ admits the following closed-form formula: for words $w=a_1\dotsm a_n$ and $w'=b_{1}\dotsm b_{m}$ in $T(A)$ , we find

    $$ \begin{align*} w \odot w'= \sum_{\substack{0 \leq i \leq n\\ 0 \leq j \leq m}} \Phi(a_1 \cdots a_i \vert b_{1} \cdots b_j)\, \mathrm{W}'(a_{i+1} \cdots a_n b_{j+1} \cdots b_m). \end{align*} $$

Proof

  1. (1) Set $b_i:= a_{n+i}, \ i=1,\dots , m$ . Because the inverse free Wick map is the map $\mathrm{W}^{\circ -1}=({\mathrm{id}}\otimes \Phi )\Delta $ , we have that

    $$ \begin{align*} \mathrm{W}^{\circ -1}(w)\mathrm{W}^{\circ -1}(w') &=\sum_{S\subseteq[n]}\sum_{S'\subseteq [m]} a_S\,b_{S'}\,\Phi(a_{J_1^S})\dotsm\Phi(a_{J_{k(S)}^S})\Phi(b_{J_1^{S'}})\dotsm\Phi(b_{J_{k(S')}^{S'}}). \end{align*} $$
    By re-expressing in terms of the $a_i$ , we get
    $$ \begin{align*}\mathrm{W}^{\circ -1}(w)\mathrm{W}^{\circ -1}(w') =\sum_{S\subseteq[n+m]}a_S\,\Phi(a_{K^S_1})\dotsm\Phi(a_{K_l^S}). \end{align*} $$
    The conclusion then follows by applying $\mathrm{W}$ to both sides of this identity.
  2. (2) Recall Proposition 6.5, which states that the inverse Boolean Wick map is given by the recursion ${\mathrm{W}^{\prime }}^{\circ -1}=\mathrm{id} + {\mathrm{W}^{\prime }}^{\circ -1} \succ \beta $ , so that

    $$ \begin{align*}{\mathrm{W}^{\prime}}^{\circ -1}(a_1\dotsm a_n) = \sum_{j=0}^n \Phi(a_1\dotsm a_j) a_{j+1}\dotsm a_n. \end{align*} $$
    Then, we have
    $$ \begin{align*}{\mathrm{W}^{\prime}}^{\circ -1}(a_1\dotsm a_n) {\mathrm{W}^{\prime}}^{\circ -1}(b_1\dotsm b_m)= \sum_{\substack{0 \leq i \leq n\\ 0 \leq j \leq m}} \Phi(a_1 \cdots a_i \vert b_{1} \cdots b_j) a_{i+1} \cdots a_n b_{j+1} \cdots b_m. \end{align*} $$
    The conclusion then follows by applying $\mathrm{W}'$ to both sides of this identity.▪

Remark 9.3 A closed formula for the conditionally free Wick product follows from the recursion (7.5):

$$ \begin{align*} &{{\mathrm{W}^c}^{\circ -1}(a_1 \cdots a_n){\mathrm{W}^c}^{\circ -1}(b_1 \cdots b_m)}\\ &= \Big(a_1 \cdots a_n - {\mathrm{W}^c}^{\circ -1}\circ(\mathrm{W} - \mathrm{id})(a_1 \cdots a_n)\\ &\qquad - \sum_{1\in S \subsetneq [n]} {\mathrm{W}^c}^{\circ -1}\circ\mathrm{W}(a_S) (\Phi*\Psi^{-1})(a_{J^S_1}) \dotsm (\Phi*\Psi^{-1})(a_{J_l^S})\Big)\\ & \Big(b_1 \cdots b_m - {\mathrm{W}^c}^{\circ -1}\circ(\mathrm{W} - \mathrm{id})(b_1 \cdots b_m)\\ &\qquad - \sum_{1\in S \subsetneq [m]} {\mathrm{W}^c}^{\circ -1}\circ\mathrm{W}(b_S) (\Phi*\Psi^{-1})(b_{J^S_1}) \dotsm (\Phi*\Psi^{-1})(b_{J_l^S})\Big). \end{align*} $$

Applying ${\mathrm{W}^c}$ on both sides gives the conditionally free Wick product

$$ \begin{align*}a_1 \cdots a_n \times b_1 \cdots b_m ={\mathrm{W}^c}\big({\mathrm{W}^c}^{\circ -1}(a_1 \cdots a_n){\mathrm{W}^c}^{\circ -1}(b_1 \cdots b_m)\big). \end{align*} $$

10 Tensor cumulants

We now briefly show how our approach allows us to lift the classical notion of cumulants to the noncommutative setting and to revisit the notion of tensor cumulants [Reference Nica and Speicher21], as a warm-up for the definition of tensor Wick polynomials.

As before, we work on a noncommutative probability space $ (A,\varphi )$ (see Definition 4.1). On $\overline T(A)$ , the unshuffle coproduct is defined by declaring elements in $A \hookrightarrow \overline T(A)$ to be primitive and extending it multiplicatively to all of $\overline T(A)$ . As a result, one gets that for any $a_1,\dotsc , a_n \in A$ ,

(10.1) $$ \begin{align} \Delta(a_1\dotsm a_n)=\sum_{U\subseteq[n]}a_U\otimes a_{[n]\setminus U}, \end{align} $$

where we have set $a_\emptyset :=\mathbf{1}$ and

$$ \begin{align*}a_U:={a_{u_1}\dotsm a_{u_p}} \end{align*} $$

for $U=\{u_1 < \cdots < u_p\}\subseteq [n]$ . This endows the unital tensor algebra with the structure of a cocommutative graded connected Hopf algebra. The antipode reverses the order of the letters in a word and multiplies it by a minus sign if the word has odd length.

Its dual $\overline T(A)^*$ is a commutative algebra with the convolution product defined for linear maps $\mu ,\nu \colon \overline T(A) \to \mathbb {K}$ by the commutative shuffle product

(10.2) $$ \begin{align} \mu\star\nu := (\mu\otimes\nu)\Delta. \end{align} $$

The unit for this product is the counit $\varepsilon \colon \overline T(A) \to \mathbb {K}$ , which is uniquely defined by $\ker \varepsilon =T(A)$ and $\varepsilon (\mathbf{1})=1$ . See, e.g., [Reference Reutenauer23] for details.
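Both the unshuffle coproduct and the convolution product lend themselves to a direct machine check. In the sketch below (words are encoded as Python tuples, and the functionals $\mu,\nu$ are arbitrary toy choices; none of this encoding is prescribed by the text), we verify cocommutativity and coassociativity of $\Delta$, and that $\varepsilon$ is a unit for the commutative convolution product:

```python
from itertools import combinations
from collections import Counter

def unshuffle(word):
    # Delta(a_1...a_n) = sum over subsets U of a_U (x) a_{[n]\U},
    # returned as the list of tensor factors (a_U, a_{[n]\U})
    n = len(word)
    return [
        (tuple(word[i] for i in U),
         tuple(word[i] for i in range(n) if i not in set(U)))
        for k in range(n + 1) for U in combinations(range(n), k)
    ]

def convolve(mu, nu):
    # (mu * nu)(w) := sum_U mu(a_U) nu(a_{complement of U})
    return lambda w: sum(mu(l) * nu(r) for l, r in unshuffle(w))

def counit(w):
    # epsilon kills nonempty words and sends the empty word to 1
    return 1.0 if len(w) == 0 else 0.0

# two arbitrary toy functionals on integer words
mu = lambda w: 2.0 ** len(w)
nu = lambda w: float(1 + sum(w))

d = unshuffle((1, 2, 3))
```

Commutativity of the convolution product comes for free from cocommutativity of the coproduct, as in the text.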

The generalized expectation map $\varphi $ allows us to define a linear map $\phi \colon \overline T(A)\to \mathbb {K}$ by setting $\phi (a_1\dotsm a_n) = \varphi (a_1\cdot _{\!\scriptscriptstyle {A}} \dotsm \cdot _{\!\scriptscriptstyle {A}} a_n)$ and $\phi (\mathbf{1})=1$ .

The grading on $\overline T(A)$ allows us to think of $\phi $ as a graded series

$$ \begin{align*} \phi=\sum_{n\ge 0}\phi_n, \end{align*} $$

where $\phi _n\colon T(A) \to \mathbb {K}$ is a linear map vanishing outside $T_n(A)$ , the degree n component of $T(A)$ . In this way, we may regard the map $\phi $ as being some kind of generalized moment-generating function. Because the algebra $T(A)$ is graded by the length of words and connected ( $T_0(A)=\mathbb K\mathbf{1}$ ), the exponential and logarithm maps define inverse bijections between unital linear maps on $\overline {T}(A)$ and reduced maps (maps that vanish on $\mathbb {K}$ , the degree zero component). In particular, there exists a unique linear map $c \in \overline T(A)^*$ with $c(\mathbf{1})=0$ such that

$$ \begin{align*} \phi=\exp^{\star}(c) \quad\text{and}\quad \phi^{-1}=\exp^{\star}(-c), \end{align*} $$

where $\phi ^{-1}$ is the inverse of $\phi $ for the shuffle product (10.2).

Definition 10.1 The tensor cumulant map associated to $\phi $ is the linear map $c\colon \overline T(A) \to \mathbb {K}$ defined by

$$ \begin{align*} c := \log^{\star}(\phi). \end{align*} $$

Its evaluations $c(a_1\cdots a_n) \in \mathbb {K}$ are also written $c(a_1,\dots ,a_n)$ and are called the multivariate tensor cumulants associated to the sequence $(a_1,\dots ,a_n)$ of noncommutative random variables.

The defining relation

$$ \begin{align*} \phi = \exp^{\star}(c) \end{align*} $$

is a version in a noncommutative context of the usual formula relating the moment- and cumulant-generating functions (see (2.3)). From (10.1), we see that for any $j>0$ , the iterated reduced coproduct

$$ \begin{align*} \tilde\Delta^{(j)} := (\tilde\Delta\otimes{\mathrm{id}}^{\otimes(j-1)})\tilde\Delta^{(j-1)} \end{align*} $$

is given by

$$ \begin{align*} \tilde\Delta^{(0)} := {\mathrm{id}}, \qquad \tilde\Delta^{(1)} := \tilde\Delta, \end{align*} $$

and

(10.3) $$ \begin{align} \tilde\Delta^{(j-1)}(a_1\dotsm a_n) = \sum_{\pi\in P_j(n)}\sum_{\sigma\in\mathbb S_j} a_{B_{\sigma(1)}}\otimes\dotsm\otimes a_{B_{\sigma(j)}}, \end{align} $$

where $P_j(n)$ is the collection of all set partitions $\pi =\{B_1,\ldots ,B_j\}$ of $[n]:=\{1,\dotsc ,n\}$ into j disjoint subsets and $\mathbb S_j$ is the jth symmetric group (recall that for $x\in T(A)$ , $\tilde\Delta(x) := \Delta(x) - x\otimes\mathbf{1} - \mathbf{1}\otimes x$ ). From $c(\mathbf{1})=0$ , we deduce

$$ \begin{align*} \phi = \exp^{\star}(c) = \varepsilon + \sum_{j\geq 1}\frac{1}{j!}\, c^{\otimes j}\circ\tilde\Delta^{(j-1)}, \end{align*} $$

giving the multidimensional version of formula (2.3):

(10.4) $$ \begin{align} \phi(a_1 \dotsm a_n) = \varphi(a_1\cdot_{\!\scriptscriptstyle{A}} \dotsm\cdot_{\!\scriptscriptstyle{A}} a_n) = \sum_{\pi\in P(n)}\prod_{B\in\pi}c (a_B). \end{align} $$

Recall that $c (a_B):=c (a_{i_1}, \ldots , a_{i_{|B|}})$ , for $B=\{i_1 < \cdots < i_{|B|}\}$ , is the multivariate cumulant of order $|B|$ and $P(n)$ is the collection of all set partitions of $[n]$ . In fact, many other versions of this relation can be recovered from the properties of the underlying Hopf algebra. See [Reference Ebrahimi-Fard and Patras10, Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15] for further details. The important point here is that set partitions appear naturally in (10.4) through formula (10.3) due to the definition of the coproduct in (10.1).
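Relation (10.4) can be inverted over the lattice of set partitions by Möbius inversion, giving $c(a_1,\dots,a_n)=\sum_{\pi\in P(n)}(-1)^{|\pi|-1}(|\pi|-1)!\prod_{B\in\pi}\phi(a_B)$. The sketch below (the toy functional $\phi$ is an arbitrary illustrative choice, not anything from the text) checks this inversion numerically, and also that a functional that factorizes over letters has vanishing cumulants of order $\geq 2$:

```python
from math import factorial, prod

def set_partitions(elems):
    # all set partitions of a list, built recursively
    elems = list(elems)
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):        # put `first` into an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part            # or into a block of its own

def blocks(word, pi):
    return [tuple(word[i] for i in sorted(B)) for B in pi]

def cumulant(phi, word):
    # Moebius inversion of (10.4) over the partition lattice
    return sum(
        (-1) ** (len(pi) - 1) * factorial(len(pi) - 1)
        * prod(phi(b) for b in blocks(word, pi))
        for pi in set_partitions(range(len(word)))
    )

def moment_from_cumulants(phi, word):
    # right-hand side of (10.4)
    return sum(
        prod(cumulant(phi, b) for b in blocks(word, pi))
        for pi in set_partitions(range(len(word)))
    )
```

The vanishing of higher cumulants for factorizing functionals mirrors the classical fact that independent random variables have vanishing mixed cumulants.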

10.1 Tensor Wick polynomials

It turns out that the same Hopf-algebraic framework used to describe the tensor moment-cumulant relations also yields an explicit description of tensor Wick polynomials (which can be understood as a natural noncommutative lift of classical Wick polynomials).

Definition 10.2 The tensor Wick map $W_T\colon \overline T(A)\to \overline T(A)$ is defined by

$$ \begin{align*} W_T := ({\mathrm{id}}\otimes\phi^{-1})\Delta. \end{align*} $$

Its inverse $W_T^{-1}\colon \overline T(A)\to \overline T(A)$ is given by

$$ \begin{align*} W_T^{-1} = ({\mathrm{id}}\otimes\phi)\Delta. \end{align*} $$

Given a sequence $(a_1,\dots ,a_n)$ of elements of A, $W_T(a_1\cdots a_n)$ is called the tensor Wick polynomial associated to this sequence.

Let us compute a few examples using the reduced unshuffle coproduct (10.3) and the fact that the inverse $\phi ^{-1} \colon \overline T(A)\to \mathbb {K}$ is given by the Neumann series

$$ \begin{align*} \phi^{-1} = \big(\varepsilon - (\varepsilon - \phi)\big)^{-1} = \sum_{k\geq 0}(\varepsilon-\phi)^{\star k}. \end{align*} $$

Then, the first three tensor Wick polynomials in $\overline T(A)$ are

(10.5) $$ \begin{align} W_T(a_1)&=a_1 - \varphi(a_1)\mathbf{1}, \nonumber\\ W_T(a_1a_2)&=a_1a_2 - a_2\varphi(a_1) - a_1\varphi(a_2) + \big(-\varphi(a_1 \cdot_{\!\scriptscriptstyle{A}} a_2) + 2 \varphi(a_1)\varphi(a_2)\big)\mathbf{1}, \nonumber \\ W_T(a_1a_2a_3)&=a_1a_2a_3 - a_2a_3\varphi(a_1) - a_1a_3\varphi(a_2) - a_1a_2\varphi(a_3) +a_1\big(-\varphi(a_2 \cdot_{\!\scriptscriptstyle{A}} a_3) \\ &\quad + 2 \varphi(a_2)\varphi(a_3)\big) + a_2\big(-\varphi(a_1 \cdot_{\!\scriptscriptstyle{A}} a_3) + 2 \varphi(a_1)\varphi(a_3)\big) +a_3 \big(-\varphi(a_1 \cdot_{\!\scriptscriptstyle{A}} a_2)\nonumber\\ &\quad + 2 \varphi(a_1)\varphi(a_2)\big) + \big(- \varphi(a_1 \cdot_{\!\scriptscriptstyle{A}} a_2 \cdot_{\!\scriptscriptstyle{A}} a_3) + 2\varphi(a_1)\varphi(a_2 \cdot_{\!\scriptscriptstyle{A}} a_3)\nonumber\\ &\quad + 2\varphi(a_2)\varphi(a_1\cdot_{\!\scriptscriptstyle{A}} a_3) + 2\varphi(a_3)\varphi(a_1\cdot_{\!\scriptscriptstyle{A}} a_2) - 6 \varphi(a_1)\varphi(a_2)\varphi(a_3)\big)\mathbf{1}. \nonumber \end{align} $$
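These computations can be reproduced mechanically. In the sketch below (letters are integers, and the toy functional $\phi(a_1\dotsm a_n):=1+a_1+\dotsb+a_n$ on nonempty words is an arbitrary illustrative choice, not anything from the text), $\phi^{-1}$ is computed recursively from $\phi^{-1}\star\phi=\varepsilon$, $W_T$ is expanded as $\sum_U a_U\,\phi^{-1}(a_{U^c})$ (in agreement with Theorem 10.5 below), and one checks that $\sum_S W_T(a_S)\phi(a_{[n]\setminus S})$ returns the original word:

```python
from itertools import combinations

def splits(word):
    # unshuffle coproduct terms (a_U, a_{complement of U})
    n = len(word)
    return [
        (tuple(word[i] for i in U),
         tuple(word[i] for i in range(n) if i not in set(U)))
        for k in range(n + 1) for U in combinations(range(n), k)
    ]

def phi(word):
    # arbitrary toy functional with phi(empty word) = 1
    return 1.0 + sum(word) if word else 1.0

def phi_inv(word):
    # recursion extracted from phi_inv * phi = counit
    if not word:
        return 1.0
    return -sum(phi_inv(l) * phi(r) for l, r in splits(word) if len(l) < len(word))

def wick(word):
    # W_T(w) = sum_U a_U phi^{-1}(a_{complement}), as {subword: coefficient}
    out = {}
    for l, r in splits(word):
        out[l] = out.get(l, 0.0) + phi_inv(r)
    return out

def recompose(word):
    # sum_S W_T(a_S) phi(a_{complement of S}); should return the word itself
    out = {}
    for l, r in splits(word):
        for k, v in wick(l).items():
            out[k] = out.get(k, 0.0) + v * phi(r)
    return out
```

The last check is the Hopf-algebraic statement that $({\mathrm{id}}\otimes\phi)\Delta$ inverts the Wick map.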

Remark 10.3 The tensor Wick map $W_T$ associates to a $w \in \overline T(A)$ a noncommutative polynomial $W_T(w)$ in $\overline T(A)$ . That said, if the algebra A is commutative, then these noncommutative polynomials are sent by the evaluation map $ev\colon \ a_1 \cdots a_n\longmapsto a_1 \cdot _{\!\scriptscriptstyle {A}} \dots \cdot _{\!\scriptscriptstyle {A}} a_n$ to the classical multivariate Wick polynomials. In particular, we have that, in this case,

$$ \begin{align*}ev(W_T(a^{\otimes n}))=W_n(a) \end{align*} $$

for a single element $a\in A$ [Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15].

Observe that, by definition, we have $W_T^{-1}=({\mathrm{id}}\otimes \phi )\Delta $ , so we get a tensor version of relation (2.5):

$$ \begin{align*} a_1\dotsm a_n &= \sum_{S\subseteq[n]}W_T(a_S)\phi(a_{[n]\setminus S})\\ &= \sum_{S\subseteq[n]}W_T(a_S)\sum_{\pi\in P([n]\setminus S)}\prod_{B\in\pi}c(a_B). \end{align*} $$

Applying the evaluation map, the resulting relation is sometimes used as a recursive definition of the Wick polynomials [Reference Hairer and Shen18] in terms of moments or cumulants.
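For a single random variable, evaluating this recursion gives $W_n(x)=x^n-\sum_{k<n}\binom{n}{k}m_{n-k}W_k(x)$, with $m_j$ the moments. A quick sketch (the choice of standard Gaussian moments as test case is ours, not from the text) recovers the probabilists' Hermite polynomials:

```python
import numpy as np
from math import comb
from numpy.polynomial import hermite_e

# Univariate Wick polynomials from the recursion
#   x^n = sum_k C(n,k) W_k(x) m_{n-k}   =>   W_n = x^n - sum_{k<n} C(n,k) m_{n-k} W_k.
# Coefficient vectors are in the power basis, lowest degree first.

def wick_polys(moments, N):
    W = [np.array([1.0])]                      # W_0 = 1
    for n in range(1, N + 1):
        coefs = np.zeros(n + 1)
        coefs[n] = 1.0                         # start from x^n
        for k in range(n):
            coefs[: k + 1] -= comb(n, k) * moments[n - k] * W[k]
        W.append(coefs)
    return W

# standard Gaussian moments m_j: (j-1)!! for even j, 0 for odd j
gauss_moments = [1.0, 0.0, 1.0, 0.0, 3.0]
W = wick_polys(gauss_moments, 4)
```

For the standard Gaussian, the recursion reproduces $\mathrm{He}_2 = x^2-1$, $\mathrm{He}_3 = x^3-3x$, $\mathrm{He}_4 = x^4-6x^2+3$, the classical Wick powers.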

Because $\phi $ is not a character on $\overline {T}(A)$ (it is not multiplicative: $\phi (a_1a_2)=\phi (a_1 \cdot _{\!\scriptscriptstyle {A}} a_2)$ differs in general from the product $\phi (a_1)\phi (a_2)$ ), it is not an element of the group of characters, i.e., the group-like elements in the completion of the dual graded Hopf algebra. Therefore, the $\exp /\log $ correspondence between tensor cumulants and moments cannot be analyzed from a Lie-theoretic point of view. We refer the reader to [Reference Ebrahimi-Fard and Patras14] for a discussion of the group and Lie algebra correspondence in the context of free probability. The map $\phi $ then has a unique extension $\Phi \colon \overline T(T(A))\to \mathbb {K}$ as an algebra character. The unshuffle coproduct on the tensor algebra $\overline {T}(A)$ also admits a unique extension

$$ \begin{align*} \Delta\colon \overline T(T(A)) \to \overline T(T(A)) \otimes \overline T(T(A)) \end{align*} $$

as an algebra morphism:

$$ \begin{align*} \Delta(w_1\vert\dotsm\vert w_k) := \Delta(w_1)\dotsm\Delta(w_k), \end{align*} $$

where the unit of $\overline {T}(A)$ is implicitly identified with the unit of $\overline {T}(T(A))$ .

The following proposition and theorem are variants of the corresponding results in [Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15], where they were obtained in the case where the algebra A is commutative.

Proposition 10.4 The double tensor algebra $\overline T(T(A))$ equipped with the extended coproduct $\Delta $ is a graded connected Hopf algebra, where $\deg (w_1\vert \dotsm \vert w_n)= \deg (w_1)+\dotsb +\deg (w_n)$ . Its antipode $\mathcal S$ is the unique algebra anti-automorphism of $\overline T(T(A))$ such that

$$ \begin{align*} \mathcal S(a_1\cdots a_n) = \sum_{\pi\in P(n)}(-1)^{|\pi|}\sum_{\sigma\in\mathbb S_{|\pi|}} a_{B_{\sigma(1)}}\vert\dotsm\vert a_{B_{\sigma(|\pi|)}} \end{align*} $$

for all $a_1,\dotsc ,a_n\in A$ .

As a consequence, we obtain the following result, either by using Proposition 10.4 and lifting the computation of $W_T$ to $\overline T(T(A))$ , or directly from the definition of $W_T$ .

Theorem 10.5 The tensor Wick map admits the explicit expansion

$$ \begin{align*} W_T(a_1\dotsm a_n) = \sum_{S\subseteq[n]}a_S\sum_{\pi\in P([n]\setminus S)}(-1)^{|\pi|}|\pi|!\prod_{B\in\pi}\phi(a_B). \end{align*} $$
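The inner sum in this expansion is a partition-lattice expression for $\phi^{-1}$. As a cross-check (the toy functional below is an arbitrary illustrative choice), one can verify numerically that $\sum_{\pi\in P(n)}(-1)^{|\pi|}|\pi|!\prod_{B\in\pi}\phi(a_B)$ agrees with the convolution inverse of $\phi$ computed recursively from $\phi^{-1}\star\phi=\varepsilon$:

```python
from itertools import combinations
from math import factorial, prod

def set_partitions(elems):
    elems = list(elems)
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def phi(word):
    # arbitrary toy functional with phi(empty word) = 1
    return float(1 + sum(word) + len(word))

def phi_inv_recursive(word):
    # from phi_inv * phi = counit, peeling off the U = [n] term
    if not word:
        return 1.0
    n = len(word)
    total = 0.0
    for k in range(n):
        for U in combinations(range(n), k):
            left = tuple(word[i] for i in U)
            right = tuple(word[i] for i in range(n) if i not in set(U))
            total += phi_inv_recursive(left) * phi(right)
    return -total

def phi_inv_partitions(word):
    # inner sum of the explicit expansion in Theorem 10.5 (case S empty)
    return sum(
        (-1) ** len(pi) * factorial(len(pi))
        * prod(phi(tuple(word[i] for i in sorted(B))) for B in pi)
        for pi in set_partitions(range(len(word)))
    )
```

The factor $(-1)^{|\pi|}|\pi|!$ is exactly what the Neumann series produces: each term $(\varepsilon-\phi)^{\star k}$ contributes the $k!$ orderings of a partition into $k$ blocks.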

Another point addressed in [Reference Ebrahimi-Fard, Patras, Tapia and Zambotti15] is the fact that Wick powers do not satisfy the usual rules of calculus: for example, because $:\!X\!:=X-\mathbb EX$ and $:\!X^2\!:=X^2-2X\mathbb EX+2(\mathbb EX)^2-\mathbb EX^2$ , we see that $:X:\!\!\cdot :X: \neq :X^2:\!\!$ . Nonetheless, Hopf-algebraic techniques and the invertibility of the Wick map allowed us to define a modified product $\cdot _{\!\scriptscriptstyle {\varphi }}$ on polynomials such that

$$ \begin{align*} :X^n:\!\!\cdot_{\!\scriptscriptstyle{\varphi}}:X^m:=:X^{n+m}:\!\!, \end{align*} $$

and a similar formula holds in the multivariate case. Because $W_T$ is a linear automorphism of $\overline {T}(A)$ , these observations can be adapted to the tensor case as in Section 9.
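In the classical Gaussian case this modified product can be made completely explicit, because Wick powers are the probabilists' Hermite polynomials $\mathrm{He}_n$ and the Wick map is the basis change $x^n\mapsto \mathrm{He}_n(x)$. The following sketch (our toy illustration of the conjugation recipe of Section 9, using numpy's Hermite-basis conversions) checks that $:\!X^2\!:\cdot_{\varphi}:\!X^3\!:\;=\;:\!X^5\!:$, while the plain product of Wick powers fails to be a Wick power:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Gaussian Wick calculus: the Wick map W sends x^n to He_n(x) (probabilists'
# Hermite polynomial). The modified product p ._phi q := W(W^{-1}(p) W^{-1}(q))
# is computed by expanding in the He basis, multiplying the underlying
# monomials (coefficient convolution), and converting back to the power basis.

def wick_product(p, q):
    dp = He.poly2herme(p)          # W^{-1}: coefficients of p in Wick powers He_n
    dq = He.poly2herme(q)
    e = np.convolve(dp, dq)        # multiply after replacing He_n by x^n
    return He.herme2poly(e)        # apply W again

# power-basis coefficients of He_n
he = lambda n: He.herme2poly([0.0] * n + [1.0])
```

This is exactly the conjugation $F(F^{-1}(w)F^{-1}(w'))$ of Section 9, specialized to a one-variable commutative toy model.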

Footnotes

This work was supported by the European Research Consortium for Informatics and Mathematics through contract ERCIM 2018-10, and the BMS MATH+ EF1-5 project “On robustness of Deep Neural Networks.”

1 The referee pointed us to the early reference [Reference Avitzour7] for construction of a free product state on the free product of a family of $C^\ast $ -algebras.

References

Akhiezer, N. I., The classical moment problem: and some related questions in analysis, University Mathematical Monographs, 5, Oliver and Boyd, Edinburgh, UK, 1965.
Anshelevich, M., Appell polynomials and their relatives. Int. Math. Res. Not. IMRN 2004(2004), no. 65, 3469–3531.
Anshelevich, M., Appell polynomials and their relatives. II: Boolean theory. Indiana Univ. Math. J. 58(2009a), no. 2, 929–968.
Anshelevich, M., Appell polynomials and their relatives. III: conditionally free theory. Illinois J. Math. 53(2009b), no. 1, 39–66.
Appell, P., Sur une classe de polynômes. Ann. Sci. Éc. Norm. Supér. (2) 9(1880), 119–144.
Arizmendi, O., Hasebe, T., Lehner, F., and Vargas, C., Relations between cumulants in noncommutative probability. Adv. Math. 282(2015), 56–92.
Avitzour, D., Free products of C*-algebras. Trans. Amer. Math. Soc. 271(1982), no. 2, 423–435.
Bożejko, M., Positive definite functions on the free group and the noncommutative Riesz product. Boll. Unione Mat. Ital. A (6) 5(1986), no. 1, 13–21.
Bożejko, M., Leinert, M., and Speicher, R., Convolution and limit theorems for conditionally free random variables. Pacific J. Math. 175(1996), no. 2, 357–388.
Ebrahimi-Fard, K. and Patras, F., Cumulants, free cumulants and half-shuffles. Proc. Roy. Soc. London Ser. A 471(2015), no. 2176, 20140843.
Ebrahimi-Fard, K. and Patras, F., A group-theoretical approach to conditionally free cumulants. Preprint, 2018a. arXiv:1806.06287
Ebrahimi-Fard, K. and Patras, F., Monotone, free, and Boolean cumulants: a shuffle algebra approach. Adv. Math. 328(2018b), 112–132.
Ebrahimi-Fard, K. and Patras, F., From iterated integrals and chronological calculus to Hopf and Rota–Baxter algebras. Preprint, 2019a. arXiv:1911.08766
Ebrahimi-Fard, K. and Patras, F., Shuffle group laws. Applications in free probability. Proc. Lond. Math. Soc. 119(2019b), no. 3, 814–840.
Ebrahimi-Fard, K., Patras, F., Tapia, N., and Zambotti, L., Hopf-algebraic deformations of products and Wick polynomials. Int. Math. Res. Not. IMRN 2020(2020), 10064–10099.
Effros, E. G. and Popa, M., Feynman diagrams and Wick products associated with q-Fock space. Proc. Natl. Acad. Sci. USA 100(2003), no. 15, 8629–8633.
Foissy, L., Bidendriform bialgebras, trees, and free quasi-symmetric functions. J. Pure Appl. Algebra 209(2007), no. 2, 439–459.
Hairer, M. and Shen, H., A central limit theorem for the KPZ equation. Ann. Probab. 45(2017), no. 6B, 4167–4221.
Hasebe, T. and Saigo, H., The monotone cumulants. Ann. Inst. Henri Poincaré Probab. Stat. 47(2011), no. 4, 1160–1170.
Muraki, N., Monotonic independence, monotonic central limit theorem and monotonic law of small numbers. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 4(2001), no. 1, 39–58.
Nica, A. and Speicher, R., Lectures on the combinatorics of free probability, London Mathematical Society Lecture Note Series, 335, Cambridge University Press, Cambridge, 2006.
Peccati, G. and Taqqu, M. S., Wiener chaos: moments, cumulants and diagrams: a survey with computer implementation. Vol. 1, Springer Science & Business Media, Milan, 2011.
Reutenauer, C., Free Lie algebras, London Mathematical Society Monographs. New Series, 7, The Clarendon Press, Oxford University Press, New York, 1993.
Manzel, S. and Schürmann, M., Non-commutative stochastic independence and cumulants. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 20(2017), no. 2, 1750010.
Schützenberger, M. P., Sur une propriété combinatoire des algèbres de Lie libres pouvant être utilisée dans un problème de mathématiques appliquées. Sém. Dubreil. Algèbre Théorie Nr. 12(1958), no. 1, 1–23.
Speicher, R., Free probability theory and non-crossing partitions. Sém. Lothar. Combin. 39(1997), 1–38.
Speicher, R. and Woroudi, R., Boolean convolution. In: Voiculescu, D. (ed.), Free probability theory. Papers from a workshop on random matrices and operator algebra free products, Toronto, Canada, March 1995, American Mathematical Society, Providence, RI, 1997, pp. 267–279.
Voiculescu, D.-V. (ed.), Free probability theory, Fields Institute Communications, 12, American Mathematical Society, Providence, RI, 1997.
Voiculescu, D.-V., Dykema, K., and Nica, A., Free random variables. A noncommutative probability approach to free products with applications to random matrices, operator algebras and harmonic analysis on free groups, CRM Monograph Series, 1, American Mathematical Society, Providence, RI, 1992.
von Waldenfels, W., An approach to the theory of pressure broadening of spectral lines. In: Behara, M., Krickeberg, K., and Wolfowitz, J. (eds.), Probability and information theory II, Lecture Notes in Mathematics, 296, Springer, Berlin, 1973, pp. 19–69.