
A product form for the general stochastic matching model

Published online by Cambridge University Press:  23 June 2021

Pascal Moyal*
Affiliation:
UTC/Université de Lorraine
Ana Bušić*
Affiliation:
Inria and DI ENS, PSL Research University
Jean Mairesse*
Affiliation:
CNRS, Sorbonne Université
*Postal address: LMAC, Université de Technologie de Compiègne, 60203 Compiègne Cedex, France and Institut Elie Cartan, Université de Lorraine, F-54506 Nancy Cedex, France. Email address: pascal.moyal@univ-lorraine.fr
**Postal address: INRIA/Département d’Informatique (ENS), Université PSL, 2 rue Simone Iff, 75012 Paris, France. Email address: ana.busic@inria.fr
***Postal address: Sorbonne Université, CNRS, LIP6, F-75005, Paris, France. Email address: jean.mairesse@lip6.fr

Abstract

We consider a stochastic matching model with a general compatibility graph, as introduced by Mairesse and Moyal (2016). We show that the natural necessary condition of stability of the system is also sufficient for the natural ‘first-come, first-matched’ matching policy. To do so, we derive the stationary distribution under a remarkable product form, by using an original dynamic reversibility property related to that of Adan, Bušić, Mairesse, and Weiss (2018) for the bipartite matching model.

Type
Original Article
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Consider a general stochastic matching (GM) model, as introduced in [Reference Mairesse and Moyal16]: items of various classes enter a system one by one, to be matched in couples and to leave the system immediately upon matching. Two items are compatible if and only if their classes are adjacent in a compatibility graph G that is fixed beforehand. The classes of the entering items are drawn following a prescribed probability measure $\mu$ on ${\mathcal{V}}$ , the set of nodes of G. This model is a variant of the bipartite matching (BM) model introduced in [Reference Caldentey, Kaplan and Weiss12]; see also [Reference Adan and Weiss1]. In the BM model, the compatibility graph is bipartite (say ${\mathcal{V}}={\mathcal{V}}_1 \cup {\mathcal{V}}_2$ ). Among the various natural applications of this model, the nodes of ${\mathcal{V}}_1$ and ${\mathcal{V}}_2$ represent respectively classes of customers and servers, kidneys and patients, blood givers and blood receivers, houses and applicants, and so on. The items are matched in couples of ${\mathcal{V}}_1\times{\mathcal{V}}_2$ , and also arrive in couples of ${\mathcal{V}}_1\times{\mathcal{V}}_2$ . The classes of the elements of the entering couples are random, and it is assumed in the aforementioned references that the class of the entering element of ${\mathcal{V}}_1$ is always independent of that of the entering element of ${\mathcal{V}}_2$ . Various aspects and properties of the BM model have been studied in the recent literature, along with various applications. The max-weight policy is shown to be asymptotically optimal for models with relaxations in [Reference Bušić and Meyn10]. A stabilizing policy and fluid/diffusion approximations are given respectively in [Reference Buke and Chen8] and [Reference Buke and Chen9] for a model in which the compatibility graph is bipartite-complete, but the nominal matchings between items are themselves random. A fluid model for an overloaded system representing a housing allocation platform is analyzed in [Reference Talreja and Whitt21], and [Reference Boxma, David, Perry and Stadje7] addresses an organ transplant system under real-time constraint. An extension of GM models to the case where the matching structure is a hypergraph (that is, a set of nodes with edges possibly gathering more than two nodes) was recently proposed in [Reference Rahmé and Moyal20], addressing the form of the stability region as a function of the geometry of the considered hypergraph.

An important generalization of the BM model is the so-called extended bipartite matching (EBM) model introduced in [Reference Bušić, Gupta and Mairesse11], where the independence assumption between the classes of the entering customer and the entering server is relaxed. Possible entering couples are elements of a bipartite arrival graph on the bipartition ${\mathcal{V}}_1\cup{\mathcal{V}}_2$ . The construction of perfect infinite matchings for EBM models, that is, of random graphs gathering the entered nodes and whose edges represent the executed matches, was recently presented in [Reference Moyal, Bušić and Mairesse18]. Importantly, notice that a GM model can in fact be seen as a particular case of an EBM model, taking the bipartite double cover of G as compatibility graph, and duplicating arrivals with a copy of an item of the same class—see the discussion in [Reference Moyal, Bušić and Mairesse18, p. 4].

The main question raised in [Reference Mairesse and Moyal16] is the shape of the stability region of the GM model; that is, the set of probability measures on ${\mathcal{V}}$ rendering the associated Markov chain positive recurrent. Partly relying on the aforementioned connection between GM and EBM models, the results of [Reference Bušić, Gupta and Mairesse11, Reference Mairesse and Moyal16] show that the stability region is always included in the set of measures defined by (2) below. In words, a probability measure belongs to (2) if the measure of any set of nodes I that does not include any couple of neighbors is less than the measure of the set of neighbors of the nodes of I. The form of the stability region is then heavily dependent on the structural properties of the compatibility graph, and on the matching policy, that is, the rule governing the choice of a match for an entering item whenever several matches are possible. A matching policy is then said to have a maximal stability region for G if the system is stable for any measure satisfying (2). It is shown in [Reference Mairesse and Moyal16] that a GM model on a bipartite G is never stable, that a designated class of graphs (the complete k-partite graphs for $k\ge 3$, see below) makes the stability region maximal for all matching policies, and that the ‘match the longest’ policy always has a maximal stability region for a non-bipartite G. Applying fluid instability arguments to a continuous-time version of the GM model, [Reference Moyal and Perry17] shows that, aside from a particular class of graphs, whenever G is not complete k-partite there always exists a policy of the strict priority type that does not have a maximal stability region, and that the ‘uniform’ random policy (natural in the case where no information is available to the entering items on the state of the system) never has a maximal stability region, thereby providing a partial converse of the result in [Reference Mairesse and Moyal16]. Related models are studied in [Reference Gurvich and Ward14, Reference Nazari and Stolyar19], proposing optimization schemes for models of various matching structures (general graphs and hypergraphs), in which matchings are associated with a cost or a reward. Notice that in both references the matching policies are possibly retarded (or idling), i.e. compatible items may not be matched right away, in order to wait for a more profitable future match.

In this paper we are concerned with the stability region of the GM model under the ‘first-come, first-matched’ (fcfm) policy, which consists in matching the entering item with the oldest compatible item in the system. Compared to the aforementioned stability studies, this matching policy raises technical problems, mainly due to the intricate form of the state space of its natural Markov representation. In the history of study of BM models, the corresponding ‘first-come, first-served’ (FCFS) policy was the first one considered in the seminal papers [Reference Adan and Weiss1, Reference Caldentey, Kaplan and Weiss12], which showed the existence of a stationary matching whenever the probability measure $\mu$ satisfies a natural resource pooling condition analogous to the one in (2). Further, [Reference Adan, Bušić, Mairesse and Weiss3] recently showed that the stationary distribution of the Markov chain of the system can be expressed in a remarkable product form, which is obtained using an original dynamic reversibility argument. We show hereafter that we can in fact construct a reversibility scheme that is related to the one proposed in [Reference Adan, Bušić, Mairesse and Weiss3] for the BM model, to show that the stability region of fcfm is indeed maximal for the GM model, and that the stationary distribution of the corresponding Markov representation can also be expressed in a product form. Our construction presents several specific difficulties with respect to the one in [Reference Adan, Bušić, Mairesse and Weiss3], for two main reasons. First, the GM model is not a particular case of a BM model, but can be seen as a particular case of an EBM model by considering the bipartite double cover of G as a compatibility graph (see [Reference Mairesse and Moyal16, Section 3.2]), and duplicating the arrivals by viewing them as incoming couples of respective classes $(i,\tilde i)$ , where $\tilde i$ is the copy of node i in the bipartite double cover of G (see the detail of the construction in [Reference Moyal, Bušić and Mairesse18, p. 4]). However, for EBM models the reversibility argument in [Reference Adan, Bušić, Mairesse and Weiss3] is unlikely to hold in general; in particular, the maximality of FCFS for the EBM model is conjectured, but left as an open problem, in [Reference Bušić, Gupta and Mairesse11]. Second, several technical difficulties are inherent to the GM model, and are mostly due to the fact that several analogous versions of the two-dimensional (customers/servers) auxiliary chains defined in [Reference Adan, Bušić, Mairesse and Weiss3] can be proposed, in the present context, for a one-dimensional chain in which the original letters and their copies are mixed, and the crucial dynamic reversibility property is unlikely to hold for all of these possible versions. See the precise construction in Section 3.2.

Our main result, representing the stationary distribution of the Markov chain of the model in a product form, is reminiscent of a series of similar results concerning skill-based queueing systems: a similar product form result is presented in [Reference Adan and Weiss2] regarding a two-sided skill-based queueing system in which servers apply the FCFS-ALIS (assign to the longest idle server) policy, connected with the one obtained in [Reference Adan, Bušić, Mairesse and Weiss3]; see also [Reference Buke and Chen9], where the product form of the stationary state of a related skill-based system representing a computer cluster allows the construction of a scheduling algorithm that achieves balanced fair sharing of the resources. Interestingly enough, many multi-class queueing systems with redundancy also present a steady state in a similar product form (see [Reference Ayesta, Bodas and Verloop5, Reference Gardner13] and references therein). It is remarkable that queueing models with matching mechanisms and redundancy models can, in fact, be studied in a common framework; see [Reference Adan, Kleiner, Righter and Weiss4] for a unified approach based on the construction in [Reference Adan, Bušić, Mairesse and Weiss3]. Our work clearly shares common ground with this line of research: our product-form result follows from a related remarkable quasi-reversibility property (see Section 3.2), namely that the local balance equations imply that, up to a bijective transformation of the state space, the system seen in reversed time has symmetrical dynamics to the original model, and Kelly’s lemma can be applied (see Lemma 3.3). Observe, however, that, despite these similarities, the present model cannot be seen as a proper service system for two reasons: first, the compatibility graph is in general not bipartite, implying that an interpretation of the incoming items as customers on the one side, and servers on the other, is in general not possible. Second, in the present model there are no service times, meaning that the items leave the system without delay as soon as they are matched. In the aforementioned references, the model that has the closest connection to the present one is the so-called FCFS infinite directed matching model, presented in [Reference Adan, Kleiner, Righter and Weiss4, Section 6], in which there is a single flow of arrivals and no service times, but a bipartite compatibility graph, with a partition between classes of customers and classes of servers. In this reference, the authors make the assumption that incoming servers only serve earlier customers, and remain unmatched otherwise. The product form obtained in [Reference Adan, Kleiner, Righter and Weiss4, Theorem 6.1] is similar to the one in (3); however, the dynamics of the two models are substantially different, due to the fact that a GM model with a bipartite compatibility graph (as is the case in [Reference Adan, Kleiner, Righter and Weiss4], but not in the present work) is necessarily unstable, as remarked in [Reference Mairesse and Moyal16, Theorem 2]. Consequently, the Markov description in [Reference Adan, Kleiner, Righter and Weiss4, Section 6] necessarily ignores all unmatched servers, assuming that the total arrival rate of servers exceeds the total arrival rate of customers.

The paper is organized as follows. In Section 2 we introduce and formalize our model. In Section 3 we develop our reversibility scheme for the fcfm system, leading to our main result, Theorem 3.1, which establishes the existence of a stationary probability measure in product form for the natural Markov representation of the system under the natural condition that $\mu$ is an element of the set defined by (2). Under the same condition, the construction of perfect infinite fcfm matchings is presented in Section 4.

2. The model

2.1. General notation

Denote by $\mathbb{R}$ the set of real numbers, by $\mathbb{N}$ the set of non-negative integers, and by $\mathbb{N}_+$ the subset of positive integers. For any two integers m and n, we denote $$\left[\kern-0.15em\left[ {m,n} \right]\kern-0.15em\right] = [m,n] \cap \mathbb{N}$$ . For any $n\in\mathbb{N}_+$ , let $\mathfrak S_n$ be the group of permutations of $$\left[\kern-0.15em\left[ {1,n} \right]\kern-0.15em\right]$$ . Let $A^*$ (respectively $A^{\mathbb{N}}$ ) be the set of finite (resp. infinite) words over the alphabet A, i.e. the sets of finite and infinite ordered sequences of elements in the finite set A. Denote by $\emptyset$ the empty word of $A^*$ . For any word $w \in A^*$ and any subset B of A, we let ${\left|w\right|}_B$ be the number of occurrences of elements of B in w. For any letter $a\in A$ , we denote ${\left|w\right|}_a\,{:}\,{\raise-1.5pt{=}}\, {\left|w\right|}_{\{a\}}$ , and for any finite word w we let ${\left|w\right|}=\sum_{a\in A} {\left|w\right|}_a$ be the length of w. For a word $w \in A^*$ of length ${\left|w\right|}=q$ , we write $w=w(1)w(2) \cdots w(q)$ , i.e. w(i) is the ith letter of the word w. In the proofs below, we understand the word $w(1) \cdots w(k)$ as $\emptyset$ whenever $k=0$ . Also, for any $w \in A^*$ and any $i\in \left[\kern-0.15em\left[ {1,\left| w \right|} \right]\kern-0.15em\right]$ , we denote by $w_{[i]}$ the word of length ${\left|w\right|}-1$ obtained from w by deleting its ith letter.

Consider a simple undirected graph $G=({\mathcal{V}},{\mathcal{E}})$ , where ${\mathcal{V}}$ denotes the set of nodes and ${\mathcal{E}} \subset {\mathcal{V}}^2\setminus \Delta$ is the set of edges for $\Delta=\left\{(i,i):\,i\in {\mathcal{V}}\right\}$ , i.e. we do not allow self-loops. We use the notation $u {--} v$ for $(u,v) \in {\mathcal{E}}$ and $u {\not\!\!--} v$ for $(u,v) \not\in {\mathcal{E}}$ . For $U \subset {\mathcal{V}}$ , we define $U^c = {\mathcal{V}} \setminus U$ and ${\mathcal{E}}(U) = \{v \in {\mathcal{V}} :\, \text{there exists } u \in U \text{ such that } u {--} v\}$ .

An independent set of G is a non-empty subset ${\mathcal{I}} \subset {\mathcal{V}}$ which does not include any pair of neighbors, i.e. for all $i, j \in {\mathcal{V}}$ , $i {--} j \Rightarrow i\not\in {\mathcal{I}}$ or $j\not\in {\mathcal{I}}$ . Let ${\mathbb{I}}(G)$ be the set of independent sets of G. An independent set ${\mathcal{I}}$ is said to be maximal if ${\mathcal{I}} \cup \{j\} \not\in {\mathbb{I}}(G)$ for any $j \not\in {\mathcal{I}}$ .

Recall that a connected graph $G=({\mathcal{V}},{\mathcal{E}})$ is said to be complete p-partite, $p\ge 2$ , if there exists a partition of ${\mathcal{V}}$ into maximal independent sets ${\mathcal{I}}_1,\dots, {\mathcal{I}}_p$ , such that, for all $i\neq j$ , $u \in {\mathcal{I}}_i$ , and $v \in {\mathcal{I}}_j$ , $u {--} v$ . Notice that such graphs are referred to as separable in [Reference Mairesse and Moyal16, Reference Moyal and Perry17].

For $w=w(1)w(2) \cdots w(q) \in {\mathcal{V}}^*$ , we denote by ${\vec w}$ the reversed version of w, that is, ${\vec w}=w(q)w(q-1) \cdots w(2)w(1)$ . Let $\overline{\mathcal{V}}$ be an independent copy of the set ${\mathcal{V}}$ , i.e. a set of cardinality $|{\mathcal{V}}|$ , disjoint from ${\mathcal{V}}$ , and containing copies of the elements of ${\mathcal{V}}$ . We denote by $\overline{a}$ the copy of any element a of ${\mathcal{V}}$ , and also say that $\overline{a}$ is the counterpart of a, and vice versa. For any $\overline{a} \in \overline{{\mathcal{V}}}$ , let us denote $\overline{\overline{a}}=a$ . Let ${\textbf V}\,{:}\,{\raise-1.5pt{=}}\, {\mathcal{V}} \cup \overline{\mathcal{V}}$ . For any word ${\textbf w} \in \textbf V^*$ , denote by ${\mathcal{V}}({\textbf w})$ (respectively $\overline{\mathcal{V}}({\textbf w})$ ) the set of letters of ${\mathcal{V}}$ (resp. $\overline{\mathcal{V}}$ ) that are present in ${\textbf w}$ :

\begin{equation*}{\mathcal{V}}({\textbf w}) =\bigl\{a \in {\mathcal{V}} :\,{\left|\textbf{w}\right|}_a >0\bigl\};\quad \quad\overline{\mathcal{V}}({\textbf{w}}) =\bigl\{\overline{a} \in \overline{\mathcal{V}} :\,{\left|\textbf{w}\right|}_{\overline{a}} >0\bigl\}.\end{equation*}

For any ${\textbf w} \in {\textbf V}^*$ , the restriction of ${\textbf w}$ to ${\mathcal{V}}$ (respectively $\overline{\mathcal{V}} $ ) is the word ${\textbf w}|_{{\mathcal{V}}} \in {\mathcal{V}}^*$ (resp. ${\textbf w}|_{\overline{\mathcal{V}}} \in \overline{\mathcal{V}}^*$ ) of size ${\left|{\textbf w}\right|}_{\mathcal{V}}$ (resp. ${\left|{\textbf w}\right|}_{\overline{\mathcal{V}}}$ ) obtained by keeping only the letters belonging to ${\mathcal{V}}$ (resp. $\overline{\mathcal{V}}$ ) in ${\textbf w}$ , in the same order. The dual $\overline{\textbf w}$ of the word ${\textbf w}={\textbf w}(1) \cdots {\textbf w}(q) \in {\textbf V}^*$ is the word obtained by exchanging the letters of ${\textbf w}$ with their counterpart, i.e. $\overline{\textbf w}=\overline{{\textbf w}(1)} \cdots \overline{{\textbf w}(q)}$ .

Example 2.1. Take, for instance, ${\textbf w}=a\,b\,\overline{a}\,c\,\overline{b}\,\overline{c}\,\overline{b}\,d\,a$ . We obtain ${\mathcal{V}}({\textbf w}) = \{a,b,c,d\}$ , $\overline{\mathcal{V}}({\textbf w}) = \{\overline{a}, \overline{b},\overline{c}\}$ , ${\textbf w}|_{{\mathcal{V}}} = a\,b\,c\,d\,a$ , ${\textbf w}|_{\overline{\mathcal{V}}}=\overline{a}\,\overline{b}\,\overline{c}\,\overline{b}$ , $\overline{\textbf w} =\overline{a}\,\overline{b}\,a\,\overline{c}\,b\,c\,b\,\overline{d}\,\overline{a}$ , ${\vec {\textbf w}} = a\,d\,\overline{b}\,\overline{c}\,\overline{b}\,c\,\overline{a}\,b\,a$ .
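To keep the notation concrete, the following short sketch (in Python; it is not part of the original paper, and the encoding of a copied letter by a trailing '~' is purely a convention of the sketch) implements the word operations above and reproduces the values of Example 2.1.

```python
# Minimal sketch of the word operations of Section 2.1 (not from the paper).
# A word over V ∪ V̄ is a list of strings; the copy of class 'a' is written 'a~'.

def bar(letter):
    """Counterpart of a letter: a -> a~ and a~ -> a."""
    return letter[:-1] if letter.endswith('~') else letter + '~'

def dual(word):
    """Exchange every letter with its counterpart."""
    return [bar(x) for x in word]

def reverse(word):
    """Reversed version of the word."""
    return word[::-1]

def restrict(word, barred=False):
    """Restriction of the word to V (barred=False) or to V̄ (barred=True)."""
    return [x for x in word if x.endswith('~') == barred]

def letters(word, barred=False):
    """V(w) or V̄(w): the set of letters of V (resp. V̄) occurring in w."""
    return set(restrict(word, barred))

# Example 2.1: w = a b ā c b̄ c̄ b̄ d a
w = ['a', 'b', 'a~', 'c', 'b~', 'c~', 'b~', 'd', 'a']
print(letters(w))                 # {'a', 'b', 'c', 'd'}
print(letters(w, barred=True))    # {'a~', 'b~', 'c~'}
print(restrict(w))                # ['a', 'b', 'c', 'd', 'a']            = w restricted to V
print(restrict(w, barred=True))   # ['a~', 'b~', 'c~', 'b~']             = w restricted to V̄
print(dual(w))                    # ['a~', 'b~', 'a', 'c~', 'b', 'c', 'b', 'd~', 'a~']
print(reverse(w))                 # ['a', 'd', 'b~', 'c~', 'b~', 'c', 'a~', 'b', 'a']
```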

2.2. Formal definition of the model

We consider a general stochastic matching model, as defined in [Reference Mairesse and Moyal16]: items enter a system one by one, each belonging to a determinate class. The set of classes is denoted by ${\mathcal{V}}$ , and identified with $$\left[\kern-0.15em\left[ {1,|{\cal V}|} \right]\kern-0.15em\right]$$ . We fix a connected simple graph $G=({\mathcal{V}},{\mathcal{E}})$ having set of nodes ${\mathcal{V}}$ , termed the compatibility graph. Upon arrival, any incoming item of class, say, $i \in {\mathcal{V}}$ is either matched with an item present in the buffer of a class j such that $i {--} j$ , if any, or, if no such item is available, it is stored in the buffer to wait for its match. Whenever several matches are possible for an incoming item i, it is the role of the matching policy to decide the match of the latter item without ambiguity. Throughout, we assume that the matching policy is ‘first-come, first-matched’ (fcfm), i.e. the match of i is the oldest among all stored items of neighboring classes of i. Each matched pair departs the system right away.

We assume that the successive classes of entering items are random. We fix an underlying probability space $(\Omega,\mathcal F,\mathbb P)$ , on which all random variables are defined. For any $n \in \mathbb{N}_+$ , let $V_n \in {\mathcal{V}}$ denote the class of the nth incoming item. Throughout this work, we assume that the sequence ${\left({V_{n}}\right)_{n\in\mathbb{N}_+}}$ is independent and identically distributed, with common distribution $\mu$ on ${\mathcal{V}}$ . Without loss of generality, we assume that $\mu$ has full support ${\mathcal{V}}$ , i.e. $\mu(i)>0$ for any $i\in {\mathcal{V}}$ and $\sum_{i\in {\mathcal{V}}} \mu(i) =1$ (we write $\mu \in {\mathscr{M}}({\mathcal{V}})$ ). Altogether, according to the terminology in [Reference Mairesse and Moyal16], we thus consider the GM model associated to $(G,\mu,{FCFM})$ .

2.3. Markov representation

Fix the compatibility graph $G=({\mathcal{V}},{\mathcal{E}})$ until the end of this section. Fix an integer $n_0 \ge 1$ , and a realization $v_1,\ldots,v_{n_0}$ of $V_1,\ldots,V_{n_0}$ . Define the word $v\in {\mathcal{V}}^*$ by $v\,{:}\,{\raise-1.5pt{=}}\, v_1\cdots v_{n_0}$ . Then, there exists a unique fcfm matching of the word v, i.e. a graph having the set of nodes $\left\{v_1,\ldots,v_{n_0}\right\}$ and whose edges represent the matches performed in the system until time $n_0$ , if the successive arrivals are given by v and the matching policy is fcfm. This matching is denoted by $M^{FCFM}(v)$ . The state of the system is defined as the word $W^{FCFM}(v)\in {\mathcal{V}}^*$ , whose letters are the classes of the unmatched items at time $n_0$ , i.e. the isolated vertices in the matching $M^{FCFM}(v)$ , in their order of arrival. The word $W^{FCFM}(v)\in {\mathcal{V}}^*$ is called the queue detail at time $n_0$ . Then, any admissible queue detail belongs to the set $\mathbb W = \bigl\{ w\in {\mathcal{V}}^* : \, \text{for all } (i,j) \in {\mathcal{E}}, \, |w|_i|w|_j=0 \bigr\}$ .

Fix a (possibly random) word $Y \in \mathbb W$ . Denote, for all $n\in\mathbb{N}_+$ , by $W^{\{Y\}}_n$ the queue detail at time n (i.e. just after the arrival of item n) if the queue detail at time 0 was set to Y, in other words $W^{\{Y\}}_n= W^{FCFM}\left(YV_1 \cdots V_{n}\right)$ . Then, the buffer-content sequence is a stochastic recursive sequence, since we clearly have that

\[\left\{\begin{array}{l@{\quad}l}W^{\{Y\}}_0 &= Y , \\ \\[-7pt] W^{\{Y\}}_{n+1} &=W^{\{Y\}}_n \odot_{FCFM} (V_{n+1}),\quad n\in\mathbb{N},\end{array}\right.\]

where, for all $w\in\mathbb W$ and $v\in{\mathcal{V}}$ ,

\[w \odot_{FCFM} (v) =\left \{\begin{array}{l@{\quad}l}wv & \textrm{if } \; |w|_{{\mathcal{E}}(v)} = 0 , \\ \\[-7pt] w_{\left [\Phi(w,v)\right]} & \textrm{otherwise},\end{array}\right .\]

where $\Phi(w,v) = \min \bigl\{k\in \left[\kern-0.15em\left[ {1,|w|} \right]\kern-0.15em\right] :\, w(k)\in{\mathcal{E}}(v)\bigr\}$ . Consequently, if we assume that the sequence ${\left({V_n}\right)_{n\in\mathbb{N}_+}}$ is independent of Y, the queue detail ${\Big({W^{\{Y\}}_n}\Big)_{n\in\mathbb{N}}}$ is a $\mathbb W$ -valued $\mathcal F_n$ -Markov chain, where $\mathcal F_0 = \sigma(Y)$ and $\mathcal F_n=\sigma\left(Y,V_1,\ldots,V_{n}\right)$ for all $n\in\mathbb{N}_+$ . The sequence ${\left({W_n}\right)_{n\in\mathbb{N}}}$ is termed the natural chain of the model.
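As an illustration of this recursion, here is a minimal sketch (Python; not part of the paper, and the graph used in the last lines is a hypothetical example) of the map $\odot_{FCFM}$ and of the resulting queue detail.

```python
# Sketch of the fcfm update w ⊙ v of the queue detail (illustration, not from the paper).
# The compatibility graph is given as an adjacency dictionary: neighbors[i] = set of classes j with i -- j.

def fcfm_update(w, v, neighbors):
    """One transition of the natural chain: w is the current queue detail (list of classes),
    v the class of the entering item."""
    for k, c in enumerate(w):
        if c in neighbors[v]:          # oldest compatible stored item found at position k
            return w[:k] + w[k + 1:]   # both items leave: delete the k-th letter
    return w + [v]                     # no compatible item stored: v joins the buffer

def queue_detail(arrivals, neighbors, initial=()):
    """Queue detail after feeding the arrival sequence to an initially given buffer."""
    w = list(initial)
    for v in arrivals:
        w = fcfm_update(w, v, neighbors)
    return w

# Hypothetical example: the 'paw' graph 1--2, 1--3, 2--3, 3--4.
neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(queue_detail([1, 1, 4, 2, 3], neighbors))
# [4] : the 2 matches the first 1, the 3 matches the second (oldest remaining) 1, the 4 stays in line
```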

2.4. Stability of the matching model

Fix a connected graph G. The chain ${\left({W_n}\right)_{n\in\mathbb{N}}}$ is clearly irreducible. To see this, observe that for any word $w=w(1) \cdots w(q) \in\mathbb W$ , w can be reached from the empty word $\emptyset$ , for instance if the classes of the first q incoming items are successively given by $w(1),w(2), \ldots, w(q)$ , and, conversely, $\emptyset$ can be reached from w, for instance if the classes of the next q incoming items are $w^{\prime}(1),w^{\prime}(2), \ldots, w^{\prime}(q)$ , where $w(\ell) {--} w'(\ell)$ for any $\ell \in \left[\kern-0.15em\left[ {1,q} \right]\kern-0.15em\right]$ .

In line with [Reference Mairesse and Moyal16] we define the stability region of the GM model $(G,\mu,{FCFM})$ by

(1) \begin{equation}{STAB}(G,{FCFM}) \,{:}\,{\raise-1.5pt{=}}\, \left\{\mu \in {\mathscr{M}}\left({\mathcal{V}}\right):\,{\left({W_n}\right)_{n\in\mathbb{N}}} \mbox{ is positive recurrent}\right\}\!,\end{equation}

which is clearly independent of the initial state Y in view of the above observation. Let us also define the set of probability measures

(2) \begin{equation}{NCOND}(G)\,{:}\,{\raise-1.5pt{=}}\, \left\{\mu \in {\mathscr{M}}\left({\mathcal{V}}\right):\,\mbox{for any }{\mathcal{I}} \in {\mathbb{I}}(G), \mu\left({\mathcal{I}}\right) < \mu\left({\mathcal{E}}\left({\mathcal{I}}\right)\right)\right\} ,\end{equation}

which, from [Reference Mairesse and Moyal16, Theorem 1], is non-empty if and only if G is non-bipartite. We know from [Reference Mairesse and Moyal16, Proposition 2] that for any policy $\Phi$ in the set of so-called admissible matching policies (that includes fcfm), the stability region of the GM model associated to $(G,\mu,\Phi)$ , defined similarly to (1), is included in ${NCOND}(G)$. An admissible policy $\Phi$ is said to be maximal if these two sets coincide. Theorem 2 in [Reference Mairesse and Moyal16] establishes the maximality of the ‘match the longest’ policy (consisting in matching the incoming item with an arbitrary item within the class having the longest queue among all compatible classes, ties being broken uniformly at random) for any non-bipartite graph; however, priority policies and the ‘uniform’ random policy are in general not maximal [Reference Moyal and Perry17, Theorem 3 and Proposition 7]. Last, it is stated in [Reference Mairesse and Moyal16, Theorem 2] that any GM model on a graph G that is complete p-partite for $p\ge 3$ has a stability region that coincides with ${NCOND}(G)$ whatever the admissible matching policy; in other words, any admissible policy is maximal in this case.

We show in this paper that the policy fcfm is maximal, and characterize the steady state of the system under the condition that $\mu$ is an element of ${NCOND}(G)$ .

3. Product form

In this section we prove our main result, Theorem 3.1, which establishes the maximality of the fcfm policy by explicitly constructing the stationary distribution of the natural chain on $\mathbb W$ . Interestingly enough, this probability distribution has a remarkable product form, detailed in (3).

Theorem 3.1. Let $G=({\mathcal{V}},{\mathcal{E}})$ be a non-bipartite graph. Then the sets ${STAB}(G,{FCFM})$ and ${NCOND}(G)$ , defined respectively by (1) and (2), coincide. In other words, the GM model $(G,\mu,{FCFM})$ is stable if and only if $\mu$ belongs to the set ${NCOND}(G)$ . In that case, the only stationary probability of the natural chain ${\left({W_n}\right)_{n\in\mathbb{N}}}$ is

(3) \begin{equation}\Pi_W\left(w\right)=\alpha\prod\limits_{\ell =1}^q \frac{\mu(w(\ell)) }{\mu\bigl({\mathcal{E}}\left(\left\{w(1),\ldots,w(\ell)\right\}\right)\bigl)} \quad \,\textit{for any }w=w(1) \cdots w(q) \in \mathbb{W},\end{equation}

where

(4) \begin{equation}\alpha = \left\{1 +\sum_{{\mathcal{I}}\in\mathbb I(G)} \sum_{\sigma\in\mathfrak{S}_{|{\mathcal{I}}|}} \prod_{j=1}^{|{\mathcal{I}}|}\frac{\mu(i_{\sigma(j)}) }{\mu\left({\mathcal{E}}\left(\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}\right)\right)-\mu\left(\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}\right)}\right\}^{-1}.\end{equation}
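For small graphs, the right-hand sides of (3) and (4) can be evaluated by brute force. The following sketch (Python; not part of the paper; the graph and the measure $\mu$ in the last lines are hypothetical, chosen so that $\mu$ satisfies (2)) enumerates the independent sets and permutations appearing in (4).

```python
# Sketch (not from the paper) evaluating the product form (3)-(4) for a small graph.
from itertools import combinations, permutations

def neighborhood(S, neighbors):
    """E(S): the set of nodes having at least one neighbor in S."""
    return set().union(*(neighbors[i] for i in S)) if S else set()

def mu_of(S, mu):
    return sum(mu[i] for i in S)

def independent_sets(neighbors):
    """All non-empty independent sets of the graph."""
    V = list(neighbors)
    for r in range(1, len(V) + 1):
        for S in combinations(V, r):
            if all(j not in neighbors[i] for i, j in combinations(S, 2)):
                yield S

def alpha(mu, neighbors):
    """Normalizing constant (4); requires mu to satisfy condition (2)."""
    total = 1.0
    for I in independent_sets(neighbors):
        for sigma in permutations(I):
            prod = 1.0
            for j in range(1, len(sigma) + 1):
                prefix = sigma[:j]
                prod *= mu[sigma[j - 1]] / (mu_of(neighborhood(prefix, neighbors), mu) - mu_of(prefix, mu))
            total += prod
    return 1.0 / total

def pi_W(w, mu, neighbors):
    """Stationary probability (3) of a queue detail w (a tuple of classes)."""
    p = alpha(mu, neighbors)
    for l in range(1, len(w) + 1):
        p *= mu[w[l - 1]] / mu_of(neighborhood(set(w[:l]), neighbors), mu)
    return p

# Hypothetical usage: the 'paw' graph 1--2, 1--3, 2--3, 3--4, with a measure satisfying (2).
neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
mu = {1: 0.3, 2: 0.3, 3: 0.3, 4: 0.1}
print(alpha(mu, neighbors), pi_W((1, 1), mu, neighbors))
```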

The remainder of this section is devoted to the proof of Theorem 3.1, which is based on a subtle reversibility scheme that is related to the proof of reversibility for the BM model in [Reference Adan, Bušić, Mairesse and Weiss3]. Let us recall, however, that the GM model is not a particular case of the BM model, so the proof below presents many differences with respect to [Reference Adan, Bušić, Mairesse and Weiss3] (see the discussion in Section 1).

3.1. Auxiliary Markov representations

We now introduce two auxiliary Markov representations of the system: the $\textbf V^*$ -valued backwards and forwards detailed chains, related to those of the backwards and forwards pair-by-pair detailed FCFS matching processes introduced in [Reference Adan, Bušić, Mairesse and Weiss3, Section 5.1].

Backwards detailed chain. We define the ${\textbf V}^*$ -valued backwards detailed process ${\left({B_n}\right)_{n\in\mathbb{N}}}$ as follows. $B_0=\emptyset$ ; for any $n\ge 1$ ,

  • if $W_n=\emptyset$ (i.e. all the items arrived up to time n are matched at time n), then we set $B_n=\emptyset$ ;

  • if not, we let $i(n)\le n$ be the arrival time of the oldest item in line. Then, the word $B_n$ is of length $n-i(n)+1$ and, for any $\ell \in \left[\kern-0.15em\left[ 1,n-i(n)+1 \right]\kern-0.15em\right]$ , we set

    \[B_n(\ell)=\left\{\begin{array}{l@{\quad}l}V_{i(n)+\ell-1} \,\, &\mbox{if $V_{i(n)+\ell-1}$ has not been matched up to time \textit{n}} , \\ \\[-7pt] \overline{V_{k}}\,\, &\mbox{if $V_{i(n)+\ell-1}$ is matched at or before time \textit{n} with item $V_k$}\\ &\mbox{(where $k \le n$)}.\\ \end{array}\right.\]

In other words, assuming that the initial system is empty, the word $B_n$ gathers the class indices of all unmatched items entered up to n, at the places corresponding to their arrival times, and the copies of the class indices of the items matched before n, but after the arrival of the oldest unmatched item at n, at the place corresponding to the arrival time of their respective matches. Observe that we necessarily have that $B_n(1)=V_{i(n)}\in {\mathcal{V}}$ . Moreover, the word $B_n$ necessarily contains all the letters of $W_n$ . More precisely, we have

(5) \begin{equation}W^{\{\emptyset\}}_n=B_n|_{{\mathcal{V}}}, \,\,\,\qquad n\ge 0.\end{equation}

It is easily seen that ${\left({B_n}\right)_{n\in\mathbb{N}}}$ is also a ${\mathcal{F}}_n$ -Markov chain: for any $n\ge 0$ , the value of $B_{n+1}$ can be deduced from that of $B_n$ and the class $V_{n+1}$ of the item entered at time $n+1$ .
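The definition of $B_n$ can be checked on finite arrival sequences. The sketch below (Python; not part of the paper; copied letters are again written with a trailing '~', and the graph in the usage line is hypothetical) simulates fcfm from an empty system, records the match of each arrival, and assembles $B_n$ accordingly.

```python
# Sketch (not from the paper): the backwards detailed state B_n from arrivals V_1,...,V_n.

def fcfm_matches(arrivals, neighbors):
    """For each arrival index p (1-based), match[p] = index of its match, or None if unmatched."""
    match, buffer = {}, []                     # buffer holds indices of unmatched items, oldest first
    for p, v in enumerate(arrivals, start=1):
        match[p] = None
        for q in buffer:
            if arrivals[q - 1] in neighbors[v]:
                match[p], match[q] = q, p      # match with the oldest compatible stored item
                buffer.remove(q)
                break
        else:
            buffer.append(p)
    return match

def backwards_state(arrivals, neighbors):
    """B_n for n = len(arrivals), with copied letters written as 'class~'."""
    n = len(arrivals)
    match = fcfm_matches(arrivals, neighbors)
    unmatched = [p for p in range(1, n + 1) if match[p] is None]
    if not unmatched:
        return []                              # W_n is empty, hence B_n is empty
    i_n = unmatched[0]                         # arrival time of the oldest item in line
    word = []
    for p in range(i_n, n + 1):
        if match[p] is None:
            word.append(str(arrivals[p - 1]))               # still unmatched: its own class
        else:
            word.append(str(arrivals[match[p] - 1]) + '~')  # matched: copy of its match's class
    return word

# Hypothetical usage on the 'paw' graph 1--2, 1--3, 2--3, 3--4:
neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(backwards_state([1, 4, 2, 4], neighbors))
# ['4', '1~', '4'] : the 2 entering third is matched with the first item, of class 1
```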

Forward detailed chain. The ${\textbf V}^*$ -valued forward detailed process ${\left({F_n}\right)_{n\in\mathbb{N}}}$ is defined as follows: $F_0=\emptyset$ ; for any $n\ge 1$ ,

  • if $W_n=\emptyset$ then we also set $F_n=\emptyset$ ;

  • if not, we let $\mathscr U_n$ be the set of items entered up to n and not yet matched at n (which is non-empty since $W_n \ne \emptyset$ ), and set

    \[j(n)=\sup\left\{ m > n:\, V_m \mbox{ is matched with an element of } \mathscr U_n \right\}.\]
    Notice that j(n) is possibly infinite. Then, $F_n$ is the word of ${\textbf V}^*$ of size $j(n)-n$ if $j(n)<\infty$ (respectively the word of ${\textbf V}^{\mathbb{N}}$ if $j(n)=+\infty$ ) such that, for any $\ell \in \left[\kern-0.15em\left[ 1,j(n)-n \right]\kern-0.15em\right]$ (resp. $\ell \in \mathbb{N}_+$ ),
    \[F_n(\ell)=\left\{\begin{array}{l@{\quad}l} V_{n+\ell} \,\, &\mbox{if $V_{n+\ell}$ is not matched with an item arrived up to \textit{n}},\\ \\[-7pt] \overline{V_{k}}\,\, &\mbox{if $V_{n+\ell}$ is matched with item $V_k$, where $k \le n$}. \end{array}\right.\]

In other words, assuming that the initial system is empty, the word $F_n$ contains the copies of all the class indices of the items entered up to time n and matched after n, at the place corresponding to the arrival time of their respective matches, together with the class indices of all items entered after n and before the last item matched with an item entered up to n, and not matched with an element entered before n, if any, at the place corresponding to their arrival times. Observe that if $F_n \in {\textbf V}^*$ is finite, then $F_n\left(j(n)-n\right) \in \overline{\mathcal{V}}$ since, by definition, the item $V_{j(n)}$ is matched with some $V_k$ for $k\le n$ , and therefore $F_n\left(j(n)-n\right)=\overline{V}_k$ . It is also clear that ${\left({F_n}\right)_{n\in\mathbb{N}}}$ is a ${\mathcal{F}}_{n}$ -Markov chain since, for any $n\ge 0$ , the value of $F_{n+1}$ depends solely on $F_n$ and the class index $V_{j(n)+1}$ of the item entered at time $n+j(n)+1$ . Recalling that $F_0=\emptyset \in\textbf V^*$ , observe that

(6) \begin{equation}F_n\in\textbf V^* \quad \mbox{almost surely (a.s.) for all }n\in\mathbb{N}.\end{equation}

Indeed, for any $n\in\mathbb{N}$ and ${\textbf w}\in\textbf V^*$ , on the event $\{F_n={\textbf w}\}$ , $F_{n+1}$ being an infinite word (that is, an element of $\textbf V^\mathbb{N}$ ) would mean that the incoming item at time $n+1$ is never matched, an event of null probability given the connectedness of G.
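Similarly, $F_n$ can be computed from a sufficiently long arrival sequence. The sketch below (Python; not part of the paper; it reuses the fcfm_matches helper of the previous sketch and assumes that every item unmatched at time n finds its match within the provided sequence) assembles $F_n$ from the simulated matches.

```python
# Sketch (not from the paper): the forward detailed state F_n, given a long enough arrival sequence.
# Reuses fcfm_matches from the previous sketch; copied letters are again written as 'class~'.

def forward_state(arrivals, n, neighbors):
    """F_n; assumes the arrival sequence extends (at least) up to time j(n)."""
    match = fcfm_matches(arrivals, neighbors)
    # U_n: items entered up to n and not yet matched at time n
    U_n = [p for p in range(1, n + 1) if match[p] is None or match[p] > n]
    if not U_n:
        return []                              # W_n is empty, hence F_n is empty
    partners = [match[p] for p in U_n if match[p] is not None]
    j_n = max(partners)                        # last arrival matched with an element of U_n
    word = []
    for m in range(n + 1, j_n + 1):
        if match[m] is not None and match[m] <= n:
            word.append(str(arrivals[match[m] - 1]) + '~')  # matched with an item entered up to n
        else:
            word.append(str(arrivals[m - 1]))               # not matched with an item entered up to n
    return word

# Hypothetical usage on the 'paw' graph 1--2, 1--3, 2--3, 3--4, with n = 2:
neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(forward_state([1, 4, 2, 3, 2], 2, neighbors))   # ['1~', '4~']
```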

Example 3.1. Consider the compatibility graph of Figure 1, addressed in [Reference Mairesse and Moyal16, Section 5] (this is the smallest graph that is neither bipartite nor complete p-partite). An arrival scenario together with successive values of the natural chain and the backwards and forwards chains are represented in Figure 2.

Figure 1: Compatibility graph of Example 3.1.

Figure 2: An arrival scenario on the graph of Figure 1, and the trajectories of the three Markov chains.

3.2. Reversibility

For both chains ${\left({B_n}\right)_{n\in\mathbb{N}}}$ and ${\left({F_n}\right)_{n\in\mathbb{N}}}$ , a state ${\textbf w} \in {\textbf V}^*$ is said to be admissible if it can be reached by the chain under consideration under fcfm. We denote

\begin{align*}\mathbb{B} &\,{:}\,{\raise-1.5pt{=}}\, \bigl\{{\textbf w} \in {\textbf V}^*:\,{\textbf w} \mbox{ is admissible for }{\left({B_n}\right)_{n\in\mathbb{N}}}\bigl\} \\[3pt] \mathbb{F} &\,{:}\,{\raise-1.5pt{=}}\, \bigl\{{\textbf w} \in {\textbf V}^*:\,{\textbf w} \mbox{ is admissible for }{\left({F_n}\right)_{n\in\mathbb{N}}}\bigl\}. \nonumber\end{align*}

Define the following measure on ${\textbf V}^*$ :

(7) \begin{equation}\left\{\begin{array}{ll}\nu_B(\emptyset) &=1 , \\ \nu_B\left({\textbf w}\right) &=\prod\limits_{i\in{\mathcal{V}}} \mu(i)^{|{\textbf w}|_i+| \overline{\textbf w}|_i},\quad{\textbf w} \in {\textbf V}^*\setminus\{\emptyset\}.\end{array}\right.\end{equation}

Observe that by the very definition in (7), the measure of a word ${\textbf w}$ does not change whenever any of its letters a is exchanged with $\overline{a}$ . In particular, we have $\nu_B(\vec{\overline{\textbf w}})=\nu_B({\textbf w})$ for any ${\textbf w}$ in ${\textbf V}^*$ . We have the following result.
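The observation above is immediate to check numerically: in the weight (7), a letter and its copy contribute the same factor. A minimal sketch (Python; not part of the paper; the measure in the usage lines is the hypothetical one used in the earlier sketches):

```python
# Sketch (not from the paper): the weight ν_B of (7). A letter and its copy contribute the same
# factor μ(i), so ν_B is invariant under taking the dual and/or reversing the word.
from math import prod

def nu_B(word, mu):
    return prod(mu[int(x.rstrip('~'))] for x in word) if word else 1.0

mu = {1: 0.3, 2: 0.3, 3: 0.3, 4: 0.1}                     # hypothetical measure, as before
w = ['1', '3~', '4~']
w_rev_dual = [x.rstrip('~') if x.endswith('~') else x + '~' for x in reversed(w)]
print(nu_B(w, mu), nu_B(w_rev_dual, mu))                   # both equal 0.3 * 0.3 * 0.1 = 0.009
```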

Proposition 3.1. Suppose that $\mu$ belongs to ${NCOND}(G)$ defined by (2). Then the measure $\nu_B$ defined by (7) is finite on $\mathbb{B}$ . Moreover, the backwards detailed Markov chain ${\left({B_n}\right)_{n\in\mathbb{N}}}$ is positive recurrent, and admits $\beta\nu_B$ as its unique stationary distribution, where $\beta\,{:}\,{\raise-1.5pt{=}}\,\nu_B(\mathbb{B})^{-1}$ is the normalizing constant.

Before proving Proposition 3.1 we need to introduce a few technical results.

Lemma 3.1. Let ${\textbf w}={\textbf w}(1)\cdots{\textbf w}(q)\in {\textbf V}^*$ . Then ${\textbf w}\in \mathbb{B}$ if and only if the following three conditions hold:

(8) \begin{align}&{\textbf w}(1)\in {\mathcal{V}};\end{align}
(9) \begin{align}&{\textbf w}(k) {\not\!\!--} {\textbf w}(\ell)\mbox{ for any }k,\ell \in \left[\kern-0.15em\left[ 1,q \right]\kern-0.15em\right]\mbox{ such that } {\textbf w}(k)\in {\mathcal{V}}\mbox{ and }{\textbf w}(\ell)\in {\mathcal{V}};\end{align}
(10) \begin{align}&{\textbf w}(k) {\not\!\!--} \overline{{\textbf w}(j)}\mbox{ for any } k,j,\,\, 1\le k <j \le q,\mbox{ such that } {\textbf w}(k)\in {\mathcal{V}}\mbox{ and }{\textbf w}(j)\in \overline{\mathcal{V}}.\end{align}

Proof of Lemma 3.1. By the very definition of ${\left({B_n}\right)_{n\in\mathbb{N}}}$ , (8) holds true for any (non-empty) admissible word of the chain. Moreover, the necessity of (9) is obvious: should an element ${\textbf w}$ of $\mathbb{B}$ contain two compatible letters in ${\mathcal{V}}$ , the two corresponding items would have been matched. Let us prove the necessity of (10). Fix two such indices k and j in $\left[\kern-0.15em\left[ 1,q \right]\kern-0.15em\right]$ . This means that $V_{i(n)+j-1}$ is matched with an item $V_\ell$ of class $\overline{{\textbf w}(j)}$ . Then, it can be easily checked that ${\textbf w}(k) {--} \overline{{\textbf w}(j)}$ implies that $V_{\ell}$ is matched with $V_{i(n)+k-1}$ , an absurdity since the item of class $V_{i(n)+k-1}={\textbf w}(k)$ is still unmatched at n. This completes the proof of necessity.

Regarding sufficiency, fix a non-empty state ${\textbf w}$ satisfying (8), (9), and (10):

(11) \begin{equation}{\textbf w} = a(1)\,\overline{a(1,1)}\,\overline{a(1,2)}\cdots\overline{a(1,k_1)}\,a(2)\,\overline{a(2,1)}\cdots\overline{a(2,k_2)}\,a(3)\cdots a(q)\,\overline{a(q,1)}\cdots\overline{a(q,k_q)},\end{equation}

where $q\ge 1$ and, for all $\ell \in\left[\kern-0.15em\left[ 1,q \right]\kern-0.15em\right]$ , $k_\ell \in \mathbb{N}$ , $a(\ell) \in {\mathcal{V}}$ , and $a(\ell, j) \in {\mathcal{V}}$ for all $j\in\left[\kern-0.15em\left[ 1,k_\ell \right]\kern-0.15em\right]$ . We provide an arrival scenario leading to ${\textbf w}$ from the empty state; to construct this sequence of arrivals, it is convenient to represent it as the following word of ${\mathcal{V}}^*$ ,

\begin{align*}\textbf{b}&\,{:}\,{\raise-1.5pt{=}}\, b(0,k_0)b(0,k_0-1)\cdots b(0,1)b(1)b(1,1)\cdots b(1,k_1)b(2)b(2,1) \cdots b(2,k_2)b(3)\cdots \\[3pt] &\quad \cdots b(q)b(q,1)\cdots b(q,k_q),\end{align*}

where $b(0,k_0)$ is the class of the first incoming item, $b(0,k_0-1)$ is the class of the second incoming item, and so on, and $b(q, k_q)$ is the class of the last incoming item. The word $\textbf b$ is constructed as follows. During the procedure, all letters of $\textbf b$ that have been fixed are said to be ‘determined’ and the other ones ‘undetermined’. Also, for all $\ell \in\left[\kern-0.15em\left[ 1,q \right]\kern-0.15em\right]$ and $j\in\left[\kern-0.15em\left[ 1,k_\ell \right]\kern-0.15em\right]$ , we say that the letters $a(\ell)$ and $b(\ell)$ , and respectively the letters $a(\ell,j)$ and $b(\ell,j)$ , are ‘at the same place’ in the words ${\textbf w}$ and $\textbf b$ . We proceed as follows:

  1. At first, all letters of $\textbf b$ are undetermined, and we set $k_0=0$ .

  2. We then set, for all $\ell\in\left[\kern-0.15em\left[ 1,q \right]\kern-0.15em\right]$ , $b(\ell)=a(\ell)$ .

  3. Then, as long as there are undetermined letters of $\textbf b$ on the right of b(1), we take the copied letter of ${\textbf w}$ at the place of the first (from right to left) undetermined letter in $\textbf b$ . Say this copied letter is $\overline{a(\ell, j)}$ . We are in the following alternative:

    (i) If there exists at least one copied letter $\overline{a(\ell',j')}$ of ${\textbf w}$ on the left of $\overline{a(\ell, j)}$ at a place of an undetermined letter of $\textbf b$ , such that $a(\ell,j){--} a(\ell',j')$ in G, then we take the first such copied letter from right to left, and denote it $\overline{a(\ell',j')}$ . Then we set $b(\ell,j)=a(\ell',j')$ and $b(\ell',j')=a(\ell,j)$ , i.e. an incoming $a(\ell',j')$ -item is matched with a stored $a(\ell,j)$ -item, and the state of the backwards chain marks the copy and exchange of the corresponding letters.

    (ii) Otherwise, there is no copied letter $\overline{a(\ell',j')}$ of ${\textbf w}$ on the left of $\overline{a(\ell, j)}$ at a place of an undetermined letter of $\textbf b$ such that $a(\ell,j){--} a(\ell',j')$ in G, so in the state of the backwards chain the letter $\overline{a(\ell,j)}$ corresponds to the match of an item compatible with an $a(\ell,j)$ -item that was stored in the system before the arrival of the b(1)-item. Then we set $k_0\,{:}\,{\raise-1.5pt{=}}\, k_0+1$ and $b(0,k_0)=a(\ell,j)$ (i.e. we add a new letter $a(\ell,j)$ at the first place of the word $\textbf b$ ), and set $b(\ell,j)$ as an arbitrary letter such that $b(\ell,j){--} a(\ell,j)$ in G. Then the corresponding incoming item of class $b(\ell,j)$ is matched upon arrival with the $a(\ell, j)$ -item that was stored in the buffer before the arrival of the b(1)-item.

At the end of the procedure, we set the classes of the consecutive incoming items as the successive letters of $\textbf b$ taken from left to right; in other words we set $V_1=b(0,k_0)$ , $V_2=b(0,k_0-1)$ , …, $V_{k_0}=b(0,1)$ , $V_{k_0+1}=b(1)$ , …, $V_{q+\sum_{\ell=0}^q k_\ell}=b(q,k_q)$ . Start from an empty system, i.e. $B_0=\emptyset$ . It can then be checked that the above procedure constructs a proper perfect fcfm matching of the incoming items corresponding to the copied letters of ${\textbf w}$ . To see this, first observe that at step 3 (cases (i) or (ii)) we only match compatible items. Second, the matching policy is non-idling, in the sense that it cannot be the case that an item enters the system, finds a possible match but remains stored in the system. Indeed, suppose that in $\textbf b$ , two items of respective classes c and d such that $c{--} d$ enter the system in that order, are not matched upon the arrival of d, but are only matched later, respectively with an e-item and an f-item. We are then in one of the following two situations:

  • either the f-item enters the system after the e-item, in which case the word ${\textbf w}$ contains, in order, from left to right, the letters $\bar e$ , $\bar f$ , $\bar c$ , $\bar d$ , and then, according to step 3 of the above procedure, $\bar d$ would be investigated first, and find $\bar c$ on its left before finding $\bar f$ , so the d-item would be matched with the c-item, an absurdity;

  • or, the f-item enters the system between the d-item and the e-item, in which case the word ${\textbf w}$ contains, in order, from left to right, the letters $\bar e$ , $\bar f$ , $\bar d$ , $\bar c$ . In this case, according to step 3, the letter $\bar c$ would be investigated first, and find $\bar d$ on its left before finding $\bar e$ , so the c-item would be matched with the d-item, another absurdity.

Hence, the matching policy induced by the above procedure is non-idling. It remains to check that it is fcfm. For this, suppose that the word $\textbf b$ contains, in order, from left to right, the letters c, d, e, and that $c {\not\!\!--} d$ , $c{--} e$ , and $d{--} e$ . We show that it cannot be the case that the e-item is matched upon arrival with the d-item, and leaves the c-item unmatched. Indeed, if it were so then the c-item would be matched with an f-item, for some letter f appearing to the right of e in $\textbf b$ . Then, in the word ${\textbf w}$ we would see the letters $\overline{f}$ , $\overline{e}$ , $\overline{d}$ , $\overline{c}$ , in that order from left to right; but in that case, according to step 3 above, the letter $\overline{c}$ would be considered first, and would choose $\overline{e}$ (which appears first on its left) over $\overline{f}$ , meaning that the corresponding c-item would be matched with the e-item but not with the f-item, an absurdity. Hence the matching policy is fcfm.

In conclusion, the copied letters of ${\textbf w}$ are obtained after performing a perfect non-idling fcfm-matching, and $B_0=\emptyset$ leads to $B_{q+\sum_{\ell=0}^q k_\ell}={\textbf w}$ if the arrival scenario is given by $\textbf b$ . This concludes the proof.

We illustrate the construction of the arrival scenario in the above proof with the following example.

Example 3.2. Let G be the graph of Figure 3, and consider the word ${\textbf w}=1\overline{3}\,\overline{4}\,\overline{3}\,\overline{4}\,5\,\overline{3}\,\overline{3}\,3$ . In the above procedure, by writing with a dot the undetermined elements of $\textbf b$ , after step 2 we have that $\textbf b = 1\,.\,.\,.\,.\,5\,.\,.\,3$ . Then, at step 3 we first consider the first $\bar 3$ (from right to left) in ${\textbf w}$ and find as the first ‘matchable’ copied letter the first letter $\bar 4$ from right to left (we are thus in case (i)). Thus, after the match and the exchange, the word $\textbf b$ becomes $\textbf b = 1\,.\,.\,.\,3\,5\,.\,4\,3$ . Similarly, after considering the second $\bar 3$ from right to left, we perform a ‘match and exchange’ with the remaining $\bar 4$ , leading to $\textbf b = 1\,.\,3\,.\,3\,5\,4\,4\,3$ . For the remaining two $\bar 3$s in ${\textbf w}$ , we do not find a compatible copied letter on their left, so we consider (applying step 3 case (ii) twice) that the two 3-items wait in the system before the arrival of the 1-item, and match these two items, for instance, respectively with a 2-item and a 4-item arriving in the system at the corresponding places. We obtain $\textbf b = 3\,3\,1\,4\,3\,2\,3\,5\,4\,4\,3$ . In other words, an arrival scenario leading from $\emptyset$ to ${\textbf w}$ is thus given by 3,3,1,4,3,2,3,5,4,4,3.
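Conditions (8)-(10) of Lemma 3.1 are also easy to test mechanically. The sketch below (Python; not part of the paper; the graph in the usage lines is hypothetical and is not claimed to be the graph of Figure 3) checks whether a word over ${\textbf V}$ is admissible for the backwards chain.

```python
# Sketch (not from the paper): check conditions (8)-(10) of Lemma 3.1 for a word over V ∪ V̄,
# copied letters being written 'class~'.

def is_admissible_backwards(word, neighbors):
    if not word:
        return True                                     # the empty state is admissible
    cls = [int(x.rstrip('~')) for x in word]
    barred = [x.endswith('~') for x in word]
    if barred[0]:
        return False                                    # (8): the first letter must belong to V
    for k in range(len(word)):
        for j in range(len(word)):
            if not barred[k] and not barred[j] and cls[j] in neighbors[cls[k]]:
                return False                            # (9): no two compatible letters of V
            if k < j and not barred[k] and barred[j] and cls[j] in neighbors[cls[k]]:
                return False                            # (10): a V-letter is never compatible with
                                                        # the counterpart of a later copied letter
    return True

# Hypothetical graph with edges 1--2, 2--3, 3--4, 4--5, 3--5:
neighbors = {1: {2}, 2: {1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(is_admissible_backwards(['1', '3~', '4~', '3~', '1'], neighbors))  # True
print(is_admissible_backwards(['1', '2'], neighbors))                    # False: 1 -- 2 violates (9)
```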

As a consequence, we have the following lemma.

Lemma 3.2. The mapping ${\textbf w} \mapsto \vec{\overline{\textbf w}}$ is one-to-one from $\mathbb{B}$ into $\mathbb{F}$ .

Proof. From Lemma 3.1, it is sufficient to prove that a state ${\textbf w}$ belongs to $\mathbb{F}$ if and only if $\vec{\overline{{\textbf w}}}$ satisfies (8), (9), and (10). Fix a word ${\textbf w}\in\mathbb{F}$ . First, by the very definition of the forward chain, it must be the case that the last letter on the right of ${\textbf w}$ is a copied letter. Second, suppose that two letters w(j) and w(k), for $j<k$ , are two copied letters such that $\overline{w(j)} {--} \overline{w(k)}$ . This would mean that two items of respective classes $\overline{w(j)}$ and $\overline{w(k)}$ were already stored in the system before time n, but remained unmatched, an absurdity. Last, suppose that two letters w(j) and w(k), for $j<k$ , are such that $w(j)\in{\mathcal{V}}$ , $w(k)\in\overline{{\mathcal{V}}}$ , and $w(j){--}\overline{w(k)}$ . Then, the $\overline{w(k)}$ -item stored in the system before time n would see an arrival of a compatible item of class w(j) without getting matched to it, an absurdity. Thus, $\vec{\overline{{\textbf w}}}$ satisfies (8), (9), and (10), which completes the proof of necessity.

We now prove the sufficiency of the condition. For this, as in the proof of Lemma 3.1, we fix a word ${\textbf w} \in {\textbf V}^*$ such that $\vec{\overline{{\textbf w}}}$ satisfies (8), (9), and (10), and similarly to (11), we denote it as

\[{\textbf w} = {a(q,k_q)} \cdots {a(q,1)}\overline{a(q)} \cdots \overline{a(3)} {a(2,k_2)} \cdots {a(2,1)}\overline{a(2)}{a(1,k_1)} \cdots {a(1,1)} \overline{a(1)}.\]

It is then easy to construct an arrival scenario $\textbf c$ leading to the state ${\textbf w}$ starting from the empty state, i.e. $F_0=\emptyset$ . For this, it suffices to fix q elements of ${\mathcal{V}}$ , say $c(1), \ldots, c(q)$ , such that $c(\ell) {--} a(\ell)$ for all $\ell \in\left[\kern-0.15em\left[ 1,q \right]\kern-0.15em\right]$ , and to set the following element of ${\mathcal{V}}^*$ ,

\begin{align*}\textbf c &= a(q)\cdots a(2)a(1){a(q,k_q)} \cdots {a(q,1)}c(q)\cdots {c(3)} {a(2,k_2)}\cdots {a(2,1)}{c(2)}{a(1,k_1)}\cdots \\[3pt] &\quad \cdots {a(1,1)} {c(1)}.\end{align*}

Then, if $F_0=\emptyset$ and we set the successive classes of the incoming items as the successive letters of $\textbf c$ from left to right, i.e. $V_1=a(q)$ , $V_2=a(q-1)$ , …, $V_{q}=a(1)$ , and then $V_{q+1}=a(q,k_q)$ , …, $V_{2q+\sum_{\ell=1}^q k_\ell - 1} = a(1,1)$ , $V_{2q+\sum_{\ell=1}^q k_\ell } = c(1)$ , we obtain, by the definition of fcfm, that $F_q={\textbf w}$ , which concludes the proof.

Figure 3: Compatibility graph of Example 3.2.

We can now state the following connection between the dynamics of ${\left({B_n}\right)_{n\in\mathbb{N}}}$ and ${\left({F_n}\right)_{n\in\mathbb{N}}}$ .

Lemma 3.3. Let $\nu_B$ be the measure on ${\textbf V}^*$ defined by (7). Then, for any ${\textbf w},{\textbf w}' \in\mathbb{B}$ such that ${\mathbb{P}\left[{B_{n+1}={\textbf w}' \mid B_n={\textbf w}}\right]} >0$ ,

\begin{equation*} \nu_B({\textbf w}){\mathbb{P}\left[{B_{n+1}={\textbf w}' \mid B_n={\textbf w}}\right]} = \nu_B\left(\vec{\overline{{\textbf w}'}}\right){\mathbb{P}\left[{F_{n+1}=\vec{{\overline{\textbf w}}} \mid F_n=\vec{\overline{{\textbf w}'}}}\right]}.\end{equation*}

Proof. Fix ${\textbf w} \in \mathbb{B}$ (so that $\vec{\overline{{\textbf w}}} \in \mathbb{F}$ from Lemma 3.2). Whenever ${\textbf w}\ne \emptyset$ we set ${\textbf w}={\textbf w}(1)\cdots{\textbf w}(q)\in {\textbf V}^*$ , and recall that ${\textbf w}(1) \in {\mathcal{V}}$ in that case. There are five possible cases for the transition of ${\left({B_n}\right)_{n\in\mathbb{N}}}$ , which we address one by one.

  (1) Suppose that ${\textbf w}\ne \emptyset$ and ${\textbf w}'={\textbf w} a$ for some $a\in {\mathcal{V}}$ . Plainly, such a state is admissible if and only if $a \in {\mathcal{E}}\left({\mathcal{V}}({\textbf w})\right)^c$ . The backwards chain moves from ${\textbf w}$ to ${\textbf w} a$ at $n+1$ whenever $V_{n+1}=a$ , so we have that ${\mathbb{P}\left[{B_{n+1}={\textbf w} a \mid B_n={\textbf w}}\right]}=\mu(a)$ . On the other hand, $F_n=\vec{{\overline{{{\textbf w}} a}}}$ entails that the item entering at $n+1$ is matched with an item of class a entered up to time n. Therefore, we necessarily have that $F_{n+1}=\vec{\overline{\textbf w}}$ , in other words,

    \begin{align*}\nu_B\left(\vec{\overline{{{\textbf w}}a}}\right){\mathbb{P}\left[{F_{n+1}=\vec{\overline{\textbf w}} \mid F_n=\vec{\overline{{{\textbf w}}a}}}\right]}&=\nu_B\left(\vec{\overline{{{\textbf w}}a}}\right)\\ &=\nu_B\left(\vec{\overline{\textbf w}}\right)\mu(a)=\nu_B({\textbf w})\mu(a)\\ & =\nu_B({\textbf w}){\mathbb{P}\left[{B_{n+1}={{\textbf w}}a \mid B_n={\textbf w}}\right]}.\end{align*}
  (2) Suppose now that ${\textbf w}\ne \emptyset$ and ${\textbf w}'={\textbf w}(1)\cdots{\textbf w}(k-1)\,\overline{a} \,{\textbf w}(k+1)\cdots{\textbf w}(q)\overline{{\textbf w}(k)}$ . This means that ${\textbf w}(k)\in {\mathcal{V}}$ and that $V_{n+1}=a$ , for $a \in{\mathcal{E}}\left({\textbf w}(k)\right) \cap {\mathcal{E}}\left({\mathcal{V}}\left({\textbf w}(1)\cdots{\textbf w}(k-1)\right)\right)^c$ , so that, in fcfm, the item entered at $n+1$ is matched with the item of class $V_{i(n)+k-1}={\textbf w}(k)$ . Suppose that

    \[F_n=\vec{\overline{{\textbf w}'}}={\textbf w}(k)\overline{{\textbf w}(q)}\,\overline{{\textbf w}(q-1)}\cdots\overline{{\textbf w}(k+1)}\, a \,\overline{{\textbf w}(k-1)}\cdots\overline{{\textbf w}(1)}.\]
    Then, ${\textbf w}(k)$ is not adjacent to any of the elements of ${\mathcal{V}}\left(\overline{{\textbf w}(q)}\,\overline{{\textbf w}(q-1)}\cdots\overline{{\textbf w}(k+1)}\right)$ by Lemma 3.2. But ${\textbf w}(k) {--} a$ , so the item entered at $n+1$ of class $V_{n+1}={\textbf w}(k)$ is matched with the item entered at time $n+q-k+2$ , of class $V_{n+q-k+2}=a$ , and we have, with probability 1,
    \[F_{n+1}= \overline{{\textbf w}(q)}\,\overline{{\textbf w}(q-1)}\cdots\overline{{\textbf w}(k+1)}\, \overline{{\textbf w}(k)} \,\overline{{\textbf w}(k-1)}\cdots\overline{{\textbf w}(1)}=\vec{\overline{\textbf w}}.\]
    Therefore, in this case,
    \begin{align*}\nu_B\left(\vec{\overline{{\textbf w}'}}\right){\mathbb{P}\left[{F_{n+1}=\vec{\overline{\textbf w}} \mid F_n=\vec{\overline{{\textbf w}'}}}\right]} &=\nu_B\left(\vec{\overline{{\textbf w}'}}\right)\\ &=\nu_B\left(\vec{\overline{\textbf w}}\right)\mu(a)\\ &=\nu_B({\textbf w})\mu(a) =\nu_B({\textbf w}){\mathbb{P}\left[{B_{n+1}={\textbf w}' \mid B_n={\textbf w}}\right]}.\end{align*}
  (3) Now suppose that ${\textbf w}\ne \emptyset$ and ${\textbf w}'={\textbf w}(k){\textbf w}(k+1)\cdots{\textbf w}(q)\overline{{\textbf w}(1)}$ for some $k \in \left[\kern-0.15em\left[ 2,q \right]\kern-0.15em\right]$ . This means that $V_{n+1}$ belongs to ${\mathcal{E}}\left({\textbf w}(1)\right)$ , so the item entered at $n+1$ is matched with the oldest item in line, of class $V_{i(n)}$ . Then, ${\textbf w}(2),{\textbf w}(3),\ldots,{\textbf w}(k-1)$ all belong to $\overline{{\mathcal{V}}}$ , and so ${\textbf w}(k) \in {\mathcal{V}}$ and is the class of the oldest item in line after $V_{i(n)}$ , now becoming the new oldest one. Suppose that $F_n=\vec{\overline{{\textbf w}'}}={\textbf w}(1)\overline{{\textbf w}(q)}\,\overline{{\textbf w}(q-1)}\cdots\overline{{\textbf w}(k+1)}\,\overline{{\textbf w}(k)}$ . Again applying Lemma 3.2, we obtain that ${\textbf w}(1)$ is not adjacent to any of the elements of the set ${\mathcal{V}}\left(\overline{{\textbf w}(q)}\,\overline{{\textbf w}(q-1)}\cdots\overline{{\textbf w}(k)}\right)$ , so the state $\vec{\overline{{\textbf w}'}}$ clearly belongs to $\mathbb{F}$ . Similarly, again in view of Lemma 3.2, ${\textbf w}(1)$ is not adjacent to any of the elements $\overline{{\textbf w}(2)},\overline{{\textbf w}(3)},\ldots,\overline{{\textbf w}(k-1)}$ , which all are elements of ${\mathcal{V}}$ . So we obtain

    \[F_{n+1}=\overline{{\textbf w}(q)}\,\overline{{\textbf w}(q-1)}\cdots\overline{{\textbf w}(k+1)}\,\overline{{\textbf w}(k)}\,\overline{{\textbf w}(k-1)}\cdots\overline{{\textbf w}(2)}\,\overline{{\textbf w}(1)}=\vec{\overline{\textbf w}}\]
    if $V_{n+q-k+3},\ldots,V_{n+q}$ are respectively equal to $\overline{{\textbf w}}(k-1),\ldots,\overline{{\textbf w}}(2)$ , and $V_{n+q+1}$ belongs to ${\mathcal{E}}\left({\textbf w}(1)\right)$ . This occurs with probability $\mu\bigl(\overline{{\textbf w}(k-1)}\bigl)\cdots\mu\bigl(\overline{{\textbf w}(2)}\bigl)\mu\bigl({\mathcal{E}}\left({\textbf w}(1)\right)\bigl)$ . Gathering all the above, we obtain
    \begin{align*}\nu_B\left(\vec{\overline{{\textbf w}'}}\right){\mathbb{P}\left[{F_{n+1}=\vec{\overline{\textbf w}} \mid F_n=\vec{\overline{{\textbf w}'}}}\right]} &=\nu_B\left(\vec{\overline{{\textbf w}'}}\right)\mu\bigl(\overline{{\textbf w}(k-1)}\bigl)\cdots\mu\bigl(\overline{{\textbf w}(2)}\bigl)\mu\bigl({\mathcal{E}}\left({\textbf w}(1)\right)\bigl)\\ &=\nu_B\left(\vec{\overline{\textbf w}}\right)\mu\bigl({\mathcal{E}}\left({\textbf w}(1)\right)\bigl)\\ &=\nu_B({\textbf w})\mu\bigl({\mathcal{E}}\left({\textbf w}(1)\right)\bigl) =\nu_B({\textbf w}){\mathbb{P}\left[{B_{n+1}={\textbf w}' \mid B_n={\textbf w}}\right]}.\end{align*}
  (4) Suppose now that ${\textbf w}\ne \emptyset$ and ${\textbf w}'=\emptyset$ , which is possible only if ${\textbf w}(1)\in {\mathcal{V}}$ , $V_{n+1}$ belongs to ${\mathcal{E}}\left({\textbf w}(1)\right)$ , and, if $q \ge 2$ , ${\textbf w}(2),\ldots,{\textbf w}(q) \in \overline{\mathcal{V}}$ , which implies, again from Lemma 3.2, that ${\textbf w}(1) {\not\!\!--} \overline{{\textbf w}(j)}$ for any $j\in \left[\kern-0.15em\left[ 2,q \right]\kern-0.15em\right]$ . Thus, $F_n=\emptyset$ leads to the state $F_{n+1}=\vec{\overline{{\textbf w}}}=\overline{{\textbf w}(q)}\,\overline{{\textbf w}(q-1)}\cdots\overline{{\textbf w}(2)}\,\overline{{\textbf w}(1)}$ if and only if $V_{n+1}={\textbf w}(1)$ , and then $V_{n+2}=\overline{{\textbf w}(q)}$ , $V_{n+3}=\overline{{\textbf w}(q-1)}$ , …, $V_{n+q}=\overline{{\textbf w}(2)}$ , and $V_{n+q+1}\in{\mathcal{E}}\left({\textbf w}(1)\right)$ . This event occurs with probability $\mu({\textbf w}(1))\mu\left(\overline{{\textbf w}(q)}\right)\cdots\mu\left(\overline{{\textbf w}(2)}\right)\mu\left({\mathcal{E}}\left({\textbf w}(1)\right)\right)$ . So, we obtain

    \begin{align*}\nu_B\left(\emptyset\right){\mathbb{P}\left[{F_{n+1}=\vec{\overline{\textbf w}} \mid F_n=\emptyset}\right]} &=\mu\left({\textbf w}(1)\right)\mu\bigl(\overline{{\textbf w}(2)}\bigr)\mu\bigl(\overline{{\textbf w}(3)}\bigr)\cdots\mu\bigl(\overline{{\textbf w}(q)}\bigr)\mu\bigl({\mathcal{E}}\left({\textbf w}(1)\right)\bigr)\\[3pt] &=\nu_B\left(\vec{\overline{\textbf w}}\right)\mu\bigl({\mathcal{E}}\left({\textbf w}(1)\right)\bigr)\\[3pt] &=\nu_B({\textbf w})\mu\bigl({\mathcal{E}}\left({\textbf w}(1)\right)\bigr) =\nu_B({\textbf w}){\mathbb{P}\left[{B_{n+1}={\textbf w}' \mid B_n={\textbf w}}\right]}.\end{align*}
  (5) The only case that remains to be treated is when ${\textbf w}=\emptyset$ . Then, for any $a\in {\mathcal{V}}$ , we obtain $B_{n+1}=a$ provided that $V_{n+1}=a$ , which occurs with probability $\mu(a)$ . Then, $F_n=\overline{a}$ means that the item entered at time $n+1$ is matched with an item of class a that was entered up to time n. Then we necessarily have that $F_{n+1}=\emptyset$ , and thus

    \[\nu_B\left(\overline{a}\right){\mathbb{P}\left[{F_{n+1}=\emptyset \mid F_n=\overline{a}}\right]}=\nu_B\left(a\right)=\mu(a) =\nu_B(\emptyset){\mathbb{P}\left[{B_{n+1}=a \mid B_n=\emptyset}\right]}. \]
    This completes the proof.

We can now turn to the proof of Proposition 3.1.

Proof of Proposition 3.1. Suppose that $\mu$ belongs to the set ${NCOND}(G)$ defined by (2). From Lemma 3.1, we know that for any word ${\textbf w}$ in $\mathbb{B}$ , ${\mathcal{V}}({\textbf w})$ is an element of $\mathbb I(G)$ , i.e. the letters of ${\mathcal{V}}$ appearing in ${\textbf w}$ form an independent set of G. Also, the intermediate letters of ${\textbf w}$ in $\bar{{\mathcal{V}}}$ have counterparts in ${\mathcal{V}}$ that are not adjacent to any prior letter of ${\textbf w}$ in ${\mathcal{V}}$ . We can therefore partition the set $\mathbb{B}$ as follows. For any independent set ${\mathcal{I}}\,{:}\,{\raise-1.5pt{=}}\, \left\{i_1,\ldots,i_{|{\mathcal{I}}|}\right\}\in \mathbb I(G)$ and any permutation $\sigma$ in $\mathfrak S_{|{\mathcal{I}}|}$ , denote by $\mathbb{B}_{{\mathcal{I}},\sigma}$ the set of non-empty words $\textbf w\in \mathbb{B}$ whose letters in ${\mathcal{V}}$ are exactly the elements of ${\mathcal{I}}$ (i.e. ${\mathcal{V}}(\textbf w)={\mathcal{I}}$ ), and in which $i_{\sigma(1)}$ , $i_{\sigma(2)}$ , …, $i_{\sigma(|{\mathcal{I}}|)}$ are the letters of ${\mathcal{V}}(\textbf w)$ ranked in their order of first occurrence. In other words, in ${\textbf w}$ , reading from left to right, the letter $i_{\sigma(1)}$ appears first (possibly several times), the first letter in ${\mathcal{V}}(\textbf w)\setminus\left\{i_{\sigma(1)}\right\}$ is $i_{\sigma(2)}$ , the first letter in ${\mathcal{V}}(\textbf w)\setminus\left\{i_{\sigma(1)},i_{\sigma(2)}\right\}$ is $i_{\sigma(3)}$ , and so on. We then clearly have the following disjoint union:

(12) \begin{equation}\mathbb B = \{\emptyset\} \cup \bigcup_{{\mathcal{I}}\in\mathbb I(G)} \bigcup_{\sigma\in\mathfrak S_{|{\mathcal{I}}|}} \mathbb B_{{\mathcal{I}},\sigma}.\end{equation}

Now, fix ${\mathcal{I}}\in\mathbb I(G)$ and $\sigma\in\mathfrak S_{|{\mathcal{I}}|}$ . Then, for any word ${\textbf w}$ in $\mathbb{B}_{{\mathcal{I}},\sigma}$ and any $j\in\left[\kern-0.15em\left[ 1,|{\mathcal{I}}| \right]\kern-0.15em\right]$ , between the first occurrence of the letter $i_{\sigma(j)}$ (excluded) and the first occurrence of the letter $i_{\sigma(j+1)}$ (or to the right of the first occurrence of $i_{\sigma(|{\mathcal{I}}|)}$ if $j= |{\mathcal{I}}|$ ), there are, say, $\varphi(j)$ letters in ${\textbf w}$ (where $\varphi(j) \in\mathbb{N}$ ) that belong to the set $\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}$ . Denote these letters (again, read from left to right) by $k_{1},\ldots,k_{\varphi(j)}$ , and set $k_0\,{:}\,{\raise-1.5pt{=}}\, i_{\sigma(j)}$ . In between these letters, say between $k_l$ and $k_{l+1}$ for $\varphi(j)>0$ and $l\in\left[\kern-0.15em\left[ 0,\varphi(j)-1 \right]\kern-0.15em\right]$ (or to the right of $k_{\varphi(j)}$ for $l = \varphi(j)$ ), there can be in ${\textbf w}$ any number $\psi(l) \in\mathbb{N}$ of letters in $\bar{{\mathcal{V}}}$ whose counterparts in ${\mathcal{V}}$ are elements of ${\mathcal{E}}\left(\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}\right)^c$ . Denote by $\left\{h_1,\ldots,h_{r(j)}\right\}$ the elements of ${\mathcal{E}}\left(\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}\right)^c$ , whenever the latter is non-empty. Denoting also, for all $m=1,\ldots,r(j)$ , by $p_m$ the number of occurrences of the letter $\overline{h_m}$ in between $k_l$ and $k_{l+1}$ in the word ${\textbf w}$ , we obtain

\begin{align*}& \nu_B\left(\mathbb{B}_{{\mathcal{I}},\sigma}\right) \\[3pt] &=\prod_{j=1}^{|{\mathcal{I}}|} \sum_{\varphi(j)\in\mathbb{N}}\sum_{\substack{(k_1,\ldots,k_{\varphi(j)})\\ \in \{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}^{\varphi(j)}}}\prod_{l=0}^{\varphi(j)} \mu(k_l) \sum_{\psi(l)\in\mathbb{N}}\displaystyle\sum_{\substack{p_1,\ldots,p_{r(j)}\in\mathbb{N}:\\ p_1+\cdots+p_{r(j)} = \psi(l)}} \binom{\psi(l)}{p_1,\ldots,p_{r(j)}} \displaystyle\prod_{m=1}^{r(j)}\mu\left(h_m\right)^{p_m} \\ &=\prod_{j=1}^{|{\mathcal{I}}|} \sum_{\varphi(j)\in\mathbb{N}}\sum_{\substack{(k_1,\ldots,k_{\varphi(j)})\\ \in \{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}^{\varphi(j)}}}\prod_{l=0}^{\varphi(j)} \mu(k_l) \sum_{\psi(l)\in\mathbb{N}}\left(\sum_{m=1}^{r(j)} \mu(h_m)\right)^{\psi(l)} \\ &= \prod_{j=1}^{|{\mathcal{I}}|} \sum_{\varphi(j)\in\mathbb{N}}\sum_{\substack{(k_1,\ldots,k_{\varphi(j)})\\ \in \{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}^{\varphi(j)}}}\prod_{l=0}^{\varphi(j)} \frac{\mu(k_l) }{\mu\left({\mathcal{E}}\left(\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}\right)\right)},\end{align*}

where products and sums over empty sets are respectively understood as 1 and 0. Now, if $\varphi(j)>0$ , denoting, for all $n=1,\ldots,j$ , by $q_n$ the number of occurrences of the letter $i_{\sigma(n)}$ in the word $k_1\cdots k_{\varphi(j)}$ , the above becomes

\begin{align*}\nu_B\left(\mathbb{B}_{{\mathcal{I}},\sigma}\right)&=\prod_{j=1}^{|{\mathcal{I}}|}\frac{\mu\left(i_{\sigma(j)}\right) }{\mu\left({\mathcal{E}}\left(\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}\right)\right)}\displaystyle\sum_{\varphi(j)\in\mathbb{N}}\frac{ \displaystyle\sum_{\substack{q_1,\ldots,q_j\in\mathbb{N}:\\q_1+\cdots+q_j = \varphi(j)}} \binom{\varphi(j)}{q_1,\ldots,q_j} \displaystyle\prod_{n=1}^j\mu\left(i_{\sigma(n)}\right)^{q_n} }{\mu\left({\mathcal{E}}\left(\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}\right)\right)^{\varphi(j)}} \\ \nonumber\\ & = \prod_{j=1}^{|{\mathcal{I}}|}\frac{\mu\left(i_{\sigma(j)}\right) }{\mu\left({\mathcal{E}}\left(\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}\right)\right)}\sum_{\varphi(j)\in\mathbb{N}} \left(\frac{\mu\left(\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}\right) }{\mu\left({\mathcal{E}}\left(\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}\right)\right)}\right)^{\varphi(j)} \\ \nonumber\\ &= \prod_{j=1}^{|{\mathcal{I}}|}\frac{\mu\left(i_{\sigma(j)}\right)}{\mu\left({\mathcal{E}}\left(\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}\right)\right)}\frac{\mu\left({\mathcal{E}}\left(\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}\right)\right) }{\mu\left({\mathcal{E}}\left(\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}\right)\right)-\mu\left(\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}\right)} \\ \nonumber\\ &= \prod_{j=1}^{|{\mathcal{I}}|}\frac{\mu\left(i_{\sigma(j)}\right) }{\mu\left({\mathcal{E}}\left(\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}\right)\right)-\mu\left(\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}\right)}, \end{align*}

where the assumption that $\mu\in{NCOND}(G)$ is used in the third equality, as $\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}$ is an independent set of G for any j as above. In conclusion, from (12) we obtain

(13) \begin{equation}\nu_B(\mathbb{B})= 1 +\sum_{{\mathcal{I}}\in\mathbb I(G)} \sum_{\sigma\in\mathfrak S_{|{\mathcal{I}}|}} \prod_{j=1}^{|{\mathcal{I}}|}\frac{\mu\left(i_{\sigma(j)}\right) }{\mu\left({\mathcal{E}}\left(\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\}\right)\right)-\mu\left(\left\{i_{\sigma(1)},\ldots,i_{\sigma(j)}\right\}\right)},\end{equation}

and so $\nu_B$ is a finite measure on $\mathbb{B}$ .
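For concreteness, the right-hand side of (13) can be evaluated numerically by enumerating the independent sets of G together with the orderings of their elements. The following is a minimal Python sketch of such a computation on a toy instance; the graph, the measure, and all helper names (`E`, `independent`, etc.) are assumptions made for this illustration only, and the measure is chosen so that every denominator in (13) is positive.

```python
from itertools import combinations, permutations

# Illustrative toy instance (an assumption, not one of the paper's examples):
# a 'paw' graph, i.e. a triangle {1,2,3} plus a pendant node 4 attached to 3.
V = [1, 2, 3, 4]
edges = {(1, 2), (1, 3), (2, 3), (3, 4)}
adj = {i: {j for j in V if (i, j) in edges or (j, i) in edges} for i in V}
mu = {1: 0.3, 2: 0.3, 3: 0.3, 4: 0.1}          # arrival distribution on V

def E(A):
    """E(A): the classes compatible with at least one class of A."""
    return set().union(*(adj[i] for i in A))

def independent(A):
    return all(j not in adj[i] for i in A for j in A)

nu_B_total = 1.0                                # the term '1' accounts for the empty word
for r in range(1, len(V) + 1):
    for I in combinations(V, r):                # non-empty independent sets of G
        if not independent(I):
            continue
        for sigma in permutations(I):           # order of first occurrence of the letters
            term = 1.0
            for j in range(1, r + 1):
                prefix = set(sigma[:j])
                denom = sum(mu[v] for v in E(prefix)) - sum(mu[v] for v in prefix)
                term *= mu[sigma[j - 1]] / denom
            nu_B_total += term

print("nu_B(B) =", nu_B_total)
print("normalizing constant 1/nu_B(B) =", 1.0 / nu_B_total)
```

The last printed quantity is the constant that normalizes $\nu_B$ into a probability measure on $\mathbb{B}$ , as used in the proof of Theorem 3.1 below.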

Then, it suffices to apply Kelly’s lemma [Reference Kelly15, Section 1.7]: define, for any ${\textbf w},{\textbf w}'\in \mathbb{B}$ ,

(14) \begin{equation}P_{{\textbf w}',{\textbf w}}=\frac{{\mathbb{P}\left[{B_{n+1}={\textbf w}' \mid B_n={\textbf w}}\right]}\nu_B({\textbf w}) }{\nu_B({\textbf w}')}.\end{equation}

Then, $\nu_B$ is a stationary measure of ${\left({B_n}\right)_{n\in\mathbb{N}}}$ if P defines a transition operator on $\mathbb{B}$ . But this is a simple consequence of Lemmas 3.2 and 3.3, together with (6): for any ${\textbf w}' \in \mathbb{B}$ we have that

\begin{align*}\sum_{{\textbf w} \in \mathbb{B}} P_{{\textbf w}',{\textbf w}} & = \sum_{{\textbf w} \in\mathbb{B}} \frac{\nu_B\left(\vec{\overline{{\textbf w}'}}\right){\mathbb{P}\left[{F_{n+1}=\vec{{\overline{\textbf w}}} \mid F_n=\vec{\overline{{\textbf w}'}}}\right]} }{\nu_B({\textbf w}')}\\ &= \sum_{{\textbf w} \in \mathbb{B}} {\mathbb{P}\left[{F_{n+1}=\vec{{\overline{\textbf w}}} \mid F_n=\vec{\overline{{\textbf w}'}}}\right]} =1,\end{align*}

which concludes the proof.

Remark 3.1. As is easily seen, we can exchange the roles of $B_n,B_{n+1}$ and $F_n,F_{n+1}$ in (14), implying that ${\left({F_n}\right)_{n\in\mathbb{N}}}$ is the reversed Markov chain of ${\left({B_n}\right)_{n\in\mathbb{N}}}$ , on a sample space where arrivals are reversed in time and exchanged with their match. In particular, the forward chain ${\left({F_n}\right)_{n\in\mathbb{N}}}$ also admits $\nu_B$ as a stationary distribution on $\mathbb{F}$ .

We are now in position to prove our main result.

3.3. Proof of Theorem 3.1

We know from [Reference Mairesse and Moyal16, Proposition 2] that if $\mu$ is not an element of ${NCOND}(G)$ , then the chain ${\left({W_n}\right)_{n\in\mathbb{N}}}$ is transient or null recurrent. If we now assume that $\mu\in{NCOND}(G)$ , then we first observe that, in view of (13), the measure assigning mass $\alpha\nu_B({\textbf w})$ to each ${\textbf w}\in\mathbb{B}$ , for $\alpha$ in (4) and $\nu_B$ defined by (7), is a probability measure on $\mathbb{B}$ . Second, from Proposition 3.1, the auxiliary chain ${\left({B_n}\right)_{n\in\mathbb{N}}}$ is positive recurrent, and admits $\nu_B$ as a stationary measure. So, from (5), ${\left({W_n}\right)_{n\in\mathbb{N}}}$ is also positive recurrent. As it is also irreducible on $\mathbb W$ , it has a unique stationary probability. To verify that the latter is given by $\Pi_W$ defined by (3), it thus suffices to check that

\begin{equation*}\Pi_{W}(w)=\alpha\sum\limits_{{\textbf w}\in \mathbb{B}:\,{\textbf w}|_{{\mathcal{V}}}=w} \nu_B({\textbf w})\qquad\mbox{for any }w\in \mathbb W.\end{equation*}

Let $w=w(1)\cdots w(q) \in \mathbb W$ . From Lemma 3.1, any ${\textbf w} \in \mathbb{B}$ such that ${\textbf w}|_{{\mathcal{V}}}=w$ is of the form

\[{\textbf w} = a(1)\,\overline{a(1,1)}\,\overline{a(1,2)}\cdots\overline{a(1,k_1)}\,a(2)\,\overline{a(2,1)}\cdots\overline{a(2,k_2)}\,a(3)\cdots a(q)\,\overline{a(q,1)}\cdots\overline{a(q,k_q)},\]

where each of the elements $a(\ell, j)$ is such that $a(\ell, j) {\not\!\!--} a(i)$ for any $i \le \ell$ . We then obtain that

\begin{align*}& \alpha\sum\limits_{{\textbf w}\in \mathbb{B}:\,{\textbf w}|_{{\mathcal{V}}}=w} \nu_B({\textbf w}) \\ & = \alpha\prod\limits_{\ell =1}^q \left(\mu(a(\ell))\left(1+\sum_{k\in\mathbb{N}_+}\sum\limits_{(a(\ell,1),\ldots,a(\ell, k)) \in \left({\mathcal{E}}\left(\left\{a(1),\ldots,a(\ell)\right\}\right)^c\right)^k}\prod_{j=1}^{k} \mu\left(a(\ell, j)\right)\right)\right) \\ &=\alpha\prod\limits_{\ell =1}^q \left(\mu(a(\ell))\sum_{k\in\mathbb{N}}\left(\mu\biggl({\mathcal{E}}\Bigl(\left\{a(1),\ldots,a(\ell)\right\}\Bigr)^c\biggr)\right)^{k}\right) \\ &=\alpha\prod\limits_{\ell =1}^q \frac{\mu(a(\ell)) }{1-\mu\biggl({\mathcal{E}}\Bigl(\left\{a(1),\ldots,a(\ell)\right\}\Bigr)^c\biggr)}\\ &=\alpha\prod\limits_{\ell =1}^q \frac{\mu(a(\ell)) }{\mu\biggl({\mathcal{E}}\Bigl(\left\{a(1),\ldots,a(\ell)\right\}\Bigr)\biggr)}=\Pi_{W}(w),\end{align*}

which concludes the proof of Theorem 3.1.
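As a sanity check of the last computation, the product form can be evaluated by hand on a toy instance. Take, purely for illustration (this graph and measure are assumptions, not the paper's examples), the graph on ${\mathcal{V}}=\{1,2,3,4\}$ with edges $1 {--} 2$ , $1 {--} 3$ , $2 {--} 3$ , and $3 {--} 4$ , and $\mu(1)=\mu(2)=\mu(3)=0.3$ , $\mu(4)=0.1$ , a measure that can be checked to belong to ${NCOND}(G)$ . For the buffer content $w=14$ (the classes 1 and 4 are not adjacent, so $w\in\mathbb W$ ), the computation above gives

\[\Pi_{W}(14)=\alpha\,\frac{\mu(1)}{\mu\left({\mathcal{E}}\left(\{1\}\right)\right)}\,\frac{\mu(4)}{\mu\left({\mathcal{E}}\left(\{1,4\}\right)\right)}=\alpha\,\frac{0.3}{0.6}\cdot\frac{0.1}{0.6}=\frac{\alpha}{12},\]

where $\alpha$ is the normalizing constant of (4), computable from (13).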

3.4. Characteristics at equilibrium

We can easily deduce from Theorem 3.1 closed-form formulas for performance measures of the system in steady state. Denote by $W_\infty$ the stationary queue detail of the system, i.e. a random variable distributed following the stationary probability $\Pi_W$ .

First, for any class $i\in{\mathcal{V}}$ , the average number of items of class i in storage at equilibrium is given by ${\mathbb{E}\left[{\left|W_\infty\right|_i}\right]}= \sum_{k \in\mathbb{N}} k \Pi_W\left(\{w \in \mathbb W:\,|w|_i=k\}\right)$ for $\Pi_W$ in (3). The average total number of items in storage at equilibrium then reads

\[{\mathbb{E}\left[{|W_\infty|}\right]}=\sum_{i\in{\mathcal{V}}} {\mathbb{E}\left[{\left|W_\infty\right|_i}\right]}= \sum_{k \in\mathbb{N}} k \Pi_W\left(\{w \in \mathbb W:\,|w|=k\}\right)\!.\]

According to Little's law, for any $i\in{\mathcal{V}}$ , the mean waiting time before getting matched, for an item of class i entering the system in steady state, is given by

\[\frac{{\mathbb{E}\left[{\left|W_\infty\right|_i}\right]}}{\mu(i)} = \frac{1}{\mu(i)} \sum_{k \in\mathbb{N}} k \Pi_W\left(\{w \in \mathbb W:\,|w|_i=k\}\right)\!.\]

In particular, the probability that a class-i item does have to wait before getting matched in a stationary system reads ${\mathbb{P}\left[{|W_\infty|_{{\mathcal{E}}(i)}=0}\right]} = \Pi_W\left(\{w \in \mathbb W:\,|w|_{{\mathcal{E}}(i)}=0\}\right)$ for $\Pi_W$ in (3).
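All of the above quantities involve infinite sums over $\mathbb W$ , but they can also be estimated by simulating the FCFM dynamics directly. The following is a minimal Python sketch of such an estimation, on the same assumed toy graph and measure as above (an illustration, not the paper's example): it runs the matching model over a long horizon and estimates ${\mathbb{E}\left[{\left|W_\infty\right|_i}\right]}$ , ${\mathbb{E}\left[{|W_\infty|}\right]}$ , the mean waiting time of class-i items via Little's law, and ${\mathbb{P}\left[{|W_\infty|_{{\mathcal{E}}(i)}=0}\right]}$ .

```python
import random

random.seed(1)

# Illustrative toy instance (an assumption, not the paper's example).
V = [1, 2, 3, 4]
edges = {(1, 2), (1, 3), (2, 3), (3, 4)}
adj = {i: {j for j in V if (i, j) in edges or (j, i) in edges} for i in V}
mu = {1: 0.3, 2: 0.3, 3: 0.3, 4: 0.1}
classes, weights = zip(*sorted(mu.items()))

horizon = 200_000
buffer = []                            # classes of unmatched items, oldest first
sum_count = {i: 0.0 for i in V}        # running sums of |W_n|_i
sum_total = 0.0                        # running sum of |W_n|
sum_no_match = {i: 0.0 for i in V}     # running count of the events {|W_n|_{E(i)} = 0}

for _ in range(horizon):
    # record the buffer content seen by the next arriving item
    for i in V:
        sum_count[i] += buffer.count(i)
        sum_no_match[i] += not any(c in adj[i] for c in buffer)
    sum_total += len(buffer)

    v = random.choices(classes, weights)[0]   # class of the entering item
    # first-come, first-matched: match with the oldest compatible item in line, if any
    for k, c in enumerate(buffer):
        if c in adj[v]:
            del buffer[k]
            break
    else:
        buffer.append(v)

print("E[|W|] ~", sum_total / horizon)
for i in V:
    Ei = sum_count[i] / horizon
    print(f"class {i}: E[|W|_i] ~ {Ei:.4f}, mean wait (Little) ~ {Ei / mu[i]:.4f}, "
          f"P[|W|_E(i) = 0] ~ {sum_no_match[i] / horizon:.4f}")
```

The empirical averages can then be compared with the values obtained by (numerically truncated) summation of the product form (3).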

4. Perfect fcfm-matchings

There is an interesting connection between the stability of stochastic matching models and the construction of perfect matchings on graphs, in a sense that will be specified below. As we show in this section, the positive recurrence of the empty state for the GM model $(G,\mu,{FCFM})$ implies the existence of infinitely many perfect matchings on a sequence of labelled random graphs, constructed from G, as follows.

4.1. Perfect infinite fcfm-matchings

As is easily seen, the model $(G,\mu,{FCFM})$ generates a family of random graphs. For any $n\in \mathbb{N}$ , we consider the (initially empty) fcfm-matching $\textbf M^{0}_{n}\,{:}\,{\raise-1.5pt{=}}\, M^{FCFM}\left(V_{1}\cdots V_{n}\right)$ , i.e. the random graph in which the nodes are the items entered from 1 to n, and there is an edge between two nodes if and only if the corresponding items are matched in fcfm. The nodes of this random graph are labelled according to their classes, and (implicitly) to their arrival dates. However, for notational simplicity, we only keep the first labelling (i.e. the classes) and remove the second one, which is implicitly given by the order of nodes (from left to right) in Figures 4 and 5.

The definition of an fcfm-matching can then be extended to any time points $m<n$ in $\mathbb{N}$ such that $W_m=\emptyset$ , by setting $\textbf M^{m}_{n}\,{:}\,{\raise-1.5pt{=}}\, M^{FCFM}\left(V_{m+1}\cdots V_{n}\right)$ . In $\textbf M^{m}_{n}$ , all nodes are of degree 0 or 1. The finite matching $\textbf M^{m}_{n}$ is then said to be perfect if all of its nodes are of degree 1. It is then immediate that, for any m such that $W_m=\emptyset$ and any $n>m$ , $\textbf M^{m}_{n}$ is an induced subgraph of $\textbf M^{m}_{n+1}$ . Figure 4 shows an example, once again for the compatibility graph G of Figure 1.

Figure 4: Construction of the increasing sequence ${\left({\textbf M^0_n}\right)_{n\in\mathbb{N}}}$ , and the construction points, for the compatibility graph G of Figure 1 and the sample 134231.

Figure 5: Top: A perfect fcfm-matching on two construction points, say $\textbf M^{C_{n-2}}_{C_{n}}$ . Middle: The matching $\textbf M^{C_{n-2}}_{C_{n}}$ after completing the exchanges on the two perfectly matched blocks. Bottom: The resulting matching $\vec{\overline{\mbox{$\textbf{M}$}^{C_{n}}_{C_{n-2}}}}$ after time reversal is an fcfm-matching.
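To make the construction of $\textbf M^{0}_{n}$ concrete, here is a minimal Python sketch (the graph is the assumed toy graph of the previous sketches, and the arrival sample and the helper name `fcfm_matching` are likewise illustrative assumptions): each entering item is matched, in fcfm, with the oldest compatible unmatched item, and the matching is returned as a set of edges between arrival indices.

```python
# Illustrative toy instance (an assumption, not the graph of Figure 1).
V = [1, 2, 3, 4]
edges = {(1, 2), (1, 3), (2, 3), (3, 4)}
adj = {i: {j for j in V if (i, j) in edges or (j, i) in edges} for i in V}

def fcfm_matching(arrivals):
    """Build M^0_n for the arrival sequence V_1 ... V_n under FCFM.

    Returns the matched pairs (as pairs of 1-based arrival indices), together
    with the instants at which the buffer is empty just after an arrival.
    """
    unmatched = []              # pairs (arrival index, class), oldest first
    matching = []               # edges of M^0_n
    empty_times = []
    for n, v in enumerate(arrivals, start=1):
        for k, (m, c) in enumerate(unmatched):
            if c in adj[v]:                 # oldest compatible item in line
                matching.append((m, n))
                del unmatched[k]
                break
        else:
            unmatched.append((n, v))
        if not unmatched:
            empty_times.append(n)
    return matching, empty_times

# Hypothetical arrival sample, for illustration only.
print(fcfm_matching([2, 3, 4, 1, 3, 2]))
# ([(1, 2), (3, 5), (4, 6)], [2, 6]): three matched pairs; buffer empty just after times 2 and 6
```

The nodes of degree 0 in $\textbf M^{0}_{n}$ are the items left in `unmatched` at time n, and the matching is perfect exactly when that list is empty; the construction points can be read off from the recorded emptying times.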

In particular, from the increasing sequence ${\left({\textbf M^{0}_{n}}\right)_{n\in\mathbb{N}}}$ we can easily construct the limiting object $\textbf M^{0}_{\infty}$ , namely the infinite labelled random graph obtained by letting n go to infinity in the above. We then say that $\textbf M^{0}_{\infty}$ is perfect if all its nodes are of degree 1. We have the following result.

Proposition 4.1. If the graph G is non-bipartite and condition (2) is satisfied, then the infinite labelled random graph $\textbf M^{0}_{\infty}$ is a.s. perfect.

Proof. From Theorem 3.1, the chain ${\left({W^{\{\emptyset\}}_n}\right)_{n\in\mathbb{N}}}$ is positive recurrent, so the family of integers $\mathscr C\,{:}\,{\raise-1.5pt{=}}\, \left\{n \in \mathbb{N}_+:\, W^{\{\emptyset\}}_{n-1} = \emptyset\right\}$ is a.s. infinite. The elements of the latter set, denoted $1\,{:}\,{\raise-1.5pt{=}}\, C_0 < C_1 < C_2 < \cdots $ , are called construction points of the model. Then, we readily get

\[\textbf M^{0}_{\infty} = \lim_{k\to \infty} \textbf M^0_{C_k}=\lim_{k\to\infty} \bigcup_{j=1}^{k} \textbf M^{C_{j-1}}_{C_j} = \bigcup_{j=1}^{\infty} \textbf M^{C_{j-1}}_{C_j},\]

which is a union of finite perfect fcfm-matchings: since the buffer is empty at each construction point, all items arriving between two consecutive construction points are matched with one another. It follows that all the nodes of $\textbf M^{0}_{\infty}$ are of degree 1.

4.2. Perfect fcfm-matchings in reverse time

Perfect fcfm-matchings have an interesting time-reversibility property. First, observe that we can complete the ‘exchange’ mechanism introduced in Section 3.1 using construction points as follows. Consider two construction points $C_m<C_{n} \in\mathscr C$ . Then, in the perfect finite fcfm-matching $\textbf M^{C_m}_{C_{n}}$ , replace the indices of all nodes (i.e. the classes of the corresponding items) by the copies of the classes of their matches. Last, keep all edges of $\textbf M^{C_m}_{C_{n}}$ unchanged. This procedure is illustrated in Figure 5.

It can then be immediately observed that the resulting labelled random graph, which we denote by $\vec{\overline{\mbox{$\textbf{M}$}^{C_{n}}_{C_m}}}$ , is a perfect matching in the sense that all its nodes are a.s. of degree 1. We call it the reversed perfect matching between $C_n$ and $C_m$ . We have the following result.

Proposition 4.2. For all $n>0$ , the reversed perfect matching $\vec{\overline{\mbox{$\textbf{M}$}^{C_{n}}_0}}=\vec{\overline{\mbox{$\textbf{M}$}^{C_{n}}_{C_0}}}$ obtained in the above procedure is a perfect fcfm-matching.

Proof. Let the four nodes i, j, k, and $\ell$ be such that, in G, $i {--} k$ , $j {--} k$ , and $i {--} \ell$ , and suppose that, for $m\le n$ , after the exchange procedure over the matching $\textbf M^{C_{m-1}}_{C_m}$ , four copies, $\overline{i}$ , $\overline{j}$ , $\overline{k}$ , and $\overline{\ell}$ , are read in that order in reverse time, i.e. from right to left. Let us also assume that the fcfm rule in reverse time is violated on this quadruple. Then, the $\overline{k}$ -item is matched with the $\overline{j}$ -item while the $\overline{i}$ -item is still unmatched, and then the latter item is matched with the $\overline{\ell}$ -item. This occurs if and only if, in direct time, the four items of classes i, j, k, and $\ell$ arrive in that order, and the k-item chooses the j-item over the i-item for its match, and then the unmatched i-item is matched with the $\ell$ -item. This in turn violates the fcfm policy, according to which the k-item should have been matched with the i-item instead of the j-item. Thus, the reversed perfect matching $\vec{\overline{\mbox{$\textbf{M}$}^{C_{m}}_{C_{m-1}}}}$ is a perfect fcfm-matching. We finally observe that, by the very definition of the construction points,

\begin{equation*}\vec{\overline{\mbox{$\textbf{M}$}_0 ^ {C_{n}}}} = \bigcup_{m=1}^n \vec{\overline{\mbox{$\textbf{M}$}^{C_{m}}_{C_{m-1}}}}.\end{equation*}

This completes the proof.
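The exchange-and-reversal procedure of this subsection can also be checked mechanically on small samples. The following Python sketch (same assumed toy instance as in the previous sketches; the helper `fcfm_edges` and the sample block are illustrative assumptions) performs the exchange on a perfectly matched block, reverses time, and verifies that the unchanged edge set is exactly the fcfm-matching of the reversed, exchanged arrival sequence, as stated in Proposition 4.2.

```python
# Illustrative toy instance (an assumption, not the paper's example).
V = [1, 2, 3, 4]
edges = {(1, 2), (1, 3), (2, 3), (3, 4)}
adj = {i: {j for j in V if (i, j) in edges or (j, i) in edges} for i in V}

def fcfm_edges(arrivals):
    """FCFM matching of an arrival sequence, as a set of pairs of 1-based indices."""
    unmatched, matched = [], set()
    for n, v in enumerate(arrivals, start=1):
        for k, (m, c) in enumerate(unmatched):
            if c in adj[v]:
                matched.add(frozenset((m, n)))
                del unmatched[k]
                break
        else:
            unmatched.append((n, v))
    return matched

# A perfectly matched block (hypothetical sample whose buffer is empty at its end).
block = [2, 3, 4, 1, 3, 2]
N = len(block)
direct = fcfm_edges(block)

# Exchange: relabel every item by the class of its fcfm match (its 'copy').
match_of = {}
for e in direct:
    a, b = tuple(e)
    match_of[a], match_of[b] = b, a
exchanged = [block[match_of[n] - 1] for n in range(1, N + 1)]

# Time reversal: read the exchanged sequence from right to left and rebuild the
# fcfm-matching of that reversed sequence (copies are assumed here to inherit the
# adjacency of their underlying classes).
reversed_fcfm = fcfm_edges(exchanged[::-1])

# Proposition 4.2: the original edges, with indices reversed, coincide with the
# fcfm-matching of the reversed exchanged sequence.
relabelled = {frozenset(N + 1 - i for i in e) for e in direct}
print("reversed exchanged matching is fcfm:", relabelled == reversed_fcfm)   # True
```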

Acknowledgements

The authors would like to warmly thank Jocelyn Begeot, at Université de Lorraine, for his comments and his careful reading of this work.

References

Adan, I. and Weiss, G. (2012). Exact FCFS matching rates for two infinite multi-type sequences. Operat. Res. 60, 475–489.
Adan, I. and Weiss, G. (2014). A skill based parallel service system under FCFS-ALIS – steady state, overloads, and abandonments. Stoch. Sys. 4, 250–299.
Adan, I., Bušić, A., Mairesse, J. and Weiss, G. (2018). Reversibility and further properties of the FCFM bipartite matching model. Math. Operat. Res. 43, 598–621.
Adan, I., Kleiner, I., Righter, R. and Weiss, G. (2018). FCFS parallel service systems and matching models. Performance Evaluation 127–128, 253–272.
Ayesta, U., Bodas, T. and Verloop, I. (2018). On a unifying product form framework for redundancy models. Performance Evaluation 127–128, 93–119.
Bonald, T. and Comte, C. (2017). Balanced fair resource sharing in computer clusters. Performance Evaluation 116, 70–83.
Boxma, O., David, I., Perry, D. and Stadje, W. (2011). A new look at organ transplantation models and double matching queues. Prob. Eng. Inf. Sci. 25, 135–155.
Buke, B. and Chen, H. (2015). Stabilizing policies for probabilistic matching systems. Queueing Sys. Theor. Appl. 80, 35–69.
Buke, B. and Chen, H. (2017). Fluid and diffusion approximations of probabilistic matching systems. Queueing Sys. Theor. Appl. 86, 1–33.
Bušić, A. and Meyn, S. (2014). Approximate optimality with bounded regret in dynamic matching models. Preprint, arXiv:1411.1044 [math.PR].
Bušić, A., Gupta, V. and Mairesse, J. (2013). Stability of the bipartite matching model. Adv. Appl. Prob. 45, 351–378.
Caldentey, R., Kaplan, E. H. and Weiss, G. (2009). FCFS infinite bipartite matching of servers and customers. Adv. Appl. Prob. 41, 695–730.
Gardner, K. et al. (2016). Queueing with redundant requests: exact analysis. Queueing Sys. Theor. Appl. 83, 227–259.
Gurvich, I. and Ward, A. (2014). On the dynamic control of matching queues. Stoch. Sys. 4, 1–45.
Kelly, F. P. (1979). Reversibility and Stochastic Networks. Wiley, New York.
Mairesse, J. and Moyal, P. (2016). Stability of the stochastic matching model. J. Appl. Prob. 53, 1064–1077.
Moyal, P. and Perry, O. (2017). On the instability of matching queues. Ann. Appl. Prob. 27, 3385–3434.
Moyal, P., Bušić, A. and Mairesse, J. (2018). Loynes construction for the extended bipartite matching. Preprint, arXiv:1803.02788 [math.PR].
Nazari, M. and Stolyar, A. (2016). Reward maximization in general dynamic matching systems. Preprint, arXiv:1608.01646 [math.PR].
Rahmé, Y. and Moyal, P. (2019). A stochastic matching model on hypergraphs. Preprint, arXiv:1907.12711 [math.PR].
Talreja, R. and Whitt, W. (2008). Fluid models for overloaded multi-class many-service queueing systems with FCFS routing. Manag. Sci. 54, 1513–1527.