
An asynchronous inertial algorithm for solving convex feasibility problems with strict pseudo-contractions in Hilbert spaces

Published online by Cambridge University Press:  18 February 2022

A.U. Bello
Affiliation:
African University of Science and Technology, Abuja, Nigeria (uabdulmalik@aust.edu.ng; mon0002@auburn.edu) Federal University, Dutsin-Ma, Katsina State, Nigeria
M.O. Nnakwe
Affiliation:
African University of Science and Technology, Abuja, Nigeria (uabdulmalik@aust.edu.ng; mon0002@auburn.edu) Department of Mathematics and Statistics, Auburn University, Alabama, AL36849, USA

Abstract

Finding a common point in a finite intersection of sets, say, $C=\cap _{i=1}^{n} F(T_i)$, where each $T_i$ is a non-expansive-type mapping, is a central task in mathematics, as it cuts across different areas of application, such as road design and medical image reconstruction. There are many algorithms for approximating solutions of such problems. Of particular interest in the implementation of these algorithms are cost and speed, owing to the large volume of computation performed at each step of the iterative process. One of the most efficient methods for optimizing both computation time and implementation cost is the asynchronous-parallel algorithm method. In this paper, we prove a weak convergence theorem for the asynchronous sequential inertial (ASI) algorithm (introduced by Heaton and Censor in [H. Heaton and Y. Censor, Asynchronous sequential inertial iterations for common fixed points problems with an application to linear systems, J. Glob. Optim. 74 (2019), 95–119.]) for strictly pseudo-contractive mappings in Hilbert spaces. Under additional mild conditions, we also obtain a strong convergence theorem. Finally, we apply the ASI algorithm to solving convex minimization problems and Hammerstein integral equations.

Type
Research Article
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press on Behalf of The Edinburgh Mathematical Society

1. Introduction

The need for fast computational methods in different areas of application, such as weather prediction, fluid dynamics, image and signal processing, optimization problems, large-scale interconnected systems, etc., has always been of great interest (see, e.g. [Reference Bertsekas and Tsitsiklis12]). This is simply due to the large numerical computations that are involved. Most of the problems that arise in these areas of application can be decomposed in such a way that the computations can be handled simultaneously by more powerful processors. This has driven continued research on the development of parallel algorithms and highly efficient computing machines with multiple processors, see, for example, [Reference Geer29].

The convex feasibility problem (CFP): find $x\in C=\cap _{i=1}^{n} F(T_i)$, where $T_i:H\to H$, for each $i$, is an operator defined on a real Hilbert space $H$, is a central task in mathematics. Such problems arise in different areas of application, such as road design [Reference Bauschke and Koch10], medical image reconstruction [Reference Elsner, Koltracht and Neumann28], signal processing, optimization problems and radiation therapy [Reference Censor, Elfving, Kopf and Bortfeld20], to mention but a few. There are many algorithms in the literature for approximating solutions of CFPs. Among the most commonly used are the projection algorithms, which employ projections onto the underlying sets, see, e.g. [Reference Bauschke and Borwein8, Reference Bauschke and Combettes9, Reference Cegielski14, Reference Cegielski, Reich and Zalas15, Reference Censor and Cegielski17, Reference Censor and Zenios18, Reference Censor, Chen, Combettes, Davidi and Herman21, Reference Combettes24, Reference Combettes25, Reference Herman32, Reference Masad and Reich40] and the references therein. The Krasnoselskii–Mann algorithm is often used when the underlying set is a finite intersection of fixed point sets of some non-expansive-type mappings, see, e.g. [Reference Browder and Petryshyn13, Reference Halpern30, Reference Kim and Xu33, Reference Lions34, Reference Mann37, Reference Marino and Xu38, Reference Reich42, Reference Reich43]. In an attempt to speed up the computation of algorithms, several authors have developed parallel algorithms for CFPs, see, e.g. [Reference Aharoni and Censor2, Reference Bauschke, Combettes and Kruk11, Reference Censor16, Reference Censor, Gordon and Gordon19]. A parallel algorithm, which can be run on a multiprocessor machine, is, without doubt, better than a non-parallel algorithm running on a machine with a single processor, simply because, on multiprocessor machines, the task can be distributed among the processors.
However, in a case where the outputs of the processors need to be synchronized before computing the next iterate, the master processor has to wait for the slowest processor to provide its output. A typical case of this is when a convex combination of the outputs from all processors is to be taken before moving to the next step. In essence, in synchronous parallel iterative algorithms, some of the processors will be idle for some time at each step. Moreover, if there is a failure of transmission from one of the processors, then the master processor will have to resend the last output for recomputation, thereby delaying the other processors. One way of avoiding the problems associated with the synchronous method is via asynchronization. In asynchronous-parallel iterative algorithms, there is no need to wait for other processors in order to compute the next iterate. This method allows flexibility in such a way that a processor can use out-of-date information to compute the next output, see e.g., [Reference Zhimin, Yangyang, Ming and Wotao47]. The general method of approximating solutions of CFPs via synchronous parallel iterative algorithms can be put in the following form:

(1.1)\begin{equation} x^{n+1}=F_n(x^{n}), \quad x_0\in H, \ n\geq 1, \end{equation}

where $F_n$ is a sequence of operators. However, the asynchronous version of (1.1) takes, in general, the following form:

(1.2)\begin{equation} x^{n+1}=F_n(\tilde{x}^{n}), \quad x_0\in H, \ n\geq 1, \end{equation}

where $\tilde {x}^{n}$ is either $x^{n}$ or $x^{n-j}$ for some $j\in \mathbb {N}$, where $j$ is the delay amount of some processor. To fix ideas, suppose we have $p$ processors and we set

\[ J^{n}=(j_1^{n}, j_2^{n}, j_3^{n}, \ldots, j_p^{n})\in {\mathbb{N}}^{p} \]

to be the delay vector at each iteration step. Then, the $i^{th}$ component of the vector $J^{n}$ gives the delay amount of the $i^{th}$ processor. Moreover, suppose $\{i_n\}_{n\geq 1}\subset \mathbb {N}$, where $1\leq i_n\leq p$, is an index sequence that gives information on whose output will be used to compute $x^{n+1}$. Then $\tilde {x}^{n}$ is the last iterate sent to the $i_n^{th}$ processor, i.e., $\tilde {x}^{n}=x^{n-j_{i_n}^{n}}$. It is, however, worth mentioning that, while the asynchronous-parallel iterative algorithms have advantages over synchronous parallel iterative algorithms, they also have their own disadvantages. For instance, the asynchronous-parallel iterative algorithms are not easy to handle when it comes to convergence analysis and implementation compared to synchronous parallel iterative algorithms.
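To make this bookkeeping concrete, the following toy Python sketch (ours, not from any cited algorithm; all constants are illustrative) simulates how a delayed iterate $\tilde{x}^{n}=x^{n-j_{i_n}^{n}}$ is read back from the stored history; the actual update $F_n$ is replaced by a placeholder halving map.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 3             # number of processors (illustrative)
rho = 2           # upper bound on every delay component
history = [1.0]   # history[n] stores the iterate x^n (a scalar, for simplicity)

for n in range(20):
    # delay vector J^n = (j_1^n, ..., j_p^n), each component at most rho
    J = rng.integers(0, rho + 1, size=p)
    i_n = rng.integers(0, p)       # processor whose output is used at step n
    j = min(int(J[i_n]), n)        # cannot look back before x^0
    x_tilde = history[n - j]       # out-of-date iterate x~^n = x^{n - j_{i_n}^n}
    # placeholder update F_n: a real solver would apply T_{i_n} here
    history.append(0.5 * x_tilde)

print(history[-1])
```

Even with stale reads, each new iterate here is a halving of some iterate at most $\rho$ steps old, so the history still decays geometrically.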

The asynchronous-parallel iterative algorithm was first introduced by Chazan and Miranker [Reference Chazan and Miranker23] in 1969 to solve systems of linear equations. Since then, many authors have successfully applied the method to solve problems in different areas, such as optimization [Reference Liu and Wright35, Reference Liu, Wright, Ré, Bittorf and Sridhar36, Reference Tai and Tseng44, Reference Zhang and Kwok46], nonlinear problems [Reference Bahi, Miellou and Rhofir4, Reference Baudet7], differential equations [Reference Aharoni and Barak1, Reference Amitai, Averbuch, Israeli and Itzikowitz3, Reference Chau, Spiteri, Guivarch and Boisson22, Reference Donzis and Aditya27], to mention but a few. Baudet [Reference Baudet7], in 1978, used a totally asynchronous parallel iterative algorithm (in which an arbitrary amount of delay is allowed, without any bound) for $p$-contractions (see, e.g. [Reference Zhimin, Yangyang, Ming and Wotao47] for the definition of $p$-contractions). In [Reference Tseng, Bertsekas and Tsitsiklis45], Tseng et al. considered a partially asynchronous parallel iterative algorithm (where the delay vectors are bounded from above by some non-negative number, say, $\rho$) for quasi-non-expansive mappings (see preliminaries below for the definition). Recently, Zhimin et al. introduced the ARock algorithm for the approximation of fixed points of non-expansive mappings in Hilbert spaces. As opposed to the methods in [Reference Baudet7] and [Reference Tseng, Bertsekas and Tsitsiklis45], which assign coordinates in a deterministic manner, the ARock method is stochastic. More precisely, the ARock algorithm is the following:

Algorithm 1: ARock: a framework for async-parallel coordinate updates

where $\mathcal {H}=H_1\times H_2\times \cdots \times H_m$ and $S_{i_k}{x}=(0,\,\ldots,\,0,\,(Sx)_{i_k},\,0,\,\ldots,\,0)$.

Theorem 1.1 Zhimin et al. [ Reference Zhimin, Yangyang, Ming and Wotao47]. Let $T:\mathcal {H}\to \mathcal {H}$ be a non-expansive operator that has a fixed point. Let $(x^{k})_{k\geq 0}$ be the sequence generated by Algorithm $1$ with properly bounded step sizes $\eta _k$. Then, with probability one, $(x^{k})_{k\geq 0}$ converges weakly to a fixed point of $T$. This convergence becomes strong if $\mathcal {H}$ has a finite dimension. In addition, if $T$ is demicompact, then with probability one, $(x^{k})_{k\geq 0}$ converges strongly to a fixed point of $T$. Furthermore, if $S\equiv I-T$ is quasi-strongly monotone, then, T has a unique fixed point $x^{*}$, $(x^{k})_{k\geq 0}$ converges strongly to $x^{*}$ with probability one, and $\mathbb {E}\|x^{k}-x^{*}\|^{2}$ converges to $0$ at a linear rate.

More recently, Heaton and Censor [Reference Heaton and Censor31] introduced the asynchronous sequential inertial (ASI) algorithm for non-expansive mappings and proved a weak convergence result via the monotonicity of a sequence that combines the classical error with the error introduced by using out-of-date iterates. Consider a collection of $m$ non-expansive operators $\{T_i\}_{i=1}^{m}$ on $H$ with a common fixed point, and consider the CFP with $C=\cap _{i=1}^{m}F(T_i)$. For each $i\in \{1,\,2,\,3,\,\ldots,\,m\}$, set $S_i=I-T_i$; the ASI algorithm then reads as follows:

Algorithm 2: Asynchronous Inertial Algorithm

They proved the following theorem:

Theorem 1.2 Heaton and Censor [Reference Heaton and Censor31]

Let $\{x^{k}\}_{k\in \mathbb {N}}$ be a sequence generated by the ASI algorithm and suppose Assumption $1$ holds. If there is an $\epsilon >0$ such that

\[ 0<\epsilon\leq \lambda_k\leq \frac{1}{2\tau+1+\epsilon}, \ \ k\in \mathbb{N}, \]

then the sequence $\{x^{k}\}_{k\in \mathbb {N}}$ converges weakly to a common fixed point $x^{*}$ of the family $\{T_i\}_{i=1}^{m},$ i.e.,

\[ x^{k}\rightharpoonup x^{*}\in C=\cap_{i=1}^{m}F(T_i). \]

In this paper, we prove weak and strong convergence theorems (the latter under an additional mild condition) for the class of strict pseudo-contractions, which properly contains the class of non-expansive operators considered in [Reference Heaton and Censor31]. More precisely, we prove the following theorem:

Theorem Let $T_i:H\longrightarrow H,\, \ i=1,\,2,\,3,\,\ldots,\,m$ be $k_i$-strictly pseudo-contractive mappings. Let $\{x^{n}\}$ be the sequence defined by $x^{0}\in H,$

\[ x^{n+1}= \begin{cases} x^{n}, & n\leq \sup_{n\in \mathbb{N}}\|d^{n}\|_{\infty},\\ (1-\alpha_n)x^{n}+\alpha_n(T_{i_n}\tilde{x}^{n}+(x^{n}-\tilde{x}^{n})), & \text{otherwise}. \end{cases} \]

Suppose $\sup _{n\in \mathbb {N}}\|d^{n}\|_{\infty }\leq \rho$ for some $\rho \geq 1$ and that

\[ 0<\epsilon_0\leq \alpha_n\leq \displaystyle\frac{1-k}{2\rho+1+\epsilon_0}<1-k, \quad \forall \ n\in \mathbb{N}, \]

for some $\epsilon _0>0,$ where $k=\displaystyle \max _{1\leq i\leq m}k_i$. Then, the sequence $\{x^{n}\}$ converges weakly to some point $x^{*}\in C=\cap _{i=1}^{m}F(T_i)\neq \emptyset$.

This class of operators allows some flexibility: when data come with noise, the operator involved may fail to be non-expansive while remaining strictly pseudo-contractive. Furthermore, we present applications of the ASI algorithm to solving convex minimization problems and Hammerstein integral equations.
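To illustrate the theorem numerically, the following Python sketch (a hypothetical toy instance of ours, not from the paper) runs the equivalent iteration $x^{n+1}=x^{n}-\alpha_n A_{i_n}(\tilde{x}^{n})$ with $A_i=I-T_i$, for the maps $T_ix=-c_ix$, which are $\frac{c_i-1}{c_i+1}$-strictly pseudo-contractive but not non-expansive for $c_i>1$; their common fixed point is $0$, and the delays and step size are chosen to respect the stated bounds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy instance: T_i x = -c_i x is k_i-strictly pseudo-contractive
# with k_i = (c_i - 1)/(c_i + 1), and is not non-expansive when c_i > 1.
cs = [5 / 4, 3 / 2]
k = max((c - 1) / (c + 1) for c in cs)        # here k = 1/5

rho, eps0 = 2, 0.01                           # delay bound and margin
alpha = 0.9 * (1 - k) / (2 * rho + 1 + eps0)  # step size strictly inside the bound

x = np.array([1.0, -2.0, 0.5])
history = [x.copy()]

for n in range(500):
    i_n = n % len(cs)                         # cyclic control sequence {i_n}
    j = min(int(rng.integers(0, rho + 1)), n) # bounded delay of processor i_n
    x_tilde = history[n - j]                  # out-of-date iterate
    x = x - alpha * (1 + cs[i_n]) * x_tilde   # A_i = I - T_i gives A_i x = (1 + c_i) x
    history.append(x.copy())

print(np.linalg.norm(x))                      # distance to the common fixed point 0
```

Despite the stale reads, the iterates approach the common fixed point, as the theorem predicts.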

2. Preliminaries

In this section, we give some definitions and lemmas that will be useful in the proof of our main results. In the sequel, we assume that $H$ is a real Hilbert space.

Definition 2.1 Let $C$ be a non-empty subset of $H$. A map $T:C\longrightarrow C$ is called

  1. (i) Non-expansive if $\|Tx-Ty\|\leq \|x-y\|$ for all $x,\,y\in C.$

  2. (ii) Quasi-non-expansive if $F(T)\neq \emptyset$ and $\|Tx-y\|\leq \|x-y\|$ for all $x\in C$ and $y\in F(T),$ where

    \[ F(T)=\{y\in C:Ty=y\}. \]
  3. (iii) $k$-strictly pseudo-contractive if there exists $k\in [0,\,1)$ such that

    \[ \|Tx-Ty\|^{2}\leq \|x-y\|^{2}+k\|(I-T)x-(I-T)y\|^{2} \quad \text{for all } x,y\in C. \]

Remark 1 It is well known that the class of $k$-strictly pseudo-contractive maps properly contains the class of non-expansive maps. For example, the map $T:l_2\longrightarrow l_2$ defined by $Tx={-}{5}/{4}x$ is not non-expansive. However, a simple calculation shows that

\[ \|Tx-Ty\|^{2}\leq \|x-y\|^{2}+{1}/{9}\|(I-T)x-(I-T)y\|^{2} \quad \text{for all } x,y\in l_2, \]

i.e., $T$ is ${1}/{9}$-strictly pseudo-contractive.
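As a quick numerical sanity check of this remark (ours, not the paper's; we work in $\mathbb{R}^5$ as a stand-in for $l_2$), the following sketch evaluates both sides of the inequality on random pairs; for this linear map the inequality in fact holds with equality.

```python
import numpy as np

rng = np.random.default_rng(2)

def T(x):
    return -1.25 * x    # the map of Remark 1 restricted to R^5; |Tx| = 1.25|x|

ratios = []
for _ in range(100):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    lhs = np.linalg.norm(T(x) - T(y)) ** 2
    rhs = (np.linalg.norm(x - y) ** 2
           + (1 / 9) * np.linalg.norm((x - T(x)) - (y - T(y))) ** 2)
    ratios.append(lhs / rhs)

print(max(ratios))      # the inequality holds; for this linear map it is tight
```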

Definition 2.2 Let $C$ be a non-empty subset of $H$ and $T:C\longrightarrow C$ be a map. For a fixed $\alpha \in (0,\,1)$, the map $T_{\alpha }$ defined by

(2.1)\begin{equation} T_{\alpha}x=(1-\alpha)x+\alpha Tx, \quad \forall \ x\in C \end{equation}

is called $\alpha$-relaxation of the map $T$.

It can easily be seen that $F(T)\neq \emptyset$ if and only if $F(T_{\alpha })\neq \emptyset$ and that $F(T)=F(T_{\alpha })$. If $T$ is non-expansive, then $T_{\alpha }$ is also non-expansive, and in this case, $T_{\alpha }$ is called the $\alpha$-averaged map of $T$ (see, e.g. [Reference Baillon, Bruck and Reich6]).

Lemma 2.3 Let $H$ be a real Hilbert space. Then the following identities hold for all $x,\,y\in H$ and $\alpha \in \mathbb {R}$:

  1. (i) $\| \alpha x+(1-\alpha )y\|^{2}= \alpha \|x\|^{2}+(1-\alpha )\|y\|^{2}-\alpha (1-\alpha )\|x-y\|^{2}$.

  2. (ii) $\|x-y\|^{2}=\|x\|^{2}-2\langle x,\,y\rangle +\|y\|^{2}.$

Lemma 2.4 See, e.g. [Reference Marino and Xu38], proposition 2.1

Let $C$ be a non-empty closed convex subset of $H$ and $T:C\longrightarrow C$ be a $k$-strictly pseudo-contractive map. Then,

  1. (i) $T$ is Lipschitz with Lipschitz constant $\frac {1+k}{1-k},$ i.e.,

    \[ \|Tx-Ty\|\leq \frac{1+k}{1-k} \|x-y\| \quad \text{for all } x,y\in C. \]
  2. (ii) The map $(I-T)$ is demiclosed at $0,$ i.e., if $x_n\rightharpoonup x$ and $(I-T)x_n\rightarrow 0$ then $Tx=x$.

Lemma 2.5 See, e.g. [Reference Marino, Bruno and Erdal39], Lemma 2.6

Let $C$ be a non-empty closed convex subset of $H$ and $T:C\longrightarrow C$ be a $k$-strictly pseudo-contractive map such that $F(T)\neq \emptyset$. Then, the following inequality holds:

(2.2)\begin{equation} (1-k)\|Tx-x\|^{2}\leq 2\langle x-p,x-Tx\rangle \quad \text{for all } p\in F(T) \ \text{and} \ x\in C. \end{equation}

Lemma 2.6 Let $T_i:H\longrightarrow H,$ $i=1,\,2,\,\ldots,\, m$ be $k_i$-strictly pseudo-contractive maps such that $C=\cap _{i=1}^{m}F(T_i)\neq \emptyset$. Let $\{x_n\}_{n\in \mathbb {N}}$ be a sequence in $H$ such that $\|x_n-x^{*}\|$ converges for each $x^{*}\in C$ and $\|T_ix_n-x_n\|\to 0$ for all $i=1,\,2,\,\ldots,\,m$. Then the sequence $\{x_n\}_{n\in \mathbb {N}}$ converges weakly to a point in $C$.

Proof. We first observe that if $\{x_{n_k}\}_{k\in \mathbb {N}}$ is any subsequence of $\{x_n\}_{n\in \mathbb {N}}$ that converges weakly to some point, say $x^{*}$, then, by lemma 2.4 (ii) $x^{*}\in C$. We now show that $\{x_n\}_{n\in \mathbb {N}}$ has only one weak cluster point in $C$ from which the result will then follow.

Let $\{x_{n_k}\}_{k\in \mathbb {N}}$ and $\{x_{n_j}\}_{j\in \mathbb {N}}$ be two subsequences of $\{x_n\}_{n\in \mathbb {N}}$ such that $x_{n_k}\rightharpoonup p$ and $x_{n_j}\rightharpoonup q$, where $p,\,q\in C$ (note that this is possible since the sequence $\{x_n\}_{n\in \mathbb {N}}$ is bounded). We show that $p=q.$ We have that

\begin{align*} 2\langle x_n,p-q\rangle & = 2\langle x_n, p\rangle-2\langle x_n, q\rangle\\ & = \left(\|x_n\|^{2}+\|p\|^{2}-\|x_n-p\|^{2}\right)-\left(\|x_n\|^{2}+\|q\|^{2}-\|x_n-q\|^{2}\right)\\ & = \left(\|x_n-q\|^{2}-\|q\|^{2}\right)-\left(\|x_n-p\|^{2}-\|p\|^{2}\right). \end{align*}

From the assumption that $\|x_n-x^{*}\|$ converges for each $x^{*}\in C$, we see that $\lim _{n\to \infty }\langle x_n,\,p-q\rangle$ exists. It follows that

\begin{align*} \|p-q\|^{2} & = \langle p,p-q\rangle-\langle q,p-q\rangle\\ & = \lim_{k\to \infty}\langle x_{n_k},p-q\rangle-\lim_{j\to \infty}\langle x_{n_j},p-q\rangle=0. \end{align*}

Hence, $p=q$. The proof is complete.

Lemma 2.7 Let $C$ be a non-empty closed and convex subset of $H$ and $T:C\longrightarrow C$ be a $k$-strictly pseudo-contractive map. Then for $\alpha \in (0,\,1-k),$ $T_{\alpha }$ is a non-expansive map.

Proof. Using lemma 2.3 (i), Equation (2.1) and the fact that $T$ is $k$-strictly pseudo-contractive, we have

\begin{align*} \|T_{\alpha}x-T_{\alpha}y\|^{2} & = \|(1-\alpha)(x-y)+\alpha (Tx-Ty)\|^{2}\\ & =(1-\alpha)\|(x-y)\|^{2}+\alpha \|(Tx-Ty)\|^{2}-\alpha(1-\alpha)\|(x-Tx)-(y-Ty)\|^{2}\\ & \leq (1-\alpha)\|(x-y)\|^{2}+\alpha (\|(x-y)\|^{2}+k\|(x-Tx)-(y-Ty)\|^{2})\\ & -\alpha(1-\alpha)\|(x-Tx)-(y-Ty)\|^{2}\\ & = \|(x-y)\|^{2}-\alpha(1-\alpha-k)\|(x-Tx)-(y-Ty)\|^{2}\\ & \leq \|(x-y)\|^{2}. \end{align*}

The result follows from the fact that $\alpha \in (0,\,1-k)$.
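Lemma 2.7 can also be checked numerically on the map of Remark 1 (our check, with hypothetical sample points): for $Tx=-\frac{5}{4}x$ one has $T_{\alpha}x=(1-\frac{9}{4}\alpha)x$, so the Lipschitz constant of $T_{\alpha}$ is $|1-\frac{9}{4}\alpha|$, which stays below $1$ precisely on $(0,\,1-k)$ with $k=\frac{1}{9}$.

```python
import numpy as np

k = 1 / 9    # T x = -1.25 x from Remark 1 is k-strictly pseudo-contractive

def lipschitz_constant(a):
    # T_a x = (1 - a) x + a * T x = (1 - 2.25 a) x, so the constant is |1 - 2.25 a|
    return abs(1 - 2.25 * a)

inside = [lipschitz_constant(a) for a in np.linspace(0.01, (1 - k) - 0.01, 50)]
outside = lipschitz_constant(1.0)    # a value outside (0, 1 - k)

print(max(inside), outside)
```

Every sampled $\alpha$ inside $(0,\,1-k)$ gives a non-expansive relaxation, while $\alpha=1$ does not, matching the lemma.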

3. Main results

We now state and prove our main theorem.

Theorem 3.1 Let $T_i:H\longrightarrow H,\, \ i=1,\,2,\,3,\,\ldots,\,m$ be $k_i$-strictly pseudo-contractive mappings. Let $\{x^{n}\}$ be the sequence defined by $x^{0}\in H,$

(3.1)\begin{equation} x^{n+1}= \begin{cases} x^{n}, & n\leq \sup_{n\in \mathbb{N}}\|d^{n}\|_{\infty},\\ (1-\alpha_n)x^{n}+\alpha_n(T_{i_n}\tilde{x}^{n}+(x^{n}-\tilde{x}^{n})), & \mbox{otherwise}. \end{cases} \end{equation}

Suppose $\sup _{n\in \mathbb {N}}\|d^{n}\|_{\infty }\leq \rho$ for some $\rho \geq 1$ and that

(3.2)\begin{equation} 0<\epsilon_0\leq \alpha_n\leq \displaystyle\frac{1-k}{2\rho+1+\epsilon_0}<1-k, \quad \forall \ n\in \mathbb{N}, \end{equation}

for some $\epsilon _0>0$, where $k=\displaystyle \max _{1\leq i\leq m}k_i$. Then, the sequence $\{x^{n}\}$ converges weakly to some point $x^{*}\in C=\cap _{i=1}^{m}F(T_i)\neq \emptyset$.

In Theorem 3.1, for each $n\in \mathbb {N}$, $d^{n}$ is the delay vector, $\{i_n\}$ is an almost cyclic sequence on the set $\{1,\,2,\,3\ldots,\,m\}$, $T_{i_n}$ is the operator whose output will be used to compute $x^{n+1}$ and finally, $\tilde {x}^{n}=x^{n}$ or $\tilde {x}^{n}=x^{n-d_{i_n}^{n}}$, where $d_{i_n}^{n}$ is the delay amount of the $i_n^{th}$ processor.

Remark 2 In Theorem 3.1, if we set $A_{i_n}=(I-T_{i_n})$, then algorithm (3.1) becomes

(3.3)\begin{equation} x^{n+1}= \begin{cases} x^{n}, & n\leq \sup_{n\in \mathbb{N}}\|d^{n}\|_{\infty},\\ x^{n}-\alpha_nA_{i_n}(\tilde{x}^{n}), & \mbox{otherwise}. \end{cases} \end{equation}

Proof of Theorem 3.1. We divide the proof into four steps.

Step 1. For any $x^{*}\in C=\cap _{i=1}^{m}F(T_i)$, the following inequality holds:

(3.4)\begin{equation} \|x^{n+1}-x^{*}\|^{2}\leq \|x^{n}-x^{*}\|^{2}+\displaystyle\sum_{r=1}^{\rho}\|x^{n+1-r}-x^{n-r}\|^{2}-\alpha_n(1-k-\alpha_n(1+\rho))\|A_{i_n}(\tilde{x}^{n})\|^{2}. \end{equation}

Indeed, let $x^{*}\in C=\cap _{i=1}^{m}F(T_i)$; then, using Remark 2 and Lemma 2.3 (ii), we have

(3.5)\begin{align} \|x^{n+1}-x^{*}\|^{2} & = \|x^{n}-x^{*}-\alpha_nA_{i_n}(\tilde{x}^{n}) \|^{2}\nonumber\\ & = \|x^{n}-x^{*}\|^{2}-2\alpha_n\langle x^{n}-\tilde{x}^{n},A_{i_n}(\tilde{x}^{n})\rangle-2\alpha_n\langle \tilde{x}^{n}-{x}^{*}, A_{i_n}(\tilde{x}^{n})\rangle \nonumber\\ & \quad+{\alpha_n}^{2}\|A_{i_n}(\tilde{x}^{n})\|^{2} \end{align}

Using Lemma 2.5 (with $x=\tilde{x}^{n}$ and $p=x^{*}$), we get

(3.6)\begin{align} -2\alpha_n\langle \tilde{x}^{n}-{x}^{*}, A_{i_n}(\tilde{x}^{n})\rangle & \leq{-}\alpha_n(1-k_{i_n})\|A_{i_n}(\tilde{x}^{n})\|^{2}\nonumber\\ & \leq{-}\alpha_n(1-k)\|A_{i_n}(\tilde{x}^{n})\|^{2}. \end{align}

Also, using the fact that $d_{i_n}^{n}$ is the delay amount of the $i_n^{th}$ processor and repeated application of the triangle inequality, we have

\begin{align*} -2\alpha_n\langle {x}^{n}-\tilde{x}^{n}, A_{i_n}(\tilde{x}^{n})\rangle & \leq 2\alpha_n\|x^{n}-\tilde{x}^{n}\|\| A_{i_n}(\tilde{x}^{n})\| \nonumber\\ & \leq 2\alpha_n \displaystyle\sum_{r=1}^{d_{i_n}^{n}}\| x^{n+1-r}-x^{n-r}\|\| A_{i_n}(\tilde{x}^{n})\|\nonumber\\ & \leq 2\alpha_n \sum_{r=1}^{\rho}\| x^{n+1-r}-x^{n-r}\|\| A_{i_n}(\tilde{x}^{n})\| \end{align*}

Using the fact that $2ab\leq \frac {1}{\gamma }a^{2}+\gamma b^{2}$ for any $a,\,b\in \mathbb {R}$ and $\gamma >0$, we have

(3.7)\begin{align} -2\alpha_n\langle {x}^{n}-\tilde{x}^{n}, A_{i_n}(\tilde{x}^{n})\rangle & \leq \alpha_n \left( \displaystyle\sum_{r=1}^{\rho}\displaystyle\frac{1}{\alpha_n}\| x^{n+1-r}-x^{n-r}\|^{2}+\alpha_n\| A_{i_n}(\tilde{x}^{n})\|^{2}\right) \end{align}
(3.8)\begin{align} & =\displaystyle\sum_{r=1}^{\rho}\| x^{n+1-r}-x^{n-r}\|^{2}+{\rho {\alpha_n}^{2}}\| A_{i_n}(\tilde{x}^{n})\|^{2}. \end{align}

Substituting inequalities (3.6) and (3.8) in (3.5), we get the desired result.

Step 2. For any $x^{*}\in C=\cap _{i=1}^{m}F(T_i)$, the non-negative sequence $\{\beta _n\}_{n \in \mathbb {N}}$ defined by

(3.9)\begin{equation} \beta_{n}=\|x^{n}-x^{*}\|^{2}+\displaystyle\sum_{r=1}^{\rho}c_r\|x^{n+1-r}-x^{n-r}\|^{2} \end{equation}

is convergent, where $c_r=(1+\rho -r)+\epsilon _0,\, \ r=1,\,2,\,3\ldots,\, \rho+1$.

Using inequality (3.4), we have

(3.10)\begin{align} \beta_{n+1}& =\|x^{n+1}-x^{*}\|^{2}+\displaystyle\sum_{r=1}^{\rho}c_r\|x^{n+2-r}-x^{n+1-r}\|^{2}\nonumber\\ & \leq \|x^{n}-x^{*}\|^{2}+\displaystyle\sum_{r=1}^{\rho}\|x^{n+1-r}-x^{n-r}\|^{2} \nonumber\\ & \quad-\alpha_n(1-k-\alpha_n(1+\rho))\|A_{i_n}(\tilde{x}^{n})\|^{2}+\sum_{r=1}^{\rho}c_r\|x^{n+2-r}-x^{n+1-r}\|^{2}. \end{align}

By shifting the indexing variable in the last sum, observing that $c_{r+1}+1=c_r$ and that $\|x^{n+1}-x^{n}\|=\alpha_n\|A_{i_n}(\tilde{x}^{n})\|$, we have

(3.11)\begin{align} \beta_{n+1}& \leq \|x^{n}-x^{*}\|^{2}+\displaystyle\sum_{r=1}^{\rho}c_r\|x^{n+1-r}-x^{n-r}\|^{2}+c_1\|x^{n+1}-x^{n}\|^{2}\nonumber\\ & \quad-c_{\rho +1}\|x^{n+1-\rho}-x^{n-\rho}\|^{2}-\alpha_n(1-k-\alpha_n(1+\rho))\|A_{i_n}(\tilde{x}^{n})\|^{2}\nonumber\\ & = \beta_n-c_{\rho +1}\|x^{n+1-\rho}-x^{n-\rho}\|^{2}-\alpha_n(1-k-\alpha_n(2\rho+1+\epsilon_0))\|A_{i_n}(\tilde{x}^{n})\|^{2}. \end{align}

Since $c_{\rho +1}=\epsilon _0>0$ and, by (3.2), $1-k-\alpha_n(2\rho+1+\epsilon_0)\geq 0$, we have that $\{\beta _n\}$ is non-increasing. Thus, $\{\beta _n\}_{n \in \mathbb {N}}$ converges.

Step 3. The sequence $\{\|x^{n}-x^{*}\|\}_{n \in \mathbb {N}}$ converges, and $\|x^{n+1}-x^{n}\|\to 0$ and $\|x^{n}-\tilde {x}^{n}\|\to 0$ as $n\to \infty$.

Indeed, from (3.11), we have that

(3.12)\begin{equation} 0\leq c_{\rho +1}\|x^{n+1-\rho}-x^{n-\rho}\|^{2}\leq \beta_n-\beta_{n+1}, \end{equation}

and, since $\{\beta_n\}$ converges, $\beta_n-\beta_{n+1}\to 0$; hence

(3.13)\begin{equation} \|x^{n+1}-x^{n}\|\to 0 \ \mbox{as} \ \ n\to \infty. \end{equation}

Using (3.9) and (3.13), we see that

(3.14)\begin{equation} \lim_{n\to \infty}\beta_n=\lim_{n\to \infty}\|x^{n}-x^{*}\|^{2}. \end{equation}

Finally, since $\tilde{x}^{n}=x^{n-j}$ for some $0\leq j\leq \rho$, we have

(3.15)\begin{equation} 0\leq \|x^{n}-\tilde{x}^{n}\|\leq \displaystyle\sum_{r=1}^{\rho}\|x^{n+1-r}-x^{n-r}\|, \end{equation}

so that, by (3.13),

(3.16)\begin{equation} \|x^{n}-\tilde{x}^{n}\|\to 0 \ \mbox{as} \ \ n\to \infty. \end{equation}

Step 4. $\|T_ix^{n}-x^{n}\|\to 0$ as $n\to \infty$ for all $i=1,\,2,\,3\ldots,\,m$.

Let $i\in \{1,\,2,\,3\ldots,\,m\}$ and $T_i^{\alpha }$ be an $\alpha$-relaxation of $T_i$. Let $s_n$ be a sequence in $\mathbb {N}$ such that $s_n$ is the smallest integer $\geq n$ with $i_{s_n}=i$. Then

(3.17)\begin{equation} \|T_i^{\alpha_{s_n}}(x^{n})-x^{n}\|\leq \|T_i^{\alpha_{s_n}}(x^{n})-x^{{s_n}+1}\|+\|x^{{s_n}+1}-x^{n}\| \end{equation}

Now,

(3.18)\begin{align} \|T_i^{\alpha_{s_n}}(x^{n})-x^{{s_n}+1}\|& = \|T_i^{\alpha_{s_n}}(x^{n})-(x^{s_n}-\alpha_{s_n}\tilde{x}^{s_n}+\alpha_{s_n}T_i(\tilde{x}^{s_n}))\|\nonumber\\ & =\|T_i^{\alpha_{s_n}}(x^{n})-T_i^{\alpha_{s_n}}(\tilde{x}^{s_n})-(x^{s_n}-\tilde{x}^{s_n})\|\nonumber\\ & \leq \|T_i^{\alpha_{s_n}}(x^{n})-T_i^{\alpha_{s_n}}(\tilde{x}^{s_n})\|+\|\tilde{x}^{s_n}-x^{s_n}\| \end{align}
(3.19)\begin{align} & \leq \|x^{n}-\tilde{x}^{s_n}\|+\|\tilde{x}^{s_n}-x^{s_n}\|. \end{align}

In the last inequality, we have used the fact that $T_i^{\alpha _{s_n}}$ is non-expansive (Lemma 2.7). Using the fact that $\{i_n\}_{n\in \mathbb {N}}$ is an almost cyclic sequence on $\{1,\,2,\,3\ldots,\,m\}$ with almost cyclic index $N$, we have $\tilde {x}^{s_n}=x^{h}$ for some $n-\rho \leq h\leq s_n\leq n+N$. Thus, by repeated application of the triangle inequality, we have

(3.20)\begin{align} \|x^{n}-\tilde{x}^{s_n}\|+\|\tilde{x}^{s_n}-x^{s_n}\|& \leq \displaystyle\sum_{r={-}\rho}^{N-1}\|x^{n+1-r}-x^{n-r}\|+\sum_{r={-}\rho}^{N-1}\|x^{n+1-r}-x^{n-r}\|\nonumber\\ & = 2\displaystyle\sum_{r={-}\rho}^{N-1}\|x^{n+1-r}-x^{n-r}\|. \end{align}

From inequalities (3.19) and (3.20), we have

(3.21)\begin{align} \|T_i^{\alpha_{s_n}}(x^{n})-x^{{s_n}+1}\|& \leq \displaystyle\sum_{r={-}\rho}^{N-1}\|x^{n+1-r}-x^{n-r}\|+\sum_{r={-}\rho}^{N-1}\|x^{n+1-r}-x^{n-r}\|\nonumber\\ & = 2\displaystyle\sum_{r={-}\rho}^{N-1}\|x^{n+1-r}-x^{n-r}\|. \end{align}

Inequalities (3.17) and (3.21) further imply that

(3.22)\begin{align} \|T_i^{\alpha_{s_n}}(x^{n})-x^{n}\| & \leq 2\displaystyle\sum_{r={-}\rho}^{N-1}\|x^{n+1-r}-x^{n-r}\|+\|x^{{s_n}+1}-x^{n}\|\nonumber\\ & \leq 3\displaystyle\sum_{r={-}\rho}^{N-1}\|x^{n+1-r}-x^{n-r}\|. \end{align}

Applying (3.13), we see that $\|T_i^{\alpha _{s_n}}(x^{n})-x^{n}\|\to 0$ as $n\to \infty$.

Now

(3.23)\begin{equation} \epsilon_0\|T_i(x^{n})-x^{n}\|\leq \alpha_{s_n}\|T_i(x^{n})-x^{n}\|=\|T_i^{\alpha_{s_n}}(x^{n})-x^{n}\|. \end{equation}

Therefore,

(3.24)\begin{equation} \|T_i(x^{n})-x^{n}\|\to 0 \text{ as } n\to \infty \text{ for each } i=1,2,3\ldots,m. \end{equation}

Equality (3.14), combined with (3.24), implies that all the assumptions of Lemma 2.6 are satisfied. Hence the sequence $\{x^{n}\}_{n\in \mathbb {N}}$ converges weakly to some point in $C=\cap _{i=1}^{m}F(T_i)$. The proof is complete.

Definition 3.2 see, e.g. [Reference Petryshyn41]

A map $T:H \longrightarrow H$ is said to be demicompact if it has the property that whenever a sequence $\{x_n\}$ is bounded and the sequence $\{x_n-Tx_n\}$ converges strongly, then there exists a subsequence of $\{x_n\}$ that converges strongly.

We now give the following strong convergence result.

Theorem 3.3 Let $T_i:H\longrightarrow H,\, \ i=1,\,2,\,3,\,\ldots,\,m$ be $k_i$-strictly pseudo-contractive mappings. Let $\{x^{n}\}$ be the sequence defined by $x^{0}\in H,$

(3.25)\begin{equation} x^{n+1}= \begin{cases} x^{n}, & n\leq \sup_{n\in \mathbb{N}}\|d^{n}\|_{\infty},\\ (1-\alpha_n)x^{n}+\alpha_n(T_{i_n}\tilde{x}^{n}+(x^{n}-\tilde{x}^{n})), & \mbox{otherwise}. \end{cases} \end{equation}

Suppose $\sup _{n\in \mathbb {N}}\|d^{n}\|_{\infty }\leq \rho$ for some $\rho \geq 1$ and that

(3.26)\begin{equation} 0<\epsilon_0\leq \alpha_n\leq \displaystyle\frac{1-k}{2\rho+1+\epsilon_0}<1-k, \quad \forall \ n\in \mathbb{N}, \end{equation}

for some $\epsilon _0>0,$ where $k=\displaystyle \max _{1\leq i\leq m}k_i$. Then, the sequence $\{x^{n}\}$ converges weakly to some point $x^{*}\in C=\cap _{i=1}^{m}F(T_i)\neq \emptyset$. Suppose, in addition, that $T_{i_0}$ is demicompact for some $i_0\in \{1,\,2,\,3,\,\ldots,\,m\}$; then the sequence $\{x^{n}\}$ converges strongly to $x^{*}$.

Proof. Suppose $T_{i_0}$ is demicompact for some $i_0\in \{1,\,2,\,3,\,\ldots,\,m\}$. Since, by the proof of Theorem 3.1, $\{x^{n}\}$ is bounded and $x^{n}-T_{i_0}x^{n} \to 0$ as $n\to \infty$, there exists a subsequence $\{x^{n_k}\}$ of $\{x^{n}\}$ such that $x^{n_k}\to x^{*}$ strongly. Lemma 2.4 (ii), together with (3.24), implies that $x^{*}\in C$. Moreover, by the proof of Theorem 3.1, $\|x^{n}-x^{*}\|$ converges. Therefore, $x^{n}\to x^{*}$ as $n\to \infty$. The proof is complete.

4. Applications

In this section, we apply our main theorem to solving convex minimization problems and Hammerstein integral equations.

4.1. Convex optimization

We first recall the following definition.

Definition 4.1 Let $A:H\longrightarrow H$ be a map. $A$ is called $\beta$-inverse strongly monotone if for each $x,\,y\in H$ we have

\[ \langle Ax-Ay,x-y\rangle \geq \beta \|Ax-Ay\|^{2}. \]

It is well known that a map $T$ is $k$-strictly pseudo-contractive if and only if the map $A:=I-T$ is $\frac {1-k}{2}$-inverse strongly monotone and that $F(T)=N(A)$, where $N(A)=\{x\in H: Ax=0\}$ (see, e.g. [Reference Browder and Petryshyn13]). We now give the following lemma, which is due to Baillon and Haddad [Reference Baillon and Haddad5].

Lemma 4.2 Baillon and Haddad [Reference Baillon and Haddad5]

Let $E$ be a Banach space and let $f:E\longrightarrow \mathbb {R}$ be a continuously Fréchet differentiable convex function. If the gradient $\nabla f$ of $f$ is $\frac {1}{\alpha }$-Lipschitz, then $\nabla f$ is $\alpha$-inverse strongly monotone.

Theorem 4.3 Let $H$ be a real Hilbert space and $f_i:H\longrightarrow \mathbb {R},$ $i=1,\,2,\,3,\,\ldots,\, m$ be continuously Fréchet differentiable convex functions such that $\nabla f_i$ is $L_i$-Lipschitz for each $i=1,\,2,\,3,\,\ldots,\, m$. Suppose that the set $C=\cap _{i=1}^{m} C_i\neq \emptyset,$ where $C_i=\operatorname{argmin}_{x\in H}f_i(x)=\{x\in H: f_i(x)=\min _{y\in H}f_i(y)\}$ for each $i$. Under the conditions of Theorem 3.1, the sequence generated by Algorithm (3.3) with $A_i=\nabla f_i$ converges weakly to a point in $C$. Moreover, if $\nabla f_{i_0}$ is demicompact for some $i_0$, then the sequence converges strongly to a point in $C$.

Proof. By Lemma 4.2, each $\nabla f_i$ is $\frac{1}{L_i}$-inverse strongly monotone, so that $T_i:=I-\nabla f_i$ is a $k_i$-strictly pseudo-contractive map with $k_i=\max \{0,\,1-\frac{2}{L_i}\}$. Since $\nabla f_i(x^{*})=0$ if and only if $x^{*}$ is a minimizer of $f_i$, we have $F(T_i)=N(\nabla f_i)=C_i$, and the result follows from Theorem 3.1.
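A toy numerical illustration of Theorem 4.3 (ours; the quadratics, matrices and step size below are hypothetical choices): the functions $f_i(x)=\frac{1}{2}(x-c)^{T}M_i(x-c)$, with $M_i$ positive definite diagonal, share the minimizer $c$, and the asynchronous iteration $x^{n+1}=x^{n}-\alpha\,\nabla f_{i_n}(\tilde{x}^{n})$ with bounded delays recovers it. Since $L_i\leq 2$ here, $\alpha=0.1$ lies within the admissible step-size range.

```python
import numpy as np

rng = np.random.default_rng(4)

d, m = 3, 4
c = np.array([1.0, -1.0, 2.0])      # common minimizer of every f_i (hypothetical)
Ms = [np.diag(rng.uniform(0.5, 2.0, size=d)) for _ in range(m)]

def grad(i, x):
    # gradient of the toy quadratic f_i(x) = 0.5 * (x - c)^T M_i (x - c)
    return Ms[i] @ (x - c)

rho, alpha = 2, 0.1                 # delay bound and a small constant step size
x = np.zeros(d)
history = [x.copy()]

for n in range(500):
    i_n = n % m                     # cyclic index sequence
    j = min(int(rng.integers(0, rho + 1)), n)
    x = x - alpha * grad(i_n, history[n - j])   # the iteration with A_i = grad f_i
    history.append(x.copy())

print(np.linalg.norm(x - c))        # distance to the common minimizer
```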

4.2. Hammerstein integral equations

Let $\Omega$ be a measurable and bounded subset of ${\mathbb {R}}^{n}$. A non-linear integral equation of Hammerstein type is one of the form

(4.1)\begin{equation} u(x)+\displaystyle\int_\Omega k(x,y)f(y,u(y))dy=w(x), \end{equation}

where $dy$ is a $\sigma$-finite measure. The function $k:\Omega \times \Omega \rightarrow \mathbb {R}$ is the kernel of the equation and $f:\Omega \times \mathbb {R} \rightarrow \mathbb {R}$ is a measurable real-valued function. The function $w$ and the unknown function $u$ lie in a suitable Banach space of measurable real-valued functions, say, $\mathcal {F}(\Omega,\, \mathbb {R})$. If we define the operators $F:\mathcal {F}(\Omega,\, \mathbb {R})\rightarrow \mathcal {F}(\Omega,\, \mathbb {R})$ and $K:\mathcal {F}(\Omega,\, \mathbb {R})\rightarrow \mathcal {F}(\Omega,\, \mathbb {R})$ by

(4.2)\begin{equation} Fu(x)=f(x,u(x)) \text{ and }Kv(x)=\displaystyle\int_\Omega k(x,y)v(y)dy, \quad x\in \Omega, \end{equation}

then (4.1) can easily be put in the abstract Hammerstein equation

(4.3)\begin{equation} u+KFu=0, \end{equation}

where, without loss of generality, we have taken $w$ to be the zero map in $\mathcal {F}(\Omega,\, \mathbb {R})$. We also recall that a map $A:H\longrightarrow H$ is called $\alpha$-strongly monotone if for each $x,\,y\in H$ we have

\[ \langle Ax-Ay,x-y\rangle \geq \alpha \|x-y\|^{2}. \]

Now, let $E=H\times H$ with the inner product

\[ \langle (u_1,v_1),(u_2,v_2)\rangle_{H\times H}=\langle u_1,u_2\rangle_{H} +\langle v_1,v_2\rangle_{H} \]

and the norm

\[ \|(u,v)\|_{H\times H}^{2}=\|u\|_{H}^{2}+\|v\|_{H}^{2}. \]

Define the map $T:E\longrightarrow E$ by

(4.4)\begin{equation} T(u,v)=(Fu-v,Kv+u). \end{equation}

Then it is easy to see that $u^{*}\in H$ solves (4.3) if and only if $T(u^{*},\,v^{*})=(0,\,0)$, where $v^{*}=Fu^{*}$; and that $T$ is strongly monotone and Lipschitz whenever the maps $F$ and $K$ are both strongly monotone and Lipschitz, since the cross terms cancel in the inner product on $E$. It is also easy to verify that an $L$-Lipschitz $\alpha$-strongly monotone map $A$ is inverse strongly monotone, as
\[ \langle Ax-Ay,x-y\rangle \geq \alpha \|x-y\|^{2}\geq \frac{\alpha}{L^{2}}\|Ax-Ay\|^{2}. \]
We now give the following result for approximating a common solution of Hammerstein integral equations.

Theorem 4.4 Let $H$ be a real Hilbert space and $F_i,\,K_i:H\longrightarrow H$ be $\eta _i$-strongly monotone and $L_i$-Lipschitz maps for $i=1,\,2,\,3,\,\ldots,\, m$. Suppose that the set

\[ \Omega=\{u\in H: u+K_iF_iu=0, \quad \forall \ i=1,2,3,\ldots, m\}\neq \emptyset. \]

Then, under the conditions of Theorem (3.1), the sequence generated by Algorithm (3.3) converges weakly to a point in $\Omega$.

Proof. Define the maps $A_i:E\longrightarrow E$ by $A_i(u,\,v)=(F_iu-v,\,K_iv+u)$ as in (4.4). Then, for each $i=1,\,2,\,3,\,\ldots,\, m$, $A_i$ is strongly monotone and Lipschitz, hence inverse strongly monotone, and the result follows.
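A toy finite-dimensional instance of theorem 4.4's setup (again an illustration, not Algorithm (3.3)): take $H=\mathbb {R}^{d}$, $Fu=2u-c$ and $K=I$, both strongly monotone and Lipschitz, and drive the coupled map $A(u,\,v)=(Fu-v,\,Kv+u)$ to zero by a plain small-step iteration. The constant $c$, the step size and the iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
c = rng.standard_normal(d)

F = lambda u: 2.0 * u - c   # 2-strongly monotone, 2-Lipschitz (illustrative)
K = lambda v: v             # identity: 1-strongly monotone, 1-Lipschitz

def A(u, v):
    """The coupled map A(u, v) = (Fu - v, Kv + u) on E = H x H."""
    return F(u) - v, K(v) + u

lam = 0.15                  # small step size for the strongly monotone map A
u = np.zeros(d)
v = np.zeros(d)
for _ in range(500):
    Au, Av = A(u, v)
    u, v = u - lam * Au, v - lam * Av

# The solution of u + K(F(u)) = 0 here satisfies 3u - c = 0, i.e. u = c/3,
# and the second component approaches v = F(u) = -c/3.
print(np.linalg.norm(u - c / 3.0))
```

Since $A$ is strongly monotone, the iterates contract to the unique zero $(u^{*},\,v^{*})=(c/3,\,-c/3)$.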

5. Conclusion

In this paper, we proved weak and strong convergence of the asynchronous inertial algorithm to a common fixed point of $k$-strictly pseudo-contractive maps in Hilbert spaces. We also gave applications of our main result to the solution of convex minimization problems and Hammerstein integral equations. In future work, we shall concentrate on the implementation of the asynchronous inertial algorithm and on its convergence for other classes of non-linear maps, such as pseudo-contractions, accretive and monotone maps (see e.g. [26]).

References

Aharoni, D. and Barak, A., Parallel iterative discontinuous Galerkin finite-element methods, in Discontinuous Galerkin Methods, pp. 247–254 (Springer, 2000).
Aharoni, R. and Censor, Y., Block-iterative projection methods for parallel computation of solutions to convex feasibility problems, Linear Algebra Appl. 120 (1989), 165–175.
Amitai, D., Averbuch, A., Israeli, M. and Itzikowitz, S., Implicit-explicit parallel asynchronous solver of parabolic PDEs, SIAM J. Sci. Comput. 19 (4) (1998), 1366–1404.
Bahi, J., Miellou, J. C. and Rhofir, K., Asynchronous multisplitting methods for nonlinear fixed point problems, Numer. Algor. 15 (3–4) (1997), 315–345.
Baillon, J. B. and Haddad, G., Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones, Israel J. Math. 26 (1977), 137–150.
Baillon, J.-B., Bruck, R. E. and Reich, S., On the asymptotic behavior of nonexpansive mappings and semigroups, Houston J. Math. 4 (1978), 1–9.
Baudet, G. M., Asynchronous iterative methods for multiprocessors, J. ACM 25 (2) (1978), 226–244.
Bauschke, H. H. and Borwein, J. M., On projection algorithms for solving convex feasibility problems, SIAM Rev. 38 (1996), 367–426.
Bauschke, H. H. and Combettes, P. L., Convex Analysis and Monotone Operator Theory in Hilbert Spaces (Springer, New York, 2011).
Bauschke, H. H. and Koch, V., Projection methods: Swiss army knives for solving feasibility and best approximation problems with halfspaces, Contemp. Math. 636 (2015), 1–40.
Bauschke, H., Combettes, P. L. and Kruk, S., Extrapolation algorithm for affine-convex feasibility problems, Numer. Algor. 41 (2006), 239–274.
Bertsekas, D. P. and Tsitsiklis, J. N., Parallel and Distributed Computation: Numerical Methods (Prentice Hall, Englewood Cliffs, NJ, 1989).
Browder, F. E. and Petryshyn, W. V., Construction of fixed points of nonlinear mappings in Hilbert spaces, J. Math. Anal. Appl. 20 (1967), 197–228.
Cegielski, A., Iterative Methods for Fixed Point Problems in Hilbert Spaces (Springer, Heidelberg, 2012).
Cegielski, A., Reich, S. and Zalas, R., Weak, strong and linear convergence of the $CQ$-method via the regularity of Landweber operators, Optimization 69 (2020), 605–636.
Censor, Y., Parallel application of block-iterative methods in medical imaging and radiation therapy, Math. Program. 42 (1988), 307–325.
Censor, Y. and Cegielski, A., Projection methods: an annotated bibliography of books and reviews, Optimization 64 (2015), 2343–2358.
Censor, Y. and Zenios, S. A., Parallel Optimization (Oxford University Press, New York, 1997).
Censor, Y., Gordon, M. and Gordon, R., Component averaging: an efficient iterative parallel algorithm for large and sparse unstructured problems, Parallel Comput. 27 (2001), 777–808.
Censor, Y., Elfving, T., Kopf, N. and Bortfeld, T., The multiple-sets split feasibility problem and its applications for inverse problems, Inverse Probl. 21 (2005), 2071–2084.
Censor, Y., Chen, W., Combettes, P. L., Davidi, R. and Herman, G. T., On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints, Comput. Optim. Appl. 51 (2012), 1065–1088.
Chau, M., Spiteri, P., Guivarch, R. and Boisson, H., Parallel asynchronous iterations for the solution of a 3D continuous flow electrophoresis problem, Comput. Fluids 37 (9) (2008), 1126–1137.
Chazan, D. and Miranker, W., Chaotic relaxation, Linear Algebra Appl. 2 (2) (1969), 199–222.
Combettes, P. L., The convex feasibility problem in image recovery, Adv. Imaging Electron Phys. 95 (1996), 155–270.
Combettes, P. L., Hilbertian convex feasibility problems: convergence of projection methods, Appl. Math. Optim. 35 (1997), 311–330.
Combettes, P. L. and Eckstein, J., Asynchronous block-iterative primal-dual decomposition methods for monotone inclusions, Math. Program. 168 (2018), 645–672. doi: 10.1007/s10107-016-1044-0.
Donzis, D. A. and Aditya, K., Asynchronous finite-difference schemes for partial differential equations, J. Comput. Phys. 274 (2014), 370–392.
Elsner, L., Koltracht, I. and Neumann, M., Convergence of sequential and asynchronous nonlinear paracontractions, Numer. Math. 62 (1992), 305–319.
Geer, D., Chip makers turn to multicore processors, Computer 38 (2005), 11–13.
Halpern, B., Fixed points of nonexpanding maps, Bull. Amer. Math. Soc. 73 (1967), 957–961.
Heaton, H. and Censor, Y., Asynchronous sequential inertial iterations for common fixed points problems with an application to linear systems, J. Glob. Optim. 74 (2019), 95–119.
Herman, G. T., Fundamentals of Computerized Tomography (Springer, Dordrecht, 2009).
Kim, T. H. and Xu, H. K., Strong convergence of modified Mann iterations, Nonlinear Anal. 61 (2005), 51–60.
Lions, P. L., Approximation de points fixes de contractions, C. R. Acad. Sci. Sér. A-B Paris 284 (1977), 1357–1359.
Liu, J. and Wright, S. J., Asynchronous stochastic coordinate descent: parallelism and convergence properties, SIAM J. Optim. 25 (1) (2015), 351–376.
Liu, J., Wright, S. J., Ré, C., Bittorf, V. and Sridhar, S., An asynchronous parallel stochastic coordinate descent algorithm, J. Mach. Learn. Res. 16 (2015), 285–322.
Mann, W. R., Mean value methods in iteration, Proc. Am. Math. Soc. 4 (1953), 506–510.
Marino, G. and Xu, H. K., Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces, J. Math. Anal. Appl. 329 (2007), 336–346.
Marino, G., Scardamaglia, B. and Karapinar, E., Strong convergence theorem for strict pseudo-contractions in Hilbert spaces, J. Inequal. Appl. 2016 (2016), 134.
Masad, E. and Reich, S., A note on the multiple-set split convex feasibility problem in Hilbert space, J. Nonlinear Convex Anal. 8 (2007), 367–371.
Petryshyn, W., Construction of fixed points of demicompact mappings in Hilbert space, J. Math. Anal. Appl. 14 (2) (1966), 276–284.
Reich, S., A limit theorem for projections, Linear Multilinear Algebra 9 (1983), 281–290.
Reich, S., Weak convergence theorems for nonexpansive mappings in Banach spaces, J. Math. Anal. Appl. 67 (1979), 274–276.
Tai, X. C. and Tseng, P., Convergence rate analysis of an asynchronous space decomposition method for convex minimization, Math. Comput. 71 (239) (2002), 1105–1135.
Tseng, P., Bertsekas, D. and Tsitsiklis, J. N., Partially asynchronous, parallel algorithms for network flow and other problems, SIAM J. Control Optim. 28 (3) (1990), 678–710.
Zhang, R. and Kwok, J., Asynchronous distributed ADMM for consensus optimization, in Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1701–1709 (2014).
Peng, Z., Xu, Y., Yan, M. and Yin, W., ARock: an algorithmic framework for asynchronous parallel coordinate updates, SIAM J. Sci. Comput. 38 (5) (2016), A2851–A2879.
Figure 0. Algorithm 1: ARock: a framework for async-parallel coordinate updates.

Figure 1. Algorithm 2: Asynchronous Inertial Algorithm.