
Iterative solutions via some variants of extragradient approximants in Hilbert spaces

  • This paper provides iterative solutions, via some variants of the extragradient approximants, associated with the pseudomonotone equilibrium problem (EP) and the fixed point problem (FPP) for a finite family of η-demimetric operators in Hilbert spaces. The classical extragradient algorithm is embedded with the inertial extrapolation technique, the parallel hybrid projection technique and the Halpern iterative method for these variants. The analysis of the approximants is performed under a suitable set of constraints and is supported with appropriate numerical experiments that demonstrate the viability of the approximants.

    Citation: Yasir Arfat, Muhammad Aqeel Ahmad Khan, Poom Kumam, Wiyada Kumam, Kanokwan Sitthithakerngkiet. Iterative solutions via some variants of extragradient approximants in Hilbert spaces[J]. AIMS Mathematics, 2022, 7(8): 13910-13926. doi: 10.3934/math.2022768




    Mathematical modelling provides a systematic formalism for the understanding of the corresponding real-world problem. Moreover, adequate mathematical tools for the analysis of the translated real-world problem are at our disposal. Fixed point theory (FPT), an important branch of nonlinear functional analysis, is prominent for modelling a variety of real-world problems. It is worth mentioning that many real-world phenomena can be translated into the well-known existential as well as computational fixed point problem (FPP).

    The EP theory provides another systematic formalism for modelling real-world problems, with possible applications in optimization theory, variational inequality theory and game theory [7,10,13,17,18,19,21,25,28,31,32]. In 1994, Blum and Oettli [13] proposed the (monotone) EP in Hilbert spaces. Since then, various classical iterative algorithms have been employed to compute the optimal solution of the (monotone) EP and the FPP. It is remarked that the convergence characteristic and the speed of convergence are the principal attributes of an iterative algorithm. The classical iterative algorithms from FPT or EP theory share a common shortcoming: the convergence is, in general, only with respect to the weak topology. In order to enforce strong convergence, one has to impose stronger assumptions on the domain and/or the constraints. Moreover, the strong convergence of an iterative algorithm is often more desirable than weak convergence in an infinite dimensional framework.

    The efficiency of an iterative algorithm can be improved by employing the inertial extrapolation technique [29]. This technique has successfully been combined with various classical iterative algorithms; see, e.g., [2,3,4,5,6,8,9,14,15,16,23,27]. On the other hand, a parallel architecture of the algorithm helps to reduce the computational cost.

    In 2006, Tada and Takahashi [33] suggested a hybrid framework for the analysis of the monotone EP and the FPP in Hilbert spaces. However, the iterative algorithm proposed in [33] fails for the case of the pseudomonotone EP. In order to address this issue, Anh [1] suggested a hybrid extragradient method, based on the seminal work of Korpelevich [24], to address the pseudomonotone EP together with the FPP. Inspired by the work of Anh [1], Hieu et al. [21] suggested a parallel hybrid extragradient framework to address the pseudomonotone EP together with the FPP associated with nonexpansive operators.

    Inspired and motivated by the ongoing research, it is natural to study the pseudomonotone EP together with the FPP associated with the class of η-demimetric operators. We, therefore, suggest some variants of the classical Mann iterative algorithm [26] and the Halpern iterative algorithm [20] in Hilbert spaces. We formulate these variants, endowed with the inertial extrapolation technique and the parallel hybrid architecture, for speedy strong convergence results in Hilbert spaces.

    The rest of the paper is organized as follows. We present some relevant preliminary concepts and useful results regarding the pseudomonotone EP and the FPP in Section 2. Section 3 comprises strong convergence results for the proposed variants of the parallel hybrid extragradient algorithm as well as the Halpern iterative algorithm under a suitable set of constraints. In Section 4, we provide detailed numerical results for the demonstration of the main results of Section 3 as well as the viability of the proposed variants with respect to various real-world applications.

    Throughout this section, the triplet (\mathcal{H}, \langle \cdot, \cdot\rangle, \Vert\cdot\Vert) denotes a real Hilbert space together with its inner product and the induced norm, respectively. The symbols \rightharpoonup and \rightarrow denote weak and strong convergence, respectively. Recall that a Hilbert space satisfies Opial's condition, i.e., for a sequence (p_{k})\subset\mathcal{H} with p_{k}\rightharpoonup \nu , the inequality \liminf_{k\rightarrow\infty}\Vert p_{k}-\nu\Vert < \liminf_{k\rightarrow\infty}\Vert p_{k}-\mu\Vert holds for all \mu \in \mathcal{H} with \nu \neq \mu . Moreover, \mathcal{H} satisfies the Kadec-Klee property, i.e., if p_{k}\rightharpoonup \nu and \Vert p_{k}\Vert \rightarrow \Vert \nu \Vert as k \rightarrow \infty , then \Vert p_{k}-\nu\Vert\rightarrow 0 as k \rightarrow \infty .

    For a nonempty closed and convex subset K\subseteq \mathcal{H} , the metric projection operator \Pi^{\mathcal{H}}_{K}:\mathcal{H}\rightarrow K is defined as \Pi^{\mathcal{H}}_{K}(\mu) = argmin_{\nu \in K}\Vert \mu-\nu\Vert . If T:\mathcal{H} \rightarrow \mathcal{H} is an operator then Fix(T) = \{\nu \in \mathcal{H}|\nu = T\nu\} represents the set of fixed points of the operator T . Recall that the operator T is called \eta -demimetric (see [35]) where \eta \in (-\infty, 1) , if Fix(T) \neq \emptyset and

    \begin{equation*} \langle \mu-\nu, \mu-T\mu\rangle \geq \frac{1}{2}(1-\eta)\Vert \mu-T\mu\Vert^{2}, \; \; \forall\; \; \mu \in \mathcal{H} \; {\rm{and}}\; \nu \in Fix(T). \end{equation*}

    The above definition is equivalently represented as

    \begin{equation*} \Vert T\mu-\nu\Vert^{2}\leq \Vert \mu-\nu\Vert^{2}+\eta\Vert \mu-T\mu\Vert^{2}, \; \; \forall\; \; \mu \in \mathcal{H} \; {\rm{and}}\; \nu \in Fix(T). \end{equation*}
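    The equivalence of these two formulations follows from the elementary expansion

    \begin{equation*} \Vert T\mu-\nu\Vert^{2} = \Vert (\mu-\nu)-(\mu-T\mu)\Vert^{2} = \Vert \mu-\nu\Vert^{2}-2\langle \mu-\nu, \mu-T\mu\rangle+\Vert \mu-T\mu\Vert^{2}, \end{equation*}

    so that \Vert T\mu-\nu\Vert^{2}\leq \Vert \mu-\nu\Vert^{2}+\eta\Vert \mu-T\mu\Vert^{2} holds exactly when 2\langle \mu-\nu, \mu-T\mu\rangle \geq (1-\eta)\Vert \mu-T\mu\Vert^{2} .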

    Recall also that a bifunction g:K\times K\rightarrow \mathbb{R}\cup \{+\infty \} is called (ⅰ) monotone if g(\mu, \nu)+g(\nu, \mu)\leq 0, \ {\rm{ for\ all }}\ \mu, \nu\in K; and (ⅱ) strongly pseudomonotone if g(\mu, \nu)\geq 0\Rightarrow g(\nu, \mu)\leq -\alpha \Vert \mu-\nu\Vert ^{2}, \; \; {\rm{for\ all }}\ \mu, \nu\in K , where \alpha > 0 . It is worth mentioning that the monotonicity of a bifunction implies its pseudomonotonicity, but the converse is not true. Recall that the EP associated with the bifunction g is to find \mu \in K such that g(\mu, \nu)\geq 0 for all \nu \in K . The set of solutions of the equilibrium problem is denoted by EP(g) .

    Assumption 2.1. [12,13] Let g:K\times K\rightarrow \mathbb{R}\cup \{+\infty \} be a bifunction satisfying the following assumptions:

    (A1) g is pseudomonotone, i.e., g(\mu, \nu)\geq 0\Rightarrow g(\nu, \mu)\leq 0, \ {{for\ all}}\; \mu, \nu\in K ;

    (A2) g is Lipschitz-type continuous, i.e., there exist two nonnegative constants d_{1}, d_{2} such that

    \begin{equation*} g(\mu, \nu)+g(\nu, \xi)\geq g(\mu, \xi)-d_{1}\Vert \mu-\nu\Vert ^{2}-d_{2}\Vert \nu-\xi\Vert ^{2}, \; \ {{for\ all}}\; \mu, \nu, \xi\in K; \end{equation*}

    (A3) g is weakly continuous on K\times K in the sense that, if \mu, \nu \in K and (p_{k}) , (q_{k}) are two sequences in K such that p_{k}\rightharpoonup \mu and q_{k}\rightharpoonup \nu respectively, then g(p_{k}, q_{k}) \rightarrow g(\mu, \nu) ;

    (A4) For each fixed \mu\in K , g(\mu, \cdot) is convex and subdifferentiable on K .

    In view of the Assumption 2.1, EP(g) associated with the bifunction g is weakly closed and convex.

    Let g_{i}:K\times K\rightarrow \mathbb{R}\cup \{+\infty \} be a finite family of bifunctions satisfying Assumption 2.1. Then for all i \in \{1, 2, \cdots, M\} , we can compute the same Lipschitz coefficients (d_{1}, d_{2}) for the family of bifunctions g_{i} by employing the condition (A2) as

    \begin{eqnarray*} g_{i}(\mu, \xi)-g_{i}(\mu, \nu)-g_{i}(\nu, \xi)\leq d_{1, i}\Vert \mu-\nu\Vert^{2}+d_{2, i}\Vert \nu-\xi\Vert^{2} \leq d_{1}\Vert \mu-\nu\Vert^{2}+d_{2}\Vert \nu-\xi\Vert^{2}, \end{eqnarray*}

    where d_{1} = \max_{1\leq i \leq M}\{d_{1, i}\} and d_{2} = \max_{1\leq i \leq M}\{d_{2, i}\} . Therefore, g_{i}(\mu, \nu)+g_{i}(\nu, \xi)\geq g_{i}(\mu, \xi)-d_{1}\Vert \mu-\nu \Vert^{2}-d_{2}\Vert \nu-\xi\Vert^{2} . In addition, we assume T_{j}:\mathcal{H} \rightarrow \mathcal{H} to be a finite family of \eta -demimetric operators such that \Gamma: = \big(\bigcap^{M}_{i = 1}EP(g_{i})\big)\cap \big(\bigcap^{N}_{j = 1}Fix(T_{j})\big) \neq \emptyset . Then we are interested in the following problem:

    \begin{equation} {\rm{Find}}\; \hat{p} \in \Gamma . \end{equation} (2.1)

    Lemma 2.2. [11] Let \mu, \nu\in \mathcal{H} and \beta \in \mathbb{R} then

    (1) \Vert \mu+\nu\Vert^{2} \leq \Vert \mu \Vert^{2}+2 \langle \nu, \mu+\nu \rangle ;

    (2) \Vert \mu-\nu\Vert^{2} = \Vert \mu \Vert^{2}-\Vert \nu \Vert^{2}-2 \langle\mu-\nu, \nu \rangle ;

    (3) \|\beta\mu+(1-\beta)\nu\|^{2} = \beta\|\mu\|^{2}+(1-\beta)\|\nu\|^{2}-\beta(1-\beta)\|\mu-\nu\|^{2}.

    Lemma 2.3. [35] Let T:K\rightarrow \mathcal{H} be an \eta -demimetric operator defined on a nonempty, closed and convex subset K of a Hilbert space \mathcal{H} with \eta \in (-\infty, 1) . Then Fix(T) is closed and convex.

    Lemma 2.4. [36] Let T:K\rightarrow \mathcal{H} be an \eta -demimetric operator defined on a nonempty, closed and convex subset K of a Hilbert space \mathcal{H} with \eta \in (-\infty, 1) . Then the operator L = (1-\gamma)Id+\gamma T is quasi-nonexpansive provided that Fix(T)\neq \emptyset and 0 < \gamma < 1-\eta .

    Lemma 2.5. [11] Let T:K \rightarrow K be a nonexpansive operator defined on a nonempty closed convex subset K of a real Hilbert space \mathcal{H} and let (p_{k}) be a sequence in K . If p_{k}\rightharpoonup x and if (Id-T)p_{k}\rightarrow 0 , then x \in Fix(T) .

    Lemma 2.6. [37] Let h:K\rightarrow \mathbb{R} be a convex and subdifferentiable function on a nonempty closed and convex subset K of a real Hilbert space \mathcal{H} . Then, p_{\ast} solves the minimization problem \min \{h(q):q\in K\} if and only if 0\in\partial h(p_{\ast})+N_{K}(p_{\ast}) , where \partial h(\cdot) denotes the subdifferential of h and N_{K}(\bar{p}) is the normal cone of K at \bar{p} .

    Our main iterative algorithm of this section has the following architecture:

    Theorem 3.1. Let the following conditions:

    {\rm(C1)} \sum^{\infty}_{k = 1}\xi_{k}\|p_{k}-p_{k-1}\| < \infty ;

    {\rm(C2)} 0 < a^{\ast} \leq \gamma_{k} \leq \min\{1-\eta_{1}, \cdots, 1-\eta_{N}\} ,

    hold. Then Algorithm 1 solves the problem 2.1.

    Algorithm 1 Parallel Hybrid Inertial Extragradient Algorithm (Alg.1)
    Initialization: Choose arbitrarily, p_{0}, p_{1} \in \mathcal{H} , K\subseteq \mathcal{H} and C_{1} =\mathcal{H} . Set k \geq 1 , \{\alpha_{1}, \cdots, \alpha_{N}\} \subset (0, 1) such that \sum_{j=1}^{N}\alpha_{j}=1 , 0 < \mu < min(\frac{1}{2d_{1}}, \frac{1}{2d_{2}}) , \xi_{k} \subset [0, 1) and \gamma_{k} \in (0, \infty) .
    Iterative Steps: Given p_{k} \in \mathcal{H} , calculate e_{k} , \bar{v}_{k} and {w}_{k} as follows:
      Step 1. Compute
         \begin{eqnarray*} \left\{ \begin{array}{ll} & e_{k} = p_{k}+\xi_{k}(p_{k}-p_{k-1}); \\ & u_{i, k} = \arg\min\{\mu g_{i}(e_{k}, \nu)+\frac{1}{2}\Vert e_{k}-\nu\Vert^{2}:\nu \in K\}, i = 1, 2, \cdots, M; \\ & v_{i, k} = \arg\min\{\mu g_{i}(u_{i, k}, \nu)+\frac{1}{2}\Vert e_{k}-\nu\Vert^{2}:\nu \in K\}, i = 1, 2, \cdots, M; \\ & i_{k} = \arg\max\{\Vert v_{i, k}-p_{k}\Vert: i = 1, 2, \cdots, M\}, \bar{v}_{k} = v_{i_{k}, k}; \\ & w_{k} = \sum_{j = 1}^{N}\alpha_{j}((1-\gamma_{k})Id+\gamma_{k}T_{j})\bar{v}_{k}; \\ \end{array} \right. \end{eqnarray*}
      If {w}_{k}=\bar{v}_{k}=e_{k}=p_{k} then terminate and p_{k} solves the problem 2.1. Else
      Step 2. Compute
         \begin{eqnarray*} C_{k+1} & = & \{z \in C_{k}: \Vert {w}_{k}-z \Vert^{2} \leq \Vert p_{k}-z\Vert^{2}+\xi^{2}_{k}\Vert p_{k}-p_{k-1} \Vert^{2}+2\xi_{k}\langle p_{k}-z, p_{k}-p_{k-1}\rangle\}, \\ p_{k+1} & = & \Pi^{\mathcal{H}}_{C_{k+1}}p_{1}, \; \forall\; k \geq 1. \end{eqnarray*}
      Set k := k+1 and return to Step 1.
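    For the implementation of Step 2, it is convenient to note (a simple completion of the square, not stated explicitly above) that, with e_{k} = p_{k}+\xi_{k}(p_{k}-p_{k-1}) ,

    \begin{equation*} \Vert p_{k}-z\Vert^{2}+\xi^{2}_{k}\Vert p_{k}-p_{k-1} \Vert^{2}+2\xi_{k}\langle p_{k}-z, p_{k}-p_{k-1}\rangle = \Vert e_{k}-z\Vert^{2}, \end{equation*}

    so that C_{k+1} = \{z \in C_{k}: \Vert {w}_{k}-z \Vert \leq \Vert e_{k}-z \Vert\} , i.e., C_{k+1} is the intersection of C_{k} with a closed halfspace bounded by the bisecting hyperplane of the segment joining {w}_{k} and e_{k} .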


    The following result is crucial for the strong convergence result of the Algorithm 1.

    Lemma 3.2. [1,30] Suppose that \nu_{\ast} \in EP(g_{i}) , and p_{k} , e_{k} , u_{i, k} , v_{i, k} , i \in \{1, 2, \cdots, M\} are defined in Step 1 of the Algorithm 1. Then we have

    \begin{equation*} \Vert v_{i, k}-\nu_{\ast}\Vert^{2}\leq \Vert e_{k}-\nu_{\ast}\Vert^{2}-(1-2\mu d_{1})\Vert u_{i, k}-e_{k}\Vert^{2}-(1-2\mu d_{2})\Vert u_{i, k}-v_{i, k}\Vert^{2}. \end{equation*}

    Proof of Theorem 3.1.

    Step 1. The Algorithm 1 is well-defined.

    Observe the following representation of the set C_{k+1} :

    \begin{eqnarray*} C_{k+1} = \{z \in C_{k}:\langle p_{k}-{w}_{k}, z\rangle \leq \frac{1}{2}(\Vert p_{k}\Vert^{2}-\Vert {w}_{k} \Vert^{2}+\xi^{2}_{k}\Vert p_{k}-p_{k-1} \Vert^{2}+2\xi_{k}\langle p_{k}-z, p_{k}-p_{k-1}\rangle)\}. \end{eqnarray*}

    This shows that C_{k+1} is closed and convex for all k \geq 1 . It is well known that EP(g_{i}) and Fix(T_{j}) (from Assumption 2.1 and Lemma 2.3, respectively) are closed and convex. Hence \Gamma is nonempty, closed and convex. For any p_{\ast}\in \Gamma , it follows from Algorithm 1 that

    \begin{eqnarray} \Vert e_{k}-p_{\ast}\Vert^{2}& = &\Vert p_{k}-p_{\ast}+\xi_{k}(p_{k}-p_{k-1})\Vert^{2} \\ &\leq& \Vert p_{k}-p_{\ast}\Vert^{2}+\xi^{2}_{k}\Vert p_{k}-p_{k-1} \Vert^{2}+2\xi_{k}\langle p_{k}-p_{\ast}, p_{k}-p_{k-1}\rangle. \end{eqnarray} (3.1)

    From (3.1) and recalling Lemma 2.4, we obtain

    \begin{eqnarray*} \Vert {w}_{k}-p_{\ast}\Vert& = &\Big\Vert \sum\limits_{j = 1}^{N}\alpha_{j}((1-\gamma_{k})Id+\gamma_{k}T_{j})\bar{v}_{k}-p_{\ast}\Big\Vert \leq \sum\limits_{j = 1}^{N}\alpha_{j}\Vert ((1-\gamma_{k})Id+\gamma_{k}T_{j})\bar{v}_{k}-p_{\ast}\Vert\notag \\ &\leq& \sum\limits_{j = 1}^{N}\alpha_{j}\Vert \bar{v}_{k}-p_{\ast}\Vert = \Vert \bar{v}_{k}-p_{\ast}\Vert \notag . \end{eqnarray*}

    Now recalling Lemma 3.2, the above estimate implies that

    \begin{eqnarray} \Vert {w}_{k}-p_{\ast}\Vert^{2} &\leq&\Vert \bar{v}_{k}-p_{\ast}\Vert^{2} \\ &\leq &\Vert p_{k}-p_{\ast}\Vert^{2}+\xi^{2}_{k}\Vert p_{k}-p_{k-1} \Vert^{2}+2\xi_{k}\langle p_{k}-p_{\ast}, p_{k}-p_{k-1}\rangle. \end{eqnarray} (3.2)

    The above estimate (3.2) implies that \Gamma \subset C_{k+1} for all k \geq 1 . It is now clear from these facts that the Algorithm 1 is well-defined.

    Step 2. The limit \lim_{k\rightarrow \infty }\Vert p_{k}-p_{1}\Vert exists.

    From p_{k+1} = \Pi^{\mathcal{H}}_{C_{k+1}}p_{1}, we have \langle p_{k+1}-p_{1}, p_{k+1}-\nu \rangle \leq 0 for each \nu \in C_{k+1} . In particular, we have \langle p_{k+1}-p_{1}, p_{k+1}-p_{\ast} \rangle \leq 0 for each p_{\ast}\in \Gamma , so that \Vert p_{k+1}-p_{1}\Vert \leq \Vert p_{\ast}-p_{1}\Vert . This proves that the sequence (\Vert p_{k}-p_{1}\Vert) is bounded. Moreover, from p_{k} = \Pi^{\mathcal{H}}_{C_{k}}p_{1} and p_{k+1} = \Pi^{\mathcal{H}}_{C_{k+1}}p_{1} \in C_{k+1} \subset C_{k}, we have that

    \begin{equation*} \Vert p_{k}-p_{1}\Vert \leq \Vert p_{k+1}-p_{1}\Vert . \end{equation*}

    This shows that (\Vert p_{k}-p_{1}\Vert) is nondecreasing and, being bounded, hence

    \begin{equation} \lim\limits_{k\rightarrow \infty }\Vert p_{k}-p_{1}\Vert \; \; \; {\rm{exists}}. \end{equation} (3.3)

    Step 3. Every weak cluster point \tilde{p_{\ast}} of (p_{k}) belongs to \Gamma.

    Compute

    \begin{eqnarray*} \left\Vert p_{k+1}-p_{k}\right\Vert ^{2} & = &\left\Vert p_{k+1}-p_{1}+p_{1}-p_{k}\right\Vert ^{2} \\ & = &\left\Vert p_{k+1}-p_{1}\right\Vert ^{2}+\left\Vert p_{k}-p_{1}\right\Vert ^{2}-2\langle p_{k}-p_{1}, p_{k+1}-p_{1}\rangle \\ & = &\left\Vert p_{k+1}-p_{1}\right\Vert ^{2}+\left\Vert p_{k}-p_{1}\right\Vert ^{2}-2\langle p_{k}-p_{1}, p_{k+1}-p_{k}+p_{k}-p_{1}\rangle \\ & = &\left\Vert p_{k+1}-p_{1}\right\Vert ^{2}-\left\Vert p_{k}-p_{1}\right\Vert ^{2}-2\langle p_{k}-p_{1}, p_{k+1}-p_{k}\rangle \\ &\leq &\left\Vert p_{k+1}-p_{1}\right\Vert ^{2}-\left\Vert p_{k}-p_{1}\right\Vert ^{2}. \end{eqnarray*}

    Utilizing (3.3), and noting that the last inequality holds since \langle p_{k}-p_{1}, p_{k+1}-p_{k}\rangle \geq 0 (because p_{k+1}\in C_{k+1}\subset C_{k} and p_{k} = \Pi^{\mathcal{H}}_{C_{k}}p_{1} ), the above estimate implies that

    \begin{equation} \lim\limits_{k\rightarrow \infty }\left\Vert p_{k+1}-p_{k}\right\Vert = 0. \end{equation} (3.4)

    Recalling the definition of (e_{k}) and the condition ( {\rm{C1}} ), we have

    \begin{equation} \lim\limits_{k\rightarrow \infty }\Vert e_{k}-p_{k}\Vert = \lim\limits_{k\rightarrow \infty }\xi _{k}\Vert p_{k}-p_{k-1}\Vert = 0. \end{equation} (3.5)

    Recalling (3.4) and (3.5), the following relation

    \begin{equation*} \Vert e_{k}-p_{k+1}\Vert\leq\Vert e_{k}-p_{k}\Vert+\Vert p_{k}-p_{k+1}\Vert, \end{equation*}

    implies that

    \begin{equation} \lim\limits_{k \rightarrow \infty}\Vert e_{k}-p_{k+1}\Vert = 0. \end{equation} (3.6)

    Note that p_{k+1}\in C_{k+1} ; therefore the following relation

    \begin{equation*} \Vert {w}_{k}-p_{k+1}\Vert^{2} \leq \Vert p_{k}-p_{k+1}\Vert^{2}+\xi^{2}_{k}\Vert p_{k}-p_{k-1}\Vert^{2}+2\xi_{k}\langle p_{k}-p_{k+1}, p_{k}-p_{k-1}\rangle , \end{equation*}

    implies, on employing (3.4) and the condition (C1), that

    \begin{equation} \lim\limits_{k\rightarrow \infty }\Vert {w}_{k}-p_{k+1}\Vert = 0. \end{equation} (3.7)

    Again, recalling (3.4) and (3.7), the following relation

    \begin{equation*} \Vert {w}_{k}-p_{k}\Vert \leq \Vert {w}_{k}-p_{k+1}\Vert+\Vert p_{k+1}-p_{k}\Vert \end{equation*}

    implies that

    \begin{equation} \lim\limits_{k \rightarrow \infty}\Vert {w}_{k}-p_{k}\Vert = 0. \end{equation} (3.8)

    In view of the condition (C2), and recalling Lemma 3.2 together with (3.1) and (3.2), observe the following variant of (3.2):

    \begin{align*} &(1-2\mu d_{1})\Vert u_{i_{k}, k}-e_{k}\Vert^{2}+(1-2\mu d_{2})\Vert u_{i_{k}, k}-v_{i_{k}, k}\Vert^{2} \notag \\ &\quad \leq (\Vert p_{k}-p_{\ast}\Vert+\Vert {w}_{k}-p_{\ast}\Vert)\Vert p_{k}-{w}_{k}\Vert+\xi^{2}_{k}\|p_{k}-p_{k-1}\|^{2}+2\xi_{k}\Vert p_{k}-p_{\ast}\Vert \Vert p_{k}-p_{k-1} \Vert. \end{align*}

    Recalling (3.8) and condition (C1), we get

    \begin{equation} \lim\limits_{k \rightarrow \infty}\big((1-2\mu d_{1})\Vert u_{i_{k}, k}-e_{k}\Vert^{2}+(1-2\mu d_{2})\Vert u_{i_{k}, k}-v_{i_{k}, k}\Vert^{2}\big) = 0. \end{equation} (3.9)

    Since 0 < \mu < \min(\frac{1}{2d_{1}}, \frac{1}{2d_{2}}) , the above estimate (3.9) implies that

    \begin{equation} \lim\limits_{k \rightarrow \infty}\Vert u_{i_{k}, k}-e_{k}\Vert^{2} = \lim\limits_{k \rightarrow \infty}\Vert u_{i_{k}, k}-v_{i_{k}, k}\Vert^{2} = 0. \end{equation} (3.10)

    Reasoning as above, recalling (3.5), (3.8) and (3.10), we have

    \Vert \bar{v}_{k}-e_{k}\Vert \leq\Vert \bar{v}_{k}-u_{i_{k}, k}\Vert+\Vert u_{i_{k}, k}-e_{k}\Vert \rightarrow 0 ;

    \Vert \bar{v}_{k}-p_{k}\Vert \leq\Vert \bar{v}_{k}-e_{k}\Vert+\Vert e_{k}-p_{k}\Vert \rightarrow 0 ;

    \Vert {w}_{k}-e_{k}\Vert \leq\Vert {w}_{k}-p_{k}\Vert+\Vert p_{k}-e_{k}\Vert \rightarrow 0 ;

    \Vert {w}_{k}-\bar{v}_{k}\Vert \leq\Vert {w}_{k}-e_{k}\Vert+\Vert e_{k}-\bar{v}_{k}\Vert \rightarrow 0 .

    In view of the estimate \lim_{k \rightarrow \infty}\Vert {w}_{k}-\bar{v}_{k}\Vert = 0 and the condition (C2), we have

    \begin{equation} \lim\limits_{k \rightarrow \infty}\Vert T_{j}\bar{v}_{k}-\bar{v}_{k}\Vert = 0, \; \; \forall\; \; j \in \{1, 2, \cdots, N\}. \end{equation} (3.11)

    Next, we show that \tilde{p_{\ast}} \in \bigcap^{M}_{i = 1}EP(g_{i}) .

    Observe that

    \begin{equation*} u_{i, k} = \arg\min\{\mu g_{i}(e_{k}, \nu)+\frac{1}{2}\Vert e_{k}-\nu\Vert^{2}:\nu \in K\}. \end{equation*}

    Recalling Lemma 2.6, we get

    \begin{equation*} 0 \in \partial_{2}\{\mu g_{i}(e_{k}, \nu)+\frac{1}{2}\Vert e_{k}-\nu\Vert^{2}\}(u_{i, k})+N_{K}(u_{i, k}). \end{equation*}

    This implies the existence of \tilde{x} \in \partial_{2} g_{i}(e_{k}, u_{i, k}) and \tilde{x_{*}} \in N_{K}(u_{i, k}) such that

    \begin{equation} \mu \tilde{x}+e_{k}-u_{i, k}+\tilde{x_{*}} = 0. \end{equation} (3.12)

    Since \tilde{x_{*}} \in N_{K}(u_{i, k}) , we have \langle \tilde{x_{*}}, \nu-u_{i, k} \rangle \leq 0 for all \nu \in K . Therefore, recalling (3.12), we have

    \begin{equation} \mu \langle \tilde{x}, \nu-u_{i, k}\rangle \geq \langle u_{i, k}-e_{k}, \nu-u_{i, k} \rangle, \; \; \forall\; \; \nu \in K. \end{equation} (3.13)

    Since \tilde{x} \in \partial_{2}g_{i}(e_{k}, u_{i, k}) , we have

    \begin{equation} g_{i}(e_{k}, \nu)-g_{i}(e_{k}, u_{i, k})\geq \langle \tilde{x}, \nu-u_{i, k}\rangle, \; \; \forall\; \; \nu \in K. \end{equation} (3.14)

    Therefore recalling (3.13) and (3.14), we obtain

    \begin{equation} \mu(g_{i}(e_{k}, \nu)-g_{i}(e_{k}, u_{i, k}))\geq \langle u_{i, k}-e_{k}, \nu-u_{i, k}\rangle, \; \; \forall\; \; \nu \in K. \end{equation} (3.15)

    Since (p_{k}) is bounded, there exists a subsequence (p_{k_{t}}) of (p_{k}) such that p_{k_{t}} \rightharpoonup \tilde{p_{\ast}} \in \mathcal{H} as t \rightarrow \infty . In view of the estimates above, this also implies that e_{k_{t}} \rightharpoonup \tilde{p_{\ast}} , {w}_{k_{t}} \rightharpoonup \tilde{p_{\ast}} and \bar{v}_{k_{t}} \rightharpoonup\tilde{p_{\ast}} as t \rightarrow \infty . Since e_{k_{t}} \rightharpoonup \tilde{p_{\ast}} and \Vert e_{k}-u_{i, k}\Vert \rightarrow 0 as k \rightarrow \infty , we also have u_{i, k_{t}} \rightharpoonup \tilde{p_{\ast}} . Recalling the assumption (A3) and (3.15), we deduce that g_{i}(\tilde{p_{\ast}}, \nu) \geq 0 for all \nu \in K and i \in \{1, 2, \cdots, M\} . Therefore, \tilde{p_{\ast}} \in \bigcap^{M}_{i = 1}EP(g_{i}) . Moreover, recalling that \bar{v}_{k_{t}} \rightharpoonup \tilde{p_{\ast}} as t \rightarrow \infty and (3.11), we have \tilde{p_{\ast}} \in \bigcap^{N}_{j = 1}Fix(T_{j}) . Hence \tilde{p_{\ast}} \in \Gamma .

    Step 4. p_{k}\rightarrow p_{\ast} = \Pi^{\mathcal{H}}_{\Gamma }p_{1}.

    Let p_{\ast} = \Pi^{\mathcal{H}}_{\Gamma}p_{1} . Since p_{k+1} = \Pi^{\mathcal{H}}_{C_{k+1}}p_{1} and p_{\ast}\in \Gamma \subset C_{k+1} , we have

    \begin{equation*} \left\Vert p_{k+1}-p_{1}\right\Vert \leq \left\Vert p_{\ast}-p_{1}\right\Vert . \end{equation*}

    By recalling the weak lower semicontinuity of the norm, we have

    \begin{eqnarray*} \Vert p_{1}-p_{\ast}\Vert\leq\Vert p_{1}-\tilde{p_{\ast}}\Vert&\leq&\liminf\limits_{t \rightarrow \infty}\Vert p_{1}-p_{k_{t}}\Vert\leq\limsup\limits_{t \rightarrow \infty}\Vert p_{1}-p_{k_{t}}\Vert\leq\Vert p_{1}-p_{\ast}\Vert. \end{eqnarray*}

    Recalling the uniqueness of the metric projection operator yields that \tilde{p_{\ast}} = p_{\ast} = \Pi^{\mathcal{H}}_{\Gamma }p_{1} . Also \lim_{t \rightarrow \infty}\Vert p_{k_{t}}-p_{1}\Vert = \Vert p_{\ast}-p_{1}\Vert = \Vert \tilde{p_{\ast}}-p_{1}\Vert . Moreover, recalling the Kadec-Klee property of \mathcal{H} together with the fact that p_{k_{t}}-p_{1}\rightharpoonup \tilde{p_{\ast}}-p_{1} , we have p_{k_{t}}-p_{1}\rightarrow \tilde{p_{\ast}}-p_{1} and hence p_{k_{t}}\rightarrow \tilde{p_{\ast}} = p_{\ast} . Since the weakly convergent subsequence (p_{k_{t}}) was arbitrary and the limit \lim_{k\rightarrow \infty }\Vert p_{k}-p_{1}\Vert exists by (3.3), the whole sequence (p_{k}) converges strongly to p_{\ast} = \Pi^{\mathcal{H}}_{\Gamma }p_{1} . This completes the proof.

    Corollary 3.3. Let K \subseteq \mathcal{H} be a nonempty closed and convex subset of a real Hilbert space \mathcal{H} . For all i \in \{1, 2, \cdots, M\} , let g_{i}:K\times K\rightarrow \mathbb{R}\cup \{+\infty \} be a finite family of bifunctions satisfying Assumption 2.1. Assume that \Gamma: = \bigcap^{M}_{i = 1}EP(g_{i}) \neq \emptyset and let the sequence (p_{k}) be generated by

    \begin{eqnarray} \left\{ \begin{array}{ll} & { e_{k} = p_{k}+\xi_{k}(p_{k}-p_{k-1}) ;} \\ & { u_{i, k} = \arg\min\{\mu g_{i}(e_{k}, \nu)+\frac{1}{2}\Vert e_{k}-\nu\Vert^{2}:\nu \in K\} }, \; \; i = 1, 2, \cdots, M; \\ & { v_{i, k} = \arg\min\{\mu g_{i}(u_{i, k}, \nu)+\frac{1}{2}\Vert e_{k}-\nu\Vert^{2}:\nu \in K\} }, \; \; i = 1, 2, \cdots, M; \\ & { i_{k} = \arg\max\{\Vert v_{i, k}-p_{k}\Vert: i = 1, 2, \cdots, M\}, \bar{v}_{k} = v_{i_{k}, k} ;}\\ & { C_{k+1} = \{z \in C_{k}: \Vert \bar{v}_{k}-z \Vert^{2} \leq \Vert p_{k}-z \Vert^{2}+\xi^{2}_{k}\Vert p_{k}-p_{k-1} \Vert^{2}+2\xi_{k}\langle p_{k}-z, p_{k}-p_{k-1}\rangle\} ;} \\ & { p_{k+1} = \Pi^{\mathcal{H}}_{C_{k+1}}p_{1}, \forall\; k \geq 1 }. \end{array} \right. \end{eqnarray} (3.16)

    If the condition (C1) holds, then the sequence (p_{k}) generated by (3.16) converges strongly to a point in \Gamma .

    We now propose another variant of the hybrid iterative algorithm embedded with the Halpern iterative algorithm [20].

    Remark 3.4. Note, for the Algorithm 2, that the claim that p_{k} is a common solution of the EP and the FPP provided that p_{k+1} = p_{k} is, in general, not true. Therefore, a stopping criterion is intrinsically implemented: the iteration is terminated for k > k_{max} for some chosen, sufficiently large number k_{max} .

    Algorithm 2 Parallel Hybrid Inertial Halpern-Extragradient Algorithm (Alg.2)
    Initialization: Choose arbitrarily q, p_{0}, p_{1} \in \mathcal{H} , K\subseteq \mathcal{H} and C_{1}=\mathcal{H}. Set k\geq 1 , \{\alpha_{1}, \cdots, \alpha_{N}\}, \beta_{k} \subset (0, 1) such that \sum_{j=1}^{N}\alpha_{j}=1 , 0 < \mu < min(\frac{1}{2d_{1}}, \frac{1}{2d_{2}}) , \xi_{k} \subset [0, 1) and \gamma_{k} \in (0, \infty) .
    Iterative Steps: Given p_{k} \in \mathcal{H} , calculate e_{k} , \bar{v}_{k} and {w}_{k} as follows:
      Step 1. Compute
         \begin{eqnarray*} \left\{ \begin{array}{ll} & e_{k} = p_{k}+\xi_{k}(p_{k}-p_{k-1}); \\ & u_{i, k} = \arg\min\{\mu g_{i}(e_{k}, \nu)+\frac{1}{2}\Vert e_{k}-\nu\Vert^{2}:\nu \in K\}, i = 1, 2, \cdots, M; \\ & v_{i, k} = \arg\min\{\mu g_{i}(u_{i, k}, \nu)+\frac{1}{2}\Vert e_{k}-\nu\Vert^{2}:\nu \in K\}, i = 1, 2, \cdots, M; \\ & i_{k} = \arg\max\{\Vert v_{i, k}-p_{k}\Vert: i = 1, 2, \cdots, M\}, \bar{v}_{k} = v_{i_{k}, k}; \\ & w_{k} = \sum_{j = 1}^{N}\alpha_{j}((1-\gamma_{k})Id+\gamma_{k}T_{j})\bar{v}_{k}; \\ & t_{l, k} = \beta_{k}q+(1-\beta_{k}){w}_{k}; \\ & l_{k} = \arg\max\{\Vert t_{j, k}-p_{k}\Vert: j = 1, 2, \cdots, P\}, \bar{t}_{k} = t_{l_{k}, k}. \end{array} \right. \end{eqnarray*}
        If \bar{t}_{k}={w}_{k}=\bar{v}_{k}=e_{k}=p_{k} then terminate and p_{k} solves the problem 2.1. Else
      Step 2. Compute
         \begin{eqnarray*} C_{k+1} & = & \{z \in C_{k}: \Vert \bar{t}_{k}-z \Vert^{2} \leq \beta_{k}\Vert q-z\Vert^{2} +(1-\beta_{k})(\Vert p_{k}-z\Vert^{2}+\xi^{2}_{k}\Vert p_{k}-p_{k-1} \Vert^{2} \\ & & \quad +2\xi_{k}\langle p_{k}-z, p_{k}-p_{k-1}\rangle)\}; \\ p_{k+1} & = & \Pi^{\mathcal{H}}_{C_{k+1}}\ \ p_{1}, \; \forall\; k \geq 1. \end{eqnarray*}
      Set k := k+1 and go back to Step 1.


    Theorem 3.5. Let \Gamma \neq \emptyset and the following conditions:

    {\rm(C1)} \sum^{\infty}_{k = 1}\xi_{k}\|p_{k}-p_{k-1}\| < \infty ;

    {\rm(C2)} 0 < a^{\ast} \leq \gamma_{k} \leq \min\{1-\eta_{1}, \cdots, 1-\eta_{N}\} and \lim_{k \rightarrow \infty}\beta_{k} = 0 ,

    hold. Then the Algorithm 2 solves the problem 2.1.

    Proof. Observe that the set C_{k+1} can be expressed in the following form:

    \begin{eqnarray*} C_{k+1} = \{z \in C_{k}:\Vert \bar{t}_{k}-z\Vert^{2}&\leq& \beta_{k}\Vert q-z\Vert^{2}+(1-\beta_{k})(\Vert p_{k}-z\Vert^{2}+\xi^{2}_{k}\Vert p_{k}-p_{k-1} \Vert^{2}\\ &&+2\xi_{k}\langle p_{k}-z, p_{k}-p_{k-1}\rangle)\}. \end{eqnarray*}

    Recalling the proof of Theorem 3.1, we deduce that the sets \Gamma and C_{k+1} are closed and convex satisfying \Gamma \subset C_{k+1} for all k \geq 1 . Further, (p_{k}) is bounded and

    \begin{equation} \lim\limits_{k \rightarrow \infty} \Vert p_{k+1} - p_{k}\Vert = 0. \end{equation} (3.17)

    Since p_{k+1} = \Pi^{\mathcal{H}}_{C_{k+1}}p_{1} \in C_{k+1} , we have

    \begin{eqnarray*} \Vert \bar{t}_{k}-p_{k+1}\Vert^{2}&\leq& \beta_{k}\Vert q-p_{k+1}\Vert^{2}+(1-\beta_{k})(\Vert p_{k}-p_{k+1}\Vert^{2} +\xi^{2}_{k}\Vert p_{k}-p_{k-1} \Vert^{2}\\ &&+2\xi_{k}\langle p_{k}-p_{k+1}, p_{k}-p_{k-1}\rangle). \end{eqnarray*}

    Recalling the estimate (3.17) and the conditions (C1) and (C2), we obtain

    \begin{equation*} \lim\limits_{k \rightarrow \infty}\Vert \bar{t}_{k}-p_{k+1}\Vert = 0. \end{equation*}

    Reasoning as above, we get

    \begin{equation*} \lim\limits_{k \rightarrow \infty} \Vert \bar{t}_{k}-p_{k}\Vert = 0. \end{equation*}

    The rest of the proof of Theorem 3.5 follows from the proof of Theorem 3.1 and is therefore omitted.

    The following remark elaborates on how to realize the condition (C1) in a computer-assisted implementation of the iterative algorithms.

    Remark 3.6. We remark here that the condition {\rm(C1)} can easily be realized in a computer-assisted iterative algorithm, since the value of \|p_{k}-p_{k-1}\| is known before choosing \xi_{k} such that 0 \leq \xi_{k} \leq \widehat{\xi_{k}} with

    \begin{equation*} \widehat{\xi_{k}} = \left\{ \begin{array}{ll} &{ \min\{\frac{\sigma_{k}}{\|p_{k}-p_{k-1}\|}, \xi\}\; \; \; if\; \; p_{k} \neq p_{k-1} ;} \\ & { \xi\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; otherwise.} \end{array} \right. \end{equation*}

    Here (\sigma_{k}) denotes a sequence of positive real numbers such that \sum^{\infty}_{k = 1}\sigma_{k} < \infty and \xi \in [0, 1) .
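    A minimal Python sketch of this rule (the function name choose_xi, the default value \xi = 0.5 and the choice \sigma_{k} = 1/k^{2} in the usage line are illustrative assumptions, not part of the text):

```python
def choose_xi(p_k, p_km1, sigma_k, xi=0.5, norm=abs):
    # Remark 3.6: pick xi_k in [0, xi_hat_k]; here we simply take xi_k = xi_hat_k.
    d = norm(p_k - p_km1)          # ||p_k - p_{k-1}||; abs suffices when H = R
    return xi if d == 0 else min(sigma_k / d, xi)

# usage at the k-th iteration, e.g. with sigma_k = 1/k**2:
# xi_k = choose_xi(p_k, p_km1, sigma_k=1.0 / k**2)
```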

    As a direct application of Theorem 3.1, we have the following variant of the problem 2.1, namely the generalized split variational inequality problem associated with a finite family of single-valued monotone and hemicontinuous operators A_{i}: K \rightarrow \mathcal{H} , i \in \{1, 2, \cdots, M\} , defined on a nonempty closed convex subset K of a real Hilbert space \mathcal{H} . The set VI(K, A) represents the set of all solutions \mu \in K of the variational inequality problem \langle A\mu, \nu-\mu \rangle \geq 0\; \; \forall \; \; \nu\; \in \; K .

    Theorem 3.7. Assume that \Gamma = \big(\bigcap^{M}_{i = 1}VI(K, A_{i})\big)\cap \big(\bigcap^{N}_{j = 1}Fix(T_{j})\big) \neq \emptyset and the conditions (C1)–(C2) hold. Then the sequence (p_{k})

    \begin{eqnarray} \left\{ \begin{array}{ll} & { e_{k} = p_{k}+\xi_{k}(p_{k}-p_{k-1}) ;} \\ & { u_{i, k} = \Pi_{K}(e_{k}-\mu A_{i}(e_{k})) }, \; \; i = 1, 2, \cdots, M; \\ & { v_{i, k} = \Pi_{K}(e_{k}-\mu A_{i}(u_{i, k})) }, \; \; i = 1, 2, \cdots, M; \\ & { i_{k} = \arg\max\{\Vert v_{i, k}-p_{k}\Vert: i = 1, 2, \cdots, M\}, \bar{v}_{k} = v_{i_{k}, k} ;}\\ & { w_{k} = \sum\nolimits_{j = 1}^{N}\alpha_{j}((1-\gamma_{k})Id+\gamma_{k}T_{j})\bar{v}_{k} ;} \\ & { C_{k+1} = \{z \in C_{k}: \Vert {w}_{k}-z \Vert^{2} \leq \Vert p_{k}-z\Vert^{2}+\xi^{2}_{k}\Vert p_{k}-p_{k-1} \Vert^{2}+2\xi_{k}\langle p_{k}-z, p_{k}-p_{k-1}\rangle\} }, \\ & { p_{k+1} = \Pi^{\mathcal{H}}_{C_{k+1}}p_{1}, \; \forall\; k \geq 1 }, \end{array} \right. \end{eqnarray} (3.18)

    generated by (3.18) solves the problem 2.1.

    Proof. Observe that, if we set g_{i}(\bar{\mu}, \bar{\nu}) = \langle A_{i}(\bar{\mu}), \bar{\nu}-\bar{\mu} \rangle for all \bar{\mu}, \bar{\nu} \in K , then each A_{i} being L -Lipschitz continuous implies that g_{i} is Lipschitz-type continuous with d_{1} = d_{2} = \frac{L}{2} . Moreover, the pseudo-monotonicity of A_{i} ensures the pseudo-monotonicity of g_{i} . Recalling the assumptions (A3)–(A4) and the Algorithm 1, note that

    \begin{eqnarray*} u_{i, k}& = &\arg\min\{ \mu \langle A_{i}(e_{k}), \nu - e_{k}\rangle+\frac{1}{2}\Vert e_{k}-\nu\Vert^{2}: \nu \in K\}; \notag \\ v_{i, k}& = &\arg\min\{ \mu \langle A_{i}(u_{i, k}), \nu - u_{i, k}\rangle+\frac{1}{2}\Vert e_{k}-\nu\Vert^{2}: \nu \in K\}, \end{eqnarray*}

    can be transformed into

    \begin{eqnarray*} u_{i, k}& = &\arg\min\{\frac{1}{2}\Vert \nu-(e_{k}-\mu A_{i}(e_{k}))\Vert^{2}: \nu \in K\} = \Pi_{K}(e_{k}-\mu A_{i}(e_{k})); \notag \\ v_{i, k}& = &\arg\min\{\frac{1}{2}\Vert \nu-(e_{k}-\mu A_{i}(u_{i, k}))\Vert^{2}: \nu \in K\} = \Pi_{K}(e_{k}-\mu A_{i}(u_{i, k})). \end{eqnarray*}

    Hence recalling g_{i}(\bar{\mu}, \bar{\nu}) = \langle A_{i}(\bar{\mu}), \bar{\nu}-\bar{\mu} \rangle for all \bar{\mu}, \bar{\nu} \in K and for all i \in \{1, 2, \cdots, M\} in Theorem 3.1, we have the desired result.
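    The two projection steps in (3.18) are straightforward to implement once \Pi_{K} is available. A minimal Python sketch for a single linear monotone operator A(p) = Bp with B positive semidefinite and the box K = [0, 1]^{n} (the operator, the step size rule and the test point are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
Q = rng.standard_normal((n, n))
B = Q.T @ Q                               # positive semidefinite, so A(p) = B p is monotone
A = lambda p: B @ p
proj_K = lambda p: np.clip(p, 0.0, 1.0)   # metric projection onto the box K = [0, 1]^n

# g(p, q) = <A(p), q - p> is Lipschitz-type continuous with d1 = d2 = L/2, where L is a
# Lipschitz constant of A; the requirement mu < min(1/(2 d1), 1/(2 d2)) becomes mu < 1/L.
L = np.linalg.norm(B, 2)
mu = 0.9 / L

e = rng.standard_normal(n)                # a current inertial point e_k
u = proj_K(e - mu * A(e))                 # u_{i,k} = Pi_K(e_k - mu A_i(e_k))
v = proj_K(e - mu * A(u))                 # v_{i,k} = Pi_K(e_k - mu A_i(u_{i,k}))
print(u, v)
```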

    This section demonstrates the viability of the proposed algorithms via suitable numerical experiments.

    Example 4.1. Let \mathcal{H} = \mathbb{R} be the set of all real numbers with the inner product defined by \langle p, q\rangle = pq, for all p, q \in \mathbb{R} and the induced usual norm |\cdot| . For each i \in \{1, 2, \cdots, M\} , let the family of pseudomonotone bifunctions g_{i}: K\times K \rightarrow \mathbb{R} on K = [0, 1] \subset \mathcal{H} be defined by g_{i}(p, q) = S_{i}(p)(q-p) , where

    \begin{equation*} S_{i}(p) = \left\{ \begin{array}{ll} & { 0 , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; 0 \leq p \leq \lambda_{i} ;} \\ & { \sin(p-\lambda_{i})+\exp( p-\lambda_{i})-1 , \; \; \; \; \; \; \; \; \; \; \lambda_{i} \leq p \leq 1 .} \end{array} \right. \end{equation*}

    where 0 < \lambda_{1} < \lambda_{2} < ... < \lambda_{M} < 1 . Note that g_{i}(p, q) \geq 0 for all q \in [0, 1] if and only if 0 \leq p \leq \lambda_{i} , that is, EP(g_{i}) = [0, \lambda_{i}] . Consequently, \bigcap^{M}_{i = 1}EP(g_{i}) = [0, \lambda_{1}] . For each j \in \{1, 2, \cdots, N\} , let the family of operators T_{j}: \mathbb{R} \rightarrow \mathbb{R} be defined by

    \begin{equation*} T_{j}(p) = \left\{ \begin{array}{ll} & { -\frac{3p}{j} , \; \; \; \; \; \; \; \; \; \; p \in [0, \infty) ;} \\ & { \; \; \; \; \; \; \; p , \; \; \; \; \; \; \; \; \; \; p \in (-\infty, 0) .} \end{array} \right. \end{equation*}
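    As a quick verification (not carried out in the text), take \nu = 0 \in Fix(T_{j}) and p \geq 0 ; the second characterization of an \eta -demimetric operator then reads

    \begin{equation*} \Vert T_{j}p-0\Vert^{2} = \frac{9p^{2}}{j^{2}} \leq p^{2}+\eta_{j}\Big(1+\frac{3}{j}\Big)^{2}p^{2} = \Vert p-0\Vert^{2}+\eta_{j}\Vert p-T_{j}p\Vert^{2}, \end{equation*}

    which holds whenever \eta_{j} \geq \frac{3-j}{3+j} , while for p < 0 the inequality is trivial since T_{j}p = p . In particular, one may take \eta_{j} = \max\{0, \frac{3-j}{3+j}\} \in [0, 1) , so that the bound in the condition (C2) becomes \gamma_{k} \leq \min\{1-\eta_{1}, \cdots, 1-\eta_{N}\} = \frac{1}{2} .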

    Thus, (T_{j}) defines a finite family of \eta -demimetric operators with \bigcap^{N}_{j = 1}Fix(T_{j}) = \{0\} . Hence \Gamma = (\bigcap^{M}_{i = 1}EP(g_{i})) \cap (\bigcap^{N}_{j = 1}Fix(T_{j})) = \{0\} . In order to compute the numerical values of the Algorithm 1, we choose \xi = 0.5 , \alpha_{k} = \frac{1}{100k+1} , \mu = \frac{1}{7} , \lambda_{i} = \frac{i}{(M+1)} , M = 2 \times 10^{5} and N = 3 \times 10^{5} . Following Remark 3.6 with \sigma_{k} = \frac{1}{k^{2}} , the inertial parameter is chosen as

    \begin{equation*} \xi_{k} = \left\{ \begin{array}{ll} & { \min\{\frac{1}{k^{2}\|p_{k}-p_{k-1}\|}, 0.5\}\; \; \; if \; \; p_{k}\neq p_{k-1} ;} \\ & {0.5\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; otherwise}. \end{array} \right. \end{equation*}

    Observe that the expression

    \begin{equation*} u_{i, k} = \arg\min\{\mu S_{i}(e_{k})(\nu-e_{k})+\frac{1}{2}(\nu-e_{k})^{2}: \nu \in [0, 1]\}, \end{equation*}

    in the Algorithm 1 is equivalent to the relation u_{i, k} = e_{k}-\mu S_{i}(e_{k}) for all i \in \{1, 2, \cdots, M\} , and similarly v_{i, k} = e_{k}-\mu S_{i}(u_{i, k}) for all i \in \{1, 2, \cdots, M\} . Hence, we can compute the intermediate approximation \bar{v}_{k} , which is the farthest from p_{k} among the v_{i, k} , i \in \{1, 2, \cdots, M\} . Generally, if E_{k} = \|p_{k}-p_{k-1}\| = 0 at the k -th step, then p_{k}\in \Gamma , i.e., p_{k} is the required solution of the problem. The termination criterion is set as E_{k} < 10^{-6} . A minimal implementation sketch is given below; the values of the Algorithm 1 and its variant are then listed in Table 1:
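    A minimal Python sketch of the Algorithm 1 in the setting of Example 4.1 is given below. Since \mathcal{H} = \mathbb{R} , each set C_{k} can be stored as an interval and \Pi^{\mathcal{H}}_{C_{k+1}}p_{1} reduces to a clipping operation, using the fact that the defining inequality of C_{k+1} is equivalent to \vert {w}_{k}-z \vert \leq \vert e_{k}-z \vert . The small family sizes, \gamma_{k} = 0.4 , \sigma_{k} = 1/k^{2} , the projection of the prox steps onto K = [0, 1] and the iteration cap are illustrative assumptions rather than the exact experimental setup of the text.

```python
import numpy as np

M, N = 5, 5                                  # small families for illustration
lam = np.array([(i + 1) / (M + 1) for i in range(M)])    # lambda_i = i/(M+1)
mu, xi_bar, gamma = 1.0 / 7, 0.5, 0.4        # mu = 1/7 and xi = 0.5 as in the text

def S(i, p):
    # data of the bifunction g_i(p, q) = S_i(p) (q - p)
    return 0.0 if p <= lam[i] else np.sin(p - lam[i]) + np.exp(p - lam[i]) - 1.0

def T(j, p):
    # eta-demimetric operators of Example 4.1 (j = 1, ..., N)
    return -3.0 * p / j if p >= 0 else p

clip01 = lambda x: min(max(x, 0.0), 1.0)     # projection onto K = [0, 1]

p1 = 2.0                                     # anchor point p_1 (Choice 1)
p_prev, p = 5.0, p1                          # p_0 = 5, p_1 = 2
lo, hi = -np.inf, np.inf                     # C_1 = H, stored as an interval
for k in range(1, 201):
    diff = p - p_prev
    xi_k = xi_bar if diff == 0 else min((1.0 / k**2) / abs(diff), xi_bar)  # Remark 3.6
    e = p + xi_k * diff
    u = [clip01(e - mu * S(i, e)) for i in range(M)]       # prox steps in closed form
    v = [clip01(e - mu * S(i, u[i])) for i in range(M)]
    v_bar = max(v, key=lambda vi: abs(vi - p))             # farthest v_{i,k} from p_k
    w = sum(((1 - gamma) * v_bar + gamma * T(j, v_bar)) / N for j in range(1, N + 1))
    mid = 0.5 * (w + e)                      # C_{k+1}: points z with |w - z| <= |e - z|
    if w < e:
        hi = min(hi, mid)
    elif w > e:
        lo = max(lo, mid)
    p_prev, p = p, min(max(p1, lo), hi)      # p_{k+1} = projection of p_1 onto C_{k+1}
    if abs(p - p_prev) < 1e-6:               # stopping rule E_k < 10^{-6}
        break

print(k, p)                                  # p_k approaches 0, the unique point of Gamma
```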

    Table 1.  Numerical values of Algorithm 1.
    No. | No. of Iter. (Alg.1, \xi_{k}=0) | No. of Iter. (Alg.1, \xi_{k} \neq 0) | CPU-Time (Sec) (Alg.1, \xi_{k}=0) | CPU-Time (Sec) (Alg.1, \xi_{k} \neq 0)
    Choice 1. p_{0}=(5) , p_{1}=(2) | 87 | 75 | 0.088153 | 0.073646
    Choice 2. p_{0}=(4.3) , p_{1}=(1.7) | 88 | 79 | 0.072250 | 0.068662
    Choice 3. p_{0}=(-7) , p_{1}=(3) | 99 | 92 | 0.062979 | 0.051163


    The values of the non-inertial and non-parallel variant of the Algorithm 1, referred to as Alg. 1^{\ast} , are listed in the following table (see Table 2):

    Table 2.  Numerical values of Algorithm Alg.1 ^{\ast} .
    No. of Choices No. of Iter. CPU-Time (Sec)
    Choice 1. p_{0}=(5) , p_{1}=(2) 111 0.091439
    Choice 2. p_{0}=(4.3) , p_{1}=(1.7) 106 0.089872
    Choice 3. p_{0}=(-7) , p_{1}=(3) 104 0.081547


    The error plots of E_{k} for the Algorithm 1 and its variants, for each choice in Tables 1 and 2, are illustrated in Figure 1.

    Figure 1.  Comparison between Algorithm 1 and its variants in view of Example 4.1.

    Example 4.2. Let \mathcal{H} = \mathbb{R}^{n} with the induced norm \Vert p \Vert = \sqrt{\sum^{n}_{i = 1}\vert p_{i} \vert^{2}} and the inner product \langle p, q \rangle = \sum^{n}_{i = 1}p_{i}q_{i} , for all p = (p_{1}, p_{2}, \cdots, p_{n}) \in \mathbb{R}^{n} and q = (q_{1}, q_{2}, \cdots, q_{n}) \in \mathbb{R}^{n} . The set K is given by K = \{p \in \mathbb{R}^{n}_{+}:\vert p_{k}\vert \leq 1, \; k \in \{1, 2, \cdots, n\}\} . Consider the following problem:

    {\rm{find}}\; p_{\ast} \in \Gamma : = \bigcap\limits_{i = 1}^{M} EP(g_{i}) \cap \bigcap\limits_{j = 1}^{N} Fix(T_{j}),

    where g_{i}:K \times K \rightarrow \mathbb{R} is defined by:

    \begin{equation*} g_{i}(p, q) = \sum\limits_{k = 1}^{n} S_{i, k}(q^{2}_{k}-p^{2}_{k}), \; \forall \; \; i \in \{1, 2, \cdots, M\}, \end{equation*}

    where S_{i, k} \in (0, 1) is randomly generated for all i \in \{1, 2, \cdots, M\} and k \in \{1, 2, \cdots, n\} . For each j \in \{1, 2, \cdots, N\} , let the family of operators T_{j}: \mathcal{H} \rightarrow \mathcal{H} be defined by

    \begin{equation*} T_{j}(p) = \left\{ \begin{array}{ll} & { -\frac{4p}{j} , \; \; \; \; \; \; \; \; \; \; p \in [0, \infty) ;} \\ & { \; \; \; \; \; \; \; p , \; \; \; \; \; \; \; \; \; \; p \in (-\infty, 0) .} \end{array} \right. \end{equation*}

    for all p \in \mathcal{H} . It is easy to observe that \Gamma = \bigcap^{M}_{i = 1}EP(g_{i}) \cap \bigcap^{N}_{j = 1}Fix(T_{j}) = \{0\} . A closed form for the minimization subproblems of the Algorithm 1 in this setting is sketched below; the values of the Algorithm 1 and its non-inertial variant are then listed in Table 3:
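    For this choice of g_{i} , the minimization subproblems of the Algorithm 1 separate over the coordinates and admit a closed form, since K = [0, 1]^{n} here. A small Python sketch of this observation (the helper name prox_step and the sample data are illustrative assumptions):

```python
import numpy as np

def prox_step(e, S_i, mu):
    # argmin_{v in [0,1]^n} { mu * sum_k S_{i,k} (v_k^2 - e_k^2) + 0.5 * ||e - v||^2 }:
    # the objective separates over coordinates; setting the derivative
    # (1 + 2 mu S_{i,k}) v_k - e_k to zero and clipping to [0, 1] gives the minimizer.
    return np.clip(e / (1.0 + 2.0 * mu * S_i), 0.0, 1.0)

# usage with randomly generated S_{i,k} in (0, 1), as in the example
rng = np.random.default_rng(1)
n, mu = 10, 1.0 / 7
S_i = rng.uniform(0.0, 1.0, size=n)
e_k = rng.standard_normal(n)
u_ik = prox_step(e_k, S_i, mu)   # u_{i,k}; v_{i,k} is computed the same way, since
                                 # g_i(p, .) depends on p only through an additive constant
```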

    Table 3.  Numerical values of Algorithm 1.
    No. | No. of Iter. (Alg.1, \xi_{k}=0) | No. of Iter. (Alg.1, \xi_{k} \neq 0) | CPU-Time (Sec) (Alg.1, \xi_{k}=0) | CPU-Time (Sec) (Alg.1, \xi_{k} \neq 0)
    Choice 1. p_{0}=(5) , p_{1}=(2) , n=5 | 46 | 35 | 0.061975 | 0.054920
    Choice 2. p_{0}=(1) , p_{1}=(1.5) , n=10 | 38 | 27 | 0.056624 | 0.040587
    Choice 3. p_{0}=(-8) , p_{1}=(3) , n=30 | 50 | 37 | 0.055844 | 0.041246


    The values of the non-inertial and non-parallel variant of the Algorithm 1, referred to as Alg. 1^{\ast} , are listed in the following table (see Table 4):

    Table 4.  Numerical values of Algorithm Alg.1 ^{\ast} .
    No. of Choices No. of Iter. CPU-Time (Sec)
    Choice 1. p_{0}=(5) , p_{1}=(2) , n=5 81 0.072992
    Choice 2. p_{0}=(1) , p_{1}=(1.5) , n=10 75 0.065654
    Choice 3. p_{0}=(-8) , p_{1}=(3) , n=30 79 0.068238


    The error plots of E_{k} (with tolerance 10^{-6} ) for the Algorithm 1 and its variants, for each choice in Tables 3 and 4, are illustrated in Figure 2.

    Figure 2.  Comparison between Algorithm 1 and its variants in view of Example 4.2.

    Example 4.3. Let \mathcal{H} = L^{2}([0, 1]) with the norm \Vert p \Vert = (\int^{1}_{0}\vert p(s)\vert^{2} ds)^{\frac{1}{2}} and the inner product \langle p, q \rangle = \int^{1}_{0}p(s)q(s)ds , for all p, q \in L^{2}([0, 1]) and s \in [0, 1] . The feasible set K is given by K = \{p \in L^{2}([0, 1]): \Vert p \Vert \leq 1\} . Consider the following problem:

    {\rm{find}}\; \bar{p} \in \Gamma : = \bigcap\limits_{i = 1}^{M} EP(g_{i}) \cap \bigcap\limits_{j = 1}^{N} Fix(T_{j}),

    where g_{i}(p, q) is defined as \langle S_{i}p, q-p\rangle with the operator S_{i}:L^{2}([0, 1]) \rightarrow L^{2}([0, 1]) given by

    \begin{equation*} S_{i}(p(s)) = \max\Big\{0, \frac{p(s)}{i}\Big\}, \; \forall\; i \in \{1, 2, \cdots, M\}, \; s \in [0, 1]. \end{equation*}

    Each g_{i} is monotone and hence pseudomonotone on K . For each j \in \{1, 2, \cdots, N\} , let the family of operators T_{j}: \mathcal{H} \rightarrow \mathcal{H} be defined by

    \begin{equation*} T_{j}(p) = \Pi_{K}(p) = \left\{ \begin{array}{ll} & { \frac{p}{\Vert p\Vert} , \; \; \; \; \; \; \; \; \; \; \Vert p \Vert > 1 ;} \\ & { \; \; \; \; \; \; \; p , \; \; \; \; \; \; \; \; \; \; \Vert p\Vert \leq 1 .} \end{array} \right. \end{equation*}

    Then (T_{j}) is a finite family of \eta -demimetric operators. It is easy to observe that \Gamma = \bigcap^{M}_{i = 1}EP(g_{i}) \cap \bigcap^{N}_{j = 1}Fix(T_{j}) = \{0\} . Choose M = 50 and N = 100 . A possible discretization of this setting is sketched below; the values of the Algorithm 1 and its non-inertial variant, computed for different choices of p_{0} and p_{1} , are then listed in Table 5:
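    A possible discretization of this L^{2}([0, 1]) setting in Python (the grid size, the simple quadrature rule and the chosen test function are illustrative assumptions):

```python
import numpy as np

s = np.linspace(0.0, 1.0, 1001)              # uniform grid on [0, 1]
h = s[1] - s[0]
l2_norm = lambda f: np.sqrt(h * np.sum(f * f))   # discretized L^2([0,1]) norm

def S(i, f):
    # S_i(p)(s) = max{0, p(s)/i}
    return np.maximum(0.0, f / i)

def T(f):
    # T_j = Pi_K, the metric projection onto the unit ball K of L^2([0,1])
    nf = l2_norm(f)
    return f / nf if nf > 1.0 else f

p0 = np.exp(3 * s) * np.sin(s)               # p_0 of Choice 1
print(l2_norm(p0), l2_norm(T(p0)))           # ||p_0|| > 1 and ||T(p_0)|| = 1 after projection
```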

    Table 5.  Numerical values of Algorithm 1.
    No. | No. of Iter. (Alg.1, \xi_{k}=0) | No. of Iter. (Alg.1, \xi_{k} \neq 0) | CPU-Time (Sec) (Alg.1, \xi_{k}=0) | CPU-Time (Sec) (Alg.1, \xi_{k} \neq 0)
    Choice 1. p_{0}=exp(3s)\times\sin (s) , p_{1}=3s^{2}-s | 10 | 5 | 1.698210 | 0.981216
    Choice 2. p_{0}=\frac{1}{1+s} , p_{1}=\frac{s^{2}}{10} | 14 | 6 | 2.884623 | 1.717623
    Choice 3. p_{0}=\frac{\cos(3s)}{7} , p_{1}=s | 16 | 5 | 2.014687 | 1.354564


    The values of the non-inertial and non-parallel variant of the Algorithm 1, referred to as Alg. 1^{\ast} , have been computed for different choices of p_{0} and p_{1} in the following table (see Table 6):

    Table 6.  Numerical values of Algorithm Alg.1 ^{\ast} .
    No. of Choices No. of Iter. CPU-Time (Sec)
    Choice 1. p_{0}=exp(3s)\times\sin (s) , p_{1}=3s^{2}-s 23 2.65176
    Choice 2. p_{0}=\frac{1}{1+s} , p_{1}=\frac{s^{2}}{10} 27 3.102587
    Choice 3. p_{0}=\frac{\cos(3s)}{7} , p_{1}=s 26 2.903349


    The error plots of E_{k} (with tolerance 10^{-4} ) for the Algorithm 1 and its variants, for each choice in Tables 5 and 6, are illustrated in Figure 3.

    Figure 3.  Comparison between Algorithm 1 and its variants in view of Example 4.3.

    We can see from Tables 1–6 and Figures 1–3 that the Algorithm 1 outperforms its variants with respect to the reduction in the error, the time consumption and the number of iterations required for the convergence towards the common solution.

    In this paper, we have constructed some variants of the classical extragradient algorithm that are embedded with the inertial extrapolation and hybrid projection techniques. We have shown that the algorithm strongly converges towards the common solution of the problem 2.1. A useful instance of the main result, that is, Theorem 3.1, as well as an appropriate example for the viability of the algorithm, have also been incorporated. It is worth mentioning that the problem 2.1 is a natural mathematical model for various real-world problems. As a consequence, our theoretical framework constitutes an important topic of future research.

    The authors declare that they have no competing interests.

    The authors wish to thank the anonymous referees for their comments and suggestions.

    The author Yasir Arfat acknowledges the support via the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut's University of Technology Thonburi, Thailand (Grant No.16/2562).

    The authors Y. Arfat, P. Kumam, W. Kumam and K. Sitthithakerngkiet acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Moreover, this research was funded by King Mongkut's University of Technology North Bangkok, Contract No. KMUTNB-65-KNOW-28.



    [1] P. N. Anh, A hybrid extragradient method for pseudomonotone equilibrium problems and fixed point problems, B. Malays. Math. Sci. Soc., 36 (2013), 107–116.
    [2] Y. Arfat, P. Kumam, P. S. Ngiamsunthorn, M. A. A. Khan, An inertial based forward-backward algorithm for monotone inclusion problems and split mixed equilibrium problems in Hilbert spaces, Adv. Differ. Equ., 2020 (2020), 453. https://doi.org/10.1186/s13662-020-02915-3 doi: 10.1186/s13662-020-02915-3
    [3] Y. Arfat, P. Kumam, P. S. Ngiamsunthorn, M. A. A. Khan, H. Sarwar, H. F. Din, Approximation results for split equilibrium problems and fixed point problems of nonexpansive semigroup in Hilbert spaces, Adv. Differ. Equ., 2020 (2020), 512. https://doi.org/10.1186/s13662-020-02956-8 doi: 10.1186/s13662-020-02956-8
    [4] Y. Arfat, P. Kumam, M. A. A. Khan, P. S. Ngiamsunthorn, A. Kaewkhao, An inertially constructed forward-backward splitting algorithm in Hilbert spaces, Adv. Differ. Equ., 2021 (2021), 124. https://doi.org/10.1186/s13662-021-03277-0 doi: 10.1186/s13662-021-03277-0
    [5] Y. Arfat, P. Kumam, M. A. A. Khan, P. S. Ngiamsunthorn, An accelerated projection based parallel hybrid algorithm for fixed point and split null point problems in Hilbert spaces, Math. Meth. Appl. Sci., 2021, 1–19. https://doi.org/10.1002/mma.7405 doi: 10.1002/mma.7405
    [6] Y. Arfat, P. Kumam, M. A. A. Khan, P. S. Ngiamsunthorn, Parallel shrinking inertial extragradient approximants for pseudomonotone equilibrium, fixed point and generalized split null point problem, Ric. Mat., 2021. https://doi.org/10.1007/s11587-021-00647-4 doi: 10.1007/s11587-021-00647-4
    [7] Y. Arfat, P. Kumam, M. A. A. Khan, P. S. Ngiamsunthorn, Shrinking approximants for fixed point problem and generalized split null point problem in Hilbert spaces, Optim. Lett., 2021. https://doi.org/10.1007/s11590-021-01810-4 doi: 10.1007/s11590-021-01810-4
    [8] Y. Arfat, P. Kumam, M. A. A. Khan, O. S. Iyiola, Multi-inertial parallel hybrid projection algorithm for generalized split null point problems, J. Appl. Math. Comput., 2021. https://doi.org/10.1007/s12190-021-01660-4 doi: 10.1007/s12190-021-01660-4
    [9] Y. Arfat, P. Kumam, M. A. A. Khan, P. S. Ngiamsunthorn, An accelerated Visco-Cesaro means Tseng type splitting method for fixed point and monotone inclusion problems, Carpathian J. Math., 38 (2022), 281–297. https://doi.org/10.37193/CJM.2022.02.02 doi: 10.37193/CJM.2022.02.02
    [10] J. P. Aubin, Optima and equilibria: An introduction to nonlinear analysis, Springer, New York, NY, 1998.
    [11] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, 2 Eds., Springer, Berlin, 2017.
    [12] M. Bianchi, S. Schaible, Generalized monotone bifunctions and equilibrium problems, J. Optim. Theory Appl., 90 (1996), 31–43. https://doi.org/10.1007/BF02192244 doi: 10.1007/BF02192244
    [13] E. Blum, W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Stud., 63 (1994), 123–145.
    [14] L. C. Ceng, A. Petrusel, X. Qin, J. C. Yao, A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems, Fixed Point Theory, 21 (2020), 93–108. https://doi.org/10.24193/fpt-ro.2020.1.07 doi: 10.24193/fpt-ro.2020.1.07
    [15] L. C. Ceng, A. Petrusel, X. Qin, J. C. Yao, Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints, Optimization, 70 (2021), 1337–1358. https://doi.org/10.1080/02331934.2020.1858832 doi: 10.1080/02331934.2020.1858832
    [16] L. C. Ceng, Q. Yuan, Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems, Optimization, 70 (2021), 1337–1358. https://doi.org/10.1186/s13660-019-2229-x doi: 10.1186/s13660-019-2229-x
    [17] P. Cholamjiak, Y. Shehu, Inertial forward-backward splitting method in Banach spaces with application to compressed sensing, Appl. Math., 64 (2019), 409–435. https://doi.org/10.21136/AM.2019.0323-18 doi: 10.21136/AM.2019.0323-18
    [18] P. L. Combettes, S. A. Hirstoaga, Equilibrium programming in Hilbert spaces, J. Nonlinear Convex A., 6 (2005), 117–136.
    [19] P. Daniele, F. Giannessi, A. Maugeri, Equilibrium problems and variational models, Kluwer Academic Publisher, 2003.
    [20] B. Halpern, Fixed points of nonexpanding maps, Bull. Amer. Math. Soc., 73 (1967), 957–961. https://doi.org/10.1090/S0002-9904-1967-11864-0 doi: 10.1090/S0002-9904-1967-11864-0
    [21] D. V. Hieu, L. D. Muu, P. K. Anh, Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings, Numer. Algorithms, 73 (2016), 197–217. https://doi.org/10.1007/s11075-015-0092-5 doi: 10.1007/s11075-015-0092-5
    [22] D. V. Hieu, H. N. Duong, B. H. Thai, Convergence of relaxed inertial methods for equilibrium problems, J. Appl. Numer. Optim., 3 (2021), 215–229. https://doi.org/10.23952/jano.3.2021.1.13 doi: 10.23952/jano.3.2021.1.13
    [23] M. A. A. Khan, Y. Arfat, A. R. Butt, A shrinking projection approach to solve split equilibrium problems and fixed point problems in Hilbert spaces, Sci. Bull. Ser. A, 80 (2018), 33–46.
    [24] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Ekon. Mat. Meto., 12 (1976), 747–756.
    [25] L. Liu, S. Y. Cho, J. C. Yao, Convergence analysis of an inertial Tseng's extragradient algorithm for solving pseudomonotone variational inequalities and applications, J. Nonlinear Var. Anal., 5 (2021), 627–644.
    [26] W. R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc., 4 (1953), 506–510. https://doi.org/10.1090/S0002-9939-1953-0054846-3 doi: 10.1090/S0002-9939-1953-0054846-3
    [27] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. https://doi.org/10.1016/S0377-0427(02)00906-8 doi: 10.1016/S0377-0427(02)00906-8
    [28] F. U. Ogbuisi, O. S. Iyiola, J. M. T. Ngnotchouye, T. M. M. Shumba, On inertial type self-adaptive iterative algorithms for solving pseudomonotone equilibrium problems and fixed point problems, J. Nonlinear Funct. Anal., 2021 (2021). https://doi.org/10.23952/jnfa.2021.4 doi: 10.23952/jnfa.2021.4
    [29] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, Comput. Math. Math. Phys., 4 (1964), 1–17. https://doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [30] T. D. Quoc, L. D. Muu, N. V. Hien, Extragradient algorithms extended to equilibrium problems, Optimization, 57 (2008), 749–776. https://doi.org/10.1007/s10898-011-9693-2 doi: 10.1007/s10898-011-9693-2
    [31] Y. Shehu, P. Cholamjiak, Another look at the split common fixed point problem for demicontractive operators, RACSAM, 110 (2016), 201–218. https://doi.org/10.1007/s13398-015-0231-9 doi: 10.1007/s13398-015-0231-9
    [32] Y. Shehu, P. T. Vuong, P. Cholamjiak, A self-adaptive projection method with an inertial technique for split feasibility problems in Banach spaces with applications to image restoration problems, J. Fixed Point Theory A., 21 (2019), 1–24. https://doi.org/10.1007/s11784-019-0684-0 doi: 10.1007/s11784-019-0684-0
    [33] A. Tada, W. Takahashi, Strong convergence theorem for an equilibrium problem and a nonexpansive mapping, J. Nonlinear Convex A., 2006,609–617. https://doi.org/10.1007/s10957-007-9187-z doi: 10.1007/s10957-007-9187-z
    [34] W. Takahashi, H. K. Xu, J. C. Yao, Iterative methods for generalized split feasibility problems in Hilbert spaces, Set Valued Var. Anal., 23 (2015), 205–221. https://doi.org/10.1007/s11228-014-0285-4 doi: 10.1007/s11228-014-0285-4
    [35] W. Takahashi, The split common fixed point problem and the shrinking projection method in Banach spaces, J. Conv. Anal., 24 (2017), 1015–1028.
    [36] W. Takahashi, C. F. Wen, J. C. Yao, The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space, Fixed Point Theory, 19 (2018), 407–419. https://doi.org/10.24193/fpt-ro.2018.1.32 doi: 10.24193/fpt-ro.2018.1.32
    [37] J. Tiel, Convex analysis: An introductory text, Wiley, Chichester, 1984.
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)