    In this paper, let $ H $ denote a real Hilbert space with inner product $ \langle \cdot, \cdot\rangle $ and norm $ \|\cdot\| $. Let $ M $, $ \mathbb{R} $, and $ \mathbb{N} $ stand for the nonempty closed convex subset of $ H $, set of real numbers and set of positive integers, respectively. Let $ G:H\to H $ be a mapping. The variational inequality problem (VIP) is concerned with the problem of finding a point $ u^\star\in M $ such that

    $ \langle Gu^\star, u-u^\star\rangle\geq 0, \quad \forall u\in M. $ (1.1)

    We denote the solution set of VIP (1.1) by $ VI(M, G) $. The VIP, which Fichera [12] and Stampacchia [38] independently examined, is a crucial tool in both the applied and pure sciences. It has attracted the attention of many authors in recent years due to its wide range of applications to issues arising from partial differential equations, optimal control problems, saddle point problems, minimization problems, economics, engineering, and mathematical programming.

    On the other hand, an element $ u\in M $ is said to be a fixed point of a mapping $ S:M\to M $ if $ Su = u $. The set of all fixed points of $ S $ is denoted by $ F(S) = \{u\in M:Su = u\} $. The fixed point theory of nonexpansive mappings has been applied in several fields such as game theory, differential equations, signal processing, integral equations, convex optimization, and control theory [19]. There are several recent results in the literature on the approximation of fixed points of nonexpansive mappings (see, for example, [8,9,26,27,28,29,34,35,36] and the references therein).

    It is well-known that the VIP (1.1) can be reformulated as a fixed point problem as follows:

    $ u = P_M(I-\eta G)u, $ (1.2)

    where $ P_M:H\to M $ is the metric projection and $ \eta > 0 $. The extragradient method is a prominent method that has been used by many authors over the years to solve VIP. This method was first introduced by Korpelevich [21] in 1976. Given an initial point $ u_0\in M $, the sequence $ \{u_m\} $ generated by the extragradient method is as follows:

    $ \begin{cases} v_m = P_M(u_m-\eta Gu_m), \\ u_{m+1} = P_M(u_m-\eta Gv_m), \end{cases} \quad m\geq 0, $ (1.3)

    where $ \eta\in (0, \frac{1}{L}) $, and $ G $ is an operator that is $ L $-Lipschitz continuous and monotone. For $ VI(M, G)\neq \emptyset $, the author showed that the sequence $ \{u_m\} $ defined by (1.3) converges weakly to an element in $ VI(M, G) $.
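
    To make the classical extragradient iteration (1.3) concrete, the following is a minimal numerical sketch in Python, assuming a box feasible set (so that $ P_M $ is a componentwise clipping), an affine monotone operator $ G(u) = Qu+q $, and the step size $ \eta = 0.9/L $; these choices are illustrative and not part of the method itself.

```python
import numpy as np

def project_box(u, lo=-5.0, hi=5.0):
    # Metric projection onto the box M = [lo, hi]^k (componentwise clipping).
    return np.clip(u, lo, hi)

def extragradient(G, u0, eta, n_iter=1000, tol=1e-10):
    # Korpelevich's extragradient method (1.3):
    #   v_m     = P_M(u_m - eta * G(u_m))
    #   u_{m+1} = P_M(u_m - eta * G(v_m))
    u = u0.copy()
    for _ in range(n_iter):
        v = project_box(u - eta * G(u))
        u_next = project_box(u - eta * G(v))
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# Illustrative monotone operator G(u) = Q u + q with Q positive semidefinite.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
Q = A @ A.T
q = np.zeros(30)
G = lambda u: Q @ u + q
L = np.linalg.norm(Q, 2)              # Lipschitz constant of G
u_approx = extragradient(G, rng.standard_normal(30), eta=0.9 / L)
```

    Note that each pass through the loop evaluates $ P_M $ twice, which is exactly the cost discussed next.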

    The main drawback of the extragradient method is that it requires the computation of two projections onto the feasible set $ M $ per iteration. In fact, if $ M $ has a complex structure, this might affect the computational efficiency of the method. In recent years, several authors have paid a great deal of attention to overcoming this restriction (see, for example, [6,7,11,16,48]). In order to address this drawback of the extragradient method, in 1997, He [16] introduced a method that requires only a single projection per iteration. This method is known as the projection and contraction method, and it is given as follows:

    $ \begin{cases} v_m = P_M(u_m-\eta Gu_m), \\ w_m = (u_m-v_m)-\eta(Gu_m-Gv_m), \\ u_{m+1} = u_m-\sigma\varpi_m w_m, \end{cases} $

    where $ \sigma \in (0, 2) $, $ \eta\in (0, \frac{1}{L}) $ and $ \varpi_m $ is defined as

    $ \varpi_m = \frac{\langle u_m-v_m, w_m\rangle}{\|w_m\|^2}. $ (1.4)

    The author showed that the sequence $ \{u_m\} $ generated by the projection and contraction method converges weakly to a solution of the VIP (1.1). The subgradient extragradient method, which was developed by Censor et al. [6,7,11], is another effective strategy for addressing the limitation of the extragradient method, and it is defined as follows:

    $ \begin{cases} v_m = P_M(u_m-\eta Gu_m), \\ T_m = \{u\in H \mid \langle u_m-\eta Gu_m-v_m, u-v_m\rangle\leq 0\}, \\ u_{m+1} = P_{T_m}(u_m-\eta Gv_m), \end{cases} $ (1.5)

    where $ \eta\in (0, \frac{1}{L}) $, and $ G $ is an $ L $-Lipschitz continuous and monotone operator. The main idea in this method is that the second projection onto $ M $ of the extragradient method is replaced by a projection onto a constructible half-space, which significantly reduces the computational cost. The authors showed that if $ VI(M, G)\neq \emptyset $, the sequence $ \{u_m\} $ defined by (1.5) converges weakly to a point in $ VI(M, G) $.
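
    For comparison, the sketch below (our illustration, not code from [6,7,11]) performs one iteration of the subgradient extragradient update (1.5); the second projection is onto the half-space $ T_m $, for which the metric projection has the simple closed form used in project_halfspace.

```python
import numpy as np

def project_halfspace(x, a, w):
    # Projection onto T = {u : <a, u - w> <= 0}; if x already lies in T it is returned unchanged.
    viol = a @ (x - w)
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

def subgradient_extragradient_step(G, project_M, u, eta):
    # One iteration of (1.5):
    #   v      = P_M(u - eta * G(u))
    #   T      = {x : <u - eta*G(u) - v, x - v> <= 0}
    #   u_next = P_T(u - eta * G(v))
    Gu = G(u)
    v = project_M(u - eta * Gu)
    a = u - eta * Gu - v                  # normal vector of the half-space T
    u_next = project_halfspace(u - eta * G(v), a, v)
    return u_next
```

    Only the first projection involves the (possibly complicated) set $ M $; the second reduces to a single inner product and a vector update.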

    Furthermore, the inertial extrapolation technique is based upon a discrete analogue of a second-order dissipative dynamical system and is known as an acceleration process for iterative methods. It was first developed in [37] to solve smooth convex minimization problems. In recent years, inertial techniques have been widely adopted by many authors to improve the convergence rate of various iterative algorithms for solving several kinds of optimization problems (see, for example, [1,17,30,31,32,41,44,45,46,55]).

    It is worth noting that the study of the problem of approximating a common solution of the fixed point problem (FPP) and the VIP plays a significant role in mathematical models whose constraints can be expressed as an FPP and a VIP. This happens in real-world applications such as image recovery, signal processing, network resource allocation, and composite site reduction (see, for example, [2,14,18,22,24,25,33,51] and the references therein).

    Very recently, Thong and Hieu [43] introduced two modified subgradient extragradient methods with a line search process for solving the VIP with an $ L $-Lipschitz continuous and monotone operator $ G $ and the FPP involving a quasi-nonexpansive mapping $ S $ such that $ I-S $ is demiclosed at zero. Under appropriate assumptions, the authors showed that the sequences generated by their algorithms converge weakly to points in $ F(S)\cap VI(M, G) $.

    We note that Thong and Hieu [43] only proved weak convergence results for their algorithms. According to Bauschke and Combettes [3], for the solution of optimization problems, the strong convergence of iterative methods is more desirable than their weak convergence counterparts. Furthermore, we observe that Thong and Hieu [43] employed an Armijo-type line search rule for the step size in their algorithms, which enables them to operate without requiring prior knowledge of the Lipschitz constant of the operator. However, the use of Armijo-type step sizes may cause the considered methods to compute projections onto the feasible set several times per iteration. To overcome this limitation, Liu and Yang [23] developed an adaptive step size criterion, which only needs some previously given information to complete the step size calculation.

    As far as we know, there is no result in the literature involving the subgradient extragradient method with double inertial extrapolations for finding the common solution of VIP and FPP in real Hilbert spaces. Due to the importance of common solutions of VIP and FPP to some real-world problems, it is natural to ask the following question:

    Is it possible to construct double inertial subgradient extragradient-type algorithms with a new step size for finding a common solution of the VIP and FPP?

    One of the purposes of this article is to give an affirmative answer to the above question. Motivated by the ongoing research in these directions, we propose some modified subgradient extragradient methods with a new step size. The proposed methods are derived from combinations of the original subgradient extragradient method, the viscosity method, and the projection and contraction method. We prove that our new methods converge strongly to common solutions of the VIP involving pseudo-monotone mappings and the FPP involving quasi-nonexpansive mappings that are demiclosed at zero in real Hilbert spaces. The following are further contributions made in this research:

    ● Our algorithms do not need any Armijo-type line search techniques. Rather, they use a new self-adaptive step size technique, which generates a non-monotonic sequence of step sizes. This step size is formulated so that it reduces the dependence of the algorithms on the initial step size. Numerical experiments show that the proposed step size is more efficient and ensures that our methods require less computation time than many methods in the literature that work with an Armijo-type line search technique.

    ● Our step size properly includes those in [23,41,50].

    ● Our algorithms are constructed to approximate a common solution of the VIP involving pseudo-monotone mappings and the FPP involving quasi-nonexpansive mappings. Since the class of pseudo-monotone mappings is more general than the class of monotone mappings, our results improve and generalize several results in the literature on finding common solutions of VIPs involving monotone mappings and FPPs involving quasi-nonexpansive mappings. Hence, our results are improvements of the results in [22,43,47] and several others.

    ● Our algorithms are embedded with double inertial terms to accelerate their convergence speed. Numerical tests showed that the proposed algorithms converge faster than the compared existing methods with single inertial term.

    ● We prove our strong convergence result under mild conditions imposed on the parameters. Our results are improvements on the weak convergence results in [43,47].

    ● To show the computational advantage of the suggested methods over some well-known methods in the literature, several numerical experiments are provided.

    ● We utilize our methods to solve some real-world problems, such as optimal control and signal processing problems.

    ● The proofs of our strong convergence results do not require the conventional "two cases" approach that has been employed by several authors in the literature to establish strong convergence results; see, for example, [5,30].

    The article is organized as follows: In Section 2, some useful definitions and lemmas are recalled. The proposed algorithms and their convergence results are presented in Section 3. In Section 4, we conduct some numerical experiments to show the efficiency of our proposed algorithms over several well-known methods. In Section 5, we consider the application of our algorithms to the solution of an optimal control problem. In Section 6, we apply our methods to an image recovery problem, and in Section 7, we give a summary of the basic contributions of this work.

    In what follows, we denote the weak convergence of the sequence $ \{u_m\} $ to $ u $ by $ u_m\rightharpoonup u $ as $ m\to\infty $, and the strong convergence of the sequence $ \{u_m\} $ to $ u $ by $ u_m\to u $ as $ m\to\infty $.

    Next, the following definitions and lemmas will be recalled. Let $ G:H\to H $ be an operator, then $ G $ is called:

    $(a_1)$ contraction if there exists a constant $ k\in [0, 1) $ such that

    $ \|Gu-Gv\|\leq k\|u-v\|, \quad \forall u, v\in H; $

    $ (a_2)$ $ L $-Lipschitz continuous, if $ L > 0 $ exists with

    $ \|Gu-Gv\|\leq L\|u-v\|, \quad \forall u, v\in H. $

    If $ L = 1 $, then $ G $ becomes a nonexpansive mapping;

    $(a_3)$ quasi-nonexpansive, if $ F(G)\neq\emptyset $ and

    $ \|Gu-u^\star\|\leq \|u-u^\star\|, \quad \forall u\in H, \, u^\star\in F(G); $

    $ (a_4) $ $ \alpha $-strongly monotone, if there exists a constant $ \alpha > 0 $ such that

    $ \langle Gu-Gv, u-v\rangle\geq \alpha\|u-v\|^2, \quad \forall u, v\in H; $

    $ (a_5) $ Monotone, if

    $ \langle Gu-Gv, u-v\rangle\geq 0, \quad \forall u, v\in H; $

    $ (a_6) $ Pseudo-monotone, if

    $ \langle Gv, u-v\rangle\geq 0\implies \langle Gu, u-v\rangle\geq 0, \quad \forall u, v\in H; $

    $ (a_7) $ sequentially weakly continuous, if for any sequence $ \{u_m\} $ that converges weakly to $ u $, the sequence $ \{Gu_m\} $ converges weakly to $ Gu $.

    Lemma 2.1. [15] Let $ H $ be a real Hilbert space and $ M $ a nonempty closed convex subset of $ H $. Suppose $ u\in H $ and $ v\in M $, then $ v = P_M u $ $ \iff $ $ \langle u-v, v-w\rangle\geq 0 $, $ \forall w\in M $.

    Lemma 2.2. [15] Let $ M $ be a closed convex subset of a real Hilbert space $ H $. If $ u\in H $, then

    (i) $ \|P_M u-P_M v\|^2\leq \langle P_M u-P_M v, u-v\rangle, \, \, \forall v\in H $;

    (ii) $ \langle (I-P_M)u-(I-P_M)v, u-v\rangle \geq\|(I-P_M)u-(I-P_M)v\|^2, \, \, \forall v\in H $;

    (iii)$ \|P_M u-v\|^2\leq \|u-v\|^2-\|u-P_M u\|^2, \, \, \forall v\in H $.
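
    As a concrete illustration of Lemmas 2.1 and 2.2, and of the projections used by the algorithms below, the metric projection admits a closed form for the two types of sets appearing in this paper, namely a box in $ \mathbb{R}^k $ and a half-space in $ H $. These standard formulas follow directly from the characterization in Lemma 2.1:

    $ \begin{align*} \left(P_{[a, b]^k}u\right)_i = \min\{b, \max\{a, u_i\}\}, \qquad P_{T}u = u-\frac{\max\{0, \langle c, u-w\rangle\}}{\|c\|^2}\, c, \end{align*} $

    where $ T = \{x\in H:\langle c, x-w\rangle\leq 0\} $ with $ c\neq 0 $.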

    Lemma 2.3. For all $ u, v, w\in H $ and $ \alpha, \beta, \delta \in [0, 1] $ with $ \alpha+\beta+\delta = 1 $, the following hold in a real Hilbert space:

    (a)

    $ \|u+v\|^2\leq \|u\|^2+2\langle v, u+v\rangle; $

    (b)

    $ \|u+v\|^2 = \|u\|^2+2\langle u, v\rangle+\|v\|^2; $

    (c)

    $ \|\alpha u+\beta v+\delta w\|^2 = \alpha\|u\|^2+\beta\|v\|^2+\delta\|w\|^2-\alpha\beta\|u-v\|^2-\alpha\delta\|u-w\|^2-\beta\delta\|v-w\|^2. $

    Lemma 2.4. [15] Let $ G:H\to H $ be a nonlinear operator such that $ F(G)\neq \emptyset $. Then $ I -G $ is said to be demiclosed at zero if for any sequence $ \{u_m\}\subset H $, the following implication holds:

    $ u_m\rightharpoonup u\, \, \mathit{\text{and}}\, \, (I -G)u_m\to0\implies\, \, u\in F(G). $

    Lemma 2.5. [52] Let $ \{a_m\} $ be a sequence of nonnegative real numbers such that

    $ a_{m+1}\leq (1-\nu_m)a_m+\nu_m b_m, \quad \forall m\geq 1, $

    where $ \{\nu_m\}\subset (0, 1) $ with $ \sum_{m = 0}^{\infty}\nu_m = \infty $. If $ \limsup\limits_{k\to\infty}b_{m_k}\leq 0 $ for every subsequence $ \{a_{m_k}\} $ of $ \{a_{m}\} $ satisfying

    $ \liminf\limits_{k\to\infty}\left(a_{m_k+1}-a_{m_k}\right)\geq 0, $

    then $ \lim\limits_{m\to\infty}a_m = 0 $.

    In this section, we introduce three new double inertial subgradient extragradient-type algorithms for solving the VIP and FPP. In order to establish our main results, we assume that the following conditions are fulfilled:

    ($ C_1 $) The feasible set $ M $ is nonempty, closed and convex.

    ($ C_2 $) The mapping $ G:H\to H $ is pseudo-monotone and $ L $-Lipschitz continuous.

    ($ C_3 $) The solution set $ F(S)\cap VI(M, G)\neq \emptyset $.

    ($ C_4 $) The mapping $ G $ is sequentially weakly continuous on $ M $.

    ($ C_5 $) The mappings $ K, J:H\to H $ are non-expansive.

    ($ C_6 $) The mapping $ S:H\to H $ is quasi-nonexpansive such that $ I-S $ is demiclosed at zero.

    ($ C_7 $) The mapping $ f:H\to H $ is a contraction with constant $ k \in [0, 1) $.

    ($ C_8 $) Let $ \{\alpha_m\}\subset(0, 1) $, $ \{\beta_m\} $, $ \{\gamma_m\}\subset[a, b]\subset (0, 1) $ such that $ \alpha_m+\beta_m+\gamma_m = 1 $, $ \lim\limits_{m\to \infty}\alpha_m = 0 $, $ \sum\limits_{m = 1}^{\infty}\alpha_m = \infty $ and $ \lim\limits_{m\to\infty}\frac{\epsilon_m}{\alpha_m} = 0 = \lim\limits_{m\to\infty}\frac{\xi_m}{\alpha_m} $, where $ \{\epsilon_m\} $ and $ \{\xi_m\} $ are positive real sequences.

    ($ C_9 $) Let $ \{p_m\}, \{q_m\}\subset [0, \infty) $ and $ \{h_m\}\subset [1, \infty) $ such that $ \sum\limits_{m = 0}^{\infty}p_m < \infty $, $ \lim\limits_{m\to \infty}q_m = 0 $, and $ \lim\limits_{m\to \infty}h_m = 1 $.
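
    For illustration only (one admissible choice among many), the sequences

    $ \begin{align*} \alpha_m = \frac{1}{m+1}, \quad \beta_m = \gamma_m = \frac{m}{2(m+1)}, \quad \epsilon_m = \xi_m = p_m = \frac{1}{(m+1)^2}, \quad q_m = \frac{1}{m+1}, \quad h_m = 1+\frac{1}{m+1} \end{align*} $

    satisfy conditions $ (C_8) $ and $ (C_9) $: indeed, $ \alpha_m+\beta_m+\gamma_m = 1 $, $ \beta_m = \gamma_m\in\left[\frac{1}{4}, \frac{1}{2}\right] $ for all $ m\geq 1 $, $ \sum_{m}\alpha_m = \infty $, $ \frac{\epsilon_m}{\alpha_m} = \frac{\xi_m}{\alpha_m} = \frac{1}{m+1}\to 0 $, $ \sum_{m}p_m < \infty $, $ q_m\to 0 $, and $ h_m\geq 1 $ with $ h_m\to 1 $.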

    Remark 3.1. We note the following in Algorithm 3.1:

    Algorithm 3.1.
    Initialization: Choose $ \eta_1 > 0, \phi > 0, \theta > 0, \rho\in \left(0, 2\right), \mu \in (0, 1) $ and let $ u_0, u_1\in H $ be arbitrary.
    Iterative Steps: Given the iterates $ u_{m-1} $ and $ u_m $ $ (m\geq1) $, calculate $ u_{m+1} $ as follows:
    Step 1: Choose $ \phi_m $ and $ \theta_m $ such that $ \phi_m\in [0, \bar{\phi}_m] $ and $ \theta_m\in [0, \bar{\theta}_m] $, where
    $ \begin{align} \bar{\phi}_m& = \begin{cases} \min\left\{\frac{m-1}{m+\phi-1}, \frac{\epsilon_m}{\|u_m-u_{m-1}\|}\right\}, \;\; \text{ if }\;u_{m}\neq u_{m-1}, \\ \frac{m-1}{m+\phi-1}, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;otherwise. \end{cases} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.1) } \end{align}$
    $ \begin{align} \bar{\theta}_m& = \begin{cases} \min\left\{\frac{m-1}{m+\theta-1}, \frac{\xi_m}{\|u_m-u_{m-1}\|}\right\}, \;\; \text{ if }\;u_{m}\neq u_{m-1}, \\ \frac{m-1}{m+\theta-1}, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;otherwise. \end{cases} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.2) } \end{align}$
    Step 2: Set
    $\begin{eqnarray} s_m = u_m+\phi_m(Ku_m-Ku_{m-1}), \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.3) }\end{eqnarray} $
    $\begin{eqnarray} r_m = u_m+\theta_m(Ju_m-Ju_{m-1}), \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.4) }\end{eqnarray} $
    and compute
    $\begin{align} w_{m}& = P_M(s_m-\eta_mGs_m). \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.5) }\end{align} $
    If $ s_m = w_m $ or $ Gs_m = 0 $, stop; $ s_m $ is a solution of the VIP. Otherwise, do Step 3.
    Step 3: Compute
    $ \begin{align} z_{m} = P_{T_m}(s_m-\rho\eta_m\delta_mGw_m), \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.6) }\end{align}$
    where
    $\begin{align} T_m = \{u\in H:\langle s_m-\eta_mGs_m-w_m, u-w_m\rangle\leq 0\}, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.7) }\end{align} $
    $\begin{align} \delta_{m} = &\begin{cases}\frac{\langle s_m-w_m, v_m\rangle}{\|v_m\|^2}, \;\; \text{ if }\;v_m\neq 0, \\ \\ 0, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;otherwise, \end{cases} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.8) }\end{align} $
    and
    $ \begin{align} v_m = s_m-w_m-\eta_m(Gs_m-Gw_m). \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.9) }\end{align} $
    Step 4: Compute
    $ \begin{align} u_{m+1} = \alpha_mf(r_m)+\beta_mz_m+\gamma_m Sz_m. \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.10) }\end{align} $
    Update
    $\begin{align} \eta_{m+1} = &\begin{cases} \min\left\{\frac{(q_m+h_m\mu)\|s_m-w_m\|}{\|Gs_m-Gw_m\|}, \eta_m+p_m\right\}, \;\; \text{ if }\;Gs_m\neq Gw_m, \\ \\ \eta_m+p_m, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\, \, \, \, \, \, \, \, \, \, \, \, \, \, \, otherwise. \end{cases} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \text { (3.11) }\end{align} $
    Set $ m: = m+1 $ and go back to Step 1.

    (i) It is not hard to see from (3.1), (3.2), and condition $ (C_8) $ that

    $ \begin{eqnarray*} \lim\limits_{m\to \infty}\phi_m\|u_m-u_{m-1}\| = \lim\limits_{m\to \infty}\theta_m\|u_m-u_{m-1}\| = 0 \end{eqnarray*} $

    and

    $ \begin{eqnarray*} \lim\limits_{m\to \infty}\frac{\phi_m}{\alpha_m}\|u_m-u_{m-1}\| = \lim\limits_{m\to \infty}\frac{\theta_m}{\alpha_m}\|u_m-u_{m-1}\| = 0. \end{eqnarray*} $

    (ii) In order to get larger step sizes, we introduce the sequences $ \{q_m\} $ and $ \{h_m\} $ in (3.11) to relax the parameter $ \mu $. Such relaxation parameters can often improve the numerical performance of algorithms; see [10]. If $ q_m = 0 $ in (3.11), then $ \{\eta_m\} $ becomes the step size in [41]. If $ h_m = 1 $ in (3.11), then $ \{\eta_m\} $ becomes that in [50]. If $ q_m = 0 $ and $ h_m = 1 $ in (3.11), then the step size $ \{\eta_m\} $ reduces to that in [23]. Lastly, if $ q_m = p_m = 0 $ and $ h_m = 1 $, then $ \{\eta_m\} $ reduces to the step sizes used by many authors in the literature (see, for example, [13,42,53,54]).
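
    Before turning to the convergence analysis, we give a minimal, self-contained sketch of Algorithm 3.1 in Python for a box feasible set. The maps $ K $ and $ J $ are the nonexpansive choices of Example 4.1, while the contraction $ f $, the quasi-nonexpansive mapping $ S $, the projection, and all parameter sequences are illustrative stand-ins satisfying $ (C_5) $–$ (C_9) $; they are not the only admissible choices, and this sketch is not the authors' implementation.

```python
import numpy as np

def alg31(G, S, f, project_M, u0, u1, *, eta1=0.9, mu=0.4, rho=0.5,
          phi=0.6, theta=0.9, n_iter=500):
    """Illustrative sketch of Algorithm 3.1 (double inertial subgradient extragradient)."""
    def proj_halfspace(x, a, w):
        # P_T for the half-space T = {u : <a, u - w> <= 0}
        viol = a @ (x - w)
        return x if viol <= 0 else x - (viol / (a @ a)) * a

    K = lambda u: np.sin(u)       # nonexpansive, as in Example 4.1
    J = lambda u: 0.5 * u         # nonexpansive, as in Example 4.1
    u_prev, u = u0.copy(), u1.copy()
    eta = eta1
    for m in range(1, n_iter + 1):
        # illustrative parameter sequences satisfying (C8)-(C9)
        alpha = 1.0 / (m + 1)
        beta = gamma = m / (2.0 * (m + 1))
        eps = xi = p = 1.0 / (m + 1) ** 2
        q, h = 1.0 / (m + 1), 1.0 + 1.0 / (m + 1)

        # Step 1: inertial parameters (3.1)-(3.2), taking phi_m and theta_m at their upper bounds
        diff = np.linalg.norm(u - u_prev)
        phi_m = min((m - 1) / (m + phi - 1), eps / diff) if diff > 0 else (m - 1) / (m + phi - 1)
        theta_m = min((m - 1) / (m + theta - 1), xi / diff) if diff > 0 else (m - 1) / (m + theta - 1)

        # Step 2: double inertial extrapolations (3.3)-(3.5)
        s = u + phi_m * (K(u) - K(u_prev))
        r = u + theta_m * (J(u) - J(u_prev))
        Gs = G(s)
        w = project_M(s - eta * Gs)
        if np.allclose(s, w) or np.allclose(Gs, 0):
            return s                                   # s solves the VIP

        # Step 3: projection-and-contraction correction (3.6)-(3.9)
        Gw = G(w)
        v = s - w - eta * (Gs - Gw)
        delta = (s - w) @ v / (v @ v) if v @ v > 0 else 0.0
        z = proj_halfspace(s - rho * eta * delta * Gw, s - eta * Gs - w, w)

        # Step 4: viscosity-type combination (3.10)
        u_prev, u = u, alpha * f(r) + beta * z + gamma * S(z)

        # self-adaptive step-size update (3.11)
        dG = np.linalg.norm(Gs - Gw)
        eta = min((q + h * mu) * np.linalg.norm(s - w) / dG, eta + p) if dG > 0 else eta + p
    return u
```

    For instance, with the data of Example 4.1 one may call alg31(G, S, f, project_M, u0, u1), taking $ f(u) = \frac{u}{2} $ as the contraction and project_M the componentwise clipping onto $ [-5, 5]^k $.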

    We now establish the following lemmas that will be useful in proving our strong convergence theorems.

    Lemma 3.1. If conditions $ (C_2) $ and $ (C_9) $ are fulfilled and $ \{\eta_m\} $ is the sequence generated by (3.11), then $ \{\eta_m\} $ is well-defined and $ \lim\limits_{m\to\infty}\eta_m = \eta\in \left[\min \left\{\frac{\mu}{L}, \eta_1\right\}, \eta_1+\sum\limits_{m = 1}^{\infty}p_m\right] $.

    Proof. Since $ G $ is Lipschitz continuous with $ L > 0 $, $ q_m\geq 0 $, and $ h_m\geq 1 $, it follows that, whenever $ Gs_m\neq Gw_m $,

    $ \begin{align*} \frac{(q_m+h_m\mu)\|s_m-w_m\|}{\|Gs_m-Gw_m\|}\geq \frac{q_m+h_m\mu}{L}\geq \frac{\mu}{L}, \end{align*} $

    so the first argument of the minimum in (3.11) is bounded below by $ \frac{\mu}{L} $.

    We omit the remaining part of the proof, since it follows the same lines as the proof of Lemma 3.1 in [50].

    Lemma 3.2. Let $ \{s_m\} $ and $ \{w_m\} $ be two sequences generated by Algorithm 3.1 and suppose that conditions $ (C_1) $–$ (C_4) $ are fulfilled. If there exists a subsequence $ \{s_{m_k}\} $ of $ \{s_m\} $ such that $ s_{m_k}\rightharpoonup v^\star\in H $ and $ \lim\limits_{k\to \infty}\|s_{m_k}-w_{m_k}\| = 0 $, then $ v^\star\in VI(M, G) $.

    Proof. Since $ w_{m_k} = P_M(s_{m_k}-\eta_{m_k}Gs_{m_k}) $, then by applying Lemma 2.1, we have

    $ \begin{align*} \langle s_{m_k}-\eta_{m_k}Gs_{m_k}-w_{m_k}, u-w_{m_k}\rangle\leq 0, \, \forall u\in M. \end{align*} $

    Equivalently, we have

    $ \begin{align*} \frac{1}{\eta_{m_k}}\langle s_{m_k}-w_{m_k}, u-w_{m_k}\rangle\leq \langle Gs_{m_k}, u-w_{m_k}\rangle, \, \forall u\in M. \end{align*} $

    It follows that

    $ \begin{align} \frac{1}{\eta_{m_k}}\langle s_{m_k}-w_{m_k}, u-w_{m_k}\rangle+\langle Gs_{m_k}, w_{m_k}- s_{m_k}\rangle\leq \langle Gs_{m_k}, u-s_{m_k}\rangle, \, \forall u\in M. \end{align} $ (3.12)

    Since $ s_{m_k}\rightharpoonup v^\star $, the sequence $ \{s_{m_k}\} $ is bounded, and since $ G $ is $ L $-Lipschitz continuous on $ H $, $ \{Gs_{m_k}\} $ is also bounded. Again, since $ \lim\limits_{k\to \infty}\|s_{m_k}-w_{m_k}\| = 0 $, the sequence $ \{w_{m_k}\} $ is also bounded and $ \eta_{m_k}\geq \min\left\{\frac{\mu}{L}, \eta_1\right\} $. From (3.12), we have

    $ \begin{align} \liminf\limits_{k\to \infty} \langle Gs_{m_k}, u-s_{m_k}\rangle\geq 0, \, \forall u\in M. \end{align} $ (3.13)

    On the other hand, we have

    $ \begin{align} \langle Gw_{m_k}, u-w_{m_k}\rangle = \langle Gw_{m_k}-Gs_{m_k}, u-s_{m_k}\rangle+\langle Gs_{m_k}, u-s_{m_k}\rangle+\langle Gw_{m_k}, s_{m_k}-w_{m_k}\rangle, \, \forall u\in M. \end{align} $ (3.14)

    Since $ \lim\limits_{k\to \infty}\|s_{m_k}-w_{m_k}\| = 0 $ and $ G $ is $ L $-Lipschitz continuous on $ H $, we have

    $ \begin{align} \lim\limits_{k\to \infty}\|Gs_{m_k}-Gw_{m_k}\| = 0. \end{align} $ (3.15)

    By $ \lim\limits_{k\to \infty}\|s_{m_k}-w_{m_k}\| = 0 $, (3.13) and (3.15), (3.14) reduces to

    $ \begin{align} \liminf\limits_{k\to \infty} \langle Gw_{m_k}, u-w_{m_k}\rangle\geq 0, \, \forall u\in M. \end{align} $ (3.16)

    Next, we show that $ v^\star\in VI(M, G) $. To show this, we choose a decreasing sequence $ \{\xi_k\} $ of positive numbers which approaches zero. For each $ k $, let $ N_k $ stand for the smallest positive integer fulfilling the following inequality:

    $ \begin{align} \langle Gw_{m_j}, u-w_{m_j}\rangle+\xi_k\geq 0, \, \, \forall j\geq N_k. \end{align} $ (3.17)

    It is not hard to see that the sequence $ \{N_k\} $ increases as $ \{\xi_k\} $ decreases. Moreover, since $ \{w_{N_k}\}\subset M $, for each $ k $, we can assume that $ Gw_{N_k}\neq 0 $ (otherwise, $ w_{N_k} $ is a solution). Putting

    $ \begin{align*} g_{N_k} = \frac{Gw_{N_k}}{\|Gw_{N_k}\|^2}, \end{align*} $

    we get $ \langle Gw_{N_k}, g_{N_k}\rangle = 1 $, for each $ k $. We can infer from (3.17) that for each $ k $

    $ \begin{align*} \langle Gw_{N_k}, u+\xi_kg_{N_k}-w_{N_k}\rangle\geq 0. \end{align*} $

    Now, owing to the pseudo-monotonicity of $ G $ on $ H $, we have

    $ \begin{align*} \langle G(u+\xi_kg_{N_k}), u+\xi_kg_{N_k}-w_{N_k}\rangle\geq 0. \end{align*} $

    This means that

    $ \begin{align} \langle Gu, u-w_{N_k}\rangle \geq \langle Gu-G(u+\xi_kg_{N_k}), u+\xi_kg_{N_k}-w_{N_k}\rangle-\xi_k\langle Gu, g_{N_k}\rangle. \end{align} $ (3.18)

    We now have to show that $ \lim\limits_{k\to \infty}\xi_kg_{N_k} = 0 $. Indeed, by the fact that $ s_{m_k}\rightharpoonup v^\star $ and $ \lim\limits_{k\to \infty}\|s_{m_k}-w_{m_k}\| = 0 $, we have $ w_{N_k}\rightharpoonup v^\star $ as $ k\to \infty $. Since the norm mapping is sequentially weakly lower semicontinuous, we have

    $ \begin{align} 0 < \|Gv^\star\|\leq \liminf\limits_{k\to\infty}\|Gw_{m_k}\|. \end{align} $ (3.19)

    Since $ \{w_{N_k}\}\subset \{w_{m_k}\} $ and $ \xi_k \to 0 $ as $ k\to\infty $, we have

    $ \begin{align} 0\leq \limsup\limits_{k\to \infty}\|\xi_kg_{N_k}\| = \limsup\limits_{k\to \infty}\left(\frac{\xi_k}{\|Gw_{m_k}\|}\right)\leq \frac{\lim\limits_{k\to\infty}\xi_k}{\liminf\limits_{k\to \infty}\|Gw_{m_k}\|} = 0, \end{align} $ (3.20)

    which implies that $ \lim\limits_{k\to \infty}\xi_kg_{N_k} = 0 $. Now, owing to the fact that $ G $ is Lipschitz continuous, $ \{w_{m_k}\} $, $ \{g_{N_k}\} $ are bounded, and $ \lim\limits_{k\to \infty}\xi_kg_{N_k} = 0 $, then letting $ k\to\infty $ in (3.18), we obtain

    $ \begin{align*} \liminf\limits_{k\to \infty}\langle Gu, u-w_{N_k}\rangle\geq 0. \end{align*} $

    Thus, for all $ u\in M $, we have

    $ \begin{align*} \langle Gu, u-v^\star \rangle = \lim\limits_{k\to \infty} \langle G u, u-w_{N_k}\rangle = \liminf\limits_{k\to \infty}\langle G u, u-w_{N_k}\rangle\geq 0. \end{align*} $

    Since $ M $ is closed and convex, it is weakly closed, so $ v^\star\in M $; hence, by the standard Minty-type argument for pseudo-monotone variational inequalities, the above inequality implies that $ v^\star\in VI(M, G) $.

    Lemma 3.3. Assume that conditions $ (C_1) $–$ (C_3) $ hold and $ \{z_m\} $ is a sequence generated by Algorithm 3.1. Then, for all $ u^\star\in VI(M, G) $, there exists $ m_0 > 0 $ such that

    $ \begin{eqnarray} \|z_m-u^\star \|^2\leq \|s_m-u^\star \|^2-\|s_m-z_m-\rho\delta_mv_m\|^2-(2-\rho)\rho\left(\frac{1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}}{1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}}\right)^2\|s_m-w_m\|^2, \, \, \forall m\geq m_0. \end{eqnarray} $ (3.21)

    Proof. From Lemma 3.1 and (3.9), we have

    $ \begin{eqnarray} \|v_m\|& = &\|s_m-w_m-\eta_m(Gs_m-Gw_m)\|\\ &\geq&\|s_m-w_m\|-\eta_m\|Gs_m-Gw_m\|\\ &\geq&\|s_m-w_m\|-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\|s_m-w_m\|\\ & = &\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)\|s_m-w_m\|. \end{eqnarray} $ (3.22)

    By Lemma 3.1, we know that $ \lim\limits_{m\to \infty}\eta_m $ exists, which together with $ \lim\limits_{m\to\infty}q_m = 0 $ and $ \lim\limits_{m\to\infty}h_m = 1 $ gives

    $ \begin{align*} \lim\limits_{m\to \infty}\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right) = 1-\mu > 0. \end{align*} $

    Thus, there exists $ m_0\in \mathbb{N} $ such that

    $ \begin{align*} 1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}} > \frac{1-\mu}{2}, \, \, \forall m\geq m_0. \end{align*} $

    By (3.22), for all $ m\geq m_0 $, we have

    $ \begin{eqnarray} \|v_m\| > \left(\frac{1-\mu}{2}\right)\|s_m-w_m\|\geq0. \end{eqnarray} $ (3.23)

    Since $ u^\star \in VI(M, G)\subset M\subset T_m $, then by Lemmas 2.2 and 2.3,

    $ \begin{eqnarray} 2\|z_m-u^\star\|^2& = &2\| P_{T_m}(s_m-\rho\eta_m\delta_mGw_m)-P_{T_m}u^\star\|^2\\ &\leq &2\langle z_m-u^\star, s_m-\rho\eta_m\delta_mGw_m-u^\star\rangle\\ & = &\| z_m-u^\star\|^2+\|s_m-\rho\eta_m\delta_mGw_m-u^\star\|^2-\|z_m-s_m+\rho\eta_m\delta_mGw_m\|^2\\ & = &\| z_m-u^\star\|^2+\| s_m-u^\star\|^2+\rho^2\eta^2_m\delta^2_m\|Gw_m\|^2-2\langle s_m-u^\star, \rho\eta_m\delta_mGw_m\rangle\\&&-\|z_m-s_m\|^2-\rho^2\eta^2_m\delta^2_m\|Gw_m\|^2-2\langle z_m-s_m, \rho\eta_m\delta_mGw_m\rangle\\ & = &\| z_m-u^\star\|^2+\| s_m-u^\star\|^2-\|z_m-s_m\|^2-2\langle z_m-u^\star, \rho\eta_m\delta_mGw_m\rangle. \end{eqnarray} $

    This implies that

    $ \begin{equation} \|z_m-u^\star\|^2\leq \|s_m-u^\star\|^2-\|z_m-s_m\|^2-2\rho\eta_m\delta_m\langle z_m-u^\star, Gw_m\rangle. \end{equation} $ (3.24)

    Since $ w_m\in M $ and $ u^\star\in VI(M, G) $, we have $ \langle Gu^\star, w_m-u^\star\rangle\geq 0 $. From the pseudo-monotonicity of $ G $, we know that $ \langle Gw_m, w_m-u^\star\rangle\geq 0. $ This implies that

    $ \begin{eqnarray*} \langle Gw_m, z_m-u^\star\rangle = \langle Gw_m, z_m-w_m\rangle+\langle Gw_m, w_m-u^\star\rangle. \end{eqnarray*} $

    Thus,

    $ \begin{align} -2\rho\eta_m\delta_m \langle Gw_m, z_m-u^\star\rangle\leq -2\rho\eta_m\delta_m \langle Gw_m, z_m-w_m\rangle. \end{align} $ (3.25)

    On the other hand, from $ z_m\in T_m $, we have

    $ \begin{align*} \langle s_m-\eta_m Gs_m-w_m, z_m-w_m\rangle\leq 0. \end{align*} $

    It follows that

    $ \begin{align*} \langle s_m-w_m-\eta_m(Gs_m-Gw_m), z_m-w_m\rangle\leq \eta_m\langle Gw_m, z_m-w_m\rangle. \end{align*} $

    Thus,

    $ \begin{align*} \langle v_m, z_m-w_m\rangle \leq \eta_m \langle Gw_m, z_m-w_m\rangle. \end{align*} $

    Therefore,

    $ \begin{align} -2\rho\eta_m\delta_m \langle Gw_m, z_m-w_m\rangle\leq -2\rho\delta_m \langle v_m, z_m-w_m\rangle. \end{align} $ (3.26)

    Moreover, we have

    $ \begin{align} -2\rho\delta_m \langle v_m, z_m-w_m\rangle = -2\rho\delta_m \langle v_m, s_m-w_m\rangle+2\rho\delta_m \langle v_m, s_m-z_m\rangle. \end{align} $ (3.27)

    Recalling (3.23), we know that $ v_m\neq 0 $, for all $ m\geq m_0 $. This implies that $ \delta_m = \frac{\langle s_m-w_m, v_m\rangle}{\|v_m\|^2} $. Thus, we have

    $ \begin{align} \langle s_m-w_m, v_m\rangle = \delta_m\|v_m\|^2, \, \forall m\geq m_0. \end{align} $ (3.28)

    On the other hand,

    $ \begin{eqnarray} 2\rho\delta_m \langle v_m, s_m-z_m\rangle = 2\langle\rho \delta_mv_m, s_m-z_m\rangle = \|s_m-z_m\|^2+\rho^2 \delta^2_m\|v_m\|^2-\|s_m-z_m-\rho \delta_mv_m\|^2. \end{eqnarray} $ (3.29)

    Putting (3.28) and (3.29) into (3.27), then for all $ m\geq m_0 $, we get

    $ \begin{eqnarray} -2\rho\delta_m \langle v_m, z_m-w_m\rangle&\leq& -2\rho\delta^2_m\|v_m\|^2+ \|s_m-z_m\|^2+\rho^2\delta^2_m\|v_m\|^2-\|s_m-z_m-\rho \delta_mv_m\|^2\\ & = &\|s_m-z_m\|^2-\|s_m-z_m-\rho \delta_mv_m\|^2-(2-\rho)\rho \delta^2_m\|v_m\|^2. \end{eqnarray} $ (3.30)

    Using (3.26) and (3.30), we get

    $ \begin{eqnarray} -2\rho\eta_m\delta_m \langle Gw_m, z_m-w_m\rangle&\leq& -2\rho\delta^2_m\|v_m\|^2+ \|s_m-z_m\|^2+\rho^2\delta^2_m\|v_m\|^2-\|s_m-z_m-\rho \delta_mv_m\|^2\\ & = &\|s_m-z_m\|^2-\|s_m-z_m-\rho \delta_mv_m\|^2-(2-\rho)\rho \delta^2_m\|v_m\|^2. \end{eqnarray} $ (3.31)

    Also, from the combination of (3.25) and (3.31), we have

    $ \begin{eqnarray} -2\rho\eta_m\delta_m \langle Gw_m, z_m-u^\star\rangle&\leq& -2\rho\delta^2_m\|v_m\|^2+ \|s_m-z_m\|^2+\rho^2\delta^2_m\|v_m\|^2-\|s_m-z_m-\rho \delta_mv_m\|^2\\ & = &\|s_m-z_m\|^2-\|s_m-z_m-\rho \delta_mv_m\|^2-(2-\rho)\rho \delta^2_m\|v_m\|^2. \end{eqnarray} $ (3.32)

    Putting (3.32) into (3.24), we obtain

    $ \begin{align} \|z_m-u^\star\|^2\leq \|s_m-u^\star\|^2-\|s_m-z_m-\rho \delta_mv_m\|^2-(2-\rho)\rho \delta^2_m\|v_m\|^2. \end{align} $ (3.33)

    Now, by Lemma 3.1 and (3.9), we have

    $ \begin{eqnarray*} \nonumber \|v_m\|& = &\|s_m-w_m-\eta_m(Gs_m-Gw_m)\|\\ &\leq&\|s_m-w_m\|+\eta_m\|Gs_m-Gw_m\|\\ \nonumber &\leq&\|s_m-w_m\|+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\|s_m-w_m\|\\ & = &\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)\|s_m-w_m\|. \label{h3} \end{eqnarray*} $

    Thus,

    $ \begin{eqnarray*} \|v_m\|^2 \leq\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2\|s_m-w_m\|^2, \end{eqnarray*} $

    or equivalently

    $ \begin{eqnarray*} \frac{1}{\|v_m\|^2} \geq\frac{1}{\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2\|s_m-w_m\|^2}. \end{eqnarray*} $

    Again, from (3.9), we have

    $ \begin{eqnarray*} \nonumber \langle s_m-w_m, v_m\rangle & = &\|s_m-w_m\|^2-\eta_m\langle s_m-w_m, Gs_m-Gw_m\rangle\\ \nonumber &\geq &\|s_m-w_m\|^2-\eta_m\| s_m-w_m\|\|Gs_m-Gw_m\|\\ \nonumber&\geq&\|s_m-w_m\|^2-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\|s_m-w_m\|^2\\ & = &\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)\|s_m-w_m\|^2. \end{eqnarray*} $

    Therefore, for all $ m\geq m_0 $, we have

    $ \begin{align} \delta_m\|v_m\|^2 = \langle s_m-w_m, v_m\rangle\geq \left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)\|s_m-w_m\|^2 \end{align} $ (3.34)

    and

    $ \begin{align} \delta_m = \frac{ \langle s_m-w_m, v_m\rangle }{\|v_m\|^2}\geq\frac{\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)}{\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}. \end{align} $ (3.35)

    Combining (3.34) and (3.35), we have

    $ \begin{align} \delta^2_m\|v_m\|^2\geq\frac{\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}{\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}\|s_m-w_m\|^2, \, \forall m\geq m_0. \end{align} $ (3.36)

    Putting (3.36) into (3.33), we have

    $ \begin{eqnarray*} \|z_m-u^\star\|^2\leq \|s_m-u^\star\|^2-\|s_m-z_m-\rho \delta_mv_m\|^2-(2-\rho)\rho\frac{\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}{\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}\|s_m-w_m\|^2, \, \forall m\geq m_0. \end{eqnarray*} $

    Next, the strong convergence theorem of Algorithm 3.1 is established as follows:

    Theorem 3.1. Suppose that conditions $ (C_1) $–$ (C_8) $ hold and $ \{u_m\} $ is the sequence generated by Algorithm 3.1. Then $ \{u_m\} $ converges strongly to an element $ u^\star\in F(S)\cap VI(M, G) $, where $ u^\star = P_{F(S)\cap VI(M, G)}\circ f(u^\star) $.

    Proof. We divide the proof into four parts as follows:

    Claim 1. We show that $ \{u_m\} $ is bounded.

    Indeed, due to (3.21), we have

    $ \begin{align} \|z_m-u^\star\|\leq \|s_m-u^\star\|. \end{align} $ (3.37)

    From (3.3), we have

    $ \begin{eqnarray} \|s_m-u^\star\|& = &\|u_m+\phi_m(Ku_m-Ku_{m-1})-u^\star\|\\ &\leq&\|u_m-u^\star\|+\phi_m\|Ku_m-Ku_{m-1}\|\\ &\leq&\|u_m-u^\star\|+\phi_m\|u_m-u_{m-1}\|\\ & = &\|u_m-u^\star\|+\alpha_m\frac{\phi_m}{\alpha_m}\|u_m-u_{m-1}\|. \end{eqnarray} $ (3.38)

    From Remark 3.1, $ \lim\limits_{m\to \infty}\frac{\phi_m}{\alpha_m}\|u_m-u_{m-1}\| = 0 $. Therefore, $ \{\frac{\phi_m}{\alpha_m}\|u_m-u_{m-1}\|\} $ is bounded, so a constant $ \Gamma_1 > 0 $ exists such that

    $ \begin{align} \frac{\phi_m}{\alpha_m}\|u_m-u_{m-1}\|\leq \Gamma_1, \, \forall m\geq 1. \end{align} $ (3.39)

    Combining (3.37)–(3.39), we have

    $ \begin{align} \|z_m-u^\star\|\leq \|s_m-u^\star\|\leq \|u_m-u^\star\|+\alpha_m \Gamma_1. \end{align} $ (3.40)

    Also, from (3.4), we have

    $ \begin{eqnarray} \|r_m-u^\star\|& = &\|u_m+\theta_m(Ju_m-Ju_{m-1})-u^\star\|\\ &\leq&\|u_m-u^\star\|+\theta_m\|Ju_m-Ju_{m-1}\|\\ &\leq&\|u_m-u^\star\|+\theta_m\|u_m-u_{m-1}\|\\ & = &\|u_m-u^\star\|+\alpha_m\frac{\theta_m}{\alpha_m}\|u_m-u_{m-1}\|. \end{eqnarray} $ (3.41)

    From Remark 3.1, we see that $ \lim\limits_{m\to \infty}\frac{\theta_m}{\alpha_m}\|u_m-u_{m-1}\| = 0 $. Thus, a constant $ \Gamma_2 > 0 $ exists such that

    $ \begin{align} \frac{\theta_m}{\alpha_m}\|u_m-u_{m-1}\|\leq \Gamma_2, \, \forall m\geq 1. \end{align} $ (3.42)

    Combining (3.41) and (3.42), we have

    $ \begin{align} \|r_m-u^\star\|\leq \|u_m-u^\star\|+\alpha_m \Gamma_2. \end{align} $ (3.43)

    Using (3.10) and condition $ (C_7) $, we have

    $ \begin{eqnarray} \| u_{m+1}-u^\star\|& = &\|\alpha_mf(r_m)+\beta_mz_m+\gamma_m Sz_m-u^\star\|\\ & = &\|\alpha_m(f(r_m)-u^\star)+\beta_m(z_m-u^\star)+\gamma_m (Sz_m-u^\star)\|\\ &\leq& \alpha_m\|f(r_m)-f(u^\star)+f(u^\star)-u^\star\|+\beta_m\|z_m-u^\star\|+\gamma_m \|Sz_m-u^\star\|\\ &\leq& \alpha_m\|f(r_m)-f(u^\star)\|+\alpha_m\|f(u^\star)-u^\star\|+\beta_m\|z_m-u^\star\|+\gamma_m \|Sz_m-u^\star\|\\ &\leq& \alpha_mk\|r_m-u^\star\|+\alpha_m\|f(u^\star)-u^\star\|+\beta_m\|z_m-u^\star\|+\gamma_m \|z_m-u^\star\|\\ & = &\alpha_mk\|r_m-u^\star\|+\alpha_m\|f(u^\star)-u^\star\|+(1-\alpha_m)\|z_m-u^\star\|. \end{eqnarray} $ (3.44)

    Putting (3.40) and (3.43) into (3.44), we have

    $ \begin{eqnarray} \| u_{m+1}-u^\star\|&\leq&\alpha_mk(\|u_m-u^\star\|+\alpha_m \Gamma_2)+\alpha_m\|f(u^\star)-u^\star\|+ (1-\alpha_m)(\|u_m-u^\star\|+\alpha_m \Gamma_1)\\ & = &(1-(1-k)\alpha_m)\|u_m-u^\star\|+\alpha^2_m k\Gamma_2+\alpha_m(1-\alpha_m) \Gamma_1+\alpha_m\|f(u^\star)-u^\star\|\\ &\leq&(1-(1-k)\alpha_m)\|u_m-u^\star\|+\alpha_m \Gamma_2+\alpha_m \Gamma_1+\alpha_m\|f(u^\star)-u^\star\|\\ & = &(1-(1-k)\alpha_m)\|u_m-u^\star\|+\alpha_m \Gamma_3+\alpha_m\|f(u^\star)-u^\star\|\\ & = &(1-(1-k)\alpha_m)\|u_m-u^\star\|+(1-k)\alpha_m\frac{\Gamma_3+\|f(u^\star)-u^\star\|}{1-k} \\ &\leq& \max\left\{\|u_m-u^\star\|, \frac{\Gamma_3+\|f(u^\star)-u^\star\|}{1-k}\right\}\\ &\leq& \cdots\\ &\leq&\max\left\{\|u_{m_0}-u^\star\|, \frac{\Gamma_3+\|f(u^\star)-u^\star\|}{1-k}\right\}, \, \forall m\geq m_0, \end{eqnarray} $ (3.45)

    where $ \Gamma_3 = \Gamma_1+\Gamma_2 $. This means that $ \{u_m\} $ is bounded. It follows that $ \{z_m\} $, $ \{s_m\} $, $ \{r_m\} $, $ \{w_m\} $, $ \{f(r_m)\} $, and $ \{f(z_m)\} $ are bounded.

    Claim 2.

    $ \begin{align*} &(1-\alpha_m)\|s_m-z_m-\rho\delta_mv_m\|^2+(1-\alpha_m)(2-\rho)\rho\frac{\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}{\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}\|s_m-w_m\|^2+\beta_m\gamma_m\|z_m-Sz_m\|^2 \\\leq&\|u_m-u^\star\|^2-\| u_{m+1}-u^\star\|^2+\alpha_m\Gamma_7, \, \forall m\geq m_0, \end{align*} $

    for some $ \Gamma_7 > 0 $.

    Indeed, from (3.40), we have

    $ \begin{eqnarray} \|s_m- u^\star\|^2\leq(\|u_m-u^\star\|+\alpha_m \Gamma_1)^2 = \|u_m-u^\star\|^2+\alpha_m(2\Gamma_1\|u_m-u^\star\|+\alpha_m\Gamma^2_1). \end{eqnarray} $ (3.46)

    Since $ \{u_m\} $ is a bounded sequence, it therefore implies that a constant $ \Gamma_4 > 0 $ exists, such that $ 2\Gamma_1\|u_m-u^\star\|+\alpha_m\Gamma^2_1\leq \Gamma_4 $. Hence, (3.46) becomes

    $ \begin{align*} \|s_m- u^\star\|^2 \leq\|u_m-u^\star\|^2+\alpha_m\Gamma_4. \end{align*} $

    Also, from (3.43), we get

    $ \begin{eqnarray} \|r_m- u^\star\|^2\leq(\|u_m-u^\star\|+\alpha_m \Gamma_2)^2 = \|u_m-u^\star\|^2+\alpha_m(2\Gamma_2\|u_m-u^\star\|+\alpha_m\Gamma^2_2). \end{eqnarray} $ (3.47)

    Since $ \{u_m\} $ is a bounded sequence, it therefore implies that a constant $ \Gamma_5 > 0 $ exists, such that $ 2\Gamma_2\|u_m-u^\star\|+\alpha_m\Gamma^2_2\leq \Gamma_5 $. Hence, (3.47) becomes

    $ \begin{align*} \|r_m- u^\star\|^2 \leq\|u_m-u^\star\|^2+\alpha_m\Gamma_5. \end{align*} $

    Now, from (3.10) and Lemma 2.3, we have

    $ \begin{eqnarray} \| u_{m+1}-u^\star\|^2& = &\|\alpha_mf(r_m)+\beta_mz_m+\gamma_m Sz_m-u^\star\|^2\\ & = &\|\alpha_m(f(r_m)-u^\star)+\beta_m(z_m-u^\star)+\gamma_m (Sz_m-u^\star)\|^2\\ &\leq&\alpha_m\|f(r_m)-u^\star\|^2+\beta_m\|z_m-u^\star\|^2\\&&+\gamma_m\| Sz_m-u^\star\|^2-\beta_m\gamma_m\|z_m-Sz_m\|^2\\ &\leq&\alpha_m(\|f(r_m)-f(u^\star)\|+\|f(u^\star)-u^\star\|)^2+\beta_m\|z_m-u^\star\|^2\\&&+\gamma_m\| Sz_m-u^\star\|^2-\beta_m\gamma_m\|z_m-Sz_m\|^2\\ &\leq&\alpha_m(k\|r_m-u^\star\|+\|f(u^\star)-u^\star\|)^2+\beta_m\|z_m-u^\star\|^2\\&&+\gamma_m\| z_m-u^\star\|^2-\beta_m\gamma_m\|z_m-Sz_m\|^2\\ & = &\alpha_m(k^2\|r_m-u^\star\|^2+2\|r_m-u^\star\|\|f(u^\star)-u^\star\|+\|f(u^\star)-u^\star\|^2)\\&&+(1-\alpha_m)\|z_m-u^\star\|^2-\beta_m\gamma_m\|z_m-Sz_m\|^2\\ &\leq&\alpha_m(\|r_m-u^\star\|^2+2\|r_m-u^\star\|\|f(u^\star)-u^\star\|+\|f(u^\star)-u^\star\|^2)\\&&+(1-\alpha_m)\|z_m-u^\star\|^2-\beta_m\gamma_m\|z_m-Sz_m\|^2\\ & = &\alpha_m\|r_m-u^\star\|^2+\alpha_m(2\|r_m-u^\star\|\|f(u^\star)-u^\star\|+\|f(u^\star)-u^\star\|^2)\\&&+(1-\alpha_m)\|z_m-u^\star\|^2-\beta_m\gamma_m\|z_m-Sz_m\|^2. \end{eqnarray} $ (3.48)

    Due to the boundedness of $ \{r_m\} $, we know that a constant $ \Gamma_6 > 0 $ exists, such that $ 2\|r_m-u^\star\|\|f(u^\star)-u^\star\|+\|f(u^\star)-u^\star\|^2\leq \Gamma_6 $. Therefore, (3.48) becomes

    $ \begin{align} \| u_{m+1}-u^\star\|^2 \leq\alpha_m\|r_m-u^\star\|^2+(1-\alpha_m)\|z_m-u^\star\|^2-\beta_m\gamma_m\|z_m-Sz_m\|^2+\alpha_m\Gamma_6. \end{align} $ (3.49)

    Putting (3.21) into (3.49), we get

    $ \begin{eqnarray} \| u_{m+1}-u^\star\|^2&\leq\alpha_m\|r_m-u^\star\|^2+(1-\alpha_m) \|s_m-u^\star\|^2-(1-\alpha_m)\|s_m-z_m-\rho \delta_mv_m\|^2\\&-(1-\alpha_m)(2-\rho)\rho\frac{\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}{\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}\|s_m-w_m\|^2-\beta_m\gamma_m\|z_m-Sz_m\|^2+\alpha_m\Gamma_6. \end{eqnarray} $ (3.50)

    Substituting (3.40) and (3.43) into (3.50), we have

    $ \begin{eqnarray} \| u_{m+1}-u^\star\|^2&\leq&\alpha_m( \|u_m-u^\star\|+\alpha_m \Gamma_2)^2+(1-\alpha_m) ( \|u_m-u^\star\|+\alpha_m \Gamma_1)^2\\&&-(1-\alpha_m)\|s_m-z_m-\rho \delta_mv_m\|^2\\&&-(1-\alpha_m)(2-\rho)\rho\frac{\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}{\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}\|s_m-w_m\|^2\\&&-\beta_m\gamma_m\|z_m-Sz_m\|^2+\alpha_m\Gamma_6\\ &\leq&\|u_m-u^\star\|^2-(1-\alpha_m)\|s_m-z_m-\rho \delta_mv_m\|^2\\&&-(1-\alpha_m)(2-\rho)\rho\frac{\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}{\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}\|s_m-w_m\|^2\\&&-\beta_m\gamma_m\|z_m-Sz_m\|^2+\alpha_m\Gamma_4 +\alpha_m \Gamma_5+\alpha_m \Gamma_6, \end{eqnarray} $ (3.51)

    it follows from (3.51) that

    $ \begin{align*} &(1-\alpha_m)\|s_m-z_m-\rho\delta_mv_m\|^2+(1-\alpha_m)(2-\rho)\rho\frac{\left(1-\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}{\left(1+\frac{(q_m+h_m\mu)\eta_m}{\eta_{m+1}}\right)^2}\|s_m-w_m\|^2+\beta_m\gamma_m\|z_m-Sz_m\|^2 \\\leq&\|u_m-u^\star\|^2-\| u_{m+1}-u^\star\|^2+\alpha_m\Gamma_7, \, \forall m\geq m_0, \end{align*} $

    where $ \Gamma_7 = \Gamma_4+\Gamma_5+\Gamma_6 > 0 $.

    Claim 3.

    $ \begin{eqnarray} \| u_{m+1}-u^\star\|^2 &\leq& (1-(1-k)\alpha_m)\|u_m-u^\star\|^2+(1-k)\alpha_m\left[\frac{2}{1-k}\langle f(u^\star)-u^\star, u_{m+1}-u^\star\rangle\right.\\&&\left.+\frac{3\Gamma_8}{1-k}\cdot\frac{\theta_m}{\alpha_m}\|u_m-u_{m-1}\|+\frac{3\Gamma_9}{1-k}\cdot\frac{\phi_m}{\alpha_m}\|u_m-u_{m-1}\|\right], \, \forall m\geq m_0, \end{eqnarray} $ (3.52)

    for some $ \Gamma_8 > 0 $ and $ \Gamma_9 > 0 $.

    Indeed, using (3.3), we have

    $ \begin{eqnarray} \|s_m-u^\star\|^2& = & \|u_m+\phi_m(Ku_m-Ku_{m-1})-u^\star\|^2\\ & = & \|u_m-u^\star+\phi_m(Ku_m-Ku_{m-1})\|^2\\ &\leq& \|u_m-u^\star\|^2+2\phi_m\|u_m-u^\star\|\|Ku_m-Ku_{m-1}\|+\phi^2_m\|Ku_m-Ku_{m-1}\|^2\\ &\leq& \|u_m-u^\star\|^2+2\phi_m\|u_m-u^\star\|\|u_m-u_{m-1}\|+\phi^2_m\|u_m-u_{m-1}\|^2. \end{eqnarray} $ (3.53)

    Also, from (3.4), we get

    $ \begin{eqnarray} \|r_m-u^\star\|^2& = & \|u_m+\theta_m(Ju_m-Ju_{m-1})-u^\star\|^2\\ & = & \|u_m-u^\star+\theta_m(Ju_m-Ju_{m-1})\|^2\\ &\leq& \|u_m-u^\star\|^2+2\theta_m\|u_m-u^\star\|\|Ju_m-Ju_{m-1}\|+\theta^2_m\|Ju_m-Ju_{m-1}\|^2\\ &\leq& \|u_m-u^\star\|^2+2\theta_m\|u_m-u^\star\|\|u_m-u_{m-1}\|+\theta^2_m\|u_m-u_{m-1}\|^2. \end{eqnarray} $ (3.54)

    Using (3.10) and Lemma 2.3, we have

    $ \begin{eqnarray} \| u_{m+1}-u^\star\|^2& = &\|\alpha_mf(r_m)+\beta_mz_m+\gamma_m Sz_m-u^\star\|^2\\ & = &\|\alpha_m(f(r_m)-u^\star)+\beta_m(z_m-u^\star)+\gamma_m (Sz_m-u^\star)\|^2 \\ & = &\|\alpha_m(f(r_m)-f(u^\star))+\beta_m(z_m-u^\star)+\gamma_m (Sz_m-u^\star)+\alpha_m(f(u^\star)-u^\star)\|^2\\ &\leq&\|\alpha_m(f(r_m)-f(u^\star))+\beta_m(z_m-u^\star)+\gamma_m (Sz_m-u^\star)\|^2\\&&+2\alpha_m\langle f(u^\star)-u^\star, u_{m+1}-u^\star\rangle\\ &\leq&\alpha_m\|f(r_m)-f(u^\star)\|^2+\beta_m\|z_m-u^\star\|^2+\gamma_m\| Sz_m-u^\star\|^2\\&&+2\alpha_m\langle f(u^\star)-u^\star, u_{m+1}-u^\star\rangle\\ &\leq&\alpha_mk^2\|r_m-u^\star\|^2+\beta_m\|z_m-u^\star\|^2+\gamma_m\| z_m-u^\star\|^2\\&&+2\alpha_m\langle f(u^\star)-u^\star, u_{m+1}-u^\star\rangle\\ &\leq&\alpha_mk\|r_m-u^\star\|^2+\beta_m\|z_m-u^\star\|^2+\gamma_m\| z_m-u^\star\|^2\\&&+2\alpha_m\langle f(u^\star)-u^\star, u_{m+1}-u^\star\rangle\\ & = &\alpha_mk\|r_m-u^\star\|^2+(1-\alpha_m)\|z_m-u^\star\|^2+2\alpha_m\langle f(u^\star)-u^\star, u_{m+1}-u^\star\rangle\\ &\leq&\alpha_mk\|r_m-u^\star\|^2+(1-\alpha_m)\|s_m-u^\star\|^2+2\alpha_m\langle f(u^\star)-u^\star, u_{m+1}-u^\star\rangle. \end{eqnarray} $ (3.55)

    Substituting (3.53) and (3.54) into (3.55), we obtain

    $ \begin{eqnarray} \| u_{m+1}-u^\star\|^2&\leq&\alpha_mk[\|u_m-u^\star\|^2+2\theta_m\|u_m-u^\star\|\|u_m-u_{m-1}\|+\theta^2_m\|u_m-u_{m-1}\|^2]\\&&+(1-\alpha_m)[|u_m-u^\star\|^2+2\phi_m\|u_m-u^\star\|\|u_m-u_{m-1}\|+\phi^2_m\|u_m-u_{m-1}\|^2]\\&&+2\alpha_m\langle f(u^\star)-u^\star, u_{m+1}-u^\star\rangle\\ &\leq& (1-(1-k)\alpha_m)\|u_m-u^\star\|^2+(1-k)\alpha_m\frac{2}{1-k}\langle f(u^\star)-u^\star, u_{m+1}-u^\star\rangle\\&&+\theta_m\|u_m-u_{m-1}\|[2\|u_m-u^\star\|+\theta_m\|u_m-u_{m-1}\|]\\&&+\phi_m\|u_m-u_{m-1}\|[2\|u_m-u^\star\|+\phi_m\|u_m-u_{m-1}\|]\\ &\leq& (1-(1-k)\alpha_m)\|u_m-u^\star\|^2+(1-k)\alpha_m\left[\frac{2}{1-k}\langle f(u^\star)-u^\star, u_{m+1}-u^\star\rangle\right.\\&&\left.+\frac{3\Gamma_8}{1-k}\cdot\frac{\theta_m}{\alpha_m}\|u_m-u_{m-1}\|+\frac{3\Gamma_9}{1-k}\cdot\frac{\phi_m}{\alpha_m}\|u_m-u_{m-1}\|\right], \, \forall m\geq m_0, \end{eqnarray} $

    where $ \Gamma_8 = \sup\limits_{m\in \mathbb{N}}\{\|u_m-u^\star\|, \theta\|u_m-u_{m-1}\|\} $ and $ \Gamma_9 = \sup\limits_{m\in \mathbb{N}}\{\|u_m-u^\star\|, \phi\|u_m-u_{m-1}\|\} $.

    Claim 4. The sequence $ \{\|u_m-u^\star\|^2\} $ converges to zero. Indeed, from (3.52), Remark 3.1, and Lemma 2.5, it is enough to show that $ \limsup\limits_{k\to \infty}\langle f(u^\star)-u^\star, u_{m_k+1}-u^\star\rangle\leq 0 $ for any subsequence $ \{\|u_{m_k}-u^\star\|^2\} $ of $ \{\|u_m-u^\star\|^2\} $ fulfilling

    $ \begin{align} \liminf\limits_{k\to \infty}(\|u_{m_k+1}-u^\star\|^2-\|u_{m_k}-u^\star\|^2)\geq 0. \end{align} $ (3.56)

    Now, we assume that $ \|u_{m_k}-u^\star\|^2 $ is a subsequence of $ \|u_{m}-u^\star\|^2 $ such that (3.56) holds, then

    $ \begin{align*} & \liminf\limits_{k\to \infty}(\|u_{m_k+1}-u^\star\|^2-\|u_{m_k}-u^\star\|^2)\\& = \liminf\limits_{k\to \infty}[(\|u_{m_k+1}-u^\star\|-\|u_{m_k}-u^\star\|)(\|u_{m_k+1}-u^\star\|+\|u_{m_k}-u^\star\|)]\geq 0. \end{align*} $

    By Claim 2 and condition $ (C_8) $, we get

    $ \begin{align*} &\limsup\limits_{k\to \infty} \begin{Bmatrix} (1-\alpha_{m_k})\|s_{m_k}-z_{m_k}-\rho\delta_{m_k}v_{m_k}\|^2\\+(1-\alpha_{m_k})(2-\rho)\rho\frac{\left(1-\frac{(q_{m_k}+h_{m_k}\mu)\eta_{m_k}}{\eta_{m_k+1}}\right)^2}{\left(1+\frac{(q_{m_k}+h_{m_k}\mu)\eta_{m_k}}{\eta_{m_k+1}}\right)^2}\|s_{m_k}-w_{m_k}\|^2\\+\beta_{m_k}\gamma_{m_k}\|z_{m_k}-Sz_{m_k}\|^2 \end{Bmatrix}\\&\leq \limsup\limits_{k\to \infty}\{\|u_{m_k}-u^\star\|^2-\| u_{m_k+1}-u^\star\|^2+\alpha_{m_k}\Gamma_7\}\\ & = -\liminf\limits_{k\to \infty}\{\|u_{m_k}-u^\star\|^2-\| u_{m_k+1}-u^\star\|^2\}, \end{align*} $

    which implies that

    $ \begin{align} \lim\limits_{k\to \infty}\|s_{m_k}-z_{m_k}-\rho\delta_{m_k}v_{m_k}\| = \lim\limits_{k\to \infty}\|s_{m_k}-w_{m_k}\| = \lim\limits_{k\to \infty}\|z_{m_k}-Sz_{m_k}\| = 0. \end{align} $ (3.57)

    On the other hand,

    $ \begin{align} \|s_{m_k}-z_{m_k}\| = \|s_{m_k}-z_{m_k}-\rho\delta_{m_k}v_{m_k}+\rho\delta_{m_k}v_{m_k}\| \leq \|s_{m_k}-z_{m_k}-\rho\delta_{m_k}v_{m_k}\|+\rho\delta_{m_k}\|v_{m_k}\|. \end{align} $ (3.58)

    By (3.8) and (3.23), we know that

    $ \begin{align} \delta_{m_k}\|v_{m_k}\| = &\frac{\langle s_{m_k}-w_{m_k}, v_{m_k}\rangle}{\|v_{m_k}\|}. \end{align} $ (3.59)

    Putting (3.59) into (3.58) and using the Cauchy-Schwarz inequality, we have

    $ \begin{align} \|s_{m_k}-z_{m_k}\|&\leq \|s_{m_k}-z_{m_k}-\rho\delta_{m_k}v_{m_k}\|+\rho \|s_{m_k}-w_{m_k}\|. \end{align} $

    Recalling (3.57), we have

    $ \begin{align} \lim\limits_{k\to \infty}\|s_{m_k}-z_{m_k}\| = 0. \end{align} $ (3.60)

    Also, from (3.3), we have

    $ \begin{align} \| s_{m_k}-u_{m_k}\| = \phi_{m_k}\|Ku_{m_k}-Ku_{m_k-1}\| \leq\phi_{m_k}\|u_{m_k}-u_{m_k-1}\| \leq \alpha_{m_k}\cdot\frac{\phi_{m_k}}{\alpha_{m_k}}\|u_{m_k}-u_{m_k-1}\|. \end{align} $ (3.61)

    By Remark 3.1, condition $ (C_8) $ and (3.61), we have

    $ \begin{align} \lim\limits_{k\to \infty} \| s_{m_k}-u_{m_k}\| = 0. \end{align} $ (3.62)

    Using (3.60) and (3.62), we have

    $ \begin{align} \lim\limits_{k\to \infty} \| z_{m_k}-u_{m_k}\|\leq \lim\limits_{k\to \infty} (\| z_{m_k}-s_{m_k}\|+ \| s_{m_k}-u_{m_k}\|) = 0. \end{align} $ (3.63)

    Again, from (3.10), we have

    $ \begin{align} \|u_{m_k+1}-z_{m_k}\|\leq\alpha_{m_k}\|f(r_{m_k})-z_{m_k}\|+\beta_{m_k}\|z_{m_k}-z_{m_k}\|+\gamma_{m_k}\| Sz_{m_k}-z_{m_k}\|. \end{align} $ (3.64)

    From condition $ (C_8) $, (3.57) and (3.64), we obtain

    $ \begin{align} \lim\limits_{k\to \infty} \| u_{m_k+1}-z_{m_k}\| = 0. \end{align} $ (3.65)

    Next, we have that

    $ \begin{align} \|u_{m_k+1}-u_{m_k}\|\leq\|u_{m_k+1}-z_{m_k}\|+\|z_{m_k}-s_{m_k}\|+\| s_{m_k}-u_{m_k}\|. \end{align} $ (3.66)

    Combining (3.60), (3.62), (3.65), and (3.66), we have

    $ \begin{align} \lim\limits_{k\to \infty} \| u_{m_k+1}-u_{m_k}\| = 0. \end{align} $ (3.67)

    Since the sequence $ \{u_{m_k}\} $ is bounded, then we know that a subsequence $ \{u_{m_{k_j}}\} $ of $ \{u_{m_k}\} $ exists such that $ u_{m_{k_j}}\rightharpoonup q^\star $. Furthermore,

    $ \begin{align} \limsup\limits_{k\to \infty}\langle f(u^\star)-u^\star, u_{m_k}-u^\star\rangle = \lim\limits_{j\to \infty}\langle f(u^\star)-u^\star, u_{m_{k_j}}-u^\star\rangle = \langle f(u^\star)-u^\star, q^\star-u^\star\rangle. \end{align} $ (3.68)

    Thus, we have $ s_{m_{k_j}}\rightharpoonup q^\star $ since $ \lim\limits_{k\to \infty} \| s_{m_k}-u_{m_k}\| = 0. $ Since $ \lim\limits_{k\to \infty} \| s_{m_k}-w_{m_k}\| = 0 $, it follows from Lemma 3.2 that $ q^\star\in VI(M, G) $. From (3.63), it follows that $ z_{m_{k_j}}\rightharpoonup q^\star $. Following the demiclosedness of $ I-S $ at zero as defined in Lemma 2.4, we know that $ q^\star\in F(S) $. Thus, $ q^\star\in F(S)\cap VI(M, G) $. By combining (3.68), $ q^\star\in F(S) $ and $ u^\star = P_{F(S)\cap VI(M, G)}\circ f(u^\star) $, we get

    $ \begin{align} \limsup\limits_{k\to \infty}\langle f(u^\star)-u^\star, u_{m_k}-u^\star\rangle = \langle f(u^\star)-u^\star, q^\star-u^\star\rangle\leq0. \end{align} $ (3.69)

    Using (3.67) and (3.69), we have

    $ \begin{eqnarray} \limsup\limits_{k\to \infty}\langle f(u^\star)-u^\star, u_{m_k+1}-u^\star\rangle&\leq& \limsup\limits_{k\to \infty}\langle f(u^\star)-u^\star, u_{m_k+1}-u_{m_k}\rangle + \limsup\limits_{k\to \infty}\langle f(u^\star)-u^\star, u_{m_k}-u^\star\rangle\\ & = &\langle f(u^\star)-u^\star, q^\star-u^\star\rangle\leq0. \end{eqnarray} $ (3.70)

    By Claim 3, Remark 3.1, (3.70), and Lemma 2.5, we obtain that $ \lim\limits_{m\to\infty}\|u_m-u^\star\| = 0 $, and this completes the proof of Theorem 3.1.

    Next, we propose our second and third algorithms as in Algorithms 3.2 and 3.3, which differ slightly from Algorithm 3.1.

    Algorithm 3.2.
    Initialization: Choose $ \eta_1 > 0, \phi > 0, \theta > 0, \rho\in \left(0, 2\right), \mu \in (0, 1) $ and let $ u_0, u_1\in H $ be arbitrary.
    Iterative Steps: Given the iterates $ u_{m-1} $ and $ u_m $ $ (m\geq1) $, calculate $ u_{m+1} $ as follows:
    Step 1: Choose $ \phi_m $ and $ \theta_m $ such that $ 0\leq \phi_m\leq \bar{\phi}_m $ and $ 0\leq \theta_m\leq \bar{\theta}_m $, where $ \bar{\phi}_m $ and $ \bar{\theta}_m $ are as defined in (3.1) and (3.2).
    Step 2: Set
                                                                                 $ \begin{eqnarray*} s_m = u_m+\phi_m(Ku_m-Ku_{m-1}), \\ r_m = u_m+\theta_m(Ju_m-Ju_{m-1}), \end{eqnarray*} $
    and compute
                                                                                           $ \begin{align*} w_{m}& = P_M(s_m-\eta_mGs_m). \end{align*} $
    If $ s_m = w_m $ or $ Gs_m = 0 $, stop, $ s_m $ is a solution of the VIP. Otherwise, do Step 3.
    Step 3: Compute
                                                                                      $ \begin{align*} z_{m} = P_{T_m}(s_m-\rho\eta_m\delta_mGw_m), \end{align*}$
    where $ T_m $, $ \delta_{m} $ and $ v_m $ are as defined in (3.7)–(3.9).
    Step 4: Compute
                                                                                 $\begin{align*} u_{m+1} = \alpha_mf(u_m)+\beta_mz_m+\gamma_m Sz_m. \end{align*} $
    Update $ \eta_{m+1} $ by (3.11).
    Set $ m: = m+1 $ and go back to Step 1.

    Algorithm 3.3.
    Initialization: Choose $ \eta_1 > 0, \phi > 0, \theta > 0, \rho\in \left(0, 2\right), \mu \in (0, 1) $ and let $ u_0, u_1\in H $ be arbitrary.
    Iterative Steps: Given the iterates $ u_{m-1} $ and $ u_m $ $ (m\geq1) $, calculate $ u_{m+1} $ as follows:
    Step 1: Choose $ \phi_m $ and $ \theta_m $ such that $ 0\leq \phi_m\leq \bar{\phi}_m $ and $ 0\leq \theta_m\leq \bar{\theta}_m $, where $ \bar{\phi}_m $ and $ \bar{\theta}_m $ are as defined in (3.1) and (3.2).
    Step 2: Set
                                                                            $\begin{eqnarray*} s_m = u_m+\phi_m(Ku_m-Ku_{m-1}), \\ r_m = u_m+\theta_m(Ju_m-Ju_{m-1}), \end{eqnarray*}$
    and compute
                                                                                     $\begin{align*} w_{m}& = P_M(s_m-\eta_mGs_m). \end{align*} $
    If $ s_m = w_m $ or $ Gs_m = 0 $, stop, $ s_m $ is a solution of the VIP. Otherwise, do Step 3.
    Step 3: Compute
                                                                                $ \begin{align*} z_{m} = P_{T_m}(s_m-\rho\eta_m\delta_mGw_m), \end{align*} $
    where $ T_m $, $ \delta_{m} $ and $ v_m $ are as defined in (3.7)–(3.9).
    Step 4: Compute
                                                                            $\begin{align*} u_{m+1} = \alpha_mf(s_m)+\beta_mz_m+\gamma_m Sz_m. \end{align*} $
    Update $ \eta_{m+1} $ by (3.11).
    Set $ m: = m+1 $ and go back to Step 1.

    Remark 3.2. In Algorithm 3.2, we replace the term $ f(r_m) $ in (3.10) of Algorithm 3.1 with $ f(u_m) $. Also, in Algorithm 3.3, we replace the term $ f(r_m) $ in (3.10) of Algorithm 3.1 with $ f(s_m) $. Now, the strong convergence theorems of Algorithms 3.2 and 3.3 will be stated without proofs. Their proofs are very similar to that of Theorem 3.1. Hence, we leave the proofs for the reader to verify.

    Theorem 3.2. Suppose that conditions $ (C_1) $–$ (C_8) $ hold and let $ \{u_m\} $ be the sequence generated by Algorithm 3.2. Then $ \{u_m\} $ converges strongly to an element $ u^\star\in F(S)\cap VI(M, G) $, where $ u^\star = P_{F(S)\cap VI(M, G)}\circ f(u^\star) $.

    Theorem 3.3. Suppose that conditions $ (C_1) $–$ (C_8) $ hold and let $ \{u_m\} $ be the sequence generated by Algorithm 3.3. Then $ \{u_m\} $ converges strongly to an element $ u^\star\in F(S)\cap VI(M, G) $, where $ u^\star = P_{F(S)\cap VI(M, G)}\circ f(u^\star) $.

    In this part of the work, we consider two numerical examples to demonstrate the computational efficiency of our Algorithms 3.1–3.3 (shortly, OAUAN Algs. 3.1–3.3) over some existing modified algorithms, namely, Algorithms 1 and 2 of Thong and Hieu [43] (shortly, TH Alg. 1 and TH Alg. 2), Algorithm 2 of Tian and Tong [47] (shortly, TT Alg. 2), Algorithm 3.1 of Ogwo et al. [33] (shortly, OAM Alg. 3.1), Algorithm 3.1 of Godwin et al. [14] (shortly, GAMY Alg. 3.1), and Algorithm 3.1 of Maluleka et al. [24] (shortly, MUA Alg. 3.1). All numerical simulations were performed in MATLAB R2020b on a desktop PC with an Intel® Core™ i7-3540M CPU @ 3.00GHz $ \times $ 4 and 400.00GB of memory.

    Example 4.1. Suppose that $ G:\mathbb{R}^k\to \mathbb{R}^k $ is defined by $ G(u) = Qu+q $, where $ q\in \mathbb{R}^k $ and $ Q = AA^T+B+C $, with $ A $ a $ k\times k $ matrix, $ B $ a $ k\times k $ skew-symmetric matrix, and $ C $ a $ k\times k $ diagonal matrix whose diagonal entries are nonnegative (hence $ Q $ is positive semidefinite). We define the feasible set $ M $ by

    $ \begin{align*} M = \{u\in \mathbb{R}^k:-5\leq u_i\leq 5, \, i = 1, \cdots, k\}. \end{align*} $

    It is not hard to see that the mapping $ G $ is monotone and $ L $-Lipschitz continuous with $ L = \|Q\| $ (hence, $ G $ is pseudo-monotone). For $ q = 0 $, the solution set $ VI(M, G) = \{0\} $. On the other hand, let $ Su $$ = \frac{3}{4}u\sin \|u\| $. Clearly, the only fixed point of $ S $ is 0, i.e., $ F(S) = \{0\} $. The mapping $ S $ is quasi-nonexpansive but not nonexpansive. Indeed, for $ k = 1 $, we have

    $ \begin{align*} |Su-0| = \left |\frac{3}{4}u\sin |u|\right|\leq \left|\frac{3u}{4}\right|\leq |u| = |u-0|, \, \forall u\in M. \end{align*} $

    Hence, $ S $ is quasi-nonexpansive. Moreover, if we take $ u = 2\pi $ and $ v = \frac{3\pi}{2} $, then we have

    $ \begin{align*} |Su-Sv| = \left|\frac{6\pi}{4}\sin2\pi-\frac{9\pi}{8}\sin \frac{3\pi}{2}\right| = \frac{9\pi}{8} > \frac{\pi}{2} = |u-v|. \end{align*} $

    Therefore, $ S $ is not nonexpansive. Notice that $ I-S $ is demiclosed at 0 and $ F(S)\cap VI(M, G) = \{0\}\neq\emptyset $. Furthermore, we take $ Ku = \sin u $, where for $ k > 1 $, $ \sin u = (\sin u_1, \sin u_2, \; \ldots\; , \sin u_k)^T $, and $ Ju = \frac{u}{2} $.
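    For readers who wish to reproduce this setting, a minimal MATLAB sketch of the problem data might look as follows. The construction of the skew-symmetric matrix B from a random matrix, and the contraction f used in the viscosity step, are assumptions made for illustration, since the text only prescribes random entries and does not specify f.

    % Problem data for Example 4.1 (sketch, Case I)
    k  = 50;
    A  = 1 + 99*rand(k);                     % entries in [1,100]
    B0 = 1 + 99*rand(k);  B = B0 - B0';      % skew-symmetric part (assumed construction)
    C  = diag(1 + 99*rand(k,1));             % nonnegative diagonal
    Q  = A*A' + B + C;  q = zeros(k,1);
    G   = @(u) Q*u + q;                      % monotone, Lipschitz with L = norm(Q)
    P_M = @(u) min(max(u, -5), 5);           % projection onto the box [-5,5]^k
    S   = @(u) 0.75*u*sin(norm(u));          % quasi-nonexpansive mapping
    K   = @(u) sin(u);                       % componentwise sine
    J   = @(u) u/2;
    f   = @(u) u/3;                          % illustrative contraction (assumption)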

    The parameters for all the algorithms are taken as follows:

    For Algorithms 3.1–3.3, we take $ \eta_1 = 0.9 $, $ \mu = 0.4 $, $ \alpha_m = \frac{1}{2m+20} $, $ \beta_m = \gamma_m = \frac{m}{2m+20} $, $ p_m = \frac{1}{(m+100)^{1.1}} $, $ q_m = \frac{m+1}{m} $, $ h_m = \frac{1}{m+100} $, $ \phi = 0.6 $, $ \theta = 0.9 $, $ \rho = 0.0001 $ and $ \epsilon_m = \frac{1}{(2m+1)^3} $.

    For TH Algs. 1 and 2, we take $ \gamma = 2 $, $ l = 0.5 $, $ \tau_1 = 0.8 $, $ \alpha_m = 0.5 $, $ \beta_m = 0.5 $ and $ \mu = 0.6 $.

    For Algorithm 2 of Tian and Tong [47] (TT Alg. 2), we take $ \alpha_m = 0.5 $, $ \beta_m = 0.5 $, $ \mu = 0.4 $ and $ \lambda_1 = \frac{1}{7} $.

    For Algorithm 3.1 of Godwin et al. [14] (GAMY Alg. 3.1), we take $ \alpha = 4 $, $ \lambda_1 = 0.5 $, $ \theta_m = \bar{\theta}_m $, $ \delta = 0.4 $, $ c'(x) = 2x $, $ \phi_m = \frac{2m+1}{5m+2} $, $ \beta_m = \frac{2m}{3m+2} $, $ \gamma = 1 $, $ \gamma_m = \Big(\frac{2}{3m+1}\Big)^2 $, $ \alpha_m = \frac{2}{3m+1} $, $ \mu = 0.8 $, $ Dx = Tx = 0.5x $ and $ f(x) = \frac{1}{3}x $.

    For Algorithm 3.1 of Maluleka et al. [24] (MUA Alg. 3.1), we take $ \theta = 0.9 $, $ \lambda_1 = 3.1 $, $ \mu_m = \frac{1}{(m+1)^2} $, $ \alpha_m = \frac{1}{m+1} $, $ \beta_m = 0.5 $ and $ \rho = 0.5 $.

    For Algorithm 3.1 of Ogwo et al. [33] (OAM Alg. 3.1), we take $ \alpha = 3 $, $ \lambda_1 = 0.5 $, $ \alpha_m = \bar{\alpha}_m $, $ \mu = 0.4 $, $ \beta_m = \frac{m}{m+10} $, $ \gamma_1 = 0.01 $, $ \tau_m = \frac{1}{(m+1)^2} $, $ \theta_m = \frac{1}{m+10} $, $ Dx = 0.01x $ and $ f(x) = 0.01x $.

    In this example, all entries of $ A $, $ B $ and $ C $ are generated randomly from [1, 100]. We consider four different dimensions for $ k $, Case Ⅰ: $ k = 50 $, Case Ⅱ: $ k = 100 $, Case Ⅲ: $ k = 300 $, Case Ⅳ: $ k = 500 $. The initial points $ u_0 = u_1 $ are chosen at random using randn(k, 1) in MATLAB, and the stopping criterion is $ \|u_{m+1}-u_m\|\leq 10^{-8} $. The results of the numerical simulations are presented in Table 1 and Figures 1 and 2.

    Table 1.  Numerical results for the four dimensions considered in Example 4.1.

    Algorithms        Case Ⅰ           Case Ⅱ           Case Ⅲ           Case Ⅳ
                      Iter.   CPU      Iter.   CPU      Iter.   CPU      Iter.   CPU
    OAUAN Alg. 3.1    15      0.0062   14      0.0043   15      0.0093   15      0.0205
    OAUAN Alg. 3.2    16      0.0099   16      0.0075   16      0.0096   17      0.0199
    OAUAN Alg. 3.3    17      0.0089   13      0.0037   14      0.0096   17      0.0242
    TH Alg. 1         33      0.0194   35      0.0363   35      0.0777   39      0.1864
    TH Alg. 2         38      0.0254   31      0.0413   38      0.0823   51      0.1878
    TT Alg. 2         23      0.0092   30      0.0181   36      0.0146   30      0.0565
    GAMY Alg. 3.1     90      0.0201   91      0.0399   99      0.0276   103     0.0712
    MUA Alg. 3.1      47      0.0207   47      0.0159   44      0.0294   45      0.0453
    OAM Alg. 3.1      40      0.0144   39      0.0076   41      0.0159   42      0.0330
    Figure 1.  Graph of the iterates for Cases Ⅰ and Ⅱ.
    Figure 2.  Graph of the iterates for Cases Ⅲ and Ⅳ.

    Example 4.2. Let $ {H} = \ell^2 $, i.e., $ {H} = \{u = (u_1, u_2, u_3, \cdots, u_i, \cdots):\sum\limits_{i = 1}^{\infty}|u_i|^2 < +\infty\} $. Let $ e, d\in\mathbb{R} $ be such that $ d > e > \frac{d}{2} > 0 $. Let $ {M} = \{u\in \ell^2:\|u\|\leq e\} $ and $ {G}u = (d-\|u\|)u $. Obviously, the solution set $ {VI(M, G)} = \{0\} $. Now, we show that $ {G} $ is $ L $-Lipschitz continuous on $ {H} $ and pseudo-monotone on $ {M} $. Indeed, for any $ u, v\in {H} $, we have

    $ \begin{align*} \|{G}u-{G}v\|& = \|(d-\|u\|)u-(d-\|v\|)v\|\\ & = \|d(u-v)-\|u\|(u-v)-(\|u\|-\|v\|)v\|\\ &\leq d\|u-v\|+\|u\|\|u-v\|+|\|u\|-\|v\||\|v\|\\ &\leq d\|u-v\|+e\|u-v\|+\|u-v\|e\\ & = (d+2e)\|u-v\|. \end{align*} $

    Hence, $ {G} $ is Lipschitz continuous with $ L = d+2e $. Now, let $ u, v\in {M} $ be such that $ \langle {G}u, v-u\rangle > 0 $; then $ (d-\|u\|)\langle u, v-u\rangle > 0 $. Since $ \|u\|\leq e < d $, we have $ \langle u, v-u\rangle > 0 $. Hence,

    $ \begin{align*} \langle {G}v, v-u\rangle = (d-\|v\|)\langle v, v-u\rangle\geq (d-\|v\|)\big(\langle v, v-u\rangle-\langle u, v-u\rangle\big)\geq (d-e)\|u-v\|^2\geq 0. \end{align*} $

    This shows that $ {G} $ is a pseudo-monotone mapping. If we set $ e = 3 $ and $ d = 5 $, the projection onto $ {M} $ is given by

    $ \begin{align} P_{{M}}(u) = \begin{cases} u, &\text{if}\, \, \|u\|\leq 3, \\ \frac{3u}{\|u\|}, &\text{otherwise}. \end{cases} \end{align} $ (4.1)

    Now, let $ Su = \frac{u}{2} $. It is not hard to show that the mapping $ S $ is nonexpansive (hence, quasi-nonexpansive), and $ F(S) = \{0\}\neq \emptyset $. Thus, $ F(S)\cap VI(M, G) = \{0\}\neq\emptyset $. We take the stopping criterion as $ \|u_{m+1}-u_m\| \leq 10^{-8} $ and maintain the same control parameters as in Example 4.1. Since we cannot sum to infinity in MATLAB, we consider the subspace $ \ell_0^2 $ of $ \ell^2 $ consisting of sequences with finitely many nonzero terms, defined by

    $ \ell_0^2(\mathbb{R}) = \{u_1\in \ell^2 : u_1 = (u_{1, 1}, u_{1, 2}, u_{1, 3}, \ldots , u_{1, i}, 0, 0, \ldots )\}, \; \; \mbox{ for some } i\geq1. $
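    A minimal MATLAB sketch of the data for this example, assuming $ e = 3 $, $ d = 5 $ and the truncation to the first $ i $ coordinates described here, is given below.

    % Problem data for Example 4.2 (sketch, Case I), truncated to i coordinates
    i   = 100;  d = 5;  e = 3;
    G   = @(u) (d - norm(u))*u;              % pseudo-monotone, Lipschitz with L = d + 2*e
    P_M = @(u) (norm(u) <= e)*u + (norm(u) > e)*(e/max(norm(u), eps))*u;   % formula (4.1)
    S   = @(u) u/2;                          % nonexpansive, F(S) = {0}
    u1  = randn(i,1);  u0 = u1;              % random initial points
    tol = 1e-8;                              % stop when norm(u_{m+1}-u_m) <= tol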

    The first $ i $ coordinates of the initial points are generated randomly, considering the following cases for $ i $: Case Ⅰ: $ i = 100 $, Case Ⅱ: $ i = 1,000 $, Case Ⅲ: $ i = 10,000 $, Case Ⅳ: $ i = 100,000 $. The results of the numerical simulations are presented in Table 2 and Figures 3 and 4.

    Table 2.  Numerical results for the four dimensions considered in Example 4.2.

    Algorithms        Case Ⅰ           Case Ⅱ           Case Ⅲ           Case Ⅳ
                      Iter.   CPU      Iter.   CPU      Iter.   CPU      Iter.   CPU
    OAUAN Alg. 3.1    13      0.0024   16      0.0042   17      0.0309   17      0.1011
    OAUAN Alg. 3.2    16      0.0067   17      0.0083   18      0.0220   19      0.1094
    OAUAN Alg. 3.3    16      0.0089   16      0.0081   17      0.0273   20      0.1105
    TH Alg. 1         37      0.0065   35      0.0286   40      0.1310   45      1.1786
    TH Alg. 2         34      1.0409   35      0.0190   37      0.1328   38      1.1063
    TT Alg. 2         36      0.0131   37      0.0101   38      0.0256   46      0.1978
    GAMY Alg. 3.1     67      0.0089   65      0.0081   69      0.0545   73      0.3740
    MUA Alg. 3.1      44      0.0083   42      0.0063   45      0.0467   47      0.2787
    OAM Alg. 3.1      33      0.0039   34      0.0128   37      0.0299   39      0.1892
    Figure 3.  Graph of the iterates for Cases Ⅰ and Ⅱ.
    Figure 4.  Graph of the iterates for Cases Ⅲ and Ⅳ.

    Remark 4.1. The numerical simulations in Examples 4.1 and 4.2 show that our proposed Algorithms 3.1–3.3 are competitive: they outperformed Algorithms 1 and 2 of Thong and Hieu [43], Algorithm 2 of Tian and Tong [47], Algorithm 3.1 of Ogwo et al. [33], Algorithm 3.1 of Godwin et al. [14], and Algorithm 3.1 of Maluleka et al. [24] in terms of both computational time and the number of iterations required to meet the specified stopping criterion.

    In this section, the solution of a variational inequality problem arising from an optimal control problem is approximated by our Algorithm 3.1. Let $ T > 0 $. We denote by $ L_2([0, T], \mathbb{R}^k) $ the Hilbert space of square-integrable, measurable vector functions $ s:[0, T]\to \mathbb{R}^k $ endowed with the inner product

    $ \begin{align*} \langle s, r\rangle = \int_{0}^{T}\langle s(g), r(g)\rangle dg, \end{align*} $

    and norm

    $ \begin{align*} \|s\|_2 = \sqrt[]{\langle s, s\rangle} < \infty. \end{align*} $

    Now, the following optimal control problem will be considered on [0, T]:

    $ \begin{align} s^*(g) = argmin\{\zeta(s):s\in S\}, \end{align} $ (5.1)

    supposing such a control exists. Here $ S $ denotes the set of admissible controls, which takes the form of a $ k $-dimensional box and consists of piecewise continuous functions:

    $ \begin{align*} S = \{s(g)\in L_2([0, T], \mathbb{R}^k):s_i(g)\in [s^-_i, s^+_i], \, \, i = 1, 2, ..., k\}. \end{align*} $

    In particular, the control can be a piecewise constant function (bang-bang).
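    In computations, the projection onto the box $ S $ is applied componentwise to a discretized control; a one-line MATLAB sketch, with assumed bound vectors s_lo and s_hi holding $ s_i^- $ and $ s_i^+ $, is:

    % Componentwise projection of a (discretized) control onto the box S
    P_S = @(s, s_lo, s_hi) min(max(s, s_lo), s_hi);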

    The terminal objective can be expressed as:

    $ \begin{align*} \zeta(s) = \theta(u(T)), \end{align*} $

    where $ \theta $ is a differentiable and convex function defined on the attainability set, and the trajectory $ u(g)\in L_2([0, T], \mathbb{R}^m) $ satisfies constraints in the form of a linear differential equation system:

    $ \begin{align} \dot{u}(g) = D(g)u(g)+B(g)s(g), \, \, u(0) = u_0, \, \, g\in [0, T], \end{align} $ (5.2)

    where $ D(g)\in \mathbb{R}^{m\times m} $ and $ B(g)\in \mathbb{R}^{m\times k} $ are matrices that are continuous for all $ g\in [0, T] $. By the Pontryagin maximum principle, there exists a function $ x^*\in L_2([0, T], \mathbb{R}^m) $ such that the triple $ (u^*, x^*, s^*) $ solves the following system for a.e. $ g\in [0, T] $:

    $ \begin{eqnarray} \left\{\begin{array}{lc} \dot{u^*}(g) = D(g)u^*(g)+B(g)s^*(g), &\\ u^*(0) = u_0, \end{array}\right. \end{eqnarray} $ (5.3)
    $ \begin{eqnarray} \left\{\begin{array}{lc} \dot{x^*}(g) = -D(g)^Tx^*(g), & \\ x^*(T) = \triangledown \theta(u^*(T)), \end{array}\right. \end{eqnarray} $ (5.4)
    $ \begin{eqnarray} 0\in B(g)^Tx^*(g)+N_S(s^*(g)), \end{eqnarray} $ (5.5)

    where $ N_S(s) $ is the normal cone to $ S $ at $ s $, defined by

    $ \begin{eqnarray} N_S(s) = \left\{\begin{array}{ll} \emptyset, & \text{if}\, \, \, s\notin S, \\ \{\ell\in H:\langle\ell, r-s\rangle\leq0\, \, \forall\, r\in S\}, & \text{if}\, \, \, s\in S. \end{array}\right. \end{eqnarray} $ (5.6)

    Let $ Fs(g) = B(g)^Tx(g) $, where $ F $ is shown by Khoroshilova [20] to be the gradient of the objective cost function $ \zeta $. Then the inclusion (5.5) can be expressed as the following variational inequality problem:

    $ \begin{align} \langle Fs^*, r-s^*\rangle\geq 0, \, \, \, \forall\, \, r\in S. \end{align} $ (5.7)

    Next, we discretize the continuous functions: take a natural number $ N $ and set the mesh size $ h = \frac{T}{N} $. Furthermore, we identify any discretized control $ s^N = (s_0, s_1, \cdots, s_{N}) $ with its piecewise constant extension:

    $ \begin{align*} s^N(g) = s_j, \, \, \, \forall \, g\in [g_j, g_{j+1}), \, \, j = 0, 1, \cdots, N-1. \end{align*} $

    Again, any discretized state $ u^N = (u_0, u_1, \cdots, u_{N}) $ is identified with its piecewise linear interpolation

    $ \begin{align} u^N(g) = u_j+\frac{g-g_j}{h}(u_{j+1}-u_j), \, g\in [g_j, g_{j+1}), \, \, j = 0, 1, \cdots, N-1. \end{align} $ (5.8)

    The same approach can be used to identify the co-state variable $ x^N = (x_0, x_1, \cdots, x_{N}) $.

    The system of ordinary differential equations (ODEs) (5.3) and (5.4) will be solved by the Euler method [49]

    $ \begin{eqnarray} \left\{\begin{array}{lc} u^N_{j+1} = u^N_j+h[D(g_j)u^N_j+B(g_j)s^N_j], \\ u^N_0 = u_0, \end{array}\right. \end{eqnarray} $ (5.9)
    $ \begin{eqnarray} \left\{\begin{array}{lc} x^N_j = x^N_{j+1}+hD(g_j)^Tx^N_{j+1}, \\ x^N_N = \triangledown\theta(u^N_N). \end{array}\right. \end{eqnarray} $ (5.10)
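    A MATLAB sketch of the two Euler sweeps (5.9) and (5.10) is given below; it returns the discretized gradient values $ Fs_j = B(g_j)^Tx_j $ on the grid. The handles D_fun, B_fun and grad_theta, which supply $ D(g) $, $ B(g) $ and $ \triangledown\theta $, are assumptions introduced for illustration.

    % Forward Euler for the state (5.9) and backward sweep for the co-state (5.10).
    % s is a k-by-(N+1) array of control values on the grid g_0, ..., g_N.
    function Fs = control_gradient(s, u0, T, D_fun, B_fun, grad_theta)
    N = size(s, 2) - 1;  h = T/N;  g = linspace(0, T, N+1);
    m = numel(u0);
    u = zeros(m, N+1);  u(:,1) = u0;
    for j = 1:N                                    % state: forward in time
        u(:,j+1) = u(:,j) + h*(D_fun(g(j))*u(:,j) + B_fun(g(j))*s(:,j));
    end
    x = zeros(m, N+1);  x(:,N+1) = grad_theta(u(:,N+1));   % terminal condition
    for j = N:-1:1                                 % co-state: backward in time
        x(:,j) = x(:,j+1) + h*D_fun(g(j))'*x(:,j+1);
    end
    Fs = zeros(size(s));
    for j = 1:N+1
        Fs(:,j) = B_fun(g(j))'*x(:,j);             % gradient values of the cost at s
    end
    end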

    Next, we use Algorithm 3.1 to solve the problem in the following example:

    Example 5.1. (see [4])

    $ \begin{align*} \text{minimize}\, \, -u_1(2)&+(u_2(2))^2, \\ \text{subject to}\, \, \, \, \dot{u_1}(g)& = u_2(g), \\ \dot{u_2}(g)& = s(g), \, \, \, \forall g\in [0, 2], \\ u_1(0)& = 0, \, \, \, u_2(0) = 0, \\ s(g)&\in [-1, 1]. \end{align*} $

    The exact solution of the problem in Example 5.1 is

    $ \begin{eqnarray*} s^* = \left\{\begin{array}{lc} 1, \, \, \, \, \, \, \, if\, \, g\in [0, 1.2), \\ -1, \, \, if\, \, g\in [1.2, 2]. \end{array}\right. \end{eqnarray*} $

    The initial controls $ s_0(g) = s_1(g) $ are randomly taken in [-1, 1]. For this, we use the same parameters defined in Example 4.1 and set $ Su = \frac{u}{2} $. The stopping criterion for this section is $ \|u_{m+1}-u_m\|\leq 10^{-7} $. The approximate optimal control and the corresponding trajectories produced by Algorithm 3.1 are shown in Figure 5.
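    For Example 5.1, the corresponding data (written for the sketch above and derived directly from the problem statement) would be:

    % Data for Example 5.1 with u = (u_1, u_2) and control s
    T  = 2;
    D_fun = @(g) [0 1; 0 0];              % encodes u1' = u2, u2' = s
    B_fun = @(g) [0; 1];
    grad_theta = @(uT) [-1; 2*uT(2)];     % gradient of theta(u) = -u1 + u2^2
    u0 = [0; 0];
    P_S = @(s) min(max(s, -1), 1);        % admissible controls s(g) in [-1,1]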

    Figure 5.  Random initial control (green) and optimal control (purple) on the left and optimal trajectories on the right for Example 5.1 generated by Algorithm 3.1.

    It is well known that images are, in most cases, distorted by the acquisition process. The purpose of image restoration is to recover the original image from its noisy observation. The image restoration problem can be modeled as the following underdetermined system of linear equations:

    $ \begin{align} v = Fu+w, \end{align} $ (6.1)

    where $ F:\mathbb{R}^N\to \mathbb{R}^M \; (M < N) $ is a bounded linear operator, $ u\in \mathbb{R}^N $ is the original image and $ v\in \mathbb{R}^M $ is the observed image corrupted by noise $ w $. It is well known that solving the model (6.1) is equivalent to solving the following LASSO problem [39]:

    $ \begin{eqnarray} \min\limits_{u\in \mathbb{R}^N}\{k\|u\|_1+\frac{1}{2}\|v-Fu\|^2_2\}, \end{eqnarray} $ (6.2)

    where $ k > 0 $. It is worth noting that, according to [40], one can recast the LASSO problem (6.2) as a variational inequality problem by letting $ {G}u = F^T(Fu-v) $. In this case, $ {G} $ is monotone (hence pseudo-monotone) and Lipschitz continuous with $ L = \|F^TF\| $.
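    In MATLAB, the corresponding operator and its Lipschitz constant can be formed directly once the blurring operator is available as a matrix; the explicit random F and sparse test signal below are stand-ins introduced only to keep the sketch self-contained.

    % VI operator associated with the LASSO model (6.2) (sketch)
    M = 64;  N = 128;
    F = randn(M, N);                      % stand-in for the blurring operator (assumption)
    u_true = full(sprandn(N, 1, 0.1));    % sparse test signal (assumption)
    v = F*u_true + 0.01*randn(M, 1);      % noisy observation
    G = @(u) F'*(F*u - v);                % monotone, hence pseudo-monotone
    L = norm(F'*F);                       % Lipschitz constant of G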

    Now, we compare the restoration efficiency of our suggested Algorithms 3.1–3.3 (shortly, OAUAN Algs. 3.1–3.3) with Algorithms 1 and 2 of Thong and Hieu [43] (shortly, TH Alg. 1 and TH Alg. 2), Algorithm 2 of Tian and Tong [47] (shortly, TT Alg. 2), Algorithm 3.1 of Ogwo et al. [33] (shortly, OAM Alg. 3.1), Algorithm 3.1 of Godwin et al. [14] (shortly, GAMY Alg. 3.1), and Algorithm 3.1 of Maluleka et al. [24] (shortly, MUA Alg. 3.1). The test images are Austine and Peacock, of sizes $ 289\times 350 $ and $ 245\times 245 $, respectively. The images were degraded by a Gaussian blur of size $ 9\times 9 $ with standard deviation $ \sigma = 4 $. The performance of each algorithm is measured via the signal-to-noise ratio (SNR), defined by

    $ \begin{equation} SNR = 25\log_{10}\left(\frac{\|u\|_2}{\|u-u^*\|_2}\right), \end{equation} $ (6.3)

    where $ u^* $ is the restored image and $ u $ is the original image. In this experiment, we maintain the same parameters used for all the algorithms in Example 4.1, with stopping criterion $ E_m = \|u_{m+1}-u_m\|\leq 10^{-5} $. The numerical results for this experiment are shown in Figures 6–9 and Tables 3–6.
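    The quality measure (6.3) is computed directly from the vectorized images; a one-line MATLAB sketch, using the constant 25 exactly as it appears in (6.3), is:

    % SNR of a restored image u_star against the original image u, as in (6.3)
    snr_value = @(u, u_star) 25*log10(norm(u(:))/norm(u(:) - u_star(:)));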

    Figure 6.  Austine's image deblurring by various algorithms.
    Figure 7.  Peacock's image deblurring by various algorithms.
    Figure 8.  Graph corresponding to Tables 3 and 4.
    Figure 9.  Graph corresponding to Tables 5 and 6.
    Table 3.  Numerical comparison of various algorithms using their SNR values for Austine's image.

    Austine.png ($ 285\times 350 $)
    m     OAUAN Alg. 3.1   OAUAN Alg. 3.2   OAUAN Alg. 3.3   OAM Alg. 3.1   GAMY Alg. 3.1
          SNR              SNR              SNR              SNR            SNR
    50    54.18938         40.5451          33.1598          28.1770        26.6383
    100   54.2745          40.7152          34.2100          28.8195        26.6932
    150   55.3164          41.3918          34.8141          29.5183        27.7202
    200   55.3532          41.17770         34.5151          29.9243        27.7442
    Table 4.  Numerical comparison of various algorithms using their SNR values for Austine's image.

    Austine.png ($ 285\times 350 $)
    m     MUA Alg. 3.1   TT Alg. 2   TH Alg. 1   TH Alg. 2
          SNR            SNR         SNR         SNR
    50    26.6726        21.18938    21.5451     13.1598
    100   26.6726        25.2745     21.7152     13.2100
    150   26.8450        25.3164     21.3918     13.8141
    200   26.9953        25.3532     21.1777     13.5151
    Table 5.  Numerical comparison of various algorithms using their SNR values for Peacock's image.

    Peacock.png ($ 285\times 350 $)
    m     OAUAN Alg. 3.1   OAUAN Alg. 3.2   OAUAN Alg. 3.3   OAM Alg. 3.1   GAMY Alg. 3.1
          SNR              SNR              SNR              SNR            SNR
    40    53.17939         40.6452          33.2599          28.2771        26.7384
    80    54.3746          40.8153          34.3101          28.9196        26.7933
    120   55.4165          41.4919          34.9142          29.6184        27.8203
    150   55.4533          41.27771         34.6152          29.9244        27.8443
    Table 6.  Numerical comparison of various algorithms using their SNR values for Peacock's image.

    Peacock.png ($ 285\times 350 $)
    m     MUA Alg. 3.1   TT Alg. 2   TH Alg. 1   TH Alg. 2
          SNR            SNR         SNR         SNR
    40    26.7727        21.28939    21.6452     13.2599
    80    26.8727        25.3746     21.8153     13.3101
    120   26.9451        25.4165     21.4919     13.9142
    150   26.9955        25.4533     21.2778     13.6152

    It is well known that the higher the SNR value, the better the quality of the restored image. From Figures 6–9 and Tables 3–6, it is evident that our Algorithms 3.1–3.3 restored the blurred images better than Algorithms 1 and 2 of Thong and Hieu [43], Algorithm 2 of Tian and Tong [47], Algorithm 3.1 of Ogwo et al. [33], Algorithm 3.1 of Godwin et al. [14], and Algorithm 3.1 of Maluleka et al. [24]. Hence, our algorithms are more effective and applicable than many existing methods.

    In this work, we have introduced three novel iterative algorithms for finding a common solution of a fixed point problem for a quasi-nonexpansive mapping and a pseudo-monotone variational inequality problem. Our algorithms embed double inertial steps, which accelerate their convergence rates. Numerical experiments have shown that our algorithms outperformed several existing algorithms with a single or no inertial term. Further, we considered a new self-adaptive step size technique that produces a non-monotonic sequence of step sizes while also correctly incorporating a number of well-known step sizes; the step size is designed to lessen the algorithms' reliance on the initial step size. Numerical tests showed that our step size is more effective and guarantees that our methods require less execution time. Our convergence results were obtained without imposing stringent conditions on the control parameters. The class of pseudo-monotone operators studied in this work is more general than the class of monotone operators studied in [43,47] and several other articles. To test the applicability and efficiency of our methods on real-world problems, we utilized them to solve optimal control and image restoration problems.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this work through the project number (PSAU/2023/01/8980).

    The authors declare that they have no conflict of interest.
