
Viscosity-type inertial iterative methods for variational inclusion and fixed point problems

  • In this paper, we have introduced some viscosity-type inertial iterative methods for solving fixed point and variational inclusion problems in Hilbert spaces. Our methods compute the viscosity approximation, the fixed point iteration, and the inertial extrapolation jointly at the start of every iteration. Under suitable assumptions, we have demonstrated strong convergence theorems without computing the resolvent of the associated monotone operators. We have used some numerical examples to illustrate the efficiency of our iterative approaches and to compare them with related work.

    Citation: Mohammad Dilshad, Fahad Maqbul Alamrani, Ahmed Alamer, Esmail Alshaban, Maryam G. Alshehri. Viscosity-type inertial iterative methods for variational inclusion and fixed point problems[J]. AIMS Mathematics, 2024, 9(7): 18553-18573. doi: 10.3934/math.2024903




    A fixed point problem (FPP) is a significant problem that provides a natural framework for studying a broad range of nonlinear problems with applications. The fixed point problem for a mapping T is defined as

    {\rm Fix}(T) = \{s\in \mathbb{E}: T(s) = s\}, (1.1)

    where \mathbb{E} is a real Hilbert space and T:\mathbb{E}\to \mathbb{E} is a nonexpansive mapping.

    For a single-valued monotone operator Q:\mathbb{E}\to \mathbb{E} and a set-valued operator G:\mathbb{E}\to 2^{\mathbb{E}} , the variational inclusion problem ({\rm VI_sP}) is to find s\in \mathbb{E} such that

    0\in Q(s)+G(s). (1.2)

    Several problems, such as image recovery, optimization, and variational inequality problems, can be transformed into an {\rm FPP} or a {\rm VI_sP} . Due to such applicability, several iterative methods have been formulated in recent decades to solve {\rm FPP_s} and {\rm VI_sP_s} in linear and nonlinear spaces; see, for example, [4,8,9,12,13,15,32].

    Douglas and Rachford [11] formulated the forward-backward splitting method for {\rm VI_sP} :

    s_{n+1} = R_{G}^{\mu_n}[I-\mu_n Q](s_n), (1.3)

    where \mu_n > 0 , R_{G}^{\mu_n} = [I+\mu_n G]^{-1} is the resolvent of G (also known as the backward operator), and [I-\mu_n Q] is known as the forward operator. We can rewrite (1.3) as

    \frac{s_n-s_{n+1}}{\mu_n}\in Q(s_n)+G(s_{n+1}), (1.4)

    which was studied by Ansari and Babu [2] in a nonlinear space. If Q = 0 , the monotone inclusion problem ({\rm MI_sP}) is to find s\in \mathbb{E} such that

    0\in G(s), (1.5)

    which was studied in [26]. The proximal point method, also known as the regularization method, is one of the renowned methods for {\rm MI_sP} ; it was studied by Lions and Mercier [18]:

    s_{n+1} = [I+\mu_n G]^{-1}(s_n). (1.6)

    Since the resolvent R_{G}^{\mu_n} appearing in the backward step is nonexpansive, such algorithms have been widely studied by numerous authors; see, for example, [7,10,15,16,17,19,23,27].
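    To make the forward and backward steps in (1.3) concrete, here is a minimal sketch, assuming the model data Q(s) = A^{T}(As-b) (the gradient of a least-squares term) and G = \partial(\omega\|\cdot\|_1) , whose resolvent [I+\mu G]^{-1} is componentwise soft-thresholding; the function names and data are illustrative, not part of the original text.

```python
import numpy as np

def soft_threshold(x, t):
    # Backward step: proximal map of t*||.||_1, i.e., the resolvent [I + t*d||.||_1]^{-1}.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, b, omega, mu, n_iter=500):
    # Iterates s_{n+1} = R_G^{mu}[I - mu*Q](s_n) as in (1.3),
    # with Q(s) = A^T(As - b) and G = subdifferential of omega*||.||_1.
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        forward = s - mu * (A.T @ (A @ s - b))    # forward (explicit) step
        s = soft_threshold(forward, mu * omega)   # backward (resolvent) step
    return s

# Illustrative usage with random data and a step size below 2/kappa.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
mu = 1.0 / np.linalg.norm(A.T @ A, 2)             # kappa = ||A^T A||
s_hat = forward_backward(A, b, omega=0.1, mu=mu)
```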

    An essential development in the field of nonlinear science is inertial extrapolation, introduced by Polyak [22] for the fast convergence of algorithms. Alvarez and Attouch [6] implemented inertial extrapolation to obtain the inertial proximal point method for solving {\rm MI_sP} : for \mu_n > 0 , find s_{n+1}\in \mathbb{E} such that

    0\in \mu_n G(s_{n+1})+s_{n+1}-s_n-\beta_n(s_n-s_{n-1}), (1.7)

    and equivalently

    s_{n+1} = R_{G}^{\mu_n}[s_n+\beta_n(s_n-s_{n-1})], (1.8)

    where \beta_n\in[0, 1) is the extrapolation coefficient and \beta_n(s_n-s_{n-1}) is known as the inertial step. They proved the weak convergence of (1.8) assuming

    \sum\limits_{n = 1}^{\infty}\beta_n\|s_n-s_{n-1}\|^2 < +\infty. (1.9)

    Inertial extrapolation has been demonstrated to have good convergence properties and a high convergence rate; therefore, it has been improved and used in a variety of nonlinear problems, see [3,5,13,14,28,29] and the references therein.

    The following inertial proximal point approach was presented by Moudafi and Oliny in [21] to solve VIsP:

    \left\{ \begin{array}{ll} u_n = s_n+\beta_n(s_n-s_{n-1}),\\ s_{n+1} = [I+\mu_n G]^{-1}(u_n-\mu_n Q u_n), \end{array}\right. (1.10)

    where \mu_n < 2/\kappa and \kappa is the Lipschitz constant of the operator Q . They proved the weak convergence of (1.10) using the same assumption (1.9). Recently, Thong and Vinh [30] studied the {\rm VI_sP} and {\rm FPP} . They proposed the following viscosity inertial method (Algorithm 1.1) for estimating the common solution in Hilbert spaces.

    Algorithm 1.1 (Algorithm 3 of [30]) Viscosity inertial method (VIM)
    Choose arbitrary points s_0 and s_1 and set n = 1 .
    Step 1. Compute
    u_n = s_n+\theta_n(s_n-s_{n-1}),
    v_n = [I+\lambda G]^{-1}(I-\lambda Q)u_n.
    If u_n = v_n , then stop ( u_n is a solution of {\rm VI_sP} ). If not, proceed to Step 2.
    Step 2. Compute
    s_{n+1} = \psi_n k(u_n)+(1-\psi_n)Tv_n.
    Let n = n+1 and proceed to Step 1.
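    For reference in the numerical section, here is a minimal sketch of Algorithm 1.1, assuming G is given by a matrix so that the backward step [I+\lambda G]^{-1} is a linear solve; the helper names and the callable parameter sequences are our own illustrative choices.

```python
import numpy as np

def vim(Q, G_mat, T, k, s0, s1, lam, theta_n, psi_n, tol=1e-15, max_iter=10_000):
    # Viscosity inertial method (Algorithm 1.1, i.e., Algorithm 3 of [30]).
    M = np.eye(len(s0)) + lam * G_mat       # backward operator I + lam*G
    s_prev, s = s0, s1
    for n in range(1, max_iter):
        u = s + theta_n(n) * (s - s_prev)                   # inertial step
        v = np.linalg.solve(M, u - lam * Q(u))              # [I+lam G]^{-1}(I-lam Q)u
        s_prev, s = s, psi_n(n) * k(u) + (1 - psi_n(n)) * T(v)
        if np.linalg.norm(s - s_prev) < tol:                # Tol_n = ||s_{n+1}-s_n||
            break
    return s
```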

    In the above algorithm, Q is \eta -inverse strongly monotone (in short, \eta -ism), G is a maximal monotone operator, k is a contraction, T is a nonexpansive mapping, \lambda\in(0, 2\eta) , and the control sequences fulfill the requirements listed below:

    (ⅰ) \psi_n\in (0, 1),\; \lim\limits_{n\to\infty}\psi_n = 0,\; \sum\limits_{n = 1}^{\infty}\psi_n = \infty ;

    (ⅱ) \theta_n\in [0, \theta), \theta > 0, \lim\limits_{n\to\infty}\frac{\theta_n}{\psi_n}\|s_n-s_{n-1}\| = 0.

    Recently, Reich and Taiwo [24] investigated hybrid viscosity-type iterative schemes for solving variational inclusion problems in which the viscosity approximation and the inertial extrapolation were computed jointly. Alamer and Dilshad [1] studied a Halpern-type iterative method for solving split common null point problems in which the Halpern iteration and the inertial iteration are computed simultaneously at the start of every iteration.

    Motivated by the work in [24,30], we present two viscosity-type inertial iteration methods for common solutions of {\rm VI_sP_s} and {\rm FPP_s} . In our algorithms, we implement the viscosity iteration, the fixed point iteration, and the inertial extrapolation in the first step of each iteration. Our methods do not need the inverse strong monotonicity assumptions on the operators Q and G that are considered in the literature. We prove the strong convergence of the presented methods without calculating the resolvent of the associated monotone operators Q and G .

    We organize the paper as follows: In Section 2, we discuss some basic definitions and useful lemmas. In Section 3, we propose viscosity-type iterative methods for solving {\rm VI_sP_s} and {\rm FPP_s} and prove the strong convergence theorems. In Section 4, as a consequence of our methods, we present Halpern-type inertial iterative methods for {\rm VI_sP_s} and {\rm FPP_s} . Section 5 describes some applications for solving variational inequality and optimization problems. In Section 6, we show the efficiency of the suggested methods by comparing them with Algorithm 3 of [30].

    Let \{s_n\} be a sequence in \mathbb{E} . Then s_n\to s denotes strong convergence of \{s_n\} to s and s_n \rightharpoonup s denotes weak convergence. The weak {w} -limit of \{s_n\} is defined by

    {w}_w(s_n) = \{s\in \mathbb{E}: s_{n_j}\rightharpoonup s\; {\rm as}\; j\to \infty,\; {\rm where}\; \{s_{n_j}\}\; {\rm is\; a\; subsequence\; of}\; \{s_n\}\}.

    The following identity is well-known in the Hilbert space \mathbb{E} :

    \begin{eqnarray} \|s_1\pm w_1\|^2 = \|s_1\|^2\pm 2 \langle s_1, w_1\rangle+\|w_1\|^2. \end{eqnarray} (2.1)

    Definition 2.1. A mapping k:\mathbb{E}\to \mathbb{E} is called

    {(i)} a contraction, if \|k(s_1)-k(w_1)\| \leq \tau\|s_1-w_1\|, \; \forall\; s_1, w_1\in \mathbb{E}, \tau\in (0, 1);

    {(ii)} nonexpansive, if \|k(s_1)-k(w_1)\| \leq \|s_1-w_1\|, \forall\; s_1, w_1 \in \mathbb{E}.

    Definition 2.2. Let Q:\mathbb{E}\to \mathbb{E} . Then

    {(i)} Q is called monotone, if \langle Q(s_1)-Q(w_1), \; s_1-w_1 \rangle\geq 0, \forall\; s_1, w_1\in \mathbb{E};

    {(ii)} Q is called \eta -ism, if there exists \eta > 0 such that

    \langle Q(s_1)-Q(w_1),\; s_1-w_1\rangle\geq \eta\|Q(s_1)-Q(w_1)\|^2, \; \forall\; s_1,w_1\in \mathbb{E};

    {(iii)} Q is called \delta -strongly monotone, if there exists \delta > 0 such that

    \langle Q(s_1)-Q(w_1),\; s_1-w_1\rangle\geq \delta\|s_1-w_1\|^2, \; \forall\; s_1,w_1\in \mathbb{E};

    {(iv)} Q is called \kappa -Lipschitz continuous, if there exists \kappa > 0 such that

    \|Q(s_1)-Q(w_1)\|\leq \kappa \|s_1-w_1\|,\; \forall\; s_1,w_1\in \mathbb{E}.

    Definition 2.3. Let G:\mathbb{E}\to 2^\mathbb{E} . Then

    {(i)} the graph of G is defined by Graph(G) = \{(s_1, w_1)\in \mathbb{E}\times \mathbb{E}: w_1\in G(s_1)\};

    {(ii)} G is called monotone, if for all (s_1, w_1), (s_2, w_2)\in Graph(G), \; \langle w_1-w_2, \; s_1-s_2\rangle\geq 0;

    {(iii)} G is called maximal monotone, if G is monotone and (I+\mu G)(\mathbb{E}) = \mathbb{E} , for \mu > 0 .

    Lemma 2.1. [31] Let s_n\in \mathbb{R} be a nonnegative sequence such that

    s_{n+1}\leq (1-\lambda_n) s_n + \lambda_n \xi_n,\; \; n\geq n_0 \in\mathbb{N},

    where \lambda_n\in (0, 1) and \xi_n \in \mathbb{R}\; fulfill the requirements given below:

    \lim\limits_{n\to \infty}\lambda_n = 0,\; \sum\limits_{n = 1}^{\infty}\lambda_n = \infty,\; \; {\rm and\; \; } \limsup\limits_{n\to \infty}\xi_n\leq 0.

    Then s_n\to 0 as n\to \infty .

    Lemma 2.2. [20] Let y_{n}\in \mathbb{R} be a sequence that does not decrease at infinity in the sense that there exists a subsequence y_{n_k} of y_{n} such that y_{n_k} < y_{n_k+1} for all k \geq 0 . Also consider the sequence of integers \{\Upsilon(n)\}_{n\geq n_{0}} defined by

    \begin{eqnarray} \Upsilon(n) = \max\{k\leq n: y_{k}\leq y_{k+1}\} . \end{eqnarray}

    Then \{\Upsilon(n)\}_{n\geq n_{0}} is a nondecreasing sequence verifying \lim\limits_{n\to\infty}\Upsilon(n) = \infty , and for all \; n\geq n_{0} , the following inequalities hold:

    \begin{eqnarray} y_{\Upsilon(n)} &\leq& y_{\Upsilon(n)+1},\\ y_{n}&\leq& y_{\Upsilon(n)+1} . \end{eqnarray}

    In the present section, we define our viscosity-type inertial iteration methods for solving {\rm FPP} and {\rm VI_sP} . We symbolize the solution set of {\rm FPP} by \Lambda and of {\rm VI_sP} by \Delta and assume that \Lambda \cap \Delta \neq \emptyset . We adopt the following assumptions in order to prove the convergence of the sequences obtained from the suggested methods:

    ({\bf S_1}) k:\mathbb{E}\to \mathbb{E} is a \tau -contraction with 0 < \tau < 1 ;

    ({\bf S_2}) Q:\mathbb{E}\to \mathbb{E} is a \delta -strongly monotone and \kappa -Lipschitz continuous operator and G:\mathbb{E}\rightrightarrows{\mathbb{E}} is a maximal monotone operator;

    ({\bf S_3}) \mu_n is a sequence such that 0 < \bar{\mu} \leq \mu_n \leq \mu < \frac{1}{2\delta} and \kappa\leq 2\delta ;

    ({\bf S_4}) \lambda_n\in(0, 1) satisfies \lim\limits_{n\to \infty}\lambda_n = 0 and \sum\limits_{n = 1}^{\infty}\lambda_n = \infty ;

    ({\bf S_5}) \sigma_n is a positive sequence satisfying \sum\limits_{n = 1}^{\infty}\sigma_n < \infty and \lim\limits_{n\to \infty}\frac{\sigma_n}{\lambda_n} = 0 .
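    For instance, the parameter sequences used later in Example 6.1, \lambda_n = \frac{1}{\sqrt{100+n}} and \sigma_n = \frac{1}{(1+n)^2} , satisfy ({\bf S_4}) and ({\bf S_5}) , since

    \lambda_n\to 0,\quad \sum\limits_{n = 1}^{\infty}\frac{1}{\sqrt{100+n}} = \infty,\quad \sum\limits_{n = 1}^{\infty}\frac{1}{(1+n)^{2}} < \infty,\quad {\rm and}\quad \frac{\sigma_n}{\lambda_n} = \frac{\sqrt{100+n}}{(1+n)^{2}}\to 0.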

    Theorem 3.1. If Assumptions ({\bf S_1})–({\bf S_5}) are fulfilled, then the sequences induced by Algorithm 3.1 converge strongly to s^{\star}\in{\Delta\cap\Lambda}, which solves the following variational inequality:

    \begin{eqnarray} \langle k(s^{\star})-s^{\star},\; \; y-s^{\star} \rangle \leq 0,\; \; \forall\; y\in \Delta\cap\Lambda. \end{eqnarray} (3.1)

    Algorithm 3.1. Viscosity-type inertial iterative method-I (VIIM-I)
    Let \beta\in [0, 1) and \mu_n > 0 be given. Choose arbitrary points s_0 and s_1 and set n = 1 .
    Iterative step. For the iterates s_{n} and s_{n-1} , n \geq 1 , select 0 < \beta_n < \bar{\beta}_n , where
    \bar{\beta}_n = \left\{ \begin{array}{ll} \min \big\{\frac{\sigma_n}{\|s_n-s_{n-1}\|}, \quad \beta \big\}, \quad {\rm if}\; s_n \neq s_{n-1}, \\ \quad \beta, \quad \qquad\qquad\qquad {\rm otherwise}, \end{array}\right. (3.2)
    compute
    u_n = \lambda_n k(s_n) +(1-\lambda_n)\big[T(s_n)+ \beta_n(s_n-s_{n-1})\big], (3.3)
    0\in Q(u_n)+G(s_{n+1})+\frac{s_{n+1}-u_{n}}{\mu_n}. (3.4)
    If s_{n+1} = u_n , then stop. If not, set n = n+1 and proceed to the iterative step.
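    Note that the implicit step (3.4) is equivalent to u_n-\mu_n Q(u_n)\in (I+\mu_n G)(s_{n+1}) . Below is a minimal computational sketch of Algorithm 3.1, assuming G is given by a matrix (as in the numerical examples of Section 6) so that s_{n+1} is obtained from a linear solve; the helper names, the callable parameter sequences, and the random draw of \beta_n are our illustrative choices.

```python
import numpy as np

def viim1(Q, G_mat, T, k, s0, s1, lam_n, mu_n, sigma_n, beta=0.3,
          tol=1e-15, max_iter=10_000, seed=0):
    # Viscosity-type inertial iterative method-I (Algorithm 3.1).
    rng = np.random.default_rng(seed)
    dim = len(s0)
    s_prev, s = s0, s1
    for n in range(1, max_iter):
        gap = np.linalg.norm(s - s_prev)
        beta_bar = min(sigma_n(n) / gap, beta) if gap > 0 else beta  # rule (3.2)
        beta_n = rng.uniform(0.0, beta_bar)                          # 0 < beta_n < beta_bar
        lam = lam_n(n)
        u = lam * k(s) + (1 - lam) * (T(s) + beta_n * (s - s_prev))  # step (3.3)
        mu = mu_n(n)
        # Step (3.4): u - mu*Q(u) in (I + mu*G)(s_{n+1}); a linear solve for matrix G.
        s_next = np.linalg.solve(np.eye(dim) + mu * G_mat, u - mu * Q(u))
        s_prev, s = s, s_next
        if np.linalg.norm(s - s_prev) < tol:                         # Tol_n
            break
    return s
```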

    Remark 3.1. From (3.2), we have \beta_n\leq \frac{\sigma_n}{\|s_n-s_{n-1}\|} . Since \beta_n > 0 and \sigma_n satisfies \sum\limits_{n = 1}^{\infty}\sigma_n < \infty , we obtain \lim\limits_{n\to\infty} \beta_n \|s_n-s_{n-1}\| = 0 and \lim\limits_{n\to\infty} \frac{\beta_n \|s_n-s_{n-1}\|}{\lambda_n}\leq \lim\limits_{n\to\infty} \frac{\sigma_n}{\lambda_n} = 0 .

    Proof. Let s^{\star}\in{\Delta\cap\Lambda} , then -Q(s^{\star})\in G(s^{\star}) and using (3.4), we have \frac{u_n-s_{n+1}}{\mu_n}-Q(u_n)\in G(s_{n+1}) . Since G is monotone, we get

    \begin{eqnarray} \Big \langle \frac{u_{n}-s_{n+1}}{\mu_n}-Q(u_n)+Q(s^{\star}),\, s_{n+1}-s^{\star}\Big \rangle\geq 0. \end{eqnarray} (3.5)

    Since Q is strongly monotone with constant \delta > 0 , we have

    \begin{eqnarray} \Big \langle Q(s_{n+1})-Q(s^{\star}),\, s_{n+1}-s^{\star}\Big \rangle\geq \delta\|s_{n+1}-s^{\star}\|^2. \end{eqnarray} (3.6)

    By adding (3.5) and (3.6), we get

    \begin{eqnarray} \Big \langle \frac{u_{n}-s_{n+1}}{\mu_n}+ Q(s_{n+1})-Q(u_n),\, s_{n+1}-s^{\star}\Big \rangle\geq \delta\|s_{n+1}-s^{\star}\|^2 \end{eqnarray} (3.7)

    or

    \begin{eqnarray} \frac{1}{\mu_n}\Big \langle u_{n}-s_{n+1},\, s_{n+1}-s^{\star}\Big \rangle + \Big \langle Q(s_{n+1})-Q(u_n),\, s_{n+1}-s^{\star}\Big \rangle\geq \delta\|s_{n+1}-s^{\star}\|^2. \end{eqnarray} (3.8)

    By using the Cauchy–Schwarz inequality and the Lipschitz continuity of Q , we have

    \begin{eqnarray} \Big \langle Q(s_{n+1})-Q(u_n),\, s_{n+1}-s^{\star}\Big \rangle&\leq& \|Q(s_{n+1})-Q(u_n)\|\|s_{n+1}-s^{\star}\|\\ &\leq&\kappa \|s_{n+1}-u_n\| \|s_{n+1}-s^{\star}\|\\ &\leq& \frac{\kappa}{2}\big\{ \|s_{n+1}-u_n\|^2 + \|s_{n+1}-s^{\star}\|^2 \big\}. \end{eqnarray} (3.9)

    By using (2.1), we have

    \begin{eqnarray} \|u_{n}-s^{\star}\|^2 = \| u_{n}-s_{n+1}+s_{n+1}-s^{\star}\|^2 = \| u_n-s_{n+1}\|^2+\|s_{n+1}-s^{\star}\|^2+2 \langle u_{n}-s_{n+1}, \, s_{n+1}-s^{\star}\rangle . \end{eqnarray} (3.10)

    Considering (3.8)–(3.10), we get

    \begin{eqnarray} \|s_{n+1}-s^{\star}\|^2 \leq \|u_{n}-s^{\star}\|^2 -\| u_n-s_{n+1}\|^2+ \mu_n \kappa \Big\{ \|s_{n+1}-u_n\|^2 + \|s_{n+1}-s^{\star}\|^2 \Big\}-2\mu_n \delta\|s_{n+1}-s^{\star}\|^2. \end{eqnarray} (3.11)

    Since \kappa\leq 2\delta , we have

    \begin{eqnarray} \|s_{n+1}-s^{\star}\|^2 \leq \|u_{n}-s^{\star}\|^2 -(1-2\delta \mu_n)\|s_{n+1}-u_n\|^2 \end{eqnarray} (3.12)

    or

    \begin{eqnarray} \|s_{n+1}-s^{\star}\|^2 \leq \|u_{n}-s^{\star}\|^2 . \end{eqnarray} (3.13)

    Since \lim\limits_{n\to\infty}\frac{\beta_n\|s_n-s_{n-1}\|}{\lambda_n} = 0 (Remark 3.1), there exists K_1 > 0 such that \frac{\beta_n\|s_n-s_{n-1}\|}{\lambda_n}\leq K_1 , that is \beta_n\|s_n-s_{n-1}\|\leq \lambda_n K_1 . By using (3.13) and mathematical induction, bearing in mind that k is a contraction and T is nonexpansive, it follows from (3.3) that

    \begin{eqnarray} \label{p10} \|u_n-s^{\star}\|& = & \|\lambda_n k(s_n)+(1-\lambda_n) \big[T(s_n)+\beta_n(s_n-s_{n-1})\big]-s^{\star}\|\\ &\leq & \lambda_n \|k(s_n)-s^{\star}\|+ (1-\lambda_n)\|T(s_n)+\beta_n(s_n-s_{n-1})-s^{\star}\| \\ &\leq & \lambda_n \|k(s_n)-k(s^{\star})\|+ \lambda_n\|k(s^{\star})-s^{\star}\|+(1-\lambda_n) \big[ \|T(s_n)-s^{\star}\|+\beta_n\|s_n-s_{n-1}\|\big] \\ &\leq & \lambda_n \tau\|s_n-s^{\star}\|+ \lambda_n\|k(s^{\star})-s^{\star}\|+(1-\lambda_n) \|s_n-s^{\star}\|+\lambda_n K_1 \\ &\leq & [1- \lambda_n(1-\tau)]\|s_n-s^{\star}\|+ \lambda_n(1-\tau) \frac{\|k(s^{\star})-s^{\star}\|+ K_1}{1-\tau}\\ &\leq & \max\Big\{\|s_n-s^{\star}\|,\; \frac{\|k(s^{\star})-s^{\star}\|+K_1}{1-\tau} \Big\}\\ &\leq & \max\Big\{\|u_{n-1}-s^{\star}\|,\; \frac{\|k(s^{\star})-s^{\star}\|+K_1}{1-\tau} \Big\} \\ &\vdots &\\ &\leq & \max\Big\{\|u_{0}-s^{\star}\|,\; \frac{\|k(s^{\star})-s^{\star}\|+K_1}{1-\tau} \Big\}, \end{eqnarray}

    meaning that \{u_n\} is bounded and hence \{s_n\} is also bounded. Let v_n = T(s_n)+\beta_n(s_n-s_{n-1}) . Note that v_n is also bounded. By using (3.3), we get

    \begin{eqnarray} \|u_n-s^{\star}\|^2& = & \|\lambda_n k(s_n)+(1-\lambda_n)v_n-s^{\star}\|^2\\ & = & \lambda_n^{2} \|k(s_n)-s^{\star}\|^2+ (1-\lambda_n)^2\|v_n-s^{\star}\|^2+2\lambda_n (1-\lambda_n) \langle k(s_n)-s^{\star}, v_n-s^{\star}\rangle. \end{eqnarray} (3.14)

    Now, we need to calculate

    \begin{eqnarray} \|v_n-s^{\star}\|^2& = & \|T(s_n)+\beta_n(s_n-s_{n-1})-s^{\star}\|^2\\ &\leq & \|T(s_n)-s^{\star}\|^2+ 2 \beta_n \langle s_n-s_{n-1}, v_n-s^{\star} \rangle \\ &\leq& \|T(s_n)-s^{\star}\|^2+ 2 \beta_n \|s_n-s_{n-1}\| \|v_n-s^{\star}\|\\ &\leq& \|s_n-s^{\star}\|^2+ 2 \Theta_n \|v_n-s^{\star}\|, \end{eqnarray} (3.15)

    where \Theta_n = \beta_n\|s_n-s_{n-1}\| , and

    \begin{eqnarray} \langle k(s_n)-s^{\star}, v_n-s^{\star}\rangle& = & \langle k(s_n)-k(s^{\star}), v_n-s^{\star}\rangle+ \langle k(s^{\star})-s^{\star}, v_n-s^{\star}\rangle \\ &\leq&\| k(s_n)-k(s^{\star})\|\| v_n-s^{\star}\|+ \langle k(s^{\star})-s^{\star}, v_n-s^{\star}\rangle \\ &\leq &\frac{1}{2}\big\{\tau^2 \|s_n-s^{\star}\|^2+\|v_n-s^{\star}\|^2\big\}+\langle k(s^{\star})-s^{\star},\; v_n-s^{\star} \rangle \end{eqnarray} (3.16)

    and

    \begin{eqnarray} \langle k(s^{\star})-s^{\star},\; \; v_n-s^{\star} \rangle& = &\langle k(s^{\star})-s^{\star},\; \; T(s_n)+\beta_n(s_n-s_{n-1})-s^{\star} \rangle\\ &\leq& \langle k(s^{\star})-s^{\star},\; \; T(s_n)-s^{\star} \rangle+\langle k(s^{\star})-s^{\star},\; \; \beta_n(s_n-s_{n-1}) \rangle\\ &\leq& \langle k(s^{\star})-s^{\star},\; \; T(s_n)-s^{\star} \rangle+\beta_n \|k(s^{\star})-s^{\star}\|\|s_n-s_{n-1}\|\\ &\leq& \langle k(s^{\star})-s^{\star},\; \; T(s_n)-s^{\star} \rangle+\Theta_n \|k(s^{\star})-s^{\star}\|. \end{eqnarray} (3.17)

    By using (3.14)–(3.17), we get

    \begin{eqnarray} \|u_n-s^{\star}\|^2&\leq&\lambda_n^2 \|k(s_n)-s^{\star}\|^2+(1-\lambda_n)^2\Big\{ \|s_n-s^{\star}\|^2+ 2 \Theta_n \|v_n-s^{\star}\|\Big\}\\ &&+\lambda_n(1-\lambda_n)\tau^2 \|s_n-s^{\star}\|^2+ \lambda_n(1-\lambda_n)\|v_n-s^{\star}\|^2\\ &&+ 2\lambda_n(1-\lambda_n)\langle k(s^{\star})-s^{\star},\; T(s_n)-s^{\star} \rangle+2 \lambda_n(1-\lambda_n)\Theta_n \|k(s^{\star})-s^{\star}\| \\ &\leq& [1-\lambda_n(1-\tau^2)]\|s_n-s^{\star}\|^2\\ &&+\lambda_n\Big\{ \lambda_n \|k(s_n)-s^{\star}\|^2 +2(1-\lambda_n)\langle k(s^{\star})-s^{\star},\; T(s_n)-s^{\star} \rangle \\ && +\Theta_n \|k(s^{\star})-s^{\star}\|+\frac{2 \Theta_n}{\lambda_n} \|v_n-s^{\star}\| \Big\}. \end{eqnarray} (3.18)

    Let \gamma_n = \lambda_n(1-\tau^2) . Then it follows from (3.12) and (3.18) that

    \begin{eqnarray} \|s_{n+1}-s^{\star}\|^2 &\leq& (1-\gamma_n)\|s_n-s^{\star}\|^2+\gamma_n U_n-(1-2\delta \mu_n)\|s_{n+1}-u_n\|^2, \end{eqnarray} (3.19)

    where

    \begin{eqnarray} U_n = \frac{\lambda_n \|k(s_n)-s^{\star}\|^2 +2(1-\lambda_n)\langle k(s^{\star})-s^{\star},\; \; T(s_n)-s^{\star} \rangle +\Theta_n \|k(s^{\star})-s^{\star}\|+\frac{2 \Theta_n}{\lambda_n}\|v_n-s^{\star}\| }{1-\tau^2}. \end{eqnarray} (3.20)

    Now, we continue the proof in the following two cases:

    Case Ⅰ: If \{\|s_n-s^{\star}\|\} is monotonically decreasing then there exists a number N_1 such that \|s_{n+1}-s^{\star}\|\leq \|s_n-s^{\star}\| for all n\geq N_1 . Hence, boundedness of \{\|s_n-s^{\star}\|\} implies that \{\|s_n-s^{\star}\|\} is convergent. Therefore, using (3.19), we have

    \begin{eqnarray} (1-2\delta \mu_n)\|s_{n+1}-u_n\|^2 &\leq& \|s_n-s^{\star}\|^2-\|s_{n+1}-s^{\star}\|^2-\gamma_n\|s_n-s^{\star}\|^2+\gamma_n U_n. \end{eqnarray} (3.21)

    Since 2\delta \mu_n < 1 , \lim\limits_{n\to \infty}\gamma_n = 0 , and \{\|s_n-s^{\star}\|\} is convergent, we obtain

    \begin{eqnarray} \lim\limits_{n\to \infty}\|s_{n+1}-u_n\| = 0. \end{eqnarray} (3.22)

    By using (3.22) and Remark 3.1, we get

    \begin{eqnarray} \lim\limits_{n\to \infty}\|v_{n}-T(s_n)\| = \lim\limits_{n\to \infty}\beta_n\|s_n-s_{n-1}\| = 0. \end{eqnarray} (3.23)

    Boundedness of \{s_n\} and \{v_n\} implies that there exist M_1 and M_2 such that \|s_n-s^{\star}\|\leq M_1 and \|k(s^{\star})-v_n\|\leq M_2 , hence

    \begin{eqnarray} \|u_n-v_n\|& = & \lambda_n \|k(s_n)-v_n\|\\ &\leq& \lambda_n \big[\|k(s_n)-k(s^{\star})\|+\|k(s^{\star})-v_n\|\big]\\ &\leq& \lambda_n \big[\tau\|s_n-s^{\star}\|+\|k(s^{\star})-v_n\|\big]\\ &\leq& \lambda_n \big[\tau M_1+ M_2\big]\to 0\; \; {\rm as\; \; }n\to \infty. \end{eqnarray} (3.24)

    The following can be obtained easily by using (3.22) and (3.23):

    \begin{eqnarray} \lim\limits_{n\to \infty}\|Ts_n-s_n\| = \lim\limits_{n\to \infty}\|s_n-v_n\| = 0. \end{eqnarray} (3.25)

    Since \{s_n\} is bounded, there exists a subsequence \{s_{n_k}\} of \{s_n\} such that s_{n_k}\rightharpoonup \bar{s} . As a consequence, from (3.22) and (3.25), it follows that u_{n_k}\rightharpoonup \bar{s} and v_{n_k}\rightharpoonup \bar{s} . Now, we show that \bar{s}\in \Delta\cap\Lambda . Since T is nonexpansive, I-T is demiclosed at zero, so (3.25) yields \bar{s}\in {\rm Fix}(T) . From (3.4), we have

    \begin{eqnarray} z_{n_k} = \frac{u_{n_k}-s_{n_k+1}}{\mu_{n_k}}- Q(u_{n_k})\in G(s_{n_k+1}). \end{eqnarray} (3.26)

    Since 0 < \bar{\mu} < \mu_n < \mu and from (3.22), we have \|s_{n_k+1}-u_{n_k}\|\to 0 and by the Lipschitz continuity of Q, we get

    \begin{eqnarray} z_{n_k}\to -Q(\bar{s})\; \; {\rm as\; \; }k\to \infty. \end{eqnarray} (3.27)

    Taking k\to \infty , since the graph of the maximal monotone operator is weakly-strongly closed, we get -Q(\bar{s})\in G(\bar{s}) , that is 0\in Q(\bar{s})+ G(\bar{s}) . Thus \bar{s}\in \Delta\cap\Lambda.

    Next, we show that \{s_n\} strongly converges to s^{\star} . From (3.19), it immediately follows that

    \begin{eqnarray} \|s_{n+1}-s^{\star}\|^2 &\leq& (1-\gamma_n)\|s_n-s^{\star}\|^2+\gamma_n U_n \end{eqnarray} (3.28)

    and

    \begin{eqnarray} \label{p27} &&\limsup\limits_{n\to \infty} U_n \\ && = \limsup\limits_{n\to \infty}\frac{\lambda_n \|k(s_n)-s^{\star}\|^2 +2(1-\lambda_n)\langle k(s^{\star})-s^{\star},\; \; T(s_n)-s^{\star} \rangle +\Theta_n \|k(s^{\star})-s^{\star}\|+\frac{2 \Theta_n}{\lambda_n}\|v_n-s^{\star}\| }{1-\tau^2}\\ && = \langle k(s^{\star})-s^{\star},\; \; \bar{s}-s^{\star} \rangle \leq 0. \end{eqnarray}

    By using Lemma 2.1, we deduce that \{s_n\} converges strongly to s^{\star} , where s^{\star} is the solution of the variational inequality (3.1). Further, it follows that \|s_n-u_n\|\to 0 , u_n\rightharpoonup\bar{s}\in \Delta\cap\Lambda , and s_n\to s^{\star} as n\to \infty ; thus \bar{s} = s^{\star} .

    Case Ⅱ: If Case Ⅰ is false, then the function \rho:\mathbb{N}\to \mathbb{N} defined by \rho(n) = \max\{m\leq n: \|s_{m}-s^{\star}\| \leq\|s_{m+1}-s^{\star}\|\} is nondecreasing, \rho(n)\to\infty as n\to\infty , and

    \begin{eqnarray} 0\leq \|s_{\rho(n)}-s^{\star}\| \leq\|s_{\rho(n)+1}-s^{\star}\|, \; \forall\; n\geq n_0. \end{eqnarray} (3.29)

    For the same reasons as in the proof of Case Ⅰ, we obtain \|s_{\rho(n)}-s^{\star}\| \to 0 and \|s_{\rho(n)}-u_{\rho(n)}\|\to 0 as n \to \infty . By using (3.19) and (3.29), we obtain

    \begin{eqnarray} 0\leq \|s_{\rho(n)}-s^{\star}\|^2\leq U_{\rho(n)}. \end{eqnarray} (3.30)

    Thus, we get \|s_{\rho(n)}-s^{\star}\|\to 0 as n\to \infty . Keeping in mind Lemma 2.2, we have

    \begin{eqnarray} 0\leq \|s_n-s^{\star}\|\leq \max\big\{\|s_n-s^{\star}\|, \|s_{\rho(n)}-s^{\star}\| \big\}\leq \|s_{\rho(n)+1}-s^{\star}\|. \end{eqnarray} (3.31)

    Consequently, from (3.31), \|s_n-s^{\star}\|\to 0 as n\to \infty . Therefore, s_n\to s^{\star} as n\to \infty , where s^{\star} is a solution of the variational inequality (3.1). This completes the proof.

    Theorem 3.2. If Assumptions ({\bf S_1})–({\bf S_5}) are satisfied, then the sequences induced by Algorithm 3.2 converge strongly to s^{\star}\in{\Lambda}\cap{\Delta} , which solves the following variational inequality:

    \begin{eqnarray} \langle k(s^{\star})-s^{\star},\; \; w-s^{\star} \rangle \leq 0,\; \; \forall\; w\in {\Lambda}\cap{\Delta}. \end{eqnarray}

    Algorithm 3.2. Viscosity-type inertial iterative method-II (VIIM-II)
    Let \beta\in [0, 1) and \mu_n > 0 be given. Choose arbitrary points s_0 and s_1 and set n = 1 .
    Iterative step. For the iterates s_{n} and s_{n-1} , n \geq 1 , select 0 < \beta_n < \bar{\beta}_n , where
    \bar{\beta}_n = \left\{ \begin{array}{ll} \min \big\{\frac{\sigma_n}{\|s_n-s_{n-1}\|}, \quad \beta \big\}, \quad {\rm if}\; s_n \neq s_{n-1}, \\ \quad \beta, \quad \qquad\qquad\qquad {\rm otherwise}, \end{array}\right. (3.32)
    compute
    u_n = \lambda_n k(s_n) +(1-\lambda_n)T(s_n)+ \beta_n(s_n-s_{n-1}), (3.33)
    0\in Q(u_n)+G(s_{n+1})+\frac{s_{n+1}-u_{n}}{\mu_n}. (3.34)
    If s_{n+1} = u_n , then stop. If not, set n = n+1 and go back to the iterative step.
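    Relative to Algorithm 3.1, only the first step changes: the inertial term now sits outside the convex combination. In the viim1 sketch given after Algorithm 3.1, step (3.3) would be replaced by the following line (same assumptions as before).

```python
u = lam * k(s) + (1 - lam) * T(s) + beta_n * (s - s_prev)   # step (3.33)
```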

    Proof. Let s^{\star}\in {\Lambda}\cap{\Delta} , then by using (3.33), we obtain

    \begin{eqnarray} \|u_n-s^{\star}\|& = & \|\lambda_n k(s_n)+(1-\lambda_n)T(s_n)+\beta_n(s_n-s_{n-1})-s^{\star}\|\\ &\leq& \lambda_n\| k(s_n)-s^{\star}\|+(1-\lambda_n)\|T(s_n)-s^{\star}\|+\beta_n\|s_n-s_{n-1}\|\\ &\leq& \lambda_n\| k(s_n)-k(s^{\star})\|+ \lambda_n\|k(s^{\star})-s^{\star}\|+(1-\lambda_n)\|s_n-s^{\star}\|+\beta_n\|s_n-s_{n-1}\|\\ &\leq&\lambda_n\big[\tau\|s_n-s^{\star}\|+\|k(s^{\star})-s^{\star}\|+\frac{\beta_n}{\lambda_n}\|s_n-s_{n-1}\|\big]+(1-\lambda_n)\|s_n-s^{\star}\|\\ &\leq&\big[1-\lambda_n(1-\tau)\big]\|s_n-s^{\star}\|+\lambda_n(1-\tau)\frac{\|k(s^{\star})-s^{\star}\|+M_1}{1-\tau} \\ &\leq& \max\Big\{ \|s_n-s^{\star}\|, \frac{\|k(s^{\star})-s^{\star}\|+M_1}{1-\tau} \Big\} \\ &\leq& \max\Big\{ \|u_{n-1}-s^{\star}\|, \frac{\|k(s^{\star})-s^{\star}\|+M_1}{1-\tau} \Big\} \\ &&\vdots \\ &\leq& \max\Big\{ \|u_{0}-s^{\star}\|, \frac{\|k(s^{\star})-s^{\star}\|+M_1}{1-\tau} \Big\}, \end{eqnarray} (3.35)

    where M_1 > 0 is an upper bound for \frac{\beta_n}{\lambda_n}\|s_n-s_{n-1}\| (see Remark 3.1), implying that \{u_n\} is bounded and so is \{s_n\} . Let x_n = \lambda_n k(s_n)+(1-\lambda_n)T(s_n) , then by using (2.1), we get

    \begin{eqnarray} \|u_n-s^{\star}\|^2& = & \|x_n+\beta_n(s_n-s_{n-1})-s^{\star}\|^2\\ & = & \|x_n-s^{\star}\|^2+2\langle x_n-s^{\star}, \beta_n(s_n-s_{n-1}) \rangle+\beta_{n}^{2}\|s_n-s_{n-1}\|^2, \end{eqnarray} (3.36)

    and

    \begin{eqnarray} \|x_n-s^{\star}\|^2& = & \|\lambda_n k(s_n)+(1-\lambda_n)T(s_n)-s^{\star}\|^2\\ & = & \lambda_n^{2}\|k(s_n)-s^{\star}\|^2+2 \lambda_n (1-\lambda_n)\langle k(s_n)-s^{\star}, T(s_n)-s^{\star} \rangle \\ &&+(1-\lambda_n)^{2}\|T(s_n)-s^{\star}\|^2\\ & = & \lambda_n^{2}\|k(s_n)-s^{\star}\|^2+2 \lambda_n (1-\lambda_n)\langle k(s_n)-k(s^{\star}), T(s_n)-s^{\star} \rangle \\ &&+2 \lambda_n (1-\lambda_n)\langle k(s^{\star})-s^{\star}, T(s_n)-s^{\star} \rangle+(1-\lambda_n)^{2}\|T(s_n)-s^{\star}\|^2\\ &\leq& \lambda_n^{2}\|k(s_n)-s^{\star}\|^2+(1-\lambda_n)^{2}\|T(s_n)-s^{\star}\|^2 +2 \lambda_n\langle k(s_n)-k(s^{\star}), T(s_n)-s^{\star} \rangle \\ &&+2 \lambda_n\langle k(s^{\star})-s^{\star}, T(s_n)-s^{\star} \rangle+2 \lambda_n^2 \langle k(s^{\star})-s^{\star}, T(s_n)-s^{\star} \rangle\\ &\leq& \lambda_n^{2}\|k(s_n)-s^{\star}\|^2+(1-\lambda_n)^{2}\|s_n-s^{\star}\|^2 +2 \lambda_n \tau\| s_n-s^{\star}\|\|s_n-s^{\star}\| \\ &&+2 \lambda_n\langle k(s^{\star})-s^{\star}, T(s_n)-s^{\star} \rangle+2 \lambda_n^2 \| k(s^{\star})-s^{\star}\|\| s_n-s^{\star}\|\\ &\leq& [1-2\lambda_n(1-\tau)]\|s_n-s^{\star}\|^2+\lambda_n\Big\{\lambda_n \| k(s_n)-s^{\star}\|^2+\lambda_n\|s_n-s^{\star}\|^2 \\ &&+2\lambda_n\|k(s_n)-s^{\star}\|\|s_n-s^{\star}\|+2 \langle k( s^{\star})-s^{\star}, T(s_n)-s^{\star} \rangle\Big\} \end{eqnarray} (3.37)

    and

    \begin{eqnarray} \langle x_n-s^{\star},\; \beta_n(s_n-s_{n-1}) \rangle & = & \langle \lambda_n k(s_n)+(1-\lambda_n)T(s_n)-s^{\star},\; \beta_n(s_n-s_{n-1}) \rangle \\ & = & \lambda_n \langle k(s_n)-s^{\star},\; \beta_n(s_n-s_{n-1}) \rangle \\ &&+ (1-\lambda_n)\langle T(s_n)-s^{\star}, \; \beta_n (s_n-s_{n-1}) \rangle\\ &\leq& \lambda_n \| k(s_n)-s^{\star}\|\; \beta_n\|s_n-s_{n-1}\| \\ &&+ (1-\lambda_n)\|T(s_n)-s^{\star}\|\; \beta_n \|s_n-s_{n-1}\|\\ &\leq& \beta_n\|s_n-s_{n-1}\|\big\{\|k(s_n)-s^{\star}\|+\|s_n-s^{\star}\|\big\}. \end{eqnarray} (3.38)

    From (3.36)–(3.38), we get

    \begin{eqnarray} \label{r6} \|u_n-s^{\star}\|^2 &\leq& [1-\lambda_n(1-\tau)]\|s_n-s^{\star}\|^2+\lambda_n\Big\{\lambda_n \| k(s_n)-s^{\star}\|^2+\lambda_n\|s_n-s^{\star}\|^2 \\ &&+2\lambda_n\|k(s_n)-s^{\star}\|\|s_n-s^{\star}\|+2 \langle k( s^{\star})-s^{\star}, T(s_n)-s^{\star} \rangle\\ &&+\frac{2 \beta_n}{\lambda_n}\|s_n-s_{n-1}\|\bigg(\|k(s_n)-s^{\star}\|+\|s_n-s^{\star}\|\bigg)+\frac{\beta_{n}^{2}\|s_n-s_{n-1}\|^2}{\lambda_n}\Big\} \end{eqnarray}

    or

    \begin{eqnarray} \|u_n-s^{\star}\|^2 &\leq& (1-\varsigma_n)\|s_n-s^{\star}\|^2+\varsigma_n V_n, \end{eqnarray} (3.39)

    where \varsigma_n = \lambda_n(1-\tau) and

    \begin{eqnarray} \label{r8} V_n& = &\frac{1}{(1-\tau)}\Big[\lambda_n \| k(s_n)-s^{\star}\|^2+\lambda_n\|s_n-s^{\star}\|^2 +2\lambda_n\|k(s_n)-s^{\star}\| \|s_n-s^{\star}\|\\ &&+2 \langle k( s^{\star})-s^{\star}, T(s_n)-s^{\star} \rangle+\frac{2 \beta_n}{\lambda_n}\|s_n-s_{n-1}\| \Big(\|k(s_n)-s^{\star}\|+\|s_n-s^{\star}\|\Big)+\frac{\beta_{n}^{2}\|s_n-s_{n-1}\|^2}{\lambda_n}\Big]. \end{eqnarray}

    By taking together (3.12) and (3.39), we obtain

    \begin{eqnarray} \label{} \|s_{n+1}-s^{\star}\|^2 \leq (1-\varsigma_n)\|s_n-s^{\star}\|^2+\varsigma_n V_n -(1-2\delta \mu_n)\|s_{n+1}-u_n\|^2. \end{eqnarray}

    We obtain the intended outcomes by following the same procedures as in the proof of Theorem 3.1.

    Some Halpern-type inertial iterative methods for {\rm VI_sP_s} and {\rm FPP_s} follow as consequences of our suggested methods.

    Corollary 4.1. Suppose that Assumptions ({\bf S_2})–({\bf S_5}) hold. Then the sequence \{s_n\} induced by Algorithm 4.1 converges strongly to {y}^{\star} = {\rm P}_{\Lambda\cap\Delta} (z) .

    Algorithm 4.1. Halpern-type inertial iteration method-1
    Let \beta\in [0, 1) and \mu_n > 0 be given. Choose arbitrary points s_0 and s_1 and a point z\in\mathbb{E} , and set n = 1 .
    Iterative step. For the iterates s_{n} and s_{n-1} , n \geq 1 , select 0 < \beta_n < \bar{\beta}_n , where
    \bar{\beta}_n = \left\{ \begin{array}{ll} \min \big\{\frac{\sigma_n}{\|s_n-s_{n-1}\|}, \quad \beta \big\}, \quad {\rm if}\; s_n \neq s_{n-1}, \nonumber \\ \quad \beta, \qquad \quad\qquad\qquad {\rm otherwise}, \nonumber \end{array}\right.
    compute
    u_n = \lambda_n z +(1-\lambda_n)[T(s_n)+ \beta_n(s_n-s_{n-1})],
    0\in Q(u_n)+G(s_{n+1})+\frac{s_{n+1}-u_{n}}{\mu_n}.
    If s_{n+1} = u_n , then stop. If not, set n = n+1 and go back to the iterative step.

    Proof. By replacing k(s_n) by z in Algorithm 3.1 and following the proof of Theorem 3.1, we get the desired result.

    Corollary 4.2. Suppose that Assumptions ({\bf S_2})–({\bf S_5}) hold. Then the sequence \{s_n\} induced by Algorithm 4.2 converges strongly to {y}^{\star} = {\rm P}_{\Lambda\cap\Delta} (z) .

    Algorithm 4.2. Halpern-type inertial iteration method-2
    Let \beta\in [0, 1) and \mu_n > 0 be given. Choose arbitrary points s_0 and s_1 and a point z\in\mathbb{E} , and set n = 1 .
    Iterative step. For the iterates s_{n} and s_{n-1} , n \geq 1 , select 0 < \beta_n < \bar{\beta}_n , where
    \bar{\beta}_n = \left\{ \begin{array}{ll} \min \big\{\frac{\sigma_n}{\|s_n-s_{n-1}\|}, \quad \beta \big\}, \quad {\rm if}\; s_n \neq s_{n-1}, \nonumber \\ \quad \beta, \quad \qquad\qquad\qquad {\rm otherwise}, \nonumber \end{array}\right.
    compute
    u_n = \lambda_n z +(1-\lambda_n)T(s_n)+ \beta_n(s_n-s_{n-1}),
    0\in Q(u_n)+G(s_{n+1})+\frac{s_{n+1}-u_{n}}{\mu_n}.
    If s_{n+1} = u_n , then stop. If not, set n = n+1 and go back to the iterative step.

    Proof. By replacing k(s_n) by z in Algorithm 3.2 and following the proof of Theorem 3.2, we get the result.

    Now, we present some theoretical applications of our methods for solving variational inequality and optimization problems together with the fixed point problem.

    Let \Omega\subseteq \mathbb{E} be a nonempty closed convex set and Q:\mathbb{E}\to \mathbb{E} be a monotone operator. The variational inequality problem {\rm (VI_tP)} is to find s^{\star}\in \Omega such that

    \begin{eqnarray} \langle Q(s^{\star}),\; w-s^{\star}\rangle \geq 0,\; \forall\; w \in \Omega. \end{eqnarray} (5.1)

    The normal cone to \Omega at z is defined by

    \begin{eqnarray} N_{\Omega}(z) = \{u\in \mathbb{E}:\langle u, w-z \rangle\leq 0,\; \forall \; w\in \Omega\}. \end{eqnarray} (5.2)

    It is well known that s^{\star} solves {\rm (VI_tP)} if and only if

    \begin{eqnarray} 0\in Q(s^{\star})+N_{\Omega}(s^{\star}). \end{eqnarray} (5.3)

    The indicator function of \Omega is defined by

    \begin{eqnarray} I_{\Omega}(w) = \left\{ \begin{array}{ll} \; \; 0,\; \; \; \; {\rm if}\; w\in \Omega,\\ +\infty,\; \; \; {\rm if}\; w\notin \Omega.\nonumber \end{array}\right. \end{eqnarray}

    Since I_{\Omega} is a proper lower semicontinuous convex function on \mathbb{E} , the subdifferential of I_{\Omega} is defined as

    \begin{eqnarray} \partial I_{\Omega}(z) = \{u\in \mathbb{E}: \langle u, w-z \rangle\leq 0,\; \forall \; w\in \Omega\}, \end{eqnarray} (5.4)

    which is maximal monotone (see [26]). From (5.2) and (5.4), we can write (5.3) as

    \begin{eqnarray} \label{v6} 0\in Q(s^{\star})+\partial I_{\Omega}(s^{\star}). \end{eqnarray}

    By replacing G by \partial I_{\Omega} in Algorithms 3.1 and 3.2, we get viscosity-type inertial iteration methods for common solutions to {\rm VI_tP_s} and {\rm FPP_s} .
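    In this case the backward step needs no extra machinery, since the resolvent of \partial I_{\Omega} is the metric projection onto \Omega : for any \mu > 0 ,

    \begin{eqnarray} [I+\mu\,\partial I_{\Omega}]^{-1}(x) = \arg\min\limits_{w\in\mathbb{E}}\Big\{I_{\Omega}(w)+\frac{1}{2\mu}\|w-x\|^{2}\Big\} = \arg\min\limits_{w\in\Omega}\|w-x\| = P_{\Omega}(x). \end{eqnarray}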

    Let \Omega \subseteq \mathbb{E} be a nonempty closed convex subset and let f_1, f_2 be proper, lower semicontinuous convex functions. Assume that f_1 is differentiable and \nabla f_1 is \delta -strongly monotone (hence, monotone) and \kappa -Lipschitz continuous. The subdifferential of f_2 is defined by

    \begin{eqnarray} \label{sd} \partial f_2 (y) = \{z\in \mathbb{E}: f_2(w)\geq f_2(y)+\langle w-y, z\rangle, \forall w\in \mathbb{E} \} \end{eqnarray}

    and is maximal monotone [25]. The following convex minimization problem ( {\rm COP} ) is taken into consideration:

    \begin{eqnarray} \label{mp} \min\limits_{y\in \Omega} F(y) = \min\limits_{y\in \Omega}\{ f_1(y)+f_2(y)\}. \end{eqnarray}

    Therefore, by taking Q = \nabla f_1 and G = \partial f_2 in Algorithms 3.1 and 3.2, we get two viscosity-type inertial iteration methods for common solutions to {\rm COP_s} and {\rm FPP_s} .
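    As a concrete instance (our illustrative choice, not from the original text), take f_1(y) = \frac{1}{2}\|Ay-b\|^2+\frac{\alpha}{2}\|y\|^2 , so that \nabla f_1 is \alpha -strongly monotone and (\|A^{T}A\|+\alpha) -Lipschitz continuous, and f_2 = \omega\|\cdot\|_1 , whose resolvent is componentwise soft-thresholding; the implicit step (3.4) then reads:

```python
import numpy as np

def grad_f1(A, b, alpha, y):
    # Q = grad f1 for f1(y) = 0.5*||Ay - b||^2 + 0.5*alpha*||y||^2.
    return A.T @ (A @ y - b) + alpha * y

def cop_implicit_step(u, mu, A, b, alpha, omega):
    # Step (3.4) with Q = grad f1 and G = d(omega*||.||_1):
    # s_{n+1} = [I + mu*G]^{-1}(u - mu*Q(u)), here componentwise soft-thresholding.
    x = u - mu * grad_f1(A, b, alpha, u)
    return np.sign(x) * np.maximum(np.abs(x) - mu * omega, 0.0)
```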

    Example 6.1. Let \mathbb{E} = \mathbb{R}^3 . For s = (s_1, s_2, s_3) and w = (w_1, w_2, w_3)\in\mathbb{R}^3 , the usual inner product is defined by \langle s, \, w\rangle = s_1w_1 + s_2w_2 + s_3w_3 and \|w\|^2 = |w_1|^2 + |w_2|^2 + |w_3|^2 . We define the operators Q and G by

    \begin{equation} Q \begin{bmatrix} w_1 \\ w_2\\ w_3 \end{bmatrix} = \begin{bmatrix} 1/2&0&0 \\ 0&1/3&0\\ 0&0&1/4 \end{bmatrix}\begin{bmatrix} w_1 \\ w_2\\ w_3 \end{bmatrix}\,{ and }\, G\begin{bmatrix} w_1 \\ w_2\\ w_3 \end{bmatrix} = \begin{bmatrix} 1/6&0&0 \\ 0&1/5&0\\ 0&0&1/4 \end{bmatrix}\begin{bmatrix} w_1 \\ w_2\\ w_3 \end{bmatrix}.\nonumber \end{equation}

    It is trivial to show that the mapping Q is \eta -inverse strongly monotone with \eta = 2 , \delta -strongly monotone (hence monotone) with \delta = \frac{1}{4} , and \kappa -Lipschitz continuous with \kappa = \frac{1}{2} . The mapping G is maximal monotone. We define the mappings T and k as follows:

    \begin{equation} T\begin{bmatrix} w_1 \\ w_2\\ w_3 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0\\ 0&0&1 \end{bmatrix}\begin{bmatrix} w_1 \\ w_2\\ w_3 \end{bmatrix} \ { and }\ k \begin{bmatrix} w_1 \\ w_2\\ w_3 \end{bmatrix} = \begin{bmatrix} 1/6&0&0 \\ 0&1/6&0\\ 0&0&1/6 \end{bmatrix}\begin{bmatrix} w_1 \\ w_2\\ w_3 \end{bmatrix}.\nonumber \end{equation}

    The mapping T is nonexpansive and k is a \tau -contraction with \tau = 1/6 . For Algorithms 3.1 and 3.2, we choose \beta = 0.3, \lambda_n = \frac{1}{\sqrt{100+n}}, \sigma_n = \frac{1}{(1+n)^2}, \mu_n = \frac{3}{2}-\frac{1}{10+n} ; \beta_n is selected randomly from (0, \bar{\beta}_n) , and \bar{\beta}_n is calculated by (3.2). For Algorithm 1.1, we choose \theta = 0.5 , \theta_n = \frac{1}{(1+n)^2}\in (0, \theta) , \lambda = 0.5\in (0, 2\eta) , and \psi_n = \frac{1}{(10+n)^{0.1}} . We compute the results of Algorithms 3.1 and 3.2 and then compare them with Algorithm 1.1; a reproduction sketch follows the list of cases below. The stopping criterion for our calculation is Tol_n < 10^{-15} , where Tol_n = \|s_{n+1}-s_n\| . We select some different cases of initial values as given below:

    Case (a): w_0 = (1, 7, -9)\; \; w_1 = (1, -3, 4);

    Case (b): w_0 = (30, 53, 91)\; \; w_1 = (1/2, -3/4, -4/11);

    Case (c): w_0 = (1/2, -14, 0)\; \; w_1 = (0, -23, 1/4);

    Case (d): w_0 = (0.1, -10,200)\; \; w_1 = (100, -2, 1/4).
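    Using the viim1 sketch given after Algorithm 3.1, Example 6.1 can be reproduced along the following lines (Case (a) shown; the variable names are ours):

```python
import numpy as np

Q_mat = np.diag([1/2, 1/3, 1/4])
G_mat = np.diag([1/6, 1/5, 1/4])

Q = lambda w: Q_mat @ w
T = lambda w: w                              # identity mapping is nonexpansive
k = lambda w: w / 6                          # tau = 1/6 contraction

lam_n   = lambda n: 1.0 / np.sqrt(100 + n)
sigma_n = lambda n: 1.0 / (1 + n) ** 2
mu_n    = lambda n: 1.5 - 1.0 / (10 + n)

s0 = np.array([1.0, 7.0, -9.0])              # Case (a)
s1 = np.array([1.0, -3.0, 4.0])
s_star = viim1(Q, G_mat, T, k, s0, s1, lam_n, mu_n, sigma_n, beta=0.3)
```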

    The experimental findings are shown in Table 1 and Figures 1–4.

    Table 1.  Comparison table of VIIM-1, VIIM-2, and VIM by using Cases (a)–(d).
    Case                         VIIM-1      VIIM-2      VIM
    (a)   Iterations             22          22          31
          Time in seconds        8.7e-006    1.3e-005    1.06e-005
    (b)   Iterations             21          23          31
          Time in seconds        8.6e-006    8.9e-006    8.1e-006
    (c)   Iterations             17          19          28
          Time in seconds        8.9e-006    1.22e-005   1.08e-005
    (d)   Iterations             23          26          35
          Time in seconds        8.8e-006    9.5e-006    1.08e-005

    Figure 1.  Graphical behavior of \|s_{n+1}-s_n\| from VIIM-1, VIIM-2, and VIM by choosing Case (a).
    Figure 2.  Graphical behavior of \|s_{n+1}-s_n\| from VIIM-1, VIIM-2, and VIM by choosing Case (b).
    Figure 3.  Graphical behavior of \|s_{n+1}-s_n\| from VIIM-1, VIIM-2, and VIM by choosing Case (c).
    Figure 4.  Graphical behavior of \|s_{n+1}-s_n\| from VIIM-1, VIIM-2, and VIM by choosing Case (d).

    Example 6.2. Let us consider the infinite dimensional real Hilbert space \mathbb{E} = l_2: = \big\{ u: = (u_1, u_2, u_3, \cdots, u_n, \cdots), u_n\in \mathbb{R}: \sum\limits_{n = 1}^{\infty} |u_n|^2 < \infty \big\} with inner product \langle u, v\rangle = \sum\limits_{n = 1}^{\infty} u_n v_n and the norm given by \|u\| = \Big(\sum\limits_{n = 1}^{\infty} |u_n|^2 \Big)^{1/2} . We define the monotone mappings by Q(u): = \frac{u}{5} = (\frac{u_1}{5}, \frac{u_2}{5}, \frac{u_3}{5}, \cdots, \frac{u_n}{5}, \cdots) and G(u): = u = (u_1, u_2, u_3, \cdots, u_n, \cdots) . Let k(u): = \frac{u}{15} be the contraction, and let the nonexpansive map T be defined by T(u): = \frac{u}{3} = (\frac{u_1}{3}, \frac{u_2}{3}, \frac{u_3}{3}, \cdots, \frac{u_n}{3}, \cdots) .

    It can be seen that Q is \delta -strongly monotone with \delta = \frac{1}{5} , \kappa -Lipschitz continuous with \kappa = \frac{1}{5} , and also \eta -inverse strongly monotone with \eta = 5 ; G is maximal monotone; k is a \tau -contraction with \tau = \frac{1}{15} . We choose \beta = 0.4 , \lambda_n = \frac{1}{(n+200)^{0.25}} , \sigma_n = \frac{1}{(10+n)^3} , \mu_n = \frac{4}{3}-\frac{1}{n+50} ; \beta_n is selected randomly from (0, \bar{\beta}_n) , and \bar{\beta}_n is calculated by (3.2). For Algorithm 1.1, we choose \theta = 0.4 , \theta_n = \frac{1}{(10+n)^3}\in (0, \theta) , \lambda = 0.7\in (0, 2\eta) , and \psi_n = \frac{1}{(200+n)^{0.25}} . We compute the results of Algorithms 3.1 and 3.2 and then compare them with Algorithm 1.1; a sketch follows the list of cases below. The stopping criterion for our computation is Tol_n < 10^{-15} , where Tol_n = \frac{1}{2}\|s_{n+1}-s_n\| . We consider the following four cases of initial values:

    Case (a'): w_0 = \Big\{\frac{1}{n}\Big\}_{n = 1}^{\infty}, \; \; w_1 = \Big\{\frac{1}{1+n^2}\Big\}_{n = 0}^{\infty};

    Case (b'): w_0 = \left\{ \begin{array}{ll} \frac{1}{n+1}, \; \; \; \; { if}\; n{\; is \; odd, }\\ \; 0, \; \; \; \; \; \; { if}\; n{ \; is\; even, }\nonumber \end{array}\right. \; \; \; w_1 = \Big\{\frac{1}{1+n^3}\Big\}_{n = 1}^{\infty};

    Case (c'): w_0 = (0, 0, 0, 0, \cdots), \; \; w_1 = (1, 2, 3, 4, 0, 0, 0, \cdots);

    Case (d'): w_0 = \Big\{\frac{(-1)^n}{n}\Big\}_{n = 1}^{\infty}, \; \; w_1 = \left\{ \begin{array}{ll} 0, \; \; \; \; { if}\; n{\; is \; odd, }\\ \frac{1}{n^2}, \; \; \; { if}\; n{ \; is\; even}. \end{array}\right.
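    Since G(u) = u here, the implicit step has the closed form s_{n+1} = (u_n-\mu_n Q(u_n))/(1+\mu_n) . Below is a sketch under the assumption that the l_2 iterates are truncated to finitely many coordinates (Case (c') shown; all names are illustrative):

```python
import numpy as np

Q = lambda u: u / 5.0
T = lambda u: u / 3.0
k = lambda u: u / 15.0

lam_n   = lambda n: 1.0 / (n + 200) ** 0.25
sigma_n = lambda n: 1.0 / (10 + n) ** 3
mu_n    = lambda n: 4.0 / 3.0 - 1.0 / (n + 50)

def viim1_l2(s0, s1, beta=0.4, tol=1e-15, max_iter=10_000, seed=0):
    rng = np.random.default_rng(seed)
    s_prev, s = s0, s1
    for n in range(1, max_iter):
        gap = np.linalg.norm(s - s_prev)
        beta_bar = min(sigma_n(n) / gap, beta) if gap > 0 else beta   # rule (3.2)
        beta_n = rng.uniform(0.0, beta_bar)
        lam = lam_n(n)
        u = lam * k(s) + (1 - lam) * (T(s) + beta_n * (s - s_prev))   # step (3.3)
        mu = mu_n(n)
        s_prev, s = s, (u - mu * Q(u)) / (1.0 + mu)   # closed-form resolvent of G(u) = u
        if 0.5 * np.linalg.norm(s - s_prev) < tol:    # Tol_n = 0.5*||s_{n+1}-s_n||
            break
    return s

# Case (c'): w0 = 0, w1 = (1, 2, 3, 4, 0, 0, ...), truncated to 10 coordinates.
s_star = viim1_l2(np.zeros(10), np.array([1.0, 2, 3, 4, 0, 0, 0, 0, 0, 0]))
```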

    The experimental findings are shown in Table 2 and Figures 5–8.

    Table 2.  Comparison table of VIIM-1, VIIM-2, and VIM by using Cases (a')–(d').
    Case                         VIIM-1      VIIM-2      VIM
    (a')  Iterations             37          40          90
          Time in seconds        9.4e-006    1.2e-005    8.5e-006
    (b')  Iterations             36          39          90
          Time in seconds        1.39e-005   1.02e-005   1.06e-005
    (c')  Iterations             32          34          39
          Time in seconds        9.6e-006    9.7e-006    1.66e-005
    (d')  Iterations             22          32          50
          Time in seconds        1.35e-005   2.1e-005    1.41e-005

    Figure 5.  Graphical behavior of \|s_{n+1}-s_n\| from VIIM-1, VIIM-2, and VIM by choosing Case (a').
    Figure 6.  Graphical behavior of \|s_{n+1}-s_n\| from VIIM-1, VIIM-2, and VIM by choosing Case (b').
    Figure 7.  Graphical behavior of \|s_{n+1}-s_n\| from VIIM-1, VIIM-2, and VIM by choosing Case (c').
    Figure 8.  Graphical behavior of \|s_{n+1}-s_n\| from VIIM-1, VIIM-2, and VIM by choosing Case (d').

    We suggested two viscosity-type inertial iteration methods for solving {\rm VI_sP} and {\rm FPP} in Hilbert spaces. Our methods calculate the viscosity approximation, fixed point iteration, and inertial extrapolation simultaneously at the beginning of each iteration. We proved the strong convergence of the proposed methods without calculating the resolvent of the associated monotone operators. Some consequences and theoretical applications were also discussed. Finally, we illustrated the proposed methods by using some suitable numerical examples. It can be deduced from the numerical examples that our algorithms performed well in terms of CPU time and the number of iterations.

    M. Dilshad: Conceptualization, Methodology, Formal analysis, Investigation, Writing-original draft, Software, Writing-review & editing; A. Alamer: Conceptualization, Methodology, Formal analysis, Software, Writing-review & editing; Maryam G. Alshehri: Conceptualization, Methodology, Formal analysis, Software, Writing-review & editing; Esmail Alshaban: Investigation, Writing-original draft; Fahad M. Alamrani: Investigation, Writing-original draft. All authors have read and approved the final version of the manuscript for publication.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare no conflicts of interest.



    [1] A. Alamer, M. Dilshad, Halpern-type inertial iteration methods with self-adaptive step size for split common null point problem, Mathematics, 12 (2024), 747. http://dx.doi.org/10.3390/math12050747 doi: 10.3390/math12050747
    [2] Q. Ansari, F. Babu, Proximal point algorithm for inclusion problems in Hadamard manifolds with applications, Optim. Lett., 15 (2021), 901–921. http://dx.doi.org/10.1007/s11590-019-01483-0 doi: 10.1007/s11590-019-01483-0
    [3] A. Adamu, D. Kitkuan, A. Padcharoen, C. Chidume, P. Kumam, Inertial viscosity-type iterative method for solving inclusion problems with applications, Math. Comput. Simulat., 194 (2022), 445–459. http://dx.doi.org/10.1016/j.matcom.2021.12.007 doi: 10.1016/j.matcom.2021.12.007
    [4] M. Akram, M. Dilshad, A. Rajpoot, F. Babu, R. Ahmad, J. Yao, Modified iterative schemes for a fixed point problem and a split variational inclusion problem, Mathematics, 10 (2022), 2098. http://dx.doi.org/10.3390/math10122098 doi: 10.3390/math10122098
    [5] M. Akram, M. Dilshad, A unified inertial iterative approach for general quasi variational inequality with application, Fractal Fract., 6 (2022), 395. http://dx.doi.org/10.3390/fractalfract6070395 doi: 10.3390/fractalfract6070395
    [6] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear Oscillator with damping, Set-Valued Anal., 9 (2001), 3–11. http://dx.doi.org/10.1023/A:1011253113155 doi: 10.1023/A:1011253113155
    [7] J. Cruz, T. Nghia, On the convergence of the forward-backward splitting method with linesearches, Optim. Method. Softw., 31 (2016), 1209–1238. http://dx.doi.org/10.1080/10556788.2016.1214959 doi: 10.1080/10556788.2016.1214959
    [8] P. Combettes, V. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Sim., 4 (2005), 1168–1200. http://dx.doi.org/10.1137/050626090 doi: 10.1137/050626090
    [9] P. Combettes, The convex feasibility problem in image recovery, Adv. Imag. Elect. Phys., 95 (1996), 155–270. http://dx.doi.org/10.1016/S1076-5670(08)70157-5 doi: 10.1016/S1076-5670(08)70157-5
    [10] Q. Dong, D. Jiang, P. Cholamjiak, Y. Shehu, A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions, J. Fixed Point Theory Appl., 19 (2017), 3097–3118. http://dx.doi.org/10.1007/s11784-017-0472-7 doi: 10.1007/s11784-017-0472-7
    [11] J. Douglas, H. Rachford, On the numerical solution of heat conduction problems in two and three space variables, Trans. Amer. Math. Soc., 82 (1956), 421–439. http://dx.doi.org/10.2307/1993056 doi: 10.2307/1993056
    [12] M. Dilshad, A. Khan, M. Akram, Splitting type viscosity methods for inclusion and fixed point problems on Hadamard manifolds, AIMS Mathematics, 6 (2021), 5205–5221. http://dx.doi.org/10.3934/math.2021309 doi: 10.3934/math.2021309
    [13] M. Dilshad, M. Akram, Md. Nasiruzzaman, D. Filali, A. Khidir, Adaptive inertial Yosida approximation iterative algorithms for split variational inclusion and fixed point problems, AIMS Mathematics, 8 (2023), 12922–12942. http://dx.doi.org/10.3934/math.2023651 doi: 10.3934/math.2023651
    [14] D. Filali, M. Dilshad, L. Alyasi, M. Akram, Inertial iterative algorithms for split variational inclusion and fixed point problems, Axioms, 12 (2023), 848. http://dx.doi.org/10.3390/axioms12090848 doi: 10.3390/axioms12090848
    [15] D. Kitkuan, P. Kumam, J. Martínez-Moreno, Generalized Halpern-type forward-backward splitting methods for convex minimization problems with application to image restoration problems, Optimization, 69 (2020), 1557–1581. http://dx.doi.org/10.1080/02331934.2019.1646742 doi: 10.1080/02331934.2019.1646742
    [16] G. López, V. Martín-Márquez, F. Wang, H. Xu, Forward-backward splitting methods for accretive operators in Banach spaces, Abstr. Appl. Anal., 2012 (2012), 109236. http://dx.doi.org/10.1155/2012/109236 doi: 10.1155/2012/109236
    [17] D. Lorenz, T. Pock, An inertial forward-backward algorithm for monotone inclusions, J. Math. Imaging Vis., 51 (2015), 311–325. http://dx.doi.org/10.1007/s10851-014-0523-2 doi: 10.1007/s10851-014-0523-2
    [18] P. L. Lions, B. Mercier, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), 964–979. http://dx.doi.org/10.1137/0716071 doi: 10.1137/0716071
    [19] Y. Malitsky, M. Tam, A forward-backward splitting method for monotone inclusions without cocoercivity, SIAM J. Optimiz., 30 (2020), 1451–1472. http://dx.doi.org/10.1137/18M1207260 doi: 10.1137/18M1207260
    [20] P. Mainge, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899–912. http://dx.doi.org/10.1007/s11228-008-0102-z doi: 10.1007/s11228-008-0102-z
    [21] A. Moudafi, M. Oliny, Convergence of a splitting inertial proximal method for monotone operators, J. Comput. Appl. Math., 155 (2003), 447–454. http://dx.doi.org/10.1016/S0377-0427(02)00906-8 doi: 10.1016/S0377-0427(02)00906-8
    [22] B. Polyak, Some methods of speeding up the convergence of iteration methods, USSR Comput. Math. Math. Phys., 4 (1964), 1–17. http://dx.doi.org/10.1016/0041-5553(64)90137-5 doi: 10.1016/0041-5553(64)90137-5
    [23] M. Rahaman, R. Ahmad, M. Dilshad, I. Ahmad, Relaxed \eta-proximal operator for solving a variational-like inclusion problem, Math. Model. Anal., 20 (2015), 819–835. http://dx.doi.org/10.3846/13926292.2015.1117026 doi: 10.3846/13926292.2015.1117026
    [24] S. Reich, A. Taiwo, Fast hybrid iterative schemes for solving variational inclusion problems, Math. Methods. Appl. Sci., 46 (2023), 17177–17198. http://dx.doi.org/10.1002/mma.9494 doi: 10.1002/mma.9494
    [25] R. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pac. J. Math., 33 (1970), 209–216. http://dx.doi.org/10.2140/pjm.1970.33.209 doi: 10.2140/pjm.1970.33.209
    [26] R. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim., 14 (1976), 877–898. http://dx.doi.org/10.1137/0314056 doi: 10.1137/0314056
    [27] W. Takahashi, N. Wong, J. Yao, Two generalized strong convergence theorems of Halpern's type in Hilbert spaces and applications, Taiwan. J. Math., 16 (2012), 1151–1172. http://dx.doi.org/10.11650/twjm/1500406684 doi: 10.11650/twjm/1500406684
    [28] Y. Tang, H. Lin, A. Gibali, Y. Cho, Convergence analysis and applications of the inertial algorithm solving inclusion problems, Appl. Numer. Math., 175 (2022), 1–17. http://dx.doi.org/10.1016/j.apnum.2022.01.016 doi: 10.1016/j.apnum.2022.01.016
    [29] Y. Tang, Y. Zhang, A. Gibali, New self-adaptive inertial-like proximal point methods for the split common null point problem, Symmetry, 13 (2021), 2316. http://dx.doi.org/10.3390/sym13122316 doi: 10.3390/sym13122316
    [30] D. Thong, N. Vinh, Inertial methods for fixed point problems and zero point problems of the sum of two monotone mappings, Optimization, 68 (2019), 1037–1072. http://dx.doi.org/10.1080/02331934.2019.1573240 doi: 10.1080/02331934.2019.1573240
    [31] H. Xu, Iterative algorithms for nonlinear operators, J. Lond. Math. Soc., 66 (2002), 240–256. http://dx.doi.org/10.1112/S0024610702003332 doi: 10.1112/S0024610702003332
    [32] P. Yodjai, P. Kumam, D. Kitkuan, W. Jirakitpuwapat, S. Plubtieng, The Halpern approximation of three operators splitting method for convex minimization problems with an application to image inpainting, Bangmod Int. J. Math. Comp. Sci., 5 (2019), 58–75.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
