1. Introduction
A fixed point problem (FPP) is a fundamental problem that provides a natural framework for studying a broad range of nonlinear problems and their applications. The fixed point problem for a mapping T is to find s^{\star}\in \mathbb{E} such that

T(s^{\star}) = s^{\star},

where \mathbb{E} is a real Hilbert space and T:\mathbb{E}\to \mathbb{E} is a nonexpansive mapping.
For a single-valued monotone operator Q:\mathbb{E}\to \mathbb{E} and a set-valued operator G:\mathbb{E}\rightrightarrows \mathbb{E}, the variational inclusion problem (VIsP) is to find s^{\star}\in \mathbb{E} such that

0\in Q(s^{\star})+G(s^{\star}).
Several problems, such as image recovery, optimization, and variational inequality problems, can be reformulated as a FPP or a VIsP. Owing to this wide applicability, numerous iterative methods have been developed over the last decades to solve FPPs and VIsPs in linear and nonlinear spaces; see, for example, [4,8,9,12,13,15,32].
Douglas and Rachford [11] formulated the forward-backward splitting method for the VIsP:

s_{n+1} = R^{G}_{\mu_n}[I-\mu_n Q](s_n),     (1.3)

where \mu_n > 0, R^{G}_{\mu_n} = [I+\mu_n G]^{-1} is the resolvent of G (also known as the backward operator), and [I-\mu_n Q] is known as the forward operator. An equivalent reformulation of (1.3) was studied by Ansari and Babu [2] in a nonlinear space setting. If Q = 0, then the VIsP reduces to the monotone inclusion problem (MIsP): find s^{\star}\in \mathbb{E} such that

0\in G(s^{\star}),
which was studied in [26]. The proximal point method (also called the regularization method), one of the most renowned methods for the MIsP, was studied by Lions and Mercier [18]:

s_{n+1} = R^{G}_{\mu_n}(s_n) = [I+\mu_n G]^{-1}(s_n).
Since the resolvent R^{G}_{\mu_n} appearing in the backward step is nonexpansive, such splitting algorithms have been studied widely by numerous authors; see, for example, [7,10,15,16,17,19,23,27].
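As a minimal illustration of the forward-backward iteration (1.3), consider the toy choices Q(s) = s/5 and G(s) = s on \mathbb{R}^3 (our illustrative choices, not from the paper); for this G the backward step [I+\mu G]^{-1} reduces to division by 1+\mu, and the unique zero of Q+G is the origin.

import numpy as np

# Minimal sketch of the forward-backward iteration (1.3), assuming the toy
# choices Q(s) = s/5 and G(s) = s; for this G the backward step
# [I + mu*G]^{-1} reduces to division by (1 + mu).
def forward_backward(s0, mu=1.0, tol=1e-12, max_iter=1000):
    s = np.asarray(s0, dtype=float)
    for n in range(max_iter):
        forward = s - mu * (s / 5.0)   # forward step: [I - mu*Q](s_n)
        s_next = forward / (1.0 + mu)  # backward step: resolvent of G
        if np.linalg.norm(s_next - s) < tol:
            break
        s = s_next
    return s_next, n

s_star, iters = forward_backward([1.0, -3.0, 4.0])
print(s_star, iters)  # converges to the unique zero of Q + G, the origin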
An essential development in nonlinear science is inertial extrapolation, introduced by Polyak [22] to accelerate the convergence of iterative algorithms. Alvarez and Attouch [6] implemented inertial extrapolation to obtain the inertial proximal point method for solving the MIsP: for \mu_n > 0, find s_{n+1}\in \mathbb{E} such that

0\in \mu_n G(s_{n+1})+s_{n+1}-s_n-\beta_n(s_n-s_{n-1}),

and equivalently

s_{n+1} = R^{G}_{\mu_n}\big[s_n+\beta_n(s_n-s_{n-1})\big],     (1.8)

where \beta_n\in [0, 1) is the extrapolation coefficient and \beta_n(s_n-s_{n-1}) is known as the inertial step. They proved the weak convergence of (1.8) assuming

\sum\limits_{n = 1}^{\infty}\beta_n\|s_n-s_{n-1}\|^2 < \infty.     (1.9)
Inertial extrapolation has been demonstrated to have good convergence properties and a high convergence rate; consequently, it has been refined and applied to a variety of nonlinear problems, see [3,5,13,14,28,29] and the references therein.
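The following sketch adds the inertial step to the proximal point iteration, as in (1.8), again with the toy operator G(s) = s so that the resolvent is division by 1+\mu; the constant coefficient beta = 0.3 is an arbitrary illustrative choice, not taken from the paper.

import numpy as np

# Sketch of the inertial proximal point iteration (1.8) with the toy
# operator G(s) = s; beta = 0.3 is an arbitrary extrapolation coefficient.
def inertial_proximal(s0, s1, mu=1.0, beta=0.3, num_iter=50):
    s_prev, s = np.asarray(s0, float), np.asarray(s1, float)
    for _ in range(num_iter):
        w = s + beta * (s - s_prev)    # inertial step: beta_n (s_n - s_{n-1})
        s_prev, s = s, w / (1.0 + mu)  # backward (resolvent) step
    return s

print(inertial_proximal([1.0, 7.0, -9.0], [1.0, -3.0, 4.0]))  # tends to 0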
Moudafi and Oliny [21] presented the following inertial proximal point method to solve the VIsP:

s_{n+1} = R^{G}_{\mu_n}\big[s_n+\beta_n(s_n-s_{n-1})-\mu_n Q(s_n)\big],     (1.10)
where \mu_n < 2/\kappa and \kappa is the Lipschitz constant of the operator Q. They proved the weak convergence of (1.10) under the same assumption (1.9). Recently, Duang et al. [30] studied the VIsP and the FPP and proposed the following viscosity inertial method (Algorithm 1.1) for approximating a common solution in Hilbert spaces.
In the above algorithm, Q is \eta-inverse strongly monotone (\eta-ism, for short), G is a maximal monotone operator, k is a contraction, T is a nonexpansive mapping, \lambda\in (0, 2\eta), and the control sequences fulfill the following conditions:
(i) \psi_n\in (0, 1), \lim\limits_{n\to\infty}\psi_n = 0, \sum\limits_{n = 1}^{\infty}\psi_n = \infty;
(ii) \theta_n\in [0, \theta), \theta > 0, \lim\limits_{n\to\infty}\frac{\theta_n}{\psi_n}\|s_n-s_{n-1}\| = 0.
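The precise statement of Algorithm 1.1 appears in [30] and is not reproduced here; the sketch below records one plausible reading based on the ingredients just listed. The ordering of the inertial, forward-backward, and viscosity steps is our assumption, and the callables lam, psi, theta stand for the parameters \lambda, \psi_n, \theta_n above.

# Hypothetical reading of the viscosity inertial method (Algorithm 1.1 of
# [30]); the step ordering below (inertial, then forward-backward, then
# viscosity) is our assumption, not a verbatim reproduction.
def viscosity_inertial(k, T, Q, resolvent_G, s0, s1, lam, psi, theta,
                       num_iter=100):
    s_prev, s = s0, s1
    for n in range(1, num_iter + 1):
        u = s + theta(n) * (s - s_prev)           # inertial extrapolation
        y = resolvent_G(u - lam * Q(u), lam)      # forward-backward step
        s_prev, s = s, psi(n) * k(s) + (1 - psi(n)) * T(y)  # viscosity step
    return s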
Recently, Reich and Taiwo [24] investigated hybrid viscosity-type iterative schemes for solving variational inclusion problems, in which the viscosity approximation and the inertial extrapolation are computed jointly. Ahmed and Dilshad [1] studied a Halpern-type iterative method for solving split common null point problems, in which the Halpern iteration and the inertial iteration are computed simultaneously at the start of every iteration.
Motivated by the work in [24,30], we present two viscosity-type inertial iteration methods for finding common solutions of {\rm VI_sP_s} and {\rm FPP_s}. In our algorithms, the viscosity iteration, the fixed point iteration, and the inertial extrapolation are implemented in the first step of each iteration. Our methods do not need the inverse strong monotonicity assumptions on the operators Q and G that are commonly imposed in the literature. We prove the strong convergence of the presented methods without computing the resolvent of the sum of the associated monotone operators Q and G.
We organize the paper as follows: In Section 2, we recall some basic definitions and useful lemmas. In Section 3, we propose viscosity-type iterative methods for solving {\rm VI_sP_s} and {\rm FPP_s} and prove strong convergence theorems. In Section 4, as consequences of our methods, we present Halpern-type inertial iterative methods for {\rm VI_sP_s} and {\rm FPP_s}. Section 5 describes some applications to variational inequality and optimization problems. In Section 6, we demonstrate the efficiency of the suggested methods by comparing them with Algorithm 3 of [30].
2. Preliminaries
Let \{s_n\} be a sequence in \mathbb{E}. Then s_n\to s denotes the strong convergence of \{s_n\} to s and s_n \rightharpoonup s denotes the weak convergence. The weak w-limit set of \{s_n\} is defined by

w_w(s_n) = \{s\in \mathbb{E}: s_{n_k}\rightharpoonup s \text{ for some subsequence } \{s_{n_k}\} \text{ of } \{s_n\}\}.
The following useful inequality is well-known in the Hilbert space \mathbb{E}:

\|s_1+w_1\|^2\leq \|s_1\|^2+2\langle w_1, \; s_1+w_1\rangle, \; \forall\; s_1, w_1\in \mathbb{E}.     (2.1)
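For completeness, (2.1) follows from expanding the inner product:

\|s_1+w_1\|^2 = \|s_1\|^2+2\langle s_1, \; w_1\rangle+\|w_1\|^2 = \|s_1\|^2+2\langle w_1, \; s_1+w_1\rangle-\|w_1\|^2\leq \|s_1\|^2+2\langle w_1, \; s_1+w_1\rangle.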
Definition 2.1. A mapping k:\mathbb{E}\to \mathbb{E} is called
{(i)} a contraction, if there exists \tau\in (0, 1) such that \|k(s_1)-k(w_1)\| \leq \tau\|s_1-w_1\|, \; \forall\; s_1, w_1\in \mathbb{E};
{(ii)} nonexpansive, if \|k(s_1)-k(w_1)\| \leq \|s_1-w_1\|, \forall\; s_1, w_1 \in \mathbb{E}.
Definition 2.2. Let Q:\mathbb{E}\to \mathbb{E} . Then
{(i)} Q is called monotone, if \langle Q(s_1)-Q(w_1), \; s_1-w_1 \rangle\geq 0, \forall\; s_1, w_1\in \mathbb{E};
{(ii)} Q is called \eta -ism, if there exists \eta > 0 such that \langle Q(s_1)-Q(w_1), \; s_1-w_1\rangle\geq \eta\|Q(s_1)-Q(w_1)\|^2, \; \forall\; s_1, w_1\in \mathbb{E};
{(iii)} Q is called \delta -strongly monotone, if there exists \delta > 0 such that \langle Q(s_1)-Q(w_1), \; s_1-w_1\rangle\geq \delta\|s_1-w_1\|^2, \; \forall\; s_1, w_1\in \mathbb{E};
{(iv)} Q is called \kappa -Lipschitz continuous, if there exists \kappa > 0 such that \|Q(s_1)-Q(w_1)\|\leq \kappa\|s_1-w_1\|, \; \forall\; s_1, w_1\in \mathbb{E}.
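These notions are related in ways used implicitly later. By the Cauchy–Schwarz inequality,

\eta\|Q(s_1)-Q(w_1)\|^2\leq \langle Q(s_1)-Q(w_1), \; s_1-w_1\rangle\leq \|Q(s_1)-Q(w_1)\|\, \|s_1-w_1\|,

so every \eta -ism operator is monotone and \frac{1}{\eta} -Lipschitz continuous. Conversely, a \delta -strongly monotone and \kappa -Lipschitz continuous operator Q is \frac{\delta}{\kappa^2} -ism, since \langle Q(s_1)-Q(w_1), \; s_1-w_1\rangle\geq \delta\|s_1-w_1\|^2\geq \frac{\delta}{\kappa^2}\|Q(s_1)-Q(w_1)\|^2.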
Definition 2.3. Let G:\mathbb{E}\to 2^\mathbb{E} . Then
{(i)} the graph of G is defined by Graph(G) = \{(s_1, w_1)\in \mathbb{E}\times \mathbb{E}: w_1\in G(s_1)\};
{(ii)} G is called monotone, if for all (s_1, w_1), (s_2, w_2)\in Graph(G), \; \langle w_1-w_2, \; s_1-s_2\rangle\geq 0;
{(iii)} G is called maximal monotone, if G is monotone and (I+\mu G)(\mathbb{E}) = \mathbb{E} for all \mu > 0 .
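As a concrete illustration (used again in Example 6.2 below), take \mathbb{E} = \mathbb{R} and G(s) = s. Then (I+\mu G)(s) = (1+\mu)s maps \mathbb{R} onto \mathbb{R} for every \mu > 0, so G is maximal monotone, and its resolvent is R^{G}_{\mu}(s) = [I+\mu G]^{-1}(s) = \frac{s}{1+\mu}.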
Lemma 2.1. [31] Let \{s_n\}\subset \mathbb{R} be a nonnegative sequence such that

s_{n+1}\leq (1-\lambda_n)s_n+\lambda_n\xi_n, \; n\geq 0,

where \lambda_n\in (0, 1) and \xi_n \in \mathbb{R} fulfill the requirements given below:

(i) \sum\limits_{n = 1}^{\infty}\lambda_n = \infty; \quad (ii) \limsup\limits_{n\to\infty}\xi_n\leq 0.

Then s_n\to 0 as n\to \infty .
Lemma 2.2. [20] Let \{y_n\}\subset \mathbb{R} be a sequence that does not decrease at infinity, in the sense that there exists a subsequence \{y_{n_k}\} of \{y_n\} such that y_{n_k} < y_{n_k+1} for all k \geq 0 . Also consider the sequence of integers \{\Upsilon(n)\}_{n\geq n_{0}} defined by

\Upsilon(n) = \max\{m\in \mathbb{N}: m\leq n, \; y_m < y_{m+1}\}.

Then \{\Upsilon(n)\}_{n\geq n_{0}} is a nondecreasing sequence verifying \lim\limits_{n\to\infty}\Upsilon(n) = \infty , and for all \; n\geq n_{0} , the following inequalities hold:

y_{\Upsilon(n)}\leq y_{\Upsilon(n)+1} \quad and \quad y_n\leq y_{\Upsilon(n)+1}.
3. Main results
In the present section, we define our viscosity-type inertial iteration methods for solving the {\rm FPP} and the {\rm VI_sP}. We denote the solution set of the {\rm FPP} by \Lambda and that of the {\rm VI_sP} by \Delta, and we assume that \Lambda \cap \Delta \neq \emptyset . We adopt the following assumptions in order to prove the convergence of the sequences obtained from the suggested methods:
({\bf S_1}) k:\mathbb{E}\to \mathbb{E} is a \tau -contraction with 0 < \tau < 1 ;
({\bf S_2}) Q:\mathbb{E}\to \mathbb{E} is a \delta -strongly monotone and \kappa -Lipschitz continuous operator and G:\mathbb{E}\rightrightarrows{\mathbb{E}} is a maximal monotone operator;
({\bf S_3}) \mu_n is a sequence such that 0 < \bar{\mu} \leq \mu_n \leq \mu < \frac{1}{2\delta} and \kappa\leq 2\delta ;
({\bf S_4}) \lambda_n\in(0, 1) satisfies \lim\limits_{n\to \infty}\lambda_n = 0 and \sum\limits_{n = 1}^{\infty}\lambda_n = \infty ;
({\bf S_5}) \sigma_n is a positive sequence satisfying \sum\limits_{n = 1}^{\infty}\sigma_n < \infty and \lim\limits_{n\to \infty}\frac{\sigma_n}{\lambda_n} = 0 .
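The displays (3.2)–(3.4) of Algorithm 3.1 are referenced throughout the proof below. For the reader's convenience, the following Python sketch records one reading of the method that is consistent with Remark 3.1 below and with the inclusion used at the beginning of the proof of Theorem 3.1; the exact combination of the viscosity, fixed point, and inertial terms in (3.3) is our assumption. The callables lam, sigma, and mu stand for sequences satisfying ({\bf S_3})–({\bf S_5}).

import numpy as np

# One reading of Algorithm 3.1, reconstructed from Remark 3.1 and the proof
# of Theorem 3.1; the combination step for u_n is an assumption on our part.
def algorithm_3_1(k, T, Q, resolvent_G, s0, s1, lam, sigma, mu, beta=0.5,
                  num_iter=200):
    s_prev, s = np.asarray(s0, float), np.asarray(s1, float)
    for n in range(1, num_iter + 1):
        diff = np.linalg.norm(s - s_prev)
        # (3.2): upper bound for the inertial coefficient, cf. Remark 3.1
        beta_bar = beta if diff == 0 else min(beta, sigma(n) / diff)
        beta_n = np.random.uniform(0.0, beta_bar)
        # (3.3): viscosity + fixed point + inertial step (assumed form),
        # with v_n = T(s_n) + beta_n (s_n - s_{n-1}) as in the proof
        v = T(s) + beta_n * (s - s_prev)
        u = lam(n) * k(s) + (1.0 - lam(n)) * v
        # (3.4): forward-backward step s_{n+1} = R^G_{mu_n}[u_n - mu_n Q(u_n)]
        s_prev, s = s, resolvent_G(u - mu(n) * Q(u), mu(n))
    return s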
Theorem 3.1. If Assumptions ({\bf S_1}) – ({\bf S_5}) are fulfilled, then the sequences induced by Algorithm 3.1 converge strongly to s^{\star}\in{\Delta\cap\Lambda}, which solves the following variational inequality:

\langle (I-k)(s^{\star}), \; s-s^{\star}\rangle\geq 0, \; \forall\; s\in \Delta\cap\Lambda.     (3.1)
Remark 3.1. From (3.2), we have \beta_n\leq \frac{\sigma_n}{\|s_n-s_{n-1}\|} . Since \beta_n > 0 and \sigma_n satisfies \sum\limits_{n = 1}^{\infty}\sigma_n < \infty , we obtain \lim\limits_{n\to\infty} \beta_n \|s_n-s_{n-1}\| = 0 and \lim\limits_{n\to\infty} \frac{\beta_n \|s_n-s_{n-1}\|}{\lambda_n}\leq \lim\limits_{n\to\infty} \frac{\sigma_n}{\lambda_n} = 0 .
Proof. Let s^{\star}\in{\Delta\cap\Lambda} , then -Q(s^{\star})\in G(s^{\star}) and using (3.4), we have \frac{u_n-s_{n+1}}{\mu_n}-Q(u_n)\in G(s_{n+1}) . Since G is monotone, we get
Since Q is strongly monotone with constant \delta > 0 , we have
By adding (3.5) and (3.6), we get
or
By using the Cauchy–Schwarz inequality and the Lipschitz continuity of Q, we have
By using (2.1), we have
Considering (3.8)–(3.10), we get
Since \kappa\leq 2\delta , we have
or
Since \lim\limits_{n\to\infty}\frac{\beta_n\|s_n-s_{n-1}\|}{\lambda_n} = 0 (Remark 3.1), there exists K_1 > 0 such that \frac{\beta_n\|s_n-s_{n-1}\|}{\lambda_n}\leq K_1 for all n, that is, \beta_n\|s_n-s_{n-1}\|\leq \lambda_n K_1 . By using (3.13) and mathematical induction, and bearing in mind that k is a contraction and T is nonexpansive, it follows from (3.3) that
meaning that \{u_n\} is bounded and hence \{s_n\} is also bounded. Let v_n = T(s_n)+\beta_n(s_n-s_{n-1}) . Note that v_n is also bounded. By using (3.3), we get
Now, we need to calculate
where \Theta_n = \beta_n\|z_n-z_{n-1}\| , and
and
By using (3.14)–(3.17), we get
Let \gamma_n = \lambda_n(1-\tau^2) . Then it follows from (3.12) and (3.18) that
where
Now, we continue the proof in the following two cases:
Case I: If \{\|s_n-s^{\star}\|\} is monotonically decreasing, then there exists a number N_1 such that \|s_{n+1}-s^{\star}\|\leq \|s_n-s^{\star}\| for all n\geq N_1 . Hence, the boundedness of \{\|s_n-s^{\star}\|\} implies that \{\|s_n-s^{\star}\|\} is convergent. Therefore, using (3.19), we have
Since 2\delta \mu_n < 1 and \lim\limits_{n\to \infty}\gamma_n = 0 , we obtain
By using (3.22) and Remark 3.1, we get
The boundedness of \{s_n\} and \{v_n\} implies that there exist M_1 > 0 and M_2 > 0 such that \|s_n-s^{\star}\|\leq M_1 and \|k(s^{\star})-v_n\|\leq M_2 for all n; hence
The following can be obtained easily by using (3.22) and (3.23):
Since \{s_n\} is bounded, there exists a subsequence \{s_{n_k}\} of \{s_n\} such that s_{n_k}\rightharpoonup \bar{s} . As a consequence of (3.22) and (3.25), it follows that u_{n_k}\rightharpoonup \bar{s} and v_{n_k}\rightharpoonup \bar{s} . Now, we show that \bar{s}\in \Delta\cap\Lambda . Since T is nonexpansive, (3.25) together with the demiclosedness principle for nonexpansive mappings yields \bar{s}\in {\rm Fix}(T) . From (3.4), we have
Since 0 < \bar{\mu} < \mu_n < \mu , it follows from (3.22) that \|s_{n_k+1}-u_{n_k}\|\to 0 , and by the Lipschitz continuity of Q, we get
Taking k\to \infty and using the fact that the graph of a maximal monotone operator is weakly-strongly closed, we get -Q(\bar{s})\in G(\bar{s}) , that is, 0\in Q(\bar{s})+ G(\bar{s}) . Thus \bar{s}\in \Delta , and therefore \bar{s}\in \Delta\cap\Lambda.
Next, we show that \{s_n\} strongly converges to s^{\star} . From (3.19), it immediately follows that
and
By using Lemma 2.1, we deduce that \{s_n\} converges strongly to s^{\star} , where s^{\star} is the solution of the variational inequality (3.1). Further, since \|s_n-u_n\|\to 0 , u_n\rightharpoonup\bar{y}\in \Delta\cap\Lambda , and s_n\to s^{\star} as n\to \infty , it follows that \bar{y} = s^{\star} . This completes the proof of Case I.
Case II: If Case I does not hold, then the function \rho:\mathbb{N}\to \mathbb{N} defined by \rho(n) = \max\{m\in \mathbb{N}: m\leq n, \; \|s_{m}-s^{\star}\| \leq\|s_{m+1}-s^{\star}\|\} is nondecreasing, \rho(n)\to\infty as n\to\infty , and
For the same reasons as in the proof of Case Ⅰ, we obtain \|s_{\rho(n)}-s^{\star}\| \to 0 and \|s_{\rho(n)}-u_{\rho(n)}\|\to 0 as n \to \infty . By using (3.19) and (3.29), we obtain
Thus, we get \|s_{\rho(n)}-s^{\star}\|\to 0 as n\to \infty . Keeping in mind Lemma 2.2, we have
Consequently, from (3.31), \|s_n-s^{\star}\|\to 0 as n\to \infty . Therefore, s_n\to s^{\star} as n\to \infty , where s^{\star} is a solution of the variational inequality (3.1). □
Theorem 3.2. If Assumptions ({\bf S_1}) – ({\bf S_5}) are satisfied, then the sequences induced by Algorithm 3.2 converge strongly to s^{\star}\in{\Lambda}\cap{\Delta} , which solves the following variational inequality:

\langle (I-k)(s^{\star}), \; s-s^{\star}\rangle\geq 0, \; \forall\; s\in \Lambda\cap\Delta.
Proof. Let s^{\star}\in {\Lambda}\cap{\Delta} . Then, by using (3.33), we obtain
implying that \{u_n\} is bounded and so is \{s_n\} . Let x_n = \lambda_n k(s_n)+(1-\lambda_n)T(s_n) , then by using (2.1), we get
and
and
From (3.36)–(3.38), we get
or
where \varsigma_n = \lambda_n(1-\tau) and
By combining (3.12) and (3.39), we obtain
We obtain the intended outcomes by following the same procedures as in the proof of Theorem 3.1. □
4. Consequences
Some Halpern-type inertial iterative methods for {\rm VI_sP_s} and {\rm FPP_s} can be obtained as consequences of our suggested methods.
Corollary 4.1. Suppose that Assumptions ({\bf S_2}) – ({\bf S_5}) hold. Then the sequence \{s_n\} induced by Algorithm 4.1 converges strongly to y^{\star} = {\rm P}_{\Lambda\cap\Delta}(z) .
Proof. By replacing the viscosity term k(\cdot) by the fixed vector z in Algorithm 3.1 and following the proof of Theorem 3.1, we get the desired result. □
Corollary 4.2. Suppose that Assumptions ({\bf S_2}) – ({\bf S_5}) hold. Then the sequence \{s_n\} induced by Algorithm 4.2 converges strongly to y^{\star} = {\rm P}_{\Lambda\cap\Delta}(z) .
Proof. By replacing the viscosity term k(\cdot) by the fixed vector z in Algorithm 3.2 and following the proof of Theorem 3.2, we get the result. □
5. Applications
Now, we present some theoretical applications of our methods for solving variational inequality and optimization problems together with the fixed point problem.
5.1. Variational inequality problem
Let \Omega\subseteq \mathbb{E} be a nonempty closed convex set and Q:\mathbb{E}\to \mathbb{E} be a monotone operator. The variational inequality problem {\rm (VI_tP)} is to find s^{\star}\in \Omega such that

\langle Q(s^{\star}), \; y-s^{\star}\rangle\geq 0, \; \forall\; y\in \Omega.
The normal cone to \Omega at z\in \Omega is defined by

N_{\Omega}(z) = \{u\in \mathbb{E}: \langle u, \; y-z\rangle\leq 0, \; \forall\; y\in \Omega\}.     (5.2)
It is well known that s^{\star} solves {\rm (VI_tP)} if and only if

0\in Q(s^{\star})+N_{\Omega}(s^{\star}).     (5.3)
The indicator function of \Omega is defined by

I_{\Omega}(z) = \left\{ \begin{array}{ll} 0, & \text{if } z\in \Omega, \\ +\infty, & \text{otherwise}. \end{array}\right.
Since I_{\Omega} is a proper lower semicontinuous convex function on \mathbb{E} , its subdifferential

\partial I_{\Omega}(z) = \{u\in \mathbb{E}: I_{\Omega}(y)\geq I_{\Omega}(z)+\langle u, \; y-z\rangle, \; \forall\; y\in \mathbb{E}\}     (5.4)
is maximal monotone (see [26]); moreover, \partial I_{\Omega}(z) = N_{\Omega}(z) for every z\in \Omega . From (5.2) and (5.4), we can write (5.3) as

0\in Q(s^{\star})+\partial I_{\Omega}(s^{\star}).
By replacing G by \partial I_{\Omega} in Algorithms 3.1 and 3.2, we get viscosity-type inertial iteration methods for common solutions of {\rm VI_tP_s} and {\rm FPP_s} .
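A useful consequence of this identification is that the backward step becomes a metric projection: the resolvent of \partial I_{\Omega} satisfies R^{\partial I_{\Omega}}_{\mu} = [I+\mu \partial I_{\Omega}]^{-1} = {\rm P}_{\Omega} for every \mu > 0 . A minimal Python sketch, with the closed unit ball as our illustrative choice of \Omega (not taken from the paper):

import numpy as np

# The resolvent of the subdifferential of the indicator of Omega is the
# metric projection onto Omega, for every mu > 0. Illustration with Omega
# the closed unit ball (our choice):
def project_unit_ball(z):
    norm = np.linalg.norm(z)
    return z if norm <= 1.0 else z / norm

print(project_unit_ball(np.array([3.0, 4.0])))  # [0.6 0.8]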
5.2. Optimization problem
Let \Omega \subseteq \mathbb{E} be a nonempty closed convex subset and let f_1, f_2 be proper, lower semicontinuous convex functions. Assume that f_1 is differentiable and that \nabla f_1 is \delta -strongly monotone (hence, monotone) and \kappa -Lipschitz continuous. The subdifferential of f_2 ,

\partial f_2(z) = \{u\in \mathbb{E}: f_2(y)\geq f_2(z)+\langle u, \; y-z\rangle, \; \forall\; y\in \mathbb{E}\},

is maximal monotone [25]. The following convex minimization problem ( {\rm COP} ) is taken into consideration: find s^{\star}\in \mathbb{E} such that

f_1(s^{\star})+f_2(s^{\star}) = \min\limits_{s\in \mathbb{E}}\{f_1(s)+f_2(s)\},

whose optimality condition is the inclusion 0\in \nabla f_1(s^{\star})+\partial f_2(s^{\star}) .
Therefore, by taking Q = \nabla f_1 and G = \partial f_2 in Algorithms 3.1 and 3.2, we get two viscosity-type inertial iteration methods for common solutions to {\rm COP_s} and {\rm FPP_s} .
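For instance, with the illustrative choice f_2 = \|\cdot\|_1 (ours, not the paper's), the backward step R^{\partial f_2}_{\mu} acts componentwise as the well-known soft-thresholding operator, which the following sketch implements:

import numpy as np

# With the illustrative choice f_2 = ||.||_1, the resolvent
# [I + mu * subdifferential(f_2)]^{-1} is componentwise soft-thresholding.
def soft_threshold(z, mu):
    return np.sign(z) * np.maximum(np.abs(z) - mu, 0.0)

print(soft_threshold(np.array([1.5, -0.2, 0.7]), 0.5))  # [ 1.  -0.   0.2]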
6. Numerical experiments
Example 6.1. Let \mathbb{E} = \mathbb{R}^3 . For s = (s_1, s_2, s_3) and w = (w_1, w_2, w_3)\in\mathbb{R}^3 , the usual inner product is defined by \langle s, \, w\rangle = s_1w_1 + s_2w_2 + s_3w_3 and the norm by \|w\|^2 = |w_1|^2 + |w_2|^2 + |w_3|^2 . We define the operators Q and G by
It is straightforward to show that the mapping Q is \eta -inverse strongly monotone with \eta = 2 , \delta -strongly monotone (hence monotone) with \delta = \frac{1}{4} , and \kappa -Lipschitz continuous with \kappa = \frac{1}{2} . The mapping G is maximal monotone. We define the mappings T and k as follows:
The mapping T is nonexpansive and k is a \tau -contraction with \tau = 1/6 . For Algorithms 3.1 and 3.2, we choose \beta = 0.3, \lambda_n = \frac{1}{\sqrt{100+n}}, \sigma_n = \frac{1}{(1+n)^2}, \mu_n = \frac{3}{2}-\frac{1}{10+n} ; \beta_n is selected randomly from (0, \bar{\beta}_n) , where \bar{\beta}_n is calculated by (3.2). For Algorithm 1.1, we choose \theta = 0.5 , \theta_n = \frac{1}{(1+n)^2}\in (0, \theta) , \lambda = 0.5\in (0, 2\eta) , and \psi_n = \frac{1}{(10+n)^{0.1}} . We compute the results of Algorithms 3.1 and 3.2 and then compare them with Algorithm 1.1. The stopping criterion for our computation is Tol_n < 10^{-15} , where Tol_n = \|s_{n+1}-s_n\| . We select some different cases of initial values as given below (a check of the parameter choices follows the list):
Case (a): w_0 = (1, 7, -9)\; \; w_1 = (1, -3, 4);
Case (b): w_0 = (30, 53, 91)\; \; w_1 = (1/2, -3/4, -4/11);
Case (c): w_0 = (1/2, -14, 0)\; \; w_1 = (0, -23, 1/4);
Case (d): w_0 = (0.1, -10,200)\; \; w_1 = (100, -2, 1/4).
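As a quick sanity check (ours), these parameter choices satisfy Assumptions ({\bf S_3}) – ({\bf S_5}): since \delta = \frac{1}{4} , we have \frac{1}{2\delta} = 2 and

\mu_n = \frac{3}{2}-\frac{1}{10+n}\in \Big[\frac{3}{2}-\frac{1}{11}, \frac{3}{2}\Big)\subset (0, 2), \quad \kappa = \frac{1}{2} = 2\delta, \quad \sum\limits_{n = 1}^{\infty}\frac{1}{\sqrt{100+n}} = \infty \; with \; \lambda_n\to 0, \quad \sum\limits_{n = 1}^{\infty}\frac{1}{(1+n)^2} < \infty, \quad \frac{\sigma_n}{\lambda_n} = \frac{\sqrt{100+n}}{(1+n)^2}\to 0.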
The experimental findings are shown in Table 1 and Figures 1–4.
Example 6.2. Let us consider the infinite dimensional real Hilbert space \mathbb{E}_1 = \mathbb{E}_2 = l_2: = \big\{ u: = (u_1, u_2, u_3, \cdots, u_n, \cdots), u_n\in \mathbb{R}: \sum\limits_{n = 1}^{\infty} |u_n|^2 < \infty \big\} with inner product \langle u, v\rangle = \sum\limits_{n = 1}^{\infty} u_n v_n and the norm given by \|u\| = \Big(\sum\limits_{n = 1}^{\infty} |u_n|^2 \Big)^{1/2} . We define the monotone mappings by Q(u): = \frac{u}{5} = (\frac{u_1}{5}, \frac{u_2}{5}, \frac{u_3}{5}, \cdots, \frac{u_n}{5}, \cdots) and G(u): = u = (u_1, u_2, u_3, \cdots, u_n, \cdots) . Let k(u): = \frac{u}{15} be the contraction, and let the nonexpansive mapping T be defined by T(u): = \frac{u}{3} = (\frac{u_1}{3}, \frac{u_2}{3}, \frac{u_3}{3}, \cdots, \frac{u_n}{3}, \cdots) .
It can be seen that Q is \delta -strongly monotone with \delta = \frac{1}{5} , \kappa -Lipschitz continuous with \kappa = \frac{1}{5} , and \eta -inverse strongly monotone with \eta = 5 ; G is maximal monotone; k is a \tau -contraction with \tau = \frac{1}{15} . We choose \beta = 0.4 , \lambda_n = \frac{1}{(n+200)^{0.25}} , \sigma_n = \frac{1}{(10+n)^3} , \mu_n = \frac{4}{3}-\frac{1}{n+50} ; \beta_n is selected randomly from (0, \bar{\beta}_n) , where \bar{\beta}_n is given by (3.2). For Algorithm 1.1, we choose \theta = 0.4 , \theta_n = \frac{1}{(10+n)^3}\in (0, \theta) , \lambda = 0.7\in (0, 2\eta) , and \psi_n = \frac{1}{(200+n)^{0.25}} . We compute the results of Algorithms 3.1 and 3.2 and then compare them with Algorithm 1.1. The stopping criterion for our computation is Tol_n < 10^{-15} , where Tol_n = \frac{1}{2}\|s_{n+1}-s_n\| . We consider the following four cases of initial values:
Case (a'): w_0 = \Big\{\frac{1}{n}\Big\}_{n = 1}^{\infty}, \; \; w_1 = \Big\{\frac{1}{1+n^2}\Big\}_{n = 0}^{\infty};
Case (b'): w_0 = \left\{ \begin{array}{ll} \frac{1}{n+1}, & \text{if } n \text{ is odd}, \\ 0, & \text{if } n \text{ is even}, \end{array}\right. \quad w_1 = \Big\{\frac{1}{1+n^3}\Big\}_{n = 1}^{\infty};
Case (c'): w_0 = (0, 0, 0, 0, \cdots), \; \; w_1 = (1, 2, 3, 4, 0, 0, 0, \cdots);
Case (d'): w_0 = \Big\{\frac{(-1)^n}{n}\Big\}_{n = 1}^{\infty}, \quad w_1 = \left\{ \begin{array}{ll} 0, & \text{if } n \text{ is odd}, \\ \frac{1}{n^2}, & \text{if } n \text{ is even}. \end{array}\right.
The experimental findings are shown in Table 2 and Figures 5–8.
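For readers who wish to experiment with this example, the following self-contained Python script runs the reading of Algorithm 3.1 sketched in Section 3 (whose combination step for u_n is our assumption) on a 50-coordinate truncation of l_2 for Case (a'); the truncation length is also our choice. Here Q(u) = u/5 , G(u) = u (so the resolvent is division by 1+\mu ), k(u) = u/15 , T(u) = u/3 , and the unique common solution is the zero sequence.

import numpy as np

# Reproduction sketch for Example 6.2, Case (a'), on a truncation of l_2.
N = 50
idx = np.arange(1, N + 1)
w0 = 1.0 / idx                       # w_0 = {1/n}, n >= 1
w1 = 1.0 / (1.0 + (idx - 1.0) ** 2)  # w_1 = {1/(1+n^2)}, n >= 0

lam = lambda n: 1.0 / (n + 200.0) ** 0.25
sigma = lambda n: 1.0 / (10.0 + n) ** 3
mu = lambda n: 4.0 / 3.0 - 1.0 / (n + 50.0)

s_prev, s = w0, w1
for n in range(1, 2001):
    diff = np.linalg.norm(s - s_prev)
    beta_bar = 0.4 if diff == 0 else min(0.4, sigma(n) / diff)  # cf. (3.2)
    beta_n = np.random.uniform(0.0, beta_bar)
    # assumed form of (3.3): viscosity + fixed point + inertial step
    u = lam(n) * (s / 15.0) + (1.0 - lam(n)) * (s / 3.0 + beta_n * (s - s_prev))
    # (3.4): forward-backward step with Q(u) = u/5 and resolvent u/(1+mu)
    s_prev, s = s, (u - mu(n) * u / 5.0) / (1.0 + mu(n))
    if 0.5 * np.linalg.norm(s - s_prev) < 1e-15:  # Tol_n < 10^{-15}
        break
print(n, np.linalg.norm(s))  # iterations used; distance to the solution 0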
7. Conclusions
We have suggested two viscosity-type inertial iteration methods for solving the VIsP and the FPP in Hilbert spaces. Our methods compute the viscosity approximation, the fixed point iteration, and the inertial extrapolation simultaneously at the beginning of each iteration. We proved the strong convergence of the proposed methods without computing the resolvent of the sum of the associated monotone operators Q and G. Some consequences and theoretical applications were also discussed. Finally, we illustrated the proposed methods with suitable numerical examples. The numerical results show that our algorithms perform well in terms of CPU time and number of iterations.
Author contributions
M. Dilshad: Conceptualization, Methodology, Formal analysis, Investigation, Writing-original draft, Software, Writing-review & editing; A. Alamer: Conceptualization, Methodology, Formal analysis, Software, Writing-review & editing; Maryam G. Alshahri: Conceptualization, Methodology, Formal analysis, Software, Writing-review & editing; Esmail Alshaban: Investigation, Writing-original draft; Fahad M. Alamrani: Investigation, Writing-original draft. All authors have read and approved the final version of the manuscript for publication.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Conflict of interest
The authors declare no conflicts of interest.