Citation: Yuanheng Wang, Chenjing Wu, Yekini Shehu, Bin Huang. Self adaptive alternated inertial algorithm for solving variational inequality and fixed point problems. AIMS Mathematics, 2024, 9(4): 9705–9720. doi: 10.3934/math.2024475
We introduce an alternated inertial subgradient extragradient algorithm for solving variational inequality and fixed point problems with non-Lipschitz and pseudo-monotone operators. We prove that, under suitable conditions, the sequence produced by our algorithm converges weakly. Moreover, several numerical experiments comparing our algorithm with earlier algorithms demonstrate its effectiveness.
In a real Hilbert space H with inner product ⟨⋅,⋅⟩ and norm ‖⋅‖, let D be a nonempty closed convex subset. The classical variational inequality problem (VIP) is to find a point x∗∈D such that ⟨Ax∗,y−x∗⟩≥0 for all y∈D, where A:H→H is an operator; we denote its solution set by ◊. Stampacchia [1] introduced variational inequality theory in 1964, and it has since appeared in models for a wide range of problems in engineering, regional science, physics, mathematics, and other fields. The mathematical theory of variational inequalities was first applied to equilibrium problems, where the operator arises as the first-order variation of the corresponding potential energy. As a generalization and development of classical variational problems, variational inequalities have taken increasingly diverse forms, and many projection algorithms have been studied [2,3,4,5,6,7,8,9,10]. In [11], Hu and Wang used a projected neural network (PNN) to solve the VIP under pseudo-monotonicity or pseudoconvexity assumptions. He et al. [12] proposed an inertial PNN for solving the VIP, and Eshaghnezhad et al. [13] presented another novel PNN for the same purpose. In addition, a modified neurodynamic network (MNN) was proposed in [14] for solving the VIP; under strong pseudo-monotonicity and Lipschitz continuity, the fixed-time stability of the MNN was established.
The most famous method for solving the VIP is called the projection gradient method (GM), which is expressed as
$$x_{n+1} = P_D(x_n - \gamma A x_n). \tag{1.1}$$
Here PD:H→D is the metric projection, γ is the stepsize parameter, and A is assumed to be strongly monotone and Lipschitz continuous; under these assumptions the iterative sequence {xn} converges to a solution of the VIP. The projection gradient method can fail, however, when A is weakened to a merely monotone operator, as the sketch following (1.2) illustrates. On this basis, Korpelevich [15] proposed a two-step iteration called the extragradient method (EGM):
$$\begin{cases} x_0 \in D,\\ s_n = P_D(x_n - \gamma A x_n),\\ x_{n+1} = P_D(x_n - \gamma A s_n), \end{cases} \tag{1.2}$$
where γ is the stepsize parameter, and A is Lipschitz continuous and monotone.
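To see the failure of the projection gradient method concretely, consider the rotation operator A(x) = (x₂, −x₁) on H = R² with D = R²: it is monotone (indeed ⟨Ax−Ay, x−y⟩ = 0) but not strongly monotone, and its unique solution is x∗ = 0. The following Python/NumPy sketch (the paper's own experiments use MATLAB; the stepsize and iteration count here are illustrative choices of ours) contrasts one run of GM (1.1) with one run of EGM (1.2):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # monotone, not strongly monotone
gamma = 0.1
x_gm = np.array([1.0, 0.0])                 # GM iterate
x_eg = np.array([1.0, 0.0])                 # EGM iterate

for _ in range(1000):
    # Projection gradient method (1.1); P_D is the identity since D = R^2
    x_gm = x_gm - gamma * A @ x_gm
    # Extragradient method (1.2): predictor step s, then corrector step
    s = x_eg - gamma * A @ x_eg
    x_eg = x_eg - gamma * A @ s

print(np.linalg.norm(x_gm))   # grows like (1 + gamma^2)^(n/2): GM diverges
print(np.linalg.norm(x_eg))   # decays toward 0: EGM converges
```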
However, computing the projection onto a general set D twice per iteration is a major computational burden. Hence, to address this issue, Censor et al. [16] proposed the idea of the half-space and modified the algorithm to
$$\begin{cases} s_n = P_D(x_n - \gamma A x_n),\\ H_n = \{x \in H : \langle x_n - \gamma A x_n - s_n,\, x - s_n\rangle \le 0\},\\ x_{n+1} = P_{H_n}(x_n - \gamma A s_n). \end{cases} \tag{1.3}$$
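The benefit of (1.3) is that the second projection is onto the half-space Hn, which admits a closed form, unlike the projection onto a general closed convex set D. A minimal sketch of this standard formula (the helper name `proj_halfspace` is ours):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Project x onto the half-space H = {z : <a, z> <= b} in closed form."""
    viol = a @ x - b
    if viol <= 0:
        return x                       # x already lies in H
    return x - (viol / (a @ a)) * a    # move along the outward normal a

# For H_n in (1.3): take a = x_n - gamma*A(x_n) - s_n and b = <a, s_n>.
```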
Recently, adaptive step size [17,18,19] and inertia [20,21,22,23] have been frequently used to accelerate algorithm convergence. For example, Thong and Hieu [24] presented the following algorithm:
$$\begin{cases} h_n = x_n + \alpha_n(x_n - x_{n-1}),\\ s_n = P_D(h_n - \tau_n A h_n),\\ e_n = P_{H_n}(h_n - \tau_n A s_n),\\ x_{n+1} = \beta_n f(e_n) + (1-\beta_n) e_n, \end{cases} \tag{1.4}$$
where Hn={x∈H:⟨hn−τnAhn−sn,x−sn⟩≤0}, and
$$\tau_{n+1} = \begin{cases} \min\left\{ \dfrac{\mu \|h_n - s_n\|}{\|A h_n - A s_n\|},\, \tau_n \right\}, & \text{if } A h_n - A s_n \ne 0,\\[1ex] \tau_n, & \text{otherwise}. \end{cases}$$
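This rule needs neither a line search nor knowledge of the Lipschitz constant of A; it only monitors the most recent pair (hn, sn) and never increases the stepsize. A minimal sketch, assuming A is a callable on NumPy arrays and mu plays the role of the parameter μ:

```python
import numpy as np

def next_stepsize(tau, h, s, A, mu=0.5):
    """tau_{n+1} = min(mu*||h - s|| / ||A(h) - A(s)||, tau_n); unchanged if A(h) = A(s)."""
    d = np.linalg.norm(A(h) - A(s))
    if d == 0:
        return tau                     # A(h_n) = A(s_n): keep the current stepsize
    return min(mu * np.linalg.norm(h - s) / d, tau)
```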
Researchers have also combined the VIP with fixed point problems [25] (we denote the set of common solutions by Δ). For example, Nadezhkina and Takahashi [26] proposed the following algorithm:
$$\begin{cases} x_0 \in D,\\ s_n = P_D(x_n - \tau_n A x_n),\\ x_{n+1} = (1-\alpha_n)x_n + \alpha_n T P_D(x_n - \tau_n A s_n), \end{cases} \tag{1.5}$$
where A is Lipschitz continuous and monotone, and T:D→D is nonexpansive. The sequence produced by this algorithm converges weakly to an element of Δ. Another instance is the algorithm proposed by Thong and Hieu [27], which is as follows:
$$\begin{cases} h_n = x_n + \alpha_n(x_n - x_{n-1}),\\ s_n = P_D(h_n - \tau_n A h_n),\\ e_n = P_{H_n}(h_n - \tau_n A s_n),\\ x_{n+1} = (1-\beta_n) h_n + \beta_n T e_n, \end{cases} \tag{1.6}$$
where τn is selected as the maximum τ within the set {γ, γl, γl², …} that satisfies the condition
$$\tau \|A h_n - A s_n\| \le \mu \|h_n - s_n\|.$$
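In contrast to the monotone stepsize update above, the stepsize in (1.6) is found by backtracking over {γ, γl, γl², …} until the test holds. A sketch under the assumption that A is a callable and proj_D is the projection onto D (the cap max_back on the number of trials is our own safeguard):

```python
import numpy as np

def armijo_stepsize(h, A, proj_D, gamma=0.5, l=0.5, mu=0.5, max_back=60):
    """Return the largest tau in {gamma, gamma*l, gamma*l^2, ...} with
    tau*||A(h) - A(s)|| <= mu*||h - s||, where s = P_D(h - tau*A(h))."""
    Ah = A(h)
    tau = gamma
    for _ in range(max_back):
        s = proj_D(h - tau * Ah)
        if tau * np.linalg.norm(Ah - A(s)) <= mu * np.linalg.norm(h - s):
            return tau, s              # test satisfied: accept tau
        tau *= l                       # shrink and retry
    return tau, proj_D(h - tau * Ah)   # fallback after max_back reductions
```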
Based on the preceding research, in this paper we present a self-adaptive step-size, alternated inertial subgradient extragradient algorithm for solving the VIP and fixed point problems involving non-Lipschitz and pseudo-monotone operators. The article is organized as follows: Section 2 contains definitions and preliminary results essential for our approach. Section 3 establishes the convergence of the generated iterative sequence. Finally, Section 4 presents a series of numerical experiments demonstrating the practicality and effectiveness of our algorithm.
For a sequence {xn} and a point x in H, we write xn→x for strong convergence and xn⇀x for weak convergence.
Definition 2.1. [28] Let T:H→H be a nonlinear operator with a nonempty fixed point set (Fix(T)≠∅). If, for every sequence {qn}⊂H,
$$\left.\begin{aligned} &q_n \rightharpoonup q\\ &(I-T)q_n \to 0 \end{aligned}\right\} \Rightarrow q \in \operatorname{Fix}(T),$$
where I denotes the identity operator, then I−T is said to be demiclosed at zero.
Definition 2.2. For an operator T:H→H, the following definitions apply:
(1) T is termed nonexpansive if
$$\|Tq_1 - Tq_2\| \le \|q_1 - q_2\| \quad \forall q_1, q_2 \in H.$$
(2) T is termed quasi-nonexpansive with a non-empty fixed point set Fix(T)≠∅ if
$$\|Tx - \eta\| \le \|x - \eta\| \quad \forall x \in H,\ \eta \in \operatorname{Fix}(T).$$
Definition 2.3. A sequence {qn} is said to be Fejér monotone concerning a set D if
$$\|q_{n+1} - q\| \le \|q_n - q\| \quad \forall q \in D.$$
Lemma 2.1. For each ζ1,ζ2∈H and ϵ∈R, we have
$$\|\zeta_1 + \zeta_2\|^2 \le \|\zeta_1\|^2 + 2\langle \zeta_1 + \zeta_2,\, \zeta_2\rangle; \tag{2.1}$$
$$\|\epsilon \zeta_2 + (1-\epsilon)\zeta_1\|^2 = (1-\epsilon)\|\zeta_1\|^2 + \epsilon\|\zeta_2\|^2 - \epsilon(1-\epsilon)\|\zeta_2 - \zeta_1\|^2. \tag{2.2}$$
Lemma 2.2. [26] Given ψ∈H and φ∈D, then
(1) ‖PDψ−PDφ‖2≤⟨ψ−φ,PDψ−PDφ⟩;
(2) ‖φ−PDψ‖2≤‖ψ−φ‖2−‖ψ−PDψ‖2;
(3) ⟨ψ−PDψ,PDψ−φ⟩≥0.
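These characterizations are easy to sanity-check numerically. A quick sketch with D the closed unit ball in R³ (a test case of ours, chosen because its projection has a closed form):

```python
import numpy as np

proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))   # P_D for the unit ball

rng = np.random.default_rng(1)
psi = 3.0 * rng.normal(size=3)        # a point typically outside D
phi = proj_ball(rng.normal(size=3))   # an arbitrary point inside D

# Property (3): <psi - P_D(psi), P_D(psi) - phi> >= 0 for every phi in D
print((psi - proj_ball(psi)) @ (proj_ball(psi) - phi) >= 0)   # True
```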
Lemma 2.3. [29] Suppose A:D→H is pseudo-monotone and uniformly continuous. Then ς∈◊ if and only if ⟨Ax, x−ς⟩ ≥ 0 for all x∈D.
Lemma 2.4. [30] Let D be a nonempty subset of H. A sequence {xn} in H converges weakly to a point in D if the following conditions are met:
(1) For every x∈D, limn→∞‖xn−x‖ exists;
(2) Every sequential weak cluster point of {xn} is in D.
This section presents an alternated inertial projection algorithm designed to solve the VIP and the fixed point problem of a quasi-nonexpansive mapping T in H. We make the following assumptions:
Assumption 3.1.
(a) The operator A:H→H is pseudo-monotone, uniformly continuous over H, and exhibits sequential weak continuity on D;
(b) $\varpi \in \left(\frac{1-\mu}{4}, \frac{1-\mu}{2}\right)$, $0 < \kappa_n < \min\left\{\frac{1-\mu-2\varpi}{2\varpi}, \frac{1-\varpi}{1+\varpi}\right\}$.
The algorithm (Algorithm 1) is as follows:
Algorithm 1.
Initialization: Let x0, x1∈H be arbitrary. Given γ>0, l∈(0,1), μ∈(0,1).
Iterative step: Calculate xn+1 as follows:
Step 1. Set
$$h_n = \begin{cases} x_n, & n \text{ even},\\ x_n + \varpi(x_n - x_{n-1}), & n \text{ odd}. \end{cases}$$
Step 2. Compute sn=PD(hn−τnAhn). If sn=hn, stop. Otherwise compute en=PHn(hn−τnAsn), where
$$H_n = \{x \in H : \langle h_n - \tau_n A h_n - s_n,\, x - s_n\rangle \le 0\},$$
and τn is selected as the maximum τ from the set {γ, γl, γl², …} that satisfies
$$\tau \langle A s_n - A h_n,\, s_n - e_n\rangle \le \mu \|s_n - h_n\| \|s_n - e_n\|.$$
Step 3. Compute xn+1=(1−κn)en+κnTen. Set n:=n+1 and go back to Step 1.
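For concreteness, the following Python/NumPy sketch assembles the three steps (the paper's experiments were coded in MATLAB; A, T, and proj_D are assumed to be user-supplied callables, and the tolerance-based stopping test is our own substitute for the exact condition sn = hn):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Closed-form projection onto H = {z : <a, z> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def algorithm1(x0, x1, A, T, proj_D, gamma=0.5, l=0.5, mu=0.5,
               varpi=0.2, kappa=0.2, n_iter=500, tol=1e-10):
    """Sketch of Algorithm 1 (alternated inertial subgradient extragradient)."""
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        # Step 1: inertial extrapolation on odd iterations only
        h = x if n % 2 == 0 else x + varpi * (x - x_prev)
        Ah = A(h)
        # Step 2: backtrack tau over {gamma, gamma*l, gamma*l^2, ...}; under the
        # paper's assumptions the test below holds for all small enough tau
        tau = gamma
        while True:
            s = proj_D(h - tau * Ah)
            a = h - tau * Ah - s                     # normal vector of H_n
            e = proj_halfspace(h - tau * A(s), a, a @ s)
            if tau * ((A(s) - Ah) @ (s - e)) <= \
               mu * np.linalg.norm(s - h) * np.linalg.norm(s - e):
                break
            tau *= l
        if np.linalg.norm(s - h) <= tol:             # s_n = h_n: stop
            return s
        # Step 3: averaged step with the quasi-nonexpansive mapping T
        x_prev, x = x, (1 - kappa) * e + kappa * T(e)
    return x
```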
To prove the convergence of the algorithm, we first provide several lemmas.
Lemma 3.1. The sequence produced by Algorithm 1, denoted as {x2n}, is bounded and limn→∞‖x2n−ϱ‖ exists for all ϱ∈Δ.
Proof. Indeed, let ϱ∈Δ. Then, we have
$$\begin{aligned} \|e_n-\varrho\|^2 &= \|P_{H_n}(h_n-\tau_n A s_n)-\varrho\|^2\\ &\le \|h_n-\tau_n A s_n-\varrho\|^2 - \|h_n-\tau_n A s_n-e_n\|^2\\ &= \|h_n-\varrho\|^2 + \tau_n^2\|A s_n\|^2 - 2\tau_n\langle h_n-\varrho,\, A s_n\rangle - \|h_n-e_n\|^2 - \tau_n^2\|A s_n\|^2 + 2\tau_n\langle h_n-e_n,\, A s_n\rangle\\ &= \|h_n-\varrho\|^2 - \|h_n-e_n\|^2 + 2\tau_n\langle \varrho-e_n,\, A s_n\rangle\\ &= \|h_n-\varrho\|^2 - \|h_n-e_n\|^2 - 2\tau_n\langle s_n-\varrho,\, A s_n\rangle + 2\tau_n\langle s_n-e_n,\, A s_n\rangle. \end{aligned} \tag{3.1}$$
Since ϱ∈Δ, it follows that ⟨Aϱ, s−ϱ⟩ ≥ 0 for all s∈D, and, by the pseudo-monotonicity of A, we have ⟨As, s−ϱ⟩ ≥ 0 for all s∈D. Taking s=sn gives ⟨Asn, sn−ϱ⟩ ≥ 0. Thus, from (3.1) we get
$$\begin{aligned} \|e_n-\varrho\|^2 &\le \|h_n-\varrho\|^2 - \|h_n-e_n\|^2 + 2\tau_n\langle s_n-e_n,\, A s_n\rangle\\ &= \|h_n-\varrho\|^2 - \|h_n-s_n\|^2 - \|e_n-s_n\|^2 - 2\langle h_n-s_n,\, s_n-e_n\rangle + 2\tau_n\langle s_n-e_n,\, A s_n\rangle\\ &= \|h_n-\varrho\|^2 - \|h_n-s_n\|^2 - \|e_n-s_n\|^2 + 2\langle s_n-h_n+\tau_n A s_n,\, s_n-e_n\rangle\\ &= \|h_n-\varrho\|^2 - \|h_n-s_n\|^2 - \|e_n-s_n\|^2 + 2\langle h_n-\tau_n A h_n-s_n,\, e_n-s_n\rangle + 2\tau_n\langle A s_n-A h_n,\, s_n-e_n\rangle\\ &\le \|h_n-\varrho\|^2 - \|h_n-s_n\|^2 - \|e_n-s_n\|^2 + 2\mu\|s_n-h_n\|\|s_n-e_n\|\\ &\le \|h_n-\varrho\|^2 - \|h_n-s_n\|^2 - \|e_n-s_n\|^2 + \mu\left[\|s_n-h_n\|^2 + \|e_n-s_n\|^2\right]\\ &= \|h_n-\varrho\|^2 - (1-\mu)\|h_n-s_n\|^2 - (1-\mu)\|e_n-s_n\|^2. \end{aligned} \tag{3.2}$$
Subsequently, by (2.2), we obtain
$$\begin{aligned} \|x_{n+1}-\varrho\|^2 &= \|(1-\kappa_n)e_n + \kappa_n T e_n - \varrho\|^2 = \|\kappa_n(T e_n-\varrho) + (1-\kappa_n)(e_n-\varrho)\|^2\\ &= \kappa_n\|T e_n-\varrho\|^2 + (1-\kappa_n)\|e_n-\varrho\|^2 - \kappa_n(1-\kappa_n)\|T e_n-e_n\|^2\\ &\le \kappa_n\|e_n-\varrho\|^2 + (1-\kappa_n)\|e_n-\varrho\|^2 - \kappa_n(1-\kappa_n)\|T e_n-e_n\|^2\\ &= \|e_n-\varrho\|^2 - \kappa_n(1-\kappa_n)\|T e_n-e_n\|^2\\ &\le \|h_n-\varrho\|^2 - (1-\mu)\|h_n-s_n\|^2 - (1-\mu)\|e_n-s_n\|^2 - \kappa_n(1-\kappa_n)\|T e_n-e_n\|^2. \end{aligned} \tag{3.3}$$
Meanwhile, combining this with (3.3), it is evident that
$$\|x_{n+1}-\varrho\|^2 \le (1-\kappa_n)\|h_n-\varrho\|^2 + \kappa_n\|e_n-\varrho\|^2. \tag{3.4}$$
In particular,
$$\|x_{2n+2}-\varrho\|^2 \le \|h_{2n+1}-\varrho\|^2 - (1-\mu)\|h_{2n+1}-s_{2n+1}\|^2 - (1-\mu)\|e_{2n+1}-s_{2n+1}\|^2 - \kappa_{2n+1}(1-\kappa_{2n+1})\|T e_{2n+1}-e_{2n+1}\|^2. \tag{3.5}$$
By (2.2), we obtain
$$\begin{aligned} \|h_{2n+1}-\varrho\|^2 &= \|x_{2n+1} + \varpi(x_{2n+1}-x_{2n}) - \varrho\|^2\\ &= (1+\varpi)\|x_{2n+1}-\varrho\|^2 - \varpi\|x_{2n}-\varrho\|^2 + \varpi(1+\varpi)\|x_{2n+1}-x_{2n}\|^2. \end{aligned} \tag{3.6}$$
As another special case of (3.3), we have
$$\begin{aligned} \|x_{2n+1}-\varrho\|^2 &\le \|x_{2n}-\varrho\|^2 - (1-\mu)\|x_{2n}-s_{2n}\|^2 - (1-\mu)\|e_{2n}-s_{2n}\|^2 - \kappa_{2n}(1-\kappa_{2n})\|T e_{2n}-e_{2n}\|^2\\ &\le \|x_{2n}-\varrho\|^2 - \frac{1-\mu}{2}\|x_{2n}-e_{2n}\|^2 - \kappa_{2n}(1-\kappa_{2n})\|T e_{2n}-e_{2n}\|^2, \end{aligned} \tag{3.7}$$
and then, substituting (3.7) into (3.6), we get
$$\|h_{2n+1}-\varrho\|^2 \le \|x_{2n}-\varrho\|^2 - \frac{(1+\varpi)(1-\mu)}{2}\|x_{2n}-e_{2n}\|^2 - \kappa_{2n}(1-\kappa_{2n})(1+\varpi)\|T e_{2n}-e_{2n}\|^2 + \varpi(1+\varpi)\|x_{2n+1}-x_{2n}\|^2. \tag{3.8}$$
Plugging (3.8) into (3.5) gives
$$\begin{aligned} \|x_{2n+2}-\varrho\|^2 \le{} & \|x_{2n}-\varrho\|^2 - \frac{(1+\varpi)(1-\mu)}{2}\|x_{2n}-e_{2n}\|^2 - \kappa_{2n}(1-\kappa_{2n})(1+\varpi)\|T e_{2n}-e_{2n}\|^2\\ & + \varpi(1+\varpi)\|x_{2n+1}-x_{2n}\|^2 - (1-\mu)\|h_{2n+1}-s_{2n+1}\|^2 - (1-\mu)\|e_{2n+1}-s_{2n+1}\|^2\\ & - \kappa_{2n+1}(1-\kappa_{2n+1})\|T e_{2n+1}-e_{2n+1}\|^2, \end{aligned} \tag{3.9}$$
where
$$\begin{aligned} \|x_{2n+1}-x_{2n}\|^2 &= \|(1-\kappa_{2n})e_{2n} + \kappa_{2n}Te_{2n} - x_{2n}\|^2 = \|e_{2n}-x_{2n} + \kappa_{2n}(Te_{2n}-e_{2n})\|^2\\ &= \|e_{2n}-x_{2n}\|^2 + \kappa_{2n}^2\|Te_{2n}-e_{2n}\|^2 + 2\kappa_{2n}\langle e_{2n}-x_{2n},\, Te_{2n}-e_{2n}\rangle\\ &\le \|e_{2n}-x_{2n}\|^2 + \kappa_{2n}^2\|Te_{2n}-e_{2n}\|^2 + \kappa_{2n}\left(\|e_{2n}-x_{2n}\|^2 + \|Te_{2n}-e_{2n}\|^2\right)\\ &= (1+\kappa_{2n})\|e_{2n}-x_{2n}\|^2 + \kappa_{2n}(\kappa_{2n}+1)\|Te_{2n}-e_{2n}\|^2. \end{aligned} \tag{3.10}$$
Thus, putting (3.10) into (3.9), we have
$$\begin{aligned} \|x_{2n+2}-\varrho\|^2 \le{} & \|x_{2n}-\varrho\|^2 - \left[\frac{(1+\varpi)(1-\mu)}{2} - \varpi(1+\varpi)(1+\kappa_{2n})\right]\|e_{2n}-x_{2n}\|^2\\ & - \left[\kappa_{2n}(1-\kappa_{2n})(1+\varpi) - \varpi(1+\varpi)\kappa_{2n}(\kappa_{2n}+1)\right]\|Te_{2n}-e_{2n}\|^2\\ & - (1-\mu)\|h_{2n+1}-s_{2n+1}\|^2 - (1-\mu)\|e_{2n+1}-s_{2n+1}\|^2\\ & - \kappa_{2n+1}(1-\kappa_{2n+1})\|Te_{2n+1}-e_{2n+1}\|^2. \end{aligned} \tag{3.11}$$
Since $\varpi \in \left(\frac{1-\mu}{4}, \frac{1-\mu}{2}\right)$ and $0 < \kappa_n < \min\left\{\frac{1-\mu-2\varpi}{2\varpi}, \frac{1-\varpi}{1+\varpi}\right\}$, the bracketed coefficients in (3.11) are nonnegative, so the sequence {‖x2n−ϱ‖} is decreasing; hence limn→∞‖x2n−ϱ‖ exists. This implies {‖x2n−ϱ‖} is bounded, and therefore {x2n} is bounded. By (3.7), {‖x2n+1−ϱ‖} is also bounded. Consequently, {‖xn−ϱ‖} is bounded and {xn} is bounded. □
Lemma 3.2. Let {x2n} be the sequence produced by Algorithm 1. If a subsequence {x2nk} of {x2n} converges weakly to x∗∈H and limk→∞‖x2nk−s2nk‖=0, then x∗∈◊.
Proof. Since h2n=x2n, the definition of {s2nk} and Lemma 2.2 give
$$\langle x_{2n_k} - \tau_{2n_k} A x_{2n_k} - s_{2n_k},\, x - s_{2n_k}\rangle \le 0, \quad \forall x \in D,$$
and so
$$\frac{1}{\tau_{2n_k}}\langle x_{2n_k} - s_{2n_k},\, x - s_{2n_k}\rangle \le \langle A x_{2n_k},\, x - s_{2n_k}\rangle, \quad \forall x \in D.$$
Hence,
$$\frac{1}{\tau_{2n_k}}\langle x_{2n_k} - s_{2n_k},\, x - s_{2n_k}\rangle + \langle A x_{2n_k},\, s_{2n_k} - x_{2n_k}\rangle \le \langle A x_{2n_k},\, x - x_{2n_k}\rangle, \quad \forall x \in D. \tag{3.12}$$
Since limk→∞‖x2nk−s2nk‖=0, taking the limit as k→∞ in (3.12) yields
$$\liminf_{k\to\infty}\langle A x_{2n_k},\, x - x_{2n_k}\rangle \ge 0, \quad \forall x \in D. \tag{3.13}$$
Select a decreasing sequence {ϵk}⊂(0,∞) with limk→∞ϵk=0. Then, for each ϵk, based on (3.13), let Mk denote the smallest positive integer satisfying
$$\langle A x_{2n_j},\, x - x_{2n_j}\rangle + \epsilon_k \ge 0, \quad \forall j \ge M_k. \tag{3.14}$$
Since {ϵk} is decreasing, {Mk} is increasing. Moreover, Ax2Mk≠0 for each k; let
$$v_{2M_k} = \frac{A x_{2M_k}}{\|A x_{2M_k}\|^2}.$$
Here, ⟨Ax2Mk, v2Mk⟩=1 for each k. Then, by (3.14), for each k we have
$$\langle A x_{2M_k},\, x + \epsilon_k v_{2M_k} - x_{2M_k}\rangle \ge 0.$$
Since A is pseudo-monotone, we get
$$\langle A(x + \epsilon_k v_{2M_k}),\, x + \epsilon_k v_{2M_k} - x_{2M_k}\rangle \ge 0. \tag{3.15}$$
Since x2nk⇀x∗ as k→∞ and A is sequentially weakly continuous on H, the sequence {Ax2nk} converges weakly to Ax∗. Then, by the weak sequential lower semicontinuity of the norm, we obtain
$$0 < \|A x^*\| \le \liminf_{k\to\infty} \|A x_{2n_k}\|.$$
Since {x2Mk}⊂{x2nk} and limk→∞ϵk=0, we have
$$0 \le \limsup_{k\to\infty}\|\epsilon_k v_{2M_k}\| = \limsup_{k\to\infty}\frac{\epsilon_k}{\|A x_{2M_k}\|} \le \frac{\limsup_{k\to\infty}\epsilon_k}{\liminf_{k\to\infty}\|A x_{2n_k}\|} = \frac{0}{\|A x^*\|} = 0,$$
which means limk→∞‖ϵkv2Mk‖=0. Finally, letting k→∞ in (3.15) and using the uniform continuity of A, we get
$$\langle A x,\, x - x^*\rangle \ge 0, \quad \forall x \in D.$$
By Lemma 2.3, this implies x∗∈◊. □
Lemma 3.3. Let {x2n} be the sequence produced by Algorithm 1. Since {x2n} is bounded, there exist a subsequence {x2nk} of {x2n} and a point x∗∈H such that x2nk⇀x∗. Then x∗∈Δ.
Proof. From (3.11) and the convergence of {‖x2n−ϱ‖}, we deduce that, as n→+∞,
$$\|e_{2n+1}-s_{2n+1}\| \to 0, \quad \|x_{2n}-x_{2n+1}\| \to 0, \tag{3.16}$$
$$\|h_{2n+1}-s_{2n+1}\| \to 0, \quad \|Te_{2n}-e_{2n}\| \to 0, \tag{3.17}$$
$$\|Te_{2n+1}-e_{2n+1}\| \to 0.$$
By the definition of {x2n+1}, we have
$$\|x_{2n}-e_{2n}\| = \|x_{2n}-x_{2n+1} + \kappa_{2n}(Te_{2n}-e_{2n})\| \le \|x_{2n}-x_{2n+1}\| + \kappa_{2n}\|Te_{2n}-e_{2n}\|,$$
then
$$\|x_{2n}-e_{2n}\| \to 0, \tag{3.18}$$
and by (3.18) and x2nk⇀x∗, we get
$$e_{2n_k} \rightharpoonup x^*. \tag{3.19}$$
Since I−T is demiclosed at zero (Definition 2.1), (3.17) and (3.19) imply
$$x^* \in \operatorname{Fix}(T). \tag{3.20}$$
From (3.2), we deduce
$$\|e_{2n}-\varrho\|^2 \le \|x_{2n}-\varrho\|^2 - (1-\mu)\|x_{2n}-s_{2n}\|^2 - (1-\mu)\|e_{2n}-s_{2n}\|^2.$$
This implies that
$$(1-\mu)\|x_{2n}-s_{2n}\|^2 \le \|x_{2n}-\varrho\|^2 - \|e_{2n}-\varrho\|^2. \tag{3.21}$$
Based on the convergence of {‖x2n−ϱ‖²}, we may set
$$\|x_{2n}-\varrho\|^2 \to l. \tag{3.22}$$
At the same time, by (3.16), we obtain
$$\|x_{2n+1}-\varrho\|^2 \to l. \tag{3.23}$$
It follows from (3.4) that
$$\|x_{2n+1}-\varrho\|^2 \le (1-\kappa_{2n})\|x_{2n}-\varrho\|^2 + \kappa_{2n}\|e_{2n}-\varrho\|^2.$$
Then,
$$\|e_{2n}-\varrho\|^2 \ge \frac{\|x_{2n+1}-\varrho\|^2 - \|x_{2n}-\varrho\|^2}{\kappa_{2n}} + \|x_{2n}-\varrho\|^2. \tag{3.24}$$
It follows from (3.22)–(3.24) that
$$\liminf_{n\to\infty}\|e_{2n}-\varrho\|^2 \ge \lim_{n\to\infty}\|x_{2n}-\varrho\|^2 = l. \tag{3.25}$$
By (3.2), we get
$$\limsup_{n\to\infty}\|e_{2n}-\varrho\|^2 \le \lim_{n\to\infty}\|x_{2n}-\varrho\|^2 = l. \tag{3.26}$$
Combining (3.25) and (3.26), we get
$$\lim_{n\to\infty}\|e_{2n}-\varrho\|^2 = l. \tag{3.27}$$
Combining (3.21), (3.22), and (3.27), we have
$$\lim_{n\to\infty}\|x_{2n}-s_{2n}\|^2 = 0.$$
Therefore, Lemma 3.2 implies
$$x^* \in \lozenge. \tag{3.28}$$
Combining (3.20) and (3.28), we conclude
$$x^* \in \Delta. \qquad \Box$$
Theorem 3.2. The sequence {xn} produced by Algorithm 1 converges weakly to a point in Δ.
Proof. Let x∗∈H be such that x2nk⇀x∗. Then, by Lemma 3.3,
$$x^* \in \Delta.$$
Since limn→∞‖x2n−ϱ‖² exists for all ϱ∈Δ, Lemma 2.4 shows that {x2n} converges weakly to an element of Δ. Now, suppose {x2n} converges weakly to ξ∈Δ. For all g∈H, it follows that
$$\lim_{n\to\infty}\langle x_{2n}-\xi,\, g\rangle = 0.$$
Furthermore, by (3.16), for all g∈H,
$$\begin{aligned} |\langle x_{2n+1}-\xi,\, g\rangle| &= |\langle x_{2n+1}-x_{2n}+x_{2n}-\xi,\, g\rangle|\\ &\le |\langle x_{2n+1}-x_{2n},\, g\rangle| + |\langle x_{2n}-\xi,\, g\rangle|\\ &\le \|x_{2n+1}-x_{2n}\|\|g\| + |\langle x_{2n}-\xi,\, g\rangle| \to 0, \quad \text{as } n\to\infty. \end{aligned}$$
Therefore, {x2n+1} also converges weakly to ξ∈Δ, and hence the whole sequence {xn} converges weakly to ξ∈Δ. □
This section presents three numerical experiments comparing Algorithm 1 with scheme (1.6), Algorithm 6.1 in [31], and Algorithm 3.1 in [32]. All codes were written in MATLAB R2018b and run on a desktop PC with an Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz 1.80 GHz and 8.00 GB of RAM.
Example 4.1. Assume that H=R³ and D:={x∈R³: Φx ≤ ϕ}, where Φ is a 3×3 matrix and ϕ is a nonnegative vector. Define A(x):=Qx+q with Q=BBᵀ+E+F, where B is a 3×3 matrix, E is a 3×3 skew-symmetric matrix, F is a 3×3 diagonal matrix with nonnegative diagonal entries, and q is a vector in R³. Notably, A is both monotone and Lipschitz continuous with constant L=‖Q‖. Define T(x)=x for all x∈R³.
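A sketch of how problem data with these properties can be generated (the random distributions are our own choice; the paper does not specify them), together with a numerical check that the symmetric part of Q is positive semidefinite, hence A is monotone:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.uniform(-2.0, 2.0, (3, 3))
C = rng.uniform(-2.0, 2.0, (3, 3))
E = C - C.T                            # skew-symmetric matrix
F = np.diag(rng.uniform(0.0, 2.0, 3))  # nonnegative diagonal matrix
Q = B @ B.T + E + F
q = np.zeros(3)
A = lambda x: Q @ x + q

# The skew part E contributes nothing to x^T Q x, so Q's symmetric part
# B B^T + F is positive semidefinite and A is monotone, with L = ||Q||.
L = np.linalg.norm(Q, 2)
print(L, np.linalg.eigvalsh((Q + Q.T) / 2).min() >= -1e-12)   # True
```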
Under the assumption q=0, the solution set is Δ={0}, i.e., x∗=0, and the error at the n-th iteration is measured by ‖xn−x∗‖. In both Algorithm 1 and scheme (1.6), we let μ=0.5, γ=0.5, l=0.5; in Algorithm 1, we let ϖ=0.2, κn=0.2; in scheme (1.6), we let αn=0.25, βn=0.5; in Algorithm 6.1 in [31], we let τ=0.01, αn=0.25; in Algorithm 3.1 in [32], we let αn=1/(n+1), βn=n/(2n+1), f(x)=0.5x, τ1=1, μ=0.2, θ=0.3, ϵn=100/(n+1)². The outcomes of this numerical experiment are presented in Table 1 and Figure 1.
Table 1. Numerical results for Example 4.1.

| | Iter. | Time [sec] |
|---|---|---|
| Algorithm 1 | 297 | 1.7283 |
| scheme (1.6) | 482 | 3.0215 |
| Algorithm 6.1 in [31] | 1311 | 8.5415 |
| Algorithm 3.1 in [32] | 477 | 2.3758 |
Table 1 shows that Algorithm 1 requires the fewest iterations and the least time, which indicates that Algorithm 1 is feasible. Figure 1 likewise shows that Algorithm 1 is more efficient than the other three algorithms.
Example 4.2. Consider H=R and the feasible set D=[−2,5]. Let A:H→H be defined as
$$At := t + \sin(t),$$
and T:H→H be defined as
$$Tt := \frac{t}{2}\sin(t).$$
It is evident that A is Lipschitz continuous and monotone, while T is a quasi-nonexpansive mapping. Consequently, it is straightforward to observe that Δ={0}.
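Since D = [−2, 5] is an interval, the projection is a simple clip, and the problem plugs directly into the algorithm1 sketch from Section 3 (the starting points x0 = x1 = 1 below are our own choice; the remaining parameters follow those listed for Algorithm 1 in the next paragraph):

```python
import numpy as np

A = lambda t: t + np.sin(t)               # monotone: A'(t) = 1 + cos(t) >= 0
T = lambda t: (t / 2) * np.sin(t)         # quasi-nonexpansive with Fix(T) = {0}
proj_D = lambda t: np.clip(t, -2.0, 5.0)  # projection onto D = [-2, 5]

x = algorithm1(np.array([1.0]), np.array([1.0]), A, T, proj_D,
               gamma=0.5, l=0.5, mu=0.9, varpi=0.03, kappa=2/3)
print(x)   # approaches the common solution 0
```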
In Algorithm 1 and scheme (1.6), we let γ=0.5, l=0.5, μ=0.9; in Algorithm 1, we let κn=2/3, ϖ=0.03; in scheme (1.6), we let αn=0.25, βn=0.5; in Algorithm 6.1 in [31], we let τ=0.4, αn=0.5; in Algorithm 3.1 in [32], we let αn=1/(n+1), βn=n/(2n+1), f(x)=0.5x, τ1=1, μ=0.2, θ=0.3, ϵn=100/(n+1)². The results of the numerical experiment are shown in Table 2 and Figure 2.
Table 2. Numerical results for Example 4.2.

| | Iter. | Time [sec] |
|---|---|---|
| Algorithm 1 | 20 | 0.3542 |
| scheme (1.6) | 26 | 0.5168 |
| Algorithm 6.1 in [31] | 41 | 0.4293 |
| Algorithm 3.1 in [32] | 26 | 0.3574 |
Table 2 and Figure 2 illustrate that Algorithm 1 converges faster than the other algorithms.
Example 4.3. Consider H=L2([0,1]) with the inner product
$$\langle m, n\rangle := \int_0^1 m(p)n(p)\,dp \quad \forall m, n \in H,$$
and the induced norm
$$\|m\| := \left(\int_0^1 |m(p)|^2\,dp\right)^{1/2} \quad \forall m \in H.$$
The operator A:H→H is defined as
$$(Am)(p) = \max\{0,\, m(p)\}, \quad p \in [0,1],\ \forall m \in H.$$
The set D:={m∈H:‖m‖≤1} represents the unit ball. Specifically, the projection operator PD(m) is defined as
$$P_D(m) = \begin{cases} \dfrac{m}{\|m\|_{L^2}}, & \|m\|_{L^2} > 1,\\[1ex] m, & \|m\|_{L^2} \le 1. \end{cases}$$
Let T:L2([0,1])→L2([0,1]) be defined by
$$(Tm)(p) = \frac{m(p)}{2}.$$
Therefore, Δ={0}.
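In computations this infinite-dimensional example must be discretized. Below is a sketch on a uniform grid of [0,1] (the grid size and quadrature are our own choices). With a constant quadrature weight, the line-search test and the half-space projection inside the algorithm1 sketch are scale-invariant, so only PD needs the weighted L² norm:

```python
import numpy as np

N = 200
p = np.linspace(0.0, 1.0, N)
w = 1.0 / N                                   # uniform quadrature weight
l2_norm = lambda m: np.sqrt(w * np.sum(m * m))

A = lambda m: np.maximum(0.0, m)              # (Am)(p) = max{0, m(p)}
T = lambda m: m / 2                           # (Tm)(p) = m(p)/2

def proj_D(m):
    """Radial projection onto the unit ball of L^2([0,1])."""
    nm = l2_norm(m)
    return m / nm if nm > 1.0 else m

x0 = p ** 2                                   # initial point x0(p) = p^2
x = algorithm1(x0, x0.copy(), A, T, proj_D, gamma=0.5, l=0.5,
               mu=0.5, varpi=0.2, kappa=0.2)
print(l2_norm(x))                             # error E_n = ||x_n - 0|| decays
```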
In Algorithm 1 and scheme (1.6), we let γ=0.5, l=0.5, μ=0.5; in Algorithm 1, we let κn=0.2, ϖ=0.2; in scheme (1.6), we let αn=0.25, βn=0.3; in Algorithm 6.1 in [31], we let τ=0.9, αn=0.6. The results of the numerical experiment are shown in Figure 3.
Figure 3 shows the behavior of the error En=‖xn−x∗‖ for all the algorithms, starting from the initial point x0(p)=p². The results again indicate that our algorithm outperforms the other algorithms.
This paper introduces a novel approach for solving variational inequality and fixed point problems. Compared with scheme (1.6), Algorithm 1 relaxes the operator A to be pseudo-monotone and uniformly continuous, incorporates a new self-adaptive step size, and adds an alternated inertial step. The efficiency of our algorithm is validated by the results of three distinct numerical experiments.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was supported by the National Natural Science Foundation of China (Grant No. 12171435).
The authors declare that they have no competing interests.
[1] G. Stampacchia, Formes bilineaires coercitives sur les ensembles convexes, C. R. Acad. Sci., 258 (1964), 4413–4416.
[2] Y. Malitsky, V. Semenov, An extragradient algorithm for monotone variational inequalities, Cybern. Syst. Anal., 50 (2014), 271–277. doi: 10.1007/s10559-014-9614-8
[3] P. Tseng, A modified forward-backward splitting method for maximal monotone mapping, SIAM J. Control Optim., 38 (2000), 431–446. doi: 10.1137/S0363012998338806
[4] M. Solodov, B. Svaiter, New projection method for variational inequality problems, SIAM J. Control Optim., 37 (1999), 765–776. doi: 10.1137/S0363012997317475
[5] Y. Malitsky, Projected reflected gradient methods for monotone variational inequalities, SIAM J. Optim., 25 (2015), 502–520. doi: 10.1137/14097238X
[6] P. Mainge, M. Gobinddass, Convergence of one-step projected gradient methods for variational inequalities, J. Optim. Theory Appl., 171 (2016), 146–168. doi: 10.1007/s10957-016-0972-4
[7] A. Iusem, B. Svaiter, A variant of Korpelevich's method for variational inequalities with a new search strategy, Optimization, 42 (1997), 309–321. doi: 10.1080/02331939708844365
[8] R. Kraikaew, S. Saejung, Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 163 (2014), 399–412. doi: 10.1007/s10957-013-0494-2
[9] Y. Shehu, O. Iyiola, Strong convergence result for monotone variational inequalities, Numer. Algor., 76 (2017), 259–282. doi: 10.1007/s11075-016-0253-1
[10] A. Gibali, A new non-Lipschitzian method for solving variational inequalities in Euclidean spaces, Journal of Nonlinear Analysis and Optimization: Theory and Applications, 6 (2015), 41–51.
[11] X. Hu, J. Wang, Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network, IEEE Trans. Neural Netw., 17 (2006), 1487–1499. doi: 10.1109/TNN.2006.879774
[12] X. He, T. Huang, J. Yu, C. D. Li, C. J. Li, An inertial projection neural network for solving variational inequalities, IEEE Trans. Cybern., 47 (2017), 809–814. doi: 10.1109/TCYB.2016.2523541
[13] M. Eshaghnezhad, S. Effati, A. Mansoori, A neurodynamic model to solve nonlinear pseudo-monotone projection equation and its applications, IEEE Trans. Cybern., 47 (2017), 3050–3062. doi: 10.1109/TCYB.2016.2611529
[14] J. Zheng, J. Chen, X. Ju, Fixed-time stability of projection neurodynamic network for solving pseudomonotone variational inequalities, Neurocomputing, 505 (2022), 402–412. doi: 10.1016/j.neucom.2022.07.034
[15] G. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
[16] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318–335. doi: 10.1007/s10957-010-9757-3
[17] D. Thong, A. Gibali, Extragradient methods for solving non-Lipschitzian pseudo-monotone variational inequalities, J. Fixed Point Theory Appl., 21 (2019), 20. doi: 10.1007/s11784-018-0656-9
[18] Q. Dong, G. Cai, Convergence analysis for fixed point problem of asymptotically nonexpansive mappings and variational inequality problem in Hilbert spaces, Optimization, 70 (2021), 1171–1193. doi: 10.1080/02331934.2020.1789127
[19] Q. Dong, S. He, L. Liu, A general inertial projected gradient method for variational inequality problems, Comp. Appl. Math., 40 (2021), 168. doi: 10.1007/s40314-021-01540-4
[20] A. Moudafi, E. Elisabeth, Approximate inertial proximal method using enlargement of a maximal monotone operator, International Journal of Pure and Applied Mathematics, 5 (2003), 283–299.
[21] S. Denisov, V. Semenov, L. Chabak, Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators, Cybern. Syst. Anal., 51 (2015), 757–765. doi: 10.1007/s10559-015-9768-z
[22] D. Thong, Y. Shehu, O. Iyiola, Weak and strong convergence theorems for solving pseudomonotone variational inequalities with non-Lipschitz mappings, Numer. Algor., 84 (2020), 795–823. doi: 10.1007/s11075-019-00780-0
[23] Y. Shehu, Q. Dong, L. Liu, J. Yao, New strong convergence method for the sum of two maximal monotone operators, Optim. Eng., 22 (2021), 2627–2653. doi: 10.1007/s11081-020-09544-5
[24] D. Thong, D. Hieu, T. Rassias, Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems, Optim. Lett., 14 (2020), 115–144. doi: 10.1007/s11590-019-01511-z
[25] F. Ma, J. Yang, M. Yin, A strong convergence theorem for solving pseudo-monotone variational inequalities and fixed point problems using subgradient extragradient method in Banach spaces, AIMS Mathematics, 7 (2022), 5015–5028. doi: 10.3934/math.2022279
[26] N. Nadezhkina, W. Takahashi, Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings, J. Optim. Theory Appl., 128 (2006), 191–201. doi: 10.1007/s10957-005-7564-z
[27] D. Thong, D. Hieu, Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems, Numer. Algor., 80 (2019), 1283–1307. doi: 10.1007/s11075-018-0527-x
[28] H. Xu, Viscosity approximation methods for nonexpansive mappings, J. Math. Anal. Appl., 298 (2004), 279–291. doi: 10.1016/j.jmaa.2004.04.059
[29] R. Cottle, J. Yao, Pseudo-monotone complementarity problems in Hilbert space, J. Optim. Theory Appl., 75 (1992), 281–295. doi: 10.1007/BF00941468
[30] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc., 73 (1967), 591–597. doi: 10.1090/S0002-9904-1967-11761-0
[31] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318–335. doi: 10.1007/s10957-010-9757-3
[32] B. Tan, Z. Zhou, S. Li, Viscosity-type inertial extragradient algorithms for solving variational inequality problems and fixed point problems, J. Appl. Math. Comput., 68 (2022), 1387–1411. doi: 10.1007/s12190-021-01576-z