
In this paper, the approximation problem for pseudo-monotone variational inequalities is studied. With the help of the reciprocal form of the parameters and the golden section ratio, two kinds of algorithms are introduced, and their strong convergence is proved under appropriate conditions.
Citation: Haoran Tang, Weiqiang Gong. Two improved extrapolated gradient algorithms for pseudo-monotone variational inequalities[J]. AIMS Mathematics, 2025, 10(2): 2064-2082. doi: 10.3934/math.2025097
Let C be a nonempty closed convex subset of a real Hilbert space H, and let A:H→H be a mapping. The variational inequality problem is to find x∗∈C satisfying the following inequality:
⟨A(x∗),y−x∗⟩≥0,∀y∈C. |
In recent years, owing to the wide application of variational inequalities in mathematics [1,2], physics [3,4,5,6], economics [7,8,9,10], and other fields, their theory has become an important direction in nonlinear analysis, and many researchers have contributed a wealth of results [11,12,13,14]. Among the projection-type algorithms for variational inequalities, Korpelevich's extragradient algorithm [15] is one of the most effective; it produces a sequence {xn} by the following iterative formula:
{yn=PC(xn−λA(xn)),xn+1=PC(xn−λA(yn)), |
in which L is the Lipschitz constant of the mapping A and λ∈(0,1/L). Each iteration requires two projections onto the feasible set. It should be pointed out that when the structure of C is complex, these projections are difficult to compute, which in turn degrades the algorithm's efficiency. In 2000, Tseng introduced an extragradient method that requires only a single projection onto the feasible set per iteration [16]; its iteration reads as follows:
{yn=PC(xn−λA(xn)),xn+1=yn−λ(A(yn)−A(xn)), |
where L is the Lipschitz constant of the mapping A and λ∈(0,1/L). It is worth noting that the step sizes of both algorithms above depend on the Lipschitz constant of A, which is often difficult to compute or estimate. To avoid estimating it, an Armijo-type line search is commonly used to determine the step size [17,18]. However, finding a suitable step size in this way can demand significant computational effort at each iteration, making the Armijo line search rather costly. Recently, Yang and Liu proposed an adaptive extragradient algorithm [19], in which the step size is updated from iteration to iteration by a simple calculation instead of estimating the Lipschitz constant.
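For readers who prefer a computational view, the following minimal Python sketch (ours, not part of the paper) contrasts the two classical updates described above. The feasible set (the nonnegative orthant), the operator A(x)=Mx+q, and the fixed step size are illustrative assumptions only.

```python
import numpy as np

def proj_C(x):
    # Illustrative feasible set: projection onto the nonnegative orthant C = {x : x >= 0}
    return np.maximum(x, 0.0)

def korpelevich_step(x, A, lam):
    # Korpelevich extragradient: two projections onto C per iteration
    y = proj_C(x - lam * A(x))
    return proj_C(x - lam * A(y))

def tseng_step(x, A, lam):
    # Tseng's method: a single projection onto C per iteration
    y = proj_C(x - lam * A(x))
    return y - lam * (A(y) - A(x))

# Illustrative monotone, Lipschitz operator A(x) = Mx + q with M positive semidefinite
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
M = M.T @ M
q = rng.standard_normal(5)
A = lambda x: M @ x + q
L = np.linalg.norm(M, 2)      # Lipschitz constant of A (spectral norm of M)
lam = 0.9 / L                 # any step size in (0, 1/L)

x = rng.standard_normal(5)
for _ in range(500):
    x = tseng_step(x, A, lam)
# residual of the natural map; small values indicate an approximate solution
print(np.linalg.norm(x - proj_C(x - lam * A(x))))
```

Both updates still require the projection onto C itself and a known Lipschitz constant, which is exactly what the adaptive step sizes and half-space projections discussed below are designed to avoid.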
Motivated by [20,21], this paper introduces two new projection algorithms. Under the assumption that the mapping A is pseudo-monotone and Lipschitz continuous, the sequences they generate are shown to converge strongly to a solution of the variational inequality. In contrast to [15,16,17,18,19], the step sizes of the proposed algorithms are updated from iteration to iteration by simple calculations, without estimating the Lipschitz constant.
Let H be a real Hilbert space and C⊂H a nonempty closed convex subset. For a sequence {xn}⊂H, weak convergence of {xn} to x is denoted by xn⇀x (n→∞), and strong convergence by xn→x (n→∞). For all x,y∈H and α∈R, we have
||x+y||2≤||x||2+2⟨y,x+y⟩ | (2.1) |
and
||αx+(1−α)y||2=α||x||2+(1−α)||y||2−α(1−α)||x−y||2. | (2.2) |
Definition 2.1. Let H be a real Hilbert space and let A:H→H be a nonlinear operator. Then A is said to be
(1) L−Lipschitz continuous (L>0) if
||Ax−Ay||≤L||x−y||,∀x,y∈H. | (2.3) |
(2) Monotone if
⟨Ax−Ay,x−y⟩≥0,∀x,y∈H. | (2.4) |
(3) Pseudo-monotone if
⟨Ax,y−x⟩≥0⇒⟨Ay,y−x⟩≥0,∀x,y∈H. | (2.5) |
(4) Weakly sequentially continuous if, for every sequence {xn}⊂H, as n→∞ we have
xn⇀x⇒Axn⇀Ax. | (2.6) |
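As a quick illustration of these definitions, the following sketch (ours, not from the paper) numerically tests the monotonicity inequality (2.4) and the pseudo-monotonicity implication (2.5) on random samples; such a test can only refute a property, never prove it. The operator A(x)=1/(1+|x|) on R is positive everywhere, hence pseudo-monotone, but it is not nondecreasing and therefore not monotone.

```python
import numpy as np

def check_properties(A, dim, trials=20000, seed=0, tol=1e-12):
    """Sample-based check of (2.4) and (2.5); returns (monotone?, pseudo-monotone?)."""
    rng = np.random.default_rng(seed)
    monotone = pseudo = True
    for _ in range(trials):
        x = rng.standard_normal(dim)
        y = rng.standard_normal(dim)
        if np.dot(A(x) - A(y), x - y) < -tol:
            monotone = False                 # violates (2.4)
        if np.dot(A(x), y - x) >= 0 and np.dot(A(y), y - x) < -tol:
            pseudo = False                   # violates (2.5)
    return monotone, pseudo

# A(x) = 1/(1 + |x|) on R: positive, hence pseudo-monotone, but not monotone
A = lambda x: 1.0 / (1.0 + np.abs(x))
print(check_properties(A, dim=1))            # expected output: (False, True)
```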
Definition 2.2. [22] Let H be a real Hilbert space and let C⊂H be a nonempty closed convex subset. For each x∈H there exists a unique nearest point in C, denoted by PC(x), such that ||x−PC(x)||≤||x−y|| for all y∈C. The mapping PC is called the metric projection onto C.
Lemma 2.1. [22] Let H be a real Hilbert space and let C⊂H be a nonempty closed convex subset. For all x∈H and z∈C, we have
z=PC(x)⇔⟨x−z,z−y⟩≥0,∀y∈C. |
Lemma 2.2. [22] Let H be a real Hilbert space, let C⊂H be a nonempty closed convex subset, and let x∈H. Then
(1) ||PC(x)−PC(y)||2≤⟨PC(x)−PC(y),x−y⟩,∀y∈C,
(2) ||PC(x)−y||2≤||x−y||2−||x−PC(x)||2,∀y∈C,
(3) ⟨(I−PC)(x)−(I−PC)(y),x−y⟩≥||(I−PC)(x)−(I−PC)(y)||2,∀y∈C.
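Since the metric projection and Lemma 2.1 are used throughout what follows, a brief sketch may help. The two sets below (a closed ball and a half-space) are our own illustrative choices; the closed-form half-space projection reappears later when projecting onto the set Tn of Algorithm 3.1.

```python
import numpy as np

def proj_ball(x, center, radius):
    # Metric projection onto the closed ball B(center, radius)
    d = x - center
    dist = np.linalg.norm(d)
    return x.copy() if dist <= radius else center + radius * d / dist

def proj_halfspace(x, a, b):
    # Metric projection onto the half-space {z : <a, z> <= b} (closed form)
    violation = np.dot(a, x) - b
    return x.copy() if violation <= 0 else x - violation * a / np.dot(a, a)

# Numerical check of the characterization in Lemma 2.1: <x - z, z - y> >= 0 for all y in C
rng = np.random.default_rng(1)
x = 3.0 * rng.standard_normal(4)
z = proj_ball(x, np.zeros(4), 1.0)
samples = [proj_ball(y, np.zeros(4), 1.0) for y in rng.standard_normal((1000, 4))]
print(min(np.dot(x - z, z - y) for y in samples) >= -1e-12)   # expected: True
```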
Lemma 2.3. [12] For all x∈H and α≥β>0, the following inequalities hold:
(1) ||x−PC[x−βA(x)]||≤||x−PC[x−αA(x)]||,
(2) ||x−PC[x−αA(x)]||/α≤||x−PC[x−βA(x)]||/β.
Lemma 2.4. [23] Let {xn} be a sequence of nonnegative real numbers, let {αn}⊂(0,1) be a sequence with ∞∑n=1αn=∞, and let {yn} be a sequence of real numbers such that
xn+1≤(1−αn)xn+αnyn,∀n≥1. |
If lim supk→∞ynk≤0 for every subsequence {xnk} of {xn} satisfying lim infk→∞(xnk+1−xnk)≥0, then limn→∞xn=0.
Lemma 2.5. [24] Let A:H→H be L−Lipschitz continuous on bounded subsets of H, and let M be a bounded subset of H; then A(M) is bounded.
Lemma 2.6. [25] Let A:C→H be a pseudo-monotone continuous mapping and let x∗∈C. Then
x∗∈VI(C,A)⇔⟨A(x),x−x∗⟩≥0,∀x∈C. |
Lemma 2.7. [26] Let {xn} be a sequence of nonnegative real numbers for which there exists a subsequence {xnj} such that xnj<xnj+1 for all j∈N. Then there exists a nondecreasing sequence of integers {mk} with limk→∞mk=∞ such that, for all sufficiently large k∈N,
xmk≤xmk+1,xk≤xmk+1.
In fact, mk is the largest number n in the set {1,2,⋯,k} such that xn<xn+1.
Lemma 2.8. [27] Let {xn} be a sequence of nonnegative real numbers that satisfies
xn+1≤(1−γn)xn+γnyn,∀n≥0, |
where {γn} and {yn} satisfy
{γn}⊂(0,1),{yn}⊂R,∞∑n=1γn=∞,lim supn→∞yn=0, |
then
limn→∞xn=0. |
Lemma 2.9. [21] Let {pk}⊂C be a bounded sequence, suppose that p is a cluster point of {pk}, and assume that limk→∞||pk−p|| exists. Then {pk} converges to p.
Assumption 3.1.
(C1) The feasible set C is a nonempty closed convex subset of the Hilbert space H, and the solution set VI(C,A) is nonempty.
(C2) The operator A:H→H is pseudo-monotone on C, L−Lipschitz continuous on H, and weakly sequentially continuous.
(C3) f:H→H is a contraction mapping with constant ρ∈[0,1), and the sequence {βn}⊂(0,1] satisfies
limn→∞βn=0,∞∑n=1βn=∞. |
Next, the improved adaptive subgradient extragradient algorithm is given.
Algorithm 3.1. Improved adaptive subgradient extragradient algorithm.
Step 0: Given λ0>0, μ>2, ξ∈(0,1], and arbitrary x0,x1∈H, let {τn} be a sequence of positive real numbers with limn→∞τn/λn=η, where η is a constant and η∈(2/μ,1].
Step 1: Compute
ωn=xn+αn(xn−xn−1). |
Step 2: Compute
yn=PC[ωn−ξτnA(ωn)],zn=PTn[ωn−ξλnA(yn)],Tn≜{x∈H:⟨ωn−ξτnA(ωn)−yn,x−yn⟩≤0}. |
If ωn=yn or A(yn)=0, STOP. Otherwise, go to Step 3.
Step 3: Compute
xn+1=βnf(zn)+(1−βn)zn. |
Step 4: Compute
λn+1={max{μ⟨A(ωn)−A(yn),zn−yn⟩||ωn−yn||2+||zn−yn||2,λn},if⟨A(ωn)−A(yn),zn−yn⟩>0,λn,else. | (3.1) |
Set n:=n+1, and go to Step 1.
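To make the steps concrete, here is a minimal Python sketch of Algorithm 3.1. The paper leaves the feasible set C, the contraction f, and the concrete sequences {βn}, {αn}, {τn} free; the choices below (βn=1/(n+1), an inertial weight αn for which (αn/βn)||xn−xn−1||→0 as required later in Theorem 3.1, and τn=ηλn) are illustrative assumptions only. The projection onto the half-space Tn uses its closed form.

```python
import numpy as np

def proj_halfspace(x, a, b):
    # Closed-form projection onto {z : <a, z> <= b}; the set T_n in Step 2 has this form
    viol = np.dot(a, x) - b
    na2 = np.dot(a, a)
    return x if viol <= 0 or na2 == 0 else x - viol * a / na2

def algorithm_3_1(A, proj_C, f, x0, x1, lam0=1.0, mu=4.0, xi=1.0, eta=0.9,
                  max_iter=1000, tol=1e-10):
    x_prev, x, lam = x0.copy(), x1.copy(), lam0
    for n in range(1, max_iter + 1):
        beta = 1.0 / (n + 1)                    # beta_n -> 0, sum beta_n = infinity
        diff = np.linalg.norm(x - x_prev)
        alpha = min(0.5, beta**2 / diff) if diff > 0 else 0.5  # (alpha_n/beta_n)*diff -> 0
        tau = eta * lam                         # tau_n / lam_n = eta in (2/mu, 1]

        w = x + alpha * (x - x_prev)            # Step 1: inertial extrapolation
        a = w - xi * tau * A(w)
        y = proj_C(a)                           # Step 2: projection onto C
        if np.linalg.norm(w - y) < tol or np.linalg.norm(A(y)) < tol:
            return y                            # stopping test of Step 2
        z = proj_halfspace(w - xi * lam * A(y), a - y, np.dot(a - y, y))
        x_prev, x = x, beta * f(z) + (1 - beta) * z   # Step 3: viscosity step

        s = np.dot(A(w) - A(y), z - y)          # Step 4: step-size update (3.1)
        if s > 0:
            lam = max(mu * s / (np.linalg.norm(w - y) ** 2
                                + np.linalg.norm(z - y) ** 2), lam)
    return x
```

Because Tn is a half-space, the second projection of each iteration is explicit even when PC itself is expensive, which is the main computational point of the subgradient extragradient construction.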
Lemma 3.1. Suppose that (C1)–(C3) hold; then the sequence {λn} generated by (3.1) is nondecreasing and satisfies
0<limn→∞λn=λ≤max{λ0,μL/2}. |
Proof. According to (3.1), {λn} is a nondecreasing sequence. Since A is L−Lipschitz continuous, we also have ||A(ωn)−A(yn)||≤L||ωn−yn||. When ⟨A(ωn)−A(yn),zn−yn⟩>0, we have
μ⟨A(ωn)−A(yn),zn−yn⟩/(||ωn−yn||2+||zn−yn||2)≤μ||A(ωn)−A(yn)||⋅||zn−yn||/(2||ωn−yn||⋅||zn−yn||)≤μL/2, |
while when ⟨A(ωn)−A(yn),zn−yn⟩≤0 we simply have λn+1=λn. Hence, by mathematical induction, we obtain
λn+1≤max{λ0,μL/2}. |
Thus {λn} is nondecreasing and bounded above, so the limit limn→∞λn=λ exists and satisfies 0<λ≤max{λ0,μL/2}.
Lemma 3.2. Suppose that (C1)–(C3) hold, let {zn} be the sequence generated by Algorithm 3.1, and let p∈VI(C,A). Then
||zn−p||2≤||ωn−p||2−(1−τnλn)||ωn−zn||2−(τnλn−2ξμλn+1λn)||ωn−yn||2−(τnλn−2ξμλn+1λn)||zn−yn||2. | (3.2) |
Proof. Since p∈VI(C,A)⊂C⊂Tn, according to Lemma 2.2 we have
||zn−p||2=||PTn[ωn−ξλnA(yn)]−p||2≤||ωn−ξλnA(yn)−p||2−||ωn−ξλnA(yn)−zn||2=||ωn−p||2+||ξλnA(yn)||2−2ξλn⟨A(yn),ωn−p⟩−||ωn−zn||2−||ξλnA(yn)||2+2ξλn⟨A(yn),ωn−zn⟩=||ωn−p||2−||ωn−zn||2−2ξλn⟨A(yn),zn−p⟩=||ωn−p||2−||ωn−zn||2−2ξλn⟨A(yn),zn−yn+yn−p⟩=||ωn−p||2−||ωn−zn||2−2ξλn⟨A(yn),zn−yn⟩−2ξλn⟨A(yn),yn−p⟩. |
For yn∈C, we gain ⟨A(p),yn−p⟩≥0. Because A is a pseudo-monotone operator, then ⟨A(yn),yn−p⟩≥0. Hence
||zn−p||2≤||ωn−p||2−||ωn−zn||2−2ξλn⟨A(yn),zn−yn⟩, | (3.3) |
in which
||ωn−zn||2+2ξλn⟨A(yn),zn−yn⟩=||ωn−yn+yn−zn||2+2ξλn⟨A(yn),zn−yn⟩=||ωn−yn||2+||yn−zn||2+2⟨ωn−yn,yn−zn⟩+2ξλn⟨A(yn),zn−yn⟩=||ωn−yn||2+||yn−zn||2+2⟨yn−ωn+ξλnA(yn),zn−yn⟩. | (3.4) |
Furthermore, combined with yn=PC[ωn−ξτnA(ωn)] and zn∈Tn, we have
2⟨yn−ωn+ξλnA(yn),zn−yn⟩=2⟨yn−ωn+ξτnA(ωn)−ξτnA(ωn)+ξτnA(yn)−ξτnA(yn)+ξλnA(yn),zn−yn⟩=2⟨yn−ωn+ξτnA(ωn),zn−yn⟩+2ξτn⟨A(yn)−A(ωn),zn−yn⟩+2(ξλn−ξτn)⟨A(yn),zn−yn⟩≥2ξτn⟨A(yn)−A(ωn),zn−yn⟩+τn−λnτn⋅2ξλn⟨A(yn),zn−yn⟩. | (3.5) |
Substituting (3.5) into (3.4), one has
||ωn−zn||2+2ξλn⟨A(yn),zn−yn⟩≥||ωn−yn||2+||yn−zn||2+2ξτn⟨A(yn)−A(ωn),zn−yn⟩+(1−λnτn)⋅2ξλn⟨A(yn),zn−yn⟩, |
that is to say
[1−(1−λnτn)]⋅2ξλn⟨A(yn),zn−yn⟩≥||ωn−yn||2+||yn−zn||2−||ωn−zn||2+2ξτn⟨A(yn)−A(ωn),zn−yn⟩. | (3.6) |
Multiply both sides of (3.6) with τnλn, we have
2ξλn⟨A(yn),zn−yn⟩≥τnλn||ωn−yn||2+τnλn||yn−zn||2−τnλn||ωn−zn||2+2ξλn⟨A(yn)−A(ωn),zn−yn⟩. | (3.7) |
It follows from the definition of {λn} that
⟨A(ωn)−A(yn),zn−yn⟩≤λn+1μ||ωn−yn||2+λn+1μ||zn−yn||2. | (3.8) |
In fact, if ⟨A(ωn)−A(yn),zn−yn⟩≤0, then the inequality (3.8) holds. Otherwise, from (3.1), it infers
λn+1=max{μ⟨A(ωn)−A(yn),zn−yn⟩||ωn−yn||2+||zn−yn||2,λn}≥μ⟨A(ωn)−A(yn),zn−yn⟩||ωn−yn||2+||zn−yn||2, |
which implies that (3.8) holds. In addition, combining (3.7) and (3.8), we have
2ξλn⟨A(yn),zn−yn⟩≥τnλn||ωn−yn||2+τnλn||yn−zn||2−τnλn||ωn−zn||2−2ξλn⋅λn+1μ||ωn−yn||2−2ξλn⋅λn+1μ||zn−yn||2=(τnλn−2ξμ⋅λn+1λn)||ωn−yn||2+(τnλn−2ξμ⋅λn+1λn)||zn−yn||2−τnλn||ωn−zn||2. | (3.9) |
Putting (3.9) into (3.3), one obtains
||zn−p||2≤||ωn−p||2−||ωn−zn||2−(τnλn−2ξμ⋅λn+1λn)||ωn−yn||2−(τnλn−2ξμ⋅λn+1λn)||zn−yn||2+τnλn||ωn−zn||2=||ωn−p||2−(1−τnλn)||ωn−zn||2−(τnλn−2ξμ⋅λn+1λn)||ωn−yn||2−(τnλn−2ξμ⋅λn+1λn)||zn−yn||2. |
The proof is completed.
Lemma 3.3. Suppose that (C1)–(C3) hold and let {ωn} be the sequence generated by Algorithm 3.1. If there exists a subsequence {ωnk}⊂{ωn} weakly converging to z∈H such that
limk→∞||ωnk−ynk||=0, |
then we have z∈VI(C,A).
Proof. According to the definition of Tn, we have
⟨ωnk−ξτnkA(ωnk)−ynk,x−ynk⟩≤0,∀x∈C, |
which equals
τnkξ⟨ωnk−ynk,x−ynk⟩≤⟨A(ωnk),x−ynk⟩,∀x∈C. |
After expanding the right-hand side of the above equation, we can obtain
τnkξ⟨ωnk−ynk,x−ynk⟩≤⟨A(ωnk),x−ωnk⟩+⟨A(ωnk),ωnk−ynk⟩,∀x∈C. |
Following transposition, we have
τnkξ⟨ωnk−ynk,x−ynk⟩+⟨A(ωnk),ynk−ωnk⟩≤⟨A(ωnk),x−ωnk⟩,∀x∈C. | (3.10) |
Because {ωnk} weakly converges to z, the sequence {ωnk} is bounded. Moreover, by the Lipschitz continuity of A, {A(ωnk)} is bounded. Since ||ωnk−ynk||→0, {ynk} is also bounded, and 0<τn≤λn≤max{λ0,μL/2}.
Set k→∞ in (3.10), we gain
lim infk→∞⟨A(ωnk),x−ωnk⟩≥0,∀x∈C. | (3.11) |
In addition, we have
⟨A(ynk),x−ynk⟩=⟨A(ynk)−A(ωnk),x−ωnk⟩+⟨A(ωnk),x−ωnk⟩+⟨A(ynk),ωnk−ynk⟩. | (3.12) |
It follows from limk→∞||ωnk−ynk||=0 and the L−Lipschitz continuity of A on H that
limk→∞||A(ωnk)−A(ynk)||=0. |
Combining the above equation with (3.11) and (3.12), we have
lim infk→∞⟨A(ynk),x−ynk⟩≥0. |
Next, we prove that z∈VI(C,A). Choose a decreasing sequence of positive numbers {ϵk} with limk→∞ϵk=0. For each k, let Nk denote the smallest positive integer such that
⟨A(ynj),x−ynj⟩+ϵk≥0,∀j≥Nk. | (3.13) |
Since {ϵk} is decreasing, the sequence {Nk} is nondecreasing. Moreover, for each k, since {yNk}⊂C, we may assume A(yNk)≠0 (otherwise, yNk is a solution) and set
vNk=A(yNk)||A(yNk)||2, |
which infers that ⟨A(yNk),vNk⟩=1. Now, it follows from (3.13) that for any k,
⟨A(yNk),x+ϵkvNk−yNk⟩≥0. |
Since A is pseudo-monotone, the above equation can be reduced to
⟨A(x+ϵkvNk),x+ϵkvNk−yNk⟩≥0, |
which implies that
⟨A(x),x−yNk⟩≥⟨A(x)−A(x+ϵkvNk),x+ϵkvNk−yNk⟩−ϵk⟨A(x),vNk⟩. | (3.14) |
Next, we will prove that limk→∞ϵkvNk=0. Since ωnk⇀z and limk→∞||ωnk−ynk||=0, we get yNk⇀z (k→∞). Since {yn}⊂C, we have z∈C. Because A is weakly sequentially continuous on H, {A(yNk)} weakly converges to A(z). Assume that A(z)≠0 (otherwise, z is a solution). Since the norm mapping is weakly sequentially lower semicontinuous, one has
0<||A(z)||≤lim infk→∞||A(yNk)||. |
Combined with {yNk}⊂{ynk} and ϵk→0(k→∞), we obtain
0≤lim supk→∞||ϵkvNk||=lim supk→∞(ϵk/||A(yNk)||)≤(lim supk→∞ϵk)/(lim infk→∞||A(yNk)||)=0, |
which infers limk→∞ϵkvNk=0.
Letting k→∞ in (3.14), since A is continuous, {yNk} and {vNk} are bounded, and limk→∞ϵkvNk=0, the right-hand side of (3.14) tends to 0. Therefore, we have
lim infk→∞⟨A(x),x−yNk⟩≥0. |
Hence, for all x∈C, we have
⟨A(x),x−z⟩=limk→∞⟨A(x),x−yNk⟩=lim infk→∞⟨A(x),x−yNk⟩≥0. |
From Lemma 2.6, one obtains
z∈VI(C,A). |
The proof is completed.
Theorem 3.1. Suppose that (C1)–(C3) hold and that the sequence {αn} satisfies
limn→∞(αn/βn)||xn−xn−1||=0.
Then the sequence {xn} generated by Algorithm 3.1 converges strongly to p∈VI(C,A), where p=PVI(C,A)∘f(p).
Proof. For the convenience of the proof, we divide it into the following four steps.
Step 1: Prove that {xn} is bounded.
In fact, it follows from Lemma 3.2 and the definition of {τn} that there exists N1∈N such that, for all n≥N1,
||zn−p||2≤||ωn−p||2−(1−τnλn)||ωn−zn||2−(τnλn−2ξμ⋅λn+1λn)||ωn−yn||2−(τnλn−2ξμ⋅λn+1λn)||zn−yn||2≤||ωn−p||2. | (3.15) |
Hence
||xn+1−p||=||βnf(zn)+(1−βn)zn−p||=||βn[f(zn)−p]+(1−βn)(zn−p)||≤βn||f(zn)−p||+(1−βn)||zn−p||≤βn||f(zn)−f(p)||+βn||f(p)−p||+(1−βn)||zn−p||≤βnρ||zn−p||+βn||f(p)−p||+(1−βn)||zn−p||=[1−βn(1−ρ)]⋅||zn−p||+βn||f(p)−p||≤[1−βn(1−ρ)]⋅||ωn−p||+βn||f(p)−p||. | (3.16) |
Moreover,
||ωn−p||=||xn+αn(xn−xn−1)−p||≤||xn−p||+αn||xn−xn−1||=||xn−p||+βn⋅(αn/βn)||xn−xn−1||. | (3.17) |
Since limn→∞(αn/βn)||xn−xn−1||=0, there exist N2∈N and a constant M1>0 such that
(αn/βn)||xn−xn−1||≤M1,∀n≥N2. | (3.18) |
Combining (3.17) and (3.18), we have
||ωn−p||≤||xn−p||+βnM1,∀n≥N2. | (3.19) |
Set N=max{N1,N2}. Combining (3.16) and (3.19), for all n≥N we then gain
||xn+1−p||≤[1−βn(1−ρ)]||xn−p||+βnM1+βn||f(p)−p||=[1−βn(1−ρ)]||xn−p||+βn(1−ρ)⋅(||f(p)−p||+M1)/(1−ρ)≤max{||xn−p||,(||f(p)−p||+M1)/(1−ρ)}≤⋅⋅⋅≤max{||xN−p||,(||f(p)−p||+M1)/(1−ρ)}.
According to the above discussion, it can be proved that {xn} is bounded.
Step 2: Prove
(1−τnλn)||ωn−zn||2+(τnλn−2ξμ⋅λn+1λn)||ωn−yn||2+(τnλn−2ξμ⋅λn+1λn)||zn−yn||2≤||xn−p||2−||xn+1−p||2+βnM4, | (3.20) |
in which M4>0 is a constant. In fact,
||xn+1−p||2=||βnf(zn)+(1−βn)zn−p||2=||βn[f(zn)−p]+(1−βn)(zn−p)||2≤βn||f(zn)−p||2+(1−βn)||zn−p||2≤βn(||f(zn)−f(p)||+||f(p)−p||)2+(1−βn)||zn−p||2≤βn(ρ||zn−p||+||f(p)−p||)2+(1−βn)||zn−p||2≤βn(||zn−p||+||f(p)−p||)2+(1−βn)||zn−p||2≤βn||zn−p||2+(1−βn)||zn−p||2+βnM2=||zn−p||2+βnM2, | (3.21) |
and M2≜supn∈N{2||zn−p||⋅||f(p)−p||+||f(p)−p||2}>0. By substituting (3.2) into (3.21), it can be obtained that
||xn+1−p||2≤||ωn−p||2−(1−τnλn)||ωn−zn||2−(τnλn−2ξμ⋅λn+1λn)||ωn−yn||2−(τnλn−2ξμ⋅λn+1λn)||zn−yn||2+βnM2. | (3.22) |
From (3.19), one has
||ωn−p||2≤(||xn−p||+ βnM1)2=||xn−p||2+βn(2M1||xn−p||+βnM21)≤||xn−p||2+βnM3, | (3.23) |
in which M3≜supn∈N{2M1||xn−p||+βnM21}>0. Substitute (3.23) into (3.22), we gain
||xn+1−p||2≤||xn−p||2+βnM3+βnM2−(1−τnλn)||ωn−zn||2−(τnλn−2ξμ⋅λn+1λn)||ωn−yn||2−(τnλn−2ξμ⋅λn+1λn)||zn−yn||2, |
which infers that
(1−τnλn)||ωn−zn||2+(τnλn−2ξμ⋅λn+1λn)||ωn−yn||2+(τnλn−2ξμ⋅λn+1λn)||zn−yn||2≤||xn−p||2−||xn+1−p||2+βnM4, |
and M4≜M2+M3.
Step 3: Prove that there exists some constant M>0 such that
||xn+1−p||2≤[1−βn(1−ρ)]||xn−p||2+βn(1−ρ)[21−ρ⟨f(p)−p,xn+1−p⟩+2M1−ρ⋅αnβn||xn−xn−1||]. |
In fact, applying (2.1) and the definition of {ωn}, we have
||ωn−p||2=||xn+αn(xn−xn−1)−p||2≤||xn−p||2+2αn⟨xn−xn−1,ωn−p⟩≤||xn−p||2+2αn||xn−xn−1||⋅||ωn−p||. | (3.24) |
Combined with (2.1), (2.2), and (3.15), we obtain
||xn+1−p||2=||βnf(zn)+(1−βn)zn−p||2=||βn[f(zn)−f(p)]+(1−βn)(zn−p)+βn[f(p)−p]||2≤||βn[f(zn)−f(p)]+(1−βn)(zn−p)||2+2βn⟨f(p)−p,xn+1−p⟩≤βn||f(zn)−f(p)||2+(1−βn)||zn−p||2+2βn⟨f(p)−p,xn+1−p⟩≤βnρ2||zn−p||2+(1−βn)||zn−p||2+2βn⟨f(p)−p,xn+1−p⟩≤[1−βn(1−ρ)]||zn−p||2+2βn⟨f(p)−p,xn+1−p⟩≤[1−βn(1−ρ)]||ωn−p||2+2βn⟨f(p)−p,xn+1−p⟩. | (3.25) |
Putting (3.24) into (3.25), one has
||xn+1−p||2≤[1−βn(1−ρ)]||xn−p||2+2αn||xn−xn−1||⋅||ωn−p||+2βn⟨f(p)−p,xn+1−p⟩=[1−βn(1−ρ)]||xn−p||2+βn(1−ρ)⋅21−ρ⟨f(p)−p,xn+1−p⟩+2αn||xn−xn−1||⋅||ωn−p||≤[1−βn(1−ρ)]||xn−p||2+βn(1−ρ)⋅21−ρ⟨f(p)−p,xn+1−p⟩+2Mαn||xn−xn−1||=[1−βn(1−ρ)]||xn−p||2+βn(1−ρ)[21−ρ⟨f(p)−p,xn+1−p⟩+2M1−ρ⋅αnβn||xn−xn−1||], |
and M≜supn∈N{||xn−p||}>0.
Step 4: Prove {||xn−p||2} converges to 0.
In fact, according to Lemma 2.4, it suffices to show that, for every subsequence {||xnk−p||} of {||xn−p||} satisfying lim infk→∞(||xnk+1−p||−||xnk−p||)≥0, we have
lim supk→∞⟨f(p)−p,xnk+1−p⟩≤0. |
Thus, let {||xnk−p||} be a subsequence of {||xn−p||} satisfying lim infk→∞(||xnk+1−p||−||xnk−p||)≥0; then
lim infk→∞(||xnk+1−p||2−||xnk−p||2)=lim infk→∞[(||xnk+1−p||−||xnk−p||)(||xnk+1−p||+||xnk−p||)]≥0. |
According to Step 2, we have
lim supk→∞[(1−τnkλnk)||ωnk−znk||2+(τnkλnk−2ξμ⋅λnk+1λnk)||ωnk−ynk||2+(τnkλnk−2ξμ⋅λnk+1λnk)||znk−ynk||2]≤lim supk→∞[||xnk−p||2−||xnk+1−p||2+βnkM4]≤lim supk→∞[||xnk−p||2−||xnk+1−p||2]+lim supk→∞βnkM4=−lim infk→∞[||xnk+1−p||2−||xnk−p||2]≤0. |
Since the coefficients of ||ωnk−znk||2, ||ωnk−ynk||2, and ||znk−ynk||2 are all positive, it follows that
limk→∞||ωnk−znk||=0,limk→∞||ωnk−ynk||=0,limk→∞||znk−ynk||=0. | (3.26) |
Next, we will prove that
||xnk+1−xnk||→0(k→∞). |
As k→∞, we have
||xnk+1−znk||=βnk||znk−f(znk)||→0, | (3.27) |
and
||xnk−ωnk||=αnk||xnk−xnk−1||=βnk⋅αnkβnk||xnk−xnk−1||→0. | (3.28) |
From (3.26)–(3.28), one gets
||xnk+1−xnk||≤||xnk+1−znk||+||znk−ωnk||+||ωnk−xnk||→0(k→∞). |
Since {xnk} is bounded, there exists a subsequence {xnkj}⊂{xnk} weakly converging to z∈H, and thereby
lim supk→∞⟨f(p)−p,xnk−p⟩=limj→∞⟨f(p)−p,xnkj−p⟩=⟨f(p)−p,z−p⟩. | (3.29) |
By (3.28), ωnkj⇀z (j→∞). Combining (3.26) and Lemma 3.3, we have z∈VI(C,A). From (3.29) and p=PVI(C,A)∘f(p), we gain
lim supk→∞⟨f(p)−p,xnk−p⟩=⟨f(p)−p,z−p⟩≤0. | (3.30) |
Then
lim supk→∞⟨f(p)−p,xnk+1−p⟩≤lim supk→∞⟨f(p)−p,xnk+1−xnk⟩+lim supk→∞⟨f(p)−p,xnk−p⟩≤0. | (3.31) |
Hence it follows from Lemma 2.4, (3.24), and (3.31) that limn→∞||xn−p||=0. The proof is completed.
Remark 3.1. The algorithm combines the subgradient extragradient method, the inertial technique, and the viscosity method. Under appropriate conditions, different parameters are introduced to improve the algorithm, and the selection of {λn} and {τn} in existing algorithms is modified to a reciprocal form so as to avoid excessive iterations and to control convergence. The second class of algorithms is given next, and its convergence is proved.
Algorithm 4.1. Golden section algorithm for pseudo-monotone variational inequalities.
Step 0: Given p0,p1∈C, λ0>0, ¯λ>0, and ϕ∈(1,φ], where φ=(√5+1)/2 is the golden section ratio.
Let ¯p0=p1, θ0=1, ρ=1/ϕ+1/ϕ2, and k=1.
Step 1: Compute xk=A(pk),xk−1=A(pk−1),
λk=min{ρλk−1, (ϕθk−1/(4λk−1))⋅(||pk−pk−1||2/||xk−xk−1||2), ¯λ}. | (4.1) |
Step 2: Compute
¯pk=((ϕ−1)pk+¯pk−1)/ϕ,pk+1=PC(¯pk−λkxk).
Step 3: Compute
θk=ϕλk/λk−1.
Set k:=k+1, and go to Step 1.
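The following Python sketch of Algorithm 4.1 is again only an illustration: proj_C, the operator A, the starting points, and the bounds λ0 and ¯λ are assumptions supplied by the caller, and the stopping test is ours, not part of the stated algorithm.

```python
import numpy as np

def algorithm_4_1(A, proj_C, p0, p1, lam0=1.0, lam_bar=1e6, phi=(np.sqrt(5) + 1) / 2,
                  max_iter=1000, tol=1e-10):
    rho = 1.0 / phi + 1.0 / phi ** 2                      # Step 0
    p_prev, p, p_bar = p0.copy(), p1.copy(), p1.copy()    # \bar p_0 = p_1
    x_prev, lam_prev, theta = A(p_prev), lam0, 1.0        # x_0 = A(p_0), theta_0 = 1
    for _ in range(1, max_iter + 1):
        x = A(p)                                          # x_k = A(p_k)
        denom = np.linalg.norm(x - x_prev) ** 2
        if denom == 0:
            cand = np.inf
        else:
            cand = phi * theta / (4.0 * lam_prev) * np.linalg.norm(p - p_prev) ** 2 / denom
        lam = min(rho * lam_prev, cand, lam_bar)          # Step 1: adaptive step size (4.1)

        p_bar = ((phi - 1) * p + p_bar) / phi             # Step 2: golden-section averaging
        p_next = proj_C(p_bar - lam * x)                  #         single projection onto C

        theta = phi * lam / lam_prev                      # Step 3: theta_k = phi*lam_k/lam_{k-1}
        if np.linalg.norm(p_next - p) < tol:              # stopping test (added for convenience)
            return p_next
        p_prev, x_prev, lam_prev, p = p, x, lam, p_next
    return p
```

Note that, as in the paper, no Lipschitz constant is needed: the step size in (4.1) is built from the observed difference quotients of A.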
Remark 4.1. Two key inequalities can be obtained from (4.1). First, we have λk≤(1/ϕ+1/ϕ2)λk−1, which means θk≤1+1/ϕ, that is, θk−1−1/ϕ≤0. Second, combining the identity ϕθk−1/λk−1=θkθk−1/λk with (4.1), we can gain
λk≤(ϕθk−1/(4λk−1))⋅(||pk−pk−1||2/||xk−xk−1||2).
Then we have
λ2k||xk−xk−1||2≤(θkθk−1/4)||pk−pk−1||2. | (4.2) |
Lemma 4.1. Suppose that the mapping A:H→H is L−Lipschitz continuous. If the sequence {pk} generated by Algorithm 4.1 is bounded, then both {λk} and {θk} are bounded and bounded away from 0.
Proof. Obviously, {λk} is positive and bounded above by ¯λ, so {λk} is bounded. Next, mathematical induction will be used to prove that {λk} is bounded away from 0.
Since A is L−Lipschitz continuous, we have
||xk−xk−1||=||A(pk)−A(pk−1)||≤L||pk−pk−1||.
Take L large enough that λi≥ϕ2/(4L2¯λ) for i=0,1. Suppose that λi≥ϕ2/(4L2¯λ) holds for all i=0,1,…,k−1; then either
λk=ρλk−1≥λk−1≥ϕ2/(4L2¯λ),
or
λk=(ϕ2/(4λk−2))⋅(||pk−pk−1||2/||xk−xk−1||2)≥ϕ2/(4λk−2L2)≥ϕ2/(4L2¯λ),
or λk=¯λ≥ϕ2/(4L2¯λ), which also holds for L large enough. So, whichever expression λk takes, λk≥ϕ2/(4L2¯λ) holds; hence {λk} is bounded away from 0. Since θk=ϕλk/λk−1, a similar argument shows that {θk} is also bounded and bounded away from 0.
To simplify the proof that follows, we define Ψ(u,v)=⟨u∗,v−u⟩, and u∗=A(u).
Theorem 4.1. Suppose that A:H→H is pseudo-monotone and L−Lipschitz continuous. For every p1,¯p0∈C and λ∈(0,φ/(2L)], the sequences {pk} and {¯pk} generated by Algorithm 4.1 converge to an element of VI(C,A).
Proof. Combining pk+1=PC(¯pk−λkxk) and Lemma 2.1, we have
⟨pk+1−¯pk+λkxk,z−pk+1⟩≥0,∀z∈C. | (4.3) |
Replacing k by k−1 in (4.3) and taking z=pk+1, one obtains
⟨pk−¯pk−1+λk−1xk−1,pk+1−pk⟩≥0. | (4.4) |
Multiplying both sides of (4.4) by λk/λk−1 and utilizing the identity (λk/λk−1)(pk−¯pk−1)=θk(pk−¯pk), we infer
⟨θk(pk−¯pk)+λkxk−1,pk+1−pk⟩≥0. | (4.5) |
Combining (4.3) and (4.5), we obtain
⟨pk+1−¯pk,z−pk+1⟩+θk⟨pk−¯pk,pk+1−pk⟩+λk⟨xk−xk−1,pk−pk+1⟩≥λk⟨xk,pk−z⟩≥λk⟨x,pk−z⟩=λkΨ(z,pk). | (4.6) |
Writing the first two terms of (4.6) in the form of a norm, we can obtain
||pk+1−z||2≤||¯pk−z||2−||pk+1−¯pk||2+2λk⟨xk−xk−1,pk−pk+1⟩+θk(||pk+1−¯pk||2−||pk+1−pk||2−||pk−¯pk||2)−2λkΨ(z,pk). | (4.7) |
According to ¯pk+1=((ϕ−1)pk+1+¯pk)/ϕ, we have
||pk+1−z||2=ϕϕ−1||¯pk+1−z||2−1ϕ−1||¯pk−z||2+1ϕ||pk+1−¯pk||2. |
The above equation combined with (4.7), one has
ϕϕ−1‖ˉpk+1−z‖2≤ϕϕ−1‖ˉpk−z‖2+(θk−1−1ϕ)‖pk+1−ˉpk‖2−2λkΨ(z,pk)−θk(||pk+1−pk||2+||pk−¯pk||2)+2λk⟨xk−xk−1,pk−pk+1⟩. | (4.8) |
Since θk≤1+1/ϕ, it follows from (4.2) that the rightmost term in (4.8) can be bounded as
2λk⟨xk−xk−1,pk−pk+1⟩≤2λk||xk−xk−1||||pk−pk+1||≤√θkθk−1||pk−pk−1||||pk−pk+1||≤θk2||pk+1−pk||2+θk−12||pk−pk−1||2. |
Substituting this bound into (4.8), we infer
ϕϕ−1||¯pk+1−z||2+θk2||pk+1−pk||2+2λkΨ(z,pk)≤ϕϕ−1||¯pk−z||2+θk−12||pk−pk−1||2−θk||pk−¯pk||2. | (4.9) |
Summing the above inequality over the iterations, we have
ϕϕ−1||¯pk+1−z||2+θk2||pk+1−pk||2+k∑i=2θi||pi−¯pi||2+2k∑i=1λiΨ(z,pi)≤ϕϕ−1||¯p2−z||2+θ12||p2−p1||2−θ2||p2−¯p2||2+2λ1Ψ(z,p1). | (4.10) |
Setting z=p∈VI(C,A), the last term on the left-hand side of (4.10) is nonnegative. It follows that {¯pk} and {pk} are bounded and that θk||pk−¯pk||2→0. According to the proof of Lemma 4.1, we have λk≥ϕ2/(4L2¯λ) and {θk} is bounded away from 0. Hence, limk→+∞||pk−¯pk||=0 holds. This means that pk−¯pk−1→0 and pk+1−pk→0.
Next, we will prove that every cluster point of {pk} and {¯pk} belongs to VI(C,A).
Let {ki} be a subsequence such that pki→˜p and λki→λ>0 as i→+∞. Obviously, both pki+1→˜p and ¯pki→˜p hold. Substituting k=ki into (4.3) and letting i→+∞, we have
λ⟨˜x,z−˜p⟩≥0,˜x=A(˜p),∀z∈C. |
Hence, ˜p∈VI(C,A). According to (4.9), the quantity ϕ/(ϕ−1)||¯pk−z||2+(θk−1/2)||pk−pk−1||2 is nonincreasing, which implies that limk→+∞||¯pk−z|| exists. Since z∈VI(C,A) is arbitrary, it follows from Lemma 2.9 that the sequences {pk} and {¯pk} converge to an element of VI(C,A). The proof is completed.
To evaluate the performance of the proposed algorithms, we present numerical experiments on a variational inequality problem. In this section, we provide an example comparing Algorithms 3.1 and 4.1 with the modified projection and contraction algorithm of [28]; see Figure 1.
Example 5.1. Let f:Rn→Rn be defined by f(x)=Ax+b, where A=ZTZ with Z=(zij)n×n, and b=(bi)∈Rn, where the entries zij∈[1,100] and bi∈[−100,0] are generated randomly.
Under the given parameters, the observed convergence of Algorithms 3.1 and 4.1 is faster than that of the modified PCA.
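For reproducibility, the following sketch generates a problem instance of the kind described in Example 5.1. The dimension and the feasible set are not specified in the paper, so the values below are illustrative assumptions; the resulting instance can then be passed to the sketches of Algorithms 3.1 and 4.1 given earlier, and to any implementation of the modified PCA of [28], for comparison.

```python
import numpy as np

rng = np.random.default_rng(2025)
n = 50                                        # dimension: not stated in the paper
Z = rng.uniform(1.0, 100.0, size=(n, n))      # z_ij in [1, 100]
A_mat = Z.T @ Z                               # A = Z^T Z is positive semidefinite
b = rng.uniform(-100.0, 0.0, size=n)          # b_i in [-100, 0]

f_op = lambda x: A_mat @ x + b                # f(x) = Ax + b: monotone and Lipschitz
L = np.linalg.norm(A_mat, 2)                  # Lipschitz constant of f
proj_C = lambda x: np.clip(x, 0.0, 10.0)      # illustrative feasible set C = [0, 10]^n

print("problem generated, Lipschitz constant L =", L)
```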
Remark 5.1. We first considered a comparison with the stochastic projection and contraction algorithm in [29] and the fast alternated inertial projection algorithms in [30]. However, the assumptions on the variational inequality in these two works differ substantially from those of the present paper, so we chose [28], whose conditions and conclusions are closer to ours, for the comparison.
In this paper, the approximation problem has been investigated for pseudo-monotone variational inequalities. By taking advantage of the reciprocal form of the parameters and the golden section ratio, two improved extrapolated gradient algorithms have been proposed, and their strong convergence for solving pseudo-monotone variational inequality problems has been established by extending the existing results of Thong [17] and Malitsky [21]. Finally, a numerical example is provided to illustrate the effectiveness of the proposed results.
Haoran Tang: Formal analysis, Writing editing, Visualization, Data curation; Weiqiang Gong: Funding acquisition, Visualization, Conceptualization, Supervision. All the authors have agreed and given their consent for the publication of this research paper.
The authors are supported by the National Natural Science Foundation of China (Program Nos. 79970017 and 61906084).
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors declare there is no conflict of interest.
[1] P. N. Anh, Relaxed projection methods for solving variational inequality problems, J. Glob. Optim., 90 (2024), 909–930. https://doi.org/10.1007/s10898-024-01398-w
[2] W. Singh, S. Chandok, Mann-type extragradient algorithm for solving variational inequality and fixed point problems, Comp. Appl. Math., 43 (2024), 259. https://doi.org/10.1007/s40314-024-02785-5
[3] S. F. Liu, W. Wang, Y. Jia, H. B. Bian, W. Q. Shen, Modeling of hydro-mechanical coupled fracture propagation in quasi-brittle rocks using a variational phase-field method, Rock Mech. Rock Eng., 57 (2024), 7079–7101. https://doi.org/10.1007/s00603-024-03896-5
[4] X. Zhang, D. Hou, Q. Mao, Z. Wang, Predicting microseismic sensitive feature data using variational mode decomposition and transformer, J. Seismol., 28 (2024), 229–250. https://doi.org/10.1007/s10950-024-10193-9
[5] G. Arenas-Henriquez, A. Cisterna, F. Diaz, R. Gregory, Accelerating black holes in 2 + 1 dimensions: holography revisited, J. High Energ. Phys., 2023 (2023), 122. https://doi.org/10.1007/JHEP09(2023)122
[6] J. E. Brasil, E. R. Oliveira, R. R. Souza, Thermodynamic formalism for general iterated function systems with measures, Qual. Theory Dyn. Syst., 22 (2023), 19. https://doi.org/10.1007/s12346-022-00722-7
[7] J. Jeon, G. Kim, Variational inequality arising from variable annuity with mean reversion environment, J. Inequal. Appl., 2023 (2023), 99–119. https://doi.org/10.1186/s13660-023-03015-y
[8] G. Valencia-Ortega, A. M. A. de Parga-Regalado, M. A. Barranco-Jiménez, On thermo-economic optimization effects in the balance price-demand of generation electricity for nuclear and fossil fuels power plants, Energy Syst., 14 (2023), 1163–1184. https://doi.org/10.1007/s12667-022-00537-0
[9] A. Barbagallo, S. Guarino Lo Bianco, A random time-dependent noncooperative equilibrium problem, Comput. Optim. Appl., 84 (2023), 27–52. https://doi.org/10.1007/s10589-022-00368-w
[10] M. B. Donato, M. Milasi, A. Villanacci, Restricted participation on financial markets: a general equilibrium approach using variational inequality methods, Netw. Spat. Econ., 22 (2022), 327–359. https://doi.org/10.1007/s11067-019-09491-4
[11] Y. Censor, A. Gibali, S. Reich, Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space, Optimization, 61 (2012), 1119–1132. https://doi.org/10.1080/02331934.2010.539689
[12] S. V. Denisov, V. V. Semenov, L. M. Chabak, Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators, Cybern. Syst. Anal., 51 (2015), 757–765. https://doi.org/10.1007/s10559-015-9768-z
[13] F. Facchinei, J. S. Pang, Finite-dimensional variational inequalities and complementarity problems, Springer Series in Operations Research and Financial Engineering, New York: Springer, 2003. https://doi.org/10.1007/b97543
[14] M. C. Ferris, J. S. Pang, Engineering and economic applications of complementarity problems, SIAM Rev., 39 (1997), 669–713. https://doi.org/10.1137/S0036144595285963
[15] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
[16] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431–446. https://doi.org/10.1137/S0363012998338806
[17] D. V. Thong, D. V. Hieu, Weak and strong convergence theorems for variational inequality problems, Numer. Algor., 78 (2018), 1045–1060. https://doi.org/10.1007/S11075-017-0412-z
[18] Y. Shehu, O. S. Iyiola, Strong convergence result for monotone variational inequalities, Numer. Algor., 76 (2017), 259–282. https://doi.org/10.1007/s11075-016-0253-1
[19] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert spaces, J. Optim. Theory Appl., 148 (2011), 318–335. https://doi.org/10.1007/s10957-010-9757-3
[20] J. Yang, H. Liu, Strong convergence result for solving monotone variational inequalities in Hilbert space, Numer. Algor., 80 (2019), 741–752. https://doi.org/10.1007/s11075-018-0504-4
[21] Y. Malitsky, Golden ratio algorithms for variational inequalities, Math. Program., 184 (2020), 383–410. https://doi.org/10.1007/s10107-019-01416-w
[22] K. Goebel, S. Reich, Uniform convexity, hyperbolic geometry, and nonexpansive mappings, New York: Marcel Dekker, 1984.
[23] S. Saejung, P. Yotkaew, Approximation of zeros of inverse strongly monotone operators in Banach spaces, Nonlinear Anal.-Theory, Methods Appl., 75 (2012), 742–750. https://doi.org/10.1016/j.na.2011.09.005
[24] J. Mashreghi, M. Nasri, Forcing strong convergence of Korpelevich's method in Banach spaces with its applications in game theory, Nonlinear Anal.-Theory, Methods Appl., 72 (2010), 2086–2099. https://doi.org/10.1016/j.na.2009.10.009
[25] R. W. Cottle, J. C. Yao, Pseudo-monotone complementarity problems in Hilbert space, J. Optim. Theory Appl., 75 (1992), 281–295. https://doi.org/10.1007/BF00941468
[26] P. E. Mainge, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899–912. https://doi.org/10.1007/s11228-008-0102-z
[27] H. K. Xu, Iterative algorithms for nonlinear operators, J. Lond. Math. Soc., 66 (2002), 240–256. https://doi.org/10.1112/S0024610702003332
[28] Q. L. Dong, Y. J. Cho, L. L. Zhong, T. M. Rassias, Inertial projection and contraction algorithms for variational inequalities, J. Glob. Optim., 70 (2018), 687–704. https://doi.org/10.1007/s10898-017-0506-0
[29] L. Liu, X. Qin, A stochastic projection and contraction algorithm with inertial effects for stochastic variational inequalities, J. Nonlinear Var. Anal., 7 (2023), 995–1016. https://doi.org/10.23952/jnva.7.2023.6.08
[30] Y. Shehu, Q. L. Dong, L. Liu, Fast alternated inertial projection algorithms for pseudo-monotone variational inequalities, J. Comput. Appl. Math., 415 (2022), 114517. https://doi.org/10.1016/j.cam.2022.114517