1.
Introduction
Throughout this paper, H is a real Hilbert space, C is a nonempty, closed, and convex subset of H, and A:H→H is a continuous mapping. The variational inequality problem (VI(A,C) for short) is to find z∗∈C such that ⟨Az∗, z−z∗⟩≥0, ∀z∈C. (1.1)
Variational inequalities have important applications in numerous domains and have been studied extensively by many researchers, producing a multitude of results [1,2,3,4].
The problem VI(A,C) (1.1) is equivalent to the following fixed point problem: find z∗∈C such that z∗ = PC(z∗−λAz∗) for any λ>0.
Consequently, VI(A,C) (1.1) can be solved via fixed point methods (see, e.g., [5,6]). The simplest of these is the following projection gradient algorithm:
zn+1 = PC(zn−λAzn). (1.2)
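The projection gradient iteration zn+1 = PC(zn−λAzn) can be sketched numerically as follows. The concrete instance (C the nonnegative orthant, A(z)=Mz+q with M symmetric positive definite, hence strongly monotone) is our own hypothetical illustration, not an example from the paper:

```python
import numpy as np

# Sketch of the projection gradient iteration z_{n+1} = P_C(z_n - lam * A(z_n)).
# Hypothetical instance: C is the nonnegative orthant, so P_C is a componentwise
# max, and A(z) = M z + q with M symmetric positive definite, hence A is
# eta-strongly monotone and L-Lipschitz continuous.

def project_orthant(z):
    """Metric projection onto C = {z : z >= 0}."""
    return np.maximum(z, 0.0)

def projection_gradient(A, proj, z0, lam, iters=500):
    z = z0.astype(float)
    for _ in range(iters):
        z = proj(z - lam * A(z))
    return z

M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
A = lambda z: M @ z + q

eigs = np.linalg.eigvalsh(M)
eta, L = eigs[0], eigs[-1]   # strong monotonicity modulus and Lipschitz constant
lam = eta / L**2             # any lam in (0, 2*eta/L^2) guarantees convergence here
z_star = projection_gradient(A, project_orthant, np.ones(2), lam)
# fixed-point residual: at a solution, z = P_C(z - lam * A(z))
residual = np.linalg.norm(z_star - project_orthant(z_star - lam * A(z_star)))
```

Because A is strongly monotone and λ lies in the admissible range, the iteration is a contraction and the fixed-point residual vanishes.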
However, convergence of this method requires the fairly strong assumption that A is η-strongly monotone (η>0) and L-Lipschitz continuous, with step size λ∈(0, 2η/L²). Algorithm (1.2) fails when A is merely monotone.
Korpelevich [7] introduced the extragradient algorithm: given z1∈C,
vn = PC(zn−λnAzn), zn+1 = PC(zn−λnAvn), (1.3)
where λn∈(0, 1/L). Compared with algorithm (1.2), the assumption on A is relaxed to monotonicity, and the generated sequence has been shown to converge to a solution of (1.1). However, PC may lack a closed-form expression, and (1.3) requires computing PC twice in each iteration, which increases the computational cost. Much research has addressed this issue, notably by Censor et al. [8,9,10], who replaced the projection onto C by the projection onto a half space or an intersection of half spaces, which is easy to compute. He [11] first proposed the projection and contraction method (PCM). Cai et al. [12] studied the optimal step size ηn for the PCM, which takes the following form:
where
The benefit of this method is that it places the same mild assumptions on A as algorithm (1.3) while requiring only one projection per iteration. Both theoretical analysis and numerical experiments confirm its efficacy, and the optimal step size ηn further accelerates convergence. Owing to these attractive properties, method (1.4) has drawn considerable attention, and many significant advances have been built on it (see, e.g., [12,13,14] and others). Recently, Dong et al. [13] added an inertial term to method (1.4) to improve convergence. In [15], Shehu and Iyiola incorporated alternating inertia and adaptive step sizes:
where
and
They proved convergence of the algorithm under the relaxed assumption that A is pseudo-monotone, and additionally established R-linear convergence when A is strongly pseudo-monotone. In numerical experiments, the algorithm with alternating inertia in [15] performs better than the algorithm with general inertia in [13].
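To make the structure of the PCM concrete, here is a minimal sketch in its standard form: vn = PC(un−λAun), dn = (un−vn)−λ(Aun−Avn), ηn = ⟨un−vn, dn⟩/‖dn‖², un+1 = un−γηndn with γ∈(0,2). The monotone affine test operator and the orthant constraint below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

def pcm(A, proj, u0, lam, gamma=1.8, iters=5000, tol=1e-12):
    """Projection and contraction method with He's optimal step eta_n (standard form)."""
    u = u0.astype(float)
    for _ in range(iters):
        Au = A(u)
        v = proj(u - lam * Au)            # the single projection per iteration
        if np.linalg.norm(u - v) < tol:   # v = u means u is already a solution
            break
        d = (u - v) - lam * (Au - A(v))
        eta = d.dot(u - v) / d.dot(d)     # optimal step size
        u = u - gamma * eta * d
    return u

# Hypothetical monotone instance: A(u) = (S + D) u with S skew-symmetric and D
# positive diagonal, over the nonnegative orthant; the unique solution is u* = 0.
S = np.array([[0.0, 1.0], [-1.0, 0.0]])
D = np.diag([1.0, 2.0])
A = lambda u: (S + D) @ u
L = np.linalg.norm(S + D, 2)              # Lipschitz constant of A
u_star = pcm(A, lambda u: np.maximum(u, 0.0), np.array([3.0, -2.0]), lam=0.9 / L)
```

Note that only one projection per iteration is computed, in contrast with the two projections of the extragradient method (1.3).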
Recently, Malitsky [16] introduced a fascinating idea for solving the mixed variational inequality problem: find z∗∈C such that
where A is a monotone mapping and g is a proper, convex, lower semicontinuous function. He proposed the following golden ratio algorithm:
where ϕ is the golden ratio, i.e., ϕ=(√5+1)/2. In algorithm (1.10), ¯zn is a convex combination of all previously generated iterates. It is straightforward to verify that when g=ιC, (1.9) reduces to (1.1). The algorithm (1.10) may then be written equivalently as:
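A minimal sketch of this projected form, ¯zn = ((ϕ−1)zn+¯zn−1)/ϕ followed by zn+1 = PC(¯zn−λAzn), is given below. The fixed step λ = ϕ/(2L), the monotone affine operator, and the orthant constraint are illustrative assumptions, not the paper's setting:

```python
import numpy as np

# Sketch of the golden ratio algorithm in projected form (g = iota_C):
#   zbar_n = ((phi - 1) * z_n + zbar_{n-1}) / phi
#   z_{n+1} = P_C(zbar_n - lam * A(z_n)),   phi = (sqrt(5) + 1) / 2
# Hypothetical instance: A(z) = M z with a rotation-like monotone M, C = R^m_+.

phi = (np.sqrt(5) + 1) / 2

def golden_ratio(A, proj, z0, lam, iters=5000):
    z, zbar = z0.astype(float), z0.astype(float)
    for _ in range(iters):
        zbar = ((phi - 1) * z + zbar) / phi   # convex combination of past iterates
        z = proj(zbar - lam * A(z))
    return z

M = np.array([[1.0, 2.0], [-2.0, 1.0]])       # monotone: its symmetric part is I
A = lambda z: M @ z
L = np.linalg.norm(M, 2)
z_star = golden_ratio(A, lambda z: np.maximum(z, 0.0),
                      np.array([2.0, 1.0]), lam=phi / (2 * L))
```

Here only the current A(zn) is evaluated per iteration; the averaging in ¯zn replaces the second projection of the extragradient scheme.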
Numerous inertial algorithms have been published for pseudo-monotone variational inequalities, and golden ratio algorithms with convergence guarantees have been studied for mixed variational inequality problems with monotone A. However, there are still few golden ratio results for the variational inequality problem (1.1) when A is pseudo-monotone. Malitsky's algorithm is novel and provides inspiration for the present work: we aim to solve (1.1) under more general conditions using the convex combination structure of this algorithm.
In this research, we combine the projection contraction method of [11] with the golden ratio technique to present a new extrapolation projection contraction algorithm for the pseudo-monotone VI(A,C) (1.1). To speed up its convergence, we also present an alternating extrapolation variant. The convex combination structure greatly expands the admissible range of step sizes, which has a significant effect on the numerical results. Although the golden ratio constant itself is ultimately not used in our algorithms, they are inspired by Malitsky's golden ratio algorithm, so we still refer to them as projection contraction algorithms based on the golden ratio. In this paper, we primarily make the following improvements:
∙ We propose a projection contraction algorithm and an alternating extrapolation projection contraction algorithm based on the golden ratio. Weak convergence of both algorithms is established when A is pseudo-monotone, sequentially weakly continuous, and L-Lipschitz continuous.
∙ We obtain R-linear convergence of both algorithms when A is strongly pseudo-monotone.
∙ Both of our algorithms use new self-adaptive step sizes which, as in (1.10), need not be monotonically decreasing.
∙ In our algorithms, A is a pseudo-monotone mapping, a weaker assumption than in [13,17,18]. Additionally, it is not necessary to restrict the extrapolation parameter ψ to (1,(√5+1)/2] as in [19,20]; it can take any value in (1,+∞).
The structure of the article is as follows:
Section 2 collects the preliminary material used in the paper. Section 3 gives a projection contraction algorithm based on the golden ratio and proves its weak and R-linear convergence. Section 4 gives an alternating extrapolation projection contraction algorithm based on the golden ratio and proves its weak and R-linear convergence. Section 5 presents two numerical examples verifying the effectiveness of the algorithms.
2.
Preliminaries
Let {zn} be a sequence in H. We write zn⇀z when {zn} converges weakly to z, and zn→z when {zn} converges strongly to z.
Definition 2.1. [21] A:H→H is known as:
(a) η-strongly pseudo-monotone if ⟨Au, v−u⟩≥0 implies ⟨Av, v−u⟩≥η‖v−u‖², ∀u,v∈H, where η>0;
(b) pseudo-monotone if ⟨Au, v−u⟩≥0 implies ⟨Av, v−u⟩≥0, ∀u,v∈H;
(c) L-Lipschitz continuous if there exists a constant L>0 such that ‖Au−Av‖≤L‖u−v‖, ∀u,v∈H;
(d) sequentially weakly continuous if for each sequence {un}: un⇀u implies Aun⇀Au.
Definition 2.2. [22] PC is called the metric projection onto C if, for any point u∈H, there exists a unique point PCu∈C such that ‖u−PCu‖≤‖u−v‖, ∀v∈C.
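For simple sets C the metric projection has a closed form. The two standard examples below (a closed ball and a half space, both well known and not specific to this paper) illustrate Definition 2.2:

```python
import numpy as np

# Closed-form metric projections onto two standard sets.

def project_ball(u, center, r):
    """P_C for the closed ball C = {v : ||v - center|| <= r}."""
    d = u - center
    n = np.linalg.norm(d)
    return u if n <= r else center + (r / n) * d

def project_halfspace(u, a, b):
    """P_C for the half space C = {v : <a, v> <= b}."""
    viol = a.dot(u) - b
    return u if viol <= 0 else u - (viol / a.dot(a)) * a

u = np.array([3.0, 4.0])
p1 = project_ball(u, np.zeros(2), 1.0)                 # projects onto the unit ball
p2 = project_halfspace(u, np.array([1.0, 0.0]), 1.0)   # projects onto {v : v_1 <= 1}
```

For polyhedral sets such as C = {u : Bu ≤ b} used in Section 5 there is no closed form in general, which is why limiting the number of projections per iteration matters.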
Definition 2.3. [23] Suppose a sequence {zn} in H converges in norm to z∗∈H. We say that {zn} converges to z∗ R-linearly if lim supn→∞ ‖zn−z∗‖^(1/n) < 1.
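As a quick numerical illustration of Definition 2.3 (our own addition, not part of the original text): for zn = z∗ + rⁿd with 0<r<1, the roots ‖zn−z∗‖^(1/n) = r·|d|^(1/n) tend to r<1, so such a geometric sequence converges R-linearly:

```python
# Illustration of R-linear convergence: z_n = z* + r**n * d with 0 < r < 1.
# The n-th roots ||z_n - z*||**(1/n) approach r, which is strictly below 1.
r, d = 0.5, 3.0
roots = [(abs(r**n * d))**(1.0 / n) for n in range(1, 200)]
# roots[n-1] = r * d**(1/n), so the sequence of roots decreases toward r = 0.5
```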
Lemma 2.1. [21], [22] PC has the following properties:
(i) ⟨u−v,PCu−PCv⟩≥‖PCu−PCv‖2,∀u,v∈H;
(ii) PCu∈C and ⟨v−PCu, PCu−u⟩≥0, ∀v∈C.
Lemma 2.2. [21] The following equality holds in H:
Lemma 2.3. [24] Suppose A is pseudo-monotone in VI(A,C) (1.1) and S is the solution set of VI(A,C) (1.1). Then S is closed and convex, and z∈S if and only if ⟨Av, v−z⟩≥0 for all v∈C.
Lemma 2.4. [25] Let {zn} be a sequence in H such that the following two conditions hold:
(i) for any z∈C, limn→∞‖zn−z‖ exists;
(ii) ωw(zn)⊂C.
Then {zn} converges weakly to a point in C.
Lemma 2.5. [19] Let {an} and {bn} be non-negative real sequences satisfying
where N is some non-negative integer. Then limn→∞bn=0 and limn→∞an exists.
Lemma 2.6. [26] Let {λn} be a non-negative sequence such that
where {ξn} and {τn} satisfy
Then limn→∞λn exists.
3.
Projection contraction algorithm based on the golden ratio
In this section, we present a PCM algorithm with a new extrapolation step, together with the corresponding convergence analysis.
Assumption 3.1. Throughout this paper, the following conditions hold:
(a) A: H→H is pseudo-monotone, sequentially weakly continuous and L-Lipschitz continuous.
(b) The solution set S is nonempty.
(c) {ξn}⊂[1,+∞), ∑∞n=1(ξn−1)<+∞, τn>0, and ∑∞n=1 τn<+∞.
Remark 3.1. Observe in Algorithm 3.1 that if Avn≠A¯un, then
Therefore, λn ≥ min{μ/L, λ1} > 0. By (3.3), we have λn+1 ≤ ξnλn+τn. From Lemma 2.6 we obtain limn→∞ λn = λ when {ξn}⊂[1,+∞), ∑∞n=1(ξn−1)<+∞, and ∑∞n=1 τn<+∞.
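A sketch of the step-size update described in Remark 3.1 may clarify its behavior. The display (3.3) is not reproduced above, so the rule below is the standard form consistent with the remark and should be read as an assumption: λn+1 = min{ξnλn+τn, μ‖vn−¯un‖/‖Avn−A¯un‖} when Avn ≠ A¯un, and λn+1 = ξnλn+τn otherwise:

```python
import numpy as np

# Assumed form of the self-adaptive, non-monotone step-size rule of Remark 3.1.
# Since sum(xi_n - 1) and sum(tau_n) are finite, the step size may increase from
# one iteration to the next yet still converges to a positive limit (Lemma 2.6).

def next_step(lam, xi, tau, mu, v, ubar, A):
    grow = xi * lam + tau                      # allows lam to increase slightly
    num = np.linalg.norm(v - ubar)
    den = np.linalg.norm(A(v) - A(ubar))
    return grow if den == 0.0 else min(grow, mu * num / den)

# Hypothetical check with A(u) = 2u (so L = 2): the ratio term equals mu / 2.
A = lambda u: 2.0 * u
lam = next_step(1.0, xi=1.01, tau=0.01, mu=0.6,
                v=np.array([1.0, 0.0]), ubar=np.array([0.0, 0.0]), A=A)
```

Because ‖Avn−A¯un‖ ≤ L‖vn−¯un‖, the ratio term is bounded below by μ/L, which gives the lower bound λn ≥ min{μ/L, λ1} stated in Remark 3.1.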
Remark 3.2. In our algorithms, it is not necessary to restrict the range of ψ to (1,(√5+1)/2] or (1,2]; ψ only needs to be greater than 1, which greatly enlarges the admissible parameter range.
Lemma 3.1. Assume {un} is the sequence generated by Algorithm 3.1 under the conditions of Assumption 3.1. Then {un} is bounded and limn→∞‖un−u∗‖ exists, where u∗∈S.
Proof. From the iterative scheme, we have
According to (3.2) and Lemma 2.1(i),
Since ¯un∈C and u∗∈S, Definition 2.1(b) gives ⟨A¯un, ¯un−u∗⟩≥0; thus,
Making use of (3.8) and (3.9), we obtain
so,
Putting (3.10) in (3.7), we get
By (3.5) and (3.6), we obtain
Putting (3.12) in (3.11),
From (3.1) and Lemma 2.2,
Combining (3.13) and (3.14),
so we can obtain
where
From the above proof, an≥0 and bn≥0. By Lemma 2.5, limn→∞ bn = 0 and limn→∞ an exists; hence limn→∞ ‖vn−un+1‖² = 0.
From the definition of vn, we get
We already know that
it follows that limn→∞‖vn+1−u∗‖² exists. Consequently, limn→∞‖un+1−u∗‖² exists and {un}, {vn} are bounded. □
Lemma 3.2. Suppose {¯un} and {vn} are generated by Algorithm 3.1. Then under Assumption 3.1, limn→∞‖vn−¯un‖=0.
Proof. Noting
From (3.4),
Choose a fixed number ρ∈(μ,1). Since limn→∞λn=λ, we have limn→∞ (λn/λn+1)μ = μ < ρ. Hence there exists n0 such that (λn/λn+1)μ < ρ for all n≥n0. Therefore, for all n≥n0, we have
and
Thus,
and so, ∀n≥n0, we can get
From (3.21), we get limn→∞‖vn−¯un‖=0. □
Lemma 3.3. Assume that {un} is generated by Algorithm 3.1, then ωw(un)⊂S.
Proof. Since {un} is bounded, ωw(un)≠∅. Choose any q∈ωw(un); then there exists a subsequence {unk}⊂{un} such that unk⇀q, and hence ¯unk−1⇀q and vnk−1⇀q. From Lemma 2.1(ii) and (3.2), we have
thus,
From (3.22) we can obtain
So
Fixing u∈C and passing k→∞ in (3.23), noting that ‖vnk−¯unk‖→0 and {¯unk} and {Avnk} are bounded, we obtain
Choose a decreasing sequence {ϵk} with ϵk>0 and limk→∞ϵk=0. For each ϵk,
where Nk is the smallest non-negative integer satisfying (3.25). Since {ϵk} is decreasing, {Nk} is increasing. For simplicity, we write Nk for nNk. Setting
one gets ⟨AvNk−1, ϑNk−1⟩ = ⟨AvNk−1, AvNk−1/‖AvNk−1‖²⟩ = 1. Then, by (3.25), for each k,
From Definition 2.1(b), we have
Since vnk−1⇀q as k→∞, Definition 2.1(d) yields Avnk−1⇀Aq. Suppose Aq≠0 (if Aq=0, then q∈S). Then, by the sequential weak lower semicontinuity of the norm, we obtain
Because {Nk}⊂{nk}, and limk→∞ϵk=0,
and this means limk→∞‖ϵkϑNk−1‖=0. Letting k→∞ in (3.27), we get
By Lemma 2.3, q∈S; hence ωw(un)⊂S. □
Theorem 3.1. Assume {un} is the sequence generated by Algorithm 3.1 under the conditions of Assumption 3.1. There exists u⋆∈S such that un⇀u⋆.
Proof. From Lemmas 3.1 and 3.3, we get limn→∞‖un−u∗‖ exists and ωw(un)⊂S. From Lemma 2.4, un⇀u⋆∈S. □
Theorem 3.2. Suppose {un} is generated by Algorithm 3.1 and that A is η-strongly pseudo-monotone with η>0. Then {un} converges R-linearly to the unique solution u∗ of VI(A,C) (1.1).
Proof. Since ¯un∈C, from Definition 2.1(a), we have
Multiplying both sides of the above inequality by λn, we get
Adding (3.8) and (3.28), we obtain
so
Putting (3.30) into (3.7), we obtain
Using (3.18) in (3.31), we have
where
The last inequality holds because there exists n′0 such that βn > (1−ρ)/(1+ρ)², λn > λ/2, and 1−(λn/λn+1)μ > 1−ρ for all n≥n′0. Putting (3.33) in (3.32), we get
Since βn > (1−ρ)/(1+ρ)² and 1−(λn/λn+1)μ > 1−ρ, we obtain
Let δ2 = 1 − γ·((1−ρ)/(1+ρ)²)·min{(1/2)(2−γ)(1−ρ), (1/2)λη}; then 0<δ2<1 and
Substituting (3.14) into (3.35) and rearranging, we get
Since 0<δ2<1, we have δ2 + 1/(ψ−1) < 1 + 1/(ψ−1) = ψ/(ψ−1), and hence 0 < (δ2 + 1/(ψ−1))/(ψ/(ψ−1)) < 1. Therefore,
where r = √((δ2 + 1/(ψ−1))/(ψ/(ψ−1))). By induction, we get
By (3.35),
And we have
So
Therefore, {un} converges R-linearly to the unique solution u∗. □
4.
Alternating extrapolation projection contraction algorithm based on the golden ratio
In this section, we present an alternating extrapolation projection contraction algorithm based on the golden ratio for the variational inequality problem and prove its weak and R-linear convergence.
Lemma 4.1. Assume {un} is the sequence generated by Algorithm 4.1 under the conditions of Assumption 3.1. Then {u2n} is bounded and limn→∞‖u2n−u∗‖ exists, where u∗∈S.
Proof. Following lines (3.7)–(3.13) of the proof of Lemma 3.1 and using ‖v2n−u∗‖² = ‖u2n−u∗‖², we obtain
From (3.13) we have
By the definition of vn,
Combining (4.9) and (4.8), we obtain
From (4.7) we have
Combining (4.10) and (4.11), we get
Therefore ‖u2n+2−u∗‖≤‖u2n−u∗‖. This proves that {u2n} is bounded and limn→∞‖u2n−u∗‖ exists. □
Lemma 4.2. Under Assumption 3.1, suppose {u2n} and {¯u2n} are generated by Algorithm 4.1. Then limn→∞‖u2n−¯u2n‖=0.
Proof. From (4.12) and u2n=v2n, we get that {‖u2n−u∗‖} is bounded and
From (3.18) and (3.19), we have
and
Combining (4.13) and (4.14), we can obtain
Using u2n=v2n and limn→∞‖u2n−u2n+1‖=0 in (4.15), we get
□
Lemma 4.3. Assume that {u2n} is generated by Algorithm 4.1, then ωw(u2n)⊂S.
Proof. Take any p∈ωw(u2n); then there exists a subsequence {u2nk}⊂{u2n} such that u2nk⇀p. By Lemma 2.1(ii) and (4.2), we have
thus,
and
The remaining steps are similar to those in the proof of Lemma 3.3 and are omitted. We thus conclude that
Using Lemma 2.3, we get p∈S. □
Theorem 4.1. Assume {un} is the sequence generated by Algorithm 4.1 under the conditions of Assumption 3.1. There exists q∈S such that un⇀q.
Proof. Since {u2n} is bounded, it has weakly convergent subsequences; choose a subsequence {u2nk}⊂{u2n} such that u2nk⇀q∈H. From Lemmas 4.1 and 4.3, limn→∞‖u2n−q‖ exists and q∈S. The proof that the whole sequence satisfies u2n⇀q∈S is the same as that of Lemma 4.4 in [15]. Hence, un⇀q∈S. □
Theorem 4.2. Suppose {un} is generated by Algorithm 4.1 and that A is η-strongly pseudo-monotone with η>0. Then {un} converges R-linearly to the unique solution u∗ of VI(A,C) (1.1).
Proof. From (3.35), ∀n≥n′0, we have
and
where δ2 = 1 − γ·((1−ρ)/(1+ρ)²)·min{(1/2)(2−γ)(1−ρ), (1/2)λη} and 0<δ2<1. Combining (4.9) and (4.18),
Putting (4.17) in (4.19), we have
So
By induction, we have
Thus,
Therefore, {un} converges R-linearly to the unique solution u∗. □
5.
Numerical examples
This section provides computational experiments comparing the algorithms of Sections 3 and 4 with other algorithms. All codes were written in MATLAB R2016b and run on a desktop PC with an AMD Ryzen R7-5600U CPU @ 3.00 GHz and 16.00 GB of RAM.
We compare our Algorithms 3.1 and 4.1 with Algorithm 2 in [15] and Algorithm 1 in [27]; "Time" in the tables denotes CPU time. In this section, we set the maximum number of iterations nmax = 6×10⁵, ξn = 1+1/n², and τn = 1/n².
Example 5.1. Define A:Rm→Rm by
where M = e^(−uᵀQu), N is a positive semi-definite matrix, Q is a positive definite matrix, q∈Rm, and β>0. It is easy to verify that A is pseudo-monotone, differentiable, and Lipschitz continuous. Take C = {u∈Rm ∣ Bu≤b}, where B is a k∗×m matrix and b∈Rk∗+ with k∗=10. The initial point u1 = (1,1,…,1)ᵀ is used for all algorithms; the additional initial point v0 of Algorithms 3.1 and 4.1 is generated randomly in Rm. We take ψ=(√5+1)/2 and μ=0.6 in Algorithms 3.1 and 4.1, θn = (2−γ)/(1.01γ) in Algorithm 2 in [15], and θ = 0.45(1−μ) in Algorithm 1 in [27]. We then take different values of λ1 and γ to compare with the algorithms in the other two papers. In this example, we use ‖¯un−vn‖ < 10⁻³ as the stopping criterion.
In Table 1, we compare our Algorithms 3.1 and 4.1 with Algorithm 1 in [27] and Algorithm 2 in [15] in different dimensions for γ=1.5, λ1=0.05, with a comparison in Figure 1 for m=100. The results illustrate the advantage of our two algorithms.
In Tables 2 and 3, we compare Algorithm 3.1 and Algorithm 4.1, respectively, in the same dimension for different γ. We find that, in the same dimension, a larger γ∈(0,2) yields fewer iterations and shorter CPU time for both algorithms.
Example 5.2. [28] Define a mapping A by
The matrices N and P are a randomly generated skew-symmetric matrix and a randomly generated positive diagonal matrix, respectively. Assume C := {u∈Rm ∣ Mu≤p}, where the matrix M∈Rk×m and the vector p∈Rk are randomly generated with all entries of p non-negative. Then VI(A,C) (1.1) has the unique solution u∗=0. We set ψ=(√5+1)/2 and μ=1/√2 in Algorithms 3.1 and 4.1, θn = (2−γ)/(1.01γ) in Algorithm 2 in [15], and θ = 0.45(1−μ) in Algorithm 1 in [27]. Additionally, we take different values of λ1 and γ to compare with the algorithms in the other two papers. We use the stopping criterion ‖¯un−yn‖ ≤ 10⁻³.
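The display defining A was not reproduced above. A construction consistent with the description can be sketched as follows; the choice A(u) = (N+P)u is our assumption (its symmetric part P makes A strongly monotone, hence pseudo-monotone), as are the random ranges:

```python
import numpy as np

# Assumed reconstruction of the Example 5.2 test problem: A(u) = (N + P) u with
# N skew-symmetric and P positive diagonal, C = {u : M u <= p} with p >= 0.
# Since p >= 0, the origin is feasible, and A(0) = 0, so u* = 0 solves VI(A, C).

rng = np.random.default_rng(0)
k, m = 30, 60
B = rng.standard_normal((m, m))
N = B - B.T                                    # skew-symmetric part
P = np.diag(rng.uniform(0.1, 2.0, size=m))     # positive diagonal part
A = lambda u: (N + P) @ u                      # symmetric part P => strongly monotone
M = rng.standard_normal((k, m))
p = rng.uniform(0.0, 1.0, size=k)              # non-negative, so 0 lies in C

# sanity checks on the construction
skew_err = np.linalg.norm(N + N.T)             # should be exactly zero
res_at_zero = np.linalg.norm(A(np.zeros(m)))   # A(0) = 0 confirms u* = 0
```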
In Table 4, we compare our Algorithms 3.1 and 4.1 with Algorithm 1 in [27] and Algorithm 2 in [15] in different dimensions for γ=1.5, λ1=0.05, with a comparison in Figure 2 for k=30, m=60.
In Figures 3 and 4, we compare the performance of Algorithms 3.1 and 4.1 for varying ψ.
Remark 5.1. Figures 1 and 2 show that the projection contraction algorithms based on the golden ratio have numerical advantages over inertial extrapolation, and that the alternating extrapolation projection contraction algorithm outperforms the basic golden ratio projection contraction algorithm. Moreover, Figures 3 and 4 show that our algorithms converge faster with larger ψ.
6.
Conclusions
We present a projection contraction algorithm and an alternating extrapolation projection contraction algorithm based on the golden ratio for solving pseudo-monotone variational inequality problems in real Hilbert spaces. We prove weak convergence of both algorithms when the operator is pseudo-monotone, and R-linear convergence when A is strongly pseudo-monotone. We extend the range of ψ from (1,(√5+1)/2] to (1,+∞), and the proofs for both algorithms do not require knowledge of the Lipschitz constant. Numerical examples show the superiority of our algorithms: they are less affected by unfavorable conditions and exhibit a relatively stable rate of convergence.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
The first author is supported by the Fundamental Science Research Funds for the Central Universities (Program No. 3122018L004).
Conflict of interest
The authors declare there is no conflict of interest.