Research article

New extrapolation projection contraction algorithms based on the golden ratio for pseudo-monotone variational inequalities

  • Received: 16 May 2023 Revised: 26 June 2023 Accepted: 06 July 2023 Published: 24 July 2023
  • MSC : 47H05, 47J20, 47J25, 65K15, 90C25

  • In real Hilbert spaces, to solve the pseudo-monotone variational inequality problem, we present a new extrapolation projection contraction algorithm based on the golden ratio. Unlike ordinary inertial extrapolation, the algorithms are built on a convex combination of the entire iterative trajectory. The extrapolation parameter $\psi$ is selected from a more relaxed range, rather than taking only the golden ratio $\phi=\frac{\sqrt{5}+1}{2}$ as the upper bound. Second, to further improve the convergence behavior, we propose an alternating extrapolation projection contraction algorithm based on the golden ratio. Both algorithms employ adaptive step-sizes that are not monotonically decreasing. Weak convergence of the two algorithms is established for pseudo-monotone variational inequalities, and R-linear convergence is obtained for strongly pseudo-monotone variational inequalities. Finally, several numerical experiments demonstrate the validity and superiority of the proposed methods. The numerical results show that alternating extrapolation yields a clear acceleration in practice compared with non-alternating extrapolation, and that relaxing the selection range of the parameter $\psi$ has a pronounced effect on both algorithms.

    Citation: Cuijie Zhang, Zhaoyang Chu. New extrapolation projection contraction algorithms based on the golden ratio for pseudo-monotone variational inequalities[J]. AIMS Mathematics, 2023, 8(10): 23291-23312. doi: 10.3934/math.20231184

    Related Papers:

    [1] Haoran Tang, Weiqiang Gong . Two improved extrapolated gradient algorithms for pseudo-monotone variational inequalities. AIMS Mathematics, 2025, 10(2): 2064-2082. doi: 10.3934/math.2025097
    [2] Yuanheng Wang, Chenjing Wu, Yekini Shehu, Bin Huang . Self adaptive alternated inertial algorithm for solving variational inequality and fixed point problems. AIMS Mathematics, 2024, 9(4): 9705-9720. doi: 10.3934/math.2024475
    [3] Feng Ma, Bangjie Li, Zeyan Wang, Yaxiong Li, Lefei Pan . A prediction-correction based proximal method for monotone variational inequalities with linear constraints. AIMS Mathematics, 2023, 8(8): 18295-18313. doi: 10.3934/math.2023930
    [4] Bancha Panyanak, Chainarong Khunpanuk, Nattawut Pholasa, Nuttapol Pakkaranang . A novel class of forward-backward explicit iterative algorithms using inertial techniques to solve variational inequality problems with quasi-monotone operators. AIMS Mathematics, 2023, 8(4): 9692-9715. doi: 10.3934/math.2023489
    [5] Austine Efut Ofem, Jacob Ashiwere Abuchu, Godwin Chidi Ugwunnadi, Hossam A. Nabwey, Abubakar Adamu, Ojen Kumar Narain . Double inertial steps extragadient-type methods for solving optimal control and image restoration problems. AIMS Mathematics, 2024, 9(5): 12870-12905. doi: 10.3934/math.2024629
    [6] Jun Yang, Prasit Cholamjiak, Pongsakorn Sunthrayuth . Modified Tseng's splitting algorithms for the sum of two monotone operators in Banach spaces. AIMS Mathematics, 2021, 6(5): 4873-4900. doi: 10.3934/math.2021286
    [7] Muhammad Aslam Noor, Khalida Inayat Noor . Higher order strongly general convex functions and variational inequalities. AIMS Mathematics, 2020, 5(4): 3646-3663. doi: 10.3934/math.2020236
    [8] Saudia Jabeen, Bandar Bin-Mohsin, Muhammad Aslam Noor, Khalida Inayat Noor . Inertial projection methods for solving general quasi-variational inequalities. AIMS Mathematics, 2021, 6(2): 1075-1086. doi: 10.3934/math.2021064
    [9] Junaid Ahmad, Kifayat Ullah, Reny George . Numerical algorithms for solutions of nonlinear problems in some distance spaces. AIMS Mathematics, 2023, 8(4): 8460-8477. doi: 10.3934/math.2023426
    [10] Ting Xie, Dapeng Li . On the stability of projected dynamical system for generalized variational inequality with hesitant fuzzy relation. AIMS Mathematics, 2020, 5(6): 7107-7121. doi: 10.3934/math.2020455



In the present investigation, $H$ is a real Hilbert space, $C$ is a nonempty, closed and convex subset of $H$, and $A:H\to H$ is a continuous mapping. The variational inequality problem (abbreviated VI$(A,C)$) is of the form: find $z^{*}\in C$ such that

    $\langle Az^{*},z-z^{*}\rangle\ge 0,\quad\forall z\in C.$ (1.1)

    Variational inequalities have important applications in numerous domains, and many researchers have studied them and obtained a multitude of results [1,2,3,4].

    The problem VI$(A,C)$ (1.1) is equivalent to the following fixed point problem:

    $z^{*}=P_{C}(z^{*}-\lambda Az^{*}),\quad\lambda>0.$

    As a result, VI$(A,C)$ (1.1) can be solved via this fixed point formulation (see, e.g., [5,6]). The following projection gradient algorithm is the simplest such method:

    $z_{n+1}=P_{C}(z_{n}-\lambda Az_{n}).$ (1.2)

    However, the convergence of this method requires the moderately strong assumption that $A$ is an $\eta$-strongly monotone and $L$-Lipschitz continuous mapping, where $\eta$ is a positive constant and the step-size satisfies $\lambda\in(0,\frac{2\eta}{L^{2}})$. Moreover, algorithm (1.2) fails when $A$ is merely monotone.
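
    As a reading aid, here is a minimal sketch of iteration (1.2) in Python (our choice of language; the paper's experiments use MATLAB). The operator, the parameter values, and the choice of $C$ as the Euclidean unit ball, for which $P_{C}$ has a closed form, are illustrative assumptions only.

```python
import numpy as np

def projected_gradient(A, proj_C, z, lam, tol=1e-8, max_iter=10_000):
    """Projection gradient iteration (1.2): z_{n+1} = P_C(z_n - lam * A(z_n))."""
    for _ in range(max_iter):
        z_new = proj_C(z - lam * A(z))
        if np.linalg.norm(z_new - z) < tol:   # near a fixed point of the iteration
            return z_new
        z = z_new
    return z

# Illustrative strongly monotone operator A(z) = Qz + q with Q positive definite:
# eta = 1 (smallest eigenvalue), L = 2 (largest), so lam must lie in (0, 2*eta/L^2) = (0, 0.5).
Q = np.diag([2.0, 1.0])
q = np.array([1.0, -1.0])
A = lambda z: Q @ z + q
proj_ball = lambda z: z / max(1.0, np.linalg.norm(z))  # closed-form P_C for the unit ball
z_star = projected_gradient(A, proj_ball, np.zeros(2), lam=0.4)
```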

    The extragradient algorithm of the following type was presented by Korpelevich in [7]: given $z_{1}\in C$,

    $\begin{cases}y_{n}=P_{C}(z_{n}-\lambda_{n}Az_{n}),\\ z_{n+1}=P_{C}(z_{n}-\lambda_{n}Ay_{n}),\end{cases}$ (1.3)

    where $\lambda_{n}\in(0,\frac{1}{L})$. Compared with algorithm (1.2), the assumption on $A$ is relaxed to monotonicity. Moreover, it has been shown that the sequence $\{z_{n}\}$ converges to a solution of (1.1). However, $P_{C}$ lacks a closed-form formula in general, and (1.3) requires computing $P_{C}$ twice per iteration, which increases the computational cost of the procedure. Censor et al. carried out extensive research in this area ([8,9,10]); the difficulty of computing $P_{C}$ was overcome by projecting onto a half space, or an intersection of half spaces, instead of the subset $C$. The projection and contraction method (PCM) was first proposed by He in [11]. Cai et al. [12] studied the optimal step sizes $\eta_{n}$ for the PCM, which takes the following form:

    $\begin{cases}y_{n}=P_{C}(z_{n}-\lambda Az_{n}),\\ d(z_{n},y_{n})=z_{n}-y_{n}-\lambda(Az_{n}-Ay_{n}),\\ z_{n+1}=z_{n}-\gamma\eta_{n}d(z_{n},y_{n}),\end{cases}$ (1.4)

    where

    $\eta_{n}=\begin{cases}\dfrac{\langle z_{n}-y_{n},d(z_{n},y_{n})\rangle}{\|d(z_{n},y_{n})\|^{2}}, & d(z_{n},y_{n})\ne 0,\\ 0, & d(z_{n},y_{n})=0.\end{cases}$ (1.5)

    The benefit of this method is that the assumption on $A$ is as flexible as in algorithm (1.3), while only one projection per iteration needs to be computed. Both theoretical analysis and numerical experiments show that this greatly enhances the method's efficacy, and adding the optimal step size $\eta_{n}$ improves the speed of convergence further. Method (1.4) has drawn the attention of numerous researchers because of its attractive properties, and many significant advances have been built on it (see, e.g., [12,13,14] and others). Recently, Dong et al. [13] added an inertial term to method (1.4) in order to obtain a better convergence effect. In [15], Shehu and Iyiola incorporated alternating inertia and adaptive step-sizes:

    $\begin{cases}v_{n}=\begin{cases}u_{n}+\alpha_{n}(u_{n}-u_{n-1}), & n\ \text{odd},\\ u_{n}, & n\ \text{even},\end{cases}\\ \bar{u}_{n}=P_{C}(v_{n}-\lambda_{n}Av_{n}),\\ d(v_{n},\bar{u}_{n})=v_{n}-\bar{u}_{n}-\lambda_{n}(Av_{n}-A\bar{u}_{n}),\\ u_{n+1}=v_{n}-\gamma\eta_{n}d(v_{n},\bar{u}_{n}),\end{cases}$ (1.6)

    where

    $\lambda_{n+1}=\begin{cases}\min\left\{\dfrac{\mu\|v_{n}-\bar{u}_{n}\|}{\|Av_{n}-A\bar{u}_{n}\|},\lambda_{n}\right\}, & Av_{n}\ne A\bar{u}_{n},\\ \lambda_{n}, & \text{otherwise},\end{cases}$ (1.7)

    and

    $\eta_{n}=\begin{cases}\dfrac{\langle v_{n}-\bar{u}_{n},d(v_{n},\bar{u}_{n})\rangle}{\|d(v_{n},\bar{u}_{n})\|^{2}}, & d(v_{n},\bar{u}_{n})\ne 0,\\ 0, & d(v_{n},\bar{u}_{n})=0.\end{cases}$ (1.8)

    When the assumption on the mapping $A$ is relaxed to pseudo-monotonicity, convergence of the algorithm is proved. Additionally, they gave an R-linear convergence analysis when $A$ is a strongly pseudo-monotone mapping. In numerical experiments, the algorithm with alternating inertia in [15] performs better than the algorithm with general inertia in [13]. A sketch of the PCM step and this alternating extrapolation is given below.
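
    To make the projection and contraction machinery concrete, the following Python sketch (an assumption on our part, with hypothetical helper names) implements one PCM step (1.4)–(1.5) and the alternating inertial extrapolation of (1.6):

```python
import numpy as np

def pcm_step(A, proj_C, z, lam, gamma=1.5):
    """One projection and contraction step (1.4) with the optimal step size (1.5)."""
    y = proj_C(z - lam * A(z))            # the single projection per iteration
    d = (z - y) - lam * (A(z) - A(y))     # d(z_n, y_n)
    dd = float(np.dot(d, d))
    eta = np.dot(z - y, d) / dd if dd > 0 else 0.0
    return z - gamma * eta * d

def alternated_inertial(u, u_prev, alpha, n):
    """Extrapolation step of (1.6): inertia is applied only at odd n."""
    return u + alpha * (u - u_prev) if n % 2 == 1 else u
```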

    A fascinating idea was recently developed by Malitsky in [16] to solve the mixed variational inequality problem: find $z^{*}\in C$ such that

    $\langle Az^{*},z-z^{*}\rangle+g(z)-g(z^{*})\ge 0,\quad\forall z\in C,$ (1.9)

    where $A$ is a monotone mapping and $g$ is a proper convex lower semicontinuous function. He proposed the following version of the golden ratio algorithm:

    $\begin{cases}\bar{z}_{n}=\dfrac{(\phi-1)z_{n}+\bar{z}_{n-1}}{\phi},\\ z_{n+1}=\mathrm{prox}_{\lambda g}(\bar{z}_{n}-\lambda Az_{n}),\end{cases}$ (1.10)

    where $\phi$ is the golden ratio, i.e., $\phi=\frac{\sqrt{5}+1}{2}$. In algorithm (1.10), $\bar{z}_{n}$ is actually a convex combination of all the previously generated iterates. It is straightforward to verify that when $g=\iota_{C}$, (1.9) reduces to (1.1). Then, algorithm (1.10) may be written equivalently as:

    $\begin{cases}\bar{z}_{n}=\dfrac{(\phi-1)z_{n}+\bar{z}_{n-1}}{\phi},\\ z_{n+1}=P_{C}(\bar{z}_{n}-\lambda Az_{n}).\end{cases}$ (1.11)
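
    A minimal sketch of one iteration of (1.11) in Python (hypothetical helper names; not Malitsky's code) shows how cheaply the convex combination is maintained:

```python
import numpy as np

PHI = (np.sqrt(5) + 1) / 2   # golden ratio

def golden_ratio_step(A, proj_C, z, z_bar_prev, lam):
    """One step of (1.11): z_bar is a running convex combination of all iterates."""
    z_bar = ((PHI - 1) * z + z_bar_prev) / PHI
    z_next = proj_C(z_bar - lam * A(z))
    return z_next, z_bar                  # carry z_bar into the next iteration
```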

    Numerous inertial algorithms have been published to address pseudo-monotone variational inequalities. Moreover, golden ratio algorithms and their convergence have been studied for solving the mixed variational inequality problem when $A$ is monotone. However, there are still few results on golden ratio methods for solving the variational inequality problem (1.1) when $A$ is pseudo-monotone. The algorithm presented by Malitsky is very novel, and it provides us with some inspiration: under more general circumstances, we hope to solve the variational inequality problem (1.1) using the convex combination structure of this algorithm.

    In this research, we combine the projection contraction method of [11] with the golden ratio technique to present a new extrapolation projection contraction algorithm for the pseudo-monotone VI$(A,C)$ (1.1). To speed up its convergence, we also present an alternating extrapolation variant. The convex combination structure allows us to greatly expand the selection range of the extrapolation parameter, and expanding this range has a significant effect on the numerical results. Although the golden ratio value itself is ultimately not used in our algorithms, considering that this paper is inspired by Malitsky's golden ratio algorithm [16], the algorithms proposed here are still recorded as projection contraction algorithms based on the golden ratio. In this paper, we primarily make the following improvements:

    We propose a projection contraction algorithm and an alternating extrapolation projection contraction algorithm based on the golden ratio. Weak convergence of the two algorithms is established when $A$ is pseudo-monotone, sequentially weakly continuous and $L$-Lipschitz continuous.

    We obtain R-linear convergence results for the two algorithms when $A$ is strongly pseudo-monotone.

    Our algorithms use new self-adaptive step-sizes that are not monotonically decreasing, in contrast to (1.7).

    In our algorithms, $A$ is a pseudo-monotone mapping, which is a weaker assumption than in [13,17,18]. Additionally, it is not necessary to restrict the extrapolation parameter $\psi$ to $(1,\frac{\sqrt{5}+1}{2}]$ as in [19,20]; it can be extended to $(1,+\infty)$.

    The structure of the article is as follows:

    Section 2 recalls the background knowledge used in the paper. Section 3 presents a projection contraction algorithm based on the golden ratio, together with proofs of its weak and R-linear convergence. Section 4 presents an alternating extrapolation projection contraction algorithm based on the golden ratio and proves its weak and R-linear convergence. Section 5 gives two numerical examples to verify the effectiveness of the algorithms.

    Let $\{z_{n}\}$ be a sequence in $H$. We write $z_{n}\rightharpoonup z$ when $\{z_{n}\}$ converges weakly to $z$, and $z_{n}\to z$ when $\{z_{n}\}$ converges strongly to $z$.

    Definition 2.1. [21] $A:H\to H$ is said to be:

    (a) $\eta$-strongly pseudo-monotone if

    $\langle Av,u-v\rangle\ge 0\Longrightarrow\langle Au,u-v\rangle\ge\eta\|u-v\|^{2},\quad\forall u,v\in H,$

    where $\eta>0$;

    (b) pseudo-monotone if

    $\langle Av,u-v\rangle\ge 0\Longrightarrow\langle Au,u-v\rangle\ge 0,\quad\forall u,v\in H;$

    (c) $L$-Lipschitz continuous if there exists a constant $L>0$ such that

    $\|Au-Av\|\le L\|u-v\|,\quad\forall u,v\in H;$

    (d) sequentially weakly continuous if for each sequence $\{u_{n}\}$:

    $u_{n}\rightharpoonup u\Longrightarrow Au_{n}\rightharpoonup Au.$

    Definition 2.2. [22] $P_{C}$ is called the metric projection onto $C$ if, for any point $u\in H$, there exists a unique point $P_{C}u\in C$ such that $\|u-P_{C}u\|\le\|u-v\|$, $\forall v\in C$.

    Definition 2.3. [23] Suppose a sequence $\{z_{n}\}$ in $H$ converges in norm to $z\in H$. We say that $\{z_{n}\}$ converges to $z$ R-linearly if $\varlimsup_{n\to\infty}\|z_{n}-z\|^{\frac{1}{n}}<1$.

    Lemma 2.1. [21,22] $P_{C}$ has the following properties:

    (i) $\langle u-v,P_{C}u-P_{C}v\rangle\ge\|P_{C}u-P_{C}v\|^{2},\quad\forall u,v\in H$;

    (ii) $P_{C}u\in C$ and $\langle v-P_{C}u,P_{C}u-u\rangle\ge 0,\quad\forall v\in C$.
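
    As a quick numerical illustration of Lemma 2.1(ii), the following hypothetical Python check takes $C$ to be a box, for which $P_{C}$ is a coordinate-wise clip:

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = -np.ones(4), np.ones(4)
proj_box = lambda u: np.clip(u, lo, hi)   # closed-form metric projection onto the box C

u = 3.0 * rng.standard_normal(4)          # a point generally outside C
pu = proj_box(u)
v = rng.uniform(lo, hi)                   # an arbitrary point of C
# Lemma 2.1(ii): <v - P_C u, P_C u - u> >= 0 for every v in C
assert np.dot(v - pu, pu - u) >= -1e-12
```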

    Lemma 2.2. [21] The following equality holds in $H$:

    $\|\varrho u+(1-\varrho)v\|^{2}=\varrho\|u\|^{2}+(1-\varrho)\|v\|^{2}-\varrho(1-\varrho)\|u-v\|^{2},\quad\forall\varrho\in\mathbb{R},\ \forall u,v\in H.$ (2.1)
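
    Note that (2.1) is an identity for every real $\varrho$, not only for $\varrho\in[0,1]$; this matters later, e.g., in (3.14), where $\varrho=\frac{\psi}{\psi-1}>1$. A small numerical check in Python (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)
for rho in (-0.7, 0.3, 1.4):              # (2.1) holds for all real rho
    lhs = np.linalg.norm(rho * u + (1 - rho) * v) ** 2
    rhs = (rho * np.linalg.norm(u) ** 2 + (1 - rho) * np.linalg.norm(v) ** 2
           - rho * (1 - rho) * np.linalg.norm(u - v) ** 2)
    assert np.isclose(lhs, rhs)
```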

    Lemma 2.3. [24] Suppose $A$ is pseudo-monotone in VI$(A,C)$ (1.1) and $S$ is the solution set of VI$(A,C)$ (1.1). Then $S$ is closed, convex and

    $S=\{z\in C:\langle Aw,w-z\rangle\ge 0,\ \forall w\in C\}.$

    Lemma 2.4. [25] Let $\{z_{n}\}$ be a sequence in $H$ such that the following two conditions hold:

    (i) for any $z\in S$, $\lim_{n\to\infty}\|z_{n}-z\|$ exists;

    (ii) $\omega_{w}(z_{n})\subset S$.

    Then $\{z_{n}\}$ converges weakly to a point in $S$.

    Lemma 2.5. [19] Let $\{a_{n}\}$ and $\{b_{n}\}$ be non-negative real sequences satisfying

    $a_{n+1}\le a_{n}-b_{n},\quad\forall n>N,$

    where $N$ is some non-negative integer. Then $\lim_{n\to\infty}b_{n}=0$ and $\lim_{n\to\infty}a_{n}$ exists.

    Lemma 2.6. [26] Let $\{\lambda_{n}\}$ be a non-negative number sequence such that

    $\lambda_{n+1}\le\xi_{n}\lambda_{n}+\tau_{n},\quad\forall n\in\mathbb{N},$

    where $\{\xi_{n}\}$ and $\{\tau_{n}\}$ satisfy

    $\{\xi_{n}\}\subset[1,+\infty),\quad\sum_{n=1}^{\infty}(\xi_{n}-1)<+\infty,\quad\tau_{n}>0\quad\text{and}\quad\sum_{n=1}^{\infty}\tau_{n}<+\infty.$

    Then $\lim_{n\to\infty}\lambda_{n}$ exists.
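
    Lemma 2.6 is what keeps the adaptive step-size rule (3.3) of Algorithm 3.1 below bounded even though it is allowed to increase. A small Python illustration with the summable perturbations used in Section 5 ($\xi_{n}=1+\frac{1}{n^{2}}$, $\tau_{n}=\frac{1}{n^{2}}$):

```python
lam = 1.0
for n in range(1, 100_001):
    xi_n, tau_n = 1 + 1 / n**2, 1 / n**2   # sum(xi_n - 1) and sum(tau_n) are finite
    lam = xi_n * lam + tau_n               # the worst case permitted by Lemma 2.6
print(lam)   # stays bounded and settles at a finite limit instead of diverging
```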

    We provide a PCM algorithm with a new extrapolation step and the corresponding convergence analyses in this section.

    Assumption 3.1. In this paper, the following suppositions hold:

    (a) $A:H\to H$ is pseudo-monotone, sequentially weakly continuous and $L$-Lipschitz continuous.

    (b) The solution set $S$ is nonempty.

    (c) $\{\xi_{n}\}\subset[1,+\infty)$, $\sum_{n=1}^{\infty}(\xi_{n}-1)<+\infty$, $\tau_{n}>0$ and $\sum_{n=1}^{\infty}\tau_{n}<+\infty$.

    Algorithm 3.1. Projection contraction algorithm based on the golden ratio.
    Step 0: Take the iterative parameters $\mu\in(0,1)$, $\psi\in(1,+\infty)$, $\gamma\in(0,2)$, and $\xi_{1},\tau_{1},\lambda_{1}>0$. Let $u_{1}\in H$, $v_{0}\in H$ be given starting points, with the sequences $\{\xi_{n}\},\{\tau_{n}\}$ known. Set $n:=1$.
    Step 1: Compute
         $v_{n}=\frac{\psi-1}{\psi}u_{n}+\frac{1}{\psi}v_{n-1}.$     (3.1)
    Step 2: Compute
        $\bar{u}_{n}=P_{C}(v_{n}-\lambda_{n}Av_{n}),$     (3.2)
      where
         $\lambda_{n+1}=\begin{cases}\min\left\{\dfrac{\mu\|v_{n}-\bar{u}_{n}\|}{\|Av_{n}-A\bar{u}_{n}\|},\ \xi_{n}\lambda_{n}+\tau_{n}\right\}, & Av_{n}\ne A\bar{u}_{n},\\ \xi_{n}\lambda_{n}+\tau_{n}, & \text{otherwise}.\end{cases}$     (3.3)
      If $v_{n}=\bar{u}_{n}$, STOP. Otherwise, go to Step 3.
    Step 3: Compute
        $d(v_{n},\bar{u}_{n})=(v_{n}-\bar{u}_{n})-\lambda_{n}(Av_{n}-A\bar{u}_{n}),$     (3.4)
        $\varphi_{n}=\langle v_{n}-\bar{u}_{n},d(v_{n},\bar{u}_{n})\rangle,$
        $u_{n+1}=v_{n}-\gamma\beta_{n}d(v_{n},\bar{u}_{n}),$     (3.5)
      where
        $\beta_{n}=\begin{cases}\dfrac{\varphi_{n}}{\|d(v_{n},\bar{u}_{n})\|^{2}}, & d(v_{n},\bar{u}_{n})\ne 0,\\ 0, & d(v_{n},\bar{u}_{n})=0.\end{cases}$     (3.6)
    Step 4: Set $n\leftarrow n+1$, and go to Step 1.
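
    For readers who prefer code, a compact Python sketch of Algorithm 3.1 follows (the paper's experiments were run in MATLAB; `A`, `proj_C` and the parameter defaults are assumptions of this sketch, and $\xi_{n}=1+\frac{1}{n^{2}}$, $\tau_{n}=\frac{1}{n^{2}}$ are the choices used in Section 5):

```python
import numpy as np

def algorithm_3_1(A, proj_C, u1, v0, lam1, mu=0.6, psi=(np.sqrt(5) + 1) / 2,
                  gamma=1.5, tol=1e-3, max_iter=600_000):
    """Sketch of Algorithm 3.1 with xi_n = 1 + 1/n^2 and tau_n = 1/n^2."""
    u, v_prev, lam = u1, v0, lam1
    for n in range(1, max_iter + 1):
        v = ((psi - 1) * u + v_prev) / psi                  # extrapolation step (3.1)
        Av = A(v)
        u_bar = proj_C(v - lam * Av)                        # projection step (3.2)
        r = v - u_bar
        if np.linalg.norm(r) < tol:                         # stopping rule v_n = u_bar_n
            return u_bar
        xi_n, tau_n = 1 + 1 / n**2, 1 / n**2
        diff = Av - A(u_bar)
        cap = xi_n * lam + tau_n                            # the step size may increase (3.3)
        nd = np.linalg.norm(diff)
        lam_next = min(mu * np.linalg.norm(r) / nd, cap) if nd > 0 else cap
        d = r - lam * diff                                  # d(v_n, u_bar_n) in (3.4)
        dd = float(np.dot(d, d))
        beta = np.dot(r, d) / dd if dd > 0 else 0.0         # beta_n in (3.6)
        u, v_prev, lam = v - gamma * beta * d, v, lam_next  # update (3.5)
    return u
```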

    Remark 3.1. Observe in Algorithm 3.1 that if $Av_{n}\ne A\bar{u}_{n}$, then

    $\dfrac{\mu\|v_{n}-\bar{u}_{n}\|}{\|Av_{n}-A\bar{u}_{n}\|}\ge\dfrac{\mu\|v_{n}-\bar{u}_{n}\|}{L\|v_{n}-\bar{u}_{n}\|}=\dfrac{\mu}{L}.$

    Therefore, $\lambda_{n}\ge\min\{\frac{\mu}{L},\lambda_{1}\}>0$. By (3.3), we have $\lambda_{n+1}\le\xi_{n}\lambda_{n}+\tau_{n}$. From Lemma 2.6 we obtain $\lim_{n\to\infty}\lambda_{n}=\lambda$ when $\{\xi_{n}\}\subset[1,+\infty)$, $\sum_{n=1}^{\infty}(\xi_{n}-1)<+\infty$ and $\sum_{n=1}^{\infty}\tau_{n}<+\infty$.

    Remark 3.2. In our algorithms, it is not necessary to restrict the range of $\psi$ to $(1,\frac{\sqrt{5}+1}{2}]$ or $(1,2]$; $\psi$ only needs to be greater than $1$, which greatly relaxes the range of the parameter to be chosen.

    Lemma 3.1. Assume $\{u_{n}\}$ is the sequence generated by Algorithm 3.1 under the conditions of Assumption 3.1. Then $\{u_{n}\}$ is bounded and $\lim_{n\to\infty}\|u_{n}-u^{*}\|$ exists, where $u^{*}\in S$.

    Proof. From the iterative format, we have

    $\|u_{n+1}-u^{*}\|^{2}=\|v_{n}-u^{*}-\gamma\beta_{n}d(v_{n},\bar{u}_{n})\|^{2}=\|v_{n}-u^{*}\|^{2}-2\gamma\beta_{n}\langle v_{n}-u^{*},d(v_{n},\bar{u}_{n})\rangle+\gamma^{2}\beta_{n}^{2}\|d(v_{n},\bar{u}_{n})\|^{2}.$ (3.7)

    According to (3.2) and Lemma 2.1(i),

    $\begin{aligned}\langle\bar{u}_{n}-u^{*},v_{n}-\bar{u}_{n}-\lambda_{n}Av_{n}\rangle&=\langle P_{C}(v_{n}-\lambda_{n}Av_{n})-P_{C}u^{*},v_{n}-\lambda_{n}Av_{n}-u^{*}+u^{*}-\bar{u}_{n}\rangle\\&=\langle P_{C}(v_{n}-\lambda_{n}Av_{n})-P_{C}u^{*},v_{n}-\lambda_{n}Av_{n}-u^{*}\rangle+\langle P_{C}(v_{n}-\lambda_{n}Av_{n})-P_{C}u^{*},u^{*}-\bar{u}_{n}\rangle\\&\ge\|P_{C}(v_{n}-\lambda_{n}Av_{n})-P_{C}u^{*}\|^{2}+\langle\bar{u}_{n}-u^{*},u^{*}-\bar{u}_{n}\rangle\\&=\|\bar{u}_{n}-u^{*}\|^{2}-\|\bar{u}_{n}-u^{*}\|^{2}=0.\end{aligned}$ (3.8)

    Since $\bar{u}_{n}\in C$ and $u^{*}\in S$, we have $\langle Au^{*},\bar{u}_{n}-u^{*}\rangle\ge 0$, so Definition 2.1(b) gives $\langle A\bar{u}_{n},\bar{u}_{n}-u^{*}\rangle\ge 0$; thus,

    $\lambda_{n}\langle A\bar{u}_{n},\bar{u}_{n}-u^{*}\rangle\ge 0.$ (3.9)

    Making use of (3.8) and (3.9), we obtain

    $\langle\bar{u}_{n}-u^{*},d(v_{n},\bar{u}_{n})\rangle=\langle\bar{u}_{n}-u^{*},v_{n}-\bar{u}_{n}-\lambda_{n}Av_{n}+\lambda_{n}A\bar{u}_{n}\rangle\ge 0,$

    so

    $\langle v_{n}-u^{*},d(v_{n},\bar{u}_{n})\rangle\ge\varphi_{n}.$ (3.10)

    Putting (3.10) in (3.7), we get

    $\|u_{n+1}-u^{*}\|^{2}\le\|v_{n}-u^{*}\|^{2}-\gamma(2-\gamma)\beta_{n}\varphi_{n}.$ (3.11)

    By (3.5) and (3.6), we get

    $\beta_{n}\varphi_{n}=\beta_{n}^{2}\|d(v_{n},\bar{u}_{n})\|^{2}=\frac{1}{\gamma^{2}}\|v_{n}-u_{n+1}\|^{2}.$ (3.12)

    Putting (3.12) in (3.11),

    $\|u_{n+1}-u^{*}\|^{2}\le\|v_{n}-u^{*}\|^{2}-\frac{2-\gamma}{\gamma}\|v_{n}-u_{n+1}\|^{2}.$ (3.13)

    From (3.1), $u_{n}=\frac{\psi}{\psi-1}v_{n}-\frac{1}{\psi-1}v_{n-1}$, so Lemma 2.2 (with $\varrho=\frac{\psi}{\psi-1}$) gives

    $\begin{aligned}\|u_{n}-u^{*}\|^{2}&=\frac{\psi}{\psi-1}\|v_{n}-u^{*}\|^{2}-\frac{1}{\psi-1}\|v_{n-1}-u^{*}\|^{2}+\frac{\psi}{(\psi-1)^{2}}\|v_{n}-v_{n-1}\|^{2}\\&=\frac{\psi}{\psi-1}\|v_{n}-u^{*}\|^{2}-\frac{1}{\psi-1}\|v_{n-1}-u^{*}\|^{2}+\frac{1}{\psi}\|u_{n}-v_{n-1}\|^{2}.\end{aligned}$ (3.14)

    Combining (3.13) and (3.14),

    $\|u_{n+1}-u^{*}\|^{2}\le\|u_{n}-u^{*}\|^{2}-\frac{1}{\psi-1}\|v_{n}-u^{*}\|^{2}+\frac{1}{\psi-1}\|v_{n-1}-u^{*}\|^{2}-\frac{1}{\psi}\|u_{n}-v_{n-1}\|^{2}-\frac{2-\gamma}{\gamma}\|v_{n}-u_{n+1}\|^{2},$ (3.15)

    so we can obtain

    $a_{n+1}\le a_{n}-b_{n},$

    where

    $a_{n}=\|u_{n}-u^{*}\|^{2}+\frac{1}{\psi-1}\|v_{n-1}-u^{*}\|^{2},\quad b_{n}=\frac{2-\gamma}{\gamma}\|v_{n}-u_{n+1}\|^{2}+\frac{1}{\psi}\|u_{n}-v_{n-1}\|^{2}.$

    From the above proof, we have $a_{n}\ge 0$ and $b_{n}\ge 0$. According to Lemma 2.5, $\lim_{n\to\infty}b_{n}=0$ and $\lim_{n\to\infty}a_{n}$ exists. In particular, $\lim_{n\to\infty}\|v_{n}-u_{n+1}\|^{2}=0$.

    From the definition of $v_{n}$, we get

    $\begin{aligned}a_{n+1}&=\|u_{n+1}-u^{*}\|^{2}+\frac{1}{\psi-1}\|v_{n}-u^{*}\|^{2}\\&=\frac{\psi}{\psi-1}\|v_{n+1}-u^{*}\|^{2}+\frac{\psi}{(\psi-1)^{2}}\|v_{n+1}-v_{n}\|^{2}-\frac{1}{\psi-1}\|v_{n}-u^{*}\|^{2}+\frac{1}{\psi-1}\|v_{n}-u^{*}\|^{2}\\&=\frac{\psi}{\psi-1}\|v_{n+1}-u^{*}\|^{2}+\frac{1}{\psi}\|u_{n+1}-v_{n}\|^{2}.\end{aligned}$ (3.16)

    We already know that

    $\lim_{n\to\infty}\|v_{n}-u_{n+1}\|^{2}=0\quad\text{and}\quad\lim_{n\to\infty}a_{n}\ \text{exists},$ (3.17)

    so we easily get that $\lim_{n\to\infty}\|v_{n+1}-u^{*}\|^{2}$ exists. From this it can be concluded that $\lim_{n\to\infty}\|u_{n+1}-u^{*}\|^{2}$ exists and $\{u_{n}\},\{v_{n}\}$ are bounded.

    Lemma 3.2. Suppose $\{\bar{u}_{n}\}$ and $\{v_{n}\}$ are generated by Algorithm 3.1. Then, under Assumption 3.1, $\lim_{n\to\infty}\|v_{n}-\bar{u}_{n}\|=0$.

    Proof. Note that

    $\varphi_{n}=\|v_{n}-\bar{u}_{n}\|^{2}-\lambda_{n}\langle v_{n}-\bar{u}_{n},Av_{n}-A\bar{u}_{n}\rangle\ge\|v_{n}-\bar{u}_{n}\|^{2}-\lambda_{n}\|v_{n}-\bar{u}_{n}\|\|Av_{n}-A\bar{u}_{n}\|\ge\left(1-\frac{\lambda_{n}}{\lambda_{n+1}}\mu\right)\|v_{n}-\bar{u}_{n}\|^{2}.$ (3.18)

    From (3.4),

    $\|d(v_{n},\bar{u}_{n})\|\le\|v_{n}-\bar{u}_{n}\|+\lambda_{n}\|Av_{n}-A\bar{u}_{n}\|\le\left(1+\frac{\lambda_{n}}{\lambda_{n+1}}\mu\right)\|v_{n}-\bar{u}_{n}\|.$ (3.19)

    Choose a fixed number $\rho\in(\mu,1)$. Since $\lim_{n\to\infty}\lambda_{n}=\lambda$, we have $\lim_{n\to\infty}\frac{\lambda_{n}}{\lambda_{n+1}}\mu=\mu<\rho$. Then there exists $n_{0}$ such that $\frac{\lambda_{n}}{\lambda_{n+1}}\mu<\rho$ for all $n\ge n_{0}$. Therefore, for all $n\ge n_{0}$, we have

    $\|d(v_{n},\bar{u}_{n})\|<(1+\rho)\|v_{n}-\bar{u}_{n}\|,$

    and

    $\varphi_{n}>(1-\rho)\|v_{n}-\bar{u}_{n}\|^{2}.$

    Thus,

    $\beta_{n}=\frac{\varphi_{n}}{\|d(v_{n},\bar{u}_{n})\|^{2}}>\frac{(1-\rho)\|v_{n}-\bar{u}_{n}\|^{2}}{(1+\rho)^{2}\|v_{n}-\bar{u}_{n}\|^{2}}=\frac{1-\rho}{(1+\rho)^{2}},$ (3.20)

    and so, for all $n\ge n_{0}$, we get

    $\|v_{n}-\bar{u}_{n}\|^{2}<\frac{1}{1-\rho}\varphi_{n}=\frac{1}{(1-\rho)\beta_{n}\gamma^{2}}\|v_{n}-u_{n+1}\|^{2}<\frac{(1+\rho)^{2}}{(1-\rho)^{2}\gamma^{2}}\|v_{n}-u_{n+1}\|^{2}.$ (3.21)

    From (3.21), we get $\lim_{n\to\infty}\|v_{n}-\bar{u}_{n}\|=0$.

    Lemma 3.3. Assume that $\{u_{n}\}$ is generated by Algorithm 3.1. Then $\omega_{w}(u_{n})\subset S$.

    Proof. Since $\{u_{n}\}$ is bounded, $\omega_{w}(u_{n})\ne\emptyset$. Arbitrarily choose $q\in\omega_{w}(u_{n})$; then there exists a subsequence $\{u_{n_{k}}\}\subset\{u_{n}\}$ such that $u_{n_{k}}\rightharpoonup q$. Since $\lim_{n\to\infty}\|v_{n}-u_{n+1}\|=0$ and $\lim_{n\to\infty}\|v_{n}-\bar{u}_{n}\|=0$, we also have $\bar{u}_{n_{k}-1}\rightharpoonup q$ and $v_{n_{k}-1}\rightharpoonup q$. From Lemma 2.1(ii) and (3.2) we have

    $\langle v_{n}-\lambda_{n}Av_{n}-\bar{u}_{n},\bar{u}_{n}-u\rangle\ge 0,\quad\forall u\in C,$

    thus,

    $\langle Av_{n},u-\bar{u}_{n}\rangle\ge\frac{1}{\lambda_{n}}\langle v_{n}-\bar{u}_{n},u-\bar{u}_{n}\rangle,\quad\forall u\in C.$ (3.22)

    From (3.22) we can obtain

    $\langle Av_{n},u-v_{n}\rangle\ge\langle Av_{n},\bar{u}_{n}-v_{n}\rangle+\frac{1}{\lambda_{n}}\langle v_{n}-\bar{u}_{n},u-\bar{u}_{n}\rangle,\quad\forall u\in C.$

    So

    $\langle Av_{n_{k}-1},u-v_{n_{k}-1}\rangle\ge\langle Av_{n_{k}-1},\bar{u}_{n_{k}-1}-v_{n_{k}-1}\rangle+\frac{1}{\lambda_{n_{k}-1}}\langle v_{n_{k}-1}-\bar{u}_{n_{k}-1},u-\bar{u}_{n_{k}-1}\rangle,\quad\forall u\in C.$ (3.23)

    Fixing $u\in C$ and letting $k\to\infty$ in (3.23), noting that $\|v_{n_{k}-1}-\bar{u}_{n_{k}-1}\|\to 0$ and that $\{\bar{u}_{n_{k}}\}$ and $\{Av_{n_{k}}\}$ are bounded, we obtain

    $\varliminf_{k\to\infty}\langle Av_{n_{k}-1},u-v_{n_{k}-1}\rangle\ge 0.$ (3.24)

    Choose a decreasing sequence $\{\epsilon_{k}\}$ such that $\epsilon_{k}>0$ and $\lim_{k\to\infty}\epsilon_{k}=0$. For each $\epsilon_{k}$,

    $Av_{N_{k}}\ne 0\quad\text{and}\quad\langle Av_{n_{j}-1},u-v_{n_{j}-1}\rangle+\epsilon_{k}\ge 0,\quad\forall j\ge N_{k},$ (3.25)

    where $N_{k}$ is the smallest non-negative integer satisfying (3.25). As $\{\epsilon_{k}\}$ is decreasing, $\{N_{k}\}$ is increasing. For simplicity, we write $N_{k}$ in place of $n_{N_{k}}$. Setting

    $\vartheta_{N_{k}-1}=\frac{Av_{N_{k}-1}}{\|Av_{N_{k}-1}\|^{2}},$

    one gets $\langle Av_{N_{k}-1},\vartheta_{N_{k}-1}\rangle=\left\langle Av_{N_{k}-1},\frac{Av_{N_{k}-1}}{\|Av_{N_{k}-1}\|^{2}}\right\rangle=1$. Then, by (3.25), for each $k$,

    $\langle Av_{N_{k}-1},u+\epsilon_{k}\vartheta_{N_{k}-1}-v_{N_{k}-1}\rangle=\langle Av_{N_{k}-1},u-v_{N_{k}-1}\rangle+\epsilon_{k}\langle Av_{N_{k}-1},\vartheta_{N_{k}-1}\rangle\ge 0.$ (3.26)

    From Definition 2.1(b), we have

    $\langle A(u+\epsilon_{k}\vartheta_{N_{k}-1}),u+\epsilon_{k}\vartheta_{N_{k}-1}-v_{N_{k}-1}\rangle\ge 0.$ (3.27)

    Since $v_{n_{k}-1}\rightharpoonup q$ as $k\to\infty$ and by Definition 2.1(d), we obtain $Av_{n_{k}-1}\rightharpoonup Aq$. Suppose $Aq\ne 0$ (if $Aq=0$, then $q\in S$). Then, employing the sequential weak lower semicontinuity of the norm, we gain

    $0<\|Aq\|\le\varliminf_{k\to\infty}\|Av_{n_{k}-1}\|.$

    Because $\{N_{k}\}\subset\{n_{k}\}$ and $\lim_{k\to\infty}\epsilon_{k}=0$,

    $0\le\varlimsup_{k\to\infty}\|\epsilon_{k}\vartheta_{N_{k}-1}\|=\varlimsup_{k\to\infty}\frac{\epsilon_{k}}{\|Av_{N_{k}-1}\|}\le\frac{\varlimsup_{k\to\infty}\epsilon_{k}}{\varliminf_{k\to\infty}\|Av_{N_{k}-1}\|}\le\frac{0}{\|Aq\|}=0,$

    and this means $\lim_{k\to\infty}\epsilon_{k}\vartheta_{N_{k}-1}=0$. Letting $k\to\infty$ in (3.27), we get

    $\langle Au,u-q\rangle\ge 0,\quad\forall u\in C.$

    From Lemma 2.3, $q\in S$, and hence $\omega_{w}(u_{n})\subset S$.

    Theorem 3.1. Assume $\{u_{n}\}$ is the sequence generated by Algorithm 3.1 under the conditions of Assumption 3.1. Then there exists $u^{*}\in S$ such that $u_{n}\rightharpoonup u^{*}$.

    Proof. From Lemmas 3.1 and 3.3, we get that $\lim_{n\to\infty}\|u_{n}-u^{*}\|$ exists and $\omega_{w}(u_{n})\subset S$. From Lemma 2.4, $u_{n}\rightharpoonup u^{*}\in S$.

    Theorem 3.2. Suppose $\{u_{n}\}$ is generated by Algorithm 3.1 under the condition that $A$ is $\eta$-strongly pseudo-monotone with $\eta>0$. Then $\{u_{n}\}$ converges R-linearly to the unique solution $u^{*}$ of VI$(A,C)$ (1.1).

    Proof. Since $\bar{u}_{n}\in C$, from Definition 2.1(a) we have

    $\langle A\bar{u}_{n},\bar{u}_{n}-u^{*}\rangle\ge\eta\|\bar{u}_{n}-u^{*}\|^{2}.$

    Multiplying both sides of the above inequality by $\lambda_{n}$, we get

    $\lambda_{n}\langle A\bar{u}_{n},\bar{u}_{n}-u^{*}\rangle\ge\lambda_{n}\eta\|\bar{u}_{n}-u^{*}\|^{2}.$ (3.28)

    Adding (3.8) and (3.28), we obtain

    $\langle\bar{u}_{n}-u^{*},d(v_{n},\bar{u}_{n})\rangle=\langle\bar{u}_{n}-u^{*},v_{n}-\bar{u}_{n}-\lambda_{n}Av_{n}+\lambda_{n}A\bar{u}_{n}\rangle\ge\lambda_{n}\eta\|\bar{u}_{n}-u^{*}\|^{2},$ (3.29)

    so

    $\langle v_{n}-u^{*},d(v_{n},\bar{u}_{n})\rangle\ge\varphi_{n}+\lambda_{n}\eta\|\bar{u}_{n}-u^{*}\|^{2}.$ (3.30)

    Putting (3.30) into (3.7), we obtain

    $\|u_{n+1}-u^{*}\|^{2}\le\|v_{n}-u^{*}\|^{2}-\gamma(2-\gamma)\beta_{n}\varphi_{n}-2\gamma\beta_{n}\lambda_{n}\eta\|\bar{u}_{n}-u^{*}\|^{2}.$ (3.31)

    Using (3.18) in (3.31), we have

    $\|u_{n+1}-u^{*}\|^{2}\le\|v_{n}-u^{*}\|^{2}-\gamma(2-\gamma)\beta_{n}\left(1-\frac{\lambda_{n}}{\lambda_{n+1}}\mu\right)\|v_{n}-\bar{u}_{n}\|^{2}-2\gamma\beta_{n}\lambda_{n}\eta\|\bar{u}_{n}-u^{*}\|^{2},$ (3.32)

    where

    $\begin{aligned}&\gamma(2-\gamma)\beta_{n}\left(1-\frac{\lambda_{n}}{\lambda_{n+1}}\mu\right)\|v_{n}-\bar{u}_{n}\|^{2}+2\gamma\beta_{n}\lambda_{n}\eta\|\bar{u}_{n}-u^{*}\|^{2}\\&\ge\gamma\beta_{n}\min\left\{(2-\gamma)\left(1-\frac{\lambda_{n}}{\lambda_{n+1}}\mu\right),2\lambda_{n}\eta\right\}\left(\|v_{n}-\bar{u}_{n}\|^{2}+\|\bar{u}_{n}-u^{*}\|^{2}\right)\\&\ge\gamma\beta_{n}\min\left\{\frac{1}{2}(2-\gamma)\left(1-\frac{\lambda_{n}}{\lambda_{n+1}}\mu\right),\lambda_{n}\eta\right\}\|v_{n}-u^{*}\|^{2}\\&>\gamma\frac{1-\rho}{(1+\rho)^{2}}\min\left\{\frac{1}{2}(2-\gamma)(1-\rho),\frac{1}{2}\lambda\eta\right\}\|v_{n}-u^{*}\|^{2},\quad\forall n\ge n_{0}.\end{aligned}$ (3.33)

    The second inequality uses $\|v_{n}-\bar{u}_{n}\|^{2}+\|\bar{u}_{n}-u^{*}\|^{2}\ge\frac{1}{2}\|v_{n}-u^{*}\|^{2}$, and the last inequality holds because there exists $n_{0}$ such that $\beta_{n}>\frac{1-\rho}{(1+\rho)^{2}}$, $\lambda_{n}>\frac{\lambda}{2}$ and $1-\frac{\lambda_{n}}{\lambda_{n+1}}\mu>1-\rho$ for all $n\ge n_{0}$. Putting (3.33) in (3.32), we get

    $\|u_{n+1}-u^{*}\|^{2}<\left(1-\gamma\frac{1-\rho}{(1+\rho)^{2}}\min\left\{\frac{1}{2}(2-\gamma)(1-\rho),\frac{1}{2}\lambda\eta\right\}\right)\|v_{n}-u^{*}\|^{2}.$ (3.34)

    Since $\beta_{n}>\frac{1-\rho}{(1+\rho)^{2}}$ and $1-\frac{\lambda_{n}}{\lambda_{n+1}}\mu>1-\rho$, we obtain

    $0<1-\gamma\frac{1-\rho}{(1+\rho)^{2}}\min\left\{\frac{1}{2}(2-\gamma)(1-\rho),\frac{1}{2}\lambda\eta\right\}<1.$

    Let $\delta^{2}=1-\gamma\frac{1-\rho}{(1+\rho)^{2}}\min\left\{\frac{1}{2}(2-\gamma)(1-\rho),\frac{1}{2}\lambda\eta\right\}$; then $0<\delta^{2}<1$ and

    $\|u_{n+1}-u^{*}\|^{2}<\delta^{2}\|v_{n}-u^{*}\|^{2},\quad\forall n\ge n_{0}.$ (3.35)

    Putting (3.14) into (3.35) and rearranging, we get

    $\frac{\psi}{\psi-1}\|v_{n+1}-u^{*}\|^{2}<\left(\delta^{2}+\frac{1}{\psi-1}\right)\|v_{n}-u^{*}\|^{2},\quad\forall n\ge n_{0}.$ (3.36)

    Since $0<\delta^{2}<1$, we have $\delta^{2}+\frac{1}{\psi-1}<1+\frac{1}{\psi-1}=\frac{\psi}{\psi-1}$, and hence $0<\left(\delta^{2}+\frac{1}{\psi-1}\right)\frac{\psi-1}{\psi}<1$. Therefore,

    $\|v_{n+1}-u^{*}\|^{2}<r^{2}\|v_{n}-u^{*}\|^{2},\quad\forall n\ge n_{0},$

    where $r^{2}=\left(\delta^{2}+\frac{1}{\psi-1}\right)\frac{\psi-1}{\psi}$. By induction, we get

    $\|v_{n+1}-u^{*}\|^{2}<r^{2(n-n_{0}+1)}\|v_{n_{0}}-u^{*}\|^{2},\quad\forall n\ge n_{0}.$

    By (3.35),

    $\|u_{n+1}-u^{*}\|^{2}<\delta^{2}r^{2(n-n_{0})}\|v_{n_{0}}-u^{*}\|^{2},\quad\forall n\ge n_{0},$

    and hence

    $\|u_{n+1}-u^{*}\|^{\frac{1}{n}}<r^{\frac{n-n_{0}}{n}}\left(\delta\|v_{n_{0}}-u^{*}\|\right)^{\frac{1}{n}},\quad\forall n\ge n_{0}.$

    So

    $\varlimsup_{n\to\infty}\|u_{n+1}-u^{*}\|^{\frac{1}{n}}\le r<1.$

    Therefore, $\{u_{n}\}$ converges R-linearly to the unique solution $u^{*}$.

    In this part, we offer an alternating extrapolation algorithm based on the golden ratio for settling the variational inequality problem and provide the proofs of its weak and R-linear convergence.

    Algorithm 4.1. Alternating extrapolation projection contraction algorithm based on the golden ratio.
    Step 0: Take the iterative parameters $\mu\in(0,1)$, $\psi\in(1,+\infty)$, $\gamma\in(0,2)$ and $\xi_{1},\tau_{1},\lambda_{1}>0$. Let $u_{1}\in H$, $v_{0}\in H$ be given starting points, with the sequences $\{\xi_{n}\},\{\tau_{n}\}$ known. Set $n:=1$.
    Step 1: Compute
        $v_{n}=\begin{cases}\frac{\psi-1}{\psi}u_{n}+\frac{1}{\psi}v_{n-1}, & n\ \text{odd},\\ u_{n}, & n\ \text{even}.\end{cases}$    (4.1)
    Step 2: Compute
        $\bar{u}_{n}=P_{C}(v_{n}-\lambda_{n}Av_{n}),$    (4.2)
      where
        $\lambda_{n+1}=\begin{cases}\min\left\{\dfrac{\mu\|v_{n}-\bar{u}_{n}\|}{\|Av_{n}-A\bar{u}_{n}\|},\ \xi_{n}\lambda_{n}+\tau_{n}\right\}, & Av_{n}\ne A\bar{u}_{n},\\ \xi_{n}\lambda_{n}+\tau_{n}, & \text{otherwise}.\end{cases}$    (4.3)
      If $v_{n}=\bar{u}_{n}$, STOP. Otherwise, go to Step 3.
    Step 3: Compute
        $d(v_{n},\bar{u}_{n})=(v_{n}-\bar{u}_{n})-\lambda_{n}(Av_{n}-A\bar{u}_{n}),$    (4.4)
        $u_{n+1}=v_{n}-\gamma\beta_{n}d(v_{n},\bar{u}_{n}),$    (4.5)
        $\varphi_{n}=\langle v_{n}-\bar{u}_{n},d(v_{n},\bar{u}_{n})\rangle,$
      where
        $\beta_{n}=\begin{cases}\dfrac{\varphi_{n}}{\|d(v_{n},\bar{u}_{n})\|^{2}}, & d(v_{n},\bar{u}_{n})\ne 0,\\ 0, & d(v_{n},\bar{u}_{n})=0.\end{cases}$    (4.6)
    Step 4: Set $n\leftarrow n+1$, and go to Step 1.
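
    Relative to Algorithm 3.1, only Step 1 changes. In the Python sketch given after Algorithm 3.1, this amounts to replacing the extrapolation line by the alternating branch below (illustrative only):

```python
# Step 1 of Algorithm 4.1, per (4.1): extrapolate only on odd iterations
if n % 2 == 1:
    v = ((psi - 1) * u + v_prev) / psi    # golden-ratio-style convex combination
else:
    v = u                                 # even iterations: v_n = u_n
```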

    Lemma 4.1. Assume $\{u_{n}\}$ is the sequence generated by Algorithm 4.1 under the conditions of Assumption 3.1. Then $\{u_{2n}\}$ is bounded and $\lim_{n\to\infty}\|u_{2n}-u^{*}\|$ exists, where $u^{*}\in S$.

    Proof. Following the lines (3.7)–(3.13) of the proof of Lemma 3.1 and using $\|v_{2n}-u^{*}\|^{2}=\|u_{2n}-u^{*}\|^{2}$, we obtain

    $\|u_{2n+1}-u^{*}\|^{2}\le\|u_{2n}-u^{*}\|^{2}-\frac{2-\gamma}{\gamma}\|v_{2n}-u_{2n+1}\|^{2}.$ (4.7)

    From (3.13) we have

    $\|u_{2n+2}-u^{*}\|^{2}\le\|v_{2n+1}-u^{*}\|^{2}-\frac{2-\gamma}{\gamma}\|v_{2n+1}-u_{2n+2}\|^{2}.$ (4.8)

    By the definition of $v_{n}$,

    $\|v_{2n+1}-u^{*}\|^{2}=\frac{\psi-1}{\psi}\|u_{2n+1}-u^{*}\|^{2}+\frac{1}{\psi}\|v_{2n}-u^{*}\|^{2}-\frac{\psi-1}{\psi^{2}}\|v_{2n}-u_{2n+1}\|^{2}.$ (4.9)

    Combining (4.9) and (4.8), we obtain

    $\|u_{2n+2}-u^{*}\|^{2}-\|u_{2n}-u^{*}\|^{2}\le\frac{\psi-1}{\psi}\left(\|u_{2n+1}-u^{*}\|^{2}-\|u_{2n}-u^{*}\|^{2}\right)-\frac{2-\gamma}{\gamma}\|v_{2n+1}-u_{2n+2}\|^{2}-\frac{\psi-1}{\psi^{2}}\|v_{2n}-u_{2n+1}\|^{2}.$ (4.10)

    From (4.7) we have

    $\|u_{2n+1}-u^{*}\|^{2}-\|u_{2n}-u^{*}\|^{2}\le-\frac{2-\gamma}{\gamma}\|v_{2n}-u_{2n+1}\|^{2}.$ (4.11)

    Combining (4.10) and (4.11), we get

    $\|u_{2n+2}-u^{*}\|^{2}-\|u_{2n}-u^{*}\|^{2}\le-\frac{\psi-1}{\psi}\left(\frac{2-\gamma}{\gamma}+\frac{1}{\psi}\right)\|v_{2n}-u_{2n+1}\|^{2}-\frac{2-\gamma}{\gamma}\|v_{2n+1}-u_{2n+2}\|^{2}\le 0.$ (4.12)

    Therefore $\|u_{2n+2}-u^{*}\|\le\|u_{2n}-u^{*}\|$. This proves that $\{u_{2n}\}$ is bounded and $\lim_{n\to\infty}\|u_{2n}-u^{*}\|$ exists.

    Lemma 4.2. Under Assumption 3.1, suppose $\{u_{2n}\}$ and $\{\bar{u}_{2n}\}$ are generated by Algorithm 4.1. Then $\lim_{n\to\infty}\|u_{2n}-\bar{u}_{2n}\|=0$.

    Proof. From (4.12) and $u_{2n}=v_{2n}$, we get that $\{\|u_{2n}-u^{*}\|\}$ is bounded and

    $\lim_{n\to\infty}\|u_{2n}-u_{2n+1}\|=0.$

    From (3.18) and (3.19), we have

    $\varphi_{2n}\ge\left(1-\frac{\lambda_{2n}}{\lambda_{2n+1}}\mu\right)\|v_{2n}-\bar{u}_{2n}\|^{2},$ (4.13)

    and

    $\|d(v_{2n},\bar{u}_{2n})\|\le\left(1+\frac{\lambda_{2n}}{\lambda_{2n+1}}\mu\right)\|v_{2n}-\bar{u}_{2n}\|.$ (4.14)

    Combining (4.13) and (4.14), we can obtain

    $\|u_{2n+1}-v_{2n}\|=\gamma\beta_{2n}\|d(v_{2n},\bar{u}_{2n})\|=\gamma\frac{\varphi_{2n}}{\|d(v_{2n},\bar{u}_{2n})\|}\ge\gamma\frac{1-\frac{\lambda_{2n}}{\lambda_{2n+1}}\mu}{1+\frac{\lambda_{2n}}{\lambda_{2n+1}}\mu}\|v_{2n}-\bar{u}_{2n}\|>\gamma\frac{1-\rho}{1+\rho}\|v_{2n}-\bar{u}_{2n}\|,\quad\forall n\ge n_{0}.$ (4.15)

    Using $u_{2n}=v_{2n}$ and $\lim_{n\to\infty}\|u_{2n}-u_{2n+1}\|=0$ in (4.15), we get

    $\lim_{n\to\infty}\|u_{2n}-\bar{u}_{2n}\|=0.$

    Lemma 4.3. Assume that $\{u_{2n}\}$ is generated by Algorithm 4.1. Then $\omega_{w}(u_{2n})\subset S$.

    Proof. Let $p\in\omega_{w}(u_{2n})$; then there exists a subsequence $\{u_{2n_{k}}\}\subset\{u_{2n}\}$ such that $u_{2n_{k}}\rightharpoonup p$. By Lemma 2.1(ii) and (4.2) we have

    $\langle u_{2n_{k}}-\lambda_{2n_{k}}Au_{2n_{k}}-\bar{u}_{2n_{k}},\bar{u}_{2n_{k}}-u\rangle\ge 0,\quad\forall u\in C,$

    thus,

    $\langle Au_{2n_{k}},u-\bar{u}_{2n_{k}}\rangle\ge\frac{1}{\lambda_{2n_{k}}}\langle u_{2n_{k}}-\bar{u}_{2n_{k}},u-\bar{u}_{2n_{k}}\rangle,\quad\forall u\in C,$

    and

    $\frac{1}{\lambda_{2n_{k}}}\langle u_{2n_{k}}-\bar{u}_{2n_{k}},u-\bar{u}_{2n_{k}}\rangle+\langle Au_{2n_{k}},\bar{u}_{2n_{k}}-u_{2n_{k}}\rangle\le\langle Au_{2n_{k}},u-u_{2n_{k}}\rangle,\quad\forall u\in C.$ (4.16)

    The remaining steps are similar to those in the proof of Lemma 3.3 and are omitted. Thus, we come to the conclusion

    $\langle Au,u-p\rangle\ge 0,\quad\forall u\in C.$

    Using Lemma 2.3, we get $p\in S$.

    Theorem 4.1. Assume $\{u_{n}\}$ is the sequence generated by Algorithm 4.1 under the conditions of Assumption 3.1. Then there exists $q\in S$ such that $u_{n}\rightharpoonup q$.

    Proof. Since $\{u_{2n}\}$ is bounded, it has weakly convergent subsequences, so we can choose a subsequence of $\{u_{2n}\}$, denoted by $\{u_{2n_{k}}\}$, such that $u_{2n_{k}}\rightharpoonup q\in H$. From Lemmas 4.1 and 4.3 we obtain that $\lim_{n\to\infty}\|u_{2n}-q\|$ exists and $q\in S$. The proof that the whole sequence satisfies $u_{2n}\rightharpoonup q\in S$ is the same as that of Lemma 4.4 in [15]. Hence, $u_{n}\rightharpoonup q\in S$.

    Theorem 4.2. Suppose $\{u_{n}\}$ is generated by Algorithm 4.1 under the condition that $A$ is $\eta$-strongly pseudo-monotone with $\eta>0$. Then $\{u_{n}\}$ converges R-linearly to the unique solution $u^{*}$ of VI$(A,C)$ (1.1).

    Proof. From (3.35), for all $n\ge n_{0}$ we have

    $\|u_{2n+1}-u^{*}\|^{2}<\delta^{2}\|v_{2n}-u^{*}\|^{2}=\delta^{2}\|u_{2n}-u^{*}\|^{2},$ (4.17)

    and

    $\|u_{2n+2}-u^{*}\|^{2}<\delta^{2}\|v_{2n+1}-u^{*}\|^{2},$ (4.18)

    where $\delta^{2}=1-\gamma\frac{1-\rho}{(1+\rho)^{2}}\min\left\{\frac{1}{2}(2-\gamma)(1-\rho),\frac{1}{2}\lambda\eta\right\}$ and $0<\delta^{2}<1$. Combining (4.9) and (4.18),

    $\|u_{2n+2}-u^{*}\|^{2}<\delta^{2}\left(\frac{\psi-1}{\psi}\|u_{2n+1}-u^{*}\|^{2}+\frac{1}{\psi}\|u_{2n}-u^{*}\|^{2}-\frac{\psi-1}{\psi^{2}}\|v_{2n}-u_{2n+1}\|^{2}\right).$ (4.19)

    Putting (4.17) in (4.19), we have

    $\begin{aligned}\|u_{2n+2}-u^{*}\|^{2}&<\delta^{2}\left(\frac{\psi-1}{\psi}\delta^{2}\|u_{2n}-u^{*}\|^{2}+\frac{1}{\psi}\|u_{2n}-u^{*}\|^{2}-\frac{\psi-1}{\psi^{2}}\|v_{2n}-u_{2n+1}\|^{2}\right)\\&\le\delta^{2}\left(\frac{\psi-1}{\psi}\delta^{2}+\frac{1}{\psi}\right)\|u_{2n}-u^{*}\|^{2}\\&<\delta^{2}\|u_{2n}-u^{*}\|^{2},\quad\forall n\ge n_{0}.\end{aligned}$ (4.20)

    So

    $\|u_{2n+2}-u^{*}\|^{2}<\delta^{2}\|u_{2n}-u^{*}\|^{2},\quad\forall n\ge n_{0}.$ (4.21)

    By induction, we have

    $\|u_{2n+2}-u^{*}\|^{2}<\delta^{2(n-n_{0}+1)}\|u_{2n_{0}}-u^{*}\|^{2},\quad\forall n\ge n_{0}.$

    Thus,

    $\|u_{2n+3}-u^{*}\|^{2}<\delta^{2}\|u_{2n+2}-u^{*}\|^{2}<\|u_{2n+2}-u^{*}\|^{2}<\delta^{2(n-n_{0}+1)}\|u_{2n_{0}}-u^{*}\|^{2},\quad\forall n\ge n_{0}.$ (4.22)

    Therefore, $\{u_{n}\}$ converges R-linearly to the unique solution $u^{*}$.

    This section provides some computational experiments comparing the algorithms of Sections 3 and 4 with other algorithms. All codes were written in MATLAB R2016b and run on a desktop PC with an AMD Ryzen R7-5600U CPU @ 3.00 GHz and 16.00 GB of RAM.

    We compare our Algorithm 3.1 and Algorithm 4.1 with Algorithm 2 in [15] and Algorithm 1 in [27]; "Time" in the tables indicates CPU time. In this section, we set the maximum number of iterations $n_{\max}=6\times10^{5}$, $\xi_{n}=1+\frac{1}{n^{2}}$ and $\tau_{n}=\frac{1}{n^{2}}$.

    Example 5.1. Define $A:\mathbb{R}^{m}\to\mathbb{R}^{m}$ by

    $Au=(M+\beta)(Nu+q),$

    where $M=e^{-u^{T}Qu}$, $N$ is a positive semi-definite matrix, $Q$ is a positive definite matrix, $q\in\mathbb{R}^{m}$ and $\beta>0$. Besides being easy to construct, $A$ is pseudo-monotone, differentiable and Lipschitz continuous. Take $C=\{u\in\mathbb{R}^{m}:Bu\le b\}$, where $B$ is a $k\times m$ matrix and $b\in\mathbb{R}_{+}^{k}$ with $k=10$. Select the initial point $u_{1}=(1,1,\dots,1)^{T}$ for all algorithms; the other starting point $v_{0}$ of Algorithms 3.1 and 4.1 is generated randomly in $\mathbb{R}^{m}$. We take $\psi=\frac{\sqrt{5}+1}{2}$, $\mu=0.6$ in Algorithms 3.1 and 4.1, $\theta_{n}=\frac{2-\gamma}{1.01\gamma}$ in Algorithm 2 in [15] and $\theta=0.45(1-\mu)$ in Algorithm 1 in [27]. We then take different values of $\lambda_{1}$ and $\gamma$, respectively, to compare with the algorithms in the other two papers. In this example, we take $\|\bar{u}_{n}-v_{n}\|<10^{-3}$ as the stopping criterion.
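
    A hypothetical Python sketch of how such a problem instance might be generated (the paper does not specify the random scheme or the value of $\beta$, so the concrete choices below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, beta = 100, 10, 0.5                 # beta > 0; its value is assumed here
G = rng.standard_normal((m, m))
N = G.T @ G                               # positive semi-definite
Q = G.T @ G + np.eye(m)                   # positive definite
q = rng.standard_normal(m)
# A is a positive scalar rescaling of the monotone affine map u -> Nu + q,
# hence pseudo-monotone, though generally not monotone.
A = lambda u: (np.exp(-u @ Q @ u) + beta) * (N @ u + q)
B = rng.standard_normal((k, m))
b = rng.uniform(0.0, 1.0, k)              # b in R^k_+
# C = {u : B @ u <= b}; P_C has no closed form, so each projection is a small QP solve.
```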

    In Table 1, we compare our Algorithms 3.1 and 4.1 with Algorithm 1 in [27] and Algorithm 2 in [15] in different dimensions for $\gamma=1.5$, $\lambda_{1}=0.05$, and Figure 1 gives a comparison for $m=100$. The results illustrate that our two algorithms have some superiority.

    Table 1.  Example 5.1 with $\gamma=1.5$, $\lambda_{1}=0.05$ and various values of $m$.
    Problem size Alg 3.1 Alg 4.1 Alg 2 in [15] Alg 1 in [27]
    k m Iter Time Iter Time Iter Time Iter Time
    10 100 1821 0.3262 1162 0.2071 1979 0.3926 7644 1.4647
    150 1568 0.3271 1050 0.2163 4716 0.9535 29390 6.3288
    200 1645 0.4181 1164 0.3074 6754 1.6716 —— ——
    300 2034 0.7110 1192 0.4525 18566 6.0136 —— ——
    500 2641 2.2280 1584 1.4692 55310 27.8013 —— ——
    1000 3885 9.3913 2332 5.8732 —— —— —— ——

    Figure 1.  Relationship between error value and number of iterations in Example 5.1 with $k=10$, $m=100$.

    In Tables 2 and 3, we compare Algorithm 3.1 and Algorithm 4.1, respectively, for the same dimensions with different $\gamma$. We find that, for both algorithms in the same dimension, the larger $\gamma\in(0,2)$ is, the fewer the iterations and the shorter the CPU time.

    Table 2.  Algorithm 3.1 with different γ.
    γ m=200 m=400 m=800 m=1000
    Iter Time Iter Time Iter Time Iter Time
    0.25 9996 2.4136 14529 1.5986 20320 27.2981 23441 42.2450
    0.5 4746 1.1729 7244 3.3542 10136 14.8189 11069 21.5681
    1 2438 0.6266 3402 1.7116 5049 8.4571 5736 12.3817
    1.25 1939 0.6079 2866 1.4242 4126 7.2440 4515 10.2743
    1.5 1546 0.4253 2315 1.1200 3386 6.0285 3878 9.1699

    Table 3.  Algorithm 4.1 with different γ.
    γ m=200 m=400 m=800 m=1000
    Iter Time Iter Time Iter Time Iter Time
    0.25 7212 1.7554 10193 4.6670 14085 19.7867 17188 32.4055
    0.5 3063 6.7968 4836 2.3662 7362 11.7332 8108 16.3521
    1 1626 0.4460 2292 1.1557 3170 5.8066 3324 8.0294
    1.25 1429 0.7354 1468 0.8067 2602 5.0285 3212 7.8888
    1.5 964 0.2697 1300 0.7061 2104 4.1989 2586 6.4205


    Example 5.2. [28] Define a mapping $A$ by

    $Au=(M^{T}M+N+P)u.$

    The matrices $N$ and $P$ are a randomly generated skew-symmetric matrix and a positive diagonal matrix, respectively. Assume $C:=\{u\in\mathbb{R}^{m}:Mu\le p\}$, where the matrix $M\in\mathbb{R}^{k\times m}$ and the vector $p\in\mathbb{R}^{k}$ are randomly generated, with all entries of $p$ non-negative so that $0\in C$. Hence VI$(A,C)$ (1.1) has the unique solution $u^{*}=0$. Set $\psi=\frac{\sqrt{5}+1}{2}$, $\mu=\frac{1}{2}$ in Algorithms 3.1 and 4.1. We choose $\theta_{n}=\frac{2-\gamma}{1.01\gamma}$ in Algorithm 2 in [15] and $\theta=0.45(1-\mu)$ in Algorithm 1 in [27]. Additionally, we take different values of $\lambda_{1}$ and $\gamma$, respectively, to compare with the algorithms in the other two papers. We use the stopping criterion $\|\bar{u}_{n}-v_{n}\|\le 10^{-3}$.
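
    A hypothetical Python sketch of this problem generation (the distributions below are assumptions; the paper only states the structural properties):

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 30, 60
M = rng.standard_normal((k, m))           # the same M also defines C = {u : M @ u <= p}
G = rng.standard_normal((m, m))
N = G - G.T                               # random skew-symmetric matrix
P = np.diag(rng.uniform(0.1, 1.0, m))     # positive diagonal matrix
A_mat = M.T @ M + N + P                   # A(u) = (M^T M + N + P) u
A = lambda u: A_mat @ u

p = rng.uniform(0.0, 1.0, k)              # non-negative entries, so u* = 0 lies in C
# Evaluating P_C means solving a small QP: min ||x - u||^2 s.t. M @ x <= p.
```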

    In Table 4, we compare our Algorithm 3.1 and Algorithm 4.1 with Algorithm 1 in [27] and Algorithm 2 in [15] in different dimensions for $\gamma=1.5$, $\lambda_{1}=0.05$, and Figure 2 gives a comparison for $k=30$, $m=60$.

    Table 4.  Example 5.2 with $\gamma=1.5$, $\lambda_{1}=0.05$ and various values of $k,m$.
    Problem size Alg 3.1 Alg 4.1 Alg 2 in [15] Alg 1 in [27]
    k m Iter Time Iter Time Iter Time Iter Time
    30 60 1146 0.2566 437 0.0990 2227 0.5476 7656 12.8395
    100 1429 0.3478 510 0.1237 6731 2.2071 23695 277.4316
    120 1680 0.4009 586 0.1407 7585 2.4785 27968 245.5148
    50 50 1359 0.4956 501 0.1841 1166 0.5706 4987 15.0018
    100 1434 0.5570 476 0.1862 5817 3.1011 22307 300.3848
    150 1439 0.6184 420 0.1832 14681 8.5531 52346 1.8247e+03
    100 100 1399 1.0800 498 0.3947 4115 4.7235 16011 514.9273
    200 1445 1.3125 405 0.3740 19535 28.4272 —— ——
    500 500 1561 0.7445 448 0.2162 25091 17.1813 —— ——
    1000 887 8.8188 255 2.5293 —— —— —— ——
    1000 1000 957 18.0159 270 5.0675 —— —— —— ——
    2000 647 22.4660 206 7.0048 —— —— —— ——

    Figure 2.  Relationship between error value and number of iterations in Example 5.2 with $\gamma=1.5$, $\lambda_{1}=0.05$, $k=30$, $m=60$.

    In Figures 3 and 4 we compare the behavior of Algorithm 3.1 and Algorithm 4.1, respectively, with varying $\psi$.

    Figure 3.  Algorithm 3.1 with varying ψ.
    Figure 4.  Algorithm 4.1 with varying ψ.

    Remark 5.1. From Figures 1 and 2, we can see that the projection contraction algorithms based on the golden ratio have numerical advantages over inertial extrapolation, and that the alternating extrapolation projection contraction algorithm performs better than the plain projection contraction algorithm based on the golden ratio. Moreover, Figures 3 and 4 show that our algorithms converge faster with larger $\psi$.

    We have presented a projection contraction algorithm and an alternating extrapolation projection contraction algorithm based on the golden ratio for solving the pseudo-monotone variational inequality problem in real Hilbert spaces. We proved weak convergence of the two algorithms when the operator is pseudo-monotone, and R-linear convergence when $A$ is a strongly pseudo-monotone mapping. We extended the range of $\psi$ from $(1,\frac{\sqrt{5}+1}{2}]$ to $(1,+\infty)$, and the proofs of both algorithms are given without requiring knowledge of the Lipschitz constant. Numerical examples show the superiority of our algorithms; moreover, we observe that they suffer less under the same unfavorable conditions and have a relatively stable rate of convergence.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The first author is supported by the Fundamental Science Research Funds for the Central Universities (Program No. 3122018L004).

    The authors declare there is no conflict of interest.



    [1] D. V. Thong, D. V. Hieu, Inertial extragradient algorithms for strongly pseudomonotone variational inequalities, J. Comput. Appl. Math., 341 (2018), 80–98. https://doi.org/10.1016/j.cam.2018.03.019 doi: 10.1016/j.cam.2018.03.019
    [2] P. K. Anh, D. V. Thong, N. T. Vinh, Improved inertial extragradient methods for solving pseudo-monotone variational inequalities, Optimization, 71 (2020), 505–528. https://doi.org/10.1080/02331934.2020.1808644 doi: 10.1080/02331934.2020.1808644
    [3] P. Q. Khanh, D. V. Thong, N. T. Vinh, Versions of the subgradient extragradient method for pseudomonotone variational inequalities, Acta Appl. Math., 170 (2020), 319–345. https://doi.org/10.1007/s10440-020-00335-9 doi: 10.1007/s10440-020-00335-9
    [4] S. Reich, D. V. Thong, Q. L. Dong, X. H. Li, V. T. Dung, New algorithms and convergence theorems for solving variational inequalities with non-Lipschitz mappings, Numer. Algorithms, 87 (2021), 527–549. https://doi.org/10.1007/s11075-020-00977-8 doi: 10.1007/s11075-020-00977-8
    [5] F. Facchinei, J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, In: Springer Series in Operations Research, New York: Springer, 2003.
    [6] I. Konnov, Combined relaxation methods for variational inequalities, Berlin: Springer-Verlag, 2001.
    [7] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
    [8] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318–335. https://doi.org/10.1007/s10957-010-9757-3 doi: 10.1007/s10957-010-9757-3
    [9] Y. Censor, A. Gibali, S. Reich, Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space, Optim. Method. Softw., 26 (2011), 827–845. https://doi.org/10.1080/10556788.2010.551536 doi: 10.1080/10556788.2010.551536
    [10] Y. Censor, A. Gibali, S. Reich, Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space, Optimization, 61 (2012), 1119–1132. https://doi.org/10.1080/02331934.2010.539689 doi: 10.1080/02331934.2010.539689
    [11] B. S. He, A class of projection and contraction methods for monotone variational inequalities, Appl. Math. Optim., 35 (1997), 69–76.
    [12] X. J. Cai, G. Y. Gu, B. S. He, On the O(1/t) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators, Comput. Optim. Appl., 57 (2014), 339–363. https://doi.org/10.1007/s10589-013-9599-7 doi: 10.1007/s10589-013-9599-7
    [13] Q. L. Dong, Y. J. Cho, L. L. Zhong, T. M. Rassias, Inertial projection and contraction algorithms for variational inequalities, J. Global Optim., 70 (2018), 687–704. https://doi.org/10.1007/s10898-017-0506-0 doi: 10.1007/s10898-017-0506-0
    [14] Q. L. Dong, Y. J. Cho, T. M. Rassias, The projection and contraction methods for finding common solutions to variational inequality problems, Optim. Lett., 12 (2018), 1871–1896. https://doi.org/10.1007/s11590-017-1210-1 doi: 10.1007/s11590-017-1210-1
    [15] Y. Shehu, O. S. Iyiola, Projection methods with alternating inertial steps for variational inequalities: Weak and linear convergence, Appl. Numer. Math., 157 (2020), 315–337. https://doi.org/10.1016/j.apnum.2020.06.009 doi: 10.1016/j.apnum.2020.06.009
    [16] Y. Malitsky, Golden ratio algorithms for variational inequalities, Math. Program., 184 (2018), 383–410. https://doi.org/10.1007/s10107-019-01416-w doi: 10.1007/s10107-019-01416-w
    [17] D. V. Thong, N. T. Vinh, Y. J. Cho, New strong convergence theorem of the inertial projection and contraction method for variational inequality problems, Numer. Algorithms, 84 (2019), 285–305. https://doi.org/10.1007/s11075-019-00755-1 doi: 10.1007/s11075-019-00755-1
    [18] P. Cholamjiak, D. V. Thong, Y. J. Cho, A novel inertial projection and contraction method for solving pseudomonotone variational inequality problems, Acta Appl. Math., 169 (2019), 217–245. https://doi.org/10.1007/s10440-019-00297-7 doi: 10.1007/s10440-019-00297-7
    [19] X. K. Chang, J. F. Yang, A golden ratio primal-dual algorithm for structured convex optimization, J. Sci. Comput., 87 (2021), 47. https://doi.org/10.1007/s10915-021-01452-9 doi: 10.1007/s10915-021-01452-9
    [20] X. K. Chang, J. F. Yang, H. C. Zhang, Golden ratio primal-dual algorithm with linesearch, SIAM J. Optim., 32 (2022), 1584–1613. https://doi.org/10.1137/21M1420319 doi: 10.1137/21M1420319
    [21] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, New York: Springer, 2011.
    [22] K. Goebel, S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, New York: Marcel Dekker, 1984.
    [23] J. M. Ortega, W. C. Rheinboldt, Iterative solution of nonlinear equations in several variables, New York: Academic Press, 1970.
    [24] J. Mashreghi, M. Nasri, Forcing strong convergence of Korpelevich's method in Banach spaces with its applications in game theory, Nonlinear Anal., 72 (2010), 2086–2099. https://doi.org/10.1016/j.na.2009.10.009 doi: 10.1016/j.na.2009.10.009
    [25] G. L. Acedo, H. K. Xu, Iterative methods for strict pseudo-contractions in Hilbert spaces, Nonlinear Anal., 67 (2007), 2258–2271. https://doi.org/10.1016/j.na.2006.08.036 doi: 10.1016/j.na.2006.08.036
    [26] M. O. Osilike, S. C. Aniagbosor, Weak and strong convergence theorems for fixed points of asymptotically nonexpansive mappings, Math. Comput. Modell., 32 (2000), 1181–1191. https://doi.org/10.1016/S0895-7177(00)00199-0 doi: 10.1016/S0895-7177(00)00199-0
    [27] Y. Shehu, Q. L. Dong, L. L. Liu, Fast alternated inertial projection algorithms for pseudo-monotone variational inequalities, J. Comput. Appl. Math., 415 (2022), 114517. https://doi.org/10.1016/j.cam.2022.114517 doi: 10.1016/j.cam.2022.114517
    [28] P. T. Harker, J.-S. Pang, A damped-Newton method for the linear complementarity problem, in: Computational Solution of Nonlinear Systems of Equations (Fort Collins, CO, 1988), Lectures in Applied Mathematics, 1990.
  • This article has been cited by:

    1. Olawale K. Oyewole, Simeon Reich, Two subgradient extragradient methods based on the golden ratio technique for solving variational inequality problems, 2024, 97, 1017-1398, 1215, 10.1007/s11075-023-01746-z
    2. Youngjin Hwang, Soobin Kwak, Hyundong Kim, Junseok Kim, An explicit numerical method for the conservative Allen–Cahn equation on a cubic surface, 2024, 9, 2473-6988, 34447, 10.3934/math.20241641
    3. Hammed Anuoluwapo Abass, Olawale Kazeem Oyewole, Seithuti Philemon Moshokoa, Abubakar Adamu, An Adapted Proximal Point Algorithm Utilizing the Golden Ratio Technique for Solving Equilibrium Problems in Banach Spaces, 2024, 12, 2227-7390, 3773, 10.3390/math12233773
    4. Olawale K. Oyewole, Hammed A. Abass, O.J. Ogunsola, An improved subgradient extragradient self-adaptive algorithm based on the golden ratio technique for variational inequality problems in Banach spaces, 2024, 03770427, 116420, 10.1016/j.cam.2024.116420
    5. O. K. Oyewole, H. A. Abass, S. P. Moshokoa, A simple proximal algorithm based on the golden ratio for equilibrium problem on Hadamard manifolds, 2025, 74, 0009-725X, 10.1007/s12215-024-01183-4
    6. Victor Amarachi Uzor, Oluwatosin Temitope Mewomo, Convergence analysis of a resolvent-free method for solving inclusion problems beyond Co-coercivity, 2025, 36, 1012-9405, 10.1007/s13370-025-01275-z
    7. Limei Xue, Jianmin Song, Shenghua Wang, A modified projection and contraction method for solving a variational inequality problem in Hilbert spaces, 2025, 10, 2473-6988, 6128, 10.3934/math.2025279
    8. Zhong-bao Wang, Xing-yu Chen, Zhang-you Chen, A New Golden Ratio Inertial Algorithm with Two Types of Self Adaptive Step Sizes for Solving Nonlinear Inclusion Problems, 2025, 103, 0885-7474, 10.1007/s10915-025-02903-3
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)