
Two improved extrapolated gradient algorithms for pseudo-monotone variational inequalities

  • In this paper, the approximation problem for pseudo-monotone variational inequalities is studied. With the help of the reciprocal form of parameters and the golden section ratio, two kinds of algorithms are introduced, and their strong convergence is proved under appropriate conditions.

    Citation: Haoran Tang, Weiqiang Gong. Two improved extrapolated gradient algorithms for pseudo-monotone variational inequalities[J]. AIMS Mathematics, 2025, 10(2): 2064-2082. doi: 10.3934/math.2025097




    Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, and let $A:H\to H$ be a mapping. The variational inequality problem is to find $x^*\in C$ satisfying the following inequality:

    $$\langle A(x^*),\,y-x^*\rangle\ge 0,\quad \forall y\in C.$$

    In recent years, due to the wide application of variational inequalities in mathematics [1,2], physics [3,4,5,6], economics [7,8,9,10], and other fields, the theory of variational inequalities has become an important direction in nonlinear analysis. Many academics have studied it and put forward a multitude of findings [11,12,13,14]. Among the projection algorithms for variational inequalities, Korpelevich's extragradient algorithm [15] is one of the most effective; it produces a sequence $\{x_n\}$ by the following iterative formula:

    $$\begin{cases}y_n=P_C(x_n-\lambda A(x_n)),\\ x_{n+1}=P_C(x_n-\lambda A(y_n)),\end{cases}$$

    in which $L$ is the Lipschitz constant of the mapping $A$ and $\lambda\in(0,\frac{1}{L})$. In each iteration, the projection onto the feasible set must be computed twice. It should be pointed out that if the structure of $C$ is complex, the projection is difficult to calculate, which in turn impacts the algorithm's efficiency. In 2000, Tseng introduced an extragradient method with a single projection; that is, only one projection onto the feasible set is calculated during each iteration [16]. The specific iteration formula is as follows:

    $$\begin{cases}y_n=P_C(x_n-\lambda A(x_n)),\\ x_{n+1}=y_n-\lambda(A(y_n)-A(x_n)),\end{cases}$$

    where $L$ is the Lipschitz constant of the mapping $A$ and $\lambda\in(0,\frac{1}{L})$. It is worth noting that the step sizes of the above algorithms all depend on the Lipschitz constant of $A$, which is often difficult to calculate or approximate. To avoid estimating the Lipschitz constant, it is common to utilize the Armijo line search to determine the step size [17,18]. However, finding an appropriate step size in this way can demand significant computational effort at each iteration, making the Armijo-type line search rather costly. Recently, Yang and Liu proposed an adaptive extragradient algorithm [19], in which the step size is updated step by step by a simple calculation instead of estimating the Lipschitz constant.
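    As an illustration, one Tseng iteration can be sketched in a few lines. The toy operator $A(x)=Mx$ with $M$ positive definite and the feasible set $C=\mathbb{R}^n_+$ below are our own illustrative choices (not taken from [16]); for this problem the unique solution is $x^*=0$.

```python
import numpy as np

def tseng_step(x, A, proj_C, lam):
    """One iteration of Tseng's method: y = P_C(x - lam*A(x)), then
    x_next = y - lam*(A(y) - A(x)); only one projection onto C is used."""
    Ax = A(x)
    y = proj_C(x - lam * Ax)
    return y - lam * (A(y) - Ax)

# Illustrative monotone problem: A(x) = M x, C = R^n_+, solution x* = 0.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
A = lambda x: M @ x
proj_C = lambda x: np.maximum(x, 0.0)      # projection onto R^n_+
L = np.linalg.norm(M, 2)                   # Lipschitz constant of A

x = np.array([1.0, -1.0])
for _ in range(300):
    x = tseng_step(x, A, proj_C, 0.9 / L)  # lam in (0, 1/L)
```

    Note that the Lipschitz constant $L$ must be known here, which is exactly the limitation the adaptive step-size rules below remove.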

    Based on [20,21], this paper introduces two new projection algorithms. Under the assumption that the mapping $A$ is pseudo-monotone and Lipschitz continuous, it is proved that the sequences generated by the algorithms converge strongly to a solution of the variational inequality. Compared with [15,16,17,18,19], the step size of the algorithms proposed in this paper is updated step by step through simple calculations, without the need to estimate the Lipschitz constant.

    Let $H$ be a real Hilbert space and $C\subset H$ a nonempty closed convex subset. For $\{x_n\}\subset C$, weak convergence of $\{x_n\}$ to $x$ is denoted by $x_n\rightharpoonup x\ (n\to\infty)$, and strong convergence by $x_n\to x\ (n\to\infty)$. For all $x,y\in H$ and $\alpha\in\mathbb{R}$, we have

    $$\|x+y\|^2\le\|x\|^2+2\langle y,x+y\rangle \quad (2.1)$$

    and

    $$\|\alpha x+(1-\alpha)y\|^2=\alpha\|x\|^2+(1-\alpha)\|y\|^2-\alpha(1-\alpha)\|x-y\|^2. \quad (2.2)$$

    Definition 2.1. Let $H$ be a real Hilbert space and $A:H\to H$ a nonlinear operator. Then $A$ is called

    (1) $L$-Lipschitz continuous ($L>0$) if

    $$\|Ax-Ay\|\le L\|x-y\|,\quad\forall x,y\in H. \quad (2.3)$$

    (2) Monotone if

    $$\langle Ax-Ay,x-y\rangle\ge 0,\quad\forall x,y\in H. \quad (2.4)$$

    (3) Pseudo-monotone if

    $$\langle Ax,y-x\rangle\ge 0 \;\Rightarrow\; \langle Ay,y-x\rangle\ge 0,\quad\forall x,y\in H. \quad (2.5)$$

    (4) Weakly sequentially continuous if for every sequence $\{x_n\}\subset H$, as $n\to\infty$, we have

    $$x_n\rightharpoonup x \;\Rightarrow\; Ax_n\rightharpoonup Ax. \quad (2.6)$$

    Definition 2.2. [22] Let $H$ be a real Hilbert space and $C\subset H$ a nonempty closed convex subset. For each $x\in H$, there exists a unique nearest point in $C$, denoted by $P_C(x)$, such that $\|x-P_C(x)\|\le\|x-y\|$ for all $y\in C$. The mapping $P_C$ is called the metric projection.

    Lemma 2.1. [22] Let $H$ be a real Hilbert space and $C\subset H$ a nonempty closed convex subset. For $x\in H$ and $z\in C$, we have

    $$z=P_C(x) \iff \langle x-z,z-y\rangle\ge 0,\quad\forall y\in C.$$

    Lemma 2.2. [22] Let $H$ be a real Hilbert space and $C\subset H$ a nonempty closed convex subset. For all $x\in H$, we have

    (1) $\|P_C(x)-P_C(y)\|^2\le\langle P_C(x)-P_C(y),x-y\rangle,\quad\forall y\in H$,

    (2) $\|P_C(x)-y\|^2\le\|x-y\|^2-\|x-P_C(x)\|^2,\quad\forall y\in C$,

    (3) $\langle(I-P_C)(x)-(I-P_C)(y),x-y\rangle\ge\|(I-P_C)(x)-(I-P_C)(y)\|^2,\quad\forall y\in H$.

    Lemma 2.3. [12] For all $x\in H$ and $\alpha\ge\beta>0$, the following inequalities hold:

    (1) $\|x-P_C[x-\beta A(x)]\|\le\|x-P_C[x-\alpha A(x)]\|$,

    (2) $\dfrac{\|x-P_C[x-\alpha A(x)]\|}{\alpha}\le\dfrac{\|x-P_C[x-\beta A(x)]\|}{\beta}$.

    Lemma 2.4. [23] Let $\{x_n\}$ be a sequence of nonnegative real numbers and $\{\alpha_n\}\subset(0,1)$ with $\sum_{n=1}^{\infty}\alpha_n=\infty$, satisfying

    $$x_{n+1}\le(1-\alpha_n)x_n+\alpha_ny_n,\quad n\ge 1.$$

    If $\limsup_{k\to\infty}y_{n_k}\le 0$ for every subsequence $\{x_{n_k}\}$ of $\{x_n\}$ satisfying $\liminf_{k\to\infty}(x_{n_k+1}-x_{n_k})\ge 0$, then $\lim_{n\to\infty}x_n=0$.

    Lemma 2.5. [24] Let $A:H\to H$ be $L$-Lipschitz continuous on bounded subsets of $H$, and let $M$ be a bounded subset of $H$; then $A(M)$ is bounded.

    Lemma 2.6. [25] Let $A:C\to H$ be a pseudo-monotone continuous mapping and $x^*\in C$. Then

    $$x^*\in VI(C,A) \iff \langle A(x),x-x^*\rangle\ge 0,\quad\forall x\in C.$$

    Lemma 2.7. [26] Let $\{x_n\}$ be a sequence of real numbers such that there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ with $x_{n_j}<x_{n_j+1}$ for all $j\in\mathbb{N}$. Then there exists a nondecreasing sequence of integers $\{m_k\}$ with $\lim_{k\to\infty}m_k=\infty$ such that, for all sufficiently large $k\in\mathbb{N}$,

    $$x_{m_k}\le x_{m_k+1},\qquad x_k\le x_{m_k+1}.$$

    In fact, $m_k$ is the largest number $n$ in the set $\{1,2,\dots,k\}$ such that $x_n<x_{n+1}$.

    Lemma 2.8. [27] Let $\{x_n\}$ be a sequence of nonnegative real numbers that satisfies

    $$x_{n+1}\le(1-\gamma_n)x_n+\gamma_ny_n,\quad n\ge 0,$$

    where $\{\gamma_n\}$ and $\{y_n\}$ meet

    $$\{\gamma_n\}\subset(0,1),\quad\{y_n\}\subset\mathbb{R},\quad\sum_{n=1}^{\infty}\gamma_n=\infty,\quad\limsup_{n\to\infty}y_n\le 0;$$

    then

    $$\lim_{n\to\infty}x_n=0.$$

    Lemma 2.9. [21] Let $\{p_k\}\subset C$ be a bounded sequence and suppose $p$ is a cluster point of $\{p_k\}$. If $\lim_{k\to\infty}\|p_k-p\|$ exists, then $\{p_k\}$ is convergent.

    Assumption 3.1.

    (C1) The feasible set $C$ is a nonempty closed convex subset of the Hilbert space $H$, and the solution set $VI(C,A)$ is nonempty.

    (C2) The operator $A:H\to H$ is pseudo-monotone on bounded subsets of $C$, $L$-Lipschitz continuous, and weakly sequentially continuous.

    (C3) $f:H\to H$ is a contraction mapping with constant $\rho\in[0,1)$, and the sequence $\{\beta_n\}\subset(0,1]$ satisfies

    $$\lim_{n\to\infty}\beta_n=0,\qquad\sum_{n=1}^{\infty}\beta_n=\infty.$$

    Next, the improved adaptive subgradient extragradient algorithm is given.

    Algorithm 3.1. Improved adaptive subgradient extragradient algorithm.

    Step 0: Given $\lambda_0>0$, $\mu>2$, $\xi\in(0,1]$, and $x_0,x_1\in H$. Let $\{\tau_n\}$ be a positive real number sequence with $\lim_{n\to\infty}\frac{\tau_n}{\lambda_n}=\eta$, where $\eta\in(\frac{2}{\mu},1]$ is a constant.

    Step 1: Compute

    $$\omega_n=x_n+\alpha_n(x_n-x_{n-1}).$$

    Step 2: Compute

    $$y_n=P_C\Big[\omega_n-\frac{\xi}{\tau_n}A(\omega_n)\Big],\qquad z_n=P_{T_n}\Big[\omega_n-\frac{\xi}{\lambda_n}A(y_n)\Big],\qquad T_n=\Big\{x\in H:\Big\langle\omega_n-\frac{\xi}{\tau_n}A(\omega_n)-y_n,\,x-y_n\Big\rangle\le 0\Big\}.$$

    If $\omega_n=y_n$ or $A(y_n)=0$, STOP. Otherwise, go to Step 3.

    Step 3: Compute

    $$x_{n+1}=\beta_nf(z_n)+(1-\beta_n)z_n.$$

    Step 4: Compute

    $$\lambda_{n+1}=\begin{cases}\max\Big\{\dfrac{\mu\langle A(\omega_n)-A(y_n),z_n-y_n\rangle}{\|\omega_n-y_n\|^2+\|z_n-y_n\|^2},\,\lambda_n\Big\},&\text{if }\langle A(\omega_n)-A(y_n),z_n-y_n\rangle>0,\\[2mm] \lambda_n,&\text{otherwise.}\end{cases} \quad (3.1)$$

    Set n:=n+1, and go to Step 1.

    Lemma 3.1. Suppose that (C1)–(C3) hold; then the sequence $\{\lambda_n\}$ generated by (3.1) is nondecreasing and satisfies

    $$0<\lim_{n\to\infty}\lambda_n=\lambda\le\max\Big\{\lambda_0,\frac{\mu L}{2}\Big\}.$$

    Proof. According to (3.1), $\{\lambda_n\}$ is nondecreasing. Since $A$ is $L$-Lipschitz continuous, $\|A(\omega_n)-A(y_n)\|\le L\|\omega_n-y_n\|$. When $\langle A(\omega_n)-A(y_n),z_n-y_n\rangle>0$, we have

    $$\frac{\mu\langle A(\omega_n)-A(y_n),z_n-y_n\rangle}{\|\omega_n-y_n\|^2+\|z_n-y_n\|^2}\le\frac{\mu\|A(\omega_n)-A(y_n)\|\,\|z_n-y_n\|}{2\|\omega_n-y_n\|\,\|z_n-y_n\|}\le\frac{\mu L}{2},$$

    where we used $\|\omega_n-y_n\|^2+\|z_n-y_n\|^2\ge 2\|\omega_n-y_n\|\,\|z_n-y_n\|$. Based on mathematical induction, we obtain

    $$\lambda_{n+1}\le\max\Big\{\lambda_0,\frac{\mu L}{2}\Big\}.$$

    When $\langle A(\omega_n)-A(y_n),z_n-y_n\rangle\le 0$, we have $\lambda_{n+1}=\lambda_n$ and the bound also holds. Hence $\{\lambda_n\}$ is nondecreasing and bounded above, so $\lim_{n\to\infty}\lambda_n=\lambda$ exists, and we get the conclusion.

    Lemma 3.2. Suppose (C1)–(C3) hold and let $\{z_n\}$ be the sequence generated by Algorithm 3.1. Then, for every $p\in VI(C,A)$,

    $$\|z_n-p\|^2\le\|\omega_n-p\|^2-\Big(1-\frac{\tau_n}{\lambda_n}\Big)\|\omega_n-z_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|\omega_n-y_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|z_n-y_n\|^2. \quad (3.2)$$

    Proof. Let $p\in VI(C,A)$; then $p\in C\subset T_n$. According to Lemma 2.2, we have

    $$\begin{aligned}\|z_n-p\|^2&=\Big\|P_{T_n}\Big[\omega_n-\tfrac{\xi}{\lambda_n}A(y_n)\Big]-p\Big\|^2\le\Big\|\omega_n-\tfrac{\xi}{\lambda_n}A(y_n)-p\Big\|^2-\Big\|\omega_n-\tfrac{\xi}{\lambda_n}A(y_n)-z_n\Big\|^2\\&=\|\omega_n-p\|^2+\tfrac{\xi^2}{\lambda_n^2}\|A(y_n)\|^2-\tfrac{2\xi}{\lambda_n}\langle A(y_n),\omega_n-p\rangle-\|\omega_n-z_n\|^2-\tfrac{\xi^2}{\lambda_n^2}\|A(y_n)\|^2+\tfrac{2\xi}{\lambda_n}\langle A(y_n),\omega_n-z_n\rangle\\&=\|\omega_n-p\|^2-\|\omega_n-z_n\|^2-\tfrac{2\xi}{\lambda_n}\langle A(y_n),z_n-p\rangle\\&=\|\omega_n-p\|^2-\|\omega_n-z_n\|^2-\tfrac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle-\tfrac{2\xi}{\lambda_n}\langle A(y_n),y_n-p\rangle.\end{aligned}$$

    Since $y_n\in C$ and $p\in VI(C,A)$, we gain $\langle A(p),y_n-p\rangle\ge 0$. Because $A$ is a pseudo-monotone operator, $\langle A(y_n),y_n-p\rangle\ge 0$. Hence

    $$\|z_n-p\|^2\le\|\omega_n-p\|^2-\|\omega_n-z_n\|^2-\frac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle, \quad (3.3)$$

    in which

    $$\begin{aligned}\|\omega_n-z_n\|^2+\tfrac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle&=\|\omega_n-y_n+y_n-z_n\|^2+\tfrac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle\\&=\|\omega_n-y_n\|^2+\|y_n-z_n\|^2+2\langle\omega_n-y_n,y_n-z_n\rangle+\tfrac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle\\&=\|\omega_n-y_n\|^2+\|y_n-z_n\|^2+2\Big\langle y_n-\omega_n+\tfrac{\xi}{\lambda_n}A(y_n),z_n-y_n\Big\rangle. \quad (3.4)\end{aligned}$$

    Furthermore, since $y_n=P_C[\omega_n-\frac{\xi}{\tau_n}A(\omega_n)]$ and $z_n\in T_n$, we have $\langle y_n-\omega_n+\frac{\xi}{\tau_n}A(\omega_n),z_n-y_n\rangle\ge 0$, and therefore

    $$\begin{aligned}2\Big\langle y_n-\omega_n+\tfrac{\xi}{\lambda_n}A(y_n),z_n-y_n\Big\rangle&=2\Big\langle y_n-\omega_n+\tfrac{\xi}{\tau_n}A(\omega_n),z_n-y_n\Big\rangle+\tfrac{2\xi}{\tau_n}\langle A(y_n)-A(\omega_n),z_n-y_n\rangle+2\Big(\tfrac{\xi}{\lambda_n}-\tfrac{\xi}{\tau_n}\Big)\langle A(y_n),z_n-y_n\rangle\\&\ge\tfrac{2\xi}{\tau_n}\langle A(y_n)-A(\omega_n),z_n-y_n\rangle+\Big(1-\tfrac{\lambda_n}{\tau_n}\Big)\tfrac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle. \quad (3.5)\end{aligned}$$

    Substituting (3.5) into (3.4), one has

    $$\|\omega_n-z_n\|^2+\frac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle\ge\|\omega_n-y_n\|^2+\|y_n-z_n\|^2+\frac{2\xi}{\tau_n}\langle A(y_n)-A(\omega_n),z_n-y_n\rangle+\Big(1-\frac{\lambda_n}{\tau_n}\Big)\frac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle,$$

    that is to say,

    $$\Big[1-\Big(1-\frac{\lambda_n}{\tau_n}\Big)\Big]\frac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle\ge\|\omega_n-y_n\|^2+\|y_n-z_n\|^2-\|\omega_n-z_n\|^2+\frac{2\xi}{\tau_n}\langle A(y_n)-A(\omega_n),z_n-y_n\rangle. \quad (3.6)$$

    Multiplying both sides of (3.6) by $\frac{\tau_n}{\lambda_n}$, we have

    $$\frac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle\ge\frac{\tau_n}{\lambda_n}\|\omega_n-y_n\|^2+\frac{\tau_n}{\lambda_n}\|y_n-z_n\|^2-\frac{\tau_n}{\lambda_n}\|\omega_n-z_n\|^2+\frac{2\xi}{\lambda_n}\langle A(y_n)-A(\omega_n),z_n-y_n\rangle. \quad (3.7)$$

    It follows from the definition of $\{\lambda_n\}$ that

    $$\langle A(\omega_n)-A(y_n),z_n-y_n\rangle\le\frac{\lambda_{n+1}}{\mu}\|\omega_n-y_n\|^2+\frac{\lambda_{n+1}}{\mu}\|z_n-y_n\|^2. \quad (3.8)$$

    In fact, if $\langle A(\omega_n)-A(y_n),z_n-y_n\rangle\le 0$, then the inequality (3.8) holds trivially. Otherwise, from (3.1), it follows that

    $$\lambda_{n+1}=\max\Big\{\frac{\mu\langle A(\omega_n)-A(y_n),z_n-y_n\rangle}{\|\omega_n-y_n\|^2+\|z_n-y_n\|^2},\,\lambda_n\Big\}\ge\frac{\mu\langle A(\omega_n)-A(y_n),z_n-y_n\rangle}{\|\omega_n-y_n\|^2+\|z_n-y_n\|^2},$$

    which implies that (3.8) holds. In addition, combining (3.7) and (3.8), we have

    $$\begin{aligned}\frac{2\xi}{\lambda_n}\langle A(y_n),z_n-y_n\rangle&\ge\frac{\tau_n}{\lambda_n}\|\omega_n-y_n\|^2+\frac{\tau_n}{\lambda_n}\|y_n-z_n\|^2-\frac{\tau_n}{\lambda_n}\|\omega_n-z_n\|^2-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\|\omega_n-y_n\|^2-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\|z_n-y_n\|^2\\&=\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|\omega_n-y_n\|^2+\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|z_n-y_n\|^2-\frac{\tau_n}{\lambda_n}\|\omega_n-z_n\|^2. \quad (3.9)\end{aligned}$$

    Putting (3.9) into (3.3), one obtains

    $$\begin{aligned}\|z_n-p\|^2&\le\|\omega_n-p\|^2-\|\omega_n-z_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|\omega_n-y_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|z_n-y_n\|^2+\frac{\tau_n}{\lambda_n}\|\omega_n-z_n\|^2\\&=\|\omega_n-p\|^2-\Big(1-\frac{\tau_n}{\lambda_n}\Big)\|\omega_n-z_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|\omega_n-y_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|z_n-y_n\|^2.\end{aligned}$$

    The proof is completed.

    Lemma 3.3. Suppose (C1)–(C3) hold and let $\{\omega_n\}$ be the sequence generated by Algorithm 3.1. If there exists a subsequence $\{\omega_{n_k}\}\subset\{\omega_n\}$ weakly converging to $z\in H$ such that

    $$\lim_{k\to\infty}\|\omega_{n_k}-y_{n_k}\|=0,$$

    then we have $z\in VI(C,A)$.

    Proof. Since $y_{n_k}=P_C[\omega_{n_k}-\frac{\xi}{\tau_{n_k}}A(\omega_{n_k})]$, it follows from Lemma 2.1 (equivalently, from the definition of $T_{n_k}$, since $C\subset T_{n_k}$) that

    $$\Big\langle\omega_{n_k}-\frac{\xi}{\tau_{n_k}}A(\omega_{n_k})-y_{n_k},\,x-y_{n_k}\Big\rangle\le 0,\quad\forall x\in C,$$

    which is equivalent to

    $$\frac{\tau_{n_k}}{\xi}\langle\omega_{n_k}-y_{n_k},x-y_{n_k}\rangle\le\langle A(\omega_{n_k}),x-y_{n_k}\rangle,\quad\forall x\in C.$$

    After expanding the right-hand side of the above inequality, we can obtain

    $$\frac{\tau_{n_k}}{\xi}\langle\omega_{n_k}-y_{n_k},x-y_{n_k}\rangle\le\langle A(\omega_{n_k}),x-\omega_{n_k}\rangle+\langle A(\omega_{n_k}),\omega_{n_k}-y_{n_k}\rangle,\quad\forall x\in C.$$

    Following transposition, we have

    $$\frac{\tau_{n_k}}{\xi}\langle\omega_{n_k}-y_{n_k},x-y_{n_k}\rangle+\langle A(\omega_{n_k}),y_{n_k}-\omega_{n_k}\rangle\le\langle A(\omega_{n_k}),x-\omega_{n_k}\rangle,\quad\forall x\in C. \quad (3.10)$$

    Because $\{\omega_{n_k}\}$ weakly converges to $z$, it is bounded. Again, by the Lipschitz continuity of $A$, $\{A(\omega_{n_k})\}$ is bounded. Since $\|\omega_{n_k}-y_{n_k}\|\to 0$, $\{y_{n_k}\}$ is also bounded; moreover, $\{\tau_n\}$ is bounded, since $\tau_n/\lambda_n\to\eta$ and $\lambda_n\le\max\{\lambda_0,\frac{\mu L}{2}\}$.

    Letting $k\to\infty$ in (3.10), we gain

    $$\liminf_{k\to\infty}\langle A(\omega_{n_k}),x-\omega_{n_k}\rangle\ge 0,\quad\forall x\in C. \quad (3.11)$$

    In addition, we have

    $$\langle A(y_{n_k}),x-y_{n_k}\rangle=\langle A(y_{n_k})-A(\omega_{n_k}),x-\omega_{n_k}\rangle+\langle A(\omega_{n_k}),x-\omega_{n_k}\rangle+\langle A(y_{n_k}),\omega_{n_k}-y_{n_k}\rangle. \quad (3.12)$$

    From $\lim_{k\to\infty}\|\omega_{n_k}-y_{n_k}\|=0$ and the $L$-Lipschitz continuity of $A$ on $H$, one obtains

    $$\lim_{k\to\infty}\|A(\omega_{n_k})-A(y_{n_k})\|=0.$$

    Combining the above equation with (3.11) and (3.12), we have

    $$\liminf_{k\to\infty}\langle A(y_{n_k}),x-y_{n_k}\rangle\ge 0.$$

    Next, we prove that $z\in VI(C,A)$. Choose a decreasing sequence of positive numbers $\{\epsilon_k\}$ with $\epsilon_k\to 0$. For each $k$, let $N_k$ denote the smallest positive integer such that

    $$\langle A(y_{n_j}),x-y_{n_j}\rangle+\epsilon_k\ge 0,\quad\forall j\ge N_k. \quad (3.13)$$

    Since $\{\epsilon_k\}$ is decreasing, $\{N_k\}$ is obviously nondecreasing. Moreover, for each $k$, since $y_{N_k}\in C$, we may suppose $A(y_{N_k})\ne 0$ (otherwise, $y_{N_k}$ is a solution) and set

    $$v_{N_k}=\frac{A(y_{N_k})}{\|A(y_{N_k})\|^2},$$

    which implies that $\langle A(y_{N_k}),v_{N_k}\rangle=1$. Now, it follows from (3.13) that, for each $k$,

    $$\langle A(y_{N_k}),x+\epsilon_kv_{N_k}-y_{N_k}\rangle\ge 0.$$

    Since $A$ is pseudo-monotone, the above inequality yields

    $$\langle A(x+\epsilon_kv_{N_k}),x+\epsilon_kv_{N_k}-y_{N_k}\rangle\ge 0,$$

    which implies that

    $$\langle A(x),x-y_{N_k}\rangle\ge\langle A(x)-A(x+\epsilon_kv_{N_k}),x+\epsilon_kv_{N_k}-y_{N_k}\rangle-\epsilon_k\langle A(x),v_{N_k}\rangle. \quad (3.14)$$

    Next, we will prove that $\lim_{k\to\infty}\epsilon_kv_{N_k}=0$. Since $\omega_{n_k}\rightharpoonup z$ and $\lim_{k\to\infty}\|\omega_{n_k}-y_{n_k}\|=0$, we get $y_{N_k}\rightharpoonup z\ (k\to\infty)$. Since $\{y_n\}\subset C$, we have $z\in C$. As $A$ is weakly sequentially continuous on $H$, $\{A(y_{N_k})\}$ converges weakly to $A(z)$. Assume that $A(z)\ne 0$ (otherwise, $z$ is a solution). Since the norm mapping is weakly sequentially lower semicontinuous, one has

    $$0<\|A(z)\|\le\liminf_{k\to\infty}\|A(y_{N_k})\|.$$

    Combined with $\{y_{N_k}\}\subset\{y_{n_k}\}$ and $\epsilon_k\to 0\ (k\to\infty)$, we obtain

    $$0\le\limsup_{k\to\infty}\|\epsilon_kv_{N_k}\|=\limsup_{k\to\infty}\frac{\epsilon_k}{\|A(y_{N_k})\|}\le\frac{\limsup_{k\to\infty}\epsilon_k}{\liminf_{k\to\infty}\|A(y_{N_k})\|}=0,$$

    which implies $\lim_{k\to\infty}\epsilon_kv_{N_k}=0$.

    Letting $k\to\infty$ in (3.14), the right-hand side tends to 0, since $A$ is continuous, $\{y_{N_k}\}$ and $\{v_{N_k}\}$ are bounded, and $\lim_{k\to\infty}\epsilon_kv_{N_k}=0$. Therefore, we have

    $$\liminf_{k\to\infty}\langle A(x),x-y_{N_k}\rangle\ge 0.$$

    Hence, for all $x\in C$,

    $$\langle A(x),x-z\rangle=\lim_{k\to\infty}\langle A(x),x-y_{N_k}\rangle=\liminf_{k\to\infty}\langle A(x),x-y_{N_k}\rangle\ge 0.$$

    From Lemma 2.6, we obtain

    $$z\in VI(C,A).$$

    The proof is completed.

    Theorem 3.1. Suppose that (C1)–(C3) hold and that the sequence $\{\alpha_n\}$ satisfies

    $$\lim_{n\to\infty}\frac{\alpha_n}{\beta_n}\|x_n-x_{n-1}\|=0.$$

    Then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges strongly to $p\in VI(C,A)$, where $p=P_{VI(C,A)}f(p)$.

    Proof. For the convenience of the proof, we divide it into the following four steps.

    Step 1: Prove that {xn} is bounded.

    In fact, it follows from Lemma 3.2 and the definition of $\{\tau_n\}$ that there exists $N_1\in\mathbb{N}$ such that, for all $n\ge N_1$,

    $$\|z_n-p\|^2\le\|\omega_n-p\|^2-\Big(1-\frac{\tau_n}{\lambda_n}\Big)\|\omega_n-z_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|\omega_n-y_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|z_n-y_n\|^2\le\|\omega_n-p\|^2. \quad (3.15)$$

    Hence

    $$\begin{aligned}\|x_{n+1}-p\|&=\|\beta_nf(z_n)+(1-\beta_n)z_n-p\|=\|\beta_n[f(z_n)-p]+(1-\beta_n)(z_n-p)\|\\&\le\beta_n\|f(z_n)-p\|+(1-\beta_n)\|z_n-p\|\le\beta_n\|f(z_n)-f(p)\|+\beta_n\|f(p)-p\|+(1-\beta_n)\|z_n-p\|\\&\le\beta_n\rho\|z_n-p\|+\beta_n\|f(p)-p\|+(1-\beta_n)\|z_n-p\|=[1-\beta_n(1-\rho)]\|z_n-p\|+\beta_n\|f(p)-p\|\\&\le[1-\beta_n(1-\rho)]\|\omega_n-p\|+\beta_n\|f(p)-p\|. \quad (3.16)\end{aligned}$$

    On the other hand,

    $$\|\omega_n-p\|=\|x_n+\alpha_n(x_n-x_{n-1})-p\|\le\|x_n-p\|+\alpha_n\|x_n-x_{n-1}\|=\|x_n-p\|+\beta_n\cdot\frac{\alpha_n}{\beta_n}\|x_n-x_{n-1}\|. \quad (3.17)$$

    Since $\lim_{n\to\infty}\frac{\alpha_n}{\beta_n}\|x_n-x_{n-1}\|=0$, there exist $N_2\in\mathbb{N}$ and a constant $M_1>0$ such that

    $$\frac{\alpha_n}{\beta_n}\|x_n-x_{n-1}\|\le M_1,\quad\forall n\ge N_2. \quad (3.18)$$

    Combining (3.17) and (3.18), we have

    $$\|\omega_n-p\|\le\|x_n-p\|+\beta_nM_1,\quad\forall n\ge N_2. \quad (3.19)$$

    Set $N=\max\{N_1,N_2\}$. Combining (3.16) and (3.19), for all $n\ge N$ we then gain

    $$\begin{aligned}\|x_{n+1}-p\|&\le[1-\beta_n(1-\rho)]\|x_n-p\|+\beta_nM_1+\beta_n\|f(p)-p\|\\&=[1-\beta_n(1-\rho)]\|x_n-p\|+\beta_n(1-\rho)\frac{\|f(p)-p\|+M_1}{1-\rho}\\&\le\max\Big\{\|x_n-p\|,\frac{\|f(p)-p\|+M_1}{1-\rho}\Big\}\le\cdots\le\max\Big\{\|x_N-p\|,\frac{\|f(p)-p\|+M_1}{1-\rho}\Big\}.\end{aligned}$$

    According to the above discussion, it can be proved that {xn} is bounded.

    Step 2: Prove that

    $$\Big(1-\frac{\tau_n}{\lambda_n}\Big)\|\omega_n-z_n\|^2+\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|\omega_n-y_n\|^2+\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|z_n-y_n\|^2\le\|x_n-p\|^2-\|x_{n+1}-p\|^2+\beta_nM_4, \quad (3.20)$$

    in which $M_4>0$ is a constant. In fact,

    $$\begin{aligned}\|x_{n+1}-p\|^2&=\|\beta_nf(z_n)+(1-\beta_n)z_n-p\|^2=\|\beta_n[f(z_n)-p]+(1-\beta_n)(z_n-p)\|^2\\&\le\beta_n\|f(z_n)-p\|^2+(1-\beta_n)\|z_n-p\|^2\le\beta_n(\|f(z_n)-f(p)\|+\|f(p)-p\|)^2+(1-\beta_n)\|z_n-p\|^2\\&\le\beta_n(\rho\|z_n-p\|+\|f(p)-p\|)^2+(1-\beta_n)\|z_n-p\|^2\le\beta_n(\|z_n-p\|+\|f(p)-p\|)^2+(1-\beta_n)\|z_n-p\|^2\\&\le\beta_n\|z_n-p\|^2+(1-\beta_n)\|z_n-p\|^2+\beta_nM_2=\|z_n-p\|^2+\beta_nM_2, \quad (3.21)\end{aligned}$$

    where $M_2=\sup_{n\in\mathbb{N}}\{2\|z_n-p\|\|f(p)-p\|+\|f(p)-p\|^2\}>0$. By substituting (3.2) into (3.21), it can be obtained that

    $$\|x_{n+1}-p\|^2\le\|\omega_n-p\|^2-\Big(1-\frac{\tau_n}{\lambda_n}\Big)\|\omega_n-z_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|\omega_n-y_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|z_n-y_n\|^2+\beta_nM_2. \quad (3.22)$$

    From (3.19), one has

    $$\|\omega_n-p\|^2\le(\|x_n-p\|+\beta_nM_1)^2=\|x_n-p\|^2+\beta_n(2M_1\|x_n-p\|+\beta_nM_1^2)\le\|x_n-p\|^2+\beta_nM_3, \quad (3.23)$$

    in which $M_3=\sup_{n\in\mathbb{N}}\{2M_1\|x_n-p\|+\beta_nM_1^2\}>0$. Substituting (3.23) into (3.22), we gain

    $$\|x_{n+1}-p\|^2\le\|x_n-p\|^2+\beta_nM_3+\beta_nM_2-\Big(1-\frac{\tau_n}{\lambda_n}\Big)\|\omega_n-z_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|\omega_n-y_n\|^2-\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|z_n-y_n\|^2,$$

    which infers that

    $$\Big(1-\frac{\tau_n}{\lambda_n}\Big)\|\omega_n-z_n\|^2+\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|\omega_n-y_n\|^2+\Big(\frac{\tau_n}{\lambda_n}-\frac{2\xi}{\mu}\frac{\lambda_{n+1}}{\lambda_n}\Big)\|z_n-y_n\|^2\le\|x_n-p\|^2-\|x_{n+1}-p\|^2+\beta_nM_4,$$

    with $M_4=M_2+M_3$.

    Step 3: Prove that there exists some constant $M>0$ such that

    $$\|x_{n+1}-p\|^2\le[1-\beta_n(1-\rho)]\|x_n-p\|^2+\beta_n(1-\rho)\Big[\frac{2}{1-\rho}\langle f(p)-p,x_{n+1}-p\rangle+\frac{2M}{1-\rho}\cdot\frac{\alpha_n}{\beta_n}\|x_n-x_{n-1}\|\Big].$$

    In fact, applying (2.1) and the definition of $\omega_n$, we have

    $$\|\omega_n-p\|^2=\|x_n+\alpha_n(x_n-x_{n-1})-p\|^2\le\|x_n-p\|^2+2\alpha_n\langle x_n-x_{n-1},\omega_n-p\rangle\le\|x_n-p\|^2+2\alpha_n\|x_n-x_{n-1}\|\|\omega_n-p\|. \quad (3.24)$$

    Combined with (2.1), (2.2), and (3.15), we obtain

    $$\begin{aligned}\|x_{n+1}-p\|^2&=\|\beta_nf(z_n)+(1-\beta_n)z_n-p\|^2=\|\beta_n[f(z_n)-f(p)]+(1-\beta_n)(z_n-p)+\beta_n[f(p)-p]\|^2\\&\le\|\beta_n[f(z_n)-f(p)]+(1-\beta_n)(z_n-p)\|^2+2\beta_n\langle f(p)-p,x_{n+1}-p\rangle\\&\le\beta_n\|f(z_n)-f(p)\|^2+(1-\beta_n)\|z_n-p\|^2+2\beta_n\langle f(p)-p,x_{n+1}-p\rangle\\&\le\beta_n\rho^2\|z_n-p\|^2+(1-\beta_n)\|z_n-p\|^2+2\beta_n\langle f(p)-p,x_{n+1}-p\rangle\\&\le[1-\beta_n(1-\rho)]\|z_n-p\|^2+2\beta_n\langle f(p)-p,x_{n+1}-p\rangle\\&\le[1-\beta_n(1-\rho)]\|\omega_n-p\|^2+2\beta_n\langle f(p)-p,x_{n+1}-p\rangle. \quad (3.25)\end{aligned}$$

    Putting (3.24) into (3.25), one has

    $$\begin{aligned}\|x_{n+1}-p\|^2&\le[1-\beta_n(1-\rho)]\|x_n-p\|^2+2\alpha_n\|x_n-x_{n-1}\|\|\omega_n-p\|+2\beta_n\langle f(p)-p,x_{n+1}-p\rangle\\&\le[1-\beta_n(1-\rho)]\|x_n-p\|^2+\beta_n(1-\rho)\frac{2}{1-\rho}\langle f(p)-p,x_{n+1}-p\rangle+2M\alpha_n\|x_n-x_{n-1}\|\\&=[1-\beta_n(1-\rho)]\|x_n-p\|^2+\beta_n(1-\rho)\Big[\frac{2}{1-\rho}\langle f(p)-p,x_{n+1}-p\rangle+\frac{2M}{1-\rho}\cdot\frac{\alpha_n}{\beta_n}\|x_n-x_{n-1}\|\Big],\end{aligned}$$

    where $M=\sup_{n\in\mathbb{N}}\{\|\omega_n-p\|\}>0$.

    Step 4: Prove that $\{\|x_n-p\|^2\}$ converges to 0.

    In fact, according to Lemma 2.4, it suffices to show that every subsequence $\{\|x_{n_k}-p\|\}$ of $\{\|x_n-p\|\}$ satisfying $\liminf_{k\to\infty}(\|x_{n_k+1}-p\|-\|x_{n_k}-p\|)\ge 0$ also satisfies

    $$\limsup_{k\to\infty}\langle f(p)-p,x_{n_k+1}-p\rangle\le 0.$$

    Thus, assume that $\{\|x_{n_k}-p\|\}$ is a subsequence of $\{\|x_n-p\|\}$ satisfying $\liminf_{k\to\infty}(\|x_{n_k+1}-p\|-\|x_{n_k}-p\|)\ge 0$; then

    $$\liminf_{k\to\infty}(\|x_{n_k+1}-p\|^2-\|x_{n_k}-p\|^2)=\liminf_{k\to\infty}[(\|x_{n_k+1}-p\|-\|x_{n_k}-p\|)(\|x_{n_k+1}-p\|+\|x_{n_k}-p\|)]\ge 0.$$

    According to Step 2, we have

    $$\begin{aligned}&\limsup_{k\to\infty}\Big[\Big(1-\frac{\tau_{n_k}}{\lambda_{n_k}}\Big)\|\omega_{n_k}-z_{n_k}\|^2+\Big(\frac{\tau_{n_k}}{\lambda_{n_k}}-\frac{2\xi}{\mu}\frac{\lambda_{n_k+1}}{\lambda_{n_k}}\Big)\|\omega_{n_k}-y_{n_k}\|^2+\Big(\frac{\tau_{n_k}}{\lambda_{n_k}}-\frac{2\xi}{\mu}\frac{\lambda_{n_k+1}}{\lambda_{n_k}}\Big)\|z_{n_k}-y_{n_k}\|^2\Big]\\&\le\limsup_{k\to\infty}[\|x_{n_k}-p\|^2-\|x_{n_k+1}-p\|^2+\beta_{n_k}M_4]\le\limsup_{k\to\infty}[\|x_{n_k}-p\|^2-\|x_{n_k+1}-p\|^2]+\limsup_{k\to\infty}\beta_{n_k}M_4\\&=-\liminf_{k\to\infty}[\|x_{n_k+1}-p\|^2-\|x_{n_k}-p\|^2]\le 0.\end{aligned}$$

    As the coefficients of $\|\omega_{n_k}-z_{n_k}\|^2$, $\|\omega_{n_k}-y_{n_k}\|^2$, and $\|z_{n_k}-y_{n_k}\|^2$ are all eventually positive, hence

    $$\lim_{k\to\infty}\|\omega_{n_k}-z_{n_k}\|=0,\qquad\lim_{k\to\infty}\|\omega_{n_k}-y_{n_k}\|=0,\qquad\lim_{k\to\infty}\|z_{n_k}-y_{n_k}\|=0. \quad (3.26)$$

    Next, we will prove that

    $$\|x_{n_k+1}-x_{n_k}\|\to 0\quad(k\to\infty).$$

    As $k\to\infty$, we have

    $$\|x_{n_k+1}-z_{n_k}\|=\beta_{n_k}\|z_{n_k}-f(z_{n_k})\|\to 0, \quad (3.27)$$

    and

    $$\|x_{n_k}-\omega_{n_k}\|=\alpha_{n_k}\|x_{n_k}-x_{n_k-1}\|=\beta_{n_k}\cdot\frac{\alpha_{n_k}}{\beta_{n_k}}\|x_{n_k}-x_{n_k-1}\|\to 0. \quad (3.28)$$

    From (3.26)–(3.28), one gets

    $$\|x_{n_k+1}-x_{n_k}\|\le\|x_{n_k+1}-z_{n_k}\|+\|z_{n_k}-\omega_{n_k}\|+\|\omega_{n_k}-x_{n_k}\|\to 0\quad(k\to\infty).$$

    Since $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_j}}\}\subset\{x_{n_k}\}$ weakly converging to $z\in H$, whereby

    $$\limsup_{k\to\infty}\langle f(p)-p,x_{n_k}-p\rangle=\lim_{j\to\infty}\langle f(p)-p,x_{n_{k_j}}-p\rangle=\langle f(p)-p,z-p\rangle. \quad (3.29)$$

    Applying (3.28), $\omega_{n_{k_j}}\rightharpoonup z\ (j\to\infty)$. Combining (3.26) and Lemma 3.3, we have $z\in VI(C,A)$. From (3.29), $p=P_{VI(C,A)}f(p)$, and Lemma 2.1, we gain

    $$\limsup_{k\to\infty}\langle f(p)-p,x_{n_k}-p\rangle=\langle f(p)-p,z-p\rangle\le 0. \quad (3.30)$$

    Then

    $$\limsup_{k\to\infty}\langle f(p)-p,x_{n_k+1}-p\rangle\le\limsup_{k\to\infty}\langle f(p)-p,x_{n_k+1}-x_{n_k}\rangle+\limsup_{k\to\infty}\langle f(p)-p,x_{n_k}-p\rangle\le 0. \quad (3.31)$$

    Hence it follows from Lemma 2.4, the inequality in Step 3, and (3.31) that $\lim_{n\to\infty}\|x_n-p\|=0$. The proof is completed.

    Remark 3.1. The algorithm combines the subgradient extragradient method, the inertial technique, and the viscosity method. Under appropriate conditions, different parameters are introduced to improve the algorithm, and the selection of $\{\lambda_n\}$ and $\{\tau_n\}$ in the existing algorithms is changed to the reciprocal form to avoid excessive iteration and control convergence. The second class of algorithms is given next, and its convergence is proved.

    Algorithm 4.1. Golden section algorithm for pseudo-monotone variational inequalities.

    Step 0: Given $p_0,p_1\in C$, $\lambda_0>0$, $\bar\lambda>0$, and $\phi\in(1,\varphi]$, where $\varphi=\frac{\sqrt{5}+1}{2}$ is the golden section ratio constant.

    Let $\bar p_0=p_1$, $\theta_0=1$, $\rho=\frac{1}{\phi}+\frac{1}{\phi^2}$, $k=1$.

    Step 1: Compute $x_k=A(p_k)$, $x_{k-1}=A(p_{k-1})$, and

    $$\lambda_k=\min\Big\{\rho\lambda_{k-1},\ \frac{\phi\theta_{k-1}}{4\lambda_{k-1}}\frac{\|p_k-p_{k-1}\|^2}{\|x_k-x_{k-1}\|^2},\ \bar\lambda\Big\}. \quad (4.1)$$

    Step 2: Compute

    $$\bar p_k=\frac{(\phi-1)p_k+\bar p_{k-1}}{\phi},\qquad p_{k+1}=P_C(\bar p_k-\lambda_kx_k).$$

    Step 3: Compute

    $$\theta_k=\frac{\lambda_k}{\lambda_{k-1}}\phi.$$

    Set k:=k+1, and go to Step 1.
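    A compact numerical sketch of Steps 0–3 follows; the defaults for $\lambda_0$, $\bar\lambda$, the iteration count, and the toy operator $A(x)=Mx$ on $C=\mathbb{R}^2_+$ (whose solution is $p^*=0$) are illustrative assumptions of ours.

```python
import numpy as np

def golden_ratio_alg(A, proj_C, p0, p1, lam0=1.0, lam_bar=10.0,
                     phi=(np.sqrt(5) + 1) / 2, n_iter=3000):
    """Sketch of Algorithm 4.1 with the adaptive step-size rule (4.1)."""
    rho = 1 / phi + 1 / phi**2              # rho = 1 when phi is the golden ratio
    p_prev, p = np.asarray(p0, float), np.asarray(p1, float)
    p_bar = p.copy()                        # bar{p}_0 = p_1
    x_prev = A(p_prev)
    lam_prev, theta = lam0, 1.0             # theta_0 = 1
    for _ in range(n_iter):
        x = A(p)                            # Step 1
        d2 = np.linalg.norm(x - x_prev)**2
        cand = (phi * theta * np.linalg.norm(p - p_prev)**2 / (4 * lam_prev * d2)
                if d2 > 0 else lam_bar)
        lam = min(rho * lam_prev, cand, lam_bar)    # rule (4.1)
        p_bar = ((phi - 1) * p + p_bar) / phi       # Step 2 (golden-ratio average)
        p_next = proj_C(p_bar - lam * x)
        theta = phi * lam / lam_prev                # Step 3
        p_prev, p, x_prev, lam_prev = p, p_next, x, lam
    return p

# Toy monotone problem: A(x) = M x on C = R^2_+, with solution p* = 0.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
p_out = golden_ratio_alg(lambda u: M @ u, lambda u: np.maximum(u, 0.0),
                         np.array([2.0, 1.0]), np.array([1.0, 2.0]))
```

    Note that only one value of $A$ is evaluated and one projection is computed per iteration, and no Lipschitz constant enters the update.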

    Remark 4.1. Two key inequalities can be obtained from (4.1). First, we have $\lambda_k\le(\frac{1}{\phi}+\frac{1}{\phi^2})\lambda_{k-1}$, which means $\theta_k-1-\frac{1}{\phi}\le 0$. Second, combining $\frac{\phi\theta_{k-1}}{\lambda_{k-1}}=\frac{\theta_k\theta_{k-1}}{\lambda_k}$ and (4.1), we can gain

    $$\lambda_k\le\frac{\phi\theta_{k-1}}{4\lambda_{k-1}}\frac{\|p_k-p_{k-1}\|^2}{\|x_k-x_{k-1}\|^2}.$$

    Then we have

    $$\lambda_k^2\|x_k-x_{k-1}\|^2\le\frac{\theta_k\theta_{k-1}}{4}\|p_k-p_{k-1}\|^2. \quad (4.2)$$

    Lemma 4.1. Suppose that the mapping $A:H\to H$ is $L$-Lipschitz continuous. If the sequence $\{p_k\}$ generated by Algorithm 4.1 is bounded, then both $\{\lambda_k\}$ and $\{\theta_k\}$ are bounded and bounded away from 0.

    Proof. Obviously, $0<\lambda_k\le\bar\lambda$, so $\{\lambda_k\}$ is bounded. Next, mathematical induction will be utilized to prove that $\{\lambda_k\}$ is bounded away from 0.

    Since $\{p_k\}$ is bounded, there exists $L>0$ such that

    $$\|x_k-x_{k-1}\|=\|A(p_k)-A(p_{k-1})\|\le L\|p_k-p_{k-1}\|.$$

    Take $L$ large enough that $\lambda_i\ge\frac{\phi^2}{4L^2\bar\lambda}$ for $i=0,1$. Suppose that $\lambda_i\ge\frac{\phi^2}{4L^2\bar\lambda}$ holds for all $i=0,1,\dots,k-1$. If the minimum in (4.1) is attained at the first or third term, then, since $\rho\ge 1$,

    $$\lambda_k\ge\min\{\lambda_{k-1},\bar\lambda\}\ge\frac{\phi^2}{4L^2\bar\lambda};$$

    otherwise, using $\theta_{k-1}=\frac{\phi\lambda_{k-1}}{\lambda_{k-2}}$,

    $$\lambda_k=\frac{\phi^2}{4\lambda_{k-2}}\frac{\|p_k-p_{k-1}\|^2}{\|x_k-x_{k-1}\|^2}\ge\frac{\phi^2}{4\lambda_{k-2}L^2}\ge\frac{\phi^2}{4L^2\bar\lambda}.$$

    So regardless of which expression defines $\lambda_k$, we have $\lambda_k\ge\frac{\phi^2}{4L^2\bar\lambda}$, and hence $\{\lambda_k\}$ is bounded away from 0. Since $\theta_k=\frac{\lambda_k}{\lambda_{k-1}}\phi$, by a similar argument it is easy to see that $\{\theta_k\}$ is also bounded and bounded away from 0.

    To simplify the proof that follows, we define $\Psi(u,v)=\langle u',v-u\rangle$, where $u'=A(u)$.

    Theorem 4.1. Suppose that $A:H\to H$ is pseudo-monotone and $L$-Lipschitz continuous. Then, for every $p_1,\bar p_0\in C$ and $\lambda_0\in(0,\frac{\varphi}{2L}]$, the sequences $\{p_k\}$ and $\{\bar p_k\}$ generated by Algorithm 4.1 converge to an element of $VI(C,A)$.

    Proof. Combining $p_{k+1}=P_C(\bar p_k-\lambda_kx_k)$ and Lemma 2.1, we have

    $$\langle p_{k+1}-\bar p_k+\lambda_kx_k,\,z-p_{k+1}\rangle\ge 0,\quad\forall z\in C. \quad (4.3)$$

    Replacing $k$ by $k-1$ in (4.3) and taking $z=p_{k+1}$, one obtains

    $$\langle p_k-\bar p_{k-1}+\lambda_{k-1}x_{k-1},\,p_{k+1}-p_k\rangle\ge 0. \quad (4.4)$$

    Multiplying both sides of (4.4) by $\frac{\lambda_k}{\lambda_{k-1}}$ and utilizing the identity $\frac{\lambda_k}{\lambda_{k-1}}(p_k-\bar p_{k-1})=\theta_k(p_k-\bar p_k)$, it follows that

    $$\langle\theta_k(p_k-\bar p_k)+\lambda_kx_{k-1},\,p_{k+1}-p_k\rangle\ge 0. \quad (4.5)$$

    Combining (4.3) and (4.5), we obtain

    $$\langle p_{k+1}-\bar p_k,z-p_{k+1}\rangle+\theta_k\langle p_k-\bar p_k,p_{k+1}-p_k\rangle+\lambda_k\langle x_k-x_{k-1},p_k-p_{k+1}\rangle\ge\lambda_k\langle x_k,p_k-z\rangle\ge\lambda_k\langle z',p_k-z\rangle=\lambda_k\Psi(z,p_k). \quad (4.6)$$

    Writing the first two terms of (4.6) in the form of norms, we can obtain

    $$\|p_{k+1}-z\|^2\le\|\bar p_k-z\|^2-\|p_{k+1}-\bar p_k\|^2+2\lambda_k\langle x_k-x_{k-1},p_k-p_{k+1}\rangle+\theta_k(\|p_{k+1}-\bar p_k\|^2-\|p_{k+1}-p_k\|^2-\|p_k-\bar p_k\|^2)-2\lambda_k\Psi(z,p_k). \quad (4.7)$$

    According to $\bar p_{k+1}=\frac{(\phi-1)p_{k+1}+\bar p_k}{\phi}$, we have

    $$\|p_{k+1}-z\|^2=\frac{\phi}{\phi-1}\|\bar p_{k+1}-z\|^2-\frac{1}{\phi-1}\|\bar p_k-z\|^2+\frac{1}{\phi}\|p_{k+1}-\bar p_k\|^2.$$

    Combining the above identity with (4.7), one has

    $$\frac{\phi}{\phi-1}\|\bar p_{k+1}-z\|^2\le\frac{\phi}{\phi-1}\|\bar p_k-z\|^2+\Big(\theta_k-1-\frac{1}{\phi}\Big)\|p_{k+1}-\bar p_k\|^2-2\lambda_k\Psi(z,p_k)-\theta_k(\|p_{k+1}-p_k\|^2+\|p_k-\bar p_k\|^2)+2\lambda_k\langle x_k-x_{k-1},p_k-p_{k+1}\rangle. \quad (4.8)$$

    Since $\theta_k\le 1+\frac{1}{\phi}$, it follows from (4.2) that the rightmost term in (4.8) can be bounded by

    $$2\lambda_k\langle x_k-x_{k-1},p_k-p_{k+1}\rangle\le 2\lambda_k\|x_k-x_{k-1}\|\|p_k-p_{k+1}\|\le\sqrt{\theta_k\theta_{k-1}}\|p_k-p_{k-1}\|\|p_k-p_{k+1}\|\le\frac{\theta_k}{2}\|p_{k+1}-p_k\|^2+\frac{\theta_{k-1}}{2}\|p_k-p_{k-1}\|^2.$$

    Applying this bound in (4.8) and using $\theta_k-1-\frac{1}{\phi}\le 0$, it follows that

    $$\frac{\phi}{\phi-1}\|\bar p_{k+1}-z\|^2+\frac{\theta_k}{2}\|p_{k+1}-p_k\|^2+2\lambda_k\Psi(z,p_k)\le\frac{\phi}{\phi-1}\|\bar p_k-z\|^2+\frac{\theta_{k-1}}{2}\|p_k-p_{k-1}\|^2-\theta_k\|p_k-\bar p_k\|^2. \quad (4.9)$$

    Summing (4.9) from $i=2$ to $k$, we have

    $$\frac{\phi}{\phi-1}\|\bar p_{k+1}-z\|^2+\frac{\theta_k}{2}\|p_{k+1}-p_k\|^2+\sum_{i=2}^{k}\theta_i\|p_i-\bar p_i\|^2+2\sum_{i=2}^{k}\lambda_i\Psi(z,p_i)\le\frac{\phi}{\phi-1}\|\bar p_2-z\|^2+\frac{\theta_1}{2}\|p_2-p_1\|^2. \quad (4.10)$$

    Set $z=p^*\in VI(C,A)$; then $\Psi(p^*,p_i)=\langle A(p^*),p_i-p^*\rangle\ge 0$, so the last term on the left in (4.10) is nonnegative. It follows that $\{\bar p_k\}$ and $\{p_k\}$ are bounded and $\theta_k\|p_k-\bar p_k\|^2\to 0$. According to the proof of Lemma 4.1, we have $\lambda_k\ge\frac{\phi^2}{4L^2\bar\lambda}$ and $\{\theta_k\}$ is bounded away from 0. Hence, $\lim_{k\to+\infty}\|p_k-\bar p_k\|=0$ holds, which means that $p_k-\bar p_{k-1}=\phi(p_k-\bar p_k)\to 0$ and, since $\|p_{k+1}-\bar p_k\|=\phi\|p_{k+1}-\bar p_{k+1}\|\to 0$, also $p_{k+1}-p_k\to 0$.

    Next, we will prove that every cluster point of {pk} and {¯pk} belongs to VI(C,A).

    Let $\{k_i\}$ be a subsequence such that $p_{k_i}\rightharpoonup\tilde p$ and $\lambda_{k_i}\to\lambda>0\ (i\to+\infty)$. Obviously, both $p_{k_i+1}\rightharpoonup\tilde p$ and $\bar p_{k_i}\rightharpoonup\tilde p$ hold. Substituting $k=k_i$ into (4.3) and letting $i$ increase to infinity, we have

    $$\lambda\langle\tilde x,z-\tilde p\rangle\ge 0,\quad\tilde x=A(\tilde p),\quad\forall z\in C.$$

    Hence, $\tilde p\in VI(C,A)$. According to (4.9), the sequence $\frac{\phi}{\phi-1}\|\bar p_k-z\|^2+\frac{\theta_{k-1}}{2}\|p_k-p_{k-1}\|^2$ is nonincreasing, which, together with $\|p_k-p_{k-1}\|\to 0$, implies that $\lim_{k\to+\infty}\|\bar p_k-z\|$ exists. Since the cluster point $\tilde p\in VI(C,A)$ is arbitrary, it follows from Lemma 2.9 that the sequences $\{p_k\}$ and $\{\bar p_k\}$ converge to a certain element of $VI(C,A)$. The proof is completed.

    In order to evaluate the performance of the proposed algorithms, we present numerical experiments on the variational inequality problem. In this section, we provide an example comparing Algorithms 3.1 and 4.1 with the modified projection and contraction algorithm [28]; see Figure 1.

    Figure 1.  Comparison of Algorithm 3.1, Algorithm 4.1, and modified projection and contraction algorithm for Example 5.1.

    Example 5.1. Let $f:\mathbb{R}^n\to\mathbb{R}^n$ be defined by $f(x)=Ax+b$, where $A=Z^TZ$, $Z=(z_{ij})_{n\times n}$, and $b=(b_i)\in\mathbb{R}^n$, with $z_{ij}\in[1,100]$ and $b_i\in[-100,0]$ generated randomly.

    Under the given parameters, we observe that Algorithms 3.1 and 4.1 converge faster than the modified PCA.
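    The problem data of Example 5.1 can be generated as follows; the dimension $n$ and the random seed are illustrative choices of ours, not values stated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)          # illustrative seed
n = 50                                  # illustrative dimension
Z = rng.uniform(1, 100, size=(n, n))    # z_ij in [1, 100]
A = Z.T @ Z                             # A = Z^T Z is symmetric positive semidefinite,
                                        # so f(x) = Ax + b is monotone (hence pseudo-monotone)
b = rng.uniform(-100, 0, size=n)        # b_i in [-100, 0]
f = lambda x: A @ x + b
L = np.linalg.norm(A, 2)                # Lipschitz constant of f (spectral norm of A)
```

    Since $A=Z^TZ$ is positive semidefinite, the resulting operator satisfies Assumption 3.1 (C2), so both algorithms apply to this problem.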

    Remark 5.1. We first considered a comparison with the stochastic projection and contraction algorithm in [29] and the fast alternated inertial projection algorithms in [30]. However, the variational inequality settings of those two works differ substantially from ours, so we chose [28], whose conditions and conclusions are closer to this paper's, for the comparison.

    In this paper, the approximation problem has been investigated for pseudo-monotone variational inequalities. By taking advantage of the reciprocal form of parameters and the golden section ratio, two improved extrapolated gradient algorithms have been proposed, and the strong convergence of the two algorithms for solving pseudo-monotone variational inequality problems is achieved by extending the existing results of Thong [17] and Malitsky [21]. Finally, one numerical example is provided to illustrate the effectiveness of the proposed results.

    Haoran Tang: Formal analysis, Writing editing, Visualization, Data curation; Weiqiang Gong: Funding acquisition, Visualization, Conceptualization, Supervision. All the authors have agreed and given their consent for the publication of this research paper.

    The authors are supported by the National Natural Science Foundation of China (Program No. 79970017) and the National Natural Science Foundation of China (Program No. 61906084).

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare there is no conflict of interest.



    [1] P. N. Anh, Relaxed projection methods for solving variational inequality problems, J. Glob. Optim., 90 (2024), 909–930. https://doi.org/10.1007/s10898-024-01398-w doi: 10.1007/s10898-024-01398-w
    [2] W. Singh, S. Chandok, Mann-type extragradient algorithm for solving variational inequality and fixed point problems, Comp. Appl. Math., 43 (2024), 259. https://doi.org/10.1007/s40314-024-02785-5 doi: 10.1007/s40314-024-02785-5
    [3] S. F. Liu, W. Wang, Y. Jia, H. B. Bian, W. Q. Shen, Modeling of hydro-mechanical coupled fracture propagation in quasi-brittle rocks using a variational phase-field method, Rock. Mech. Rock Eng., 57 (2024), 7079–7101. https://doi.org/10.1007/s00603-024-03896-5 doi: 10.1007/s00603-024-03896-5
    [4] X. Zhang, D. Hou, Q. Mao, Z. Wang, Predicting microseismic sensitive feature data using variational mode decomposition and transformer, J. Seismol., 28 (2024), 229–250. https://doi.org/10.1007/s10950-024-10193-9 doi: 10.1007/s10950-024-10193-9
    [5] G. Arenas-Henriquez, A. Cisterna, F. Diaz, R. Gregory, Accelerating black holes in 2 + 1 dimensions: holography revisited, J. High Energ. Phys., 2023 (2023), 122. https://doi.org/10.1007/JHEP09(2023)122 doi: 10.1007/JHEP09(2023)122
    [6] J. E. Brasil, E. R. Oliveira, R. R. Souza, Thermodynamic formalism for general iterated function systems with measures, Qual. Theory Dyn. Syst., 22 (2023), 19. https://doi.org/10.1007/s12346-022-00722-7 doi: 10.1007/s12346-022-00722-7
    [7] J. Jeon, G. Kim, Variational inequality arising from variable annuity with mean reversion environment, J. Inequal. Appl., 2023 (2023), 99–119. https://doi.org/10.1186/s13660-023-03015-y doi: 10.1186/s13660-023-03015-y
    [8] G. Valencia-Ortega, A. M. A. de Parga-Regalado, M. A. Barranco-Jiménez, On thermo-economic optimization effects in the balance price-demand of generation electricity for nuclear and fossil fuels power plants, Energy Syst., 14 (2023), 1163–1184. https://doi.org/10.1007/s12667-022-00537-0 doi: 10.1007/s12667-022-00537-0
    [9] A. Barbagallo, S. Guarino Lo Bianco, A random time-dependent noncooperative equilibrium problem, Comput. Optim. Appl., 84 (2023), 27–52. https://doi.org/10.1007/s10589-022-00368-w doi: 10.1007/s10589-022-00368-w
    [10] M. B. Donato, M. Milasi, A. Villanacci, Restricted participation on financial markets: a general equilibrium approach using variational inequality methods, Netw. Spat. Econ., 22 (2022), 327–359. https://doi.org/10.1007/s11067-019-09491-4 doi: 10.1007/s11067-019-09491-4
    [11] Y. Censor, A. Gibali, S. Reich, Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space, Optimization, 61 (2012), 1119–1132. https://doi.org/10.1080/02331934.2010.539689 doi: 10.1080/02331934.2010.539689
    [12] S. V. Denisov, V. V. Semenov, L. M. Chabak, Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators, Cybern. Syst. Anal., 51 (2015), 757–765. https://doi.org/10.1007/s10559-015-9768-z doi: 10.1007/s10559-015-9768-z
    [13] F. Facchinei, J. S. Pang, Finite-dimensional varitional inequalities and complementarity problems, In: Springer series in operations research and financial engineering, New York: Springer, 2003. https://doi.org/10.1007/b97543
    [14] M. C. Ferris, J. S. Pang, Engineering and economic applications of complementarity problems, SIAM Rev., 39 (1997), 669–713. https://doi.org/10.1137/S0036144595285963 doi: 10.1137/S0036144595285963
    [15] G. M. Korpelevich, The extragradient method for finding saddle points and other problem, Matecon, 12 (1976), 747–756.
    [16] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431–446. https://doi.org/10.1137/S0363012998338806 doi: 10.1137/S0363012998338806
    [17] D. V. Thong, D. V. Hieu, Weak and strong convergence theorems for variational inequality problems, Numer. Algor., 78 (2018), 1045–1060. https://doi.org/10.1007/S11075-017-0412-z doi: 10.1007/S11075-017-0412-z
    [18] S. Yekini, S. I. Olaniyi, Strong convergence result for monotone variational inequalities, Numer. Algor., 76 (2017), 259–282. https://doi.org/10.1007/s11075-016-0253-1 doi: 10.1007/s11075-016-0253-1
    [19] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert spaces, J. Optim. Theory. Appl., 148 (2011), 318–335. https://doi.org/10.1007/s10957-010-9757-3 doi: 10.1007/s10957-010-9757-3
    [20] J. Yang, H. Liu, Strong convergence result for solving monotone variational inequalities in Hilbert space, Numer. Algor., 80 (2019), 741–752. https://doi.org/10.1007/s11075-018-0504-4 doi: 10.1007/s11075-018-0504-4
    [21] Y. Malitsky, Golden ratio algorithms for variational inequalities, Math. Program., 184 (2020), 383–410. https://doi.org/10.1007/s10107-019-01416-w doi: 10.1007/s10107-019-01416-w
    [22] K. Goebel, S. Reich, Uniform convexity, hyperbolic geometry, and nonexpansive mappings, New York: Marcel Dekker, 1984.
    [23] S. Seajung, P. Yotkaew, Approximation of zeros of inverse strongly monotone operators in Banach spaces, Nonlinear Anal.-Theory, Methods Appl., 75 (2012), 742–750. https://doi.org/10.1016/j.na.2011.09.005 doi: 10.1016/j.na.2011.09.005
    [24] J. Mashreghi, M. Nasri, Forcing strong convergence of Korpelevich's method in Banach spaces with its applications in game theory, Nonlinear Anal.-Theory, Methods Appl., 72 (2010), 2086–2099. https://doi.org/10.1016/j.na.2009.10.009 doi: 10.1016/j.na.2009.10.009
    [25] R. W. Cottle, J. C. Yao, Pseudo-monotone complementarity problems in Hilbert space, J. Optim. Theory Appl., 75 (1992), 281–295. https://doi.org/10.1007/BF00941468 doi: 10.1007/BF00941468
    [26] P. E. Mainge, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal., 16 (2008), 899–912. https://doi.org/10.1007/s11228-008-0102-z doi: 10.1007/s11228-008-0102-z
    [27] H. K. Xu, Iterative algorithms for nonlinear operators, J. Lond. Math. Soc., 66 (2002), 240–256. https://doi.org/10.1112/S0024610702003332 doi: 10.1112/S0024610702003332
    [28] Q. L. Dong, Y. J. Cho, L. L. Zhong, T. M. Rassias, Inertial projection and contraction algorithms for variational inequalities, J. Glob. Optim., 70 (2018), 687–704. https://doi.org/10.1007/s10898-017-0506-0 doi: 10.1007/s10898-017-0506-0
    [29] L. Liu, X. Qin, A stochastic projection and contraction algorithm with inertial effects for stochastic variational inequalities, J. Nonlinear Var. Anal., 7 (2023), 995–1016. https://doi.org/10.23952/jnva.7.2023.6.08 doi: 10.23952/jnva.7.2023.6.08
    [30] Y. Shehu, Q. L. Dong, L. Liu, Fast alternated inertial projection algorithms for pseudo-monotone variational inequalities, J. Comput. Appl. Math., 415 (2022), 114517. https://doi.org/10.1016/j.cam.2022.114517 doi: 10.1016/j.cam.2022.114517
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)