
Self adaptive alternated inertial algorithm for solving variational inequality and fixed point problems

  • We introduce an alternated inertial subgradient extragradient algorithm for non-Lipschitz and pseudo-monotone operators to solve variational inequality and fixed point problems. We prove that, under suitable conditions, the sequence produced by our algorithm converges weakly. Moreover, numerical experiments comparing our algorithm with earlier algorithms demonstrate its effectiveness.

    Citation: Yuanheng Wang, Chenjing Wu, Yekini Shehu, Bin Huang. Self adaptive alternated inertial algorithm for solving variational inequality and fixed point problems[J]. AIMS Mathematics, 2024, 9(4): 9705-9720. doi: 10.3934/math.2024475




    In a real Hilbert space $H$ with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$, and with $D$ a nonempty closed convex subset of $H$, the classical variational inequality problem (VIP) is to find a point $x^*\in D$ such that $\langle Ax^*,\, y - x^*\rangle \ge 0$ holds for all $y\in D$, where $A: H\to H$ is an operator; we denote the solution set of the VIP by $\Omega$. Stampacchia [1] introduced variational inequality theory in 1964, and it has since appeared in models for a wide range of problems in engineering, regional science, physics, and mathematics. The theory was first applied to equilibrium problems, in which the operator arises as the first-order variation of the corresponding potential energy. As the classical variational problem has been generalized and developed, variational inequalities have taken increasingly diverse forms, and many projection algorithms have been studied [2,3,4,5,6,7,8,9,10]. In [11], Hu and Wang used a projected neural network (PNN) to solve the VIP under pseudo-monotonicity or pseudoconvexity assumptions. Furthermore, He et al. [12] proposed an inertial PNN for solving the VIP, while Eshaghnezhad et al. [13] presented another novel PNN for the same task. In addition, in [14], a modified neurodynamic network (MNN) was proposed for solving the VIP, and its fixed-time stability was established under the assumptions of strong pseudo-monotonicity and L-continuity.

    The most famous method for solving the VIP is called the projection gradient method (GM), which is expressed as

    $$x_{n+1} = P_D(x_n - \gamma Ax_n). \quad (1.1)$$

    Provided that $A$ is strongly monotone and Lipschitz continuous, the iterative sequence $\{x_n\}$ produced by this method converges to a solution of the VIP; here $P_D: H\to D$ is the metric projection and $\gamma$ denotes the stepsize parameter. The projection gradient method can fail, however, when $A$ is weakened to a merely monotone operator. On this basis, Korpelevich [15] proposed a two-step iteration called the extragradient method (EGM):

    $$\begin{cases} x_0\in D,\\ s_n = P_D(x_n - \gamma Ax_n),\\ x_{n+1} = P_D(x_n - \gamma As_n), \end{cases} \quad (1.2)$$

    where $\gamma$ is the stepsize parameter and $A$ is Lipschitz continuous and monotone. However, computing two projections onto $D$ in each iteration is a major computational burden. To address this issue, Censor et al. [16] introduced the idea of projecting onto a half-space and modified the algorithm to

    $$\begin{cases} s_n = P_D(x_n - \gamma Ax_n),\\ H_n = \{x\in H : \langle x_n - \gamma Ax_n - s_n,\, x - s_n\rangle \le 0\},\\ x_{n+1} = P_{H_n}(x_n - \gamma As_n). \end{cases} \quad (1.3)$$
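    Since $H_n$ is a half-space, the second projection is available in closed form (a standard fact we record here; it is also how $P_{H_n}$ can be evaluated in Algorithm 1 below). Writing $a_n := x_n - \gamma Ax_n - s_n$,

    $$P_{H_n}(z) = z - \frac{\max\{0,\, \langle a_n,\, z - s_n\rangle\}}{\|a_n\|^2}\, a_n \quad (a_n \ne 0),$$

    while $P_{H_n}$ is the identity when $a_n = 0$, since then $H_n = H$.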

    Recently, adaptive stepsizes [17,18,19] and inertial terms [20,21,22,23] have frequently been used to accelerate the convergence of such algorithms. For example, Thong et al. [24] presented the following algorithm:

    $$\begin{cases} h_n = x_n + \alpha_n(x_n - x_{n-1}),\\ s_n = P_D(h_n - \tau_n Ah_n),\\ e_n = P_{H_n}(h_n - \tau_n As_n),\\ x_{n+1} = \beta_n f(e_n) + (1-\beta_n)e_n, \end{cases} \quad (1.4)$$

    where $H_n = \{x\in H : \langle h_n - \tau_n Ah_n - s_n,\, x - s_n\rangle \le 0\}$, and

    $$\tau_{n+1} = \begin{cases} \min\left\{\dfrac{\mu\|h_n - s_n\|}{\|Ah_n - As_n\|},\, \tau_n\right\}, & \text{if } Ah_n - As_n \ne 0,\\[1ex] \tau_n, & \text{otherwise.} \end{cases}$$
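    In MATLAB, this stepsize update is a short sketch (our illustration; the variables `hn`, `sn`, `Ahn`, `Asn`, `tau`, `mu` are assumed to hold $h_n$, $s_n$, $Ah_n$, $As_n$, $\tau_n$, $\mu$):

        if norm(Ahn - Asn) > 0
            tau = min(mu*norm(hn - sn)/norm(Ahn - Asn), tau);  % tau_{n+1}
        end
        % otherwise tau is left unchanged, i.e., tau_{n+1} = tau_n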

    The VIP has also been combined with fixed point problems [25]; we write $\Delta$ for the common solution set of the VIP and the fixed point problem. For example, Nadezhkina and Takahashi [26] proposed the following algorithm:

    $$\begin{cases} x_0\in D,\\ s_n = P_D(x_n - \tau_n Ax_n),\\ x_{n+1} = (1-\alpha_n)x_n + \alpha_n T P_D(x_n - \tau_n As_n), \end{cases} \quad (1.5)$$

    where $A$ is Lipschitz continuous and monotone, and $T: D\to D$ is nonexpansive. The sequence produced by this algorithm converges weakly to an element of $\Delta$. Another instance is the algorithm proposed by Thong and Hieu [27], which is as follows:

    $$\begin{cases} h_n = x_n + \alpha_n(x_n - x_{n-1}),\\ s_n = P_D(h_n - \tau_n Ah_n),\\ e_n = P_{H_n}(h_n - \tau_n As_n),\\ x_{n+1} = (1-\beta_n)h_n + \beta_n Te_n, \end{cases} \quad (1.6)$$

    where $\tau_n$ is selected as the largest $\tau$ in the set $\{\gamma, \gamma l, \gamma l^2, \ldots\}$ satisfying the condition

    $$\tau\|Ah_n - As_n\| \le \mu\|h_n - s_n\|.$$
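    A minimal MATLAB sketch of this Armijo-type backtracking (our illustration, assuming function handles `A` and `PD` for $A$ and $P_D$, with `hn`, `gamma`, `l`, `mu` given; note that $s_n$ must be recomputed for each trial $\tau$):

        tau = gamma;
        sn  = PD(hn - tau*A(hn));
        while tau*norm(A(hn) - A(sn)) > mu*norm(hn - sn)
            tau = tau*l;                     % shrink the trial stepsize
            sn  = PD(hn - tau*A(hn));
        end

    For a continuous $A$, the loop is expected to terminate after finitely many reductions.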

    Building on the preceding research, in this paper we present a self-adaptive stepsize, alternated inertial subgradient extragradient algorithm for solving the VIP and fixed point problems with non-Lipschitz, pseudo-monotone operators. The paper is organized as follows: Section 2 contains definitions and preliminary results essential for our approach; Section 3 establishes the convergence of the generated iterative sequence; finally, Section 4 presents a series of numerical experiments demonstrating the practicality and effectiveness of our algorithm.

    For a sequence $\{x_n\}$ and a point $x$ in $H$, strong convergence is denoted by $x_n \to x$ and weak convergence by $x_n \rightharpoonup x$.

    Definition 2.1. [28] Let $T: H\to H$ be a nonlinear operator with a nonempty fixed point set $\mathrm{Fix}(T)$. If, for any sequence $\{q_n\}\subset H$,

    $$\left.\begin{array}{l} q_n \rightharpoonup q,\\ (I-T)q_n \to 0 \end{array}\right\} \implies q\in \mathrm{Fix}(T),$$

    where $I$ denotes the identity operator, then $I-T$ is said to be demiclosed at zero.

    Definition 2.2. For an operator $T: H\to H$, the following definitions apply:

    (1) $T$ is termed nonexpansive if

    $$\|Tq_1 - Tq_2\| \le \|q_1 - q_2\| \quad \forall q_1, q_2\in H.$$

    (2) $T$ is termed quasi-nonexpansive, with a nonempty fixed point set $\mathrm{Fix}(T)$, if

    $$\|Tx - \eta\| \le \|x - \eta\| \quad \forall x\in H,\ \eta\in\mathrm{Fix}(T).$$

    Definition 2.3. A sequence $\{q_n\}$ is said to be Fejér monotone with respect to a set $D$ if

    $$\|q_{n+1} - q\| \le \|q_n - q\| \quad \forall q\in D.$$

    Lemma 2.1. For each $\zeta_1, \zeta_2\in H$ and $\epsilon\in\mathbb{R}$, we have

    $$\|\zeta_1 + \zeta_2\|^2 \le \|\zeta_1\|^2 + 2\langle \zeta_1 + \zeta_2,\, \zeta_2\rangle; \quad (2.1)$$

    $$\|\epsilon\zeta_2 + (1-\epsilon)\zeta_1\|^2 = (1-\epsilon)\|\zeta_1\|^2 + \epsilon\|\zeta_2\|^2 - \epsilon(1-\epsilon)\|\zeta_2 - \zeta_1\|^2. \quad (2.2)$$

    Lemma 2.2. [26] Given $\psi\in H$ and $\varphi\in D$, then

    (1) $\|P_D\psi - P_D\varphi\|^2 \le \langle \psi - \varphi,\, P_D\psi - P_D\varphi\rangle$;

    (2) $\|\varphi - P_D\psi\|^2 \le \|\psi - \varphi\|^2 - \|\psi - P_D\psi\|^2$;

    (3) $\langle \psi - P_D\psi,\, P_D\psi - \varphi\rangle \ge 0$.

    Lemma 2.3. [29] Suppose $A: D\to H$ is pseudo-monotone and uniformly continuous. Then $\varsigma\in D$ is a solution of the VIP if and only if $\langle Ax,\, x - \varsigma\rangle \ge 0$ for all $x\in D$.

    Lemma 2.4. [30] Let $D$ be a nonempty subset of $H$. A sequence $\{x_n\}$ in $H$ converges weakly to a point in $D$ if the following conditions are met:

    (1) for every $x\in D$, $\lim_{n\to\infty}\|x_n - x\|$ exists;

    (2) every sequential weak cluster point of $\{x_n\}$ lies in $D$.

    This section presents an alternated inertial projection algorithm designed to solve the VIP together with the fixed point problem for a quasi-nonexpansive mapping $T$ on $H$. We make the following assumptions:

    Assumption 3.1.

    (a) The operator A:HH is pseudo-monotone, uniformly continuous over H, and exhibits sequential weak continuity on D;

    (b) $\varpi \in \left(\dfrac{1-\mu}{4},\, \dfrac{1-\mu}{2}\right)$ and $0 < \kappa_n < \min\left\{\dfrac{1-\mu-2\varpi}{2\varpi},\, \dfrac{1-\varpi}{1+\varpi}\right\}$.
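    For concreteness, with $\mu = 0.5$ and $\varpi = 0.2$ (the values used in Example 4.1 below), condition (b) reads

    $$\varpi \in \left(\frac{1-\mu}{4},\, \frac{1-\mu}{2}\right) = (0.125,\, 0.25), \qquad 0 < \kappa_n < \min\left\{\frac{1 - 0.5 - 0.4}{0.4},\, \frac{0.8}{1.2}\right\} = 0.25,$$

    so the choice $\kappa_n \equiv 0.2$ there is admissible.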

    The algorithm (Algorithm 1) is as follows:

    Algorithm 1

    Initialization: Let $x_0, x_1\in H$ be arbitrary. Given $\gamma > 0$, $l\in(0,1)$, $\mu\in(0,1)$.
    Iterative step: Calculate $x_{n+1}$ as follows:
    Step 1. Set
         $$h_n = \begin{cases} x_n, & n \text{ even},\\ x_n + \varpi(x_n - x_{n-1}), & n \text{ odd}. \end{cases}$$
    Step 2. Compute
         $$s_n = P_D(h_n - \tau_n Ah_n).$$
    If $s_n = h_n$, stop. Otherwise compute
         $$e_n = P_{H_n}(h_n - \tau_n As_n),$$
    where
         $$H_n = \{x\in H : \langle h_n - \tau_n Ah_n - s_n,\, x - s_n\rangle \le 0\},$$
    and $\tau_n$ is selected as the largest $\tau$ from the set $\{\gamma, \gamma l, \gamma l^2, \ldots\}$ that satisfies
        $$\tau\langle As_n - Ah_n,\, s_n - e_n\rangle \le \mu\|s_n - h_n\|\,\|s_n - e_n\|.$$

    Step 3. Compute
        $$x_{n+1} = (1-\kappa_n)e_n + \kappa_n Te_n.$$

    Set $n := n+1$ and go back to Step 1.
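    To make the steps concrete, the following is a minimal MATLAB sketch of Algorithm 1 (our illustration, not the authors' experimental code) on the one-dimensional data of Example 4.2 below, where $D = [-2, 5]$, $At = t + \sin t$, and $Tt = \frac{t}{2}\sin t$; the half-space projection uses the closed form noted after (1.3), and the parameter values are those of Example 4.2.

        % Sketch of Algorithm 1 on the data of Example 4.2 (our illustration).
        A  = @(t) t + sin(t);              % pseudo-monotone, uniformly continuous
        T  = @(t) (t/2).*sin(t);           % quasi-nonexpansive, Fix(T) = {0}
        PD = @(t) min(max(t, -2), 5);      % projection onto D = [-2,5]
        gamma = 0.5; l = 0.5; mu = 0.9;    % line-search parameters
        w = 0.03; kappa = 2/3;             % inertia / relaxation (Assumption 3.1(b))
        xprev = 1; x = 2;                  % x_0 and x_1, chosen arbitrarily
        for n = 1:200
            if mod(n, 2) == 1              % Step 1: inertia on odd n only
                h = x + w*(x - xprev);
            else
                h = x;
            end
            tau = gamma;                   % Step 2: backtracking for tau_n
            for bt = 1:60
                s = PD(h - tau*A(h));
                e = projH(h - tau*A(s), h - tau*A(h) - s, s);
                if tau*(A(s) - A(h))*(s - e) <= mu*abs(s - h)*abs(s - e)
                    break                  % line-search condition satisfied
                end
                tau = tau*l;
            end
            if abs(s - h) < 1e-12, break, end   % s_n = h_n: stop
            xprev = x;
            x = (1 - kappa)*e + kappa*T(e);     % Step 3
        end
        fprintf('approximate common solution: %g\n', x)

        function p = projH(z, a, s)
        % projection of z onto the half-space {x : a*(x - s) <= 0}
            if a*(z - s) <= 0, p = z; else, p = z - (a*(z - s))/(a*a)*a; end
        end

    The iterates should approach $0$, the unique element of $\Delta$ for these data.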

    Before proving convergence of the algorithm, we first establish several lemmas.

    Lemma 3.1. The sequence $\{x_{2n}\}$ produced by Algorithm 1 is bounded, and $\lim_{n\to\infty}\|x_{2n} - \varrho\|$ exists for every $\varrho\in\Delta$.

    Proof. Indeed, let $\varrho\in\Delta$. Then, we have

    $$\begin{aligned}
    \|e_n - \varrho\|^2 &= \|P_{H_n}(h_n - \tau_n As_n) - \varrho\|^2 \le \|h_n - \tau_n As_n - \varrho\|^2 - \|h_n - \tau_n As_n - e_n\|^2\\
    &= \|h_n - \varrho\|^2 + \tau_n^2\|As_n\|^2 - 2\tau_n\langle h_n - \varrho,\, As_n\rangle - \|h_n - e_n\|^2 - \tau_n^2\|As_n\|^2 + 2\tau_n\langle h_n - e_n,\, As_n\rangle\\
    &= \|h_n - \varrho\|^2 - \|h_n - e_n\|^2 + 2\tau_n\langle \varrho - e_n,\, As_n\rangle\\
    &= \|h_n - \varrho\|^2 - \|h_n - e_n\|^2 - 2\tau_n\langle s_n - \varrho,\, As_n\rangle + 2\tau_n\langle s_n - e_n,\, As_n\rangle. \quad (3.1)
    \end{aligned}$$

    Since $\varrho\in\Delta$, we have $\langle A\varrho,\, s - \varrho\rangle \ge 0$ for all $s\in D$, and hence, by the pseudo-monotonicity of $A$, $\langle As,\, s - \varrho\rangle \ge 0$ for all $s\in D$. Setting $s = s_n$ gives $\langle As_n,\, s_n - \varrho\rangle \ge 0$. Thus, by (3.1), we can get

    $$\begin{aligned}
    \|e_n - \varrho\|^2 &\le \|h_n - \varrho\|^2 - \|h_n - e_n\|^2 + 2\tau_n\langle s_n - e_n,\, As_n\rangle\\
    &= \|h_n - \varrho\|^2 - \|h_n - s_n\|^2 - \|e_n - s_n\|^2 - 2\langle h_n - s_n,\, s_n - e_n\rangle + 2\tau_n\langle s_n - e_n,\, As_n\rangle\\
    &= \|h_n - \varrho\|^2 - \|h_n - s_n\|^2 - \|e_n - s_n\|^2 + 2\langle s_n - h_n + \tau_n As_n,\, s_n - e_n\rangle\\
    &= \|h_n - \varrho\|^2 - \|h_n - s_n\|^2 - \|e_n - s_n\|^2 + 2\langle h_n - \tau_n Ah_n - s_n,\, e_n - s_n\rangle + 2\tau_n\langle As_n - Ah_n,\, s_n - e_n\rangle\\
    &\le \|h_n - \varrho\|^2 - \|h_n - s_n\|^2 - \|e_n - s_n\|^2 + 2\mu\|s_n - h_n\|\,\|s_n - e_n\|\\
    &\le \|h_n - \varrho\|^2 - \|h_n - s_n\|^2 - \|e_n - s_n\|^2 + \mu\left[\|s_n - h_n\|^2 + \|e_n - s_n\|^2\right]\\
    &= \|h_n - \varrho\|^2 - (1-\mu)\|h_n - s_n\|^2 - (1-\mu)\|e_n - s_n\|^2, \quad (3.2)
    \end{aligned}$$

    where the second inequality uses $e_n\in H_n$ (so that $\langle h_n - \tau_n Ah_n - s_n,\, e_n - s_n\rangle \le 0$) together with the stepsize rule for $\tau_n$.

    Subsequently, by (2.2), we obtain

    $$\begin{aligned}
    \|x_{n+1} - \varrho\|^2 &= \|(1-\kappa_n)e_n + \kappa_n Te_n - \varrho\|^2 = \|\kappa_n(Te_n - \varrho) + (1-\kappa_n)(e_n - \varrho)\|^2\\
    &= \kappa_n\|Te_n - \varrho\|^2 + (1-\kappa_n)\|e_n - \varrho\|^2 - \kappa_n(1-\kappa_n)\|Te_n - e_n\|^2\\
    &\le \kappa_n\|e_n - \varrho\|^2 + (1-\kappa_n)\|e_n - \varrho\|^2 - \kappa_n(1-\kappa_n)\|Te_n - e_n\|^2\\
    &= \|e_n - \varrho\|^2 - \kappa_n(1-\kappa_n)\|Te_n - e_n\|^2\\
    &\le \|h_n - \varrho\|^2 - (1-\mu)\|h_n - s_n\|^2 - (1-\mu)\|e_n - s_n\|^2 - \kappa_n(1-\kappa_n)\|Te_n - e_n\|^2. \quad (3.3)
    \end{aligned}$$

    Meanwhile, since $\|Te_n - \varrho\| \le \|e_n - \varrho\|$ and, by (3.2), $\|e_n - \varrho\| \le \|h_n - \varrho\|$, it is evident that

    $$\|x_{n+1} - \varrho\|^2 \le (1-\kappa_n)\|h_n - \varrho\|^2 + \kappa_n\|e_n - \varrho\|^2. \quad (3.4)$$

    In particular,

    $$\begin{aligned}
    \|x_{2n+2} - \varrho\|^2 \le\ &\|h_{2n+1} - \varrho\|^2 - (1-\mu)\|h_{2n+1} - s_{2n+1}\|^2 - (1-\mu)\|e_{2n+1} - s_{2n+1}\|^2\\
    &- \kappa_{2n+1}(1-\kappa_{2n+1})\|Te_{2n+1} - e_{2n+1}\|^2. \quad (3.5)
    \end{aligned}$$

    By (2.2), we obtain

    $$\begin{aligned}
    \|h_{2n+1} - \varrho\|^2 &= \|x_{2n+1} + \varpi(x_{2n+1} - x_{2n}) - \varrho\|^2\\
    &= (1+\varpi)\|x_{2n+1} - \varrho\|^2 - \varpi\|x_{2n} - \varrho\|^2 + \varpi(1+\varpi)\|x_{2n+1} - x_{2n}\|^2. \quad (3.6)
    \end{aligned}$$

    As another special case of (3.3), we have

    $$\begin{aligned}
    \|x_{2n+1} - \varrho\|^2 &\le \|x_{2n} - \varrho\|^2 - (1-\mu)\|x_{2n} - s_{2n}\|^2 - (1-\mu)\|e_{2n} - s_{2n}\|^2 - \kappa_{2n}(1-\kappa_{2n})\|Te_{2n} - e_{2n}\|^2\\
    &\le \|x_{2n} - \varrho\|^2 - \frac{1-\mu}{2}\|x_{2n} - e_{2n}\|^2 - \kappa_{2n}(1-\kappa_{2n})\|Te_{2n} - e_{2n}\|^2, \quad (3.7)
    \end{aligned}$$

    and then, substituting (3.7) into (3.6), we can get

    $$\begin{aligned}
    \|h_{2n+1} - \varrho\|^2 \le\ &\|x_{2n} - \varrho\|^2 - \frac{(1+\varpi)(1-\mu)}{2}\|x_{2n} - e_{2n}\|^2\\
    &- \kappa_{2n}(1-\kappa_{2n})(1+\varpi)\|Te_{2n} - e_{2n}\|^2 + \varpi(1+\varpi)\|x_{2n+1} - x_{2n}\|^2. \quad (3.8)
    \end{aligned}$$

    Plugging (3.8) into (3.5) gives

    $$\begin{aligned}
    \|x_{2n+2} - \varrho\|^2 \le\ &\|x_{2n} - \varrho\|^2 - \frac{(1+\varpi)(1-\mu)}{2}\|x_{2n} - e_{2n}\|^2 - \kappa_{2n}(1-\kappa_{2n})(1+\varpi)\|Te_{2n} - e_{2n}\|^2\\
    &+ \varpi(1+\varpi)\|x_{2n+1} - x_{2n}\|^2 - (1-\mu)\|h_{2n+1} - s_{2n+1}\|^2 - (1-\mu)\|e_{2n+1} - s_{2n+1}\|^2\\
    &- \kappa_{2n+1}(1-\kappa_{2n+1})\|Te_{2n+1} - e_{2n+1}\|^2, \quad (3.9)
    \end{aligned}$$

    where

    $$\begin{aligned}
    \|x_{2n+1} - x_{2n}\|^2 &= \|(1-\kappa_{2n})e_{2n} + \kappa_{2n}Te_{2n} - x_{2n}\|^2 = \|e_{2n} - x_{2n} + \kappa_{2n}(Te_{2n} - e_{2n})\|^2\\
    &= \|e_{2n} - x_{2n}\|^2 + \kappa_{2n}^2\|Te_{2n} - e_{2n}\|^2 + 2\kappa_{2n}\langle e_{2n} - x_{2n},\, Te_{2n} - e_{2n}\rangle\\
    &\le \|e_{2n} - x_{2n}\|^2 + \kappa_{2n}^2\|Te_{2n} - e_{2n}\|^2 + \kappa_{2n}\left(\|e_{2n} - x_{2n}\|^2 + \|Te_{2n} - e_{2n}\|^2\right)\\
    &= (1+\kappa_{2n})\|e_{2n} - x_{2n}\|^2 + \kappa_{2n}(\kappa_{2n}+1)\|Te_{2n} - e_{2n}\|^2. \quad (3.10)
    \end{aligned}$$

    Thus, putting (3.10) into (3.9), we have

    $$\begin{aligned}
    \|x_{2n+2} - \varrho\|^2 \le\ &\|x_{2n} - \varrho\|^2 - \left[\frac{(1+\varpi)(1-\mu)}{2} - \varpi(1+\varpi)(1+\kappa_{2n})\right]\|e_{2n} - x_{2n}\|^2\\
    &- \left[\kappa_{2n}(1-\kappa_{2n})(1+\varpi) - \varpi(1+\varpi)\kappa_{2n}(\kappa_{2n}+1)\right]\|Te_{2n} - e_{2n}\|^2\\
    &- (1-\mu)\|h_{2n+1} - s_{2n+1}\|^2 - (1-\mu)\|e_{2n+1} - s_{2n+1}\|^2\\
    &- \kappa_{2n+1}(1-\kappa_{2n+1})\|Te_{2n+1} - e_{2n+1}\|^2. \quad (3.11)
    \end{aligned}$$

    Since $\varpi\in\left(\frac{1-\mu}{4}, \frac{1-\mu}{2}\right)$ and $0 < \kappa_n < \min\left\{\frac{1-\mu-2\varpi}{2\varpi}, \frac{1-\varpi}{1+\varpi}\right\}$, both bracketed coefficients in (3.11) are nonnegative, so the sequence $\{\|x_{2n} - \varrho\|\}$ is decreasing and $\lim_{n\to\infty}\|x_{2n} - \varrho\|$ exists. This implies that $\{\|x_{2n} - \varrho\|\}$ is bounded; hence $\{x_{2n}\}$ is bounded. By (3.7), $\{\|x_{2n+1} - \varrho\|\}$ is also bounded. Therefore, $\{\|x_n - \varrho\|\}$ is bounded, and thus $\{x_n\}$ is bounded. $\square$

    Lemma 3.2. Consider the sequence $\{x_{2n}\}$ produced by Algorithm 1. If a subsequence $\{x_{2n_k}\}$ of $\{x_{2n}\}$ converges weakly to $x^*\in H$ and $\lim_{k\to\infty}\|x_{2n_k} - s_{2n_k}\| = 0$, then $x^*\in\Omega$.

    Proof. Since $h_{2n} = x_{2n}$, using the definition of $\{s_{2n_k}\}$ and Lemma 2.2, we get

    $$\langle x_{2n_k} - \tau_{2n_k}Ax_{2n_k} - s_{2n_k},\, x - s_{2n_k}\rangle \le 0 \quad \forall x\in D,$$

    and so

    $$\frac{1}{\tau_{2n_k}}\langle x_{2n_k} - s_{2n_k},\, x - s_{2n_k}\rangle \le \langle Ax_{2n_k},\, x - s_{2n_k}\rangle \quad \forall x\in D.$$

    Hence,

    $$\frac{1}{\tau_{2n_k}}\langle x_{2n_k} - s_{2n_k},\, x - s_{2n_k}\rangle + \langle Ax_{2n_k},\, s_{2n_k} - x_{2n_k}\rangle \le \langle Ax_{2n_k},\, x - x_{2n_k}\rangle \quad \forall x\in D. \quad (3.12)$$

    Since $\lim_{k\to\infty}\|x_{2n_k} - s_{2n_k}\| = 0$, taking the limit as $k\to\infty$ in (3.12), we acquire

    $$\liminf_{k\to\infty}\langle Ax_{2n_k},\, x - x_{2n_k}\rangle \ge 0 \quad \forall x\in D. \quad (3.13)$$

    Select a decreasing sequence $\{\epsilon_k\}\subset(0,\infty)$ such that $\lim_{k\to\infty}\epsilon_k = 0$. Then, for each $\epsilon_k$, based on (3.13), we use $M_k$ to denote the smallest positive integer satisfying

    $$\langle Ax_{2n_j},\, x - x_{2n_j}\rangle + \epsilon_k \ge 0 \quad \forall j\ge M_k. \quad (3.14)$$

    Since $\{\epsilon_k\}$ is decreasing, $\{M_k\}$ is increasing. Also, for each $k$, $Ax_{2M_k}\ne 0$; let

    $$v_{2M_k} = \frac{Ax_{2M_k}}{\|Ax_{2M_k}\|^2}.$$

    Here, $\langle Ax_{2M_k},\, v_{2M_k}\rangle = 1$ for each $k$. Then, by (3.14), for each $k$ we have

    $$\langle Ax_{2M_k},\, x + \epsilon_k v_{2M_k} - x_{2M_k}\rangle \ge 0.$$

    Because $A$ is pseudo-monotone, we get

    $$\langle A(x + \epsilon_k v_{2M_k}),\, x + \epsilon_k v_{2M_k} - x_{2M_k}\rangle \ge 0. \quad (3.15)$$

    Since $x_{2n_k} \rightharpoonup x^*$ as $k\to\infty$ and $A$ is sequentially weakly continuous, the sequence $\{Ax_{2n_k}\}$ converges weakly to $Ax^*$. We may assume $Ax^*\ne 0$ (otherwise, since $x^*\in D$ as the weak limit of $\{s_{2n_k}\}\subset D$, $x^*\in\Omega$ holds trivially). Then, by the weak sequential lower semicontinuity of the norm, we obtain

    $$0 < \|Ax^*\| \le \liminf_{k\to\infty}\|Ax_{2n_k}\|.$$

    Since $\{x_{2M_k}\}\subset\{x_{2n_k}\}$ and $\lim_{k\to\infty}\epsilon_k = 0$, we have

    $$0 \le \limsup_{k\to\infty}\|\epsilon_k v_{2M_k}\| = \limsup_{k\to\infty}\frac{\epsilon_k}{\|Ax_{2M_k}\|} \le \frac{\limsup_{k\to\infty}\epsilon_k}{\liminf_{k\to\infty}\|Ax_{2n_k}\|} = \frac{0}{\|Ax^*\|} = 0,$$

    which means $\lim_{k\to\infty}\epsilon_k v_{2M_k} = 0$. Finally, letting $k\to\infty$ in (3.15) and using the uniform continuity of $A$, we get

    $$\langle Ax,\, x - x^*\rangle \ge 0 \quad \forall x\in D.$$

    By Lemma 2.3, this implies $x^*\in\Omega$. $\square$

    Lemma 3.3. Let $\{x_{2n}\}$ be the sequence produced by Algorithm 1. Since $\{x_{2n}\}$ is bounded, there exist a subsequence $\{x_{2n_k}\}$ of $\{x_{2n}\}$ and $x^*\in H$ such that $x_{2n_k} \rightharpoonup x^*$. Then $x^*\in\Delta$.

    Proof. From (3.11) and the convergence of $\{\|x_{2n} - \varrho\|\}$, we can deduce that

    $$\|e_{2n+1} - s_{2n+1}\| \to 0, \qquad \|x_{2n} - x_{2n+1}\| \to 0, \quad (3.16)$$
    $$\|h_{2n+1} - s_{2n+1}\| \to 0, \qquad \|Te_{2n} - e_{2n}\| \to 0, \quad (3.17)$$
    $$\|Te_{2n+1} - e_{2n+1}\| \to 0, \qquad \text{as } n\to+\infty.$$

    By the definition of {x2n+1}, we have

    $$\|x_{2n} - e_{2n}\| = \|x_{2n} - x_{2n+1} + \kappa_{2n}(Te_{2n} - e_{2n})\| \le \|x_{2n} - x_{2n+1}\| + \kappa_{2n}\|Te_{2n} - e_{2n}\|,$$

    then

    $$\|x_{2n} - e_{2n}\| \to 0, \quad (3.18)$$

    and by (3.18) and $x_{2n_k} \rightharpoonup x^*$, we can get

    $$e_{2n_k} \rightharpoonup x^*. \quad (3.19)$$

    Since $I - T$ is demiclosed at zero, Definition 2.1, (3.17), and (3.19) imply

    $$x^*\in\mathrm{Fix}(T). \quad (3.20)$$

    From (3.2), we deduce

    $$\|e_{2n} - \varrho\|^2 \le \|x_{2n} - \varrho\|^2 - (1-\mu)\|x_{2n} - s_{2n}\|^2 - (1-\mu)\|e_{2n} - s_{2n}\|^2.$$

    This implies that

    $$(1-\mu)\|x_{2n} - s_{2n}\|^2 \le \|x_{2n} - \varrho\|^2 - \|e_{2n} - \varrho\|^2. \quad (3.21)$$

    Based on the convergence of $\{\|x_{2n} - \varrho\|^2\}$, we can assume that

    $$\|x_{2n} - \varrho\|^2 \to l. \quad (3.22)$$

    At the same time, according to (3.16), it can be obtained that

    $$\|x_{2n+1} - \varrho\|^2 \to l. \quad (3.23)$$

    It follows from (3.4) that

    $$\|x_{2n+1} - \varrho\|^2 \le (1-\kappa_{2n})\|x_{2n} - \varrho\|^2 + \kappa_{2n}\|e_{2n} - \varrho\|^2.$$

    Then,

    $$\|e_{2n} - \varrho\|^2 \ge \frac{\|x_{2n+1} - \varrho\|^2 - \|x_{2n} - \varrho\|^2}{\kappa_{2n}} + \|x_{2n} - \varrho\|^2. \quad (3.24)$$

    It follows from (3.22)–(3.24) that

    $$\liminf_{n\to\infty}\|e_{2n} - \varrho\|^2 \ge \lim_{n\to\infty}\|x_{2n} - \varrho\|^2 = l. \quad (3.25)$$

    By (3.2), we get

    $$\limsup_{n\to\infty}\|e_{2n} - \varrho\|^2 \le \lim_{n\to\infty}\|x_{2n} - \varrho\|^2 = l. \quad (3.26)$$

    Combining (3.25) and (3.26), we get

    $$\lim_{n\to\infty}\|e_{2n} - \varrho\|^2 = l. \quad (3.27)$$

    Combining (3.21), (3.22), and (3.27), we have

    $$\lim_{n\to\infty}\|x_{2n} - s_{2n}\|^2 = 0.$$

    Therefore, Lemma 3.2 implies that

    $$x^*\in\Omega. \quad (3.28)$$

    Combining (3.20) and (3.28), we can derive

    $$x^*\in\Delta. \quad \square$$

    Theorem 3.2. The sequence $\{x_n\}$ produced by Algorithm 1 converges weakly to a point in $\Delta$.

    Proof. Let $x^*\in H$ be such that $x_{2n_k} \rightharpoonup x^*$. Then, by Lemma 3.3,

    $$x^*\in\Delta.$$

    Since $\lim_{n\to\infty}\|x_{2n} - \varrho\|$ exists for all $\varrho\in\Delta$ (Lemma 3.1), Lemma 2.4 shows that $\{x_{2n}\}$ converges weakly to an element of $\Delta$. Now, suppose $\{x_{2n}\}$ converges weakly to $\xi\in\Delta$. For all $g\in H$, it follows that

    $$\lim_{n\to\infty}\langle x_{2n} - \xi,\, g\rangle = 0.$$

    Furthermore, by (3.16), for all $g\in H$,

    $$|\langle x_{2n+1} - \xi,\, g\rangle| = |\langle x_{2n+1} - x_{2n} + x_{2n} - \xi,\, g\rangle| \le |\langle x_{2n+1} - x_{2n},\, g\rangle| + |\langle x_{2n} - \xi,\, g\rangle| \le \|x_{2n+1} - x_{2n}\|\,\|g\| + |\langle x_{2n} - \xi,\, g\rangle| \to 0, \quad \text{as } n\to\infty.$$

    Therefore, $\{x_{2n+1}\}$ converges weakly to $\xi\in\Delta$. Hence, $\{x_n\}$ converges weakly to $\xi\in\Delta$. $\square$

    This section showcases three numerical experiments comparing Algorithm 1 against scheme (1.6), Algorithm 6.1 in [31], and Algorithm 3.1 in [32]. All codes were written in MATLAB R2018b and run on a desktop PC with an Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz 1.80 GHz and 8.00 GB of RAM.

    Example 4.1. Assume that $H = \mathbb{R}^3$ and $D := \{x\in\mathbb{R}^3 : \Phi x \le \phi\}$, where $\Phi$ is a $3\times 3$ matrix and $\phi$ is a nonnegative vector. Let $A(x) := Qx + q$ with $Q = BB^T + E + F$, where $B$ is a $3\times 3$ matrix, $E$ is a $3\times 3$ skew-symmetric matrix, $F$ is a $3\times 3$ diagonal matrix with nonnegative diagonal entries, and $q$ is a vector in $\mathbb{R}^3$. Notably, $A$ is both monotone and Lipschitz continuous with constant $L = \|Q\|$. Define $T(x) = x$ for all $x\in\mathbb{R}^3$.
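    The paper does not list the entries of $B$, $E$, and $F$; one way to generate data with the stated structure in MATLAB (our assumption, for reproducibility only) is:

        rng(1);                                % reproducible random data
        B  = randn(3);
        E0 = randn(3);  E = (E0 - E0')/2;      % skew-symmetric matrix
        F  = diag(abs(randn(3,1)));            % nonnegative diagonal entries
        Q  = B*B' + E + F;  q = zeros(3,1);    % with q = 0, Delta = {0}
        A  = @(x) Q*x + q;
        L  = norm(Q);                          % Lipschitz constant of A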

    Under the assumption $q = 0$, the solution set is $\Delta = \{0\}$, which means that $x^* = 0$. The error at the $n$-th iteration is measured by $\|x_n - x^*\|$. In both Algorithm 1 and scheme (1.6), we let $\mu = 0.5$, $\gamma = 0.5$, $l = 0.5$; in Algorithm 1, we let $\varpi = 0.2$, $\kappa_n = 0.2$; in scheme (1.6), we let $\alpha_n = 0.25$, $\beta_n = 0.5$; in Algorithm 6.1 in [31], we let $\tau = 0.01$, $\alpha_n = 0.25$; in Algorithm 3.1 in [32], we let $\alpha_n = \frac{1}{n+1}$, $\beta_n = \frac{n}{2n+1}$, $f(x) = 0.5x$, $\tau_1 = 1$, $\mu = 0.2$, $\theta = 0.3$, $\epsilon_n = \frac{100}{(n+1)^2}$. The outcomes of this numerical experiment are presented in Table 1 and Figure 1.

    Table 1.  Numerical results for Example 4.1.

                                  Iter.    Time [sec]
        Algorithm 1               297      1.7283
        Scheme (1.6)              482      3.0215
        Algorithm 6.1 in [31]     1311     8.5415
        Algorithm 3.1 in [32]     477      2.3758

    Figure 1.  Comparison of Algorithm 1 with scheme (1.6), Algorithm 6.1 in [31], and Algorithm 3.1 in [32] for Example 4.1.

    Table 1 shows that Algorithm 1 requires the fewest iterations and the shortest running time, which indicates that Algorithm 1 is feasible. Figure 1 likewise shows that Algorithm 1 is more efficient than the other three algorithms.

    Example 4.2. Consider $H = \mathbb{R}$ and the feasible set $D = [-2, 5]$. Let $A: H\to H$ be defined by

    $$At := t + \sin t,$$

    and $T: H\to H$ be defined by

    $$Tt := \frac{t}{2}\sin t.$$

    It is evident that $A$ is Lipschitz continuous and monotone, while $T$ is a quasi-nonexpansive mapping. Consequently, it is straightforward to observe that $\Delta = \{0\}$.
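    For completeness, here is a short verification (ours) of $\Delta = \{0\}$: since $A0 = 0$, $\langle A0,\, y - 0\rangle = 0$ for all $y\in D$, so $0\in\Omega$; because $(t + \sin t)' = 1 + \cos t \ge 0$ with equality only at isolated points, $A$ is strictly increasing, $t = 0$ is its only zero, and $A(-2) < 0 < A(5)$ rules out the endpoints of $D$, so $\Omega = \{0\}$. Moreover,

    $$Tt = t \iff \frac{t}{2}\sin t = t \iff t\left(\frac{\sin t}{2} - 1\right) = 0 \iff t = 0,$$

    so $\mathrm{Fix}(T) = \{0\}$ and $\Delta = \Omega\cap\mathrm{Fix}(T) = \{0\}$.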

    In Algorithm 1 and scheme (1.6), we let $\gamma = 0.5$, $l = 0.5$, $\mu = 0.9$; in Algorithm 1, we let $\kappa_n = \frac{2}{3}$, $\varpi = 0.03$; in scheme (1.6), we let $\alpha_n = 0.25$, $\beta_n = 0.5$; in Algorithm 6.1 in [31], we let $\tau = 0.4$, $\alpha_n = 0.5$; in Algorithm 3.1 in [32], we let $\alpha_n = \frac{1}{n+1}$, $\beta_n = \frac{n}{2n+1}$, $f(x) = 0.5x$, $\tau_1 = 1$, $\mu = 0.2$, $\theta = 0.3$, $\epsilon_n = \frac{100}{(n+1)^2}$. The results of the numerical experiment are shown in Table 2 and Figure 2.

    Table 2.  Numerical results for Example 4.2.

                                  Iter.    Time [sec]
        Algorithm 1               20       0.3542
        Scheme (1.6)              26       0.5168
        Algorithm 6.1 in [31]     41       0.4293
        Algorithm 3.1 in [32]     26       0.3574

    Figure 2.  Comparison of Algorithm 1 with scheme (1.6), Algorithm 6.1 in [31], and Algorithm 3.1 in [32] for Example 4.2.

    Table 2 and Figure 2 illustrate that Algorithm 1 has a faster convergence speed.

    Example 4.3. Consider $H = L^2([0,1])$ with the inner product

    $$\langle m, n\rangle := \int_0^1 m(p)n(p)\,dp \quad \forall m, n\in H,$$

    and the induced norm

    $$\|m\| := \left(\int_0^1 |m(p)|^2\,dp\right)^{1/2} \quad \forall m\in H.$$

    The operator $A: H\to H$ is defined by

    $$(Am)(p) = \max\{0, m(p)\}, \quad p\in[0,1],\ \forall m\in H.$$

    The set $D := \{m\in H : \|m\| \le 1\}$ represents the unit ball, on which the projection operator $P_D$ has the explicit form

    $$P_D(m) = \begin{cases} \dfrac{m}{\|m\|_{L^2}}, & \|m\|_{L^2} > 1,\\[1ex] m, & \|m\|_{L^2} \le 1. \end{cases}$$

    Let $T: L^2([0,1])\to L^2([0,1])$ be defined by

    $$(Tm)(p) = \frac{m(p)}{2}.$$

    Therefore, we get that $\Delta = \{0\}$.
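    A minimal MATLAB sketch of how these objects can be discretized for the experiment (the uniform grid and quadrature weight are our assumptions, not necessarily the authors' setup):

        N  = 200;  p = linspace(0, 1, N)';   % grid on [0,1]
        wq = 1/N;                            % quadrature weight
        nrmL2 = @(m) sqrt(wq*sum(m.^2));     % discretized L2 norm
        Amap  = @(m) max(0, m);              % (Am)(p) = max{0, m(p)}
        Tmap  = @(m) m/2;                    % (Tm)(p) = m(p)/2
        PD    = @(m) m / max(1, nrmL2(m));   % projection onto the unit ball
        x0 = p.^2;                           % initial point x0(p) = p^2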

    In Algorithm 1 and scheme (1.6), we let $\gamma = 0.5$, $l = 0.5$, $\mu = 0.5$; in Algorithm 1, we let $\kappa_n = 0.2$, $\varpi = 0.2$; in scheme (1.6), we let $\alpha_n = 0.25$, $\beta_n = 0.3$; in Algorithm 6.1 in [31], we let $\tau = 0.9$, $\alpha_n = 0.6$. The results of the numerical experiment are shown in Figure 3.

    Figure 3.  Comparison of Algorithm 1 with scheme (1.6) and Algorithm 6.1 in [31] for Example 4.3.

    Figure 3 shows the behavior of $E_n = \|x_n - x^*\|$ for all the algorithms, starting from the initial point $x_0(p) = p^2$. The results indicate that our algorithm outperforms the other algorithms.

    This paper introduces a novel approach for tackling variational inequality and fixed point problems. Compared with scheme (1.6), Algorithm 1 relaxes the conditions on the operator $A$ to pseudo-monotonicity and uniform continuity, incorporates a new self-adaptive stepsize, and adds an alternated inertial step. The efficiency of our algorithm is validated by the results of three distinct numerical experiments.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the National Natural Science Foundation of China (Grant No. 12171435).

    The authors declare that they have no competing interests.



    [1] G. Stampacchia, Formes bilineaires coercitives sur les ensembles convexes, C. R. Acad. Sci., 258 (1964), 4413–4416.
    [2] Y. Malitsky, V. Semenov, An extragradient algorithm for monotone variational inequalities, Cybern. Syst. Anal., 50 (2014), 271–277. http://dx.doi.org/10.1007/s10559-014-9614-8 doi: 10.1007/s10559-014-9614-8
    [3] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), 431–446. http://dx.doi.org/10.1137/S0363012998338806 doi: 10.1137/S0363012998338806
    [4] M. Solodov, B. Svaiter, New projection method for variational inequality problems, SIAM J. Control Optim., 37 (1999), 765–776. http://dx.doi.org/10.1137/S0363012997317475 doi: 10.1137/S0363012997317475
    [5] Y. Malitsky, Projected reflected gradient methods for monotone variational inequalities, SIAM J. Optim., 25 (2015), 502–520. http://dx.doi.org/10.1137/14097238X doi: 10.1137/14097238X
    [6] P. Mainge, M. Gobinddass, Convergence of one-step projected gradient methods for variational inequalities, J. Optim. Theory Appl., 171 (2016), 146–168. http://dx.doi.org/10.1007/s10957-016-0972-4 doi: 10.1007/s10957-016-0972-4
    [7] A. Iusem, B. Svaiter, A variant of Korpelevich's method for variational inequalities with a new search strategy, Optimization, 42 (1997), 309–321. http://dx.doi.org/10.1080/02331939708844365 doi: 10.1080/02331939708844365
    [8] R. Kraikaew, S. Saejung, Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 163 (2014), 399–412. http://dx.doi.org/10.1007/s10957-013-0494-2 doi: 10.1007/s10957-013-0494-2
    [9] Y. Shehu, O. Iyiola, Strong convergence result for monotone variational inequalities, Numer. Algor., 76 (2017), 259–282. http://dx.doi.org/10.1007/s11075-016-0253-1 doi: 10.1007/s11075-016-0253-1
    [10] A. Gibali, A new non-Lipschitzian method for solving variational inequalities in Euclidean spaces, Journal of Nonlinear Analysis and Optimization: Theory and Applications, 6 (2015), 41–51.
    [11] X. Hu, J. Wang, Solving pseudomonotone variational inequalities and pseudoconvex optimization problems using the projection neural network, IEEE Trans. Neural Netw., 17 (2006), 1487–1499. http://dx.doi.org/10.1109/TNN.2006.879774 doi: 10.1109/TNN.2006.879774
    [12] X. He, T. Huang, J. Yu, C. D. Li, C. J. Li, An inertial projection neural network for solving variational inequalities, IEEE Trans. Cybern., 47 (2017), 809–814. http://dx.doi.org/10.1109/TCYB.2016.2523541 doi: 10.1109/TCYB.2016.2523541
    [13] M. Eshaghnezhad, S. Effati, A. Mansoori, A neurodynamic model to solve nonlinear pseudo-monotone projection equation and its applications, IEEE Trans. Cybern., 47 (2017), 3050–3062. http://dx.doi.org/10.1109/TCYB.2016.2611529 doi: 10.1109/TCYB.2016.2611529
    [14] J. Zheng, J. Chen, X. Ju, Fixed-time stability of projection neurodynamic network for solving pseudomonotone variational inequalities, Neurocomputing, 505 (2022), 402–412. http://dx.doi.org/10.1016/j.neucom.2022.07.034 doi: 10.1016/j.neucom.2022.07.034
    [15] G. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon, 12 (1976), 747–756.
    [16] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318–335. http://dx.doi.org/10.1007/s10957-010-9757-3 doi: 10.1007/s10957-010-9757-3
    [17] D. Thong, A. Gibali, Extragradient methods for solving non-Lipschitzian pseudo-monotone variational inequalities, J. Fixed Point Theory Appl., 21 (2019), 20. http://dx.doi.org/10.1007/s11784-018-0656-9 doi: 10.1007/s11784-018-0656-9
    [18] Q. Dong, G. Cai, Convergence analysis for fixed point problem of asymptotically nonexpansive mappings and variational inequality problem in Hilbert spaces, Optimization, 70 (2021), 1171–1193. http://dx.doi.org/10.1080/02331934.2020.1789127 doi: 10.1080/02331934.2020.1789127
    [19] Q. Dong, S. He, L. Liu, A general inertial projected gradient method for variational inequality problems, Comp. Appl. Math., 40 (2021), 168. http://dx.doi.org/10.1007/s40314-021-01540-4 doi: 10.1007/s40314-021-01540-4
    [20] A. Moudafi, E. Elisabeth, Approximate inertial proximal method using enlargement of a maximal monotone operator, International Journal of Pure and Applied Mathematics, 5 (2003), 283–299.
    [21] S. Denisov, V. Semenov, L. Chabak, Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators, Cybern. Syst. Anal., 51 (2015), 757–765. http://dx.doi.org/10.1007/s10559-015-9768-z doi: 10.1007/s10559-015-9768-z
    [22] D. Thong, Y. Shehu, O. Iyiola, Weak and strong convergence theorems for solving pseudomonotone variational inequalities with non-Lipschitz mappings, Numer. Algor., 84 (2020), 795–823. http://dx.doi.org/10.1007/s11075-019-00780-0 doi: 10.1007/s11075-019-00780-0
    [23] Y. Shehu, Q. Dong, L. Liu, J. Yao, New strong convergence method for the sum of two maximal monotone operators, Optim. Eng., 22 (2021), 2627–2653. http://dx.doi.org/10.1007/s11081-020-09544-5 doi: 10.1007/s11081-020-09544-5
    [24] D. Thong, D. Hieu, T. Rassias, Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems, Optim. Lett., 14 (2020), 115–144. http://dx.doi.org/10.1007/s11590-019-01511-z doi: 10.1007/s11590-019-01511-z
    [25] F. Ma, J. Yang, M. Yin, A strong convergence theorem for solving pseudo-monotone variational inequalities and fixed point problems using subgradient extragradient method in Banach spaces, AIMS Mathematics, 7 (2022), 5015–5028. http://dx.doi.org/10.3934/math.2022279 doi: 10.3934/math.2022279
    [26] N. Nadezhkina, W. Takahashi, Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings, J. Optim. Theory Appl., 128 (2006), 191–201. http://dx.doi.org/10.1007/s10957-005-7564-z doi: 10.1007/s10957-005-7564-z
    [27] D. Thong, D. Hieu, Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems, Numer. Algor., 80 (2019), 1283–1307. http://dx.doi.org/10.1007/s11075-018-0527-x doi: 10.1007/s11075-018-0527-x
    [28] H. Xu, Viscosity approximation methods for nonexpansive mappings, J. Math. Anal. Appl., 298 (2004), 279–291. http://dx.doi.org/10.1016/j.jmaa.2004.04.059 doi: 10.1016/j.jmaa.2004.04.059
    [29] R. Cottle, J. Yao, Pseudo-monotone complementarity problems in Hilbert space, J. Optim. Theory Appl., 75 (1992), 281–295. http://dx.doi.org/10.1007/BF00941468 doi: 10.1007/BF00941468
    [30] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc., 73 (1967), 591–597. http://dx.doi.org/10.1090/S0002-9904-1967-11761-0 doi: 10.1090/S0002-9904-1967-11761-0
    [31] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl., 148 (2011), 318–335. http://dx.doi.org/10.1007/s10957-010-9757-3 doi: 10.1007/s10957-010-9757-3
    [32] B. Tan, Z. Zhou, S. Li, Viscosity-type inertial extragradient algorithms for solving variational inequality problems and fixed point problems, J. Appl. Math. Comput., 68 (2022), 1387–1411. http://dx.doi.org/10.1007/s12190-021-01576-z doi: 10.1007/s12190-021-01576-z
    © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)