Research article

Two self-adaptive inertial projection algorithms for solving split variational inclusion problems

  • Received: 17 June 2021 Revised: 22 December 2021 Accepted: 23 December 2021 Published: 29 December 2021
  • MSC : 47H05, 49J40, 65K10, 65Y10

  • This paper analyzes the approximate solution of a split variational inclusion problem in the framework of Hilbert spaces. For this purpose, inertial hybrid and shrinking projection algorithms are proposed with a self-adaptive stepsize that does not require knowledge of the norms of the given operators. The strong convergence of the proposed algorithms is obtained under mild constraints. Finally, a numerical experiment is given to illustrate the performance of the proposed methods and to compare them with an existing algorithm.

    Citation: Zheng Zhou, Bing Tan, Songxiao Li. Two self-adaptive inertial projection algorithms for solving split variational inclusion problems[J]. AIMS Mathematics, 2022, 7(4): 4960-4973. doi: 10.3934/math.2022276




    Inspired by the split variational inequality problem proposed by Censor et al. [1], Moudafi [2] introduced a more general form of this problem, namely the split monotone variational inclusion problem. An important special case of the split monotone variational inclusion problem is the split variational inclusion problem (for short, SVIP), which is to find a zero of a maximal monotone mapping in one space whose image under a given bounded linear transformation is a zero of another maximal monotone mapping in another space. Moreover, the split variational inclusion problem generalizes many other problems, such as the split variational inequality problem, the split minimization problem, the split equilibrium problem, the split saddle point problem and the split feasibility problem; see, for instance, [2,3,4,5,6,7] and the references therein. These problems are widely applied to radiation therapy treatment planning, image recovery and signal recovery; for details, we refer to [8,9,10]. When the two spaces in the SVIP coincide and the given bounded linear operator is the identity mapping, the SVIP reduces to the well-known common solution problem, i.e., finding a common solution of two variational inclusion problems. Analogous common solution problems arise in other settings, such as the variational inequality problem, the minimization problem and the equilibrium problem. In general, these common solution problems can be regarded as instances of the classical convex feasibility problem.

    In particular, finding a zero of a maximal monotone mapping is known as the variational inclusion problem (for short, VIP), which is a special case of the SVIP. Since the resolvent mapping of a maximal monotone mapping is an important tool for solving the VIP, quite a few remarkable results have been obtained for the variational inclusion problem and the split variational inclusion problem; see, for example, [11,12,13,14,15,16]. On the other hand, based on the idea of an implicit time discretization of a second-order differential equation, Alvarez and Attouch [17] introduced an inertial proximal point algorithm to approximate a solution of the VIP. Under the effect of the inertial technique, iterative sequences converge rapidly to approximate solutions of the corresponding problems, such as the split variational inclusion problem [3,6,7,16,18], the split common fixed point problem [10,19], the monotone inclusion problem [20,21], the fixed point problem [22,23,24] and the variational inequality problem [25,26,27,28].

    From the existing results on the split variational inclusion problem, we find that weak convergence is easy to obtain, while strong convergence is usually proved with the help of additional methods, such as the viscosity method, the Halpern method, the Mann-type method and the hybrid steepest descent method; for details, see [3,4,6,15]. Unfortunately, the stepsize sequences in these existing results often depend on the norm of the bounded linear operator. Hence, the work of this paper can be summarized in two aspects. The first is to construct new inertial iterative algorithms that converge strongly to a solution of the SVIP; for this purpose, we consider two projection methods, namely the hybrid projection [29] and the shrinking projection [30]. The second is to design a new stepsize sequence that does not need prior knowledge of the norm of the bounded linear operator.

    The remainder of this paper is organized as follows. Section 2 introduces the split variational inclusion problem and some preliminaries. Two new iterative algorithms and their convergence theorems for the SVIP are proposed in Section 3. Theoretical applications to other mathematical problems are given in Section 4. Finally, in Section 5, the convergence behavior of the proposed algorithms is demonstrated by a numerical example.

    Let $H_1$ and $H_2$ be Hilbert spaces, and let $B_1: H_1 \to 2^{H_1}$ and $B_2: H_2 \to 2^{H_2}$ be maximal monotone mappings. Let $A: H_1 \to H_2$ be a bounded linear operator. The split variational inclusion problem is to find a point $x^* \in H_1$ such that

    $0 \in B_1(x^*)$ and $0 \in B_2(Ax^*)$.               (SVIP)

    The solution set of the SVIP is denoted by $\Omega$, i.e.,

    $\Omega := \{x^* \in H_1 : 0 \in B_1(x^*) \text{ and } 0 \in B_2(Ax^*)\}$.

    To standardize notation, $\to$ and $\rightharpoonup$ stand for strong convergence and weak convergence, respectively. The symbol $\mathrm{Fix}(S)$ denotes the fixed point set of a mapping $S$, and $\omega_w(x_n)$ represents the set of weak cluster points of a sequence $\{x_n\}$. Let $H$ be a Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and the norm $\|\cdot\|$ induced by the inner product. Let $B: H \to 2^H$ be a set-valued mapping with domain $D(B) = \{x \in H : B(x) \neq \emptyset\}$ and graph $G(B) = \{(x, w) \in H \times H : x \in D(B),\ w \in B(x)\}$. Recall that a mapping $B: H \to 2^H$ is monotone if and only if $\langle x - y, w - v \rangle \geq 0$ for any $w \in B(x)$ and $v \in B(y)$. A monotone mapping $B: H \to 2^H$ is maximal if its graph $G(B)$ is not properly contained in the graph of any other monotone mapping. Equivalently, $B$ is maximal monotone if and only if for any $(y, v) \in H \times H$, $\langle x - y, w - v \rangle \geq 0$ for all $(x, w) \in G(B)$ implies $v \in B(y)$. In addition, the metric projection from $H$ onto a nonempty closed convex subset $C$, denoted $P_C$, is defined by $P_C x = \arg\min_{y \in C} \|x - y\|$, $x \in H$. The following properties of $P_C$ hold for all $y \in C$:

    $\langle P_C x - x, P_C x - y \rangle \leq 0$ and $\|y - P_C x\|^2 + \|x - P_C x\|^2 \leq \|x - y\|^2$.
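    As a concrete illustration (our addition, not part of the original paper), the two projection inequalities above can be checked numerically for the closed Euclidean ball, one of the few sets for which $P_C$ has a simple closed form; the radius and test points below are illustrative.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball C = {y : ||y|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

x = np.array([3.0, 4.0])   # a point outside the unit ball (||x|| = 5)
y = np.array([0.5, 0.0])   # an arbitrary point of C
p = proj_ball(x)           # P_C x = [0.6, 0.8]

# variational characterization: <P_C x - x, P_C x - y> <= 0 for all y in C
assert np.dot(p - x, p - y) <= 1e-12
# ||y - P_C x||^2 + ||x - P_C x||^2 <= ||x - y||^2
assert np.linalg.norm(y - p) ** 2 + np.linalg.norm(x - p) ** 2 \
       <= np.linalg.norm(x - y) ** 2 + 1e-12
```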

    Lemma 2.1 ([31,32]). The resolvent mapping $J^B_\beta$ of a maximal monotone mapping $B$ with $\beta > 0$ is defined as $J^B_\beta(x) = (I + \beta B)^{-1}(x)$, $x \in H$. The following properties of $J^B_\beta$ hold.

    (1) The mapping $J^B_\beta$ is single-valued and firmly nonexpansive;

    (2) The fixed point set of $J^B_\beta$ coincides with

    $B^{-1}(0) = \{x \in D(B) : 0 \in B(x)\}$.
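    For intuition, here is a sketch we add (with an illustrative matrix, not taken from the paper): for the linear maximal monotone mapping $B(x) = Mx$ with $M$ symmetric positive semidefinite, the resolvent is the matrix $(I + \beta M)^{-1}$, its fixed points are exactly the zeros of $B$ (the nullspace of $M$), and firm nonexpansiveness can be checked directly.

```python
import numpy as np

beta = 2.0
M = np.array([[2.0, 0.0],
              [0.0, 0.0]])                # symmetric PSD => B(x) = Mx is maximal monotone
J = np.linalg.inv(np.eye(2) + beta * M)   # resolvent J_beta^B = (I + beta*M)^{-1}

# Fix(J_beta^B) = B^{-1}(0): vectors in the nullspace of M are fixed
v = np.array([0.0, 1.0])
assert np.allclose(J @ v, v)

# firm nonexpansiveness: ||Jx - Jy||^2 <= <Jx - Jy, x - y>
x, y = np.array([3.0, 4.0]), np.array([-1.0, 2.0])
d = J @ x - J @ y
assert np.dot(d, d) <= np.dot(d, x - y) + 1e-12
```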

    Lemma 2.2 ([33]). Let $B: D(B) \subset H \to 2^H$ be a maximal monotone mapping. For any $0 < \beta \leq r$, we have

    $\|x - J^B_\beta(x)\| \leq 2\|x - J^B_r(x)\|$, $\forall x \in H$.

    Definition 2.3. The mapping $S: H \to H$ is said to be

    (1) nonexpansive if $\|Sx - Sy\| \leq \|x - y\|$, $\forall x, y \in H$;

    (2) firmly nonexpansive if $\|Sx - Sy\|^2 \leq \langle Sx - Sy, x - y \rangle$, $\forall x, y \in H$.

    Remark 2.4. If $S$ is a firmly nonexpansive mapping, then it is also nonexpansive and $I - S$ is a firmly nonexpansive mapping.

    Lemma 2.5 ([32]). Let $C$ be a nonempty closed convex subset of $H$ and $S: C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(S) \neq \emptyset$. Then $I - S$ is demiclosed at zero, that is, for any sequence $\{x_n\}$ in $C$ satisfying $x_n \rightharpoonup x$ and $(I - S)x_n \to 0$, it follows that $x \in \mathrm{Fix}(S)$.

    Lemma 2.6 ([34]). Let $C$ be a nonempty closed convex subset of $H$. Let $\{x_n\}$ be a sequence in $H$ and $u = P_C v$, $v \in H$. If $\omega_w(x_n) \subset C$ and $\|x_n - v\| \leq \|u - v\|$ for all $n$, then $\{x_n\}$ converges strongly to $u$.

    Combining the inertial technique with projection methods, two types of projection algorithms are given for approximating a solution of the split variational inclusion problem. Throughout this section, we assume that the following conditions are satisfied:

    (C1) $H_1$, $H_2$ are two Hilbert spaces and $A: H_1 \to H_2$ is a bounded linear operator with adjoint operator $A^*$;

    (C2) $B_1: H_1 \to 2^{H_1}$ and $B_2: H_2 \to 2^{H_2}$ are two set-valued maximal monotone mappings.

    An inertial hybrid projection algorithm and an inertial shrinking projection algorithm are introduced below, and the strong convergence of these algorithms is guaranteed under the following parameter conditions:

    (P1) $\{\alpha_n\} \subset [a, b] \subset (-\infty, +\infty)$ and $\{\beta_n\} \subset (0, +\infty)$ with $\inf_n \{\beta_n\} \geq \beta > 0$;

    (P2) If $Az_n \notin B_2^{-1}(0)$, the stepsize $\gamma_n = \sigma_n \frac{\|(I - J^{B_2}_{\beta_n})Az_n\|^2}{\|A^*(I - J^{B_2}_{\beta_n})Az_n\|^2}$ with $0 < c \leq \sigma_n \leq d < 2$. Otherwise, $\gamma_n = 0$.
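    The rule (P2) is computed directly from the residual $(I - J^{B_2}_{\beta_n})Az_n$ and uses no knowledge of $\|A\|$. The following Python sketch is our illustration (the argument `J2` stands for a user-supplied resolvent of $B_2$, and the tolerance is an implementation choice):

```python
import numpy as np

def adaptive_stepsize(A, J2, z, sigma=1.5, tol=1e-12):
    """gamma_n from (P2): sigma * ||r||^2 / ||A^T r||^2 with r = (I - J2)(A z),
    and gamma_n = 0 when A z is already (numerically) a zero of B2."""
    r = A @ z - J2(A @ z)              # residual (I - J^{B2}_beta) A z
    if np.linalg.norm(r) <= tol:       # Az in B2^{-1}(0): take gamma_n = 0
        return 0.0
    s = A.T @ r                        # A^* r (real case: transpose)
    return sigma * np.dot(r, r) / np.dot(s, s)

# sanity check: with A = I and J2 = 0.5*I, r = 0.5*z, A^T r = r, so gamma = sigma
gamma = adaptive_stepsize(np.eye(3), lambda y: 0.5 * y, np.ones(3))
assert abs(gamma - 1.5) < 1e-12
```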

    Algorithm 3.1. Given appropriate parameter sequences $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\gamma_n\}$, for any $x_0, x_1 \in H_1$, the sequence $\{x_n\}$ is constructed by the following iterative form:

    $\begin{cases} z_n = x_n + \alpha_n (x_n - x_{n-1}), \\ u_n = J^{B_1}_{\beta_n}\big(z_n - \gamma_n A^*(I - J^{B_2}_{\beta_n})Az_n\big), \\ C_n = \{x \in H_1 : \|u_n - x\|^2 \leq \|z_n - x\|^2 - \theta_n\}, \\ Q_n = \{x \in H_1 : \langle x_n - x, x_1 - x_n \rangle \geq 0\}, \\ x_{n+1} = P_{C_n \cap Q_n} x_1, \quad n \geq 1, \end{cases}$

    where

    $\theta_n = \gamma_n \big( 2\|(I - J^{B_2}_{\beta_n})Az_n\|^2 - \gamma_n \|A^*(I - J^{B_2}_{\beta_n})Az_n\|^2 \big)$.
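    Although $C_n$ is written with two squared norms, expanding both sides shows it is a half-space, and $Q_n$ is a half-space as well, so the projections required by Algorithm 3.1 are computable (projecting onto the intersection of two half-spaces also admits a closed form). A minimal sketch of the half-space reading of $C_n$, with illustrative vectors of our choosing:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto the half-space {y : <a, y> <= b}."""
    s = np.dot(a, x) - b
    return x if s <= 0 else x - (s / np.dot(a, a)) * a

# C_n = {x : ||u - x||^2 <= ||z - x||^2 - theta}
#     <=> <2(z - u), x> <= ||z||^2 - ||u||^2 - theta   (a half-space)
z, u, theta = np.array([2.0, 0.0]), np.array([1.0, 0.0]), 0.5
a = 2 * (z - u)
b = np.dot(z, z) - np.dot(u, u) - theta

x = np.array([5.0, 0.0])
p = proj_halfspace(x, a, b)
assert np.dot(a, p) <= b + 1e-12                  # p lies in the half-space
# equivalently, p satisfies the defining inequality of C_n
assert np.linalg.norm(u - p) ** 2 <= np.linalg.norm(z - p) ** 2 - theta + 1e-12
```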

    Lemma 3.1. Assume that (C1)-(C2) hold. For any $\gamma_n > 0$, $\beta_n > 0$, set $u_n = J^{B_1}_{\beta_n}(z_n - \gamma_n A^*(I - J^{B_2}_{\beta_n})Az_n)$, $n \geq 1$. Then, for all $x \in \Omega$,

    $\|u_n - x\|^2 \leq \|z_n - x\|^2 - \gamma_n \big( 2\|(I - J^{B_2}_{\beta_n})Az_n\|^2 - \gamma_n \|A^*(I - J^{B_2}_{\beta_n})Az_n\|^2 \big)$.

    Proof. Choose any $x \in \Omega$; then $x \in B_1^{-1}(0)$ and $Ax \in B_2^{-1}(0)$. Since $J^{B_1}_{\beta_n}$, $J^{B_2}_{\beta_n}$ and $I - J^{B_2}_{\beta_n}$ are firmly nonexpansive mappings, we have

    $\begin{aligned} \|u_n - x\|^2 &\leq \|z_n - \gamma_n A^*(I - J^{B_2}_{\beta_n})Az_n - x\|^2 \\ &= \|z_n - x\|^2 + \gamma_n^2 \|A^*(I - J^{B_2}_{\beta_n})Az_n\|^2 - 2\gamma_n \langle z_n - x, A^*(I - J^{B_2}_{\beta_n})Az_n \rangle \\ &\leq \|z_n - x\|^2 + \gamma_n^2 \|A^*(I - J^{B_2}_{\beta_n})Az_n\|^2 - 2\gamma_n \|(I - J^{B_2}_{\beta_n})Az_n\|^2 \\ &= \|z_n - x\|^2 - \gamma_n \big( 2\|(I - J^{B_2}_{\beta_n})Az_n\|^2 - \gamma_n \|A^*(I - J^{B_2}_{\beta_n})Az_n\|^2 \big). \end{aligned}$

    The proof is complete.

    Theorem 3.2. Assume that (C1)-(C2) and (P1)-(P2) hold. If the solution set $\Omega$ is nonempty, then $\{x_n\}$ generated by Algorithm 3.1 converges strongly to $x^* = P_\Omega x_1 \in \Omega$.

    Proof. Step 1: Firstly, we show that $P_{C_n \cap Q_n}$ is well defined and $\Omega \subset C_n \cap Q_n$.

    From the definitions of $C_n$ and $Q_n$, it is obvious that the sets $C_n$ and $Q_n$ are closed and convex, which implies that $P_{C_n \cap Q_n}$ is well defined. For any $p \in \Omega$, it follows from Lemma 3.1 that $\Omega \subset C_n$. In addition, $Q_1 = \{x \in H_1 : \langle x_1 - x, x_1 - x_1 \rangle \geq 0\} = H_1$, so $\Omega \subset Q_1$. Further, suppose $\Omega \subset C_{n-1} \cap Q_{n-1}$; using the property of the metric projection and $x_n = P_{C_{n-1} \cap Q_{n-1}} x_1$, we get

    $\langle x_n - x, x_1 - x_n \rangle \geq 0$, $\forall x \in C_{n-1} \cap Q_{n-1}$;
    $\langle x_n - p, x_1 - x_n \rangle \geq 0$, $\forall p \in \Omega$.

    This implies that $\Omega \subset Q_n$. Hence, $\Omega \subset C_n \cap Q_n$, $\forall n \geq 1$.

    Step 2: Next, we show that the iterative sequence $\{x_n\}$ is bounded and $\|x_{n+1} - x_n\| \to 0$ as $n \to \infty$.

    Since $\Omega$ is a nonempty closed convex set, there exists a point $x^* = P_\Omega x_1 \in \Omega$. Combining $x_{n+1} = P_{C_n \cap Q_n} x_1$ with $\Omega \subset C_n \cap Q_n$, we have $\|x_1 - x_{n+1}\| \leq \|x_1 - x^*\|$. Accordingly, the sequence $\{\|x_1 - x_n\|\}$ is bounded, i.e., the sequence $\{x_n\}$ is bounded. From the definition of $Q_n$ and $x_{n+1} = P_{C_n \cap Q_n} x_1 \in Q_n$, we get $x_n = P_{Q_n} x_1$ and $\|x_1 - x_n\| \leq \|x_1 - x_{n+1}\|$. These indicate that $\lim_{n \to \infty} \|x_1 - x_n\|$ exists. Further, it follows from the property of the metric projection $P_{Q_n}$ that

    $\|x_n - x_{n+1}\|^2 \leq \|x_1 - x_{n+1}\|^2 - \|x_1 - x_n\|^2$.

    This implies $\lim_{n \to \infty} \|x_n - x_{n+1}\| = 0$.

    Step 3: Lastly, we prove that the sequence $\{x_n\}$ converges strongly to $x^* = P_\Omega x_1$.

    From the boundedness of $\{x_n\}$, for any $q \in \omega_w(x_n)$ there exists a subsequence $\{x_{n_l}\}$ of $\{x_n\}$ that converges weakly to $q$. Furthermore, $\|z_n - x_n\| = |\alpha_n| \|x_n - x_{n-1}\| \to 0$ as $n \to \infty$. This implies that $\{z_n\}$ is bounded and $z_{n_l} \rightharpoonup q$. From (P2) and Algorithm 3.1, we have $\|u_n - x_{n+1}\|^2 \leq \|z_n - x_{n+1}\|^2 - \theta_n \leq \|z_n - x_{n+1}\|^2$. In addition,

    $\|u_n - z_n\| \leq \|u_n - x_{n+1}\| + \|x_{n+1} - x_n\| + \|x_n - z_n\| \leq \|z_n - x_{n+1}\| + \|x_n - x_{n+1}\| + \|x_n - z_n\| \leq 2\|z_n - x_n\| + 2\|x_n - x_{n+1}\| \to 0$, as $n \to \infty$.

    Hence, the sequence $\{u_n\}$ is bounded. Using Lemma 3.1, for any $p \in \Omega$,

    $\theta_n \leq \|z_n - p\|^2 - \|u_n - p\|^2 = (\|z_n - p\| - \|u_n - p\|)(\|z_n - p\| + \|u_n - p\|) \leq \|z_n - u_n\| (\|z_n - p\| + \|u_n - p\|) \to 0$, as $n \to \infty$.

    If $Az_n \notin B_2^{-1}(0)$, the definition of $\theta_n$ gives $\lim_{n \to \infty} \|(I - J^{B_2}_{\beta_n})Az_n\| = 0$. On the other hand, from the definition of $u_n$ and the nonexpansiveness of $J^{B_1}_{\beta_n}$, we obtain

    $\|u_n - J^{B_1}_{\beta_n} z_n\| = \|J^{B_1}_{\beta_n}(z_n - \gamma_n A^*(I - J^{B_2}_{\beta_n})Az_n) - J^{B_1}_{\beta_n} z_n\| \leq \gamma_n \|A^*(I - J^{B_2}_{\beta_n})Az_n\| \to 0$, as $n \to \infty$.

    Therefore, we also have $\lim_{n \to \infty} \|z_n - J^{B_1}_{\beta_n} z_n\| = 0$. Further, using Lemma 2.2 and $\inf_n \{\beta_n\} \geq \beta > 0$, we have

    $\|z_n - J^{B_1}_{\beta} z_n\| \leq 2\|z_n - J^{B_1}_{\beta_n} z_n\| \to 0$ and $\|(I - J^{B_2}_{\beta})Az_n\| \leq 2\|(I - J^{B_2}_{\beta_n})Az_n\| \to 0$.

    Since $A$ is a bounded linear operator, we get $Az_{n_l} \rightharpoonup Aq$. By Remark 2.4 and Lemma 2.5, it follows that $q \in \mathrm{Fix}(J^{B_1}_{\beta})$ and $Aq \in \mathrm{Fix}(J^{B_2}_{\beta})$, that is, $q \in \Omega$. Meanwhile, if $Az_n \in B_2^{-1}(0)$, we obtain the same result. In summary, we have $\omega_w(x_n) \subset \Omega$ and $\|x_n - x_1\| \leq \|x^* - x_1\|$. By virtue of Lemma 2.6, we conclude that $\{x_n\}$ converges strongly to $x^* = P_\Omega x_1$.

    Algorithm 3.2. Given appropriate parameter sequences $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\gamma_n\}$, choose any $x_0, x_1 \in H_1$ and set $C_1 := H_1$; the sequence $\{x_n\}$ is constructed by the following iterative process:

    $\begin{cases} z_n = x_n + \alpha_n (x_n - x_{n-1}), \\ u_n = J^{B_1}_{\beta_n}\big(z_n - \gamma_n A^*(I - J^{B_2}_{\beta_n})Az_n\big), \\ x_{n+1} = P_{C_{n+1}} x_1, \quad n \geq 1, \end{cases}$

    where

    $C_{n+1} = \{x \in C_n : \|u_n - x\|^2 \leq \|z_n - x\|^2 - \theta_n\}$

    and $\theta_n$ is defined as in Algorithm 3.1.

    Theorem 3.3. Assume that (C1)-(C2) and (P1)-(P2) hold. If the solution set $\Omega$ is nonempty, then the sequence $\{x_n\}$ generated by Algorithm 3.2 converges strongly to $x^* = P_\Omega x_1 \in \Omega$.

    Proof. Firstly, it is obvious that each set $C_n$ ($n \geq 1$) is closed and convex, so $P_{C_n}$ is well defined. By Lemma 3.1, we easily get $\Omega \subset C_n$. Using $x_n = P_{C_n} x_1$, $x_{n+1} = P_{C_{n+1}} x_1$ and $C_{n+1} \subset C_n$, we have $\|x_n - x_1\| \leq \|x_{n+1} - x_1\|$, which implies that $\{\|x_n - x_1\|\}$ is nondecreasing. Furthermore, $\|x_n - x_1\| \leq \|p - x_1\|$ for any $p \in \Omega$, that is, $\{x_n\}$ is bounded. These imply that $\lim_{n \to \infty} \|x_n - x_1\|$ exists. Similarly to the proof of Theorem 3.2, we can prove that the sequence $\{x_n\}$ converges strongly to $x^* = P_\Omega x_1$.

    In this section, we present several interesting special cases of the split variational inclusion problem and apply Algorithms 3.1 and 3.2 to these problems.

    Let $C$ and $Q$ be nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively. Let $F: H_1 \to H_1$ and $G: H_2 \to H_2$ be given operators and $A: H_1 \to H_2$ be a bounded linear operator. The split variational inequality problem is to find a point $x^* \in C$ such that

    $\langle F(x^*), x - x^* \rangle \geq 0$, $\forall x \in C$, and $\langle G(Ax^*), y - Ax^* \rangle \geq 0$, $\forall y \in Q$.

    In particular, when $H_1 = H_2$, $F = G$ and $A = I$, the split variational inequality problem reduces to the classical variational inequality problem, which is to find a point $x^* \in C$ such that $\langle F(x^*), x - x^* \rangle \geq 0$, $\forall x \in C$; its solution set is denoted by $VI(F, C)$. Then the split variational inequality problem can be formulated as

    find $x^* \in C$ such that $x^* \in VI(F, C)$ and $Ax^* \in VI(G, Q)$. (4.1)

    The solution set of problem (4.1) is denoted by $\Theta$. Recall that the normal cone $N_C(x)$ of $C$ at a point $x \in C$ is defined by

    $N_C(x) = \{z \in H : \langle z, v - x \rangle \leq 0,\ \forall v \in C\}$.

    Further, the set-valued mapping $S_F$ related to the normal cone $N_C(x)$ is defined by

    $S_F(x) := \begin{cases} F(x) + N_C(x), & x \in C, \\ \emptyset, & \text{otherwise.} \end{cases}$

    In this setting, if $F$ is an $\alpha$-inverse strongly monotone operator (i.e., $\langle F(x) - F(z), x - z \rangle \geq \alpha \|F(x) - F(z)\|^2$ for any $x, z \in C$), then $S_F$ is a maximal monotone mapping. More importantly, $x \in VI(F, C)$ if and only if $0 \in S_F(x)$. Let $F$ and $G$ be $\alpha$-inverse strongly monotone operators and let $S_F$ and $S_G$ be the set-valued mappings associated with $F$ and $G$, respectively. Then the split variational inequality problem is equivalent to the following form:

    find $x^* \in H_1$ such that $0 \in S_F(x^*)$ and $0 \in S_G(Ax^*)$.

    Therefore, the following theorems arise naturally for solving the split variational inequality problem.

    Theorem 4.1. Choose real number sequences $\{\alpha_n\} \subset [a, b] \subset (-\infty, +\infty)$, $\{\sigma_n\} \subset [c, d] \subset (0, 2)$ and $\{\beta_n\} \subset (0, +\infty)$ with $\inf_n \{\beta_n\} \geq \beta > 0$. For any $x_0, x_1 \in H_1$, let the sequence $\{x_n\}$ be constructed by the following iterative form:

    $\begin{cases} z_n = x_n + \alpha_n (x_n - x_{n-1}), \\ u_n = J^{S_F}_{\beta_n}\big(z_n - \gamma_n A^*(I - J^{S_G}_{\beta_n})Az_n\big), \\ C_n = \{x \in H_1 : \|u_n - x\|^2 \leq \|z_n - x\|^2 - \hat{\theta}_n\}, \\ Q_n = \{x \in H_1 : \langle x_n - x, x_1 - x_n \rangle \geq 0\}, \\ x_{n+1} = P_{C_n \cap Q_n} x_1, \quad n \geq 1, \end{cases}$ (4.2)

    where $\hat{\theta}_n := \gamma_n \big( 2\|(I - J^{S_G}_{\beta_n})Az_n\|^2 - \gamma_n \|A^*(I - J^{S_G}_{\beta_n})Az_n\|^2 \big)$ and

    $\gamma_n := \begin{cases} \sigma_n \frac{\|(I - J^{S_G}_{\beta_n})Az_n\|^2}{\|A^*(I - J^{S_G}_{\beta_n})Az_n\|^2}, & Az_n \notin VI(G, Q), \\ 0, & \text{otherwise.} \end{cases}$

    If the solution set $\Theta$ is nonempty, then the iterative sequence $\{x_n\}$ generated by algorithm (4.2) converges strongly to $x^* = P_\Theta x_1$.

    Theorem 4.2. Choose real number sequences $\{\alpha_n\} \subset [a, b] \subset (-\infty, +\infty)$, $\{\sigma_n\} \subset [c, d] \subset (0, 2)$ and $\{\beta_n\} \subset (0, +\infty)$ with $\inf_n \{\beta_n\} \geq \beta > 0$. For any $x_0, x_1 \in H_1$ and $C_1 := H_1$, let the sequence $\{x_n\}$ be generated by the following algorithm:

    $\begin{cases} z_n = x_n + \alpha_n (x_n - x_{n-1}), \\ u_n = J^{S_F}_{\beta_n}\big(z_n - \gamma_n A^*(I - J^{S_G}_{\beta_n})Az_n\big), \\ C_{n+1} = \{x \in C_n : \|u_n - x\|^2 \leq \|z_n - x\|^2 - \hat{\theta}_n\}, \\ x_{n+1} = P_{C_{n+1}} x_1, \quad n \geq 1, \end{cases}$ (4.3)

    where $\hat{\theta}_n$ and $\gamma_n$ are defined as in algorithm (4.2). If the solution set $\Theta$ is nonempty, then the sequence $\{x_n\}$ generated by algorithm (4.3) converges strongly to $x^* = P_\Theta x_1$.

    Let $X$ and $Y$ be Hilbert spaces. A bifunction $L: X \times Y \to \mathbb{R} \cup \{-\infty, +\infty\}$ is convex-concave if $L(\cdot, y)$ is convex for any $y \in Y$ and $L(x, \cdot)$ is concave for any $x \in X$. The operator $T_L$ is defined as follows:

    $T_L(x, y) = \big( \partial_1 L(x, y),\ \partial_2 (-L)(x, y) \big)$,

    where $\partial_1$ denotes the subdifferential of $L$ with respect to $x$ and $\partial_2$ the subdifferential of $-L$ with respect to $y$. It is worth noting that $T_L$ is maximal monotone if and only if $L$ is closed and proper; for details, see [35]. Naturally, the zeros of $T_L$ coincide with the saddle points of $L$. Now let $X_i$, $Y_i$ ($i = 1, 2$) be Hilbert spaces. Let $A: X_1 \times Y_1 \to X_2 \times Y_2$ be a bounded linear operator with adjoint operator $A^*$. Let $L_1: X_1 \times Y_1 \to \mathbb{R} \cup \{-\infty, +\infty\}$ and $L_2: X_2 \times Y_2 \to \mathbb{R} \cup \{-\infty, +\infty\}$ be closed proper convex-concave bifunctions. Then the split saddle point problem is to find a point $(x^*, y^*) \in X_1 \times Y_1$ such that

    $(x^*, y^*) \in \arg\min_{x \in X_1} \max_{y \in Y_1} L_1(x, y)$

    and

    $A(x^*, y^*) \in \arg\min_{z \in X_2} \max_{w \in Y_2} L_2(z, w)$.

    For convenience, the solution set of the split saddle point problem is denoted by $\Phi$. Letting $H_i = X_i \times Y_i$ ($i = 1, 2$) and $B_i = T_{L_i}$ ($i = 1, 2$), the split saddle point problem can be regarded as a special case of the split variational inclusion problem, and the following theorems can be derived naturally.

    Theorem 4.3. Let real number sequences $\{\alpha_n\} \subset [a, b] \subset (-\infty, +\infty)$, $\{\sigma_n\} \subset [c, d] \subset (0, 2)$ and $\{\beta_n\} \subset (0, +\infty)$ with $\inf_n \{\beta_n\} \geq \beta > 0$. For any initial points $x_0, x_1 \in H_1$, the sequence $\{x_n\}$ is obtained by the following process:

    $\begin{cases} z_n = x_n + \alpha_n (x_n - x_{n-1}), \\ u_n = J^{T_{L_1}}_{\beta_n}\big(z_n - \gamma_n A^*(I - J^{T_{L_2}}_{\beta_n})Az_n\big), \\ C_n = \{x \in H_1 : \|u_n - x\|^2 \leq \|z_n - x\|^2 - \varrho_n\}, \\ Q_n = \{x \in H_1 : \langle x_n - x, x_1 - x_n \rangle \geq 0\}, \\ x_{n+1} = P_{C_n \cap Q_n} x_1, \quad n \geq 1, \end{cases}$ (4.4)

    where $\varrho_n := \gamma_n \big( 2\|(I - J^{T_{L_2}}_{\beta_n})Az_n\|^2 - \gamma_n \|A^*(I - J^{T_{L_2}}_{\beta_n})Az_n\|^2 \big)$ and

    $\gamma_n := \begin{cases} \sigma_n \frac{\|(I - J^{T_{L_2}}_{\beta_n})Az_n\|^2}{\|A^*(I - J^{T_{L_2}}_{\beta_n})Az_n\|^2}, & Az_n \notin \arg\min\max_{(z,w) \in X_2 \times Y_2} L_2(z, w), \\ 0, & \text{otherwise.} \end{cases}$

    If the solution set $\Phi$ is nonempty, then the iterative sequence $\{x_n\}$ generated by algorithm (4.4) converges strongly to $x^* = P_\Phi x_1$.

    Theorem 4.4. Let real number sequences $\{\alpha_n\} \subset [a, b] \subset (-\infty, +\infty)$, $\{\sigma_n\} \subset [c, d] \subset (0, 2)$ and $\{\beta_n\} \subset (0, +\infty)$ with $\inf_n \{\beta_n\} \geq \beta > 0$. For any initial points $x_0, x_1 \in H_1$ and $C_1 := H_1$, the sequence $\{x_n\}$ is constructed by the following iterative form:

    $\begin{cases} z_n = x_n + \alpha_n (x_n - x_{n-1}), \\ u_n = J^{T_{L_1}}_{\beta_n}\big(z_n - \gamma_n A^*(I - J^{T_{L_2}}_{\beta_n})Az_n\big), \\ C_{n+1} = \{x \in C_n : \|u_n - x\|^2 \leq \|z_n - x\|^2 - \varrho_n\}, \\ x_{n+1} = P_{C_{n+1}} x_1, \quad n \geq 1, \end{cases}$ (4.5)

    where $\varrho_n$ and $\gamma_n$ are defined as in algorithm (4.4). If the solution set $\Phi$ is nonempty, then $\{x_n\}$ generated by algorithm (4.5) converges strongly to $x^* = P_\Phi x_1$.

    Let $H_1$ and $H_2$ be Hilbert spaces. Let $\phi: H_1 \to \mathbb{R}$ and $\psi: H_2 \to \mathbb{R}$ be proper lower semicontinuous convex functions and $A: H_1 \to H_2$ be a bounded linear operator. The split minimization problem is to find $x^* \in H_1$ such that

    $x^* \in \arg\min_{x \in H_1} \phi(x)$ and $Ax^* \in \arg\min_{y \in H_2} \psi(y)$.

    It is well known that $x^* \in \arg\min_{x \in H_1} \phi(x)$ if and only if $0 \in \partial\phi(x^*)$, where $\partial\phi$ is the subdifferential of $\phi$ defined by

    $\partial\phi(x) := \{\hat{x} \in H_1 : \phi(x) + \langle z - x, \hat{x} \rangle \leq \phi(z),\ \forall z \in H_1\}$.

    Recall that the proximal operator $\mathrm{prox}_{\beta,\phi}$ of $\phi$ is defined as follows:

    $\mathrm{prox}_{\beta,\phi}(x) = \arg\min_{z \in H_1} \Big\{ \phi(z) + \frac{1}{2\beta}\|z - x\|^2 \Big\}$, $\beta > 0$.

    It is very important that $\mathrm{prox}_{\beta,\phi}(x) = (I + \beta \partial\phi)^{-1}(x) = J^{\partial\phi}_\beta(x)$. In addition, $\partial\phi$ is a maximal monotone mapping and $\mathrm{prox}_{\beta,\phi}$ is a firmly nonexpansive mapping. In view of this, when $B_1 = \partial\phi$ and $B_2 = \partial\psi$ in (SVIP), the split variational inclusion problem is transformed into the split minimization problem. Based on Theorems 3.2 and 3.3, we also have the following results.
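    As a concrete instance (our addition; $\phi = \|\cdot\|_1$ is a standard textbook example, not one used in the paper), the proximal operator of the $\ell_1$-norm is the componentwise soft-thresholding map, which could play the role of $\mathrm{prox}_{\beta,\phi}$ or $\mathrm{prox}_{\beta,\psi}$ in algorithms (4.6) and (4.7):

```python
import numpy as np

def prox_l1(x, beta):
    """prox_{beta, phi} for phi(x) = ||x||_1, i.e. soft thresholding:
    argmin_z { ||z||_1 + (1 / (2 * beta)) * ||z - x||^2 }, componentwise."""
    return np.sign(x) * np.maximum(np.abs(x) - beta, 0.0)

x = np.array([3.0, -0.5, 1.0])
res = prox_l1(x, 1.0)
# each component shrinks toward 0 by beta; components with |x_i| <= beta vanish
assert np.allclose(res, [2.0, 0.0, 0.0])
```

    Like any resolvent of a maximal monotone mapping, this map is single-valued and firmly nonexpansive, consistent with Lemma 2.1.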

    Theorem 4.5. Given real number sequences $\{\alpha_n\} \subset [a, b] \subset (-\infty, +\infty)$, $\{\sigma_n\} \subset [c, d] \subset (0, 2)$ and $\beta > 0$. For any $x_0, x_1 \in H_1$, the sequence $\{x_n\}$ is constructed by the following iterative form:

    $\begin{cases} z_n = x_n + \alpha_n (x_n - x_{n-1}), \\ u_n = \mathrm{prox}_{\beta,\phi}\big(z_n - \gamma_n A^*(I - \mathrm{prox}_{\beta,\psi})Az_n\big), \\ C_n = \{x \in H_1 : \|u_n - x\|^2 \leq \|z_n - x\|^2 - \chi_n\}, \\ Q_n = \{x \in H_1 : \langle x_n - x, x_1 - x_n \rangle \geq 0\}, \\ x_{n+1} = P_{C_n \cap Q_n} x_1, \quad n \geq 1, \end{cases}$ (4.6)

    where $\chi_n := \gamma_n \big( 2\|(I - \mathrm{prox}_{\beta,\psi})Az_n\|^2 - \gamma_n \|A^*(I - \mathrm{prox}_{\beta,\psi})Az_n\|^2 \big)$ and

    $\gamma_n := \begin{cases} \sigma_n \frac{\|(I - \mathrm{prox}_{\beta,\psi})Az_n\|^2}{\|A^*(I - \mathrm{prox}_{\beta,\psi})Az_n\|^2}, & Az_n \notin \arg\min_{y \in H_2} \psi(y), \\ 0, & \text{otherwise.} \end{cases}$

    If the solution set $\Upsilon$ of the split minimization problem is nonempty, then $\{x_n\}$ generated by algorithm (4.6) converges strongly to $x^* = P_\Upsilon x_1$.

    Theorem 4.6. Given real number sequences $\{\alpha_n\} \subset [a, b] \subset (-\infty, +\infty)$, $\{\sigma_n\} \subset [c, d] \subset (0, 2)$ and $\beta > 0$. For any $x_0, x_1 \in H_1$ and $C_1 := H_1$, the sequence $\{x_n\}$ is constructed by the following iterative form:

    $\begin{cases} z_n = x_n + \alpha_n (x_n - x_{n-1}), \\ u_n = \mathrm{prox}_{\beta,\phi}\big(z_n - \gamma_n A^*(I - \mathrm{prox}_{\beta,\psi})Az_n\big), \\ C_{n+1} = \{x \in C_n : \|u_n - x\|^2 \leq \|z_n - x\|^2 - \chi_n\}, \\ x_{n+1} = P_{C_{n+1}} x_1, \quad n \geq 1, \end{cases}$ (4.7)

    where $\chi_n$ and $\gamma_n$ are defined as in algorithm (4.6). If the solution set $\Upsilon$ of the split minimization problem is nonempty, then the iterative sequence $\{x_n\}$ generated by algorithm (4.7) converges strongly to $x^* = P_\Upsilon x_1$.

    Remark 4.7. The above results show that the split variational inclusion problem, which includes the split variational inequality problem, the split saddle point problem and the split minimization problem as special cases, is quite general. Using the same methods as in Theorems 3.2 and 3.3, the strong convergence in Theorems 4.1–4.6 is obtained under the corresponding conditions in Subsections 4.1, 4.2 and 4.3.

    In this section, a numerical example is provided to illustrate the effectiveness of the convergence behavior of Algorithms 3.1 and 3.2. All codes were written in Matlab 2018a on an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz computer with 8.00 GB of RAM. We compare our algorithms with the existing result below.

    Theorem 5.1. (Byrne et al. [4, Algorithm 4.4]) Let $H_1$ and $H_2$ be Hilbert spaces and $A: H_1 \to H_2$ be a bounded linear operator with adjoint operator $A^*$. Let $B_1: H_1 \to 2^{H_1}$ and $B_2: H_2 \to 2^{H_2}$ be two set-valued maximal monotone mappings. Take any initial point $x_1 \in H_1$, $\delta_n \in (0, 1)$ and $\beta > 0$; the iterative sequence $\{x_n\}$ is generated by the following scheme:

    $x_{n+1} = \delta_n x_1 + (1 - \delta_n) J^{B_1}_{\beta}\big(x_n - \gamma A^*(I - J^{B_2}_{\beta})Ax_n\big)$, $n \geq 1$.

    If $\{\delta_n\}$ satisfies $\lim_{n \to \infty} \delta_n = 0$ and $\sum_{n=1}^{\infty} \delta_n = \infty$, and $0 < \gamma < 2/\|A^*A\|$, then the iterative sequence $\{x_n\}$ converges strongly to a point $x^* \in \Omega$.
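    To make the comparison concrete, the scheme of Theorem 5.1 can be sketched in a few lines for linear mappings of the form used in Example 5.2 below ($B_i(x) = A_i^* A_i x$, whose resolvent is the matrix $(I + \beta A_i^* A_i)^{-1}$). This is our illustrative reimplementation with arbitrary seed and dimension, not the authors' Matlab code:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20
A  = rng.standard_normal((m, m))            # bounded linear operator
A1 = rng.standard_normal((m, m))
A2 = rng.standard_normal((m, m))

beta = 1.0
J1 = np.linalg.inv(np.eye(m) + beta * A1.T @ A1)   # resolvent of B1(x) = A1^T A1 x
J2 = np.linalg.inv(np.eye(m) + beta * A2.T @ A2)   # resolvent of B2(y) = A2^T A2 y
gamma = 1.5 / np.linalg.norm(A, 2) ** 2            # 0 < gamma < 2 / ||A^* A||

x1 = rng.standard_normal(m)
x = x1.copy()
for n in range(1, 301):
    delta = 1.0 / (n + 1)                          # delta_n -> 0, sum delta_n = inf
    y = x - gamma * A.T @ (A @ x - J2 @ (A @ x))   # x_n - gamma A^*(I - J2) A x_n
    x = delta * x1 + (1 - delta) * (J1 @ y)        # Halpern anchoring at x_1

# here the unique SVIP solution is x* = 0, so the error E_n = ||x_n|| should shrink
assert np.linalg.norm(x) < np.linalg.norm(x1)
```

    Note that this baseline needs the explicit constant $\gamma < 2/\|A^*A\|$, whereas the stepsize (P2) of Algorithms 3.1 and 3.2 is computed from residuals without any norm of $A$.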

    Example 5.2. Assume that $A, A_1, A_2: \mathbb{R}^m \to \mathbb{R}^m$ are matrices whose entries are generated from the standard normal distribution (mean zero, unit variance). Let $B_1: \mathbb{R}^m \to \mathbb{R}^m$ and $B_2: \mathbb{R}^m \to \mathbb{R}^m$ be defined by $B_1(x) = A_1^* A_1 x$ and $B_2(y) = A_2^* A_2 y$, respectively. Consider the problem of finding a point $\bar{x} = (\bar{x}_1, \ldots, \bar{x}_m)^T \in \mathbb{R}^m$ such that $B_1(\bar{x}) = (0, \ldots, 0)^T$ and $B_2(A\bar{x}) = (0, \ldots, 0)^T$. It is easy to see that the minimum norm solution of the above problem is $x^* = (0, \ldots, 0)^T$. Our parameter settings are as follows. In Algorithms 3.1 and 3.2, set $\alpha_n = 0.5$, $\beta_n = 1$ and $\sigma_n = 1.5$. Take $\beta = 1$, $\delta_n = \frac{1}{n+1}$ and $\gamma = \frac{1.5}{\|A^*A\|}$ in Algorithm 4.4 proposed by Byrne et al. [4]. We use $E_n = \|x_n - x^*\|$ to measure the iteration error of all algorithms. The stopping condition is a maximum of 300 iterations. Figure 1 describes the numerical behavior of all algorithms in different dimensions.

    Figure 1.  Numerical behavior of all algorithms in different dimensions.

    It can be seen from the above results that our Algorithms 3.1 and 3.2 are efficient and robust, independently of the choice of initial values and dimensions. Moreover, the convergence performance and the iteration error of the suggested Algorithm 3.2 are better than those of the existing Algorithm 4.4 in [4].

    In this paper, our innovations are twofold. One is to provide a self-adaptive stepsize selection that does not require the norm of the bounded linear operator. The other is to propose two types of projection algorithms (i.e., a hybrid projection algorithm and a shrinking projection algorithm) that combine the inertial technique with the proposed self-adaptive stepsize. Under mild constraints, the corresponding strong convergence theorems for the SVIP are obtained in the framework of Hilbert spaces. Our results also extend to the split variational inequality problem, the split saddle point problem and the split minimization problem. In terms of numerical experiments, the effectiveness of the proposed algorithms is shown by comparison with existing results.

    The authors would like to thank the referees for reading our manuscript very carefully and for their valuable comments and suggestions.

    No potential conflict of interest was reported by the authors.



    [1] Y. Censor, A. Gibali, S. Reich, Algorithms for the split variational inequality problem, Numer. Algorithms, 59 (2012), 301–323. https://doi.org/10.1007/s11075-011-9490-5 doi: 10.1007/s11075-011-9490-5
    [2] A. Moudafi, Split monotone variational inclusions, J. Optim. Theory Appl., 150 (2011), 275–283. https://doi.org/10.1007/s10957-011-9814-6 doi: 10.1007/s10957-011-9814-6
    [3] P. K. Anh, D. V. Thong, V. T. Dung, A strongly convergent Mann-type inertial algorithm for solving split variational inclusion problems, Optim. Eng., 22 (2021), 159–185. https://doi.org/10.1007/s11081-020-09501-2 doi: 10.1007/s11081-020-09501-2
    [4] C. Byrne, Y. Censor, A. Gibali, S. Reich, Weak and strong convergence of algorithms for the split common null point problem, J. Nonlinear Convex Anal., 13 (2012), 759–775.
    [5] M. Gabeleh, N. Shahzad, Existence and uniqueness of a solution for some nonlinear programming problems, Mediterr. J. Math., 12 (2015), 133–146. https://doi.org/10.1007/s00009-013-0380-z doi: 10.1007/s00009-013-0380-z
    [6] L. V. Long, D. V. Thong, V. T. Dung, New algorithms for the split variational inclusion problems and application to split feasibility problems, Optimization, 68 (2019), 2339–2367. https://doi.org/10.1080/02331934.2019.1631821 doi: 10.1080/02331934.2019.1631821
    [7] X. Qin, J. C. Yao, A viscosity iterative method for a split feasibility problem, J. Nonlinear Convex Anal., 20 (2019), 1497–1506.
    [8] A. Chambolle, P. L. Lions, Image recovery via total variation minimization and related problems, Numer. Math., 76 (1997), 167–188. https://doi.org/10.1007/s002110050258 doi: 10.1007/s002110050258
    [9] M. Nikolova, A variational approach to remove outliers and impulse noise, J. Math. Imaging Vision, 20 (2004), 99–120. https://doi.org/10.1023/B:JMIV.0000011920.58935.9c doi: 10.1023/B:JMIV.0000011920.58935.9c
    [10] Z. Zhou, B. Tan, S. Li, An accelerated hybrid projection method with a self-adaptive step-size sequence for solving split common fixed point problems, Math. Methods Appl. Sci., 44 (2021), 7294–7303. https://doi.org/10.1002/mma.7261 doi: 10.1002/mma.7261
    [11] S. S. Chang, C. F. Wen, J. C. Yao, Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces, Optimization, 67 (2018), 1183–1196. https://doi.org/10.1080/02331934.2018.1470176 doi: 10.1080/02331934.2018.1470176
    [12] S. Y. Cho, X. Qin, L. Wang, Strong convergence of a splitting algorithm for treating monotone operators, Fixed Point Theory Appl., 2014 (2014), Article ID 94.
    [13] K. R. Kazmi, S. H. Rizvi, An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping, Optim. Lett., 8 (2014), 1113–1124. https://doi.org/10.1007/s11590-013-0629-2 doi: 10.1007/s11590-013-0629-2
    [14] X. Qin, S. Y. Cho, L. Wang, A regularization method for treating zero points of the sum of two monotone operators, Fixed Point Theory Appl., 2014 (2014), Article ID 75.
    [15] K. Sitthithakerngkiet, J. Deepho, J. Martínez-Moreno, P. Kumam, Convergence analysis of a general iterative algorithm for finding a common solution of split variational inclusion and optimization problems, Numer. Algorithms, 79 (2018), 801–824. https://doi.org/10.1007/s11075-017-0462-2 doi: 10.1007/s11075-017-0462-2
    [16] B. Tan, X. Qin, J. C. Yao, Strong convergence of self-adaptive inertial algorithms for solving split variational inclusion problems with applications, J. Sci. Comput., 87 (2021), Article ID 20. https://doi.org/10.1007/s10915-021-01428-9 doi: 10.1007/s10915-021-01428-9
    [17] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), 3–11.
    [18] Z. Zhou, B. Tan, S. Li, Adaptive hybrid steepest descent algorithms involving an inertial extrapolation term for split monotone variational inclusion problems, Math. Methods Appl. Sci., 2021, in press. https://doi.org/10.1002/mma.7931
    [19] Z. Zhou, B. Tan, S. Li, An inertial shrinking projection algorithm for split common fixed point problems, J. Appl. Anal. Comput., 10 (2020), 2104–2120.
    [20] Y. Shehu, G. Cai, Strong convergence result of forward-backward splitting methods for accretive operators in Banach spaces with applications, RACSAM, 112 (2018), 71–87. https://doi.org/10.1007/s13398-016-0366-3 doi: 10.1007/s13398-016-0366-3
    [21] X. Qin, S. Y. Cho, L. Wang, Iterative algorithms with errors for zero points of m-accretive operators, Fixed Point Theory Appl., 2013 (2013), Article ID 148.
    [22] Y. Shehu, O. S. Iyiola, F. U. Ogbuisi, Iterative method with inertial terms for nonexpansive mappings: Applications to compressed sensing, Numer. Algorithms, 83 (2020), 1321–1347. https://doi.org/10.1007/s11075-019-00727-5 doi: 10.1007/s11075-019-00727-5
    [23] Q. L. Dong, H. B. Yuan, Y. J. Cho, T. M. Rassias, Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings, Optim. Lett., 12 (2018), 87–102. https://doi.org/10.1007/s11590-016-1102-9 doi: 10.1007/s11590-016-1102-9
    [24] B. Tan, S. Li, Strong convergence of inertial Mann algorithms for solving hierarchical fixed point problems, J. Nonlinear Var. Anal., 4 (2020), 337–355. https://doi.org/10.23952/jnva.4.2020.3.02 doi: 10.23952/jnva.4.2020.3.02
    [25] Y. Shehu, O. S. Iyiola, Projection methods with alternating inertial steps for variational inequalities: weak and linear convergence, Appl. Numer. Math., 157 (2020), 315–337. https://doi.org/10.1016/j.apnum.2020.06.009 doi: 10.1016/j.apnum.2020.06.009
    [26] Y. Shehu, P. Cholamjiak, Iterative method with inertial for variational inequalities in Hilbert spaces, Calcolo, 56 (2019), Article ID 4. https://doi.org/10.1007/s10092-018-0300-5
    [27] B. Tan, X. Qin, J. C. Yao, Two modified inertial projection algorithms for bilevel pseudomonotone variational inequalities with applications to optimal control problems, Numer. Algorithms, 88 (2021), 1757–1786. https://doi.org/10.1007/s11075-021-01093-x doi: 10.1007/s11075-021-01093-x
    [28] B. Tan, X. Qin, Self adaptive viscosity-type inertial extragradient algorithms for solving variational inequalities with applications, Math. Model. Anal., (2022). Available from: https://doi.org/10.3846/mma.2022.13846.
    [29] K. Nakajo, W. Takahashi, Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups, J. Math. Anal. Appl., 279 (2003), 372–379. https://doi.org/10.1016/S0022-247X(02)00458-4 doi: 10.1016/S0022-247X(02)00458-4
    [30] W. Takahashi, Y. Takeuchi, R. Kubota, Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces, J. Math. Anal. Appl., 341 (2008), 276–286. https://doi.org/10.1016/j.jmaa.2007.09.062 doi: 10.1016/j.jmaa.2007.09.062
    [31] G. Marino, H. K. Xu, Convergence of generalized proximal point algorithm, Commun. Pure Appl. Anal., 3 (2004), 791–808. https://doi.org/10.3934/cpaa.2004.3.791 doi: 10.3934/cpaa.2004.3.791
    [32] H. Zhou, X. Qin, Fixed Points of Nonlinear Operators: Iterative Methods, Berlin, Boston: De Gruyter 2020.
    [33] H. Cui, M. Su, On sufficient conditions ensuring the norm convergence of an iterative sequence to zeros of accretive operators, Appl. Math. Comput., 258 (2015), 67–71. https://doi.org/10.1016/j.amc.2015.01.108 doi: 10.1016/j.amc.2015.01.108
    [34] C. Martinez-Yanes, H. K. Xu, Strong convergence of the CQ method for fixed point iteration processes, Nonlinear Anal., 64 (2006), 2400–2411. https://doi.org/10.1016/j.na.2005.08.018 doi: 10.1016/j.na.2005.08.018
    [35] R. T. Rockafellar, Monotone operators associated with saddle functions and minimax problems, In: Browder F.E. (ed.) Nonlinear Functional Analysis, Part 1, Amer. Math. Soc., 18 (1970), 397–407.
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)