Research article

An inertial Mann algorithm for nonexpansive mappings on Hadamard manifolds

  • This article presents an inertial Mann algorithm for approximating a fixed point of a nonexpansive mapping on a Hadamard manifold. Under suitable assumptions, any sequence generated by the proposed approach converges to a fixed point of the nonexpansive mapping. The proposed method is also applied to solving inclusion and equilibrium problems. Lastly, we give a number of computational experiments that show how well the inertial Mann algorithm works and how it compares to other methods.

    Citation: Konrawut Khammahawong, Parin Chaipunya, Poom Kumam. An inertial Mann algorithm for nonexpansive mappings on Hadamard manifolds[J]. AIMS Mathematics, 2023, 8(1): 2093-2116. doi: 10.3934/math.2023108




    Fixed point problems for nonexpansive mappings can be used to model a variety of problems arising in several research fields, such as optimization problems [43, Corollary 17.5], monotone inclusion problems [43, Proposition 23.38], variational inequalities [43, Subchapter 25.5] and equilibrium problems [44]. The problem reads as follows:

    x = F(x),   (1.1)

    where F is a nonexpansive self-mapping defined on a subset of a suitable space. The solutions of problem (1.1) are called fixed points of the mapping F, and the set of fixed points of F is denoted by Fix(F). The construction of fixed points of nonexpansive mappings is of tremendous interest and importance, since it has applications in a range of areas, including signal processing, image recovery and machine learning (see, for example, [17,18,19,20,33]).

    One of the iterative procedures that succeeds in approximating fixed points of nonexpansive-type mappings is, undoubtedly, the Mann iterative algorithm [8]. On the other hand, the Mann algorithm has a slow convergence rate, especially for large-scale problems (see [41,42] for details). In 1964, Polyak [10] was the first to suggest the heavy ball method, which uses an inertial extrapolation technique to accelerate convergence. It has proven to be a valuable tool for improving a method's performance and has excellent convergence properties. Numerous fast iterative algorithms have been developed using inertial methods, such as the inertial proximal point methods [26,27], the inertial forward-backward splitting methods [28,29] and the inertial extragradient methods [30,34]. In particular, Maingé [9] combined the Mann algorithm and the inertial extrapolation technique to construct the following inertial Mann algorithm:

    y_n = x_n + λ_n (x_n − x_{n−1}),
    x_{n+1} = γ_n y_n + (1 − γ_n) F(y_n),   n ≥ 1,   (1.2)

    where {λ_n} ⊂ [0,1) and {γ_n} ⊂ (0,1). Under certain assumptions, Maingé [9] proved that the sequence {x_n} converges weakly to a fixed point of F.
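To make scheme (1.2) concrete, here is a minimal sketch of the inertial Mann iteration in the Euclidean setting (our own illustration, not code from the paper). The nonexpansive mapping F is chosen, as one standard example, to be the metric projection onto the closed unit ball, and the constant parameters λ_n ≡ 0.3, γ_n ≡ 0.5 are arbitrary choices consistent with {λ_n} ⊂ [0,1) and {γ_n} ⊂ (0,1):

```python
import numpy as np

def inertial_mann(F, x0, x1, lam=0.3, gamma=0.5, n_iter=200):
    """Scheme (1.2): y_n = x_n + lam*(x_n - x_{n-1});
    x_{n+1} = gamma*y_n + (1 - gamma)*F(y_n)."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for _ in range(n_iter):
        y = x + lam * (x - x_prev)
        x_prev, x = x, gamma * y + (1 - gamma) * F(y)
    return x

def F(x):
    # Metric projection onto the closed unit ball: a standard nonexpansive map
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

x_star = inertial_mann(F, x0=[3.0, 4.0], x1=[2.4, 3.2])
# the limit satisfies the fixed point equation F(x*) = x*, i.e. ||x*|| <= 1
```

Any limit of the iterates is a fixed point of F; here that means a point of the unit ball.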

    During the last decade, many scholars have turned their attention to nonlinear problems on manifolds. This is due to the fact that some problems cannot be posed in linear spaces and require a manifold structure (not necessarily with a linear structure). The main advantages of these extensions are that constrained optimization problems can be viewed as unconstrained ones from the perspective of Riemannian geometry, and that some optimization problems with non-convex objective functions can be equivalently transformed into convex ones by introducing an appropriate Riemannian metric. Note that several scholars have also developed optimization theory in the more general setting of a CAT(κ) space, which generalizes a Riemannian manifold with sectional curvature bounded above by κ. However, without additional unnatural assumptions, the inertial step (which is the main interest of this paper) cannot be defined in such a general structure. Hence, we restrict ourselves to the setting of a Hadamard manifold.

    To further illustrate the motivation for extending from Hilbert space to manifold settings, we recall an optimization problem from [36, p. 1–3]. In this example, we will see two facts: (1) a non-convex objective may be turned into a convex objective in the manifold setting; (2) an incomplete set can be turned into a complete metric space by the underlying manifold structure. Let M be a Riemannian manifold and φ : [a,b] → M a geodesic. A function g : M → ℝ is geodesic convex if the function g∘φ : [a,b] → ℝ is convex in the usual sense. The convex optimization problem on Riemannian manifolds for a geodesic convex objective function g : M → ℝ is defined by

    min_{x∈M} g(x).   (1.3)

    Let M = ℝ^m_++ = {(x_1,…,x_m) ∈ ℝ^m : x_i > 0, i = 1,…,m} be the Riemannian manifold endowed with the Riemannian metric ⟨·|·⟩ : TM × TM → ℝ defined by ⟨υ|ν⟩ := υ^T W(x) ν for all υ, ν ∈ T_xM, where T_xM is the tangent space of M at x ∈ M, TM is the tangent bundle of M, υ^T is the transpose of υ and W(x) is the m×m symmetric matrix defined by W(x) = diag(x_1^{-2}, x_2^{-2}, …, x_m^{-2}). Then (ℝ^m_++, ⟨·|·⟩) is a complete Riemannian manifold, as proved in [37, Corollary 10.1.5]. The geodesic joining x = (x_1,…,x_m) ∈ ℝ^m_++ and y = (y_1,…,y_m) ∈ ℝ^m_++ is given by

    φ(s) := (x_1^{1-s} y_1^s, …, x_m^{1-s} y_m^s),   s ∈ ℝ.

    This geodesic is unique, in accordance with the Hopf-Rinow theorem.

    Let g : ℝ^m_++ → ℝ be the function defined by

    g(x) := Σ_{i=1}^m ln x_i,   x = (x_1,…,x_m) ∈ ℝ^m_++.

    Then g is geodesic convex on (ℝ^m_++, ⟨·|·⟩). Indeed,

    (g∘φ)(s) = Σ_{i=1}^m ln(x_i^{1-s} y_i^s),   (g∘φ)'(s) = Σ_{i=1}^m ln(y_i/x_i),

    and

    (g∘φ)''(s) = 0.

    As a result, g∘φ is convex in the usual sense, and therefore g is geodesic convex; consult [37, Theorem 10.1.2] for more details.

    On the other hand, g is not convex on ℝ^m_++ in the usual (Euclidean) sense. Consider the Hessian of g on ℝ^m_++, which is given by

    Hess g(x) = [∂²g(x)/∂x_i∂x_j]_{i,j=1}^m = diag(−1/x_1², −1/x_2², …, −1/x_m²),

    with all off-diagonal entries equal to zero. This matrix is not positive semidefinite on ℝ^m_++. Then, by the fact that "a function on ℝ^m is convex if and only if its Hessian is positive semidefinite", g is not convex on ℝ^m_++ in the usual sense. Therefore, the following non-convex optimization problem

    min_{x∈ℝ^m_++} g(x)

    over an incomplete set becomes the (geodesic) convex optimization problem (1.3) on the complete Riemannian manifold (ℝ^m_++, ⟨·|·⟩).
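The example above is easy to verify numerically. The following sketch (ours; the points x and y are arbitrary, m = 2) checks that g is affine, hence convex, along the geodesic φ, while the Euclidean midpoint inequality fails:

```python
import numpy as np

x = np.array([0.5, 2.0])                   # two arbitrary points in R^2_++
y = np.array([4.0, 1.0])

phi = lambda s: x**(1 - s) * y**s          # geodesic of (R^m_++, <.|.>)
g = lambda z: np.sum(np.log(z))            # g(z) = sum_i ln z_i

# Along the geodesic, g is affine: g(phi(s)) = (1-s)*g(x) + s*g(y)
for s in (0.0, 0.25, 0.5, 1.0):
    assert abs(g(phi(s)) - ((1 - s) * g(x) + s * g(y))) < 1e-12

# In the Euclidean sense, g is NOT convex (its Hessian diag(-1/z_i^2) is
# negative definite), so the midpoint inequality fails:
assert g(0.5 * (x + y)) > 0.5 * (g(x) + g(y))
```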

    The above example shows that a non-convex optimization problem can be transformed into a (geodesic) convex optimization problem via the introduction of an appropriate Riemannian metric. Not only this, but non-monotone problems in Euclidean spaces may also be similarly transformed into monotone problems, see [25] for more details. As a result, many authors have focused on developing nonlinear problems on Riemannian manifolds, see, for instance, [11,12,13,14,16,21,22,23,24,25,35,38,39].

    Recently, several authors have considered and studied the fixed point problem (1.1) in the Riemannian context. For instance, Li et al. [11] proposed the Mann algorithm and the Halpern algorithm for approximating fixed points of nonexpansive mappings in the setting of Hadamard manifolds. Chugh et al. [12] investigated the Ishikawa algorithm in the context of Hadamard manifolds. Huang [16] generalized an iterative method of viscosity type with a weak contraction to Hadamard manifolds. Yao et al. [13] applied the conjugate gradient technique in order to speed up the Halpern algorithm. More recently, Sahu et al. [14] presented the S-iterative technique as a method for approximating a common fixed point of two nonexpansive mappings.

    The goal of this research is to develop an inertial Mann method on Hadamard manifolds, motivated by the wide range of applications of the inertial Mann method. Thanks to the inertial extrapolation technique, our proposed method converges faster than some existing methods. We formulate the inertial Mann method in terms of exponential mappings and illustrate how the proposed method can be utilized to solve inclusion problems and equilibrium problems.

    The following is an overview of the paper's structure. Basic concepts and fundamental theorems in Riemannian geometry are presented in Section 2. We present the inertial Mann iterative method in Section 3 and show that a sequence generated by the suggested method converges to fixed points of nonexpansive mappings on Hadamard manifolds. In Section 4, we show how to use the proposed method to solve inclusion and equilibrium problems in the setting of Hadamard manifolds. Section 5 contains numerical examples to demonstrate the performance of the inertial Mann algorithm. Finally, Section 6 provides a concise overview of the paper.

    Let (M, g) be a finite-dimensional Riemannian manifold. The tangent bundle of M is denoted by TM = ∪_{x∈M} T_xM, where T_xM represents the tangent space of M at x, and the zero section of TM is denoted by 0. For notational clarity, ⟨·|·⟩_x denotes the inner product on T_xM, whose induced norm is written as ‖·‖. Let ∇ be the Riemannian connection induced by g. The Riemannian distance d(x, y) is the minimal length of geodesics joining x to y.

    If the geodesics of a Riemannian manifold are defined for every value of s ∈ ℝ, the manifold is said to be complete. The Hopf-Rinow theorem says that if M is complete, then any pair of points in M can be connected by a minimizing geodesic. Moreover, the metric space (M, d) is complete, and closed and bounded subsets are compact.

    Assuming that M is a complete Riemannian manifold, the exponential map exp_x : T_xM → M at the point x is given by exp_x υ = φ_υ(1, x) for all υ ∈ T_xM, where φ(·) = φ_υ(·, x) is the geodesic starting from x with velocity υ. Then exp_x(sυ) = φ_υ(s, x) for any value s, and exp_x 0 = φ_υ(0, x) = x. On a Hadamard manifold, the exponential map has an inverse exp_x^{-1} : M → T_xM. Furthermore, for all x, y ∈ M, we have d(x, y) = ‖exp_x^{-1} y‖.

    A Hadamard manifold is a complete, simply connected Riemannian manifold with non-positive sectional curvature. A finite-dimensional Hadamard manifold is referred to as M throughout the rest of this article. Given xM, the exponential mapping expx:TxMM is a diffeomorphism. For any two points x,yM there exists a unique normalized geodesic joining x to y, which is a minimizing geodesic [3,Theorem 4.1].

    We now describe certain geometric properties of the finite dimensional Hadamard manifold M which are analogous to the settings of Euclidean space Rn. Recall that a geodesic triangle (x1,x2,x3) of a Riemannian manifold M is a set consisting of three points x1, x2 and x3, and three minimizing geodesics φi joining xi to xi+1, where i=1,2,3 (mod 3).

    Proposition 1. [3] Let Δ(x_1, x_2, x_3) be a geodesic triangle in M. Then

    d²(x_1, x_2) + d²(x_2, x_3) − 2⟨exp_{x_2}^{-1} x_1 | exp_{x_2}^{-1} x_3⟩ ≤ d²(x_3, x_1),   (2.1)

    and

    d²(x_1, x_2) ≤ ⟨exp_{x_1}^{-1} x_3 | exp_{x_1}^{-1} x_2⟩ + ⟨exp_{x_2}^{-1} x_3 | exp_{x_2}^{-1} x_1⟩.   (2.2)

    Furthermore, if θ is the angle at x_1, then we get

    ⟨exp_{x_1}^{-1} x_2 | exp_{x_1}^{-1} x_3⟩ = d(x_2, x_1) d(x_1, x_3) cos θ.

    The following lemma from [4] shows the relationship between geodesic triangles in Riemannian manifolds and triangles in ℝ².

    Lemma 1. [4] Let Δ(x_1, x_2, x_3) be a geodesic triangle in M. Then there exists a triangle Δ(x̄_1, x̄_2, x̄_3) in ℝ² for Δ(x_1, x_2, x_3) such that

    d(x_1, x_2) = ‖x̄_1 − x̄_2‖,   d(x_2, x_3) = ‖x̄_2 − x̄_3‖,   d(x_3, x_1) = ‖x̄_3 − x̄_1‖.

    The triangle Δ(x̄_1, x̄_2, x̄_3) is called a comparison triangle of the geodesic triangle Δ(x_1, x_2, x_3); it is unique up to isometry of ℝ². The points x̄_1, x̄_2, x̄_3 are called comparison points of the points x_1, x_2, x_3, respectively.

    Lemma 2. Let (x1,x2,x3) be a geodesic triangle in M and (¯x1,¯x2,¯x3) be its comparison triangle.

    (i) Let θ_1, θ_2, θ_3 (respectively, θ̄_1, θ̄_2, θ̄_3) be the angles of Δ(x_1, x_2, x_3) (respectively, Δ(x̄_1, x̄_2, x̄_3)) at the vertices x_1, x_2, x_3 (respectively, x̄_1, x̄_2, x̄_3). Then θ_1 ≤ θ̄_1, θ_2 ≤ θ̄_2 and θ_3 ≤ θ̄_3.

    (ii) Let p be a point on the geodesic connecting x_1 and x_2, and let p̄ be its comparison point in the segment [x̄_1, x̄_2]. If d(x_1, p) = ‖x̄_1 − p̄‖ and d(x_2, p) = ‖x̄_2 − p̄‖, then d(x_3, p) ≤ ‖x̄_3 − p̄‖.

    Since every two points on a Hadamard manifold can be connected by a unique minimizing geodesic, we use the notation Py,x:TxMTyM to denote the parallel translation along the minimizing geodesic connecting x and y for any two points x,yM.

    The following results are critical to the proof of our main theorems.

    Remark 1. [2] If x, y ∈ M and υ ∈ T_xM, then

    ⟨υ | exp_x^{-1} y⟩ = −⟨υ | P_{x,y} exp_y^{-1} x⟩ = −⟨P_{y,x} υ | exp_y^{-1} x⟩.   (2.3)

    Lemma 3. [40] Let M be an Hadamard manifold and x, y, z ∈ M. Then

    ‖exp_x^{-1} z − P_{x,y} exp_y^{-1} z‖ ≤ d(x, y).   (2.4)

    Let Q be a nonempty subset of M. The set Q is said to be geodesic convex if, for any two points x and y in Q, the geodesic joining x to y is contained in Q. A function g from M to ℝ is said to be geodesic convex if, for any geodesic φ(s) (s ∈ [0,1]) joining x to y in M, the function g∘φ is convex, that is,

    g(φ(s)) ≤ (1−s) g(φ(0)) + s g(φ(1)) = (1−s) g(x) + s g(y),   s ∈ [0,1].

    Proposition 2. [3] Let M be an Hadamard manifold. Then the following inequality holds:

    d(exp_{x_1}(s exp_{x_1}^{-1} x_2), exp_{y_1}(s exp_{y_1}^{-1} y_2)) ≤ (1−s) d(x_1, y_1) + s d(x_2, y_2),

    for all x_1, x_2, y_1, y_2 ∈ M and s ∈ [0,1].

    In particular, for all x ∈ M, the function d(·, x) : M → ℝ is a geodesic convex function.

    Definition 1. [11,23] A mapping F : M → M is said to be

    (ⅰ) nonexpansive if

    d(F(x), F(y)) ≤ d(x, y),   ∀x, y ∈ M;

    (ⅱ) firmly nonexpansive if, for any two points x, y ∈ M, the function ξ : [0,1] → [0, +∞) defined by

    ξ(s) := d(exp_x(s exp_x^{-1} F(x)), exp_y(s exp_y^{-1} F(y))),   s ∈ [0,1],

    is nonincreasing.

    As can be seen from Definition 1, every firmly nonexpansive map is nonexpansive.

    The concept described below is well-known.

    Definition 2. [15] A mapping F : M → M is said to be demiclosed if, for any bounded sequence {x_n} in M such that lim_{n→∞} x_n = x and lim_{n→∞} d(x_n, F(x_n)) = 0, it holds that F(x) = x.

    Remark 2. [15] It is easy to prove that each nonexpansive mapping F:MM is demiclosed.

    We end this section with some results that will be useful later on.

    Lemma 4. [31] Let {a_n} and {b_n} be nonnegative real sequences, and assume that η ∈ [0,1), τ > 0 and the following holds:

    a_{n+1} ≤ η a_n + τ b_n,   ∀n ≥ 1.

    If Σ_{n=1}^∞ b_n < ∞, then lim_{n→∞} a_n = 0.

    Lemma 5. [27] Let {ζ_n}, {σ_n} and {λ_n} be sequences in [0, +∞) satisfying

    ζ_{n+1} ≤ ζ_n + λ_n (ζ_n − ζ_{n−1}) + σ_n,   ∀n ≥ 1,

    where Σ_{n=1}^∞ σ_n < ∞ and 0 ≤ λ_n ≤ λ < 1 for all n ≥ 1. Then the following hold:

    (i) Σ_{n=1}^∞ [ζ_n − ζ_{n−1}]_+ < ∞, where [t]_+ := max{t, 0} for any t ∈ ℝ;

    (ii) there exists ζ* ∈ [0, ∞) such that lim_{n→∞} ζ_n = ζ*.

    We now present the inertial Mann method, which extends algorithm (1.2) from Hilbert spaces to Hadamard manifolds.

    Algorithm 1. Let M be an Hadamard manifold and F : M → M a mapping. Choose x_0, x_1 ∈ M. Define a sequence {x_n} by the following iterative scheme:

    y_n := exp_{x_n}(−λ_n exp_{x_n}^{-1} x_{n−1}),   (3.1)
    x_{n+1} := exp_{y_n}((1−γ_n) exp_{y_n}^{-1} F(y_n)),   (3.2)

    where {λ_n} ⊂ [0,∞) and {γ_n} ⊂ (0,1) satisfy the following conditions:

    (C1) 0 ≤ λ_n ≤ λ < 1, ∀n ≥ 1;

    (C2) Σ_{n=1}^∞ λ_n d²(x_n, x_{n−1}) < ∞;

    (C3) 0 < γ_1 ≤ γ_n ≤ γ_2 < 1, ∀n ≥ 1;

    (C4) Σ_{n=1}^∞ γ_n < ∞.

    Remark 3. We can draw the following particular conclusions from Algorithm 1:

    (ⅰ) If M = ℝ^n, then Algorithm 1 reduces to the iterative process (1.2) introduced by Maingé [9].

    (ⅱ) The posterior condition (C2) on λ_n can be enforced from the known value of d²(x_n, x_{n−1}) following the rule [27]:

    0 ≤ λ_n ≤ λ̄_n,   λ̄_n := min{ ε_n / d²(x_n, x_{n−1}), λ } if x_n ≠ x_{n−1}, and λ̄_n := λ otherwise,   (3.3)

    where {ε_n} ⊂ [0,∞) is such that Σ_{n=1}^∞ ε_n < ∞.

    (ⅲ) If λ_n ≡ 0, then we obtain the Mann iteration introduced by Li et al. [11].

    (ⅳ) Some prototypes of sequences {γ_n} satisfying conditions (C3) and (C4) are

    γ_n := 1/(n+1)²,   1/e^n,   1/2^n,   n ≥ 1.
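The steps of Algorithm 1, together with rule (3.3) and the prototype γ_n = 1/(n+1)², can be sketched as follows (our own illustration, not the authors' code). The manifold enters only through user-supplied maps expm(x, v) ≈ exp_x(v) and logm(x, y) ≈ exp_x^{-1}(y); instantiated with the Euclidean maps exp_x(v) = x + v and exp_x^{-1}(y) = y − x, the sketch reduces to iteration (1.2), as noted in Remark 3 (ⅰ). The mapping F below (an averaged reflection, which is firmly nonexpansive) and all numeric values are illustrative assumptions:

```python
import numpy as np

def algorithm1(F, expm, logm, dist, x0, x1, lam_bar=0.3, n_iter=100):
    x_prev, x = x0, x1
    eps = 1.0                                   # eps_n = (1/2)^n is summable
    for n in range(1, n_iter + 1):
        d2 = dist(x, x_prev) ** 2
        # inertial parameter chosen according to rule (3.3)
        lam = lam_bar if d2 == 0 else min(eps / d2, lam_bar)
        eps *= 0.5
        gamma = 1.0 / (n + 1) ** 2              # prototype from Remark 3 (iv)
        y = expm(x, -lam * logm(x, x_prev))                  # step (3.1)
        x_prev, x = x, expm(y, (1 - gamma) * logm(y, F(y)))  # step (3.2)
    return x

# Euclidean instance M = R^2, where Algorithm 1 reduces to iteration (1.2)
expm = lambda x, v: x + v
logm = lambda x, y: y - x
dist = lambda x, y: np.linalg.norm(x - y)
A = np.array([[0.0, 1.0], [1.0, 0.0]])          # reflection across the diagonal
F = lambda x: 0.5 * (x + A @ x)                 # averaged map; Fix(F) = diagonal

x_star = algorithm1(F, expm, logm, dist,
                    np.array([2.0, -1.0]), np.array([1.5, -0.5]))
```

For this instance the iterates converge to a point of the diagonal, i.e., to a fixed point of F.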

    Next, we present and prove the convergence theorem for Algorithm 1.

    Theorem 1. Let M be an Hadamard manifold and F : M → M be a nonexpansive mapping such that Fix(F) ≠ ∅. Then the sequence {x_n} defined by Algorithm 1 converges to a fixed point of F.

    Proof. Fix n ∈ ℕ, let z ∈ Fix(F), let Δ(y_n, F(y_n), z) ⊆ M be a geodesic triangle with vertices y_n, F(y_n) and z, and let Δ(ȳ_n, F̄(y_n), z̄) ⊆ ℝ² be the corresponding comparison triangle, where ȳ_n, F̄(y_n) and z̄ are the comparison points of y_n, F(y_n) and z. In accordance with Lemma 1, we have d(y_n, F(y_n)) = ‖ȳ_n − F̄(y_n)‖, d(y_n, z) = ‖ȳ_n − z̄‖ and d(F(y_n), z) = ‖F̄(y_n) − z̄‖. Let x̄_{n+1} = γ_n ȳ_n + (1−γ_n) F̄(y_n) be the comparison point of x_{n+1}. Using Lemma 2 (ii) together with the nonexpansiveness of F,

    d²(x_{n+1}, z) ≤ ‖x̄_{n+1} − z̄‖²
    = ‖γ_n ȳ_n + (1−γ_n) F̄(y_n) − z̄‖²
    = γ_n ‖ȳ_n − z̄‖² + (1−γ_n) ‖F̄(y_n) − z̄‖² − γ_n(1−γ_n) ‖ȳ_n − F̄(y_n)‖²
    ≤ γ_n d²(y_n, z) + (1−γ_n) d²(y_n, z) − γ_n(1−γ_n) d²(y_n, F(y_n))   (3.4)
    ≤ d²(y_n, z).   (3.5)

    Consider the geodesic triangle Δ(y_n, x_n, z). Then, by Lemma 1, there exists a corresponding comparison triangle Δ(ȳ_n, x̄_n, z̄) ⊆ ℝ² such that

    d(y_n, x_n) = ‖ȳ_n − x̄_n‖,   d(y_n, z) = ‖ȳ_n − z̄‖   and   d(x_n, z) = ‖x̄_n − z̄‖.

    Now,

    d²(y_n, z) = ‖ȳ_n − z̄‖² = ‖ȳ_n − x̄_n + x̄_n − z̄‖²
    = ‖ȳ_n − x̄_n‖² + ‖x̄_n − z̄‖² + 2⟨ȳ_n − x̄_n | x̄_n − z̄⟩
    = ‖ȳ_n − x̄_n‖² + ‖x̄_n − z̄‖² + 2⟨ȳ_n − x̄_n | x̄_n − z̄⟩ + 2‖x̄_n − z̄‖² − 2‖x̄_n − z̄‖²
    = d²(y_n, x_n) + d²(x_n, z) + 2⟨ȳ_n − z̄ | x̄_n − z̄⟩ − 2 d²(x_n, z).   (3.6)

    Let the angles at the vertices z and z̄ be denoted by θ and θ̄, respectively. By Lemma 2 (i), θ ≤ θ̄. Applying Proposition 1, we arrive at the conclusion that

    ⟨ȳ_n − z̄ | x̄_n − z̄⟩ = ‖ȳ_n − z̄‖ ‖x̄_n − z̄‖ cos θ̄ = d(y_n, z) d(z, x_n) cos θ̄
    ≤ d(y_n, z) d(z, x_n) cos θ = ⟨exp_z^{-1} y_n | exp_z^{-1} x_n⟩.   (3.7)

    Inserting (3.7) into (3.6) gives

    d²(y_n, z) ≤ d²(y_n, x_n) + d²(x_n, z) + 2⟨exp_z^{-1} y_n | exp_z^{-1} x_n⟩ − 2 d²(x_n, z).

    From Eq (3.1), we can deduce that exp_{x_n}^{-1} y_n = −λ_n exp_{x_n}^{-1} x_{n−1}, and hence d(x_n, y_n) = λ_n d(x_n, x_{n−1}), where λ_n ∈ [0,1). Then the last inequality becomes

    d²(y_n, z) ≤ λ_n² d²(x_n, x_{n−1}) + d²(x_n, z) + 2⟨exp_z^{-1} y_n | exp_z^{-1} x_n⟩ − 2 d²(x_n, z)
    ≤ λ_n d²(x_n, x_{n−1}) + d²(x_n, z) + 2⟨exp_z^{-1} y_n | exp_z^{-1} x_n⟩ − 2 d²(x_n, z).

    Taking into consideration Remark 1 and Lemma 3 in the above inequality, we obtain

    d²(y_n, z) ≤ λ_n d²(x_n, x_{n−1}) + d²(x_n, z) − 2 d²(x_n, z) + 2⟨exp_z^{-1} y_n − P_{z,x_n} exp_{x_n}^{-1} y_n + P_{z,x_n} exp_{x_n}^{-1} y_n | exp_z^{-1} x_n⟩
    = λ_n d²(x_n, x_{n−1}) + d²(x_n, z) − 2 d²(x_n, z) + 2⟨exp_z^{-1} y_n − P_{z,x_n} exp_{x_n}^{-1} y_n | exp_z^{-1} x_n⟩ − 2⟨exp_{x_n}^{-1} y_n | exp_{x_n}^{-1} z⟩
    ≤ λ_n d²(x_n, x_{n−1}) + d²(x_n, z) − 2 d²(x_n, z) + 2‖exp_z^{-1} y_n − P_{z,x_n} exp_{x_n}^{-1} y_n‖ ‖exp_z^{-1} x_n‖ − 2⟨exp_{x_n}^{-1} y_n | exp_{x_n}^{-1} z⟩
    ≤ λ_n d²(x_n, x_{n−1}) + d²(x_n, z) − 2 d²(x_n, z) + 2 d(z, x_n) d(z, x_n) − 2⟨exp_{x_n}^{-1} y_n | exp_{x_n}^{-1} z⟩
    = λ_n d²(x_n, x_{n−1}) + d²(x_n, z) − 2⟨exp_{x_n}^{-1} y_n | exp_{x_n}^{-1} z⟩.   (3.8)

    From the fact that exp_{x_n}^{-1} y_n = −λ_n exp_{x_n}^{-1} x_{n−1}, and using Remark 1 and Lemma 3, inequality (3.8) becomes

    d²(y_n, z) ≤ λ_n d²(x_n, x_{n−1}) + d²(x_n, z) + 2λ_n ⟨exp_{x_n}^{-1} x_{n−1} | exp_{x_n}^{-1} z⟩
    = λ_n d²(x_n, x_{n−1}) + d²(x_n, z) + 2λ_n ⟨exp_{x_n}^{-1} x_{n−1} − P_{x_n,z} exp_z^{-1} x_{n−1} + P_{x_n,z} exp_z^{-1} x_{n−1} | exp_{x_n}^{-1} z⟩
    = λ_n d²(x_n, x_{n−1}) + d²(x_n, z) + 2λ_n ⟨exp_{x_n}^{-1} x_{n−1} − P_{x_n,z} exp_z^{-1} x_{n−1} | exp_{x_n}^{-1} z⟩ − 2λ_n ⟨exp_z^{-1} x_{n−1} | exp_z^{-1} x_n⟩
    ≤ λ_n d²(x_n, x_{n−1}) + d²(x_n, z) + 2λ_n ‖exp_{x_n}^{-1} x_{n−1} − P_{x_n,z} exp_z^{-1} x_{n−1}‖ ‖exp_{x_n}^{-1} z‖ − 2λ_n ⟨exp_z^{-1} x_{n−1} | exp_z^{-1} x_n⟩
    ≤ λ_n d²(x_n, x_{n−1}) + d²(x_n, z) + 2λ_n d(x_n, z) d(x_n, z) − 2λ_n ⟨exp_z^{-1} x_{n−1} | exp_z^{-1} x_n⟩
    = λ_n d²(x_n, x_{n−1}) + d²(x_n, z) + 2λ_n d²(x_n, z) − 2λ_n ⟨exp_z^{-1} x_{n−1} | exp_z^{-1} x_n⟩.   (3.9)

    Let Δ(x_n, x_{n−1}, z) be a geodesic triangle. It follows from Proposition 1 that

    −2⟨exp_z^{-1} x_{n−1} | exp_z^{-1} x_n⟩ ≤ d²(x_n, x_{n−1}) − d²(x_{n−1}, z) − d²(x_n, z).   (3.10)

    By combining (3.9) and (3.10), we get

    d²(y_n, z) ≤ λ_n d²(x_n, x_{n−1}) + d²(x_n, z) + 2λ_n d²(x_n, z) + λ_n d²(x_n, x_{n−1}) − λ_n d²(x_{n−1}, z) − λ_n d²(x_n, z)
    = d²(x_n, z) + λ_n (d²(x_n, z) − d²(x_{n−1}, z)) + 2λ_n d²(x_n, x_{n−1}).   (3.11)

    Substituting (3.11) into (3.5) yields

    d²(x_{n+1}, z) ≤ d²(x_n, z) + λ_n (d²(x_n, z) − d²(x_{n−1}, z)) + 2λ_n d²(x_n, x_{n−1}).   (3.12)

    By (C2), Σ_{n=1}^∞ λ_n d²(x_n, x_{n−1}) < ∞, where 0 ≤ λ_n ≤ λ < 1. Combining this with (3.12) and utilizing Lemma 5, we deduce that the sequence {d(x_n, z)} is convergent; as a result, {x_n} is bounded. In view of (3.11), the sequence {y_n} is also bounded. Again from (3.12) and Lemma 5, we get Σ_{n=1}^∞ [d²(x_n, z) − d²(x_{n−1}, z)]_+ < ∞. Since d(y_n, x_n) = λ_n d(x_n, x_{n−1}) and lim_{n→∞} λ_n d(x_n, x_{n−1}) = 0 by the assumption Σ_{n=1}^∞ λ_n d²(x_n, x_{n−1}) < ∞, it follows that lim_{n→∞} d(x_n, y_n) = 0.

    Next, we show that lim_{n→∞} d(y_n, F(y_n)) = 0. By combining (3.4) and (3.11), we obtain

    γ_1(1−γ_2) d²(y_n, F(y_n)) ≤ γ_n(1−γ_n) d²(y_n, F(y_n)) ≤ d²(x_n, z) − d²(x_{n+1}, z) + λ_n (d²(x_n, z) − d²(x_{n−1}, z)) + 2λ_n d²(x_n, x_{n−1}).

    Since λ_n ∈ [0,1), lim_{n→∞} d²(x_n, z) exists and lim_{n→∞} λ_n d(x_n, x_{n−1}) = 0, we get

    lim_{n→∞} d(y_n, F(y_n)) = 0.

    Let {x_{n_k}} be a subsequence of {x_n} such that x_{n_k} → x* ∈ M as k → ∞. Since lim_{n→∞} d(x_n, y_n) = 0, the subsequence {y_{n_k}} also converges to x*. We already have lim_{k→∞} d(y_{n_k}, F(y_{n_k})) = 0 and, since F is demiclosed, it follows that x* ∈ Fix(F).

    Now, we prove that {x_n} converges to an element of Fix(F). By combining (3.4) and (3.11) with z = x*, we have

    d²(x_{n+1}, x*) ≤ (1−γ_n)[d²(x_n, x*) + λ_n (d²(x_n, x*) − d²(x_{n−1}, x*)) + 2λ_n d²(x_n, x_{n−1})] + γ_n d²(y_n, x*)
    ≤ (1−γ_1) d²(x_n, x*) + λ(1−γ_1)[d²(x_n, x*) − d²(x_{n−1}, x*)]_+ + 2(1−γ_1) λ_n d²(x_n, x_{n−1}) + γ_n d²(y_n, x*),

    which implies that

    a_{n+1} ≤ η a_n + τ b_n,

    where

    a_n = d²(x_n, x*),
    b_n = λ(1−γ_1)[d²(x_n, x*) − d²(x_{n−1}, x*)]_+ + 2(1−γ_1) λ_n d²(x_n, x_{n−1}) + γ_n d²(y_n, x*),
    η = 1−γ_1,   τ = 1.

    By applying

    Σ_{n=1}^∞ [d²(x_n, x*) − d²(x_{n−1}, x*)]_+ < ∞,   Σ_{n=1}^∞ λ_n d²(x_n, x_{n−1}) < ∞

    and (C4), and utilizing Lemma 4, we conclude that lim_{n→∞} d(x_n, x*) = 0. Thus, the sequence {x_n} converges to x* ∈ Fix(F). This completes the proof.

    Next, we study the convergence of Algorithm 1 without condition (C4). To prove the next convergence theorem, we require the following lemma.

    Lemma 6. [32, Theorem 3.2] Let {x_n} be a sequence in M such that there exists a nonempty set Q ⊆ M satisfying:

    (i) for all x ∈ Q, lim_{n→∞} d(x_n, x) exists;

    (ii) any cluster point of {x_n} belongs to Q.

    Then there exists x̃ ∈ Q such that {x_n} converges to x̃.

    Theorem 2. Let M be an Hadamard manifold and F : M → M be a nonexpansive mapping such that Fix(F) ≠ ∅. Suppose that {x_n} is a sequence generated by Algorithm 1, where {λ_n} ⊂ [0,∞) and {γ_n} ⊂ (0,1) satisfy conditions (C1)–(C3). Then the sequence {x_n} converges to a fixed point of F.

    Proof. As in the proof of Theorem 1, lim_{n→∞} d(x_n, z) exists for every z ∈ Fix(F), and any cluster point of {x_n} belongs to Fix(F). Using Lemma 6, we see that {x_n} converges to an element of Fix(F). This completes the proof.

    In this section, we discuss two applications of the proposed inertial algorithm on Hadamard manifolds: monotone inclusion problems and equilibrium problems.

    We denote by Ψ(M) the set of all multivalued vector fields A : M → 2^{TM} such that A(x) ⊆ T_xM for all x ∈ M, and by D(A) the domain of A, defined by D(A) = {x ∈ M : A(x) ≠ ∅}. In this subsection, we consider the problem of finding x ∈ M such that

    0 ∈ A(x).   (4.1)

    Such a point x is called a singularity of A, and the set of all singularities of A is denoted by A^{-1}(0) = {x ∈ M : 0 ∈ A(x)}. The notion of monotonicity for multivalued vector fields on Hadamard manifolds is recalled next.

    Definition 3. [5] A multivalued vector field A ∈ Ψ(M) is said to be

    (i) monotone if, for any x, y ∈ D(A), we have

    ⟨υ | exp_x^{-1} y⟩ + ⟨ν | exp_y^{-1} x⟩ ≤ 0,   ∀υ ∈ A(x) and ∀ν ∈ A(y);

    (ii) maximal monotone if it is monotone and the following implication holds for any x ∈ M and υ ∈ T_xM:

    ⟨υ | exp_x^{-1} y⟩ + ⟨ν | exp_y^{-1} x⟩ ≤ 0 for all y ∈ D(A) and ν ∈ A(y)  ⟹  υ ∈ A(x).

    Li et al. [23] proposed a resolvent of multivalued vector fields on Hadamard manifolds as well as a relation between nonexpansiveness and monotonicity.

    Definition 4. [23] For a given μ > 0, the resolvent of a multivalued vector field A ∈ Ψ(M) of order μ is the multivalued map J_μ^A : M → 2^M defined by

    J_μ^A(x) := {z ∈ M : (1/μ) exp_z^{-1} x ∈ A(z)},   x ∈ M.

    Remark 4. [23] For μ > 0, the range of the resolvent J_μ^A is contained in the domain of A, and Fix(J_μ^A) = A^{-1}(0).

    Theorem 3. [23] Let A ∈ Ψ(M) be a vector field. Then, for all μ > 0, the vector field A is monotone if and only if J_μ^A is single-valued and firmly nonexpansive.

    It was shown by Li et al. [2] that the subdifferential of a proper, lower semicontinuous and geodesic convex function is a maximal monotone vector field.

    Definition 5. [3] Let g : M → ℝ be a geodesic convex function. The subdifferential ∂g(x) of g at x ∈ M is defined by

    ∂g(x) := {υ ∈ T_xM : ⟨υ | exp_x^{-1} y⟩ ≤ g(y) − g(x), ∀y ∈ M}.   (4.2)

    It is easy to check that ∂g(x) is closed and convex.

    Lemma 7. [2] Let g : M → ℝ ∪ {+∞} be a proper, lower semicontinuous and geodesic convex function with D(g) = M. Then the subdifferential ∂g of g is a maximal monotone vector field.

    In view of Remark 4, the problem of finding singularities of A becomes the problem of finding fixed points of the mapping J_μ^A. In what follows, we implement Algorithm 1 in order to find singularities of monotone multivalued vector fields.

    Theorem 4. Suppose that A ∈ Ψ(M) is a monotone multivalued vector field and J_μ^A is the resolvent of A of order μ > 0 such that A^{-1}(0) ≠ ∅. Let x_0, x_1 ∈ M and let the sequence {x_n} be defined by

    y_n := exp_{x_n}(−λ_n exp_{x_n}^{-1} x_{n−1}),
    x_{n+1} := exp_{y_n}((1−γ_n) exp_{y_n}^{-1} J_μ^A(y_n)),

    where {λ_n} ⊂ [0,∞) and {γ_n} ⊂ (0,1) satisfy conditions (C1)–(C4). Then the sequence {x_n} converges to an element of A^{-1}(0).

    Proof. Take F = J_μ^A. By Theorem 3, F is a single-valued, firmly nonexpansive mapping; in particular, F is nonexpansive and, by Remark 4 and the hypothesis, Fix(F) = A^{-1}(0) ≠ ∅. The desired result then follows from Theorem 1.

    Theorem 5. Suppose that A ∈ Ψ(M) is a monotone multivalued vector field and J_μ^A is the resolvent of A of order μ > 0 such that A^{-1}(0) ≠ ∅. Let x_0, x_1 ∈ M and let the sequence {x_n} be defined by

    y_n := exp_{x_n}(−λ_n exp_{x_n}^{-1} x_{n−1}),
    x_{n+1} := exp_{y_n}((1−γ_n) exp_{y_n}^{-1} J_μ^A(y_n)),

    where {λ_n} ⊂ [0,∞) and {γ_n} ⊂ (0,1) satisfy conditions (C1)–(C3). Then the sequence {x_n} converges to an element of A^{-1}(0).

    Proof. Take F = J_μ^A. By Theorem 3, F is a single-valued, firmly nonexpansive mapping; in particular, F is nonexpansive and, by Remark 4 and the hypothesis, Fix(F) = A^{-1}(0) ≠ ∅. The desired result then follows from Theorem 2.

    Let Q be a nonempty, closed and geodesic convex subset of M, and let ψ : Q × Q → ℝ be a bifunction satisfying ψ(x, x) = 0 for all x ∈ Q. Colao et al. [6] studied the equilibrium problem in the setting of Hadamard manifolds. Let us recall it as the following problem: find x ∈ Q such that

    ψ(x, y) ≥ 0,   ∀y ∈ Q.   (4.3)

    A solution of the equilibrium problem (4.3) is called an equilibrium point, and EP(ψ) stands for the set of all equilibrium points.

    In order to study the equilibrium problem (4.3), let us suppose that ψ satisfies the following assumptions:

    (H1) ψ(x, x) ≥ 0 for all x ∈ Q;

    (H2) ψ is a monotone bifunction, that is, ψ(x, y) + ψ(y, x) ≤ 0 for all x, y ∈ Q;

    (H3) for every y ∈ Q, x ↦ ψ(x, y) is upper semicontinuous;

    (H4) for every x ∈ Q, y ↦ ψ(x, y) is geodesic convex and lower semicontinuous.

    Colao et al. [6] introduced the notion of the resolvent of a bifunction on Hadamard manifolds as follows: given ψ : Q × Q → ℝ and μ > 0, the resolvent of ψ is the multivalued mapping R_μ^ψ : M → 2^Q defined by

    R_μ^ψ(x) := {z ∈ Q : ψ(z, y) − (1/μ)⟨exp_z^{-1} x | exp_z^{-1} y⟩ ≥ 0, ∀y ∈ Q}.

    The following theorem concerns a bifunction ψ : Q × Q → ℝ defined on Hadamard manifolds:

    Theorem 6. [6,7] Let ψ : Q × Q → ℝ be a bifunction satisfying the following conditions:

    (1) ψ is monotone;

    (2) for all μ > 0, R_μ^ψ is properly defined, that is, its domain D(R_μ^ψ) ≠ ∅.

    Then, for any μ > 0,

    (i) the resolvent R_μ^ψ is single-valued and firmly nonexpansive;

    (ii) the fixed point set of R_μ^ψ is the equilibrium point set of ψ,

    Fix(R_μ^ψ) = EP(ψ).

    Moreover, if ψ satisfies conditions (H1)–(H4), then D(R_μ^ψ) = M.

    In light of Theorem 6, the equilibrium problem (4.3) we are working on transforms into the problem of finding fixed points of R_μ^ψ. Next, we apply Algorithm 1 to find equilibrium points.

    Theorem 7. Suppose that Q is a nonempty, closed and geodesic convex subset of M. Let ψ : Q × Q → ℝ be a bifunction satisfying (H1)–(H4), and let R_μ^ψ be the resolvent of ψ for μ > 0 such that EP(ψ) ≠ ∅. Let x_0, x_1 ∈ M and let the sequence {x_n} be defined by

    y_n := exp_{x_n}(−λ_n exp_{x_n}^{-1} x_{n−1}),
    x_{n+1} := exp_{y_n}((1−γ_n) exp_{y_n}^{-1} R_μ^ψ(y_n)),

    where {λ_n} ⊂ [0,∞) and {γ_n} ⊂ (0,1) satisfy conditions (C1)–(C4). Then the sequence {x_n} converges to an element of EP(ψ).

    Proof. Take F = R_μ^ψ. By Theorem 6, F is a single-valued, firmly nonexpansive mapping; in particular, F is nonexpansive with Fix(F) = EP(ψ) ≠ ∅ by the hypothesis. The desired result then follows from Theorem 1.

    Theorem 8. Suppose that Q is a nonempty, closed and geodesic convex subset of M. Let ψ : Q × Q → ℝ be a bifunction satisfying (H1)–(H4), and let R_μ^ψ be the resolvent of ψ for μ > 0 such that EP(ψ) ≠ ∅. Let x_0, x_1 ∈ M and let the sequence {x_n} be defined by

    y_n := exp_{x_n}(−λ_n exp_{x_n}^{-1} x_{n−1}),
    x_{n+1} := exp_{y_n}((1−γ_n) exp_{y_n}^{-1} R_μ^ψ(y_n)),

    where {λ_n} ⊂ [0,∞) and {γ_n} ⊂ (0,1) satisfy conditions (C1)–(C3). Then the sequence {x_n} converges to an element of EP(ψ).

    Proof. Take F = R_μ^ψ. By Theorem 6, F is a single-valued, firmly nonexpansive mapping; in particular, F is nonexpansive with Fix(F) = EP(ψ) ≠ ∅ by the hypothesis. The desired result then follows from Theorem 2.

    We give three numerical experiments to illustrate the computational performance of the inertial Mann algorithm and compare it with other existing algorithms. All programs were coded in MATLAB, and the computations were performed on a personal computer with an Intel(R) Core(TM) i7 @ 1.80 GHz CPU and 8 GB 1600 MHz DDR3 memory.

    Example 1. Let M := ℝ^m_++ = {x ∈ ℝ^m : x_i > 0, i = 1,…,m} and ℝ^m_+ = {x ∈ ℝ^m : x_i ≥ 0, i = 1,…,m}. As in [25], let (ℝ^m_++, ⟨·|·⟩) be the Riemannian manifold with the Riemannian metric ⟨·|·⟩ defined by ⟨υ|ν⟩ := υ^T W(x) ν for all x ∈ ℝ^m_++ and υ, ν ∈ T_x ℝ^m_++, where T_x ℝ^m_++ denotes the tangent space at x ∈ ℝ^m_++ and W(x) is the diagonal matrix W(x) = diag(x_1^{-2}, x_2^{-2}, …, x_m^{-2}). In addition, the mapping σ : ℝ^m → ℝ^m_++ given by σ(x) = (e^{x_1}, e^{x_2}, …, e^{x_m}) is an isometry between the Euclidean space ℝ^m and the Riemannian manifold (ℝ^m_++, ⟨·|·⟩). The Riemannian distance d : ℝ^m_++ × ℝ^m_++ → ℝ_+ is then given by

    d(x, y) := ‖σ^{-1}(x) − σ^{-1}(y)‖ = sqrt( Σ_{i=1}^m ln²(x_i/y_i) ),   x, y ∈ ℝ^m_++.

    Thus, (ℝ^m_++, ⟨·|·⟩) is a Hadamard manifold. The exponential map on ℝ^m_++ is given by

    exp_x(sυ) = (x_1 e^{(υ_1/x_1)s}, x_2 e^{(υ_2/x_2)s}, …, x_m e^{(υ_m/x_m)s}),

    for x ∈ ℝ^m_++ and υ ∈ T_x ℝ^m_++. The inverse of the exponential map is given by

    exp_x^{-1} y = (x_1 ln(y_1/x_1), x_2 ln(y_2/x_2), …, x_m ln(y_m/x_m)),   x, y ∈ ℝ^m_++.
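These formulas are straightforward to check numerically. The following sketch (ours; the points are arbitrary) verifies that the exponential map and its inverse are mutually inverse, and that d(x, y) equals the norm of exp_x^{-1} y in the metric ⟨·|·⟩:

```python
import numpy as np

x = np.array([0.5, 2.0, 3.0])     # arbitrary points in R^3_++
y = np.array([1.5, 0.7, 4.0])

exp_map = lambda p, v: p * np.exp(v / p)           # exp_p(v), componentwise
log_map = lambda p, q: p * np.log(q / p)           # exp_p^{-1}(q)
dist = lambda p, q: np.linalg.norm(np.log(p / q))  # Riemannian distance

v = log_map(x, y)
assert np.allclose(exp_map(x, v), y)               # exp_x(exp_x^{-1} y) = y
# d(x, y)^2 = <v|v> = v^T W(x) v with W(x) = diag(x_i^{-2})
assert np.isclose(dist(x, y) ** 2, v @ np.diag(x ** -2.0) @ v)
```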

    Test 1. We verify the usefulness of Algorithm 1 in $M=(\mathbb{R}_{++},\langle\cdot,\cdot\rangle)$. Let $F:\mathbb{R}_{++}\to\mathbb{R}_{++}$ be the nonexpansive mapping defined by

    $$F(x):=e^{\ln x/2},\qquad x\in\mathbb{R}_{++}.$$

    Then $\mathrm{Fix}(F)=\{1\}$. We set $\gamma_n=1/(n+1)^2$ and let the inertial parameter $\lambda_n$ be updated by (3.3), where $\epsilon_n=(1/2)^n$ and $\lambda\in\{0,0.1,0.3,0.5,0.7,0.9\}$. We use $D_n=d(x_n,1)<10^{-6}$ as the stopping criterion. The initial values $x_0,x_1$ are randomly generated in MATLAB. We perform a parameter analysis of the proposed Algorithm 1. The numerical results are shown in Table 1 and Figure 1.
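    Test 1 can be reproduced in a few lines. The sketch below is illustrative: it assumes the inertial rule (3.3) has the summable form $\lambda_n=\min\{\lambda,\epsilon_n/d(x_n,x_{n-1})\}$ and uses fixed starting points instead of random ones.

```python
import math

# Test 1 sketch on M = (R_++, <.,.>): F(x) = e^{(ln x)/2} (= sqrt(x)), Fix(F) = {1}.
# gamma_n = 1/(n+1)^2 as in the text; lam_n = min(lam, (1/2)^n / d(x_n, x_{n-1}))
# is our assumed form of rule (3.3).
def F(x):
    return math.exp(math.log(x) / 2.0)

def d(x, y):                      # Riemannian distance on R_++
    return abs(math.log(x / y))

def exp_map(x, v):                # exp_x(v) = x * e^{v/x}
    return x * math.exp(v / x)

def log_map(x, y):                # exp_x^{-1}(y) = x * ln(y/x)
    return x * math.log(y / x)

x_prev, x = 2.0, 5.0              # fixed starts (the paper uses random ones)
lam, n = 0.3, 1
while d(x, 1.0) >= 1e-6 and n <= 10000:   # stopping rule D_n < 10^{-6}
    gamma = 1.0 / (n + 1) ** 2
    dn = d(x, x_prev)
    lam_n = lam if dn == 0 else min(lam, 0.5 ** n / dn)
    y = exp_map(x, lam_n * log_map(x, x_prev))
    x_prev, x = x, exp_map(y, (1.0 - gamma) * log_map(y, F(y)))
    n += 1
```

    The iterate count depends on the starting points, so it need not match Table 1 exactly, but the qualitative behavior (convergence to $1$ within a few dozen iterations) is the same.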

    Table 1.  Computation results for Test 1 of Example 1.

        λ          0       0.1     0.3     0.5     0.7     0.9
        Iteration  19      16      13      19      22      33
        Time (s)   0.0075  0.0049  0.0033  0.0049  0.0076  0.0077

    Figure 1.  Numerical behavior of {Dn} for Test 1 of Example 1.

    Remark 5. (ⅰ) Table 1 shows that Algorithm 1 is efficient and simple to implement for all six parameter choices. Most notably, Algorithm 1 converges quickly when the control parameter satisfies $0<\lambda\leq 0.5$. We can also see that the randomly chosen initial values have little effect on our computational results.

    (ⅱ) As can be seen in Figure 1, our proposed Algorithm 1 with the parameter $\lambda=0.3$ is clearly faster than the other choices.

    Test 2. In this test, we evaluate the performance of Algorithm 1 in $M=(\mathbb{R}^m_{++},\langle\cdot,\cdot\rangle)$ for solving the inclusion problem (4.1). Let the function $g:\mathbb{R}^m_{++}\to\mathbb{R}$ be defined by

    $$g(x):=\sum_{i=1}^{m}g_i(x_i),\qquad g_i(x_i):=a_i\ln\big(x_i^{d_i}+b_i\big)-c_i\ln(x_i),\quad i=1,\dots,m,$$

    where $a_i,b_i,c_i,d_i\in\mathbb{R}_{++}$ satisfy $c_i<a_id_i$ and $d_i\geq 2$ for $i=1,\dots,m$. The minimizer of $g$ is $x^{*}=(x_1^{*},x_2^{*},\dots,x_m^{*})$, where $x_i^{*}=\sqrt[d_i]{b_ic_i/(a_id_i-c_i)}$ for $i=1,\dots,m$. The function $g$ is not convex in the Euclidean sense. However, Ferreira et al. [24] showed that $g$ is geodesically convex in $(\mathbb{R}^m_{++},\langle\cdot,\cdot\rangle)$. In view of Definition 5, we have

    $$\partial g(x)=\big\{\upsilon\in T_x\mathbb{R}^m_{++}\ \big|\ g(y)\geq g(x)+\langle\upsilon,\exp^{-1}_{x}y\rangle,\ \forall y\in\mathbb{R}^m_{++}\big\}.$$

    The subdifferential $\partial g$ of $g$ is a maximal monotone vector field, as shown by Lemma 7. We take $A=\partial g$ as the maximal monotone vector field in (4.1). Moreover, we have

    $$J^{g}_{\mu}(x)=\operatorname*{arg\,min}_{y\in\mathbb{R}^m_{++}}\Big\{g(y)+\frac{1}{2\mu}d^{2}(y,x)\Big\},\qquad \mu>0.$$

    It is easy to see that $x^{*}\in\operatorname*{arg\,min}_{\mathbb{R}^m_{++}}g$ if and only if $0\in\partial g(x^{*})$, where $\operatorname*{arg\,min}_{\mathbb{R}^m_{++}}g=\{x\in\mathbb{R}^m_{++}:g(x)\leq g(y),\ \forall y\in\mathbb{R}^m_{++}\}$ is the set of minimizers of $g$. Hence, $(\partial g)^{-1}(\mathbf{0})=\{x^{*}=(x_1^{*},x_2^{*},\dots,x_m^{*}): x_i^{*}=\sqrt[d_i]{b_ic_i/(a_id_i-c_i)},\ i=1,\dots,m\}$.
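    For intuition, the resolvent $J^{g}_{\mu}$ can be evaluated numerically in the one-dimensional case. The sketch below is ours (the ternary search is not the paper's method): since $g(e^u)$ is convex in the log coordinate $u=\ln y$ (this is exactly geodesic convexity here), the resolvent objective can be minimized by ternary search, and $x^{*}=1$ is its fixed point when $a=b=c=1$, $d=2$.

```python
import math

# 1-D sketch (m = 1, a = b = c = 1, d = 2): g(y) = ln(y^2 + 1) - ln(y),
# whose minimizer on R_++ is y* = 1.  J_mu^g(x) is evaluated numerically by
# ternary search in u = ln y, where g(e^u) + (u - ln x)^2 / (2 mu) is
# strictly convex (d^2/du^2 g(e^u) = sech^2(u) > 0).
def g(y):
    return math.log(y * y + 1.0) - math.log(y)

def resolvent(x, mu=1.0, lo=-10.0, hi=10.0, tol=1e-10):
    f = lambda u: g(math.exp(u)) + (u - math.log(x)) ** 2 / (2.0 * mu)
    while hi - lo > tol:                   # ternary search on a convex function
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return math.exp((lo + hi) / 2.0)
```

    `resolvent(1.0)` returns the minimizer itself, and for any other $x$ the resolvent moves the point toward the minimizer, as expected of a firmly nonexpansive resolvent.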

    We set $\mu=1$, and the remaining parameters are the same as in Test 1. Let $a_i=b_i=c_i=1$ and $d_i=2$ for $i=1,\dots,m$; then $x^{*}=\mathbf{1}$, i.e., $x_i^{*}=1$ for $i=1,\dots,m$. We consider various dimensions $m$ and parameter values. We use $D_n=d(x_n,x^{*})<10^{-4}$ as the stopping criterion. The initial values $x_0,x_1$ are randomly generated in MATLAB. Our numerical results are reported in Table 2 and Figure 2.

    Table 2.  Computation results for Test 2 of Example 1.

                m = 10            m = 50            m = 100           m = 500
        λ       Iter.  Time(s)    Iter.  Time(s)    Iter.  Time(s)    Iter.  Time(s)
        0       16     0.7073     18     1.1911     18     2.6154     42     15.9526
        0.1     14     0.5878     15     0.9592     16     2.1549     57     21.5351
        0.3     12     0.5829     14     0.8666     15     1.8511     16     6.0665
        0.5     15     0.6268     13     0.6365     14     1.6361     12     4.4499
        0.7     18     0.7743     20     1.2781     18     2.1459     24     8.7960
        0.9     18     0.6962     21     1.1829     22     2.5327     21     7.5839

    Figure 2.  Numerical behavior of {Dn} for Test 2 of Example 1.

    Remark 6. (ⅰ) According to the results in Table 2, the numerical experiments show that the proposed Algorithm 1 converges to a singularity of the maximal monotone multivalued vector field for various dimensions and parameter values. Our method is efficient and simple to implement for solving the inclusion problem (4.1). Furthermore, the number of iterations required by Algorithm 1 is essentially unaffected by the choice of dimension; indeed, it is only slightly affected even by large jumps in dimension.

    (ⅱ) As seen in Figure 2, Algorithm 1 with $\lambda=0.5$ retains a faster convergence rate as the dimension changes.

    (ⅲ) Since $g$ is non-convex in the Euclidean sense, the Euclidean methods [9,33] cannot be applied to solve the inclusion problem (4.1).

    Example 2. Let $M=(\mathbb{R}^3,\langle\cdot,\cdot\rangle)$ be a Hadamard manifold with the Riemannian metric $\langle\upsilon,\nu\rangle=\upsilon^{T}W(x)\nu$ for all $\upsilon,\nu\in T_x\mathbb{R}^3$ and $x=(x_1,x_2,x_3)\in M$, where $W(x)$ is the $3\times 3$ matrix defined by

    $$W(x):=\begin{pmatrix}1 & 0 & 0\\ 0 & 1+4x_2^{2} & -2x_2\\ 0 & -2x_2 & 1\end{pmatrix},\qquad x\in\mathbb{R}^3.$$

    The Riemannian distance is given by

    $$d^{2}(x,y)=\sum_{i=1}^{2}(x_i-y_i)^{2}+\big(x_2^{2}-x_3-y_2^{2}+y_3\big)^{2},\qquad x,y\in\mathbb{R}^3.$$

    More information can be found in [25]. The geodesic joining the points $\varphi(0)=x$ and $\varphi(1)=y$ is given by

    $$\varphi(s):=(\varphi_1(s),\varphi_2(s),\varphi_3(s)),\qquad s\in[0,1],$$

    where $\varphi_i(s)=x_i+s(y_i-x_i)$ for $i=1,2$ and

    $$\varphi_3(s)=x_3+s\big((y_3-x_3)-(y_2-x_2)^{2}\big)+s^{2}(y_2-x_2)^{2}.$$

    Therefore, $\exp_x(s\upsilon)=\varphi(s)$, where $\varphi:\mathbb{R}\to\mathbb{R}^3$ is the unique geodesic starting from $\varphi(0)=x$ with $\upsilon=\varphi'(0)\in T_x\mathbb{R}^3$. The inverse exponential map is given by

    $$\exp^{-1}_{x}y=\big(y_1-x_1,\ y_2-x_2,\ y_3-x_3-(y_2-x_2)^{2}\big).$$
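    These formulas can be cross-checked numerically. Under the reading of [25] in which $T(x)=(x_1,x_2,x_3-x_2^{2})$ is a global isometry onto Euclidean $\mathbb{R}^3$ (an assumption on our part, consistent with the stated metric and distance), geodesics are straight lines in $T$-coordinates, and $\exp_x(\exp_x^{-1}y)=y$ should hold exactly. A small sketch:

```python
import math

def T(x):      # assumed global isometry onto Euclidean R^3
    return (x[0], x[1], x[2] - x[1] ** 2)

def T_inv(u):
    return (u[0], u[1], u[2] + u[1] ** 2)

def dist(x, y):   # d^2 = (x1-y1)^2 + (x2-y2)^2 + (x2^2-x3-y2^2+y3)^2
    return math.dist(T(x), T(y))

def log_map(x, y):   # exp_x^{-1} y = (y1-x1, y2-x2, y3-x3-(y2-x2)^2)
    return (y[0] - x[0], y[1] - x[1], y[2] - x[2] - (y[1] - x[1]) ** 2)

def exp_map(x, v):   # push v to T-coordinates, move along the line, pull back
    dTv = (v[0], v[1], v[2] - 2.0 * x[1] * v[1])
    return T_inv(tuple(ui + wi for ui, wi in zip(T(x), dTv)))

x, y = (1.0, 2.0, -1.0), (0.5, -3.0, 4.0)
z = exp_map(x, log_map(x, y))   # should recover y
```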

    Let $F:\mathbb{R}^3\to\mathbb{R}^3$ be the nonexpansive mapping defined by

    $$F(x)=\Big(\frac{x_1}{2},\ \frac{x_2}{3},\ \frac{x_3}{2}+\frac{x_2^{2}}{2}\Big),\qquad x\in\mathbb{R}^3.$$

    Then $\mathrm{Fix}(F)=\{(0,0,0)\}$. In order to show the effectiveness of Algorithm 1, we compare it with three other algorithms: the Mann algorithm [11], the Halpern algorithm [11] and the Ishikawa algorithm [12]. In the Mann and Halpern algorithms, we set $\alpha_n=1/(n+3)^2$. In the Ishikawa algorithm, we take $\alpha_n=0.91/(n+4)$ and $\beta_n=(1/2)^n$. In the proposed Algorithm 1, the inertial parameter $\lambda_n$ is updated by (3.3) with $\epsilon_n=(1/2)^n$ and $\lambda=0.3$, and the parameter $\gamma_n=1/(100(n+1)^2)$. We use $D_n=d(x_n,(0,0,0))<10^{-6}$ as the stopping criterion. The initial values $x_0,x_1$ are randomly generated in MATLAB. The numerical results of our investigation are shown in Table 3 and Figure 3.

    Table 3.  Computation results of Example 2.

        Algorithm       Iteration   Time(s)
        Algorithm 1     15          0.0030
        Mann [11]       20          0.0064
        Ishikawa [12]   19          0.0090
        Halpern [11]    1277        0.0328

    Figure 3.  Numerical behavior of {Dn} for Example 2.
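    To give a feel for this experiment, the sketch below reruns Algorithm 1 on Example 2 in plain Python. It is an illustrative re-implementation, not the paper's MATLAB code: it assumes the summable form $\lambda_n=\min\{\lambda,\epsilon_n/d(x_n,x_{n-1})\}$ for rule (3.3) and uses fixed starting points, so iterate counts need not match Table 3.

```python
import math

def dist(x, y):   # d^2 = (x1-y1)^2 + (x2-y2)^2 + (x2^2-x3-y2^2+y3)^2
    return math.sqrt((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2
                     + (x[1] ** 2 - x[2] - y[1] ** 2 + y[2]) ** 2)

def log_map(x, y):   # exp_x^{-1} y
    return (y[0] - x[0], y[1] - x[1], y[2] - x[2] - (y[1] - x[1]) ** 2)

def exp_map(x, v):   # exp_x(v); satisfies exp_x(log_x y) = y
    return (x[0] + v[0], x[1] + v[1], x[2] + v[2] + v[1] ** 2)

def F(x):            # the nonexpansive map of this example, Fix(F) = {(0,0,0)}
    return (x[0] / 2.0, x[1] / 3.0, x[2] / 2.0 + x[1] ** 2 / 2.0)

x_prev, x = (1.0, 1.0, 2.0), (0.5, -1.0, 1.5)
lam, n = 0.3, 1
while dist(x, (0.0, 0.0, 0.0)) >= 1e-6 and n <= 10000:
    gamma = 1.0 / (100.0 * (n + 1) ** 2)
    dn = dist(x, x_prev)
    lam_n = lam if dn == 0 else min(lam, 0.5 ** n / dn)
    y = exp_map(x, tuple(lam_n * vi for vi in log_map(x, x_prev)))
    w = log_map(y, F(y))
    x_prev, x = x, exp_map(y, tuple((1.0 - gamma) * wi for wi in w))
    n += 1
```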

    Example 3. Let $M:=\mathbb{H}^3=\{x=(x_1,x_2,x_3,x_4)\in\mathbb{R}^4:\langle x,x\rangle=-1,\ x_4>0\}$ be the 3-dimensional hyperbolic space endowed with the symmetric bilinear form (the Lorentz metric) defined by

    $$\langle x,y\rangle=x_1y_1+x_2y_2+x_3y_3-x_4y_4,\qquad x=(x_1,x_2,x_3,x_4),\ y=(y_1,y_2,y_3,y_4)\in\mathbb{H}^3.$$

    For richer detail, see [1]. Then $\mathbb{H}^3$ is a Hadamard manifold with sectional curvature $-1$. Furthermore, the normalized geodesic $\varphi:\mathbb{R}\to\mathbb{H}^3$ starting from $x\in\mathbb{H}^3$ is given by

    $$\varphi(s)=(\cosh s)x+(\sinh s)\upsilon,$$

    where $s>0$ and $\upsilon\in T_x\mathbb{H}^3$ is a unit vector. This means that $\exp_x(s\upsilon)=(\cosh s)x+(\sinh s)\upsilon$. From [1], one can check that the inverse exponential map is given by

    $$\exp^{-1}_{x}y=\operatorname{arccosh}(-\langle x,y\rangle)\,\frac{y+\langle x,y\rangle x}{\sqrt{\langle x,y\rangle^{2}-1}},\qquad x,y\in\mathbb{H}^3.$$

    The Riemannian distance $d:\mathbb{H}^3\times\mathbb{H}^3\to\mathbb{R}$ is defined by $d(x,y)=\operatorname{arccosh}(-\langle x,y\rangle)$.

    Let $F:\mathbb{H}^3\to\mathbb{H}^3$ be the nonexpansive mapping defined by

    $$F(x_1,x_2,x_3,x_4)=(-x_1,-x_2,-x_3,x_4),\qquad x=(x_1,x_2,x_3,x_4)\in\mathbb{H}^3.$$

    Then $\mathrm{Fix}(F)=\{(0,0,0,1)\}$. To demonstrate the effectiveness of our proposed method, we compare it with the Mann and Halpern algorithms in [11], as well as the Ishikawa algorithm in [12]. In the Mann and Halpern algorithms, we set $\alpha_n=1/(n+3)^2$. In the Ishikawa algorithm, we take $\alpha_n=1/(n+3)$ and $\beta_n=1/(n+3)^2$. In the proposed Algorithm 1, the inertial parameter $\lambda_n$ is updated by (3.3) with $\epsilon_n=(1/2)^n$ and $\lambda=0.5$, and the parameter $\gamma_n=(n+1)/(2n+4)$. We use $D_n=d(x_n,(0,0,0,1))<10^{-6}$ as the stopping criterion. The initial values $x_0,x_1$ are chosen from [1]:

    $$x_0=(0.82054041398189,\ 1.78729906182707,\ 0.11578260956854,\ 2.20932797928782),$$
    $$x_1=(0.93181457846166,\ 0.46599434167542,\ 0.41864946772751,\ 1.50356127641534).$$

    Table 4 and Figure 4 provide the numerical results that we have obtained.

    Table 4.  Computation results of Example 3.

        Algorithm       Iteration   Time(s)
        Algorithm 1     18          0.0015
        Mann [11]       2930        0.0365
        Ishikawa [12]   2190        0.0480
        Halpern [11]    2630744     25.9806

    Figure 4.  Numerical behavior of {Dn} for Example 3.
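    Likewise, Example 3 can be rerun in a few lines. The sketch below is illustrative: it assumes the standard hyperboloid-model formulas stated above, the reflection map $F$ (as reconstructed from $\mathrm{Fix}(F)=\{(0,0,0,1)\}$), and the summable form $\lambda_n=\min\{\lambda,\epsilon_n/d(x_n,x_{n-1})\}$ for rule (3.3).

```python
import math

def lorentz(x, y):                     # the Lorentz form <x,y>
    return x[0]*y[0] + x[1]*y[1] + x[2]*y[2] - x[3]*y[3]

def dist(x, y):                        # d(x,y) = arccosh(-<x,y>)
    return math.acosh(max(1.0, -lorentz(x, y)))

def log_map(x, y):                     # exp_x^{-1} y
    a = -lorentz(x, y)
    if a <= 1.0:
        return [0.0, 0.0, 0.0, 0.0]
    c = math.acosh(a) / math.sqrt(a * a - 1.0)
    return [c * (yi - a * xi) for xi, yi in zip(x, y)]

def exp_map(x, v):                     # exp_x(v) = cosh(|v|) x + sinh(|v|) v/|v|
    nv = math.sqrt(max(0.0, lorentz(v, v)))   # tangent vectors are spacelike
    if nv == 0.0:
        return list(x)
    return [math.cosh(nv)*xi + math.sinh(nv)*vi/nv for xi, vi in zip(x, v)]

def F(x):                              # reflection fixing (0, 0, 0, 1)
    return [-x[0], -x[1], -x[2], x[3]]

p = [0.0, 0.0, 0.0, 1.0]
x_prev = [0.82054041398189, 1.78729906182707, 0.11578260956854, 2.20932797928782]
x = [0.93181457846166, 0.46599434167542, 0.41864946772751, 1.50356127641534]
lam, n = 0.5, 1
while dist(x, p) >= 1e-6 and n <= 10000:
    gamma = (n + 1) / (2.0 * n + 4.0)
    dn = dist(x, x_prev)
    lam_n = lam if dn == 0 else min(lam, 0.5 ** n / dn)
    y = exp_map(x, [lam_n * vi for vi in log_map(x, x_prev)])
    w = log_map(y, F(y))
    x_prev, x = x, exp_map(y, [(1.0 - gamma) * wi for wi in w])
    n += 1
```

    Since $F$ is an isometric reflection whose geodesic midpoint with any point is the fixed point itself, the Mann step with $\gamma_n\approx 1/2$ lands very close to $(0,0,0,1)$, which explains the very small iteration counts in Table 4.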

    Remark 7. (ⅰ) According to Tables 3 and 4, all of the algorithms converge to a fixed point of the given nonexpansive mapping. The numerical results indicate that our proposed method outperforms the other three algorithms in terms of both computation time and the number of iterations.

    (ⅱ) Figures 3 and 4 depict the convergence history of Algorithm 1, the Mann algorithm, the Ishikawa algorithm and the Halpern algorithm for Examples 2 and 3, respectively. It can be seen from Figures 3 and 4 that Algorithm 1 converges to a fixed point of the given nonexpansive mapping considerably faster than the Mann, Halpern and Ishikawa methods.

    This article studies the problem of finding fixed points of nonexpansive mappings on Hadamard manifolds. An inertial Mann algorithm is proposed for solving this problem, and it is shown to converge under suitable assumptions. The effectiveness of the proposed method is illustrated by numerical examples. The convergence rate of the proposed algorithm will be investigated in future work.

    The authors would like to thank the anonymous referee for valuable comments and Dr. Jamilu Abubakar for a useful discussion. K. Khammahawong was supported by the Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi (RMUTT). The second author was supported by the Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal Year 2023.

    The authors declare that they have no conflicts of interest.



    [1] O. P. Ferreira, L. R. Lucambio Pérez, S. Z. Németh, Singularities of monotone vector fields and an extragradient-type algorithm, J. Glob. Optim., 31 (2005), 133–151. https://doi.org/10.1007/s10898-003-3780-y doi: 10.1007/s10898-003-3780-y
    [2] C. Li, G. López, V. Martín-Márquez, Monotone vector fields and the proximal point algorithm on Hadamard manifolds, J. Lond. Math. Soc., 79 (2009), 663–683. https://doi.org/10.1112/jlms/jdn087 doi: 10.1112/jlms/jdn087
    [3] T. Sakai, Riemannian geometry, American Mathematical Society, Vol. 149, 1996.
    [4] M. R. Bridson, A. Haefliger, Metric spaces of non-positive curvature, Springer-Verlag, Berlin, Vol. 319, 1999. https://doi.org/10.1007/978-3-662-12494-9
    [5] J. X. da Cruz Neto, O. P. Ferreira, L. R. Lucambio Pérez, Monotone point-to-set vector fields, Balkan J. Geom. Appl., 5 (2000), 69–79.
    [6] V. Colao, G. López, G. Marino, V. Martín-Márquez, Equilibrium problems in Hadamard manifolds, J. Math. Anal. Appl., 388 (2012), 61–77. https://doi.org/10.1016/j.jmaa.2011.11.001 doi: 10.1016/j.jmaa.2011.11.001
    [7] X. Wang, G. López, C. Li, J. Yao, Equilibrium problems on Riemannian manifolds with applications, J. Math. Anal. Appl., 473 (2019), 866–891. https://doi.org/10.1016/j.jmaa.2018.12.073 doi: 10.1016/j.jmaa.2018.12.073
    [8] W. R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc., 4 (1953), 506–510. https://doi.org/10.2307/2032162 doi: 10.2307/2032162
    [9] P. Maingé, Convergence theorems for inertial KM-type algorithms, J. Comput. Appl. Math., 219 (2008), 223–236. https://doi.org/10.1016/j.cam.2007.07.021 doi: 10.1016/j.cam.2007.07.021
    [10] B. T. Polyak, Some methods of speeding up the convergence of iteration methods, Elsevier, 1964. https://doi.org/10.1016/0041-5553(64)90137-5
    [11] C. Li, G. López, V. Martín-Márquez, Iterative algorithms for nonexpansive mappings on Hadamard manifolds, Taiwan. J. Math., 14 (2010), 541–559. https://doi.org/10.11650/twjm/1500405806 doi: 10.11650/twjm/1500405806
    [12] R. Chugh, M. Kumari, A. Kumar, Two-step iterative procedure for non-expansive mappings on Hadamard manifolds, Commun. Optim. Theory, 2014.
    [13] T. T. Yao, Y. H. Li, Y. S. Zhang, Z. Zhao, A modified Riemannian Halpern algorithm for nonexpansive mappings on Hadamard manifolds, Optimization, 2021. https://doi.org/10.1080/02331934.2021.1914036 doi: 10.1080/02331934.2021.1914036
    [14] D. R. Sahu, F. Babu, S. Sharma, The S-iterative techniques on Hadamard manifolds and applications, J. Appl. Numer. Optim., 2 (2020), 353–371. https://doi.org/10.23952/jano.2.2020.3.06 doi: 10.23952/jano.2.2020.3.06
    [15] S. Chang, J. C. Yao, M. Liu, L. C. Zhao, J. H. Zhu, Shrinking projection algorithm for solving a finite family of quasi-variational inclusion problems in Hadamard manifold, RACSAM, 115 (2021), 166. https://doi.org/10.1007/s13398-021-01105-4 doi: 10.1007/s13398-021-01105-4
    [16] S. Huang, Approximations with weak contractions in Hadamard manifolds, Linear Nonlinear Anal., 1 (2015), 317–328.
    [17] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Prob., 20 (2004), 103–120. https://doi.org/10.1088/0266-5611/20/1/006 doi: 10.1088/0266-5611/20/1/006
    [18] E. Hacıoğlu, F. Gürsoy, S. Maldar, Y. Atalan, G. V. Milovanović, Iterative approximation of fixed points and applications to two-point second-order boundary value problems and to machine learning, Appl. Numer. Math., 167 (2021), 143–172. https://doi.org/10.1016/j.apnum.2021.04.020 doi: 10.1016/j.apnum.2021.04.020
    [19] C. I. Podilchuk, R. J. Mammone, Image recovery by convex projections using a least-squares constraint, J. Optical Soc. Amer. A, 7 (1990), 517–521. https://doi.org/10.1364/josaa.7.000517 doi: 10.1364/josaa.7.000517
    [20] A. Padcharoen, P. Sukprasert, Nonlinear operators as concerns convex programming and applied to signal processing, Mathematics, 7 (2019), 866. https://doi.org/10.3390/math7090866 doi: 10.3390/math7090866
    [21] S. Al-Homidan, Q. H. Ansari, F. Babu, Halpern- and Mann-type algorithms for fixed points and inclusion problems on Hadamard manifolds, Numer. Funct. Anal. Optim., 40 (2019), 621–653. https://doi.org/10.1080/01630563.2018.1553887 doi: 10.1080/01630563.2018.1553887
    [22] J. Hu, X. Liu, Z. W. Wen, Y. X. Yuan, A brief introduction to manifold optimization, J. Oper. Res. Soc. China, 8 (2020), 199–248. https://doi.org/10.1007/s40305-020-00295-9 doi: 10.1007/s40305-020-00295-9
    [23] C. Li, G. López, V. Martín-Márquez, J. H. Wang, Resolvents of set-valued monotone vector fields in Hadamard manifolds, Set-Valued Anal., 19 (2011), 361–383. https://doi.org/10.1007/s11228-010-0169-1 doi: 10.1007/s11228-010-0169-1
    [24] O. P. Ferreira, M. S. Louzeiro, L. F. Prudente, Gradient method for optimization on Riemannian manifolds with lower bounded curvature, SIAM J. Optim., 29 (2019), 2517–2541. https://doi.org/10.1137/18M1180633 doi: 10.1137/18M1180633
    [25] J. X. Da Cruz Neto, O. P. Ferreira, L. R. Lucambio Pérez, S. Z. Németh, Convex- and monotone-transformable mathematical programming problems and a proximal-like point method, J. Glob. Optim., 35 (2006), 53–69. https://doi.org/10.1007/s10898-005-6741-9 doi: 10.1007/s10898-005-6741-9
    [26] F. Alvarez, Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space, SIAM J. Optim., 14 (2003), 773–782. https://doi.org/10.1137/S1052623403427859 doi: 10.1137/S1052623403427859
    [27] F. Alvarez, H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal., 9 (2001), 3–11. https://doi.org/10.1023/A:1011253113155 doi: 10.1023/A:1011253113155
    [28] D. A. Lorenz, T. Pock, An inertial forward-backward algorithm for monotone inclusions, J. Math. Imaging Vis., 51 (2015), 311–325. https://doi.org/10.1007/s10851-014-0523-2 doi: 10.1007/s10851-014-0523-2
    [29] P. Ochs, T. Brox, T. Pock, iPiasco: Inertial proximal algorithm for strongly convex optimization, J. Math. Imaging Vis., 53 (2015), 171–181. https://doi.org/10.1007/s10851-015-0565-0 doi: 10.1007/s10851-015-0565-0
    [30] J. Fan, L. Liu, X. Qin, A subgradient extragradient algorithm with inertial effects for solving strongly pseudomonotone variational inequalities, Optimization, 69 (2020), 2199–2215. https://doi.org/10.1080/02331934.2019.1625355 doi: 10.1080/02331934.2019.1625355
    [31] N. Van Dung, N. T. Hieu, A new hybrid projection algorithm for equilibrium problems and asymptotically quasi ϕ-nonexpansive mappings in Banach spaces, RACSAM, 113 (2019), 2017–2035. https://doi.org/10.1007/s13398-018-0595-8 doi: 10.1007/s13398-018-0595-8
    [32] J. Chen, S. Liu, Extragradient-like method for pseudomontone equilibrium problems on Hadamard manifolds, J. Inequal. Appl., 2020 (2020), 205. https://doi.org/10.1186/s13660-020-02473-y doi: 10.1186/s13660-020-02473-y
    [33] B. Tan, Z. Zhou, S. Li, Strong convergence of modified inertial Mann algorithms for nonexpansive mappings, Mathematics, 8 (2020), 462. https://doi.org/10.3390/math8040462 doi: 10.3390/math8040462
    [34] B. Tan, J. Fan, X. Qin, Inertial extragradient algorithms with non-monotonic step sizes for solving variational inequalities and fixed point problems, Adv. Oper. Theory, 6 (2021), 61. https://doi.org/10.1007/s43036-021-00155-0 doi: 10.1007/s43036-021-00155-0
    [35] A. N. Iusem, V. Mohebbi, An extragradient method for vector equilibrium problems on Hadamard manifolds, J. Nonlinear Var. Anal., 5 (2021), 459–476. https://doi.org/10.23952/jnva.5.2021.3.09 doi: 10.23952/jnva.5.2021.3.09
    [36] F. Babu, Iterative methods for certain problems from nonlinear analysis in the setting of manifolds, Aligarh Muslim University, PhD Dissertation, 2019.
    [37] T. Rapcsák, Smooth nonlinear optimization in Rn, New York: Springer, 1997. https://doi.org/10.1007/978-1-4615-6357-0
    [38] K. Khammahawong, P. Kumam, P. Chaipunya, Splitting algorithms for equilibrium problems and inclusion problems on Hadamard manifolds, Numer. Funct. Anal. Optim., 42 (2021), 1645–1682. https://doi.org/10.1080/01630563.2021.1933523 doi: 10.1080/01630563.2021.1933523
    [39] P. Chaipunya, K. Khammahawong, P. Kumam, Iterative algorithm for singularities of inclusion problems in Hadamard manifolds, J. Inequal. Appl., 2021 (2021), 147. https://doi.org/10.1186/s13660-021-02676-x doi: 10.1186/s13660-021-02676-x
    [40] K. Khammahawong, P. Chaipunya, P. Kumam, Iterative algorithms for monotone variational inequality and fixed point problems on Hadamard manifolds, Adv. Oper. Theory, 7 (2022), 43. https://doi.org/10.1007/s43036-022-00207-z doi: 10.1007/s43036-022-00207-z
    [41] Q. L. Dong, H. B. Yuan, Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping, Fixed Point Theory Appl., 2015 (2015), 125. https://doi.org/10.1186/s13663-015-0374-6 doi: 10.1186/s13663-015-0374-6
    [42] Q. L. Dong, H. B. Yuan, Y. J. Cho, Th. M. Rassias, Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings, Optim. Lett., 12 (2018), 87–102. https://doi.org/10.1007/s11590-016-1102-9 doi: 10.1007/s11590-016-1102-9
    [43] H. H. Bauschke, P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, New York: Springer, 2011. https://doi.org/10.1007/978-1-4419-9467-7
    [44] G. Kassay, V. D. Rădulescu, Equilibrium problems and applications, London: Elsevier/Academic Press, 2019.
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)