Research article

Fixed point approach to solve nonlinear fractional differential equations in orthogonal F-metric spaces

  • Received: 27 September 2022 Revised: 18 November 2022 Accepted: 05 December 2022 Published: 13 December 2022
  • MSC : 46S40, 47H10, 54H25

  • In this paper, we introduce the notion of a generalized (α, Θ_F)-contraction in the context of an orthogonal F-complete metric space and obtain some new fixed point results for this newly introduced contraction. A nontrivial example is also provided to demonstrate the validity of the established results. As consequences of our results, we derive the leading results of [Fixed Point Theory Appl., 2015, 185] and [Symmetry, 2020, 12, 832]. As an application, we investigate the existence and uniqueness of the solution of a nonlinear fractional differential equation.

    Citation: Abdullah Eqal Al-Mazrooei, Jamshaid Ahmad. Fixed point approach to solve nonlinear fractional differential equations in orthogonal F-metric spaces[J]. AIMS Mathematics, 2023, 8(3): 5080-5098. doi: 10.3934/math.2023255




    Over the past decades, owing to a broad variety of applications in engineering, the sciences and economics, the linear complementarity problem (LCP) has been an active topic in the optimization community and has garnered a flurry of interest. The LCP is a powerful mathematical model which is intimately related to many significant scientific problems, such as the well-known primal-dual linear programming, bimatrix games, convex quadratic programming, the American option pricing problem and others; see e.g., [1,2,3] for more details. The LCP consists in determining a vector z\in\mathbb{R}^{n} such that

    \begin{equation} z\geq 0,\quad v = Az+q\geq 0\quad \text{and}\quad z^{\top}v = 0, \end{equation} (1.1)

    where A\in\mathbb{R}^{n\times n} and q\in\mathbb{R}^{n} are given. We hereafter abbreviate the problem (1.1) by LCP(A,q).
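The three conditions in (1.1) are easy to verify numerically. The following sketch (helper name and tolerance are ours, not from the paper) checks whether a candidate vector solves a given LCP(A,q):

```python
import numpy as np

def is_lcp_solution(A, q, z, tol=1e-10):
    """Check the LCP(A, q) conditions of (1.1):
    z >= 0, v = A z + q >= 0 and z^T v = 0 (up to a tolerance)."""
    v = A @ z + q
    return bool((z >= -tol).all() and (v >= -tol).all() and abs(z @ v) <= tol)
```

For example, with A = diag(2, 2) and q = (-2, 1)^T, the vector z = (1, 0)^T satisfies all three conditions, since then v = (0, 1)^T.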

    The LCP(A,q) of form (1.1), together with its extensions, has been extensively investigated in recent years, and designing numerical algorithms that obtain the solution of the LCP(A,q) (1.1) fast and economically is of great significance. Various iterative algorithms have been developed for solving the LCP(A,q) (1.1) over the past decades, such as the pivot algorithms [1,2,4], the projected iterative methods [5,6,7,8], the multisplitting methods [9,10,11,12,13,14], the Newton-type iteration methods [15,16] and others; see e.g., [17,18,19] and the references therein. The modulus-based matrix splitting (MMS) iteration method, which was first introduced in [20], is particularly attractive for solving the LCP(A,q) (1.1). By the variable transformation z = \frac{|x|+x}{\gamma} and v = \frac{\Omega}{\gamma}(|x|-x), and letting A = M-N, Bai reformulated the LCP(A,q) (1.1) in the following equivalent form [20]

    \begin{equation} (\Omega+M)x = Nx+(\Omega-A)|x|-\gamma q, \end{equation}

    where \gamma > 0 and \Omega\in\mathbb{R}^{n\times n} is a positive diagonal matrix. He then designed a general framework of the MMS iteration method for solving the large-scale sparse LCP(A,q) (1.1), which has the following form.

    Algorithm 1.1. ([20]) (The MMS method) Let A = M-N be a splitting of the matrix A\in\mathbb{R}^{n\times n}. Assume that x^{0}\in\mathbb{R}^{n} is an arbitrary initial guess. For k = 0, 1, 2, \ldots, compute x^{k+1} by solving the linear system

    \begin{equation} (\Omega+M)x^{k+1} = Nx^{k}+(\Omega-A)|x^{k}|-\gamma q, \end{equation}

    and then set

    \begin{equation} z^{k+1} = \frac{1}{\gamma}\left(|x^{k+1}|+x^{k+1}\right) \end{equation}

    until the iteration sequence \{z^{k}\} is convergent. Here, \Omega\in\mathbb{R}^{n\times n} is a positive diagonal matrix and \gamma is a positive constant.
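Algorithm 1.1 is straightforward to prototype. The sketch below is our illustration only: it defaults to a Jacobi-type splitting M = D_A, N = D_A - A with Ω = D_A and γ = 1, none of which is prescribed by the paper, and it stops on the residual RES used in Section 4:

```python
import numpy as np

def mms(A, q, M=None, Omega=None, gamma=1.0, tol=1e-8, kmax=500):
    """MMS iteration (Algorithm 1.1): solve
    (Omega + M) x^{k+1} = N x^k + (Omega - A)|x^k| - gamma*q,
    then set z^{k+1} = (|x^{k+1}| + x^{k+1}) / gamma."""
    n = A.shape[0]
    M = np.diag(np.diag(A)) if M is None else M          # modulus-based Jacobi default
    Omega = np.diag(np.diag(A)) if Omega is None else Omega
    N = M - A
    x = np.zeros(n)
    z = np.zeros(n)
    for k in range(1, kmax + 1):
        x = np.linalg.solve(Omega + M, N @ x + (Omega - A) @ np.abs(x) - gamma * q)
        z = (np.abs(x) + x) / gamma
        if np.linalg.norm(np.minimum(A @ z + q, z)) < tol:   # RES(z), cf. Section 4
            return z, k
    return z, kmax
```

On a small strictly diagonally dominant tridiagonal test problem this converges in a few dozen steps.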

    The MMS iteration method not only covers some existing iteration methods, such as the nonstationary extrapolated modulus method [21] and the modified modulus method [22], as special cases, but also yields a series of modulus-based relaxation methods, such as the modulus-based Jacobi (MJ), the modulus-based Gauss-Seidel (MGS), the modulus-based successive overrelaxation (MSOR) and the modulus-based accelerated overrelaxation (MAOR) methods. Owing to its promising behavior and elegant mathematical properties, the MMS iterative scheme immediately received considerable attention, and diverse versions of it appeared. For instance, Zheng and Yin [23] established a new class of accelerated MMS (AMMS) iteration methods for solving the large-scale sparse LCP(A,q) (1.1) and explored the convergence of the AMMS method when the system matrix A is a positive definite matrix or an H_{+}-matrix. In order to further accelerate the MMS method, Zheng et al. [24] combined the relaxation strategy with the matrix splitting technique in the modulus equation of [25] and presented a relaxation MMS (RMMS) iteration method for solving the LCP(A,q) (1.1); the parameter selection strategies of the RMMS method were discussed in depth [24]. In addition, the RMMS method covers the general MMS (GMMS) method [25] as a special case. In the sequel, by extending the two-sweep iteration methods [26,27], Wu and Li [28] developed a general framework of two-sweep MMS (TMMS) iteration methods to solve the LCP(A,q) (1.1) and established their convergence when the system matrix A is either an H_{+}-matrix or a positive definite matrix. Ren et al. [29] proposed a class of general two-sweep MMS (GTMMS) iteration methods to solve the LCP(A,q) (1.1), which encompasses the TMMS method for appropriate parameter matrices. Peng et al. [30] presented a relaxation two-sweep MMS (RTMMS) iteration method for solving the LCP(A,q) (1.1) and gave its convergence theory when the system matrix A is an H_{+}-matrix or a positive definite matrix. Huang et al. [31] combined the parametric strategy, the relaxation technique and the acceleration technique to construct an accelerated relaxation MMS (ARMMS) iteration method for solving the LCP(A,q) (1.1). The ARMMS method can be regarded as a generalization of some existing methods, such as the MMS [20], the GMMS [25] and the RMMS [24] methods. For more modulus-based matrix splitting type iteration methods, see [32,33,34,35,36,37,38,39,40,41] and the references therein.

    On the other hand, Bai and Tong [42] equivalently transformed the LCP(A,q) (1.1) into a nonlinear equation without using a variable transformation and proposed an efficient iterative algorithm based on matrix splittings and extrapolation acceleration techniques. Some relaxed versions of the method proposed in [42] were then constructed by Bai and Huang [43], and the corresponding convergence theory was established under mild conditions. Recently, Wu and Li [44] recast the LCP(A,q) (1.1) into an implicit fixed-point equation

    \begin{equation} (\Omega+M)z = Nz+|(A-\Omega)z+q|-q, \end{equation} (1.2)

    where A = M-N. In fact, if M = A and \Omega = I, then (1.2) reduces to the fixed-point equation proposed in [42]. Based on (1.2), the new MMS (NMMS) method for solving the LCP(A,q) (1.1) was constructed in [44].

    Algorithm 1.2. ([44]) (The NMMS method) Let A = M-N be a splitting of the matrix A\in\mathbb{R}^{n\times n} such that the matrix \Omega+M is nonsingular, where \Omega\in\mathbb{R}^{n\times n} is a positive diagonal matrix. Given a nonnegative initial vector z^{0}\in\mathbb{R}^{n}, for k = 0, 1, 2, \ldots until the iteration sequence \{z^{k}\} is convergent, compute z^{k+1}\in\mathbb{R}^{n} by solving the linear system

    \begin{equation} (\Omega+M)z^{k+1} = Nz^{k}+|(A-\Omega)z^{k}+q|-q. \end{equation}

    It is obvious that the NMMS method does not need any variable transformation, which distinguishes it from the MMS-type iteration methods mentioned above. Nevertheless, the NMMS method inherits the merits of the MMS-type iteration methods, and some relaxation versions of it have been studied.
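A minimal sketch of Algorithm 1.2 (our code; the Gauss-Seidel-type default splitting M = D_A - L_A together with Ω = D_A corresponds to the NMGS variant used in Section 4, but is only one possible choice):

```python
import numpy as np

def nmms(A, q, M=None, Omega=None, tol=1e-8, kmax=500):
    """NMMS iteration (Algorithm 1.2): solve
    (Omega + M) z^{k+1} = N z^k + |(A - Omega) z^k + q| - q,
    with no variable transformation."""
    n = A.shape[0]
    M = np.tril(A) if M is None else M                   # NMGS-type default
    Omega = np.diag(np.diag(A)) if Omega is None else Omega
    N = M - A
    z = np.zeros(n)                                       # nonnegative initial vector
    for k in range(1, kmax + 1):
        z = np.linalg.solve(Omega + M, N @ z + np.abs((A - Omega) @ z + q) - q)
        if np.linalg.norm(np.minimum(A @ z + q, z)) < tol:
            return z, k
    return z, kmax
```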

    Remark 1.1. Let A = D_{A}-L_{A}-U_{A}, where D_{A}, L_{A} and U_{A} are the diagonal, strictly lower-triangular and strictly upper-triangular parts of A, respectively. It has been mentioned in [44] that Algorithm 1.2 reduces to the following methods.

    (i) If M = A, \Omega = I and N = 0, then Algorithm 1.2 becomes the new modulus method:

    \begin{equation} (I+A)z^{k+1} = |(A-I)z^{k}+q|-q. \end{equation}

    (ii) If M = A, N = 0 and \Omega = \alpha I, then Algorithm 1.2 turns into the new modified modulus iteration method:

    \begin{equation} (\alpha I+A)z^{k+1} = |(A-\alpha I)z^{k}+q|-q. \end{equation}

    (iii) Let M = \frac{1}{\alpha}(D_{A}-\beta L_{A}) and N = \frac{1}{\alpha}\big[(1-\alpha)D_{A}+(\alpha-\beta)L_{A}+\alpha U_{A}\big]; then Algorithm 1.2 reduces to the new MAOR iteration method:

    \begin{equation} (\alpha\Omega+D_{A}-\beta L_{A})z^{k+1} = \big[(1-\alpha)D_{A}+(\alpha-\beta)L_{A}+\alpha U_{A}\big]z^{k}+\alpha\left(|(A-\Omega)z^{k}+q|-q\right). \end{equation} (1.3)

    Evidently, based on (1.3), when (\alpha, \beta) equals (\alpha, \alpha), (1, 1) and (1, 0), we obtain the new MSOR (NMSOR), the new MGS (NMGS) and the new MJ (NMJ) iteration methods, respectively.

    The goal of this paper is to further improve the computational efficiency of Algorithm 1.2 for solving the LCP(A,q) (1.1). To this end, we utilize the two-sweep matrix splitting iteration technique of [28,29] together with a relaxation technique, and construct a new class of relaxed acceleration two-sweep MMS (NRATMMS) iteration methods for solving the LCP(A,q) (1.1). The convergence of the NRATMMS iteration method is analyzed in detail. By choosing suitable parameter matrices, the NRATMMS iteration method generates a number of relaxation variants. Numerical results are reported to demonstrate the efficiency of the NRATMMS iteration method.

    The remainder of this paper is organized as follows. In Section 2, we present some notation and definitions used hereinafter. Section 3 is devoted to establishing the NRATMMS iteration method for solving the LCP(A,q) (1.1) and exploring its global linear convergence. Section 4 reports the numerical results. Finally, some concluding remarks are given in Section 5.

    In this section, we collect some notation, classical definitions and auxiliary results which lay the foundation of our developments.

    \mathbb{R}^{n\times n} denotes the set of all n\times n real matrices and \mathbb{R}^{n} = \mathbb{R}^{n\times 1}. I is the identity matrix of suitable dimension. |\cdot| denotes the absolute value of a real scalar or the modulus of a complex scalar. For x\in\mathbb{R}^{n}, x_{i} refers to its i-th entry, and |x| = (|x_{1}|, |x_{2}|, \ldots, |x_{n}|)^{\top}\in\mathbb{R}^{n} represents the componentwise absolute value of x. tridiag(a, b, c) denotes a tridiagonal matrix with a, b, c as the subdiagonal, main diagonal and superdiagonal entries, respectively. Tridiag(A, B, C) denotes a block tridiagonal matrix with A, B, C as the subdiagonal, main diagonal and superdiagonal block entries, respectively.

    For two matrices P = (p_{ij})\in\mathbb{R}^{m\times n} and Q = (q_{ij})\in\mathbb{R}^{m\times n}, we write P\geq Q (P > Q) if p_{ij}\geq q_{ij} (p_{ij} > q_{ij}) holds for all i and j. For A = (a_{ij})\in\mathbb{R}^{m\times n}, A^{\top} and |A| represent the transpose of A and the absolute value of A (|A| = (|a_{ij}|)\in\mathbb{R}^{m\times n}), respectively. For A = (a_{ij})\in\mathbb{R}^{n\times n}, \rho(A) represents its spectral radius. Moreover, the comparison matrix \langle A\rangle = (\langle a\rangle_{ij}) is defined by

    \begin{equation} \langle a\rangle_{ij} = \begin{cases} |a_{ij}|, & \text{if } i = j,\\ -|a_{ij}|, & \text{if } i\neq j, \end{cases}\qquad i, j = 1, 2, \ldots, n. \end{equation}

    A matrix A\in\mathbb{R}^{n\times n} is called a Z-matrix if all of its off-diagonal entries are nonpositive, and a P-matrix if all of its principal minors are positive; a real matrix is called an M-matrix if it is a nonsingular Z-matrix with A^{-1}\geq 0, and an H-matrix if its comparison matrix \langle A\rangle is an M-matrix. In particular, an H-matrix with positive diagonal entries is called an H_{+}-matrix [9]. In addition, a sufficient condition for A to be a P-matrix is that A is an H_{+}-matrix. A\in\mathbb{R}^{n\times n} is called strictly diagonally dominant if |a_{ii}| > \sum_{j\neq i}|a_{ij}| for all 1\leq i\leq n.
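These definitions translate directly into code. A sketch (helper names are ours) that builds ⟨A⟩ and tests the H_+ property numerically, using the characterization of an M-matrix as a nonsingular Z-matrix with nonnegative inverse:

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix <A>: |a_ij| on the diagonal, -|a_ij| off it."""
    C = -np.abs(np.asarray(A, dtype=float))
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_h_plus(A, tol=1e-12):
    """A is an H+-matrix iff diag(A) > 0 and <A> is an M-matrix."""
    A = np.asarray(A, dtype=float)
    if (np.diag(A) <= 0).any():
        return False
    try:
        return bool((np.linalg.inv(comparison_matrix(A)) >= -tol).all())
    except np.linalg.LinAlgError:
        return False
```

For instance, tridiag(-1, 4, -1) is an H_+-matrix, while [[1, -2], [-2, 1]] is not, since its comparison matrix has a partly negative inverse.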

    Let M be nonsingular; then A = M-N is called an M-splitting if M is an M-matrix and N\geq 0, an H-splitting if \langle M\rangle-|N| is an M-matrix, and an H-compatible splitting if \langle A\rangle = \langle M\rangle-|N| [45]. Finally, the following lemmas are needed in the convergence analysis of the proposed method.

    Lemma 1. ([46]) Let A\in\mathbb{R}^{n\times n} be an H_{+}-matrix; then the LCP(A,q) (1.1) has a unique solution for any q\in\mathbb{R}^{n}.

    Lemma 2. ([47]) Let B\in\mathbb{R}^{n\times n} be a strictly diagonally dominant matrix. Then, for all C\in\mathbb{R}^{n\times n},

    \begin{equation} \|B^{-1}C\|_{\infty}\leq \max\limits_{1\leq i\leq n}\frac{(|C|e)_{i}}{(\langle B\rangle e)_{i}} \end{equation}

    holds, where e = (1, 1, \ldots, 1)^{\top}.
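Lemma 2 gives a computable upper bound on the infinity norm of B^{-1}C. A quick numerical illustration (our code, assuming B is strictly diagonally dominant as the lemma requires):

```python
import numpy as np

def lemma2_bound(B, C):
    """max_i (|C|e)_i / (<B>e)_i, an upper bound on ||B^{-1} C||_inf (Lemma 2)."""
    e = np.ones(B.shape[0])
    Bc = -np.abs(B)
    np.fill_diagonal(Bc, np.abs(np.diag(B)))   # comparison matrix <B>
    return float(np.max((np.abs(C) @ e) / (Bc @ e)))
```

For B = [[4, -1], [-1, 4]] and C = I the bound is 1/3, and it is attained here since ||B^{-1}||_inf = 5/15 = 1/3.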

    Lemma 3. ([48]) Let A be an H-matrix; then |A^{-1}|\leq \langle A\rangle^{-1}.

    Lemma 4. ([49]) If A is an M-matrix, then there exists a positive diagonal matrix V such that AV is a strictly diagonally dominant matrix with positive diagonal entries.

    Lemma 5. ([49]) Let A, B be two Z-matrices, A be an M-matrix, and B\geq A. Then B is an M-matrix.

    Lemma 6. ([26]) Let

    \begin{equation} A = \begin{pmatrix} B & C\\ I & 0 \end{pmatrix}\geq 0 \quad\text{and}\quad \rho(B+C) < 1; \end{equation}

    then \rho(A) < 1.

    Lemma 7. ([45]) If A = M-N is an M-splitting of A, then \rho(M^{-1}N) < 1 if and only if A is an M-matrix.

    In this section, the NRATMMS iteration method for solving the LCP(A,q) (1.1) is developed, and its general convergence analysis is explored.

    Let A = M_{1}-N_{1} = M_{2}-N_{2} be two splittings of A and \Omega = \Omega_{1}-\Omega_{2} = \Omega_{3}-\Omega_{4} with \Omega_{i} (i = 1, 2, 3, 4) all nonnegative diagonal matrices; then (1.2) can be reformulated in the following fixed-point form:

    \begin{equation} (\Omega_{1}+M_{1})z = (N_{1}+\Omega_{2})[\theta z+(1-\theta)z]+|(M_{2}-\Omega_{3})z+(\Omega_{4}-N_{2})z+q|-q, \end{equation} (3.1)

    where \theta\geq 0 is a relaxation parameter. Based on (3.1), the NRATMMS iteration method is established as the following Algorithm 3.1.

    Algorithm 3.1. (The NRATMMS iteration method) Let A = M_{1}-N_{1} = M_{2}-N_{2} be two splittings of A and \Omega = \Omega_{1}-\Omega_{2} = \Omega_{3}-\Omega_{4} with \Omega_{i} (i = 1, 2, 3, 4) all nonnegative diagonal matrices such that M_{1}+\Omega_{1} is nonsingular. Given two initial guesses z^{0}, z^{1}\in\mathbb{R}^{n} and a nonnegative relaxation parameter \theta, the iteration sequence \{z^{k}\} is generated by

    \begin{equation} (\Omega_{1}+M_{1})z^{k+1} = (N_{1}+\Omega_{2})[\theta z^{k}+(1-\theta)z^{k-1}]+|(M_{2}-\Omega_{3})z^{k}+(\Omega_{4}-N_{2})z^{k-1}+q|-q \end{equation} (3.2)

    for k = 1, 2, \ldots until convergence.
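The two-sweep iteration (3.2) can be sketched directly. The choices below are one concrete instance for illustration (our code): M_1 = D_A - L_A, M_2 = A, N_2 = 0, Ω_1 = Ω_3 = D_A, Ω_2 = Ω_4 = 0, i.e. an NRATMGS-type variant like the one used later in Section 4, not the general method:

```python
import numpy as np

def nratmms(A, q, theta=1.0, tol=1e-8, kmax=500):
    """NRATMMS iteration (3.2) with NRATMGS-type splitting choices (see above)."""
    D = np.diag(np.diag(A))
    M1 = np.tril(A)                        # D_A - L_A (Gauss-Seidel-type)
    N1 = M1 - A
    M2, N2 = A, np.zeros_like(A)
    O1 = O3 = D
    O2 = O4 = np.zeros_like(A)
    z_prev = np.zeros(len(q))              # z^0
    z = np.zeros(len(q))                   # z^1
    for k in range(1, kmax + 1):
        rhs = (N1 + O2) @ (theta * z + (1.0 - theta) * z_prev) \
              + np.abs((M2 - O3) @ z + (O4 - N2) @ z_prev + q) - q
        z_prev, z = z, np.linalg.solve(O1 + M1, rhs)
        if np.linalg.norm(np.minimum(A @ z + q, z)) < tol:
            return z, k
    return z, kmax
```

With θ = 1 these particular choices reduce Algorithm 3.1 to Algorithm 1.2.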

    Algorithm 3.1 provides a general framework of NMMS iteration methods for solving the LCP(A,q) (1.1), and it yields a series of NMMS-type iteration methods for suitable choices of the matrix splittings and the relaxation parameter. For instance, when \theta = 1 and \Omega_{i} = 0 (i = 1, 2, 3, 4), Algorithm 3.1 reduces to the new accelerated two-sweep MMS (NATMMS) iteration method

    \begin{equation} M_{1}z^{k+1} = N_{1}z^{k}+|M_{2}z^{k}-N_{2}z^{k-1}+q|-q. \end{equation}

    When \theta = 1, \Omega_{1} = \Omega_{3} = \Omega, \Omega_{2} = \Omega_{4} = 0, M_{2} = A and N_{2} = 0, Algorithm 3.1 reduces to Algorithm 1.2. When M_{1} = \frac{1}{\alpha}(D_{A}-\beta L_{A}), N_{1} = \frac{1}{\alpha}\big[(1-\alpha)D_{A}+(\alpha-\beta)L_{A}+\alpha U_{A}\big], M_{2} = D_{A}-U_{A} and N_{2} = L_{A} with \alpha, \beta > 0, Algorithm 3.1 gives the new relaxed acceleration two-sweep MAOR (NRATMAOR) iteration method. If (\alpha, \beta) equals (\alpha, \alpha), (1, 1) and (1, 0), the NRATMAOR iteration method reduces to the new relaxed acceleration two-sweep MSOR (NRATMSOR), the new relaxed acceleration two-sweep MGS (NRATMGS) and the new relaxed acceleration two-sweep MJ (NRATMJ) iteration methods, respectively.

    The convergence analysis for Algorithm 3.1 is carried out for the case in which the system matrix A of the LCP(A,q) (1.1) is an H_{+}-matrix.

    Lemma 8. Assume that A\in\mathbb{R}^{n\times n} is an H_{+}-matrix. Let A = M_{1}-N_{1} and A = M_{2}-N_{2} be an H-splitting and a general splitting of A, respectively, and let \Omega_{i} (i = 1, 2, 3, 4) be four nonnegative diagonal matrices such that M_{1}+\Omega_{1} is nonsingular. Denote

    \begin{equation} \tilde{L} = \langle\Omega_{1}+M_{1}\rangle^{-1}\big[(\theta+|1-\theta|)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big]; \end{equation}

    then the iteration sequence \{z^{k}\} generated by Algorithm 3.1 converges to the unique solution z^{*} for arbitrary two initial vectors if \rho(\tilde{L}) < 1.

    Proof. Let z^{*} be the exact solution of the LCP(A,q) (1.1); then it satisfies

    \begin{equation} (\Omega_{1}+M_{1})z^{*} = (N_{1}+\Omega_{2})[\theta z^{*}+(1-\theta)z^{*}]+|(M_{2}-\Omega_{3})z^{*}+(\Omega_{4}-N_{2})z^{*}+q|-q. \end{equation} (3.3)

    Subtracting (3.3) from (3.2), we have

    \begin{align} |z^{k+1}-z^{*}| & = \big|(\Omega_{1}+M_{1})^{-1}(N_{1}+\Omega_{2})[\theta(z^{k}-z^{*})+(1-\theta)(z^{k-1}-z^{*})]\\ &\quad +(\Omega_{1}+M_{1})^{-1}|(M_{2}-\Omega_{3})z^{k}+(\Omega_{4}-N_{2})z^{k-1}+q|\\ &\quad -(\Omega_{1}+M_{1})^{-1}|(M_{2}-\Omega_{3})z^{*}+(\Omega_{4}-N_{2})z^{*}+q|\big|\\ & \leq |(\Omega_{1}+M_{1})^{-1}||N_{1}+\Omega_{2}||\theta(z^{k}-z^{*})+(1-\theta)(z^{k-1}-z^{*})|\\ &\quad +|(\Omega_{1}+M_{1})^{-1}|\,\big||(M_{2}-\Omega_{3})z^{k}+(\Omega_{4}-N_{2})z^{k-1}+q|-|(M_{2}-\Omega_{3})z^{*}+(\Omega_{4}-N_{2})z^{*}+q|\big|\\ & \leq |(\Omega_{1}+M_{1})^{-1}||N_{1}+\Omega_{2}|\big[\theta|z^{k}-z^{*}|+|1-\theta||z^{k-1}-z^{*}|\big]\\ &\quad +|(\Omega_{1}+M_{1})^{-1}||(M_{2}-\Omega_{3})(z^{k}-z^{*})+(\Omega_{4}-N_{2})(z^{k-1}-z^{*})|\\ & \leq |(\Omega_{1}+M_{1})^{-1}||N_{1}+\Omega_{2}|\big[\theta|z^{k}-z^{*}|+|1-\theta||z^{k-1}-z^{*}|\big]\\ &\quad +|(\Omega_{1}+M_{1})^{-1}|\big[|M_{2}-\Omega_{3}||z^{k}-z^{*}|+|\Omega_{4}-N_{2}||z^{k-1}-z^{*}|\big]\\ & = |(\Omega_{1}+M_{1})^{-1}|\big[\theta|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|\big]|z^{k}-z^{*}|\\ &\quad +|(\Omega_{1}+M_{1})^{-1}|\big[|1-\theta||N_{1}+\Omega_{2}|+|\Omega_{4}-N_{2}|\big]|z^{k-1}-z^{*}|. \end{align}

    For simplicity, let

    \begin{equation} F = |(\Omega_{1}+M_{1})^{-1}|\big[\theta|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|\big] \end{equation} (3.4)

    and

    \begin{equation} G = |(\Omega_{1}+M_{1})^{-1}|\big[|1-\theta||N_{1}+\Omega_{2}|+|\Omega_{4}-N_{2}|\big]. \end{equation} (3.5)

    Then we have

    \begin{equation} \begin{pmatrix} |z^{k+1}-z^{*}|\\ |z^{k}-z^{*}| \end{pmatrix}\leq \begin{pmatrix} F & G\\ I & 0 \end{pmatrix} \begin{pmatrix} |z^{k}-z^{*}|\\ |z^{k-1}-z^{*}| \end{pmatrix}. \end{equation}

    Let

    \begin{equation} L = \begin{pmatrix} F & G\\ I & 0 \end{pmatrix}; \end{equation}

    then the iteration sequence \{z^{k}\} converges to the unique solution z^{*} if \rho(L) < 1. Since L\geq 0, according to Lemma 6, \rho(F+G) < 1 implies \rho(L) < 1. Hence, to prove the convergence of Algorithm 3.1, it is sufficient to prove \rho(F+G) < 1.

    Since A is an H_{+}-matrix and A = M_{1}-N_{1} is an H-splitting of A, i.e., \langle M_{1}\rangle-|N_{1}| is an M-matrix, Lemma 5 together with \langle M_{1}\rangle\geq \langle M_{1}\rangle-|N_{1}| implies that M_{1} is an H-matrix, and \Omega_{1}+M_{1} is also an H-matrix. In light of Lemma 3, it follows that

    \begin{equation} 0\leq |(\Omega_{1}+M_{1})^{-1}|\leq \langle\Omega_{1}+M_{1}\rangle^{-1}. \end{equation}

    Recalling (3.4) and (3.5), we obtain

    \begin{equation} F+G = |(\Omega_{1}+M_{1})^{-1}|\big[(\theta+|1-\theta|)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big], \end{equation}

    which yields

    \begin{equation} 0\leq F+G\leq \langle\Omega_{1}+M_{1}\rangle^{-1}\big[(\theta+|1-\theta|)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big] := \tilde{L}. \end{equation}

    As a consequence, by the monotonicity of the spectral radius, the iteration sequence \{z^{k}\} generated by Algorithm 3.1 converges to the unique solution z^{*} of the LCP(A,q) (1.1) if \rho(\tilde{L}) < 1. The proof is completed.

    Theorem 3.1. Assume that A\in\mathbb{R}^{n\times n} is an H_{+}-matrix. Let A = M_{1}-N_{1} be an H-compatible splitting and A = M_{2}-N_{2} be an M-splitting of A, and let \Omega_{i} (i = 1, 2, 3, 4) be four nonnegative diagonal matrices such that M_{1}+\Omega_{1} is nonsingular. Denote

    \begin{equation} \tilde{L} = \langle\Omega_{1}+M_{1}\rangle^{-1}\big[(\theta+|1-\theta|)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big]; \end{equation}

    then the iteration sequence \{z^{k}\} generated by Algorithm 3.1 converges to the unique solution z^{*} of the LCP(A,q) (1.1) for arbitrary two initial vectors if one of the following two conditions holds.

    (i) 0 < \theta\leq 1 and \Omega_{i} (i = 1, 2, 3, 4) satisfy

    \begin{equation} \begin{cases} \langle A\rangle Ve > \Omega_{4}Ve, & \text{if}\ \Omega_{3}\geq D_{M_{2}},\\ (\langle A\rangle+\Omega)Ve > D_{M_{2}}Ve, & \text{if}\ \Omega_{3} < D_{M_{2}}. \end{cases} \end{equation} (3.6)

    (ii) \theta > 1 and \Omega_{i} (i = 1, 2, 3, 4) satisfy

    \begin{equation} \begin{cases} \theta < 1+\min\limits_{1\leq i\leq n}\dfrac{[(\langle A\rangle-\Omega_{4})Ve]_{i}}{[(|N_{1}|+\Omega_{2})Ve]_{i}}\ \text{and}\ \dfrac{[(\langle A\rangle-\Omega_{4})Ve]_{i}}{[(|N_{1}|+\Omega_{2})Ve]_{i}} > 0, & \text{if}\ \Omega_{3}\geq D_{M_{2}},\\[2mm] \theta < 1+\min\limits_{1\leq i\leq n}\dfrac{[(\langle A\rangle+\Omega-D_{M_{2}})Ve]_{i}}{[(|N_{1}|+\Omega_{2})Ve]_{i}}\ \text{and}\ \dfrac{[(\langle A\rangle+\Omega-D_{M_{2}})Ve]_{i}}{[(|N_{1}|+\Omega_{2})Ve]_{i}} > 0, & \text{if}\ \Omega_{3} < D_{M_{2}}. \end{cases} \end{equation} (3.7)

    Here, \Omega = \Omega_{1}-\Omega_{2} = \Omega_{3}-\Omega_{4} and V is an arbitrary positive diagonal matrix such that \langle\Omega_{1}+M_{1}\rangle V is a strictly diagonally dominant matrix.

    Proof. According to Lemma 8, we only need to demonstrate \rho(\tilde{L}) < 1. On the basis of Lemmas 2 and 4, it follows that

    \begin{align} \rho(\tilde{L}) & = \rho(V^{-1}\tilde{L}V)\leq \|V^{-1}\tilde{L}V\|_{\infty}\\ & = \big\|[\langle\Omega_{1}+M_{1}\rangle V]^{-1}\big[(\theta+|1-\theta|)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big]V\big\|_{\infty}\\ & \leq \max\limits_{1\leq i\leq n}\frac{\big\{\big[(\theta+|1-\theta|)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big]Ve\big\}_{i}}{[\langle\Omega_{1}+M_{1}\rangle Ve]_{i}}. \end{align}

    When 0 < \theta\leq 1, it holds that

    \begin{equation} \rho(\tilde{L})\leq \max\limits_{1\leq i\leq n}\frac{\big\{\big[|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big]Ve\big\}_{i}}{[\langle\Omega_{1}+M_{1}\rangle Ve]_{i}}. \end{equation} (3.8)

    Since A = M_{2}-N_{2} is an M-splitting of A, M_{2} is an M-matrix. Let M_{2} = D_{M_{2}}-B_{M_{2}} be a splitting of M_{2}, where D_{M_{2}} is the positive diagonal part of M_{2}.

    If \Omega_{3}\geq D_{M_{2}}, it can be concluded that

    \begin{align} &\langle\Omega_{1}+M_{1}\rangle Ve-\big[|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big]Ve\\ & = \big(\langle\Omega_{1}+M_{1}\rangle-|N_{1}+\Omega_{2}|-|\Omega_{3}-M_{2}|-|\Omega_{4}-N_{2}|\big)Ve\\ & = \big(\langle\Omega_{1}+M_{1}\rangle-|N_{1}+\Omega_{2}|-|\Omega_{3}-D_{M_{2}}+B_{M_{2}}|-|\Omega_{4}-N_{2}|\big)Ve\\ & \geq \big(\langle\Omega_{1}+M_{1}\rangle-|N_{1}+\Omega_{2}|-|\Omega_{3}-D_{M_{2}}|-|B_{M_{2}}|-|\Omega_{4}-N_{2}|\big)Ve\\ & \geq \big(\Omega_{1}+\langle M_{1}\rangle-|N_{1}|-\Omega_{2}-\Omega_{3}+D_{M_{2}}-|B_{M_{2}}|-\Omega_{4}-|N_{2}|\big)Ve\\ & = \big(\Omega_{1}+\langle M_{1}\rangle-|N_{1}|-\Omega_{2}-\Omega_{3}+\langle M_{2}\rangle-\Omega_{4}-|N_{2}|\big)Ve\\ & = \big(2\langle A\rangle+\Omega_{1}-\Omega_{2}-\Omega_{3}-\Omega_{4}\big)Ve\\ & = \big(2\langle A\rangle-2\Omega_{4}\big)Ve. \end{align} (3.9)

    If \Omega_{3} < D_{M_{2}}, we get

    \begin{align} &\langle\Omega_{1}+M_{1}\rangle Ve-\big[|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big]Ve\\ & = \big(\langle\Omega_{1}+M_{1}\rangle-|N_{1}+\Omega_{2}|-|M_{2}-\Omega_{3}|-|\Omega_{4}-N_{2}|\big)Ve\\ & = \big(\langle\Omega_{1}+M_{1}\rangle-|N_{1}+\Omega_{2}|-|D_{M_{2}}-\Omega_{3}-B_{M_{2}}|-|\Omega_{4}-N_{2}|\big)Ve\\ & \geq \big(\langle\Omega_{1}+M_{1}\rangle-|N_{1}+\Omega_{2}|-|D_{M_{2}}-\Omega_{3}|-|B_{M_{2}}|-|\Omega_{4}-N_{2}|\big)Ve\\ & \geq \big(\Omega_{1}+\langle M_{1}\rangle-|N_{1}|-\Omega_{2}+\Omega_{3}-2D_{M_{2}}+D_{M_{2}}-|B_{M_{2}}|-\Omega_{4}-|N_{2}|\big)Ve\\ & = \big(\Omega_{1}+\langle M_{1}\rangle-|N_{1}|-\Omega_{2}+\Omega_{3}-2D_{M_{2}}+\langle M_{2}\rangle-\Omega_{4}-|N_{2}|\big)Ve\\ & = \big(2\langle A\rangle-2D_{M_{2}}+\Omega_{1}-\Omega_{2}+\Omega_{3}-\Omega_{4}\big)Ve\\ & = \big(2\langle A\rangle-2D_{M_{2}}+2\Omega\big)Ve. \end{align} (3.10)

    According to (3.8), (3.9) and (3.10), we have \rho(\tilde{L}) < 1 if (3.6) holds.

    When \theta > 1, it follows that

    \begin{equation} \rho(\tilde{L})\leq \max\limits_{1\leq i\leq n}\frac{\big\{\big[(2\theta-1)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big]Ve\big\}_{i}}{[\langle\Omega_{1}+M_{1}\rangle Ve]_{i}}. \end{equation} (3.11)

    If \Omega_{3}\geq D_{M_{2}}, it can be derived that

    \begin{align} &\langle\Omega_{1}+M_{1}\rangle Ve-\big[(2\theta-1)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big]Ve\\ & = \big(\langle\Omega_{1}+M_{1}\rangle-(2\theta-1)|N_{1}+\Omega_{2}|-|\Omega_{3}-D_{M_{2}}+B_{M_{2}}|-|\Omega_{4}-N_{2}|\big)Ve\\ & \geq \big(\Omega_{1}+\langle M_{1}\rangle-(2\theta-1)|N_{1}|-(2\theta-1)\Omega_{2}-|\Omega_{3}-D_{M_{2}}|-|B_{M_{2}}|-\Omega_{4}-|N_{2}|\big)Ve\\ & = \big(\Omega_{1}+\langle M_{1}\rangle-(2\theta-1)|N_{1}|-(2\theta-1)\Omega_{2}-\Omega_{3}+D_{M_{2}}-|B_{M_{2}}|-\Omega_{4}-|N_{2}|\big)Ve\\ & = \big(\Omega_{1}+\langle M_{1}\rangle-|N_{1}|-2(\theta-1)|N_{1}|-2(\theta-1)\Omega_{2}-\Omega_{2}-\Omega_{3}+\langle M_{2}\rangle-\Omega_{4}-|N_{2}|\big)Ve\\ & = \big(2\langle A\rangle-2\Omega_{4}-2(\theta-1)(|N_{1}|+\Omega_{2})\big)Ve, \end{align}

    from which we have

    \begin{equation} \big[2\langle A\rangle-2\Omega_{4}-2(\theta-1)(|N_{1}|+\Omega_{2})\big]Ve > 0 \end{equation} (3.12)

    provided that 1 < \theta < 1+\min\limits_{1\leq i\leq n}\frac{[(\langle A\rangle-\Omega_{4})Ve]_{i}}{[(|N_{1}|+\Omega_{2})Ve]_{i}} and \frac{[(\langle A\rangle-\Omega_{4})Ve]_{i}}{[(|N_{1}|+\Omega_{2})Ve]_{i}} > 0 (i = 1, 2, \ldots, n).

    If \Omega_{3} < D_{M_{2}}, it is implied that

    \begin{align} &\langle\Omega_{1}+M_{1}\rangle Ve-\big[(2\theta-1)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\big]Ve\\ & = \big(\langle\Omega_{1}+M_{1}\rangle-(2\theta-1)|N_{1}+\Omega_{2}|-|D_{M_{2}}-\Omega_{3}-B_{M_{2}}|-|\Omega_{4}-N_{2}|\big)Ve\\ & \geq \big(\Omega_{1}+\langle M_{1}\rangle-(2\theta-1)|N_{1}|-(2\theta-1)\Omega_{2}-|D_{M_{2}}-\Omega_{3}|-|B_{M_{2}}|-\Omega_{4}-|N_{2}|\big)Ve\\ & = \big(\Omega_{1}+\langle M_{1}\rangle-(2\theta-1)|N_{1}|-(2\theta-1)\Omega_{2}-D_{M_{2}}+\Omega_{3}-|B_{M_{2}}|-\Omega_{4}-|N_{2}|\big)Ve\\ & = \big(\Omega_{1}+\langle M_{1}\rangle-|N_{1}|-2(\theta-1)|N_{1}|-2(\theta-1)\Omega_{2}-2D_{M_{2}}-\Omega_{2}+\Omega_{3}+\langle M_{2}\rangle-\Omega_{4}-|N_{2}|\big)Ve\\ & = \big(2\langle A\rangle+2\Omega-2D_{M_{2}}-2(\theta-1)(|N_{1}|+\Omega_{2})\big)Ve, \end{align}

    from which we have

    \begin{equation} \big[2\langle A\rangle+2\Omega-2D_{M_{2}}-2(\theta-1)(|N_{1}|+\Omega_{2})\big]Ve > 0 \end{equation} (3.13)

    provided that 1 < \theta < 1+\min\limits_{1\leq i\leq n}\frac{[(\langle A\rangle+\Omega-D_{M_{2}})Ve]_{i}}{[(|N_{1}|+\Omega_{2})Ve]_{i}} and \frac{[(\langle A\rangle+\Omega-D_{M_{2}})Ve]_{i}}{[(|N_{1}|+\Omega_{2})Ve]_{i}} > 0 (i = 1, 2, \ldots, n).

    According to (3.11), (3.12) and (3.13), we have \rho(\tilde{L}) < 1 if (3.7) holds. The proof is completed.

    Theorem 3.2. Assume that A\in\mathbb{R}^{n\times n} is an H_{+}-matrix and let \varrho = \rho(D_{A}^{-1}|B_{A}|), where B_{A} = L_{A}+U_{A}. Assume that the four nonnegative diagonal matrices \Omega_{i} (i = 1, 2, 3, 4) and the three positive parameters \alpha, \beta, \theta are chosen such that M_{1}+\Omega_{1} is nonsingular and either \Omega_{3}\leq D_{A}\leq \min\{2\Omega, 2\Omega-2(\theta-1)\Omega_{2}\} or \max\{2\Omega_{4}, 2\Omega_{4}+2(\theta-1)\Omega_{2}\}\leq D_{A} < \Omega_{3}. Then the NRATMAOR iteration method is convergent for arbitrary two initial vectors if one of the following eight conditions holds:

    (i) 0 < \theta\leq 1,\quad 0 < \alpha\leq 1,\quad 0 < \beta\leq\alpha,\quad \varrho < \frac{1}{2};

    (ii) 0 < \theta\leq 1,\quad 1 < \alpha < 2,\quad 0 < \beta\leq\alpha,\quad \varrho < \frac{2-\alpha}{2\alpha};

    (iii) 0 < \theta\leq 1,\quad 0 < \alpha\leq 1,\quad 0 < \alpha\leq\beta,\quad \varrho < \frac{\alpha}{2\beta};

    (iv) 0 < \theta\leq 1,\quad 1 < \alpha < 2,\quad 0 < \alpha\leq\beta,\quad \varrho < \frac{2-\alpha}{2\beta};

    (v) 1 < \theta < \frac{2-\alpha}{2\alpha\varrho-2\alpha+2},\quad \frac{2(\theta-1)}{2\theta-1} < \alpha\leq 1,\quad 0 < \beta\leq\alpha,\quad \varrho < \frac{1}{2};

    (vi) 1 < \theta < \frac{\alpha}{2\alpha\varrho+2\alpha-2},\quad 1 < \alpha < \frac{2\theta}{2\theta-1},\quad 0 < \beta\leq\alpha,\quad \varrho < \frac{2-\alpha}{2\alpha};

    (vii) 1 < \theta < \frac{2-\alpha}{2\beta\varrho-2\alpha+2},\quad \frac{2(\theta-1)}{2\theta-1} < \alpha\leq 1,\quad 0 < \alpha\leq\beta,\quad \varrho < \frac{\alpha}{2\beta};

    (viii) 1 < \theta < \frac{\alpha}{2\beta\varrho+2\alpha-2},\quad 1 < \alpha < \frac{2\theta}{2\theta-1},\quad 0 < \alpha\leq\beta,\quad \varrho < \frac{2-\alpha}{2\beta}.

    Proof. For the NRATMAOR iteration method, we have A = M_{1}-N_{1} = M_{2}-N_{2} with

    \begin{equation} M_{1} = \frac{1}{\alpha}(D_{A}-\beta L_{A}),\qquad N_{1} = \frac{1}{\alpha}\big[(1-\alpha)D_{A}+(\alpha-\beta)L_{A}+\alpha U_{A}\big] \end{equation} (3.14)

    and

    \begin{equation} M_{2} = D_{A}-U_{A},\qquad N_{2} = L_{A}, \end{equation}

    where \alpha, \beta > 0 are parameters. In order to use the result of Lemma 8, we need A = M_{1}-N_{1} to be an H-splitting of A. Since A is an H_{+}-matrix, we have D_{A} > 0. It follows from (3.14) that

    \begin{align} \langle M_{1}\rangle-|N_{1}| & = \left\langle \frac{1}{\alpha}(D_{A}-\beta L_{A})\right\rangle-\left|\frac{1}{\alpha}\big[(1-\alpha)D_{A}+(\alpha-\beta)L_{A}+\alpha U_{A}\big]\right|\\ & = \frac{1}{\alpha}(D_{A}-\beta|L_{A}|)-\frac{1}{\alpha}\big|(1-\alpha)D_{A}+(\alpha-\beta)L_{A}+\alpha U_{A}\big|\\ & \geq \frac{1}{\alpha}D_{A}-\frac{\beta}{\alpha}|L_{A}|-\frac{|1-\alpha|}{\alpha}D_{A}-\frac{|\alpha-\beta|}{\alpha}|L_{A}|-|U_{A}|\\ & = \frac{1-|1-\alpha|}{\alpha}D_{A}-\frac{\beta+|\alpha-\beta|}{\alpha}|L_{A}|-|U_{A}| := S. \end{align}

    If 0 < \beta\leq\alpha, then

    \begin{equation} S = \frac{1-|1-\alpha|}{\alpha}D_{A}-|L_{A}|-|U_{A}| = \frac{1-|1-\alpha|}{\alpha}D_{A}-|B_{A}|, \end{equation}

    and it follows from Lemma 7 that S is an M-matrix if

    \begin{equation} 1-|1-\alpha| > 0\quad\text{and}\quad \varrho < \frac{1-|1-\alpha|}{\alpha}, \end{equation}

    which is satisfied if

    \begin{equation} 0 < \alpha\leq 1\quad\text{and}\quad \varrho < 1 \end{equation}

    or

    \begin{equation} 1 < \alpha < 2\quad\text{and}\quad \varrho < \frac{2-\alpha}{\alpha}. \end{equation}

    If 0 < \alpha\leq\beta, then

    \begin{equation} S\geq \frac{1-|1-\alpha|}{\alpha}D_{A}-\frac{2\beta}{\alpha}|L_{A}|-|U_{A}|\geq \frac{1-|1-\alpha|}{\alpha}D_{A}-\frac{2\beta}{\alpha}|B_{A}| := \bar{S}. \end{equation} (3.15)

    It follows from Lemma 7 that \bar{S} is an M-matrix if

    \begin{equation} 1-|1-\alpha| > 0\quad\text{and}\quad \varrho < \frac{1-|1-\alpha|}{2\beta}, \end{equation}

    which is satisfied if

    \begin{equation} 0 < \alpha\leq 1\quad\text{and}\quad \varrho < \frac{\alpha}{2\beta} \end{equation} (3.16)

    or

    \begin{equation} 1 < \alpha < 2\quad\text{and}\quad \varrho < \frac{2-\alpha}{2\beta}. \end{equation} (3.17)

    In this case, since S is a Z-matrix, it follows from Lemma 5 and (3.15) that S is an M-matrix if (3.16) or (3.17) holds.

    In conclusion, A = M_{1}-N_{1} is an H-splitting of A (or, equivalently, S is an M-matrix) if one of the following four conditions holds:

    \begin{equation} 0 < \alpha\leq 1,\quad 0 < \beta\leq\alpha,\quad \varrho < 1, \end{equation} (3.18)
    \begin{equation} 1 < \alpha < 2,\quad 0 < \beta\leq\alpha,\quad \varrho < \frac{2-\alpha}{\alpha}, \end{equation} (3.19)
    \begin{equation} 0 < \alpha\leq 1,\quad 0 < \alpha\leq\beta,\quad \varrho < \frac{\alpha}{2\beta} \end{equation} (3.20)

    or

    \begin{equation} 1 < \alpha < 2,\quad 0 < \alpha\leq\beta,\quad \varrho < \frac{2-\alpha}{2\beta}. \end{equation} (3.21)

    In the following, let \hat{A} = \hat{M}-\hat{N} with \hat{M} = \langle\Omega_{1}+M_{1}\rangle and \hat{N} = (\theta+|1-\theta|)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|; then \tilde{L} = \hat{M}^{-1}\hat{N}. In order to prove the convergence of the NRATMAOR iteration method, based on Lemma 8, it suffices to prove \rho(\tilde{L}) < 1 provided that A = M_{1}-N_{1} is an H-splitting of A.

    Since

    \begin{equation} \hat{M} = \left\langle \Omega_{1}+\frac{1}{\alpha}(D_{A}-\beta L_{A})\right\rangle = \Omega_{1}+\frac{1}{\alpha}(D_{A}-\beta|L_{A}|) \end{equation}

    is a lower triangular matrix with positive diagonal entries and nonpositive off-diagonal entries, it is an M-matrix. In addition, \hat{N}\geq 0. According to Lemma 7, if \hat{A} is an M-matrix, then \rho(\tilde{L}) < 1. Thus, we prove in the following that the Z-matrix \hat{A} is an M-matrix.

    Case I: 0 < \theta\leq 1. In this case, we have

    \begin{align} \hat{N} & = |N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\\ & = \left|\frac{1}{\alpha}\big[(1-\alpha)D_{A}+(\alpha-\beta)L_{A}+\alpha U_{A}\big]+\Omega_{2}\right|+|D_{A}-\Omega_{3}-U_{A}|+|\Omega_{4}-L_{A}|\\ & \leq \frac{|1-\alpha|}{\alpha}D_{A}+\frac{\alpha+|\alpha-\beta|}{\alpha}|L_{A}|+2|U_{A}|+\Omega_{2}+\Omega_{4}+|D_{A}-\Omega_{3}| := \tilde{P}, \end{align}

    from which we have

    \begin{align} \hat{A} = \hat{M}-\hat{N} & \geq \hat{M}-\tilde{P}\\ & = \Omega_{1}+\frac{1}{\alpha}(D_{A}-\beta|L_{A}|)-\frac{|1-\alpha|}{\alpha}D_{A}-\frac{\alpha+|\alpha-\beta|}{\alpha}|L_{A}|-2|U_{A}|-\Omega_{2}-\Omega_{4}-|D_{A}-\Omega_{3}|\\ & = \big(\Omega_{3}-2\Omega_{4}-|D_{A}-\Omega_{3}|\big)+\frac{1-|1-\alpha|}{\alpha}D_{A}-\frac{\alpha+\beta+|\alpha-\beta|}{\alpha}|L_{A}|-2|U_{A}|. \end{align} (3.22)

    It is easy to verify that the first term of (3.22) is nonnegative if

    \begin{equation} \Omega_{3}\leq D_{A}\leq 2\Omega \end{equation} (3.23)

    or

    \begin{equation} 2\Omega_{4}\leq D_{A} < \Omega_{3}. \end{equation} (3.24)

    Then it follows from (3.22) that

    \begin{equation} \hat{A}\geq \frac{1-|1-\alpha|}{\alpha}D_{A}-\frac{\alpha+\beta+|\alpha-\beta|}{\alpha}|L_{A}|-2|U_{A}|. \end{equation} (3.25)

    (i) If 0 < \beta\leq\alpha, then it can be deduced from (3.25) that

    \begin{equation} \hat{A}\geq \frac{1-|1-\alpha|}{\alpha}D_{A}-2|L_{A}|-2|U_{A}| = \frac{1-|1-\alpha|}{\alpha}D_{A}-2|B_{A}| := T, \end{equation}

    from which and Lemma 5 we obtain that \hat{A} is an M-matrix whenever T is. It follows from Lemma 7 that T is an M-matrix if

    \begin{equation} 1-|1-\alpha| > 0\quad\text{and}\quad \varrho < \frac{1-|1-\alpha|}{2\alpha}, \end{equation}

    which is satisfied if

    \begin{equation} 0 < \alpha\leq 1\quad\text{and}\quad \varrho < \frac{1}{2} \end{equation}

    or

    \begin{equation} 1 < \alpha < 2\quad\text{and}\quad \varrho < \frac{2-\alpha}{2\alpha}. \end{equation}

    (ii) If 0 < \alpha\leq\beta, it can be deduced from (3.25) that

    \begin{equation} \hat{A}\geq \frac{1-|1-\alpha|}{\alpha}D_{A}-\frac{2\beta}{\alpha}|L_{A}|-2|U_{A}|\geq \frac{1-|1-\alpha|}{\alpha}D_{A}-\frac{2\beta}{\alpha}|B_{A}| = \bar{S}, \end{equation}

    which is an M-matrix if (3.16) or (3.17) holds.

    In Case I, it can be concluded from (i) and (ii) that \hat{A} is an M-matrix if one of the following four conditions holds:

    \begin{equation} 0 < \theta\leq 1,\quad 0 < \alpha\leq 1,\quad 0 < \beta\leq\alpha,\quad \varrho < \frac{1}{2}, \end{equation} (3.26)
    \begin{equation} 0 < \theta\leq 1,\quad 1 < \alpha < 2,\quad 0 < \beta\leq\alpha,\quad \varrho < \frac{2-\alpha}{2\alpha}, \end{equation} (3.27)
    \begin{equation} 0 < \theta\leq 1,\quad 0 < \alpha\leq 1,\quad 0 < \alpha\leq\beta,\quad \varrho < \frac{\alpha}{2\beta} \end{equation} (3.28)

    or

    \begin{equation} 0 < \theta\leq 1,\quad 1 < \alpha < 2,\quad 0 < \alpha\leq\beta,\quad \varrho < \frac{2-\alpha}{2\beta}. \end{equation} (3.29)

    Case II: \theta > 1. In this case, we have

    \begin{align} \hat{N} & = (2\theta-1)|N_{1}+\Omega_{2}|+|M_{2}-\Omega_{3}|+|\Omega_{4}-N_{2}|\\ & = (2\theta-1)\left|\frac{1}{\alpha}\big[(1-\alpha)D_{A}+(\alpha-\beta)L_{A}+\alpha U_{A}\big]+\Omega_{2}\right|+|D_{A}-\Omega_{3}-U_{A}|+|\Omega_{4}-L_{A}|\\ & \leq \frac{(2\theta-1)|1-\alpha|}{\alpha}D_{A}+\frac{\alpha+(2\theta-1)|\alpha-\beta|}{\alpha}|L_{A}|+2\theta|U_{A}|+(2\theta-1)\Omega_{2}+\Omega_{4}+|D_{A}-\Omega_{3}| := \tilde{N}, \end{align}

    from which we obtain

    \begin{align} \hat{A} = \hat{M}-\hat{N} & \geq \hat{M}-\tilde{N}\\ & = \Omega_{1}+\frac{1}{\alpha}(D_{A}-\beta|L_{A}|)-\frac{(2\theta-1)|1-\alpha|}{\alpha}D_{A}-\frac{\alpha+(2\theta-1)|\alpha-\beta|}{\alpha}|L_{A}|-2\theta|U_{A}|-(2\theta-1)\Omega_{2}-\Omega_{4}-|D_{A}-\Omega_{3}|\\ & = \big(\Omega_{3}-2\Omega_{4}-2(\theta-1)\Omega_{2}-|D_{A}-\Omega_{3}|\big)+\frac{1-(2\theta-1)|1-\alpha|}{\alpha}D_{A}-\frac{\alpha+\beta+(2\theta-1)|\alpha-\beta|}{\alpha}|L_{A}|-2\theta|U_{A}|. \end{align} (3.30)

    The first term of (3.30) is nonnegative if

    \begin{equation} \Omega_{3}\leq D_{A}\leq 2\Omega-2(\theta-1)\Omega_{2}\leq 2\Omega \end{equation} (3.31)

    or

    \begin{equation} 2\Omega_{4}\leq 2\Omega_{4}+2(\theta-1)\Omega_{2}\leq D_{A} < \Omega_{3}. \end{equation} (3.32)

    Then it follows from (3.30) that

    \begin{equation} \hat{A}\geq \frac{1-(2\theta-1)|1-\alpha|}{\alpha}D_{A}-\frac{\alpha+\beta+(2\theta-1)|\alpha-\beta|}{\alpha}|L_{A}|-2\theta|U_{A}|. \end{equation} (3.33)

    (a) If 0 < \beta\leq\alpha, then it follows from (3.33) that

    \begin{equation} \hat{A}\geq \frac{1-(2\theta-1)|1-\alpha|}{\alpha}D_{A}-2\theta|B_{A}| := R, \end{equation}

    from which and Lemma 5 we find that \hat{A} is an M-matrix whenever R is. It follows from Lemma 7 that R is an M-matrix if

    \begin{equation} 1-(2\theta-1)|1-\alpha| > 0\quad\text{and}\quad \varrho < \frac{1-(2\theta-1)|1-\alpha|}{2\theta\alpha}, \end{equation}

    which is satisfied if

    \begin{equation} \frac{2(\theta-1)}{2\theta-1} < \alpha\leq 1,\quad 1 < \theta < \frac{2-\alpha}{2\alpha\varrho-2\alpha+2},\quad \varrho < \frac{1}{2} \end{equation}

    or

    \begin{equation} 1 < \alpha < \frac{2\theta}{2\theta-1},\quad 1 < \theta < \frac{\alpha}{2\alpha\varrho+2\alpha-2},\quad \varrho < \frac{2-\alpha}{2\alpha}. \end{equation}

    (b) If 0 < \alpha\leq\beta, then

    \begin{align} \hat{A} & \geq \frac{1-(2\theta-1)|1-\alpha|}{\alpha}D_{A}-\frac{2\theta\beta-2\alpha(\theta-1)}{\alpha}|L_{A}|-2\theta|U_{A}|\\ & = \frac{1-(2\theta-1)|1-\alpha|}{\alpha}D_{A}-\left(\frac{2\theta\beta}{\alpha}|L_{A}|+2\theta|U_{A}|\right)+2(\theta-1)|L_{A}|\\ & \geq \frac{1-(2\theta-1)|1-\alpha|}{\alpha}D_{A}-2\theta\left(\frac{\beta}{\alpha}|L_{A}|+|U_{A}|\right)\\ & \geq \frac{1-(2\theta-1)|1-\alpha|}{\alpha}D_{A}-\frac{2\theta\beta}{\alpha}|B_{A}| := \tilde{R}, \end{align}

    from which and Lemma 5 we find that \hat{A} is an M-matrix whenever \tilde{R} is. It follows from Lemma 7 that \tilde{R} is an M-matrix if

    \begin{align} 1-(2\theta-1)|1-\alpha| > 0\quad \text{and}\quad \varrho < \frac{1-(2\theta-1)|1-\alpha|}{2\theta\beta}, \end{align}

    which is satisfied if

    \frac{2(\theta-1)}{2\theta-1} < \alpha \leq 1, \quad 1 < \theta < \frac{2-\alpha}{2\beta\varrho-2\alpha+2}, \quad \varrho < \frac{\alpha}{2\beta}

    or

    1 < \alpha < \frac{2\theta}{2\theta-1},\quad 1 < \theta < \frac{\alpha}{2\beta \varrho +2\alpha-2}, \quad \varrho < \frac{2-\alpha}{2\beta}.

    In Case II, it can be concluded from (a) and (b) that \hat{A} is an M -matrix if one of the following four conditions holds:

    \begin{equation} \theta > 1,\quad \frac{2(\theta-1)}{2\theta-1} < \alpha \leq 1,\quad 0 < \beta\leq\alpha,\quad \theta < \frac{2-\alpha}{2\alpha \varrho-2\alpha+2},\quad \varrho < \frac{1}{2}, \end{equation} (3.34)
    \begin{equation} \theta > 1,\quad 1 < \alpha < \frac{2\theta}{2\theta-1},\quad 0 < \beta\leq\alpha,\quad \theta < \frac{\alpha}{2\alpha\varrho+2\alpha-2}, \quad \varrho < \frac{2-\alpha}{2\alpha}, \end{equation} (3.35)
    \begin{equation} \theta > 1,\quad \frac{2(\theta-1)}{2\theta-1} < \alpha \leq 1, \quad 0 < \alpha \leq\beta, \quad \theta < \frac{2-\alpha}{2\beta\varrho-2\alpha+2}, \quad \varrho < \frac{\alpha}{2\beta} \end{equation} (3.36)

    or

    \begin{equation} \theta > 1,\quad 1 < \alpha < \frac{2\theta}{2\theta-1},\quad 0 < \alpha \leq\beta, \quad 1 < \theta < \frac{\alpha}{2\beta \varrho +2\alpha-2}, \quad \varrho < \frac{2-\alpha}{2\beta}. \end{equation} (3.37)

    The proof is completed by combining (3.18)–(3.21), (3.23), (3.24), (3.26)–(3.29), (3.31), (3.32) and (3.34)–(3.37).

    In this section, three numerical examples are performed to validate the effectiveness of the NRATMMS iteration method.

    All test problems are conducted in MATLAB R2016a on a personal computer with 1.19 GHz central processing unit (Intel (R) Core (TM) i5-1035U), 8.00 GB memory and Windows 10 operating system. In the numerical results, we report the number of iteration steps (denoted by "IT"), the elapsed CPU time in seconds (denoted as "CPU") and the norm of the absolute residual vector (denoted by "RES"). Here, RES is defined by

    \text{RES}(z^{k}) \doteq \left\|\min\{Az^{k}+q,z^{k}\}\right\|_{2}.

    As in [44], the following three examples are used.

    Example 4.1. ([20]) Consider the LCP (A, q) , where the matrix A = \hat{A} + \mu I_{m^2} (\mu\geq 0) with

    \hat{A} = \textbf{Tridiag}(-I_{m},S_{m},-I_{m}) \in \mathbb{R}^{m^2\times m^2},\quad S_{m} = \textbf{tridiag}(-1,4,-1) \in \mathbb{R}^{m\times m},

    and q = -Az^{\ast}\in\mathbb{R}^{m^2} with z^{\ast} = (1, 2, 1, 2, \cdots, 1, 2, \cdots)^{\top} being the unique solution of the LCP (A, q) (1.1).

    Example 4.2. ([20]) Consider the LCP (A, q) , where the matrix A = \hat{A} + \mu I_{m^2} (\mu\geq 0) with

    \hat{A} = \textbf{Tridiag}(-1.5I_{m},\; S_{m},-0.5I_{m}) \in \mathbb{R}^{m^2\times m^2},\quad S_{m} = \textbf{tridiag}(-1.5,4,-0.5) \in \mathbb{R}^{m\times m},

    and q = -Az^{\ast}\in\mathbb{R}^{m^2} with z^{\ast} = (1, 2, 1, 2, \cdots, 1, 2, \cdots)^{\top} being the unique solution of the LCP (A, q) (1.1).
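The test matrices of Examples 4.1 and 4.2 are conveniently assembled with Kronecker products. A sketch (our helpers; `low`/`up` are the sub/superdiagonal values, so Example 4.1 takes low = up = -1 and Example 4.2 takes low = -1.5, up = -0.5):

```python
import numpy as np

def example_matrix(m, mu, low, up):
    """A = Tridiag(low*I_m, S_m, up*I_m) + mu*I_{m^2}, S_m = tridiag(low, 4, up)."""
    S = 4.0 * np.eye(m) + low * np.eye(m, k=-1) + up * np.eye(m, k=1)
    A = (np.kron(np.eye(m), S)
         + low * np.kron(np.eye(m, k=-1), np.eye(m))
         + up * np.kron(np.eye(m, k=1), np.eye(m)))
    return A + mu * np.eye(m * m)

def example_rhs(A):
    """q = -A z* with z* = (1, 2, 1, 2, ...)^T, so z* solves the LCP(A, q)."""
    n = A.shape[0]
    z_star = np.where(np.arange(n) % 2 == 0, 1.0, 2.0)
    return -A @ z_star, z_star
```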

    Example 4.3. (Black-Scholes American option pricing). In [50], the original Black-Scholes-Merton model is transformed into the following partial differential complementarity system

    \begin{equation} (\frac{\partial y}{\partial \tau}-\frac{\partial^2 y}{\partial x^2})(y(x,\tau)-g(x,\tau)) = 0, \ \frac{\partial y}{\partial \tau}-\frac{\partial^2 y}{\partial x^2}\geq0, \ y(x,\tau)-g(x,\tau)\geq0. \end{equation} (4.1)

    The initial and boundary conditions of the American put option price y(x, \tau) become y(x, 0) = g(x, 0) and \lim \limits_{x\rightarrow \pm\infty}y(x, \tau) = \lim \limits_{x\rightarrow \pm\infty}g(x, \tau) , where g(x, \tau) is the transformed payoff function. For the price x\in[a, b] , (a, b) is equal to (-1.5, 1.5) , (-2, 2) or (-4, 4) . Let \vartheta and \eta be the number of time steps and the number of x -nodes, respectively, and let \sigma and T be the fixed volatility and the expiry time. According to [50], by discretizing (4.1), we obtain the LCP:

    \begin{equation} w: = Au-d\geq0,\ u-g\geq0\ \text{and}\ w^{\top}(u-g) = 0, \end{equation} (4.2)

    with A = \textbf{tridiag}(-\tau, 1+2\tau, -\tau) and \tau = \frac{\Delta t}{(\Delta x)^2} , where \Delta t = \frac{0.5\sigma^2 T}{\vartheta} and \Delta x = \frac{b-a}{\eta} denote the time step and the price step, respectively. Then, applying the transformation z = u-g and q = Ag-d to formula (4.2), the American option pricing problem can be rewritten as the LCP (1.1). In our numerical computations, we take g = 0.5z^{\ast} and z^{\ast} = (1, 0, 1, 0, \cdots, 1, 0, \cdots)^{\top} . The vector d is adjusted such that d = Az^{\ast}-w^{\ast} , where w^{\ast} = (0, 1, 0, 1, \cdots, 0, 1, \cdots)^{\top} . The involved parameters are detailed in Table 3.

    As shown in [44], the NMGS method performs best among the six methods tested there for solving the LCP( A, q ) in the three examples. Therefore, in this paper, we focus on comparing the performance of the NMGS method in [44] with that of the NRATMGS method. For the NMGS iteration method, \Omega = D_{A} is used [44]. For the NRATMGS method, we take \Omega_{1} = \Omega_{3} = D_{A} , \Omega_{2} = \Omega_{4} = 0 , M_{2} = A , N_{2} = 0 and \alpha = \beta = 1 . In addition, the optimal parameter \theta_{\rm{exp}} in the NRATMGS iteration method is determined experimentally (ranging from 0 to 2 with step size 0.1 for Examples 4.1 and 4.2, and with step size 0.01 for Example 4.3) by minimizing the corresponding iteration count. For fairness, each method is run 10 times and the average computing time is reported as the CPU time. Both methods start from the initial vectors z^{0} = z^{1} = (1, 0, 1, 0, \cdots)^{\top} and stop once \text{RES}(z^{k}) < 10^{-5} or the prescribed maximal iteration number k_{\max} = 500 is exceeded. The involved linear systems are solved directly by the backslash operator " \setminus ". Numerical results for Examples 4.1–4.3 are reported in Tables 1–3. They show that the NRATMGS method outperforms the NMGS method (and the other methods tested in [44]) in both iteration count and CPU time when the parameter \theta_{\rm{exp}} is selected appropriately.
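The residual RES(z) used in the stopping test is not restated in this excerpt; a common choice for LCPs is the natural residual \|\min(z, Az+q)\|_2, which is assumed in the sketch below. The experimental \theta-sweep described above can then be outlined as follows, where `run_method` is a hypothetical hook standing in for one full NRATMGS run that returns its iteration count.

```python
import math

def lcp_residual(A, q, z):
    """Natural residual RES(z) = || min(z, Az + q) ||_2 -- an assumed
    definition; the paper's exact RES may differ."""
    Az_q = [sum(a * x for a, x in zip(row, z)) + qi for row, qi in zip(A, q)]
    return math.sqrt(sum(min(zi, wi) ** 2 for zi, wi in zip(z, Az_q)))

def best_theta(run_method, thetas, tol=1e-5, k_max=500):
    """Pick theta_exp minimizing the iteration count over a grid, as in the
    experiments above. `run_method(theta, tol, k_max)` -> iteration count
    (hypothetical hook, not the paper's code)."""
    return min(thetas, key=lambda th: run_method(th, tol, k_max))

# Sanity check on a 2x2 LCP with known solution z = (1, 0):
A = [[2.0, 0.0], [0.0, 2.0]]
q = [-2.0, 1.0]
assert lcp_residual(A, q, [1.0, 0.0]) == 0.0   # exact solution -> RES = 0
```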

    Table 1. Numerical results of Example 4.1 (boldface marks the smaller CPU time).

    | | Method | | m=16 | m=32 | m=64 | m=128 |
    |---|---|---|---|---|---|---|
    | \mu=2 | NMGS | IT | 28 | 31 | 32 | 34 |
    | | | CPU | 0.0009 | 0.0018 | 0.0055 | 0.0364 |
    | | | RES | 9.6887\times 10^{-6} | 6.1988\times 10^{-6} | 8.7866\times 10^{-6} | 6.8116\times 10^{-6} |
    | | NRATMGS | \theta_{\rm exp} | 1.4 | 1.4 | 1.4 | 1.4 |
    | | | IT | 24 | 25 | 26 | 28 |
    | | | CPU | **0.0005** | **0.0012** | **0.0048** | **0.0318** |
    | | | RES | 6.6171\times 10^{-6} | 8.5424\times 10^{-6} | 9.5986\times 10^{-6} | 5.5387\times 10^{-6} |
    | \mu=4 | NMGS | IT | 18 | 20 | 21 | 21 |
    | | | CPU | 0.0003 | 0.0009 | 0.0036 | 0.0224 |
    | | | RES | 9.3072\times 10^{-6} | 4.4327\times 10^{-6} | 4.2816\times 10^{-6} | 9.0760\times 10^{-6} |
    | | NRATMGS | \theta_{\rm exp} | 1.2 | 1.2 | 1.3 | 1.3 |
    | | | IT | 16 | 17 | 17 | 18 |
    | | | CPU | **0.0002** | **0.0009** | **0.0032** | **0.0161** |
    | | | RES | 9.3811\times 10^{-6} | 9.4009\times 10^{-6} | 8.9594\times 10^{-6} | 5.9780\times 10^{-6} |
    | \mu=6 | NMGS | IT | 15 | 16 | 16 | 17 |
    | | | CPU | 0.0002 | 0.0011 | 0.0030 | 0.0194 |
    | | | RES | 4.3687\times 10^{-6} | 3.6549\times 10^{-6} | 8.1157\times 10^{-6} | 5.6681\times 10^{-6} |
    | | NRATMGS | \theta_{\rm exp} | 1.2 | 1.2 | 1.2 | 1.2 |
    | | | IT | 13 | 14 | 14 | 15 |
    | | | CPU | **0.0002** | **0.0009** | **0.0025** | **0.0138** |
    | | | RES | 5.5869\times 10^{-6} | 3.4746\times 10^{-6} | 6.9767\times 10^{-6} | 3.9164\times 10^{-6} |
    | \mu=8 | NMGS | IT | 13 | 13 | 14 | 15 |
    | | | CPU | 0.0003 | 0.0006 | 0.0027 | 0.0173 |
    | | | RES | 4.0397\times 10^{-6} | 9.9549\times 10^{-6} | 5.9143\times 10^{-6} | 3.3650\times 10^{-6} |
    | | NRATMGS | \theta_{\rm exp} | 1.2 | 1.2 | 1.2 | 1.2 |
    | | | IT | 11 | 12 | 12 | 13 |
    | | | CPU | **0.0003** | **0.0006** | **0.0022** | **0.0118** |
    | | | RES | 8.9972\times 10^{-6} | 4.0405\times 10^{-6} | 6.4052\times 10^{-6} | 2.6779\times 10^{-6} |
    Table 2. Numerical results of Example 4.2 (boldface marks the smaller CPU time).

    | | Method | | m=16 | m=32 | m=64 | m=128 |
    |---|---|---|---|---|---|---|
    | \mu=2 | NMGS | IT | 24 | 26 | 28 | 29 |
    | | | CPU | 0.0007 | 0.0011 | 0.0051 | 0.0324 |
    | | | RES | 8.9826\times 10^{-6} | 9.8366\times 10^{-6} | 7.5002\times 10^{-6} | 9.2033\times 10^{-6} |
    | | NRATMGS | \theta_{\rm exp} | 1.8 | 1.9 | 1.9 | 1.9 |
    | | | IT | 18 | 19 | 20 | 21 |
    | | | CPU | **0.0003** | **0.0010** | **0.0037** | **0.0232** |
    | | | RES | 8.1939\times 10^{-6} | 9.1146\times 10^{-6} | 7.2571\times 10^{-6} | 5.7367\times 10^{-6} |
    | \mu=4 | NMGS | IT | 16 | 17 | 18 | 19 |
    | | | CPU | 0.0002 | 0.0007 | 0.0030 | 0.0238 |
    | | | RES | 8.3608\times 10^{-6} | 8.5767\times 10^{-6} | 7.6698\times 10^{-6} | 6.2746\times 10^{-6} |
    | | NRATMGS | \theta_{\rm exp} | 1.5 | 1.5 | 1.6 | 1.6 |
    | | | IT | 13 | 14 | 17 | 14 |
    | | | CPU | **0.0002** | **0.0007** | **0.0026** | **0.0145** |
    | | | RES | 6.5082\times 10^{-6} | 4.9487\times 10^{-6} | 6.5055\times 10^{-6} | 9.6328\times 10^{-6} |
    | \mu=6 | NMGS | IT | 13 | 14 | 15 | 15 |
    | | | CPU | 0.0003 | 0.0008 | 0.0028 | 0.0224 |
    | | | RES | 7.5880\times 10^{-6} | 5.6490\times 10^{-6} | 3.6901\times 10^{-6} | 7.7841\times 10^{-6} |
    | | NRATMGS | \theta_{\rm exp} | 1.4 | 1.4 | 1.4 | 1.4 |
    | | | IT | 11 | 11 | 12 | 12 |
    | | | CPU | **0.0002** | **0.0008** | **0.0022** | **0.0121** |
    | | | RES | 4.7675\times 10^{-6} | 8.9849\times 10^{-6} | 3.8629\times 10^{-6} | 7.2580\times 10^{-6} |
    | \mu=8 | NMGS | IT | 12 | 12 | 13 | 13 |
    | | | CPU | 0.0003 | 0.0005 | 0.0024 | 0.0163 |
    | | | RES | 2.8368\times 10^{-6} | 7.0877\times 10^{-6} | 3.6923\times 10^{-6} | 7.7385\times 10^{-6} |
    | | NRATMGS | \theta_{\rm exp} | 1.3 | 1.3 | 1.3 | 1.3 |
    | | | IT | 10 | 10 | 11 | 11 |
    | | | CPU | **0.0002** | **0.0005** | **0.0021** | **0.0112** |
    | | | RES | 3.3256\times 10^{-6} | 7.2249\times 10^{-6} | 2.6219\times 10^{-6} | 5.2755\times 10^{-6} |
    Table 3. Numerical results of Example 4.3 (boldface marks the smaller CPU time).

    | Case | Grid( \eta , \vartheta ) | \tau | NMGS IT | NMGS CPU | NMGS RES | \theta_{\rm exp} | NRATMGS IT | NRATMGS CPU | NRATMGS RES |
    |---|---|---|---|---|---|---|---|---|---|
    | \sigma=0.2, T=0.5, a=-1.5, b=1.5 | (4000, 2000) | 8.8889 | 23 | 0.0034 | 3.4764\times 10^{-6} | 0.95 | 20 | **0.0028** | 3.2943\times 10^{-6} |
    | | (6000, 3000) | 13.3333 | 26 | 0.0062 | 7.7964\times 10^{-6} | 0.93 | 22 | **0.0046** | 5.1432\times 10^{-6} |
    | | (8000, 5000) | 14.2222 | 25 | 0.0075 | 1.5296\times 10^{-6} | 1.03 | 23 | **0.0062** | 7.9327\times 10^{-6} |
    | | (8000, 8000) | 8.8889 | 23 | 0.0065 | 4.9204\times 10^{-6} | 0.95 | 20 | **0.0056** | 4.6315\times 10^{-6} |
    | | (10000, 10000) | 11.1111 | 23 | 0.0077 | 9.1134\times 10^{-7} | 1.04 | 21 | **0.0073** | 5.2285\times 10^{-6} |
    | \sigma=0.2, T=0.25, a=-1.5, b=1.5 | (6000, 3000) | 6.6667 | 21 | 0.0043 | 6.0408\times 10^{-6} | 0.93 | 18 | **0.0038** | 7.5730\times 10^{-6} |
    | | (8000, 4000) | 8.8889 | 23 | 0.0064 | 4.9204\times 10^{-6} | 0.95 | 20 | **0.0055** | 4.6315\times 10^{-6} |
    | | (10000, 5000) | 11.1111 | 23 | 0.0078 | 9.1134\times 10^{-7} | 1.04 | 21 | **0.0073** | 5.2285\times 10^{-6} |
    | | (15000, 15000) | 8.3333 | 22 | 0.0111 | 8.1100\times 10^{-6} | 0.82 | 19 | **0.0101** | 7.3832\times 10^{-6} |
    | | (20000, 20000) | 11.1111 | 23 | 0.0177 | 1.2828\times 10^{-6} | 1.04 | 21 | **0.0152** | 7.2461\times 10^{-6} |
    | \sigma=0.3, T=0.5, a=-2, b=2 | (4000, 2500) | 9 | 23 | 0.0034 | 3.8926\times 10^{-6} | 0.95 | 20 | **0.0029** | 3.1298\times 10^{-6} |
    | | (6000, 3000) | 16.875 | 27 | 0.0056 | 7.5378\times 10^{-6} | 1.1 | 25 | **0.0052** | 7.0849\times 10^{-6} |
    | | (8000, 4000) | 22.5 | 32 | 0.0088 | 5.2859\times 10^{-6} | 0.99 | 28 | **0.0076** | 4.8398\times 10^{-6} |
    | | (8000, 6000) | 15 | 26 | 0.0071 | 4.9459\times 10^{-6} | 0.86 | 23 | **0.0062** | 7.8720\times 10^{-6} |
    | | (10000, 10000) | 14.0625 | 25 | 0.0087 | 2.5353\times 10^{-6} | 1.05 | 23 | **0.0081** | 6.5500\times 10^{-6} |
    | \sigma=0.3, T=0.25, a=-4, b=4 | (8000, 4000) | 2.8125 | 18 | 0.0050 | 6.2985\times 10^{-6} | 0.92 | 16 | **0.0045** | 8.3277\times 10^{-6} |
    | | (16000, 8000) | 5.625 | 21 | 0.0112 | 5.5724\times 10^{-6} | 0.94 | 18 | **0.0101** | 7.3356\times 10^{-6} |
    | | (20000, 10000) | 7.0313 | 23 | 0.0177 | 6.2544\times 10^{-7} | 0.91 | 20 | **0.0144** | 6.1941\times 10^{-6} |
    | | (24000, 15000) | 6.75 | 23 | 0.0202 | 6.6919\times 10^{-7} | 0.91 | 20 | **0.0179** | 6.9170\times 10^{-6} |
    | | (30000, 24000) | 6.5918 | 23 | 0.0275 | 7.3343\times 10^{-7} | 0.91 | 20 | **0.0243** | 7.7511\times 10^{-6} |

    In this paper, by applying matrix splitting, a relaxation technique and a two-sweep iteration form to the new modulus-based matrix splitting formula, we have proposed a new relaxed acceleration two-sweep modulus-based matrix splitting (NRATMMS) iteration method for solving the LCP (A, q) (1.1). We have investigated the convergence of the NRATMMS iteration method when the system matrix A of the LCP (A, q) (1.1) is an H_{+} -matrix. Numerical experiments illustrate that the NRATMMS iteration method is effective and can be superior to some existing methods. However, the theoretical choice of the optimal parameters requires further investigation.

    The authors are grateful to the five reviewers and the editor for their helpful comments and suggestions that have helped to improve the paper. This research is supported by the National Natural Science Foundation of China (12201275, 12131004), the Ministry of Education in China of Humanities and Social Science Project (21YJCZH204), the Project of Liaoning Provincial Federation Social Science Circles (2023lslqnkt-044, 2022lslwtkt-069), the Natural Science Foundation of Fujian Province (2021J01661) and the Ministry of Science and Technology of China (2021YFA1003600).

    The authors declare that there is no conflict of interest.



    [1] M. Bestvina, Real trees in topology, geometry and group theory, arXiv: math/9712210.
    [2] W. Kirk, Some recent results in metric fixed point theory, J. Fixed Point Theory Appl., 2 (2007), 195–207. http://dx.doi.org/10.1007/s11784-007-0031-8 doi: 10.1007/s11784-007-0031-8
    [3] C. Semple, M. Steel, Phylogenetics, Oxford: Oxford University Press, 2003.
    [4] M. Frechet, Sur quelques points du calcul fonctionnel, Rend. Circ. Matem. Palermo, 22 (1906), 1–72. http://dx.doi.org/10.1007/BF03018603 doi: 10.1007/BF03018603
    [5] S. Czerwik, Contraction mappings in b-metric spaces, Acta Mathematica et Informatica Universitatis Ostraviensis, 1 (1993), 5–11.
    [6] A. Branciari, A fixed point theorem of Banach-Caccioppoli type on a class of generalized metric spaces, Publ. Math. Debrecen, 57 (2000), 31–37. http://dx.doi.org/10.5486/PMD.2000.2133 doi: 10.5486/PMD.2000.2133
    [7] M. Jleli, B. Samet, On a new generalization of metric spaces, J. Fixed Point Theory Appl., 20 (2018), 128. http://dx.doi.org/10.1007/s11784-018-0606-6 doi: 10.1007/s11784-018-0606-6
    [8] M. Gordji, D. Rameani, M. De La Sen, Y. Cho, On orthogonal sets and Banach fixed point theorem, Fixed Point Theory, 18 (2017), 569–578. http://dx.doi.org/10.24193/fpt-ro.2017.2.45 doi: 10.24193/fpt-ro.2017.2.45
    [9] T. Kanwal, A. Hussain, H. Baghani, M. De La Sen, New fixed point theorems in orthogonal \mathcal{F}-metric spaces with application to fractional differential equation, Symmetry, 12 (2020), 832. http://dx.doi.org/10.3390/sym12050832 doi: 10.3390/sym12050832
    [10] I. Bakhtin, The contraction mapping principle in almost metric spaces, Funct. Anal., 30 (1989), 26–37.
    [11] M. Khamsi, N. Hussain, KKM mappings in metric type spaces, Nonlinear Anal.-Theor., 73 (2010), 3123–3129. http://dx.doi.org/10.1016/j.na.2010.06.084 doi: 10.1016/j.na.2010.06.084
    [12] J. Ahmad, A. Al-Rawashdeh, A. Al-Mazrooei, Fixed point results for (\alpha, \bot _{\mathcal{F}})-contractions in orthogonal F-metric spaces with applications, J. Funct. Space., 2022 (2022), 8532797. http://dx.doi.org/10.1155/2022/8532797 doi: 10.1155/2022/8532797
    [13] L. Alnaser, D. Lateef, H. Fouad, J. Ahmad, Relation theoretic contraction results in F-metric spaces, J. Nonlinear Sci. Appl., 12 (2019), 337–344. http://dx.doi.org/10.22436/jnsa.012.05.06 doi: 10.22436/jnsa.012.05.06
    [14] S. Al-Mezel, J. Ahmad, G. Marino, Fixed point theorems for generalized (\alpha \beta -\psi )-contractions in F-metric spaces with applications, Mathematics, 8 (2020), 584. http://dx.doi.org/10.3390/math8040584 doi: 10.3390/math8040584
    [15] M. Alansari, S. Mohammed, A. Azam, Fuzzy fixed point results in F-metric spaces with applications, J. Funct. Space., 2020 (2020), 5142815. http://dx.doi.org/10.1155/2020/5142815 doi: 10.1155/2020/5142815
    [16] D. Lateef, J. Ahmad, Dass and Gupta's Fixed point theorem in F -metric spaces, J. Nonlinear Sci. Appl., 12 (2019), 405–411. http://dx.doi.org/10.22436/jnsa.012.06.06 doi: 10.22436/jnsa.012.06.06
    [17] A. Hussain, T. Kanwal, Existence and uniqueness for a neutral differential problem with unbounded delay via fixed point results, T. A. Razmadze Math. In., 172 (2018), 481–490. http://dx.doi.org/10.1016/j.trmi.2018.08.006 doi: 10.1016/j.trmi.2018.08.006
    [18] Z. Ahmadi, R. Lashkaripour, H. Baghani, A fixed point problem with constraint inequalities via a contraction in incomplete metric spaces, Filomat, 32 (2018), 3365–3379. http://dx.doi.org/10.2298/FIL1809365A doi: 10.2298/FIL1809365A
    [19] H. Baghani, M. Ramezani, Coincidence and fixed points for multivalued mappings in incomplete metric spaces with applications, Filomat, 33 (2019), 13–26. http://dx.doi.org/10.2298/FIL1901013B doi: 10.2298/FIL1901013B
    [20] A. Ran, M. Reurings, A fixed point theorem in partially ordered sets and some applications to matrix equations, Proc. Am. Math. Soc., 132 (2004), 1435–1443. http://dx.doi.org/10.1090/S0002-9939-03-07220-4 doi: 10.1090/S0002-9939-03-07220-4
    [21] K. Javed, H. Aydi, F. Uddin, M. Arshad, On orthogonal partial b-metric spaces with an application, J. Math., 2021 (2021), 6692063. http://dx.doi.org/10.1155/2021/6692063 doi: 10.1155/2021/6692063
    [22] B. Samet, C. Vetro, P. Vetro, Fixed point theorems for \alpha -\psi -contractive type mappings, Nonlinear Anal.-Theor., 75 (2012), 2154–2165. http://dx.doi.org/10.1016/j.na.2011.10.014 doi: 10.1016/j.na.2011.10.014
    [23] M. Ramezani, Orthogonal metric space and convex contractions, Int. J. Nonlinear Anal., 6 (2015), 127–132. http://dx.doi.org/ 10.22075/IJNAA.2015.261 doi: 10.22075/IJNAA.2015.261
    [24] A. Asif, M. Nazam, M. Arshad, S. Kim, F-metric, F -contraction and common fixed point theorems with applications, Mathematics, 7 (2019), 586. http://dx.doi.org/10.3390/math7070586 doi: 10.3390/math7070586
    [25] G. Mani, A. Gnanaprakasam, N. Kausar, M. Munir, Salahuddin, Orthogonal F-contraction mapping on O-complete metric space with applications, Int. J. Fuzzy Log. Inte., 21 (2021), 243–250. http://dx.doi.org/10.5391/IJFIS.2021.21.3.243 doi: 10.5391/IJFIS.2021.21.3.243
    [26] S. Banach, Sur les operations dans les ensembles abstraits et leur applications aux equations integrales, Fund. Math., 3 (1922), 133–181. http://dx.doi.org/10.4064/fm-3-1-133-181 doi: 10.4064/fm-3-1-133-181
    [27] J. Ahmad, A. Al-Rawashdeh, A. Azam, Fixed point results for \{ \alpha, \xi \}-expansive locally contractive mappings, J. Inequal. Appl., 2014 (2014), 364. http://dx.doi.org/10.1186/1029-242X-2014-364 doi: 10.1186/1029-242X-2014-364
    [28] J. Ahmad, A. Al-Rawashdeh, A. Azam, New fixed point theorems for generalized F-contractions in complete metric spaces, Fixed Point Theory Appl., 2015 (2015), 80. http://dx.doi.org/10.1186/s13663-015-0333-2 doi: 10.1186/s13663-015-0333-2
    [29] M. Jleli, B. Samet, A new generalization of the Banach contraction principle, J. Inequal. Appl., 2014 (2014), 38. http://dx.doi.org/10.1186/1029-242X-2014-38 doi: 10.1186/1029-242X-2014-38
    [30] J. Ahmad, A. Al-Mazrooei, Y. Cho, Y. Yang, Fixed point results for generalized \Theta -contractions, J. Nonlinear Sci. Appl., 10 (2017), 2350–2358. http://dx.doi.org/10.22436/jnsa.010.05.07 doi: 10.22436/jnsa.010.05.07
    [31] N. Hussain, V. Parvaneh, B. Samet, C. Vetro, Some fixed point theorems for generalized contractive mappings in complete metric spaces, Fixed Point Theory Appl., 2015 (2015), 185. http://dx.doi.org/10.1186/s13663-015-0433-z doi: 10.1186/s13663-015-0433-z
    [32] N. Hussain, J. Ahmad, New Suzuki-Berinde type fixed point results, Carpathian J. Math., 33 (2017), 59–72. http://dx.doi.org/10.37193/CJM.2017.01.07 doi: 10.37193/CJM.2017.01.07
    [33] Z. Li, S. Jiang, Fixed point theorems of JS-quasi-contractions, Fixed Point Theory Appl., 2016 (2016), 40. http://dx.doi.org/10.1186/s13663-016-0526-3 doi: 10.1186/s13663-016-0526-3
    [34] F. Vetro, A generalization of Nadler fixed point theorem, Carpathian J. Math, 31 (2015), 403–410.
    [35] H. Ali, H. Isik, H. Aydi, E. Ameer, J. Lee, M. Arshad, On multivalued Suzuki-type \Theta -contractions and related applications, Open Math., 18 (2020), 386–399. http://dx.doi.org/10.1515/math-2020-0139 doi: 10.1515/math-2020-0139
    [36] E. Ameer, H. Aydi, M. Arshad, A. Hussain, A. Khan, Ćirić type multi-valued \alpha _{\ast }-\eta _{\ast }-\Theta -contractions on b-metric spaces with applications, Int. J. Nonlinear Anal., 12 (2021), 597–614. http://dx.doi.org/10.22075/IJNAA.2021.4865 doi: 10.22075/IJNAA.2021.4865
    [37] L. Ćirić, Generalized contractions and fixed-point theorems, Publ. Inst. Math., 12 (1971), 9–26.
    [38] R. Kannan, Some results on fixed points, Bull. Calcutta Math. Soc., 60 (1968), 71–76.
    [39] S. Chatterjea, Fixed point theorem, C. R. Acad. Bulg. Sci., 25 (1972), 727–730.
    [40] S. Reich, Kannan's fixed point theorem, Bull. Univ. Mat. Italiana, 4 (1971), 1–11.
    [41] H. Mohammadi, S. Kumar, S. Rezapour, S. Etemad, A theoretical study of the Caputo-Fabrizio fractional modeling for hearing loss due to Mumps virus with optimal control, Chaos Soliton. Fract., 144 (2021), 110668. http://dx.doi.org/10.1016/j.chaos.2021.110668 doi: 10.1016/j.chaos.2021.110668
    [42] S. Kumar, P. Shaw, A. Abdel-Aty, E. Mahmoud, A numerical study on fractional differential equation with population growth model, Numer. Meth. Part. D. E., in press. http://dx.doi.org/10.1002/num.22684
    [43] P. Shaw, S. Kumar, S. Momani, S. Hadid, Dynamical analysis of fractional plant disease model with curative and preventive treatments, Chaos, Soliton. Fract., 164 (2022), 112705. http://dx.doi.org/10.1016/j.chaos.2022.112705 doi: 10.1016/j.chaos.2022.112705
    [44] D. Gopal, M. Abbas, D. Patel, C. Vetro, Fixed point of \alpha -type F-contractive mappings with an application to nonlinear fractional differential equation, Acta Math. Sci., 36 (2016), 957–970. http://dx.doi.org/10.1016/S0252-9602(16)30052-2 doi: 10.1016/S0252-9602(16)30052-2
    [45] D. Baleanu, S. Rezapour, H. Mohammadi, Some existence results on nonlinear fractional differential equations, Philos. Trans. A Math. Phys. Eng. Sci., 371 (2013), 20120144. http://dx.doi.org/10.1098/rsta.2012.0144 doi: 10.1098/rsta.2012.0144
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)