Research article

New error bound for linear complementarity problem of S-SDDS-B matrices

  • Received: 26 July 2021 Accepted: 09 November 2021 Published: 26 November 2021
  • MSC : 15A48, 65G50, 90C31, 90C33

  • S-SDDS-B matrices form a subclass of P-matrices that contains B-matrices. A new error bound for the linear complementarity problem of S-SDDS-B matrices is presented, which improves the corresponding result in [1]. Numerical examples are given to verify the results.

    Citation: Lanlan Liu, Pan Han, Feng Wang. New error bound for linear complementarity problem of S-SDDS-B matrices[J]. AIMS Mathematics, 2022, 7(2): 3239-3249. doi: 10.3934/math.2022179




    Many fundamental problems in optimization and mathematical programming can be described as a linear complementarity problem (LCP), such as quadratic programming, the nonlinear obstacle problem, invariant capital stock, the Nash equilibrium point of a bimatrix game, optimal stopping, and the free boundary problem for journal bearings; see [1,2,3]. The error bound on the distance between an arbitrary point in $\mathbb{R}^n$ and the solution set of the LCP plays an important role in the convergence analysis of algorithms; for details, see [4,5,6,7].

    It is well known that the LCP has a unique solution for any vector $q\in\mathbb{R}^n$ if and only if $M$ is a P-matrix. Some basic definitions for these special matrices are given below. A matrix $M=(m_{ij})\in\mathbb{R}^{n\times n}$ is called a Z-matrix if $m_{ij}\le 0$ for all $i\ne j$; a P-matrix if all its principal minors are positive; an M-matrix if $M^{-1}\ge 0$ and $M$ is a Z-matrix; an H-matrix if its comparison matrix $\langle M\rangle=(\tilde m_{ij})$ is an M-matrix, where the comparison matrix is given by

    $$\tilde m_{ij}=\begin{cases}|m_{ij}|, & i=j,\\ -|m_{ij}|, & i\ne j.\end{cases}$$
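The comparison matrix can be computed mechanically from this definition; the following is a minimal sketch (the 2×2 matrix is an arbitrary example, not from the paper):

```python
import numpy as np

def comparison_matrix(M):
    # <M>: keep |m_ii| on the diagonal, negate the moduli elsewhere.
    C = -np.abs(np.asarray(M, dtype=float))
    np.fill_diagonal(C, np.abs(np.diag(M)))
    return C

M = np.array([[4.0, -1.0],
              [2.0,  3.0]])
print(comparison_matrix(M))  # [[ 4. -1.] [-2.  3.]]
```

For an H-matrix $M$, `comparison_matrix(M)` is an M-matrix, i.e. it is a Z-matrix with an entrywise nonnegative inverse.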

    The linear complementarity problem LCP$(M,q)$ is to find a vector $x\in\mathbb{R}^n$ such that

    $$x\ge 0,\quad Mx+q\ge 0,\quad x^T(Mx+q)=0,$$

    or to prove that no such vector $x$ exists, where $M=(m_{ij})\in\mathbb{R}^{n\times n}$ and $q\in\mathbb{R}^n$. One of the essential problems in the LCP$(M,q)$ is to estimate

    $$\max_{d\in[0,1]^n}\|(I-D+DM)^{-1}\|_\infty,$$

    which is used to bound the error $\|x-x^*\|_\infty$, that is,

    $$\|x-x^*\|_\infty\le\max_{d\in[0,1]^n}\|(I-D+DM)^{-1}\|_\infty\,\|r(x)\|_\infty,$$

    where $x^*$ is the solution of the LCP$(M,q)$, $r(x)=\min\{x,Mx+q\}$, $D=\mathrm{diag}(d_i)$ with $0\le d_i\le 1$, and the min operator in $r(x)$ denotes the componentwise minimum of the two vectors. Since real H-matrices with positive diagonal entries form a subclass of P-matrices, the error bound becomes simpler for them (see formula (2.4) in [8]). Nowadays, many scholars are interested in the research on special H-matrices, such as QN-matrices [9], S-SDD matrices [10], Nekrasov matrices [11] and Ostrowski matrices [12]. The corresponding error bounds for LCPs of QN-matrices were achieved by Dai et al. in [13] and Gao et al. in [14]. A new error bound for the LCP of Σ-SDD matrices was given in [15], which depends only on the entries of the involved matrices.
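To make the role of the residual $r(x)$ concrete, here is a minimal sketch (the 2×2 data $M$ and $q$ are illustrative choices, not from the paper); the componentwise residual vanishes exactly at a solution of the LCP:

```python
import numpy as np

def natural_residual(M, q, x):
    # r(x) = min(x, Mx + q), taken componentwise.
    return np.minimum(x, M @ x + q)

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])     # a P-matrix: principal minors 2, 3, 6 are positive
q = np.array([-1.0, -3.0])
x_star = np.array([0.0, 1.0])  # solves the LCP: x* >= 0 and M x* + q = (0, 0)

print(natural_residual(M, q, x_star))                # [0. 0.]
print(natural_residual(M, q, np.array([1.0, 1.0])))  # nonzero away from x*
```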

    When the matrix $A$ is not an H-matrix, formula (2.4) in [8] cannot be used. However, for some subclasses of P-matrices that are not H-matrices, error bounds for LCPs are also needed, for example, for SB-matrices [16], BS-matrices [17], weakly chained diagonally dominant B-matrices [18], DB-matrices [19] and MB-matrices [20]. B-matrices, an important subclass of P-matrices, have been studied for years with fruitful results; see [18,21,22,23,24,25].

    In this paper, we focus on the error bound for the LCP$(M,q)$ when $M$ is an S-SDDS-B matrix, which is a P-matrix. In Section 2, we introduce some notations, definitions and lemmas that will be used in the subsequent analysis. In Section 3, a new error bound is presented and compared with the bound in [1]. In Section 4, we give numerical examples and a graph to show the efficiency of our method.

    In this section, some notations, definitions and lemmas are recalled.

    Given a matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\ge 2$, and a subset $S\subseteq N=\{1,\dots,n\}$, we denote

    $$r_i(A)=\sum_{j=1,\,j\ne i}^{n}|a_{ij}|,\quad i=1,\dots,n,$$
    $$r_i^{j}(A)=r_i(A)-|a_{ij}|,\quad j\ne i,\ i=1,\dots,n,$$

    and also

    $$r_i^S(A)=\begin{cases}\displaystyle\sum_{j\in S,\,j\ne i}|a_{ij}|, & i\in S,\\[4pt] \displaystyle\sum_{j\in S}|a_{ij}|, & i\notin S.\end{cases}$$

    Here $\bar S\cup S=N$, where $\bar S$ is the complement of $S$ in $N$.
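These row quantities are easy to compute; a minimal sketch follows (0-based indices, unlike the 1-based notation above; the 4×4 matrix is an illustrative example):

```python
import numpy as np

def r(A, i):
    # r_i(A): sum of off-diagonal moduli in row i.
    return np.abs(A[i]).sum() - abs(A[i, i])

def r_S(A, i, S):
    # r_i^S(A): row-i sum restricted to columns in S; the diagonal term
    # is excluded automatically when i itself belongs to S.
    return sum(abs(A[i, j]) for j in S if j != i)

A = np.array([[0.6, 0.1, 0.0, 0.1],
              [0.1, 0.5, 0.1, 0.0],
              [0.0, 0.1, 0.6, 0.1],
              [0.0, 0.1, 0.0, 0.4]])
S = (0, 1, 2)
print(r(A, 0), r_S(A, 0, S), r_S(A, 0, (3,)))  # 0.2 0.1 0.1 (up to rounding)
```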

    Following [26], a matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\ge 2$, is said to be S-SDD if the following conditions are fulfilled:

    $$|a_{ii}|>r_i^S(A)\quad\text{for all }i\in S,$$

    and

    $$[|a_{ii}|-r_i^S(A)][|a_{jj}|-r_j^{\bar S}(A)]>r_i^{\bar S}(A)\,r_j^S(A)\quad\text{for all }i\in S\text{ and }j\in\bar S.$$

    We recall a sparse refinement of the S-SDD class and then extend it to the B-matrix setting.

    Definition 2.1. [26] A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is said to be S-SDDS (S-SDD Sparse) if the following conditions are satisfied:

    (i) $|a_{ii}|>r_i^S(A)$ for all $i\in S$;

    (ii) $|a_{jj}|>r_j^{\bar S}(A)$ for all $j\in\bar S$;

    (iii) for all $i\in S$ and all $j\in\bar S$ such that $a_{ij}\ne 0$ or $a_{ji}\ne 0$,

    $$[|a_{ii}|-r_i^S(A)][|a_{jj}|-r_j^{\bar S}(A)]>r_i^{\bar S}(A)\,r_j^S(A). \tag{2.1}$$
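Definition 2.1 translates directly into a computational test; the sketch below is our own (0-based index sets), and the 4×4 test matrix is an illustrative S-SDDS matrix:

```python
import numpy as np
from itertools import product

def r_S(A, i, S):
    return sum(abs(A[i, j]) for j in S if j != i)

def is_S_SDDS(A, S):
    n = A.shape[0]
    Sbar = tuple(j for j in range(n) if j not in S)
    # (i), (ii): local diagonal dominance inside S and inside its complement
    if any(abs(A[i, i]) <= r_S(A, i, S) for i in S):
        return False
    if any(abs(A[j, j]) <= r_S(A, j, Sbar) for j in Sbar):
        return False
    # (iii): cross condition, checked only where a coupling entry is nonzero
    for i, j in product(S, Sbar):
        if A[i, j] != 0 or A[j, i] != 0:
            lhs = (abs(A[i, i]) - r_S(A, i, S)) * (abs(A[j, j]) - r_S(A, j, Sbar))
            if lhs <= r_S(A, i, Sbar) * r_S(A, j, S):
                return False
    return True

B = np.array([[ 0.5, 0.0, -0.1,  0.0],
              [ 0.0, 0.4,  0.0, -0.1],
              [-0.1, 0.0,  0.5,  0.0],
              [-0.1, 0.0, -0.1,  0.3]])
print(is_S_SDDS(B, (0, 1, 2)))  # True
```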

    For $A=(a_{ij})\in\mathbb{R}^{n\times n}$, set $r_i^+:=\max\{0,\,a_{ij}\mid j\ne i\}$ for each $i=1,\dots,n$. Then we can write $A=B^++C$, where

    $$B^+=(b_{ij})=\begin{pmatrix}a_{11}-r_1^+ & \cdots & a_{1n}-r_1^+\\ \vdots & & \vdots\\ a_{n1}-r_n^+ & \cdots & a_{nn}-r_n^+\end{pmatrix},\qquad C=\begin{pmatrix}r_1^+ & \cdots & r_1^+\\ \vdots & & \vdots\\ r_n^+ & \cdots & r_n^+\end{pmatrix}. \tag{2.2}$$
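The splitting (2.2) can be sketched as follows (function name is our own; the 4×4 matrix is an illustrative example):

```python
import numpy as np

def b_plus_split(A):
    # r_i^+ = max{0, a_ij : j != i}; C repeats r_i^+ across row i (rank <= 1).
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    r_plus = np.array([max(0.0, np.delete(A[i], i).max()) for i in range(n)])
    C = np.tile(r_plus[:, None], (1, n))
    return A - C, C

A = np.array([[0.6, 0.1, 0.0, 0.1],
              [0.1, 0.5, 0.1, 0.0],
              [0.0, 0.1, 0.6, 0.1],
              [0.0, 0.1, 0.0, 0.4]])
B_plus, C = b_plus_split(A)
print(B_plus[3])  # [-0.1  0.  -0.1  0.3]
```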

    Definition 2.2. Suppose that $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\ge 2$, is a matrix with the splitting $A=B^++C$. We say $A$ is an S-SDDS-B matrix if and only if $B^+$ is an S-SDDS matrix with positive diagonal entries.

    There is an equivalent definition in [27], which is closely related to strictly diagonally dominant matrices.

    Definition 2.3. [27] Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ and $A=B^++C$, where $B^+$ is defined as in (2.2). Then $A$ is a B-matrix if and only if $B^+$ is a strictly diagonally dominant matrix with positive diagonal entries.

    It follows immediately from Definition 2.3 that S-SDDS-B matrices contain B-matrices, that is,

    $$\{\text{B-matrices}\}\subseteq\{\text{S-SDDS-B matrices}\}.$$
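Definition 2.3 gives a direct computational test for B-matrices; a minimal sketch (function name is our own, the 4×4 matrix is an illustrative example):

```python
import numpy as np

def is_B_matrix(A):
    # Definition 2.3: A is a B-matrix iff B^+ is strictly diagonally
    # dominant (its diagonal is then automatically positive).
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    r_plus = np.array([max(0.0, np.delete(A[i], i).max()) for i in range(n)])
    B = A - r_plus[:, None]
    off = np.abs(B).sum(axis=1) - np.abs(np.diag(B))
    return bool(np.all(np.diag(B) > off))

A = np.array([[0.6, 0.1, 0.0, 0.1],
              [0.1, 0.5, 0.1, 0.0],
              [0.0, 0.1, 0.6, 0.1],
              [0.0, 0.1, 0.0, 0.4]])
print(is_B_matrix(A))  # True
```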

    Now, we will introduce some useful lemmas.

    Lemma 2.1. [26] Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\ge 2$, be an S-SDDS matrix. Then $A$ is a nonsingular H-matrix.

    Lemma 2.2. [26] Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\ge 2$, be an S-SDDS matrix. Then

    $$\|A^{-1}\|_\infty\le\max\Bigl\{\max_{i\in S:\,r_i^{\bar S}(A)=0}\frac{1}{|a_{ii}|-r_i^S(A)},\ \max_{j\in\bar S:\,r_j^S(A)=0}\frac{1}{|a_{jj}|-r_j^{\bar S}(A)},\ \max_{i\in S,\,j\in\bar S:\,a_{ij}\ne 0}f_{ij}^S(A),\ \max_{i\in S,\,j\in\bar S:\,a_{ji}\ne 0}f_{ij}^{\bar S}(A)\Bigr\}, \tag{2.3}$$

    where

    $$f_{ij}^S(A)=\frac{|a_{jj}|-r_j^{\bar S}(A)+r_i^{\bar S}(A)}{[|a_{ii}|-r_i^S(A)][|a_{jj}|-r_j^{\bar S}(A)]-r_i^{\bar S}(A)\,r_j^S(A)},\qquad i\in S,\ j\in\bar S,$$

    and $f_{ij}^{\bar S}(A)$ is defined analogously with the roles of $i$ and $j$ (and of $S$ and $\bar S$) exchanged.
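The right-hand side of (2.3) can be evaluated mechanically. The sketch below (our own code, 0-based indices; the 4×4 matrix is an illustrative S-SDDS matrix) computes the bound and compares it with the true infinity norm of the inverse:

```python
import numpy as np
from itertools import product

def r_S(A, i, S):
    return sum(abs(A[i, j]) for j in S if j != i)

def inverse_norm_bound(A, S):
    n = A.shape[0]
    Sbar = tuple(j for j in range(n) if j not in S)
    terms = []
    for i in S:
        if r_S(A, i, Sbar) == 0:
            terms.append(1.0 / (abs(A[i, i]) - r_S(A, i, S)))
    for j in Sbar:
        if r_S(A, j, S) == 0:
            terms.append(1.0 / (abs(A[j, j]) - r_S(A, j, Sbar)))
    for i, j in product(S, Sbar):
        den = ((abs(A[i, i]) - r_S(A, i, S)) * (abs(A[j, j]) - r_S(A, j, Sbar))
               - r_S(A, i, Sbar) * r_S(A, j, S))
        if A[i, j] != 0:   # f_ij^S term
            terms.append((abs(A[j, j]) - r_S(A, j, Sbar) + r_S(A, i, Sbar)) / den)
        if A[j, i] != 0:   # f_ij^{S-bar} term, roles of i and j exchanged
            terms.append((abs(A[i, i]) - r_S(A, i, S) + r_S(A, j, S)) / den)
    return max(terms)

B = np.array([[ 0.5, 0.0, -0.1,  0.0],
              [ 0.0, 0.4,  0.0, -0.1],
              [-0.1, 0.0,  0.5,  0.0],
              [-0.1, 0.0, -0.1,  0.3]])
bound = inverse_norm_bound(B, (0, 1, 2))
actual = np.linalg.norm(np.linalg.inv(B), np.inf)
print(bound, actual)  # both are 5 up to rounding: the bound is attained here
```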

    Lemma 2.3. [1] Let $A\in\mathbb{R}^{n\times n}$ be a B-matrix and let $B^+=(b_{ij})$ be the matrix in (2.2). Then

    $$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\le\frac{n-1}{\min\{\beta,1\}}, \tag{2.4}$$

    where $\beta=\min_{i\in N}\{\beta_i\}$ and $\beta_i=b_{ii}-\sum_{j\ne i}|b_{ij}|$.
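The bound (2.4) is straightforward to compute; a minimal sketch (function name is our own; the 4×4 matrix is an illustrative B-matrix):

```python
import numpy as np

def gep_bound(A):
    # (n-1)/min{beta, 1}, with beta_i = b_ii - sum_{j != i} |b_ij|.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    r_plus = np.array([max(0.0, np.delete(A[i], i).max()) for i in range(n)])
    B = A - r_plus[:, None]
    beta = (np.diag(B) - (np.abs(B).sum(axis=1) - np.abs(np.diag(B)))).min()
    return (n - 1) / min(beta, 1.0)

A = np.array([[0.6, 0.1, 0.0, 0.1],
              [0.1, 0.5, 0.1, 0.0],
              [0.0, 0.1, 0.6, 0.1],
              [0.0, 0.1, 0.0, 0.4]])
print(gep_bound(A))  # ~30 (here beta = 0.1 and n = 4)
```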

    Lemma 2.4. [21] Let $\gamma>0$ and $\eta>0$. Then, for any $x\in[0,1]$,

    $$\frac{1}{1-x+\gamma x}\le\frac{1}{\min\{\gamma,1\}},\qquad \frac{\eta x}{1-x+\gamma x}\le\frac{\eta}{\gamma}.$$
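A quick numerical spot-check of Lemma 2.4 on a grid (the values of $\gamma$ and $\eta$ are arbitrary test choices):

```python
import numpy as np

gamma, eta = 0.4, 0.1
x = np.linspace(0.0, 1.0, 1001)
den = 1.0 - x + gamma * x            # denominator 1 - x + gamma*x on [0, 1]
assert (1.0 / den).max() <= 1.0 / min(gamma, 1.0) + 1e-12
assert (eta * x / den).max() <= eta / gamma + 1e-12
print("Lemma 2.4 holds on the grid")
```

Both bounds are attained at $x=1$ when $\gamma<1$, which is why the maxima equal $1/\gamma$ and $\eta/\gamma$ here.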

    Lemma 2.5. [27] Let $A\in\mathbb{R}^{n\times n}$ be a nonsingular M-matrix and let $P$ be a nonnegative matrix of rank 1. Then $A+P$ is a P-matrix.

    In this section, a new error bound for the LCP$(M,q)$ is presented when $M$ is an S-SDDS-B matrix. First, we prove that an S-SDDS-B matrix is a P-matrix.

    Lemma 3.1. Let $A\in\mathbb{R}^{n\times n}$ ($n\ge 2$) be an S-SDDS-B matrix. Then $A$ is a P-matrix.

    Proof. By Definition 2.2, the matrix $C$ in (2.2) is nonnegative and has rank 1. Since an S-SDDS matrix is a nonsingular H-matrix (Lemma 2.1) and $B^+$ is a Z-matrix with positive diagonal entries, $B^+$ is a nonsingular M-matrix. We conclude from Lemma 2.5 that $A=B^++C$ is a P-matrix.
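Lemma 3.1 can be checked numerically on small matrices by brute force over all principal minors; a sketch (the 4×4 matrix is the B-matrix used in Example 1 of Section 4):

```python
import numpy as np
from itertools import combinations

def is_P_matrix(A):
    # Brute-force check that all principal minors are positive; fine for small n.
    n = A.shape[0]
    return all(np.linalg.det(A[np.ix_(idx, idx)]) > 0
               for k in range(1, n + 1)
               for idx in combinations(range(n), k))

A = np.array([[0.6, 0.1, 0.0, 0.1],
              [0.1, 0.5, 0.1, 0.0],
              [0.0, 0.1, 0.6, 0.1],
              [0.0, 0.1, 0.0, 0.4]])
print(is_P_matrix(A))  # True
```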

    Lemma 3.2. Suppose that $M=(m_{ij})\in\mathbb{R}^{n\times n}$ ($n\ge 2$) is an S-SDDS matrix with positive diagonal entries, and let

    $$\tilde M=I-D+DM=(\tilde m_{ij}),\qquad D=\mathrm{diag}(d_i),\quad 0\le d_i\le 1.$$

    Then $\tilde M$ is an S-SDDS matrix with positive diagonal entries.

    Proof. From $\tilde M=I-D+DM=(\tilde m_{ij})$, we have

    $$\tilde m_{ij}=\begin{cases}1-d_i+d_i m_{ii}, & i=j,\\ d_i m_{ij}, & i\ne j.\end{cases}$$

    Because $M$ is an S-SDDS matrix with positive diagonal entries and $D=\mathrm{diag}(d_i)$ with $0\le d_i\le 1$, clearly $\tilde m_{ii}=1-d_i+d_i m_{ii}>0$, and for any $i\in S$ we get

    $$|\tilde m_{ii}|-r_i^S(\tilde M)=\bigl(1-d_i+d_i m_{ii}\bigr)-d_i r_i^S(M)=(1-d_i)+d_i\bigl(m_{ii}-r_i^S(M)\bigr)>0,$$

    since $1-d_i\ge 0$, $m_{ii}>r_i^S(M)$, and the two terms cannot vanish simultaneously. Similarly, for any $j\in\bar S$ we have

    $$|\tilde m_{jj}|-r_j^{\bar S}(\tilde M)=(1-d_j)+d_j\bigl(m_{jj}-r_j^{\bar S}(M)\bigr)>0.$$

    For any $i\in S$ and $j\in\bar S$ with $\tilde m_{ij}\ne 0$ or $\tilde m_{ji}\ne 0$, we obtain

    $$\bigl(|\tilde m_{ii}|-r_i^S(\tilde M)\bigr)\bigl(|\tilde m_{jj}|-r_j^{\bar S}(\tilde M)\bigr)\ge d_i d_j\bigl(m_{ii}-r_i^S(M)\bigr)\bigl(m_{jj}-r_j^{\bar S}(M)\bigr)\ge d_i d_j\,r_i^{\bar S}(M)\,r_j^S(M)=r_i^{\bar S}(\tilde M)\,r_j^S(\tilde M),$$

    with strict inequality overall: if $d_i d_j>0$ the second inequality is strict, while if $d_i d_j=0$ the right-hand side vanishes and the left-hand side is positive.

    From Definition 2.1, $\tilde M$ is an S-SDDS matrix with positive diagonal entries.

    Theorem 3.1. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be an S-SDDS-B matrix and write $A=B^++C$, where $B^+=(b_{ij})$ is defined as in (2.2). Then

    $$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\le(n-1)\max\bigl\{\mu_i(B^+),\ \mu_j(B^+),\ \mu_{ij}^S(B^+),\ \mu_{ji}^{\bar S}(B^+)\bigr\}, \tag{3.1}$$

    where

    $$\mu_i(B^+)=\max_{i\in S:\,r_i^{\bar S}(B^+)=0}\max\Bigl\{\frac{1}{b_{ii}-r_i^S(B^+)},\,1\Bigr\},$$
    $$\mu_j(B^+)=\max_{j\in\bar S:\,r_j^S(B^+)=0}\max\Bigl\{\frac{1}{b_{jj}-r_j^{\bar S}(B^+)},\,1\Bigr\},$$
    $$\mu_{ij}^S(B^+)=\max_{i\in S,\,j\in\bar S:\,b_{ij}\ne 0}\frac{\bigl(b_{jj}-r_j^{\bar S}(B^+)\bigr)\Bigl(\dfrac{b_{ii}-r_i^S(B^+)}{\min\{b_{ii}-r_i^S(B^+),1\}}+\dfrac{r_i^{\bar S}(B^+)}{\min\{b_{jj}-r_j^{\bar S}(B^+),1\}}\Bigr)}{\bigl(b_{ii}-r_i^S(B^+)\bigr)\bigl(b_{jj}-r_j^{\bar S}(B^+)\bigr)-r_i^{\bar S}(B^+)\,r_j^S(B^+)},$$
    $$\mu_{ji}^{\bar S}(B^+)=\max_{i\in S,\,j\in\bar S:\,b_{ji}\ne 0}\frac{\bigl(b_{ii}-r_i^S(B^+)\bigr)\Bigl(\dfrac{b_{jj}-r_j^{\bar S}(B^+)}{\min\{b_{jj}-r_j^{\bar S}(B^+),1\}}+\dfrac{r_j^S(B^+)}{\min\{b_{ii}-r_i^S(B^+),1\}}\Bigr)}{\bigl(b_{jj}-r_j^{\bar S}(B^+)\bigr)\bigl(b_{ii}-r_i^S(B^+)\bigr)-r_j^S(B^+)\,r_i^{\bar S}(B^+)}.$$

    Proof. Denote $A_D=I-D+DA$. Then

    $$A_D=I-D+D(B^++C)=B_D^++C_D,$$

    where $B_D^+=I-D+DB^+$ and $C_D=DC$. Since $B^+$ is an S-SDDS matrix with positive diagonal entries, $B_D^+$ is also an S-SDDS matrix with positive diagonal entries by Lemma 3.2.

    Note that $C_D$ is nonnegative with rank at most 1, so, arguing exactly as in [1],

    $$\|A_D^{-1}\|_\infty=\bigl\|\bigl(I+(B_D^+)^{-1}C_D\bigr)^{-1}(B_D^+)^{-1}\bigr\|_\infty\le(n-1)\,\|(B_D^+)^{-1}\|_\infty. \tag{3.2}$$

    We now estimate $\|(B_D^+)^{-1}\|_\infty$. Writing $B_D^+=(\tilde b_{ij})$ and applying Lemma 2.2 to $B_D^+$, we have

    $$\|(B_D^+)^{-1}\|_\infty\le\max\Bigl\{\max_{i\in S:\,r_i^{\bar S}(B_D^+)=0}\frac{1}{\tilde b_{ii}-r_i^S(B_D^+)},\ \max_{j\in\bar S:\,r_j^S(B_D^+)=0}\frac{1}{\tilde b_{jj}-r_j^{\bar S}(B_D^+)},\ \max_{i\in S,\,j\in\bar S:\,\tilde b_{ij}\ne 0}f_{ij}^S(B_D^+),\ \max_{i\in S,\,j\in\bar S:\,\tilde b_{ji}\ne 0}f_{ij}^{\bar S}(B_D^+)\Bigr\}.$$

    We bound each of the four terms on the right-hand side in turn, using Lemma 2.4; all denominators below are positive because $B^+$ and $B_D^+$ are S-SDDS matrices with positive diagonal entries.

    (1) If $r_i^{\bar S}(B_D^+)=0$ for some $i\in S$, then either $d_i=0$ or $r_i^{\bar S}(B^+)=0$. If $d_i=0$, then $\tilde b_{ii}-r_i^S(B_D^+)=1$, so the corresponding term equals $1\le\mu_i(B^+)$. Otherwise, by Lemma 2.4,

    $$\frac{1}{\tilde b_{ii}-r_i^S(B_D^+)}=\frac{1}{1-d_i+d_i\bigl(b_{ii}-r_i^S(B^+)\bigr)}\le\frac{1}{\min\{b_{ii}-r_i^S(B^+),1\}}\le\mu_i(B^+). \tag{3.3}$$

    (2) Similarly, if $r_j^S(B_D^+)=0$ for some $j\in\bar S$, then

    $$\frac{1}{\tilde b_{jj}-r_j^{\bar S}(B_D^+)}=\frac{1}{1-d_j+d_j\bigl(b_{jj}-r_j^{\bar S}(B^+)\bigr)}\le\frac{1}{\min\{b_{jj}-r_j^{\bar S}(B^+),1\}}\le\mu_j(B^+). \tag{3.4}$$

    (3) For $i\in S$ and $j\in\bar S$ with $\tilde b_{ij}\ne 0$ (so that $b_{ij}\ne 0$ and $d_i\ne 0$), dividing the numerator and the denominator of $f_{ij}^S(B_D^+)$ by $\bigl(\tilde b_{ii}-r_i^S(B_D^+)\bigr)\bigl(\tilde b_{jj}-r_j^{\bar S}(B_D^+)\bigr)$ gives

    $$f_{ij}^S(B_D^+)=\frac{\dfrac{1}{\tilde b_{ii}-r_i^S(B_D^+)}+\dfrac{r_i^{\bar S}(B_D^+)}{\bigl(\tilde b_{ii}-r_i^S(B_D^+)\bigr)\bigl(\tilde b_{jj}-r_j^{\bar S}(B_D^+)\bigr)}}{1-\dfrac{r_i^{\bar S}(B_D^+)\,r_j^S(B_D^+)}{\bigl(\tilde b_{ii}-r_i^S(B_D^+)\bigr)\bigl(\tilde b_{jj}-r_j^{\bar S}(B_D^+)\bigr)}}.$$

    Since $r_i^{\bar S}(B_D^+)=d_i\,r_i^{\bar S}(B^+)$ and $r_j^S(B_D^+)=d_j\,r_j^S(B^+)$, applying Lemma 2.4 to each quotient yields

    $$f_{ij}^S(B_D^+)\le\frac{\dfrac{1}{\min\{b_{ii}-r_i^S(B^+),1\}}+\dfrac{r_i^{\bar S}(B^+)}{\bigl(b_{ii}-r_i^S(B^+)\bigr)\min\{b_{jj}-r_j^{\bar S}(B^+),1\}}}{1-\dfrac{r_i^{\bar S}(B^+)\,r_j^S(B^+)}{\bigl(b_{ii}-r_i^S(B^+)\bigr)\bigl(b_{jj}-r_j^{\bar S}(B^+)\bigr)}}=\mu_{ij}^S(B^+). \tag{3.5}$$

    (4) In the same way, for $i\in S$ and $j\in\bar S$ with $\tilde b_{ji}\ne 0$, we arrive at

    $$f_{ij}^{\bar S}(B_D^+)\le\mu_{ji}^{\bar S}(B^+). \tag{3.6}$$

    Combining (3.2)–(3.6), we obtain (3.1). The proof is completed.

    The bound in (3.1) also holds for B-matrices, because B-matrices form a subclass of S-SDDS-B matrices. Next, we show that the bound in Theorem 3.1 is better than that in Lemma 2.3 under certain conditions.

    Theorem 3.2. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be an S-SDDS-B matrix written as $A=B^++C$, where $B^+=(b_{ij})$ and $C$ are as in (2.2). If $b_{ii}-r_i^S(B^+)<1$ and $b_{jj}-r_j^{\bar S}(B^+)<1$ for all $i\in S$ and $j\in\bar S$, then

    $$\max\bigl\{\mu_i(B^+),\ \mu_j(B^+),\ \mu_{ij}^S(B^+),\ \mu_{ji}^{\bar S}(B^+)\bigr\}\le\frac{1}{\min\{\beta,1\}}. \tag{3.8}$$

    Proof. Recall from Lemma 2.3 that $\beta=\min_{i\in N}\{\beta_i\}$ with $\beta_i=b_{ii}-\sum_{j\ne i}|b_{ij}|=b_{ii}-r_i(B^+)$. For $i\in S$ with $r_i^{\bar S}(B^+)=0$ we have $r_i(B^+)=r_i^S(B^+)$, so, using $b_{ii}-r_i^S(B^+)<1$, it is obvious that

    $$\mu_i(B^+)=\max\Bigl\{\frac{1}{b_{ii}-r_i^S(B^+)},1\Bigr\}=\frac{1}{b_{ii}-r_i(B^+)}\le\frac{1}{\min\{\beta,1\}}.$$

    In the same way, for $j\in\bar S$ with $r_j^S(B^+)=0$,

    $$\mu_j(B^+)=\max\Bigl\{\frac{1}{b_{jj}-r_j^{\bar S}(B^+)},1\Bigr\}=\frac{1}{b_{jj}-r_j(B^+)}\le\frac{1}{\min\{\beta,1\}}.$$

    When $b_{ii}-r_i^S(B^+)<1$ and $b_{jj}-r_j^{\bar S}(B^+)<1$, every minimum $\min\{\cdot,1\}$ in $\mu_{ij}^S(B^+)$ equals the quantity itself, and hence

    $$\mu_{ij}^S(B^+)=\max_{i\in S,\,j\in\bar S:\,b_{ij}\ne 0}\frac{b_{jj}-r_j^{\bar S}(B^+)+r_i^{\bar S}(B^+)}{\bigl(b_{ii}-r_i^S(B^+)\bigr)\bigl(b_{jj}-r_j^{\bar S}(B^+)\bigr)-r_i^{\bar S}(B^+)\,r_j^S(B^+)}.$$

    Suppose first that $b_{jj}-r_j(B^+)>b_{ii}-r_i(B^+)=b_{ii}-r_i^S(B^+)-r_i^{\bar S}(B^+)$ for the maximizing pair $i\in S$, $j\in\bar S$. Since $b_{ij}\ne 0$ implies $r_i^{\bar S}(B^+)>0$, multiplying both sides by $r_i^{\bar S}(B^+)$ and adding $\bigl(b_{jj}-r_j^{\bar S}(B^+)\bigr)\bigl(b_{ii}-r_i(B^+)\bigr)$ to both sides gives

    $$\bigl(b_{ii}-r_i^S(B^+)\bigr)\bigl(b_{jj}-r_j^{\bar S}(B^+)\bigr)-r_i^{\bar S}(B^+)\,r_j^S(B^+)>\bigl(b_{jj}-r_j^{\bar S}(B^+)+r_i^{\bar S}(B^+)\bigr)\bigl(b_{ii}-r_i(B^+)\bigr),$$

    and hence

    $$\frac{b_{jj}-r_j^{\bar S}(B^+)+r_i^{\bar S}(B^+)}{\bigl(b_{ii}-r_i^S(B^+)\bigr)\bigl(b_{jj}-r_j^{\bar S}(B^+)\bigr)-r_i^{\bar S}(B^+)\,r_j^S(B^+)}<\frac{1}{b_{ii}-r_i(B^+)}\le\frac{1}{\min\{\beta,1\}}.$$

    When $b_{jj}-r_j(B^+)=b_{jj}-r_j^{\bar S}(B^+)-r_j^S(B^+)\le b_{ii}-r_i(B^+)$, the same argument with the roles of $i$ and $j$ (and of $S$ and $\bar S$) exchanged yields

    $$\frac{b_{ii}-r_i^S(B^+)+r_j^S(B^+)}{\bigl(b_{ii}-r_i^S(B^+)\bigr)\bigl(b_{jj}-r_j^{\bar S}(B^+)\bigr)-r_i^{\bar S}(B^+)\,r_j^S(B^+)}<\frac{1}{b_{jj}-r_j(B^+)}\le\frac{1}{\min\{\beta,1\}},$$

    which bounds $\mu_{ji}^{\bar S}(B^+)$. So the conclusion in (3.8) holds.

    In this section, examples are given to show the advantage of the bound in Theorem 3.1.

    Example 1. Consider the S-SDDS-B matrix

    $$A=\begin{pmatrix}0.6&0.1&0&0.1\\0.1&0.5&0.1&0\\0&0.1&0.6&0.1\\0&0.1&0&0.4\end{pmatrix}.$$

    Matrix $A$ can be split into $A=B^++C$, where

    $$B^+=\begin{pmatrix}0.5&0&-0.1&0\\0&0.4&0&-0.1\\-0.1&0&0.5&0\\-0.1&0&-0.1&0.3\end{pmatrix},\qquad C=\begin{pmatrix}0.1&0.1&0.1&0.1\\0.1&0.1&0.1&0.1\\0.1&0.1&0.1&0.1\\0.1&0.1&0.1&0.1\end{pmatrix}.$$

    Since $A$ is a B-matrix, by Lemma 2.3 (here $\beta=\beta_4=0.1$),

    $$\max_{d\in[0,1]^4}\|(I-D+DA)^{-1}\|_\infty\le 30. \tag{4.1}$$

    Because $A$ is a B-matrix, it is also an S-SDDS-B matrix. Taking $S=\{1,2,3\}$ and $\bar S=\{4\}$, we can also compute the error bound by Theorem 3.1:

    $$\max_{d\in[0,1]^4}\|(I-D+DA)^{-1}\|_\infty\le 18.00. \tag{4.2}$$

    The results in (4.1) and (4.2) indicate that, for this matrix, the bound in Theorem 3.1 is sharper than that in Lemma 2.3.
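Both bounds can be validated by sampling random diagonal matrices $D$, as in Figure 1; a minimal Python sketch (the seed is an arbitrary choice):

```python
import numpy as np

A = np.array([[0.6, 0.1, 0.0, 0.1],
              [0.1, 0.5, 0.1, 0.0],
              [0.0, 0.1, 0.6, 0.1],
              [0.0, 0.1, 0.0, 0.4]])
rng = np.random.default_rng(0)
I = np.eye(4)
# Sample 1000 random D = diag(d), d in [0,1)^4, and record the norms.
norms = [np.linalg.norm(np.linalg.inv(I - D + D @ A), np.inf)
         for D in (np.diag(rng.random(4)) for _ in range(1000))]
print(max(norms))  # stays below the new bound 18, and a fortiori below 30
```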

    It is shown by Figure 1, in which 1000 random matrices $D$ are generated by the MATLAB code below, that 18 is a sharper bound than 30 for $\max\|(I-D+DA)^{-1}\|_\infty$. Blue stars in Figure 1 represent $\|(I-D+DA)^{-1}\|_\infty$ for the 1000 random diagonal matrices $D$ with entries in $[0,1]$.

    Figure 1. $\|(I-D+DA)^{-1}\|_\infty$ for the first 1000 matrices $D$ generated by diag(rand(4, 1)).

    MATLAB codes: for i = 1:1000, D = diag(rand(4,1)); y(i) = norm(inv(eye(4) - D + D*A), inf); end

    Example 2. Consider

    $$A=\begin{pmatrix}12&1&2&2&1&1&1\\3&18&0&3&2&3&3\\2&2&11&1&1&1&2\\2&3&0&15&3&3&3\\2&2&2&0&10&2&2\\3&1&3&0&3&9&2\\2&0&1&1&2&2&20\end{pmatrix}.$$

    $A$ can be split into $A=B^++C$, where

    $$B^+=\begin{pmatrix}10&-1&0&0&-1&-1&-1\\0&15&-3&0&-1&0&0\\0&0&9&-1&-1&-1&0\\-1&0&-3&12&0&0&0\\0&0&0&-2&8&0&0\\0&-2&0&-3&0&6&-1\\0&-2&-1&-1&0&0&18\end{pmatrix},\qquad C=\begin{pmatrix}2&2&2&2&2&2&2\\3&3&3&3&3&3&3\\2&2&2&2&2&2&2\\3&3&3&3&3&3&3\\2&2&2&2&2&2&2\\3&3&3&3&3&3&3\\2&2&2&2&2&2&2\end{pmatrix}.$$

    Taking into account that $B^+$ is not a strictly diagonally dominant matrix, $A$ is not a B-matrix. It is easy to check that $A$ fulfills Definition 2.2 when $S=\{1,2,3,4\}$ and $\bar S=\{5,6,7\}$. Therefore, by Theorem 3.1, we obtain

    $$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\le 2.8002.$$
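With $A$ as above, a quick check (our own code, 0-based rows) confirms the splitting and that $B^+$ has positive diagonal entries but fails strict diagonal dominance in its sixth row, so $A$ is indeed not a B-matrix:

```python
import numpy as np

A = np.array([[12, 1,  2, 2,  1, 1,  1],
              [ 3, 18, 0, 3,  2, 3,  3],
              [ 2, 2, 11, 1,  1, 1,  2],
              [ 2, 3,  0, 15, 3, 3,  3],
              [ 2, 2,  2, 0, 10, 2,  2],
              [ 3, 1,  3, 0,  3, 9,  2],
              [ 2, 0,  1, 1,  2, 2, 20]], dtype=float)
r_plus = np.array([max(0.0, np.delete(A[i], i).max()) for i in range(7)])
B = A - r_plus[:, None]
# Diagonal dominance gap b_ii - sum_{j != i} |b_ij| for each row of B^+.
gap = np.diag(B) - (np.abs(B).sum(axis=1) - np.diag(B))
print(r_plus)  # [2. 3. 2. 3. 2. 3. 2.]
print(gap)     # row 6 (index 5) has gap 0: B^+ is not strictly dominant
```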

    In this paper, we first give a new error bound for the LCP(M, q) with S-SDDS-B matrices, which depends only on the entries of $M$. Then, based on the new result, we compare it with the error bound in [1]. From Figure 1, we can see that our result improves that in [1].

    This work was supported by the National Natural Science Foundation of China (11861077), the Foundation of Science and Technology Department of Guizhou Province (20191161, 20181079), the Research Foundation of Guizhou Minzu University (2019YB08) and the Foundation of Education Department of Guizhou Province (2018143).

    The authors declare that they have no competing interests.



    [1] M. García-Esnaola, J. M. Peña, Error bounds for linear complementarity problems for B-matrices, Appl. Math. Lett., 22 (2009), 1071–1075. doi: 10.1016/j.aml.2008.09.001
    [2] K. G. Murty, F. T. Yu, Linear complementarity, linear and nonlinear programming, Berlin: Heldermann, 1988.
    [3] R. W. Cottle, J. S. Pang, R. E. Stone, The linear complementarity problem, Boston: Academic Press, 1992.
    [4] Z. Q. Luo, P. Tseng, On the linear convergence of descent methods for convex essentially smooth minimization, SIAM J. Control Optim., 30 (1992), 408–425. doi: 10.1137/0330025
    [5] Z. Q. Luo, P. Tseng, Error bound and convergence analysis of matrix splitting algorithms for the affine variational inequality problem, SIAM J. Optim., 2 (1992), 43–54. doi: 10.1137/0802004
    [6] J. S. Pang, A posteriori error bound for the linearly-constrained variational inequality problem, Math. Oper. Res., 12 (1987), 474–484. doi: 10.1287/moor.12.3.474
    [7] J. S. Pang, Inexact Newton methods for the nonlinear complementarity problem, Math. Program., 36 (1986), 54–71. doi: 10.1007/BF02591989
    [8] X. J. Chen, S. H. Xiang, Computation of error bounds for P-matrix linear complementarity problems, Math. Program., 106 (2006), 513–525. doi: 10.1007/s10107-005-0645-9
    [9] L. Y. Kolotilina, Bounds for the inverses of generalized Nekrasov matrices, J. Math. Sci., 207 (2015), 786–794. doi: 10.1007/s10958-015-2401-x
    [10] L. Cvetković, V. Kostić, R. S. Varga, A new Geršgorin-type eigenvalue inclusion set, Electron. T. Numer. Ana., 18 (2004), 73–80.
    [11] T. Szulc, L. Cvetković, M. Nedović, Scaling technique for partition-Nekrasov matrices, Appl. Math. Comput., 271 (2015), 201–208. doi: 10.1016/j.amc.2015.08.136
    [12] L. Cvetković, V. Kostić, S. Rauški, A new subclass of H-matrices, Appl. Math. Comput., 208 (2009), 206–210. doi: 10.1016/j.amc.2008.11.037
    [13] P. F. Dai, J. C. Li, Y. T. Li, C. Y. Zhang, Error bounds for linear complementarity problems of QN-matrices, Calcolo, 53 (2016), 647–657. doi: 10.1007/s10092-015-0167-7
    [14] L. Gao, Y. Q. Wang, C. Q. Li, New error bounds for the linear complementarity problem of QN-matrices, Numer. Algorithms, 77 (2018), 229–242. doi: 10.1007/s11075-017-0312-2
    [15] Z. W. Hou, X. Jing, L. Gao, New error bounds for linear complementarity problems of Σ-SDD matrices and SB-matrices, Open Math., 17 (2019), 1599–1614. doi: 10.1515/math-2019-0127
    [16] P. F. Dai, Y. T. Li, C. J. Lu, New error bounds for the linear complementarity problem with an SB-matrix, Numer. Algorithms, 64 (2013), 741–757. doi: 10.1007/s11075-012-9691-6
    [17] M. García-Esnaola, J. M. Peña, Error bounds for linear complementarity problems involving BS-matrices, Appl. Math. Lett., 25 (2012), 1379–1383. doi: 10.1016/j.aml.2011.12.006
    [18] F. Wang, Error bounds for linear complementarity problem of weakly chained diagonally dominant B-matrices, J. Inequal. Appl., 2017 (2017), 1–8. doi: 10.1186/s13660-017-1303-5
    [19] P. F. Dai, Error bounds for linear complementarity problems of DB-matrices, Linear Algebra Appl., 434 (2011), 830–840. doi: 10.1016/j.laa.2010.09.049
    [20] T. T. Chen, W. Li, X. P. Wu, S. Vong, Error bounds for linear complementarity problems of MB-matrices, Numer. Algorithms, 70 (2015), 341–356. doi: 10.1007/s11075-014-9950-9
    [21] C. Q. Li, Y. T. Li, Note on error bounds for linear complementarity problems for B-matrices, Appl. Math. Lett., 57 (2016), 108–113. doi: 10.1016/j.aml.2016.01.013
    [22] C. Q. Li, M. T. Gan, S. R. Yang, A new error bound for linear complementarity problems for B-matrices, Electron. J. Linear Al., 31 (2016), 476–484. doi: 10.13001/1081-3810.3250
    [23] M. García-Esnaola, J. M. Peña, A comparison of error bounds for linear complementarity problems of H-matrices, Linear Algebra Appl., 433 (2010), 956–964. doi: 10.1016/j.laa.2010.04.024
    [24] C. Q. Li, Y. T. Li, Weakly chained diagonally dominant B-matrices and error bounds for linear complementarity problems, Numer. Algorithms, 73 (2016), 985–998. doi: 10.1007/s11075-016-0125-8
    [25] D. Sun, F. Wang, New error bounds for linear complementarity problem of weakly chained diagonally dominant B-matrices, Open Math., 15 (2017), 978–986. doi: 10.1515/math-2017-0080
    [26] L. Y. Kolotilina, Some bounds for inverses involving matrix sparsity pattern, J. Math. Sci., 249 (2020), 242–255. doi: 10.1007/s10958-020-04938-3
    [27] J. M. Peña, A class of P-matrices with applications to the localization of the eigenvalues of a real matrix, SIAM J. Matrix Anal. Appl., 22 (2001), 1027–1037. doi: 10.1137/S0895479800370342
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)