Research article

Error bounds for linear complementarity problems of strong SDD1 matrices and strong SDD1-B matrices

  • Received: 17 August 2023 Revised: 09 September 2023 Accepted: 15 September 2023 Published: 22 September 2023
  • MSC : 15A48, 65G50, 90C31, 90C33

  • In this paper, an error bound for linear complementarity problems of strong SDD1 matrices is given. Using properties of strong SDD1 matrices, a new subclass of P-matrices called strong SDD1-B matrices is presented, which contains the B-matrices. A new error bound of linear complementarity problems for strong SDD1-B matrices is also provided, which improves some existing results. Numerical examples are given to illustrate the effectiveness of the obtained results.

    Citation: Yuanjie Geng, Deshu Sun. Error bounds for linear complementarity problems of strong SDD1 matrices and strong SDD1-B matrices[J]. AIMS Mathematics, 2023, 8(11): 27052-27064. doi: 10.3934/math.20231384




    Many fundamental problems in optimization and mathematical programming can be described as linear complementarity problems, such as quadratic programming, nonlinear obstacle problems, invariant capital stock, the Nash equilibrium point of a bimatrix game, optimal stopping and free boundary problems for journal bearings; see, for instance, [1,2,3,4].

    Some basic definitions for the special matrices involved are given as follows. Let $n$ be a positive integer, $N=\{1,2,\dots,n\}$, and let $\mathbb{R}^{n\times n}$ be the set of all real matrices of order $n$. A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is called a Z-matrix if $a_{ij}\le 0$ for any $i\neq j$; a P-matrix if all its principal minors are positive; an M-matrix if $A$ is a Z-matrix whose eigenvalues have non-negative real parts; and an H-matrix if its comparison matrix $\langle A\rangle=(\bar a_{ij})$ is an M-matrix, where

    $$\bar a_{ij}=\begin{cases}|a_{ij}|, & \text{if } i=j,\\ -|a_{ij}|, & \text{if } i\neq j.\end{cases}$$

    The linear complementarity problem of a matrix $A$, denoted by LCP($A,q$), is to find a vector $x\in\mathbb{R}^n$ such that

    $$Ax+q\ge 0,\qquad (Ax+q)^Tx=0,\qquad x\ge 0,\tag{1.1}$$

    or to prove that no such vector $x$ exists, where $A\in\mathbb{R}^{n\times n}$ and $q\in\mathbb{R}^n$. One of the essential problems in LCP($A,q$) is to estimate

    $$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty,$$

    where $D=\operatorname{diag}(d_i)$, $d=(d_1,d_2,\dots,d_n)$, $0\le d_i\le 1$, $i=1,2,\dots,n$. It is well known that when $A$ is a P-matrix, the linear complementarity problem has a unique solution.

    In [4], Chen and Xiang gave the following error bound for LCP($A,q$):

    $$\|x-x^*\|_\infty\le\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\,\|r(x)\|_\infty,\qquad \forall\, x\in\mathbb{R}^n,\tag{1.2}$$

    where $x^*$ is the solution of LCP($A,q$), $r(x)=\min\{x,Ax+q\}$, and the min operator denotes the componentwise minimum of two vectors. Since real H-matrices with positive diagonal entries form a subclass of P-matrices, an error bound for LCP($A,q$) can be obtained from formula (2.4) in [4]. Furthermore, to avoid the high-cost computation of the inverse matrix in (2.4), easily computable bounds for LCP($A,q$) have been derived for different subclasses of H-matrices, such as Ostrowski matrices [5], QN-matrices [6], Nekrasov matrices [7], S-SDDS matrices [8] and DZ-matrices [9]; these bounds depend only on the entries of the involved matrix $A$.
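    As a small illustration (ours, not from [4]): when $q\ge 0$, the zero vector solves LCP($A,q$), so the residual $r(x)$ in (1.2) and the error $\|x-x^*\|_\infty$ can be evaluated directly in MATLAB. The matrix and trial point below are our own toy data.

        A = [4 -1; -2 5];              % an SDD matrix with positive diagonal, hence a P-matrix
        q = [1; 2];                    % q >= 0, so x* = 0 solves LCP(A,q)
        x = [0.3; 0.1];                % an arbitrary trial point
        r = min(x, A*x + q);           % natural residual r(x) in (1.2): componentwise minimum
        err = norm(x - [0; 0], inf);   % ||x - x*||_inf
        res = norm(r, inf);            % ||r(x)||_inf; (1.2) bounds err by a constant times res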

    When the involved matrix belongs to a subclass of P-matrices that are not H-matrices, error bounds for LCP($A,q$) also need to be studied; examples include BRπ-matrices [10], B-Nekrasov matrices [11] and CKV-type B-matrices [12].

    In this paper, we apply an upper bound for the infinity norm of the inverse of a strong SDD1 matrix to estimate the error of linear complementarity problems of strong SDD1 matrices and strong SDD1-B matrices. Numerical examples show that the obtained results can improve other existing bounds.

    In this section, some definitions and lemmas are given. Throughout, $S$ denotes a nonempty subset of $N$ and $\bar S:=N\setminus S$ its complement. For each $i\in N$,

    $$r_i(A):=\sum_{j\neq i}|a_{ij}|,\qquad r_i^S(A):=\sum_{j\in S\setminus\{i\}}|a_{ij}|.$$

    Definition 1. [13] A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is called a strictly diagonally dominant (SDD) matrix if, for all $i\in N$,

    $$|a_{ii}|>r_i(A).$$

    Definition 2. [14] A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is said to be a strong SDD1 matrix if there exists a subset $S$ of $N$ such that

    ● (i) $|a_{ii}|>r_i(A)$ for all $i\in S$ with $r_i^{\bar S}(A)=0$;

    ● (ii) $|a_{jj}|>r_j(A)$ for all $j\in\bar S$;

    ● (iii) $\left(|a_{ii}|-r_i^S(A)\right)|a_{jj}|>r_i^{\bar S}(A)\,r_j(A)$ for all $i\in S$ and $j\in\bar S$ such that $a_{ij}\neq 0$.
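    Checking whether a given $S$ certifies that $A$ is a strong SDD1 matrix only involves the row quantities $r_i(A)$, $r_i^S(A)$ and $r_i^{\bar S}(A)$. A minimal MATLAB sketch of such a check (our own illustration, not code from [14]; the function name is_strong_sdd1 is ours):

        function tf = is_strong_sdd1(A, S)
        % Check conditions (i)-(iii) of Definition 2 for a candidate index set S.
        n = size(A,1);  Sbar = setdiff(1:n, S);
        r = sum(abs(A),2) - abs(diag(A));                    % r_i(A)
        rS = zeros(n,1);  rSbar = zeros(n,1);
        for i = 1:n
            rS(i)    = sum(abs(A(i, setdiff(S,    i))));     % r_i^S(A)
            rSbar(i) = sum(abs(A(i, setdiff(Sbar, i))));     % r_i^{S-bar}(A)
        end
        tf = true;
        for i = S(:)'                                        % condition (i)
            if rSbar(i) == 0 && ~(abs(A(i,i)) > r(i)), tf = false; end
        end
        for j = Sbar                                         % condition (ii)
            if ~(abs(A(j,j)) > r(j)), tf = false; end
        end
        for i = S(:)'                                        % condition (iii)
            for j = Sbar
                if A(i,j) ~= 0 && ~((abs(A(i,i)) - rS(i))*abs(A(j,j)) > rSbar(i)*r(j))
                    tf = false;
                end
            end
        end
        end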

    Definition 3. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ $(n\ge 2)$ be a matrix with the form $A=B^++C$. We say that $A$ is a strong SDD1-B matrix if $B^+$ is a strong SDD1 matrix with positive diagonal entries, where

    $$B^+=(b_{ij})=\begin{pmatrix}a_{11}-r_1^+ & \cdots & a_{1n}-r_1^+\\ \vdots & & \vdots\\ a_{n1}-r_n^+ & \cdots & a_{nn}-r_n^+\end{pmatrix},\qquad C=\begin{pmatrix}r_1^+ & \cdots & r_1^+\\ \vdots & & \vdots\\ r_n^+ & \cdots & r_n^+\end{pmatrix},\tag{2.1}$$

    and $r_i^+=\max\{0,\ a_{ij}\mid j\neq i\}$.
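    The splitting (2.1) is easy to form: $r_i^+$ is the largest nonnegative off-diagonal entry of row $i$, $C$ repeats $r_i^+$ across row $i$, and $B^+=A-C$. A short MATLAB sketch (ours); applied to the matrix of Example 4 below, it yields the $B^+$ and $C$ shown there.

        n = size(A,1);
        Aoff = A;  Aoff(1:n+1:end) = -inf;           % mask the diagonal
        riplus = max(max(Aoff, [], 2), 0);           % r_i^+ = max{0, a_ij, j ~= i}
        C = repmat(riplus, 1, n);                    % every entry of row i equals r_i^+
        Bplus = A - C;                               % B+ = A - C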

    There is an equivalent definition of B-matrices in [1,15], which is closely related to strictly diagonally dominant matrices.

    Definition 4. [15] A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is called a B-matrix if, for all $i\in N$,

    $$\sum_{k=1}^n a_{ik}>0\qquad\text{and}\qquad\frac{1}{n}\sum_{k=1}^n a_{ik}>a_{ij},\quad j\neq i.$$

    Definition 5. [1] Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ and $A=B^++C$, where $B^+$ is defined as in (2.1). We say that $A$ is a B-matrix if $B^+$ is an SDD matrix.

    Next, we will introduce some useful lemmas.

    Lemma 1. [14] Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be a strong SDD1 matrix. Then

    $$\|A^{-1}\|_\infty\le\max\left\{\max_{i\in S:\,r_i^{\bar S}(A)=0}\frac{1}{|a_{ii}|-r_i(A)},\ \max_{j\in\bar S}\frac{1}{|a_{jj}|-r_j(A)},\ \max_{\substack{i\in S,\,j\in\bar S:\ a_{ij}\neq 0}}\frac{|a_{jj}|+r_i^{\bar S}(A)}{\left(|a_{ii}|-r_i^S(A)\right)|a_{jj}|-r_i^{\bar S}(A)\,r_j(A)}\right\}.$$

    Lemma 2. [14] If $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is a strong SDD1 matrix, then $A$ is a nonsingular H-matrix.

    Lemma 3. [15] Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be a nonsingular M-matrix, and let $P$ be a nonnegative matrix of rank 1. Then $A+P$ is a P-matrix.

    Lemma 4. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ $(n\ge 2)$ be a strong SDD1-B matrix. Then $A$ is a P-matrix.

    Proof. By Definition 3, the matrix $C$ in (2.1) is a nonnegative matrix of rank 1. Moreover, $B^+$ is a Z-matrix (its off-diagonal entries $a_{ij}-r_i^+$ are nonpositive) and, by Lemma 2, it is a nonsingular H-matrix with positive diagonal entries, hence a nonsingular M-matrix. We conclude from Lemma 3 that $A=B^++C$ is a P-matrix.

    Remark 1. From Definitions 1–5 and Lemmas 2 and 4, we have the following relationships:

    SDD matrices ⊂ strong SDD1 matrices ⊂ H-matrices,
    B-matrices ⊂ strong SDD1-B matrices ⊂ P-matrices.

    Lemma 5. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be a strong SDD1 matrix. Then $\tilde A=(\tilde a_{ij})=I-D+DA$ is also a strong SDD1 matrix, where $D=\operatorname{diag}(d_i)$ with $0\le d_i\le 1$, $i\in N$.

    Proof. Since $\tilde A=I-D+DA=(\tilde a_{ij})$, we have

    $$\tilde a_{ij}=\begin{cases}1-d_i+d_ia_{ii}, & i=j,\\ d_ia_{ij}, & i\neq j.\end{cases}$$

    Since $A$ is a strong SDD1 matrix and $D=\operatorname{diag}(d_i)$ with $0\le d_i\le 1$ $(i\in N)$, we verify the three conditions of Definition 2 for $\tilde A$. Note that $r_i(\tilde A)=d_ir_i(A)$, $r_i^S(\tilde A)=d_ir_i^S(A)$ and $r_i^{\bar S}(\tilde A)=d_ir_i^{\bar S}(A)$ for all $i\in N$.

    1) For $i\in S$ with $r_i^{\bar S}(\tilde A)=0$: if $d_i=0$, then

    $$|\tilde a_{ii}|=1>0=d_ir_i(A)=r_i(\tilde A).$$

    If $d_i\neq 0$, then $r_i^{\bar S}(A)=0$ and, by Definition 2(i),

    $$|\tilde a_{ii}|=|1-d_i+d_ia_{ii}|\ge d_i|a_{ii}|>d_ir_i(A)=r_i(\tilde A).$$

    2) For $j\in\bar S$: if $d_j=0$, then

    $$|\tilde a_{jj}|=1>0=d_jr_j(A)=r_j(\tilde A).$$

    If $d_j\neq 0$, then by Definition 2(ii),

    $$|\tilde a_{jj}|=|1-d_j+d_ja_{jj}|\ge d_j|a_{jj}|>d_jr_j(A)=r_j(\tilde A).$$

    3) For $i\in S$ and $j\in\bar S$ with $\tilde a_{ij}\neq 0$, i.e., $d_i\neq 0$ and $a_{ij}\neq 0$ (note that $|a_{ii}|-r_i^S(A)>0$ by Definition 2(iii)): if $d_j=0$, then

    $$\left(|\tilde a_{ii}|-r_i^S(\tilde A)\right)|\tilde a_{jj}|=|1-d_i+d_ia_{ii}|-d_ir_i^S(A)\ge d_i\left(|a_{ii}|-r_i^S(A)\right)>0=r_i^{\bar S}(\tilde A)\,r_j(\tilde A).$$

    If $d_j\neq 0$, then by Definition 2(iii),

    $$\left(|\tilde a_{ii}|-r_i^S(\tilde A)\right)|\tilde a_{jj}|\ge d_i\left(|a_{ii}|-r_i^S(A)\right)\cdot d_j|a_{jj}|>d_id_j\,r_i^{\bar S}(A)\,r_j(A)=r_i^{\bar S}(\tilde A)\,r_j(\tilde A).$$

    Therefore, $\tilde A$ is a strong SDD1 matrix, and the conclusion follows.

    Lemma 6. [16] Let $\gamma>0$ and $\eta\ge 0$. Then, for any $x\in[0,1]$,

    $$\frac{1}{1-x+x\gamma}\le\frac{1}{\min\{\gamma,1\}}\qquad\text{and}\qquad\frac{\eta x}{1-x+x\gamma}\le\frac{\eta}{\gamma}.$$

    Lemma 7. [17] If $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is an SDD matrix, then

    $$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\le\max\left\{\frac{1}{\min_{i\in N}\{|a_{ii}|-r_i(A)\}},\ 1\right\}.$$

    Lemma 8. [1] Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be a B-matrix, and let $B^+$ be the matrix in (2.1). Then

    $$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\le\frac{n-1}{\min\{\beta,1\}},$$

    where $\beta=\min_{i\in N}\{\beta_i\}$ and $\beta_i=b_{ii}-\sum_{j\neq i}|b_{ij}|$.

    In this section, a new error bound for LCP(A,q) is provided when A is a strong SDD1 matrix.

    Theorem 1. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\ge 2$, be a strong SDD1 matrix with positive diagonal entries, and let $\tilde A=(\tilde a_{ij})=I-D+DA$, where $D=\operatorname{diag}(d_i)$ with $0\le d_i\le 1$. Then

    $$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\le\max\left\{\max_{i\in S:\,r_i^{\bar S}(A)=0}\frac{1}{\min\{a_{ii}-r_i(A),1\}},\ \max_{j\in\bar S}\frac{1}{\min\{a_{jj}-r_j(A),1\}},\ \eta(A)\right\},\tag{3.1}$$

    where

    $$\eta(A)=\max_{\substack{i\in S,\,j\in\bar S:\ a_{ij}\neq 0}}\frac{a_{jj}\left(\dfrac{a_{ii}-r_i^S(A)}{\min\{a_{ii}-r_i^S(A),1\}}+\dfrac{r_i^{\bar S}(A)}{\min\{a_{jj},1\}}\right)}{\left(a_{ii}-r_i^S(A)\right)a_{jj}-r_i^{\bar S}(A)\,r_j(A)}.$$
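    The right-hand side of (3.1) depends only on the entries of $A$ and on the choice of $S$, so it is cheap to evaluate. A minimal MATLAB sketch (ours, not from the paper; the function name thm1_lcp_bound is our own):

        function bnd = thm1_lcp_bound(A, S)
        % Evaluate the right-hand side of (3.1) for a strong SDD1 matrix A
        % with positive diagonal entries and index set S.
        n = size(A,1);  Sbar = setdiff(1:n, S);
        r = sum(abs(A),2) - abs(diag(A));                    % r_i(A)
        rS = zeros(n,1);  rSbar = zeros(n,1);
        for i = 1:n
            rS(i)    = sum(abs(A(i, setdiff(S,    i))));     % r_i^S(A)
            rSbar(i) = sum(abs(A(i, setdiff(Sbar, i))));     % r_i^{S-bar}(A)
        end
        bnd = -inf;
        for i = S(:)'                                        % first term of (3.1)
            if rSbar(i) == 0, bnd = max(bnd, 1/min(A(i,i) - r(i), 1)); end
        end
        for j = Sbar                                         % second term of (3.1)
            bnd = max(bnd, 1/min(A(j,j) - r(j), 1));
        end
        for i = S(:)'                                        % eta(A)
            for j = Sbar
                if A(i,j) ~= 0
                    num = A(j,j)*((A(i,i)-rS(i))/min(A(i,i)-rS(i),1) + rSbar(i)/min(A(j,j),1));
                    den = (A(i,i)-rS(i))*A(j,j) - rSbar(i)*r(j);
                    bnd = max(bnd, num/den);
                end
            end
        end
        end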

    Proof. Since $\tilde A=(\tilde a_{ij})=I-D+DA$, Lemma 5 shows that $\tilde A$ is a strong SDD1 matrix with positive diagonal entries. By Lemma 1, it holds that

    $$\|\tilde A^{-1}\|_\infty\le\max\left\{\max_{i\in S:\,r_i^{\bar S}(\tilde A)=0}\frac{1}{|\tilde a_{ii}|-r_i(\tilde A)},\ \max_{j\in\bar S}\frac{1}{|\tilde a_{jj}|-r_j(\tilde A)},\ \max_{\substack{i\in S,\,j\in\bar S:\ \tilde a_{ij}\neq 0}}\frac{|\tilde a_{jj}|+r_i^{\bar S}(\tilde A)}{\left(|\tilde a_{ii}|-r_i^S(\tilde A)\right)|\tilde a_{jj}|-r_i^{\bar S}(\tilde A)\,r_j(\tilde A)}\right\}.\tag{3.2}$$

    Note that $r_i(\tilde A)=d_ir_i(A)$ and $r_j(\tilde A)=d_jr_j(A)$ for all $i\in S$, $j\in\bar S$. Next, we divide the proof into three cases.

    Case 1. For $i\in S$ with $r_i^{\bar S}(\tilde A)=0$, it follows that $d_i=0$ or $r_i^{\bar S}(A)=0$. If $d_i=0$ and $r_i^{\bar S}(A)=0$, then

    $$\frac{1}{|\tilde a_{ii}|-r_i(\tilde A)}=\frac{1}{1-d_i+d_ia_{ii}-d_ir_i(A)}=1\le\frac{1}{\min\{a_{ii}-r_i(A),1\}}.$$

    If $d_i\neq 0$ and $r_i^{\bar S}(A)=0$, then by Lemma 6,

    $$\frac{1}{\tilde a_{ii}-r_i(\tilde A)}=\frac{1}{1-d_i+d_i\left(a_{ii}-r_i(A)\right)}\le\frac{1}{\min\{a_{ii}-r_i(A),1\}}.$$

    If $d_i=0$ and $r_i^{\bar S}(A)\neq 0$, then there exists $j\in\bar S$ such that $a_{ij}\neq 0$. Thus, dividing numerator and denominator by $\left(1-d_i+d_ia_{ii}-d_ir_i^S(A)\right)\left(1-d_j+d_ja_{jj}\right)$ and applying Lemma 6, we get

    $$\begin{aligned}
    \frac{1}{\tilde a_{ii}-r_i(\tilde A)}
    &=\frac{1}{1-d_i+d_ia_{ii}-d_ir_i(A)}=1
    =\frac{1-d_j+d_ja_{jj}+d_ir_i^{\bar S}(A)}{\left(1-d_i+d_ia_{ii}-d_ir_i^S(A)\right)\left(1-d_j+d_ja_{jj}\right)-d_ir_i^{\bar S}(A)\,d_jr_j(A)}\\[1mm]
    &\le\frac{\dfrac{1}{\min\{a_{ii}-r_i^S(A),1\}}+\dfrac{r_i^{\bar S}(A)}{\left(a_{ii}-r_i^S(A)\right)\min\{a_{jj},1\}}}{1-\dfrac{r_i^{\bar S}(A)\,r_j(A)}{\left(a_{ii}-r_i^S(A)\right)a_{jj}}}
    =\frac{a_{jj}\left(\dfrac{a_{ii}-r_i^S(A)}{\min\{a_{ii}-r_i^S(A),1\}}+\dfrac{r_i^{\bar S}(A)}{\min\{a_{jj},1\}}\right)}{\left(a_{ii}-r_i^S(A)\right)a_{jj}-r_i^{\bar S}(A)\,r_j(A)}.
    \end{aligned}$$

    So, it holds that

    $$\max_{i\in S:\,r_i^{\bar S}(\tilde A)=0}\frac{1}{\tilde a_{ii}-r_i(\tilde A)}\le\max\left\{\max_{i\in S:\,r_i^{\bar S}(A)=0}\frac{1}{\min\{a_{ii}-r_i(A),1\}},\ \eta(A)\right\}.$$

    Case 2. For $j\in\bar S$, if $d_j=0$, then

    $$\frac{1}{\tilde a_{jj}-r_j(\tilde A)}=\frac{1}{1-d_j+d_ja_{jj}-d_jr_j(A)}=1\le\frac{1}{\min\{a_{jj}-r_j(A),1\}}.$$

    If $d_j\neq 0$, then by Lemma 6,

    $$\frac{1}{\tilde a_{jj}-r_j(\tilde A)}=\frac{1}{1-d_j+d_j\left(a_{jj}-r_j(A)\right)}\le\frac{1}{\min\{a_{jj}-r_j(A),1\}}.$$

    Case 3. For $i\in S$ and $j\in\bar S$ such that $\tilde a_{ij}\neq 0$, it holds that $d_i\neq 0$ and $a_{ij}\neq 0$. Thus, dividing numerator and denominator by $\left(1-d_i+d_ia_{ii}-d_ir_i^S(A)\right)\left(1-d_j+d_ja_{jj}\right)$ and applying Lemma 6, it holds that

    $$\begin{aligned}
    \frac{\tilde a_{jj}+r_i^{\bar S}(\tilde A)}{\left(\tilde a_{ii}-r_i^S(\tilde A)\right)\tilde a_{jj}-r_i^{\bar S}(\tilde A)\,r_j(\tilde A)}
    &=\frac{1-d_j+d_ja_{jj}+d_ir_i^{\bar S}(A)}{\left(1-d_i+d_ia_{ii}-d_ir_i^S(A)\right)\left(1-d_j+d_ja_{jj}\right)-d_ir_i^{\bar S}(A)\,d_jr_j(A)}\\[1mm]
    &\le\frac{\dfrac{1}{\min\{a_{ii}-r_i^S(A),1\}}+\dfrac{r_i^{\bar S}(A)}{\left(a_{ii}-r_i^S(A)\right)\min\{a_{jj},1\}}}{1-\dfrac{r_i^{\bar S}(A)\,r_j(A)}{\left(a_{ii}-r_i^S(A)\right)a_{jj}}}\\[1mm]
    &=\frac{a_{jj}\left(\dfrac{a_{ii}-r_i^S(A)}{\min\{a_{ii}-r_i^S(A),1\}}+\dfrac{r_i^{\bar S}(A)}{\min\{a_{jj},1\}}\right)}{\left(a_{ii}-r_i^S(A)\right)a_{jj}-r_i^{\bar S}(A)\,r_j(A)}\le\eta(A).
    \end{aligned}$$

    From Cases 1–3, the conclusion follows.

    Next, we use the following examples to illustrate the advantages of our results.

    Example 1. Consider the matrix:

    A=(403.557100.16).

    Then, A is not only an SDD matrix but also a strong SDD1 matrix for S={1,2}. From Lemma 7, we have

    $$\max_{d\in[0,1]^3}\|(I-D+DA)^{-1}\|_\infty\le 2.$$

    By Theorem 1, we get

    $$\max_{d\in[0,1]^3}\|(I-D+DA)^{-1}\|_\infty\le 1.28.$$

    Example 2. Consider the tri-diagonal matrix $A\in\mathbb{R}^{n\times n}$ arising from the finite difference method for free boundary problems [4], where

    $$A=\begin{pmatrix}
    b+\alpha\sin\frac{1}{n} & c & & & \\
    a & b+\alpha\sin\frac{2}{n} & c & & \\
     & \ddots & \ddots & \ddots & \\
     & & a & b+\alpha\sin\frac{n-1}{n} & c\\
     & & & a & b+\alpha\sin 1
    \end{pmatrix}.$$

    Take $n=500$, $a=0.5$, $b=3$, $c=2.3$ and $\alpha=0$. Then $A$ is not only an SDD matrix but also a strong SDD1 matrix for $S=\{2,\dots,499\}$. From Lemma 7, we get

    $$\max_{d\in[0,1]^{500}}\|(I-D+DA)^{-1}\|_\infty\le 5.$$

    By Theorem 1, we have

    $$\max_{d\in[0,1]^{500}}\|(I-D+DA)^{-1}\|_\infty\le 2.2677.$$
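    For reference, the matrix of Example 2 and the Lemma 7 bound can be generated as follows (a MATLAB sketch under the stated parameters; the code is ours):

        n = 500;  a = 0.5;  b = 3;  c = 2.3;  alpha = 0;
        A = diag(b + alpha*sin((1:n)/n)) + diag(a*ones(n-1,1), -1) + diag(c*ones(n-1,1), 1);
        r = sum(abs(A),2) - abs(diag(A));          % r_i(A)
        lemma7 = max(1/min(diag(A) - r), 1)        % Lemma 7 bound; equals 5 for these parameters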

    Example 3. Consider the matrix:

    A=(41013501000205023102007310010104).

    It is easy to verify that $A$ is a strong SDD1 matrix for $S=\{1,2,4\}$, but it is neither an SDD matrix nor an S-SDD matrix for any nonempty proper subset $S$ of $N$. By Theorem 1, we have

    $$\max_{d\in[0,1]^5}\|(I-D+DA)^{-1}\|_\infty\le 16.$$

    In this section, a new error bound of LCP(A,q) is presented when A is a strong SDD1-B matrix.

    Theorem 2. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be a strong SDD1-B matrix with the form $A=B^++C$, and let $B^+=(b_{ij})$ be the matrix in (2.1). Denote $A_D=I-D+DA$, where $D=\operatorname{diag}(d_i)$ with $0\le d_i\le 1$. Then

    $$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\le\zeta(B^+):=(n-1)\max\left\{\max_{i\in S:\,r_i^{\bar S}(B^+)=0}\frac{1}{\min\{b_{ii}-r_i(B^+),1\}},\ \max_{j\in\bar S}\frac{1}{\min\{b_{jj}-r_j(B^+),1\}},\ \eta(B^+)\right\},\tag{4.1}$$

    where

    $$\eta(B^+):=\max_{\substack{i\in S,\,j\in\bar S:\ b_{ij}\neq 0}}\frac{b_{jj}\left(\dfrac{b_{ii}-r_i^S(B^+)}{\min\{b_{ii}-r_i^S(B^+),1\}}+\dfrac{r_i^{\bar S}(B^+)}{\min\{b_{jj},1\}}\right)}{\left(b_{ii}-r_i^S(B^+)\right)b_{jj}-r_i^{\bar S}(B^+)\,r_j(B^+)}.$$
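    Under the same notation, $\zeta(B^+)$ in (4.1) can be evaluated from $A$ and $S$ alone. A minimal MATLAB sketch (ours; it reuses the thm1_lcp_bound sketch given after Theorem 1, since the inner maximum in (4.1) is the right-hand side of (3.1) applied to $B^+$):

        function zeta = thm2_lcp_bound(A, S)
        % Evaluate zeta(B+) in (4.1) for a strong SDD1-B matrix A and index set S.
        n = size(A,1);
        Aoff = A;  Aoff(1:n+1:end) = -inf;           % mask the diagonal
        riplus = max(max(Aoff, [], 2), 0);           % r_i^+ as in (2.1)
        Bp = A - repmat(riplus, 1, n);               % B+ = A - C
        zeta = (n-1) * thm1_lcp_bound(Bp, S);        % inner max of (4.1) = (3.1) applied to B+
        end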

    Proof. For $D=\operatorname{diag}(d_i)$ with $0\le d_i\le 1$ $(i\in N)$, we have

    $$A_D=I-D+DA=(I-D+DB^+)+DC=B_D^++C_D,$$

    where $B_D^+=(\tilde b_{ij})=I-D+DB^+$ and $C_D=DC$. Notice that $B^+$ is a strong SDD1 matrix with positive diagonal entries; then by Lemma 5, it follows that $B_D^+=I-D+DB^+$ is also a strong SDD1 matrix with positive diagonal entries. Similarly to the proof of Theorem 2.2 in [1], we obtain

    $$\|A_D^{-1}\|_\infty\le\left\|\left[I+(B_D^+)^{-1}C_D\right]^{-1}\right\|_\infty\left\|(B_D^+)^{-1}\right\|_\infty\le(n-1)\left\|(B_D^+)^{-1}\right\|_\infty.$$

    We now bound $\|(B_D^+)^{-1}\|_\infty$. By Lemma 1, it holds that

    $$\left\|(B_D^+)^{-1}\right\|_\infty\le\max\left\{\max_{i\in S:\,r_i^{\bar S}(B_D^+)=0}\frac{1}{|\tilde b_{ii}|-r_i(B_D^+)},\ \max_{j\in\bar S}\frac{1}{|\tilde b_{jj}|-r_j(B_D^+)},\ \mu_{ij}(B_D^+)\right\},$$

    where

    $$\mu_{ij}(B_D^+):=\max_{\substack{i\in S,\,j\in\bar S:\ \tilde b_{ij}\neq 0}}\frac{|\tilde b_{jj}|+r_i^{\bar S}(B_D^+)}{\left(|\tilde b_{ii}|-r_i^S(B_D^+)\right)|\tilde b_{jj}|-r_i^{\bar S}(B_D^+)\,r_j(B_D^+)}.$$

    Next, we divide into three cases to prove the conclusion.

    Case 1. For $i\in S$ with $r_i^{\bar S}(B_D^+)=0$, we have $r_i^{\bar S}(B^+)=0$ or $d_i=0$. If $d_i=0$ and $r_i^{\bar S}(B^+)\neq 0$, then there exists $j\in\bar S$ such that $b_{ij}\neq 0$. Hence, by Lemma 6 and the same computation as in Case 1 of Theorem 1, we get

    $$\frac{1}{|\tilde b_{ii}|-r_i(B_D^+)}=\frac{1}{1-d_i+d_ib_{ii}-d_ir_i(B^+)}=1
    =\frac{1-d_j+d_jb_{jj}+d_ir_i^{\bar S}(B^+)}{\left(1-d_i+d_ib_{ii}-d_ir_i^S(B^+)\right)\left(1-d_j+d_jb_{jj}\right)-d_ir_i^{\bar S}(B^+)\,d_jr_j(B^+)}\le\eta(B^+).$$

    If $r_i^{\bar S}(B^+)=0$, then by Lemma 6,

    $$\frac{1}{|\tilde b_{ii}|-r_i(B_D^+)}=\frac{1}{1-d_i+d_i\left(b_{ii}-r_i(B^+)\right)}\le\max_{i\in S:\,r_i^{\bar S}(B^+)=0}\frac{1}{\min\{b_{ii}-r_i(B^+),1\}}.$$

    Case 2. For $j\in\bar S$, by Lemma 6 it holds that

    $$\frac{1}{|\tilde b_{jj}|-r_j(B_D^+)}=\frac{1}{1-d_j+d_j\left(b_{jj}-r_j(B^+)\right)}\le\max_{j\in\bar S}\frac{1}{\min\{b_{jj}-r_j(B^+),1\}}.$$

    Case 3. For $i\in S$ and $j\in\bar S$ such that $\tilde b_{ij}\neq 0$, we have $d_i\neq 0$ and $b_{ij}\neq 0$. Thus, dividing numerator and denominator by $\left(1-d_i+d_ib_{ii}-d_ir_i^S(B^+)\right)\left(1-d_j+d_jb_{jj}\right)$ and applying Lemma 6, as in Case 3 of Theorem 1,

    $$\begin{aligned}
    \frac{\tilde b_{jj}+r_i^{\bar S}(B_D^+)}{\left(\tilde b_{ii}-r_i^S(B_D^+)\right)\tilde b_{jj}-r_i^{\bar S}(B_D^+)\,r_j(B_D^+)}
    &=\frac{1-d_j+d_jb_{jj}+d_ir_i^{\bar S}(B^+)}{\left(1-d_i+d_ib_{ii}-d_ir_i^S(B^+)\right)\left(1-d_j+d_jb_{jj}\right)-d_ir_i^{\bar S}(B^+)\,d_jr_j(B^+)}\\[1mm]
    &\le\frac{b_{jj}\left(\dfrac{b_{ii}-r_i^S(B^+)}{\min\{b_{ii}-r_i^S(B^+),1\}}+\dfrac{r_i^{\bar S}(B^+)}{\min\{b_{jj},1\}}\right)}{\left(b_{ii}-r_i^S(B^+)\right)b_{jj}-r_i^{\bar S}(B^+)\,r_j(B^+)}\le\eta(B^+).
    \end{aligned}$$

    Consequently, combining Cases 1–3 with $\|A_D^{-1}\|_\infty\le(n-1)\|(B_D^+)^{-1}\|_\infty$, the conclusion follows.

    The bound in Theorem 2 also holds for B-matrices, because B-matrices form a subclass of strong SDD1-B matrices. Next, we show that the bound in Theorem 2 is better than that in Lemma 8 in some cases.

    Theorem 3. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be a B-matrix with $A=B^++C$, and let $B^+=(b_{ij})$ be the matrix in (2.1). If $0<b_{ii}\le 1$ $(i\in N)$, then

    $$\zeta(B^+)\le\frac{n-1}{\min\{\beta,1\}},\tag{4.2}$$

    where $\zeta(B^+)$ and $\beta$ are defined as in Theorem 2 and Lemma 8, respectively.

    Proof. Since $0<b_{ii}\le 1$ $(i\in N)$ and $\beta_i=b_{ii}-r_i(B^+)$, we have

    $$\max_{i\in S:\,r_i^{\bar S}(B^+)=0}\frac{1}{\min\{b_{ii}-r_i(B^+),1\}}=\max_{i\in S:\,r_i^{\bar S}(B^+)=0}\frac{1}{b_{ii}-r_i(B^+)}\le\frac{1}{\min\{\beta,1\}}$$

    and

    $$\max_{j\in\bar S}\frac{1}{\min\{b_{jj}-r_j(B^+),1\}}=\max_{j\in\bar S}\frac{1}{b_{jj}-r_j(B^+)}\le\frac{1}{\min\{\beta,1\}}.$$

    For $i\in S$ and $j\in\bar S$ such that $b_{ij}\neq 0$, if $b_{ii}-r_i(B^+)\le b_{jj}-r_j(B^+)$, then

    $$\eta(B^+)\le\frac{1}{b_{ii}-r_i(B^+)}\le\frac{1}{\min\{\beta,1\}},$$

    and if $b_{jj}-r_j(B^+)\le b_{ii}-r_i(B^+)$, then

    $$\eta(B^+)\le\frac{1}{b_{jj}-r_j(B^+)}\le\frac{1}{\min\{\beta,1\}}.$$

    Therefore, every term of the inner maximum in (4.1) is bounded by $\frac{1}{\min\{\beta,1\}}$, and multiplying by $n-1$ gives (4.2). The conclusion follows.

    The following numerical examples show the validity of the error bound for strong SDD1-B matrices.

    Example 4. Consider the matrix:

    $$A=\begin{pmatrix}0.7 & -0.2 & -0.2 & -0.2\\ 0 & 0.5 & 0.1 & 0.1\\ 0 & 0.1 & 0.4 & 0.1\\ 0 & 0.2 & 0.2 & 0.6\end{pmatrix}.$$

    We can write it as A=B++C, where

    $$B^+=\begin{pmatrix}0.7 & -0.2 & -0.2 & -0.2\\ -0.1 & 0.4 & 0 & 0\\ -0.1 & 0 & 0.3 & 0\\ -0.2 & 0 & 0 & 0.4\end{pmatrix},\qquad C=\begin{pmatrix}0 & 0 & 0 & 0\\ 0.1 & 0.1 & 0.1 & 0.1\\ 0.1 & 0.1 & 0.1 & 0.1\\ 0.2 & 0.2 & 0.2 & 0.2\end{pmatrix}.$$

    It is easy to check that A is a B-matrix, consequently, a strong SDD1-B matrix. By Lemma 8, we have

    $$\max_{d\in[0,1]^4}\|(I-D+DA)^{-1}\|_\infty\le 30.$$

    When S={1}, by Theorem 2, we have

    $$\max_{d\in[0,1]^4}\|(I-D+DA)^{-1}\|_\infty\le 25.$$

    Figure 1 illustrates this comparison: for 1000 random diagonal matrices $D$ with entries drawn uniformly from $[0,1]$ (generated in MATLAB by D = diag(rand(4,1))), the blue stars show $\|(I-D+DA)^{-1}\|_\infty$, and all of these values lie below the new bound 25, which is sharper than 30.

    Figure 1. $\|(I-D+DA)^{-1}\|_\infty$ for 1000 random matrices $D$ generated by diag(rand(4,1)).
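    A sketch of the experiment behind Figure 1 (our reconstruction, assuming the matrix $A$ of Example 4 and 1000 uniform random diagonal matrices $D$):

        A = [0.7 -0.2 -0.2 -0.2; 0 0.5 0.1 0.1; 0 0.1 0.4 0.1; 0 0.2 0.2 0.6];
        vals = zeros(1000,1);
        for k = 1:1000
            D = diag(rand(4,1));                             % random diagonal entries in [0,1]
            vals(k) = norm(inv(eye(4) - D + D*A), inf);      % ||(I - D + DA)^{-1}||_inf
        end
        fprintf('max over samples: %.4f (below 25 and 30)\n', max(vals));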

    Example 5. Consider the matrix:

    $$A=\begin{pmatrix}6 & -4 & -1 & 0\\ -2 & 4 & -0.5 & -2\\ -2 & 0 & 3 & 0\\ -2 & 0 & 0 & 6\end{pmatrix}.$$

    Matrix A can be split into A=B++C, where

    $$B^+=\begin{pmatrix}6 & -4 & -1 & 0\\ -2 & 4 & -0.5 & -2\\ -2 & 0 & 3 & 0\\ -2 & 0 & 0 & 6\end{pmatrix},\qquad C=\begin{pmatrix}0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{pmatrix}.$$

    It is easy to verify that A is neither an SDD matrix nor a B-matrix. On the other hand, A is a strong SDD1-B matrix for S={1,2,3}. By Theorem 2, we get

    $$\max_{d\in[0,1]^4}\|(I-D+DA)^{-1}\|_\infty\le 4.2.$$
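    Since $C=0$ here, $A$ coincides with $B^+$ and is itself a strong SDD1 matrix with positive diagonal entries, so the quantity in (3.1) can also be evaluated for $A$ directly; a quick numerical check with the thm1_lcp_bound sketch given after Theorem 1 (our code, not the paper's):

        A = [6 -4 -1 0; -2 4 -0.5 -2; -2 0 3 0; -2 0 0 6];
        thm1_lcp_bound(A, [1 2 3])      % evaluates the right-hand side of (3.1); returns 4.2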

    Based on the properties of strong SDD1 matrices, we introduced a new subclass of P-matrices called strong SDD1-B matrices. Moreover, we applied an upper bound for the infinity norm of the inverse of strong SDD1 matrices to estimate error bounds for linear complementarity problems of strong SDD1 matrices and strong SDD1-B matrices, which is useful for free boundary problems. Numerical examples are given to show the sharpness of the proposed bounds. In the future, based on the proposed infinity norm bound, we will explore computable global error bounds of extended vertical linear complementarity problems for strong SDD1 matrices and strong SDD1-B matrices.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research is supported by Guizhou Provincial Science and Technology Projects (20191161), the Natural Science Research Project of Department of Education of Guizhou Province (QJJ2022015, QJJ2023012, QJJ2023062), the Talent Growth Project of Education Department of Guizhou Province (2018143), and the Research Foundation of Guizhou Minzu University (2019YB08).

    The authors declare that they have no competing interests.



    [1] M. García-Esnaola, J. M. Peña, Error bounds for linear complementarity problems for B-matrices, Appl. Math. Lett., 22 (2009), 1071–1075. https://doi.org/10.1016/j.aml.2008.09.001 doi: 10.1016/j.aml.2008.09.001
    [2] K. G. Murty, Linear complementarity, linear and nonlinear programming, Berlin: Heldermann, 1988.
    [3] R. W. Cottle, J. S. Pang, R. E. Stone, The linear complementarity problem, San Diego: Academic Press, 1992.
    [4] X. J. Chen, S. H. Xiang, Computation of error bounds for P-matrix linear complementarity problems, Math. Program., 106 (2006), 513–525. https://doi.org/10.1007/s10107-005-0645-9 doi: 10.1007/s10107-005-0645-9
    [5] L. Cvetković, V. Kostić, S. Rauški, A new subclass of H-matrices, Appl. Math. Comput., 208 (2009), 206–210. https://doi.org/10.1016/j.amc.2008.11.037 doi: 10.1016/j.amc.2008.11.037
    [6] L. Y. Kolotilina, Bounds for the inverses of generalized Nekrasov matrices, J. Math. Sci., 207 (2015), 786–794. https://doi.org/10.1007/s10958-015-2401-x doi: 10.1007/s10958-015-2401-x
    [7] T. Szulc, L. Cvetković, M. Nedović, Scaling technique for partition-Nekrasov matrices, Appl. Math. Comput., 271 (2015), 201–208. https://doi.org/10.1016/j.amc.2015.08.136 doi: 10.1016/j.amc.2015.08.136
    [8] L. Y. Kolotilina, Some bounds for inverses involving matrix sparsity pattern, J. Math. Sci., 249 (2020), 242–255. https://doi.org/10.1007/s10958-020-04938-3 doi: 10.1007/s10958-020-04938-3
    [9] J. X. Zhao, Q. L. Liu, C. Q. Li, Y. T. Li, Dashnic-Zusmanovich type matrices: A new subclass of nonsingular H-matrices, Linear Algebra Appl., 552 (2018), 277–287. https://doi.org/10.1016/j.laa.2018.04.028 doi: 10.1016/j.laa.2018.04.028
    [10] M. García-Esnaola, J. M. Peña, BRπ-Matrices and error bounds for linear complementarity problems, Calcolo, 54 (2017), 813–822. https://doi.org/10.1007/s10092-016-0209-9 doi: 10.1007/s10092-016-0209-9
    [11] C. Q. Li, P. F. Dai, Y. T. Li, New error bounds for linear complementarity problems of Nekrasov matrices and B-Nekrasov matrices, Numer. Algor., 74 (2017), 997–1009. https://doi.org/10.1007/s11075-016-0181-0 doi: 10.1007/s11075-016-0181-0
    [12] X. Song, L. Gao, CKV-Type B-matrices and error bounds for linear complementarity problems, AIMS Mathematics, 6 (2021), 10846–10860. https://doi.org/10.3934/math.2021630 doi: 10.3934/math.2021630
    [13] A. Berman, R. J. Plemmons, Nonnegative matrices in the mathematical sciences, New York: Academic Press, 1979.
    [14] Y. H. Wang, X. N. Song, L. Gao, An infinity norm bound for the inverse of strong SDD1 matrices with applications, Japan J. Indust. Appl. Math., 40 (2023), 1287–1304. https://doi.org/10.1007/s13160-023-00576-9 doi: 10.1007/s13160-023-00576-9
    [15] J. M. Peña, A class of P-matrix with applications to localization of the eigenvalues of a real matrix, SIAM. J. Matrix Anal. A., 22 (2001), 1027–1037. https://doi.org/10.1137/S0895479800370342 doi: 10.1137/S0895479800370342
    [16] L. Gao, An alternative error bound for linear complementarity problems involving BS-matrices, J. Inequal. Appl., 2018 (2018), 28. https://doi.org/10.1186/s13660-018-1618-x doi: 10.1186/s13660-018-1618-x
    [17] M. García-Esnaola, J. M. Peña, A comparison of error bounds for linear complementarity problems of H-matrices, Linear Algebra Appl., 433 (2010), 956–964. https://doi.org/10.1016/j.laa.2010.04.024 doi: 10.1016/j.laa.2010.04.024
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
