
S-SDDS-B matrices are a subclass of P-matrices that contains B-matrices. A new error bound for the linear complementarity problem of S-SDDS-B matrices is presented, which improves the corresponding result in [1].
Citation: Lanlan Liu, Pan Han, Feng Wang. New error bound for linear complementarity problem of S-SDDS-B matrices[J]. AIMS Mathematics, 2022, 7(2): 3239-3249. doi: 10.3934/math.2022179
Many fundamental problems in optimization and mathematical programming can be described as a linear complementarity problem (LCP), such as quadratic programming, the nonlinear obstacle problem, invariant capital stock, the Nash equilibrium point of a bimatrix game, optimal stopping, and the free boundary problem for journal bearings; see [1,2,3]. The error bound on the distance between an arbitrary point in $\mathbb{R}^{n}$ and the solution set of the LCP plays an important role in the convergence analysis of algorithms; for details, see [4,5,6,7].
It is well known that the LCP has a unique solution for any vector $q\in\mathbb{R}^{n}$ if and only if $M$ is a P-matrix. Some basic definitions of these special matrices are given below. A matrix $M=(m_{ij})\in\mathbb{R}^{n\times n}$ is called a Z-matrix if $m_{ij}\leq 0$ for any $i\neq j$; a P-matrix if all its principal minors are positive; an M-matrix if $M$ is a Z-matrix and $M^{-1}\geq 0$; an H-matrix if its comparison matrix $\langle M\rangle=(\tilde{m}_{ij})$ is an M-matrix, where the comparison matrix is given by
$$\tilde{m}_{ij}=\begin{cases}|m_{ij}|, & \text{if } i=j,\\ -|m_{ij}|, & \text{if } i\neq j.\end{cases}$$
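For readers who want to check these definitions numerically, the comparison matrix can be formed in one MATLAB line; the sketch below is illustrative only and assumes a square matrix M is already in the workspace.

MATLAB sketch:
% Illustrative: comparison matrix <M> (diagonal |m_ii|, off-diagonal -|m_ij|).
compM = -abs(M) + 2*diag(diag(abs(M)));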
The linear complementarity problem LCP$(M,q)$ is to find a vector $x\in\mathbb{R}^{n}$ such that
$$x\geq 0,\quad Mx+q\geq 0,\quad x^{T}(Mx+q)=0,$$
or to prove that no such vector $x$ exists, where $M=(m_{ij})\in\mathbb{R}^{n\times n}$ and $q\in\mathbb{R}^{n}$. One of the essential problems in the LCP$(M,q)$ is to estimate
$$\max_{d\in[0,1]^{n}}\|(I-D+DM)^{-1}\|_{\infty},$$
which is used to bound the error $\|x-x^{*}\|_{\infty}$, that is,
$$\|x-x^{*}\|_{\infty}\leq\max_{d\in[0,1]^{n}}\|(I-D+DM)^{-1}\|_{\infty}\,\|r(x)\|_{\infty},$$
where $x^{*}$ is the solution of the LCP$(M,q)$, $r(x)=\min\{x,\,Mx+q\}$, $D=\operatorname{diag}(d_{i})$ with $0\leq d_{i}\leq 1$, and the min operator in $r(x)$ takes the componentwise minimum of the two vectors. Since real H-matrices with positive diagonal entries form a subclass of P-matrices, the error bound becomes simpler for them (see formula (2.4) in [8]). Nowadays, many scholars are interested in research on special H-matrices, such as QN-matrices [9], S-SDD matrices [10], Nekrasov matrices [11] and Ostrowski matrices [12]. The corresponding error bounds for LCPs of QN-matrices were obtained by Dai et al. in [13] and by Gao et al. in [14]. A new error bound for the LCP of $\Sigma$-SDD matrices, which depends only on the entries of the involved matrices, was given in [15].
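To illustrate how such a bound is used, the MATLAB sketch below evaluates the natural residual $r(x)=\min\{x,\,Mx+q\}$ and the resulting error estimate; the matrix M, the vector q, the trial point x and the value bnd are hypothetical placeholders, not data from this paper.

MATLAB sketch:
% Illustrative: using an upper bound 'bnd' for max_d ||(I-D+DM)^{-1}||_inf.
M = [2 1; 0 2];  q = [-1; -1];   % a hypothetical B-matrix example
x = [0.45; 0.55];                % an approximate solution, x >= 0
r = min(x, M*x + q);             % natural residual of the LCP(M,q)
bnd = 1;                         % an available bound; for this M, (n-1)/min{beta,1} = 1 (cf. Lemma 2.3 below)
err_est = bnd * norm(r, inf);    % then ||x - x*||_inf <= err_est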
When the matrix $M$ is not an H-matrix, formula (2.4) in [8] cannot be used. However, error bounds for LCPs are also needed for some subclasses of P-matrices that are not H-matrices, for example SB-matrices [16], BS-matrices [17], weakly chained diagonally dominant B-matrices [18], DB-matrices [19] and MB-matrices [20]. B-matrices, as an important subclass of P-matrices, have been studied for years with fruitful results; see [18,21,22,23,24,25].
In this paper, we focus on the error bound for the LCP$(M,q)$ when $M$ is an S-SDDS-B matrix, which is a P-matrix. In Section 2, we introduce some notations, definitions and lemmas that will be used in the subsequent analysis. In Section 3, a new error bound is presented and compared with the bound in [1]. In Section 4, we give some numerical examples and graphs to show the efficiency of the method in our paper.
In this section, some notations, definitions and lemmas are recalled.
Given a matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\geq 2$, and a subset $S\subset\langle n\rangle:=\{1,2,\cdots,n\}$, we denote
$$r_{i}(A)=\sum_{j=1,\,j\neq i}^{n}|a_{ij}|,\quad i=1,\cdots,n,$$
$$r_{i}^{j}(A)=r_{i}(A)-|a_{ij}|,\quad \text{where } j\neq i,\ i=1,\cdots,n,$$
and also
$$r_{i}^{S}(A)=\begin{cases}\sum\limits_{j\in S,\,j\neq i}|a_{ij}|, & i\in S,\\[1ex] \sum\limits_{j\in S}|a_{ij}|, & i\notin S.\end{cases}$$
Here $S\cup\bar{S}=\langle n\rangle$, where $\bar{S}$ denotes the complement of $S$ in $\langle n\rangle$.
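These row quantities are straightforward to evaluate; the following MATLAB sketch is illustrative only and assumes a square matrix A and an index vector S are given.

MATLAB sketch:
% Illustrative: r_i(A) and r_i^S(A) for a given index set S.
r  = sum(abs(A),2) - abs(diag(A));        % r_i(A) = sum of |a_ij| over j ~= i
rS = @(i) sum(abs(A(i, setdiff(S, i))));  % r_i^S(A); setdiff also covers i not in S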
In accordance with [26], a matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\geq 2$, is said to be S-SDD if the following conditions are fulfilled:
$$|a_{ii}|>r_{i}^{S}(A)\quad \text{for all } i\in S,$$
and
$$\left[|a_{ii}|-r_{i}^{S}(A)\right]\left[|a_{jj}|-r_{j}^{\bar S}(A)\right]>r_{i}^{\bar S}(A)\,r_{j}^{S}(A)\quad \text{for all } i\in S \text{ and } j\in\bar S.$$
We extend the class of S-SDD matrices by introducing the following definitions.
Definition 2.1. [26] A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is said to be S-SDDS (S-SDD Sparse) if the following conditions are satisfied:
(i) $|a_{ii}|>r_{i}^{S}(A)$ for all $i\in S$,
(ii) $|a_{jj}|>r_{j}^{\bar S}(A)$ for all $j\in\bar S$,
(iii) for all $i\in S$ and all $j\in\bar S$ such that $a_{ij}\neq 0$ or $a_{ji}\neq 0$,
$$\left[|a_{ii}|-r_{i}^{S}(A)\right]\left[|a_{jj}|-r_{j}^{\bar S}(A)\right]>r_{i}^{\bar S}(A)\,r_{j}^{S}(A). \tag{2.1}$$
For $A=(a_{ij})\in\mathbb{R}^{n\times n}$, let $r_{i}^{+}:=\max\{0,\,a_{ij}\mid j\neq i\}$ for each $i=1,\cdots,n$; then we write $A=B^{+}+C$, where
$$B^{+}=(b_{ij})=\begin{pmatrix}a_{11}-r_{1}^{+} & \cdots & a_{1n}-r_{1}^{+}\\ \vdots & & \vdots\\ a_{n1}-r_{n}^{+} & \cdots & a_{nn}-r_{n}^{+}\end{pmatrix},\qquad C=\begin{pmatrix}r_{1}^{+} & \cdots & r_{1}^{+}\\ \vdots & & \vdots\\ r_{n}^{+} & \cdots & r_{n}^{+}\end{pmatrix}. \tag{2.2}$$
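A small MATLAB sketch of the splitting (2.2) may be helpful; it assumes a square matrix A is given and is not code from the paper.

MATLAB sketch:
% Illustrative: the splitting A = B^+ + C of (2.2).
n = size(A,1);
Aoff = A - diag(diag(A));            % zero out the diagonal
rplus = max(max(Aoff, [], 2), 0);    % r_i^+ = max{0, a_ij : j ~= i}
C = repmat(rplus, 1, n);             % row i of C is constant, equal to r_i^+
Bplus = A - C;                       % B^+ = A - C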
Definition 2.2. Suppose that $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\geq 2$, is a matrix with the splitting $A=B^{+}+C$ of (2.2). We say that $A$ is an S-SDDS-B matrix if and only if $B^{+}$ is an S-SDDS matrix with positive diagonal entries.
An analogous characterization, closely related to strictly diagonally dominant matrices, is given for B-matrices in [27].
Definition 2.3. [27] Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ and $A=B^{+}+C$, where $B^{+}$ is defined as in (2.2); then $A$ is a B-matrix if and only if $B^{+}$ is a strictly diagonally dominant matrix.
From Definition 2.3 we immediately see that S-SDDS-B matrices contain B-matrices, that is,
$$\{\text{B-matrices}\}\subseteq\{\text{S-SDDS-B matrices}\}.$$
Now, we will introduce some useful lemmas.
Lemma 2.1. [26] Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\geq 2$, be an S-SDDS matrix; then $A$ is a nonsingular H-matrix.
Lemma 2.2. [26] Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$, $n\geq 2$, be an S-SDDS matrix; then
$$\|A^{-1}\|_{\infty}\leq\max\left\{\max_{i\in S:\,r_{i}^{\bar S}(A)=0}\frac{1}{|a_{ii}|-r_{i}^{S}(A)},\ \max_{j\in\bar S:\,r_{j}^{S}(A)=0}\frac{1}{|a_{jj}|-r_{j}^{\bar S}(A)},\ \max_{i\in S,\,j\in\bar S:\,a_{ij}\neq 0}f_{ij}^{S}(A),\ \max_{i\in S,\,j\in\bar S:\,a_{ji}\neq 0}f_{ij}^{\bar S}(A)\right\}, \tag{2.3}$$
where
$$f_{ij}^{S}(A)=\frac{|a_{jj}|-r_{j}^{\bar S}(A)+r_{i}^{\bar S}(A)}{\left[|a_{ii}|-r_{i}^{S}(A)\right]\left[|a_{jj}|-r_{j}^{\bar S}(A)\right]-r_{i}^{\bar S}(A)\,r_{j}^{S}(A)},\qquad f_{ij}^{\bar S}(A)=\frac{|a_{ii}|-r_{i}^{S}(A)+r_{j}^{S}(A)}{\left[|a_{ii}|-r_{i}^{S}(A)\right]\left[|a_{jj}|-r_{j}^{\bar S}(A)\right]-r_{i}^{\bar S}(A)\,r_{j}^{S}(A)},\quad i\in S,\ j\in\bar S.$$
Lemma 2.3. [1] Let $A\in\mathbb{R}^{n\times n}$ be a B-matrix and let $B^{+}=(b_{ij})$ be the matrix in (2.2); then
$$\max_{d\in[0,1]^{n}}\|(I-D+DA)^{-1}\|_{\infty}\leq\frac{n-1}{\min\{\beta,1\}}, \tag{2.4}$$
where $\beta=\min_{i\in N}\{\beta_{i}\}$ with $N=\langle n\rangle$ and $\beta_{i}=b_{ii}-\sum_{j\neq i}|b_{ij}|$.
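As a sanity check, $\beta$ and the bound of Lemma 2.3 can be evaluated directly from $B^{+}$; the MATLAB lines below are an illustrative sketch assuming Bplus has already been formed as in (2.2).

MATLAB sketch:
% Illustrative: the bound (n-1)/min{beta,1} of Lemma 2.3.
n = size(Bplus,1);
beta = min(diag(Bplus) - (sum(abs(Bplus),2) - abs(diag(Bplus))));  % beta_i = b_ii - sum_{j~=i}|b_ij|
bound_B = (n-1) / min(beta, 1);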
Lemma 2.4. [21] Let $\gamma>0$ and $\eta>0$. Then, for any $x\in[0,1]$,
$$\frac{1}{1-x+\gamma x}\leq\frac{1}{\min\{\gamma,1\}},\qquad \frac{\eta x}{1-x+\gamma x}\leq\frac{\eta}{\gamma}.$$
Lemma 2.5. [27] Let $A\in\mathbb{R}^{n\times n}$ be a nonsingular M-matrix and let $P$ be a nonnegative matrix of rank 1; then $A+P$ is a P-matrix.
In this section, a new error bound for the LCP$(M,q)$ is presented when $M$ is an S-SDDS-B matrix. First, we prove that an S-SDDS-B matrix is a P-matrix.
Lemma 3.1. Let $A\in\mathbb{R}^{n\times n}$ ($n\geq 2$) be an S-SDDS-B matrix; then $A$ is a P-matrix.
Proof. By Definition 2.2, $C$ in (2.2) is a nonnegative matrix of rank 1 and $B^{+}$ is an S-SDDS matrix with positive diagonal entries. Since an S-SDDS matrix is a nonsingular H-matrix by Lemma 2.1, and $B^{+}$ is a Z-matrix with positive diagonal entries, $B^{+}$ is a nonsingular M-matrix. We conclude from Lemma 2.5 that $A=B^{+}+C$ is a P-matrix.
Lemma 3.2. Suppose that $M=(m_{ij})\in\mathbb{R}^{n\times n}$ ($n\geq 2$) is an S-SDDS matrix with positive diagonal entries, and let
$$\tilde{M}=I-D+DM=(\tilde{m}_{ij}),\qquad D=\operatorname{diag}(d_{i}),\ 0\leq d_{i}\leq 1;$$
then $\tilde{M}$ is an S-SDDS matrix with positive diagonal entries.
Proof. From $\tilde{M}=I-D+DM=(\tilde{m}_{ij})$, we have
$$\tilde{m}_{ij}=\begin{cases}1-d_{i}+d_{i}m_{ii}, & i=j,\\ d_{i}m_{ij}, & i\neq j.\end{cases}$$
Because $M$ is an S-SDDS matrix with positive diagonal entries and $D=\operatorname{diag}(d_{i})$, $0\leq d_{i}\leq 1$, for any $i\in S$ we get
$$|\tilde{m}_{ii}|-r_{i}^{S}(\tilde{M})=1-d_{i}+d_{i}m_{ii}-d_{i}r_{i}^{S}(M)=(1-d_{i})+d_{i}\left(m_{ii}-r_{i}^{S}(M)\right)>0,$$
since $0\leq d_{i}\leq 1$ and $m_{ii}>r_{i}^{S}(M)$, so that at least one of the two nonnegative terms is positive.
Similarly, for any $j\in\bar S$, we have
$$|\tilde{m}_{jj}|-r_{j}^{\bar S}(\tilde{M})=(1-d_{j})+d_{j}\left(m_{jj}-r_{j}^{\bar S}(M)\right)>0.$$
For any $i\in S$ and $j\in\bar S$ such that $\tilde{m}_{ij}\neq 0$ or $\tilde{m}_{ji}\neq 0$, we have $m_{ij}\neq 0$ or $m_{ji}\neq 0$, and we obtain
$$\left(|\tilde{m}_{ii}|-r_{i}^{S}(\tilde{M})\right)\left(|\tilde{m}_{jj}|-r_{j}^{\bar S}(\tilde{M})\right)=\left(|1-d_{i}+d_{i}m_{ii}|-d_{i}r_{i}^{S}(M)\right)\left(|1-d_{j}+d_{j}m_{jj}|-d_{j}r_{j}^{\bar S}(M)\right)\geq d_{i}d_{j}\left(|m_{ii}|-r_{i}^{S}(M)\right)\left(|m_{jj}|-r_{j}^{\bar S}(M)\right)\geq d_{i}d_{j}\,r_{i}^{\bar S}(M)\,r_{j}^{S}(M)=r_{i}^{\bar S}(\tilde{M})\,r_{j}^{S}(\tilde{M}),$$
where the second inequality uses (2.1) for $M$. The overall inequality is strict: it is strict when $d_{i}d_{j}>0$, while for $d_{i}d_{j}=0$ the right-hand side vanishes and the left-hand side is positive.
From Definition 2.1, $\tilde{M}$ is an S-SDDS matrix with positive diagonal entries.
Theorem 3.1. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be an S-SDDS-B matrix and write $A=B^{+}+C$, where $B^{+}=(b_{ij})$ is defined as in (2.2); then
$$\max_{d\in[0,1]^{n}}\|(I-D+DA)^{-1}\|_{\infty}\leq(n-1)\max\left\{\mu_{i}(B^{+}),\,\mu_{j}(B^{+}),\,\mu_{ij}^{S}(B^{+}),\,\mu_{ji}^{\bar S}(B^{+})\right\}, \tag{3.1}$$
where
$$\mu_{i}(B^{+})=\max_{i\in S:\,r_{i}^{\bar S}(B^{+})=0}\left\{\frac{1}{b_{ii}-r_{i}^{S}(B^{+})},\,1\right\},$$
$$\mu_{j}(B^{+})=\max_{j\in\bar S:\,r_{j}^{S}(B^{+})=0}\left\{\frac{1}{b_{jj}-r_{j}^{\bar S}(B^{+})},\,1\right\},$$
$$\mu_{ij}^{S}(B^{+})=\max_{i\in S,\,j\in\bar S:\,b_{ij}\neq 0}\frac{\left(b_{jj}-r_{j}^{\bar S}(B^{+})\right)\left(\dfrac{b_{ii}-r_{i}^{S}(B^{+})}{\min\{b_{ii}-r_{i}^{S}(B^{+}),1\}}+\dfrac{r_{i}^{\bar S}(B^{+})}{\min\{b_{jj}-r_{j}^{\bar S}(B^{+}),1\}}\right)}{\left(b_{ii}-r_{i}^{S}(B^{+})\right)\left(b_{jj}-r_{j}^{\bar S}(B^{+})\right)-r_{i}^{\bar S}(B^{+})\,r_{j}^{S}(B^{+})},$$
$$\mu_{ji}^{\bar S}(B^{+})=\max_{i\in S,\,j\in\bar S:\,b_{ji}\neq 0}\frac{\left(b_{ii}-r_{i}^{S}(B^{+})\right)\left(\dfrac{b_{jj}-r_{j}^{\bar S}(B^{+})}{\min\{b_{jj}-r_{j}^{\bar S}(B^{+}),1\}}+\dfrac{r_{j}^{S}(B^{+})}{\min\{b_{ii}-r_{i}^{S}(B^{+}),1\}}\right)}{\left(b_{jj}-r_{j}^{\bar S}(B^{+})\right)\left(b_{ii}-r_{i}^{S}(B^{+})\right)-r_{j}^{S}(B^{+})\,r_{i}^{\bar S}(B^{+})}.$$
Proof. Denote $A_{D}=I-D+DA$; then
$$A_{D}=I-D+DA=I-D+D(B^{+}+C)=B_{D}^{+}+C_{D},$$
where $B_{D}^{+}=I-D+DB^{+}$ and $C_{D}=DC$. Since $B^{+}$ is an S-SDDS matrix with positive diagonal entries, it follows from Lemma 3.2 that $B_{D}^{+}$ is an S-SDDS matrix with positive diagonal entries.
Note that
$$\|A_{D}^{-1}\|_{\infty}\leq\left\|\left(I+(B_{D}^{+})^{-1}C_{D}\right)^{-1}\right\|_{\infty}\,\left\|(B_{D}^{+})^{-1}\right\|_{\infty}\leq(n-1)\left\|(B_{D}^{+})^{-1}\right\|_{\infty}, \tag{3.2}$$
and an estimate for $\|(B_{D}^{+})^{-1}\|_{\infty}$ is given below. Since $B_{D}^{+}=I-D+DB^{+}=:(\tilde{b}_{ij})$, from Lemma 2.2 we have
$$\|(B_{D}^{+})^{-1}\|_{\infty}\leq\max\left\{\max_{i\in S:\,r_{i}^{\bar S}(B_{D}^{+})=0}\frac{1}{\tilde{b}_{ii}-r_{i}^{S}(B_{D}^{+})},\ \max_{j\in\bar S:\,r_{j}^{S}(B_{D}^{+})=0}\frac{1}{\tilde{b}_{jj}-r_{j}^{\bar S}(B_{D}^{+})},\ \max_{i\in S,\,j\in\bar S:\,\tilde{b}_{ij}\neq 0}f_{ij}^{S}(B_{D}^{+}),\ \max_{i\in S,\,j\in\bar S:\,\tilde{b}_{ji}\neq 0}f_{ij}^{\bar S}(B_{D}^{+})\right\}.$$
When $r_{i}^{\bar S}(B_{D}^{+})=0$ for some $i\in S$, it is easy to see that $r_{i}^{\bar S}(B^{+})=0$ or $d_{i}=0$ (and analogously, $r_{j}^{S}(B_{D}^{+})=0$ means $r_{j}^{S}(B^{+})=0$ or $d_{j}=0$ for $j\in\bar S$).
(1) If $d_{i}=0$ for some $i\in S$, then for this index the corresponding term in (2.3) satisfies
$$\frac{1}{\tilde{b}_{ii}-r_{i}^{S}(B_{D}^{+})}=\frac{1}{1-d_{i}+d_{i}b_{ii}-d_{i}r_{i}^{S}(B^{+})}=1\leq\max_{i\in S,\,j\in\bar S}\frac{\dfrac{1}{\min\{b_{ii}-r_{i}^{S}(B^{+}),1\}}+\dfrac{1}{\min\{b_{jj}-r_{j}^{\bar S}(B^{+}),1\}}\cdot\dfrac{r_{i}^{\bar S}(B^{+})}{b_{ii}-r_{i}^{S}(B^{+})}}{1-\dfrac{r_{i}^{\bar S}(B^{+})}{b_{ii}-r_{i}^{S}(B^{+})}\cdot\dfrac{r_{j}^{S}(B^{+})}{b_{jj}-r_{j}^{\bar S}(B^{+})}}=\mu_{ij}^{S}(B^{+}). \tag{3.3}$$
(2) If $r_{i}^{\bar S}(B^{+})=0$ for some $i\in S$, we have
$$\|(B_{D}^{+})^{-1}\|_{\infty}\leq\max_{i\in S:\,r_{i}^{\bar S}(B_{D}^{+})=0}\frac{1}{\tilde{b}_{ii}-r_{i}^{S}(B_{D}^{+})}=\max_{i\in S:\,r_{i}^{\bar S}(B_{D}^{+})=0}\frac{1}{1-d_{i}+d_{i}\left(b_{ii}-r_{i}^{S}(B^{+})\right)}\leq\max_{i\in S:\,r_{i}^{\bar S}(B^{+})=0}\left\{\frac{1}{b_{ii}-r_{i}^{S}(B^{+})},\,1\right\}. \tag{3.4}$$
(3) Similarly, if $r_{j}^{S}(B^{+})=0$ for some $j\in\bar S$ (the subcase $d_{j}=0$ is handled exactly as in (1)), we obtain
$$\|(B_{D}^{+})^{-1}\|_{\infty}\leq\max_{j\in\bar S:\,r_{j}^{S}(B_{D}^{+})=0}\frac{1}{\tilde{b}_{jj}-r_{j}^{\bar S}(B_{D}^{+})}=\max_{j\in\bar S:\,r_{j}^{S}(B_{D}^{+})=0}\frac{1}{1-d_{j}+d_{j}\left(b_{jj}-r_{j}^{\bar S}(B^{+})\right)}\leq\max_{j\in\bar S:\,r_{j}^{S}(B^{+})=0}\left\{\frac{1}{b_{jj}-r_{j}^{\bar S}(B^{+})},\,1\right\}. \tag{3.5}$$
(4) If $r_{i}^{\bar S}(B_{D}^{+})\neq 0$, then $\tilde{b}_{ij}\neq 0$ for some $j\in\bar S$, and we derive
$$\begin{aligned}\|(B_{D}^{+})^{-1}\|_{\infty}&\leq\max_{i\in S,\,j\in\bar S:\,\tilde{b}_{ij}\neq 0}f_{ij}^{S}(B_{D}^{+})=\max_{i\in S,\,j\in\bar S:\,\tilde{b}_{ij}\neq 0}\frac{\dfrac{1}{\tilde{b}_{ii}-r_{i}^{S}(B_{D}^{+})}+\dfrac{r_{i}^{\bar S}(B_{D}^{+})}{\left(\tilde{b}_{ii}-r_{i}^{S}(B_{D}^{+})\right)\left(\tilde{b}_{jj}-r_{j}^{\bar S}(B_{D}^{+})\right)}}{1-\dfrac{r_{i}^{\bar S}(B_{D}^{+})\,r_{j}^{S}(B_{D}^{+})}{\left(\tilde{b}_{ii}-r_{i}^{S}(B_{D}^{+})\right)\left(\tilde{b}_{jj}-r_{j}^{\bar S}(B_{D}^{+})\right)}}\\[1ex]&\leq\max_{i\in S,\,j\in\bar S:\,b_{ij}\neq 0}\frac{\dfrac{1}{\min\{b_{ii}-r_{i}^{S}(B^{+}),1\}}+\dfrac{r_{i}^{\bar S}(B^{+})}{\left(b_{ii}-r_{i}^{S}(B^{+})\right)\min\{b_{jj}-r_{j}^{\bar S}(B^{+}),1\}}}{1-\dfrac{r_{i}^{\bar S}(B^{+})\,r_{j}^{S}(B^{+})}{\left(b_{ii}-r_{i}^{S}(B^{+})\right)\left(b_{jj}-r_{j}^{\bar S}(B^{+})\right)}}=\mu_{ij}^{S}(B^{+}),\end{aligned} \tag{3.6}$$
where the last inequality follows from Lemma 2.4.
(5) If $r_{j}^{S}(B_{D}^{+})\neq 0$, then $\tilde{b}_{ji}\neq 0$ for some $i\in S$, and in the same way we arrive at
$$\|(B_{D}^{+})^{-1}\|_{\infty}\leq\max_{i\in S,\,j\in\bar S:\,\tilde{b}_{ji}\neq 0}f_{ij}^{\bar S}(B_{D}^{+})\leq\mu_{ji}^{\bar S}(B^{+}). \tag{3.7}$$
Consequently, combining (3.2)–(3.7), we conclude that (3.1) holds. The proof is completed.
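For completeness, the quantities $\mu_{i}(B^{+})$, $\mu_{j}(B^{+})$, $\mu_{ij}^{S}(B^{+})$ and $\mu_{ji}^{\bar S}(B^{+})$ of Theorem 3.1 can be assembled numerically as in the following MATLAB sketch. This is an illustrative reconstruction, not the authors' code: the function name sddsb_bound and the input convention (Bplus as in (2.2), S a row vector of indices) are our own assumptions.

MATLAB sketch (saved, e.g., as sddsb_bound.m):
% Illustrative: evaluate the right-hand side of (3.1) from B^+ and S.
function bnd = sddsb_bound(Bplus, S)
n    = size(Bplus,1);
Sbar = setdiff(1:n, S);
rS   = @(i) sum(abs(Bplus(i, setdiff(S,   i))));   % r_i^S(B^+)
rSb  = @(i) sum(abs(Bplus(i, setdiff(Sbar, i))));  % r_i^{Sbar}(B^+)
mu = 0;                      % running maximum of the mu-quantities
for i = S                    % terms mu_i(B^+)
    if rSb(i) == 0, mu = max(mu, max(1/(Bplus(i,i) - rS(i)), 1)); end
end
for j = Sbar                 % terms mu_j(B^+)
    if rS(j) == 0, mu = max(mu, max(1/(Bplus(j,j) - rSb(j)), 1)); end
end
for i = S
    for j = Sbar
        di  = Bplus(i,i) - rS(i);   dj = Bplus(j,j) - rSb(j);
        den = di*dj - rSb(i)*rS(j);
        if Bplus(i,j) ~= 0   % terms mu_{ij}^S(B^+)
            mu = max(mu, dj*(di/min(di,1) + rSb(i)/min(dj,1)) / den);
        end
        if Bplus(j,i) ~= 0   % terms mu_{ji}^{Sbar}(B^+)
            mu = max(mu, di*(dj/min(dj,1) + rS(j)/min(di,1)) / den);
        end
    end
end
bnd = (n-1)*mu;
end

Such a sketch can then be applied to the splittings $A=B^{+}+C$ of the examples in Section 4.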
The bound in (3.1) also holds for B-matrices, because B-matrices form a subclass of S-SDDS-B matrices. Next, we show that the bound in Theorem 3.1 is sharper than that in Lemma 2.3 under certain conditions.
Theorem 3.2. Let $A=(a_{ij})\in\mathbb{R}^{n\times n}$ be an S-SDDS-B matrix written as $A=B^{+}+C$, where $B^{+}=(b_{ij})$ and $C$ are as in (2.2). If $b_{ii}-r_{i}^{S}(B^{+})<1$ and $b_{jj}-r_{j}^{\bar S}(B^{+})<1$ for all $i\in S$ and $j\in\bar S$, then
$$\max\left\{\mu_{i}(B^{+}),\,\mu_{j}(B^{+}),\,\mu_{ij}^{S}(B^{+}),\,\mu_{ji}^{\bar S}(B^{+})\right\}\leq\frac{1}{\min\{\beta,1\}}. \tag{3.8}$$
Proof. Recall from Lemma 2.3 that $\beta=\min_{i\in N}\{\beta_{i}\}$ with $\beta_{i}=b_{ii}-\sum_{j\neq i}|b_{ij}|=b_{ii}-r_{i}(B^{+})$. When $b_{ii}-r_{i}^{S}(B^{+})<1$ and $b_{jj}-r_{j}^{\bar S}(B^{+})<1$, it is obvious that
$$\mu_{i}(B^{+})=\max_{i\in S:\,r_{i}^{\bar S}(B^{+})=0}\left\{\frac{1}{b_{ii}-r_{i}^{S}(B^{+})},\,1\right\}=\max_{i\in S:\,r_{i}^{\bar S}(B^{+})=0}\frac{1}{b_{ii}-r_{i}^{S}(B^{+})}\leq\frac{1}{\min\{\beta,1\}}.$$
In the same way, we get
$$\mu_{j}(B^{+})=\max_{j\in\bar S:\,r_{j}^{S}(B^{+})=0}\left\{\frac{1}{b_{jj}-r_{j}^{\bar S}(B^{+})},\,1\right\}=\max_{j\in\bar S:\,r_{j}^{S}(B^{+})=0}\frac{1}{b_{jj}-r_{j}^{\bar S}(B^{+})}\leq\frac{1}{\min\{\beta,1\}}.$$
When $b_{ii}-r_{i}^{S}(B^{+})<1$ and $b_{jj}-r_{j}^{\bar S}(B^{+})<1$, it holds that
$$\mu_{ij}^{S}(B^{+})\leq\max_{i\in S,\,j\in\bar S:\,b_{ij}\neq 0}\frac{b_{jj}-r_{j}^{\bar S}(B^{+})+r_{i}^{\bar S}(B^{+})}{\left(b_{ii}-r_{i}^{S}(B^{+})\right)\left(b_{jj}-r_{j}^{\bar S}(B^{+})\right)-r_{i}^{\bar S}(B^{+})\,r_{j}^{S}(B^{+})}.$$
When $b_{jj}-r_{j}(B^{+})>b_{ii}-r_{i}(B^{+})=b_{ii}-r_{i}^{S}(B^{+})-r_{i}^{\bar S}(B^{+})$ for $i\in S$, $j\in\bar S$, multiplying both sides of this inequality by $r_{i}^{\bar S}(B^{+})$, adding $\left(b_{ii}-r_{i}(B^{+})\right)\left(b_{jj}-r_{j}^{\bar S}(B^{+})\right)$ to both sides and using $r_{j}(B^{+})=r_{j}^{S}(B^{+})+r_{j}^{\bar S}(B^{+})$, we obtain
$$\left(b_{ii}-r_{i}^{S}(B^{+})\right)\left(b_{jj}-r_{j}^{\bar S}(B^{+})\right)-r_{i}^{\bar S}(B^{+})\,r_{j}^{S}(B^{+})\geq\left(b_{jj}-r_{j}^{\bar S}(B^{+})+r_{i}^{\bar S}(B^{+})\right)\left(b_{ii}-r_{i}(B^{+})\right),$$
and hence
$$\frac{b_{jj}-r_{j}^{\bar S}(B^{+})+r_{i}^{\bar S}(B^{+})}{\left(b_{ii}-r_{i}^{S}(B^{+})\right)\left(b_{jj}-r_{j}^{\bar S}(B^{+})\right)-r_{i}^{\bar S}(B^{+})\,r_{j}^{S}(B^{+})}\leq\frac{1}{b_{ii}-r_{i}(B^{+})}\leq\frac{1}{\min\{\beta,1\}}.$$
When $b_{jj}-r_{j}(B^{+})=b_{jj}-r_{j}^{S}(B^{+})-r_{j}^{\bar S}(B^{+})\leq b_{ii}-r_{i}(B^{+})$, multiplying by $r_{j}^{S}(B^{+})$ instead, the following inequality is obtained in the same way:
$$\frac{b_{ii}-r_{i}^{S}(B^{+})+r_{j}^{S}(B^{+})}{\left(b_{ii}-r_{i}^{S}(B^{+})\right)\left(b_{jj}-r_{j}^{\bar S}(B^{+})\right)-r_{i}^{\bar S}(B^{+})\,r_{j}^{S}(B^{+})}\leq\frac{1}{b_{jj}-r_{j}(B^{+})}\leq\frac{1}{\min\{\beta,1\}}.$$
So the conclusion in (3.8) holds.
In this section, examples are given to show the advantage of the bound in Theorem 3.1.
Example 1. Consider the S-SDDS-B matrix
$$A=\begin{pmatrix}0.6 & 0.1 & 0 & 0.1\\ 0.1 & 0.5 & 0.1 & 0\\ 0 & 0.1 & 0.6 & 0.1\\ 0 & 0.1 & 0 & 0.4\end{pmatrix}.$$
Matrix $A$ can be split into $A=B^{+}+C$, where
$$B^{+}=\begin{pmatrix}0.5 & 0 & -0.1 & 0\\ 0 & 0.4 & 0 & -0.1\\ -0.1 & 0 & 0.5 & 0\\ -0.1 & 0 & -0.1 & 0.3\end{pmatrix},\qquad C=\begin{pmatrix}0.1 & 0.1 & 0.1 & 0.1\\ 0.1 & 0.1 & 0.1 & 0.1\\ 0.1 & 0.1 & 0.1 & 0.1\\ 0.1 & 0.1 & 0.1 & 0.1\end{pmatrix}.$$
Since $A$ is a B-matrix, by Lemma 2.3 we have
$$\max_{d\in[0,1]^{4}}\|(I-D+DA)^{-1}\|_{\infty}\leq 30. \tag{4.1}$$
Because $A$ is a B-matrix, it is also an S-SDDS-B matrix. Taking $S=\{1,2,3\}$ and $\bar S=\{4\}$, we can also compute the error bound by Theorem 3.1 as follows:
$$\max_{d\in[0,1]^{4}}\|(I-D+DA)^{-1}\|_{\infty}\leq 18.00. \tag{4.2}$$
The results in (4.1) and (4.2) indicate that the bound of Theorem 3.1 is sharper than that of Lemma 2.3 for this matrix.
Figure 1, generated with 1000 random matrices $D$ produced by the MATLAB code below, shows that 18 is a better bound than 30 for $\max\|(I-D+DA)^{-1}\|_{\infty}$. The blue stars in Figure 1 represent $\|(I-D+DA)^{-1}\|_{\infty}$ for 1000 different random diagonal matrices $D$ with entries in $[0,1]$.
MATLAB code: for i = 1:1000; D = diag(rand(4,1)); end.
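The loop above only generates the random matrices $D$; a fuller, illustrative reconstruction of the experiment behind Figure 1 (with the matrix $A$ of Example 1 and hypothetical plotting choices) might read:

MATLAB sketch:
% Illustrative: compare the sampled norms with the bounds 18 and 30.
A = [0.6 0.1 0 0.1; 0.1 0.5 0.1 0; 0 0.1 0.6 0.1; 0 0.1 0 0.4];
nrm = zeros(1,1000);
for i = 1:1000
    D = diag(rand(4,1));
    nrm(i) = norm(inv(eye(4) - D + D*A), inf);
end
plot(1:1000, nrm, 'b*'); hold on
plot([1 1000], [18 18], 'r-'); plot([1 1000], [30 30], 'k--');
xlabel('sample'); ylabel('||(I-D+DA)^{-1}||_\infty')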
Example 2.
$$A=\begin{pmatrix}12 & 1 & 2 & 2 & 1 & 1 & 1\\ 3 & 18 & 0 & 3 & 2 & 3 & 3\\ 2 & 2 & 11 & 1 & 1 & 1 & 2\\ 2 & 3 & 0 & 15 & 3 & 3 & 3\\ 2 & 2 & 2 & 0 & 10 & 2 & 2\\ 3 & 1 & 3 & 0 & 3 & 9 & 2\\ 2 & 0 & 1 & 1 & 2 & 2 & 20\end{pmatrix}.$$
$A$ can be split into $A=B^{+}+C$, where
$$B^{+}=\begin{pmatrix}10 & -1 & 0 & 0 & -1 & -1 & -1\\ 0 & 15 & -3 & 0 & -1 & 0 & 0\\ 0 & 0 & 9 & -1 & -1 & -1 & 0\\ -1 & 0 & -3 & 12 & 0 & 0 & 0\\ 0 & 0 & 0 & -2 & 8 & 0 & 0\\ 0 & -2 & 0 & -3 & 0 & 6 & -1\\ 0 & -2 & -1 & -1 & 0 & 0 & 18\end{pmatrix},\qquad C=\begin{pmatrix}2 & 2 & 2 & 2 & 2 & 2 & 2\\ 3 & 3 & 3 & 3 & 3 & 3 & 3\\ 2 & 2 & 2 & 2 & 2 & 2 & 2\\ 3 & 3 & 3 & 3 & 3 & 3 & 3\\ 2 & 2 & 2 & 2 & 2 & 2 & 2\\ 3 & 3 & 3 & 3 & 3 & 3 & 3\\ 2 & 2 & 2 & 2 & 2 & 2 & 2\end{pmatrix}.$$
Note that $B^{+}$ is not a strictly diagonally dominant matrix, so $A$ is not a B-matrix. It is easy to check that, with $S=\{1,2,3,4\}$ and $\bar S=\{5,6,7\}$, $A$ fulfills Definition 2.2. Therefore, by Theorem 3.1, we obtain
$$\max_{d\in[0,1]^{n}}\|(I-D+DA)^{-1}\|_{\infty}\leq 2.8002.$$
In this paper, we first give a new error bound for the LCP$(M,q)$ with S-SDDS-B matrices, which depends only on the entries of $M$. Then, based on the new result, we compare it with the error bound in [1]. From Figure 1, we can see that our result improves that in [1].
This work was supported by the National Natural Science Foundation of China (11861077), the Foundation of Science and Technology Department of Guizhou Province (20191161, 20181079), the Research Foundation of Guizhou Minzu University (2019YB08) and the Foundation of Education Department of Guizhou Province (2018143).
The authors declare that they have no competing interests.
[1] M. García-Esnaola, J. M. Peña, Error bounds for linear complementarity problems for B-matrices, Appl. Math. Lett., 22 (2009), 1071–1075. doi: 10.1016/j.aml.2008.09.001
[2] K. G. Murty, F. T. Yu, Linear complementarity, linear and nonlinear programming, Berlin: Heldermann, 1988.
[3] R. W. Cottle, J. S. Pang, R. E. Stone, The linear complementarity problem, Boston: Academic Press, 1992.
[4] Z. Q. Luo, P. Tseng, On the linear convergence of descent methods for convex essentially smooth minimization, SIAM J. Control Optim., 30 (1992), 408–425. doi: 10.1137/0330025
[5] Z. Q. Luo, P. Tseng, Error bound and convergence analysis of matrix splitting algorithms for the affine variational inequality problem, SIAM J. Optim., 2 (1992), 43–54. doi: 10.1137/0802004
[6] J. S. Pang, A posteriori error bound for the linearly-constrained variational inequality problem, Math. Oper. Res., 12 (1987), 474–484. doi: 10.1287/moor.12.3.474
[7] J. S. Pang, Inexact Newton methods for the nonlinear complementarity problem, Math. Program., 36 (1986), 54–71. doi: 10.1007/BF02591989
[8] X. J. Chen, S. H. Xiang, Computation of error bounds for P-matrix linear complementarity problems, Math. Program., 106 (2006), 513–525. doi: 10.1007/s10107-005-0645-9
[9] L. Y. Kolotilina, Bounds for the inverses of generalized Nekrasov matrices, J. Math. Sci., 207 (2015), 786–794. doi: 10.1007/s10958-015-2401-x
[10] L. Cvetković, V. Kostić, R. S. Varga, A new Geršgorin-type eigenvalue inclusion set, Electron. T. Numer. Ana., 302 (2004), 73–80.
[11] T. Szulc, L. Cvetković, M. Nedović, Scaling technique for partition-Nekrasov matrices, Appl. Math. Comput., 207 (2015), 201–208. doi: 10.1016/j.amc.2015.08.136
[12] L. Cvetković, V. Kostić, S. Rauški, A new subclass of H-matrices, Appl. Math. Comput., 208 (2009), 206–210. doi: 10.1016/j.amc.2008.11.037
[13] P. F. Dai, J. C. Li, Y. T. Li, C. Y. Zhang, Error bounds for linear complementarity problems of QN-matrices, Calcolo, 53 (2016), 647–657. doi: 10.1007/s10092-015-0167-7
[14] L. Gao, Y. Q. Wang, C. Q. Li, New error bounds for the linear complementarity problem of QN-matrices, Numer. Algorithms, 77 (2018), 229–242. doi: 10.1007/s11075-017-0312-2
[15] Z. W. Hou, X. Jing, L. Gao, New error bounds for linear complementarity problems of Σ-SDD matrices and SB-matrices, Open Math., 17 (2019), 1599–1614. doi: 10.1515/math-2019-0127
[16] P. F. Dai, Y. T. Li, C. J. Lu, New error bounds for the linear complementarity problem with an SB-matrix, Numer. Algorithms, 64 (2013), 741–757. doi: 10.1007/s11075-012-9691-6
[17] M. García-Esnaola, J. M. Peña, Error bounds for linear complementarity problems involving BS-matrices, Appl. Math. Lett., 25 (2012), 1379–1383. doi: 10.1016/j.aml.2011.12.006
[18] F. Wang, Error bounds for linear complementarity problem of weakly chained diagonally dominant B-matrices, J. Inequal. Appl., 2017 (2017), 1–8. doi: 10.1186/s13660-017-1303-5
[19] P. F. Dai, Error bounds for linear complementarity problems of DB-matrices, Linear Algebra Appl., 434 (2011), 830–840. doi: 10.1016/j.laa.2010.09.049
[20] T. T. Chen, W. Li, X. P. Wu, S. Vong, Error bounds for linear complementarity problems of MB-matrices, Numer. Algorithms, 70 (2015), 341–356. doi: 10.1007/s11075-014-9950-9
[21] C. Q. Li, Y. T. Li, Note on error bounds for linear complementarity problems for B-matrices, Appl. Math. Lett., 57 (2016), 108–113. doi: 10.1016/j.aml.2016.01.013
[22] C. Q. Li, M. T. Gan, S. R. Yang, A new error bound for linear complementarity problems for B-matrices, Electron. J. Linear Al., 31 (2016), 476–484. doi: 10.13001/1081-3810.3250
[23] M. García-Esnaola, J. M. Peña, A comparison of error bounds for linear complementarity problems of H-matrices, Linear Algebra Appl., 433 (2010), 956–964. doi: 10.1016/j.laa.2010.04.024
[24] C. Q. Li, Y. T. Li, Weakly chained diagonally dominant B-matrices and error bounds for linear complementarity problem, Numer. Algorithms, 73 (2016), 985–998. doi: 10.1007/s11075-016-0125-8
[25] D. Sun, F. Wang, New error bounds for linear complementarity problem of weakly chained diagonally dominant B-matrices, Open Math., 15 (2017), 978–986. doi: 10.1515/math-2017-0080
[26] L. Y. Kolotilina, Some bounds for inverses involving matrix sparsity pattern, J. Math. Sci., 249 (2020), 242–255. doi: 10.1007/s10958-020-04938-3
[27] J. M. Peña, A class of P-matrices with applications to the localization of the eigenvalues of a real matrix, SIAM J. Matrix Anal. Appl., 22 (2001), 1027–1037. doi: 10.1137/S0895479800370342