1. Introduction
The Schur complement of a matrix is widely used and has attracted the attention of many scholars. In 1979, the Schur complement problem for strictly diagonally dominant (SDD) matrices was studied by Carlson and Markham [1], who proved that the Schur complement of an SDD matrix is also an SDD matrix. Before long, other well-known classes of matrices, such as doubly diagonally dominant matrices and Dashnic-Zusmanovich (DZ) matrices, were investigated, with analogous results [2,3,4,5]. In 2020, Li et al. [6] proved that the Schur complements and the diagonal-Schur complements of Dashnic-Zusmanovich type (DZ-type) matrices are DZ-type matrices under certain conditions. In 2023, Song and Gao [7] proved that the Schur complements and the diagonal-Schur complements of CKV-type matrices are CKV-B-type matrices under certain conditions. Furthermore, there are many conclusions on Schur complements and diagonal-Schur complements for other classes of matrices; see [8,9,10,11,12,13,14,15].
Upper bounds for the infinity norm of the inverse of a nonsingular matrix are widely used in mathematics, for instance in the convergence analysis of matrix splitting and matrix multisplitting iterative methods for solving linear systems. A traditional way to obtain such a bound is to use the definition and properties of a given matrix class; see [16,17,18,19] for details. The first work in this direction was by Varah [19], who in 1975 gave an upper bound for the infinity norm of the inverse of an SDD matrix. However, in some cases Varah's bound may yield large values. In 2020, Li [20] obtained two upper bounds for the infinity norm of the inverse of an SDD matrix based on the Schur complement, and in 2021, Sang [21] obtained two upper bounds for the infinity norm of the inverse of DSDD matrices. In 2022, based on the Schur complement, Li and Wang obtained some upper bounds for the infinity norm of the inverse of GDSDD matrices [22].
In this paper, n is a positive integer and N=\{1,2,\ldots,n\}. Let S be an arbitrary nonempty proper subset of N, S\subset N, and let \bar{S}:=N\setminus S denote the complement of S. C^{n\times n} denotes the set of all n\times n complex matrices and R^{n\times n} the set of all n\times n real matrices; I\in R^{n\times n} is the identity matrix. For A=[a_{ij}]\in C^{n\times n}, we write |A|=[|a_{ij}|]\in R^{n\times n} and, for i\in N,
r_{i}(A)=\sum\limits_{k\in N, k\neq i}|a_{ik}|, \qquad r_{i}^{S}(A)=\sum\limits_{k\in S, k\neq i}|a_{ik}|.
The matrix A is known as a strictly diagonally dominant (SDD) matrix, abbreviated as A\in SDD, if |a_{ii}|>r_{i}(A) for all i\in N.
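To make the notation concrete, the following Python sketch (our illustration, not part of the original text; the function names are ours) evaluates r_i(A) and r_i^S(A) and tests the SDD condition for a small example matrix.

```python
import numpy as np

def r(A, i):
    """Deleted absolute row sum r_i(A) = sum_{k != i} |a_{ik}|."""
    return np.sum(np.abs(A[i])) - abs(A[i, i])

def r_S(A, i, S):
    """Partial row sum r_i^S(A) = sum_{k in S, k != i} |a_{ik}| (0-based indices)."""
    return sum(abs(A[i, k]) for k in S if k != i)

def is_sdd(A):
    """Check the SDD condition |a_{ii}| > r_i(A) for every row."""
    return all(abs(A[i, i]) > r(A, i) for i in range(A.shape[0]))

A = np.array([[4.0, -1.0, 1.0],
              [2.0,  5.0, 1.0],
              [0.5,  1.0, 3.0]])
print(is_sdd(A))                  # True
print(r(A, 0), r_S(A, 0, {1}))    # 2.0 1.0
```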
Definition 1. [23] Let S be an arbitrary nonempty proper subset of the index set N. A matrix A=[a_{ij}]\in C^{n\times n}, n\geq 2, is called an S-SOB (S-Sparse Ostrowski-Brauer) matrix if
(i) |a_{ii}|>r_{i}^{S}(A) for all i\in S;
(ii) |a_{jj}|>r_{j}^{\bar{S}}(A) for all j\in\bar{S};
(iii) For all i\in S and all j\in\bar{S} such that a_{ij}\neq 0,
(iv) For all i\in S and all j\in\bar{S} such that a_{ji}\neq 0,
Definition 2. [24] A matrix A=[a_{ij}]\in C^{n\times n} is called a GDSDD matrix if J\neq\emptyset and there exist proper subsets N_{1}, N_{2} of N such that N_{1}\cap N_{2}=\emptyset, N_{1}\cup N_{2}=N, and for any i\in N_{1} and j\in N_{2},
where J:=\{i\in N:|a_{ii}|>r_{i}(A)\}.
Definition 3. [25] A matrix A=[a_{ij}]\in C^{n\times n} is called an H-matrix if its comparison matrix \mu(A)=[\mu_{ij}], defined by \mu_{ii}=|a_{ii}| and \mu_{ij}=-|a_{ij}| for i\neq j,
is an M-matrix, i.e., [\mu(A)]^{-1}\geq 0.
It is shown in [1] that if A is an H-matrix, then
[\mu(A)]^{-1}\geq|A^{-1}|\geq 0. \qquad (1.3)
If A is an M-matrix, then \det(A)>0.
In addition, it was shown in [23,26] that S-SOB, SDD and GDSDD matrices are nonsingular H-matrices. Varah [19] gave the following upper bound for the infinity norm of the inverse of SDD matrices:
Theorem 1. [19] Let A=[a_{ij}] be an SDD matrix. Then,
\|A^{-1}\|_{\infty}\leq\frac{1}{\min\limits_{i\in N}\{|a_{ii}|-r_{i}(A)\}}.
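As a quick illustration (our own sketch, not taken from the paper), the following code evaluates Varah's bound in the form just stated and compares it with the exact norm for a small SDD matrix.

```python
import numpy as np

def varah_bound(A):
    """Varah's bound 1 / min_i (|a_ii| - r_i(A)) for an SDD matrix."""
    d = np.abs(np.diag(A))
    r = np.sum(np.abs(A), axis=1) - d          # deleted row sums r_i(A)
    gap = d - r
    assert np.all(gap > 0), "A must be strictly diagonally dominant"
    return 1.0 / gap.min()

A = np.array([[4.0, -1.0, 1.0],
              [2.0,  5.0, 1.0],
              [0.5,  1.0, 3.0]])
exact = np.max(np.sum(np.abs(np.linalg.inv(A)), axis=1))   # ||A^{-1}||_inf
print(varah_bound(A), exact)      # the bound is never smaller than the exact norm
```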
Theorem 2. [27] Let A=[a_{ij}]\in C^{n\times n}, n\geq 2, be an S-SOB matrix, where S\subset N, 1\leq|S|\leq n-1. Then,
where
Theorem 3. [28] Let A=[a_{ij}]\in C^{n\times n}, n\geq 2, be a GDSDD matrix, where S\subset N, 1\leq|S|\leq n-1. Then,
In this paper, based on the Schur complement, we present some upper bounds for the infinity norm of the inverse of S-SOB matrices, and numerical examples are given to show the effectiveness of the obtained results. In addition, applying these new bounds, error bounds are obtained for the linear complementarity problems of S-SOB matrices.
2. The Schur complement of S-SOB matrices
Given a nonsingular matrix A=(a_{ij})\in C^{n\times n}, let \alpha=\{i_{1},i_{2},\ldots,i_{k}\} be any nonempty proper subset of N, let |\alpha| denote the cardinality of \alpha (the number of elements of \alpha, i.e., |\alpha|=k), and let \bar{\alpha}=N\setminus\alpha=\{j_{1},\ldots,j_{l}\} be the complement of \alpha with respect to N. A(\alpha,\bar{\alpha}) is the submatrix of A lying in the rows indexed by \alpha and the columns indexed by \bar{\alpha}, and A(\alpha) is the principal submatrix of A whose rows and columns are both indexed by \alpha; the elements of \alpha and of \bar{\alpha} are conventionally arranged in increasing order. If A(\alpha) is nonsingular, the matrix A/\alpha, called the Schur complement of A with respect to A(\alpha), is given by
A/\alpha=A(\bar{\alpha})-A(\bar{\alpha},\alpha)[A(\alpha)]^{-1}A(\alpha,\bar{\alpha}).
Lemma 1. (Quotient formula [28,29]) Let A be a square matrix, B a nonsingular principal submatrix of A, and C a nonsingular principal submatrix of B. Then, B/C is a nonsingular principal submatrix of A/C and A/B=(A/C)/(B/C), where B/C is the Schur complement of C in the matrix B.
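The two statements above translate directly into code. The sketch below (our illustration; index sets are 0-based) computes A/\alpha = A(\bar{\alpha}) - A(\bar{\alpha},\alpha)[A(\alpha)]^{-1}A(\alpha,\bar{\alpha}) and checks the quotient formula A/B=(A/C)/(B/C) numerically on a random, comfortably nonsingular matrix.

```python
import numpy as np

def schur(A, alpha):
    """Schur complement A/alpha = A(comp) - A(comp,alpha) A(alpha)^{-1} A(alpha,comp)."""
    n = A.shape[0]
    alpha = sorted(alpha)
    comp = [i for i in range(n) if i not in alpha]
    A11 = A[np.ix_(alpha, alpha)]
    A12 = A[np.ix_(alpha, comp)]
    A21 = A[np.ix_(comp, alpha)]
    A22 = A[np.ix_(comp, comp)]
    return A22 - A21 @ np.linalg.solve(A11, A12)

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + 10.0 * np.eye(n)   # strongly diagonally weighted
beta, gamma = [0, 1, 2], [0, 1]                      # gamma (C) inside beta (B)

lhs = schur(A, beta)                                 # A/B
# A/C is indexed by N \ gamma = [2,3,4,5]; the block B/C sits in its leading
# position (original index 2 becomes position 0 of A/C).
rhs = schur(schur(A, gamma), [0])                    # (A/C)/(B/C)
print(np.allclose(lhs, rhs))                         # True
```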
Lemma 2. Let A=(a_{ij})\in C^{n\times n} be an S-SOB matrix, n\geq 2, and let \alpha\subseteq S or \alpha\subseteq\bar{S}. Then, A(\alpha) is an SDD matrix.
Proof. When \alpha\subseteq S, since A is an S-SOB matrix, we have |a_{ii}|>r_{i}^{S}(A)\geq r_{i}^{\alpha}(A)=\sum\limits_{k\neq i,k\in\alpha}|a_{ik}| for all i\in\alpha. Since r_{i}[A(\alpha)]=\sum\limits_{k\neq i,k\in\alpha}|a_{ik}|=r_{i}^{\alpha}(A), we obtain |a_{ii}|>r_{i}[A(\alpha)], so A(\alpha) is an SDD matrix. The case \alpha\subseteq\bar{S} is analogous. □
Lemma 3. Let A=(a_{ij})\in C^{n\times n} be an S-SOB matrix, n\geq 2, and let \alpha be a subset of N. Then, A(\alpha) is an S-SOB matrix.
Proof. If S\subset\alpha, since A is an S-SOB matrix, then:
(i) For all i\in S, |a_{ii}|>r_{i}^{S}(A)=r_{i}^{S}(A(\alpha));
(ii) For all j\in\bar{S}\cap\alpha, |a_{jj}|>r_{j}^{\bar{S}}(A)\geq r_{j}^{\bar{S}\cap\alpha}(A)=r_{j}^{\bar{S}\cap\alpha}(A(\alpha));
(iii) For all i\in S, j\in\bar{S}\cap\alpha such that a_{ij}\neq 0,
(iv) For all i\in S, j\in\bar{S}\cap\alpha such that a_{ji}\neq 0,
Thus, A(\alpha) is an S-SOB matrix, i.e., A(\alpha)\in\{S-SOB\}.
In a similar way, if \bar{S}\subset\alpha, then A(\alpha) is an \bar{S}-SOB matrix. Meanwhile, when \alpha is contained neither in S nor in \bar{S}, A(\alpha) is an (S\cap\alpha)-SOB matrix. In all cases, A(\alpha)\in\{S-SOB\}. □
Lemma 4. Let A=(a_{ij})\in C^{n\times n}, n\geq 2, be an S-SOB matrix satisfying a_{ii}>r_{i}(A) whenever a_{ij}=0 and a_{jj}>r_{j}(A) whenever a_{ji}=0, for i\in S, j\in\bar{S}. If \alpha=\{i_{1}\}\subset S, denote
where j_{t}\in(S\setminus\alpha), j_{s}\in\bar{S}; then B\in\{GDSDD_{3}\}.
Proof. Since A is an S-SOB matrix, taking S_{B}=\{1,2\}, for all i\in S_{B} we have
There are four cases to consider.
Case 1. When |a_{j_{s}i_{1}}|\neq 0 and |a_{i_{1}j_{s}}|\neq 0.
(i) If |a_{j_{s}j_{s}}|<r_{j_{s}}(A), then from Definition 1 we have |a_{i_{1}i_{1}}|\geq r_{i_{1}}(A),
(ii) If |a_{j_{s}j_{s}}|>r_{j_{s}}(A) and |a_{i_{1}i_{1}}|\geq r_{i_{1}}(A), we get
(iii) If |a_{j_{s}j_{s}}|>r_{j_{s}}(A) and |a_{i_{1}i_{1}}|\leq r_{i_{1}}(A), we obtain
Case 2. When |a_{j_{s}i_{1}}|\neq 0, |a_{i_{1}j_{s}}|=0 and |a_{i_{1}i_{1}}|\geq r_{i_{1}}(A), the proof is analogous to (i) and (ii) in Case 1. We obtain
Case 3. If |a_{j_{s}i_{1}}|=0 and |a_{i_{1}j_{s}}|\neq 0, then |a_{j_{s}j_{s}}|>r_{j_{s}}(A). By the same argument as (ii) and (iii) in Case 1, we have
Case 4. If |a_{j_{s}i_{1}}|=0 and |a_{i_{1}j_{s}}|=0, then |a_{i_{1}i_{1}}|>r_{i_{1}}(A), |a_{j_{s}j_{s}}|>r_{j_{s}}(A), and
To sum up, the inequality [|b_{11}|-r_{1}^{S_{B}}(B)][|b_{33}|-r_{3}^{\bar{S}_{B}}(B)]>r_{1}^{\bar{S}_{B}}(B)r_{3}^{S_{B}}(B) holds. In the same way, the inequality [|b_{22}|-r_{2}^{S_{B}}(B)][|b_{33}|-r_{3}^{\bar{S}_{B}}(B)]>r_{2}^{\bar{S}_{B}}(B)r_{3}^{S_{B}}(B) also holds. At last, we obtain B\in\{GDSDD_{3}\}, and B=\mu(B) is an M-matrix, hence \det B>0. The proof is completed. □
Theorem 4. Let A=(a_{ij})\in C^{n\times n}, n\geq 2, be an S-SOB matrix satisfying a_{ii}>r_{i}(A) whenever a_{ij}=0 and a_{jj}>r_{j}(A) whenever a_{ji}=0, for i\in S, j\in\bar{S}. Denote A/\alpha=(a'_{j_{t}j_{s}}). If \alpha\subset S, then A/\alpha\in\{GDSDD_{n-k}^{(S\setminus\alpha),\bar{S}}\}.
Proof. First, suppose that \alpha contains only one element. If \alpha=\{i_{1}\}\subset S, then for all j_{t}\in S\setminus\alpha and j_{s}\in\bar{S} we have
Hence, A/\{i_{1}\}\in\{GDSDD_{n-1}^{(S\setminus\{i_{1}\}),\bar{S}}\} for any i_{1}\in S. Now consider the case where \alpha contains more than one element. If i_{1}\in\alpha, by the quotient formula ([9], Theorem 2 (ii)), we have A/\alpha=(A/\{i_{1}\})/(A(\alpha)/\{i_{1}\})\in\{GDSDD_{n-k}^{(S\setminus\alpha),\bar{S}}\}. The proof is completed. □
Corollary 1. Let A=(a_{ij})\in C^{n\times n}, n\geq 2, be an S-SOB matrix satisfying a_{ii}>r_{i}(A) whenever a_{ij}=0 and a_{jj}>r_{j}(A) whenever a_{ji}=0, for i\in S, j\in\bar{S}. Denote A/\alpha=(a'_{j_{t}j_{s}}). If \alpha\subset\bar{S}, j_{t}\in S, j_{s}\in\bar{S}\setminus\alpha, then A/\alpha\in\{GDSDD_{n-k}^{S,(\bar{S}\setminus\alpha)}\}.
Proof. The conclusion can be drawn by using the same proof method as Theorem 4. □
Corollary 2. Let A=(a_{ij})\in C^{n\times n}, n\geq 2, be an S-SOB matrix satisfying a_{ii}>r_{i}(A) whenever a_{ij}=0 and a_{jj}>r_{j}(A) whenever a_{ji}=0, for i\in S, j\in\bar{S}. Denote A/\alpha=(a'_{j_{t}j_{s}}). If \alpha is contained neither in S nor in \bar{S}, j_{t}\in S\setminus\alpha, j_{s}\in\bar{S}\setminus\alpha, then A/\alpha\in\{GDSDD_{n-k}^{(S\setminus\alpha),(\bar{S}\setminus\alpha)}\}.
Proof. The proof is similar to ([9], Theorem 2 (iii)), so we get A/\alpha=(A/(S\cap\alpha))/(A(\alpha)/(S\cap\alpha))\in\{GDSDD_{n-k}^{(S\setminus\alpha),(\bar{S}\setminus\alpha)}\}. □
Theorem 5. Let A=(a_{ij})\in C^{n\times n} be an S-SOB matrix, n\geq 2, and denote A/\alpha=(a'_{j_{t}j_{s}}). If \alpha=S or \alpha=\bar{S}, then A/\alpha is an SDD matrix.
Proof. If \{i_{1}\}=\alpha=S, then for all j_{t}\in\bar{\alpha} we have
If a_{j_{t}i_{1}}=0, then we get
If a_{j_{t}i_{1}}\neq 0, then we obtain
Hence, for \{i_{1}\}=\alpha=S, A/\{i_{1}\} is an SDD matrix. When \alpha=S contains more than one element, take i_{1}\in\alpha=S; since A(\alpha) is an SDD matrix, its Schur complement is as well. At last, we have A/\alpha=(A/\{i_{1}\})/(A(\alpha)/\{i_{1}\})\in\{SDD\}. By the same argument, the case \alpha=\bar{S} follows. □
Corollary 3. Let A=(a_{ij})\in C^{n\times n} be an S-SOB matrix, n\geq 2, and denote A/\alpha=(a'_{j_{t}j_{s}}). If S\subset\alpha or \bar{S}\subset\alpha, then A/\alpha is an SDD matrix.
Proof. From Theorem 5, A/S is an SDD matrix; consequently, A/\alpha=[A/S]/[A(\alpha)/S]\in\{SDD\}. Similarly, if \bar{S}\subset\alpha, we have A/\alpha=[A/\bar{S}]/[A(\alpha)/\bar{S}]\in\{SDD\}. □
Finally, we summarize the results of this section: if \alpha\subset S or \alpha\subset\bar{S}, then A(\alpha)\in\{SDD\} and A/\alpha\in\{GDSDD\}; if S\subset\alpha or \bar{S}\subset\alpha, then A(\alpha)\in\{S-SOB\} and A/\alpha\in\{SDD\}; if \alpha=S or \alpha=\bar{S}, then A(\alpha)\in\{SDD\} and A/\alpha\in\{SDD\}; if \alpha is contained neither in S nor in \bar{S}, then A(\alpha)\in\{S-SOB\} and A/\alpha\in\{GDSDD\}.
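This case analysis can be written down as a small lookup helper (our own illustration; the function name is hypothetical, and the hypotheses of the corresponding lemmas and theorems, such as the zero-pattern condition, are assumed to hold).

```python
def schur_classes(alpha, S, N):
    """Return (class of A(alpha), class of A/alpha) per the summary above.

    alpha and S are subsets of the index set N.
    """
    alpha, S, N = set(alpha), set(S), set(N)
    S_bar = N - S
    if alpha == S or alpha == S_bar:
        return "SDD", "SDD"
    if alpha < S or alpha < S_bar:            # strict subset
        return "SDD", "GDSDD"
    if S < alpha or S_bar < alpha:
        return "S-SOB", "SDD"
    return "S-SOB", "GDSDD"                   # alpha in neither S nor its complement

print(schur_classes({0}, {0, 1}, {0, 1, 2, 3}))        # ('SDD', 'GDSDD')
print(schur_classes({0, 1, 2}, {0, 1}, {0, 1, 2, 3}))  # ('S-SOB', 'SDD')
```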
3. Schur complement-based infinity norm bounds for the inverse of S-SOB matrices
In order to obtain an upper bound for the infinity norm of the inverse of an S-SOB matrix, we first recall that a permutation matrix is a matrix in which every row and every column has exactly one entry equal to 1, all other entries being 0. It is easy to see from this definition that permutation matrices are elementary matrices, so multiplying a matrix by a permutation matrix only changes the positions of its entries, not their values.
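As a small illustration of this (our own sketch; one common choice, assumed here, is to list the indices of \alpha first), the permutation matrix built below satisfies ||P||_{\infty}=1 and places A(\alpha) as the leading principal block of P^{T}AP.

```python
import numpy as np

def permutation_alpha_first(n, alpha):
    """Permutation matrix P such that P^T A P lists the indices of alpha first."""
    comp = [i for i in range(n) if i not in alpha]
    order = list(alpha) + comp
    return np.eye(n)[:, order]

A = np.arange(16, dtype=float).reshape(4, 4) + 20.0 * np.eye(4)
alpha = [1, 3]
P = permutation_alpha_first(4, alpha)

print(np.linalg.norm(P, np.inf))                          # 1.0
B = P.T @ A @ P
print(np.allclose(B[:2, :2], A[np.ix_(alpha, alpha)]))    # True: leading block equals A(alpha)
```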
For a given nonempty proper subset α, there is a permutation matrix P such that
We may assume that A(\alpha) is nonsingular; let
in which case
and
where I_{1} (resp. I_{2}) is the identity matrix of order l (resp. m). If P is a permutation matrix, then P^{T} is also a permutation matrix, and ||P||_{\infty}=1. From the above we can obtain
Therefore, if upper bounds for ||F||_{\infty}, ||(EP^{T}APF)^{-1}||_{\infty} and ||E||_{\infty} can be obtained, then an upper bound for ||A^{-1}||_{\infty} is obtained as the product of these three bounds. It is not hard to verify that
and
In [20], Li gives an upper bound for ||E||∞ as follows:
Lemma 5. [20] Let A=[a_{ij}]\in C^{n\times n} be nonsingular with a_{ii}\neq 0 for i\in N, and \emptyset\neq\alpha\subset N. If A(\alpha) is nonsingular and
then,
Theorem 6. Let A=[a_{ij}]\in C^{n\times n} be an S-SOB matrix and D=[d_{ij}]\in C^{n\times m}. Then,
where R_{i}(D)=\sum\limits_{k\in M}|d_{ik}| and M=\{1,2,\ldots,m\}.
Proof. Since A=[a_{ij}]\in C^{n\times n} is an S-SOB matrix, we know from [1] that A is an H-matrix, and hence [\mu(A)]^{-1}\geq|A^{-1}|. Let
and e=(1,\ldots,1)^{T} be an m-dimensional vector; consequently,
Since S\subset N , let {{\psi}_{p}} = \underset{k\in S}{\mathop{\max }}\, \{{{\psi}_{k}}\} and {{\psi}_{q}} = \underset{k\in \bar{S}}{\mathop{\max }}\, \{{{\psi}_{k}}\}. It follows that
If {{\psi}_{p}}\geq {{\psi}_{q}} , then,
That is to say, if {{\psi}_{p}}\geq {{\psi}_{q}} , r_{p}^{\bar{S}}(A) = 0 , then,
and
If {{\psi}_{p}}\geq {{\psi}_{q}} , r_{p}^{\bar{S}}(A)\neq0 , then,
and
Multiplying Eq (3.10) by |{{a}_{qq}}| and Eq (3.11) by r_{p}^{\bar{S}}(A) and adding, we have
Thus,
If {{\psi}_{q}}\geq {{\psi}_{p}} , similarly,
When r_{q}^{S}(A) = 0 , \sum\limits_{k\in M}|{{d}_{qk}}|\geq[|{{a}_{qq}}|-r_{q}^{\bar{S}}(A)]{{\psi}_{q}} .
When r_{q}^{S}(A)\neq0 , then
Multiplying Eq (3.14) by r_{q}^{S}(A) and Eq (3.15) by |{{a}_{pp}}| and adding, we have
Consequently,
The conclusion follows from inequalities Eqs (3.9), (3.12), (3.13) and (3.16). □
Replacing A and D in Theorem 6 with A(\alpha) and A(\alpha, \bar{\alpha}) , respectively, yields Corollary 4.
Corollary 4. Let A = \left[{{a}_{ij}} \right]\in {{{C}}^{n\times n}} be an S -SOB matrix and \emptyset\neq\alpha\subset N , then, ||F||_{\infty}\leq1+\max\{\max\limits_{i\in\alpha}\frac{R_{i}[A(\alpha, \bar{\alpha})]}{|a_{ii}|-r_{i}[A(\alpha)]}, \; \beta(\alpha), \; \gamma(\alpha), \lambda(\alpha)\}, where
Proof. If \alpha\subseteq S or \alpha\subseteq\bar{S} , then A(\alpha) is an SDD matrix (by Lemma 2). Thus,
From Lemma 3, we have
(1) if S\subset\alpha , A(\alpha) is an S -SOB matrix, then
Hence, ||F||_{\infty }\leq 1+\beta(\alpha) .
(2) If \bar{S}\subset\alpha , A(\alpha) is an \bar{S} -SOB matrix, then
Accordingly, ||F||_{\infty }\leq 1+\gamma(\alpha) .
(3) If \alpha is contained neither in S nor in \bar{S} , A(\alpha) is an (S\cap\alpha) -SOB matrix, then we have
Hence, ||F||_{\infty }\leq 1+\lambda(\alpha) . The proof is completed. □
Lemma 6. Let A = \left[{{a}_{ij}} \right]\in {{{C}}^{n\times n}} be an S -SOB matrix and x = [\mu(A(\alpha))]^{-1}y^{T} , where \alpha\subseteq S or \alpha\subseteq \bar{S}. Let x = (x_{1}, x_{2}, \cdots, x_{k}) , y = (y_{1}, y_{2}, \cdots, y_{k}) with y_{k} > 0 for each k , and x_{g} = \max\limits_{i_{k}\in\alpha} x_{k} ; then
Proof. Note that x = [\mu(A(\alpha))]^{-1}y^{T} , so [\mu(A(\alpha))]x = y^{T}. Since \alpha\subseteq S or \alpha\subseteq \bar{S} , from Lemma 2, A(\alpha) is an SDD matrix and hence an H -matrix, so [\mu(A(\alpha))]^{-1}\geq0 by Eq (1.3). Then
which gives x_{g}\leq\frac{y_{g}}{|a_{i_{g}i_{g}}|-\sum\limits_{i_{v}\in \alpha}|a_{i_{g}i_{v}}|} = \frac{y_{g}}{|a_{i_{g}i_{g}}|- r_{i_{g}}^{\alpha}(A)}. Consequently, 0 \leq x_{k} \leq\max\limits_{i_{v}\in \alpha}\frac{y_{v}}{|a_{i_{v}i_{v}}|- r_{i_{v}}^{\alpha}(A)}, \; i_{k}\in\alpha . □
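A quick numerical check of Lemma 6 (our illustration; the matrix M below is a made-up SDD block playing the role of A(\alpha), and y is a made-up positive vector):

```python
import numpy as np

def comparison(M):
    """Comparison matrix mu(M): |m_ii| on the diagonal, -|m_ij| off the diagonal."""
    C = -np.abs(M)
    np.fill_diagonal(C, np.abs(np.diag(M)))
    return C

M = np.array([[5.0, -1.0,  2.0],      # SDD, as A(alpha) is by Lemma 2
              [1.0,  4.0, -1.5],
              [0.5,  1.0,  3.0]])
y = np.array([1.0, 2.0, 0.5])         # positive right-hand side

x = np.linalg.solve(comparison(M), y)                           # x = [mu(A(alpha))]^{-1} y
gap = np.abs(np.diag(M)) - (np.sum(np.abs(M), axis=1) - np.abs(np.diag(M)))
bound = np.max(y / gap)               # max_v y_v / (|a_{i_v i_v}| - r_{i_v}^alpha(A))
print(np.all(x >= 0), np.all(x <= bound + 1e-12))               # True True
```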
Lemma 7. Let A = \left[{{a}_{ij}} \right]\in {{{C}}^{n\times n}} be an S -SOB matrix with x, \; y^{T} as in Lemma 6. If \alpha is contained neither in S nor in \bar{S} and x_{g} = \max\limits_{i_{k}\in\alpha} x_{k} , then
where
Proof. When \alpha is contained neither in S nor in \bar{S} , A(\alpha) is an (S\cap\alpha) -SOB matrix, so is \mu(A(\alpha)) . Thus,
Replacing A and D in Theorem 6 with [\mu(A(\alpha))]^{-1} and y^{T} , respectively, yields
which implies that 0 \leq x_{k} \leq \pi_{y^{T}}(\alpha), \; i_{k}\in\alpha. □
For convenience, the notation for A/\alpha in this section is the same as in Section 2, and we denote:
I = (1, 1, \cdots, 1)^{T} is a column vector of order k.
Theorem 7. Let A = ({{a}_{ij}})\in {{C}^{n\times n}} , n\ge 2 , be an S -SOB matrix satisfying {{a}_{ii}} > r_{i}(A) whenever {{a}_{ij}} = 0 and {{a}_{jj}} > r_{j}(A) whenever {{a}_{ji}} = 0 , for i\in S, j\in \bar{S}. Denote A/\alpha = (a^{'}_{j_{t}j_{s}}) . If \alpha\subset S , then,
where \theta_{1}(\alpha) = \max\{\max\limits_{i\in\alpha}\frac{1}{|a_{ii}|-r_{i}(A(\alpha))}, \; \eta_{1}(\alpha)\},
Proof. By Lemma 2, we know A(\alpha) is an SDD matrix. Applying Varah's bound to A(\alpha) , we get
By Corollary 4, we have
By Theorem 4, it is easy to know A/\alpha\in\{ GDSDD _{n-k}^{(S\setminus\alpha), \bar{S}}\} . Therefore, from Theorem 3,
And then
Similarly,
Let
Furthermore, by Eqs (3.21)–(3.23), we have
Finally, by Eqs (3.2), (3.3), (3.19), (3.20) and (3.24), the conclusion follows. □
The following corollary can be naturally drawn from Theorem 7:
Corollary 5. Let A = ({{a}_{ij}})\in {{C}^{n\times n}} , n\ge 2 , be an S -SOB matrix satisfying {{a}_{ii}} > r_{i}(A) whenever {{a}_{ij}} = 0 and {{a}_{jj}} > r_{j}(A) whenever {{a}_{ji}} = 0 , for i\in S, j\in \bar{S}. Denote A/\alpha = (a^{'}_{j_{t}j_{s}}) . If \alpha\subset \bar{S} , then,
where \theta_{2}(\alpha) = \max\{\max\limits_{i\in\alpha}\frac{1}{|a_{ii}|-r_{i}(A(\alpha))}, \; \eta_{2}(\alpha)\},
Theorem 8. Let A = ({{a}_{ij}})\in {{C}^{n\times n}} , n\ge 2 , be an S -SOB matrix satisfying {{a}_{ii}} > r_{i}(A) whenever {{a}_{ij}} = 0 and {{a}_{jj}} > r_{j}(A) whenever {{a}_{ji}} = 0 , for i\in S, j\in \bar{S}. Denote A/\alpha = (a^{'}_{j_{t}j_{s}}) . If \alpha is contained neither in S nor in \bar{S} , then,
where \theta_{3}(\alpha) = \max\{\delta_{1}(\alpha), \; \eta_{3}(\alpha)\},
Proof. By Lemma 3, we know A(\alpha) is an (S\cap\alpha) -SOB matrix. Applying the bound of Theorem 2 to A(\alpha) , we get
By Corollary 4, we have
By Corollary 2, we know A/\alpha\in\{ GDSDD _{n-k}^{(S\setminus\alpha), (\bar{S}\setminus\alpha)}\} . Therefore,
And then,
Let y^{T} = \mathbf{y_{1}} = \sum\limits_{j_{k}\in (\bar{S}\setminus\alpha)}|w_{j_{k}}| , with y^{T} as in Lemma 7; we get
In like manner, let y^{T} = \mathbf{y_{2}} = \sum\limits_{j_{k}\in (S\setminus\alpha)}|w_{j_{k}}| , with y^{T} as in Lemma 7; we get
Let
Furthermore, by Eqs (3.28)–(3.30), we have
Finally, by Eqs (3.2), (3.3), (3.25), (3.26) and (3.31), the conclusion follows. □
Theorem 9. Let A = \left[{{a}_{ij}} \right]\in {{{C}}^{n\times n}} be an S -SOB matrix, \emptyset\neq\alpha = S . If Eq (3.7) holds, then,
where \theta_{4}(\alpha) = \max\{\max\limits_{i\in\alpha}\frac{1}{|a_{ii}|-r_{i}(A(\alpha))}, \; \eta_{4}(\alpha)\},
In particular, when \emptyset\neq\alpha = S = \{i\} ,
\theta_{4}'(\alpha) = \max\{\frac{1}{|a_{ii}|}, \; \eta_{4}'(\alpha)\},
Proof. By Lemma 2, we know A(\alpha) is an SDD matrix. ||A(\alpha)^{-1}||_{\infty} is the same as Eq (3.19), and ||F||_{\infty} is the same as Eq (3.20). By Theorem 5, A/\alpha is an SDD matrix. Therefore,
Finally, by Eqs (3.2), (3.3), (3.19), (3.20) and (3.32), the conclusion follows. □
A proof similar to that of Theorem 9 leads to the following result.
Corollary 6. Let A = \left[{{a}_{ij}} \right]\in {{{C}}^{n\times n}} be an S -SOB matrix, where \emptyset\neq\alpha = \bar{S} . If Eq (3.7) holds, then,
where \theta_{5}(\alpha) = \max\{\max\limits_{i\in\alpha}\frac{1}{|a_{ii}|-r_{i}(A(\alpha))}, \; \eta_{5}(\alpha)\},
In particular, when \emptyset\neq\alpha = \bar{S} = \{i\} ,
\theta_{5}'(\alpha) = \max\{\frac{1}{|a_{ii}|}, \; \eta_{5}'(\alpha)\},
Theorem 10. Let A = \left[{{a}_{ij}} \right]\in {{{C}}^{n\times n}} be an S -SOB matrix, where S\subset\alpha . If Eq (3.7) holds, then,
where \theta_{6}(\alpha) = \max\{\delta_{2}(\alpha), \; \eta_{6}(\alpha)\},
Proof. A(\alpha) is an S -SOB matrix (by Lemma 3). Thus,
From Corollary 4, we know
By Corollary 3, A/\alpha is an SDD matrix. Therefore,
Finally, by Eqs (3.2), (3.3), (3.33), (3.34) and (3.35), the conclusion follows. □
According to Theorem 10, the following result follows naturally.
Corollary 7. Let A = \left[{{a}_{ij}} \right]\in {{{C}}^{n\times n}} be an S -SOB matrix, \bar{S}\subset\alpha . If Eq (3.7) holds, then
where \theta_{7}(\alpha) = \max\{\delta_{3}(\alpha), \; \eta_{7}(\alpha)\},
\eta_{7}(\alpha) = \max\limits_{{i\in(S\setminus\alpha) }}\frac{1} {|{{a}_{ii}}|-r_{i}^{(S\setminus\alpha)}(A) -|v_{i}|[\mu(A(\alpha))]^{-1}\sum\limits_{k\in (S\setminus\alpha)}|w_{k}|}.
Theorem 11. Let A = ({{a}_{ij}})\in {{C}^{n\times n}} , n\ge 3 , be an S -SOB matrix satisfying {{a}_{ii}} > r_{i}(A) whenever {{a}_{ij}} = 0 and {{a}_{jj}} > r_{j}(A) whenever {{a}_{ji}} = 0 , for i\in S, j\in \bar{S}. Denote A/\alpha = (a^{'}_{j_{t}j_{s}}) ; then,
where \Gamma_{i}(A) = (1+\frac{\max\limits_{j\in N, \atop j\neq i}|a_{ji}|}{|a_{ii}|})(1+\frac{\max\limits_{j\in N, \atop j\neq i}|a_{ij}|}{|a_{ii}|})\tilde{\Gamma}_{i}(A),
and c_{jk} = a_{jk}-\frac{a_{ji}a_{ik}}{a_{ii}}.
Proof. Since A is an S -SOB matrix, by Lemma 2 and Theorem 5, we know A(\alpha) and A/\alpha are nonsingular. Therefore, taking \alpha = \{i\} , we have A(\alpha) = a_{ii} , \bar{\alpha} = N\setminus\{i\} , and
Since A/\alpha = (a^{'}_{j_{t}j_{s}}) , we have |a^{'}_{j_{t}j_{s}}| = |a_{j_{t}j_{s}}-\frac{a_{j_{t}i}a_{ij_{s}}}{a_{ii}}| = |c_{j_{t}j_{s}}| for j_{t}, j_{s}\in (N\setminus \{i\}) . By calculation, we obtain, for j_{t}\in (S\setminus \{i\}), j_{s}\in (\bar{S}\setminus \{i\}) ,
By Eq (3.27), we have
Finally, by Eqs (3.36), (3.37), (3.38) and (3.39) the conclusion follows. □
We illustrate our results by the following examples:
Example 1. Consider the tri-diagonal n\times n matrix
Let b = 1.5 and n = 10000 . It is easy to verify that A is an SDD matrix, and hence it is also an S -SOB, DSDD , GDSDD and DZ matrix. Therefore, the bounds obtained from Theorem 1 and the other results are reported in Table 1.
Actually, \|A^{-1}\|_{\infty} = 0.0002 . This example shows that the bound in Theorem 11 is superior to the other theorems in some cases.
Example 2. Consider matrix
By computation, the matrix A is an S -SOB matrix and S = \{2, 3, 5\} . According to Theorem 2, we obtain
According to Theorem 11, it is easy to get
In practice, ||{{A}^{-1}}||_{\infty } = 0.2155 . Obviously, the bound in Theorem 11 is superior to that of Theorem 2 in some cases.
Example 3. Consider matrix
Obviously, the matrix A is an SDD matrix, and it is also an S -SOB matrix with S = \{2, 3, 4, 5, 8\} . According to Theorem 1, we can obtain
According to Theorem 2, we can obtain
According to Theorem 11, we can obtain
In fact, ||{{A}^{-1}}||_{\infty } = 0.0497. This example shows that the bound in Theorem 11 is superior to Theorems 1 and 2 in some cases.
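The exact values of \|A^{-1}\|_{\infty} quoted in Examples 1–3 can be reproduced for any concrete matrix with a few lines of code (a generic sketch of ours; the example matrices themselves are given in the displays above and are not repeated here).

```python
import numpy as np

def inf_norm_of_inverse(A):
    """Exact ||A^{-1}||_inf: the maximum absolute row sum of A^{-1}."""
    return np.max(np.sum(np.abs(np.linalg.inv(A)), axis=1))

# Small placeholder matrix; for a large case such as the tridiagonal matrix of
# Example 1, a sparse factorization should be used instead of a dense inverse.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
print(inf_norm_of_inverse(A))
```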
4. Error bounds for LCPs of S-SOB matrices
In this section, we apply the results of Section 3 to linear complementarity problems (LCPs) to obtain two kinds of error bounds for LCPs of S -SOB matrices. We first give some lemmas that will be used in the following theorems:
Lemma 8. [29] Let \gamma > 0 and \eta\geq 0 . Then, for any x\in [0, 1] ,
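The displayed inequalities of Lemma 8 are not reproduced above; in the LCP error-bound literature this lemma is usually stated as \frac{1}{1-x+\gamma x}\leq\frac{1}{\min\{\gamma,1\}} and \frac{\eta x}{1-x+\gamma x}\leq\frac{\eta}{\gamma}, and we assume that form here. The following sketch only checks these assumed inequalities numerically on a grid.

```python
import numpy as np

# Assumed form of Lemma 8: for gamma > 0, eta >= 0 and x in [0, 1],
#   1/(1 - x + gamma*x) <= 1/min(gamma, 1)  and  eta*x/(1 - x + gamma*x) <= eta/gamma.
x = np.linspace(0.0, 1.0, 1001)
for gamma in (0.3, 1.0, 2.5):
    for eta in (0.0, 0.7, 4.0):
        den = 1.0 - x + gamma * x
        assert np.all(1.0 / den <= 1.0 / min(gamma, 1.0) + 1e-12)
        assert np.all(eta * x / den <= eta / gamma + 1e-12)
print("assumed inequalities hold on the sampled grid")
```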
Lemma 9. Suppose that M = (m_{ij})\in \mathbb{R}^{n\times n} is an S-SOB matrix with positive diagonal entries, and let \tilde{M} = I-D+DM ;
then, \tilde{M} is also a real S-SOB matrix with positive diagonal entries, where D = diag(d_{1}, \cdots, d_{n}) , d_{i}\in[0, 1] .
Proof. Note that
Hence, for each i\in S , j\in \bar{S} ,
Then, for any i\in S , j\in\bar{S} , d_{i}\in(0, 1) , we have
For any i\in S , j\in\bar{S} , we get
When d_{i} = 0 , \tilde{m}_{ii} = 1-d_{i}+d_{i}m_{ii} = 1 , we obtain
When d_{i} = 1 , \tilde{m}_{ij} = 1-d_{i}+d_{i}m_{ij} = m_{ij} , then
As d_{i}\in[0, 1] , conditions (i)–(iv) in Definition 1 are fulfilled for all i\in S and j\in \bar{S} . So the conclusion follows. □
Lemma 9 indicates that \tilde{M} is an S -SOB matrix when M is an S -SOB matrix. We now present an error bound for the linear complementarity problem of S -SOB matrices. The following theorem is one of our main results; it gives an upper bound on the condition constant \max_{d\in[0, 1]^{n}}\|(I-D+DA)^{-1}\|_{\infty} when A is an S -SOB matrix.
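For context, this condition constant enters the classical LCP error bound \|x-x^{*}\|_{\infty}\leq\max_{d\in[0,1]^{n}}\|(I-D+DA)^{-1}\|_{\infty}\,\|\min(x, Ax+q)\|_{\infty}, where x^{*} solves the LCP(A,q) and the minimum is taken componentwise. The sketch below (our own, with a made-up matrix, test point and constant) shows how an upper bound \beta on the constant, such as the one provided by Theorem 12, would be used.

```python
import numpy as np

def lcp_error_bound(A, q, x, beta):
    """Error estimate ||x - x*||_inf <= beta * ||min(x, A x + q)||_inf,
    where beta is any upper bound on max_{d in [0,1]^n} ||(I - D + D A)^{-1}||_inf,
    e.g., the bound supplied by Theorem 12 for an S-SOB matrix A."""
    residual = np.minimum(x, A @ x + q)          # componentwise natural residual
    return beta * np.max(np.abs(residual))

# Toy usage: x is an approximate LCP solution, beta stands in for the Theorem 12 bound.
A = np.array([[4.0, -1.0],
              [-1.0, 4.0]])
q = np.array([-1.0, 2.0])
x = np.array([0.26, 0.0])
beta = 0.5
print(lcp_error_bound(A, q, x, beta))
```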
Theorem 12. Let A = (a_{ij})\in\mathbb{R}^{n\times n} be an S-SOB matrix with positive diagonal entries, and \tilde{A} = [\tilde{a_{ij}}] = I-D+DA , where D = diag(d_{i}) with 0\leq d_{i}\leq 1 . Then
where
and \varsigma_{j}^{S}(A) = \frac{1-d_{j}+d_{j}a_{jj}}{1-d_{t}+d_{t}a_{tt}}-\frac{a_{ji}a_{ij}}{a_{ii}a_{jj}}-\sum\limits_{p\in S, \atop p\neq j, i} \frac{a_{jp}}{a_{jj}}-\sum\limits_{p\in S, \atop p\neq j, i}\frac{a_{ji}a_{ip}}{a_{ii}a_{jj}} .
Proof. Because \tilde{A} = (\tilde{a_{ij}}) = (I-D+DA) , we know from Lemma 9 that \tilde{A} is an S-SOB matrix with positive diagonal entries. By Theorem 11, the following inequality holds
where \Gamma_{i}(\tilde{A}) = (1+\frac{\max\limits_{j\in N, \atop j\neq i}|\tilde{a_{ji}}|}{|\tilde{a_{ii}}|})(1+\frac{\max\limits_{j\in N, \atop j\neq i}|\tilde{a_{ij}}|}{|\tilde{a_{ii}}|})\tilde{\Gamma}_{i}(\tilde{A}),
and \tilde{c_{jk}} = \tilde{a_{jk}}-\frac{\tilde{a_{ji}}\tilde{a_{ik}}}{\tilde{a_{ii}}}.
Since \tilde{A} = I-D+DA , we have \tilde{a_{ii}} = 1-d_{i}+d_{i}a_{ii} and \tilde{a_{ij}} = d_{i}a_{ij} for all i, j\in N with i\neq j .
Similarly, we have
By Lemma 8, it is easy to get
Denote 1-d_{t}+d_{t}a_{tt} = \max_{i\in N}\{1-d_{i}+d_{i}a_{ii}\} . From Lemmas 8 and 9, we get
where \varsigma_{j}^{S}(A) = \frac{1-d_{j}+d_{j}a_{jj}}{1-d_{t}+d_{t}a_{tt}}-\frac{a_{ji}a_{ij}}{a_{ii}a_{jj}}-\sum\limits_{p\in S, \atop p\neq j, i} \frac{a_{jp}}{a_{jj}}-\sum\limits_{p\in S, \atop p\neq j, i}\frac{a_{ji}a_{ip}}{a_{ii}a_{jj}} . In a similar way, we know
So, from Eqs (4.2)–(4.6), the conclusion follows. The proof is completed. □
5. Conclusions
Based on the fact that the Schur complement of an S -SOB matrix is a GDSDD matrix, we give infinity norm bounds for the inverse of the S -SOB matrix based on the Schur complement. By using these infinity norm bounds, an error bound is given for the linear complementarity problem of the S -SOB matrix.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgements
This work was supported by the Natural Science Research Project of the Department of Education of Guizhou Province (Grant Nos. QJJ2022015, QJJ2022047, QJJ2023012, QJJ2023061 and QJJ2023062).
Conflict of interest
The authors declare no conflict of interest.