Non-singular H-matrices represent an important research frame in the analysis of many classical problems of numerical linear algebra, as well as in applications in engineering, health, information sciences, and social studies. As identification of H-matrices was never an easy task, a research area was formed around some special subclasses of H-matrices, characterized by checkable conditions, i.e., inequalities expressed via matrix entries only. In this paper, we introduced new conditions for a given matrix to be a non-singular H-matrix. We introduced a new special subclass of non-singular H-matrices and applied the new criterion to obtain results on the infinity norm of the inverse matrix, errors in linear complementarity problems, and estimation of the minimal singular value. Also, results on spectra of the Schur complement matrix were given in the form of scaled disks and in the form of intervals that include or exclude real parts of eigenvalues. Results were interpreted in the light of mixed linear complementarity problems. Numerical examples illustrated the improvements obtained by applications of the new criteria.
Citation: Maja Nedović, Dunja Arsić. New scaling criteria for H-matrices and applications[J]. AIMS Mathematics, 2025, 10(3): 5071-5094. doi: 10.3934/math.2025232
The class of H-matrices (named after Hadamard) has proved to be of great value in many classical problems of numerical linear algebra. This matrix class provides a frame for extensive research on the stability of dynamical systems, with emphasis on matrix spectra and spectra-related parameters. Information on eigenvalue localization within a prescribed area of the complex plane, as a sufficient condition for a certain type of stability, can be obtained from the analysis of special H-matrices. Special H-matrices also play a role in dealing with linear complementarity problems, both in questions of existence and uniqueness of solutions and in error analysis.
H-matrices are closely related to the well-known Minkowski or M-matrices. A matrix A=[a_ij] ∈ C^{n,n} is an H-matrix if its comparison matrix ⟨A⟩=[m_ij], defined by
$$\langle A\rangle=[m_{ij}]\in\mathbb{C}^{n,n},\qquad m_{ij}=\begin{cases}|a_{ii}|, & i=j,\\ -|a_{ij}|, & i\neq j,\end{cases}$$
is an M-matrix, i.e., ⟨A⟩^{-1} ≥ 0 entrywise; for more details, see [3,14]. H-matrices are also related to P-matrices, real square matrices with all principal minors positive. A real H-matrix with positive diagonal entries is a P-matrix. In the literature, H-matrices also appear as GSDD matrices, or generalized strictly diagonally dominant matrices, as they represent a generalization of strictly diagonally dominant (SDD) matrices.
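This characterization is directly checkable in a few lines. The following is a minimal numerical sketch (the helper names are ours, not from the paper); the M-matrix test used is that ⟨A⟩ is invertible with ⟨A⟩^{-1} ≥ 0 entrywise:

```python
import numpy as np

def comparison_matrix(A):
    """Build <A>: moduli of the diagonal entries, negated moduli off the diagonal."""
    M = -np.abs(np.asarray(A, dtype=float))
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_h_matrix(A, tol=1e-12):
    """A is a non-singular H-matrix iff <A> is a non-singular M-matrix,
    i.e., <A> is invertible and <A>^{-1} >= 0 entrywise."""
    M = comparison_matrix(A)
    try:
        Minv = np.linalg.inv(M)
    except np.linalg.LinAlgError:
        return False
    return bool(np.all(Minv >= -tol))
```

For instance, any SDD matrix passes this test, while a hollow matrix such as [[0,1],[1,0]] fails it.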
Motivation for the research on special H-matrices comes from the fact that diagonal dominance and related matrix properties play an important role in matrix splitting methods and convergence analysis of iterative procedures for solving large sparse systems of linear equations, eigenvalue localization problems, norm of the inverse and condition number estimation, and linear complementarity problems. As the SDD condition is very limiting, results developed for SDD matrices, such as Varah's bound for the norm of the inverse (see [46]), cannot be applied to wider matrix classes. Therefore, we are motivated to consider modifications of the SDD condition and obtain new estimates applicable to new classes of matrices.
In this paper, we obtain new bounds for the norm of the inverse that work in some cases when well-known bounds cannot be applied. Also, for some matrices, both new bounds and some already known bounds are applicable, and we show through numerical examples that new bounds can give tighter results. As the condition number of a matrix is defined as
$$\kappa(A)=\|A\|\,\|A^{-1}\|,$$
results on norm bounds directly imply estimates for the condition number. The condition number helps us recognize ill-conditioned problems, in which a small relative change in the input data produces a high relative change in the output. Therefore, upper estimates for the condition number have important applications in engineering, in the control of perturbation effects.
Another important benefit of new estimates for the inverse is the possibility to apply new results to block H-matrices. It is important to emphasize that block H-matrices do not have to be H-matrices in the classical sense. In these cases, bounds developed for H-matrices cannot be applied, but block approach could still work. This is of great importance in ecological models describing interactions between several populations (see [24]) with self-interactions in community matrices equal to zero. Matrices with zero diagonal entries, or 'hollow matrices', do not belong to H-matrices in the classical sense, but they could be block H-matrices with respect to some partition of the index set and they could be approached using tools of H-matrix theory.
New norm estimations are further applied in linear complementarity problems. Also, we obtain eigenvalue localization for the Schur complements of special H-matrices that can be applied in convergence analysis in Schur-based iterative methods.
Concept of diagonal dominance and related subclasses of H-matrices are widely applied in analysis of stability and other dynamical properties of complex real systems, in wireless sensor networks, structural engineering, and economy.
The paper by Fiedler and Pták (see [17]) brought a characterization of H-matrices through their scaling relation to SDD matrices. Fiedler and Pták proved that a matrix A=[aij]∈Cn,n is an H-matrix if, and only if, there exists a diagonal non-singular matrix W such that AW is SDD. This definition of (non-singular) H-matrices is also known as the 'scaling characterization', while the diagonal matrix W (that can be chosen to have all positive diagonal entries) is called the 'scaling matrix'. A complex square matrix A=[aij]∈Cn,n is an SDD matrix if, for all indices i∈N,
$$|a_{ii}|>r_i(A)=\sum_{j\in N,\,j\neq i}|a_{ij}|.$$
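In computational terms, the SDD condition reads as follows (a minimal sketch; the function name is ours, and the matrices used below are toy illustrations):

```python
import numpy as np

def is_sdd(A):
    """Strict diagonal dominance: |a_ii| > r_i(A) for every row i."""
    B = np.abs(np.asarray(A, dtype=float))
    r = B.sum(axis=1) - np.diag(B)      # deleted row sums r_i(A)
    return bool(np.all(np.diag(B) > r))
```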
While the SDD condition is expressed in a very elegant manner and easily checkable with low calculation cost, identification of H-matrices was never that simple. In literature, there are various iterative or direct criteria for recognizing special H-matrices; see [4,10,15,16,25,26,30,42]. Through slight relaxations of the SDD condition, using different approaches, new special subclasses of H-matrices were defined. These new conditions could be seen as different ways of 'breaking the SDD condition'.
Although scaling characterization connects H-matrices to SDD matrices through a very simple diagonal scaling transformation, in most cases these scaling relations are unknown to us, as we are not able to easily determine the scaling matrix. In [8], new criteria for identifying non-singular H-matrices were presented based on a special form of scaling. Motivated by these results, in this paper we introduce new criteria for identifying special H-matrices based on special scaling relation of the new class to the class of Nekrasov matrices; see [25,34].
It is worth mentioning that scaling methods can be interpreted in different ways and prove to be useful in a wide range of practical applications in which the purpose of scaling is to obtain prescribed row and column sums in a nonnegative matrix; see [2,3]. In the analysis of budget allocation problems, transportation problems, Leontief input-output systems, Markov chains, and related mathematical models of real-life systems, the concept of scaling plays an important role.
In the remainder of this section, we recall well-known criteria for identifying some special H-matrices. In Section 2, we introduce a new subclass of H-matrices and discuss relation of the new class to well-known matrix classes. In Section 3, we present possible applications of the new class: Estimation of the infinity norm of the inverse, bounds for minimal singular value, error estimation for linear complementarity problems, and preliminary eigenvalue localizations for Schur complements of matrices in the new class. The fourth section consists of concluding remarks.
In [8], a scaling-based condition for identifying non-singular H-matrices was given as follows. Let A=[aij]∈Cn,n, n≥2 be a matrix with nonzero diagonal entries. Denote
$$R_i(A)=\sum_{k\in N\setminus\{i\}}\frac{r_k(A)}{|a_{kk}|}|a_{ik}|,\qquad i\in N.$$
If
$$r_i(A)\,r_j(A)>R_i(A)\,R_j(A)\quad\text{for all }i,j\in N,\ i\neq j,\tag{1.1}$$
then A is an H-matrix. In further consideration, we say that a matrix is an O-scal matrix if it satisfies the condition (1.1). The proof of the statement that O-scal matrices are non-singular H-matrices, given in [8], relies on the special scaling relation between O-scal matrices and well-known Ostrowski matrices; see [41]. Recall that a matrix A=[aij]∈Cn,n, n≥2 is an Ostrowski matrix if
$$|a_{ii}|\,|a_{jj}|>r_i(A)\,r_j(A)\quad\text{for all }i,j\in N,\ i\neq j.\tag{1.2}$$
In [8], it was proved that for an O-scal matrix A and diagonal matrix
$$Q=\operatorname{diag}(q_k),\qquad q_k=\frac{r_k(A)}{|a_{kk}|},\quad k=1,\dots,n,\tag{1.3}$$
the matrix AQ is an Ostrowski matrix.
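The O-scal condition (1.1), the Ostrowski condition (1.2), and the scaling relation through Q from (1.3) can all be checked numerically. The following is a sketch (function names are ours):

```python
import numpy as np
from itertools import combinations

def deleted_row_sums(A):
    B = np.abs(np.asarray(A, dtype=float))
    return B.sum(axis=1) - np.diag(B)

def is_ostrowski(A):
    """Condition (1.2): |a_ii||a_jj| > r_i(A) r_j(A) for all i != j."""
    d = np.abs(np.diag(np.asarray(A, dtype=float)))
    r = deleted_row_sums(A)
    return all(d[i] * d[j] > r[i] * r[j]
               for i, j in combinations(range(len(d)), 2))

def is_o_scal(A):
    """Condition (1.1): r_i(A) r_j(A) > R_i(A) R_j(A) for all i != j."""
    A = np.asarray(A, dtype=float)
    d = np.abs(np.diag(A))
    r = deleted_row_sums(A)
    q = r / d                      # diagonal entries of the scaling matrix Q
    R = np.abs(A) @ q - d * q      # R_i(A) = sum_{k != i} |a_ik| r_k(A)/|a_kk|
    return all(r[i] * r[j] > R[i] * R[j]
               for i, j in combinations(range(len(r)), 2))
```

For an O-scal matrix A, the scaled matrix AQ then passes the Ostrowski test, illustrating the scaling relation stated above.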
Now we recall Nekrasov matrices; see [25,34]. Consider recursively defined row sums:
$$h_1(A)=\sum_{j\neq 1}|a_{1j}|,\qquad h_i(A)=\sum_{j=1}^{i-1}|a_{ij}|\frac{h_j(A)}{|a_{jj}|}+\sum_{j=i+1}^{n}|a_{ij}|,\quad i=2,3,\dots,n.$$
A matrix A=[aij]∈Cn,n,n≥2 is a Nekrasov matrix if, for each i∈N, it holds that
$$|a_{ii}|>h_i(A).$$
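The recursion for the Nekrasov row sums translates directly into code (a minimal sketch; helper names are ours):

```python
import numpy as np

def nekrasov_sums(A):
    """Recursively defined Nekrasov row sums h_1(A), ..., h_n(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = np.zeros(n)
    h[0] = np.sum(np.abs(A[0, 1:]))
    for i in range(1, n):
        h[i] = sum(abs(A[i, j]) * h[j] / abs(A[j, j]) for j in range(i)) \
               + np.sum(np.abs(A[i, i + 1:]))
    return h

def is_nekrasov(A):
    return bool(np.all(np.abs(np.diag(np.asarray(A, dtype=float))) > nekrasov_sums(A)))
```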
In literature, there are different propositions by different authors on how to construct a scaling matrix for the given Nekrasov matrix. We recall a construction presented in [45].
Theorem 1.1. [45] Let A=[a_ij] ∈ C^{n,n} be a Nekrasov matrix with all Nekrasov row sums nonzero. Then, for the diagonal positive matrix Y=diag(y_i), where y_i = ε_i h_i(A)/|a_ii|, i ∈ N, and (ε_i)_{i=1}^{n} is an increasing sequence of numbers with ε_1 = 1 and ε_i ∈ (1, |a_ii|/h_i(A)), i=2,…,n, the matrix AY is an SDD matrix.
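The construction of Theorem 1.1 can be sketched as follows. The particular increasing sequence (ε_i) used here, equidistant points pushed slightly inside the admissible interval by a margin factor, is our own choice, one of many valid ones:

```python
import numpy as np

def nekrasov_sums(A):
    """Recursively defined Nekrasov row sums h_i(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = np.zeros(n)
    h[0] = np.sum(np.abs(A[0, 1:]))
    for i in range(1, n):
        h[i] = sum(abs(A[i, j]) * h[j] / abs(A[j, j]) for j in range(i)) \
               + np.sum(np.abs(A[i, i + 1:]))
    return h

def nekrasov_scaling(A, margin=0.95):
    """Scaling matrix Y of Theorem 1.1: eps_1 = 1 and eps_2 < ... < eps_n
    spread inside (1, min_{i>=2} |a_ii|/h_i(A)); `margin` keeps them strictly inside."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = nekrasov_sums(A)
    d = np.abs(np.diag(A))
    assert np.all(d > h) and np.all(h > 0), "need a Nekrasov matrix with nonzero sums"
    t = np.min(d[1:] / h[1:])                        # common upper limit for eps_i
    eps = 1.0 + margin * (t - 1.0) * np.arange(n) / n  # increasing, eps[0] = 1
    return np.diag(eps * h / d)

# Sample Nekrasov matrix (not SDD: its last row is not strictly dominant).
M = np.array([[1, -0.1, 0, -0.5],
              [-0.1, 1, 0, -0.5],
              [-0.1, 0, 1, -0.5],
              [-0.4, -1, -0.3, 1]], dtype=float)
Y = nekrasov_scaling(M)
B = M @ Y    # SDD by Theorem 1.1
```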
In [42], Peña introduced a class of matrices named SDD1 matrices via a condition that generalizes the SDD condition by imposing a weaker demand on the rows that are not SDD.
Definition 1.2. A matrix A=[aij]∈Cn,n,n≥2, is an SDD1 matrix if
$$|a_{ii}|>p_i(A),\quad i\in N_A,\tag{1.4}$$
where
$$p_i(A)=\sum_{j\in N_A\setminus\{i\}}|a_{ij}|+\sum_{j\in\overline{N}_A\setminus\{i\}}\frac{r_j(A)}{|a_{jj}|}|a_{ij}|,$$
$$N_A=\{i\in N\,:\,|a_{ii}|\le r_i(A)\}\quad\text{and}\quad\overline{N}_A=\{i\in N\,:\,|a_{ii}|>r_i(A)\}.$$
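Definition 1.2 can be checked as follows (a sketch; the function name is ours):

```python
import numpy as np

def is_sdd1(A):
    """SDD1 test of Definition 1.2: |a_ii| > p_i(A) for every i in N_A."""
    B = np.abs(np.asarray(A, dtype=float))
    d = np.diag(B)
    r = B.sum(axis=1) - d
    NA = [i for i in range(len(d)) if d[i] <= r[i]]      # rows that are not SDD
    NA_bar = [i for i in range(len(d)) if d[i] > r[i]]
    for i in NA:
        p = sum(B[i, j] for j in NA if j != i) \
            + sum(B[i, j] * r[j] / d[j] for j in NA_bar if j != i)
        if d[i] <= p:
            return False
    return True
```

Note that when N_A is empty the matrix is SDD and the test trivially passes, so the SDD class is contained in SDD1.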
As SDD1 matrices have been researched recently by different authors, we discuss relations of the SDD1 condition to the new criteria presented in Section 2.
The well-known classes of Ostrowski (see [41]) and Dashnic-Zusmanovich matrices (see [15,16]) were defined based on a special treatment of one index in the index set; Nekrasov matrices (see [26,34,44]) were introduced via recursively defined row sums and further generalized through conditions involving permutations of the index set (see [10]); while SDD1 matrices (see [42]) involve a new type of row sums. All of these conditions (i.e., matrix classes) were further applied in estimations of the norm of the inverse matrix (see [13,18,28,29,37,38]), estimations of errors in linear complementarity problems (see [1,20,21,22,33,38]), eigenvalue localizations (see [47]), and analysis of Schur complements (see [9,11,12,39]).
In this section, we introduce new conditions defining a subclass of non-singular H-matrices. These new criteria are obtained as scaling modifications of the Nekrasov condition.
Given a matrix A=[aij]∈Cn,n,n≥2, with nonzero diagonal entries, let
$$H_1(A)=\sum_{j=2}^{n}|a_{1j}|\frac{h_j(A)}{|a_{jj}|},\qquad H_i(A)=\sum_{j=1}^{i-1}|a_{ij}|\frac{H_j(A)}{|a_{jj}|}+\sum_{j=i+1}^{n}|a_{ij}|\frac{h_j(A)}{|a_{jj}|},\quad i=2,\dots,n.$$
Definition 2.1. Given a matrix A=[aij]∈Cn,n,n≥2, with nonzero diagonal entries, then A is an N-scal matrix if
$$h_i(A)>H_i(A)\quad\text{for all }i\in N.$$
Theorem 2.2. If a matrix A=[aij]∈Cn,n,n≥2, is an N-scal matrix, then A is a non-singular H-matrix.
Proof. Let us consider the diagonal matrix X=diag(xk), with
$$x_k=\frac{h_k(A)}{|a_{kk}|},\quad k\in N.$$
For the diagonal matrix X defined in that way, let us prove that AX is a Nekrasov matrix. It is easy to see that
$$|(AX)_{kk}|=|a_{kk}|\,x_k=|a_{kk}|\frac{h_k(A)}{|a_{kk}|}=h_k(A).$$
We now prove by induction that for all i∈N,
$$h_i(AX)=H_i(A).$$
First, for i=1,
$$h_1(AX)=\sum_{j\neq 1}|a_{1j}|\,x_j=\sum_{j\neq 1}|a_{1j}|\frac{h_j(A)}{|a_{jj}|}=H_1(A).$$
Consider i∈{2,3,…,n}. Assume that for all indices k∈{1,2,…,i−1}, it holds that
$$h_k(AX)=H_k(A).$$
Then,
$$h_i(AX)=\sum_{j=1}^{i-1}|a_{ij}|x_j\frac{h_j(AX)}{|a_{jj}|x_j}+\sum_{j=i+1}^{n}|a_{ij}x_j|=\sum_{j=1}^{i-1}|a_{ij}|\frac{H_j(A)}{|a_{jj}|}+\sum_{j=i+1}^{n}|a_{ij}|\frac{h_j(A)}{|a_{jj}|}=H_i(A),\quad i=2,3,\dots,n.$$
Therefore, for each i ∈ N, |(AX)_ii| = h_i(A) > H_i(A) = h_i(AX), so AX is a Nekrasov matrix. Since Nekrasov matrices are non-singular H-matrices and X is a non-singular diagonal matrix, A is a non-singular H-matrix.
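Definition 2.1 and the scaling step used in this proof can be verified numerically. The following is a sketch (helper names are ours; the sample matrix is only an illustration):

```python
import numpy as np

def nekrasov_sums(A):
    """Nekrasov row sums h_i(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = np.zeros(n)
    h[0] = np.sum(np.abs(A[0, 1:]))
    for i in range(1, n):
        h[i] = sum(abs(A[i, j]) * h[j] / abs(A[j, j]) for j in range(i)) \
               + np.sum(np.abs(A[i, i + 1:]))
    return h

def n_scal_sums(A):
    """Recursively defined sums H_i(A) from this section."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = nekrasov_sums(A)
    d = np.abs(np.diag(A))
    H = np.zeros(n)
    H[0] = sum(abs(A[0, j]) * h[j] / d[j] for j in range(1, n))
    for i in range(1, n):
        H[i] = sum(abs(A[i, j]) * H[j] / d[j] for j in range(i)) \
               + sum(abs(A[i, j]) * h[j] / d[j] for j in range(i + 1, n))
    return H

def is_n_scal(A):
    return bool(np.all(nekrasov_sums(A) > n_scal_sums(A)))

# Sample N-scal matrix and the scaling X = diag(h_k(A)/|a_kk|) from the proof.
M = np.array([[8.0, 4, 2, 3],
              [2, 9, 2, 5],
              [1, 2, 11, 6],
              [2, 2, 2, 6]])
X = np.diag(nekrasov_sums(M) / np.abs(np.diag(M)))
MX = M @ X    # Nekrasov by Theorem 2.2, with h_i(MX) = H_i(M)
```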
The N-scal class is derived from the Nekrasov class and, just like the Nekrasov class, it is not closed under simultaneous permutations of rows and columns, as the following example shows.
Example 2.3. Consider
$$A_0=\begin{bmatrix}3.4 & 0 & 0 & 0.4\\ 2.9 & 7 & 2.2 & 0\\ 0.1 & 3 & 4.1 & 1.3\\ 0 & 0 & 5.6 & 5.6\end{bmatrix}.$$
The matrix A0 is an N-scal matrix, but for P being the counter-identical permutation matrix, P^T A0 P is not an N-scal matrix.
Let us now discuss relations of the N-scal condition to other well-known criteria for identifying H-matrices.
Example 2.4. Consider the following matrix
$$A_1=\begin{bmatrix}1 & -0.1 & 0 & -0.5\\ -0.1 & 1 & 0 & -0.5\\ -0.1 & 0 & 1 & -0.5\\ -0.4 & -1 & -0.3 & 1\end{bmatrix}.$$
The matrix A1 belongs to some of the matrix classes presented in Table 1.
| SDD | SDD1 | Ostrowski | Nekrasov | O-scal | N-scal |
| --- | --- | --- | --- | --- | --- |
| - | - | - | √ | - | √ |
Consider the matrix
$$A_2=\begin{bmatrix}7 & 0 & 3 & 2\\ 1 & 7 & 6 & 0\\ 0 & 6 & 7 & 2\\ 2 & 1 & 0 & 9\end{bmatrix}.$$
The matrix A2 belongs to some of the matrix classes presented in Table 2.
| SDD | SDD1 | Ostrowski | Nekrasov | O-scal | N-scal |
| --- | --- | --- | --- | --- | --- |
| - | √ | - | - | √ | - |
Consider the matrix
$$A_3=\begin{bmatrix}1 & -0.5 & 0 & 0\\ -0.5 & 1 & 0 & 0\\ -1 & -2 & 1 & -0.5\\ -1 & -2 & -0.33 & 1\end{bmatrix}.$$
The matrix A3 belongs to some of the matrix classes presented in Table 3.
| SDD | SDD1 | Ostrowski | Nekrasov | O-scal | N-scal |
| --- | --- | --- | --- | --- | --- |
| - | - | - | - | √ | √ |
Consider the matrix
$$A_4=\begin{bmatrix}3 & 2 & 0 & 0\\ 1 & 3 & 0 & 0\\ 0 & 1 & 3 & 0\\ 0 & 0 & 0 & 8\end{bmatrix}.$$
The matrix A4 belongs to some of the matrix classes presented in Table 4.
| SDD | SDD1 | Ostrowski | Nekrasov | O-scal | N-scal |
| --- | --- | --- | --- | --- | --- |
| √ | √ | √ | √ | - | - |
Consider the matrix
$$A_5=\begin{bmatrix}2 & 0 & 0 & 1\\ 0 & 3 & 1 & 0\\ 0 & 0 & 4 & 1\\ 1 & 0 & 0 & 5\end{bmatrix}.$$
The matrix A5 belongs to all of the matrix classes presented in Table 5.
| SDD | SDD1 | Ostrowski | Nekrasov | O-scal | N-scal |
| --- | --- | --- | --- | --- | --- |
| √ | √ | √ | √ | √ | √ |
Consider the matrix
$$A_6=\begin{bmatrix}8 & 4 & 2 & 3\\ 2 & 9 & 2 & 5\\ 1 & 2 & 11 & 6\\ 2 & 2 & 2 & 6\end{bmatrix}.$$
The matrix A6 belongs to some of the matrix classes presented in Table 6.
| SDD | SDD1 | Ostrowski | Nekrasov | O-scal | N-scal |
| --- | --- | --- | --- | --- | --- |
| - | - | - | - | - | √ |
Remark 2.5. If we consider the matrix A3, we see that the N-scal class is not contained in any of the following classes: SDD, SDD1, Ostrowski, Nekrasov. Considering the matrix A1, we see that the N-scal class is not contained in O-scal.
Remark 2.6. Neither of the classes SDD, SDD1, Ostrowski, Nekrasov is contained in N-scal, as seen from the matrix A4. Considering A2, we see that O-scal is not a subclass of N-scal.
Remark 2.7. The matrix A5 belongs to the new class N-scal and also to all of the classes SDD, SDD1, Ostrowski, Nekrasov, O-scal, so the intersection of these classes is nonempty.
Remark 2.8. If we consider matrices A2 and A3, we see that O-scal matrices can have more than one non-SDD row. This is not the case with Ostrowski matrices.
Remark 2.9. If we consider matrix A6, we see that it is an N-scal matrix, but it does not belong to any of the classes SDD, SDD1, Ostrowski, Nekrasov, O-scal.
The previous remarks show that the new class N-scal is in general position with respect to each of the classes SDD, SDD1, Ostrowski, Nekrasov, O-scal: it neither contains nor is contained in any of them.
In [46], a max-norm bound is given for the inverse of SDD matrices.
Theorem 3.1. [46] Given an SDD matrix A=[aij]∈Cn,n, the following bound applies,
$$\|A^{-1}\|_\infty\le\frac{1}{\min_{i\in N}\left(|a_{ii}|-r_i(A)\right)}.$$
This result of Varah served as a starting point for defining new norm estimations applicable to wider classes of matrices.
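Varah's bound is a one-line computation (a sketch; the function name is ours):

```python
import numpy as np

def varah_bound(A):
    """Varah's bound 1 / min_i (|a_ii| - r_i(A)); requires A to be SDD."""
    B = np.abs(np.asarray(A, dtype=float))
    gaps = 2.0 * np.diag(B) - B.sum(axis=1)   # |a_ii| - r_i(A) for each row
    assert np.all(gaps > 0), "A is not SDD"
    return 1.0 / gaps.min()
```

For instance, for the SDD matrix A4 above the bound evaluates to 1, while the true infinity norm of the inverse is smaller.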
Theorem 3.2. [32] Let A∈Cn,n, n≥2, be an Ostrowski matrix. Then,
$$\|A^{-1}\|_\infty\le\max_{\substack{i,j\in N\\ j\neq i}}\frac{|a_{ii}|+r_j(A)}{|a_{ii}|\,|a_{jj}|-r_i(A)\,r_j(A)}.$$
In [40], in a slightly different form, one can find the following result.
Theorem 3.3. Let A∈Cn,n, n≥2, be an O-scal matrix. Then,
$$\|A^{-1}\|_\infty\le\max_{k\in N}\frac{r_k(A)}{|a_{kk}|}\,\max_{\substack{i,j\in N\\ j\neq i}}\frac{r_i(A)+R_j(A)}{r_i(A)\,r_j(A)-R_i(A)\,R_j(A)}.$$
Theorem 3.4. [31] Given a Nekrasov matrix A=[aij]∈Cn,n, the following bound applies,
$$\|A^{-1}\|_\infty\le\max_{i\in N}\frac{z_i(A)}{|a_{ii}|-h_i(A)},$$
where
$$z_1(A)=1,\qquad z_i(A)=\sum_{j=1}^{i-1}|a_{ij}|\frac{z_j(A)}{|a_{jj}|}+1,\quad i=2,\dots,n.$$
Based on the N-scal condition, we now define a new norm estimation as follows.
Theorem 3.5. If a matrix A=[aij]∈Cn,n,n≥2, is an N-scal matrix, then the following bound applies:
$$\|A^{-1}\|_\infty\le\max_{k\in N}\frac{h_k(A)}{|a_{kk}|}\,\max_{i\in N}\frac{z_i(A)}{h_i(A)-H_i(A)},$$
where
$$z_1(A)=1,\qquad z_i(A)=\sum_{j=1}^{i-1}|a_{ij}|\frac{z_j(A)}{|a_{jj}|}+1,\quad i=2,\dots,n.$$
Proof. As A is an N-scal matrix, for X=diag(x_k), with x_k = h_k(A)/|a_kk|, k ∈ N, we know that the matrix B=AX is a Nekrasov matrix.
Also, B^{-1} = (AX)^{-1} = X^{-1}A^{-1}, i.e., A^{-1} = XB^{-1}. Therefore,
$$\|A^{-1}\|_\infty=\|XB^{-1}\|_\infty\le\|X\|_\infty\,\|B^{-1}\|_\infty=\max_{k\in N}\frac{h_k(A)}{|a_{kk}|}\,\|B^{-1}\|_\infty.$$
It is easy to prove that zi(B)=zi(A), for all i∈N, by induction.
Namely, for i=1, it holds that z1(A)=z1(B)=1, by definition. If we assume that zj(B)=zj(A), for all j=1,2,…,i−1, then
$$z_i(B)=\sum_{j=1}^{i-1}\frac{z_j(B)}{|b_{jj}|}|b_{ij}|+1=\sum_{j=1}^{i-1}\frac{z_j(B)}{|a_{jj}|x_j}|a_{ij}|x_j+1=\sum_{j=1}^{i-1}\frac{z_j(A)}{|a_{jj}|}|a_{ij}|+1=z_i(A).$$
Applying the upper bound for the norm of the inverse given in Theorem 3.4 to the matrix B, we obtain
$$\|B^{-1}\|_\infty\le\max_{i\in N}\frac{z_i(B)}{|b_{ii}|-h_i(B)}=\max_{i\in N}\frac{z_i(A)}{h_i(A)-H_i(A)}.$$
It follows that
$$\|A^{-1}\|_\infty\le\max_{k\in N}\frac{h_k(A)}{|a_{kk}|}\,\max_{i\in N}\frac{z_i(A)}{h_i(A)-H_i(A)}.$$
Example 3.6. Consider the matrix
$$B_1=\begin{bmatrix}8 & 0 & 3 & 5\\ 0 & 4 & 2 & 4\\ 2 & 5 & 20 & 0\\ 4 & 4 & 0 & 20\end{bmatrix}.$$
The matrix B1 belongs to the N-scal class and the norm estimation from Theorem 3.5 is
$$\|B_1^{-1}\|_\infty\le 0.776197,$$
while the exact value is
$$\|B_1^{-1}\|_\infty=0.6187.$$
The matrix B1 does not belong to any of the classes SDD, Ostrowski, Nekrasov, O-scal; therefore, the corresponding norm bounds cannot be applied.
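The numbers above can be reproduced with a short script implementing the recursions for h_i, H_i, and z_i as defined earlier (a sketch; helper names are ours):

```python
import numpy as np

def nekrasov_sums(A):
    """Nekrasov row sums h_i(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = np.zeros(n)
    h[0] = np.sum(np.abs(A[0, 1:]))
    for i in range(1, n):
        h[i] = sum(abs(A[i, j]) * h[j] / abs(A[j, j]) for j in range(i)) \
               + np.sum(np.abs(A[i, i + 1:]))
    return h

def n_scal_sums(A):
    """Sums H_i(A) from Section 2."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = nekrasov_sums(A)
    d = np.abs(np.diag(A))
    H = np.zeros(n)
    H[0] = sum(abs(A[0, j]) * h[j] / d[j] for j in range(1, n))
    for i in range(1, n):
        H[i] = sum(abs(A[i, j]) * H[j] / d[j] for j in range(i)) \
               + sum(abs(A[i, j]) * h[j] / d[j] for j in range(i + 1, n))
    return H

def n_scal_norm_bound(A):
    """Upper bound of Theorem 3.5 for the infinity norm of A^{-1}."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h, H = nekrasov_sums(A), n_scal_sums(A)
    d = np.abs(np.diag(A))
    assert np.all(h > H), "A is not an N-scal matrix"
    z = np.ones(n)
    for i in range(1, n):
        z[i] = sum(abs(A[i, j]) * z[j] / d[j] for j in range(i)) + 1.0
    return np.max(h / d) * np.max(z / (h - H))

B1 = np.array([[8.0, 0, 3, 5],
               [0, 4, 2, 4],
               [2, 5, 20, 0],
               [4, 4, 0, 20]])
bound = n_scal_norm_bound(B1)                           # approx. 0.776197
exact = np.max(np.abs(np.linalg.inv(B1)).sum(axis=1))   # true infinity norm
```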
Example 3.7. Consider
$$B_2=\begin{bmatrix}15 & 0 & 2 & 0\\ 1 & 15 & 6 & 13\\ 3 & 3 & 17 & 0\\ 3 & 4 & 0 & 7\end{bmatrix}.$$
The matrix B2 belongs to the N-scal class and the norm estimation from Theorem 3.5 is
$$\|B_2^{-1}\|_\infty\le 1.013626,$$
while the exact value is
$$\|B_2^{-1}\|_\infty=0.5708.$$
The matrix B2 does not belong to any of the classes SDD, Ostrowski, Nekrasov, SDD1; therefore, the corresponding norm bounds cannot be applied. It does belong to the O-scal class and the norm estimation from Theorem 3.3 is 1.035986.
Example 3.8. Consider
$$B_3=\begin{bmatrix}15 & 0 & 3 & 1\\ 5 & 9 & 2 & 7\\ 2 & 3 & 6 & 0\\ 4 & 4 & 0 & 10\end{bmatrix}.$$
Norm bounds for the inverse for B3 are presented in Table 7.
|  | SDD | Ostrowski | Nekrasov | O-scal | N-scal |
| --- | --- | --- | --- | --- | --- |
| in the class | - | - | - | √ | √ |
| bound for $\|B_3^{-1}\|_\infty$ | - | - | - | 9.788136 | 1.138751 |
The matrix B3 belongs to classes O-scal and N-scal. Exact value for the norm of the inverse matrix is
$$\|B_3^{-1}\|_\infty=0.3929.$$
Also, the matrix B3 belongs to the SDD1 class. Applying the norm bound given in [4, Theorem 8] for the SDD1 class, we obtain ‖B_3^{-1}‖_∞ ≤ 4.215686. Therefore, in this case, the bound obtained for N-scal matrices gives a more precise result than the bound for SDD1.
Example 3.9. In mathematical models in ecology, when modeling interactions between species, stochastic matrices represent corresponding transition probabilities between stages in life cycles. If we consider predator and prey species in juvenile or adult life stage, probabilities of transitions between different states could be represented with the following matrix:
$$B_4=\begin{bmatrix}0.6 & 0.3 & 0.1 & 0\\ 0.2 & 0.7 & 0.1 & 0\\ 0.3 & 0.1 & 0.5 & 0.1\\ 0.1 & 0.3 & 0.2 & 0.4\end{bmatrix}.$$
Coming from real systems, these matrices often possess properties related to diagonal dominance. Here, the matrix B4 is neither SDD nor Ostrowski, but it belongs to the N-scal class and the norm estimation from Theorem 3.5 is
$$\|B_4^{-1}\|_\infty\le 12.1099,$$
which is tighter than bounds obtained for Nekrasov (24.1429) or the O-scal class (18.85714).
Example 3.10. In mathematical models in structural engineering, when applying the finite element method, matrices representing relationships between forces and displacements occur. In some cases, these matrices can be analyzed via tools of the H-matrix theory. Consider a tridiagonal matrix of order 10, B5=tridiag[−6,12,−6], having a form that often appears in modeling systems in engineering or economics. The matrix B5 is neither SDD nor Ostrowski, but it belongs to the N-scal class and the norm estimation from Theorem 3.5 is
$$\|B_5^{-1}\|_\infty\le 21.2084961,$$
which is tighter than the bound obtained for Nekrasov (85.16666).
In order to bound the smallest singular value of a special H-matrix A, let us denote
$$\nu(A)=\max_{k\in N}\frac{h_k(A)}{|a_{kk}|}\,\max_{i\in N}\frac{z_i(A)}{h_i(A)-H_i(A)}.$$
For the given matrix A ∈ C^{n,n}, let ‖A‖_1, ‖A‖_2, and ‖A‖_∞ represent the 1-norm, 2-norm, and ∞-norm of A, respectively. It is well known that the following relation between the norms holds:
$$\|A\|_2^2\le\|A\|_1\,\|A\|_\infty.$$
Applying this inequality, the lower bound for the minimal singular value of special H-matrices is obtained.
Theorem 3.11. Let A∈Cn,n, n≥2 be a matrix with nonzero diagonal entries. If
$$h_i(A)>H_i(A)\quad\text{and}\quad h_i(A^T)>H_i(A^T),$$
for all i∈N, then
$$\sigma_n(A)\ge\sqrt{\frac{1}{\nu(A)\,\nu(A^T)}},$$
where σn(A) is the smallest singular value of A.
Proof. Matrices A and AT are both N-scal matrices. From the previous theorem, we have
$$\|A^{-1}\|_\infty\le\nu(A)\quad\text{and}\quad\|(A^T)^{-1}\|_\infty\le\nu(A^T).$$
As
$$\|A^{-1}\|_1=\|(A^{-1})^T\|_\infty=\|(A^T)^{-1}\|_\infty\le\nu(A^T),$$
we obtain
$$\|A^{-1}\|_2^2\le\|A^{-1}\|_1\,\|A^{-1}\|_\infty\le\nu(A^T)\,\nu(A),$$
i.e.,
$$\sigma_n(A)=\|A^{-1}\|_2^{-1}\ge\sqrt{\frac{1}{\nu(A)\,\nu(A^T)}}.$$
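Theorem 3.11 can be illustrated numerically. The sample matrix below is the matrix B1 of Example 3.6, for which both B1 and its transpose happen to be N-scal (a sketch; helper names are ours):

```python
import numpy as np

def nekrasov_sums(A):
    """Nekrasov row sums h_i(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = np.zeros(n)
    h[0] = np.sum(np.abs(A[0, 1:]))
    for i in range(1, n):
        h[i] = sum(abs(A[i, j]) * h[j] / abs(A[j, j]) for j in range(i)) \
               + np.sum(np.abs(A[i, i + 1:]))
    return h

def n_scal_sums(A):
    """Sums H_i(A) from Section 2."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = nekrasov_sums(A)
    d = np.abs(np.diag(A))
    H = np.zeros(n)
    H[0] = sum(abs(A[0, j]) * h[j] / d[j] for j in range(1, n))
    for i in range(1, n):
        H[i] = sum(abs(A[i, j]) * H[j] / d[j] for j in range(i)) \
               + sum(abs(A[i, j]) * h[j] / d[j] for j in range(i + 1, n))
    return H

def nu(A):
    """The quantity nu(A) used in the singular value bound."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h, H = nekrasov_sums(A), n_scal_sums(A)
    d = np.abs(np.diag(A))
    assert np.all(h > H), "matrix is not N-scal"
    z = np.ones(n)
    for i in range(1, n):
        z[i] = sum(abs(A[i, j]) * z[j] / d[j] for j in range(i)) + 1.0
    return np.max(h / d) * np.max(z / (h - H))

B1 = np.array([[8.0, 0, 3, 5],
               [0, 4, 2, 4],
               [2, 5, 20, 0],
               [4, 4, 0, 20]])
lower = 1.0 / np.sqrt(nu(B1) * nu(B1.T))                 # Theorem 3.11
sigma_min = np.linalg.svd(B1, compute_uv=False)[-1]      # smallest singular value
```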
Remark 3.12. We can apply obtained results to other norms as well. If matrices A and AT are both N-scal matrices, from the previous consideration, it follows that
$$\|A^{-1}\|_1\le\nu(A^T),\qquad\|A^{-1}\|_2\le\sqrt{\nu(A)\,\nu(A^T)}\le\max\{\nu(A),\nu(A^T)\}.$$
Remark 3.13. One can consider block generalizations of H-matrices and subclasses and define infinity norm estimation for the inverse in the block-case as well. Block H-matrices were researched in [43,47].
For A=[a_ij] ∈ C^{n,n} and a partition π = {p_j}_{j=0}^{l}, 0 = p_0 < p_1 < p_2 < ... < p_l = n, of the index set N, consider the block matrix [A_ij]_{l×l} with the index set L = {1,2,...,l}.
The relation between point-wise and the block case (see [7]) implies the following estimation.
For a given A=[aij]∈Cn,n and a given partition π={pj}lj=0 of the index set N, if ⟩A⟨π is an N-scal matrix, then
$$\|A^{-1}\|_\infty\le\|(\rangle A\langle_\pi)^{-1}\|_\infty\le\nu(\rangle A\langle_\pi).$$
Here, the comparison matrix is defined in the usual manner: the matrix ⟩A⟨_π = [p_ij]_{l×l} is given by
$$p_{ij}=\begin{cases}\left(\|A_{ii}^{-1}\|_\infty\right)^{-1}, & i=j\ \text{and}\ \det(A_{ii})\neq 0,\\ 0, & i=j\ \text{and}\ \det(A_{ii})=0,\\ -\|A_{ij}\|_\infty, & i\neq j.\end{cases}$$
Example 3.14. When considering a continuous-time dynamical ecological system composed of several populations resting at a feasible equilibrium point (see [24]), self-interactions are often removed from the community matrix, meaning that its diagonal entries are equal to zero. Matrices with zero diagonal, known as 'hollow matrices', do not belong to the class of H-matrices in the classical sense; therefore, they cannot be treated or analyzed via bounds obtained for special classes of H-matrices. However, the community matrix often has a certain block structure, and we can apply results developed for block H-matrices. Consider
$$G=\begin{bmatrix}0 & 8 & 0 & 0 & 3 & 0 & 5 & 0\\ 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 4 & 2 & 0 & 4 & 0\\ 0 & 0 & 4 & 0 & 0 & 0 & 0 & 0\\ 2 & 0 & 5 & 0 & 0 & 20 & 0 & 0\\ 0 & 0 & 0 & 0 & 20 & 0 & 0 & 0\\ 4 & 0 & 4 & 0 & 0 & 0 & 0 & 20\\ 0 & 0 & 0 & 0 & 0 & 0 & 20 & 0\end{bmatrix}.\tag{3.1}$$
This matrix is not an H-matrix, as it is hollow, but with respect to partition π={0,2,4,6,8}, we obtain the comparison matrix ⟩G⟨π, which is an N-scal matrix; therefore, we obtain
$$\|G^{-1}\|_\infty\le\|(\rangle G\langle_\pi)^{-1}\|_\infty\le\nu(\rangle G\langle_\pi)=0.776197.$$
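The block comparison matrix in this example can be reproduced as follows (a sketch; the function name is ours, and the partition is passed via its points p_0, ..., p_l):

```python
import numpy as np

def block_comparison(A, p):
    """Block comparison matrix >A<_pi for the partition points p = [p_0, ..., p_l]."""
    A = np.asarray(A, dtype=float)
    l = len(p) - 1
    P = np.zeros((l, l))
    for i in range(l):
        for j in range(l):
            blk = A[p[i]:p[i + 1], p[j]:p[j + 1]]
            if i == j:
                if np.linalg.det(blk) != 0:
                    # (||A_ii^{-1}||_inf)^{-1}; stays 0 for a singular diagonal block
                    P[i, j] = 1.0 / np.max(np.abs(np.linalg.inv(blk)).sum(axis=1))
            else:
                P[i, j] = -np.max(np.abs(blk).sum(axis=1))   # -||A_ij||_inf
    return P

G = np.array([[0, 8, 0, 0, 3, 0, 5, 0],
              [8, 0, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 4, 2, 0, 4, 0],
              [0, 0, 4, 0, 0, 0, 0, 0],
              [2, 0, 5, 0, 0, 20, 0, 0],
              [0, 0, 0, 0, 20, 0, 0, 0],
              [4, 0, 4, 0, 0, 0, 0, 20],
              [0, 0, 0, 0, 0, 0, 20, 0]], dtype=float)
P = block_comparison(G, [0, 2, 4, 6, 8])
```

For this G, the resulting 4×4 matrix P is an N-scal matrix even though G itself is hollow.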
Assume that A=[aij]∈Rn,n and q∈Rn. The linear complementarity problem LCP(A,q) is defined as follows. Find x∈Rn satisfying
$$x\ge 0,\qquad Ax+q\ge 0,\qquad (Ax+q)^T x=0,$$
or show that such a vector does not exist. In modern game theory, as well as in economics and engineering, there are certain problems that can be formulated as LCP problems. Special matrix classes, including P-matrices, Q-matrices, and M-matrices, have always played an important role in the analysis of linear complementarity problems, whether in establishing conditions for existence and uniqueness of the solution, or in the construction of procedures for solving the LCP. A necessary and sufficient condition for LCP(A,q) to have a unique solution for every q ∈ R^n is that the matrix A is a P-matrix, a real square matrix with all its principal minors positive; see [6].
For a P-matrix A, let x* denote the solution of the LCP(A,q), and denote r(x) = min(x, Ax+q) (the minimum taken componentwise) and D = diag(d_i), 0 ≤ d_i ≤ 1. In [5], the following estimate for the error of the solution was presented:
$$\|x-x^*\|_\infty\le\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\,\|r(x)\|_\infty.$$
It is well-known that an H+ matrix, a real H-matrix with positive diagonal entries, is a P-matrix.
Therefore, it is convenient to apply the upper bound for the error of the solution of the given LCP(A,q) presented in [19], which can be used in cases when we know how to construct a scaling matrix for the given H+ matrix.
Theorem 3.15. [19] Let A=[a_ij] ∈ R^{n,n} be an H-matrix with a_ii > 0 for all i ∈ N. Let W=diag(w_1,…,w_n), with w_i > 0 for all i=1,…,n, be a diagonal scaling matrix for A such that AW is SDD. Then,
$$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\le\max\left\{\frac{\max_{i\in N}w_i}{\min_{i\in N}\beta_i},\,\frac{\max_{i\in N}w_i}{\min_{i\in N}w_i}\right\},$$
where, for each i=1,2,…,n,
$$\beta_i=a_{ii}w_i-\sum_{j\in N\setminus\{i\}}|a_{ij}|\,w_j.$$
In what follows, we denote
$$\max\left\{\frac{\max_{i\in N}w_i}{\min_{i\in N}\beta_i},\,\frac{\max_{i\in N}w_i}{\min_{i\in N}w_i}\right\}=:\eta(A,W).$$
Note that the norm estimation depends both on the entries of the given matrix A and the choice of scaling matrix W.
As we know how to construct a scaling matrix for a given N-scal matrix, we directly obtain the following error estimation.
Theorem 3.16. Let A=[aij]∈Rn,n,n≥2 be an N-scal matrix with positive diagonal entries. Then,
$$\max_{d\in[0,1]^n}\|(I-D+DA)^{-1}\|_\infty\le\eta(A,W),$$
where W=diag(w_i) and, for each i=1,2,…,n, w_i = ε_i H_i(A)/a_ii, and (ε_i)_{i=1}^{n} is an increasing sequence of numbers with ε_1 = 1 and ε_i ∈ (1, h_i(A)/H_i(A)), i=2,…,n.
Proof. Let A=[a_ij] ∈ R^{n,n}, n ≥ 2, with a_ii > 0, i=1,…,n, be an N-scal matrix. Then, for the diagonal matrix X=diag(x_i) with
$$x_i=\frac{h_i(A)}{a_{ii}},\quad i=1,\dots,n,$$
the matrix AX is a Nekrasov matrix. Notice that Nekrasov row sums are nonzero both in A and AX. Therefore, we can construct the scaling matrix Y for the Nekrasov matrix AX as proposed in Theorem 1.1. This implies that the matrix AXY, with Y=diag(yi),
$$y_i=\varepsilon_i\frac{h_i(AX)}{(AX)_{ii}}=\varepsilon_i\frac{H_i(A)}{h_i(A)},\quad i=1,\dots,n,$$
is an SDD matrix, for (εi)ni=1 being an increasing sequence of numbers with
$$\varepsilon_1=1,\qquad\varepsilon_i\in\left(1,\frac{(AX)_{ii}}{h_i(AX)}\right)=\left(1,\frac{h_i(A)}{H_i(A)}\right),\quad i=2,\dots,n.$$
Applying Theorem 3.15 with W=XY=diag(wi),i=1,…,n, where
$$w_i=x_iy_i=\frac{h_i(A)}{a_{ii}}\cdot\varepsilon_i\frac{H_i(A)}{h_i(A)}=\varepsilon_i\frac{H_i(A)}{a_{ii}},$$
the proof is completed.
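The estimate of Theorem 3.16 can be sketched as follows. The particular sequence (ε_i) below, equidistant points inside (1, min_{i≥2} h_i(A)/H_i(A)), is one valid choice among many, so the resulting η value need not coincide exactly with published figures (a sketch; helper names are ours):

```python
import numpy as np

def nekrasov_sums(A):
    """Nekrasov row sums h_i(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = np.zeros(n)
    h[0] = np.sum(np.abs(A[0, 1:]))
    for i in range(1, n):
        h[i] = sum(abs(A[i, j]) * h[j] / abs(A[j, j]) for j in range(i)) \
               + np.sum(np.abs(A[i, i + 1:]))
    return h

def n_scal_sums(A):
    """Sums H_i(A) from Section 2."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = nekrasov_sums(A)
    d = np.abs(np.diag(A))
    H = np.zeros(n)
    H[0] = sum(abs(A[0, j]) * h[j] / d[j] for j in range(1, n))
    for i in range(1, n):
        H[i] = sum(abs(A[i, j]) * H[j] / d[j] for j in range(i)) \
               + sum(abs(A[i, j]) * h[j] / d[j] for j in range(i + 1, n))
    return H

def eta_n_scal(A):
    """eta(A, W) of Theorem 3.15 with the N-scal scaling W of Theorem 3.16.
    Assumes a positive diagonal and H_i(A) > 0 (so that W is positive)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h, H = nekrasov_sums(A), n_scal_sums(A)
    d = np.diag(A)
    assert np.all(d > 0) and np.all(h > H), "need an N-scal matrix, positive diagonal"
    t = min(h[i] / H[i] for i in range(1, n))
    eps = 1.0 + (t - 1.0) * np.arange(n) / n     # eps_1 = 1, increasing, < t
    w = eps * H / d
    beta = 2.0 * d * w - np.abs(A) @ w           # a_ii w_i - sum_{j != i} |a_ij| w_j
    return max(w.max() / beta.min(), w.max() / w.min())

# Sample N-scal matrix with positive diagonal entries.
M = np.array([[22.0, 9.1, 4.2, 2.1],
              [0.7, 9.1, 4.2, 2.1],
              [0.7, 0.7, 4, 2],
              [2, 1, 3, 7]])
eta = eta_n_scal(M)    # close to the eta value reported for this matrix in Table 8
```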
Assume that A=[aij]∈Cn,n is an Ostrowski matrix with positive diagonal entries, with the only non-SDD row indexed by l. The estimation η(A,W) given in Theorem 3.15 can be applied to Ostrowski matrix A with scaling matrix W defined as W=diag(wk),
$$w_k=\begin{cases}\tilde{\gamma}, & k=l,\\ 1, & k\in N\setminus\{l\},\end{cases}\tag{3.2}$$
where
$$\tilde{I}_1=\frac{r_l(A)}{|a_{ll}|}<\tilde{\gamma}<\min_{\substack{i\neq l\\ a_{il}\neq 0}}\frac{|a_{ii}|-r_i(A)+|a_{il}|}{|a_{il}|}=\tilde{I}_2,$$
and if a_il = 0 for all i ≠ l, then Ĩ_2 = ∞.
The estimation η(A,W) of Theorem 3.15 can also be applied to an O-scal matrix A=[aij]∈Cn,n with positive diagonal entries, with the following choice of the scaling matrix W. Let W=diag(wi),
$$w_i=\begin{cases}\dfrac{r_i(A)}{a_{ii}}\gamma, & i=l,\\[4pt] \dfrac{r_i(A)}{a_{ii}}, & i\in N\setminus\{l\},\end{cases}\qquad \gamma\in\left(\frac{R_l(A)}{r_l(A)},\ \min_{\substack{i\neq l\\ a_{il}\neq 0}}\frac{r_i(A)\,a_{ll}-R_i(A)\,a_{ll}+|a_{il}|\,r_l(A)}{|a_{il}|\,r_l(A)}\right),$$
where l is the index of the only non-SDD row in AQ, Q=diag(q_k), and q_k = r_k(A)/|a_kk|, k ∈ N. If there is no such index l, we choose γ = 1.
In the following numerical examples, we compare η estimations for LCP obtained from Theorem 3.15 with corresponding choices of scaling matrices.
Example 3.17. Consider
$$B=\begin{bmatrix}4 & 0 & 1 & 0 & 0 & 2 & 0 & 0 & 0 & 1\\ 0 & 2 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0\\ 0 & 0 & 3 & 1 & 0 & 1 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 4 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 2 & 0 & 5 & 0 & 2 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 6 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 1 & 1 & 0 & 7 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 1 & 1 & 0 & 8 & 0 & 1\\ 2 & 2 & 2 & 0 & 1 & 0 & 0 & 1 & 9 & 0\\ 1 & 0 & 2 & 0 & 0 & 0 & 2 & 0 & 1 & 10\end{bmatrix}.\tag{3.3}$$
The matrix B is an N-scal matrix, so we apply Theorem 3.16 with ε_i = 1+0.0362i, i=0,…,9. We obtain the η estimation 2.729348. As B does not belong to any of the classes SDD, Ostrowski, O-scal, Nekrasov, the corresponding bounds obtained for these classes cannot be applied. In all the examples, ε_1 = 1 and ε_i, i=2,…,n, are chosen equidistantly inside the interval (1, min_{i=2,…,n} h_i(A)/H_i(A)).
Example 3.18. Consider
$$C=\begin{bmatrix}22 & 9.1 & 4.2 & 2.1\\ 0.7 & 9.1 & 4.2 & 2.1\\ 0.7 & 0.7 & 4 & 2\\ 2 & 1 & 3 & 7\end{bmatrix}.$$
The matrix C belongs to all of the classes we considered; therefore, we apply bounds developed for all these classes and compare them. Results are presented in Table 8.
|  | SDD | Ostrowski | Nekrasov | O-scal | N-scal |
| --- | --- | --- | --- | --- | --- |
| in the class | √ | √ | √ | √ | √ |
| bound for $\|C^{-1}\|_\infty$ | 1.666667 | 1.368421 | 1.125035 | 1.080676 | 1.1319 |
| η bound for LCP | 1.666667 | 1.666667 | 1.344799 | 1.30413 | 1.226424 |
For this matrix, we see that the best bound for the norm of the inverse matrix is the bound from Theorem 3.3 for the O-scal class, while the best η bound for LCP is the bound from Theorem 3.16 for the N-scal class.
The Schur complement matrix can be viewed as a side product of block-Gaussian elimination. It allows transformation of a given large dimensional system to problems of smaller dimensions.
The Schur complement of A=[aij]∈Cn,n with respect to a proper subset α of the index set N is denoted by A/α and defined to be
$$A/\alpha=A(\bar{\alpha})-A(\bar{\alpha},\alpha)\left(A(\alpha)\right)^{-1}A(\alpha,\bar{\alpha}).$$
Here, A(α,β) denotes the sub-matrix of A∈Cn,n formed by the rows indexed by α and the columns indexed by β, while A(α,α) is abbreviated to A(α). In our considerations, we assume that A(α) is a non-singular matrix.
Results on the Schur complement of special H-matrices including spectra localizations and closure properties can be found in [9,11,12,27,39,48].
Denoting by σ(A) the spectrum of matrix A, i.e., the set of all eigenvalues of the matrix A, we now recall the famous Geršgorin theorem; see [23]. It states that for any matrix A∈Cn,n,
$$\Gamma(A)=\bigcup_{i\in N}\Gamma_i(A)\supseteq\sigma(A),$$
where the set Γ_i(A) = {z ∈ C : |z − a_ii| ≤ r_i(A)} is called the i-th Geršgorin disk, while Γ(A) is the Geršgorin set.
In what follows, we apply the scaling method in order to obtain information on spectra of Schur complements. For this purpose, the scaling will be performed in the form of a similarity transformation, as similar matrices have the same set of eigenvalues. Notice that if W is a diagonal scaling matrix for the given matrix A, i.e., if AW is SDD, then the matrix W−1AW is also SDD, as scaling the rows does not affect SDD property. Having this in mind, for a given non-singular diagonal matrix W, we denote
$$r_i^W(A)=r_i(W^{-1}AW),$$
and
$$\Gamma_i^W(A)=\{z\in\mathbb{C}\,:\,|z-a_{ii}|\le r_i^W(A)\}.$$
We also make use of another convenient property of the Schur complement. Namely, the Schur complement behaves well under multiplication by diagonal non-singular matrices, from both the right and the left. More precisely, for a diagonal non-singular W, it holds that
(WA)/α=W(¯α)(A/α), |
and also
(AW)/α=(A/α)W(¯α). |
In [36], the authors provided information on spectra of the Schur complement of an SDD matrix as follows.
Theorem 3.19. [36] Let A \in \mathbb{C}^{n,n} be an SDD matrix with real diagonal entries, and let \alpha be a proper subset of the index set. Then, A/\alpha and A(\bar{\alpha}) have the same number of eigenvalues whose real parts are greater (less) than f(A) (resp., -f(A)), where

f(A) = \min_{j \in \bar{\alpha}} \Big( |a_{jj}| - r_j(A) + \min_{i \in \alpha} \frac{|a_{ii}| - r_i(A)}{|a_{ii}|} \sum_{k \in \alpha} |a_{jk}| \Big).  (3.4)
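The separation bound (3.4) is directly computable from the matrix entries. A short illustrative sketch (ours, assuming 0-based indices), evaluated on the SDD matrix of Example 3.22 below:

```python
def f_bound(A, alpha):
    """f(A) from (3.4): min over j outside alpha of
    |a_jj| - r_j(A) + min_{i in alpha}((|a_ii| - r_i(A)) / |a_ii|) * sum_{k in alpha}|a_jk|."""
    n = len(A)
    r = lambda i: sum(abs(a) for j, a in enumerate(A[i]) if j != i)
    m = min((abs(A[i][i]) - r(i)) / abs(A[i][i]) for i in alpha)
    return min(abs(A[j][j]) - r(j) + m * sum(abs(A[j][k]) for k in alpha)
               for j in range(n) if j not in alpha)

# Demo: alpha = {1,2} (0-based [0, 1]) for the matrix of Example 3.22 below.
f = f_bound([[5, 2, 0, 1], [2, 4, 0, 1], [1.5, 2, 5, 1], [0.5, 0, 2, 5]], [0, 1])
# f = 1.375: real parts of eigenvalues of A/alpha and A(complement) exceed 1.375
```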
In [35], the authors presented another interesting result on spectra of the Schur complement.
For A \in \mathbb{C}^{n,n}, \alpha = \{i_1, i_2, \ldots, i_k\} \subseteq \bar{N}_A, \bar{\alpha} = \{j_1, j_2, \ldots, j_l\}, and for every eigenvalue \lambda of A/\alpha, there exists 1 \le t \le l such that

|\lambda - a_{j_t j_t}| \le r_{j_t}(A).  (3.5)
In other words, the eigenvalues of A/α are contained in the union of those Geršgorin disks for the original matrix A whose indices are in ¯α.
In further considerations, we apply this result and we obtain information on spectra of Schur complements of N-scal matrices.
Theorem 3.20. Let A \in \mathbb{C}^{n,n} be an N-scal matrix with real diagonal entries, and let \alpha be a proper subset of the index set. Then, A/\alpha and A(\bar{\alpha}) have the same number of eigenvalues whose real parts are greater (less) than f(W^{-1}AW) (resp., -f(W^{-1}AW)), where f(A) is defined as in (3.4) and W = \mathrm{diag}(w_i) is a diagonal scaling matrix for A, with w_i = \varepsilon_i \frac{H_i(A)}{a_{ii}}, i = 1, 2, \ldots, n, where (\varepsilon_i)_{i=1}^{n} is an increasing sequence of numbers with \varepsilon_1 = 1, \varepsilon_i \in \big(1, \frac{h_i(A)}{H_i(A)}\big), i = 2, \ldots, n.
Proof. Since A is an N-scal matrix with real diagonal entries and W is the corresponding scaling matrix, W−1AW is an SDD matrix with real diagonal entries. Notice that matrices (W−1AW)/α and A/α are similar, as well as matrices (W−1AW)(¯α) and A(¯α), so their spectra coincide. Now, the statement is proved by applying Theorem 3.19.
The method of the proof holds for all H-matrices with real diagonal entries and their scaling matrices.
Now, we obtain a Geršgorin-like spectra localization for the Schur complement of an N-scal matrix using scaling matrices.
Theorem 3.21. Let A \in \mathbb{C}^{n,n} be an N-scal matrix, let \alpha \subseteq N, and let W = \mathrm{diag}(w_i) be a diagonal scaling matrix for A, with w_i = \varepsilon_i \frac{H_i(A)}{a_{ii}}, i = 1, 2, \ldots, n, where (\varepsilon_i)_{i=1}^{n} is an increasing sequence of numbers with \varepsilon_1 = 1, \varepsilon_i \in \big(1, \frac{h_i(A)}{H_i(A)}\big), i = 2, \ldots, n. Then,

\sigma(A/\alpha) \subseteq \bigcup_{j \in \bar{\alpha}} \Gamma_j^W(A).
Proof. It is easy to see that (W^{-1}AW)/\alpha is similar to A/\alpha, so the spectra of these two matrices coincide. Applying the results from [35] to the SDD matrix W^{-1}AW, we complete the proof.
In this way, we obtain a Geršgorin-like eigenvalue localization area for the Schur complement of an N-scal matrix using only the entries of the original matrix A. Again, the method of the proof holds for any H-matrix A and its scaling matrix W; see [12,39].
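Given a positive scaling vector w (which, for an N-scal matrix, comes from the construction via H_i and h_i in Theorem 3.21; that construction is not re-implemented here), the scaled radii need no explicit matrix inversion, since r_i^W(A) = (1/w_i) \sum_{j \ne i} |a_{ij}| w_j. An illustrative sketch of ours, run with a placeholder all-ones w, which simply recovers the ordinary disks:

```python
def scaled_disks(A, w):
    """Geršgorin disks of W^{-1} A W for W = diag(w), all w_i > 0.
    Centers a_ii are unchanged; radius r_i^W(A) = (1/w_i) * sum_{j != i} |a_ij| * w_j."""
    n = len(A)
    return [(A[i][i], sum(abs(A[i][j]) * w[j] for j in range(n) if j != i) / w[i])
            for i in range(n)]

# Placeholder weights w = (1,...,1): the ordinary Geršgorin disks are recovered.
# An actual N-scal scaling vector w would tighten the radii for indices outside alpha.
A_demo = [[5, 2, 0, 1], [2, 4, 0, 1], [1.5, 2, 5, 1], [0.5, 0, 2, 5]]
plain = scaled_disks(A_demo, [1, 1, 1, 1])  # -> [(5, 3.0), (4, 3.0), (5, 4.5), (5, 2.5)]
```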
Results on spectra of Schur complements can be interpreted in light of the mLCP.
Consider mLCP(A, B, C, D, a, b) with a non-singular matrix A. For A \in \mathbb{R}^{n,n}, B \in \mathbb{R}^{m,m}, C \in \mathbb{R}^{n,m}, D \in \mathbb{R}^{m,n}, a \in \mathbb{R}^n, and b \in \mathbb{R}^m, the problem can be formulated as follows. Find

u \in \mathbb{R}^n, \quad v \in \mathbb{R}^m, \quad v \ge 0,

such that

a + Au + Cv = 0, \quad b + Du + Bv \ge 0, \quad v^T(b + Du + Bv) = 0.

For a non-singular A, the given problem can be transformed into the classical LCP(B - DA^{-1}C, b - DA^{-1}a). Denote by M the block matrix

M = \begin{bmatrix} A & C \\ D & B \end{bmatrix}.
Then, the matrix that appears in classical LCP is the Schur complement M/A=B−DA−1C. If the matrix M belongs to some special class, we are able to obtain results on eigenvalues as follows.
● If M \in \mathbb{C}^{n+m,n+m}, n+m \ge 2, is an N-scal matrix with real diagonal entries, then M/A and B have an equal number of eigenvalues with real parts greater (less) than f(W^{-1}MW) (resp., -f(W^{-1}MW)). Here, f(M) is defined as in Theorem 3.19 and W = \mathrm{diag}(w_i) is a diagonal scaling matrix for M.

● For M \in \mathbb{C}^{n+m,n+m} of order n+m \ge 2 being an N-scal matrix, let W = \mathrm{diag}(w_i) be a corresponding diagonal scaling matrix. Then,

\sigma(M/A) \subseteq \bigcup_{j \in \{n+1, \ldots, n+m\}} \Gamma_j^W(M).
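The reduction from the mLCP to the classical LCP described above can be sketched in a few lines of pure Python (ours, exact arithmetic via `Fraction`). The blocks below are a hypothetical partition of the 4x4 matrix from Example 3.22, with zero vectors a and b chosen purely for illustration:

```python
from fractions import Fraction

def solve(A, B):
    """Solve A X = B by Gauss-Jordan elimination (A square, B a matrix of columns)."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(x) for x in B[i]]
         for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

def mlcp_to_lcp(A, B, C, D, a, b):
    """Return the data (B - D A^{-1} C, b - D A^{-1} a) of the classical LCP."""
    mul = lambda X, Y: [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
                        for row in X]
    DC = mul(D, solve(A, C))                    # D A^{-1} C
    Da = mul(D, solve(A, [[x] for x in a]))     # D A^{-1} a (as a column)
    S = [[Fraction(B[i][j]) - DC[i][j] for j in range(len(B[0]))]
         for i in range(len(B))]
    q = [Fraction(b[i]) - Da[i][0] for i in range(len(b))]
    return S, q

# Hypothetical partition of the matrix from Example 3.22, zero right-hand sides.
S, q = mlcp_to_lcp(A=[[5, 2], [2, 4]], B=[[5, 1], [2, 5]],
                   C=[[0, 1], [0, 1]], D=[[1.5, 2], [0.5, 0]],
                   a=[0, 0], b=[0, 0])
# S equals the Schur complement M/A = [[5, 7/16], [2, 79/16]]; q = [0, 0]
```

As expected, the matrix S of the resulting LCP is exactly the Schur complement M/A, which is why the spectral results above transfer to the mLCP setting.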
Example 3.22. Consider
A = \begin{bmatrix} 5 & 2 & 0 & 1 \\ 2 & 4 & 0 & 1 \\ 1.5 & 2 & 5 & 1 \\ 0.5 & 0 & 2 & 5 \end{bmatrix}.
It belongs both to the SDD class and the N-scal class. Let \alpha = \{1, 2\}. Applying (3.5), we obtain \Gamma_3(A) = \{z : |z - 5| \le 4.5\}, \Gamma_4(A) = \{z : |z - 5| \le 2.5\}, and

\sigma(A/\alpha) \subseteq \bigcup_{j \in \{3, 4\}} \Gamma_j(A) =: \Gamma_{SDD}(A/\alpha).
From Theorem 3.21 for N-scal matrices, with \varepsilon_i = 1 + 0.375\,i, i = 0, 1, 2, 3, we obtain \Gamma_3^W(A) = \{z : |z - 5| \le 2.6335\}, \Gamma_4^W(A) = \{z : |z - 5| \le 3.549\}, and

\sigma(A/\alpha) \subseteq \bigcup_{j \in \{3, 4\}} \Gamma_j^W(A) =: \Gamma_{Nscal}(A/\alpha).
We see that \Gamma_{Nscal}(A/\alpha) (blue) gives a tighter localization than \Gamma_{SDD}(A/\alpha) (pink); see Figure 1.
Example 3.23. Consider, again, the matrix B given in (3.3), and set \alpha = \{1, 2, 3, 4, 5, 6, 7\}. As N_B = \{1, 2, 3, 5\} and \alpha \not\subseteq \overline{N}_B, we cannot use the localization from [35].
However, applying Theorem 3.21 with \varepsilon_i = 1 + 0.0362\,i, i = 0, \ldots, 9, we obtain the spectra localization for B/\alpha given in Figure 2. Here, \Gamma_{8}^W(B) = \{z : |z - 8| \leq 6.68754\}, \Gamma_{9}^W(B) = \{z : |z - 9| \leq 7.398684\}, and \Gamma_{10}^W(B) = \{z : |z - 10| \leq 8.547165\}.
Now, let \alpha = \{4, 6, 7, 8, 9, 10\}. As \alpha \subseteq \overline N_B, we are able to apply results for both the SDD case and N -scal case. From (3.5) for the SDD case, we obtain
\sigma(B / \alpha) \subseteq \bigcup\limits_{j \in \{1, 2, 3, 5\}} \Gamma_{j}(B) = \Gamma_{SDD}(B / \alpha),

where \Gamma_{1}(B) = \{z : |z - 4| \leq 4\}, \Gamma_{2}(B) = \{z : |z - 2| \leq 3\}, \Gamma_{3}(B) = \{z : |z - 3| \leq 3\}, and \Gamma_{5}(B) = \{z : |z - 5| \leq 5\}.
Applying Theorem 3.21 for the N -scal class, we obtain
\sigma(B / \alpha) \subseteq \bigcup\limits_{j \in \{1, 2, 3, 5\}} \Gamma_{j}^W(B) = \Gamma_{Nscal}(B / \alpha),

where \Gamma_{1}^W(B) = \{z : |z - 4| \leq 3.077659\}, \Gamma_{2}^W(B) = \{z : |z - 2| \leq 1.607273\}, \Gamma_{3}^W(B) = \{z : |z - 3| \leq 2.320485\}, and \Gamma_{5}^W(B) = \{z : |z - 5| \leq 4.128828\}.
If we compare localization areas, we see that \Gamma_{Nscal}(B / \alpha) (violet) is tighter than \Gamma_{SDD}(B / \alpha) (yellow); see Figure 3.
In this paper, we presented a new sufficient condition for non-singularity of matrices, called the N-scal condition. We proved that matrices satisfying this condition belong to the H-matrix class. We discussed the relation of the new N-scal class to some well-known subclasses of non-singular H-matrices, such as SDD, Ostrowski, SDD_1, O-scal, and the Nekrasov class. Numerical examples show that the new class stands in a general relationship with each of these classes, i.e., neither class contains the other. We used the new criteria to obtain bounds for the infinity norm of the inverse matrix. Numerical examples illustrate that, in some cases, these bounds give better results than existing bounds developed for well-known matrix classes. Due to the relations among classes, the new estimates developed for N-scal matrices can work in cases where already known estimates cannot be applied. Applying scaling relations between the new class and the well-known matrix classes, we derived error estimates for linear complementarity problems that involve N-scal matrices. We also considered Schur complements of N-scal matrices and obtained spectra localizations for them via diagonal scaling in the form of a similarity transformation. When the matrix under consideration is both SDD and N-scal, numerical examples show that the new localization can give a tighter estimate due to the scaling method.
Both authors contributed equally to conceptualization, investigation, methodology, formal analysis, writing and editing, validation, and inspection. All authors have read and approved the final version of the manuscript for publication.
The authors would like to thank the editor and the anonymous referees for their valuable suggestions and comments. This research has been supported by the Ministry of Science, Technological Development and Innovation (Contract No. 451-03-65/2024-03/200156) and the Faculty of Technical Sciences, University of Novi Sad through project ''Scientific and Artistic Research Work of Researchers in Teaching and Associate Positions at the Faculty of Technical Sciences, University of Novi Sad" (No. 01-3394/1).
The authors declare that there are no conflicts of interest.
[1] D. Arsić, M. Nedović, On \Pi-Nekrasov matrices, Filomat, 37 (2023), 4335–4350. https://dx.doi.org/10.2298/FIL2313335A
[2] R. B. Bapat, T. E. S. Raghavan, Nonnegative matrices and applications, Cambridge University Press, 1997. https://dx.doi.org/10.1017/CBO9780511529979
[3] A. Berman, R. J. Plemmons, Nonnegative matrices in the mathematical sciences, Classics in Applied Mathematics, SIAM, Philadelphia, 1994. https://dx.doi.org/10.1016/C2013-0-10361-3
[4] X. Chen, Y. Li, L. Liu, Y. Wang, Infinity norm upper bound for the inverse of SDD_1 matrices, AIMS Math., 7 (2022), 8847–8860. https://dx.doi.org/10.3934/math.2022493
[5] X. J. Chen, S. H. Xiang, Computation of error bounds for P-matrix linear complementarity problems, Math. Program., 106 (2006), 513–525. https://dx.doi.org/10.1007/s10107-005-0645-9
[6] R. W. Cottle, J. S. Pang, R. E. Stone, The linear complementarity problem, Academic, Boston, 1992. https://dx.doi.org/10.1137/1.9780898719000
[7] L. Cvetković, K. Doroslovački, Max norm estimation for the inverse of block matrices, Appl. Math. Comput., 242 (2014), 694–706. https://doi.org/10.1016/j.amc.2014.06.035
[8] L. Cvetković, V. Kostić, New criteria for identifying H-matrices, J. Comput. Appl. Math., 180 (2005), 265–278. https://dx.doi.org/10.1016/j.cam.2004.10.017
[9] L. Cvetković, V. Kostić, M. Kovačević, T. Szulc, Further results on H-matrices and their Schur complements, Appl. Math. Comput., 198 (2008), 506–510. http://dx.doi.org/10.1016/j.amc.2007.09.001
[10] L. Cvetković, V. Kostić, M. Nedović, Generalizations of Nekrasov matrices and applications, Open Math., 13 (2015), 1–10. http://dx.doi.org/10.1515/math-2015-0012
[11] L. Cvetković, M. Nedović, Special H-matrices and their Schur and diagonal-Schur complements, Appl. Math. Comput., 208 (2009), 225–230. http://dx.doi.org/10.1016/j.amc.2008.11.040
[12] L. Cvetković, M. Nedović, Eigenvalue localization refinements for the Schur complement, Appl. Math. Comput., 218 (2012), 8341–8346. http://dx.doi.org/10.1016/j.amc.2012.01.058
[13] P. F. Dai, J. Li, S. Zhao, Infinity norm bounds for the inverse for GSDD_1 matrices using scaling matrices, Comput. Appl. Math., 42 (2023), 121. https://dx.doi.org/10.1007/s40314-022-02165-x
[14] P. F. Dai, J. C. Li, Y. T. Li, J. Bai, A general preconditioner for linear complementarity problem with an M-matrix, J. Comput. Appl. Math., 317 (2017), 100–112. https://doi.org/10.1016/j.cam.2016.11.034
[15] L. S. Dashnic, M. S. Zusmanovich, O nekotoryh kriteriyah regulyarnosti matric i lokalizacii ih spectra, Zh. Vychisl. Matem. i Matem. Fiz., 5 (1970), 1092–1097.
[16] L. S. Dashnic, M. S. Zusmanovich, K voprosu o lokalizacii harakteristicheskih chisel matricy, Zh. Vychisl. Matem. i Matem. Fiz., 10 (1970), 1321–1327.
[17] M. Fiedler, V. Pták, Generalized norms of matrices and the location of the spectrum, Czech. Math. J., 12 (1962), 558–571. http://dx.doi.org/10.21136/CMJ.1962.100540
[18] L. Gao, X. M. Gu, X. Jia, C. Q. Li, Upper triangulation-based infinity norm bounds for the inverse of Nekrasov matrices with applications, Numer. Algorithms, 97 (2024), 1453–1479. https://dx.doi.org/10.1007/s11075-024-01758-3
[19] M. G. Esnaola, J. M. Peña, A comparison of error bounds for linear complementarity problems of H-matrices, Linear Algebra Appl., 433 (2010), 956–964. https://dx.doi.org/10.1016/j.laa.2010.04.024
[20] M. G. Esnaola, J. M. Peña, Error bounds for linear complementarity problems of Nekrasov matrices, Numer. Algorithms, 67 (2014), 655–667. https://dx.doi.org/10.1007/s11075-013-9815-7
[21] M. G. Esnaola, J. M. Peña, On the asymptotic optimality of error bounds for some linear complementarity problems, Numer. Algorithms, 80 (2019), 521–532. https://dx.doi.org/10.1007/s11075-018-0495-1
[22] M. G. Esnaola, J. M. Peña, B-Nekrasov matrices and error bounds for linear complementarity problems, Numer. Algorithms, 72 (2016), 435–445. https://dx.doi.org/10.1007/s11075-015-0054-y
[23] S. Geršgorin, Über die Abgrenzung der Eigenwerte einer Matrix, Izv. Akad. Nauk SSSR Ser. Mat., 1 (1931), 749–754.
[24] J. Grilli, T. Rogers, S. Allesina, Modularity and stability in ecological communities, Nat. Commun., 7 (2016), 12031. https://dx.doi.org/10.1038/ncomms12031
[25] V. V. Gudkov, On a certain test for nonsingularity of matrices, Latvian Math. Yearbook, 1965, 385–390.
[26] A. S. Householder, On a condition for the nonsingularity of a matrix, Math. Comput., 26 (1972), 119–120.
[27] K. D. Ikramov, M. Y. Ibragimov, The Nekrasov property is hereditary for Gaussian elimination, ZAMM, 77 (1997), 394–396. https://dx.doi.org/10.1002/zamm.19970770522
[28] L. Y. Kolotilina, Bounds for the infinity norm of the inverse for certain M- and H-matrices, Linear Algebra Appl., 430 (2009), 692–702. https://dx.doi.org/10.1016/j.laa.2008.09.005
[29] L. Y. Kolotilina, Bounds for the inverses of generalized Nekrasov matrices, J. Math. Sci., 207 (2015), 786–794. https://dx.doi.org/10.1007/s10958-015-2401-x
[30] L. Y. Kolotilina, Diagonal dominance characterization of PM- and PH-matrices, J. Math. Sci., 165 (2010), 556–561. https://dx.doi.org/10.1007/s10958-010-9825-0
[31] L. Y. Kolotilina, On bounding inverse to Nekrasov matrices in the infinity norm, J. Math. Sci., 199 (2014), 432–437. https://dx.doi.org/10.1007/s10958-014-1870-7
[32] V. Kostić, L. Cvetković, D. L. Cvetković, C. Q. Li, Pseudospectra localizations and their applications, Numer. Linear Algebra, 23 (2016), 356–372. https://dx.doi.org/10.1002/nla.2028
[33] C. Q. Li, P. F. Dai, Y. T. Li, New error bounds for linear complementarity problems of Nekrasov matrices and B-Nekrasov matrices, Numer. Algorithms, 74 (2017), 997–1009. https://dx.doi.org/10.48550/arXiv.1606.06838
[34] W. Li, On Nekrasov matrices, Linear Algebra Appl., 281 (1998), 87–96. https://dx.doi.org/10.1016/S0024-3795(98)10031-9
[35] J. Liu, Y. Huang, J. Zhang, The dominant degree and the disc theorem for the Schur complement of matrix, Appl. Math. Comput., 215 (2010), 4055–4066. https://dx.doi.org/10.1016/j.amc.2009.12.063
[36] J. Z. Liu, F. Z. Zhang, Disc separation of the Schur complements of diagonally dominant matrices and determinantal bounds, SIAM J. Matrix Anal. Appl., 27 (2005), 665–674. https://dx.doi.org/10.1137/040620369
[37] M. Nedović, Norm bounds for the inverse for generalized Nekrasov matrices in point-wise and block case, Filomat, 35 (2021), 2705–2714. https://dx.doi.org/10.2298/FIL2108705N
[38] M. Nedović, L. Cvetković, Norm bounds for the inverse and error bounds for linear complementarity problems for \{P1, P2\}-Nekrasov matrices, Filomat, 35 (2021), 239–250. https://dx.doi.org/10.2298/FIL2101239N
[39] M. Nedović, L. Cvetković, The Schur complement of PH-matrices, Appl. Math. Comput., 362 (2019), 124541. https://dx.doi.org/10.1016/j.amc.2019.06.055
[40] M. Nedović, S. Milićević, A new lower bound for the smallest singular value of a special class of matrices, The 8th Conference on Mathematics in Engineering: Theory and Applications, Novi Sad, May 26–28, 2023.
[41] A. M. Ostrowski, Über die Determinanten mit überwiegender Hauptdiagonale, Comment. Math. Helv., 10 (1937), 69–96. http://dx.doi.org/10.1007/BF01214284
[42] J. M. Peña, Diagonal dominance, Schur complements and some classes of H-matrices and P-matrices, Adv. Comput. Math., 35 (2011), 357–373. https://dx.doi.org/10.1007/s10444-010-9160-5
[43] F. Robert, Blocs H-matrices et convergence des méthodes itératives classiques par blocs, Linear Algebra Appl., 2 (1969), 223–265. https://dx.doi.org/10.1016/0024-3795(69)90029-9
[44] T. Szulc, Some remarks on a theorem of Gudkov, Linear Algebra Appl., 225 (1995), 221–235. https://dx.doi.org/10.1016/0024-3795(95)00343-P
[45] T. Szulc, L. Cvetković, M. Nedović, Scaling technique for Partition-Nekrasov matrices, Appl. Math. Comput., 271 (2015), 201–208. https://dx.doi.org/10.1016/j.amc.2015.08.136
[46] J. M. Varah, A lower bound for the smallest singular value of a matrix, Linear Algebra Appl., 11 (1975), 3–5. https://dx.doi.org/10.1016/0024-3795(75)90112-3
[47] R. S. Varga, Geršgorin and his circles, Springer Series in Computational Mathematics, 36 (2004). https://dx.doi.org/10.1007/978-3-642-17798-9
[48] F. Zhang, The Schur complement and its applications, New York: Springer, 2005. https://dx.doi.org/10.1007/b105056
SDD | SDD_1 | Ostrowski | Nekrasov | O-scal | N-scal
- | - | - | \surd | - | \surd

SDD | SDD_1 | Ostrowski | Nekrasov | O-scal | N-scal
- | \surd | - | - | \surd | -

SDD | SDD_1 | Ostrowski | Nekrasov | O-scal | N-scal
- | - | - | - | \surd | \surd

SDD | SDD_1 | Ostrowski | Nekrasov | O-scal | N-scal
\surd | \surd | \surd | \surd | - | -

SDD | SDD_1 | Ostrowski | Nekrasov | O-scal | N-scal
\surd | \surd | \surd | \surd | \surd | \surd

SDD | SDD_1 | Ostrowski | Nekrasov | O-scal | N-scal
- | - | - | - | - | \surd

 | SDD | Ostrowski | Nekrasov | O-scal | N-scal
in the class | - | - | - | \surd | \surd
bound for ||B_3^{-1}||_{\infty} | - | - | - | 9.788136 | 1.138751

 | SDD | Ostrowski | Nekrasov | O-scal | N-scal
in the class | \surd | \surd | \surd | \surd | \surd
bound for ||C^{-1}||_{\infty} | 1.666667 | 1.368421 | 1.125035 | 1.080676 | 1.1319
\eta bound for LCP | 1.666667 | 1.666667 | 1.344799 | 1.30413 | 1.226424