Let n be a positive integer, N = \{1, 2, \ldots, n\}, and let {C^{n \times n}} be the set of all complex matrices of order n. A matrix M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) is called a strictly diagonally dominant ({SDD}) matrix [1] if

|m_{ii}| > r_i(M) = \sum\limits_{j = 1, j \neq i}^{n} |m_{ij}|, \; \; \; \forall i \in N.
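For readers who wish to experiment numerically, the SDD condition is straightforward to check. The following sketch is an illustration only (not part of the original paper); NumPy is assumed:

```python
import numpy as np

def off_diag_row_sums(M):
    """r_i(M): sum of absolute values of the off-diagonal entries in each row."""
    A = np.abs(np.asarray(M))
    return A.sum(axis=1) - np.diag(A)

def is_sdd(M):
    """True if |m_ii| > r_i(M) for every row, i.e., M is strictly diagonally dominant."""
    A = np.asarray(M)
    return bool(np.all(np.abs(np.diag(A)) > off_diag_row_sums(A)))

# A 2x2 example: |4| > 1 and |3| > 1, so the first matrix is SDD.
print(is_sdd([[4.0, 1.0], [1.0, 3.0]]))   # True
print(is_sdd([[1.0, 2.0], [2.0, 1.0]]))   # False
```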
It was shown that an {SDD} matrix is an H-matrix [1], where a matrix M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) is called an H-matrix [1, 2, 3] if and only if there exists a positive diagonal matrix X such that MX is an {SDD} matrix [1, 2, 4]. H-matrices are widely applied in many fields, such as computational mathematics, economics, mathematical physics and dynamical system theory, see [1, 4, 5, 6]. Meanwhile, upper bounds for the infinity norm of the inverses of H-matrices can be used in the convergence analysis of matrix splitting and matrix multi-splitting iterative methods for solving large sparse systems of linear equations [7], as well as linear complementarity problems. Moreover, upper bounds of the infinity norm of the inverse for various classes of matrices have been widely studied, such as CKV-type matrices [8], S-SDDS matrices [9], DZ and DZ-type matrices [10, 11], Nekrasov matrices [12, 13, 14, 15], S-Nekrasov matrices [16], Q-Nekrasov matrices [17], {GSDD_1} matrices [18] and so on.
In 2011, Peña [19] proposed a new subclass of H-matrices called {SDD_1} matrices, defined as follows. A matrix M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) is called an {SDD_1} matrix if

|m_{ii}| > p_i(M), \; \; \; \forall i \in N_1(M),

where

p_i(M) = \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{r_j(M)}{|m_{jj}|}|m_{ij}|, \; \; \; N_1(M) = \{i \mid |m_{ii}| \leq r_i(M)\}, \; \; N_2(M) = \{i \mid |m_{ii}| > r_i(M)\}.
In 2023, Dai et al. [18] gave a new subclass of H-matrices named generalized {SDD_1} ({GSDD_1}) matrices, which extends the class of {SDD_1} matrices. Here, a matrix M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) is said to be a {GSDD_1} matrix if

r_i(M) - p_i^{N_2(M)}(M) > 0, \; \; \; \forall i \in N_2(M),

and

\left(r_i(M) - p_i^{N_2(M)}(M)\right)\left(|m_{jj}| - p_j^{N_1(M)}(M)\right) > p_i^{N_1(M)}(M)\, p_j^{N_2(M)}(M), \; \; \; \forall i \in N_2(M), \; \forall j \in N_1(M),

where

p_i^{N_2(M)}(M) = \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{r_j(M)}{|m_{jj}|}|m_{ij}|, \; \; \; p_i^{N_1(M)}(M) = \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}|, \; \; \; i \in N.
Subsequently, some upper bounds for the infinity norm of the inverses of {SDD} matrices, {SDD_1} matrices and {GSDD_1} matrices were presented, see [18, 20, 21]. The following results, which will be used later, are listed.
Theorem 1. (Varah bound) [21] Let M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) be an {SDD} matrix. Then

\|M^{-1}\|_\infty \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| - r_i(M))}.
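As a quick sanity check, the Varah bound can be evaluated and compared against the exact infinity norm of the inverse. This is an illustrative sketch, not part of the paper; NumPy is assumed:

```python
import numpy as np

def varah_bound(M):
    """Varah's bound 1 / min_i (|m_ii| - r_i(M)); valid only when M is SDD."""
    A = np.abs(np.asarray(M))
    gap = np.diag(A) - (A.sum(axis=1) - np.diag(A))   # |m_ii| - r_i(M)
    if np.any(gap <= 0):
        raise ValueError("Varah's bound requires a strictly diagonally dominant matrix")
    return 1.0 / gap.min()

M = np.array([[4.0, 1.0], [1.0, 3.0]])
bound = varah_bound(M)                            # gaps are 3 and 2, so bound = 0.5
exact = np.linalg.norm(np.linalg.inv(M), np.inf)  # 5/11, which is indeed <= 0.5
print(exact <= bound)   # True
```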
Theorem 2. [20] Let M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) be an {SDD} matrix. Then

\|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i}, \; \; \; 0 < \varepsilon < \min\limits_{i \in N} \frac{|m_{ii}| - p_i(M)}{r_i(M)},

where

Z_i = \varepsilon(|m_{ii}| - r_i(M)) + \sum\limits_{j \in N\backslash \{i\}} \frac{r_j(M) - p_j(M)}{|m_{jj}|}|m_{ij}|.
Theorem 3. [20] Let M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) be an {SDD} matrix. If r_i(M) > 0\;(\forall i \in N), then

\|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i(M)}{|m_{ii}|}}{\min\limits_{i \in N} \sum\limits_{j \in N\backslash \{i\}} \frac{r_j(M) - p_j(M)}{|m_{jj}|}|m_{ij}|}.
Theorem 4. [18] Let M = (m_{ij}) \in {C^{n \times n}} be a {GSDD_1} matrix. Then

\|M^{-1}\|_\infty \leq \frac{\max\left\{\varepsilon, \max\limits_{i \in N_2(M)} \frac{r_i(M)}{|m_{ii}|}\right\}}{\min\left\{\min\limits_{i \in N_2(M)} \phi_i, \min\limits_{i \in N_1(M)} \psi_i\right\}},

where

\phi_i = r_i(M) - \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{r_j(M)}{|m_{jj}|}|m_{ij}| - \varepsilon\sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}|, \; \; \; \psi_i = \varepsilon\Big(|m_{ii}| - \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}|\Big) - \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{r_j(M)}{|m_{jj}|}|m_{ij}|,

and

\max\limits_{i \in N_1(M)} \frac{p_i^{N_2(M)}(M)}{|m_{ii}| - p_i^{N_1(M)}(M)} < \varepsilon < \min\limits_{j \in N_2(M)} \frac{r_j(M) - p_j^{N_2(M)}(M)}{p_j^{N_1(M)}(M)}.
On the basis of the above articles, we continue to study structured matrices and introduce a new subclass of H-matrices called {SDD_k} matrices, and provide some new upper bounds for the infinity norm of the inverses of {SDD} matrices and {SDD_k} matrices, which improve previous results. The remainder of this paper is organized as follows: In Section 2, we propose a new subclass of H-matrices called {SDD_k} matrices, which includes {SDD} matrices and {SDD_1} matrices, and derive some properties of {SDD_k} matrices. In Section 3, we present some upper bounds for the infinity norm of the inverses of {SDD} matrices and {SDD_k} matrices, and provide some comparisons with the well-known Varah bound. Moreover, some numerical examples are given to illustrate the corresponding results.
In this section, we propose a new subclass of H-matrices called SDDk matrices, which include SDD matrices and SDD1 matrices, and derive some new properties.
Definition 1. A matrix M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) is called an {SDD_k} matrix if there exists k \in N such that

|m_{ii}| > p_i^{(k-1)}(M), \; \; \; \forall i \in N_1(M),

where

p_i^{(k)}(M) = \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|, \; \; \; p_i^{(0)}(M) = \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{r_j(M)}{|m_{jj}|}|m_{ij}|.
Immediately, we know that {SDD_k} matrices contain {SDD} matrices and {SDD_1} matrices, so

\{SDD\} \subseteq \{SDD_1\} \subseteq \{SDD_2\} \subseteq \cdots \subseteq \{SDD_k\}.
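The recursion in Definition 1 is easy to implement. The sketch below is illustrative only (not from the paper; NumPy assumed): it computes the vector p^{(k)}(M) and tests SDD_k membership.

```python
import numpy as np

def p_vec(M, k):
    """Compute the vector (p_i^{(k)}(M))_i from Definition 1."""
    A = np.abs(np.asarray(M))
    d = np.diag(A)
    off = A - np.diag(d)              # off-diagonal part, so the sums below skip j = i
    r = off.sum(axis=1)
    in_n2 = d > r                     # N_2(M): the strictly dominant rows
    # weight w_j = 1 for j in N_1(M); r_j/|m_jj| (then p_j^{(m-1)}/|m_jj|) for j in N_2(M)
    p = off @ np.where(in_n2, r / d, 1.0)          # p^{(0)}
    for _ in range(k):
        p = off @ np.where(in_n2, p / d, 1.0)      # p^{(m)} from p^{(m-1)}
    return p

def is_sdd_k(M, k):
    """SDD_k test: |m_ii| > p_i^{(k-1)}(M) for every i in N_1(M)."""
    A = np.abs(np.asarray(M))
    d = np.diag(A)
    r = A.sum(axis=1) - d
    in_n1 = d <= r
    p = p_vec(M, k - 1)
    return bool(np.all(d[in_n1] > p[in_n1]))

# Row 1 is not SDD (2 < 3), but p_1^{(0)} = 3 * (2/5) = 1.2 < 2, so M is SDD_1.
M = [[2.0, 3.0, 0.0], [1.0, 5.0, 1.0], [0.0, 1.0, 5.0]]
print(is_sdd_k(M, 1), is_sdd_k(M, 2))   # True True
```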
Lemma 1. A matrix M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) is an {SDD_k}\;(k \geq 2) matrix if and only if |m_{ii}| > p_i^{(k-1)}(M) for all i \in N.
Proof. For each i \in N_1(M), from Definition 1, it holds that |m_{ii}| > p_i^{(k-1)}(M).

For each i \in N_2(M), we have that |m_{ii}| > r_i(M). When k = 2, it follows that

|m_{ii}| > r_i(M) \geq \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{r_j(M)}{|m_{jj}|}|m_{ij}| = p_i^{(0)}(M) \geq \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{p_j^{(0)}(M)}{|m_{jj}|}|m_{ij}| = p_i^{(1)}(M).

Suppose that |m_{ii}| > p_i^{(k-1)}(M)\;(k \leq l, \; l > 2) holds for all i \in N_2(M). When k = l+1, we have

|m_{ii}| > r_i(M) \geq \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{p_j^{(l-2)}(M)}{|m_{jj}|}|m_{ij}| = p_i^{(l-1)}(M) \geq \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{p_j^{(l-1)}(M)}{|m_{jj}|}|m_{ij}| = p_i^{(l)}(M).

By induction, we obtain that |m_{ii}| > p_i^{(k-1)}(M) for all i \in N_2(M). Consequently, M is an {SDD_k} matrix if and only if |m_{ii}| > p_i^{(k-1)}(M) for all i \in N. The proof is completed.
Lemma 2. If M=(mij)∈Cn×n(n≥2) is an SDDk(k≥2) matrix, then M is an H-matrix.
Proof. Let X be the diagonal matrix \mathrm{diag}\{x_1, x_2, \cdots, x_n\}, where

x_j = \begin{cases} 1, & j \in N_1(M), \\ \frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon, & j \in N_2(M), \end{cases}

(so each x_j > 0) and

0 < \varepsilon < \min\limits_{i \in N} \frac{|m_{ii}| - p_i^{(k-1)}(M)}{\sum\limits_{j \in N_2(M)\backslash \{i\}} |m_{ij}|}.

If \sum_{j \in N_2(M)\backslash \{i\}} |m_{ij}| = 0, then the corresponding fraction is defined to be \infty. Next we consider the following two cases.
Case 1: For each i \in N_1(M), it is not difficult to see that |(MX)_{ii}| = |m_{ii}|, and

r_i(MX) = \sum\limits_{j = 1, j \neq i}^{n} |m_{ij}|x_j = \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}| \leq \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \left(\frac{p_j^{(k-2)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}| = p_i^{(k-1)}(M) + \varepsilon\sum\limits_{j \in N_2(M)\backslash \{i\}} |m_{ij}| < p_i^{(k-1)}(M) + |m_{ii}| - p_i^{(k-1)}(M) = |m_{ii}| = |(MX)_{ii}|.
Case 2: For each i \in N_2(M), we can obtain that

|(MX)_{ii}| = |m_{ii}|\left(\frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon\right) = p_i^{(k-1)}(M) + \varepsilon|m_{ii}|,

and

r_i(MX) = \sum\limits_{j = 1, j \neq i}^{n} |m_{ij}|x_j = \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}| \leq \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \left(\frac{p_j^{(k-2)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}| = p_i^{(k-1)}(M) + \varepsilon\sum\limits_{j \in N_2(M)\backslash \{i\}} |m_{ij}| < p_i^{(k-1)}(M) + \varepsilon|m_{ii}| = |(MX)_{ii}|.
Based on Cases 1 and 2, we have that MX is an SDD matrix, and consequently, M is an H-matrix. The proof is completed.
According to the definition of SDDk matrix and Lemma 1, we obtain some properties of SDDk matrices as follows:
Theorem 5. If M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) is an {SDD_k} matrix and N_1(M) \neq \emptyset, then \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for all i \in N_1(M).

Proof. Suppose, on the contrary, that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| = 0 for some i \in N_1(M). By Definition 1, we then have p_i^{(k-1)}(M) = r_i(M). Thus, |m_{ii}| > p_i^{(k-1)}(M) = r_i(M) \geq |m_{ii}|, which is a contradiction. Therefore, \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for all i \in N_1(M). The proof is completed.
Theorem 6. Let M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) be an {SDD_k}\;(k \geq 2) matrix, and suppose that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for all i \in N_2(M). Then

|m_{ii}| > p_i^{(k-2)}(M) > p_i^{(k-1)}(M) > 0, \; \; \; \forall i \in N_2(M),

and

|m_{ii}| > p_i^{(k-1)}(M) > 0, \; \; \; \forall i \in N.
Proof. By Lemma 1 and the condition that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for all i \in N_2(M), it holds that

|m_{ii}| > p_i^{(k-2)}(M) > p_i^{(k-1)}(M) > 0, \; \; \; \forall i \in N_2(M),

and

|m_{ii}| > p_i^{(k-1)}(M), \; \; \; \forall i \in N.
We now prove that |m_{ii}| > p_i^{(k-1)}(M) > 0\;(\forall i \in N) and consider the following two cases.

Case 1: If N_1(M) = \emptyset, then M is an {SDD} matrix. It is easy to verify that |m_{ii}| > p_i^{(k-1)}(M) > 0 for all i \in N_2(M) = N.

Case 2: If N_1(M) \neq \emptyset, then by Theorem 5 and the condition that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for all i \in N_2(M), it is easy to obtain that |m_{ii}| > p_i^{(k-1)}(M) > 0\;(\forall i \in N).

From Cases 1 and 2, we have that |m_{ii}| > p_i^{(k-1)}(M) > 0\;(\forall i \in N). The proof is completed.
Theorem 7. Let M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) be an {SDD_k}\;(k \geq 2) matrix such that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for all i \in N_2(M). Then there exists a diagonal matrix X = \mathrm{diag}\{x_1, x_2, \cdots, x_n\} such that MX is an {SDD} matrix, where the entries x_1, x_2, \ldots, x_n are determined by

x_i = \frac{p_i^{(k-1)}(M)}{|m_{ii}|}, \; \; \; \forall i \in N.
Proof. We need to prove that the matrix MX satisfies

|(MX)_{ii}| > r_i(MX), \; \; \; \forall i \in N.

From Theorem 6 and the condition that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for all i \in N_2(M), it is easy to verify that

0 < \frac{p_i^{(k-1)}(M)}{|m_{ii}|} < \frac{p_i^{(k-2)}(M)}{|m_{ii}|} < 1, \; \; \; \forall i \in N_2(M).

For each i \in N, we have that |(MX)_{ii}| = p_i^{(k-1)}(M) and

r_i(MX) = \sum\limits_{j = 1, j \neq i}^{n} |m_{ij}|x_j = \sum\limits_{j \in N_1(M)\backslash \{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| < \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{p_j^{(k-2)}(M)}{|m_{jj}|}|m_{ij}| = p_i^{(k-1)}(M) = |(MX)_{ii}|,

that is, |(MX)_{ii}| > r_i(MX).
Therefore, MX is an SDD matrix. The proof is completed.
In this section, by Lemma 2 and Theorem 7, we provide some new upper bounds for the infinity norm of the inverses of {SDD_k} matrices and {SDD} matrices, respectively, and present some comparisons with the Varah bound. Some numerical examples are presented to illustrate the corresponding results. In particular, when the involved matrices are {SDD_1} matrices, as a subclass of {SDD_k} matrices, these new bounds coincide with those provided by Chen et al. [20].
Theorem 8. Let M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) be an {SDD_k}\;(k \geq 2) matrix. Then

\|M^{-1}\|_\infty \leq \frac{\max\left\{1, \max\limits_{i \in N_2(M)} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon\right\}}{\min\left\{\min\limits_{i \in N_1(M)} U_i, \min\limits_{i \in N_2(M)} V_i\right\}},

where

U_i = |m_{ii}| - \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| - \sum\limits_{j \in N_2(M)\backslash \{i\}} \left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}|, \; \; \; V_i = \varepsilon\Big(|m_{ii}| - \sum\limits_{j \in N_2(M)\backslash \{i\}} |m_{ij}|\Big) + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|,

and

0 < \varepsilon < \min\limits_{i \in N} \frac{|m_{ii}| - p_i^{(k-1)}(M)}{\sum\limits_{j \in N_2(M)\backslash \{i\}} |m_{ij}|}.
Proof. By Lemma 2, there exists a positive diagonal matrix X such that MX is an {SDD} matrix, where X is defined as in Lemma 2. Thus,

\|M^{-1}\|_\infty = \|X(X^{-1}M^{-1})\|_\infty = \|X(MX)^{-1}\|_\infty \leq \|X\|_\infty \|(MX)^{-1}\|_\infty,

and

\|X\|_\infty = \max\limits_{1 \leq i \leq n} x_i = \max\left\{1, \max\limits_{i \in N_2(M)} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon\right\}.

Notice that MX is an {SDD} matrix. Hence, by Theorem 1, we have

\|(MX)^{-1}\|_\infty \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|(MX)_{ii}| - r_i(MX))}.
Thus, for any i \in N_1(M), it holds that

|(MX)_{ii}| - r_i(MX) = |m_{ii}| - \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| - \sum\limits_{j \in N_2(M)\backslash \{i\}} \left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}| = U_i.

For any i \in N_2(M), it holds that

|(MX)_{ii}| - r_i(MX) = p_i^{(k-1)}(M) + \varepsilon|m_{ii}| - \sum\limits_{j \in N_1(M)\backslash \{i\}} |m_{ij}| - \sum\limits_{j \in N_2(M)\backslash \{i\}} \left(\frac{p_j^{(k-1)}(M)}{|m_{jj}|} + \varepsilon\right)|m_{ij}| = \varepsilon\Big(|m_{ii}| - \sum\limits_{j \in N_2(M)\backslash \{i\}} |m_{ij}|\Big) + \sum\limits_{j \in N_2(M)\backslash \{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| = V_i.
Therefore, we get

\|M^{-1}\|_\infty \leq \frac{\max\left\{1, \max\limits_{i \in N_2(M)} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon\right\}}{\min\left\{\min\limits_{i \in N_1(M)} U_i, \min\limits_{i \in N_2(M)} V_i\right\}}.
The proof is completed.
From Theorem 8, it is easy to obtain the following result.
Corollary 1. Let M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) be an {SDD} matrix. Then

\|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i},

where k \geq 2,

Z_i = \varepsilon(|m_{ii}| - r_i(M)) + \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|,

and

0 < \varepsilon < \min\limits_{i \in N} \frac{|m_{ii}| - p_i^{(k-1)}(M)}{r_i(M)}.
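To see Corollary 1 in action on a small SDD matrix, the following sketch (illustrative only, not from the paper; NumPy assumed) evaluates the bound for k = 2 and an admissible ε. Since M is SDD, N_1(M) is empty and the recursion of Definition 1 reduces to weighted off-diagonal sums:

```python
import numpy as np

def corollary1_bound(M, k, eps):
    """Corollary 1 bound for an SDD matrix: (max_i p_i^{(k-1)}/|m_ii| + eps) / min_i Z_i."""
    A = np.abs(np.asarray(M))
    d = np.diag(A)
    off = A - np.diag(d)
    r = off.sum(axis=1)
    ps = [off @ (r / d)]                 # p^{(0)}; N_1(M) is empty since M is SDD
    for _ in range(k - 1):
        ps.append(off @ (ps[-1] / d))    # p^{(m)} from p^{(m-1)}
    pk2, pk1 = ps[-2], ps[-1]            # p^{(k-2)} and p^{(k-1)}
    assert 0 < eps < ((d - pk1) / r).min(), "eps outside the admissible range"
    Z = eps * (d - r) + off @ ((pk2 - pk1) / d)
    return ((pk1 / d).max() + eps) / Z.min()

M = np.array([[4.0, 1.0], [1.0, 3.0]])
b = corollary1_bound(M, 2, 0.1)
# Here the k = 2 bound is sharper than Varah's bound 1/min_i(|m_ii| - r_i) = 0.5.
print(b < 0.5)   # True
```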
Example 1. Consider the n \times n matrix:

M = \left(\begin{array}{cccccc} 4 &2 &1.5 & & & \\ 1.5 &4 &2 & & & \\ &4 &8 &2 & & \\ & &\ddots &\ddots &\ddots & \\ & & &4 &8 &2 \\ & & & &3.5 &4 \end{array}\right).
Take n = 20. It is easy to verify that M is an {SDD} matrix.
By calculations, we have that for k = 2,

\max\limits_{i \in N} \frac{p_i^{(1)}(M)}{|m_{ii}|} + \varepsilon_1 = 0.5859 + \varepsilon_1, \; \; \; \min\limits_{i \in N} Z_i = 0.4414 + 0.5\varepsilon_1, \; \; \; 0 < \varepsilon_1 < 0.4732.

For k = 4,

\max\limits_{i \in N} \frac{p_i^{(3)}(M)}{|m_{ii}|} + \varepsilon_2 = 0.3845 + \varepsilon_2, \; \; \; \min\limits_{i \in N} Z_i = 0.2959 + 0.5\varepsilon_2, \; \; \; 0 < \varepsilon_2 < 0.7034.

For k = 6,

\max\limits_{i \in N} \frac{p_i^{(5)}(M)}{|m_{ii}|} + \varepsilon_3 = 0.2504 + \varepsilon_3, \; \; \; \min\limits_{i \in N} Z_i = 0.1733 + 0.5\varepsilon_3, \; \; \; 0 < \varepsilon_3 < 0.8567.

For k = 8,

\max\limits_{i \in N} \frac{p_i^{(7)}(M)}{|m_{ii}|} + \varepsilon_4 = 0.1624 + \varepsilon_4, \; \; \; \min\limits_{i \in N} Z_i = 0.0990 + 0.5\varepsilon_4, \; \; \; 0 < \varepsilon_4 < 0.9572.
So, when k = 2, 4, 6, 8, by Corollary 1 and Theorem 1, we can get the upper bounds for \|M^{-1}\|_\infty, see Table 1. Thus,

\|M^{-1}\|_\infty \leq \frac{0.5859 + \varepsilon_1}{0.4414 + 0.5\varepsilon_1} < 2, \; \; \; \|M^{-1}\|_\infty \leq \frac{0.3845 + \varepsilon_2}{0.2959 + 0.5\varepsilon_2} < 2,
k | 2 | 4 | 6 | 8
Cor 1 | \frac{0.5859+\varepsilon_1}{0.4414+0.5\varepsilon_1} | \frac{0.3845+\varepsilon_2}{0.2959+0.5\varepsilon_2} | \frac{0.2504+\varepsilon_3}{0.1733+0.5\varepsilon_3} | \frac{0.1624+\varepsilon_4}{0.0990+0.5\varepsilon_4}
Th 1 | 2 | 2 | 2 | 2
and
\|M^{-1}\|_\infty \leq \frac{0.2504 + \varepsilon_3}{0.1733 + 0.5\varepsilon_3} < 2, \; \; \; \|M^{-1}\|_\infty \leq \frac{0.1624 + \varepsilon_4}{0.0990 + 0.5\varepsilon_4} < 2.
Moreover, when k = 1, by Theorem 2, we have

\|M^{-1}\|_\infty \leq \frac{0.7188 + \varepsilon_5}{0.4844 + 0.5\varepsilon_5}, \; \; \; 0 < \varepsilon_5 < 0.3214.
The following Theorem 9 shows that the bound in Corollary 1 is better than the Varah bound in Theorem 1 in some cases.
Theorem 9. Let M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) be an {SDD} matrix. If there exists k \geq 2 such that

\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}\min\limits_{i \in N} (|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N} \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|,

then

\|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i} \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| - r_i(M))},
where Zi and ε are defined as in Corollary 1, respectively.
Proof. From the given condition, there exists k \geq 2 such that

\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}\min\limits_{i \in N} (|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N} \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|,

and hence

\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}\min\limits_{i \in N} (|m_{ii}| - r_i(M)) + \varepsilon\min\limits_{i \in N} (|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N} \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| + \varepsilon\min\limits_{i \in N} (|m_{ii}| - r_i(M)).

Thus, we get

\left(\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon\right)\min\limits_{i \in N} (|m_{ii}| - r_i(M)) \leq \min\limits_{i \in N} \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}| + \min\limits_{i \in N} \big(\varepsilon(|m_{ii}| - r_i(M))\big) \leq \min\limits_{i \in N} \left(\varepsilon(|m_{ii}| - r_i(M)) + \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|\right) = \min\limits_{i \in N} Z_i.
Since M is an {SDD} matrix, we have

|m_{ii}| > r_i(M), \; \; \; Z_i > 0, \; \; \; \forall i \in N.

It is easy to verify that

\frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i} \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| - r_i(M))}.
Thus, by Corollary 1, it holds that

\|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|} + \varepsilon}{\min\limits_{i \in N} Z_i} \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| - r_i(M))}.
The proof is completed.
We illustrate Theorem 9 by the following Example 2.
Example 2. Consider again the matrix M of Example 1. For k = 4, we have

\max\limits_{i \in N} \frac{p_i^{(3)}(M)}{|m_{ii}|}\min\limits_{i \in N} (|m_{ii}| - r_i(M)) = 0.1923 < 0.2959 = \min\limits_{i \in N} \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(2)}(M) - p_j^{(3)}(M)}{|m_{jj}|}|m_{ij}|.

Thus, by Theorem 9, we obtain that for each 0 < \varepsilon_2 < 0.7034,

\|M^{-1}\|_\infty \leq \frac{0.3845 + \varepsilon_2}{0.2959 + 0.5\varepsilon_2} < 2 = \frac{1}{\min\limits_{1 \leq i \leq n} (|m_{ii}| - r_i(M))}.
However, we find that the upper bounds in Theorems 8 and 9 contain the parameter ε. Next, based on Theorem 7, we will provide new upper bounds for the infinity norm of the inverse matrices of SDDk matrices, which only depend on the elements of the given matrices.
Theorem 10. Let M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) be an {SDD_k}\;(k \geq 2) matrix such that \sum_{j \neq i, j \in N_2(M)} |m_{ij}| > 0 for each i \in N_2(M). Then

\|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}}{\min\limits_{i \in N} \left(p_i^{(k-1)}(M) - \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|\right)}.
Proof. By Theorem 7, there exists a positive diagonal matrix X such that MX is an {SDD} matrix, where X is defined as in Theorem 7. Thus, it holds that

\|M^{-1}\|_\infty = \|X(X^{-1}M^{-1})\|_\infty = \|X(MX)^{-1}\|_\infty \leq \|X\|_\infty \|(MX)^{-1}\|_\infty,

and

\|X\|_\infty = \max\limits_{1 \leq i \leq n} x_i = \max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}.

Notice that MX is an {SDD} matrix. Thus, by Theorem 1, we get

\|(MX)^{-1}\|_\infty \leq \frac{1}{\min\limits_{1 \leq i \leq n} (|(MX)_{ii}| - r_i(MX))} = \frac{1}{\min\limits_{i \in N} \left(p_i^{(k-1)}(M) - \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|\right)}.

Therefore, we have that

\|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}}{\min\limits_{i \in N} \left(p_i^{(k-1)}(M) - \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|\right)}.
The proof is completed.
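Since the bound in Theorem 10 is ε-free, it is especially convenient to evaluate. The sketch below is illustrative only (not from the paper; NumPy assumed) and combines the bound with the p^{(k)} recursion from Definition 1:

```python
import numpy as np

def theorem10_bound(M, k):
    """Epsilon-free bound of Theorem 10 for an SDD_k matrix (k >= 2)."""
    A = np.abs(np.asarray(M))
    d = np.diag(A)
    off = A - np.diag(d)
    r = off.sum(axis=1)
    in_n2 = d > r                                  # N_2(M): the strictly dominant rows
    p = off @ np.where(in_n2, r / d, 1.0)          # p^{(0)}
    for _ in range(k - 1):
        p = off @ np.where(in_n2, p / d, 1.0)      # iterate up to p^{(k-1)}
    # min_i ( p_i^{(k-1)} - sum_{j != i} p_j^{(k-1)}/|m_jj| * |m_ij| )
    denom = (p - off @ (p / d)).min()
    return (p / d).max() / denom

# Row 1 is not SDD (2 < 3), but p_1^{(1)} = 0.72 < 2, so M is an SDD_2 matrix,
# and sum_{j != i, j in N_2} |m_ij| > 0 holds for each i in N_2.
M = np.array([[2.0, 3.0, 0.0], [1.0, 5.0, 1.0], [0.0, 1.0, 5.0]])
bound = theorem10_bound(M, 2)
exact = np.linalg.norm(np.linalg.inv(M), np.inf)
print(exact <= bound)   # True
```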
Since SDD matrices are a subclass of SDDk matrices, by Theorem 10, we can obtain the following result.
Corollary 2. Let M = (m_{ij}) \in {C^{n \times n}}\;(n \geq 2) be an {SDD} matrix. If r_i(M) > 0\;(\forall i \in N), then there exists k \geq 2 such that

\|M^{-1}\|_\infty \leq \frac{\max\limits_{i \in N} \frac{p_i^{(k-1)}(M)}{|m_{ii}|}}{\min\limits_{i \in N} \sum\limits_{j \in N\backslash \{i\}} \frac{p_j^{(k-2)}(M) - p_j^{(k-1)}(M)}{|m_{jj}|}|m_{ij}|}.
Two examples are given to show the advantage of the bound in Theorem 10.
Example 3. Consider the following matrix:
M=(40−1−2−1−2010−4.1−4−6−20−233−4−80−4−620−2−30−4−2040). |
It is easy to verify that M is not an {SDD} matrix, an {SDD_1} matrix, a {GSDD_1} matrix, an S-SDD matrix, or a CKV-type matrix. Therefore, we cannot use the error bounds in [1, 8, 9, 18, 20] to estimate \|M^{-1}\|_\infty. However, M is an {SDD_2} matrix, so \|M^{-1}\|_\infty can be estimated by the bound in Theorem 10.
Example 4. Consider the tri-diagonal matrix M\in R^{n\times n} arising from the finite difference method for free boundary problems [18], where
M = \left(\begin{array}{ccccc} b+\alpha \sin \left(\frac{1}{n}\right) &c &0 &\cdots &0 \\ a &b+\alpha \sin \left(\frac{2}{n}\right) &c &\cdots &0 \\ &\ddots &\ddots &\ddots &\\ 0 &\cdots &a &b+\alpha \sin \left(\frac{n-1}{n}\right) &c \\ 0 &\cdots &0 &a & b+\alpha \sin \left(1\right) \end{array} \right).
Take n = 4 , a = 1 , b = 0 , c = 3.7 and \alpha = 10 . It is easy to verify that M is neither an {SDD} matrix nor an SDD_1 matrix. However, M is a GSDD_1 matrix and an SDD_3 matrix. By the bound in Theorem 10, we have
\|{M^{ - 1}}\|_\infty \leq 8.2630, |
while by the bound in Theorem 4, it holds that
\begin{eqnarray*} \|M^{- 1}\|_\infty \le \frac{\varepsilon }{{\min \left\{2.1488-\varepsilon, 0.3105, 2.474\varepsilon-3.6272 \right\} }}, \; \; \; \varepsilon\in (1.4661, 2.1488). \end{eqnarray*} |
The following two theorems show that the bound in Corollary 2 is better than that in Theorem 1 in some cases.
Theorem 11. Let M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) be an {SDD} matrix. If {r_i}(M) > 0(\forall i \in N) and there exists k \geq 2 such that
\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} \ge \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)), |
then
||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} < \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. |
Proof. Since M is an {SDD} matrix, N_1(M) = \emptyset and M is an {SDD_k} matrix. By the given condition that {r_i}(M) > 0\;(\forall i \in N) , it holds that
\begin{eqnarray*} |{m_{ii}}| & > & {r_i}(M) > \sum\limits_{j \in N \backslash \{ i\} } {\frac{{r_j(M)}}{{|{m_{jj}}|}}} |{m_{ij}}| = p_i^{(0)}(M) > 0, \; \; \; \forall i \in N, \\ p_i^{(0)}(M)& = &\sum\limits_{j \in N \backslash \{ i\} } {\frac{{r_j(M)}}{{|{m_{jj}}|}}} |{m_{ij}}| > \sum\limits_{j \in N \backslash \{ i\} } {\frac{{p_j^{(0)}(M)}}{{|{m_{jj}}|}}} |{m_{ij}}| = p_i^{(1)}(M) > 0, \; \; \; \forall i \in N. \end{eqnarray*} |
Similarly, we can obtain that
\begin{eqnarray*} |{m_{ii}}| > {r_i}(M) > p_i^{(0)}(M) > \cdots > p_i^{(k-1)}(M) > 0, \; \; \; \forall i \in N, \end{eqnarray*} |
that is,
\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}} < 1. |
Since there exists k \geq 2 such that
\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} \ge \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)), |
then we have
\frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} < \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. |
Thus, from Corollary 2, we get
||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} < \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. |
The proof is completed.
We illustrate Theorem 11 by the following Example 5.
Example 5. Consider the matrix M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) , where
M = \left( {\begin{array}{*{20}{c}} 4&3&{0.9}&{}&{}&{}&{}&{}&{}\\ 1&6&2&{}&{}&{}&{}&{}&{}\\ {}&2&5&2&{}&{}&{}&{}&{}\\ {}&{}&2&5&2&{}&{}&{}&{}\\ {}&{}&{}& \ddots & \ddots & \ddots &{}&{}&{}\\ {}&{}&{}&{}&2&5&2&{}&{}\\ {}&{}&{}&{}&{}&2&5&2&{}\\ {}&{}&{}&{}&{}&{}&1&6&2\\ {}&{}&{}&{}&{}&{}&{0.9}&3&4 \end{array}} \right). |
Take n = 20 . It is easy to check that M is an {SDD} matrix. Let
{l_k} = \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|}, \; \; m = \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)). |
By calculations, we have
\begin{eqnarray*} {l_2} & = & 0.2692 > 0.1 = m, \; \; \; \; {l_3} = 0.2567 > 0.1 = m, \; \; \; \; {l_4} = 0.1788 > 0.1 = m, \\ {l_5} & = & 0.1513 > 0.1 = m, \; \; \; \; {l_6} = 0.1037 > 0.1 = m. \end{eqnarray*} |
Thus, when k = 2, 3, 4, 5, 6 , the matrix M satisfies the conditions of Theorem 11. By Theorems 1 and 11, we can derive the upper bounds for ||{M^{ - 1}}|{|_\infty } , see Table 2. Meanwhile, when k = 1 , by Theorem 3, we get that ||{M^{ - 1}}|{|_\infty } \leq 1.6976.
k | 2 | 3 | 4 | 5 | 6
Th 11 | 1.9022 | 1.5959 | 1.8332 | 1.7324 | 2.0214
Th 1 | 10 | 10 | 10 | 10 | 10
From Table 2, we can see that the bounds in Theorem 11 are better than that in Theorems 1 and 3 in some cases.
Theorem 12. Let M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) be an {SDD} matrix. If {r_i}(M) > 0(\forall i \in N) and there exists k \geq 2 such that
\begin{eqnarray*} \mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) &\le& \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} \\ & < & \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)), \end{eqnarray*} |
then
||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. |
Proof. By Theorem 7 and the given condition that {r_i}(M) > 0(\forall i \in N) , it is easy to get that
\sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} > 0, \; \; \; \forall i \in N. |
From the condition that there exists k \geq 2 such that
\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) \le \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|}, |
we have
\frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. |
Thus, from Corollary 2, it holds that
||{M^{ - 1}}|{|_\infty } \le \frac{{\mathop {\max }\limits_{i \in N} \frac{{p_i^{(k - 1)}(M)}}{{|{m_{ii}}|}}}}{{\mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(k - 2)}(M) - p_j^{(k - 1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} }} \le \frac{1}{{\mathop {\min}\limits_{1 \le i \le n} (|{m_{ii}}| - {r_i}(M))}}. |
The proof is completed.
Next, we illustrate Theorem 12 by the following Example 6.
Example 6. Consider the tri-diagonal matrix M = (m_{ij}) \in {C^{n \times n}}(n \geq 2) , where
M = \left( {\begin{array}{*{20}{c}} 3&{-2.5}&{}&{}&{}&{}&{}&{}&{}\\ {-1.2}&4&-2&{}&{}&{}&{}&{}&{}\\ {}&{-2.8}&5&-1&{}&{}&{}&{}&{}\\ {}&{}&{-2.8}&5&-1&{}&{}&{}&{}\\ {}&{}&{}& \ddots & \ddots & \ddots &{}&{}&{}\\ {}&{}&{}&{}&{-2.8}&5&-1&{}&{}\\ {}&{}&{}&{}&{}&{-2.8}&5&-1&{}\\ {}&{}&{}&{}&{}&{}&{-1.2}&4&-2\\ {}&{}&{}&{}&{}&{}&{}&{-2.5}&3 \end{array}} \right). |
Take n = 20 . It is easy to verify that M is an {SDD} matrix.
By calculations, we have that for k = 2 ,
\begin{eqnarray*} \mathop {\max }\limits_{i \in N} \frac{{p_i^{(1)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) & = & 0.2686 < \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(0)}(M) - p_j^{(1)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} = 0.3250\\ & < & 0.5 = \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)). \end{eqnarray*} |
For k = 5 , we get
\begin{eqnarray*} \mathop {\max }\limits_{i \in N} \frac{{p_i^{(4)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) & = & 0.1319 < \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(3)}(M) - p_j^{(4)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} = 0.1685\\ & < & 0.5 = \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)). \end{eqnarray*} |
For k = 10 , it holds that
\begin{eqnarray*} \mathop {\max }\limits_{i \in N} \frac{{p_i^{(9)}(M)}}{{|{m_{ii}}|}}\mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)) & = & 0.0386 < \mathop {\min}\limits_{i \in N} \sum\limits_{j \in N\backslash \{ i\} } {\frac{{p_j^{(8)}(M) - p_j^{(9)}(M)}}{{|{m_{jj}}|}}|{m_{ij}}|} = 0.0485\\ & < & 0.5 = \mathop {\min}\limits_{i \in N} (|{m_{ii}}| - {r_i}(M)). \end{eqnarray*} |
Thus, for k = 2, 5, 10 , the matrix M satisfies the conditions of Theorem 12. Thus, from Theorems 12 and 1, we get the upper bounds for ||{M^{ - 1}}|{|_\infty } , see Table 3. Meanwhile, when k = 1 , by Theorem 3, we have that ||{M^{ - 1}}|{|_\infty } \leq 1.7170.
k | 2 | 5 | 10
Th 12 | 1.6530 | 1.5656 | 1.5925
Th 1 | 2 | 2 | 2
From Table 3, we can see that the bound in Theorem 12 is sharper than that in Theorems 1 and 3 in some cases.
In this paper, {SDD_k} matrices, a new subclass of H -matrices containing {SDD} matrices and {SDD_1} matrices, are proposed, and some properties of {SDD_k} matrices are obtained. Meanwhile, some new upper bounds of the infinity norm of the inverses of {SDD} matrices and {SDD_k} matrices are presented. Furthermore, we prove that the new bounds are better than some existing bounds in some cases. Some numerical examples are also provided to show the validity of the new results. In the future, based on the proposed infinity norm bounds, we will explore computable error bounds of linear complementarity problems for SDD_k matrices.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This research is supported by Guizhou Provincial Science and Technology Projects (20191161), and the Natural Science Research Project of Department of Education of Guizhou Province (QJJ2023062, QJJ2023063).
The authors declare that they have no competing interests.