The class of generalized SDD1 (GSDD1) matrices is a new subclass of H-matrices. In this paper, we focus on the subdirect sum of GSDD1 matrices, and we give some sufficient conditions ensuring that the subdirect sum of a GSDD1 matrix with a strictly diagonally dominant (SDD) matrix is again a GSDD1 matrix. Moreover, corresponding examples are given to illustrate our results.
Citation: Jiaqi Qi, Yaqiang Wang. Subdirect Sums of GSDD1 matrices[J]. Electronic Research Archive, 2024, 32(6): 3989-4010. doi: 10.3934/era.2024179
In 1999, Fallat and Johnson [1] proposed the concept of the k-subdirect sum of square matrices, which generalizes the usual sum of matrices [2]. The subdirect sum of matrices plays an important role in many areas, such as matrix completion problems, global stiffness matrices in finite element methods, and overlapping subdomains in domain decomposition methods [1,2,3,4,5].
An important question for subdirect sums is whether the k-subdirect sum of two square matrices belonging to one class of matrices lies in the same class. This question has attracted widespread attention for different classes of matrices and has produced a variety of results. In 2005, Bru et al. gave sufficient conditions ensuring that the subdirect sum of two nonsingular M-matrices is also a nonsingular M-matrix [3]. The following year, they further showed that the k-subdirect sum of S-SDD matrices is also an S-SDD matrix [2]. In [6], Chen and Wang gave some sufficient conditions ensuring that the k-subdirect sum of SDD1 matrices is an SDD1 matrix. In [7], Li et al. gave some sufficient conditions such that the k-subdirect sum of doubly strictly diagonally dominant (DSDD) matrices is in the class of DSDD matrices. In addition, k-subdirect sums of other classes of matrices have been studied, such as Nekrasov matrices [8,9,10], quasi-Nekrasov (QN) matrices [11], SDD(p) matrices [12], weakly chained diagonally dominant matrices [13], Ostrowski-Brauer sparse (OBS) matrices [14], {i0}-Nekrasov matrices [15], {p1,p2}-Nekrasov matrices [16], Dashnic-Zusmanovich (DZ) matrices [17], and B-matrices [18,19].
The class of GSDD1 matrices, a new subclass of H-matrices, was proposed by Dai et al. in 2023 [20]. In this paper, we focus on the subdirect sum of GSDD1 matrices, and we give some sufficient conditions such that the k-subdirect sum of a GSDD1 matrix with an SDD matrix belongs to the class of GSDD1 matrices. Numerical examples are presented to illustrate the corresponding results.
Now, some definitions are listed as follows.
Definition 1.1. ([2]) Let A and B be two square matrices of order n1 and n2, respectively, and k be an integer such that 1≤k≤min{n1,n2}, and let A and B be partitioned into 2×2 blocks as follows:
$$A=\begin{pmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{pmatrix},\qquad B=\begin{pmatrix}B_{11}&B_{12}\\ B_{21}&B_{22}\end{pmatrix},\tag{1.1}$$
where A22 and B11 are square matrices of order k. Following [1], we call the square matrix of order n=n1+n2−k given by
$$C=\begin{pmatrix}A_{11}&A_{12}&0\\ A_{21}&A_{22}+B_{11}&B_{12}\\ 0&B_{21}&B_{22}\end{pmatrix}$$
the k-subdirect sum of A and B, denoted by C=A⊕kB. Every element of C can be expressed in terms of the elements of A and B. To this end, let us define the following sets of indices:
S1={1,2,...,n1−k},S2={n1−k+1,n1−k+2,...,n1},S3={n1+1,...,n}. | (1.2) |
Obviously, S1∪S2∪S3=N:={1,2,...,n}. Denoting C=A⊕kB=[cij], A=[aij] and B=[bij], then
$$c_{ij}=\begin{cases}a_{ij}, & i\in S_1,\ j\in S_1\cup S_2,\\ 0, & i\in S_1,\ j\in S_3,\\ a_{ij}, & i\in S_2,\ j\in S_1,\\ a_{ij}+b_{i-n_1+k,\,j-n_1+k}, & i\in S_2,\ j\in S_2,\\ b_{i-n_1+k,\,j-n_1+k}, & i\in S_2,\ j\in S_3,\\ 0, & i\in S_3,\ j\in S_1,\\ b_{i-n_1+k,\,j-n_1+k}, & i\in S_3,\ j\in S_2\cup S_3.\end{cases}$$
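As an illustration of the entrywise formula above, the following Python sketch (not part of the original paper; the function name subdirect_sum and the use of NumPy are our own choices) assembles C=A⊕kB by overlapping the trailing k×k block of A with the leading k×k block of B.

```python
import numpy as np

def subdirect_sum(A, B, k):
    """Return the k-subdirect sum C = A (+)_k B of square matrices A and B."""
    A, B = np.asarray(A), np.asarray(B)
    n1, n2 = A.shape[0], B.shape[0]
    if not (1 <= k <= min(n1, n2)):
        raise ValueError("k must satisfy 1 <= k <= min(n1, n2)")
    n = n1 + n2 - k
    C = np.zeros((n, n), dtype=np.result_type(A, B))
    # The top-left n1 x n1 block receives A, the bottom-right n2 x n2 block
    # receives B; their k x k overlap (indices n1-k .. n1-1) is A22 + B11.
    C[:n1, :n1] += A
    C[n1 - k:, n1 - k:] += B
    return C
```

With the matrices of Example 2.1 below and k=1, this sketch reproduces the 5×5 matrix C displayed there.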
Definition 1.2. ([20]) Let A=[aij]∈Cn×n, where Cn×n denotes the set of n×n complex matrices, and let
$$r_i(A)=\sum_{j\in N,\,j\ne i}|a_{ij}|,\qquad i\in N,$$
$$N_A=\{i\mid |a_{ii}|\le r_i(A)\},\qquad \overline{N}_A=\{i\mid |a_{ii}|>r_i(A)\}.$$
It is easy to obtain that ¯NA is the complement of NA in N, i.e., ¯NA=N∖NA.
Definition 1.3. ([6]) A matrix A=[aij]∈Cn×n is called a strictly diagonally dominant (SDD) matrix if
|aii|>ri(A),i∈N. |
Definition 1.4. ([20]) A matrix A=[aij]∈Cn×n is called a GSDD1 matrix if
$$\begin{cases} r_i(A)>p_i^{\overline{N}_A}(A), & i\in\overline{N}_A,\\[3pt] \left(r_i(A)-p_i^{\overline{N}_A}(A)\right)\left(|a_{jj}|-p_j^{N_A}(A)\right)>p_i^{N_A}(A)\,p_j^{\overline{N}_A}(A), & i\in\overline{N}_A,\ j\in N_A, \end{cases}$$
where
$$p_i^{N_A}(A):=\sum_{j\in N_A\setminus\{i\}}|a_{ij}|,\qquad p_i^{\overline{N}_A}(A):=\sum_{j\in\overline{N}_A\setminus\{i\}}\frac{r_j(A)}{|a_{jj}|}\,|a_{ij}|,\qquad i\in N.$$
Remark 1.1. From Definitions 1.3 and 1.4, it is easy to obtain that if a matrix A is an SDD matrix with ri(A)>0, then it is a GSDD1 matrix.
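For the numerical examples that follow, it is convenient to have a routine that evaluates the quantities of Definition 1.4 and tests the two strict inequalities. The sketch below is illustrative only; the names gsdd1_quantities and is_gsdd1 are ours, and the definition is checked literally as stated.

```python
import numpy as np

def gsdd1_quantities(A):
    """Return r_i(A), |a_ii|, N_A, bar N_A, p_i^{N_A}(A), p_i^{bar N_A}(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.abs(np.diag(A))
    r = np.sum(np.abs(A), axis=1) - d                        # r_i(A)
    N_A = [i for i in range(n) if d[i] <= r[i]]              # rows that are not strictly dominant
    Nbar_A = [i for i in range(n) if d[i] > r[i]]            # strictly diagonally dominant rows
    pN = np.array([sum(abs(A[i, j]) for j in N_A if j != i) for i in range(n)])
    pNbar = np.array([sum(r[j] / d[j] * abs(A[i, j]) for j in Nbar_A if j != i)
                      for i in range(n)])
    return r, d, N_A, Nbar_A, pN, pNbar

def is_gsdd1(A):
    """Test the two strict inequalities of Definition 1.4, taken literally."""
    r, d, N_A, Nbar_A, pN, pNbar = gsdd1_quantities(A)
    if any(r[i] <= pNbar[i] for i in Nbar_A):
        return False
    return all((r[i] - pNbar[i]) * (d[j] - pN[j]) > pN[i] * pNbar[j]
               for i in Nbar_A for j in N_A)
```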
First of all, a counterexample is given to show that the subdirect sum of two GSDD1 matrices may not necessarily be a GSDD1 matrix.
Example 2.1. Consider the following GSDD1 matrices A and B, where
$$A=\begin{pmatrix}4&3&2\\ 1&4&3\\ 0&1&3.5\end{pmatrix},\qquad B=\begin{pmatrix}2.5&2&0\\ 1&2&1\\ 2.3&1.8&4\end{pmatrix},$$
and the 1-subdirect sum C=A⊕1B is
$$C=\begin{pmatrix}4&3&2&0&0\\ 1&4&3&0&0\\ 0&1&6&2&0\\ 0&0&1&2&1\\ 0&0&2.3&1.8&4\end{pmatrix}.$$
However, C is not a GSDD1 matrix because
$$\left(r_3(C)-p_3^{\overline{N}_C}(C)\right)\left(|c_{11}|-p_1^{N_C}(C)\right)=(3-0)(4-3)=3=3\times 1=p_3^{N_C}(C)\,p_1^{\overline{N}_C}(C),$$
so the second strict inequality in Definition 1.4 fails for i=3∈¯NC and j=1∈NC.
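Assuming the subdirect_sum and is_gsdd1 sketches given earlier, the claim of Example 2.1 can be reproduced numerically:

```python
import numpy as np

A = np.array([[4, 3, 2],
              [1, 4, 3],
              [0, 1, 3.5]])
B = np.array([[2.5, 2, 0],
              [1, 2, 1],
              [2.3, 1.8, 4]])

print(is_gsdd1(A), is_gsdd1(B))   # expected: True True
C = subdirect_sum(A, B, 1)
print(is_gsdd1(C))                # expected: False -- the strict inequality degenerates to equality
```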
Example 2.1 shows that the subdirect sum of GSDD1 matrices need not be a GSDD1 matrix. A natural question therefore arises: under what conditions is the subdirect sum of GSDD1 matrices again in the class of GSDD1 matrices?
In order to obtain the main results, several lemmas are introduced that will be used in the sequel.
Lemma 2.1. If matrix A=[aij]∈Cn×n is a GSDD1 matrix, then |ajj|−pjNA(A)>0 holds for all j∈NA.
Proof. According to the definition of GSDD1 matrices, we get
(ri(A)−pi¯NA(A))(|ajj|−pjNA(A))>piNA(A)pj¯NA(A). |
Since ri(A)−pi¯NA(A)>0, and piNA(A) and pj¯NA(A) are both nonnegative, it follows that |ajj|−pjNA(A)>0.
Lemma 2.2. ([20]) If A=[aij]∈Cn×n is a GSDD1 matrix, then there is at least one entry aij≠0, i≠j, i∈¯NA, j∈N.
Lemma 2.3. ([20]) If A=[aij]∈Cn×n is a GSDD1 matrix with NA=∅, then A is an SDD matrix, and there is at least one entry aij≠0, i≠j, i∈¯NA, j∈¯NA.
Now, we consider the 1-subdirect sum of GSDD1 matrices.
Theorem 2.1. Let A=[aij] and B=[bij] be square matrices of order n1 and n2 partitioned as in (1.1), respectively. And let k=1, S1={1,2,…,n1−1}, S2={n1}, and S3={n1+1,n1+2,…,n1+n2−1}. We assume that A is a GSDD1 matrix, and B is an SDD matrix with ri(B)>0 for all i∈¯NB. If all diagonal entries of A22 and B11 are positive (or all negative), n1∈¯NA and
$$\frac{r_{n_1}(A)}{|a_{n_1,n_1}|}\ \ge\ \frac{r_{n_1}(A)+r_1(B)}{|a_{n_1,n_1}+b_{11}|},$$
then the 1-subdirect sum C=A⊕1B is a GSDD1 matrix.
Proof. According to the 1-subdirect sum C=A⊕1B, we have
rn1(C)=rn1(A)+r1(B). |
From n1∈¯NA, we know |an1,n1|>rn1(A). Because all diagonal entries of A22 and B11 are positive (or negative), we have
|cn1,n1|=|an1,n1+b11|=|an1,n1|+|b11|>rn1(A)+r1(B)=rn1(C). |
Since A is a GSDD1 matrix, B is an SDD matrix with ri(B)>0 for all i∈¯NB, C=A⊕1B, and according to Lemmas 2.2 and 2.3, we know that ri(C)≠0 for all i∈¯NC. Therefore, for any i∈¯NC,
ri(C)=∑j∈N∖{i}|cij|>∑j∈¯NC∖{i}rj(C)|cjj||cij|=pi¯NC(C). |
For any j∈NC, we easily get j∈NC∩S1=NA∩S1⊂S1. For the three different selection ranges of i, that is, i∈¯NC∩S1=¯NA∩S1⊂S1, i∈¯NC∩S2={n1}, and i∈¯NC∩S3⊂S3, therefore, we divide the proof into three cases.
Case 1. For i∈¯NC∩S1=¯NA∩S1⊂S1, j∈NC, we have
ri(C)=ri(A), |
pi¯NC(C)=∑j∈¯NC∖{i},j∈S1rj(C)|cjj||cij|+∑j∈¯NC∖{i},j∈S2rj(C)|cjj||cij|+∑j∈¯NC∖{i},j∈S3rj(C)|cjj||cij|=∑j∈¯NA∖{i},j∈S1rj(A)|ajj||aij|+rn1(A)+r1(B)|an1,n1+b11||ai,n1|+0≤∑j∈¯NA∖{i},j∈S1rj(A)|ajj||aij|+rn1(A)|an1,n1||ai,n1|=pi¯NA(A), |
|cjj|=|ajj|, | (2.1) |
pjNC(C)=∑j′∈NC∖{j}|cjj′|=∑j′∈NA∖{j}|ajj′|=pjNA(A), | (2.2) |
piNC(C)=∑j∈NC∖{i}|cij|=∑j∈NA∖{i}|aij|=piNA(A), |
pj¯NC(C)=∑j′∈¯NC∖{j},j′∈S1rj′(C)|cj′j′||cjj′|+∑j′∈¯NC∖{j},j′∈S2rj′(C)|cj′j′||cjj′|+∑j′∈¯NC∖{j},j′∈S3rj′(C)|cj′j′||cjj′|=∑j′∈¯NA∖{j},j′∈S1rj′(A)|aj′j′||ajj′|+rn1(A)+r1(B)|an1,n1+b11||aj,n1|+0≤∑j′∈¯NA∖{j},j′∈S1rj′(A)|aj′j′||ajj′|+rn1(A)|an1,n1||aj,n1|=pj¯NA(A). | (2.3) |
Therefore, we obtain that
(ri(C)−pi¯NC(C))(|cjj|−pjNC(C))≥(ri(A)−pi¯NA(A))(|ajj|−pjNA(A))>piNA(A)pj¯NA(A)≥piNC(C)pj¯NC(C). |
Case 2. For i∈¯NC∩S2={n1}, j∈NC,
rn1(C)=rn1(A)+r1(B), |
pn1NC(C)=∑j∈NC∖{n1}|cn1,j|=∑j∈NA∖{n1}|an1,j|=pn1NA(A). |
pn1¯NC(C)=∑j∈¯NC∖{n1},j∈S1rj(C)|cjj||cn1,j|+∑j∈¯NC∖{n1},j∈S3rj(C)|cjj||cn1,j|=∑j∈¯NA∖{n1}rj(A)|ajj||an1,j|+∑j∈¯NB∖{1}rj(B)|bjj||b1j|=pn1¯NA(A)+p1¯NB(B). |
We know that the results of the |cjj|, pjNC(C), and pj¯NC(C) are the same as (2.1), (2.2), and (2.3). Because B is an SDD matrix with ri(B)>0 for all i∈¯NB, we clearly get
r1(B)−p1¯NB(B)>0. |
Hence,
(rn1(C)−pn1¯NC(C))(|cjj|−pjNC(C))=(rn1(A)+r1(B)−pn1¯NA(A)−p1¯NB(B))(|ajj|−pjNA(A))>(rn1(A)−pn1¯NA(A))(|ajj|−pjNA(A))>pn1NA(A)pj¯NA(A)≥pn1NC(C)pj¯NC(C). |
Case 3. For i∈¯NC∩S3⊂S3, j∈NC, in particular, we obtain that
piNC(C)=∑j∈NC∖{i}|cij|=0. |
So we easily obtain that
(ri(C)−pi¯NC(C))(|cjj|−pjNC(C))=(ri(C)−pi¯NC(C))(|ajj|−pjNA(A))>0=piNC(C)pj¯NC(C).
From Cases 1–3, we have that for any i∈¯NC and j∈NC, the matrix C satisfies the definition of a GSDD1 matrix, and the conclusion follows.
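The hypotheses of Theorem 2.1 are all checkable directly from the entries of A and B. The following sketch is illustrative only; it reuses the hypothetical is_gsdd1 helper from above, and the function names is_sdd and theorem_2_1_hypotheses are ours.

```python
import numpy as np

def is_sdd(M):
    """Strict diagonal dominance: |m_ii| > r_i(M) for every row i."""
    M = np.asarray(M, dtype=float)
    r = np.sum(np.abs(M), axis=1) - np.abs(np.diag(M))
    return bool(np.all(np.abs(np.diag(M)) > r))

def theorem_2_1_hypotheses(A, B):
    """Check the sufficient conditions of Theorem 2.1 for the 1-subdirect sum A (+)_1 B."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    n1 = A.shape[0]
    rA = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    rB = np.sum(np.abs(B), axis=1) - np.abs(np.diag(B))
    if not (is_gsdd1(A) and is_sdd(B) and np.all(rB > 0)):
        return False                              # A GSDD1, B SDD with r_i(B) > 0 for all i
    if A[n1 - 1, n1 - 1] * B[0, 0] <= 0:
        return False                              # overlapping diagonal entries must share one sign
    if abs(A[n1 - 1, n1 - 1]) <= rA[n1 - 1]:
        return False                              # n1 must belong to \bar N_A
    return (rA[n1 - 1] / abs(A[n1 - 1, n1 - 1])
            >= (rA[n1 - 1] + rB[0]) / abs(A[n1 - 1, n1 - 1] + B[0, 0]))
```

For the matrices A and B of Example 2.2 below, this check returns True, consistent with the statement there that A⊕1B is a GSDD1 matrix.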
Theorem 2.2. Let A=[aij] and B=[bij] be square matrices of order n1 and n2 partitioned as in (1.1), respectively. And let k, S1, S2, and S3 be as in Theorem 2.1. Likewise, we assume A is a GSDD1 matrix, and B is an SDD matrix with ri(B)>0 for all i∈¯NB. If all diagonal entries of A22 and B11 are positive (or all negative), n1∈NA, rn1(A)+r1(B)≥|an1,n1|+|b11| and
$$\min_{2\le l\le n_2}\left(r_l(B)-p_l^{\overline{N}_B}(B)\right)\ \ge\ \max_{m\in\overline{N}_A}\left(r_m(A)-p_m^{\overline{N}_A}(A)\right),$$
$$\min_{m\in\overline{N}_A}p_m^{N_A}(A)\ \ge\ \max_{2\le l\le n_2}|b_{l1}|,$$
then C=A⊕1B is a GSDD1 matrix.
Proof. Since A is a GSDD1 matrix, B is an SDD matrix with ri(B)>0 for all i∈¯NB, n1∈NA, and rn1(A)+r1(B)≥|an1,n1|+|b11|, we get n1∈NC and then NC=NA.
For any i∈¯NC, by Lemmas 2.2 and 2.3, we have ri(C)≠0 and then
ri(C)=∑j∈N∖{i}|cij|>∑j∈¯NC∖{i}rj(C)|cjj||cij|=pi¯NC(C). |
Since n1∈NC, we have ¯NC∩S2=∅, so we prove it according to the two different selection ranges of i, namely i∈¯NC∩S1=¯NA∩S1⊂S1 and i∈¯NC∩S3⊂S3. For any j∈NC, either j∈NC∩S1=NA∩S1⊂S1 or j∈NC∩S2=NA∩S2={n1}. Therefore, we prove it from the following cases.
Case 1. For i∈¯NC∩S1=¯NA∩S1⊂S1, j∈NC∩S1=NA∩S1⊂S1, we obtain that
ri(C)=ri(A), | (2.4) |
pi¯NC(C)=∑j∈¯NC∖{i},j∈S1rj(C)|cjj||cij|+∑j∈¯NC∖{i},j∈S3rj(C)|cjj||cij|=∑j∈¯NA∖{i}rj(A)|ajj||aij|+0=pi¯NA(A), | (2.5) |
|cjj|=|ajj|, | (2.6) |
pjNC(C)=∑j′∈NC∖{j}|cjj′|=∑j′∈NA∖{j}|ajj′|=pjNA(A), | (2.7) |
piNC(C)=∑j∈NC∖{i}|cij|=∑j∈NA∖{i}|aij|=piNA(A), | (2.8) |
pj¯NC(C)=∑j′∈¯NC∖{j},j′∈S1rj′(C)|cj′j′||cjj′|+∑j′∈¯NC∖{j},j′∈S3rj′(C)|cj′j′||cjj′|=∑j′∈¯NA∖{j}rj′(A)|aj′j′||ajj′|+0=pj¯NA(A). | (2.9) |
Therefore,
(ri(C)−pi¯NC(C))(|cjj|−pjNC(C))=(ri(A)−pi¯NA(A))(|ajj|−pjNA(A))>piNA(A)pj¯NA(A)=piNC(C)pj¯NC(C). |
Case 2. For i∈¯NC∩S1=¯NA∩S1⊂S1, j∈NC∩S2=NA∩S2={n1}, we know that ri(C), piNC(C), and pi¯NC(C) have the same results as (2.4), (2.8), and (2.5). Moreover,
|cn1,n1|=|an1,n1+b11|=|an1,n1|+|b11|, | (2.10) |
pn1NC(C)=∑j′∈NC∖{n1}|cn1,j′|=∑j′∈NA∖{n1}|an1,j′|=pn1NA(A), | (2.11) |
pn1¯NC(C)=∑j′∈¯NC∖{n1},j′∈S1rj′(C)|cj′j′||cn1,j′|+∑j′∈¯NC∖{n1},j′∈S3rj′(C)|cj′j′||cn1,j′|=∑j′∈¯NA∖{n1}rj′(A)|aj′j′||an1,j′|+∑j′∈¯NB∖{1}rj′(B)|bj′j′||b1j′|=pn1¯NA(A)+p1¯NB(B). | (2.12) |
Hence, we obtain that
(ri(C)−pi¯NC(C))(|cn1,n1|−pn1NC(C))=(ri(A)−pi¯NA(A))(|an1,n1|+|b11|−pn1NA(A))=(ri(A)−pi¯NA(A))(|an1,n1|−pn1NA(A))+(ri(A)−pi¯NA(A))⋅|b11|>piNA(A)pn1¯NA(A)+piNA(A)p1¯NB(B)=piNC(C)pn1¯NC(C). |
Case 3. For i∈¯NC∩S3⊂S3, j∈NC∩S1=NA∩S1⊂S1, we have
ri(C)=ri−n1+1(B)=rl(B), | (2.13) |
pi¯NC(C)=∑j∈¯NC∖{i},j∈S1rj(C)|cjj||cij|+∑j∈¯NC∖{i},j∈S3rj(C)|cjj||cij|=0+∑j∈¯NB∖{l},j∈{2,…,n2}rj(B)|bjj||blj|≤pl¯NB(B), | (2.14) |
piNC(C)=∑j∈NC∖{i},j∈S1|cij|+∑j∈NC∖{i},j=n1|ci,n1|=0+|bl1|=|bl1|, | (2.15) |
where l=i−n1+1. We have the same values of |cjj|, pjNC(C), and pj¯NC(C) as (2.6), (2.7), and (2.9). Therefore,
(ri(C)−pi¯NC(C))(|cjj|−pjNC(C))≥(rl(B)−pl¯NB(B))(|ajj|−pjNA(A))≥(rm(A)−pm¯NA(A))(|ajj|−pjNA(A))>pmNA(A)pj¯NA(A)≥|bl1|⋅pj¯NA(A)=piNC(C)pj¯NC(C). |
Case 4. For i∈¯NC∩S3⊂S3, j∈NC∩S2=NA∩S2={n1}, we get that the values of ri(C), piNC(C), and pi¯NC(C) are the same as (2.13), (2.15), and (2.14). Moreover, the results of |cjj|, pjNC(C), and pj¯NC(C) are the same as (2.10), (2.11), and (2.12). Hence, we obtain that
(ri(C)−pi¯NC(C))(|cn1,n1|−pn1NC(C))≥(rl(B)−pl¯NB(B))(|an1,n1|+|b11|−pn1NA(A))=(rl(B)−pl¯NB(B))⋅|b11|+(rl(B)−pl¯NB(B))(|an1,n1|−pn1NA(A))≥(rm(A)−pm¯NA(A))⋅|b11|+(rm(A)−pm¯NA(A))(|an1,n1|−pn1NA(A))>pmNA(A)p1¯NB(B)+pmNA(A)pn1¯NA(A)≥|bl1|⋅(p1¯NB(B)+pn1¯NA(A))=piNC(C)pn1¯NC(C). |
From Cases 1–4, we conclude that C is a GSDD1 matrix.
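The two displayed conditions of Theorem 2.2 involve only row sums and the quantities of Definition 1.4, so they can be evaluated numerically. The sketch below is illustrative only; it reuses the hypothetical gsdd1_quantities helper from the earlier sketch, and the function name is ours.

```python
import numpy as np

def theorem_2_2_minmax_conditions(A, B):
    """Evaluate the two displayed min/max inequalities of Theorem 2.2 (k = 1)."""
    B = np.asarray(B, dtype=float)
    rA, dA, NA, NbarA, pNA, pNbarA = gsdd1_quantities(A)
    rB, dB, NB, NbarB, pNB, pNbarB = gsdd1_quantities(B)
    n2 = B.shape[0]
    # min_{2<=l<=n2} (r_l(B) - p_l^{bar N_B}(B)) >= max_{m in bar N_A} (r_m(A) - p_m^{bar N_A}(A))
    cond1 = (min(rB[l] - pNbarB[l] for l in range(1, n2))
             >= max(rA[m] - pNbarA[m] for m in NbarA))
    # min_{m in bar N_A} p_m^{N_A}(A) >= max_{2<=l<=n2} |b_{l1}|
    cond2 = (min(pNA[m] for m in NbarA)
             >= max(abs(B[l, 0]) for l in range(1, n2)))
    return cond1 and cond2
```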
The following Example 2.2 shows that Theorem 2.1 may not necessarily hold when k≥2.
Example 2.2. Consider the following matrices:
$$A=\begin{pmatrix}3&1&1.7&1\\ 1&4&1&1\\ 2&2&4&1\\ 0&1&1&3\end{pmatrix},\qquad B=\begin{pmatrix}3&1&1\\ 0&2&1\\ 1&0&3\end{pmatrix},$$
where A is a GSDD1 matrix and B is an SDD matrix with ri(B)>0 for all i∈¯NB. It is easy to verify that A and B satisfy the conditions of Theorem 2.1 and A⊕1B is a GSDD1 matrix. However, C=A⊕2B is not a GSDD1 matrix. In fact,
$$C=\begin{pmatrix}3&1&1.7&1&0\\ 1&4&1&1&0\\ 2&2&7&2&1\\ 0&1&1&5&1\\ 0&0&1&0&3\end{pmatrix}.$$
By computation,
¯NC={2,4,5},NC={1,3}, |
(r5(C)−p5¯NC(C))(|c11|−p1NC(C))=(1−0)(3−1.7)=1.3<1.35=1×1.35=p5NC(C)p1¯NC(C). |
Therefore, C=A⊕2B is not a GSDD1 matrix.
The following Example 2.3 shows that Theorem 2.2 may not necessarily hold when k≥2.
Example 2.3. Consider the following matrices:
$$A=\begin{pmatrix}5&2&2&1\\ 0&4&0&1\\ 0&0&3&2\\ 1&1&1&2\end{pmatrix},\qquad B=\begin{pmatrix}3&-2&0\\ 1&15&4.3\\ 0.9&-5.1&17\end{pmatrix},$$
where A is a GSDD1 matrix and B is an SDD matrix with ri(B)>0 for all i∈¯NB. It is easy to verify that A and B satisfy the conditions of Theorem 2.2 and A⊕1B is a GSDD1 matrix. However, C=A⊕2B is not a GSDD1 matrix. In fact,
$$C=\begin{pmatrix}5&2&2&1&0\\ 0&4&0&1&0\\ 0&0&6&0&0\\ 1&1&2&17&4.3\\ 0&0&0.9&-5.1&17\end{pmatrix}.$$
By computation, r3(C)−p3¯NC(C)=0 with 3∈¯NC, so the first inequality in Definition 1.4 fails; therefore, C=A⊕2B is not a GSDD1 matrix.
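Assuming the earlier subdirect_sum and is_gsdd1 sketches, the contrast between k=1 and k=2 in Examples 2.2 and 2.3 can be reproduced numerically:

```python
import numpy as np

# matrices of Example 2.2 (A2, B2) and Example 2.3 (A3, B3)
A2 = np.array([[3, 1, 1.7, 1], [1, 4, 1, 1], [2, 2, 4, 1], [0, 1, 1, 3]])
B2 = np.array([[3, 1, 1], [0, 2, 1], [1, 0, 3]])
A3 = np.array([[5, 2, 2, 1], [0, 4, 0, 1], [0, 0, 3, 2], [1, 1, 1, 2]])
B3 = np.array([[3, -2, 0], [1, 15, 4.3], [0.9, -5.1, 17]])

for A, B in [(A2, B2), (A3, B3)]:
    print(is_gsdd1(subdirect_sum(A, B, 1)),  # expected True: the 1-subdirect sum is GSDD1
          is_gsdd1(subdirect_sum(A, B, 2)))  # expected False: the 2-subdirect sum is not
```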
The results above provide sufficient conditions ensuring that the 1-subdirect sum of a GSDD1 matrix with an SDD matrix is a GSDD1 matrix. As the value of k increases, the situation becomes more complicated, and so the sufficient conditions we give also become more complicated.
Next, some sufficient conditions ensuring that the k-subdirect (k≥2) sum of GSDD1 matrices with SDD matrices is a GSDD1 matrix are given.
Theorem 2.3. Let A=[aij] and B=[bij] be square matrices of order n1 and n2 partitioned as in (1.1), respectively. And let 2≤k≤min{n1,n2}, S1, S2, and S3 be as in (1.2). We assume A is a GSDD1 matrix and B is an SDD matrix with ri(B)>0 for all i∈¯NB. If all diagonal entries of A22 and B11 are positive (or all negative), i∈¯NA for any i∈S2 and
$$\sum_{j\in\overline{N}_A\setminus\{i\},\ j\in S_2}\frac{\lambda_j}{|a_{jj}+b_{j-n_1+k,\,j-n_1+k}|}\,|a_{ij}|\ \le\ \sum_{j\in\overline{N}_A\setminus\{i\},\ j\in S_2}\frac{r_j(A)}{|a_{jj}|}\,|a_{ij}|,\qquad i\in S_1\cup S_2,$$
$$\sum_{j\in\overline{N}_B\setminus\{i-n_1+k\},\ j\in\{1,\dots,k\}}\frac{\lambda_{j+n_1-k}}{|a_{j+n_1-k,\,j+n_1-k}+b_{jj}|}\,|b_{i-n_1+k,\,j}|\ \le\ \sum_{j\in\overline{N}_B\setminus\{i-n_1+k\},\ j\in\{1,\dots,k\}}\frac{r_j(B)}{|b_{jj}|}\,|b_{i-n_1+k,\,j}|,\qquad i\in S_2,$$
$$\lambda_i\ \ge\ r_i(A)+p_{i-n_1+k}^{\overline{N}_B}(B),\qquad i\in S_2,$$
where
$$\lambda_i=r_i(A)+r_{i-n_1+k}(B)+\sum_{\substack{j=n_1-k+1\\ j\ne i}}^{n_1}\bigl|a_{ij}+b_{i-n_1+k,\,j-n_1+k}\bigr|-\sum_{\substack{j=n_1-k+1\\ j\ne i}}^{n_1}\bigl(|a_{ij}|+|b_{i-n_1+k,\,j-n_1+k}|\bigr),$$
then the k-subdirect sum C=A⊕kB is a GSDD1 matrix.
Proof. Since A is a GSDD1 matrix with i∈¯NA for any i∈S2, we get |aii|>ri(A). According to the k-subdirect sum C=A⊕kB, we have ri(C)=λi≤ri(A)+ri−n1+k(B). Because all diagonal entries of A22 and B11 are positive (or negative), we get |cii|=|aii|+|bi−n1+k,i−n1+k|. Therefore, we obtain that |cii|>ri(C), that is, for any i∈S2, i∈¯NC. Since A is a GSDD1 matrix, B is an SDD matrix with ri(B)>0 for all i∈¯NB, and C=A⊕kB, by Lemmas 2.2 and 2.3 we know that ri(C)≠0 for i∈¯NC∩S1∪S3=¯NA∩S1∪S3. For i∈S2, by sufficient conditions, we have λi≥ri(A)+pi−n1+k¯NB(B), which means that λi>0. Therefore, for any i∈¯NC, we obtain that
ri(C)=∑j∈N∖{i}|cij|>∑j∈¯NC∖{i}rj(C)|cjj||cij|=pi¯NC(C). |
Moreover, for any j∈NC, we get j∈NC∩S1=NA∩S1⊂S1. For any i∈¯NC, similarly, we prove it from the following three cases, which are i∈¯NC∩S1=¯NA∩S1⊂S1, i∈¯NC∩S2=¯NA∩S2⊂S2, and i∈¯NC∩S3⊂S3.
Case 1. For i∈¯NC∩S1=¯NA∩S1⊂S1, j∈NC, we have
ri(C)=ri(A), |
pi¯NC(C)=∑j∈¯NC∖{i},j∈S1rj(C)|cjj||cij|+∑j∈¯NC∖{i},j∈S2rj(C)|cjj||cij|+∑j∈¯NC∖{i},j∈S3rj(C)|cjj||cij|=∑j∈¯NA∖{i},j∈S1rj(A)|ajj||aij|+∑j∈¯NA∖{i},j∈S2λj|ajj+bj−n1+k,j−n1+k||aij|+0≤∑j∈¯NA∖{i},j∈S1rj(A)|ajj||aij|+∑j∈¯NA∖{i},j∈S2rj(A)|ajj||aij|=pi¯NA(A), |
|cjj|=|ajj|, | (2.16) |
pjNC(C)=∑j′∈NC∖{j}|cjj′|=∑j′∈NA∖{j}|ajj′|=pjNA(A), | (2.17) |
piNC(C)=∑j∈NC∖{i}|cij|=∑j∈NA∖{i}|aij|=piNA(A), |
pj¯NC(C)=∑j′∈¯NC∖{j},j′∈S1rj′(C)|cj′j′||cjj′|+∑j′∈¯NC∖{j},j′∈S2rj′(C)|cj′j′||cjj′|+∑j′∈¯NC∖{j},j′∈S3rj′(C)|cj′j′||cjj′|=∑j′∈¯NA∖{j},j′∈S1rj′(A)|aj′j′||ajj′|+∑j′∈¯NA∖{j},j′∈S2λj′|aj′j′+bj′−n1+k,j′−n1+k||ajj′|+0≤∑j′∈¯NA∖{j},j′∈S1rj′(A)|aj′j′||ajj′|+∑j′∈¯NA∖{j},j′∈S2rj′(A)|aj′j′||ajj′|=pj¯NA(A). | (2.18) |
Therefore,
(ri(C)−pi¯NC(C))(|cjj|−pjNC(C))≥(ri(A)−pi¯NA(A))(|ajj|−pjNA(A))>piNA(A)pj¯NA(A)≥piNC(C)pj¯NC(C). |
Case 2. For i∈¯NC∩S2=¯NA∩S2⊂S2, j∈NC, we obtain that ri(C)=λi and piNC(C)=∑j∈NC∖{i}|cij|=∑j∈NA∖{i}|aij|=piNA(A). By the first two conditions of the theorem, pi¯NC(C)≤pi¯NA(A)+pi−n1+k¯NB(B), and hence, by the third condition, ri(C)−pi¯NC(C)=λi−pi¯NC(C)≥ri(A)−pi¯NA(A)>0. We know that |cjj|, pjNC(C), and pj¯NC(C) are the same as (2.16), (2.17), and (2.18). Therefore, (ri(C)−pi¯NC(C))(|cjj|−pjNC(C))≥(ri(A)−pi¯NA(A))(|ajj|−pjNA(A))>piNA(A)pj¯NA(A)≥piNC(C)pj¯NC(C).
Case 3. For i∈¯NC∩S3⊂S3, j∈NC, in particular, we obtain that piNC(C)=∑j∈NC∖{i}|cij|=0. Hence, (ri(C)−pi¯NC(C))(|cjj|−pjNC(C))>0=piNC(C)pj¯NC(C).
Therefore, we get that ri(C)>pi¯NC(C) and (ri(C)−pi¯NC(C))(|cjj|−pjNC(C))>piNC(C)pj¯NC(C) for any i∈¯NC and j∈NC, that is, C is a GSDD1 matrix.
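Since λi in Theorem 2.3 is exactly ri(C) for the overlap rows i∈S2, written entirely in terms of A and B, it can be computed without forming C. The following sketch is illustrative; the function name lambda_S2 and the 0-based NumPy indexing are our own choices.

```python
import numpy as np

def lambda_S2(A, B, k):
    """lambda_i = r_i(C) for the overlap rows i in S2 of C = A (+)_k B, from A and B alone."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    n1 = A.shape[0]
    lam = {}
    for i in range(n1 - k, n1):                  # 0-based rows of A lying in S2
        ib = i - (n1 - k)                        # corresponding row of B
        lam_i = (np.sum(np.abs(A[i])) - abs(A[i, i])          # r_i(A)
                 + np.sum(np.abs(B[ib])) - abs(B[ib, ib]))    # r_{i-n1+k}(B)
        for j in range(n1 - k, n1):              # overlap columns: |a_ij + b_..| replaces |a_ij| + |b_..|
            if j != i:
                jb = j - (n1 - k)
                lam_i += abs(A[i, j] + B[ib, jb]) - (abs(A[i, j]) + abs(B[ib, jb]))
        lam[i + 1] = lam_i                       # keyed by the paper's 1-based row index
    return lam
```

For k=1 and i=n1 the overlap sums are empty, so λn1=rn1(A)+r1(B), which is exactly the quantity rn1(C) used in the proofs of Theorems 2.1 and 2.2.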
Corollary 2.1. Let A=[aij] and B=[bij] be square matrices of order n1 and n2 partitioned as in (1.1), respectively. And let 2≤k≤min{n1,n2}, and S1, S2, and S3 be as in (1.2). We assume A is a GSDD1 matrix and B is an SDD matrix with ri(B)>0 for all i∈¯NB. If all diagonal entries of A22 and B11 are positive (or all negative), i∈¯NA for any i∈S2, and the first two conditions of Theorem 2.3 hold term by term for every j∈S2 and every j∈{1,…,k}, respectively, where λi is the same as that of Theorem 2.3, then the k-subdirect sum C=A⊕kB is a GSDD1 matrix.
Proof. Multiplying both sides of the first termwise inequality by |aij| and summing over j∈¯NA∖{i}, j∈S2, we obtain the first condition of Theorem 2.3. Similarly, multiplying both sides of the second termwise inequality by |bi−n1+k,j| and summing over j∈¯NB∖{i−n1+k}, j∈{1,…,k}, we obtain the second condition of Theorem 2.3. By Theorem 2.3, the k-subdirect sum C=A⊕kB is a GSDD1 matrix.
Theorem 2.4. Let A=[aij] and B=[bij] be square matrices of order n1 and n2 partitioned as in (1.1), respectively. And let 2≤k≤min{n1,n2}, and S1, S2, and S3 be as in (1.2). We assume A is a GSDD1 matrix and B is an SDD matrix with ri(B)>0 for all i∈¯NB. If all diagonal entries of A22 and B11 are positive (or all negative), i∈NA for any i∈S2, and conditions analogous to those of Theorem 2.2 hold, stated in terms of λi, where λi is the same as that of Theorem 2.3, then the k-subdirect sum C=A⊕kB is a GSDD1 matrix.
Proof. Since A is a GSDD1 matrix with i∈NA for any i∈S2 and the assumed conditions hold, for any i∈S2 we have i∈NC, and then NC=NA. Moreover, for i∈S3 we know that |cii|=|bi−n1+k,i−n1+k|>ri−n1+k(B)=ri(C), which means that S3⊂¯NC. Combining Lemmas 2.2 and 2.3, we get that ri(C)≠0 for i∈¯NC. Therefore, for any i∈¯NC, we obtain that ri(C)=∑j∈N∖{i}|cij|>∑j∈¯NC∖{i}rj(C)|cjj||cij|=pi¯NC(C).
Since ¯NC∩S2=∅, we prove it from the following two aspects, which are i∈¯NC∩S1=¯NA∩S1⊂S1 and i∈¯NC∩S3⊂S3. For any j∈NC, either j∈NC∩S1=NA∩S1⊂S1 or j∈NC∩S2=NA∩S2=S2. Therefore, we prove it from the following cases.
Case 1. For i∈¯NC∩S1=¯NA∩S1⊂S1, j∈NC∩S1=NA∩S1⊂S1, we get
(2.19) |
(2.20) |
(2.21) |
(2.22) |
(2.23) |
(2.24) |
Therefore, we obtain that
Case 2. For i∈¯NC∩S1=¯NA∩S1⊂S1, j∈NC∩S2=NA∩S2=S2, we know that ri(C) and (2.19) are equal, pi¯NC(C) and (2.20) are equal, and piNC(C) and (2.23) are equal. Moreover,
(2.25) |
(2.26) |
(2.27) |
Hence,
Case 3. For i∈¯NC∩S3⊂S3, j∈NC∩S1=NA∩S1⊂S1, we obtain that
(2.28) |
(2.29) |
(2.30) |
where l=i−n1+k. We know that |cjj|, pjNC(C), and pj¯NC(C) are the same as (2.21), (2.22), and (2.24). Therefore,
Case 4. For i∈¯NC∩S3⊂S3, j∈NC∩S2=NA∩S2=S2, we obtain that the values of ri(C), piNC(C), and pi¯NC(C) are equal to (2.28), (2.30), and (2.29). Moreover, the results of |cjj|, pjNC(C), and pj¯NC(C) are the same as (2.25), (2.26), and (2.27). Hence, we arrive at
In conclusion, for any i∈¯NC and j∈NC, we derive that ri(C)>pi¯NC(C) and (ri(C)−pi¯NC(C))(|cjj|−pjNC(C))>piNC(C)pj¯NC(C). Therefore, C is a GSDD1 matrix.
Example 2.4. Consider the following matrices:
where A is a GSDD1 matrix with i∈¯NA for all i∈S2, and B is an SDD matrix with ri(B)>0 for all i∈¯NB. By computation, we derive the quantities λi for i∈S2. Moreover,
we get that the first sufficient condition in Theorem 2.3 is true for i∈S1∪S2.
we have that the second sufficient condition in Theorem 2.3 is true.
we get that the third sufficient condition in Theorem 2.3 is met. Therefore, by Theorem 2.3, the k-subdirect sum C=A⊕kB is a GSDD1 matrix. In fact,
By computation,
It is not difficult to verify that ri(C)>pi¯NC(C) and (ri(C)−pi¯NC(C))(|cjj|−pjNC(C))>piNC(C)pj¯NC(C) hold for all i∈¯NC and j∈NC. Thus, C is a GSDD1 matrix.
Example 2.5. Consider the following matrices:
where A is a GSDD1 matrix and B is an SDD matrix with ri(B)>0 for all i∈¯NB. By computation,
Moreover,
Hence, the conditions in Theorem 2.4 are met. By Theorem 2.4, the k-subdirect sum C=A⊕kB is a GSDD1 matrix. In fact,
By computation, we obtain the index sets NC and ¯NC. Moreover,
We see that ri(C)>pi¯NC(C) and (ri(C)−pi¯NC(C))(|cjj|−pjNC(C))>piNC(C)pj¯NC(C) hold for all i∈¯NC and j∈NC. Therefore, C is a GSDD1 matrix.
Remark 2.1. Since the subdirect sum of matrices does not satisfy the commutative law, if we change "A is a GSDD1 matrix, and B is an SDD matrix" to "A is an SDD matrix, and B is a GSDD1 matrix", then we will obtain new sufficient conditions by using proofs similar to those in this paper.
In this paper, some sufficient conditions are given to show that the subdirect sum of GSDD1 matrices with SDD matrices is in the class of GSDD1 matrices, and these conditions depend only on the entries of the given matrices. Furthermore, some numerical examples are presented to illustrate the corresponding theoretical results.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
This work was partly supported by the National Natural Science Foundation of China (31600299), Natural Science Basic Research Program of Shaanxi, China (2020JM-622), and the Postgraduate Innovative Research Project of Baoji University of Arts and Sciences (YJSCX23YB33).
The authors declare there is no conflict of interest.
[1] S. M. Fallat, C. R. Johnson, Subdirect sums and positivity classes of matrices, Linear Algebra Appl., 288 (1999), 149–173. https://doi.org/10.1016/s0024-3795(98)10194-5
[2] R. Bru, F. Pedroche, D. B. Szyld, Subdirect sums of S-strictly diagonally dominant matrices, Electron. J. Linear Algebra, 15 (2006), 201–209. https://doi.org/10.13001/1081-3810.1230
[3] R. Bru, F. Pedroche, D. B. Szyld, Subdirect sums of nonsingular M-matrices and of their inverses, Electron. J. Linear Algebra, 13 (2005), 162–174. https://doi.org/10.13001/1081-3810.1159
[4] A. Frommer, D. B. Szyld, Weighted max norms, splittings, and overlapping additive Schwarz iterations, Numer. Math., 83 (1999), 259–278. https://doi.org/10.1007/s002110050449
[5] R. Bru, F. Pedroche, D. B. Szyld, Additive Schwarz iterations for Markov chains, SIAM J. Matrix Anal. Appl., 27 (2005), 445–458. https://doi.org/10.1137/040616541
[6] X. Y. Chen, Y. Q. Wang, Subdirect sums of SDD1 matrices, J. Math., 2020 (2020), 1–20. https://doi.org/10.1155/2020/3810423
[7] Y. T. Li, X. Y. Chen, Y. Liu, L. Gao, Y. Q. Wang, Subdirect sums of doubly strictly diagonally dominant matrices, J. Math., 2021 (2021), 6624695. https://doi.org/10.1155/2021/6624695
[8] C. Q. Li, Q. L. Liu, L. Gao, Y. T. Li, Subdirect sums of Nekrasov matrices, Linear Multilinear Algebra, 64 (2016), 208–218. https://doi.org/10.1080/03081087.2015.1032198
[9] J. Xue, C. Q. Li, Y. T. Li, On subdirect sums of Nekrasov matrices, Linear Multilinear Algebra, 72 (2023), 1044–1055. https://doi.org/10.1080/03081087.2023.2172378
[10] Z. H. Lyu, X. R. Wang, L. S. Wen, k-subdirect sums of Nekrasov matrices, Electron. J. Linear Algebra, 38 (2022), 339–346. https://doi.org/10.13001/ela.2022.6951
[11] L. Gao, H. Huang, C. Q. Li, Subdirect sums of QN-matrices, Linear Multilinear Algebra, 68 (2020), 1605–1623. https://doi.org/10.1080/03081087.2018.1551323
[12] Q. L. Liu, J. F. He, L. Gao, C. Q. Li, Note on subdirect sums of SDD(p) matrices, Linear Multilinear Algebra, 70 (2022), 2582–2601. https://doi.org/10.1080/03081087.2020.1807457
[13] C. Q. Li, R. D. Ma, Q. L. Liu, Y. Li, Subdirect sums of weakly chained diagonally dominant matrices, Linear Multilinear Algebra, 65 (2017), 1220–1231. https://doi.org/10.1080/03081087.2016.1233933
[14] L. Gao, Y. Liu, On matrices and matrices, Bull. Iran. Math. Soc., 48 (2022), 2807–2824. https://doi.org/10.1007/s41980-021-00669-6
[15] J. Xia, Note on subdirect sums of {i0}-Nekrasov matrices, AIMS Math., 7 (2022), 617–631. https://doi.org/10.3934/math.2022039
[16] L. Gao, Q. L. Liu, C. Q. Li, Y. T. Li, On {p1,p2}-Nekrasov matrices, Bull. Malays. Math. Sci. Soc., 44 (2021), 2971–2999. https://doi.org/10.1007/s40840-021-01094-y
[17] L. Liu, X. Y. Chen, Y. T. Li, Y. Q. Wang, Subdirect sums of Dashnic-Zusmanovich matrices, Bull. Sci. Math., 173 (2021), 103057. https://doi.org/10.1016/j.bulsci.2021.103057
[18] C. M. Araújo, S. Mendes-Gonçalves, On a class of nonsingular matrices containing B-matrices, Linear Algebra Appl., 578 (2019), 356–369. https://doi.org/10.1016/j.laa.2019.05.015
[19] C. M. Araújo, J. R. Torregrosa, Some results on B-matrices and doubly B-matrices, Linear Algebra Appl., 459 (2014), 101–120. https://doi.org/10.1016/j.laa.2014.06.048
[20] P. F. Dai, J. P. Li, S. Y. Zhao, Infinity norm bounds for the inverse for GSDD1 matrices using scaling matrices, Comput. Appl. Math., 42 (2023), 121. https://doi.org/10.1007/s40314-022-02165-x