This paper introduced a smoothing algorithm for calculating the maximal eigenvalue of non-defective positive matrices. Two special matrices were constructed to provide monotonically increasing lower-bound estimates and monotonically decreasing upper-bound estimates of the maximal eigenvalue. The monotonicity and convergence of these estimates were also proven. Finally, the effectiveness of the algorithm was demonstrated with numerical examples.
Citation: Na Li, Qin Zhong. Smoothing algorithm for the maximal eigenvalue of non-defective positive matrices[J]. AIMS Mathematics, 2024, 9(3): 5925-5936. doi: 10.3934/math.2024289
1. Introduction
A matrix A with all elements greater than zero is referred to as a positive matrix and is denoted by A>0. Positive matrices possess several significant properties. Perron [1] first proposed that if A is a positive matrix, then the spectral radius is an eigenvalue of A. This eigenvalue, denoted by ρ(A), is called the maximal eigenvalue of A or the Perron root of A, and ρ(A) dominates all other eigenvalues in modulus.
If $A=(a_{ij})$ satisfies $a_{ij}\ge 0$, then A is called nonnegative, denoted by A≥0. Nonnegative matrices are frequently encountered in real-life applications. Frobenius [1] extended Perron's theory to nonnegative matrices and nonnegative irreducible matrices, leading to the rapid development of nonnegative matrix theory. The famous Perron-Frobenius theorem is widely used in both theory and practice. Estimating the range of the maximal eigenvalue of positive matrices is a popular topic in nonnegative matrix theory and has been extensively and thoroughly studied in [2,3,4,5,6,7,8,9,10].
For any positive integer n, let ⟨n⟩ = {1, 2, ⋯, n}. Given $A=(a_{ij})_{n\times n}\ge 0$, we define the row sums $r_i(A)=\sum_{t=1}^{n}a_{it}$, the column sums $c_j(A)=\sum_{t=1}^{n}a_{tj}$, and set $r(A)=\min_i r_i(A)$, $R(A)=\max_i r_i(A)$.
Frobenius [1] obtained the following classical conclusion:
$$r(A)\le\rho(A)\le R(A). \qquad (1)$$
Additionally, equality in (1) is achieved when the sum of each row of the matrix A is equal. A similar result holds for the column with the transpose matrix AT in place of A. The above inequality implies that the maximal eigenvalue ρ(A) of a nonnegative square matrix A is between the smallest row sum r(A) and the largest row sum R(A). This observation provides a convenient and efficient method for estimating the maximal eigenvalue using the elements of A.
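As a concrete illustration (not part of the paper), the row-sum bounds in (1) can be checked numerically. The small matrix and the power iteration used to approximate ρ(A) below are our own illustrative choices:

```python
def row_sums(A):
    return [sum(row) for row in A]

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_iteration(A, iters=200):
    """Approximate the Perron root of a positive matrix."""
    x = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        y = mat_vec(A, x)
        lam = max(y)               # max-normalization: lam converges to rho(A)
        x = [yi / lam for yi in y]
    return lam

A = [[1.0, 1.0, 2.0],
     [2.0, 3.0, 3.0],
     [4.0, 1.0, 1.0]]

r, R = min(row_sums(A)), max(row_sums(A))   # smallest / largest row sums: 4, 8
rho = power_iteration(A)
print(r <= rho <= R)                        # True
```

Here ρ(A) ≈ 5.7417 indeed lies between the smallest row sum 4 and the largest row sum 8.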
The class of positive matrices, which is the subclass of nonnegative matrices, shares similar properties with nonnegative matrices. The generalization of the result in (1) was presented in [2,3,4] to improve the bounds of the maximal eigenvalues of positive matrices.
Minc [5] made improvements to the bounds in (1) for nonnegative matrices with nonzero row sums, resulting in the following:
$$\min_i\left(\frac{1}{r_i}\sum_{t=1}^{n}a_{it}r_t\right)\le\rho(A)\le\max_i\left(\frac{1}{r_i}\sum_{t=1}^{n}a_{it}r_t\right).$$
Liu [6] generalized the above result further and obtained the following conclusion:
where $B=(A+I)^{n-1}$ and I is the identity matrix of order n. The same is true for column sums.
If we set m = 1 in (2), we will obtain
$$\min_i\frac{r_i(A^{k+1})}{r_i(A^{k})}\le\rho(A)\le\max_i\frac{r_i(A^{k+1})}{r_i(A^{k})}. \qquad (4)$$
The boundaries are further generalized in [8] as follows:
$$\min_i\frac{r_i(AB)}{r_i(B)}\le\rho(A)\le\max_i\frac{r_i(AB)}{r_i(B)}, \qquad (5)$$
where B is an arbitrary matrix that has positive row sums.
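As an illustration of (5) (with A and B chosen by us, not taken from the paper), the ratios of row sums of AB to row sums of B bracket ρ(A) for any B with positive row sums:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def row_sums(M):
    return [sum(row) for row in M]

A = [[1.0, 1.0, 2.0],
     [2.0, 3.0, 3.0],
     [4.0, 1.0, 1.0]]
B = [[1.0, 0.0, 1.0],
     [0.0, 2.0, 1.0],
     [1.0, 1.0, 1.0]]   # arbitrary matrix with positive row sums

ratios = [ra / rb for ra, rb in zip(row_sums(mat_mul(A, B)), row_sums(B))]
lower, upper = min(ratios), max(ratios)
print(lower, upper)  # an interval containing rho(A) ~ 5.7417
```

For this choice of B the interval is [14/3, 22/3], noticeably tighter than the plain row-sum bounds [4, 8].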
The following is the concept of a non-defective matrix. In linear algebra, a square matrix that lacks a full basis of eigenvectors and hence cannot be diagonalized is called a defective matrix. Conversely, a matrix of order n is non-defective if it has n linearly independent eigenvectors.
This paper is dedicated to the estimation and calculation of the maximal eigenvalue of a positive matrix. Initially, we present monotonically increasing lower-bound estimations and monotonically decreasing upper-bound estimations for the maximal eigenvalue of a positive matrix. Additionally, we rigorously prove the monotonicity and convergence of these estimations. Notably, if the positive matrix is non-defective, we provide a smoothing algorithm to calculate the maximal eigenvalue of such a non-defective positive matrix.
2. Main results
To derive our conclusions, we will recall some essential lemmas as follows.
Lemma 1. [5] Let λ be an eigenvalue of the square matrix A of order n, and let $U=(u_1,u_2,\cdots,u_n)^T$ and $V=(v_1,v_2,\cdots,v_n)^T$ be eigenvectors corresponding to λ of $A^T$ and $A$, respectively. Then
$$\lambda\sum_{i=1}^{n}u_i=\sum_{i=1}^{n}u_i\,r_i(A), \qquad \lambda\sum_{j=1}^{n}v_j=\sum_{j=1}^{n}v_j\,c_j(A).$$
Lemma 2. [5] If $q_t>0$, $t\in\langle n\rangle$, then for any real numbers $p_t$, $t\in\langle n\rangle$, the following inequality holds:
$$\min_t\frac{p_t}{q_t}\le\frac{\sum_{t=1}^{n}p_t}{\sum_{t=1}^{n}q_t}\le\max_t\frac{p_t}{q_t}.$$
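A quick numeric check of Lemma 2 (the values of $p_t$ and $q_t$ below are illustrative, not from the paper): the ratio of the sums always lies between the extreme ratios $p_t/q_t$.

```python
p = [3.0, 10.0, 4.0]   # arbitrary real numbers p_t
q = [2.0, 4.0, 4.0]    # positive numbers q_t

ratios = [pt / qt for pt, qt in zip(p, q)]   # 1.5, 2.5, 1.0
mediant = sum(p) / sum(q)                    # (3 + 10 + 4) / (2 + 4 + 4) = 1.7
print(min(ratios) <= mediant <= max(ratios)) # True
```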
Now, we present the upper and lower bounds on the maximal eigenvalue of a positive matrix.
Theorem 1. Given a positive matrix $A=(a_{ij})_{n\times n}$, let $B_1=(A+\alpha I)^{n-1}$ and $B_2=(A-\beta I)^{n-1}$, where $\alpha=\max_i\{a_{ii}\}$ and $\beta=\min_i\{a_{ii}\}$. If $r_i(AB_1B_2)\ne 0$ and $c_i(AB_1B_2)\ne 0$, $i\in\langle n\rangle$, then for any positive integer k, we have
$$\min_i\sqrt{\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}}\le\rho(A)\le\max_i\sqrt{\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}}, \qquad (6)$$
$$\min_i\sqrt{\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}}\le\rho(A)\le\max_i\sqrt{\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}}. \qquad (7)$$
Proof. First, we prove $r_i(A^{k}B_1B_2)>0$ under the condition $r_i(AB_1B_2)\ne 0$. For any positive integer $k\ge 2$, the element of the matrix $A^{k-1}B_1B_2$ located in the t-th row and the j-th column is denoted by $(A^{k-1}B_1B_2)_{tj}$. Note that $A^{k}B_1B_2=A\cdot A^{k-1}B_1B_2$. We have
$$r_i(A^{k}B_1B_2)=\sum_{t=1}^{n}a_{it}\,r_t(A^{k-1}B_1B_2). \qquad (8)$$
It is evident that B1=(A+αI)n−1 and B2=(A−βI)n−1 are nonnegative. Therefore, AB1B2 is nonnegative and ri(AB1B2)>0 holds under the restriction that ri(AB1B2)≠0. As A is a positive matrix, that is, aij>0, i,j∈⟨n⟩, we immediately obtain ri(AkB1B2)>0 from Eq (8). By employing the same approach, we also obtain ci(AkB1B2)>0 if ci(AB1B2)≠0, i∈⟨n⟩. These assertions ensure that the expressions in Eqs (6) and (7) are valid.
Now, we assume $X=(x_1,x_2,\cdots,x_n)^T>0$ is the eigenvector of the matrix $A^T$ corresponding to ρ(A), that is, $A^TX=\rho(A)X$. Clearly, the maximal eigenvalues of the matrix polynomials
$$A^{k+2}B_1B_2=A^{k+2}(A+\alpha I)^{n-1}(A-\beta I)^{n-1}$$
and
$$A^{k}B_1B_2=A^{k}(A+\alpha I)^{n-1}(A-\beta I)^{n-1}$$
are $\rho^{k+2}(A)\left[\rho(A)+\alpha\right]^{n-1}\left[\rho(A)-\beta\right]^{n-1}$ and $\rho^{k}(A)\left[\rho(A)+\alpha\right]^{n-1}\left[\rho(A)-\beta\right]^{n-1}$, respectively. Therefore, we obtain
Therefore, inequality (6) is proved. Similarly, inequality (7) holds.
Remark 1. From the formula
$$r_i(A^{k}B_1B_2)=\sum_{t=1}^{n}a_{it}\,r_t(A^{k-1}B_1B_2)$$
in (8), we observe that $r_i(A^{k+2}B_1B_2)$ and $r_i(A^{k}B_1B_2)$ can be calculated by induction. In addition, a new matrix multiplication technique known as the semi-tensor product of matrices (STP) has been developed in recent years, which offers more powerful functionality than traditional matrix multiplication [11,12,13]. This implies that it is not difficult to compute the upper and lower bounds of ρ(A).
In the following, we prove the convergence of the upper- and lower-bound expressions in Theorem 1 as the positive integer k→∞.
Theorem 2. Under the assumptions of Theorem 1, the limits
$$\lim_{k\to\infty}\max_i\sqrt{\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}} \quad\text{and}\quad \lim_{k\to\infty}\min_i\sqrt{\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}}$$
exist.
Proof. The sequence $\left\{\max_i\sqrt{r_i(A^{k+2}B_1B_2)/r_i(A^{k}B_1B_2)}\right\}$ is monotonically decreasing with respect to k. On the other hand, by Theorem 1 we know that this sequence has the lower bound ρ(A). By the monotone convergence criterion, we conclude that
$$\lim_{k\to\infty}\max_i\sqrt{\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}}$$
exists and is not less than ρ(A). Similarly, the sequence
$$\left\{\min_i\sqrt{\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}}\right\}$$
increases monotonically with respect to k and has an upper bound ρ(A) according to (6) in Theorem 1. Therefore, we derive that
$$\lim_{k\to\infty}\min_i\sqrt{\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}}$$
exists and is not more than ρ(A). Using the same approach, we can establish analogous results for the column sums when $c_i(AB_1B_2)\ne 0$, $i\in\langle n\rangle$.
Theorem 2 shows that the sequence $\left\{\max_i\sqrt{r_i(A^{k+2}B_1B_2)/r_i(A^{k}B_1B_2)}\right\}$ decreases monotonically with lower bound ρ(A), while the sequence $\left\{\min_i\sqrt{r_i(A^{k+2}B_1B_2)/r_i(A^{k}B_1B_2)}\right\}$ increases monotonically with upper bound ρ(A). It is natural to ask whether $\lim_{k\to\infty}\max_i\sqrt{r_i(A^{k+2}B_1B_2)/r_i(A^{k}B_1B_2)}$ or $\lim_{k\to\infty}\min_i\sqrt{r_i(A^{k+2}B_1B_2)/r_i(A^{k}B_1B_2)}$ equals ρ(A). In general, it is difficult to prove
where $(A^{k+1})_{it}$ denotes the element of the matrix $A^{k+1}$ located in the i-th row and the t-th column, and $(AB_1B_2)_{tj}$ denotes the element of the matrix $AB_1B_2$ located in the t-th row and the j-th column, respectively. Similarly, we have
$$r_i(A^{k}B_1B_2)=\sum_{t=1}^{n}(A^{k-1})_{it}\,r_t(AB_1B_2),\quad k\ge 2. \qquad (14)$$
Now, we define a special vector as follows:
$$Y=\left(r_1(AB_1B_2),\ r_2(AB_1B_2),\ \cdots,\ r_n(AB_1B_2)\right)^T.$$
It is obvious that Y is a positive vector since $r_i(AB_1B_2)\ne 0$ (more precisely, $r_i(AB_1B_2)>0$), $i\in\langle n\rangle$.
Thus, Y can be expressed as
$$Y=C_1X_1+C_2X_2+\cdots+C_{n-1}X_{n-1}+C_nX_n, \qquad (15)$$
where $C_1,C_2,\cdots,C_{n-1},C_n$ are not all zero. Together with Eqs (13) and (14), one can get
Step 4. If $T-t<\varepsilon$, go to Step 5; otherwise, set $k=k+1$ and go back to Step 3;
Step 5. Output $k$ and $\rho(A)=\frac{T+t}{2}$; stop.
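Since Steps 1–3 are not shown in this excerpt, the sketch below reconstructs a plausible full loop from Theorem 1 and the recursion (8): initialize k = 1, take T and t as the maximum and minimum of the square-root row-sum ratios, and iterate Steps 3–5 until T − t < ε. The helper functions, the tolerance default, and the test matrix are our own assumptions, not the paper's exact pseudocode.

```python
import math

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_vec(X, v):
    return [sum(x * vi for x, vi in zip(row, v)) for row in X]

def mat_pow(M, p):
    n = len(M)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(p):
        R = mat_mul(R, M)
    return R

def smoothing_algorithm(A, eps=1e-8, max_iter=200):
    """Approximate rho(A) for a non-defective positive matrix A."""
    n = len(A)
    alpha = max(A[i][i] for i in range(n))
    beta = min(A[i][i] for i in range(n))
    B1 = mat_pow([[A[i][j] + alpha * (i == j) for j in range(n)] for i in range(n)], n - 1)
    B2 = mat_pow([[A[i][j] - beta * (i == j) for j in range(n)] for i in range(n)], n - 1)
    rs = [[sum(row) for row in mat_mul(A, mat_mul(B1, B2))]]  # r^{(1)}
    for k in range(1, max_iter):
        while len(rs) < k + 2:
            rs.append(mat_vec(A, rs[-1]))        # recursion (8)
        ratios = [math.sqrt(rs[k + 1][i] / rs[k - 1][i]) for i in range(n)]
        T, t = max(ratios), min(ratios)          # upper / lower bounds from (6)
        if T - t < eps:                          # Step 4
            return k, (T + t) / 2                # Step 5
    return max_iter, (T + t) / 2

A = [[1.0, 1.0, 2.0],
     [2.0, 3.0, 3.0],
     [4.0, 1.0, 1.0]]
k, est = smoothing_algorithm(A)
print(k, est)  # est approximates rho(A) = 5.74165738...
```

For very large k the unnormalized row-sum vectors grow like ρ(A)^k, so a production version would rescale them; this is omitted here for clarity.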
Remark 3. Replace the row sums in the algorithm with the corresponding column sums. The algorithm is still valid.
Remark 4. In the above algorithm, the upper bound of ρ(A) is decreasing, while the lower bound of ρ(A) is increasing. This behavior exhibits a smoothing tendency and, as a result, we refer to this algorithm as a smoothing algorithm.
4. Numerical examples
In this section, we consider two examples to demonstrate our findings.
Example 1. Consider the positive matrix
$$A=\begin{pmatrix}1&1&2\\ 2&3&3\\ 4&1&1\end{pmatrix}.$$
The comparisons between the estimation results of [1,2,3,4,5,6,7,8] and Theorem 1 of this paper regarding the maximal eigenvalue of matrix A are presented in Table 1.
Indeed, ρ(A)=5.74165738⋯. The computational results in Table 1 demonstrate that the conclusions obtained from Theorem 1 in this paper improve upon the existing related results.
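To reproduce part of this comparison (using our own computation rather than quoting Table 1), the classical bounds (1) and Minc's refinement can be evaluated for this matrix A:

```python
A = [[1.0, 1.0, 2.0],
     [2.0, 3.0, 3.0],
     [4.0, 1.0, 1.0]]

r = [sum(row) for row in A]            # row sums: [4, 8, 6]
frobenius = (min(r), max(r))           # bounds (1): (4.0, 8.0)

# Minc's bound: min/max over i of (1/r_i) * sum_t a_it * r_t.
minc = [sum(a * rt for a, rt in zip(row, r)) / ri for row, ri in zip(A, r)]
minc_bounds = (min(minc), max(minc))   # (5.0, 6.25)

print(frobenius, minc_bounds)          # both intervals contain rho(A) = 5.7416...
```

Both intervals contain ρ(A) = 5.7416…, and Minc's interval [5, 6.25] is strictly tighter than [4, 8], consistent with the trend reported in Table 1.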
Example 2. Calculate the maximal eigenvalue of the non-defective positive matrix B using the algorithm in Section 3. The results are presented in Table 2.
5. Conclusions
In this paper, we have introduced monotonically increasing lower-bound estimators and monotonically decreasing upper-bound estimators for the maximal eigenvalue of a positive matrix. These estimators are constructed using two special matrices associated with the positive matrix. The advantage of these estimators is that they are straightforward to compute, as they solely depend on the elements of the positive matrix. Furthermore, we have rigorously proven the monotonicity and convergence of both the upper and lower bound estimations for the maximal eigenvalue of positive matrices.
Additionally, we have developed a smoothing algorithm specifically designed to calculate the approximate value of the maximal eigenvalue for a non-defective positive matrix. This algorithm serves as an effective tool to obtain a reasonably accurate estimate for the maximal eigenvalue in such cases.
Overall, our findings provide valuable insights and practical tools for estimating and computing the maximal eigenvalue of positive matrices, with special attention given to non-defective positive matrices.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This work was financially supported by Sichuan University Jinjiang College Cultivation Project of Sichuan Higher Education Institutions of Double First-class Construction Gongga Plan.
Conflict of interest
The authors declare that they have no competing interests.
References
[1]
A. Berman, R. J. Plemmons, Nonnegative matrices in the mathematical sciences, Philadelphia: Society for Industrial and Applied Mathematics, 1994. https://doi.org/10.1137/1.9781611971262
[2]
W. Ledermann, Bounds for the greatest latent root of a positive matrix, J. Lond. Math. Soc., s1-25 (1950), 265–268. https://doi.org/10.1112/jlms/s1-25.4.265 doi: 10.1112/jlms/s1-25.4.265
[3]
A. Ostrowski, Bounds for the greatest latent root of a positive matrix, J. Lond. Math. Soc., s1-27 (1952), 253–256. https://doi.org/10.1112/jlms/s1-27.2.253 doi: 10.1112/jlms/s1-27.2.253
[4]
A. Brauer, The theorems of Ledermann and Ostrowski on positive matrices, Duke Math. J., 24 (1957), 265–274. https://doi.org/10.1215/S0012-7094-57-02434-1 doi: 10.1215/S0012-7094-57-02434-1
[5]
H. Minc, Nonnegative Matrices, New York: Wiley, 1988.
[6]
S. L. Liu, Bounds for the greatest characteristic root of a nonnegative matrix, Linear Algebra Appl., 239 (1996), 151–160. https://doi.org/10.1016/S0024-3795(96)90008-7 doi: 10.1016/S0024-3795(96)90008-7
[7]
L. M. Liu, T. Z. Huang, X. Q. Liu, New bounds for the greatest eigenvalue of a nonnegative matrix, J. Univ. Electron. Sci. Techn. China, 2007, 343–345.
[8]
P. Liao, Bounds for the Perron root of nonnegative matrices and spectral radius of iteration matrices, Linear Algebra Appl., 530 (2017), 253–265. https://doi.org/10.1016/j.laa.2017.05.021 doi: 10.1016/j.laa.2017.05.021
[9]
H. Cheriyath, N. Agarwal, On the Perron root and eigenvectors associated with a subshift of finite type, Linear Algebra Appl., 633 (2022), 42–70. https://doi.org/10.1016/j.laa.2021.10.003 doi: 10.1016/j.laa.2021.10.003
[10]
P. Liao, Bounds for the Perron root of positive matrices, Linear Multilinear A., 71 (2023), 1849–1857. https://doi.org/10.1080/03081087.2022.2081310 doi: 10.1080/03081087.2022.2081310
[11]
Y. Y. Yan, D. Z. Cheng, J. E. Feng, H. T. Li, J. M. Yue, Survey on applications of algebraic state space theory of logical systems to finite state machines, Sci. China Inform. Sci., 66 (2023), 111201. https://doi.org/10.1007/s11432-022-3538-4 doi: 10.1007/s11432-022-3538-4
[12]
D. Z. Cheng, H. S. Qi, Y. Zhao, An introduction to semi-tensor product of matrices and its applications, World Scientific, 2012. https://doi.org/10.1142/8323
[13]
J. M. Yue, Y. Y. Yan, H. Deng, Matrix approach to formulate and search k-ESS of graphs using the STP theory, J. Math., 2021 (2021), 1–12. https://doi.org/10.1155/2021/7230661 doi: 10.1155/2021/7230661