Research article

Smoothing algorithm for the maximal eigenvalue of non-defective positive matrices

  • Received: 26 October 2023 Revised: 11 December 2023 Accepted: 10 January 2024 Published: 31 January 2024
  • MSC : 15A42, 15A18

  • This paper introduced a smoothing algorithm for calculating the maximal eigenvalue of non-defective positive matrices. Two special matrices were constructed to provide monotonically increasing lower-bound estimates and monotonically decreasing upper-bound estimates of the maximal eigenvalue. The monotonicity and convergence of these estimates were also proven. Finally, the effectiveness of the algorithm was demonstrated with numerical examples.

    Citation: Na Li, Qin Zhong. Smoothing algorithm for the maximal eigenvalue of non-defective positive matrices[J]. AIMS Mathematics, 2024, 9(3): 5925-5936. doi: 10.3934/math.2024289




    A matrix A with all elements greater than zero is referred to as a positive matrix and is denoted by $A>0$. Positive matrices possess several significant properties. Perron [1] first proved that if A is a positive matrix, then the spectral radius is an eigenvalue of A. This eigenvalue, denoted by ρ(A), is called the maximal eigenvalue of A or the Perron root of A, and ρ(A) dominates all other eigenvalues in modulus.

    If $A=(a_{ij})$, then $A$ is called nonnegative if $a_{ij}\ge 0$ and is denoted by $A\ge 0$. Nonnegative matrices are frequently encountered in real-life applications. Frobenius [1] extended Perron's theory to nonnegative matrices and nonnegative irreducible matrices, leading to the rapid development of the nonnegative matrix theory. The famous Perron-Frobenius theorem is widely used in both theory and practice. Estimating the range of the maximal eigenvalue of positive matrices is a popular topic in the nonnegative matrix theory and has been extensively and thoroughly studied in [2,3,4,5,6,7,8,9,10].

    For any positive integer n, let $\langle n\rangle=\{1,2,\dots,n\}$. Given $A=(a_{ij})_{n\times n}\ge 0$, we define

    $$r_i(A)=\sum_{k=1}^{n}a_{ik},\quad c_j(A)=\sum_{k=1}^{n}a_{kj},\quad \min_i r_i(A)=r(A),\quad \max_i r_i(A)=R(A),\quad i,j\in\langle n\rangle.$$

    Frobenius [1] obtained the following classical conclusion:

    $$r(A)\le\rho(A)\le R(A).\tag{1}$$

    Additionally, equality in (1) is achieved when all row sums of the matrix A are equal. A similar result holds for the column sums, with the transpose matrix $A^T$ in place of $A$. The above inequality implies that the maximal eigenvalue ρ(A) of a nonnegative square matrix A lies between the smallest row sum r(A) and the largest row sum R(A). This observation provides a convenient and efficient method for estimating the maximal eigenvalue using the elements of A.
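    As a quick numerical illustration of (1), the Python sketch below checks the row-sum bounds against a power-iteration approximation of ρ(A); the matrix (the one reused later in Example 1) and the iteration scheme are our own choices, not part of the paper.

    ```python
    # Check r(A) <= rho(A) <= R(A) for a positive matrix.
    A = [[1, 1, 2], [2, 3, 3], [4, 1, 1]]

    row_sums = [sum(row) for row in A]
    r, R = min(row_sums), max(row_sums)   # r(A) = 4, R(A) = 8

    # Approximate rho(A) by power iteration (convergent for positive matrices).
    x = [1.0, 1.0, 1.0]
    for _ in range(100):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        rho = max(y)                      # infinity-norm normalization factor
        x = [v / rho for v in y]

    print(r, "<=", rho, "<=", R)
    ```

    Here ρ(A) ≈ 5.7417 indeed lies between the smallest row sum 4 and the largest row sum 8.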

    The class of positive matrices is a subclass of the nonnegative matrices and shares many of their properties. The generalization of the result in (1) was presented in [2,3,4] to improve the bounds of the maximal eigenvalues of positive matrices.

    Minc [5] made improvements to the bounds in (1) for nonnegative matrices with nonzero row sums, resulting in the following:

    $$\min_i\left(\frac{1}{r_i(A)}\sum_{t=1}^{n}a_{it}r_t(A)\right)\le\rho(A)\le\max_i\left(\frac{1}{r_i(A)}\sum_{t=1}^{n}a_{it}r_t(A)\right).$$

    Liu [6] generalized the above result further and obtained the following conclusion:

    $$\min_i\left[\frac{r_i(A^{k+m})}{r_i(A^{k})}\right]^{\frac{1}{m}}\le\rho(A)\le\max_i\left[\frac{r_i(A^{k+m})}{r_i(A^{k})}\right]^{\frac{1}{m}},\tag{2}$$

    where k is any nonnegative integer and m is any positive integer. The same is true for column sums.

    Based on Eq (2), Liu et al. [7] gave an innovative result as follows:

    $$\min_i\left[\frac{r_i(A^{m}B^{k})}{r_i(B^{k})}\right]^{\frac{1}{m}}\le\rho(A)\le\max_i\left[\frac{r_i(A^{m}B^{k})}{r_i(B^{k})}\right]^{\frac{1}{m}},\tag{3}$$

    where $B=(A+I)^{n-1}$ and $I$ is the identity matrix of order $n$. The same is true for column sums.

    If we set m = 1 in (2), we will obtain

    $$\min_i\frac{r_i(A^{k+1})}{r_i(A^{k})}\le\rho(A)\le\max_i\frac{r_i(A^{k+1})}{r_i(A^{k})}.\tag{4}$$

    The boundaries are further generalized in [8] as follows:

    $$\min_i\frac{r_i(AB)}{r_i(B)}\le\rho(A)\le\max_i\frac{r_i(AB)}{r_i(B)},\tag{5}$$

    where B is an arbitrary matrix that has positive row sums.
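    Bound (4) is easy to evaluate in practice, since the row sums of successive powers satisfy $r_i(A^{k+1})=\sum_t a_{it}r_t(A^k)$ and can be updated by matrix-vector products. Below is a minimal Python sketch; the test matrix is our own choice.

    ```python
    # Bounds (4) for k = 1..5: min_i and max_i of r_i(A^{k+1}) / r_i(A^k).
    A = [[1, 1, 2], [2, 3, 3], [4, 1, 1]]

    def mat_vec(A, x):
        return [sum(a * xi for a, xi in zip(row, x)) for row in A]

    v = [sum(row) for row in A]          # r_i(A)
    for k in range(1, 6):
        w = mat_vec(A, v)                # r_i(A^{k+1})
        lo = min(num / den for num, den in zip(w, v))
        hi = max(num / den for num, den in zip(w, v))
        print(k, lo, hi)                 # the bounds tighten as k grows
        v = w
    ```

    For k = 2 this reproduces the row bounds 5.5833 ≤ ρ(A) ≤ 5.8667 reported for (4) in Table 1 of the numerical section.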

    We now recall the concept of a non-defective matrix. In linear algebra, a square matrix that lacks a full basis of eigenvectors and therefore cannot be diagonalized is referred to as a defective matrix. Accordingly, a matrix of order n is non-defective if it has n linearly independent eigenvectors.

    This paper is dedicated to the estimation and calculation of the maximal eigenvalue of a positive matrix. Initially, we present monotonically increasing lower-bound estimates and monotonically decreasing upper-bound estimates for the maximal eigenvalue of a positive matrix. Additionally, we rigorously prove the monotonicity and convergence of these estimates. Notably, if the positive matrix is non-defective, we provide a smoothing algorithm to calculate its maximal eigenvalue.

    To derive our conclusions, we will recall some essential lemmas as follows.

    Lemma 1. [5] Let $\lambda$ be an eigenvalue of the square matrix $A$ of order $n$, and let $U=(u_1,u_2,\dots,u_n)^T$ and $V=(v_1,v_2,\dots,v_n)^T$ be eigenvectors corresponding to $\lambda$ of $A^T$ and $A$, respectively. Then

    $$\lambda\sum_{i=1}^{n}u_i=\sum_{i=1}^{n}u_ir_i(A),\qquad \lambda\sum_{j=1}^{n}v_j=\sum_{j=1}^{n}v_jc_j(A).$$

    Lemma 2. [5] If $q_t>0$, $t\in\langle n\rangle$, then for any real numbers $p_t$, $t\in\langle n\rangle$, the following inequality holds:

    $$\min_t\frac{p_t}{q_t}\le\frac{\sum_{t=1}^{n}p_t}{\sum_{t=1}^{n}q_t}\le\max_t\frac{p_t}{q_t}.$$
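    Lemma 2 is the classical mediant inequality: the ratio of sums lies between the extreme termwise ratios. A short numerical check in Python (the values are arbitrary, chosen only for illustration):

    ```python
    p = [3.0, -1.0, 5.0, 2.0]    # any real numbers p_t
    q = [2.0, 4.0, 1.0, 3.0]     # q_t > 0, as the lemma requires

    ratios = [pt / qt for pt, qt in zip(p, q)]
    mediant = sum(p) / sum(q)
    print(min(ratios), "<=", mediant, "<=", max(ratios))
    ```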

    Now, we present the upper and lower bounds on the maximal eigenvalue of a positive matrix.

    Theorem 1. Given a positive matrix $A=(a_{ij})_{n\times n}$, let $B_1=(A+\alpha I)^{n-1}$, $B_2=(A-\beta I)^{n-1}$, where $\alpha=\max_i\{a_{ii}\}$, $\beta=\min_i\{a_{ii}\}$. If $r_i(AB_1B_2)\neq 0$ and $c_i(AB_1B_2)\neq 0$, $i\in\langle n\rangle$, then for any positive integer $k$, we have

    $$\min_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\le\rho(A)\le\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}},\tag{6}$$
    $$\min_i\left[\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\le\rho(A)\le\max_i\left[\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}.\tag{7}$$

    Proof. First, we prove $r_i(A^kB_1B_2)>0$ under the condition $r_i(AB_1B_2)\neq 0$. For any positive integer $k\ge 2$, the element of the matrix $A^{k-1}B_1B_2$ located in the $t$-th row and the $j$-th column is denoted by $(A^{k-1}B_1B_2)_{tj}$. Note that $A^kB_1B_2=A\cdot A^{k-1}B_1B_2$. We have

    $$r_i(A^kB_1B_2)=r_i(A\cdot A^{k-1}B_1B_2)=\sum_{j=1}^{n}\sum_{t=1}^{n}a_{it}(A^{k-1}B_1B_2)_{tj}=\sum_{t=1}^{n}\sum_{j=1}^{n}a_{it}(A^{k-1}B_1B_2)_{tj}=\sum_{t=1}^{n}a_{it}\sum_{j=1}^{n}(A^{k-1}B_1B_2)_{tj}=\sum_{t=1}^{n}a_{it}r_t(A^{k-1}B_1B_2).\tag{8}$$

    It is evident that $B_1=(A+\alpha I)^{n-1}$ and $B_2=(A-\beta I)^{n-1}$ are nonnegative. Therefore, $AB_1B_2$ is nonnegative, and $r_i(AB_1B_2)>0$ holds under the restriction that $r_i(AB_1B_2)\neq 0$. As $A$ is a positive matrix, that is, $a_{ij}>0$ for $i,j\in\langle n\rangle$, we immediately obtain $r_i(A^kB_1B_2)>0$ from Eq (8). By employing the same approach, we also obtain $c_i(A^kB_1B_2)>0$ if $c_i(AB_1B_2)\neq 0$, $i\in\langle n\rangle$. These assertions ensure that the expressions in Eqs (6) and (7) are well defined.

    Now, we assume $X=(x_1,x_2,\dots,x_n)^T>0$ is the eigenvector of the matrix $A^T$ corresponding to $\rho(A)$, that is, $A^TX=\rho(A)X$. Clearly, the maximal eigenvalues of the matrix polynomials

    $$A^{k+2}B_1B_2=A^{k+2}(A+\alpha I)^{n-1}(A-\beta I)^{n-1}$$

    and

    $$A^{k}B_1B_2=A^{k}(A+\alpha I)^{n-1}(A-\beta I)^{n-1}$$

    are $\rho^{k+2}(A)[\rho(A)+\alpha]^{n-1}[\rho(A)-\beta]^{n-1}$ and $\rho^{k}(A)[\rho(A)+\alpha]^{n-1}[\rho(A)-\beta]^{n-1}$, respectively. Therefore, we obtain

    $$(A^{k+2}B_1B_2)^TX=\rho^{k+2}(A)[\rho(A)+\alpha]^{n-1}[\rho(A)-\beta]^{n-1}X$$

    and

    $$(A^{k}B_1B_2)^TX=\rho^{k}(A)[\rho(A)+\alpha]^{n-1}[\rho(A)-\beta]^{n-1}X.$$

    Based on Lemma 1, we have

    $$\rho^{k+2}(A)[\rho(A)+\alpha]^{n-1}[\rho(A)-\beta]^{n-1}\sum_{i=1}^{n}x_i=\sum_{i=1}^{n}x_ir_i(A^{k+2}B_1B_2)\tag{9}$$

    and

    $$\rho^{k}(A)[\rho(A)+\alpha]^{n-1}[\rho(A)-\beta]^{n-1}\sum_{i=1}^{n}x_i=\sum_{i=1}^{n}x_ir_i(A^{k}B_1B_2).\tag{10}$$

    Moreover, we must have

    $$\rho^{k}(A)[\rho(A)+\alpha]^{n-1}[\rho(A)-\beta]^{n-1}>0.$$

    This is guaranteed by the previously established $r_i(A^kB_1B_2)>0$ and, from Eq (10) together with Lemma 2,

    $$\rho^{k}(A)[\rho(A)+\alpha]^{n-1}[\rho(A)-\beta]^{n-1}\ge\min_i r_i(A^{k}B_1B_2)>0.$$

    Therefore, according to Eqs (9) and (10), we can get

    $$\rho^{2}(A)=\frac{\rho^{k+2}(A)[\rho(A)+\alpha]^{n-1}[\rho(A)-\beta]^{n-1}\sum_{i=1}^{n}x_i}{\rho^{k}(A)[\rho(A)+\alpha]^{n-1}[\rho(A)-\beta]^{n-1}\sum_{i=1}^{n}x_i}=\frac{\sum_{i=1}^{n}x_ir_i(A^{k+2}B_1B_2)}{\sum_{i=1}^{n}x_ir_i(A^{k}B_1B_2)}.$$

    It follows from Lemma 2 that

    $$\min_i\frac{x_ir_i(A^{k+2}B_1B_2)}{x_ir_i(A^{k}B_1B_2)}\le\rho^2(A)\le\max_i\frac{x_ir_i(A^{k+2}B_1B_2)}{x_ir_i(A^{k}B_1B_2)},$$

    that is,

    $$\min_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\le\rho(A)\le\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}.$$

    Therefore, inequality (6) is proved. Similarly, inequality (7) holds.

    Remark 1. From the formula

    $$r_i(A^kB_1B_2)=\sum_{t=1}^{n}a_{it}r_t(A^{k-1}B_1B_2)$$

    in (8), we can observe that $r_i(A^{k+2}B_1B_2)$ and $r_i(A^kB_1B_2)$ can be calculated by induction. In addition, a new matrix multiplication technique known as the semitensor product of matrices (STP) has been developed in recent years, which offers more powerful functionality compared to traditional matrix multiplication [11,12,13]. This implies that it is not difficult to compute the upper and lower bounds of ρ(A).
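    The induction in Remark 1 means that the row-sum vectors can be updated by matrix-vector products with A, without ever forming the powers $A^k$ explicitly. A Python sketch (the helper name and the generic nonnegative factor M, standing in for $B_1B_2$, are our own):

    ```python
    def row_sum_sequence(A, M, kmax):
        """Return the vectors [r(M), r(AM), ..., r(A^kmax M)] using Eq (8)."""
        v = [sum(row) for row in M]      # r_i(A^0 M) = r_i(M)
        seq = [v]
        for _ in range(kmax):
            # r_i(A^k M) = sum_t a_it * r_t(A^{k-1} M)
            v = [sum(a * x for a, x in zip(row, v)) for row in A]
            seq.append(v)
        return seq

    A = [[1, 1, 2], [2, 3, 3], [4, 1, 1]]
    M = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity: recovers plain r_i(A^k)
    seq = row_sum_sequence(A, M, 2)
    print(seq)
    ```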

    In the following, we prove the convergence of the upper- and lower-bound expressions in Theorem 1 as the positive integer $k\to\infty$.

    Theorem 2. Under the assumptions of Theorem 1, the following limits

    $$\lim_{k\to\infty}\min_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}},\quad \lim_{k\to\infty}\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}},$$
    $$\lim_{k\to\infty}\min_i\left[\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}},\quad \lim_{k\to\infty}\max_i\left[\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}$$

    exist, and ρ(A) satisfies the following inequalities:

    $$\lim_{k\to\infty}\min_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\le\rho(A)\le\lim_{k\to\infty}\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}},$$
    $$\lim_{k\to\infty}\min_i\left[\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\le\rho(A)\le\lim_{k\to\infty}\max_i\left[\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}.$$

    Proof. For any positive integer $k\ge 2$, according to (8) and Lemma 2, we have

    $$\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}=\frac{\sum_{t=1}^{n}a_{it}r_t(A^{k+1}B_1B_2)}{\sum_{t=1}^{n}a_{it}r_t(A^{k-1}B_1B_2)}\le\max_t\frac{r_t(A^{k+1}B_1B_2)}{r_t(A^{k-1}B_1B_2)}=\max_i\frac{r_i(A^{(k-1)+2}B_1B_2)}{r_i(A^{k-1}B_1B_2)}.$$

    The above inequality shows that

    $$\max_i\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\le\max_i\frac{r_i(A^{(k-1)+2}B_1B_2)}{r_i(A^{k-1}B_1B_2)}.$$

    Therefore, we acquire

    $$\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\le\max_i\left[\frac{r_i(A^{(k-1)+2}B_1B_2)}{r_i(A^{k-1}B_1B_2)}\right]^{\frac{1}{2}}.$$

    That is to say, the following sequence

    $$\left\{\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\right\}$$

    is monotonically decreasing with respect to k. On the other hand, by Theorem 1 we know that this sequence has the lower bound ρ(A). Based on the monotone bounded convergence criterion, we conclude that

    $$\lim_{k\to\infty}\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}$$

    exists and is not less than ρ(A). Similarly, the sequence

    $$\left\{\min_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\right\}$$

    increases monotonically with respect to k and has the upper bound ρ(A) according to (6) in Theorem 1. Therefore, we derive that

    $$\lim_{k\to\infty}\min_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}$$

    exists and is not more than ρ(A). Using the same approach, we can establish analogous results for the column sums when $c_i(AB_1B_2)\neq 0$, $i\in\langle n\rangle$.

    Theorem 2 discusses the upper and lower bounds on the maximal eigenvalue of a positive matrix: the sequence $\left\{\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\right\}$ decreases monotonically and is bounded below by ρ(A), while the sequence $\left\{\min_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\right\}$ increases monotonically and is bounded above by ρ(A). It is natural to wonder whether these limits are then equal to ρ(A). In general, it is difficult to prove

    $$\lim_{k\to\infty}\min_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}=\rho(A)=\lim_{k\to\infty}\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}.$$

    Indeed, in special cases we have some elegant conclusions. The following are the results for a non-defective positive matrix.

    Theorem 3. Under the assumptions of Theorem 1, if the positive matrix $A=(a_{ij})_{n\times n}$ is non-defective, then we have

    $$\lim_{k\to\infty}\min_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}=\rho(A)=\lim_{k\to\infty}\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}},\tag{11}$$
    $$\lim_{k\to\infty}\min_i\left[\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}=\rho(A)=\lim_{k\to\infty}\max_i\left[\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}.\tag{12}$$

    Proof. Without loss of generality, it may be assumed that $\lambda_1,\lambda_2,\dots,\lambda_{n-1},\rho(A)$ are the eigenvalues of the matrix $A$, ordered so that

    $$|\lambda_1|\le|\lambda_2|\le\cdots\le|\lambda_{n-1}|<\rho(A),$$

    where the last inequality is strict because the Perron root of a positive matrix dominates all other eigenvalues in modulus. The corresponding eigenvector family is $X_1,X_2,\dots,X_{n-1},X_n$, in which

    $$X_j=(x_{1j},x_{2j},\dots,x_{nj})^T,\quad j=1,2,\dots,n.$$

    Since $A$ is non-defective, the eigenvector family $X_1,X_2,\dots,X_{n-1},X_n$ forms a basis of the $n$-dimensional vector space. Moreover, we have

    $$r_i(A^{k+2}B_1B_2)=r_i(A^{k+1}\cdot AB_1B_2)=\sum_{j=1}^{n}\sum_{t=1}^{n}(A^{k+1})_{it}(AB_1B_2)_{tj}=\sum_{t=1}^{n}(A^{k+1})_{it}\sum_{j=1}^{n}(AB_1B_2)_{tj}=\sum_{t=1}^{n}(A^{k+1})_{it}r_t(AB_1B_2),\tag{13}$$

    where $(A^{k+1})_{it}$ denotes the element of the matrix $A^{k+1}$ located in the $i$-th row and the $t$-th column, and $(AB_1B_2)_{tj}$ denotes the element of the matrix $AB_1B_2$ located in the $t$-th row and the $j$-th column, respectively. Similarly, we have

    $$r_i(A^{k}B_1B_2)=\sum_{t=1}^{n}(A^{k-1})_{it}r_t(AB_1B_2),\quad k\ge 2.\tag{14}$$

    Now, we define a special vector as follows:

    $$Y=\big(r_1(AB_1B_2),r_2(AB_1B_2),\dots,r_n(AB_1B_2)\big)^T.$$

    It is obvious that $Y$ is a positive vector since $r_i(AB_1B_2)\neq 0$ (more precisely, $r_i(AB_1B_2)>0$), $i\in\langle n\rangle$.

    Thus, Y can be expressed as

    $$Y=C_1X_1+C_2X_2+\cdots+C_{n-1}X_{n-1}+C_nX_n,\tag{15}$$

    where $C_1,C_2,\dots,C_{n-1},C_n$ are not all zero. In fact, $C_n\neq 0$: multiplying (15) on the left by $X^T$, where $X>0$ is the eigenvector of $A^T$ corresponding to $\rho(A)$, annihilates $X_1,\dots,X_{n-1}$ (since $\rho(A)X^TX_j=X^TAX_j=\lambda_jX^TX_j$ and $\lambda_j\neq\rho(A)$) and yields $0<X^TY=C_nX^TX_n$. Moreover, $X_n$ may be taken to be the positive Perron vector of $A$, so $x_{in}>0$ for all $i$. Together with Eqs (13) and (14), one can get

    $$\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}=\frac{\sum_{t=1}^{n}(A^{k+1})_{it}r_t(AB_1B_2)}{\sum_{t=1}^{n}(A^{k-1})_{it}r_t(AB_1B_2)},\quad k\ge 2.$$

    On the other hand, we have

    $$\sum_{t=1}^{n}(A^{k+1})_{it}r_t(AB_1B_2)=\left[A^{k+1}\big(r_1(AB_1B_2),r_2(AB_1B_2),\dots,r_n(AB_1B_2)\big)^T\right]_i=(A^{k+1}Y)_i\tag{16}$$

    and

    $$\sum_{t=1}^{n}(A^{k-1})_{it}r_t(AB_1B_2)=\left[A^{k-1}\big(r_1(AB_1B_2),r_2(AB_1B_2),\dots,r_n(AB_1B_2)\big)^T\right]_i=(A^{k-1}Y)_i,\tag{17}$$

    in which $(A^{k+1}Y)_i$ and $(A^{k-1}Y)_i$ denote the $i$-th coordinates of the vectors $A^{k+1}Y$ and $A^{k-1}Y$, respectively. Combined with Eqs (15)–(17), we obtain

    $$\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}=\frac{(A^{k+1}Y)_i}{(A^{k-1}Y)_i}=\frac{[A^{k+1}(C_1X_1+C_2X_2+\cdots+C_{n-1}X_{n-1}+C_nX_n)]_i}{[A^{k-1}(C_1X_1+C_2X_2+\cdots+C_{n-1}X_{n-1}+C_nX_n)]_i}=\frac{(C_1A^{k+1}X_1+C_2A^{k+1}X_2+\cdots+C_{n-1}A^{k+1}X_{n-1}+C_nA^{k+1}X_n)_i}{(C_1A^{k-1}X_1+C_2A^{k-1}X_2+\cdots+C_{n-1}A^{k-1}X_{n-1}+C_nA^{k-1}X_n)_i}=\frac{C_1\lambda_1^{k+1}x_{i1}+C_2\lambda_2^{k+1}x_{i2}+\cdots+C_{n-1}\lambda_{n-1}^{k+1}x_{i(n-1)}+C_n\rho^{k+1}(A)x_{in}}{C_1\lambda_1^{k-1}x_{i1}+C_2\lambda_2^{k-1}x_{i2}+\cdots+C_{n-1}\lambda_{n-1}^{k-1}x_{i(n-1)}+C_n\rho^{k-1}(A)x_{in}}.\tag{18}$$

    Taking the limit on both sides of Eq (18) as $k\to\infty$, we acquire

    $$\lim_{k\to\infty}\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}=\lim_{k\to\infty}\frac{C_1\lambda_1^{k+1}x_{i1}+C_2\lambda_2^{k+1}x_{i2}+\cdots+C_{n-1}\lambda_{n-1}^{k+1}x_{i(n-1)}+C_n\rho^{k+1}(A)x_{in}}{C_1\lambda_1^{k-1}x_{i1}+C_2\lambda_2^{k-1}x_{i2}+\cdots+C_{n-1}\lambda_{n-1}^{k-1}x_{i(n-1)}+C_n\rho^{k-1}(A)x_{in}}=\lim_{k\to\infty}\frac{\rho^{k+1}(A)}{\rho^{k-1}(A)}\cdot\frac{C_1\left(\frac{\lambda_1}{\rho(A)}\right)^{k+1}x_{i1}+\cdots+C_{n-1}\left(\frac{\lambda_{n-1}}{\rho(A)}\right)^{k+1}x_{i(n-1)}+C_nx_{in}}{C_1\left(\frac{\lambda_1}{\rho(A)}\right)^{k-1}x_{i1}+\cdots+C_{n-1}\left(\frac{\lambda_{n-1}}{\rho(A)}\right)^{k-1}x_{i(n-1)}+C_nx_{in}}=\rho^2(A)\cdot\frac{C_nx_{in}}{C_nx_{in}}=\rho^2(A),$$

    since $|\lambda_j|<\rho(A)$ for $j=1,\dots,n-1$ and $C_nx_{in}\neq 0$.

    Since the limit of each ratio is $\rho^2(A)$, taking square roots yields the equality in (11). Similarly, we can prove that the corresponding result holds for the columns.

    Remark 2. Equations (11) and (12) in Theorem 3 show that, for a non-defective positive matrix A, the maximum and the minimum over $i$ of

    $$\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\quad\left(\text{resp. }\left[\frac{c_i(A^{k+2}B_1B_2)}{c_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}\right)$$

    both converge to the maximal eigenvalue of A as k tends to infinity.

    Based on Theorems 1–3, we can derive the following algorithm for determining the maximal eigenvalue of a non-defective positive matrix.

    Step 0. Given a non-defective positive matrix $A=(a_{ij})_{n\times n}$ and a sufficiently small positive number $\varepsilon>0$;

    Step 1. Let $\alpha=\max_i\{a_{ii}\}$, $\beta=\min_i\{a_{ii}\}$. Compute $B_1=(A+\alpha I)^{n-1}$, $B_2=(A-\beta I)^{n-1}$;

    Step 2. Let $k=0$;

    Step 3. Compute $T=\max_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}$, $t=\min_i\left[\frac{r_i(A^{k+2}B_1B_2)}{r_i(A^{k}B_1B_2)}\right]^{\frac{1}{2}}$;

    Step 4. If $T-t<\varepsilon$, go to Step 5; otherwise, set $k=k+1$ and go back to Step 3;

    Step 5. Output $k$ and $\rho(A)\approx\frac{T+t}{2}$, and stop.
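    Steps 0–5 translate directly into code. The sketch below is a plain-Python rendering (the function and variable names are our own; the Example 1 matrix serves as the test input, and a production version would rescale the row-sum vectors to avoid overflow for slowly converging inputs).

    ```python
    def mat_mul(X, Y):
        n = len(X)
        return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
                for i in range(n)]

    def mat_pow(X, p):
        n = len(X)
        R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
        for _ in range(p):
            R = mat_mul(R, X)
        return R

    def smoothing_algorithm(A, eps=1e-8, max_iter=200):            # Step 0
        n = len(A)
        alpha = max(A[i][i] for i in range(n))                     # Step 1
        beta = min(A[i][i] for i in range(n))
        B1 = mat_pow([[A[i][j] + alpha * (i == j) for j in range(n)]
                      for i in range(n)], n - 1)
        B2 = mat_pow([[A[i][j] - beta * (i == j) for j in range(n)]
                      for i in range(n)], n - 1)
        # Row sums r(A^k B1 B2) via Eq (8): one matrix-vector product per level.
        v0 = [sum(row) for row in mat_mul(B1, B2)]                 # level k
        v1 = [sum(a * x for a, x in zip(row, v0)) for row in A]    # level k + 1
        v2 = [sum(a * x for a, x in zip(row, v1)) for row in A]    # level k + 2
        for k in range(max_iter):                                  # Step 2
            T = max((num / den) ** 0.5 for num, den in zip(v2, v0))  # Step 3
            t = min((num / den) ** 0.5 for num, den in zip(v2, v0))
            if T - t < eps:                                        # Step 4
                return k, (T + t) / 2                              # Step 5
            v0, v1 = v1, v2
            v2 = [sum(a * x for a, x in zip(row, v1)) for row in A]
        return max_iter, (T + t) / 2

    A = [[1, 1, 2], [2, 3, 3], [4, 1, 1]]   # the Example 1 matrix
    k, rho = smoothing_algorithm(A)
    print(k, rho)                            # rho close to 5.74165738
    ```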

    Remark 3. Replace the row sums in the algorithm with the corresponding column sums. The algorithm is still valid.

    Remark 4. In the above algorithm, the upper bound of ρ(A) is decreasing, while the lower bound of ρ(A) is increasing. This behavior exhibits a smoothing tendency and, as a result, we refer to this algorithm as a smoothing algorithm.

    In this section, we consider two examples to demonstrate our findings.

    Example 1. Consider the positive matrix

    $$A=\begin{pmatrix}1&1&2\\2&3&3\\4&1&1\end{pmatrix}.$$

    The comparisons between the estimation results of [1,2,3,4,5,6,7,8] and Theorem 1 of this paper regarding the maximal eigenvalue of matrix A are presented in Table 1.

    Table 1.  Bounds for the maximal eigenvalue of A.

    Bound               By row                    By column
    (1)                 4.0000 < ρ(A) < 8.0000    5.0000 < ρ(A) < 7.0000
    Ledermann [2]       4.1547 < ρ(A) < 7.8661    5.0800 < ρ(A) < 6.9259
    Ostrowski [3]       4.5275 < ρ(A) < 7.6547    5.2247 < ρ(A) < 6.8165
    Brauer [4]          4.8284 < ρ(A) < 7.4642    5.3722 < ρ(A) < 6.7016
    Minc [5]            5.0000 < ρ(A) < 6.2500    5.6000 < ρ(A) < 5.8572
    (4) (k=2)           5.5833 ≤ ρ(A) ≤ 5.8667    5.7143 ≤ ρ(A) ≤ 5.7805
    (2) (m=k=2)         5.6789 ≤ ρ(A) ≤ 5.7735    5.7259 ≤ ρ(A) ≤ 5.7615
    (3) (m=k=2)         5.6836 ≤ ρ(A) ≤ 5.8539    5.6975 ≤ ρ(A) ≤ 6.3087
    (5) (B=(A+I)²)      5.1429 ≤ ρ(A) ≤ 6.4444    5.5000 ≤ ρ(A) ≤ 6.0000
    Theorem 1 (k=1)     5.7292 ≤ ρ(A) ≤ 5.7581    5.7408 ≤ ρ(A) ≤ 5.7428
    Theorem 1 (k=2)     5.7367 ≤ ρ(A) ≤ 5.7455    5.7413 ≤ ρ(A) ≤ 5.7419
    Theorem 1 (k=3)     5.7405 ≤ ρ(A) ≤ 5.7432    5.7416 ≤ ρ(A) ≤ 5.7418


    Indeed, ρ(A)=5.74165738. The computational results in Table 1 demonstrate that the conclusions obtained from Theorem 1 in this paper improve upon the existing related results.

    Example 2. Calculate the maximal eigenvalue of the non-defective positive matrix B using the algorithm in Section 3. The results are presented in Table 2.

    $$B=\begin{pmatrix}1&1&1&1&1&1&1&1\\1&2&2&2&2&2&2&2\\1&2&3&3&3&3&3&3\\1&2&3&4&4&4&4&4\\1&2&3&4&5&5&5&5\\1&2&3&4&5&6&6&6\\1&2&3&4&5&6&7&7\\1&2&3&4&5&6&7&8\end{pmatrix}.$$
    Table 2.  Estimation for the maximal eigenvalue of B.

    ε            Iteration number    ρ(B)
    $10^{-2}$    1                   29.37
    $10^{-4}$    1                   29.3653
    $10^{-6}$    1                   29.36529789
    $10^{-8}$    1                   29.36529789434
    $10^{-10}$   2                   29.36529789436882


    In this paper, we have introduced monotonically increasing lower-bound estimates and monotonically decreasing upper-bound estimates for the maximal eigenvalue of a positive matrix. These estimates are constructed using two special matrices associated with the positive matrix. Their advantage is that they are straightforward to compute, as they depend solely on the elements of the positive matrix. Furthermore, we have rigorously proven the monotonicity and convergence of both the upper- and lower-bound estimates for the maximal eigenvalue of positive matrices.

    Additionally, we have developed a smoothing algorithm specifically designed to calculate the approximate value of the maximal eigenvalue for a non-defective positive matrix. This algorithm serves as an effective tool to obtain a reasonably accurate estimate for the maximal eigenvalue in such cases.

    Overall, our findings provide valuable insights and practical tools for estimating and computing the maximal eigenvalue of positive matrices, with special attention given to non-defective positive matrices.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was financially supported by Sichuan University Jinjiang College Cultivation Project of Sichuan Higher Education Institutions of Double First-class Construction Gongga Plan.

    The authors declare that they have no competing interests.



    [1] A. Berman, R. J. Plemmons, Nonnegative matrices in the mathematical sciences, Philadelphia: Society for Industrial and Applied Mathematics, 1994. https://doi.org/10.1137/1.9781611971262
    [2] W. Ledermann, Bounds for the greatest latent root of a positive matrix, J. Lond. Math. Soc., s1-25 (1950), 265–268. https://doi.org/10.1112/jlms/s1-25.4.265 doi: 10.1112/jlms/s1-25.4.265
    [3] A. Ostrowski, Bounds for the greatest latent root of a positive matrix, J. Lond. Math. Soc., s1-27 (1952), 253–256. https://doi.org/10.1112/jlms/s1-27.2.253 doi: 10.1112/jlms/s1-27.2.253
    [4] A. Brauer, The theorems of Ledermann and Ostrowski on positive matrices, Duke Math. J., 24 (1957), 265–274. https://doi.org/10.1215/S0012-7094-57-02434-1 doi: 10.1215/S0012-7094-57-02434-1
    [5] H. Minc, Nonnegative Matrices, New York: Wiley, 1988.
    [6] S. L. Liu, Bounds for the greatest characteristic root of a nonnegative matrix, Linear Algebra Appl., 239 (1996), 151–160. https://doi.org/10.1016/S0024-3795(96)90008-7 doi: 10.1016/S0024-3795(96)90008-7
    [7] L. M. Liu, T. Z. Huang, X. Q. Liu, New bounds for the greatest eigenvalue of a nonnegative matrix, J. Univ. Electron. Sci. Techn. China, 2007, 343–345.
    [8] P. Liao, Bounds for the Perron root of nonnegative matrices and spectral radius of iteration matrices, Linear Algebra Appl., 530 (2017), 253–265. https://doi.org/10.1016/j.laa.2017.05.021 doi: 10.1016/j.laa.2017.05.021
    [9] H. Cheriyath, N. Agarwal, On the Perron root and eigenvectors associated with a subshift of finite type, Linear Algebra Appl., 633 (2022), 42–70. https://doi.org/10.1016/j.laa.2021.10.003 doi: 10.1016/j.laa.2021.10.003
    [10] P. Liao, Bounds for the Perron root of positive matrices, Linear Multilinear A., 71 (2023), 1849–1857. https://doi.org/10.1080/03081087.2022.2081310 doi: 10.1080/03081087.2022.2081310
    [11] Y. Y. Yan, D. Z. Cheng, J. E. Feng, H. T. Li, J. M. Yue, Survey on applications of algebraic state space theory of logical systems to finite state machines, Sci. China Inform. Sci., 66 (2023), 111201. https://doi.org/10.1007/s11432-022-3538-4 doi: 10.1007/s11432-022-3538-4
    [12] D. Z. Cheng, H. S. Qi, Y. Zhao, An introduction to semi-tensor product of matrices and its applications, World Scientific, 2012. https://doi.org/10.1142/8323
    [13] J. M. Yue, Y. Y. Yan, H. Deng, Matrix approach to formulate and search k-ESS of graphs using the STP theory, J. Math., 2021 (2021), 1–12. https://doi.org/10.1155/2021/7230661 doi: 10.1155/2021/7230661
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)