Research article

A structure-preserving doubling algorithm for the square root of regular M-matrix

  • Received: 10 May 2024 Revised: 27 August 2024 Accepted: 06 September 2024 Published: 18 September 2024
  • The matrix square root is widely encountered in many fields of mathematics. In this paper, based on the properties of M-matrix and quadratic matrix equations, we study the square root of M-matrix, and prove that for a regular M-matrix there always exists a regular M-matrix as its square root. In addition, a structure-preserving doubling algorithm is proposed to compute the square root. Theoretical analysis and numerical experiments are given to show that our method is feasible and is effective under certain conditions.

    Citation: Zehua Wang, Jinrui Guan, Ahmed Zubair. A structure-preserving doubling algorithm for the square root of regular M-matrix[J]. Electronic Research Archive, 2024, 32(9): 5306-5320. doi: 10.3934/era.2024245




    Let $A \in \mathbb{C}^{n\times n}$. A matrix $X \in \mathbb{C}^{n\times n}$ is called a square root of $A$ if it satisfies the following matrix equation:

    $$X^2 = A. \qquad (1.1)$$

    The matrix square root is widely encountered in many fields of mathematics, such as nonlinear matrix equations, computation of the matrix logarithm, boundary value problems, Markov chains, and so on (see [1,2,3,4]).

    In recent years, many scholars have conducted in-depth research on the theory and numerical algorithms of the matrix square root and obtained a large number of results. On the theoretical side, the existence and the number of square roots of general matrices have been discussed extensively (see [2,3]). In addition, the existence of square roots has been studied for matrices with special structures and properties, such as the M-matrix (see [5]), the H-matrix with positive diagonal entries (see [6]), the centrosymmetric matrix (see [7,8]), the P-orthogonal matrix (see [9]), the circulant matrix (see [10]), the cyclic matrix (see [11]), the Boolean matrix (see [12]), and so on.

    In terms of numerical algorithms, the Schur method was the first method used to calculate the square root of a general matrix (see [13,14]). In fact, the command sqrtm in Matlab for computing the square root of a general matrix is based on the Schur method. However, as the size of the matrix increases, the cost of the Schur method grows sharply. Moreover, for matrices with special structures, iterative methods are often more effective, so attention turned to iterative methods. Higham first proposed the Newton method for the square root of a general matrix in [15]. Although the Newton method converges quickly, the cost of each iteration is high. To overcome this drawback, the simplified Newton method was proposed; however, it has poor numerical stability (see [2]). Later, the Newton method was modified, and some fast and effective algorithms were derived from it, such as the DB method, the CR method, and so on (see [16,17,18,19,20]). Recently, many effective algorithms for the matrix square root have also been proposed from different perspectives, such as the power method (see [21,22]), the gradient neural network method (see [23]), the Zolotarev iterative method (see [24]), the Chebyshev iterative method (see [25]), and so on (see [26,27,28]).
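    As a concrete illustration of the iterative methods surveyed above, the DB (Denman–Beavers) iteration can be sketched in a few lines. The following Python/NumPy code is a minimal sketch under our own choices of test matrix, tolerance, and stopping rule (none of which are taken from this paper):

    ```python
    import numpy as np

    def db_sqrtm(A, tol=1e-12, maxit=50):
        """Denman-Beavers iteration for the principal matrix square root.

        X_{k+1} = (X_k + inv(Y_k)) / 2,  Y_{k+1} = (Y_k + inv(X_k)) / 2,
        starting from X_0 = A, Y_0 = I. Then X_k -> A^{1/2} and
        Y_k -> A^{-1/2} when A has no eigenvalues on the closed
        negative real axis.
        """
        X = np.array(A, dtype=float)
        Y = np.eye(X.shape[0])
        for _ in range(maxit):
            X_next = 0.5 * (X + np.linalg.inv(Y))   # uses old Y
            Y = 0.5 * (Y + np.linalg.inv(X))        # uses old X
            if np.linalg.norm(X_next - X, 1) <= tol * np.linalg.norm(X_next, 1):
                X = X_next
                break
            X = X_next
        return X

    # A small nonsingular M-matrix (compare with Lemma 1.1 below)
    A = np.array([[2.0, -1.0], [-1.0, 2.0]])
    X = db_sqrtm(A)
    ```

    Each step costs two matrix inversions, which is why structure-exploiting alternatives such as the method proposed in this paper are attractive for larger problems.
    
    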

    Generally speaking, the theory of the matrix square root is rather involved, and the existence of a square root of a general matrix is not guaranteed: a general matrix may have no square root, or may have many square roots. More precisely, any matrix $A$ having no real nonpositive eigenvalues has a unique square root all of whose eigenvalues have positive real part; this square root, denoted by $A^{1/2}$, is called the principal square root (see [1,2]). As for the M-matrix, we have the following result.

    Lemma 1.1. [2] Let A be a nonsingular M-matrix. Then A has exactly one nonsingular M-matrix as its square root.

    But for a general M-matrix, the above conclusion is not necessarily valid. For example, consider

    $$A=\begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}, \qquad B=\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}.$$

    It is easy to verify that $A$ and $B$ are (singular) M-matrices, but direct calculation shows that $A$ has no square root (any square root $X$ of $A$ would be nilpotent, forcing $X^2 = 0 \ne A$), while $B$ has an M-matrix square root (indeed $B^2 = B$, so $B$ is a square root of itself). Therefore, it is necessary to extend the above lemma to more general subclasses of M-matrices. The regular M-matrix is an extension of the nonsingular M-matrix and the irreducible singular M-matrix. It has many nice properties similar to those of the nonsingular M-matrix and the irreducible singular M-matrix, and it also plays an important role in the theory of nonsymmetric algebraic Riccati equations. In this paper, we prove that for a regular M-matrix, there always exists a regular M-matrix as its square root. In addition, a structure-preserving doubling algorithm is proposed to compute the square root. Theoretical analysis and numerical experiments are given to show that our method is feasible and effective under certain conditions.

    The rest of the paper is organized as follows: In Section 2, we give some preliminary results of M-matrix. In Section 3, based on the properties of M-matrix and quadratic matrix equations, we prove the existence of square root of a regular M-matrix. In Section 4, we propose a structure-preserving doubling algorithm to compute the square root and then give a convergence analysis of it. In Section 5, we use some numerical examples to show the feasibility and effectiveness of our method. Conclusions are given in Section 6.

    In this section, we review the definitions and some properties of M-matrix.

    For any matrix $A \in \mathbb{R}^{m\times n}$, if the elements of $A$ satisfy $a_{ij} \ge 0$ for $1\le i\le m$, $1\le j\le n$, then $A$ is called a nonnegative matrix, denoted by $A \ge 0$. For any matrices $A, B \in \mathbb{R}^{m\times n}$, we write $A \ge B$ if $a_{ij} \ge b_{ij}$ holds for all $1\le i\le m$, $1\le j\le n$. Let $A=(a_{ij}) \in \mathbb{R}^{n\times n}$; then $A$ is called a Z-matrix if $a_{ij} \le 0$ for all $i \ne j$. A Z-matrix $A$ is called an M-matrix if there exists a nonnegative matrix $B$ such that $A = sI - B$ and $s \ge \rho(B)$, where $\rho(B)$ is the spectral radius of $B$. In particular, $A$ is called a nonsingular M-matrix if $s > \rho(B)$ and a singular M-matrix if $s = \rho(B)$.
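    The defining representation $A = sI - B$ can be checked mechanically: taking $s = \max_i a_{ii}$ makes $B = sI - A$ nonnegative for any Z-matrix, and the classification then reduces to comparing $s$ with $\rho(B)$. The following Python sketch (a helper of our own, not from the paper) carries this out with small numerical tolerances:

    ```python
    import numpy as np

    def classify_z_matrix(A):
        """Classify a Z-matrix via the representation A = s*I - B, B >= 0.

        With s = max_i a_ii, B = s*I - A is nonnegative, and A is a
        nonsingular (resp. singular) M-matrix iff s > rho(B) (resp. s = rho(B)).
        """
        A = np.asarray(A, dtype=float)
        s = A.diagonal().max()
        B = s * np.eye(A.shape[0]) - A
        assert (B >= -1e-12).all(), "A is not a Z-matrix"
        rho = max(abs(np.linalg.eigvals(B)))   # Perron root of the nonnegative B
        if s > rho + 1e-12:
            return "nonsingular M-matrix"
        if abs(s - rho) <= 1e-12:
            return "singular M-matrix"
        return "not an M-matrix"

    print(classify_z_matrix([[2.0, -1.0], [-1.0, 2.0]]))   # nonsingular M-matrix
    ```

    The result does not depend on the particular choice of $s$, since for a nonnegative $B$ the spectral radius is itself an eigenvalue, so $s - \rho(B)$ equals the smallest real eigenvalue of $A$ in any admissible splitting.
    
    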

    The following properties of M-matrix are well-known and can be found in [29].

    Lemma 2.1. Let $A \in \mathbb{R}^{n\times n}$ be a Z-matrix. Then the following statements are equivalent:

    (i) A is a nonsingular M-matrix;

    (ii) $A^{-1} \ge 0$;

    (iii) $Au > 0$ for some positive vector $u > 0$.

    Lemma 2.2. Let $A$ be a nonsingular M-matrix and $B$ be a Z-matrix. If $A \le B$, then $B$ is also a nonsingular M-matrix.

    Lemma 2.3. Let $A$ be an irreducible singular M-matrix and $B$ be a Z-matrix. If $A \le B$ and $A \ne B$, then $B$ is a nonsingular M-matrix.

    The regular M-matrix is an extension of the nonsingular M-matrix and the irreducible singular M-matrix. The definition of regular M-matrix is introduced in the following:

    Definition 2.1. [30] An M-matrix $A$ is said to be regular if $Au \ge 0$ for some $u > 0$.

    It is easy to verify that nonsingular M-matrices and irreducible singular M-matrices are always regular M-matrices, and that any Z-matrix $A$ such that $Au \ge 0$ for some $u > 0$ is a regular M-matrix.
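    Definition 2.1 is a feasibility condition, so it can be tested numerically as a linear program. The sketch below (using SciPy; the helper and the example matrices, reconstructed from the introduction, are our own assumptions) replaces $u > 0$ by $u \ge 1$ componentwise, which is equivalent up to a positive scaling of $u$:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def is_regular_m_matrix(A):
        """Check Definition 2.1 by linear programming.

        Scaling u does not change the constraint A u >= 0, so u > 0 can be
        replaced by u >= 1 componentwise; we then test feasibility.
        """
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        res = linprog(c=np.zeros(n),               # pure feasibility problem
                      A_ub=-A, b_ub=np.zeros(n),   # -A u <= 0  <=>  A u >= 0
                      bounds=[(1, None)] * n,      # u >= 1
                      method="highs")
        return res.success

    # The two example M-matrices from the introduction (entries as
    # reconstructed there): A has no square root and is not regular,
    # while B is its own square root and is regular.
    A = np.array([[0.0, -1.0], [0.0, 0.0]])
    B = np.array([[1.0, -1.0], [0.0, 0.0]])
    ```

    For small hand-checked matrices one can of course exhibit $u$ directly (e.g., $u = (1,1)^T$ works for $B$); the LP formulation is only needed when no obvious candidate is available.
    
    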

    In this section, we consider the square root of a regular M-matrix, and prove that for a regular M-matrix there exists a regular M-matrix as its square root.

    Let $A=(a_{ij}) \in \mathbb{R}^{n\times n}$ be a regular M-matrix, and let

    $$X = \alpha I - Y, \qquad (3.1)$$

    where $\alpha > 0$ is a parameter to be determined. Then Eq (1.1) can be rewritten as

    $$Y^2 - 2\alpha Y + \alpha^2 I - A = 0. \qquad (3.2)$$

    Here we choose the parameter $\alpha$ such that $\alpha^2 I - A$ is a nonnegative matrix. Since $A$ is a regular M-matrix, we can easily verify from the definition that $a_{ii} \ge 0$ for $i=1,2,\ldots,n$, where the $a_{ii}$ are the diagonal elements of $A$. In addition, it is evident that $\max_{1\le i\le n} a_{ii} > 0$ unless $A = 0$. Thus, if we choose

    $$\alpha \ge \Big(\max_{1\le i\le n} a_{ii}\Big)^{1/2},$$

    then $\alpha^2 I - A$ is a nonnegative matrix.

    In the following, we discuss the existence of a minimal nonnegative solution of Eq (3.2). To achieve this goal, we write Eq (3.2) in the following form:

    $$Y = \frac{1}{2\alpha}\left(Y^2 + \alpha^2 I - A\right),$$

    and then consider the iteration

    $$Y_0 = 0, \qquad Y_{k+1} = \frac{1}{2\alpha}\left(Y_k^2 + \alpha^2 I - A\right), \quad k = 0, 1, 2, \ldots. \qquad (3.3)$$

    We have the following result for iteration (3.3).

    Lemma 3.1. Let $A$ be a regular M-matrix, and let the parameter $\alpha$ satisfy $\alpha \ge (\max_{1\le i\le n} a_{ii})^{1/2}$. Then the sequence $\{Y_k\}$ generated by iteration (3.3) is well defined, monotonically increasing, and satisfies $Y_k u \le \alpha u$ for all $k \ge 0$, where $u$ is a positive vector such that $Au \ge 0$.

    Proof. It is evident that the sequence $\{Y_k\}$ is well defined, since only matrix additions and multiplications are used in iteration (3.3).

    We prove the conclusions by mathematical induction.

    (ⅰ) The sequence $\{Y_k\}$ is monotonically increasing, i.e., $0 \le Y_k \le Y_{k+1}$ for all $k \ge 0$.

    When $k=0$, it is evident that

    $$Y_1 = \frac{1}{2\alpha}\left(\alpha^2 I - A\right) \ge 0 = Y_0.$$

    Suppose the conclusion is true for $k-1$, i.e., $Y_{k-1} \le Y_k$. From

    $$Y_{k+1} - Y_k = \frac{1}{2\alpha}\left(Y_k^2 - Y_{k-1}^2\right)$$

    and

    $$Y_k^2 - Y_{k-1}^2 = Y_k\left(Y_k - Y_{k-1}\right) + \left(Y_k - Y_{k-1}\right)Y_{k-1} \ge 0,$$

    we know the conclusion is true for $k$. So for any $k \ge 0$, the conclusion is true.

    (ⅱ) Since $A$ is a regular M-matrix, there is a positive vector $u$ such that $Au \ge 0$. In the following, we prove that for any $k \ge 0$, $Y_k u \le \alpha u$ holds true.

    When $k=0$, the conclusion is obvious. Suppose that the conclusion is true for $k$. Then

    $$Y_{k+1} u = \frac{1}{2\alpha}\left(Y_k^2 u + \alpha^2 u - A u\right) \le \frac{1}{2\alpha}\left(\alpha^2 u + \alpha^2 u\right) = \alpha u,$$

    since $Y_k^2 u = Y_k(Y_k u) \le \alpha Y_k u \le \alpha^2 u$ and $Au \ge 0$; thus, the same is true for $k+1$. So for any $k \ge 0$, the conclusion is true. $\square$

    Theorem 3.1. Let $A$ be a regular M-matrix, and let the parameter $\alpha$ satisfy $\alpha \ge (\max_{1\le i\le n} a_{ii})^{1/2}$. Then Eq (3.2) has a minimal nonnegative solution $Y_{\min}$, and $\alpha I - Y_{\min}$ is a regular M-matrix.

    Proof. According to Lemma 3.1, the sequence $\{Y_k\}$ obtained from iteration (3.3) is monotonically increasing and bounded above. So it has a limit $Y_{\min} = \lim_{k\to\infty} Y_k$. Taking limits on both sides of (3.3), we see that $Y_{\min}$ is a solution of Eq (3.2), and $Y_{\min} \ge 0$.

    If $\widetilde{Y}$ is another nonnegative solution of Eq (3.2), we can easily verify as in Lemma 3.1 that the sequence $\{Y_k\}$ obtained from iteration (3.3) satisfies $Y_k \le \widetilde{Y}$ for all $k$. Taking limits yields $Y_{\min} \le \widetilde{Y}$, so $Y_{\min}$ is the minimal nonnegative solution.

    In addition, we have obtained $Y_k u \le \alpha u$ for all $k \ge 0$ in the proof of Lemma 3.1. Taking limits, we have $Y_{\min} u \le \alpha u$, so

    $$\left(\alpha I - Y_{\min}\right) u \ge 0.$$

    Thus, noting that $\alpha I - Y_{\min}$ is a Z-matrix, by definition $\alpha I - Y_{\min}$ is a regular M-matrix. $\square$

    According to the above theorem, we obtain the following conclusion.

    Theorem 3.2. Let $A$ be a regular M-matrix. Then there exists a square root of $A$, and this square root is a regular M-matrix.

    Proof. By Theorem 3.1, $X = \alpha I - Y_{\min}$ is a regular M-matrix, and it is a square root of $A$ since it satisfies Eq (1.1). $\square$
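    The existence proof above is constructive and directly implementable. The following Python/NumPy sketch of iteration (3.3) computes $Y_{\min}$ and returns $X = \alpha I - Y_{\min}$; the test matrix, tolerance, and iteration cap are our own choices:

    ```python
    import numpy as np

    def sqrtm_regular_m(A, tol=1e-14, maxit=10000):
        """Square root of a (nonzero) regular M-matrix via iteration (3.3).

        Choose alpha with alpha^2 = max_i a_ii, so that alpha^2*I - A >= 0,
        then iterate Y_{k+1} = (Y_k^2 + alpha^2*I - A) / (2*alpha) from
        Y_0 = 0. The limit Y is the minimal nonnegative solution of (3.2),
        and X = alpha*I - Y is a regular M-matrix square root of A.
        """
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        alpha = np.sqrt(A.diagonal().max())
        C = alpha**2 * np.eye(n) - A        # nonnegative by the choice of alpha
        Y = np.zeros((n, n))
        for _ in range(maxit):
            Y_next = (Y @ Y + C) / (2 * alpha)
            if np.linalg.norm(Y_next - Y, 1) <= tol:
                break
            Y = Y_next
        return alpha * np.eye(n) - Y_next

    # A nonsingular (hence regular) M-matrix as a test case
    A = np.array([[2.0, -1.0,  0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0,  2.0]])
    X = sqrtm_regular_m(A)
    ```

    The convergence of this fixed-point iteration is only linear, which is precisely the motivation for the doubling algorithm of Section 4.
    
    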

    Corollary 3.1. Let $A$ be a regular M-matrix, and let

    $$\alpha_1, \alpha_2 \ge \Big(\max_{1\le i\le n} a_{ii}\Big)^{1/2}$$

    be two parameters. Then $\alpha_1 I - Y_1 = \alpha_2 I - Y_2$, where $Y_1$ and $Y_2$ are the minimal nonnegative solutions of Eq (3.2) associated with $\alpha_1$ and $\alpha_2$, respectively.

    Proof. A matrix $Y$ solves Eq (3.2) with parameter $\alpha_1$ if and only if $Y + (\alpha_2 - \alpha_1) I$ solves Eq (3.2) with parameter $\alpha_2$, since both statements say that $(\alpha_1 I - Y)^2 = A$. Comparing the minimal nonnegative solutions of the two equations, we have $Y_2 = Y_1 + (\alpha_2 - \alpha_1) I$, and hence $\alpha_1 I - Y_1 = \alpha_2 I - Y_2$ immediately. $\square$

    Corollary 3.2. Let $A$ be a regular M-matrix and $Y_{\min}$ be the minimal nonnegative solution of Eq (3.2). If $A$ is a nonsingular M-matrix, then $\rho(Y_{\min}) < \alpha$; if $A$ is singular, then $\rho(Y_{\min}) = \alpha$.

    Proof. Note that $X = \alpha I - Y_{\min}$ is an M-matrix and satisfies Eq (1.1). If $A$ is nonsingular, so is $X$, and by the definition of the M-matrix we know $\rho(Y_{\min}) < \alpha$. If $A$ is singular, so is $X$, and thus $\rho(Y_{\min}) = \alpha$. $\square$

    Remark 1. We have proved that, for a regular M-matrix $A$, there exists a square root of $A$ that is a regular M-matrix. However, $A$ may have more than one M-matrix as its square root. For example, consider

    $$A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$

    It is evident that $A$ is a regular M-matrix, and $A$ is a square root of itself since $A^2 = A$. In addition,

    $$X = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}$$

    is an M-matrix (not regular), and is also a square root of $A$.

    The doubling algorithms are very efficient iterative methods for solving nonlinear matrix equations. The idea of doubling algorithms can be traced back to the 1970s, and recently the doubling algorithms have been successfully applied to various kinds of nonlinear matrix equations. We refer to [31] for a complete description of doubling algorithms. In this section, we propose a structure-preserving doubling algorithm to compute the square root of a regular M-matrix and then give a theoretical analysis of it.

    The analysis in the previous section shows that, in order to calculate the square root of a regular M-matrix, it is only necessary to compute the minimal nonnegative solution of Eq (3.2). To solve (3.2) effectively, we divide both sides of Eq (3.2) by $\alpha^2$ and let $Z = Y/\alpha$ to obtain

    $$Z^2 - 2 Z + I - \frac{1}{\alpha^2} A = 0. \qquad (4.1)$$

    By Theorem 3.1 and Corollary 3.2, Eq (4.1) has a minimal nonnegative solution $S = Y_{\min}/\alpha$ such that $\rho(S) \le 1$. In particular, $\rho(S) < 1$ when $A$ is a nonsingular M-matrix.

    It can be verified that Eq (4.1) is equivalent to

    $$\begin{pmatrix} 0 & I \\ -P & 2I \end{pmatrix} \begin{pmatrix} I \\ S \end{pmatrix} = \begin{pmatrix} I \\ S \end{pmatrix} S, \qquad P = I - \frac{1}{\alpha^2} A. \qquad (4.2)$$

    Pre-multiply Eq (4.2) by

    $$\begin{pmatrix} I & -\tfrac{1}{2} I \\ 0 & \tfrac{1}{2} I \end{pmatrix};$$

    then we have

    $$\mathcal{M} \begin{pmatrix} I \\ S \end{pmatrix} = \mathcal{L} \begin{pmatrix} I \\ S \end{pmatrix} S, \qquad (4.3)$$

    where

    $$\mathcal{M} = \begin{pmatrix} E_0 & 0 \\ -H_0 & I \end{pmatrix}, \qquad \mathcal{L} = \begin{pmatrix} I & -G_0 \\ 0 & F_0 \end{pmatrix}, \qquad E_0 = H_0 = \frac{1}{2} P, \quad F_0 = G_0 = \frac{1}{2} I.$$

    It is clear that the matrix pencil $\mathcal{M} - \lambda \mathcal{L}$ is in the first standard form (SF1) as defined in [31], and it is natural to apply the doubling algorithm for SF1 to solve (4.3).

    The basic idea of the structure-preserving doubling algorithm for SF1 to solve (4.3) is to construct recursively a sequence of pencils $\mathcal{M}_k - \lambda \mathcal{L}_k$, $k = 0, 1, 2, \ldots$, that satisfy

    $$\mathcal{M}_k \begin{pmatrix} I \\ S \end{pmatrix} = \mathcal{L}_k \begin{pmatrix} I \\ S \end{pmatrix} S^{2^k}, \qquad (4.4)$$

    and at the same time have the same forms as $\mathcal{M} - \lambda \mathcal{L}$:

    $$\mathcal{M}_k = \begin{pmatrix} E_k & 0 \\ -H_k & I \end{pmatrix}, \qquad \mathcal{L}_k = \begin{pmatrix} I & -G_k \\ 0 & F_k \end{pmatrix}. \qquad (4.5)$$

    Moreover, the blocks in $\mathcal{M}_k$ and $\mathcal{L}_k$ can be produced by the following iteration of doubling algorithms:

    $$\begin{aligned} E_{k+1} &= E_k \left(I - G_k H_k\right)^{-1} E_k, \\ F_{k+1} &= F_k \left(I - H_k G_k\right)^{-1} F_k, \\ G_{k+1} &= G_k + E_k \left(I - G_k H_k\right)^{-1} G_k F_k, \\ H_{k+1} &= H_k + F_k \left(I - H_k G_k\right)^{-1} H_k E_k. \end{aligned}$$

    For Eq (4.1), another matrix equation can be constructed as follows:

    $$\Big(I - \frac{1}{\alpha^2} A\Big) \widehat{Z}^2 - 2 \widehat{Z} + I = 0, \qquad (4.6)$$

    which is called the dual equation of (4.1). It can be verified similarly that Eq (4.6) has a minimal nonnegative solution $\widehat{S}$ such that $\rho(\widehat{S}) \le 1$, as for (4.1). In addition, we have

    $$\mathcal{M}_k \begin{pmatrix} \widehat{S} \\ I \end{pmatrix} \widehat{S}^{\,2^k} = \mathcal{L}_k \begin{pmatrix} \widehat{S} \\ I \end{pmatrix} \qquad (4.7)$$

    for the matrix pencils in (4.5).

    Thus, as long as $E_k$, $F_k$, $G_k$, and $H_k$ are well-defined, we will have, by comparing the block rows of (4.4) and (4.7),

    $$E_k = \left(I - G_k S\right) S^{2^k}, \qquad F_k = \big(I - H_k \widehat{S}\big)\, \widehat{S}^{\,2^k},$$

    together with

    $$S - H_k = F_k S^{2^k+1}, \qquad (4.8)$$

    $$\widehat{S} - G_k = E_k \widehat{S}^{\,2^k+1}. \qquad (4.9)$$
    According to the above derivation, the structure-preserving doubling algorithm to compute the square root of regular M-matrix can be stated as follows:

    In the following, we give a convergence analysis of the structure-preserving doubling algorithm.

    Lemma 4.1. Let $A$ be a regular M-matrix, and let $S$ and $\widehat{S}$ be the minimal nonnegative solutions of Eqs (4.1) and (4.6), respectively. Then $I - S$ and $I - \widehat{S}$ are both regular M-matrices. In particular, $I - S$ and $I - \widehat{S}$ are nonsingular M-matrices if $A$ is nonsingular, and they are irreducible M-matrices if $A$ is irreducible.

    Proof. (ⅰ) Since $A$ is a regular M-matrix, there exists a positive vector $u$ such that $Au \ge 0$. First, we can easily verify as in Lemma 3.1 that the sequences $\{S_k\}$ and $\{\widehat{S}_k\}$ generated by the following iterations

    $$S_0 = 0, \qquad S_{k+1} = \frac{1}{2}\Big(S_k^2 + I - \frac{1}{\alpha^2} A\Big), \qquad (4.10)$$

    $$\widehat{S}_0 = 0, \qquad \widehat{S}_{k+1} = \frac{1}{2}\Big(\Big(I - \frac{1}{\alpha^2} A\Big) \widehat{S}_k^2 + I\Big), \qquad (4.11)$$

    are well defined, monotonically increasing, and satisfy

    $$S_k u \le u, \qquad \widehat{S}_k u \le u$$

    for all $k \ge 0$. Taking limits, we have $S u \le u$ and $\widehat{S} u \le u$.

    In addition, it is clear that $I - S$ and $I - \widehat{S}$ are Z-matrices, and noting that

    $$(I - S) u \ge 0, \qquad \big(I - \widehat{S}\big) u \ge 0,$$

    we know $I - S$ and $I - \widehat{S}$ are both regular M-matrices.

    (ⅱ) When $A$ is a nonsingular M-matrix, it follows that $Av > 0$ for some positive vector $v > 0$. We can verify further that the sequences generated by (4.10) and (4.11) satisfy

    $$S_k v \le v - \frac{1}{2\alpha^2} A v, \qquad \widehat{S}_k v \le v - \frac{1}{2\alpha^2} A v \quad (k \ge 1).$$

    Taking limits, we obtain

    $$S v \le v - \frac{1}{2\alpha^2} A v, \qquad \widehat{S} v \le v - \frac{1}{2\alpha^2} A v.$$

    Thus

    $$(I - S) v \ge \frac{1}{2\alpha^2} A v > 0, \qquad \big(I - \widehat{S}\big) v \ge \frac{1}{2\alpha^2} A v > 0,$$

    which mean $I - S$ and $I - \widehat{S}$ are nonsingular M-matrices by Lemma 2.1.

    When $A$ is irreducible, from (4.10) and (4.11) we can observe that the off-diagonal parts of $S \ge S_1$ and $\widehat{S} \ge \widehat{S}_2$ inherit the nonzero pattern of the off-diagonal part of $A$, so $S$ and $\widehat{S}$ are irreducible. Since $(I - S)u \ge 0$ and $(I - \widehat{S})u \ge 0$, we can conclude that $I - S$ and $I - \widehat{S}$ are irreducible M-matrices. $\square$

    By the general theory of doubling algorithms, we can conclude the following convergence result when $A$ is a nonsingular M-matrix or an irreducible singular M-matrix.

    Theorem 4.1. Let $A$ be a nonsingular M-matrix or an irreducible singular M-matrix, and let $S$ and $\widehat{S}$ be the minimal nonnegative solutions of Eqs (4.1) and (4.6), respectively. Then the sequences $\{E_k\}$, $\{F_k\}$, $\{G_k\}$, $\{H_k\}$ generated by Algorithm 4.1 are well-defined, and for $k \ge 1$:

    (a) $E_k \ge 0$ and $F_k \ge 0$;

    (b) $0 \le H_{k-1} \le H_k \le S$ and $0 \le G_{k-1} \le G_k \le \widehat{S}$;

    (c) $I - G_k H_k$ and $I - H_k G_k$ are nonsingular M-matrices;

    (d) $S - H_k \ge 0$, $\widehat{S} - G_k \ge 0$, and

    $$S - H_k = F_k S^{2^k+1}, \qquad \widehat{S} - G_k = E_k \widehat{S}^{\,2^k+1}. \qquad (4.12)$$

    Algorithm 4.1 The structure-preserving doubling algorithm (SDA)
    Input: A regular M-matrix $A$;
    Output: A square root $X$ of $A$, which is a regular M-matrix.
    Step 1. Set $\alpha = \big(\max_{1\le i\le n} a_{ii}\big)^{1/2}$;
    Step 2. Compute $E_0 = H_0 = \frac{1}{2}\big(I - \frac{1}{\alpha^2} A\big)$ and $F_0 = G_0 = \frac{1}{2} I$;
    Step 3. For $k = 0, 1, 2, \ldots$, until convergence, compute
    $E_{k+1} = E_k(I - G_k H_k)^{-1}E_k$, $F_{k+1} = F_k(I - H_k G_k)^{-1}F_k$,
    $G_{k+1} = G_k + E_k(I - G_k H_k)^{-1}G_k F_k$, $H_{k+1} = H_k + F_k(I - H_k G_k)^{-1}H_k E_k$;
    Step 4. Set $X = \alpha(I - H_k)$, where $H_k$ is the last iterate computed in Step 3.

    In addition, we have

    $$\limsup_{k\to\infty} \big\| S - H_k \big\|^{1/2^k} \le \rho(S)\,\rho\big(\widehat{S}\big), \qquad \limsup_{k\to\infty} \big\| \widehat{S} - G_k \big\|^{1/2^k} \le \rho(S)\,\rho\big(\widehat{S}\big).$$

    Proof. We prove the conclusions by mathematical induction. Let $u > 0$ be a vector such that $Au \ge 0$, which exists since $A$ is regular.

    (ⅰ) When $k=0$, we observe first that $E_0$, $F_0$, $G_0$, and $H_0$ are well-defined and are all nonnegative. In addition, since

    $$\left(I - G_0 H_0\right) u = \left(I - H_0 G_0\right) u = u - \frac{1}{4}\Big(I - \frac{1}{\alpha^2} A\Big) u \ge \frac{3}{4}\, u > 0,$$

    both $I - G_0 H_0$ and $I - H_0 G_0$ are nonsingular M-matrices by Lemma 2.1. Thus $E_1$, $F_1$, $G_1$, and $H_1$ are well-defined. Moreover, from the iteration of doubling algorithms we have

    $$E_1 \ge 0, \qquad F_1 \ge 0, \qquad G_1 \ge G_0 \ge 0, \qquad H_1 \ge H_0 \ge 0.$$

    Let $k=1$ in (4.8) and (4.9) to get

    $$S - H_1 = F_1 S^3 \ge 0, \qquad \widehat{S} - G_1 = E_1 \widehat{S}^{\,3} \ge 0;$$

    thus we have

    $$0 \le H_1 \le S, \qquad 0 \le G_1 \le \widehat{S}.$$

    From Lemma 4.1 and noting that

    $$I - G_1 H_1 \ge I - \widehat{S} S, \qquad I - H_1 G_1 \ge I - S \widehat{S},$$

    we know $I - G_1 H_1$ and $I - H_1 G_1$ are nonsingular M-matrices. Furthermore, we have

    $$0 \le S - H_1 \le F_1 S^3, \qquad 0 \le \widehat{S} - G_1 \le E_1 \widehat{S}^{\,3}.$$

    This completes the proof for $k=1$.

    (ⅱ) Suppose now that the conclusions are true for $k$. From the iteration of doubling algorithms, $E_{k+1}$, $F_{k+1}$, $G_{k+1}$, and $H_{k+1}$ are well-defined, and satisfy

    $$E_{k+1} \ge 0, \qquad F_{k+1} \ge 0, \qquad G_{k+1} \ge G_k \ge 0, \qquad H_{k+1} \ge H_k \ge 0.$$

    On the other hand, let $k+1$ play the role of $k$ in (4.8) and (4.9) to obtain

    $$S - H_{k+1} = F_{k+1} S^{2^{k+1}+1} \ge 0, \qquad \widehat{S} - G_{k+1} = E_{k+1} \widehat{S}^{\,2^{k+1}+1} \ge 0.$$

    Thus, we have

    $$I - G_{k+1} H_{k+1} \ge I - \widehat{S} S, \qquad I - H_{k+1} G_{k+1} \ge I - S \widehat{S},$$

    which mean that $I - G_{k+1} H_{k+1}$ and $I - H_{k+1} G_{k+1}$ are nonsingular M-matrices from Lemma 4.1. Furthermore, we have

    $$0 \le S - H_{k+1} \le F_{k+1} S^{2^{k+1}+1}, \qquad 0 \le \widehat{S} - G_{k+1} \le E_{k+1} \widehat{S}^{\,2^{k+1}+1}.$$

    Thus, the conclusions are true for $k+1$.

    By the induction, the conclusions are true for all $k \ge 1$.

    In addition, from (4.12) and the relations $E_k = (I - G_k S)S^{2^k}$, $F_k = (I - H_k \widehat{S})\widehat{S}^{\,2^k}$, we can obtain

    $$\big\| S - H_k \big\| \le \big\|\big(I - H_k \widehat{S}\big)\big\|\, \big\|\widehat{S}^{\,2^k}\big\|\, \big\| S^{2^k+1} \big\|, \qquad \big\| \widehat{S} - G_k \big\| \le \big\|\left(I - G_k S\right)\big\|\, \big\| S^{2^k} \big\|\, \big\| \widehat{S}^{\,2^k+1} \big\|.$$

    Letting $k \to \infty$, and noting that $\{G_k\}$ and $\{H_k\}$ are bounded, we have

    $$\limsup_{k\to\infty} \big\| S - H_k \big\|^{1/2^k} \le \rho(S)\,\rho\big(\widehat{S}\big), \qquad \limsup_{k\to\infty} \big\| \widehat{S} - G_k \big\|^{1/2^k} \le \rho(S)\,\rho\big(\widehat{S}\big). \qquad \square$$

    Remark 2. We have only proved the convergence of the structure-preserving doubling algorithm for the nonsingular M-matrix and the irreducible singular M-matrix. When $A$ is a general regular M-matrix, the convergence analysis of the structure-preserving doubling algorithm is much more complicated; we refer to [31] for a complete convergence analysis of structure-preserving doubling algorithms. In addition, from Theorem 4.1, we can observe that when $A$ is a nonsingular M-matrix, the convergence rate is quadratic, since $\rho(S) < 1$. When $A$ is an irreducible singular M-matrix, the convergence rate is linear.
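    A compact sketch of Algorithm 4.1 in Python/NumPy is given below. It uses the SF1 blocks $E_0 = H_0 = \frac12(I - A/\alpha^2)$, $F_0 = G_0 = \frac12 I$ together with the standard SF1 doubling recursions from [31]; the initialization is one consistent linearization of (4.1) as derived above, and the stopping rule, iteration cap, and test matrix are our own assumptions, which may differ in detail from the authors' Matlab implementation:

    ```python
    import numpy as np

    def sda_sqrtm(A, tol=1e-14, maxit=60):
        """Structure-preserving doubling sketch for the square root of a
        nonsingular M-matrix A.

        Eq (3.2) scaled by alpha gives Z^2 - 2Z + P = 0 with
        P = I - A/alpha^2.  With E0 = H0 = P/2 and F0 = G0 = I/2, the SF1
        doubling recursions drive H_k to the minimal nonnegative solution S,
        and X = alpha*(I - H_k) approximates A^{1/2}.
        """
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        I = np.eye(n)
        alpha = np.sqrt(A.diagonal().max())
        P = I - A / alpha**2
        E, F = P / 2, I / 2
        G, H = I / 2, P / 2
        for _ in range(maxit):
            W1 = np.linalg.inv(I - G @ H)
            W2 = np.linalg.inv(I - H @ G)
            E_new = E @ W1 @ E
            F_new = F @ W2 @ F
            G_new = G + E @ W1 @ G @ F
            H_new = H + F @ W2 @ H @ E
            if np.linalg.norm(H_new - H, 1) <= tol:   # H_k has converged
                H = H_new
                break
            E, F, G, H = E_new, F_new, G_new, H_new
        return alpha * (I - H)

    # Nonsingular M-matrix test case (quadratic convergence expected)
    A = np.array([[2.0, -1.0,  0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0,  2.0]])
    X = sda_sqrtm(A)
    ```

    Each doubling step squares the error exponent ($S^{2^k}$ in (4.12)), so a handful of iterations suffices here, in contrast to the tens of iterations the fixed-point iteration (3.3) needs on the same matrix.
    
    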

    In this section, we use some numerical examples to show the feasibility and effectiveness of the structure-preserving doubling algorithm (Algorithm 4.1). We compare Algorithm 4.1 (SDA) with the basic Newton method (Newton) in [2]. The numerical results are presented in terms of iteration numbers (IT), CPU time (CPU) in seconds, and residue (Res), where the residue is defined to be

    $$\mathrm{Res} = \frac{\big\| \widetilde{X}^2 - A \big\|}{\| A \|},$$

    with $\widetilde{X}$ the computed square root.

    In our implementations, all iterations are performed in Matlab (R2018a) on a personal computer with a 2 GHz CPU and 8 GB of memory, and are terminated when the residue of the current iterate satisfies the prescribed tolerance.

    Example 5.1. Consider the nonsingular M-matrix $A$ as follows:

    where . For different orders of $A$, we apply both methods to calculate the square root of $A$. The numerical results are shown in Table 1.

    Table 1.  Numerical results of Example 5.1.
    $n$ Method IT CPU Res
    10 Newton 6 0.0645 7.5213e-16
    SDA 6 0.0352 3.8339e-13
    15 Newton 6 0.3145 1.1951e-15
    SDA 7 0.1086 8.4507e-16
    20 Newton 6 1.3102 4.1972e-14
    SDA 7 0.5396 4.6757e-14
    25 Newton 7 6.5119 1.6059e-15
    SDA 8 1.8972 1.5041e-15
    30 Newton 7 19.8473 1.5212e-15
    SDA 8 4.9366 1.1492e-15


    It can be seen from the table that both methods obtain results satisfying the accuracy requirement for different orders of $A$. In particular, the Newton method requires slightly fewer iterations than SDA, while SDA is much cheaper than the Newton method in terms of CPU time.

    Example 5.2. Consider the nonsingular M-matrix generated by the MATLAB command as in the following:

    For different sizes of $A$, we apply both methods to calculate the square root of $A$. The numerical results are reported in Table 2.

    Table 2.  Numerical results of Example 5.2.
    $n$ Method IT CPU Res
    100 Newton 7 0.0955 3.6745e-14
    SDA 7 0.0310 6.8434e-16
    200 Newton 8 0.4543 1.0868e-15
    SDA 7 0.1041 3.8228e-13
    300 Newton 8 1.1642 1.2259e-15
    SDA 8 0.2740 1.0488e-15
    400 Newton 8 2.6180 1.4675e-14
    SDA 8 0.5141 9.9678e-16
    500 Newton 8 5.3220 3.6847e-13
    SDA 8 0.9228 1.4307e-15


    From Table 2, it can be seen that both methods obtain results satisfying the accuracy requirement. In particular, SDA is much cheaper than the Newton method in terms of CPU time.

    Example 5.3. Consider the irreducible singular M-matrix $A$ as follows:

    For different sizes of $A$, the numerical results are reported in Table 3.

    Table 3.  Numerical results of Example 5.3.
    $n$ Method IT CPU Res
    100 Newton 22 0.2288 7.1685e-13
    SDA 20 0.0820 4.5514e-13
    200 Newton 23 1.1957 2.8263e-13
    SDA 20 0.2292 4.5569e-13
    300 Newton 21 2.9924 3.5541e-13
    SDA 20 0.5114 4.5553e-13
    400 Newton 26 8.5343 2.7856e-13
    SDA 20 1.2232 4.5548e-13
    500 Newton 29 19.2009 3.6264e-13
    SDA 20 2.5535 4.5548e-13


    In this example, due to the singularity of $A$, the convergence rates of the two iterative methods are both linear. In particular, SDA requires fewer iterations than the Newton method and is much cheaper in terms of CPU time.

    Example 5.4. Consider the following regular singular M-matrix $A$:

    The square root (rounded to four decimal places) is

    The numerical results are shown in Table 4. For this example, the Newton method fails to converge, but SDA still obtains a result that meets the accuracy requirement.

    Table 4.  Numerical results of Example 5.4.
    Method IT Res
    Newton - -
    SDA 21 1.5486e-16


    From the above four examples, it can be seen that the SDA proposed in this paper is feasible. In particular, it requires less CPU time than the Newton method.

    In this paper, we studied the square root of the M-matrix and proved that the square root of a regular M-matrix exists and is still a regular M-matrix. In addition, we proposed a structure-preserving doubling algorithm (SDA) to compute the square root. Theoretical analysis and numerical experiments show that SDA is feasible and effective under certain conditions.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors would like to express their gratitude to the editor and the two anonymous reviewers for their valuable comments and suggestions, which have greatly improved the quality of the paper. This work was supported by the National Natural Science Foundation of China (12001395).

    The authors declare there are no conflicts of interest.



    [1] R. A. Horn, C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, London, 1991.
    [2] N. J. Higham, Functions of Matrices: Theory and Computation, SIAM, Philadelphia London, 2008.
    [3] N. J. Higham, A. H. Al-Mohy, Computing matrix functions, Acta Numer., 19 (2010), 159–208. https://doi.org/10.1017/S0962492910000036 doi: 10.1017/S0962492910000036
    [4] D. A. Bini, B. Iannazzo, B. Meini, Numerical Solution of Algebraic Riccati Equations, SIAM, Philadelphia London, 2012.
    [5] G. Alefeld, N. Schneider, On square root of M-matrices, Linear Algebra Appl., 42 (1982), 119–132. https://doi.org/10.1016/0024-3795(82)90143-4 doi: 10.1016/0024-3795(82)90143-4
    [6] L. Lin, Z. Y. Liu, On the square root of an H-matrix with positive diagonal elements, Ann. Oper. Res., 103 (2001), 339–350. https://doi.org/10.1023/A:1012931928589 doi: 10.1023/A:1012931928589
    [7] C. R. Johnson, K. Okubo, R. Reams, Uniqueness of matrix square roots and an application, Linear Algebra Appl., 323 (2001), 51–60. https://doi.org/10.1016/S0024-3795(00)00243-3 doi: 10.1016/S0024-3795(00)00243-3
    [8] Z. Y. Liu, Y. L. Zhang, R. Ralha, Computing the square roots of matrices with central symmetry, Appl. Math. Comput., 186 (2007), 715–726. https://doi.org/10.1016/j.amc.2006.08.032 doi: 10.1016/j.amc.2006.08.032
    [9] J. R. Cardoso, C. S. Kenney, F. S. Leite, Computing the square root and logarithm of a real P-orthogonal matrix, Appl. Numer. Math., 46 (2003), 173–196. https://doi.org/10.1016/S0168-9274(03)00033-3 doi: 10.1016/S0168-9274(03)00033-3
    [10] C. B. Lu, C. Q. Gu, The computation of the square root of circulant matrices, Appl. Math. Comput., 217 (2011), 6819–6829. https://doi.org/10.1016/j.amc.2011.01.018 doi: 10.1016/j.amc.2011.01.018
    [11] Z. Y. Liu, Y. L. Zhang, J. Santos, R. Ralha, On computing complex square roots of real matrices, Appl. Math. Lett., 25 (2012), 1565–1568. https://doi.org/10.1016/j.aml.2012.01.015 doi: 10.1016/j.aml.2012.01.015
    [12] P. D. Moral, A. Niclas, A Taylor expansion of the square root matrix function, J. Math. Anal. Appl., 465 (2018), 259–266. https://doi.org/10.1016/j.jmaa.2018.05.005 doi: 10.1016/j.jmaa.2018.05.005
    [13] Å. Björck, S. Hammarling, A Schur method for the square root of a matrix, Linear Algebra Appl., 52/53 (1983), 127–140. https://doi.org/10.1016/0024-3795(83)80010-X doi: 10.1016/0024-3795(83)80010-X
    [14] N. J. Higham, Computing real square roots of a real matrix, Linear Algebra Appl., 88-89 (1987), 405–430. https://doi.org/10.1016/0024-3795(87)90118-2 doi: 10.1016/0024-3795(87)90118-2
    [15] N. J. Higham, Newton's method for the matrix square root, Math. Comput., 46 (1986), 537–549. https://doi.org/10.2307/2007992 doi: 10.2307/2007992
    [16] N. J. Higham, Stable iterations for the matrix square root, Numer. Alg., 15 (1997), 227–242. https://doi.org/10.1023/A:1019150005407 doi: 10.1023/A:1019150005407
    [17] E. D. Denman, A. N. Beavers, The matrix sign function and computations in systems, Appl. Math. Comput., 2 (1976), 63–94. https://doi.org/10.1016/0096-3003(76)90020-5 doi: 10.1016/0096-3003(76)90020-5
    [18] M. A. Hasan, A power method for computing square roots of complex matrices, J. Math. Anal. Appl., 213 (1997), 393–405. https://doi.org/10.1006/jmaa.1997.5517 doi: 10.1006/jmaa.1997.5517
    [19] B. Meini, The matrix square root from a new functional perspective: Theoretical results and computational issues, SIAM J. Matrix Anal. Appl., 26 (2004), 362–376. https://doi.org/10.1137/S0895479803426656 doi: 10.1137/S0895479803426656
    [20] D. A. Bini, B. Meini, The cyclic reduction algorithm: from Poisson equation to stochastic processes and beyonds, Numer. Algor., 51 (2009), 23–60. https://doi.org/10.1007/s11075-008-9253-0 doi: 10.1007/s11075-008-9253-0
    [21] A. Frommer, B. Hashemi, Verified computation of square roots of a matrix, SIAM J. Matrix Anal. Appl., 31 (2009), 1279–1302. https://doi.org/10.1137/090757058 doi: 10.1137/090757058
    [22] A. Sadeghi, Approximating the principal matrix square root using some novel third-order iterative methods, Ain Shams Eng. J., 9 (2018), 993–999. https://doi.org/10.1016/j.asej.2016.06.004 doi: 10.1016/j.asej.2016.06.004
    [23] C. Mo, D. Gerontitis, P. S. Stanimirović, Solving the time-varying tensor square root equation by varying-parameters finite-time Zhang neural network, Neurocomputing, 445 (2021), 309–325. https://doi.org/10.1016/j.neucom.2021.03.011 doi: 10.1016/j.neucom.2021.03.011
    [24] E. S. Gawlik, Zolotarev iterations for the matrix square root, SIAM J. Matrix Anal. Appl., 40 (2019), 696–719. https://doi.org/10.1137/18M1178529 doi: 10.1137/18M1178529
    [25] C. H. Guo, Explicit convergence regions of Newton's method and Chebyshev's method for the matrix pth root, Linear Algebra Appl., 583 (2019), 63–76. https://doi.org/10.1016/j.laa.2019.08.020 doi: 10.1016/j.laa.2019.08.020
    [26] S. Miyajima, Fast enclosure for a matrix inverse square root, Linear Algebra Appl., 467 (2015), 116–135. https://doi.org/10.1016/j.laa.2014.11.007 doi: 10.1016/j.laa.2014.11.007
    [27] X. F. Duan, C. Y. Wang, C. M. Li, Newton's method for solving the tensor square root problem, Appl. Math. Lett., 98 (2019), 57–62. https://doi.org/10.1016/j.aml.2019.05.031 doi: 10.1016/j.aml.2019.05.031
    [28] D. Lu, C. H. Guo, Explicit p-dependent convergence regions of Newton's method for the matrix pth root, Appl. Math. Lett., 122 (2021), 107566. https://doi.org/10.1016/j.aml.2021.107566 doi: 10.1016/j.aml.2021.107566
    [29] A. Berman, R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1994.
    [30] C. H. Guo, On algebraic Riccati equations associated with M-matrices, Linear Algebra Appl., 439 (2013), 2800–2814. https://doi.org/10.1016/j.laa.2013.08.018 doi: 10.1016/j.laa.2013.08.018
    [31] T. M. Huang, R. C. Li, W. W. Lin, Structure-Preserving Doubling Algorithms for Nonlinear Matrix Equations, SIAM, Philadelphia, 2018.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)