Research article

A fast and efficient Newton-type iterative scheme to find the sign of a matrix

  • Received: 11 July 2022 Revised: 19 October 2022 Accepted: 07 November 2022 Published: 07 June 2023
  • MSC : 65F30, 65F60

  • This work proposes a new iteration scheme for computing the sign of an invertible matrix. To this end, a review of the existing solvers of the same type is given, and then a new scheme is derived from a multi-step Newton-type nonlinear equation solver. It is shown that the new method and its reciprocal converge globally, with wider convergence radii than their competitors of the same order from the general Padé family. After the theoretical investigation, numerical experiments on complex matrices of various sizes are furnished to reveal the superiority of the proposed solver in terms of elapsed CPU time.

    Citation: Malik Zaka Ullah, Sultan Muaysh Alaslani, Fouad Othman Mallawi, Fayyaz Ahmad, Stanford Shateyi, Mir Asma. A fast and efficient Newton-type iterative scheme to find the sign of a matrix[J]. AIMS Mathematics, 2023, 8(8): 19264-19274. doi: 10.3934/math.2023982




    The author of [10] first proposed the matrix sign function for use in solving algebraic Riccati equations. The sign of a matrix can be defined by the following Cauchy integral:

    \[ S=\operatorname{sign}(A)=\frac{2}{\pi}A\int_{0}^{\infty}\left(t^{2}I+A^{2}\right)^{-1}dt. \tag{1.1} \]
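    As a quick sanity check of (1.1), one can compare the integral against the algebraic characterization $\operatorname{sign}(A)=A(A^{2})^{-1/2}$ on a small test matrix. The following Mathematica sketch is illustrative only; the test matrix (made diagonally dominant so that, almost surely, no eigenvalue touches the imaginary axis) and the use of NIntegrate are our own choices, not part of the original development:

    (* Sketch: numerical check of the Cauchy integral (1.1). *)
    SeedRandom[1];
    n = 4;
    A = DiagonalMatrix[{4., 4., -4., -4.}] + RandomReal[{-1, 1}, {n, n}];
    Sref = A . MatrixPower[A . A, -1/2];  (* sign(A) = A (A^2)^(-1/2) *)
    Sint = (2/Pi) A . NIntegrate[
        Inverse[t^2 IdentityMatrix[n] + A . A], {t, 0, Infinity}];
    Norm[Sref - Sint]  (* expected to be near machine epsilon *)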

    Assume that $A\in\mathbb{C}^{n\times n}$ is given and that $g$ is a scalar function. Then $g(A)$ denotes a matrix function of the same size as the input matrix. As long as $g$ is defined on the spectrum of the input matrix [5, Chapter 5], $g(A)$ enjoys the following properties:

    ● If Y commutes with A, then Y commutes with g(A),

    ● The eigenvalues of $g(A)$ are $g(\lambda_i)$, where $\lambda_i$ denote the eigenvalues of $A$,

    ● $g(YAY^{-1})=Yg(A)Y^{-1}$,

    ● $g(A^{T})=g(A)^{T}$,

    ● $g(A)$ commutes with $A$.

    Note that both the Jordan decomposition and the Schur decomposition with the block Parlett recursion can, under some conditions, be used to compute matrix functions. The matrix sign function gives us the ability to decompose an appropriate matrix into two components whose spectra lie on opposite sides of the imaginary axis. For more information, see [3].
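    To illustrate this splitting, the two components can be extracted through the spectral projectors $(I\pm S)/2$. The sketch below is only illustrative; the test matrix and the reference formula $\operatorname{sign}(A)=A(A^{2})^{-1/2}$ are our own assumptions:

    (* Sketch: sign-based spectral splitting via the projectors (I +/- S)/2. *)
    SeedRandom[2];
    n = 4;
    A = DiagonalMatrix[{4., 4., -4., -4.}] + RandomReal[{-1, 1}, {n, n}];
    S = A . MatrixPower[A . A, -1/2];    (* reference sign of A *)
    Pplus = (IdentityMatrix[n] + S)/2;   (* spectrum in the right half-plane *)
    Pminus = (IdentityMatrix[n] - S)/2;  (* spectrum in the left half-plane *)
    {Norm[Pplus . Pplus - Pplus], Norm[A . Pplus - Pplus . A]}
    (* both ~ 0: the projector is idempotent and commutes with A *)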

    Practically speaking, we are motivated to derive a new, efficient Newton-type iteration method to find (1.1). Owing to its higher order and wider convergence radii, fewer iterations are required, which lowers the computational cost of the whole algorithm since fewer stopping tests must be evaluated over the course of the iteration. This is especially useful when the entries of the input matrix change slightly inside a loop and (1.1) must be recomputed each time. The goal of this research is to boost convergence speed, resulting in high computational efficiency.

    Here, we concentrate on functions of matrices, especially the sign function. Toward this aim, we derive a novel numerical procedure for finding (1.1) whose convergence rate is high relative to the number of matrix-matrix products, while only one matrix inversion is needed per full step.

    The remaining sections of this article are organized as follows. In Section 2, iteration methods for finding (1.1) are discussed. Section 3 investigates a higher-order iteration method for computing the sign of an invertible matrix; the convergence of the scheme and its asymptotic stability are established. To test correctness, efficiency, and stability, we run numerical tests on various problems in Section 4; based on the numerical results obtained, the suggested technique is found to be efficient. Some concluding remarks are furnished in Section 5.

    A typical procedure for deriving iteration methods to calculate matrix functions is to employ a suitable nonlinear solver and select an initial value so as to guarantee (local) convergence. It should be kept in mind that results originating from scalar nonlinear solvers do not necessarily generalize to the matrix environment. For instance, standard convergence conditions given via the derivatives of g at a fixed point in the scalar case do not translate explicitly into similar conditions on the Fréchet derivatives in the matrix environment.

    The second-order Newton's method (NM) has the following structure to calculate the sign of a square invertible matrix:

    \[ W_{m+1}=\frac{1}{2}\left(W_m+W_m^{-1}\right), \tag{2.1} \]

    where

    \[ W_0=A, \tag{2.2} \]

    is the initial value. The authors of [7] proposed an important family of iterative schemes for finding (1.1), built upon Padé approximants of the function

    \[ g(\zeta)=(1-\zeta)^{-1/2}. \tag{2.3} \]

    Let the $(m_1,m_2)$-Padé approximant to $g(\zeta)$ be denoted by

    \[ \frac{P_{m_1}(\zeta)}{Q_{m_2}(\zeta)}, \tag{2.4} \]

    where $P_{m_1}$ and $Q_{m_2}$ are polynomials of the indicated degrees in the Padé approximation and $m_1+m_2\ge 1$. Then, [7] showed that the iterative scheme

    \[ w_{m+1}=w_m\frac{P_{m_1}(1-w_m^2)}{Q_{m_2}(1-w_m^2)}=:\psi_{2m_1+1,\,2m_2}, \tag{2.5} \]

    converges to $\pm 1$ with order of convergence $m_1+m_2+1$.

    By considering (2.5), the well-known locally convergent inversion-free Newton-Schulz method

    \[ W_{m+1}=\frac{1}{2}W_m\left(3I-W_m^2\right), \tag{2.6} \]

    and Halley's iteration scheme

    \[ W_{m+1}=\left[I+3W_m^2\right]\left[W_m\left(3I+W_m^2\right)\right]^{-1}, \tag{2.7} \]

    belong to this family. It is noted that Newton's method (2.1) is a member of the reciprocal family of (2.5) [4].
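    For concreteness, a minimal Mathematica sketch of Newton's iteration (2.1), stopped by the residual test later used in (4.1), might read as follows; the function name, tolerance, and iteration cap are our own illustrative choices:

    (* Sketch: Newton's iteration (2.1) for the matrix sign function. *)
    newtonSign[A_?MatrixQ, tol_ : 10^-4, maxIt_ : 100] :=
      Module[{W = N[A], n = Length[A], k = 0},
       While[Norm[W . W - IdentityMatrix[n]] > tol && k < maxIt,
        W = (W + Inverse[W])/2;  (* one inversion per cycle, cf. (2.1) *)
        k++];
       W]

    For a matrix with no purely imaginary eigenvalues, newtonSign[A] returns an approximation $W\approx S$ with $\|W^2-I\|\le$ tol.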

    The authors of [2] investigated a fourth-order family of methods for finding (1.1) with the following structure:

    \[ W_{m+1}=W_m\left[-(1+6v)I+2(7+2v)W_m^2+(3+2v)W_m^4\right]\times\left[-(1+2v)I+2(3-2v)W_m^2+(11+6v)W_m^4\right]^{-1}, \tag{2.8} \]

    wherein v is a free real parameter.

    Two other fourth-order methods from (2.5) having global convergence behavior are given by:

    \[ W_{m+1}=\left[I+6W_m^2+W_m^4\right]\left[4W_m\left(I+W_m^2\right)\right]^{-1},\quad\text{Padé } [1,2], \tag{2.9} \]
    \[ W_{m+1}=\left[4W_m\left(I+W_m^2\right)\right]\left[I+6W_m^2+W_m^4\right]^{-1},\quad\text{reciprocal of Padé } [1,2]. \tag{2.10} \]

    For a recent work on this topic, an interested reader may refer to [12].

    Recall that iteration methods for the matrix sign function can be constructed by applying nonlinear solvers to the matrix equation below (see, e.g., [11]):

    \[ G(X):=X^2-I=0, \tag{3.1} \]

    wherein $I$ is the identity matrix. In the scalar case, we consider $g(x)=x^2-1=0$. For more background on nonlinear equation solvers and their recent developments, one may refer to [1].

    Without loss of generality, consider the scalar case. Now, we propose the following multi-step Newton-type iteration scheme for solving (3.1):

    \[
    \begin{cases}
    y_m=w_m-\dfrac{g(w_m)}{g'(w_m)}, & m=0,1,\ldots,\\[1.2ex]
    h_m=w_m-\dfrac{g(w_m)-\frac{5}{4}g(y_m)}{g(w_m)-\frac{9}{4}g(y_m)}\cdot\dfrac{g(w_m)}{g'(w_m)},\\[1.2ex]
    w_{m+1}=h_m-\dfrac{g(h_m)(h_m-w_m)}{g(h_m)-g(w_m)}.
    \end{cases} \tag{3.2}
    \]

    Theorem 3.1. Let $\xi\in D$ be a simple root of the sufficiently smooth nonlinear function $g:D\subseteq\mathbb{C}\to\mathbb{C}$, and assume the initial value $w_0$ is close enough to $\xi$. Then the sequence generated by (3.2) converges with at least fourth order and satisfies the following error equation:

    \[ e_{m+1}=-\frac{k_2^{3}}{4}e_m^{4}+\left(\frac{31}{16}k_2^{4}-\frac{9}{4}k_2^{2}k_3\right)e_m^{5}+O\!\left(e_m^{6}\right), \tag{3.3} \]

    where $k_j=\dfrac{g^{(j)}(\xi)}{j!\,g'(\xi)}$ and $e_m=w_m-\xi$.

    Proof. The proof follows from standard Taylor expansions; see, e.g., [13,14]. Hence, it is omitted.
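    To see the order numerically, one may run (3.2) on the scalar equation $g(w)=w^{2}-1$. The following sketch is illustrative; the starting point and working precision are our own choices:

    (* Sketch: scheme (3.2) on g(w) = w^2 - 1; errors shrink with order four. *)
    g[w_] := w^2 - 1; dg[w_] := 2 w;
    step[w_] := Module[{y, h},
       y = w - g[w]/dg[w];
       h = w - (g[w] - 5/4 g[y])/(g[w] - 9/4 g[y]) g[w]/dg[w];
       h - g[h] (h - w)/(g[h] - g[w])];
    Abs[NestList[step, N[3, 64], 4] - 1]
    (* the number of correct digits roughly quadruples per step, cf. (3.3) *)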

    By employing (3.2) to solve the scalar version of (3.1) and simplifying, we attain the following iterative method:

    \[ W_{m+1}=W_m\left(23I+38W_m^{2}+3W_m^{4}\right)\left[5I+42W_m^{2}+17W_m^{4}\right]^{-1}, \tag{3.4} \]

    having the initial value (2.2). Similarly, one may obtain the reciprocal version of (3.4) as follows:

    \[ W_{m+1}=\left(5I+42W_m^{2}+17W_m^{4}\right)\left[W_m\left(23I+38W_m^{2}+3W_m^{4}\right)\right]^{-1}. \tag{3.5} \]
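    The passage from (3.2) to (3.4) can also be verified symbolically; a short sketch (the displayed output is up to rearrangement):

    (* Sketch: (3.2) applied to g(w) = w^2 - 1 collapses to the map in (3.4). *)
    ClearAll[w]; g[w_] := w^2 - 1;
    y = w - g[w]/(2 w);
    h = w - (g[w] - 5/4 g[y])/(g[w] - 9/4 g[y]) g[w]/(2 w);
    Simplify[h - g[h] (h - w)/(g[h] - g[w])]
    (* -> w (23 + 38 w^2 + 3 w^4)/(5 + 42 w^2 + 17 w^4);
       its reciprocal is the scalar form of (3.5) *)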

    Now, the convergence properties of (3.5) are investigated. Recall that for any H such that A+H is positive definite, we have

    \[ \operatorname{sign}\!\left(\begin{bmatrix} 0 & A+H \\ I & 0 \end{bmatrix}\right), \tag{3.6} \]

    as a fixed point of (3.5).

    Theorem 3.2. In computing the sign of a matrix $A$ with no eigenvalues on the imaginary axis, assume that $W_0$ is sufficiently close to $S$ or is chosen by (2.2), and that it commutes with $A$. Then the scheme (3.5) (or, analogously, (3.4)) converges to the sign matrix $S$, and the convergence rate is four.

    Proof. Let us decompose $A$ via its Jordan form $J$ and an invertible matrix $T$ of the same size as follows:

    \[ A=TJT^{-1}. \tag{3.7} \]

    Substituting this into the iterative method yields the same rational iteration acting on the eigenvalues, mapping step $m$ to step $m+1$ as follows:

    \[ \lambda_i^{m+1}=\left(5+42(\lambda_i^{m})^{2}+17(\lambda_i^{m})^{4}\right)\left[\lambda_i^{m}\left(23+38(\lambda_i^{m})^{2}+3(\lambda_i^{m})^{4}\right)\right]^{-1},\quad 1\le i\le n, \tag{3.8} \]

    where

    \[ s_i=\operatorname{sign}(\lambda_i^{m})=\pm 1. \tag{3.9} \]

    In general, after some algebraic simplification, the expression (3.8) shows that the eigenvalues tend to $s_i=\pm 1$; that is:

    \[ \lim_{m\to\infty}\left|\frac{\lambda_i^{m+1}-s_i}{\lambda_i^{m+1}+s_i}\right|=0. \tag{3.10} \]

    This proves convergence: the eigenvalues tend to $\pm 1$ after each iteration and cluster there under (3.5). Having established convergence, we next determine the convergence rate. To this end, consider:

    \[ \Delta_m=W_m\left(23I+38W_m^{2}+3W_m^{4}\right). \tag{3.11} \]

    Using (3.11), and noting that $W_m$ commutes with $S$ and that $S^{2}=I$, we can write:

    \[
    \begin{aligned}
    W_{m+1}-S&=\left(5I+42W_m^{2}+17W_m^{4}\right)\Delta_m^{-1}-S\\
    &=\left[5I+42W_m^{2}+17W_m^{4}-S\Delta_m\right]\Delta_m^{-1}\\
    &=\left[5I+42W_m^{2}+17W_m^{4}-23SW_m-38SW_m^{3}-3SW_m^{5}\right]\Delta_m^{-1}\\
    &=\left[5(W_m-S)^{4}-3SW_m\left(W_m^{4}-4W_m^{3}S+6W_m^{2}S^{2}-4W_mS^{3}+S^{4}\right)\right]\Delta_m^{-1}\\
    &=\left[5(W_m-S)^{4}-3SW_m(W_m-S)^{4}\right]\Delta_m^{-1}\\
    &=(W_m-S)^{4}\left[5I-3SW_m\right]\Delta_m^{-1}.
    \end{aligned} \tag{3.12}
    \]

    Using (3.12), it is possible to get that:

    \[ \left\|W_{m+1}-S\right\|\le\left\|\left(5I-3SW_m\right)\Delta_m^{-1}\right\|\,\left\|W_m-S\right\|^{4}, \tag{3.13} \]

    which shows the convergence rate of four. The error analysis for (3.4) can be carried out similarly. This completes the proof.

    We remark that $W_0=A$ is the natural choice for the initial approximation: since $A$ has no eigenvalues on the imaginary axis, this choice leads to convergence. Alternatively, an initial approximation $\mu A$, with $\mu$ an appropriate nonzero parameter, can be chosen, since its eigenvalues likewise avoid the imaginary axis.

    The presented method requires matrix products, as do its competitors. On the other hand, most of the convergence theory for the presented method proceeds through the eigenvalues [8], tracked from one iterate to the next. Nonetheless, we have shown that fourth order is attained provided the initial approximation is chosen properly.

    The cost of an algorithm matters when employing it in practical problems. The convergence rate is not the only factor; the method is useful only if it can compete with the most efficient existing solvers of the same type. Comparing (3.5) to (2.9) and (2.10), all of them require four matrix products and only one matrix inversion per cycle. As checked below, the proposed methods have wider convergence radii.
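    A matrix-level sketch of (3.4), arranged so that each cycle costs exactly four matrix products plus one inversion, could look as follows. The routine name, tolerance, and iteration cap are our own illustrative choices; (3.5) is obtained by swapping the two factors:

    (* Sketch: PM1, i.e., iteration (3.4); W2 is reused by the stopping test. *)
    pm1Sign[A_?MatrixQ, tol_ : 10^-4, maxIt_ : 50] :=
      Module[{n = Length[A], Id, W = N[A], W2, W4, k = 0},
       Id = IdentityMatrix[n];
       W2 = W . W;                                 (* one-off startup product *)
       While[Norm[W2 - Id] > tol && k < maxIt,
        W4 = W2 . W2;                              (* product 1 *)
        W = (W . (23 Id + 38 W2 + 3 W4)) .         (* product 2 *)
            Inverse[5 Id + 42 W2 + 17 W4];         (* product 3 + one inversion *)
        W2 = W . W;                                (* product 4, reused for the test *)
        k++];
       W]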

    To compare the global convergence of the presented scheme with that of the existing solvers, we draw the attraction basins of the iterative methods when solving the nonlinear equation $g(x)=x^{2}-1$ on the complex domain $[-2,2]\times[-2,2]$. We divide the domain into a refined mesh and test to which root each mesh point converges. The results of the comparisons are shown in Figures 1 and 2, employing the stopping condition

    \[ |g(x_m)|\le 10^{-3}. \tag{3.14} \]
    Figure 1.  Attractors and convergence radii for (2.1) on the left and (2.9) on the right.
    Figure 2.  Attractors and convergence radii for (3.4) on the left and (3.5) on the right.

    The results are shaded by the number of iterations required to reach convergence. They also show that (3.4) and (3.5) have wider convergence radii than their competitors of the same order from (2.5).

    Here, although the iterative methods (2.1) and (2.9) are also globally convergent, the basins for (3.4) and (3.5) contain lighter areas, indicating wider convergence radii.
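    Basin pictures like Figures 1 and 2 can be reproduced in a few lines; the sketch below shades each starting point by its iteration count for the scalar map underlying (3.4), with the mesh step, iteration cap, and color scheme as our own choices:

    (* Sketch: attraction basins of the scalar map of (3.4) over [-2,2]^2. *)
    psi[w_] := w (23 + 38 w^2 + 3 w^4)/(5 + 42 w^2 + 17 w^4);
    iters[z0_] := Module[{z = N[z0], k = 0},
       While[Abs[z^2 - 1] > 10^-3 && k < 40, z = psi[z]; k++];  (* cf. (3.14) *)
       k];
    ArrayPlot[Table[iters[x + I y], {y, -2, 2, 0.02}, {x, -2, 2, 0.02}],
      DataReversed -> True, ColorFunction -> "SunsetColors"]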

    The larger attraction basins of the proposed solvers are mainly due to the choice of the parameters in the second step of the nonlinear equation solver (3.2). The parameters are selected near limiting values at which the convergence order of the solver would increase further; the exact limiting values are not taken, since they would cost additional matrix products. In this way, keeping the number of matrix products fixed, we derive an iterative method for the sign of a nonsingular matrix with larger attraction basins than its competitors from the Padé scheme.

    The stability of (3.4), and similarly of (3.5), is presented now. Theorem 3.3 follows from [6] on the stability of "pure matrix iterations."

    Theorem 3.3. Using (3.5) with a nonsingular matrix $A$, the sequence $\{W_m\}_{m=0}^{\infty}$ with $W_0=A$ is asymptotically stable.

    Proof. Assume that $\delta_m$ is a numerical perturbation introduced at the $m$th iteration, and define

    \[ \tilde{W}_m=W_m+\delta_m. \tag{3.15} \]

    In the rest of the proof, a first-order error analysis is performed, i.e., we set $(\delta_m)^{i}\approx 0$ for $i\ge 2$, which is justified when $\delta_m$ is sufficiently small. Now, we obtain

    \[ \tilde{W}_{m+1}=\left[5I+42\tilde{W}_m^{2}+17\tilde{W}_m^{4}\right]\times\left[\tilde{W}_m\left(23I+38\tilde{W}_m^{2}+3\tilde{W}_m^{4}\right)\right]^{-1}. \tag{3.16} \]

    For sufficiently large m, viz., at the phase of convergence, it is assumed that

    \[ W_m\approx\operatorname{sign}(A)=S, \tag{3.17} \]

    where the following fact is employed (for any matrix $E$ and any invertible matrix $H$) [15, page 188]:

    \[ (H+E)^{-1}\approx H^{-1}-H^{-1}EH^{-1}. \tag{3.18} \]

    We used

    \[ S^{2}=I, \tag{3.19} \]

    and

    \[ S^{-1}=S, \tag{3.20} \]

    to get that

    \[ \tilde{W}_{m+1}\approx S+\frac{1}{2}\delta_m-\frac{1}{2}S\delta_mS. \tag{3.21} \]

    By further simplification, and using $\delta_{m+1}=\tilde{W}_{m+1}-W_{m+1}$, we can find

    \[ \delta_{m+1}\approx\frac{1}{2}\delta_m-\frac{1}{2}S\delta_mS. \tag{3.22} \]

    This shows that the perturbation at stage $m+1$ remains bounded; indeed, we have:

    \[ \left\|\delta_{m+1}\right\|\le\frac{1}{2}\left\|\delta_0-S\delta_0S\right\|. \tag{3.23} \]

    Therefore, the sequence $\{W_m\}_{m=0}^{\infty}$ produced by (3.5) is asymptotically stable. The proof ends now.

    Here, all implementations were performed in Mathematica 12.0 in machine precision [16, Chapter 1.4]. The iteration methods (2.1), (2.7), (2.9), (3.4), and (3.5) (denoted by NM, HM, Padé, PM1, and PM2, respectively) were compared for efficiency and stability on complex test matrices of various sizes. We compared the methods using the following stopping condition:

    \[ E_{m+1}=\left\|W_{m+1}^{2}-I\right\|_{2}\le 10^{-4}. \tag{4.1} \]

    It is recalled that, for (2.1), the process can be accelerated by calculating an extra parameter per iterate and replacing $W_m$ with $\mu_m W_m$, where

    \[
    \mu_m=\begin{cases}
    |\det(W_m)|^{-1/n}, & \text{(determinantal scaling)},\\[0.5ex]
    \sqrt{\rho(W_m^{-1})/\rho(W_m)}, & \text{(spectral scaling)},\\[0.5ex]
    \sqrt{\|W_m^{-1}\|/\|W_m\|}, & \text{(norm scaling)}.
    \end{cases} \tag{4.2}
    \]
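    For instance, determinantal scaling drops into Newton's iteration (2.1) as follows. This is a sketch only; for large matrices one would obtain $|\det(W_m)|^{1/n}$ from an LU factorization to avoid overflow:

    (* Sketch: Newton's method (2.1) with determinantal scaling from (4.2). *)
    scaledNewtonSign[A_?MatrixQ, tol_ : 10^-4, maxIt_ : 100] :=
      Module[{n = Length[A], Id, W = N[A], mu, k = 0},
       Id = IdentityMatrix[n];
       While[Norm[W . W - Id] > tol && k < maxIt,
        mu = Abs[Det[W]]^(-1/n);  (* determinantal scaling factor *)
        W = mu W;
        W = (W + Inverse[W])/2;
        k++];
       W]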

    Further acceleration and optimization of (2.7) were studied in [9]. Similar investigations could be performed to accelerate the newly derived iterative method. However, no such acceleration is applied to any of the compared methods in this section, so as to keep the comparison fair.

    Experiment 4.1. Various complex matrices of different sizes are employed here to compare the efficiency of the solvers, based on both the number of iterations and the CPU time in seconds. Ten matrices are produced as follows:

    SeedRandom[123];
    number = 10;
    Table[A[l] = RandomComplex[{-5 - 5 I, 5 + 5 I}, {100 l, 100 l}];, {l, number}];
    tolerance = 10^-4;
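    A driver in the spirit of Tables 1 and 2 might then read as follows. This is a sketch; pm1Count is a hypothetical counting variant of the PM1 routine sketched in Section 3, and the 2-norm test is faithful to (4.1), though the cheaper Frobenius norm is a common substitute:

    (* Sketch: iteration counts and timings for PM1 over the ten test matrices. *)
    pm1Count[B_?MatrixQ, tol_] :=
      Module[{n = Length[B], Id, W = B, W2, W4, k = 0},
       Id = IdentityMatrix[n]; W2 = W . W;
       While[Norm[W2 - Id] > tol && k < 100,
        W4 = W2 . W2;
        W = (W . (23 Id + 38 W2 + 3 W4)) . Inverse[5 Id + 42 W2 + 17 W4];
        W2 = W . W; k++];
       k];
    TableForm[
      Table[Module[{t, k}, {t, k} = AbsoluteTiming[pm1Count[A[l], tolerance]];
        {100 l, k, t}], {l, number}],
      TableHeadings -> {None, {"size", "iterations", "CPU (s)"}}]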

    Results are provided in Tables 1 and 2, showing that PM1 has the best performance among its competitors. It is important to note that the convergence of the discussed iteration methods relies on an appropriate choice of the initial matrices. It is observed that PM1 beats all the other competitors, converging to the sign matrix in the fewest iterations.

    Table 1.  Comparisons for the number of iterates needed to find the signs of the matrices in Experiment 4.1.
    Matrices Size NM HM Padé PM1 PM2
    #1 100×100 16 11 8 7 7
    #2 200×200 18 12 9 8 8
    #3 300×300 16 11 8 7 7
    #4 400×400 18 12 9 8 8
    #5 500×500 19 12 10 9 9
    #6 600×600 18 11 9 8 8
    #7 700×700 18 12 9 8 8
    #8 800×800 21 13 11 9 9
    #9 900×900 18 12 9 8 8
    #10 1000×1000 20 13 10 9 9
    Mean 18.2 11.9 9.2 8.1 8.1

    Table 2.  Comparison of CPU times in seconds required to find the signs of the matrices in Experiment 4.1.
    Matrices Size NM HM Padé PM1 PM2
    #1 100×100 0.03 0.02 0.03 0.01 0.01
    #2 200×200 0.12 0.10 0.10 0.09 0.09
    #3 300×300 0.28 0.29 0.25 0.27 0.26
    #4 400×400 0.62 0.62 0.54 0.54 0.55
    #5 500×500 1.15 1.05 1.07 1.03 1.08
    #6 600×600 2.07 1.56 1.48 1.49 1.45
    #7 700×700 3.13 2.41 2.23 2.08 2.16
    #8 800×800 5.15 3.75 3.84 3.40 3.54
    #9 900×900 5.23 4.96 4.36 4.20 4.19
    #10 1000×1000 8.45 7.57 6.76 6.25 6.50
    Mean 2.62 2.23 2.06 1.94 1.98


    We close this part by noting that, after drawing the attraction basins of the new methods (such as PM1) in Figures 3 and 4 for higher-degree polynomials, it can be stated that, like all the other solvers, extending NM, HM, Padé, or PM1 to compute the matrix sector function requires similar conditions, since the methods then converge only locally (like Newton's solver). However, PM1 and PM2 enjoy higher rates of convergence, which reduces the CPU time in practice.

    Figure 3.  Attractors and convergence radii for (2.1) on the left and (3.2) on the right, for the equation $x^3-1=0$.
    Figure 4.  Attractors and convergence radii for (2.1) on the left and (3.2) on the right, for the equation $x^4-1=0$.

    This paper has discussed the existing iteration methods for finding the matrix sign function. The most notable ones arise from (2.5), with arbitrary order of convergence; such methods are called optimal in some sense in the literature. In this article, we have proposed a novel Newton-type solver possessing fourth-order convergence with global convergence behavior for calculating the sign of a matrix. The new method competes well with the corresponding methods from the Padé family.

    A clear application of the sign matrix is in finding the matrix geometric mean of two Hermitian positive definite matrices, which is required mainly in the context of solving a special class of nonlinear matrix equations. For the solvers proposed in this work, (3.4) and (3.5), parallel implementations are simple, employing available optimized routines for matrix-matrix products. The computational tests in Section 4 show the effectiveness of the new iterative methods on a variety of complex matrices of different sizes; the mean CPU times for (3.4) and (3.5) were better than those of the other methods.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research work was funded by Institutional Fund Projects under grant no. (IFPIP: 581-130-1443). The authors gratefully acknowledge technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

    The authors declare that there is no conflict of interest in this paper.



    [1] G. Candelario, A. Cordero, J. R. Torregrosa, M. P. Vassileva, An optimal and low computational cost fractional Newton-type method for solving nonlinear equations, Appl. Math. Lett., 124 (2022), 107650, http://doi.org/10.1016/j.aml.2021.107650 doi: 10.1016/j.aml.2021.107650
    [2] A. Cordero, F. Soleymani, J. R. Torregrosa, M. Zaka Ullah, Numerically stable improved Chebyshev-Halley type schemes for matrix sign function, J. Comput. Appl. Math., 318 (2017), 189–198, http://doi.org/10.1016/j.cam.2016.10.025 doi: 10.1016/j.cam.2016.10.025
    [3] E. D. Denman, A. N. Beavers, The matrix sign function and computations in systems, Appl. Math. Comput., 2 (1976), 63–94, http://doi.org/10.1016/0096-3003(76)90020-5 doi: 10.1016/0096-3003(76)90020-5
    [4] O. Gomilko, F. Greco, K. Ziȩtak, A Padé family of iterations for the matrix sign function and related problems, Numer. Linear. Algebr. Appl., 19 (2012), 585–605, http://doi.org/10.1002/nla.786 doi: 10.1002/nla.786
    [5] N. J. Higham, Functions of Matrices: Theory and Computation, Philadelphia: SIAM, 2008.
    [6] B. Iannazzo, Numerical Solution of Certain Nonlinear Matrix Equations, PhD thesis, Universita degli studi di Pisa, 2007.
    [7] C. S. Kenney, A. J. Laub, Rational iterative methods for the matrix sign function, SIAM J. Matrix Anal. Appl., 12 (1991), 273–291, http://doi.org/10.1137/0612020 doi: 10.1137/0612020
    [8] E. M. Maralani, F. D. Saei, A. A. J. Akbarfam, K. Ghanbari, Computation of eigenvalues of fractional Sturm-Liouville problems, Iran. J. Numer. Anal. Optim., 11 (2021), 117–133. http://doi.org/10.22067/IJNAO.2020.11305.0 doi: 10.22067/IJNAO.2020.11305.0
    [9] Y. Nakatsukasa, Z. Bai, F. Gygi, Optimizing Halley's iteration for computing the matrix polar decomposition, SIAM J. Matrix Anal. Appl., 31 (2010), 2700–2720. http://doi.org/10.1137/090774999 doi: 10.1137/090774999
    [10] J. D. Roberts, Linear model reduction and solution of the algebraic Riccati equation by use of the sign function, Int. J. Control, 32 (1980), 677–687. http://doi.org/10.1080/00207178008922881 doi: 10.1080/00207178008922881
    [11] A. R. Soheili, F. Toutounian, F. Soleymani, A fast convergent numerical method for matrix sign function with application in SDEs, J. Comput. Appl. Math., 282 (2015), 167–178. http://doi.org/10.1016/j.cam.2014.12.041 doi: 10.1016/j.cam.2014.12.041
    [12] F. Soleymani, F. W. Khdhr, R. K. Saeed, J. Golzarpoor, A family of high order iterations for calculating the sign of a matrix, Math. Meth. Appl. Sci., 43 (2020), 8192–8203. http://doi.org/10.1002/mma.6471 doi: 10.1002/mma.6471
    [13] F. Soleymani, Some efficient seventh-order derivative-free families in root-finding, Opuscula Math., 33 (2013), 163–173. http://doi.org/10.7494/OpMath.2013.33.1.163 doi: 10.7494/OpMath.2013.33.1.163
    [14] F. Soleymani, A three-step iterative method for nonlinear systems with sixth order of convergence, Int. J. Comput. Sci. Math., 4 (2013), 363–373. https://doi.org/10.1504/IJCSM.2013.058057 doi: 10.1504/IJCSM.2013.058057
    [15] G. W. Stewart, Introduction to Matrix Computations, New York: Academic Press, 1973.
    [16] M. Trott, The Mathematica Guide-Book for Numerics, New York: Springer, 2006.
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)