
Some new inequalities for nonnegative matrices involving Schur product

  • Received: 16 September 2023 Revised: 23 October 2023 Accepted: 24 October 2023 Published: 01 November 2023
  • MSC : 15A47

  • In this study, we focused on the spectral radius of the Schur product. Two new types of upper bounds for $\rho(M\circ N)$, the spectral radius of the Schur product of two matrices $M, N$ with nonnegative elements, were established using the Hölder inequality and an eigenvalue inclusion theorem. In addition, the new upper bounds were compared with the classical conclusions. Numerical examples demonstrated that the new upper-bound formulas effectively improve the result of Johnson and Horn in some cases and are sharper than other existing results.

    Citation: Qin Zhong. Some new inequalities for nonnegative matrices involving Schur product[J]. AIMS Mathematics, 2023, 8(12): 29667-29680. doi: 10.3934/math.20231518




    Consider the large sparse linear matrix equation

    $AXB = C$, (1.1)

    where $A \in \mathbb{C}^{m\times m}$ and $B \in \mathbb{C}^{n\times n}$ are non-Hermitian positive definite matrices and $C \in \mathbb{C}^{m\times n}$ is a given complex matrix. Such matrix equations arise in many areas of scientific computing and engineering applications, such as signal and image processing [9,20], control theory [11] and photogrammetry [19]. Therefore, solving such matrix equations by efficient methods is an important topic.

    We often rewrite the matrix Eq (1.1) as the linear system

    $(B^T \otimes A)x = c$, (1.2)

    where the vectors $x$ and $c$ contain the concatenated columns of the matrices $X$ and $C$, respectively, $\otimes$ is the Kronecker product symbol and $B^T$ is the transpose of the matrix $B$. Although this equivalent linear system is useful in theoretical analysis, solving (1.2) directly is costly in practice and the system is often ill-conditioned.
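
    The identity behind (1.2) is $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$, where $\mathrm{vec}(\cdot)$ stacks the columns of a matrix. The following minimal NumPy sketch (ours, with illustrative names and sizes; not part of the original paper) checks this identity numerically; note that $\mathrm{vec}(\cdot)$ corresponds to column-major flattening, i.e., order="F":

        import numpy as np

        rng = np.random.default_rng(0)
        m, n = 4, 3
        A = rng.standard_normal((m, m))
        B = rng.standard_normal((n, n))
        X = rng.standard_normal((m, n))

        vec = lambda M: M.flatten(order="F")  # column-stacking vec(.)

        # vec(A X B) = (B^T kron A) vec(X)
        lhs = vec(A @ X @ B)
        rhs = np.kron(B.T, A) @ vec(X)
        print(np.allclose(lhs, rhs))  # True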

    So far, many numerical methods have been developed to solve the matrix Eq (1.1). When the coefficient matrices are not large, direct algorithms can be used, such as the QR-factorization-based algorithms [13,28]. For large sparse matrix Eq (1.1), iterative methods are usually employed, for instance, least-squares-based iteration methods [26] and gradient-based iteration methods [25]. Moreover, the nested splitting conjugate gradient (NSCG) iterative method, first proposed by Axelsson, Bai and Qiu in [1] for linear systems, was considered for the matrix Eq (1.1) in [14].

    Bai, Golub and Ng originally established the efficient Hermitian and skew-Hermitian splitting (HSS) iteration method [5] for linear systems with non-Hermitian positive definite coefficient matrices. Subsequently, various HSS-based methods were developed to improve its robustness for linear systems; see [3,4,8,17,24,27] and other literature. For solving the continuous Sylvester equation, Bai established the HSS iteration method [2]; thereafter, several HSS-based methods were discussed for this Sylvester equation [12,15,16,18,22,30,31,32,33]. For the matrix Eq (1.1), Wang, Li and Dai used an inner-outer iteration strategy and proposed an HSS iteration method [23]. According to the discussion in [23], if the quasi-optimal parameter is employed, the upper bound of the convergence rate equals that of the CG method. Later, Zhang, Yang and Wu considered a more efficient parameterized preconditioned HSS (PPHSS) iteration method [29] to further improve the efficiency of solving the matrix Eq (1.1), and Zhou, Wang and Zhou presented a modified HSS (MHSS) iteration method [34] for a class of complex matrix Eq (1.1).

    Moreover, the shift-splitting (SS) iteration method [7] was first presented by Bai, Yin and Su to solve ill-conditioned linear systems. This splitting method was subsequently applied to saddle point problems owing to its promising performance; see [10,21] and other literature. In this paper, the SS technique is applied to the matrix Eq (1.1). Related convergence theorems of the SS method are discussed in detail. Numerical examples demonstrate that the SS method is superior to the HSS and NSCG methods, especially when the coefficient matrices are ill-conditioned.

    The content of this paper is arranged as follows. In Section 2 we establish the SS method for solving the matrix Eq (1.1), and then some related convergence properties are studied in Section 3. In Section 4, the effectiveness of our method is illustrated by two numerical examples. Finally, our brief conclusions are given in Section 5.

    Based on the shift-splitting proposed by Bai, Yin and Su in [7], we have the following shift-splittings of $A$ and $B$:

    $A = \frac{1}{2}(\alpha I_m + A) - \frac{1}{2}(\alpha I_m - A)$, (2.1)

    and

    $B = \frac{1}{2}(\beta I_n + B) - \frac{1}{2}(\beta I_n - B)$, (2.2)

    where α and β are given positive constants.

    Therefore, using the splitting of the matrix A in (2.1), the following splitting iteration method to solve (1.1) can be defined:

    $(\alpha I_m + A)X^{(k+1)}B = (\alpha I_m - A)X^{(k)}B + 2C$. (2.3)

    Then, from the splitting of the matrix B in (2.2), we can solve each step of (2.3) iteratively by

    $(\alpha I_m + A)X^{(k+1,j+1)}(\beta I_n + B) = (\alpha I_m + A)X^{(k+1,j)}(\beta I_n - B) + 2(\alpha I_m - A)X^{(k)}B + 4C$. (2.4)

    Therefore, we can establish the following shift-splitting (SS) iteration method to solve (1.1).

    Algorithm 1 (The SS iteration method). Given an initial guess $X^{(0)} \in \mathbb{C}^{m\times n}$, for $k = 0, 1, 2, \ldots$, until $X^{(k)}$ converges:

    Approximate the solution of

    $(\alpha I_m + A)Z^{(k)}B = 2R^{(k)}$ (2.5)

    with $R^{(k)} = C - AX^{(k)}B$, i.e., let $Z^{(k)} := Z^{(k,j+1)}$ and compute $Z^{(k,j+1)}$ iteratively by

    $(\alpha I_m + A)Z^{(k,j+1)}(\beta I_n + B) = (\alpha I_m + A)Z^{(k,j)}(\beta I_n - B) + 4R^{(k)}$, (2.6)

    once the residual $P^{(k)} = 2R^{(k)} - (\alpha I_m + A)Z^{(k,j+1)}B$ of the outer iteration (2.5) satisfies

    $\|P^{(k)}\|_F \le \varepsilon_k \|R^{(k)}\|_F$,

    where $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. Then compute

    $X^{(k+1)} = X^{(k)} + Z^{(k)}$.

    Here, $\{\varepsilon_k\}$ is a given tolerance sequence. In addition, efficient methods can be chosen for computing $Z^{(k,j+1)}$ in (2.6).

    The pseudo-code of this algorithm is as follows:

    The pseudo-code of the SS algorithm for the matrix equation $AXB = C$
    1. Given an initial guess $X^{(0)} \in \mathbb{C}^{m\times n}$
    2. $R^{(0)} = C - AX^{(0)}B$
    3. For $k = 0, 1, 2, \ldots, k_{\max}$ Do:
    4. Given an initial guess $Z^{(k,0)} \in \mathbb{C}^{m\times n}$
    5. $P^{(k,0)} = 2R^{(k)} - (\alpha I_m + A)Z^{(k,0)}B$
    6. For $j = 0, 1, 2, \ldots, j_{\max}$ Do:
    7. Compute $Z^{(k,j+1)}$ iteratively by
      $(\alpha I_m + A)Z^{(k,j+1)}(\beta I_n + B) = (\alpha I_m + A)Z^{(k,j)}(\beta I_n - B) + 4R^{(k)}$
    8. $P^{(k,j+1)} = 2R^{(k)} - (\alpha I_m + A)Z^{(k,j+1)}B$
    9. If $\|P^{(k,j+1)}\|_F \le \varepsilon_k \|R^{(k)}\|_F$ Go To 11
    10. End Do
    11. $X^{(k+1)} = X^{(k)} + Z^{(k)}$, where $Z^{(k)} := Z^{(k,j+1)}$
    12. $R^{(k+1)} = C - AX^{(k+1)}B$
    13. If $\|R^{(k+1)}\|_F \le \mathrm{tol}\cdot\|R^{(0)}\|_F$ Stop
    14. End Do

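    For concreteness, the following NumPy sketch mirrors the pseudo-code above for dense real matrices. It is our illustrative rendering, not the authors' code: the function name ss_iteration and its defaults are ours, and the inner solves call np.linalg.solve directly (a factor-once refinement is sketched later, before Example 1).

        import numpy as np

        def ss_iteration(A, B, C, alpha, beta, eps=1e-2, tol=1e-6,
                         kmax=1000, jmax=500):
            """Shift-splitting (SS) iteration for A X B = C (illustrative sketch)."""
            m, n = A.shape[0], B.shape[0]
            Ap = alpha * np.eye(m) + A   # alpha*I_m + A
            Bp = beta * np.eye(n) + B    # beta*I_n + B
            Bm = beta * np.eye(n) - B    # beta*I_n - B
            X = np.zeros((m, n))
            R = C - A @ X @ B
            r0 = np.linalg.norm(R, "fro")
            for k in range(kmax):
                Z = np.zeros((m, n))     # inner initial guess Z^(k,0)
                for j in range(jmax):
                    # (alpha*I+A) Z_{j+1} (beta*I+B) = (alpha*I+A) Z_j (beta*I-B) + 4R
                    rhs = Ap @ Z @ Bm + 4 * R
                    Z = np.linalg.solve(Ap, rhs)      # left solve with alpha*I+A
                    Z = np.linalg.solve(Bp.T, Z.T).T  # right solve with beta*I+B
                    P = 2 * R - Ap @ Z @ B            # inner residual
                    if np.linalg.norm(P, "fro") <= eps * np.linalg.norm(R, "fro"):
                        break
                X = X + Z
                R = C - A @ X @ B
                if np.linalg.norm(R, "fro") <= tol * r0:
                    break
            return X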

    Remark 1. Because the SS iteration scheme is a single-step method, a considerable advantage is that it requires less computational work than two-step iteration methods such as the HSS iteration [23] and the modified HSS (MHSS) iteration [34].

    In this section, we denote by

    $H = \frac{1}{2}(A + A^*)$ and $S = \frac{1}{2}(A - A^*)$

    the Hermitian and skew-Hermitian parts of the matrix $A$, respectively. Moreover, $\lambda_{\min}$ and $\lambda_{\max}$ represent the smallest and largest eigenvalues of $H$, respectively, and $\kappa = \lambda_{\max}/\lambda_{\min}$.

    Firstly, the unconditional convergence property of the SS iteration (2.3) is given as follows.

    Theorem 1. Let $A \in \mathbb{C}^{m\times m}$ be positive definite, and let $\alpha$ be a positive constant. Denote by

    $M(\alpha) = I_n \otimes \left((\alpha I_m + A)^{-1}(\alpha I_m - A)\right)$. (3.1)

    Then the convergence factor of the SS iteration method (2.3) is given by the spectral radius $\rho(M(\alpha))$ of the matrix $M(\alpha)$, which is bounded by

    $\varphi(\alpha) := \|(\alpha I_m + A)^{-1}(\alpha I_m - A)\|_2$. (3.2)

    Consequently, we have

    $\rho(M(\alpha)) \le \varphi(\alpha) < 1$, $\forall \alpha > 0$, (3.3)

    i.e., the SS iteration (2.3) converges unconditionally to the exact solution $X_* \in \mathbb{C}^{m\times n}$ of the matrix Eq (1.1).

    Proof. The SS iteration (2.3) can be reformulated as

    $X^{(k+1)} = (\alpha I_m + A)^{-1}(\alpha I_m - A)X^{(k)} + 2(\alpha I_m + A)^{-1}CB^{-1}$. (3.4)

    Using the Kronecker product, we can rewrite (3.4) as

    $x^{(k+1)} = M(\alpha)x^{(k)} + N(\alpha)c$,

    where $M(\alpha)$ is the iteration matrix defined in (3.1) and $N(\alpha) = 2\left(B^{-T} \otimes (\alpha I_m + A)^{-1}\right)$.

    Since $\rho(M(\alpha)) = \rho\left((\alpha I_m + A)^{-1}(\alpha I_m - A)\right)$, we easily see that $\rho(M(\alpha)) \le \varphi(\alpha)$ holds for all $\alpha > 0$. From Lemma 2.1 in [8], we obtain $\varphi(\alpha) < 1$ for all $\alpha > 0$. This completes the proof.
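
    As a quick numerical sanity check of Theorem 1 (ours; it assumes the Hermitian part of $A$ is positive definite, and all sizes are illustrative), note that $\rho(M(\alpha)) = \rho\left((\alpha I_m + A)^{-1}(\alpha I_m - A)\right)$, because $M(\alpha) = I_n \otimes T$ with $T = (\alpha I_m + A)^{-1}(\alpha I_m - A)$:

        import numpy as np

        rng = np.random.default_rng(1)
        m, alpha = 50, 1.5
        A = rng.standard_normal((m, m)) + m * np.eye(m)  # Hermitian part is PD
        T = np.linalg.solve(alpha * np.eye(m) + A,
                            alpha * np.eye(m) - A)       # Cayley-type factor
        rho = np.abs(np.linalg.eigvals(T)).max()  # = rho(M(alpha))
        phi = np.linalg.norm(T, 2)                # the bound phi(alpha) in (3.2)
        print(rho <= phi and phi < 1)             # True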

    Noting that the matrix $(\alpha I_m + A)^{-1}(\alpha I_m - A)$ is an extrapolated Cayley transform of $A$, from [6] we can obtain another upper bound for the convergence factor $\rho(M(\alpha))$, as well as the minimum point and the corresponding minimal value of this upper bound.

    Theorem 2. Let the conditions of Theorem 1 be satisfied. Denote by

    $\sigma(\alpha) = \max\limits_{\lambda \in [\lambda_{\min}, \lambda_{\max}]} \frac{|\alpha - \lambda|}{\alpha + \lambda}$, $\zeta(\alpha) = \frac{\|S\|_2}{\alpha + \lambda_{\min}}$.

    Then for the convergence factor $\rho(M(\alpha))$ it holds that

    $\rho(M(\alpha)) \le \left(\frac{\sigma(\alpha)^2 + \zeta(\alpha)^2}{1 + \zeta(\alpha)^2}\right)^{1/2} =: \phi(\alpha) < 1$. (3.5)

    Moreover, at

    $\alpha_* = \begin{cases} \sqrt{\lambda_{\min}\lambda_{\max}}, & \text{for } \|S\|_2 \le \lambda_{\min}\sqrt{\kappa - 1}, \\ \sqrt{\lambda_{\min}^2 + \|S\|_2^2}, & \text{for } \|S\|_2 \ge \lambda_{\min}\sqrt{\kappa - 1}, \end{cases}$ (3.6)

    the function $\phi(\alpha)$ attains its minimum

    $\phi(\alpha_*) = \begin{cases} \left(\frac{\eta^2 + \tau^2}{1 + \tau^2}\right)^{1/2}, & \text{for } \|S\|_2 \le \lambda_{\min}\sqrt{\kappa - 1}, \\ \left(\frac{1 - \upsilon}{1 + \upsilon}\right)^{1/2}, & \text{for } \|S\|_2 \ge \lambda_{\min}\sqrt{\kappa - 1}, \end{cases}$ (3.7)

    where

    $\eta = \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}$, $\tau = \frac{\|S\|_2}{(\sqrt{\kappa} + 1)\lambda_{\min}}$ and $\upsilon = \frac{\lambda_{\min}}{\sqrt{\lambda_{\min}^2 + \|S\|_2^2}}$.

    Proof. From Theorem 3.1 in [6], we can directly obtain (3.5)–(3.7).

    Remark 2. $\alpha_*$ is called the theoretical quasi-optimal parameter of the SS iteration method. Similarly, the theoretical quasi-optimal parameter $\beta_*$ of the inner iterations (2.6) can also be obtained; it has the same form as $\alpha_*$.
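
    A small helper (ours, following formula (3.6); the function name is illustrative) that computes the theoretical quasi-optimal parameter $\alpha_*$ directly from $A$; applying it to $B$ gives $\beta_*$ for the inner iterations:

        import numpy as np

        def quasi_optimal_alpha(A):
            """Theoretical quasi-optimal SS parameter, following (3.6) (sketch)."""
            H = (A + A.conj().T) / 2   # Hermitian part
            S = (A - A.conj().T) / 2   # skew-Hermitian part
            lam = np.linalg.eigvalsh(H)
            lmin, lmax = lam[0], lam[-1]
            s2 = np.linalg.norm(S, 2)  # ||S||_2
            kappa = lmax / lmin
            if s2 <= lmin * np.sqrt(kappa - 1):
                return np.sqrt(lmin * lmax)
            return np.sqrt(lmin**2 + s2**2)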

    In the following, we present another convergence theorem, in a different form.

    Theorem 3. Let the conditions of Theorem 1 be satisfied. If $\{X^{(k)}\}_{k=0}^{\infty} \subset \mathbb{C}^{m\times n}$ is the iteration sequence generated by Algorithm 1 and $X_* \in \mathbb{C}^{m\times n}$ is the exact solution of the matrix Eq (1.1), then it holds that

    $\|X^{(k+1)} - X_*\|_F \le (\varphi(\alpha) + \mu\theta\varepsilon_k)\|X^{(k)} - X_*\|_F$, $k = 0, 1, 2, \ldots$,

    where the constants $\mu$ and $\theta$ are given by

    $\mu = \|B^{-T} \otimes (\alpha I_m + A)^{-1}\|_2$, $\theta = \|B^T \otimes A\|_2$.

    In particular, when

    $\varphi(\alpha) + \mu\theta\varepsilon_{\max} < 1$, (3.8)

    the iteration sequence $\{X^{(k)}\}_{k=0}^{\infty}$ converges to $X_*$, where $\varepsilon_{\max} = \max_k\{\varepsilon_k\}$.

    Proof. We can rewrite the SS iteration in Algorithm 1 in the following form:

    $(B^T \otimes (\alpha I_m + A))z^{(k)} = 2r^{(k)}$, $x^{(k+1)} = x^{(k)} + z^{(k)}$, (3.9)

    with $r^{(k)} = c - (B^T \otimes A)x^{(k)}$, where $z^{(k)}$ is such that the residual

    $p^{(k)} = 2r^{(k)} - (B^T \otimes (\alpha I_m + A))z^{(k)}$

    satisfies $\|p^{(k)}\|_2 \le \varepsilon_k \|r^{(k)}\|_2$.

    In fact, the above iteration scheme (3.9) is just the inexact variant of the SS iteration method for solving the linear system (1.2). From (3.9), we obtain

    $x^{(k+1)} = x^{(k)} + \left(B^T \otimes (\alpha I_m + A)\right)^{-1}\left(2r^{(k)} - p^{(k)}\right)$
    $= x^{(k)} + \left(B^T \otimes (\alpha I_m + A)\right)^{-1}\left(2c - 2(B^T \otimes A)x^{(k)} - p^{(k)}\right)$
    $= \left(I_n \otimes \left((\alpha I_m + A)^{-1}(\alpha I_m - A)\right)\right)x^{(k)} + 2\left(B^{-T} \otimes (\alpha I_m + A)^{-1}\right)c - \left(B^{-T} \otimes (\alpha I_m + A)^{-1}\right)p^{(k)}$. (3.10)

    Because $x_* \in \mathbb{C}^{mn}$ is the exact solution of the linear system (1.2), it must satisfy

    $x_* = \left(I_n \otimes \left((\alpha I_m + A)^{-1}(\alpha I_m - A)\right)\right)x_* + 2\left(B^{-T} \otimes (\alpha I_m + A)^{-1}\right)c$. (3.11)

    By subtracting (3.11) from (3.10), we have

    $x^{(k+1)} - x_* = \left(I_n \otimes \left((\alpha I_m + A)^{-1}(\alpha I_m - A)\right)\right)(x^{(k)} - x_*) - \left(B^{-T} \otimes (\alpha I_m + A)^{-1}\right)p^{(k)}$. (3.12)

    Taking norms on both sides of (3.12), we get

    $\|x^{(k+1)} - x_*\|_2 \le \|I_n \otimes \left((\alpha I_m + A)^{-1}(\alpha I_m - A)\right)\|_2 \|x^{(k)} - x_*\|_2 + \|B^{-T} \otimes (\alpha I_m + A)^{-1}\|_2 \|p^{(k)}\|_2$
    $\le \varphi(\alpha)\|x^{(k)} - x_*\|_2 + \mu\varepsilon_k\|r^{(k)}\|_2$. (3.13)

    Noticing that

    $\|r^{(k)}\|_2 = \|c - (B^T \otimes A)x^{(k)}\|_2 = \|(B^T \otimes A)(x_* - x^{(k)})\|_2 \le \theta\|x^{(k)} - x_*\|_2$,

    by (3.13) the estimate

    $\|x^{(k+1)} - x_*\|_2 \le (\varphi(\alpha) + \mu\theta\varepsilon_k)\|x^{(k)} - x_*\|_2$, $k = 0, 1, 2, \ldots$, (3.14)

    can be obtained. Note that for a matrix $Y \in \mathbb{C}^{m\times n}$ we have $\|Y\|_F = \|y\|_2$, where the vector $y$ contains the concatenated columns of the matrix $Y$. Hence the estimate (3.14) can be equivalently rewritten as

    $\|X^{(k+1)} - X_*\|_F \le (\varphi(\alpha) + \mu\theta\varepsilon_k)\|X^{(k)} - X_*\|_F$, $k = 0, 1, 2, \ldots$,

    which completes the proof.

    Remark 3. From Theorem 3 we know that, to guarantee the convergence of the SS iteration, the condition $\varepsilon_k \to 0$ is not necessary; it suffices that condition (3.8) is satisfied.
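
    Because the spectral norm factorizes over Kronecker products, $\|X \otimes Y\|_2 = \|X\|_2\|Y\|_2$ and $\|B^{-T}\|_2 = \|B^{-1}\|_2$, the constants $\mu$ and $\theta$ can be evaluated without ever forming a Kronecker product. A sketch (ours; suitable only for moderate dense sizes, since it inverts matrices explicitly) of checking the sufficient condition (3.8):

        import numpy as np

        def ss_converges(A, B, alpha, eps_max):
            """Numerically check the sufficient condition (3.8) (sketch)."""
            m = A.shape[0]
            Ap = alpha * np.eye(m) + A
            T = np.linalg.solve(Ap, alpha * np.eye(m) - A)
            phi = np.linalg.norm(T, 2)  # phi(alpha) from (3.2)
            # mu = ||B^{-T} kron (alpha*I+A)^{-1}||_2, theta = ||B^T kron A||_2
            mu = np.linalg.norm(np.linalg.inv(B), 2) * \
                 np.linalg.norm(np.linalg.inv(Ap), 2)
            theta = np.linalg.norm(B, 2) * np.linalg.norm(A, 2)
            return phi + mu * theta * eps_max < 1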

    In this section, two different matrix equations are solved by the HSS, SS and NSCG iteration methods. The efficiency of each method is examined by comparing the number of outer iteration steps (denoted by IT-out), the average number of inner iteration steps (denoted by IT-in-1 and IT-in-2 for HSS, and IT-in for SS) and the elapsed CPU time (denoted by CPU). The notation "–" indicates that no solution was obtained within 1000 outer iteration steps.

    The initial guess is the zero matrix. All iterations are terminated once $X^{(k)}$ satisfies

    $\frac{\|C - AX^{(k)}B\|_F}{\|C\|_F} \le 10^{-6}$.

    We set $\varepsilon_k = 0.01$, $k = 0, 1, 2, \ldots$, as the tolerance for all the inner iteration schemes.

    Moreover, in practical computation, we use direct algorithms to solve all sub-equations involved in each step: Cholesky factorization for Hermitian coefficient matrices and LU factorization for non-Hermitian ones.
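
    Since $\alpha I_m + A$ and $\beta I_n + B$ stay fixed over all inner sweeps, each can be factorized once and the factors reused. A sketch of one inner sweep of (2.6) in this style (ours, using SciPy's lu_factor/lu_solve; the data below are illustrative placeholders for quantities coming from the SS loop):

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        # illustrative data; in practice A, B, R, Z come from the SS loop
        rng = np.random.default_rng(2)
        m = n = 8
        alpha = beta = 1.0
        A = rng.standard_normal((m, m)) + m * np.eye(m)
        B = rng.standard_normal((n, n)) + n * np.eye(n)
        R = rng.standard_normal((m, n))
        Z = np.zeros((m, n))

        # factor the fixed shifted matrices once, outside the loops
        Ap = alpha * np.eye(m) + A
        Ap_lu = lu_factor(Ap)
        BpT_lu = lu_factor((beta * np.eye(n) + B).T)  # transposed, for right solves

        # one inner sweep of (2.6): Z <- (alpha*I+A)^{-1} rhs (beta*I+B)^{-1}
        rhs = Ap @ Z @ (beta * np.eye(n) - B) + 4 * R
        Z = lu_solve(Ap_lu, rhs)
        Z = lu_solve(BpT_lu, Z.T).T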

    Example 1 ([2]). We consider the matrix Eq (1.1) with $m = n$ and

    $A = M + 5qN + \frac{100}{(n+1)^2}I$ and $B = M + 2qN + \frac{100}{(n+1)^2}I$,

    where $M, N \in \mathbb{R}^{n\times n}$ are the tridiagonal matrices

    $M = \mathrm{tridiag}(-1, 2, -1)$ and $N = \mathrm{tridiag}(0.5, 0, -0.5)$.
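
    A sketch (ours; the function name is illustrative) of assembling the Example 1 matrices as dense arrays, with $\mathrm{tridiag}(a, b, c)$ read as constant sub-, main and super-diagonals:

        import numpy as np

        def example1_matrices(n, q):
            """A = M + 5qN + 100/(n+1)^2 I,  B = M + 2qN + 100/(n+1)^2 I (sketch)."""
            I = np.eye(n)
            M = 2 * I - np.eye(n, k=1) - np.eye(n, k=-1)      # tridiag(-1, 2, -1)
            N = 0.5 * np.eye(n, k=-1) - 0.5 * np.eye(n, k=1)  # tridiag(0.5, 0, -0.5)
            s = 100 / (n + 1) ** 2
            return M + 5 * q * N + s * I, M + 2 * q * N + s * I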

    In Tables 1 and 2, the theoretical quasi-optimal parameters and experimental optimal parameters of HSS and SS are listed, respectively. In Tables 3 and 4, the numerical results of HSS and SS are listed.

    Table 1.  The theoretical quasi-optimal parameters of HSS and SS for Example 1.
    Method HSS SS
    n q αquasi βquasi αquasi βquasi
    n=16 q=0.1 1.28 1.28 1.28 1.28
    q=0.3 1.28 1.28 1.52 1.28
    q=1 1.28 1.28 4.93 2.00
    n=32 q=0.1 0.64 0.64 0.64 0.64
    q=0.3 0.64 0.64 1.50 0.64
    q=1 0.64 0.64 4.98 1.99
    n=64 q=0.1 0.32 0.32 0.50 0.32
    q=0.3 0.32 0.32 1.50 0.60
    q=1 0.32 0.32 4.99 2.00
    n=128 q=0.1 0.16 0.16 0.50 0.20
    q=0.3 0.16 0.16 1.50 0.60
    q=1 0.16 0.16 5.00 2.00

    Table 2.  The experimental optimal iteration parameters of HSS and SS for Example 1.
    Method HSS SS
    n q αexp βexp αexp βexp
    n=16 q=0.1 1.22 1.14 1.14 0.98
    q=0.3 1.40 1.12 1.66 1.16
    q=1 1.96 1.30 0.36 1.74
    n=32 q=0.1 1.84 0.72 0.70 0.66
    q=0.3 1.04 0.72 1.12 0.68
    q=1 1.70 0.90 3.02 0.84
    n=64 q=0.1 3.00 0.40 0.20 0.40
    q=0.3 1.10 0.40 0.90 0.50
    q=1 1.30 0.60 2.30 0.70
    n=128 q=0.1 3.00 0.30 0.30 0.20
    q=0.3 1.10 0.30 0.60 0.30
    q=1 1.10 0.80 2.90 0.60

    Table 3.  Numerical results of HSS and SS with the theoretical quasi-optimal parameters for Example 1.
    Method HSS SS
    n q IT-out IT-in-1 IT-in-2 CPU IT-out IT-in CPU
    n=16 q=0.1 22 7.2 7.0 0.0151 11 4.0 0.0036
    q=0.3 16 7.0 7.0 0.0104 9 4.0 0.0029
    q=1 20 6.0 6.0 0.0117 17 5.0 0.0078
    n=32 q=0.1 36 14.0 14.0 0.1610 19 6.9 0.0295
    q=0.3 33 14.0 14.0 0.1216 15 7.0 0.0269
    q=1 39 11.0 11.0 0.1168 24 10.0 0.0472
    n=64 q=0.1 68 28.3 28.3 1.6490 30 13.0 0.2677
    q=0.3 74 24.7 24.8 1.5486 27 16.0 0.2906
    q=1 87 25.0 25.0 1.8381 35 20.0 0.4377
    n=128 q=0.1 144 54.5 54.7 34.394 57 21.2 4.3253
    q=0.3 188 45.1 45.9 36.729 48 35.0 6.2253
    q=1 465 52.0 52.0 104.331 52 38.0 6.7005

    Table 4.  Numerical results of HSS and SS with the experimental optimal iteration parameters for Example 1.
    Method HSS SS
    n q IT-out IT-in-1 IT-in-2 CPU IT-out IT-in CPU
    n=16 q=0.1 19 7.0 7.0 0.0096 11 4.0 0.0023
    q=0.3 15 8.0 8.0 0.0093 8 4.0 0.0017
    q=1 16 6.0 6.0 0.0070 11 4.0 0.0022
    n=32 q=0.1 29 13.0 13.0 0.0853 18 7.0 0.0227
    q=0.3 23 13.0 13.0 0.0667 12 7.0 0.0162
    q=1 25 11.0 11.0 0.0637 20 7.0 0.0244
    n=64 q=0.1 118 24.0 24.0 2.0849 30 10.5 0.1952
    q=0.3 41 23.0 23.0 0.6952 16 11.1 0.1008
    q=1 37 18.0 18.0 0.4733 30 10.0 0.1650
    n=128 q=0.1 229 39.7 39.7 32.032 40 20.5 2.4942
    q=0.3 70 35.0 35.0 8.6951 22 18.0 1.2280
    q=1 53 38.6 38.6 7.9165 45 14.0 1.7385


    From Tables 3 and 4 it can be observed that the SS method outperforms the HSS method for various n and q, especially when q is small (i.e., the coefficient matrices are ill-conditioned).

    Moreover, since NSCG and SS are both single-step methods, their numerical results are compared in Table 5. From Table 5, we see that the SS method has better computing efficiency than the NSCG method.

    Table 5.  Numerical results of NSCG and SS for Example 1.
    Method NSCG SS
    n q IT-out IT-in CPU IT-out IT-in CPU
    n=16 q=0.1 15 25.4 0.0230 11 4.0 0.0023
    q=0.3 291 30.5 0.2445 8 4.0 0.0017
    q=1 90 170.9 0.4020 11 4.0 0.0022
    n=32 q=0.1 – – – 18 7.0 0.0227
    q=0.3 45 488.6 1.9451 12 7.0 0.0162
    q=1 75 493.3 2.9591 20 7.0 0.0244
    n=64 q=0.1 – – – 30 10.5 0.1952
    q=0.3 77 497.7 9.5250 16 11.1 0.1008
    q=1 62 494.5 7.5782 30 10.0 0.1650
    n=128 q=0.1 74 493.3 48.129 40 20.5 2.4942
    q=0.3 69 492.9 44.699 22 18.0 1.2280
    q=1 69 492.8 45.885 45 14.0 1.7385


    Example 2 ([2]). We consider the matrix Eq (1.1) with $m = n$ and

    $A = \mathrm{diag}(1, 2, \ldots, n) + rL^T$, $B = 2^{-t}I_n + \mathrm{diag}(1, 2, \ldots, n) + rL^T + 2^{-t}L$,

    where $L$ is the strictly lower triangular matrix having all ones in its lower triangular part, and $t$ is a specified problem parameter. In our tests, we take $t = 1$.
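
    A companion sketch (ours; it assumes the reconstructed reading $2^{-t}$ of the shift terms, and the function name is illustrative) of assembling the Example 2 matrices:

        import numpy as np

        def example2_matrices(n, r, t=1):
            """A = diag(1..n) + r L^T;  B = 2^{-t} I + diag(1..n) + r L^T + 2^{-t} L."""
            L = np.tril(np.ones((n, n)), k=-1)  # strictly lower triangular ones
            D = np.diag(np.arange(1.0, n + 1))
            A = D + r * L.T
            B = 2.0 ** (-t) * np.eye(n) + D + r * L.T + 2.0 ** (-t) * L
            return A, B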

    In Tables 6 and 7, for various n and r, we list the theoretical quasi-optimal parameters and experimental optimal parameters of HSS and SS, respectively. In Tables 8 and 9, the numerical results of HSS and SS are listed. Moreover, the numerical results of NSCG and SS are compared in Table 10.

    Table 6.  The theoretical quasi-optimal parameters of HSS and SS for Example 2.
    Method HSS SS
    n r αquasi βquasi αquasi βquasi
    n=32 r=0.01 5.66 6.75 5.66 6.75
    r=0.1 5.63 6.71 5.63 6.71
    r=1 4.94 6.36 10.20 6.36
    n=64 r=0.01 8.00 9.47 8.00 10.07
    r=0.1 7.96 9.41 7.96 9.41
    r=1 6.90 8.89 20.38 10.22
    n=128 r=0.01 11.31 13.31 11.31 20.01
    r=0.1 11.25 13.23 11.25 16.35
    r=1 9.66 12.46 40.75 20.39
    n=256 r=0.01 16.00 18.75 16.00 39.95
    r=0.1 15.91 18.63 15.91 32.62
    r=1 13.55 17.50 81.49 40.75

    Table 7.  The experimental optimal iteration parameters of HSS and SS for Example 2.
    Method HSS SS
    n r αexp βexp αexp βexp
    n=32 r=0.01 7 10 7 13
    r=0.1 7 9 7 14
    r=1 7 6 30 10
    n=64 r=0.01 10 11 10 25
    r=0.1 11 12 10 26
    r=1 10 1 60 15
    n=128 r=0.01 16 10 15 49
    r=0.1 16 12 15 53
    r=1 16 2 120 23
    n=256 r=0.01 24 16 22 98
    r=0.1 24 16 24 104
    r=1 24 4 239 34

    Table 8.  Numerical results of HSS and SS with the theoretical quasi-optimal parameters for Example 2.
    Method HSS SS
    n r IT-out IT-in-1 IT-in-2 CPU IT-out IT-in CPU
    n=32 r=0.01 37 10.3 10.3 0.1743 18 6.0 0.0373
    r=0.1 37 10.3 10.3 0.1380 18 7.0 0.0388
    r=1 39 10.4 10.5 0.1376 11 9.0 0.0306
    n=64 r=0.01 60 12.9 12.9 0.9061 25 8.0 0.2025
    r=0.1 56 12.9 12.9 0.8133 25 9.0 0.2330
    r=1 69 18.6 18.7 1.4371 11 12.0 0.1388
    n=128 r=0.01 95 19.7 19.7 9.9527 35 8.0 1.2843
    r=0.1 96 20.2 20.3 10.807 35 10.0 1.4921
    r=1 100 27.3 27.4 14.947 11 12.0 0.5410
    n=256 r=0.01 100 28.4 28.4 93.739 49 8.0 10.863
    r=0.1 100 29.2 29.2 92.406 49 10.0 13.902
    r=1 100 39.0 38.8 124.37 11 12.0 3.8920

    Table 9.  Numerical results of HSS and SS with the experimental optimal iteration parameters for Example 2.
    Method HSS SS
    n r IT-out IT-in-1 IT-in-2 CPU IT-out IT-in CPU
    n=32 r=0.01 32 7.7 7.7 0.0787 16 3.1 0.0130
    r=0.1 31 8.2 8.2 0.0783 16 3.2 0.0127
    r=1 30 7.0 7.0 0.0646 2 6.0 0.0028
    n=64 r=0.01 43 12.2 12.2 0.5301 21 3.2 0.0510
    r=0.1 42 11.4 11.4 0.4874 21 3.3 0.0647
    r=1 39 55.9 55.9 2.1008 2 8.0 0.0117
    n=128 r=0.01 57 24.3 24.3 5.8351 28 3.4 0.3985
    r=0.1 54 20.3 20.3 4.8781 27 3.3 0.3821
    r=1 53 55.2 55.2 12.885 2 10.8 0.0902
    n=256 r=0.01 75 30.9 30.9 59.116 36 3.1 2.5999
    r=0.1 70 30.1 30.1 64.391 34 3.2 3.3392
    r=1 66 52.3 52.3 85.011 2 14.0 0.7423

    Table 10.  Numerical results of NSCG and SS for Example 2.
    Method NSCG SS
    n r IT-out IT-in CPU IT-out IT-in CPU
    n=32 r=0.01 17 25.4 0.0467 16 3.1 0.0130
    r=0.1 18 50.1 0.0717 16 3.2 0.0127
    r=1 100 87.3 0.6921 2 6.0 0.0028
    n=64 r=0.01 21 36.2 0.2206 21 3.2 0.0510
    r=0.1 23 86.9 0.4631 21 3.3 0.0647
    r=1 100 99.5 2.4361 2 8.0 0.0117
    n=128 r=0.01 25 51.0 1.7965 28 3.4 0.3985
    r=0.1 29 96.5 3.4998 27 3.3 0.3821
    r=1 100 100 12.997 2 10.8 0.0902
    n=256 r=0.01 31 64.0 17.264 36 3.1 2.5999
    r=0.1 37 98.1 32.679 34 3.2 3.3392
    r=1 100 100 96.801 2 14.0 0.7423


    From Tables 8–10, we draw the same conclusion as in Example 1.

    Therefore, for the large sparse matrix equation $AXB = C$, the SS method is an effective iterative approach.

    By utilizing an inner-outer iteration strategy, we established a shift-splitting (SS) iteration method for the large sparse linear matrix equation $AXB = C$. Two different convergence theories were analysed in depth, and the quasi-optimal parameters of the SS iteration were given. Numerical experiments illustrated that the SS method outperforms the HSS and NSCG methods in both outer and inner iteration numbers and in computing time, especially for ill-conditioned coefficient matrices.

    The authors are very grateful to the anonymous referees for their helpful comments and suggestions on the manuscript. This research is supported by the Natural Science Foundation of Gansu Province (No. 20JR5RA464), the National Natural Science Foundation of China (No. 11501272), and the China Postdoctoral Science Foundation funded project (No. 2016M592858).

    The authors declare there is no conflict of interest.



    [1] R. A. Horn, C. R. Johnson, Topics in matrix analysis, Cambridge University Press, 1991. https://doi.org/10.1017/CBO9780511840371
    [2] L. L. Zhao, Q. B. Liu, Some inequalities on the spectral radius of matrices, J. Inequal. Appl., 2018 (2018), 5. https://doi.org/10.1186/s13660-017-1598-2
    [3] W. L. Zeng, J. Z. Liu, Lower bound estimation of the minimum eigenvalue of Hadamard product of an M-matrix and its inverse, Bull. Iran. Math. Soc., 48 (2022), 1075–1091. https://doi.org/10.1007/s41980-021-00563-1
    [4] S. Karlin, F. Ost, Some monotonicity properties of Schur powers of matrices and related inequalities, Linear Algebra Appl., 68 (1985), 47–65. https://doi.org/10.1016/0024-3795(85)90207-1
    [5] M. Z. Fang, Bounds on eigenvalues of the Hadamard product and the Fan product of matrices, Linear Algebra Appl., 425 (2007), 7–15. https://doi.org/10.1016/j.laa.2007.03.024
    [6] Q. B. Liu, G. L. Chen, On two inequalities for the Hadamard product and the Fan product of matrices, Linear Algebra Appl., 431 (2009), 974–984. https://doi.org/10.1016/j.laa.2009.03.049
    [7] Z. J. Huang, On the spectral radius and the spectral norm of Hadamard products of nonnegative matrices, Linear Algebra Appl., 434 (2011), 457–462. https://doi.org/10.1016/j.laa.2010.08.038
    [8] J. Li, H. Hai, Some new inequalities for the Hadamard product of nonnegative matrices, Linear Algebra Appl., 606 (2020), 159–169. https://doi.org/10.1016/j.laa.2020.07.025
    [9] K. M. R. Audenaert, Spectral radius of Hadamard product versus conventional product for non-negative matrices, Linear Algebra Appl., 432 (2010), 366–368. https://doi.org/10.1016/j.laa.2009.08.017
    [10] Q. P. Guo, J. S. Leng, H. B. Li, C. Cattani, Some bounds on eigenvalues of the Hadamard product and the Fan product of matrices, Mathematics, 7 (2019), 147. https://doi.org/10.3390/math7020147
    [11] A. Berman, R. J. Plemmons, Nonnegative matrices in the mathematical sciences, Society for Industrial and Applied Mathematics, 1994. https://doi.org/10.1137/1.9781611971262
    [12] E. F. Beckenbach, R. Bellman, Inequalities, Springer, 1961.
    [13] A. Brauer, Limits for the characteristic roots of a matrix. II, Duke Math. J., 14 (1947), 21–26. https://doi.org/10.1215/s0012-7094-47-01403-8
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
