Research article

An alternating direction power-method for computing the largest singular value and singular vectors of a matrix

  • Received: 21 July 2022 Revised: 10 October 2022 Accepted: 11 October 2022 Published: 17 October 2022
  • MSC : 65F10, 65F15

  • The singular value decomposition (SVD) is an important tool in matrix theory and numerical linear algebra, and research on efficient numerical algorithms for computing the SVD of a matrix has been extensive over the past decades. In this paper, we propose an alternating direction power-method for computing the largest singular value and the corresponding singular vectors of a matrix. The new method is similar to the well-known power method but needs fewer operations per iteration. Convergence of the new method is proved under suitable conditions. Theoretical analysis and numerical experiments show that the new method is feasible and is more effective than the power method in some cases.

    Citation: Yonghong Duan, Ruiping Wen. An alternating direction power-method for computing the largest singular value and singular vectors of a matrix[J]. AIMS Mathematics, 2023, 8(1): 1127-1138. doi: 10.3934/math.2023056




    Matrix equations of the form $AXB=C$ are an important research topic in linear algebra. They are widely used in engineering and theoretical studies, such as in image and signal processing, photogrammetry, and surface fitting in computer-aided geometric design [1,2]. In addition, equation-solving problems arise in the numerical solution of differential equations, signal processing, cybernetics, optimization models, solid mechanics, structural dynamics, and so on [3,4,5,6,7]. So far, there is an abundance of research on the solutions of the matrix equation $AXB=C$, including their existence [8], uniqueness [9], numerical computation [10], and structure [11,12,13,14,15,16]. Moreover, [17] discusses the Hermitian and skew-Hermitian splitting iterative method for solving the equation, and the authors of [18] provided Jacobi and Gauss-Seidel type iterative methods to solve it.

    However, in practical applications, ordinary matrix multiplication can no longer meet all needs. In 2001, Cheng and Zhao constructed the semi-tensor product, which frees the multiplication of two matrices from the usual dimension constraint [19,20]. Since then, the semi-tensor product has been widely studied and discussed. It is not only applied to problems such as the permutation of high-dimensional data, the algebraization of nonlinear systems, and robust stability control of power systems [22], it also provides a new research tool for the study of problems in Boolean networks [23], game theory [24], graph coloring [25], fuzzy control [26] and other fields [27]. Many of these problems can be reduced to solving linear or matrix equations under the semi-tensor product. Yao et al. studied the solution of the equation under a semi-tensor product (STP equation), i.e., $AX=B$, in [28]. After that, the authors of [29,30,31] studied the solvability of the STP equations $AX^{2}=B$, $A^{l}X=B$ and $AX-XB=C$, respectively.

    To date, the STP equation $AXB=C$ has also appeared in many studies using the matrix semi-tensor product method. For example, in the study of multi-agent distributed cooperative control over finite fields, the authors of [32] transformed the nonlinear dynamic equations over finite fields into the STP form $Z(t+1)=\tilde{L}\ltimes Z(t)$, where $\tilde{L}=\hat{L}QM$ and $Q$ is the control matrix. Thus, if we want to find the right control matrix to realize consensus, we need to solve the STP equation $AXB=C$. Recently, Ji et al. studied the solutions of the STP equation $AXB=C$, gave necessary and sufficient conditions for the equation to have a solution, and formulated specific solution steps in [33]. Nevertheless, the condition under which the STP equation $AXB=C$ has a solution is very harsh. On the one hand, the parameter matrix $C$ needs to have a specific form; in particular, it should be a block Toeplitz matrix, and even if $C$ meets certain conditions, the equation may still have no solution. This brings difficulties in practical applications. On the other hand, there is usually a certain error in measured data, which causes the parameter matrix $C$ of the equation $AXB=C$ to fail to achieve the required specific form; the equation then has no exact solutions.

    Therefore, this paper describes a study of the approximate solutions of the STP equation $AXB=C$. The main contributions of this paper are as follows: (1) The least-squares (LS) solution of the STP equation $AXB=C$ is discussed for the first time. Compared with the existing results on exact solvability, the LS formulation is more general and greatly relaxes the requirements on the form of the matrices. (2) On the basis of the Moore-Penrose generalized inverse and matrix differentiation, the specific forms of the LS solutions are derived both for the matrix-vector equation and for the matrix equation.

    The paper is organized as follows. First, we study the LS solution problem of the matrix-vector STP equation AXB=C, together with a specific form of the LS solutions, where X is an unknown vector. Then, we study the LS solution problem when X is an unknown matrix and give the concrete form of the LS solutions. In addition, several simple numerical examples are given for each case to verify the feasibility of the theoretical results.

    This study applies the following notations.

    $\mathbb{R}$: the real number field;

    $\mathbb{R}^{n}$: the set of $n$-dimensional vectors over $\mathbb{R}$;

    $\mathbb{R}^{m\times n}$: the set of $m\times n$ matrices over $\mathbb{R}$;

    $A^{T}$: the transpose of matrix $A$;

    $\|A\|$: the Frobenius norm of matrix $A$;

    $\mathrm{tr}(A)$: the trace of matrix $A$;

    $A^{+}$: the Moore-Penrose generalized inverse of matrix $A$;

    $\mathrm{lcm}\{m,n\}$: the least common multiple of positive integers $m$ and $n$;

    $\gcd\{m,n\}$: the greatest common divisor of positive integers $m$ and $n$;

    $b\,|\,a$: $b$ divides $a$;

    $a\,|\,b$: $b$ is divisible by $a$;

    $\frac{\partial f(x)}{\partial x}$: differentiation of $f(x)$ with respect to $x$.

    Let $A=[a_{ij}]\in\mathbb{R}^{m\times n}$ and $B=[b_{ij}]\in\mathbb{R}^{p\times q}$. We give the following definitions:

    Definition 2.1. [34] The Kronecker product $A\otimes B$ is defined as follows:

    $A\otimes B=\begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B\\ a_{21}B & a_{22}B & \cdots & a_{2n}B\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{bmatrix}\in\mathbb{R}^{mp\times nq}. \quad (2.1)$

    Definition 2.2. [20] The left semi-tensor product $A\ltimes B$ is defined as follows:

    $A\ltimes B=(A\otimes I_{t/n})(B\otimes I_{t/p})\in\mathbb{R}^{(mt/n)\times(qt/p)}, \quad (2.2)$

    where $t=\mathrm{lcm}\{n,p\}$.

    Definition 2.3. [21] For a matrix $A\in\mathbb{R}^{m\times n}$, the $mn$-dimensional column vector $V_{c}(A)$ is defined as follows:

    $V_{c}(A)=[a_{11}\ \cdots\ a_{m1}\ \ a_{12}\ \cdots\ a_{m2}\ \ \cdots\ \ a_{1n}\ \cdots\ a_{mn}]^{T}. \quad (2.3)$

    Proposition 2.1. [33,34] When $A,B$ are two real-valued matrices and $X$ is an unknown variable matrix, we have the following results about matrix differentiation:

    $\frac{\partial\,\mathrm{tr}(AX)}{\partial X}=A^{T},\qquad \frac{\partial\,\mathrm{tr}(X^{T}A)}{\partial X}=A,\qquad \frac{\partial\,\mathrm{tr}(X^{T}AX)}{\partial X}=(A+A^{T})X.$
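    These identities are easy to sanity-check numerically. The sketch below (our own check, with arbitrary small sizes) compares the first identity against a finite-difference approximation of the gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))     # tr(AX) requires X of size 4 x 3
X = rng.standard_normal((4, 3))
eps = 1e-6

# Entry-by-entry finite-difference gradient of f(X) = tr(AX).
G = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X)
        E[i, j] = eps
        G[i, j] = (np.trace(A @ (X + E)) - np.trace(A @ X)) / eps

assert np.allclose(G, A.T, atol=1e-4)   # matches d tr(AX)/dX = A^T
```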

    In this subsection, we will consider the following matrix-vector STP equation:

    $A\ltimes X\ltimes B=C, \quad (2.4)$

    where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, $C\in\mathbb{R}^{h\times k}$ are given matrices, and $X\in\mathbb{R}^{p}$ is the vector that needs to be solved.

    With regard to the requirements of the dimensionality of the matrices in the STP equation (2.4), we have the following properties:

    Proposition 2.2. [33] For matrix-vector STP equation (2.4),

    1) when $m=h$, the necessary conditions for (2.4) to have vector solutions of size $p$ are that $\frac{k}{l}$ and $\frac{n}{r}$ be positive integers with $\frac{k}{l}\,\big|\,\frac{n}{r}$, and $p=\frac{ln}{rk}$;

    2) when $m\neq h$, the necessary conditions for (2.4) to have vector solutions of size $p$ are that $\frac{h}{m}$ and $\frac{k}{l}$ be positive integers and that $\beta=\gcd\{\frac{h}{m},r\}$, $\gcd\{\frac{k}{l},\beta\}=1$, $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$ and $p=\frac{nhl}{mrk}$ hold.

    Remark: When the conditions of Proposition 2.2 are satisfied, matrices $A,B$, and $C$ are said to be compatible, and the sizes of $X$ are called permissible sizes.

    Example 2.1 Consider matrix-vector STP equation $A\ltimes X\ltimes B=C$ with the following coefficients:

    $A=\begin{bmatrix}1&2&0&1\end{bmatrix},\quad B=\begin{bmatrix}0\\1\end{bmatrix},\quad C=\begin{bmatrix}1&1&1\\0&2&0\end{bmatrix}.$

    It is easy to see that $m=1$, $n=4$, $r=2$, $l=1$, $h=2$, and $k=3$. Although $m\neq h$, $l\,|\,k$, $\beta=\gcd\{\frac{h}{m},r\}=2$, $\gcd\{\frac{k}{l},\beta\}=1$, and $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$, the size $\frac{nhl}{mrk}=\frac{4}{3}$ is not a positive integer. So $A,B$, and $C$ are not compatible, and matrix-vector STP equation (2.4) has no solution.
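    The conditions of Proposition 2.2 are mechanical to check. A small sketch (our own helper; integrality of the quotients stands in for the divisibility conditions) reproduces the incompatibility of Example 2.1:

```python
from math import gcd

def vector_compatible(m, n, r, l, h, k):
    """Return the permissible vector size p from Proposition 2.2,
    or None when A, B, C are not compatible."""
    if k % l or n % r:                       # k/l and n/r must be integers
        return None
    if m == h:
        if (n // r) % (k // l):              # (k/l) must divide (n/r)
            return None
        return (l * n) // (r * k)            # p = ln/(rk)
    if h % m:                                # h/m must be an integer
        return None
    beta = gcd(h // m, r)
    if gcd(k // l, beta) != 1 or gcd(h // m, k // l) != 1:
        return None
    if (n * h * l) % (m * r * k):            # p = nhl/(mrk) must be integral
        return None
    return (n * h * l) // (m * r * k)

# Example 2.1: nhl/(mrk) = 8/6 is not an integer, so no permissible size.
print(vector_compatible(m=1, n=4, r=2, l=1, h=2, k=3))   # None
```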

    For the case that $m=h$, let $X=[x_{1}\ x_{2}\ \cdots\ x_{p}]^{T}\in\mathbb{R}^{p}$, let $A_{s}$ be the $s$-th column of $A$, and let $\check{A}_{1},\check{A}_{2},\ldots,\check{A}_{p}\in\mathbb{R}^{m\times\frac{n}{p}}=\mathbb{R}^{m\times\frac{rk}{l}}$ be the $p$ equal column blocks of the matrix $A$, i.e., $A=[\check{A}_{1}\ \check{A}_{2}\ \cdots\ \check{A}_{p}]$, and

    $\check{A}_{i}=\big[A_{(i-1)\frac{rk}{l}+1}\ \ A_{(i-1)\frac{rk}{l}+2}\ \ \cdots\ \ A_{i\frac{rk}{l}}\big],\quad i=1,\ldots,p.$

    Let $t_{1}=\mathrm{lcm}\{n,p\}$ and $t_{2}=\mathrm{lcm}\{\frac{t_{1}}{p},r\}$; comparing the dimensions, we get that $t_{1}=n$ and $t_{2}=\frac{rk}{l}$. Then

    $\begin{aligned} A\ltimes X\ltimes B &=(A\otimes I_{\frac{t_{1}}{n}})(X\otimes I_{\frac{t_{1}}{p}})\ltimes B =[\check{A}_{1}\ \check{A}_{2}\ \cdots\ \check{A}_{p}]\begin{bmatrix}x_{1}\\x_{2}\\\vdots\\x_{p}\end{bmatrix}\ltimes B =(x_{1}\check{A}_{1}+x_{2}\check{A}_{2}+\cdots+x_{p}\check{A}_{p})\ltimes B\\ &=x_{1}\check{A}_{1}\ltimes B+x_{2}\check{A}_{2}\ltimes B+\cdots+x_{p}\check{A}_{p}\ltimes B\\ &=x_{1}(\check{A}_{1}\otimes I_{\frac{t_{2}l}{rk}})(B\otimes I_{\frac{t_{2}}{r}})+x_{2}(\check{A}_{2}\otimes I_{\frac{t_{2}l}{rk}})(B\otimes I_{\frac{t_{2}}{r}})+\cdots+x_{p}(\check{A}_{p}\otimes I_{\frac{t_{2}l}{rk}})(B\otimes I_{\frac{t_{2}}{r}})\\ &=x_{1}\check{A}_{1}(B\otimes I_{\frac{k}{l}})+x_{2}\check{A}_{2}(B\otimes I_{\frac{k}{l}})+\cdots+x_{p}\check{A}_{p}(B\otimes I_{\frac{k}{l}})=C\in\mathbb{R}^{m\times k}.\end{aligned}$

    Denote

    $\begin{aligned}\check{B}_{i}&=\check{A}_{i}\ltimes B=\big[A_{(i-1)\frac{rk}{l}+1}\ \ A_{(i-1)\frac{rk}{l}+2}\ \ \cdots\ \ A_{i\frac{rk}{l}}\big](B\otimes I_{\frac{k}{l}})\\ &=\big[A_{(i-1)\frac{rk}{l}+1}\ \cdots\ A_{((i-1)r+1)\frac{k}{l}}\big](B_{1}\otimes I_{\frac{k}{l}})+\cdots+\big[A_{(ir-1)\frac{k}{l}+1}\ \cdots\ A_{i\frac{rk}{l}}\big](B_{r}\otimes I_{\frac{k}{l}})\in\mathbb{R}^{m\times k},\quad i=1,\ldots,p,\end{aligned}$

    where $B_{s}$ denotes the $s$-th row of $B$.

    It is easy to see that, when the matrices $A$ and $C$ have the same row dimension, STP equation (2.4) admits a simpler representation.

    Proposition 2.3. When $m=h$, matrix-vector STP equation (2.4) can be rewritten as follows:

    $x_{1}\check{B}_{1}+x_{2}\check{B}_{2}+\cdots+x_{p}\check{B}_{p}=C. \quad (2.5)$

    Obviously, it can also be written in the following column form:

    $[\check{B}_{1,j}\ \check{B}_{2,j}\ \cdots\ \check{B}_{p,j}]X=C_{j},\quad j=1,\ldots,k,$

    where $\check{B}_{i,j}$ is the $j$-th column of $\check{B}_{i}$ and $C_{j}$ is the $j$-th column of $C$.

    At the same time, applying the $V_{c}$ operator to both sides of (2.5) yields

    $x_{1}V_{c}(\check{B}_{1})+x_{2}V_{c}(\check{B}_{2})+\cdots+x_{p}V_{c}(\check{B}_{p})=[V_{c}(\check{B}_{1})\ V_{c}(\check{B}_{2})\ \cdots\ V_{c}(\check{B}_{p})]X=V_{c}(C).$

    We get the following proposition.

    Proposition 2.4. When $m=h$, matrix-vector STP equation (2.4) is equivalent to the following linear equation under the traditional matrix product:

    $\overline{B}X=V_{c}(C),$

    where

    $\overline{B}=[V_{c}(\check{B}_{1})\ V_{c}(\check{B}_{2})\ \cdots\ V_{c}(\check{B}_{p})]=\begin{bmatrix}\check{B}_{1,1}&\check{B}_{2,1}&\cdots&\check{B}_{p,1}\\\check{B}_{1,2}&\check{B}_{2,2}&\cdots&\check{B}_{p,2}\\\vdots&\vdots&\ddots&\vdots\\\check{B}_{1,k}&\check{B}_{2,k}&\cdots&\check{B}_{p,k}\end{bmatrix}. \quad (2.6)$
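    Proposition 2.4 suggests a direct construction of $\overline{B}$. The sketch below (the helper name `bbar` is ours; it assumes the compatibility conditions of Proposition 2.2 with $m=h$ and reuses `vc` from above) assembles $\overline{B}$ from the column blocks $\check{A}_{i}$:

```python
import numpy as np

def bbar(A, B, p):
    """Assemble \\bar{B} of (2.6) for the case m = h: split A into p equal
    column blocks, form B_i = A_i (B (x) I_{k/l}), and stack the
    vectorizations V_c(B_i) as columns."""
    m, n = A.shape
    r, l = B.shape
    w = n // p                      # block width, equal to rk/l
    k = (w * l) // r                # column dimension of C
    BI = np.kron(B, np.eye(k // l))              # B (x) I_{k/l}
    blocks = [A[:, i*w:(i+1)*w] @ BI for i in range(p)]   # the B_i
    return np.column_stack([vc(Bi) for Bi in blocks])     # shape (mk) x p
```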

    In this subsection, we will consider the following matrix STP equation:

    $A\ltimes X\ltimes B=C, \quad (2.7)$

    where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, $C\in\mathbb{R}^{h\times k}$ are given matrices, and $X\in\mathbb{R}^{p\times q}$ is the matrix that needs to be solved.

    For matrix STP equation (2.7), the dimensionality of its matrices has the following requirements:

    Proposition 2.5. [33] For matrix STP equation (2.7),

    1) when $m=h$, the necessary conditions for (2.7) to have a matrix solution of size $p\times q$ are that $\frac{k}{l}$ and $\frac{n}{r}$ be positive integers and $p=\frac{n}{\alpha}$, $q=\frac{rk}{l\alpha}$, where $\alpha$ is a common factor of $n$ and $\frac{rk}{l}$;

    2) when $m\neq h$, the necessary conditions for (2.7) to have a matrix solution of size $p\times q$ are that $\frac{h}{m}$ and $\frac{k}{l}$ be positive integers and $\gcd\{\frac{h}{m\beta},\frac{\alpha}{\beta}\}=1$, $\gcd\{\beta,\frac{k}{l}\}=1$, $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$, $\beta\,|\,r$, $p=\frac{nh}{m\alpha}$, $q=\frac{rk}{l\alpha}$, where $\alpha$ is a common factor of $\frac{nh}{m}$ and $\frac{rk}{l}$, and $\beta=\gcd\{\frac{h}{m},\alpha\}$.

    Remark: When the conditions of Proposition 2.5 are satisfied, matrices $A,B$, and $C$ are said to be compatible, and the sizes of $X$ are called permissible sizes.

    Example 2.2 Consider matrix STP equation $A\ltimes X\ltimes B=C$ with the following coefficients:

    $A=\begin{bmatrix}1&0&1&1\\0&-1&0&0\end{bmatrix},\quad B=\begin{bmatrix}2\\1\end{bmatrix},\quad C=\begin{bmatrix}3&1&5\\0&2&0\end{bmatrix}.$

    We see that $m=2$, $n=4$, $r=2$, $l=1$, $h=2$, and $k=3$, so $A,B$, and $C$ are compatible. At this time, matrix STP equation (2.7) may have a solution $X\in\mathbb{R}^{2\times3}$ or $X\in\mathbb{R}^{4\times6}$. (In fact, by Corollary 4.1 of [33], this equation has no solution.)

    When $m=h$, let $A_{s}$ be the $s$-th column of $A$ and denote by $\check{A}_{1},\check{A}_{2},\ldots,\check{A}_{p}\in\mathbb{R}^{m\times\alpha}$ the $p$ column blocks of $A$ of the same size, i.e., $A=[\check{A}_{1}\ \check{A}_{2}\ \cdots\ \check{A}_{p}]$, where

    $\check{A}_{i}=[A_{(i-1)\alpha+1}\ A_{(i-1)\alpha+2}\ \cdots\ A_{i\alpha}],\quad i=1,\ldots,p.$

    Denote

    $\bar{A}=[V_{c}(\check{A}_{1}),V_{c}(\check{A}_{2}),\ldots,V_{c}(\check{A}_{p})]=\begin{bmatrix}A_{1}&A_{\alpha+1}&\cdots&A_{(p-1)\alpha+1}\\A_{2}&A_{\alpha+2}&\cdots&A_{(p-1)\alpha+2}\\\vdots&\vdots&\ddots&\vdots\\A_{\alpha}&A_{2\alpha}&\cdots&A_{p\alpha}\end{bmatrix};$

    we will have the following proposition.

    Proposition 2.6. [33] When $m=h$, STP equation (2.7) can be rewritten as follows:

    $(B^{T}\otimes I_{\frac{km}{l}})(I_{q}\otimes\bar{A})V_{c}(X)=V_{c}(C). \quad (2.8)$

    In this subsection we will consider the LS solutions of the following matrix-vector STP equation:

    $A\ltimes X\ltimes B=C, \quad (3.1)$

    where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, $C\in\mathbb{R}^{m\times k}$ are given matrices, and $X\in\mathbb{R}^{p}$ is the vector that needs to be solved. By Proposition 2.2, we know that when $\frac{k}{l}$ and $\frac{n}{r}$ are positive integers and $\frac{k}{l}\,\big|\,\frac{n}{r}$, all matrices are compatible. At this time, matrix-vector STP equation (3.1) may have solutions in $\mathbb{R}^{\frac{ln}{rk}}$.

    Now, assuming that $\frac{k}{l}$ and $\frac{n}{r}$ are positive integers and $\frac{k}{l}\,\big|\,\frac{n}{r}$, we want to find the LS solutions of matrix-vector STP equation (3.1) on $\mathbb{R}^{\frac{ln}{rk}}$; that is, given $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, and $C\in\mathbb{R}^{m\times k}$, we want to find $X^{*}\in\mathbb{R}^{\frac{ln}{rk}}$ such that

    $\|A\ltimes X^{*}\ltimes B-C\|^{2}=\min_{X\in\mathbb{R}^{\frac{ln}{rk}}}\|A\ltimes X\ltimes B-C\|^{2}. \quad (3.2)$

    According to Proposition 2.3, matrix-vector equation (2.4) under the condition that $m=h$ can be rewritten in the column form as follows:

    $[\check{B}_{1,j}\ \check{B}_{2,j}\ \cdots\ \check{B}_{p,j}]X=C_{j},\quad j=1,\ldots,k.$

    So, we have

    $\begin{aligned}\|A\ltimes X\ltimes B-C\|^{2}&=\sum_{j=1}^{k}\big\|[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]X-C_{j}\big\|^{2}\\&=\sum_{j=1}^{k}\mathrm{tr}\Big[\big([\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]X-C_{j}\big)^{T}\big([\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]X-C_{j}\big)\Big]\\&=\sum_{j=1}^{k}\mathrm{tr}\Big(X^{T}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]^{T}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]X-X^{T}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]^{T}C_{j}-C_{j}^{T}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]X+C_{j}^{T}C_{j}\Big).\end{aligned}$

    Since $\|A\ltimes X\ltimes B-C\|^{2}$ is a smooth function of the entries of $X$, $X^{*}$ is a minimum point if and only if $X^{*}$ satisfies the following equation:

    $\frac{d}{dX}\|A\ltimes X\ltimes B-C\|^{2}=0.$

    Then, we derive the following:

    $\frac{d}{dX}\|A\ltimes X\ltimes B-C\|^{2}=\sum_{j=1}^{k}\Big(2[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]^{T}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]X-2[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]^{T}C_{j}\Big).$

    Taking

    $\frac{d}{dX}\|A\ltimes X\ltimes B-C\|^{2}=0,$

    we have

    $\sum_{j=1}^{k}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]^{T}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]X=\sum_{j=1}^{k}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]^{T}C_{j}. \quad (3.3)$

    Hence, the minimum point of minimization problem (3.2) is given by

    $X^{*}=\Big(\sum_{j=1}^{k}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]^{T}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]\Big)^{+}\Big(\sum_{j=1}^{k}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]^{T}C_{j}\Big),$

    and it is also the LS solution of (3.1).

    Meanwhile, we can draw the following result:

    Theorem 3.1. If $\check{B}_{1},\check{B}_{2},\ldots,\check{B}_{p}$ are linearly independent, so that $\overline{B}$ of (2.6) has full column rank, then the LS solution of matrix-vector STP equation (3.1) is given by

    $X^{*}=(\overline{B}^{T}\overline{B})^{-1}\overline{B}^{T}V_{c}(C);$

    if $\check{B}_{1},\check{B}_{2},\ldots,\check{B}_{p}$ are linearly dependent, so that $\overline{B}$ is not of full column rank, then the LS solution of matrix-vector STP equation (3.1) is given by

    $X^{*}=(\overline{B}^{T}\overline{B})^{+}\overline{B}^{T}V_{c}(C).$

    Proof. According to Proposition 2.4, (3.1) is equivalent to the following system of linear equations under the traditional matrix product:

    $\overline{B}X=V_{c}(C). \quad (3.4)$

    Therefore, we only need to study the LS solutions of (3.4). By a standard result in linear algebra, the LS solutions of (3.4) must satisfy the normal equation:

    $\overline{B}^{T}\overline{B}X=\overline{B}^{T}V_{c}(C). \quad (3.5)$

    When $\overline{B}$ has full column rank, $\overline{B}^{T}\overline{B}$ is invertible and the LS solution of (3.4) is given by

    $X^{*}=(\overline{B}^{T}\overline{B})^{-1}\overline{B}^{T}V_{c}(C);$

    when $\overline{B}$ is not of full column rank, $\overline{B}^{T}\overline{B}$ is singular and the LS solution of (3.4) is given by

    $X^{*}=(\overline{B}^{T}\overline{B})^{+}\overline{B}^{T}V_{c}(C).$

    Comparing (3.3) and (3.5), we can see that

    $\overline{B}^{T}\overline{B}=\sum_{j=1}^{k}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]^{T}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}],\qquad \overline{B}^{T}V_{c}(C)=\sum_{j=1}^{k}[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]^{T}C_{j},$

    and

    $\|\overline{B}X-V_{c}(C)\|^{2}=\sum_{j=1}^{k}\big\|[\check{B}_{1,j}\ \cdots\ \check{B}_{p,j}]X-C_{j}\big\|^{2}.$

    Therefore, the two equations are the same, and the LS solution obtained via the two methods is consistent. Obviously, the second method is easier to employ. Below, we only use the second method to find the LS solutions.
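    In code, the second method amounts to a single application of the Moore-Penrose inverse. A sketch (building on `bbar` and `vc` above, for the case $m=h$) is:

```python
import numpy as np

def ls_vector_solution(A, B, C):
    """LS solution of A |x X |x B = C with vector unknown X and m = h
    (Theorem 3.1). The Moore-Penrose inverse covers both the full-rank
    and the rank-deficient case."""
    m, n = A.shape
    r, l = B.shape
    k = C.shape[1]
    p = (l * n) // (r * k)          # permissible size from Proposition 2.2
    Bb = bbar(A, B, p)              # \bar{B} of (2.6)
    return np.linalg.pinv(Bb.T @ Bb) @ (Bb.T @ vc(C))
```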

    Example 3.1 Now, we shall explore the LS solution of the matrix-vector STP equation $A\ltimes X\ltimes B=C$ with the following coefficients:

    $A=\begin{bmatrix}1&0&1&-1\\0&1&0&0\\1&0&0&1\end{bmatrix},\quad B=\begin{bmatrix}2&1&2\\0&0&1\end{bmatrix},\quad C=\begin{bmatrix}1&1&1\\0&2&0\\1&1&0\end{bmatrix}.$

    By Example 2.1(1), it follows that the matrix-vector STP equation AXB=C has no exact solution. Then, we can investigate the LS solutions of this equation.

    First, because $A,B$, and $C$ are compatible, the matrix-vector equation may have LS solutions on $\mathbb{R}^{2}$. Second, dividing $A$ into 2 blocks, we have

    $\check{A}_{1}=\begin{bmatrix}1&0\\0&1\\1&0\end{bmatrix},\quad \check{A}_{2}=\begin{bmatrix}1&-1\\0&0\\0&1\end{bmatrix},\quad \check{B}_{1}=\check{A}_{1}(B\otimes I_{1})=\begin{bmatrix}2&1&2\\0&0&1\\2&1&2\end{bmatrix},\quad \check{B}_{2}=\check{A}_{2}(B\otimes I_{1})=\begin{bmatrix}2&1&1\\0&0&0\\0&0&1\end{bmatrix}.$

    Then, we can get

    $\overline{B}=\begin{bmatrix}2&2\\0&0\\2&0\\1&1\\0&0\\1&0\\2&1\\1&0\\2&1\end{bmatrix},\quad V_{c}(C)=\begin{bmatrix}1\\0\\1\\1\\2\\1\\1\\0\\0\end{bmatrix}.$

    Because $\overline{B}$ has full column rank, the LS solution of this matrix-vector STP equation is given by

    $X^{*}=(\overline{B}^{T}\overline{B})^{-1}\overline{B}^{T}V_{c}(C)=\begin{bmatrix}0.2963\\0.0741\end{bmatrix}.$

    In this subsection we will explore the LS solutions of the following matrix-vector STP equation:

    $A\ltimes X\ltimes B=C, \quad (3.6)$

    where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$ and $C\in\mathbb{R}^{h\times k}$ are given matrices and $X\in\mathbb{R}^{p}$ is the vector that needs to be solved. By Proposition 2.2, we know that, when $m\,|\,h$, $l\,|\,k$, $\frac{nhl}{mrk}$ is a positive integer, $\beta=\gcd\{\frac{h}{m},r\}$, $\gcd\{\frac{k}{l},\beta\}=1$, and $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$, $A,B$, and $C$ are compatible. At this time, STP equation (3.6) may have a solution belonging to $\mathbb{R}^{\frac{nhl}{mrk}}$.

    In what follows, we assume that matrix-vector STP equation (3.6) always satisfies the compatibility conditions, and we will find the LS solutions of matrix-vector STP equation (3.6) on $\mathbb{R}^{\frac{nhl}{mrk}}$. Since $\frac{h}{m}$ is a factor of the dimension $\frac{nhl}{mrk}$ of $X$, it is easy to obtain the matrix-vector STP equation given by

    $A\ltimes X\ltimes B=(A\otimes I_{\frac{h}{m}})\ltimes X\ltimes B,$

    according to the multiplication rules of semi-tensor products. Let $A'=A\otimes I_{\frac{h}{m}}$; then matrix-vector STP equation (3.6) is transformed into the case of $m=h$, and, from the conclusions of the previous subsection, one can easily obtain the LS solution of matrix-vector STP equation (3.6).

    Below, we give an algorithm for finding the LS solutions of matrix-vector STP equation (3.6):

    Step one: Check whether $A,B$, and $C$ are compatible, that is, whether $m\,|\,h$ and $l\,|\,k$ hold and whether $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$. If not, the equation has no solution.

    Step two: Let $X\in\mathbb{R}^{p}$, $p=\frac{nhl}{mrk}$, and $A'=A\otimes I_{\frac{h}{m}}\in\mathbb{R}^{h\times\frac{nh}{m}}$. Take $\check{A}'_{1},\check{A}'_{2},\ldots,\check{A}'_{p}\in\mathbb{R}^{h\times\frac{nh}{mp}}=\mathbb{R}^{h\times\frac{rk}{l}}$ to be the $p$ equal column blocks of the matrix $A'$:

    $\check{A}'_{i}=\big[A'_{(i-1)\frac{rk}{l}+1}\ \ A'_{(i-1)\frac{rk}{l}+2}\ \ \cdots\ \ A'_{i\frac{rk}{l}}\big],\quad i=1,\ldots,p,$

    where $A'_{s}$ is the $s$-th column of $A'$. Let

    $\check{B}_{1},\check{B}_{2},\ldots,\check{B}_{p}\in\mathbb{R}^{h\times k},$

    where

    $\check{B}_{i}=\check{A}'_{i}\ltimes B=\big[A'_{(i-1)\frac{rk}{l}+1}\ \ A'_{(i-1)\frac{rk}{l}+2}\ \ \cdots\ \ A'_{i\frac{rk}{l}}\big](B\otimes I_{\frac{k}{l}}),\quad i=1,\ldots,p.$

    Step three: Let

    $\overline{B}=\begin{bmatrix}\check{B}_{1,1}&\check{B}_{2,1}&\cdots&\check{B}_{p,1}\\\check{B}_{1,2}&\check{B}_{2,2}&\cdots&\check{B}_{p,2}\\\vdots&\vdots&\ddots&\vdots\\\check{B}_{1,k}&\check{B}_{2,k}&\cdots&\check{B}_{p,k}\end{bmatrix},$

    and calculate $V_{c}(C)$.

    Step four: Solve the equation $\overline{B}^{T}\overline{B}X=\overline{B}^{T}V_{c}(C)$. If $\overline{B}$ has full column rank, so that $\overline{B}^{T}\overline{B}$ is invertible, the LS solution of matrix-vector STP equation (3.6) is given by

    $X^{*}=(\overline{B}^{T}\overline{B})^{-1}\overline{B}^{T}V_{c}(C);$

    if $\overline{B}$ is not of full column rank, the LS solution of matrix-vector STP equation (3.6) is given by

    $X^{*}=(\overline{B}^{T}\overline{B})^{+}\overline{B}^{T}V_{c}(C).$
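    A sketch of steps one through four (our own wrapper; it lifts $A$ to $A'=A\otimes I_{h/m}$ and reuses the $m=h$ solver from the previous subsection) is:

```python
import numpy as np

def ls_vector_solution_general(A, B, C):
    """Steps one-four for the matrix-vector STP equation (3.6):
    lift A to A' = A (x) I_{h/m}, reducing to the case m = h."""
    m = A.shape[0]
    h = C.shape[0]
    if h % m:
        raise ValueError("m must divide h")
    A_lift = np.kron(A, np.eye(h // m))   # A' in R^{h x nh/m}
    return ls_vector_solution(A_lift, B, C)
```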

    Example 3.2 Now, we shall explore the LS solutions of the matrix-vector STP equation $A\ltimes X\ltimes B=C$ with the following coefficients:

    $A=\begin{bmatrix}1&0&1&1\end{bmatrix},\quad B=\begin{bmatrix}2\\0\end{bmatrix},\quad C=\begin{bmatrix}1&1\\0&2\\1&0\end{bmatrix}.$

    According to Example 2.1(2), we know that this matrix-vector STP equation has no exact solution. Then, we can investigate the LS solutions of this STP equation.

    Step one: $m\,|\,h$, $l\,|\,k$, and $\gcd\{\frac{h}{m},\frac{k}{l}\}=1$ hold, so $A,B$, and $C$ are compatible; we proceed to the second step.

    Step two: The matrix-vector STP equation may have an LS solution $X\in\mathbb{R}^{3}$, and

    $A'=A\otimes I_{3}=\begin{bmatrix}1&0&0&0&0&0&1&0&0&1&0&0\\0&1&0&0&0&0&0&1&0&0&1&0\\0&0&1&0&0&0&0&0&1&0&0&1\end{bmatrix}.$

    Let

    $\check{A}'_{1}=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\end{bmatrix},\quad \check{A}'_{2}=\begin{bmatrix}0&0&1&0\\0&0&0&1\\0&0&0&0\end{bmatrix},\quad \check{A}'_{3}=\begin{bmatrix}0&1&0&0\\0&0&1&0\\1&0&0&1\end{bmatrix}$

    be the three equal column blocks of the matrix $A'$. We have

    $\check{B}_{1}=\check{A}'_{1}(B\otimes I_{2})=\begin{bmatrix}2&0\\0&2\\0&0\end{bmatrix},\quad \check{B}_{2}=\check{A}'_{2}(B\otimes I_{2})=\begin{bmatrix}0&0\\0&0\\0&0\end{bmatrix},\quad \check{B}_{3}=\check{A}'_{3}(B\otimes I_{2})=\begin{bmatrix}0&2\\0&0\\2&0\end{bmatrix}.$

    Step three: Let

    $\overline{B}=\begin{bmatrix}2&0&0\\0&0&0\\0&0&2\\0&0&2\\2&0&0\\0&0&0\end{bmatrix},\quad V_{c}(C)=\begin{bmatrix}1\\0\\1\\1\\2\\0\end{bmatrix}.$

    Step four: Because $\overline{B}$ is not of full column rank, the LS solution of this matrix-vector STP equation is given by

    $X^{*}=(\overline{B}^{T}\overline{B})^{+}\overline{B}^{T}V_{c}(C)=\begin{bmatrix}0.7500\\0\\0.5000\end{bmatrix}.$
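    Running the sketch from the previous subsection on these coefficients reproduces the value of $X^{*}$ computed above:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0, 1.0]])
B = np.array([[2.0], [0.0]])
C = np.array([[1.0, 1.0], [0.0, 2.0], [1.0, 0.0]])

x_star = ls_vector_solution_general(A, B, C)
print(np.round(x_star, 4))      # [0.75 0.   0.5 ]
```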

    In this subsection we will explore the LS solutions of the following matrix STP equation

    $A\ltimes X\ltimes B=C, \quad (4.1)$

    where $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, and $C\in\mathbb{R}^{m\times k}$ are given matrices and $X\in\mathbb{R}^{p\times q}$ is the matrix that needs to be solved. By Proposition 2.5, we have that, when $\frac{k}{l}$ and $\frac{n}{r}$ are positive integers, all matrices are compatible. At this time, matrix STP equation (4.1) may have solutions in $\mathbb{R}^{\frac{n}{\alpha}\times\frac{rk}{l\alpha}}$, where $\alpha$ is a common factor of $n$ and $\frac{rk}{l}$.

    Now, we assume that these conditions hold, and we want to find the LS solutions of matrix STP equation (4.1) on $\mathbb{R}^{\frac{n}{\alpha}\times\frac{rk}{l\alpha}}$; the problem is as follows: Given $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{r\times l}$, and $C\in\mathbb{R}^{m\times k}$, we want to find $X^{*}\in\mathbb{R}^{p\times q}$ such that

    $\|A\ltimes X^{*}\ltimes B-C\|^{2}=\min_{X\in\mathbb{R}^{p\times q}}\|A\ltimes X\ltimes B-C\|^{2}. \quad (4.2)$

    By Proposition 2.6, matrix STP equation (4.1) can be rewritten as

    $(B^{T}\otimes I_{\frac{km}{l}})(I_{q}\otimes\bar{A})V_{c}(X)=V_{c}(C). \quad (4.3)$

    So, finding the LS solution of (4.1) is equivalent to finding $X^{*}\in\mathbb{R}^{p\times q}$ such that

    $\|(B^{T}\otimes I_{\frac{km}{l}})(I_{q}\otimes\bar{A})V_{c}(X^{*})-V_{c}(C)\|^{2}=\min_{X\in\mathbb{R}^{p\times q}}\|(B^{T}\otimes I_{\frac{km}{l}})(I_{q}\otimes\bar{A})V_{c}(X)-V_{c}(C)\|^{2}. \quad (4.4)$

    Then, we have the following theorem.

    Theorem 4.1. Let $B''=B^{T}\otimes I_{\frac{km}{l}}$, $A''=I_{q}\otimes\bar{A}$ and $C''=V_{c}(C)$. When $B''A''$ is of full rank and invertible, the LS solution of matrix STP equation (4.1) is given by

    \begin{eqnarray*} V_{c}( X^{*}) = (B''A'')^{-1}C''; \end{eqnarray*}

    when $B''A''$ is not of full rank, the LS solution of matrix STP equation (4.1) is given by

    \begin{eqnarray*} V_{c}( X^{*}) = (B''A'')^{+}C''. \end{eqnarray*}

    Proof. Let B'' = B^{T}\otimes I_{\frac{km}{l}}, \; A'' = I_{q}\otimes \bar{A}, \; X'' = V_{c}(X) , and C'' = V_{c}(C) ; then (4.4) can be rewritten as

    \begin{eqnarray*} &&\; \parallel(B^{T}\otimes I_{\frac{km}{l}})(I_{q}\otimes \bar{A})V_{c}( X)-V_{c}( C)\parallel^{2}\\ && = \parallel B''A''X''- C''\parallel^{2}\\ && = \text{tr}[(B''A''X''- C'')^{T}(B''A''X''- C'')]\\ && = \text{tr}[(X''^{T}A''^{T}B''^{T}- C''^{T})(B''A''X''- C'')]\\ && = \text{tr}[(X''^{T}A''^{T}B''^{T}B''A''X'')- (X''^{T}A''^{T}B''^{T}C'')-(C''^{T}B''A''X'')+( C''^{T}C'')]. \end{eqnarray*}

    Because \parallel A\ltimes X\ltimes B-C \parallel^{2} is a smooth function for the variables of X , it follows that X^{*} is the minimum point if and only if X^{*} satisfies

    \begin{eqnarray*} \frac{d}{dX}\parallel (B^{T}\otimes I_{\frac{km}{l}})(I_{q}\otimes \bar{A})V_{c}( X)-V_{c}( C) \parallel^{2} = 0. \end{eqnarray*}

    Given that

    \begin{eqnarray*} \frac{d}{dX}\parallel (B^{T}\otimes I_{\frac{km}{l}})(I_{q}\otimes \bar{A})V_{c}( X)-V_{c}( C) \parallel^{2} = 2A''^{T}B''^{T}B''A''X''- 2A''^{T}B''^{T}C'', \end{eqnarray*}

    let

    \begin{eqnarray*} \frac{d}{dX}\parallel (B^{T}\otimes I_{\frac{km}{l}})(I_{q}\otimes \bar{A})V_{c}( X)-V_{c}( C) \parallel^{2} = 0. \end{eqnarray*}

    Then, we have

    \begin{eqnarray*} \label{19} A''^{T}B''^{T}B''A''X'' = A''^{T}B''^{T}C''. \end{eqnarray*}

    Thus, the minimum point of minimization problem (4.2) is also the LS solution of matrix STP equation (4.1). That is to say, \parallel A\ltimes X\ltimes B-C\parallel^{2} is smallest if and only if \parallel (B^{T}\otimes I_{\frac{km}{l}})(I_{q}\otimes \bar{A})V_{c}(X^{*})-V_{c}(C)\parallel^{2} attains its minimum value, and the statement follows.
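    For completeness, a sketch of Theorem 4.1 for the case $m=h$ (reusing `vc` from Section 2; the function name and the argument `alpha`, which selects the permissible size, are ours) is:

```python
import numpy as np

def ls_matrix_solution(A, B, C, alpha):
    """LS solution of A |x X |x B = C with matrix unknown X and m = h,
    via the linear form (2.8):
    (B^T (x) I_{km/l})(I_q (x) Abar) V_c(X) = V_c(C).
    The Moore-Penrose inverse also covers the non-invertible case of
    Theorem 4.1."""
    m, n = A.shape
    r, l = B.shape
    k = C.shape[1]
    p, q = n // alpha, (r * k) // (l * alpha)
    Abar = np.column_stack([vc(A[:, i*alpha:(i+1)*alpha]) for i in range(p)])
    M = np.kron(B.T, np.eye((k * m) // l)) @ np.kron(np.eye(q), Abar)  # B''A''
    x = np.linalg.pinv(M) @ vc(C)             # (B''A'')^+ C''
    return x.reshape((p, q), order="F")       # undo the V_c column stacking
```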

    Now, we shall examine the relationship between the LS solutions of different compatible sizes. Let $p_{1}\times q_{1}$ and $p_{2}\times q_{2}$ be two different compatible sizes with $\frac{p_{2}}{p_{1}}=\frac{q_{2}}{q_{1}}\in\mathbb{Z}$ and $\frac{p_{2}}{p_{1}}>1$. If $X\in\mathbb{R}^{p_{1}\times q_{1}}$, then $X\otimes I_{\frac{p_{2}}{p_{1}}}\in\mathbb{R}^{p_{2}\times q_{2}}$ and $A\ltimes(X\otimes I_{\frac{p_{2}}{p_{1}}})\ltimes B=A\ltimes X\ltimes B$; we can get the following formula:

    \begin{eqnarray*} \mathop{min}\limits_{X\in \mathbb{R}^{p_{2} \times q_{2}}}\parallel A\ltimes X\ltimes B-C\parallel^{2}\leq \mathop{min}\limits_{X\in \mathbb{R}^{p_{1} \times q_{1}}}\parallel A\ltimes X\ltimes B-C\parallel^{2}. \end{eqnarray*}

    Therefore, if we consider (4.1) to take the LS solutions among all compatible sizes of matrices, then it should be the LS solutions of the equation on \mathbb{R}^{n \times k} .

    Example 4.1 Now, we shall explore the LS solutions of matrix STP equation AXB = C with the following coefficients:

    \begin{eqnarray*} A = \begin{bmatrix} 1&0&1&1\\ 0&-1&0&0 \end{bmatrix},\; \; \; B = \begin{bmatrix} 2\\ 1 \end{bmatrix},\; \; \; C = \begin{bmatrix} 3&1&5\\ 0&2&0\\ \end{bmatrix}. \end{eqnarray*}

    Example 2.2(1) shows that the matrix STP equation AXB = C has no exact solution. Now, we can investigate the LS solutions of this equation.

    First, given that A, \; B , and C are compatible, the matrix STP equation may have LS solutions on \mathbb{R}^{2\times3} or \mathbb{R}^{4\times6} .

    (1) The case that \alpha = 2, \; X\in \mathbb{R}^{2\times3} :

    Let

    \begin{eqnarray*} \check{A}_{1} = \left[ \begin{smallmatrix} A_{1} &A_{2} \end{smallmatrix} \right] = \left[ \begin{smallmatrix} 1&0\\ 0&-1 \end{smallmatrix} \right],\; \; \check{A}_{2} = \left[ \begin{smallmatrix} A_{3} &A_{4} \end{smallmatrix} \right] = \left[ \begin{smallmatrix} 1&1\\ 0&0 \end{smallmatrix} \right]. \end{eqnarray*}

    Then, we have

    \begin{eqnarray*} \bar{A} = \left[ \begin{smallmatrix} 1&1\\ 0&0\\ 0&1\\ -1&0 \end{smallmatrix} \right],\; \; A'' = I_{3}\otimes \bar{A} = \left[ \begin{smallmatrix} 1&1&0&0&0&0\\ 0&0&0&0&0&0\\ 0&1&0&0&0&0\\ -1&0&0&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0&0\\ 0&0&-1&0&0&0\\ 0&0&0&0&1&1\\ 0&0&0&0&0&0\\ 0&0&0&0&0&1\\ 0&0&0&0&-1&0 \end{smallmatrix} \right]. \end{eqnarray*}

    Let

    \begin{eqnarray*} B'' = B^{T}\otimes I_{6} = \left[ \begin{smallmatrix} 2&0&0&0&0&0&1&0&0&0&0&0\\ 0&2&0&0&0&0&0&1&0&0&0&0\\ 0&0&2&0&0&0&0&0&1&0&0&0\\ 0&0&0&2&0&0&0&0&0&1&0&0\\ 0&0&0&0&2&0&0&0&0&0&1&0\\ 0&0&0&0&0&2&0&0&0&0&0&1 \end{smallmatrix} \right],\; \; C'' = V_{c}(C) = \left[ \begin{smallmatrix} 3\\ 1\\ 5\\ 0\\ 2\\ 0 \end{smallmatrix} \right]. \end{eqnarray*}

    Because B''A'' is full rank, the LS solution of this matrix STP equation satisfies

    \begin{eqnarray*} V_{c}( X^{*}) = (B''A'')^{-1}C'' = \left[ \begin{smallmatrix} 0\\ 1.1667\\ -1.0000\\ 0.6667\\ 0\\ 2.6667 \end{smallmatrix} \right],\; \; \text{then}\; X^{*} = \left[ \begin{smallmatrix} 0&1.1667&-1.0000\\ 0.6667&0&2.6667 \end{smallmatrix} \right]. \end{eqnarray*}

    (2) The case that \alpha = 1, \; X\in \mathbb{R}^{4\times6} :

    Let

    \bar{A} = \left[ \begin{smallmatrix} 1&0&1&1\\ 0&-1&0&0 \end{smallmatrix} \right],
    A'' = I_{6}\otimes \bar{A} = \operatorname{diag}(\bar{A},\bar{A},\bar{A},\bar{A},\bar{A},\bar{A})\in \mathbb{R}^{12\times 24}.

    Let

    \begin{eqnarray*} B'' = B^{T}\otimes I_{6} = \left[ \begin{smallmatrix} 2&0&0&0&0&0&1&0&0&0&0&0\\ 0&2&0&0&0&0&0&1&0&0&0&0\\ 0&0&2&0&0&0&0&0&1&0&0&0\\ 0&0&0&2&0&0&0&0&0&1&0&0\\ 0&0&0&0&2&0&0&0&0&0&1&0\\ 0&0&0&0&0&2&0&0&0&0&0&1 \end{smallmatrix} \right],\; \; C'' = V_{c}(C) = \left[ \begin{smallmatrix} 3\\ 1\\ 5\\ 0\\ 2\\ 0 \end{smallmatrix} \right]. \end{eqnarray*}

    Because B''A'' is not of full rank, the LS solution of this matrix STP equation satisfies

    \begin{eqnarray*} &&V_{c}( X^{*}) = (B''A'')^{+}C'' = \left[ \begin{smallmatrix} 0.4615\\0.1538 \\0.7692 \\ 0 \\0.3077 \\0 \\-0.2308 \\-0.0769 \\-0.3846 \\0 \\-0.1538 \\ 0 \\ 0.4615 \\ 0.1538 \\ 0.7692 \\ 0 \\0.3077 \\ 0 \\ 0.4615 \\ 0.1538 \\ 0.7692 \\ 0 \\ 0.3077 \\ 0 \end{smallmatrix} \right], \; \text{then}\; X^{*} = \left[ \begin{smallmatrix} 0.4615 &0.1538 &0.7692 & 0 &0.3077 &0 \\-0.2308 &-0.0769 &-0.3846 &0 &-0.1538 & 0 \\ 0.4615 & 0.1538 & 0.7692 & 0 &0.3077 & 0 \\ 0.4615 & 0.1538 & 0.7692 & 0 & 0.3077 & 0 \end{smallmatrix} \right]. \end{eqnarray*}

    This section focuses on the LS solutions of the following matrix STP equation:

    \begin{eqnarray} AXB = C, \end{eqnarray} (4.5)

    where $A\in \mathbb{R}^{m\times n}$, $B\in \mathbb{R}^{r\times l}$ and $C\in \mathbb{R}^{h\times k}$ are given matrices and $X\in \mathbb{R}^{p\times q}$ is the matrix that needs to be solved. By Proposition 2.5, we have that, when $m|h$, $l|k$, $\gcd\{\frac{h}{m\beta}, \frac{\alpha}{\beta}\} = 1$, $\gcd\{\beta, \frac{k}{l}\} = 1$, and $\beta|r$, where $\beta = \gcd\{\frac{h}{m}, \alpha\}$, all matrices are compatible. At this time, matrix STP equation (4.5) may have solutions in $\mathbb{R}^{\frac{nh}{m\alpha}\times \frac{rk}{l\alpha}}$, where $\alpha$ is a common factor of $\frac{nh}{m}$ and $\frac{rk}{l}$.

    Now, we assume that matrix STP equation (4.5) always satisfies the compatibility conditions. Since \frac{h}{m} is a factor of the row dimension \frac{nh}{m\alpha} of X , it is easy to obtain the matrix STP equation

    \begin{eqnarray*} A\ltimes X\ltimes B = (A\otimes I_{\frac{h}{m}})\ltimes X\ltimes B, \end{eqnarray*}

    according to the multiplication rules of STP. Let $A' = A\otimes I_{\frac{h}{m}}$; then (4.5) can be transformed into the case of $m = h$, and we can easily obtain the LS solution of matrix STP equation (4.5).

    Below, we give an algorithm for finding the LS solutions of matrix STP equation (4.5) :

    Step one: Check whether m|h and l|k hold. If not, we can get that the equation has no solution.

    Step two: Find all values of $\alpha$ that satisfy $\gcd\{\frac{h}{m\beta}, \frac{\alpha}{\beta}\} = 1$, $\gcd\{\beta, \frac{k}{l}\} = 1$, and $\beta|r$, where $\beta = \gcd\{\frac{h}{m}, \alpha\}$; correspondingly, find all compatible sizes $p\times q$ and perform the following steps for each compatible size.

    Step three: Let A' = A\otimes I_{\frac{h}{m}}\in \mathbb{R}^{h\times\frac{nh}{m}} . We have

    \begin{eqnarray*} \bar{A'} = [V_{c}(\check{A'}_{1}),V_{c}(\check{A'}_{2}),\cdots,V_{c}(\check{A'}_{p})] = \begin{bmatrix} A'_{1}&A'_{\alpha+1}&\cdots&A'_{(p-1)\alpha+1}\\ A'_{2}&A'_{\alpha+2}&\cdots&A'_{(p-1)\alpha+2}\\ \vdots&\vdots&\ddots&\vdots\\ A'_{\alpha}&A'_{2\alpha}&\cdots&A'_{p\alpha} \end{bmatrix}, \end{eqnarray*}

    where \check{A'}_{1}, \check{A'}_{2}, \cdots, \check{A'}_{p}\in \mathbb{R}^ {m\times \alpha} are p blocks of A' of the same size, and A'_{i} is the i -th column of A' . Let B'' = B^{T}\otimes I_{\frac{kh}{l}}, \; A'' = I_{q}\otimes \bar{A'}, \; X'' = V_{c}(X) , and C'' = V_{c}(C) .

    Step four: Solve the equation $A''^{T}B''^{T}B''A''X'' = A''^{T}B''^{T}C''$; if $B''A''$ is of full rank and invertible, the LS solution of matrix STP equation (4.5) is given by

    \begin{eqnarray*} V_{c}(X^{*}) = (B''A'')^{-1}C''; \end{eqnarray*}

    if B''A'' is not full rank, the LS solution of matrix STP equation (4.5) is given by

    \begin{eqnarray*} V_{c}(X^{*}) = (B''A'')^{+}C''. \end{eqnarray*}
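    As in the matrix-vector case, the lifting step reduces everything to the $m=h$ solver. A sketch of this algorithm (assuming the compatibility conditions of Proposition 2.5 and reusing `ls_matrix_solution` from above) is:

```python
import numpy as np

def ls_matrix_solution_general(A, B, C, alpha):
    """Steps one-four for matrix STP equation (4.5): lift A to
    A' = A (x) I_{h/m}, so that B'' = B^T (x) I_{kh/l} as in step three,
    then reuse the m = h solver for each compatible size alpha."""
    m = A.shape[0]
    h = C.shape[0]
    if h % m:
        raise ValueError("m must divide h")
    A_lift = np.kron(A, np.eye(h // m))   # A' in R^{h x nh/m}
    return ls_matrix_solution(A_lift, B, C, alpha)
```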

    Example 4.2 Now, we shall explore the LS solutions of matrix STP equation AXB = C with the following coefficients:

    \begin{eqnarray*} A = \begin{bmatrix} 1&0 \end{bmatrix},\; \; \; B = \begin{bmatrix} 1&0&1\\ 0&-1&0\\ 1&0&1\\ 0&0&-1 \end{bmatrix},\; \; \; C = \begin{bmatrix} 3&1&5\\ 0&2&0\\ \end{bmatrix}. \end{eqnarray*}

    According to Example 2.2(2), matrix STP equation $AXB = C$ has no exact solution. Now, we can investigate the LS solutions of this equation:

    Step one: $m|h$ and $l|k$ hold.

    Step two: $\gcd\{\frac{h}{m\beta}, \frac{\alpha}{\beta}\} = 1$, $\gcd\{\beta, \frac{k}{l}\} = 1$, and $\beta|r$, where $\beta = \gcd\{\frac{h}{m}, \alpha\}$, hold. The matrix STP equation $AXB = C$ may have a solution $X \in \mathbb{R}^{2\times 2}$ or $\mathbb{R}^{4\times 4}$.

    Step three: (1) The case that \alpha = 2 :

    Let

    \begin{eqnarray*} A' = A\otimes I_{2} = \left[ \begin{smallmatrix} 1&0&0&0\\ 0&1&0&0 \end{smallmatrix} \right]. \end{eqnarray*}

    Then, we have

    \begin{eqnarray*} \bar{A'} = [V_{c}(\check{A'}_{1}),V_{c}(\check{A'}_{2})] = \left[ \begin{smallmatrix} 1&0\\ 0&0\\ 0&0\\ 1&0 \end{smallmatrix} \right]. \end{eqnarray*}

    Let

    \begin{align*} \qquad &B'' = B^{T}\otimes I_{2} = \left[ \begin{smallmatrix} 1&0&0&0&1&0&0&0\\ 0&1&0&0&0&1&0&0\\ 0&0&-1&0&0&0&0&0\\ 0&0&0&-1&0&0&0&0\\ 1&0&0&0&1&0&-1&0\\ 0&1&0&0&0&1&0&-1 \end{smallmatrix} \right], \; A'' = I_{2}\otimes \bar{A'} = \left[ \begin{smallmatrix} 1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&1&0 \end{smallmatrix} \right], \; \; C'' = V_{c}( C) = \left[ \begin{smallmatrix} 3\\ 1\\ 5\\ 0\\ 2\\ 0 \end{smallmatrix} \right].& \end{align*}

    (2) The case that \alpha = 1 :

    Let

    \begin{eqnarray*} A' = A\otimes I_{2} = \left[ \begin{smallmatrix} 1&0&0&0\\ 0&1&0&0 \end{smallmatrix} \right]. \end{eqnarray*}

    Then, we have

    \begin{eqnarray*} \bar{A'} = [V_{c}(\check{A'}_{1}),V_{c}(\check{A'}_{2}),V_{c}(\check{A'}_{3}),V_{c}(\check{A'}_{4})] = \left[ \begin{smallmatrix} 1&0&0&0\\ 0&1&0&0 \end{smallmatrix} \right]. \end{eqnarray*}

    Let

    \begin{align*} &B'' = B^{T}\otimes I_{2} = \left[ \begin{smallmatrix} 1&0&0&0&1&0&0&0\\ 0&1&0&0&0&1&0&0\\ 0&0&-1&0&0&0&0&0\\ 0&0&0&-1&0&0&0&0\\ 1&0&0&0&1&0&-1&0\\ 0&1&0&0&0&1&0&-1 \end{smallmatrix} \right],\; \; A'' = I_{4}\otimes \bar{A'} = \operatorname{diag}(\bar{A'},\bar{A'},\bar{A'},\bar{A'})\in \mathbb{R}^{8\times 16},\; \; C'' = V_{c}( C) = \left[ \begin{smallmatrix} 3\\ 1\\ 5\\ 0\\ 2\\ 0 \end{smallmatrix} \right].& \end{align*}

    Step four: Because B''A'' is not full rank, the LS solution of this matrix STP equation is obtained as follows:

    (1) The case that \alpha = 2 :

    \begin{eqnarray*} V_{c}(X^{*}) = (B''A'')^{+}C'' = \left[ \begin{smallmatrix} 1.0000\\0\\ 1.0000\\ 0 \end{smallmatrix} \right] \Longrightarrow X^{*} = \left[ \begin{smallmatrix} 1.0000& 0\\ 1.0000& 0 \end{smallmatrix} \right]. \end{eqnarray*}

    (2) The case that \alpha = 1 :

    \begin{align*} & \qquad V_{c}(X^{*}) = (B''A'')^{+}C'' = \left[ \begin{smallmatrix} 1.5000\\ 0.5000\\ -5.0000\\ 0\\ 1.5000\\ 0.5000\\ 1.0000\\ 1.0000\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0 \end{smallmatrix} \right] \Longrightarrow \quad X^{*} = \left[ \begin{smallmatrix} 1.5000&0.5000& -5.0000&0\\ 1.5000&0.5000&1.0000&1.0000\\ 0&0&0&0\\ 0&0&0&0\\ \end{smallmatrix} \right].& \end{align*}

    In this paper, we applied the semi-tensor product to the matrix equation $AXB = C$ and studied the LS solutions of the matrix equation under the semi-tensor product. By applying the definition of the semi-tensor product, the equation can be transformed into a matrix equation under the ordinary matrix product, which is then solved by combining the Moore-Penrose generalized inverse with matrix differentiation. The specific forms of the LS solutions were derived both for the matrix-vector equation and for the matrix equation. Investigating the solution of Sylvester equations under a semi-tensor product, as well as the corresponding LS solution problem, will be future research work.

    No artificial intelligence tools were used in the creation of this article.

    The work was supported in part by the National Natural Science Foundation (NNSF) of China under Grant 12301573 and in part by the Natural Science Foundation of Shandong under grant ZR2022QA095.

    No potential conflict of interest was reported by the author.



    [1] J. Demmel, Applied numerical linear algebra, Philadelphia: SIAM, 1997. https://doi.org/10.1137/1.9781611971446
    [2] G. H. Golub, C. F. Van Loan, Matrix computations, 4 Eds., Baltimore and London: The Johns Hopkins University Press, 2013.
    [3] J. F. Cai, E. J. Candès, Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20 (2010), 1956–1982. https://doi.org/10.1137/080738970 doi: 10.1137/080738970
    [4] L. L. Scharf, The SVD and reduced rank signal processing, Signal Process., 25 (1991), 113–133. https://doi.org/10.1016/0165-1684(91)90058-Q doi: 10.1016/0165-1684(91)90058-Q
    [5] B. D. Moor, The singular value decomposition and long and short spaces of noisy matrices, IEEE Trans. Signal Proc., 41 (1993), 2826–2838. https://doi.org/10.1109/78.236505 doi: 10.1109/78.236505
    [6] M. V. Kulikova, Hyperbolic SVD-based Kalman filtering for Chandrasekhar recursion, IET Control Theory Appl., 13 (2019), 1525. https://doi.org/10.1049/iet-cta.2018.5864 doi: 10.1049/iet-cta.2018.5864
    [7] G. H. Golub, W. Kahan, Calculating the singular values and the pesudo-inverse of a matrix, SIAM J. Numer. Anal., 2 (1965), 205–224. https://doi.org/10.1137/0702016 doi: 10.1137/0702016
    [8] M. Gu, S. C. Eisenstat, A divide-and-conquer algorithm for the bidiagonal SVD, SIAM J. Matrix Anal. Appl., 16 (1995), 79–92. https://doi.org/10.1137/S0895479892242232 doi: 10.1137/S0895479892242232
    [9] Z. Drmac, K. Veselic, New fast and accurate Jacobi SVD algorithm, SIAM J. Matrix Anal. Appl., 29 (2008), 1322–1362. https://doi.org/10.1137/050639193 doi: 10.1137/050639193
    [10] H. Zha, A note on the existence of the hyperbolic singular value decomposition, Linear Algebra Appl., 240 (1996), 199–205. https://doi.org/10.1016/0024-3795(94)00197-9 doi: 10.1016/0024-3795(94)00197-9
    [11] A. W. Bojanczyk, An implicit Jacobi-like method for computing generalized hyperbolic SVD, Linear Algebra Appl., 358 (2003), 293–307. https://doi.org/10.1016/S0024-3795(02)00394-4 doi: 10.1016/S0024-3795(02)00394-4
    [12] D. S. Shirokov, A note on the hyperbolic singular value decomposition without hyperexchange matrices, J. Comput. Appl. Math., 2021, 1–8.
    [13] V. Novaković, S. Singer, A GPU-based hyperbolic SVD algorithm, BIT Numer. Math., 51 (2011), 1009–1030. https://doi.org/10.1007/s10543-011-0333-5 doi: 10.1007/s10543-011-0333-5
    [14] J. Huang, Z. Jia, A cross-product free Jacobi-Davidson type method for computing a partial generalized singular value decomposition (GSVD) of a large matrix pair, 2020, 1–14.
    [15] J. Demmel, P. Koev, Accurate SVDs of weakly diagonally dominant M-matrices, Numer. Math., 98 (2004), 99–104. https://doi.org/10.1007/s00211-004-0527-8 doi: 10.1007/s00211-004-0527-8
    [16] J. Demmel, P. Koev, Accurate SVDs of polynomial Vandermonde matrices involving orthonormal polynomials, Linear Algebra Appl., 417 (2006), 382–396. https://doi.org/10.1016/j.laa.2005.09.014 doi: 10.1016/j.laa.2005.09.014
    [17] J. Demmel, W. Kahan, Accurate singular values of bidiagonal matrices, SIAM J. Stat. Comp., 11 (1990), 873–912. https://doi.org/10.1137/0911052 doi: 10.1137/0911052
    [18] J. Demmel, Accurate singular value decompositions of structured matrices, SIAM J. Matrix Anal. Appl., 21 (1999), 562–580. https://doi.org/10.1137/S0895479897328716 doi: 10.1137/S0895479897328716
    [19] F. M. Dopico, J. Moro, A note on multiplicative backward errors of accurate SVD algorithms, SIAM J. Matrix Anal. Appl., 25 (2004), 1021–1031. https://doi.org/10.1137/S0895479803427005 doi: 10.1137/S0895479803427005
    [20] R. Escalante, M. Raydan, Alternating projection methods, Philadelphia: SIAM, 2007.
    [21] V. Fernando, B. N. Parlett, Accurate singular values and differential qd algorithms, Numer. Math., 67 (1994), 191–229. https://doi.org/10.1007/s002110050024 doi: 10.1007/s002110050024
    [22] B. Grosser, B. Lang, An O(n^2) algorithm for the bidiagonal SVD, Linear Algebra Appl., 358 (2003), 45–70. https://doi.org/10.1016/S0024-3795(01)00398-6 doi: 10.1016/S0024-3795(01)00398-6
    [23] R. A. Horn, C. R. Johnson, Matrix analysis, Cambridge: Cambridge University Press, 1985. https://doi.org/10.1017/CBO9780511810817
    [24] N. J. Higham, QR factorization with complete pivoting and accurate computation of the SVD, Linear Algebra Appl., 309 (2000), 153–174. https://doi.org/10.1016/S0024-3795(99)00230-X doi: 10.1016/S0024-3795(99)00230-X
    [25] P. Koev, Accurate eigenvalues and SVDs of totally nonnegative matrices, SIAM J. Matrix Anal. Appl., 27 (2005), 1–23. https://doi.org/10.1137/S0895479803438225 doi: 10.1137/S0895479803438225
    [26] Q. Ye, Computing singular values of diagonally dominant matrices to high relative accuracy, Math. Comput., 77 (2008), 2195–2230. https://doi.org/10.1090/S0025-5718-08-02112-1 doi: 10.1090/S0025-5718-08-02112-1
    [27] J. Blanchard, J. Tanner, K. Wei, CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, University of Oxford, 2014. https://doi.org/10.1093/imaiai/iav011
    [28] M. Hardt, E. Price, The noisy power method: A meta algorithm with applications, International Conference on Neural Information Processing Systems MIT Press, 2014.
    [29] J. Nocedal, S. Wright, Numerical optimization, Berlin: Springer, 2006.
    [30] T. Davis, Y. Hu, The University of Florida sparse matrix collection, ACM Trans. Math. Software, 38 (2011), 1–25. https://doi.org/10.1145/2049662.2049663 doi: 10.1145/2049662.2049663
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)