Research article

The semi-tensor product method for special least squares solutions of the complex generalized Sylvester matrix equation

  • Received: 30 July 2022 Revised: 10 November 2022 Accepted: 28 November 2022 Published: 13 December 2022
  • MSC : 15A06, 15A24

  • In this paper, we are interested in the minimal norm least squares Hermitian solution and the minimal norm least squares anti-Hermitian solution of the complex generalized Sylvester matrix equation $CXD+EXF=G$. By utilizing the real vector representations of complex matrices and the semi-tensor product of matrices, we first transform the problem of finding special least squares solutions of the above matrix equation into that of finding general least squares solutions of the corresponding real matrix equations, and then obtain expressions for the minimal norm least squares Hermitian solution and the minimal norm least squares anti-Hermitian solution. Further, we give two numerical algorithms and two numerical examples; the examples illustrate that our proposed algorithms are more efficient and accurate than an existing method.

    Citation: Fengxia Zhang, Ying Li, Jianli Zhao. The semi-tensor product method for special least squares solutions of the complex generalized Sylvester matrix equation[J]. AIMS Mathematics, 2023, 8(3): 5200-5215. doi: 10.3934/math.2023261




    In this paper, for the convenience of expression, we first introduce some notations. $\mathbf{R}$, $\mathbf{R}^m$ and $\mathbf{R}^{m\times n}$ stand for the sets of all real numbers, $m$-dimensional real column vectors and $m\times n$ real matrices, respectively. $\mathbf{C}$ and $\mathbf{C}^{m\times n}$ stand for the sets of all complex numbers and $m\times n$ complex matrices, respectively. $\mathbf{HC}^{m\times m}$ and $\mathbf{AHC}^{m\times m}$ stand for the sets of all $m\times m$ complex Hermitian and anti-Hermitian matrices, respectively. $\mathbf{SR}^{m\times m}$ and $\mathbf{ASR}^{m\times m}$ stand for the sets of all $m\times m$ real symmetric and anti-symmetric matrices, respectively. $I_m$ is the $m\times m$ identity matrix, and $\delta_m^k$ is the $k$-th column of $I_m$, $k=1,2,\ldots,m$. $B^{\dagger}$ and $B^T$ represent the Moore-Penrose inverse and the transpose of the matrix $B$, respectively. For the matrix $B=(\beta_1,\beta_2,\ldots,\beta_n)\in\mathbf{R}^{m\times n}$, $vec(B)$ represents the $mn$-dimensional column vector $(\beta_1^T,\beta_2^T,\ldots,\beta_n^T)^T$. $B\otimes C$ represents the Kronecker product of matrices $B$ and $C$. $\|\cdot\|$ represents the 2-norm of a vector or the Frobenius norm of a matrix. For a matrix $C$, $row_i(C)$ and $col_j(C)$ represent the $i$-th row and the $j$-th column of $C$, respectively. $rand(m,n)$ is the MATLAB function that generates an $m\times n$ random matrix.

    Linear matrix equations are widely used in applied mathematics, computational mathematics, computer science, control theory, signal and color image processing and other fields; this has aroused the interest of many scholars and led to some valuable results [1,2,3,4,5,6,7,8]. Direct methods [9,10,11,12,13,14] and iterative methods [15,16,17,18,19,20] are two common approaches to solving linear matrix equations.

    In this paper, we are interested in the following generalized Sylvester matrix equation

    $CXD+EXF=G,$ (1.1)

    in which $C,D,E,F,G$ are known matrices, and $X$ is an unknown matrix. The matrix equation (1.1) over different number fields has been widely studied in recent years. Many scholars have proposed iterative methods for different solutions of the matrix equation (1.1) [21,22,23,24,25,26]. For the direct method, some meaningful conclusions have also been obtained for the matrix equation (1.1). For example, Yuan et al. [27] gave the expression of the minimal norm least squares Hermitian solution of the complex matrix equation (1.1) by a product for matrices and vectors. Zhang et al. [28] studied the least squares Hermitian solutions of the complex matrix equation (1.1) by the real representations of complex matrices. Yuan [29] proposed the expressions of the least squares pure imaginary solutions and real solutions of the quaternion matrix equation (1.1) by using the complex representations of quaternion matrices and the Moore-Penrose inverse. Wang et al. [30] gave some necessary and sufficient conditions for the complex constraint generalized Sylvester matrix equations to have a common solution by the rank method, and obtained the expression of the general common solution. Yuan [31] solved the mixed complex Sylvester matrix equations by the generalized singular-value decomposition, and gave the explicit expression of the general solution. Yuan et al. [32] studied the Hermitian solution of the split quaternion matrix equation (1.1). Kyrchei [33] represented the solution of the quaternion matrix equation (1.1) by quaternion row-column determinants.

    In the process of solving the linearization problem of nonlinear systems, Cheng et al. [34] proposed the theory of the semi-tensor product of matrices in recent years. The semi-tensor product of matrices breaks through the limitation of dimension and realizes quasi commutativity, which is a generalization of the traditional matrix multiplication. Now, the semi-tensor product of matrices has been applied to Boolean networks [35], graph theory [36], game theory [37], logical systems [38] and so on.

    To the best of our knowledge, there is no reference on the study of the complex matrix equation (1.1) by using the semi-tensor product method. Therefore, in this paper, we will use the real vector representations of complex matrices and the semi-tensor product of matrices to study the following two problems.

    Problem 1. Suppose $C,E\in\mathbf{C}^{m\times n}$, $D,F\in\mathbf{C}^{n\times p}$, $G\in\mathbf{C}^{m\times p}$ and

    $L_H=\{X \mid X\in\mathbf{HC}^{n\times n},\ \|CXD+EXF-G\|=\min\}.$

    Find out $X_{HC}\in L_H$ satisfying $\|X_{HC}\|=\min_{X\in L_H}\|X\|$.

    Problem 2. Suppose $C,E\in\mathbf{C}^{m\times n}$, $D,F\in\mathbf{C}^{n\times p}$, $G\in\mathbf{C}^{m\times p}$ and

    $L_{AH}=\{X \mid X\in\mathbf{AHC}^{n\times n},\ \|CXD+EXF-G\|=\min\}.$

    Find out $X_{AHC}\in L_{AH}$ satisfying $\|X_{AHC}\|=\min_{X\in L_{AH}}\|X\|$.

    $X_{HC}$ and $X_{AHC}$ are called the minimal norm least squares Hermitian solution and the minimal norm least squares anti-Hermitian solution of the complex matrix equation (1.1), respectively.

    The rest of this paper is arranged as below. In Section 2, some preliminary results are presented for solving Problems 1 and 2. In Section 3, the solutions of Problems 1 and 2 are obtained by the real vector representations of complex matrices and the semi-tensor product of matrices. In Section 4, two numerical algorithms for solving Problems 1 and 2 are proposed, and two numerical examples are given to show the effectiveness of the proposed algorithms. In Section 5, this paper is summarized briefly.

    In this section, we review and present some preliminary results of solving Problems 1 and 2.

    Definition 2.1. ([39]) Let $B\in\mathbf{R}^{m\times n}$, $C\in\mathbf{R}^{s\times t}$. The semi-tensor product of $B$ and $C$ is defined as

    $B\ltimes C=(B\otimes I_{q/n})(C\otimes I_{q/s}),$

    where $q$ is the least common multiple of $n$ and $s$.

    In Definition 2.1, when $n=s$, the semi-tensor product of $B$ and $C$ is essentially the traditional matrix product. The semi-tensor product has the following properties.
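    As a small aside not in the original paper, Definition 2.1 can be implemented directly with Kronecker products; the following Python/NumPy sketch (the helper name `stp` is ours) computes $B\ltimes C$:

```python
import numpy as np
from math import lcm

def stp(B, C):
    """Semi-tensor product B ⋉ C = (B ⊗ I_{q/n})(C ⊗ I_{q/s}),
    where n = cols(B), s = rows(C) and q = lcm(n, s)."""
    n, s = B.shape[1], C.shape[0]
    q = lcm(n, s)
    return np.kron(B, np.eye(q // n)) @ np.kron(C, np.eye(q // s))
```

    When $n=s$ this reduces to the ordinary product $BC$, and the product remains associative for arbitrary dimensions, in line with Lemma 2.2 below.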

    Lemma 2.2. ([39]) Suppose A,B,C,D,E are real matrices of appropriate orders. There are the following conclusions:

    (1) $(A\ltimes B)\ltimes C=A\ltimes(B\ltimes C)$.

    (2) $A\ltimes(D+E)=A\ltimes D+A\ltimes E$, $(D+E)\ltimes A=D\ltimes A+E\ltimes A$.

    Lemma 2.3. ([39]) Let $\alpha\in\mathbf{R}^m$, $B\in\mathbf{R}^{p\times q}$, and then $\alpha\ltimes B=(I_m\otimes B)\ltimes\alpha$.

    Lemma 2.3 reflects the quasi-commutativity of a vector and a matrix. In order to realize the commutativity between vectors, the following swap matrix is important.

    Definition 2.4. ([39]) The following square matrix

    $S_{[m,n]}=(I_n\otimes\delta_m^1,\ I_n\otimes\delta_m^2,\ \ldots,\ I_n\otimes\delta_m^m)$

    is called the $(m,n)$-dimension swap matrix.

    Lemma 2.5. ([39]) Let $\alpha\in\mathbf{R}^m$, $\beta\in\mathbf{R}^n$, then

    $S_{[m,n]}\ltimes\alpha\ltimes\beta=\beta\ltimes\alpha,$

    where $S_{[m,n]}$ is the same as that in Definition 2.4.
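    As an illustration (ours, not the paper's): for column vectors the semi-tensor product reduces to the Kronecker product, so Lemma 2.5 reads $S_{[m,n]}(\alpha\otimes\beta)=\beta\otimes\alpha$. A Python/NumPy sketch of Definition 2.4 (the helper name `swap_matrix` is ours):

```python
import numpy as np

def swap_matrix(m, n):
    """S_[m,n] = (I_n ⊗ δ_m^1, I_n ⊗ δ_m^2, ..., I_n ⊗ δ_m^m),
    an mn x mn permutation matrix."""
    Im = np.eye(m)
    return np.hstack([np.kron(np.eye(n), Im[:, [k]]) for k in range(m)])
```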

    To study Problems 1 and 2, we define the real vector representations of complex number, complex vector and complex matrix, and give their properties.

    Let $a=a_1+a_2i$ be a complex number, where $a_1,a_2\in\mathbf{R}$, and then the following column vector

    $\vec{a}=\begin{pmatrix}a_1\\ a_2\end{pmatrix}$

    is defined as the real vector representation of the complex number $a$.

    Lemma 2.6. ([40]) Let $a,b\in\mathbf{C}$, then $\overrightarrow{ab}=W\ltimes\vec{a}\ltimes\vec{b}$, where

    $W=\begin{pmatrix}1&0&0&-1\\ 0&1&1&0\end{pmatrix}.$
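    A quick numerical check of Lemma 2.6 (ours, not part of the paper; the sign pattern of $W$ is reconstructed from the multiplication rule $(a_1+a_2i)(b_1+b_2i)=(a_1b_1-a_2b_2)+(a_1b_2+a_2b_1)i$, and for vectors the semi-tensor product is the Kronecker product):

```python
import numpy as np

def rvec(z):
    """Real vector representation of a complex scalar a = a1 + a2*i."""
    return np.array([z.real, z.imag])

# W maps (a-vec ⊗ b-vec) to the representation of the product ab.
W = np.array([[1.0, 0.0, 0.0, -1.0],
              [0.0, 1.0, 1.0,  0.0]])
```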

    Suppose $\alpha=(x_1,x_2,\ldots,x_n)$ and $\beta=(y_1,y_2,\ldots,y_m)^T$ are two complex vectors. The following column vectors

    $\vec{\alpha}=\begin{pmatrix}\vec{x}_1\\ \vec{x}_2\\ \vdots\\ \vec{x}_n\end{pmatrix},\qquad \vec{\beta}=\begin{pmatrix}\vec{y}_1\\ \vec{y}_2\\ \vdots\\ \vec{y}_m\end{pmatrix}$

    are defined as the real vector representations of the complex vectors $\alpha$ and $\beta$, respectively. And we can easily get the following properties of the real vector representations of complex vectors:

    $\vec{x}_i=(\delta_n^i)^T\ltimes\vec{\alpha},\qquad \vec{y}_j=(\delta_m^j)^T\ltimes\vec{\beta},$

    in which $i=1,2,\ldots,n$, $j=1,2,\ldots,m$.

    Lemma 2.7. ([40]) Suppose $\alpha=(x_1,x_2,\ldots,x_n)$ and $\beta=(y_1,y_2,\ldots,y_n)^T$ are two complex vectors. $W$ is the same as that in Lemma 2.6. Then

    $\overrightarrow{\alpha\beta}=F_n\ltimes\vec{\alpha}\ltimes\vec{\beta},$

    in which $F_n=W\ltimes\sum_{i=1}^{n}[(\delta_n^i)^T\ltimes(I_{2n}\otimes(\delta_n^i)^T)]$.
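    The formula for $F_n$ can also be checked numerically. The sketch below (ours, not part of the paper; the helper names are assumptions, and the sign pattern of $W$ is reconstructed from complex multiplication) builds $F_n$ from the semi-tensor product and verifies $\overrightarrow{\alpha\beta}=F_n(\vec{\alpha}\otimes\vec{\beta})$:

```python
import numpy as np
from math import lcm

def stp(B, C):
    # Semi-tensor product (B ⊗ I_{q/n})(C ⊗ I_{q/s}), q = lcm(n, s).
    n, s = B.shape[1], C.shape[0]
    q = lcm(n, s)
    return np.kron(B, np.eye(q // n)) @ np.kron(C, np.eye(q // s))

W = np.array([[1.0, 0.0, 0.0, -1.0],   # sign pattern reconstructed from
              [0.0, 1.0, 1.0,  0.0]])  # (a1 + a2 i)(b1 + b2 i)

def F_matrix(n):
    """F_n = W ⋉ Σ_i (δ_n^i)^T ⋉ (I_{2n} ⊗ (δ_n^i)^T), as in Lemma 2.7."""
    In = np.eye(n)
    S = sum(stp(In[:, [i]].T, np.kron(np.eye(2 * n), In[:, [i]].T))
            for i in range(n))
    return stp(W, S)

def rvec(v):
    """Real vector representation of a complex vector: stack (Re, Im) pairs."""
    return np.column_stack([v.real, v.imag]).ravel()
```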

    Let $A\in\mathbf{C}^{m\times n}$, and then the following column vectors

    $A^c=\begin{pmatrix}\overrightarrow{col_1(A)}\\ \overrightarrow{col_2(A)}\\ \vdots\\ \overrightarrow{col_n(A)}\end{pmatrix},\qquad A^r=\begin{pmatrix}\overrightarrow{row_1(A)}\\ \overrightarrow{row_2(A)}\\ \vdots\\ \overrightarrow{row_m(A)}\end{pmatrix}$

    are defined as the real column and row vector representations of $A$, respectively. And they satisfy

    $\overrightarrow{col_i(A)}=(\delta_n^i)^T\ltimes A^c,\qquad \overrightarrow{row_j(A)}=(\delta_m^j)^T\ltimes A^r,$

    in which $i=1,2,\ldots,n$, $j=1,2,\ldots,m$.

    Further, the following properties also hold.

    Lemma 2.8. Let $A,B\in\mathbf{C}^{m\times n}$, and then

    (1) $(A\pm B)^c=A^c\pm B^c$, $(A\pm B)^r=A^r\pm B^r$;

    (2) $\|A\|=\|A^c\|=\|A^r\|$.

    Proof. (1) Notice that

    $col_j(A\pm B)=col_j(A)\pm col_j(B),\ j=1,2,\ldots,n,\qquad row_i(A\pm B)=row_i(A)\pm row_i(B),\ i=1,2,\ldots,m.$

    Thus, we obtain

    $(A\pm B)^c=\begin{pmatrix}\overrightarrow{col_1(A)\pm col_1(B)}\\ \vdots\\ \overrightarrow{col_n(A)\pm col_n(B)}\end{pmatrix}=\begin{pmatrix}\overrightarrow{col_1(A)}\\ \vdots\\ \overrightarrow{col_n(A)}\end{pmatrix}\pm\begin{pmatrix}\overrightarrow{col_1(B)}\\ \vdots\\ \overrightarrow{col_n(B)}\end{pmatrix}=A^c\pm B^c,$

    $(A\pm B)^r=\begin{pmatrix}\overrightarrow{row_1(A)\pm row_1(B)}\\ \vdots\\ \overrightarrow{row_m(A)\pm row_m(B)}\end{pmatrix}=\begin{pmatrix}\overrightarrow{row_1(A)}\\ \vdots\\ \overrightarrow{row_m(A)}\end{pmatrix}\pm\begin{pmatrix}\overrightarrow{row_1(B)}\\ \vdots\\ \overrightarrow{row_m(B)}\end{pmatrix}=A^r\pm B^r,$

    which show that (1) holds.

    (2) Because $\|\overrightarrow{row_i(A)}\|^2=\|row_i(A)\|^2$ and $\|\overrightarrow{col_j(A)}\|^2=\|col_j(A)\|^2$, we get

    $\|A\|^2=\sum_{i=1}^{m}\|row_i(A)\|^2=\sum_{i=1}^{m}\|\overrightarrow{row_i(A)}\|^2=\|A^r\|^2,\qquad \|A\|^2=\sum_{j=1}^{n}\|col_j(A)\|^2=\sum_{j=1}^{n}\|\overrightarrow{col_j(A)}\|^2=\|A^c\|^2.$

    Thus (2) holds.

    Lemma 2.9. Let $A\in\mathbf{C}^{m\times n}$, $B\in\mathbf{C}^{n\times p}$. $F_n$ is the same as that in Lemma 2.7. Then

    $(AB)^c=K_{mnp}\ltimes A^r\ltimes B^c,$

    in which

    $K_{mnp}=\begin{pmatrix}K_{mnp}^1\\ K_{mnp}^2\\ \vdots\\ K_{mnp}^p\end{pmatrix},\qquad K_{mnp}^i=\begin{pmatrix}F_n\ltimes(\delta_m^1)^T\ltimes(I_{2mn}\otimes(\delta_p^i)^T)\\ F_n\ltimes(\delta_m^2)^T\ltimes(I_{2mn}\otimes(\delta_p^i)^T)\\ \vdots\\ F_n\ltimes(\delta_m^m)^T\ltimes(I_{2mn}\otimes(\delta_p^i)^T)\end{pmatrix},\quad i=1,2,\ldots,p.$

    Proof. Block the matrices A,B into the following forms:

    $A=\begin{pmatrix}row_1(A)\\ row_2(A)\\ \vdots\\ row_m(A)\end{pmatrix},\qquad B=(col_1(B),col_2(B),\ldots,col_p(B)).$

    By using Lemmas 2.3 and 2.7, we obtain

    $\overrightarrow{row_i(A)col_j(B)}=F_n\ltimes\overrightarrow{row_i(A)}\ltimes\overrightarrow{col_j(B)}=F_n\ltimes(\delta_m^i)^T\ltimes A^r\ltimes(\delta_p^j)^T\ltimes B^c=F_n\ltimes(\delta_m^i)^T\ltimes[I_{2mn}\otimes(\delta_p^j)^T]\ltimes A^r\ltimes B^c.$

    Thus, there is

    $(AB)^c=\begin{pmatrix}\overrightarrow{row_1(A)col_1(B)}\\ \vdots\\ \overrightarrow{row_m(A)col_1(B)}\\ \vdots\\ \overrightarrow{row_1(A)col_p(B)}\\ \vdots\\ \overrightarrow{row_m(A)col_p(B)}\end{pmatrix}=\begin{pmatrix}F_n\ltimes(\delta_m^1)^T\ltimes(I_{2mn}\otimes(\delta_p^1)^T)\\ \vdots\\ F_n\ltimes(\delta_m^m)^T\ltimes(I_{2mn}\otimes(\delta_p^1)^T)\\ \vdots\\ F_n\ltimes(\delta_m^1)^T\ltimes(I_{2mn}\otimes(\delta_p^p)^T)\\ \vdots\\ F_n\ltimes(\delta_m^m)^T\ltimes(I_{2mn}\otimes(\delta_p^p)^T)\end{pmatrix}\ltimes A^r\ltimes B^c.$

    So Lemma 2.9 holds.

    Lemma 2.10. ([40]) Let $A\in\mathbf{C}^{m\times n}$, $B\in\mathbf{C}^{n\times p}$. $F_n$ is the same as that in Lemma 2.7. Then

    $(AB)^r=\tilde{K}_{mnp}\ltimes A^r\ltimes B^c,$

    in which

    $\tilde{K}_{mnp}=\begin{pmatrix}\tilde{K}_{mnp}^1\\ \tilde{K}_{mnp}^2\\ \vdots\\ \tilde{K}_{mnp}^m\end{pmatrix},\qquad \tilde{K}_{mnp}^j=\begin{pmatrix}F_n\ltimes(\delta_m^j)^T\ltimes(I_{2mn}\otimes(\delta_p^1)^T)\\ F_n\ltimes(\delta_m^j)^T\ltimes(I_{2mn}\otimes(\delta_p^2)^T)\\ \vdots\\ F_n\ltimes(\delta_m^j)^T\ltimes(I_{2mn}\otimes(\delta_p^p)^T)\end{pmatrix},\quad j=1,2,\ldots,m.$

    In the last part of this section, we propose the necessary and sufficient conditions for a complex matrix to be Hermitian and anti-Hermitian.

    Lemma 2.11. Let $X=X_1+X_2i\in\mathbf{C}^{n\times n}$, then

    $X^c=M\begin{pmatrix}vec(X_1)\\ vec(X_2)\end{pmatrix},$

    where $M=(\delta_{2n^2}^1,\delta_{2n^2}^3,\ldots,\delta_{2n^2}^{2n^2-1},\delta_{2n^2}^2,\delta_{2n^2}^4,\ldots,\delta_{2n^2}^{2n^2})$.

    Proof. Let $X_1=(x_{ij}^{(1)})_{n\times n}$, $X_2=(x_{ij}^{(2)})_{n\times n}$, and then

    $vec(X_1)=(x_{11}^{(1)},x_{21}^{(1)},\ldots,x_{n1}^{(1)},\ldots,x_{1n}^{(1)},x_{2n}^{(1)},\ldots,x_{nn}^{(1)})^T,$
    $vec(X_2)=(x_{11}^{(2)},x_{21}^{(2)},\ldots,x_{n1}^{(2)},\ldots,x_{1n}^{(2)},x_{2n}^{(2)},\ldots,x_{nn}^{(2)})^T.$

    Notice that

    $(\delta_{2n^2}^1,\delta_{2n^2}^3,\ldots,\delta_{2n^2}^{2n^2-1})vec(X_1)=(x_{11}^{(1)},0,x_{21}^{(1)},0,\ldots,x_{n1}^{(1)},0,\ldots,x_{1n}^{(1)},0,x_{2n}^{(1)},0,\ldots,x_{nn}^{(1)},0)^T,$
    $(\delta_{2n^2}^2,\delta_{2n^2}^4,\ldots,\delta_{2n^2}^{2n^2})vec(X_2)=(0,x_{11}^{(2)},0,x_{21}^{(2)},\ldots,0,x_{n1}^{(2)},\ldots,0,x_{1n}^{(2)},0,x_{2n}^{(2)},\ldots,0,x_{nn}^{(2)})^T,$

    so we have

    $M\begin{pmatrix}vec(X_1)\\ vec(X_2)\end{pmatrix}=(\delta_{2n^2}^1,\delta_{2n^2}^3,\ldots,\delta_{2n^2}^{2n^2-1})vec(X_1)+(\delta_{2n^2}^2,\delta_{2n^2}^4,\ldots,\delta_{2n^2}^{2n^2})vec(X_2)=(x_{11}^{(1)},x_{11}^{(2)},x_{21}^{(1)},x_{21}^{(2)},\ldots,x_{n1}^{(1)},x_{n1}^{(2)},\ldots,x_{1n}^{(1)},x_{1n}^{(2)},\ldots,x_{nn}^{(1)},x_{nn}^{(2)})^T=X^c.$

    So Lemma 2.11 holds.

    For a real matrix $X=(x_{ij})\in\mathbf{R}^{n\times n}$, we denote

    $\alpha_1=(x_{11},x_{21},\ldots,x_{n1}),\ \alpha_2=(x_{22},x_{32},\ldots,x_{n2}),\ \ldots,\ \alpha_{n-1}=(x_{(n-1)(n-1)},x_{n(n-1)}),\ \alpha_n=x_{nn},$
    $\beta_1=(x_{21},x_{31},\ldots,x_{n1}),\ \beta_2=(x_{32},x_{42},\ldots,x_{n2}),\ \ldots,\ \beta_{n-2}=(x_{(n-1)(n-2)},x_{n(n-2)}),\ \beta_{n-1}=x_{n(n-1)}.$

    $vec_S(X)$ and $vec_A(X)$ stand for the following vectors:

    $vec_S(X)=(\alpha_1,\alpha_2,\ldots,\alpha_{n-1},\alpha_n)^T\in\mathbf{R}^{\frac{n(n+1)}{2}},$ (2.1)
    $vec_A(X)=(\beta_1,\beta_2,\ldots,\beta_{n-2},\beta_{n-1})^T\in\mathbf{R}^{\frac{n(n-1)}{2}}.$ (2.2)

    Lemma 2.12. ([7]) Let $X\in\mathbf{R}^{n\times n}$, and then the following conclusions hold.

    (1) $X\in\mathbf{SR}^{n\times n}\Longleftrightarrow vec(X)=P_n vec_S(X)$, where

    $P_n=\begin{pmatrix}\delta_n^1&\delta_n^2&\cdots&\delta_n^{n-1}&\delta_n^n&0&\cdots&0&0&\cdots&0&0\\ 0&\delta_n^1&\cdots&0&0&\delta_n^2&\cdots&\delta_n^{n-1}&\delta_n^n&\cdots&0&0\\ \vdots&\vdots&&\vdots&\vdots&\vdots&&\vdots&\vdots&&\vdots&\vdots\\ 0&0&\cdots&\delta_n^1&0&0&\cdots&\delta_n^2&0&\cdots&\delta_n^{n-1}&\delta_n^n\\ 0&0&\cdots&0&\delta_n^1&0&\cdots&0&\delta_n^2&\cdots&0&\delta_n^n\end{pmatrix}.$

    (2) $X\in\mathbf{ASR}^{n\times n}\Longleftrightarrow vec(X)=Q_n vec_A(X)$, where

    $Q_n=\begin{pmatrix}\delta_n^2&\delta_n^3&\cdots&\delta_n^n&0&\cdots&0&\cdots&0\\ -\delta_n^1&0&\cdots&0&\delta_n^3&\cdots&\delta_n^n&\cdots&0\\ \vdots&\vdots&&\vdots&\vdots&&\vdots&&\vdots\\ 0&0&\cdots&-\delta_n^1&0&\cdots&-\delta_n^2&\cdots&-\delta_n^{n-1}\end{pmatrix}.$
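    As a concrete aid (ours, not part of the paper), $P_n$ can be built programmatically from its defining property $vec(X)=P_n\,vec_S(X)$. The sketch below (helper names `vecS_index` and `build_Pn` are ours) assumes $vec_S$ stacks the lower-triangular part column by column, as in (2.1):

```python
import numpy as np

def vecS_index(i, j, n):
    """0-based position of entry x_ij (i >= j) in vec_S(X), which stacks
    the lower-triangular part of X column by column."""
    return sum(n - k for k in range(j)) + (i - j)

def build_Pn(n):
    """P_n satisfying vec(X) = P_n vec_S(X) for symmetric X
    (vec stacks the columns of X)."""
    P = np.zeros((n * n, n * (n + 1) // 2))
    for j in range(n):           # column index of X
        for i in range(n):       # row index of X
            r, c = (i, j) if i >= j else (j, i)   # symmetry: x_ij = x_ji
            P[j * n + i, vecS_index(r, c, n)] = 1.0
    return P
```

    $Q_n$ can be built the same way, with a $-1$ wherever $x_{ij}=-x_{ji}$ is used and the zero diagonal dropped.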

    We can get the following results by Lemmas 2.11 and 2.12.

    Lemma 2.13. Suppose $X=X_1+X_2i\in\mathbf{C}^{n\times n}$.

    (1) If $X\in\mathbf{HC}^{n\times n}$, then $X^c=H\begin{pmatrix}vec_S(X_1)\\ vec_A(X_2)\end{pmatrix}$, where $H=M\begin{pmatrix}P_n&0\\ 0&Q_n\end{pmatrix}$.

    (2) If $X\in\mathbf{AHC}^{n\times n}$, then $X^c=\tilde{H}\begin{pmatrix}vec_A(X_1)\\ vec_S(X_2)\end{pmatrix}$, where $\tilde{H}=M\begin{pmatrix}Q_n&0\\ 0&P_n\end{pmatrix}$.

    Proof. For $X\in\mathbf{HC}^{n\times n}$, we have $X_1\in\mathbf{SR}^{n\times n}$, $X_2\in\mathbf{ASR}^{n\times n}$. By Lemma 2.12, we obtain

    $vec(X_1)=P_n vec_S(X_1),\qquad vec(X_2)=Q_n vec_A(X_2).$

    According to Lemma 2.11, we get

    $X^c=M\begin{pmatrix}vec(X_1)\\ vec(X_2)\end{pmatrix}=M\begin{pmatrix}P_n vec_S(X_1)\\ Q_n vec_A(X_2)\end{pmatrix}=M\begin{pmatrix}P_n&0\\ 0&Q_n\end{pmatrix}\begin{pmatrix}vec_S(X_1)\\ vec_A(X_2)\end{pmatrix}=H\begin{pmatrix}vec_S(X_1)\\ vec_A(X_2)\end{pmatrix},$

    which shows that (1) holds. Similarly, we can prove that (2) holds.

    In this section, by using the real vector representations of complex matrices and the semi-tensor product of matrices, we first transform the least squares problems of Problems 1 and 2 into the corresponding real least squares problems with free variables, and then obtain the expressions of the solutions of Problems 1 and 2. Further, we give the necessary and sufficient conditions for the complex matrix equation (1.1) to have Hermitian and anti-Hermitian solutions.

    Theorem 3.1. Let $C,E\in\mathbf{C}^{m\times n}$, $D,F\in\mathbf{C}^{n\times p}$, $G\in\mathbf{C}^{m\times p}$. $S_{[2np,2n^2]}$, $K_{mnp}$, $\tilde{K}_{mnn}$ and $H$ are the same as those in Definition 2.4, Lemma 2.9, Lemma 2.10 and Lemma 2.13, respectively. Denote

    $U=K_{mnp}\ltimes\tilde{K}_{mnn}\ltimes(C^r\ltimes S_{[2np,2n^2]}\ltimes D^c+E^r\ltimes S_{[2np,2n^2]}\ltimes F^c).$

    Then the set $L_H$ in Problem 1 can be expressed as

    $L_H=\{X \mid X^c=H(UH)^{\dagger}G^c+H[I_{n^2}-(UH)^{\dagger}(UH)]y,\ y\in\mathbf{R}^{n^2}\},$ (3.1)

    and the unique Hermitian solution $X_{HC}\in L_H$ of Problem 1 satisfies

    $(X_{HC})^c=H(UH)^{\dagger}G^c.$ (3.2)

    Proof. By Lemmas 2.5 and 2.8–2.10, we obtain

    $\|CXD+EXF-G\|=\|(CXD+EXF-G)^c\|=\|(CXD)^c+(EXF)^c-G^c\|=\|K_{mnp}\ltimes(CX)^r\ltimes D^c+K_{mnp}\ltimes(EX)^r\ltimes F^c-G^c\|=\|K_{mnp}\ltimes\tilde{K}_{mnn}\ltimes C^r\ltimes X^c\ltimes D^c+K_{mnp}\ltimes\tilde{K}_{mnn}\ltimes E^r\ltimes X^c\ltimes F^c-G^c\|=\|K_{mnp}\ltimes\tilde{K}_{mnn}\ltimes(C^r\ltimes S_{[2np,2n^2]}\ltimes D^c+E^r\ltimes S_{[2np,2n^2]}\ltimes F^c)\ltimes X^c-G^c\|=\|UX^c-G^c\|.$

    For the Hermitian matrix $X=X_1+X_2i$, by applying Lemma 2.13, we have

    $X^c=H\begin{pmatrix}vec_S(X_1)\\ vec_A(X_2)\end{pmatrix}.$

    So we get

    $\|CXD+EXF-G\|=\left\|UH\begin{pmatrix}vec_S(X_1)\\ vec_A(X_2)\end{pmatrix}-G^c\right\|.$

    Then the complex least squares problem of Problem 1 is converted into the real least squares problem with free variables

    $\min\left\|UH\begin{pmatrix}vec_S(X_1)\\ vec_A(X_2)\end{pmatrix}-G^c\right\|.$

    The general solution of the above least squares problem is

    $\begin{pmatrix}vec_S(X_1)\\ vec_A(X_2)\end{pmatrix}=(UH)^{\dagger}G^c+[I_{n^2}-(UH)^{\dagger}(UH)]y,\ y\in\mathbf{R}^{n^2}.$

    Then applying Lemma 2.13, we obtain that (3.1) holds. Therefore, the unique Hermitian solution $X_{HC}\in L_H$ of Problem 1 satisfies (3.2).
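    The step above relies on the standard fact that $\min\|Ax-b\|$ has general solution $x=A^{\dagger}b+[I-A^{\dagger}A]y$, with $A^{\dagger}b$ the minimal norm least squares solution. A small NumPy sketch of this fact (ours, with randomly generated data):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))  # rank-deficient
b = rng.standard_normal(6)

A_pinv = np.linalg.pinv(A)
x_min = A_pinv @ b                      # minimal norm least squares solution

# Every x = x_min + (I - A⁺A) y is also a least squares solution,
# and none has a smaller 2-norm than x_min.
y = rng.standard_normal(5)
x_other = x_min + (np.eye(5) - A_pinv @ A) @ y
```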

    Now, we propose a necessary and sufficient condition that the complex matrix equation (1.1) has a Hermitian solution, and the expression of general Hermitian solutions.

    Corollary 3.2. Let $C,E\in\mathbf{C}^{m\times n}$, $D,F\in\mathbf{C}^{n\times p}$, $G\in\mathbf{C}^{m\times p}$. $H$ and $U$ are the same as those in Lemma 2.13 and Theorem 3.1, respectively. Then the necessary and sufficient condition for the complex matrix equation (1.1) to have a Hermitian solution is

    $(UH(UH)^{\dagger}-I_{2mp})G^c=0.$ (3.3)

    If (3.3) holds, the Hermitian solution set of the complex matrix equation (1.1) is

    $S_H=\{X \mid X^c=H(UH)^{\dagger}G^c+H[I_{n^2}-(UH)^{\dagger}(UH)]y,\ y\in\mathbf{R}^{n^2}\}.$ (3.4)

    Proof. The complex matrix equation (1.1) has a Hermitian solution if and only if

    $\min_{X\in\mathbf{HC}^{n\times n}}\|CXD+EXF-G\|=0.$

    According to Theorem 3.1, for any $X\in L_H$ we have

    $\|CXD+EXF-G\|=\left\|UH\begin{pmatrix}vec_S(X_1)\\ vec_A(X_2)\end{pmatrix}-G^c\right\|=\|UH(UH)^{\dagger}G^c-G^c\|=\|(UH(UH)^{\dagger}-I_{2mp})G^c\|.$

    Therefore, the necessary and sufficient condition for the complex matrix equation (1.1) to have a Hermitian solution is

    $(UH(UH)^{\dagger}-I_{2mp})G^c=0,$

    which illustrates that (3.3) holds. For $X\in\mathbf{HC}^{n\times n}$, we get

    $\|CXD+EXF-G\|=\left\|UH\begin{pmatrix}vec_S(X_1)\\ vec_A(X_2)\end{pmatrix}-G^c\right\|.$

    Therefore, when the complex matrix equation (1.1) has a Hermitian solution, the Hermitian solutions of (1.1) satisfy the following equation:

    $UH\begin{pmatrix}vec_S(X_1)\\ vec_A(X_2)\end{pmatrix}=G^c.$

    By Theorem 3.1, (3.4) is established.

    Theorem 3.3. Let $C,E\in\mathbf{C}^{m\times n}$, $D,F\in\mathbf{C}^{n\times p}$, $G\in\mathbf{C}^{m\times p}$. $\tilde{H}$ and $U$ are the same as those in Lemma 2.13 and Theorem 3.1, respectively. Then the set $L_{AH}$ of Problem 2 can be expressed as

    $L_{AH}=\{X \mid X^c=\tilde{H}(U\tilde{H})^{\dagger}G^c+\tilde{H}[I_{n^2}-(U\tilde{H})^{\dagger}(U\tilde{H})]y,\ y\in\mathbf{R}^{n^2}\},$ (3.5)

    and the unique anti-Hermitian solution $X_{AHC}\in L_{AH}$ of Problem 2 satisfies

    $(X_{AHC})^c=\tilde{H}(U\tilde{H})^{\dagger}G^c.$ (3.6)

    Proof. By Theorem 3.1, we obtain

    $\|CXD+EXF-G\|=\|UX^c-G^c\|.$

    For $X=X_1+X_2i\in\mathbf{AHC}^{n\times n}$, we have $X^c=\tilde{H}\begin{pmatrix}vec_A(X_1)\\ vec_S(X_2)\end{pmatrix}$. Thus we get

    $\|CXD+EXF-G\|=\left\|U\tilde{H}\begin{pmatrix}vec_A(X_1)\\ vec_S(X_2)\end{pmatrix}-G^c\right\|.$

    $\min\left\|U\tilde{H}\begin{pmatrix}vec_A(X_1)\\ vec_S(X_2)\end{pmatrix}-G^c\right\|$ has the following general solution:

    $\begin{pmatrix}vec_A(X_1)\\ vec_S(X_2)\end{pmatrix}=(U\tilde{H})^{\dagger}G^c+[I_{n^2}-(U\tilde{H})^{\dagger}(U\tilde{H})]y,\ y\in\mathbf{R}^{n^2}.$

    By Lemma 2.13, (3.5) holds. Further, the unique anti-Hermitian solution $X_{AHC}\in L_{AH}$ of Problem 2 satisfies (3.6).

    Now, we give the necessary and sufficient condition for the complex matrix equation (1.1) to have an anti-Hermitian solution and the expression of the general anti-Hermitian solution. Because the research method is similar to that of Corollary 3.2, we only give these conclusions and omit the specific derivation process.

    Corollary 3.4. Let $C,E\in\mathbf{C}^{m\times n}$, $D,F\in\mathbf{C}^{n\times p}$, $G\in\mathbf{C}^{m\times p}$. $\tilde{H}$ and $U$ are the same as those in Lemma 2.13 and Theorem 3.1, respectively. Then the necessary and sufficient condition for the complex matrix equation (1.1) to have an anti-Hermitian solution is

    $(U\tilde{H}(U\tilde{H})^{\dagger}-I_{2mp})G^c=0.$ (3.7)

    If (3.7) holds, the anti-Hermitian solution set of the complex matrix equation (1.1) is

    $S_{AH}=\{X \mid X^c=\tilde{H}(U\tilde{H})^{\dagger}G^c+\tilde{H}[I_{n^2}-(U\tilde{H})^{\dagger}(U\tilde{H})]y,\ y\in\mathbf{R}^{n^2}\}.$ (3.8)

    Remark 1. The semi-tensor product of matrices provides a new method for solving linear matrix equations. The feature of this method is to first convert complex matrices into corresponding real vectors, and then use the quasi-commutativity of the vectors to transform the complex matrix equation into a real linear system of the form $Ax=b$, so as to obtain the solution of the complex matrix equation. This method only involves real matrices and real vectors, so it is more convenient in numerical calculation. The weakness of this method is that straightening a complex matrix into a real vector expands the dimension, and therefore it is not convenient for complex matrix equations of higher order.

    Remark 2. In [27], the authors propose a method of solving Problem 1 by a product for matrices and vectors. This method involves many complex matrix calculations, Moore-Penrose inverses and matrix inner products, which reduces the accuracy of the result to some degree. The numerical example in Section 4 will illustrate this.

    In this section, two numerical algorithms are first proposed to solve Problems 1 and 2, and then two numerical examples are given to show the effectiveness of the proposed algorithms. In the first example, we give the errors of the solutions of Problems 1 and 2. In the second example, we compare the accuracy of the solution of Problem 1 calculated by Algorithm 4.1 and by the method in [27]. All calculations are implemented on an Intel Core i7-2600@3.40GHz/8GB computer by using MATLAB R2021b.

    Algorithm 4.1. (The solution of Problem 1)

    (1) Input matrices $C,E\in\mathbf{C}^{m\times n}$, $D,F\in\mathbf{C}^{n\times p}$, $G\in\mathbf{C}^{m\times p}$, and $W$, $S_{[2np,2n^2]}$, $P_n$, $Q_n$, $M$.

    (2) Generate $C^r$, $E^r$, $D^c$, $F^c$, $G^c$, $F_n$, and then calculate $K_{mnp}$, $\tilde{K}_{mnn}$, $U$ and $H$.

    (3) Calculate the unique Hermitian solution $X_{HC}$ of Problem 1 by (3.2).

    Algorithm 4.2. (The solution of Problem 2)

    (1) Input matrices $C,E\in\mathbf{C}^{m\times n}$, $D,F\in\mathbf{C}^{n\times p}$, $G\in\mathbf{C}^{m\times p}$, and $W$, $S_{[2np,2n^2]}$, $P_n$, $Q_n$, $M$.

    (2) Generate $C^r$, $E^r$, $D^c$, $F^c$, $G^c$, $F_n$, and then calculate $K_{mnp}$, $\tilde{K}_{mnn}$, $U$ and $\tilde{H}$.

    (3) Calculate the unique anti-Hermitian solution $X_{AHC}$ of Problem 2 by (3.6).
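    For small problems, the output of Algorithms 4.1 and 4.2 can be cross-checked against a classical Kronecker-product formulation. The sketch below (ours, not the paper's semi-tensor algorithm; the function name `solve_problem1_small` and the parametrization are assumptions) uses $vec(CXD)=(D^T\otimes C)vec(X)$ and the real parametrization $X=X_1+X_2i$ with $X_1$ symmetric and $X_2$ anti-symmetric to compute a least squares Hermitian solution:

```python
import numpy as np

def solve_problem1_small(C, D, E, F, G):
    """Least squares Hermitian solution of CXD + EXF = G via
    vec(CXD) = (D^T ⊗ C) vec(X) over a real parametrization
    X = X1 + i X2, X1 symmetric, X2 anti-symmetric."""
    n = C.shape[1]
    A = np.kron(D.T, C) + np.kron(F.T, E)   # vec(CXD + EXF) = A vec(X)

    # Basis matrices for symmetric X1 and anti-symmetric X2.
    sym, asym = [], []
    for j in range(n):
        for i in range(j, n):
            S = np.zeros((n, n)); S[i, j] = S[j, i] = 1.0
            sym.append(S.flatten(order="F"))
            if i > j:
                T = np.zeros((n, n)); T[i, j] = 1.0; T[j, i] = -1.0
                asym.append(T.flatten(order="F"))
    Br = np.array(sym).T                     # n^2 x n(n+1)/2
    Bi = np.array(asym).T                    # n^2 x n(n-1)/2

    # Real least squares system for the parameter vector [p_r; p_i].
    cols = np.hstack([A @ Br, 1j * (A @ Bi)])
    M = np.vstack([cols.real, cols.imag])
    g = G.flatten(order="F")
    b = np.concatenate([g.real, g.imag])
    p = np.linalg.pinv(M) @ b                # minimal norm parameter vector
    k = Br.shape[1]
    x = Br @ p[:k] + 1j * (Bi @ p[k:])
    return x.reshape((n, n), order="F")
```

    Note that this sketch minimizes the norm of the parameter vector rather than $\|X\|$ itself, but for a consistent system with a unique Hermitian solution, as in Example 4.1 below, the two coincide.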

    Example 4.1. Suppose $C,E\in\mathbf{C}^{m\times n}$, $D,F\in\mathbf{C}^{n\times p}$ are random matrices generated by MATLAB. Let $M_1=rand(n,n)$, $M_2=rand(n,n)$.

    (1) According to M1,M2, we generate the following matrix

    $X=(M_1+M_1^T)+(M_2-M_2^T)i,$

    and then $X\in\mathbf{HC}^{n\times n}$. Let $G=CXD+EXF$, and $m=n=p=k$. Here, all matrices are square, and the probability that random matrices are nonsingular is 1. Therefore, (1.1) has a unique Hermitian solution $X$, which is also the unique solution of Problem 1. We calculate the solution $X_{HC}$ of Problem 1 by Algorithm 4.1. Let $k=2{:}10$ and the error $\varepsilon=\log_{10}(\|X_{HC}-X\|)$. The relation between the error $\varepsilon$ and $k$ is shown in Figure 1 (a).

    Figure 1.  The errors of solving Problems 1 and 2.

    (2) Generate the following matrix

    $X=(M_1-M_1^T)+(M_2+M_2^T)i,$

    and then $X\in\mathbf{AHC}^{n\times n}$. Let $G=CXD+EXF$, and $m=n=p=k$. Then, (1.1) has a unique anti-Hermitian solution $X$, which is also the unique solution of Problem 2. We calculate the solution $X_{AHC}$ of Problem 2 by Algorithm 4.2. Let $k=2{:}10$ and the error $\varepsilon=\log_{10}(\|X_{AHC}-X\|)$. The relation between the error $\varepsilon$ and $k$ is shown in Figure 1 (b).

    From Figure 1, we see that $\varepsilon<-12$, and thus the errors of solving Problems 1 and 2 are all no more than $10^{-12}$. This shows the effectiveness of Algorithms 4.1 and 4.2. In addition, panels (a) and (b) of Figure 1 differ little, which reflects that the calculation amount for the solution of Problem 1 is almost the same as that for Problem 2. These observations are consistent with the theory, and therefore Figure 1 is reasonable.

    Example 4.2. Suppose

    $C=10\,rand(m,n)+20\,rand(m,n)i,\qquad D=20\,rand(n,p)+10\,rand(n,p)i,$
    $E=20\,rand(m,n)+10\,rand(m,n)i,\qquad F=10\,rand(n,p)+20\,rand(n,p)i.$

    $X$ is the same as that in Example 4.1. Let $G=CXD+EXF$. When $m=n=p=k$, $X$ is the unique solution of Problem 1. $X_1$ and $X_2$ represent the solutions of Problem 1 computed by Algorithm 4.1 and by the method in [27], respectively. Let $\epsilon_1=\|X_1-X\|$, $\epsilon_2=\|X_2-X\|$. Table 1 shows the errors $\epsilon_1,\epsilon_2$ for matrices of different orders.

    Table 1.  Error comparison of two methods in solving Problem 1.
    k ϵ1 ϵ2
    2 5.1179e-16 4.0656e-15
    3 3.8081e-15 1.5832e-13
    4 6.9372e-15 2.8004e-13
    5 3.1605e-14 2.0132e-12
    6 3.0276e-14 1.3684e-12
    7 5.8574e-14 2.7640e-12
    8 2.5821e-13 1.4964e-11
    9 3.1605e-13 1.1339e-11
    10 7.4086e-13 1.1213e-11


    Table 1 illustrates that the errors obtained by Algorithm 4.1 are smaller than those obtained by the method in [27]. This is because, in the process of solving Problem 1, the method in [27] involves many complex matrix calculations, Moore-Penrose inverses and matrix inner products, which leads to a reduction of calculation accuracy. Thus Algorithm 4.1 is more accurate and efficient.

    In this paper, we obtain the minimal norm least squares Hermitian solution and the minimal norm least squares anti-Hermitian solution of the complex matrix equation $CXD+EXF=G$ by the semi-tensor product of matrices. The numerical examples show that our proposed method is more effective and accurate. The semi-tensor product of matrices provides a new idea for the study of matrix equations. This method can also be applied to the study of solutions of many other types of linear matrix equations.

    The study is supported by the Scientific Research Foundation of Liaocheng University (No. 318011921), Discipline with Strong Characteristics of Liaocheng University – Intelligent Science and Technology (No. 319462208), and the Natural Science Foundation of Shandong Province of China (Nos. ZR2020MA053 and ZR2022MA030).

    The authors declare that there is no conflict of interest.



    [1] X. P. Sheng, A relaxed gradient based algorithm for solving generalized coupled Sylvester matrix equations, J. Franklin Inst., 355 (2018), 4282–4297. https://doi.org/10.1016/j.jfranklin.2018.04.008 doi: 10.1016/j.jfranklin.2018.04.008
    [2] M. Dehghan, M. Hajarian, On the generalized bisymmetric and skew-symmetric solutions of the system of generalized Sylvester matrix equations, Linear Multilinear Algebra, 59 (2011), 1281–1309. https://doi.org/10.1080/03081087.2010.524363 doi: 10.1080/03081087.2010.524363
    [3] M. Dehghan, M. Dehghani-Madiseh, M. Hajarian, A generalized preconditioned MHSS method for a class of complex symmetric linear systems, Math. Model. Anal., 18 (2013), 561–576. https://doi.org/10.3846/13926292.2013.839964 doi: 10.3846/13926292.2013.839964
    [4] M. Dehghani-Madiseh, M. Dehghan, Parametric AE-solution sets to the parametric linear systems with multiple right-hand sides and parametric matrix equation A(p)X=B(p), Numer. Algor., 73 (2016), 245–279. https://doi.org/10.1007/s11075-015-0094-3 doi: 10.1007/s11075-015-0094-3
    [5] Y. B. Deng, Z. Z. Bai, Y. H. Gao, Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations, Numer. Linear Algebra Appl., 13 (2006), 801–823. https://doi.org/10.1002/nla.496 doi: 10.1002/nla.496
    [6] B. H. Huang, C. F. Ma, On the least squares generalized Hamiltonian solution of generalized coupled Sylvester-conjugate matrix equations, Comput. Math. Appl., 74 (2017), 532–555. https://doi.org/10.1016/j.camwa.2017.04.035 doi: 10.1016/j.camwa.2017.04.035
    [7] F. X. Zhang, M. S. Wei, Y. Li, J. L. Zhao, The minimal norm least squares Hermitian solution of the complex matrix equation AXB+CXD=E, J. Franklin Inst., 355 (2018), 1296–1310. https://doi.org/10.1016/j.jfranklin.2017.12.023 doi: 10.1016/j.jfranklin.2017.12.023
    [8] Y. Gu, Y. Z. Song, Global Hessenberg and CMRH methods for a class of complex matrix equations, J. Comput. Appl. Math., 404 (2022), 113868. https://doi.org/10.1016/j.cam.2021.113868 doi: 10.1016/j.cam.2021.113868
    [9] F. X. Zhang, Y. Li, J. L. Zhao, A real representation method for special least squares solutions of the quaternion matrix equation (AXB,DXE)=(C,F), AIMS Mathematics, 7 (2022), 14595–14613. https://doi.org/10.3934/math.2022803 doi: 10.3934/math.2022803
    [10] Q. W. Wang, Z. H. He, Solvability conditions and general solution for mixed Sylvester equations, Automatica, 49 (2013), 2713–2719. https://doi.org/10.1016/j.automatica.2013.06.009 doi: 10.1016/j.automatica.2013.06.009
    [11] F. X. Zhang, W. S. Mu, Y. Li, J. L. Zhao, Special least squares solutions of the quaternion matrix equation AXB+CXD=E, Comput. Math. Appl., 72 (2016), 1426–1435. https://doi.org/10.1016/j.camwa.2016.07.019 doi: 10.1016/j.camwa.2016.07.019
    [12] H. T. Zhang, L. N. Liu, H. Liu, Y. X. Yuan, The solution of the matrix equation AXB=D and the system of matrix equations AX=C,XB=D with XX=Ip, Appl. Math. Comput., 418 (2022), 126789. https://doi.org/10.1016/j.amc.2021.126789 doi: 10.1016/j.amc.2021.126789
    [13] G. J. Song, Q. W. Wang, S. W. Yu, Cramer's rule for a system of quaternion matrix equations with applications, Appl. Math. Comput., 336 (2018), 490–499. https://doi.org/10.1016/j.amc.2018.04.056 doi: 10.1016/j.amc.2018.04.056
    [14] F. X. Zhang, M. S. Wei, Y. Li, J. L. Zhao, An efficient real representation method for least squares problem of the quaternion constrained matrix equation AXB+CYD=E, Int. J. Comput. Math., 98 (2021), 1408–1419. https://doi.org/10.1080/00207160.2020.1821001 doi: 10.1080/00207160.2020.1821001
    [15] X. Peng, X. X. Guo, Real iterative algorithms for a common solution to the complex conjugate matrix equation system, Appl. Math. Comput., 270 (2015), 472–482. https://doi.org/10.1016/j.amc.2015.07.105 doi: 10.1016/j.amc.2015.07.105
    [16] X. P. Sheng, W. W. Sun, The relaxed gradient based iterative algorithm for solving matrix equations AiXBi=Fi, Comput. Math. Appl., 74 (2017), 597–604. https://doi.org/10.1016/j.camwa.2017.05.008 doi: 10.1016/j.camwa.2017.05.008
    [17] M. Dehghan, A. Shirilord, A new approximation algorithm for solving generalized Lyapunov matrix equations, J. Comput. Appl. Math., 404 (2022), 113898. https://doi.org/10.1016/j.cam.2021.113898 doi: 10.1016/j.cam.2021.113898
    [18] M. Dehghan, R. Mohammadi-Arani, Generalized product-type methods based on bi-conjugate gradient (GPBiCG) for solving shifted linear systems, Comput. Appl. Math., 36 (2017), 1591–1606. https://doi.org/10.1007/s40314-016-0315-y
    [19] B. H. Huang, C. F. Ma, On the least squares generalized Hamiltonian solution of generalized coupled Sylvester-conjugate matrix equations, Comput. Math. Appl., 74 (2017), 532–555. https://doi.org/10.1016/j.camwa.2017.04.035
    [20] T. X. Yan, C. F. Ma, An iterative algorithm for generalized Hamiltonian solution of a class of generalized coupled Sylvester-conjugate matrix equations, Appl. Math. Comput., 411 (2021), 126491. https://doi.org/10.1016/j.amc.2021.126491
    [21] M. Hajarian, Developing BiCOR and CORS methods for coupled Sylvester-transpose and periodic Sylvester matrix equations, Appl. Math. Model., 39 (2015), 6073–6084. https://doi.org/10.1016/j.apm.2015.01.026
    [22] H. M. Zhang, A finite iterative algorithm for solving the complex generalized coupled Sylvester matrix equations by using the linear operators, J. Frankl. Inst., 354 (2017), 1856–1874. https://doi.org/10.1016/j.jfranklin.2016.12.011
    [23] L. L. Lv, Z. Zhang, Finite iterative solutions to periodic Sylvester matrix equations, J. Frankl. Inst., 354 (2017), 2358–2370. https://doi.org/10.1016/j.jfranklin.2017.01.004
    [24] N. Huang, C. F. Ma, Modified conjugate gradient method for obtaining the minimum-norm solution of the generalized coupled Sylvester-conjugate matrix equations, Appl. Math. Model., 40 (2016), 1260–1275. https://doi.org/10.1016/j.apm.2015.07.017
    [25] F. P. A. Beik, D. K. Salkuyeh, On the global Krylov subspace methods for solving general coupled matrix equations, Comput. Math. Appl., 62 (2011), 4605–4613. https://doi.org/10.1016/j.camwa.2011.10.043
    [26] M. Dehghan, A. Shirilord, Solving complex Sylvester matrix equation by accelerated double-step scale splitting (ADSS) method, Eng. Comput., 37 (2021), 489–508. https://doi.org/10.1007/s00366-019-00838-6
    [27] S. F. Yuan, A. P. Liao, Least squares Hermitian solution of the complex matrix equation AXB+CXD=E with least norm, J. Frankl. Inst., 351 (2014), 4978–4997. https://doi.org/10.1016/j.jfranklin.2014.08.003
    [28] F. X. Zhang, M. S. Wei, Y. Li, J. L. Zhao, The minimal norm least squares Hermitian solution of the complex matrix equation AXB+CXD=E, J. Frankl. Inst., 355 (2018), 1296–1310. https://doi.org/10.1016/j.jfranklin.2017.12.023
    [29] S. F. Yuan, Least squares pure imaginary solution and real solution of the quaternion matrix equation AXB+CXD=E with the least norm, J. Appl. Math., 2014 (2014), 857081. https://doi.org/10.1155/2014/857081
    [30] Q. W. Wang, A. Rehman, Z. H. He, Y. Zhang, Constraint generalized Sylvester matrix equations, Automatica, 69 (2016), 60–64. https://doi.org/10.1016/j.automatica.2016.02.024
    [31] Y. X. Yuan, Solving the mixed Sylvester matrix equations by matrix decompositions, C. R. Math., 353 (2015), 1053–1059. https://doi.org/10.1016/j.crma.2015.08.010
    [32] S. F. Yuan, Q. W. Wang, Y. B. Yu, Y. Tian, On Hermitian solutions of the split quaternion matrix equation AXB+CXD=E, Adv. Appl. Clifford Algebras, 27 (2017), 3235–3252. https://doi.org/10.1007/s00006-017-0806-y
    [33] I. Kyrchei, Determinantal representations of solutions to systems of two-sided quaternion matrix equations, Linear Multilinear Algebra, 69 (2021), 648–672. https://doi.org/10.1080/03081087.2019.1614517
    [34] D. Z. Cheng, H. S. Qi, Y. Zhao, An introduction to semi-tensor product of matrices and its application, Singapore: World Scientific Publishing Company, 2012.
    [35] J. Feng, J. Yao, P. Cui, Singular Boolean networks: Semi-tensor product approach, Sci. China Inf. Sci., 56 (2013), 1–14. https://doi.org/10.1007/s11432-012-4666-8
    [36] M. R. Xu, Y. Z. Wang, Conflict-free coloring problem with application to frequency assignment, J. Shandong Univ., 45 (2015), 64–69.
    [37] D. Z. Cheng, Q. S. Qi, Z. Q. Liu, From STP to game-based control, Sci. China Inf. Sci., 61 (2018), 010201. https://doi.org/10.1007/s11432-017-9265-2
    [38] J. Q. Lu, H. T. Li, Y. Liu, F. F. Li, Survey on semi-tensor product method with its applications in logical networks and other finite-valued systems, IET Control Theory Appl., 11 (2017), 2040–2047. https://doi.org/10.1049/iet-cta.2016.1659
    [39] D. Z. Cheng, H. S. Qi, Z. Q. Li, Analysis and control of Boolean networks: A semi-tensor product approach, London: Springer, 2011.
    [40] W. X. Ding, Y. Li, D. Wang, T. Wang, The application of the semi-tensor product in solving special Toeplitz solution of complex linear system, J. Liaocheng Univ. (Nat. Sci.), 34 (2021), 1–6.
  • This article has been cited by:

    1. Jiaxin Lan, Jingpin Huang, Yun Wang, An E-extra iteration method for solving reduced biquaternion matrix equation AX+XB=C, AIMS Mathematics, 2024, 9: 17578. https://doi.org/10.3934/math.2024854
    2. Qing-Wen Wang, Zi-Han Gao, Jia-Le Gao, A Comprehensive Review on Solving the System of Equations AX = C and XB = D, Symmetry, 2025, 17: 625. https://doi.org/10.3390/sym17040625
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)