Research article

The semi-tensor product method for special least squares solutions of the complex generalized Sylvester matrix equation

  • Received: 30 July 2022 Revised: 10 November 2022 Accepted: 28 November 2022 Published: 13 December 2022
  • MSC : 15A06, 15A24

  • In this paper, we are interested in the least squares Hermitian solution of minimal norm and the least squares anti-Hermitian solution of minimal norm for the complex generalized Sylvester matrix equation $CXD+EXF=G$. By utilizing the real vector representations of complex matrices and the semi-tensor product of matrices, we first transform the problem of finding special least squares solutions of the above matrix equation into that of finding general least squares solutions of the corresponding real matrix equations, and then obtain expressions for the least squares Hermitian solution of minimal norm and the least squares anti-Hermitian solution of minimal norm. Further, we give two numerical algorithms and two numerical examples; the examples illustrate that our proposed algorithms are more efficient and more accurate.

    Citation: Fengxia Zhang, Ying Li, Jianli Zhao. The semi-tensor product method for special least squares solutions of the complex generalized Sylvester matrix equation[J]. AIMS Mathematics, 2023, 8(3): 5200-5215. doi: 10.3934/math.2023261




    The study of operator equations has a long history. Khatri and Mitra [10], Wu and Cain [15], Xiong and Qin [16] and Yuan et al. [18] studied the matrix equations $AX=C$ and $XB=D$ in matrix algebra. Dajić and Koliha [1], Douglas [4] and Liang and Deng [12] investigated these equations in operator spaces. Xu et al. [5,13,14,17] and Fang and Yu [6] studied these equations for adjointable operators on Hilbert $C^*$-modules. The famous range inclusion theorem of Douglas plays a key role in the existence of solutions of the equation $AX=C$. Many scholars have discussed the existence and the general formulae of self-adjoint solutions (resp. positive or real positive solutions) of one equation, or of common solutions of the two equations. In the finite-dimensional case, Groß gave the necessary and sufficient conditions for the matrix equation $AX=C$ to have a real positive solution in [7]; however, a detailed formula for the real positive solutions of this equation was not fully provided there. Dajić and Koliha [2] first provided a general form of the real positive solutions of the equation $AX=C$ in Hilbert space under certain conditions. Recently, the real positive solutions of $AX=C$ were considered, with the corresponding operator $A$ not necessarily having closed range, in [6,12] for adjointable operators. However, these formulae for real positive solutions still carry some additional restrictions. In [16], the authors considered equivalent conditions for the existence of common real positive solutions to the pair of matrix equations $AX=C$, $XB=D$ in matrix algebra and offered partial common real positive solutions.

    Let $H$ and $K$ be complex Hilbert spaces. We denote by $B(H,K)$ the set of all bounded linear operators from $H$ into $K$, abbreviated to $B(H)$ when $H=K$. For an operator $A$, we shall denote by $A^*$, $R(A)$, $\overline{R(A)}$ and $N(A)$ the adjoint, the range, the closure of the range and the null space of $A$, respectively. An operator $A\in B(H)$ is said to be positive ($A\ge0$) if $\langle Ax,x\rangle\ge0$ for all $x\in H$. Note that a positive operator has a unique positive square root and

    $|A| := (A^*A)^{\frac12}$

    is the absolute value of $A$. Let $A^{\dagger}$ be the Moore–Penrose inverse of $A$, which is bounded if and only if $R(A)$ is closed [9]. An operator $A$ is called real positive if

    $\mathrm{Re}(A) := \tfrac12(A+A^*) \ge 0.$

    The set of all real positive operators in $B(H)$ is denoted by $\mathrm{Re}B_+(H)$. An operator sequence $\{T_n\}$ converges to $T\in B(H)$ in the strong operator topology if $\|T_nx-Tx\|\to0$ as $n\to\infty$ for every $x\in H$. In this case we write

    $T = \mathrm{s.o.}\text{-}\lim_{n\to\infty}T_n.$

    Let $P_M$ denote the orthogonal projection onto the closed subspace $M$. The identity operator on a closed subspace $M$ is denoted by $I_M$, or simply by $I$ if there is no risk of confusion.

    In this paper, we focus on the problem of characterizing the real positive solutions of the operator equations $AX=C$ and $XB=D$ when the corresponding operators $A$ and $B$ do not necessarily have closed ranges in an infinite-dimensional Hilbert space. Our goal is threefold:

    Firstly, by using the polar decomposition and strong operator convergence, a completely new representation of the reduced solution $F$ of the operator equation $AX=C$ is given by

    $F = \mathrm{s.o.}\text{-}\lim_{n\to\infty}\Big(|A|+\tfrac1nI\Big)^{-1}U_A^*C,$

    where $A=U_A|A|$ is the polar decomposition of $A$. This solution satisfies $F=A^{\dagger}C$ when $R(A)$ is closed. Furthermore, the necessary and sufficient conditions for the existence of real positive solutions and detailed formulae for the real positive solutions of the equation $AX=C$ are obtained, which improves the related results in [2,7,12]. Some comments on the reduced solution and the real positive solutions are given.

    Next, we discuss common solutions of the system of the two operator equations $AX=C$ and $XB=D$. The necessary and sufficient conditions for the existence of common solutions and detailed representations of the general common solutions are provided, which extends the classical closed-range case with a short proof. Furthermore, we consider the problem of finding sufficient conditions which ensure that this system has a real positive solution, as well as a formula for these real positive solutions.

    Finally, two examples are provided. As shown by Example 4.1, the system of equations $AX=C$ and $XB=D$ has common real positive solutions under the sufficient conditions given above. It is shown by Example 4.2 that a gap is unfortunately contained in the original paper [16], where the authors gave two equivalent conditions for the existence of common real positive solutions of this system in matrix algebra. It remains an open question to give an equivalent condition for the existence of common real positive solutions of $AX=C$ and $XB=D$ in an infinite-dimensional Hilbert space.

    In general, $A^{\dagger}C$ is the reduced solution of the equation $AX=C$ if $R(A)$ is closed. Liang and Deng gave an expression of the reduced solution, denoted $A^{\dagger}C$, through the spectral measure of positive operators in the case where the range of $A$ may not be closed; but sometimes $A^{\dagger}$ is only a formal symbol, since the spectral integral $\int_0^{\infty}\frac1\lambda\,dE_\lambda$ (see [12]) may be divergent. In this section, we give a new representation of the reduced solution of $AX=C$ by an operator sequence. We begin with some lemmas.

    Lemma 2.1. ([8]) Suppose that $\{T_n\}_{n=1}^{\infty}$ is a sequence of invertible positive operators in $B(H)$ and

    $T = \mathrm{s.o.}\text{-}\lim_{n\to\infty}T_n.$

    If $T$ is also invertible and positive, then

    $T^{-1} = \mathrm{s.o.}\text{-}\lim_{n\to\infty}T_n^{-1}.$
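    In finite dimensions the lemma can be observed numerically. The following sketch (illustrative only; the sample matrix, the values of $n$ and the tolerance are our own choices) approximates an invertible positive $T$ by the invertible positive operators $T_n=T+\tfrac1nI$ and watches $T_n^{-1}\to T^{-1}$:

```python
import numpy as np

# Finite-dimensional illustration of Lemma 2.1: T_n = T + (1/n) I are
# invertible positive operators converging to the invertible positive T,
# and their inverses converge to T^{-1}.
T = np.diag([1.0, 2.0, 3.0])   # an invertible positive "operator"
I = np.eye(3)

errs = [np.linalg.norm(np.linalg.inv(T + I / n) - np.linalg.inv(T))
        for n in [1, 10, 100, 1000]]
print(errs)                    # decreasing toward 0
```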

    Every operator $T\in B(H)$ has a unique polar decomposition $T=U_T|T|$, where $U_T$ is a partial isometry with $U_TU_T^* = P_{\overline{R(T)}}$ ([8]). Denote

    $T_n = \Big(\tfrac1nI_H+|T|\Big)^{-1},$

    for each positive integer $n$. In [11], the authors paid attention to the operator sequence $T_n|T|$ over Hilbert $C^*$-modules; it was shown that $T_n|T|$ converges to $P_{\overline{R(T)}}$ in the strong operator topology when $T$ is a positive operator. Here we have the following related results.

    Lemma 2.2. Suppose that $T\in B(H)$ and $T_n=(\tfrac1nI_H+|T|)^{-1}$. Then the following statements hold:

    (i) ([8,11]) $P_{\overline{R(T^*)}} = \mathrm{s.o.}\text{-}\lim_{n\to\infty}T_n|T|.$

    (ii) $T^{\dagger} = \mathrm{s.o.}\text{-}\lim_{n\to\infty}T_nU_T^*$ if $R(T)$ is closed.

    Proof. We only prove statement (ii). If $R(T)$ is closed, then $R(T^*)$ is also closed. It is natural that $H = R(T)\oplus N(T^*) = R(T^*)\oplus N(T)$. Thus $T$ and $|T|$ have the following matrix forms, respectively,

    $T = \begin{pmatrix}T_{11}&0\\0&0\end{pmatrix} : \begin{pmatrix}R(T^*)\\N(T)\end{pmatrix}\to\begin{pmatrix}R(T)\\N(T^*)\end{pmatrix}$ (2.1)

    and

    $|T| = (T^*T)^{\frac12} = \begin{pmatrix}(T_{11}^*T_{11})^{\frac12}&0\\0&0\end{pmatrix} : \begin{pmatrix}R(T^*)\\N(T)\end{pmatrix}\to\begin{pmatrix}R(T^*)\\N(T)\end{pmatrix},$ (2.2)

    where $T_{11}\in B(R(T^*),R(T))$ is invertible. Then $T^{\dagger}$ has the matrix form

    $T^{\dagger} = \begin{pmatrix}T_{11}^{-1}&0\\0&0\end{pmatrix} : \begin{pmatrix}R(T)\\N(T^*)\end{pmatrix}\to\begin{pmatrix}R(T^*)\\N(T)\end{pmatrix}.$ (2.3)

    Writing $|T_{11}| = (T_{11}^*T_{11})^{\frac12}$, from (2.2) it is easy to get that

    $\tfrac1nI_H+|T| = \begin{pmatrix}\tfrac1nI_{R(T^*)}+|T_{11}|&0\\0&\tfrac1nI_{N(T)}\end{pmatrix} : \begin{pmatrix}R(T^*)\\N(T)\end{pmatrix}\to\begin{pmatrix}R(T^*)\\N(T)\end{pmatrix}.$ (2.4)

    By the invertibility of diagonal operator matrices and the matrix form (2.4), we have

    $T_n = \Big(\tfrac1nI_H+|T|\Big)^{-1} = \begin{pmatrix}\big(\tfrac1nI_{R(T^*)}+|T_{11}|\big)^{-1}&0\\0&nI_{N(T)}\end{pmatrix} : \begin{pmatrix}R(T^*)\\N(T)\end{pmatrix}\to\begin{pmatrix}R(T^*)\\N(T)\end{pmatrix}.$ (2.5)

    Because the operator sequence

    $\Big\{\tfrac1nI_{R(T^*)}+|T_{11}|\Big\}\subseteq B(R(T^*))$

    converges to $|T_{11}|$, and

    $\tfrac1nI_{R(T^*)}+|T_{11}|$

    is invertible in $B(R(T^*))$ for each $n$, the operator sequence

    $\Big\{\big(\tfrac1nI_{R(T^*)}+|T_{11}|\big)^{-1}\Big\}\subseteq B(R(T^*))$

    converges to $|T_{11}|^{-1}$ by Lemma 2.1. Moreover, the partial isometry $U_T$ has the matrix form

    $U_T = \begin{pmatrix}U_{11}&0\\0&0\end{pmatrix} : \begin{pmatrix}R(T^*)\\N(T)\end{pmatrix}\to\begin{pmatrix}R(T)\\N(T^*)\end{pmatrix}$ (2.6)

    with respect to the space decompositions $H=R(T^*)\oplus N(T)$ and $H=R(T)\oplus N(T^*)$, where $U_{11}$ is a unitary from $R(T^*)$ onto $R(T)$. Then $T_{11}=U_{11}|T_{11}|$ and $T_{11}^{-1}=|T_{11}|^{-1}U_{11}^*$ by $T=U_T|T|$ and the matrix forms (2.1), (2.2) and (2.6). From the matrix forms (2.5) and (2.6), we have

    $T_nU_T^* = \begin{pmatrix}\big(\tfrac1nI_{R(T^*)}+|T_{11}|\big)^{-1}U_{11}^*&0\\0&0\end{pmatrix} : \begin{pmatrix}R(T)\\N(T^*)\end{pmatrix}\to\begin{pmatrix}R(T^*)\\N(T)\end{pmatrix}.$

    Since the operator sequence $\big(\tfrac1nI_{R(T^*)}+|T_{11}|\big)^{-1}U_{11}^*$ converges to

    $|T_{11}|^{-1}U_{11}^* = T_{11}^{-1},$

    we obtain that the operator sequence $\{T_nU_T^*\}$ converges to $\begin{pmatrix}T_{11}^{-1}&0\\0&0\end{pmatrix}$, which is equal to $T^{\dagger}$ by the matrix form (2.3).

    In Lemma 2.2, statement (ii) is not necessarily true if $R(T)$ is not closed, as the following example shows.

    Example 2.3. Let $\{e_1,e_2,\ldots\}$ be an orthonormal basis of an infinite-dimensional Hilbert space $H$. Define an operator $T$ by

    $Te_k = \tfrac1k\,e_k,\quad k=1,2,\ldots.$

    Then

    $T_n = \Big(\tfrac1nI_H+|T|\Big)^{-1} = \Big(\tfrac1nI_H+T\Big)^{-1},$

    since $T$ is a positive operator. It is easy to get that

    $T_ne_k = \frac{kn}{n+k}\,e_k$

    for any $k$. Suppose

    $x = \sum_{k=1}^{\infty}\frac1k\,e_k.$

    Then $x\in H$ with

    $\|x\|^2 = \sum_{k=1}^{\infty}\frac1{k^2} < \infty.$

    By direct computation, we have

    $\|T_nx\|\,\|x\| \ge |\langle T_nx,x\rangle| = \sum_{k=1}^{\infty}\frac{n}{k(n+k)} = \sum_{k=1}^{\infty}\Big(\frac1k-\frac1{n+k}\Big) = \sum_{k=1}^{n}\frac1k,$

    where the last equality follows by telescoping. That is, $\|T_nx\| \ge \frac{1}{\|x\|}\sum_{k=1}^{n}\frac1k$. It is shown that the number sequence $\{\|T_nx\|\}$ is divergent, since the harmonic series $\sum_{k=1}^{\infty}\frac1k$ is divergent. This implies that the operator sequence $\{T_n\}$ is divergent in the strong operator topology.
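    The divergence can also be watched numerically on a truncation of this example (a finite-dimensional stand-in; the truncation size $K$ and the test values of $n$ are our own choices). The inner product $\langle T_nx,x\rangle$ tracks the partial harmonic sum $\sum_{k=1}^{n}\frac1k$:

```python
import numpy as np

# Truncated model of Example 2.3: T e_k = e_k / k and x = sum_k e_k / k,
# restricted to the first K basis vectors.  T_n = (T + I/n)^{-1} acts
# diagonally with T_n e_k = (k n / (n + k)) e_k, so <T_n x, x> grows
# like the partial harmonic sum, and ||T_n x|| is unbounded in n.
K = 100000
k = np.arange(1, K + 1, dtype=float)
x = 1.0 / k                                  # coefficients of x

inners, harmonics = [], []
for n in [10, 100, 1000]:
    Tn_diag = k * n / (n + k)                # eigenvalues of T_n
    inners.append(float(np.sum(Tn_diag * x * x)))
    harmonics.append(float(np.sum(1.0 / k[:n])))
print(inners)
print(harmonics)
```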

    Lemma 2.4. ([4]) Let $A,C\in B(H)$. The following three conditions are equivalent:

    (1) $R(C)\subseteq R(A)$.

    (2) There exists $\lambda>0$ such that $CC^*\le\lambda AA^*$.

    (3) There exists $D\in B(H)$ such that $AD=C$.

    If one of these conditions holds, then there exists a unique solution $F\in B(H)$ of the equation $AX=C$ such that $R(F)\subseteq\overline{R(A^*)}$ and $N(F)=N(C)$. This solution is called the reduced solution.

    Theorem 2.5. Let $A,C\in B(H)$ be such that the equation $AX=C$ has a solution. Then the unique reduced solution $F$ can be formulated as

    $F = \mathrm{s.o.}\text{-}\lim_{n\to\infty}\Big(|A|+\tfrac1nI\Big)^{-1}U_A^*C,$

    where $U_A\in B(H)$ is the partial isometry associated to the polar decomposition $A=U_A|A|$. In particular, if $R(A)$ is closed, the reduced solution $F$ is $A^{\dagger}C$.

    Proof. Since $A=U_A|A|$, we have $U_A^*A=|A|$. Assuming that $X$ is a solution of $AX=C$, then $|A|X=U_A^*C$. Multiplying by $(|A|+\tfrac1nI)^{-1}$ on the left, it follows that

    $\Big(|A|+\tfrac1nI\Big)^{-1}|A|X = \Big(|A|+\tfrac1nI\Big)^{-1}U_A^*C.$

    From Lemma 2.2 (i), $\big\{(|A|+\tfrac1nI)^{-1}|A|X\big\}$ converges to $P_{\overline{R(A^*)}}X$ in the strong operator topology. So

    $P_{\overline{R(A^*)}}X = \mathrm{s.o.}\text{-}\lim_{n\to\infty}\Big(|A|+\tfrac1nI\Big)^{-1}U_A^*C =: F.$

    This shows that

    $AF = AP_{\overline{R(A^*)}}X = AX = C,$

    $R(F)\subseteq\overline{R(A^*)}$ and $N(F)\subseteq N(C)$. By the definition of $F$, we know $N(C)\subseteq N(F)$. So $N(C)=N(F)$. Suppose that $F'$ is another reduced solution. Then $A(F-F')=0$ and so

    $R(F-F')\subseteq N(A)\cap\overline{R(A^*)} = \{0\}.$

    It is shown that $F-F'=0$, and then $F=F'$. That is to say, $F$ is the unique reduced solution of $AX=C$. If $R(A)$ is closed, $F=A^{\dagger}C$ by Lemma 2.2 (ii).

    Remark 2.6. Although $\big(|A|+\tfrac1nI\big)^{-1}U_A^*$ does not necessarily converge in the strong operator topology, by Example 2.3, the sequence $\big(|A|+\tfrac1nI\big)^{-1}U_A^*C$ is convergent, by the proof of Theorem 2.5, whenever $R(C)\subseteq R(A)$.
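    In finite dimensions every matrix has closed range, so the limit in Theorem 2.5 can be compared directly with the Moore–Penrose formula $F=A^{\dagger}C$. The following numerical sketch (matrices stand in for operators; the sample data are our own) builds the polar factors of $A$ and evaluates the iterates $(|A|+\tfrac1nI)^{-1}U_A^*C$:

```python
import numpy as np

# Numerical sketch of the reduced-solution formula of Theorem 2.5.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])            # rank deficient on purpose
X_true = np.array([[1.0, 2.0, 0.0],
                   [0.0, 1.0, 1.0],
                   [0.0, 0.0, 0.0]])
C = A @ X_true                              # makes AX = C solvable

# |A| = (A^* A)^{1/2} via the eigendecomposition of A^* A.
w, V = np.linalg.eigh(A.T @ A)
absA = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
U = A @ np.linalg.pinv(absA)                # partial isometry in A = U|A|

F_target = np.linalg.pinv(A) @ C            # reduced solution (closed range)
errors = []
for n in [10, 1000, 100000]:
    Fn = np.linalg.inv(absA + np.eye(3) / n) @ U.T @ C
    errors.append(np.linalg.norm(Fn - F_target))
print(errors)                               # shrinking toward 0
```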

    As a consequence of Theorem 2.5, Lemma 2.4 and Theorem 3.5 in [12], the following result holds.

    Theorem 2.7. ([12]) Let $A,C\in B(H)$. Then $AX=C$ has a solution if and only if $R(C)\subseteq R(A)$. In this case, the general solutions can be represented by

    $X = F + (I-P)Y,\quad Y\in B(H),$

    where $F$ is the reduced solution of $AX=C$ and $P=P_{\overline{R(A^*)}}$. In particular, if $R(A)$ is closed, then the general solutions can be represented by

    $X = A^{\dagger}C + (I-A^{\dagger}A)Y,\quad Y\in B(H).$
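    In the closed-range (for instance, matrix) case the general-solution formula is easy to test numerically; the sketch below (with sample data of our own choosing) confirms that every member of the family solves $AX=C$:

```python
import numpy as np

# Closed-range form of Theorem 2.7: X = pinv(A) C + (I - pinv(A) A) Y
# solves AX = C for every Y, provided R(C) is contained in R(A).
rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                  # rank 1, so I - pinv(A)A != 0
C = A @ np.array([[1.0, 0.0],
                  [3.0, 1.0]])              # consistent right-hand side
P = np.linalg.pinv(A) @ A                   # projection onto R(A^*)

residuals = [np.linalg.norm(A @ (np.linalg.pinv(A) @ C
                                 + (np.eye(2) - P) @ rng.standard_normal((2, 2)))
                            - C)
             for _ in range(5)]
print(residuals)                            # all ~ 0
```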

    In this section, we mainly study the real positive solutions of equation

    AX=C (3.1)

    and the system of equations

    AX=C,XB=D. (3.2)

    Firstly, some preliminaries are given. For a given operator $A\in B(H)$, we write $P = P_{\overline{R(A^*)}}$ in the sequel.

    Lemma 3.1. Let $A,C\in B(H)$ be such that Eq (3.1) has a solution, with reduced solution $F$. Then the following statements hold:

    (i) $CA^*+AC^*\ge0$ if and only if $FP+PF^*\ge0$.

    (ii) $R(FP+PF^*)$ is closed if $R(CA^*+AC^*)$ is closed.

    Proof. (i) Assume that $CA^*+AC^*\ge0$. For any $x\in H$, there exist $y\in\overline{R(A^*)}$ and $z\in N(A)$ such that $x=y+z$, and there exists a sequence $\{A^*y_n\}\subseteq R(A^*)$ such that $\lim_{n\to\infty}A^*y_n=y$. According to $R(F)\subseteq\overline{R(A^*)}$, by direct computation,

    $\langle(FP+PF^*)x,x\rangle = \langle(FP+PF^*)y,y\rangle = \lim_{n\to\infty}\langle(F+F^*)A^*y_n,A^*y_n\rangle = \lim_{n\to\infty}\langle A(F+F^*)A^*y_n,y_n\rangle = \lim_{n\to\infty}\langle(CA^*+AC^*)y_n,y_n\rangle \ge 0.$

    This shows that $FP+PF^*\ge0$.

    Conversely, if $FP+PF^*\ge0$, it is elementary to get that

    $\langle(CA^*+AC^*)x,x\rangle = \langle(AFA^*+AF^*A^*)x,x\rangle = \langle(F+F^*)A^*x,A^*x\rangle = \langle(F+F^*)PA^*x,PA^*x\rangle = \langle(FP+PF^*)A^*x,A^*x\rangle \ge 0,$

    for any $x\in H$, by $R(F)\subseteq\overline{R(A^*)}$. This implies that $CA^*+AC^*\ge0$.

    (ii) Since $R(CA^*+AC^*)=R(A(F+F^*)A^*)$ is closed, $R(A(F+F^*)P)$ is closed. It follows that $R(P(F+F^*)A^*)$ is closed, and so $R(P(F+F^*)P)$ is closed; note that $FP+PF^* = P(F+F^*)P$.

    Lemma 3.2. ([3]) Let

    $A = \begin{pmatrix}A_{11}&A_{12}\\A_{12}^*&A_{22}\end{pmatrix}\in B(H\oplus H).$

    The following two statements hold:

    (i) There exists an operator $A_{22}$ such that $A\ge0$ if and only if $A_{11}\ge0$ and $R(A_{12})\subseteq R(A_{11}^{\frac12})$.

    (ii) Suppose that $A_{11}$ is an invertible and positive operator. Then $A\ge0$ if and only if $A_{22}\ge A_{12}^*A_{11}^{-1}A_{12}$.

    Lemma 3.3. Let

    $A = \begin{pmatrix}A_{11}&A_{12}\\A_{12}^*&A_{22}\end{pmatrix}\in B(H\oplus K).$

    Then $A\ge0$ if and only if

    (i) $A_{11}\ge0$.

    (ii) $A_{22}\ge X^*P_{\overline{R(A_{11})}}X$, where $X$ is a solution of $A_{11}^{\frac12}X=A_{12}$.

    Proof. Assume that statements (i) and (ii) hold. Then

    $A = \begin{pmatrix}A_{11}&A_{11}^{\frac12}X\\X^*A_{11}^{\frac12}&A_{22}\end{pmatrix} = \begin{pmatrix}A_{11}^{\frac12}&P_{\overline{R(A_{11})}}X\\0&0\end{pmatrix}^*\begin{pmatrix}A_{11}^{\frac12}&P_{\overline{R(A_{11})}}X\\0&0\end{pmatrix} + \begin{pmatrix}0&0\\0&A_{22}-X^*P_{\overline{R(A_{11})}}X\end{pmatrix} \ge 0.$

    Conversely, if $A\ge0$, it follows from Lemma 3.2 that $A_{11}\ge0$, and the equation

    $A_{11}^{\frac12}X = A_{12}$

    has a solution $X$, since $R(A_{12})\subseteq R(A_{11}^{\frac12})$. Moreover, $A\ge0$ shows that for any positive integer $n$,

    $A + \tfrac1nP_H \ge 0,$

    where $P_H$ denotes the orthogonal projection of $H\oplus K$ onto $H\oplus\{0\}$; that is,

    $\begin{pmatrix}A_{11}+\tfrac1nI_H&A_{12}\\A_{12}^*&A_{22}\end{pmatrix}\ge0.$

    By Lemma 3.2 again, we have

    $A_{22} \ge A_{12}^*\Big(A_{11}+\tfrac1nI_H\Big)^{-1}A_{12} = X^*A_{11}^{\frac12}\Big(A_{11}+\tfrac1nI_H\Big)^{-1}A_{11}^{\frac12}X.$

    From Lemma 2.2 (i), it is immediate that

    $A_{11}^{\frac12}\Big(A_{11}+\tfrac1nI_H\Big)^{-1}A_{11}^{\frac12} = \Big(A_{11}+\tfrac1nI_H\Big)^{-1}A_{11} \to P_{\overline{R(A_{11})}}\quad(n\to\infty)$

    in the strong operator topology. Thus

    $X^*P_{\overline{R(A_{11})}}X = \mathrm{s.o.}\text{-}\lim_{n\to\infty}X^*A_{11}^{\frac12}\Big(A_{11}+\tfrac1nI_H\Big)^{-1}A_{11}^{\frac12}X$

    holds. Therefore $A_{22}\ge X^*P_{\overline{R(A_{11})}}X$.

    Corollary 3.4. Let

    $A = \begin{pmatrix}A_{11}&A_{12}\\A_{12}^*&A_{22}\end{pmatrix}\in B(H\oplus K)$

    be such that $A_{11}$ has closed range. Then $A\ge0$ if and only if

    (i) $A_{11}\ge0$.

    (ii) $A_{22}\ge X^*A_{11}X$, where $X$ is a solution of $A_{11}X=A_{12}$.

    Proof. Since $A_{11}$ is an operator with closed range, $R(A_{11}^{\frac12})=R(A_{11})$. Thus the two equations $A_{11}X=A_{12}$ and $A_{11}^{\frac12}Y=A_{12}$ are solvable simultaneously. It is easy to check that $A_{11}^{\frac12}X$ is a solution of $A_{11}^{\frac12}Y=A_{12}$ whenever $X$ is a solution of $A_{11}X=A_{12}$, and that $X^*A_{11}^{\frac12}P_{\overline{R(A_{11})}}A_{11}^{\frac12}X = X^*A_{11}X$. From Lemma 3.3, the result holds.
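    Corollary 3.4 can be checked numerically in finite dimensions, where every range is closed (an illustration with sample data of our own, not a proof):

```python
import numpy as np

# Check of Corollary 3.4: with A12 = A11 X, the block matrix
# [[A11, A12], [A12^*, A22]] is positive semidefinite exactly when
# A22 - X^* A11 X is positive semidefinite.
A11 = np.array([[2.0, 0.0],
                [0.0, 0.0]])                # positive, rank deficient
X = np.array([[1.0, 1.0],
              [5.0, -5.0]])
A12 = A11 @ X                               # guarantees solvability

def min_eig(M):
    return np.linalg.eigvalsh(M).min()

S = X.T @ A11 @ X                           # the threshold X^* A11 X
results = []
for A22 in [S, S + np.eye(2), S - np.eye(2)]:
    blockA = np.block([[A11, A12], [A12.T, A22]])
    results.append((min_eig(blockA) >= -1e-10,
                    min_eig(A22 - S) >= -1e-10))
print(results)                              # the two booleans agree each time
```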

    In [12], the authors gave a formula for the real positive solutions of Eq (3.1) that is only a restriction imposed on the general solutions. Here we obtain an explicit expression of the real positive solutions.

    Theorem 3.5. Let $A,C\in B(H)$. Eq (3.1) has real positive solutions if and only if $R(C)\subseteq R(A)$ and $CA^*+AC^*\ge0$. In this case, the real positive solutions can be represented by

    $X = F - (I-P)F^* + (I-P)Y^*F_0^{\frac12} + \tfrac12(I-P)Y^*P_{\overline{R(F_0)}}Y(I-P) + (I-P)Z(I-P),$ (3.3)

    for any $Y\in B(H)$, $Z\in\mathrm{Re}B_+(H)$, where $F$ is the reduced solution of $AX=C$ and $F_0=FP+PF^*$. In particular, if $R(CA^*+AC^*)$ is closed, then the real positive solutions can be represented by

    $X = F - (I-P)F^* + (I-P)Y^*(FP+PF^*) + (I-P)Y^*FPY(I-P) + (I-P)Z(I-P).$ (3.4)

    Proof. From Theorem 2.7, $AX=C$ has a solution if and only if $R(C)\subseteq R(A)$. Suppose that $X$ is a solution of $AX=C$. Then there exists an operator $\overline Y\in B(H)$ such that

    $X = F + (I-P)\overline Y.$

    Set $H = \overline{R(A^*)}\oplus N(A)$. Then $F$ has the matrix form

    $F = \begin{pmatrix}F_{11}&F_{12}\\0&0\end{pmatrix},$ (3.5)

    since $R(F)\subseteq\overline{R(A^*)}$. The operator $\overline Y$ can be expressed as

    $\overline Y = \begin{pmatrix}\overline Y_{11}&\overline Y_{12}\\\overline Y_{21}&\overline Y_{22}\end{pmatrix}.$ (3.6)

    It follows that

    $X+X^* = \begin{pmatrix}F_{11}+F_{11}^*&F_{12}+\overline Y_{21}^*\\\overline Y_{21}+F_{12}^*&\overline Y_{22}+\overline Y_{22}^*\end{pmatrix}.$ (3.7)

    "$\Rightarrow$": If $X$ is a real positive solution, then $F_{11}+F_{11}^*\ge0$ from Lemma 3.3 and the matrix form (3.7) of $X+X^*$, and so $FP+PF^*\ge0$. According to Lemma 3.1, $CA^*+AC^*\ge0$.

    "$\Leftarrow$": Assume that $CA^*+AC^*\ge0$. Then $FP+PF^*\ge0$ and so $F_{11}+F_{11}^*\ge0$. Denote $X_0=F-(I-P)F^*$. Then $AX_0=C$ and

    $X_0+X_0^* = \begin{pmatrix}F_{11}+F_{11}^*&0\\0&0\end{pmatrix}\ge0.$

    That is, $X_0$ is a real positive solution of $AX=C$.

    Next, we analyse the general form of the real positive solutions. Suppose that $X$ is a real positive solution of $AX=C$. Then $X+X^*$ has the matrix form (3.7). From Lemma 3.3, there exists an operator $Y_{12}$ such that

    $\overline Y_{21} = Y_{12}^*(F_{11}+F_{11}^*)^{\frac12} - F_{12}^*$

    and

    $\overline Y_{22}+\overline Y_{22}^* \ge Y_{12}^*P_{\overline{R(F_{11}+F_{11}^*)}}Y_{12}.$

    Let

    $Z_{22} = \overline Y_{22} - \tfrac12Y_{12}^*P_{\overline{R(F_{11}+F_{11}^*)}}Y_{12}.$

    In this case, $X$ can be represented in the form

    $X = \begin{pmatrix}F_{11}&F_{12}\\Y_{12}^*(F_{11}+F_{11}^*)^{\frac12}-F_{12}^*&\tfrac12Y_{12}^*P_{\overline{R(F_{11}+F_{11}^*)}}Y_{12}+Z_{22}\end{pmatrix} = F-(I-P)F^*+(I-P)Y^*F_0^{\frac12}+\tfrac12(I-P)Y^*P_{\overline{R(F_0)}}Y(I-P)+(I-P)Z(I-P),$

    for some $Y\in B(H)$, $Z\in\mathrm{Re}B_+(H)$, where $F_0=FP+PF^*$.

    Conversely, for any $Y\in B(H)$ and $Z\in\mathrm{Re}B_+(H)$ such that $X$ has the form (3.3), it is clear that $AX=C$. Let $Y=(Y_{ij})_{2\times2}$ and $Z=(Z_{ij})_{2\times2}$ with respect to the decomposition $H=\overline{R(A^*)}\oplus N(A)$. Then

    $X+X^* = \begin{pmatrix}F_{11}+F_{11}^*&(F_{11}+F_{11}^*)^{\frac12}Y_{12}\\Y_{12}^*(F_{11}+F_{11}^*)^{\frac12}&Y_{12}^*P_{\overline{R(F_{11}+F_{11}^*)}}Y_{12}+Z_{22}+Z_{22}^*\end{pmatrix} = \begin{pmatrix}(F_{11}+F_{11}^*)^{\frac12}&P_{\overline{R(F_{11}+F_{11}^*)}}Y_{12}\\0&0\end{pmatrix}^*\begin{pmatrix}(F_{11}+F_{11}^*)^{\frac12}&P_{\overline{R(F_{11}+F_{11}^*)}}Y_{12}\\0&0\end{pmatrix}+\begin{pmatrix}0&0\\0&Z_{22}+Z_{22}^*\end{pmatrix}\ge0.$

    In particular, if $R(CA^*+AC^*)$ is closed, then $R(F_{11}+F_{11}^*)$ is closed by Lemma 3.1. Suppose that $X$ is a real positive solution of $AX=C$; then $X+X^*$ has the matrix form (3.7). From Corollary 3.4, there exists an operator $Y_{12}\in B(N(A),\overline{R(A^*)})$ such that

    $\overline Y_{21} = Y_{12}^*(F_{11}+F_{11}^*) - F_{12}^*$

    and

    $\overline Y_{22}+\overline Y_{22}^* \ge Y_{12}^*(F_{11}+F_{11}^*)Y_{12}.$

    Let

    $Z_{22} = \overline Y_{22} - Y_{12}^*F_{11}Y_{12}.$

    Consequently,

    $X = \begin{pmatrix}F_{11}&F_{12}\\Y_{12}^*(F_{11}+F_{11}^*)-F_{12}^*&Y_{12}^*F_{11}Y_{12}+Z_{22}\end{pmatrix} = F-(I-P)F^*+(I-P)Y^*(FP+PF^*)+(I-P)Y^*FPY(I-P)+(I-P)Z(I-P),$

    for some $Y\in B(H)$, $Z\in\mathrm{Re}B_+(H)$.

    Conversely, for any $Y\in B(H)$ and $Z\in\mathrm{Re}B_+(H)$ such that $X$ has the form (3.4), it is clear that $AX=C$, and also we have

    $X+X^* = \begin{pmatrix}F_{11}+F_{11}^*&(F_{11}+F_{11}^*)Y_{12}\\Y_{12}^*(F_{11}+F_{11}^*)&Y_{12}^*(F_{11}+F_{11}^*)Y_{12}+Z_{22}+Z_{22}^*\end{pmatrix} = \begin{pmatrix}(F_{11}+F_{11}^*)^{\frac12}&(F_{11}+F_{11}^*)^{\frac12}Y_{12}\\0&0\end{pmatrix}^*\begin{pmatrix}(F_{11}+F_{11}^*)^{\frac12}&(F_{11}+F_{11}^*)^{\frac12}Y_{12}\\0&0\end{pmatrix}+\begin{pmatrix}0&0\\0&Z_{22}+Z_{22}^*\end{pmatrix}\ge0.$

    Next, we consider the solutions of AX=C,XB=D.

    Theorem 3.6. Let $A,B,C,D\in B(H)$. Then the system (3.2) has a solution if and only if $R(C)\subseteq R(A)$, $R(D^*)\subseteq R(B^*)$ and $AD=CB$. In this case, the general common solution can be represented by

    $X = F + (I-P)H^* + (I-P)Z(I-P_{\overline{R(B)}}),$

    for any $Z\in B(H)$, where $F$ is the reduced solution of (3.1) and $H$ is the reduced solution of $B^*\hat Y=D^*$. In particular, if $R(A)$ and $R(B)$ are closed, the general common solution can be represented by

    $X = A^{\dagger}C + DB^{\dagger} - A^{\dagger}ADB^{\dagger} + (I-A^{\dagger}A)Z(I-BB^{\dagger}).$

    Proof. The necessity is clear; we only need to prove sufficiency. From $R(C)\subseteq R(A)$ and Theorem 2.7, a solution $X$ of Eq (3.1) can be represented by

    $X = F + (I-P)Y,$

    for some $Y\in B(H)$, where $F$ is the reduced solution. Substituting this $X$ into the equation $XB=D$, we have

    $(I-P)YB = D-FB.$

    This shows that

    $(I-P)YB = (I-P)(D-FB)$

    and then

    $(I-P)YB = (I-P)D,$

    since $R(F)\subseteq\overline{R(A^*)}$ and $(I-P)F=0$. Moreover, the equation $\hat YB=D$, equivalently $B^*\hat Y^*=D^*$, has a solution since $R(D^*)\subseteq R(B^*)$. Denote by $H$ the reduced solution of $B^*\hat Y=D^*$. Then

    $R(H(I-P))\subseteq\overline{R(B)}$

    and

    $N(H(I-P)) = N(D^*(I-P)),$

    since $R(H)\subseteq\overline{R(B)}$ and $N(H)=N(D^*)$. So $H(I-P)$ is the reduced solution of $B^*\hat Y=D^*(I-P)$, and then there exists $Z\in B(H)$ such that

    $Y^*(I-P) = H(I-P) + (I-P_{\overline{R(B)}})Z^*.$

    That is to say, the system of equations $AX=C$, $XB=D$ has common solutions, and the general common solution can be represented by

    $X = F + (I-P)H^* + (I-P)Z(I-P_{\overline{R(B)}}),\quad\text{for any } Z\in B(H).$

    In particular, if $R(A)$ and $R(B)$ are closed, then $F=A^{\dagger}C$ and $H^*=DB^{\dagger}$. It follows that the general common solution is

    $X = A^{\dagger}C + (I-A^{\dagger}A)DB^{\dagger} + (I-A^{\dagger}A)Z(I-BB^{\dagger}),$

    for any $Z\in B(H)$.
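    In the closed-range case the last formula can be verified numerically. The sketch below (sample matrices of our own choosing) manufactures consistent data $C=AX_0$, $D=X_0B$, so the solvability conditions hold automatically, and then checks that every member of the family is a common solution:

```python
import numpy as np

# Closed-range form of Theorem 3.6:
#   X = pinv(A)C + (I - pinv(A)A) D pinv(B) + (I - pinv(A)A) Z (I - B pinv(B))
# solves AX = C and XB = D whenever the data are consistent.
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
X0 = rng.standard_normal((2, 2))
C, D = A @ X0, X0 @ B                       # consistent data (AD = CB holds)

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
I = np.eye(2)
residuals = []
for _ in range(5):
    Z = rng.standard_normal((2, 2))
    X = Ap @ C + (I - Ap @ A) @ D @ Bp + (I - Ap @ A) @ Z @ (I - B @ Bp)
    residuals.append(np.linalg.norm(A @ X - C) + np.linalg.norm(X @ B - D))
print(residuals)                            # all ~ 0
```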

    Theorem 3.7. Let $A,B,C,D\in B(H)$ and $Q = P_{\overline{R((I-P)B)}}$. The system (3.2) has a real positive solution if the following statements hold:

    (i) $R(C)\subseteq R(A)$, $R(D^*)\subseteq R(B^*)$, $AD=CB$;

    (ii) $CA^*$ and $(D^*+B^*F)(I-P)B$ are real positive operators;

    (iii) $R\big((D^*+B^*F)(I-P)\big)\subseteq R\big(B^*(I-P)\big)$,

    where $F$ is the reduced solution of $AX=C$. In this case, common real positive solutions can be represented by

    $X = F - (I-P)F^* + (I-P)H^*(I-P) - (I-P)H(I-P-Q) + (I-P-Q)Z(I-P-Q),$ (3.8)

    for any $Z\in\mathrm{Re}B_+(H)$, where $H$ is the reduced solution of

    $B^*(I-P)\hat X = (D^*+B^*F)(I-P).$

    Proof. Combining Theorems 3.5 and 3.6 with statements (i) and (ii), the system of equations $AX=C$, $XB=D$ has common solutions. For any $Y\in\mathrm{Re}B_+(H)$,

    $X = F - (I-P)F^* + (I-P)Y(I-P)$

    is a real positive solution of $AX=C$. We only need to prove that there exists $Y\in\mathrm{Re}B_+(H)$ such that the above $X$ is also a solution of $XB=D$. Since

    $H = \overline{R(A^*)}\oplus N(A) = \overline{R(A)}\oplus N(A^*)$

    and also

    $H = \overline{R(B^*)}\oplus N(B),$

    the operators $A$, $F$ and $B$ have the following operator matrix forms,

    $A = \begin{pmatrix}A_{11}&0\\0&0\end{pmatrix} : \begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix}\to\begin{pmatrix}\overline{R(A)}\\N(A^*)\end{pmatrix},$ (3.9)
    $F = \begin{pmatrix}F_{11}&F_{12}\\0&0\end{pmatrix} : \begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix}\to\begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix},$ (3.10)
    $B = \begin{pmatrix}B_{11}&0\\B_{21}&0\end{pmatrix} : \begin{pmatrix}\overline{R(B^*)}\\N(B)\end{pmatrix}\to\begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix},$ (3.11)

    respectively, where $A_{11}$ is an injective operator from $\overline{R(A^*)}$ into $\overline{R(A)}$ with dense range. The operator $D$ has the matrix form

    $D = \begin{pmatrix}D_{11}&0\\D_{21}&0\end{pmatrix} : \begin{pmatrix}\overline{R(B^*)}\\N(B)\end{pmatrix}\to\begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix},$ (3.12)

    since $R(D^*)\subseteq R(B^*)$ implies $N(B)\subseteq N(D)$. Let

    $Y = \begin{pmatrix}Y_{11}&Y_{12}\\Y_{21}&Y_{22}\end{pmatrix} : \begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix}\to\begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix}.$ (3.13)

    Then $X$ has the matrix form

    $X = \begin{pmatrix}F_{11}&F_{12}\\-F_{12}^*&Y_{22}\end{pmatrix} : \begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix}\to\begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix}.$ (3.14)

    By the matrix forms (3.11) and (3.14) of $B$ and $X$, respectively, and noting that $XB=D$ is equivalent to $B^*X^*=D^*$, it is easy to get that

    $B^*X^* = \begin{pmatrix}B_{11}^*&B_{21}^*\\0&0\end{pmatrix}\begin{pmatrix}F_{11}^*&-F_{12}\\F_{12}^*&Y_{22}^*\end{pmatrix} = \begin{pmatrix}B_{11}^*F_{11}^*+B_{21}^*F_{12}^*&-B_{11}^*F_{12}+B_{21}^*Y_{22}^*\\0&0\end{pmatrix}.$ (3.15)

    Combining $AD=CB=AFB$ with the matrix forms (3.9)–(3.12), we have

    $AD = \begin{pmatrix}A_{11}D_{11}&0\\0&0\end{pmatrix} = \begin{pmatrix}A_{11}(F_{11}B_{11}+F_{12}B_{21})&0\\0&0\end{pmatrix} = AFB.$

    It is immediate that

    $A_{11}(F_{11}B_{11}+F_{12}B_{21}) = A_{11}D_{11}.$

    Since $A_{11}$ is injective, this gives

    $F_{11}B_{11}+F_{12}B_{21} = D_{11},$

    and, taking adjoints,

    $B_{11}^*F_{11}^*+B_{21}^*F_{12}^* = D_{11}^*.$ (3.16)

    From statement (iii), $R(D_{21}^*+B_{11}^*F_{12})\subseteq R(B_{21}^*)$ holds. Moreover, by statement (ii), $(D^*+B^*F)(I-P)B$ being real positive means that $(D_{21}^*+B_{11}^*F_{12})B_{21}$ is real positive. These facts infer that the equation

    $B_{21}^*\hat Y_{22} = D_{21}^*+B_{11}^*F_{12}$ (3.17)

    has real positive solutions, by Theorem 3.5. Suppose that $H_{22}$ is the reduced solution of Eq (3.17). We can deduce that

    $H = \begin{pmatrix}0&0\\0&H_{22}\end{pmatrix}$

    is the reduced solution of the equation

    $B^*(I-P)\hat Y = (D^*+B^*F)(I-P),$ (3.18)

    since $R(H)\subseteq\overline{R((I-P)B)}$ and

    $N(H) = N\big((D^*+B^*F)(I-P)\big).$

    Using Theorem 3.5 again, $H-(I-Q)H^*$ is a real positive solution of (3.18). Then $(I-P)\big(H-(I-Q)H^*\big)(I-P)$ is also a real positive solution of (3.18). Because

    $I-P \ge P_{\overline{R((I-P)B)}} = Q,$

    we have

    $(I-P)(I-Q) = (I-Q)(I-P) = I-P-Q.$

    Let

    $X_0 = F - (I-P)F^* + (I-P)H^*(I-P) - (I-P)H(I-P-Q).$

    That is,

    $X_0 = \begin{pmatrix}F_{11}&F_{12}\\-F_{12}^*&H_{22}^*-H_{22}(I_{N(A)}-Q)\end{pmatrix} : \begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix}\to\begin{pmatrix}\overline{R(A^*)}\\N(A)\end{pmatrix},$

    where, with a slight abuse of notation, $Q$ also denotes its restriction to $N(A)$.

    Putting $X=X_0$ into formula (3.15) and combining Eqs (3.16) and (3.17) with the matrix forms (3.12) and (3.15), we get $B^*X_0^*=D^*$, that is, $X_0B=D$. So $X_0$ is a common real positive solution of the system (3.2). Furthermore, it is easy to verify that

    $A(I-P-Q)Z(I-P-Q) = 0$

    and

    $(I-P-Q)Z(I-P-Q)B = 0,$

    for any $Z\in B(H)$. Therefore

    $X = X_0 + (I-P-Q)Z(I-P-Q),\quad\text{for any } Z\in\mathrm{Re}B_+(H),$

    is a common real positive solution of the system $AX=C$, $XB=D$.

    Some examples are given in this section to demonstrate that our results are valid. It is also shown that a gap is unfortunately contained in the original paper [16] concerning the existence of common real positive solutions.

    Example 4.1. Let $H$ be the infinite-dimensional Hilbert space of Example 2.3 with orthonormal basis $\{e_1,e_2,\ldots\}$ and $Te_k=\tfrac1k e_k$, $k\ge1$. As is well known, the range of $T\in B(H)$ is not closed and $\overline{R(T)}=H$. Define operators $A,C,B,D\in B(H\oplus H)$ as follows:

    $A=\begin{pmatrix}T&0\\0&0\end{pmatrix},\quad C=\begin{pmatrix}T&T\\0&0\end{pmatrix},\quad B=\begin{pmatrix}0&T\\0&0\end{pmatrix},\quad D=\begin{pmatrix}0&T\\0&-T\end{pmatrix}.$

    Then

    $A\ge0,\qquad U_A=\begin{pmatrix}I_H&0\\0&0\end{pmatrix},$

    and

    $\Big(|A|+\tfrac1nI\Big)^{-1}U_A^*C = \begin{pmatrix}(T+\tfrac1nI_H)^{-1}T&(T+\tfrac1nI_H)^{-1}T\\0&0\end{pmatrix}.$

    By Theorem 2.5 and Lemma 2.2 (i), the reduced solution $F$ of the operator equation $AX=C$ has the form

    $F = \mathrm{s.o.}\text{-}\lim_{n\to\infty}\Big(|A|+\tfrac1nI\Big)^{-1}U_A^*C = \begin{pmatrix}I_H&I_H\\0&0\end{pmatrix}.$

    Denote

    $P = P_{\overline{R(A^*)}} = \begin{pmatrix}I_H&0\\0&0\end{pmatrix}.$

    From Theorem 3.5, we know that the real positive solutions of $AX=C$ are given by

    $X = \begin{pmatrix}I_H&I_H\\\sqrt2\,Y_{12}^*-I_H&\tfrac12Y_{12}^*Y_{12}+Z_{22}\end{pmatrix},\quad\text{for any } Y_{12}\in B(H),\ Z_{22}\in\mathrm{Re}B_+(H).$

    Furthermore, it is easy to check that

    (i) $AD=CB$, $R(C)=R(A)$, $R(D^*)=R(B^*)$;

    (ii) $CA^*=A^2$ and $(D^*+B^*F)(I-P)B=0$ are real positive operators;

    (iii) $(D^*+B^*F)(I-P)=0$, so $R\big((D^*+B^*F)(I-P)\big)\subseteq R\big(B^*(I-P)\big)$.

    By Theorem 3.7, the system of equations

    $AX=C,\qquad XB=D$

    has common real positive solutions. Moreover, $(I-P)B=0$, so

    $Q = P_{\overline{R((I-P)B)}} = 0,$

    and the corresponding reduced solution $H$ of (3.18) is $0$. So a family of real positive solutions is given by

    $X = F - (I-P)F^* + (I-P-Q)Z(I-P-Q) = \begin{pmatrix}I_H&I_H\\-I_H&Z_{22}\end{pmatrix},\quad\text{for any } Z_{22}\in\mathrm{Re}B_+(H).$
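    A finite truncation of Example 4.1 makes this conclusion easy to check numerically (a $K$-dimensional stand-in for the operators; the dimension and the sample $Z_{22}$ are our own choices):

```python
import numpy as np

# Truncated model of Example 4.1: T = diag(1, 1/2, ..., 1/K), and the
# block operators A, C, B, D from the example.  Every X of the form
# [[I, I], [-I, Z22]] with Z22 real positive solves AX = C and XB = D
# and is itself real positive.
K = 6
T = np.diag(1.0 / np.arange(1, K + 1))
O, I = np.zeros((K, K)), np.eye(K)

A = np.block([[T, O], [O, O]])
C = np.block([[T, T], [O, O]])
B = np.block([[O, T], [O, O]])
D = np.block([[O, T], [O, -T]])

M = np.random.default_rng(2).standard_normal((K, K))
Z22 = M @ M.T                               # positive, hence real positive
X = np.block([[I, I], [-I, Z22]])

res_C = np.linalg.norm(A @ X - C)
res_D = np.linalg.norm(X @ B - D)
min_eig = np.linalg.eigvalsh(X + X.T).min()
print(res_C, res_D, min_eig)                # residuals 0, min_eig >= 0
```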

    However, statements (i)–(iii) in Theorem 3.7 are only sufficient conditions for the existence of common real positive solutions of the system (3.2), as the following example shows.

    Example 4.2. Let $A,B,C,D$ be $2\times2$ complex matrices. Denote

    $A=\begin{pmatrix}1&0\\0&0\end{pmatrix},\ B=\begin{pmatrix}0&1\\1&1\end{pmatrix},\ C=\begin{pmatrix}1&1\\0&0\end{pmatrix},\ D=\begin{pmatrix}1&2\\\tfrac12&\sqrt2-\tfrac12\end{pmatrix},\ X_0=\begin{pmatrix}1&1\\\sqrt2-1&\tfrac12\end{pmatrix}.$

    By direct computation, $AX_0=C$, $X_0B=D$ and

    $X_0+X_0^* = \begin{pmatrix}2&\sqrt2\\\sqrt2&1\end{pmatrix} = \begin{pmatrix}\sqrt2&0\\1&0\end{pmatrix}\begin{pmatrix}\sqrt2&0\\1&0\end{pmatrix}^*\ge0.$

    So $X_0$ is a common real positive solution of the system $AX=C$, $XB=D$. But in this case, Theorem 3.7 does not apply. In fact,

    $F = A^{\dagger}C = \begin{pmatrix}1&1\\0&0\end{pmatrix},\qquad P = P_{R(A^*)} = A.$

    Hence,

    $(D^*+B^*F)(I-P) = \left(\begin{pmatrix}1&\tfrac12\\2&\sqrt2-\tfrac12\end{pmatrix}+\begin{pmatrix}0&1\\1&1\end{pmatrix}\begin{pmatrix}1&1\\0&0\end{pmatrix}\right)\begin{pmatrix}0&0\\0&1\end{pmatrix} = \begin{pmatrix}0&\tfrac12\\0&\sqrt2+\tfrac12\end{pmatrix},$

    and

    $B^*(I-P) = \begin{pmatrix}0&1\\1&1\end{pmatrix}\begin{pmatrix}0&0\\0&1\end{pmatrix} = \begin{pmatrix}0&1\\0&1\end{pmatrix}.$

    This shows that statement (iii),

    $R\big((D^*+B^*F)(I-P)\big)\subseteq R\big(B^*(I-P)\big),$

    in Theorem 3.7 does not hold.
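    The computations of Example 4.2 (with the matrices as written above, so this sketch inherits that reading) can be confirmed numerically; the range inclusion is tested by comparing ranks:

```python
import numpy as np

# Numerical companion to Example 4.2: X0 is a common real positive
# solution of AX = C, XB = D, while condition (iii) of Theorem 3.7
# fails, as a rank comparison of the two ranges shows.
r2 = np.sqrt(2.0)
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
C = np.array([[1.0, 1.0], [0.0, 0.0]])
D = np.array([[1.0, 2.0], [0.5, r2 - 0.5]])
X0 = np.array([[1.0, 1.0], [r2 - 1.0, 0.5]])

res_C = np.linalg.norm(A @ X0 - C)
res_D = np.linalg.norm(X0 @ B - D)
min_eig = np.linalg.eigvalsh(X0 + X0.T).min()   # ~ 0: real positive

F = np.linalg.pinv(A) @ C
P = np.linalg.pinv(A) @ A
L = (D.T + B.T @ F) @ (np.eye(2) - P)       # (D* + B*F)(I - P)
R = B.T @ (np.eye(2) - P)                   # B*(I - P)
rank_R = np.linalg.matrix_rank(R)
rank_aug = np.linalg.matrix_rank(np.hstack([R, L]))
print(res_C, res_D, min_eig)
print(rank_R, rank_aug)                     # 1 < 2: R(L) not inside R(R)
```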

    Xiong and Qin gave two equivalent conditions (Theorems 2.1 and 2.2 in [16]) for the existence of common real positive solutions of $AX=C$, $XB=D$ in matrix algebra. But unfortunately, there is a gap in these results, and Example 4.2 provides a counterexample. In fact, $r(A)=1$, where $r(\cdot)$ stands for the rank of a matrix, and $A^*=A=CA^*$. Simplifying by elementary block matrix operations, we have

    r(AA0AC+CAADC0BA)=r(A02AADC0BA)=r(A000DC2A0BA). (4.1)

    Moreover,

    r(DC2ABA)=r(1210122121001101100)=r(0110122121001101100)=r(0000122121001101100)=r(0000122+121000101100)=r(0000021000101100)=r(0000020000101000)=3.

    Therefore, combining the above result with formula (4.1), we obtain that

    r(AA0AC+CAADC0BA)=r(A)+r(DC2ABA)=4. (4.2)

    Then

    r(A)2 and r(AA0AC+CAADC0BA)2r(A). (4.3)

    Note that

    $AD = \begin{pmatrix}1&0\\0&0\end{pmatrix}\begin{pmatrix}1&2\\\tfrac12&\sqrt2-\tfrac12\end{pmatrix} = \begin{pmatrix}1&2\\0&0\end{pmatrix}.$

    Using elementary row-column transformation again, we have

    r(BAADCA)=r(BAADA)=r(0110110012100000)=r(0010110011100000)=r(0010110000100000)=2. (4.4)

    From formulas (4.1), (4.2) and (4.4), it is natural to get that

    r(ADCB)+r(AA0AC+CAADC0BA)r(A)+r(BAADCA), (4.5)

    since $AD=CB$. But $AX=C$, $XB=D$ have a common real positive solution. Statements (4.3) and (4.5) show that Theorems 2.1 and 2.2 in [16] do not work, respectively. Actually, the conditions in [16] are also only sufficient conditions for the existence of common real positive solutions of the system (3.2). So an open question remains.

    Question 4.3. Let A,B,C,DB(H). Give an equivalent condition for the existence of common real positive solutions of AX=C,XB=D.

    In this work, a new representation of the reduced solution of $AX=C$ is given by a strong-operator-convergent sequence. This result provides us with a method to discuss the general solutions of the system (3.2). By making full use of block operator matrix techniques, the formula for the real positive solutions of $AX=C$ is obtained in Theorem 3.5, which is the basis for finding common real positive solutions of the system (3.2). Example 4.1 demonstrates that Theorem 3.7 is useful for finding some common real positive solutions. Unfortunately, it is complicated to describe all the common real positive solutions by the method of Theorem 3.7; other techniques may be needed. This will be our next problem to solve.

    The authors would like to thank the referees for their useful comments and suggestions, which greatly improved the presentation of this paper. Part of this work was completed during the first author's visit to Shanghai Normal University. The authors thank Professor Qingxiang Xu for helpful discussions and suggestions.

    This research was supported by the National Natural Science Foundation of China (No. 12061031), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2021JM-189) and the Natural Science Basic Research Plan in Hainan Province of China (Nos. 120MS030,120QN250).

    The authors declare no conflicts of interest.



    [1] X. P. Sheng, A relaxed gradient based algorithm for solving generalized coupled Sylvester matrix equations, J. Franklin Inst., 355 (2018), 4282–4297. https://doi.org/10.1016/j.jfranklin.2018.04.008
    [2] M. Dehghan, M. Hajarian, On the generalized bisymmetric and skew-symmetric solutions of the system of generalized Sylvester matrix equations, Linear Multilinear Algebra, 59 (2011), 1281–1309. https://doi.org/10.1080/03081087.2010.524363
    [3] M. Dehghan, M. Dehghani-Madiseh, M. Hajarian, A generalized preconditioned MHSS method for a class of complex symmetric linear systems, Math. Model. Anal., 18 (2013), 561–576. https://doi.org/10.3846/13926292.2013.839964
    [4] M. Dehghani-Madiseh, M. Dehghan, Parametric AE-solution sets to the parametric linear systems with multiple right-hand sides and parametric matrix equation A(p)X=B(p), Numer. Algor., 73 (2016), 245–279. https://doi.org/10.1007/s11075-015-0094-3
    [5] Y. B. Deng, Z. Z. Bai, Y. H. Gao, Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations, Numer. Linear Algebra Appl., 13 (2006), 801–823. https://doi.org/10.1002/nla.496
    [6] B. H. Huang, C. F. Ma, On the least squares generalized Hamiltonian solution of generalized coupled Sylvester-conjugate matrix equations, Comput. Math. Appl., 74 (2017), 532–555. https://doi.org/10.1016/j.camwa.2017.04.035
    [7] F. X. Zhang, M. S. Wei, Y. Li, J. L. Zhao, The minimal norm least squares Hermitian solution of the complex matrix equation AXB+CXD=E, J. Franklin Inst., 355 (2018), 1296–1310. https://doi.org/10.1016/j.jfranklin.2017.12.023
    [8] Y. Gu, Y. Z. Song, Global Hessenberg and CMRH methods for a class of complex matrix equations, J. Comput. Appl. Math., 404 (2022), 113868. https://doi.org/10.1016/j.cam.2021.113868
    [9] F. X. Zhang, Y. Li, J. L. Zhao, A real representation method for special least squares solutions of the quaternion matrix equation (AXB,DXE)=(C,F), AIMS Mathematics, 7 (2022), 14595–14613. https://doi.org/10.3934/math.2022803
    [10] Q. W. Wang, Z. H. He, Solvability conditions and general solution for mixed Sylvester equations, Automatica, 49 (2013), 2713–2719. https://doi.org/10.1016/j.automatica.2013.06.009
    [11] F. X. Zhang, W. S. Mu, Y. Li, J. L. Zhao, Special least squares solutions of the quaternion matrix equation AXB+CXD=E, Comput. Math. Appl., 72 (2016), 1426–1435. https://doi.org/10.1016/j.camwa.2016.07.019
    [12] H. T. Zhang, L. N. Liu, H. Liu, Y. X. Yuan, The solution of the matrix equation AXB=D and the system of matrix equations AX=C, XB=D with X*X=Ip, Appl. Math. Comput., 418 (2022), 126789. https://doi.org/10.1016/j.amc.2021.126789
    [13] G. J. Song, Q. W. Wang, S. W. Yu, Cramer's rule for a system of quaternion matrix equations with applications, Appl. Math. Comput., 336 (2018), 490–499. https://doi.org/10.1016/j.amc.2018.04.056
    [14] F. X. Zhang, M. S. Wei, Y. Li, J. L. Zhao, An efficient real representation method for least squares problem of the quaternion constrained matrix equation AXB+CYD=E, Int. J. Comput. Math., 98 (2021), 1408–1419. https://doi.org/10.1080/00207160.2020.1821001
    [15] X. Peng, X. X. Guo, Real iterative algorithms for a common solution to the complex conjugate matrix equation system, Appl. Math. Comput., 270 (2015), 472–482. https://doi.org/10.1016/j.amc.2015.07.105
    [16] X. P. Sheng, W. W. Sun, The relaxed gradient based iterative algorithm for solving matrix equations AiXBi=Fi, Comput. Math. Appl., 74 (2017), 597–604. https://doi.org/10.1016/j.camwa.2017.05.008
    [17] M. Dehghan, A. Shirilord, A new approximation algorithm for solving generalized Lyapunov matrix equations, J. Comput. Appl. Math., 404 (2022), 113898. https://doi.org/10.1016/j.cam.2021.113898
    [18] M. Dehghan, R. Mohammadi-Arani, Generalized product-type methods based on bi-conjugate gradient (GPBiCG) for solving shifted linear systems, Comput. Appl. Math., 36 (2017), 1591–1606. https://doi.org/10.1007/s40314-016-0315-y
    [19] B. H. Huang, C. F. Ma, On the least squares generalized Hamiltonian solution of generalized coupled Sylvester-conjugate matrix equations, Comput. Math. Appl., 74 (2017), 532–555. https://doi.org/10.1016/j.camwa.2017.04.035
    [20] T. X. Yan, C. F. Ma, An iterative algorithm for generalized Hamiltonian solution of a class of generalized coupled Sylvester-conjugate matrix equations, Appl. Math. Comput., 411 (2021), 126491. https://doi.org/10.1016/j.amc.2021.126491
    [21] M. Hajarian, Developing BiCOR and CORS methods for coupled Sylvester-transpose and periodic Sylvester matrix equations, Appl. Math. Model., 39 (2015), 6073–6084. https://doi.org/10.1016/j.apm.2015.01.026
    [22] H. M. Zhang, A finite iterative algorithm for solving the complex generalized coupled Sylvester matrix equations by using the linear operators, J. Franklin Inst., 354 (2017), 1856–1874. https://doi.org/10.1016/j.jfranklin.2016.12.011
    [23] L. L. Lv, Z. Zhang, Finite iterative solutions to periodic Sylvester matrix equations, J. Franklin Inst., 354 (2017), 2358–2370. https://doi.org/10.1016/j.jfranklin.2017.01.004
    [24] N. Huang, C. F. Ma, Modified conjugate gradient method for obtaining the minimum-norm solution of the generalized coupled Sylvester-conjugate matrix equations, Appl. Math. Model., 40 (2016), 1260–1275. https://doi.org/10.1016/j.apm.2015.07.017
    [25] F. P. A. Beik, D. K. Salkuyeh, On the global Krylov subspace methods for solving general coupled matrix equations, Comput. Math. Appl., 62 (2011), 4605–4613. https://doi.org/10.1016/j.camwa.2011.10.043
    [26] M. Dehghan, A. Shirilord, Solving complex Sylvester matrix equation by accelerated double-step scale splitting (ADSS) method, Eng. Comput., 37 (2021), 489–508. https://doi.org/10.1007/s00366-019-00838-6
    [27] S. F. Yuan, A. P. Liao, Least squares Hermitian solution of the complex matrix equation AXB+CXD=E with least norm, J. Franklin Inst., 351 (2014), 4978–4997. https://doi.org/10.1016/j.jfranklin.2014.08.003
    [28] F. X. Zhang, M. S. Wei, Y. Li, J. L. Zhao, The minimal norm least squares Hermitian solution of the complex matrix equation AXB+CXD=E, J. Franklin Inst., 355 (2018), 1296–1310. https://doi.org/10.1016/j.jfranklin.2017.12.023
    [29] S. F. Yuan, Least squares pure imaginary solution and real solution of the quaternion matrix equation AXB+CXD=E with the least norm, J. Appl. Math., 2014 (2014), 857081. https://doi.org/10.1155/2014/857081
    [30] Q. W. Wang, A. Rehman, Z. H. He, Y. Zhang, Constraint generalized Sylvester matrix equations, Automatica, 69 (2016), 60–64. https://doi.org/10.1016/j.automatica.2016.02.024
    [31] Y. X. Yuan, Solving the mixed Sylvester matrix equations by matrix decompositions, C. R. Math., 353 (2015), 1053–1059. https://doi.org/10.1016/j.crma.2015.08.010
    [32] S. F. Yuan, Q. W. Wang, Y. B. Yu, Y. Tian, On Hermitian solutions of the split quaternion matrix equation AXB+CXD=E, Adv. Appl. Clifford Algebras, 27 (2017), 3235–3252. https://doi.org/10.1007/s00006-017-0806-y
    [33] I. Kyrchei, Determinantal representations of solutions to systems of two-sided quaternion matrix equations, Linear Multilinear Algebra, 69 (2021), 648–672. https://doi.org/10.1080/03081087.2019.1614517
    [34] D. Z. Cheng, H. S. Qi, Y. Zhao, An introduction to semi-tensor product of matrices and its application, Singapore: World Scientific Publishing Company, 2012.
    [35] J. Feng, J. Yao, P. Cui, Singular Boolean networks: Semi-tensor product approach, Sci. China Inf. Sci., 56 (2013), 1–14. https://doi.org/10.1007/s11432-012-4666-8
    [36] M. R. Xu, Y. Z. Wang, Conflict-free coloring problem with application to frequency assignment, J. Shandong Univ., 45 (2015), 64–69.
    [37] D. Z. Cheng, Q. S. Qi, Z. Q. Liu, From STP to game-based control, Sci. China Inf. Sci., 61 (2018), 010201. https://doi.org/10.1007/s11432-017-9265-2
    [38] J. Q. Lu, H. T. Li, Y. Liu, F. F. Li, Survey on semi-tensor product method with its applications in logical networks and other finite-valued systems, IET Control Theory Appl., 11 (2017), 2040–2047. https://doi.org/10.1049/iet-cta.2016.1659
    [39] D. Z. Cheng, H. S. Qi, Z. Q. Li, Analysis and control of Boolean networks: A semi-tensor product approach, London: Springer, 2011.
    [40] W. X. Ding, Y. Li, D. Wang, T. Wang, The application of the semi-tensor product in solving special Toeplitz solution of complex linear system, J. Liaocheng Univ. (Nat. Sci.), 34 (2021), 1–6.
  • This article has been cited by:

    1. Haiyan Zhang, Yanni Dou, Weiyan Yu, Positive solutions of operator equations AX = B, XC = D, 2023, 12, 2075-1680, 818. https://doi.org/10.3390/axioms12090818
    2. Qing-Wen Wang, Zi-Han Gao, Jia-Le Gao, A comprehensive review on solving the system of equations AX = C and XB = D, 2025, 17, 2073-8994, 625. https://doi.org/10.3390/sym17040625
    3. Hranislav Stanković, Polynomially accretive operators, 2025, 54, 2651-477X, 516. https://doi.org/10.15672/hujms.1421159
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

