Research article

Exact and least-squares solutions of a generalized Sylvester-transpose matrix equation over generalized quaternions

  • Received: 26 December 2023 Revised: 07 March 2024 Accepted: 29 March 2024 Published: 08 April 2024
  • We have considered a generalized Sylvester-transpose matrix equation $AXB + CX^TD = E$, where $A, B, C, D$, and $E$ are given rectangular matrices over a generalized quaternion skew-field, and $X$ is an unknown matrix. We have applied certain vectorizations and real representations to transform the matrix equation into a matrix equation over the real numbers. Thus, we have investigated a solvability condition, general exact/least-squares solutions, minimal-norm solutions, and the exact/least-squares solution closest to a given matrix. The main equation included the equation $AXB = E$ and the Sylvester-transpose equation. Our results also covered such matrix equations over the quaternions, and quaternionic linear systems.

    Citation: Janthip Jaiprasert, Pattrawut Chansangiam. Exact and least-squares solutions of a generalized Sylvester-transpose matrix equation over generalized quaternions[J]. Electronic Research Archive, 2024, 32(4): 2789-2804. doi: 10.3934/era.2024126




    Linear matrix equations over the field $\mathbb{R}$ of real numbers have a strong connection to certain problems in differential equations, and control and system theory [1,2,3]. Indeed, the Sylvester-transpose matrix equation

    $AX + X^TD = E, \qquad (1.1)$

    is closely related to eigenstructure assignment [4], pole assignment [3], and fault detection in dynamical systems [5]. More generally, many authors investigated a generalized Sylvester-transpose equation:

    $AXB + CX^TD = E, \qquad (1.2)$

    and a generalized Sylvester one

    $AXB + CXD = E. \qquad (1.3)$

    In the last decade, theory and computational aspects for such equations were investigated for Eq (1.1) [6] and Eq (1.2) [7,8,9,10,11,12,13].

    Instead of the real number field, we can develop a theory for matrix equations over suitable algebraic structures, e.g., the quaternion skew-field or other skew-fields. Recall that the set of (Hamilton) quaternions

    $\mathbb{Q} = \{\, q_1 + q_2 i + q_3 j + q_4 k \mid q_1, q_2, q_3, q_4 \in \mathbb{R} \,\}$

    is a non-commutative division ring with respect to the coordinatewise addition and the Hamilton multiplication defined by

    $i^2 = j^2 = k^2 = -1, \quad ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j. \qquad (1.4)$

    The quaternions are widely used in quantum physics [14,15], computer graphics [16], robot trajectory planning [17], and modeling [18], etc., [19,20,21]. The reader can find more information about quaternions in the survey paper [22]. Moreover, if we generalize the rule (1.4), then we get the generalized quaternions [23]. Let $u, v \in \mathbb{R}\setminus\{0\}$. Let $\mathbb{Q}_{u,v}$ be a four-dimensional vector space over $\mathbb{R}$ with an ordered basis $\{1, i, j, k\}$, i.e.,

    $\mathbb{Q}_{u,v} = \{\, x_1 + x_2 i + x_3 j + x_4 k \mid x_1, x_2, x_3, x_4 \in \mathbb{R} \,\}.$

    The addition and the scalar multiplication on $\mathbb{Q}_{u,v}$ are defined in the usual ways. The multiplication of any two of $1, i, j$, or $k$ is defined so that $1$ acts as an identity, and the following rules apply:

    $i^2 = u, \quad j^2 = v, \quad k^2 = ijk = -uv, \quad ij = -ji = k, \quad jk = -kj = -vi, \quad ik = -ki = uj.$

    It turns out that $\mathbb{Q}_{u,v}$ becomes a non-commutative division ring. A famous special case $(u,v) = (-1,-1)$ of $\mathbb{Q}_{u,v}$ is known as the Hamilton quaternions. The case $(u,v) = (-1,1)$, the case $(u,v) = (1,-1)$, and the case $(u,v) = (1,1)$ are called the split quaternion ring, the nectarine quaternion ring, and the conectarine quaternion ring, respectively.

    Matrices over quaternions are one of the main topics of interest in linear algebra [22]. Matrix equations over $\mathbb{Q}$ or $\mathbb{Q}_{u,v}$ turn out to be important in various fields, e.g., computer platforms [24], image processing [25,26], color image restoration [27], image and video inpainting [28,29], signal processing [30], and quantum mechanics [31]. In the last decade, various authors investigated such matrix equations from a theoretical point of view. The work in [32] introduced fast and robust algorithms for the eigenproblem and the QR factorization of matrices over $\mathbb{Q}$. Yuan et al. [33] proposed an explicit expression of the least-squares (LS) solution, the LS pure-imaginary solution, and the real solution of Eq (1.3) with the least norm. Zhang et al. [34] studied special LS solutions of Eq (1.3), and obtained the expressions of the minimal-norm LS solution, the pure-imaginary LS solution, and the real LS solution. Recently, Tian et al. [35] considered Hermitian solutions of Eq (1.3). Indeed, they proposed necessary and sufficient conditions for the existence of a Hermitian solution and provided the explicit general expression of the solution when it was solvable.

    In this paper, we investigated the Sylvester-transpose matrix Eq (1.2), where $A, B, C, D$, and $E$ are given generalized quaternion matrices of compatible sizes and $X$ is an unknown. We have measured the associated error of a matrix by the Frobenius norm $\|\cdot\|$. Indeed, we have discussed the following problems.

    Problem 1.1. Find the solution set $S$ of exact solutions to Eq (1.2). In addition, find the minimal-norm element of $S$, i.e., find a matrix $X^{*} \in S$ such that

    $\|X^{*}\| = \min_{X \in S} \|X\|.$

    Problem 1.2. Find a solution $\bar{X} \in S$ closest to a given matrix $Y \in \mathbb{Q}_{u,v}^{n \times p}$, i.e., find $\bar{X}$ such that

    $\|\bar{X} - Y\| = \min_{X \in S} \|X - Y\|.$

    Problem 1.3. Find the set $L$ of LS solutions to Eq (1.2). In addition, find $\tilde{X} \in L$ such that

    $\|\tilde{X}\| = \min_{X \in L} \|X\|.$

    Problem 1.4. Find an LS solution of Eq (1.2) closest to a given matrix $Y \in \mathbb{Q}_{u,v}^{n \times p}$. That is, find the matrix $\acute{X} \in L$ such that

    $\|\acute{X} - Y\| = \min_{X \in L} \|X - Y\|.$

    Moreover, we have discussed certain special cases of Eq (1.2), namely Eq (1.1), the equation $AXB = E$, and the case $u = v = -1$ of the Hamilton quaternions.

    The rest of this paper is structured as follows. In Section 2, we set up basic notations and provide auxiliary tools from matrix theory in order to study matrix equations. In Section 3, we investigate Problems 1.1 and 1.2. In Section 4, we investigate Problems 1.3 and 1.4. In Section 5, we take a look at certain special cases of the main Eq (1.2). In Section 6, we provide numerical examples to illustrate our theory. Finally, we summarize the whole work in the last section.

    Let us denote by $\mathbb{R}^{m \times n}$ the set of all $m \times n$ real matrices. The set of $n$-dimensional real vectors is written as $\mathbb{R}^{n} := \mathbb{R}^{n \times 1}$. The transpose, the conjugate, the Moore-Penrose inverse, and the Frobenius norm of a matrix $A$ are written as $A^T$, $\bar{A}$, $A^{\dagger}$, and $\|A\|$, respectively. The identity matrix of order $n$ is denoted by $I_n$. The $i$th column of a matrix $A$ is denoted by $\operatorname{col}_i(A)$.

    For each matrix $A = (a_{ij}) \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{s \times t}$, the (column) vector $V_c(A)$ is defined as

    $V_c(A) = (a_{11} \cdots a_{m1} \;\; a_{12} \cdots a_{m2} \;\; \cdots \;\; a_{1n} \cdots a_{mn})^T \in \mathbb{R}^{mn},$

    and the Kronecker product of $A$ and $B$ is defined as

    $A \otimes B = (a_{ij} B) = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{pmatrix} \in \mathbb{R}^{ms \times nt}.$

    Lemma 2.1. [36] For any $A \in \mathbb{R}^{m \times n}$, $X \in \mathbb{R}^{n \times p}$, and $C \in \mathbb{R}^{p \times q}$, we have

    $V_c(AXC) = (C^T \otimes A)\, V_c(X).$

    Lemma 2.2. [36] For any $X \in \mathbb{R}^{n \times p}$, we have

    $V_c(X^T) = P(n,p)\, V_c(X).$

    Here, $P(n,p)$ is a permutation matrix defined by

    $P(n,p) = \sum_{i=1}^{n} \sum_{j=1}^{p} E_{ij} \otimes E_{ij}^T,$

    where each $E_{ij} \in \mathbb{R}^{n \times p}$ has entry $1$ in position $(i,j)$ and all other entries are zero.
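
    To make these two identities concrete, here is a small NumPy sketch (our own illustration, not part of the original paper; the helper names vec and commutation_matrix are ours) that checks Lemma 2.1 and Lemma 2.2 on random matrices.

```python
import numpy as np

def vec(A):
    # Column-stacking operator V_c: stacks the columns of A into one long vector.
    return A.reshape(-1, 1, order='F')

def commutation_matrix(n, p):
    # P(n,p) = sum_{i,j} E_ij (x) E_ij^T, where E_ij is n x p with a single 1 at (i,j).
    P = np.zeros((n * p, n * p))
    for i in range(n):
        for j in range(p):
            E = np.zeros((n, p))
            E[i, j] = 1.0
            P += np.kron(E, E.T)
    return P

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
C = rng.standard_normal((5, 2))

# Lemma 2.1: V_c(AXC) = (C^T (x) A) V_c(X)
assert np.allclose(vec(A @ X @ C), np.kron(C.T, A) @ vec(X))

# Lemma 2.2: V_c(X^T) = P(n,p) V_c(X) for X in R^{n x p}
assert np.allclose(vec(X.T), commutation_matrix(4, 5) @ vec(X))
```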

    For any positive integers $m$ and $n$, we denote the set of all $m \times n$ generalized quaternion matrices by $\mathbb{Q}_{u,v}^{m \times n}$. For each $A \in \mathbb{Q}_{u,v}^{m \times n}$, we can write

    $A = A_1 + A_2 i + A_3 j + A_4 k,$

    where $A_1, A_2, A_3, A_4 \in \mathbb{R}^{m \times n}$. We define

    $\Gamma(A) = \begin{pmatrix} A_1 \\ A_2 \\ A_3 \\ A_4 \end{pmatrix} \in \mathbb{R}^{4m \times n}.$

    Now, consider $X = X_1 + X_2 i + X_3 j + X_4 k \in \mathbb{Q}_{u,v}^{n \times p}$, where $X_1, X_2, X_3, X_4 \in \mathbb{R}^{n \times p}$. Expanding the product and using the multiplication rules of $\mathbb{Q}_{u,v}$, we have

    $AX = (A_1 + A_2 i + A_3 j + A_4 k)(X_1 + X_2 i + X_3 j + X_4 k) = (A_1 X_1 + u A_2 X_2 + v A_3 X_3 - uv A_4 X_4) + (A_1 X_2 + A_2 X_1 - v A_3 X_4 + v A_4 X_3)\, i + (A_1 X_3 + u A_2 X_4 + A_3 X_1 - u A_4 X_2)\, j + (A_1 X_4 + A_2 X_3 - A_3 X_2 + A_4 X_1)\, k.$

    Thus,

    $\Gamma(AX) = \begin{pmatrix} A_1 X_1 + u A_2 X_2 + v A_3 X_3 - uv A_4 X_4 \\ A_1 X_2 + A_2 X_1 - v A_3 X_4 + v A_4 X_3 \\ A_1 X_3 + u A_2 X_4 + A_3 X_1 - u A_4 X_2 \\ A_1 X_4 + A_2 X_3 - A_3 X_2 + A_4 X_1 \end{pmatrix} = R(A) \begin{pmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \end{pmatrix}, \qquad (2.1)$

    where

    $R(A) = \begin{pmatrix} A_1 & u A_2 & v A_3 & -uv A_4 \\ A_2 & A_1 & v A_4 & -v A_3 \\ A_3 & -u A_4 & A_1 & u A_2 \\ A_4 & -A_3 & A_2 & A_1 \end{pmatrix}$

    is called a real matrix representation of $A$. From the block columns of $R(A)$, it is useful to define the following:

    $\Theta(A) = \begin{pmatrix} u A_2 \\ A_1 \\ -u A_4 \\ -A_3 \end{pmatrix}, \quad \Delta(A) = \begin{pmatrix} v A_3 \\ v A_4 \\ A_1 \\ A_2 \end{pmatrix}, \quad \Phi(A) = \begin{pmatrix} -uv A_4 \\ -v A_3 \\ u A_2 \\ A_1 \end{pmatrix} \in \mathbb{R}^{4m \times n}.$

    Clearly, the transformations $V_c, \Gamma, \Theta, \Delta$, and $\Phi$ are injective. It is easy to see that

    $\|A\| = \sqrt{\|A_1\|^2 + \|A_2\|^2 + \|A_3\|^2 + \|A_4\|^2} = \|\Gamma(A)\|. \qquad (2.2)$

    Proposition 2.3. [35] Let $A, B \in \mathbb{Q}_{u,v}^{m \times n}$ and $k \in \mathbb{R}$. Then the following properties hold.

    (i) $\Gamma(A+B) = \Gamma(A) + \Gamma(B)$, $\Gamma(kA) = k\,\Gamma(A)$.

    (ii) $R(AB) = R(A)\, R(B)$.

    (iii) $R(I_m) = I_{4m}$.
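
    The real-representation machinery above is easy to exercise numerically. The following sketch (our own illustration; storing a generalized quaternion matrix as a 4-tuple of real blocks and the helper names gq_mul, Gamma, Rrep are assumptions of the sketch, not the paper's notation) implements the product formula, then checks the identity (2.1) and Proposition 2.3(ii) for sample parameters $u, v$.

```python
import numpy as np

def gq_mul(A, X, u, v):
    # Product of generalized quaternion matrices stored as 4-tuples (A1, A2, A3, A4)
    # of real blocks, using the rules i^2 = u, j^2 = v, ij = -ji = k.
    A1, A2, A3, A4 = A
    X1, X2, X3, X4 = X
    return (A1 @ X1 + u * A2 @ X2 + v * A3 @ X3 - u * v * A4 @ X4,
            A1 @ X2 + A2 @ X1 - v * A3 @ X4 + v * A4 @ X3,
            A1 @ X3 + u * A2 @ X4 + A3 @ X1 - u * A4 @ X2,
            A1 @ X4 + A2 @ X3 - A3 @ X2 + A4 @ X1)

def Gamma(A):
    # Gamma(A): the four real blocks stacked vertically, a 4m x n real matrix.
    return np.vstack(A)

def Rrep(A, u, v):
    # Real matrix representation R(A) in R^{4m x 4n}.
    A1, A2, A3, A4 = A
    return np.block([[A1,  u * A2,  v * A3, -u * v * A4],
                     [A2,      A1,  v * A4,     -v * A3],
                     [A3, -u * A4,      A1,      u * A2],
                     [A4,     -A3,      A2,          A1]])

u, v = -1.0, 1.0            # any nonzero real parameters u and v
rng = np.random.default_rng(1)
A = tuple(rng.standard_normal((3, 4)) for _ in range(4))
X = tuple(rng.standard_normal((4, 2)) for _ in range(4))

# Identity (2.1): Gamma(AX) = R(A) Gamma(X)
assert np.allclose(Gamma(gq_mul(A, X, u, v)), Rrep(A, u, v) @ Gamma(X))
# Proposition 2.3(ii): R(AB) = R(A) R(B), here with B := X
assert np.allclose(Rrep(gq_mul(A, X, u, v), u, v), Rrep(A, u, v) @ Rrep(X, u, v))
```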

    In this section, we discuss how to solve the Sylvester-transpose matrix equation

    $AXB + CX^TD = E, \qquad (3.1)$

    where $A \in \mathbb{Q}_{u,v}^{m \times n}$, $B \in \mathbb{Q}_{u,v}^{p \times q}$, $C \in \mathbb{Q}_{u,v}^{m \times p}$, $D \in \mathbb{Q}_{u,v}^{n \times q}$, and $E \in \mathbb{Q}_{u,v}^{m \times q}$ are given matrices and $X \in \mathbb{Q}_{u,v}^{n \times p}$ is an unknown. Our idea is to transform Eq (3.1) into a real linear system. So, let us recall the following result.

    Lemma 3.1. [37] Given $K \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{m}$, we consider the linear system

    $Kx = b. \qquad (3.2)$

    Then the system (3.2) has a solution $x \in \mathbb{R}^{n}$ if and only if $K K^{\dagger} b = b$, where $K^{\dagger}$ is the Moore-Penrose inverse of $K$. For the consistent case, we have the following:

    (i) The general solution of Eq (3.2) is given by

    $x = K^{\dagger} b + (I_n - K^{\dagger} K)\, y, \qquad (3.3)$

    where $y \in \mathbb{R}^{n}$ is an arbitrary vector.

    (ii) Among the general solutions (3.3), the minimal-norm solution is given by

    $x = K^{\dagger} b. \qquad (3.4)$

    (iii) If $\operatorname{rank}(K) = n$, then the system (3.2) has a unique solution, given by (3.4).
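
    As a quick numerical illustration (our own sketch, not from the paper), the statements of Lemma 3.1 can be checked with NumPy's Moore-Penrose inverse:

```python
import numpy as np

rng = np.random.default_rng(2)
# A deliberately rank-deficient 6 x 4 coefficient matrix (rank 2).
K = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))
b = K @ rng.standard_normal((4, 1))          # consistent right-hand side by construction
Kp = np.linalg.pinv(K)

assert np.allclose(K @ Kp @ b, b)            # solvability criterion: K K^+ b = b

y = rng.standard_normal((4, 1))              # arbitrary vector
x_gen = Kp @ b + (np.eye(4) - Kp @ K) @ y    # general solution (3.3)
x_min = Kp @ b                               # minimal-norm solution (3.4)

assert np.allclose(K @ x_gen, b)
assert np.linalg.norm(x_min) <= np.linalg.norm(x_gen) + 1e-12
```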

    The next lemmas are utilized to transform Eq (3.1) into a linear system.

    Lemma 3.2. Let $A, B, C$, and $D \in \mathbb{R}^{m \times n}$. Then

    $V_c \begin{pmatrix} A^T \\ B^T \\ C^T \\ D^T \end{pmatrix} = P(m,4n) \bigl( P(4,n) \otimes I_m \bigr) V_c \begin{pmatrix} A \\ B \\ C \\ D \end{pmatrix}.$

    Proof. Using Lemma 2.2, we obtain

    $V_c \begin{pmatrix} A^T \\ B^T \\ C^T \\ D^T \end{pmatrix} = V_c \left( \begin{pmatrix} A & B & C & D \end{pmatrix}^T \right) = P(m,4n)\, V_c \begin{pmatrix} A & B & C & D \end{pmatrix} = P(m,4n) \begin{pmatrix} V_c(A) \\ V_c(B) \\ V_c(C) \\ V_c(D) \end{pmatrix} = P(m,4n) \bigl( P(4,n) \otimes I_m \bigr) V_c \begin{pmatrix} A \\ B \\ C \\ D \end{pmatrix}.$

    Lemma 3.3. Let $X \in \mathbb{Q}_{u,v}^{n \times p}$. Then

    $\begin{pmatrix} V_c(\Gamma(X)) \\ V_c(\Theta(X)) \\ V_c(\Delta(X)) \\ V_c(\Phi(X)) \end{pmatrix} = M\, V_c(\Gamma(X)), \quad \text{where} \quad M = \begin{pmatrix} I_{4np} \\ I_p \otimes R_p \otimes I_n \\ I_p \otimes S_p \otimes I_n \\ I_p \otimes T_p \otimes I_n \end{pmatrix} \in \mathbb{R}^{16np \times 4np}, \qquad (3.5)$

    and

    $R_p = \begin{pmatrix} e^4_2 & u e^4_1 & -e^4_4 & -u e^4_3 \end{pmatrix}, \quad S_p = \begin{pmatrix} e^4_3 & e^4_4 & v e^4_1 & v e^4_2 \end{pmatrix}, \quad T_p = \begin{pmatrix} e^4_4 & u e^4_3 & -v e^4_2 & -uv e^4_1 \end{pmatrix} \in \mathbb{R}^{4 \times 4},$

    where $e^4_i = \operatorname{col}_i(I_4)$.

    Proof. We compute

    $V_c(\Theta(X)) = \begin{pmatrix} u \operatorname{col}_1(X_2) \\ \operatorname{col}_1(X_1) \\ -u \operatorname{col}_1(X_4) \\ -\operatorname{col}_1(X_3) \\ \vdots \\ u \operatorname{col}_p(X_2) \\ \operatorname{col}_p(X_1) \\ -u \operatorname{col}_p(X_4) \\ -\operatorname{col}_p(X_3) \end{pmatrix} = \left( I_p \otimes \begin{pmatrix} 0 & u I_n & 0 & 0 \\ I_n & 0 & 0 & 0 \\ 0 & 0 & 0 & -u I_n \\ 0 & 0 & -I_n & 0 \end{pmatrix} \right) \begin{pmatrix} \operatorname{col}_1(X_1) \\ \operatorname{col}_1(X_2) \\ \operatorname{col}_1(X_3) \\ \operatorname{col}_1(X_4) \\ \vdots \\ \operatorname{col}_p(X_1) \\ \operatorname{col}_p(X_2) \\ \operatorname{col}_p(X_3) \\ \operatorname{col}_p(X_4) \end{pmatrix} = \left[ I_p \otimes \begin{pmatrix} e^4_2 & u e^4_1 & -e^4_4 & -u e^4_3 \end{pmatrix} \otimes I_n \right] V_c(\Gamma(X)) = (I_p \otimes R_p \otimes I_n)\, V_c(\Gamma(X)).$

    With a similar process, we obtain

    $V_c(\Delta(X)) = \left[ I_p \otimes \begin{pmatrix} e^4_3 & e^4_4 & v e^4_1 & v e^4_2 \end{pmatrix} \otimes I_n \right] V_c(\Gamma(X)) = (I_p \otimes S_p \otimes I_n)\, V_c(\Gamma(X)),$

    and

    $V_c(\Phi(X)) = \left[ I_p \otimes \begin{pmatrix} e^4_4 & u e^4_3 & -v e^4_2 & -uv e^4_1 \end{pmatrix} \otimes I_n \right] V_c(\Gamma(X)) = (I_p \otimes T_p \otimes I_n)\, V_c(\Gamma(X)).$

    Thus, we obtain Eq (3.5).
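
    The matrix $M$ can be assembled directly from its definition. The sketch below (our own illustration, with assumed helper names) builds $M$ and verifies Eq (3.5), i.e., that stacking $V_c(\Gamma(X)), V_c(\Theta(X)), V_c(\Delta(X)), V_c(\Phi(X))$ equals $M\,V_c(\Gamma(X))$, on random data.

```python
import numpy as np

def vec(A):
    return A.reshape(-1, 1, order='F')

def M_matrix(n, p, u, v):
    # The 16np x 4np matrix M of Lemma 3.3, built from Rp, Sp, Tp and Kronecker products.
    e = np.eye(4)
    Rp = np.column_stack([e[:, 1], u * e[:, 0],     -e[:, 3],     -u * e[:, 2]])
    Sp = np.column_stack([e[:, 2],     e[:, 3],  v * e[:, 0],      v * e[:, 1]])
    Tp = np.column_stack([e[:, 3], u * e[:, 2], -v * e[:, 1], -u * v * e[:, 0]])
    blocks = [np.eye(4 * n * p)]
    blocks += [np.kron(np.eye(p), np.kron(Q, np.eye(n))) for Q in (Rp, Sp, Tp)]
    return np.vstack(blocks)

# Check Lemma 3.3 on a random X in Q_{u,v}^{n x p}.
u, v, n, p = -1.0, 1.0, 3, 2
rng = np.random.default_rng(3)
X1, X2, X3, X4 = (rng.standard_normal((n, p)) for _ in range(4))

Gamma_X = np.vstack([X1, X2, X3, X4])
Theta_X = np.vstack([u * X2, X1, -u * X4, -X3])
Delta_X = np.vstack([v * X3, v * X4, X1, X2])
Phi_X   = np.vstack([-u * v * X4, -v * X3, u * X2, X1])

lhs = np.vstack([vec(Gamma_X), vec(Theta_X), vec(Delta_X), vec(Phi_X)])
assert np.allclose(lhs, M_matrix(n, p, u, v) @ vec(Gamma_X))
```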

    Theorem 3.4. Consider Eq (3.1). Let us denote

    $W = \bigl( \Gamma(B)^T \otimes R(A) \bigr) + \bigl( \Gamma(D)^T \otimes R(C) \bigr) \bigl( I_4 \otimes P(n,4p)(P(4,p) \otimes I_n) \bigr). \qquad (3.6)$

    (i) The matrix Eq (3.1) has a solution if and only if

    $(WM)(WM)^{\dagger}\, V_c(\Gamma(E)) = V_c(\Gamma(E)).$

    (ii) Then the solution set $S$ of Problem 1.1 can be expressed as

    $S = \bigl\{ X \mid V_c(\Gamma(X)) = (WM)^{\dagger}\, V_c(\Gamma(E)) + \bigl[ I_{4np} - (WM)^{\dagger}(WM) \bigr] y \bigr\}, \qquad (3.7)$

    where $y \in \mathbb{R}^{4np}$ is an arbitrary vector.

    (iii) Among all solutions (3.7), the minimal-norm solution is given by

    $V_c(\Gamma(X)) = (WM)^{\dagger}\, V_c(\Gamma(E)). \qquad (3.8)$

    (iv) When $WM$ is of full column rank, Eq (3.1) has a unique solution, given by (3.8).

    Proof. From Eq (3.1), we consider the associated norm-error $\|AXB + CX^TD - E\|$. Using Eq (2.2), Proposition 2.3, and Lemma 2.1, we obtain

    $\|AXB + CX^TD - E\| = \|\Gamma(AXB + CX^TD - E)\| = \|\Gamma(AXB) + \Gamma(CX^TD) - \Gamma(E)\| = \|R(A) R(X) \Gamma(B) + R(C) R(X^T) \Gamma(D) - \Gamma(E)\| = \|V_c[\, R(A) R(X) \Gamma(B) + R(C) R(X^T) \Gamma(D) - \Gamma(E) \,]\| = \|(\Gamma(B)^T \otimes R(A))\, V_c(R(X)) + (\Gamma(D)^T \otimes R(C))\, V_c(R(X^T)) - V_c(\Gamma(E))\|.$

    By Lemma 3.2, we have

    $V_c(\Gamma(X^T)) = P(n,4p)(P(4,p) \otimes I_n)\, V_c(\Gamma(X)), \quad V_c(\Theta(X^T)) = P(n,4p)(P(4,p) \otimes I_n)\, V_c(\Theta(X)),$
    $V_c(\Delta(X^T)) = P(n,4p)(P(4,p) \otimes I_n)\, V_c(\Delta(X)), \quad V_c(\Phi(X^T)) = P(n,4p)(P(4,p) \otimes I_n)\, V_c(\Phi(X)).$

    Using Lemma 3.3, we compute

    $(\Gamma(B)^T \otimes R(A))\, V_c(R(X)) + (\Gamma(D)^T \otimes R(C))\, V_c(R(X^T)) - V_c(\Gamma(E))$

    $= (\Gamma(B)^T \otimes R(A)) \begin{pmatrix} V_c(\Gamma(X)) \\ V_c(\Theta(X)) \\ V_c(\Delta(X)) \\ V_c(\Phi(X)) \end{pmatrix} + (\Gamma(D)^T \otimes R(C)) \bigl( I_4 \otimes P(n,4p)(P(4,p) \otimes I_n) \bigr) \begin{pmatrix} V_c(\Gamma(X)) \\ V_c(\Theta(X)) \\ V_c(\Delta(X)) \\ V_c(\Phi(X)) \end{pmatrix} - V_c(\Gamma(E))$

    $= \Bigl[ (\Gamma(B)^T \otimes R(A)) + (\Gamma(D)^T \otimes R(C)) \bigl( I_4 \otimes P(n,4p)(P(4,p) \otimes I_n) \bigr) \Bigr] M\, V_c(\Gamma(X)) - V_c(\Gamma(E)) = WM\, V_c(\Gamma(X)) - V_c(\Gamma(E)).$

    So, the generalized quaternion matrix Eq (3.1) is equivalent to a real linear system

    $WM\, V_c(\Gamma(X)) = V_c(\Gamma(E)). \qquad (3.9)$

    By Lemma 3.1, the system (3.9) has the general solution

    $V_c(\Gamma(X)) = (WM)^{\dagger}\, V_c(\Gamma(E)) + \bigl[ I_{4np} - (WM)^{\dagger}(WM) \bigr] y,$

    where $y \in \mathbb{R}^{4np}$ is an arbitrary vector. The assertions (iii) and (iv) now follow from Lemma 3.1.
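
    For completeness, here is an end-to-end numerical sketch of Theorem 3.4 (our own illustration, not the authors' code; all helper names are ours, and the block conventions are those reconstructed above). It assembles $W$ and $M$, solves the real system (3.9) with the Moore-Penrose inverse, and checks the recovered $X$ against a right-hand side $E$ that is consistent by construction.

```python
import numpy as np

def vec(A):
    return A.reshape(-1, 1, order='F')

def comm(m, n):
    # Commutation matrix P(m,n): P(m,n) vec(Z) = vec(Z^T) for Z in R^{m x n}.
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            E = np.zeros((m, n)); E[i, j] = 1.0
            P += np.kron(E, E.T)
    return P

def Gamma(A):
    return np.vstack(A)                      # stack the four real blocks

def Rrep(A, u, v):
    A1, A2, A3, A4 = A
    return np.block([[A1,  u * A2,  v * A3, -u * v * A4],
                     [A2,      A1,  v * A4,     -v * A3],
                     [A3, -u * A4,      A1,      u * A2],
                     [A4,     -A3,      A2,          A1]])

def gq_mul(A, X, u, v):
    # Generalized quaternion matrix product (blocks A1..A4), with i^2 = u, j^2 = v.
    A1, A2, A3, A4 = A; X1, X2, X3, X4 = X
    return (A1 @ X1 + u * A2 @ X2 + v * A3 @ X3 - u * v * A4 @ X4,
            A1 @ X2 + A2 @ X1 - v * A3 @ X4 + v * A4 @ X3,
            A1 @ X3 + u * A2 @ X4 + A3 @ X1 - u * A4 @ X2,
            A1 @ X4 + A2 @ X3 - A3 @ X2 + A4 @ X1)

def gq_T(A):
    return tuple(Ai.T for Ai in A)           # transpose (no conjugation), as in the paper

def M_matrix(n, p, u, v):
    e = np.eye(4)
    Rp = np.column_stack([e[:, 1], u * e[:, 0],     -e[:, 3],     -u * e[:, 2]])
    Sp = np.column_stack([e[:, 2],     e[:, 3],  v * e[:, 0],      v * e[:, 1]])
    Tp = np.column_stack([e[:, 3], u * e[:, 2], -v * e[:, 1], -u * v * e[:, 0]])
    blocks = [np.eye(4 * n * p)]
    blocks += [np.kron(np.eye(p), np.kron(Q, np.eye(n))) for Q in (Rp, Sp, Tp)]
    return np.vstack(blocks)

def solve_gst(A, B, C, D, E, n, p, u, v):
    # Minimal-norm (least-squares) solution of A X B + C X^T D = E via (3.6), (3.8), (3.9).
    perm = np.kron(np.eye(4), comm(n, 4 * p) @ np.kron(comm(4, p), np.eye(n)))
    W = np.kron(Gamma(B).T, Rrep(A, u, v)) + np.kron(Gamma(D).T, Rrep(C, u, v)) @ perm
    WM = W @ M_matrix(n, p, u, v)
    g = np.linalg.pinv(WM) @ vec(Gamma(E))   # g = V_c(Gamma(X))
    GX = g.reshape(4 * n, p, order='F')      # unstack Gamma(X)
    return tuple(GX[i * n:(i + 1) * n, :] for i in range(4))

# Demo: build a consistent E from a known X_true, then recover a solution and check the residual.
u, v, m, n, p, q = -1.0, 1.0, 2, 2, 2, 2
rng = np.random.default_rng(4)
rand = lambda r, c: tuple(rng.standard_normal((r, c)) for _ in range(4))
A, B, C, D, X_true = rand(m, n), rand(p, q), rand(m, p), rand(n, q), rand(n, p)
E = tuple(s + t for s, t in zip(gq_mul(gq_mul(A, X_true, u, v), B, u, v),
                                gq_mul(gq_mul(C, gq_T(X_true), u, v), D, u, v)))
X = solve_gst(A, B, C, D, E, n, p, u, v)
residual = tuple(s + t - e for s, t, e in zip(gq_mul(gq_mul(A, X, u, v), B, u, v),
                                              gq_mul(gq_mul(C, gq_T(X), u, v), D, u, v), E))
print(max(np.abs(block).max() for block in residual))   # should be near machine precision
```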

    Theorem 3.5. Consider Eq (3.1). Let $Y \in \mathbb{Q}_{u,v}^{n \times p}$ be given. Then Problem 1.2 is equivalent to finding the minimal-norm solution $Z \in \mathbb{Q}_{u,v}^{n \times p}$ of a matrix equation

    $AZB + CZ^TD = \hat{E},$

    where $\hat{E} = E - (AYB + CY^TD)$.

    Proof. Letting $Z = X - Y$, we consider the following error:

    $\|AXB + CX^TD - E\| = \|AXB + CX^TD - E - AYB - CY^TD + AYB + CY^TD\| = \|A(X-Y)B + C(X^T - Y^T)D - E + AYB + CY^TD\| = \|AZB + CZ^TD - \hat{E}\|.$

    Thus, Problem 1.2 is equivalent to the following minimization:

    $\min_{AXB + CX^TD = E} \|X - Y\| = \min_{AXB + CX^TD = E} \|Z\| = \min_{AZB + CZ^TD = \hat{E}} \|Z\|,$

    as desired.

    In this section, we investigate Eq (3.1) when it is inconsistent. We seek least-squares (LS) solutions with minimal norm, or the LS solution closest to a given matrix. Recall the following result:

    Lemma 4.1. [37] Consider the linear system (3.2) in the inconsistent case. We have the following:

    (i) The general LS solutions of Eq (3.2) are given by (3.3), where $y \in \mathbb{R}^{n}$ is an arbitrary vector.

    (ii) Among such LS solutions, the minimal-norm solution is given by (3.4).

    (iii) If $\operatorname{rank}(K) = n$, then the system (3.2) has a unique LS solution, given by (3.4).

    Theorem 4.2. Suppose that Eq (3.1) is inconsistent. Denote $W$ as in (3.6).

    (i) Then the solution set $L$ of Problem 1.3 can be expressed as

    $L = \bigl\{ X \mid V_c(\Gamma(X)) = (WM)^{\dagger}\, V_c(\Gamma(E)) + \bigl[ I_{4np} - (WM)^{\dagger}(WM) \bigr] y \bigr\}, \qquad (4.1)$

    where $y \in \mathbb{R}^{4np}$ is an arbitrary vector.

    (ii) Among such solutions (4.1), the minimal-norm solution is given by (3.8).

    (iii) Moreover, if $\operatorname{rank}(WM) = 4np$, Eq (3.1) has a unique LS solution, given by (3.8).

    Proof. From the proof of Theorem 3.4, we see that Eq (3.1) is equivalent to the real linear system (3.9). Lemma 4.1 now implies that the LS solutions of Eq (3.1) are given by

    $V_c(\Gamma(X)) = (WM)^{\dagger}\, V_c(\Gamma(E)) + \bigl[ I_{4np} - (WM)^{\dagger}(WM) \bigr] y,$

    where $y \in \mathbb{R}^{4np}$ is an arbitrary vector. The assertions (ii) and (iii) also follow from Lemma 4.1.

    Theorem 4.3. Consider Eq (3.1). Let $Y \in \mathbb{Q}_{u,v}^{n \times p}$ be given. Then Problem 1.4 is equivalent to finding the minimal-norm least-squares solution $Z \in \mathbb{Q}_{u,v}^{n \times p}$ of a matrix equation

    $AZB + CZ^TD = \hat{E},$

    where $\hat{E} = E - (AYB + CY^TD)$.

    Proof. From the proof of Theorem 3.5, we have

    $\|AXB + CX^TD - E\| = \|AZB + CZ^TD - \hat{E}\|,$

    where $Z = X - Y$. Thus, Problem 1.4 is equivalent to the following:

    $\min_{X \in L} \|X - Y\| = \min_{\|AXB + CX^TD - E\| = \min} \|X - Y\| = \min_{\|AXB + CX^TD - E\| = \min} \|Z\| = \min_{\|AZB + CZ^TD - \hat{E}\| = \min} \|Z\|,$

    as desired.

    From the Sylvester-transpose Eq (1.2), we can investigate certain special cases.

    Corollary 5.1. Let $A \in \mathbb{Q}_{u,v}^{m \times n}$, $B \in \mathbb{Q}_{u,v}^{p \times q}$, and $E \in \mathbb{Q}_{u,v}^{m \times q}$. Consider the matrix equation

    $AXB = E$

    in an unknown $X \in \mathbb{Q}_{u,v}^{n \times p}$. Then the conclusions of Theorems 3.4, 3.5, 4.2, and 4.3 hold, where the matrix $W$ is given by

    $W = \Gamma(B)^T \otimes R(A).$

    Proof. We set C=0 and D=0 in those theorems.

    The next special case is the Sylvester-transpose matrix equation

    $AX + X^TD = E. \qquad (5.1)$

    Corollary 5.2. Let $A \in \mathbb{Q}_{u,v}^{p \times n}$, $D \in \mathbb{Q}_{u,v}^{n \times p}$, and $E \in \mathbb{Q}_{u,v}^{p \times p}$. Consider Eq (5.1) in an unknown $X \in \mathbb{Q}_{u,v}^{n \times p}$. Then the conclusions of Theorems 3.4, 3.5, 4.2, and 4.3 hold, where

    $W = \bigl( \Gamma(I_p)^T \otimes R(A) \bigr) + \bigl( \Gamma(D)^T \otimes I_{4p} \bigr) \bigl( I_4 \otimes P(n,4p)(P(4,p) \otimes I_n) \bigr).$

    Proof. We set $B = C = I_p$ in those theorems.

    In the next result, we consider Eq (1.2) over the quaternions.

    Corollary 5.3. Let $A \in \mathbb{Q}^{m \times n}$, $B \in \mathbb{Q}^{p \times q}$, $C \in \mathbb{Q}^{m \times p}$, $D \in \mathbb{Q}^{n \times q}$, and $E \in \mathbb{Q}^{m \times q}$. Consider the matrix equation

    $AXB + CX^TD = E.$

    Then the conclusions of Theorems 3.4, 3.5, 4.2, and 4.3 hold, where the matrix $M$ is given explicitly by

    $M = \begin{pmatrix} I_{4np} \\ I_p \otimes \acute{R}_p \\ I_p \otimes \acute{S}_p \\ I_p \otimes \acute{T}_p \end{pmatrix} \in \mathbb{R}^{16np \times 4np}, \qquad (5.2)$

    and

    $\acute{R}_p = \begin{pmatrix} 0 & -I_n & 0 & 0 \\ I_n & 0 & 0 & 0 \\ 0 & 0 & 0 & I_n \\ 0 & 0 & -I_n & 0 \end{pmatrix}, \quad \acute{S}_p = \begin{pmatrix} 0 & 0 & -I_n & 0 \\ 0 & 0 & 0 & -I_n \\ I_n & 0 & 0 & 0 \\ 0 & I_n & 0 & 0 \end{pmatrix}, \quad \acute{T}_p = \begin{pmatrix} 0 & 0 & 0 & -I_n \\ 0 & 0 & I_n & 0 \\ 0 & -I_n & 0 & 0 \\ I_n & 0 & 0 & 0 \end{pmatrix}.$

    Proof. Set $u = v = -1$ in those theorems.

    In this subsection, we consider a quaternion linear system

    $Ax = b, \qquad (5.3)$

    where $A \in \mathbb{Q}^{m \times n}$ and $b \in \mathbb{Q}^{m}$ are given, and $x \in \mathbb{Q}^{n}$ is an unknown.

    Corollary 5.4. Consider the linear system (5.3). Denote $M$ as in (5.2) with $p = 1$.

    (i) Then the system (5.3) has a solution if and only if

    $(R(A)M)(R(A)M)^{\dagger}\, \Gamma(b) = \Gamma(b).$

    (ii) The general exact/LS solution of Eq (5.3) can be expressed as

    $\Gamma(x) = (R(A)M)^{\dagger}\, \Gamma(b) + \bigl[ I_{4n} - (R(A)M)^{\dagger}(R(A)M) \bigr] y, \qquad (5.4)$

    where $y \in \mathbb{R}^{4n}$ is an arbitrary vector.

    (iii) Among all solutions (5.4), the minimal-norm solution is given by

    $\Gamma(x) = (R(A)M)^{\dagger}\, \Gamma(b). \qquad (5.5)$

    (iv) When $R(A)M$ is of full column rank, Eq (5.3) has a unique exact/LS solution, given by (5.5).

    Proof. From Theorems 3.4 and 4.2, set $p = 1$, $B = I_1$, and $C = 0$.

    An iterative method of conjugate-gradient type for solving the quaternion linear system (5.3) is the quaternion generalized minimal residual method (QGMRES) [38]. Now, we discuss the following problem.

    Problem 5.5. Let $A \in \mathbb{Q}^{m \times n}$ and $b \in \mathbb{Q}^{m}$ be given. Find an LS solution of Eq (5.3) closest to a given vector $h \in \mathbb{Q}^{n}$. That is, find the vector $\tilde{x}$ such that

    $\|\tilde{x} - h\| = \min_{\|Ax - b\| = \min} \|x - h\|.$

    Corollary 5.6. Consider Eq (5.3). Let $h \in \mathbb{Q}^{n}$ be given. Then the solution of Problem 5.5 is given by $x = h + z$, where

    $\Gamma(z) = (R(A)M)^{\dagger} \bigl[ \Gamma(b) - R(A)\Gamma(h) \bigr].$

    Here, the matrix $M$ is given by (5.2) with $p = 1$.

    Proof. From the case $B = 1$ and $C = 0$ in Theorem 4.3, we see that Problem 5.5 is equivalent to finding a minimal-norm LS solution $z$ of the linear system

    $Az = b - Ah.$

    Indeed, the desired solution is $x = h + z$. From Corollary 5.4 and Eq (2.1), we obtain

    $\Gamma(z) = (R(A)M)^{\dagger}\, \Gamma(b - Ah) = (R(A)M)^{\dagger} \bigl[ \Gamma(b) - R(A)\Gamma(h) \bigr].$

    In this section, we provide numerical examples to illustrate our results.

    Example 6.1. Consider the generalized Sylvester-transpose matrix equation $AXB + CX^TD = E$ over the split quaternions (i.e., $(u,v) = (-1,1)$), with

    $A = \begin{pmatrix} 1 & i + 2j \end{pmatrix} \in \mathbb{Q}_{u,v}^{1 \times 2}, \quad C = \begin{pmatrix} 1 & i + j + k \end{pmatrix} \in \mathbb{Q}_{u,v}^{1 \times 2}, \quad B = \begin{pmatrix} i + k \\ 2 + 3j \end{pmatrix} \in \mathbb{Q}_{u,v}^{2 \times 1}, \quad D = \begin{pmatrix} 2i \\ 3 - k \end{pmatrix} \in \mathbb{Q}_{u,v}^{2 \times 1}, \quad E = \begin{pmatrix} 1 + 4i + 3j + k \end{pmatrix} \in \mathbb{Q}_{u,v}^{1 \times 1}.$

    Then we have

    $R(A) = \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 2 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 2 \\ 0 & 2 & 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 2 & 0 & 1 & 1 & 0 \end{pmatrix}, \quad \Gamma(B)^T = \begin{pmatrix} 0 & 2 & 1 & 0 & 0 & 3 & 1 & 0 \end{pmatrix},$

    $R(C) = \begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 \end{pmatrix}, \quad \Gamma(D)^T = \begin{pmatrix} 0 & 3 & 2 & 0 & 0 & 0 & 0 & -1 \end{pmatrix},$

    and, by (3.6),

    $W = \bigl( \Gamma(B)^T \otimes R(A) \bigr) + \bigl( \Gamma(D)^T \otimes R(C) \bigr) \bigl( I_4 \otimes P(2,8)(P(4,2) \otimes I_2) \bigr), \quad \Gamma(E) = \begin{pmatrix} 1 & 4 & 3 & 1 \end{pmatrix}^T,$

    while $M$ is given by (3.5) with $n = p = 2$. According to Theorem 3.4, the matrix equation has a unique solution, computed via MATLAB as follows:

    $X = \begin{pmatrix} 0 & 0 \\ 0 & 0.2709 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 0.0739 \end{pmatrix} i + \begin{pmatrix} 0 & 0 \\ 0 & 0.8079 \end{pmatrix} j + \begin{pmatrix} 0 & 0 \\ 0 & 0.8079 \end{pmatrix} k.$

    Example 6.2. Consider the matrix equation $AXB + CX^TD = E$ over the split quaternions, i.e., $(u,v) = (-1,1)$. Here, we are given the matrices $A, B, C, D$, and $E$ as in Example 6.1, and we will find a solution $X$ closest to a given matrix

    $Y = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}.$

    We obtain

    $\hat{E} = E - (AYB + CY^TD) \quad \text{and} \quad \Gamma(\hat{E}) = \begin{pmatrix} 2 & 4 & 2 & 8 \end{pmatrix}^T.$

    Using Theorem 3.5 and MATLAB, we obtain:

    $Z = \begin{pmatrix} 0 & 0 \\ 0 & 0.3481 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & -0.6404 \end{pmatrix} i + \begin{pmatrix} 0 & 0 \\ 0 & 0.3350 \end{pmatrix} j + \begin{pmatrix} 0 & 0 \\ 0 & 0.1938 \end{pmatrix} k.$

    Thus, we get the desired solution:

    $X = Z + Y = \begin{pmatrix} 1 & 0 \\ 0 & 0.3481 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 0.3596 \end{pmatrix} i + \begin{pmatrix} 0 & 0 \\ 0 & 0.3350 \end{pmatrix} j + \begin{pmatrix} 0 & 0 \\ 0 & 0.1938 \end{pmatrix} k.$

    We investigated a generalized Sylvester-transpose matrix equation $AXB + CX^TD = E$, where $A, B, C, D, E$, and $X$ are matrices over a generalized quaternion skew-field. When all matrix dimensions were compatible, we provided a criterion for the equation to have a solution, involving Moore-Penrose inverses of associated matrices. Applying vectorizations and real representations of generalized quaternion matrices, we derived formulas for general exact/least-squares solutions, the minimal-norm solution, and the solution closest to a given matrix. Our results included the equation $AXB = E$ and the Sylvester-transpose equation, quaternionic matrix equations, and quaternionic linear systems.

    The authors declare that they have not used artificial intelligence (AI) tools in the creation of this article.

    This work was supported by King Mongkut's Institute of Technology Ladkrabang. The authors would like to thank the anonymous referees for suggestions.

    The authors declare that there are no conflicts of interest.



    [1] G. E. Dullerud, F. Paganini, A Course in Robust Control Theory: A Convex Approach, Springer, New York, 1999.
    [2] F. Lewis, A survey of linear singular systems, Circ. Syst. Signal Process., 5 (1986), 3–36. https://doi.org/10.1007/BF01600184 doi: 10.1007/BF01600184
    [3] L. Dai, Singular Control Systems, Springer, Berlin, 1989. https://doi.org/10.1007/BFb0002475
    [4] L. R. Fletcher, J. Kuatsky, N. K. Nichols, Eigenstructure assignment in descriptor systems, IEEE Trans. Autom. Control, 31 (1986), 1138–1141. https://doi.org/10.1109/TAC.1986.1104189 doi: 10.1109/TAC.1986.1104189
    [5] P. M. Frank, Fault diagnosis in dynamic systems using analytical and knowledge-based redundancy a survey and some new results, Automatica, 26 (1990), 459–474. https://doi.org/10.1016/0005-1098(90)90018-D doi: 10.1016/0005-1098(90)90018-D
    [6] J. Jaiprasert, P. Chansangiam, Solving the Sylvester-transpose matrix equation under the semi-tensor product, Symmetry, 14 (2022), 1094. https://doi.org/10.3390/sym14061094 doi: 10.3390/sym14061094
    [7] N. Boonruangkan, P. Chansangiam, Convergence analysis of a gradient iterative algorithm with optimal convergence factor for a generalized Sylvester-transpose matrix equation, AIMS Math., 6 (2021), 8477–8496. https://doi.org/10.3934/math.2021492 doi: 10.3934/math.2021492
    [8] K. Tansri, P. Chansangiam, Conjugate gradient algorithm for least-squares solutions of a generalized Sylvester-transpose matrix equation, Symmetry, 14 (2022), 1868. https://doi.org/10.3390/sym14091868 doi: 10.3390/sym14091868
    [9] Y. J. Xie, C. F. Ma, The accelerated gradient based iterative algorithm for solving a class of generalized Sylvester transpose matrix equation, Appl. Math. Comput., 273 (2016), 1257–1269. https://doi.org/10.1016/j.amc.2015.07.022 doi: 10.1016/j.amc.2015.07.022
    [10] M. Hajarian, Extending the CGLS algorithm for least squares solutions of the generalized Sylvester-transpose matrix equations, J. Franklin Inst., 353 (2016), 1168–1185. https://doi.org/10.1016/j.jfranklin.2015.05.024 doi: 10.1016/j.jfranklin.2015.05.024
    [11] A. Kittisopapron, P. Chansangiam, Approximated least-squares solutions of a generalized Sylvester-transpose matrix equation via gradient-descent iterative algorithm, Adv. Differ. Equations, 2021 (2021), 266. https://doi.org/10.1186/s13662-021-03427-4 doi: 10.1186/s13662-021-03427-4
    [12] K. Tansri, S. Choomklang, P. Chansangiam, Conjugate gradient algorithm for consistent generalized Sylvester-transpose matrix equations, AIMS Math., 7 (2022), 5386–5407. https://doi.org/10.3934/math.2022299 doi: 10.3934/math.2022299
    [13] M. Wang, X. Cheng, Iterative algorithm for solving the matrix equation AXB+CXTD=E, Appl. Math. Comput., 187 (2007), 622–629. https://doi.org/10.1016/j.amc.2006.08.169 doi: 10.1016/j.amc.2006.08.169
    [14] S. L. Adler, Quaternionic Quantum Mechanics and Quantum Fields, 1st edition, Oxford U.P., New York, 1995.
    [15] D. Finkelstein, J. M. Jauch, S. Schiminovich, D. Speiser, Foundations of quaternion quantum mechanics, J. Math. Phys., 3 (1962), 3207–3220. https://doi.org/10.1063/1.1703794 doi: 10.1063/1.1703794
    [16] R. Heise, B. A. Macdonald, Quaternions and motion interpolation: A tutorial, in New Advances in Computer Graphics, (eds. R. A. Earnshaw and B. Wyvill), Springer, (1989), 229–243. https://doi.org/10.1007/978-4-431-68093-2_14
    [17] D. Pletincks, Quaternion calculus as a basic tool in computer graphics, Visual Comput., 5 (1989), 2–13. https://doi.org/10.1007/BF01901476 doi: 10.1007/BF01901476
    [18] T. Li, Q. W. Wang, X. F. Zhang, A modified conjugate residual method and nearest Kronecker product preconditioner for the generalized coupled Sylvester tensor equations, Mathematics, 10 (2022), 1730. https://doi.org/10.3390/math10101730 doi: 10.3390/math10101730
    [19] Z. H. He, X. X. Wang, Y. F. Zhao, Eigenvalues of quaternion tensors with applications to color video processing, J. Sci. Comput., 94 (2023). https://doi.org/10.1007/s10915-022-02058-5 doi: 10.1007/s10915-022-02058-5
    [20] Z. H. He, C. Navasca, X. X. Wang, Decomposition for a quaternion tensor triplet with applications, Adv. Appl. Clifford Algebras, 32 (2022). https://doi.org/10.1007/s00006-021-01195-8 doi: 10.1007/s00006-021-01195-8
    [21] Z. H. He, Some new results on a system of Sylvester-type quaternion matrix equations, Linear Multilinear Algebra, 69 (2021), 3069–3091. https://doi.org/10.1080/03081087.2019.1704213 doi: 10.1080/03081087.2019.1704213
    [22] X. Liu, Y. Zhang, Matrices over Quaternion Algebras, in Matrix and Operator Equations and Applications, (eds. M. S. Moslehian), Springer, (2023), 139–183. https://doi.org/10.1007/978-3-031-25386-7
    [23] M. Jafari, Y. Yayli, Generalized quaternions and their algebraic properties, Commun. Fac. Sci. Univ. Ank. Series A1 Math. Stat., 64 (2015), 15–27. https://doi.org/10.1501/Commual_0000000724 doi: 10.1501/Commual_0000000724
    [24] J. Ping, H. T. Wu, A closed-form forward kinematics solution for the 6-6p Stewart platform, IEEE Trans. Rob. Autom., 17 (2001), 522–526. https://doi.org/10.1109/70.954766 doi: 10.1109/70.954766
    [25] F. X. Zhang, M. S. Wei, Y. Li, J. L. Zhao, Special least squares solutions of the quaternion matrix equation AX=B with applications, Appl. Math. Comput., 270 (2015), 425–433. https://doi.org/10.1016/j.amc.2015.08.046 doi: 10.1016/j.amc.2015.08.046
    [26] F. Caccavale, C. Natale, B. Siciliano, L. Villani, Six-dof impedance control based on angle/axis representations, IEEE Trans. Rob. Autom., 15 (1999), 289–300. https://doi.org/10.1109/70.760350 doi: 10.1109/70.760350
    [27] Z. Jia, M. K. Ng, Color image restoration by saturation-value total variation, SIAM J. Imag. Sci., 12 (2019), 2. https://doi.org/10.1137/18M1230451 doi: 10.1137/18M1230451
    [28] Z. Jia, M. K. Ng, G. J. Song, Robust quaternion matrix completion with applications to image inpainting, Numer. Linear Algebra Appl., 26 (2019), e2245. https://doi.org/10.1002/nla.2245 doi: 10.1002/nla.2245
    [29] Z. Jia, Q. Jin, M. K. Ng, X. L. Zhao, Non-local robust quaternion matrix completion for large-scale color image and video inpainting, IEEE Trans. Image Process., 31 (2022), 3868–3883. https://doi.org/10.1109/TIP.2022.3176133 doi: 10.1109/TIP.2022.3176133
    [30] C. E. Moxey, S. J. Sangwine, T. A. Ell, Hypercomplex correlation techniques for vector images, IEEE Trans. Signal Process., 51 (2003), 1941–1953. https://doi.org/10.1109/TSP.2003.812734 doi: 10.1109/TSP.2003.812734
    [31] S. L. Adler, Scattering and decay theory for quaternionic quantum mechanics and structure of induced t nonconservation, Phys. Rev. D, 37 (1988), 3654–3662. https://doi.org/10.1103/PhysRevD.37.3654 doi: 10.1103/PhysRevD.37.3654
    [32] Z. Jia, M. Wei, M. X. Zhao, Y. Chen, A new real structure-preserving quaternion QR algorithm, J. Comput. Appl. Math., 343 (2018), 26–48. https://doi.org/10.1016/j.cam.2018.04.019 doi: 10.1016/j.cam.2018.04.019
    [33] S. F. Yuan, Least squares pure imaginary solution and real solution of quaternion matrix equation AXB+CXD=E with the least norm, J. Appl. Math., 2014 (2014), 1–9. https://doi.org/10.1155/2014/857081 doi: 10.1155/2014/857081
    [34] F. Zhang, W. Mu, Y. Li, J. Zhao, Special least squares solutions of the quaternion matrix equation AXB+CXD=E, Comput. Math. Appl., 72 (2016), 1426–1435. https://doi.org/10.1016/j.camwa.2016.07.019 doi: 10.1016/j.camwa.2016.07.019
    [35] Y. Tian, X. Liu, S. F. Yuan, On Hermitian solutions of the generalized quaternion matrix equation AXB+CXD=E, Math. Probl. Eng., 2021 (2021), 1–10. https://doi.org/10.1155/2021/1497335 doi: 10.1155/2021/1497335
    [36] D. A. Turkington, Matrix Calculus & Zero-One Matrices: Statistical and Econometric Applications, Cambridge University Press, Cambridge, 2002. https://doi.org/10.1017/CBO9780511528460
    [37] A. Ben-Israel, T. N. E. Greville, Generalized Inverses: Theory and Applications, 3rd edition, Springer, New York, 2003. https://doi.org/10.1007/b97366
    [38] Z. Jia, M. K. Ng, Structure preserving quaternion generalized minimal residual method, SIAM J. Matrix Anal. Appl., 42 (2021), 616–634. https://doi.org/10.1137/20M133751X doi: 10.1137/20M133751X
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)