
Metric geometric means with arbitrary weights of positive definite matrices involving semi-tensor products

  • We extend the notion of the classical metric geometric mean (MGM) for positive definite matrices of the same dimension to matrices of arbitrary dimensions, with the usual matrix products replaced by semi-tensor products. When the weights are arbitrary real numbers, the weighted MGMs possess not only the nice properties of the classical case but also an affine change of parameters, an exponential law, and cancellability. Moreover, when the weights belong to the unit interval, the weighted MGM enjoys two further remarkable properties, namely monotonicity and continuity from above. We then apply a continuity argument to extend the weighted MGM to positive semidefinite matrices, where the weights belong to the unit interval. It turns out that this matrix mean possesses rich algebraic, order, and analytic properties, such as monotonicity, continuity from above, congruent invariance, permutation invariance, affine change of parameters, and exponential law. Furthermore, we investigate certain equations concerning weighted MGMs of positive definite matrices. Such equations are always uniquely solvable with explicit solutions. The notion of MGMs can be applied to solve certain symmetric word equations in two letters.

    Citation: Arnon Ploymukda, Pattrawut Chansangiam. Metric geometric means with arbitrary weights of positive definite matrices involving semi-tensor products[J]. AIMS Mathematics, 2023, 8(11): 26153-26167. doi: 10.3934/math.20231333




    The notion of the metric geometric mean (MGM for short) of positive definite matrices arises in many mathematical areas, e.g., matrix/operator theory, geometry, and group theory. For any positive definite matrices $A$ and $B$ of the same size, the MGM of $A$ and $B$ is defined as

    $A \,\sharp\, B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.$ (1.1)

    From an algebraic viewpoint, $A \,\sharp\, B$ is the unique positive solution of the Riccati equation (see [1, Ch. 4])

    $X A^{-1} X = B.$ (1.2)

    In fact, the explicit formula (1.1) and the equation (1.2) are two equivalent ways to describe the geometric mean; see [2]. From a differential-geometric viewpoint, $A \,\sharp\, B$ is the unique midpoint of the Riemannian geodesic from $A$ to $B$; the point $\gamma(t)$ on this geodesic is called the $t$-weighted geometric mean of $A$ and $B$:

    $\gamma(t) := A \,\sharp_t\, B = A^{1/2} (A^{-1/2} B A^{-1/2})^{t} A^{1/2}, \quad 0 \leq t \leq 1.$ (1.3)

    This midpoint is measured with respect to a natural Riemannian metric; see the monograph [1, Ch. 6] or Section 2. The weighted MGMs (1.3) possess rich algebraic, order, and analytic properties, namely, positive homogeneity, congruent invariance, permutation invariance, self duality, monotonicity, and continuity from above; see [3, Sect. 3].
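    As a quick illustration (ours, not part of the paper), the following NumPy sketch checks numerically that formula (1.1) solves the Riccati equation (1.2) for a random pair of same-size positive definite matrices; the helper names powm and classical_mgm are our own.

```python
import numpy as np

def powm(S, t):
    """Principal t-th power of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def classical_mgm(A, B, t=0.5):
    """Classical weighted MGM (1.3): A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}."""
    Ah, Aih = powm(A, 0.5), powm(A, -0.5)
    return Ah @ powm(Aih @ B @ Aih, t) @ Ah

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)); A = X @ X.T + 4 * np.eye(4)   # random positive definite A
Y = rng.standard_normal((4, 4)); B = Y @ Y.T + 4 * np.eye(4)   # random positive definite B

G = classical_mgm(A, B)                              # A # B as in (1.1)
print(np.allclose(G @ np.linalg.inv(A) @ G, B))      # Riccati equation (1.2): X A^{-1} X = B
```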

    From an operator-theoretic approach, the weighted MGMs (1.3) for positive operators on a Hilbert space are Kubo-Ando means [4] in the sense that they satisfy monotonicity, the transformer inequality, continuity from above, and normalization. More precisely, $A \,\sharp_t\, B$ is the Kubo-Ando mean associated with the operator monotone function $f(x) = x^t$ on the positive half-line; e.g., [3, Sect. 3]. The geometric mean serves as a tool for deriving matrix/operator inequalities; see the original idea in [5], see also [1, Ch. 4], [3, Sect. 3], and [6]. The cancellability of the weighted MGM together with spectral theory can be applied to solve operator mean equations; see [7]. Indeed, given two positive invertible operators $A$ and $B$ acting on the same Hilbert space (or positive definite matrices of the same dimension), the equation $A \,\sharp_t\, X = B$ is uniquely solvable in terms of the weighted MGM, in which the weight $t$ can be any nonzero real number. See [8,9,10] for further developments of the geometric mean theory for matrices/operators.

    A series of works by Lawson and Lim investigated the theory of (weighted) MGMs in various frameworks. Indeed, the geometric mean (with weight 1/2) can be naturally defined on symmetric cones [11], symmetric sets [12], two-powered twisted subgroups [12,13], and Bruhat-Tits spaces [2]. The framework of reflection quasigroups (based on point-reflection geometry and quasigroup theory) [13] allows one to define weighted MGMs via geodesics, where the weights can be any dyadic rationals. On lineated symmetric spaces [14], the weights can be arbitrary real numbers, due to the density of the dyadic rationals on the real line. This theory can be applied to solve certain symmetric word equations in two matrix letters; see [15]. Moreover, mean equations related to MGMs were investigated in [13,16].

    In the formulas (1.1) and (1.3), the matrix products are the usual products between matrices of matching dimensions. We can extend the MGM theory by replacing the usual product with the semi-tensor product (STP) between square matrices of general dimensions. The STP was introduced by Cheng [17]; it reduces to the usual matrix product in the matching-dimension case. The STP keeps various algebraic properties of the usual matrix product, such as distributivity over addition, associativity, and compatibility with transposition, inversion, and scalar multiplication. The STP is a useful tool when dealing with vectors and matrices in classical and fuzzy logic, lattices and universal algebra, and differential geometry; see [18,19]. STPs turn out to have a variety of applications in other fields: networked evolutionary games [20], finite state machines [21], Boolean networks [19,22,23], physics [24], and engineering [25]. Many authors have developed matrix equations based on STPs, e.g., Sylvester-type equations [26,27,28] and the quadratic equation $A \ltimes X \ltimes X = B$ [29].

    The present paper aims to develop further the theory of weighted MGMs for positive definite matrices of arbitrary dimensions, where the matrix products are given by STPs. In this case, the weights can be arbitrary real numbers; see Section 3. It turns out that this mean satisfies various properties as in the classical case. The most interesting case is when the weights belong to the unit interval: here the weighted MGM also satisfies monotonicity and continuity from above. Then we investigate the theory when either $A$ or $B$ is not assumed to be invertible; in that case, the weights are restricted to $[0,\infty)$ or $(-\infty,1]$. We also use a continuity argument to study the weighted MGMs of positive semidefinite matrices, in which the weights belong to $[0,1]$. Moreover, we prove the cancellability of the weighted MGMs and apply it to solve certain nonlinear matrix equations concerning MGMs; see Section 4. In Section 5, we apply the theory to solve the Riccati equation and certain symmetric word equations in two matrix letters. Finally, we summarize the whole work in Section 6.

    In the next section, we set up basic notation and provide preliminary results on the Riemannian geometry of positive definite matrices, the tensor product, and the semi-tensor product.

    Throughout, let $M_{m,n}$ be the set of all $m \times n$ complex matrices, and abbreviate $M_{n,n}$ to $M_n$. Define $\mathbb{C}^n = M_{n,1}$, the set of $n$-dimensional complex vectors. Denote by $A^T$ and $A^*$ the transpose and conjugate transpose of a matrix $A$, respectively. The $n \times n$ identity matrix is denoted by $I_n$. The general linear group of $n \times n$ invertible complex matrices is denoted by $GL_n$. The symbols $H_n$ and $PS_n$ stand for the vector space of $n \times n$ Hermitian matrices and the cone of $n \times n$ positive semidefinite matrices, respectively. For a pair $(A,B) \in H_n \times H_n$, the partial ordering $A \geq B$ means that $A - B$ lies in the positive cone $PS_n$. In particular, $A \in PS_n$ if and only if $A \geq 0$. Let us denote the set of $n \times n$ positive definite matrices by $P_n$. For each $A \in H_n$, the strict inequality $A > 0$ indicates that $A \in P_n$.

    A matrix pair $(A,B) \in M_{m,n} \times M_{p,q}$ is said to satisfy the factor-dimension condition if $n \mid p$ or $p \mid n$. In this case, we write $A \succ_k B$ when $n = kp$, and $A \prec_k B$ when $p = kn$.

    Recall that $M_n$ is a Hilbert space endowed with the Hilbert-Schmidt inner product $\langle A,B \rangle_{HS} = \operatorname{tr}(A^*B)$ and the associated norm $\|A\|_{HS} = (\operatorname{tr} A^*A)^{1/2}$. The subset $P_n$, which is an open subset of $H_n$, is a Riemannian manifold endowed with the trace Riemannian metric

    $ds = \| A^{-1/2}\, dA\, A^{-1/2} \|_{HS} = [\operatorname{tr}(A^{-1}\, dA)^2]^{1/2}.$

    If $\gamma : [a,b] \to P_n$ is a (piecewise) differentiable path in $P_n$, we define the length of $\gamma$ by

    $L(\gamma) = \int_a^b \| \gamma^{-1/2}(t)\, \gamma'(t)\, \gamma^{-1/2}(t) \|_{HS}\, dt.$

    For each $X \in GL_n$, the congruence transformation

    $\Gamma_X : P_n \to P_n, \quad A \mapsto X^* A X$ (2.1)

    is bijective, and the composition $\Gamma_X \circ \gamma : [a,b] \to P_n$ is another path in $P_n$. For any $A, B \in P_n$, the distance between $A$ and $B$ is given by

    $\delta_{HS}(A,B) = \inf \{ L(\gamma) : \gamma \text{ is a path from } A \text{ to } B \}.$

    Lemma 2.1. (e.g. [1]). For each $A, B \in P_n$, there is a unique geodesic from $A$ to $B$, parametrized by

    $\gamma(t) = A^{1/2} (A^{-1/2} B A^{-1/2})^{t} A^{1/2}, \quad 0 \leq t \leq 1.$ (2.2)

    This geodesic is natural in the sense that $\delta_{HS}(A, \gamma(t)) = t\, \delta_{HS}(A,B)$ for each $t$.

    For any $A, B \in P_n$, denote the geodesic from $A$ to $B$ by $[A,B]$.

    This subsection is a brief review on tensor products and semi-tensor products of matrices. Recall that for any matrices $A = [a_{ij}] \in M_{m,n}$ and $B \in M_{p,q}$, their tensor product is defined by

    $A \otimes B = [a_{ij} B] \in M_{mp,nq}.$

    The tensor operation $(A,B) \mapsto A \otimes B$ is bilinear and associative.

    Lemma 2.2 (e.g. [3]). Let $(A,B) \in M_{m,n} \times M_{p,q}$ and $(P,Q) \in M_m \times M_n$. Then we have

    (1). $A \otimes B = 0$ if and only if either $A = 0$ or $B = 0$;

    (2). $(A \otimes B)^* = A^* \otimes B^*$;

    (3). if $(P,Q) \in GL_m \times GL_n$, then $(P \otimes Q)^{-1} = P^{-1} \otimes Q^{-1}$;

    (4). if $(P,Q) \in PS_m \times PS_n$, then $P \otimes Q \in PS_{mn}$ and $(P \otimes Q)^{1/2} = P^{1/2} \otimes Q^{1/2}$;

    (5). if $(P,Q) \in P_m \times P_n$, then $P \otimes Q \in P_{mn}$;

    (6). $\det(P \otimes Q) = (\det P)^n (\det Q)^m$.

    To define the semi-tensor product, first consider a pair $(X,Y) \in M_{1,m} \times \mathbb{C}^n$ of a row vector and a column vector, respectively. If $X \succ_k Y$, then we split $X$ into $X_1, X_2, \dots, X_n \in M_{1,k}$ and define the STP of $X$ and $Y$ as

    $X \ltimes Y = \sum_{i=1}^{n} y_i X_i \in M_{1,k}.$

    If $X \prec_k Y$, then we split $Y$ into $Y_1, Y_2, \dots, Y_m \in \mathbb{C}^k$ and define the STP of $X$ and $Y$ as

    $X \ltimes Y = \sum_{i=1}^{m} x_i Y_i \in \mathbb{C}^k.$

    In general, for a pair $(A,B) \in M_{m,n} \times M_{p,q}$ satisfying the factor-dimension condition, we define

    $A \ltimes B = [A_i \ltimes B^j]_{i,j=1}^{m,q},$

    where $A_i$ is the $i$-th row of $A$ and $B^j$ is the $j$-th column of $B$. More generally, for an arbitrary matrix pair $(A,B) \in M_{m,n} \times M_{p,q}$, we let $\alpha = \operatorname{lcm}(n,p)$ and define

    $A \ltimes B = (A \otimes I_{\alpha/n})(B \otimes I_{\alpha/p}) \in M_{\alpha m/n,\, \alpha q/p}.$

    The operation $(A,B) \mapsto A \ltimes B$ turns out to be bilinear, associative, and continuous.
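    To make the construction concrete, here is a minimal NumPy sketch (ours, not from the paper) of the general STP $A \ltimes B = (A \otimes I_{\alpha/n})(B \otimes I_{\alpha/p})$, together with numerical checks of associativity and of the transpose rule of Lemma 2.3 for real matrices; the function name stp is our own.

```python
import numpy as np

def stp(A, B):
    """Semi-tensor product: A ⋉ B = (A ⊗ I_{α/n})(B ⊗ I_{α/p}), where α = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    a = np.lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))

print(stp(A, B).shape)                                      # (4, 4): here α = lcm(2, 4) = 4
print(np.allclose(stp(stp(A, B), C), stp(A, stp(B, C))))    # associativity of ⋉
print(np.allclose(stp(A, B).T, stp(B.T, A.T)))              # (A ⋉ B)^T = B^T ⋉ A^T for real matrices
```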

    Lemma 2.3 (e.g. [18]). Let $(A,B) \in M_{m,n} \times M_{p,q}$ and $(P,Q) \in M_m \times M_n$. Then we have

    (1). $(A \ltimes B)^* = B^* \ltimes A^*$;

    (2). if $(P,Q) \in GL_m \times GL_n$, then $(P \ltimes Q)^{-1} = Q^{-1} \ltimes P^{-1}$;

    (3). $\det(P \ltimes Q) = (\det P)^{\alpha/m} (\det Q)^{\alpha/n}$, where $\alpha = \operatorname{lcm}(m,n)$.

    Proposition 2.4. Let $A \in M_m$, $X \in M_n$, and $S, T \in H_m$.

    (1). If $A \geq 0$, then $X^* \ltimes A \ltimes X \geq 0$.

    (2). If $S \geq T$, then $X^* \ltimes S \ltimes X \geq X^* \ltimes T \ltimes X$.

    (3). If $A > 0$ and $X \in GL_n$, then $X^* \ltimes A \ltimes X > 0$.

    (4). If $S > T$ and $X \in GL_n$, then $X^* \ltimes S \ltimes X > X^* \ltimes T \ltimes X$.

    Proof. 1) Since $(X^* \ltimes A \ltimes X)^* = X^* \ltimes A \ltimes X$, we have that $X^* \ltimes A \ltimes X$ is Hermitian. Let $\alpha = \operatorname{lcm}(m,n)$ and $u \in \mathbb{C}^\alpha$. Set $v = X \ltimes u$. Using Lemma 2.2, we obtain that $A \otimes I_{\alpha/m} \geq 0$ and then, by Lemma 2.3,

    $u^* (X^* \ltimes A \ltimes X) u = (X \ltimes u)^* \ltimes A \ltimes (X \ltimes u) = v^* (A \otimes I_{\alpha/m}) v \geq 0.$

    This implies that $X^* \ltimes A \ltimes X \geq 0$. 2) Since $S \geq T$, we have $S - T \geq 0$. Applying assertion 1), we get $X^* \ltimes (S-T) \ltimes X \geq 0$, i.e., $X^* \ltimes S \ltimes X \geq X^* \ltimes T \ltimes X$. The proofs of assertions 3)-4) are similar to those of assertions 1)-2), respectively.

    We extend the classical weighted MGM (1.3) to a pair of positive definite matrices of arbitrary (possibly different) sizes as follows.

    Definition 3.1. Let $(A,B) \in P_m \times P_n$. For any $t \in \mathbb{R}$, the $t$-weighted metric geometric mean (MGM) of $A$ and $B$ is defined by

    $A \,\sharp_t\, B = A^{1/2} \ltimes (A^{-1/2} \ltimes B \ltimes A^{-1/2})^{t} \ltimes A^{1/2} \in M_\alpha,$ (3.1)

    where $\alpha = \operatorname{lcm}(m,n)$.

    Note that when $m = n$, Eq (3.1) reduces to the classical one (1.3). In particular, $A \,\sharp_0\, B = A \otimes I_{\alpha/m}$, $A \,\sharp_1\, B = B \otimes I_{\alpha/n}$, $A \,\sharp_{-1}\, B = A \ltimes B^{-1} \ltimes A$, and $A \,\sharp_2\, B = B \ltimes A^{-1} \ltimes B$. Fundamental properties of the weighted MGMs (3.1) are as follows.
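    The following NumPy sketch (ours, not from the paper) implements Definition 3.1 on top of the STP and verifies the special cases listed above; the helper names powm, stp, mgm, and rand_spd are our own.

```python
import numpy as np

def powm(S, t):
    """Principal t-th power of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def stp(A, B):
    """Semi-tensor product A ⋉ B = (A ⊗ I_{α/n})(B ⊗ I_{α/p}), α = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    a = np.lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def mgm(A, B, t):
    """A ♯_t B of Definition 3.1: A^{1/2} ⋉ (A^{-1/2} ⋉ B ⋉ A^{-1/2})^t ⋉ A^{1/2}."""
    M = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(M, t)), powm(A, 0.5))

def rand_spd(n, rng):
    """Random symmetric positive definite test matrix."""
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

rng = np.random.default_rng(2)
A, B = rand_spd(2, rng), rand_spd(3, rng)                            # here α = lcm(2, 3) = 6

print(np.allclose(mgm(A, B, 0), np.kron(A, np.eye(3))))              # A ♯_0 B = A ⊗ I_{α/m}
print(np.allclose(mgm(A, B, 1), np.kron(B, np.eye(2))))              # A ♯_1 B = B ⊗ I_{α/n}
print(np.allclose(mgm(A, B, -1), stp(stp(A, np.linalg.inv(B)), A)))  # A ♯_{-1} B = A ⋉ B^{-1} ⋉ A
print(np.allclose(mgm(A, B, 2), stp(stp(B, np.linalg.inv(A)), B)))   # A ♯_2 B = B ⋉ A^{-1} ⋉ B
```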

    Theorem 3.2. Let $(A,B) \in P_m \times P_n$. Let $r, s, t \in \mathbb{R}$ and $\alpha = \operatorname{lcm}(m,n)$. Then

    (1). Positivity: $A \,\sharp_t\, B > 0$.

    (2). Fixed-point property: $A \,\sharp_t\, A = A$.

    (3). Positive homogeneity: $c (A \,\sharp_t\, B) = (cA) \,\sharp_t\, (cB)$ for all $c > 0$.

    (4). Congruent invariance: $C^* \ltimes (A \,\sharp_t\, B) \ltimes C = (C^* \ltimes A \ltimes C) \,\sharp_t\, (C^* \ltimes B \ltimes C)$ for all $C \in GL_\alpha$.

    (5). Self duality: $(A \,\sharp_t\, B)^{-1} = A^{-1} \,\sharp_t\, B^{-1}$.

    (6). Permutation invariance: $A \,\sharp_{1/2}\, B = B \,\sharp_{1/2}\, A$. More generally, $A \,\sharp_t\, B = B \,\sharp_{1-t}\, A$.

    (7). Affine change of parameters: $(A \,\sharp_r\, B) \,\sharp_t\, (A \,\sharp_s\, B) = A \,\sharp_{(1-t)r+ts}\, B$.

    (8). Exponential law: $A \,\sharp_r\, (A \,\sharp_s\, B) = A \,\sharp_{rs}\, B$.

    (9). $C^{-1} \ltimes (A \,\sharp_t\, B) \ltimes C^{-1} = (C^{-1} \ltimes A \ltimes C^{-1}) \,\sharp_t\, (C^{-1} \ltimes B \ltimes C^{-1})$ for any $C \in P_m$.

    (10). Left cancellability: Let $Y_1, Y_2 \in P_n$ and $t \in \mathbb{R} \setminus \{0\}$. Then the equation $A \,\sharp_t\, Y_1 = A \,\sharp_t\, Y_2$ implies $Y_1 = Y_2$. In other words, for each $t \neq 0$, the map $X \mapsto A \,\sharp_t\, X$ is an injective map from $P_n$ to $P_\alpha$.

    (11). Right cancellability: Let $X_1, X_2 \in P_m$ and $t \in \mathbb{R} \setminus \{1\}$. Then the equation $X_1 \,\sharp_t\, B = X_2 \,\sharp_t\, B$ implies $X_1 = X_2$. In other words, for each $t \neq 1$, the map $X \mapsto X \,\sharp_t\, B$ is an injective map from $P_m$ to $P_\alpha$.

    (12). Determinantal identity: $\det(A \,\sharp_t\, B) = (\det A)^{(1-t)\alpha/m} (\det B)^{t\alpha/n}$.

    Proof. The positivity follows from Proposition 2.4(3). Properties 2 and 3 follow directly from the formula (3.1). Let $\gamma$ be the natural parametrization of the geodesic $[A \otimes I_{\alpha/m},\, B \otimes I_{\alpha/n}]$ in the space $P_\alpha$, as discussed in Lemma 2.1. To prove the congruent invariance, let $C \in GL_\alpha$ and consider the congruence transformation $\Gamma_C$ defined by (2.1). Then the path $\gamma_C(t) := \Gamma_C(\gamma(t))$ joins the points $\gamma_C(0) = C^* \ltimes A \ltimes C$ and $\gamma_C(1) = C^* \ltimes B \ltimes C$. By Lemma 2.1, we obtain

    $C^* \ltimes (A \,\sharp_t\, B) \ltimes C = \Gamma_C(\gamma(t)) = \gamma_C(t) = (C^* \ltimes A \ltimes C) \,\sharp_t\, (C^* \ltimes B \ltimes C).$

    To prove the self duality, let $\beta(t) = (\gamma(t))^{-1}$. By Lemma 2.2, we have $\beta(0) = A^{-1} \otimes I_{\alpha/m}$ and $\beta(1) = B^{-1} \otimes I_{\alpha/n}$. Thus

    $(A \,\sharp_t\, B)^{-1} = (\gamma(t))^{-1} = \beta(t) = A^{-1} \,\sharp_t\, B^{-1}.$

    To prove the 6th item, define $f : \mathbb{R} \to \mathbb{R}$ by $f(t) = 1-t$. Let $\delta = \gamma \circ f$. Then $\delta(0) = B \otimes I_{\alpha/n}$ and $\delta(1) = A \otimes I_{\alpha/m}$. By Lemma 2.1, we have $\delta(t) = B \,\sharp_t\, A$, and thus

    $A \,\sharp_t\, B = \gamma(t) = \gamma(f(1-t)) = \delta(1-t) = B \,\sharp_{1-t}\, A.$

    To prove the 7th item, fix $r, s$ and let $t$ vary. Let $\delta(t) = (A \,\sharp_r\, B) \,\sharp_t\, (A \,\sharp_s\, B)$. We have $\delta(0) = A \,\sharp_r\, B$ and $\delta(1) = A \,\sharp_s\, B$. Define $f : \mathbb{R} \to \mathbb{R}$, $f(t) = (1-t)r + ts$, and $\beta = \gamma \circ f$. We obtain

    $\beta(t) = \gamma((1-t)r + ts) = A \,\sharp_{(1-t)r+ts}\, B$

    and $\beta(0) = A \,\sharp_r\, B$, $\beta(1) = A \,\sharp_s\, B$. Since $\beta$ is an affine reparametrization of the geodesic $\gamma$, the uniqueness in Lemma 2.1 yields $\delta(t) = \beta(t)$, i.e., $(A \,\sharp_r\, B) \,\sharp_t\, (A \,\sharp_s\, B) = A \,\sharp_{(1-t)r+ts}\, B$. The exponential law is derived from the 7th item as follows: $A \,\sharp_r\, (A \,\sharp_s\, B) = (A \,\sharp_0\, B) \,\sharp_r\, (A \,\sharp_s\, B) = A \,\sharp_{rs}\, B$. The 9th item follows from the congruent invariance and the self duality. To prove the left cancellability, let $t \in \mathbb{R} \setminus \{0\}$ and suppose that $A \,\sharp_t\, Y_1 = A \,\sharp_t\, Y_2$. We have by the exponential law that

    $Y_1 \otimes I_{\alpha/n} = A \,\sharp_1\, Y_1 = A \,\sharp_{1/t}\, (A \,\sharp_t\, Y_1) = A \,\sharp_{1/t}\, (A \,\sharp_t\, Y_2) = A \,\sharp_1\, Y_2 = Y_2 \otimes I_{\alpha/n}.$

    It follows that $(Y_1 - Y_2) \otimes I_{\alpha/n} = 0$, and thus by Lemma 2.2 we conclude that $Y_1 = Y_2$. Hence, the map $X \mapsto A \,\sharp_t\, X$ is injective. The right cancellability follows from the left cancellability together with the permutation invariance. The determinantal identity follows directly from Lemmas 2.3 and 2.2:

    $\det(A \,\sharp_t\, B) = (\det A^{1/2})^{\alpha/m}\, \det\big((A^{-1/2} \ltimes B \ltimes A^{-1/2})^{t}\big)\, (\det A^{1/2})^{\alpha/m} = (\det A)^{\alpha/m} \big(\det(A^{-1/2} \ltimes B \ltimes A^{-1/2})\big)^{t} = (\det A)^{\alpha/m} (\det A)^{-t\alpha/m} (\det B)^{t\alpha/n} = (\det A)^{(1-t)\alpha/m} (\det B)^{t\alpha/n}.$

    This finishes the proof.

    Remark 3.3. From Theorem 3.2, a particular case of the exponential law is that $(B^s)^r = B^{sr}$ for any $B > 0$ and $r, s \in \mathbb{R}$. The congruent invariance means that the operation $\sharp_t$ is invariant under the congruence transformation $\Gamma_C$ for any $C \in GL_\alpha$.
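    The exponential law, the affine change of parameters, and the determinantal identity of Theorem 3.2 are easy to test numerically. The sketch below (ours, not from the paper) does so for matrices of sizes 2 and 3, so that $\alpha = 6$; the helpers powm, stp, and mgm are the same ad hoc functions as in the earlier snippets.

```python
import numpy as np

def powm(S, t):
    w, V = np.linalg.eigh(S)                 # principal power of an SPD matrix
    return (V * w**t) @ V.T

def stp(A, B):
    n, p = A.shape[1], B.shape[0]            # A ⋉ B = (A ⊗ I_{α/n})(B ⊗ I_{α/p})
    a = np.lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def mgm(A, B, t):
    M = stp(stp(powm(A, -0.5), B), powm(A, -0.5))      # A ♯_t B of Definition 3.1
    return stp(stp(powm(A, 0.5), powm(M, t)), powm(A, 0.5))

rng = np.random.default_rng(3)
X = rng.standard_normal((2, 2)); A = X @ X.T + 2 * np.eye(2)    # A ∈ P_2
Y = rng.standard_normal((3, 3)); B = Y @ Y.T + 3 * np.eye(3)    # B ∈ P_3, α = 6
r, s, t = -0.7, 1.4, 0.6

# Exponential law: A ♯_r (A ♯_s B) = A ♯_{rs} B
print(np.allclose(mgm(A, mgm(A, B, s), r), mgm(A, B, r * s)))
# Affine change of parameters: (A ♯_r B) ♯_t (A ♯_s B) = A ♯_{(1-t)r+ts} B
print(np.allclose(mgm(mgm(A, B, r), mgm(A, B, s), t), mgm(A, B, (1 - t) * r + t * s)))
# Determinantal identity: det(A ♯_t B) = (det A)^{(1-t)α/m} (det B)^{tα/n}
lhs = np.linalg.det(mgm(A, B, t))
rhs = np.linalg.det(A) ** ((1 - t) * 3) * np.linalg.det(B) ** (t * 2)
print(np.isclose(lhs, rhs))
```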

    Now, we focus on weighted MGMs in which the weight lies in the interval $[0,1]$. Let us write $A_k \to A$ when the matrix sequence $(A_k)$ converges to the matrix $A$. If $(A_k)$ is a sequence in $H_n$, the expression $A_k \downarrow A$ means that $(A_k)$ is a decreasing sequence and $A_k \to A$. Recall the following well-known matrix inequality:

    Lemma 3.4 (Löwner-Heinz inequality, e.g. [3]). Let $S, T \in PS_n$ and $w \in [0,1]$. If $S \geq T$, then $S^w \geq T^w$.

    When the weights are in [0,1], this mean has remarkable order and analytic properties:

    Theorem 3.5. Let $(A,B), (C,D) \in P_m \times P_n$ and $w \in [0,1]$.

    (1). Monotonicity: If $A \leq C$ and $B \leq D$, then $A \,\sharp_w\, B \leq C \,\sharp_w\, D$.

    (2). Continuity from above: Let $(A_k, B_k) \in P_m \times P_n$ for all $k \in \mathbb{N}$. If $A_k \downarrow A$ and $B_k \downarrow B$, then $A_k \,\sharp_w\, B_k \downarrow A \,\sharp_w\, B$.

    Proof. To prove the monotonicity, suppose that $A \leq C$ and $B \leq D$. By Proposition 2.4, we have $A^{-1/2} \ltimes B \ltimes A^{-1/2} \leq A^{-1/2} \ltimes D \ltimes A^{-1/2}$. Using Lemma 3.4 and Proposition 2.4, we obtain

    $A \,\sharp_w\, B = A^{1/2} \ltimes (A^{-1/2} \ltimes B \ltimes A^{-1/2})^{w} \ltimes A^{1/2} \leq A^{1/2} \ltimes (A^{-1/2} \ltimes D \ltimes A^{-1/2})^{w} \ltimes A^{1/2} = A \,\sharp_w\, D.$

    This shows the monotonicity of $\sharp_w$ in the second argument. This property together with the permutation invariance in Theorem 3.2 yields

    $A \,\sharp_w\, B = B \,\sharp_{1-w}\, A \leq B \,\sharp_{1-w}\, C = C \,\sharp_w\, B \leq C \,\sharp_w\, D.$

    To prove the continuity from above, suppose that $A_k \downarrow A$ and $B_k \downarrow B$. Applying the monotonicity and the positivity, we conclude that $(A_k \,\sharp_w\, B_k)$ is a decreasing sequence of positive definite matrices. The continuity of the semi-tensor multiplication implies that $A_k^{-1/2} \ltimes B_k \ltimes A_k^{-1/2}$ converges to $A^{-1/2} \ltimes B \ltimes A^{-1/2}$, and thus

    $A_k^{1/2} \ltimes (A_k^{-1/2} \ltimes B_k \ltimes A_k^{-1/2})^{w} \ltimes A_k^{1/2} \to A^{1/2} \ltimes (A^{-1/2} \ltimes B \ltimes A^{-1/2})^{w} \ltimes A^{1/2}.$

    Hence, $A_k \,\sharp_w\, B_k \downarrow A \,\sharp_w\, B$.
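    A quick numerical spot check of the monotonicity in Theorem 3.5 (ours, not from the paper): perturb $A$ and $B$ upward in the Loewner order and confirm that the weighted MGM does not decrease. The helpers are the same ad hoc NumPy functions as before.

```python
import numpy as np

def powm(S, t):
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = np.lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def mgm(A, B, t):
    M = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(M, t)), powm(A, 0.5))

def rand_spd(n, rng):
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

rng = np.random.default_rng(4)
w = 0.35
A, B = rand_spd(2, rng), rand_spd(4, rng)
C = A + rand_spd(2, rng)                           # C ≥ A in the Loewner order
D = B + rand_spd(4, rng)                           # D ≥ B in the Loewner order

gap = mgm(C, D, w) - mgm(A, B, w)                  # should be positive semidefinite
print(np.linalg.eigvalsh(gap).min() >= -1e-10)     # monotonicity: A ♯_w B ≤ C ♯_w D
```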

    Now, we extend the weighted MGM to positive semidefinite matrices. Indeed, when the first matrix argument is positive definite but the second one is positive semidefinite, the weights can be any nonnegative real numbers.

    Definition 3.6. Let $(A,B) \in P_m \times PS_n$. For any $t \in [0,\infty)$, the $t$-weighted MGM of $A$ and $B$ is defined by

    $A \,\sharp_t\, B = A^{1/2} \ltimes (A^{-1/2} \ltimes B \ltimes A^{-1/2})^{t} \ltimes A^{1/2}.$ (3.2)

    Here, we apply the convention $X^0 = I_\alpha$ for any $X \in M_\alpha$.

    This definition is well defined since the matrix $A^{-1/2} \ltimes B \ltimes A^{-1/2}$ is positive semidefinite according to Proposition 2.4. The permutation invariance suggests the following definition.

    Definition 3.7. Let $(A,B) \in PS_m \times P_n$. For any $t \in (-\infty,1]$, the $t$-weighted MGM of $A$ and $B$ is defined by

    $A \,\sharp_t\, B = B \,\sharp_{1-t}\, A = B^{1/2} \ltimes (B^{-1/2} \ltimes A \ltimes B^{-1/2})^{1-t} \ltimes B^{1/2}.$ (3.3)

    Here, we apply the convention $X^0 = I_\alpha$ for any $X \in M_\alpha$.

    This definition is well defined according to Definition 3.6 (since $1-t \geq 0$). Note that when $A > 0$ and $B > 0$, Definitions 3.1, 3.6, and 3.7 coincide. Fundamental properties of the means (3.2) and (3.3) are as follows.

    Theorem 3.8. Denote $\alpha = \operatorname{lcm}(m,n)$. If either

    (i) $(A,B) \in P_m \times PS_n$ and $r, s, t \geq 0$, or

    (ii) $(A,B) \in PS_m \times P_n$ and $r, s, t \leq 1$,

    then

    (1). Positivity: $A \,\sharp_t\, B \geq 0$.

    (2). Positive homogeneity: $c (A \,\sharp_t\, B) = (cA) \,\sharp_t\, (cB)$ for all $c > 0$.

    (3). Congruent invariance: $C^* \ltimes (A \,\sharp_t\, B) \ltimes C = (C^* \ltimes A \ltimes C) \,\sharp_t\, (C^* \ltimes B \ltimes C)$ for all $C \in GL_\alpha$.

    (4). Affine change of parameters: $(A \,\sharp_r\, B) \,\sharp_t\, (A \,\sharp_s\, B) = A \,\sharp_{(1-t)r+ts}\, B$.

    (5). Exponential law: $A \,\sharp_r\, (A \,\sharp_s\, B) = A \,\sharp_{rs}\, B$.

    (6). Determinantal identity: $\det(A \,\sharp_t\, B) = (\det A)^{(1-t)\alpha/m} (\det B)^{t\alpha/n}$.

    Proof. The proof of each assertion is similar to that in Theorem 3.2. When $A \in PS_m$, we consider $A + \epsilon I_m \in P_m$ and take limits as $\epsilon \to 0^+$. When $B \in PS_n$, we consider $B + \epsilon I_n \in P_n$ and take limits as $\epsilon \to 0^+$.

    It is natural to extend the weighted MGMs of positive definite matrices to those of positive semidefinite matrices by a limit process. Theorem 3.5 (or both Definitions 3.6 and 3.7) then suggests that the weights must lie in the interval $[0,1]$.

    Definition 3.9. Let $(A,B) \in PS_m \times PS_n$. For any $w \in [0,1]$, the $w$-weighted MGM of $A$ and $B$ is defined by

    $A \,\sharp_w\, B = \lim_{\varepsilon \to 0^+} (A + \varepsilon I_m) \,\sharp_w\, (B + \varepsilon I_n) \in M_\alpha,$ (3.4)

    where $\alpha = \operatorname{lcm}(m,n)$. Here, we apply the convention $X^0 = I_\alpha$ for any $X \in M_\alpha$.

    Lemma 3.10. Definition 3.9 is well defined, i.e., the limit (3.4) exists. Moreover, if $(A,B) \in PS_m \times PS_n$, then $A \,\sharp_w\, B \in PS_\alpha$.

    Proof. As $\varepsilon \to 0^+$, the families $A + \varepsilon I_m$ and $B + \varepsilon I_n$ are decreasing nets of positive definite matrices. From the monotonicity property in Theorem 3.5, the net $(A + \varepsilon I_m) \,\sharp_w\, (B + \varepsilon I_n)$ is decreasing. Since this net is also bounded below by the zero matrix, the order completeness of the matrix space guarantees the existence of the limit (3.4). Moreover, the limit matrix is positive semidefinite.
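    The limit (3.4) can be approximated numerically by taking a decreasing sequence of regularization parameters. In the sketch below (ours, not from the paper), $A$ is a singular positive semidefinite matrix, and the successive differences of the regularized means shrink, illustrating the convergence guaranteed by Lemma 3.10; the helpers are the same ad hoc functions as before.

```python
import numpy as np

def powm(S, t):
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = np.lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def mgm(A, B, t):
    M = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(M, t)), powm(A, 0.5))

rng = np.random.default_rng(5)
u = rng.standard_normal((2, 1)); A = u @ u.T                 # rank-one: A ∈ PS_2 but A ∉ P_2
Y = rng.standard_normal((4, 4)); B = Y @ Y.T + np.eye(4)     # B ∈ P_4

w = 0.5
approx = [mgm(A + eps * np.eye(2), B + eps * np.eye(4), w) for eps in (1e-2, 1e-4, 1e-6)]
# Successive gaps between the regularized means shrink as ε decreases; the limit is A ♯_w B of (3.4).
print([np.linalg.norm(approx[i] - approx[i + 1]) for i in range(2)])
```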

    Fundamental properties of weighted MGMs are listed below.

    Theorem 3.11. Let $(A,B), (C,D) \in PS_m \times PS_n$. Let $w, r, s \in [0,1]$ and $\alpha = \operatorname{lcm}(m,n)$. Then

    (1). Fixed-point property: $A \,\sharp_w\, A = A$.

    (2). Positive homogeneity: $c (A \,\sharp_w\, B) = (cA) \,\sharp_w\, (cB)$ for all $c \geq 0$.

    (3). Congruent invariance: $T^* \ltimes (A \,\sharp_w\, B) \ltimes T = (T^* \ltimes A \ltimes T) \,\sharp_w\, (T^* \ltimes B \ltimes T)$ for all $T \in GL_\alpha$.

    (4). Permutation invariance: $A \,\sharp_{1/2}\, B = B \,\sharp_{1/2}\, A$. More generally, $A \,\sharp_w\, B = B \,\sharp_{1-w}\, A$.

    (5). Affine change of parameters: $(A \,\sharp_r\, B) \,\sharp_w\, (A \,\sharp_s\, B) = A \,\sharp_{(1-w)r+ws}\, B$.

    (6). Exponential law: $A \,\sharp_r\, (A \,\sharp_s\, B) = A \,\sharp_{rs}\, B$.

    (7). Determinantal identity: $\det(A \,\sharp_w\, B) = (\det A)^{(1-w)\alpha/m} (\det B)^{w\alpha/n}$.

    (8). Monotonicity: If $A \leq C$ and $B \leq D$, then $A \,\sharp_w\, B \leq C \,\sharp_w\, D$.

    (9). Continuity from above: If $A_k \in PS_m$ and $B_k \in PS_n$ for all $k \in \mathbb{N}$ are such that $A_k \downarrow A$ and $B_k \downarrow B$, then $A_k \,\sharp_w\, B_k \downarrow A \,\sharp_w\, B$.

    Proof. When $A \in PS_m$ and $B \in PS_n$, we can consider $A + \varepsilon I_m \in P_m$ and $B + \varepsilon I_n \in P_n$, and then take limits as $\varepsilon \to 0^+$. The 1st-7th items now follow from Theorem 3.2 (or Theorem 3.8). The 8th-9th items follow from Theorem 3.5. For the continuity from above, if $A_k \downarrow A$ and $B_k \downarrow B$, then the monotonicity implies that the sequence $A_k \,\sharp_w\, B_k$ is decreasing in $k$. Moreover,

    $\lim_{k \to \infty} A_k \,\sharp_w\, B_k = \lim_{k \to \infty} \lim_{\varepsilon \to 0^+} (A_k + \varepsilon I_m) \,\sharp_w\, (B_k + \varepsilon I_n) = \lim_{\varepsilon \to 0^+} \lim_{k \to \infty} (A_k + \varepsilon I_m) \,\sharp_w\, (B_k + \varepsilon I_n) = \lim_{\varepsilon \to 0^+} (A + \varepsilon I_m) \,\sharp_w\, (B + \varepsilon I_n) = A \,\sharp_w\, B.$

    Thus, $A_k \,\sharp_w\, B_k \downarrow A \,\sharp_w\, B$, as desired.

    In particular, the congruent invariance implies the transformer inequality:

    $C \ltimes (A \,\sharp_w\, B) \ltimes C \leq (C \ltimes A \ltimes C) \,\sharp_w\, (C \ltimes B \ltimes C), \quad C \geq 0.$

    This property together with the monotonicity, the continuity from above, and the fixed-point property yields that the mean $\sharp_w$ is a Kubo-Ando mean when $w \in [0,1]$. The results in this section include those for the MGM with weight $1/2$ in [30].

    In this section, we apply our theory to solve certain matrix equations concerning weighted MGMs for positive definite matrices.

    Corollary 4.1. Let $A \in P_m$ and $B, X \in P_n$ with $A \prec_k B$ (i.e., $n = km$). Let $t \in \mathbb{R} \setminus \{0\}$. Then the mean equation

    $A \,\sharp_t\, X = B$ (4.1)

    is uniquely solvable, with explicit solution $X = A \,\sharp_{1/t}\, B$. Moreover, the solution depends continuously on the given matrices $A$ and $B$. In particular, the geometric mean problem $A \,\sharp_{1/2}\, X = B$ has the unique solution $X = A \,\sharp_2\, B = B \ltimes A^{-1} \ltimes B$.

    Proof. The exponential law in Theorem 3.2 implies that

    $A \,\sharp_t\, X = B = A \,\sharp_1\, B = A \,\sharp_t\, (A \,\sharp_{1/t}\, B).$

    Then the left cancellability implies that Eq (4.1) has the unique solution $X = A \,\sharp_{1/t}\, B$. The continuity of the solution follows from the explicit formula (3.1) and the continuity of the semi-tensor operation.

    Remark 4.2. We can also investigate Eq (4.1) when $A \in P_m$ and $B \in PS_n$. In this case, the weight $t$ must be positive. The equation is uniquely solvable, with explicit solution $X = A \,\sharp_{1/t}\, B \in PS_n$. For simplicity, in this section we consider only the case when all given matrices are positive definite.
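    The following sketch (ours, not from the paper) verifies Corollary 4.1 numerically for $A \in P_2$ and $B \in P_4$ (so $m \mid n$ and $\alpha = 4$): the claimed solution $X = A \,\sharp_{1/t}\, B$ indeed satisfies $A \,\sharp_t\, X = B$. The helpers are the same ad hoc NumPy functions as before.

```python
import numpy as np

def powm(S, t):
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = np.lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def mgm(A, B, t):
    M = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(M, t)), powm(A, 0.5))

rng = np.random.default_rng(6)
X0 = rng.standard_normal((2, 2)); A = X0 @ X0.T + 2 * np.eye(2)   # A ∈ P_2
Y0 = rng.standard_normal((4, 4)); B = Y0 @ Y0.T + 4 * np.eye(4)   # B ∈ P_4

t = 0.8
X = mgm(A, B, 1 / t)                       # claimed solution of (4.1): X = A ♯_{1/t} B
print(np.allclose(mgm(A, X, t), B))        # check A ♯_t X = B

X_half = mgm(A, B, 2)                      # t = 1/2: X = A ♯_2 B = B ⋉ A^{-1} ⋉ B
print(np.allclose(mgm(A, X_half, 0.5), B))
```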

    Corollary 4.3. Let $A \in P_m$ and $B, X \in P_n$ with $A \prec_k B$. Let $r, s, t \in \mathbb{R}$.

    (1). If $s \neq 0$ and $t \neq 1$, then the mean equation

    $(A \,\sharp_s\, X) \,\sharp_t\, A = B$ (4.2)

    is uniquely solvable, with explicit solution $X = A \,\sharp_{1/(s(1-t))}\, B$.

    (2). If $s + t \neq st$, then the equation

    $(A \,\sharp_s\, X) \,\sharp_t\, X = B$ (4.3)

    is uniquely solvable, with explicit solution $X = A \,\sharp_\lambda\, B$, where $\lambda = 1/(s+t-st)$.

    (3). If $s(1-t) \neq 1$, then the mean equation

    $(A \,\sharp_s\, X) \,\sharp_t\, B = X$ (4.4)

    is uniquely solvable, with explicit solution $X = A \,\sharp_\lambda\, B$, where $\lambda = t/(st-s+1)$.

    (4). If $s \neq t$, then the equation

    $A \,\sharp_s\, X = B \,\sharp_t\, X$ (4.5)

    is uniquely solvable, with explicit solution $X = A \,\sharp_\lambda\, B$, where $\lambda = (1-t)/(s-t)$.

    (5). If $s - rs + rt \neq 1$, then the equation

    $(A \,\sharp_s\, X) \,\sharp_r\, (B \,\sharp_t\, X) = X$ (4.6)

    is uniquely solvable, with explicit solution $X = A \,\sharp_\lambda\, B$, where $\lambda = (rt-r)/(s-rs+rt-1)$.

    (6). If $t \neq 1$, then the mean equation

    $(A \,\sharp_t\, X) \,\sharp_r\, (B \,\sharp_t\, X) = X$ (4.7)

    is uniquely solvable, with explicit solution $X = A \,\sharp_r\, B$.

    Proof. For the 1st assertion, using Theorem 3.2, we have

    $A \,\sharp_{s(1-t)}\, X = A \,\sharp_{1-t}\, (A \,\sharp_s\, X) = (A \,\sharp_s\, X) \,\sharp_t\, A = B.$

    By Corollary 4.1, we get the desired solution.

    For the 2nd item, applying Theorem 3.2, we get

    $A \,\sharp_{s+t-st}\, X = X \,\sharp_{(1-s)(1-t)}\, A = X \,\sharp_{1-t}\, (X \,\sharp_{1-s}\, A) = (A \,\sharp_s\, X) \,\sharp_t\, X = B.$

    Using Corollary 4.1, we obtain that $X = A \,\sharp_{1/(s+t-st)}\, B$.

    For the 3rd item, the trivial case $s = t = 0$ yields $X = A \,\sharp_0\, B$. Assume that $s, t \neq 0$. From $B \,\sharp_{1-t}\, (A \,\sharp_s\, X) = X$, we get by Theorem 3.2 and Corollary 4.1 that $A \,\sharp_s\, X = B \,\sharp_{1/(1-t)}\, X = X \,\sharp_{t/(t-1)}\, B$. Consider

    $B = X \,\sharp_{(t-1)/t}\, (A \,\sharp_s\, X) = X \,\sharp_{(t-1)/t}\, (X \,\sharp_{1-s}\, A) = X \,\sharp_{(t-1)(1-s)/t}\, A = A \,\sharp_{1/\lambda}\, X,$

    where $\lambda = t/(st-s+1)$. Now, we can deduce the desired solution from Corollary 4.1.

    For the 4th item, the trivial case $s = 0$ yields that the equation $B \,\sharp_t\, X = A \otimes I_k$ has the unique solution $X = B \,\sharp_{1/t}\, A = A \,\sharp_{(t-1)/t}\, B$. Now, consider the case $s \neq 0$. Using Corollary 4.1, we have

    $X = A \,\sharp_{1/s}\, (B \,\sharp_t\, X) = (B \,\sharp_t\, X) \,\sharp_{(s-1)/s}\, A.$

    Applying Eq (4.4), we obtain that $X = B \,\sharp_{(s-1)/(s-t)}\, A = A \,\sharp_\lambda\, B$, where $\lambda = (1-t)/(s-t)$.

    For the 5th item, the case $r = 0$ yields that the equation $A \,\sharp_s\, X = X$ has the unique solution $X = A \,\sharp_0\, B$. Now, consider the case $r \neq 0$. Using Theorem 3.2 and Corollary 4.1, we get

    $B \,\sharp_t\, X = (A \,\sharp_s\, X) \,\sharp_{1/r}\, X = X \,\sharp_{(r-1)/r}\, (X \,\sharp_{1-s}\, A) = X \,\sharp_{(r-1)(1-s)/r}\, A = A \,\sharp_{(rs-s+1)/r}\, X.$

    Applying Eq (4.5), we obtain that $X = A \,\sharp_\lambda\, B$, where $\lambda = (rt-r)/(s-rs+rt-1)$.

    For the 6th item, setting $s = t$ in (4.6) yields the desired result.

    Remark 4.4. Note that the cases $t = 0$ in Eqs (4.1)-(4.5) all reduce to Eq (4.1). A particular case of (4.3) when $s = t = 1/2$ reads that the mean equation

    $(A \,\sharp\, X) \,\sharp\, X = B$ (4.8)

    has the unique solution $X = A \,\sharp_{4/3}\, B$. In Eq (4.4), when $s = t = 1/2$, the mean equation

    $(A \,\sharp\, X) \,\sharp\, B = X$

    has the unique solution $X = A \,\sharp_{2/3}\, B$. If $r = s = t = 1/2$, then Eq (4.6) (or (4.7)) implies that the geometric mean $X = A \,\sharp\, B$ is the unique solution of the equation

    $(A \,\sharp\, X) \,\sharp\, (B \,\sharp\, X) = X.$ (4.9)

    Equations (4.8) and (4.9) were studied in [13] in the framework of dyadic symmetric sets.
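    As a numerical illustration of Remark 4.4 (ours, not from the paper), the sketch below checks the two mean equations with $s = t = 1/2$: $X = A \,\sharp_{4/3}\, B$ solves $(A \,\sharp\, X) \,\sharp\, X = B$, and $X = A \,\sharp_{2/3}\, B$ solves $(A \,\sharp\, X) \,\sharp\, B = X$. The helpers are the same ad hoc functions as before.

```python
import numpy as np

def powm(S, t):
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = np.lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def mgm(A, B, t):
    M = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(M, t)), powm(A, 0.5))

rng = np.random.default_rng(7)
X0 = rng.standard_normal((2, 2)); A = X0 @ X0.T + 2 * np.eye(2)
Y0 = rng.standard_normal((4, 4)); B = Y0 @ Y0.T + 4 * np.eye(4)

X = mgm(A, B, 4 / 3)                                    # claimed solution of (A ♯ X) ♯ X = B, Eq (4.8)
print(np.allclose(mgm(mgm(A, X, 0.5), X, 0.5), B))

X2 = mgm(A, B, 2 / 3)                                   # claimed solution of (A ♯ X) ♯ B = X
print(np.allclose(mgm(mgm(A, X2, 0.5), B, 0.5), X2))
```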

    Recall that a matrix word in two letters $A, B \in M_n$ is an expression of the form

    $W(A,B) = A^{r_1} B^{s_1} A^{r_2} B^{s_2} \cdots A^{r_p} B^{s_p} A^{r_{p+1}},$

    in which the exponents $r_i, s_i \in \mathbb{R} \setminus \{0\}$ for all $i = 1, 2, \dots, p$ and $r_{p+1} \in \mathbb{R}$. A matrix word is said to be symmetric if it is identical to its reversal. A famous symmetric matrix-word equation is the Riccati equation

    $X A X = B,$ (5.1)

    where $A, B, X$ are positive definite matrices of the same dimension. Indeed, in control engineering, an optimal regulator problem for a linear dynamical system reduces (under the controllability and observability conditions) to an algebraic Riccati equation $X A^{-1} X - R^* X - X R = B$, where $A, B$ are positive definite and $R$ is an arbitrary square matrix. The simple case $R = 0$ yields the Riccati equation $X A^{-1} X = B$.

    In our context, the Riccati equation (5.1) can be written as $A^{-1} \,\sharp_2\, X = B$ or $X \,\sharp_{-1}\, A^{-1} = B$. The case $t = 2$ in Corollary 4.1 then reads:

    Corollary 5.1. Let $A \in P_m$ and $B, X \in P_n$ with $A \prec_k B$. Then the Riccati equation $X \ltimes A \ltimes X = B$ has the unique positive solution $X = A^{-1} \,\sharp_{1/2}\, B$.
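    A numerical check of Corollary 5.1 (ours, not from the paper): with $A \in P_2$ and $B \in P_4$, the matrix $X = A^{-1} \,\sharp_{1/2}\, B$ satisfies $X \ltimes A \ltimes X = B$. The helpers are the same ad hoc NumPy functions as before.

```python
import numpy as np

def powm(S, t):
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = np.lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def mgm(A, B, t):
    M = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(M, t)), powm(A, 0.5))

rng = np.random.default_rng(8)
X0 = rng.standard_normal((2, 2)); A = X0 @ X0.T + 2 * np.eye(2)   # A ∈ P_2
Y0 = rng.standard_normal((4, 4)); B = Y0 @ Y0.T + 4 * np.eye(4)   # B ∈ P_4

X = mgm(np.linalg.inv(A), B, 0.5)            # Corollary 5.1: X = A^{-1} ♯_{1/2} B
print(np.allclose(stp(stp(X, A), X), B))     # Riccati equation X ⋉ A ⋉ X = B
```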

    More generally, consider the following symmetric word equation in two positive definite letters $A, B \in P_n$ with respect to the usual products:

    $B = X A X A X \cdots A X A X \quad ((p+1) \text{ terms of } X \text{ and } p \text{ terms of } A) \;=\; X (A X)^p,$

    where $p \in \mathbb{N}$. We now investigate such equations with respect to semi-tensor products.

    Corollary 5.2. Let $A \in P_m$ and $B, X \in P_n$ with $A \prec_k B$.

    (1). Let $p \in \mathbb{N}$. Then the symmetric word equation

    $X \ltimes (A \ltimes X)^p = B$ (5.2)

    is uniquely solvable, with explicit solution $X = A^{-1} \,\sharp_{1/(p+1)}\, B$.

    (2). Let $r \in \mathbb{R} \setminus \{-1\}$. Then the symmetric word equation

    $X \ltimes (X \ltimes A \ltimes X)^r \ltimes X = B$ (5.3)

    is uniquely solvable with explicit solution

    $X = (A^{-1} \,\sharp_{1/(r+1)}\, B)^{1/2}.$

    Proof. First, since $p \in \mathbb{N}$, we can observe the following:

    $X \ltimes (A \ltimes X)^p = X^{1/2} \ltimes (X^{1/2} \ltimes A \ltimes X^{1/2})^p \ltimes X^{1/2}.$

    It follows from the results in Section 3 that

    $X \ltimes (A \ltimes X)^p = X^{1/2} \ltimes (X^{-1/2} \ltimes A^{-1} \ltimes X^{-1/2})^{-p} \ltimes X^{1/2} = X \,\sharp_{-p}\, A^{-1} = A^{-1} \,\sharp_{p+1}\, X.$

    According to Corollary 4.1, the equation $A^{-1} \,\sharp_{p+1}\, X = B$ has the unique solution $X = A^{-1} \,\sharp_{1/(p+1)}\, B$.

    To solve Eq (5.3), we observe the following:

    $X \ltimes (X \ltimes A \ltimes X)^r \ltimes X = X \ltimes (X^{-1} \ltimes A^{-1} \ltimes X^{-1})^{-r} \ltimes X = X^2 \,\sharp_{-r}\, A^{-1} = A^{-1} \,\sharp_{r+1}\, X^2.$

    According to Corollary 4.1, the equation $A^{-1} \,\sharp_{r+1}\, X^2 = B$ has the unique solution $X^2 = A^{-1} \,\sharp_{1/(r+1)}\, B$. Hence, we get the desired formula for $X$.
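    Finally, a numerical check of Corollary 5.2(1) (ours, not from the paper): for $p = 3$, the matrix $X = A^{-1} \,\sharp_{1/(p+1)}\, B$ satisfies the symmetric word equation $X \ltimes (A \ltimes X)^p = B$. The helpers are the same ad hoc NumPy functions as before.

```python
import numpy as np
from functools import reduce

def powm(S, t):
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = np.lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def mgm(A, B, t):
    M = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(M, t)), powm(A, 0.5))

rng = np.random.default_rng(9)
X0 = rng.standard_normal((2, 2)); A = X0 @ X0.T + 2 * np.eye(2)   # A ∈ P_2
Y0 = rng.standard_normal((4, 4)); B = Y0 @ Y0.T + 4 * np.eye(4)   # B ∈ P_4

p = 3
X = mgm(np.linalg.inv(A), B, 1 / (p + 1))         # claimed solution of X ⋉ (A ⋉ X)^p = B, Eq (5.2)
word = reduce(stp, [X] + [stp(A, X)] * p)         # builds X ⋉ (A ⋉ X) ⋉ ... ⋉ (A ⋉ X), p copies of A ⋉ X
print(np.allclose(word, B))
```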

    We have extended the notion of the classical weighted MGM to positive definite matrices of arbitrary dimensions, so that the usual matrix products are generalized to semi-tensor products. When the weights are arbitrary real numbers, the weighted MGMs possess not only nice properties as in the classical case, e.g., the congruent invariance and the self duality, but also the affine change of parameters, the exponential law, and left/right cancellability. Moreover, when the weights belong to the unit interval, the weighted MGM has further remarkable properties, namely, monotonicity and continuity from above (relying on the famous Löwner-Heinz inequality). When the matrix $A$ or $B$ is not assumed to be invertible, we can define $A \,\sharp_t\, B$, where the weight $t$ is restricted to $[0,\infty)$ or $(-\infty,1]$. We then applied a continuity argument to extend the weighted MGM to positive semidefinite matrices, where the weights belong to the unit interval. It turns out that this matrix mean possesses rich algebraic, order, and analytic properties, such as monotonicity, continuity from above, congruent invariance, permutation invariance, affine change of parameters, and exponential law. Furthermore, we investigated certain equations concerning weighted MGMs of positive definite matrices. Due to the cancellability and other properties of the weighted MGM, such equations are always uniquely solvable, with solutions expressed in terms of weighted MGMs. The notion of MGMs can be applied to solve certain symmetric word equations in two positive definite letters. Of particular interest in control engineering is the Riccati equation, here in a general form involving semi-tensor products. Our results include the classical weighted MGMs of matrices as a special case.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was supported by a postdoctoral fellowship of the School of Science, King Mongkut's Institute of Technology Ladkrabang. The authors thank the anonymous referees for their comments and suggestions on the manuscript.

    The authors declare that there are no conflicts of interest.



    [1] R. Bhatia, Positive Definite Matrices, New Jersey: Princeton University Press, 2007.
    [2] J. Lawson, Y. Lim, The geometric mean, matrices, metrics, and more, Amer. Math. Monthly, 108 (2001), 797–812. https://doi.org/10.2307/2695553 doi: 10.2307/2695553
    [3] F. Hiai, Matrix analysis: matrix monotone functions, matrix means, and majorization, Interdiscip. Inf. Sci., 16 (2010), 139–248. https://doi.org/10.4036/iis.2010.139 doi: 10.4036/iis.2010.139
    [4] F. Kubo, T. Ando, Means of positive linear operators, Math. Ann., 246 (1980), 205–224. https://doi.org/10.1007/BF01371042 doi: 10.1007/BF01371042
    [5] T. Ando, Concavity of certain maps on positive definite matrices and applications to Hadamard products, Linear Alg. Appl., 26 (1979), 203–241. https://doi.org/10.1016/0024-3795(79)90179-4 doi: 10.1016/0024-3795(79)90179-4
    [6] A. Ploymukda, P. Chansangiam, Concavity and convexity of several maps involving Tracy-Singh products, Khatri-Rao products, and operator-monotone functions of positive operators, Sci. Asia, 45 (2019), 194–201. https://doi.org/10.2306/scienceasia1513-1874.2019.45.194 doi: 10.2306/scienceasia1513-1874.2019.45.194
    [7] P. Chansangiam, Cancellability and regularity of operator connections with applications to nonlinear operator equations involving means, J. Ineq. Appl., 2015 (2015), 411. https://doi.org/10.1186/s13660-015-0934-7 doi: 10.1186/s13660-015-0934-7
    [8] Y. Lim, Factorizations and geometric means of positive definite matrices, Linear Algebra Appl., 437 (2012), 2159–2172. https://doi.org/10.1016/j.laa.2012.05.039 doi: 10.1016/j.laa.2012.05.039
    [9] A. Ploymukda, P. Chansangiam, Geometric means and Tracy-Singh products for positive operators, Commun. Math. Appl., 9 (2018), 475–488. https://doi.org/10.26713/cma.v9i4.547 doi: 10.26713/cma.v9i4.547
    [10] A. Ploymukda, P. Chansangiam, Weighted Lim's geometric mean of positive invertible operators on a Hilbert space, J. Comput. Anal. Appl., 29 (2020), 390–400.
    [11] Y. Lim, Geometric means on symmetric cones, Arch. der Math., 75 (2000), 39–45. https://doi.org/10.1007/s000130050471 doi: 10.1007/s000130050471
    [12] J. Lawson, Y. Lim, Symmetric sets with midpoints and algebraically equivalent theories, Result. Math., 46 (2004), 37–56. https://doi.org/10.1007/BF03322869 doi: 10.1007/BF03322869
    [13] J. Lawson, Y. Lim, Geometric means and reflection quasigroups, Quasigroups Relat. Syst., 14 (2006), 43–59.
    [14] J. Lawson, Y. Lim, Symmetric space with convex metrics, Forum Math., 19 (2007), 571–602. https://doi.org/10.1515/FORUM.2007.023 doi: 10.1515/FORUM.2007.023
    [15] J. Lawson, Y. Lim, Solving symmetric matrix word equations via symmetric space machinery, Linear Algebra Appl., 414 (2006), 560–569. https://doi.org/10.1016/j.laa.2005.10.035 doi: 10.1016/j.laa.2005.10.035
    [16] P. Chansangiam, Weighted means and weighted mean equations in lineated symmetric spaces, Quasigroups Relat. Syst., 26 (2018), 197–210.
    [17] D. Cheng, Semi-tensor product of matrices and its application to Morgen's problem, Sci. China Ser. F, 44 (2001), 195–212. https://doi.org/10.1007/BF02714570 doi: 10.1007/BF02714570
    [18] D. Cheng, H. Qi, A. Xue, A survey on semi-tensor product of matrices, J. Syst. Sci. Complexity, 20 (2007), 304–322. https://doi.org/10.1007/s11424-007-9027-0 doi: 10.1007/s11424-007-9027-0
    [19] D. Cheng, H. Qi, Q. Li, Analysis and control of boolean networks: a semi-tensor product approach, London: Springer-Verlag, 2011.
    [20] G. Zhao, H. Li, P. Duan, F. E. Alsaadi, Survey on applications of semi-tensor product method in networked evolutionary games, J. Appl. Anal. Comput., 10 (2020), 32–54. https://doi.org/10.11948/20180201 doi: 10.11948/20180201
    [21] Y. Yan, D. Cheng, J. Feng, H. Li, J. Yue, Survey on applications of algebraic state space theory of logical systems to finite state machines, Sci. China Inf. Sci., 66 (2023), 111201. https://doi.org/10.1007/s11432-022-3538-4 doi: 10.1007/s11432-022-3538-4
    [22] Y. Yan, J. Yue, Z. Chen, Algebraic method of simplifying Boolean networks using semi-tensor product of matrices, Asian J. Control, 21 (2019), 2569–2577. https://doi.org/10.1002/asjc.2125 doi: 10.1002/asjc.2125
    [23] H. Ji, Y. Li, X. Ding, J. Lu, Stability analysis of Boolean networks with Markov jump disturbances and their application in apoptosis networks, Electronic Res. Arch., 30 (2022), 3422–3434. https://doi.org/10.3934/era.2022174 doi: 10.3934/era.2022174
    [24] D. Cheng, Y. Dong, Semi-tensor product of matrices and its some applications to physics, Meth. Appl. Anal., 10 (2003), 565–588. https://dx.doi.org/10.4310/MAA.2003.v10.n4.a5 doi: 10.4310/MAA.2003.v10.n4.a5
    [25] H. Li, G. Zhao, M. Meng, J. Feng, A survey on applications of semi-tensor product method in engineering, Sci. China Inf. Sci., 61 (2018), 010202. https://doi.org/10.1007/s11432-017-9238-1 doi: 10.1007/s11432-017-9238-1
    [26] Z. Ji, J. Li, X. Zhou, F. Duan, T. Li, On solutions of matrix equation AXB=C under semi-tensor product, Linear Multilinear Algebra, 69 (2019), 1–29. https://doi.org/10.1080/03081087.2019.1650881 doi: 10.1080/03081087.2019.1650881
    [27] P. Chansangiam, S. V. Sabau, Sylvester matrix equation under the semi-tensor product of matrices, An. Ştiinţ. Univ. Al. I. Cuza Iaşi. Mat. (N.S.), 68 (2022), 263–278. https://doi.org/10.47743/anstim.2022.00020 doi: 10.47743/anstim.2022.00020
    [28] J. Jaiprasert, P. Chansangiam, Solving the Sylvester-transpose matrix equation under the semi-tensor product, Symmetry, 14 (2022), 1094. https://doi.org/10.3390/sym14061094 doi: 10.3390/sym14061094
    [29] J. Wang, J. E. Feng, H. L. Huang, Solvability of the matrix equation $AX^2 = B$ with semi-tensor product, Electronic Res. Arch., 29 (2021), 2249–2267. https://doi.org/10.3934/era.2020114 doi: 10.3934/era.2020114
    [30] P. Chansangiam, A. Ploymukda, Riccati equation and metric geometric means of positive semidefinite matrices involving semi-tensor products, AIMS Math., 8 (2023), 23519–23533. https://doi.org/10.3934/math.20231195 doi: 10.3934/math.20231195
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)