
A new test for detecting specification errors in Gaussian linear mixed-effects models

  • Linear mixed-effects models (LMEMs) are widely used in medical, engineering, and social applications. The accurate specification of the covariance matrix structure within the error term is known to impact the estimation and inference procedures. Thus, it is crucial to detect the source of errors in LMEMs specifications. In this study, we propose combining a user-friendly computational test with an analytical method to visualize the source of errors. Through statistical simulations under different scenarios, we evaluate the performance of the proposed test in terms of the Power and Type I error rate. Our findings indicate that as the sample size n increases, the proposed test effectively detects misspecification in the systematic component, the number of random effects, the within-subject covariance structure, and the covariance structure of the error term in the LMEM with high Power while maintaining the nominal Type I error rate. Finally, we show the practical usefulness of our proposed test with a real-world application.

    Citation: Jairo A. Angel, Francisco M.M. Rocha, Jorge I. Vélez, Julio M. Singer. A new test for detecting specification errors in Gaussian linear mixed-effects models. AIMS Mathematics, 2024, 9(11): 30710-30727. doi: 10.3934/math.20241483




    In mathematics, we are familiar with the notion of geometric mean for positive real numbers. This notion was generalized to that for positive definite matrices of the same dimension in many ways. The metric geometric mean (MGM) of two positive definite matrices A and B is defined as

    A\,\sharp\, B \; = \; A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^{1/2}A^{1/2}. (1.1)

    This mean was introduced by Pusz and Woronowicz [1] and studied in more detail by Ando [2]. Algebraically, A\,\sharp\,B is the unique solution of the algebraic Riccati equation XA^{-1}X = B; see, e.g., [3]. Geometrically, A\,\sharp\,B is the unique midpoint of the Riemannian geodesic from A to B, whose point at parameter t is the t-weighted MGM of A and B:

    A\,\sharp_t\, B \; = \; A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^{t}A^{1/2}, \quad 0 \leq t \leq 1. (1.2)

    Remarkable properties of the mean \sharp_t, where t\in[0,1], are monotonicity, concavity, and upper semi-continuity (according to the famous Löwner-Heinz inequality); see, e.g., [2,4] and the survey [5, Sect. 3]. Moreover, MGMs play an important role in the Riemannian geometry of positive definite matrices; see, e.g., [6, Ch. 4].
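    As a concrete illustration (ours, not part of the original text), the weighted MGM (1.2) is easy to evaluate numerically. The sketch below assumes NumPy; the helper names powm and mgm are our own choices, not notation from the paper.

```python
import numpy as np

def powm(M, t):
    """Fractional power M^t of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

def mgm(A, B, t=0.5):
    """Weighted metric geometric mean A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}."""
    Ah, Aih = powm(A, 0.5), powm(A, -0.5)
    return Ah @ powm(Aih @ B @ Aih, t) @ Ah

rng = np.random.default_rng(0)
R, S = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A, B = R @ R.T + 4 * np.eye(4), S @ S.T + 4 * np.eye(4)   # two random positive definite matrices

X = mgm(A, B)                                              # the midpoint A # B
print(np.allclose(X @ np.linalg.inv(A) @ X, B))            # Riccati equation X A^{-1} X = B
print(np.allclose(mgm(A, B, 0.0), A), np.allclose(mgm(A, B, 1.0), B))  # geodesic endpoints
```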

    Another kind of geometric mean of positive definite matrices is the spectral geometric mean (SGM), first introduced by Fiedler and Pták [7]:

    A\,\lozenge\, B \; = \; \left(A^{-1}\,\sharp\, B\right)^{1/2}A\left(A^{-1}\,\sharp\, B\right)^{1/2}. (1.3)

    Note that the scalar consistency holds, i.e., if AB = BA, then

    A\,\sharp\, B \; = \; A\,\lozenge\, B \; = \; A^{1/2}B^{1/2}.

    Since the SGM is based on the MGM, the SGM satisfies many of the nice properties of MGMs, for example, idempotency, homogeneity, permutation invariance, unitary invariance, self-duality, and a determinantal identity. However, the SGM does not possess the monotonicity, the concavity, or the upper semi-continuity. A significant property of SGMs is that (A\,\lozenge\,B)^2 is similar to AB, and they have the same spectrum; hence the name "spectral geometric mean". The work [7] also established a similarity relation between the MGM A\,\sharp\,B and the SGM A\,\lozenge\,B when A and B are positive definite matrices of the same size. After that, Lee and Kim [8] investigated the t-weighted SGM, where t is an arbitrary real number:

    A\,\lozenge_t\, B \; = \; \left(A^{-1}\,\sharp\, B\right)^{t}A\left(A^{-1}\,\sharp\, B\right)^{t}. (1.4)

    Gan and Tam [9] extended certain results of [7] to the case of t-weighted SGMs with t\in[0,1]. Many research topics on SGMs have been widely studied; see, e.g., [10,11]. Lim [12] introduced another (weighted) geometric mean of positive definite matrices varying over Hermitian unitary matrices, which includes the MGM as a special case. Lim's mean has an explicit formula in terms of MGMs and SGMs.
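    The weighted SGM (1.4) and its defining spectral property can be spot-checked numerically as well. The following sketch is ours (assuming NumPy; powm and mgm are the helpers from the previous snippet) and verifies that (A\,\lozenge\,B)^2 and AB share the same spectrum.

```python
import numpy as np

def powm(M, t):
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

def mgm(A, B, t=0.5):
    Ah, Aih = powm(A, 0.5), powm(A, -0.5)
    return Ah @ powm(Aih @ B @ Aih, t) @ Ah

def sgm(A, B, t=0.5):
    """Weighted spectral geometric mean A <>_t B = (A^{-1} # B)^t A (A^{-1} # B)^t."""
    Y = powm(mgm(np.linalg.inv(A), B), t)
    return Y @ A @ Y

rng = np.random.default_rng(1)
R, S = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A, B = R @ R.T + 4 * np.eye(4), S @ S.T + 4 * np.eye(4)

G = sgm(A, B)                                        # A <> B
eig_sq = np.linalg.eigvalsh(G @ G)                   # spectrum of (A <> B)^2, ascending
eig_ab = np.sort(np.linalg.eigvals(A @ B).real)      # spectrum of AB (real and positive here)
print(np.allclose(eig_sq, eig_ab))                   # same spectrum, hence "spectral" geometric mean
```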

    There are several ways to extend the classical studies of MGMs and SGMs. The notion of MGMs can be defined on symmetric cones [8,13] and reflection quasigroups [14] from algebraic-geometric perspectives. In the framework of lineated symmetric spaces [14] and of reflection quasigroups equipped with a compatible Hausdorff topology, MGMs of arbitrary real weights can be defined. The SGMs were also investigated on symmetric cones in [8]. These geometric means can be extended to positive (invertible) operators on a Hilbert space; see, e.g., [15,16]. The cancellability of such means has significant applications in mean equations; see, e.g., [17,18].

    Another way to generalize the means (1.2) and (1.4) is to replace the traditional matrix multiplication (TMM) by the semi-tensor product (STP) \ltimes. Recall that the STP is a generalization of the TMM introduced by Cheng [19]; see [20] for more information. To be more precise, consider a matrix pair (A, B)\in \mathbb{M}_{m,n}\times \mathbb{M}_{p,q} and let \alpha = {\rm{lcm}}(n, p). The STP of A and B lets the two matrices participate in the TMM through the Kronecker product (denoted by \otimes) with suitable identity matrices:

    A\ltimes B \; = \; (A\otimes I_{\alpha/n})(B\otimes I_{\alpha/p}) \;\in\; \mathbb{M}_{\alpha m/n,\, \alpha q/p}.

    For the factor-dimension condition n = kp, we have

    A\ltimes B \; = \; A(B\otimes I_k).

    For the matching-dimension condition n = p, the product reduces to A\ltimes B = AB. The STP enjoys the same rich algebraic properties as the TMM, such as bilinearity and associativity. Moreover, the STP possesses special properties that the TMM does not have, for example, a pseudo-commutativity involving swap matrices and algebraic formulations of logical functions. In the last decade, STPs have been instrumental in developing algebraic state space theory, which integrates ideas and methods from finite state machines with those of control theory; see the survey [21].
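    To make the definition concrete, here is a small sketch (ours, assuming NumPy and Python 3.9+ for math.lcm) that implements \ltimes literally and checks the two special cases just mentioned.

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semi-tensor product A |x B = (A kron I_{a/n})(B kron I_{a/p}), a = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    a = lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

A = np.arange(1.0, 9.0).reshape(2, 4)        # 2x4, so n = 4
B = np.arange(1.0, 7.0).reshape(2, 3)        # 2x3, so p = 2 and n = 2p (factor-dimension case, k = 2)
print(np.allclose(stp(A, B), A @ np.kron(B, np.eye(2))))   # A |x B = A (B kron I_k)

C = np.arange(1.0, 10.0).reshape(3, 3)
D = np.arange(2.0, 11.0).reshape(3, 3)
print(np.allclose(stp(C, D), C @ D))                       # matching dimensions: A |x B = AB
```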

    Recently, the work [22] extended the MGM notion (1.1) to any pair of positive definite matrices whose sizes satisfy a factor-dimension condition:

    A\,\sharp\, B \; = \; A^{1/2}\ltimes\left(A^{-1/2}\ltimes B\ltimes A^{-1/2}\right)^{1/2}\ltimes A^{1/2}. (1.5)

    In fact, A\,\sharp\,B is the unique positive definite solution of the semi-tensor Riccati equation X\ltimes A^{-1}\ltimes X = B. After that, MGMs of arbitrary weight t\in\mathbb{R} were studied in [23]. In particular, when t\in[0,1], the weighted MGMs have remarkable properties, namely monotonicity and upper semi-continuity. See Section 2 for more details.

    The present paper is a continuation of the works [22,23]. Here, we investigate SGMs involving STPs. We start with the matrix mean equation

    A^{-1}\,\sharp\, X \; = \; (A^{-1}\,\sharp\, B)^t,

    where A and B are given positive definite matrices of (possibly) different sizes, t\in\mathbb{R}, and X is an unknown square matrix. Here, \sharp is defined by the formula (1.5). We show that this equation has a unique positive definite solution, which we define to be the t-weighted SGM of A and B. Another characterization of weighted SGMs is obtained in terms of certain matrix equations. It turns out that this mean satisfies various properties as in the classical case. We establish a similarity relation between the MGM and the SGM of two positive definite matrices of arbitrary dimensions. Our results generalize the work [7] and relate to the work [8]. Moreover, we investigate certain matrix equations involving weighted MGMs and SGMs.

    The paper is organized as follows. In Section 2, we set up basic notation and give basic results on STPs, Kronecker products, and weighted MGMs of positive definite matrices. In Section 3, we characterize the weighted SGM for positive definite matrices in terms of matrix equations, then we provide fundamental properties of weighted SGMs in Section 4. In Section 5, we investigate matrix equations involving weighted SGMs and MGMs. We conclude the whole work in Section 6.

    Throughout, let \mathbb{M}_{m,n} be the set of all m\times n complex matrices, and abbreviate \mathbb{M}_{n,n} to \mathbb{M}_n. Define \mathbb{C}^n = \mathbb{M}_{n,1} as the set of n-dimensional complex vectors. Denote by A^T and A^* the transpose and conjugate transpose of a matrix A, respectively. The n\times n identity matrix is denoted by I_n. The general linear group of n\times n complex matrices is denoted by {\rm{GL}}_n. Let us denote the set of n\times n positive definite matrices by \mathbb{P}_n. A matrix pair (A, B)\in \mathbb{M}_{m,n}\times \mathbb{M}_{p,q} is said to satisfy a factor-dimension condition if n\mid p or p\mid n. In this case, we write A\succ_k B when n = kp, and A\prec_k B when p = kn.

    Recall that for any matrices A = [a_{ij}]\in \mathbb{M}_{m,n} and B\in \mathbb{M}_{p,q}, their Kronecker product is defined by

    A\otimes B \; = \; [a_{ij}B] \;\in\; \mathbb{M}_{mp,nq}.

    The Kronecker operation (A, B)\mapsto A\otimes B is bilinear and associative.

    Lemma 2.1 (e.g., [5]). Let (A, B)\in \mathbb{M}_{m,n}\times \mathbb{M}_{p,q}, (C, D)\in \mathbb{M}_{n,r}\times \mathbb{M}_{q,s}, and (P, Q)\in \mathbb{M}_m\times \mathbb{M}_n, then

    (i) (A\otimes B)^* = A^*\otimes B^*.

    (ii) (A\otimes B)(C\otimes D) = (AC)\otimes(BD).

    (iii) If (P, Q)\in {\rm{GL}}_m\times {\rm{GL}}_n, then (P\otimes Q)^{-1} = P^{-1}\otimes Q^{-1}.

    (iv) If (P, Q)\in \mathbb{P}_m\times \mathbb{P}_n, then P\otimes Q\in \mathbb{P}_{mn} and (P\otimes Q)^{1/2} = P^{1/2}\otimes Q^{1/2}.

    Lemma 2.2 (e.g., [20]). Let (A, B)\in \mathbb{M}_{m,n}\times \mathbb{M}_{p,q} and (P, Q)\in \mathbb{M}_m\times \mathbb{M}_n, then

    (i) (A\ltimes B)^* = B^*\ltimes A^*.

    (ii) If (P, Q)\in {\rm{GL}}_m\times {\rm{GL}}_n, then (P\ltimes Q)^{-1} = Q^{-1}\ltimes P^{-1}.

    (iii) \det(P\ltimes Q) = (\det P)^{\alpha/m}(\det Q)^{\alpha/n}, where \alpha = {\rm{lcm}}(m, n).

    Lemma 2.3 ([23]). For any S\in \mathbb{P}_m and X\in \mathbb{M}_n , we have X^*\ltimes S\ltimes X \in \mathbb{P}_{\alpha} , where \alpha = {\rm{lcm}}(m, n) .

    Definition 2.4. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n and \alpha = {\rm{lcm}}(m, n) . For any t\in \mathbb{R} , the t -weighted MGM of A and B is defined by

    \begin{align} A\,\sharp_t\, B \; = \; A^{1/2}\ltimes \left( A^{-1/2}\ltimes B\ltimes A^{-1/2} \right)^t\ltimes A^{1/2} \;\in\; \mathbb{P}_\alpha. \end{align} (2.1)

    Note that A \, \sharp_0\, B = A\otimes I_{\alpha/m} and A \, \sharp_1\, B = B\otimes I_{\alpha/n} . We simply write A \, \sharp\, B = A \, \sharp_{1/2}\, B . We clearly have A\, \sharp_t\, B > 0 and A\, \sharp_t\, A = A .

    Lemma 2.5 ([22]). Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n be such that A\prec_k B , then the Riccati equation

    X\ltimes A^{-1}\ltimes X \; = \; B

    has a unique solution X = A \, \sharp\, B\in \mathbb{P}_n .
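    The following numerical sketch (ours; the helper names stp, powm, and mgm are not from the paper) implements Definition 2.4 and checks the endpoint identities together with the Riccati characterization of Lemma 2.5 for a pair with m = 2, n = 4, so that A \prec_2 B and \alpha = 4.

```python
import numpy as np
from math import lcm

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def powm(M, t):
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

def mgm(A, B, t=0.5):
    """A #_t B = A^{1/2} |x (A^{-1/2} |x B |x A^{-1/2})^t |x A^{1/2}  (Definition 2.4)."""
    inner = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(inner, t)), powm(A, 0.5))

rng = np.random.default_rng(2)
R, S = rng.standard_normal((2, 2)), rng.standard_normal((4, 4))
A, B = R @ R.T + 2 * np.eye(2), S @ S.T + 4 * np.eye(4)    # A in P_2, B in P_4, alpha = 4

print(np.allclose(mgm(A, B, 0.0), np.kron(A, np.eye(2))))  # A #_0 B = A kron I_{alpha/m}
print(np.allclose(mgm(A, B, 1.0), B))                      # A #_1 B = B kron I_{alpha/n} = B here
X = mgm(A, B)
print(np.allclose(stp(stp(X, np.linalg.inv(A)), X), B))    # Lemma 2.5: X |x A^{-1} |x X = B
```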

    Lemma 2.6 ([23]). Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n and X, Y\in \mathbb{P}_n . Let t\in \mathbb{R} and \alpha = {\rm{lcm}}(m, n) , then

    (i) Positive homogeneity: For any scalars a, b, c > 0 , we have c(A \, \sharp_t\, B) = (cA) \, \sharp_t\, (cB) and, more generally,

    \begin{align} (aA) \,\sharp_t \, (bB) \; = \; a^{1-t} b^t (A \,\sharp_t\, B). \end{align} (2.2)

    (ii) Self duality: (A \, \sharp_t\, B)^{-1} = A^{-1} \, \sharp_t\, B^{-1} .

    (iii) Permutation invariance: A \, \sharp_{1/2} \, B = B \, \sharp_{1/2}\, A . More generally, A \, \sharp_t\, B = B \, \sharp_{1-t}\, A .

    (iv) Consistency with scalars: If A\ltimes B = B \ltimes A , then A \, \sharp_t\, B = A^{1-t}\ltimes B^t .

    (v) Determinantal identity:

    \det(A\,\sharp\, B) \; = \; \sqrt{(\det A)^{\alpha /m}(\det B)^{\alpha /n}}.

    (vi) Cancellability: If t \neq 0 , then the equation A \, \sharp_t\, X = A \, \sharp_t\, Y implies X = Y .

    In this section, we define and characterize weighted SGMs in terms of certain matrix equations involving MGMs and STPs.

    Theorem 3.1. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n . Let t\in \mathbb{R} and \alpha = {\rm{lcm}}(m, n) , then the mean equation

    \begin{align} A^{-1}\,\sharp\, X \; = \; (A^{-1}\,\sharp\, B)^t \end{align} (3.1)

    has a unique solution X \in \mathbb{P}_\alpha .

    Proof. Note that the matrix pair (A, X) satisfies the factor-dimension condition. Let Y = (A^{-1}\, \sharp\, B)^t and consider

    \begin{align*} X \; = \; Y\ltimes A\ltimes Y. \end{align*}

    Using Lemma 2.5, we obtain that Y = A^{-1}\, \sharp\, X . Thus, A^{-1} \, \sharp\, X = (A^{-1}\, \sharp\, B)^t . For the uniqueness, let Z\in \mathbb{P}_\alpha be such that A^{-1}\, \sharp\, Z = Y . By Lemma 2.5, we get

    Z \; = \; Y\ltimes A\ltimes Y \; = \; X.

    We call the matrix X in Theorem 3.1 the t -weighted SGM of A and B .

    Definition 3.2. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n and \alpha = {\rm{lcm}}(m, n) . For any t\in \mathbb{R} , the t -weighted SGM of A and B is defined by

    \begin{align} A \,\lozenge_t\, B \; = \; (A^{-1}\,\sharp\, B)^t\ltimes A\ltimes (A^{-1}\,\sharp\, B)^t \,\in\; \mathbb{M}_\alpha. \end{align} (3.2)

    According to Lemma 2.3, we have A \, \lozenge_t\, B \in \mathbb{P}_\alpha . In particular, A\, \lozenge_0\, B = A\otimes I_{\alpha/m} and A \, \lozenge_1\, B = B\otimes I_{\alpha/n} . When t = 1/2 , we simply write A\, \lozenge\, B = A\, \lozenge_{1/2}\, B . The formula (3.2) implies that

    \begin{align} A \,\lozenge_t\, A \; = \; A, \quad A \,\lozenge_t\, A^{-1} \; = \; A^{1-2t} \end{align} (3.3)

    for any t\in \mathbb{R} . Note that in the case n \mid m , we have

    \begin{align*} A \,\lozenge_t\, B \; = \; (A^{-1} \,\sharp\, B)^{t} A (A^{-1} \,\sharp\,B)^{t}, \end{align*}

    i.e., Eq (3.2) reduces to the same formula (1.4) as in the classical case m = n . By Theorem 3.1, we have

    \begin{align*} A^{-1} \,\sharp\, (A \,\lozenge_t\, B ) \; = \; (A^{-1}\,\sharp\, B)^t \; = \; (B \,\lozenge_{t}\, A )^{-1} \,\sharp\, B. \end{align*}
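    A numerical illustration of Definition 3.2 and the mean equation of Theorem 3.1 (ours; the helpers repeat the earlier sketches and the names stp, powm, mgm, sgm are our own):

```python
import numpy as np
from math import lcm

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def powm(M, t):
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

def mgm(A, B, t=0.5):
    inner = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(inner, t)), powm(A, 0.5))

def sgm(A, B, t=0.5):
    """A <>_t B = (A^{-1} # B)^t |x A |x (A^{-1} # B)^t  (Definition 3.2)."""
    Y = powm(mgm(np.linalg.inv(A), B), t)
    return stp(stp(Y, A), Y)

rng = np.random.default_rng(3)
R, S = rng.standard_normal((2, 2)), rng.standard_normal((4, 4))
A, B = R @ R.T + 2 * np.eye(2), S @ S.T + 4 * np.eye(4)
t = 0.3

X = sgm(A, B, t)
# Theorem 3.1: A^{-1} # (A <>_t B) = (A^{-1} # B)^t
print(np.allclose(mgm(np.linalg.inv(A), X), powm(mgm(np.linalg.inv(A), B), t)))
# Endpoints: A <>_0 B = A kron I_{alpha/m} and A <>_1 B = B kron I_{alpha/n}
print(np.allclose(sgm(A, B, 0.0), np.kron(A, np.eye(2))), np.allclose(sgm(A, B, 1.0), B))
```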

    The following theorem provides another characterization of the weighted SGMs.

    Theorem 3.3. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n . Let t\in \mathbb{R} and \alpha = {\rm{lcm}}(m, n) , then the following are equivalent:

    (i) X = A\, \lozenge_t\, B .

    (ii) There exists a positive definite matrix Y\in \mathbb{P}_\alpha such that

    \begin{align} X \; = \; Y^t\ltimes A\ltimes Y^t \; = \; Y^{t-1}\ltimes B\ltimes Y^{t-1}. \end{align} (3.4)

    Moreover, the matrix Y satisfying (3.4) is uniquely determined by Y = A^{-1}\, \sharp\, B .

    Proof. Let X = A\, \lozenge_t\, B . Set Y = A^{-1}\, \sharp\, B \in \mathbb{P}_{\alpha} . By Definition 3.2, we have X = Y^t\ltimes A\ltimes Y^t . By Lemma 2.5, we get Y\ltimes A\ltimes Y = B\otimes I_{\alpha/n} . Hence,

    \begin{align*} Y^{t-1}\ltimes B \ltimes Y^{t-1} \; = \; Y^t Y^{-1}\ltimes B\ltimes Y^{-1} Y^t \; = \; Y^t\ltimes A\ltimes Y^t \; = \; X. \end{align*}

    To show the uniqueness, let Z\in \mathbb{P}_\alpha be such that

    X \; = \; Z^t\ltimes A\ltimes Z^t = Z^{t-1}\ltimes B\ltimes Z^{t-1}.

    We have Z\ltimes A\ltimes Z = B\otimes I_{\alpha/n} . Note that the pair (A, B \otimes I_{\alpha/n}) satisfies the factor-dimension condition. Now, Lemma 2.5 implies that Z = A^{-1}\, \sharp\, B = Y .

    Conversely, suppose there exists a matrix Y\in \mathbb{P}_\alpha such that Eq (3.4) holds, then Y\ltimes A\ltimes Y = B\otimes I_{\alpha/n} . Applying Lemma 2.5, we have Y = A^{-1}\, \sharp\, B . Therefore,

    X\; = \; (A^{-1}\,\sharp\, B)^t\ltimes A\ltimes(A^{-1}\,\sharp\, B)^t \; = \; A\,\lozenge_t\, B.

    Fundamental properties of the weighted SGMs (3.2) are as follows.

    Theorem 4.1. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n , t\in \mathbb{R} , and \alpha = {\rm{lcm}}(m, n) , then

    (i) Permutation invariance: A\, \lozenge_t\, B = B\, \lozenge_{1-t}\, A . In particular, A\, \lozenge \, B = B \, \lozenge \, A .

    (ii) Positive homogeneity: c(A\, \lozenge_t\, B) = (cA) \, \lozenge_t\, (cB) for all c > 0 . More generally, for any scalars a, b > 0 , we have

    \begin{align*} (aA) \,\lozenge_t \, (bB) \; = \; a^{1-t} b^t (A \,\lozenge_t\, B). \end{align*}

    (iii) Self-duality: (A\, \lozenge_t\, B)^{-1} = A^{-1}\, \lozenge_t\, B^{-1} .

    (iv) Unitary invariance: For any U\in \mathbb{U}_\alpha , we have

    \begin{align} U^* (A\,\lozenge_t\, B) U \; = \; (U^*\ltimes A\ltimes U) \,\lozenge_t\, (U^*\ltimes B\ltimes U). \end{align} (4.1)

    (v) Consistency with scalars: If A\ltimes B = B\ltimes A , then A\, \lozenge_t\, B = A^{1-t}\ltimes B^t .

    (vi) Determinantal identity:

    \det(A\,\lozenge_t\, B) \; = \;(\det A)^{\frac{(1-t)\alpha}{m}}(\det B)^\frac{t\alpha}{n}.

    (vii) Left and right cancellability: For any t\in \mathbb{R}-\{0\} and Y_1, Y_2\in \mathbb{P}_n , the equation

    A \,\lozenge_t\, Y_1 \; = \; A \,\lozenge_t\, Y_2

    implies Y_1 = Y_2 . For any t\in \mathbb{R}-\{1\} and X_1, X_2\in \mathbb{P}_m , the equation X_1\, \lozenge_t \, B = X_2\, \lozenge_t \, B implies X_1 = X_2 . In other words, the maps X\mapsto A \, \lozenge_t\, X and X\mapsto X \, \lozenge_t\, B are injective for any t \neq 0, 1 .

    (viii) (A \, \lozenge\, B)^2 is positively similar to A\ltimes B , i.e., there is a matrix P \in \mathbb{P}_{\alpha} such that

    (A \,\lozenge\, B)^2 \; = \; P (A\ltimes B) P^{-1}.

    In particular, (A \, \lozenge\, B)^2 and A\ltimes B have the same eigenvalues.

    Proof. Throughout this proof, let X = A\, \lozenge_t\, B and Y = A^{-1}\, \sharp\, B . From Theorem 3.3, the characteristic equation (3.4) holds.

    To prove (ⅰ), set Z = B\, \lozenge_{1-t} \, A and W = B^{-1}\, \sharp\, A . By Theorem 3.3, we get

    \begin{align*} Z \; = \; W^{1-t}\ltimes B\ltimes W^{1-t} \; = \; W^{-t}\ltimes A\ltimes W^{-t}. \end{align*}

    It follows from Lemma 2.6(ⅱ) that

    \begin{align*} W^{-1} \; = \; B \,\sharp\, A^{-1} \; = \; A^{-1}\,\sharp\, B \; = \; Y. \end{align*}

    Hence, X = Y^t\ltimes A\ltimes Y^t = W^{-t}\ltimes A\ltimes W^{-t} = Z , i.e., A\, \lozenge_t\, B = B\, \lozenge_{1-t}\, A .

    The assertion (ⅱ) follows directly from the formulas (3.2) and (2.2):

    \begin{align*} (aA) \,\lozenge_t\, (bB) \;& = \; (a^{-1}A^{-1} \,\sharp\, bB)^t \ltimes (aA) \ltimes (a^{-1}A^{-1} \,\sharp\, bB)^t \\ \;& = \; (a^{-1} \,\sharp\, b)^t (A^{-1} \,\sharp\, B)^t \ltimes (aA) \ltimes (a^{-1} \,\sharp\, b)^t (A^{-1} \,\sharp\, B)^t \\ \;& = \; (a^{-1} \,\sharp\, b)^t a (a^{-1} \,\sharp\, b)^t (A^{-1} \,\sharp\, B)^t \ltimes A \ltimes (A^{-1} \,\sharp\, B)^t \\ \;& = \; a^{1-t} b^t (A \,\lozenge_t\, B). \end{align*}

    To prove the self-duality (ⅲ), set W = Y^{-1} = A\, \sharp \, B^{-1} . Observe that

    \begin{align*} X^{-1} \;& = \; (Y^t \ltimes A \ltimes Y^t)^{-1} \; = \; Y^{-t} \ltimes A^{-1} \ltimes Y^{-t} \; = \; W^t\ltimes A^{-1}\ltimes W^t, \\ X^{-1} \;& = \; (Y^{t-1} \ltimes B \ltimes Y^{t-1})^{-1} \; = \; Y^{1-t} \ltimes B^{-1} \ltimes Y^{1-t} \; = \; W^{t-1}\ltimes B^{-1}\ltimes W^{t-1}. \end{align*}

    Theorem 3.3 now implies that

    \begin{align*} (A\,\lozenge_t\, B)^{-1} \; = \; X^{-1} \; = \; A^{-1} \,\lozenge_t\, B^{-1}. \end{align*}

    To prove (ⅳ), let U\in \mathbb{U}_\alpha and consider W = U^*\ltimes Y\ltimes U . We have

    \begin{align*} W^t\ltimes U^* \ltimes A \ltimes U \ltimes W^t \;& = \; U^*\ltimes Y^t \ltimes U \ltimes U^* \ltimes A \ltimes U \ltimes U^*\ltimes Y^t \ltimes U \\ \;& = \; U^*\ltimes Y^t\ltimes A\ltimes Y^t\ltimes U \\ \;& = \; U^*\ltimes X\ltimes U, \end{align*}

    and, similarly,

    \begin{align*} W^{t-1} \ltimes U^* \ltimes B \ltimes U \ltimes W^{t-1} \; = \; U^*\ltimes Y^{t-1} \ltimes B \ltimes Y^{t-1} \ltimes U \; = \; U^*\ltimes X\ltimes U. \end{align*}

    By Theorem 3.3, we arrive at (4.1).

    For the assertion (ⅴ), the assumption A\ltimes B = B\ltimes A together with Lemma 2.6 (ⅳ) yields

    \begin{align*} Y \; = \; A^{-1} \,\sharp\, B \; = \;A^{-1/2}\ltimes B^{1/2} . \end{align*}

    It follows that

    \begin{align*} Y^t\ltimes A\ltimes Y^t \;& = \; A^{-t/2}\ltimes B^{t/2}\ltimes A\ltimes A^{-t/2}\ltimes B^{t/2} \; = \; A^{1-t}\ltimes B^t, \\ Y^{t-1}\ltimes B\ltimes Y^{t-1} \;& = \; A^{-(t-1)/2}\ltimes B^{(t-1)/2}\ltimes B\ltimes A^{-(t-1)/2}\ltimes B^{(t-1)/2} \; = \; A^{1-t}\ltimes B^t. \end{align*}

    Now, Theorem 3.3 implies that A\, \lozenge_t\, B = A^{1-t}\ltimes B^t . The determinantal identity (ⅵ) follows directly from the formula (3.2), Lemma 2.2(ⅲ), and Lemma 2.6(ⅴ):

    \begin{align*} \det(A\,\lozenge_t\, B) \;& = \; \det(A^{-1}\,\sharp\, B)^{2t}(\det A)^\frac{\alpha}{m} \\ \;& = \; (\det A)^{-\frac{\alpha t}{m}}(\det B)^\frac{\alpha t}{n}(\det A)^\frac{\alpha}{m} \\ \;& = \; (\det A)^\frac{(1-t)\alpha}{m}(\det B)^\frac{t\alpha}{n}. \end{align*}

    To prove the left cancellability, let t\in \mathbb{R}-\{0\} and suppose that A\, \lozenge_t\, Y_1 = A \, \lozenge_t\, Y_2 . We have

    \begin{align*} \left(A^{1/2}\ltimes(A^{-1} \,\sharp\, Y_1)^t\ltimes A^{1/2} \right)^2 \;& = \; A^{1/2}\ltimes (A \,\lozenge_t \, Y_1)\ltimes A^{1/2} \\ \;& = \; A^{1/2}\ltimes (A \, \lozenge_t \, Y_2)\ltimes A^{1/2} \\ \;& = \; \left(A^{1/2}\ltimes(A^{-1} \,\sharp\, Y_2)^t\ltimes A^{1/2} \right)^2. \end{align*}

    Taking the positive square root yields

    A^{1/2}\ltimes(A^{-1} \,\sharp\, Y_1)^t\ltimes A^{1/2} \; = \; A^{1/2}\ltimes(A^{-1} \,\sharp\, Y_2)^t\ltimes A^{1/2},

    and, thus, (A^{-1} \, \sharp\, Y_1)^t = (A^{-1} \, \sharp\, Y_2)^t . Since t \neq 0 , we get A^{-1} \, \sharp\, Y_1 = A^{-1} \, \sharp\, Y_2 . Using the left cancellability of MGM (Lemma 2.6(ⅵ)), we obtain Y_1 = Y_2 . The right cancellability follows from the left cancellability together with the permutation invariance (ⅰ).

    For the assertion (ⅷ), since A\, \lozenge\, B = Y^{1/2}\ltimes A\ltimes Y^{1/2} = Y^{-1/2}\ltimes B\ltimes Y^{-1/2} , we have

    \begin{align*} (A \,\lozenge\, B)^2 \;& = \; (Y^{1/2}\ltimes A\ltimes Y^{1/2})(Y^{-1/2}\ltimes B\ltimes Y^{-1/2}) \\ \;& = \; Y^{1/2}(A\ltimes B)Y^{-1/2}. \end{align*}

    Note that the matrix Y^{1/2} is positive definite. Thus, (A \, \lozenge\, B)^2 is positively similar to A\ltimes B , so they have the same eigenvalues.
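    Before moving on, a quick numerical spot-check of properties (ⅰ), (ⅲ), (ⅵ), and (ⅷ) of Theorem 4.1 (ours; the helpers are the same sketches as above and the names are not from the paper):

```python
import numpy as np
from math import lcm

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def powm(M, t):
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

def mgm(A, B, t=0.5):
    inner = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(inner, t)), powm(A, 0.5))

def sgm(A, B, t=0.5):
    Y = powm(mgm(np.linalg.inv(A), B), t)
    return stp(stp(Y, A), Y)

rng = np.random.default_rng(4)
R, S = rng.standard_normal((2, 2)), rng.standard_normal((3, 3))
A, B = R @ R.T + 2 * np.eye(2), S @ S.T + 3 * np.eye(3)    # m = 2, n = 3, alpha = 6
m, n, alpha, t = 2, 3, 6, 0.4

print(np.allclose(sgm(A, B, t), sgm(B, A, 1 - t)))                          # (i) permutation invariance
print(np.allclose(np.linalg.inv(sgm(A, B, t)),
                  sgm(np.linalg.inv(A), np.linalg.inv(B), t)))              # (iii) self-duality
det_rhs = np.linalg.det(A) ** ((1 - t) * alpha / m) * np.linalg.det(B) ** (t * alpha / n)
print(np.isclose(np.linalg.det(sgm(A, B, t)), det_rhs))                     # (vi) determinantal identity
G = sgm(A, B)                                                               # t = 1/2
print(np.allclose(np.linalg.eigvalsh(G @ G),
                  np.sort(np.linalg.eigvals(stp(A, B)).real)))              # (viii) same spectrum as A |x B
```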

    Remark 4.2. Let (A, B) \in \mathbb{P}_{m} \times \mathbb{P}_{n} . Instead of Definition 3.2, the permutation invariance (ⅰ) provides an alternative definition of A \, \lozenge_t\, B as follows:

    \begin{align*} A \,\lozenge_t\, B \;& = \; (B^{-1} \,\sharp\, A)^{1-t} \ltimes B \ltimes (B^{-1} \,\sharp\,A)^{1-t} \\ \;& = \; (A\,\sharp\, B^{-1} )^{1-t} \ltimes B \ltimes (A \,\sharp\,B^{-1})^{1-t} . \end{align*}

    In particular, if m \mid n , we have

    \begin{align*} A \,\lozenge_t\, B \; = \; (A \,\sharp\, B^{-1})^{1-t} B (A \,\sharp\, B^{-1})^{1-t}. \end{align*}

    The assertion (ⅷ) is the reason why A\, \lozenge\, B is called the SGM.

    Now, we will show that A\, \sharp\, B and A\, \lozenge_t\, B are positively similar when A and B are positive definite matrices of arbitrary sizes. Before that, we need the following lemma.

    Lemma 4.3. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n . Let t\in \mathbb{R} and \alpha = {\rm{lcm}}(m, n) , then there exists a unique Y_t\in \mathbb{P}_\alpha such that

    \begin{align*} A\,\lozenge_t\, B \; = \; Y_t\ltimes A\ltimes Y_t \quad \mathit{\text{and}} \quad B\,\lozenge_t \,A \; = \;Y_t^{-1}\ltimes B\ltimes Y_t^{-1}. \end{align*}

    Proof. Set Y_t = (A^{-1}\, \sharp\, B)^t , then Y_t\ltimes A\ltimes Y_t = A\, \lozenge_t\, B . Using Lemma 2.6, we obtain that

    Y_t^{-1}\ltimes B\ltimes Y_t^{-1} \; = \; (B^{-1}\,\sharp\, A)^t\ltimes B\ltimes (B^{-1}\,\sharp\, A)^t \; = \; B\,\lozenge_t A.

    To prove the uniqueness, let Z_t\in \mathbb{P}_\alpha be such that Z_t\ltimes A\ltimes Z_t = A\, \lozenge_t\, B and Z_t^{-1}\ltimes B\ltimes Z_t^{-1} = B\, \lozenge_t\, A . By Lemma 2.5, we get Z_t = A^{-1}\, \sharp\, (A\, \lozenge_t\, B) , but Theorem 3.1 says that

    A^{-1}\,\sharp\, (A\,\lozenge_t\, B) \; = \; (A^{-1}\,\sharp\, B)^t.

    Thus, Z_t = Y_t .

    Theorem 4.4. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n . Let t\in \mathbb{R} and \alpha = {\rm{lcm}}(m, n) , then A\, \sharp\, B is positively similar to (A\lozenge_{1-t}B)^{1/2}U(A \, \lozenge_t\, B)^{1/2} for some unitary U\in \mathbb{M}_\alpha .

    Proof. By Lemma 4.3, there exists Y_t\in \mathbb{P}_\alpha such that A\, \lozenge_t\, B = Y_t\ltimes A\ltimes Y_t and B\, \lozenge_t\, A = Y_t^{-1}\ltimes B\ltimes Y_t^{-1} . Using Lemmas 2.2 and 2.5, we have

    \begin{align*} Y_t(A \,\lozenge_{1-t}\, B)Y_t \;& = \; B\otimes I_{\alpha/n} \\ \;& = \; (A\,\sharp\,B)\ltimes A^{-1}\ltimes (A\,\sharp\,B) \\ \;& = \; (A\,\sharp\,B)Y_t(A\,\lozenge_t\, B)^{-1}Y_t(A\,\sharp\,B), \end{align*}

    then

    \begin{align*} \left( (A \,\lozenge_t\, B)^{-1/2}Y_t(A\,\sharp\,B)Y_t (A \,\lozenge_t\, B)^{-1/2}\right)^2 \; = \; (A \,\lozenge_t\, B)^{-1/2}Y_t^2 (A\,\lozenge_{1-t}\,B)Y_t^2(A \,\lozenge_t\, B)^{-1/2}. \end{align*}

    Thus,

    \begin{align*} A\,\sharp\,B \; = \; Y_t^{-1}(A \,\lozenge_t\, B)^{1/2} \left((A \,\lozenge_t\, B)^{-1/2}Y_t^2(A\lozenge_{1-t}B)Y_t^2 (A \,\lozenge_t\, B)^{-1/2}\right)^{1/2} (A \,\lozenge_t\, B)^{1/2}Y_t^{-1}. \end{align*}

    Set V = (A \, \lozenge_t\, B)^{-1/2}Y_t^2 (A \, \lozenge_{1-t}\, B)^{1/2} and U = V^{-1}(VV^*)^{1/2} . Obviously, U is a unitary matrix. We obtain

    \begin{align*} A\,\sharp\,B \;& = \; Y_t^{-1} (A \,\lozenge_t\, B)^{1/2}(VV^*)^{1/2} (A \,\lozenge_t\, B)^{1/2}Y_t^{-1} \\ \;& = \; Y_t(A\lozenge_{1-t}B)^{1/2} V^{-1}(VV^*)^{1/2} (A \,\lozenge_t\, B)^{1/2} Y_t^{-1} \\ \;& = \; Y_t (A\lozenge_{1-t}B)^{1/2}U(A \,\lozenge_t\, B)^{1/2}Y_t^{-1}. \end{align*}

    This implies that (A\,\lozenge_{1-t}\,B)^{1/2}U(A \, \lozenge_t\, B)^{1/2} is positively similar to A\, \sharp\, B .

    In general, the MGM A\, \sharp_t\, B and the SGM A\, \lozenge_t\, B are not comparable (in the Löwner partial order). We will show that A\, \sharp_t\, B and A\, \lozenge_t\, B coincide in the case that A and B commute with respect to the STP. To do this, we need a lemma.

    Lemma 4.5. Let (P, Q)\in \mathbb{P}_m\times \mathbb{P}_n . If

    \begin{align} P\ltimes Q\ltimes P\ltimes Q^{-1} \; = \; Q\ltimes P\ltimes Q^{-1}\ltimes P, \end{align} (4.2)

    then P\ltimes Q = Q\ltimes P .

    Proof. From Eq (4.2), we have

    \left( Q^{-1/2}\ltimes P\ltimes Q^{1/2}\right)\left( Q^{-1/2}\ltimes P\ltimes Q^{1/2}\right)^* \; = \; \left( Q^{-1/2}\ltimes P\ltimes Q^{1/2}\right)^*\left( Q^{-1/2}\ltimes P\ltimes Q^{1/2}\right) .

    This implies that Q^{-1/2}\ltimes P\ltimes Q^{1/2} is a normal matrix. Since Q^{-1/2}\ltimes P\ltimes Q^{1/2} and P\otimes I_{\alpha/m} are similar matrices, we conclude that the eigenvalues of Q^{-1/2}\ltimes P\ltimes Q^{1/2} are real and Q^{-1/2}\ltimes P\ltimes Q^{1/2} is Hermitian. Hence,

    Q^{-1/2}\ltimes P\ltimes Q^{1/2} \; = \; \left( Q^{-1/2}\ltimes P\ltimes Q^{1/2}\right)^* \; = \; Q^{1/2}\ltimes P\ltimes Q^{-1/2}.

    Therefore, P\ltimes Q = Q\ltimes P .

    The next theorem generalizes [7,Theorem 5.1].

    Theorem 4.6. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n and t\in \mathbb{R} . If A\ltimes B = B\ltimes A , then A\, \sharp_t\, B = A\, \lozenge_t\, B . In particular, A\, \sharp\, B = A\,\lozenge\, B if and only if A\ltimes B = B\ltimes A .

    Proof. Suppose A\ltimes B = B\ltimes A . By Lemma 2.6 and Theorem 4.1, we have

    A\,\sharp_t\,B \; = \; A^{1-t}\ltimes B^t = A\,\lozenge_t\, B .

    Next, assume that A\, \sharp\, B = A \, \lozenge\, B = X . By Lemma 2.5, we have

    X\ltimes A^{-1}\ltimes X \; = \; B\otimes I_{\alpha/n}.

    Set Y = A^{-1}\, \sharp\, B . By Theorem 3.3, we get X = Y^{1/2}\ltimes A\ltimes Y^{1/2} = Y^{-1/2}\ltimes B\ltimes Y^{-1/2} . It follows that

    \begin{align*} Y^{1/2}\ltimes X\ltimes Y^{1/2} \;& = \; B\otimes I_{\alpha/n} \; = \; X\ltimes A^{-1}\ltimes X \\ \;& = \; X\ltimes Y^{1/2}\ltimes X^{-1}\ltimes Y^{1/2}\ltimes X. \end{align*}

    Thus,

    Y^{1/2}\ltimes X\ltimes Y^{1/2}\ltimes X^{-1} \; = \; X\ltimes Y^{1/2}\ltimes X^{-1}\ltimes Y^{1/2}.

    Lemma 4.5 implies that X\ltimes Y^{1/2} = Y^{1/2}\ltimes X . Hence,

    \begin{align*} A\ltimes B \;& = \; A \ltimes Y \ltimes A \ltimes Y \; = \; Y^{-1/2}\ltimes X^2\ltimes Y^{1/2} \\ \;& = \; X^2 \; = \; Y^{1/2}\ltimes X^2\ltimes Y^{-1/2} \\ \;& = \; Y \ltimes A \ltimes Y \ltimes A \\ \;& = \; B\ltimes A. \end{align*}

    Theorem 4.7. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n and \alpha = {\rm{lcm}}(m, n) , then the following statements are equivalent:

    (i) A \, \lozenge\, B = I_{\alpha} ,

    (ii) A\otimes I_{\alpha/m} = B^{-1}\otimes I_{\alpha/n} ,

    (iii) A \, \sharp\, B = I_\alpha .

    Proof. First, we show the equivalence between the statements (ⅰ) and (ⅱ). Suppose that A\lozenge B = I_{\alpha} . Letting Y = A^{-1}\, \sharp\, B , we have by Theorem 3.3 that

    Y^{1/2}\ltimes A\ltimes Y^{1/2} \; = \; Y^{-1/2}\ltimes B\ltimes Y^{-1/2} \; = \; I_{\alpha}.

    Applying Lemma 2.1, we obtain

    A\otimes I_{\alpha/m} \; = \; Y^{-1} \; = \; B^{-1}\otimes I_{\alpha/n}.

    Now, suppose A\otimes I_{\alpha/m} = B^{-1}\otimes I_{\alpha/n} . By Lemma 2.1, we have

    \begin{align*} A \ltimes B \;& = \; (A \otimes I_{\alpha/m}) (B \otimes I_{\alpha/n}) \; = \; (B^{-1} \otimes I_{\alpha/n}) (B \otimes I_{\alpha/n}) \\ \;& = \; I_n \otimes I_{\alpha/n} \; = \; I_{\alpha}, \end{align*}

    and similarly, B \ltimes A = I_{\alpha} . Now, Theorem 4.1(ⅴ) implies that

    \begin{align*} A \,\lozenge\, B \;& = \; A^{1/2}\ltimes B^{1/2} \\ \;& = \; (B^{-1/2}\otimes I_{\alpha/n})(B^{1/2}\otimes I_{\alpha/n}) \; = \; I_{\alpha}. \end{align*}

    Next, we show the equivalence between (ⅱ) and (ⅲ). Suppose that A \, \sharp\, B = I_\alpha , then we have

    \begin{align*} (A^{-1/2} \ltimes B \ltimes A^{-1/2})^{1/2} \; = \; A^{-1/2} \ltimes I_{\alpha} \ltimes A^{-1/2} \; = \; A^{-1} \otimes I_{\alpha/m}. \end{align*}

    This implies that

    \begin{align*} A^{-1/2} \ltimes B \ltimes A^{-1/2} \; = \; (A^{-1} \otimes I_{\alpha/m})^2 \; = \; A^{-2} \otimes I_{\alpha/m}. \end{align*}

    Thus, B \otimes I_{\alpha/n} = A^{-1} \otimes I_{\alpha/m} or A \otimes I_{\alpha/m} = B^{-1} \otimes I_{\alpha/n} .

    Now, suppose (ⅱ) holds, then we get A \ltimes B = I_{\alpha} = B \ltimes A . It follows from Lemma 2.6 (ⅳ) that A \, \sharp\, B = A^{1/2} \ltimes B^{1/2} = I_{\alpha} .

    In particular, from Theorem 4.7, when m = n , we have that A \, \lozenge\, B = I_n if and only if A = B^{-1} , if and only if A \, \sharp\, B = I_n . This result was included in [7] and is related to the work [8].

    In this section, we investigate matrix equations involving MGMs and SGMs of positive definite matrices. In particular, recall that the work [23] investigated the matrix equation A \, \sharp_t\, X = B . We discuss this matrix equation when the MGM \sharp_t is replaced by the SGM \lozenge_t in the next theorem.

    Theorem 5.1. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n where m \mid n . Let t \in \mathbb{R} -\{0\} , then the mean equation

    \begin{align} A \,\lozenge_t\, X \; = \; B, \end{align} (5.1)

    in an unknown X \in \mathbb{P}_n , is equivalent to the Riccati equation

    \begin{align} W_t \ltimes A \ltimes W_t \; = \; B \end{align} (5.2)

    in an unknown W_t \in \mathbb{P}_n . Moreover, Eq (5.1) has a unique solution given by

    \begin{align} X \; = \; A \,\lozenge_{1/t}\, B \; = \; (A \,\sharp\, B^{-1})^{1- \frac{1}{t}} B (A \,\sharp\, B^{-1})^{1- \frac{1}{t}}. \end{align} (5.3)

    Proof. Let us denote W_t = (A^{-1} \, \sharp\, X)^t for each t \in \mathbb{R} -\{0\} . By Definition 3.2, we have

    \begin{align*} A \,\lozenge_t\, X \; = \; (A^{-1}\,\sharp\, X)^t\ltimes A\ltimes (A^{-1}\,\sharp\, X)^t \; = \; W_t \ltimes A \ltimes W_t. \end{align*}

    Note that the map X \mapsto W_t is injective due to the cancellability of the MGM \sharp_t (Lemma 2.6(ⅵ)). Thus, Eq (5.1) is equivalent to the Riccati equation (5.2). Now, Lemma 2.5 implies that Eq (5.2) is equivalent to W_t = A^{-1} \, \sharp\, B . Thus, Eq (5.1) is equivalent to the equation

    \begin{align} (A^{-1} \,\sharp\, X)^t \; = \; A^{-1} \,\sharp\, B. \end{align} (5.4)

    We now solve (5.4). Indeed, we have

    \begin{align*} A^{-1} \,\sharp\, X \; = \; (A^{-1} \,\sharp\, B)^{1/t}. \end{align*}

    According to Theorem 3.1 and Definition 3.2, this equation has a unique solution, namely, the SGM of A and B with weight 1/t . Now, Remark 4.2 provides the explicit formula (5.3) for A \, \lozenge_{1/t}\, B .

    Remark 5.2. For the case n \mid m in Theorem 5.1, we get a similar result. In particular, for the case m \mid n with t = 1/2 , the mean equation

    \begin{align} A \,\lozenge\, X \; = \; B \end{align} (5.5)

    has a unique solution X = (A^{-1}\, \sharp\, B)B(A^{-1}\, \sharp\, B) .
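    A numerical check of Theorem 5.1 and Remark 5.2 (ours; the helpers are the same sketches as above, with names of our own choosing), for m = 2, n = 4: the candidate X = A \lozenge_{1/t} B indeed satisfies A \lozenge_t X = B.

```python
import numpy as np
from math import lcm

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def powm(M, t):
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

def mgm(A, B, t=0.5):
    inner = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(inner, t)), powm(A, 0.5))

def sgm(A, B, t=0.5):
    Y = powm(mgm(np.linalg.inv(A), B), t)
    return stp(stp(Y, A), Y)

rng = np.random.default_rng(6)
R, S = rng.standard_normal((2, 2)), rng.standard_normal((4, 4))
A, B = R @ R.T + 2 * np.eye(2), S @ S.T + 4 * np.eye(4)   # m = 2 divides n = 4
t = 2.5

X = sgm(A, B, 1 / t)                        # claimed solution of A <>_t X = B  (Theorem 5.1)
print(np.allclose(sgm(A, X, t), B))
X_half = sgm(A, B, 2.0)                     # Remark 5.2: the t = 1/2 equation A <> X = B
print(np.allclose(sgm(A, X_half, 0.5), B))
Y = mgm(np.linalg.inv(A), B)                # equivalently X = (A^{-1} # B) B (A^{-1} # B)
print(np.allclose(X_half, Y @ B @ Y))
```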

    Theorem 5.3. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n . Let t\in \mathbb{R} and \alpha = {\rm{lcm}}(m, n) , then the equation

    \begin{align} (A\,\sharp\, X)\,\sharp_t\,(B\,\sharp\, X) \; = \; I_\alpha \end{align} (5.6)

    has a unique solution X = A^{-1} \, \lozenge_t\, B^{-1} \in \mathbb{P}_\alpha .

    Proof. For the case t = 0 , Lemma 2.5 tells us that the equation A\, \sharp\, X = I_\alpha has a unique solution

    X \; = \; A^{-1}\otimes I_{\alpha/m} \; = \; A^{-1} \,\lozenge_0\, B^{-1}.

    Now, assume that t\neq 0 . To prove the uniqueness, let U = A\, \sharp\, X and V = B\, \sharp\, X , then

    U \ltimes A^{-1} \ltimes U \; = \; X \; = \; V \ltimes B^{-1} \ltimes V .

    Since U\, \sharp_t\, V = I_\alpha , we obtain (U^{-1/2}\ltimes V\ltimes U^{-1/2})^t = U^{-1} and, thus, V = U^{(t-1)/t} . It follows that

    \begin{align*} B\otimes I_{\alpha/n} \;& = \; V\ltimes X^{-1} \ltimes V \; = \; V\ltimes U^{-1}\ltimes A\ltimes U^{-1}\ltimes V \\ \;& = \; U^{-1/t}\ltimes A\ltimes U^{-1/t}. \end{align*}

    Using Lemma 2.5, we have that U^{-1/t} = A^{-1}\, \sharp\, B and, thus, U = (A^{-1}\, \sharp\, B)^{-t} . Hence,

    \begin{align*} X \;& = \; (A^{-1}\,\sharp\,B)^{-t} \ltimes A^{-1} \ltimes (A^{-1}\,\sharp\,B)^{-t} \\ \;& = \; (A\,\sharp\, B^{-1})^t \ltimes A^{-1} \ltimes (A\,\sharp\, B^{-1})^t \; = \; A^{-1} \,\lozenge_t\, B^{-1}. \end{align*}
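    A numerical check of Theorem 5.3 (ours; helpers as above): for X = A^{-1} \lozenge_t B^{-1}, the combination (A\,\sharp\,X)\,\sharp_t\,(B\,\sharp\,X) is indeed the identity.

```python
import numpy as np
from math import lcm

def stp(A, B):
    n, p = A.shape[1], B.shape[0]
    a = lcm(n, p)
    return np.kron(A, np.eye(a // n)) @ np.kron(B, np.eye(a // p))

def powm(M, t):
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

def mgm(A, B, t=0.5):
    inner = stp(stp(powm(A, -0.5), B), powm(A, -0.5))
    return stp(stp(powm(A, 0.5), powm(inner, t)), powm(A, 0.5))

def sgm(A, B, t=0.5):
    Y = powm(mgm(np.linalg.inv(A), B), t)
    return stp(stp(Y, A), Y)

rng = np.random.default_rng(7)
R, S = rng.standard_normal((2, 2)), rng.standard_normal((3, 3))
A, B = R @ R.T + 2 * np.eye(2), S @ S.T + 3 * np.eye(3)   # m = 2, n = 3, alpha = 6
t = 0.4

X = sgm(np.linalg.inv(A), np.linalg.inv(B), t)            # claimed solution X = A^{-1} <>_t B^{-1}
U, V = mgm(A, X), mgm(B, X)                                # U = A # X, V = B # X (both 6x6)
print(np.allclose(mgm(U, V, t), np.eye(6)))                # Eq (5.6): (A # X) #_t (B # X) = I_alpha
```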

    Corollary 5.4. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n and \alpha = {\rm{lcm}}(m, n) , then the equation

    \begin{align} A\,\sharp\, X \; = \; B\,\sharp\, X^{-1} \end{align} (5.7)

    has a unique solution X = A^{-1} \, \lozenge\, B \in \mathbb{P}_\alpha .

    Proof. Equation (5.7) and Lemma 2.6 imply that

    \begin{align*} (A \,\sharp\, X)^{-1} \; = \; (B\,\sharp\, X^{-1})^{-1} \; = \; B^{-1} \,\sharp\, X. \end{align*}

    Thus, Eq (5.7) is equivalent to the following equation:

    \begin{align*} (A\,\sharp\, X)\,\sharp_{1/2} \,(B^{-1}\,\sharp\, X) \; = \; I_\alpha. \end{align*}

    Now, the desired solution follows from the case t = 1/2 in Theorem 5.3.

    In particular, when m = n and A = B , the equation A\, \sharp\, X = A\, \sharp\, X^{-1} has a unique solution X = A \, \lozenge\, A^{-1} = A^0 = I by Eq (3.3).

    Theorem 5.5. Let (A, B)\in \mathbb{P}_m\times \mathbb{P}_n . Let t\in \mathbb{R} and \alpha = {\rm{lcm}}(m, n) , then the equation

    \begin{align} (A\,\sharp\, X) \,\lozenge_t\, (B\,\sharp\, X) \; = \; I_\alpha \end{align} (5.8)

    has a unique solution X = A^{-1} \, \lozenge_t\, B^{-1} \in \mathbb{P}_\alpha .

    Proof. If t = 0 , the equation A\, \sharp\, X = I_\alpha has a unique solution X = A^{-1} \otimes I_{\alpha/m} = A^{-1} \, \lozenge_0\, B^{-1} . Now, consider t\neq 0 , and let U = A\, \sharp\, X and V = B\, \sharp\, X , then

    U^{-1}\ltimes A\ltimes U^{-1} \; = \; X^{-1} \; = \; V^{-1}\ltimes B\ltimes V^{-1}.

    Since U\lozenge_t V = I_\alpha , we have that U = (U^{-1}\, \sharp\, V)^{-2t} , i.e., U^{1/(2t)} = U\, \sharp\, V^{-1} . Applying Lemma 2.5, we get V^{-1} = U^{1/(2t)}\ltimes U^{-1}\ltimes U^{1/(2t)} = U^{(1-t)/t} . Hence,

    B\otimes I_{\alpha/n} \; = \; V\ltimes U^{-1}\ltimes A\ltimes U^{-1}\ltimes V \; = \; U^{-1/t}\ltimes A\ltimes U^{-1/t}.

    Using Lemma 2.5, we have U^{-1/t} = A^{-1}\, \sharp\, B , i.e., U = (A^{-1}\, \sharp\, B)^{-t} . Thus,

    X^{-1} \; = \; (A^{-1}\,\sharp\, B)^t\ltimes A\ltimes (A^{-1}\,\sharp\, B)^t \; = \; A \,\lozenge_t\, B.

    Hence, by the self-duality of the SGM \lozenge_{t} , we have

    X \; = \; (A \,\lozenge_t\, B)^{-1} \; = \; A^{-1} \,\lozenge_t\, B^{-1}.

    The results in this section appear not to have been noticed before in the literature. In particular, from Theorems 5.3 and 5.5, when m = n and A = B , the equation A\, \sharp\, X = I has a unique solution X = A^{-1} .

    We characterize weighted SGMs of positive definite matrices in terms of certain matrix equations involving MGMs and STPs. Indeed, for each real number t , the unique positive definite solution of the matrix equation A^{-1}\, \sharp\, X \; = \; (A^{-1}\, \sharp\, B)^t is defined to be the t -weighted SGM of A and B . We then establish several properties of the weighted SGMs, such as permutation invariance, homogeneity, self-duality, unitary invariance, cancellability, and a determinantal identity. The most significant property is the fact that (A\,\lozenge\, B)^2 is positively similar to A\ltimes B , so the two matrices have the same spectrum. The results in Sections 3 and 5 include the classical weighted SGMs of matrices as special cases. Furthermore, we show that certain equations concerning weighted SGMs and weighted MGMs of positive definite matrices have a unique solution, written explicitly as a weighted SGM of associated matrices. In particular, the equation A \, \lozenge_t\, X = B can be expressed in terms of the famous Riccati equation. For future work, we may investigate SGMs from a differential-geometric viewpoint, for example, their geodesic properties.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research project is supported by the National Research Council of Thailand (NRCT) (N41A640234). The authors would like to thank the anonymous referees for their comments and suggestions.

    The authors declare there is no conflict of interest.



    [1] J. Afiune, Avaliação Ecocardiografica Evolutiva de Recém-nascidos Pre-termo, do Nascimento Até o Termo, PhD thesis, Universidade de São Paulo, 2000.
    [2] E. Demidenko, Mixed Models: Theory and Applications with R, 2 Eds., Hoboken: John Wiley & Sons, Inc., 2013.
    [3] R. Drikvandi, G. Verbeke, G. Molenberghs, Diagnosing Misspecification of the Random-Effects Distribution in Mixed Models, Biometrics, 73 (2017), 63–71.
    [4] D. A. Freedman, On the so-called 'Huber Sandwich Estimator' and 'Robust Standard Errors', Amer. Stat., 60 (2006), 299–302. doi: 10.1198/000313006X152207
    [5] S. K. Hanneman, Design, Analysis, and Interpretation of Method-Comparison Studies, AACN Adv. Crit. Care, 19 (2008), 223–234. doi: 10.4037/15597768-2008-2015
    [6] D. A. Harville, Maximum Likelihood Approaches to Variance Component Estimation and to Related Problems, J. Amer. Stat. Assoc., 72 (1977), 320–338.
    [7] X. Huang, Detecting Random-Effects Model Misspecification via Coarsened Data, Comput. Stat. Data Anal., 55 (2011), 703–714. doi: 10.1016/j.csda.2010.06.012
    [8] J. Jiang, Goodness-of-Fit Tests for Mixed Model Diagnostics, Ann. Stat., 29 (2001), 1137–1164.
    [9] N. Lange, L. Ryan, Assessing Normality in Random Effects Models, Ann. Stat., 17 (1989), 624–642.
    [10] S. Litière, A. Alonso, G. Molenberghs, Type I and Type II Error under Random-Effects Misspecification in Generalized Linear Mixed Models, Biometrics, 63 (2007), 1038–1044.
    [11] C. Nagle, An Introduction to Fitting and Evaluating Mixed-effects Models in R, Proceedings of the 10th Pronunciation in Second Language Learning and Teaching Conference, 2018, 82–105.
    [12] H. D. Patterson, R. Thompson, Recovery of Inter-block Information when Block Sizes are Unequal, Biometrika, 58 (1971), 545–554.
    [13] R Core Team, R: A Language and Environment for Statistical Computing. Available from: https://www.r-project.org/.
    [14] G. K. Robinson, That BLUP is a Good Thing: The Estimation of Random Effects, Stat. Sci., 6 (1991), 15–32.
    [15] F. M. Rocha, J. M. Singer, Selection of Terms in Random Coefficient Regression Models, J. Appl. Stat., 45 (2018), 225–242.
    [16] H. Schielzeth, N. J. Dingemanse, S. Nakagawa, D. F. Westneat, H. Allegue, C. Teplitsky, et al., Robustness of Linear Mixed-Effects Models to Violations of Distributional Assumptions, Methods Ecol. Evol., 11 (2020), 1141–1152. doi: 10.1111/2041-210X.13434
    [17] P. Sen, J. Singer, Large Sample Methods in Statistics: An Introduction with Applications, Boca Raton: CRC Press, 1993.
    [18] J. M. Singer, F. M. Rocha, J. S. Nobre, Graphical Tools for Detecting Departures from Linear Mixed Model Assumptions and Some Remedial Measures, Int. Stat. Rev., 85 (2017), 290–324. doi: 10.1111/insr.12178
    [19] H. White, Maximum Likelihood Estimation of Misspecified Models, Econometrica, 50 (1982), 1–25. doi: 10.2307/1912526
    [20] D. Yu, X. Zhang, K. K. Yau, Asymptotic Properties and Information Criteria for Misspecified Generalized Linear Mixed Models, J. R. Stat. Soc. Ser. B (Stat. Methodol.), 80 (2018), 817–836.
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)