Research article

A matrix analysis of BLMBPs under a general linear model and its transformation

  • This paper is concerned with the relationships between best linear minimum biased predictors (BLMBPs) in the context of a general linear model (GLM) and its transformed general linear models (TGLMs). We shall establish a mathematical procedure by means of some exact and analytical tools in matrix theory that were developed in recent years. The coverage includes constructing a general vector composed of all unknown parameters in the context of a GLM and its TGLMs, deriving the exact expressions of the BLMBPs through the technical use of analytical solutions of a constrained quadratic matrix-valued function optimization problem in the Löwner partial ordering, and discussing a variety of theoretical performances and properties of the BLMBPs. We also give a series of characterizations of relationships between BLMBPs under a given GLM and its TGLMs.

    Citation: Li Gong, Bo Jiang. A matrix analysis of BLMBPs under a general linear model and its transformation. AIMS Mathematics, 2024, 9(1): 1840-1860. doi: 10.3934/math.2024090




    In this paper, we consider the standard linear model

    M:  y=Xθ+ν, (1.1)

    where it is assumed that y ∈ R^{n×1} is a vector of observable random variables, X ∈ R^{n×p} is a known matrix of arbitrary rank (0 ≤ r(X) ≤ min{n, p}), θ ∈ R^{p×1} is a vector of fixed but unknown parameters, and ν ∈ R^{n×1} is a random error vector. In order to carry out reasonable estimation and statistical inference in the context of (1.1), we assume that the expectation vector and the covariance matrix of ν are given without loss of generality by

    E(ν)=0,  Cov(ν)=Σ. (1.2)

    In the sequel, we assume that Σ ∈ R^{n×n} is a known positive semi-definite matrix of arbitrary rank, so that general and precise conclusions can be derived under the given model assumptions. Once the work under these general assumptions is established, we can, as usual in parametric regression analysis, let Σ take certain specified forms with known or unknown entries, and then derive various concrete inference results.

    The assumptions in (1.1) and (1.2) constitute a typical complete specification of the general linear model (for short, GLM). Observe that there are two unknown vectors, θ and ν, in (1.1). A basic task in statistical inference under (1.1) and (1.2) is therefore to predict the two unknown vectors simultaneously through the joint parametric vector

    τ=Aθ+Bν, (1.3)

    where it is assumed that A ∈ R^{s×p} and B ∈ R^{s×n} are known matrices of arbitrary ranks. This vector obviously includes all the unknown vectors in (1.1), such as θ and ν, as special cases. It is easy to see that under (1.1) and (1.2), we have

    E(τ) = Aθ,  Cov(τ) = BΣB′,  Cov(τ, y) = BΣ. (1.4)
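
    The moment identities in (1.4) can be checked directly by simulation. The following is a minimal numerical sketch, assuming NumPy; all dimensions and the matrices X, Σ, A, B and the vector θ below are hypothetical illustrations.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, s = 6, 3, 2
    X = rng.standard_normal((n, p))
    theta = rng.standard_normal(p)
    R = rng.standard_normal((n, n)) / np.sqrt(n)
    Sigma = R @ R.T                                  # a positive semi-definite Cov(nu)
    A = rng.standard_normal((s, p))
    B = rng.standard_normal((s, n))

    N = 200_000
    nu = rng.multivariate_normal(np.zeros(n), Sigma, size=N)   # rows are realizations of nu
    y = theta @ X.T + nu                                        # y = X theta + nu, as in (1.1)
    tau = theta @ A.T + nu @ B.T                                # tau = A theta + B nu, as in (1.3)

    cov_tau = np.cov(tau, rowvar=False)
    cov_tau_y = (tau - tau.mean(0)).T @ (y - y.mean(0)) / (N - 1)
    print(np.abs(tau.mean(0) - A @ theta).max())     # close to 0:  E(tau) = A theta
    print(np.abs(cov_tau - B @ Sigma @ B.T).max())   # close to 0:  Cov(tau) = B Sigma B'
    print(np.abs(cov_tau_y - B @ Sigma).max())       # close to 0:  Cov(tau, y) = B Sigma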

    In the investigation of linear statistical models for regression, it is a common inference problem to propose and characterize various reasonable connections between two different given models under the given model assumptions. One concrete problem of this kind is to investigate the relationships between a given linear model (called the original model) and certain types of its transformed models. Sometimes, the transformed models are required to meet certain necessary requirements in the statistical inference of the original linear model. Now let us consider M in (1.1) and its transformed models. In such a case, we may be faced with different transformed forms of the model in accordance with linear transformations of the observable random vector y. Generally speaking, various possible transformed models of M in (1.1) are obtained by pre-multiplying both sides of (1.1) by a given transformation matrix. For example,

    N:  Ty=TXθ+Tν, (1.5)

    is a common transformed form of M in (1.1), where T ∈ R^{m×n} is a known transformation matrix of arbitrary rank. Below, we present a group of well-known cases of the transformed model for different choices of the transformation matrix T in (1.5).

    (a) We first divide the original model M in (1.1) as

    M:  [y1; y2] = [X1; X2]θ + [ν1; ν2]

    by the partitions of the vectors and matrices in the model. Then we take the transformation matrices T1=[In1,0] and T2=[0,In2] in (1.5) to obtain the following two sub-sample models:

    M1:  y1 = X1θ + ν1,   M2:  y2 = X2θ + ν2,

    where yi ∈ R^{ni×1}, Xi ∈ R^{ni×p}, θ ∈ R^{p×1}, νi ∈ R^{ni×1} and n = n1 + n2. The two sub-sample models can also be viewed as the result of adding or deleting certain regression equations in a given GLM. Also, we can say that these two individual models arise from two periods of observations of the data.

    (b) Assume that a concrete form of M in (1.1) is given by

    M:  [y1; y2] = [X1, 0; 0, X2][θ1; θ2] + [ν1; ν2].

    In this case, taking the two transformation matrices T1=[In1,0] and T2=[0,In2], we obtain the following two sub-sample models:

    M1:  y1 = X1θ1 + ν1,   M2:  y2 = X2θ2 + ν2,

    where yi ∈ R^{ni×1}, Xi ∈ R^{ni×pi}, θi ∈ R^{pi×1}, νi ∈ R^{ni×1}, n = n1 + n2 and p = p1 + p2. The two models are known as seemingly unrelated linear models; they are linked to each other only through the correlated error terms across the models, while all the given matrices and the unknown vectors in the two models are different.
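
    A tiny numerical sketch, assuming NumPy, of how the selection matrices T1 = [In1, 0] and T2 = [0, In2] used in cases (a) and (b) extract the sub-sample data from y; the sizes are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    n1, n2 = 3, 2
    y = rng.standard_normal(n1 + n2)

    T1 = np.hstack([np.eye(n1), np.zeros((n1, n2))])   # T1 = [I_{n1}, 0]
    T2 = np.hstack([np.zeros((n2, n1)), np.eye(n2)])   # T2 = [0, I_{n2}]

    print(np.allclose(T1 @ y, y[:n1]))   # T1 y is the first sub-sample y1
    print(np.allclose(T2 @ y, y[n1:]))   # T2 y is the second sub-sample y2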

    Due to the linear nature of M in (1.1), we obtain the following expectations and covariance matrices of y, Ty and τ under the assumptions in (1.1) and (1.2):

    E(y) = Xθ,  E(Ty) = TXθ, (1.6)
    Cov(y) = Σ,  Cov(Ty) = TΣT′, (1.7)
    Cov(τ, y) = BΣ,  Cov(τ, Ty) = BΣT′. (1.8)

    Now we mention some background of the current study. For unknown parameters in a given regression model, statisticians are able to adopt different optimality criteria in order to obtain proper predictions and estimations of the unknown parameters. Among them, the best linear unbiased prediction, the best linear unbiased estimation and the least squares estimation are best known because they have many excellent mathematical and statistical properties and performances. There have been many deep and fruitful works in the statistical literature related to these predictions and estimations. However, it is a common fact in statistical practice that the unknown parameters in a given model may not be predictable or estimable. Instead, it is necessary to choose certain biased predictions and biased estimations of the unknown parameters. For example, Rao described the bias between estimators and unknown parameter functions, constructed the class of minimum biased estimators, selected the one with the minimum variance in this class and then defined the best linear minimum biased estimation. In particular, when the unknown parameter function is estimable, the best linear minimum biased estimation is the classic best linear unbiased estimation. It can be seen from (1.1)–(1.8) that a given model and its transformed models are not necessarily equivalent in form. Hence, the predictors/estimators of unknown vectors that are to be derived under these models have different algebraic expressions and properties. Yet, some transformations of the observable random vector may preserve enough information for predicting/estimating the unknown vectors in the original model. Therefore, it is natural to consider certain links between the predictors/estimators obtained from an original model and its transformed models in the statistical inference of these models. Traditionally, the problems of characterizing relationships between predictions/estimations of unknown vectors in an original model and its transformed models were known as linear sufficiency problems, which were first considered in [1,3]. Many scholars have also studied the relationships between estimations under a given original model and its transformed models from different aspects. For instance, Baksalary and Kala considered the problem of linear transformations of GLMs preserving the best linear unbiased estimators under the general Gauss-Markoff model in [1]; Xie studied in [30] the best linear minimum biased estimations under a given GLM and discussed the problem of linear transformations preserving the best linear minimum biased estimations. The subject has also been studied extensively in [7,9,14,18,20,33], among others.

    Given the model assumptions in (1.1)–(1.8), the purpose of this paper is to provide a unified theoretical and conceptual exploration for solving the best linear minimum biased prediction (for short, BLMBP) problems under a GLM and its transformed general linear models (for short, TGLMs) through the skillful and effective use of a series of exact and analytical matrix analysis tools. The remainder of this paper is organized as follows. In the second section, we introduce the notation and several matrix analysis tools and techniques that we shall utilize to characterize matrix equalities and matrix set inclusions that involve generalized inverses of matrices. In the third section, we introduce the definitions of the linear minimum biased predictor (for short, LMBP) and the BLMBP of τ in (1.3), as well as basic estimation and inference theory regarding the LMBP and BLMBP, including their analytical expressions and their mathematical and statistical properties and features in the contexts of (1.1)–(1.8). In the fourth section, we address the problems regarding the relationships between the BLMBPs under a GLM and its TGLMs using the powerful matrix rank and inertia methodology. The fifth section presents a special example related to the main findings in the preceding sections. Some conclusions and remarks are given in the last section.

    We begin with the introduction of the notation used in the sequel. R^{m×n} denotes the collection of all m×n matrices over the field of real numbers; the symbols M′, r(M) and R(M) denote the transpose, the rank and the range (column space) of M ∈ R^{m×n}, respectively; and I_m denotes the identity matrix of order m. The Moore–Penrose generalized inverse of M, denoted by M†, is defined to be the unique solution G satisfying the four matrix equations MGM = M, GMG = G, (MG)′ = MG and (GM)′ = GM. Let P_M = MM†, M^⊥ = E_M = I_m − MM† and F_M = I_n − M†M denote the three orthogonal projectors (symmetric idempotent matrices) induced from M, which will help in briefly denoting calculation processes related to generalized inverses of matrices; here E_M and F_M satisfy E_M = F_{M′} and F_M = E_{M′}, and their ranks are r(E_M) = m − r(M) and r(F_M) = n − r(M). Two symmetric matrices M and N of the same size are said to satisfy the inequalities M ≽ N, M ≼ N, M ≻ N and M ≺ N in the Löwner partial ordering if M − N is positive semi-definite, negative semi-definite, positive definite and negative definite, respectively. Further information about the orthogonal projectors P_M, E_M and F_M and their various applications in the theory of linear statistical models can be found, e.g., in [10,13,16,19]. It is also well known that the Löwner partial ordering between two symmetric matrices is a surprisingly strong and useful property in matrix analysis. The reader is referred to [16] and the references therein for more results and facts regarding the Löwner partial ordering in statistical theory and applications. Recently, the authors of [2,4,5,6,23,25,26,29] proposed and approached a series of research problems concerning the relationships of different kinds of predictions of unknown parameters in regression models using the rank and inertia methodology in matrix analysis, and provided a variety of simple and reasonable equivalent facts related to the relationship problems. In this paper, we also adopt the rank and inertia methodology to approach the relationship problems regarding different estimations and predictions.
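
    The following small sketch, assuming NumPy, illustrates the projectors P_M, E_M = M^⊥ and F_M and the rank identities quoted above; the matrix M is an illustrative rank-deficient example.

    import numpy as np

    rng = np.random.default_rng(2)
    m, n = 5, 3
    M = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # r(M) = 2 < min{m, n}
    Mp = np.linalg.pinv(M)                                          # Moore-Penrose inverse M†

    P_M = M @ Mp                       # orthogonal projector onto R(M)
    E_M = np.eye(m) - M @ Mp           # M^⊥ = E_M
    F_M = np.eye(n) - Mp @ M           # F_M

    r = np.linalg.matrix_rank
    print(np.allclose(P_M @ P_M, P_M), np.allclose(P_M.T, P_M))   # idempotent and symmetric
    print(r(E_M) == m - r(M), r(F_M) == n - r(M))                 # r(E_M) = m - r(M), r(F_M) = n - r(M)
    print(np.allclose(E_M @ M, 0), np.allclose(M @ F_M, 0))       # E_M M = 0 and M F_M = 0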

    As preliminaries that can help readers in getting familiar with the features and usefulness of the matrix rank methodology, we present in the following a list of commonly used results and facts about ranks of matrices and matrix equations, which are well known or easy to prove. We shall use them in the descriptions and simplifications of various complicated matrix expressions and matrix equalities that occur in the statistical inference of a GLM and its TGLMs in the following sections.

    Lemma 2.1 ([28]). Let {A} and {B} be two sets composed of matrices of the same size. Then,

    {A} ∩ {B} ≠ ∅ ⟺ min_{A∈{A}, B∈{B}} r(A − B) = 0, (2.1)
    {A} ⊆ {B} ⟺ max_{A∈{A}} min_{B∈{B}} r(A − B) = 0. (2.2)

    Lemma 2.2 ([12]). Let A ∈ R^{m×n}, B ∈ R^{m×k}, and C ∈ R^{l×n}, and let [A, B] and [A; C] denote the row-wise and column-wise (stacked) partitioned matrices. Then,

    r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A), (2.3)
    r[A; C] = r(A) + r(C F_A) = r(C) + r(A F_C), (2.4)
    r[AA′, B; B′, 0] = r[A, B] + r(B). (2.5)

    In particular, the following results hold:

    (a) r[A, B] = r(A) ⟺ R(B) ⊆ R(A) ⟺ AA†B = B ⟺ E_A B = 0.

    (b) r[A; C] = r(A) ⟺ R(C′) ⊆ R(A′) ⟺ CA†A = C ⟺ CF_A = 0.
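
    A numerical spot-check of the rank formulas (2.3)–(2.5), assuming NumPy; A, B and C are random illustrative matrices, with A made rank deficient on purpose.

    import numpy as np

    rng = np.random.default_rng(3)
    r = np.linalg.matrix_rank
    pinv = np.linalg.pinv
    m, n, k, l = 6, 4, 3, 2
    A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # r(A) = 2
    B = rng.standard_normal((m, k))
    C = rng.standard_normal((l, n))

    EA = np.eye(m) - A @ pinv(A)
    EB = np.eye(m) - B @ pinv(B)
    FA = np.eye(n) - pinv(A) @ A
    FC = np.eye(n) - pinv(C) @ C

    print(r(np.hstack([A, B])) == r(A) + r(EA @ B) == r(B) + r(EB @ A))   # (2.3)
    print(r(np.vstack([A, C])) == r(A) + r(C @ FA) == r(C) + r(A @ FC))   # (2.4)
    M25 = np.block([[A @ A.T, B], [B.T, np.zeros((k, k))]])
    print(r(M25) == r(np.hstack([A, B])) + r(B))                          # (2.5)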

    Lemma 2.3 ([22]). Assume that five matrices A1, B1, A2, B2 and A3 of appropriate sizes satisfy the conditions R(A1′) ⊆ R(B1′), R(A2) ⊆ R(B1), R(A2′) ⊆ R(B2′) and R(A3) ⊆ R(B2). Then,

    r(A1B1†A2B2†A3) = r[0, B2, A3; B1, A2, 0; A1, 0, 0] − r(B1) − r(B2). (2.6)

    Lemma 2.4 ([21,27]). Let A ∈ R^{m×n}, B ∈ R^{m×k} and C ∈ R^{l×n} be given. Then, the maximal and minimal ranks of A − BZ and A − BZC with respect to a variable matrix Z of appropriate size are given by the following closed-form formulas:

    max_{Z∈R^{k×n}} r(A − BZ) = min{r[A, B], n}, (2.7)
    min_{Z∈R^{k×n}} r(A − BZ) = r[A, B] − r(B), (2.8)
    max_{Z∈R^{k×l}} r(A − BZC) = min{r[A, B], r[A; C]}. (2.9)
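
    A numerical sketch of the extremal rank formulas (2.7) and (2.8), assuming NumPy; the minimum in (2.8) is attained at Z = B†A, which is the choice used below, and all matrices are illustrative.

    import numpy as np

    rng = np.random.default_rng(4)
    r = np.linalg.matrix_rank
    m, n, k = 6, 5, 2
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((m, k))

    Z_min = np.linalg.pinv(B) @ A                                   # a minimizing choice of Z in (2.8)
    print(r(A - B @ Z_min) == r(np.hstack([A, B])) - r(B))          # (2.8)

    Z_rand = rng.standard_normal((k, n))                            # a generic Z already attains the maximum
    print(r(A - B @ Z_rand) == min(r(np.hstack([A, B])), n))        # (2.7)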

    Below we offer some existing formulas and results regarding general solutions of a basic linear matrix equation and a constrained quadratic matrix optimization problem.

    Lemma 2.5 ([15]). Let A ∈ R^{m×n} and B ∈ R^{p×n}. Then, the linear matrix equation ZA = B is solvable for Z ∈ R^{p×m} if and only if R(B′) ⊆ R(A′), or equivalently, BA†A = B. In this case, the general solution of the equation can be written in the parametric form

    Z = BA† + UA^⊥,

    where U ∈ R^{p×m} is an arbitrary matrix.
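
    A short sketch of Lemma 2.5, assuming NumPy: the solvability test BA†A = B and the parametric family Z = BA† + UA^⊥. The matrices below are constructed so that a solution exists.

    import numpy as np

    rng = np.random.default_rng(5)
    m, n, p = 5, 4, 3
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((p, m)) @ A                # guarantees that ZA = B is solvable

    Ap = np.linalg.pinv(A)
    print(np.allclose(B @ Ap @ A, B))                  # consistency condition of Lemma 2.5

    A_perp = np.eye(m) - A @ Ap                        # A^⊥ = E_A
    U = rng.standard_normal((p, m))                    # arbitrary parameter matrix
    Z = B @ Ap + U @ A_perp                            # a member of the general solution
    print(np.allclose(Z @ A, B))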

    Lemma 2.6 ([28]). Let A ∈ R^{m×n}, B ∈ R^{m×k} and assume that R(A) = R(B). Then,

    XA = 0 ⟺ XB = 0.

    Lemma 2.7 ([24]). Let

    f(Z) = (ZC + D)M(ZC + D)′  s.t.  ZA = B, (2.10)

    where it is assumed that A ∈ R^{p×q}, B ∈ R^{n×q}, C ∈ R^{p×m} and D ∈ R^{n×m} are given, M ∈ R^{m×m} is positive semi-definite and the matrix equation ZA = B is solvable for Z ∈ R^{n×p}. Then, there always exists a solution Z0 of ZA = B such that

    f(Z) ≽ f(Z0)

    holds in the Löwner partial ordering for all solutions of ZA = B, and the matrix Z0 that satisfies the above inequality is determined by the following consistent matrix equation:

    Z0[A, CMC′A^⊥] = [B, −DMC′A^⊥].

    In this case, the general expression of Z0 and the corresponding f(Z0) and f(Z) are given by

    Z0 = argmin_{ZA=B} f(Z) = [B, −DMC′A^⊥][A, CMC′A^⊥]† + U[A, CMC′]^⊥,
    f(Z0) = min_{ZA=B} f(Z) = GMG′ − GMC′TCMG′,
    f(Z) = f(Z0) + (ZC + D)MC′TCM(ZC + D)′ = f(Z0) + (ZCMC′A^⊥ + DMC′A^⊥)T(ZCMC′A^⊥ + DMC′A^⊥)′,

    where G = BA†C + D, T = (A^⊥CMC′A^⊥)† and U ∈ R^{n×p} is arbitrary.
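
    A numerical sketch of Lemma 2.7, assuming NumPy: Z0 is built from the consistent equation Z0[A, CMC′A^⊥] = [B, −DMC′A^⊥] (with U = 0), and f(Z) − f(Z0) is checked to be positive semi-definite over random feasible Z. All matrices are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)
    pinv = np.linalg.pinv
    p, q, n, m = 4, 2, 3, 5
    A = rng.standard_normal((p, q))
    Z_feas = rng.standard_normal((n, p))
    B = Z_feas @ A                                         # makes ZA = B solvable
    C = rng.standard_normal((p, m))
    D = rng.standard_normal((n, m))
    M = (lambda R: R @ R.T)(rng.standard_normal((m, m)))   # positive semi-definite weight

    A_perp = np.eye(p) - A @ pinv(A)
    W1 = np.hstack([A, C @ M @ C.T @ A_perp])              # [A, C M C' A^⊥]
    Z0 = np.hstack([B, -D @ M @ C.T @ A_perp]) @ pinv(W1)  # the solution with U = 0

    f = lambda Z: (Z @ C + D) @ M @ (Z @ C + D).T
    print(np.allclose(Z0 @ A, B))                          # Z0 is feasible
    for _ in range(3):
        Z = Z_feas + rng.standard_normal((n, p)) @ A_perp  # another solution of ZA = B
        print(np.linalg.eigvalsh(f(Z) - f(Z0)).min() > -1e-8)   # f(Z) ≽ f(Z0)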

    In order to describe the relationships between BLMBPs under different regression models, we need to adopt the following definition to characterize possible equality between two random vectors [28].

    Definition 2.8. Let y be as given in (1.1), let {L1} and {L2} be two matrix sets, and let L1y and L2y be any two linear predictors of τ in (1.3).

    (a) {L1y} ∩ {L2y} ≠ ∅ holds definitely, i.e., {L1} ∩ {L2} ≠ ∅, if and only if

    min_{L1∈{L1}, L2∈{L2}} r(L1 − L2) = 0.

    (b) The vector set inclusion {L1y} ⊆ {L2y} holds definitely, i.e., {L1} ⊆ {L2}, if and only if

    max_{L1∈{L1}} min_{L2∈{L2}} r(L1 − L2) = 0.

    (c) {L1y} ∩ {L2y} ≠ ∅ holds with probability 1 if and only if

    min_{L1∈{L1}, L2∈{L2}} r((L1 − L2)[X, Σ]) = 0 ⟺ min_{L1∈{L1}, L2∈{L2}} r((L1 − L2)[X, ΣX^⊥]) = 0 ⟺ min_{L1∈{L1}, L2∈{L2}} r((L1 − L2)[XX′, ΣX^⊥]) = 0.

    (d) The vector set inclusion {L1y} ⊆ {L2y} holds with probability 1 if and only if

    max_{L1∈{L1}} min_{L2∈{L2}} r((L1 − L2)[X, Σ]) = 0 ⟺ max_{L1∈{L1}} min_{L2∈{L2}} r((L1 − L2)[X, ΣX^⊥]) = 0 ⟺ max_{L1∈{L1}} min_{L2∈{L2}} r((L1 − L2)[XX′, ΣX^⊥]) = 0.
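
    A small sketch of Definition 2.8(c), assuming NumPy: two predictors L1y and L2y whose coefficient matrices differ can still coincide with probability 1, because y takes its values in R[X, Σ]. All matrices are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    n, p, s = 6, 2, 2
    X = rng.standard_normal((n, p))
    S_half = rng.standard_normal((n, 2))                 # Σ = S_half S_half' has rank 2
    Sigma = S_half @ S_half.T
    XS = np.hstack([X, Sigma])
    P = XS @ np.linalg.pinv(XS)                          # projector onto R[X, Σ] (rank 4 here)

    L1 = rng.standard_normal((s, n))
    L2 = L1 + rng.standard_normal((s, n)) @ (np.eye(n) - P)   # L1 ≠ L2, yet (L1 - L2)[X, Σ] = 0
    print(np.allclose((L1 - L2) @ XS, 0))

    y = X @ rng.standard_normal(p) + S_half @ rng.standard_normal(2)   # one realization of y
    print(np.allclose(L1 @ y, L2 @ y))                   # L1 y = L2 y with probability 1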

    Recall in parametric regression analysis that if there exists a matrix L such that E(Ly − τ) = (LX − A)θ = 0 holds for all θ, the parametric vector τ in (1.3) is said to be predictable under the assumptions in (1.1) and (1.2). Otherwise, there does not exist an unbiased predictor of τ under (1.1) and (1.2), and therefore we have to seek certain biased predictors of τ according to various specified optimization criteria. In this section, we shall adopt the following known definitions of the LMBP and BLMBP of τ (cf. [17, p.337]).

    Definition 3.1. Let the parametric vector τ be as given in (1.3).

    (a) The LMBP of τ in (1.3) under (1.1) is defined to be

    LMBPM(τ)=LMBPM(Aθ+Bν)=ˆLy, (3.1)

    where the matrix ˆL satisfies

    ˆL = argmin_{L∈R^{s×n}} tr((LX − A)(LX − A)′). (3.2)

    (b) The LMBP of τ in (1.3) under (1.5) is defined to be

    LMBPN(τ)=LMBPN(Aθ+Bν)=ˆKTy, (3.3)

    where the matrix ˆK satisfies

    ˆK = argmin_{K∈R^{s×m}} tr((KTX − A)(KTX − A)′). (3.4)

    Theorem 3.2. Under the notations in Definition 3.1, the following results hold:

    ˆL = argmin_{L∈R^{s×n}} tr((LX − A)(LX − A)′) ⟺ ˆLXX′ = AX′, (3.5)
    ˆK = argmin_{K∈R^{s×m}} tr((KTX − A)(KTX − A)′) ⟺ ˆKTX(TX)′ = A(TX)′. (3.6)

    Proof. Note first that

    tr((KTX − A)(KTX − A)′)
    = tr((KTX − A(TX)†TX + A(TX)†TX − A)(KTX − A(TX)†TX + A(TX)†TX − A)′)
    = tr((KTX − A(TX)†TX − AF_{TX})(KTX − A(TX)†TX − AF_{TX})′)
    = tr(AF_{TX}A′) + tr((KTX − A(TX)†TX)(KTX − A(TX)†TX)′) − tr((K − A(TX)†)TXF_{TX}A′) − tr(AF_{TX}(TX)′(K − A(TX)†)′)
    = tr(AF_{TX}A′) + tr((KTX − A(TX)†TX)(KTX − A(TX)†TX)′), (3.7)

    where TXF_{TX} = 0 has been used. Note that tr((KTX − A(TX)†TX)(KTX − A(TX)†TX)′) ≥ 0 for all K ∈ R^{s×m} and that the matrix equation KTX = A(TX)†TX is solvable for K ∈ R^{s×m}. In this case, we obtain

    min_{K∈R^{s×m}} tr((KTX − A)(KTX − A)′) = tr(AF_{TX}A′),

    and

    ˆK = argmin_{K∈R^{s×m}} tr((KTX − A)(KTX − A)′) ⟺ ˆKTX − A(TX)†TX = 0 ⟺ ˆKTX(TX)′ = A(TX)′,

    thus establishing (3.6). Letting T=In leads to (3.5).
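
    A quick numerical sketch of the characterization (3.5), assuming NumPy: ˆL = AX† is one minimizer of tr((LX − A)(LX − A)′), and it satisfies the normal equation ˆLXX′ = AX′; the matrices are illustrative, with X deliberately rank deficient.

    import numpy as np

    rng = np.random.default_rng(8)
    n, p, s = 6, 4, 2
    X = rng.standard_normal((n, 2)) @ rng.standard_normal((2, p))   # r(X) = 2 < p
    A = rng.standard_normal((s, p))

    L_hat = A @ np.linalg.pinv(X)                                   # one minimizer of (3.2)
    print(np.allclose(L_hat @ X @ X.T, A @ X.T))                    # the normal equation (3.5)

    obj = lambda L: np.trace((L @ X - A) @ (L @ X - A).T)
    perturbed = [obj(L_hat + rng.standard_normal((s, n))) for _ in range(5)]
    print(all(obj(L_hat) <= v + 1e-10 for v in perturbed))          # no perturbation does better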

    Definition 3.3. Let the parametric vector τ be as given in (1.3).

    (a) If ˆL satisfies

    Cov(ˆLy − τ) = min  s.t.  ˆL = argmin_{L∈R^{s×n}} tr((LX − A)(LX − A)′) (3.8)

    holds in the Löwner partial ordering, then the linear statistic ˆLy is defined to be the BLMBP of τ in (1.3) under (1.1), and is denoted by

    ˆLy=BLMBPM(τ)=BLMBPM(Aθ+Bν). (3.9)

    (b) If ˆK satisfies

    Cov(ˆKTy − τ) = min  s.t.  ˆK = argmin_{K∈R^{s×m}} tr((KTX − A)(KTX − A)′) (3.10)

    holds in the Löwner partial ordering, then the linear statistic ˆKTy is defined to be the BLMBP of τ in (1.3) under (1.5), and is denoted by

    ˆKTy=BLMBPN(τ)=BLMBPN(Aθ+Bν). (3.11)

    If B = 0 or A = 0 in (1.3), then ˆKTy in (3.11) is defined to be the best linear minimum biased estimator (BLMBE) of Aθ or the BLMBP of Bν in (1.3) under (1.5), respectively, and is denoted by

    ˆKTy=BLMBEN(Aθ)  and  ˆKTy=BLMBPN(Bν).

    It is easy to verify that the difference KTy − τ under (1.5) can be written in the following form:

    KTy − τ = KTXθ + KTν − Aθ − Bν = (KTX − A)θ + (KT − B)ν.

    Hence, the covariance matrix of KTy − τ can be written as

    Cov(KTy − τ) = (KT − B)Σ(KT − B)′ = f(K). (3.12)

    Our main results on the BLMBPs of τ in (1.3) are given below.

    Theorem 3.4. Let the parametric vector τ be as given in (1.3) and define W = [TX(TX)′, Cov(Ty)(TX)^⊥] and D = Cov(τ, Ty). Then

    Cov(ˆKTy − τ) = min  s.t.  ˆKTX(TX)′ = A(TX)′ ⟺ ˆKW = [A(TX)′, D(TX)^⊥]. (3.13)

    The matrix equation in (3.13) is solvable for ˆK, i.e.,

    [A(TX)′, D(TX)^⊥]W†W = [A(TX)′, D(TX)^⊥] (3.14)

    holds under (3.6), while the general expression of ˆK and the corresponding BLMBPN(τ) can be written in the following form

    BLMBPN(τ) = ˆKTy = ([A(TX)′, D(TX)^⊥]W† + U1W^⊥)Ty, (3.15)

    where U1 ∈ R^{s×m} is arbitrary. Furthermore, the following results hold.

    (a) r[TX, TΣT′(TX)^⊥] = r[TX, (TX)^⊥TΣT′] = r[TX, TΣ] and

    R[TX, TΣT′(TX)^⊥] = R[TX, (TX)^⊥TΣT′] = R[TX, TΣ].

    (b) ˆKT in (3.15) is unique if and only if R(T) ⊆ R[TX, TΣ].

    (c) BLMBPN(τ) is unique if and only if Ty ∈ R[TX, TΣ] holds with probability 1.

    (d) The covariance matrix of BLMBPN(τ) is given by

    Cov(BLMBPN(τ)) = ˆKTΣT′ˆK′ = ([A(TX)′, D(TX)^⊥]W†)TΣT′([A(TX)′, D(TX)^⊥]W†)′; (3.16)

    the covariance matrix between BLMBPN(τ) and τ is given by

    Cov(BLMBPN(τ), τ) = [A(TX)′, D(TX)^⊥][TX(TX)′, TΣT′(TX)^⊥]†D′; (3.17)

    the difference of Cov(τ) and Cov(BLMBPN(τ)) is given by

    Cov(τ) − Cov(BLMBPN(τ)) = BΣB′ − ([A(TX)′, D(TX)^⊥]W†)TΣT′([A(TX)′, D(TX)^⊥]W†)′; (3.18)

    the covariance matrix of τBLMBPN(τ) is given by

    Cov(τ − BLMBPN(τ)) = ([A(TX)′, D(TX)^⊥]W†T − B)Σ([A(TX)′, D(TX)^⊥]W†T − B)′. (3.19)

    (e) If B=0 or A=0 in (1.3), then

    BLMBEN(Aθ) = ([A(TX)′, 0]W† + U1W^⊥)Ty, (3.20)
    BLMBPN(Bν) = ([0, D(TX)^⊥]W† + U1W^⊥)Ty. (3.21)

    Proof. Eq (3.13) is obviously equivalent to

    f(K) = (KT − B)Σ(KT − B)′ = min  s.t.  KTX(TX)′ = A(TX)′. (3.22)

    Since Σ ≽ 0, the optimization problem in (3.22) is a special case of (2.10). By Lemma 2.7, the solution of (3.22) is determined by the matrix equation in (3.13). This equation is consistent under (3.6), and the general solution of the equation and the corresponding BLMBP are given in (3.15). Result (a) is well known; see [11,16]. Results (b) and (c) follow from the conditions [TX, TΣT′(TX)^⊥]^⊥T = 0 and [TX, TΣT′]^⊥Ty = 0 holding with probability 1.

    Taking the covariance operation of (3.15) yields (3.16). Also from (1.8) and (3.15), the covariance matrix between BLMBPN(τ) and τ is

    Cov(BLMBPN(τ), τ) = Cov(ˆKTy, τ) = [A(TX)′, D(TX)^⊥][TX(TX)′, TΣT′(TX)^⊥]†TΣB′ = [A(TX)′, D(TX)^⊥][TX(TX)′, TΣT′(TX)^⊥]†D′,

    thus establishing (3.17). Combination of (1.4) and (3.16) yields (3.18). Substitution of (3.15) into (3.12) and then simplification yields (3.19).
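
    The construction of Theorem 3.4 can be carried out numerically. The following sketch, assuming NumPy, forms W = [TX(TX)′, Cov(Ty)(TX)^⊥] and D = Cov(τ, Ty) = BΣT′, takes ˆK = [A(TX)′, D(TX)^⊥]W† (the choice U1 = 0), and verifies the two defining conditions (3.6) and (3.13); all model matrices are illustrative.

    import numpy as np

    rng = np.random.default_rng(9)
    pinv = np.linalg.pinv
    n, p, s, m = 6, 3, 2, 4
    X = rng.standard_normal((n, p))
    Sigma = (lambda R: R @ R.T)(rng.standard_normal((n, n)))
    A = rng.standard_normal((s, p))
    B = rng.standard_normal((s, n))
    T = rng.standard_normal((m, n))

    TX = T @ X
    TX_perp = np.eye(m) - TX @ pinv(TX)                   # (TX)^⊥
    W = np.hstack([TX @ TX.T, T @ Sigma @ T.T @ TX_perp])
    D = B @ Sigma @ T.T                                   # Cov(tau, Ty)

    K_hat = np.hstack([A @ TX.T, D @ TX_perp]) @ pinv(W)  # coefficient of BLMBPN(tau), U1 = 0
    print(np.allclose(K_hat @ TX @ TX.T, A @ TX.T))       # the minimum-bias condition (3.6)
    print(np.allclose(K_hat @ W, np.hstack([A @ TX.T, D @ TX_perp])))   # Eq (3.13)

    The BLMBP itself is then the statistic ˆKTy evaluated at an observed vector y.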

    Some conclusions for a special case of Theorem 3.4 are presented below without proof.

    Corollary 3.5. Let the parametric vector τ be as given in (1.3), and define V = [XX′, Cov(y)X^⊥] and C = Cov(τ, y). Then

    Cov(ˆLy − τ) = min  s.t.  ˆLXX′ = AX′ ⟺ ˆLV = [AX′, CX^⊥]. (3.23)

    The matrix equation in (3.23) is solvable for ˆL, i.e.,

    [AX′, CX^⊥]V†V = [AX′, CX^⊥] (3.24)

    holds under (3.5), while the general expression of ˆL and the corresponding BLMBPM(τ) can be written in the following form

    BLMBPM(τ) = ˆLy = ([AX′, CX^⊥]V† + U2V^⊥)y, (3.25)

    where U2 ∈ R^{s×n} is arbitrary. Furthermore, the following results hold.

    (a) r[X, ΣX^⊥] = r[X, X^⊥Σ] = r[X, Σ] and R[X, ΣX^⊥] = R[X, X^⊥Σ] = R[X, Σ].

    (b) ˆL in (3.25) is unique if and only if r[X,Σ]=n.

    (c) BLMBPM(τ) is unique if and only if y ∈ R[X, Σ] holds with probability 1.

    (d) The covariance matrix of BLMBPM(τ) is given by

    Cov(BLMBPM(τ)) = ˆLΣˆL′ = ([AX′, CX^⊥]V†)Σ([AX′, CX^⊥]V†)′; (3.26)

    the covariance matrix between BLMBPM(τ) and τ is given by

    Cov(BLMBPM(τ), τ) = [AX′, CX^⊥][XX′, ΣX^⊥]†C′; (3.27)

    the difference of Cov(τ) and Cov(BLMBPM(τ)) is given by

    Cov(τ) − Cov(BLMBPM(τ)) = BΣB′ − ([AX′, CX^⊥]V†)Σ([AX′, CX^⊥]V†)′; (3.28)

    the covariance matrix of τBLMBPM(τ) is given by

    Cov(τ − BLMBPM(τ)) = ([AX′, CX^⊥]V† − B)Σ([AX′, CX^⊥]V† − B)′. (3.29)

    Corollary 3.6. Let the parametric vector τ be as given in (1.3). Then, the following results hold:

    (a) The BLMBP of τ can be decomposed as the sum

    BLMBPN(τ)=BLMBEN(Aθ)+BLMBPN(Bν), (3.30)

    and they satisfy

    Cov(BLMBEN(Aθ),BLMBPN(Bν))=0, (3.31)
    Cov(BLMBPN(τ))=Cov(BLMBEN(Aθ))+Cov(BLMBPN(Bν)). (3.32)

    (b) For any matrix P ∈ R^{t×s}, the following equality holds:

    BLMBPN(Pτ)=PBLMBPN(τ). (3.33)

    (c) The BLMBP of τ can be decomposed as the sum

    BLMBPM(τ)=BLMBEM(Aθ)+BLMBPM(Bν), (3.34)

    and they satisfy

    Cov(BLMBEM(Aθ),BLMBPM(Bν))=0, (3.35)
    Cov(BLMBPM(τ))=Cov(BLMBEM(Aθ))+Cov(BLMBPM(Bν)). (3.36)

    (d) For any matrix P ∈ R^{t×s}, the following equality holds:

    BLMBPM(Pτ)=PBLMBPM(τ). (3.37)

    Proof. Notice that the arbitrary matrix U1 in (3.15) can be written as U1 = V1 + V2, while the matrix [A(TX)′, D(TX)^⊥] in (3.15) can be rewritten as

    [A(TX)′, D(TX)^⊥] = [A(TX)′, 0] + [0, D(TX)^⊥].

    Correspondingly, BLMBPN(τ) in (3.15) can be rewritten as the sum:

    BLMBPN(τ) = ([A(TX)′, D(TX)^⊥]W† + U1W^⊥)Ty = ([A(TX)′, 0]W† + V1W^⊥)Ty + ([0, D(TX)^⊥]W† + V2W^⊥)Ty = BLMBEN(Aθ) + BLMBPN(Bν),

    thus establishing (3.30). From (3.20) and (3.21), the covariance matrix between BLMBEN(Aθ) and BLMBPN(Bν) is given by

    Cov(BLMBEN(Aθ), BLMBPN(Bν)) = ([A(TX)′, 0]W†)TΣT′([0, BΣT′(TX)^⊥]W†)′.

    Applying (2.6) to the right-hand side of the above equality and then simplifying by Theorem 3.4(a), (2.3), and (2.5), we obtain

    r(Cov(BLMBEN(Aθ),BLMBPN(Bν)))=r([A(TX),0]WTΣT([0,BΣT(TX)]W))=r[0[TX(TX)(TX)TΣT][0(TX)TΣB][TX(TX),TΣT(TX)]TΣT0[A(TX),0]00]2r[TX,TΣT(TX)]=r[[000(TX)TΣT(TX)][TX(TX)0][0(TX)TΣB][TX(TX),0]TΣT0[A(TX),0]00]2r[TX,TΣ]=r[0TX(TX)TX(TX)TΣTA(TX)0]+r[(TX)TΣT(TX),(TX)TΣB]2r[TX,TΣ]=r[TX(TX)A(TX)]+r[TX(TX)TΣT]+r[TX,TΣT(TX),TΣB]r(TX)2r[TX,TΣ]=r[TX,TΣ,TΣB]r[TX,TΣ]=r[TX,TΣ]r[TX,TΣ]=0,

    which implies that Cov(BLMBEN(Aθ),BLMBPN(Bν)) is a zero matrix, thus establishing (3.31). Equation (3.32) follows from (3.30) and (3.31). Result (b) follows directly from (3.15). Results (c) and (d) are special cases of (a) and (b).
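
    A self-contained numerical sketch of the decomposition (3.30) and the orthogonality (3.31), assuming NumPy: with U1 = 0, the coefficient matrices of BLMBEN(Aθ) and BLMBPN(Bν) in (3.20) and (3.21) add up to that of BLMBPN(τ), and their cross-covariance vanishes. All model matrices are illustrative.

    import numpy as np

    rng = np.random.default_rng(10)
    pinv = np.linalg.pinv
    n, p, s, m = 6, 3, 2, 4
    X = rng.standard_normal((n, p))
    Sigma = (lambda R: R @ R.T)(rng.standard_normal((n, n)))
    A = rng.standard_normal((s, p))
    B = rng.standard_normal((s, n))
    T = rng.standard_normal((m, n))

    TX = T @ X
    TX_perp = np.eye(m) - TX @ pinv(TX)
    W = np.hstack([TX @ TX.T, T @ Sigma @ T.T @ TX_perp])
    D = B @ Sigma @ T.T

    K_full = np.hstack([A @ TX.T, D @ TX_perp]) @ pinv(W)        # coefficient of BLMBPN(tau)
    K_A = np.hstack([A @ TX.T, np.zeros((s, m))]) @ pinv(W)      # coefficient of BLMBEN(A theta)
    K_B = np.hstack([np.zeros((s, m)), D @ TX_perp]) @ pinv(W)   # coefficient of BLMBPN(B nu)

    print(np.allclose(K_A + K_B, K_full))                        # the decomposition (3.30)
    cross = K_A @ (T @ Sigma @ T.T) @ K_B.T                      # Cov(BLMBEN(Aθ), BLMBPN(Bν))
    print(np.allclose(cross, 0, atol=1e-8))                      # the orthogonality (3.31)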

    One of the main tasks in the statistical inference of parametric regression models is to characterize connections between different predictions/estimations of unknown parameters. In this section, we study the relationships between the BLMBPs under a GLM and its TGLMs. Because the coefficient matrices ˆKT and ˆL in (3.15) and (3.25) are not necessarily unique, we use

    {ˆL},  {ˆKT},  {BLMBPM(τ)}={ˆLy},  {BLMBPN(τ)}={ˆKTy} (4.1)

    to denote the collections of all the coefficient matrices and the corresponding BLMBPs. In order to characterize the relations between the collections of the coefficient matrices in (4.1), it is necessary to discuss the following four cases:

    (a) {ˆL} ∩ {ˆKT} ≠ ∅, so that {BLMBPM(τ)} ∩ {BLMBPN(τ)} ≠ ∅ holds definitely;

    (b) {ˆL} ⊇ {ˆKT}, so that {BLMBPM(τ)} ⊇ {BLMBPN(τ)} holds definitely;

    (c) {ˆL} ⊆ {ˆKT}, so that {BLMBPM(τ)} ⊆ {BLMBPN(τ)} holds definitely;

    (d) {ˆL} = {ˆKT}, so that {BLMBPM(τ)} = {BLMBPN(τ)} holds definitely.

    In order to characterize the relations between the collections of the random vectors in (4.1), it is necessary to discuss the following four cases:

    (a) {BLMBPM(τ)} ∩ {BLMBPN(τ)} ≠ ∅ holds with probability 1;

    (b) {BLMBPM(τ)} ⊇ {BLMBPN(τ)} holds with probability 1;

    (c) {BLMBPM(τ)} ⊆ {BLMBPN(τ)} holds with probability 1;

    (d) {BLMBPM(τ)} = {BLMBPN(τ)} holds with probability 1.

    Our main results are given below.

    Theorem 4.1. Let BLMBPN(τ) and BLMBPM(τ) be as given in (3.15) and (3.25), respectively, and define

    Λ = [TXX′, TΣ; 0, X′],  Γ = [AX′, BΣ]. (4.2)

    Then, the following results hold.

    (a) There exist ˆL and ˆK such that ˆL = ˆKT if and only if R(Γ′) ⊆ R(Λ′). In this case, {BLMBPM(τ)} ∩ {BLMBPN(τ)} ≠ ∅ holds definitely.

    (b) {ˆL} ⊇ {ˆKT} if and only if R(Γ′) ⊇ R(Λ′). In this case, {BLMBPM(τ)} ⊇ {BLMBPN(τ)} holds definitely.

    (c) {ˆL} ⊆ {ˆKT} if and only if r[Λ; Γ] = r(T) + r(X) + r[X, Σ] − n. In this case, {BLMBPM(τ)} ⊆ {BLMBPN(τ)} holds definitely.

    (d) {ˆL} = {ˆKT} if and only if R(Γ′) ⊇ R(Λ′) and r[TX, TΣ] = r[X, Σ] + r(T) − n. In this case, {BLMBPM(τ)} = {BLMBPN(τ)} holds definitely.

    Proof. From (3.15) and (3.25), the difference ˆL − ˆKT can be written as

    ˆL − ˆKT = Q + U2[XX′, ΣX^⊥]^⊥ − U1[TX(TX)′, TΣT′(TX)^⊥]^⊥T, (4.3)

    where Q = [AX′, BΣX^⊥][XX′, ΣX^⊥]† − [A(TX)′, BΣT′(TX)^⊥][TX(TX)′, TΣT′(TX)^⊥]†T, and U1 ∈ R^{s×m} and U2 ∈ R^{s×n} are arbitrary. Applying (2.8) to (4.3) gives

    minˆL,ˆKr(ˆLˆKT)=minU1,U2r(Q+U2[XX,ΣX]U1[TX(TX),TΣT(TX)]T)=r[Q[XX,ΣX][TX(TX),TΣT(TX)]T]r[[XX,ΣX][TX(TX),TΣT(TX)]T]. (4.4)

    It is easy to obtain by (2.3), (2.4) and elementary block matrix operations (EBMOs) that

    r[Q[XX,ΣX][TX(TX),TΣT(TX)]T]=r[Q00In[XX,ΣX]0T0[TX(TX),TΣT(TX)]]r[XX,ΣX]r[TX(TX),TΣT(TX)]=r[0[AX,BΣX][A(TX),BΣT(TX)]In[XX,ΣX]0T0[TX(TX),TΣT(TX)]]r[X,Σ]r[TX,TΣ]=r[0[AX,BΣX][A(TX),BΣT(TX)]In000T[XX,ΣX][TX(TX),TΣT(TX)]]r[X,Σ]r[TX,TΣ]=r[[AX,BΣX][A(TX),BΣT(TX)]T[XX,ΣX][TX(TX),TΣT(TX)]]+nr[X,Σ]r[TX,TΣ]=r[TXXTΣTΣT0X000(TX)AXBΣBΣT]+nr(X)r(TX)r[X,Σ]r[TX,TΣ]=r[TXXTΣ00X000(TX)AXBΣ0]+nr(X)r(TX)r[X,Σ]r[TX,TΣ]=r[TXXTΣ0XAXBΣ]+nr(Γ)r[X,Σ], (4.5)

    and

    r[[XX,ΣX][TX(TX),TΣT(TX)]T]=r[In[XX,ΣX]0T0[TX(TX),TΣT(TX)]]r[XX,ΣX]r[TX(TX),TΣT(TX)]=r[In000T[XX,ΣX][TX(TX),TΣT(TX)]]r[X,Σ]r[TX,TΣ]=r(T[XX,ΣX],[TX(TX),TΣT(TX)])+nr[X,Σ]r[TX,TΣ]=r[TXXTΣTΣT0X000(TX)]+nr(X)r(TX)r[X,Σ]r[TX,TΣ]=r[TXXTΣ00X000(TX)]+nr(X)r(TX)r[X,Σ]r[TX,TΣ]=r[TXXTΣ0X]+nr(Γ)r[X,Σ]. (4.6)

    Substituting (4.5) and (4.6) into (4.4) yields

    min_{ˆL,ˆK} r(ˆL − ˆKT) = r[Λ; Γ] − r(Λ). (4.7)

    Setting the right-hand side of (4.7) equal to zero and applying Lemma 2.2(b) yields the equivalent condition in (a). Applying (2.8) to (4.3) yields

    min_{ˆL} r(ˆL − ˆKT) = min_{U2} r(Q + U2[XX′, ΣX^⊥]^⊥ − U1[TX(TX)′, TΣT′(TX)^⊥]^⊥T) = r[Q − U1[TX(TX)′, TΣT′(TX)^⊥]^⊥T; [XX′, ΣX^⊥]^⊥] − r([XX′, ΣX^⊥]^⊥) = r[Q − U1[TX(TX)′, TΣT′(TX)^⊥]^⊥T; [XX′, ΣX^⊥]^⊥] − n + r[X, Σ] (4.8)

    and by (2.9) and (4.5),

    maxU1r[QU1[TX(TX),TΣT(TX)]T[XX,ΣX]]=maxU1r([Q[XX,ΣX]][Is0]U1[TX(TX),TΣT(TX)]T)=min{r[Q[XX,ΣX][TX(TX), TΣT(TX)]T], r[QIs[XX,ΣX]0]}=min{r[Q[XX,ΣX][TX(TX), TΣT(TX)]T], s+nr[X,Σ]}=min{r[ΛΓ]+nr(X)r[X,Σ]r[TX,TΣ], s+nr[X,Σ]}=min{s,r[ΛΓ]r(Γ)}+nr[X,Σ]. (4.9)

    Combining (4.8) and (4.9) yields

    max_{ˆK} min_{ˆL} r(ˆL − ˆKT) = min{s, r[Λ; Γ] − r(Γ)}. (4.10)

    Setting the right-hand side of (4.10) equal to zero yields r[Λ; Γ] = r(Γ). Thus, the statement in (b) holds.

    By a similar approach, we can obtain

    max_{ˆL} min_{ˆK} r(ˆL − ˆKT) = min{s, r[Λ; Γ] + n − r(T) − r(X) − r[X, Σ]}, (4.11)

    as required for the statement in (c). Combining (b) and (c) yields (d).
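
    The rank tests in Theorem 4.1 are straightforward to evaluate numerically. The sketch below, assuming NumPy, builds Λ and Γ as in (4.2) and prints the quantities that the theorem compares; all model matrices are illustrative.

    import numpy as np

    rng = np.random.default_rng(11)
    r = np.linalg.matrix_rank
    n, p, s, m = 6, 3, 2, 4
    X = rng.standard_normal((n, p))
    Sigma = (lambda R: R @ R.T)(rng.standard_normal((n, n)))
    A = rng.standard_normal((s, p))
    B = rng.standard_normal((s, n))
    T = rng.standard_normal((m, n))

    Lam = np.block([[T @ X @ X.T, T @ Sigma],
                    [np.zeros((p, n)), X.T]])          # Λ in (4.2)
    Gam = np.hstack([A @ X.T, B @ Sigma])              # Γ in (4.2)
    stacked = np.vstack([Lam, Gam])                    # [Λ; Γ]

    print(r(stacked), r(Lam), r(Gam))                  # (a) holds iff r[Λ; Γ] = r(Λ); (b) holds iff r[Λ; Γ] = r(Γ)
    print(r(T) + r(X) + r(np.hstack([X, Sigma])) - n)  # (c) holds iff r[Λ; Γ] equals this value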

    Theorem 4.2. Let BLMBPN(τ) and BLMBPM(τ) be as given in (3.15) and (3.25), respectively, and let Λ and Γ be as given in (4.2). Then, the following six statements are equivalent:

    (a) {BLMBPM(τ)}{BLMBPN(τ)} holds definitely.

    (b) {BLMBPM(τ)}{BLMBPN(τ)} holds with probability 1.

    (c) {BLMBPM(τ)}{BLMBPN(τ)} holds with probability 1.

    (d) {BLMBPM(τ)}{BLMBPN(τ)} holds with probability 1.

    (e) {BLMBPM(τ)}={BLMBPN(τ)} holds with probability 1.

    (f) R(Γ)R(Λ).

    Proof. It can be seen from Lemma 2.6 and Definition 2.8(c) that (a) is equivalent to

    min_{ˆL,ˆK} r((ˆL − ˆKT)[XX′, ΣX^⊥]) = 0. (4.12)

    Substituting the coefficient matrices in (3.15) and (3.25) into (4.12) and simplifying, we obtain

    U1[TX(TX)′, TΣT′(TX)^⊥]^⊥T[XX′, ΣX^⊥] = J,

    where J = [AX′, BΣX^⊥] − [A(TX)′, BΣT′(TX)^⊥][TX(TX)′, TΣT′(TX)^⊥]†T[XX′, ΣX^⊥] and U1 ∈ R^{s×m} is arbitrary. From Lemma 2.5, the matrix equation is solvable for U1 if and only if

    r[J; [TX(TX)′, TΣT′(TX)^⊥]^⊥T[XX′, ΣX^⊥]] = r([TX(TX)′, TΣT′(TX)^⊥]^⊥T[XX′, ΣX^⊥]). (4.13)

    Applying (2.3) and (2.4), and simplifying, leads to

    r[J[TX(TX),TΣT(TX)]T[XX,ΣX]]=r[J0T[XX,ΣX][TX(TX),TΣT(TX)]]r[TX(TX),TΣT(TX)]=r[[AX,BΣX][A(TX),BΣT(TX)]T[XX,ΣX][TX(TX),TΣT(TX)]]r[TX,TΣ]=r[TXXTΣTΣT0X000(TX)AXBΣBΣT]r(X)r(TX)r[TX,TΣ]=r[TXXTΣ0XAXBΣ]r(X)r[TX,TΣ]=r[ΛΓ]r(Γ), (4.14)

    and

    r([TX(TX)′, TΣT′(TX)^⊥]^⊥T[XX′, ΣX^⊥]) = r([TX(TX)′, TΣT′(TX)^⊥], T[XX′, ΣX^⊥]) − r[TX, TΣ] = r[TX, TΣ] − r[TX, TΣ] = 0. (4.15)

    Substituting (4.14) and (4.15) into (4.13) leads to r[Λ; Γ] = r(Γ), thus establishing the equivalence of (a) and (e).

    It follows from Lemma 2.6 and Definition 2.8(d) that (b) is equivalent to

    max_{ˆK} min_{ˆL} r((ˆL − ˆKT)[XX′, ΣX^⊥]) = 0. (4.16)

    From (2.7), (3.15), (3.23), (3.25) and (4.14),

    max_{ˆK} min_{ˆL} r((ˆL − ˆKT)[XX′, ΣX^⊥]) = max_{ˆK} r(J − U1[TX(TX)′, TΣT′(TX)^⊥]^⊥T[XX′, ΣX^⊥]) = min{r[J; [TX(TX)′, TΣT′(TX)^⊥]^⊥T[XX′, ΣX^⊥]], s}. (4.17)

    Setting the right-hand side of (4.17) equal to zero yields r[Λ; Γ] = r(Γ), thus establishing the equivalence of (b) and (e).

    Similarly, we are able to obtain

    max_{ˆL} min_{ˆK} r((ˆL − ˆKT)[XX′, ΣX^⊥]) = r[Λ; Γ] − r(Γ). (4.18)

    Thus, (4.18) is equivalent to r[Λ; Γ] = r(Γ). Combining results (b) and (c) leads to the equivalence of (d) and (e).

    Combining Theorems 4.1 and 4.2, we obtain the following result:

    Corollary 4.3. Let BLMBPN(τ) and BLMBPM(τ) be as given in (3.15) and (3.25), respectively, and let Λ and Γ be as given in (4.2). Then, the following six statistical statements are equivalent:

    (a) {BLMBPM(τ)}{BLMBPN(τ)} holds definitely.

    (b) {BLMBPM(τ)}{BLMBPN(τ)} holds definitely.

    (c) {BLMBPM(τ)}{BLMBPN(τ)} holds with probability 1.

    (d) {BLMBPM(τ)}{BLMBPN(τ)} holds with probability 1.

    (e) {BLMBPM(τ)}{BLMBPN(τ)} holds with probability 1.

    (f) {BLMBPM(τ)}={BLMBPN(τ)} holds with probability 1.

    The results in the above theorems and corollaries can be simplified further for different choices of the matrices in (1.3), such as A = K and B = 0 for a given matrix K. Hence, many more specific conclusions on the relationships between a GLM and its TGLMs can be obtained. If τ in (1.3) is predictable under (1.1) and (1.5), then the BLMBP of τ is just its BLUP, and the main results in this paper reduce to the classic theory of BLUPs of τ under (1.1) and (1.5). Therefore, the present results are extensions of the classic BLUP theory.

    Assume that a concrete form of M in (1.1) is given by

    M:  [y1; y2] = [X1, 0; 0, X2][θ1; θ2] + [ν1; ν2],  E[ν1; ν2] = 0,  Cov[ν1; ν2] = σ²[In1, 0; 0, In2].

    In this case, taking two transformation matrices T1=[In1,0] and T2=[0,In2], we obtain the following two sub-sample models:

    M1:  y1 = X1θ1 + ν1,  E(ν1) = 0,  Cov(ν1) = σ²In1,   M2:  y2 = X2θ2 + ν2,  E(ν2) = 0,  Cov(ν2) = σ²In2,

    where it is assumed that yi ∈ R^{ni×1}, Xi ∈ R^{ni×pi}, θi ∈ R^{pi×1}, νi ∈ R^{ni×1}, n = n1 + n2 and p = p1 + p2. To illustrate the results in Section 4, let A = [X1, 0], B = 0 and A = [0, X2], B = 0 in (1.3), respectively. From Theorems 4.1 and 4.2, for the case T = T1 and A = [X1, 0], B = 0,

    r[Λ; Γ] = r[TXX′, TΣ; 0, X′; AX′, BΣ] = r[X1X1′, 0, σ²In1, 0; 0, 0, X1′, 0; 0, 0, 0, X2′; X1X1′, 0, 0, 0] = n1 + r(X),
    r(Λ) = r[TXX′, TΣ; 0, X′] = r[X1X1′, 0, σ²In1, 0; 0, 0, X1′, 0; 0, 0, 0, X2′] = n1 + r(X),
    r(T) + r(X) + r[X, Σ] − n = n1 + r(X) + n − n = n1 + r(X).

    Obviously, the equivalent conditions all hold in Theorems 4.1 and 4.2. Thus, we can easily describe the relations between the corresponding estimators.
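
    The rank computations of this example can be reproduced numerically. The sketch below, assuming NumPy, uses illustrative sizes for the block-diagonal model with Σ = σ²In, T = T1 = [In1, 0], A = [X1, 0] and B = 0; the three quantities compared in Theorems 4.1 and 4.2 all equal n1 + r(X).

    import numpy as np

    rng = np.random.default_rng(12)
    r = np.linalg.matrix_rank
    n1, n2, p1, p2, sigma2 = 3, 4, 2, 3, 1.5
    n, p = n1 + n2, p1 + p2
    X1 = rng.standard_normal((n1, p1))
    X2 = rng.standard_normal((n2, p2))
    X = np.block([[X1, np.zeros((n1, p2))], [np.zeros((n2, p1)), X2]])
    Sigma = sigma2 * np.eye(n)
    T = np.hstack([np.eye(n1), np.zeros((n1, n2))])      # T1 = [I_{n1}, 0]
    A = np.hstack([X1, np.zeros((n1, p2))])              # A = [X1, 0]
    B = np.zeros((n1, n))

    Lam = np.block([[T @ X @ X.T, T @ Sigma], [np.zeros((p, n)), X.T]])
    Gam = np.hstack([A @ X.T, B @ Sigma])
    stacked = np.vstack([Lam, Gam])

    print(r(stacked), r(Lam),
          r(T) + r(X) + r(np.hstack([X, Sigma])) - n,
          n1 + r(X))                                     # all four values coincide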

    We have provided an algebraic and statistical analysis of the biased prediction problem that arises when a joint parametric vector is unpredictable under a given GLM, and we have obtained an abundance of exact formulas and facts about the BLMBPs of the joint parametric vector in the contexts of a GLM and its TGLMs. All the findings in this article are formulated as analytical expressions or explicit assertions through the skillful use of specified matrix analysis tools and techniques. Hence, it is not difficult to understand these results and facts from both mathematical and statistical points of view. In view of this fact, we can take the results obtained in the preceding sections as a group of theoretical contributions to statistical inference under general linear model assumptions. Consequently, we are able to utilize the statistical methods developed in this article to provide additional insight into various concrete inference problems and subjects related to GLMs. Correspondingly, we point out that the main conclusions presented in this work have certain significant applications in the field of inverse scattering problems. The reader is referred to [8,31,32] on the topic of inverse scattering problems.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    We are grateful to anonymous reviewers for their helpful comments and suggestions. The second author was supported in part by the Shandong Provincial Natural Science Foundation #ZR2019MA065.

    The authors declare that they have no conflicts of interest.



    [1] J. K. Baksalary, R. Kala, Linear transformations preserving best linear unbiased estimators in a general Gauss-Markoff model, Ann. Stat., 9 (1981), 913–916.
    [2] B. Dong, W. Guo, Y. Tian, On relations between BLUEs under two transformed linear models, J. Multivariate Anal., 131 (2014), 279–292. https://doi.org/10.1016/j.jmva.2014.07.005 doi: 10.1016/j.jmva.2014.07.005
    [3] H. Drygas, Sufficiency and completeness in the general Gauss-Markov model, Sankhyā A, 45 (1983), 88–98.
    [4] N. Güler, On relations between BLUPs under two transformed linear random-effects models, Commun. Stat. Simul. Comput., 51 (2022), 5099–5125. https://doi.org/10.1080/03610918.2020.1757709 doi: 10.1080/03610918.2020.1757709
    [5] N. Güler, M. E. Büyükkaya, Notes on comparison of covariance matrices of BLUPs under linear random-effects model with its two subsample models, Iran. J. Sci. Tech. Trans. A, 43 (2019), 2993–3002. https://doi.org/10.1007/s40995-019-00785-3 doi: 10.1007/s40995-019-00785-3
    [6] N. Güler, M. E. Büyükkaya, Inertia and rank approach in transformed linear mixed models for comparison of BLUPs, Commun. Stat. Theor. Meth., 52 (2023), 3108–3123. https://doi.org/10.1080/03610926.2021.1967397 doi: 10.1080/03610926.2021.1967397
    [7] S. J. Haslett, J. Isotalo, Y. Liu, S. Puntanen, Equalities between OLSE, BLUE and BLUP in the linear model, Stat. Papers, 55 (2014), 543–561. https://doi.org/10.1007/s00362-013-0500-7 doi: 10.1007/s00362-013-0500-7
    [8] Y. He, H. Liu, X. Wang, A novel quantitative inverse scattering scheme using interior resonant modes, Inverse Probl., 39 (2023), 085002. https://doi.org/10.1088/1361-6420/acdc49 doi: 10.1088/1361-6420/acdc49
    [9] R. Kala, P. R. Pordzik, Estimation in singular partitioned, reduced or transformed linear models, Stat. Papers, 50 (2009), 633–638. https://doi.org/10.1007/s00362-007-0097-9 doi: 10.1007/s00362-007-0097-9
    [10] A. Markiewicz, S. Puntanen, All about the ⊥ with its applications in the linear statistical models, Open Math., 13 (2015), 33–50. https://doi.org/10.1515/math-2015-0005 doi: 10.1515/math-2015-0005
    [11] A. Markiewicz, S. Puntanen, Further properties of linear prediction sufficiency and the BLUPs in the linear model with new observations, Afrika Stat., 13 (2018), 1511–1530. https://doi.org/10.16929/as/1511.117 doi: 10.16929/as/1511.117
    [12] G. Marsaglia, G. P. H. Styan, Equalities and inequalities for ranks of matrices, Linear Multilinear Algebra, 2 (1974), 269–292. https://doi.org/10.1080/03081087408817070 doi: 10.1080/03081087408817070
    [13] S. K. Mitra, Generalized inverse of matrices and applications to linear models, In: Handbook of Statistics, 1 (1980), 471–512. https://doi.org/10.1016/S0169-7161(80)80045-9
    [14] C. H. Morrell, J. D. Pearson, L. J. Brant, Linear transformations of linear mixed-effects models, Amer. Stat., 51 (1997), 338–343. https://doi.org/10.1080/00031305.1997.10474409 doi: 10.1080/00031305.1997.10474409
    [15] R. Penrose, A generalized inverse for matrices, Proc. Cambridge Phil. Soc., 51 (1955), 406–413. https://doi.org/10.1017/S0305004100030401 doi: 10.1017/S0305004100030401
    [16] S. Puntanen, G. P. H. Styan, J. Isotalo, Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty, Berlin: Springer, 2011.
    [17] C. R. Rao, Linear Statistical Inference and Its Applications, New York: Wiley, 1973.
    [18] C. R. Rao, Choice of best linear estimators in the Gauss-Markoff model with a singular dispersion matrix, Commun. Stat. Theor. Meth., 7 (1978), 1199–1208.
    [19] C. R. Rao, S. K. Mitra, Generalized Inverse of Matrices and Its Applications, New York: Wiley, 1971.
    [20] J. Shao, J. Zhang, A transformation approach in linear mixed-effects models with informative missing responses, Biometrika, 102 (2015), 107–119. https://doi.org/10.1093/biomet/asu069 doi: 10.1093/biomet/asu069
    [21] Y. Tian, The maximal and minimal ranks of some expressions of generalized inverses of matrices, SEA Bull. Math., 25 (2002), 745–755. https://doi.org/10.1007/s100120200015 doi: 10.1007/s100120200015
    [22] Y. Tian, More on maximal and minimal ranks of Schur complements with applications, Appl. Math. Comput., 152 (2004), 675–692. https://doi.org/10.1016/S0096-3003(03)00585-X doi: 10.1016/S0096-3003(03)00585-X
    [23] Y. Tian, On properties of BLUEs under general linear regression models, J. Stat. Plann. Inference, 143 (2013), 771–782. https://doi.org/10.1016/j.jspi.2012.10.005 doi: 10.1016/j.jspi.2012.10.005
    [24] Y. Tian, A new derivation of BLUPs under random-effects model, Metrika, 78 (2015), 905–918. https://doi.org/10.1007/s00184-015-0533-0 doi: 10.1007/s00184-015-0533-0
    [25] Y. Tian, Transformation approaches of linear random-effects models, Stat. Meth. Appl., 26 (2017), 583–608. https://doi.org/10.1007/s10260-017-0381-3 doi: 10.1007/s10260-017-0381-3
    [26] Y. Tian, Matrix rank and inertia formulas in the analysis of general linear models, Open Math., 15 (2017), 126–150. https://doi.org/10.1515/math-2017-0013 doi: 10.1515/math-2017-0013
    [27] Y. Tian, S. Cheng, The maximal and minimal ranks of A-BXC with applications, New York J. Math., 9 (2003), 345–362.
    [28] Y. Tian, B. Jiang, A new analysis of the relationships between a general linear model and its mis-specified forms, J. Korean Stat. Soc., 46 (2017), 182–193. https://doi.org/10.1016/j.jkss.2016.08.004 doi: 10.1016/j.jkss.2016.08.004
    [29] Y. Tian, S. Puntanen, On the equivalence of estimations under a general linear model and its transformed models, Linear Algebra Appl., 430 (2009), 2622–2641. https://doi.org/10.1016/j.laa.2008.09.016 doi: 10.1016/j.laa.2008.09.016
    [30] C. Xie, Linear transformations preserving best linear minimum bias linear estimators in a Gauss-Markoff model, Appl. Math. J. Chin. Univer. Ser. A, 9 (1994), 429–434.
    [31] W. Yin, W. Yang, H. Liu, A neural network scheme for recovering scattering obstacles with limited phaseless far-field data, J. Comput. Phys., 417 (2020), 109594. https://doi.org/10.1016/j.jcp.2020.109594 doi: 10.1016/j.jcp.2020.109594
    [32] Y. Yin, W. Yin, P. Meng, H. Liu, The interior inverse scattering problem for a two-layered cavity using the Bayesian method, Inverse Probl. Imaging, 16 (2022), 673–690. http://dx.doi.org/10.3934/ipi.2021069 doi: 10.3934/ipi.2021069
    [33] B. Zhang, The BLUE and MINQUE in Gauss-Markoff model with linear transformation of the observable variables, Acta Math. Sci., 27 (2007), 203–210. https://doi.org/10.1016/S0252-9602(07)60018-6 doi: 10.1016/S0252-9602(07)60018-6
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)