This paper is concerned with the relationships between best linear minimum biased predictors (BLMBPs) in the context of a general linear model (GLM) and its transformed general linear models (TGLMs). We shall establish a mathematical procedure by means of some exact and analytical tools in matrix theory that were developed in recent years. The coverage includes constructing a general vector composed of all unknown parameters in the context of a GLM and its TGLMs, deriving the exact expressions of the BLMBPs through the technical use of analytical solutions of a constrained quadratic matrix-valued function optimization problem in the Löwner partial ordering, and discussing a variety of theoretical performances and properties of the BLMBPs. We also give a series of characterizations of relationships between BLMBPs under a given GLM and its TGLMs.
Citation: Li Gong, Bo Jiang. A matrix analysis of BLMBPs under a general linear model and its transformation[J]. AIMS Mathematics, 2024, 9(1): 1840-1860. doi: 10.3934/math.2024090
In this paper, we consider the standard linear model
M: y=Xθ+ν, | (1.1) |
where it is assumed that y∈Rn×1 is a vector of observable random variables, X∈Rn×p is a known matrix of arbitrary rank (0≤r(X)≤min{n,p}), θ∈Rp×1 is a vector of fixed but unknown parameters, and ν∈Rn×1 is a random error vector. In order to carry out reasonable estimation and statistical inference in the context of (1.1), we assume that the expectation vector and the covariance matrix of ν are given without loss of generality by
E(ν)=0, Cov(ν)=Σ. | (1.2) |
In the sequel, we assume that Σ∈Rn×n is a known positive semi-definite matrix of arbitrary rank in order to derive general and precise conclusions under the given model assumptions. Once the results under these general assumptions are established, we can, as is usual in parametric regression analysis, specialize Σ to certain forms with known or unknown entries and then derive various concrete inference results.
The assumptions in (1.1) and (1.2) are typical for a complete specification of the general linear model (for short, GLM). Observe that there are two unknown vectors θ and ν in (1.1). Accordingly, a fundamental task in statistical inference under (1.1) and (1.2) is to predict the following general vector, which involves the two unknown vectors simultaneously:
τ=Aθ+Bν, | (1.3) |
where it is assumed that A∈Rs×p and B∈Rs×n are known matrices of arbitrary ranks. This vector obviously includes all the unknown vectors in (1.1), such as θ and ν, as its special cases. It is easy to see that under (1.1) and (1.2), we have
E(τ)=Aθ, Cov(τ)=BΣB′, Cov(τ,y)=BΣ. | (1.4) |
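The moment identities in (1.4) follow directly from E(ν)=0 and Cov(ν)=Σ. The following minimal numerical sketch (all dimensions, matrix values and variable names are our own illustrative choices, not part of the paper) builds the quantities in (1.1)–(1.4) with NumPy, only to make the notation concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 6, 3, 2

# Illustrative model matrices (values are arbitrary)
X = rng.standard_normal((n, p))
A = rng.standard_normal((s, p))
B = rng.standard_normal((s, n))

# A positive semi-definite covariance matrix of arbitrary (possibly deficient) rank
H = rng.standard_normal((n, 4))
Sigma = H @ H.T                        # Cov(nu) = Sigma, rank at most 4 < n

# Moments of tau = A*theta + B*nu implied by (1.2) and (1.4):
#   E(tau) = A*theta,  Cov(tau) = B Sigma B',  Cov(tau, y) = B Sigma
Cov_tau   = B @ Sigma @ B.T
Cov_tau_y = B @ Sigma
print(Cov_tau.shape, Cov_tau_y.shape)  # (2, 2) (2, 6)
```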
In the investigation of linear statistical models for regression, it is a common inference problem to propose and characterize various reasonable connections between two different given models under the given model assumptions. One concrete problem of this kind is to investigate the relationships between a given linear model (called the original model) and certain types of its transformed models. Sometimes, the transformed models are required to meet certain necessary requirements in the statistical inference of the original linear model. Now let us consider M in (1.1) and its transformed models. In such a case, we may be faced with different transformed forms of the model according to linear transformations of the observable random vector y. Generally speaking, various possible transformed models of M in (1.1) are obtained by pre-multiplying the model by a given matrix T. For example,
N: Ty=TXθ+Tν, | (1.5) |
is a common transformed form of M in (1.1), where T∈Rm×n is a known transformation matrix of arbitrary rank. Below, we present a group of well-known cases of the transformed model for different choices of the transformation matrix T in (1.5).
(a) We first divide the original model M in (1.1) as
M: [y1y2]=[X1X2]θ+[ν1ν2] |
by the partitions of the vectors and matrices in the model. Then we take the transformation matrices T1=[In1,0] and T2=[0,In2] in (1.5) to obtain the following two sub-sample models:
M1: y1=X1θ+ν1,M2: y2=X2θ+ν2, |
where yi∈Rni×1, Xi∈Rni×p, θ∈Rp×1, νi∈Rni×1 and n=n1+n2. These sub-sample models can also be viewed as adding or deleting certain regression equations in a given GLM; alternatively, the two individual models can be regarded as arising from two periods of observation of the data.
(b) Assume that a concrete form of M in (1.1) is given by
M: [y1y2]=[X100X2][θ1θ2]+[ν1ν2]. |
In this case, taking the two transformation matrices T1=[In1,0] and T2=[0,In2], we obtain the following two sub-sample models:
M1: y1=X1θ1+ν1,M2: y2=X2θ2+ν2, |
where yi∈Rni×1,Xi∈Rni×pi,θi∈Rpi×1, νi∈Rni×1, n=n1+n2, p=p1+p2. The two models are known as seemingly unrelated linear models, which are linked to each other by the correlated error terms across the models, where all the given matrices and the unknown vectors in the two models are different.
Due to the linear nature of M in (1.1), we obtain the following expectations and covariance matrices of y, Ty and τ under the assumptions in (1.1) and (1.2):
E(y)=Xθ, E(Ty)=TXθ, | (1.6) |
Cov(y)=Σ, Cov(Ty)=TΣT′, | (1.7) |
Cov(τ,y)=BΣ, Cov(τ,Ty)=BΣT′. | (1.8) |
Now we mention some background of this study. For unknown parameters in a given regression model, statisticians are able to adopt different optimality criteria in order to obtain proper predictions and estimations of the unknown parameters. Among these, the best linear unbiased prediction, the best linear unbiased estimation and the least squares estimation are the best known because they have many excellent mathematical and statistical properties. There are many deep and fruitful works in the statistical literature related to these predictions and estimations. However, it is a common fact in statistical practice that the unknown parameters in a given model may not be predictable or estimable. Instead, it is necessary to choose certain biased predictions and biased estimations for the unknown parameters. For example, Rao described the bias between estimators and unknown parameter functions, constructed the class of minimum biased estimations, selected the one with minimum variance within this class and thereby defined the best linear minimum biased estimation. In particular, when the unknown parameter function is estimable, the best linear minimum biased estimation reduces to the classic best linear unbiased estimation. It can be seen from (1.1)–(1.8) that a given model and its transformed models are not necessarily equivalent in form. Hence, the predictors/estimators of unknown vectors that are to be derived under these models have different algebraic expressions and properties. Yet, some transformations of observable random vectors may preserve enough information for predicting/estimating unknown vectors in the original model. Therefore, it is natural to consider certain links between the predictors/estimators obtained from an original model and its transformed models in the statistical inference of these models. Traditionally, the problems of characterizing relationships between predictions/estimations of unknown vectors in an original model and its transformed models were known as linear sufficiency problems, which were first considered in [1,3]. Many scholars have also studied the relationship between estimations under a given original model and its transformed model from different aspects. For instance, Baksalary and Kala considered the problem of linear transformations of GLMs preserving the best linear unbiased estimations under the general Gauss-Markoff model in [1]; Xie studied in [30] the best linear minimum biased estimations under a given GLM and discussed the problem of linear transformations preserving the best linear minimum biased estimations. The subject was also extensively studied in [7,9,14,18,20,33], among others.
Given the model assumptions in (1.1)–(1.8), the purpose of this paper is to provide a unified theoretical and conceptual exploration for solving the best linear minimum biased prediction (for short, BLMBP) problems under a GLM and its transformed general linear models (for short, TGLMs) through the skillful and effective use of a series of exact and analytical matrix analysis tools. The remaining part of this paper is organized as follows. In the second section, we introduce notation and several matrix analysis tools and techniques that we shall utilize to characterize matrix equalities and matrix set inclusions that involve generalized inverses of matrices. In the third section, we introduce the definitions of the linear minimum biased predictor (for short, LMBP) and the BLMBP of τ in (1.3), as well as basic estimation and inference theory regarding the LMBP and BLMBP, including their analytical expressions and their mathematical and statistical properties and features in the contexts of (1.1)–(1.8). In the fourth section, we address the problems regarding the relationships between the BLMBPs under a GLM and its TGLMs using the powerful matrix rank and inertia methodology. The fifth section presents a special example related to the main findings in the preceding sections. Some conclusions and remarks are given in the last section.
We begin with the introduction of notation used in the sequel. Rm×n denotes the collection of all m×n matrices over the field of real numbers, and the symbols M′, r(M) and R(M) denote the transpose, the rank and the range (column space) of M∈Rm×n, and Im denotes the identity matrix of order m. The Moore–Penrose generalized inverse of M, denoted by M†, is defined to be the unique solution G satisfying the four matrix equations MGM=M, GMG=G, (MG)′=MG and (GM)′=GM. Let PM=MM†, M⊥=EM=Im−MM† and FM=In−M†M denote the three orthogonal projectors (symmetric idempotent matrices) induced from M, which will help in briefly denoting calculation processes related to generalized inverses of matrices, where both EM and FM satisfy EM=FM′ and FM=EM′ and the ranks of EM and FM are r(EM)=m−r(M) and r(FM)=n−r(M). Two symmetric matrices M and N of the same size are said to satisfy the inequalities M≽N, M≼N, M≻N and M≺N in the Löwner partial ordering if M−N is positive semi-definite, negative semi-definite, positive definite and negative definite, respectively. Further information about the orthogonal projectors PM, EM and FM and their various applications in the theory of linear statistical models can be found, e.g., in [10,13,16,19]. It is also well known that the Löwner partial ordering between two symmetric matrices is a surprisingly strong and useful property in matrix analysis. The reader is referred to [16] and the references therein for more results and facts regarding the issues of the Löwner partial ordering in statistical theory and applications. Recently, the authors of [2,4,5,6,23,25,26,29] proposed and approached a series of research problems concerning the relationships of different kinds of predictions of unknown parameters in regression models using the rank and inertia methodology in matrix analysis, and provided a variety of simple and reasonable equivalent facts related to the relationship problems. In this paper, we also adopt the rank and inertia methodology to approach the relationship problems regarding different estimations and predictions.
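As a quick numerical illustration of this notation (a sketch only, with an arbitrary test matrix M and variable names of our own choosing), the Moore–Penrose inverse and the projectors PM, EM and FM can be computed with numpy.linalg.pinv, and the four Penrose equations and the stated rank identities can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(1)
m_, n_ = 5, 4
M = rng.standard_normal((m_, 2)) @ rng.standard_normal((2, n_))  # rank-2 example

Mp  = np.linalg.pinv(M)              # Moore-Penrose inverse M^dagger
P_M = M @ Mp                         # orthogonal projector onto R(M)
E_M = np.eye(m_) - M @ Mp            # M^perp = E_M
F_M = np.eye(n_) - Mp @ M            # F_M

r = np.linalg.matrix_rank
# The four Penrose equations
assert np.allclose(M @ Mp @ M, M) and np.allclose(Mp @ M @ Mp, Mp)
assert np.allclose((M @ Mp).T, M @ Mp) and np.allclose((Mp @ M).T, Mp @ M)
# Rank identities r(E_M) = m - r(M), r(F_M) = n - r(M)
assert r(E_M) == m_ - r(M) and r(F_M) == n_ - r(M)
```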
As preliminaries that can help readers in getting familiar with the features and usefulness of the matrix rank methodology, we present in the following a list of commonly used results and facts about ranks of matrices and matrix equations, which are well known or easy to prove. We shall use them in the descriptions and simplifications of various complicated matrix expressions and matrix equalities that occur in the statistical inference of a GLM and its TGLMs in the following sections.
Lemma 2.1 ([28]). Let A and B be two sets composed of matrices of the same size. Then,
A∩B≠∅ ⇔ min_{A∈A, B∈B} r(A−B)=0, | (2.1) |
A⊆B ⇔ max_{A∈A} min_{B∈B} r(A−B)=0. | (2.2) |
Lemma 2.2 ([12]). Let A∈Rm×n, B∈Rm×k, and C∈Rl×n. Then,
$r[A,\ B]=r(A)+r(E_AB)=r(B)+r(E_BA)$, | (2.3) |
$r\begin{bmatrix}A\\ C\end{bmatrix}=r(A)+r(CF_A)=r(C)+r(AF_C)$, | (2.4) |
$r\begin{bmatrix}AA' & B\\ B' & 0\end{bmatrix}=r[A,\ B]+r(B)$. | (2.5) |
In particular, the following results hold:
(a) r[A,B]=r(A)⇔R(B)⊆R(A)⇔AA†B=B⇔EAB=0.
(b) $r\begin{bmatrix}A\\ C\end{bmatrix}=r(A)$ ⇔ R(C′)⊆R(A′) ⇔ CA†A=C ⇔ CFA=0.
Lemma 2.3 ([22]). Assume that five matrices A1, B1, A2, B2 and A3 of appropriate sizes satisfy the conditions R(A1′)⊆R(B1′), R(A2)⊆R(B1), R(A2′)⊆R(B2′) and R(A3)⊆R(B2). Then,
$r(A_1B_1^{\dagger}A_2B_2^{\dagger}A_3)=r\begin{bmatrix}0 & B_2 & A_3\\ B_1 & A_2 & 0\\ A_1 & 0 & 0\end{bmatrix}-r(B_1)-r(B_2)$. | (2.6) |
Lemma 2.4 ([21,27]). Let A∈Rm×n,B∈Rm×k and C∈Rl×n be given. Then, the maximum and minimum ranks of A−BZ and A−BZC with respect to a variable matrix Z of appropriate sizes are given by the following closed-form formulas:
maxZ∈Rk×nr(A−BZ)=min{r[A,B],n}, | (2.7) |
minZ∈Rk×nr(A−BZ)=r[A,B]−r(B), | (2.8) |
maxZ∈Rk×lr(A−BZC)=min{r[A,B],r[AC]}. | (2.9) |
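The next sketch (arbitrary test matrices, assumed only for illustration) checks (2.3), (2.4) and the minimum-rank formula (2.8) numerically; for (2.8), the choice Z = B†A attains the minimum, since A − BB†A = E_B A and r(E_B A) = r[A, B] − r(B) by (2.3).

```python
import numpy as np

rng = np.random.default_rng(2)
r = np.linalg.matrix_rank
m_, n_, k_, l_ = 6, 5, 3, 4
A = rng.standard_normal((m_, n_))
B = rng.standard_normal((m_, k_))
C = rng.standard_normal((l_, n_))

E_B = np.eye(m_) - B @ np.linalg.pinv(B)
F_A = np.eye(n_) - np.linalg.pinv(A) @ A

# (2.3) and (2.4)
assert r(np.hstack([A, B])) == r(B) + r(E_B @ A)
assert r(np.vstack([A, C])) == r(A) + r(C @ F_A)

# (2.8): min_Z r(A - B Z) = r[A, B] - r(B); Z = B^dagger A attains it
Z = np.linalg.pinv(B) @ A
assert r(A - B @ Z) == r(np.hstack([A, B])) - r(B)
```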
Below we offer some existing formulas and results regarding general solutions of a basic linear matrix equation and a constrained quadratic matrix optimization problem.
Lemma 2.5 ([15]). Let A∈Rm×n and B∈Rp×n. Then, the linear matrix equation ZA=B is solvable for Z∈Rp×m if and only if R(A′)⊇R(B′), or equivalently, BA†A=B. In this case, the general solution of the equation can be written in the parametric form
Z=BA†+UA⊥, |
where U∈Rp×m is an arbitrary matrix.
Lemma 2.6 ([28]). Let A∈Rm×n, B∈Rm×k and assume that R(A)=R(B). Then
XA=0⇔XB=0. |
Lemma 2.7 ([24]). Let
f(Z)=(ZC+D)M(ZC+D)′ s.t. ZA=B, | (2.10) |
where it is assumed that A∈Rp×q, B∈Rn×q, C∈Rp×m and D∈Rn×m are given, M∈Rm×m is positive semi-definite and the matrix equation ZA=B is solvable for Z∈Rn×p. Then, there always exists a solution Z0 of ZA=B such that
f(Z)≽f(Z0) |
holds for all solutions of ZA=B, and the matrix Z0 that satisfies the above inequality is determined by the following consistent matrix equation:
Z0[A,CMC′A⊥]=[B,−DMC′A⊥]. |
In this case, the general expression of Z0 and the corresponding f(Z0) and f(Z) are given by
Z0=argmin_{ZA=B} f(Z)=[B, −DMC′A⊥][A, CMC′A⊥]†+U[A, CMC′]⊥,
f(Z0)=min_{ZA=B} f(Z)=GMG′−GMC′TCMG′,
f(Z)=f(Z0)+(ZC+D)MC′TCM(ZC+D)′=f(Z0)+(ZCMC′A⊥+DMC′A⊥)T(ZCMC′A⊥+DMC′A⊥)′,
where G=BA†C+D, T=(A⊥CMC′A⊥)† and U∈Rn×p is arbitrary.
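A numerical sketch of Lemma 2.7 follows (illustrative matrices only; B is generated so that ZA = B is solvable, and U = 0 is taken in the solution formula). It builds the particular minimizer Z0 and checks the Löwner inequality f(Z) ⪰ f(Z0) for randomly generated solutions Z of ZA = B.

```python
import numpy as np

rng = np.random.default_rng(3)
n_, p_, q_, m_ = 3, 5, 2, 4

A = rng.standard_normal((p_, q_))
C = rng.standard_normal((p_, m_))
D = rng.standard_normal((n_, m_))
Mpsd = (lambda H: H @ H.T)(rng.standard_normal((m_, m_)))   # positive semi-definite M
B = rng.standard_normal((n_, p_)) @ A                        # makes ZA = B solvable

pinv = np.linalg.pinv
A_perp = np.eye(p_) - A @ pinv(A)                            # A^perp

f = lambda Z: (Z @ C + D) @ Mpsd @ (Z @ C + D).T

# Particular minimizer from Lemma 2.7 (with U = 0)
W  = np.hstack([A, C @ Mpsd @ C.T @ A_perp])
Z0 = np.hstack([B, -D @ Mpsd @ C.T @ A_perp]) @ pinv(W)
assert np.allclose(Z0 @ A, B)

# f(Z) - f(Z0) should be positive semi-definite for every solution Z of ZA = B
for _ in range(5):
    Z = B @ pinv(A) + rng.standard_normal((n_, p_)) @ A_perp
    assert np.allclose(Z @ A, B)
    assert np.linalg.eigvalsh(f(Z) - f(Z0)).min() > -1e-8
```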
In order to describe the relationships between BLMBPs under different regression models, we need to adopt the following definition to characterize possible equality between two random vectors [28].
Definition 2.8. Let y be as given in (1.1), let {L1} and {L2} be two matrix sets and let L1y and L2y be any two linear predictors of τ in (1.3).
(a) {L1y}∩{L2y}≠∅ holds definitely, i.e., {L1}∩{L2}≠∅, if and only if
minL1∈{L1},L2∈{L2}r(L1−L2)=0. |
(b) The vector set inclusion {L1y}⊆{L2y} holds definitely, i.e., {L1}⊆{L2}, if and only if
maxL1∈{L1}minL2∈{L2}r(L1−L2)=0. |
(c) {L1y}∩{L2y}≠∅ holds with probability 1 if and only if
min_{L1∈{L1}, L2∈{L2}} r((L1−L2)[X, Σ])=0 ⇔ min_{L1∈{L1}, L2∈{L2}} r((L1−L2)[X, ΣX⊥])=0 ⇔ min_{L1∈{L1}, L2∈{L2}} r((L1−L2)[XX′, ΣX⊥])=0.
(d) The vector set inclusion {L1y}⊆{L2y} holds with probability 1 if and only if
max_{L1∈{L1}} min_{L2∈{L2}} r((L1−L2)[X, Σ])=0 ⇔ max_{L1∈{L1}} min_{L2∈{L2}} r((L1−L2)[X, ΣX⊥])=0 ⇔ max_{L1∈{L1}} min_{L2∈{L2}} r((L1−L2)[XX′, ΣX⊥])=0.
Recall in parametric regression analysis that if there exists a matrix L such that E(Ly−τ)=(LX−A)θ=0 holds for all θ, then the parametric vector τ in (1.3) is said to be predictable under the assumptions in (1.1) and (1.2). Otherwise, there does not exist an unbiased prediction of τ under (1.1) and (1.2), and therefore, we have to seek certain biased predictions of τ according to various specified optimization criteria. In this section, we shall adopt the following known definitions of the LMBP and BLMBP of τ (cf. [17, p.337]).
Definition 3.1. Let the parametric vector τ be as given in (1.3).
(a) The LMBP of τ in (1.3) under (1.1) is defined to be
LMBPM(τ)=LMBPM(Aθ+Bν)=ˆLy, | (3.1) |
where the matrix ˆL satisfies
ˆL=argminL∈Rs×ntr((LX−A)(LX−A)′). | (3.2) |
(b) The LMBP of τ in (1.3) under (1.5) is defined to be
LMBPN(τ)=LMBPN(Aθ+Bν)=ˆKTy, | (3.3) |
where the matrix ˆK satisfies
ˆK=argminK∈Rs×mtr((KTX−A)(KTX−A)′). | (3.4) |
Theorem 3.2. Under the notations in Definition 3.1, the following results hold:
ˆL=argminL∈Rs×ntr((LX−A)(LX−A)′)⇔ˆLXX′=AX′, | (3.5) |
ˆK=argminK∈Rs×mtr((KTX−A)(KTX−A)′)⇔ˆKTX(TX)′=A(TX)′. | (3.6) |
Proof. Note first that
tr((KTX−A)(KTX−A)′)
=tr((KTX−A(TX)†TX+A(TX)†TX−A)(KTX−A(TX)†TX+A(TX)†TX−A)′)
=tr((KTX−A(TX)†TX−AFTX)(KTX−A(TX)†TX−AFTX)′)
=tr(AFTXA′)+tr((KTX−A(TX)†TX)(KTX−A(TX)†TX)′)−tr((K−A(TX)†)TXFTXA′)−tr(AFTX(TX)′(K−A(TX)†)′)
=tr(AFTXA′)+tr((KTX−A(TX)†TX)(KTX−A(TX)†TX)′), | (3.7) |
where TXFTX=0. Note that tr((KTX−A(TX)†TX)(KTX−A(TX)†TX)′)≥0 for all K∈Rs×m and the matrix equation KTX=A(TX)†TX is solvable for K∈Rs×m. In this case, we obtain
minK∈Rs×mtr((KTX−A)(KTX−A)′)=tr(AFTXA′), |
and
ˆK=argminK∈Rs×mtr((KTX−A)(KTX−A)′)⇔ˆKTX−A(TX)†TX=0⇔ˆKTX(TX)′=A(TX)′, |
thus establishing (3.6). Letting T=In leads to (3.5).
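A small numerical check of (3.6) (a sketch only, with arbitrary test matrices of our own choosing): ˆK = A(TX)† is one particular minimizer of the bias criterion, and it satisfies the normal equation ˆK TX(TX)′ = A(TX)′.

```python
import numpy as np

rng = np.random.default_rng(4)
n_, p_, s_, m_ = 6, 3, 2, 4

X = rng.standard_normal((n_, p_))
T = rng.standard_normal((m_, n_))
A = rng.standard_normal((s_, p_))

TX = T @ X
K_hat = A @ np.linalg.pinv(TX)              # one particular LMBP coefficient

# Normal equation (3.6): K_hat (TX)(TX)' = A (TX)'
assert np.allclose(K_hat @ TX @ TX.T, A @ TX.T)

# Any other K cannot decrease the bias criterion tr((K TX - A)(K TX - A)')
crit = lambda K: np.trace((K @ TX - A) @ (K @ TX - A).T)
for _ in range(5):
    K = rng.standard_normal((s_, m_))
    assert crit(K) >= crit(K_hat) - 1e-10
```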
Definition 3.3. Let the parametric vector τ be as given in (1.3).
(a) If ˆL satisfies
Cov(ˆLy−τ)=min s.t. ˆL=argminL∈Rs×ntr((LX−A)(LX−A)′) | (3.8) |
holds in the Löwner partial ordering, then the linear statistic ˆLy is defined to be the BLMBP of τ in (1.3) under (1.1), and is denoted by
ˆLy=BLMBPM(τ)=BLMBPM(Aθ+Bν). | (3.9) |
(b) If ˆK satisfies
Cov(ˆKTy−τ)=min s.t. ˆK=argminK∈Rs×mtr((KTX−A)(KTX−A)′) | (3.10) |
holds in the Löwner partial ordering, then the linear statistic ˆKTy is defined to be the BLMBP of τ in (1.3) under (1.5), and is denoted by
ˆKTy=BLMBPN(τ)=BLMBPN(Aθ+Bν). | (3.11) |
If B=0 or A=0 in (1.3), then ˆKTy in (3.11) is defined to be the best linear minimum biased estimator (BLMBE) of Aθ or the BLMBP of Bν in (1.3) under (1.5), respectively, and is denoted by
ˆKTy=BLMBEN(Aθ) and ˆKTy=BLMBPN(Bν). |
It is easy to verify that the difference KTy−τ under (1.5) can be written in the following form:
KTy−τ=KTXθ+KTν−Aθ−Bν=(KTX−A)θ+(KT−B)ν. |
Hence, the covariance matrix of KTy−τ can be written as
Cov(KTy−τ)=(KT−B)Σ(KT−B)′≜f(K). | (3.12) |
Our main results on the BLMBPs of τ in (1.3) are given below.
Theorem 3.4. Let the parametric vector τ be as given in (1.3) and define W=[TX(TX)′,Cov(Ty)(TX)⊥] and D=Cov(τ,Ty). Then
Cov(ˆKTy−τ)=min s.t. ˆKTX(TX)′=A(TX)′⇔ˆKW=[A(TX)′,D(TX)⊥]. | (3.13) |
The matrix equation in (3.13) is solvable for ˆK, i.e.,
[A(TX)′,D(TX)⊥]W†W=[A(TX)′,D(TX)⊥] | (3.14) |
holds under (3.6), while the general expression of ˆK and the corresponding BLMBPN(τ) can be written in the following form
BLMBPN(τ)=ˆKTy=([A(TX)′,D(TX)⊥]W†+U1W⊥)Ty, | (3.15) |
where U1∈Rs×m is arbitrary. Furthermore, the following results hold.
(a) r[TX,TΣT′(TX)⊥]=r[TX,(TX)⊥TΣT′]=r[TX,TΣ] and
R[TX,TΣT′(TX)⊥]=R[TX,(TX)⊥TΣT′]=R[TX,TΣ].
(b) ˆKT in (3.15) is unique if and only if R(T)⊆R[TX,TΣ].
(c) BLMBPN(τ) is unique if and only if Ty∈R[TX,TΣ] holds with probability 1.
(d) The covariance matrix of BLMBPN(τ) is given by
Cov(BLMBPN(τ))=ˆKTΣT′ˆK′=([A(TX)′,D(TX)⊥]W†)TΣT′([A(TX)′,D(TX)⊥]W†)′; | (3.16) |
the covariance matrix between BLMBPN(τ) and τ is given by
Cov(BLMBPN(τ),τ)=[A(TX)′,D(TX)⊥][TX(TX)′,TΣT′(TX)⊥]†D′; | (3.17) |
the difference of Cov(τ) and Cov(BLMBPN(τ)) is given by
Cov(τ)−Cov(BLMBPN(τ))=BΣB′−([A(TX)′,D(TX)⊥]W†)TΣT′([A(TX)′,D(TX)⊥]W†)′; | (3.18) |
the covariance matrix of τ−BLMBPN(τ) is given by
Cov(τ−BLMBPN(τ))=([A(TX)′,D(TX)⊥]W†T−B)Σ([A(TX)′,D(TX)⊥]W†T−B)′. | (3.19) |
(e) If B=0 or A=0 in (1.3), then
BLMBEN(Aθ)=([A(TX)′,0]W†+U1W⊥)Ty, | (3.20) |
BLMBPN(Bν)=([0,D(TX)⊥]W†+U1W⊥)Ty. | (3.21) |
Proof. Eq (3.13) is obviously equivalent to
f(K)=(KT−B)Σ(KT−B)′=min s.t. KTX(TX)′=A(TX)′. | (3.22) |
Since Σ≽0, the optimization problem in (3.22) is a special case of (2.10). By Lemma 2.7, the solution of (3.22) is determined by the matrix equation in (3.13). This equation is consistent under (3.6), and the general solution of the equation and the corresponding BLMBP are given in (3.15). Result (a) is well known; see [11,16]. Results (b) and (c) follow from the conditions [TX,TΣT′(TX)⊥]⊥T=0 and that [TX,TΣT′]⊥Ty=0 holds with probability 1, respectively.
Taking the covariance operation of (3.15) yields (3.16). Also from (1.8) and (3.15), the covariance matrix between BLMBPN(τ) and τ is
Cov(BLMBPN(τ),τ)=Cov(ˆKTy,τ)=[A(TX)′,D(TX)⊥][TX(TX)′,TΣT′(TX)⊥]†TΣB′=[A(TX)′,D(TX)⊥][TX(TX)′,TΣT′(TX)⊥]†D′, |
thus establishing (3.17). Combination of (1.4) and (3.16) yields (3.18). Substitution of (3.15) into (3.12) and then simplification yields (3.19).
Some conclusions for a special case of Theorem 3.4 are presented below without proof.
Corollary 3.5. Let the parametric vector τ be as given in (1.3), and define V=[XX′,Cov(y)X⊥] and C=Cov(τ,y). Then
Cov(ˆLy−τ)=min s.t. ˆLXX′=AX′⇔ˆLV=[AX′,CX⊥]. | (3.23) |
The matrix equation in (3.23) is solvable for ˆL, i.e.,
[AX′,CX⊥]V†V=[AX′,CX⊥] | (3.24) |
holds under (3.5), while the general expression of ˆL and the corresponding BLMBPM(τ) can be written in the following form
BLMBPM(τ)=ˆLy=([AX′,CX⊥]V†+U2V⊥)y, | (3.25) |
where U2∈Rs×n is arbitrary. Furthermore, the following results hold.
(a) r[X,ΣX⊥]=r[X,X⊥Σ]=r[X,Σ] and R[X,ΣX⊥]=R[X,X⊥Σ]=R[X,Σ].
(b) ˆL in (3.25) is unique if and only if r[X,Σ]=n.
(c) BLMBPM(τ) is unique if and only if y∈R[X,Σ] holds with probability 1.
(d) The covariance matrix of BLMBPM(τ) is given by
Cov(BLMBPM(τ))=ˆLΣˆL′=([AX′,CX⊥]V†)Σ([AX′,CX⊥]V†)′; | (3.26) |
the covariance matrix between BLMBPM(τ) and τ is given by
Cov(BLMBPM(τ),τ)=[AX′,CX⊥][XX′,ΣX⊥]†C′; | (3.27) |
the difference of Cov(τ) and Cov(BLMBPM(τ)) is given by
Cov(τ)−Cov(BLMBPM(τ))=BΣB′−([AX′,CX⊥]V†)Σ([AX′,CX⊥]V†)′; | (3.28) |
the covariance matrix of τ−BLMBPM(τ) is given by
Cov(τ−BLMBPM(τ))=([AX′,CX⊥]V†−B)Σ([AX′,CX⊥]V†−B)′. | (3.29) |
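A numerical sketch of Corollary 3.5 follows (small illustrative model, U2 = 0, all values arbitrary): the particular coefficient ˆL = [AX′, CX⊥]V† is computed with the Moore–Penrose inverse, and the consistency condition (3.24) and the normal equation (3.5) are checked.

```python
import numpy as np

rng = np.random.default_rng(5)
pinv = np.linalg.pinv
n_, p_, s_ = 6, 3, 2

X = rng.standard_normal((n_, p_))
A = rng.standard_normal((s_, p_))
B = rng.standard_normal((s_, n_))
H = rng.standard_normal((n_, 4))
Sigma = H @ H.T                                   # possibly singular Cov(y)

X_perp = np.eye(n_) - X @ pinv(X)
V = np.hstack([X @ X.T, Sigma @ X_perp])          # V = [XX', Sigma X^perp]
C = B @ Sigma                                     # C = Cov(tau, y)

RHS   = np.hstack([A @ X.T, C @ X_perp])
L_hat = RHS @ pinv(V)                             # particular BLMBP coefficient (U2 = 0)

# Consistency of L V = [AX', C X^perp] as in (3.24), and the normal equation (3.5)
assert np.allclose(L_hat @ V, RHS)
assert np.allclose(L_hat @ X @ X.T, A @ X.T)

# Covariance matrix of BLMBP_M(tau) as in (3.26)
Cov_blmbp = L_hat @ Sigma @ L_hat.T
```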
Corollary 3.6. Let the parametric vector τ be as given in (1.3). Then, the following results hold:
(a) The BLMBP of τ can be decomposed as the sum
BLMBPN(τ)=BLMBEN(Aθ)+BLMBPN(Bν), | (3.30) |
and they satisfy
Cov(BLMBEN(Aθ),BLMBPN(Bν))=0, | (3.31) |
Cov(BLMBPN(τ))=Cov(BLMBEN(Aθ))+Cov(BLMBPN(Bν)). | (3.32) |
(b) For any matrix P∈Rt×s, the following equality holds:
BLMBPN(Pτ)=PBLMBPN(τ). | (3.33) |
(c) The BLMBP of τ can be decomposed as the sum
BLMBPM(τ)=BLMBEM(Aθ)+BLMBPM(Bν), | (3.34) |
and they satisfy
Cov(BLMBEM(Aθ),BLMBPM(Bν))=0, | (3.35) |
Cov(BLMBPM(τ))=Cov(BLMBEM(Aθ))+Cov(BLMBPM(Bν)). | (3.36) |
(d) For any matrix P∈Rt×s, the following equality holds:
BLMBPM(Pτ)=PBLMBPM(τ). | (3.37) |
Proof. Notice that the arbitrary matrix U1 in (3.15) can be rewritten as U1=V1+V2, while the matrix [A(TX)′,D(TX)⊥] in (3.15) can be rewritten as
[A(TX)′,D(TX)⊥]=[A(TX)′,0]+[0,D(TX)⊥]. |
Correspondingly, BLMBPN(τ) in (3.15) can be rewritten as the sum:
BLMBPN(τ)=([A(TX)′,D(TX)⊥]W†+U1W⊥)Ty=([A(TX)′,0]W†+V1W⊥)Ty +([0,D(TX)⊥]W†+V2W⊥)Ty=BLMBEN(Aθ)+BLMBPN(Bν), |
thus establishing (3.30). From (3.20) and (3.21), the covariance matrix between BLMBEN(Aθ) and BLMBPN(Bν) is given by
Cov(BLMBEN(Aθ),BLMBPN(Bν))=[A(TX)′,0]W†TΣT′([0,BΣT′(TX)⊥]W†)′. |
Applying (2.6) to the right-hand side of the above equality and then simplifying by Theorem 3.4(a), (2.3), and (2.5), we obtain
r(Cov(BLMBEN(Aθ),BLMBPN(Bν)))=r([A(TX)′,0]W†TΣT′([0,BΣT′(TX)⊥]W†)′)=r[0[TX(TX)′(TX)⊥TΣT′][0(TX)⊥TΣB′][TX(TX)′,TΣT′(TX)⊥]TΣT′0[A(TX)′,0]00]−2r[TX,TΣT′(TX)⊥]=r[[000−(TX)⊥TΣT′(TX)⊥][TX(TX)′0][0(TX)⊥TΣB′][TX(TX)′,0]TΣT′0[A(TX)′,0]00]−2r[TX,TΣ]=r[0TX(TX)′TX(TX)′TΣT′A(TX)′0]+r[(TX)⊥TΣT′(TX)⊥,(TX)⊥TΣB′]−2r[TX,TΣ]=r[TX(TX)′A(TX)′]+r[TX(TX)′TΣT′]+r[TX,TΣT′(TX)⊥,TΣB′]−r(TX)−2r[TX,TΣ]=r[TX,TΣ,TΣB′]−r[TX,TΣ]=r[TX,TΣ]−r[TX,TΣ]=0, |
which implies that Cov(BLMBEN(Aθ),BLMBPN(Bν)) is a zero matrix, thus establishing (3.31). Equation (3.32) follows from (3.30) and (3.31). Result (b) follows directly from (3.15). Results (c) and (d) are special cases of (a) and (b).
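The decomposition (3.34)–(3.36), i.e., the T = I_n case of Corollary 3.6, can also be checked numerically. The sketch below (illustrative matrices, U = 0 throughout, names of our own choosing) verifies that the two coefficient matrices add up to ˆL and that the two components are uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(6)
pinv = np.linalg.pinv
n_, p_, s_ = 6, 3, 2

X = rng.standard_normal((n_, p_))
A = rng.standard_normal((s_, p_))
B = rng.standard_normal((s_, n_))
Sigma = (lambda H: H @ H.T)(rng.standard_normal((n_, 4)))

X_perp = np.eye(n_) - X @ pinv(X)
V = np.hstack([X @ X.T, Sigma @ X_perp])

L_A = np.hstack([A @ X.T, np.zeros((s_, n_))]) @ pinv(V)          # BLMBE_M(A*theta) coefficient
L_B = np.hstack([np.zeros((s_, n_)), B @ Sigma @ X_perp]) @ pinv(V)  # BLMBP_M(B*nu) coefficient

# (3.34): the two pieces add up to the BLMBP coefficient (with U = 0)
L_hat = np.hstack([A @ X.T, B @ Sigma @ X_perp]) @ pinv(V)
assert np.allclose(L_A + L_B, L_hat)

# (3.35): the two components are uncorrelated
cross = L_A @ Sigma @ L_B.T
assert np.allclose(cross, 0, atol=1e-8)
```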
One of the main tasks in the statistical inference of parametric regression models is to characterize connections between different predictions/estimations of unknown parameters. In this section, we study the relationships between the BLMBPs under a GLM and its TGLMs. Because the coefficient matrices ˆKT and ˆL in (3.15) and (3.25) are not necessarily unique, we use
{ˆL}, {ˆKT}, {BLMBPM(τ)}={ˆLy}, {BLMBPN(τ)}={ˆKTy} | (4.1) |
to denote the collections of all the coefficient matrices and the corresponding BLMBPs. In order to characterize the relations between the collections of the coefficient matrices in (4.1), it is necessary to discuss the following four cases:
(a) {ˆL}∩{ˆKT}≠∅, so that {BLMBPM(τ)}∩{BLMBPN(τ)}≠∅ holds definitely;
(b) {ˆL}⊇{ˆKT}, so that {BLMBPM(τ)}⊇{BLMBPN(τ)} holds definitely;
(c) {ˆL}⊆{ˆKT}, so that {BLMBPM(τ)}⊆{BLMBPN(τ)} holds definitely;
(d) {ˆL}={ˆKT}, so that {BLMBPM(τ)}={BLMBPN(τ)} holds definitely.
In order to characterize the relations between the collections of the random vectors in (4.1), it is necessary to discuss the following four cases:
(a) {BLMBPM(τ)}∩{BLMBPN(τ)}≠∅ holds with probability 1;
(b) {BLMBPM(τ)}⊇{BLMBPN(τ)} holds with probability 1;
(c) {BLMBPM(τ)}⊆{BLMBPN(τ)} holds with probability 1;
(d) {BLMBPM(τ)}={BLMBPN(τ)} holds with probability 1.
Our main results are given below.
Theorem 4.1. Let BLMBPN(τ) and BLMBPM(τ) be as given in (3.15) and (3.25), respectively, and define
$\Lambda=\begin{bmatrix}TXX' & T\Sigma\\ 0 & X'\end{bmatrix}$, $\Gamma=[AX',\ B\Sigma]$. | (4.2) |
Then, the following results hold.
(a) There exist ˆL and ˆK such that ˆL=ˆKT if and only if R(Γ′)⊆R(Λ′). In this case, {BLMBPM(τ)}∩{BLMBPN(τ)}≠∅ holds definitely.
(b) {ˆL}⊇{ˆKT} if and only if R(Γ′)⊆R(Λ′). In this case, {BLMBPM(τ)}⊇{BLMBPN(τ)} holds definitely.
(c) {ˆL}⊆{ˆKT} if and only if r[ΛΓ]=r(T)+r(X)+r[X,Σ]−n. In this case, {BLMBPM(τ)}⊆{BLMBPN(τ)} holds definitely.
(d) {ˆL}={ˆKT} if and only if R(Γ′)⊆R(Λ′) and r[TX,TΣ]=r[X,Σ]+r(T)−n. In this case, {BLMBPM(τ)}={BLMBPN(τ)} holds definitely.
Proof. From (3.15) and (3.25), the difference ˆL−ˆKT can be written as
ˆL−ˆKT=Q+U2[XX′,ΣX⊥]⊥−U1[TX(TX)′,TΣT′(TX)⊥]⊥T, | (4.3) |
where Q=[AX′,BΣX⊥][XX′,ΣX⊥]†−[A(TX)′,BΣT′(TX)⊥][TX(TX)′,TΣT′(TX)⊥]†T and U1∈Rs×m and U2∈Rs×n are arbitrary. Applying (2.8) to (4.3) gives
minˆL,ˆKr(ˆL−ˆKT)=minU1,U2r(Q+U2[XX′,ΣX⊥]⊥−U1[TX(TX)′,TΣT′(TX)⊥]⊥T)=r[Q[XX′,ΣX⊥]⊥[TX(TX)′,TΣT′(TX)⊥]⊥T]−r[[XX′,ΣX⊥]⊥[TX(TX)′,TΣT′(TX)⊥]⊥T]. | (4.4) |
It is easy to obtain by (2.3), (2.4) and elementary block matrix operations (EBMOs) that
r[Q[XX′,ΣX⊥]⊥[TX(TX)′,TΣT′(TX)⊥]⊥T]=r[Q00In[XX′,ΣX⊥]0T0[TX(TX)′,TΣT′(TX)⊥]]−r[XX′,ΣX⊥]−r[TX(TX)′,TΣT′(TX)⊥]=r[0−[AX′,BΣX⊥][A(TX)′,BΣT′(TX)⊥]In[XX′,ΣX⊥]0T0[TX(TX)′,TΣT′(TX)⊥]]−r[X,Σ]−r[TX,TΣ]=r[0−[AX′,BΣX⊥][A(TX)′,BΣT′(TX)⊥]In000−T[XX′,ΣX⊥][TX(TX)′,TΣT′(TX)⊥]]−r[X,Σ]−r[TX,TΣ]=r[[AX′,BΣX⊥][A(TX)′,BΣT′(TX)⊥]T[XX′,ΣX⊥][TX(TX)′,TΣT′(TX)⊥]]+n−r[X,Σ]−r[TX,TΣ]=r[TXX′TΣTΣT′0X′000(TX)′AX′BΣBΣT′]+n−r(X)−r(TX)−r[X,Σ]−r[TX,TΣ]=r[TXX′TΣ00X′000(TX)′AX′BΣ0]+n−r(X)−r(TX)−r[X,Σ]−r[TX,TΣ]=r[TXX′TΣ0X′AX′BΣ]+n−r(Γ)−r[X,Σ], | (4.5) |
and
r[[XX′,ΣX⊥]⊥[TX(TX)′,TΣT′(TX)⊥]⊥T]=r[In[XX′,ΣX⊥]0T0[TX(TX)′,TΣT′(TX)⊥]]−r[XX′,ΣX⊥]−r[TX(TX)′,TΣT′(TX)⊥]=r[In000−T[XX′,ΣX⊥][TX(TX)′,TΣT′(TX)⊥]]−r[X,Σ]−r[TX,TΣ]=r(T[XX′,ΣX⊥],[TX(TX)′,TΣT′(TX)⊥])+n−r[X,Σ]−r[TX,TΣ]=r[TXX′TΣTΣT′0X′000(TX)′]+n−r(X)−r(TX)−r[X,Σ]−r[TX,TΣ]=r[TXX′TΣ00X′000(TX)′]+n−r(X)−r(TX)−r[X,Σ]−r[TX,TΣ]=r[TXX′TΣ0X′]+n−r(Γ)−r[X,Σ]. | (4.6) |
Substituting (4.5) and (4.6) into (4.4) yields
minˆL,ˆKr(ˆL−ˆKT)=r[ΛΓ]−r(Λ). | (4.7) |
Setting the right-hand side of (4.7) equal to zero and applying Lemma 2.2(b) yields the equivalent condition in (a). Applying (2.8) to (4.3) yields
minˆLr(ˆL−ˆKT)=minU2r(Q+U2[XX′,ΣX⊥]⊥−U1[TX(TX)′,TΣT′(TX)⊥]⊥T)=r[Q−U1[TX(TX)′,TΣT′(TX)⊥]⊥T[XX′,ΣX⊥]⊥]−r([XX′,ΣX⊥]⊥)=r[Q−U1[TX(TX)′,TΣT′(TX)⊥]⊥T[XX′,ΣX⊥]⊥]−n+r[X,Σ] | (4.8) |
and by (2.9) and (4.5),
maxU1r[Q−U1[TX(TX)′,TΣT′(TX)⊥]⊥T[XX′,ΣX⊥]⊥]=maxU1r([Q[XX′,ΣX⊥]⊥]−[Is0]U1[TX(TX)′,TΣT′(TX)⊥]⊥T)=min{r[Q[XX′,ΣX⊥]⊥[TX(TX)′, TΣT′(TX)⊥]⊥T], r[QIs[XX′,ΣX⊥]⊥0]}=min{r[Q[XX′,ΣX⊥][TX(TX)′, TΣT′(TX)⊥]⊥T], s+n−r[X,Σ]}=min{r[ΛΓ]+n−r(X)−r[X,Σ]−r[TX,TΣ], s+n−r[X,Σ]}=min{s,r[ΛΓ]−r(Γ)}+n−r[X,Σ]. | (4.9) |
Combining (4.8) and (4.9) yields
maxˆKminˆLr(ˆL−ˆKT)=min{s,r[ΛΓ]−r(Γ)}. | (4.10) |
Setting the right-hand side of (4.10) equal to zero yields r[ΛΓ]=r(Γ). Thus, the statement in (b) holds.
By a similar approach, we can obtain
maxˆLminˆKr(ˆL−ˆKT)=min{s,r[ΛΓ]+n−r(T)−r(X)−r[X,Σ]}, | (4.11) |
as required for the statement in (c). Combining (b) and (c) yields (d).
Theorem 4.2. Let BLMBPN(τ) and BLMBPM(τ) be as given in (3.15) and (3.25), respectively, and let Λ and Γ be as given in (4.2). Then, the following six statements are equivalent:
(a) {BLMBPM(τ)}∩{BLMBPN(τ)}≠∅ holds definitely.
(b) {BLMBPM(τ)}∩{BLMBPN(τ)}≠∅ holds with probability 1.
(c) {BLMBPM(τ)}⊇{BLMBPN(τ)} holds with probability 1.
(d) {BLMBPM(τ)}⊆{BLMBPN(τ)} holds with probability 1.
(e) {BLMBPM(τ)}={BLMBPN(τ)} holds with probability 1.
(f) R(Γ′)⊆R(Λ′).
Proof. It can be seen from Lemma 2.6 and Definition 2.8(c) that (a) is equivalent to
minˆL,ˆKr((ˆL−ˆKT)[XX′,ΣX⊥])=0. | (4.12) |
Substituting the coefficient matrices in (3.15) and (3.25) into (4.12) and simplifying, we obtain
U1[TX(TX)′,TΣT′(TX)⊥]⊥T[XX′,ΣX⊥]=J, |
where J=[AX′,BΣX⊥]−[A(TX)′,BΣT′(TX)⊥][TX(TX)′,TΣT′(TX)⊥]†T[XX′,ΣX⊥] and U1∈Rs×m is arbitrary. From Lemma 2.5, the matrix equation is solvable for U1 if and only if
r[J[TX(TX)′,TΣT′(TX)⊥]⊥T[XX′,ΣX⊥]]=r([TX(TX)′,TΣT′(TX)⊥]⊥T[XX′,ΣX⊥]). | (4.13) |
Applying (2.3) and (2.4), and simplifying, leads to
r[J[TX(TX)′,TΣT′(TX)⊥]⊥T[XX′,ΣX⊥]]=r[J0T[XX′,ΣX⊥][TX(TX)′,TΣT′(TX)⊥]]−r[TX(TX)′,TΣT′(TX)⊥]=r[[AX′,BΣX⊥][A(TX)′,BΣT′(TX)⊥]T[XX′,ΣX⊥][TX(TX)′,TΣT′(TX)⊥]]−r[TX,TΣ]=r[TXX′TΣTΣT′0X′000(TX)′AX′BΣBΣT′]−r(X)−r(TX)−r[TX,TΣ]=r[TXX′TΣ0X′AX′BΣ]−r(X)−r[TX,TΣ]=r[ΛΓ]−r(Γ), | (4.14) |
and
r([TX(TX)′,TΣT′(TX)⊥]⊥T[XX′,ΣX⊥])=r([TX(TX)′,TΣT′(TX)⊥],T[XX′,ΣX⊥])−r[TX,TΣ]=r[TX,TΣ]−r[TX,TΣ]=0. | (4.15) |
Substituting (4.14) and (4.15) into (4.13) leads to r[ΛΓ]=r(Γ), thus establishing the equivalence of (a) and (e).
It follows from Lemma 2.6 and Definition 2.8(d) that (b) is equivalent to
maxˆKminˆLr((ˆL−ˆKT)[XX′,ΣX⊥])=0. | (4.16) |
From (2.7), (3.15), (3.23), (3.25) and (4.14),
maxˆKminˆLr((ˆL−ˆKT)[XX′,ΣX⊥])=maxˆKr(J−U1[TX(TX)′,TΣT′(TX)⊥]⊥T[XX′,ΣX⊥])=min{r[J[TX(TX)′,TΣT′(TX)⊥]⊥T[XX′,ΣX⊥]],s}. | (4.17) |
Setting the right-hand side of (4.17) equal to zero yields r[ΛΓ]=r(Γ), thus establishing the equivalence of (b) and (e).
Similarly, we are able to obtain
maxˆLminˆKr((ˆL−ˆKT)[XX′,ΣX⊥])=r[ΛΓ]−r(Γ). | (4.18) |
Thus, (4.18) is equivalent to r[ΛΓ]=r(Γ). Combining results (b) and (c) leads to the equivalence of (d) and (e).
Combining Theorems 4.1 and 4.2, we obtain the following result:
Corollary 4.3. Let BLMBPN(τ) and BLMBPM(τ) be as given in (3.15) and (3.25), respectively, and let Λ and Γ be as given in (4.2). Then, the following six statistical statements are equivalent:
(a) {BLMBPM(τ)}∩{BLMBPN(τ)}≠∅ holds definitely.
(b) {BLMBPM(τ)}⊇{BLMBPN(τ)} holds definitely.
(c) {BLMBPM(τ)}∩{BLMBPN(τ)}≠∅ holds with probability 1.
(d) {BLMBPM(τ)}⊇{BLMBPN(τ)} holds with probability 1.
(e) {BLMBPM(τ)}⊆{BLMBPN(τ)} holds with probability 1.
(f) {BLMBPM(τ)}={BLMBPN(τ)} holds with probability 1.
The results in the above theorems and corollaries can be simplified further for different choices of the matrices in (1.3), such as A=K and B=0. Hence, many specific conclusions can further be obtained on the relationships between a GLM and its TGLMs. If τ in (1.3) is predictable under (1.1) and (1.5), then the BLMBP of τ is just its BLUP, and the main results in this paper reduce to the classic theory of the BLUPs of τ under (1.1) and (1.5). Therefore, this work is a certain extension of the classic BLUP theory.
Assume that a concrete form of M in (1.1) is given by
M: [y1y2]=[X100X2][θ1θ2]+[ν1ν2], E[ν1ν2]=0, Cov[ν1ν2]=σ2[In100In2]. |
In this case, taking two transformation matrices T1=[In1,0] and T2=[0,In2], we obtain the following two sub-sample models:
M1: y1=X1θ1+ν1, E(ν1)=0, Cov(ν1)=σ2In1,M2: y2=X2θ2+ν2, E(ν2)=0, Cov(ν2)=σ2In2, |
where it is assumed that yi∈Rni×1, Xi∈Rni×pi, θi∈Rpi×1, νi∈Rni×1, n=n1+n2 and p=p1+p2. To illustrate the results in Section 4, let A=[X1,0], B=0 and A=[0,X2], B=0 in (1.3), respectively. From Theorems 4.1 and 4.2,
$r\begin{bmatrix}\Lambda\\ \Gamma\end{bmatrix}=r\begin{bmatrix}TXX' & T\Sigma\\ 0 & X'\\ AX' & B\Sigma\end{bmatrix}=r\begin{bmatrix}X_1X_1' & \sigma^2 I_{n_1} & 0\\ 0 & X_1' & 0\\ 0 & 0 & X_2'\\ X_1X_1' & 0 & 0\end{bmatrix}=r\begin{bmatrix}0 & I_{n_1} & 0\\ 0 & 0 & X_2'\\ X_1X_1' & 0 & 0\end{bmatrix}=n_1+r(X)$,
$r(\Lambda)=r\begin{bmatrix}TXX' & T\Sigma\\ 0 & X'\end{bmatrix}=r\begin{bmatrix}TX & T\Sigma\\ 0 & X'\end{bmatrix}=r\begin{bmatrix}X_1 & \sigma^2 I_{n_1} & 0\\ 0 & X_1' & 0\\ 0 & 0 & X_2'\end{bmatrix}=n_1+r(X)$,
$r(T)+r(X)+r[X,\ \Sigma]-n=n_1+r(X)$.
Obviously, the equivalent conditions all hold in Theorems 4.1 and 4.2. Thus, we can easily describe the relations between the corresponding estimators.
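These rank identities can also be confirmed numerically. The sketch below (arbitrary small dimensions and σ², all values illustrative and our own) builds Λ and Γ of (4.2) for T = T1, A = [X1, 0], B = 0, and checks that the three quantities compared in Theorems 4.1 and 4.2 coincide.

```python
import numpy as np

rng = np.random.default_rng(7)
r = np.linalg.matrix_rank
n1, n2, p1, p2, sigma2 = 4, 3, 2, 2, 1.5
n, p = n1 + n2, p1 + p2

X1 = rng.standard_normal((n1, p1))
X2 = rng.standard_normal((n2, p2))
X  = np.block([[X1, np.zeros((n1, p2))],
               [np.zeros((n2, p1)), X2]])
Sigma = sigma2 * np.eye(n)
T = np.hstack([np.eye(n1), np.zeros((n1, n2))])   # T1 = [I_{n1}, 0]
A = np.hstack([X1, np.zeros((n1, p2))])           # A = [X1, 0], B = 0
B = np.zeros((n1, n))

Lam = np.block([[T @ X @ X.T, T @ Sigma],
                [np.zeros((p, n)), X.T]])
Gam = np.hstack([A @ X.T, B @ Sigma])

# The quantities compared in Theorems 4.1 and 4.2 all coincide here
lhs = r(np.vstack([Lam, Gam]))
assert lhs == r(Lam) == n1 + r(X)
assert lhs == r(T) + r(X) + r(np.hstack([X, Sigma])) - n
```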
We have provided an algebraic and statistical analysis of a biased prediction problem when a joint parametric vector is unpredictable under a given GLM, and have obtained an abundance of exact formulas and facts about the BLMBPs of the joint parametric vector in the contexts of a GLM and its TGLMs. All the findings in this article are formulated as analytical expressions or explicit assertions through the use of specific matrix analysis tools and techniques. Hence, it is not difficult to understand these results and facts from both mathematical and statistical points of view. In view of this fact, we can take the results obtained in the preceding sections as a group of theoretical contributions to statistical inference under general linear model assumptions. Consequently, we are able to utilize the statistical methods developed in this article to provide additional insight into various concrete inference problems and subjects related to GLMs. Correspondingly, we point out that the main conclusions presented in this work have certain significant applications in the field of inverse scattering problems. The reader is referred to [8,31,32] on the topic of inverse scattering problems.
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
We are grateful to anonymous reviewers for their helpful comments and suggestions. The second author was supported in part by the Shandong Provincial Natural Science Foundation #ZR2019MA065.
The authors declare that they have no conflicts of interest.
[1] | J. K. Baksalary, R. Kala, Linear transformations preserving best linear unbiased estimators in a general Gauss-Markoff model, Ann. Stat., 9 (1981), 913–916. |
[2] | B. Dong, W. Guo, Y. Tian, On relations between BLUEs under two transformed linear models, J. Multivariate Anal., 131 (2014), 279–292. https://doi.org/10.1016/j.jmva.2014.07.005 |
[3] | H. Drygas, Sufficiency and completeness in the general Gauss-Markov model, Sankhyā A, 45 (1983), 88–98. |
[4] | N. Güler, On relations between BLUPs under two transformed linear random-effects models, Commun. Stat. Simul. Comput., 51 (2022), 5099–5125. https://doi.org/10.1080/03610918.2020.1757709 |
[5] | N. Güler, M. E. Büyükkaya, Notes on comparison of covariance matrices of BLUPs under linear random-effects model with its two subsample models, Iran. J. Sci. Tech. Trans. A, 43 (2019), 2993–3002. https://doi.org/10.1007/s40995-019-00785-3 |
[6] | N. Güler, M. E. Büyükkaya, Inertia and rank approach in transformed linear mixed models for comparison of BLUPs, Commun. Stat. Theor. Meth., 52 (2023), 3108–3123. https://doi.org/10.1080/03610926.2021.1967397 |
[7] | S. J. Haslett, J. Isotalo, Y. Liu, S. Puntanen, Equalities between OLSE, BLUE and BLUP in the linear model, Stat. Papers, 55 (2014), 543–561. https://doi.org/10.1007/s00362-013-0500-7 |
[8] | Y. He, H. Liu, X. Wang, A novel quantitative inverse scattering scheme using interior resonant modes, Inverse Probl., 39 (2023), 085002. https://doi.org/10.1088/1361-6420/acdc49 |
[9] | R. Kala, P. R. Pordzik, Estimation in singular partitioned, reduced or transformed linear models, Stat. Papers, 50 (2009), 633–638. https://doi.org/10.1007/s00362-007-0097-9 |
[10] | A. Markiewicz, S. Puntanen, All about the ⊥ with its applications in the linear statistical models, Open Math., 13 (2015), 33–50. https://doi.org/10.1515/math-2015-0005 |
[11] | A. Markiewicz, S. Puntanen, Further properties of linear prediction sufficiency and the BLUPs in the linear model with new observations, Afrika Stat., 13 (2018), 1511–1530. https://doi.org/10.16929/as/1511.117 |
[12] | G. Marsaglia, G. P. H. Styan, Equalities and inequalities for ranks of matrices, Linear Multilinear Algebra, 2 (1974), 269–292. https://doi.org/10.1080/03081087408817070 |
[13] | S. K. Mitra, Generalized inverse of matrices and applications to linear models, In: Handbook of Statistics, 1 (1980), 471–512. https://doi.org/10.1016/S0169-7161(80)80045-9 |
[14] | C. H. Morrell, J. D. Pearson, L. J. Brant, Linear transformations of linear mixed-effects models, Amer. Stat., 51 (1997), 338–343. https://doi.org/10.1080/00031305.1997.10474409 |
[15] | R. Penrose, A generalized inverse for matrices, Proc. Cambridge Phil. Soc., 51 (1955), 406–413. https://doi.org/10.1017/S0305004100030401 |
[16] | S. Puntanen, G. P. H. Styan, J. Isotalo, Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty, Berlin: Springer, 2011. |
[17] | C. R. Rao, Linear Statistical Inference and Its Applications, New York: Wiley, 1973. |
[18] | C. R. Rao, Choice of best linear estimators in the Gauss-Markoff model with a singular dispersion matrix, Commun. Stat. Theor. Meth., 7 (1978), 1199–1208. |
[19] | C. R. Rao, S. K. Mitra, Generalized Inverse of Matrices and Its Applications, New York: Wiley, 1971. |
[20] | J. Shao, J. Zhang, A transformation approach in linear mixed-effects models with informative missing responses, Biometrika, 102 (2015), 107–119. https://doi.org/10.1093/biomet/asu069 |
[21] | Y. Tian, The maximal and minimal ranks of some expressions of generalized inverses of matrices, SEA Bull. Math., 25 (2002), 745–755. https://doi.org/10.1007/s100120200015 |
[22] | Y. Tian, More on maximal and minimal ranks of Schur complements with applications, Appl. Math. Comput., 152 (2004), 675–692. https://doi.org/10.1016/S0096-3003(03)00585-X |
[23] | Y. Tian, On properties of BLUEs under general linear regression models, J. Stat. Plann. Inference, 143 (2013), 771–782. https://doi.org/10.1016/j.jspi.2012.10.005 |
[24] | Y. Tian, A new derivation of BLUPs under random-effects model, Metrika, 78 (2015), 905–918. https://doi.org/10.1007/s00184-015-0533-0 |
[25] | Y. Tian, Transformation approaches of linear random-effects models, Stat. Meth. Appl., 26 (2017), 583–608. https://doi.org/10.1007/s10260-017-0381-3 |
[26] | Y. Tian, Matrix rank and inertia formulas in the analysis of general linear models, Open Math., 15 (2017), 126–150. https://doi.org/10.1515/math-2017-0013 |
[27] | Y. Tian, S. Cheng, The maximal and minimal ranks of A-BXC with applications, New York J. Math., 9 (2003), 345–362. |
[28] | Y. Tian, B. Jiang, A new analysis of the relationships between a general linear model and its mis-specified forms, J. Korean Stat. Soc., 46 (2017), 182–193. https://doi.org/10.1016/j.jkss.2016.08.004 |
[29] | Y. Tian, S. Puntanen, On the equivalence of estimations under a general linear model and its transformed models, Linear Algebra Appl., 430 (2009), 2622–2641. https://doi.org/10.1016/j.laa.2008.09.016 |
[30] | C. Xie, Linear transformations preserving best linear minimum bias linear estimators in a Gauss-Markoff model, Appl. Math. J. Chin. Univer. Ser. A, 9 (1994), 429–434. |
[31] | W. Yin, W. Yang, H. Liu, A neural network scheme for recovering scattering obstacles with limited phaseless far-field data, J. Comput. Phys., 417 (2020), 109594. https://doi.org/10.1016/j.jcp.2020.109594 |
[32] | Y. Yin, W. Yin, P. Meng, H. Liu, The interior inverse scattering problem for a two-layered cavity using the Bayesian method, Inverse Probl. Imaging, 16 (2022), 673–690. http://dx.doi.org/10.3934/ipi.2021069 |
[33] | B. Zhang, The BLUE and MINQUE in Gauss-Markoff model with linear transformation of the observable variables, Acta Math. Sci., 27 (2007), 203–210. https://doi.org/10.1016/S0252-9602(07)60018-6 |