Research article

Robust estimation for varying-coefficient partially linear measurement error model with auxiliary instrumental variables

  • We study the varying-coefficient partially linear model when some linear covariates are not observed, but their auxiliary instrumental variables are available. Combining the calibrated error-prone covariates and modal regression, we present a two-stage efficient estimation procedure that is robust against outliers and heavy-tailed error distributions. Asymptotic properties of the resulting estimators are established. The performance of the proposed estimation procedure is illustrated through numerous simulations and a real example, and the results confirm that the proposed methods are satisfactory.

    Citation: Yanting Xiao, Wanying Dong. Robust estimation for varying-coefficient partially linear measurement error model with auxiliary instrumental variables[J]. AIMS Mathematics, 2023, 8(8): 18373-18391. doi: 10.3934/math.2023934




    In 1995, Kamps presented the notion of generalized order statistics (GOS), which is the unification of different models of ascendingly ordered random variables (RVs). The GOS incorporates significant and well-known concepts that have been discussed individually in the statistical literature. Many models of ascendingly ordered RVs, such as sequential order statistics, progressive Type-Ⅱ (PT-Ⅱ) censored order statistics, ordinary order statistics (OOS), record values, and Pfeifer's record model, are theoretically contained in the GOS model.

    Assume F(\cdot) to be an arbitrary continuous cumulative distribution function (CDF) with probability density function (PDF) f(\cdot) . Assume also k > 0 , n\in \mathbb{N} , and \tilde{m} = (m_{1}, m_{2}, \cdots, m_{n-1})\in \mathbb{R}^{n-1} to be the parameters such that \gamma_{n} = k and \gamma_{i} = k+n-i+M_{i} , for i = 1, \cdots, n-1 , where M_{i} = \sum_{\iota = i}^{n-1}m_{\iota} . Then, the RVs X_{i:n, \tilde{m}, k}, \, i = 1, \cdots, n , are said to be GOS if their joint PDF is given by

    \begin{eqnarray} f_{1, 2, \cdots, n:n, \tilde{m}, k}(x_{1}, x_{2}, \cdots, x_{n}) = k\left(\prod\limits_{\nu = 1}^{n-1}\gamma_{\nu}\right) \left(\prod\limits_{\nu = 1}^{n-1}[\bar{F}(x_{\nu})]^{m_{\nu}}f(x_{\nu})\right)[\bar{F}(x_{n})]^{k-1}f(x_{n}), \end{eqnarray} (1.1)

    where \bar{F}(\cdot) = 1-F(\cdot) and F^{-1}(0) < x_{1}\leq \cdots \leq x_{n} < F^{-1}(1) .

    Several models of arranged RVs can be considered special instances of GOS. Setting m_{1} = m_{2} = \cdots = m_{n-1} = m , so that \gamma_{i} = k+(n-i)(m+1), \, i = 1, \cdots, n , corresponds to m-generalized order statistics (m-GOS); \gamma_{i} = n-i+1 ( m_{i} = 0, \, k = 1 ) corresponds to OOS; and m_{i} = -1, \, \gamma_{i} = k, \, i = 1, \cdots, n , k\in \mathbb{N} , corresponds to k-record values. Also, for m_{i} = R_{i} , n = m_{0}+\sum_{\nu = 1}^{m_{0}}R_{\nu} , R_{\nu}\in \mathbb{N} , and \gamma_{i} = n-\sum_{\nu = 1}^{i-1}R_{\nu}-i+1 , 1\leq i\leq m_{0} , where m_{0} denotes the fixed number of failures of units to be observed, the model reduces to PT-Ⅱ censored order statistics.

    Under the condition \gamma_{i}\neq \gamma_{j} , i, j = 1, \cdots, n-1, \, i\neq j , Kamps and Cramer [23] derived the PDF of X_{r:n, \tilde{m}, k} , 1\leq r\leq n , as

    \begin{eqnarray} f_{r:n, \tilde{m}, k}(x) = C_{r-1}f(x)\sum\limits_{i = 1}^{r}a_{i, r}[\bar{F}(x)]^{\gamma_{i}-1}, \end{eqnarray} (1.2)

    and the joint PDF of X_{r:n, \tilde{m}, k} and X_{s:n, \tilde{m}, k} , r, s = 1, \cdots, n, \, r < s , as

    \begin{eqnarray} f_{r, s:n, \tilde{m}, k}(x, y) = C_{s-1}\left[\sum\limits_{i = 1}^{r}a_{i, r}[\bar{F}(x)]^{\gamma_{i}}\right] \left[\sum\limits_{j = r+1}^{s}a_{j, s}^{(r)}\left[\frac{\bar{F}(y)}{\bar{F}(x)}\right]^{\gamma_{j}}\right]\times \frac{f(x)}{\bar{F}(x)}\, \frac{f(y)}{\bar{F}(y)}, \; x < y, \end{eqnarray} (1.3)

    where C_{r-1} = \prod_{i = 1}^{r}\gamma_{i} , a_{i, r} = \prod_{\imath = 1, \imath\neq i}^{r}\frac{1}{\gamma_{\imath}-\gamma_{i}}, \, 1\leq i\leq r\leq n , and a_{j, s}^{(r)} = \prod_{\jmath = r+1, \jmath\neq j}^{s}\frac{1}{\gamma_{\jmath}-\gamma_{j}} , r+1\leq j\leq s\leq n .

    It can be shown that for m_{1} = m_{2} = \cdots = m_{n-1} = m\neq -1 (Khan and Khan [24]),

    a_{i, r} = \frac{(-1)^{r-i}}{(m+1)^{r-1}(r-1)!}\binom{r-1}{r-i},

    and

    a_{j, s}^{(r)} = \frac{(-1)^{s-j}}{(m+1)^{s-r-1}(s-r-1)!}\binom{s-r-1}{s-j}.
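These closed forms lend themselves to a quick numerical check. The following Python sketch (with arbitrarily chosen illustrative values n = 6, m = 2, k = 1.5, which are not from the source) compares the defining product form of a_{i, r} with the binomial closed form for the equal- m_{i} case:

```python
from math import comb, factorial

def gamma_i(i, n, m, k):
    # gamma_i = k + (n - i)(m + 1) in the m-GOS case
    return k + (n - i) * (m + 1)

def a_product(i, r, g):
    # defining form: a_{i,r} = prod_{l <= r, l != i} 1 / (gamma_l - gamma_i)
    p = 1.0
    for l in range(1, r + 1):
        if l != i:
            p /= g[l] - g[i]
    return p

def a_closed(i, r, m):
    # closed form: (-1)^{r-i} binom(r-1, r-i) / ((m+1)^{r-1} (r-1)!)
    return (-1) ** (r - i) * comb(r - 1, r - i) / ((m + 1) ** (r - 1) * factorial(r - 1))

n, m, k = 6, 2.0, 1.5  # hypothetical parameters; the gamma_i are then distinct
g = {i: gamma_i(i, n, m, k) for i in range(1, n + 1)}
for r in range(1, n + 1):
    for i in range(1, r + 1):
        assert abs(a_product(i, r, g) - a_closed(i, r, m)) < 1e-12
```

The analogous check for a_{j, s}^{(r)} runs the same product over \jmath = r+1, \cdots, s .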

    Therefore, the PDF of X_{r:n, \tilde{m}, k} given in (1.2) reduces to

    \begin{eqnarray} f_{r:n, m, k}(x) = \frac{C_{r-1}}{(r-1)!}[\bar{F}(x)]^{\gamma_{r}-1}f(x)\, g_{m}^{r-1}(F(x)), \end{eqnarray} (1.4)

    and the joint PDF of X_{r:n, \tilde{m}, k} and X_{s:n, \tilde{m}, k} given in (1.3) reduces to

    \begin{eqnarray} f_{r, s:n, m, k}(x, y)& = &\frac{C_{s-1}}{(r-1)!(s-r-1)!}[\bar{F}(x)]^{m}g_{m}^{r-1}(F(x))[h_{m}(F(y))-h_{m}(F(x))]^{s-r-1}\\ &&\times [\bar{F}(y)]^{\gamma_{s}-1}f(x)f(y), \; x < y, \end{eqnarray} (1.5)

    where C_{r-1} = \prod_{i = 1}^{r}\gamma_{i} , \gamma_{i} = k+(n-i)(m+1) , h_{m}(x) = \begin{cases} -\frac{1}{m+1}(1-x)^{m+1}, & m\neq -1, \\ -\ln(1-x), & m = -1, \end{cases} and g_{m}(x) = h_{m}(x)-h_{m}(0) , x\in[0, 1) (see Kamps [22]).
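As a concrete sanity check of (1.4), take a Uniform(0, 1) parent with m = 0, k = 1 (the OOS case), for which the formula reduces to the Beta(r, n-r+1) density. The Python sketch below (an illustrative check, not part of the source) evaluates (1.4) directly from C_{r-1} , g_{m} , and h_{m} , and verifies numerically that it integrates to one:

```python
from math import prod, factorial, log

def f_r_n(x, r, n, m=0, k=1):
    # marginal PDF (1.4) of the r-th m-GOS, specialized to a
    # Uniform(0,1) parent: F(x) = x, f(x) = 1
    gam = lambda i: k + (n - i) * (m + 1)
    C = prod(gam(i) for i in range(1, r + 1))
    if m != -1:
        h_m = lambda t: -(1 - t) ** (m + 1) / (m + 1)
    else:
        h_m = lambda t: -log(1 - t)  # the m = -1 branch, unused here
    g_m = lambda t: h_m(t) - h_m(0)
    return C / factorial(r - 1) * (1 - x) ** (gam(r) - 1) * g_m(x) ** (r - 1)

n, N = 5, 20000
step = 1.0 / N
for r in range(1, n + 1):
    # trapezoidal rule on [0, 1]
    total = 0.5 * step * (f_r_n(0.0, r, n) + f_r_n(1.0, r, n))
    total += step * sum(f_r_n(i * step, r, n) for i in range(1, N))
    assert abs(total - 1.0) < 1e-4
```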

    David [8] introduced the concept of concomitants of order statistics (COS), but Yang [47] described the general theory of COS. Concomitants are important in selection and prediction issues, ranked set sampling, parameter estimation, and the characterization of parent bivariate distributions. For a brief overview of the uses of the concomitants of ordered RVs, see Veena and Thomas [46] and the references therein. For a review of fundamental findings on COS, see David and Nagaraja [9]. Furthermore, for some of the recent works on COS, we refer to Philip and Thomas [36,37,38], Kumar et al. [29], Barakat et al. [3], and Koshti and Kamalja [28].

    Several authors have investigated the concomitants of GOS (CGOS), including Ahsanullah and Nevzorov [1], Beg and Ahsanullah [4], El-Din et al. [13], Domma and Giordano [11], Hanif and Shahbaz [15], Shahbaz and Shahbaz [40], Tahmasebi et al. [44], Alawady et al. [2], and Kamal et al. [20]. Let (X_{i}, Y_{i}) , i = 1, \cdots, n , be a random sample from a bivariate distribution function F_{X, Y}(x, y) . When the X-variates are ordered in ascending order as X_{1:n, \tilde{m}, k}\leq X_{2:n, \tilde{m}, k}\leq \cdots \leq X_{n:n, \tilde{m}, k} , the Y-variates paired (not necessarily in ascending order) with these GOS are called the CGOS and are denoted by Y_{[r:n, \tilde{m}, k]} , r = 1, \cdots, n . The PDF of Y_{[r:n, \tilde{m}, k]} is given by (Ahsanullah and Nevzorov, [1])

    \begin{eqnarray} h_{[r:n, \tilde{m}, k]}(y) = \int_{-\infty}^{\infty}f(y|x)f_{r:n, \tilde{m}, k}(x)dx, \end{eqnarray} (1.6)

    where f(y|x) is the conditional PDF of Y given X and f_{r:n, \tilde{m}, k}(x) is defined in (1.2).

    Moreover, the joint PDF of Y_{[r:n, \tilde{m}, k]} and Y_{[s:n, \tilde{m}, k]} , r, s = 1, \cdots, n, \, r < s , is given by

    \begin{eqnarray} h_{[r, s:n, \tilde{m}, k]}(y_{1}, y_{2}) = \int_{-\infty}^{\infty}\int_{x_{1}}^{\infty}f(y_{1}|x_{1})f(y_{2}|x_{2})f_{r, s:n, \tilde{m}, k}(x_{1}, x_{2})dx_{2}\, dx_{1}, \end{eqnarray} (1.7)

    where f_{r, s:n, \tilde{m}, k}(x_{1}, x_{2}) is given in (1.3).

    One of the most notable applications of COS is in ranked set sampling (RSS). RSS is a beneficial sampling strategy for improving estimation efficiency and precision when the variable under study is expensive to measure or difficult to obtain, yet inexpensive and simple to rank. RSS was proposed by McIntyre [31] and then supported by Takahasi and Wakimoto [45] through mathematical theory. The procedure for RSS is described as follows:

    1) Randomly choose n2 units from the population under study, then divide them into n sets of n units.

    2) Order the elements of each set without making actual measurements.

    3) Choose and quantify the ith minimum from the ith set, i=1,,n, to create a new set of size n, known as the RSS.

    4) If a large sample size is required, repeat the above three steps d times (cycles) until a sample of size nd is obtained.
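The four steps above can be sketched in a few lines of Python. This is a minimal illustration: the ranking in step 2 is done here on the true values, whereas in practice it is judgment-based without actual measurement, and the exponential population is an arbitrary choice for the example.

```python
import random

def ranked_set_sample(draw, n, d=1, rng=random):
    """One RSS of size n*d: per cycle, draw n sets of n units, rank
    each set, and quantify only the i-th smallest unit of the i-th set."""
    sample = []
    for _ in range(d):                     # d cycles (step 4)
        for i in range(1, n + 1):
            ranked_set = sorted(draw(rng) for _ in range(n))  # steps 1-2
            sample.append(ranked_set[i - 1])                  # step 3
    return sample

rng = random.Random(1)
rss = ranked_set_sample(lambda r: r.expovariate(1.0), n=4, d=3, rng=rng)
assert len(rss) == 12  # n * d measured units out of n^2 * d drawn
```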

    For a comprehensive review of the theory and applications of RSS, see Chen et al. [7]. In some practical applications, the study variable, say, Y, is more difficult to measure, whereas an auxiliary variable X associated with Y is easily quantifiable and may be precisely arranged. In this situation, Stokes [42] created another RSS technique, which is as follows:

    1) At random, choose n independent bivariate sets of size n.

    2) Take note of the value of the auxiliary variable on each of these units.

    3) From the ith set of size n, choose the variable Y associated with the ith smallest X, i=1,,n.

    The resulting set of n units is known as the RSS. Consider (X_{(i:n)i}, Y_{[i:n]i}) , i = 1, \cdots, n , to be the pair chosen from the i^{th} set, where X_{(i:n)i} is the i^{th} order statistic of the auxiliary variate in the i^{th} set and Y_{[i:n]i} is the measurement made on the Y variate associated with X_{(i:n)i} . Y_{[i:n]i} is obviously the concomitant of the i^{th} order statistic resulting from the i^{th} sample. Numerous authors in the literature have considered the estimation of parameters of various bivariate distributions using RSS and its modifications. Some work in this area is by Chacko and Thomas [5], Philip and Thomas [36,37], Koshti and Kamalja [26], Irshad et al. [16,17], and Dong et al. [12].
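Stokes's scheme can likewise be sketched directly: only the auxiliary X is ranked, and the measured sample consists of the Y concomitants. The bivariate normal parent below is purely an illustrative assumption (the paper's application uses the BGW distribution).

```python
import random

def stokes_rss(draw_pair, n, rng=random):
    """Stokes's RSS: from the i-th bivariate set of size n, measure the
    Y paired with the i-th smallest auxiliary X, i.e., the concomitant."""
    sample = []
    for i in range(1, n + 1):
        pairs = sorted((draw_pair(rng) for _ in range(n)), key=lambda p: p[0])
        sample.append(pairs[i - 1][1])  # Y paired with the i-th smallest X
    return sample

def bivariate_normal_pair(rng, rho=0.8):
    # illustrative parent distribution (an assumption, not from the source)
    x = rng.gauss(0.0, 1.0)
    y = rho * x + (1.0 - rho ** 2) ** 0.5 * rng.gauss(0.0, 1.0)
    return (x, y)

rng = random.Random(7)
y_rss = stokes_rss(bivariate_normal_pair, n=5, rng=rng)
assert len(y_rss) == 5
```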

    COS and higher moments of multivariate distributions have received considerable attention in recent years. Most of the literature on concomitants concentrates on symmetric distributions such as the multivariate normal (Sheikhi et al., [41]; Chaumette and Vincent, [6]) or multivariate elliptical (Jamalizadeh and Balakrishnan, [18]). Skewed distributions have gained a lot of interest recently in the literature since many datasets encountered in reality have some degree of skewness. In this regard, the distribution theory of COS from skew distributions has been investigated by several authors, including Hanif and Shahbaz [15], Shahbaz and Shahbaz [40], Tahmasebi et al. [44], Shahbaz et al. [39], and Kamal et al. [20]. In this article, we consider the bivariate generalized Weibull (BGW) distribution and the CGOS arising from it. There are numerous reasons for considering this particular bivariate distribution. Due to the presence of four parameters, the joint PDF of the BGW distribution is quite flexible and can take on various shapes depending on the shape parameters. The joint PDF, joint CDF, and conditional PDF of the BGW distribution are all in closed form, making them appropriate for use in practice. The univariate marginals of this distribution can accommodate various types of hazard rates. In addition, it can be utilized for modeling bivariate lifetime data in a variety of scenarios. So far, no results on CGOS arising from the BGW distribution have been found in the literature. Thus, the current study aims to develop the distribution theory of CGOS originating from the BGW distribution and apply it to associated inference problems.

    The article is structured as follows: In Section 2, we provide a brief overview of the BGW distribution and some of its properties. In Section 3, we present the marginal PDF as well as the explicit expressions for the single moments of CGOS from the BGW distribution. The joint PDF of CGOS from the BGW distribution is also obtained in Section 3. Furthermore, the explicit expressions for the product moments of CGOS are derived. Section 4 presents the best linear unbiased (BLU) estimator of the parameter of the study variable contained in the BGW distribution using Stokes's RSS and some of the other modified RSS schemes. In Section 5, we apply the results to a real dataset. In Section 6, conclusions are provided.

    A bivariate RV (X,Y) is said to follow a BGW distribution if its PDF is given by (Pathak et al. [34])

    \begin{eqnarray} f(x, y)& = & \theta \alpha^{2}(\beta_{1}\beta_{2})^{-1}x^{\alpha-1}y^{\alpha-1}e^{-\omega(x, y;\phi)}(1-e^{-\omega(x, y;\phi)})^{\theta-2}(1-\theta e^{-\omega(x, y;\phi)}), \end{eqnarray} (2.1)

    where x, y\geq 0 , \alpha, \beta_{1}, \beta_{2} > 0 , 0 < \theta\leq 1 , \omega(x, y;\phi) = \frac{x^{\alpha}}{\beta_{1}}+\frac{y^{\alpha}}{\beta_{2}} , and \phi = (\alpha, \beta_{1}, \beta_{2}) . The BGW distribution includes the bivariate generalized exponential distribution (refer to Mirhosseini et al. [32]) and the bivariate generalized Rayleigh distribution (refer to Pathak and Vellaisamy [35]) as sub-models. The conditional PDF of Y given X = x is (Pathak et al. [34])

    \begin{eqnarray} f(y|x)& = & \frac{\alpha y^{\alpha-1}}{\beta_{2}}\, e^{-\frac{y^{\alpha}}{\beta_{2}}}\, \frac{(1-e^{-\omega(x, y;\phi)})^{\theta-2}(1-\theta e^{-\omega(x, y;\phi)})}{(1-e^{-\frac{x^{\alpha}}{\beta_{1}}})^{\theta-1}}, \; y\geq 0. \end{eqnarray} (2.2)

    The marginal RV X\sim EW(\alpha, \beta_{1}, \theta) follows the exponentiated Weibull (EW) distribution with PDF

    \begin{eqnarray} f(x)& = & \theta \alpha (\beta_{1})^{-1} x^{\alpha-1} e^{-\frac{x^{\alpha}}{\beta_{1}}} (1-e^{-\frac{x^{\alpha}}{\beta_{1}}})^{\theta-1}, x\geq 0, \end{eqnarray} (2.3)

    and CDF

    \begin{eqnarray} F(x)& = & (1-e^{-\frac{x^{\alpha}}{\beta_{1}}})^{\theta}, x\geq 0. \end{eqnarray} (2.4)

    Similarly, Y\sim EW(\alpha, \beta_{2}, \theta) . A series expansion of the PDF of the BGW distribution is given by

    \begin{eqnarray} f(x, y)& = & \alpha^{2} (\beta_{1}\beta_{2})^{-1} x^{\alpha-1} y^{\alpha-1} \sum\limits_{j = 1}^{\infty}\binom{\theta}{j} (-1)^{j+1} j^{2} e^{-j\omega(x, y;\phi)}. \end{eqnarray} (2.5)
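The equivalence of the closed form (2.1) and the series expansion (2.5) is easy to verify numerically. The Python sketch below evaluates both at a few points; the parameter values are arbitrary illustrations, and the series uses a generalized binomial coefficient \binom{\theta}{j} computed iteratively:

```python
from math import exp

def gen_binom(theta, j):
    # generalized binomial coefficient binom(theta, j)
    c = 1.0
    for i in range(j):
        c *= (theta - i) / (i + 1)
    return c

def bgw_pdf_closed(x, y, theta, alpha, b1, b2):
    # closed-form joint PDF (2.1)
    w = x ** alpha / b1 + y ** alpha / b2
    t = exp(-w)
    return (theta * alpha ** 2 / (b1 * b2) * x ** (alpha - 1) * y ** (alpha - 1)
            * t * (1 - t) ** (theta - 2) * (1 - theta * t))

def bgw_pdf_series(x, y, theta, alpha, b1, b2, terms=500):
    # truncated series expansion (2.5)
    w = x ** alpha / b1 + y ** alpha / b2
    s = sum(gen_binom(theta, j) * (-1) ** (j + 1) * j ** 2 * exp(-j * w)
            for j in range(1, terms + 1))
    return alpha ** 2 / (b1 * b2) * x ** (alpha - 1) * y ** (alpha - 1) * s

theta, alpha, b1, b2 = 0.5, 2.0, 1.0, 2.0
for (x, y) in [(0.5, 0.8), (1.0, 1.5), (2.0, 0.7)]:
    assert abs(bgw_pdf_closed(x, y, theta, alpha, b1, b2)
               - bgw_pdf_series(x, y, theta, alpha, b1, b2)) < 1e-8
```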

    Pathak et al. [34] showed that the product moments of the BGW distribution are

    \begin{eqnarray} E(x^{p} y^{q}) = \Gamma(1+\frac{p}{\alpha})\Gamma(1+\frac{q}{\alpha}) \beta_{1}^{\frac{p}{\alpha}}\beta_{2}^{\frac{q}{\alpha}} \sum\limits_{j = 1}^{\infty}\binom{\theta}{j} (-1)^{j+1} \frac{1}{j^{(p+q)/\alpha}}. \end{eqnarray} (2.6)
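A convenient test case for (2.6) is \theta = 1 : then only the j = 1 term of the series survives, and by (2.1) the joint PDF factorizes, so the product moment should equal the product of the two marginal Weibull moments. A Python sketch of this check (parameter values are arbitrary):

```python
from math import gamma

def gen_binom(theta, j):
    # generalized binomial coefficient binom(theta, j)
    c = 1.0
    for i in range(j):
        c *= (theta - i) / (i + 1)
    return c

def bgw_product_moment(p, q, theta, alpha, b1, b2, terms=100000):
    # E[X^p Y^q] via the series (2.6), truncated at `terms`
    s = sum(gen_binom(theta, j) * (-1) ** (j + 1) / j ** ((p + q) / alpha)
            for j in range(1, terms + 1))
    return (gamma(1 + p / alpha) * gamma(1 + q / alpha)
            * b1 ** (p / alpha) * b2 ** (q / alpha) * s)

alpha, b1, b2, p, q = 2.0, 1.0, 2.0, 2.0, 1.0
mom = bgw_product_moment(p, q, 1.0, alpha, b1, b2, terms=50)
expected = (gamma(1 + p / alpha) * b1 ** (p / alpha)
            * gamma(1 + q / alpha) * b2 ** (q / alpha))
assert abs(mom - expected) < 1e-12
```

For 0 < \theta < 1 the same routine applies, but the series then converges slowly (the terms decay polynomially in j ), so a large truncation point is needed.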

    Under the transformation U = \frac{X}{\beta_{1}^{\star}} and V = \frac{Y}{\beta_{2}^{\star}} , where \beta_{i}^{\star} = \beta_{i}^{1/\alpha}, i = 1, 2 , the standard BGW distribution has the joint PDF

    \begin{eqnarray} f^{\star}(u, v)& = & \alpha^{2} u^{\alpha-1} v^{\alpha-1} \sum\limits_{j = 1}^{\infty}\binom{\theta}{j} (-1)^{j+1} j^{2} e^{-j(u^{\alpha}+ v^{\alpha})}. \end{eqnarray} (2.7)

    It is clear that the variables U and V have the standard EW distribution as their marginals, with PDFs given, respectively, by

    \begin{eqnarray} f^{\star}(u)& = & \theta \alpha u^{\alpha-1} e^{-u^{\alpha}} (1-e^{-u^{\alpha}})^{\theta-1}, u\geq 0, \end{eqnarray} (2.8)
    \begin{eqnarray} f^{\star}(v)& = & \theta \alpha v^{\alpha-1} e^{-v^{\alpha}} (1-e^{-v^{\alpha}})^{\theta-1}, v\geq 0. \end{eqnarray} (2.9)

    In this section, we obtain the distributions and moments of CGOS arising from the BGW distribution.

    Suppose (X_{i}, Y_{i}) and (U_{i}, V_{i}) are random samples of size n each originating from the BGW distribution and the standard BGW distribution, with PDFs provided by (2.1) and (2.7), respectively. Let V_{[r:n, \tilde{m}, k]} be the concomitant of the r^{th} GOS U_{r:n, \tilde{m}, k} . Then, the PDF and the p^{th} moments of V_{[r:n, \tilde{m}, k]}, r = 1, \cdots, n are given by the following two theorems:

    Theorem 1. If V_{[r:n, \tilde{m}, k]} is the concomitant of the r^{th} GOS from the standard BGW distribution, then the PDF of V_{[r:n, \tilde{m}, k]} , for r = 1, \cdots, n, is given by

    \begin{eqnarray} h_{[r:n, \tilde{m}, k]}(v)& = &\alpha \, C_{r-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = 1}^{\infty} a_{i, r}\, j^{2} \delta_{\theta}(j, \gamma_{i}) v^{\alpha-1} e^{-j v^{\alpha}}, v \geq 0, \end{eqnarray} (3.1)

    where \delta_{\theta}(j, \gamma_{i}) = \binom{\theta}{j}\sum_{\tau = 0}^{\gamma_{i}-1}(-1)^{j+\tau +1}\binom{\gamma_{i}-1}{\tau} B(j, \theta \tau +1) and B(., .) is the complete beta function.

    Proof. Using the PDF of U_{r:n, \tilde{m}, k} (1.2) in (1.6), the PDF of the r^{th} CGOS V_{[r:n, \tilde{m}, k]} is given as

    \begin{eqnarray} h_{[r:n, \tilde{m}, k]}(v)& = & C_{r-1}\sum\limits_{i = 1}^{r} a_{i, r} \int_{0}^{\infty} f(v|u)\left[\bar{F}(u)\right]^{\gamma_{i}-1}f(u) du. \end{eqnarray} (3.2)

    In view of (2.7) and (2.8), we get

    \begin{eqnarray} h_{[r:n, \tilde{m}, k]}(v)& = &\alpha\, C_{r-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = 1}^{\infty} \sum\limits_{\tau = 0}^{\gamma_{i}-1} a_{i, r}\, (-1)^{j+\tau +1} j^{2} \binom{\theta}{j} \\ && \times \binom{\gamma_{i}-1}{\tau} v^{\alpha-1} e^{-j v^{\alpha}} \int_{0}^{\infty} e^{-j z} (1-e^{- z})^{\theta \tau} dz\\ && \text{ [provided that }\; \gamma_{i} \; \text{is an integer]}, \end{eqnarray} (3.3)

    where z = u^{\alpha} . Now, by using Eq. (3.312.1) in Gradshteyn and Ryzhik [14] to compute the integral in (3.3), we obtain the result given in (3.1).

    Corollary 1. Taking m_{1} = m_{2} = \cdots = m_{n-1} = m\neq-1 in (3.1), the PDF of the r^{th} concomitant of m-GOS from the standard BGW distribution is given by

    \begin{eqnarray} h_{[r:n, m, k]}(v)& = & \frac{\alpha C_{r-1}}{(r-1)!}\sum\limits_{i = 1}^{r}\sum\limits_{j = 1}^{\infty} (-1)^{r-i} \frac{j^{2}}{(m+1)^{r-1}}\binom{r-1}{r-i}\delta_{\theta}(j, \gamma_{i})\, v^{\alpha-1} e^{-j v^{\alpha}}, v \geq 0, \end{eqnarray} (3.4)

    where \gamma_{i} = k+(n-i)(m+1).

    Remark 1. When m = 0 and k = 1 in (3.4), we get the PDF of the r^{th} COS from the standard BGW distribution as

    \begin{eqnarray} h_{[r:n]}(v)& = &\alpha \, C_{r:n}\sum\limits_{i = 1}^{r}\sum\limits_{j = 1}^{\infty} (-1)^{r-i} j^{2}\binom{r-1}{r-i} \delta_{\theta}(j, n-i+1) \, v^{\alpha-1} e^{-j v^{\alpha}}, \end{eqnarray} (3.5)

    where C_{r:n} = \frac{n!}{(r-1)!(n-r)!} .

    Theorem 2. Under the conditions of Theorem 1, the p^{th} moment of V_{[r:n, \tilde{m}, k]} is

    \begin{eqnarray} \mu_{[r:n, \tilde{m}, k]}^{(p)}& = & C_{r-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = 1}^{\infty} a_{i, r}\, \delta_{\theta}(j, \gamma_{i}) \dfrac{\Gamma(\frac{p}{\alpha}+1)}{j^{\frac{p}{\alpha}-1}}. \end{eqnarray} (3.6)

    Proof. Using (3.1), the p^{th} moment of V_{[r:n, \tilde{m}, k]} is given as

    \begin{eqnarray} \mu_{[r:n, \tilde{m}, k]}^{(p)}& = &E[V_{[r:n, \tilde{m}, k]}^{p}] = \int_{0}^{\infty} v^{p} h_{[r:n, \tilde{m}, k]}(v) dv\\ && = C_{r-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = 1}^{\infty} a_{i, r}\, j^{2}\, \delta_{\theta}(j, \gamma_{i}) \int_{0}^{\infty} z^{\frac{p}{\alpha}} e^{-j z}dz, \end{eqnarray} (3.7)

    where z = v^{\alpha} . Then, after integration, we get (3.6).

    Corollary 2. Taking m_{1} = m_{2} = \cdots = m_{n-1} = m\neq-1 in (3.6), the p^{th} moment of the concomitant of m-GOS is given by

    \begin{eqnarray} \mu_{[r:n, m, k]}^{(p)}& = & \frac{C_{r-1}}{(r-1)!}\sum\limits_{i = 1}^{r}\sum\limits_{j = 1}^{\infty} \frac{(-1)^{r-i}}{(m+1)^{r-1}}\binom{r-1}{r-i} \, \delta_{\theta}(j, \gamma_{i}) \dfrac{\Gamma(\frac{p}{\alpha}+1)}{j^{\frac{p}{\alpha}-1}}. \end{eqnarray} (3.8)

    Remark 2. Let m = 0 and k = 1 in (3.8), then the p^{th} moment of COS is

    \begin{eqnarray} \mu_{[r:n]}^{(p)}& = &E[V_{[r:n]}^{p}] \\ && = C_{r:n}\sum\limits_{i = 1}^{r}\sum\limits_{j = 1}^{\infty} (-1)^{r-i}\binom{r-1}{r-i}\, \delta_{\theta}(j, n-i+1) \dfrac{\Gamma(\frac{p}{\alpha}+1)}{j^{\frac{p}{\alpha}-1}}. \end{eqnarray} (3.9)
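The COS moments (3.9) admit a direct consistency check: since \sum_{r = 1}^{n}f_{r:n}(u) = n\, f(u) , the concomitant means must average back to the marginal mean, i.e., \sum_{r = 1}^{n}\mu_{[r:n]} = n\, \mu_{[1:1]} . The Python sketch below implements (3.9) with \delta_{\theta} computed term by term (arbitrarily chosen \theta = 0.5 , \alpha = 2 , n = 4 , and a truncated j -series; the identity holds exactly under a common truncation):

```python
from math import comb, exp, gamma, lgamma, factorial

def gen_binom(theta, j):
    # generalized binomial coefficient binom(theta, j)
    c = 1.0
    for i in range(j):
        c *= (theta - i) / (i + 1)
    return c

def beta_fn(a, b):
    # complete beta function via log-gammas (safe for large first argument)
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

def delta(theta, j, g):
    # delta_theta(j, gamma_i) for integer gamma_i = g
    return gen_binom(theta, j) * sum(
        (-1) ** (j + t + 1) * comb(g - 1, t) * beta_fn(j, theta * t + 1)
        for t in range(g))

def cos_moment(p, r, n, theta, alpha, terms=2000):
    # p-th moment of the r-th COS via (3.9) (the m = 0, k = 1 case)
    C = factorial(n) // (factorial(r - 1) * factorial(n - r))
    total = 0.0
    for i in range(1, r + 1):
        for j in range(1, terms + 1):
            total += ((-1) ** (r - i) * comb(r - 1, r - i)
                      * delta(theta, j, n - i + 1)
                      * gamma(p / alpha + 1) / j ** (p / alpha - 1))
    return C * total

theta, alpha, n = 0.5, 2.0, 4
mu11 = cos_moment(1, 1, 1, theta, alpha)
col_sum = sum(cos_moment(1, r, n, theta, alpha) for r in range(1, n + 1))
assert abs(col_sum - n * mu11) < 1e-8  # concomitant means average to E[V]
```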

    Let V_{[r:n, \tilde{m}, k]} and V_{[s:n, \tilde{m}, k]} , r, s = 1, \cdots, n, r < s be the concomitants of the r^{th} and s^{th} GOS from the standard BGW distribution. Then, the joint PDF and the product moments of V_{[r:n, \tilde{m}, k]} and V_{[s:n, \tilde{m}, k]} are given by the following two theorems:

    Theorem 3. The joint PDF of concomitants V_{[r:n, \tilde{m}, k]} and V_{[s:n, \tilde{m}, k]} , r, s = 1, \cdots, n, r < s is given by

    \begin{eqnarray} h_{[r, s:n, \tilde{m}, k]}(v_{1}, v_{2})& = &\alpha^{2} C_{s-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = r+1}^{s}\sum\limits_{\kappa_{1} = 1}^{\infty} \sum\limits_{\kappa_{2} = 1}^{\infty}a_{i, r}\, a_{j, s}^{(r)}\, \kappa_{1}^{2}\kappa_{2}\delta_{\theta}(\kappa_{1}, \kappa_{2}, \gamma_{i}, \gamma_{j}) \\ &&\times v_{1}^{\alpha-1}v_{2}^{\alpha-1} e^{-(\kappa_{1} v_{1}^{\alpha}+\kappa_{2} v_{2}^{\alpha})}, v_{1}, \, v_{2} > 0, \end{eqnarray} (3.10)

    where

    \delta_{\theta}(\kappa_{1}, \kappa_{2}, \gamma_{i}, \gamma_{j}) = \binom{\theta}{\kappa_{1}}\binom{\theta}{\kappa_{2}}\sum\limits_{\tau_{1} = 0}^{\gamma_{j}-1} \sum\limits_{\tau_{2} = 0}^{\gamma_{i}-\gamma_{j}-1}(-1)^{\kappa_{1}+\kappa_{2}+\tau_{1} +\tau_{2}+2}\binom{\gamma_{j}-1}{\tau_{1}} \binom{\gamma_{i}-\gamma_{j}-1}{\tau_{2}}
    \times B(\kappa_{1}+\kappa_{2}, \theta \tau_{2} +1){}_3 F_{2}\left(\kappa_{2}, -\theta \tau_{1}, \kappa_{1}+\kappa_{2};\kappa_{2}+1, \kappa_{1}+\kappa_{2}+\theta \tau_{2}+1;1\right),

    where {}_3 F_{2}(a_{1}, \, a_{2}, \, a_{3};\, b_{1}, \, b_{2};\, x) denotes the hypergeometric function defined by

    {}_3 F_{2}(a_{1}, \, a_{2}, \, a_{3};\, b_{1}, \, b_{2};\, x) = \sum\limits_{\ell = 0}^{\infty} \frac{(a_{1})_{\ell}(a_{2})_{\ell}(a_{3})_{\ell}}{(b_{1})_{\ell}(b_{2})_{\ell}} \frac{x^{\ell}}{\ell!},

    and (c)_{\ell} = c (c+1)\cdots (c+\ell-1) is the ascending factorial.

    Proof. Using (1.3) in (1.7), the joint PDF of the r^{th} and s^{th} CGOS V_{[r:n, \tilde{m}, k]} and V_{[s:n, \tilde{m}, k]} is given as

    \begin{eqnarray} h_{[r, s:n, \tilde{m}, k]}(v_{1}, v_{2})& = & C_{s-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = r+1}^{s} a_{i, r}\, a_{j, s}^{(r)} \int_{0}^{\infty}\int_{u_{1}}^{\infty} f(v_{1}|u_{1})f(v_{2}|u_{2})\\ &&\times\left[\bar{F}(u_{1})\right]^{\gamma_{i}-\gamma_{j}-1}\left[\bar{F}(u_{2})\right]^{\gamma_{j}-1}f(u_{1})f(u_{2}) du_{2}\, du_{1}. \end{eqnarray} (3.11)

    In view of (2.7) and (2.8), we get

    \begin{eqnarray} h_{[r, s:n, \tilde{m}, k]}(v_{1}, v_{2})& = & \alpha^{4} C_{s-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = r+1}^{s}\sum\limits_{\kappa_{1} = 1}^{\infty} \sum\limits_{\kappa_{2} = 1}^{\infty} a_{i, r}\, a_{j, s}^{(r)}(-1)^{\kappa_{1}+\kappa_{2}+2}\\ &&\times \kappa_{1}^{2}\kappa_{2}^{2} \binom{\theta}{\kappa_{1}}\binom{\theta}{\kappa_{2}} v_{1}^{\alpha-1} v_{2}^{\alpha-1} e^{- \kappa_{1} v_{1}^{\alpha} } e^{- \kappa_{2} v_{2}^{\alpha} }\\ && \times \int_{0}^{\infty} u_{1}^{\alpha-1}e^{- \kappa_{1} u_{1}^{\alpha} }[1-(1-e^{- u_{1}^{\alpha} })^{\theta}]^{\gamma_{i}-\gamma_{j}-1} I(u_{1})du_{1}, \end{eqnarray} (3.12)

    where

    \begin{eqnarray} I(u_{1})& = & \sum\limits_{\tau_{1} = 0}^{\gamma_{j}-1}(-1)^{\tau_{1}}\binom{\gamma_{j}-1}{\tau_{1}}\int_{u_{1}}^{\infty} u_{2}^{\alpha-1} e^{- \kappa_{2} u_{2}^{\alpha} }(1-e^{- u_{2}^{\alpha} })^{\tau_{1}\theta}du_{2}\\ && = \alpha^{-1}\sum\limits_{\tau_{1} = 0}^{\gamma_{j}-1}(-1)^{\tau_{1}}\binom{\gamma_{j}-1}{\tau_{1}} B_{w}(\kappa_{2}, \tau_{1}\theta+1), \end{eqnarray} (3.13)

    where w = e^{-u_{1}^{\alpha}} and B_{w}(., .) denotes the incomplete beta function defined by B_{w}(a_{1}, a_{2}) = \int_{0}^{w} x^{a_{1}-1} (1-x)^{a_{2}-1}dx.

    Now, putting the value of I(u_{1}) in (3.12), we get

    \begin{eqnarray} h_{[r, s:n, \tilde{m}, k]}(v_{1}, v_{2})& = & \alpha^{2} C_{s-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = r+1}^{s}\sum\limits_{\kappa_{1} = 1}^{\infty} \sum\limits_{\kappa_{2} = 1}^{\infty}\sum\limits_{\tau_{1} = 0}^{\gamma_{j}-1}\sum\limits_{\tau_{2} = 0}^{\gamma_{i}-\gamma_{j}-1} a_{i, r}\, a_{j, s}^{(r)}(-1)^{\kappa_{1}+\kappa_{2}+\tau_{1}+\tau_{2}+2}\\ &&\times \kappa_{1}^{2}\kappa_{2}^{2} \binom{\theta}{\kappa_{1}}\binom{\theta}{\kappa_{2}}\binom{\gamma_{j}-1}{\tau_{1}}\binom{\gamma_{i}-\gamma_{j}-1}{\tau_{2}} v_{1}^{\alpha-1} v_{2}^{\alpha-1} e^{- \kappa_{1} v_{1}^{\alpha} } e^{- \kappa_{2} v_{2}^{\alpha} }\\ && \times \int_{0}^{\infty} e^{- \kappa_{1} z }(1-e^{-z})^{\tau_{2}\theta} B_{e^{-z}}(\kappa_{2}, \tau_{1}\theta+1) dz\\ && \text{ [provided that} \; \gamma_{i}-\gamma_{j} \;\text{is an integer] }, \end{eqnarray} (3.14)

    where z = u_{1}^{\alpha} . We know that B_{w}(a_{1}, a_{2}) = \frac{w^{a_{1}}}{a_{1}}{}_2 F_{1}(a_{1}, \, 1-a_{2};\, a_{1}+1;\, w) (see Mathai and Saxena, [30]), and

    \begin{eqnarray} \int_{0}^{1} x^{a-1} (1-x)^{b-1}\, {}_2 F_{1}(c, \, d;\, e;\, x)dx = B(a, b)\, {}_3 F_{2}(c, \, d, \, a;\, e, \, a+b;\, 1).\\ \end{eqnarray} (3.15)

    Therefore,

    \begin{eqnarray*} \label{IIIe14} h_{[r, s:n, \tilde{m}, k]}(v_{1}, v_{2})& = & \alpha^{2} C_{s-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = r+1}^{s}\sum\limits_{\kappa_{1} = 1}^{\infty} \sum\limits_{\kappa_{2} = 1}^{\infty}\sum\limits_{\tau_{1} = 0}^{\gamma_{j}-1}\sum\limits_{\tau_{2} = 0}^{\gamma_{i}-\gamma_{j}-1} a_{i, r}\, a_{j, s}^{(r)}(-1)^{\kappa_{1}+\kappa_{2}+\tau_{1}+\tau_{2}+2}\nonumber\\ &&\times \kappa_{1}^{2}\kappa_{2} \binom{\theta}{\kappa_{1}}\binom{\theta}{\kappa_{2}}\binom{\gamma_{j}-1}{\tau_{1}}\binom{\gamma_{i}-\gamma_{j}-1}{\tau_{2}} v_{1}^{\alpha-1} v_{2}^{\alpha-1} e^{- \kappa_{1} v_{1}^{\alpha} } e^{- \kappa_{2} v_{2}^{\alpha} }\nonumber\\ && \times \int_{0}^{1} t^{\kappa_{1}+\kappa_{2}-1}(1-t)^{\tau_{2}\theta} {}_2 F_{1}(\kappa_{2}, \, -\tau_{1}\theta;\, \kappa_{2}+1;\, t) dt, \end{eqnarray*}

    where t = e^{-z} . Now, using (3.15), we get the result of (3.10).
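Identity (3.15), used in the last step of the proof, can be spot-checked numerically. The Python sketch below picks an arbitrarily chosen terminating case ( d = -2 , a non-positive integer, so both hypergeometric series are finite sums) and compares the Euler-type integral with the closed form:

```python
from math import factorial, exp, lgamma

def poch(c, l):
    # ascending factorial (c)_l = c (c+1) ... (c+l-1)
    p = 1.0
    for i in range(l):
        p *= c + i
    return p

def hyp2f1_poly(c, d, e, x):
    # 2F1(c, d; e; x) for non-positive integer d (the series terminates)
    return sum(poch(c, l) * poch(d, l) / poch(e, l) * x ** l / factorial(l)
               for l in range(int(-d) + 1))

def hyp3f2_unit(a1, a2, a3, b1, b2):
    # 3F2(a1, a2, a3; b1, b2; 1) for non-positive integer a2
    return sum(poch(a1, l) * poch(a2, l) * poch(a3, l)
               / (poch(b1, l) * poch(b2, l) * factorial(l))
               for l in range(int(-a2) + 1))

def beta_fn(a, b):
    return exp(lgamma(a) + lgamma(b) - lgamma(a + b))

a, b, c, d, e = 2.0, 3.0, 1.5, -2.0, 2.5  # hypothetical test values
N = 50000
h = 1.0 / N
# trapezoidal rule; the integrand vanishes at both endpoints since a, b > 1
lhs = h * sum((i * h) ** (a - 1) * (1 - i * h) ** (b - 1)
              * hyp2f1_poly(c, d, e, i * h) for i in range(1, N))
rhs = beta_fn(a, b) * hyp3f2_unit(c, d, a, e, a + b)
assert abs(lhs - rhs) < 1e-6
```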

    Corollary 3. At m_{1} = m_{2} = \cdots = m_{n-1} = m\neq-1 in (3.10), the joint PDF of concomitants V_{[r:n, m, k]} and V_{[s:n, m, k]} of the r^{th} and s^{th} m-GOS for the standard BGW distribution is given by

    \begin{eqnarray} h_{[r, s:n, m, k]}(v_{1}, v_{2})& = & \dfrac{\alpha^{2} C_{s-1}}{(r-1)!(s-r-1)!}\sum\limits_{i = 1}^{r}\sum\limits_{j = r+1}^{s}\sum\limits_{\kappa_{1} = 1}^{\infty} \sum\limits_{\kappa_{2} = 1}^{\infty} \frac{(-1)^{r-i+s-j} }{(m+1)^{s-2}} \binom{r-1}{r-i}\binom{s-r-1}{s-j}\, \kappa_{1}^{2}\kappa_{2} \\ &&\times\delta_{\theta}(\kappa_{1}, \kappa_{2}, \gamma_{i}, \gamma_{j})\ v_{1}^{\alpha-1}v_{2}^{\alpha-1} e^{-(\kappa_{1} v_{1}^{\alpha}+\kappa_{2} v_{2}^{\alpha})}, v_{1}, \, v_{2} > 0, \end{eqnarray} (3.16)

    where \gamma_{i} = k+(n-i)(m+1).

    Remark 3. For m = 0 and k = 1 in (3.16), we obtain the joint PDF of the r^{th} and s^{th} COS from the standard BGW distribution as

    \begin{eqnarray} h_{[r, s:n]}(v_{1}, v_{2})& = &\alpha^{2} C_{r, s:n} \sum\limits_{i = 1}^{r}\sum\limits_{j = r+1}^{s}\sum\limits_{\kappa_{1} = 1}^{\infty} \sum\limits_{\kappa_{2} = 1}^{\infty}(-1)^{r-i+s-j} \binom{r-1}{r-i}\binom{s-r-1}{s-j}\, \kappa_{1}^{2}\kappa_{2} \\ &&\times\delta_{\theta}(\kappa_{1}, \kappa_{2}, n-i+1, n-j+1)\ v_{1}^{\alpha-1}v_{2}^{\alpha-1} e^{-(\kappa_{1} v_{1}^{\alpha}+\kappa_{2} v_{2}^{\alpha})}, v_{1}, \, v_{2} > 0, \end{eqnarray} (3.17)

    where C_{r, s:n} = \frac{n!}{(r-1)!(s-r-1)!(n-s)!}.

    Theorem 4. The product moments of two concomitants V_{[r:n, \tilde{m}, k]} and V_{[s:n, \tilde{m}, k]} are given by

    \begin{eqnarray} \mu_{[r, s:n, \tilde{m}, k]}^{(p, q)}& = & C_{s-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = r+1}^{s}\sum\limits_{\kappa_{1} = 1}^{\infty} \sum\limits_{\kappa_{2} = 1}^{\infty}a_{i, r}\, a_{j, s}^{(r)} \, \delta_{\theta}(\kappa_{1}, \kappa_{2}, \gamma_{i}, \gamma_{j})\\ &&\times \frac{\Gamma(\frac{p}{\alpha}+1)}{\kappa_{1}^{\frac{p}{\alpha}-1} } \frac{\Gamma(\frac{q}{\alpha}+1)}{\kappa_{2}^{\frac{q}{\alpha}} }. \end{eqnarray} (3.18)

    Proof. Using (3.10), the p^{th} and q^{th} moments of V_{[r:n, \tilde{m}, k]} and V_{[s:n, \tilde{m}, k]} are given as

    \begin{eqnarray} \mu_{[r, s:n, \tilde{m}, k]}^{(p, q)}& = &E[V_{[r:n, \tilde{m}, k]}^{p} V_{[s:n, \tilde{m}, k]}^{q}] = \int_{0}^{\infty}\int_{0}^{\infty} v_{1}^{p} v_{2}^{q} h_{[r, s:n, \tilde{m}, k]}(v_{1}, v_{2}) dv_{1}dv_{2}\\ && = C_{s-1}\sum\limits_{i = 1}^{r}\sum\limits_{j = r+1}^{s}\sum\limits_{\kappa_{1} = 1}^{\infty} \sum\limits_{\kappa_{2} = 1}^{\infty} a_{i, r}\, a_{j, s}^{(r)} \, \kappa_{1}^{2}\kappa_{2} \, \delta_{\theta}(\kappa_{1}, \kappa_{2}, \gamma_{i}, \gamma_{j})\\ \\ &&\times\int_{0}^{\infty}\int_{0}^{\infty} z_{1}^{\frac{p}{\alpha}}z_{2}^{\frac{q}{\alpha}} e^{-(\kappa_{1} z_{1}+\kappa_{2} z_{2})}dz_{1}dz_{2}, \end{eqnarray} (3.19)

    where z_{i} = v_{i}^{\alpha}, i = 1, 2 . Then, after integration, we get (3.18).

    Corollary 4. Setting m_{1} = m_{2} = \cdots = m_{n-1} = m\neq-1 in (3.18), we can get the product moments of two concomitants of m-GOS of the standard BGW distribution.

    Remark 4. When m = 0 and k = 1 in (3.18), we get the product moments of COS as

    \begin{eqnarray} \mu_{[r, s:n]}^{(p, q)}& = &E[V_{[r:n]}^{p} V_{[s:n]}^{q}]\\ && = C_{r, s:n}\sum\limits_{i = 1}^{r}\sum\limits_{j = r+1}^{s}\sum\limits_{\kappa_{1} = 1}^{\infty} \sum\limits_{\kappa_{2} = 1}^{\infty} (-1)^{r-i+s-j} \binom{r-1}{r-i}\binom{s-r-1}{s-j} \\ &&\times \delta_{\theta}(\kappa_{1}, \kappa_{2}, n-i+1, n-j+1) \frac{\Gamma(\frac{p}{\alpha}+1)}{\kappa_{1}^{\frac{p}{\alpha}-1} }\frac{\Gamma(\frac{q}{\alpha}+1)}{\kappa_{2}^{\frac{q}{\alpha}} }. \end{eqnarray} (3.20)

    Remark 5. At m_{i} = R_{i}, \, n = m_{0}+\sum_{i = 1}^{m_{0}}R_{i}, and \gamma_{i} = n-\sum_{\nu = 1}^{i-1}R_{\nu}-i+1, 1\leq i\leq m_{0} in Theorems 1–4, the results for PT-Ⅱ censored order statistics can be obtained.

    Remark 6. From (3.9), the expressions of means and variances of the COS Y_{[i:n]}, \, i = 1, \cdots, n, arising from the BGW distribution, are obtained as follows:

    E[Y_{[i:n]}] = \beta_{2}^{\star}E[V_{[i:n]}]
    = \beta_{2}^{\star} \mu_{[i:n]},
    Var[Y_{[i:n]}] = {\beta_{2}^{\star}}^{2} Var[V_{[i:n]}]
    = {\beta_{2}^{\star}}^{2} \delta_{i, i:n},

    where Var[V_{[i:n]}] = \mu_{[i:n]}^{(2)}-(\mu_{[i:n]})^{2} . The expression of the covariances between Y_{[i:n]} and Y_{[j:n]} is given, using (3.9) and (3.20), by

    Cov[Y_{[i:n]}, Y_{[j:n]}] = {\beta_{2}^{\star}}^{2} Cov[V_{[i:n]}, V_{[j:n]}]
    = {\beta_{2}^{\star}}^{2} \delta_{i, j:n},

    where Cov(V_{[i:n]}, V_{[j:n]}) = \mu_{[i, j:n]}- \mu_{[i:n]} \mu_{[j:n]}, 1\leq i < j\leq n.

    The means and variances of the COS of the standard BGW distribution for n = 1, \cdots, 5 and different values of the parameters \alpha and \theta are calculated in Tables 1 and 2. It can be noted that the condition \sum_{r = 1}^{n}\mu_{[r:n]}^{(j)} = n \, \mu_{[1:1]}^{(j)}, \; j = 1, 2 , is satisfied (see David and Nagaraja, [10]). In Tables 3–6, we have computed the means and variances of the concomitants of PT-Ⅱ censored order statistics. From Tables 1–6, one can observe that the variances decrease with respect to \alpha .

    Table 1.  Means and variances of the COS for the standard BGW distribution with \theta = 0.50 .
    Mean Variance
    n r \alpha=1 \alpha=2 \alpha=1 \alpha=2
    1 1 0.613519 0.626050 0.710359 0.221580
    2 1 0.454473 0.503844 0.546719 0.200615
    2 0.772564 0.748256 0.823408 0.212676
    3 1 0.363977 0.427653 0.445089 0.181090
    2 0.635465 0.656226 0.700844 0.204833
    3 0.841114 0.794272 0.870593 0.210246
    4 1 0.304634 0.374043 0.375757 0.164726
    2 0.542006 0.588481 0.610825 0.195696
    3 0.728923 0.723970 0.773394 0.204791
    4 0.878510 0.817706 0.897399 0.209868
    5 1 0.262399 0.333637 0.325339 0.151085
    2 0.473577 0.535668 0.541753 0.186636
    3 0.644650 0.667700 0.696872 0.198827
    4 0.785106 0.761484 0.816517 0.205248
    5 0.901861 0.831761 0.914894 0.210035

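The identity \sum_{r = 1}^{n}\mu_{[r:n]} = n \, \mu_{[1:1]} noted above can be checked numerically against the \alpha = 1 mean column of Table 1; a minimal sketch (values transcribed from the table):

```python
# Means mu_[r:n] of the COS for the standard BGW distribution
# with theta = 0.50, alpha = 1, transcribed from Table 1.
means = {
    1: [0.613519],
    2: [0.454473, 0.772564],
    3: [0.363977, 0.635465, 0.841114],
    4: [0.304634, 0.542006, 0.728923, 0.878510],
    5: [0.262399, 0.473577, 0.644650, 0.785106, 0.901861],
}

mu_11 = means[1][0]  # mu_[1:1], the population mean of V
for n, col in means.items():
    # David and Nagaraja's identity: sum_r mu_[r:n] = n * mu_[1:1]
    assert abs(sum(col) - n * mu_11) < 1e-4, (n, sum(col))
```

The sums agree with n \, \mu_{[1:1]} up to the six-decimal rounding of the tabulated values.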
    Table 2.  Means and variances of the COS for the standard BGW distribution with \theta = 0.90 .
    n r Mean (\alpha=1) Mean (\alpha=2) Variance (\alpha=1) Variance (\alpha=2)
    1 1 0.933392 0.846157 0.956977 0.217410
    2 1 0.898058 0.823580 0.935961 0.219775
    2 0.968725 0.868735 0.975495 0.214025
    3 1 0.873819 0.807493 0.922204 0.221774
    2 0.946537 0.855752 0.959950 0.214226
    3 0.979820 0.875226 0.982898 0.213799
    4 1 0.855345 0.794891 0.912004 0.223493
    2 0.929241 0.845300 0.948711 0.214709
    3 0.963834 0.866204 0.970591 0.213524
    4 0.985148 0.878234 0.986887 0.213854
    5 1 0.840423 0.784489 0.903900 0.224999
    2 0.915036 0.836498 0.939966 0.215307
    3 0.950548 0.858502 0.961071 0.213522
    4 0.972691 0.871339 0.976741 0.213459
    5 0.988263 0.879957 0.989375 0.213938

    Table 3.  Means of the concomitants of PT-Ⅱ censored order statistics for the standard BGW distribution with \theta = 0.50 .
    \alpha m_{0}, n Scheme Mean
    1 2, 10 8, 0 0.155927 0.664362
    1 2, 10 0, 8 0.155927 0.292953
    1 3, 10 7, 0, 0 0.155927 0.52911 0.799614
    1 3, 10 0, 0, 7 0.155927 0.292953 0.412797
    1 4, 10 6, 0, 0, 0 0.155927 0.453142 0.681045 0.858899
    1 4, 10 0, 0, 0, 6 0.155927 0.292953 0.412797 0.51814
    1 5, 10 5, 0, 0, 0, 0 0.155927 0.403773 0.601249 0.760842 0.891585
    1 5, 10 0, 0, 0, 0, 5 0.155927 0.292953 0.412797 0.51814 0.61135
    2 2, 10 8, 0 0.220176 0.671147
    2 2, 10 0, 8 0.220176 0.377969
    2 3, 10 7, 0, 0 0.220176 0.574761 0.767534
    2 3, 10 0, 0, 7 0.220176 0.377969 0.492427
    2 4, 10 6, 0, 0, 0 0.220176 0.516571 0.691139 0.805731
    2 4, 10 0, 0, 0, 6 0.220176 0.377969 0.492427 0.579475
    2 5, 10 5, 0, 0, 0, 0 0.220176 0.476621 0.636421 0.745858 0.825689
    2 5, 10 0, 0, 0, 0, 5 0.220176 0.377969 0.492427 0.579475 0.648751

    Table 4.  Means of the concomitants of PT-Ⅱ censored order statistics for the standard BGW distribution with \theta = 0.90 .
    \alpha m_{0}, n Scheme Mean
    1 2, 10 8, 0 0.791751 0.94913
    1 2, 10 0, 8 0.791751 0.867371
    1 3, 10 7, 0, 0 0.791751 0.924635 0.973624
    1 3, 10 0, 0, 7 0.791751 0.867371 0.905027
    1 4, 10 6, 0, 0, 0 0.791751 0.908991 0.955924 0.982475
    1 4, 10 0, 0, 0, 6 0.791751 0.867371 0.905027 0.929301
    1 5, 10 5, 0, 0, 0, 0 0.791751 0.897741 0.94274 0.969107 0.986931
    1 5, 10 0, 0, 0, 0, 5 0.791751 0.867371 0.905027 0.929301 0.946824
    2 2, 10 8, 0 0.749137 0.856937
    2 2, 10 0, 8 0.749137 0.805526
    2 3, 10 7, 0, 0 0.749137 0.84219 0.871684
    2 3, 10 0, 0, 7 0.749137 0.805526 0.830734
    2 4, 10 6, 0, 0, 0 0.749137 0.832503 0.861564 0.876744
    2 4, 10 0, 0, 0, 6 0.749137 0.805526 0.830734 0.846014
    2 5, 10 5, 0, 0, 0, 0 0.749137 0.825394 0.853831 0.869298 0.879226
    2 5, 10 0, 0, 0, 0, 5 0.749137 0.805526 0.830734 0.846014 0.85658

    Table 5.  Variances of the concomitants of PT-Ⅱ censored order statistics for the standard BGW distribution with \theta = 0.50 .
    \alpha m_{0}, n Scheme Variance
    1 2, 10 8, 0 0.195285 0.741739
    1 2, 10 0, 8 0.195285 0.347562
    1 3, 10 7, 0, 0 0.195285 0.606725 0.840167
    1 3, 10 0, 0, 7 0.195285 0.347562 0.469548
    1 4, 10 6, 0, 0, 0 0.195285 0.525646 0.734255 0.882579
    1 4, 10 0, 0, 0, 6 0.195285 0.347562 0.469548 0.570046
    1 5, 10 5, 0, 0, 0, 0 0.195285 0.47150 0.658836 0.796939 0.906852
    1 5, 10 0, 0, 0, 0, 5 0.195285 0.347562 0.469548 0.570046 0.654972
    2 2, 10 8, 0 0.107449 0.213924
    2 2, 10 0, 8 0.107449 0.150092
    2 3, 10 7, 0, 0 0.107449 0.19876 0.210507
    2 3, 10 0, 0, 7 0.107449 0.150092 0.170312
    2 4, 10 6, 0, 0, 0 0.107449 0.186296 0.203372 0.209697
    2 4, 10 0, 0, 0, 6 0.107449 0.150092 0.170312 0.182349
    2 5, 10 5, 0, 0, 0, 0 0.107449 0.176605 0.196217 0.204539 0.209823
    2 5, 10 0, 0, 0, 0, 5 0.107449 0.150092 0.170312 0.182349 0.190473

    Table 6.  Variances of the concomitants of PT-Ⅱ censored order statistics for the standard BGW distribution with \theta = 0.90 .
    \alpha m_{0}, n Scheme Variance
    1 2, 10 8, 0 0.877894 0.963287
    1 2, 10 0, 8 0.877894 0.913014
    1 3, 10 7, 0, 0 0.877894 0.946946 0.978427
    1 3, 10 0, 0, 7 0.877894 0.913014 0.932212
    1 4, 10 6, 0, 0, 0 0.877894 0.937071 0.965229 0.984791
    1 4, 10 0, 0, 0, 6 0.877894 0.913014 0.932212 0.946122
    1 5, 10 5, 0, 0, 0, 0 0.877894 0.93025 0.956015 0.974096 0.988277
    1 5, 10 0, 0, 0, 0, 5 0.877894 0.913014 0.932212 0.946122 0.957275
    2 2, 10 8, 0 0.230545 0.214788
    2 2, 10 0, 8 0.230545 0.218499
    2 3, 10 7, 0, 0 0.230545 0.215351 0.213791
    2 3, 10 0, 0, 7 0.230545 0.218499 0.214908
    2 4, 10 6, 0, 0, 0 0.230545 0.21593 0.213631 0.213794
    2 4, 10 0, 0, 0, 6 0.230545 0.218499 0.214908 0.213561
    2 5, 10 5, 0, 0, 0, 0 0.230545 0.216466 0.213713 0.213429 0.213892
    2 5, 10 0, 0, 0, 0, 5 0.230545 0.218499 0.214908 0.213561 0.213094


    In this part, we obtain the BLU estimator of \beta_{2}^{\star} involved in the BGW distribution using Stokes's RSS. Assume that n sets of units, each of size n , are taken from the BGW distribution with the PDF given in (2.1). Let X_{(i:n)_{i}} , i = 1, \cdots, n , represent the observation made on the auxiliary variable X in the i^{th} unit of the RSS, and Y_{[i:n]_{i}} represent the measurement performed on the Y variable in the same unit. It is obvious that Y_{[i:n]_{i}} has the same distribution as Y_{[i:n]} , the concomitant of the i^{th} order statistic (see David and Nagaraja, [10], p. 145). From Remark 6, the mean and the variance of Y_{[i:n]_{i}} are given as E[Y_{[i:n]_{i}}] = \beta_{2}^{\star} \mu_{[i:n]}, and Var[Y_{[i:n]_{i}}] = {\beta_{2}^{\star}}^{2} \delta_{i, i:n}, \, 1\leq i \leq n. Because the two measurements Y_{[i:n]_{i}} and Y_{[j:n]_{j}} (i\neq j) of Y are based on two independent samples, we have Cov[Y_{[i:n]_{i}}, Y_{[j:n]_{j}}] = 0.

    Let \textbf{Y}_{[n]} = (Y_{[1:n]_{1}}, Y_{[2:n]_{2}}, \cdots, Y_{[n:n]_{n}})^{'} denote the column vector of COS. Then, the mean vector and the variance-covariance matrix of \textbf{Y}_{[n]} can be written as

    \begin{eqnarray} E[\textbf{Y}_{[n]}] = \beta_{2}^{\star} \boldsymbol{\mu}, \end{eqnarray} (4.1)

    and

    \begin{eqnarray} D[\textbf{Y}_{[n]}] = {\beta_{2}^{\star}}^{2} \Lambda, \end{eqnarray} (4.2)

    where \boldsymbol{\mu} = (\mu_{[1:n]}, \cdots, \mu_{[n:n]})^{'} and \Lambda = diag(\delta_{1, 1:n}, \delta_{2, 2:n}, \cdots, \delta_{n, n:n}). If the parameters \alpha and \theta are known, then the combination of (4.1) and (4.2) allows us to apply the generalized Gauss-Markov theorem (see David and Nagaraja, [10], p. 185). Hence, the BLU estimator \hat{\beta_{2}^{\star}} of \beta_{2}^{\star} is given as

    \begin{eqnarray} \hat{\beta_{2}^{\star}}& = &(\boldsymbol{\mu^{'}}\Lambda^{-1}\boldsymbol{\mu})^{-1}\boldsymbol{\mu^{'}}\Lambda^{-1}\textbf{Y}_{[n]}\\ && = \sum\limits_{i = 1}^{n} a_{i} Y_{[i:n]_{i}}, \end{eqnarray} (4.3)

    where a_{i} = \frac{ \mu_{[i:n]}/\delta_{i, i:n}}{\sum_{i = 1}^{n}\mu_{[i:n]}^2/\delta_{i, i:n}} , and the variance of \hat{\beta_{2}^{\star}} is given by

    \begin{eqnarray} Var[\hat{\beta_{2}^{\star}}]& = &(\boldsymbol{\mu^{'}}\Lambda^{-1}\boldsymbol{\mu})^{-1}{\beta_{2}^{\star}}^{2}\\ && = \left( \sum\limits_{i = 1}^{n}\mu_{[i:n]}^2/\delta_{i, i:n}\right) ^{-1}{\beta_{2}^{\star}}^{2}. \end{eqnarray} (4.4)

    The coefficients a_{i} of Y_{[i:n]_{i}}, i = 1, \cdots, n , in \hat{\beta_{2}^{\star}} and the ratio Var[\hat{\beta_{2}^{\star}}]/{\beta_{2}^{\star}}^{2} have been calculated for n = 1, \cdots, 5 and different values of the parameters \alpha and \theta ; they are presented in Tables 7 and 8.
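For a concrete check of (4.3) and (4.4), the coefficients and the variance ratio for \theta = 0.50 , \alpha = 1 , n = 2 can be reproduced directly from the Table 1 means and variances; a minimal sketch (inputs transcribed from Table 1, outputs matching the corresponding row of Table 7):

```python
# mu_[i:2] and delta_{i,i:2} for the standard BGW distribution
# with theta = 0.50, alpha = 1 (from Table 1, n = 2).
mu = [0.454473, 0.772564]
delta = [0.546719, 0.823408]

# Common denominator in (4.3) and (4.4): sum_i mu_i^2 / delta_i
denom = sum(m * m / d for m, d in zip(mu, delta))

# BLUE coefficients a_i = (mu_i / delta_i) / denom, Eq. (4.3)
a = [(m / d) / denom for m, d in zip(mu, delta)]

# Var[beta2_hat] / beta2^2 = 1 / denom, Eq. (4.4)
var_ratio = 1.0 / denom

# Agrees with Table 7 (alpha = 1, n = 2) to the tabulated precision
assert abs(a[0] - 0.75389) < 1e-4
assert abs(a[1] - 0.85091) < 1e-4
assert abs(var_ratio - 0.90691) < 1e-4
```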

    Table 7.  The coefficients a_{i} in the BLUE \hat{\beta_{2}^{\star}} and Var[\hat{\beta_{2}^{\star}}]/{\beta_{2}^{\star}}^{2} for \theta = 0.50 .
    \alpha n Coefficients (a_i) Var[\hat{\beta_{2}^{\star}}]/{\beta_{2}^{\star}}^{2}
    1 1 1.62994 1.88722
    2 0.75389 0.85091 0.90691
    3 0.48490 0.53764 0.57288 0.59296
    4 0.35637 0.39005 0.41430 0.43032 0.43957
    5 0.28143 0.30502 0.32279 0.33551 0.34396 0.34893
    2 1 1.59732 0.56534
    2 0.64431 0.90259 0.25654
    3 0.38632 0.52409 0.61801 0.16359
    4 0.27147 0.35952 0.42265 0.46582 0.11956
    5 0.20763 0.26986 0.31575 0.34884 0.37235 0.09402

    Table 8.  The coefficients a_{i} in the BLUE \hat{\beta_{2}^{\star}} and Var[\hat{\beta_{2}^{\star}}]/{\beta_{2}^{\star}}^{2} for \theta = 0.90 .
    \alpha n Coefficients (a_i) Var[\hat{\beta_{2}^{\star}}]/{\beta_{2}^{\star}}^{2}
    1 1 1.07136 1.09843
    2 0.52613 0.54453 0.54834
    3 0.34606 0.36012 0.36408 0.36523
    4 0.25675 0.26814 0.27185 0.27327 0.27375
    5 0.20354 0.21310 0.21651 0.21800 0.21866 0.21891
    2 1 1.18181 0.30365
    2 0.56671 0.61384 0.15123
    3 0.36625 0.40182 0.41178 0.10059
    4 0.26791 0.29656 0.30558 0.30934 0.07533
    5 0.20987 0.23386 0.24202 0.24571 0.24758 0.06019


    A modified RSS approach was presented by Stokes [43], wherein only the largest or smallest judgment-ranked unit is selected for quantification. Let n random samples, each of size n , be drawn from the BGW distribution. From each of the n samples, choose the unit for which the measurement on the auxiliary variable X is the smallest (largest) and measure the Y variable associated with it. Then, we call the collection of observations Y_{[1:n]_{1}}, Y_{[1:n]_{2}}, \cdots, Y_{[1:n]_{n}} ( Y_{[n:n]_{1}}, Y_{[n:n]_{2}}, \cdots, Y_{[n:n]_{n}} ) the lower RSS (LRSS) (respectively, the upper RSS (URSS)).
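The three selection schemes described above can be sketched generically: each of the n sets is ranked by the auxiliary X , and the Y concomitant of a prescribed rank is quantified. This is an illustrative sketch, not the paper's code; the helper `ranked_set_sample` and the use of uniform pairs are assumptions for demonstration only:

```python
import random

def ranked_set_sample(pairs_sets, ranks):
    """From the i-th set of (x, y) pairs, keep the y whose x has rank ranks[i]."""
    sample = []
    for pairs, r in zip(pairs_sets, ranks):
        ordered = sorted(pairs, key=lambda p: p[0])  # judgment ranking by X
        sample.append(ordered[r][1])                 # measure the Y concomitant
    return sample

# n sets of size n with hypothetical uniform (X, Y) pairs
n = 5
sets = [[(random.random(), random.random()) for _ in range(n)] for _ in range(n)]

rss  = ranked_set_sample(sets, list(range(n)))  # Stokes's RSS: rank i in set i
lrss = ranked_set_sample(sets, [0] * n)         # LRSS: smallest X in every set
urss = ranked_set_sample(sets, [n - 1] * n)     # URSS: largest X in every set
assert len(rss) == len(lrss) == len(urss) == n
```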

    Based on LRSS and URSS, the BLU estimators \tilde{\beta}_{2, LRSS}^{\star} and \tilde{\beta}_{2, URSS}^{\star} of \beta_{2}^{\star} are

    \begin{eqnarray} \tilde{\beta}_{2, LRSS}^{\star}& = &\frac{1}{n \mu_{[1:n]} }\sum\limits_{i = 1}^{n} Y_{[1:n]_{i}}, \end{eqnarray} (4.5)
    \begin{eqnarray} \tilde{\beta}_{2, URSS}^{\star}& = &\frac{1}{ n \mu_{[n:n]} }\sum\limits_{i = 1}^{n} Y_{[n:n]_{i}}, \end{eqnarray} (4.6)

    and their variances are

    \begin{eqnarray} Var[\tilde{\beta}_{2, LRSS}^{\star}]& = &\left(n \mu_{[1:n]}^2/\delta_{1, 1:n}\right) ^{-1}{\beta_{2}^{\star}}^{2}, \end{eqnarray} (4.7)
    \begin{eqnarray} Var[\tilde{\beta}_{2, URSS}^{\star}]& = &\left(n \mu_{[n:n]}^2/\delta_{n, n:n}\right) ^{-1}{\beta_{2}^{\star}}^{2}. \end{eqnarray} (4.8)

    The efficiencies e_{1} of \tilde{\beta}_{2, LRSS}^{\star} and e_{2} of \tilde{\beta}_{2, URSS}^{\star} relative to \hat{\beta_{2}^{\star}} are given by

    e_{1} = \dfrac{Var[\hat{\beta_{2}^{\star}}]}{Var[\tilde{\beta}_{2, LRSS}^{\star}]}, \, \, \, \, e_{2} = \dfrac{Var[\hat{\beta_{2}^{\star}}]}{Var[\tilde{\beta}_{2, URSS}^{\star}]},

    see, for example, Koshti and Kamalja [27] and Philip and Thomas [37]. We have computed the efficiencies e_{1} and e_{2} for n = 2, \cdots, 5 , \alpha = 1, 2 , and \theta = 0.50, 0.90 , which are presented in Table 9. From Table 9, it can be observed that:

    Table 9.  Efficiencies of the estimators \tilde{\beta}_{2, LRSS}^{\star} and \tilde{\beta}_{2, URSS}^{\star} relative to \hat{\beta_{2}^{\star}} .
    n \theta e_{1} (\alpha=1) e_{1} (\alpha=2) e_{2} (\alpha=1) e_{2} (\alpha=2)
    2 0.50 0.68524 0.64926 1.31476 1.35074
    3 0.50 0.52948 0.49563 1.44557 1.47260
    4 0.50 0.43425 0.40617 1.51216 1.52362
    5 0.50 0.36923 0.34637 1.55104 1.54852
    2 0.90 0.94500 0.93347 1.05501 1.06654
    3 0.90 0.90719 0.88724 1.07020 1.08120
    4 0.90 0.87843 0.85183 1.07685 1.08669
    5 0.90 0.85528 0.82321 1.08048 1.08932


    ● The efficiency e_{1} is less than one for all selected values of \alpha, \theta , and n . So, \hat{\beta_{2}^{\star}} is relatively more efficient than \tilde{\beta}_{2, LRSS}^{\star} .

    ● The efficiency e_{1} decreases as \alpha increases, and for a fixed pair (n, \alpha) , e_{1} increases as \theta increases.

    ● The efficiency e_{2} is greater than one for all selected values of \alpha, \theta , and n . Thus, \tilde{\beta}_{2, URSS}^{\star} is relatively more efficient than \hat{\beta_{2}^{\star}} .

    ● The efficiency e_{2} increases as \alpha increases, except for n = 5 with \theta = 0.50 , and for a fixed pair (n, \alpha) , e_{2} decreases as \theta increases.
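The tabulated efficiencies can be reproduced from (4.4), (4.7), and (4.8) using the Table 1 moments; a minimal sketch for \theta = 0.50 , \alpha = 1 , n = 2 (inputs transcribed from Table 1, outputs matching the first row of Table 9):

```python
# mu_[i:2] and delta_{i,i:2} for theta = 0.50, alpha = 1 (Table 1, n = 2)
mu = [0.454473, 0.772564]
delta = [0.546719, 0.823408]
n = 2

# Var / beta2^2 under full RSS, Eq. (4.4)
var_rss = 1.0 / sum(m * m / d for m, d in zip(mu, delta))
# Var / beta2^2 under LRSS and URSS, Eqs. (4.7) and (4.8)
var_lrss = delta[0] / (n * mu[0] ** 2)
var_urss = delta[-1] / (n * mu[-1] ** 2)

e1 = var_rss / var_lrss
e2 = var_rss / var_urss

# First row of Table 9 (theta = 0.50, alpha = 1)
assert abs(e1 - 0.68524) < 1e-4
assert abs(e2 - 1.31476) < 1e-4
```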

    For illustration purposes, we have considered the American Football League dataset given in Jamalizadeh and Kundu [19]. The bivariate dataset represents the game time to the first points scored by kicking the ball between the goal posts (X) and the game time to the first points scored by moving the ball into the end zone (Y) . Pathak et al. [34] demonstrated that the BGW distribution fits these data better than other real-life time models. Here, we draw ranked set samples of size five from the forty-two pairs of observations. The samples under the RSS schemes are displayed in Table 10.

    Table 10.  Samples of size n = 5 under various RSS schemes.
    Scheme Sample values for Y-variable
    RSS 0.75 7.78 38.07 49.75 20.57
    LRSS 0.75 2.9 2.9 6.42 3.98
    URSS 49.88 15.53 49.75 42.35 20.57


    The estimator of \beta_{2}^{\star} under the various RSS schemes is a function of \alpha and \theta , which are unknown in this case. Thus, the method of moments can be used (see Kamalja and Koshti [21], for example). To obtain the moment estimators of \alpha and \theta , we use the moment equations based on the moments of the Y -observations and the moment equation based on the correlation between X and Y . These give \hat{\alpha} = 3.39821 and \hat{\theta} = 0.24259 . Table 11 shows the estimates of \beta_{2}^{\star} under the RSS, LRSS, and URSS schemes. The results show that \tilde{\beta}_{2, URSS}^{\star} has the smallest variance, which is consistent with the findings of the efficiency study in Section 4.

    Table 11.  The estimates of \beta_{2}^{\star} under various RSS schemes.
    Scheme Estimator of \beta_{2}^{\star} Estimate of \beta_{2}^{\star} Variance/{\beta_{2}^{\star}}^{2}
    RSS \hat{\beta_{2}^{\star}} 48.4195 0.07167
    LRSS \tilde{\beta}_{2, LRSS}^{\star} 33.1903 0.89618
    URSS \tilde{\beta}_{2, URSS}^{\star} 45.0528 0.03010


    In this paper, we have considered the CGOS from the BGW distribution. We have derived the PDFs and moments of CGOS from the BGW distribution. Similar results for order statistics and PT-Ⅱ censored order statistics are presented as special instances. Finally, we have obtained the BLU estimator of the parameter associated with the study variable based on Stokes's RSS. Moreover, a real dataset is used for illustration purposes. The results for higher joint moments can be used to create skewness or kurtosis matrices (Kollo [25]), which have important applications in both independent component analysis and invariant coordinate selection. This could be an interesting topic for future research. It will also be interesting to discuss the problem of predicting intervals for future order statistics and record values using concomitants of order statistics and record values arising from the BGW distribution; see, for example, Muraleedharan and Chacko [33]. In addition, some information measures, such as the Shannon entropy and extropy, for CGOS can also be investigated.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The author would like to acknowledge the Deanship of Graduate Studies and Scientific Research, Taif University for funding this work.

    The author declares no conflict of interest.



    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).