Processing math: 62%
Research article

Understanding farmers' risk perception to drought vulnerability in Balochistan, Pakistan

  • Frequent occurrence of drought is a major challenge to the farmers in the drought prone district of Balochistan province, Pakistan. The agricultural communities are facing threat to agricultural production and livestock due to socio-economic drought in the study area. The Socio-economic drought refers to the conditions in which water supply flops sustaining water demand, resulting in adverse effects on society, economy and environment. The intensity of drought impacts is normally analyzed through meteorological, agricultural and hydrological indices. However, this paper presents a study based on interviews to analyze farmer's risk perceptions, attitude and awareness towards socio-economic drought and risks associated with it. The study relies on a survey of 265 farm households, following a structured questionnaire, focus group discussions and key informant interviews. Results of the study revealed that farmers perceived a continuous variability in climate for the last two decades and identified drought as the most prevalent disaster in the region. Economic reliance on agriculture and livestock, abolishment of surface water resources, depletion of groundwater and insufficient supply of electricity has further increased their vulnerability to drought. Reduction in agriculture and livestock production as well as loss of employment were the immediate economic impacts of the socio-economic drought in the study area. Social impacts such as migration to other places, increase in social crimes, drop out of schoolchildren and impacts on health and festivals were also reported. The environmental impacts included constant increase in temperature, decrease in rainfall intensity and non-climatic factors. Understanding of farmer's risk perception to drought vulnerability may contribute in assisting policy makers for the most appropriate intervention strategies.

    Citation: Hashim Durrani, Ainuddin Syed, Amjad Khan, Alam Tareen, Nisar Ahmed Durrani, Bashir Ahmed Khwajakhail. Understanding farmers' risk perception to drought vulnerability in Balochistan, Pakistan[J]. AIMS Agriculture and Food, 2021, 6(1): 82-105. doi: 10.3934/agrfood.2021006

    Related Papers:

    [1] Areej M. AL-Zaydi . On concomitants of generalized order statistics arising from bivariate generalized Weibull distribution and its application in estimation. AIMS Mathematics, 2024, 9(8): 22002-22021. doi: 10.3934/math.20241069
    [2] Wenzhi Zhao, Dou Liu, Huiming Wang . Sieve bootstrap test for multiple change points in the mean of long memory sequence. AIMS Mathematics, 2022, 7(6): 10245-10255. doi: 10.3934/math.2022570
    [3] Jin-liang Wang, Chang-shou Deng, Jiang-feng Li . On moment convergence for some order statistics. AIMS Mathematics, 2022, 7(9): 17061-17079. doi: 10.3934/math.2022938
    [4] Enrique de Amo, José Juan Quesada-Molina, Manuel Úbeda-Flores . Total positivity and dependence of order statistics. AIMS Mathematics, 2023, 8(12): 30717-30730. doi: 10.3934/math.20231570
    [5] H. M. Barakat, M. H. Dwes . Asymptotic behavior of ordered random variables in mixture of two Gaussian sequences with random index. AIMS Mathematics, 2022, 7(10): 19306-19324. doi: 10.3934/math.20221060
    [6] Mohamed Said Mohamed, Muqrin A. Almuqrin . Properties of fractional generalized entropy in ordered variables and symmetry testing. AIMS Mathematics, 2025, 10(1): 1116-1141. doi: 10.3934/math.2025053
    [7] Jinliang Wang, Fang Wang, Songbo Hu . On asymptotic correlation coefficient for some order statistics. AIMS Mathematics, 2023, 8(3): 6763-6776. doi: 10.3934/math.2023344
    [8] Haroon Barakat, Osama Khaled, Hadeer Ghonem . Predicting future order statistics with random sample size. AIMS Mathematics, 2021, 6(5): 5133-5147. doi: 10.3934/math.2021304
    [9] Mikail Et, Muhammed Cinar, Hacer Sengul Kandemir . Deferred statistical convergence of order α in metric spaces. AIMS Mathematics, 2020, 5(4): 3731-3740. doi: 10.3934/math.2020241
    [10] Salim Bouzebda, Amel Nezzal . Uniform in number of neighbors consistency and weak convergence of kNN empirical conditional processes and kNN conditional U-processes involving functional mixing data. AIMS Mathematics, 2024, 9(2): 4427-4550. doi: 10.3934/math.2024218
  • Frequent occurrence of drought is a major challenge to the farmers in the drought prone district of Balochistan province, Pakistan. The agricultural communities are facing threat to agricultural production and livestock due to socio-economic drought in the study area. The Socio-economic drought refers to the conditions in which water supply flops sustaining water demand, resulting in adverse effects on society, economy and environment. The intensity of drought impacts is normally analyzed through meteorological, agricultural and hydrological indices. However, this paper presents a study based on interviews to analyze farmer's risk perceptions, attitude and awareness towards socio-economic drought and risks associated with it. The study relies on a survey of 265 farm households, following a structured questionnaire, focus group discussions and key informant interviews. Results of the study revealed that farmers perceived a continuous variability in climate for the last two decades and identified drought as the most prevalent disaster in the region. Economic reliance on agriculture and livestock, abolishment of surface water resources, depletion of groundwater and insufficient supply of electricity has further increased their vulnerability to drought. Reduction in agriculture and livestock production as well as loss of employment were the immediate economic impacts of the socio-economic drought in the study area. Social impacts such as migration to other places, increase in social crimes, drop out of schoolchildren and impacts on health and festivals were also reported. The environmental impacts included constant increase in temperature, decrease in rainfall intensity and non-climatic factors. Understanding of farmer's risk perception to drought vulnerability may contribute in assisting policy makers for the most appropriate intervention strategies.


    Kamps [22] introduced the model of GOSs as a unified approach to a variety of ordered random variables (RVs), including ordinary order statistics (OOSs), sequential order statistics (SOSs), progressive type Ⅱ censored order statistics (POSs), order statistics with a non-integer sample size, record values, and Pfeifer's record model. Since the GOSs model unifies the models of ordered RVs, the practical importance of GOSs is evident. For example, in reliability theory, the rth extreme OOS indicates the life-length of an (nr+1)-out-of-n system, whereas the model of SOSs is an extension of the OOSs model that describes specific dependencies or interactions among system components induced by component failures. Furthermore, the POSs model is a valuable tool for collecting information in lifetime tests.

    The uniform GOSs are defined via their joint probability density function (PDF) on a unit cone of Rn. More specifically, let nN, k1 and m1,m2,...,mn1R be parameters such that γr=k+nr+n1j=rmj>0, r{1,2,...,n1}.

    If the RVs U(r,n,˜m,k)=Ur:n,r=1,2,...,n, possess a PDF of the form

    fU1:n,U2:n,...,Un:n(u1,u2,...,un)=k(n1j=1γj)(n1j=1(1uj)mj)(1un)γn1 (1.1)

    on the cone {(u1,u2,...,un):0u1u2...un<1}Rn, then they are called uniform GOSs. Furthermore, GOSs based on some distribution function (DF) F can be defined via the quantile transformation X(r,n,˜m,k)=Xr:n=F1(Ur:n),r=1,2,...,n, where F1 denotes the quantile function of F. On the other hand, by choosing the parameters appropriately, we can obtain different models of ordered RVs, such as m-GOSs (m1=m2=...=mn1=m,γr=k+(nr)(m+1),r=1,2,...,n); OOSs, a sub-model of m-GOSs (m=0 and k=1); order statistics with non-integral sample size, a sub-model of m-GOSs (m=0,k=αn+1 and n1<αR); SOSs (mi=(ni+1)αi(ni)αi+11,i=1,2,...,n1,0<αiR,k=αn); kth record values (m1=m2=...=mn1=1,kN); POSs with censoring scheme (R1,R2,...,RM)(mi=Ri,i=1,2,...,M1, and mi=0,i=M,M+1,...,n1 and k=RM+1); and Pfeifer's record model (mi=βiβi+11,i=1,2,...,n1,0<βiR and k=βn).

    The marginal DF, Ψ(m,k)r:n(x)=P(Xr:nx), of the rth m-GOS is given in Kamps [22] by

    Ψ(m,k)r:n(x)=1Cr1¯Fγr(x)r1i=01i!Cri1gim(x),

    where Cr1=ri=1γi, r=1,2,...,n, γn=k, and ¯F(x)=1F(x). Moreover, if m1, (m+1)gm(x)=Gm(x)=1¯Fm+1(x) is a DF, whereas g1(x)=log¯F(x). Under the condition m1, the possible limit DFs of the maximum m-GOS and their domains of attraction under linear normalization were derived by Nasri-Roudsari [27]. Moreover, the limit DFs of Ψ(m,k)n:n(x) under power normalization were derived by Nasri-Roudsari [28]. The possible non-degenerate limit DFs and the rate of convergence of the upper extreme m-GOSs were discussed by Nasri-Roudsari and Cramer [29]. The necessary and sufficient conditions for weak convergence as well as the form of the possible limit DFs of extreme, central and intermediate m-GOSs, were derived by Barakat [4].

    The bootstrap method, which was first introduced by Efron [20] for independent RVs, is an efficient procedure for solving many statistical problems based on re-sampling from the available data. It enables statisticians to perform statistical inference on a wide range of problems without imposing many structural assumptions on the data-generating random process (cf. [14,21,26]). For example, the bootstrap method is used to find standard errors for estimators, confidence intervals for unknown parameters, and p-values for test statistics under a null hypothesis.

    There are several forms of the bootstrap method and additionally several other re-sampling methods that are related to it, such as jackknifing, cross-validation, randomization tests, and permutation tests. Let Xn=(X1,X2,...,Xn) be a random sample of size n from an unknown DF F. The idea of the bootstrap technique is to re-sample with replacement from the original sample Xn and form a bootstrapped version of the original statistic. For B=B(n), as n, assume that Y1,Y2,...,YB are conditionally independent and identically distributed (i.i.d) RVs with distribution

    P(Y1=Xj|Xn)=1n,j=1,2,...,n.

    Hence, (Y1,Y2,...,YB) is a re-sample of size B from the empirical distribution Fn of F based on Xn. Let Pj,j=1,2,...,n, be an independent RV with respective beta distribution Ix(γj,1),j=1,2,...,n. Thus, Pj follows a power function distribution with exponent γj=k+(Bj)(m+1),j=1,2,...,B. Now, in view of the results of Cramer [17], we can write the rth m-GOS based on the empirical DF Fn in the form

    Yr:B=Y(r,B,m,k)=F1n(1rj=1Pj),r=1,2,...,B.

    Moreover, let

    H(m,k)r,n,B(x)=P(YBr+1:BbBaBx|Xn), (1.2)

    be the bootstrap distribution of a1n(Xnr+1:nbn), for suitable normalizing constants an>0 and bn, where n and B are the sample size and re-sample size, respectively.

    It has been shown, for many statistics, that the bootstrap method is asymptotically consistent (cf. Efron [20]). That is, the asymptotic distribution of the bootstrap for a given statistic is the same as the asymptotic distribution of the original statistic. Many results for the bootstrap method and its applications can be found in the literature. For instance, the inconsistency, weak consistency and strong consistency for bootstrapping the maximum OOSs under linear normalization were investigated by Athreya and Fukuchi [2] and Fukuchi [21]. They showed that, in a full-sample bootstrap situation, the maximum OOSs fails to be consistent. Later, Barakat et al. [10] extended the results of Fukuchi and Athreya to the GOSs. Barakat et al. [12] obtained similar results for the OOSs with variable ranks as well. Furthermore, bootstrapping OOSs with variable rank under power normalization was investigated by Barakat et al. [13].

    The main goal of this paper is to build on the findings of [10] by discussing the consistency of bootstrap central and intermediate GOSs for determining an appropriate re-sample size for known and unknown normalizing constants. Moreover, a simulation study is carried out to explain how the bootstrap sample size can be chosen numerically. This paper is structured as follows. In Section 2, we briefly review the main results concerning the asymptotic behaviour of the m-GOSs with variable rank. Sections 3 and 4 are devoted, respectively, to bootstrapping the intermediate and central m-GOSs. Finally, a simulation study is conducted in Section 5.

    We end this section with some motivations that highlight the importance of our work.

    Work motivation

    The purpose of the bootstrap method is to construct an approximate sampling distribution for the statistic of interest. So, if the statistic of interest Sn follows a certain distribution, we would like our bootstrap distribution SB to converge to the same distribution. If we do not have this, then we can not trust the inferences made. For i.i.d. samples of size n, the ordinary bootstrap method is known to be consistent in many situations, but it may fail in important examples (cf. [10,12,13,21]). Using bootstrap samples of size B, where B and Bn0, typically resolves the problem (cf. [10,12,13,21]). However, the choice of B is a key issue for the quality of the convergence (e.g., weak consistency and strong consistency). In this paper, we investigate the strong consistency of bootstrapping central and intermediate m-GOSs for an appropriate choice of re-sample size B for known and unknown normalizing constants. The critical choice problem of B is theoretically addressed in this paper. Furthermore, a simulation study is used to discuss it realistically.

    The model of m-GOSs contains two practically important sub-models, OOSs and SOSs, on which this study focuses. For central OOSs and SOSs, one can use the bootstrap method to obtain a confidence interval for the pth population quantile. On the other hand, in many important applications such as flood hazard assessment [18], seismic hazard assessment [1] and analysis of bank operational risk [16], we need an estimator (confidence interval estimate) of an intermediate OOSs (SOSs) quantile. Moreover, it is well known that the asymptotic behavior of intermediate quantiles is one of the main factors in choosing a suitable value of threshold in the peak over threshold (POT) approach and constructing related estimators (the Hill estimators) of the tail index (cf. [9,19]). Therefore, the study of bootstrapping intermediate OOSs will pave the way to use and improve the modeling of extreme values via the POT approach. This potential application of bootstrapping intermediate OOSs will be the subject of future studies.

    In this section, we briefly review the main results concerning the asymptotic behaviour of the intermediate and central m-GOSs, which are related to the present work.

    The intermediate OOSs have a wide range of important applications. For instance, they can be used to estimate the probabilities of future extreme observations and to estimate tail quantiles of the underlying distribution that are extremes relative to the available sample size (cf. [30]). Furthermore, Pickands [30] has revealed that intermediate OOSs can be applied to construct consistent estimators for the shape parameter of the limiting extremal distribution in the parametric form. Teugels [32] and Mason [25] have also found estimators that are in part based on intermediate OOSs. A sequence {Xrn:n} is called a sequence of intermediate OOSs if rnn and rnnn0 (lower intermediate) or rnnn1 (upper intermediate), where the symbol (n) stands for convergence as n. Wu [33] (see also, Leadbetter et al. [23]) revealed that, if {rn} is any nondecreasing intermediate rank sequence, and there exist normalizing constants an>0 and bn such that

    Ψ(0,1)nrn+1:n(anx+bn)=IF(anx+bn)(nrn+1,rn)wnΨ(0,1)(x), (2.1)

    where wn stands for weak convergence, as n, Ψ(0,1)nrn+1:n(x) is the DF of the upper rnth OOS (upper intermediate), and Ψ(0,1)(x) is a nondegenerate DF, then Ψ(0,1)(x) must be one and only one of the types N(Ui;α(x)),i=1,2,3, where N(.) denotes the standard normal DF, and α is a positive constant. Moreover,

    U1;α(x)={αlogx,x0,,x>0,U2;α(x)={,x0,αlogx,x>0,

    and U3;α(x)=U3(x)=x,x. Furthermore, (2.1) is satisfied with Ψ(0,1)(x)=N(Ui;α(x)),i=1,2,3, if and only if

    rnn¯F(anx+bn)rnnUi;α(x). (2.2)

    In this work we confine ourselves to a very wide intermediate rank sequence which is known as Chibisov's rank, where rnnωnl2, 0<ω<1; for more details about Chibisov's rank, see ([6,7,15]). When (2.1) is satisfied for this rank, we say that F belongs to the intermediate domain of attraction of Ψ(0,1)(x)=N(Ui;α(x)) and write FD(l,ω)(N(Ui;α(x))). The following lemma is needed for studying the asymptotic distributions of the suitably normalized intermediate m-GOSs.

    Lemma 2.1. (cf. Barakat [4])Let m>1. Then, for any nondecreasing intermediate variable rank rn, there exist normalizing constants an>0 and bn such that

    Ψ(m,k)nrn+1:n(anx+bn)wnΨ(m,k)(x), (2.3)

    if and only if

    rnn¯Fm+1(anx+bn)rnnU(x), (2.4)

    where Ψ(m,k)(x) is a nondegenerate DF with Ψ(m,k)(x)=N(U(x)), and n=n+km+11.

    Theorem 2.1. (cf. Barakat [4])Suppose that m>1, and rn is a nondecreasing intermediate variable rank. Moreover, let rn be a variable rank defined by

    rn=rS1(n),

    where S(n)=rn/(rn/n)1m+1. Then, there exist normalizing constants an>0 and bn for which (2.3) is satisfied for some nondegenerate DF Ψ(m,k)(x) if and only if there are normalizing constants αn>0 and βn for which

    Ψ(0,1)nrn+1:n(αnx+βn)wnΨ(0,1)(x), (2.5)

    where Ψ(0,1)(x) is some nondegenerate DF. Equivalently,

    rnn¯F(αnx+βn)rnnUi;α(x), (2.6)

    with Ψ(0,1)(x)=N(Ui;α(x)),i=1,2,3. In this case, the normalizing constants an and bn can be chosen as an=αS(n) and bn=βS(n). Furthermore, U(x) in (2.4) takes the form (m+1)Ui;α(x).

    When the rank sequence rnn satisfies the regular condition rn=λn+o(n), where 0<λ<1, rn is referred to as a central rank sequence, and Xrn:n is called central OOSs. There are numerous distinct results for central OOSs and their applications in the literature. Smirnov [31] showed that, if there exist normalizing constants cn>0 and dn such that

    Ψ(0,1)nrn+1:n(cnx+dn)=IF(cnx+dn)(nrn+1,rn)wnΨ(0,1)λ(x), (2.7)

    where Ψ(0,1)λ(x) is some nondegenerate DF, then Ψ(0,1)λ(x) must be one and only one of the types N(Vi;α(x)),i=1,2,3,4. Moreover,

    V1;α(x)={,x0,cxα,x>0,c,α>0,V2;α(x)={c|x|α,x0,c,α>0,,x>0,V3;α(x)={c|x|α,x0,c,α>0,c1xα,x>0,c1>0,V4;0(x)={,x1,0,1<x1,,x>1,

    where c=1λ(1λ) and c1=c/A,A>0. In that case, we say that the DF F belongs to the domain of normal λ-attraction of the limit type Vi,α(x),i=1,2,3,4, written FD(λ)(Vi;α(x)). Moreover, Smirnov [31] showed that (2.7) is satisfied if and only if

    n(λ¯F(cnx+dn)Cλ)nVi;α(x), (2.8)

    where Cλ=1/c=λ(1λ). The following lemma, which is due to Barakat [4], is a cornerstone of the asymptotic theory of central m-GOSs.

    Lemma 2.2. (cf. Barakat [4]) Let m>1. Moreover, let rn be a nondecreasing variable rank such that rn=λn+o(n). Then, there exist normalizing constants cn>0 and dn such that

    Ψ(m,k)nrn+1:n(cnx+dn)wnΨ(m,k)λ(x), (2.9)

    if and only if

    n(λ¯Fm+1(cnx+dn)Cλ)nV(x), (2.10)

    where Ψ(m,k)λ(x) is a nondegenerate DF with Ψ(m,k)λ(x)=N(V(x)).

    Theorem 2.2. (cf. Barakat [4])Let m>1, and let rn be a nondecreasing variable rank such that rn=λn+o(n). Then, there exist normalizing constants cn>0 and dn for which (2.9) holds, for some nondegenerate DF Ψ(m,k)λ(x), if and only if, for the same normalizing constants cn and dn, we have FDλ(m)(Vi;α(x)),i=1,2,3,4, where λ(m)=λ1m+1.That is, in view of (2.10), we have

    n(λ(m)¯F(cnx+dn)Cλ(m))nVi;α(x),i{1,2,3,4}. (2.11)

    Moreover, Ψ(m,k)λ(x)=N((Cλ(m)/Cλ)(m+1)Vi(x)), where Ct=Ct/t.

    In this section, we study the asymptotic behaviour of bootstrapping intermediate m-GOSs with rank sequence rB. In other words, we are interested in the limiting distribution of H(m,k)rB,n,B(x)= for different choices of the re-sample size B=B(n), when the normalizing constants an and bn are either known or unknown.

    This subsection investigates the inconsistency, weak consistency and strong consistency of the bootstrap distribution H(m,k)rB,n,B(x) when the normalizing constants are known. More specifically, it is proved in the next theorem that the full-sample bootstrap (i.e., B=n) of H(0,k)rn,n,n(x) fails to be consistent with Ψ(0,k)(x). Moreover, in the same theorem it is shown that if m>0 and B=n, then H(m,k)rn,n,n(x) is a consistent estimator of Ψ(m,k)(x).

    Theorem 3.1. Let the relation (2.3) be satisfied with Ψ(m,k)(x)=N((m+1)Ui;α(x)),i=1,2,3. Then,

    H(0,k)rn,n,n(x)dnN(Z(x)), (3.1)

    where dn stands for convergence in distribution as n, and Z(x) has a normal distribution with mean Ui;β(x) and a variance of one. Moreover, if m>0, then

    supxR|H(m,k)rn,n,n(x)Ψ(m,k)(x)|pn0, (3.2)

    where pn stands for convergence in probability as n.

    Proof. Since all of the limit types in (2.2) are continuous, the convergence in (2.3) is uniform in x. Therefore, we can write

    H(m,k)rB,n,B(x)=N(rBB¯Fm+1n(aBx+bB)rB)+ξn(x), (3.3)

    where ξn(x)n0 uniformly with respect to x, and B=B+km+11n. Suppose now that the condition B=n holds true; then, (3.3) can be expressed as

    H(m,k)rn,n,n(x)=N(rnn¯Fm+1n(anx+bn)rn)+ξn(x). (3.4)

    When m=0, after some routine algebraic calculations we can obtain (3.1), which is the same result of Theorem 4.2 in Barakat et al. [12]. Now, we have to prove (3.2) for m>0 when B=n. Since S(n)n, (2.6) implies

    rS(n)S(n)¯F(αS(n)x+βS(n))rS(n)nUi;α(x). (3.5)

    Furthermore, from the central limit theorem we have

    n¯Fn(αS(n)x+βS(n))n¯F(αS(n)x+βS(n))n¯F(αS(n)x+βS(n))(1¯F(αS(n)x+βS(n)))dnZ(x). (3.6)

    On the other hand, from Chibisov [15] and Barakat [4], S(n) can be written in the form S(n)=l2mm+1n1+ωmm+1(1+o(1)), where 0<ω<1 and l>0. Consequently, we get

    S(n)n=l2mm+1n1+ωmm+11(1+o(1))n0. (3.7)

    Moreover, from Barakat [4], it is clear that rS(n)S(n)¯F(αS(n)x+βS(n)). Thus, by (3.6) and (3.7), we obtain

    S(n)¯Fn(αS(n)x+βS(n))S(n)¯F(αS(n)x+βS(n))rS(n)=S(n)nn¯Fn(αS(n)x+βS(n))n¯F(αS(n)x+βS(n))rS(n)
    S(n)nn¯Fn(αS(n)x+βS(n))n¯F(αS(n)x+βS(n))S(n)¯F(αS(n)x+βS(n))(1¯F(αS(n)x+βS(n)))=S(n)nn¯Fn(αS(n)x+βS(n))n¯F(αS(n)x+βS(n))n¯F(αS(n)x+βS(n))(1¯F(αS(n)x+βS(n)))n0. (3.8)

    Therefore, from (3.5) and (3.8) we get

    rS(n)S(n)¯Fn(αS(n)x+βS(n))rS(n)=rS(n)S(n)¯F(αS(n)x+βS(n))rS(n)S(n)¯Fn(αS(n)x+βS(n))S(n)¯F(αS(n)x+βS(n))rS(n)nUi;α(x). (3.9)

    From (3.9), we can write ¯Fn(αS(n)x+βS(n))=(rS(n)S(n))(1Ui;α(x)rS(n)(1+o(1))), which implies

    ¯Fnm+1(αS(n)x+βS(n))=(rS(n)S(n))m+1(1(m+1)Ui;α(x)rS(n)(1+o(1))). (3.10)

    Furthermore, from Barakat [4], it can be noted that rS(n)rn and rS(n)S(n)=(rnn)1m+1. Thus, by using (3.10), we have

    rnn¯Fnm+1(anx+bn)rn=ωn+(m+1)Ui;α(x)τn(1+o(1)), (3.11)

    where

    ωn=rnn(rS(n)S(n))m+1rn=rnn(rnn)rn=0, (3.12)

    and

    τn=n(rS(n)S(n))m+1rnrS(n)=n(rnn)r2n=1. (3.13)

    Substituting from (3.12) and (3.13) into (3.11), we get

    rnn¯Fnm+1(anx+bn)rnn(m+1)Ui;α(x), (3.14)

    which proves (3.2), and the proof is completed.

    Theorem 3.2. Assume that there exist normalizing constants an>0 and bn from which the relation (2.3) holds, with Ψ(m,k)(x)=N((m+1)Ui;α(x)),i=1,2,3. Let S(B)=o(n). Then

    supxR|H(m,k)rB,n,B(x)Ψ(m,k)(x)|pn0. (3.15)

    Moreover, if B is chosen such that n=1λnS(B)<, λ(0,1), then

    supxR|H(m,k)rB,n,B(x)Ψ(m,k)(x)|w.p.1n0, (3.16)

    where w.p.1n stands for convergence with probability one as n (almost sure convergence).

    Proof. By noting that S(B)n, (2.6) implies

    rS(B)S(B)¯F(αS(B)x+βS(B))rS(B)nUi;α(x).

    Define the statistic KrS(n),n,B(x) by the relation

    KrS(B),n,B(x)=rS(B)S(B)¯Fn(αS(B)x+βS(B))rS(B).

    Therefore, we get

    E(KrS(B),n,B(x))=rS(B)S(B)¯F(αS(B)x+βS(B))rS(B)nUi;α(x), (3.17)

    and

    Var(KrS(B)  ,  n,  B(x))=S(B)n¯F(αS(B)x+βS(B))(1¯F(αS(B)x+βS(B)))˜rS(B)S(B)no(1)n0, (3.18)

    where ˜rS(B)=rS(B)S(B)¯F(αS(B)x+βS(B))(1¯F(αS(B)x+βS(B))). By combining (3.17) and (3.18), we obtain

    rS(B)S(B)¯Fn(αS(B)  x +βS(B)     )rS(B)pnUi;α(x), (3.19)

    which proves (3.15). In order to prove (3.16), it is sufficient to show that the convergence in (3.19) is w.p.1. First note that

    rS(B)S(B)¯Fn(αS(B)x+βS(B))rS(B)=rS(B)S(B)¯F(αS(B)x+βS(B))rS(B)S(B)¯Fn(αS(B)x+βS(B))S(B)¯F(αS(B)x+βS(B))rS(B),

    and

    rS(B)S(B)¯F(αS(B)  x  +  βS(B)      )rS(B)nUi;α(x).

    Hence, the convergence in (3.19) becomes w.p.1 if we can show that

    S(B)¯Fn(αS(B)x+βS(B))S(B)¯F(αS(B)x+βS(B))rS(B)=S(B)nn¯Fn(αS(B)x+βS(B))n¯F(αS(B)x+βS(B))n˜rS(B)w.p.1n0.

    By the Borel-Cantelli lemma, it is sufficient to prove that

    n=1P(S(B)n|n¯Fn(αS(B)  x  +  βS(B)     )n¯F(αS(B)  x  +  βS(B)     )n˜rS(B)|>ϵ)<

    for every ϵ>0. For every θ>0, we have

    S(B)nlogP(S(B)n(n¯Fn(αS(B)x+βS(B))n¯F(αS(B)x+βS(B))n˜rS(B))>ϵ)=S(B)nlogP(n¯Fn(αS(B)x+βS(B))n¯F(αS(B)x+βS(B))n˜rS(B)>ϵnS(B))=S(B)nlogP(eθPn,B(x)>eθϵnS(B)),

    where

    Pn,B(x)=n¯Fn(αS(B)  x  +βS(B)     )n¯F(αS(B)  x  +βS(B)     )n˜rS(B).

    From Markov's inequality, we get

    S(B)nlogP(eθPn,B(x)>eθϵnS(B))S(B)nlog(eθϵnS(B)E(eθPn,B(x)))S(B)n(θϵnS(B)+logMB(θ))=θϵ+S(B)nlogMB(θ)nθϵ,

    where MB(θ) denotes the moment generating function of the standard normal distribution. Consequently, for sufficiently large n, we get

    n=1P(S(B)n(n¯Fn(αS(B)x+βS(B))n¯F(αS(B)x+βS(B))n˜rS(B))>ϵ)=n=1elogP(S(B)nPn,B  (x)  >ϵ)n=1eθϵnS(B)<.

    By a similar argument, for every ϵ>0, it can be shown that

    n=1P(S(B)n(n¯Fn(αS(B)  x  +βS(B)     )n¯F(αS(B)  x  +βS(B)     )n˜rS(B))<ϵ)<.

    Since the condition n=1λnS(B)<, λ(0,1), ensures the convergence of the series n=1exp{θϵnS(B)} for every ϵ>0, we obtain

    rS(B)S(B)¯Fn(αS(B)  x  +βS(B)     )rS(B)w.p.1nUi;α(x). (3.20)

    From (3.20), we can write

    ¯Fn(αS(B)x+βS(B))=¯Fn(aBx+bB)=(rS(B)S(B))(1Ui;α(x)rS(B)(1+o(1))),w.p.1,

    which implies

    ¯Fnm+1(aBx+bB)=(rS(B)S(B))m+1(1(m+1)Ui;α(x)rS(B)(1+o(1))),w.p.1.

    Therefore, we get

    rBB¯Fm+1(aBx+bB)rB=ηn+(m+1)Ui;α(x)ϑn(1+o(1)),w.p.1,

    where

    ηn=rBB(rS(B)S(B))m+1rB=rBB(rBB)rB=0,

    and

    ϑn=B(rS(B)S(B))m+1rBrS(B)=B(rBB)r2B=1.

    Consequently,

    rBB¯Fm+1(aBx+bB)rBw.p.1n(m+1)Ui;α(x)=U(x).

    Thus, (3.16) is proved. This completes the proof.

    One of the most important problems in statistical modeling is to reduce the required knowledge about the DF of the population from which the available data is obtained. In the bootstrap method, this situation leads to the case of unknown normalizing constants. In the rest of this section, we investigate the consistency property of the bootstrap intermediate m-GOSs when the normalizing constants are unknown.

    Suppose now that the normalizing constants an and bn are unknown, and they are estimated from the sample data, Xn=(X1,X2,...,Xn). Let ˆaB and ˆbB be the estimators of an and bn, based on Xn, respectively, and

    ˆH(m,k)rB,n,B(x)=P(YBrB+1:B     ˆbBˆaBx|Xn) (3.21)

    be the bootstrap distribution of Ψ(m,k)nrn+1:n(anx+bn) with the estimated normalizing constants, ˆaB and ˆbB. The sufficient conditions for ˆH(m,k)rB,n,B(x) to be consistent are explored in the next theorem. The idea of this theorem was originally given in Theorem 2.6 for maximum OOSs in [21].

    Theorem 3.3. Assume that ˆaB,ˆbB and B=B(n) are such that the following three conditions are satisfied:

    (C1)H(m,k)rB,n,B(x)w.p.1nΨ(m,k)(x),(C2)ˆaBaBw.p.1n1,and(C3)ˆbBbBaBw.p.1n0.

    Then,

    supxR|ˆH(m,k)rB,n,B(x)Ψ(m,k)(x)|w.p.1n0. (3.22)

    Moreover, if we replace "w.p.1n" with "pn" in the conditions (C1)–(C3), the convergence (3.22) remains true.

    Proof. First, we note that the condition (C1) is equivalent to

    rBB¯Fnm+1(aBx+bB)rnnU(x).

    Furthermore, ϵ>0, the conditions (C2) and (C3) imply

    (1ϵ)aB<ˆaB<(1+ϵ)aB, (3.23)

    and

    bBϵaB<ˆbB<bB+ϵaB, (3.24)

    respectively. By fixing x>0, the relations (3.23) and (3.24) yield

    B¯Fnm+1((1+ϵ)x+ϵ)aB+bB)B¯Fnm+1(ˆaBx+ˆbB)B¯Fnm+1((1ϵ)xϵ)aB+bB).

    Therefore,

    lim supnrBB¯Fnm+1(ˆaBx+ˆbB)rBU((1+ϵ)x+ϵ)U((1ϵ)xϵ)lim infnrBB¯Fnm+1(ˆaBx+ˆbB)rB.

    Since U(x) is continuous, we get

    limnrBB¯Fnm+1(ˆaBx+ˆbB)rB=U(x).

    For x<0, the same limit relation can be accomplished using a similar argument. Consequently, (3.22) is proved. If the conditions (C1)(C3) hold in probability, then for any subsequence {ni}i=1, there exists a subsequence {ni}i=1, such that the conditions (C1)(C3) hold w.p.1. An application of the first part of the theorem, based on the new subsequence, yields

    supxR|ˆHrB(ni),ni,B(ni)(x)Ψ(m,k)(x)|w.p.1n0.

    This completes the proof of the theorem.

    In the next theorem, an appropriate choice of estimators of the normalizing constants which satisfy conditions (C2) and (C3) of Theorem 3.3 is accomplished for each domain of attraction. More specifically, we get a specific choice of ˆaB and ˆbB from which (3.22) holds true.

    Theorem 3.4. Assume that rn=nS(B)rS(B), rn=nS(B)(rS(B)+rS(B)), xo is the right endpoint of F, and ˆxo=Xrn:n. The estimators ˆaB and ˆbB can be chosen, respectively, as

    (i) ˆaB=ˆxoF1n(rS(B)S(B))=Xrn:nXrn:n and ˆbB=Xrn:n, if FD(l,ω)(N((m+1)U1;α(x))),

    (ii) ˆaB=F1n(rS(B)S(B))=Xrn:n and ˆbB=0, if FD(l,ω)(N((m+1)U2;α(x))),

    (iii) ˆaB=F1n(rS(B)+rS(B)S(B)rS(B)S(B))=Xrn:nXrn:n and ˆbB=F1n(rS(B)S(B))=Xrn:n, if FD(l,ω)(N((m+1)U3(x))).

    Moreover, If S(B)=o(n), then

    supxR|ˆH(m,k)rB,n,B(x)Ψ(m,k)(x)|pn0. (3.25)

    Furthermore, if n=1λnS(B)< for each λ(0,1), then (3.25) holds w.p.1.

    Proof. Let FD(l,ω)(N((m+1)U1;α(x))). In order to prove conditions (C2) and (C3), we have to show that

    ˆaBaB=Xrn:nXrn:naBn1, (3.26)

    and

    ˆbBbBaB=Xrn:nxoaBn0, (3.27)

    both in probability or w.p.1. Firstly, let us consider the case of convergence in probability. Clearly,

    ˆaBaB=Xrn:nxoαn×αnaBXrn:nxoaB.

    Hence, (3.26) and (3.27) are proved if we can show that

    αnaBn0, (3.28)

    and

    Xrn:nxoaBpn1. (3.29)

    On the other hand, for any γ>0, Lemma 3.1 in Barakat et al. [11] reveals that

    αnaB=αnαS(B)μ1γ(rn)μ1γ(rS(B))=e1γnω2e1γ(S(B))ω2=e1γnω2(1(S(B)n)ω2)n0,

    where μ(n)=exp(n). Thus, (3.28) is proved. Turning now to prove (3.29), it can be noted that

    rnn¯F(aBx+bB)rn=rnn¯F(aBx+xo)rn=nS(B)rS(B)n¯F(aBx+xo)nS(B)rS(B)=nS(B)(rS(B)S(B)¯F(aBx+xo)rS(B))n{,if x>1,,if x<1. (3.30)

    For every ϵ>0, (3.30) implies P(Xrn:nxoaB<ϵ1)nN()=1, or equivalently,

    P(Xrn:nxoaB>ϵ1)n0. (3.31)

    Similarly,

    P(Xrn:nxoaB<ϵ1)nN()=0. (3.32)

    By combining (3.31) and (3.32), we get

    P(|Xrn:nxoaB+1|>ϵ)n0, (3.33)

    which proves (3.29). Now, let FD(l,ω)(N((m+1)U2;α(x))). Condition (C3) of Theorem 2.4 is clearly proved (since ˆbB=0). Consequently, it is sufficient to prove only condition (C2). Thus, we have to show that

    ˆaBaB=Xrn:naBn1, (3.34)

    in probability or w.p.1. We start by proving the convergence in probability. It is clear that,

    rnn¯F(aBx+bB)rn=nS(B)rS(B)n¯F(aBx)nS(B)rS(B)=nS(B)(rS(B)S(B)¯F(aBx)rS(B))n{,if x>1,,if x<1. (3.35)

    For every ϵ>0, (3.35) implies P(Xrn:naB<ϵ+1)nN()=1, which is equivalent to

    P(Xrn:naB>ϵ+1)n0. (3.36)

    Arguing similarly, we get

    P(Xrn:naB<ϵ+1)nN()=0. (3.37)

    Based on the relations (3.36) and (3.37), we get

    P(|Xrn:naB1|>ϵ)n0, (3.38)

    which proves (3.34). Finally, assume that FD(l,ω)(N((m+1)U3(x))). By Theorem 2.4, in order to prove conditions (C2) and (C3), it suffices to show that

    ˆaBaB=Xrn:nXrn:naBn1, (3.39)

    and

    ˆbBbBaB=Xrn:nbBaBn0, (3.40)

    both in probability or w.p.1. We start with the convergence in probability. Write

    ˆaBaB=Xrn:nXrn:naB=Xrn:nbBaBXrn:nbBaB. (3.41)

    Consequently, to prove (3.39) and (3.40), we need to show that

    Xrn:nbBaBn1, (3.42)

    and

    Xrn:nbBaBn0. (3.43)

    First, we are going to prove (3.42). Since we have

    rnn¯F(aBx+bB)rn=nS(B)(rS(B)+rS(B))n¯F(aBx+bB)nS(B)(rS(B)+rS(B))=nS(B)(rS(B)+rS(B)S(B)¯F(aBx+bB)rS(B)(1+1/rS(B)))
    =nS(B)(rS(B)+rS(B)S(B)¯F(aBx+bB)rS(B)(1+o(1)))=nS(B)(rS(B)S(B)¯F(aBx+bB)rS(B)(1+o(1))+rS(B)rS(B)(1+o(1))),

    from the assumption of the theorem, we have

    rS(B)S(B)¯F(aBx+bB)rS(B)(1+o(1))nx.

    Consequently,

    rnn¯F(aBx+bB)rnn{,if x>1,,if x<1. (3.44)

    Thus, for every ϵ>0, we get P(Xrn:nbBaB<ϵ1)nN()=1, and this implies

    P(Xrn:nbBaB>ϵ1)n0. (3.45)

    Further, we have

    P(Xrn:nbBaB<ϵ1)n0. (3.46)

    Hence, (3.45) and (3.46) lead to

    P(|Xrn:nbBaB+1|>ϵ)n0,

    which proves (3.42). Secondly, we are going to prove (3.43). Clearly,

    rnn¯F(aBx+bB)rn=nS(B)rS(B)n¯F(aBx+bB)nS(B)rS(B)=
    nS(B)(rS(B)S(B)¯F(aBx+bB)rS(B))n{,if x>0,,if x<0. (3.47)

    For every ϵ>0, Eq (3.47) leads to P(Xrn:nbBaB<ϵ)nN()=1, which yields

    P(Xrn:nbBaB>ϵ)n0. (3.48)

    In the same way, we get

    P(Xrn:nbBaB<ϵ)n0. (3.49)

    Thus, relations (3.48) and (3.49) imply

    P(|Xrn:nbBaB|>ϵ)n0,

    which proves (3.43). Finally, in the proof of Parts (ⅰ)–(ⅲ), in order to switch to the convergence w.p.1, we argue in the same way that we did at the end of Theorem 3.3's proof. This completes the proof of the theorem.

    In this section, the asymptotic behaviour of the bootstrap distribution for central m-GOSs, H(m,k)rB,n,B(x)=P(YBrB+1:BdBcBx|Xn), is considered for different choices of the re-sample size B=B(n) when the normalizing constants cn and dn are assumed to be known or unknown.

    The next theorem discusses the consistency of the bootstrap distribution H(m,k)rB,n,B(x) in the case of full-sample bootstrap. It is revealed that the full-sample bootstrap distribution fails to be a consistent estimator of the DF Ψ(m,k)λ(x).

    Theorem 4.1. Assume that the relation (2.9) is satisfied, where the weak limits are of the form

    Ψ(m,k)λ(x)=N(Cλ(m)Cλ(m+1)Vi,α(x)),fori=1,2,3,4. (4.1)

    Then,

    H(m,k)rn,n,n(x)dnN(Λ(x)), (4.2)

    where Λ(x) has a normal distribution with mean Cλ(m)Cλ(m+1)Vi;α(x) and variance Cλ(m)Cλ(m+1).

    Proof. Although the convergence in (2.9) does not yield continuous types in general, under the condition rn=λn+o(n), Barakat and El-Shandidy [5] showed that the convergence is uniform. Consequently,

    H(m,k)rn,n,B(x)=N(Bλ¯Fnm+1(cBx+dB)Cλ)+ξn(x),

    where ξn(x)n0 uniformly with respect to x. Now, consider that the condition B=n holds. Since (2.8) is satisfied, we have

    n(λ(m)¯F(cnx+dn)Cλ(m))nVi;α(x). (4.3)

    An application of the central limit theorem yields

    n¯Fn(cnx+dn)n¯F(cnx+dn)n¯F(cnx+dn)(1¯F(cnx+dn))dnZ(x), (4.4)

    where Z(x) is the standard normal RV. On the other hand, it is clear from relation (4.3) that ¯F(cnx+dn)nλ(m), which implies

    n¯F(cnx+dn)(1¯F(cnx+dn))nCλ(m)n1. (4.5)

    The two limit relations (4.3) and (4.5) enable us to apply Khinchin's type theorem on the relation (4.4) to get

    n(λ(m)¯Fn(cnx+dn)Cλ(m))=n(λ(m)¯F(cnx+dn)Cλ(m))n¯Fn(cnx+dn)n¯F(cnx+dn)nCλ(m)nVi;α(x)Z(x). (4.6)

    As a result of (4.6), we get

    ¯Fnm+1(cnx+dn)=λ(1Cλ(m)(Vi,α(x)Z(x))λ(m)n(1+o(1)))m+1=λ(1(m+1)Cλ(m)(Vi;α(x)Z(x))λ(m)n(1+o(1))). (4.7)

    Consequently,

    n(λ¯Fnm+1(cnx+dn)Cλ)n(m+1)λCλ(m)λ(m)Cλ(Vi;α(x)Z(x))=Cλ(m)Cλ(m+1)(Vi;α(x)Z(x)).

    Hence, the theorem is proved.

    Theorem 4.2. Under the same conditions of Theorem 4.1, if B=o(n), then

    supxR|H(m,k)rB,n,B(x)Ψ(m,k)λ(x)|pn0. (4.8)

    Furthermore, if B is such that n=1PnB<, P(0,1), then

    supxR|H(m,k)rB,n,B(x)Ψ(m,k)λ(x)|w.p.1n0. (4.9)

    Proof. Write H(m,k)rB,n,B(x)=N(Sn,B(x))+ξn(x), where, Sn,B(x)=B(λ(m)¯Fn(cBx+dB)Cλ(m)), and ξn(x)n0 uniformly with respect to x. It can be noted that

    E(Sn,B(x))=B(λ(m)ˉF(cBx+dB)Cλ(m))nVi;α(x), (4.10)

    and

    Var(Sn,B(x))=B¯F(cBx+dB)(1¯F(cBx+dB))nC2λ(m)n0. (4.11)

    Accordingly,

    B(λ(m)¯Fn(cBx+dB)Cλ(m))pnVi;α(x). (4.12)

    We'll now show that the convergence in (4.12) is w.p.1. For this purpose, write

    B(λ(m)¯Fn(cBx+dB)Cλ(m))=B(λ(m)¯F(cBx+dB)Cλ(m))Bn(n¯Fn(cBx+dB)n¯F(cBx+dB)nCλ(m)).

    On the other hand, the assumptions of the theorem ensure that ¯F(cBx+dB)nλ(m), and B(λ(m)¯F(cBx+dB)Cλ(m))nVi;α(x). To prove the limit relation B(λ(m)¯Fn(cBx+dB)Cλ(m))w.p.1nVi;α(x), it is sufficient to show that

    Bn(n¯Fn(cBx+dB)n¯F(cBx+dB)nCλ(m))=Bn(n¯Fn(cBx+dB)n¯F(cBx+dB)n¯F(cBx+dB)(1¯F(cBx+dB))w.p.1n0.

    According to the Borel-Cantelli lemma, we need to show that, for every ϵ>0,

    n=1P(Bn|n¯Fn(cBx+dB)n¯F(cBx+dB)n¯F(cBx+dB)(1¯F(cBx+dB)|>ϵ)<.

    For every θ>0, we have

    BnlogP(Bn(n¯Fn(cBx+dB)n¯F(cBx+dB)n¯F(cBx+dB)(1¯F(cBx+dB))>ϵ)=BnlogP(n¯Fn(cBx+dB)n¯F(cBx+dB)n¯F(cBx+dB)(1¯F(cBx+dB)>ϵnB)=BnlogP(eθTn,B>eθϵnB),

    where Tn,B is defined by

    Tn,B=n¯Fn(cBx+dB)n¯F(cBx+dB)n¯F(cBx+dB)(1¯F(cBx+dB).

    From Markov's inequality, we get

    BnlogP(eθTn,B>eθϵnB)Bnlog(eθϵnBE(eθTn,B))Bn(θϵnB+logMB(θ))=θϵ+BnlogMB(θ)nθϵ.

    Consequently, for sufficiently large n, we get

    n=1P(Bn(n¯Fn(cBx+dB)n¯F(cBx+dB)n¯F(cBx+dB)(1¯F(cBx+dB))>ϵ)n=1elogP(BnTn,B>ϵ)n=1eθϵnB<.

    By using the same method, we can show that, for every ϵ>0,

    n=1P(Bn(n¯Fn(cBx+dB)n¯F(cBx+dB)n¯F(cBx+dB)(1¯F(cBx+dB))<ϵ)<.

    Since the condition, n=1PnB<, P(0,1), assures the convergence of the series n=1exp{θϵnB} for every ϵ>0, we have

    B(λ(m)¯Fn(cBx+dB)Cλ(m))w.p.1nVi;α(x). (4.13)

    Therefore,

    ¯Fnm+1(cBx+dB)=λ(1Cλ(m)Vi;α(x)λ(m)n(1+o(1)))m+1=λ(1(m+1)Cλ(m)Vi;α(x)λ(m)n(1+o(1))),w.p.1. (4.14)

    Accordingly,

    n(λ¯Fnm+1(cBx+dB)Cλ)w.p.1n(m+1)λCλ(m)λ(m)CλVi;α(x)=Cλ(m)Cλ(m+1)Vi;α(x),

    which was to be proved, and this completes the proof of the theorem.

    Assume that the normalizing constants cn and dn are unknown and that they must be estimated using the sample data Xn=(X1,X2,...,Xn). Let ˆcB and ˆdB be the estimators of cn and dn based on Xn and ˆH(m,k)rB,n,B(x)=P(YBrB+1:BˆdBˆcBx|Xn) be the bootstrap distribution for appropriately normalized central m-GOSs. The next theorem gives sufficient conditions for ˆH(m,k)rB,n,B(x) to be consistent, where we restrict ourselves to the first three non-degenerate types, corresponding to Vi;α(x),i=1,2,3. Clearly, each of these three limit laws has at most one discontinuity point of the first type.

    Theorem 4.3. Suppose that the conditions of Theorem 4.2 are satisfied. Moreover, suppose that ˆcB, ˆdB, and B=B(n) satisfy the following three conditions:

    (K1)H(m,k)rB,n,B(x)w.p.1nΨ(m,k)λ(x),(K2)ˆcBcBw.p.1n1,and(K3)ˆdBcBdBw.p.1n0.

    Then, supxRc|^H(m,k)rB,n,B(x)Ψ(m,k)λ(x)|w.p.1n0, where RcR is the set of all continuity points of Ψ(m,k)λ(x). Moreover, this theorem holds if "w.p.1n" is replaced by "pn".

    Proof. The proof of the theorem is similar to the proof of Theorem 3.3.

    Now, for the consistency of the bootstrap distribution ˆH(m,k)rB,n,B(x), the next theorem gives a proper choice for the normalizing constants ˆcB and ˆdB satisfying conditions (K2) and (K3) in Theorem 4.3 for each domain of attraction of N(Cλ(m)Cλ(m+1)Vi;α(x)),i=1,2,3.

    Theorem 4.4. Let rn=[λ(m)n]+1, rn=[nB+n]+1, and rn=[λ(m)nnB]+1. Then,

    i. ˆcB=F1n(λ(m)+1B)F1n(λ(m))=Xrn:nXrn:n, and ˆdB=F1n(λ(m))=Xrn:n, if FDλ(m)(N(V1;α(x)));

    ii. ˆcB=F1n(λ(m))F1n(λ(m)1B)=Xrn:nXrn:n, and ˆdB=F1n(λ(m))=Xrn:n, if FDλ(m)(N(V2;α(x)));

    iii. ˆcB=F1n(λ(m)+1B)F1n(λ(m))=Xrn:nXrn:n, and ˆdB=F1n(λ(m))=Xrn:n, if FDλ(m)(N(V3;α(x))).

    Moreover, if B = o(n), then

    \begin{equation} \underset{x\in\mathbb{R}}{\sup}\left|\hat H^{\ast(m, k)}_{r_{B}, n, B}(x)-\Psi_\lambda^{(m, k)}(x)\right|\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (4.15)

    Finally, if \sum_{n = 1}^{\infty}P^{\sqrt{\frac{n}{B}}} < \infty for each P \in (0, 1), then the convergence in (4.15) holds w.p.1.

    Proof. Let F\in D^{\lambda_{(m)}}(\mathcal{N}(V_{1;\alpha}(x))). In view of Theorem 4.3, we need to show that

    \begin{equation} \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-X_{r^{'}_{n}:n}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}1, \end{equation} (4.16)

    and

    \begin{equation} \frac{\hat{d}_{B}-{d}_{B}}{{c}_{B}} = \frac{{X_{r^{'}_{n}:n}}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}0, \end{equation} (4.17)

    both in probability or w.p.1. We start with the convergence in probability. Clearly,

    \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-X_{r^{'}_{n}:n}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-{d}_{B}}{{c}_{B}}-\frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}.

    To prove (4.16) and (4.17), it is sufficient to show that

    \begin{equation} \frac{X_{r^{''}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}1, \end{equation} (4.18)

    and

    \begin{equation} \frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (4.19)

    We will start by proving (4.18). In view of the relations \left[\frac{n}{\sqrt{B}}+\lambda_{(m)} n\right] = \frac{n}{\sqrt{B}}+\lambda_{(m)} n-\delta and \frac{1}{\sqrt{B}}+\lambda_{(m)} +\frac{1-\delta}{n}\sim \lambda_{(m)} for 0\leq \delta < 1, we get

    \begin{equation*} \frac{(n-r^{''}_{n})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{r^{''}_{n}(1-\frac{r^{''}_{n}}{n})}} = \sqrt{n}\frac{\left(1-\frac{1}{\sqrt{B}}-\lambda_{(m)}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\frac{1}{\sqrt{B}}+\lambda_{(m)}+\frac{1-\delta}{n}\right)\left(1-\left(\frac{1}{\sqrt{B}}+\lambda_{(m)} +\frac{1-\delta}{n}\right)\right)}} \end{equation*}
    \begin{equation} \sim \sqrt{ \frac{n}{B}}\left(\sqrt{B}\frac{(1-\lambda_{(m)})-\overline{F}(c_{B}x+d_{B})} {C_{\lambda_{(m)}}}-\frac{1+\frac{\sqrt{B}}{n}(1-\delta)}{C_{\lambda_{(m)}}}\right) \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > 1, \\ -\infty, & {\rm{if }}\ x < 1. \end{cases} \end{equation} (4.20)

    Relation (4.20) is a direct consequence of the obvious relations,

    \frac{1+\frac{\sqrt{B}}{n}(1-\delta)}{C_{\lambda_{(m)}}}\xrightarrow[\;\;n\;\;]{}\frac{1}{C_{\lambda_{(m)}}} = c\; \; \mbox{and}\; \; \sqrt{B}\frac{(1-{\lambda_{(m)}})-\overline{F}(c_{B}x+d_{B})} {C_{\lambda_{(m)}}}\xrightarrow[\;\;n\;\;]{}cx^\alpha.

    The relation (4.20) yields P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} < \epsilon+1\right)\xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, which is equivalent to

    \begin{equation} P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} > \epsilon+1\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.21)

    Similarly, we get

    \begin{equation} P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} < -\epsilon+1\right)\xrightarrow[\;\;n\;\;]{}\mathcal{N}(-\infty) = 0. \end{equation} (4.22)

    From (4.21) and (4.22), we get P\left(\left|\frac{X_{r^{''}_{n}:n}-d_{B}}{{c}_{B}}-1\right| > \epsilon\right)\xrightarrow[\; \; n\; \; ]{}0, which proves (4.18). Now, we are going to prove (4.19). It is simple to derive that

    \begin{equation*} \frac{(n-r^{'}_{n})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{r^{'}_{n}(1-\frac{r^{'}_{n}}{n})}} = \sqrt{n}\frac{\left(1-\lambda_{(m)}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\lambda_{(m)}+\frac{1-\delta}{n}\right)\left(1-\left(\lambda_{(m)} +\frac{1-\delta}{n}\right)\right)}} \end{equation*}
    \begin{equation} = \sqrt{\frac{n}{B}}\left(\sqrt{B}\frac{\left(1-\lambda_{(m)}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\lambda_{(m)}+\frac{1-\delta}{n}\right)\left(1-\left(\lambda_{(m)} +\frac{1-\delta}{n}\right)\right)}}\right) \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > 0, \\ -\infty, & {\rm{if }}\ x < 0. \end{cases} \end{equation} (4.23)

    Thus, from (4.23), we have P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} < \epsilon\right)\xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, which is equivalent to

    \begin{equation} P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} > \epsilon\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.24)

    Similarly, we get

    \begin{equation} P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} < -\epsilon\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.25)

    Therefore, by combining the relations (4.24) and (4.25), we get P\left(\left|\frac{X_{r^{'}_{n}:n}-d_{B}}{{c}_{B}}\right| > \epsilon\right)\xrightarrow[\; \; n\; \; ]{}0, which proves (4.19). This completes the proof of Part ⅰ. Now, let F(c_nx+d_n)\in D^{\lambda_{(m)}}(\mathcal{N}(V_{2;\alpha}(x))). It is sufficient to establish from Theorem 4.3 that

    \begin{equation} \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{'}_{n}:n}-X_{r^{'''}_{n}:n}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}1, \end{equation} (4.26)

    and

    \begin{equation} \frac{\hat{d}_{B}-{d}_{B}}{{c}_{B}} = \frac{{X_{r^{'}_{n}:n}}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}0, \end{equation} (4.27)

    both in probability or w.p.1. We begin with the situation of probability convergence, and we begin with

    \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{'}_{n}:n}-X_{r^{'''}_{n}:n}}{{c}_{B}} = \frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}-\frac{X_{r^{'''}_{n}:n}-{d}_{B}}{{c}_{B}}.

    Hence, to prove (4.26) and (4.27), it is sufficient to show that

    \begin{equation} \frac{X_{r^{'''}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}-1, \end{equation} (4.28)

    and

    \begin{equation} \frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (4.29)

    We are going to prove (4.28). By applying the relations [\lambda_{(m)}n-\frac{n}{\sqrt{B}}] = \lambda_{(m)}n-\frac{n}{\sqrt{B}}-\delta, \; 0 \leq\delta < 1, and \lambda_{(m)}-\frac{1}{\sqrt{B}} +\frac{1-\delta}{n}\sim \lambda_{(m)}, as n\to\infty, we can deduce that

    \begin{equation*} \frac{(n-r^{'''}_{n})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{r^{'''}_{n}(1-\frac{r^{'''}_{n}}{n})}} = \sqrt{n}\frac{\left(n-\lambda_{(m)}+\frac{1}{\sqrt{B}}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\lambda_{(m)}-\frac{1}{\sqrt{B}}+\frac{1-\delta}{n}\right)\left(1-\left(\lambda_{(m)}-\frac{1}{\sqrt{B}} +\frac{1-\delta}{n}\right)\right)}} \end{equation*}
    \begin{equation} \sim\sqrt{\frac{n}{B}}\left(\sqrt{B}\frac{(1-\lambda_{(m)})-\overline{F}(c_{B}x+d_{B})} {C_{\lambda_{(m)}}}-\frac{-1+\frac{\sqrt{B}}{n}(1-\delta)}{C_{\lambda_{(m)}}}\right) \xrightarrow[\;\;n\;\;]{} \begin{cases} -\infty, & {\rm{if }}\ |x| > 1, \\ \infty, & {\rm{if }}\ |x| < 1. \end{cases} \end{equation} (4.30)

    Thus, on account of (4.30), we get P\left(\frac{X_{r^{'''}_{n}:n}-d_{B}}{c_{B}} < \epsilon-1\right)\xrightarrow[\; \; n\; \; ]{}{}\mathcal{N}(\infty) = 1, which implies

    \begin{equation} P\left(\frac{X_{r^{'''}_{n}:n}-d_{B}}{c_{B}} > \epsilon-1\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.31)

    In a similar vein, we have

    \begin{equation} P\left(\frac{X_{r^{'''}_{n}:n}-d_{B}}{c_{B}} < -\epsilon-1\right)\xrightarrow[\;\;n\;\;]{}\mathcal{N}(-\infty) = 0. \end{equation} (4.32)

    From (4.31) and (4.32), we get P\left(\left|\frac{X_{r^{'''}_{n}:n}-d_{B}}{{c}_{B}}+1\right| > \epsilon\right)\xrightarrow[\; \; n\; \; ]{}0. Hence, (4.28) is proved. We turn now to prove (4.29). We start with the obvious limit relation

    \begin{equation*} \frac{(n-r^{'}_{n})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{r^{'}_{n}(1-\frac{r^{'}_{n}}{n})}} = \sqrt{n}\frac{\left(1-\lambda_{(m)}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\lambda_{(m)}+\frac{1-\delta}{n}\right)\left(1-\left(\lambda_{(m)} +\frac{1-\delta}{n}\right)\right)}} = \end{equation*}
    \begin{equation} \sqrt{\frac{n}{B}}\left(\sqrt{B}\frac{\left(1-\lambda_{(m)}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\lambda_{(m)}+\frac{1-\delta}{n}\right)\left(1-\left(\lambda_{(m)} +\frac{1-\delta}{n}\right)\right)}}\right) \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > 0, \\ -\infty, & {\rm{if }}\ x < 0, \end{cases} \end{equation} (4.33)

    which in turn implies that P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} < \epsilon\right)\xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, and hence

    \begin{equation} P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} > \epsilon\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.34)

    Moreover, the limit relation (4.33) yields

    \begin{equation} P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} < -\epsilon\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.35)

    By combining (4.34) and (4.35), we get P\left(\left|\frac{X_{r^{'}_{n}:n}-d_{B}}{{c}_{B}}\right| > \epsilon\right)\xrightarrow[\; \; n\; \; ]{}0, which proves (4.29).

    Finally, consider the case F\in D^{\lambda_{(m)}}(\mathcal{N}(V_{3;\alpha}(x))). From Theorem 4.3, it suffices to show that

    \begin{equation} \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-X_{r^{'}_{n}:n}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}1, \end{equation} (4.36)

    and

    \begin{equation} \frac{\hat{d}_{B}-{d}_{B}}{{c}_{B}} = \frac{{X_{r^{'}_{n}:n}}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}0, \end{equation} (4.37)

    both in probability or w.p.1. We first focus on the case of the convergence in probability, and we start with

    \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-X_{r^{'}_{n}:n}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-{d}_{B}}{{c}_{B}}-\frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}.

    Therefore, to prove (4.36) and (4.37), it is sufficient to show that

    \begin{equation} \frac{X_{r^{''}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}1, \end{equation} (4.38)

    and

    \begin{equation} \frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (4.39)

    By proceeding in the same manner that we did in Parts ⅰ and ⅱ, we can easily show that

    \begin{equation} \frac{(n-r^{''}_{n})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{r^{''}_{n}(1-\frac{r^{''}_{n}}{n})}} \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > 1, \\ -\infty, & {\rm{if }}\ x < 1. \end{cases} \end{equation} (4.40)

    Relation (4.40) yields P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} < \epsilon+1\right)\xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, which is equivalent to

    \begin{equation} P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} > \epsilon+1\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.41)

    Similarly, we get

    \begin{equation} P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} < -\epsilon+1\right)\xrightarrow[\;\;n\;\;]{}\mathcal{N}(-\infty) = 0. \end{equation} (4.42)

    The two limit relations (4.41) and (4.42) yield

    P\left(\left|\frac{X_{k^{''}_{n}:n}-d_{B}}{{c}_{B}}-1\right| > \epsilon\right)\xrightarrow[\;\;n\;\;]{}0,

    which in turn proves (4.38). On the other hand, the proof of the relation (4.37) follows also by proceeding as we did in Parts i and ii. Finally, in order to transfer to the convergence w.p.1 in Parts ⅰ–ⅱ, we argue in the same way we did at the end of the proof of Theorem 3.3. The proof of the theorem is now completed.

    In this section, a simulation study is conducted to explain how we can choose the bootstrap re-sample size B numerically, relying on the best fit of the bootstrapping DF of central and intermediate quantiles for different ranks. More specifically, we apply the Kolmogorov-Smirnov (K-S) goodness of fit test to test the null hypothesis H_0: The central (intermediate) quantiles follow a normal distribution at a 5% significance level. Moreover, we repeat the K-S test many times for different values of B and then select the value of B that corresponds to the largest average p -value for the fit. See Tables 13.

    Table 1.  The average p -values ( \overline{p} ) for fitting different bootstrap central quantiles of the SOSs and OOSs models to a normal distribution with various values of B.
    k=1, \; m=1 (SOSs) k=1, \; m=2 (SOSs) k=1, \; m=0 (OOSs)
    \lambda=0.5 \lambda=0.25 \lambda=0.5 \lambda=0.25 \lambda=0.5 \lambda=0.25
    B \overline p \overline p \overline p \overline p \overline p \overline p
    100 0.25239 0.31176 0.34101 0.22219 0.35895^{\star} 0.26554
    150 0.29968^{\star} 0.24726 0.36257^{\star} 0.22550 0.31619 0.31344^{\star}
    200 0.17472 0.32276^{\star} 0.34588 0.22552^{\star} 0.32378 0.21946
    250 0.21397 0.24931 0.31296 0.18478 0.27626 0.24845
    300 0.19569 0.29015 0.26269 0.18825 0.29849 0.18454
    350 0.19635 0.26083 0.24576 0.18935 0.27687 0.16976
    400 0.15829 0.26653 0.26114 0.17181 0.24873 0.17627

     | Show Table
    DownLoad: CSV
    Table 2.  The average p -values for fitting bootstrap intermediate quantiles of the OOSs and SOSs models to a normal distribution with various values of B.
    Intermediate (OOSs) Intermediate (SOSs)
    B \overline p ( r_{n}=\sqrt{n} ) \overline p ( r^{\star}_{n}=\sqrt{n} )
    100 0.12289 0.02019
    150 0.20122 0.07148
    200 0.28733^{\star} 0.14383
    250 0.18223 0.18207
    300 0.12546 0.21568^{\star}
    350 0.08254 0.17340
    400 0.05879 0.12973

     | Show Table
    DownLoad: CSV
    Table 3.  The average p -values for fitting bootstrap intermediate quantiles of OOSs to a normal distribution with various values of B when M = 1000 and n = 200,000 .
    Intermediate (OOSs)
    B \overline p ( r_{n}=\sqrt{n} )
    100 0.30920
    150 0.22490
    200 0.25541
    250 0.36622
    300 0.43944^{\star}
    350 0.39108
    400 0.38285

     | Show Table
    DownLoad: CSV

    In this simulation, three m -GOSs sub-models based on the standard normal distribution are considered. Namely, these are OOSs ( \gamma_i = n-i+1 ), SOSs with m = 1 (i.e. \gamma_i = 2(n-i)+1 ) and SOSs with m = 2 (i.e., \gamma_i = 3(n-i)+1 ). Moreover, the full sample size is n = 20,000 (if we enlarge the sample size, the running time becomes very long, especially for the SOSs model), the number of replicates is M = 1000 , the central ranks are \lambda = 0.25 and \lambda = 0.5, for the re-sample bootstrap, B = 100 - 400(50) , and we choose the intermediate rank to be r_{n} = \sqrt{n}. The results are presented in Tables 1 and 2.

    This study relies on the fact that the sample central and intermediate quantiles based on the standard normal distribution weakly converge to the normal distribution. Moreover, according, to the results of Sections 3 and 4, we expect that the bootstrapping DFs of central and intermediate quantiles converge to the normal distribution provided that B \ll n (i.e., B = o(n) ) for central ranks and S(B) \ll n (i.e., S(B) = o(n) ) for intermediate ranks. Moreover, based on the K-S test for normality and the corresponding p -values, the best value of B for central ranks (that corresponds to the largest p -value) should be chosen such that \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{B}}} < \infty, \forall \lambda\in(0, 1), while the best value of B for intermediate ranks should chosen such that \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{S(B)}}} < \infty, \forall \lambda\in(0, 1) (see Remark 5.3).

    The simulation study is implemented by Mathematica 12.3 via the following algorithm:

    Step 1. Select the m -GOSs model. That is, choose the values of \gamma_i .

    Step 2. Generate an ordered sample of size n = 20000 , say {\bf{X}}_n , based on the standard normal distribution by the algorithm of Barakat et al. [8].

    Step 3. Choose the central or intermediate rank.

    Step 4. Select the bootstrap re-sample size B.

    Step 5. Select M random samples, each of which is size B, from {\bf{X}}_n with replacement.

    Step 6. Compute the quantile of each sample, and store them in Q_B .

    Step 7. By using the K-S test, check the normality of the data sets Q_B to the normal distribution and then compute the p -value.

    Step 8. Repeat Steps 5–7 many times (100 times) for each chosen value of B and then compute the average p -value.

    Step 9. Repeat Steps 4–8 for different values of B , and then pick the largest average p -value and the corresponding value of B .

    Remark 5.1. In the earlier version of this paper, in order to implement Step 7 of the previous algorithm, we fitted the data sets Q_B to the normal DF by using the K-S test after calculating the sample mean and standard deviation. However, we noted an important issue that the K-S test can be used to fit the normal distribution only when parameters are not estimated from the data (cf. [24]). Since our focus here is only on checking the normality of the bootstrap samples, we apply the K-S test to check the normality of the given sample bootstrapping statistics without estimating any parameters.

    Remark 5.2. According to the results presented in Tables 1 and 2, it is noted that for n = 20000, the largest average p -values for selected central ranks are always achieved at B , which falls in the interval [100–200]. Moreover, the best average p -values for selected intermediate ranks are achieved at B falling in the interval [200–300]. However, it is shown that the accuracy of the goodness of fit depends on both the selected GOSs model and the selected ranks. Moreover, in view of the results given in Table 3, the average p -value for intermediate OSs increases as the sample size increases with the same bootstrap re-sample size.

    Remark 5.3. Based on the results of Sections 3 and 4, the best performance of the bootstrapping DFs of the central m-GOSs occurs at the values of B for which \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{B}}} < \infty, \forall \lambda\in(0, 1) . On the other hand, according to [2], the condition \sqrt{B} = o(\frac{2\sqrt{n}}{\log n}) is a sufficient condition for \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{B}}} < \infty, \forall \lambda\in(0, 1), which implies that the best performance of the bootstrapping DFs of the central OSs occurs when B\ll800. Moreover, in the case of intermediate m-GOSs, the condition \sqrt{S(B)} = o(\frac{2\sqrt{n}}{\log n}) is a sufficient condition for \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{S(B)}}} < \infty, \forall \lambda\in(0, 1), which implies that the best performance of the bootstrapping DFs of intermediate m-GOSs occurs when S(B)\ll800. Since in our study we choose the intermediate rank r_{n} = \sqrt{n}, this implies B\ll7500. Therefore, the simulation output endorses this anticipated result.

    The bootstrap method is an efficient procedure for solving many statistical problems by re-sampling from the available data. For example, it is used to obtain standard errors for estimators, confidence intervals for unknown parameters, and p-values for test statistics under a null hypothesis. One of the desired properties of the bootstrapping method is consistency, which guarantees that the limit of the bootstrap distribution coincides with the distribution of the given statistic. In this paper, we investigated the strong consistency of bootstrapping central and intermediate m -GOSs for an appropriate choice of re-sample size, under both known and unknown normalizing constants. Finally, a simulation study was conducted to explain how the bootstrap re-sample size B can be chosen numerically, relying on the best fit of the bootstrapping DF of central and intermediate quantiles for different ranks.
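    As a generic illustration of these uses of re-sampling (a minimal sketch of ours, not specific to GOSs), the percentile bootstrap below produces a standard error and a 95% confidence interval for a sample median:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.standard_normal(200)   # any observed sample
M = 2000                          # number of bootstrap re-samples

# Re-sample with replacement, recompute the statistic, and read off its spread.
boot_medians = np.array([np.median(rng.choice(data, size=data.size, replace=True))
                         for _ in range(M)])
se = boot_medians.std(ddof=1)                   # bootstrap standard error
ci = np.percentile(boot_medians, [2.5, 97.5])   # 95% percentile interval
print(se, ci)
```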

    The authors are grateful to the editor and anonymous referees for their insightful comments and suggestions, which helped to improve the paper's presentation.

    All authors declare no conflict of interest in this paper.



    [1] Anjum S, Saleem M, Cheema M, et al. (2012) An assessment to vulnerability, extent, characteristics and severity of drought hazard in Pakistan. Pak J Sci 64: 138–143.
    [2] Hussain A, Zulqarnain M, Hussain J (2010) Catastrophes in the South Punjab due to Climate Change and the Role of PIDEANS. Center for Environmental Economics and Climate Change (CEECC), Islamabad. Available from: www.pide.org.pk.
    [3] McElhinney H (2011) Six months into the floods: Resetting Pakistan's priorities through reconstruction, 144.
    [4] Janjua PZ, Samad G, Khan NU, et al. (2010) Impact of climate change on wheat production: A case study of Pakistan [with comments]. Pak Dev Rev 49: 799–822.
    [5] World Bank (2017) World Development Indicators. Washington, DC: World Bank. Available from: https://data.worldbank.org.
    [6] Government of Pakistan (2016) Pakistan Economic Survey 2015–2016. Islamabad: Economic Adviser's Wing, Finance Division, Government of Pakistan. Available from: http://www.finance.gov.pk/survey/chapters_16/02_Agriculture.pdf.
    [7] Trade Development Authority of Pakistan (2016) Statistics for 2015–2016. Islamabad: TDAP. Available from: http://www.tdap.gov.pk/tdap-statistics.php.
    [8] Ashraf M, Routray JK (2013) Perception and understanding of drought and coping strategies of farming households in north-west Balochistan. Int J Disaster Risk Reduct 5: 49–60
    [9] United Nations Development Program (2015) Drought risk assessment in the province of Balochistan, Pakistan. Available from: file:///C:/Users/Home/Downloads/Drought-Risk-Asst-Balochistan-Nov%202015-lowres%20(7).pdf.
    [10] Osbahr H, Twyman C, Adger WN, et al. (2008) Effective livelihood adaptation to climate change disturbance: scale dimensions of practice in Mozambique. Geoforum 39: 1951–1964.
    [11] Mortimore MJ, Adams WM (2001) Farmer adaptation, change and 'crisis' in the Sahel. Global Environ Chang 11: 49–57.
    [12] Rehman T, Panezai S, Ainuddin S (2019) Drought perceptions and coping strategies of drought-prone rural households: a case study of Nushki District, Balochistan. J Geogr Soc Sci 1: 44–56.
    [13] Hewitt K (2014) Regions of risk: A geographical introduction to disasters. Routledge.
    [14] Obasi GOP (1994) WMO's role in the international decade for natural disaster reduction. Bull Am Meteorol Soc 75: 1655–1662.
    [15] Wilhite DA (2000) Drought as a natural hazard: concepts and definitions. Available from https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1068&context=droughtfacpub.
    [16] Sivakumar MVK (2014) Impacts of natural disasters in agriculture: An overview. World Meteorological Organisation, Geneva, Switzerland.
    [17] Mniki S (2009) Socio-economic impact of drought induced disasters on farm owners of Nkonkobe Local Municipality. Doctoral dissertation, University of the Free State. Available from: https://www.ufs.ac.za/docs/librariesprovider22/disaster-management-training-and-education-centre-for-africa-(dimtec)-documents/dissertations/2253.pdf?sfvrsn=cafef821_2.
    [18] Parry M, Parry M, Canziani O, et al. (2007) Climate change 2007-impacts, adaptation and vulnerability: Working group Ⅱ contribution to the fourth assessment report of the Intergovernmental Panel on Climate Change. Cambridge University Press. Available from: https://www.ipcc.ch/site/assets/uploads/2018/03/ar4_wg2_full_report.pdf.
    [19] Lal M (2003) Global climate change: India's monsoon and its variability. J Environ Stud Policy 6: 1–34.
    [20] Dabo-Niang S, Hamdad L, Ternynck C, et al. (2014) A kernel spatial density estimation allowing for the analysis of spatial clustering. Application to Monsoon Asia Drought Atlas data. Stochastic Environ Res risk Assess 28: 2075–2099.
    [21] Islam M, Ahmad S, Afzal M (2004) Drought in Balochistan of Pakistan: prospects and management. In Proceedings of the international congress on Yak, Chengdu. Available from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.539.9507&rep=rep1&type=pdf.
    [22] National Drought Monitoring Centre Pakistan Meteorological Department (2013) Drought Bulletin of Pakistan. Available from: http://www.ndmc.pmd.gov.pk/quater1.pdf.
    [23] Shafiq M, Kakar MA (2007) Effects of drought on livestock sector in Balochistan Province of Pakistan. Int J Agric Biol (Pakistan) 9: 657–665.
    [24] Naz F, Dars GH, Ansari K, et al. (2020) Drought Trends in Balochistan. Water 12: 470.
    [25] Ashraf M, Routray JK (2015) Spatio-temporal characteristics of precipitation and drought in Balochistan Province, Pakistan. Nat Hazards 77: 229–254.
    [26] Jamro S, Channa FN, Dars GH, et al. (2020) Exploring the Evolution of Drought Characteristics in Balochistan, Pakistan. Appl Sci 10: 913.
    [27] Patt AG, Schröter D (2008) Perceptions of climate risk in Mozambique: implications for the success of adaptation strategies. Global Environ Change 18: 458–467.
    [28] Sherval M, Askew LE (2012) Experiencing 'drought and more': local responses from rural Victoria, Australia. Popul Environ 33: 347–364.
    [29] Hussein MH (2004) Bonded labour in agriculture: a rapid assessment in Sindh and Balochistan, Pakistan. ILO Working Papers 993675363402676. International Labour Organization. Available from: https://ideas.repec.org/p/ilo/ilowps/993675363402676.html.
    [30] Pakistan Disaster Management Authority (2018) Provincial monsoon contingency plan-2018. Available from: http://www.pdma.gob.pk/wp-content/uploads/2018/12/Final-MonSoon-2018.pdf.
    [31] Pakistan Bureau of Statistics (2017) Population census report. Available from: http://www.pbs.gov.pk/sites/default/files/DISTRICT_WISE_CENSUS_RESULTS_CENSUS_2017.pdf.
    [32] Government of Balochistan (2020) Explore Balochistan. Available from: http://balochistan.gov.pk/explore-balochistan/about-balochistan/.
    [33] Drought Bulletin of Pakistan (2013) National Drought Monitoring Centre Pakistan Meteorological Department, Available from: http://www.pmd.gov.pk/ndmc/quater215.pdf.
    [34] Khair SM, Culas RJ (2013) Rationalizing water management policies: tube well development and resource use sustainability in Balochistan region of Pakistan. Int J Water 7: 294–316.
    [35] Ashraf M, Routray JK, Saeed M (2014) Determinants of farmers' choice of coping and adaptation measures to the drought hazard in northwest Balochistan, Pakistan. Nat Hazards 73: 1451–1473.
    [36] Iqbal MW, Donjadee S, Kwanyuen B, et al. (2018) Farmers' perceptions of and adaptations to drought in Herat Province, Afghanistan. J Mt Sci 15: 1741–1756. https://doi.org/10.1007/s11629-017-4750-z
    [37] Arkin H, Colton RR (1963) Tables for statisticians. Barnes and Noble. Inc., New York.
    [38] Diggs DM (1991) Drought experience and perception of climatic change among Great Plains farmers. Great Plains Res 114–132.
    [39] West CT, Roncoli C, Ouattara F (2008) Local perceptions and regional climate trends on the Central Plateau of Burkina Faso. Land Degrad & Dev 19: 289–304.
    [40] Farooqi AB, Khan AH, Mir H (2005) Climate change perspective in Pakistan. Pak J Meteorol 2.
    [41] Maddison D (2007) The perception of and adaptation to climate change in Africa. The World Bank.
    [42] Habiba U, Shaw R, Takeuchi Y (2012) Farmer's perception and adaptation practices to cope with drought: Perspectives from Northwestern Bangladesh. Int J Disaster Risk Reduct 1: 72–84. https://doi.org/10.1016/j.ijdrr.2012.05.004
    [43] Chaudhry QUZ, Mahmood A, Rasul G, et al. (2009) Climate change indicators of Pakistan. Pak Meteorol Dep. Available from: http://www.pmd.gov.pk/CC%20Indicators.pdf.
    [44] Mertz O, Mbow C, Reenberg A, et al. (2009) Farmers' perceptions of climate change and agricultural adaptation strategies in rural Sahel. Environ Manage 43: 804–816.
    [45] Udmale P, Ichikawa Y, Manandhar S, et al. (2014) Farmers' perception of drought impacts, local adaptation and administrative mitigation measures in Maharashtra State, India. Int J Disaster Risk Reduct 10: 250–269.
    [46] Giordano M (2009) Global groundwater? Issues and solutions. Annu Rev Environ Res 34: 153–178.
    [47] Wada Y, Van Beek LP, Van Kempen CM, et al. (2010) Global depletion of groundwater resources. Geophys Res Lett 37.
    [48] Jülich S (2015) Development of a composite index with quantitative indicators for drought disaster risk analysis at the micro level. Hu Ecol Risk Assess: An International Journal 21: 37–66.
    [49] Van Steenbergen F (1995) The frontier problem in incipient groundwater management regimes in Balochistan (Pakistan). Hu Ecol 23: 53–74.
    [50] Pakistan H, Cameos C (2008) Supporting public resource management in Balochistan. Identification of recharge potential zones in three over-drawn basins (PLB, Nari, and Zhob) (final report). Irrigation and Power Department, Government of Balochistan, Royal Government of Netherlands.
    [51] Nasreen M (2004) Disaster research: exploring sociological approach to disaster in Bangladesh. Bangladesh e-journ Soc 1: 1–8.
    [52] Abid M, Schilling J, Scheffran J, et al. (2016) Climate change vulnerability, adaptation and risk perceptions at farm level in Punjab, Pakistan. Sci Total Environ 547: 447–460.
    [53] Cooper S, Wheeler T (2017) Rural household vulnerability to climate risk in Uganda. Reg Environ Change 17: 649–663.
    [54] Menghistu HT, Mersha TT, Abraha AZ (2018) Farmers' perception of drought and its socioeconomic impact: the case of Tigray and Afar regions of Ethiopia. J Appl Anim Res 46: 1023–1031.
    [55] Sam A, Kumar R, Kächele H, et al. (2017) Quantifying household vulnerability triggered by drought: evidence from rural India. Clim Dev 9: 618–633.
    [56] Panda A (2017) Vulnerability to climate variability and drought among small and marginal farmers: a case study in Odisha, India. Clim Dev 9: 605–617.
    [57] Osawe OW (2013) Livelihood vulnerability and migration decision making nexus: The case of rural farm households in Nigeria (No. 309-2016-5132).
    [58] Nyberg-Sorensen N, Van Hear N, Engberg-Pedersen P (2002) The migration-development nexus: Evidence and policy options. Int Migr 40: 49–75.
    [59] Bahta YT, Jordaan A, Muyambo F (2016) Communal farmers' perception of drought in South Africa: Policy implication for drought risk reduction. Int J Disaster Risk Reduct 20: 39–50.
    [60] Jordaan AJ (2012) Drought risk reduction in the Northern Cape, South Africa. Doctoral dissertation, University of the Free State.
    [61] Ngaka MJ (2012) Drought preparedness, impact and response: A case of the Eastern Cape and Free State provinces of South Africa. Jàmbá: J Disaster Risk Stud 4: 1–10.
    [62] Cooper PJM, Dimes J, Rao KPC, et al. (2008) Coping better with current climatic variability in the rain-fed farming systems of sub-Saharan Africa: An essential first step in adapting to future climate change? Agric Ecosystems & Environ 126: 24–35.
    [63] Fara K (2001) How natural are 'natural disasters'? Vulnerability to drought of communal farmers in Southern Namibia. Risk Manage 3: 47–63.
    [64] Paradise TR (2005) Perception of earthquake risk in Agadir, Morocco: A case study from a Muslim community. Global Environ Change Part B: Environ Hazards 6: 167–180.
    [65] Gornall J, Betts R, Burke E, et al. (2010) Implications of climate change for agricultural productivity in the early twenty-first century. Philos Trans Royal Soc B 365: 2973–2989.
    [66] Potopova V, Boroneanţ C, Boincean B, et al. (2016) Impact of agricultural drought on main crop yields in the Republic of Moldova. Int J Climatol 36: 2063–2082.
    [67] Jiri O, Mafongoya PL, Chivenge P (2017) Contextual vulnerability of rainfed crop-based farming communities in semi-arid Zimbabwe. Int J Clim Change Strategies Manage 9.
    [68] Muyambo F, Jordaan AJ, Bahta YT (2017) Assessing social vulnerability to drought in South Africa: Policy implication for drought risk reduction. Jàmbá: J Disaster Risk Stud 9: 1–7.
    [69] Mhlanga-Ndlovu BSFN, Nhamo G (2017) An assessment of Swaziland sugarcane farmer associations' vulnerability to climate change. J Integr Environ Sci 14: 39–57.
    [70] Sujakhu NM, Ranjitkar S, Niraula RR, et al. (2018) Determinants of livelihood vulnerability in farming communities in two sites in the Asian Highlands. Water Int 43: 165–182.
    [71] Huong NTL, Yao S, Fahad S (2019) Assessing household livelihood vulnerability to climate change: The case of Northwest Vietnam. Hu Ecol Risk Assess: An International Journal 25: 1157–1175.
    [72] Lemos MC, Finan TJ, Fox RW, et al. (2002) The use of seasonal climate forecasting in policymaking: lessons from Northeast Brazil. Clim Change 55: 479–507.
    [73] Bhandari H, Pandey S, Sharan R, et al. (2007) Economic costs of drought and rice farmers' drought-coping mechanisms in eastern India. Economic costs of drought and rice farmers' coping mechanisms: a cross-country comparative analysis. 43–111.
    [74] Dumenu WK, Obeng EA (2016) Climate change and rural communities in Ghana: Social vulnerability, impacts, adaptations and policy implications. Environ Sci & Policy 55: 208–217
    [75] Kuhlicke C, Steinführer A, Begg C, et al. (2011) Perspectives on social capacity building for natural hazards: outlining an emerging field of research and practice in Europe. Environ Sci & Policy 14: 804–814.
    [76] Jiao X, Moinuddin H (2016) Operationalizing analysis of micro-level climate change vulnerability and adaptive capacity. Clim Dev 8: 45–57.
    [77] Mishra S (2007) Household livelihood and coping mechanism during drought among oraon tribe of Sundargarh district of Orissa, India. J Soc Sci 15: 181–186. https://doi.org/10.1080/10807039.2018.1460801
    [78] Alemayehu A, Bewket W (2017) Smallholder farmers' coping and adaptation strategies to climate change and variability in the central highlands of Ethiopia. Local Environ 22: 825–839.
    [79] Asfaw A, Simane B, Bantider A, et al. (2019) Determinants in the adoption of climate change adaptation strategies: evidence from rainfed-dependent smallholder farmers in north-central Ethiopia (Woleka sub-basin). Environ Dev Sustainability 21: 2535–2565.
    [80] Rosenzweig MR, Wolpin KI (1993) Credit market constraints, consumption smoothing, and the accumulation of durable production assets in low-income countries: Investments in bullocks in India. J Political Econ 101: 223–244.
    [81] Moniruzzaman S (2015) Crop choice as climate change adaptation: Evidence from Bangladesh. Ecol Econ 118: 90–98.