Kamps [22] introduced the model of GOSs as a unified approach to a variety of ordered random variables (RVs), including ordinary order statistics (OOSs), sequential order statistics (SOSs), progressive type II censored order statistics (POSs), order statistics with a non-integer sample size, record values, and Pfeifer's record model. Since the GOSs model unifies the models of ordered RVs, the practical importance of GOSs is evident. For example, in reliability theory, the rth extreme OOS indicates the life-length of an (n-r+1)-out-of-n system, whereas the model of SOSs is an extension of the OOSs model that describes specific dependencies or interactions among system components induced by component failures. Furthermore, the POSs model is a valuable tool for collecting information in lifetime tests.

The uniform GOSs are defined via their joint probability density function (PDF) on a unit cone of \mathbb{R}^n. More specifically, let n\in\mathbb{N}, k\geq 1 and m_1, m_2, ..., m_{n-1}\in\mathbb{R} be parameters such that \gamma_r = k+n-r+\sum_{j = r}^{n-1}m_j > 0, for all r\in\{1, 2, ..., n-1\}.

If the RVs U(r, n, \tilde{m}, k) = U_{r:n}, r = 1, 2, ..., n, possess a PDF of the form

\begin{equation} f_{U_{1:n}, U_{2:n}, ..., U_{n:n}}(u_1, u_2, ..., u_n) = k\left(\prod\limits_{j = 1}^{n-1}\gamma_j\right)\left(\prod\limits_{j = 1}^{n-1}(1-u_j)^{m_j}\right)(1-u_n)^{\gamma_n-1} \end{equation} (1.1)

on the cone \{(u_1, u_2, ..., u_n): 0\leq u_1\leq u_2\leq ...\leq u_n < 1\}\subseteq\mathbb{R}^n, then they are called uniform GOSs. Furthermore, GOSs based on some distribution function (DF) F can be defined via the quantile transformation X(r, n, \tilde{m}, k) = X_{r:n} = F^{-1}(U_{r:n}), r = 1, 2, ..., n, where F^{-1} denotes the quantile function of F. On the other hand, by choosing the parameters appropriately, we can obtain different models of ordered RVs, such as m-GOSs (m_1 = m_2 = ... = m_{n-1} = m, \gamma_r = k+(n-r)(m+1), r = 1, 2, ..., n); OOSs, a sub-model of m-GOSs (m = 0 and k = 1); order statistics with non-integral sample size, a sub-model of m-GOSs (m = 0, k = \alpha-n+1 and n-1 < \alpha\in\mathbb{R}); SOSs (m_i = (n-i+1)\alpha_i-(n-i)\alpha_{i+1}-1, i = 1, 2, ..., n-1, 0 < \alpha_i\in\mathbb{R}, k = \alpha_n); kth record values (m_1 = m_2 = ... = m_{n-1} = -1, k\in\mathbb{N}); POSs with censoring scheme (R_1, R_2, ..., R_M) (m_i = R_i, i = 1, 2, ..., M-1, and m_i = 0, i = M, M+1, ..., n-1, and k = R_M+1); and Pfeifer's record model (m_i = \beta_i-\beta_{i+1}-1, i = 1, 2, ..., n-1, 0 < \beta_i\in\mathbb{R} and k = \beta_n).

The marginal DF, \Psi_{r:n}^{(m, k)}(x) = P(X_{r:n}\leq x), of the rth m-GOS is given in Kamps [22] by

\begin{equation*} \Psi_{r:n}^{(m, k)}(x) = 1-C_{r-1}\,\overline{F}^{\gamma_r}(x)\sum\limits_{i = 0}^{r-1}\frac{1}{i!\, C_{r-i-1}}\, g_m^i(x), \end{equation*}

where C_{r-1} = \prod_{i = 1}^{r}\gamma_i, r = 1, 2, ..., n, \gamma_n = k, and \overline{F}(x) = 1-F(x). Moreover, if m\neq -1, (m+1)g_m(x) = G_m(x) = 1-\overline{F}^{m+1}(x) is a DF, whereas g_{-1}(x) = -\log\overline{F}(x). Under the condition m\geq -1, the possible limit DFs of the maximum m-GOS and their domains of attraction under linear normalization were derived by Nasri-Roudsari [27]. Moreover, the limit DFs of \Psi_{n:n}^{(m, k)}(x) under power normalization were derived by Nasri-Roudsari [28]. The possible non-degenerate limit DFs and the rate of convergence of the upper extreme m-GOSs were discussed by Nasri-Roudsari and Cramer [29]. The necessary and sufficient conditions for weak convergence, as well as the form of the possible limit DFs of extreme, central and intermediate m-GOSs, were derived by Barakat [4].
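For m\neq -1 this marginal DF is straightforward to evaluate numerically. The short Python check below (our own illustration, not from the paper; the function names are hypothetical) implements Kamps' formula and verifies that the OOS sub-model (m = 0, k = 1) reproduces the classical binomial-sum DF of the rth order statistic.

```python
import math
import numpy as np

def psi_m_gos(F_x, r, n, m, k):
    """Kamps' marginal DF of the r-th m-GOS (m != -1) at a point x with F(x) = F_x."""
    Fbar = 1.0 - F_x
    gam = [k + (n - j) * (m + 1.0) for j in range(1, n + 1)]  # gamma_j = k + (n-j)(m+1)
    C = np.cumprod(gam)                                       # C[t] = C_t = gamma_1 ... gamma_{t+1}
    g = (1.0 - Fbar ** (m + 1)) / (m + 1)                     # g_m(x) = (1 - Fbar^{m+1})/(m+1)
    s = sum(g ** i / (math.factorial(i) * C[r - i - 1]) for i in range(r))
    return 1.0 - C[r - 1] * Fbar ** gam[r - 1] * s

def psi_oos_binomial(F_x, r, n):
    """Classical DF of the r-th OOS: sum_{j >= r} C(n, j) F^j (1 - F)^{n-j}."""
    return sum(math.comb(n, j) * F_x ** j * (1 - F_x) ** (n - j) for j in range(r, n + 1))

# The OOS sub-model (m = 0, k = 1) must agree with the binomial form:
for F_x in (0.2, 0.5, 0.8):
    assert np.isclose(psi_m_gos(F_x, r=3, n=7, m=0, k=1), psi_oos_binomial(F_x, 3, 7))
```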

    The bootstrap method, which was first introduced by Efron [20] for independent RVs, is an efficient procedure for solving many statistical problems based on re-sampling from the available data. It enables statisticians to perform statistical inference on a wide range of problems without imposing many structural assumptions on the data-generating random process (cf. [14,21,26]). For example, the bootstrap method is used to find standard errors for estimators, confidence intervals for unknown parameters, and p-values for test statistics under a null hypothesis.

There are several forms of the bootstrap method and additionally several other re-sampling methods that are related to it, such as jackknifing, cross-validation, randomization tests, and permutation tests. Let {\bf{X}}_{n} = (X_1, X_2, ..., X_n) be a random sample of size n from an unknown DF F. The idea of the bootstrap technique is to re-sample with replacement from the original sample {\bf{X}}_{n} and form a bootstrapped version of the original statistic. For B = B(n)\xrightarrow[\;\;n\;\;]{}\infty, assume that Y_1^{\star}, Y_2^{\star}, ..., Y_B^{\star} are conditionally independent and identically distributed (i.i.d) RVs with distribution

\begin{equation*} P(Y_1^{\star} = X_j|{\bf{X}}_{n}) = \frac{1}{n}, \;\; j = 1, 2, ..., n. \end{equation*}

Hence, (Y_1^{\star}, Y_2^{\star}, ..., Y_B^{\star}) is a re-sample of size B from the empirical distribution F_n of F based on {\bf{X}}_{n}. Let P_j, j = 1, 2, ..., B, be independent RVs with respective beta distributions I_x(\gamma_j, 1), j = 1, 2, ..., B; thus, P_j follows a power function distribution with exponent \gamma_j = k+(B-j)(m+1), j = 1, 2, ..., B. Now, in view of the results of Cramer [17], we can write the rth m-GOS based on the empirical DF F_n in the form

\begin{equation*} Y_{r:B}^{\star} = Y^{\star}(r, B, m, k) = F_n^{-1}\left(1-\prod\limits_{j = 1}^{r}P_j\right), \;\; r = 1, 2, ..., B. \end{equation*}
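This representation translates directly into simulation. The following minimal Python sketch (our illustration, not code from the paper; the helper name bootstrap_m_gos is hypothetical) draws the power-function factors P_j and inverts the empirical DF:

```python
import numpy as np

def bootstrap_m_gos(sample, r, B, m, k, rng):
    """Simulate Y*_{r:B} = F_n^{-1}(1 - prod_{j<=r} P_j), where P_j follows a
    power-function law with exponent gamma_j = k + (B - j)(m + 1)."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    gammas = k + (B - np.arange(1, r + 1)) * (m + 1.0)
    p = rng.uniform(size=r) ** (1.0 / gammas)          # U^{1/gamma} has DF x^gamma on (0, 1)
    u = 1.0 - np.prod(p)
    idx = min(n - 1, max(0, int(np.ceil(n * u)) - 1))  # empirical quantile F_n^{-1}(u)
    return xs[idx]
```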

    Moreover, let

\begin{equation} H_{r, n, B}^{(m, k)}(x) = P\left(\frac{Y_{B-r+1:B}^{\star}-b_B}{a_B}\leq x|{\bf{X}}_{n}\right), \end{equation} (1.2)

be the bootstrap distribution of a_n^{-1}(X_{n-r+1:n}-b_n), for suitable normalizing constants a_n > 0 and b_n, where n and B are the sample size and re-sample size, respectively.
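Given data, H_{r, n, B}^{(m, k)}(x) can be approximated by Monte Carlo, repeating the draw above; the sketch below (ours, reusing bootstrap_m_gos from the previous snippet) returns the empirical conditional DF on a grid.

```python
import numpy as np

def bootstrap_dist_H(sample, r, B, m, k, a_B, b_B, n_rep=2000, seed=0):
    """Monte Carlo approximation of H_{r,n,B}^{(m,k)}: the conditional DF of
    (Y*_{B-r+1:B} - b_B)/a_B given the observed sample."""
    rng = np.random.default_rng(seed)
    draws = np.array([bootstrap_m_gos(sample, B - r + 1, B, m, k, rng)
                      for _ in range(n_rep)])
    z = np.sort((draws - b_B) / a_B)
    grid = np.linspace(z[0], z[-1], 50)
    return grid, np.searchsorted(z, grid, side="right") / n_rep
```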

It has been shown, for many statistics, that the bootstrap method is asymptotically consistent (cf. Efron [20]). That is, the asymptotic distribution of the bootstrap version of a given statistic is the same as the asymptotic distribution of the original statistic. Many results for the bootstrap method and its applications can be found in the literature. For instance, the inconsistency, weak consistency and strong consistency of bootstrapping the maximum OOS under linear normalization were investigated by Athreya and Fukuchi [2] and Fukuchi [21]. They showed that, in the full-sample bootstrap situation, the bootstrap of the maximum OOS fails to be consistent. Later, Barakat et al. [10] extended the results of Athreya and Fukuchi to the GOSs. Barakat et al. [12] obtained similar results for the OOSs with variable ranks as well. Furthermore, bootstrapping OOSs with variable rank under power normalization was investigated by Barakat et al. [13].

    The main goal of this paper is to build on the findings of [10] by discussing the consistency of bootstrap central and intermediate GOSs for determining an appropriate re-sample size for known and unknown normalizing constants. Moreover, a simulation study is carried out to explain how the bootstrap sample size can be chosen numerically. This paper is structured as follows. In Section 2, we briefly review the main results concerning the asymptotic behaviour of the m-GOSs with variable rank. Sections 3 and 4 are devoted, respectively, to bootstrapping the intermediate and central m-GOSs. Finally, a simulation study is conducted in Section 5.

    We end this section with some motivations that highlight the importance of our work.

    Work motivation

The purpose of the bootstrap method is to construct an approximate sampling distribution for the statistic of interest. So, if the statistic of interest S_n follows a certain distribution, we would like its bootstrap version S_B^{\star} to converge to the same distribution. If we do not have this, then we cannot trust the inferences made. For i.i.d. samples of size n, the ordinary bootstrap method is known to be consistent in many situations, but it may fail in important examples (cf. [10,12,13,21]). Using bootstrap samples of size B, where B\xrightarrow[\;\;n\;\;]{}\infty and \frac{B}{n}\xrightarrow[\;\;n\;\;]{}0, typically resolves the problem (cf. [10,12,13,21]). However, the choice of B is a key issue for the quality of the convergence (e.g., weak consistency and strong consistency). In this paper, we investigate the strong consistency of bootstrapping central and intermediate m-GOSs for an appropriate choice of re-sample size B for known and unknown normalizing constants. The critical problem of choosing B is theoretically addressed in this paper; furthermore, a simulation study is used to discuss it realistically.
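One concrete schedule meeting both requirements (a sketch of a common choice, not a recommendation from the paper) is B = \lfloor n^{\theta}\rfloor with 0 < \theta < 1:

```python
def resample_size(n, theta=0.6):
    """Re-sample size B = floor(n^theta), 0 < theta < 1, so that B -> infinity
    while B/n -> 0 as the sample size n grows."""
    return max(2, int(n ** theta))

# e.g. resample_size(10_000) == 251, and 251/10_000 = 0.0251
```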

    The model of m-GOSs contains two practically important sub-models, OOSs and SOSs, on which this study focuses. For central OOSs and SOSs, one can use the bootstrap method to obtain a confidence interval for the pth population quantile. On the other hand, in many important applications such as flood hazard assessment [18], seismic hazard assessment [1] and analysis of bank operational risk [16], we need an estimator (confidence interval estimate) of an intermediate OOSs (SOSs) quantile. Moreover, it is well known that the asymptotic behavior of intermediate quantiles is one of the main factors in choosing a suitable value of threshold in the peak over threshold (POT) approach and constructing related estimators (the Hill estimators) of the tail index (cf. [9,19]). Therefore, the study of bootstrapping intermediate OOSs will pave the way to use and improve the modeling of extreme values via the POT approach. This potential application of bootstrapping intermediate OOSs will be the subject of future studies.

    In this section, we briefly review the main results concerning the asymptotic behaviour of the intermediate and central m-GOSs, which are related to the present work.

The intermediate OOSs have a wide range of important applications. For instance, they can be used to estimate the probabilities of future extreme observations and to estimate tail quantiles of the underlying distribution that are extremes relative to the available sample size (cf. [30]). Furthermore, Pickands [30] has revealed that intermediate OOSs can be applied to construct consistent estimators for the shape parameter of the limiting extremal distribution in the parametric form. Teugels [32] and Mason [25] have also found estimators that are in part based on intermediate OOSs. A sequence \{X_{r_n:n}\} is called a sequence of intermediate OOSs if r_n\xrightarrow[\;\;n\;\;]{}\infty and \frac{r_n}{n}\xrightarrow[\;\;n\;\;]{}0 (lower intermediate) or \frac{r_n}{n}\xrightarrow[\;\;n\;\;]{}1 (upper intermediate), where the symbol \xrightarrow[\;\;n\;\;]{} stands for convergence as n\to\infty. Wu [33] (see also, Leadbetter et al. [23]) revealed that, if \{r_n\} is any nondecreasing intermediate rank sequence, and there exist normalizing constants a_n > 0 and b_n such that

\begin{equation} \Psi_{n-r_n+1:n}^{(0, 1)}(a_nx+b_n) = I_{F(a_nx+b_n)}(n-r_n+1, r_n)\xrightarrow[\;\;n\;\;]{w}\Psi^{(0, 1)}(x), \end{equation} (2.1)

where \xrightarrow[\;\;n\;\;]{w} stands for weak convergence, as n\to\infty, \Psi_{n-r_n+1:n}^{(0, 1)}(x) is the DF of the upper r_nth OOS (upper intermediate), and \Psi^{(0, 1)}(x) is a nondegenerate DF, then \Psi^{(0, 1)}(x) must be one and only one of the types \mathcal{N}(U_{i;\alpha}(x)), i = 1, 2, 3, where \mathcal{N}(.) denotes the standard normal DF, and \alpha is a positive constant. Moreover,

\begin{equation*} U_{1;\alpha}(x) = \begin{cases} -\alpha\log(-x), & x\leq 0, \\ \infty, & x > 0, \end{cases} \qquad U_{2;\alpha}(x) = \begin{cases} -\infty, & x\leq 0, \\ \alpha\log x, & x > 0, \end{cases} \end{equation*}

and U_{3;\alpha}(x) = U_3(x) = x, x\in\mathbb{R}. Furthermore, (2.1) is satisfied with \Psi^{(0, 1)}(x) = \mathcal{N}(U_{i;\alpha}(x)), i = 1, 2, 3, if and only if

\begin{equation} \frac{r_n-n\overline{F}(a_nx+b_n)}{\sqrt{r_n}}\xrightarrow[\;\;n\;\;]{}U_{i;\alpha}(x). \end{equation} (2.2)

In this work we confine ourselves to a very wide intermediate rank sequence which is known as Chibisov's rank, where \frac{r_n}{n^{\omega}}\xrightarrow[\;\;n\;\;]{}l^2, 0 < \omega < 1; for more details about Chibisov's rank, see ([6,7,15]). When (2.1) is satisfied for this rank, we say that F belongs to the intermediate domain of attraction of \Psi^{(0, 1)}(x) = \mathcal{N}(U_{i;\alpha}(x)) and write F\in D^{(l, \omega)}\big(\mathcal{N}(U_{i;\alpha}(x))\big). The following lemma is needed for studying the asymptotic distributions of the suitably normalized intermediate m-GOSs.

Lemma 2.1. (cf. Barakat [4]) Let m > -1. Then, for any nondecreasing intermediate variable rank r_n, there exist normalizing constants a_n > 0 and b_n such that

\begin{equation} \Psi_{n-r_n+1:n}^{(m, k)}(a_nx+b_n)\xrightarrow[\;\;n\;\;]{w}\Psi^{(m, k)}(x), \end{equation} (2.3)

if and only if

\begin{equation} \frac{r_n-n^{\star}\overline{F}^{m+1}(a_nx+b_n)}{\sqrt{r_n}}\xrightarrow[\;\;n\;\;]{}U(x), \end{equation} (2.4)

where \Psi^{(m, k)}(x) is a nondegenerate DF with \Psi^{(m, k)}(x) = \mathcal{N}(U(x)), and n^{\star} = n+\frac{k}{m+1}-1.

Theorem 2.1. (cf. Barakat [4]) Suppose that m > -1, and r_n is a nondecreasing intermediate variable rank. Moreover, let r^{\star}_n be a variable rank defined by

\begin{equation*} r^{\star}_n = r_{S^{-1}(n)}, \end{equation*}

where S(n) = r_n/\left(\frac{r_n}{n}\right)^{\frac{1}{m+1}}. Then, there exist normalizing constants a_n > 0 and b_n for which (2.3) is satisfied for some nondegenerate DF \Psi^{(m, k)}(x) if and only if there are normalizing constants \alpha_n > 0 and \beta_n for which

\begin{equation} \Psi_{n-r^{\star}_n+1:n}^{(0, 1)}(\alpha_nx+\beta_n)\xrightarrow[\;\;n\;\;]{w}\Psi^{(0, 1)}(x), \end{equation} (2.5)

where \Psi^{(0, 1)}(x) is some nondegenerate DF. Equivalently,

\begin{equation} \frac{r^{\star}_n-n\overline{F}(\alpha_nx+\beta_n)}{\sqrt{r^{\star}_n}}\xrightarrow[\;\;n\;\;]{}U_{i;\alpha}(x), \end{equation} (2.6)

with \Psi^{(0, 1)}(x) = \mathcal{N}(U_{i;\alpha}(x)), i = 1, 2, 3. In this case, the normalizing constants a_n and b_n can be chosen as a_n = \alpha_{S(n)} and b_n = \beta_{S(n)}. Furthermore, U(x) in (2.4) takes the form (m+1)U_{i;\alpha}(x).
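To make the rank transformation concrete, the following small Python sketch (ours; the function names are hypothetical) computes a Chibisov rank r_n with r_n/n^{\omega}\to l^2 and the map S(n) = r_n/(r_n/n)^{1/(m+1)}; for m = 0 (the OOS sub-model) it collapses to S(n) = n, so that r^{\star}_n = r_n.

```python
def chibisov_rank(n, l=1.0, omega=0.5):
    """Intermediate rank sequence with r_n / n^omega -> l^2, 0 < omega < 1."""
    return max(1, int(round(l * l * n ** omega)))

def S(n, m, l=1.0, omega=0.5):
    """S(n) = r_n / (r_n/n)^{1/(m+1)} from Theorem 2.1; equals n when m = 0."""
    r = chibisov_rank(n, l, omega)
    return r / (r / n) ** (1.0 / (m + 1))
```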

When the rank sequence r_n satisfies the regular condition r_n = \lambda n+o(\sqrt{n}), where 0 < \lambda < 1, r_n is referred to as a central rank sequence, and X_{r_n:n} is called a central OOS. There are numerous distinct results for central OOSs and their applications in the literature. Smirnov [31] showed that, if there exist normalizing constants c_n > 0 and d_n such that

\begin{equation} \Psi_{n-r_n+1:n}^{(0, 1)}(c_nx+d_n) = I_{F(c_nx+d_n)}(n-r_n+1, r_n)\xrightarrow[\;\;n\;\;]{w}\Psi_{\lambda}^{(0, 1)}(x), \end{equation} (2.7)

where \Psi_{\lambda}^{(0, 1)}(x) is some nondegenerate DF, then \Psi_{\lambda}^{(0, 1)}(x) must be one and only one of the types \mathcal{N}(V_{i;\alpha}(x)), i = 1, 2, 3, 4. Moreover,

\begin{equation*} V_{1;\alpha}(x) = \begin{cases} -\infty, & x\leq 0, \\ cx^{\alpha}, & x > 0, \; c, \alpha > 0, \end{cases} \qquad V_{2;\alpha}(x) = \begin{cases} -c|x|^{\alpha}, & x\leq 0, \; c, \alpha > 0, \\ \infty, & x > 0, \end{cases} \end{equation*}
\begin{equation*} V_{3;\alpha}(x) = \begin{cases} -c|x|^{\alpha}, & x\leq 0, \; c, \alpha > 0, \\ c_1x^{\alpha}, & x > 0, \; c_1 > 0, \end{cases} \qquad V_{4;0}(x) = \begin{cases} -\infty, & x\leq -1, \\ 0, & -1 < x\leq 1, \\ \infty, & x > 1, \end{cases} \end{equation*}

where c = \frac{1}{\sqrt{\lambda(1-\lambda)}} and c_1 = c/A, A > 0. In that case, we say that the DF F belongs to the domain of normal \lambda-attraction of the limit type V_{i;\alpha}(x), i = 1, 2, 3, 4, written F\in D^{(\lambda)}(V_{i;\alpha}(x)). Moreover, Smirnov [31] showed that (2.7) is satisfied if and only if

\begin{equation} \sqrt{n}\left(\frac{\lambda-\overline{F}(c_nx+d_n)}{C_{\lambda}}\right)\xrightarrow[\;\;n\;\;]{}V_{i;\alpha}(x), \end{equation} (2.8)

where C_{\lambda} = 1/c = \sqrt{\lambda(1-\lambda)}. The following lemma, which is due to Barakat [4], is a cornerstone of the asymptotic theory of central m-GOSs.

Lemma 2.2. (cf. Barakat [4]) Let m > -1. Moreover, let r_n be a nondecreasing variable rank such that r_n = \lambda n+o(\sqrt{n}). Then, there exist normalizing constants c_n > 0 and d_n such that

\begin{equation} \Psi_{n-r_n+1:n}^{(m, k)}(c_nx+d_n)\xrightarrow[\;\;n\;\;]{w}\Psi_\lambda^{(m, k)}(x), \end{equation} (2.9)

if and only if

\begin{equation} \sqrt{n}\left(\frac{\lambda-\overline{F}^{m+1}(c_nx+d_n)}{C_{\lambda}}\right)\xrightarrow[\;\;n\;\;]{}V(x), \end{equation} (2.10)

where \Psi_\lambda^{(m, k)}(x) is a nondegenerate DF with \Psi_\lambda^{(m, k)}(x) = \mathcal{N}(V(x)).

Theorem 2.2. (cf. Barakat [4]) Let m > -1, and let r_n be a nondecreasing variable rank such that r_n = \lambda n+o(\sqrt{n}). Then, there exist normalizing constants c_n > 0 and d_n for which (2.9) holds, for some nondegenerate DF \Psi_\lambda^{(m, k)}(x), if and only if, for the same normalizing constants c_n and d_n, we have F\in D^{\lambda_{(m)}}(V_{i;\alpha}(x)), i = 1, 2, 3, 4, where \lambda_{(m)} = \lambda^{\frac{1}{m+1}}. That is, in view of (2.10), we have

\begin{equation} \sqrt{n}\left(\frac{\lambda_{(m)}-\overline{F}(c_nx+d_n)}{C_{\lambda_{(m)}}}\right)\xrightarrow[\;\;n\;\;]{}V_{i;\alpha}(x), \;\; i\in\{1, 2, 3, 4\}. \end{equation} (2.11)

Moreover, \Psi_\lambda^{(m, k)}(x) = \mathcal{N}\left(\frac{C^{\star}_{\lambda_{(m)}}}{C^{\star}_{\lambda}}(m+1)V_i(x)\right), where C^{\star}_t = C_t/t.

In this section, we study the asymptotic behaviour of bootstrapping intermediate m-GOSs with rank sequence r_B. In other words, we are interested in the limiting distribution of H_{r_B, n, B}^{(m, k)}(x) for different choices of the re-sample size B = B(n), when the normalizing constants a_n and b_n are either known or unknown.

This subsection investigates the inconsistency, weak consistency and strong consistency of the bootstrap distribution H_{r_B, n, B}^{(m, k)}(x) when the normalizing constants are known. More specifically, it is proved in the next theorem that the full-sample bootstrap (i.e., B = n) of H_{r_n, n, n}^{(0, k)}(x) fails to be consistent with \Psi^{(0, k)}(x). Moreover, in the same theorem it is shown that if m > 0 and B = n, then H_{r_n, n, n}^{(m, k)}(x) is a consistent estimator of \Psi^{(m, k)}(x).

Theorem 3.1. Let the relation (2.3) be satisfied with \Psi^{(m, k)}(x) = \mathcal{N}((m+1)U_{i;\alpha}(x)), i = 1, 2, 3. Then,

\begin{equation} H_{r_n, n, n}^{(0, k)}(x)\xrightarrow[\;\;n\;\;]{d}\mathcal{N}(Z(x)), \end{equation} (3.1)

where \xrightarrow[\;\;n\;\;]{d} stands for convergence in distribution as n\to\infty, and Z(x) has a normal distribution with mean U_{i;\alpha}(x) and a variance of one. Moreover, if m > 0, then

\begin{equation} \underset{x\in\mathbb{R}}{\sup}\left|H_{r_n, n, n}^{(m, k)}(x)-\Psi^{(m, k)}(x)\right|\xrightarrow[\;\;n\;\;]{p}0, \end{equation} (3.2)

where \xrightarrow[\;\;n\;\;]{p} stands for convergence in probability as n\to\infty.

    Proof. Since all of the limit types in (2.2) are continuous, the convergence in (2.3) is uniform in x. Therefore, we can write

\begin{equation} H_{r_B, n, B}^{(m, k)}(x) = \mathcal{N}\left(\frac{r_B-B^{\star}\bar{F_n}^{m+1}(a_Bx+b_B)}{\sqrt{r_B}}\right)+\xi_n(x), \end{equation} (3.3)

where \xi_n(x)\xrightarrow[\;\;n\;\;]{}0 uniformly with respect to x, and B^{\star} = B+\frac{k}{m+1}-1. Suppose now that the condition B = n holds true; then, (3.3) can be expressed as

\begin{equation} H_{r_n, n, n}^{(m, k)}(x) = \mathcal{N}\left(\frac{r_n-n^{\star}\bar{F_n}^{m+1}(a_nx+b_n)}{\sqrt{r_n}}\right)+\xi_n(x). \end{equation} (3.4)

When m = 0, after some routine algebraic calculations we can obtain (3.1), which is the same result as Theorem 4.2 in Barakat et al. [12]. Now, we have to prove (3.2) for m > 0 when B = n. Since S(n)\xrightarrow[\;\;n\;\;]{}\infty, (2.6) implies

\begin{equation} \frac{r^{\star}_{S(n)}-S(n)\overline{F}(\alpha_{S(n)}x+\beta_{S(n)})}{\sqrt{r^{\star}_{S(n)}}}\xrightarrow[\;\;n\;\;]{}U_{i;\alpha}(x). \end{equation} (3.5)

    Furthermore, from the central limit theorem we have

\begin{equation} \frac{n\bar{F_n}(\alpha_{S(n)}x+\beta_{S(n)})-n\overline{F}(\alpha_{S(n)}x+\beta_{S(n)})}{\sqrt{n\overline{F}(\alpha_{S(n)}x+\beta_{S(n)})(1-\overline{F}(\alpha_{S(n)}x+\beta_{S(n)}))}}\xrightarrow[\;\;n\;\;]{d}Z(x). \end{equation} (3.6)

On the other hand, from Chibisov [15] and Barakat [4], S(n) can be written in the form S(n) = l^{\frac{2m}{m+1}}n^{\frac{1+\omega m}{m+1}}(1+o(1)), where 0 < \omega < 1 and l > 0. Consequently, we get

\begin{equation} \frac{S(n)}{n} = l^{\frac{2m}{m+1}}n^{\frac{1+\omega m}{m+1}-1}(1+o(1))\xrightarrow[\;\;n\;\;]{}0. \end{equation} (3.7)

Moreover, from Barakat [4], it is clear that r^{\star}_{S(n)}\sim S(n)\overline{F}(\alpha_{S(n)}x+\beta_{S(n)}). Thus, by (3.6) and (3.7), we obtain

\begin{align} \frac{S(n)\bar{F_n}(\alpha_{S(n)}x+\beta_{S(n)})-S(n)\overline{F}(\alpha_{S(n)}x+\beta_{S(n)})}{\sqrt{r^{\star}_{S(n)}}} & = \frac{S(n)}{n}\cdot\frac{n\bar{F_n}(\alpha_{S(n)}x+\beta_{S(n)})-n\overline{F}(\alpha_{S(n)}x+\beta_{S(n)})}{\sqrt{r^{\star}_{S(n)}}} \\ & \sim\sqrt{\frac{S(n)}{n}}\cdot\frac{n\bar{F_n}(\alpha_{S(n)}x+\beta_{S(n)})-n\overline{F}(\alpha_{S(n)}x+\beta_{S(n)})}{\sqrt{n\overline{F}(\alpha_{S(n)}x+\beta_{S(n)})(1-\overline{F}(\alpha_{S(n)}x+\beta_{S(n)}))}}\xrightarrow[\;\;n\;\;]{}0. \end{align} (3.8)

    Therefore, from (3.5) and (3.8) we get

\begin{equation} \frac{r^{\star}_{S(n)}-S(n)\bar{F_n}(\alpha_{S(n)}x+\beta_{S(n)})}{\sqrt{r^{\star}_{S(n)}}} = \frac{r^{\star}_{S(n)}-S(n)\overline{F}(\alpha_{S(n)}x+\beta_{S(n)})}{\sqrt{r^{\star}_{S(n)}}}-\frac{S(n)\bar{F_n}(\alpha_{S(n)}x+\beta_{S(n)})-S(n)\overline{F}(\alpha_{S(n)}x+\beta_{S(n)})}{\sqrt{r^{\star}_{S(n)}}}\xrightarrow[\;\;n\;\;]{}U_{i;\alpha}(x). \end{equation} (3.9)

From (3.9), we can write \bar{F_n}(\alpha_{S(n)}x+\beta_{S(n)}) = \left(\frac{r^{\star}_{S(n)}}{S(n)}\right)\left(1-\frac{U_{i;\alpha}(x)}{\sqrt{r^{\star}_{S(n)}}}(1+o(1))\right), which implies

\begin{equation} \bar{F_n}^{m+1}(\alpha_{S(n)}x+\beta_{S(n)}) = \left(\frac{r^{\star}_{S(n)}}{S(n)}\right)^{m+1}\left(1-\frac{(m+1)U_{i;\alpha}(x)}{\sqrt{r^{\star}_{S(n)}}}(1+o(1))\right). \end{equation} (3.10)

Furthermore, from Barakat [4], it can be noted that r^{\star}_{S(n)}\sim r_n and \frac{r^{\star}_{S(n)}}{S(n)} = \left(\frac{r_n}{n}\right)^{\frac{1}{m+1}}. Thus, by using (3.10), we have

\begin{equation} \frac{r_n-n\bar{F_n}^{m+1}(a_nx+b_n)}{\sqrt{r_n}} = \omega_n+(m+1)U_{i;\alpha}(x)\tau_n(1+o(1)), \end{equation} (3.11)

    where

\begin{equation} \omega_n = \frac{r_n-n\left(\frac{r^{\star}_{S(n)}}{S(n)}\right)^{m+1}}{\sqrt{r_n}} = \frac{r_n-n\left(\frac{r_n}{n}\right)}{\sqrt{r_n}} = 0, \end{equation} (3.12)

    and

\begin{equation} \tau_n = \frac{n\left(\frac{r^{\star}_{S(n)}}{S(n)}\right)^{m+1}}{\sqrt{r_n\, r^{\star}_{S(n)}}}\sim\frac{n\left(\frac{r_n}{n}\right)}{\sqrt{r_n^2}} = 1. \end{equation} (3.13)

    Substituting from (3.12) and (3.13) into (3.11), we get

\begin{equation} \frac{r_n-n\bar{F_n}^{m+1}(a_nx+b_n)}{\sqrt{r_n}}\xrightarrow[\;\;n\;\;]{}(m+1)U_{i;\alpha}(x), \end{equation} (3.14)

    which proves (3.2), and the proof is completed.

Theorem 3.2. Assume that there exist normalizing constants a_n > 0 and b_n for which the relation (2.3) holds, with \Psi^{(m, k)}(x) = \mathcal{N}((m+1)U_{i;\alpha}(x)), i = 1, 2, 3. Let S(B) = o(n). Then

\begin{equation} \underset{x\in\mathbb{R}}{\sup}\left|H_{r_B, n, B}^{(m, k)}(x)-\Psi^{(m, k)}(x)\right|\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (3.15)

Moreover, if B is chosen such that \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{S(B)}}} < \infty, \; \forall\lambda\in(0, 1), then

\begin{equation} \underset{x\in\mathbb{R}}{\sup}\left|H_{r_B, n, B}^{(m, k)}(x)-\Psi^{(m, k)}(x)\right|\xrightarrow[\;\;n\;\;]{w.p.1}0, \end{equation} (3.16)

where \xrightarrow[\;\;n\;\;]{w.p.1} stands for convergence with probability one as n\to\infty (almost sure convergence).

Proof. By noting that S(B)\xrightarrow[\;\;n\;\;]{}\infty, (2.6) implies

\begin{equation*} \frac{r^{\star}_{S(B)}-S(B)\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{r^{\star}_{S(B)}}}\xrightarrow[\;\;n\;\;]{}U_{i;\alpha}(x). \end{equation*}

Define the statistic K_{r^{\star}_{S(B)}, n, B}(x) by the relation

\begin{equation*} K_{r^{\star}_{S(B)}, n, B}(x) = \frac{r^{\star}_{S(B)}-S(B)\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{r^{\star}_{S(B)}}}. \end{equation*}

    Therefore, we get

\begin{equation} E\left(K_{r^{\star}_{S(B)}, n, B}(x)\right) = \frac{r^{\star}_{S(B)}-S(B)\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{r^{\star}_{S(B)}}}\xrightarrow[\;\;n\;\;]{}U_{i;\alpha}(x), \end{equation} (3.17)

    and

\begin{equation} \mbox{Var}\left(K_{r^{\star}_{S(B)}, n, B}(x)\right) = \frac{S(B)}{n}\cdot\frac{\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})(1-\overline{F}(\alpha_{S(B)}x+\beta_{S(B)}))}{\tilde{r}_{S(B)}} = \frac{S(B)}{n}(1+o(1))\xrightarrow[\;\;n\;\;]{}0, \end{equation} (3.18)

where \tilde{r}_{S(B)} = \frac{r^{\star}_{S(B)}}{S(B)}\sim\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})(1-\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})). By combining (3.17) and (3.18), we obtain

\begin{equation} \frac{r^{\star}_{S(B)}-S(B)\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{r^{\star}_{S(B)}}}\xrightarrow[\;\;n\;\;]{p}U_{i;\alpha}(x), \end{equation} (3.19)

    which proves (3.15). In order to prove (3.16), it is sufficient to show that the convergence in (3.19) is w.p.1. First note that

\begin{equation*} \frac{r^{\star}_{S(B)}-S(B)\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{r^{\star}_{S(B)}}} = \frac{r^{\star}_{S(B)}-S(B)\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{r^{\star}_{S(B)}}}-\frac{S(B)\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})-S(B)\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{r^{\star}_{S(B)}}}, \end{equation*}

    and

\begin{equation*} \frac{r^{\star}_{S(B)}-S(B)\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{r^{\star}_{S(B)}}}\xrightarrow[\;\;n\;\;]{}U_{i;\alpha}(x). \end{equation*}

    Hence, the convergence in (3.19) becomes w.p.1 if we can show that

\begin{equation*} \frac{S(B)\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})-S(B)\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{r^{\star}_{S(B)}}} = \sqrt{\frac{S(B)}{n}}\cdot\frac{n\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})-n\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{n\,\tilde{r}_{S(B)}}}\xrightarrow[\;\;n\;\;]{w.p.1}0. \end{equation*}

    By the Borel-Cantelli lemma, it is sufficient to prove that

\begin{equation*} \sum\limits_{n = 1}^{\infty}P\left(\sqrt{\frac{S(B)}{n}}\left|\frac{n\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})-n\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{n\,\tilde{r}_{S(B)}}}\right| > \epsilon\right) < \infty \end{equation*}

for every \epsilon > 0. For every \theta > 0, we have

\begin{align*} & \sqrt{\frac{S(B)}{n}}\log P\left(\sqrt{\frac{S(B)}{n}}\left(\frac{n\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})-n\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{n\,\tilde{r}_{S(B)}}}\right) > \epsilon\right) \\ & = \sqrt{\frac{S(B)}{n}}\log P\left(\frac{n\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})-n\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{n\,\tilde{r}_{S(B)}}} > \epsilon\sqrt{\frac{n}{S(B)}}\right) \\ & = \sqrt{\frac{S(B)}{n}}\log P\left(e^{\theta P_{n, B}(x)} > e^{\theta\epsilon\sqrt{\frac{n}{S(B)}}}\right), \end{align*}

where

\begin{equation*} P_{n, B}(x) = \frac{n\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})-n\overline{F}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{n\,\tilde{r}_{S(B)}}}. \end{equation*}

    From Markov's inequality, we get

\begin{align*} \sqrt{\frac{S(B)}{n}}\log P\left(e^{\theta P_{n, B}(x)} > e^{\theta\epsilon\sqrt{\frac{n}{S(B)}}}\right) & \leq\sqrt{\frac{S(B)}{n}}\log\left(e^{-\theta\epsilon\sqrt{\frac{n}{S(B)}}}E\left(e^{\theta P_{n, B}(x)}\right)\right) \\ & \sim\sqrt{\frac{S(B)}{n}}\left(-\theta\epsilon\sqrt{\frac{n}{S(B)}}+\log M_B(\theta)\right) = -\theta\epsilon+\sqrt{\frac{S(B)}{n}}\log M_B(\theta)\xrightarrow[\;\;n\;\;]{}-\theta\epsilon, \end{align*}

where M_B(\theta) denotes the moment generating function of the standard normal distribution. Consequently, for sufficiently large n, we get

\begin{equation*} \sum\limits_{n = 1}^{\infty}P\left(\sqrt{\frac{S(B)}{n}}P_{n, B}(x) > \epsilon\right) = \sum\limits_{n = 1}^{\infty}e^{\log P\left(\sqrt{\frac{S(B)}{n}}P_{n, B}(x) > \epsilon\right)}\leq\sum\limits_{n = 1}^{\infty}e^{-\theta\epsilon\sqrt{\frac{n}{S(B)}}} < \infty. \end{equation*}

By a similar argument, for every \epsilon > 0, it can be shown that

\begin{equation*} \sum\limits_{n = 1}^{\infty}P\left(\sqrt{\frac{S(B)}{n}}P_{n, B}(x) < -\epsilon\right) < \infty. \end{equation*}

Since the condition \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{S(B)}}} < \infty, \; \forall\lambda\in(0, 1), ensures the convergence of the series \sum_{n = 1}^{\infty}\exp\left\{-\theta\epsilon\sqrt{\frac{n}{S(B)}}\right\} for every \epsilon > 0, we obtain

\begin{equation} \frac{r^{\star}_{S(B)}-S(B)\bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)})}{\sqrt{r^{\star}_{S(B)}}}\xrightarrow[\;\;n\;\;]{w.p.1}U_{i;\alpha}(x). \end{equation} (3.20)

    From (3.20), we can write

\begin{equation*} \bar{F_n}(\alpha_{S(B)}x+\beta_{S(B)}) = \bar{F_n}(a_Bx+b_B) = \left(\frac{r^{\star}_{S(B)}}{S(B)}\right)\left(1-\frac{U_{i;\alpha}(x)}{\sqrt{r^{\star}_{S(B)}}}(1+o(1))\right), \; w.p.1, \end{equation*}

    which implies

\begin{equation*} \bar{F_n}^{m+1}(a_Bx+b_B) = \left(\frac{r^{\star}_{S(B)}}{S(B)}\right)^{m+1}\left(1-\frac{(m+1)U_{i;\alpha}(x)}{\sqrt{r^{\star}_{S(B)}}}(1+o(1))\right), \; w.p.1. \end{equation*}

    Therefore, we get

\begin{equation*} \frac{r_B-B\bar{F_n}^{m+1}(a_Bx+b_B)}{\sqrt{r_B}} = \eta_n+(m+1)U_{i;\alpha}(x)\vartheta_n(1+o(1)), \; w.p.1, \end{equation*}

    where

\begin{equation*} \eta_n = \frac{r_B-B\left(\frac{r^{\star}_{S(B)}}{S(B)}\right)^{m+1}}{\sqrt{r_B}} = \frac{r_B-B\left(\frac{r_B}{B}\right)}{\sqrt{r_B}} = 0, \end{equation*}

    and

\begin{equation*} \vartheta_n = \frac{B\left(\frac{r^{\star}_{S(B)}}{S(B)}\right)^{m+1}}{\sqrt{r_B\, r^{\star}_{S(B)}}}\sim\frac{B\left(\frac{r_B}{B}\right)}{\sqrt{r_B^2}} = 1. \end{equation*}

    Consequently,

\begin{equation*} \frac{r_B-B\bar{F_n}^{m+1}(a_Bx+b_B)}{\sqrt{r_B}}\xrightarrow[\;\;n\;\;]{w.p.1}(m+1)U_{i;\alpha}(x) = U(x). \end{equation*}

    Thus, (3.16) is proved. This completes the proof.

    One of the most important problems in statistical modeling is to reduce the required knowledge about the DF of the population from which the available data is obtained. In the bootstrap method, this situation leads to the case of unknown normalizing constants. In the rest of this section, we investigate the consistency property of the bootstrap intermediate m-GOSs when the normalizing constants are unknown.

Suppose now that the normalizing constants a_n and b_n are unknown, and they are estimated from the sample data {\bf{X}}_{n} = (X_1, X_2, ..., X_n). Let \hat{a}_B and \hat{b}_B be the estimators of a_n and b_n, based on {\bf{X}}_{n}, respectively, and let

\begin{equation} \hat{H}_{r_B, n, B}^{(m, k)}(x) = P\left(\frac{Y_{B-r_B+1:B}^{\star}-\hat{b}_B}{\hat{a}_B}\leq x|{\bf{X}}_{n}\right) \end{equation} (3.21)

be the bootstrap distribution of \Psi_{n-r_n+1:n}^{(m, k)}(a_nx+b_n) with the estimated normalizing constants, \hat{a}_B and \hat{b}_B. The sufficient conditions for \hat{H}_{r_B, n, B}^{(m, k)}(x) to be consistent are explored in the next theorem. The idea of this theorem was originally given in Theorem 2.6 of [21] for the maximum OOS.

Theorem 3.3. Assume that \hat{a}_B, \hat{b}_B and B = B(n) are such that the following three conditions are satisfied:

(C_{1})\; H_{r_B, n, B}^{(m, k)}(x)\xrightarrow[\;\;n\;\;]{w.p.1}\Psi^{(m, k)}(x), \; \; \; (C_{2})\; \cfrac{\hat{a}_B}{a_B}\xrightarrow[\;\;n\;\;]{w.p.1}1, \; \mathit{\mbox{and}}\; (C_{3})\; \cfrac{\hat{b}_B-b_B}{a_B}\xrightarrow[\;\;n\;\;]{w.p.1}0.

    Then,

\begin{equation} \underset{x\in\mathbb{R}}{\sup}\left|\hat{H}_{r_B, n, B}^{(m, k)}(x)-\Psi^{(m, k)}(x)\right|\xrightarrow[\;\;n\;\;]{w.p.1}0. \end{equation} (3.22)

Moreover, if we replace "\xrightarrow[\;\;n\;\;]{w.p.1}" with "\xrightarrow[\;\;n\;\;]{p}" in the conditions (C_{1})–(C_{3}), the convergence in (3.22) remains true in probability.

Proof. First, we note that the condition ( C_{1} ) is equivalent to

\begin{equation*} \frac{r_B-B^{\star}\bar{F_n}^{m+1}(a_Bx+b_B)}{\sqrt{r_B}}\xrightarrow[\;\;n\;\;]{w.p.1}U(x). \end{equation*}

Furthermore, for every \epsilon > 0, the conditions ( C_{2} ) and ( C_{3} ) imply

\begin{equation} (1-\epsilon)a_B < \hat{a}_B < (1+\epsilon)a_B, \end{equation} (3.23)

    and

\begin{equation} b_B-\epsilon a_B < \hat{b}_B < b_B+\epsilon a_B, \end{equation} (3.24)

    respectively. By fixing x>0, the relations (3.23) and (3.24) yield

\begin{equation*} B\bar{F_n}^{m+1}\big(((1+\epsilon)x+\epsilon)a_B+b_B\big)\leq B\bar{F_n}^{m+1}(\hat{a}_Bx+\hat{b}_B)\leq B\bar{F_n}^{m+1}\big(((1-\epsilon)x-\epsilon)a_B+b_B\big). \end{equation*}

    Therefore,

\begin{equation*} \limsup\limits_{n\to\infty}\frac{r_B-B\bar{F_n}^{m+1}(\hat{a}_Bx+\hat{b}_B)}{\sqrt{r_B}}\leq U((1+\epsilon)x+\epsilon) \;\; \mbox{and} \;\; U((1-\epsilon)x-\epsilon)\leq\liminf\limits_{n\to\infty}\frac{r_B-B\bar{F_n}^{m+1}(\hat{a}_Bx+\hat{b}_B)}{\sqrt{r_B}}. \end{equation*}

    Since U(x) is continuous, we get

\begin{equation*} \lim\limits_{n\to\infty}\frac{r_B-B\bar{F_n}^{m+1}(\hat{a}_Bx+\hat{b}_B)}{\sqrt{r_B}} = U(x). \end{equation*}

For x < 0, the same limit relation can be established using a similar argument. Consequently, (3.22) is proved. If the conditions (C_{1})–(C_{3}) hold in probability, then for any subsequence \{n_i\}_{i = 1}^{\infty}, there exists a further subsequence \{n_{i_j}\}_{j = 1}^{\infty} such that the conditions (C_{1})–(C_{3}) hold w.p.1. An application of the first part of the theorem, based on this subsequence, yields

\begin{equation*} \underset{x\in\mathbb{R}}{\sup}\left|\hat{H}_{r_{B(n_{i_j})}, n_{i_j}, B(n_{i_j})}(x)-\Psi^{(m, k)}(x)\right|\xrightarrow[\;\;j\;\;]{w.p.1}0. \end{equation*}

    This completes the proof of the theorem.

In the next theorem, an appropriate choice of the estimators of the normalizing constants which satisfy conditions ( C_{2} ) and ( C_{3} ) of Theorem 3.3 is accomplished for each domain of attraction. More specifically, we give a specific choice of \hat{a}_B and \hat{b}_B for which (3.22) holds true.

Theorem 3.4. Assume that r^{\star'}_{n} = \frac{n}{S(B)}r^{\star}_{S(B)}, r^{\star''}_{n} = \frac{n}{S(B)}\left(r^{\star}_{S(B)}+\sqrt{r^{\star}_{S(B)}}\right), x^{o} is the right endpoint of F, and {\hat x}^{o} = X_{r^{\star}_{n}:n}. The estimators \hat{a}_{B} and \hat{b}_{B} can be chosen, respectively, as

    (i) \hat{a}_{B} = {\hat x}^{o}-F_{n}^{-1}\left(\frac{r^{\star}_{S(B)}}{S(B)}\right) = X_{r^{\star}_{n}:n}-X_{r^{\star'}_{n}:n} and \hat{b}_{B} = X_{r^{\star}_{n}:n}, if F \in D^{(l, \omega)}\Big(\mathcal{N}((m+1)U_{1;\alpha}(x))\Big),

    (ii) \hat{a}_{B} = F_{n}^{-1}\left(\frac{r^{\star}_{S(B)}}{S(B)}\right) = X_{r^{\star'}_{n}:n} and \hat{b}_{B} = 0, if F \in D^{(l, \omega)}\Big(\mathcal{N}((m+1)U_{2;\alpha}(x))\Big),

(iii) \hat{a}_{B} = F_{n}^{-1}\left(\frac{r^{\star}_{S(B)}}{S(B)}\right)-F_{n}^{-1}\left(\frac{r^{\star}_{S(B)}+\sqrt{r^{\star}_{S(B)}}}{S(B)}\right) = X_{r^{\star'}_{n}:n}-X_{r^{\star''}_{n}:n}\ \mathit{\mbox{and}}\ \hat{b}_{B} = F^{-1}_{n}\left(\frac{r^{\star}_{S(B)}}{S(B)}\right) = X_{r^{\star'}_{n}:n}, if F \in D^{(l, \omega)}\Big(\mathcal{N}((m+1)U_{3}(x))\Big).

Moreover, if S(B) = o(n), then

    \begin{equation} \underset{x\in\mathbb{R}}{\sup}\left|\hat{H}_{r_{B}, n, B}^{(m, k)}(x)-\Psi^{(m, k)}(x)\right|\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (3.25)

    Furthermore, if \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{S(B)}}} < \infty for each \lambda \in (0, 1), then (3.25) holds w.p.1.
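Cases (i)–(iii) are plug-in rules built from empirical quantiles. The sketch below (our Python illustration of case (iii), reusing chibisov_rank and S from the earlier snippet; the helper names and the rank rounding are our assumptions, not the paper's code) indicates how such constants can be computed from data.

```python
import numpy as np

def emp_quantile(xs_sorted, u):
    """Empirical quantile F_n^{-1}(u) = X_{ceil(n u):n}, clamped to the sample range."""
    n = len(xs_sorted)
    return xs_sorted[min(n - 1, max(0, int(np.ceil(n * u)) - 1))]

def norming_case_iii(sample, B, m, l=1.0, omega=0.5):
    """Plug-in constants in the spirit of Theorem 3.4(iii): b_hat is the empirical
    quantile at r*_{S(B)}/S(B); a_hat is the gap to the quantile at
    (r*_{S(B)} + sqrt(r*_{S(B)}))/S(B). abs() guards against the paper's
    top-down rank convention."""
    xs = np.sort(np.asarray(sample, dtype=float))
    sB = S(B, m, l, omega)                         # S(B) from the earlier sketch
    r_star = chibisov_rank(int(round(sB)), l, omega)
    b_hat = emp_quantile(xs, r_star / sB)
    a_hat = abs(emp_quantile(xs, (r_star + np.sqrt(r_star)) / sB) - b_hat)
    return a_hat, b_hat
```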

    Proof. Let F \in D^{(l, \omega)}\Big(\mathcal{N}((m+1)U_{1;\alpha}(x))\Big). In order to prove conditions ( C_{2} ) and ( C_{3} ), we have to show that

    \begin{equation} \frac{\hat{a}_{B}}{{a}_{B}} = \frac{X_{r^{\star}_{n}:n}-X_{r^{\star'}_{n}:n}}{{{a}_{B}}}\xrightarrow[\;\;n\;\;]{}1, \end{equation} (3.26)

    and

    \begin{equation} \frac{\hat{b}_{B}-{b}_{B}}{{a}_{B}} = \frac{X_{r^{\star}_{n}:n}-x^{o}}{{{a}_{B}}}\xrightarrow[\;\;n\;\;]{}0, \end{equation} (3.27)

    both in probability or w.p.1. Firstly, let us consider the case of convergence in probability. Clearly,

    \begin{equation*} \frac{\hat{a}_{B}}{{a}_{B}} = \frac{X_{r^{\star}_{n}:n}-x^{o}}{\alpha_{n}}\times\frac{\alpha_{n}}{a_{B}}-\frac{X_{r^{\star'}_{n}:n}-x^{o}}{a_{B}}. \end{equation*}

    Hence, (3.26) and (3.27) are proved if we can show that

    \begin{equation} \frac{\alpha_{n}}{a_{B}}\xrightarrow[\;\;n\;\;]{}0, \end{equation} (3.28)

    and

    \begin{equation} \frac{X_{r^{\star'}_{n}:n}-x^{o}}{a_{B}}\xrightarrow[\;\;n\;\;]{p}-1. \end{equation} (3.29)

    On the other hand, for any \gamma > 0, Lemma 3.1 in Barakat et al. [11] reveals that

    \begin{equation*} \frac{\alpha_{n}}{a_{B}} = \frac{\alpha_{n}}{\alpha_{S(B)}}\sim\frac{\mu^{\frac{-1}{\gamma}}(r^{\star}_{n})}{\mu^{\frac{-1}{\gamma}}(r^{\star}_{S(B)})} = \frac{e^{\frac{-1}{\gamma}n^{\frac{\omega}{2}}}}{e^{\frac{-1}{\gamma}(S(B))^{\frac{\omega}{2}}}} = e^{{\frac{-1}{\gamma}n^{\frac{\omega}{2}}}\left( 1-\left(\frac{S(B)}{n}\right)^{\frac{\omega}{2}}\right)}\xrightarrow[\;\;n\;\;]{}0, \end{equation*}

    where \mu(n) = \exp(\sqrt{n}). Thus, (3.28) is proved. Turning now to prove (3.29), it can be noted that

    \begin{align} & \frac{r_{n}^{\star'}-n\overline{F}(a_{B}x+b_{B})}{\sqrt{r_{n}^{\star'}}} = \frac{r_{n}^{\star'}-n\overline{F}(a_{B}x+x^{o})}{\sqrt{r_{n}^{\star'}}} = \frac{\frac{n}{S(B)}r_{S(B)}^{\star}-n\overline{F}(a_{B}x+x^{o})}{\sqrt{\frac{n}{S(B)}r_{S(B)}^{\star}}} \\ & = \sqrt{\frac{n}{S(B)}}\left(\frac{r_{S(B)}^{\star}-S(B)\overline{F}(a_{B}x+x^{o})}{\sqrt{r_{S(B)}^{\star}}}\right) \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > -1, \\ -\infty, & {\rm{if }}\ x < -1. \end{cases} \end{align} (3.30)

    For every \epsilon > 0, (3.30) implies P\left(\frac{X_{r_{n}^{\star'}:n}-x^{o}}{a_{B}} < \epsilon-1\right) \xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, or equivalently,

    \begin{equation} P\left(\frac{X_{r_{n}^{\star'}:n}-x^{o}}{a_{B}} > \epsilon-1\right) \xrightarrow[\;\;n\;\;]{}0. \end{equation} (3.31)

    Similarly,

    \begin{equation} P\left(\frac{X_{r_{n}^{\star'}:n}-x^{o}}{a_{B}} < -\epsilon-1\right) \xrightarrow[\;\;n\;\;]{}\mathcal{N}(-\infty) = 0. \end{equation} (3.32)

    By combining (3.31) and (3.32), we get

    \begin{equation} P\left(\left|\frac{X_{r_{n}^{\star'}:n}-x^{o}}{a_{B}}+1\right| > \epsilon\right) \xrightarrow[\;\;n\;\;]{}0, \end{equation} (3.33)

which proves (3.29). Now, let F\in D^{(l, \omega)}\Big(\mathcal{N}((m+1)U_{2;\alpha}(x))\Big) . Condition ( C_{3} ) of Theorem 3.3 is clearly satisfied (since \hat{b}_{B} = 0 ). Consequently, it is sufficient to prove only condition ( C_{2} ). Thus, we have to show that

    \begin{equation} \frac{\hat{a}_{B}}{a_{B}} = \frac{X_{r^{\star'}_{n}:n}}{a_{B}}\xrightarrow[\;\;n\;\;]{}1, \end{equation} (3.34)

    in probability or w.p.1. We start by proving the convergence in probability. It is clear that,

    \begin{align} \frac{r_{n}^{\star'}-n\overline{F}(a_{B}x+b_{B})}{\sqrt{r_{n}^{\star'}}} & = \frac{\frac{n}{S(B)}r_{S(B)}^{\star}-n\overline{F}(a_{B}x)}{\sqrt{\frac{n}{S(B)}r_{S(B)}^{\star}}} \\ & = \sqrt{\frac{n}{S(B)}}\left(\frac{r_{S(B)}^{\star}-S(B)\overline{F}(a_{B}x)}{\sqrt{r_{S(B)}^{\star}}}\right) \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > 1, \\ -\infty, & {\rm{if }}\ x < 1. \end{cases} \end{align} (3.35)

    For every \epsilon > 0, (3.35) implies P\left(\frac{X_{r_{n}^{\star'}:n}}{a_{B}} < \epsilon+1\right) \xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, which is equivalent to

    \begin{equation} P\left(\frac{X_{r_{n}^{\star'}:n}}{a_{B}} > \epsilon+1\right) \xrightarrow[\;\;n\;\;]{}0. \end{equation} (3.36)

    Arguing similarly, we get

    \begin{equation} P\left(\frac{X_{r_{n}^{\star'}:n}}{a_{B}} < -\epsilon+1\right) \xrightarrow[\;\;n\;\;]{}\mathcal{N}(-\infty) = 0. \end{equation} (3.37)

    Based on the relations (3.36) and (3.37), we get

    \begin{equation} P\left(\left|\frac{X_{r_{n}^{\star'}:n}}{a_{B}}-1\right| > \epsilon\right) \xrightarrow[\;\;n\;\;]{}0, \end{equation} (3.38)

which proves (3.34). Finally, assume that F\in D^{(l, \omega)}(\mathcal{N}((m+1)U_{3}(x))). By Theorem 3.3, in order to prove conditions ( C_{2} ) and ( C_{3} ), it suffices to show that

    \begin{equation} \frac{\hat{a}_{B}}{{a}_{B}} = \frac{X_{r^{\star'}_{n}:n}-X_{r^{\star''}_{n}:n}}{{a}_{B}}\xrightarrow[\;\;n\;\;]{}1, \end{equation} (3.39)

    and

    \begin{equation} \frac{\hat{b}_{B}-{b}_{B}}{{a}_{B}} = \frac{X_{r^{\star'}_{n}:n}-{b}_{B}}{{{a}_{B}}}\xrightarrow[\;\;n\;\;]{}0, \end{equation} (3.40)

    both in probability or w.p.1. We start with the convergence in probability. Write

    \begin{equation} \frac{\hat{a}_{B}}{{a}_{B}} = \frac{X_{r^{\star'}_{n}:n}-X_{r^{\star''}_{n}:n}}{{a}_{B}} = \frac{X_{r^{\star'}_{n}:n}-{b}_{B}}{{a}_{B}}-\frac{X_{r^{\star''}_{n}:n}-{b}_{B}}{{a}_{B}}. \end{equation} (3.41)

    Consequently, to prove (3.39) and (3.40), we need to show that

    \begin{equation} \frac{X_{r^{\star''}_{n}:n}-{b}_{B}}{{a}_{B}}\xrightarrow[\;\;n\;\;]{}-1, \end{equation} (3.42)

    and

    \begin{equation} \frac{X_{r^{\star'}_{n}:n}-{b}_{B}}{{a}_{B}}\xrightarrow[\;\;n\;\;]{}0. \end{equation} (3.43)

    First, we are going to prove (3.42). Since we have

    \begin{align*} &\; \frac{r_{n}^{\star''}-n\overline{F}(a_{B}x+b_{B})}{\sqrt{r_{n}^{\star''}}} = \frac{\frac{n}{S(B)}\left(r^{\star}_{S(B)}+\sqrt{r^{\star}_{S(B)}}\right)-n\overline{F}(a_{B}x+b_{B})} {\sqrt{\frac{n}{S(B)}\left(r^{\star}_{S(B)}+\sqrt{r^{\star}_{S(B)}}\right)}} \nonumber \\ & = \sqrt{\frac{n}{S(B)}} \left(\frac{r^{\star}_{S(B)}+\sqrt{r^{\star}_{S(B)}}-S(B)\overline{F}(a_{B}x+b_{B})}{\sqrt{r^{\star}_{S(B)}\left(1+1/ \sqrt{r^{\star}_{S(B)}}\right)}}\right) \end{align*}
    \begin{align*} & = \sqrt{\frac{n}{S(B)}}\left(\frac{r^{\star}_{S(B)}+\sqrt{r^{\star}_{S(B)}}-S(B)\overline{F}(a_{B}x+b_{B})}{\sqrt{r^{\star}_{S(B)}(1+o(1))}}\right) \nonumber \\ & = \sqrt{\frac{n}{S(B)}} \left(\frac{r^{\star}_{S(B)}-S(B)\overline{F}(a_{B}x+b_{B})}{\sqrt{r^{\star}_{S(B)}(1+o(1))}}+ \frac{\sqrt{r^{\star}_{S(B)}}}{\sqrt{r^{\star}_{S(B)}(1+o(1))}}\right), \end{align*}

    from the assumption of the theorem, we have

    \begin{equation*} \frac{r^{\star}_{S(B)}-S(B)\overline{F}(a_{B}x+b_{B})}{\sqrt{r^{\star}_{S(B)}(1+o(1))}}\xrightarrow[\;\;n\;\;]{}x. \end{equation*}

    Consequently,

    \begin{equation} \frac{r_{n}^{\star''}-n\overline{F}(a_{B}x+b_{B})}{\sqrt{r_{n}^{\star''}}} \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > -1, \\ -\infty, & {\rm{if }}\ x < -1. \end{cases} \end{equation} (3.44)

    Thus, for every \epsilon > 0, we get P\left(\frac{X_{r_{n}^{\star''}:n}-b_{B}}{a_{B}} < \epsilon-1\right) \xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, and this implies

    \begin{equation} P\left(\frac{X_{r_{n}^{\star''}:n}-b_{B}}{a_{B}} > \epsilon-1\right) \xrightarrow[\;\;n\;\;]{}0. \end{equation} (3.45)

    Further, we have

    \begin{equation} P\left(\frac{X_{r_{n}^{\star''}:n}-b_{B}}{a_{B}} < -\epsilon-1\right) \xrightarrow[\;\;n\;\;]{}0. \end{equation} (3.46)

    Hence, (3.45) and (3.46) lead to

    \begin{equation*} P\left(\left|\frac{X_{r_{n}^{\star''}:n}-b_{B}}{a_{B}}+1\right| > \epsilon\right) \xrightarrow[\;\;n\;\;]{}0, \end{equation*}

    which proves (3.42). Secondly, we are going to prove (3.43). Clearly,

    \begin{equation*} \frac{r_{n}^{\star'}-n\overline{F}(a_{B}x+b_{B})}{\sqrt{r_{n}^{\star'}}} = \frac{\frac{n}{S(B)}r_{S(B)}^{\star}-n\overline{F}(a_{B}x+b_{B})}{\sqrt{\frac{n}{S(B)}r_{S(B)}^{\star}}} = \end{equation*}
    \begin{equation} \sqrt{\frac{n}{S(B)}}\left(\frac{r_{S(B)}^{\star}-S(B)\overline{F}(a_{B}x+b_{B})}{\sqrt{r_{S(B)}^{\star}}}\right) \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > 0, \\ -\infty, & {\rm{if }}\ x < 0. \end{cases} \end{equation} (3.47)

    For every \epsilon > 0, Eq (3.47) leads to P\left(\frac{X_{r_{n}^{\star'}:n}-b_{B}}{a_{B}} < \epsilon\right) \xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, which yields

    \begin{equation} P\left(\frac{X_{r_{n}^{\star'}:n}-b_{B}}{a_{B}} > \epsilon\right) \xrightarrow[\;\;n\;\;]{}0. \end{equation} (3.48)

    In the same way, we get

    \begin{equation} P\left(\frac{X_{r_{n}^{\star'}:n}-b_{B}}{a_{B}} < -\epsilon\right) \xrightarrow[\;\;n\;\;]{}0. \end{equation} (3.49)

    Thus, relations (3.48) and (3.49) imply

    \begin{equation*} P\left(\left|\frac{X_{r_{n}^{\star'}:n}-b_{B}}{a_{B}}\right| > \epsilon\right) \xrightarrow[\;\;n\;\;]{}0, \end{equation*}

which proves (3.43). Finally, in the proof of Parts (i)–(iii), in order to switch to the convergence w.p.1, we argue in the same way that we did at the end of Theorem 3.3's proof. This completes the proof of the theorem.

    In this section, the asymptotic behaviour of the bootstrap distribution for central m -GOSs, H^{\ast(m, k)}_{r_{B}, n, B}(x) = P\left(\frac{Y_{B-r_{B}+1:B}^{\star}-d_{B}}{c_{B}}\leq x|{\bf{X}}_{n}\right), is considered for different choices of the re-sample size B = B(n) when the normalizing constants c_{n} and d_{n} are assumed to be known or unknown.

    The next theorem discusses the consistency of the bootstrap distribution H^{\ast(m, k)}_{r_{B}, n, B}(x) in the case of full-sample bootstrap. It is revealed that the full-sample bootstrap distribution fails to be a consistent estimator of the DF \Psi_\lambda^{(m, k)}(x).

    Theorem 4.1. Assume that the relation (2.9) is satisfied, where the weak limits are of the form

    \begin{equation} \Psi_\lambda^{(m, k)}(x) = \mathcal{N}\left(\frac{C^{\star}_{\lambda_{(m)}}}{C^{\star}_{\lambda}}(m+1)V_{i, \alpha}(x)\right), \;\; \mathit{\mbox{for}} \; i = 1, 2, 3, 4. \end{equation} (4.1)

    Then,

    \begin{equation} H^{\ast(m, k)}_{r_{n}, n, n}(x)\xrightarrow[\;\;n\;\;]{d}\mathcal{N}(\Lambda(x)), \end{equation} (4.2)

where \Lambda(x) has a normal distribution with mean \frac{C^{\star}_{\lambda_{(m)}}}{C^{\star}_{\lambda}}(m+1)V_{i; \alpha}(x) and variance \left(\frac{C^{\star}_{\lambda_{(m)}}}{C^{\star}_{\lambda}}(m+1)\right)^{2} .

    Proof. Although the convergence in (2.9) does not yield continuous types in general, under the condition r_{n} = \lambda n+o(\sqrt{n}), Barakat and El-Shandidy [5] showed that the convergence is uniform. Consequently,

\begin{equation*} H^{\ast(m, k)}_{r_{B}, n, B}(x) = \mathcal{N}\left(\sqrt{B}\, \frac{\lambda-\bar{F_{n}}^{m+1}(c_{B}x+d_{B})}{C_{\lambda}}\right)+\xi_{n}(x), \end{equation*}

where \xi_{n}(x)\xrightarrow[\; \; n\; \; ]{}0 uniformly with respect to x. Now, consider that the condition B = n holds. Since (2.11) is satisfied, we have

    \begin{equation} \sqrt{n} \left( \frac{\lambda(m)-\overline{F}(c_{n}x+d_{n})}{C_{\lambda(m)}}\right) \xrightarrow[\;\;n\;\;]{}V_{i;\alpha}(x). \end{equation} (4.3)

    An application of the central limit theorem yields

    \begin{equation} \frac{n\bar{F_{n}}(c_{n}x+d_{n})-n\overline{F}(c_{n}x+d_{n})} {\sqrt{n\overline{F}(c_{n}x+d_{n})(1-\overline{F}(c_{n}x+d_{n}))}}\xrightarrow[\;\;n\;\;]{d}Z(x), \end{equation} (4.4)

    where Z(x) is the standard normal RV. On the other hand, it is clear from relation (4.3) that \overline{F}(c_{n}x+d_{n})\xrightarrow[\; \; n\; \; ]{}\lambda(m) , which implies

    \begin{equation} \frac{\sqrt{n\overline{F}(c_{n}x+d_{n})(1-\overline{F}(c_{n}x+d_{n}))}}{\sqrt{n}C_{\lambda(m)}}\xrightarrow[\;\;n\;\;]{}1. \end{equation} (4.5)

    The two limit relations (4.3) and (4.5) enable us to apply Khinchin's type theorem on the relation (4.4) to get

    \begin{align} \begin{split} \sqrt{n}\left(\frac{\lambda(m)-\bar{F_{n}}(c_{n}x+d_{n})}{C_{\lambda(m)}}\right) = \sqrt{n}\left(\frac{\lambda(m)-\overline{F}(c_{n}x+d_{n})}{C_{\lambda(m)}}\right)\\ - \frac{n\bar{F_{n}}(c_{n}x+d_{n})-n\overline{F}(c_{n}x+d_{n})}{\sqrt{n}C_{\lambda(m)}}\xrightarrow[\;\;n\;\;]{}V_{i;\alpha}(x)-Z(x). \end{split} \end{align} (4.6)

    As a result of (4.6), we get

    \begin{equation} \begin{aligned} \bar{F_{n}}^{m+1}(c_{n}x+d_{n})& = \lambda \left(1-\frac{C_{\lambda(m)}(V_{i, \alpha}(x)-Z(x))}{\lambda(m)\sqrt{n}}(1+o(1))\right)^{m+1}\\ & = \lambda \left(1-\frac{(m+1)C_{\lambda(m)}(V_{i;\alpha}(x)-Z(x))}{\lambda(m)\sqrt{n}}(1+o(1))\right). \end{aligned} \end{equation} (4.7)

    Consequently,

    \begin{align*} \sqrt{n} \left(\frac{\lambda-\bar{F_{n}}^{m+1}(c_{n}x+d_{n})}{C_{\lambda}} \right) \xrightarrow[\;\;n\;\;]{} & \frac{(m+1)\lambda C_{\lambda_{(m)}}}{\lambda_{(m)}C_{\lambda}}(V_{i;\alpha}(x)-Z(x))\\ & = \frac{C^{\star}_{\lambda_{(m)}}}{C^{\star}_{\lambda}}(m+1)(V_{i;\alpha}(x)-Z(x)). \end{align*}

    Hence, the theorem is proved.

    Theorem 4.2. Under the same conditions of Theorem 4.1, if B = o(n) , then

    \begin{equation} \underset{x\in\mathbb{R}}{\sup}\left|H^{\ast(m, k)}_{r_{B}, n, B}(x)-\Psi_\lambda^{(m, k)}(x)\right|\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (4.8)

    Furthermore, if B is such that \sum_{n = 1}^{\infty}P^{\sqrt{\frac{n}{B}}} < \infty, \forall P\in(0, 1), then

    \begin{equation} \underset{x\in\mathbb{R}}{\sup}\left|H^{\ast(m, k)}_{r_{B}, n, B}(x)-\Psi_\lambda^{(m, k)}(x)\right|\xrightarrow[\;\;n\;\;]{w.p.1}0. \end{equation} (4.9)

    Proof. Write H^{\ast(m, k)}_{r_{B}, n, B}(x) = \mathcal{N}\left(\mathcal{S}_{n, B}(x)\right)+\xi_{n}(x), where, \mathcal{S}_{n, B}(x) = \sqrt{B} \left(\cfrac{\lambda_{(m)}-\bar{F_{n}}(c_{B}x+d_{B})}{C_{\lambda_{(m)}}}\right), and \xi_{n}(x)\xrightarrow[\; \; n\; \; ]{}0 uniformly with respect to x. It can be noted that

    \begin{equation} E\big(\mathcal{S}_{n, B}(x)\big) = \sqrt{B} \left(\cfrac{\lambda_{(m)}-\bar{F}(c_{B}x+d_{B})}{C_{\lambda_{(m)}}}\right)\xrightarrow[\;\;n\;\;]{}V_{i;\alpha}(x), \end{equation} (4.10)

    and

    \begin{equation} \mbox{Var}\big(\mathcal{S}_{n, B}(x)\big) = \frac{B\overline{F}(c_{B}x+d_{B})(1-\overline{F}(c_{B}x+d_{B}))}{nC^{2}_{\lambda_{(m)}}}\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.11)

    Accordingly,

    \begin{equation} \sqrt{B} \left(\frac{\lambda_{(m)}-\bar{F_{n}}(c_{B}x+d_{B})}{C_{\lambda_{(m)}}}\right) \xrightarrow[\;\;n\;\;]{p}V_{i;\alpha}(x). \end{equation} (4.12)

    We'll now show that the convergence in (4.12) is w.p.1. For this purpose, write

    \begin{align*} \sqrt{B} \left( \frac{\lambda_{(m)}-\bar{F_{n}}(c_{B}x+d_{B})}{C_{\lambda_{(m)}}} \right)& = \sqrt{B} \left(\frac{\lambda_{(m)}-\overline{F}(c_{B}x+d_{B})}{C_{\lambda_{(m)}}}\right)\\ &- \sqrt{\frac{B}{n}}\left(\frac{n\bar{F_{n}}(c_{B}x+d_{B})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{n}C_{\lambda_{(m)}}}\right). \end{align*}

    On the other hand, the assumptions of the theorem ensure that \overline{F}(c_{B}x+d_{B})\xrightarrow[\; \; n\; \; ]{}\lambda_{(m)}, and \sqrt{B}\left(\frac{\lambda_{(m)}-\overline{F}(c_{B}x+d_{B})}{C_{\lambda_{(m)}}} \right) \xrightarrow[\; \; n\; \; ]{}V_{i; \alpha}(x). To prove the limit relation \sqrt{B}\left(\frac{\lambda_{(m)}-\bar{F_{n}}(c_{B}x+d_{B})}{C_{\lambda_{(m)}}}\right)\xrightarrow[\; \; n\; \; ]{w.p.1}V_{i; \alpha}(x), it is sufficient to show that

\begin{equation*} \sqrt{\frac{B}{n}}\! \left(\! \frac{n\bar{F_{n}}(c_{B}x+d_{B})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{n}C_{\lambda_{(m)}}}\! \right)\! = \! \sqrt{\frac{B}{n}}\left(\!\frac{n\bar{F_{n}}(c_{B}x+d_{B})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{n\overline{F}(c_{B}x+d_{B})(1-\overline{F}(c_{B}x+d_{B}))}}\right) \xrightarrow[\;\;n\;\;]{w.p.1}0. \end{equation*}

    According to the Borel-Cantelli lemma, we need to show that, for every \epsilon > 0,

\sum\limits_{n = 1}^{\infty}P\left( \sqrt{\frac{B}{n}} {\left|\frac{n\bar{F_{n}}(c_{B}x+d_{B})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{n\overline{F}(c_{B}x+d_{B})(1-\overline{F}(c_{B}x+d_{B}))}}\right| > \epsilon}\right) < \infty.

    For every \theta > 0 , we have

\begin{align*} & \; \sqrt{\frac{B}{n}}\log P\left(\sqrt{\frac{B}{n}} {\left(\frac{n\bar{F_{n}}(c_{B}x+d_{B})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{n\overline{F}(c_{B}x+d_{B})(1-\overline{F}(c_{B}x+d_{B}))}}\right) > \epsilon}\right) \\ & = \sqrt{\frac{B}{n}}\log P\left( {\frac{n\bar{F_{n}}(c_{B}x+d_{B})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{n\overline{F}(c_{B}x+d_{B})(1-\overline{F}(c_{B}x+d_{B}))}} > \epsilon\sqrt{\frac{n}{B}}}\right) \\ & = \sqrt{\frac{B}{n}}\log P\left(e^{\theta \mathcal{T}_{n, B}} > e^{\theta\epsilon\sqrt{\frac{n}{B}}} \right), \end{align*}

    where \mathcal{T}_{n, B} is defined by

\mathcal{T}_{n, B} = \frac{n\bar{F_{n}}(c_{B}x+d_{B})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{n\overline{F}(c_{B}x+d_{B})(1-\overline{F}(c_{B}x+d_{B}))}}.

    From Markov's inequality, we get

    \begin{align*} & \sqrt{\frac{B}{n}}\log P\left(e^{\theta \mathcal{T}_{n, B}} > e^{\theta\epsilon\sqrt{\frac{n}{B}}} \right) \leq \sqrt{\frac{B}{n}}\log \left( e^{-\theta\epsilon\sqrt{\frac{n}{B}}}E\left(e^{\theta \mathcal{T}_{n, B}}\right)\right) \\ \sim & \sqrt{\frac{B}{n}}\left(-\theta\epsilon\sqrt{\frac{n}{B}}+\log M_{B}(\theta) \right) = -\theta\epsilon+\sqrt{\frac{B}{n}}\log M_{B}(\theta)\xrightarrow[\;\;n\;\;]{}-\theta\epsilon. \end{align*}

    Consequently, for sufficiently large n, we get

\begin{align*} & \sum\limits_{n = 1}^{\infty}P\left(\sqrt{\frac{B}{n}} {\left(\frac{n\bar{F_{n}}(c_{B}x+d_{B})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{n\overline{F}(c_{B}x+d_{B})(1-\overline{F}(c_{B}x+d_{B}))}}\right) > \epsilon}\right) \\ \sim & \sum\limits_{n = 1}^{\infty}e^{\log P\left(\sqrt{\frac{B}{n}} \mathcal{T}_{n, B} > \epsilon\right)}\leq \sum\limits_{n = 1}^{\infty}e^{-\theta\epsilon\sqrt{\frac{n}{B}}} < \infty. \end{align*}

    By using the same method, we can show that, for every \epsilon > 0,

\sum\limits_{n = 1}^{\infty}P\left(\sqrt{\frac{B}{n}} {\left(\frac{n\bar{F_{n}}(c_{B}x+d_{B})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{n\overline{F}(c_{B}x+d_{B})(1-\overline{F}(c_{B}x+d_{B}))}}\right) < -\epsilon}\right) < \infty.

    Since the condition, \sum_{n = 1}^{\infty}P^{\sqrt{\frac{n}{B}}} < \infty, \forall P \in (0, 1), assures the convergence of the series \sum_{n = 1}^{\infty}\exp\left\{{-\theta\epsilon\sqrt{\frac{n}{B}}}\right\} for every \epsilon > 0, we have

    \begin{equation} \sqrt{B} \left(\frac{\lambda_{(m)}-\bar{F_{n}}(c_{B}x+d_{B})}{C_{\lambda_{(m)}}} \right) \xrightarrow[\;\;n\;\;]{w.p.1}V_{i;\alpha}(x). \end{equation} (4.13)

    Therefore,

\begin{equation} \begin{aligned} \bar{F_{n}}^{m+1}(c_{B}x+d_{B})& = \lambda \left(1-\frac{C_{\lambda(m)}V_{i;\alpha}(x)}{\lambda(m)\sqrt{B}}(1+o(1))\right)^{m+1}\\ & = \lambda \left(1-\frac{(m+1)C_{\lambda(m)}V_{i;\alpha}(x)}{\lambda(m)\sqrt{B}}(1+o(1))\right), \; w.p.1. \end{aligned} \end{equation} (4.14)

    Accordingly,

\begin{equation*} \sqrt{B}\left(\frac{\lambda-\bar{F_{n}}^{m+1}(c_{B}x+d_{B})}{C_{\lambda}} \right) \xrightarrow[\;\;n\;\;]{w.p.1}\frac{(m+1)\lambda C_{\lambda_{(m)}}}{\lambda_{(m)}C_{\lambda}}V_{i;\alpha}(x) = \frac{C^{\star}_{\lambda_{(m)}}}{C^{\star}_{\lambda}}(m+1)V_{i;\alpha}(x), \end{equation*}

    which was to be proved, and this completes the proof of the theorem.

    Assume that the normalizing constants c_n and d_n are unknown and that they must be estimated using the sample data {\bf{X}}_{n} = (X_{1}, X_{2}, ..., X_{n}) . Let \hat{c}_{B} and \hat{d}_{B} be the estimators of c_{n} and d_{n} based on {\bf{X}}_{n} and \hat H_{r_{B}, n, B}^{\ast(m, k)}(x) = P\left(\frac{Y_{B-r_{B}+1:B}^{\star}-\hat{d}_{B}}{\hat{c}_{B}}\leq x|{\bf{X}}_{n}\right) be the bootstrap distribution for appropriately normalized central m -GOSs. The next theorem gives sufficient conditions for \hat H^{\ast(m, k)}_{r_{B}, n, B}(x) to be consistent, where we restrict ourselves to the first three non-degenerate types, corresponding to V_{i; \alpha}(x), i = 1, 2, 3. Clearly, each of these three limit laws has at most one discontinuity point of the first type.

    Theorem 4.3. Suppose that the conditions of Theorem 4.2 are satisfied. Moreover, suppose that \hat{c}_{B}, \hat{d}_{B}, and B = B(n) satisfy the following three conditions:

(K_{1})\; {H^{\ast}}_{r_{B}, n, B}^{(m, k)}(x)\xrightarrow[\;\;n\;\;]{w.p.1}\Psi_\lambda^{(m, k)}(x), \; \; \; (K_{2}) \; \cfrac{\hat{c}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{w.p.1}1, \; \mathit{\mbox{and}}\; (K_{3})\; \cfrac{\hat{d}_{B}-d_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{w.p.1}0.

Then, \underset{x\in\mathbb{R}^c}{\sup}\left|\hat{{H^{\ast}}}_{r_{B}, n, B}^{(m, k)}(x)-\Psi_\lambda^{(m, k)}(x)\right|\xrightarrow[\; \; n\; \; ]{w.p.1}0, where \mathbb{R}^c\subseteq \mathbb{R} is the set of all continuity points of \Psi_\lambda^{(m, k)}(x). Moreover, this theorem holds if " \xrightarrow[\; \; n\; \; ]{w.p.1} " is replaced by " \xrightarrow[\; \; n\; \; ]{p} ".

    Proof. The proof of the theorem is similar to the proof of Theorem 3.3.

    Now, for the consistency of the bootstrap distribution \hat H^{\ast(m, k)}_{r_{B}, n, B}(x), the next theorem gives a proper choice for the normalizing constants \hat{c}_{B} and \hat{d}_{B} satisfying conditions ( K_{2} ) and ( K_{3} ) in Theorem 4.3 for each domain of attraction of \mathcal{N}\left(\frac{C^{\star}_{\lambda_{(m)}}}{C^{\star}_{\lambda}}(m+1)V_{i; \alpha}(x)\right), \; i = 1, 2, 3.

Theorem 4.4. Let r^{'}_{n} = [\lambda_{(m)} n]+1 , r^{''}_{n} = \left[\frac{n}{\sqrt{B}}+ \lambda_{(m)} n\right]+1, and r^{'''}_{n} = \left[\lambda_{(m)} n-\frac{n}{\sqrt{B}}\right]+1. Then,

    i. \hat{c}_{B} = F_{n}^{-1}\left(\lambda_{(m)}+\frac{1}{\sqrt{B}}\right)-F_{n}^{-1}\left(\lambda_{(m)}\right) = X_{r^{''}_{n}:n}-X_{r^{'}_{n}:n}, and \hat{d}_{B} = F_{n}^{-1}\left(\lambda_{(m)}\right) = X_{r^{'}_{n}:n}, if F\in D^{\lambda_{(m)}}(\mathcal{N}(V_{1;\alpha}(x)));

    ii. \hat{c}_{B} = F_{n}^{-1}\left(\lambda_{(m)}\right)-F_{n}^{-1}\left(\lambda_{(m)}-\frac{1}{\sqrt{B}}\right) = X_{r^{'}_{n}:n}-X_{r^{'''}_{n}:n}, \; and \hat{d}_{B} = F_{n}^{-1}\left(\lambda_{(m)}\right) = X_{r^{'}_{n}:n}, if F\in D^{\lambda_{(m)}}(\mathcal{N}(V_{2;\alpha}(x)));

    iii. \hat{c}_{B} = F_{n}^{-1}\left(\lambda_{(m)}+\frac{1}{\sqrt{B}}\right)-F_{n}^{-1}\left(\lambda_{(m)}\right) = X_{r^{''}_{n}:n}-X_{r^{'}_{n}:n}, \; and \hat{d}_{B} = F_{n}^{-1}\left(\lambda_{(m)}\right) = X_{r^{'}_{n}:n}, if F\in D^{\lambda_{(m)}}(\mathcal{N}(V_{3;\alpha}(x))).

    Moreover, if B = o(n), then

    \begin{equation} \underset{x\in\mathbb{R}}{\sup}\left|\hat H^{\ast(m, k)}_{r_{B}, n, B}(x)-\Psi_\lambda^{(m, k)}(x)\right|\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (4.15)

    Finally, if \sum_{n = 1}^{\infty}P^{\sqrt{\frac{n}{B}}} < \infty for each P \in (0, 1), then the convergence in (4.15) holds w.p.1.
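In practice, all three cases reduce to differences of empirical quantiles around \lambda_{(m)} = \lambda^{1/(m+1)}. The following minimal Python sketch (ours, not the paper's code) covers types i and iii, with the type-ii variant noted in a comment:

```python
import numpy as np

def central_norming(sample, B, lam, m):
    """Plug-in constants of Theorem 4.4: d_hat = F_n^{-1}(lambda_(m)) and
    c_hat = F_n^{-1}(lambda_(m) + 1/sqrt(B)) - F_n^{-1}(lambda_(m))."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    lam_m = lam ** (1.0 / (m + 1))               # lambda_(m) = lambda^{1/(m+1)}
    def q(u):                                    # empirical quantile F_n^{-1}(u)
        return xs[min(n - 1, max(0, int(np.ceil(n * u)) - 1))]
    d_hat = q(lam_m)
    c_hat = q(lam_m + 1.0 / np.sqrt(B)) - d_hat  # type ii: d_hat - q(lam_m - 1/sqrt(B))
    return c_hat, d_hat
```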

    Proof. Let F\in D^{\lambda_{(m)}}(\mathcal{N}(V_{1;\alpha}(x))). In view of Theorem 4.3, we need to show that

    \begin{equation} \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-X_{r^{'}_{n}:n}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}1, \end{equation} (4.16)

    and

    \begin{equation} \frac{\hat{d}_{B}-{d}_{B}}{{c}_{B}} = \frac{{X_{r^{'}_{n}:n}}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}0, \end{equation} (4.17)

    both in probability or w.p.1. We start with the convergence in probability. Clearly,

    \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-X_{r^{'}_{n}:n}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-{d}_{B}}{{c}_{B}}-\frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}.

    To prove (4.16) and (4.17), it is sufficient to show that

    \begin{equation} \frac{X_{r^{''}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}1, \end{equation} (4.18)

    and

    \begin{equation} \frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (4.19)

    We will start by proving (4.18). In view of the relations \left[\frac{n}{\sqrt{B}}+\lambda_{(m)} n\right] = \frac{n}{\sqrt{B}}+\lambda_{(m)} n-\delta and \frac{1}{\sqrt{B}}+\lambda_{(m)} +\frac{1-\delta}{n}\sim \lambda_{(m)} for 0\leq \delta < 1, we get

    \begin{equation*} \frac{(n-r^{''}_{n})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{r^{''}_{n}(1-\frac{r^{''}_{n}}{n})}} = \sqrt{n}\frac{\left(1-\frac{1}{\sqrt{B}}-\lambda_{(m)}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\frac{1}{\sqrt{B}}+\lambda_{(m)}+\frac{1-\delta}{n}\right)\left(1-\left(\frac{1}{\sqrt{B}}+\lambda_{(m)} +\frac{1-\delta}{n}\right)\right)}} \end{equation*}
    \begin{equation} \sim \sqrt{ \frac{n}{B}}\left(\sqrt{B}\frac{(1-\lambda_{(m)})-\overline{F}(c_{B}x+d_{B})} {C_{\lambda_{(m)}}}-\frac{1+\frac{\sqrt{B}}{n}(1-\delta)}{C_{\lambda_{(m)}}}\right) \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > 1, \\ -\infty, & {\rm{if }}\ x < 1. \end{cases} \end{equation} (4.20)

    Relation (4.20) is a direct consequence of the obvious relations,

    \frac{1+\frac{\sqrt{B}}{n}(1-\delta)}{C_{\lambda_{(m)}}}\xrightarrow[\;\;n\;\;]{}\frac{1}{C_{\lambda_{(m)}}} = c\; \; \mbox{and}\; \; \sqrt{B}\frac{(1-{\lambda_{(m)}})-\overline{F}(c_{B}x+d_{B})} {C_{\lambda_{(m)}}}\xrightarrow[\;\;n\;\;]{}cx^\alpha.

    The relation (4.20) yields P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} < \epsilon+1\right)\xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, which is equivalent to

    \begin{equation} P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} > \epsilon+1\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.21)

    Similarly, we get

    \begin{equation} P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} < -\epsilon+1\right)\xrightarrow[\;\;n\;\;]{}\mathcal{N}(-\infty) = 0. \end{equation} (4.22)

    From (4.21) and (4.22), we get P\left(\left|\frac{X_{r^{''}_{n}:n}-d_{B}}{{c}_{B}}-1\right| > \epsilon\right)\xrightarrow[\; \; n\; \; ]{}0, which proves (4.18). Now, we are going to prove (4.19). It is simple to derive that

    \begin{equation*} \frac{(n-r^{'}_{n})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{r^{'}_{n}(1-\frac{r^{'}_{n}}{n})}} = \sqrt{n}\frac{\left(1-\lambda_{(m)}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\lambda_{(m)}+\frac{1-\delta}{n}\right)\left(1-\left(\lambda_{(m)} +\frac{1-\delta}{n}\right)\right)}} \end{equation*}
    \begin{equation} = \sqrt{\frac{n}{B}}\left(\sqrt{B}\frac{\left(1-\lambda_{(m)}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\lambda_{(m)}+\frac{1-\delta}{n}\right)\left(1-\left(\lambda_{(m)} +\frac{1-\delta}{n}\right)\right)}}\right) \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > 0, \\ -\infty, & {\rm{if }}\ x < 0. \end{cases} \end{equation} (4.23)

    Thus, from (4.23), we have P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} < \epsilon\right)\xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, which is equivalent to

    \begin{equation} P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} > \epsilon\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.24)

    Similarly, we get

    \begin{equation} P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} < -\epsilon\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.25)

Therefore, by combining the relations (4.24) and (4.25), we get P\left(\left|\frac{X_{r^{'}_{n}:n}-d_{B}}{{c}_{B}}\right| > \epsilon\right)\xrightarrow[\; \; n\; \; ]{}0, which proves (4.19). This completes the proof of Part i. Now, let F\in D^{\lambda_{(m)}}(\mathcal{N}(V_{2;\alpha}(x))). It is sufficient to establish from Theorem 4.3 that

    \begin{equation} \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{'}_{n}:n}-X_{r^{'''}_{n}:n}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}1, \end{equation} (4.26)

    and

    \begin{equation} \frac{\hat{d}_{B}-{d}_{B}}{{c}_{B}} = \frac{{X_{r^{'}_{n}:n}}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}0, \end{equation} (4.27)

    both in probability or w.p.1. We begin with the case of convergence in probability, starting from the decomposition

    \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{'}_{n}:n}-X_{r^{'''}_{n}:n}}{{c}_{B}} = \frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}-\frac{X_{r^{'''}_{n}:n}-{d}_{B}}{{c}_{B}}.

    Hence, to prove (4.26) and (4.27), it is sufficient to show that

    \begin{equation} \frac{X_{r^{'''}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}-1, \end{equation} (4.28)

    and

    \begin{equation} \frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (4.29)

    We are going to prove (4.28). By applying the relations [\lambda_{(m)}n-\frac{n}{\sqrt{B}}] = \lambda_{(m)}n-\frac{n}{\sqrt{B}}-\delta, \; 0 \leq\delta < 1, and \lambda_{(m)}-\frac{1}{\sqrt{B}} +\frac{1-\delta}{n}\sim \lambda_{(m)}, as n\to\infty, we can deduce that

    \begin{equation*} \frac{(n-r^{'''}_{n})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{r^{'''}_{n}(1-\frac{r^{'''}_{n}}{n})}} = \sqrt{n}\frac{\left(1-\lambda_{(m)}+\frac{1}{\sqrt{B}}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\lambda_{(m)}-\frac{1}{\sqrt{B}}+\frac{1-\delta}{n}\right)\left(1-\left(\lambda_{(m)}-\frac{1}{\sqrt{B}} +\frac{1-\delta}{n}\right)\right)}} \end{equation*}
    \begin{equation} \sim\sqrt{\frac{n}{B}}\left(\sqrt{B}\frac{(1-\lambda_{(m)})-\overline{F}(c_{B}x+d_{B})} {C_{\lambda_{(m)}}}-\frac{-1+\frac{\sqrt{B}}{n}(1-\delta)}{C_{\lambda_{(m)}}}\right) \xrightarrow[\;\;n\;\;]{} \begin{cases} -\infty, & {\rm{if }}\ |x| > 1, \\ \infty, & {\rm{if }}\ |x| < 1. \end{cases} \end{equation} (4.30)

    Thus, on account of (4.30), we get P\left(\frac{X_{r^{'''}_{n}:n}-d_{B}}{c_{B}} < \epsilon-1\right)\xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, which implies

    \begin{equation} P\left(\frac{X_{r^{'''}_{n}:n}-d_{B}}{c_{B}} > \epsilon-1\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.31)

    In a similar vein, we have

    \begin{equation} P\left(\frac{X_{r^{'''}_{n}:n}-d_{B}}{c_{B}} < -\epsilon-1\right)\xrightarrow[\;\;n\;\;]{}\mathcal{N}(-\infty) = 0. \end{equation} (4.32)

    From (4.31) and (4.32), we get P\left(\left|\frac{X_{r^{'''}_{n}:n}-d_{B}}{{c}_{B}}+1\right| > \epsilon\right)\xrightarrow[\; \; n\; \; ]{}0. Hence, (4.28) is proved. We turn now to prove (4.29). We start with the obvious limit relation

    \begin{equation*} \frac{(n-r^{'}_{n})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{r^{'}_{n}(1-\frac{r^{'}_{n}}{n})}} = \sqrt{n}\frac{\left(1-\lambda_{(m)}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\lambda_{(m)}+\frac{1-\delta}{n}\right)\left(1-\left(\lambda_{(m)} +\frac{1-\delta}{n}\right)\right)}} \end{equation*}
    \begin{equation} = \sqrt{\frac{n}{B}}\left(\sqrt{B}\frac{\left(1-\lambda_{(m)}-\frac{1-\delta}{n}\right)-\overline{F}(c_{B}x+d_{B})} {\sqrt{\left(\lambda_{(m)}+\frac{1-\delta}{n}\right)\left(1-\left(\lambda_{(m)} +\frac{1-\delta}{n}\right)\right)}}\right) \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > 0, \\ -\infty, & {\rm{if }}\ x < 0, \end{cases} \end{equation} (4.33)

    which in turn implies that P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} < \epsilon\right)\xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, and hence

    \begin{equation} P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} > \epsilon\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.34)

    Moreover, the limit relation (4.33) yields

    \begin{equation} P\left(\frac{X_{r^{'}_{n}:n}-d_{B}}{c_{B}} < -\epsilon\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.35)

    By combining (4.34) and (4.35), we get P\left(\left|\frac{X_{r^{'}_{n}:n}-d_{B}}{{c}_{B}}\right| > \epsilon\right)\xrightarrow[\; \; n\; \; ]{}0, which proves (4.29).

    Finally, consider the case F\in D^{\lambda_{(m)}}(\mathcal{N}(V_{3;\alpha}(x))). From Theorem 4.3, it suffices to show that

    \begin{equation} \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-X_{r^{'}_{n}:n}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}1, \end{equation} (4.36)

    and

    \begin{equation} \frac{\hat{d}_{B}-{d}_{B}}{{c}_{B}} = \frac{{X_{r^{'}_{n}:n}}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{}0, \end{equation} (4.37)

    both in probability or w.p.1. We again treat the case of convergence in probability first, starting from the decomposition

    \frac{\hat{c}_{B}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-X_{r^{'}_{n}:n}}{{c}_{B}} = \frac{X_{r^{''}_{n}:n}-{d}_{B}}{{c}_{B}}-\frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}.

    Therefore, to prove (4.36) and (4.37), it is sufficient to show that

    \begin{equation} \frac{X_{r^{''}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}1, \end{equation} (4.38)

    and

    \begin{equation} \frac{X_{r^{'}_{n}:n}-{d}_{B}}{{c}_{B}}\xrightarrow[\;\;n\;\;]{p}0. \end{equation} (4.39)

    By proceeding in the same manner as in Parts ⅰ and ⅱ, we can easily show that

    \begin{equation} \frac{(n-r^{''}_{n})-n\overline{F}(c_{B}x+d_{B})}{\sqrt{r^{''}_{n}(1-\frac{r^{''}_{n}}{n})}} \xrightarrow[\;\;n\;\;]{}\begin{cases} \infty, & {\rm{if }}\ x > 1, \\ -\infty, & {\rm{if }}\ x < 1. \end{cases} \end{equation} (4.40)

    Relation (4.40) yields P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} < \epsilon+1\right)\xrightarrow[\; \; n\; \; ]{}\mathcal{N}(\infty) = 1, which is equivalent to

    \begin{equation} P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} > \epsilon+1\right)\xrightarrow[\;\;n\;\;]{}0. \end{equation} (4.41)

    Similarly, we get

    \begin{equation} P\left(\frac{X_{r^{''}_{n}:n}-d_{B}}{c_{B}} < -\epsilon+1\right)\xrightarrow[\;\;n\;\;]{}\mathcal{N}(-\infty) = 0. \end{equation} (4.42)

    The two limit relations (4.41) and (4.42) yield

    P\left(\left|\frac{X_{r^{''}_{n}:n}-d_{B}}{{c}_{B}}-1\right| > \epsilon\right)\xrightarrow[\;\;n\;\;]{}0,

    which in turn proves (4.38). On the other hand, the proof of the relation (4.39) also follows by proceeding as we did in Parts ⅰ and ⅱ. Finally, in order to transfer to the convergence w.p.1 in Parts ⅰ–ⅱ, we argue in the same way we did at the end of the proof of Theorem 3.3. The proof of the theorem is now complete.

    In this section, a simulation study is conducted to explain how the bootstrap re-sample size B can be chosen numerically, relying on the best fit of the bootstrapping DF of central and intermediate quantiles for different ranks. More specifically, we apply the Kolmogorov-Smirnov (K-S) goodness of fit test at a 5% significance level to the null hypothesis H_0: the central (intermediate) quantiles follow a normal distribution. Moreover, we repeat the K-S test many times for different values of B and then select the value of B that corresponds to the largest average p -value for the fit. See Tables 1–3.

    Table 1.  The average p -values ( \overline{p} ) for fitting different bootstrap central quantiles of the SOSs and OOSs models to a normal distribution with various values of B ( ^{\star} marks the largest \overline{p} in each column).

    | B | k=1, m=1 (SOSs), \lambda=0.5 | k=1, m=1 (SOSs), \lambda=0.25 | k=1, m=2 (SOSs), \lambda=0.5 | k=1, m=2 (SOSs), \lambda=0.25 | k=1, m=0 (OOSs), \lambda=0.5 | k=1, m=0 (OOSs), \lambda=0.25 |
    |---|---|---|---|---|---|---|
    | 100 | 0.25239 | 0.31176 | 0.34101 | 0.22219 | 0.35895^{\star} | 0.26554 |
    | 150 | 0.29968^{\star} | 0.24726 | 0.36257^{\star} | 0.22550 | 0.31619 | 0.31344^{\star} |
    | 200 | 0.17472 | 0.32276^{\star} | 0.34588 | 0.22552^{\star} | 0.32378 | 0.21946 |
    | 250 | 0.21397 | 0.24931 | 0.31296 | 0.18478 | 0.27626 | 0.24845 |
    | 300 | 0.19569 | 0.29015 | 0.26269 | 0.18825 | 0.29849 | 0.18454 |
    | 350 | 0.19635 | 0.26083 | 0.24576 | 0.18935 | 0.27687 | 0.16976 |
    | 400 | 0.15829 | 0.26653 | 0.26114 | 0.17181 | 0.24873 | 0.17627 |
    Table 2.  The average p -values for fitting bootstrap intermediate quantiles of the OOSs and SOSs models to a normal distribution with various values of B.

    | B | Intermediate (OOSs): \overline{p} ( r_{n}=\sqrt{n} ) | Intermediate (SOSs): \overline{p} ( r^{\star}_{n}=\sqrt{n} ) |
    |---|---|---|
    | 100 | 0.12289 | 0.02019 |
    | 150 | 0.20122 | 0.07148 |
    | 200 | 0.28733^{\star} | 0.14383 |
    | 250 | 0.18223 | 0.18207 |
    | 300 | 0.12546 | 0.21568^{\star} |
    | 350 | 0.08254 | 0.17340 |
    | 400 | 0.05879 | 0.12973 |
    Table 3.  The average p -values for fitting bootstrap intermediate quantiles of OOSs to a normal distribution with various values of B when M = 1000 and n = 200,000 .

    | B | \overline{p} ( r_{n}=\sqrt{n} ) |
    |---|---|
    | 100 | 0.30920 |
    | 150 | 0.22490 |
    | 200 | 0.25541 |
    | 250 | 0.36622 |
    | 300 | 0.43944^{\star} |
    | 350 | 0.39108 |
    | 400 | 0.38285 |

    In this simulation, three m -GOSs sub-models based on the standard normal distribution are considered, namely, OOSs ( \gamma_i = n-i+1 ), SOSs with m = 1 (i.e., \gamma_i = 2(n-i)+1 ) and SOSs with m = 2 (i.e., \gamma_i = 3(n-i)+1 ). Moreover, the full sample size is n = 20,000 (enlarging the sample size makes the running time very long, especially for the SOSs model), the number of replicates is M = 1000 , the central ranks are \lambda = 0.25 and \lambda = 0.5, the bootstrap re-sample sizes are B = 100, 150, \ldots, 400 (i.e., from 100 to 400 in steps of 50), and we choose the intermediate rank to be r_{n} = \sqrt{n}. The results are presented in Tables 1 and 2.
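    For a concrete reference point, the following minimal Python sketch generates an ordered m -GOSs sample from the standard normal for the three sub-models above. It relies on the standard product representation of uniform GOSs (with independent B_j \sim \mathrm{Beta}(\gamma_j, 1) ) rather than reproducing the algorithm of Barakat et al. [8], whose details may differ; the function name m_gos_sample and the seed are illustrative.

```python
import numpy as np
from scipy import stats

def m_gos_sample(n: int, m: int, k: int, rng: np.random.Generator) -> np.ndarray:
    """Ordered m-GOSs sample from N(0,1) with gamma_i = k + (n - i)(m + 1)."""
    i = np.arange(1, n + 1)
    gamma = k + (n - i) * (m + 1)            # OOSs: m = 0; SOSs: m = 1, 2 (k = 1)
    # Product representation of uniform GOSs: U_(r) = 1 - prod_{j <= r} B_j,
    # where B_j = U_j**(1/gamma_j) ~ Beta(gamma_j, 1) are independent.
    b = rng.random(n) ** (1.0 / gamma)
    u_gos = 1.0 - np.cumprod(b)              # nondecreasing uniform m-GOSs
    u_gos = np.clip(u_gos, 1e-12, 1 - 1e-12) # guard against floating-point underflow
    return stats.norm.ppf(u_gos)             # quantile transform to N(0,1)

rng = np.random.default_rng(7)
X_n = m_gos_sample(20_000, m=1, k=1, rng=rng)  # SOSs with gamma_i = 2(n-i)+1
```

    For m = 0 and k = 1 the representation reduces to ordinary order statistics, so the output agrees in distribution with sorting n independent standard normal variates.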

    This study relies on the fact that the suitably normalized sample central and intermediate quantiles based on the standard normal distribution weakly converge to the normal distribution. Moreover, according to the results of Sections 3 and 4, we expect that the bootstrapping DFs of central and intermediate quantiles converge to the normal distribution provided that B \ll n (i.e., B = o(n) ) for central ranks and S(B) \ll n (i.e., S(B) = o(n) ) for intermediate ranks. Moreover, based on the K-S test for normality and the corresponding p -values, the best value of B for central ranks (that corresponds to the largest p -value) should be chosen such that \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{B}}} < \infty, \forall \lambda\in(0, 1), while the best value of B for intermediate ranks should be chosen such that \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{S(B)}}} < \infty, \forall \lambda\in(0, 1) (see Remark 5.3).
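    To sketch why the sufficient condition quoted in Remark 5.3 implies this summability (a standard argument, assuming the natural logarithm throughout): if \sqrt{B} = o\left(\frac{2\sqrt{n}}{\log n}\right), then for every \varepsilon > 0 and all sufficiently large n,

    \begin{equation*} \sqrt{\frac{n}{B}}\geq \frac{\log n}{2\varepsilon}, \qquad \mbox{whence}\qquad \lambda^{\sqrt{\frac{n}{B}}} = e^{-\sqrt{\frac{n}{B}}\log(1/\lambda)}\leq n^{-\frac{\log(1/\lambda)}{2\varepsilon}}, \end{equation*}

    and taking \varepsilon < \frac{1}{2}\log(1/\lambda) makes the right-hand side summable over n ; since \varepsilon is arbitrary, this holds for every \lambda\in(0, 1).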

    The simulation study is implemented in Mathematica 12.3 via the following algorithm; a hedged Python sketch of the same steps is given after the algorithm.

    Step 1. Select the m -GOSs model. That is, choose the values of \gamma_i .

    Step 2. Generate an ordered sample of size n = 20000 , say {\bf{X}}_n , based on the standard normal distribution by the algorithm of Barakat et al. [8].

    Step 3. Choose the central or intermediate rank.

    Step 4. Select the bootstrap re-sample size B.

    Step 5. Select M random samples, each of which is size B, from {\bf{X}}_n with replacement.

    Step 6. Compute the quantile of each re-sample, and store the resulting M values in Q_B .

    Step 7. By using the K-S test, check the normality of the data set Q_B and then compute the p -value.

    Step 8. Repeat Steps 5–7 one hundred times for each chosen value of B and then compute the average p -value.

    Step 9. Repeat Steps 4–8 for different values of B , and then pick the largest average p -value and the corresponding value of B .
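    As a cross-check of Steps 1–9, here is a minimal Python sketch for the OOSs case ( m = 0 ) with a central rank. It is an illustration rather than the authors' Mathematica code: in Step 7 it takes the fully specified null to be the asymptotic law \mathcal{N}\left(\Phi^{-1}(\lambda), \frac{\lambda(1-\lambda)}{B\,\varphi^{2}(\Phi^{-1}(\lambda))}\right) of the bootstrap \lambda -quantile (an assumption on our part), and Step 8's averaging over 100 repetitions is omitted for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, M, lam = 20_000, 1_000, 0.5               # sample size, replicates, central rank
X_n = np.sort(rng.standard_normal(n))         # Step 2 (OOSs case, m = 0)

q = stats.norm.ppf(lam)                       # theoretical lam-quantile of N(0, 1)
for B in range(100, 401, 50):                 # Steps 4 and 9
    # Steps 5-6: M bootstrap re-samples of size B; the lam-quantile of each
    Q_B = np.quantile(rng.choice(X_n, size=(M, B), replace=True), lam, axis=1)
    # Step 7: K-S test against the fully specified asymptotic normal law
    sigma = np.sqrt(lam * (1.0 - lam) / B) / stats.norm.pdf(q)
    pvalue = stats.kstest(Q_B, "norm", args=(q, sigma)).pvalue
    print(f"B = {B}: p-value = {pvalue:.5f}")
```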

    Remark 5.1. In the earlier version of this paper, in order to implement Step 7 of the previous algorithm, we fitted the data sets Q_B to the normal DF by using the K-S test after calculating the sample mean and standard deviation. However, we noted an important issue: the K-S test can be used to test the fit to a normal distribution only when its parameters are not estimated from the data (cf. [24]). Since our focus here is only on checking the normality of the bootstrap samples, we apply the K-S test to check the normality of the given sample bootstrapping statistics without estimating any parameters.
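    The point of Remark 5.1 can be illustrated with a small hedged snippet: when the normal parameters are estimated from the same data, the classical K-S p -values are systematically too large, whereas a fully prespecified null keeps the test valid (the seed and sample size below are arbitrary).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(3.0, 2.0, size=500)

# Invalid use: parameters estimated from the same data inflate the p-value,
# because the fitted normal tracks the empirical DF too closely.
p_estimated = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue

# Valid use: the null distribution N(3, 2) is fully specified in advance.
p_specified = stats.kstest(x, "norm", args=(3.0, 2.0)).pvalue
print(p_estimated, p_specified)
```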

    Remark 5.2. According to the results presented in Tables 1 and 2, it is noted that for n = 20000, the largest average p -values for the selected central ranks are always achieved at values of B in the interval [100,200] , while the best average p -values for the selected intermediate ranks are achieved at values of B in the interval [200,300] . However, the accuracy of the goodness of fit depends on both the selected GOSs model and the selected ranks. Moreover, in view of the results given in Table 3, for the same bootstrap re-sample size, the average p -value for intermediate OSs increases as the sample size increases.

    Remark 5.3. Based on the results of Sections 3 and 4, the best performance of the bootstrapping DFs of the central m-GOSs occurs at the values of B for which \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{B}}} < \infty, \forall \lambda\in(0, 1) . On the other hand, according to [2], the condition \sqrt{B} = o(\frac{2\sqrt{n}}{\log n}) is a sufficient condition for \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{B}}} < \infty, \forall \lambda\in(0, 1), which implies that the best performance of the bootstrapping DFs of the central OSs occurs when B\ll800 (for n = 20000 ). Moreover, in the case of intermediate m-GOSs, the condition \sqrt{S(B)} = o(\frac{2\sqrt{n}}{\log n}) is a sufficient condition for \sum_{n = 1}^{\infty}\lambda^{\sqrt{\frac{n}{S(B)}}} < \infty, \forall \lambda\in(0, 1), which implies that the best performance of the bootstrapping DFs of intermediate m-GOSs occurs when S(B)\ll800. Since in our study we choose the intermediate rank r_{n} = \sqrt{n}, this implies B\ll7500. Therefore, the simulation output supports this anticipated result.
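    As a quick numerical check of the bound in Remark 5.3 (assuming the natural logarithm), for n = 20000 we have

    \begin{equation*} \left(\frac{2\sqrt{n}}{\log n}\right)^{2} = \frac{4n}{\log^{2}n} = \frac{80000}{(\log 20000)^{2}}\approx\frac{80000}{98.08}\approx 816, \end{equation*}

    which is consistent with the stated threshold B\ll800 and with the tested range B\in[100, 400] .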

    The bootstrap method is an efficient procedure for solving many statistical problems based on re-sampling from the available data. For example, it is used to find standard errors for estimators, confidence intervals for unknown parameters, and p-values for test statistics under a null hypothesis. One of the desired properties of the bootstrap method is consistency, which guarantees that the limit of the bootstrap distribution is the same as the distribution of the given statistic. In this paper, we investigated the strong consistency of bootstrapping central and intermediate m -GOSs for an appropriate choice of re-sample size, for both known and unknown normalizing constants. Finally, a simulation study was conducted to explain how the bootstrap re-sample size B can be chosen numerically, relying on the best fit of the bootstrapping DF of central and intermediate quantiles for different ranks.

    The authors are grateful to the editor and anonymous referees for their insightful comments and suggestions, which helped to improve the paper's presentation.

    All authors declare no conflict of interest in this paper.
