Research article

Sieve bootstrap test for multiple change points in the mean of long memory sequence

  • In this paper, the sieve bootstrap test for multiple change points in the mean of a long memory sequence is studied. First, the ANOVA-type test statistic for change point detection is obtained. Second, the sieve bootstrap statistic is constructed and its consistency under the Mallows measure is proved. Finally, the effectiveness of the method is illustrated by simulation and a real-data example. Simulation results show that our method not only controls the empirical size well but also has reasonably good power.

    Citation: Wenzhi Zhao, Dou Liu, Huiming Wang. Sieve bootstrap test for multiple change points in the mean of long memory sequence[J]. AIMS Mathematics, 2022, 7(6): 10245-10255. doi: 10.3934/math.2022570




    The presence of change points can easily mislead conventional time series analysis and lead to erroneous conclusions. A central problem in change point analysis is to detect whether change points are present in a statistical sequence.

    The statistical literature on the change point problem starts with Page (1954) [1], whose article on quality inspection attracted the attention of experts in various fields. Horváth and Kokoszka (1997) [2] and Kokoszka and Leipus (1998) [3] study the CUSUM estimator of the mean change point and obtain its limiting distribution. Kuan and Hsu (1998) [4] propose a least squares method to estimate the mean change point in a fractionally integrated process. Shao (2011) [5] uses a ratio statistic to solve the testing problem for a mean change point. In practice, a data sequence may contain more than one change point, and methods designed for the at-most-one-change-point problem perform poorly in multiple change point detection, so the study of multiple change point detection is of great significance. Bai (1997) [6] and Bai and Perron (1998, 2003) [7,8,9] consider the estimation and detection of multiple change points in linear processes. Bardet et al. (2010) [10] and Kejriwal et al. (2013) [11] use different methods to solve the same problem. Ma et al. (2020) [12], MacNeill et al. (2020) [13] and Bouzebda and Ferfache (2021) [14] consider the detection of multiple change points in linear processes. Noriah and Emad-eldin (2014) [15] propose an ANOVA statistic to test for multiple change points in an i.i.d. sequence and derive its limiting distribution under the i.i.d. assumption. However, it is not easy to obtain the limiting distribution for long-range dependent sequences; fortunately, the bootstrap method offers a convenient way around this problem.

    Long memory processes are prevalent in many areas, for example in the geophysical sciences, microeconomics, asset pricing, stock returns and exchange rates. Hidalgo and Robinson (1996) [16] propose a Wald method to test for a mean change point in a long memory sequence. Lazarova (2005) [17] studies change point detection in linear regression models with long memory errors. Wang (2008) [18] gives an estimation method for the change point in nonparametric regression models with long memory errors. The above literature focuses on a single change point in a long memory sequence. In practice, however, due to the interference of various factors, the statistical properties of a long memory sequence may change not just once but many times. Thus, it is necessary to study the multiple change point problem, and in this paper we study multiple change point detection for long memory sequences.

    The sieve bootstrap method was first introduced by Bühlmann (1997) [19]. Alonso et al. (2002, 2003, 2004) [20,21,22] and Mukhopadhyay et al. (2010) [23] use the sieve bootstrap to study forecasting problems for time series. Poskitt (2008) [24] proves properties of the sieve bootstrap and points out that it is very useful for analyzing long memory sequences. Therefore, in this paper, we consider the sieve bootstrap test for multiple change points in the mean of a long memory sequence.

    We assume that the $n$ observations $X_1,X_2,\ldots,X_n$ are given by:

    $$X_t=\mu(t)+e_t,\qquad t=1,2,\ldots,n,$$
    $$\mu(t)=\begin{cases}\mu_1, & 1\le t\le n_1,\\ \mu_2, & n_1+1\le t\le n_2,\\ \ \vdots & \\ \mu_{k+1}, & n_k+1\le t\le n,\end{cases}\tag{2.1}$$

    where $e_t=\varphi(B)\varepsilon_t=\sum_{j=0}^{\infty}\varphi_j\varepsilon_{t-j}$, $\varphi_j\sim c_0 j^{d-1}$ ($0<c_0<\infty$, $j\to\infty$, $0<d<0.5$), $\sum_{j=0}^{\infty}\varphi_j^2<\infty$, and $\varepsilon_t$ is an i.i.d. process with mean zero and finite variance $\sigma^2$. The symbol "$\sim$" indicates that the ratio of the left- and right-hand sides tends to 1. The sequence $\{e_t,t=1,2,\ldots,n\}$ is a stationary linear sequence with long memory, and so is $\{X_t,t=1,2,\ldots,n\}$. The parameters $\mu_1,\mu_2,\ldots,\mu_{k+1}$ are finite constants and $n_1,n_2,\ldots,n_k$ are the unknown change points. We consider here the problem of testing the null hypothesis of no change point:

    $$H_0:\ \mu_1=\mu_2=\cdots=\mu_{k+1}\tag{2.2}$$

    against the multiple change points alternative:

    $$H_1:\ \mu_i\neq\mu_j\ \text{for some }1\le i<j\le k+1.\tag{2.3}$$

    Let $\tau=(\tau_1,\ldots,\tau_k)$ with $0<\tau_1<\tau_2<\cdots<\tau_k<1$ be any partition of $[0,1]$ satisfying $[n\tau_i]\ge[n\tau_{i-1}]+2$, $i=1,2,\ldots,k+1$, where $\tau_0=0$ and $\tau_{k+1}=1$. Define $d_{i,n}=[n\tau_i]-[n\tau_{i-1}]$, $i=1,2,\ldots,k+1$. Let $S_0=0$, $S_r=\sum_{j=1}^{r}X_j$, $r=1,2,\ldots,n$, and $\bar X=S_n/n$. For $i=1,2,\ldots,k+1$, the mean of $X_{[n\tau_{i-1}]+1},\ldots,X_{[n\tau_i]}$ is $\bar X_i=\frac{1}{d_{i,n}}\sum_{t=[n\tau_{i-1}]+1}^{[n\tau_i]}X_t$.

    The one-way ANOVA-type test statistic proposed by Noriah et al. (2014) [15] is:

    $$Z_n(k):=\int_{\tau}V_n(\tau)\,d\tau,\tag{2.4}$$

    where

    $$V_n(\tau)=c_0^{-2}\,n^{-2d}\,\mathrm{SSTr}(\tau),$$

    and

    $$\mathrm{SSTr}(\tau)=\sum_{i=1}^{k+1}d_{i,n}\left(\bar X_i-\bar X\right)^2.$$

    Often the number of change points $k$ is unknown, and we assume that it has an upper bound $K$. The test statistic is then defined as $Z_n(K)=\max_{1\le k\le K}Z_n(k)$, and its limiting distribution is approximated by the sieve bootstrap method.
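To make the definitions concrete, the statistic can be evaluated numerically. The sketch below computes $\mathrm{SSTr}(\tau)$ for a given partition and approximates the integral over $\tau$ by averaging over randomly drawn partitions; the function names, the Monte Carlo approximation of the integral and the default arguments are our own illustrative choices, not the authors' code:

```python
import numpy as np

def sstr(x, taus):
    """Between-segment sum of squares SSTr(tau) for a partition
    0 < tau_1 < ... < tau_k < 1 of the sample x."""
    n = len(x)
    cuts = [0] + [int(n * t) for t in taus] + [n]
    xbar = x.mean()
    # sum of d_{i,n} * (segment mean - overall mean)^2, skipping empty segments
    return sum((b - a) * (x[a:b].mean() - xbar) ** 2
               for a, b in zip(cuts[:-1], cuts[1:]) if b > a)

def z_n(x, k, d, c0=1.0, n_draws=500, seed=0):
    """Crude Monte Carlo approximation of Z_n(k): average of the
    normalized SSTr over randomly drawn admissible partitions."""
    rng = np.random.default_rng(seed)
    n = len(x)
    norm = 1.0 / (c0 ** 2 * n ** (2 * d))      # c0^{-2} n^{-2d}
    vals = [norm * sstr(x, np.sort(rng.uniform(0.05, 0.95, k)))
            for _ in range(n_draws)]
    return float(np.mean(vals))
```

A mean shift inflates the between-segment sum of squares relative to a series with constant mean, which is what the test exploits.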

    $\{X_t\}$ is invertible and has an AR($\infty$) representation. The specific steps of the sieve bootstrap method are as follows:

    Step 1. Having observed the sample $x_1,x_2,\ldots,x_n$, we start by estimating $\hat d$; a specific estimation method can be found in Hurst (1951) [25];

    Step 2. Apply the $\hat d$-order fractional difference to $\{x_t\}$ to obtain the sequence $\{y_t\}$, and fit an AR($p$) autoregressive process to $\{y_t\}$, that is,

    $$\hat y_t=\hat\phi_0+\hat\phi_1 y_{t-1}+\hat\phi_2 y_{t-2}+\cdots+\hat\phi_p y_{t-p}.\tag{2.5}$$

    Given a maximum AR order $p=p(n)$ for the autoregressive approximation, choose the optimal $\hat p$ by the BIC criterion, where $p(n)=o(n)$ and $p(n)\to\infty$ as the sample size $n\to\infty$. We obtain the residuals $\hat\varepsilon_t$. The residuals are resampled to obtain the bootstrap residual sequence $\{\hat\varepsilon_t^{*}\}$, and the bootstrap autoregression $y_t^{*}=\hat y_t+\hat\varepsilon_t^{*}$ is constructed from it.
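The AR fit and BIC order selection in Step 2 can be sketched with ordinary least squares; a minimal illustration with hypothetical helper names, not the authors' implementation:

```python
import numpy as np

def fit_ar(y, p):
    """OLS fit of an AR(p) with intercept; returns (coefficients, residuals)."""
    n = len(y)
    # design matrix: [1, y_{t-1}, ..., y_{t-p}] for t = p, ..., n-1
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - j:n - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ beta
    return beta, resid

def select_order_bic(y, p_max):
    """Choose the AR order minimizing BIC = m*log(sigma^2) + k*log(m)."""
    best = None
    for p in range(1, p_max + 1):
        _, resid = fit_ar(y, p)
        m = len(resid)
        bic = m * np.log(resid.var()) + (p + 1) * np.log(m)
        if best is None or bic < best[0]:
            best = (bic, p)
    return best[1]
```

The BIC penalty $(p+1)\log m$ discourages overfitting, so the selected order stays small relative to the sample size, in line with the requirement $p(n)=o(n)$.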

    Step 3. The new long memory sequence $\{x_t^{*}\}$ is constructed by $x_t^{*}=(1-B)^{-\hat d}y_t^{*}$. Compute the corresponding sieve bootstrap test statistic value $z_n^{*}(K)$:

    $$z_n^{*}(K):=\max_{1\le k\le K}\int_{\tau}c_0^{-2}\,n^{-2\hat d}\sum_{i=1}^{k+1}d_{i,n}\left(\bar x_i^{*}-\bar x^{*}\right)^{2}d\tau.\tag{2.6}$$

    Step 4. Repeat Steps 2–3 $B$ times to obtain the $B$ values $z_i^{*}(K)$, $i=1,2,\ldots,B$. The sieve bootstrap approximation of the $p$ value is $p^{*}=\frac{1}{B}\#\{i:z_i^{*}(K)\ge z_n(K)\}$, where $\#$ denotes the number of elements of the set and $z_n(K):=\max_{1\le k\le K}\int_{\tau}c_0^{-2}\,n^{-2\hat d}\sum_{i=1}^{k+1}d_{i,n}(\bar x_i-\bar x)^{2}d\tau$ is computed from the observed sample. We reject $H_0$ if $p^{*}<\alpha$.
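Steps 2–4 can be sketched end to end. Here the fractional difference $(1-B)^{\hat d}$ is implemented by its truncated binomial expansion, the AR order is fixed rather than BIC-selected, and `stat` stands for any scalar test statistic; all names and simplifications are ours, not the authors' code:

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1-B)^d: w_0 = 1, w_j = w_{j-1} * (j-1-d)/j."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return w

def frac_diff(x, d):
    """Apply the truncated fractional difference (1-B)^d to x (causal filter)."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

def sieve_bootstrap_pvalue(x, stat, d_hat, p=2, B=200, seed=0):
    """Steps 2-4: difference by d_hat, fit AR(p) by OLS, resample the
    centered residuals, rebuild bootstrap series, and compare statistics."""
    rng = np.random.default_rng(seed)
    y = frac_diff(x, d_hat)
    n = len(y)
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - j:n - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ beta
    resid = resid - resid.mean()
    z_obs = stat(x)
    count = 0
    for _ in range(B):
        e_star = rng.choice(resid, size=n)          # i.i.d. resampling
        y_star = np.empty(n)
        y_star[:p] = y[:p]
        for t in range(p, n):
            y_star[t] = beta[0] + beta[1:] @ y_star[t - p:t][::-1] + e_star[t]
        x_star = frac_diff(y_star, -d_hat)          # invert: (1-B)^{-d_hat}
        if stat(x_star) >= z_obs:
            count += 1
    return count / B
```

Because the truncated expansions of $(1-B)^{d}$ and $(1-B)^{-d}$ are exact convolution inverses on a finite causal sequence, applying one after the other recovers the original series.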

    In order to prove the consistency of the bootstrap approximation, the following assumption and lemmas are required [24].

    Assumption 1. Let $\xi_t$ denote the $\sigma$-algebra of events determined by $\varepsilon_s$, $s\le t$. Also, assume that $\{\varepsilon_t\}_{t\in\mathbb{Z}}$ are i.i.d. and that

    $$E[\varepsilon_t\mid\xi_{t-1}]=0,\qquad E[\varepsilon_t^2\mid\xi_{t-1}]=\sigma^2,\qquad t\in\mathbb{Z}.$$

    Furthermore, assume that $E[\varepsilon_t^4]<\infty$, $t\in\mathbb{Z}$.

    Lemma 1. Assume that the $n$ observations $X_1,X_2,\ldots,X_n$ satisfy Eq (2.1). Then, for $t\in\mathbb{Z}$, $0<d<0.5$ and $p=p(n)$, we have

    $$\frac{1}{n}\sum_{t=1}^{n}(\hat\varepsilon_t-\varepsilon_t)^2=O_{a.s.}\left\{\frac{p}{\lambda_{\min}(\Gamma_p)}\left(\frac{\log n}{n}\right)^{1-2d}\right\},\tag{3.1}$$

    where $\lambda_{\min}(\Gamma_p)^{-1}=O(p^{q})$, $q\ge 0$.

    Proof. See Lemma 2 of [24]; the proof is omitted.

    Lemma 2. Assume that the $n$ observations $X_1,X_2,\ldots,X_n$ satisfy Eq (2.1), and consider the representations $e_t^{*}=\sum_{j=0}^{\infty}\hat\varphi_j\hat\varepsilon_{t-j}^{*}$ and $e_t=\sum_{j=0}^{\infty}\varphi_j\varepsilon_{t-j}$. Then, for $t\in\mathbb{Z}$, $0<d<0.5$ and $p=p(n)$, we have

    $$\sum_{j=0}^{\infty}|\hat\varphi_j-\varphi_j|=O_{a.s.}\left\{\left(\frac{p^{5}}{\lambda_{\min}(\Gamma_p)^{2}}\right)^{1/2}\left(\frac{\log n}{n}\right)^{1-2d}\right\},\tag{3.2}$$

    where $\lambda_{\min}(\Gamma_p)^{-1}=O(p^{q})$, $q\ge 0$.

    Proof. See Lemma 3 of [24]; the proof is omitted.

    Theorem 1. Let $\eta(F_X,F_Y)$ denote Mallows' measure of the distance between two probability distributions $F_X$ and $F_Y$, defined as $\inf\{E\|X-Y\|^{2}\}^{1/2}$, where the infimum is taken over all square-integrable random variables $X$ and $Y$ in $\mathbb{R}^{m}$ with marginal distributions $F_X$ and $F_Y$. Assume that the $n$ observations $X_1,X_2,\ldots,X_n$ satisfy Eq (2.1) and that the null hypothesis $H_0$ is true. Then, as $n\to\infty$, for $0<d<0.5$ and $p=p(n)$, with probability one,

    $$\eta\left(F_{Z_n(k)},F_{Z_n^{*}(k)}\right)=O\left\{\left(\frac{p^{5}}{\lambda_{\min}(\Gamma_p)^{2}}\right)^{1/2}\left(\frac{\log n}{n}\right)^{1-2d}\right\},\tag{3.3}$$

    where $Z_n(k)$ denotes the test statistic, $Z_n^{*}(k)$ the sieve bootstrap test statistic, $F_{Z_n(k)}$ the true distribution, $F_{Z_n^{*}(k)}$ the sieve bootstrap approximating distribution, and $\lambda_{\min}(\Gamma_p)^{-1}=O(p^{q})$, $q\ge 0$.

    Proof. From the definitions $e_t^{*}=\sum_{j=0}^{\infty}\hat\varphi_j\hat\varepsilon_{t-j}^{*}$ and $e_t=\sum_{j=0}^{\infty}\varphi_j\varepsilon_{t-j}$, we have

    $$\bar X_i=\frac{1}{d_{i,n}}\sum_{t=[n\tau_{i-1}]+1}^{[n\tau_i]}X_t=\frac{1}{d_{i,n}}\sum_{t=[n\tau_{i-1}]+1}^{[n\tau_i]}\left(\mu(t)+e_t\right)$$

    and

    $$\bar X_i^{*}=\frac{1}{d_{i,n}}\sum_{t=[n\tau_{i-1}]+1}^{[n\tau_i]}X_t^{*}=\frac{1}{d_{i,n}}\sum_{t=[n\tau_{i-1}]+1}^{[n\tau_i]}\left(\mu(t)+e_t^{*}\right).\tag{3.4}$$

    It follows that

    $$\left|\bar X_i^{*}-\bar X_i\right|^{2}\le\frac{1}{d_{i,n}}\sum_{t=[n\tau_{i-1}]+1}^{[n\tau_i]}\left|X_t^{*}-X_t\right|^{2}=\frac{1}{d_{i,n}}\sum_{t=[n\tau_{i-1}]+1}^{[n\tau_i]}\left|e_t^{*}-e_t\right|^{2}.\tag{3.5}$$

    Here $Z_n^{*}(k)$ denotes the sieve bootstrap test statistic, that is,

    $$Z_n^{*}(k):=\int_{\tau}c_0^{-2}\,n^{-2\hat d}\sum_{i=1}^{k+1}d_{i,n}\left(\bar X_i^{*}-\bar X^{*}\right)^{2}d\tau.$$

    So, applying the mean value theorem, we have

    $$Z_n^{*}(k)-Z_n(k)=\sum_{t=1}^{n}\frac{\partial Z_n(k)}{\partial X_t}\left(X_t^{*}-X_t\right)=\sum_{t=1}^{n}\frac{\partial Z_n(k)}{\partial X_t}\left(e_t^{*}-e_t\right),$$

    and

    $$\left\|Z_n^{*}(k)-Z_n(k)\right\|\le\sum_{t=1}^{n}\left|\frac{\partial Z_n(k)}{\partial X_t}\right|\left|X_t^{*}-X_t\right|=\sum_{t=1}^{n}\left|\frac{\partial Z_n(k)}{\partial X_t}\right|\left|e_t^{*}-e_t\right|.\tag{3.6}$$

    By assumption, the functions $n\,\partial Z_n(k)/\partial X_t$ are continuous on $\mathbb{R}$ and hence uniformly bounded, so there is an $M>0$ with $n\left|\partial Z_n(k)/\partial X_t\right|\le M$, and $Z_n(k)$ satisfies the Lipschitz condition

    $$\left\|Z_n^{*}(k)-Z_n(k)\right\|^{2}\le\frac{1}{n}\sum_{t=1}^{n}M^{2}\left|e_t^{*}-e_t\right|^{2}.\tag{3.7}$$

    Similarly to the proof of Theorem 1 in [22], from the definition of the Mallows metric and the Cauchy-Schwarz inequality, we have

    $$\eta\left(F_{Z_n(k)},F_{Z_n^{*}(k)}\right)^{2}\le E\left[E^{*}\left[\left\|Z_n^{*}(k)-Z_n(k)\right\|^{2}\right]\right]\le E\left[E^{*}\left[\frac{1}{n}\sum_{t=1}^{n}M^{2}\left|e_t^{*}-e_t\right|^{2}\right]\right]\le\frac{M^{2}}{n}\sum_{t=1}^{n}E\left[E^{*}\left[\left(e_t^{*}-e_t\right)^{2}\right]\right],\tag{3.8}$$

    where $E^{*}$ denotes expectation with respect to the bootstrap distribution.

    Using the representations $e_t^{*}=\sum_{j=0}^{\infty}\hat\varphi_j\hat\varepsilon_{t-j}^{*}$ and $e_t=\sum_{j=0}^{\infty}\varphi_j\varepsilon_{t-j}$, it follows that

    $$e_t-e_t^{*}=\sum_{j=0}^{\infty}\left(\varphi_j-\hat\varphi_j\right)\varepsilon_{t-j}+\sum_{j=0}^{\infty}\hat\varphi_j\left(\varepsilon_{t-j}-\hat\varepsilon_{t-j}^{*}\right)=u(t)+v(t).$$

    Since $M$ is a constant, $\frac{1}{n}\sum_{t=1}^{n}E[E^{*}[M^{2}]]=M^{2}$, and we are left with the task of evaluating $E[E^{*}[(u(t)+v(t))^{2}]]$. Consider $E[E^{*}[v(t)^{2}]]$ first. By construction, the $\varepsilon_t-\hat\varepsilon_t^{*}$ are i.i.d. with respect to the observations $X_1,X_2,\ldots,X_n$. Hence

    $$E^{*}\left[v(t)^{2}\right]=\frac{1}{n}\sum_{t=1}^{n}\left(\hat\varepsilon_t-\varepsilon_t\right)^{2}\sum_{j=0}^{\infty}\left|\hat\varphi_j\right|^{2}.\tag{3.9}$$

    Using Lemma 1, we get

    $$\frac{1}{n}\sum_{t=1}^{n}\left(\hat\varepsilon_t-\varepsilon_t\right)^{2}=O_{a.s.}\left\{\frac{p}{\lambda_{\min}(\Gamma_p)}\left(\frac{\log n}{n}\right)^{1-2d}\right\}.\tag{3.10}$$

    Since $\sum_{j=0}^{\infty}|\varphi_j|<\infty$, using Lemma 2 we can conclude that

    $$\sum_{j=0}^{\infty}\left|\hat\varphi_j\right|\le\sum_{j=0}^{\infty}\left|\varphi_j\right|+\sum_{j=0}^{\infty}\left|\hat\varphi_j-\varphi_j\right|=O(1)+O\left\{\left(\frac{p^{5}}{\lambda_{\min}(\Gamma_p)^{2}}\right)^{1/2}\left(\frac{\log n}{n}\right)^{1-2d}\right\}.$$

    Thus,

    $$E\left[E^{*}\left[v(t)^{2}\right]\right]=O\left\{\frac{p}{\lambda_{\min}(\Gamma_p)}\left(\frac{\log n}{n}\right)^{1-2d}\right\}.\tag{3.11}$$

    Now consider $E[E^{*}[u(t)^{2}]]$. Since $u(t)=\sum_{j=0}^{\infty}(\varphi_j-\hat\varphi_j)\varepsilon_{t-j}$ is constant with respect to the bootstrap resampling given the observations $X_1,X_2,\ldots,X_n$, we have

    $$E\left[E^{*}\left[u(t)^{2}\right]\right]=E\left[u(t)^{2}\right].$$

    According to Poskitt (2008) [24], under $H_0$, $X_t$ has the spectral density

    $$f(\omega)=\frac{\sigma^{2}\left|\varphi(e^{i\omega})\right|^{2}}{2\pi}.$$

    Thus,

    $$E\left[u(t)^{2}\right]=\frac{\sigma^{2}}{2\pi}\int_{-\pi}^{\pi}\left|\varphi(e^{i\omega})-\hat\varphi(e^{i\omega})\right|^{2}\left|\hat\phi(e^{i\omega})\varphi(e^{i\omega})\right|^{2}d\omega.\tag{3.12}$$

    For any $\delta>0$ and $\omega\in(-\pi,\pi]$, from Lemma 4 of [24], we have

    $$\left|\hat\phi(e^{i\omega})\varphi(e^{i\omega})\right|\le 1+\left|\hat\phi(e^{i\omega})\varphi(e^{i\omega})-1\right|\le 1+\delta.$$

    Hence, by Parseval's identity,

    $$E\left[u(t)^{2}\right]\le\frac{\sigma^{2}}{2\pi}\int_{-\pi}^{\pi}\left|\varphi(e^{i\omega})-\hat\varphi(e^{i\omega})\right|^{2}(1+\delta)^{2}d\omega=\left[\sigma(1+\delta)\right]^{2}\sum_{j=0}^{\infty}\left|\hat\varphi_j-\varphi_j\right|^{2}.$$

    And by Lemma 2,

    $$\sum_{j=0}^{\infty}\left|\hat\varphi_j-\varphi_j\right|=O_{a.s.}\left\{\left(\frac{p^{5}}{\lambda_{\min}(\Gamma_p)^{2}}\right)^{1/2}\left(\frac{\log n}{n}\right)^{1-2d}\right\}.$$

    We can therefore conclude that

    $$E\left[u(t)^{2}\right]=O\left\{\frac{p^{5}}{\lambda_{\min}(\Gamma_p)^{2}}\left(\frac{\log n}{n}\right)^{2(1-2d)}\right\}.\tag{3.13}$$

    To sum up, we have

    $$\eta\left(F_{Z_n(k)},F_{Z_n^{*}(k)}\right)=O\left\{\left(\frac{p^{5}}{\lambda_{\min}(\Gamma_p)^{2}}\right)^{1/2}\left(\frac{\log n}{n}\right)^{1-2d}\right\}.$$

    This completes the proof of the theorem.

    In this section, we evaluate the performance of the test statistic through simulation. Experiments are conducted for sample sizes $n=400$ and $n=600$ with 1000 replications. Consider the following data generating process:

    $$X_t=\mu(t)+e_t,\qquad t=1,2,\ldots,n,$$
    $$\mu(t)=\begin{cases}\mu_1, & 1\le t\le n_1,\\ \mu_2, & n_1+1\le t\le n_2,\\ \ \vdots & \\ \mu_{k+1}, & n_k+1\le t\le n,\end{cases}$$

    where $e_t$ is a FARIMA(0, $d$, 0) process. The simulation studies are based on $B=5000$ and $\alpha=0.05$, with $d=0.1,0.2,0.3,0.4$. Under $H_0$, $\mu_1=\mu_2=\cdots=\mu_{k+1}=0$, and the empirical sizes of $Z_n(K)$ are summarized in Table 1. Under $H_1$, the number of change points is $K=2$ or $3$. When $K=2$, the change point pairs $(n_1,n_2)$ take three configurations, $(n/8,3n/8)$, $(3n/8,5n/8)$ and $(5n/8,7n/8)$, with $(\mu_1,\mu_2,\mu_3)=(0,1,2)$. When $K=3$, the change point triples $(n_1,n_2,n_3)$ take two configurations, $(n/8,3n/8,5n/8)$ and $(3n/8,5n/8,7n/8)$, and the mean parameters $(\mu_1,\mu_2,\mu_3,\mu_4)$ are taken as $(0,1,2,3)$ and $(0,1,-1,2)$. The empirical powers of $Z_n(K)$ are shown in Tables 2–5.
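The data generating process above can be sketched as follows, with the FARIMA(0, $d$, 0) noise built by truncated fractional integration of i.i.d. Gaussian innovations; this is an illustrative simplification, and the function names are ours:

```python
import numpy as np

def farima_0d0(n, d, rng):
    """FARIMA(0, d, 0): e = (1-B)^{-d} eps, via the truncated binomial
    expansion of the fractional integration operator."""
    eps = rng.normal(size=n)
    w = np.empty(n)                 # coefficients of (1-B)^{-d}
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 + d) / j
    return np.array([w[:t + 1][::-1] @ eps[:t + 1] for t in range(n)])

def mean_shift_series(n, d, cuts, mus, seed=0):
    """X_t = mu(t) + e_t with a step mean function mu(t) as in Eq (2.1)."""
    rng = np.random.default_rng(seed)
    e = farima_0d0(n, d, rng)
    mu = np.empty(n)
    bounds = [0] + list(cuts) + [n]
    for m, a, b in zip(mus, bounds[:-1], bounds[1:]):
        mu[a:b] = m                 # constant level on each segment
    return mu + e
```

For example, `mean_shift_series(400, 0.3, cuts=[50, 150], mus=[0, 1, 2])` mimics one of the two-change-point configurations used in the tables.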

    Table 1. The empirical size of $Z_n(K)$.

    | $d$ | $n=400$ | $n=600$ |
    | --- | --- | --- |
    | 0.1 | 0.025 | 0.037 |
    | 0.2 | 0.036 | 0.045 |
    | 0.3 | 0.048 | 0.051 |
    | 0.4 | 0.056 | 0.055 |
    Table 2. The empirical power with $n=400$.

    | $d$ | $(n/8,\,3n/8)$ | $(3n/8,\,5n/8)$ | $(5n/8,\,7n/8)$ |
    | --- | --- | --- | --- |
    | 0.1 | 0.699 | 0.721 | 0.704 |
    | 0.2 | 0.686 | 0.709 | 0.737 |
    | 0.3 | 0.715 | 0.753 | 0.828 |
    | 0.4 | 0.853 | 0.714 | 0.895 |
    Table 3. The empirical power with $n=600$.

    | $d$ | $(n/8,\,3n/8)$ | $(3n/8,\,5n/8)$ | $(5n/8,\,7n/8)$ |
    | --- | --- | --- | --- |
    | 0.1 | 0.684 | 0.643 | 0.776 |
    | 0.2 | 0.688 | 0.715 | 0.675 |
    | 0.3 | 0.722 | 0.825 | 0.870 |
    | 0.4 | 0.885 | 0.909 | 0.916 |
    Table 4. The empirical power with $n=400$.

    | $d$ | $(\mu_1,\mu_2,\mu_3,\mu_4)$ | $(n/8,\,3n/8,\,5n/8)$ | $(3n/8,\,5n/8,\,7n/8)$ |
    | --- | --- | --- | --- |
    | 0.1 | $(0,1,2,3)$ | 0.614 | 0.693 |
    | 0.1 | $(0,1,-1,2)$ | 0.559 | 0.685 |
    | 0.2 | $(0,1,2,3)$ | 0.640 | 0.708 |
    | 0.2 | $(0,1,-1,2)$ | 0.668 | 0.690 |
    | 0.3 | $(0,1,2,3)$ | 0.713 | 0.734 |
    | 0.3 | $(0,1,-1,2)$ | 0.682 | 0.699 |
    | 0.4 | $(0,1,2,3)$ | 0.775 | 0.877 |
    | 0.4 | $(0,1,-1,2)$ | 0.717 | 0.850 |
    Table 5. The empirical power with $n=600$.

    | $d$ | $(\mu_1,\mu_2,\mu_3,\mu_4)$ | $(n/8,\,3n/8,\,5n/8)$ | $(3n/8,\,5n/8,\,7n/8)$ |
    | --- | --- | --- | --- |
    | 0.1 | $(0,1,2,3)$ | 0.702 | 0.685 |
    | 0.1 | $(0,1,-1,2)$ | 0.694 | 0.696 |
    | 0.2 | $(0,1,2,3)$ | 0.698 | 0.690 |
    | 0.2 | $(0,1,-1,2)$ | 0.709 | 0.689 |
    | 0.3 | $(0,1,2,3)$ | 0.732 | 0.877 |
    | 0.3 | $(0,1,-1,2)$ | 0.786 | 0.792 |
    | 0.4 | $(0,1,2,3)$ | 0.814 | 0.901 |
    | 0.4 | $(0,1,-1,2)$ | 0.827 | 0.874 |

    Table 1 displays the empirical size of $Z_n(K)$ under $H_0$. As the sample size $n$ increases, the empirical size approaches the significance level $\alpha=0.05$.

    It can be seen from Tables 2–5 that the empirical power of $Z_n(K)$ under $H_1$ increases significantly as the sample size $n$ increases. Moreover, the larger $d$ is, the higher the empirical power.

    In this subsection, the proposed method is used to detect mean changes in the monthly average temperature of the Northern Hemisphere (1854–1989). The data come from Beran (1994) [26], and $\hat d=0.37$.

    Wang (2008) [18] suggests that there is one change point in the data. Thus, we take $K=1$ and repeat Steps 1–4 to generate 5000 values of the sieve bootstrap statistic. Figure 1 shows these values; the horizontal solid line marks the significance level $\alpha=0.05$. Since $p^{*}=\frac{1}{5000}\#\{i:z_i^{*}(K)\ge z_n(K)\}=0.0020<0.05$, we reject $H_0$.

    Figure 1.  The value of 5000 sieve bootstrap test statistics.

    Further, the least squares method of Kuan and Hsu (1998) [4] is used to estimate the location of the change point. The estimated change point is the 860th observation, corresponding to July 1925. Figure 2 shows these results: the vertical line indicates the position of the change point, and the two horizontal lines indicate the mean values before and after it. The mean before the change point is -0.34 and the mean after it is 0.02, which suggests that the monthly average temperature in the Northern Hemisphere has increased.
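For a single change point, the least squares idea reduces to choosing the split that minimizes the total within-segment sum of squares; a minimal sketch under that simplification (the function name and trimming parameter are ours, not Kuan and Hsu's code):

```python
import numpy as np

def ls_change_point(x, trim=10):
    """Single change point by least squares: pick the split index that
    minimizes the total within-segment sum of squared deviations."""
    n = len(x)
    best_k, best_sse = None, np.inf
    for k in range(trim, n - trim):
        sse = ((x[:k] - x[:k].mean()) ** 2).sum() + \
              ((x[k:] - x[k:].mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k
```

Trimming a few observations at each end avoids degenerate splits with too few points to estimate a segment mean reliably.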

    Figure 2.  Monthly average temperature data of the Northern Hemisphere from 1854 to 1989.

    This paper considers the sieve bootstrap test for multiple change points in the mean of a long memory sequence. The consistency of the sieve bootstrap approximation is proved, and the simulation and application results support the conclusions.

    This work was supported by the National Natural Science Foundation of China (Grant Nos. 11771353, 12171391) and the Natural Science Basic Research Program of Shaanxi (No. 2022JM-024).

    The authors declare that they have no conflicts of interest.



    [1] E. S. Page, Continuous inspection schemes, Biometrika, 41 (1954), 100–115. https://doi.org/10.1093/biomet/41.1-2.100 doi: 10.1093/biomet/41.1-2.100
    [2] L. Horváth, P. Kokoszka, The effect of long-range dependence on change point estimators, J. Stat. Plan. Infer., 64 (1997), 57–81. https://doi.org/10.1016/S0378-3758(96)00208-X doi: 10.1016/S0378-3758(96)00208-X
    [3] P. Kokoszka, R. Leipus, Change point in the mean of dependent observations, Stat. Probabil. Lett., 40 (1998), 385–393. https://doi.org/10.1016/S0167-7152(98)00145-X doi: 10.1016/S0167-7152(98)00145-X
    [4] C. M. Kuan, C. C. Hsu, Change-point estimation of fractionally integrated process, J. Time Ser. Anal., 19 (1998), 693–708. https://doi.org/10.1111/1467-9892.00117 doi: 10.1111/1467-9892.00117
    [5] X. Shao, A simple test of changes in mean in the possible presence of long-range dependence, J. Time Ser. Anal., 32 (2011), 598–606. https://doi.org/10.1111/j.1467-9892.2010.00717.x doi: 10.1111/j.1467-9892.2010.00717.x
    [6] J. S. Bai, Estimation of a change point in multiple regression models, Rev. Econ. Stat., 79 (1997), 551–563. https://doi.org/10.1162/003465397557132 doi: 10.1162/003465397557132
    [7] J. S. Bai, P. Perron, Estimating and testing linear models with multiple structural changes, Econometrica, 66 (1998), 47–78. https://doi.org/10.2307/2998540 doi: 10.2307/2998540
    [8] J. S. Bai, P. Perron, Critical values for multiple structural change tests, Economet. J., 6 (2003), 72–78. http://dx.doi.org/10.1111/1368-423x.00102 doi: 10.1111/1368-423x.00102
    [9] J. S. Bai, P. Perron, Multiple structural change models: A simulation analysis, J. Appl. Economet., 18 (2003), 1–22. https://doi.org/10.1017/CBO9781139164863.010 doi: 10.1017/CBO9781139164863.010
    [10] J. M. Bardet, W. C. Kengne, O. Wintenberger, Detecting multiple change-points in general causal time series using penalized quasi-likelihood, Electron. J. Stat., 6 (2010), 435–477. https://doi.org/10.48550/arXiv.1008.0054 doi: 10.48550/arXiv.1008.0054
    [11] M. Kejriwal, P. Perron, J. Zhou, Wald tests for detecting multiple structural changes in persistence, Economet. Theor., 29 (2013), 289–323. http://dx.doi.org/10.1017/S0266466612000357 doi: 10.1017/S0266466612000357
    [12] L. J. Ma, J. G. Andrew, S. Georgy, Multiple change point detection and validation in autoregressive time series data, Stat. Pap., 61 (2020), 1507–1528. http://dx.doi.org/10.1007/s00362-020-01198-w doi: 10.1007/s00362-020-01198-w
    [13] I. B. MacNeill, V. K. Jandhyala, A. Kaul, S. B. Fotopoulos, Multiple change-point models for time series, Environmetrics, 31 (2020), 1–15. https://doi.org/10.1002/env.2593 doi: 10.1002/env.2593
    [14] S. Bouzebda, A. A. Ferfache, Asymptotic properties of M-estimators based on estimating equations and censored data in semi-parametric models with multiple change points, J. Math. Anal. Appl., 497 (2021), 297–318. http://dx.doi.org/10.1016/j.jmaa.2020.124883 doi: 10.1016/j.jmaa.2020.124883
    [15] M. A. K. Noriah, A. A. A. Emad-eldin, An ANOVA-type test for multiple change points, Stat. Pap., 55 (2014), 1159–1178. http://dx.doi.org/10.1007/s00362-013-0559-1 doi: 10.1007/s00362-013-0559-1
    [16] J. Hidalgo, P. M. Robinson, Testing for structural change in a long-memory environment, J. Econometrics, 70 (1996), 159–174. http://dx.doi.org/10.1016/0304-4076(94)01687-9 doi: 10.1016/0304-4076(94)01687-9
    [17] S. Lazarova, Testing for structural change in regression with long memory processes, J. Econometrics, 129 (2005), 329–372. http://dx.doi.org/10.1016/j.jeconom.2004.09.011 doi: 10.1016/j.jeconom.2004.09.011
    [18] L. Wang, Change point estimation in long memory nonparametric models with applications, Commun. Stat.-Simul. C., 37 (2008), 48–61. http://dx.doi.org/10.1080/03610910701723583 doi: 10.1080/03610910701723583
    [19] P. Bühlmann, Sieve bootstrap for time series, Bernoulli, 3 (1997), 123–148. https://doi.org/10.2307/3318584 doi: 10.2307/3318584
    [20] A. M. Alonso, D. Peña, J. Romo, Forecasting time series with sieve bootstrap, J. Stat. Plan. Infer., 100 (2002), 1–11. https://doi.org/10.1016/s0378-3758(01)00092-1 doi: 10.1016/s0378-3758(01)00092-1
    [21] A. M. Alonso, D. Peña, J. Romo, On sieve bootstrap prediction intervals, Stat. Probabil. Lett., 65 (2003), 13–20. https://doi.org/10.1016/S0167-7152(03)00214-1 doi: 10.1016/S0167-7152(03)00214-1
    [22] A. M. Alonso, D. Peña, J. Romo, Introducing model uncertainty in time series bootstrap, Stat. Sinica, 14 (2004), 155–174. https://doi.org/10.1007/s00440-003-0309-8 doi: 10.1007/s00440-003-0309-8
    [23] P. Mukhopadhyay, V. A. Samaranayake, Prediction intervals for time series: A modified sieve bootstrap approach, Commun. Stat.-Simul. C., 39 (2010), 517–538. https://doi.org/10.1080/03610910903506521 doi: 10.1080/03610910903506521
    [24] D. S. Poskitt, Properties of the sieve bootstrap for fractionally integrated and non-invertible processes, J. Time Ser. Anal., 29 (2008), 224–250. https://doi.org/10.1111/j.1467-9892.2007.00554.x doi: 10.1111/j.1467-9892.2007.00554.x
    [25] H. E. Hurst, Long-term storage capacity of reservoirs, Trans. Am. Soc. Civ. Eng., 116 (1951), 770–799.
    [26] J. Beran, Statistics for long-memory process, New York: Chapman and Hall, 1994. http://dx.doi.org/10.2307/2983481
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
