Research article

Multi-objective biofuel feedstock optimization considering different land-cover scenarios and watershed impacts

  • This research presents a novel optimization modeling framework built on the existing Soil and Water Assessment Tool (SWAT), which can be used to optimize perennial feedstock production. The proposed multi-objective evolutionary algorithm (MOEA) uses SWAT outputs to determine the optimal spatial placement of different cropping systems, considering the environmental impacts of land-cover change and management practices. The final solution to the multi-objective problem is presented as a set of Pareto optimal solutions, from which one is suggested based on its proximity to the ideal vector [1,0,0,0]. This approach provides a well-suited method to assist researchers and stakeholders in understanding the environmental impacts of cultivating biofuel feedstocks. The application of the proposed MOEA is illustrated by analyzing SWAT's example data set for Lake Fork Watershed. Nine land-cover scenarios were evaluated in SWAT to determine their optimal spatial placement, maximizing biomass production while minimizing sediment yield, organic nitrogen yield, and organic phosphorus yield.

    Citation: Ana Cram, Jose Espiritu, Heidi Taboada, Delia J. Valles-Rosales, Young Ho Park, Efren Delgado, Jianzhong Su. Multi-objective biofuel feedstock optimization considering different land-cover scenarios and watershed impacts[J]. Clean Technologies and Recycling, 2022, 2(2): 103-118. doi: 10.3934/ctr.2022006




    Detecting a change-point in a time series has received considerable attention for a long time, originating in quality control [1]. It remains a popular field of research today due to the occurrence of sudden changes in various areas, such as financial data, signal processing, genetic engineering, and machine learning. An important issue in detecting structural breaks in time series data is identifying changes in a sequence of parameters, numerical characteristics, or distributions that alter the model, such as a shift in mean [2], a change in variance [3], a change in tail index [4], or a change in persistence [5,6].

    However, Mandelbrot [7] has pointed out that many financial asset return distributions exhibit characteristics, such as peakedness and heavy tails, that cannot be adequately described by traditional normal distributions. Heavy-tailed series are better suited for capturing the distributional features of peaks and heavy tails in financial data due to their additivity and consistency with market observations. The distributions of commodity and stock returns often exhibit heavy tails with a possible infinite variance, as subsequently pointed out by Fama [8] and Mandelbrot [9]. They have initiated an investigation into time series models in which the marginal distributions exhibit regularly varying tails. Recently, there has been increasing interest in modeling change-point phenomena using heavy-tailed noise variables.

    Estimation procedures for statistical models designed to represent data with infinite variance have received a great deal of interest. Under the assumption of heavy-tailed time series with infinite variance, Paulauskas and Paulauskas [10] developed the asymptotic theory for econometric co-integration processes. Knight [11] investigated the limiting distribution of M-estimation for autoregressive parameters in the context of an integral linear process with infinite variance data. These findings demonstrate that, for heavier-tailed sequences, M-estimation is asymptotically more robust than least squares estimation (LS-estimation). M-estimation is a widely used and important method, first introduced by Huber [12] in 1964 for the location parameter model. Since then, many statisticians have studied M-estimation, which has led to a series of useful results. Hušková [13] proposed and investigated a method based on the moving sum of M-residuals to detect parameter changes and estimate the change position. Davis [14] studied the M-estimation of general autoregressive moving average (ARMA) processes with infinite variance, where the innovation follows a non-Gaussian stable law in its domain of attraction; they derived a functional limit theorem for stochastic processes and established asymptotic properties of M-estimation. The asymptotic distribution of the M-estimation for parameters in an unstable AR(p) process was presented by Sohrabi [15], who suggested that M-estimation exhibits a higher asymptotic convergence rate than LS-estimation, since the LS-estimation of the mean satisfies $\hat\mu-\mu=O_p(T^{1/\kappa-1})$, whose consistency is destroyed if $\kappa\in(0,1)$. Knight [16] investigated the asymptotic behavior of LS- and M-estimation for autoregressive parameters in the context of a data generation process with an infinite variance random walk. The study demonstrated that certain M-estimations converge more rapidly than LS-estimation, especially for heavy-tailed distributions. Therefore, this paper employs M-estimation to estimate the parameters.

    In time series analysis, covariance, the correlation coefficient, and their sample forms are fundamental tools for studying parameter estimation, goodness of fit, change-point detection, and other related research. For instance, classic monographs [17,18] have extensively discussed these topics and introduced numerous practical applications. Furthermore, Wang et al. [19] proposed two semiparametric additive mean models for clustered panel count data and derived estimating equations for the regression parameters of interest in the proposed models. Xu et al. [20] provided a bivariate Wiener model to capture the degradation patterns of two key performance characteristics of permanent magnet brakes, and considered an objective Bayesian method to analyze degradation data with small sample sizes. Additionally, in his doctoral dissertation, Yaghi [21] proposed a novel method to detect a change in the covariance of a time series by converting the change in auto-covariance into a change in slope. Jarušková [22] investigated the equivalence of two covariance operators, utilizing the functional principal component analysis method, which can verify the equality of the largest eigenvalues and their corresponding eigenfunctions. Furthermore, Wied [23] proposed the test statistic

    $$Q_T(X,Y)=\hat D\max_{2\le j\le T}\frac{j}{\sqrt{T}}\left|\hat\rho_j-\hat\rho_T\right|$$

    to study the change in the correlation coefficient between two time series over an unknown time period, where $\hat\rho_j$ is the estimated correlation coefficient from the first $j$ observations, and $\hat D$ is the regulatory parameter associated with the long-run variance. Na [24] applied the monitoring procedure

    $$\inf\left\{k>T:\ T_k=\left\|\hat\Sigma_T^{-1/2}\left(\hat\gamma_k-\hat\gamma_T\right)\right\|\ge\frac{1}{\sqrt{T}}\,b\!\left(\frac{k}{T}\right)\right\}$$

    to identify changes in the autocorrelation function, parameter instability, and distribution shifts within the GARCH model, where $\hat\gamma$ is the autocorrelation coefficient, $\hat\Sigma$ is the long-run variance, and $b(\cdot)$ is a given boundary function. Dette [25] proposed to use

    $$\hat V_n^{(k)}(s)=\frac{1}{n}\sum_{j=1}^{\lfloor ns\rfloor}\frac{\hat e_j\hat e_{j+k}}{\hat\sigma^2(t_j)}-\frac{\lfloor ns\rfloor}{n}\cdot\frac{1}{n}\sum_{j=1}^{n}\frac{\hat e_j\hat e_{j+k}}{\hat\sigma^2(t_j)}$$

    to detect relevant changes in time series models, where $\hat e_i$ denotes the nonparametric residuals, and $\hat\sigma^2(\cdot)$ is the variance function estimate.

    This paper extends previous research to heavy-tailed innovation processes and utilizes the moving window method to convert the problem of detecting a change in the QAC into detecting a change in mean. The methods for testing mean change-points primarily include the maximum likelihood method, the least squares method, the cumulative sum (CUSUM) method, the Bayesian method, the empirical quantile method, the wavelet method, and others. Among these, the most commonly used test statistic for change-point problems is the cumulative sum statistic. Yddter [26] and Hawkins [27] utilized the maximum likelihood approach to investigate the mean change-point of a normal sequence, while Kim [28] employed Inclán's [29] cumulative sum of squares (SCUSUM) method to examine parameter changes in generalized autoregressive conditional heteroskedasticity (GARCH) models. Lee [30] utilized the residual cumulative sum method (RCUSUM) to enhance the detection of parameter changes in GARCH(1,1). Han [31] investigated the change-point estimation of the mean for heavy-tailed dependent sequences, and established the consistency of the CUSUM statistic. However, due to the non-monotonic empirical power of CUSUM statistics, ratio-typed statistics were subsequently proposed as a suitable alternative, particularly in cases of infinite variance, since they do not require any variance estimation for normalization. As a result, Horváth [32] proposed a robust ratio test statistic to test the mean change-point of weakly dependent stable distribution sequences. Jin et al. [33] applied this ratio test statistic to investigate the mean change in their research.

    The remainder of this paper is organized as follows. In Section 2, we introduce our ratio-typed test and derive its asymptotic properties under both the null hypothesis and the alternative, and we study the asymptotic behavior of the parameter estimation. Section 3 presents the Monte Carlo simulation results. Section 4 offers an empirical application example, and Section 5 concludes the paper.

    The above-mentioned methods require estimation of the long-run variance to execute the test for a change in the autocorrelation coefficient. However, since a heavy-tailed sequence has infinite variance, the results of these tests are not applicable. We use Yaghi's [21] moving window method to combine the QAC of each window into a new series and utilize a ratio test statistic to test for a mean change, which avoids the need to estimate the long-run variance. In this paper, the primitive time series $\{y_t,\ 1\le t\le T\}$ satisfies the following conditions,

    $$y_t=\mu+\beta t+\xi_t, \qquad (2.1)$$
    $$\xi_t=c_1\xi_{t-1}+c_2\xi_{t-2}+\cdots+c_p\xi_{t-p}+\eta_t, \qquad (2.2)$$

    where $\mu$ and $\beta$ are the intercept and the time trend, and $T$ is the sample size. $\xi_t$ is a $p$th-order autoregressive process. We assume throughout this paper that the error term $\eta_t$ belongs to the domain of attraction of a stable law. For any $x>0$,

    $$T\,P(|\eta_t|>a_T x)\to x^{-\kappa},$$

    where $a_T=\inf\{x:\ P(|\eta_t|>x)\le T^{-1}\}$, and

    $$\lim_{x\to\infty}\frac{P(\eta_t>x)}{P(|\eta_t|>x)}=q\in[0,1].$$

    The tail thickness of the observed data is determined by the tail index $\kappa$, which is unknown. Well-known special cases of $\eta_t$ are the Gaussian ($\kappa=2$) and Cauchy ($\kappa=1$) distributions. When $\kappa\in(0,2)$, the distribution has the moment behavior $E|\eta_t|^{\nu}=\infty$ once $\nu\ge\kappa$; thus $\eta_t$ has infinite variance, i.e., it is heavy-tailed.
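    As a quick numerical illustration (not part of the paper), noise with a prescribed tail index can be simulated and $\kappa$ recovered with a Hill-type estimator; the symmetric Pareto generator and the choice of $k=2000$ upper order statistics below are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_symmetric(kappa: float, size: int) -> np.ndarray:
    """Symmetric Pareto-tailed noise with P(|eta| > x) ~ x**(-kappa)."""
    magnitude = rng.uniform(size=size) ** (-1.0 / kappa)  # Pareto(kappa) on [1, inf)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * magnitude

def hill_estimator(x: np.ndarray, k: int) -> float:
    """Hill estimate of the tail index from the k largest |x|."""
    order = np.sort(np.abs(x))[::-1]
    return 1.0 / (np.log(order[:k]) - np.log(order[k])).mean()

eta = pareto_symmetric(kappa=1.2, size=100_000)
print(hill_estimator(eta, k=2000))  # typically close to the true kappa = 1.2
```

    The Hill estimator is one conventional choice here; any tail-index estimator consistent for distributions in the domain of attraction of a stable law would serve the same illustrative purpose.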

    The variance of the heavy-tailed sequence $y_t$ is infinite, but the variance of $T^{\kappa/2-1}y_t$ is finite.

    Proof. Multiply both sides of the model (2.1) by the scaling factor $T^{\kappa/2-1}$,

    $$T^{\kappa/2-1}y_t=T^{\kappa/2-1}\mu+T^{\kappa/2-1}\beta t+T^{\kappa/2-1}\xi_t,$$

    where $\kappa\in(0,2)$, and $T$ represents the sample size. We then prove the existence of the first and second moments of $T^{\kappa/2-1}\xi_t$. According to the definition of the stable distribution, we obtain

    $$\lim_{T\to\infty}T^{\kappa}P(X>T)=C(\sigma,\beta,\mu),$$

    where $C(\sigma,\beta,\mu)$ is bounded. This means that there is at least one sufficiently large $M$ such that, when $T>M$, $P(X>T)=O(T^{-\kappa})$.

    Note that

    $$E(T^{\kappa/2-1}\xi_t)=\int_{-\infty}^{+\infty}T^{\kappa/2-1}\xi f(\xi)\,d\xi=\int_{-\infty}^{+\infty}T^{\kappa/2-1}\xi\,dF(\xi)=T^{\kappa/2-1}\int_{-\infty}^{-M}\xi\,dF(\xi)+T^{\kappa/2-1}\int_{-M}^{M}\xi\,dF(\xi)+T^{\kappa/2-1}\int_{M}^{+\infty}\xi\,dF(\xi).$$

    Since $T^{\kappa/2-1}\to0$ as $T\to\infty$, and $\left|\int_{-M}^{M}\xi\,dF(\xi)\right|\le M\int_{-M}^{M}dF(\xi)\le M$, the second term tends to zero. Next, we only prove the third term; the proof for the first term is similar.

    Let $P(X>T)=\bar F(T)=1-F(T)$; then $\bar F(\infty)=0$, and we get

    $$T^{\kappa/2-1}\int_{M}^{+\infty}\xi\,dF(\xi)=-T^{\kappa/2-1}\int_{M}^{+\infty}\xi\,d\bar F(\xi)=-T^{\kappa/2-1}\,\xi\bar F(\xi)\Big|_{M}^{\infty}+T^{\kappa/2-1}\int_{M}^{+\infty}\bar F(\xi)\,d\xi.$$

    Because $T^{\kappa/2-1}M\bar F(M)=T^{\kappa/2-1}M^{1-\kappa}\cdot M^{\kappa}\bar F(M)\to0$, we have $T^{\kappa/2-1}\xi\bar F(\xi)\to0$ as $\xi\to\infty$. On the other hand, since $\bar F(\infty)=0$, it follows that $T^{\kappa/2-1}\int_{M}^{+\infty}\bar F(\xi)\,d\xi\to0$, and hence $E(T^{\kappa/2-1}\xi_t)=0$.

    Similarly, the existence of $E\big((T^{\kappa/2-1}\xi_t)^2\big)$ can be handled in the same way. This completes the proof that the variance of $T^{\kappa/2-1}\xi_t$ is finite.

    In this study, our primary objective is to attain a more pronounced jump in amplitude. Consequently, we concentrate on the first-order autocorrelation coefficient, whereas the second-order and higher-order autocorrelation coefficients tend to diminish or fluctuate within the interval $(-1,1)$. We define the first-order QAC of the heavy-tailed sequence as follows, $t=1,\ldots,T$,

    $$\alpha(1)=\frac{\mathrm{Cov}\left(T^{\kappa/2-1}y_t,\,T^{\kappa/2-1}y_{t-1}\right)}{\sqrt{D\left(T^{\kappa/2-1}y_t\right)D\left(T^{\kappa/2-1}y_{t-1}\right)}}.$$

    Thus, the corresponding sample correlation coefficient is defined as

    $$\hat\alpha(1)=\frac{\frac{1}{T}\sum_{k=1}^{T-1}\left(T^{\kappa/2-1}y_k-T^{\kappa/2-1}\bar y\right)\left(T^{\kappa/2-1}y_{k+1}-T^{\kappa/2-1}\bar y\right)}{\sqrt{\frac{1}{T}\sum_{k=1}^{T-1}\left(T^{\kappa/2-1}y_k-T^{\kappa/2-1}\bar y\right)^2}\sqrt{\frac{1}{T}\sum_{k=1}^{T-1}\left(T^{\kappa/2-1}y_{k+1}-T^{\kappa/2-1}\bar y\right)^2}},$$

    where $\bar y=\frac{1}{T}\sum_{i=1}^{T}y_i$.
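    A minimal sketch of the sample QAC (an illustration, not the paper's code): the common factor $T^{\kappa/2-1}$ appears in both numerator and denominator, so numerically it cancels and the value coincides with the ordinary lag-1 sample autocorrelation; the scaling matters for the theory, not for the arithmetic.

```python
import numpy as np

def qac1(y, kappa):
    """First-order sample QAC of a (possibly heavy-tailed) series.

    The T**(kappa/2 - 1) scaling guarantees finite moments in theory,
    but it is common to numerator and denominator, so the returned
    value equals the ordinary lag-1 sample autocorrelation.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    z = T ** (kappa / 2.0 - 1.0) * y
    a = z[:-1] - z.mean()
    b = z[1:] - z.mean()
    return (a * b).mean() / np.sqrt((a * a).mean() * (b * b).mean())

# Sanity check on an AR(1) series with coefficient 0.7.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
print(round(qac1(x, kappa=2.0), 1))  # about 0.7
```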

    Simulation experiments show that the presence of intercept and slope parameters has a significant impact on the behavior of the QAC. It is therefore crucial to detrend the intercept and slope in the model, so that the noticeable variations in the QAC reflect changes in the regression coefficients of the AR(p) series. We now use a simple example with the moving window method to simulate this phenomenon. Suppose the sequence follows an AR(1) model with a change in the autoregressive parameter,

    $$y_t=1+0.2t+\xi_t, \qquad (2.3)$$
    $$\xi_t=0.1\,\xi_{t-1}1\{t\le[T\tau]\}+0.7\,\xi_{t-1}1\{t>[T\tau]\}+\eta_t. \qquad (2.4)$$

    Now, we consider a window width $m=10$, lag number $d=1$, sample size $T=1200$, change position $\tau=0.5$, and tail index $\kappa=1.2$. Since each window has a width of $m$ and a lag of $d$, the number of windows is $n=\lfloor(T-m)/d\rfloor+1$; with $d=1$ this gives $n=T-m+1$ sets of sub-samples, namely, $\{y_t,t=1,\ldots,m\}$, $\{y_t,t=2,\ldots,m+1\}$, $\ldots$, $\{y_t,t=T-m+1,\ldots,T\}$.

    Figure 1(a) indicates that the change in $\hat\alpha_j(1)$ based on the sub-sample $\{y_j,\ldots,y_{j+m-1}\}$, $j=1,\ldots,n$, is not obvious, where

    $$\alpha_j(1)=\frac{\frac{1}{m}\sum_{k=j}^{j+m-1}\left(T^{\kappa/2-1}y_k-T^{\kappa/2-1}\bar y_j\right)\left(T^{\kappa/2-1}y_{k+1}-T^{\kappa/2-1}\bar y_j\right)}{\sqrt{\frac{1}{m}\sum_{k=j}^{j+m-1}\left(T^{\kappa/2-1}y_k-T^{\kappa/2-1}\bar y_j\right)^2}\sqrt{\frac{1}{m}\sum_{k=j}^{j+m-1}\left(T^{\kappa/2-1}y_{k+1}-T^{\kappa/2-1}\bar y_j\right)^2}},$$

    and $\bar y_j=\frac{1}{m}\sum_{t=j}^{j+m-1}y_t$. Under some regular conditions, it has been proven that $\alpha_j(1)$ converges to $\alpha(1)$ in probability. On the other hand, if there are consistent estimates of the parameters $\mu$ and $\beta$, we can form the residuals $\{\hat\xi_j,\ldots,\hat\xi_{j+m-1}\}$, $j=1,\ldots,n$, where $\hat\xi_t=y_t-\hat\mu-\hat\beta t$. This means that the intercept and slope have been detrended. Thus, Figure 1(b) shows that $\hat\alpha_j(1)$ based on the sub-sample $\{\hat\xi_j,\ldots,\hat\xi_{j+m-1}\}$, $j=1,\ldots,n$, has more pronounced fluctuation, where

    $$\hat\alpha_j(1)=\frac{\frac{1}{m}\sum_{k=j}^{j+m-1}\left(T^{\kappa/2-1}\hat\xi_k-T^{\kappa/2-1}\bar{\hat\xi}_j\right)\left(T^{\kappa/2-1}\hat\xi_{k+1}-T^{\kappa/2-1}\bar{\hat\xi}_j\right)}{\sqrt{\frac{1}{m}\sum_{k=j}^{j+m-1}\left(T^{\kappa/2-1}\hat\xi_k-T^{\kappa/2-1}\bar{\hat\xi}_j\right)^2}\sqrt{\frac{1}{m}\sum_{k=j}^{j+m-1}\left(T^{\kappa/2-1}\hat\xi_{k+1}-T^{\kappa/2-1}\bar{\hat\xi}_j\right)^2}}, \qquad (2.5)$$

    and $\bar{\hat\xi}_j=\frac{1}{m}\sum_{t=j}^{j+m-1}\hat\xi_t$.

    Figure 1.  QAC computed from two series: $y_t$ and $y_t-\hat\mu-\hat\beta t$.
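    The mechanics behind Figure 1 can be sketched as follows; Gaussian innovations and a least-squares detrend stand in for the paper's heavy-tailed $\eta_t$ ($\kappa=1.2$) and M-estimation, so this is only an assumption-laden illustration of the moving-window construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate model (2.3)-(2.4); Gaussian innovations stand in for the
# paper's heavy-tailed eta_t.
T, tau, m, d = 1200, 0.5, 10, 1
xi = np.zeros(T)
for t in range(1, T):
    c = 0.1 if t <= int(T * tau) else 0.7
    xi[t] = c * xi[t - 1] + rng.standard_normal()
t_idx = np.arange(1, T + 1)
y = 1.0 + 0.2 * t_idx + xi

# Detrend (least squares here; the paper uses M-estimation) ...
beta_hat, mu_hat = np.polyfit(t_idx, y, 1)
resid = y - mu_hat - beta_hat * t_idx

# ... then compute the lag-1 autocorrelation within each moving window.
n = (T - m) // d + 1
def acf1(w):
    w = w - w.mean()
    return (w[:-1] * w[1:]).mean() / (w * w).mean()

omega = np.array([acf1(resid[j * d : j * d + m]) for j in range(n)])
print(omega[: n // 2].mean() < omega[n // 2 :].mean())  # post-change windows score higher
```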

    Moreover, it is worth noting that

    $$\alpha(1)=\frac{\mathrm{Cov}\left(T^{\kappa/2-1}y_t,\,T^{\kappa/2-1}y_{t-1}\right)}{\sqrt{D\left(T^{\kappa/2-1}y_t\right)D\left(T^{\kappa/2-1}y_{t-1}\right)}}=\frac{\mathrm{Cov}\left(T^{\kappa/2-1}\xi_t,\,T^{\kappa/2-1}\xi_{t-1}\right)}{\sqrt{D\left(T^{\kappa/2-1}\xi_t\right)D\left(T^{\kappa/2-1}\xi_{t-1}\right)}}. \qquad (2.6)$$

    Because $\hat\alpha_j(1)$ converges to $\alpha(1)$ in probability, we focus on an autocorrelation coefficient built from $\hat\xi_t$, not $y_t$. As shown in Figure 1, if a change in mean is observed in the new series $\hat\alpha_1(1),\ldots,\hat\alpha_n(1)$, it can be concluded that the QAC of the primitive sequence $y_t$ has changed. The problem of testing the null hypothesis of no QAC change can be illustrated as

    $$H_0:\ \hat\alpha_1(1)=\hat\alpha_2(1)=\cdots=\hat\alpha_n(1)$$

    against the alternative hypothesis

    $$H_1:\ \hat\alpha_1(1)=\hat\alpha_2(1)=\cdots=\hat\alpha_{[n\tau]}(1)=\gamma_1\neq\gamma_2=\hat\alpha_{[n\tau]+1}(1)=\hat\alpha_{[n\tau]+2}(1)=\cdots=\hat\alpha_n(1).$$

    Therefore, we convert the change in the QAC into a change in mean.

    Prior to deriving the asymptotic properties of the ratio-typed test below, it is necessary to establish a lemma proving the consistency of the M-estimations of both the intercept and slope parameters under the null hypothesis for all $\kappa\in(0,2]$. To estimate the parameters $\mu$ and $\beta$ by M-estimation, $\hat\mu$ and $\hat\beta$ are defined as solutions of the minimization problem

    $$\underset{\mu,\beta\in\mathbb{R}}{\arg\min}\sum_{t=1}^{T}\rho(y_t-\mu-\beta t), \qquad (2.7)$$

    where $\rho$ is a convex loss function. The estimations in equation (2.7) are sometimes also defined as the solution of the following equation,

    $$\sum_{t=1}^{T}\phi(y_t-\mu-\beta t)=0. \qquad (2.8)$$

    Throughout this paper, we will make the following assumptions about the loss function $\rho$ and the distribution of the random variables $\xi_1,\ldots,\xi_T$.

    Assumption 1. The distribution $F_\xi$ of the random error term $\xi_t$ is in the domain of attraction of a stable law with index $\kappa\in(0,2)$, and $\eta_t$ is an independent and identically distributed (i.i.d.) sequence.

    (1) If $\kappa>1$, then $E(\xi_t)=0$;

    (2) if $\kappa<1$, then $\xi_t$ has a symmetric distribution.

    Assumption 2. Let $\rho$ be a convex and twice differentiable function, and take $\rho'=\phi$, where $\phi$ is Lipschitz continuous; that is, there is a real number $K>0$ such that $|\phi(x)-\phi(y)|\le K|x-y|$ for all $x$ and $y$.

    Assumption 3. Finally, we will make the following assumptions about the random variable $\phi(\xi_t)$:

    (1) $E(\phi(\xi_t))=0$;

    (2) $0<\sigma_\xi^2(\phi)=E\phi^2(\xi_t)+2\sum_{i=1}^{\infty}E\phi(\xi_t)\phi(\xi_{t+i})<\infty$.

    Assumptions 1, 2, and 3 are standard conditions for deriving the asymptotic properties based on M-estimation, although extra moment conditions are imposed on $\phi$ and $\phi'$. Note that $\rho$ is an almost everywhere differentiable convex function, which ensures the uniqueness of the solution. Although $\rho'$ may not exist in some cases, the asymptotic theory of M-estimation can still be established under certain additional conditions. This paper only considers situations where $\rho'$ exists. We mainly consider two types of estimation methods, $\phi(x)=x$ (LS-estimation) and $\phi(x)=xI\{|x|\le K\}+K\,\mathrm{sgn}(x)I\{|x|>K\}$ (M-estimation).
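    The two score functions can be written down directly. The paper leaves the truncation constant unspecified, so the value $K=1.345$ below (a conventional choice in robust statistics) is an assumption of this sketch.

```python
import numpy as np

def phi_ls(x):
    """LS score: phi(x) = x."""
    return np.asarray(x, dtype=float)

def phi_huber(x, K=1.345):
    """Huber score: phi(x) = x for |x| <= K and K*sgn(x) otherwise.

    K = 1.345 is an assumed (conventional) truncation constant.
    """
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= K, x, K * np.sign(x))

print(phi_huber(np.array([-3.0, 0.5, 2.0])))  # values clipped to [-1.345, 1.345]
```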

    In view of the moving window method for fixed $d$, we can rewrite formula (2.5) and obtain the following first-order QAC $\hat\alpha_j(1)$, $j=1,\ldots,n$,

    $$\hat\alpha_j(1)=\frac{\frac{1}{m}\sum_{k=d(j-1)+1}^{d(j-1)+m}\left(T^{\kappa/2-1}\hat\xi_k-T^{\kappa/2-1}\bar{\hat\xi}_j\right)\left(T^{\kappa/2-1}\hat\xi_{k+1}-T^{\kappa/2-1}\bar{\hat\xi}_j\right)}{\sqrt{\frac{1}{m}\sum_{k=d(j-1)+1}^{d(j-1)+m}\left(T^{\kappa/2-1}\hat\xi_k-T^{\kappa/2-1}\bar{\hat\xi}_j\right)^2}\sqrt{\frac{1}{m}\sum_{k=d(j-1)+1}^{d(j-1)+m}\left(T^{\kappa/2-1}\hat\xi_{k+1}-T^{\kappa/2-1}\bar{\hat\xi}_j\right)^2}}, \qquad (2.9)$$

    where $\bar{\hat\xi}_j=\frac{1}{m}\sum_{i=d(j-1)+1}^{d(j-1)+m}\hat\xi_i$ and $\hat\xi_t=y_t-\hat\mu-\hat\beta t$. The QACs of all windows based on the residuals $\hat\xi_k$ constitute a new sequence $\{\hat\alpha_j(1),j=1,2,\ldots,n\}$. Recall that $n=\lfloor(T-m)/d\rfloor+1$ represents the number of windows. Here, $\to_d$ and $\to_p$ denote convergence in distribution and convergence in probability, respectively.

    Lemma 2.1. If Assumptions 1–3 hold, $\hat\mu$ and $\hat\beta$ minimize formula (2.7), and the null hypothesis holds, then

    $$T^{3/2}(\hat\beta-\beta)\to_d\frac{6\sigma_\xi(\phi)W(1)-12\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))},\qquad T^{1/2}(\hat\mu-\mu)\to_d\frac{-2\sigma_\xi(\phi)W(1)+6\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))},$$

    where W() is a Wiener process.

    Proof. The proof is essentially the same as that in Knight [16]. Define the process

    $$Z(u,v)=\sum_{t=1}^{T}\left(\rho\left(\xi_t-T^{-3/2}ut-T^{-1/2}v\right)-\rho(\xi_t)\right),$$

    with $(u,v)=\left(T^{3/2}(\hat\beta-\beta),\,T^{1/2}(\hat\mu-\mu)\right)$ minimizing $Z(u,v)$. By a Taylor series expansion of each summand of $Z$ around $u=0$, $v=0$, then

    $$Z(u,v)=-uT^{-3/2}\sum_{t=1}^{T}t\phi(\xi_t)-vT^{-1/2}\sum_{t=1}^{T}\phi(\xi_t)+\frac{1}{2}u^2T^{-3}\sum_{t=1}^{T}t^2\phi'(\xi_t^*)+\frac{1}{2}v^2T^{-1}\sum_{t=1}^{T}\phi'(\xi_t^*)+uvT^{-2}\sum_{t=1}^{T}t\phi'(\xi_t^*)=\sum_{i=1}^{5}I_i, \qquad (2.10)$$

    where $\xi_t^*\in\left(\xi_t,\,\xi_t\pm\left|T^{-3/2}ut+T^{-1/2}v\right|\right)$. Using the Lipschitz continuity of $\phi'$, then

    $$\left|\phi'(\xi_t^*)-\phi'(\xi_t)\right|\le C\left|T^{-3/2}ut+T^{-1/2}v\right|$$

    with bounded C, and we get

    $$T^{-1}\sum_{t=1}^{T}\left|\phi'(\xi_t^*)-\phi'(\xi_t)\right|\le C\left|T^{-3/2}ut+T^{-1/2}v\right|\to0.$$

    Thus, $\phi'(\xi_t^*)$ can be asymptotically replaced by $\phi'(\xi_t)$.

    Under Assumptions 1–3, since ϕ(ξt) satisfies the central limit theorem, it yields

    $$\frac{1}{\sqrt{T}}\sum_{t=1}^{[Tr]}\phi(\xi_t)\to_d\sigma_\xi(\phi)W(r), \qquad (2.11)$$

    where $\sigma_\xi^2(\phi)=E\phi^2(\xi_t)+2\sum_{i=1}^{\infty}E\phi(\xi_t)\phi(\xi_{t+i})$. By some algebraic derivation, we have

    $$I_1=-uT^{-3/2}\sum_{t=1}^{T}t\phi(\xi_t)\to_d -u\sigma_\xi(\phi)\left(W(1)-\int_0^1 rW(r)\,dr\right), \qquad (2.12)$$

    and

    $$I_2=-vT^{-1/2}\sum_{t=1}^{T}\phi(\xi_t)\to_d -v\sigma_\xi(\phi)W(1). \qquad (2.13)$$

    By the law of large numbers, we get

    $$\sup_{0\le r\le1}\frac{1}{T}\sum_{t=1}^{[Tr]}\left|\phi'(\xi_t)-E\phi'(\xi_t)\right|\to_p0.$$

    This shows that $\phi'(\xi_t)$ can be asymptotically replaced by $E\phi'(\xi_t)$, resulting in

    $$I_3=\frac{1}{2}u^2T^{-3}\sum_{t=1}^{T}t^2\phi'(\xi_t^*)\to_p\frac{1}{6}u^2E(\phi'(\xi_t)), \qquad (2.14)$$
    $$I_4=\frac{1}{2}v^2T^{-1}\sum_{t=1}^{T}\phi'(\xi_t^*)\to_p\frac{1}{2}v^2E(\phi'(\xi_t)), \qquad (2.15)$$

    and

    $$I_5=uvT^{-2}\sum_{t=1}^{T}t\phi'(\xi_t^*)\to_p\frac{1}{2}uvE(\phi'(\xi_t)). \qquad (2.16)$$

    Together with (2.12)–(2.16), we can rewrite (2.10) as

    $$Z(u,v)\to_d -u\sigma_\xi(\phi)\left(W(1)-\int_0^1 rW(r)\,dr\right)-v\sigma_\xi(\phi)W(1)+\frac{1}{6}u^2E(\phi'(\xi_t))+\frac{1}{2}v^2E(\phi'(\xi_t))+\frac{1}{2}uvE(\phi'(\xi_t)),$$

    and take the partial derivatives with respect to $u$ and $v$:

    $$\begin{cases}\dfrac{\partial Z(u,v)}{\partial u}=-\sigma_\xi(\phi)\left(W(1)-\int_0^1 rW(r)\,dr\right)+\dfrac{1}{3}uE(\phi'(\xi_t))+\dfrac{1}{2}vE(\phi'(\xi_t))=0,\\[2mm]\dfrac{\partial Z(u,v)}{\partial v}=-\sigma_\xi(\phi)W(1)+vE(\phi'(\xi_t))+\dfrac{1}{2}uE(\phi'(\xi_t))=0.\end{cases}$$

    The solution is

    $$\begin{cases}u=\dfrac{6\sigma_\xi(\phi)W(1)-12\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))},\\[2mm]v=\dfrac{-2\sigma_\xi(\phi)W(1)+6\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))}.\end{cases}$$

    This, in turn, implies that

    $$\begin{pmatrix}T^{3/2}(\hat\beta-\beta)\\ T^{1/2}(\hat\mu-\mu)\end{pmatrix}\to_d\begin{pmatrix}\dfrac{6\sigma_\xi(\phi)W(1)-12\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))}\\[2mm]\dfrac{-2\sigma_\xi(\phi)W(1)+6\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))}\end{pmatrix}.$$

    Therefore, the proof is complete.

    Lemma 2.1 shows that the convergence rates for $\hat\mu$ and $\hat\beta$ based on M-estimation are $T^{1/2}$ and $T^{3/2}$ in the heavy-tailed environment, which are the same as those in the case of a Gaussian process. Furthermore, the simulation study reveals that the parameter estimations based on M-estimation are consistent and more robust than those based on the LS-method. The QAC plays a crucial role in time series analysis by measuring the degree to which successive observations are correlated with each other over time. The consistency of these estimations ensures that our comprehension of such correlations remains robust and trustworthy. Consequently, we can present the ratio-typed test to detect a change in mean and discuss its asymptotic properties.
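    A small numerical sketch of the trend M-fit is given below. It is illustrative only: Student-t noise with 1.2 degrees of freedom stands in for the stable-law errors, $K=1.345$ is an assumed truncation constant, and a generic convex optimizer replaces any purpose-built routine.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def huber_loss(r, K=1.345):
    """Convex Huber loss rho; its derivative is the truncated score phi."""
    a = np.abs(r)
    return np.where(a <= K, 0.5 * r ** 2, K * a - 0.5 * K ** 2)

def m_fit_trend(y, K=1.345):
    """M-estimates of (mu, beta) in y_t = mu + beta*t + xi_t."""
    t = np.arange(1, len(y) + 1, dtype=float)
    obj = lambda p: huber_loss(y - p[0] - p[1] * t, K).sum()
    b0, m0 = np.polyfit(t, y, 1)        # LS fit as a starting point
    return minimize(obj, x0=[m0, b0], method="Nelder-Mead").x

T = 2000
t = np.arange(1, T + 1)
y = 1.0 + 0.2 * t + rng.standard_t(df=1.2, size=T)  # heavy-tailed errors
mu_hat, beta_hat = m_fit_trend(y)
print(round(float(beta_hat), 2))  # near the true slope 0.2
```

    Despite occasional huge outliers in the noise, the Huber fit recovers the slope accurately, consistent with the robustness claim of Lemma 2.1.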

    For the sake of convenience, we define $\omega_j=\hat\alpha_j(1)$. Without loss of generality, we suppose $\omega_t$ follows an AR(p) process with a drift, that is, $\omega_t=\gamma+\gamma_1\omega_{t-1}+\cdots+\gamma_p\omega_{t-p}+\varepsilon_t$. Using a t-test to fit the QAC sequence, it is found that when $p=1$, the P-value is 0.0086, which is greater than 0.001, whereas when $p=0$, the P-value is $6\times10^{-213}$, which is much smaller than 0.001. The smaller the P-value, the better the goodness of fit. This means that, under the null hypothesis of no change, the QAC series can be fitted by the mean model

    $$\omega_t=\gamma+\varepsilon_t,\quad t=1,\ldots,n, \qquad (2.17)$$

    where $\varepsilon_t$ is assumed to satisfy Assumption 1. However, under the alternative hypothesis, the QAC series follows the mean model with a change,

    $$\omega_t=\gamma_1 1\{t\le[n\tau]\}+\gamma_2 1\{t>[n\tau]\}+\varepsilon_t, \qquad (2.18)$$

    where $\gamma_1\neq\gamma_2$ and $\tau$ is an unknown change-point.

    Hence, testing for a change in the QAC of the original sequence $\{y_t,t=1,\ldots,T\}$ can be converted into mean change detection for the sequence $\{\omega_1,\omega_2,\ldots,\omega_n\}$. The mean change test has been extensively studied and is relatively mature. Because the innovations $\{\varepsilon_t,t=1,\ldots,n\}$ of the sequence $\{\omega_1,\omega_2,\ldots,\omega_n\}$ may follow a heavy-tailed distribution, this paper studies test procedures that utilize the ratio-typed test based on M-residuals to detect changes in mean.

    Our inspiration derives from Horváth's (2008) [32] description of ratio-typed tests and their robustness, and from the framework of test statistics in Peštová and Pešta (2018) [34]. The ratio-typed test based on M-residuals is expressed as

    $$V_n=\max_{0\le s\le1}V_n(s),$$
    $$V_n(s)=\frac{\left|\sum_{t=1}^{[ns]}\phi\left(\omega_t-\hat\gamma(\phi)\right)\right|}{\max_{1\le[nv]\le[ns]}\left|\sum_{i=1}^{[nv]}\phi\left(\omega_i-\hat\gamma_1(\phi)\right)\right|+\max_{[ns]+1\le[nv]\le n}\left|\sum_{i=[nv]}^{n}\phi\left(\omega_i-\hat\gamma_2(\phi)\right)\right|}.$$

    The score function $\phi$ is chosen in two forms. If $\phi(x)=x$, $x\in\mathbb{R}$, these procedures reduce to classic LS procedures (Csörgő and Horváth (1997) [35]). In a similar vein, these procedures are simplified to the Huber truncation process if $\phi(x)=xI\{|x|\le K\}+K\,\mathrm{sgn}(x)I\{|x|>K\}$. $\hat\gamma(\phi)$ is the M-estimation of the parameter $\gamma$ generated by a loss function $\rho$ with the sequence $\{\omega_1,\omega_2,\ldots,\omega_n\}$, i.e., it is defined as a solution of the minimization problem

    $$\underset{\gamma\in\mathbb{R}}{\arg\min}\sum_{t=1}^{n}\rho(\omega_t-\gamma), \qquad (2.19)$$

    where $\rho$ is a convex loss function. Sometimes, the estimation in (2.19) is also defined as a solution of the following equation:

    $$\sum_{t=1}^{n}\phi(\omega_t-\gamma)=0, \qquad (2.20)$$

    where ρ and ϕ satisfy Assumption 2.

    Analogous to $\hat\gamma(\phi)$, the M-estimates $\hat\gamma_1(\phi)$ and $\hat\gamma_2(\phi)$ are computed, respectively, from the sequences $\omega_1,\ldots,\omega_{[ns]}$ and $\omega_{[ns]+1},\ldots,\omega_n$. They are solutions of these equations:

    $$\sum_{t=1}^{[ns]}\phi(\omega_t-\gamma)=0, \qquad (2.21)$$

    and

    $$\sum_{t=[ns]+1}^{n}\phi(\omega_t-\gamma)=0. \qquad (2.22)$$
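    With the LS score $\phi(x)=x$, the M-estimates $\hat\gamma(\phi)$, $\hat\gamma_1(\phi)$, $\hat\gamma_2(\phi)$ reduce to sample means, and $V_n$ can be sketched directly. This is an illustration under that special choice of score, not the paper's implementation; the demo series and shift size are arbitrary.

```python
import numpy as np

def ratio_test_stat(omega):
    """Ratio-typed V_n with the LS score phi(x) = x, for which the
    M-estimates gamma-hat, gamma1-hat, gamma2-hat are sample means."""
    omega = np.asarray(omega, dtype=float)
    n = len(omega)
    full_dev = np.cumsum(omega - omega.mean())
    best = 0.0
    for s in range(2, n - 1):                     # candidate split [ns] = s
        head, tail = omega[:s], omega[s:]
        num = abs(full_dev[s - 1])                # numerator at [ns] = s
        den1 = np.abs(np.cumsum(head - head.mean())).max()
        den2 = np.abs(np.cumsum((tail - tail.mean())[::-1])).max()
        if den1 + den2 > 0:
            best = max(best, num / (den1 + den2))
    return best

rng = np.random.default_rng(0)
eps = rng.standard_normal(300)
no_change = 0.3 + eps
with_change = no_change + 5.0 * (np.arange(300) >= 150)
print(ratio_test_stat(no_change) < ratio_test_stat(with_change))  # shifted series scores higher
```

    Note that no variance estimate appears anywhere: the denominator plays the role of a self-normalizer, which is exactly why the ratio form suits infinite-variance innovations.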

    Prior to deriving the asymptotic properties of the proposed ratio-typed test, we assume that $\phi(\varepsilon_t)$ satisfies Assumption 3, and give a lemma stating that, under the null hypothesis, the M-estimation is consistent for all $\kappa\in(0,2]$.

    Lemma 2.2. If Assumptions 1–3 hold for $\varepsilon_t$ and $\phi(\varepsilon_t)$, under the null hypothesis, we have

    $$n^{1/2}\left(\hat\gamma(\phi)-\gamma\right)\to_d\frac{\sigma_\varepsilon(\phi)W(1)}{E(\phi'(\varepsilon_t))}.$$

    Similarly

    $$n^{1/2}\left(\hat\gamma_1(\phi)-\gamma\right)\to_d\frac{\sigma_\varepsilon(\phi)W(s)}{sE(\phi'(\varepsilon_t))},$$
    $$n^{1/2}\left(\hat\gamma_2(\phi)-\gamma\right)\to_d\frac{\sigma_\varepsilon(\phi)\left(W(1)-W(s)\right)}{(1-s)E(\phi'(\varepsilon_t))},$$

    where $\sigma_\varepsilon^2(\phi)=E\phi^2(\varepsilon_t)+2\sum_{i=1}^{\infty}E\phi(\varepsilon_t)\phi(\varepsilon_{t+i})$ and $W(\cdot)$ is a Wiener process.

    Proof. Define the process

    $$Z(u)=\sum_{t=1}^{n}\left\{\rho\left(\varepsilon_t+un^{-1/2}\right)-\rho(\varepsilon_t)\right\},$$

    with $u=n^{1/2}(\gamma-\hat\gamma(\phi))$. By a Taylor series expansion of each summand of $Z$ around $u=0$, then

    $$Z(u)=un^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)+\frac{1}{2}u^2n^{-1}\sum_{t=1}^{n}\phi'(\varepsilon_t^*), \qquad (2.23)$$

    where $\varepsilon_t^*\in\left(\varepsilon_t,\,\varepsilon_t\pm|un^{-1/2}|\right)$. Using the Lipschitz continuity of $\phi'$, then $\left|\phi'(\varepsilon_t^*)-\phi'(\varepsilon_t)\right|\le C|un^{-1/2}|$ with bounded $C$, and we get

    $$n^{-1}\sum_{t=1}^{n}\left|\phi'(\varepsilon_t^*)-\phi'(\varepsilon_t)\right|\le C|un^{-1/2}|\to0 \qquad (2.24)$$

    uniformly over $u$ in compact sets. By the law of large numbers, then

    $$\sup_{0\le s\le1}\frac{1}{n}\sum_{t=1}^{[ns]}\left|\phi'(\varepsilon_t)-E\phi'(\varepsilon_t)\right|\to_p0. \qquad (2.25)$$

    Combining $n^{-1/2}\sum_{t=1}^{[ns]}\phi(\varepsilon_t)\to_d\sigma_\varepsilon(\phi)W(s)$ with (2.24) and (2.25), it yields

    $$Z(u)=un^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)+\frac{1}{2}u^2n^{-1}\sum_{t=1}^{n}\phi'(\varepsilon_t^*)\to_d u\sigma_\varepsilon(\phi)W(1)+\frac{1}{2}u^2E(\phi'(\varepsilon_t)). \qquad (2.26)$$

    Minimizing (2.26) by setting its derivative to zero, we have

    $$Z'(u)=n^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)+u\,n^{-1}\sum_{t=1}^{n}E(\phi'(\varepsilon_t))=0,$$

    so it turns out

    $$n^{1/2}\left(\hat\gamma(\phi)-\gamma\right)=-u=\frac{n^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)}{n^{-1}\sum_{t=1}^{n}E(\phi'(\varepsilon_t))}\to_d\frac{\sigma_\varepsilon(\phi)W(1)}{E(\phi'(\varepsilon_t))}.$$

    Similarly, the asymptotic distributions of n1/2(ˆγ1(ϕ)γ) and n1/2(ˆγ2(ϕ)γ) can be obtained in the same way. Therefore, the proof is complete.

    Subsequently, we examine the performance of the ratio-typed test in the presence of a mean change. The ensuing lemma serves as a crucial tool for achieving the desired outcomes under the alternative hypothesis.

    Lemma 2.3. If Assumptions 1–3 hold for $\varepsilon_t$ and $\phi(\varepsilon_t)$, then under the alternative hypothesis we have:

    (i) for $i=1,2$, $\hat\gamma(\phi)-\gamma_i=O_p(1)$ holds;

    (ii) let $\theta>0$ be a constant; then

    if $s\in(0,\tau-\theta]$, we have $\hat\gamma_1(\phi)-\gamma_1=O_p(n^{-1/2})$ and $\hat\gamma_2(\phi)-\gamma_2=O_p(1)$;

    if $s\in[\tau+\theta,1)$, we have $\hat\gamma_1(\phi)-\gamma_1=O_p(1)$ and $\hat\gamma_2(\phi)-\gamma_2=O_p(n^{-1/2})$;

    (iii) if $s=\tau$, $n^{1/2}(\hat\gamma_1(\phi)-\gamma_1)$ and $n^{1/2}(\hat\gamma_2(\phi)-\gamma_2)$ have the same asymptotic distributions as those in Lemma 2.2.

    Proof. (i) To confirm $\hat\gamma(\phi)-\gamma_1=O_p(1)$, it is sufficient to prove that $\hat\gamma(\phi)-\gamma_1$ converges to a nonzero limit in probability. Suppose, for contradiction, that $n^{1/2}(\hat\gamma(\phi)-\gamma_1)=O_p(1)$. Under the alternative hypothesis, define the process

    $$Q_n(u)=\sum_{t=1}^{[n\tau]}\left\{\rho\left(\varepsilon_t+n^{-1/2}u\right)-\rho(\varepsilon_t)\right\}+\sum_{t=[n\tau]+1}^{n}\left\{\rho\left(\varepsilon_t+(\gamma_2-\gamma_1)+n^{-1/2}u\right)-\rho(\varepsilon_t)\right\}=Q_{n,1}(u)+Q_{n,2}(u), \qquad (2.27)$$

    where $u=n^{1/2}(\gamma_1-\hat\gamma(\phi))$.

    By a Taylor series expansion of each summand of Qn,1(u), it yields

    $$Q_{n,1}(u)=n^{-1/2}u\sum_{t=1}^{[n\tau]}\phi(\varepsilon_t)+\frac{1}{2}n^{-1}u^2\sum_{t=1}^{[n\tau]}\phi'(\tilde\varepsilon_t), \qquad (2.28)$$

    where $\tilde\varepsilon_t\in\left(\varepsilon_t,\,\varepsilon_t\pm n^{-1/2}|u|\right)$. Similarly, we can get a Taylor series expansion of each summand of $Q_{n,2}(u)$ as follows,

    $$Q_{n,2}(u)=\left(n^{-1/2}u+(\gamma_2-\gamma_1)\right)\sum_{t=[n\tau]+1}^{n}\phi(\varepsilon_t)+\frac{1}{2}\left(n^{-1/2}u+(\gamma_2-\gamma_1)\right)^2\sum_{t=[n\tau]+1}^{n}\phi'(\tilde{\tilde\varepsilon}_t), \qquad (2.29)$$

    where $\tilde{\tilde\varepsilon}_t\in\left(\varepsilon_t,\,\varepsilon_t\pm\left|n^{-1/2}u+(\gamma_2-\gamma_1)\right|\right)$.

    In view of (2.28) and (2.29), setting the derivative of $Q_n(u)$ to zero, it turns out that

    $$Q_n'(u)=n^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)+u\,n^{-1}\sum_{t=1}^{[n\tau]}\phi'(\tilde\varepsilon_t)+\left(u\,n^{-1}+n^{-1/2}(\gamma_2-\gamma_1)\right)\sum_{t=[n\tau]+1}^{n}\phi'(\tilde{\tilde\varepsilon}_t)+o_p(1)=0. \qquad (2.30)$$

    Using the Lipschitz continuity of ϕ, we have

    $$n^{-1}\sum_{t=1}^{[n\tau]}\left|\phi'(\varepsilon_t)-\phi'(\tilde\varepsilon_t)\right|\le n^{-1/2}C\tau|u|\to0.$$

    However, because $\gamma_2\neq\gamma_1$, then

    $$n^{-1}\sum_{t=[n\tau]+1}^{n}\left|\phi'(\varepsilon_t)-\phi'(\tilde{\tilde\varepsilon}_t)\right|\le C(1-\tau)\left|n^{-1/2}u+(\gamma_2-\gamma_1)\right|\nrightarrow0.$$

    Thus, because of the fact that

    $$\sup_{0<r<1}n^{-1}\sum_{t=1}^{[nr]}\left(\phi'(\varepsilon_t)-E(\phi'(\varepsilon_t))\right)\to_p0, \qquad (2.31)$$

    we rewrite (2.30) as follows,

    $$Q_n'(u)=n^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)+u\,n^{-1}\sum_{t=1}^{[n\tau]}E(\phi'(\varepsilon_t))+\left(u\,n^{-1}+n^{-1/2}(\gamma_2-\gamma_1)\right)\sum_{t=[n\tau]+1}^{n}E(\phi'(\tilde{\tilde\varepsilon}_t))=0.$$

    To find the solution to the equation $Q_n'(u)=0$, we have

    $$u=-\frac{n^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)+n^{-1/2}(\gamma_2-\gamma_1)\sum_{t=[n\tau]+1}^{n}E(\phi'(\tilde{\tilde\varepsilon}_t))}{\tau E(\phi'(\varepsilon_t))+n^{-1}\sum_{t=[n\tau]+1}^{n}E(\phi'(\tilde{\tilde\varepsilon}_t))}=O_p(n^{1/2}),$$

    which holds due to $E(\phi'(\tilde{\tilde\varepsilon}_t))=O(1)$. Recall that $u=n^{1/2}(\gamma_1-\hat\gamma(\phi))$; this shows that $\hat\gamma(\phi)-\gamma_1$ converges to a nonzero limit, which contradicts the assumption. Hence, we prove that $\hat\gamma(\phi)-\gamma_1=O_p(1)$, and $\hat\gamma(\phi)-\gamma_2=O_p(1)$ can be dealt with in the same way.

    (ii) Since the estimator $\hat\gamma(\phi)$ is constructed on the basis of the data $\omega_1,\ldots,\omega_n$, which involve the structural change in location, it turns out that $\hat\gamma_1(\phi)-\gamma_i=O_p(1)$, $i=1,2$. For the same reason, we still suppose that $u=n^{1/2}(\gamma_1-\hat\gamma_1(\phi))$ is bounded. If $s\in(\tau+\theta,1)$, because the estimator $\hat\gamma_1(\phi)$ consists of the data $\omega_1,\ldots,\omega_{[ns]}$, we rewrite (2.27) in the form

    $$Q_n(u)=\sum_{t=1}^{[n\tau]}\left\{\rho\left(\varepsilon_t+n^{-1/2}u\right)-\rho(\varepsilon_t)\right\}+\sum_{t=[n\tau]+1}^{[ns]}\left\{\rho\left(\varepsilon_t+(\gamma_2-\gamma_1)+n^{-1/2}u\right)-\rho(\varepsilon_t)\right\}.$$

    Using the same proof process as discussed above, we obtain

    $$u=-\frac{n^{-1/2}\sum_{t=1}^{[ns]}\phi(\varepsilon_t)+n^{-1/2}(\gamma_2-\gamma_1)\sum_{t=[n\tau]+1}^{[ns]}E(\phi'(\tilde{\tilde\varepsilon}_t))}{\tau E(\phi'(\varepsilon_t))+n^{-1}\sum_{t=[n\tau]+1}^{[ns]}E(\phi'(\tilde{\tilde\varepsilon}_t))}.$$

    Note that $\theta=s-\tau\neq0$, and it again leads to $u=O_p(n^{1/2})$, which is inconsistent with $u$ being bounded. Hence, it turns out that $\hat\gamma_1(\phi)-\gamma_1=O_p(1)$. Since the estimator $\hat\gamma_2(\phi)$ consists of the data $\omega_{[ns]+1},\ldots,\omega_n$, which are not contaminated, the asymptotic distribution of $n^{1/2}(\hat\gamma_2(\phi)-\gamma_2)$ is asymptotically equivalent to that shown in Lemma 2.2. Similarly, if $s\in(0,\tau-\theta)$, the assertions $\hat\gamma_1(\phi)-\gamma_1=O_p(n^{-1/2})$ and $\hat\gamma_2(\phi)-\gamma_2=O_p(1)$ do hold.

    (iii) When $s=\tau$, the two data sets $\omega_1,\ldots,\omega_{[ns]}$ and $\omega_{[ns]+1},\ldots,\omega_n$ are not contaminated even under the alternative hypothesis. Thus, the convergence of $\hat\gamma_1(\phi)$ and $\hat\gamma_2(\phi)$ still holds, and the proof of their asymptotic distributions is similar to that in Lemma 2.2. Thus, the proof is complete.

    As stated in Lemma 2.3, if any of the estimating equations (2.19)–(2.22) involve observations with a change in mean, the corresponding M-estimation will be biased. If $s=\tau$, the asymptotic results for $\hat\gamma_1(\phi)$ and $\hat\gamma_2(\phi)$ are consistent with Lemma 2.2.

Theorem 2.1. (Under the null) Suppose the sequence $\{\omega_1,\dots,\omega_n\}$ follows model (2.8) under the null hypothesis. Then, as $n\to\infty$, we have

$$V_n\xrightarrow{d}\frac{\sup_{0\le s\le 1}|W(s)-sW(1)|}{\sup_{0\le v\le s}\left|W(v)-\frac{v}{s}W(s)\right|+\sup_{s<v\le 1}\left|W(1)-W(v)-\frac{1-v}{1-s}(W(1)-W(s))\right|}.$$

Proof. The proof is analogous in several steps to the proof of Theorem 1.1 in Horváth [32]. By the mean value theorem, it leads to

$$\phi(\omega_t-\hat\gamma(\phi))=\phi(\varepsilon_t+\gamma-\hat\gamma(\phi))=\phi(\varepsilon_t)+\phi'(\varepsilon_t^{*})(\gamma-\hat\gamma(\phi)),$$

where $\varepsilon_t^{*}$ satisfies $|\phi'(\varepsilon_t^{*})-\phi'(\varepsilon_t)|\le C|\gamma-\hat\gamma(\phi)|$.

    Combining Lemma 2.2 with (9) and (10), we have

$$n^{-1/2}\sum_{t=1}^{[ns]}\phi(\omega_t-\hat\gamma(\phi))=n^{-1/2}\sum_{t=1}^{[ns]}\phi(\varepsilon_t)+n^{-1/2}(\gamma-\hat\gamma(\phi))\sum_{t=1}^{[ns]}\phi'(\varepsilon_t)+o_p(1)\\
\xrightarrow{d}\sigma_\varepsilon(\phi)W(s)-\frac{\sigma_\varepsilon(\phi)W(1)}{E(\phi'(\varepsilon_t))}\,sE(\phi'(\varepsilon_t))=\sigma_\varepsilon(\phi)(W(s)-sW(1)),\tag{2.32}$$

which holds due to $n^{-1/2}(\gamma-\hat\gamma(\phi))\max_{0\le s\le 1}\sum_{t=1}^{[ns]}|\phi'(\varepsilon_t^{*})-\phi'(\varepsilon_t)|\le C\,n^{1/2}(\gamma-\hat\gamma(\phi))^2\xrightarrow{d}0$.

Actually, the proofs for the first and second terms of the denominator are roughly analogous, so we just handle the second one.

$$n^{-1/2}\sum_{i=[nv]+1}^{n}\phi(\omega_i-\hat\gamma_2(\phi))=n^{-1/2}\sum_{i=[nv]+1}^{n}\phi(\varepsilon_i)+n^{-1/2}(\gamma-\hat\gamma_2(\phi))\sum_{i=[nv]+1}^{n}\phi'(\varepsilon_i^{**})\\
=n^{-1/2}\sum_{i=[nv]+1}^{n}\phi(\varepsilon_i)+n^{-1/2}(\gamma-\hat\gamma_2(\phi))\sum_{i=[nv]+1}^{n}\phi'(\varepsilon_i)+o_p(1)\\
\xrightarrow{d}\sigma_\varepsilon(\phi)\left(W(1)-W(v)-\frac{1-v}{1-s}(W(1)-W(s))\right),\tag{2.33}$$

where $\varepsilon_i^{**}\in(\varepsilon_i,\ \varepsilon_i\pm|\gamma-\hat\gamma_2(\phi)|)$. Similarly, we can obtain

$$n^{-1/2}\sum_{i=1}^{[nv]}\phi(\omega_i-\hat\gamma_1(\phi))\xrightarrow{d}\sigma_\varepsilon(\phi)\left(W(v)-\frac{v}{s}W(s)\right).\tag{2.34}$$

    Therefore, together with (17), (18), and (19), it yields

$$V_n=\frac{\max_{1\le[ns]\le n}n^{-1/2}\left|\sum_{t=1}^{[ns]}\phi(\omega_t-\hat\gamma(\phi))\right|}{\max_{1\le[nv]\le[ns]}n^{-1/2}\left|\sum_{i=1}^{[nv]}\phi(\omega_i-\hat\gamma_1(\phi))\right|+\max_{[ns]+1\le[nv]\le n}n^{-1/2}\left|\sum_{i=[nv]+1}^{n}\phi(\omega_i-\hat\gamma_2(\phi))\right|}\\
\xrightarrow{d}\frac{\sup_{0\le s\le 1}|W(s)-sW(1)|}{\sup_{0\le v\le s}\left|W(v)-\frac{v}{s}W(s)\right|+\sup_{s<v\le 1}\left|W(1)-W(v)-\frac{1-v}{1-s}(W(1)-W(s))\right|}.$$

    Therefore, the proof is complete.
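For intuition, the statistic can also be evaluated directly on data. The sketch below is a minimal Python implementation under one plausible reading of $V_n$: for each candidate split $k$, the normalised CUSUM of scores about the full-sample M-estimate is divided by the two within-segment CUSUM maxima, and the maximum ratio over $k$ is reported. The Huber fixed-point iteration `m_location` is an illustrative stand-in for the paper's M-estimation, not its exact algorithm, and all function names here are our own.

```python
import math

K = 1.345  # Huber threshold used in the paper's simulations

def phi_huber(x, k=K):
    """Huber score phi_M: x clipped to [-k, k]."""
    return max(-k, min(k, x))

def m_location(xs, k=K, iters=50):
    """Huber M-estimate of location by fixed-point iteration
    (an illustrative stand-in for the paper's M-estimation)."""
    g = sorted(xs)[len(xs) // 2]  # start from the median
    for _ in range(iters):
        g += sum(phi_huber(x - g, k) for x in xs) / len(xs)
    return g

def ratio_test(omega, phi=phi_huber):
    """One plausible reading of the ratio-typed statistic V_n: for each
    split k, the normalised CUSUM of scores about the full-sample estimate
    divided by the sum of the within-segment CUSUM maxima, maximised over k."""
    n = len(omega)
    g_all = m_location(omega)
    best_v, best_k = 0.0, None
    for k in range(2, n - 1):
        g1 = m_location(omega[:k])   # segment estimate from omega_1..omega_k
        g2 = m_location(omega[k:])   # segment estimate from omega_{k+1}..omega_n
        num = abs(sum(phi(w - g_all) for w in omega[:k])) / math.sqrt(n)
        den1 = max(abs(sum(phi(w - g1) for w in omega[:j + 1]))
                   for j in range(k)) / math.sqrt(n)
        den2 = max(abs(sum(phi(w - g2) for w in omega[j:]))
                   for j in range(k, n)) / math.sqrt(n)
        v = num / (den1 + den2 + 1e-12)
        if v > best_v:
            best_v, best_k = v, k
    return best_v, best_k
```

Under a pronounced mean shift, the maximising split $k$ also serves as a change-location estimate, mirroring the split at $s=348$ found in the application section.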

Theorem 2.1 demonstrates that, under the null hypothesis, the proposed test $V_n$ converges to a functional of the Wiener process. In comparison with existing results, the asymptotic distribution remains robust against variations in the tail index and therefore yields a single critical value for any given significance level. Thus, the ratio-typed test is robust to heavy-tailed series with infinite variance, which has the advantage of avoiding tail index estimation for real data and greatly improves operational efficiency.

    Next, we study the behavior of the ratio-typed test if there is a mean change. The following theorem is crucial for obtaining the desired results under the alternative hypothesis.

Theorem 2.2. (Under the alternative) Suppose the sequence $\{\omega_1,\dots,\omega_n\}$ follows model (2.9) under the alternative hypothesis. Then, as $n\to\infty$, we have

$$V_n=O_p(n^{1/2}).$$

Proof. When $s\in(\tau+\theta,1)$, according to the proofs of Lemma 2.3 and Theorem 2.1, we have $\hat\gamma(\phi)-\gamma_1=O_p(1)$, $\hat\gamma_1(\phi)-\gamma_1=O_p(1)$, and $\hat\gamma_2(\phi)-\gamma_2=O_p(n^{-1/2})$ under the alternative hypothesis. For the numerator of $V_n$, the mean value theorem yields

$$\left|\sum_{t=1}^{[ns]}\phi(\omega_t-\hat\gamma(\phi))\right|=\left|\sum_{t=1}^{[n\tau]}\phi(\varepsilon_t+(\gamma_1-\hat\gamma(\phi)))+\sum_{t=[n\tau]+1}^{[ns]}\phi(\varepsilon_t+(\gamma_2-\hat\gamma(\phi)))\right|\\
=\left|\sum_{t=1}^{[n\tau]}\left[\phi(\varepsilon_t)+\phi'(\hat\varepsilon_t)(\gamma_1-\hat\gamma(\phi))\right]+\sum_{t=[n\tau]+1}^{[ns]}\left[\phi(\varepsilon_t)+\phi'(\hat{\hat\varepsilon}_t)(\gamma_2-\hat\gamma(\phi))\right]\right|\\
=\left|\sum_{t=1}^{[ns]}\phi(\varepsilon_t)+(\gamma_1-\hat\gamma(\phi))\sum_{t=1}^{[n\tau]}\phi'(\hat\varepsilon_t)+(\gamma_2-\hat\gamma(\phi))\sum_{t=[n\tau]+1}^{[ns]}\phi'(\hat{\hat\varepsilon}_t)\right|,$$

where $\hat\varepsilon_t\in(\varepsilon_t,\ \varepsilon_t+|\gamma_1-\hat\gamma(\phi)|)$ and $\hat{\hat\varepsilon}_t\in(\varepsilon_t,\ \varepsilon_t+|\gamma_2-\hat\gamma(\phi)|)$. Since the second and third terms play the major role in the convergence rate $O_p(n)$, it follows that

$$\max_{0\le s\le 1}\left|\sum_{t=1}^{[ns]}\phi(\omega_t-\hat\gamma(\phi))\right|=O_p(n).\tag{2.35}$$

    Now, we deal with the first term of the denominator.

(ⅰ) If $1\le[nv]\le[n\tau]$, we have

$$\sum_{i=1}^{[nv]}\phi(\omega_i-\hat\gamma_1(\phi))=\sum_{i=1}^{[nv]}\phi(\varepsilon_i)+(\gamma_1-\hat\gamma_1(\phi))\sum_{i=1}^{[nv]}\phi'(\tilde\varepsilon_i)=O_p(n^{1/2})+O_p(1)\times O_p(n)=O_p(n),$$

where $\tilde\varepsilon_i\in(\varepsilon_i,\ \varepsilon_i\pm|\gamma_1-\hat\gamma_1(\phi)|)$.

(ⅱ) If $[n\tau]+1\le[nv]\le[ns]$,

$$\sum_{i=1}^{[nv]}\phi(\omega_i-\hat\gamma_1(\phi))=\sum_{i=1}^{[n\tau]}\phi(\varepsilon_i+(\gamma_1-\hat\gamma_1(\phi)))+\sum_{i=[n\tau]+1}^{[nv]}\phi(\varepsilon_i+(\gamma_2-\hat\gamma_1(\phi)))\\
=\sum_{i=1}^{[nv]}\phi(\varepsilon_i)+(\gamma_1-\hat\gamma_1(\phi))\sum_{i=1}^{[n\tau]}\phi'(\hat\varepsilon_i)+(\gamma_2-\hat\gamma_1(\phi))\sum_{i=[n\tau]+1}^{[nv]}\phi'(\hat{\hat\varepsilon}_i)\\
=O_p(n^{1/2})+O_p(n)=O_p(n),$$

where $\hat\varepsilon_i\in(\varepsilon_i,\ \varepsilon_i\pm|\gamma_1-\hat\gamma_1(\phi)|)$ and $\hat{\hat\varepsilon}_i\in(\varepsilon_i,\ \varepsilon_i\pm|\gamma_2-\hat\gamma_1(\phi)|)$. Hence, we obtain

$$\max_{1\le[nv]\le[ns]}\left|\sum_{i=1}^{[nv]}\phi(\omega_i-\hat\gamma_1(\phi))\right|=O_p(n).\tag{2.36}$$

For the second term of the denominator, its asymptotic distribution is the same as under the null hypothesis, because the data $\omega_{[ns]+1},\dots,\omega_n$ are free of the influence caused by the change-point. Thus, we have

$$\max_{[ns]+1\le[nv]\le n}\left|\sum_{i=[nv]+1}^{n}\phi(\omega_i-\hat\gamma_2(\phi))\right|=O_p(n^{1/2}).\tag{2.37}$$

Combining (20)–(22), we end up with $V_n(s)=O_p(1)$ for $s\in(\tau+\theta,1)$. When $s\in(0,\tau-\theta)$, we can similarly prove that $V_n(s)=O_p(1)$. Finally, when $s\in(\tau-\theta,\tau+\theta)$, we just consider the special case $s=\tau$. Since both of the two sets $\omega_1,\dots,\omega_{[ns]}$ and $\omega_{[ns]+1},\dots,\omega_n$ are unaffected by the mean change, the asymptotic distribution of the denominator is the same as in Theorem 2.1. However, the numerator always diverges. Consequently, we get

$$V_n\ge V_n(\tau)=\frac{O_p(n)}{O_p(n^{1/2})+O_p(n^{1/2})}=O_p(n^{1/2}).$$

    Therefore, the proof is complete.

According to Theorem 2.2, the ratio-typed test is consistent under the alternative hypothesis. However, unlike the result in Theorem 2.1, a closed-form asymptotic distribution cannot be obtained, owing to the lack of an explicit expression for the objective function of the M-estimation. The simulation study reveals an intriguing finding: although the divergence is independent of the tail index in theory, the tail index significantly affects the test's behavior in practice.

In this section, we present simulation results to investigate the performance of the ratio-typed test $V_n$ in terms of empirical sizes and empirical powers using LS-estimation and M-estimation, i.e., $\phi_{LS}(x)=x$ and $\phi_M(x)=xI\{|x|\le K\}+K\,\mathrm{sgn}(x)I\{|x|>K\}$. The validity of the theory is examined across combinations of change position, magnitude of the change-point, tail index, window width, lag number, and so on. Empirical sizes refer to the rejection rates at a significance level of 0.05 under the null hypothesis, while empirical powers denote rejection rates in the presence of a change-point. All results are based on 2000 replications.

    We adopt the following DGP (data generating process):

$$y_t=\mu+\beta t+\xi_t,\qquad \xi_t=c_1\xi_{t-1}\mathbf{1}\{t\le[T\tau]\}+(c_1+\delta)\xi_{t-1}\mathbf{1}\{t>[T\tau]\}+\eta_t,$$

where $\eta_t$ is a heavy-tailed sequence. The remaining parameters are set as follows: threshold value K=1.345; autoregressive coefficient c1=−0.3, 0, 0.3; sample size T=300, 600, 1200; tail index κ=0.4, 0.8, 1.2, 1.6, 2.0; intercept and slope μ=5, β=0.2; change position τ=0, 0.3, 0.5, 0.7; magnitude of change δ=0.3, 0.6; window width m=10, 20, 30; and lag number d=1, 2, 3, 10, 15, 25.
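The DGP above can be sketched in a few lines of Python. Since the paper generates stable innovations with Nolan's software (see the acknowledgments), the sketch below substitutes a symmetrised Pareto draw with the same tail index κ as a simple stand-in; the function names and this substitution are our own assumptions, not the paper's exact procedure.

```python
import random

def heavy_tail(kappa):
    """Symmetrised Pareto draw with tail index kappa: a simple stand-in
    for the stable innovations generated by Nolan's software."""
    sign = 1.0 if random.random() < 0.5 else -1.0
    return sign * (random.paretovariate(kappa) - 1.0)

def dgp(T=300, mu=5.0, beta=0.2, c1=0.3, delta=0.3, tau=0.5, kappa=1.6):
    """y_t = mu + beta*t + xi_t, where the AR(1) coefficient of xi_t
    changes from c1 to c1 + delta at t = [T*tau]."""
    ys, xi = [], 0.0
    for t in range(1, T + 1):
        c = c1 if t <= int(T * tau) else c1 + delta
        xi = c * xi + heavy_tail(kappa)  # heavy-tailed AR(1) innovation
        ys.append(mu + beta * t + xi)
    return ys
```

Setting `delta=0` reproduces the null hypothesis of no change in the autoregressive coefficient.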

    In this subsection, our objective is to discuss the critical values of the ratio-typed test. Critical values of M-estimation and LS-estimation corresponding to changes in the tail index under different coefficients c1 are obtained and presented in Table 1.

Table 1. Simulated critical values under H0.

                   V_M                              V_LS
    κ      c1=−0.3   c1=0     c1=0.3     c1=−0.3   c1=0     c1=0.3
    0.4    1.4345    1.4484   1.4031     1.5156    1.5288   1.5346
    0.8    1.4078    1.4139   1.4176     1.5084    1.4957   1.5175
    1.2    1.4421    1.4291   1.4266     1.4799    1.4756   1.4690
    1.6    1.4298    1.4011   1.4133     1.4308    1.4377   1.4295
    2.0    1.4125    1.4153   1.4379     1.4286    1.4085   1.4190

For the sake of simplicity, VM denotes the ratio-typed test based on M-estimation with the Huber function, and VLS the one based on LS-estimation. It is not surprising that the simulated critical values of VLS depend on the tail index κ, whereas this phenomenon does not occur for VM. That is consistent with the conclusions of Jin et al. [36,37]. It is noteworthy that the critical values of the two test statistics are not sensitive to variations in c1. Thus, for the VLS test, prior estimation of the tail index is necessary in order to conduct change-point tests with the corresponding critical values. However, accurately estimating the unknown and elusive tail index in practical applications remains challenging. On the other hand, the simulated critical values of VM show minimal fluctuation, which greatly facilitates its practical application.
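As a hedged illustration of how such critical values can be simulated: approximate the Wiener process by a scaled Gaussian random walk, evaluate the limiting ratio functional of Theorem 2.1 (read here as the supremum over split points of the bridge-to-segment-bridges ratio, which is one plausible reading), and take the empirical 95% quantile. Grid sizes and replication counts are kept small for speed, so the values are rougher than those reported in Table 1.

```python
import math
import random

def limit_draw(N=300):
    """One draw of the limiting ratio functional in Theorem 2.1, with the
    Wiener process W approximated by a scaled Gaussian random walk on a
    grid of N points; the supremum is taken over split points s = k/N."""
    w = [0.0]
    for _ in range(N):
        w.append(w[-1] + random.gauss(0.0, 1.0) / math.sqrt(N))
    best = 0.0
    for k in range(1, N):
        num = abs(w[k] - (k / N) * w[N])  # |W(s) - sW(1)| at s = k/N
        d1 = max(abs(w[j] - (j / k) * w[k]) for j in range(1, k + 1))
        d2 = max(abs(w[N] - w[j] - (N - j) / (N - k) * (w[N] - w[k]))
                 for j in range(k, N))
        best = max(best, num / (d1 + d2 + 1e-12))
    return best

def critical_value(level=0.95, reps=200, N=300):
    """Empirical quantile of the simulated limiting distribution."""
    draws = sorted(limit_draw(N) for _ in range(reps))
    return draws[int(level * reps)]
```

With larger `reps` and `N`, the 95% quantile settles near the values of order 1.4 shown in Table 1, and is by construction free of the tail index.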

    In this subsection, we aim to investigate empirical sizes and empirical powers of ratio-typed tests under various profiles of sample size, score function, change position, window width, and lag number. Figure 2 illustrates the curve of empirical sizes for VM and VLS under the null hypothesis, where the x-axis represents the tail index and the y-axis represents empirical sizes.

    Figure 2.  Empirical sizes of VM and VLS under H0.

The empirical sizes of VM, as shown in Figure 2, are more stable than those of VLS, as they fluctuate closely around the 0.05 significance level. Notably, the rejection rate of VM performs well regardless of the window width, whereas the empirical sizes of VLS are sensitive to it. When m=10, there is a slight distortion, and the distortion becomes more pronounced as the window width expands to 20 and 30, which indicates that the rejection rate is unstable with larger window widths, in particular for T=300. In other words, when the sample size is small, increasing the window width may slightly increase the distortion. The reason is that a larger amount of data within each window is more prone to outliers, which in turn triggers over-rejection. However, as the sample size increases, the empirical sizes become satisfactory and fluctuate around the significance level. There is no significant difference in empirical sizes when various lag numbers are considered. In short, the rejection rate of VLS is higher for smaller sample sizes with larger tail indices, while this phenomenon does not occur for VM. These results indicate that the convergence rate of the M-estimation is independent of tail thickness, as confirmed by Lemmas 2.1 and 2.2.

Figures 3–5 show the sensitivity analysis of the ratio-typed test under the alternative hypothesis with respect to sample size, tail index, window width, lag number, change location, and magnitude of change. As the tail index increases, the empirical powers of VM decrease, while those of VLS increase. The ratio-typed test based on M-estimation attains higher empirical powers if κ≤1.4, while the test based on LS-estimation exhibits larger empirical powers if κ>1.4. The Huber function truncates outliers toward normality, so M-estimation gives more accurate estimates for heavy-tailed sequences but incurs more bias for light-tailed observations. This indicates that M-estimation is better suited to heavy-tailed sequences. As expected, both tests perform very well on empirical powers as the sample size grows; for example, when τ=0.5 and κ=1.6 with m=20 and d=1, the rejection rates of VM are 80.7%, 93.2%, and 99.35% for T=300, 600, and 1200, respectively.

    Figure 3.  Empirical powers of VM and VLS under H1.
    Figure 4.  Empirical powers of VM and VLS under H1.
    Figure 5.  Empirical powers of VM and VLS under H1.

It is interesting that, compared with τ=0.3 and τ=0.7, the empirical powers perform best when τ=0.5 for both VM and VLS. Furthermore, a wider window width leads to larger empirical powers when the lag number is fixed. For example, when d=1, τ=0.5, and T=600, if the window width m is 10, the empirical powers obtained from M-estimation are 99.95%, 99.2%, 93.3%, 83.55%, and 72.85% under the various tail indices, while they are 100%, 99.35%, 97.8%, 93.2%, and 87.8% when m=20. The empirical powers remain insensitive to the lag number when the window width is fixed. When m=30, τ=0.7, T=300, and d=1, the empirical powers based on M-estimation are 98.2%, 93%, 85.4%, 76.8%, and 69.1%; for d=2, they are 96.35%, 92.25%, 85%, 75.15%, and 67.75%. This highlights the importance of selecting an appropriate window width.

The relationship between empirical powers and lag numbers d=1,2,3 is presented in Figures 3–5. The line charts reveal that there is only a slight difference in empirical powers when the lag number is small. Therefore, to account for the impact of lag numbers on empirical powers, we consider three larger lag numbers, d=10,15,25. As depicted in Figure 6, the empirical powers decrease as the lag number increases. For example, when κ=0.8, τ=0.3, T=1200, and m=20, the empirical powers of VM are 80.52%, 76.12%, and 57.38% for d=10,15,25. This phenomenon is attributed to the reduction in the effective sample size of the mean change-point test caused by a larger lag number and a smaller window width.

    Figure 6.  Empirical powers of VM and VLS under H1.

Recall that empirical powers decrease as the lag number increases; therefore, we choose a relatively optimal lag number of d=3 and explore window widths of m=10, 20, 30 to analyze computational efficiency. When the sample size is large, d=3 can be selected to reduce computation cost. However, when the sample size is small, we choose d=1 as the optimal lag number to avoid losing samples and to maximize the efficiency of the change-point test. Figures 2–7 indicate that when m=30, both VM and VLS attain outstanding empirical powers. Consequently, we adopt m=30 as the optimal window width in the practical example. Overall, the empirical powers of the ratio-typed test based on M-estimation are higher than those based on LS-estimation, especially for heavy-tailed sequences.

    Figure 7.  Empirical powers of VM and VLS under H1.

Figures 3–6 report the simulated results for δ=0.3, and we are also interested in the case of δ=0.6. As shown in Figure 7, it is not surprising that empirical powers increase with δ. Additionally, when δ=0.3, empirical powers tend to decrease as τ moves away from the middle of the sample. But for δ=0.6, the difference in empirical powers is very small whether τ=0.3, τ=0.5, or τ=0.7. In other words, when δ is large, the influence of the change position on the empirical power is negligible.

In this section, the ratio-typed test based on M-estimation is used to test for a QAC change in USD/CNY exchange rate data, by which the validity of the aforementioned method is confirmed. Figure 8(a) shows a total of 581 daily observations of the USD/CNY exchange rate from May 12, 2009 to August 31, 2011, drawn from https://www.economagic.com. Using the software of [38] to obtain a rough estimate of the tail index, we have $\hat\kappa=1.0515$. Thus, we suppose that this set of exchange rate data follows $y_t=\mu+\beta t+\xi_t$, $t=1,2,\dots,581$, where the innovation $\xi_t$ is heavy-tailed. In view of this, the test statistics in [23,24,25] for detecting changes in correlation coefficients are invalid here, because that approach requires long-run variance estimation, which is difficult and redundant for heavy-tailed sequences.

    Figure 8.  Daily exchange rate data and residuals sequence.
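The paper obtains $\hat\kappa=1.0515$ with the stable-fitting software of [38]; as an alternative rough check, a standard Hill estimator of the tail index can be sketched as follows (the choice of the number of upper order statistics `k` is illustrative, not the paper's).

```python
import math

def hill_estimator(xs, k):
    """Hill estimator of the tail index based on the k largest absolute
    observations (a standard rough alternative to stable-law fitting)."""
    ys = sorted((abs(x) for x in xs), reverse=True)
    # kappa_hat = k / sum of log-spacings above the (k+1)-th order statistic
    return k / sum(math.log(ys[i] / ys[k]) for i in range(k))
```

In practice the estimate is plotted against a range of `k` and read off a stable plateau, since the Hill estimator is sensitive to the choice of `k`.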

Note that the method proposed in this paper not only avoids long-run variance estimation, but also expands the application range and greatly improves practicability and convenience by converting the change in the autocorrelation coefficient into a change in mean. For the mean change problem, we apply the ratio-typed test based on M-estimation to the heavy-tailed series. The exchange rate sequence $y_t$ is detrended in advance. The intercept and the slope are estimated through M-estimation, resulting in $\hat\xi_t=y_t-\hat\mu-\hat\beta t$, $t=1,2,\dots,581$, where $\hat\mu=6.9401$ and $\hat\beta=-0.00078$. The residuals $\hat\xi_t$ are depicted in Figure 8(b). Thus, we set d=1 and m=30, and obtain a new sequence $\omega_t$ of 550 QAC values, as shown in Figure 9.

    Figure 9.  The QAC sequence.
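The exact QAC construction is defined earlier in the paper; as a rough sketch, assume each ω_t is the lag-d sample autocovariance ratio over a moving window of width m of the detrended residuals, which indeed yields n − m − d values (581 − 30 − 1 = 550 in this application). The function below is our own illustrative reading, not the paper's definition.

```python
def qac_sequence(resid, m=30, d=1):
    """Sketch of the moving-window QAC construction: for each window of
    m + d consecutive residuals, compute the lag-d sample autocovariance
    divided by the sample variance of the window, giving
    len(resid) - m - d values."""
    n = len(resid)
    out = []
    for t in range(n - m - d):
        win = resid[t:t + m + d]
        mu = sum(win) / len(win)
        num = sum((win[i] - mu) * (win[i + d] - mu) for i in range(m))
        den = sum((x - mu) ** 2 for x in win)
        out.append(num / den)
    return out
```

By the Cauchy–Schwarz inequality each value is bounded by 1 in absolute value, so a change in the autocorrelation coefficient of the residuals appears as a change in the mean of this bounded sequence.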

By the BIC or AIC criterion, the QAC data $\omega_t$ are fitted by the mean model $\omega_t=\gamma+\varepsilon_t$, $t=1,2,\dots,550$. We substitute the sequence $\omega_t$ into the ratio-typed test based on M-estimation and find $V_M=3.6375>1.4133$ at $s=348$. That is, $\omega_t$ underwent a change in mean at $s=348$ (the red dashed line in Figure 9) and is divided into two segments. The first segment $\omega_1,\omega_2,\dots,\omega_{348}$ has a sample mean of $\hat\gamma_1=0.9155$, and the second segment $\omega_{349},\omega_{350},\dots,\omega_{550}$ has a sample mean of $\hat\gamma_2=0.6373$. The standard error of the mean estimate based on M-estimation is 0.0075, which indicates that our proposed method is highly reliable. By this, we verify that the daily USD/CNY exchange rate data $y_t$ from May 12, 2009 to August 31, 2011 exhibit a change in the autocorrelation coefficient.

In this paper, we primarily studied the change-point test of the QAC for heavy-tailed sequences. To improve efficiency, the moving window method was used to convert the QAC change into a mean change, and a ratio-typed test based on M-estimation was proposed to test the mean change. The method not only eliminates the influence of outliers, but also extends the theory of QAC change detection from Gaussian sequences to heavy-tailed processes with tail index κ∈(0,2). Under some regularity conditions, the asymptotic distribution under the null hypothesis is a functional of a Wiener process that is independent of the tail index, and consistency was also obtained under the alternative hypothesis. The simulation results revealed that these procedures perform well even when the sequence is heavy-tailed. In summary, the moving window method can be combined with a ratio-typed test based on M-estimation to test for a change in the QAC of a heavy-tailed series.

    Xiaofeng Zhang: Writing-original draft, Software; Hao Jin: Methodology, Writing-review & editing; Yunfeng Yang: Validation. All authors have read and agreed to the published version of the manuscript.

    All authors declare that they have not used Artificial Intelligence tools in the creation of this article.

    The authors would like to thank Prof. John Nolan who provided the software for generating stable innovations and fitting the tail index. The authors are also thankful for the financial support from NNSF (Nos.71473194) and SNSF (Nos.2020JM-513).

    All authors disclose no conflicts of interest.


