
In order to solve nonlinear equations, we introduce two new three-step with-memory iterative methods in this paper. We have improved the order of convergence of a well-known optimal eighth-order iterative method by extending it into two with-memory methods using one and two self-accelerating parameters, respectively. The self-accelerating parameters that increase the convergence order are computed using the Hermite interpolating polynomial. The newly proposed uni-parametric and bi-parametric with-memory iterative methods (IM) improved the R-order of convergence of the existing eighth-order method from 8 to 10 and 10.7446, respectively. Furthermore, the efficiency index has increased from 1.6818 to 1.7783 and 1.8105, respectively. In addition, this improvement in convergence order and efficiency index can be obtained without using any extra function evaluations. Extensive numerical testing on a wide range of problems demonstrates that the proposed methods are more efficient than some well-known existing methods.
Citation: Shubham Kumar Mittal, Sunil Panday, Lorentz Jäntschi, Liviu C. Bolunduţ. Two novel efficient memory-based multi-point iterative methods for solving nonlinear equations[J]. AIMS Mathematics, 2025, 10(3): 5421-5443. doi: 10.3934/math.2025250
Detecting a change-point in a time series has received considerable attention for a long time, originating in quality control [1]. It remains a popular field of research today because sudden changes occur in many areas, such as financial data, signal processing, genetic engineering, and machine learning. An important issue in detecting structural breaks in time series data is identifying changes in a sequence of parameters, numerical characteristics, or distributions that alter the model, such as a shift in mean [2], a change in variance [3], a change in tail index [4], or a change in persistence [5,6].
However, Mandelbrot [7] pointed out that many financial asset return distributions exhibit characteristics, such as peakedness and heavy tails, that cannot be adequately described by traditional normal distributions. Heavy-tailed series are better suited to capturing the peaked, heavy-tailed distributional features of financial data due to their additivity and consistency with market observations. The distributions of commodity and stock returns often exhibit heavy tails with possibly infinite variance, as subsequently noted by Fama [8] and Mandelbrot [9], who initiated the investigation of time series models whose marginal distributions have regularly varying tails. Recently, there has been increasing interest in modeling change-point phenomena using heavy-tailed noise variables.
Developing estimation procedures for statistical models designed to represent data with infinite variance has received a great deal of interest. Under the assumption of heavy-tailed time series with infinite variance, Paulauskas and Paulauskas [10] developed the asymptotic theory for econometric co-integration processes. Knight [11] investigated the limiting distribution of M-estimation for autoregressive parameters in the context of an integrated linear process with infinite-variance data. These findings demonstrate that, for heavier-tailed sequences, M-estimation is asymptotically more robust than least squares estimation (LS-estimation). M-estimation is a widely used and important method, first introduced by Huber [12] in 1964 for the location parameter model. Since then, many statisticians have studied M-estimation, which has led to a series of useful results. Hušková [13] proposed and investigated a method based on the moving sum of M-residuals to detect parameter changes and estimate the change position. Davis [14] studied the M-estimation of general autoregressive moving average (ARMA) processes with infinite variance, where the innovation follows a non-Gaussian stable law in its domain of attraction, derived a functional limit theorem for stochastic processes, and established asymptotic properties of M-estimation. The asymptotic distribution of the M-estimation for parameters in an unstable AR(p) process was presented in Sohrabi [15], who suggested that M-estimation exhibits a higher asymptotic convergence rate than LS-estimation, because the LS-estimate of the mean satisfies $\hat{\mu}-\mu=O_p(T^{1/\kappa-1})$, whose consistency is destroyed if $\kappa\in(0,1)$. Knight [16] investigated the asymptotic behavior of LS- and M-estimation for autoregressive parameters when the data generation process is an infinite-variance random walk.
The study demonstrated that certain M-estimations converge more rapidly than LS-estimation, especially for heavy-tailed distributions. Therefore, this paper employs M-estimation to estimate the parameters.
In time series analysis, covariance, the correlation coefficient, and their sample forms are fundamental tools for studying parameter estimation, goodness of fit, change-point detection, and other related research. For instance, the classic monographs [17,18] discuss these topics extensively and introduce numerous practical applications. Furthermore, Wang et al. [19] proposed two semiparametric additive mean models for clustered panel count data and derived estimating equations for the regression parameters of interest. Xu et al. [20] provided a bivariate Wiener model to capture the degradation patterns of two key performance characteristics of permanent magnet brakes, and considered an objective Bayesian method to analyze degradation data with small sample sizes. Additionally, in his doctoral dissertation, Yaghi [21] proposed a novel method to detect a change in the covariance of a time series by converting the change in auto-covariance into a change in slope. Jarušková [22] investigated the equivalence of two covariance operators using functional principal component analysis, which can verify the equality of the largest eigenvalues and their corresponding eigenfunctions. Furthermore, Wied [23] proposed the test statistic
\begin{align*} Q_T(X,Y)=\hat{D}\max_{2\le j\le T}\frac{j}{\sqrt{T}}\left|\hat{\rho}_j-\hat{\rho}_T\right| \end{align*}
to study the change in the correlation coefficient between two time series over an unknown time period, where $\hat{\rho}_j$ is the estimated autocorrelation coefficient, and $\hat{D}$ is the regulatory parameter associated with the long-run variance. Na [24] applied the monitoring procedure
\begin{align*} \inf\left\{k>T:\ T_k=\left\|\hat{\Sigma}_T^{-1/2}(\hat{\gamma}_k-\hat{\gamma}_T)\right\|\ge \frac{1}{\sqrt{T}}b\left(\frac{k}{T}\right)\right\} \end{align*}
to identify changes in the autocorrelation function, parameter instability, and distribution shifts within the GARCH model, where $\hat{\gamma}$ is the autocorrelation coefficient, $\hat{\Sigma}$ is the long-run variance, and $b(\cdot)$ is a given boundary function. Dette [25] proposed to use
\begin{align*} \hat{V}_n^{(k)}(s)=\frac{1}{n}\sum_{j=1}^{\lfloor ns\rfloor}\frac{\hat{e}_j\hat{e}_{j+k}}{\hat{\sigma}^2(t_j)}-\frac{\lfloor ns\rfloor}{n}\sum_{j=1}^{n}\frac{\hat{e}_j\hat{e}_{j+k}}{\hat{\sigma}^2(t_j)} \end{align*}
to detect relevant changes in time series models, where $\hat{e}_i$ denotes the nonparametric residuals, and $\hat{\sigma}$ is the estimated variance function.
This paper extends previous research to heavy-tailed innovation processes and utilizes the moving window method to convert the problem of detecting a change in the QAC into one of detecting a change in mean. The methods for testing mean change-points primarily include the maximum likelihood method, the least squares method, the cumulative sum (CUSUM) method, the Bayesian method, the empirical quantile method, the wavelet method, and others. Among these, the most commonly used test statistic for change-point problems is the cumulative sum statistic. Yddter [26] and Hawkins [27] utilized the maximum likelihood approach to investigate the mean change-point of a normal sequence, while Kim [28] employed Inclán's [29] cumulative sum of squares (SCUSUM) method to examine parameter changes in generalized autoregressive conditional heteroskedasticity (GARCH) models. Lee [30] utilized the residual cumulative sum method (RCUSUM) to enhance the detection of parameter changes in GARCH(1,1). Han [31] investigated change-point estimation of the mean for heavy-tailed dependent sequences and proved the consistency of the CUSUM statistic. However, due to the non-monotonic empirical power of CUSUM statistics, ratio-typed statistics were subsequently proposed as a suitable alternative, particularly in cases of infinite variance, since they do not require any variance estimation for normalization. As a result, Horváth [32] proposed a robust ratio test statistic to test the mean change-point of weakly dependent stable distribution sequences, and Jin et al. [33] applied this ratio test statistic to investigate the mean change in their research.
The remainder of this paper is organized as follows. In Section 2, we introduce our ratio-typed test and derive its asymptotic properties under both the null hypothesis and the alternative, and we study the asymptotic behavior of the parameter estimates and of the ratio-typed test. Section 3 presents the Monte Carlo simulation results. Section 4 offers an empirical application, and Section 5 concludes the paper.
The above-mentioned methods require estimation of the long-run variance to execute the test for a change in the autocorrelation coefficient. However, since a heavy-tailed sequence has infinite variance, the results of those tests are not applicable. We use Yaghi's [21] moving window method to combine the QAC of each window into a new series and utilize a ratio test statistic to test for a mean change, which avoids the need to estimate the long-run variance. In this paper, the primitive time series $\{y_t, 1\le t\le T\}$ satisfies the following conditions,
\begin{align} y_t=\mu+\beta t+\xi_t, \end{align} | (2.1) |
\begin{align} \xi_t=c_1\xi_{t-1}+c_2\xi_{t-2}+\cdots+c_p\xi_{t-p}+\eta_t, \end{align} | (2.2) |
where $\mu$ and $\beta$ are the intercept and the time trend, and $T$ is the sample size. $\xi_t$ is a $p$th-order autoregressive process. We assume throughout this paper that the error term $\eta_t$ belongs to the domain of attraction of a stable law; that is, for any $x>0$,
\begin{align*} TP(|\eta_t|>a_Tx)\to x^{-\kappa}, \end{align*}
where $a_T=\inf\{x:P(|\eta_t|>x)\le T^{-1}\}$, and
\begin{align*} \lim_{x\to\infty}\frac{P(\eta_t>x)}{P(|\eta_t|>x)}=q\in[0,1]. \end{align*}
The tail thickness of the observed data is determined by the tail index $\kappa$, which is unknown. Well-known special cases of $\eta_t$ are the Gaussian ($\kappa=2$) and Cauchy ($\kappa=1$) distributions. When $\kappa\in(0,2)$, the distribution has the moment behavior $E|\eta_t|^{\nu}=\infty$ whenever $\nu\ge\kappa$; thus $\eta_t$ has infinite variance and is heavy-tailed.
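As a quick illustration of this moment behavior (our own sketch, not part of the paper), one can draw symmetric Pareto-tailed noise with an assumed tail index $\kappa=1.2$ and check both the tail law $P(|\eta_t|>x)\approx x^{-\kappa}$ and the explosion of the sample variance as $T$ grows:

```python
import numpy as np

# Illustrative sketch (not the authors' code): symmetric Pareto-tailed noise
# with tail index kappa = 1.2 < 2, so E|eta|^nu = infinity for nu >= kappa.
rng = np.random.default_rng(0)
kappa = 1.2

def pareto_tailed(T):
    # |eta| has survival function P(|eta| > x) = x^(-kappa) for x >= 1
    # (numpy's pareto is Lomax, so we shift by 1); random signs symmetrize.
    magnitudes = rng.pareto(kappa, size=T) + 1.0
    signs = rng.choice([-1.0, 1.0], size=T)
    return signs * magnitudes

# Tail check: empirical P(|eta| > 10) should be close to 10^(-1.2) ~ 0.063.
eta = pareto_tailed(10**6)
print(np.mean(np.abs(eta) > 10.0))

# The sample variance keeps growing with T, reflecting infinite variance.
for T in (10**3, 10**5):
    print(T, np.var(pareto_tailed(T)))
```

The same qualitative behavior holds for any innovation in the stable domain of attraction with $\kappa<2$.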
The variance of the heavy-tailed sequence $y_t$ is infinite, but the variance of $T^{\kappa/2-1}y_t$ is finite.
Proof. Multiply both sides of model (2.1) by the scaling factor $T^{\kappa/2-1}$,
\begin{align*} T^{\kappa/2-1}y_t=T^{\kappa/2-1}\mu+T^{\kappa/2-1}\beta t+T^{\kappa/2-1}\xi_t, \end{align*}
where $\kappa\in(0,2)$ and $T$ represents the sample size. We then prove the existence of the first and second moments of $T^{\kappa/2-1}\xi_t$. According to the definition of the stable distribution, we obtain
\begin{align*} \lim_{T\to\infty}T^{\kappa}P(X>T)=C(\sigma,\beta,\mu), \end{align*}
where $C(\sigma,\beta,\mu)$ is bounded. This means that there exists a sufficiently large $M$ such that, for $T>M$, $P(X>T)=O(T^{-\kappa})$.
Note that
\begin{align*} E(T^{\kappa/2-1}\xi_t)&=\int_{-\infty}^{+\infty}T^{\kappa/2-1}\xi f(\xi)\,d\xi=\int_{-\infty}^{+\infty}T^{\kappa/2-1}\xi\, dF(\xi)\\ &=T^{\kappa/2-1}\int_{-\infty}^{-M}\xi\, dF(\xi)+T^{\kappa/2-1}\int_{-M}^{M}\xi\, dF(\xi)+T^{\kappa/2-1}\int_{M}^{\infty}\xi\, dF(\xi). \end{align*}
Since $T^{\kappa/2-1}\to 0$ as $T\to\infty$, and $\int_{-M}^{M}\xi\, dF(\xi)\le M\int_{-\infty}^{+\infty}dF(\xi)=M$, the second term tends to zero. Next, we only prove that the third term vanishes; the proof for the first term is similar.
Let $P(X>T)=\bar{F}(T)=1-F(T)$; then $\bar{F}(\infty)=0$, and we get
\begin{align*} T^{\kappa/2-1}\int_{M}^{+\infty}\xi\, dF(\xi)&=T^{\kappa/2-1}\int_{M}^{+\infty}\xi\, d(1-\bar{F}(\xi))=-T^{\kappa/2-1}\int_{M}^{+\infty}\xi\, d\bar{F}(\xi)\\ &=-T^{\kappa/2-1}\xi\bar{F}(\xi)\Big|_{M}^{\infty}+T^{\kappa/2-1}\int_{M}^{+\infty}\bar{F}(\xi)\,d\xi. \end{align*}
Because $T^{\kappa/2-1}M\bar{F}(M)=T^{\kappa/2-1}M^{1-\kappa}M^{\kappa}\bar{F}(M)\to 0$, we have $T^{\kappa/2-1}\xi\bar{F}(\xi)\to 0$ as $\xi\to\infty$. On the other hand, since $\bar{F}(\infty)=0$, it follows that $T^{\kappa/2-1}\int_{M}^{+\infty}\bar{F}(\xi)\,d\xi\to 0$, and hence $E(T^{\kappa/2-1}\xi_t)=0$.
Similarly, the existence of $E(T^{\kappa-1}\xi_t^2)$ can be handled in the same way. This completes the proof for the variance of $T^{\kappa/2-1}\xi_t$.
In this study, our primary objective is to capture the most pronounced jump in amplitude. Consequently, we concentrate on the first-order autocorrelation coefficient, since the second-order and higher-order autocorrelation coefficients tend to diminish or fluctuate within the interval $(-1,1)$. We define the first-order QAC of the heavy-tailed sequence as follows, for $t=1,\cdots,T$,
\begin{align*} \alpha(1)=\frac{\mathrm{Cov}(T^{\kappa/2-1}y_t,\,T^{\kappa/2-1}y_{t-1})}{\sqrt{D(T^{\kappa/2-1}y_t)}\sqrt{D(T^{\kappa/2-1}y_{t-1})}}. \end{align*}
Thus, the corresponding sample correlation coefficient is defined as
\begin{align*} \hat{\alpha}(1)=\frac{\frac{1}{T}\sum_{k=1}^{T}(T^{\kappa/2-1}y_k-T^{\kappa/2-1}\bar{y})(T^{\kappa/2-1}y_{k+1}-T^{\kappa/2-1}\bar{y})}{\sqrt{\frac{1}{T}\sum_{k=1}^{T}(T^{\kappa/2-1}y_k-T^{\kappa/2-1}\bar{y})^2}\sqrt{\frac{1}{T}\sum_{k=1}^{T}(T^{\kappa/2-1}y_{k+1}-T^{\kappa/2-1}\bar{y})^2}}, \end{align*}
where $\bar{y}=\frac{1}{T}\sum_{i=1}^{T}y_i$.
Through simulation experiments, it was found that the presence of intercept and slope parameters has a significant impact on the changes in the QAC. Therefore, it is crucial to detrend the intercept and slope in the model, because noticeable variations occur in the QAC when the regression coefficient of the AR(p) series alters. We now use a simple example with the moving window method to illustrate this phenomenon. Suppose the sequence follows an AR(1) model with a change in the autoregressive parameter,
\begin{align} y_t=1+0.2t+\xi_t, \end{align} | (2.3) |
\begin{align} \xi_t=0.1\xi_{t-1}1\{t\le[T\tau^*]\}+0.7\xi_{t-1}1\{t>[T\tau^*]\}+\eta_t. \end{align} | (2.4) |
Now, we consider a window width $m=10$, lag number $d=1$, sample size $T=1200$, change position $\tau^*=0.5$, and tail index $\kappa=1.2$. Since each window has width $m$ and lag $d$, the number of windows is $n=\lfloor(T-m)/d\rfloor+1$; with $d=1$ this gives $n=T-m+1$ sub-samples, namely, $\{y_t,t=1,\cdots,m\}$, $\{y_t,t=2,\cdots,m+1\}$, $\cdots$, $\{y_t,t=T-m+1,\cdots,T\}$.
Figure 1(a) indicates that the change in $\hat{\alpha}_j(1)$ based on the sub-sample $\{y_j,\cdots,y_{j+m-1}\}$, $j=1,\cdots,n$, is not obvious, where
\begin{align*} \alpha_j(1)=\frac{\frac{1}{m}\sum_{k=j}^{j+m-1}(T^{\kappa/2-1}y_k-T^{\kappa/2-1}\bar{y}_j)(T^{\kappa/2-1}y_{k+1}-T^{\kappa/2-1}\bar{y}_j)}{\sqrt{\frac{1}{m}\sum_{k=j}^{j+m-1}(T^{\kappa/2-1}y_k-T^{\kappa/2-1}\bar{y}_j)^2}\sqrt{\frac{1}{m}\sum_{k=j}^{j+m-1}(T^{\kappa/2-1}y_{k+1}-T^{\kappa/2-1}\bar{y}_j)^2}}, \end{align*}
and $\bar{y}_j=\frac{1}{m}\sum_{t=j}^{j+m-1}y_t$. Under some regularity conditions, it has been proven that $\alpha_j(1)$ converges to $\alpha(1)$ in probability. On the other hand, if there are consistent estimates of the parameters $\mu$ and $\beta$, we can form the residuals $\{\hat{\xi}_j,\cdots,\hat{\xi}_{j+m-1}\}$, $j=1,\cdots,n$, where $\hat{\xi}_t=y_t-\hat{\mu}-\hat{\beta}t$. This means that the intercept and slope have been detrended. Thus, Figure 1(b) shows that the $\hat{\alpha}_j(1)$ based on the sub-sample $\{\hat{\xi}_j,\cdots,\hat{\xi}_{j+m-1}\}$, $j=1,\cdots,n$, exhibit more pronounced fluctuation, where
\begin{align} \hat{\alpha}_j(1)=\frac{\frac{1}{m}\sum_{k=j}^{j+m-1}(T^{\kappa/2-1}\hat{\xi}_k-T^{\kappa/2-1}\bar{\hat{\xi}}_j)(T^{\kappa/2-1}\hat{\xi}_{k+1}-T^{\kappa/2-1}\bar{\hat{\xi}}_j)}{\sqrt{\frac{1}{m}\sum_{k=j}^{j+m-1}(T^{\kappa/2-1}\hat{\xi}_k-T^{\kappa/2-1}\bar{\hat{\xi}}_j)^2}\sqrt{\frac{1}{m}\sum_{k=j}^{j+m-1}(T^{\kappa/2-1}\hat{\xi}_{k+1}-T^{\kappa/2-1}\bar{\hat{\xi}}_j)^2}}, \end{align} | (2.5) |
and $\bar{\hat{\xi}}_j=\frac{1}{m}\sum_{t=j}^{j+m-1}\hat{\xi}_t$.
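To make the windowing concrete, here is a small sketch (our illustration, with Student-t(1.2) noise standing in for an innovation in the stable domain of attraction with $\kappa=1.2$; the true trend is removed exactly for simplicity). Note that the common factor $T^{\kappa/2-1}$ cancels between numerator and denominator of the correlation ratio, so it can be omitted when computing $\hat{\alpha}_j(1)$:

```python
import numpy as np

# Moving-window lag-1 QAC series for the example model (2.3)-(2.4);
# illustrative sketch only. t-distributed noise with 1.2 degrees of freedom
# is a heavy-tailed stand-in for a stable-domain innovation (kappa = 1.2).
rng = np.random.default_rng(1)
T, m, tau_star = 1200, 10, 0.5

eta = rng.standard_t(1.2, size=T)
xi = np.zeros(T)
for t in range(1, T):
    c = 0.1 if t <= int(T * tau_star) else 0.7  # AR(1) coefficient change
    xi[t] = c * xi[t - 1] + eta[t]

def window_qac(x, j, m):
    # Lag-1 sample correlation over the window {x_j, ..., x_{j+m-1}};
    # the scaling T^(kappa/2-1) cancels in the ratio, so it is omitted.
    w0, w1 = x[j:j + m], x[j + 1:j + m + 1]
    xb = w0.mean()
    num = np.mean((w0 - xb) * (w1 - xb))
    den = np.sqrt(np.mean((w0 - xb) ** 2) * np.mean((w1 - xb) ** 2))
    return num / den

# One window per start index (d = 1); the last start leaves room for lag 1.
omega = np.array([window_qac(xi, j, m) for j in range(T - m)])
print(omega[:500].mean(), omega[700:].mean())
```

The mean level of the window series shifts upward after the break, which is exactly the mean change the ratio-typed test targets.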
Moreover, it is worth noting that
\begin{align} \alpha(1)=\frac{\mathrm{Cov}(T^{\kappa/2-1}y_t,\,T^{\kappa/2-1}y_{t-1})}{\sqrt{D(T^{\kappa/2-1}y_t)}\sqrt{D(T^{\kappa/2-1}y_{t-1})}}=\frac{\mathrm{Cov}(T^{\kappa/2-1}\xi_t,\,T^{\kappa/2-1}\xi_{t-1})}{\sqrt{D(T^{\kappa/2-1}\xi_t)}\sqrt{D(T^{\kappa/2-1}\xi_{t-1})}}. \end{align} | (2.6) |
Because $\hat{\alpha}_j(1)$ converges to $\alpha(1)$ in probability, we focus on an autocorrelation coefficient built from $\hat{\xi}_t$, not $y_t$. As shown in Figure 1, if a change in mean is observed in the new series $\hat{\alpha}_1(1),\cdots,\hat{\alpha}_n(1)$, it can be concluded that the QAC of the primitive sequence $y_t$ has changed. The problem of testing the null hypothesis of no QAC change can be stated as
\begin{align*} H_0:\ \hat{\alpha}_1(1)=\hat{\alpha}_2(1)=\cdots=\hat{\alpha}_n(1) \end{align*}
against the alternative hypothesis
\begin{align*} H_1:\ \hat{\alpha}_1(1)=\cdots=\hat{\alpha}_{[n\tau]}(1)=\gamma_1\ne\hat{\alpha}_{[n\tau]+1}(1)=\hat{\alpha}_{[n\tau]+2}(1)=\cdots=\hat{\alpha}_n(1)=\gamma_2. \end{align*}
Therefore, we convert the QAC change into a mean change.
Prior to deriving the asymptotic properties of the following ratio-typed test, it is necessary to establish a lemma proving the consistency of the M-estimates of both the intercept and slope parameters under the null hypothesis for all $\kappa\in(0,2]$. To estimate the parameters $\mu$ and $\beta$ by M-estimation, $\hat{\mu}$ and $\hat{\beta}$ are defined as solutions of the minimization problem
\begin{align} \mathop{\arg\min}_{\mu,\beta\in\mathbb{R}}\sum_{t=1}^{T}\rho(y_t-\mu-\beta t), \end{align} | (2.7) |
where $\rho$ is a convex loss function. The estimates in (2.7) are sometimes also defined as the solution to the following equation,
\begin{align} \sum_{t=1}^{T}\phi(y_t-\mu-\beta t)=0. \end{align} | (2.8) |
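As a numerical illustration of (2.7)-(2.8) (a sketch under assumptions, not the authors' implementation), the Huber-loss trend fit can be computed by iteratively reweighted least squares, with an assumed tuning constant $K=1.345$:

```python
import numpy as np

# Huber M-estimation of (mu, beta) in y_t = mu + beta*t + xi_t via
# iteratively reweighted least squares; K = 1.345 is an assumed tuning
# constant. Weights are w_t = psi(r_t)/r_t with psi(x) = max(-K, min(K, x)).
def huber_trend(y, K=1.345, n_iter=50):
    T = len(y)
    t = np.arange(1, T + 1, dtype=float)
    X = np.column_stack([np.ones(T), t])
    theta = np.linalg.lstsq(X, y, rcond=None)[0]  # LS starting value
    for _ in range(n_iter):
        r = y - X @ theta
        w = np.where(np.abs(r) <= K, 1.0, K / np.maximum(np.abs(r), 1e-12))
        theta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return theta

rng = np.random.default_rng(2)
T = 500
y = 1.0 + 0.2 * np.arange(1, T + 1) + rng.standard_t(1.2, size=T)
mu_hat, beta_hat = huber_trend(y)
print(mu_hat, beta_hat)  # close to (1.0, 0.2) despite heavy-tailed errors
```

The bounded score keeps a handful of enormous t(1.2) innovations from dominating the fit, which is the robustness property motivating M-estimation here.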
Throughout this paper, we will make the following assumptions about the loss function $\rho$ and the distribution of the random variables $\xi_1,\cdots,\xi_T$.
Assumption 1. The distribution $F_\xi$ of the random error term $\xi_t$ is in the domain of attraction of a stable law with index $\kappa\in(0,2)$, and $\eta_t$ is an independent and identically distributed (i.i.d.) sequence.
(1) If $\kappa>1$, then $E(\xi_t)=0$;
(2) while if $\kappa<1$, $\xi_t$ has a symmetric distribution.
Assumption 2. Let $\rho$ be a convex and twice differentiable function, and take $\rho'=\phi$, where $\phi$ is Lipschitz continuous; that is, there is a real number $K\ge 0$ such that $|\phi'(x)-\phi'(y)|\le K|x-y|$ for all $x$ and $y$.
Assumption 3. Finally, we make the following assumptions about the random variable $\phi(\xi_t)$:
(1) $E(\phi(\xi_t))=0$;
(2) $0<\sigma_\xi^2(\phi)=E\phi^2(\xi_t)+2\sum_{i=0}^{\infty}E\phi(\xi_t)\phi(\xi_{t+i})<\infty$.
Assumptions 1, 2, and 3 are standard conditions for deriving the asymptotic properties based on M-estimation, although extra moment conditions are imposed on $\phi$ and $\phi'$. Note that $\rho$ is an almost everywhere differentiable convex function, which ensures the uniqueness of the solution. Although $\rho'$ may not exist in general, the asymptotic theory of M-estimation can still be developed under certain additional conditions. This paper only considers situations where $\rho'$ exists. We mainly consider two types of score function, $\phi(x)=x$ (LS-estimation) and $\phi(x)=xI\{|x|\le K\}+K\,\mathrm{sgn}(x)I\{|x|>K\}$ (Huber M-estimation).
In view of the moving window method with fixed $d$, we can rewrite formula (2.5) and obtain the following first-order QAC $\hat{\alpha}_j(1)$, $j=1,\cdots,n$,
\begin{align} \hat{\alpha}_j(1)=\frac{\frac{1}{m}\sum_{k=d(j-1)+1}^{d(j-1)+m}(T^{\kappa/2-1}\hat{\xi}_k-T^{\kappa/2-1}\bar{\hat{\xi}}_j)(T^{\kappa/2-1}\hat{\xi}_{k+1}-T^{\kappa/2-1}\bar{\hat{\xi}}_j)}{\sqrt{\frac{1}{m}\sum_{k=d(j-1)+1}^{d(j-1)+m}(T^{\kappa/2-1}\hat{\xi}_k-T^{\kappa/2-1}\bar{\hat{\xi}}_j)^2}\sqrt{\frac{1}{m}\sum_{k=d(j-1)+1}^{d(j-1)+m}(T^{\kappa/2-1}\hat{\xi}_{k+1}-T^{\kappa/2-1}\bar{\hat{\xi}}_j)^2}}, \end{align} | (2.9) |
where $\bar{\hat{\xi}}_j=\frac{1}{m}\sum_{i=d(j-1)+1}^{d(j-1)+m}\hat{\xi}_i$ and $\hat{\xi}_t=y_t-\hat{\mu}-\hat{\beta}t$. The QACs of all windows based on the residuals $\hat{\xi}_k$ constitute a new sequence $\{\hat{\alpha}_j(1),j=1,2,\ldots,n\}$. Recall that $n=\lfloor(T-m)/d\rfloor+1$ represents the number of windows. Here, $\xrightarrow{d}$ and $\xrightarrow{p}$ denote convergence in distribution and convergence in probability, respectively.
Lemma 2.1. If Assumptions 1–3 hold, $\hat{\mu}$ and $\hat{\beta}$ minimize (2.7), and the null hypothesis holds, then
\begin{align*} T^{3/2}(\hat{\beta}-\beta)\xrightarrow{d}\frac{6\sigma_\xi(\phi)W(1)-12\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))},\qquad T^{1/2}(\hat{\mu}-\mu)\xrightarrow{d}\frac{-2\sigma_\xi(\phi)W(1)+6\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))}, \end{align*}
where $W(\cdot)$ is a Wiener process.
Proof. The proof is essentially the same as that in Knight [16]. Define the process
\begin{align*} Z(u,v)=\sum_{t=1}^{T}\left(\rho(\xi_t-T^{-3/2}ut-T^{-1/2}v)-\rho(\xi_t)\right), \end{align*}
with $(u,v)=(T^{3/2}(\hat{\beta}-\beta),\,T^{1/2}(\hat{\mu}-\mu))$ minimizing $Z(u,v)$. By a Taylor series expansion of each summand of $Z$ around $u=0$, $v=0$, then
\begin{align} Z(u,v)=&-uT^{-3/2}\sum_{t=1}^{T}t\phi(\xi_t)-vT^{-1/2}\sum_{t=1}^{T}\phi(\xi_t)+\frac{1}{2}u^2T^{-3}\sum_{t=1}^{T}t^2\phi'(\xi_t^*)\\ &+\frac{1}{2}v^2T^{-1}\sum_{t=1}^{T}\phi'(\xi_t^*)+uvT^{-2}\sum_{t=1}^{T}t\phi'(\xi_t^*)=\sum_{i=1}^{5}I_i, \end{align} | (2.10) |
where $\xi_t^*\in(\xi_t\pm|T^{-3/2}ut+T^{-1/2}v|)$. Using the Lipschitz continuity of $\phi'$, then
\begin{align*} |\phi'(\xi_t)-\phi'(\xi_t^*)|\le C|T^{-3/2}ut+T^{-1/2}v| \end{align*}
with bounded $C$, and we get
\begin{align*} T^{-1}\sum_{t=1}^{T}|\phi'(\xi_t)-\phi'(\xi_t^*)|\le C|T^{-3/2}ut+T^{-1/2}v|\to 0. \end{align*}
Thus, $\phi'(\xi_t^*)$ can be approximately replaced by $\phi'(\xi_t)$.
Under Assumptions 1–3, since $\phi(\xi_t)$ satisfies the central limit theorem, it yields
\begin{align} \frac{1}{\sqrt{T}}\sum_{t=1}^{[Tr]}\phi(\xi_t)\xrightarrow{d}\sigma_\xi(\phi)W(r), \end{align} | (2.11) |
where $\sigma_\xi^2(\phi)=E\phi^2(\xi_t)+2\sum_{i=0}^{\infty}E\phi(\xi_t)\phi(\xi_{t+i})$. By some algebraic derivation, we have
\begin{align} I_1=-uT^{-3/2}\sum_{t=1}^{T}t\phi(\xi_t)\xrightarrow{d}-u\sigma_\xi(\phi)\left(W(1)-\int_0^1 rW(r)\,dr\right), \end{align} | (2.12) |
and
\begin{align} I_2=-vT^{-1/2}\sum_{t=1}^{T}\phi(\xi_t)\xrightarrow{d}-v\sigma_\xi(\phi)W(1). \end{align} | (2.13) |
By the law of large numbers, we get
\begin{align*} \sup_{0\le r\le 1}\frac{1}{T}\sum_{t=1}^{[Tr]}|\phi'(\xi_t)-E\phi'(\xi_t)|\xrightarrow{p}0. \end{align*}
This shows that $\phi'(\xi_t)$ can be asymptotically replaced by $E\phi'(\xi_t)$, resulting in
\begin{align} I_3=\frac{1}{2}u^2T^{-3}\sum_{t=1}^{T}t^2\phi'(\xi_t)\xrightarrow{d}\frac{1}{6}u^2E(\phi'(\xi_t)), \end{align} | (2.14) |
\begin{align} I_4=\frac{1}{2}v^2T^{-1}\sum_{t=1}^{T}\phi'(\xi_t)\xrightarrow{d}\frac{1}{2}v^2E(\phi'(\xi_t)), \end{align} | (2.15) |
and
\begin{align} I_5=uvT^{-2}\sum_{t=1}^{T}t\phi'(\xi_t)\xrightarrow{d}\frac{1}{2}uvE(\phi'(\xi_t)). \end{align} | (2.16) |
Together with (2.12)–(2.16), we can rewrite (2.10) as
\begin{align*} Z(u,v)\to-u\sigma_\xi(\phi)\left(W(1)-\int_0^1 rW(r)\,dr\right)-v\sigma_\xi(\phi)W(1)+\frac{1}{6}u^2E(\phi'(\xi_t))+\frac{1}{2}v^2E(\phi'(\xi_t))+\frac{1}{2}uvE(\phi'(\xi_t)), \end{align*}
and taking partial derivatives with respect to $u$ and $v$ gives
\begin{align*} \begin{cases} \dfrac{\partial Z(u,v)}{\partial u}=-\sigma_\xi(\phi)\left(W(1)-\int_0^1 rW(r)\,dr\right)+\dfrac{1}{3}uE(\phi'(\xi_t))+\dfrac{1}{2}vE(\phi'(\xi_t))=0,\\[2mm] \dfrac{\partial Z(u,v)}{\partial v}=-\sigma_\xi(\phi)W(1)+vE(\phi'(\xi_t))+\dfrac{1}{2}uE(\phi'(\xi_t))=0. \end{cases} \end{align*}
The solution is
\begin{align*} \begin{cases} u=\dfrac{6\sigma_\xi(\phi)W(1)-12\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))},\\[2mm] v=\dfrac{-2\sigma_\xi(\phi)W(1)+6\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))}. \end{cases} \end{align*}
This, in turn, implies that
\begin{align*} \begin{pmatrix} T^{3/2}(\hat{\beta}-\beta)\\ T^{1/2}(\hat{\mu}-\mu) \end{pmatrix}\xrightarrow{d}\begin{pmatrix} \dfrac{6\sigma_\xi(\phi)W(1)-12\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))}\\[2mm] \dfrac{-2\sigma_\xi(\phi)W(1)+6\sigma_\xi(\phi)\int_0^1 rW(r)\,dr}{E(\phi'(\xi_t))} \end{pmatrix}. \end{align*}
Therefore, the proof is complete.
Lemma 2.1 shows that the convergence rates of $\hat{\mu}$ and $\hat{\beta}$ based on M-estimation are $T^{-1/2}$ and $T^{-3/2}$ in the heavy-tailed environment, the same as in the case of a Gaussian process. Furthermore, the simulation study reveals that the parameter estimates based on M-estimation are consistent and more robust than those based on the LS method. The QAC plays a crucial role in time series analysis by measuring the degree to which successive observations are correlated with each other over time, and the consistency of these estimates ensures that our understanding of such correlations remains robust and trustworthy. Consequently, we can present the ratio-typed test to detect a change in mean and discuss its asymptotic properties.
For the sake of convenience, we define $\omega_j=\hat{\alpha}_j(1)$. Without loss of generality, we suppose $\omega_t$ follows an AR(p) process with a drift, that is, $\omega_t=\gamma+\gamma_1\omega_{t-1}+\cdots+\gamma_p\omega_{t-p}+\varepsilon_t$. Using a t-test to fit the QAC sequence, it is found that when $p=1$, the P-value is 0.0086, which is greater than 0.001, whereas when $p=0$, the P-value is $6\times10^{-213}$, which is much smaller than 0.001. The smaller the P-value, the better the goodness of fit. This means that, under the null hypothesis of no change, the QAC series can be fitted by the mean model
\begin{align} \omega_t=\gamma+\varepsilon_t,\quad t=1,\cdots,n, \end{align} | (2.17) |
where $\varepsilon_t$ is assumed to satisfy Assumption 1. However, under the alternative hypothesis, the QAC series follows the mean model with a change,
\begin{align} \omega_t=\gamma_1^*1\{t\le[n\tau]\}+\gamma_2^*1\{t>[n\tau]\}+\varepsilon_t, \end{align} | (2.18) |
where $\gamma_1^*\ne\gamma_2^*$ and $\tau$ is an unknown change-point.
Hence, testing for a change in the QAC of the original sequence $\{y_t,t=1,\cdots,T\}$ can be converted into detecting a mean change in the sequence $\{\omega_1,\omega_2,\cdots,\omega_n\}$. The mean change test has been extensively studied and is relatively mature. Because the innovations $\{\varepsilon_t,t=1,\cdots,n\}$ in the sequence $\{\omega_1,\omega_2,\cdots,\omega_n\}$ may follow a heavy-tailed distribution, this paper studies test procedures that utilize the ratio-typed test based on M-residuals to detect changes in mean.
Our inspiration derives from Horváth's (2008) [32] description of ratio-typed tests and their robustness, and from the test-statistic framework of Peštová and Pešta (2018) [34]. The ratio-typed test based on M-residuals is expressed as
\begin{align*} V_n=\max_{0\le s\le 1}V_n(s),\qquad V_n(s)=\frac{\left|\sum_{t=1}^{[ns]}\phi(\omega_t-\hat{\gamma}(\phi))\right|}{\max\limits_{1\le[nv]\le[ns]}\left|\sum_{i=1}^{[nv]}\phi(\omega_i-\hat{\gamma}_1(\phi))\right|+\max\limits_{[ns]+1\le[nv]\le n}\left|\sum_{i=[nv]}^{n}\phi(\omega_i-\hat{\gamma}_2(\phi))\right|}. \end{align*}
The score function $\phi$ is chosen in two forms. If $\phi(x)=x$, $x\in\mathbb{R}$, these procedures reduce to the classic LS procedures (Csörgő and Horváth (1997) [35]). In a similar vein, they reduce to the Huber truncation procedures if $\phi(x)=xI\{|x|\le K\}+K\,\mathrm{sgn}(x)I\{|x|>K\}$. $\hat{\gamma}(\phi)$ is the M-estimate of the parameter $\gamma$ generated by the loss function $\rho$ with the sequence $\{\omega_1,\omega_2,\cdots,\omega_n\}$, i.e., it is defined as a solution of the minimization problem
\begin{align} \mathop{\arg\min}_{\gamma\in\mathbb{R}}\sum_{t=1}^{n}\rho(\omega_t-\gamma), \end{align} | (2.19) |
where $\rho$ is a convex loss function. Sometimes, the estimate in (2.19) is also defined as a solution of the following equation:
\begin{align} \sum_{t=1}^{n}\phi(\omega_t-\gamma)=0, \end{align} | (2.20) |
where $\rho$ and $\phi$ satisfy Assumption 2.
Analogous to $\hat{\gamma}(\phi)$, the M-estimates $\hat{\gamma}_1(\phi)$ and $\hat{\gamma}_2(\phi)$ are computed, respectively, from the sequences $\omega_1,\cdots,\omega_{[ns]}$ and $\omega_{[ns]+1},\cdots,\omega_n$. They are solutions to the equations
\begin{align} \sum_{t=1}^{[ns]}\phi(\omega_t-\gamma)=0, \end{align} | (2.21) |
and
\begin{align} \sum_{t=[ns]+1}^{n}\phi(\omega_t-\gamma)=0. \end{align} | (2.22) |
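A direct numerical sketch of $V_n$ (our illustration, not the authors' code; the location M-estimates solving (2.20)–(2.22) are found by bisection, which is valid because $\gamma\mapsto\sum_t\phi(\omega_t-\gamma)$ is non-increasing, and $K=1.345$ is an assumed truncation constant):

```python
import numpy as np

# Ratio-typed statistic V_n with the Huber score phi; illustrative sketch.
def phi(x, K=1.345):
    return np.clip(x, -K, K)  # x on |x| <= K, K*sgn(x) otherwise

def m_location(w, K=1.345):
    # Bisection for the root of gamma -> sum(phi(w - gamma)), non-increasing.
    lo, hi = float(w.min()), float(w.max())
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if np.sum(phi(w - mid, K)) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ratio_stat(w, K=1.345):
    n = len(w)
    g = m_location(w, K)
    v_best = 0.0
    for k in range(2, n - 1):                  # k plays the role of [ns]
        g1, g2 = m_location(w[:k], K), m_location(w[k:], K)
        num = abs(np.sum(phi(w[:k] - g, K)))
        d1 = np.abs(np.cumsum(phi(w[:k] - g1, K))).max()
        d2 = np.abs(np.cumsum(phi(w[k:] - g2, K)[::-1])).max()
        v_best = max(v_best, num / (d1 + d2 + 1e-12))
    return v_best

rng = np.random.default_rng(4)
w_null = rng.standard_normal(400)                           # constant mean
w_alt = np.concatenate([w_null[:200], 3.0 + w_null[200:]])  # mean shift
print(ratio_stat(w_null), ratio_stat(w_alt))
```

Under a mean shift the numerator grows at rate $n$ while the denominator stays of order $\sqrt{n}$, so the statistic inflates, which is the behavior Lemma 2.3 and Theorem 2.1 formalize.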
Prior to deriving the asymptotic properties of the proposed ratio-typed test, we assume that $\phi(\varepsilon_t)$ satisfies Assumption 3 and give a lemma showing that, under the null hypothesis, the M-estimates are consistent for all $\kappa\in(0,2]$.
Lemma 2.2. If Assumptions 1–3 hold, then under the null hypothesis we have
\begin{align*} n^{1/2}(\hat{\gamma}(\phi)-\gamma)\xrightarrow{d}\frac{\sigma_\varepsilon(\phi)W(1)}{E(\phi'(\varepsilon_t))}. \end{align*}
Similarly,
\begin{align*} n^{1/2}(\hat{\gamma}_1(\phi)-\gamma)\xrightarrow{d}\frac{\sigma_\varepsilon(\phi)W(s)}{sE(\phi'(\varepsilon_t))},\qquad n^{1/2}(\hat{\gamma}_2(\phi)-\gamma)\xrightarrow{d}\frac{\sigma_\varepsilon(\phi)(W(1)-W(s))}{(1-s)E(\phi'(\varepsilon_t))}, \end{align*}
where $\sigma_\varepsilon^2(\phi)=E\phi^2(\varepsilon_t)+2\sum_{i=0}^{\infty}E\phi(\varepsilon_t)\phi(\varepsilon_{t+i})$ and $W(\cdot)$ is a Wiener process.
Proof. Define the process
\begin{align*} Z(u)=\sum_{t=1}^{n}\left\{\rho(\varepsilon_t+un^{-1/2})-\rho(\varepsilon_t)\right\}, \end{align*}
with $u=n^{1/2}(\gamma-\hat{\gamma}(\phi))$. By a Taylor series expansion of each summand of $Z$ around $u=0$, then
\begin{align} Z(u)=un^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)+\frac{1}{2}u^2n^{-1}\sum_{t=1}^{n}\phi'(\varepsilon_t^*), \end{align} | (2.23) |
where $\varepsilon_t^*\in(\varepsilon_t,\varepsilon_t\pm|un^{-1/2}|)$. Using the Lipschitz continuity of $\phi'$, then $|\phi'(\varepsilon_t)-\phi'(\varepsilon_t^*)|\le C|un^{-1/2}|$ with bounded $C$, and we get
\begin{align} n^{-1}\sum_{t=1}^{n}|\phi'(\varepsilon_t)-\phi'(\varepsilon_t^*)|\le C|un^{-1/2}|\to 0 \end{align} | (2.24) |
uniformly over $u$ in compact sets. By the law of large numbers, then
\begin{align} \sup_{0\le s\le 1}\frac{1}{n}\sum_{t=1}^{[ns]}|\phi'(\varepsilon_t)-E\phi'(\varepsilon_t)|\xrightarrow{p}0. \end{align} | (2.25) |
Combining $n^{-1/2}\sum_{t=1}^{[ns]}\phi(\varepsilon_t)\xrightarrow{d}\sigma_\varepsilon(\phi)W(s)$ with (2.24) and (2.25), it yields
\begin{align} Z(u)=un^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)+\frac{1}{2}u^2n^{-1}\sum_{t=1}^{n}\phi'(\varepsilon_t^*)\xrightarrow{d}u\sigma_\varepsilon(\phi)W(1)+\frac{1}{2}u^2E(\phi'(\varepsilon_t)). \end{align} | (2.26) |
Minimizing (2.26), we have
\begin{align*} Z'(u)=n^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)+un^{-1}\sum_{t=1}^{n}E(\phi'(\varepsilon_t))=0, \end{align*}
so it turns out that
\begin{align*} n^{1/2}(\hat{\gamma}(\phi)-\gamma)=-u=\frac{n^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)}{n^{-1}\sum_{t=1}^{n}E(\phi'(\varepsilon_t))}\xrightarrow{d}\frac{\sigma_\varepsilon(\phi)W(1)}{E(\phi'(\varepsilon_t))}. \end{align*}
Similarly, the asymptotic distributions of n1/2(ˆγ1(ϕ)−γ) and n1/2(ˆγ2(ϕ)−γ) can be obtained in the same way. Therefore, the proof is complete.
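Lemma 2.2 can be sanity-checked by simulation in the LS case $\phi(x)=x$, where $\hat{\gamma}(\phi)$ is the sample mean, $E(\phi'(\varepsilon_t))=1$, and $\sigma_\varepsilon(\phi)$ is the innovation standard deviation; then $n^{1/2}(\hat{\gamma}(\phi)-\gamma)$ should be approximately $N(0,\sigma^2)$. A sketch under these assumptions (the parameter values are arbitrary illustrative choices):

```python
import numpy as np

# Monte Carlo check of Lemma 2.2 for phi(x) = x (LS case): with i.i.d.
# N(0, sigma^2) innovations, sqrt(n)*(gamma_hat - gamma) ~ sigma*W(1),
# i.e. N(0, sigma^2). gamma = 0.3, sigma = 2.0, n = 500 are arbitrary.
rng = np.random.default_rng(3)
gamma, sigma, n, reps = 0.3, 2.0, 500, 2000

scaled = np.empty(reps)
for r in range(reps):
    omega = gamma + sigma * rng.standard_normal(n)
    scaled[r] = np.sqrt(n) * (omega.mean() - gamma)  # M-estimate = mean here

print(scaled.mean(), scaled.std())  # near 0 and near sigma = 2.0
```

The empirical mean and standard deviation of the scaled estimates match the limiting law $\sigma_\varepsilon(\phi)W(1)/E(\phi'(\varepsilon_t))$ predicted by the lemma.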
Subsequently, we examine the performance of the ratio-typed test in the presence of a mean change. The ensuing lemma serves as a crucial tool for achieving the desired outcomes under the alternative hypothesis.
Lemma 2.3. If Assumptions 1–3 hold, then under the alternative hypothesis we have:
(i) for $i=1,2$, both $\hat{\gamma}(\phi)-\gamma_i^*=O_p(1)$ hold;
(ii) for a constant $\theta\ne 0$,
if $s\in(0,\tau-\theta]$, then $\hat{\gamma}_1(\phi)-\gamma_1^*=O_p(n^{-1/2})$ and $\hat{\gamma}_2(\phi)-\gamma_2^*=O_p(1)$;
if $s\in[\tau+\theta,1)$, then $\hat{\gamma}_1(\phi)-\gamma_1^*=O_p(1)$ and $\hat{\gamma}_2(\phi)-\gamma_2^*=O_p(n^{-1/2})$;
(iii) if $s=\tau$, then $n^{1/2}(\hat{\gamma}_1(\phi)-\gamma_1^*)$ and $n^{1/2}(\hat{\gamma}_2(\phi)-\gamma_2^*)$ have the same asymptotic distributions as those in Lemma 2.2.
Proof. (i) To confirm $\hat{\gamma}(\phi)-\gamma_1=O_p(1)$, it is sufficient to prove that $\hat{\gamma}(\phi)-\gamma_1$ converges to a nonzero quantity in probability. Suppose, for contradiction, that $n^{1/2}(\hat{\gamma}(\phi)-\gamma_1)=O_p(1)$. Under the alternative hypothesis, define the process
\begin{align} Q_n(u)=\sum_{t=1}^{[n\tau]}\left\{\rho(\varepsilon_t+n^{-1/2}u)-\rho(\varepsilon_t)\right\}+\sum_{t=[n\tau]+1}^{n}\left\{\rho(\varepsilon_t+(\gamma_2-\gamma_1)+n^{-1/2}u)-\rho(\varepsilon_t)\right\}\triangleq Q_{n,1}(u)+Q_{n,2}(u), \end{align} | (2.27) |
where $u=n^{1/2}(\gamma_1-\hat{\gamma}(\phi))$ and, for brevity, we write $\gamma_i$ for $\gamma_i^*$.
By a Taylor series expansion of each summand of $Q_{n,1}(u)$, it yields
\begin{align} Q_{n,1}(u)=n^{-1/2}u\sum_{t=1}^{[n\tau]}\phi(\varepsilon_t)+\frac{1}{2}n^{-1}u^2\sum_{t=1}^{[n\tau]}\phi'(\tilde{\varepsilon}_t), \end{align} | (2.28) |
where $\tilde{\varepsilon}_t\in(\varepsilon_t,\varepsilon_t\pm n^{-1/2}|u|)$. Similarly, a Taylor series expansion of each summand of $Q_{n,2}(u)$ gives
\begin{align} Q_{n,2}(u)=(n^{-1/2}u+(\gamma_2-\gamma_1))\sum_{t=[n\tau]+1}^{n}\phi(\varepsilon_t)+\frac{1}{2}(n^{-1/2}u+(\gamma_2-\gamma_1))^2\sum_{t=[n\tau]+1}^{n}\phi'(\tilde{\tilde{\varepsilon}}_t), \end{align} | (2.29) |
where $\tilde{\tilde{\varepsilon}}_t\in(\varepsilon_t,\varepsilon_t\pm|n^{-1/2}u+(\gamma_2-\gamma_1)|)$.
In view of (2.28) and (2.29), setting the derivative of $Q_n(u)$ to zero, it turns out that
\begin{align} Q_n'(u)=n^{-1/2}\sum_{t=1}^{n}\phi(\varepsilon_t)+un^{-1}\sum_{t=1}^{[n\tau]}\phi'(\tilde{\varepsilon}_t)+(un^{-1}+n^{-1/2}(\gamma_2-\gamma_1))\sum_{t=[n\tau]+1}^{n}\phi'(\tilde{\tilde{\varepsilon}}_t)+o_p(1)=0. \end{align} | (2.30) |
Using the Lipschitz continuity of $\phi'$, we have
\begin{align*} n^{-1}\sum_{t=1}^{[n\tau]}|\phi'(\varepsilon_t)-\phi'(\tilde{\varepsilon}_t)|\le n^{-1/2}C\tau|u|\to 0. \end{align*}
However, because $\gamma_2\ne\gamma_1$,
\begin{align*} n^{-1}\sum_{t=[n\tau]+1}^{n}|\phi'(\varepsilon_t)-\phi'(\tilde{\tilde{\varepsilon}}_t)|\le C(1-\tau)|n^{-1/2}u+(\gamma_2-\gamma_1)|\nrightarrow 0. \end{align*}
Thus, because of the fact that
\begin{align} \sup\limits_{0 < r < 1}n^{-1}\sum\limits_{t = 1}^{[nr]}(\phi'(\varepsilon) -E(\phi'(\varepsilon)))\xrightarrow{p}0, \end{align} | (2.31) |
we rewrite (2.30) as
\begin{align} Q_{n}'(u) = &n^{-1/2}\sum\limits_{t = 1}^{n}\phi(\varepsilon_t)+un^{-1}\sum\limits_{t = 1}^{[n\tau]}E(\phi'(\varepsilon_t))+(un^{-1}+n^{-1/2}(\gamma_2-\gamma_1))\sum\limits_{t = [n\tau]+1}^{n}E(\phi'(\tilde{\tilde{\varepsilon}}_t)) = 0. \end{align} |
To find the solution to the equation Q_{n}'(u) = 0 , we have
\begin{align} u = &-\frac{n^{-1/2}\sum\limits_{t = 1}^n\phi(\varepsilon_t)+n^{-1/2}(\gamma_2-\gamma_1)\sum\nolimits_{t = [n\tau]+1}^{n}E(\phi'(\tilde{\tilde{\varepsilon}}_t))}{\tau E(\phi'(\varepsilon_t))+n^{-1}\sum\nolimits_{t = [n\tau]+1}^{n}E(\phi'(\tilde{\tilde{\varepsilon}}_t))} = O_p(n^{1/2}), \end{align} |
which holds due to $E(\phi'(\tilde{\tilde{\varepsilon}}_t))=O_p(1)$. Recall that $u=n^{1/2}(\gamma_1-\hat{\gamma}(\phi))$; this shows that $\hat{\gamma}(\phi)-\gamma_1$ converges to a nonzero quantity, which contradicts the assumption. Hence, we prove that $\hat{\gamma}(\phi)-\gamma_1=O_p(1)$, and $\hat{\gamma}(\phi)-\gamma_2=O_p(1)$ can be handled in the same way.
(ii) Since the estimator $\hat{\gamma}(\phi)$ is constructed on the basis of the data $\omega_1,\cdots,\omega_n$, which involve the structural change in location, it turns out that $\hat{\gamma}_1(\phi)-\gamma_i=O_p(1)$, $i=1,2$. For the same reason, we still suppose that $u=n^{1/2}(\gamma_1-\hat{\gamma}_1(\phi))$ is bounded. If $s\in(\tau+\theta,1)$, because the estimator $\hat{\gamma}_1(\phi)$ consists of the data $\omega_1,\cdots,\omega_{[ns]}$, we rewrite (2.27) in the form
\begin{align} Q_n(u) = &\sum\limits_{t = 1}^{[n\tau]}\{\phi(\varepsilon_t+n^{-1/2}u)-\phi(\varepsilon_t)\}+\sum\limits_{t = [n\tau]+1}^{[ns]}\{\phi(\varepsilon_t+(\gamma_2-\gamma_1)+n^{-1/2}u)-\phi(\varepsilon_t)\}. \end{align} |
Using the same proof process as discussed above, we obtain
\begin{align} u = &-\frac{n^{-1/2}\sum\limits_{t = 1}^{[ns]}\phi(\varepsilon_t)+n^{-1/2}(\gamma_2-\gamma_1)\sum\nolimits_{t = [n\tau]+1}^{[ns]}\ E(\phi'(\tilde{\tilde{\varepsilon}}_t))}{\tau E(\phi'(\varepsilon_t))+n^{-1}\sum\nolimits_{t = [n\tau]+1}^{[ns]}E(\phi'(\tilde{\tilde{\varepsilon}}_t))}. \end{align} |
Note that \theta = s-\tau\neq0 , and it again leads to u = O_p(n^{1/2}) , which is inconsistent with u being bounded. Hence, it turns out that \hat{\gamma}_1(\phi)-\gamma_1 = O_p(1) . While the estimator \hat{\gamma}_2(\phi) consists of data \omega_{[ns]+1}, \cdots, \omega_{n} which are not contaminated, it implies that the asymptotic distribution of n^{1/2}(\hat{\gamma}_2(\phi)-\gamma_2) is asymptotically equivalent to that shown in Lemma 2.2. Similarly, if s\in(0, \tau-\theta) , the assertions of \hat{\gamma}_1(\phi)-\gamma_1 = O_p(n^{-1/2}) and \hat{\gamma}_2(\phi)-\gamma_2 = O_p(1) do hold.
(ⅲ) When s = \tau , the two data sets \omega_1, \cdots, \omega_{[ns]} and \omega_{[ns]+1}, \cdots, \omega_{n} are not contaminated even under the alternative hypothesis. Thus, \hat{\gamma}_1(\phi) and \hat{\gamma}_2(\phi) still converge, and the proof of their asymptotic distributions is similar to that of Lemma 2.2. Thus, the proof is complete.
As stated in Lemma 2.3, if any of the estimation equations (2.10)–(2.13) involve observations with a change in mean, the corresponding M-estimation will be biased. If s = \tau , the asymptotic results for \hat{\gamma}_1(\phi) and \hat{\gamma}_2(\phi) are consistent with Lemma 2.2.
Theorem 2.1. (Under\; null) Suppose the sequence \{\omega_1, \cdots, \omega_n\} follows model (2.8) under the null hypothesis. Then, as n\rightarrow \infty , we have
\begin{eqnarray*} V_n\xrightarrow{d}\sup\limits_{0\leq s\leq 1}\frac{\vert{W(s)-sW(1)}\vert}{\sup\limits_{0\leq v \leq s}\vert{W(v)-\frac{v}{s}W(s)}\vert+\sup\limits_{s < v \leq 1}\vert{W(1)-W(v)-\frac{1-v}{1-s}({W(1)-W(s)})}\vert}. \end{eqnarray*}
Proof. The proof is analogous in several steps to that of Theorem 1.1 in Horváth [32]. By the mean value theorem, we have
\begin{eqnarray*} \phi(\omega_t-\hat{\gamma}(\phi)) = \phi(\varepsilon_t+\gamma-\hat{\gamma}(\phi)) = \phi(\varepsilon_t)+\phi'(\varepsilon_t^{**})(\gamma-\hat{\gamma}(\phi)), \end{eqnarray*}
where \varepsilon_t^{**} satisfies |\phi'(\varepsilon_t)-\phi'(\varepsilon_t^{**})|\leq C|\gamma-\hat{\gamma}(\phi)| .
Combining Lemma 2.2 with (9) and (10), we have
\begin{align} n^{-1/2}\sum\limits_{t = 1}^{[ns]}\phi(\omega_t-\hat{\gamma}(\phi)) & = n^{-1/2}\sum\limits_{t = 1}^{[ns]}\phi(\varepsilon_t)+n^{-1/2}(\gamma-\hat{\gamma}(\phi))\sum\limits_{t = 1}^{[ns]}\phi'(\varepsilon_t)+o_p(1)\\ &\xrightarrow{d}\sigma_\varepsilon(\phi)W(s)-\frac{\sigma_\varepsilon(\phi)W(1)}{E(\phi'(\varepsilon_t))}\,sE(\phi'(\varepsilon_t))\\ & = \sigma_\varepsilon(\phi)\cdot(W(s)-sW(1)), \end{align} (2.32)
which holds due to n^{-1/2}(\gamma-\hat{\gamma}(\phi))\max\limits_{0\leq s\leq1}\sum_{t = 1}^{[ns]}\left|\phi'(\varepsilon_t)-\phi'(\varepsilon_t^{**})\right|\leq Cn^{1/2}|\gamma-\hat{\gamma}(\phi)|^2 = O_p(n^{-1/2})\xrightarrow{p}0.
Actually, the proofs for the first and second terms of the denominator are roughly analogous, so we only treat the second one.
\begin{align} n^{-1/2}\sum\limits_{i = [nv]+1}^n\phi(\omega_i-\hat{\gamma}_2(\phi)) & = n^{-1/2}\sum\limits_{i = [nv]+1}^n\phi(\varepsilon_{i})+n^{-1/2}(\gamma-\hat{\gamma}_2(\phi))\sum\limits_{i = [nv]+1}^n\phi'(\varepsilon^{***}_{i})\\ & = n^{-1/2}\sum\limits_{i = [nv]+1}^n\phi(\varepsilon_{i})+n^{-1/2}(\gamma-\hat{\gamma}_2(\phi))\sum\limits_{i = [nv]+1}^n\phi'(\varepsilon_{i})+o_p(1)\\ &\xrightarrow{d}\sigma_\varepsilon(\phi)\left(W(1)-W(v)-\frac{W(1)-W(s)}{1-s}(1-v)\right), \end{align} (2.33)
where \varepsilon_i^{***}\in(\varepsilon_i, \varepsilon_i\pm |\gamma-\hat{\gamma}_2(\phi)|) . Similarly, we can obtain
\begin{align} n^{-1/2}\sum\limits_{i = 1}^{[nv]}\phi(\omega_i-\hat{\gamma}_1(\phi)) \xrightarrow{d}\sigma_\varepsilon(\phi)\cdot\left(W(v)-\frac{v}{s}W(s)\right). \end{align} (2.34)
Therefore, combining (2.32)–(2.34), it yields
\begin{align} V_n = &\max\limits_{1\leq [ns] \leq n}\frac{n^{-1/2}\left|\sum\limits_{t = 1}^{[ns]}\phi(\omega_t-\hat{\gamma}(\phi))\right|} {\max\limits_{1\leq [nv] \leq [ns]}n^{-1/2}\left|\sum\limits_{i = 1}^{[nv]}\phi(\omega_i-\hat{\gamma}_1(\phi))\right|+\max\limits_{{[ns]+1}\leq [nv] \leq n}n^{-1/2}\left|\sum\limits_{i = [nv]+1}^n\phi(\omega_i-\hat{\gamma}_2(\phi))\right|}\\ \xrightarrow{d}&\sup\limits_{0\leq s\leq 1}\frac{|W(s)-sW(1)|}{\sup\limits_{0\leq v\leq s}|W(v)-\frac{v}{s}W(s)|+\sup\limits_{s < v\leq 1}|W(1)-W(v)-\frac{1-v}{1-s}(W(1)-W(s))|}. \end{align}
Therefore, the proof is complete.
Theorem 2.1 demonstrates that, under the null hypothesis, the proposed test V_n converges to a function of the Wiener process. In comparison with existing results, the asymptotic distribution remains robust against variations in tail index and therefore yields a unique critical value for any given significance level. Thus, the ratio-typed test is robust to heavy-tailed series with infinite variance, which has the advantage of avoiding tail index estimation for real data and greatly improves operational efficiency.
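As a concrete illustration, the statistic and the Huber M-estimates it requires can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: the function names are ours, and the bisection solver is merely one convenient way to solve the monotone estimating equation \sum_t \phi(\omega_t-\gamma) = 0 .

```python
import math

def huber_phi(x, K=1.345):
    """Huber score function: phi(x) = x for |x| <= K, K*sgn(x) otherwise."""
    return x if abs(x) <= K else math.copysign(K, x)

def m_estimate(data, phi):
    """Solve sum(phi(w - g) for w in data) = 0 by bisection.

    The sum is nonincreasing in g, so the root lies in [min(data), max(data)].
    """
    lo, hi = min(data), max(data)
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if sum(phi(w - mid) for w in data) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ratio_test(data, phi):
    """Ratio-typed statistic V_n and its arg-max split point (size of left segment)."""
    n = len(data)
    g_full = m_estimate(data, phi)
    best, best_k = -1.0, 1
    for k in range(1, n):
        left, right = data[:k], data[k:]
        g1, g2 = m_estimate(left, phi), m_estimate(right, phi)
        # numerator: partial sum of phi-residuals under the full-sample estimate
        num = abs(sum(phi(w - g_full) for w in left))
        # denominator: sup of |partial sums| within each segment, each around its own M-estimate
        c = d1 = 0.0
        for w in left:
            c += phi(w - g1)
            d1 = max(d1, abs(c))
        c = d2 = 0.0
        for w in reversed(right):
            c += phi(w - g2)
            d2 = max(d2, abs(c))
        if d1 + d2 > 0.0 and num / (d1 + d2) > best:
            best, best_k = num / (d1 + d2), k
    return best, best_k
```

For a series with a pronounced mean shift, this sketch typically returns a statistic well above the simulated critical values reported below, with the arg-max split point near the true change location.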
Next, we study the behavior of the ratio-typed test if there is a mean change. The following theorem is crucial for obtaining the desired results under the alternative hypothesis.
Theorem 2.2. (Under\; alternative) Suppose the sequence \{\omega_1, \cdots, \omega_n\} follows model (2.9) under the alternative hypothesis. Then, as n\rightarrow \infty , we have
\begin{eqnarray*} V_n = O_p(n^{1/2}). \end{eqnarray*}
Proof. When s \in(\tau+\theta, 1) , according to the proofs of Lemma 2.3 and Theorem 2.1, we have \hat{\gamma}(\phi)-\gamma_1 = O_p(1) , \hat{\gamma}_1(\phi)-\gamma_1 = O_p(1) , and \hat{\gamma}_2(\phi)-\gamma_2 = O_p(n^{-1/2}) under the alternative hypothesis. For the numerator of V_n , the mean value theorem yields
\begin{align} \left|\sum\limits_{t = 1}^{[ns]}\phi(\omega_t-\hat{\gamma}(\phi))\right| = &\left|\sum\limits_{t = 1}^{[n\tau]}\phi(\varepsilon_t+(\gamma_1-\hat{\gamma}(\phi)))+\sum\limits_{t = [n\tau]+1}^{[ns]}\phi(\varepsilon_t+(\gamma_2-\hat{\gamma}(\phi)))\right|\\ = &\left|\sum\limits_{t = 1}^{[n\tau]}[\phi(\varepsilon_t)+\phi'(\hat{\varepsilon}_t)(\gamma_1-\hat{\gamma}(\phi))]+\sum\limits_{t = [n\tau]+1}^{[ns]}[\phi(\varepsilon_t)+\phi'(\hat{\hat{\varepsilon}}_t)(\gamma_2-\hat{\gamma}(\phi))]\right|\\ = &\left|\sum\limits_{t = 1}^{[ns]}\phi(\varepsilon_t)+(\gamma_1-\hat{\gamma}(\phi))\sum\limits_{t = 1}^{[n\tau]}\phi'(\hat{\varepsilon}_t)+(\gamma_2-\hat{\gamma}(\phi))\sum\limits_{t = [n\tau]+1}^{[ns]}\phi'(\hat{\hat{\varepsilon}}_t)\right|, \end{align}
where \hat{\varepsilon}_t\in{(\varepsilon_t, \varepsilon_t+|\gamma_1-\hat{\gamma}(\phi)|)} and \hat{\hat{\varepsilon}}_t\in{(\varepsilon_t, \varepsilon_t+|\gamma_2-\hat{\gamma}(\phi)|)} . Since the second and third terms dominate at the rate O_p(n) , it follows that
\begin{align} \max\limits_{0\leq s\leq1}\left|\sum\limits_{t = 1}^{[ns]}\phi(\omega_t-\hat{\gamma}(\phi))\right| = O_p(n). \end{align} (2.35)
Now, we deal with the first term of the denominator.
(ⅰ) If 1\leq [nv]\leq [n\tau] , we have
\begin{align} \sum\limits_{i = 1}^{[nv]}{\phi(\omega_i-\hat{\gamma}_1(\phi))} = &\sum\limits_{i = 1}^{[nv]}\phi(\varepsilon_i)+(\gamma_1-\hat{\gamma}_1(\phi))\sum\limits_{i = 1}^{[nv]}\phi'(\tilde{\varepsilon}_i^*)\\ = &O_p(n^{1/2})+O_p(1)\times O_p(n) = O_p(n), \end{align}
where \tilde{\varepsilon}_i^*\in(\varepsilon_i, \varepsilon_i\pm|(\gamma_1-\hat{\gamma}_1(\phi))|) .
(ⅱ) If [n\tau]+1\leq [nv]\leq [ns] ,
\begin{align} \sum\limits_{i = 1}^{[nv]}{\phi(\omega_i-\hat{\gamma}_1(\phi))} = &\sum\limits_{i = 1}^{[n\tau]}{\phi(\varepsilon_i+(\gamma_1-\hat{\gamma}_1(\phi)))}+\sum\limits_{i = [n\tau]+1}^{[nv]}{\phi(\varepsilon_i+(\gamma_2-\hat{\gamma}_1(\phi)))}\\ = &\sum\limits_{i = 1}^{[nv]}\phi(\varepsilon_i)+(\gamma_1-\hat{\gamma}_1(\phi))\sum\limits_{i = 1}^{[n\tau]}\phi'(\hat{\varepsilon}_i^*)+(\gamma_2-\hat{\gamma}_1(\phi))\sum\limits_{i = [n\tau]+1}^{[nv]}\phi'(\hat{\varepsilon}_i^{**})\\ = &O_p(n^{1/2})+O_p(n) = O_p(n), \end{align}
where \hat{\varepsilon}_i^*\in(\varepsilon_i, \varepsilon_i\pm|\gamma_1-\hat{\gamma}_1(\phi)|) and \hat{\varepsilon}_i^{**}\in(\varepsilon_i, \varepsilon_i\pm|\gamma_2-\hat{\gamma}_1(\phi)|) . Hence, we obtain
\begin{eqnarray} \max\limits_{1\leq [nv] \leq [ns]}\left|\sum\limits_{i = 1}^{[nv]}{\phi(\omega_i-\hat{\gamma}_1(\phi))}\right| = O_p(n). \end{eqnarray} (2.36)
For the second term of the denominator, its asymptotic distribution is the same as under the null hypothesis, because the data \omega_{[ns]+1}, \cdots, \omega_n are free of the influence of the change-point. Thus, we have
\begin{eqnarray} \max\limits_{[ns]+1\leq [nv] \leq n }\left|\sum\limits_{i = [nv]+1}^{n}{\phi(\omega_i-\hat{\gamma}_2(\phi))}\right| = O_p(n^{1/2}). \end{eqnarray} (2.37)
Combining (2.35)–(2.37), we end up with V_n = O_p(1) for s\in(\tau+\theta, 1) . When s\in(0, \tau-\theta) , we can similarly prove that V_n = O_p(1) . Finally, when s\in(\tau-\theta, \tau+\theta) , we just consider the special case s = \tau . Since neither of the two sets \omega_1, \cdots, \omega_{[ns]} and \omega_{[ns]+1}, \cdots, \omega_{n} is affected by the mean change, the asymptotic distribution of the denominator is the same as in Theorem 2.1. However, the numerator always diverges. Consequently, we get
\begin{eqnarray*} V_n\geq V_n(\tau) = \frac{O_p(n)}{O_p(n^{1/2})+O_p(n^{1/2})} = O_p(n^{1/2}). \end{eqnarray*}
Therefore, the proof is complete.
According to Theorem 2.2, the ratio-typed test is consistent under the alternative hypothesis. However, unlike the result in Theorem 2.1, a closed-form asymptotic distribution cannot be obtained, because the objective function of the M-estimation has no explicit expression. The simulation study reveals an intriguing finding: although the divergence is independent of the tail index in theory, the tail index significantly affects the validity of the test in practice.
In this section, we present simulation results to investigate the performance of the ratio-typed test V_n in terms of empirical sizes and empirical powers using LS-estimation and M-estimation, i.e., \phi_{LS}(x) = x and \phi_{M}(x) = xI\{{\vert x\vert\leq K}\}+K \mathrm{sgn}(x)I\{{\vert x\vert > K}\} . The validity of the theory is verified over a portfolio of change positions, change magnitudes, tail indices, window widths, lag numbers, and so on. Empirical sizes refer to the rejection rates at a significance level of 0.05 under the null hypothesis, while empirical powers denote rejection rates in the presence of a change-point. All results are based on 2000 replications.
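The two score functions can be written down directly; the clipped form below is algebraically identical to the indicator form above, and the function names are ours:

```python
def phi_ls(x):
    """LS score function: phi_LS(x) = x."""
    return x

def phi_m(x, K=1.345):
    """Huber score: phi_M(x) = x·1{|x|<=K} + K·sgn(x)·1{|x|>K}, i.e. x clipped to [-K, K]."""
    return max(-K, min(K, x))

# A single outlier enters the LS estimating equation linearly,
# while its influence on the M-estimating equation is capped at K.
print(phi_ls(50.0))  # 50.0
print(phi_m(50.0))   # 1.345
```

This capping is exactly why the M-based test is insensitive to heavy tails: no single observation can dominate the estimating equation.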
We adopt the following DGP (data generating process):
\begin{eqnarray*} &y_{t} = \mu +\beta t+\xi _{t},\\ &\xi _{t} = c_{1} \xi _{t-1} 1_{\{t\leq [T\tau]\}}+(c_{1}+\delta) \xi _{t-1} 1_{\{t > [T\tau]\}}+\eta _{t}, \end{eqnarray*}
where \eta_t is a heavy-tailed sequence. The remaining parameters are set as follows: threshold value K = 1.345 ; autoregressive coefficient c_1 = -0.3, 0, 0.3 ; sample size T = 300, 600, 1200 ; tail index \kappa = 0.4, 0.8, 1.2, 1.6, 2.0 ; intercept and slope \mu = 5 , \beta = 0.2 ; change position \tau = 0, 0.3, 0.5, 0.7 ; magnitude of change \delta = 0.3, 0.6 ; window width m = 10, 20, 30 ; and lag number d = 1, 2, 3, 10, 15, 25 .
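The DGP can be sketched with standard-library Python only. The Chambers–Mallows–Stuck draw below generates symmetric \kappa -stable innovations (valid for \kappa\neq 1 ); the function names and seeding are ours, and this is an illustrative sketch rather than the authors' simulation code:

```python
import math, random

def stable_sym(alpha, rng):
    """One symmetric alpha-stable draw via the Chambers-Mallows-Stuck method (alpha != 1)."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)  # uniform angle
    w = rng.expovariate(1.0)                    # unit exponential
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

def generate_dgp(T, tau, mu, beta, c1, delta, kappa, seed=0):
    """y_t = mu + beta*t + xi_t, with the AR coefficient shifting from c1 to c1+delta at [T*tau]."""
    rng = random.Random(seed)
    xi, y = 0.0, []
    for t in range(1, T + 1):
        coef = c1 if t <= int(T * tau) else c1 + delta
        xi = coef * xi + stable_sym(kappa, rng)
        y.append(mu + beta * t + xi)
    return y

series = generate_dgp(T=300, tau=0.5, mu=5.0, beta=0.2, c1=0.3, delta=0.3, kappa=1.2)
```

Setting delta=0 gives a series under the null hypothesis; the same generator with delta>0 produces the change-point alternatives used for the power study.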
In this subsection, our objective is to discuss the critical values of the ratio-typed test. Critical values of M-estimation and LS-estimation corresponding to changes in the tail index under different coefficients c_{1} are obtained and presented in Table 1.
| \kappa | V_{M} ( c_{1}=-0.3 ) | V_{M} ( c_{1}=0 ) | V_{M} ( c_{1}=0.3 ) | V_{LS} ( c_{1}=-0.3 ) | V_{LS} ( c_{1}=0 ) | V_{LS} ( c_{1}=0.3 ) |
|---|---|---|---|---|---|---|
| 0.4 | 1.4345 | 1.4484 | 1.4031 | 1.5156 | 1.5288 | 1.5346 |
| 0.8 | 1.4078 | 1.4139 | 1.4176 | 1.5084 | 1.4957 | 1.5175 |
| 1.2 | 1.4421 | 1.4291 | 1.4266 | 1.4799 | 1.4756 | 1.4690 |
| 1.6 | 1.4298 | 1.4011 | 1.4133 | 1.4308 | 1.4377 | 1.4295 |
| 2.0 | 1.4125 | 1.4153 | 1.4379 | 1.4286 | 1.4085 | 1.4190 |
For the sake of simplicity, V_{M} denotes the ratio-typed test based on M-estimation with the Huber function, and V_{LS} denotes the test based on LS-estimation. It is not surprising that the simulated critical values of V_{LS} depend on the tail index \kappa , whereas this phenomenon does not occur for V_{M} . This is consistent with the conclusions of Jin et al. [36,37]. It is noteworthy that the critical values of the two test statistics are not sensitive to variations in c_1 . Thus, for the V_{LS} test, prior estimation of the tail index is necessary to conduct change-point tests with the corresponding critical values. However, accurately estimating the unknown and elusive tail index in practical applications remains challenging. In contrast, the simulated critical value of V_{M} shows minimal fluctuations, which greatly facilitates its practical application.
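In principle, each tabulated entry is the empirical 95% quantile of 2000 statistics simulated under the null; a small helper (names ours) makes the quantile step explicit:

```python
import math

def critical_value(null_stats, level=0.05):
    """Empirical (1 - level)-quantile of statistics simulated under the null hypothesis."""
    s = sorted(null_stats)
    # 1-based rank ceil((1 - level) * N), clamped to the sample
    idx = min(len(s) - 1, math.ceil((1 - level) * len(s)) - 1)
    return s[idx]

# With 2000 replications and level = 0.05, this picks the 1900th order statistic.
```

The test rejects whenever the observed statistic exceeds this simulated quantile, which is how the fixed threshold 1.4133 used in the real-data example below is obtained.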
In this subsection, we aim to investigate empirical sizes and empirical powers of ratio-typed tests under various profiles of sample size, score function, change position, window width, and lag number. Figure 2 illustrates the curve of empirical sizes for V_{M} and V_{LS} under the null hypothesis, where the x -axis represents the tail index and the y -axis represents empirical sizes.
As shown in Figure 2, the empirical sizes of V_{M} are more stable than those of V_{LS} , since they fluctuate closely around the 0.05 significance level. It is notable that the rejection rate of V_{M} performs well regardless of the window width, but for V_{LS} , the empirical sizes are sensitive to the window width. When m = 10 , there is a slight distortion, and the distortion becomes more pronounced as the window width expands to 20 and 30, which indicates that the rejection rate is unstable for larger window widths, in particular when T = 300 . That is, when the sample size is small, increasing the window width may slightly enlarge the distortion. The reason is that a larger amount of data within each window is more prone to outliers, which in turn triggers over-rejection. However, as the sample size increases, the empirical sizes become satisfactory and fluctuate around the significance level. There is no significant difference in empirical sizes when various lag numbers are considered. In short, the rejection rate of V_{LS} is higher for a smaller sample size with a larger tail index, while this phenomenon does not occur for V_{M} . These results indicate that the convergence rate of the M-estimation is independent of tail thickness, as confirmed by Lemmas 2.1 and 2.2.
Figures 3–5 show the sensitivity analysis of the ratio-typed test under the alternative hypothesis, in terms of sample size, tail index, window width, lag number, change location, and magnitude of change. With an increase in the tail index, the empirical powers of V_{M} decrease, while those of V_{LS} increase. The ratio-typed test based on M-estimation attains higher empirical powers if \kappa\leq1.4 , while the test based on LS-estimation exhibits larger empirical powers if \kappa > 1.4 . The Huber function truncates outliers, by which the M-estimation achieves more accurate estimates for heavy-tailed sequences while introducing more bias for light-tailed observations. This indicates that M-estimation is better suited to heavy-tailed sequences. As expected, both tests perform excellently on empirical powers as the sample size grows; for example, when \tau = 0.5 and \kappa = 1.6 with m = 20 and d = 1 , the rejection rates of V_{M} are 80.7 \% , 93.2 \% , and 99.35 \% for T = 300, 600, 1200 , respectively.
It is interesting that, compared to \tau = 0.3 and \tau = 0.7 , the empirical powers exhibit superior performance when \tau = 0.5 for both V_{M} and V_{LS} . Furthermore, a wider window width leads to larger empirical powers when the lag number is fixed. For example, when d = 1 , \tau = 0.5 , and T = 600 , if the window width m is 10, the empirical powers obtained from M-estimation are 99.95 \% , 99.2 \% , 93.3 \% , 83.55 \% , and 72.85 \% under the various tail indices, while they are 100 \% , 99.35 \% , 97.8 \% , 93.2 \% , and 87.8 \% when m = 20 . The empirical powers remain insensitive to the lag number when the window width is fixed. When m = 30 , \tau = 0.7 , T = 300 , and d = 1 , the empirical powers based on M-estimation are 98.2 \% , 93 \% , 85.4 \% , 76.8 \% , and 69.1 \% ; for d = 2 , they are 96.35 \% , 92.25 \% , 85 \% , 75.15 \% , and 67.75 \% . This highlights the importance of selecting an appropriate window width.
The relationship between empirical powers and lag numbers d = 1, 2, 3 is presented in Figures 3–5. The line charts reveal an intuitive result: there is only a slight difference in empirical powers when the lag number is small. Therefore, to account for the impact of lag numbers on empirical powers, we consider three larger lag numbers, d = 10, 15, 25 . As depicted in Figure 6, the empirical powers decrease as the lag number increases. For example, when \kappa = 0.8 , \tau = 0.3 , T = 1200 , and m = 20 , the empirical powers of V_{M} are 80.52 \% , 76.12 \% , and 57.38 \% for d = 10, 15, 25 . This phenomenon is attributed to the reduction in the effective sample size of the mean change-point test caused by a larger lag number and a smaller window width.
Recall that empirical powers decrease as the lag number increases; we therefore choose a relatively optimal lag number of d = 3 and explore window widths of m = 10, 20, 30 to analyze computational efficiency. In fact, when the sample size is large, d = 3 can be selected to reduce the computational cost. However, when the sample size is small, we choose d = 1 as the optimal lag number to avoid any loss of samples and to maximize the efficiency of the change-point test. Figures 2–7 indicate that when m = 30 , both V_{M} and V_{LS} attain outstanding empirical powers. Consequently, we adopt m = 30 as the optimal window width in the practical example. Overall, the empirical powers of the ratio-typed test based on M-estimation are higher than those based on LS-estimation, especially in the case of heavy-tailed sequences.
Figures 3–6 report the simulated results for \delta = 0.3 , and we are also interested in the case of \delta = 0.6 . As shown in Figure 7, it is not surprising that the empirical powers increase with \delta . Additionally, when \delta = 0.3 , the empirical powers tend to decrease as \tau moves away from the middle of the sample. For \delta = 0.6 , however, the difference in empirical powers is very small whether \tau = 0.3 , \tau = 0.5 , or \tau = 0.7 . In other words, when \delta is large, the influence of the change position on the empirical power is negligible.
In this section, the ratio-typed test based on M-estimation is used to test for a QAC change in USD/CNY exchange rate data, which confirms the validity of the aforementioned method. Figure 8(a) shows a total of 581 daily data points of the USD/CNY exchange rate from May 12, 2009 to August 31, 2011, drawn from https://www.economagic.com. Using the software of [38] to obtain a rough estimate of the tail index, we have \hat{\kappa} = 1.0515 . Thus, we suppose that this set of exchange rate data follows y_{t} = \mu+\beta t+\xi_{t}, t = 1, 2, \dots, 581 , where the innovation \xi_t is heavy-tailed. In view of this, the test statistics in [23,24,25] for detecting changes in correlation coefficients are invalid, because they require long-run variance estimation, which is difficult and redundant for heavy-tailed sequences.
Note that the method proposed in this paper not only avoids long-run variance estimation, but also expands the application range and greatly improves practicability and convenience by converting the change in the autocorrelation coefficient into a change in mean. For the resulting mean change problem, we apply the ratio-typed test based on M-estimation to the heavy-tailed series. The exchange rate sequence y_{t} is detrended in advance. The intercept and the slope are estimated through M-estimation, resulting in \hat{\xi}_{t} = y_{t}-\hat{\mu}-\hat{\beta} t, t = 1, 2, \dots, 581 , where \hat{\mu} = 6.9401 and \hat{\beta} = -0.00078 . The residuals \hat{\xi}_{t} are depicted in Figure 8(b). Setting d = 1 and m = 30 , we obtain a new sequence \omega_{t} composed of 550 QAC values, as shown in Figure 9.
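The moving-window construction can be sketched as follows. The exact windowed QAC statistic is the one defined earlier in the paper; the normalization below is an illustrative assumption (a lag- d sample autocorrelation over each window of m + d residuals, with names ours), chosen so that T = 581 , m = 30 , and d = 1 yield exactly 550 values, as in the text:

```python
def qac_sequence(resid, d=1, m=30):
    """Moving-window lag-d sample autocorrelation of detrended residuals.

    Illustrative form only: for each start t, the lag-d autocorrelation is
    computed from the m + d residuals resid[t:t + m + d].
    """
    n = len(resid)
    out = []
    for t in range(n - m - d):
        win = resid[t:t + m + d]
        mean = sum(win) / len(win)
        num = sum((win[i] - mean) * (win[i + d] - mean) for i in range(m))
        den = sum((x - mean) ** 2 for x in win)
        out.append(num / den if den > 0 else 0.0)
    return out
```

Applied to 581 detrended residuals with d = 1 and m = 30 , this returns a sequence of length 550, which is then fed to the mean-change test.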
By the BIC or AIC criteria, the QAC data \omega_{t} are fitted by the mean model, namely, \omega_{t} = \gamma+\varepsilon_{t}, t = 1, 2, \dots, 550 . Substituting the \omega_{t} sequence into the ratio-typed test based on M-estimation, we find V_{M} = 3.6375 > 1.4133 at s^{*} = 348 . That is, \omega_{t} undergoes a mean change at s^{*} = 348 (the red dashed line in Figure 9) and is divided into two segments. The first part is \omega_{1}, \omega_{2}, \dots, \omega_{348} with a sample mean of \gamma_{1}^{\ast } = 0.9155 , and the second one involves \omega_{349}, \omega_{350}, \dots, \omega_{550} with a sample mean of \gamma_{2}^{\ast } = 0.6373 . The standard error of the mean estimate based on M-estimation is 0.0075, which indicates that our proposed method is highly reliable. By this, we verify that the daily USD/CNY exchange rate data y_{t} from May 12, 2009 to August 31, 2011 contain a change in the autocorrelation coefficient.
In this paper, we primarily studied the change-point test of the QAC for heavy-tailed sequences. In order to improve efficiency, the moving window method was used to convert the QAC change into a mean change, and the ratio-typed test based on M-estimation was proposed to test the mean change. The method not only eliminates the influence of outliers, but also extends the theory of QAC change detection for Gaussian sequences to heavy-tailed processes with tail index \kappa \in (0, 2) . Under some regularity conditions, the asymptotic distribution under the null hypothesis is a functional of a Wiener process that is independent of the tail index, and consistency is also obtained under the alternative hypothesis. The simulation results reveal that these procedures perform well even when the sequence is heavy-tailed. In summary, the moving window method can be combined with a ratio-typed test based on M-estimation to test for a QAC change in a heavy-tailed series.
Xiaofeng Zhang: Writing-original draft, Software; Hao Jin: Methodology, Writing-review & editing; Yunfeng Yang: Validation. All authors have read and agreed to the published version of the manuscript.
All authors declare that they have not used Artificial Intelligence tools in the creation of this article.
The authors would like to thank Prof. John Nolan, who provided the software for generating stable innovations and fitting the tail index. The authors are also thankful for the financial support from the NNSF (No. 71473194) and the SNSF (No. 2020JM-513).
All authors disclose no conflicts of interest.