It is widely acknowledged that the volatility, or instantaneous variation, of financial returns is not constant over time. This phenomenon, often termed volatility clustering, manifests as high serial autocorrelation in return variances and is of particular importance in financial time series analysis. A prominent tool for modeling volatility is the stochastic volatility (SV) model, initially introduced by [1]. It stands as a credible alternative to the widely used autoregressive conditional heteroskedasticity/generalized autoregressive conditional heteroskedasticity (ARCH/GARCH) family of models. While both model families serve to analyze time-dependent variances, they differ notably in construction. ARCH/GARCH models typically characterize time-dependent variances by expressing volatility as a function of past observations or past volatility, allowing for one-step-ahead forecasts. Conversely, SV modeling embraces the stochastic nature of volatility, enabling it to evolve according to its own stochastic process. Empirically, SV models offer greater flexibility than ARCH/GARCH models due to the incorporation of an innovation term into the latent volatility process [2,3]. This enhances the model's ability to capture the dynamics of real-world financial data more accurately.
Moreover, extensive empirical research has identified an asymmetric volatility response to positive and negative past returns. This characteristic, known as the leverage effect, was initially elucidated by [4]. Essentially, financial markets exhibit heightened volatility in reaction to negative shocks, often termed "bad news", compared to equivalent-magnitude positive shocks, or "good news". Despite the efficacy of the basic versions of the GARCH and SV models, neither inherently accounts for the leverage effect and asymmetry. This inherent limitation may constrain the applicability and accuracy of each of these approaches in certain contexts.
To address the asymmetric responses to positive and negative returns, researchers have proposed extensions to existing models. Research from [5] proposes an extension to the logGARCH model, which captures such stylized facts by accommodating asymmetric responses. Similarly, [6] explores asymmetric specifications of GARCH models. In the SV framework, incorporation of the leverage effect and asymmetry can be achieved by introducing correlations between the volatility process noise and the observation series noise [7]. This approach enhances the SV model's ability to capture the nuanced dynamics of financial markets, including asymmetric responses to market shocks.
Threshold models have made significant contributions to volatility modeling within deterministic frameworks. Extending this concept to the SV framework would seem both intuitive and promising. Breidt [8] introduced the threshold SV (TSV) model, integrating the threshold concept into SV analysis. Drawing on Tong's foundational work [9], the TSV model posits that volatility dynamics switch between two regimes based on the nature of incoming information (good or bad). In each regime, volatility is modeled using a first-order autoregressive process, with transitions between regimes determined by the signs of lagged stock returns. So et al. [10] proposed a similar approach, constructing a threshold SV model to capture both mean asymmetry and variance simultaneously. Chen et al. [11] further generalized the TSV model by incorporating heavy-tailed error distributions. TSV models have gained popularity for their efficacy in representing financial return volatility [12].
In the standard economic literature, numerous extensions have been suggested to incorporate additional characteristics of time series data, such as long memory, simultaneous dependence, and regime changes [13,14,15,16,17]. However, a significant portion of these approaches relies on fixed volatility parameters, which may fall short of explaining time series data whose volatility displays a periodically correlated pattern. Such patterns cannot be adequately modeled by TSV models with parameters that remain constant over time. To overcome this constraint, researchers have developed models that explicitly integrate periodicity into model parameters. Periodicity in SV models was first introduced by Tsiakas [20], whose periodic SV (PSV) model represents the parameters as a combination of sine and cosine functions along with suitable dummy variables; this representation, however, captures only a particular form of deterministic seasonality and does not account for periodically correlated volatilities. To model stochastic periodicity in SV models, Aknouche [18] proposed the periodic autoregressive SV (PAR-SV) model, also considered by Boussaha and Hamdi [19]. This paper explores a novel category of periodic volatility models, the periodic threshold autoregressive SV (PTAR-SV) models, in which the logarithm of volatility is represented by a first-order periodic TAR model. The main contribution of the present manuscript is to allow for asymmetry in periodic SV models by combining the works of Breidt [8] and Aknouche [18].
Let us delve into the PAR-SVs process (Xt,t∈Z), defined on (Ω,ℑ,P) and satisfying the factorization
$X_t=e_t\exp\left(\tfrac{1}{2}h_t\right),$ | (1.1) |
where (et) represents a sequence of independent and identically distributed (i.i.d.) random variables; these variables are defined on the same probability space; and the sequence (et) is characterized by having a zero mean and unit variance. Furthermore, the periodic log-volatility process is
$h_t=\alpha(t)+\beta(t)h_{t-1}+\gamma(t)\eta_t,$ | (1.2) |
where the parameters $\alpha(\cdot)$, $\beta(\cdot)$, and $\gamma(\cdot)$ are periodic functions of time $t$ with period $s$ (i.e., for all $n\in\mathbb{Z}$, $\alpha(t)=\alpha(t+ns)$, and similarly for the others), and $(\eta_t,t\in\mathbb{Z})$ is an i.i.d.$(0,1)$ sequence satisfying the following assumption:
Assumption 1. (et) and (ηt) are independent.
In this context, we introduce the PTAR-SVs. This is formulated as per Eq (1.1) with the periodic log-volatility process, i.e.,
$h_t=\alpha(t)+\left(\beta_1(t)I_{\{X_{t-1}>0\}}+\beta_2(t)I_{\{X_{t-1}\leq 0\}}\right)h_{t-1}+\gamma(t)\eta_t.$ | (1.3) |
In Eq (1.3), the functions $\alpha(\cdot)$, $\beta_1(\cdot)$, $\beta_2(\cdot)$, and $\gamma(\cdot)$ switch between $s$ seasons, i.e., $\alpha(\cdot)=\sum_{l=0}^{s-1}\alpha(l)I_{\Delta(l)}(\cdot)$, $\beta_1(\cdot)=\sum_{l=0}^{s-1}\beta_1(l)I_{\Delta(l)}(\cdot)$, $\beta_2(\cdot)=\sum_{l=0}^{s-1}\beta_2(l)I_{\Delta(l)}(\cdot)$, and $\gamma(\cdot)=\sum_{l=0}^{s-1}\gamma(l)I_{\Delta(l)}(\cdot)$, where $\Delta(l):=\{st+l,\ t\in\mathbb{Z}\}$. As per Eq (1.3), it is possible to express them equivalently in periodic form, as follows:
$h_{st+v}=\alpha(v)+\left(\beta_1(v)I_{\{X_{st+v-1}>0\}}+\beta_2(v)I_{\{X_{st+v-1}\leq 0\}}\right)h_{st+v-1}+\gamma(v)\eta_{st+v},$ | (1.4) |
for all $v\in\{1,\ldots,s\}$. Our model extends the scope of previous models, offering a more comprehensive framework for volatility analysis. It reduces to the standard TAR-SV model of Breidt [8] when $s=1$, and to the symmetric PAR-SVs model when $\beta_1(\cdot)=\beta_2(\cdot)$.
This paper is arranged in the following manner. Section 2 introduces the periodic linear state-space representation and delves into certain probabilistic properties of the proposed model. Section 3 outlines a direct approach to addressing the estimation problem, employing a periodic Kalman filter. Additionally, in Section 4, we introduce a Bayesian approach using the Griddy Gibbs sampler. The performance of our proposed estimation method is then evaluated through a simulation study in Section 5. Real-world applications to the spot rates of the euro against the Algerian dinar log-return series are examined in Section 6. Finally, Section 7 presents the conclusions that can be drawn from our study, while the proofs of the main results are provided in the appendices.
To enhance the statistical analysis of the proposed model, it is crucial to establish conditions that ensure certain probabilistic properties of PTAR-SVs, including periodic stationarity and the existence of higher-order moments. Many studies have been published on these properties, covering the asymmetric standard case, i.e., TAR-SV1 (see, e.g., [8]), the symmetric standard case, i.e., AR-SV1 (see, e.g., [21], and references therein), and the symmetric periodic case, i.e., PAR-SVs (see, e.g., [18,19], and references therein). As is customary in the modeling of periodic time-varying systems, we can express Eqs (1.1)–(1.4) in a convenient manner, similar to the approach used by Gladyshev [22] for periodic linear models: a time-invariant multivariate random coefficient AR-SV model is constructed by stacking the seasons $v\in\{1,\ldots,s\}$. Consequently, our focus shifts to the analysis of the properties inherent to this model. To clarify, if we define the $s$-vectors $\underline{X}_n':=(X_{sn+1},\ldots,X_{sn+s})$, $\underline{h}_n':=(h_{sn+1},\ldots,h_{sn+s})$, and $\underline{\eta}_n':=(\eta_{sn+1},\ldots,\eta_{sn+s})$, the model represented by (1.1)–(1.4) can then be formulated as a multivariate random coefficient AR-SV model,
$\begin{cases}\underline{X}_n=\Delta_n\exp\left(\tfrac{1}{2}\underline{h}_n\right)\\ \underline{h}_n=\Gamma_n^{(1)}\underline{h}_{n-1}+\underline{\Lambda}_n+\Gamma_n^{(2)}\underline{\eta}_n,\end{cases}$ | (2.1) |
where $\Delta_n:=\mathrm{diag}(e_{sn+1},\ldots,e_{sn+s})$, and $\underline{\Lambda}_n$, $\Gamma_n^{(1)}$, and $\Gamma_n^{(2)}$ are given by
$\underline{\Lambda}_n':=\left(\alpha(1),\ \ldots,\ \underbrace{\sum_{k=0}^{l-1}\left\{\prod_{v=0}^{k-1}\zeta_{sn+l-v}\right\}\alpha(l-k)}_{l\text{-th entry}},\ \ldots,\ \sum_{k=0}^{s-1}\left\{\prod_{v=0}^{k-1}\zeta_{sn+s-v}\right\}\alpha(s-k)\right),$

$\Gamma_n^{(1)}:=\left(\begin{array}{cc}\begin{array}{c}\zeta_{sn+1}\\ \zeta_{sn+1}\zeta_{sn+2}\\ \vdots\\ \prod_{v=0}^{s-1}\zeta_{sn+v}\end{array} & O_{(s,s-1)}\end{array}\right),\quad\text{with}\quad \zeta_n:=\beta_1(n)I_{\{e_{n-1}>0\}}+\beta_2(n)I_{\{e_{n-1}\leq 0\}},$

$\Gamma_n^{(2)}:=\left(\begin{array}{cccc}\gamma(1) & 0 & \cdots & 0\\ \zeta_{sn+2}\gamma(1) & \gamma(2) & \cdots & 0\\ \vdots & \ddots & \ddots & \vdots\\ \left\{\prod_{v=0}^{s-2}\zeta_{sn+s-v}\right\}\gamma(1) & \cdots & \zeta_{sn+s}\gamma(s-1) & \gamma(s)\end{array}\right).$
Here, O(n,m) signifies an n×m matrix in which all entries are zero and the function I{.} refers to an indicator function. Deriving directly from the earlier equivalent formulation, we can deduce the following properties:
In this context, our current objective is to establish conditions ensuring the existence of a unique strictly stationary solution (in the periodic sense), as defined by Ghezal et al. [5,23], for a PTAR-SVs process. We begin by examining the strict stationarity of the model represented by Eq (2.1), using a fundamental tool: the top Lyapunov exponent for random matrices that are independent and periodically distributed (i.p.d.). Given that the sequence $\{(\Gamma_n^{(1)},\underline{\Lambda}_n+\Gamma_n^{(2)}\underline{\eta}_n),n\in\mathbb{Z}\}$ is i.p.d. and both $E\{\log^+\|\Gamma_0^{(1)}\|\}$ and $E\{\log^+\|\underline{\Lambda}_0+\Gamma_0^{(2)}\underline{\eta}_0\|\}$ are finite, where $\log^+(x)=(\log x)\vee 0$, Brandt's theorem [24] (also presented as Theorem 1.1 in Bougerol and Picard [25]) implies that a sufficient condition for the model described by Eq (2.1) to admit a non-anticipative strictly stationary solution is that the top Lyapunov exponent associated with the i.p.d. sequence of matrices $\Gamma:=(\Gamma_n^{(1)},n\in\mathbb{Z})$, defined as $\gamma(\Gamma):=\inf_{m\geq 1}\frac{1}{m}E\left\{\log\left\|\prod_{l=0}^{m-1}\Gamma_{n-l}^{(1)}\right\|\right\}$, is negative. Taking a multiplicative norm, we can establish the following inequality:
$\gamma(\Gamma)\leq\inf_{n\geq 1}\frac{1}{n}E\left\{\log\left|\prod_{j=0}^{sn-1}\zeta_{sn-j}\right|\right\}=\log\prod_{j=1}^{s}\left(\delta|\beta_1(j)|+(1-\delta)|\beta_2(j)|\right),$
where δ=P(e0>0). We are now able to present a fundamental result that establishes a sufficient condition for achieving strict periodic stationarity:
Theorem 1. The PTAR-SVs model, given by (1.1)–(1.4), admits a unique, non-anticipative, strictly periodically stationary, and periodically ergodic solution, given for $t\in\mathbb{Z}$ and $v\in\{1,\ldots,s\}$ by:
$X_{st+v}=e_{st+v}\exp\left(\frac{1}{2}\sum_{m\geq 0}\left\{\prod_{l=0}^{m-1}\zeta_{st+v-l}\right\}\left(\alpha(v-m)+\gamma(v-m)\eta_{st+v-m}\right)\right),$ | (2.2) |
where the series in (2.2) converges almost surely, if and only if
$\prod_{j=1}^{s}\left(\delta|\beta_1(j)|+(1-\delta)|\beta_2(j)|\right)<1.$ | (2.3) |
Example 1. In Table 1 below, we summarize condition (2.3) for various specific cases.
In the case of the PTAR-SVs model, the presence of explosive seasons, indicated by $\delta|\beta_1(v)|+(1-\delta)|\beta_2(v)|\geq 1$, does not necessarily rule out a strictly periodically stationary solution. Specifically, when $s=2$ and under parameter restrictions such as $\beta_1(2)=2\beta_2(1)=a$ and $\beta_1(1)+1=\beta_2(2)=b$, with $e_t$ following a Student's $t(3)$ distribution, Figure 1 below illustrates the regions of strict periodic stationarity.
Table 1. Condition (2.3) for particular specifications.
Specification | Condition (2.3) |
Standard TAR-SV1 ($s=1$) | $\delta|\beta_1(1)|+(1-\delta)|\beta_2(1)|<1$ (cf. Breidt [8]) |
Standard symmetric AR-SV1 | $|\beta_1(1)|<1$ (cf. Francq and Zakoïan [26]) |
Symmetric PAR-SVs | $\prod_{j=1}^{s}|\beta_1(j)|<1$ (cf. Aknouche [18]) |
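To make condition (2.3) concrete, the following minimal Python sketch (function names are ours, not from the paper) simulates a PTAR-SV path under Gaussian innovations, for which $\delta=P(e_0>0)=1/2$, and evaluates the left-hand side of (2.3). Note that one season may be explosive while the product over seasons remains below one:

```python
import numpy as np

def stationarity_factor(beta1, beta2, delta=0.5):
    """Left-hand side of condition (2.3): prod_j (delta*|beta1(j)| + (1-delta)*|beta2(j)|)."""
    b1, b2 = np.abs(beta1), np.abs(beta2)
    return np.prod(delta * b1 + (1.0 - delta) * b2)

def simulate_ptar_sv(alpha, beta1, beta2, gamma, n, rng=None):
    """Simulate n observations from the PTAR-SV model (1.1)-(1.3), period s = len(alpha)."""
    rng = np.random.default_rng(rng)
    s = len(alpha)
    h, x = np.zeros(n), np.zeros(n)
    h_prev, x_prev = 0.0, 1.0
    for t in range(n):
        v = t % s
        # regime switches on the sign of the lagged observation, as in Eq (1.3)
        beta = beta1[v] if x_prev > 0 else beta2[v]
        h[t] = alpha[v] + beta * h_prev + gamma[v] * rng.standard_normal()
        x[t] = rng.standard_normal() * np.exp(0.5 * h[t])
        h_prev, x_prev = h[t], x[t]
    return x, h

alpha, beta1, beta2, gamma = [0.1, -0.2], [0.3, 1.1], [0.9, 0.4], [0.2, 0.3]
print(stationarity_factor(beta1, beta2))   # 0.45 < 1 despite the explosive season beta1(2) = 1.1
x, h = simulate_ptar_sv(alpha, beta1, beta2, gamma, 1000, rng=0)
```

The simulated path stays stable because the per-period product, not each seasonal coefficient, governs strict periodic stationarity.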
Other properties, including periodic geometric ergodicity, strong mixing, and moments of the PTAR-SVs model, are also provided.
We now delve into the statistical properties of PTAR-SVs processes, focusing on periodic geometric ergodicity and strong mixing. We first establish that $(\underline{h}_n,n\in\mathbb{Z})$ forms a Markov chain with state space $(\mathbb{R}^s,\mathcal{B}_{\mathbb{R}^s})$, where $\mathcal{B}_{\mathbb{R}^s}$ denotes the Borel $\sigma$-field on $\mathbb{R}^s$. This Markov chain has time-homogeneous $n$-step transition probabilities $P^n(\underline{h},A)=P(\underline{h}_n\in A|\underline{h}_0=\underline{h})$, for $\underline{h}\in\mathbb{R}^s$ and $A\in\mathcal{B}_{\mathbb{R}^s}$, with $P^1(\underline{h},A)=P(\underline{h},A)$. The invariant probability of the Markov chain is defined by $\pi(A)=\int P(\underline{h},A)\pi(d\underline{h})$. Furthermore, if $\lambda$ is the Lebesgue measure on $(\mathbb{R}^s,\mathcal{B}_{\mathbb{R}^s})$, then $(\underline{h}_n,n\in\mathbb{Z})$ is $\lambda$-irreducible and aperiodic. As a consequence, $(\underline{h}_n,n\in\mathbb{Z})$ is strong Feller (for a more detailed discussion, refer to Meyn and Tweedie [27]). We can then state the following result:
Theorem 2. Under the condition stated in (2.3), our process (Xt), described by Eqs (1.1)–(1.4), exhibits geometric periodic ergodicity. Furthermore, if initialized from its invariant measure, (Xt) (resp., (ht)) demonstrates strict periodic stationarity and periodic β-mixing with exponential decay.
If the distribution of $(e_t,t\in\mathbb{Z})$ is symmetric, then the odd-order moments of $(X_t,t\in\mathbb{Z})$, when they exist, are zero. Furthermore, assuming that $E\{e_t^{2m}\}<\infty$ for all $m\in\mathbb{N}^*$, we can calculate the even moments of $(X_t,t\in\mathbb{Z})$ using well-established results for the log-normal distribution. The theorem summarizing these conditions can be presented as follows:
Theorem 3. Let $(X_t,t\in\mathbb{Z})$ be a strictly periodically stationary solution to Eqs (1.1)–(1.4), where $E\{e_t^{2m}\}<\infty$ holds for all $m\in\mathbb{N}^*$. A sufficient condition for $E\{X_t^{2m}\}$ to be finite is:
$\prod_{k\geq 0}E\left\{\exp\left(m\left\{\prod_{j=0}^{k-1}\zeta_{st+v-j}\right\}\left(\alpha(v-k)+\gamma(v-k)\eta_{st+v-k}\right)\right)\right\}<\infty\quad\text{for }v=1,\ldots,s.$ | (2.4) |
Additionally, the closed-form expression for the 2m-th moment of Xt is:
$\Omega^{(2m)}(v):=E\{X_{st+v}^{2m}\}=E\{e_{st+v}^{2m}\}\prod_{k\geq 0}E\left\{\exp\left(m\left\{\prod_{j=0}^{k-1}\zeta_{st+v-j}\right\}\left(\alpha(v-k)+\gamma(v-k)\eta_{st+v-k}\right)\right)\right\}.$
We next present the autocovariance of the powered process. This autocovariance, denoted $\Xi_{v,X}^{(2m)}(n)$, is valuable for both model identification and the derivation of specific estimation methods. It is defined as $\Xi_{v,X}^{(2m)}(n)=E\{X_{st+v}^{2m}X_{st+v-n}^{2m}\}$.
Theorem 4. Let $(X_t,t\in\mathbb{Z})$ be a strictly periodically stationary solution to Eqs (1.1)–(1.4), where $E\{e_t^{4m}\}<\infty$ holds for any positive integer $m$. Under conditions (2.3), (2.4), and
$\prod_{k\geq n}E\left\{\exp\left(m\left(1+\left\{\prod_{j=0}^{n-1}\zeta_{st+v-j}^{-1}\right\}\right)\left\{\prod_{j=0}^{k-1}\zeta_{st+v-j}\right\}\left(\alpha(v-k)+\gamma(v-k)\eta_{st+v-k}\right)\right)\right\}<\infty,$
for n≥0, v=1,…,s, we have
$\Xi_{v,X}^{(2m)}(n)=\Xi_{v,e}^{(2m)}(n)\times\prod_{k=0}^{n-1}E\left\{\exp\left(m\left\{\prod_{j=0}^{k-1}\zeta_{st+v-j}\right\}\left(\alpha(v-k)+\gamma(v-k)\eta_{st+v-k}\right)\right)\right\}\times\prod_{k\geq n}E\left\{\exp\left(m\left(1+\left\{\prod_{j=0}^{n-1}\zeta_{st+v-j}^{-1}\right\}\right)\left\{\prod_{j=0}^{k-1}\zeta_{st+v-j}\right\}\left(\alpha(v-k)+\gamma(v-k)\eta_{st+v-k}\right)\right)\right\}.$
Hence, the autocovariance function for the process (X2mt,t∈Z,m∈N∗) can be expressed as follows:
$\mathrm{Cov}\left(X_{st+v}^{2m},X_{st+v-n}^{2m}\right)=\Xi_{v,X}^{(2m)}(n)-\Omega^{(2m)}(v)\,\Omega^{(2m)}(v-n),$
for n≥0, v=1,…,s.
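The theoretical quantities above have natural sample counterparts. As an illustration, the following Python sketch (our own helper, not from the paper) computes the empirical periodic autocovariance of $X^{2m}$ by season, which can be compared against the closed forms of Theorems 3 and 4 for model identification:

```python
import numpy as np

def periodic_autocov(x, s, n_lag, m=1):
    """Empirical counterpart of Cov(X^{2m}_{st+v}, X^{2m}_{st+v-n}),
    returned as an (s, n_lag+1) array indexed by season v and lag n."""
    y = np.asarray(x) ** (2 * m)
    out = np.zeros((s, n_lag + 1))
    for v in range(s):
        for n in range(n_lag + 1):
            idx = np.arange(v, len(y), s)   # times congruent to season v
            idx = idx[idx - n >= 0]         # keep pairs with a valid lagged partner
            a, b = y[idx], y[idx - n]
            out[v, n] = np.mean(a * b) - np.mean(a) * np.mean(b)
    return out
```

For a strictly periodically stationary series, these estimates should stabilize as the number of observed periods grows; a seasonal pattern across rows is the empirical signature of periodically correlated volatility.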
Here, we discuss the implementation of the QML estimator (QMLE) based on the periodic Kalman filter for estimating the parameters of the PTAR-SVs model. Let $\underline{\theta}':=(\underline{\theta}_1,\ldots,\underline{\theta}_s)\in\Theta\subset\mathbb{R}^{4s}$, where $\underline{\theta}_v':=(\underline{\varphi}_v',\gamma(v))$ and $\underline{\varphi}_v':=(\alpha(v),\beta_1(v),\beta_2(v))$ for all $v\in\{1,\ldots,s\}$. The actual parameter value, denoted $\underline{\theta}_0\in\Theta$, is unknown and must be estimated. To this end, let $\underline{X}=\{X_1,\ldots,X_{sN}\}$ denote an observed sample from the unique, causal, strictly periodically stationary solution of (1.1)–(1.4). It is sensible to express the quasi-likelihood function for $\underline{\theta}$ in innovation form, as follows:
$\log L_{\underline{\theta}}(\underline{X})=-\frac{Ns}{2}\log(2\pi)-\frac{1}{2}\sum_{v=1}^{s}\sum_{t=0}^{N-1}\left(\log\left(E\{\hat{\omega}_{st+v}^2\}\right)+\frac{\hat{\omega}_{st+v}^2}{E\{\hat{\omega}_{st+v}^2\}}\right),$
where $\hat{\omega}_t$ represents the sample innovation at time $t$, defined as $\hat{\omega}_t=\log(X_t^2)-\hat{X}_{t|t-1}$, with $\hat{X}_{t|t-1}$ the optimal linear predictor of $\log(X_t^2)$ based on the observations $\log(X_1^2),\ldots,\log(X_{t-1}^2)$. A QMLE of $\underline{\theta}$ is any measurable solution $\hat{\underline{\theta}}_n$ such that:
$\hat{\underline{\theta}}_n=\arg\max_{\underline{\theta}\in\Theta}\log L_{\underline{\theta}}(\underline{X}).$ | (3.1) |
The one-step-ahead predictor $\hat{h}_{t|t-1}$ of the log-volatility and its mean square error $Q_{t|t-1}=E\{(\hat{h}_{t|t-1}-h_t)^2\}$ can be recursively computed using the periodic Kalman filter as follows:
$\begin{aligned}\hat{h}_{t|t-1}&=\alpha(t)+\left(\beta_1(t)I_{\{\Lambda_{t-1}>0\}}+\beta_2(t)I_{\{\Lambda_{t-1}\leq 0\}}\right)\Lambda_{t-1},\\ \Lambda_t&=\hat{h}_{t|t-1}+Q_{t|t-1}\Omega_t^{-1}\left(\log(X_t^2)-\hat{h}_{t|t-1}-E\{\log e_0^2\}\right),\\ Q_{t|t-1}&=\gamma^2(t)+\left(\beta_1^2(t)I_{\{\Lambda_{t-1}>0\}}+\beta_2^2(t)I_{\{\Lambda_{t-1}\leq 0\}}\right)\Gamma_{t-1},\\ \Omega_t&=Q_{t|t-1}+\mathrm{var}(\log e_0^2),\\ \Gamma_t&=Q_{t|t-1}-Q_{t|t-1}^2\Omega_t^{-1},\qquad t=2,\ldots,Ns,\end{aligned}$
with start-up values ˆh1|0=E{h1} and Q1|0=var(h1).
The estimation of the unknown parameter is achieved by maximizing the quasi-log-likelihood logLθ_(X_). However, explicit formulas for the estimates at the maximum of Lθ_(X_) are not readily available, necessitating the application of numerical optimization methods.
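The recursions above can be coded directly. The following Python sketch (a simplified scalar implementation under our own conventions, with Gaussian $e_t$ so that $E\{\log e_0^2\}\approx -1.2704$ and $\mathrm{var}(\log e_0^2)=\pi^2/2$; here the regime indicator is taken on the sign of the observed lagged return, consistent with Eq (1.3)) builds the negative quasi-log-likelihood, which can then be passed to any numerical optimizer:

```python
import numpy as np

E_LOG_CHI2 = -1.2704          # E{log e_0^2} for Gaussian e_t
V_LOG_CHI2 = np.pi ** 2 / 2   # var{log e_0^2} for Gaussian e_t

def qml_neg_loglik(theta, x, s):
    """Gaussian QML objective for the linearized PTAR-SV model via a scalar
    periodic Kalman filter; theta stacks (alpha(v), beta1(v), beta2(v), gamma(v)), v=1..s."""
    theta = np.asarray(theta, dtype=float).reshape(s, 4)
    z = np.log(np.asarray(x, dtype=float) ** 2)
    h_filt, p_filt = 0.0, 1.0            # start-up values
    nll = 0.0
    for t in range(len(z)):
        a, b1, b2, g = theta[t % s]
        b = b1 if (t == 0 or x[t - 1] > 0) else b2
        h_pred = a + b * h_filt          # state prediction
        p_pred = g ** 2 + b ** 2 * p_filt
        omega = p_pred + V_LOG_CHI2      # innovation variance
        innov = z[t] - h_pred - E_LOG_CHI2
        nll += 0.5 * (np.log(omega) + innov ** 2 / omega)
        k = p_pred / omega               # Kalman gain
        h_filt = h_pred + k * innov      # filtered state and MSE
        p_filt = p_pred - k * p_pred
    return nll

# usage (hypothetical): scipy.optimize.minimize(qml_neg_loglik, theta0, args=(x, s))
```

Because the objective has no closed-form maximizer, a derivative-free or quasi-Newton routine is the practical choice, mirroring the numerical optimization noted in the text.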
It is crucial to highlight that, under appropriate conditions, the QMLE, ˆθ_n, which minimizes the loss function −logLθ_(X_), has been demonstrated to be consistent and asymptotically normally distributed (see Ljung [28]). The asymptotic covariance matrix has been established to be the inverse of the asymptotic information matrix. In spite of these advantageous asymptotic properties, the efficacy of the periodic Kalman filter and periodic Chandrasekhar filter recursions may diminish in situations that deviate from normality or linearity. It is widely acknowledged that the QML estimator may not be optimal in finite samples, leading researchers to explore Monte Carlo-based approximations. This is particularly relevant to enhancing performance, especially in the context of state-space models such as SV models.
Embarking on the exploration of maximum likelihood through the expectation-maximization (EM) algorithm, augmented with particle filters and smoothers, marks a significant step forward in enhancing the robustness of parameter estimation in the context of nonlinear and/or non-Gaussian state-space models. Particle filters, recognized as sequential Monte Carlo methods, present a potent alternative to the conventional Kalman filter, especially when confronted with optimal estimation challenges within nonlinear/non-Gaussian state-space frameworks. Comprehensive surveys of particle methods are available in Arulampalam et al. [29] and Doucet et al. [30], offering valuable insights into their applications and methodologies.
To delve deeper into the application of these methodologies, we turn our attention to the linearized representation obtained by taking the logarithm of $X_{sn+v}^2$ in the assumed periodically stationary PTAR-SV model. Let $\underline{h}=\{h_0,h_1,\ldots,h_{sN}\}$ and $\underline{\chi}=\{\log X_1^2,\ldots,\log X_{sN}^2\}$ denote the log-volatility data and the observed data, respectively. Given a specific realization of $\underline{h}$, the complete log-likelihood function for the parameter $\underline{\theta}$ can be formulated in the following manner:
$\begin{aligned}\log L_{\underline{\theta}}(\underline{h},\underline{\chi})&=D-\frac{1}{2}\left(\frac{h_0-\kappa_0}{\tau_0}\right)^2-\frac{1}{2}\log\tau_0^2-\frac{N}{2}\sum_{v=1}^{s}\log\gamma^2(v)\\&\quad-\frac{1}{2}\sum_{v=1}^{s}\sum_{t=0}^{N-1}\exp\left(\log X_{st+v}^2-h_{st+v}\right)+\frac{1}{2}\sum_{v=1}^{s}\sum_{t=0}^{N-1}\left(\log X_{st+v}^2-h_{st+v}\right)\\&\quad-\frac{1}{2}\sum_{v=1}^{s}\sum_{t=0}^{N-1}\left(h_{st+v}-\alpha(v)-\left(\beta_1(v)I_{\{X_{st+v-1}>0\}}+\beta_2(v)I_{\{X_{st+v-1}<0\}}\right)h_{st+v-1}\right)^2\gamma^{-2}(v),\end{aligned}$
where $D$ represents a constant independent of $\underline{\theta}$ and $h_0$ follows a Gaussian distribution $\mathcal{N}(\kappa_0,\tau_0^2)$.
In instances of incomplete data, a widely adopted approach to parameter estimation is the iterative EM algorithm (see Dempster et al. [31]), recognized for its versatility in computing the maximum likelihood estimate (MLE). The EM algorithm generates a sequence of values $\hat{\underline{\theta}}^{(l)}$, $l\geq 1$, which progressively refine the MLE. Each iteration consists of two steps: an expectation step (E-step) followed by a maximization step (M-step). At the outset of the $l$-th iteration, the parameter estimate $\hat{\underline{\theta}}^{(l-1)}$ from the preceding iteration is available. The E-step involves the $E(\cdot,\cdot)$ function, an integral component of the EM algorithm,
$\begin{aligned}E(\underline{\theta},\hat{\underline{\theta}}^{(l-1)})&=E\left\{\log L_{\underline{\theta}}(\underline{h},\underline{\chi})\,\middle|\,\underline{h},\underline{\chi},\hat{\underline{\theta}}^{(l-1)}\right\}\\&=D-\frac{1}{2}\left((h_{0,sN}-\kappa_0)^2+H_{0,sN}\right)\tau_0^{-2}-\frac{1}{2}\log\tau_0^2-\frac{N}{2}\sum_{v=1}^{s}\log\gamma^2(v)\\&\quad-\frac{1}{2}\sum_{v=1}^{s}\sum_{t=0}^{N-1}E\left\{\exp\left(\log X_{st+v}^2-h_{st+v}\right)\middle|\underline{h},\underline{\chi},\hat{\underline{\theta}}^{(l-1)}\right\}+\frac{1}{2}\sum_{v=1}^{s}\sum_{t=0}^{N-1}\left(\log X_{st+v}^2-h_{st+v,sN}\right)\\&\quad-\frac{1}{2}\sum_{v=1}^{s}\sum_{t=0}^{N-1}\Big(\left(h_{st+v,Ns}-\alpha(v)-\left(\beta_1(v)I_{\{e_{st+v-1}>0\}}+\beta_2(v)I_{\{e_{st+v-1}<0\}}\right)h_{st+v-1,Ns}\right)^2+H_{st+v,Ns}\\&\qquad+\left(\beta_1^2(v)I_{\{e_{st+v-1}>0\}}+\beta_2^2(v)I_{\{e_{st+v-1}<0\}}\right)H_{st+v-1,Ns}-2\left(\beta_1(v)I_{\{e_{st+v-1}>0\}}+\beta_2(v)I_{\{e_{st+v-1}<0\}}\right)H_{st+v,st+v-1}^{(Ns)}\Big)\gamma^{-2}(v),\end{aligned}$ | (3.2) |
where
$h_{t,Ns}=E\{h_t|\underline{h},\underline{\chi},\hat{\underline{\theta}}^{(l-1)}\},\quad H_{t,Ns}=E\{(h_t-h_{t,Ns})^2|\underline{h},\underline{\chi},\hat{\underline{\theta}}^{(l-1)}\},\quad H_{t,t-1}^{(Ns)}=E\{(h_t-h_{t,Ns})(h_{t-1}-h_{t-1,Ns})|\underline{h},\underline{\chi},\hat{\underline{\theta}}^{(l-1)}\}.$
Prior to the transition to the M-step, a crucial prerequisite is the evaluation of various quantities, including E{exp(logX2st+v−hst+v)|h_,χ_,ˆθ_(l−1)}, ht,Ns, Ht,Ns, and H(Ns)t,t−1. These quantities can be sequentially approximated over time through the application of particle filtering and smoothing algorithms—an extension of Kim and Stoffer's approach [16]. To tackle this task, we employ the particle filter algorithm, as outlined below:
Particle Filter Algorithm for PTAR-SV:
ⅰ. Initialization: Set M as the number of the particles, the initial distribution g(l)0∼P0(h0) with u(l)0=M−1, l=1,…,M.
ⅱ. Particle prediction: For t≥1 and l=1,…,M:
1) Generate η(l)t∼N(0,1).
2) Calculate P(l)t=α(t)+(β1(t)I{et−1>0}+β2(t)I{et−1<0})g(l)t−1+γ(t)η(l)t.
ⅲ. Weight update:
$u_t^{(l)}=u_{t-1}^{(l)}\,p(\log X_t^2|P_t^{(l)})\propto u_{t-1}^{(l)}\exp\left(-X_t^2\exp(-P_t^{(l)})/2\right)\times\exp\left((\log X_t^2-P_t^{(l)})/2\right).$
ⅳ. Normalize the importance weights: Compute ˜u(l)t=(M∑l=1u(l)t)−1×u(l)t, for l=1,…,M.
ⅴ. Compute the measure of degeneracy: neff = (M∑l=1(˜u(l)t)2)−1. If neff≤nT (typically 2nT=M), resample with replacement M equally weighted particles {g(l)t,l=1,…,M} from the set {P(l)t,l=1,....,M} according to the normalized weights, or else g(.)t=P(.)t.
ⅵ. Assign the particle: Assign the particle set {g(l)t,l=1,…,M} a weight {˜u(l)t,l=1,…,M}.
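Steps i–vi can be sketched as follows in Python (a minimal bootstrap implementation under our own naming conventions, with a standard normal initial distribution and the regime indicator taken on the sign of the observed lagged return; weights are handled on the log scale for numerical stability):

```python
import numpy as np

def particle_filter(x, theta, M=500, rng=None):
    """Bootstrap particle filter for the PTAR-SV log-volatility.
    theta = (alpha, beta1, beta2, gamma), each of length s; returns filtered means of h_t."""
    rng = np.random.default_rng(rng)
    alpha, beta1, beta2, gamma = (np.asarray(a, dtype=float) for a in theta)
    s = len(alpha)
    g = rng.standard_normal(M)              # i. initial particles, equal weights
    w = np.full(M, 1.0 / M)
    h_filt = np.zeros(len(x))
    for t in range(len(x)):
        v = t % s
        b = beta1[v] if (t == 0 or x[t - 1] > 0) else beta2[v]
        # ii. particle prediction through the threshold AR transition
        p = alpha[v] + b * g + gamma[v] * rng.standard_normal(M)
        # iii. weight update with the log-chi-square observation density
        z = np.log(x[t] ** 2)
        logw = np.log(w) - 0.5 * np.exp(z - p) + 0.5 * (z - p)
        logw -= logw.max()
        w = np.exp(logw)
        w /= w.sum()                        # iv. normalize importance weights
        h_filt[t] = np.sum(w * p)
        if 1.0 / np.sum(w ** 2) <= M / 2:   # v. degeneracy check: n_eff <= M/2
            idx = rng.choice(M, size=M, p=w)
            g, w = p[idx], np.full(M, 1.0 / M)
        else:                               # vi. carry particles and weights forward
            g = p
    return h_filt
```

The filtered means produced here are the filtering analogues of the smoothed quantities needed in the E-step; the smoothing pass described next reweights these same particles using information after time $t$.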
The subsequent particle smoothing algorithm is devised to incorporate information available after a time t, providing approximations for the required quantities E{exp(logX2st+v−hst+v)|h_,χ_,ˆθ_(l−1)}, ht,Ns, Ht,Ns and H(Ns)t,t−1 essential for the E-step.
Particle Smoothing Algorithm for PTAR-SV:
ⅰ. Initialization: For each l=1,…,M, choose q(l)t based on the function g(l)t with a probability ˜u(l)t. Set ˜U(l)t=M−1.
ⅱ. Particle Smoothing Iteration: For t≥1 and l=1,…,M:
1) Calculate the smoothed weights: For m=1,…,M:
$U_{t-1|t}^{(m)}=\tilde{u}_{t-1}^{(m)}\exp\left(-\left(q_t^{(l)}-\alpha(t)-\left(\beta_1(t)I_{\{e_{t-1}>0\}}+\beta_2(t)I_{\{e_{t-1}<0\}}\right)g_{t-1}^{(m)}\right)^2/2\gamma^2(t)\right).$
2) Normalize the modified weights: For m=1,…,M:
$\tilde{U}_{t-1|t}^{(m)}=\left(\sum_{l=1}^{M}U_{t-1|t}^{(l)}\right)^{-1}U_{t-1|t}^{(m)}.$
3) Choose q(l)t−1=g(m)t−1, with a probability ˜U(m)t−1|t.
ⅲ. Final Computation: Compute
$E\{\exp(\log X_t^2-h_t)|\underline{h},\underline{\chi},\hat{\underline{\theta}}^{(l-1)}\}=M^{-1}\sum_{l=1}^{M}\exp\left(\log X_t^2-q_t^{(l)}\right),$
$\hat{h}_{t,Ns}=M^{-1}\sum_{l=1}^{M}q_t^{(l)},\quad \hat{H}_{t,Ns}=(M-1)^{-1}\sum_{l=1}^{M}\left(q_t^{(l)}-\hat{h}_{t,Ns}\right)^2,$
and
$\hat{H}_{t,t-1}^{(Ns)}=M^{-1}\sum_{l=1}^{M}\left(q_t^{(l)}-\hat{h}_{t,Ns}\right)\left(q_{t-1}^{(l)}-\hat{h}_{t-1,Ns}\right).$
After substituting the approximations for ht,Ns, Ht,Ns, and H(Ns)t,t−1, we proceed to the M-step, where the updated parameter ˆθ_(l) can be obtained by maximizing the E(.,.)-function with respect to θ_ given the previous estimate ˆθ_(l−1). This is expressed as:
ˆθ_(l)=argmaxθ_E(θ_,ˆθ_(l−1)). |
The first-order derivatives of Eq (3.2) with respect to the unknown parameters $\alpha(v)$, $\beta_1(v)$, $\beta_2(v)$, and $\gamma(v)$, for $v=1,\ldots,s$, are provided below:
$\frac{\partial}{\partial\beta_1(v)}E(\underline{\theta},\hat{\underline{\theta}}^{(l-1)})=\sum_{t=0}^{N-1}\left(\left(h_{st+v,Ns}-\alpha(v)-\beta_1(v)I_{\{e_{st+v-1}>0\}}h_{st+v-1,Ns}\right)I_{\{e_{st+v-1}>0\}}h_{st+v-1,Ns}-\beta_1(v)I_{\{e_{st+v-1}>0\}}H_{st+v-1,Ns}+I_{\{e_{st+v-1}>0\}}H_{st+v,st+v-1}^{(Ns)}\right)\gamma^{-2}(v),$

$\frac{\partial}{\partial\beta_2(v)}E(\underline{\theta},\hat{\underline{\theta}}^{(l-1)})=\sum_{t=0}^{N-1}\left(\left(h_{st+v,Ns}-\alpha(v)-\beta_2(v)I_{\{e_{st+v-1}<0\}}h_{st+v-1,Ns}\right)I_{\{e_{st+v-1}<0\}}h_{st+v-1,Ns}-\beta_2(v)I_{\{e_{st+v-1}<0\}}H_{st+v-1,Ns}+I_{\{e_{st+v-1}<0\}}H_{st+v,st+v-1}^{(Ns)}\right)\gamma^{-2}(v),$

$\frac{\partial}{\partial\alpha(v)}E(\underline{\theta},\hat{\underline{\theta}}^{(l-1)})=\sum_{t=0}^{N-1}\left(h_{st+v,Ns}-\alpha(v)-\left(\beta_1(v)I_{\{e_{st+v-1}>0\}}+\beta_2(v)I_{\{e_{st+v-1}<0\}}\right)h_{st+v-1,Ns}\right)\gamma^{-2}(v),$

$\frac{\partial}{\partial\gamma(v)}E(\underline{\theta},\hat{\underline{\theta}}^{(l-1)})=-N\gamma^{-1}(v)+\gamma^{-3}(v)\sum_{t=0}^{N-1}\left(h_{st+v,Ns}-\alpha(v)-\left(\beta_1(v)I_{\{e_{st+v-1}>0\}}+\beta_2(v)I_{\{e_{st+v-1}<0\}}\right)h_{st+v-1,Ns}\right)^2+\gamma^{-3}(v)\sum_{t=0}^{N-1}\left(H_{st+v,Ns}+\left(\beta_1^2(v)I_{\{e_{st+v-1}>0\}}+\beta_2^2(v)I_{\{e_{st+v-1}<0\}}\right)H_{st+v-1,Ns}-2\left(\beta_1(v)I_{\{e_{st+v-1}>0\}}+\beta_2(v)I_{\{e_{st+v-1}<0\}}\right)H_{st+v,st+v-1}^{(Ns)}\right).$
Therefore, at the l-th iteration, the parameter estimates for α(v), β1(v), β2(v) , and γ(v) can be obtained by solving the following equations:
$\frac{\partial}{\partial\alpha(v)}E(\underline{\theta},\hat{\underline{\theta}}^{(l-1)})=0,\quad \frac{\partial}{\partial\beta_1(v)}E(\underline{\theta},\hat{\underline{\theta}}^{(l-1)})=0,\quad \frac{\partial}{\partial\beta_2(v)}E(\underline{\theta},\hat{\underline{\theta}}^{(l-1)})=0,\quad \frac{\partial}{\partial\gamma(v)}E(\underline{\theta},\hat{\underline{\theta}}^{(l-1)})=0.$
The closed-form solutions of these equations provide the updated parameter estimates at the $l$-th iteration:

$\hat{\beta}_1^{(l)}(v)=\sum_{j=1}^{4}L_j(v)\Big/\sum_{j=5}^{7}L_j(v),\quad \hat{\beta}_2^{(l)}(v)=\sum_{j=1}^{4}\tilde{L}_j(v)\Big/\sum_{j=5}^{7}\tilde{L}_j(v),$

$\hat{\alpha}^{(l)}(v)=N^{-1}\sum_{t=0}^{N-1}h_{st+v,Ns}-N^{-1}\sum_{t=0}^{N-1}\left(\hat{\beta}_1^{(l)}(v)I_{\{e_{st+v-1}>0\}}+\hat{\beta}_2^{(l)}(v)I_{\{e_{st+v-1}<0\}}\right)h_{st+v-1,Ns},$

$\hat{\gamma}^{(l)}(v)=\Big[N^{-1}\sum_{t=0}^{N-1}\left(h_{st+v,Ns}-\hat{\alpha}^{(l)}(v)-\left(\hat{\beta}_1^{(l)}(v)I_{\{e_{st+v-1}>0\}}+\hat{\beta}_2^{(l)}(v)I_{\{e_{st+v-1}<0\}}\right)h_{st+v-1,Ns}\right)^2+N^{-1}\sum_{t=0}^{N-1}H_{st+v,Ns}+N^{-1}\sum_{t=0}^{N-1}\left((\hat{\beta}_1^{(l)}(v))^2 I_{\{e_{st+v-1}>0\}}+(\hat{\beta}_2^{(l)}(v))^2 I_{\{e_{st+v-1}<0\}}\right)H_{st+v-1,Ns}-2N^{-1}\sum_{t=0}^{N-1}\left(\hat{\beta}_1^{(l)}(v)I_{\{e_{st+v-1}>0\}}+\hat{\beta}_2^{(l)}(v)I_{\{e_{st+v-1}<0\}}\right)H_{st+v,st+v-1}^{(Ns)}\Big]^{1/2},$
where
$L_1(v)=N^{-1}\left(\sum_{t=0}^{N-1}h_{st+v,Ns}h_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}+\sum_{t=0}^{N-1}H_{st+v,st+v-1}^{(Ns)}I_{\{e_{st+v-1}<0\}}\right)\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}\right)\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}\right),$

$L_2(v)=\left(\sum_{t=0}^{N-1}H_{st+v,st+v-1}^{(Ns)}I_{\{e_{st+v-1}>0\}}+\sum_{t=0}^{N-1}h_{st+v,Ns}h_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}\right)\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}<0\}}\right),$

$L_3(v)=-N^{-1}\left(\sum_{t=0}^{N-1}H_{st+v,st+v-1}^{(Ns)}I_{\{e_{st+v-1}>0\}}+\sum_{t=0}^{N-1}h_{st+v,Ns}h_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}\right)\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}\right)^2,$

$L_4(v)=-N^{-1}\left(\sum_{t=0}^{N-1}h_{st+v,Ns}\right)\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}\right)\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}<0\}}\right),$

$L_5(v)=\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}>0\}}\right)\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}<0\}}\right),$

$L_6(v)=-N^{-1}\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}\right)^2\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}<0\}}\right),$

$L_7(v)=-N^{-1}\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}>0\}}\right)\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}\right)^2,$
and
$\tilde{L}_1(v)=N^{-1}\left(\sum_{t=0}^{N-1}h_{st+v,Ns}h_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}+\sum_{t=0}^{N-1}H_{st+v,st+v-1}^{(Ns)}I_{\{e_{st+v-1}>0\}}\right)\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}\right)\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}\right),$

$\tilde{L}_2(v)=\left(\sum_{t=0}^{N-1}h_{st+v,Ns}h_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}+\sum_{t=0}^{N-1}H_{st+v,st+v-1}^{(Ns)}I_{\{e_{st+v-1}<0\}}\right)\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}>0\}}\right),$

$\tilde{L}_3(v)=-N^{-1}\left(\sum_{t=0}^{N-1}h_{st+v,Ns}h_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}+\sum_{t=0}^{N-1}H_{st+v,st+v-1}^{(Ns)}I_{\{e_{st+v-1}<0\}}\right)\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}\right)^2,$

$\tilde{L}_4(v)=-N^{-1}\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}\right)\left(\sum_{t=0}^{N-1}h_{st+v,Ns}\right)\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}>0\}}\right),$

$\tilde{L}_5(v)=\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}<0\}}\right)\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}>0\}}\right),$

$\tilde{L}_6(v)=-N^{-1}\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}<0\}}\right)\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}\right)^2,$

$\tilde{L}_7(v)=-N^{-1}\left(\sum_{t=0}^{N-1}h_{st+v-1,Ns}I_{\{e_{st+v-1}<0\}}\right)^2\left(\sum_{t=0}^{N-1}H_{st+v-1,Ns}I_{\{e_{st+v-1}>0\}}+\sum_{t=0}^{N-1}h_{st+v-1,Ns}^2 I_{\{e_{st+v-1}>0\}}\right).$
Remark 1. The asymptotic properties of periodic volatility models have been examined under general regularity conditions by several authors, notably Aknouche et al. [40,41]; the consistency and asymptotic normality of the QMLE can therefore be established by employing the standard theory of models with time-invariant parameters (see Dunsmuir [32]). This is evident when examining model (2.1). To this end, we refer to the corresponding multivariate time-invariant model presented in (2.1), which we transform into linear form, as follows:
\begin{equation} \left\{ \begin{array}{l} \underline{Z}_{n} = \underline{h}_{n}+\underline{\Upsilon}_{n}, \\ \underline{h}_{n} = \Gamma_{n}^{(1)}\underline{h}_{n-1}+\underline{\Lambda}_{n}+\Gamma_{n}^{(2)}\underline{\eta}_{n}, \end{array} \right. \end{equation} | (3.3) |
where \underline{Z}_{n}^{\prime}: = (\log(X_{sn+1}^{2}), ..., \log(X_{sn+s}^{2})) and \underline{\Upsilon}_{n}^{\prime}: = (\log(e_{sn+1}^{2}), ..., \log(e_{sn+s}^{2})). Utilizing (3.3), we can apply the theory presented by Dunsmuir [32] to determine the QMLE's asymptotic variance under the finite fourth-moment condition E\left\{ \left( \log(X_{sn+1}^{2})\right) ^{4}\right\} < \infty (see also Ruiz et al. [13,33]).
In the Bayesian MCMC estimation, the parameter \underline{\theta} and the unobserved volatilities \underline{h}^{\prime} = (h_{1}, h_{2}, ..., h_{sN}) in the model described by Eqs (1.1)–(1.4) are treated as random variables with a prior distribution, denoted by g(\underline{h}, \underline{\theta}). The objective is to infer the joint posterior distribution, g(\underline{h}, \underline{\theta}\left\vert \underline{X}\right.), given a series \underline{X} generated from Eqs (1.1)–(1.4) with Gaussian innovations. Assuming independence among the parameters \underline{h}, \underline{\varphi}^{\prime}: = (\underline{\varphi}_{1}^{\prime}, \underline{\varphi}_{2}^{\prime}, ..., \underline{\varphi}_{s}^{\prime}), and \underline{\xi}^{\prime}: = (\gamma_{1}^{2}, \gamma_{2}^{2}, ..., \gamma_{s}^{2}), due to the periodic structure of the PTAR-SV model, Gibbs sampling can be employed to estimate the joint posterior distribution. The Gibbs sampler draws from the conditional posterior distributions g(\underline{\varphi}\left\vert \underline{\xi}, \underline{X}, \underline{h}\right.), g(\gamma_{v}^{2}\left\vert \underline{\varphi}, \underline{\gamma}_{-\left\{ v\right\} }^{2}, \underline{X}, \underline{h}\right.) for v = 1, ..., s, and g(h_{t}\left\vert \underline{\varphi}, \underline{\xi}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.), where \underline{h}_{-\left\{ t\right\} } comprises all elements of the vector \underline{h} except the t-th element, h_{t}. Sampling directly from g(h_{t}\left\vert \underline{\varphi}, \underline{\xi}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) is complex, so we adopt the Griddy-Gibbs procedure, which is simpler to implement in the periodic context than the Metropolis-Hastings chain (for further discussion, readers may refer to [34,35]).
In the first step of the Gibbs sampler, we focus on sampling the parameter \underline{\varphi}. Before deriving its conditional posterior distribution, denoted by g(\underline{\varphi}\left\vert \underline{\xi}, \underline{X}, \underline{h}\right.), from conjugate prior distributions and linear regression theory, we express the PTAR equation as a standard linear regression. Specifically, by defining
S_{st+v}: = \left( O_{(1, 3(v-1))}, 1, h_{st+v-1}\mathbb{I}_{\left\{ e_{st+v-1} > 0\right\} }, h_{st+v-1}\mathbb{I}_{\left\{ e_{st+v-1} < 0\right\} }, O_{(1, 3(s-v))}\right), |
the model (1.1)–(1.4) can be reformulated into the following periodically linear regression:
h_{st+v} = S_{st+v}\underline{\varphi}+\gamma\left( v\right) \eta_{st+v}, \ v = 1, ..., s, \ t = 0, ..., N-1. |
Alternatively, it can also be represented as a standard regression:
\gamma^{-1}\left( v\right) h_{st+v} = \gamma^{-1}\left( v\right) S_{st+v}\underline{\varphi}+\eta_{st+v}, \ v = 1, ..., s, \ t = 0, ..., N-1, | (4.1) |
where the errors are i.i.d. Gaussian. Assuming knowledge of \gamma(v), v = 1, ..., s, and the initial observation h_{0}, the weighted least squares estimate \widehat{\underline{\varphi}}_{WLSE} of \underline{\varphi}, based on (4.1), takes the form:
\widehat{\underline{\varphi}}_{WLSE} = \left( \sum\limits_{0\leq t\leq N-1}\sum\limits_{1\leq v\leq s}\gamma^{-2}(v)S_{st+v}^{\prime}S_{st+v}\right) ^{-1}\sum\limits_{0\leq t\leq N-1}\sum\limits_{1\leq v\leq s}\gamma^{-2}(v)S_{st+v}^{\prime}h_{st+v}. |
This estimate follows a normal distribution \mathcal{N}(\underline{\varphi}, \mathrm{Cov}), where
\mathrm{Cov}^{-1} = \sum\limits_{0\leq t\leq N-1}\sum\limits_{1\leq v\leq s}\gamma^{-2}(v)S_{st+v}^{\prime}S_{st+v}. |
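To make the construction concrete, the estimate can be computed by stacking the weighted rows \gamma^{-1}(v)S_{st+v} into one design matrix and solving a single least-squares problem. The Python sketch below is ours, not the paper's: the helper name `wlse_phi` is hypothetical, and the threshold indicators are driven by the innovations e_{t}, as in the definition of S_{st+v} above.

```python
import numpy as np

def wlse_phi(h, e, gamma, s):
    """Weighted least-squares estimate of phi = (alpha(v), beta1(v), beta2(v))
    for v = 1..s, from the periodic regression h_{st+v} = S_{st+v} phi
    + gamma(v) eta_{st+v}.  h holds h_0, h_1, ..., h_{sN} (h[0] is the fixed
    initial value); e holds the innovations e_0, ..., e_{sN-1} driving the
    threshold indicators; gamma holds gamma(1), ..., gamma(s)."""
    sN = len(h) - 1
    X = np.zeros((sN, 3 * s))
    y = np.zeros(sN)
    for i in range(1, sN + 1):          # time index st+v
        v = (i - 1) % s                 # season, 0-based
        w = 1.0 / gamma[v]              # weight gamma^{-1}(v)
        X[i - 1, 3 * v] = w                                   # alpha(v) column
        X[i - 1, 3 * v + 1] = w * h[i - 1] * (e[i - 1] > 0)   # beta1(v) column
        X[i - 1, 3 * v + 2] = w * h[i - 1] * (e[i - 1] <= 0)  # beta2(v) column
        y[i - 1] = w * h[i]
    phi_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return phi_hat
```

On data simulated from the PTAR equation, `wlse_phi` recovers the seasonal intercepts and slopes up to sampling error.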
Under (4.1), the data's information about \underline{\varphi} is encapsulated in the weighted least squares estimate \widehat{\underline{\varphi}}_{WLSE}. To derive a closed-form expression for the conditional posterior g(\underline{\varphi}\left\vert \underline{\xi}, \underline{X}, \underline{h}\right.), we employ a Gaussian conjugate prior for \underline{\varphi}. Specifically, the prior distribution is Gaussian, denoted \underline{\varphi}\thicksim\mathcal{N}(\underline{\varphi}^{\circ}, \Psi^{\circ}), where \underline{\varphi}^{\circ} and \Psi^{\circ} are known hyperparameters tailored to yield a suitably diffuse yet informative prior. Thus, utilizing standard regression theory, the conditional posterior distribution of \underline{\varphi}\left\vert \underline{\xi}, \underline{X}, \underline{h}\right. is Gaussian, denoted as
\underline{\varphi}\left\vert \underline{\xi}, \underline{X}, \underline{h}\right. \thicksim\mathcal{N}(\overline{\underline{\varphi}}, \overline{\Psi}), | (4.2) |
where
\overline{\Psi}^{-1} = \sum\limits_{0\leq t\leq N-1}\sum\limits_{1\leq v\leq s}\gamma^{-2}(v)S_{st+v}^{\prime}S_{st+v}+\left( \Psi^{\circ}\right) ^{-1}, \ \ \overline{\Psi}^{-1}\overline{\underline{\varphi}} = \sum\limits_{0\leq t\leq N-1}\sum\limits_{1\leq v\leq s}\gamma^{-2}(v)S_{st+v}^{\prime}h_{st+v}+\left( \Psi^{\circ}\right) ^{-1}\underline{\varphi}^{\circ}. | (4.3) |
A couple of observations are warranted:
a. The block diagonal form of the matrix \mathrm{Cov} is mirrored in \Psi^{\circ}, provided the latter is also block diagonal. In other words, if the seasonal parameters \underline{\varphi}_{1}, \underline{\varphi}_{2}, …, \underline{\varphi}_{s} are assumed independent of each other, each with its own conjugate prior and appropriate hyperparameters, the same result follows.
b. Enhanced computational efficiency and stability in deriving \overline{\underline{\varphi}} and \overline{\Psi} can be achieved by setting \overline{\underline{\varphi}} = \overline{\underline{\varphi}}_{sN}, \overline{\Psi} = \overline{\Psi}_{sN}, and iteratively computing these quantities using the recursive least squares algorithm
\begin{array}{l} \overline{\underline{\varphi}}_{st+v} = \overline{\underline{\varphi}}_{st+v-1}+\overline{\Psi}_{st+v-1}S_{st+v}^{\prime}\left( h_{st+v}-S_{st+v-1}\overline{\underline{\varphi}}_{st+v-1}\right) \left( \gamma^{2}(v)+S_{st+v-1}\overline{\Psi}_{st+v-1}^{-1}S_{st+v}^{\prime}\right) ^{-1}, \\ \overline{\Psi}_{st+v}^{-1} = \overline{\Psi}_{st+v-1}^{-1}+\overline{\Psi}_{st+v-1}S_{st+v}^{\prime}S_{st+v-1}\overline{\Psi}_{st+v-1}^{-1}\left( \gamma^{2}(v)+S_{st+v-1}\overline{\Psi}_{st+v-1}^{-1}S_{st+v}^{\prime}\right) ^{-1}, \end{array} | (4.4) |
with starting values \overline{\underline{\varphi}}_{0} = \underline{\varphi}_{0} and \overline{\Psi}_{0}^{-1} = \overline{\Psi}_{0}.
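For intuition, a generic recursive least-squares (RLS) step of the kind invoked in (4.4) is sketched below; this is the textbook single-observation update rather than the paper's exact recursion, and the helper name is ours. Started from a diffuse prior matrix, it reproduces the batch least-squares solution observation by observation.

```python
import numpy as np

def rls_update(phi, P, x, y, gam2):
    """One recursive least-squares step.  phi: current estimate;
    P: current scaled covariance matrix; x: new regressor row;
    y: new scalar response; gam2: noise variance gamma^2(v)."""
    Px = P @ x
    k = Px / (gam2 + x @ Px)          # gain vector
    phi = phi + k * (y - x @ phi)     # update the estimate
    P = P - np.outer(k, Px)           # downdate the covariance
    return phi, P
```

With a large initial P (diffuse prior), running `rls_update` over all rows agrees with the batch solution of the same regression, which is the computational point of remark b.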
This approach may enhance numerical stability and reduce computation time, particularly for large periods s. In the second step of the Gibbs sampler, we sample the parameters \gamma^{2}(v), v = 1, \dots, s. Conjugate priors are utilized to derive a closed-form expression for the conditional posterior distribution of \gamma^{2}(v), given the data and the other parameters \underline{\gamma}_{-\left\{ v\right\} }^{2}. These priors are modeled using the inverted Chi-squared distribution:
\pi_{v}\tau_{v}\gamma^{-2}(v)\thicksim\chi_{\pi_{v}}^{2}, |
where \pi_{v} = \tau_{v}^{-1}, v = 1, ..., s. Given the parameters \underline{\varphi} and \underline{h}, and defining
\eta_{st+v} = h_{st+v}-\alpha(v)-\left( \beta_{1}(v)\mathbb{I}_{\left\{ X_{st+v-1} > 0\right\} }+\beta_{2}(v)\mathbb{I}_{\left\{ X_{st+v-1}\leq0\right\} }\right) h_{st+v-1}, \ v = 1, ..., s, \ t = 0, ..., N-1, | (4.5) |
then the errors \eta_{v}, \eta_{s+v}, ..., \eta_{s(N-1)+v} are i.i.d. \mathcal{N}(0, \gamma^{2}(v)) for v = 1, ..., s. Utilizing standard Bayesian linear regression theory, the conditional posterior distribution of \gamma^{2}(v) for v = 1, ..., s, given the data and the remaining parameters, conforms to an inverted Chi-squared distribution, represented as:
\pi_{v}\tau_{v}+\sum\limits_{0\leq t\leq N-1}\gamma^{-2}(v)\eta_{st+v}^{2}\thicksim\chi_{N+\pi_{v}-1}^{2}, \ v = 1, ..., s. | (4.6) |
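Under one common reading of such a conjugate update, a draw of \gamma^{2}(v) is obtained by dividing a posterior scale by a chi-squared variate. The sketch below assumes the scaled inverse chi-squared form with scale \pi_{v}\tau_{v}+\sum_{t}\eta_{st+v}^{2} and N+\pi_{v} degrees of freedom; both the helper name and this exact parameterization are our assumptions rather than the paper's.

```python
import numpy as np

def draw_gamma2(eta_v, pi_v, tau_v, rng):
    """Draw gamma^2(v) from a scaled inverse chi-squared conditional
    posterior.  eta_v: the N season-v residuals eta_{st+v} of Eq (4.5);
    pi_v, tau_v: the prior hyperparameters."""
    scale = pi_v * tau_v + np.sum(eta_v ** 2)
    dof = len(eta_v) + pi_v
    return scale / rng.chisquare(dof)
```

For a long residual series the draws concentrate near the true variance, as the prior contribution is swamped by the data term.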
In the third step of sampling, we address the augmented volatility parameters, \underline{h} . We seek to sample from the conditional posterior distribution g(h_{t}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) , t = 1, 2, ..., sN . To begin, we present the expression for this distribution and, subsequently, we demonstrate the method to indirectly draw samples using the Griddy Gibbs technique. Due to the Markovian nature of the volatility process \left\{ h_{t}; t\in\mathbb{Z}\right\} and the conditional independence of X_{t} and h_{t-h} (h\neq0) , given h_{t} for any t = 2, ..., sN-1 , we have:
\begin{align} g(h_{t}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right. ) & = g(h_{t}\left\vert h_{t-1}, \underline{\theta }\right. )g(h_{t+1}\left\vert h_{t}, \underline{\theta}\right. )g(X_{t} \left\vert h_{t}, \underline{\theta}\right. )\left/ g(h_{t+1}\left\vert h_{t-1}, \underline{\theta}\right. )g(X_{t}\left\vert h_{t-1}, h_{t+1} , \underline{\theta}\right. )\right. \\ & \varpropto g(h_{t}\left\vert h_{t-1}, \underline{\theta}\right. )g(h_{t+1}\left\vert h_{t}, \underline{\theta}\right. )g(X_{t}\left\vert h_{t}, \underline{\theta}\right. ). \end{align} | (4.7) |
By leveraging the known fact that X_{t}\left\vert \underline{\theta}\right. , h_{t}\equiv X_{t}\left\vert h_{t}\right. \thicksim\mathcal{N}(0, h_{t}) , and h_{t}\left\vert h_{t-1}, \underline{\theta}\right. \thicksim \mathcal{N}\left(h_{t}-\gamma\left(t\right) \eta_{t}, \gamma^{2}\left(t\right) \right) , the formula (4.7) can thus be transformed to:
\begin{equation} g(h_{t}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right. )\varpropto h_{t}^{-3/2}\exp\left( -\frac{1}{2} h_{t}^{-1}X_{t}^{2}-\frac{1}{2}\varsigma_{t}^{-1}\left( h_{t}-\omega _{t}\right) ^{2}\right) , t = 2, ..., sN-1, \end{equation} | (4.8) |
where
\begin{equation} \varsigma_{t}^{-1} = \left( \gamma^{-2}\left( t\right) +\gamma^{-2}\left( t+1\right) \left( \delta\beta_{1}\left( t+1\right) +\left( 1-\delta \right) \beta_{2}\left( t+1\right) \right) \right), \end{equation} | (4.9) |
\begin{equation} \omega_{t} = \varsigma_{t}\left( \gamma^{-2}\left( t\right) \left( h_{t}-\gamma\left( t\right) \eta_{t}\right) +\gamma^{-2}\left( t+1\right) \left( \delta\beta_{1}\left( t+1\right) +\left( 1-\delta\right) \beta _{2}\left( t+1\right) \right) \left( h_{t+1}-\alpha\left( t+1\right) \right) \right). \end{equation} | (4.10) |
In (4.8), we employ the well-known formula
a(y-\alpha)^{2}+b(y-\beta)^{2} = (y-\gamma)^{2}(a+b)+ab(\alpha-\beta )^{2}(a+b)^{-1}, |
where (a+b)\gamma = a\alpha+b\beta , provided that a+b\neq0 . For h_{1} and h_{sN} , a simple approach is adopted where h_{1} is assumed to be fixed, enabling the sampling process to commence with t = 2 . It may be noted that h_{sN}\left\vert h_{sN-1}, \underline{\theta}\right. \thicksim\mathcal{N} \left(h_{sN}-\gamma\left(sN\right) \eta_{sN}, \gamma^{2}\left(sN\right) \right) . Alternatively, a forecast of h_{sN+1} and a backward prediction of h_{0} can be utilized by applying the formula (4.8) for t = 1, ..., sN+1 . In this scenario, h_{sN+1} is forecast based on a two-step-ahead forecast, \widehat{h}_{sN-1}(2) , computed at the origin sN-1 using:
\begin{align*} \widehat{h}_{sN-1}(2) & = \alpha\left( sN+1\right) +\left( \beta _{1}\left( sN+1\right) \mathbb{I}_{\left\{ e_{sN} > 0\right\} }+\beta _{2}\left( sN+1\right) \mathbb{I}_{\left\{ e_{sN} < 0\right\} }\right) \\ & \times\left( \alpha\left( sN\right) +\left( \beta_{1}\left( sN\right) \mathbb{I}_{\left\{ e_{sN-1} > 0\right\} }+\beta_{2}\left( sN\right) \mathbb{I}_{\left\{ e_{sN-1} < 0\right\} }\right) h_{sN-1}\right) . \end{align*} |
The backward forecast of h_{0} is derived using a two-step-ahead backward forecast based on the backward periodic autoregression (as discussed in [36]). After determining the conditional posterior g(h_{t}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) , except for a scale factor, indirect sampling algorithms can be employed to draw the volatility h_{t} . Research from [34] utilized the rejection Metropolis-Hasting algorithm, while [18,35,39] suggested the Griddy-Gibbs technique, which involves:
\textbf{a.} Selecting a monotone grid of l points, \left(h_{t, k}, 1\leq k\leq l\right), from a specified interval [h_{t, 1}, h_{t, l}] of h_{t}, then evaluating the conditional posterior g(h_{t}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) via (4.8) at each of these points, yielding g_{t, k} = g(h_{t, k}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) , k = 1, \dots, l .
\textbf{b.} Constructing the discrete distribution P(.) from the values g_{t, 1}, g_{t, 2}, \dots, g_{t, l} defined at h_{t, k} , k = 1, \dots, l , where P(h_{t, k}) = g_{t, k}\left/ \sum\limits_{k = 1}^{l}g_{t, k}\right. . This serves as a discrete approximation to the conditional posterior g(h_{t} \left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) , whose cumulative sums approximate its cumulative distribution.
\textbf{c.} Generating a number from the uniform distribution on (0, 1) and transforming it using the discrete distribution, P(.) , obtained in b. to obtain a random draw for h_{t} .
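Steps a–c amount to inverse-transform sampling on a grid. A minimal sketch follows; the helper name `griddy_draw` is hypothetical, and we pass the log of the unnormalized conditional posterior (e.g., the log of (4.8)) for numerical stability.

```python
import numpy as np

def griddy_draw(log_target, grid, rng):
    """One Griddy-Gibbs draw: evaluate an unnormalized log-density on a
    grid, normalize it to a discrete distribution, and invert a uniform
    draw against the resulting cumulative sums."""
    logp = np.array([log_target(x) for x in grid])
    p = np.exp(logp - logp.max())          # guard against underflow
    cdf = np.cumsum(p) / p.sum()           # discrete CDF approximation
    u = rng.uniform()
    return grid[np.searchsorted(cdf, u)]   # inverse transform on the grid
```

For instance, with log_target(x) = -(x-2)^2/2 and a grid covering [-2, 6], repeated draws have sample mean close to 2 and standard deviation close to 1.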
It is noteworthy that the choice of the grid, [h_{t, 1}, h_{t, l}] , significantly impacts the efficiency of the Griddy algorithm. Following a similar strategy to that in [18], the range of h_{t} at the m- th Gibbs iteration is set to [\overline{h}_{t}^{l}, \overline{h}_{t}^{L}] , where \overline{h}_{t}^{l} = \frac{3}{5}\left(h_{t}^{\left(0\right) }\vee h_{t}^{\left(m-1\right) }\right), \overline{h}_{t}^{L} = \frac{7} {5}\left(h_{t}^{\left(0\right) }\vee h_{t}^{\left(m-1\right) }\right), and h_{t}^{\left(m-1\right) } and h_{t}^{\left(0\right) } are, respectively, the estimates of h_{t} for the (m-1)- th iteration and the initial value.
The algorithm outlines the Gibbs sampler for sampling from the joint posterior distribution, g(\underline{h}, \underline{\theta}\left\vert \underline{X}\right.) , given \underline{X} . For m = 0, 1, ..., L , where \underline{h}^{(m)} = \left(h_{1}^{(m)}, ..., h_{sN}^{(m)}\right) ^{\prime}, \underline{\varphi}^{(m)} = \left(\alpha ^{(m)}\left(1\right), \beta_{1}^{(m)}\left(1\right), \beta_{2} ^{(m)}\left(1\right), ..., \alpha^{(m)}\left(s\right), \beta_{1} ^{(m)}\left(s\right), \beta_{2}^{(m)}\left(s\right) \right) ^{\prime} and \underline{\xi}^{(m)} = \left(\left(\gamma^{2}\right) ^{(m)}\left(1\right), ..., \left(\gamma^{2}\right) ^{(m)}\left(s\right) \right) ^{\prime} , the algorithm is as follows:
Algorithm a:
\textbf{S1.} Specify starting values \underline{h}^{(0)} , \underline{\varphi}^{(0)} and \underline{\xi}^{(0)} .
\textbf{S2.} Repeat for m = 0, 1, \dots, L-1 ,
– Draw \underline{\varphi}^{(m+1)} from g(\underline{\varphi}\left\vert \underline{X}, \underline{\xi}^{(m)}, \underline{h}^{(m)}\right.) using (4.2)–(4.4) with starting values \overline{\underline{\varphi}}_{0} = \underline{\varphi}_{0} and \overline{\Psi}_{0}^{-1} = \overline{\Psi}_{0} .
– Draw \underline{\xi}^{(m+1)} from g(\underline{\gamma}^{2}\left\vert \underline{X}, \underline{\varphi}^{(m+1)}, \underline{h}^{(m)}\right.) using (4.5) and (4.6).
– Repeat for t = 1, \dots, sN
* Griddy Gibbs:
· Select a monotone grid of l points, \left(h_{t, k}^{(m+1)}, k = 1, \dots, l\right).
· For k = 1, \dots, l , calculate g_{t, k}^{(m+1)} = g(h_{t, k}^{\left(m+1\right) }\left\vert \underline{\theta}^{\left(m+1\right) }, \underline{X}, \underline{h}_{-\left\{ t\right\} }^{\left(m\right) }\right.) from (4.8)–(4.10).
· Define the discrete distribution, P\left(h_{t, k}^{(m+1)}\right) = g_{t, k}^{(m+1)}\left/ \sum\limits_{k = 1}^{l}g_{t, k}^{(m+1)}\right. , k = 1, \dots, l.
· Generate a number u from the uniform (0, 1) distribution.
· Transform u using the discrete distribution, P(.) , obtained above to get h_{t}^{(m+1)} , considered to be a draw from g(h_{t}\ \left\vert \underline{\theta}^{(m+1)}, \underline{X}, \underline {h}_{-\left\{ t\right\} }^{(m)}\right) .
\textbf{S3.} Return the values \underline{h}^{(m)} , \underline{\varphi}^{(m)} , and \underline{\xi}^{(m)} , m = 1, \dots, L .
Upon sampling from the posterior distribution g(\underline{h}, \underline{\theta}\left\vert \underline{X}\right.) , statistical inference for the PTAR-SV model becomes straightforward. The Bayesian Griddy-Gibbs parameter estimate \widehat{\underline{\theta}}_{BGG} of \underline{\theta} is defined as the posterior mean \widetilde{\underline{\theta}} = E\left\{ \underline{\theta}\left\vert \underline{X}\right. \right\} , which, according to the Markov chain ergodic theorem, can be reasonably approximated by: \widehat{\underline{\theta}}_{BGG} = L^{-1}\sum\limits_{m_{0}\leq m\leq L+m_{0}}\underline{\theta}^{(m)}, where \underline{\theta}^{(m)} represents the m -th draw from g(\underline{h}, \underline{\theta}\left\vert \underline{X}\right.) , as provided by Algorithm a; m_{0} denotes the burn-in size (the initial draws discarded), and L is the number of retained draws. Smoothing and forecasting volatility are intrinsic outcomes of the Bayesian Griddy-Gibbs method. The smoothed value \widetilde{h}_{t} = E\left\{ h_{t}\left\vert \underline{X}\right. \right\} , for t = 1, ..., sN , is obtained during sampling from the distribution g(h_{t}\left\vert \underline{X}\right.) , a marginal of the posterior distribution g(\underline{h}, \underline{\theta}\left\vert \underline{X}\right.) . Thus, \widetilde{h}_{t} can be accurately approximated by: L^{-1}\sum\limits_{m_{0}\leq m\leq L+m_{0}}h_{t}^{(m)}, where h_{t}^{(m)} represents the m -th draw of h_{t} from g(h_{t}, \underline{\theta}\left\vert \underline{X}\right.) . Forecasting future values, h_{sN+j}, j = 1, ..., k , can be accomplished either by employing the volatility equation with the Bayesian parameter estimates or directly by sampling from the predictive distribution g(h_{sN+1}, h_{sN+2}, ..., h_{sN+k}\left\vert \underline{X}\right.) (for further discussion, the reader is referred to [18,34]).
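The burn-in-and-average step can be sketched in a few lines (the helper name is ours):

```python
import numpy as np

def posterior_means(theta_draws, h_draws, burn_in):
    """Posterior-mean estimates from Gibbs output: discard the first
    `burn_in` draws, then average the rest.  theta_draws has one row per
    draw of theta; h_draws has one row per draw of (h_1, ..., h_sN)."""
    theta_hat = theta_draws[burn_in:].mean(axis=0)  # BGG estimate of theta
    h_smooth = h_draws[burn_in:].mean(axis=0)       # smoothed volatilities
    return theta_hat, h_smooth
```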
It is crucial to assess the numerical properties of the proposed Bayes Griddy-Gibbs (BGG) method, in particular because the volatilities are sampled element by element. Despite its simplicity of implementation, it is well established that the single-move approach, as employed in the BGG method, often leads to highly correlated posterior draws, which can result in slow mixing and poor convergence. Among various MCMC diagnostic measures, we focus here on two key metrics.
\textbf{1.} Relative Numerical Inefficiency (RNI): The RNI provides insight into the inefficiency caused by the serial correlation of the BGG parameter draws. This can be calculated as:
\mathrm{RNI} = 1+2\sum\limits_{1\leq j\leq500}D\left( 2j\times10^{-3}\right) \widehat{\rho}_{1, j}. |
Here, 500 denotes the bandwidth, D(.) represents the Parzen kernel, and \widehat{\rho}_{1, j} is the sample autocorrelation at lag j of the BGG parameter draws. The RNI value indicates the extent of inefficiency attributed to serial correlation.
\textbf{2.} Numerical Standard Error (NSE): The NSE quantifies the uncertainty associated with the MCMC estimator and is calculated as the square root of the estimated asymptotic variance of the estimator. Mathematically, this can be expressed as:
\mathrm{NSE} = \sqrt{L^{-1}\left( \widehat{\rho}_{2, 0}+2\sum\limits_{1\leq j\leq500}D\left( 2j\times10^{-3}\right) \widehat{\rho}_{2, j}\right) }. |
Here, \widehat{\rho}_{2, j} represents the sample autocovariance at lag j of the BGG parameter draws, and L is the total number of draws.
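Both diagnostics can be computed directly from a chain of draws. The sketch below implements the Parzen kernel and evaluates D(j/B) with bandwidth B, so that B = 500 reproduces D(2j\times10^{-3}) above; reading the NSE as the square root of the long-run variance estimate divided by the number of draws is our interpretation of the displayed formula.

```python
import numpy as np

def parzen(z):
    """Parzen lag window on [0, 1]."""
    z = abs(z)
    if z <= 0.5:
        return 1.0 - 6.0 * z**2 + 6.0 * z**3
    if z <= 1.0:
        return 2.0 * (1.0 - z)**3
    return 0.0

def rni_nse(draws, bandwidth=500):
    """RNI and NSE of a chain of MCMC parameter draws."""
    x = np.asarray(draws, dtype=float)
    x = x - x.mean()
    L = len(x)
    var0 = x @ x / L                          # lag-0 autocovariance
    acc = 0.0
    for j in range(1, min(bandwidth, L - 1) + 1):
        acov = x[:-j] @ x[j:] / L             # lag-j autocovariance
        acc += parzen(j / bandwidth) * acov
    rni = 1.0 + 2.0 * acc / var0              # inefficiency factor
    nse = np.sqrt((var0 + 2.0 * acc) / L)     # numerical standard error
    return rni, nse
```

For i.i.d. draws the RNI is close to 1, while a slowly mixing chain (e.g., a near-unit-root AR(1)) yields a much larger value.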
These diagnostic measures, particularly the RNI and NSE, offer valuable insights into the efficiency and reliability of the MCMC sampling process, aiding in the assessment of convergence and the accuracy of parameter estimates.
Remark 2. The deviance information criterion (DIC) is a crucial tool in the selection of the period s in PTAR-SV modeling. Unlike traditional criteria such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), the DIC strikes a balance between model adequacy and complexity. Its computation involves the conditional likelihood of the PTAR-SV model and the posterior mean of its parameters, both derived from MCMC draws. By comparing DIC values across different period lengths, researchers can identify the most suitable model. Despite the inherent difficulty of estimating the standard error of the DIC due to its randomness, a rough estimate of its variability can be obtained by replicating the calculation. It is important to note that the choice of DIC variant (e.g., observed or conditional) influences its interpretation, with the conditional version being particularly relevant to latent-variable models such as the PTAR-SV (for further details, the reader is referred to [18]).
Remark 3. In this paper, we delve into a class of nonlinear models designed for the analysis of periodic time series, referred to as PTAR-SV models. These models not only capture the feature of asymmetric volatility, which is already well-known in the deterministic volatility framework, but also uncover the periodicity hidden in the autocovariance structure, characteristics frequently observed in financial and economic time series. It is worth noting that another class of models, namely, Markov-switching TSV (MS-TSV) models, has recently been explored by Ghezal et al. [37]. These models aim to tackle the asymmetry and leverage effects observed in financial time series' volatility. They extend classical TSV models by representing the parameters governing log-volatility as functions of a homogeneous Markov chain. Both our paper and the work by Ghezal et al. [37] concentrate on establishing various probabilistic properties, including strict stationarity, causality, and ergodicity. Additionally, both delve into computing higher-order moments and the derivation of the autocovariance function of squared processes. However, our paper adopts a periodic perspective in these analyses because the PTAR-SV models demonstrate global non-stationarity whilst exhibiting stationarity within each period, unlike the MS-TSV models. Moreover, while Ghezal et al. [37] primarily concentrates on the autocovariance function of the squared process and second-order stationary solutions, our paper expands the analysis to encompass the autocovariance function of the powers of the squared process. Additionally, we explore concepts such as periodic geometric ergodicity and strong mixing. Finally, both papers propose the QML method for parameter estimation. However, our paper introduces an additional approach—the EM algorithm with particle filters and smoothers. Moreover, we extend our analysis to include a Bayesian MCMC method based on Griddy-Gibbs sampling.
We conducted a simulation study to evaluate the performance of the QML and BGG methods for parameter estimation in the context of the PTAR-SV _{s} model with s\in\left\{ 2, 3\right\} . A total of 500 datasets of varying sizes were generated. Specifically, we considered sample sizes, sN , from the set \left\{ 750, 1500, 3000\right\} . The parameter values were chosen to satisfy the periodic stationarity condition \prod\limits_{j = 1}^{s}\left(\delta\left\vert \beta_{1}\left(j\right) \right\vert +\left(1-\delta\right) \left\vert \beta_{2}\left(j\right) \right\vert \right) < 1 . For each generated dataset, we estimated the parameter vector of interest, \underline{\theta} , using the QMLE (resp., BGG estimation), denoted \widehat{\underline{\theta}}_{QMLE} (resp., \widehat{\underline{\theta}}_{BGGE} ). The QMLE (resp., BGGE) algorithm was executed via the "fminsearch.m" minimizer function in MATLAB. The root mean square errors (RMSE) of the estimated parameters are presented in parentheses in the tables below, together with the true values (TV) of the parameters for each of the considered data-generating processes.
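For reference, the data-generating step of such a study can be sketched as follows, assuming the representation X_t = e_t\exp(h_t/2) with i.i.d. standard Gaussian (e_t) and (\eta_t); the helper name and the zero initialization h_0 = 0 are our choices, not the paper's.

```python
import numpy as np

def simulate_ptar_sv(alpha, beta1, beta2, gamma, sN, rng):
    """Simulate a PTAR-SV_s path:
    h_{st+v} = alpha(v) + (beta1(v) 1{e_{st+v-1} > 0}
               + beta2(v) 1{e_{st+v-1} <= 0}) h_{st+v-1} + gamma(v) eta_{st+v},
    X_t = e_t exp(h_t / 2).  Parameter sequences have length s."""
    s = len(alpha)
    e = rng.standard_normal(sN + 1)      # e_0, ..., e_sN
    eta = rng.standard_normal(sN + 1)
    h = np.zeros(sN + 1)                 # h_0 = 0
    X = np.zeros(sN + 1)
    for i in range(1, sN + 1):
        v = (i - 1) % s                  # season, 0-based
        slope = beta1[v] if e[i - 1] > 0 else beta2[v]
        h[i] = alpha[v] + slope * h[i - 1] + gamma[v] * eta[i]
        X[i] = e[i] * np.exp(h[i] / 2.0)
    return X[1:], h[1:]
```

With the Table 2 true values and \delta = 1/2, the periodic stationarity product \prod_{j}(\delta|\beta_{1}(j)| + (1-\delta)|\beta_{2}(j)|) equals 0.22 < 1, and simulated paths remain stable.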
This study primarily focuses on analyzing RMSEs, providing initial insights into the finite-sample properties of the QMLE and BGGE within the framework of the PTAR-SV _{s} model. The results suggest that both methods provide effective parameter estimates. Upon examining Table 2, the strong consistency of the QMLE and BGGE for the PTAR-SV _{2} model is apparent, and the corresponding RMSEs decrease markedly as the sample size increases. Turning to the outcomes presented in Table 3 for the PTAR-SV _{3} model, consistency is again confirmed; importantly, the estimation procedure yields favorable results even with a relatively small sample size. The comparison between the two methods shows that both perform adequately with regard to parameter estimation, but the BGGE consistently outperforms the QMLE, exhibiting significantly lower RMSEs across all instances. This finding is consistent with previous results obtained in the symmetric case [18].
Tv \backslash 2N | 750 | 1500 | 3000 | ||||
QML | Bayesian | QML | Bayesian | QML | Bayesian | ||
\alpha\left(1\right) | 0.50 | 0.5241 | 0.4861 | 0.5155 | 0.4952 | 0.5080 | 0.4980 |
\left(0.0941\right) | \left(0.0245\right) | \left(0.0648\right) | \left(0.0115\right) | \left(0.0342\right) | \left(0.0060\right) | ||
\alpha\left(2\right) | -1.00 | -0.9903 | -1.0053 | -0.9907 | -1.0045 | -0.9930 | -1.0033 |
\left(0.0770\right) | \left(0.0164\right) | \left(0.0459\right) | \left(0.0073\right) | \left(0.0193\right) | \left(0.0035\right) | ||
\beta_{1}\left(1\right) | 0.75 | 0.7601 | 0.7457 | 0.7564 | 0.7487 | 0.7534 | 0.7499 |
\left(0.0785\right) | \left(0.0024\right) | \left(0.0499\right) | \left(0.0012\right) | \left(0.0110\right) | \left(0.0006\right) | ||
\beta_{1}\left(2\right) | 0.25 | 0.2477 | 0.2484 | 0.2543 | 0.2497 | 0.2534 | 0.2498 |
\left(0.0661\right) | \left(0.0017\right) | \left(0.0537\right) | \left(0.0008\right) | \left(0.0360\right) | \left(0.0004\right) | ||
\beta_{2}\left(1\right) | -0.35 | -0.3638 | -0.3509 | -0.3609 | -0.3503 | -0.3584 | -0.3501 |
\left(0.0774\right) | \left(0.0026\right) | \left(0.0682\right) | \left(0.0014\right) | \left(0.0496\right) | \left(0.0006\right) | ||
\beta_{2}\left(2\right) | -0.55 | -0.5597 | -0.5521 | -0.5583 | -0.5492 | -0.5554 | -0.5497 |
\left(0.0971\right) | \left(0.0020\right) | \left(0.0873\right) | \left(0.0008\right) | \left(0.0649\right) | \left(0.0005\right) | ||
\gamma\left(1\right) | 0.65 | 0.6616 | 0.6541 | 0.6592 | 0.6523 | 0.6513 | 0.6491 |
\left(0.0638\right) | \left(0.0074\right) | \left(0.0591\right) | \left(0.0029\right) | \left(0.0256\right) | \left(0.0014\right) | ||
\gamma\left(2\right) | -0.05 | -0.0433 | -0.0473 | -0.0473 | -0.0476 | -0.0475 | -0.0483 |
\left(0.0516\right) | \left(0.0023\right) | \left(0.0439\right) | \left(0.0011\right) | \left(0.0375\right) | \left(0.0005\right) |
Tv \backslash 3N | 750 | 1500 | 3000 | ||||
QML | Bayesian | QML | Bayesian | QML | Bayesian | ||
\alpha\left(1\right) | 0.50 | 0.4904 | 0.4940 | 0.5097 | 0.4961 | 0.5054 | 0.4979 |
\left(0.1064\right) | \left(0.0495\right) | \left(0.0864\right) | \left(0.0231\right) | \left(0.0789\right) | \left(0.0097\right) | ||
\alpha\left(2\right) | 1.00 | 1.0065 | 1.0062 | 0.9979 | 0.9991 | 1.0020 | 1.0008 |
\left(0.0768\right) | \left(0.0386\right) | \left(0.0703\right) | \left(0.0186\right) | \left(0.0207\right) | \left(0.0083\right) | ||
\alpha\left(3\right) | 1.50 | 1.4706 | 1.4709 | 1.5163 | 1.4914 | 1.4928 | 1.4966 |
\left(0.0755\right) | \left(0.0588\right) | \left(0.0584\right) | \left(0.0285\right) | \left(0.0264\right) | \left(0.0128\right) | ||
\beta_{1}\left(1\right) | 0.15 | 0.1462 | 0.1487 | 0.1524 | 0.1489 | 0.1515 | 0.1496 |
\left(0.0819\right) | \left(0.0038\right) | \left(0.0594\right) | \left(0.0017\right) | \left(0.0566\right) | \left(0.0008\right) | ||
\beta_{1}\left(2\right) | -0.15 | -0.1426 | -0.1444 | -0.1465 | -0.1482 | -0.1490 | -0.1496 |
\left(0.0921\right) | \left(0.0040\right) | \left(0.0810\right) | \left(0.0021\right) | \left(0.0552\right) | \left(0.0009\right) | ||
\beta_{1}\left(3\right) | 0.45 | 0.4544 | 0.4459 | 0.4575 | 0.4476 | 0.4546 | 0.4491 |
\left(0.0946\right) | \left(0.0054\right) | \left(0.0680\right) | \left(0.0026\right) | \left(0.0529\right) | \left(0.0012\right) | ||
\beta_{2}\left(1\right) | -0.55 | -0.5626 | -0.5477 | -0.5598 | -0.5479 | -0.5516 | -0.5496 |
\left(0.0982\right) | \left(0.0046\right) | \left(0.0301\right) | \left(0.0026\right) | \left(0.0079\right) | \left(0.0010\right) | ||
\beta_{2}\left(2\right) | 0.25 | 0.2367 | 0.2538 | 0.2405 | 0.2513 | 0.2437 | 0.2502 |
\left(0.1054\right) | \left(0.0042\right) | \left(0.0899\right) | \left(0.0023\right) | \left(0.0303\right) | \left(0.0011\right) | ||
\beta_{2}\left(3\right) | -0.35 | -0.3580 | -0.3481 | -0.3570 | -0.3525 | -0.3568 | -0.3513 |
\left(0.0741\right) | \left(0.0046\right) | \left(0.0670\right) | \left(0.0025\right) | \left(0.0226\right) | \left(0.0010\right) | ||
\gamma\left(1\right) | 0.00 | 0.0133 | 0.0061 | -0.0044 | 0.0016 | 0.0016 | 0.0008 |
\left(0.0607\right) | \left(0.0124\right) | \left(0.0288\right) | \left(0.0053\right) | \left(0.0153\right) | \left(0.0023\right) | ||
\gamma\left(2\right) | 0.65 | 0.6676 | 0.6465 | 0.6617 | 0.6573 | 0.6598 | 0.6522 |
\left(0.1020\right) | \left(0.0250\right) | \left(0.0955\right) | \left(0.0118\right) | \left(0.0582\right) | \left(0.0050\right) | ||
\gamma\left(3\right) | 0.05 | 0.0603 | 0.0582 | 0.0437 | 0.0474 | 0.0561 | 0.0521 |
\left(0.0887\right) | \left(0.0234\right) | \left(0.0228\right) | \left(0.0104\right) | \left(0.0161\right) | \left(0.0048\right) |
This section focuses on modeling the Gaussian PTAR-SV_{3} model using the daily time series datasets, (X_{t})_{t\geq1} , representing the Euro/Algerian dinar (EUR/DZD) exchange rate. Leveraging the QMLE method for its favorable finite-sample properties, we began by filtering out all non-trading days, including holidays and weekends. The observations span from January 3, 2000, to September 29, 2011. The corresponding log-return series can be calculated as \varpi_{t} = 10^{2}\log\left(X_{t}\right) -10^{2}\log\left(X_{t-1}\right) . Plots of the prices \left(X_{t}\right) , the daily return series of prices \left(\varpi_{t}\right) , squared returns \left(\varpi_{t}^{2}\right) , absolute returns \left(\left\vert\varpi_{t}\right\vert \right) , and \log -absolute returns are illustrated in Figure 2.
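The return transformation can be sketched as a one-liner (the helper name is ours):

```python
import numpy as np

def log_returns(prices):
    """Percentage log-returns: varpi_t = 10^2 (log X_t - log X_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return 100.0 * np.diff(np.log(p))
```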
Additionally, upon reviewing the sample autocorrelation functions depicted in Figure 3, it is evident that the series \left(\varpi_{t}\right), \ \left(\varpi_{t}^{2}\right), \left(\left\vert \varpi_{t}\right\vert \right) , and \left(\log\left\vert \varpi_{t}\right\vert \right) exhibit distinct characteristics. Specifically, the series \left(\varpi_{t}\right) displays a Taylor effect, as indicated by \widehat{\rho}_{\varpi_{t}^{2}}(h) < \widehat{\rho}_{\left\vert \varpi_{t}\right\vert }(h) for some h > 0 . Consequently, the modeling of the series \left(\varpi_{t}\right) using standard SV models is rejected in favor of certain asymmetric models. Table 4 reports several basic descriptive statistics for the given series.
Series | \left(X_{t}\right) \times10^{3} | \left(\varpi_{t}\right) \times10^{3} | \left(\varpi_{t}^{2}\right) \times10^{7} | \left(\left\vert \varpi_{t}\right\vert \right) \times10^{4} | \left(\log\left\vert \varpi_{t}\right\vert \right) \times10^{3} |
mean | 0.0886 | 0.0001 | 0.0000 | 0.0004 | 0.0004 |
Std. Dev | 0.0116 | 0.0050 | 0.0000 | 0.0004 | 0.0006 |
Median | 0.0911 | 0.0001 | 0.0000 | 0.0003 | 0.0005 |
Skewness | -0.0005 | 0.0004 | 0.0000 | 0.0003 | -0.0013 |
Kurtosis | 0.0021 | 0.0090 | 0.0000 | 0.0018 | 0.0065 |
Min | 0.0672 | -0.0233 | 0.0000 | 0.0000 | -0.0040 |
Max | 0.1091 | 0.0497 | 0.0002 | 0.0050 | 0.0019 |
J. Bera | 0.2325 | 4.5971 | 2.7228 | 3.4009 | 2.4880 |
Arch (300) | 100\% | 100\% | 00\% | 100\% | 100\% |
LBtest | 100\% | 24.20\% | 98.23\% | 100\% | 100\% |
Table 4 reports descriptive statistics for the EUR/DZD returns over the entire study period. It includes statistics for the returns, absolute returns, squared returns, and log-absolute returns. The lowest return observed is -23.300, while the highest return is 49.700. The data exhibit positive skewness and high kurtosis, with kurtosis values for all three log-return series being much greater than 3. In this study, we propose a 5-periodic PTAR-SV model to capture intra-week effects in the daily exchange rate. The model allows parameters to vary with the day of the week, where v = 1 corresponds to Monday, v = 2 to Tuesday, and so on. Table 5 reports the estimated parameters of the 5-PTAR-SV model. In Table 5, we present the results of the QML parameter estimation for the PTAR-SV and certain other specific models. It is evident from these results that the periodic and standard estimated models are periodically stationary and stationary, respectively. Notably, the persistence measure estimates of the PTAR-SV model and the PAR-SV model fitted by Aknouche (2017, [18]) are markedly smaller than those obtained from the standard TAR-SV model. Additionally, the empirical coverages of the PTAR-SV-based prediction intervals are closer to the nominal coverages than those of the PAR-SV-based prediction intervals. Furthermore, when comparing the empirical coverages of the fitted TAR-SV models, there is a slight superiority observed in the periodic modeling. These findings suggest that the PTAR-SV model, fitted to the time series of daily EUR/DZD log-returns, demonstrates greater accuracy and improved forecasting performance than the standard AR-SV and PAR-SV models.
v=1 | v=2 | v=3 | v=4 | v=5 | ||
\alpha\left(1\right) | -0.0248 | - | - | - | - | |
Standard | \beta_{1}\left(1\right) | 0.9991 | - | - | - | - |
TAR-SV | \beta_{2}\left(1\right) | 0.3165 | - | - | - | - |
\gamma\left(1\right) | 0.0482 | - | - | - | - | |
Periodic | \alpha\left(v\right) | -1.9725 | 0.8948 | 6.9167 | -4.9851 | -0.1852 |
symmetric | \beta_{1}\left(v\right) | 0.8763 | 1.0578 | 1.3264 | 0.4517 | 0.8676 |
AR-SV | \gamma\left(v\right) | 0.1089 | 0.2001 | 0.3886 | 0.2456 | 0.1998 |
\alpha\left(v\right) | -2.2061 | 0.9558 | 7.9368 | -5.3924 | -0.2762 | |
Periodic | \beta_{1}\left(v\right) | 0.9927 | 1.0621 | 1.6264 | 0.6321 | 0.9734 |
TAR-SV | \beta_{2}\left(v\right) | 0.6034 | 0.7168 | 0.3540 | 0.4112 | 0.3985 |
\gamma\left(v\right) | 0.2734 | 0.3244 | 0.5275 | 0.4712 | 0.5219 |
In this paper, we have conducted a study on a specific category of nonlinear models tailored to capture the characteristics of periodic asymmetric time series, known as PTAR-SV models. These models exhibit the ability to capture volatility clustering and reveal the inherent periodicity present in the autocovariance structure, both of which are common stylized facts in financial and economic time series. Additionally, PTAR-SV models are adept at encapsulating various stylized facts, notably leverage effects that denote asymmetry within the volatility process. Importantly, these models have demonstrated enhanced accuracy and superior forecasting performance compared to PTGARCH models.
This paper explores several aspects, including periodic stationarity conditions, moment calculations, and the autocovariance function of the powers of the squared process. Additionally, we propose the QML method for parameter estimation, employing the EM algorithm with particle filters and smoothers, and we introduce the BGGE as an alternative approach to parameter estimation.
Building upon the findings of Kim et al. [42] and Chib et al. [43], our research underscores the significance of employing a multi-move approach in the MCMC method for model estimation. These seminal works demonstrate that traditional single-move estimation methods may be inefficient, particularly when dealing with complex models and heavy-tailed distributions. By incorporating insights from these studies, our paper advocates the adoption of a multi-move approach to enhance estimation efficiency and accuracy in nonlinear modeling. Moving forward, our research aims to expand in multiple directions. One proposed extension involves exploring heavier-tailed distributions, which can be easily accommodated using the t-distribution. Additionally, we plan to investigate a periodic multivariate version of the PTAR-SV model, in which multiple variables are considered simultaneously. This multivariate approach promises to provide deeper insights into the interconnected dynamics of multiple time series and their periodic behaviors.
Proof of Theorem 1. The existence of a causal, strictly periodically stationary solution of Eq (1.4) is directly linked to the existence of a causal, strictly periodically stationary solution of the PTAR model proposed by Bentarzi and Djeddou [38]. This PTAR model is described by the equation:
h_{st+v} = \alpha\left( v\right) +\left( \beta_{1}\left( v\right) \mathbb{I}_{\left\{ X_{st+v-1} > 0\right\} }+\beta_{2}\left( v\right) \mathbb{I}_{\left\{ X_{st+v-1}\leq0\right\} }\right) h_{st+v-1} +\gamma\left( v\right) \eta_{st+v}, \text{ for all }v\in\left\{ 1, ..., s\right\} , |
which admits such a solution provided that \prod\limits_{j = 1}^{s}\left(\delta\left\vert \beta_{1}\left(j\right) \right\vert +\left(1-\delta\right) \left\vert \beta_{2}\left(j\right) \right\vert \right) < 1 . When this condition is satisfied, the log-volatility h_{st+v} can be represented causally as:
h_{st+v} = \sum\limits_{m\geq0}\left\{ \prod\limits_{l = 0}^{m-1}\zeta _{st+v-l}\right\} \left( \alpha\left( v-m\right) +\gamma\left( v-m\right) \eta_{st+v-m}\right) . |
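As a numerical illustration of Theorem 1, the PTAR-SV recursion can be simulated once condition (2.3) has been verified. The parameters below are hypothetical (s = 2, Gaussian e_t and η_t, hence δ = P(e_t > 0) = 1/2); they are illustrative only, not the estimates of Table 5:

```python
import numpy as np

# Hypothetical s = 2 parameters (illustrative only)
s = 2
alpha = [0.5, -1.0]
beta1 = [0.75, 0.25]
beta2 = [-0.35, -0.55]
gamma = [0.65, -0.05]

# Condition (2.3): prod_j (delta*|beta1(j)| + (1-delta)*|beta2(j)|) < 1
delta = 0.5
persistence = np.prod([delta * abs(b1) + (1.0 - delta) * abs(b2)
                       for b1, b2 in zip(beta1, beta2)])
assert persistence < 1.0  # periodically stationary regime

# Simulate X_t = e_t * exp(h_t / 2) with threshold log-volatility
# h_t = alpha(v) + [beta1(v) 1{X_{t-1}>0} + beta2(v) 1{X_{t-1}<=0}] h_{t-1}
#       + gamma(v) eta_t
rng = np.random.default_rng(1)
T = 10_000
h = np.zeros(T)
X = np.zeros(T)
for t in range(1, T):
    v = t % s  # season within the period
    coef = beta1[v] if X[t - 1] > 0 else beta2[v]
    h[t] = alpha[v] + coef * h[t - 1] + gamma[v] * rng.standard_normal()
    X[t] = rng.standard_normal() * np.exp(h[t] / 2.0)
```

With these coefficients the persistence product equals 0.22, comfortably inside the periodic stationarity region, and the simulated path remains bounded in distribution.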
Proof of Theorem 2. The periodic geometric ergodicity of \left(\underline{X}_{t}, t\in\mathbb{Z}\right) is a consequence of the geometric ergodicity of the vector \left(\underline{h}_{t}, t\in\mathbb{Z}\right) , as demonstrated by Meyn and Tweedie's [27] results.
Proof of Theorem 3. For every t\in\mathbb{Z} and 1\leq v\leq s , the following holds:
\begin{align*} E\left\{ X_{st+v}^{2m}\right\} & = E\left\{ \left( e_{st+v}\exp\left( \frac{1}{2}\sum\limits_{k\geq0}\left\{ \prod\limits_{j = 0}^{k-1}\zeta _{st+v-j}\right\} \left( \alpha\left( v-k\right) +\gamma\left( v-k\right) \eta_{st+v-k}\right) \right) \right) ^{2m}\right\} \\ & = E\left\{ e_{st+v}^{2m}\right\} \prod\limits_{k\geq0}E\left\{ \exp\left( m\left\{ \prod\limits_{j = 0}^{k-1}\zeta_{st+v-j}\right\} \left( \alpha\left( v-k\right) +\gamma\left( v-k\right) \eta_{st+v-k}\right) \right) \right\} . \end{align*} |
Consequently, a sufficient condition for the existence of E\left\{ X_{st+v}^{2m}\right\} is:
\prod\limits_{k\geq0}E\left\{ \exp\left( m\left\{ \prod\nolimits_{j = 0}^{k-1} \zeta_{st+v-j}\right\} \left( \alpha\left( v-k\right) +\gamma\left( v-k\right) \eta_{st+v-k}\right) \right) \right\} < \infty\text{ for }v = 1, \ldots, s. |
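For instance, in the symmetric case \beta_{1}(v) = \beta_{2}(v) , so that \zeta_{st+v} = \beta_{1}(v) is deterministic, and with standard Gaussian \eta , each factor is a Gaussian moment generating function and the condition can be checked in closed form. Writing c_{k}(v) = \prod_{j = 0}^{k-1}\beta_{1}(v-j) , we have the sketch:

```latex
E\left\{ \exp\left( m\,c_{k}(v)\left( \alpha(v-k)+\gamma(v-k)\,\eta_{st+v-k}\right) \right) \right\}
  = \exp\left( m\,c_{k}(v)\,\alpha(v-k)+\tfrac{m^{2}}{2}\,c_{k}^{2}(v)\,\gamma^{2}(v-k)\right).
```

Since \left\vert c_{k}(v)\right\vert decays geometrically whenever \prod_{j = 1}^{s}\left\vert \beta_{1}(j)\right\vert < 1 , the exponents are summable, the infinite product converges, and E\left\{ X_{st+v}^{2m}\right\} exists for every m , recovering the well-known fact that symmetric log-normal SV models possess moments of all orders.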
Proof of Theorem 4. For every t\in\mathbb{Z} , n > 0 , and 1\leq v\leq s , the following holds:
\begin{align*} \Xi_{v}^{\left( 2m\right) }\left( n\right) & = E\left\{ e_{st+v} ^{2m}\right\} E\left\{ e_{st+v-n}^{2m}\right\} E\left\{ \exp\left( m\left( h_{st+v}+h_{st+v-n}\right) \right) \right\} \\ & = \Xi_{v, e}^{\left( 2m\right) }\left( n\right) \\ & \times E\left\{ \exp\left( m\left( \begin{array} [c]{l} \sum\limits_{k\geq0}\left\{ \prod\nolimits_{j = 0}^{k-1}\zeta_{st+v-j}\right\} \left( \alpha\left( v-k\right) +\gamma\left( v-k\right) \eta _{st+v-k}\right) \\ +\sum\nolimits_{k\geq0}\left\{ \prod\nolimits_{j = 0}^{k-1}\zeta_{st+v-n-j} \right\} \left( \alpha\left( v-n-k\right) +\gamma\left( v-n-k\right) \eta_{st+v-n-k}\right) \end{array} \right) \right) \right\} \\ & = \Xi_{v, e}^{\left( 2m\right) }\left( n\right) \times E\left\{ \exp\left( m\left( \begin{array} [c]{l} \sum\limits_{k\geq0}\left\{ \prod\nolimits_{j = 0}^{k-1}\zeta_{st+v-j}\right\} \left( \alpha\left( v-k\right) +\gamma\left( v-k\right) \eta _{st+v-k}\right) \\ +\sum\nolimits_{k\geq n}\left\{ \prod\nolimits_{j = n}^{k-1}\zeta _{st+v-j}\right\} \left( \alpha\left( v-k\right) +\gamma\left( v-k\right) \eta_{st+v-k}\right) \end{array} \right) \right) \right\} \\ & = \Xi_{v, e}^{\left( 2m\right) }\left( n\right) \\ & \times E\left\{ \exp\left( m\left( \begin{array} [c]{l} \sum\limits_{k = 0}^{n-1}\left\{ \prod\nolimits_{j = 0}^{k-1}\zeta_{st+v-j}\right\} \left( \alpha\left( v-k\right) +\gamma\left( v-k\right) \eta _{st+v-k}\right) \\ +\left( 1+\left\{ \prod\nolimits_{j = 0}^{n-1}\zeta_{st+v-j}^{-1}\right\} \right) \sum\nolimits_{k\geq n}\left\{ \prod\nolimits_{j = 0}^{k-1} \zeta_{st+v-j}\right\} \left( \alpha\left( v-k\right) +\gamma\left( v-k\right) \eta_{st+v-k}\right) \end{array} \right) \right) \right\} \\ & = \Xi_{v, e}^{\left( 2m\right) }\left( n\right) \times\prod \limits_{k = 0}^{n-1}E\left\{ \exp\left( m\left\{ \prod\limits_{j = 0} ^{k-1}\zeta_{st+v-j}\right\} \left( \alpha\left( v-k\right) +\gamma\left( v-k\right) \eta_{st+v-k}\right) \right) \right\} \\ & \times\prod\limits_{k\geq n}E\left\{ \exp\left( m\left( \left( 1+\left\{ \prod\limits_{j = 0}^{n-1}\zeta_{st+v-j}^{-1}\right\} \right) \left\{ \prod\limits_{j = 0}^{k-1}\zeta_{st+v-j}\right\} \left( \alpha\left( v-k\right) +\gamma\left( v-k\right) \eta_{st+v-k}\right) \right) \right) \right\} . \end{align*} |
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
The authors declare no conflict of interest.
Parameter (s = 2) | True value | N = 750 | N = 1500 | N = 3000 |
QML | Bayesian | QML | Bayesian | QML | Bayesian | ||
\alpha\left(1\right) | 0.50 | 0.5241 | 0.4861 | 0.5155 | 0.4952 | 0.5080 | 0.4980 |
\left(0.0941\right) | \left(0.0245\right) | \left(0.0648\right) | \left(0.0115\right) | \left(0.0342\right) | \left(0.0060\right) | ||
\alpha\left(2\right) | -1.00 | -0.9903 | -1.0053 | -0.9907 | -1.0045 | -0.9930 | -1.0033 |
\left(0.0770\right) | \left(0.0164\right) | \left(0.0459\right) | \left(0.0073\right) | \left(0.0193\right) | \left(0.0035\right) | ||
\beta_{1}\left(1\right) | 0.75 | 0.7601 | 0.7457 | 0.7564 | 0.7487 | 0.7534 | 0.7499 |
\left(0.0785\right) | \left(0.0024\right) | \left(0.0499\right) | \left(0.0012\right) | \left(0.0110\right) | \left(0.0006\right) | ||
\beta_{1}\left(2\right) | 0.25 | 0.2477 | 0.2484 | 0.2543 | 0.2497 | 0.2534 | 0.2498 |
\left(0.0661\right) | \left(0.0017\right) | \left(0.0537\right) | \left(0.0008\right) | \left(0.0360\right) | \left(0.0004\right) | ||
\beta_{2}\left(1\right) | -0.35 | -0.3638 | -0.3509 | -0.3609 | -0.3503 | -0.3584 | -0.3501 |
\left(0.0774\right) | \left(0.0026\right) | \left(0.0682\right) | \left(0.0014\right) | \left(0.0496\right) | \left(0.0006\right) | ||
\beta_{2}\left(2\right) | -0.55 | -0.5597 | -0.5521 | -0.5583 | -0.5492 | -0.5554 | -0.5497 |
\left(0.0971\right) | \left(0.0020\right) | \left(0.0873\right) | \left(0.0008\right) | \left(0.0649\right) | \left(0.0005\right) | ||
\gamma\left(1\right) | 0.65 | 0.6616 | 0.6541 | 0.6592 | 0.6523 | 0.6513 | 0.6491 |
\left(0.0638\right) | \left(0.0074\right) | \left(0.0591\right) | \left(0.0029\right) | \left(0.0256\right) | \left(0.0014\right) | ||
\gamma\left(2\right) | -0.05 | -0.0433 | -0.0473 | -0.0473 | -0.0476 | -0.0475 | -0.0483 |
\left(0.0516\right) | \left(0.0023\right) | \left(0.0439\right) | \left(0.0011\right) | \left(0.0375\right) | \left(0.0005\right) |
Parameter (s = 3) | True value | N = 750 | N = 1500 | N = 3000 |
QML | Bayesian | QML | Bayesian | QML | Bayesian | ||
\alpha\left(1\right) | 0.50 | 0.4904 | 0.4940 | 0.5097 | 0.4961 | 0.5054 | 0.4979 |
\left(0.1064\right) | \left(0.0495\right) | \left(0.0864\right) | \left(0.0231\right) | \left(0.0789\right) | \left(0.0097\right) | ||
\alpha\left(2\right) | 1.00 | 1.0065 | 1.0062 | 0.9979 | 0.9991 | 1.0020 | 1.0008 |
\left(0.0768\right) | \left(0.0386\right) | \left(0.0703\right) | \left(0.0186\right) | \left(0.0207\right) | \left(0.0083\right) | ||
\alpha\left(3\right) | 1.50 | 1.4706 | 1.4709 | 1.5163 | 1.4914 | 1.4928 | 1.4966 |
\left(0.0755\right) | \left(0.0588\right) | \left(0.0584\right) | \left(0.0285\right) | \left(0.0264\right) | \left(0.0128\right) | ||
\beta_{1}\left(1\right) | 0.15 | 0.1462 | 0.1487 | 0.1524 | 0.1489 | 0.1515 | 0.1496 |
\left(0.0819\right) | \left(0.0038\right) | \left(0.0594\right) | \left(0.0017\right) | \left(0.0566\right) | \left(0.0008\right) | ||
\beta_{1}\left(2\right) | -0.15 | -0.1426 | -0.1444 | -0.1465 | -0.1482 | -0.1490 | -0.1496 |
\left(0.0921\right) | \left(0.0040\right) | \left(0.0810\right) | \left(0.0021\right) | \left(0.0552\right) | \left(0.0009\right) | ||
\beta_{1}\left(3\right) | 0.45 | 0.4544 | 0.4459 | 0.4575 | 0.4476 | 0.4546 | 0.4491 |
\left(0.0946\right) | \left(0.0054\right) | \left(0.0680\right) | \left(0.0026\right) | \left(0.0529\right) | \left(0.0012\right) | ||
\beta_{2}\left(1\right) | -0.55 | -0.5626 | -0.5477 | -0.5598 | -0.5479 | -0.5516 | -0.5496 |
\left(0.0982\right) | \left(0.0046\right) | \left(0.0301\right) | \left(0.0026\right) | \left(0.0079\right) | \left(0.0010\right) | ||
\beta_{2}\left(2\right) | 0.25 | 0.2367 | 0.2538 | 0.2405 | 0.2513 | 0.2437 | 0.2502 |
\left(0.1054\right) | \left(0.0042\right) | \left(0.0899\right) | \left(0.0023\right) | \left(0.0303\right) | \left(0.0011\right) | ||
\beta_{2}\left(3\right) | -0.35 | -0.3580 | -0.3481 | -0.3570 | -0.3525 | -0.3568 | -0.3513 |
\left(0.0741\right) | \left(0.0046\right) | \left(0.0670\right) | \left(0.0025\right) | \left(0.0226\right) | \left(0.0010\right) | ||
\gamma\left(1\right) | 0.00 | 0.0133 | 0.0061 | -0.0044 | 0.0016 | 0.0016 | 0.0008 |
\left(0.0607\right) | \left(0.0124\right) | \left(0.0288\right) | \left(0.0053\right) | \left(0.0153\right) | \left(0.0023\right) | ||
\gamma\left(2\right) | 0.65 | 0.6676 | 0.6465 | 0.6617 | 0.6573 | 0.6598 | 0.6522 |
\left(0.1020\right) | \left(0.0250\right) | \left(0.0955\right) | \left(0.0118\right) | \left(0.0582\right) | \left(0.0050\right) | ||
\gamma\left(3\right) | 0.05 | 0.0603 | 0.0582 | 0.0437 | 0.0474 | 0.0561 | 0.0521 |
\left(0.0887\right) | \left(0.0234\right) | \left(0.0228\right) | \left(0.0104\right) | \left(0.0161\right) | \left(0.0048\right) |
Series | \left(X_{t}\right) \times10^{3} | \left(\varpi_{t}\right) \times10^{3} | \left(\varpi_{t}^{2}\right) \times10^{7} | \left(\left\vert \varpi_{t}\right\vert \right) \times10^{4} | \left(\log\left\vert \varpi_{t}\right\vert \right) \times10^{3} |
mean | 0.0886 | 0.0001 | 0.0000 | 0.0004 | 0.0004 |
Std. Dev | 0.0116 | 0.0050 | 0.0000 | 0.0004 | 0.0006 |
Median | 0.0911 | 0.0001 | 0.0000 | 0.0003 | 0.0005 |
Skewness | -0.0005 | 0.0004 | 0.0000 | 0.0003 | -0.0013 |
Kurtosis | 0.0021 | 0.0090 | 0.0000 | 0.0018 | 0.0065 |
Min | 0.0672 | -0.0233 | 0.0000 | 0.0000 | -0.0040 |
Max | 0.1091 | 0.0497 | 0.0002 | 0.0050 | 0.0019 |
Jarque–Bera | 0.2325 | 4.5971 | 2.7228 | 3.4009 | 2.4880 |
ARCH (300) | 100\% | 100\% | 00\% | 100\% | 100\% |
LB test | 100\% | 24.20\% | 98.23\% | 100\% | 100\% |