1. Introduction
It is widely acknowledged that the volatility of financial returns, i.e., their instantaneous variation, is not constant over time. This phenomenon, often termed volatility clustering, manifests as strong serial autocorrelation in squared returns and is of particular importance in financial time series analysis. A prominent tool for modeling volatility is the stochastic volatility (SV) model, initially introduced by [1]. It stands as a credible alternative to the widely used autoregressive conditional heteroskedasticity/generalized autoregressive conditional heteroskedasticity (ARCH/GARCH) family of models. While both model families serve to analyze time-dependent variances, they differ notably in construction. ARCH/GARCH models characterize time-dependent variances by expressing volatility as a deterministic function of past observations or past volatility, allowing for one-step-ahead forecasts. Conversely, SV modeling embraces the stochastic nature of volatility, enabling it to evolve according to its own stochastic process. Empirically, SV models offer greater flexibility than ARCH/GARCH models due to the incorporation of an innovation term into the latent volatility process [2,3], which enhances their ability to capture the dynamics of real-world financial data more accurately.
Moreover, extensive empirical research has identified an asymmetric volatility response to positive and negative past returns. This characteristic, known as the leverage effect, was initially elucidated by [4]. Essentially, financial markets exhibit heightened volatility in reaction to negative shocks, often termed "bad news", compared to positive shocks of equivalent magnitude, or "good news". Despite the efficacy of the basic versions of the GARCH and SV models, neither inherently accounts for the leverage effect or asymmetry. This limitation may constrain the applicability and accuracy of both approaches in certain contexts.
To address the asymmetric responses to positive and negative returns, researchers have proposed extensions to existing models. Research from [5] proposes an extension to the $ \log $GARCH model, which captures such stylized facts by accommodating asymmetric responses. Similarly, [6] explores asymmetric specifications of GARCH models. In the SV framework, incorporation of the leverage effect and asymmetry can be achieved by introducing correlations between the volatility process noise and the observation series noise [7]. This approach enhances the SV model's ability to capture the nuanced dynamics of financial markets, including asymmetric responses to market shocks.
Threshold models have made significant contributions to volatility modeling within deterministic frameworks, and extending this concept to the SV framework is both intuitive and promising. Breidt [8] introduced the threshold SV (TSV) model, integrating the threshold concept into SV analysis. Drawing on Tong's foundational work [9], the TSV model posits that volatility dynamics switch between two regimes based on the nature of incoming information (good or bad). In each regime, volatility is modeled using a first-order autoregressive process, with transitions between regimes determined by the signs of lagged stock returns. So et al. [10] proposed a similar approach, constructing a threshold SV model that captures asymmetry in both the mean and the variance simultaneously. Chen et al. [11] further generalized the TSV model by incorporating heavy-tailed error distributions. TSV models have gained popularity for their efficacy in representing financial return volatility [12].
In the standard econometric literature, numerous extensions have been suggested to incorporate additional characteristics of time series data, such as long memory, simultaneous dependence, and regime changes [13,14,15,16,17]. However, a significant portion of these approaches relies on fixed volatility parameters, which may fall short of explaining time series whose volatility displays a periodically correlated pattern. Such patterns cannot be adequately modeled by TSV models with parameters that remain constant over time. To overcome this constraint, researchers have developed models that explicitly integrate periodicity into the model parameters. Periodicity in SV models was first introduced by Tsiakas [20], whose periodic SV (PSV) model represents the parameters as a combination of sine and cosine functions along with suitable dummy variables; this representation, however, only captures a particular scenario of deterministic seasonality and does not account for periodically correlated volatilities. To model stochastic periodicity in SV models, Aknouche [18] proposed the periodic autoregressive SV (PAR-SV) model, also considered by Boussaha and Hamdi [19]. This paper explores a novel class of periodic volatility models, the periodic threshold autoregressive SV (PTAR-SV) models, in which the logarithm of volatility is represented by a first-order periodic TAR model. The main contribution of the present manuscript is to allow for asymmetry in periodic SV models by combining the works of Breidt [8] and Aknouche [18]. Let us first recall the PAR-SV$ _{s} $ process $ \left(X_{t}, t\in\mathbb{Z}\right) $, defined on $ \left(\Omega, \Im, P\right) $ and satisfying the factorization
where $ \left(e_{t}\right) $ is a sequence of independent and identically distributed (i.i.d.) random variables, defined on the same probability space, with zero mean and unit variance. Furthermore, the periodic $ \log $-volatility process is
where the parameters $ \alpha\left(.\right) $, $ \beta\left(.\right) $, and $ \gamma\left(.\right) $ are functions that vary periodically with time $ t $, with period $ s $ (i.e., $ \forall n\in\mathbb{Z} $, $ \alpha\left(t\right) = \alpha\left(t+ns\right) $, and so on), and $ \left(\eta_{t}, t\in\mathbb{Z}\right) $ is a sequence of i.i.d.$ \left(0, 1\right) $ random variables that satisfies the following assumption:
Assumption 1. $ \left(e_{t}\right) $ and $ \left(\eta_{t}\right) $ are independent.
In this context, we introduce the PTAR-SV$ _{s} $ model, formulated as per Eq (1.1) but with the periodic $ \log $-volatility process replaced by
In Eq (1.3), the functions $ \alpha\left(.\right) $, $ \beta_{1}\left(.\right) $, $ \beta_{2}\left(.\right) $, and $ \gamma\left(.\right) $ switch between the $ s $ seasons, denoted by $ \alpha\left(.\right) = { \sum\limits_{l = 0}^{s-1}} \alpha(l)\mathbb{I}_{\Delta\left(l\right) }\left(.\right) $, $ \beta_{1}\left(.\right) = { \sum\limits_{l = 0}^{s-1}} \beta_{1}\left(l\right) \mathbb{I}_{\Delta\left(l\right) }\left(.\right) $, $ \beta_{2}\left(.\right) = { \sum\limits_{l = 0}^{s-1}} \beta_{2}\left(l\right) \mathbb{I}_{\Delta\left(l\right) }\left(.\right) $, and $ \gamma\left(.\right) = { \sum\limits_{l = 0}^{s-1}} \gamma\left(l\right) \mathbb{I}_{\Delta\left(l\right) }\left(.\right) $, where $ \Delta\left(l\right) : = \left\{ st+l, t\in\mathbb{Z}\right\} $. As per Eq (1.3), it is possible to express them equivalently in a periodic version, as follows:
for all $ v\in\left\{ 1, ..., s\right\} $. Our model extends the scope of previous models, offering a more comprehensive framework for volatility analysis. In particular, it reduces to the standard TAR-SV model of Breidt [8] when $ s = 1 $, and to the symmetric PAR-SV$ _{s} $ model when $ \beta_{1}\left(.\right) = \beta_{2}\left(.\right) $.
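To make the data-generating mechanism concrete, the following is a minimal simulation sketch of the PTAR-SV$ _{s} $ recursion, assuming Gaussian innovations and the standard SV factorization $ X_{t} = e_{t}\exp\left(h_{t}/2\right) $ for Eq (1.1); the function name and parameter values are illustrative, not part of the model definition.

```python
import numpy as np

def simulate_ptar_sv(alpha, beta1, beta2, gamma, n, burn=500, seed=0):
    """Simulate a PTAR-SV_s path: X_t = e_t * exp(h_t / 2) with
    h_t = alpha(t) + (beta1(t) I{e_{t-1} > 0} + beta2(t) I{e_{t-1} < 0}) h_{t-1}
          + gamma(t) eta_t, the parameters being s-periodic in t."""
    s = len(alpha)
    rng = np.random.default_rng(seed)
    total = n + burn
    e = rng.standard_normal(total)      # observation noise e_t ~ i.i.d. N(0, 1)
    eta = rng.standard_normal(total)    # volatility noise eta_t ~ i.i.d. N(0, 1)
    h = np.zeros(total)
    x = np.zeros(total)
    for t in range(1, total):
        v = t % s                       # season of time t (0-indexed here)
        slope = beta1[v] if e[t - 1] > 0 else beta2[v]
        h[t] = alpha[v] + slope * h[t - 1] + gamma[v] * eta[t]
        x[t] = e[t] * np.exp(h[t] / 2)
    return x[burn:], h[burn:]

# Example with s = 2 seasons (illustrative parameter values).
x, h = simulate_ptar_sv(alpha=[0.1, -0.2], beta1=[0.5, 0.3],
                        beta2=[0.8, 0.6], gamma=[0.3, 0.2], n=1500)
```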
This paper is arranged in the following manner. Section 2 introduces the periodic linear state-space representation and delves into certain probabilistic properties of the proposed model. Section 3 outlines a direct approach to addressing the estimation problem, employing a periodic Kalman filter. Additionally, in Section 4, we introduce a Bayesian approach using the Griddy Gibbs sampler. The performance of our proposed estimation method is then evaluated through a simulation study in Section 5. Real-world applications to the spot rates of the euro against the Algerian dinar log-return series are examined in Section 6. Finally, Section 7 presents the conclusions that can be drawn from our study, while the proofs of the main results are provided in the appendices.
2. Probabilistic properties of PTAR-SV$ _{s} $
To enhance the statistical analysis of the proposed model, it is crucial to establish conditions that ensure certain probabilistic properties of PTAR-SV$ _{s} $, including periodic stationarity and the existence of higher-order moments. Many studies have addressed such properties, for instance in the asymmetric standard case, i.e., TAR-SV$ _{1} $ (see, e.g., [8]), the symmetric standard case, i.e., AR-SV$ _{1} $ (see, e.g., [21] and references therein), and the symmetric periodic case, i.e., PAR-SV$ _{s} $ (see, e.g., [18,19], and references therein). As is customary in the modeling of periodic time-varying systems, we can now express Eqs (1.1)–(1.4) in a convenient manner. This approach is similar to the one used by Gladyshev [22] for periodic linear models: a time-invariant multivariate random coefficient AR-SV model is constructed by stacking over the seasons $ v\in\left\{1, ..., s\right\} $. Consequently, our focus shifts to the analysis of the properties inherent to this model. To clarify, if we define the $ s $-vectors $ \underline{X}_{n}^{\prime}: = \left(X_{sn+1}, ..., X_{sn+s}\right) $, $ \underline{h}_{n}^{\prime}: = \left(h_{sn+1}, ..., h_{sn+s}\right) $, and $ \underline{\eta}_{n}^{\prime}: = \left(\eta_{sn+1}, ..., \eta_{sn+s}\right) $, the model represented by (1.1)–(1.4) can then be formulated as a multivariate random coefficient AR-SV model,
where $ \Delta_{n}: = diag\left(e_{sn+1}, ..., e_{sn+s}\right) $ and $ \underline{\Lambda}_{n} $, $ \Gamma_{n}^{\left(1\right) } $, and $ \Gamma _{n}^{\left(2\right) } $ are given by
Here, $ O_{\left(n, m\right) } $ signifies an $ n\times m $ matrix in which all entries are zero and the function $ I_{\left\{.\right\} } $ refers to an indicator function. Deriving directly from the earlier equivalent formulation, we can deduce the following properties:
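To illustrate the structure, consider the case $ s = 2 $ and write $ b_{t}: = \beta_{1}\left(t\right) \mathbb{I}_{\left\{ e_{t-1} > 0\right\} }+\beta_{2}\left(t\right) \mathbb{I}_{\left\{ e_{t-1} < 0\right\} } $. Substituting the season-1 equation into the season-2 equation, one possible arrangement of the stacked recursion $ \underline{h}_{n} = \underline{\Lambda}_{n}+\Gamma_{n}^{\left(1\right) }\underline{h}_{n-1}+\Gamma_{n}^{\left(2\right) }\underline{\eta}_{n} $ is (a presumable reconstruction, given for intuition only):

$$ \underline{\Lambda}_{n} = \begin{pmatrix} \alpha(1)\\ \alpha(2)+b_{2n+2}\alpha(1) \end{pmatrix},\quad \Gamma_{n}^{(1)} = \begin{pmatrix} 0 & b_{2n+1}\\ 0 & b_{2n+2}b_{2n+1} \end{pmatrix},\quad \Gamma_{n}^{(2)} = \begin{pmatrix} \gamma(1) & 0\\ b_{2n+2}\gamma(1) & \gamma(2) \end{pmatrix}. $$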
2.1. Periodic stationarity
In this context, our current objective is to establish conditions that ensure the existence of a unique strictly stationary solution (in the periodic sense), as defined by Ghezal et al. [5,23], for a PTAR-SV$ _{s} $ process. We will now initiate our investigation by examining the strict stationarity of the model represented by Eq (2.1), utilizing a fundamental tool: the highest Lyapunov exponent for random matrices that are independent and periodically distributed (i.p.d.). Given that the sequence $ \left\{ \left(\Gamma_{n}^{\left(1\right) }, \underline{\Lambda}_{n}+\Gamma_{n}^{\left(2\right) }\underline{\eta}_{n}\right), n\in\mathbb{Z}\right\} $ is an i.p.d. sequence and both $ E\left\{ \log^{+}\left\Vert \Gamma_{0}^{\left(1\right) }\right\Vert \right\} $ and $ E\left\{ \log^{+}\left\Vert \underline{\Lambda}_{0}+\Gamma_{0}^{\left(2\right) }\underline{\eta}_{0}\right\Vert \right\} $ are finite, where $ \log^{+}\left(x\right) = \left(\log x\right) \vee0 $, based on Brandt's theorem [24] (also presented as Theorem 1.1 in Bougerol and Picard [25]), a sufficient condition for the model described by Eq (2.1) to admit a non-anticipative strictly stationary solution is that the highest Lyapunov exponent associated with the i.p.d. sequence of matrices $ \Gamma: = (\Gamma_{n}^{\left(1\right) }, n\in\mathbb{Z}) $ be strictly negative, $ \gamma\left(\Gamma\right) : = \underset{m\geq1}{\inf}\dfrac{1}{m}E\left\{ \log\left\Vert \prod\limits_{l = 0}^{m-1}\Gamma_{n-l}^{\left(1\right) }\right\Vert \right\} < 0. $ Taking a multiplicative norm, we can establish the following inequality:
where $ \delta = P\left(e_{0} > 0\right). $ We are now able to present a fundamental result that establishes a sufficient condition for achieving strict periodic stationarity:
Theorem 1. The PTAR-SV$ _{s} $ model, given by (1.1)–(1.4), admits a unique, non-anticipative, strictly periodically stationary, and periodically ergodic solution. This solution, for $ t\in\mathbb{Z} $ and $ v\in\left\{ 1, ..., s\right\}, $ is given by:
such that the series (2.2) converges almost surely, if and only if
Example 1. In Table 1 below, we summarize condition (2.3) for various specific cases.
In the case of the PTAR-SV$ _{s} $ model, the presence of explosive seasons, indicated by $ \delta\left\vert \beta_{1}\left(v\right) \right\vert +\left(1-\delta\right) \left\vert \beta_{2}\left(v\right) \right\vert \geq1, $ does not necessarily eliminate the possibility of having a strictly periodically stationary solution. Specifically, when $ s = 2 $ and under certain parameter conditions such as $ \beta_{1}\left(2\right) = 2\beta_{2}\left(1\right) = a $ and $ \beta_{1}(1)+1 = \beta_{2}(2) = b $, along with $ e_{t}\leadsto t_{\left(3\right) } $, Figure 1 below illustrates the regions of strict periodic stationarity.
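As a quick numerical check, the following sketch evaluates the periodic stationarity condition $ \prod_{v = 1}^{s}\left(\delta\left\vert \beta_{1}\left(v\right) \right\vert +\left(1-\delta\right) \left\vert \beta_{2}\left(v\right) \right\vert \right) < 1 $ (the product form of condition (2.3), as restated in Section 5 and the Appendix); the parameter values are illustrative.

```python
import numpy as np

def is_periodically_stationary(beta1, beta2, delta=0.5):
    """Check prod_v (delta*|beta1(v)| + (1-delta)*|beta2(v)|) < 1,
    where delta = P(e_0 > 0)."""
    product = np.prod(delta * np.abs(beta1) + (1 - delta) * np.abs(beta2))
    return product < 1, product

# An explosive season (season 1: 0.5*1.8 + 0.5*0.5 = 1.15 >= 1) offset by a
# contracting season; a symmetric e_t gives delta = 0.5.
ok, val = is_periodically_stationary(np.array([1.8, 0.3]), np.array([0.5, 0.8]))
print(ok, round(val, 4))   # True 0.6325, since 1.15 * 0.55 < 1
```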
Other properties, including periodic geometric ergodicity, strong mixing, and moments of the PTAR-SV$ _{s} $ model, are also provided.
2.2. Periodic geometric ergodicity and strong mixing
We now delve into the statistical properties of PTAR-SV$ _{s} $ processes, focusing on periodic geometric ergodicity and strong mixing. We initially establish that $ \left(\underline{h}_{n}, n\in\mathbb{Z}\right) $ forms a Markov chain with state space $ \left(\mathbb{R}^{s}, \mathcal{B}_{\mathbb{R}^{s}}\right), $ where $ \mathcal{B}_{\mathbb{R}^{s}} $ represents the Borel $ \sigma $-field on $ \mathbb{R}^{s} $. This Markov chain exhibits time-homogeneous $ n $-step transition probabilities, denoted as $ P^{n}\left(\underline{h}, A\right) = P\left(\underline{h}_{n}\in A\left\vert \underline{h}_{0} = \underline{h}\right. \right) $, where $ \underline{h}\in\mathbb{R}^{s} $, $ A\in\mathcal{B}_{\mathbb{R}^{s}} $, and $ P^{1}\left(\underline{h}, A\right) = P\left(\underline{h}, A\right) $. The invariant probability of the Markov chain is defined as $ \pi\left(A\right) = \int P\left(\underline{h}, A\right) \pi\left(d\underline{h}\right). $ Furthermore, if $ \lambda $ is the Lebesgue measure on $ \left(\mathbb{R}^{s}, \mathcal{B}_{\mathbb{R}^{s}}\right), $ then $ \left(\underline{h}_{n}, n\in\mathbb{Z}\right) $ is $ \lambda $-irreducible and aperiodic. As a consequence, $ \left(\underline{h}_{n}, n\in\mathbb{Z}\right) $ possesses the strong Feller property (for a more detailed discussion, refer to Meyn and Tweedie [27]). We can then state the following result:
Theorem 2. Under condition (2.3), our process $ \left(X_{t}\right) $, described by Eqs (1.1)–(1.4), exhibits geometric periodic ergodicity. Furthermore, if initialized from its invariant measure, $ \left(X_{t}\right) $ (resp., $ \left(h_{t}\right) $) is strictly periodically stationary and periodically $ \beta $-mixing with exponential decay.
2.3. Moments of the PTAR-SV$ _{s} $ model
If the distribution of $ \left(e_{t}, t\in\mathbb{Z}\right) $ is symmetric, then the odd-order moments of $ \left(X_{t}, t\in\mathbb{Z}\right) $, whenever they exist, are zero. Furthermore, assuming that $ E\left\{ e_{t}^{2m}\right\} < \infty $ for all $ m\in\mathbb{N}^{\ast} $, we can calculate the even-order moments of $ \left(X_{t}, t\in\mathbb{Z}\right) $ using well-established results for the $ \log $-normal distribution, as the following worked special case illustrates; the theorem summarizing these conditions then follows.
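For intuition, consider the symmetric special case $ \beta_{1}\left(.\right) = \beta_{2}\left(.\right) $, in which $ h_{st+v} $ is Gaussian in the strictly periodically stationary regime, say with seasonal mean $ \mu_{v} $ and variance $ \sigma_{v}^{2} $ (notation introduced here for illustration only). Since $ e_{st+v} $ is independent of $ h_{st+v} $, the $ \log $-normal moment formula gives

$$ E\left\{ X_{st+v}^{2m}\right\} = E\left\{ e_{t}^{2m}\right\} E\left\{ \exp\left(m\,h_{st+v}\right) \right\} = E\left\{ e_{t}^{2m}\right\} \exp\left(m\mu_{v}+\tfrac{1}{2}m^{2}\sigma_{v}^{2}\right). $$

The general (asymmetric) statement below follows the same route.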
Theorem 3. Consider $ \left(X_{t}, t\in\mathbb{Z}\right) $ to be a strictly periodically stationary solution to Eqs (1.1)–(1.4), where $ E\left\{ e_{t}^{2m}\right\} < \infty $ for all $ m\in\mathbb{N}^{\ast} $. A sufficient condition for $ E\left\{ X_{t}^{2m}\right\} $ to be finite is:
Additionally, the closed-form expression for the $ 2m $-th moment of $ X_{t} $ is:
We next present the autocovariance of the powered process. This autocovariance, denoted $ \Xi_{v, X}^{\left(2m\right) }\left(n\right), $ is valuable for both model identification and the derivation of specific estimation methods. It is defined as $ \Xi_{v, X}^{\left(2m\right) }\left(n\right) = E\left\{ X_{st+v}^{2m}X_{st+v-n}^{2m}\right\}. $
Theorem 4. Consider $ \left(X_{t}, t\in\mathbb{Z}\right) $ to be a strictly periodically stationary solution to Eqs (1.1)–(1.4), where $ E\left\{ e_{t}^{4m}\right\} < \infty $ holds for any positive integer $ m $. Under conditions (2.3), (2.4), and
for $ n\geq0 $, $ v = 1, \ldots, s, $ we have
Hence, the autocovariance function for the process $ \left(X_{t}^{2m}, t\in\mathbb{Z}, m\in\mathbb{N}^{\ast}\right) $ can be expressed as follows:
for $ n\geq0 $, $ v = 1, \ldots, s. $
3. QML estimation
Here, we discuss the implementation of the QML estimator (QMLE) based on the periodic Kalman filter for estimating the parameters of the PTAR-SV$ _{s} $ model. Let $ \underline{\theta}^{\prime}: = \left(\underline{\theta}_{1}, ..., \underline{\theta}_{s}\right) \in\Theta\subset\mathbb{R}^{4s} $, where $ \underline{\theta}_{v}^{\prime}: = \left(\underline{\varphi}_{v}^{\prime}, \gamma\left(v\right) \right), $ $ \underline{\varphi}_{v}^{\prime}: = \left(\alpha\left(v\right), \beta_{1}\left(v\right), \beta_{2}\left(v\right) \right) $ for all $ v\in\left\{ 1, ..., s\right\} $. The actual parameter value, symbolized by $ \underline{\theta}_{0}\in\Theta $, is unknown and requires estimation. To undertake this task, let $ \underline{X} = \left\{ X_{1}, ..., X_{sN}\right\} $ denote an observed sample from the unique, causal, strictly periodically stationary solution of (1.1)–(1.4). It is sensible to describe the quasi-likelihood function for $ \underline{\theta} $ in the innovation form, as follows:
where $ \widehat{\omega}_{t} $ represents the sample innovation at time $ t $, defined as $ \widehat{\omega}_{t} = \log\left(X_{t}^{2}\right) -\widehat {X}_{\left. t\right\vert t-1}, $ where $ \widehat{X}_{\left. t\right\vert t-1} $ denotes the optimal linear predictor of $ \log\left(X_{t}^{2}\right) $ based on the observations $ \log\left(X_{1}^{2}\right) $, ..., $ \log\left(X_{t-1}^{2}\right). $ A QMLE of $ \underline{\theta} $ is identified as any measurable solution, denoted as $ \widehat{\underline{\theta}}_{n} $, such that:
The optimal linear predictor is represented by $ \widehat{X}_{\left. t\right\vert t-1}, $ and the mean square error $ Q_{\left. t\right\vert t-1} = E\left\{ \left(\widehat{h}_{\left. t\right\vert t-1}-h_{t}\right) ^{2}\right\} $ can be recursively computed using the periodic Kalman filter as follows:
with start-up values $ \widehat{h}_{\left. 1\right\vert 0} = E\left\{ h_{1}\right\} $ and $ Q_{\left. 1\right\vert 0} = var\left(h_{1}\right). $
The estimation of the unknown parameter is achieved by maximizing the quasi-log-likelihood $ \log L_{\underline{\theta}}\left(\underline {X}\right) $. However, explicit formulas for the estimates at the maximum of $ L_{\underline{\theta}}\left(\underline{X}\right) $ are not readily available, necessitating the application of numerical optimization methods.
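As a concrete (and necessarily simplified) sketch, the following Python code assembles the Gaussian quasi-log-likelihood through the periodic Kalman recursion above, exploiting the fact that $ \mathrm{sign}(e_{t-1}) = \mathrm{sign}(X_{t-1}) $, so the threshold coefficient is observable at each step. The linearization constants assume Gaussian $ e_{t} $; the season indexing, start-up values, and function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Moments of log(e_t^2) for Gaussian e_t, used in the linearization
# log X_t^2 = h_t + log e_t^2 (cf. Remark 1).
MU_LOG_CHI2 = -1.2704           # E[log e_t^2] = psi(1/2) + log 2
VAR_LOG_CHI2 = np.pi ** 2 / 2   # Var[log e_t^2]

def neg_quasi_loglik(theta, x, s):
    """Gaussian quasi-log-likelihood assembled by a periodic Kalman filter;
    theta packs (alpha(v), beta1(v), beta2(v), gamma(v)) for v = 1..s."""
    alpha, beta1, beta2, gamma = np.split(np.asarray(theta), 4)
    y = np.log(x ** 2) - MU_LOG_CHI2    # centered transformed observations
    h_filt, p_filt = 0.0, 1.0           # start-up values for E{h_1}, var(h_1)
    nll = 0.0
    for t in range(1, len(x)):
        v = t % s
        b = beta1[v] if x[t - 1] > 0 else beta2[v]  # sign(e_{t-1}) = sign(X_{t-1})
        # Prediction step of the periodic Kalman recursion.
        h_pred = alpha[v] + b * h_filt
        p_pred = b ** 2 * p_filt + gamma[v] ** 2
        # Innovation omega_t and its variance F_t.
        omega = y[t] - h_pred
        f = p_pred + VAR_LOG_CHI2
        nll += 0.5 * (np.log(2 * np.pi * f) + omega ** 2 / f)
        # Update step.
        k = p_pred / f
        h_filt = h_pred + k * omega
        p_filt = (1 - k) * p_pred
    return nll

# QMLE by derivative-free optimization (the analogue of MATLAB's fminsearch):
# theta_hat = minimize(neg_quasi_loglik, x0, args=(x, s), method="Nelder-Mead").x
```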
It is crucial to highlight that, under appropriate conditions, the QMLE, $ \widehat{\underline{\theta}}_{n} $, which minimizes the loss function $ -\log L_{\underline{\theta}}\left(\underline{X}\right) $, has been demonstrated to be consistent and asymptotically normally distributed (see Ljung [28]). The asymptotic covariance matrix has been established to be the inverse of the asymptotic information matrix. In spite of these advantageous asymptotic properties, the efficacy of the periodic Kalman filter and periodic Chandrasekhar filter recursions may diminish in situations that deviate from normality or linearity. It is widely acknowledged that the QML estimator may not be optimal in finite samples, leading researchers to explore Monte Carlo-based approximations to enhance performance, especially in the context of state-space models such as SV models.
Embarking on the exploration of maximum likelihood through the expectation-maximization (EM) algorithm, augmented with particle filters and smoothers, marks a significant step forward in enhancing the robustness of parameter estimation in the context of nonlinear and/or non-Gaussian state-space models. Particle filters, recognized as sequential Monte Carlo methods, present a potent alternative to the conventional Kalman filter, especially when confronted with optimal estimation challenges within nonlinear/non-Gaussian state-space frameworks. Comprehensive surveys of particle methods are available in Arulampalam et al. [29] and Doucet et al. [30], offering valuable insights into their applications and methodologies.
To delve deeper into the application of these methodologies, we turn our attention to the linearized representation obtained by taking the logarithm of $ X_{sn+v}^{2} $ in the assumed periodically stationary PTAR-SV model. Let $ \underline{h} = \left\{ h_{0}, h_{1}, ..., h_{sN}\right\} $ and $ \underline{\chi} = \left\{ \log X_{1}^{2}, ..., \log X_{sN}^{2}\right\} $ denote the $ \log $-volatility vector and the vector of transformed observations, respectively, which together constitute the complete data. Given a specific realization of $ \underline{h} $, the complete $ \log $-likelihood function for the parameter $ \underline{\theta} $ can be formulated in the following manner:
where $ D $ represents a constant that is independent of $ \underline{\theta} $ and $ h_{0} $ follows a Gaussian distribution $ \mathcal{N}\left(\kappa _{0}, \tau_{0}^{2}\right). $
In instances of incomplete data, a widely adopted approach to parameter estimation is the iterative EM algorithm (see Dempster et al. [31]). This method is recognized for its versatility in computing the maximum likelihood estimate (MLE). The EM algorithm generates a sequence of values $ \widehat{\underline{\theta}}_{\left(l\right) } $, $ l\geq1 $, which progressively refine the MLE. The algorithm consists of two primary steps: an expectation step (E-step) followed by a maximization step (M-step). At the outset of the $ l $-th iteration, parameters are taken from the preceding iteration, $ \widehat{\underline{\theta}}_{\left(l-1\right) } $. The E-step involves the definition of the $ E\left(., .\right) $ function, an integral component of the EM algorithm,
where
Prior to the transition to the M-step, a crucial prerequisite is the evaluation of various quantities, including $ E\left\{ \left. \exp\left(\log X_{st+v}^{2}-h_{st+v}\right) \right\vert \underline{h}, \underline{\chi }, \widehat{\underline{\theta}}_{\left(l-1\right) }\right\} $, $ h_{t, Ns} $, $ H_{t, Ns} $, and $ H_{t, t-1}^{\left(Ns\right) } $. These quantities can be sequentially approximated over time through the application of particle filtering and smoothing algorithms—an extension of Kim and Stoffer's approach [16]. To tackle this task, we employ the particle filter algorithm, as outlined below:
Particle Filter Algorithm for PTAR-SV:
ⅰ. Initialization: Set $ M $ as the number of particles; draw the initial particles $ g_{0}^{(l)}\thicksim P_{0}(h_{0}) $ and set $ u_{0}^{(l)} = M^{-1} $, $ l = 1, \dots, M $.
ⅱ. Particle prediction: For $ t\geq1 $ and $ l = 1, \dots, M: $
$ \textbf{1)} $ Generate $ \eta_{t}^{(l)}\thicksim\mathcal{N}(0, 1) $.
$ \textbf{2)} $ Calculate $ P_{t}^{\left(l\right) } = \alpha\left(t\right) +\left(\beta_{1}\left(t\right) \mathbb{I}_{\left\{ e_{t-1} > 0\right\} }+\beta_{2}\left(t\right) \mathbb{I}_{\left\{ e_{t-1} < 0\right\} }\right) g_{t-1}^{\left(l\right) }+\gamma\left(t\right) \eta_{t}^{(l)}. $
ⅲ. Weight update:
ⅳ. Normalize the importance weights: Compute $ \widetilde{u}_{t}^{(l)} = \left(\sum\limits_{l = 1}^{M}u_{t}^{(l)}\right) ^{-1}\times u_{t}^{(l)}, $ for $ l = 1, \dots, M. $
ⅴ. Compute the measure of degeneracy: $ n_{eff} = \left(\sum\limits_{l = 1}^{M}\left(\widetilde{u}_{t}^{(l)}\right) ^{2}\right) ^{-1}. $ If $ n_{eff}\leq n_{T} $ (typically $ 2n_{T} = M $), resample with replacement $ M $ equally weighted particles $ \left\{ g_{t}^{(l)}, l = 1, \dots, M\right\} $ from the set $ \left\{ P_{t}^{\left(l\right) }, l = 1, \dots, M\right\} $ according to the normalized weights; otherwise, set $ g_{t}^{(.)} = P_{t}^{\left(.\right) }. $
ⅵ. Assign the particle: Assign the particle set $ \left\{ g_{t}^{(l)}, l = 1, \dots, M\right\} $ a weight $ \left\{ \widetilde {u}_{t}^{(l)}, l = 1, \dots, M\right\}. $
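Below is a minimal Python sketch of steps ⅰ–ⅵ, assuming Gaussian $ e_{t} $ (so that the measurement density is $ X_{t}\left\vert h_{t}\right. \thicksim\mathcal{N}\left(0, e^{h_{t}}\right) $) and the resampling threshold $ 2n_{T} = M $; the array layout, season indexing, and returned filtered mean are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(x, alpha, beta1, beta2, gamma, M=1000):
    """SMC sketch for PTAR-SV: particles g target p(h_t | x_{1:t}).
    Measurement density: x_t | h_t ~ N(0, exp(h_t)) (Gaussian e_t assumed)."""
    s, n = len(alpha), len(x)
    g = rng.standard_normal(M)              # step i: particles from P_0(h_0)
    u = np.full(M, 1.0 / M)                 # step i: uniform initial weights
    filt_mean = np.zeros(n)
    for t in range(1, n):
        v = t % s                           # season (0-indexed for illustration)
        b = beta1[v] if x[t - 1] > 0 else beta2[v]  # sign(e_{t-1}) = sign(x_{t-1})
        # Step ii: propagate particles through the volatility recursion.
        p = alpha[v] + b * g + gamma[v] * rng.standard_normal(M)
        # Steps iii-iv: update with the log measurement density, normalize.
        logw = np.log(u) - 0.5 * (p + x[t] ** 2 * np.exp(-p))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Step v: resample when the effective sample size degenerates.
        if 1.0 / np.sum(w ** 2) <= M / 2:   # n_eff <= n_T with 2 n_T = M
            idx = rng.choice(M, size=M, p=w)
            g, u = p[idx], np.full(M, 1.0 / M)
        else:
            g, u = p, w
        filt_mean[t] = np.sum(w * p)        # step vi: weighted particle summary
    return filt_mean
```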
The subsequent particle smoothing algorithm is devised to incorporate information available after a time $ t $, providing approximations for the required quantities $ E\left\{ \left. \exp\left(\log X_{st+v} ^{2}-h_{st+v}\right) \right\vert \underline{h}, \underline{\chi}, \widehat{\underline{\theta}}_{\left(l-1\right) }\right\} $, $ h_{t, Ns} $, $ H_{t, Ns} $ and $ H_{t, t-1}^{\left(Ns\right) } $ essential for the E-step.
Particle Smoothing Algorithm for PTAR-SV:
ⅰ. Initialization: For each $ l = 1, \dots, M $, draw $ q_{sN}^{(l)} $ from the final particle set $ \left\{ g_{sN}^{(m)}\right\} $ with probabilities $ \widetilde{u}_{sN}^{(m)}. $ Set $ \widetilde{U}_{sN}^{(l)} = M^{-1} $.
ⅱ. Particle smoothing iteration: For $ t = sN, \dots, 2 $ (backward in time) and $ l = 1, \dots, M: $
$ \textbf{1)} $ Calculate the smoothed weights: For $ m = 1, \dots, M: $
$ \textbf{2)} $ Normalize the modified weights: For $ m = 1, \dots, M: $
$ \textbf{3)} $ Choose $ q_{t-1}^{(l)} = g_{t-1}^{(m)}, $ with a probability $ \widetilde{U}_{t-1\left\vert t\right.}^{(m)}. $
ⅲ. Final Computation: Compute
and
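Since the paper's exact smoothed-weight and summary formulas are displayed above, the following is only a minimal backward-simulation sketch in the spirit of steps ⅰ–ⅲ. It reweights stored filter particles by the Gaussian transition density of Eq (1.3); the inputs g_hist and u_hist (per-time arrays of filter particles and normalized weights, assumed saved during the filtering pass) and all names are illustrative.

```python
def backward_smoother(x, g_hist, u_hist, alpha, beta1, beta2, gamma, M_s=200):
    """Backward-simulation smoothing sketch: draws M_s trajectories that
    approximate p(h_{0:n-1} | x_{0:n-1}).  The transition
    h_t | h_{t-1} ~ N(alpha(t) + b_t h_{t-1}, gamma(t)^2) follows Eq (1.3),
    with b_t fixed by the sign of x_{t-1}."""
    s, n = len(alpha), len(g_hist)
    traj = np.zeros((M_s, n))
    for l in range(M_s):
        # Initialization: draw the terminal state from the final filter weights.
        traj[l, n - 1] = rng.choice(g_hist[n - 1], p=u_hist[n - 1])
        # Backward pass: reweight filter particles by the transition density.
        for t in range(n - 2, 0, -1):
            v = (t + 1) % s
            b = beta1[v] if x[t] > 0 else beta2[v]
            mean = alpha[v] + b * g_hist[t]
            logw = np.log(u_hist[t]) - 0.5 * ((traj[l, t + 1] - mean) / gamma[v]) ** 2
            w = np.exp(logw - logw.max())
            w /= w.sum()
            traj[l, t] = rng.choice(g_hist[t], p=w)
    return traj
```

Averages of functionals over these trajectories then approximate the conditional expectations required in the E-step.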
After substituting the approximations for $ h_{t, Ns} $, $ H_{t, Ns} $, and $ H_{t, t-1}^{\left(Ns\right) } $, we proceed to the M-step, where the updated parameter $ \widehat{\underline{\theta}}_{\left(l\right) } $ can be obtained by maximizing the $ E\left(., .\right) $-function with respect to $ \underline {\theta} $ given the previous estimate $ \widehat{\underline{\theta}}_{\left(l-1\right) } $. This is expressed as:
The first-order derivatives of Eq (3.2), with respect to the unknown parameters $ \alpha\left(v\right) $, $ \beta_{1}\left(v\right) $, $ \beta_{2}\left(v\right) $, and $ \gamma\left(v\right) $ for $ v = 1, \dots, s $, are provided below:
Therefore, at the $ l $-th iteration, the parameter estimates for $ \alpha\left(v\right), $ $ \beta_{1}\left(v\right) $, $ \beta_{2}\left(v\right) \ $, and $ \gamma\left(v\right) $ can be obtained by solving the following equations:
The closed-form solutions of the above equations provide the updated parameter estimates at the $ l $-th iteration:
where
and
Remark 1. The asymptotic properties of periodic volatility models have been examined under general regularity conditions by several authors, particularly Aknouche et al. [40,41], so the consistency and asymptotic normality of the QMLE can be established by employing the standard theory of models with time-invariant parameters (see Dunsmuir [32]). This is evident when examining the model (2.1). To achieve this, we refer to the corresponding multivariate time-invariant model presented in (2.1), which we transform into a linear form, as follows:
where $ \underline{Z}_{n}^{\prime}: = \left(\log\left(X_{sn+1}^{2}\right), ..., \log\left(X_{sn+s}^{2}\right) \right) \ $and $ \underline{\Upsilon }_{n}^{\prime}: = \left(\log\left(e_{sn+1}^{2}\right), ..., \log\left(e_{sn+s}^{2}\right) \right). $ Utilizing (3.3), we can apply the theory presented by Dunsmuir [32] to determine the QMLE's asymptotic variance under the condition of finite moments $ E\left\{ \left(\log\left(X_{sn+1}^{2}\right) \right) ^{4}\right\} $ (see also Ruiz et al., [13,33]).
4. Bayesian MCMC estimation
In Bayesian MCMC estimation, the parameter $ \underline{\theta} $ and the unobserved volatilities $ \underline{h}^{\prime} = \left(h_{1}, h_{2}, ..., h_{sN}\right) $ in the model described by Eqs (1.1)–(1.4) are treated as random variables with a prior distribution, denoted by $ g(\underline{h}, \underline{\theta}) $. The objective is to infer the joint posterior distribution, $ g(\underline{h}, \underline{\theta}\left\vert \underline{X}\right.) $, given a series $ \underline{X} $ generated from Eqs (1.1)–(1.4) with Gaussian innovations. Assuming independence among $ \underline{h} $, $ \underline{\varphi}^{\prime}: = \left(\underline{\varphi}_{1}^{\prime}, \underline{\varphi}_{2}^{\prime}, ..., \underline{\varphi}_{s}^{\prime}\right) $, and $ \underline{\xi}^{\prime}: = \left(\gamma_{1}^{2}, \gamma_{2}^{2}, ..., \gamma_{s}^{2}\right) $, and exploiting the periodic structure of the PTAR-SV model, Gibbs sampling can be employed to estimate the joint posterior distribution. The Gibbs sampler involves drawing samples from conditional posterior distributions, namely $ g(\underline{\varphi}\left\vert \underline{\xi}, \underline{X}, \underline{h}\right.) $, $ g(\gamma_{v}^{2}\left\vert \underline{\varphi}, \underline{\gamma}_{-\left\{ v\right\} }^{2}, \underline{X}, \underline{h}\right.) $ for $ v = 1, ..., s $, and $ g(\underline{h}\left\vert \underline{\varphi}, \underline{\xi}, \underline{X}\right.), $ where $ \underline{h}_{-\left\{ t\right\} } $ comprises all elements of the vector $ \underline{h} $ except the $ t $-th element, $ h_{t} $. Sampling directly from $ g(h_{t}\left\vert \underline{\varphi}, \underline{\xi}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) $ is complex, so here we adopt the Griddy-Gibbs procedure, which is simpler to implement in the periodic context than a Metropolis-Hastings chain (for further discussion on this topic, readers can refer to [34,35]).
4.1. Gibbs sampler: Analyzing the prior and posterior distributions
In the first step of the Gibbs sampler, we focus on sampling the parameter $ \underline{\varphi} $. Before delving into the conditional posterior distribution, $ g(\underline{\varphi}\left\vert \underline{\xi}, \underline{X}, \underline{h}\right.) $, derived from conjugate prior distributions and linear regression theory, we express the PTAR equation as a standard linear regression. Specifically, by defining
the model (1.1)–(1.4) can be reformulated into the following periodically linear regression:
Alternatively, it can also be represented as a standard regression:
where the errors follow i.i.d. Gaussian distributions. Assuming knowledge of $ \gamma\left(v\right), v = 1, ..., s $ and the initial value $ h_{0} $, the weighted least squares estimate $ \widehat{\underline{\varphi}}_{WLSE} $ of $ \underline{\varphi} $, based on (4.1), takes the form:
This estimate follows a normal distribution $ \mathcal{N}\left(\underline{\varphi}, Cov\right) $, where
Under (4.1), the data's information about $ \underline{\varphi} $ is encapsulated in the weighted least squares estimate $ \widehat{\underline{\varphi}}_{WLSE} $. To derive a closed-form expression for the conditional posterior $ g(\underline{\varphi}\left\vert \underline{\xi}, \underline{X}, \underline{h}\right.) $, we employ a Gaussian conjugate prior for $ \underline{\varphi} $. Specifically, the prior distribution is Gaussian, denoted $ \underline{\varphi}\thicksim\mathcal{N}(\underline{\varphi}^{\circ}, \Psi^{\circ}) $, where $ \underline{\varphi}^{\circ} $ and $ \Psi^{\circ} $ are known hyperparameters tailored to yield a suitably diffuse yet informative prior. Thus, utilizing standard regression theory, the conditional posterior distribution of $ \underline{\varphi}\left\vert \underline{\xi}, \underline{X}, \underline{h}\right. $ is Gaussian, denoted as
where
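For completeness, the standard conjugate Gaussian regression update (a presumable reconstruction of the displayed formulas, following textbook Bayesian linear regression) is

$$ \overline{\Psi} = \left( \left(\Psi^{\circ}\right)^{-1}+Cov^{-1}\right)^{-1},\qquad \overline{\underline{\varphi}} = \overline{\Psi}\left( \left(\Psi^{\circ}\right)^{-1}\underline{\varphi}^{\circ}+Cov^{-1}\widehat{\underline{\varphi}}_{WLSE}\right), $$

i.e., the posterior precision is the sum of the prior and data precisions, and the posterior mean is the corresponding precision-weighted average.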
A couple of observations are warranted:
$ \textbf{a.} $ The block-diagonal form of the matrix $ Cov $ is mirrored in $ \Psi^{\circ} $ when the latter is also block diagonal. In other words, if the seasonal parameters $ \underline{\varphi}_{1} $, $ \underline{\varphi}_{2} $, $ \dots $, $ \underline{\varphi}_{s} $ are assumed a priori independent, each with its own conjugate prior and appropriate hyperparameters, the same result is obtained season by season.
$ \textbf{b.} $ Enhanced computational efficiency and stability in deriving $ \overline{\underline{\varphi}} $ and $ \overline{\Psi} $ can be achieved by setting $ \overline{\underline{\varphi}} = \overline{\underline{\varphi}}_{sN} $, $ \overline{\Psi} = \overline{\Psi}_{sN} $, and iteratively computing these quantities using the recursive least squares algorithm
with starting values $ \overline{\underline{\varphi}}_{0} = \underline{\varphi}^{\circ} $ and $ \overline{\Psi}_{0} = \Psi^{\circ}. $
This approach may enhance numerical stability and reduce computation time, particularly for large periods $ s $. In the second step of the Gibbs sampler, we sample the parameters $ \gamma^{2}\left(v\right), v = 1, \dots, s $. Conjugate priors for $ \gamma^{2}\left(v\right), v = 1, \dots, s $ are utilized to derive a closed-form expression for the conditional posterior distribution of $ \gamma^{2}\left(v\right) $, given the data and the other parameters $ \underline{\gamma}_{-\left\{ v\right\}}^{2} $. These priors are modeled using the inverted Chi-squared distribution:
where $ \pi_{v} = \tau_{v}^{-1}, v = 1, ..., s $. Given the parameters $ \underline{\varphi} $ and $ \underline{h} $, and specifically defining
then the errors $ \eta_{v}, \eta_{s+v}, ..., \eta_{s(N-1)+v} $ are i.i.d. normal $ \left(0, \gamma^{2}\left(v\right) \right) $, for $ v = 1, ..., s $. Utilizing standard Bayesian linear regression theory, the conditional posterior distribution of $ \gamma^{2}\left(v\right) $ for $ v = 1, ..., s $, given the data and the remaining parameters, conforms to an inverted Chi-squared distribution, represented as:
In the third step of sampling, we address the augmented volatility parameters, $ \underline{h} $. We seek to sample from the conditional posterior distribution $ g(h_{t}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) $, $ t = 1, 2, ..., sN $. To begin, we present the expression for this distribution and, subsequently, we demonstrate the method to indirectly draw samples using the Griddy Gibbs technique. Due to the Markovian nature of the volatility process $ \left\{ h_{t}; t\in\mathbb{Z}\right\} $ and the conditional independence of $ X_{t} $ and $ h_{t-k} $ $ (k\neq0) $, given $ h_{t} $ for any $ t = 2, ..., sN-1 $, we have:
By leveraging the known facts that $ X_{t}\left\vert \underline{\theta}\right., h_{t}\equiv X_{t}\left\vert h_{t}\right. \thicksim\mathcal{N}(0, e^{h_{t}}) $ and $ h_{t}\left\vert h_{t-1}, \underline{\theta}\right. \thicksim\mathcal{N}\left(h_{t}-\gamma\left(t\right) \eta_{t}, \gamma^{2}\left(t\right) \right) $, the formula (4.7) can thus be transformed to:
where
In (4.8), we employ the well-known formula
where $ (a+b)\gamma = a\alpha+b\beta $, provided that $ a+b\neq0 $. For $ h_{1} $ and $ h_{sN} $, a simple approach is adopted where $ h_{1} $ is assumed to be fixed, enabling the sampling process to commence with $ t = 2 $. It may be noted that $ h_{sN}\left\vert h_{sN-1}, \underline{\theta}\right. \thicksim\mathcal{N} \left(h_{sN}-\gamma\left(sN\right) \eta_{sN}, \gamma^{2}\left(sN\right) \right) $. Alternatively, a forecast of $ h_{sN+1} $ and a backward prediction of $ h_{0} $ can be utilized by applying the formula (4.8) for $ t = 1, ..., sN+1 $. In this scenario, $ h_{sN+1} $ is forecast based on a two-step-ahead forecast, $ \widehat{h}_{sN-1}(2) $, computed at the origin $ sN-1 $ using:
The backward forecast of $ h_{0} $ is derived using a two-step-ahead backward forecast based on the backward periodic autoregression (as discussed in [36]). After determining the conditional posterior $ g(h_{t}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) $, up to a scale factor, indirect sampling algorithms can be employed to draw the volatility $ h_{t} $. Research from [34] utilized the rejection Metropolis-Hastings algorithm, while [18,35,39] suggested the Griddy-Gibbs technique, which involves:
$ \textbf{a.} $ Selecting a grid of $ l $ points $ \left(h_{t, k}, 1\leq k\leq l\right) $, monotonically ordered over a specified interval $ [h_{t, 1}, h_{t, l}] $ of $ h_{t} $, and then evaluating the conditional posterior $ g(h_{t}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) $ via (4.8) at each of these points, yielding $ g_{t, k} = g(h_{t, k}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) $, $ k = 1, \dots, l $.
$ \textbf{b.} $ Constructing the discrete distribution $ P(.) $ from the values $ g_{t, 1}, g_{t, 2}, \dots, g_{t, l} $ defined at $ h_{t, k} $, $ k = 1, \dots, l $, where $ P(h_{t, k}) = g_{t, k}\left/ \sum\limits_{k = 1}^{l}g_{t, k}\right. $. This provides a discrete approximation from which the inverse cumulative distribution of $ g(h_{t}\left\vert \underline{\theta}, \underline{X}, \underline{h}_{-\left\{ t\right\} }\right.) $ can be evaluated.
$ \textbf{c.} $ Generating a number from the uniform distribution on $ (0, 1) $ and transforming it through the inverse of the discrete distribution, $ P(.) $, obtained in step b, to obtain a random draw for $ h_{t} $.
It is noteworthy that the choice of the grid, $ [h_{t, 1}, h_{t, l}] $, significantly impacts the efficiency of the Griddy algorithm. Following a similar strategy to that in [18], the range of $ h_{t} $ at the $ m- $th Gibbs iteration is set to $ [\overline{h}_{t}^{l}, \overline{h}_{t}^{L}] $, where $ \overline{h}_{t}^{l} = \frac{3}{5}\left(h_{t}^{\left(0\right) }\vee h_{t}^{\left(m-1\right) }\right), $ $ \overline{h}_{t}^{L} = \frac{7} {5}\left(h_{t}^{\left(0\right) }\vee h_{t}^{\left(m-1\right) }\right), $ and $ h_{t}^{\left(m-1\right) } $ and $ h_{t}^{\left(0\right) } $ are, respectively, the estimates of $ h_{t} $ for the $ (m-1)- $th iteration and the initial value.
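The three steps admit a compact implementation. The following is a minimal sketch, assuming a user-supplied function log_post that returns the log of the conditional posterior (4.8) up to an additive constant (the helper names are hypothetical); the commented grid bounds follow the iteration-dependent range above.

```python
def griddy_gibbs_draw(log_post, lo, hi, l=50):
    """One Griddy-Gibbs draw of h_t following steps a-c: evaluate the
    (unnormalized) conditional posterior on a grid, form the discrete
    approximation, and invert a uniform draw through its CDF."""
    grid = np.linspace(lo, hi, l)                  # step a: grid over [lo, hi]
    logd = np.array([log_post(h) for h in grid])
    dens = np.exp(logd - logd.max())               # numerically stabilized
    probs = dens / dens.sum()                      # step b: P(h_{t,k})
    u = rng.uniform()                              # step c: uniform(0, 1) draw
    return grid[np.searchsorted(np.cumsum(probs), u)]

# At Gibbs iteration m, the bounds above would be taken as
# lo = (3/5) * max(h_t_init, h_t_prev) and hi = (7/5) * max(h_t_init, h_t_prev).
```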
4.2. MCMC algorithm
The following algorithm outlines the Gibbs sampler for sampling from the conditional posterior distribution $ g(\underline{h}, \underline{\theta}\left\vert \underline{X}\right.) $. For $ m = 0, 1, ..., L $, where $ \underline{h}^{(m)} = \left(h_{1}^{(m)}, ..., h_{sN}^{(m)}\right) ^{\prime}, $ $ \underline{\varphi}^{(m)} = \left(\alpha^{(m)}\left(1\right), \beta_{1}^{(m)}\left(1\right), \beta_{2}^{(m)}\left(1\right), ..., \alpha^{(m)}\left(s\right), \beta_{1}^{(m)}\left(s\right), \beta_{2}^{(m)}\left(s\right) \right) ^{\prime} $, and $ \underline{\xi}^{(m)} = \left(\left(\gamma^{2}\right) ^{(m)}\left(1\right), ..., \left(\gamma^{2}\right) ^{(m)}\left(s\right) \right) ^{\prime} $, the algorithm is as follows:
Algorithm a:
$ \textbf{S1.} $ Specify starting values $ \underline{h}^{(0)} $, $ \underline{\varphi}^{(0)} $ and $ \underline{\xi}^{(0)} $.
$ \textbf{S2.} $ Repeat for $ m = 0, 1, \dots, L-1 $,
– Draw $ \underline{\varphi}^{(m+1)} $ from $ g(\underline{\varphi}\left\vert \underline{X}, \underline{\xi}^{(m)}, \underline{h}^{(m)}\right.) $ using (4.2)–(4.4) with starting values $ \overline{\underline{\varphi}}_{0} = \underline{\varphi}^{\circ} $ and $ \overline{\Psi}_{0} = \Psi^{\circ} $.
– Draw $ \underline{\xi}^{(m+1)} $ from $ g(\underline{\gamma}^{2}\left\vert \underline{X}, \underline{\varphi}^{(m+1)}, \underline{h}^{(m)}\right.) $ using (4.5) and (4.6).
– Repeat for $ t = 1, \dots, sN $
* Griddy Gibbs:
· Select a grid of $ l $ points $ \left(h_{t, k}^{(m+1)}, k = 1, \dots, l\right) $, monotonically ordered.
· For $ k = 1, \dots, l $, calculate $ g_{t, k}^{(m+1)} = g(h_{t, k}^{\left(m+1\right) }\left\vert \underline{\theta}^{\left(m+1\right) }, \underline{X}, \underline{h}_{-\left\{ t\right\} }^{\left(m\right) }\right.) $ from (4.8)–(4.10).
· Define the inverse distribution, $ P\left(h_{t, k}^{(m+1)}\right) = g_{t, k}^{(m+1)}\left/ \sum\limits_{k = 1}^{l}g_{t, k}^{(m+1)}\right. $, $ k = 1, \dots, l. $
· Generate a number $ u $ from the uniform (0, 1) distribution.
· Transform $ u $ using the inverse distribution, $ P(.) $, to get $ h_{t}^{(m+1)} $, considered to be a draw from $ g(h_{t}\left\vert \underline{\theta}^{(m+1)}, \underline{X}, \underline{h}_{-\left\{ t\right\} }^{(m)}\right.) $.
$ \textbf{S3.} $ Return the values $ \underline{h}^{(m)} $, $ \underline{\varphi}^{(m)} $, and $ \underline{\xi}^{(m)} $, $ m = 1, \dots, L $.
Upon sampling from the posterior distribution $ g(\underline{h}, \underline{\theta}\left\vert \underline{X}\right.) $, statistical inference for the PTAR-SV model becomes straightforward. The Bayesian Griddy-Gibbs parameter estimate $ \widehat{\underline{\theta}}_{BGG} $ of $ \underline{\theta} $ is defined as the posterior mean $ \widetilde{\underline{\theta}} = E\left\{ \underline{\theta}\left\vert \underline{X}\right. \right\} $, which, according to the Markov chain ergodic theorem, can be reasonably approximated by $ \widehat{\underline{\theta}}_{BGG} = L^{-1}\sum\limits_{m_{0}\leq m\leq L+m_{0}}\underline{\theta}^{(m)}, $ where $ \underline{\theta}^{(m)} $ represents the $ m $-th draw from $ g(\underline{h}, \underline{\theta}\left\vert \underline{X}\right.) $, as provided by Algorithm a, $ m_{0} $ denotes the burn-in size (the initial draws discarded), and $ L $ is the number of retained draws. Smoothing and forecasting volatility are intrinsic outcomes of the Bayesian Griddy-Gibbs method. The smoothed value $ \widetilde{h}_{t} = E\left\{ h_{t}\left\vert \underline{X}\right. \right\} $, for $ t = 1, ..., sN $, is obtained during sampling from the distribution $ g(h_{t}\left\vert \underline{X}\right.) $, a marginal of the posterior distribution $ g(\underline{h}, \underline{\theta}\left\vert \underline{X}\right.) $. Thus, $ \widetilde{h}_{t} $ can be accurately approximated by $ L^{-1}\sum\limits_{m_{0}\leq m\leq L+m_{0}}h_{t}^{(m)}, $ where $ h_{t}^{(m)} $ represents the $ m $-th draw of $ h_{t} $ from $ g(h_{t}, \underline{\theta}\left\vert \underline{X}\right.) $. Forecasting future values, $ h_{sN+j}, $ $ j = 1, ..., k $, can be accomplished either by employing the volatility equation with the Bayesian parameter estimates or directly by sampling from the predictive distribution $ g(h_{sN+1}, h_{sN+2}, ..., h_{sN+k}\left\vert \underline{X}\right.) $ (for further discussion, the reader is referred to [18,34]).
4.3. MCMC diagnostics
It is crucial to assess the numerical properties of the proposed Bayes Griddy-Gibbs (BGG) method, in particular because the volatilities are sampled element by element. Despite its simplicity in implementation, it is well established that the single-move approach, as employed in the BGG method, often leads to highly correlated posterior draws, a correlation that can result in slow mixing and convergence properties. Among various MCMC diagnostic measures, we focus here on two key metrics.
$ \textbf{1.} $ Relative Numerical Inefficiency (RNI): The RNI provides insight into the inefficiency caused by the serial correlation of the BGG parameter draws. This can be calculated as:
Here, 500 denotes the bandwidth, $ D(.) $ represents the Parzen kernel, and $ \widehat{\rho}_{1, j} $ is the sample autocorrelation at lag $ j $ of the BGG parameter draws. The RNI value indicates the extent of inefficiency attributed to serial correlation.
$ \textbf{2.} $ Numerical Standard Error (NSE): The NSE quantifies the uncertainty associated with the MCMC estimator and is calculated as the square root of the estimated asymptotic variance of the estimator. Mathematically, this can be expressed as:
Here, $ \widehat{\rho}_{2, j} $ represents the sample autocovariance at lag $ j $ of the BGG parameter draws, and $ L $ is the total number of draws.
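As an illustration, the following sketch computes both diagnostics from the chain of draws for a single parameter, under the standard reading $ RNI = 1+2\sum_{j}D\left(j/500\right) \widehat{\rho}_{1, j} $ and $ NSE = \sqrt{L^{-1}\left(\widehat{\rho}_{2, 0}+2\sum_{j}D\left(j/500\right) \widehat{\rho}_{2, j}\right) } $; this reconstruction of the two displayed formulas is an assumption consistent with the definitions above.

```python
def parzen(z):
    """Parzen lag window D(.)."""
    z = abs(z)
    if z <= 0.5:
        return 1 - 6 * z ** 2 + 6 * z ** 3
    if z <= 1.0:
        return 2 * (1 - z) ** 3
    return 0.0

def rni_nse(draws, bandwidth=500):
    """RNI and NSE for one parameter's chain of BGG draws, from
    kernel-weighted sample autocorrelations/autocovariances."""
    d = np.asarray(draws, dtype=float)
    L = len(d)
    d = d - d.mean()
    gamma0 = d @ d / L                       # sample variance of the draws
    acov = 0.0
    for j in range(1, min(bandwidth, L - 1) + 1):
        gj = d[:-j] @ d[j:] / L              # sample autocovariance at lag j
        acov += parzen(j / bandwidth) * gj
    rni = 1 + 2 * acov / gamma0              # inefficiency factor
    nse = np.sqrt((gamma0 + 2 * acov) / L)   # asymptotic std. error of the mean
    return rni, nse
```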
These diagnostic measures, particularly the RNI and NSE, offer valuable insights into the efficiency and reliability of the MCMC sampling process, aiding in the assessment of convergence and the accuracy of parameter estimates.
Remark 2. The deviance information criterion (DIC) is a crucial tool in the selection of the period, $ s $, in PTAR-SV modeling. Unlike traditional criteria such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), the DIC strikes a balance between model adequacy and complexity. Its computation involves assessing the conditional likelihood of the PTAR-SV model and the posterior mean of its parameters, which are derived from MCMC draws. By comparing DIC values across different period lengths, researchers can identify the most suitable model. Despite the inherent challenge in estimating the standard error associated with the DIC due to its randomness, researchers can obtain a rough estimate of its variability by replicating calculations. It is important to note that the choice of DIC variant (e.g., observed or conditional) influences its interpretation, with the conditional version being particularly relevant to latent variable models such as PTAR-SV (for further details, the reader is referred to [18]).
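For reference, with the deviance $ D\left(\underline{\theta}\right) = -2\log f\left(\underline{X}\left\vert \underline{\theta}\right. \right) $ (taken conditional on the sampled volatilities in the latent-variable variant), the usual construction from the MCMC draws $ \underline{\theta}^{(m)} $ is

$$ \overline{D} = \frac{1}{L}\sum\limits_{m}D\left(\underline{\theta}^{(m)}\right),\qquad p_{D} = \overline{D}-D\left(\widetilde{\underline{\theta}}\right),\qquad DIC = \overline{D}+p_{D}, $$

where $ \widetilde{\underline{\theta}} $ is the posterior mean; smaller DIC values indicate a better trade-off between fit and complexity.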
Remark 3. In this paper, we delve into a class of nonlinear models designed for the analysis of periodic time series, referred to as PTAR-SV models. These models not only capture the feature of asymmetric volatility, which is already well-known in the deterministic volatility framework, but also uncover the periodicity hidden in the autocovariance structure, characteristics frequently observed in financial and economic time series. It is worth noting that another class of models, namely, Markov-switching TSV (MS-TSV) models, has recently been explored by Ghezal et al. [37]. These models aim to tackle the asymmetry and leverage effects observed in the volatility of financial time series. They extend classical TSV models by representing the parameters governing log-volatility as functions of a homogeneous Markov chain. Both our paper and the work by Ghezal et al. [37] concentrate on establishing various probabilistic properties, including strict stationarity, causality, and ergodicity. Additionally, both delve into computing higher-order moments and deriving the autocovariance function of squared processes. However, our paper adopts a periodic perspective in these analyses because the PTAR-SV models demonstrate global non-stationarity whilst exhibiting stationarity within each period, unlike the MS-TSV models. Moreover, while Ghezal et al. [37] primarily concentrate on the autocovariance function of the squared process and second-order stationary solutions, our paper expands the analysis to encompass the autocovariance function of the powers of the squared process. Additionally, we explore concepts such as periodic geometric ergodicity and strong mixing. Finally, both papers propose the QML method for parameter estimation. However, our paper introduces an additional approach, the EM algorithm with particle filters and smoothers. Moreover, we extend our analysis to include a Bayesian MCMC method based on Griddy-Gibbs sampling.
5. Simulation
We conducted a simulation study to evaluate the performance of the QML and BGG methods for parameter estimation in the PTAR-SV$ _{s} $ model with $ s\in\left\{ 2, 3\right\} $. A total of 500 datasets of varying sizes were generated; specifically, we considered sample sizes $ sN $ from the set $ \left\{ 750, 1500, 3000\right\} $. The parameter values were chosen to satisfy the periodic stationarity condition $ \prod\limits_{j = 1}^{s}\left(\delta\left\vert \beta_{1}\left(j\right) \right\vert +\left(1-\delta\right) \left\vert \beta_{2}\left(j\right) \right\vert \right) < 1 $. For each generated dataset, we estimated the parameter vector of interest, $ \underline{\theta} $, using the QMLE (resp., the BGG estimator), denoted $ \widehat{\underline{\theta}}_{QMLE} $ (resp., $ \widehat{\underline{\theta}}_{BGGE} $). The numerical optimization was carried out via the "fminsearch.m" minimizer function in MATLAB 8. The root mean square errors (RMSE) of the estimated parameters are presented in parentheses in the tables below, together with the actual values (TV) of the parameters for each of the considered data-generating processes.
This study primarily focuses on analyzing RMSEs, providing initial insights into the finite-sample properties of the QMLE and BGGE within the framework of the PTAR-SV$ _{s} $ model. The results suggest that both methods provide effective parameter estimates. Upon examining Table 2, the behavior of the QMLE and BGGE for the PTAR-SV$ _{2} $ model is satisfactory and in line with strong consistency, and the corresponding RMSEs decrease markedly as the sample size increases. Turning to the outcomes presented in Table 3 for the PTAR-SV$ _{3} $ model, this pattern is confirmed. Importantly, the estimation procedure yields favorable results even with a relatively small sample size. The comparison between the two methods demonstrates that both perform adequately with regard to parameter estimation. However, the BGGE consistently outperforms the QMLE, exhibiting significantly lower RMSEs across all instances. This finding is consistent with previous results obtained in the symmetric case [18].
6. Real application
This section focuses on fitting a Gaussian PTAR-SV model to the daily time series $ (X_{t})_{t\geq1} $ of the euro/Algerian dinar (EUR/DZD) exchange rate. Leveraging the QMLE method for its favorable finite-sample properties, we began by filtering out all non-trading days, including holidays and weekends. The observations span from January 3, 2000, to September 29, 2011. The corresponding log-return series is calculated as $ \varpi_{t} = 10^{2}\left(\log X_{t}-\log X_{t-1}\right) $. Plots of the prices $ \left(X_{t}\right) $, the daily returns $ \left(\varpi_{t}\right) $, the squared returns $ \left(\varpi_{t}^{2}\right) $, the absolute returns $ \left(\left\vert \varpi_{t}\right\vert \right) $, and the $ \log $-absolute returns are illustrated in Figure 2.
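For reproducibility, the return construction is a one-liner; in the sketch below the file and column names are hypothetical placeholders for the EUR/DZD data.

```python
import numpy as np
import pandas as pd

# Load daily EUR/DZD closing rates (file/column names are illustrative).
prices = pd.read_csv("eur_dzd.csv", index_col=0, parse_dates=True)["close"]
prices = prices[prices.index.dayofweek < 5].dropna()  # drop weekends; holidays if NaN
varpi = 100 * np.log(prices).diff().dropna()          # log-returns in percent
print(varpi.describe())                               # cf. Table 4
```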
Additionally, upon reviewing the sample autocorrelation functions depicted in Figure 3, it is evident that the series $ \left(\varpi_{t}\right), \ \left(\varpi_{t}^{2}\right), $ $ \left(\left\vert \varpi_{t}\right\vert \right) $, and $ \left(\log\left\vert \varpi_{t}\right\vert \right) $ exhibit distinct characteristics. Specifically, the series $ \left(\varpi_{t}\right) $ displays a Taylor effect, as indicated by $ \widehat{\rho}_{\varpi_{t}^{2}}(h) < \widehat{\rho}_{\left\vert \varpi_{t}\right\vert }(h) $ for some $ h > 0 $. Consequently, the modeling of the series $ \left(\varpi_{t}\right) $ using standard SV models is rejected in favor of certain asymmetric models. Table 4 reports several basic descriptive statistics for the given series.
Table 4 reports descriptive statistics for the EUR/DZD returns over the entire study period, covering the returns, absolute returns, squared returns, and log-absolute returns. The lowest return observed is $ -23.300 $, while the highest is $ 49.700 $. The data exhibit positive skewness and high kurtosis, with kurtosis values well above $ 3 $. In this study, we propose a $ 5 $-periodic PTAR-SV model to capture intra-week effects in the daily exchange rate. The model allows parameters to vary with the day of the week, where $ v = 1 $ corresponds to Monday, $ v = 2 $ to Tuesday, and so on. Table 5 reports the QML parameter estimates for the $ 5 $-PTAR-SV model alongside certain other specific models. It is evident from these results that the periodic and standard estimated models are periodically stationary and stationary, respectively. Notably, the persistence estimates of the PTAR-SV model and of the PAR-SV model fitted by Aknouche [18] are markedly smaller than those obtained from the standard TAR-SV model. Additionally, the empirical coverages of the PTAR-SV-based prediction intervals are closer to the nominal coverages than those of the PAR-SV-based prediction intervals, and a comparison of the fitted TAR-SV models shows a slight advantage for the periodic modeling. These findings suggest that the PTAR-SV model, fitted to the time series of daily EUR/DZD log-returns, demonstrates greater accuracy and improved forecasting performance than the standard AR-SV and PAR-SV models.
7. Conclusions
In this paper, we have conducted a study on a specific category of nonlinear models tailored to capture the characteristics of periodic asymmetric time series, known as PTAR-SV models. These models exhibit the ability to capture volatility clustering and reveal the inherent periodicity present in the autocovariance structure, both of which are common stylized facts in financial and economic time series. Additionally, PTAR-SV models are adept at encapsulating various stylized facts, notably leverage effects that denote asymmetry within the volatility process. Importantly, these models have demonstrated enhanced accuracy and superior forecasting performance compared to PTGARCH models.
This paper explores several aspects, including periodic stationarity conditions, moment calculations, and analysis of the autocovariance function of squared process powers. Additionally, we propose the QML method for parameter estimation, employing the EM algorithm with particle filters and smoothers. Furthermore, we introduce the BGGE as an alternative approach to parameter estimation.
Building upon the findings of Kim et al. [42] and Chib et al. [43], our research underscores the significance of employing a multi-move approach in the MCMC method for model estimation. These seminal works demonstrate that traditional single-move estimation methods may be inefficient, particularly when dealing with complex models and heavy-tailed distributions. By incorporating insights from these studies, our paper advocates for the adoption of a multi-move approach to enhance estimation efficiency and accuracy in nonlinear modeling. Moving forward, our research aims to expand in multiple directions. One proposed extension involves exploring heavier-tailed distributions, which can be easily accommodated using the $ t $-distribution. Additionally, we plan to investigate a periodic multivariate version of the PTAR-SV model, where multiple variables are considered simultaneously. This multivariate approach promises to provide deeper insights into the interconnected dynamics of multiple time series variables and their periodic behaviors.
Appendix
Proof of Theorem 1. The existence of a causal, strictly periodically stationary solution for Eq (1.4) is directly linked to the existence of a causal, strictly periodically stationary solution for the PTAR model proposed by Bentarzi and Djeddou [38]. This PTAR model is described by the equation:
where $ \prod\limits_{j = 1}^{s}\left(\delta\left\vert \beta_{1}\left(j\right) \right\vert +\left(1-\delta\right) \left\vert \beta_{2}\left(j\right) \right\vert \right) < 1 $. When this condition is satisfied, the log-volatility $ h_{t} $ can be represented causally as:
Proof of Theorem 2. The periodic geometric ergodicity of $ \left(\underline{X}_{t}, t\in\mathbb{Z}\right) $ is a consequence of the geometric ergodicity of the vector $ \left(\underline{h}_{t}, t\in\mathbb{Z}\right) $, as demonstrated by Meyn and Tweedie's [27] results.
Proof of Theorem 3. For every $ t\in\mathbb{Z} $ and $ 1\leq v\leq s $, the following holds:
Consequently, a sufficient condition for the existence of $ E\left\{ X_{st+v}^{2m}\right\} $ is:
Proof of Theorem 4. For every $ t\in\mathbb{Z} $, $ n > 0 $, and $ 1\leq v\leq s $, the following holds:
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Conflict of interest
The authors declare no conflict of interest.