Research article

On some stochastic differential equations with jumps subject to small positives coefficients

  • Received: 17 April 2019 Accepted: 28 August 2019 Published: 11 September 2019
  • MSC : 35B27, 35K57, 60F10, 60H15

  • We provide a large deviation principle for stochastic diffusion processes with jumps, depending on a viscosity coefficient ( \varepsilon ) and a small scaling parameter ( \delta ) going to zero at the same rate. To do so we establish estimates on the moment Lyapunov function along trajectories.

    Citation: Clement Manga, Alioune Coulibaly, Alassane Diedhiou. On some stochastic differential equations with jumps subject to small positives coefficients[J]. AIMS Mathematics, 2019, 4(5): 1369-1385. doi: 10.3934/math.2019.5.1369



    Specifically, let us consider \varepsilon (viscosity coefficient) and \delta_{\varepsilon} (scaling parameter, written as a function of \varepsilon ), both non-negative, such that

    \mathbf{H.1}\qquad \lim\limits_{\varepsilon\to 0}\dfrac{\delta_{\varepsilon}}{\varepsilon}: = \gamma\ \ \left(\gamma\in\mathbb{R}_{+}\right).

    Our concern is the following d -dimensional stochastic differential equation (SDE):

    \begin{equation} X_{t}^{x, \varepsilon, \delta_{\varepsilon}}-x = \sqrt{\varepsilon}\int_{0}^{t}\sigma\left(\frac{X_{s}^{x, \varepsilon, \delta_{\varepsilon}}}{\delta_{\varepsilon}}\right)dW_{s}+\frac{\varepsilon}{\delta_{\varepsilon}}\int_{0}^{t}b\left(\frac{X_{s}^{x, \varepsilon, \delta_{\varepsilon}}}{\delta_{\varepsilon}}\right)ds+\int_{0}^{t}c\left(\frac{X_{s}^{x, \varepsilon, \delta_{\varepsilon}}}{\delta_{\varepsilon}}\right)ds+L_{t}^{\varepsilon, \delta_{\varepsilon}}, \quad x\in\mathbb{R}^{d}, \end{equation} (1.1)

    where \{W_{t}:\ t\geqslant 0\} is a d -dimensional standard Brownian motion and L^{\varepsilon, \delta_{\varepsilon}}: = \{L_{t}^{\varepsilon, \delta_{\varepsilon}}:\ t\geqslant 0\} is a Poisson point process with continuous compensator, independent of W , both defined on a given filtered probability space (\Omega, \mathcal{F}, \mathbb{P}, \mathbb{F}) with \mathbb{F}: = \{\mathcal{F}_{t}:\ t\geqslant 0\} being the \mathbb{P} -completion of the natural filtration. More precisely, we assume that L^{\varepsilon, \delta_{\varepsilon}} takes the form:

    L_{t}^{\varepsilon, \delta_{\varepsilon}}: = \int_{0}^{t}\int_{\mathbb{R}^{d}}k\left(\frac{X_{s-}^{x, \varepsilon, \delta_{\varepsilon}}}{\delta_{\varepsilon}}, y\right)\left(\varepsilon N^{\varepsilon^{-1}}(dsdy)-\nu(dy)ds\right), \quad t\geqslant 0 \quad (1.2)

    where k is a given predictable function (see [10], Chap. Ⅳ, Sect. 9), and N is a Poisson random counting measure on \mathbb{R}^{d} with Lévy measure (or intensity measure) \nu . The coefficients b, c, \sigma are subject to suitable conditions.
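As an illustration of the scaling in (1.1) (our own sketch, not part of the paper's arguments; the function `simulate_sde` and its scalar jump model with a single intensity `jump_rate` are assumptions standing in for the y -integral against \nu ), a one-dimensional Euler-Maruyama discretization makes the respective orders \sqrt{\varepsilon} , \varepsilon/\delta_{\varepsilon} , 1 and \varepsilon of the four terms explicit:

```python
import numpy as np

def simulate_sde(x0, eps, delta, T, n_steps, sigma, b, c, k, jump_rate, rng):
    """Euler-Maruyama sketch of a scalar analogue of (1.1).

    The diffusion enters at order sqrt(eps), the drift b at order
    eps/delta, the drift c at order 1; the jump term mimics
    k(X/delta)(eps * N^{1/eps}(ds) - nu * ds) with total intensity jump_rate.
    """
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        y_fast = x / delta                            # fast variable X_s / delta_eps
        dw = rng.normal(0.0, np.sqrt(dt))             # Brownian increment
        n_jumps = rng.poisson(jump_rate * dt / eps)   # N^{1/eps} jumps in [t, t+dt)
        jump = eps * k(y_fast) * n_jumps - k(y_fast) * jump_rate * dt  # compensated
        x = (x + np.sqrt(eps) * sigma(y_fast) * dw
               + (eps / delta) * b(y_fast) * dt
               + c(y_fast) * dt
               + jump)
    return x
```

Note that the jump increment has mean zero, mirroring the compensation by \nu(dy)ds in (1.2).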

    The family of equations (1.1), subject to the combined effect of homogenization and large deviations, is a classical problem which goes back to Paolo Baldi [1] at the end of the 20th century. Such a problem, when L_{t}^{\varepsilon, \delta_{\varepsilon}}\equiv 0 , has been extensively investigated by Freidlin and Sowers [9]. They exhibited three classical regimes depending on the relative rate at which the small viscosity coefficient \varepsilon and the scaling parameter \delta_{\varepsilon} tend to zero, and they provided effective rate functions associated to a large deviation principle (LDP) for SDEs, with direct applications to wavefront propagation. Recently, we have studied problem (1.1) in the regime where homogenization dominates (see [4]) and, at the same time, provided lower and upper LDP bounds for SDEs which allow jump processes. The purpose of the present article is to carry out the program outlined in [4] in the following case: the small coefficients \varepsilon and \delta_{\varepsilon} go to zero at the same rate, which is technically more difficult. Indeed, in this case the logarithmic moment generating function of X_{T}^{x, \varepsilon, \delta_{\varepsilon}} , denoted g^{\varepsilon}_{T, x} below, may not have an explicit limit. Following Baxendale and Stroock [2], we overcome this difficulty by working with the superior and inferior limits of g^{\varepsilon}_{T, x} .

    Definition 1. Let X^{x, \varepsilon, \delta} be an \mathbb{R}^{d} -valued random variable and let \mathbb{P}^{\varepsilon, \delta} denote its distribution on the Borel subsets of \mathbb{R}^{d} , that is, \mathbb{P}^{\varepsilon, \delta}(A) = \mathbb{P}(X^{x, \varepsilon, \delta}\in A) . The family \{X^{x, \varepsilon, \delta};\ \varepsilon, \delta > 0\} satisfies an LDP if there exists a lower semi-continuous function I:\mathbb{R}^{d}\to[0, +\infty] such that

    ● for each open set A\subset\mathbb{R}^{d} , \liminf\limits_{\varepsilon\to 0}\varepsilon\log\mathbb{P}\left(X^{x, \varepsilon, \delta_{\varepsilon}}\in A\right)\geqslant -\inf\limits_{x\in A}I(x) ,

    ● for each closed set B\subset\mathbb{R}^{d} , \limsup\limits_{\varepsilon\to 0}\varepsilon\log\mathbb{P}\left(X^{x, \varepsilon, \delta_{\varepsilon}}\in B\right)\leqslant -\inf\limits_{x\in B}I(x) .

    I is called the rate function for the LDP. A rate function is good if for each a\in\mathbb{R}_{+} , the level set \{x:\ I(x)\leqslant a\} is compact.
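For intuition, in the simplest Gaussian case X^{\varepsilon} = \sqrt{\varepsilon}\,Z with Z standard normal, the LDP holds with the good rate function I(x) = x^{2}/2 , and the scaled log-probabilities in the definition can be evaluated in closed form (a toy check of ours, not from the paper):

```python
import math

def ldp_scaled_log_prob(a, eps):
    """eps * log P(sqrt(eps)*Z >= a) for Z ~ N(0,1); by the LDP this
    tends to -inf_{x >= a} I(x) = -a^2/2 as eps -> 0 (for a > 0)."""
    tail = 0.5 * math.erfc(a / math.sqrt(2.0 * eps))  # exact Gaussian tail
    return eps * math.log(tail)
```

For a = 1 and \varepsilon = 10^{-3} this returns approximately -0.504 , close to the predicted -I(1) = -1/2 .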

    Our program is as follows. In Section 2 we set up some notation, make our hypotheses precise and state the main result. In Section 3 we give an overview of some existing characterizations of large deviations, compare them with ours, and outline the proofs.

    Finally, further work related to this topic, beyond the references mentioned earlier, can be found in [12,15] and the references therein.

    Denote expectation with respect to \mathbb{P} by \mathbb{E} and the gradient operator by \nabla . We have already defined \left\langle\cdot, \cdot\right\rangle as the standard Euclidean inner product on \mathbb{R}^{d} , and \left\|\cdot\right\| as the associated norm. Let C_{p}(\mathbb{R}^{d}, \mathbb{R}^{d}) be the collection of continuous mappings from \mathbb{R}^{d} into \mathbb{R}^{d} which are periodic of period 1 in each coordinate of the argument, and let \left\|\cdot\right\|_{C_{p}(\mathbb{R}^{d}, \mathbb{R}^{d})} be the associated sup norm. Let \mathbb{T}^{d} be the d -dimensional torus of size 1, and let \left\|\cdot\right\|_{C(\mathbb{T}^{d}, \mathbb{R}^{d})} be the standard sup norm on C(\mathbb{T}^{d}, \mathbb{R}^{d}) , the space of continuous mappings from \mathbb{T}^{d} into \mathbb{R}^{d} . Also we define \mathcal{P}(\mathbb{T}^{d}) as the collection of all probability measures on \mathbb{T}^{d} .

    The \{\sigma_{i}:\ 1\leqslant i\leqslant d\} in (1.1) are assumed to be in C_{p}(\mathbb{R}^{d}, \mathbb{R}^{d}) , and we also assume that

    \begin{equation} \kappa: = \inf\left\lbrace \sum\limits_{i = 1}^{d}\left\langle\theta, \sigma_{i}(x)\right\rangle^{2}:\ x\in\mathbb{R}^{d}, \theta\in\mathbb{R}^{d}, \left\|\theta\right\| = 1\right\rbrace > 0. \end{equation} (2.1)

    We assume that b, c in (1.1) are in C_{p}(\mathbb{R}^{d}, \mathbb{R}^{d}) .

    We now turn our attention to the Poisson part. We first consider a Poisson random measure N^{\varepsilon^{-1}}(\cdot, \cdot) on [0, T)\times\mathbb{R}^{d} defined on the probability space (\Omega, \mathcal{F}, \mathbb{P}) , with Lévy measure \varepsilon^{-1}\nu such that the standard integrability condition holds:

    \begin{equation} \int_{\mathbb{R}^{d}\setminus\{0\}}\left(1\wedge|y|^{2}\right)\nu(dy) < +\infty. \end{equation} (2.2)
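As an example of (2.2) (our illustration, with an \alpha -stable-type measure not taken from the paper): \nu(dy) = y^{-1-\alpha}dy on (0, \infty) is an infinite measure for 0 < \alpha < 2 , yet the integral of 1\wedge y^{2} against it is finite and equals 1/(2-\alpha)+1/\alpha , as a quick numerical check confirms:

```python
def levy_integrability(alpha, n=100000):
    """Check (2.2) for nu(dy) = y^{-1-alpha} dy on (0, inf), 0 < alpha < 2:
    the integral of (1 ^ y^2) nu(dy) equals 1/(2-alpha) + 1/alpha."""
    # midpoint rule on (0, 1] for y^2 * y^{-1-alpha} = y^{1-alpha}
    h = 1.0 / n
    small = sum(((i + 0.5) * h) ** (1.0 - alpha) for i in range(n)) * h
    # closed form for the tail integral of y^{-1-alpha} over (1, inf)
    tail = 1.0 / alpha
    return small + tail
```
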

    The compensator of \varepsilon N^{\varepsilon^{-1}} is thus the deterministic measure \varepsilon\hat{N}^{\varepsilon^{-1}}(dtdy): = dt\,\nu(dy) on [0, T)\times\mathbb{R}^{d} . In this paper we shall be interested in Poisson point processes of class (QL), namely point processes whose counting measures have continuous compensators (see Ikeda and Watanabe [10]). More precisely, in light of the representation theorem for Poisson point processes ([10], Chap. Ⅱ, Theorem 7.4), we shall assume that L^{\varepsilon, \delta_{\varepsilon}} is a pure jump process of the following form:

    L_{t}^{\varepsilon, \delta_{\varepsilon}}: = \int_{0}^{t}\int_{\mathbb{R}^{d}\setminus\{0\}}k\left(\frac{.}{\delta_{\varepsilon}}, y\right)(s)\left(\varepsilon N^{\varepsilon^{-1}}(dsdy)-\nu(dy)ds\right), \quad t\geqslant 0,

    where k is in C_{p}(\mathbb{R}^{d}\times\mathbb{R}^{d}, \mathbb{R}^{d}) with respect to its first variable and integrable with respect to dtdy , so that the counting measure of L^{\varepsilon, \delta_{\varepsilon}} , denoted by N_{L^{\varepsilon, \delta_{\varepsilon}}}(dtdy) , takes the form:

    \begin{equation} N_{L^{\varepsilon, \delta_{\varepsilon}}}\left(\left( 0, t\right]\times A \right) : = \int_{0}^{t}\int_{\mathbb{R}^{d}}\boldsymbol{1}_{A}\left( k\left(\frac{.}{\delta_{\varepsilon}}, y\right) (s)\right) \varepsilon N^{\varepsilon^{-1}}(dsdy) = \sum\limits_{0\leqslant s\leqslant t}\boldsymbol{1}_{\left\lbrace\Delta L_{s}^{\varepsilon, \delta_{\varepsilon}}\in A \right\rbrace }, \end{equation} (2.3)

    and its compensator is therefore \hat{N}_{{L}^{\varepsilon, \delta_{\varepsilon}}}(dtdy): = k\left(\frac{.}{\delta_{\varepsilon}}, y\right)(s)\mathbb{E}\left[N_{{L}^{\varepsilon, \delta_{\varepsilon}}}(dtdy)\right] = k\left(\frac{.}{\delta_{\varepsilon}}, y\right)(s)\nu(dy)dt , hence continuous; i.e., L^{\varepsilon, \delta_{\varepsilon}} is indeed of class (QL).

    The Markov processes X^{\varepsilon, \delta_{\varepsilon}} that we consider have both a jump component and a diffusion component. Their generator, acting on twice continuously differentiable functions with compact support, is given by

    \begin{equation} \begin{split} \mathcal{L}_{\varepsilon, \delta_{\varepsilon}}\phi\left(x\right): = &\frac{\varepsilon}{2}\sum\limits_{i, j = 1}^{d}a_{ij}\left(\frac{x}{\delta_{\varepsilon}}\right)\frac{\partial^{2}\phi(x)}{\partial x_{i}\partial x_{j}} +\frac{\varepsilon}{\delta_{\varepsilon}}\sum\limits_{i = 1}^{d}b_{i}\left(\frac{x}{\delta_{\varepsilon}}\right)\frac{\partial\phi(x)}{\partial x_{i}}+\sum\limits_{i = 1}^{d}c_{i}\left(\frac{x}{\delta_{\varepsilon}}\right) \frac{\partial\phi(x)}{\partial x_{i}}\\ &+\frac{1}{\varepsilon}\int_{\mathbb{R}^{d}}\left[\phi\left(x+\varepsilon k\left(\frac{x}{\delta_{\varepsilon}}, y\right) \right)-\phi(x)-\varepsilon\sum\limits_{i = 1}^{d} k_{i}\left(\frac{x}{\delta_{\varepsilon}}, y\right)\frac{\partial\phi(x)}{\partial x_{i}} \right]\nu(dy), \quad x\in\mathbb{R}^{d}, \end{split} \end{equation} (2.4)

    where the matrix a: = \left(a_{ij}\right) is factored as a: = \sigma\sigma^{*} , and ^{*} denotes the transpose. We set

    \begin{equation} \mathcal{L}_{\gamma}: = \frac{1}{2}\sum\limits_{i, j = 1}^{d}a_{ij}\left(x\right)\frac{\partial^{2}}{\partial x_{i}\partial x_{j}} +\sum\limits_{i = 1}^{d}b_{i}\left(x\right)\frac{\partial}{\partial x_{i}}+ \gamma \sum\limits_{i = 1}^{d}c_{i}\left(x\right)\frac{\partial}{\partial x_{i}}, \quad x\in\mathbb{R}^{d}, \end{equation} (2.5)

    and require the following:

    \mathbf{H}.\mathbf{2}\left\{ \begin{aligned} & \text{(Global Lipschitz) there exists a constant } C_{1} \text{ such that for any } \zeta : = \sigma_{i}, b, c,\ 1\leqslant i\leqslant d, \text{ and } k: \\ & \text{i)}\qquad \left\| \zeta (x')-\zeta (x) \right\|+\int_{\mathbb{R}^{d}}\left\| k(x', y)-k(x, y) \right\|\nu (dy)\leqslant C_{1}\left\| x'-x \right\|, \quad \forall x', x\in \mathbb{R}^{d}. \\ & \text{(Growth) there exists a constant } C_{2} \text{ such that for any } \zeta : = \sigma_{i}, b, c,\ 1\leqslant i\leqslant d, \text{ and } k: \\ & \text{ii)}\qquad \left\| \zeta (x) \right\|^{2}+\int_{\mathbb{R}^{d}}\left\| k(x, y) \right\|^{2}\nu (dy)\leqslant C_{2}\left( 1+\left\| x \right\|^{2} \right), \quad \forall x\in \mathbb{R}^{d}. \end{aligned} \right.

    By these requirements there exists an \mathcal{L}_{\gamma} -diffusion with jumps on \mathbb{R}^{d} , and by the periodicity assumption on the coefficients such a process induces a process \overline{X} , having both a diffusion component and a jump component, on the d -dimensional torus \mathbb{T}^{d} ; moreover, the diffusion part of the \mathcal{L}_{\gamma} -process is ergodic. We denote by m_{\gamma} its unique invariant measure. To move the SDE (1.1) to the torus \mathbb{T}^{d} , we define the pull-back \overline{X}^{\varepsilon, \delta_{\varepsilon}}_{t}: = \frac{1}{\delta_{\varepsilon}}X^{x, \varepsilon, \delta_{\varepsilon}}_{\left(\delta_{\varepsilon}/\sqrt{\varepsilon}\right)^{2}t} , which satisfies the SDE:

    \begin{equation} \begin{aligned} \overline{X}_{t}^{\varepsilon, \delta_{\varepsilon}}-\frac{x}{\delta_{\varepsilon}} = \int_{0}^{t}\sigma\left(\overline{X}^{\varepsilon, \delta_{\varepsilon}}_{s}\right)d\overline{W}_{s}^{\varepsilon}+\int_{0}^{t}b\left(\overline{X}^{\varepsilon, \delta_{\varepsilon}}_{s}\right)ds+\frac{\delta_{\varepsilon}}{\varepsilon}\int_{0}^{t}c\left(\overline{X}^{\varepsilon, \delta_{\varepsilon}}_{s}\right)ds+\overline{L}_{t}^{\varepsilon, \delta_{\varepsilon}}, \end{aligned} \end{equation} (2.6)

    where

    \begin{equation*} \left\lbrace \begin{aligned} &\overline{W}^{\varepsilon}_{t}: = \frac{\sqrt{\varepsilon}}{\delta_{\varepsilon}}W_{\left(\delta_{\varepsilon}/\sqrt{\varepsilon}\right)^{2}t} ~{\rm is~ a~ Brownian ~motion, }\\ &\\ &\overline{L}^{\varepsilon, \delta_{\varepsilon}}_{t}: = \frac{\varepsilon}{\delta_{\varepsilon}}\int_{0}^{t}\int_{\mathbb{R}^{d}}k\left(\overline{X}^{\varepsilon, \delta_{\varepsilon}}_{s-}, y\right) \left( N^{\left(\delta_{\varepsilon}/\varepsilon\right)^{2}}\left(dsdy\right)-\left( \frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \nu(dy)ds\right) , \ t\geqslant 0 . \end{aligned}\right. \end{equation*}
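The first identity above rests on the Brownian scaling W_{ct}\stackrel{d}{ = }\sqrt{c}\,W_{t} . A quick Monte Carlo sanity check (our sketch, with arbitrary parameter values) confirms that the rescaled process has the variance t of a standard Brownian motion:

```python
import numpy as np

def rescaled_bm_variance(eps, delta, t, n_paths, seed=0):
    """Monte Carlo check that W_bar_t := (sqrt(eps)/delta) * W_{(delta/sqrt(eps))^2 t}
    has variance t, i.e. the pull-back time change yields a standard BM."""
    rng = np.random.default_rng(seed)
    c = (delta / np.sqrt(eps)) ** 2                      # time-change factor
    w = rng.normal(0.0, np.sqrt(c * t), size=n_paths)    # samples of W at time c*t
    w_bar = (np.sqrt(eps) / delta) * w                   # rescaled samples
    return w_bar.var()
```
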

    The infinitesimal generator \overline{\mathcal{L}}_{\varepsilon, \delta_{\varepsilon}} of the diffusion component associated to the process \overline{X}^{\varepsilon, \delta_{\varepsilon}} is

    \begin{equation} \overline{\mathcal{L}}_{\varepsilon, \delta_{\varepsilon}}: = \frac{1}{2}\sum\limits_{i, j = 1}^{d}a_{ij}\left(x\right)\frac{\partial^{2}}{\partial x_{i}\partial x_{j}} +\sum\limits_{i = 1}^{d}b_{i}\left(x\right)\frac{\partial}{\partial x_{i}}+\frac{\delta_{\varepsilon}}{\varepsilon}\sum\limits_{i = 1}^{d}c_{i}\left(x\right)\frac{\partial}{\partial x_{i}}, \quad x\in\mathbb{T}^{d}. \end{equation} (2.7)

    This generator tends to \mathcal{L}_{\gamma} defined in (2.5), as \varepsilon\to 0 .

    The proof of the main Theorem 4 (below) relies on an explicit calculation of the logarithmic moments of X^{\varepsilon, \delta_{\varepsilon}} and on the following Girsanov and Itô formulas. Before proceeding, let us introduce some spaces.

    For E locally compact, let \mathcal{H}^2(T, \lambda) be the linear space of all equivalence classes of mappings F:[0, T] \times E \times \Omega \longrightarrow \mathbb{R} which coincide almost everywhere with respect to dt \otimes d\lambda \otimes d\mathbb{P} and which satisfy the following conditions :

    F is predictable;

    \int_0^T \int_{E\backslash\{0\}}\mathbb{E}\Big(| F(t, z)|^2\Big) dt \lambda(dz) < + \infty.

    We endow \mathcal{H}^2(T, \lambda) with the inner product \left\langle F, G \right\rangle_{T, \lambda}: = \int_0^T \int_{E\backslash\{0\}}\mathbb{E}\Big(F(t, z)G(t, z)\Big) dt \lambda(dz) . Then, it is well known that \left(\mathcal{H}^2(T, \lambda); \left\langle., . \right\rangle_{T, \lambda}\right) is a real separable Hilbert space.

    Let N_{p} be a Poisson random measure on \mathbb{R}_{+}\times\left(E\backslash\{0\}\right) with intensity measure \lambda , associated to a given \mathcal{F}_{T} -adapted, \sigma -finite point process p which is independent of the Brownian motion W . Let \widetilde{N}_{p} be the associated compensated Poisson random measure. Now we have (see D. Applebaum [3], Chapter 5, Section 2):

    Lemma 2 (Girsanov's formula). Let X be a Lévy process such that e^{X} is a martingale, i.e.,

    X_t = \int_0^t b(s) ds + \int_0^t \sigma (s) d W_s + \int_0^t \int_EH(s, z)\widetilde{N}_{p}(dsdz)+ \int_0^t \int_E K(s, z)N_{p}(dsdz),

    with

    b(t) = -\frac{1}{2}\sigma^2 (t) - \int_E \left( e^{H(t, z)}-1 - H(t, z)\right) \lambda(dz) - \int_E\left( e^{K(t, z)}-1 \right)\lambda(dz), \ \mathbb{P}-a.s.

    We suppose that there exists C > 0 such that

    \left| K(t, z)\right|\leqslant C, \quad \forall t\geqslant0, \forall z\in E .

    For L \in \mathcal{H}^2 (T, \lambda) we define

    M_t : = \int_0^t \int_{z\neq 0} L(s, z) \widetilde{N}_{p}(dsdz).

    Set

    U(t, z) = \left( e^{H(t, z)}-1 \right)\boldsymbol{1}_{\big\{\left\|z\right\| \lt 1 \big\} }+ \left( e^{K(t, z)}-1 \right)\boldsymbol{1}_{\big\{\left\|z\right\|\geqslant 1 \big\} }

    and we suppose that

    \int_0^T \int_{\big\{\left\|z\right\|\leqslant 1 \big\}} \left( e^{H(s, z)}-1 \right)^2 \lambda(dz)ds \lt + \infty .

    Finally, we define

    B_t = W_t - \int_0^t \sigma(s) ds\quad {\rm and} \quad N_t = M_t -\int_0^t \int_{z\neq 0} L(s, z)U(s, z)\lambda(dz)ds, \ \ 0 \leq t \leq T.

    Let \boldsymbol{Q} be the probability measure on (\Omega, \mathcal{F}_T) defined as:

    \dfrac{d\boldsymbol{Q}}{d\mathbb{P}}: = e^{ X_T}.

    Then under \boldsymbol{Q} , B_t is a Brownian motion and N_t is a \boldsymbol{Q} -martingale.
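A minimal numerical illustration of the continuous part of Lemma 2 (our sketch; H = K = 0 and constant \sigma , names our own): under \boldsymbol{Q} with density e^{X_T} = e^{\sigma W_T - \sigma^{2}T/2} , the path gains the drift \sigma t , so \boldsymbol{Q} -expectations of f(W_T) match \mathbb{P} -expectations of f(W_T+\sigma T) :

```python
import numpy as np

def girsanov_shift_check(sigma, T, n_paths, seed=1):
    """Monte Carlo illustration of Lemma 2 with H = K = 0, constant sigma:
    E_P[exp(sigma*W_T - sigma^2*T/2) * f(W_T)] = E_P[f(W_T + sigma*T)],
    since under Q the process B_t = W_t - sigma*t is a Brownian motion."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, np.sqrt(T), size=n_paths)       # W_T under P
    density = np.exp(sigma * w - 0.5 * sigma**2 * T)    # dQ/dP = e^{X_T}
    f = lambda x: x**2
    lhs = (density * f(w)).mean()        # E_Q[f(W_T)] via reweighting
    rhs = f(w + sigma * T).mean()        # E_P[f(W_T + sigma*T)]
    return lhs, rhs
```

With \sigma = 0.5 and T = 1 both sides estimate \mathbb{E}[(W_{1}+0.5)^{2}] = 1.25 .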

    Next define a d -dimensional semi-martingale Y_{t}: = \left(Y_1, \ldots, Y_d\right) by

    \begin{equation} Y_{t}^{i} = Y_{0}^{i}+M_{t}^{i} + A_{t}^{i} + \int_0^t \int_E f_{i}(s, z, .)\widetilde{N}_{p}(dsdz)+ \int_0^t \int_E g_{i}(s, z, .)N_{p}(dsdz), \quad i: = 1, \cdots, d \end{equation} (2.8)

    where

    M_{t} is a continuous locally square-integrable \left(\mathcal{F}_{t}\right) -martingale with M_{0}: = 0 ;

    A_{t} is a continuous \left(\mathcal{F}_{t}\right) -adapted process, almost all of whose sample functions are of bounded variation on each finite interval, with A_{0}: = 0 ;

    g is \left(\mathcal{F}_{t}\right) -predictable and for t > 0 , \int_{0}^{t}\int_{E}\left\| g(s, z, .)\right\| \lambda(dz)ds < +\infty\ a.s ;

    f is \left(\mathcal{F}_{t}\right) -predictable and for t > 0 , \int_{0}^{t}\int_{E}\left\| f(s, z, .)\right\| ^{2}\lambda(dz)ds < +\infty\ a.s .

    We have (see, for example Ikeda and Watanabe [10] Theorem 5.1 )

    Lemma 3 (Itô's formula). Let F be a function of class C^{2} on \mathbb{R}^{d} and Y_t a d -dimensional semi-martingale given in (2.8). Then the stochastic process F\left(Y_{t} \right) is also an \left(\mathcal{F}_{t}\right) -semi-martingale and the following formula holds:

    \begin{split} F\left(Y_t\right)-F\left(Y_0\right) = &\sum\limits_{i = 1}^{d}\int_{0}^{t}\frac{\partial F}{\partial x_{i}}\left(Y_s\right)dM_{s}^{i} +\sum\limits_{i = 1}^{d}\int_{0}^{t}\frac{\partial F}{\partial x_{i}}\left(Y_s\right)dA_{s}^{i}+\frac{1}{2}\sum\limits_{i, j = 1}^{d}\int_{0}^{t}\frac{\partial^{2} F}{\partial x_{i}\partial x_{j}}\left(Y_s\right)d\left\langle M^{i}, M^{j}\right\rangle_{s} \\ &+\int_{0}^{t}\int_{E}\left[F\left(Y_{s-}+g(s, z, .)\right)-F\left(Y_{s-}\right)\right]N_{p}(dzds) \\ &+\int_{0}^{t}\int_{E}\left[F\left(Y_{s-}+f(s, z, .)\right)-F\left(Y_{s-}\right)\right]\widetilde{N}_{p}(dzds)\\ &+\int_{0}^{t}\int_{E}\left[F\left(Y_{s-}+f(s, z, .)\right)-F\left(Y_{s-}\right)-\sum\limits_{i = 1}^{d}f_{i}(s, z, .)\frac{\partial F}{\partial x_{i}}\left(Y_s\right)\right]\lambda(dz)ds. \end{split}
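For F(x) = x^{2} and a pure Brownian path (no jumps, M = W , A = 0 ), the formula reduces to W_T^{2} = 2\int_0^T W_s\,dW_s + \langle W\rangle_T with \langle W\rangle_T = T ; its discrete analogue holds exactly path by path, as this sketch of ours verifies:

```python
import numpy as np

def ito_square_identity(T, n_steps, seed=3):
    """Discrete analogue of Ito's formula for F(x) = x^2 on Brownian motion:
    W_T^2 = 2 * sum_i W_{t_i} dW_i + sum_i (dW_i)^2 exactly (telescoping),
    and the last sum approximates the bracket <W>_T = T."""
    rng = np.random.default_rng(seed)
    dw = rng.normal(0.0, np.sqrt(T / n_steps), size=n_steps)  # increments
    w = np.concatenate(([0.0], np.cumsum(dw)))                # the path
    stoch_int = 2.0 * (w[:-1] * dw).sum()     # discrete 2 * int W dW
    bracket = (dw ** 2).sum()                 # discrete quadratic variation
    return w[-1] ** 2, stoch_int + bracket, bracket
```
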

    A version of the next theorem (without jumps) can be found in Baxendale and Stroock [2] (Corollary 1.12). Its proof is very long, and we now have all the arguments needed to carry it over, mutatis mutandis, to our framework. For this reason we content ourselves with the statement of the theorem, referring the careful reader to the aforementioned paper.

    Before proceeding, we give some definitions. Denote by C^{\infty}\left(\mathbb{T}^{d}\right) the space of functions of class C^{\infty} on {\mathbb{T}}^{d} . For every function \phi\in C^{\infty}\left(\mathbb{T}^{d}\right) and every probability measure \mu\in\mathcal{P}\left(\mathbb{T}^{d}\right) , let us set

    \begin{aligned} &b_{\mu}(\phi): = \int_{\mathbb{T}^{d}}\left(I+\nabla\phi\right)(x)b (x)\mu(dx), \qquad\qquad\qquad\qquad\qquad c_{\mu}(\phi): = \int_{\mathbb{T}^{d}}\left(I+\nabla\phi\right)(x)c (x)\mu(dx), \\ &k_{\mu}(\phi, y): = \int_{\mathbb{T}^{d}}\left(I+\nabla\phi\right)(x)k(x, y)\mu(dx), \quad\qquad\qquad\qquad\qquad\overline{c}_{\mu}(\phi): = c_{\mu}(\phi)-\int_{\mathbb{R}^{d}}k_{\mu}(\phi, y)\nu(dy), \\ &a_{\mu}(\phi): = \int_{\mathbb{T}^{d}}\left(I+\nabla\phi\right)(x)a(x)\left(I+\nabla\phi\right)(x)\mu(dx), \qquad\quad\Delta_{a_{\mu}}(\phi): = \frac{1}{2}\int_{\mathbb{T}^{d}} {\rm Tr}\Big\lbrace \nabla\phi(x)a(x)\nabla\phi(x)\Big\rbrace \mu(dx). \end{aligned}

    Define

    \begin{split} \overline{\mathcal{J}}_{\gamma}(\theta): = \inf\limits_{\phi\in C^{\infty}\left(\mathbb{T}^{d}\right) }\sup\limits_{\mu\in\mathcal{P}\left(\mathbb{T}^{d}\right)}&\left\lbrace \frac{1}{2}\left\langle\theta, a_{\mu}(\phi)\theta \right\rangle+\left\langle \overline{c}_{\mu}(\phi)+\frac{1}{\gamma}\left(\Delta_{a_{\mu}}+b_{\mu}\right)(\phi) , \theta\right\rangle \right. \\ &\left.\qquad +\int_{\mathbb{R}^{d}}\int_{\mathbb{T}^{d}}\left(e^{ \left\langle k(z, y), \theta\right\rangle} -1 \right)\mu(dz)\nu(dy)\right\rbrace . \end{split}

    Now fix \mu\in\mathcal{P}\left(\mathbb{T}^{d}\right) and \phi\in C^{\infty}\left(\mathbb{T}^{d}\right) ; by the assumption on a , the matrix a_{\mu}(\phi) is strictly positive-definite. Then, letting a^{-1}_{\mu}(\phi) be its inverse matrix, we define the norm \Big\|\theta \Big\|_{a_{\mu}^{-1}(\phi)}: = \sqrt{\left\langle\theta, a_{\mu}^{-1}(\phi)\theta\right\rangle } for all \theta\in\mathbb{R}^{d} . Next, we define

    Q_{a_{\mu}(\phi), \nu}: = \left\lbrace w\in\mathbb{R}^{d}:\ w = a_{\mu}(\phi)u\ {\rm for~ some }\ u\in\mathbb{R}^{d}\right\rbrace + S_{\nu},

    where S_{\nu} denotes the support of \nu . Let {\rm con}\left(Q_{a_{\mu}(\phi), \nu}\right) be the smallest convex cone that contains Q_{a_{\mu}(\phi), \nu} and define

    \begin{equation} \boldsymbol{T}_{a_{\mu}(\phi), \nu}: = \Big\{\overline{c}_{\mu}(\phi)\Big\} + {\rm con}\left(Q_{a_{\mu}(\phi), \nu}\right) . \end{equation} (2.9)

    Let {\rm ri}(A) denote the relative interior of a set A . In light of the characterization lemma (see Ikeda and Watanabe [10], Lemma 10.2.3 ) applied to the effective domain of \mathcal{J}(\theta): = \inf\limits_{\theta'\in\mathbb{R}^{d}}\left\lbrace \left\langle\theta, \theta' \right\rangle-\overline{\mathcal{J}}\left(\theta'\right)\right\rbrace , we have

    {\rm ri}\left(\inf\limits_{\phi\in C^{\infty}\left(\mathbb{T}^{d}\right) }\sup\limits_{\mu\in\mathcal{P}\left(\mathbb{T}^{d}\right)}\boldsymbol{T}_{a_{\mu}(\phi), \nu}\right) \equiv {\rm ri}\Big( {\rm dom} \mathcal{J}(\theta) \Big).

    By the requirements (2.1) and (2.2), it is well known that

    \begin{split} \mathcal{J}_{\gamma}(\theta) &: = \sup\limits_{\phi\in C^{\infty}\left(\mathbb{T}^{d}\right) }\inf\limits_{\mu\in\mathcal{P}\left(\mathbb{T}^{d}\right)}\left\lbrace \frac{1}{2}\Big\|\theta-\overline{c}_{\mu}(\phi)-\frac{1}{\gamma}\left(\Delta_{a_{\mu}}+b_{\mu}\right)(\phi)\Big\|_{a_{\mu}^{-1}(\phi)}^{2} +\int_{\mathbb{R}^{d}}\int_{\mathbb{T}^{d}}\varrho\left(\frac{\left\|\theta\right\|}{\left\|k(z, y)\right\|} \right)\mu(dz)\nu(dy)\right\rbrace \end{split}

    where \ \varrho(r): = r\log r-r+1, \quad r\in\left(0, +\infty\right) .
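The function \varrho is non-negative, convex, and vanishes only at r = 1 ; it is also (a standard fact we note for intuition, not stated in the paper) the Legendre transform of s\mapsto e^{s}-1 , the exponential integrand appearing in \overline{\mathcal{J}}_{\gamma} . A quick check of ours:

```python
import math

def rho(r):
    """varrho(r) = r*log(r) - r + 1, the jump-cost density in J_gamma."""
    return r * math.log(r) - r + 1.0

def rho_via_legendre(r, lo=-5.0, hi=5.0, n=100001):
    """Grid evaluation of sup_s { r*s - (e^s - 1) }; the supremum is
    attained at s = log(r), recovering rho(r)."""
    step = (hi - lo) / (n - 1)
    return max(r * (lo + i * step) - (math.exp(lo + i * step) - 1.0)
               for i in range(n))
```
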

    Finally, let us define, for T > 0 and x\in\mathbb{R}^{d} ,

    \begin{equation} g^{\varepsilon}_{T, x}(\theta): = \varepsilon\log\mathbb{E}\left\lbrace \exp\left(\frac{1}{\varepsilon}\left\langle\theta, X_{T}^{x, \varepsilon, \delta_{\varepsilon}} \right\rangle \right) \right\rbrace, \quad \varepsilon \gt 0 , \theta\in\mathbb{R}^{d}. \end{equation} (2.10)

    Now we state our main result.

    Theorem 4 (Main result). For all T > 0 , we assume that the hypotheses (\boldsymbol{H.1}) and (\boldsymbol{H.2}) hold true. Suppose that

    \begin{equation*} g^{T, x}_{\gamma}(\theta): = \lim\limits_{\varepsilon\to 0}g_{T, x}^{\varepsilon}(\theta) = \lim\limits_{\varepsilon\to 0}\varepsilon\log\mathbb{E}\left\lbrace \exp{\frac{1}{\varepsilon}\left\langle\theta, X_{T}^{x, \varepsilon, \delta_{\varepsilon}}\right\rangle}\right\rbrace \end{equation*}

    exists uniformly with respect to x\in {\mathbb{R}}^{d} . In addition

    \begin{equation*} \begin{split} g^{T, x}_{\gamma}(\theta): = \left\langle x , \theta \right\rangle +T\overline{\mathcal{J}}_{\gamma}(\theta). \end{split} \end{equation*}

    Then we have

    \begin{eqnarray*} \liminf\limits_{\varepsilon\to 0}g_{T, x}^{\varepsilon}(\theta)\geqslant \left\langle x , \theta \right\rangle +T\left( \overline{Q}_{\gamma}(\theta)+\alpha_{1, \gamma}(\theta)+\alpha_{2, \gamma}(\theta)\right) \\ \limsup\limits_{\varepsilon\to 0}g_{T, x}^{\varepsilon}(\theta)\leqslant \left\langle x , \theta \right\rangle +T\left( \overline{Q}_{\gamma}(\theta)+\beta_{1, \gamma}(\theta)+\beta_{2, \gamma}(\theta)\right) \end{eqnarray*}

    where

    \begin{equation*} \begin{aligned} &\begin{split} \overline{Q}_{\gamma}(\theta): = \frac{1}{2}\left\langle\theta, a_{m_{\gamma}}(\phi)\theta \right\rangle&+\left\langle \overline{c}_{m_{\gamma}}(\phi)+\frac{1}{\gamma}\left(\Delta_{a_{m_{\gamma}}}+b_{m_{\gamma}}\right)(\phi), \theta\right\rangle \\ &+\int_{\mathbb{R}^{d}}\int_{\mathbb{T}^{d}}\left(e^{ \left\langle k(x, y), \theta\right\rangle} -1 \right)m_{\gamma}(dx)\nu(dy) \end{split}\\ &\alpha_{1, \gamma}(\theta): = \inf\left\lbrace\gamma \int_{\mathbb{R}^{d}} \int_{\mathbb{T}^{d}}\Big\|k(x, y)\nabla\Psi_{\theta}(x)\Big\| m_{\gamma}(dx)\nu(dy):\Psi_{\theta}\in C^{\infty}\left(\mathbb{T}^{d}\right)\right\rbrace \\ &\alpha_{2, \gamma}(\theta): = \inf\left\lbrace\frac{\gamma^{2}}{2} \int_{\mathbb{T}^{d}}\Big\|\sigma^{*}(x)\nabla\Psi_{\theta}(x)\Big\|^{2}m_{\gamma}(dx):\Psi_{\theta}\in C^{\infty}\left(\mathbb{T}^{d}\right)\right\rbrace \\ &\beta_{1, \gamma}(\theta): = \inf\limits_{\Psi_{\theta}\in C^{\infty}\left(\mathbb{T}^{d}\right)}\sup\limits_{\mu\in\mathcal{P}\left(\mathbb{T}^{d}\right)}\left\lbrace\gamma \int_{\mathbb{R}^{d}} \int_{\mathbb{T}^{d}}\Big\|k(x, y)\nabla\Psi_{\theta}(x)\Big\| \mu(dx)\nu(dy)\right\rbrace \\ &\beta_{2, \gamma}(\theta): = \inf\limits_{\Psi_{\theta}\in C^{\infty}\left(\mathbb{T}^{d}\right)}\sup\limits_{\mu\in\mathcal{P}\left(\mathbb{T}^{d}\right)}\left\lbrace\frac{\gamma^{2}}{2} \int_{\mathbb{T}^{d}}\Big\|\sigma^{*}(x)\nabla\Psi_{\theta}(x)\Big\|^{2}\mu(dx)\right\rbrace , \end{aligned} \end{equation*}

    and we have used \Psi_{\theta} to denote the unique u\in C^{\infty}\left({\mathbb{T}}^{d}\right) which satisfies

    \mathcal{L}_{\gamma}u = Q(., \theta)-\overline{Q}_{\gamma}(\theta)\ {\rm and }\ \int_{ {\mathbb{T}}^{d}}u(x)m_{\gamma}(dx) = 0,

    and with

    \begin{split} Q(x, \theta): = &\frac{1}{2}\Big\langle\theta, \Big(I+\nabla\phi\Big)(x)\sigma(x) \Big\rangle^{2}+\int_{ {\mathbb{R}}^{d}}\Big\langle\theta, \Big(I+\nabla\phi\Big)(x)\Big(c(x)-k(x, y)\Big) \Big\rangle\nu(dy) \\ &+\frac{1}{\gamma}\left\langle\theta, \frac{1}{2} {\rm \text{Tr}}\Big\lbrace \nabla\phi a\nabla\phi\Big\rbrace(x) +\Big(I+\nabla\phi\Big)(x)b(x)\right\rangle+\int_{\mathbb{R}^{d}}\left(e^{ \left\langle k(x, y), \theta\right\rangle} -1 \right)\nu(dy), \ \phi\in C^{\infty}\left( {\mathbb{T}}^{d}\right) . \end{split}

    Finally, set

    \begin{equation*} I_{\gamma}^{T, x}(\eta): = \sup\limits_{\theta}\Big\lbrace \langle\theta, \eta\rangle-g^{T, x}_{\gamma}(\theta)\Big\rbrace = \sup\limits_{\theta}\Big\lbrace \langle\theta, \eta-x\rangle-T\overline{\mathcal{J}}_{\gamma}(\theta)\Big\rbrace = T\mathcal{J}\left(\dfrac{\eta-x}{T}\right) . \end{equation*}

    Then for every set \Gamma on \mathcal{B}\left(\mathbb{R}^{d}\right) ,

    \begin{equation*} \begin{split} & \liminf\limits_{\varepsilon\to 0}\varepsilon\log\mathbb{P}\bigg\{X_{T}^{x, \varepsilon, \delta_{\varepsilon}}\in\Gamma\bigg\}\geqslant- \inf\limits_{\eta\in\stackrel{\circ}{\Gamma}} I_{\gamma}^{T, x}(\eta)\\ & \limsup\limits_{\varepsilon\to 0}\varepsilon\log\mathbb{P}\bigg\{X_{T}^{x, \varepsilon, \delta_{\varepsilon}}\in\Gamma\bigg\}\leqslant- \inf\limits_{\eta\in\overline{\Gamma}} I_{\gamma}^{T, x}(\eta). \end{split} \end{equation*}
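The Legendre-transform definition of I_{\gamma}^{T, x} above can be checked numerically (our sketch, in one dimension with the quadratic stand-in \overline{\mathcal{J}}_{\gamma}(\theta) = \theta^{2}/2 , which is an assumption for illustration only): the grid-based transform reproduces I_{\gamma}^{T, x}(\eta) = T\,\mathcal{J}\left((\eta-x)/T\right) = (\eta-x)^{2}/(2T) .

```python
import numpy as np

def rate_function(eta, x, T, jbar, theta_grid):
    """I(eta) = sup_theta { theta*(eta - x) - T*jbar(theta) }: the scalar
    Legendre transform defining I_gamma^{T,x} from g_gamma^{T,x}."""
    vals = theta_grid * (eta - x) - T * jbar(theta_grid)
    return vals.max()
```

With T = 2 , x = 1 , \eta = 3 the supremum is attained at \theta = (\eta-x)/T = 1 and equals (\eta-x)^{2}/(2T) = 1 .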

    Freidlin-Wentzell theory is concerned with the behaviour of a stochastic process seen as a Brownian perturbation of an ordinary differential equation (ODE). For the description of, and background on, ODEs perturbed by a Brownian martingale, we refer to [5,8,13] and the references therein. At the same time, non-linear stochastic evolution equations have been studied in various works. However, there are still few results on large deviations for stochastic evolution equations with jumps (see, for example, [12,15]).

    In all the references listed above the question is: what do the SDE's trajectories look like when the viscosity parameter \varepsilon is small? The answer is twofold, depending on the time scale considered: a finite time interval or an increasingly large time scale. Another question can also be considered: when the process is ergodic with invariant distribution \mu_{\varepsilon} , what is the asymptotic behaviour of \mu_{\varepsilon} as \varepsilon\to 0 ?

    This is the question in all the references available to us in this direction, with one important exception given by the paper [9] (see also [1]). In [9], large deviations problems arising in stochastic homogenization are discussed. The scope of our approach, based on the LDP combined with homogenization, differs substantially from that of [9].

    Our version of the SDE contains a substantial novelty, which makes it possible to handle the Freidlin-Wentzell large deviations arising in stochastic homogenization for non-linear evolution equations with Poisson jumps and Brownian motion. To explain this modified construction in the most transparent way, we take the "jump component" in a relatively simple form, that is, we consider Poisson point processes of class (QL).

    There are several toolboxes for large deviations (see, for example, [6]). The approach we choose allows us to characterize the LDP through the analysis of the logarithmic moment generating function.

    We refer to Baxendale et al. [2] for the characterization and general results on the LDP via the analysis of the logarithmic moment generating function. However, along this way we meet substantial difficulties already when trying to prove that the LDP holds. Initially the corresponding rate function (denoted by I ) is identified as the Legendre transform (or convex conjugate) of the logarithmic moment generating function defined, when the limit exists, as

    \begin{equation*} g^{T, x}_{\gamma}(\theta) = \lim\limits_{\varepsilon\to 0}\varepsilon\log {\mathbb{E}}\left\lbrace \exp{\frac{1}{\varepsilon}\left\langle \theta, X_{T}^{x, \varepsilon, \delta_{\varepsilon}}\right\rangle }\right\rbrace. \end{equation*}

    Subsequently, the Cameron-Martin-Girsanov transformation is used to find an alternative expression for I . Although from the standpoint of large deviation theory it is the rate function I which is of paramount interest, it is the logarithmic moment generating function g^{T, x}_{\gamma} which is most important for the analysis of Lyapunov exponents. Indeed, one sees that g^{T, x}_{\gamma} is here what is called the moment Lyapunov function.

    The proof of the main result can be conducted in six steps, which we outline below.

    Proof. (Theorem 4)

    Step 1: (calculation of the moment Lyapunov function).

    The calculation of this limit is inspired by Freidlin [8], Section 7.2, Lemma 2.1 (without jumps).

    For any \theta\in {\mathbb{R}}^{d}

    \lim\limits_{\varepsilon\to 0}g_{T, x}^{\varepsilon}(\theta) = g^{T, x}_{\gamma}(\theta) = \left\langle x, \theta\right\rangle +T\overline{\mathcal{J}}_{\gamma}(\theta),

    exists uniformly in x\in {\mathbb{R}}^{d} , with

    \begin{split} \overline{\mathcal{J}}_{\gamma}(\theta) = \inf\limits_{\phi\in C^{\infty}\left( {\mathbb{T}}^{d}\right) }\sup\limits_{\mu\in\mathcal{P}\left( {\mathbb{T}}^{d}\right)}&\left\lbrace \frac{1}{2}\left\langle\theta, a_{\mu}(\phi)\theta \right\rangle+\left\langle \overline{c}_{\mu}(\phi)+\frac{1}{\gamma}\left(\Delta_{a_{\mu}}+b_{\mu}\right)(\phi) , \theta\right\rangle \right. \\ &\left.\qquad +\int_{ {\mathbb{R}}^{d}}\int_{ {\mathbb{T}}^{d}}\left(e^{ \left\langle k(z, y), \theta\right\rangle} -1 \right)\mu(dz)\nu(dy)\right\rbrace \end{split}

    being the largest eigenvalue of the differential operator

    \begin{split} L_{\theta, \gamma}: = &\frac{1}{\gamma^{2}}\mathcal{L}_{\gamma}+\frac{1}{\gamma}\sum\limits_{i, j = 1}^{d}a_{i, j}(x)\theta_{i}\frac{\partial}{\partial x_{i}} +\sum\limits_{i = 1}^{d}\left(\frac{1}{\gamma}b_{i}+c_{i}\right)(x)\theta_{i}+\frac{1}{2}\sum\limits_{i, j = 1}^{d}\theta_{i}^{*}a_{i, j}(x)\theta_{j}\\ &+\int_{ {\mathbb{R}}^{d}}\sum\limits_{i = 1}^{d}\left\lbrace k_{i}(x, y)\theta_{i}+\frac{1}{\gamma}k_{i}(x, y)\frac{\partial}{\partial x_{i}} \right\rbrace\nu(dy)+\int_{ {\mathbb{R}}^{d}}\left(e^{ \left\langle k(x, y), \theta\right\rangle} -1 \right)\nu(dy). \end{split}
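As a hedged numerical illustration (not part of the proof), the principal eigenvalue of such an operator can be approximated by discretizing it on a periodic grid. The sketch below treats a simplified one-dimensional analogue with constant coefficients a, b, c , drops the jump terms ( \nu = 0 ), and checks the discretization against the exact value \frac{1}{2}a\theta^{2}+(b/\gamma+c)\theta attained by the constant eigenfunction; all names and parameters are our own.

```python
import numpy as np

# Hedged sketch: largest eigenvalue of a 1-d analogue of L_{theta,gamma}
#   L f = (a / (2*gamma^2)) f'' + (a*theta/gamma) f'
#         + ((b/gamma + c)*theta + 0.5*a*theta^2) f
# on the torus [0,1), by finite differences on a periodic grid.  With
# constant coefficients the constant function is the principal eigenfunction.

def principal_eigenvalue(a, b, c, theta, gamma, n=256):
    h = 1.0 / n                                   # grid spacing on the torus
    L = np.zeros((n, n))
    for i in range(n):
        ip, im = (i + 1) % n, (i - 1) % n         # periodic neighbours
        # second-order (generator) part: (a / (2*gamma^2)) f''
        L[i, ip] += a / (2 * gamma**2) / h**2
        L[i, im] += a / (2 * gamma**2) / h**2
        L[i, i] += -a / gamma**2 / h**2
        # first-order part: (a*theta/gamma) f' (central difference)
        L[i, ip] += a * theta / gamma / (2 * h)
        L[i, im] += -a * theta / gamma / (2 * h)
        # zeroth-order part
        L[i, i] += (b / gamma + c) * theta + 0.5 * a * theta**2
    return np.linalg.eigvals(L).real.max()

lam = principal_eigenvalue(a=1.0, b=0.4, c=0.2, theta=0.5, gamma=2.0)
exact = 0.5 * 1.0 * 0.5**2 + (0.4 / 2.0 + 0.2) * 0.5
```

The derivative terms annihilate constant vectors, so the discrete principal eigenvalue coincides with the zeroth-order coefficient, which makes this toy case a clean sanity check for the discretization.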

    Step 2: (change of probability measure).

We set \hat{X}_{t}^{\varepsilon, \delta_{\varepsilon}}: = \dfrac{1}{\delta_{\varepsilon}}X_{t}^{x, \varepsilon, \delta_{\varepsilon}} . By Itô's formula we have, for all \phi\in C^{\infty}\left({\mathbb{T}}^{d}\right) ,

    \begin{split} \tilde{X}^{\varepsilon, \delta_{\varepsilon}}_{t}& = X^{x, \varepsilon, \delta_{\varepsilon}}_{t}+\delta_{\varepsilon}\left[ \phi\left(\hat{X}_{t}^{\varepsilon, \delta_{\varepsilon}}\right) -\phi\left(\frac{x}{\delta_{\varepsilon}}\right)\right]\\ & = x\ +\int_{0}^{t}\left(I+\nabla\phi\right)\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\left[ c\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)-\int_{ {\mathbb{R}}^{d}}k\left(\hat{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, y\right)\nu(dy)\right] ds\\ &\qquad+\sqrt{\varepsilon}\int_{0}^{t}\left(I+\nabla\phi\right)\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\sigma\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right) dW_{s}\\ &\qquad + \dfrac{\varepsilon}{\delta_\varepsilon} \int_{0}^{t}\left[\dfrac{1}{2} {\rm Tr}\Big\lbrace \nabla\phi\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)a\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\nabla\phi\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\Big\rbrace +\left(I+ \nabla\phi\right) \left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)b\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\right] ds \\ &\qquad+\delta_{\varepsilon}\int_{0}^{t}\int_{ {\mathbb{R}}^{d}}\left[\phi\left(\hat{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}+\varepsilon k\left(\hat{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, y\right) \right)-\phi\left(\hat{X}_{s-}^{\varepsilon, \delta_{\varepsilon}} \right) \right] N^{\varepsilon^{-1}}(dyds)\\ &\qquad+\varepsilon\int_{0}^{t}\int_{ {\mathbb{R}}^{d}}k\left(\hat{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, y\right)N^{\varepsilon^{-1}}(dyds). \end{split}

    Let us define for z\in {\mathbb{T}}^{d}

    H^{\varepsilon, \varphi}(z): = \varphi\Big(z+\varepsilon k\left(z, .\right)\Big)-\varphi(z)\quad \forall\varphi\in C^{\infty}\left( {\mathbb{T}}^{d}\right).

We now consider the logarithmic moment generating function g_{T, x}^{\varepsilon} of X^{x, \varepsilon, \delta_{\varepsilon}} given in (2.10). By Girsanov's formula, we have

    \begin{equation} \begin{split} g_{T, x}^{\varepsilon}(\theta) = &\left\langle x, \theta \right\rangle +\varepsilon\log\widetilde{ {\mathbb{E}}}\left[\exp\left\lbrace\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\left( \frac{1}{2}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T } \left\langle\theta, \left(I+\nabla\phi\right)\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\sigma\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right) \right\rangle^{2} ds \right. \right.\right. \\ &\left. \quad+\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T } \left\langle\theta, \left(I+\nabla\phi\right)\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\left[ c\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)-\int_{ {\mathbb{R}}^{d}}k\left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, y\right)\nu(dy)\right] \right\rangle ds \right) \\ & \quad +\dfrac{\delta_\varepsilon}{\varepsilon}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T } \left\langle\theta, \dfrac{1}{2} {\rm Tr}\Big\lbrace \nabla\phi\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)a\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\nabla\phi\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\Big\rbrace +\left(I+\nabla\phi\right) \left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)b\left(\hat{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\right\rangle ds\\ &\quad+\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\Bigg( e^{ \left\lbrace \frac{\delta_{\varepsilon}}{\varepsilon}\left\langle\theta, H^{\varepsilon, \phi}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rangle\right\rbrace } -1\Bigg)\nu(dy) ds\\ &\left. 
\left.\quad+\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\Bigg( e^{ \left\langle\theta, k\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rangle} -1\Bigg)\nu(dy) ds-\frac{\delta_{\varepsilon}}{\varepsilon}\left( \phi\left(\overline{X}_{t}^{\varepsilon, \delta_{\varepsilon}}\right) -\phi\left(\frac{x}{\delta_{\varepsilon}}\right)\right)\right\rbrace \right] \end{split} \end{equation} (3.1)

where \widetilde{ {\mathbb{E}}} denotes expectation with respect to the probability measure \widetilde{\mathbb{P}} defined by

    \begin{split} \dfrac{d\widetilde{\mathbb{P}}}{d\mathbb{P}}: = & \exp\left(\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T } \left\langle\theta, \left(I+\nabla\phi\right)\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\sigma\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right) \right\rangle dW_{s} \right. \\ &\left.\qquad -\frac{1}{2}\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T } \left\langle\theta, \left(I+\nabla\phi\right)\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\sigma\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right) \right\rangle^{2} ds\right)\\ & \times\exp\left(\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\left\langle \theta, k\left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rangle N^{\left(\delta_{\varepsilon}/\varepsilon\right)^{2}} (dyds) \right. \\ &\left.\quad\qquad -\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\Bigg( e^{ \left\langle\theta, k\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rangle} -1\Bigg)\nu(dy) ds\right)\\ & \times\exp\left(\frac{\delta_{\varepsilon}}{\varepsilon}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\left\langle \theta, H^{\varepsilon, \phi}\left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rangle N^{\left(\delta_{\varepsilon}/\varepsilon\right)^{2}}(dyds) \right. 
\\ &\left.\quad\qquad -\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\Bigg( e^{ \left\lbrace \frac{\delta_{\varepsilon}}{\varepsilon}\left\langle\theta, H^{\varepsilon, \phi}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rangle\right\rbrace} -1\Bigg)\nu(dy) ds\right). \end{split}
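The density above is the exponential martingale of the Cameron-Martin-Girsanov transformation. As a minimal sketch of the mechanism (a toy illustration under our own assumptions, not the paper's setting), reweighting samples of a standard Brownian motion by \exp\lbrace uW_{T}-\frac{1}{2}u^{2}T\rbrace reproduces expectations under a measure in which W has drift u :

```python
import numpy as np

# Hedged illustration of Girsanov reweighting: under
#   dP~/dP = exp(u*W_T - 0.5*u^2*T)
# the Brownian motion acquires drift u, so E~[W_T] = u*T.  We check this
# by weighting samples drawn under the original measure P.

rng = np.random.default_rng(1)
u, T, n = 0.7, 2.0, 400_000
W_T = rng.normal(0.0, np.sqrt(T), size=n)        # samples of W_T under P
density = np.exp(u * W_T - 0.5 * u**2 * T)       # Radon-Nikodym density on F_T
tilted_mean = np.mean(W_T * density)             # Monte Carlo estimate of E~[W_T]
```

Since the density has mean one, the same weights turn any P-sample into a \widetilde{\mathbb{P}} -sample, which is exactly how the change of measure is exploited in Step 2.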

    Step 3: (ergodicity).

    Let us set, for all z\in {\mathbb{T}}^{d} , for all \theta\in {\mathbb{R}}^{d} :

    \begin{equation} \begin{split} \Phi(z, \theta): = & \frac{1}{2}\left\langle\theta, \Big(I+\nabla\phi(z)\Big)\sigma (z) \right\rangle^{2}+\int_{ {\mathbb{R}}^{d}}\left\langle\Big(I+\nabla\phi(z)\Big)\Big(c(z)-k(z, y)\Big), \theta\right\rangle\nu(dy)\\ &+\frac{1}{\gamma}\Big\langle\theta, \frac{1}{2} {\rm Tr}\Big\lbrace \nabla\phi(z)a(z)\nabla\phi(z)\Big\rbrace +\Big(I+\nabla\phi(z)\Big)b(z)\Big\rangle +\int_{ {\mathbb{R}}^{d}}\left(e^{ \left\langle k(z, y), \theta\right\rangle} -1 \right)\nu (dy) . \end{split} \end{equation} (3.2)

Let \Psi_{\theta}\in C^{\infty}\left({\mathbb{T}}^{d}\right) be the unique solution of

    \mathcal{L}_{\gamma}\Psi_{\theta} = \Phi-\int_{ {\mathbb{T}}^{d}}\Phi(z, \theta)m_{\gamma}(dz), \quad {\rm which ~satisfies} \quad \int_{ {\mathbb{T}}^{d}}\Psi_{\theta}(z)m_{\gamma}(dz) = 0.

Such a solution \Psi_{\theta} exists by the assumptions on the coefficients (see, for instance, Pardoux and Veretennikov [11]). Applying Itô's formula to \left(\frac{\delta_{\varepsilon}}{\sqrt{\varepsilon}}\right)^{2} \Psi_{\theta}\left(\overline{X}\right) , we obtain

    \begin{equation} \begin{split} \left(\frac{\delta_{\varepsilon}}{\sqrt{\varepsilon}}\right)^{2} &\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\Phi \left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, \theta \right)ds = T\int_{ {\mathbb{T}}^{d}}\Phi(z, .)m_{\gamma}(dz)+\left(\frac{\delta_{\varepsilon}}{\sqrt{\varepsilon}}\right)^{2}\left[\Psi_{\theta}\left( \overline{X}^{\varepsilon, \delta_{\varepsilon}}_{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\right) -\Psi_{\theta}\left(\frac{x}{\delta_{\varepsilon}}\right)\right] \\ &-\left(\frac{\delta_{\varepsilon}}{\sqrt{\varepsilon}}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T} \sigma\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)dW_{s}\\ &-\frac{\delta_{\varepsilon}^{2}}{\varepsilon} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\left( \frac{\delta_\varepsilon}{\varepsilon}- \gamma \right) c\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)ds\\ &+\frac{\delta_{\varepsilon}^{3}}{\varepsilon^{2}} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}k\left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, y\right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\nu(dy)ds\\ &-\left(\frac{\delta_{\varepsilon}}{\sqrt{\varepsilon}}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/ \delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\left[\Psi_{\theta} \left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}+\varepsilon k \left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, y\right) \right) -\Psi_{\theta}\left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}} \right) 
\right]N^{\left(\delta_{\varepsilon}/\varepsilon\right)^{2}}(dyds).\\ \end{split} \end{equation} (3.3)

Substituting (3.3) into (3.1), we obtain

    \begin{equation} \begin{split} g_{T, x}^{\varepsilon}(\theta) = &\left\langle x, \theta\right\rangle +T\int_{ {\mathbb{T}}^{d}}\Phi(z, \theta)m(dz)+\varepsilon\log\widetilde{ {\mathbb{E}}}\left[ \exp\left\lbrace\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\left(\Psi_{\theta}\left( \overline{X}^{\varepsilon, \delta_{\varepsilon}}_{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\right) -\Psi_{\theta}\left(\frac{x}{\delta_{\varepsilon}}\right)\right) \right. \right. \\ &\quad\qquad\qquad-\frac{\delta_{\varepsilon}}{\varepsilon}\left( \phi\left(\hat{X}_{t}^{\varepsilon, \delta_{\varepsilon}}\right) -\phi\left(\frac{x}{\delta_{\varepsilon}}\right)\right)\\ &\quad\qquad\qquad+\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\Bigg( e^{ \left\lbrace \frac{\delta_{\varepsilon}}{\varepsilon}\left\langle\theta, H^{\varepsilon, \phi}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rangle\right\rbrace } -1\Bigg)\nu(dy) ds\\ &\quad\qquad\qquad-\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\sigma\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)dW_{s}\\ &\quad\qquad\qquad-\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\left( \frac{\delta_\varepsilon}{\varepsilon}- \gamma \right)c\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)ds\\ &\quad\qquad\qquad+\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{3}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}k\left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, 
y\right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\nu(dy)ds\\ &\left.\left. \quad\qquad\qquad-\left( \frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}H^{\varepsilon, \Psi_{\theta}} \left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}} \right) N^{\left(\delta_{\varepsilon}/\varepsilon\right)^{2}}(dyds)\right\rbrace \right] . \end{split} \end{equation} (3.4)

    Step 4: (upper bound).

    First, we have

    \begin{split} &\left|\log\left[\widetilde{ {\mathbb{E}}}\left(\exp\left\lbrace -\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\sigma\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)dW_{s}\right\rbrace \right)\right]\right| \\ &\quad\leqslant\left|\log\left[\widetilde{ {\mathbb{E}}}\left(\exp\left\lbrace -\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\sigma\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)dW_{s}\right\rbrace \right. \right. \right. \\ &\qquad\times\exp\left\lbrace -\frac{1}{2}\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{4}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\left\| \sigma\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\right\|^{2}ds\right\rbrace \\ &\left. \left.\left. \qquad\times\exp\left\lbrace\frac{1}{2}\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{4}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\left\| \sigma\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\right\|^{2}ds\right\rbrace\right)\right] \right| \leqslant\frac{T}{\varepsilon}\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\dfrac{\beta_{2, \gamma}(\theta)}{\gamma^{2}}. \end{split}

    Secondly, we can see

    \left|\log\left[\widetilde{ {\mathbb{E}}}\left(\exp\left\lbrace \left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{3}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}k\left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, y\right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}\right)\nu(dy)ds\right\rbrace \right)\right]\right|\leqslant\frac{T}{\varepsilon}\frac{\delta_{\varepsilon}}{\varepsilon}\dfrac{\beta_{1, \gamma}(\theta)}{\gamma}.

    Thirdly, we observe

    \begin{equation} \begin{split} \widetilde{ {\mathbb{E}}}&\left[ \exp\left\lbrace\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\left(\Psi_{\theta}\left( \overline{X}^{\varepsilon, \delta_{\varepsilon}}_{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\right) -\Psi_{\theta}\left(\frac{x}{\delta_{\varepsilon}}\right)\right) \right. \right.-\frac{\delta_{\varepsilon}}{\varepsilon}\left(\phi\left( \overline{X}^{\varepsilon, \delta_{\varepsilon}}_{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\right) -\phi\left(\frac{x}{\delta_{\varepsilon}}\right)\right)\\ &\qquad\qquad\qquad-\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\left( \frac{\delta_\varepsilon}{\varepsilon}- \gamma \right)c\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)ds\\ &\qquad\qquad\qquad-\left( \frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}} H^{\varepsilon, \Psi_{\theta}}\left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}} \right)N^{\left(\delta_{\varepsilon}/\varepsilon\right)^{2}}(dyds)\\ &\left.\left. 
\qquad\qquad\qquad+\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\Bigg( e^{ \left\lbrace \frac{\delta_{\varepsilon}}{\varepsilon}\left\langle\theta, H^{\varepsilon, \phi}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rangle\right\rbrace } -1\Bigg)\nu(dy) ds\right\rbrace \right]\\ &\leqslant\exp \left\lbrace \left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} K_{1}+\frac{\delta_{\varepsilon}}{\varepsilon}K_{2}+\frac{T}{\varepsilon}\left( \frac{\delta_\varepsilon}{\varepsilon}- \gamma \right)K_{3}+T\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}K_{4}+T\frac{\delta_{\varepsilon}}{\varepsilon}K_{5}\right\rbrace . \end{split} \end{equation} (3.5)

    In fact, we notice that

\begin{split} &\left|\log\left[\widetilde{ {\mathbb{E}}}\left(\exp\left\lbrace -\left( \frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}H^{\varepsilon, \Psi_{\theta}} \left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}} \right) N^{\left(\delta_{\varepsilon}/\varepsilon\right)^{2}}(dyds)\right\rbrace\right)\right]\right| \\ &\qquad\qquad\leqslant\left| \log\left( \exp\left\lbrace -\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T} \int_{ {\mathbb{R}}^{d}}H^{\varepsilon, \Psi_{\theta}}\left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}}, y\right) N^{\left(\delta_{\varepsilon}/\varepsilon\right)^{2}}(dyds) \right.\right. \right. \\ &\qquad\qquad\left.\qquad -\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}} \Bigg[ e^{ \left\lbrace\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} H^{\varepsilon, \Psi_{\theta}}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rbrace} -1 \Bigg]\nu(dy) ds\right\rbrace \\ &\qquad\qquad\left. \left. 
\qquad \times\exp\Bigg\{ \left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\Bigg[ e^{\left\lbrace \left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} H^{\varepsilon, \Psi_{\theta}}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rbrace} -1 \Bigg]\nu(dy) ds\Bigg\}\right) \right|\\ &\qquad\qquad\leqslant\exp\left\lbrace \frac{T}{\varepsilon}\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \left\| H^{\varepsilon, \Psi_{\theta}}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, .\right)\right\|_{C^{\infty}\left( {\mathbb{T}}^{d}, {\mathbb{R}}^{d}\right) }+o(1) \right\rbrace\\ &\qquad\qquad\leqslant\exp\left\lbrace T\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \left\| k\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, .\right)\right\|_{C^{\infty}\left( {\mathbb{T}}^{d}, {\mathbb{R}}^{d}\right) }+o(1)\right\rbrace, \end{split}

    and

    \begin{split} \widetilde{ {\mathbb{E}}}&\left[ \exp\left\lbrace\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\Bigg( e^{ \left\lbrace \frac{\delta_{\varepsilon}}{\varepsilon}\left\langle\theta, H^{\varepsilon, \phi}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rangle\right\rbrace } -1\Bigg)\nu(dy) ds\right\rbrace \right]\\ &\leqslant\exp\left\lbrace T\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right) \left\| k\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, .\right)\right\|_{C^{\infty}\left( {\mathbb{T}}^{d}, {\mathbb{R}}^{d}\right) }+o(1)\right\rbrace. \end{split}

    From (3.5) we have

    \begin{split} \lim\limits_{\varepsilon\to 0}\varepsilon\log\widetilde{ {\mathbb{E}}}&\left[ \exp\left\lbrace\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\left(\Psi_{\theta}\left( \overline{X}^{\varepsilon, \delta_{\varepsilon}}_{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\right) -\Psi_{\theta}\left(\frac{x}{\delta_{\varepsilon}}\right)\right) -\frac{\delta_{\varepsilon}}{\varepsilon}\left(\phi\left( \overline{X}^{\varepsilon, \delta_{\varepsilon}}_{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\right) -\phi\left(\frac{x}{\delta_{\varepsilon}}\right)\right)\right. \right. \\ &\qquad\qquad\qquad-\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2}\int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\left(\gamma-\frac{\delta_{\varepsilon}}{\varepsilon}\right) c\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)\nabla\Psi_{\theta}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}} \right)ds\\ &\qquad\qquad\qquad-\left( \frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}H^{\varepsilon, \Psi_{\theta}}\left(\overline{X}_{s-}^{\varepsilon, \delta_{\varepsilon}} \right)N^{\left(\delta_{\varepsilon}/\varepsilon\right)^{2}}(dyds)\\ &\left.\left. \qquad\qquad\qquad+\left(\frac{\delta_{\varepsilon}}{\varepsilon}\right)^{2} \int_{0}^{\left(\sqrt{\varepsilon}/\delta_{\varepsilon}\right)^{2}T}\int_{ {\mathbb{R}}^{d}}\Bigg( e^{ \left\lbrace \frac{\delta_{\varepsilon}}{\varepsilon}\left\langle\theta, H^{\varepsilon, \phi}\left(\overline{X}_{s}^{\varepsilon, \delta_{\varepsilon}}, y\right)\right\rangle\right\rbrace } -1\Bigg)\nu(dy) ds\right\rbrace \right]\longrightarrow 0. \end{split}

    Then, for all \theta\in {\mathbb{R}}^{d}, \ T > 0 , we have

    \begin{equation} \begin{split} \limsup\limits_{\varepsilon\to 0}g_{T, x}^{\varepsilon}(\theta)\leqslant \left\langle\theta, x \right\rangle+T\left( \overline{Q}_{\gamma}(\theta)+\beta_{1, \gamma}(\theta)+\beta_{2, \gamma}(\theta)\right) . \end{split} \end{equation} (3.6)

    Step 5: (lower bound).

Arguing as in Step 4, we then have

    \begin{equation} \liminf\limits_{\varepsilon\to 0}g_{T, x}^{\varepsilon}(\theta)\geqslant \left\langle\theta, x \right\rangle+T\left( \overline{Q}_{\gamma}(\theta)+\alpha_{1, \gamma}(\theta)+\alpha_{2, \gamma}(\theta)\right) . \end{equation} (3.7)

Step 6: (Fenchel-Legendre transform).

For \theta\in {\mathbb{R}}^{d} , let X_{1} be a Gaussian random vector with logarithmic moment generating function

    \overline{\Lambda}_{1, \gamma}(\theta): = \inf\limits_{\phi\in C^{\infty}\left( {\mathbb{T}}^{d}\right) }\sup\limits_{\mu\in\mathcal{P}\left( {\mathbb{T}}^{d}\right)}\left\lbrace \frac{1}{2}\left\langle\theta, a_{\mu}(\phi)\theta \right\rangle+\left\langle \overline{c}_{\mu}(\phi)+\frac{1}{\gamma}\left(\Delta_{a_{\mu}}+b_{\mu}\right)(\phi) , \theta\right\rangle\right\rbrace .

and let X_{2} be a stationary Poisson process on {\mathbb{R}}^{d} , independent of X_{1} , with logarithmic moment generating function

    \overline{\Lambda}_{2}(\theta): = \sup\limits_{\mu\in\mathcal{P}\left( {\mathbb{T}}^{d}\right)}\left\lbrace \int_{ {\mathbb{R}}^{d}}\int_{ {\mathbb{T}}^{d}}\left(e^{ \left\langle k(z, y), \theta\right\rangle} -1 \right)\mu(dz)\nu(dy)\right\rbrace .

    We notice that

\overline{\mathcal{J}}_{\gamma}(\theta) = \overline{\Lambda}_{1, \gamma}(\theta)+ \overline{\Lambda}_{2}(\theta).
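For orientation (an illustrative computation, not part of the proof), in the one-dimensional case with constant coefficients a, b, c, k , with \phi = 0 and a jump measure \nu of total mass \lambda , the two pieces reduce to familiar Gaussian and Poisson logarithmic moment generating functions:

```latex
% Illustrative constant-coefficient example (our own simplification):
\overline{\Lambda}_{1, \gamma}(\theta) \;=\; \tfrac{1}{2}\, a\,\theta^{2}
   + \Big(c + \tfrac{b}{\gamma}\Big)\theta,
\qquad
\overline{\Lambda}_{2}(\theta) \;=\; \lambda\big(e^{k\theta} - 1\big),
```

so that in this toy case \overline{\mathcal{J}}_{\gamma} is the logarithmic moment generating function of the sum of an independent Gaussian variable and a scaled Poisson variable.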

Let \Lambda_{1, \gamma}(\theta) and \Lambda_{2}(\theta) denote the Fenchel-Legendre transforms of \overline{\Lambda}_{1, \gamma} and \overline{\Lambda}_{2} , respectively. We have

    \Lambda_{1, \gamma}(\theta): = \sup\limits_{\phi\in C^{\infty}\left( {\mathbb{T}}^{d}\right) }\inf\limits_{\mu\in\mathcal{P}\left( {\mathbb{T}}^{d}\right)}\left\lbrace\frac{1}{2}\Big\|\theta-\overline{c}_{\mu}(\phi)-\frac{1}{\gamma}\left(\Delta_{a_{\mu}}+b_{\mu}\right)(\phi)\Big\|_{a_{\mu}^{-1}(\phi)}^{2}\right\rbrace

    and

    \Lambda_{2}(\theta): = \inf\limits_{\mu\in\mathcal{P}\left( {\mathbb{T}}^{d}\right)}\left\lbrace\int_{ {\mathbb{T}}^{d}}\int_{ {\mathbb{R}}^{d}}\varrho\left(\frac{\left\|\theta\right\|}{\left\|k(x, y)\right\|} \right)\nu(dy)\mu(dx)\right\rbrace .

Since \overline{\mathcal{J}}_{\gamma}(\theta) is the logarithmic moment generating function of X_{1}+X_{2} , it follows that its Fenchel-Legendre transform is

    \mathcal{J}_{\gamma}(\theta): = \Lambda_{1, \gamma}(\theta)+\Lambda_{2}(\theta).
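In practice a Fenchel-Legendre transform can be approximated by taking the supremum over a fine grid of \theta values. The hedged sketch below (our own names; we assume \varrho(u) = u\log u - u + 1 , the standard Poisson rate function) checks the grid computation against the closed form for the Poisson case:

```python
import numpy as np

# Hedged numerical sketch: Fenchel-Legendre transform via a sup over a grid.
# For the Poisson log-MGF  Lambda(theta) = lam*(exp(theta) - 1)  the transform
# at v > 0 is  v*log(v/lam) - v + lam = lam * rho(v/lam)  with
# rho(u) = u*log(u) - u + 1 (assumed to be the function rho used above).

def legendre(Lambda, v, thetas):
    """sup over the grid of  v*theta - Lambda(theta)."""
    return np.max(v * thetas - Lambda(thetas))

lam, v = 2.0, 3.0
thetas = np.linspace(-5.0, 5.0, 200_001)          # grid containing the optimizer
num = legendre(lambda t: lam * (np.exp(t) - 1.0), v, thetas)
exact = v * np.log(v / lam) - v + lam             # = lam * rho(v / lam)
```

The grid supremum always underestimates the true transform, and the gap is of second order in the grid spacing because the objective is smooth and concave in \theta .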

For the final assertion of Theorem 4, we observe that \mathcal{J}_{\gamma} is convex.

We also need to know that X_{t}^{x, \varepsilon, \delta_{\varepsilon}} is exponentially tight in the path space. Indeed, let \mathcal{D}\left([0, T], {\mathbb{R}}^{d}\right) be the space of functions mapping [0, T] into {\mathbb{R}}^{d} that are right continuous with left limits. \mathcal{D}\left([0, T], {\mathbb{R}}^{d}\right) is metrized by the Skorohod metric, with respect to which it is complete and separable. Then, as in Proposition 3.2 of [9], we have

Remark 5. Assume that hypotheses (\boldsymbol{H.1}) and (\boldsymbol{H.2}) hold. For any fixed T > 0, \ x\in {\mathbb{R}}^{d} and \alpha\in(0, 1/2) ,

    \limsup\limits_{\varrho\to \infty}\limsup\limits_{\varepsilon\to 0}\varepsilon\log\mathbb{P}\Big\lbrace\left\|X_{T}^{x, \varepsilon, \delta_{\varepsilon}} \right\|_{\mathcal{D}^{\alpha}\left([0, T], {\mathbb{R}}^{d}\right)}\geqslant\varrho \Big\rbrace = -\infty.

To complete this part, we point out that the previous proof also provides the LDP in the path space \mathcal{D}\left([0, T], {\mathbb{R}}^{d}\right) . Let us introduce the following definition:

    S_{0, T}(\varphi): = \left\lbrace \begin{array}{ll} \int_{0}^{T}\mathcal{J}\left(\dot{\varphi}(s)\right)ds & {\rm if } \varphi\in \mathcal{D}\left([0, T], {\mathbb{R}}^{d} \right)\ {\rm and }\ \varphi(0) = x\\ &\\ +\infty & {\rm else. } \end{array} \right.

    Since the function \mathcal{J} is convex we can show that

\inf\limits_{\substack{\varphi\in \mathcal{D}\left([0, T], {\mathbb{R}}^{d}\right)\\ \varphi(0) = x, \ \varphi(T) = z}}\int_{0}^{T}\mathcal{J}\left(\dot{\varphi}(s)\right)ds = T\mathcal{J}\left(\dfrac{z-x}{T} \right) .
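This identity is the standard consequence of Jensen's inequality; for completeness, one line of the verification:

```latex
% For any path \varphi with \varphi(0)=x, \varphi(T)=z, apply Jensen's
% inequality to the convex function \mathcal{J} and the probability
% measure ds/T on [0,T]:
\int_{0}^{T}\mathcal{J}\big(\dot{\varphi}(s)\big)\,ds
  \;=\; T\cdot\frac{1}{T}\int_{0}^{T}\mathcal{J}\big(\dot{\varphi}(s)\big)\,ds
  \;\geqslant\; T\,\mathcal{J}\Big(\frac{1}{T}\int_{0}^{T}\dot{\varphi}(s)\,ds\Big)
  \;=\; T\,\mathcal{J}\Big(\frac{z - x}{T}\Big),
```

with equality for the straight-line path \varphi(s) = x + s(z-x)/T , which therefore attains the infimum.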

    So we have

Remark 6. Let T > 0 and assume that hypotheses (\boldsymbol{H.1}) and (\boldsymbol{H.2}) hold. Then for every x\in {\mathbb{R}}^{d} , the family \left\lbrace X^{x, \varepsilon, \delta_{\varepsilon}}_{T}: \varepsilon > 0\right\rbrace of \mathcal{D}\left([0, T], {\mathbb{R}}^{d}\right) -valued random variables satisfies a large deviation principle with good rate function S_{0, T} .

We have derived the general expression of the action functional for stochastic processes with jumps. In particular, we generalize Regime 2 of Freidlin and Sowers [9] for classical SDEs, as well as the version in Baldi [1] for a family of measures on a topological vector space. Furthermore, we have analysed a general LDP arising in stochastic homogenization, extending the result in [4].

Before finishing, we note that the large deviation principle established in this paper generalizes Schilder's theorem, which is useful both to probabilists interested in trajectory-level properties of stochastic processes and to statisticians interested in the Wiener measure of small balls. Such results have implications in random optimization and in Bayesian statistics, for instance when studying trajectories of posterior density estimates or regression problems. For examples of how highly theoretical probabilistic developments can lead to unexpected statistical applications, see [7,14].

The authors thank the anonymous referee(s) for their valuable comments, which helped clarify and refine this paper.

    The authors declare no conflict of interest.



[1] P. Baldi, Large deviations for diffusion processes with homogenization applications, Ann. Probab., 19 (1991), 509–524.
[2] P. H. Baxendale, D. W. Stroock, Large deviations and stochastic flows of diffeomorphisms, Probab. Th. Rel. Fields, 80 (1988), 169–215.
    [3] D. Applebaum, Lévy Processes and Stochastic Calculus, Cambridge University Press, 2009.
    [4] C. Manga, A. Coulibaly, A. Diedhiou, On jumps stochastic evolution equations with application of homogenization and large deviations, J. Math. Res., 11 (2019), 125–134.
    [5] A. Dembo, O. Zeitouni, Large Deviations Techniques and Applications, Boston: Jones and Bartlett, 1998.
    [6] J. Feng, T. G. Kurtz, Large Deviations for Stochastic Processes, Providence: American Mathematical Society, 2006.
[7] J. Kuelbs, W. V. Li, W. Linde, The Gaussian measure of shifted balls, Probab. Th. Rel. Fields, 98 (1994), 143–162.
    [8] M. I. Freidlin, Functional Integration and Partial Differential Equations, Princeton: Princeton University Press, 1985.
    [9] M. I. Freidlin, R. B. Sowers, A comparison of homogenization and large deviations, with applications to wavefront propagation, Stoch. Proc. Appl., 82 (1999), 23–52.
    [10] N. Ikeda, S. Watanabe, Stochastic Differential Equations and Diffusion Processes, Elsevier, 2014.
[11] E. Pardoux, A. Yu. Veretennikov, On the Poisson equation and diffusion approximation I, Ann. Probab., 29 (2001), 1061–1085.
[12] M. Röckner, T. Zhang, Stochastic evolution equations of jump type: Existence, uniqueness and large deviation principles, Potential Anal., 26 (2007), 255–279.
    [13] S. R. S. Varadhan, Large Deviations and Applications, Philadelphia: Society for Industrial and Applied Mathematics, 1984.
    [14] A. W. van der Vaart, J. H. van Zanten, Rates of contraction of posterior distributions based on Gaussian process priors, Ann. Statist., 36 (2008), 1435–1463.
    [15] H. Y. Zhao, S. Y. Xu, Freidlin-Wentzell's large deviations for stochastic evolution equations with Poisson jumps, Adv. Pure Math., 6 (2016), 676–694.
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
