
Citation: Shang Wu, Pengfei Xu, Jianhua Huang. Invariant measure of stochastic damped Ostrovsky equation driven by pure jump noise[J]. AIMS Mathematics, 2020, 5(6): 7145-7160. doi: 10.3934/math.2020457
In this paper, we concentrate on the following stochastic damped Ostrovsky equation driven by pure jump noise
\begin{eqnarray} \begin{cases} du(t) = [\beta\partial_{x}^{3}u(t)-\frac{1}{2}\partial_{x}(u^{2})+\lambda u+\gamma \partial_{x}^{-1}u]dt+\int_{Z}g(u(t-),z)\eta(dt,dz),\\ u(0) = u_{0}, \end{cases} \end{eqnarray} | (1.1) |
where \lambda > 0 , \beta and \gamma are real numbers such that \beta\gamma\neq 0 , \partial_{x}^{-1}f = ((i\xi)^{-1}\hat{f}(\xi))^{\vee} , and \eta(dt, dz) denotes the Poisson counting measure. The Ostrovsky equation was introduced in [17] to describe weakly nonlinear long waves in a rotating liquid, where the parameter \gamma measures the effect of rotation, which is assumed to be small; \beta accounts for capillary waves on the surface of a liquid or for oblique magneto-acoustic waves in plasma; and \lambda > 0 corresponds to positive dispersion.
For the stochastic Ostrovsky equation driven by white noise, Isaza [11,12,13] proved well-posedness in H^{s}(\mathbb{R}) with s > -\frac{3}{4} . Yan et al. [21,22] obtained local well-posedness of the Cauchy problem for (1.1) with initial data u_{0}(\cdot, \omega)\in H^{s}(\mathbb{R}) (a.e. \omega\in\Omega ) for s > -\frac{3}{4} , and global well-posedness in L^{2}(\mathbb{R}) (a.e. \omega\in\Omega ). Chen [4] established the existence of a random attractor in \tilde{L}^{2}(\mathbb{R}) . Wu et al. [19] established the ergodicity of the stochastic damped Ostrovsky equation with Gaussian noise, but did not discuss ergodicity of the equation with Lévy noise.
Dispersive partial differential equations such as the Schrödinger equation, the KdV equation and the Ostrovsky equation are integrable systems with infinitely many conservation laws. If \gamma = 0 and \lambda = 0 , then (1.1) reduces to a stochastic KdV equation. Bouard et al. [2,3] established local well-posedness for the stochastic KdV equation with additive and multiplicative noise, respectively. The dissipative effect of damped dispersive systems is too weak to apply the theory of dissipative dynamical systems. Two typical difficulties must be overcome to establish ergodicity for stochastic dispersive equations on \mathbb{R} : one is the non-compactness of the whole real line \mathbb{R} , and the other is the non-compactness of the operator semigroup of the stochastic linear dispersive equation. Recently, Ekren, Kukavica and Ziane [9,10] applied a stopping time technique to establish uniform bounds, which yield tightness of the measure sequences in the locally square integrable space L^{2}_{loc}(\mathbb{R}) ; they also used convergence in measure in a Hilbert space, following the idea of [16], to obtain the existence of an invariant measure, but they could not establish uniqueness or ergodicity; we refer to [9] for details.
Recently, Bouard et al. [1] studied the nonlinear Schrödinger equation driven by a jump process and proved the existence of a martingale solution of the nonlinear Schrödinger equation with jump processes of infinite activity, but they did not study the ergodicity of the invariant measure for the stochastic Schrödinger equation with jump noise. In fact, there are many papers on ergodicity for stochastic dissipative partial differential equations driven by jump processes: Dong and Xie [7] studied ergodicity of stochastic 2D Navier-Stokes equations with Lévy noise, and Dong et al. [6] studied the existence of martingale solutions of stochastic 3D Navier-Stokes equations with jump noise. To the best of our knowledge, there are few papers on the ergodicity of stochastic damped dispersive PDEs with Lévy noise, due to the difficulties mentioned above.
The goal of this paper is to study the invariant measure of the stochastic damped Ostrovsky equation with pure jump noise. The novelty of the present paper is to obtain uniform bounds on the solutions in H^{1}(\mathbb{R}) and L^{2}(\mathbb{R}) , respectively, which are the key tools to prove tightness of the measure sequences in the locally square integrable space L^{2}_{loc}(\mathbb{R}) . By using convergence in measure in a Hilbert space, following the idea of [16], we obtain the existence of an invariant measure. Moreover, by applying the idea of Proposition 3.2.7 in [5], we prove that the invariant measure is unique if the initial value is non-random. Another novelty of the current paper is that the numerical simulation of \mathbb{E}\|u(t,\cdot)\|_{L^{2}_{x}} reveals that the stochastic damped Ostrovsky equation driven by pure jump noise may possess a unique ergodic invariant measure.
We state our main result as follows:
Theorem 1.1. Assume that u_{0}\in X_{1} and Hypotheses (H1)–(H3) are satisfied. Then the stochastic damped Ostrovsky equation (1.1) admits an invariant measure. Moreover, it possesses an ergodic invariant measure for non-random initial data.
The rest of the paper is organized as follows. In Section 2, the function-space setting and some useful lemmas are provided. In Section 3, uniform bounds on the solutions in H^{1}(\mathbb{R}) and L^{2}(\mathbb{R}) are established, which are the key tools to obtain ergodicity. By using the methods of [1,Appendix A], we prove the existence of an invariant measure for the stochastic damped Ostrovsky equation in Section 4. Moreover, by applying the idea of Proposition 3.2.7 in [5], we prove that the invariant measure is unique if the initial value is restricted to be non-random. In Section 5, numerical simulations are provided to support the theoretical results.
In this section, we introduce some basic concepts and some inequalities, which are from [1] and [18].
Let (\Omega, \mathcal{F}, \mathcal{F}_{t}, \mathbb{P}) be a complete probability space, (Z, \mathcal{B}(Z)) a measurable space and \nu(dz) a \sigma -finite measure on it. Let P = (p(t), t\in D_{p}) be a stationary \mathcal{F}_{t} -Poisson point process on Z with characteristic measure \nu(dz) , where D_{p} is a countable subset of [0, +\infty) depending on the random parameter \omega\in\Omega . Denote by \eta(dt, dz) the Poisson counting measure associated with p , that is, \eta((0, t], A) = \sum_{s\in D_{p}, s\leqslant t}1_{A}(p(s)) , where 1_{A}(\cdot) is the indicator function of A . Let \tilde{\eta}(dt, dz) = \eta(dt, dz)-dt\nu(dz) be the compensated Poisson measure. If \nu(Z) < \infty , \tilde{\eta} is a martingale with \langle\tilde{\eta}(Z)\rangle_{t} = t\nu(Z) , and \nu is the intensity measure of \eta .
We impose the following hypotheses (H1)–(H3) on the Poisson process, as in [8]:
(H1) For any u∈X1, there exists a constant C<∞ such that
\int_{Z}\|g(u,z)\|_{X_{1}}^{2}\nu(dz)\leqslant C(1+\|u\|_{X_{1}}^{2}), |
(H2) \nu(\{0\}) = 0 , \int_{Z}\|z\|_{Z}^{2}\nu(dz) < \infty and \nu(Z) = \rho < \infty ,
(H3) Z is continuously embedded in H1(R).
The measure \nu describes the expected number of jumps of a given size in a time interval of length 1. These hypotheses mean that only a finite number of jumps is expected in any finite time interval.
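Since (H2) makes the intensity \nu(Z) = \rho finite, the driving noise is of compound Poisson type: inter-arrival times of jumps are exponential with rate \rho , and each mark is drawn from the normalized measure \nu/\rho . The following Python sketch samples the atoms (t_{k}, z_{k}) of such a Poisson random measure on [0, T] ; the rate and the Gaussian mark law are illustrative assumptions, not part of the hypotheses.

```python
import numpy as np

def sample_poisson_measure(T, rho, sample_mark, rng):
    """Sample the atoms (t_k, z_k) of a finite-activity Poisson random
    measure eta on [0, T] x Z with intensity dt x nu(dz), nu(Z) = rho.
    Jump times have Exp(rho) inter-arrival times; marks are i.i.d.
    draws from the normalized measure nu / rho."""
    times, marks, t = [], [], 0.0
    while True:
        t += rng.exponential(1.0 / rho)   # next arrival time
        if t > T:
            break
        times.append(t)
        marks.append(sample_mark(rng))    # z_k ~ nu / rho
    return np.array(times), np.array(marks)

rng = np.random.default_rng(0)
# illustrative mark law: standard normal on Z = R
times, marks = sample_poisson_measure(10.0, rho=2.0,
                                      sample_mark=lambda r: r.normal(),
                                      rng=rng)
# on average rho * T = 20 jumps occur in [0, 10]
```

The integral \int_0^t\int_Z g(u(s-),z)\,\eta(ds,dz) then reduces to the finite sum of g evaluated at these atoms, which is the basis of the interlacing construction used in Section 3.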
Let Y be a separable and complete metric space and T>0. The space X(T)=D([0,T];Y) denotes the space of all right continuous functions x:[0,T]→Y with left limits, P(D([0,T];Y)) the space of Borel probability measures on D([0,T];Y). We equip D([0,T];Y) with the Skorohod topology such that D([0,T];Y) is both separable and complete.
H^{s}(\mathbb{R}) denotes the Sobolev space with norm
\|f\|_{H^{s}(\mathbb{R})} = \|\langle\xi\rangle^{s}\mathcal{F}_{x}f\|_{L^{2}_{\xi}(\mathbb{R})}, |
where \langle\xi\rangle^{s} = (1+\xi^{2})^{\frac{s}{2}} for any \xi\in\mathbb{R} , and \mathcal{F}_{x}u and \mathcal{F}_{x}^{-1}u denote the Fourier transform and inverse Fourier transform of u with respect to its space variable, respectively. \dot{H}^{s_{1},s_{2}}(\mathbb{R}) is the Sobolev space with norm
\|f\|_{\dot{H}^{s_{1},s_{2}}(\mathbb{R})} = \|\langle\xi\rangle^{s_{1}}|\xi|^{s_{2}}\mathcal{F}_{x}f\|_{L^{2}_{\xi}(\mathbb{R})}, |
and \dot{H}^{s} = \dot{H}^{0,s} . With this choice of the antiderivative we have \partial_{x}^{-1}f = (\frac{\hat{f}(\xi)}{i\xi})^{\vee} , so it is natural to define the function space X_{s} as in [15]:
X_{s} = \{f\in H^{s}(\mathbb{R}):\partial_{x}^{-1}f\in L^{2}(\mathbb{R})\}, \quad s\in\mathbb{R}. |
S(\mathbb{R}) denotes the Schwartz space and S'(\mathbb{R}) its dual space. \mathcal{F}u and \mathcal{F}^{-1}u denote the Fourier transform and inverse Fourier transform of u with respect to all its variables, respectively.
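On a periodic grid, the weighted Fourier norms above can be approximated directly from the discrete Fourier transform. A minimal Python sketch of the H^{s} and \dot{H}^{s_1,s_2} norms follows; the periodic truncation and quadrature scaling are illustrative choices, and s_2\geq 0 is assumed so that |\xi|^{s_2} is finite at \xi = 0 .

```python
import numpy as np

def sobolev_norm(u, L, s1=0.0, s2=0.0):
    """Discrete analogue of || <xi>^{s1} |xi|^{s2} F_x u ||_{L^2_xi}
    for N samples of a function on the periodic interval [0, L).
    s2 = 0 gives the H^{s1} norm; s1 = 0 gives the dot-H^{s2} norm
    (s2 >= 0 assumed)."""
    N = len(u)
    xi = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # grid frequencies
    u_hat = np.fft.fft(u) * (L / N)                 # quadrature-scaled DFT
    weight = (1.0 + xi**2) ** (s1 / 2.0) * np.abs(xi) ** s2
    # Plancherel: ||f||^2 = (1/(2*pi)) * int |F f|^2 d(xi), d(xi) = 2*pi/L
    return np.sqrt(np.sum(np.abs(weight * u_hat) ** 2) / L)
```

For example, for u(x) = \sin x on [0, 2\pi) the sketch recovers \|u\|_{L^2} = \sqrt{\pi} and \|u\|_{H^1} = \sqrt{2\pi} , since the only active frequencies are \xi = \pm 1 .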
The following lemma, taken from [1], is a key tool to prove ergodicity for the stochastic equation (1.1).
Lemma 2.1 (Appendix A.1, [1]). Let \{x_{n}:n\in\mathbb{N}\} be a sequence of càdlàg processes, each defined on a probability space (\Omega_{n}, \mathcal{F}_{n}, \mathbb{P}_{n}) . Then the sequence of laws of \{x_{n}:n\in\mathbb{N}\} is tight on D([0,T];Y) if
(a) there exists a space Y_{1} , with Y_{1}\hookrightarrow Y compactly, and some r > 0 , such that
\mathbb{E}_{n}|x_{n}(t)|_{Y_{1}}^{r}\leq C, \quad\forall n\in\mathbb{N}. |
(b) there exist constants c > 0 , \gamma > 0 and r > 0 such that for all \theta\in[0, T] , t\in[0, T-\theta] , and n\geq 0 , we have
\mathbb{E}_{n}\sup\limits_{t\leq s\leq t+\theta}|x_{n}(s)-x_{n}(t)|_{Y}^{r}\leq c\theta^{\gamma}. |
Denote
\begin{eqnarray*} &&\phi(\xi) = \beta\xi^{3}+\frac{\gamma}{\xi}, \quad S_{\lambda}(t) = e^{\lambda t} U(t),\\ &&U(t)u_{0} = e^{-\lambda t}\int_{\mathbb{R}} e^{i(x\xi-t\phi(\xi))}\mathcal{F}_{x}{u}_{0}(\xi)d\xi,\\ &&\|f\|_{L_{t}^{q}L_{x}^{p}} = \left(\int_{\mathbb{R}} \left(\int_{\mathbb{R}}|f(x,t)|^{p}dx\right)^{\frac{q}{p}}dt\right)^{\frac{1}{q}}, \quad \|f\|_{L_{xt}^{p}} = \|f\|_{L_{t}^{p}L_{x}^{p}}. \end{eqnarray*} |
Then the mild solution of equation (1.1) can be written as:
\begin{equation} u(t) = S_{\lambda}(t)u_{0}-\int_{0}^{t}S_{\lambda}(t-s)u\partial_{x}u\,ds+\int_{0}^{t}\int_{Z}S_{\lambda}(t-s)g(u(s-),z)\eta(ds,dz). \end{equation} | (2.1) |
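The group U(t) defined above is a Fourier multiplier: \mathcal{F}_{x}[U(t)u_{0}](\xi) = e^{-\lambda t}e^{-it\phi(\xi)}\mathcal{F}_{x}u_{0}(\xi) with \phi(\xi) = \beta\xi^{3}+\gamma/\xi , and S_{\lambda}(t) is recovered by multiplying with e^{\lambda t} . The Python sketch below applies U(t) on a periodic grid; the grid size and the convention of dropping the singular zero mode (consistent with requiring \partial_{x}^{-1}u\in L^{2} , i.e. zero-mean data) are illustrative assumptions.

```python
import numpy as np

def apply_U(t, u0, L, beta, gamma, lam):
    """Apply the linear propagator U(t) spectrally on a periodic grid:
    each Fourier mode is multiplied by exp(-lam*t) * exp(-1j*t*phi(xi)),
    phi(xi) = beta*xi**3 + gamma/xi.  The zero mode, where phi is
    singular, is set to zero (zero-mean data)."""
    N = len(u0)
    xi = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    u_hat = np.fft.fft(u0)
    phi = np.zeros_like(xi)
    nz = xi != 0.0
    phi[nz] = beta * xi[nz] ** 3 + gamma / xi[nz]
    mult = np.exp(-lam * t) * np.exp(-1j * t * phi)
    mult[~nz] = 0.0                      # drop the singular zero mode
    return np.real(np.fft.ifft(mult * u_hat))
```

Since |e^{-it\phi(\xi)}| = 1 , the discrete L^2 norm of U(t)u_{0} equals e^{-\lambda t} times that of u_{0} for zero-mean data, which gives a simple sanity check of the implementation.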
Definition 2.1. Assume that (X, \mathcal{T}) is a Polish space, \Sigma a \sigma -algebra on X , and M a set of measures on \Sigma . M is said to be tight if for any \varepsilon > 0 there exists a compact set K_{\varepsilon}\subset X such that
\mu(K_{\varepsilon}^{c}) \lt \varepsilon \quad \text{for any } \mu\in M. |
The following lemma, from [5], is the key tool to prove ergodicity of the invariant measure for the stochastic equation (1.1).
Lemma 2.2 ([5], Proposition 3.2.7). An invariant probability measure for the semigroup P_t , t\geq 0 , is ergodic if and only if it is an extremal point of the set of all invariant probability measures for the semigroup P_t , t\geq 0 .
In this section, we establish uniform estimates of the solution in the L^2 and H^1 norms, respectively, which are the key tools to obtain ergodicity for the stochastic Ostrovsky equation (1.1).
Lemma 3.1 ([22]). There exist a monotonically increasing continuous function \tilde{C_{1}}(t) and a constant \alpha > 0 such that, for all h, g\in X(T) ,
\parallel \int_{0}^{t}S_{\lambda}(t-s)h(s)\partial_{x}g(s)ds \parallel_{X(T)} \leqslant \tilde{C_{1}}(T)T^{\alpha}\parallel h\parallel_{X(T)}\parallel g\parallel_{X(T)}. |
Here \|u\|_{X(T)} = \sup_{s \in [0, T]} \| u(s)\|_{H^1} .
Lemma 3.2 ([22]). There exists a monotonically increasing continuous function \tilde{C_{2}}(t) such that
\parallel S_{\lambda}(t)u_{0}\parallel_{X(T)}\leqslant \tilde{C_{2}}(T)\parallel u_{0}\parallel_{X(T)}. |
Lemma 3.3. Suppose u(t)\in X_1 solves equation (2.1) and Hypotheses (H1)–(H3) are satisfied. Then there exists a constant C > 0 such that
\begin{equation} \mathbb{E}[\sup\limits_{0\leqslant t\leqslant T}\|u\|_{L^{2p}}^{2}]\leqslant C(1+\mathbb{E}[\|u_{0}\|_{L^{2p}}^{2}]), \end{equation} | (3.1) |
\begin{equation} \mathbb{ E}[\sup\limits_{0\leqslant t\leqslant T}\|u\|_{H^{1}}^{2}]\leqslant C(1+\mathbb{E}[\|u_{0}\|_{H^{1}}^{2}]). \end{equation} | (3.2) |
Proof. The proof of (3.1) can be obtained by slightly modifying the proof of Lemma 5.1 in [22]; we only prove (3.2). Let
I(u) = \int_R \beta (\partial_x u)^2+\frac{\gamma}{2} (\partial_x^{-1} u)^2+\frac{1}{3} u^3 dx. |
Applying the Itô formula to I(u) yields
\begin{eqnarray*} I(u(t)) &\leq& I(u(0))+\int_{0}^{t}\int_{Z}(\|\partial_x g(u(s-),z)\|_{L^{2}}^{2}+2\langle \partial_xu(s-),\partial_xg(u(s-),z)\rangle)\tilde{\eta}(ds,dz)\\ &&+\int_{0}^{t}\int_{Z}(\|\partial_x g(u(s),z)\|_{L^{2}}^{2}+2\langle \partial_xu(s), \partial_xg(u(s),z)\rangle)ds\nu(dz)\\ &&+\int_{0}^{t}\int_{Z}(\|\partial_x^{-1} g(u(s-),z)\|_{L^{2}}^{2}+2\langle \partial_x^{-1}u(s-), \partial_x^{-1}g(u(s-),z)\rangle)\tilde{\eta}(ds,dz)\\ &&+\int_{0}^{t}\int_{Z}(\|\partial_x^{-1} g(u(s),z)\|_{L^{2}}^{2}+2\langle \partial_x^{-1}u(s), \partial_x^{-1}g(u(s),z)\rangle)ds\nu(dz)\\ &&+\int_{0}^{t}\int_{Z}\| u(s-)+g(u(s-),z)\|^3_{L^3}-\| u(s-) \|^3_{L^3} \tilde{\eta}(ds,dz)\\ &&+\int_{0}^{t}\int_{Z}\| u(s)+g(u(s),z)\|^3_{L^3}-\| u(s) \|^3_{L^3}ds\nu(dz)\\ && = I(u(0))+M_1+I_1+M_2+I_2+M_3+I_3. \end{eqnarray*} |
Direct computation shows that
\begin{eqnarray*} &&[M_{1}^{\tau_{N}},M_{1}^{\tau_{N}}]_{t}^{\frac{1}{2}} = [M_1,M_1]_{\tau_{N}\wedge t}^{\frac{1}{2}}\\ && = (\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}(\|\partial_x g(u(s-),z)\|_{L^{2}}^{2}+2\langle u(s-),g(u(s-),p(s))\rangle)^{2})^{\frac{1}{2}}\\ &&\leq(2\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}(\|\partial_x g(u(s-),z)\|_{L^{2}}^{4}+8\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\|\partial_x u(s-)\|_{L^{2}}^{2}\|\partial_x g(u(s-),p(s))\|_{L^{2}}^{2})^{\frac{1}{2}}\\ &&\leq C(\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\|\partial_x g(u(s-),z)\|_{L^{2}}^{4})^{\frac{1}{2}} +C(\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\|\partial_x u(s-)\|_{L^{2}}^{2}\|\partial_x g(u(s-),p(s))\|_{L^{2}}^{2})^{\frac{1}{2}}\\ &&\leq C\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\|\partial_x g(u(s-),z)\|_{L^{2}}^{2}+C(\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\|\partial_x u(s-)\|_{L^{2}}^{2}\|\partial_x g(u(s-),p(s))\|_{L^{2}}^{2})^{\frac{1}{2}}\\ &&\leq C\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\|\partial_x g(u(s-),z)\|_{L^{2}}^{2} +C\sup\limits_{s\leq t\wedge\tau_{N}}\|\partial_x u(s-)\|_{L^{2}}(\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\|\partial_x g(u(s-),p(s))\|_{L^{2}}^{2})^{\frac{1}{2}}\\ &&\leq C\sum\limits_{s\in D_{p},s\leq \tau_{N}\wedge t}\|\partial_x g(u(s-),z)\|_{L^{2}}^{2}+\frac{1}{2}\sup\limits_{s\leq t\wedge\tau_{N}}\|\partial_x u(s-)\|_{L^{2}}^{2}. \end{eqnarray*} |
Taking the expectation and using (H1)–(H3) gives
\begin{eqnarray*} &&\mathbb{E}[\sup\limits_{0\leqslant s\leqslant t}|M_{1}^{\tau_{N}}|]\leqslant C\mathbb{E}([M_{\tau_{N}},M_{\tau_{N}}]_{t}^{\frac{1}{2}})\\ &&\leq C\mathbb{E}[\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\|\partial_x g(u(s-),z)\|_{L^{2}}^{2}]+\frac{1}{2}\mathbb{E}[\sup\limits_{s\leqslant t\wedge\tau_{N}}\|\partial_x u(s-)\|_{L^{2}}^{2}]\\ && = C\mathbb{E}[\int_{0}^{t\wedge \tau_{N}}\int_{Z}\|\partial_x g(u(s-),z)\|_{L^{2}}^{2}ds\nu(dz)]+\frac{1}{2}E[\sup\limits_{s\leq t\wedge\tau_{N}}\|\partial_x u(s-)\|_{L^{2}}^{2}]\\ &&\leq C\mathbb{E}[\int_{0}^{t\wedge \tau_{N}}(1+\|\partial_x u(s-)\|_{L^{2}}^{2})ds] +\frac{1}{2}\mathbb{E}[\sup\limits_{s\leqslant t\wedge\tau_{N}}\|\partial_x u(s-)\|_{L^{2}}^{2}]\\ &&\leq C(1+N)t+\frac{1}{2}N \lt \infty. \end{eqnarray*} |
We can deduce that for I_{1}(t\wedge\tau_{N}) ,
\begin{eqnarray*} &&\mathbb{E}[I_{1}(t\wedge\tau_{N})] = E[\int_{0}^{t\wedge\tau_{N}}\int_{Z}(\|\partial_x g(u(s),z)\|_{L^{2}}^{2} +2\langle u(s),g(u(s),z)\rangle)ds\nu(dz)]\\ &&\leq \mathbb{E}[\int_{0}^{t\wedge\tau_{N}}\int_{Z}(\|\partial_x g(u(s),z)\|_{L^{2}}^{2}+2\|\partial_x u(s)\|_{L^{2}}^{2}\|\partial_x g(u(s),z)\|_{L^{2}}^{2})ds\nu(dz)]\\ &&\leq 2\mathbb{E}[\int_{0}^{t\wedge\tau_{N}}\int_{Z}\|\partial_x g(u(s),z)\|_{L^{2}}^{2}ds\nu(dz)] +\mathbb{E}[\int_{0}^{t\wedge\tau_{N}}\int_{Z}\|\partial_x u(s)\|_{L^{2}}^{2}ds\nu(dz)]\\ && = 2\mathbb{E}[\int_{0}^{t\wedge\tau_{N}}\int_{Z}\|\partial_x g(u(s),z)\|_{L^{2}}^{2}ds\nu(dz)] +\mathbb{E}[\int_{0}^{t\wedge\tau_{N}}\int_{Z}\|\partial_x u(s-)\|_{L^{2}}^{2}ds\nu(dz)]\\ &&\leq C\mathbb{E}[\int_{0}^{t\wedge\tau_{N}}(1+\|\partial_x u(s-)\|_{L^{2}}^{2})ds]+\rho \mathbb{E}[\int_{0}^{t\wedge\tau_{N}}\|\partial_x u(s-)\|_{L^{2}}^{2}ds]\\ &&\leq Ct+(C+\rho)Nt \lt \infty . \end{eqnarray*} |
Similarly,
\begin{eqnarray*} &&\mathbb{E}[\sup\limits_{0\leq s\leq t}|M_{2}^{\tau_{N}}|]\leq C\mathbb{E}([M_{\tau_{N}},M_{\tau_{N}}]_{t}^{\frac{1}{2}})\\ &&\leq C\mathbb{E}[\int_{0}^{t\wedge \tau_{N}}(1+\|\partial_x^{-1} u(s-)\|_{L^{2}}^{2})ds]+\frac{1}{2}E[\sup\limits_{s\leq t\wedge\tau_{N}}\|\partial_x^{-1} u(s-)\|_{L^{2}}^{2}]\\ &&\leqslant C(1+N)t+\frac{1}{2}N \lt \infty, \end{eqnarray*} |
and
\begin{eqnarray*} &&\mathbb{E}[I_{2}(t\wedge\tau_{N})] = \mathbb{E}[\int_{0}^{t\wedge\tau_{N}}\int_{Z}(\|\partial_x^{-1} g(u(s),z)\|_{L^{2}}^{2}+2\langle u(s),g(u(s),z)\rangle)ds\nu(dz)]\\ &&\leq C\mathbb{E}[\int_{0}^{t\wedge\tau_{N}}(1+\|\partial_x^{-1} u(s-)\|_{L^{2}}^{2})ds] +\rho \mathbb{E}[\int_{0}^{t\wedge\tau_{N}}\|\partial_x^{-1} u(s-)\|_{L^{2}}^{2}ds]\\ &&\leq Ct+(C+\rho)Nt \lt \infty. \end{eqnarray*} |
For M_3 , by Agmon's inequality, \|u\|_{L^\infty}\leq \|u\|_{L^2}^{1/2}\|\partial_x u\|_{L^2}^{1/2} , we have
\begin{eqnarray*} &&[M_{3}^{\tau_{N}},M_{3}^{\tau_{N}}]_{t}^{\frac{1}{2}} = [M_3,M_3]_{\tau_{N}\wedge t}^{\frac{1}{2}}\\ && = \left(\sum\limits_{s\in D_{p},s\leq \tau_{N}\wedge t}(\|g(u(s-),z)\|_{L^{3}}^{3}+2\langle u^2(s-),g(u(s-),p(s))\rangle+2\langle u(s-),g^2(u(s-),p(s))\rangle )^{2}\right)^{\frac{1}{2}}\\ &&\leq C\left(\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\|g(u(s-),z)\|_{L^{3}}^{6}+ \sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\| \partial_x u(s-)\|_{L^{2}}^{4}\|g(u(s-),p(s))\|_{L^{2}}^{2}\right.\\ &&+ \left.\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\| u(s-)\|_{L^{2}}^{4}\|\partial_x g(u(s-),p(s))\|_{L^{2}}^{2}\right)^{\frac{1}{2}}\\ &&\leq C\left(\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\| \partial_x u(s-)\|_{L^{2}}^{2}+\|g(u(s-),p(s))\|_{L^{2}}^{4} +\| u(s-)\|_{L^{2}}^{4}+\|\partial_x g(u(s-),p(s))\|_{L^{2}}^{2}\right), \end{eqnarray*} |
and by (3.1)
\begin{eqnarray*} &&\mathbb{E}[\sup\limits_{0\leqslant s\leqslant t}|M_{3}^{\tau_{N}}|] \leq C\mathbb{E}[\sum\limits_{s\in D_{p},s\leqslant \tau_{N}\wedge t}\|\partial_x g(u(s-),z)\|_{L^{2}}^{2}]+\frac{1}{2}E[\sup\limits_{s\leqslant t\wedge\tau_{N}}\|\partial_x u(s-)\|_{L^{2}}^{2}]\\ &&\leq CE[\int_{0}^{t\wedge \tau_{N}}(1+\|\partial_x u(s-)\|_{L^{2}}^{2})ds]+\frac{1}{2}E[\sup\limits_{s\leqslant t\wedge\tau_{N}}\|\partial_x u(s-)\|_{L^{2}}^{2}] \leq C(1+N)t+\frac{1}{2}N \lt \infty,\\ && \mathbb{E}[I_{3}(t\wedge\tau_{N})] \leq C\mathbb{E}[\int_{0}^{t\wedge \tau_{N}}(1+\|\partial_x u(s-)\|_{L^{2}}^{2})ds]+\frac{1}{2}E[\sup\limits_{s\leqslant t\wedge\tau_{N}}\|\partial_x u(s-)\|_{L^{2}}^{2}]\\ && \leq C(1+N)t+\frac{1}{2}N \lt \infty. \end{eqnarray*} |
Combining the above arguments, we can obtain
\begin{eqnarray*} &&\mathbb{E}[\sup\limits_{0\leqslant s\leqslant t\wedge\tau_{N}}\|I(u(s))\|_{L^{2}}^{2}] = \mathbb{E}[\sup\limits_{0\leqslant s\leqslant t}\|I(u(s\wedge\tau_{N}))\|_{L^{2}}^{2}]\\ &&\leq \mathbb{E}[\|I(u_{0})\|_{L^{2}}^{2}]+Ct+C\mathbb{E}[\int_{0}^{t\wedge \tau_{N}}(1+\|u(s-)\|_{L^{2}}^{2})ds]+\frac{1}{2}E[\sup\limits_{s\leqslant t\wedge\tau_{N}}\| u(s-)\|_{L^{2}}^{2}]\\ &&+C\mathbb{E}[\int_{0}^{t\wedge\tau_{N}}(1+\|\partial_x u(s-)\|_{L^{2}}^{2})ds]+\rho E[\int_{0}^{t\wedge\tau_{N}}\|\partial_xu(s-)\|_{L^{2}}^{2}ds]\\ &&+CE[\int_{0}^{t\wedge \tau_{N}}(\sup\limits_{s\leqslant t\wedge\tau_{N}}\|\partial_x^{-1}u(s)\|_{L^{2}}^{2})ds]+\frac{1}{2}E[\sup\limits_{s\leqslant t\wedge\tau_{N}}\| \partial_x^{-1}u(s)\|_{L^{2}}^{2}]. \end{eqnarray*} |
Applying Gronwall's inequality leads to
\begin{eqnarray*} \mathbb{E}[\sup\limits_{0\leqslant s\leqslant t\wedge\tau_{N}}\|I( u(s))\|_{L^{2}}^{2}] \leqslant C(Ct+E[\|I( u_{0})\|_{L^{2}}^{2}])e^{Ct}. \end{eqnarray*} |
Since \tau_{N}\wedge T\rightarrow T as N\rightarrow \infty , \mathbb{P} -a.s., it follows from Young's inequality that
\begin{equation} \begin{aligned} \int_{-\infty}^{\infty} u^{3}(x, t) d x & \leq\|u(t)\|_{L^{\infty}}\left\|u_{0}\right\|_{L^{2}}^{2} \leq \sqrt{2}\left\|u_{0}\right\|_{L^{2}}^{5 / 2}\left\|\partial_{x} u(t)\right\|_{L^{2}}^{1 / 2} \\ & \leq \frac{\beta}{2}\left\|\partial_{x} u(t)\right\|_{L^{2}}^{2}+C. \end{aligned} \end{equation} | (3.3) |
This implies that
\begin{eqnarray} &&\beta\left\|\partial_{x} u(t)\right\|_{L^{2}}^{2}+\frac{\gamma}{2}\left\|\partial_{x}^{-1} u(t)\right\|_{L^{2}}^{2} \\ && = I\left(u_{0}\right)-\frac{1}{3}\|u(t)\|_{L^{3}}^{3} \leq\left|I\left(u_{0}\right)\right|+\frac{\beta}{2}\left\|\partial_{x} u(t)\right\|_{L^{2}}^{2}+C . \end{eqnarray} | (3.4) |
Then, we can prove that
\mathbb{E}[\sup\limits_{0\leqslant t\leqslant T}\|u\|_{H^{1}}^{2}]\leqslant C(1+E[\|u_{0}\|_{H^{1}}^{2}]). |
Thus, the proof of the lemma is complete.
Next, we prove the global well-posedness of (1.1) in \mathbb{D}([0, T], H^1) by the method developed in [1].
Theorem 3.1. Assume that conditions (H1)–(H3) are satisfied. Then equation (1.1) admits a unique global mild solution in X_1 , which is càdlàg.
Proof. Let \{\tau_{n}:n\in \mathbb{N}\} be a family of independent exponentially distributed random variables with parameter \rho , and set
T_{n} = \sum\limits_{j = 1}^{n}\tau_{j}, n\in \mathbb{N}. |
Define \{N(t):t\geqslant 0\} as the counting process
N(t) = \Sigma_{j = 1}^{\infty}1_{[T_{j},\infty)}(t), t\geqslant 0. |
Then for any fixed t > 0 , N(t) is a Poisson random variable with parameter \rho t . Let \{Y_{n}:n\in \mathbb{N}\} be a family of independent random variables with common distribution \nu/\rho . Then
\begin{eqnarray*} \int_{0}^{t}\int_{Z}g(u(s-),z)\tilde{\eta}(ds,dz) = \begin{cases} -\int_{0}^{t}\int_{Z}g(u(s-),z)\nu(dz)ds, &\text{if } N(t) = 0,\\ \sum\limits_{j = 1}^{N(t)}g(u(T_{j}-),Y_{j})-\int_{0}^{t}\int_{Z}g(u(s-),z)\nu(dz)ds, &\text{if } N(t) \gt 0. \end{cases} \end{eqnarray*} |
Notice that N(t) = 0 on the interval [0, T_{1}) ; hence on this interval equation (1.1) reduces to the deterministic equation
\begin{eqnarray} \begin{cases} du(t) = [\beta\partial_{x}^{3}u(t)-\frac{1}{2}\partial_{x}(u^{2}) +\lambda u+\gamma \partial_{x}^{-1}u]dt ,\\ u(0) = u_0. \end{cases} \end{eqnarray} | (3.5) |
Thus, the mild solution of equation (3.5) can be represented as follows
u(t) = S_{\lambda}(t)u_{0}-\int_{0}^{t}S_{\lambda}(t-s)u\partial_{x}uds. |
Define the operator F by
Fu = S_{\lambda}(t)u_{0}-\int_{0}^{t}S_{\lambda}(t-s)u\partial_{x}uds. |
Then
\begin{eqnarray*} \|Fu\|_{X_1(T_{1})}\leqslant \|S_{\lambda}(t)u_{0}\|_{X_1(T_{1})}+\|\int_{0}^{t}S_{\lambda}(t-s)u\partial_{x}u ds\|_{X_1(T_{1})} \end{eqnarray*} |
for \|u\|_{X_1(T_{1})} < \infty . Lemmas 3.1 and 3.2 imply that there exists a constant C > 0 such that
\|Fu\|_{X_1(T_{1})}\leqslant C(\|u_{0}\|_{X_1(T_{1})}+T_{1}^{\alpha} \|u\|_{X_1(T_{1})}^{2}) \lt \infty. |
Thus, F maps X_1(T_{1}) into itself. Next, we will verify that the operator F is contractive. In fact, for any u_{1}, u_{2}\in X_1(T_{1}) , we have
\begin{eqnarray*} &&\|Fu_{1}-Fu_{2}\|_{X_1(T_{1})} = \|\int_{0}^{t}S_{\lambda}(t-s)(u_{1}\partial_{x}u_{1}-u_{2}\partial_{x}u_{2})ds\|_{X_1(T_{1})}\\ &&\leqslant\|\int_{0}^{t}S_{\lambda}(t-s)u_{1}\partial_{x}(u_{1}-u_{2})ds\|_{X_1(T_{1})}+\|\int_{0}^{t}S_{\lambda}(t-s)(u_{1}-u_{2})\partial_{x}u_{2}ds\|_{X_1(T_{1})}\\ &&\leqslant C_{1}(T_{1})T_{1}^{\alpha}(\|u_{1}\|_{X_1(T_{1})}+\|u_{2}\|_{X_1(T_{1})})\|u_{1}-u_{2}\|_{X_1(T_{1})}. \end{eqnarray*} |
Hence, for sufficiently small T_{1} > 0 , equation (3.5) possesses a unique local solution on [0, T_1) . In the sequel, we extend the local solution to a global one by Lemma 3.3. To this end, denote this local solution on [0, T_{1}) by u_{1} . We now consider the solution on [T_{1}, T_{2}) . Notice that a jump of size g(u_{1}(T_{1}-), Y_{1}) occurs at time T_{1} ; let u_{1}^{0} = u_{1}(T_{1}-)+g(u_{1}(T_{1}-), Y_{1}) , and consider a second process on [T_{1}, T_{2}) as follows:
\begin{eqnarray} \begin{cases} du(t) = [\beta\partial_{x}^{3}u(t)-\frac{1}{2}\partial_{x}(u^{2}) +\lambda u+\gamma \partial_{x}^{-1}u]dt ,\\ u(T_{1}) = u_{1}^{0}. \end{cases} \end{eqnarray} | (3.6) |
Similarly, we can deduce that equation (3.6) possesses a unique mild solution u_{2} on [T_{1}, T_{2}) . Repeating the above arguments, we obtain that equation (1.1) has a unique global mild solution in X(T) .
Since P(N(t) < \infty) = 1 , the solution u(t) is almost surely defined on [0, T] . It is clear that the jumps take place at the times T_{j} , with \lim\limits_{t\downarrow T_{j}}u(t) = u_{j}^{0} , and \lim\limits_{t\uparrow T_{j}}u(t) exists. Hence u is càdlàg. Thus the proof of Theorem 3.1 is complete.
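The interlacing argument above can be summarized algorithmically: draw Exp( \rho ) waiting times, evolve the deterministic equation (3.5) between jumps, and apply the jump g(u(T_{j}-), Y_{j}) at each arrival. The Python sketch below illustrates only this construction; the scalar `flow` and `jump` callables are toy stand-ins, not discretizations of (1.1) itself.

```python
import numpy as np

def interlaced_solution(u0, T, rho, flow, jump, rng):
    """Build a cadlag path by interlacing: between consecutive jump
    times T_j the deterministic flow is applied; at each T_j the state
    jumps by g(u(T_j-), Y_j).  Returns a list of (time, state) pairs."""
    t, u = 0.0, u0
    path = [(0.0, u0)]
    while True:
        tau = rng.exponential(1.0 / rho)  # Exp(rho) waiting time
        if t + tau > T:
            path.append((T, flow(u, T - t)))
            return path
        u = flow(u, tau)                  # evolve (3.5) up to T_j-
        u = u + jump(u, rng.normal())     # apply jump g(u(T_j-), Y_j)
        t += tau
        path.append((t, u))

rng = np.random.default_rng(1)
# toy stand-ins: exponential decay flow and a small multiplicative jump
flow = lambda u, dt: u * np.exp(-0.5 * dt)
jump = lambda u, y: 0.1 * y * u
path = interlaced_solution(1.0, 5.0, rho=2.0, flow=flow, jump=jump, rng=rng)
```

Replacing `flow` by a PDE solver for (3.5) and `jump` by g(\cdot, Y_j) gives exactly the piecewise construction of the proof.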
In this section, we first prove tightness for the semigroup P_t , t\geq 0 , according to Lemma 2.1, and then prove ergodicity for the stochastic equation (1.1).
Theorem 4.1. Assume that u_{0}\in H^{1}({\mathbb{R}}) , and the conditions (H1)–(H3) are satisfied. Then for any sequence of deterministic initial conditions \{u_0^{n}\} with R = \sup\limits_{n\in\mathbb{N}}\{\| u_{0}^{n}\|_{H^{1}}\} < \infty , \{P_{t_{n}}(u_{0}^{n}, \cdot):n\in\mathbb{N}\} is tight on H^{1} for any t_{n} > 0 with t_n\to\infty as n\to\infty .
Proof. Without loss of generality, we assume that t_{n} is an increasing sequence. Denote by u_{n}(t) the solution with initial condition u_{0}^{n} . We claim that \{u_{n}(t)\}_{n = 1}^{\infty} converges in H^{1}(\mathbb{R}) . In fact, since
\mathbb{E}[\sup\limits_{0\leq t\leq T}\|u\|_{H^{1}}^{2}]\leq C(1+E[\|u_{0}\|_{H^{1}}^{2}]), |
then we have
\sup\limits_{t\geq 0}\mathbb{E}[\parallel u_{n}(t_{n})\parallel _{H^{1}}^{2}] \leq C(\sup\limits_{n}(\| u_{n}(0)\| _{H^{1}}^{2})+\sup\limits_{n}(\| u_{0}\| _{L^{2}}^{2})^{2}+1): = C(R). |
Due to Lemma 2.1, it suffices to prove that there exist constants c > 0 , \gamma > 0 and r > 0 such that for all \theta > 0 , t\in [0, T-\theta] , and n\geq 0 ,
\mathbb{E} _n \sup\limits_{t\leq s\leq t+\theta} |u_n(t)-u_n(s)|_X^r\leq c \theta^\gamma. |
By the mild formulation (2.1), we get
\begin{eqnarray*} &&\mathbb{E} \|u_n(t+\theta)-u_n(s)\|^2_{H^1}\\ &&\leq \mathbb{E}\|\int_{t}^{t+\theta}S_{\lambda}(t-s)u\partial_{x}uds\|^2 +\mathbb{E}\|\int_{t}^{t+\theta}\int_{Z}S_{\lambda}(t-s)g(u(s-),z)\eta(ds,dz)\|^2. \end{eqnarray*} |
It follows from Lemma 3.1 and assumptions (H1)–(H3) that there exist \gamma > 0 such that
\mathbb{E} \|u_n(t+\theta)-u_n(s)\|^2\leq \theta^\gamma+\mathbb{E}\int_{t}^{t+\theta}\left(1+\|S_{\lambda}(t-s)g(u(s-),z)\|^2\right)ds, |
which implies that the tightness of \{P_{t_{n}}(u_{0}^{n}, \cdot):n\in\mathbb{N}\} holds, and the proof of Theorem 4.1 is complete.
Theorem 4.2. Assume that u_{0}\in H^{1}({\mathbb{R}}) and conditions (H1)–(H3) are satisfied. If K is a compact subset of H^{1} , then the family of measures \{P_{s}(v, \cdot):s\in[0, 1], v\in K\} is tight on H^1 .
Proof. We need to prove that \{P_{s}(v, \cdot):s\in[0, 1], v\in K\} possesses a convergent subsequence. Without loss of generality, since K is compact, we may assume that (s_{n}, v_{n})\in[0, 1]\times K converges to (s, v)\in[0, 1]\times K . Let u_{n}(t) be the solution with initial value v_n , and u(t) the solution with initial value v . Since u\in D([0, 1];H^{1}) , we can derive that
\lim\limits_{n\rightarrow\infty}\|u(s_{n})-u(s)\|_{H^{1}} = 0, P.a.s. |
Next, we will prove that there exists a subsequence (s_{n_{k}}, v_{n_{k}}) of the sequence (s_{n}, v_{n}) such that
\lim\limits_{k\rightarrow\infty}(\|u(s_{n_{k}})-u(s)\|_{H^{1}}+\|u_{n_{k}}(s)-u(s)\|_{H^{1}}) = 0. |
Finally, set R_{0} = \sup\limits_{v\in K}\|v\|_{H^{1}}+1 , and choose R > 0 such that \frac{C(R_{0})}{R}\leq\frac{\varepsilon}{2} , then we have
\begin{eqnarray*} &&\mathbb{P}\{\max\{\sup\limits_{s\in [0,t]}\|u(s)\|_{H^{1}}^{2},\sup\limits_{s\in [0,t]}\|u_{n}(s)\|_{H^{1}}^{2}\}\geq R\}\\ &&\leq \mathbb{P}\{\sup\limits_{s\in [0,t]}\|u(s)\|_{H^{1}}^{2}+\sup\limits_{s\in [0,t]}\|u_n(s)\|_{H^{1}}^{2}\geq R\}\\ &&\leq \frac{1}{R}\mathbb{E}[\sup\limits_{s\in [0,t]}\|u(s)\|_{H^{1}}^{2} +\sup\limits_{s\in [0,t]}\|u_{n}(s)\|_{H^{1}}^{2}]\leq\frac{C(R_{0})}{R}\leq\frac{\varepsilon}{2}. \end{eqnarray*} |
Define the stopping time \tau = \inf\{s\geq 0:4\tilde{C}(t)s^{\alpha}(\tilde{C}(t)R+\|\bar{u}\|_{X(s)}+\|\int_{0}^{s}U(s-r)\partial_x u^2dr\|_{X(s)}) > 1\} . Let \tau_{0} = \tau , and
\begin{eqnarray*} &&\tau_{k+1} = \inf\{s\geq \tau_{k}:4\tilde{C}(t)(s-\tau_{k})^{\alpha}(\tilde{C}(t)R+\|\bar{u_{\tau_{k}}}\|_{X(\tau_{k},s)}\\ &&\quad\quad\quad +\|\int_{\tau_{k}}^{s}U(s-r)\partial_x u^2dr\|_{X(\tau_{k},s)}) \gt 1\},\\ &&A_n = \{\max\{\sup\limits_{s\in [0,t]}\|u(s)\|_{H^{1}}^{2},\sup\limits_{s\in [0,t]}\|u_{n}(s)\|_{H^{1}}^{2}\}\leq R\}, \end{eqnarray*} |
and choose N such that P(\tau_{N}\leq 1)\leq \frac{\varepsilon}{2} ; then we obtain
\sup\limits_{s\in [0,1]}\|u(s)-u_n(s)\|_{H^{1}}\leq (2\tilde{C}(t))^{N+1}\|v-v_{n}\|_{H^{1}}\rightarrow 0 |
on the event A_{n}\bigcap\{\tau_{N}\geq1\} .
Choosing n large enough that (2\tilde{C}(t))^{N+1}\|v-v_{n}\|_{H^{1}} < \delta , we have
\mathbb{P}(\sup\limits_{s\in [0,1]}\|u(s)-u_{n}(s)\|_{H^{1}}\leq \delta)\geq P(A_{n}\bigcap\{\tau_{N}\geq1\})\geq 1-\varepsilon, |
which implies that
\lim\limits_{n\rightarrow\infty}\sup\limits_{s\in [0,1]}\|u(s)-u_{n}(s)\|_{H^{1}} = 0,\quad \mathbb{P}\text{-a.s.}
Therefore, there exists a sequence n_{k} such that
\lim\limits_{k\rightarrow\infty}\|u(s)-u_{n_{k}}(s)\|_{H^{1}} = 0,\quad \mathbb{P}\text{-a.s.}
We can deduce that
\begin{eqnarray*} |P_{s_{n_{k}}}\xi(v_{n_{k}})-P_{s}\xi(v)|\leq \mathbb{E}[|\xi(u_{n_{k}}(s_{n_{k}})) -\xi(u(s_{n_{k}}))|]+ \mathbb{E}[|\xi(u(s_{n_{k}}))-\xi(u(s))|]\rightarrow 0 \end{eqnarray*} |
for any real-valued uniformly continuous function \xi on H^{1}({\mathbb{R}}) . Therefore, \{P_{s}(v, \cdot):s\in[0, 1], v\in K\} has a convergent subsequence. The proof of Theorem 4.2 is complete.
Theorem 4.3. Assume that u_{0}\in H^{1}({\mathbb{R}}) and the conditions (H1)–(H3) are satisfied. Then \mu_{n}(\cdot) = \frac{1}{n}\int_{0}^{n}P_{t}(0, \cdot)dt, n = 1, 2, ... is tight on H^{1}({\mathbb{R}}) .
Proof. Since for any \varepsilon > 0 , \{P_{n}(0, \cdot):n\geq 0\} is tight, we can choose a compact set K_{\varepsilon}\subset H^{1}({\mathbb{R}}) such that \sup\limits_{n}P_{n}(0, K_{\varepsilon}^{c})\leq\frac{\varepsilon}{2} , and \{P_{s}(v, \cdot):s\in[0, 1], v\in K_{\varepsilon}\} is tight on H^{1} . We can also choose a compact set A_{\varepsilon}\subset H^{1}({\mathbb{R}}) such that \sup\limits_{s\in[0, 1], v\in K_{\varepsilon}}P_{s}(v, A_{\varepsilon}^{c})\leq\frac{\varepsilon}{2} . Therefore, we have
\begin{eqnarray*} \mu_{n}(A_{\varepsilon}^{c})& = &\frac{1}{n}\int_{0}^{n}P_{t}(0,A_{\varepsilon}^{c})dt\\ & = &\frac{1}{n}\sum\limits_{i = 0}^{n-1}\int_{i}^{i+1}\int_{H^{1}}P_{i}(0,dy)P_{t-i}(y,A_{\varepsilon}^{c})dt\\ & = &\frac{1}{n}\sum\limits_{i = 0}^{n-1}\int_{i}^{i+1}[\int_{K_{\varepsilon}}P_{i}(0,dy)P_{t-i}(y,A_{\varepsilon}^{c}) +\int_{K_{\varepsilon}^{c}}P_{i}(0,dy)P_{t-i}(y,A_{\varepsilon}^{c})]dt\\ &\leq&\frac{1}{n}\sum\limits_{i = 0}^{n-1}\int_{i}^{i+1}[\frac{\varepsilon}{2}\int_{K_{\varepsilon}}P_{i}(0,dy)+\int_{K_{\varepsilon}^{c}}P_{i}(0,dy)]dt\\ &\leq&\frac{1}{n}\sum\limits_{i = 0}^{n-1}\int_{i}^{i+1}\varepsilon dt = \varepsilon. \end{eqnarray*} |
Then we get that \mu_{n}(\cdot) = \frac{1}{n}\int_{0}^{n}P_{t}(0, \cdot)dt , n = 1, 2, ... is tight on H^{1}({\mathbb{R}}) . The proof of Theorem 4.3 is complete.
Theorem 4.4. Assume that u_{0}\in H^{1}({\mathbb{R}}) and the conditions (H1)–(H3) are satisfied. Then equation (1.1) admits an invariant measure. Moreover, equation (1.1) with a deterministic initial condition possesses an ergodic invariant measure.
Proof. Using the Krylov-Bogoliubov theorem, Theorem 4.1 and Theorem 4.3, we can prove that there exists at least one invariant measure for equation (1.1). Denote by K the set of all invariant measures; it is easy to check that K is convex. Let \{\mu_{n}\}_{n\in \mathbb{N}^{+}} be a sequence of invariant measures in K . Then there exists some constant C such that
\begin{eqnarray*} \sup\limits_{n}\int\| u\|_{H^1}^{2}\mu_{n}(du)\leq C \quad\text{and}\quad \sup\limits_{n}\int\| u\|_{H^1}^{4}\mu_{n}(du)\leq C. \end{eqnarray*}
For any deterministic initial condition, \{(\mu_{n}P_{t_{n}})(\cdot):n\in \mathbb{N}^{+}\} = \{\mu_{n}(\cdot):n\in \mathbb{N}^{+}\} is tight. Since K is sequentially compact, K is compact. It follows from the Krein-Milman theorem that a convex compact set possesses an extremal point. Theorem 2.2 yields that this extremal point is ergodic. Therefore, the stochastic equation (1.1) admits an ergodic invariant measure. Thus, the proof of Theorem 4.4 is complete.
In this section, we carry out a numerical simulation of the invariant measure. To this end, we compute the time-averaged distribution of the solution \frac{1}{T_M} \sum_{m = 0}^{T_M} \mathbb{E}\left[\Phi\left(u(t_m)\right)\right] by the so-called Monte Carlo method as follows. One can prove theoretically that the equation has a unique invariant measure, which yields the ergodicity [5].
We perform the numerical simulation to see what happens when the Ostrovsky equation is perturbed by pure jump noise. We first use the norm-conservative finite difference scheme introduced in [20, Scheme 1] to discretize equation (1.1) as
\begin{eqnarray} &&\frac{U_{j}^{(n+1)}-U_{j}^{(n)}}{\Delta t}+\beta \delta^{(3)}\left(\frac{U_{j}^{(n)}+U_{j}^{(n+1)}}{2}\right)\\ &&+\frac{1}{3}\left(\delta^{(1)}\left(\frac{U_{j}^{(n)}+U_{j}^{(n+1)}}{2}\right)^{2}+\frac{U_{j}^{(n)}+U_{j}^{(n+1)}}{2} \delta^{(1)}\left(\frac{U_{j}^{(n)}+U_{j}^{(n+1)}}{2}\right)\right)\\ && = \lambda \left(\frac{U_{j}^{(n)}+U_{j}^{(n+1)}}{2}\right)+\gamma \delta_{\mathrm{FD}}^{-1}\left(\frac{U_{j}^{(n)}+U_{j}^{(n+1)}}{2}\right)+\sum\limits_{t_i \leq s \leq t_{i+1}} g\left(\Delta L_{s}\right) 1_{A}\left(\Delta L_{s}(\omega)\right) \end{eqnarray} \quad (5.1)
where
\begin{eqnarray*} &&\delta_{\mathrm{FD}}^{-1} U_{j}^{(n)}\\ && = \Delta x\left(\frac{U_{1}^{(n)}}{2}+\sum\limits_{k = 2}^{j-1} U_{k}^{(n)}+\frac{U_{j}^{(n)}}{2}\right)-\frac{(\Delta x)^{2}}{L} \sum\limits_{k = 1}^{N}\left(\frac{U_{1}^{(n)}}{2}+\sum\limits_{l = 2}^{k-1} U_{l}^{(n)}+\frac{U_{k}^{(n)}}{2}\right),\\ &&\delta^{(1)} U_{j}^{(n)} = \frac{U_{j+1}^{(n)}-U_{j-1}^{(n)}}{2 \Delta x}, \quad \delta^{(3)} U_{j}^{(n)} = \frac{U_{j+2}^{(n)}-2 U_{j+1}^{(n)}+2 U_{j-1}^{(n)}-U_{j-2}^{(n)}}{2(\Delta x)^{3}} \end{eqnarray*}
and \sum_{s \leq t} 1_{A}\left(\Delta L_{s}(\omega)\right) is a Poisson process with parameter P \Delta t .
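As a sanity check on the difference operators above, the sketch below implements \delta^{(1)} and \delta^{(3)} on a periodic grid, with NumPy's `np.roll` supplying the periodic indexing, and verifies them against u_0(x) = \sin(x) ; the grid size is an illustrative assumption, and the antiderivative operator \delta_{\mathrm{FD}}^{-1} is omitted.

```python
import numpy as np

def delta1(U, dx):
    """Centered first difference: (U_{j+1} - U_{j-1}) / (2 dx), periodic in j."""
    return (np.roll(U, -1) - np.roll(U, 1)) / (2.0 * dx)

def delta3(U, dx):
    """Centered third difference:
    (U_{j+2} - 2 U_{j+1} + 2 U_{j-1} - U_{j-2}) / (2 dx^3), periodic in j."""
    return (np.roll(U, -2) - 2.0 * np.roll(U, -1)
            + 2.0 * np.roll(U, 1) - np.roll(U, 2)) / (2.0 * dx ** 3)

# Sanity check on u0(x) = sin(x) over one period (illustrative grid size).
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
U = np.sin(x)
err1 = np.max(np.abs(delta1(U, dx) - np.cos(x)))   # d/dx sin x = cos x
err3 = np.max(np.abs(delta3(U, dx) + np.cos(x)))   # d^3/dx^3 sin x = -cos x
```

Both stencils are second-order accurate, so on this grid the errors are of size (\Delta x)^2 \approx 6\times 10^{-4} or smaller.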
Now we set \beta = 0.01 , \gamma = 1 , \lambda = 0.5 , and u_0(x) = \sin(x) . The simulation of (1.1) driven by Poisson noise with g(u(t-), z) = 0.2u(t-)z is given in Figure 1. Figure 2 shows the time evolution of \|u(t, \cdot)\|_{L^2_x} for different sample trajectories.
It can be clearly seen from Figure 2 that the decay of the solution is slowed down under the influence of the noise. At the same time, as shown in Figure 3, for \Phi(y) = \exp(-|y|^2) the time average \frac{1}{N+1} \sum_{n = 0}^{N} \mathbb{E}\left[\Phi\left(U^{(n)}\right)\right] converges as T \to \infty , indicating that the distribution of the solution tends to an invariant measure \mu .
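This time-averaged Monte Carlo estimator can be sketched in a few lines. In the sketch below, a toy damped scalar dynamic with multiplicative compound Poisson jumps (`toy_step`) stands in for the full scheme (5.1), and all parameter values are illustrative assumptions, not those used for the figures; only the averaging structure matches the estimator above.

```python
import math
import random

def poisson_count(rate_dt, rng):
    """Sample a Poisson(rate_dt) jump count by inverse transform (Knuth's method)."""
    threshold = math.exp(-rate_dt)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def toy_step(u, dt, rng, lam=0.5, jump_rate=2.0, jump_scale=0.2):
    """One Euler step of a TOY damped scalar dynamic with compound Poisson jumps.
    This merely stands in for one step of the full scheme (5.1)."""
    u = u - lam * u * dt                              # damping drift
    for _ in range(poisson_count(jump_rate * dt, rng)):
        u += jump_scale * u * rng.gauss(0.0, 1.0)     # multiplicative kick g(u-)z
    return u

def time_averaged_observable(phi, u0, dt, n_steps, n_paths, seed=0):
    """Monte Carlo estimate of (1/T_M) * sum_m E[phi(u(t_m))]:
    average phi along each sample path in time, then over paths."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        u, path_sum = u0, 0.0
        for _ in range(n_steps):
            u = toy_step(u, dt, rng)
            path_sum += phi(u)
        total += path_sum / n_steps
    return total / n_paths

phi = lambda y: math.exp(-abs(y) ** 2)
est = time_averaged_observable(phi, u0=1.0, dt=0.01, n_steps=500, n_paths=200)
```

Since the toy dynamic is damped, u(t_m) shrinks toward 0 and \Phi(u(t_m)) climbs toward 1, so `est` stabilizes between \Phi(u_0) = e^{-1} and 1 as the horizon grows.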
The above numerical simulation of the stochastic damped Ostrovsky equation (Figure 3), in the sense of \mathbb{E} \|u(t, \cdot)\|_{L^2_x} , reveals that the stochastic damped Ostrovsky equation driven by pure jump noise possesses a unique ergodic invariant measure.
In this paper, the global well-posedness of the stochastic damped Ostrovsky equation driven by pure jump noise is established first. Then we use a tightness criterion to investigate the existence of invariant measures and prove ergodicity under a deterministic initial condition. A numerical experiment is also given to support our results.
This work is supported by the National Natural Science Foundation of China (No. 11771449) and the China Postdoctoral Science Foundation (No. 2019T120966).
[1] A. Bouard, E. Hausenblas, The nonlinear Schrödinger equation driven by jump processes, J. Math. Anal. Appl., 475 (2019), 215-252. doi: 10.1016/j.jmaa.2019.02.036
[2] A. Bouard, A. Debussche, On the stochastic Korteweg-de Vries equation, J. Funct. Anal., 154 (1998), 215-251. doi: 10.1006/jfan.1997.3184
[3] A. Bouard, A. Debussche, Y. Tsutsumi, White noise driven Korteweg-de Vries equation, J. Funct. Anal., 169 (1999), 532-558. doi: 10.1006/jfan.1999.3484
[4] Y. Chen, Random attractor of the stochastic damped forced Ostrovsky equation, J. Math. Anal. Appl., 404 (2013), 283-299.
[5] G. Da Prato, J. Zabczyk, Ergodicity for Infinite Dimensional Systems, Cambridge: Cambridge University Press, 1996.
[6] Z. Dong, J. Liang, Martingale solutions and Markov selection of stochastic 3D Navier-Stokes equations with jump, J. Diff. Eqns., 250 (2011), 2737-2778. doi: 10.1016/j.jde.2011.01.018
[7] Z. Dong, Y. Xie, Ergodicity of stochastic 2D Navier-Stokes equations with Lévy noise, J. Diff. Eqns., 251 (2011), 196-222. doi: 10.1016/j.jde.2011.03.015
[8] Z. Dong, T. Xu, T. Zhang, Invariant measures for stochastic evolution equations of pure jump type, Stoch. Proc. Appl., 119 (2009), 410-427.
[9] I. Ekren, I. Kukavica, M. Ziane, Existence of invariant measures for the stochastic damped Schrödinger equation, Stoch. PDE: Anal. Comp., 5 (2017), 343-367. doi: 10.1007/s40072-016-0090-1
[10] I. Ekren, I. Kukavica, M. Ziane, Existence of invariant measures for the stochastic damped KdV equation, Indiana Univ. Math. J., 67 (2018), 1221-1254. doi: 10.1512/iumj.2018.67.7365
[11] P. Isaza, J. Mejía, Cauchy problem for the Ostrovsky equation in spaces of low regularity, J. Diff. Eqns., 230 (2006), 661-681. doi: 10.1016/j.jde.2006.04.007
[12] P. Isaza, J. Mejía, Global Cauchy problem for the Ostrovsky equation, Nonlinear Anal. TMA, 67 (2007), 1482-1503. doi: 10.1016/j.na.2006.07.031
[13] P. Isaza, J. Mejía, Local well-posedness and quantitative ill-posedness for the Ostrovsky equation, Nonlinear Anal. TMA, 70 (2009), 2306-2316. doi: 10.1016/j.na.2008.03.010
[14] Y. Li, W. Yan, The Cauchy problem for higher-order KdV equations forced by white noise, J. Henan Normal University, 45 (2017), 1-8.
[15] F. Linares, A. Milanés, Local and global well-posedness for the Ostrovsky equation, J. Diff. Eqns., 222 (2006), 325-340. doi: 10.1016/j.jde.2005.07.023
[16] Y. Prokhorov, Convergence of random processes and limit theorems in probability theory, Theory Probab. Appl., 1 (1956), 157-214.
[17] L. Ostrovsky, Nonlinear internal waves in a rotating ocean, Okeanologiya, 18 (1978), 181-191.
[18] S. Peszat, J. Zabczyk, Stochastic Partial Differential Equations with Lévy Noise: An Evolution Equation Approach, Cambridge: Cambridge University Press, 2007.
[19] S. Wu, P. Xu, J. Huang, et al., Ergodicity of stochastic damped Ostrovsky equation driven by white noise, Discrete Cont. Dyn. B, doi: 10.3934/dcdsb.2020175.
[20] T. Yaguchi, T. Matsuo, M. Sugihara, Conservative numerical schemes for the Ostrovsky equation, J. Comput. Appl. Math., 234 (2010), 1036-1048.
[21] W. Yan, M. Yang, J. Huang, et al., The Cauchy problem for the Ostrovsky equation with positive dispersion, Nonlinear Differ. Equ. Appl., 25 (2018), 22-37. doi: 10.1007/s00030-018-0514-x
[22] W. Yan, M. Yang, J. Duan, White noise driven Ostrovsky equation, J. Diff. Eqns., 267 (2019), 5701-5735. doi: 10.1016/j.jde.2019.06.003