
Martingale solutions of stochastic nonlocal cross-diffusion systems

  • The work of Bendahmane is supported by the Fincome project

  • Received: 01 March 2022 Published: 16 June 2022
  • Primary: 60H15, 35K57; Secondary: 35M10, 35A05

  • We establish the existence of solutions for a class of stochastic reaction-diffusion systems with cross-diffusion terms modeling interspecific competition between two populations. More precisely, we prove the existence of weak martingale solutions employing appropriate Faedo-Galerkin approximations and the stochastic compactness method. The nonnegativity of solutions is proved by a stochastic adaptation of the well-known Stampacchia approach.

    Citation: Mostafa Bendahmane, Kenneth H. Karlsen. Martingale solutions of stochastic nonlocal cross-diffusion systems[J]. Networks and Heterogeneous Media, 2022, 17(5): 719-752. doi: 10.3934/nhm.2022024




This work is devoted to the mathematical analysis of a stochastic reaction-diffusion system with cross-diffusion modeling the interaction between two populations. Cross-diffusion expresses that the population flux of a given subpopulation is affected by the presence of other subpopulations. The (deterministic) dynamics of interacting species with cross-diffusion were investigated by many authors, including Levin [25], Levin and Segel [24], Okubo and Levin [31], Mimura and Murray [27], Mimura and Kawasaki [26], Mimura and Yamaguti [28], Galiano et al. [17,18], Bendahmane et al. [1,6] (see also [2,3,5,7]), to name a few. We consider a spatially distributed population wherein $u=u(t,x)$ and $v=v(t,x)$ are the respective densities of two subpopulations at time $t$ and location $x\in\Omega$. The variables $u$ and $v$ may represent predator and prey densities. In the context of the dispersal of an epidemic disease, the two variables $u$ and $v$ may instead represent the densities of susceptible individuals (those who can catch the disease) and infectious individuals (those who are infected and can transmit the disease). Let $p=u+v$ be the total population density. The population in each subclass is given by

$$U(t)=\int_\Omega u(t,x)\,dx,\qquad V(t)=\int_\Omega v(t,x)\,dx,$$

    whereas the total population is

$$P(t)=\int_\Omega(u+v)(t,x)\,dx=\int_\Omega p(t,x)\,dx,$$

where $\Omega$ is a bounded open domain of $\mathbb{R}^d$ ($d=3$), with $C^3$ boundary $\partial\Omega$ and outward unit normal $\nu$. In this work, we assume that the diffusion of individuals follows a Fick law modified by various other processes such as searching for food, escaping high infection risks, or avoiding large concentrations of individuals. This means that the mobility in each subclass is influenced by the spatial gradient of the other subclass (cf. e.g. [29,30,31]).

    A prototype of stochastic reaction-diffusion systems with nonlocal diffusion and cross-diffusion terms is

$$\begin{aligned}
du-\nabla\cdot\Bigl(D_u\Bigl(\int_\Omega u(t,x)\,dx\Bigr)\nabla u+A_{11}(u,v)\nabla u+A_{12}(u,v)\nabla v\Bigr)dt&=F(u,v)\,dt+\sigma_u(u)\,dW_u(t),\\
dv-\nabla\cdot\Bigl(D_v\Bigl(\int_\Omega v(t,x)\,dx\Bigr)\nabla v+A_{21}(u,v)\nabla u+A_{22}(u,v)\nabla v\Bigr)dt&=G(u,v)\,dt+\sigma_v(v)\,dW_v(t),
\end{aligned} \tag{1}$$

which is posed in the time-space cylinder $\Omega_T:=(0,T)\times\Omega$. This system is supplemented with nonnegative initial data,

$$u(0,x)=u_0(x)\ge0,\qquad v(0,x)=v_0(x)\ge0,\qquad x\in\Omega, \tag{2}$$

and zero-flux boundary conditions on $\Sigma_T:=(0,T)\times\partial\Omega$:

$$\begin{aligned}
\Bigl(D_u\Bigl(\int_\Omega u(t,x)\,dx\Bigr)\nabla u+A_{11}(u,v)\nabla u+A_{12}(u,v)\nabla v\Bigr)\cdot\nu&=0,\\
\Bigl(D_v\Bigl(\int_\Omega v(t,x)\,dx\Bigr)\nabla v+A_{21}(u,v)\nabla u+A_{22}(u,v)\nabla v\Bigr)\cdot\nu&=0.
\end{aligned} \tag{3}$$

In the system (1), $W_w$ is a cylindrical Wiener process, with noise function $\sigma_w$ for $w=u,v$. Formally, we can think of $\sigma_w(w)\,dW_w$ as $\sum_{k\ge1}\sigma_{w,k}(w)\,dW_{w,k}(t)$, where $\{W_{w,k}\}_{k\ge1}$ is a sequence of independent 1D Brownian motions and $\{\sigma_{w,k}\}_{k\ge1}$ a sequence of noise coefficients. The processes $W_u$ and $W_v$ are independent, and the terms $\sigma_u(u)\,dW_u$ and $\sigma_v(v)\,dW_v$ model environmental noise.

    In (1),

$$F(u,v):=-\theta(u,v)-\mu u,\qquad G(u,v):=\theta(u,v)-\gamma v-\mu v \tag{4}$$

are the reaction terms. In the context of an epidemic disease, the constants $\mu,\gamma>0$ are the biological parameters of the system (think of $1/\gamma$ as the duration of the infectious stage and $\mu$ as the mortality rate). The incidence function $\theta$ takes a proportionate mixing form: for some constant $\alpha>0$,

$$\theta(u,v)=\frac{\alpha\,uv}{u+v},\qquad u,v\ge0. \tag{5}$$

    For later use, note that

$$0\le\theta(u,v)\le\alpha\min(u,v),\qquad u,v\ge0. \tag{6}$$
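Indeed, (6) follows directly from (5): for $u,v\ge0$ with $u+v>0$,

$$0\le\theta(u,v)=\frac{\alpha\,uv}{u+v}\le\frac{\alpha\,uv}{\max(u,v)}=\alpha\min(u,v),$$

since $u+v\ge\max(u,v)$.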

The diffusion rates (given by $D_u(\cdot)$ and $D_v(\cdot)>0$) are assumed to be "nonlocal", depending on the whole of each population rather than on the local density; in other words, the diffusion of individuals is guided by the global state of the population in the medium. For example, if we want to model species tending to leave crowded zones, a natural assumption is that $D_u(\cdot),D_v(\cdot)$ are increasing functions. Otherwise, for species attracted by a growing population, one may assume that the nonlocal diffusion coefficients $D_u(\cdot),D_v(\cdot)$ are decreasing functions. We assume that $D_u,D_v:\mathbb{R}\to\mathbb{R}$ are continuous functions satisfying the following conditions: there exist constants $C_m,C_M>0$ such that for $w=u,v$,

$$D_w(I)\ge C_m,\qquad |D_w(I_1)-D_w(I_2)|\le C_M|I_1-I_2|,\qquad\forall I,I_1,I_2\in\mathbb{R}. \tag{7}$$
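For orientation, one admissible choice (a hypothetical example, not taken from the paper) is

$$D_w(I)=d_w+b_w\,\frac{I}{1+|I|},\qquad 0<b_w<d_w,$$

which satisfies (7) with $C_m=d_w-b_w$ and $C_M=b_w$; being increasing, it models a species that tends to leave crowded zones.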

In (1), $A(u,v)=\{A_{ij}(u,v)\}_{i,j=1}^2$ is the cross-diffusion matrix. For simplicity of presentation, we introduce the short-hand notation

$$A(u,v)\begin{pmatrix}\nabla u\\ \nabla v\end{pmatrix}=\begin{pmatrix}A_{11}(u,v)\nabla u+A_{12}(u,v)\nabla v\\ A_{21}(u,v)\nabla u+A_{22}(u,v)\nabla v\end{pmatrix}.$$

We assume that the matrix $A$ has $C^2$ entries and satisfies the following conditions:

$$\begin{aligned}
&A_{12}(0,v)=0,\quad A_{21}(u,0)=0, &&\forall u,v\ge0,\\
&\bigl(A(u,v)w,w\bigr)\ge\frac1C\,|A(u,v)|\,|w|^2, &&\forall u,v\ge0,\ \forall w:=\begin{pmatrix}w_1\\ w_2\end{pmatrix}\in\mathbb{R}^{2d},\\
&|A(u_1,v_1)-A(u_2,v_2)|\le C\bigl(|u_1-u_2|+|v_1-v_2|\bigr), &&\forall u_1,u_2,v_1,v_2\ge0,
\end{aligned} \tag{8}$$

where $(\cdot,\cdot)$ is the usual scalar product on $\mathbb{R}^{2d}$, with corresponding norm $|\cdot|$. Moreover, $|A(\cdot,\cdot)|=\max_{i,j=1,2}|A_{ij}(\cdot,\cdot)|$ and $C$ is a positive constant. Notice that (8) implies

$$A_{11}(u,v)\ge0,\qquad A_{22}(u,v)\ge0,\qquad\forall u,v\ge0.$$
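Indeed, taking $w=\begin{pmatrix}w_1\\0\end{pmatrix}$ in the second condition of (8) gives

$$A_{11}(u,v)\,|w_1|^2=\bigl(A(u,v)w,w\bigr)\ge\frac1C\,|A(u,v)|\,|w_1|^2\ge0,$$

and the choice $w=\begin{pmatrix}0\\ w_2\end{pmatrix}$ yields $A_{22}(u,v)\ge0$ in the same way.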

A typical example of a cross-diffusion matrix is

$$A(u,v)=\begin{pmatrix}a_{11}u+a_{12}v & a_{13}u\\ a_{21}v & a_{22}u+a_{23}v\end{pmatrix},$$

where the coefficients $a_{ij}>0$ are known as self-diffusion rates. This matrix is nonnegative if $8a_{11}a_{21}\ge a_{21}^2$ and $8a_{22}a_{12}\ge a_{21}^2$; cf. [4] for more details.

Remark 1. For the upcoming analysis we need to extend the definitions of $A$, cf. (8), $F$ and $G$ to all $u,v\in\mathbb{R}$. We do this by assuming the following (for $i,j=1,2$):

if $u,v\ge0$, then $A_{ij}(u,v)\ge0$; otherwise $A_{ij}(u,v)=0$ ($i\ne j$) and $A_{ii}(u,v)\ge0$, and

$$F(u,v)=\begin{cases}-\theta(u,v)-\mu u, & \text{if } u,v\ge0,\\ -\mu u, & \text{if } u\ge0 \text{ and } v<0,\\ 0, & \text{if } u<0 \text{ and } v\ge0,\end{cases}\qquad
G(u,v)=\begin{cases}\theta(u,v)-\gamma v-\mu v, & \text{if } u,v\ge0,\\ 0, & \text{if } u\ge0 \text{ and } v<0,\\ -\gamma v-\mu v, & \text{if } u<0 \text{ and } v\ge0.\end{cases}$$

    Our analysis is restricted to positive cross-diffusion matrices A. Positive matrices are motivated by their applications in population dynamics. In a forthcoming work, we decipher stability and instability conditions for the spatially constant stationary state. Moreover, we define and prove the existence of suitably defined solutions satisfying these conditions. The "natural" solutions are determined when the nonlinearities and cross diffusivities obey certain constraints. In the deterministic case [3], these constraints are not fully satisfied for realistic parameters, yielding instabilities. The interesting open question is, which type of solution experiences instabilities? Degenerate cross-diffusion systems and numerical methods will be the subject of another forthcoming work.

    Historically, cross-diffusion models are deterministic, meaning that the input data determine the solution at each moment in time. In deterministic models, non-predictable environmental factors are not considered, although it is well-known that a combination of random perturbations and nonlinearities can strongly influence solutions. Multiple factors may influence the population's growth in the environment, such as food, water, temperature, etc., each element easily being thought of as stochastic. It is natural to employ noise to model these environmental fluctuations by adding a stochastic forcing term to the deterministic system, resulting in (1).

Let us now put the mathematical contributions of this paper into perspective. First, note that the standard theory for parabolic systems does not apply naturally to the cross-diffusion model because of the strong coupling in the highest derivatives. As a result, no traditional maximum principle applies. A stochastic forcing term further complicates the maximum principle approach. The existence result for (1) is based on martingale solutions and the introduction of suitable approximate (Faedo-Galerkin) solutions. We derive a series of system-specific a priori estimates in $L^2_{\omega,t}H^1_x\cap L^2_\omega L^\infty_t L^2_x\cap L^1_\omega C_t\bigl((W^{1,4}_x)^*\bigr)$ for the Faedo-Galerkin approximations and use a compactness method to conclude convergence. The system's nonlinear structure requires strong convergence of the approximate solutions in suitable norms. However, one cannot directly deduce strong convergence in the probability variable. To handle this issue, we establish weak compactness of the probability laws of the approximate solutions, which follows from tightness and Prokhorov's theorem. We then construct a.s. convergent versions of the approximations using Skorokhod's representation theorem, which makes it possible to show that the limit constitutes a martingale solution of (1). We demonstrate that the constructed solutions are nonnegative by adapting the Stampacchia approach to the stochastic setting, following Chekroun, Park, and Temam [10]. Finally, we mention that the pathwise uniqueness of the solution for the deterministic and stochastic cross-diffusion systems remains an open problem.

In [14], the authors prove the existence of solutions for a related stochastic cross-diffusion system (with $F,G,D_u,D_v\equiv0$) using the entropy method, assuming that the cross-diffusion matrix exhibits a quadratic entropy structure. A critical difference between our work and [14] is that the cross-diffusion term in the predator-prey system (1) does not have an entropy structure. Besides, the system (1) contains nonlocal diffusion terms, which further breaks the entropy structure exploited in [14].

For the existence of martingale solutions for other classes of SPDEs, we refer to [8,9,11,12,15,16,20,21,22,34,35], to mention a few inspirational examples.

The paper is organized as follows: In Section 2, we present the stochastic framework and state the hypotheses on the noise coefficients. Section 3 supplies the definition of a weak martingale solution and declares the main result. We construct approximate solutions by the Faedo-Galerkin method in Section 4. Uniform estimates for these approximations are established in Sections 5 and 6. Section 7 proves the tightness of the probability laws generated by the Faedo-Galerkin approximations. Tightness and Skorokhod's representation theorem are used to show that a weakly convergent sequence of the probability laws has a limit that can be represented as the law of an almost surely convergent sequence of random variables defined on a common probability space. The limit of this sequence is proved to be a weak martingale solution of the stochastic reaction-diffusion system in Section 8, while its nonnegativity is deferred to Section 9.

    Throughout this paper, we will frequently use the letters C,K, etc., to denote a generic constant independent of n, that may take different values at different occurrences.

    This section recalls basic concepts and results from stochastic analysis (see e.g. [11,33] for more details). We consider a complete probability space (D,F,P), along with a complete right-continuous filtration {Ft}t[0,T].

    In passing, note that the letter Ω is reserved for the physical domain in this paper. In contrast, we use D for the probability domain (in the stochastic literature, Ω denotes the probability domain).

Given a separable Banach space $B$, which is equipped with the Borel $\sigma$-algebra $\mathcal{B}(B)$, a $B$-valued random variable $X$ is a measurable mapping from $(D,\mathcal{F},P)$ to $(B,\mathcal{B}(B))$, $D\ni\omega\mapsto X(\omega)\in B$. The expectation of a random variable $X$ is $E[X]:=\int_D X\,dP$. For $p\ge1$, the Banach space $L^p(D;B)=L^p(D,\mathcal{F},P;B)$ is the collection of all $B$-valued random variables, equipped with the following norm

$$\|X\|_{L^p(D;B)}=\|X\|_{L^p(D,\mathcal{F},P;B)}:=\bigl(E\bigl[\|X\|_B^p\bigr]\bigr)^{1/p}\ (p<\infty),\qquad \|X\|_{L^\infty(D;B)}=\|X\|_{L^\infty(D,\mathcal{F},P;B)}:=\sup_{\omega\in D}\|X(\omega)\|_B.$$

We use the abbreviation a.s. (or almost surely) for "$P$-almost every $\omega\in D$". A stochastic process $X=\{X(t)\}_{t\in[0,T]}$ is a collection of $B$-valued random variables $X(t)$. We assume that $X$ is measurable, which means that the map $X:D\times[0,T]\to B$ is measurable from $\mathcal{F}\times\mathcal{B}([0,T])$ to $\mathcal{B}(B)$. The paths $t\mapsto X(\omega,t)$ are then automatically Borel measurable.

    We refer to

$$S=\bigl(D,\mathcal{F},\{\mathcal{F}_t\}_{t\in[0,T]},P,\{W_k\}_{k=1}^\infty\bigr) \tag{9}$$

as a (Brownian) stochastic basis, where $\{W_k\}_{k=1}^\infty$ is a sequence of independent one-dimensional Wiener processes adapted to the filtration $\{\mathcal{F}_t\}_{t\in[0,T]}$.

A stochastic process $X$ is adapted if $X(t)$ is $\mathcal{F}_t$-measurable for all $t\in[0,T]$. When a filtration is involved there are additional notions of measurability (predictable, optional and progressive) that occasionally are more convenient to work with. Herein we use the (stronger) notion of a predictable process. A predictable process is a $\mathcal{P}_T$-measurable map $D\times[0,T]\to B$, $(\omega,t)\mapsto X(\omega,t)$, where $\mathcal{P}_T$ is the predictable $\sigma$-algebra on $D\times[0,T]$ associated with $\{\mathcal{F}_t\}_{t\in[0,T]}$, i.e., the $\sigma$-algebra generated by all left-continuous adapted processes.

Consider a Hilbert space $U$ equipped with a complete orthonormal basis $\{\psi_k\}_{k\ge1}$. A cylindrical Wiener process $W$ on $U$ is defined by $W:=\sum_{k\ge1}W_k\psi_k$. The vector space of all bounded linear operators from $U$ to $L^2(\Omega)$ is denoted $L(U,L^2(\Omega))$. Denote by $L_2(U,L^2(\Omega))$ the space of Hilbert-Schmidt operators from $U$ to $L^2(\Omega)$, i.e., $R\in L_2(U,L^2(\Omega))$ if and only if $R\in L(U,L^2(\Omega))$ and $\|R\|^2_{L_2(U,L^2(\Omega))}:=\sum_{k\ge1}\|R\psi_k\|^2_{L^2(\Omega)}<\infty$. We recall that $L_2(U,L^2(\Omega))$ is a Hilbert space. As is well-known, there is an auxiliary Hilbert space $U_0\supset U$, with a Hilbert-Schmidt embedding $J:U\to U_0$, on which the infinite series $\sum_{k\ge1}W_k\psi_k$ converges.
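As an illustrative aside (not used in the analysis), the truncated expansion $W\approx\sum_{k\le N}W_k\psi_k$ can be simulated directly; the sketch below takes a cosine basis of $L^2(0,1)$ as a stand-in for $\{\psi_k\}$ and Gaussian increments for the independent Brownian motions $W_k$.

```python
import numpy as np

def psi(k, x):
    """Orthonormal cosine basis of L2(0,1): psi_1 = 1, psi_k = sqrt(2) cos((k-1) pi x)."""
    return np.ones_like(x) if k == 1 else np.sqrt(2.0) * np.cos((k - 1) * np.pi * x)

def truncated_cylindrical_wiener(N_modes=20, n_steps=200, T=1.0, n_x=101, seed=0):
    """Sample W_N(t, x) = sum_{k <= N} W_k(t) psi_k(x) on a time/space grid."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.linspace(0.0, 1.0, n_x)
    basis = np.stack([psi(k, x) for k in range(1, N_modes + 1)])      # (N, n_x)
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_steps, N_modes))        # Brownian increments
    Wk = np.vstack([np.zeros(N_modes), np.cumsum(dW, axis=0)])        # W_k(t_i), shape (n_steps+1, N)
    return x, Wk @ basis                                              # W_N(t_i, x_j)

x, W = truncated_cylindrical_wiener()
print("sampled field shape (time, space):", W.shape)
```

Since the full series only converges in the larger space $U_0$, the sampled field becomes rougher (its pointwise variance grows) as N_modes increases.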

For a given cylindrical Wiener process $W_w$, the $L^2(\Omega)$-valued Itô stochastic integral $\int\sigma\,dW_w$ is defined as follows (see, e.g., [11,33]):

$$\int_0^t\sigma_w\,dW_w=\sum_{k=1}^\infty\int_0^t\sigma_{w,k}\,dW_{w,k},\qquad\sigma_{w,k}:=\sigma_w\psi_k, \tag{10}$$

    for any L2(Ω)-valued predictable integrand

$$\sigma\in L^2\bigl(D,\mathcal{F},P;L^2(0,T;L_2(U,L^2(\Omega)))\bigr).$$

Throughout the paper, we assume several conditions on the noise coefficients $\sigma_u,\sigma_v$ appearing in (1). For each $z\in L^2(\Omega)$, we assume that $\sigma_w(z):U\to L^2(\Omega)$, for $w=u,v$, is defined by

$$\sigma_w(z)\psi_k=\sigma_{w,k}(z(\cdot)),\qquad k\ge1,$$

for some real-valued functions $\sigma_{w,k}(\cdot):\mathbb{R}\to\mathbb{R}$ that satisfy

$$\sum_{k\ge1}|\sigma_{w,k}(z)|^2\le C_\sigma\bigl(1+|z|^2\bigr),\quad z\in\mathbb{R},\qquad \sum_{k\ge1}|\sigma_{w,k}(z_1)-\sigma_{w,k}(z_2)|^2\le C_\sigma|z_1-z_2|^2,\quad z_1,z_2\in\mathbb{R}, \tag{11}$$

    for a constant Cσ>0. A consequence of (11) is

$$\|\sigma_w(z)\|^2_{L_2(U,L^2(\Omega))}\le C_\sigma\bigl(1+\|z\|^2_{L^2(\Omega)}\bigr),\quad z\in L^2(\Omega),\qquad \|\sigma_w(z_1)-\sigma_w(z_2)\|^2_{L_2(U,L^2(\Omega))}\le C_\sigma\|z_1-z_2\|^2_{L^2(\Omega)},\quad z_1,z_2\in L^2(\Omega). \tag{12}$$
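For orientation, a simple hypothetical family of noise coefficients satisfying (11) is

$$\sigma_{w,k}(z)=a_k\,z\qquad\text{with }\sum_{k\ge1}a_k^2\le C_\sigma,$$

for which both inequalities in (11) hold with the same constant $C_\sigma$; this example also satisfies $\sigma_u(0)=\sigma_v(0)=0$, the additional assumption used later for nonnegativity (cf. Theorem 3.2).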

    Under these conditions (12), the stochastic integral (10) is an L2(Ω)-valued square integrable martingale, satisfying the Burkholder-Davis-Gundy (BDG) inequality

$$E\Bigl[\sup_{t\in[0,T]}\Bigl\|\int_0^t\sigma_w\,dW_w\Bigr\|_{L^2(\Omega)}^p\Bigr]\le C\,E\Bigl[\Bigl(\int_0^T\|\sigma_w\|^2_{L_2(U,L^2(\Omega))}\,dt\Bigr)^{p/2}\Bigr], \tag{13}$$

where $C$ is a constant depending on $p\ge1$. We need the following convergence result for stochastic integrals [12, Lemma 2.1].

Lemma 2.1 (convergence of stochastic integrals). For each $n\in\mathbb{N}$, consider a stochastic basis $S^n=(D,\mathcal{F},\{\mathcal{F}^n_t\},P,W^n)$ and an $\{\mathcal{F}^n_t\}$-predictable process $G^n$ which belongs to $L^2(0,T;L_2(U,L^2(\Omega)))$, almost surely. Furthermore, suppose there exist a stochastic basis $S=(D,\mathcal{F},\{\mathcal{F}_t\},P,W)$ and an $\{\mathcal{F}_t\}$-predictable process $G$, which belongs to $L^2(0,T;L_2(U,L^2(\Omega)))$ a.s., such that

$$W^n\xrightarrow[n\to\infty]{}W\quad\text{in } C([0,T];U_0),\ \text{in probability},\qquad G^n\xrightarrow[n\to\infty]{}G\quad\text{in } L^2(0,T;L_2(U;L^2(\Omega))),\ \text{in probability}.$$

Then

$$\int_0^t G^n\,dW^n\xrightarrow[n\to\infty]{}\int_0^t G\,dW\quad\text{in } L^2(0,T;L^2(\Omega)),\ \text{in probability}.$$

Let $S$ be a Polish space. We denote by $\mathcal{B}(S)$ the collection of Borel subsets of $S$ and by $\mathcal{P}(S)$ the family of all Borel probability measures on $S$. A sequence of probability measures $\{\mu_n\}_{n\ge1}$ on $(S,\mathcal{B}(S))$ is tight [11] if for every $\epsilon>0$ there is a compact set $K_\epsilon\subset S$ such that $\mu_n(K_\epsilon)>1-\epsilon$ for all $n\ge1$. According to Prokhorov's theorem (see e.g. [11, Theorem 2.3]), tightness is a criterion for weak compactness: If $\{\mu_n\}_{n\ge1}$ is tight, then there exists a subsequence $\{\mu_{n_j}\}_{j\ge1}$ that converges weakly to a probability measure $\mu$, where weak convergence means that $\int_S\phi(w)\,d\mu_{n_j}(w)\to\int_S\phi(w)\,d\mu(w)$ for any continuous bounded function $\phi:S\to\mathbb{R}$.

Any random variable $X:D\to S$ induces a probability measure $\mathcal{L}$ on $(S,\mathcal{B}(S))$ via the pushforward of $P$ through $X$; often $\mathcal{L}=P\circ X^{-1}$ is referred to as the law of $X$. Let $\{X_k\}_{k\ge1}$ be a sequence of random variables whose laws $\mathcal{L}_k$ converge weakly to $\mathcal{L}$. Then a well-known result of Skorokhod (see e.g. [11, Theorem 2.4]) says that there exist a probability space $(\tilde D,\tilde{\mathcal{F}},\tilde P)$ and random variables $\tilde X_k,\tilde X:\tilde D\to S$ such that the law of $\tilde X_k$ is $\mathcal{L}_k$, the law of $\tilde X$ is $\mathcal{L}$, and $\tilde X_k\to\tilde X$ $\tilde P$-almost surely as $k\to\infty$.

    We will utilize the following notion of solution for the stochastic cross-diffusion system.

Definition 3.1 (weak martingale solution). Let $\mu_{u_0}$, $\mu_{v_0}$ be probability measures on $L^2(\Omega)$. A weak martingale solution of the stochastic cross-diffusion system (1), with initial-boundary data (2) and (3), is a triplet $(S,u,v)$ satisfying the following conditions:

1. $S=\bigl(D,\mathcal{F},\{\mathcal{F}_t\},P,\{W_{u,k}\}_{k=1}^\infty,\{W_{v,k}\}_{k=1}^\infty\bigr)$ is a stochastic basis;

2. $W_u:=\sum_{k\ge1}W_{u,k}\psi_k$ and $W_v:=\sum_{k\ge1}W_{v,k}\psi_k$ are two independent cylindrical Wiener processes, adapted to the filtration $\{\mathcal{F}_t\}$;

    3. The elements u and v are nonnegative, belong to

$$L^2\bigl(D,\mathcal{F},P;L^2(0,T;H^1(\Omega))\bigr)\cap L^2\bigl(D,\mathcal{F},P;L^\infty(0,T;L^2(\Omega))\bigr),$$

    and satisfy

$$\sqrt{|A_{ij}(u,v)|}\,\nabla u\in L^2\bigl(D,\mathcal{F},P;L^2(0,T;L^2(\Omega))\bigr),\qquad i,j=1,2.$$

Finally, $u,v\in C([0,T];(H^1(\Omega))^*)$ a.s., and $u,v$ are predictable in $(H^1(\Omega))^*$.

4. The laws of $u_0:=u(0)$ and $v_0:=v(0)$ are respectively $\mu_{u_0}$ and $\mu_{v_0}$;

5. The following equations hold $P$-almost surely, for any $t\in[0,T]$:

$$\begin{aligned}
&\int_\Omega u(t)\varphi_u\,dx-\int_\Omega u_0\varphi_u\,dx+\int_0^t\!\!\int_\Omega\Bigl(D_u\Bigl(\int_\Omega u\,dx\Bigr)\nabla u+A_{11}(u,v)\nabla u+A_{12}(u,v)\nabla v\Bigr)\cdot\nabla\varphi_u\,dx\,ds\\
&\qquad=\int_0^t\!\!\int_\Omega F(u,v)\varphi_u\,dx\,ds+\int_0^t\!\!\int_\Omega\sigma_u(u)\varphi_u\,dx\,dW_u(s),\\
&\int_\Omega v(t)\varphi_v\,dx-\int_\Omega v_0\varphi_v\,dx+\int_0^t\!\!\int_\Omega\Bigl(D_v\Bigl(\int_\Omega v\,dx\Bigr)\nabla v+A_{21}(u,v)\nabla u+A_{22}(u,v)\nabla v\Bigr)\cdot\nabla\varphi_v\,dx\,ds\\
&\qquad=\int_0^t\!\!\int_\Omega G(u,v)\varphi_v\,dx\,ds+\int_0^t\!\!\int_\Omega\sigma_v(v)\varphi_v\,dx\,dW_v(s),
\end{aligned} \tag{14}$$

for all $\varphi_u,\varphi_v\in W^{1,4}(\Omega)$.

    Remark 2. In Definition 3.1, we use the standard Sobolev spaces

$$H^1(\Omega)=W^{1,2}(\Omega),\qquad\text{and for } p\in(1,\infty),\quad W^{1,p}(\Omega)=\bigl\{u\in L^p(\Omega):\nabla u\in L^p(\Omega;\mathbb{R}^d)\bigr\},$$

along with the corresponding dual spaces $(H^1(\Omega))^*$ and $(W^{1,p}(\Omega))^*$. Later we also use the space $H^2(\Omega)$ consisting of all functions $u\in L^2(\Omega)$ for which $\nabla u\in L^2(\Omega;\mathbb{R}^d)$ and $\nabla^2 u\in L^2(\Omega;\mathbb{R}^{d\times d})$. Throughout the paper we use $(W^{1,p}(\Omega))^*$ to denote the dual of $W^{1,p}(\Omega)$, which is a Banach space with norm

$$\|L\|_{(W^{1,p}(\Omega))^*}=\sup\bigl\{|\langle L,\phi\rangle|:\phi\in W^{1,p}(\Omega),\ \|\phi\|_{W^{1,p}(\Omega)}\le1\bigr\},$$

where $\langle\cdot,\cdot\rangle$ is the duality pairing between $(W^{1,p}(\Omega))^*$ and $W^{1,p}(\Omega)$.

Recall that $L\in(W^{1,p}(\Omega))^*$ if and only if there exist functions $f_0,f_1,\dots,f_d\in L^{p'}(\Omega)$, $p'=\frac{p}{p-1}$, such that

$$\langle L,\phi\rangle=\int_\Omega\Bigl(f_0\,\phi+\sum_{i=1}^d f_i\,\partial_{x_i}\phi\Bigr)dx,\qquad\phi\in W^{1,p}(\Omega),$$

and $\|L\|_{(W^{1,p}(\Omega))^*}=\bigl(\sum_{i=0}^d\|f_i\|_{L^{p'}(\Omega)}^{p'}\bigr)^{1/p'}$ [23, Theorem 10.41]. Note that bounded linear functionals over $W^{1,p}(\Omega)$ are not distributions.

Remark 3. 1. Given the regularity conditions imposed in Definition 3.1, one can show that the deterministic and the stochastic integrals in (14) are all well-defined. Regarding the stochastic terms $\int_0^t\int_\Omega\sigma_w(w)\varphi_w\,dx\,dW_w(s)$, $w=u,v$, they are interpreted as in (10).

2. For martingale solutions, one prescribes the initial data in terms of probability measures $\mu_{u_0},\mu_{v_0}$ on $L^2(\Omega)$. For probabilistic strong solutions (not considered here), one prescribes the initial data in terms of random variables $u_0,v_0\in L^2_{\omega,x}:=L^2(D;L^2(\Omega))$.

3. Part (3) of Definition 3.1 implies that $u,v$ belong to the space $L^\infty(0,T;L^2(\Omega))\cap C([0,T];(H^1(\Omega))^*)$, almost surely. Hence, $u,v\in C_w([0,T];L^2(\Omega))$ a.s., i.e., for any $\phi\in L^2(\Omega)$, the map $[0,T]\ni t\mapsto\int_\Omega w(t)\phi\,dx$ is continuous a.s., for $w=u,v$. We do not have $u,v\in C([0,T];L^2(\Omega))$ (strong time-continuity in $L^2$). As $W^{1,4}(\Omega)\subset H^1(\Omega)$ with continuous embedding (recall that $\Omega\subset\mathbb{R}^3$ is bounded), $(H^1(\Omega))^*\subset(W^{1,4}(\Omega))^*$ with continuous embedding, and therefore $u,v\in C([0,T];(W^{1,4}(\Omega))^*)$ a.s., which is consistent with requiring the equations (14) to hold for all $\varphi_u,\varphi_v\in W^{1,4}(\Omega)$.

    Remark 4. A significant difficulty for the analysis of (1) is the strong coupling in the highest derivatives. However, since these terms are zero on the boundary, cf. (3), the nonlinear boundary conditions will "disappear" in the weak martingale formulation.

    Our main result is

Theorem 3.2 (existence). Suppose conditions (4), (5), (6), (7), (8), and (11) hold, and that the initial data $u_0$, $v_0$ are random variables with laws $\mu_{u_0}$, $\mu_{v_0}$ satisfying

$$\int_{L^2(\Omega)}\|w\|^{q_0}_{L^2(\Omega)}\,d\mu_{w_0}(w)<\infty,\qquad\text{for some } q_0>3,\quad w=u,v. \tag{15}$$

Then the stochastic cross-diffusion system (1), with initial-boundary data (2) and (3), possesses a weak martingale solution in the sense of Definition 3.1. Moreover, assuming $\sigma_u(0)=\sigma_v(0)=0$, this martingale solution is nonnegative.

The proof of Theorem 3.2 is organized into several sections. First, in Section 4, we construct the Faedo-Galerkin solutions. Energy-type estimates are derived in Section 5. Convergence of the approximate solutions (along a subsequence) to a limit follows from these estimates, a temporal translation estimate, cf. Section 6, and the tightness of the probability laws generated by the Faedo-Galerkin solutions, cf. Section 7. In Section 8, we show that the limit is a weak martingale solution. Finally, we prove the nonnegativity of the constructed martingale solution, cf. Section 9.

In this section, we define precisely the Faedo-Galerkin equations and prove that there exists a solution to these equations. We begin by fixing a stochastic basis $S$, cf. (9), and $\mathcal{F}_0$-measurable initial data $u_0,v_0\in L^2(D;L^2(\Omega))$, with respective laws $\mu_{u_0},\mu_{v_0}$ on $L^2(\Omega)$. We look for approximate solutions obtained from the projection of (1), (2) and (3) onto a finite dimensional space $X_n:=\operatorname{Span}\{e_1,\dots,e_n\}$.

Let us make precise the basis functions $e_1,\dots,e_n$. The following discussion is well-known but is included for the sake of readability. First, we introduce the spaces

$$L^2_0:=\Bigl\{u\in L^2(\Omega):\bar u:=\frac{1}{|\Omega|}\int_\Omega u\,dx=0\Bigr\},\qquad H^2_N:=\bigl\{u\in H^2(\Omega):\nabla u\cdot\nu=0\ \text{on }\partial\Omega\bigr\},\qquad (H^1)^*_0:=\Bigl\{u\in(H^1(\Omega))^*:\bar u:=\frac{1}{|\Omega|}\langle u,1\rangle_{(H^1)^*,H^1}=0\Bigr\}.$$

The embeddings $H^2_N\subset H^1\subset L^2\cong(L^2)^*\subset(H^1)^*\subset(H^2_N)^*$ are continuous, dense and compact. We have $\langle u,v\rangle_{(H^1)^*,H^1}=(u,v):=\int_\Omega uv\,dx$ for $u\in L^2(\Omega)$, $v\in H^1(\Omega)$. Similarly, $\langle u,v\rangle_{(H^2_N)^*,H^2_N}=(u,v)$ for $u\in L^2(\Omega)$, $v\in H^2_N$.

The Neumann-Laplace operator $-\Delta_N:H^1(\Omega)\cap L^2_0(\Omega)\to(H^1)^*_0$ is defined by

$$\langle-\Delta_N u,v\rangle_{(H^1)^*,H^1}=\int_\Omega\nabla u\cdot\nabla v\,dx,\qquad u,v\in H^1(\Omega).$$

The Neumann-Laplace operator is positive and self-adjoint. By the Lax-Milgram theorem and the Poincaré inequality, the inverse operator $(-\Delta_N)^{-1}:(H^1)^*_0\to H^1(\Omega)\cap L^2_0$ is compact, positive and symmetric. By the spectral theorem, $(-\Delta_N)^{-1}$ admits a sequence of eigenfunctions $\{w_l\}_{l=1}^\infty$ that forms a complete orthonormal basis in $L^2_0$. The eigenfunctions of $-\Delta_N$ are $e_1:=1/|\Omega|^{1/2}$ and $e_l:=w_{l-1}$ for $l\ge2$. The sequence $\{e_l\}_{l=1}^\infty$ is an orthonormal basis of $L^2(\Omega)$. The $L^2$ orthogonal projection is denoted by

$$\Pi_n:L^2(\Omega)\to X_n=\operatorname{Span}\{e_1,\dots,e_n\},\qquad\Pi_n u:=\sum_{l=1}^n(u,e_l)\,e_l. \tag{16}$$

Then $\Pi_n u\to u$ in $L^2(\Omega)$ as $n\to\infty$ and $\|\Pi_n u\|_{L^2(\Omega)}\le\|u\|_{L^2(\Omega)}$.
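As a purely illustrative aside (not part of the paper), in one space dimension on $(0,1)$ the Neumann eigenfunctions are cosines, and the projection $\Pi_n$ in (16) can be realized numerically as in the following sketch; the grid resolution, the number of modes and the test function are arbitrary choices.

```python
import numpy as np

def neumann_basis(n, x):
    """First n L2(0,1)-orthonormal Neumann eigenfunctions: e_1 = 1, e_l = sqrt(2) cos((l-1) pi x)."""
    return np.stack([np.ones_like(x) if l == 1 else np.sqrt(2.0) * np.cos((l - 1) * np.pi * x)
                     for l in range(1, n + 1)])

def project(u_vals, x, n):
    """L2 projection Pi_n u = sum_l (u, e_l) e_l, inner products computed by the trapezoidal rule."""
    e = neumann_basis(n, x)                     # shape (n, len(x))
    coeffs = np.trapz(e * u_vals, x, axis=1)    # coefficients (u, e_l)
    return coeffs @ e

x = np.linspace(0.0, 1.0, 2001)
u = np.exp(-10.0 * (x - 0.3) ** 2)              # an arbitrary test function
for n in (5, 20, 80):
    err = np.sqrt(np.trapz((project(u, x, n) - u) ** 2, x))
    print(f"n = {n:3d}   L2 projection error = {err:.3e}")   # decreases as n grows
```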

Denoting the corresponding eigenvalues by $\{\lambda_l\}_{l=1}^\infty$, we have

$$-\Delta e_l=\lambda_l e_l\quad\text{in }\Omega,\qquad\nabla e_l\cdot\nu=0\quad\text{on }\partial\Omega, \tag{17}$$

for each $l\in\mathbb{N}$. The eigenvalues form a nondecreasing sequence with $\lambda_1=0$ and $\lambda_l\to\infty$ as $l\to\infty$. By elliptic regularity theory, each eigenfunction $e_l$ belongs to $H^2_N\cap L^\infty(\Omega)$, $e_l\in C^\infty(\Omega)$, and $e_l$ is as smooth in $\bar\Omega$ as $\partial\Omega$ permits (e.g. $e_l\in C^\infty(\bar\Omega)$ if $\partial\Omega$ is $C^\infty$). By [19, Lemma 3.1], the space $H^2_N$ is dense in $H^1(\Omega)$ and in $W^{1,5}(\Omega)$. The same proof applies to $W^{1,p}(\Omega)$ for any $p\in[1,6]$. It is further known that $\{e_l\}_{l=1}^\infty$ forms a basis of $H^2_N$. Indeed, for any $u\in H^2_N$,

$$\Delta\Pi_n u=\sum_{l=1}^n(u,e_l)\Delta e_l=-\sum_{l=1}^n\lambda_l(u,e_l)e_l=\sum_{l=1}^n(u,\Delta e_l)e_l=\sum_{l=1}^n(\Delta u,e_l)e_l=\Pi_n\Delta u.$$

As a result, $\Delta\Pi_n u$ converges in $L^2$ to $\Delta u$ as $n\to\infty$. We can therefore conclude that $\Pi_n u\to u$ in $H^2_N$. Hence, the sequence $\{e_l\}_{l=1}^\infty$ forms a basis of $H^2_N$. Later we will make use of the estimate

$$\|\Pi_n u\|_{H^2_N}\le C\,\|u\|_{H^2_N},$$

    for a constant C that is independent of n.

From the weak form of (17) with test function $v=e_m$,

$$\int_\Omega\nabla e_l\cdot\nabla e_m\,dx=\lambda_l\int_\Omega e_l e_m\,dx=\lambda_l\delta_{lm},\qquad l,m\in\mathbb{N};$$

thus $(e_l,e_m)_{H^1(\Omega)}=(1+\lambda_l)\delta_{lm}$ and $\|e_l\|_{H^1(\Omega)}=(e_l,e_l)^{1/2}_{H^1(\Omega)}=(1+\lambda_l)^{1/2}$, i.e., $\{e_l\}_{l=1}^\infty$ is an orthonormal basis of $L^2(\Omega)$ that is orthogonal in $H^1(\Omega)$. Set $\tilde e_l:=e_l/(1+\lambda_l)^{1/2}$. Then $\{\tilde e_l\}_{l=1}^\infty$ forms an orthonormal basis of $H^1(\Omega)$. To see this, note that $\{\tilde e_l\}_{l=1}^\infty$ is clearly an orthonormal sequence in $H^1(\Omega)$. To prove that it is a basis, it suffices to establish that $(u,\tilde e_l)_{H^1(\Omega)}=0$ for all $l$ implies $u=0$, for any $u\in H^1(\Omega)$. Suppose $(u,\tilde e_l)_{H^1(\Omega)}=0$ for all $l$. From integration by parts and (17),

$$0=\int_\Omega\nabla u\cdot\nabla\tilde e_l\,dx+\int_\Omega u\,\tilde e_l\,dx=(1+\lambda_l)^{1/2}\int_\Omega u\,e_l\,dx,$$

so that $(u,e_l)=0$ for all $l$. Since $\{e_l\}_{l=1}^\infty$ is a basis of $L^2(\Omega)$, this implies that $u=0$.

Let us note that the restriction of $\Pi_n$ to $H^1(\Omega)$ coincides with $\tilde\Pi_n$, the $H^1$ orthogonal projection onto the space $\operatorname{Span}\{\tilde e_1,\dots,\tilde e_n\}$: for any $u\in H^1(\Omega)$,

$$\tilde\Pi_n u=\sum_{l=1}^n(u,\tilde e_l)_{H^1(\Omega)}\tilde e_l=\sum_{l=1}^n(1+\lambda_l)^{1/2}(u,e_l)\,\tilde e_l=\sum_{l=1}^n(u,e_l)\,e_l=\Pi_n u.$$

Consequently,

$$\Pi_n u\xrightarrow[n\to\infty]{}u\quad\text{in }H^1(\Omega),\qquad\|\Pi_n u\|_{H^1(\Omega)}\le\|u\|_{H^1(\Omega)}.$$

    Finally, we will continue to use the symbol Πn for the operator

$$\Pi_n:X^*\to\operatorname{Span}\{e_1,\dots,e_n\},\qquad\Pi_n u:=\sum_{l=1}^n\langle u,e_l\rangle_{X^*,X}\,e_l,$$

where $X=H^1(\Omega)$ or $X=H^2_N$. The restriction of this operator to $L^2(\Omega)$ coincides with the $L^2$ orthogonal projection defined in (16). It is easy to verify that

$$(\Pi_n u,v)=\langle u,\Pi_n v\rangle_{X^*,X},\qquad u\in X^*,\ v\in X,$$

as $\Bigl(\sum_{l=1}^n\langle u,e_l\rangle_{X^*,X}\,e_l,v\Bigr)=\sum_{l=1}^n\langle u,e_l\rangle_{X^*,X}(e_l,v)=\Bigl\langle u,\sum_{l=1}^n(v,e_l)\,e_l\Bigr\rangle_{X^*,X}.$$

    We can now define our Faedo-Galerkin approximations

$$u_n,v_n:[0,T]\to X_n,\qquad u_n(t)=\sum_{l=1}^n c^n_l(t)\,e_l,\quad v_n(t)=\sum_{l=1}^n d^n_l(t)\,e_l, \tag{18}$$

where the coefficients $c^n=\{c^n_l(t)\}_{l=1}^n$ and $d^n=\{d^n_l(t)\}_{l=1}^n$ are determined such that the following equations hold (for $l=1,\dots,n$):

$$\begin{aligned}
(du_n,e_l)&+D_u\Bigl(\int_\Omega u_n\,dx\Bigr)(\nabla u_n,\nabla e_l)\,dt+\bigl(A_{11}(u_n,v_n)\nabla u_n+A_{12}(u_n,v_n)\nabla v_n,\nabla e_l\bigr)\,dt\\
&=(F(u_n,v_n),e_l)\,dt+\sum_{k=1}^n\bigl(\sigma^n_{u,k}(u_n),e_l\bigr)\,dW_{u,k}(t),\\
(dv_n,e_l)&+D_v\Bigl(\int_\Omega v_n\,dx\Bigr)(\nabla v_n,\nabla e_l)\,dt+\bigl(A_{21}(u_n,v_n)\nabla u_n+A_{22}(u_n,v_n)\nabla v_n,\nabla e_l\bigr)\,dt\\
&=(G(u_n,v_n),e_l)\,dt+\sum_{k=1}^n\bigl(\sigma^n_{v,k}(v_n),e_l\bigr)\,dW_{v,k}(t),
\end{aligned} \tag{19}$$

    and, with reference to the initial data,

$$u_n(0)=u^n_0:=\sum_{l=1}^n c^n_l(0)\,e_l,\quad c^n_l(0):=(u_0,e_l)_{L^2(\Omega)},\qquad v_n(0)=v^n_0:=\sum_{l=1}^n d^n_l(0)\,e_l,\quad d^n_l(0):=(v_0,e_l)_{L^2(\Omega)}. \tag{20}$$

    In (19) we have used the following approximations of the noise coefficients:

$$\sigma^n_{w,k}(w_n):=\sum_{l=1}^n\sigma_{w,k,l}(w_n)\,e_l,\qquad\text{where}\quad\sigma_{w,k,l}(w_n):=\bigl(\sigma_{w,k}(w_n),e_l\bigr)_{L^2(\Omega)},\qquad w=u,v. \tag{21}$$

Using the Faedo-Galerkin equations (19), the regularity $u_n(t),v_n(t)\in H^2_N\cap L^\infty$, and basic properties of the projection operator $\Pi_n$, we obtain

$$\begin{aligned}
u_n(t)-u^n_0&-\int_0^t\Pi_n\Bigl[\nabla\cdot\Bigl(D_u\Bigl(\int_\Omega u_n\,dx\Bigr)\nabla u_n\Bigr)\Bigr]ds-\int_0^t\Pi_n\bigl[\nabla\cdot\bigl(A_{11}(u_n,v_n)\nabla u_n+A_{12}(u_n,v_n)\nabla v_n\bigr)\bigr]ds\\
&=\int_0^t\Pi_n\bigl[F(u_n,v_n)\bigr]ds+\int_0^t\sigma^n_u(u_n)\,dW^n_u(s)\qquad\text{in } L^2(\Omega),\\
v_n(t)-v^n_0&-\int_0^t\Pi_n\Bigl[\nabla\cdot\Bigl(D_v\Bigl(\int_\Omega v_n\,dx\Bigr)\nabla v_n\Bigr)\Bigr]ds-\int_0^t\Pi_n\bigl[\nabla\cdot\bigl(A_{21}(u_n,v_n)\nabla u_n+A_{22}(u_n,v_n)\nabla v_n\bigr)\bigr]ds\\
&=\int_0^t\Pi_n\bigl[G(u_n,v_n)\bigr]ds+\int_0^t\sigma^n_v(v_n)\,dW^n_v(s)\qquad\text{in } L^2(\Omega),
\end{aligned} \tag{22}$$

with initial data $u^n_0=\Pi_n u_0$ and $v^n_0=\Pi_n v_0$, where $\sigma^n_w(w_n)\,dW^n_w$ is short-hand notation for $\sum_{k=1}^n\sigma^n_{w,k}(w_n)\,dW_{w,k}$, $w=u,v$. The formulation (22) allows us to treat $u_n$, $v_n$ as stochastic processes in $\mathbb{R}^n$, so that one can apply the finite-dimensional Itô formula to the Faedo-Galerkin equations.
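To make the structure of (19), (22) concrete, the following minimal Python sketch integrates a system of the same form with an Euler-Maruyama step in a one-dimensional stand-in domain $(0,1)$ with a cosine basis. All model ingredients (Du, Dv, the Aij, theta, F, G and the noise coefficients sig) are hypothetical toy choices, not the coefficients analysed in the paper, and the scheme itself is only an illustration of the Galerkin system, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, nx, T, dt = 8, 8, 401, 0.5, 1e-3                  # modes, noise modes, grid, horizon, step
x = np.linspace(0.0, 1.0, nx)
l = np.arange(n)                                         # l = 0 corresponds to e_1 = 1
e = np.where(l[:, None] == 0, 1.0, np.sqrt(2.0) * np.cos(l[:, None] * np.pi * x))   # e_l(x)
de = -np.sqrt(2.0) * (l[:, None] * np.pi) * np.sin(l[:, None] * np.pi * x)          # e_l'(x)
ip = lambda f, g: np.trapz(f * g, x)                     # L2(0,1) inner product

# Toy ingredients (hypothetical stand-ins, not the paper's coefficients):
Du = lambda I: 1.0 + 0.5 * I / (1.0 + abs(I))            # nonlocal diffusivities, in the spirit of (7)
Dv = lambda I: 1.5 + 0.5 * I / (1.0 + abs(I))
pos = lambda w: np.maximum(w, 0.0)
A11 = lambda u, v: 0.1 * pos(u) + 0.1 * pos(v)           # toy cross-diffusion matrix entries
A12 = lambda u, v: 0.05 * pos(u)
A21 = lambda u, v: 0.05 * pos(v)
A22 = lambda u, v: 0.1 * pos(u) + 0.1 * pos(v)
theta = lambda u, v: pos(u) * pos(v) / (pos(u) + pos(v) + 1e-12)
F = lambda u, v: -theta(u, v) - 0.1 * u                  # toy reaction terms
G = lambda u, v: theta(u, v) - 0.2 * v
sig = lambda w, k: 0.3 * w / k**2                        # sigma_{w,k}(w), square-summable in k

c = np.array([ip(np.exp(-20.0 * (x - 0.3) ** 2), el) for el in e])   # c_l(0) = (u_0, e_l)
d = np.array([ip(np.exp(-20.0 * (x - 0.7) ** 2), el) for el in e])   # d_l(0) = (v_0, e_l)

for _ in range(int(T / dt)):                             # Euler-Maruyama step for the system (19)
    u, v = c @ e, d @ e                                  # u_n(t, x), v_n(t, x) on the grid
    ux, vx = c @ de, d @ de                              # their spatial gradients
    dWu, dWv = rng.normal(0.0, np.sqrt(dt), (2, K))
    fu = np.array([-Du(np.trapz(u, x)) * ip(ux, dl) - ip(A11(u, v) * ux + A12(u, v) * vx, dl)
                   + ip(F(u, v), el) for el, dl in zip(e, de)])
    fv = np.array([-Dv(np.trapz(v, x)) * ip(vx, dl) - ip(A21(u, v) * ux + A22(u, v) * vx, dl)
                   + ip(G(u, v), el) for el, dl in zip(e, de)])
    gu = np.array([sum(ip(sig(u, k + 1), el) * dWu[k] for k in range(K)) for el in e])
    gv = np.array([sum(ip(sig(v, k + 1), el) * dWv[k] for k in range(K)) for el in e])
    c, d = c + dt * fu + gu, d + dt * fv + gv

print("masses of u_n, v_n at t = T:", np.trapz(c @ e, x), np.trapz(d @ e, x))
```

Each update implements one pathwise realisation of the coefficient equations: the drift collects the weak-form diffusion, cross-diffusion and reaction terms tested against $e_l$, and the noise adds the projected increments $(\sigma_{w,k}(w_n),e_l)\,\Delta W_{w,k}$.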

    Remark 5. Our construction of approximate solutions makes use of Neumann boundary conditions, which are encoded in the space H2N. The zero-flux boundary conditions (3) are recovered when we pass to the limit to identify the weak martingale solution.

    The existence of pathwise solutions to the finite-dimensional problem (19), (20) is guaranteed by the next lemma.

Lemma 4.1. For each $n\in\mathbb{N}$, the Faedo-Galerkin equations (18), (19), (20) possess a unique adapted solution $(u_n(t),v_n(t))$ on $[0,T]$. Furthermore, $u_n,v_n\in C([0,T];X_n)$ a.s., where $X_n$ is defined in (16), and $E\bigl[\|w_n(t)\|^2_{L^2(\Omega)}\bigr]\lesssim_{T,n}1$, $t\in[0,T]$, $w=u,v$.

    Proof. We look for a stochastic process Cn taking values in Xn×Xn that is a solution to the following system of stochastic differential equations:

$$dC_n=M(C_n)\,dt+\Gamma(C_n)\,dW^n, \tag{23}$$

where $C_n=\begin{pmatrix}u_n\\ v_n\end{pmatrix}$, $M(C_n)=\begin{pmatrix}A_u(C_n)\\ A_v(C_n)\end{pmatrix}$, and

$$\begin{aligned}
A_u(C_n)&=\Pi_n\Bigl(\nabla\cdot\Bigl(D_u\Bigl(\int_\Omega u_n\,dx\Bigr)\nabla u_n\Bigr)\Bigr)+\Pi_n\bigl(\nabla\cdot\bigl(A_{11}(u_n,v_n)\nabla u_n+A_{12}(u_n,v_n)\nabla v_n\bigr)\bigr)+\Pi_n F(u_n,v_n),\\
A_v(C_n)&=\Pi_n\Bigl(\nabla\cdot\Bigl(D_v\Bigl(\int_\Omega v_n\,dx\Bigr)\nabla v_n\Bigr)\Bigr)+\Pi_n\bigl(\nabla\cdot\bigl(A_{21}(u_n,v_n)\nabla u_n+A_{22}(u_n,v_n)\nabla v_n\bigr)\bigr)+\Pi_n G(u_n,v_n).
\end{aligned}$$

Moreover, $\Gamma(C_n)\,dW^n$ is short-hand notation for $\begin{pmatrix}\sigma^n_u(u_n)\,dW^n_u\\ \sigma^n_v(v_n)\,dW^n_v\end{pmatrix}$. We complete (23) with initial data $C_n(0)=C^n_0$, where $C^n_0$ is the vector defined by (20).

    To prove the existence and uniqueness of a pathwise solution to (23), we will use [33,Theorem 3.1.1] (see also Theorem 5.1.3 in [33]), which asks that M and Γ satisfy the following conditions:

(ⅰ) — local weak monotonicity. For all $C_1=\begin{pmatrix}u_1\\ v_1\end{pmatrix}$ and $C_2=\begin{pmatrix}u_2\\ v_2\end{pmatrix}$ with $u_i,v_i\in X_n$ such that $\|u_i\|_{L^2(\Omega)},\|v_i\|_{L^2(\Omega)}\le r$, for any $r>0$ and $i=1,2$, we have

$$2\bigl(M(C_1)-M(C_2),C_1-C_2\bigr)+\|\Gamma(C_1)-\Gamma(C_2)\|^2_{L^2(\Omega)}\le K(r)\,\|C_1-C_2\|^2_{L^2(\Omega)}, \tag{24}$$

for a constant $K(r)$ that may depend on $r$, where $(\cdot,\cdot)$ denotes the $L^2(\Omega)$ inner product.

(ⅱ) — weak coercivity. For all $C=\begin{pmatrix}u\\ v\end{pmatrix}$ with $u,v\in X_n$,

$$2\bigl(M(C),C\bigr)+\|\Gamma(C)\|^2_{L^2(\Omega)}\le K\bigl(1+\|C\|^2_{L^2(\Omega)}\bigr), \tag{25}$$

    for some constant K>0.

    The weak coercivity condition (25) is easily verified using the assumption (8) and the global Lipschitz continuity of F,G,Γ.

Let us verify the weak monotonicity condition (24) in some detail. Fix a real number $r>0$ and set $\bar u:=u_1-u_2$ and $\bar v:=v_1-v_2$, where $u_i,v_i$ are arbitrary functions in $X_n$ for which $\|u_i\|_{L^2(\Omega)},\|v_i\|_{L^2(\Omega)}\le r$ for $i=1,2$. In view of (8) and Young's inequality,

$$\bigl(M(C_1)-M(C_2),C_1-C_2\bigr)+\|\Gamma(C_1)-\Gamma(C_2)\|^2_{L^2(\Omega)}=\sum_{i=0}^6 I_i, \tag{26}$$

where $I_0=\|\Gamma(C_1)-\Gamma(C_2)\|^2_{L^2(\Omega)}\overset{(12)}{\lesssim}\|C_1-C_2\|^2_{L^2(\Omega)}$ and

$$\begin{aligned}
I_1&=-\sum_{w=u,v}D_w\Bigl(\int_\Omega w_1\,dx\Bigr)(\nabla\bar w,\nabla\bar w), &
I_2&=-\sum_{w=u,v}\Bigl(D_w\Bigl(\int_\Omega w_1\,dx\Bigr)-D_w\Bigl(\int_\Omega w_2\,dx\Bigr)\Bigr)(\nabla w_2,\nabla\bar w),\\
I_3&=-\Bigl(A(u_1,v_1)\begin{pmatrix}\nabla\bar u\\ \nabla\bar v\end{pmatrix},\begin{pmatrix}\nabla\bar u\\ \nabla\bar v\end{pmatrix}\Bigr), &
I_4&=-\Bigl(\bigl(A(u_1,v_1)-A(u_2,v_2)\bigr)\begin{pmatrix}\nabla u_2\\ \nabla v_2\end{pmatrix},\begin{pmatrix}\nabla\bar u\\ \nabla\bar v\end{pmatrix}\Bigr),\\
I_5&=\bigl(F(u_1,v_1)-F(u_2,v_2),\bar u\bigr), &
I_6&=\bigl(G(u_1,v_1)-G(u_2,v_2),\bar v\bigr).
\end{aligned}$$

Recall that the basis functions $e_l$ belong to $H^2_N$ and that $H^2_N\subset W^{1,p}(\Omega)\cap L^\infty(\Omega)$ for any $p\in[1,6]$ (as $\Omega\subset\mathbb{R}^3$ is bounded). Hence, the assumption $\|w_i\|_{L^2(\Omega)}\le r$ implies $\|w_i\|_{H^2_N}\lesssim_{r,n}1$, for $w=u,v$ and $i=1,2$. In view of (7),

$$|I_2|\lesssim\sum_{w=u,v}\|w_1-w_2\|_{L^1(\Omega)}\,\|\nabla w_2\|_{L^2(\Omega)}\,\|\nabla\bar w\|_{L^2(\Omega)},$$

and so $|I_2|\lesssim_{r,n}\sum_{w=u,v}\|w_1-w_2\|^2_{L^2(\Omega)}$. Similarly, given the assumption (8),

$$|I_4|\lesssim\Bigl(\sum_{w=u,v}\|w_1-w_2\|_{L^2(\Omega)}\Bigr)\Bigl(\sum_{w=u,v}\|\nabla w_2\|_{L^4(\Omega)}\Bigr)\Bigl(\sum_{w=u,v}\|\nabla\bar w\|_{L^4(\Omega)}\Bigr)\lesssim\Bigl(\sum_{w=u,v}\|w_1-w_2\|_{L^2(\Omega)}\Bigr)\Bigl(\sum_{w=u,v}\|w_2\|_{H^2_N}\Bigr)\Bigl(\sum_{w=u,v}\|\bar w\|_{H^2_N}\Bigr),$$

and so $|I_4|\lesssim_{r,n}\sum_{w=u,v}\|w_1-w_2\|^2_{L^2(\Omega)}$. In view of the global Lipschitz continuity of the reaction functions $F$ and $G$, cf. (4), it follows that

$$|I_5|+|I_6|\lesssim\Bigl(\sum_{w=u,v}\|w_1-w_2\|_{L^2(\Omega)}\Bigr)\Bigl(\sum_{w=u,v}\|\bar w\|_{L^2(\Omega)}\Bigr),$$

so that $|I_5|+|I_6|\lesssim\sum_{w=u,v}\|w_1-w_2\|^2_{L^2(\Omega)}$. Finally, by (7) and (8), $I_1,I_3\le0$.

Referring to (26), this implies $\sum_{i=0}^6 I_i\lesssim_{r,n}\|C_1-C_2\|^2_{L^2(\Omega)}$, and (24) thus holds.

    We start with a series of basic energy-type estimates.

Lemma 5.1. Let $u_n(t),v_n(t)$, $t\in[0,T]$, satisfy (19), (20). There is a constant $C>0$, independent of $n$, such that

$$E\bigl[\|u_n(t)\|^2_{L^2(\Omega)}\bigr]+E\bigl[\|v_n(t)\|^2_{L^2(\Omega)}\bigr]\le C,\qquad t\in[0,T]; \tag{27}$$
$$E\Bigl[\int_0^T\!\!\int_\Omega|\nabla u_n|^2\,dx\,dt\Bigr]+E\Bigl[\int_0^T\!\!\int_\Omega|\nabla v_n|^2\,dx\,dt\Bigr]\le C; \tag{28}$$
$$E\Bigl[\int_0^T\!\!\int_\Omega|A_{ij}(u_n,v_n)|\bigl(|\nabla u_n|^2+|\nabla v_n|^2\bigr)\,dx\,dt\Bigr]\le C,\qquad i,j=1,2; \tag{29}$$
$$E\Bigl[\sup_{t\in[0,T]}\|u_n(t)\|^2_{L^2(\Omega)}\Bigr]+E\Bigl[\sup_{t\in[0,T]}\|v_n(t)\|^2_{L^2(\Omega)}\Bigr]\le C. \tag{30}$$

Proof. By Itô's formula, $dS(w_n)=S'(w_n)\,dw_n+\frac12 S''(w_n)\sum_{k=1}^n\bigl(\sigma^n_{w,k}(w_n)\bigr)^2\,dt$, $w=u,v$, for any $C^2$ function $S:\mathbb{R}\to\mathbb{R}$. Hence, with $S(w)=\frac12|w|^2$,

$$\begin{aligned}
\frac12\sum_{w=u,v}\|w_n(t)\|^2_{L^2(\Omega)}&+\sum_{w=u,v}\int_0^t D_w\Bigl(\int_\Omega w_n\,dx\Bigr)\int_\Omega|\nabla w_n|^2\,dx\,ds\\
&\quad+\int_0^t\bigl(A_{11}(u_n,v_n)\nabla u_n+A_{12}(u_n,v_n)\nabla v_n,\nabla u_n\bigr)_{L^2(\Omega)}ds+\int_0^t\bigl(A_{21}(u_n,v_n)\nabla u_n+A_{22}(u_n,v_n)\nabla v_n,\nabla v_n\bigr)_{L^2(\Omega)}ds\\
&=\frac12\sum_{w=u,v}\|w_n(0)\|^2_{L^2(\Omega)}+\int_0^t\bigl(F(u_n,v_n),u_n\bigr)_{L^2(\Omega)}ds+\int_0^t\bigl(G(u_n,v_n),v_n\bigr)_{L^2(\Omega)}ds\\
&\quad+\sum_{w=u,v}\sum_{k=1}^n\int_0^t\!\!\int_\Omega w_n\,\sigma^n_{w,k}(w_n)\,dx\,dW_{w,k}+\frac12\sum_{w=u,v}\sum_{k=1}^n\int_0^t\!\!\int_\Omega\bigl(\sigma^n_{w,k}(w_n)\bigr)^2dx\,ds\\
&\le\frac12\sum_{w=u,v}\|w_n(0)\|^2_{L^2(\Omega)}+C\int_0^t\Bigl(1+\|u_n(s)\|^2_{L^2(\Omega)}+\|v_n(s)\|^2_{L^2(\Omega)}\Bigr)ds+\sum_{w=u,v}\sum_{k=1}^n\int_0^t\!\!\int_\Omega w_n\,\sigma^n_{w,k}(w_n)\,dx\,dW_{w,k}(s),
\end{aligned} \tag{31}$$

where we have put to good use (4), (5), and also (6). By the fundamental assumption (8), the sum of the $A_{ij}$ terms is bounded from below by a constant times $\int_0^t\!\int_\Omega|A(u_n,v_n)|\bigl(|\nabla u_n|^2+|\nabla v_n|^2\bigr)dx\,ds$, so

$$\begin{aligned}
\sum_{w=u,v}\|w_n(t)\|^2_{L^2(\Omega)}&+\sum_{w=u,v}C_m\int_0^t\!\!\int_\Omega|\nabla w_n|^2\,dx\,ds+\sum_{w=u,v}\int_0^t\!\!\int_\Omega|A(u_n,v_n)|\,|\nabla w_n|^2\,dx\,ds\\
&\lesssim\sum_{w=u,v}\|w_n(0)\|^2_{L^2(\Omega)}+C\int_0^t\Bigl(1+\sum_{w=u,v}\|w_n(s)\|^2_{L^2(\Omega)}\Bigr)ds+\sum_{w=u,v}\sum_{k=1}^n\int_0^t\!\!\int_\Omega w_n\,\sigma^n_{w,k}(w_n)\,dx\,dW_{w,k}(s),
\end{aligned} \tag{32}$$

where we have also used (7). Applying $E[\cdot]$ to (32) and using the Gronwall inequality, we arrive at (27), (28), and (29), recalling that the initial data $u_0,v_0$ belong to $L^2$.

To prove the final estimate (30), we take $\sup_{t\in[0,T]}$ and then $E[\cdot]$ in (31). Using (27) and the $L^2$ boundedness of the initial data, we end up with the estimate

$$\sum_{w=u,v}E\Bigl[\sup_{t\in[0,T]}\|w_n(t)\|^2_{L^2(\Omega)}\Bigr]\le C\Bigl(1+\sum_{w=u,v}I_w\Bigr), \tag{33}$$

where $I_w:=E\bigl[\sup_{t\in[0,T]}\bigl|\sum_{k=1}^n\int_0^t\int_\Omega w_n\,\sigma^n_{w,k}(w_n)\,dx\,dW_{w,k}(s)\bigr|\bigr]$. Using the BDG inequality (13), the Cauchy-Schwarz inequality, (11), Cauchy's inequality, and (27), we proceed as follows for $w=u,v$:

$$\begin{aligned}
|I_w|&\le C\,E\Bigl[\Bigl(\int_0^T\sum_{k=1}^n\Bigl|\int_\Omega w_n\,\sigma^n_{w,k}(w_n)\,dx\Bigr|^2dt\Bigr)^{1/2}\Bigr]\le C\,E\Bigl[\Bigl(\int_0^T\Bigl(\int_\Omega|w_n|^2dx\Bigr)\Bigl(\sum_{k=1}^n\int_\Omega|\sigma^n_{w,k}(w_n)|^2dx\Bigr)dt\Bigr)^{1/2}\Bigr]\\
&\le C\,E\Bigl[\Bigl(\sup_{t\in[0,T]}\int_\Omega|w_n|^2dx\Bigr)^{1/2}\Bigl(\int_0^T\sum_{k=1}^n\int_\Omega|\sigma^n_{w,k}(w_n)|^2dx\,dt\Bigr)^{1/2}\Bigr]\\
&\le\alpha\,E\Bigl[\sup_{t\in[0,T]}\int_\Omega|w_n|^2dx\Bigr]+C(\alpha)\,E\Bigl[\int_0^T\sum_{k=1}^n\int_\Omega|\sigma^n_{w,k}(w_n)|^2dx\,dt\Bigr]\le\alpha\,E\Bigl[\sup_{t\in[0,T]}\|w_n(t)\|^2_{L^2(\Omega)}\Bigr]+C,
\end{aligned} \tag{34}$$

    for any number α>0. Combining the inequalities (33) and (34), and choosing α>0 small, we arrive at the estimate (30).

Later we will need to convert a.s. convergence into $L^2$ convergence. To this end, the next result, which contains improved integrability estimates, is useful.

Corollary 1. Let $u_n(t),v_n(t)$, $t\in[0,T]$, satisfy (19), (20). Suppose $u_0,v_0$ belong to $L^q(D,\mathcal{F},P;L^2(\Omega))$ with $q\in(2,q_0]$, cf. (15). Then there exists a constant $C>0$, independent of $n$, such that

$$E\Bigl[\sup_{0\le t\le T}\|w_n(t)\|^q_{L^2(\Omega)}\Bigr]\le C,\qquad E\bigl[\|\nabla w_n\|^q_{L^2((0,T)\times\Omega)}\bigr]\le C,\qquad w=u,v, \tag{35}$$

and

$$E\Bigl[\Bigl|\int_0^T\!\!\int_\Omega|A_{ij}(u_n,v_n)|\bigl(|\nabla u_n|^2+|\nabla v_n|^2\bigr)\,dx\,dt\Bigr|^{q/2}\Bigr]\le C,\qquad i,j=1,2. \tag{36}$$

Proof. Starting off from (31), the following estimate holds for any $(\omega,t)\in D\times[0,T]$:

$$\sum_{w=u,v}\sup_{0\le\tau\le t}\|w_n(\tau)\|^2_{L^2(\Omega)}\lesssim\sum_{w=u,v}\|w_n(0)\|^2_{L^2(\Omega)}+C\sum_{w=u,v}\int_0^t\|w_n(s)\|^2_{L^2(\Omega)}\,ds+C\sum_{w=u,v}\sup_{0\le\tau\le t}\Bigl|\sum_{k=1}^n\int_0^\tau\!\!\int_\Omega w_n\,\sigma^n_{w,k}(w_n)\,dx\,dW_{w,k}(s)\Bigr|,$$

for some constant $C$ independent of $n$. Next, we raise both sides of this inequality to the power $q/2$ and take the expectation, eventually obtaining

$$\sum_{w=u,v}E\Bigl[\sup_{0\le\tau\le t}\|w_n(\tau)\|^q_{L^2(\Omega)}\Bigr]\le C\sum_{w=u,v}E\bigl[\|w_n(0)\|^q_{L^2(\Omega)}\bigr]+C(1+t)^{q/2}+C\sum_{w=u,v}\int_0^t E\bigl[\|w_n(s)\|^q_{L^2(\Omega)}\bigr]ds+\sum_{w=u,v}I_w, \tag{37}$$

where $I_w=E\Bigl[\sup_{0\le\tau\le t}\Bigl|\sum_{k=1}^n\int_0^\tau\!\!\int_\Omega w_n\,\sigma^n_{w,k}(w_n)\,dx\,dW_{w,k}(s)\Bigr|^{q/2}\Bigr]$. Relying on the martingale inequality (13), we proceed as in (34):

$$\begin{aligned}
I_w&\le C\,E\Bigl[\Bigl(\int_0^t\sum_{k=1}^n\Bigl|\int_\Omega w_n\,\sigma^n_{w,k}(w_n)\,dx\Bigr|^2ds\Bigr)^{q/4}\Bigr]\le C\,E\Bigl[\Bigl(\int_0^t\Bigl(\int_\Omega|w_n|^2dx\Bigr)\Bigl(\sum_{k=1}^n\int_\Omega|\sigma^n_{w,k}(w_n)|^2dx\Bigr)ds\Bigr)^{q/4}\Bigr]\\
&\le C\,E\Bigl[\Bigl(\sup_{\tau\in[0,t]}\int_\Omega|w_n|^2dx\Bigr)^{q/4}\Bigl(\int_0^t\sum_{k=1}^n\int_\Omega|\sigma^n_{w,k}(w_n)|^2dx\,ds\Bigr)^{q/4}\Bigr]\\
&\le\alpha\,E\Bigl[\Bigl(\sup_{\tau\in[0,t]}\int_\Omega|w_n|^2dx\Bigr)^{q/2}\Bigr]+C(\alpha)\,E\Bigl[\Bigl(\int_0^t\sum_{k=1}^n\int_\Omega|\sigma^n_{w,k}(w_n)|^2dx\,ds\Bigr)^{q/2}\Bigr]\\
&\le\alpha\,E\Bigl[\sup_{\tau\in[0,t]}\|w_n(\tau)\|^q_{L^2(\Omega)}\Bigr]+C\,E\Bigl[\int_0^t\|w_n(s)\|^q_{L^2(\Omega)}\,ds\Bigr]+C,
\end{aligned} \tag{38}$$

for any number $\alpha>0$. Choosing $\alpha$ small, we conclude from (37), (38) that

$$\sum_{w=u,v}E\Bigl[\sup_{0\le\tau\le t}\|w_n(\tau)\|^q_{L^2(\Omega)}\Bigr]\le C\sum_{w=u,v}E\bigl[\|w_n(0)\|^q_{L^2(\Omega)}\bigr]+C\sum_{w=u,v}\int_0^t E\bigl[\|w_n(s)\|^q_{L^2(\Omega)}\bigr]ds+C,$$

    for some constant C>0 independent of n. An application of Grönwall's inequality now yields the sought-after estimate (35).

    Finally, we use (32), the first part of (38), and (35) to conclude that there is a constant C>0, independent of n, such that

$$\sum_{w=u,v}E\Bigl[\Bigl|\int_0^t\!\!\int_\Omega|\nabla w_n|^2\,dx\,ds\Bigr|^{q/2}\Bigr]\le C,$$

    and the second part of (35) follows. Similarly, we derive (36).

Given Lemma 5.1, it is easy to see that the cross-diffusion terms $A_{i1}(u_n,v_n)\nabla u_n$ and $A_{i2}(u_n,v_n)\nabla v_n$, $i=1,2$, are uniformly bounded in $L^q$ only for some $q<2$. As a result, we cannot control the time translation of the approximate solution in the space $(H^1(\Omega))^*$. Although we expect the exact solution to be continuous in time with values in $(W^{1,4}(\Omega))^*$ (evident by inspecting the proof below), the sequence $\{e_l\}_{l=1}^\infty$ is not a basis of $W^{1,4}(\Omega)$, although it is one of $H^2_N\subset W^{1,4}$; consequently, we cannot control the projection operator in $W^{1,4}(\Omega)$, but we can in $H^2_N$. To ensure strong $L^2_{t,x}$ compactness of a sequence of Faedo-Galerkin solutions, we will therefore establish a temporal translation estimate in the larger space $(H^2_N)^*\supset(W^{1,4}(\Omega))^*\supset(H^1(\Omega))^*$, which is enough to work out the required $L^2_{t,x}$ compactness (and tightness).

Lemma 6.1. Extend the Faedo-Galerkin functions $u_n(t),v_n(t)$, $t\in[0,T]$, which satisfy (19) and (20), by zero outside of $[0,T]$. There exists a constant $C=C(T,\Omega)>0$, independent of $n$, such that

$$E\Bigl[\sup_{|\tau|\in(0,\delta)}\|w_n(t+\tau)-w_n(t)\|_{(H^2_N)^*}\Bigr]\le C\,\delta^{1/4},\qquad t\in[0,T], \tag{39}$$

    for any sufficiently small δ>0, w=u,v.

Proof. In what follows, we write $\langle\cdot,\cdot\rangle$ instead of $\langle\cdot,\cdot\rangle_{(H^2_N)^*,H^2_N}$. We will estimate the expected value of

$$I(t,\tau):=\|u_n(t+\tau,\cdot)-u_n(t,\cdot)\|_{(H^2_N)^*}=\sup\bigl\{|\langle u_n(t+\tau,\cdot)-u_n(t,\cdot),\phi\rangle|:\phi\in H^2_N,\ \|\phi\|_{H^2_N}\le1\bigr\}=\sup\Bigl\{\int_\Omega\bigl(u_n(t+\tau,x)-u_n(t,x)\bigr)\phi(x)\,dx:\phi\in H^2_N,\ \|\phi\|_{H^2_N}\le1\Bigr\},$$

for $\tau\in(0,\delta)$, $\delta>0$. The same estimate can be derived for $\tau\in(-\delta,0)$.

    By (18),

$$I(t,\tau):=\|u_n(t+\tau,\cdot)-u_n(t,\cdot)\|_{(H^2_N)^*}\le\sum_{i=1}^4 I_i(t,\tau),$$

where

$$\begin{aligned}
I_1(t,\tau)&=\Bigl\|\int_t^{t+\tau}\Pi_n\Bigl[\nabla\cdot\Bigl(D_u\Bigl(\int_\Omega u_n\,dx\Bigr)\nabla u_n\Bigr)\Bigr]ds\Bigr\|_{(H^2_N)^*},\qquad
I_2(t,\tau)=\Bigl\|\int_t^{t+\tau}\Pi_n\bigl[\nabla\cdot\bigl(A_{11}(u_n,v_n)\nabla u_n+A_{12}(u_n,v_n)\nabla v_n\bigr)\bigr]ds\Bigr\|_{(H^2_N)^*},\\
I_3(t,\tau)&=\Bigl\|\int_t^{t+\tau}\Pi_n\bigl[F(u_n,v_n)\bigr]ds\Bigr\|_{(H^2_N)^*},\qquad
I_4(t,\tau)=\Bigl\|\sum_{k=1}^n\int_t^{t+\tau}\sigma^n_{u,k}(u_n)\,dW_{u,k}(s)\Bigr\|_{(H^2_N)^*}.
\end{aligned}$$

Estimate of $I_2$. Setting $L^n_{2,u}:=\Pi_n\bigl[\nabla\cdot\bigl(A_{11}(u_n,v_n)\nabla u_n\bigr)\bigr]$, let us estimate

$$\Bigl\|\int_t^{t+\tau}L^n_{2,u}\,ds\Bigr\|_{(H^2_N)^*}=\sup\Bigl\{\Bigl|\Bigl\langle\int_t^{t+\tau}L^n_{2,u}\,ds,\phi\Bigr\rangle\Bigr|:\phi\in H^2_N,\ \|\phi\|_{H^2_N}\le1\Bigr\}=\sup\Bigl\{\Bigl|\int_t^{t+\tau}\!\!\int_\Omega L^n_{2,u}\,\phi\,dx\,ds\Bigr|:\phi\in H^2_N,\ \|\phi\|_{H^2_N}\le1\Bigr\}=\sup\Bigl\{\Bigl|\int_t^{t+\tau}\!\!\int_\Omega A_{11}(u_n,v_n)\nabla u_n\cdot\nabla\Pi_n\phi\,dx\,ds\Bigr|:\phi\in H^2_N,\ \|\phi\|_{H^2_N}\le1\Bigr\}$$

by bounding the term

$$I:=\Bigl|\int_t^{t+\tau}\!\!\int_\Omega A_{11}(u_n,v_n)\nabla u_n\cdot\nabla\Pi_n\phi\,dx\,ds\Bigr|.$$

    By the generalised Hölder inequality,

$$I\le\tau^{1/4}\,\bigl\|\sqrt{|A_{11}(u_n,v_n)|}\bigr\|_{L^4((0,T)\times\Omega)}\,\bigl\|\sqrt{|A_{11}(u_n,v_n)|}\,|\nabla u_n|\bigr\|_{L^2((0,T)\times\Omega)}\,\|\nabla\Pi_n\phi\|_{L^4(\Omega)}.$$

Now we use that $H^2_N$ is continuously embedded in $W^{1,p}(\Omega)$ for every $p\in[1,6]$ (recalling that $\Omega\subset\mathbb{R}^3$ is bounded), so

$$\|\nabla\Pi_n\phi\|_{L^4(\Omega)}\le\|\Pi_n\phi\|_{W^{1,4}(\Omega)}\lesssim\|\Pi_n\phi\|_{H^2_N}.$$

As $\{e_l\}_{l=1}^\infty$ is a basis of $H^2_N$, $\|\Pi_n\phi\|_{H^2_N}\lesssim\|\phi\|_{H^2_N}$ and thus $\|\nabla\Pi_n\phi\|_{L^4(\Omega)}\lesssim\|\phi\|_{H^2_N}$. Using this bound and Young's product inequality,

$$I\lesssim\tau^{1/4}\Bigl(\bigl\|\sqrt{|A_{11}(u_n,v_n)|}\bigr\|^2_{L^4((0,T)\times\Omega)}+\bigl\|\sqrt{|A_{11}(u_n,v_n)|}\,|\nabla u_n|\bigr\|^2_{L^2((0,T)\times\Omega)}\Bigr)\|\phi\|_{H^2_N}.$$

    Note that

$$\bigl\|\sqrt{|A_{11}(u_n,v_n)|}\bigr\|^2_{L^4((0,T)\times\Omega)}=\|A_{11}(u_n,v_n)\|_{L^2((0,T)\times\Omega)}\lesssim1+\|u_n+v_n\|_{L^2((0,T)\times\Omega)}\lesssim_{T,\Omega}1+\|u_n\|_{L^\infty(0,T;L^2(\Omega))}+\|v_n\|_{L^\infty(0,T;L^2(\Omega))}.$$

    Consequently, after taking the expectation and using (29) and (30),

$$E[I]\lesssim_{T,\Omega}\tau^{1/4}\,\|\phi\|_{H^2_N}.$$

    Summarising,

$$E\Bigl[\sup_{\tau\in(0,\delta)}\sup\Bigl\{\Bigl|\Bigl\langle\int_t^{t+\tau}L^n_{2,u}\,ds,\phi\Bigr\rangle\Bigr|:\phi\in H^2_N,\ \|\phi\|_{H^2_N}\le1\Bigr\}\Bigr]\lesssim\delta^{1/4},$$

i.e.,

$$E\Bigl[\sup_{\tau\in(0,\delta)}\Bigl\|\int_t^{t+\tau}L^n_{2,u}\,ds\Bigr\|_{(H^2_N)^*}\Bigr]\lesssim\delta^{1/4}.$$

A similar estimate holds for $L^n_{2,v}:=\Pi_n\bigl[\nabla\cdot\bigl(A_{12}(u_n,v_n)\nabla v_n\bigr)\bigr]$, and therefore

$$E\Bigl[\sup_{0\le\tau\le\delta}I_2(t,\tau)\Bigr]\lesssim\delta^{1/4},\qquad\text{uniformly in } t\in[0,T].$$

Estimate of $I_1$. Set $L^n_1:=\Pi_n\bigl[\nabla\cdot\bigl(D_u\bigl(\int_\Omega u_n\,dx\bigr)\nabla u_n\bigr)\bigr]$. Given (7),

$$\Bigl|D_u\Bigl(\int_\Omega u_n(t,x)\,dx\Bigr)\Bigr|^2\lesssim1+\Bigl(\int_\Omega|u_n(t,x)|\,dx\Bigr)^2\lesssim_\Omega1+\|u_n\|^2_{L^\infty(0,T;L^2(\Omega))}.$$

    Using this, we bound

$$\Bigl|\Bigl\langle\int_t^{t+\tau}L^n_1\,ds,\phi\Bigr\rangle\Bigr|=\Bigl|\int_t^{t+\tau}\!\!\int_\Omega D_u\Bigl(\int_\Omega u_n\,dx\Bigr)\nabla u_n\cdot\nabla\Pi_n\phi\,dx\,ds\Bigr|$$

by a constant times

$$\tau^{1/2}\Bigl(\int_0^T\!\!\int_\Omega\bigl(1+\|u_n\|^2_{L^\infty(0,T;L^2(\Omega))}\bigr)|\nabla u_n|^2\,dx\,ds\Bigr)^{1/2}\|\nabla\Pi_n\phi\|_{L^2(\Omega)}\lesssim\tau^{1/2}\bigl(1+\|u_n\|_{L^\infty(0,T;L^2(\Omega))}\bigr)\|\nabla u_n\|_{L^2((0,T)\times\Omega)}\,\|\Pi_n\phi\|_{H^1(\Omega)}.$$

Recalling that the sequence $\{e_l\}_{l=1}^\infty$ is an orthogonal basis of $H^1(\Omega)$, we have

$$\|\Pi_n\phi\|_{H^1(\Omega)}\le\|\phi\|_{H^1(\Omega)}\lesssim\|\phi\|_{H^2_N}.$$

    Taking the expectation and using Young's inequality,

$$E\bigl[\bigl(1+\|u_n\|_{L^\infty(0,T;L^2(\Omega))}\bigr)\|\nabla u_n\|_{L^2((0,T)\times\Omega)}\bigr]\lesssim1+E\bigl[\|u_n\|^2_{L^\infty(0,T;L^2(\Omega))}\bigr]+E\bigl[\|\nabla u_n\|^2_{L^2((0,T)\times\Omega)}\bigr]\overset{(28),(30)}{\lesssim}1,$$

    and thus we conclude that

$$E\Bigl[\sup_{\tau\in(0,\delta)}\sup\Bigl\{\Bigl|\Bigl\langle\int_t^{t+\tau}L^n_1\,ds,\phi\Bigr\rangle\Bigr|:\phi\in H^2_N,\ \|\phi\|_{H^2_N}\le1\Bigr\}\Bigr]\lesssim\delta^{1/2},$$

i.e.,

$$E\Bigl[\sup_{\tau\in(0,\delta)}I_1(t,\tau)\Bigr]\lesssim\delta^{1/2},\qquad\text{uniformly in } t\in[0,T].$$

Estimate of $I_3$. Set $L^n_3:=\Pi_n[F(u_n,v_n)]$. The function $F$ grows at most linearly in both its arguments, which follows from (4), (5) and (6). Using this, we bound

$$\Bigl|\Bigl\langle\int_t^{t+\tau}L^n_3\,ds,\phi\Bigr\rangle\Bigr|=\Bigl|\int_t^{t+\tau}\!\!\int_\Omega F(u_n,v_n)\,\Pi_n\phi\,dx\,ds\Bigr|$$

by a constant times

$$\tau^{1/2}\bigl\|1+u_n+v_n\bigr\|_{L^2((0,T)\times\Omega)}\,\|\Pi_n\phi\|_{L^2(\Omega)}\lesssim\tau^{1/2}\bigl(1+\|u_n\|^2_{L^2((0,T)\times\Omega)}+\|v_n\|^2_{L^2((0,T)\times\Omega)}\bigr)\|\phi\|_{H^2_N},$$

where we have used Young's inequality and that the sequence $\{e_l\}_{l=1}^\infty$ is an orthonormal basis of $L^2(\Omega)$, so that $\|\Pi_n\phi\|_{L^2(\Omega)}\le\|\phi\|_{L^2(\Omega)}\lesssim\|\phi\|_{H^2_N}$. Hence

$$E\Bigl[\sup_{\tau\in(0,\delta)}\sup\Bigl\{\Bigl|\Bigl\langle\int_t^{t+\tau}L^n_3\,ds,\phi\Bigr\rangle\Bigr|:\phi\in H^2_N,\ \|\phi\|_{H^2_N}\le1\Bigr\}\Bigr]\lesssim\delta^{1/2},$$

i.e.,

$$E\Bigl[\sup_{\tau\in(0,\delta)}I_3(t,\tau)\Bigr]\lesssim\delta^{1/2},\qquad\text{uniformly in } t\in[0,T].$$

    Set . We bound

    by a constant times

    where . By the Burkholder-Davis-Gundy inequality (13),

    where . As a result,

    i.e.,

Summarising our estimates of $I_1,\dots,I_4$ concludes the proof of (39) for $w=u$. The proof for $w=v$ is the same.

    In this section we establish the tightness of the probability measures (laws) generated by the Faedo-Galerkin solutions . Note that the strong convergence of in is a consequence of the spatial bound (28) and the time translation estimate (39), recalling that . To secure the strong (almost sure) convergence in the probability variable , we need to use some results of Skorokhod linked to tightness (weak compactness) of probability measures and almost sure representations of random variables.

    We choose the following phase space for the probability laws of the Faedo-Galerkin approximations:

    where

    and ( is defined in Section 2)

    As , are Polish spaces, the intersection space is Polish. It is also a fact that products of Polish spaces are Polish. Therefore, since and are Polish, is a Polish space. We denote by the -algebra of Borel subsets of , and introduce the measurable mapping

    We define a probability measure on by

    (40)

    Denote by , , , , , the respective laws of , , , , and , which are defined respectively on , , , and . Thus

    Remark 6. As a cartesian product of topological spaces, is always equipped with the product topology and, thus, the Borel -algebra generated by the product topology. Of course, on there are two natural -algebras: the product of the Borel -algebras and the already introduced for the product topology. For Polish (and separable metric) spaces, these two coincide. This implies that coordinatewise measurability and tightness is the same as joint measurability and tightness, which is important since we use the product of the Borel -algebras in the computations below leading up to the joint tightness and weak convergence in the product space .

    Given sequences of positive numbers tending to zero as (to be specified below), introduce the set

    It is easy to see that is a Banach space under the norm

    In view of [36], we have

    where means that is compactly embedded in . Indeed, to conclude this we need Theorem 5 in [36] on the compactness of functions with values in an intermediate space. Let , , be Banach spaces with continuous embeddings and compactly embedded in . Then [36,Theorem 5] ensures that is relatively compact in , with , if is bounded in and, as , there holds that , uniformly for , if is finite. If , then the relative compactness is in . First, we will apply this result with , , and , which implies relative compactness in . Second, we will apply it with , , and , to conclude relative compactness in the space .

    Now we verify that the laws , cf. (40), of the Faedo-Galerkin solutions are tight.

Lemma 7.1. The sequence of probability measures is (uniformly) tight, and therefore weakly compact, on the phase space .

    Proof. For each , we need to produce compact sets

    such that , where is short-hand notation for . This follows if we show that for .

    To this end, pick the sequences , such that

    (41)

    and take

    where is a number to be determined later. In view of [36,Theorem 5], is a compact subset of . For , we have

    Repeated applications of the Chebyshev inequality supply

    where we have used (28), (30), and (39). From this, we can choose such that

    Regarding the finite-dimensional approximations of the Wiener processes, we know that the finite series are -a.s. convergent in as . This implies that the laws converge weakly. Now we use Prokhorov's weak compactness characterization (see e.g. [11,Theorem 2.3])) to conclude the tightness of and ; thus, for any , there exists a compact set in such that

    Similarly, the initial data approximations are -a.s. convergent in as , and so the laws converge weakly (with , ). As a result, these laws are tight and thus

    Summarising, is a tight sequence of probability measures. By Prokhorov's theorem [11,Theorem 2.3], this implies the weak compactness of .

    As the probability measures linked to the Faedo-Galerkin approximations form a sequence that is weakly compact on , we deduce that converges weakly to a probability measure on , up to a subsequence that we do not relabel. We can then apply the Skorokhod representation theorem (see e.g. [11,Theorem 2.4]) to deduce the existence of a new (complete) probability space and new random variables

    (42)

    with respective joint laws and , such that almost surely in the topology of , i.e., the following convergences hold -almost surely as :

    (43)

    By equality of the laws, the estimates in Lemma 5.1 and Corollary 1 continue to hold for the new random variables (). In fact, all statistical estimates for the Faedo-Galerkin approximations are valid for the "tilde" approximations defined on the new probability space . Recall -a.s., where and each belongs to Besides, by elliptic regularity, is smooth in . Since and have the same laws and is a Borel subset of , it follows that and -a.s., . Moreover, we have

Lemma 7.2. Let , , , , , be the Skorokhod representations of the Faedo-Galerkin approximations, cf. (42). There exists a constant , independent of , such that

    (44)

    for any , see (15) and Corollary 1 for the appearance of .

    Proof. We prove the first estimate in (44), as the other ones can be proved in the same way. Let be a continuous injection between Polish spaces. According to the Lusin-Suslin theorem, is a Borel set in . Since is continuously embedded in , we can apply the Lusin-Suslin theorem to ensure that is a Borel set in . Hence, as is a measure on and is continuous ( Borel measurable), the integration makes sense. Consequently, .

    Recalling (42), consider the associated stochastic basis

    (45)

    where

    The filtration is the smallest one making all the "tilde processes" , , , , , and adapted.

    A cylindrical Wiener process is fully determined by its law. By equality of the laws and Lévy's martingale characterization of a Wiener process, see [11,Theorem 4.6], we conclude that and are cylindrical Wiener processes with respect to their canonical filtrations. Furthermore, we claim that , are cylindrical Wiener processes relative to the filtration defined in (45). To prove this, we must verify that is measurable and is independent of , for all , . These properties are simple consequences of the fact that and have the same laws and that is measurable and is independent of .

    Hence, there exist sequences , of mutually independent real-valued Wiener processes adapted to such that

    (46)

    recalling that is the basis of and the series converge in (cf. Sect. 2).

    In what follows, we will use the following -truncated sums

    which converges to in , -almost surely; the convergence claim follows from (43) and standard arguments (see e.g. [11,Section 4.2.2]).

    Arguing as in [8], using (22) and equality of the laws, the following equations hold -almost surely on the new probability space :

    (47)

    for any , where , . Let us sketch the proof of the first equation in (47), with the second one following in the same way. Consider the first equation in (22) and introduce the -valued stochastic process

    Replacing by , we denote the resulting process by . Let us also introduce the random variables

    By(22), a.s. and thus . Recalling that , let us replace the integrand by the time-regularised function , for , in which case the stochastic integral can be viewed as a continuous function of the Wiener process (after an integration by parts). Denote by the analog of with replaced by . We use a similar definition of . It is now possible to write , , for some continuous function . By equality of the laws,

    Sending yields , implying that the first equation in (47) holds.

    The next estimate was not stated in Lemma 7.2, but it can be derived from the "tilde" equations in (47), following the proofs of (29) and (36). For any ,

    (48)

    where the constant is independent of .

    A stochastic basis is needed for the limit of the Skorokhod representations, i.e., for the variables , cf. (42): namely,

    (49)

    where . Recall that , are cylindrical Wiener processes with respect to , see (45) and (46). Since , in the sense of (43), it is more or less obvious that also the limits , are cylindrical Wiener processes with respect to , see for example [32,Lemma 9.9] or [13,Proposition 4.8]. As a result, there exist sequences , of real-valued Wiener processes adapted to the filtration , cf. (49), such that and .

    Given the -independent estimates in Lemma 7.2 and the almost sure convergences in (43), we deduce the following result:

    Lemma 8.1 (convergence). The limits , , , , and , see (42) and also (43), satisfy

    and , for .

Let , , , , , be the Skorokhod representations of the Faedo-Galerkin approximations, cf. (42). Then, passing if necessary to a subsequence as ,

    (50)

    Proof. The strong convergences (ⅰ) follow from (43), the moment estimate (44) with , and Vitali's convergence theorem. The strong convergences (vii) and (viii) follow in a similar way. The weak convergences (ⅱ), (ⅲ) are consequences of the -uniform bounds on in and in , cf. (44), passing if necessary to a subsequence.

    Part (ⅳ) is a consequence of (43) and Vitali's convergence theorem, given the moment bounds (with some )

    for , where we have used that is bounded in , see (44).

    Let us verify part (v). Set , , and , , . By (ⅱ), in . By (ⅰ), passing to a subsequence (not relabelled), we may as well assume that , almost everywhere in . By the global Lipschitz continuity of , this transfers to almost everywhere in . Besides, since and are uniformly bounded in , is uniformly bounded in . Vitali's convergence theorem then implies that in . Next, given the bound (48) (with ), converges weakly to some limit in , passing if necessary to a subsequence (not relabelled). At the same time, and in , and so the strong-weak product converges weakly to in , which allows us to identify the weak limit as , i.e., in . This proves (v). The verification of (vi) is similar.

    Our final step is to pass to the limit in the Faedo-Galerkin equations (47).

Lemma 8.2 (limit equations). The limits , , , , , of the Skorokhod a.s. representations of the Faedo-Galerkin approximations—constructed in (42), (43)—satisfy the following equations -a.s., for all :

    (51)
    (52)

    for all , where the laws of and are and , respectively.

    Proof. We will focus on (51). The second equation (52) can be treated similarly. First, recall that the space is dense in . Therefore, it is sufficient to establish (51) under the assumption that . Indeed, given the bounds in Lemma 8.1, all terms in (51)—except for cross-diffusion and the stochastic integral—are bounded by a (-dependent) constant times the or norm of . Via the BDG inequality (13), the stochastic integral is bounded in expectation by a constant times the norm of . Finally, as and can be bounded in , the cross-diffusion terms are bounded by a (-dependent) constant times .

    Fix , and write (51) symbolically as , for . As in [12], the strategy of the proof is to demonstrate that

which would imply that for -a.e. and thus, by the Fubini theorem, -a.s., for a.e. . Since the simple functions are dense in , it is enough to prove that

    (53)

    for a measurable set , where denotes the characteristic function of .

The Faedo-Galerkin equations (47) hold in , and hence pointwise in . Multiplying the first (pointwise) equation with and then doing spatial integration by parts, using the fact that —and thus on —and basic properties of the projection operator , we eventually arrive at

    (54)

    We multiply (54) with , integrate the result over , and then we pass to the limit in each term separately.

    By part (viii) of (50), we obtain . Recall that in and (cf. Theorem 3.2 for the appearance of ). Hence, as the laws of and are the same, we conclude that .

    In what follows, we will make repeated use of the following simple fact: If in , , then in as well. Furthermore, we will use that

    The weak convergence in of , cf. (50)–(ⅱ), implies that

    where we have used that strongly in , recalling in and noting that the strong convergence of , cf. (50)–(ⅰ), and (7) imply the strong convergence of .

    Regarding the cross-diffusion terms, set (for and )

    and write

    Recalling that is weakly convergent to in , cf. (50)–(v), we need to prove that is strongly convergent to in , in order to conclude that in . First, in :

    where we have used that and is a basis of . We also claim that in . To see this, note that (50)–(ⅰ) and (8) imply

    Thus, by the Brezis-Lieb lemma,

    Passing to a subsequence if necessary, we may as well assume that a.e., and further note that is uniformly bounded in , because , are uniformly bounded in and is globally Lipschitz continuous, cf. (8). Another application of the Brezis-Lieb lemma then guarantees that in as . Summarising, in , in , and thus in

    As a result of in , we obtain (, )

    Using that is globally Lipschitz, cf. (4) and (5), and the strong convergences , in , cf. (50)–(ⅰ), and recalling in , we obtain

    For the stochastic integral, we will use Lemma 2.1 to prove that

    (55)

    in probability (with respect to ). Since in , -a.s. and thus in probability, cf. (43), it remains to prove that

    (56)

    Clearly,

    (57)

    By (12) and (43), we easily obtain

    (58)

    For the -term, we proceed as follows:

    where , are defined respectively in (10), (21).

    The integrand can be dominated by an function (-a.s.):

    recalling that and thus (a.s.).

    This calculation also shows that a.s.

    and a.s., so that

    for a.e. and almost surely. In view of these facts and

    an application of Lebesgue's dominated convergence theorem supplies

    (59)

    Combining (57), (58) and (59), we arrive at (56). By Lemma 2.1, this implies (55).

    Passing to a subsequence (not relabeled), we may replace "in probability" by "-almost surely" in (55). Fixing any number , cf. (15), we use the Burkholder-Davis-Gundy inequality (13) and (11), (44) to work out the following estimate:

    Hence, by Vitali's convergence theorem, (55) implies

    Using this and the fact that in , we arrive at

    This concludes the proof of (53), which implies that the desired (51) holds.

    Remark 7. We have proved that the Skorokhod representations (42), (43) satisfy the weak formulation (51), (52) for a.e. . As a.s., the weak form (51), (52) actually holds for every . This weak continuity property also ensures that are predictable in .

    This section proves that the martingale solution constructed as the limit of the Faedo-Galerkin approximations is non-negative, thereby ending the proof of Theorem 3.2. The proof is based on the Stampacchia method, which was properly adapted to the stochastic setting in [10]. It uses Itô's formula to derive the SDEs satisfied by the negative parts of the Faedo-Galerkin solutions, an energy estimate, and a limiting process with , arriving eventually at , if the initial data are nonnegative. We write for the negative part, , of . Below we work with a smooth approximation of .
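For orientation, one standard $C^2$ approximation of $r\mapsto|r^-|^2$ (with $r^-:=\max\{-r,0\}$), given here only as an illustrative choice that is not necessarily the one employed in [10] or in the proof below, is

$$f_\epsilon(r)=\begin{cases}0, & r\ge0,\\[1mm] -\dfrac{r^3}{3\epsilon}, & -\epsilon\le r<0,\\[1mm] r^2+\epsilon r+\dfrac{\epsilon^2}{3}, & r<-\epsilon,\end{cases}$$

which is convex, has a second derivative bounded by $2$, and converges pointwise to $|r^-|^2$ as $\epsilon\to0$.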

    The nonnegativity result is contained in

Lemma 9.1. The solution constructed in Theorem 3.2 is non-negative.

    Proof. In this proof we drop the tildes on the relevant functions, writing for example instead of . For , denote by the approximation of defined by

    Observe that

    It is easy to see that , , and for all . Besides, as , the following convergences hold, uniformly in : , , and . An application of Itô formula to , where solves (22), gives

    (60)

    It is easy to see that . In view of (8) and Remark 1,

    (61)

    As a result,

    Similarly, from the definition of the function , cf. (4) and (5), it follows that .

    Keeping in mind the convergences in (50) (see also [10,Section 3.2]), we send in (60) to arrive at the inequality:

    (62)

    Sending in (62), and proceeding exactly as in [10,Section 3.4], we arrive at

    (63)

    for a.e. where is a constant. Finally, by the nonnegativity of and applying Gronwall's inequality in (63), we conclude that a.e. in , almost surely. Along the same lines, it follows that a.e. in , almost surely.



    [1] A reaction-diffusion system modeling predator-prey with prey-taxis. Nonlinear Anal. Real World Appl. (2008) 9: 2086-2105.
    [2] A convergent finite volume method for a model of indirectly transmitted diseases with nonlocal cross-diffusion. Comput. Math. Appl. (2015) 70: 132-157.
    [3]

    V. Anaya, M. Bendahmane, M. Langlais and M. Sepúlveda, Pattern formation for a reaction diffusion system with constant and cross diffusion, In Numerical Mathematics and Advanced Applications—ENUMATH 2013, 103 (2015), 153–161.

    [4] Remarks about spatially structured SI model systems with cross diffusion. Contributions to Partial Differential Equations and Applications (2019) 47: 43-64.
    [5] Analysis of a reaction-diffusion system modeling predator-prey with prey-taxis. Netw. Heterog. Media (2008) 3: 863-879.
    [6]

    M. Bendahmane, K. H. Karlsen and J. M. Urbano, On a two-sidedly degenerate chemotaxis model with volume-filling effect, Math. Methods Appl. Sci., 17 (2007), 783–804.

    [7] Conservative cross diffusions and pattern formation through relaxation. J. Math. Pures Appl. (2009) 92: 651-667.
    [8] Stochastic Navier-Stokes equations. Acta Appl. Math. (1995) 38: 267-304.
    [9] Martingale solutions for stochastic Euler equations. Stochastic Anal. Appl. (1999) 17: 713-725.
    [10] The Stampacchia maximum principle for stochastic partial differential equations and applications. J. Differential Equations (2016) 260: 2926-2972.
    [11] (2014) Stochastic Equations in Infinite Dimensions. Cambridge: Cambridge University Press.
    [12] Local martingale and pathwise solutions for an abstract fluids model. Phys. D (2011) 240: 1123-1144.
    [13] Degenerate parabolic stochastic partial differential equations: Quasilinear case. Ann. Probab. (2016) 44: 1916-1955.
    [14] Global martingale solutions for a stochastic population cross-diffusion system. Stochastic Process. Appl. (2019) 129: 3792-3820.
    [15]

F. Flandoli, An introduction to 3D stochastic fluid dynamics, In SPDE in Hydrodynamic: Recent Progress and Prospects, 1942 (2008), 51–150.

    [16] Martingale and stationary solutions for stochastic Navier-Stokes equations. Probab. Theory Related Fields (1995) 102: 367-391.
    [17] Analysis and numerical solution of a nonlinear cross-diffusion system arising in population dynamics. RACSAM. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. AMat. (2001) 95: 281-295.
    [18] Semi-discretization in time and numerical convergence of solutions of a nonlinear cross-diffusion population model. Numer. Math. (2003) 93: 655-673.
    [19] Global weak solutions and asymptotic limits of a cahn–hilliard–darcy system modelling tumour growth. AIMS Math. (2016) 1: 318-360.
    [20] Martingale and pathwise solutions to the stochastic Zakharov-Kuznetsov equation with multiplicative noise. Discrete Contin. Dyn. Syst. Ser. B (2014) 19: 1047-1085.
    [21] Martingale solution to equations for differential type fluids of grade two driven by random force of Lévy type. Potential Anal. (2013) 38: 1291-1331.
    [22] Degenerate parabolic stochastic partial differential equations. Stochastic Process. Appl. (2013) 123: 4294-4336.
    [23]

G. Leoni, A First Course in Sobolev Spaces, 2nd edition, American Mathematical Society, Providence, RI, 2017.

    [24] A more functional response to predator-prey stability. The American Naturalist (1977) 111: 381-383.
    [25] Hypothesis for origin of planktonic patchiness. Nature (1976) 259: 659-659.
    [26] Spatial segregation in competitive interaction-diffusion equations. J. Math. Biol. (1980) 9: 49-64.
    [27] On a diffusive prey-predator model which exhibits patchiness. J. Theoret. Biol. (1978) 75: 249-262.
    [28] Pattern formation in interacting and diffusing systems in population biology. Advances in Biophysics (1982) 15: 19-65.
    [29]

J. D. Murray, Mathematical Biology. I, 3rd edition, Interdisciplinary Applied Mathematics, 17, Springer-Verlag, New York, 2002.

    [30]

J. D. Murray, Mathematical Biology. II, 3rd edition, Interdisciplinary Applied Mathematics, 18, Springer-Verlag, New York, 2003.

    [31]

A. Okubo and S. A. Levin, Diffusion and Ecological Problems: Modern Perspectives, 2nd edition, Interdisciplinary Applied Mathematics, 14, Springer-Verlag, New York, 2001.

    [32] Stochastic nonlinear wave equations in local Sobolev spaces. Electron. J. Probab. (2010) 15: 1041-1091.
    [33]

    C. Prévôt and M. Röckner, A concise Course on Stochastic Partial Differential Equations, Springer, Berlin, 2007.

    [34] Existence and large time behavior for a stochastic model of modified magnetohydrodynamic equations. Z. Angew. Math. Phys. (2015) 66: 2197-2235.
    [35] Density dependent stochastic Navier-Stokes equations with non-Lipschitz random forcing. Rev. Math. Phys. (2010) 22: 669-697.
[36] Compact sets in the space $L^p(0,T;B)$. Ann. Mat. Pura Appl. (1987) 146: 65-96.
  • This article has been cited by:

    1. M Bendahmane, K H Karlsen, F Mroué, Stochastic electromechanical bidomain model * , 2024, 37, 0951-7715, 075023, 10.1088/1361-6544/ad5132
    2. Mrinmay Biswas, Ansgar Jüngel, 2025, Chapter 1, 978-981-96-3097-4, 1, 10.1007/978-981-96-3098-1_1
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)