Research article

PercolationDF: A percolation-based medical diagnosis framework


  • Goal: With the continuing shortage and unequal distribution of medical resources, our objective is to develop a general diagnosis framework that uses a smaller amount of electronic medical records (EMRs), alleviating the problem that the data volume required by prevailing models is too vast for medical institutions to afford. Methods: The proposed framework contains network construction, network expansion, and disease diagnosis methods. In the first two stages, the knowledge extracted from EMRs is used to build and expand an EMR-based medical knowledge network (EMKN) to model and represent the medical knowledge. Then, percolation theory is modified to perform diagnosis on the EMKN. Result: Facing a lack of data, our framework outperforms naïve Bayes networks, neural networks and logistic regression, especially in the top-10 recall. Out of 207 test cases, 51.7% achieved 100% in the top-10 recall, 21% better than what was achieved in one of our previous studies. Conclusion: The experimental results show that the proposed framework may be useful for medical knowledge representation and diagnosis. The framework effectively alleviates the lack of data volume by performing inference on the knowledge modeled in the EMKN. Significance: The proposed framework is not only applicable to diagnosis but may also be extended to other domains to represent and model knowledge and perform inference on that representation.

    Citation: Jingchi Jiang, Xuehui Yu, Yi Lin, Yi Guan. PercolationDF: A percolation-based medical diagnosis framework[J]. Mathematical Biosciences and Engineering, 2022, 19(6): 5832-5849. doi: 10.3934/mbe.2022273




    The nonlinear Ginzburg-Landau equation plays an important role in physics; it describes many interesting phenomena and has been studied extensively (see [1] for a more detailed description). The fractional Ginzburg-Landau equation [2,3,4] is employed to describe processes in media with fractional dispersion or long-range interaction. It has become very popular because fractional derivatives and fractional integrals have broad applications in different fields of science [5,6,7,8,9,10].

    Our work focuses on the existence of invariant measures of the autonomous fractional stochastic delay Ginzburg-Landau equations on \mathbb{R}^n:

    \begin{align} du(t)+(1+\text{i}\nu)(-\Delta)^{\alpha}u(t)dt+(1+\text{i}\mu)|u(t)|^{2\beta}u(t)dt+\lambda u(t)dt = G(x,u(t-\rho))dt+\sum\limits^\infty_{k = 1}\left(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(t))\right)dW_k(t),\quad t>0, \end{align} (1.1)

    with initial condition

    \begin{align} u(s) = \varphi(s),\qquad s\in[-\rho,0], \end{align} (1.2)

    where u(x,t) is a complex-valued function on \mathbb{R}^n\times[0,+\infty). In (1.1), \text{i} is the imaginary unit, \alpha,\beta,\mu,\nu and \lambda are real constants with \beta>0, \lambda>0 and \rho>0. (-\Delta)^{\alpha} with 0<\alpha<1 is the fractional Laplace operator, \sigma_{1,k}(x)\in L^2(\mathbb{R}^n) and \sigma_{2,k}(u):\mathbb{C}\to\mathbb{R} are nonlinear functions, \kappa(x)\in L^2(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n), and \{W_k\}_{k = 1}^{\infty} is a sequence of independent standard real-valued Wiener processes on a complete filtered probability space (\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in\mathbb{R}},\mathbb{P}), where \{\mathcal{F}_t\}_{t\in\mathbb{R}} is an increasing right-continuous family of sub-\sigma-algebras of \mathcal{F} that contains all \mathbb{P}-null sets.

    The Ginzburg-Landau equation with fractional derivative was first introduced in [2]. There is a large body of literature on fractional deterministic Ginzburg-Landau equations, such as [1], and on stochastic equations, such as [11,12,13,14,15,16,17]. These works studied the long-time behavior of the deterministic and random dynamical systems generated by fractional equations in both autonomous and non-autonomous forms. However, despite these contributions, no result is available on the existence of pathwise pullback random attractors and invariant measures for delay stochastic Ginzburg-Landau equations.

    Delay differential equations [18] describe dynamical systems that depend on both the current state and past states. In recent years, researchers have made great progress in the study of linear and nonlinear delay differential equations; see [20,21]. Delay differential equations are widely used in many fields, so investigating the solutions of such equations is of profound significance. Therefore, it is necessary to establish the dynamics of delay stochastic Ginzburg-Landau equations.

    The goal of this paper is to prove the existence of invariant measures of the stochastic Eqs (1.1) and (1.2) in L^2(\Omega;C([-\rho,0],L^2(\mathbb{R}^n))) by applying Krylov-Bogolyubov's method. The main difficulties of this paper are: deducing the uniform estimates of solutions (because of the nonlinear term (1+\text{i}\mu)|u(t)|^{2\beta}u(t) and the complex-valued solutions); proving the weak compactness of a set of distribution laws of the segments of solutions in L^2(\Omega;C([-\rho,0],L^2(\mathbb{R}^n))) (because the standard Sobolev embeddings are not compact on the unbounded domain \mathbb{R}^n); and establishing the equicontinuity of solutions in L^2(\Omega;C([-\rho,0],L^2(\mathbb{R}^n))) (because the uniform estimates in L^2(\Omega;C([-\rho,0],L^2(\mathbb{R}^n))) are not sufficient, and uniform estimates in L^2(\Omega;C([-\rho,0],H^1(\mathbb{R}^n))) are needed).
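    For orientation, we sketch (in our own words, with u_t denoting the segment u_t(s) = u(t+s), s\in[-\rho,0]) the Krylov-Bogolyubov construction that underlies the existence argument: one considers the time-averaged laws

    \begin{align*} \mu_T(\cdot) = \frac{1}{T}\int_0^{T}\mathbb{P}\left(u_t(\cdot\,;\varphi)\in\cdot\right)dt,\qquad T>0. \end{align*}

    If the family \{\mu_T\}_{T\geq 1} is tight on C([-\rho,0],H), which is exactly what the uniform moment estimates, the tail estimates and the equicontinuity estimates established below provide, then every weak limit of \mu_{T_n} as T_n\to\infty is an invariant measure of the transition semigroup associated with (1.1) and (1.2).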

    For the estimates of the nonlinear term (1+\text{i}\mu)|u(t)|^{2\beta}u(t), we apply integration by parts and a nonnegative definite quadratic form. There are several methods to handle the lack of compactness on unbounded domains, including weighted spaces [22,23,24], the weak Feller approach [25,26] and uniform tail-estimates [23,27]. We first obtain uniform estimates on the tails of the solutions, together with the technique of dyadic division, and then establish the weak compactness of a set of probability distributions of solutions in C([-\rho,0],L^2(\mathbb{R}^n)) by applying the Ascoli-Arzelà theorem.

    Let \mathcal{S} be the Schwartz space of rapidly decaying C^{\infty} functions on \mathbb{R}^n. The fractional Laplace operator (-\Delta)^{\alpha} for 0<\alpha<1 is defined by, for u\in\mathcal{S},

    \begin{align*} (-\Delta)^{\alpha}u(x) = -\frac{1}{2}C(n,\alpha)\int_{\mathbb{R}^n}\frac{u(x+y)+u(x-y)-2u(x)}{|y|^{n+2\alpha}}dy,\qquad x\in\mathbb{R}^n, \end{align*}

    where C(n,α) is a positive constant given by

    \begin{align*} C(n,\alpha) = \frac{\alpha 4^{\alpha}\Gamma\left(\frac{n+2\alpha}{2}\right)}{\pi^{\frac{n}{2}}\Gamma(1-\alpha)}. \end{align*}

    By [28], the inner product ((-\Delta)^{\frac{\alpha}{2}}u,(-\Delta)^{\frac{\alpha}{2}}v) in the complex field is defined by

    \begin{align*} ((-\Delta)^{\frac{\alpha}{2}}u,(-\Delta)^{\frac{\alpha}{2}}v) = \frac{C(n,\alpha)}{2}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{(u(x)-u(y))(\bar{v}(x)-\bar{v}(y))}{|x-y|^{n+2\alpha}}dxdy, \end{align*}

    for u\in H^{\alpha}(\mathbb{R}^n). The fractional Sobolev space H^{\alpha}(\mathbb{R}^n) is endowed with the norm

    \begin{align*} \|u\|^2_{H^{\alpha}(\mathbb{R}^n)} = \|u\|^2_{L^2(\mathbb{R}^n)}+\frac{2}{C(n,\alpha)}\|(-\Delta)^{\frac{\alpha}{2}}u\|^2_{L^2(\mathbb{R}^n)}. \end{align*}
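    As a consistency check (our own remark, obtained by taking v = u in the inner product above), the factor \frac{2}{C(n,\alpha)} makes this norm coincide with the usual fractional Sobolev norm:

    \begin{align*} \|(-\Delta)^{\frac{\alpha}{2}}u\|^2_{L^2(\mathbb{R}^n)} = \frac{C(n,\alpha)}{2}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2\alpha}}dxdy, \end{align*}

    so that \|u\|^2_{H^{\alpha}(\mathbb{R}^n)} = \|u\|^2_{L^2(\mathbb{R}^n)}+\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2\alpha}}dxdy.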

    Concerning the fractional derivative in fractional Ginzburg-Landau equations, an alternative formulation is given in [29].

    We organize the article as follows. In Section 2, we establish the well-posedness of (1.1) and (1.2) in L^2(\Omega;C([-\rho,0],H)). In Sections 3 and 4, we derive the uniform estimates of solutions in L^2(\Omega;C([-\rho,0],H)) and L^2(\Omega;C([-\rho,0],V)), respectively. In Section 5, the existence of invariant measures is obtained.

    In this section, we state the conditions on the nonlinear drift term and the diffusion term in (1.1) that are needed for the well-posedness of the stochastic delay Ginzburg-Landau Eqs (1.1) and (1.2) defined on \mathbb{R}^n.

    We assume that G:\mathbb{R}^n\times\mathbb{C}\to\mathbb{C} is continuous and satisfies

    \begin{align} |G(x,u)|\leq|h(x)|+a|u|,\quad \forall\, x\in\mathbb{R}^n,\ u\in\mathbb{C}, \end{align} (2.1)

    and

    \begin{align} |\nabla G(x,u)|\leq|\hat{h}(x)|+\hat{a}|\nabla u|,\quad \forall\, x\in\mathbb{R}^n,\ u\in\mathbb{C}, \end{align} (2.2)

    where a>0 and \hat{a}>0 are constants and h(x),\hat{h}(x)\in L^2(\mathbb{R}^n). Moreover, G(x,u) is Lipschitz continuous in u\in\mathbb{C} uniformly with respect to x\in\mathbb{R}^n. More precisely, there exists a constant C_G>0 such that

    \begin{align} |G(x,u_1)-G(x,u_2)|\leq C_G|u_1-u_2|,\quad \forall\, x\in\mathbb{R}^n,\ u_1,u_2\in\mathbb{C}. \end{align} (2.3)

    For the diffusion coefficients of the noise, we suppose that

    \begin{align} \sum\limits^\infty_{k = 1}\|\sigma_{1,k}\|^2<\infty, \end{align} (2.4)

    and that \sigma_{2,k}(u):\mathbb{C}\to\mathbb{R} is globally Lipschitz continuous; namely, for every k\in\mathbb{N}^+, there exists a positive number \alpha_k such that for all s_1,s_2\in\mathbb{C},

    \begin{align} |\sigma_{2,k}(s_1)-\sigma_{2,k}(s_2)|\leq\alpha_k|s_1-s_2|. \end{align} (2.5)

    We further assume that for each k\in\mathbb{N}^+, there exist positive numbers \beta_k, \hat{\beta}_k, \gamma_k and \hat{\gamma}_k such that

    \begin{align} |\sigma_{2,k}(s)|\leq\beta_k+\gamma_k|s|,\quad \forall\, s\in\mathbb{C}, \end{align} (2.6)

    and

    \begin{align} |\nabla\sigma_{2,k}(s)|\leq\hat{\beta}_k+\hat{\gamma}_k|\nabla s|,\quad \forall\, s\in\mathbb{C}, \end{align} (2.7)

    where \sum\limits^\infty_{k = 1}(\alpha^2_k+\beta^2_k+\gamma^2_k+\hat{\beta}^2_k+\hat{\gamma}^2_k)<+\infty. In this paper, we deal with the stochastic Eqs (1.1) and (1.2) in the space C([-\rho,0],L^2(\mathbb{R}^n)). In the following discussion, we denote H = L^2(\mathbb{R}^n) and V = H^1(\mathbb{R}^n).
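    As an illustration only (a hypothetical choice, not taken from the original), one family of nonlinearities satisfying the growth and Lipschitz conditions (2.1), (2.3), (2.5) and (2.6) is

    \begin{align*} G(x,u) = h(x)+\frac{a}{2}\,u,\qquad \sigma_{2,k}(s) = \gamma_k\sqrt{1+|s|^2}, \end{align*}

    with h\in L^2(\mathbb{R}^n): indeed |G(x,u)|\leq|h(x)|+a|u|, G is Lipschitz in u with C_G = \frac{a}{2}, |\sigma_{2,k}(s_1)-\sigma_{2,k}(s_2)|\leq\gamma_k|s_1-s_2| and |\sigma_{2,k}(s)|\leq\gamma_k(1+|s|), so one may take \alpha_k = \beta_k = \gamma_k. The smallness conditions imposed later, such as (3.1), then amount to requiring a and \sum^\infty_{k = 1}\gamma^2_k to be sufficiently small.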

    A solution of problems (1.1) and (1.2) will be understood in the following sense.

    Definition 2.1. Suppose that \varphi(s)\in L^2(\Omega,C([-\rho,0],H)) is \mathcal{F}_0-measurable. A continuous H-valued \mathcal{F}_t-adapted stochastic process u(x,t) is called a solution of problems (1.1) and (1.2) if

    1) u is pathwise continuous on [0,+\infty), and \mathcal{F}_t-adapted for all t\geq 0, with

    \begin{align*} u\in L^2(\Omega,C([0,T],H))\cap L^2(\Omega,L^2([0,T],V)) \end{align*}

    for all T>0,

    2) u(s) = \varphi(s) for -\rho\leq s\leq 0,

    3) For all t\geq 0 and \xi\in V,

    \begin{align} (u(t),\xi)+(1+\text{i}\nu)\int^t_0((-\Delta)^{\frac{\alpha}{2}}u(s),(-\Delta)^{\frac{\alpha}{2}}\xi)ds+\int^t_0\int_{\mathbb{R}^n}(1+\text{i}\mu)|u(s)|^{2\beta}u(s)\xi(x)dxds+\lambda\int^t_0(u(s),\xi)ds = (\varphi(0),\xi)+\int^t_0(G(x,u(s-\rho)),\xi)ds+\sum\limits^\infty_{k = 1}\int^t_0(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(s)),\xi)dW_k(s), \end{align} (2.8)

    for almost all \omega\in\Omega.

    By the Galerkin method and the argument of Theorem 3.1 in [30], one can verify that if (2.1)–(2.7) hold, then, for every \mathcal{F}_0-measurable function \varphi(s)\in L^2(\Omega,C([-\rho,0],H)), problems (1.1) and (1.2) have a unique solution u(x,t) in the sense of Definition 2.1.

    Now, we establish the Lipschitz continuity of the solutions of problems (1.1) and (1.2) with respect to the initial data in L^2(\Omega,C([-\rho,0],H)).

    Theorem 2.2. Suppose (2.1)–(2.6) hold, and let \varphi_1,\varphi_2\in L^2(\Omega,C([-\rho,0],H)) be \mathcal{F}_0-measurable. If u_1 = u(t,\varphi_1) and u_2 = u(t,\varphi_2) are the solutions of problems (1.1) and (1.2) with initial data \varphi_1 and \varphi_2, respectively, then, for any t\geq 0,

    \begin{align*} \mathbb{E}\left[\sup\limits_{-\rho\leq s\leq t}\|u(s,\varphi_1)-u(s,\varphi_2)\|^2\right]+\mathbb{E}\left[\int^t_0\|u(s,\varphi_1)-u(s,\varphi_2)\|^2_Vds\right] \leq C_1e^{\tilde{C}_1t}\mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\|\varphi_1(s)-\varphi_2(s)\|^2\right], \end{align*}

    where C_1 and \tilde{C}_1 are positive constants independent of \varphi_1 and \varphi_2.

    Proof. Since both u1 and u2 are the solutions of the problems (1.1) and (1.2), we have, for all t0,

    \begin{align} u_1-u_2+(1+\text{i}\nu)\int^t_0(-\Delta)^{\alpha}(u_1-u_2)ds+(1+\text{i}\mu)\int^t_0(|u_1|^{2\beta}u_1-|u_2|^{2\beta}u_2)ds+\lambda\int^t_0(u_1-u_2)ds = \varphi_1(0)-\varphi_2(0)+\int^t_0\left(G(x,u_1(s-\rho))-G(x,u_2(s-\rho))\right)ds+\sum\limits^\infty_{k = 1}\int^t_0\kappa(x)\left(\sigma_{2,k}(u_1)-\sigma_{2,k}(u_2)\right)dW_k. \end{align} (2.9)

    By (2.9), applying Itô's formula together with integration by parts and taking the real parts, we get, for all t\geq 0,

    u1u22+2t0(Δ)α2(u1u2)2ds+2Ret0Rn(ˉu1ˉu2)[|u1|2βu1|u2|2βu2]dxds+2λt0u1u22ds=φ1(0)φ2(0)2+2Ret0(u1u2,G(x,u1(sρ))G(x,u2(sρ)))ds+k=1t0κ(x)(σ2,k(u1)σ2,k(u2))2ds+2Ret0(u1u2,k=1κ(x)(σ2,k(u1)σ2,k(u2)))dWk(s). (2.10)

    For the third term in the first row of (2.10), one has

        2Ret0Rn(ˉu1ˉu2)[|u1|2βu1|u2|2βu2]dxds=t0Rn2|u1|2β+2+2|u2|2β+22Re(u1ˉu2)(|u1|2β+|u2|2β)dxdst0Rn2|u1|2β+2+2|u2|2β+22|u1||u2|(|u1|2β+|u2|2β)dxdst0Rn2|u1|2β+2+2|u2|2β+2(|u1|2+|u2|2)(|u1|2β+|u2|2β)dxds=t0Rn|u1|2β+2+|u2|2β+2|u1|2β|u2|2|u2|2β|u1|2dxds=t0Rn(|u1|2β|u2|2β)(|u1|2|u2|2)dxds0.

    By (2.10), we deduce that for t0,

        E[sup0rtu1(r)u2(r)2]E[supρs0φ1(s)φ2(s)2]+2E[t0u1u2G(x,u1(sρ))G(x,u2(sρ))ds]   +k=1E[t0κ(σ2,k(u1)σ2,k(u2))2ds]   +2E[sup0rt|k=1r0(u1u2,κ(x)(σ2,k(u1)σ2,k(u2))dWk(s))|]. (2.11)

    For the second term on the right-hand side of (2.11), by (2.3), one has

        2E[t0u1u2G(x,u1(sρ))G(x,u2(sρ))ds]E[t0u1u22ds]+E[t0G(x,u1(sρ))G(x,u2(sρ))2ds]E[t0u1u22ds]+C2GE[t0u1(sρ)u2(sρ)2ds]=E[t0u1u22ds]+C2GE[tρρu1u22ds](1+C2G)E[t0u1u22ds]+C2GE[0ρφ1(s)φ2(s)2ds](1+C2G)t0E[sup0rsu1u22]ds+ρC2GE[supρs0φ1(s)φ2(s)2].

    For the third term on the right-hand side of (2.11), by (2.5), we have

      k=1E[t0κ(x)(σ2,k(u1)σ2,k(u2))2ds]κ(x)2Lk=1α2kE[t0u1u22ds]κ(x)2Lk=1α2kt0E[sup0rsu1u22]ds. (2.12)

    For the fourth term on the right-hand side of (2.11), by the Burkholder-Davis-Gundy inequality, one has

        2E[sup0rt|k=1r0(u1u2,κ(x)(σ2,k(u1)σ2,k(u2))dWk(s))|]B1E[(t0k=1|(u1u2,κ(x)(σ2,k(u1)σ2,k(u2)))|2ds)12]B1E[(t0k=1u1u22κ2Lσ2,k(u1)σ2,k(u2)2ds)12]B1E[sup0stu1u2κL(k=1α2k)12(t0u1u22ds)12]12E[sup0stu1u22]+12B21κ2Lk=1α2kE[t0sup0rsu1u22ds], (2.13)

    where B1 is a constant produced by Burkholder-Davis-Gundy's inequality.

    It follows from (2.11)–(2.13) that for all t0,

    E[sup0rtu1(r)u2(r)2]2(1+ρC2G)E[supρs0φ1(s)φ2(s)2]+2[1+C2G+(1+12B21)κ2Lk=1α2k]t0E[sup0rsu1(r)u2(r)2]ds. (2.14)

    Applying Gronwall's inequality to (2.14), we obtain that for all t\geq 0,

    \begin{align} \mathbb{E}\left[\sup\limits_{0\leq r\leq t}\|u_1(r)-u_2(r)\|^2\right]\leq 2(1+\rho C^2_G)e^{c_1t}\mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\|\varphi_1(s)-\varphi_2(s)\|^2\right], \end{align} (2.15)

    where c_1 = 2\left[1+C^2_G+\left(1+\frac{1}{2}B^2_1\right)\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\alpha^2_k\right]. By (2.10), there exist positive constants c_2 and \tilde{c}_2 such that for all t\geq 0,

    \begin{align*} \mathbb{E}\left[\int^t_0\|u_1-u_2\|^2_Vds\right]\leq\tilde{c}_2e^{c_2t}\mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\|\varphi_1(s)-\varphi_2(s)\|^2\right]. \end{align*}

    We assume that a, \alpha_k and \gamma_k are small enough in the sense that there exists a constant p\geq 2 such that

    \begin{align} 2^{1-\frac{1}{2p}}(2p-1)^{\frac{2p-1}{2p}}a+2p(2p-1)\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}(\alpha^2_k+\gamma^2_k)<p\lambda. \end{align} (3.1)

    By (3.1), one has

    \begin{align} 2\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\gamma^2_k<\lambda, \end{align} (3.2)

    and

    \begin{align} \sqrt{2}a+2\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\gamma^2_k<\lambda. \end{align} (3.3)

    The inequalities (3.1)–(3.3) are used to establish the uniform tail-estimate of the solution of (1.1) and (1.2).

    Lemma 3.1. Suppose (2.1)–(2.6) and (3.2) hold. If \varphi(s)\in L^2(\Omega;C([-\rho,0],H)), then there exists a positive constant \mu_1 such that, for all t\geq 0, the solution u of (1.1) and (1.2) satisfies

    \begin{align} \mathbb{E}[\|u(t)\|^2]+\int^t_0e^{\mu_1(s-t)}\mathbb{E}(\|u(s)\|^2_V)ds+\int^t_0e^{\mu_1(s-t)}\mathbb{E}(\|u(s)\|^{2\beta+2}_{L^{2\beta+2}})ds \leq M_1\mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\|\varphi(s)\|^2\right]+\widetilde{M}_1, \end{align} (3.4)

    and

    \begin{align*} \int^{t+\rho}_0\mathbb{E}[\|u(s)\|^2_V]ds\leq\left(M_1(t+\rho)+\frac{1+\sqrt{2}a\rho}{C(n,\alpha)}\right)\mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\|\varphi(s)\|^2\right]+\frac{\sqrt{2}(t+\rho)}{aC(n,\alpha)}\|h(x)\|^2+\frac{2(t+\rho)}{C(n,\alpha)}\sum\limits^\infty_{k = 1}\left(\|\sigma_{1,k}\|^2+2\beta^2_k\|\kappa(x)\|^2\right)+\widetilde{M}_1(t+\rho), \end{align*}

    where \widetilde{M}_1 is a positive constant independent of \varphi.

    Proof. By (1.1), applying Itô's formula together with integration by parts, we have, for all t\geq 0,

        u(t)2+2t0(Δ)α2u(s)2ds+2t0u(s)2β+2L2β+2ds+2λt0u(s)2ds=2Ret0(u(s),G(x,u(sρ)))ds+φ(0)2+k=1t0σ1,k(x)+κ(x)σ2,k(u(s))2ds+2Ret0(u(s),k=1σ1,k(x)+κ(x)σ2,k(u(s)))dWk(s). (3.5)

    The system (3.5) can be rewritten as

    \begin{align} d(\|u(t)\|^2)+2\|(-\Delta)^{\frac{\alpha}{2}}u(t)\|^2dt+2\|u(t)\|^{2\beta+2}_{L^{2\beta+2}}dt+2\lambda\|u(t)\|^2dt = 2\mbox{Re}(u(t),G(x,u(t-\rho)))dt+\sum\limits^\infty_{k = 1}\|\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(t))\|^2dt+2\sum\limits^\infty_{k = 1}\mbox{Re}\left(u(t),\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(t))\right)dW_k(t). \end{align} (3.6)

    Let \mu_1 be a positive constant to be chosen later; then one has

        eμ1tu(t)2+2t0eμ1s(Δ)α2u(s)2ds+2t0eμ1su(s)2β+2L2β+2ds=(μ12λ)t0eμ1su(s)2ds+φ(0)2+2Ret0eμ1s(u(s),G(x,u(sρ)))ds+k=1t0eμ1sσ1,k+κσ2,k(u(s))2ds+2Ret0eμ1s(u(s),k=1σ1,k(x)+κ(x)σ2,k(u(s)))dWk(s).

    Taking the expectation, we have for all t0,

        eμ1tE(u(t)2)+2E[t0eμ1s(Δ)α2u(s)2ds]+2E[t0eμ1su(s)pLpds]=E(φ(0)2)+(μ12λ)E[t0eμ1su(s)2ds]+2E[t0eμ1sRe(u(s),G(x,u(sρ)))ds]+k=1E[t0eμ1sσ1,k(x)+κ(x)σ2,k(u(s))2ds]. (3.7)

    For the third term on the right-hand side of (3.7), by (2.1), we have

        2E[t0eμ1sRe(u(s),G(x,u(sρ)))ds]2t0eμ1sE[u(s)G(x,u(sρ))]ds2at0eμ1sE(u(s)2)ds+22at0eμ1sE[G(x,u(sρ))2]ds2at0eμ1sE(u(s)2)ds+2at0eμ1sh(x)2ds+2at0eμ1sE[u(sρ)2]ds2a(1+eμ1ρ)t0eμ1sE[u(s)2]ds+2ah(x)2t0eμ1sds+2aeμ1ρ0ρeμ1sE[φ(s)2]ds2a(1+eμ1ρ)t0eμ1sE[u(s)2]ds+2eμ1taμ1h(x)2+2aρeμ1ρE[supρs0φ(s)2]. (3.8)

    For the fourth term on the right-hand side of (3.7), by (2.6), we have

        k=1E[t0eμ1sσ1,k+κσ2,k(u(s))2ds]k=1E[t0eμ1s(2σ1,k2+2κσ2,k(u(s))2)ds]2μ1k=1σ1,k2eμ1t+4k=1t0eμ1sE[β2kκ2+γ2kκ2Lu(s)2]ds2μ1k=1(σ1,k2+2β2kκ(x)2)eμ1t+4k=1γ2kκ(x)2Lt0eμ1sE(u(s)2)ds. (3.9)

    By (3.7)–(3.9), we obtain for all t0,

        eμ1tE(u(t)2)+2E[t0eμ1s(Δ)α2u(s)2ds]+2E[t0eμ1su(s)2β+2L2β+2ds](1+2aρeμ1ρ)E[supρs0φ(s)2]+[μ12λ+2a(1+eμ1ρ)+4k=1γ2kκ2L]t0eμ1sE[u2]ds+2aμ1eμ1th(x)2+2μ1k=1(σ1,k2+2β2kκ(x)2)eμ1t. (3.10)

    By (3.2), there exists a positive constant μ1 sufficiently small such that

    \begin{align*} 2\mu_1+\sqrt{2}a+\sqrt{2}ae^{\mu_1\rho}+4\sum\limits^\infty_{k = 1}\gamma^2_k\|\kappa(x)\|^2_{L^\infty}\leq 2\lambda. \end{align*}

    Then, we have, for all t0,

        E(u(t)2)+2t0eμ1(st)E((Δ)α2u(s)2)ds     +μ1t0eμ1(st)E(u(s)2)ds+2t0eμ1(st)E(u(s)2β+2L2β+2)ds(1+2aρeμ1ρ)E(supρs0φ(s)2)+1μ1(2ah(x)2+2k=1(σ1,k2+2β2kκ(x)2)),

    which completes the proof of (3.4).

    Integrating (3.6) on [0,t+ρ] and taking the expectation, one has

      E[u(t+ρ)2]+2E[t+ρ0(Δ)α2u(s)2ds]+2E[t+ρ0u(s)2β+2L2β+2ds]+2λE[t+ρ0u(s)2ds]=E[φ(0)2]+2E[t+ρ0Re(u(s),G(x,u(sρ)))ds]+k=1E[t+ρ0σ1,k+κ(x)σ2,k(u(s))ds]. (3.11)

    For the second term on the right-hand side of (3.11), by (2.1), we have

        2E[t+ρ0Re(u,G(x,u(sρ)))ds]22aE[t+ρ0u2ds]+2aρE[supρs0φ(s)2]+2(t+ρ)ah2. (3.12)

    For the third term on the right-hand side of (3.11), one has

        k=1E[t+ρ0σ1,k+κσ2,k(u(s))ds]2(t+ρ)k=1(σ1,k2+2β2kκ2)+4k=1γ2kκ(x)2LE[t+ρ0u2ds]. (3.13)

    Then, by (3.2) and (3.11)–(3.13), for all t0, we obtain,

    \begin{align*} 2\mathbb{E}\left[\int^{t+\rho}_0\|(-\Delta)^{\frac{\alpha}{2}}u(s)\|^2ds\right]\leq(1+\sqrt{2}a\rho)\mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\|\varphi(s)\|^2\right]+2(t+\rho)\sum\limits^\infty_{k = 1}\left(\|\sigma_{1,k}\|^2+2\beta^2_k\|\kappa(x)\|^2\right)+\frac{\sqrt{2}(t+\rho)}{a}\|h(x)\|^2. \end{align*}

    The result then follows from (3.4).

    The next lemma is used to obtain the uniform estimates of the segments of solutions in C([-\rho,0],H).

    Lemma 3.2. Suppose (2.1)–(2.6) and (3.2) hold. Then, for any \varphi(s)\in L^2(\Omega,\mathcal{F}_0;C([-\rho,0],H)), the solution of (1.1) satisfies, for all t\geq\rho,

    \begin{align*} \mathbb{E}\left(\sup\limits_{t-\rho\leq r\leq t}\|u(r)\|^2\right)\leq M_2\mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\|\varphi(s)\|^2\right]+\widetilde{M}_2, \end{align*}

    where M_2 and \widetilde{M}_2 are positive constants independent of \varphi.

    Proof. By (1.1), applying Itô's formula together with integration by parts and taking the real part, we get, for all t\geq\rho and t-\rho\leq r\leq t,

        u(r)2+2rtρ(Δ)α2u(s)2ds+2rtρu(s)2β+2L2β+2ds+2λrtρu(s)2ds=u(tρ)2+2Rertρ(u(s),G(x,u(sρ)))ds+k=1rtρσ1,k(x)+κ(x)σ2,k(u(s)))2ds+2Rek=1rtρ(u(s),(σ1,k(x)+κ(x)σ2,k(u(s))dWk(s)). (3.14)

    For the second term on the right-hand side of (3.14), by (2.1) we have, for all tρ and tρrt,

        2Rertρ(u(s),G(x,u(sρ)))ds2rtρu(s)G(x,u(sρ))dsrtρu(s)2ds+rtρG(x,u(sρ))2dsrtρu(s)2ds+2rtρh2ds+2a2rtρu(sρ)2dsrtρu(s)2ds+2ρh2+2a2tρt2ρu(s)2ds. (3.15)

    For the third term on the right-hand side of (3.14), for all t\geq\rho and t-\rho\leq r\leq t, by (2.6), we have

        k=1rtρσ1,k(x)+κ(x)σ2,k(u(s))2ds2ρk=1σ1,k2+4ρκ2k=1β2k+4κ2Lk=1γ2krtρu(s)2ds. (3.16)

    By (3.14)–(3.16), we obtain for all tρ and tρrt,

    u(r)2c3+u(tρ)2+c4rt2ρu(s)2ds
    +2Rek=1rtρ(u(s),(σ1,k(x)+κ(x)σ2,k(u(s))dWk(s)), (3.17)

    where c3=2ρh2+2ρk=1σ1,k2+4ρκ2k=1β2k and c4=1+2a2+4κ2Lk=1γ2k. By (3.17), we find that for all tρ,

    E[suptρrtu(r)2]c3+E[u(tρ)2]+c4tt2ρE[u(s)2]ds+2E[suptρrt|k=1rtρ(u(s),(σ1,k(x)+κ(x)σ2,k(u(s))dWk(s))|]. (3.18)

    For the second term and the third term on the right-hand side of (3.18), by Lemma 3.1, we deduce for all tρ,

    E[u(tρ)2]sups0E[u(s)2]M1E[supρs0φ2]+˜M1 (3.19)

    and

    c4tt2ρE[u(s)2]ds2ρc4supsρE[u(s)2]c5E[supρs0φ2]+c5. (3.20)

    For the last term on the right-hand side of (3.18), by Burkholder-Davis-Gundy's inequality and Lemma 3.1, we obtain for all tρ,

        2E[suptρrt|k=1rtρ(u(s),σ1,k(x)+κ(x)σ2,k(u(s))dWk(s))|]2B2E[(k=1ttρ|(u(s),σ1,k+κσ2,k(u(s)))|2ds)12]12E[suptρstu(s)2]+2B22E[k=1ttρσ1,k+κσ2,k(u(s))2ds]12E[suptρstu(s)2]+2B22(2ρk=1σ1,k2+4ρκ2k=1β2k)+8B22ρκ2Lk=1γ2ksups0E[u(s)2]. (3.21)

    By Lemma 3.1 and (3.18)–(3.21), we deduce that for all tρ,

    E[suptρrtu(r)2]M2E[supρs0φ(s)2]+˜M2.

    This completes the proof.

    To establish the tightness of a family of distributions of solutions, we now derive uniform estimates on the tails of solutions to the problems (1.1) and (1.2).

    Lemma 3.3. Suppose (2.1)–(2.6) and (3.2) hold. If \varphi(s)\in L^2(\Omega,C([-\rho,0],H)), then the solution u of (1.1) and (1.2) satisfies

    \begin{align*} \limsup\limits_{m\to\infty}\sup\limits_{t\geq\rho}\int_{|x|\geq m}\mathbb{E}[|u(t,x)|^2]dx = 0. \end{align*}

    Proof. We suppose that \theta(x):\mathbb{R}^n\to\mathbb{R} is a smooth function with 0\leq\theta(x)\leq 1 for all x\in\mathbb{R}^n, defined by

    \begin{align*} \theta(x) = \begin{cases}0, & \text{if } |x|\leq 1,\\ 1, & \text{if } |x|\geq 2.\end{cases} \end{align*}

    For fixed m\in\mathbb{N}, we denote \theta_m(x) = \theta\left(\frac{x}{m}\right). By (1.1), we have

    \begin{align} d(\theta_mu)+(1+\text{i}\nu)\theta_m(-\Delta)^{\alpha}u\,dt+(1+\text{i}\mu)\theta_m|u|^{2\beta}u\,dt+\lambda\theta_mu\,dt = \theta_mG(x,u(t-\rho))dt+\sum\limits^\infty_{k = 1}\theta_m(\sigma_{1,k}+\kappa\sigma_{2,k}(u))dW_k(t). \end{align} (3.22)
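    As an aside (a standard observation we record for readability), the cutoff \theta_m localizes to the tail region:

    \begin{align*} \theta_m(x) = \theta\left(\frac{x}{m}\right) = \begin{cases}0, & |x|\leq m,\\ 1, & |x|\geq 2m,\end{cases}\qquad 0\leq\theta_m\leq 1,\qquad |\nabla\theta_m(x)|\leq\frac{\|\nabla\theta\|_{L^\infty}}{m}, \end{align*}

    so that \int_{|x|\geq 2m}|u(t,x)|^2dx\leq\|\theta_mu(t)\|^2; bounds on \mathbb{E}[\|\theta_mu(t)\|^2] therefore control the tail of u, which is how the conclusion of Lemma 3.3 is read off at the end of the proof.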

    By (3.2), we can find \mu_2>0 sufficiently small such that

    \begin{align} \mu_2+2\sqrt{2}a+4\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\gamma^2_k-2\lambda<0. \end{align} (3.23)

    By (3.22), applying Itô's formula together with integration by parts and taking the expectation, we obtain

        E[θmu2]+2t0eμ2(st)E[Rnθ2m|u|2β+2dx]ds=eμ2tE[θmφ(0)2]2t0eμ2(st)E[Re(1+iν)((Δ)α2u,(Δ)α2(θ2mu))]ds+(μ22λ)t0eμ2(st)E[θmu2]ds+2t0eμ2(st)E[Re(θmu,θmG(x,u(sρ)))]ds+k=1t0eμ2(st)E[θm(σ1,k+κ(x)σ2,k(u(s)))2]ds. (3.24)

    For the first term in the second row of (3.24), since \varphi(s)\in L^2(\Omega,C([-\rho,0],H)), we have \mathbb{E}[\|\varphi(0)\|^2]<\infty. It follows that for any \varepsilon>0, there exists a positive N_1 = N_1(\varepsilon,\varphi)\geq 1 such that for all m\geq N_1, one has \int_{|x|\geq m}\mathbb{E}[|\varphi(0,x)|^2]dx<\varepsilon. Consequently,

    \begin{align} \mathbb{E}\left[\|\theta_m\varphi(0)\|^2\right] = \mathbb{E}\left[\int_{\mathbb{R}^n}\left|\theta\left(\frac{x}{m}\right)\varphi(0,x)\right|^2dx\right] = \mathbb{E}\left[\int_{|x|\geq m}\left|\theta\left(\frac{x}{m}\right)\varphi(0,x)\right|^2dx\right]\leq\int_{|x|\geq m}\mathbb{E}[|\varphi(0,x)|^2]dx<\varepsilon,\qquad \forall\, m\geq N_1. \end{align} (3.25)

    Now we consider the second term on the right-hand side of (3.24). We first have

        2E[Re(1+iν)((Δ)α2u(s),(Δ)α2(θ2mu(s)))]=C(n,α)E[Re(1+iν)RnRn[u(x)u(y)][θ2m(x)ˉu(x)θ2m(y)ˉu(y)]|xy|n+2α]dxdy=C(n,α)E[Re(1+iν)RnRn[u(x)u(y)][θ2m(x)(ˉu(x)ˉu(y))+ˉu(y)(θ2m(x)θ2m(y))]|xy|n+2α]dxdy=C(n,α)E[Re(1+iν)RnRnθ2m(x)|u(x)u(y)|2|xy|n+2αdxdy]C(n,α)E[Re(1+iν)RnRn(u(x)u(y))(θ2m(x)θ2m(y))ˉu(y)|xy|n+2αdxdy]C(n,α)E[Re(1+iν)RnRn(u(x)u(y))(θ2m(x)θ2m(y))ˉu(y)|xy|n+2αdxdy]C(n,α)1+ν2E[|RnRn(u(x)u(y))(θ2m(x)θ2m(y))ˉu(y)|xy|n+2αdxdy|]2C(n,α)1+ν2E[Rn|ˉu(y)|(Rn|(u(x)u(y))(θm(x)θm(y))||xy|n+2αdx)dy]2C(n,α)1+ν2E[u(s)(Rn(Rn|(u(x)u(y))(θm(x)θm(y))||xy|n+2αdx)2dy)12]2C(n,α)1+ν2E[u(s)(Rn(Rn|u(x)u(y)|2|xy|n+2αdxRn|(θm(x)θm(y))|2|xy|n+2αdx)dy)12]. (3.26)

    We now prove the following inequality:

    \begin{align} \int_{\mathbb{R}^n}\frac{|\theta_m(x)-\theta_m(y)|^2}{|x-y|^{n+2\alpha}}dx\leq\frac{c_6}{m^{2\alpha}}. \end{align} (3.27)

    Let x-y = h and \frac{h}{m} = z; then we obtain

        Rn|(θm(x)θm(y))|2|xy|n+2αdx=Rn|θ(y+hm)θ(ym)|2|h|n+2αdh=Rn|θ(ym+z)θ(ym)|2mn+2α|z|n+2αmndz=1m2αRn|θ(ym+z)θ(ym)|2|z|n+2αdz=1m2α|z|1|θ(ym+z)θ(ym)|2|z|n+2αdz+1m2α|z|>1|θ(ym+z)θ(ym)|2|z|n+2αdzc6m2α|z|1|z|2|z|n+2αdz+4m2α|z|>11|z|n+2αdzc6m2α|z|11|z|n+2α2dz+4m2α|z|>11|z|n+2αdzc6ˉc6m2α+4˜c6m2α=c6ˉc6+4˜c6m2α. (3.28)

    This proves (3.27) with c6:=c6ˉc6+4˜c6. By (3.26) and (3.27), we obtain,

        2E[Re(1+iν)((Δ)α2u(s),(Δ)α2θ2mu(s))]2c6(1+ν2)C(n,α)mαE[u(s)RnRn|u(x)u(y)|2|xy|n+2αdxdy]c6(1+ν2)C(n,α)mα(E(u(s)2)+E(RnRn|u(x)u(y)|2|xy|n+2αdxdy))c6(1+ν2)C(n,α)mαE(u(s)2)+2c6(1+ν2)mαE((Δ)α2u(s)2). (3.29)

    By (3.29), for the second term on the right-hand side of (3.24), we get

        2t0eμ2sE[Re(1+iν)((Δ)α2u(s),(Δ)α2θ2mu(s))]dsc6(1+ν2)C(n,α)mαt0eμ2sE[u(s)2]ds+2c6(1+ν2)mαt0eμ2sE[(Δ)α2u(s)2]ds. (3.30)

    By Lemma 3.1, we have

        c6(1+ν2)C(n,α)mαt0eμ2(st)E[u(s)2]dsc6(1+ν2)C(n,α)mα[M1E[supρs0φ(s)2]+˜M1]t0eμ2(st)dsc6(1+ν2)C(n,α)mα1μ2[M1E[supρs0φ(s)2]+˜M1]. (3.31)

    By (3.31), we deduce that there exists N2(ε,φ)N1, for all t0 and mN2,

    c6(1+ν2)C(n,α)mαt0eμ2(st)E[u(s)2]ds<ε.

    By Lemma 3.1, there exists N3(ε,φ)N2 such that for all t0 and mN3,

        2c6(1+ν2)mαt0eμ2(st)E[(Δ)α2u2]ds2c6(1+ν2)mα[M1E[supρs0φ(s)2]+˜M1]<ε.

    For the fourth term on the right-hand side of (3.24), there exists N_4(\varepsilon,\varphi)\geq N_3 such that for all t\geq 0 and m\geq N_4,

        2t0eμ2(st)E[Re(θmu,θmG(x,u(sρ)))]ds2at0eμ2(st)E[θmu(s)2]ds+12at0eμ2(st)E[θmG(x,u(sρ))2]ds2aμ2|x|mh2(x)dx+2a0ρeμ2(st)E[θmφ(s)2]ds+22at0eμ2(st)E[θmu(s)2]ds2aμ2ε+2a0ρeμ2(st)E[θmφ(s)2]ds+22at0eμ2(st)E[θmu(s)2]ds.

    Since \{\varphi(s)\in L^2(\Omega,H)\mid s\in[-\rho,0]\} is compact, it has a finite open cover by balls of radius \frac{\varepsilon}{2}, denoted by \{B(\varphi_i,\frac{\varepsilon}{2})\}_{i = 1}^{l}. Since \varphi_i = \varphi(s_i) for some s_i\in[-\rho,0], i = 1,2,\ldots,l, we obtain that for the given \varepsilon>0,

    \begin{align*} \{\varphi(s): s\in[-\rho,0]\}\subset\bigcup\limits^l_{i = 1}\left\{X\in L^2(\Omega,H): \|X-\varphi_i\|_{L^2(\Omega,H)}<\frac{\varepsilon}{2}\right\}. \end{align*}

    Since \varphi_i\in L^2(\Omega,H), there exists a positive constant N_5 = N_5(\varepsilon,\varphi)\geq N_4 such that for all m\geq N_5, we have

    \begin{align*} \sup\limits_{i = 1,2,\ldots,l}\int_{|x|\geq m}\mathbb{E}[|\varphi(s_i,x)|^2]dx<\frac{\varepsilon}{4}. \end{align*}

    Then,

    \begin{align*} \sup\limits_{s\in[-\rho,0]}\int_{|x|\geq m}\mathbb{E}[|\varphi(s,x)|^2]dx<\frac{\varepsilon}{2},\qquad \forall\, m\geq N_5. \end{align*}

    Consequently, one has

        2t0eμ2(st)E[Re(θmu,θmG(x,u(sρ)))]ds2aμ2ε+2aρε2+22at0eμ2(st)E[θmu(s)2]ds. (3.32)

    For the fifth term on the right-hand side of (3.24), by (2.6), we obtain

        k=1t0eμ2(st)E[θm(σ1,k+κ(x)σ2,k(u(s)))2]ds2k=1t0eμ2(st)θmσ1,k2ds+2k=1t0eμ2(st)E[θmκ(x)σ2,k(u(s))2]ds2μ2k=1|x|m|σ1,k(x)|2dx+4μ2k=1β2k|x|mκ2(x)dx+4κ(x)2Lk=1γ2kt0eμ2(st)E[θmu(s)2]ds.

    Since \sum\limits^\infty_{k = 1}\|\sigma_{1,k}\|^2<\infty and \kappa(x)\in L^2(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n), there exists N_6 = N_6(\varepsilon,\varphi)\geq N_5 such that for all t\geq 0 and m\geq N_6, we have

    \begin{align*} \sum\limits^\infty_{k = 1}\int_{|x|\geq m}|\sigma_{1,k}(x)|^2dx+\int_{|x|\geq m}\kappa^2(x)dx<\varepsilon. \end{align*}

    Consequently, for the fifth term on the right-hand side of (3.24), we get for all t0 and mN6,

    k=1t0eμ2(st)E[θm(σ1,k+κσ2,k)2]ds2μ2(1+2k=1β2k)ε
    +4κ2Lk=1γ2kt0eμ2(st)E[θmu(s)2]ds.

    Therefore, for all t0 and mN6,

    E[θmu(t)2][2+eμ2t+2aμ2+22aρ+2μ2(1+2k=1β2k)]ε
    +(μ22λ+22a+4κ2Lk=1γ2k)t0eμ2(st)E[θmu(s)2]ds.

    Letting m\to\infty in the above inequality and using (3.23), we have

    \begin{align*} \limsup\limits_{m\to\infty}\sup\limits_{t\geq\rho}\int_{|x|\geq m}\mathbb{E}[|u(t,x)|^2]dx = 0, \end{align*}

    which completes the proof.

    Lemma 3.4. Suppose (2.1)–(2.6) and (3.2) hold. If \varphi(s)\in L^2(\Omega,C([-\rho,0],H)), then the solution u of (1.1) and (1.2) satisfies

    \begin{align*} \limsup\limits_{m\to\infty}\sup\limits_{t\geq 0}\mathbb{E}\left[\sup\limits_{r\in[t-\rho,t]}\int_{|x|\geq m}|u(r,x)|^2dx\right] = 0. \end{align*}

    Proof. By (3.22), applying Itô's formula together with integration by parts and taking the real part, for all t\geq\rho and r\in[t-\rho,t], we have

        eμ2rθmu(r)2+2rtρeμ2sRnθ2m|u|2β+2dxds=eμ2(tρ)θmu(tρ)22rtρeμ2sRe(1+iν)((Δ)α2u(s),(Δ)α2θ2mu(s))ds+(μ22λ)rtρeμ2sθmu(s)2ds+2Rertρeμ2s(θmu(s),θmG(x,u(sρ)))ds+k=1rtρeμ2sθm(σ1,k+κ(x)σ2,k(u(s)))2ds+2Rek=1rtρeμ2s(θmu(s),θm(σ1,k+κσ2,k(u(s))))dWk(s). (3.33)

    By (3.33), we deduce,

        E[suptρrtθmu(r)2]E[θmu(tρ)2]2E[suptρrtrtρeμ2(sr)Re(1+iν)((Δ)α2u,(Δ)α2θ2mu)ds]+|μ22λ|E[suptρrtrtρθmu2eμ2(sr)ds]+2E[suptρrtrtρeμ2(sr)θmuθmG(x,u(sρ))ds]+k=1E[suptρrtrtρeμ2(sr)θm(σ1,k+κσ2,k(u(s)))2ds]+2E[suptρrt|k=1rtρeμ2(sr)(θmu(s),θm(σ1,k+κσ2,k(u(s))))dWk(s)|]. (3.34)

    For the first term on the right-hand side of (3.34), by Lemma 3.3, for any \varepsilon>0 there exists \widetilde{N}_1(\varepsilon,\varphi)\geq 1 such that for all m\geq\widetilde{N}_1 and t\geq\rho,

    \begin{align} \mathbb{E}\left[\|\theta_mu(t-\rho)\|^2\right]\leq\int_{|x|\geq m}\mathbb{E}[|u(t-\rho,x)|^2]dx<\varepsilon. \end{align} (3.35)

    For the second term on the right-hand side of (3.34), by (3.29), we have

        2E[suptρrtrtρeμ2(sr)Re(1+iν)((Δ)α2u(s),(Δ)α2θ2mu(s))ds]2c6(1+ν2)C(n,α)mαE[suptρrt(rtρeμ2(sr)u(s)(Δ)α2u(s)ds)]2c6(1+ν2)C(n,α)mαeμ2ρE[(ttρeμ2(st)u(s)(Δ)α2u(s)ds)]c6(1+ν2)C(n,α)mαeμ2ρ{ttρeμ2(st)E[u2]ds+E[ttρeμ2(st)(Δ)α2u2ds]}c6(1+ν2)C(n,α)mαeμ2ρ{ρsups[tρ,t]E[u(s)2]+E[ttρeμ2(st)(Δ)α2u2ds]}. (3.36)

    By Lemma 3.1 and (3.36), we deduce that there exists ˜N2(ε,φ)˜N1 such that for all m˜N2 and tρ,

    2E[suptρrtrtρeμ2(sr)Re(1+iν)((Δ)α2u(s),(Δ)α2θ2mu(s))ds]<ε. (3.37)

    For the third term on the right-hand side of (3.34), by Lemma 3.3, we obtain that for all m˜N2 and tρ,

    |μ22λ|E[suptρrtrtρθmu(s)2eμ2(sr)ds]|μ22λ|E[ttρθmu(s)2ds]
    |μ22λ|ρsuptρstE[θmu(s)2]<|μ22λ|ρε. (3.38)

    For the fourth term on the right-hand side of (3.34), by (2.1), we obtain

    2E[suptρrtrtρeμ2(sr)θmu(s)θmG(x,u(sρ))ds]ttρE[θmu(s)2]ds+2ρθmh2+2a2tρt2ρE[θmu(s)2]dsρsuptρstE[θmu(s)2]+2ρθmh2+2a2ρsupt2ρstρE[θmu(s)2],

    which along with Lemma 3.3, we deduce that there exists ˜N3(ε,φ)˜N2 such that for all m˜N3 and tρ,

    2E[suptρrtrtρeμ2(sr)θmu(s)θmG(x,u(sρ))ds]<(3+2a2)ρε. (3.39)

    For the fifth term on the right-hand side of (3.34), by (2.6), we have

        k=1E[suptρrtrtρeμ2(sr)θm(σ1,k+κσ2,k(u(s)))2ds]2ρk=1θmσ1,k2+2ρk=1suptρstE[θmκ(x)σ2,k(u(s))2]2ρk=1|x|m|σ1,k(x)|2dx+4ρk=1β2k|x|m|κ(x)|2dx+4ρκ(x)2Lk=1γ2ksuptρstE[θmu(s)2].

    By the condition κ(x)L2(Rn)L(Rn), (2.4) and Lemma 3.3, we deduce that there exists ˜N4(ε,φ)˜N3 such that for all m˜N4 and tρ,

    k=1E[suptρrtrtρeμ2(sr)θm(σ1,k+κσ2,k(u(s)))2ds]<2ρ(1+λ+2k=1β2k)ε. (3.40)

    For the sixth term on the right-hand side of (3.34), by (2.6), (3.40) and Burkholder-Davis-Gundy's inequality, we have,

        2E[suptρrt|k=1rtρeμ2(sr)(θmu(s),θm(σ1,k+κσ2,k(u(s))))dWk(s)|]2eμ2(tρ)E[suptρrt|k=1rtρeμ2s(θmu(s),θmσ1,k+θmκ(x)σ2,k(u(s)))dWk(s)|]2˜B2eμ2(tρ)E[(ttρe2μ2sk=1|(θmu(s),θmσ1,k+θmκ(x)σ2,k(u(s)))|2ds)12]2˜B2eμ2(tρ)E[suptρstθmu(s)(ttρe2μ2sk=1θmσ1,k+θmκσ2,k(u(s))2ds)12]12E[suptρstθmu(s)2]+2˜B22E[e2μ2ρttρe2μ2(st)k=1θmσ1,k+θmκ(x)σ2,k(u(s))2ds]12E[suptρstθmu(s)2]+2˜B22e2μ2ρk=1E[suptρrtrtρeμ2(sr)θmσ1,k+θmκ(x)σ2,k(u(s))2ds]12E[suptρstθmu(s)2]+4ρ(1+λ+2k=1β2k)˜B22e2μ2ρε.

    Combining the above estimates, for all m\geq\widetilde{N}_4 and t\geq\rho, we obtain

    \begin{align*} \mathbb{E}\left[\sup\limits_{t-\rho\leq r\leq t}\|\theta_mu(r)\|^2\right]\leq\left[4+2|\mu_2-2\lambda|\rho+(6+4a^2)\rho+4\rho\left(1+2\widetilde{B}^2_2e^{2\mu_2\rho}\right)\left(1+\lambda+2\sum\limits^\infty_{k = 1}\beta^2_k\right)\right]\varepsilon. \end{align*}

    Therefore, we conclude

    \begin{align*} \limsup\limits_{m\to\infty}\sup\limits_{t\geq 0}\mathbb{E}\left[\sup\limits_{t-\rho\leq r\leq t}\int_{|x|\geq m}|u(r,x)|^2dx\right] = 0. \end{align*}

    Lemma 3.5. Suppose (2.1)–(2.6) and (3.1) hold. If \varphi(s)\in L^2(\Omega,C([-\rho,0],H)), then there exists a positive constant \mu_3 such that the solution u of (1.1) and (1.2) satisfies

    \begin{align} \sup\limits_{t\geq -\rho}\mathbb{E}\left[\|u(t)\|^{2p}\right]+\sup\limits_{t\geq 0}\mathbb{E}\left[\int^t_0e^{\mu_3(s-t)}\|u(s)\|^{2p-2}\|(-\Delta)^{\frac{\alpha}{2}}u(s)\|^2ds\right] \leq \left(1+a\rho e^{\frac{\mu\rho}{2p}}(4p-2)^{\frac{2p-1}{2p}}\right)\mathbb{E}\left[\|\varphi\|^{2p}_{C_H}\right]+M_3, \end{align} (3.41)
    where M3 is a positive constant independent of φ.

    Proof. By (3.1), there exist positive constants μ and ϵ1 such that

    \begin{align} \mu+ae^{\frac{\mu\rho}{2p}}2^{1-\frac{1}{2p}}(2p-1)^{\frac{2p-1}{2p}}+4(p-1)(2p-1)\epsilon_1^{\frac{2p}{2p-2}}\sum\limits^\infty_{k = 1}\left(\|\sigma_{1,k}\|^2+\|\kappa\|^2\beta^2_k\right)+4p(2p-1)\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\gamma^2_k\leq 2p\lambda. \end{align} (3.42)

    Given n\in\mathbb{N}, let \tau_n be the stopping time defined by

    \begin{align*} \tau_n = \inf\{t\geq 0: \|u(t)\|>n\}, \end{align*}

    and, as usual, we set \tau_n = +\infty if \{t\geq 0: \|u(t)\|>n\} = \emptyset. By the continuity of solutions, we have

    \begin{align*} \lim\limits_{n\to\infty}\tau_n = +\infty. \end{align*}

    Applying Ito's formula, we obtain

    \begin{align} d(\|u(t)\|^{2p}) = d\left((\|u(t)\|^2)^p\right) = p\|u(t)\|^{2(p-1)}d(\|u(t)\|^2)+2p(p-1)\|u(t)\|^{2(p-2)}\sum\limits^\infty_{k = 1}\left|\left(u(t),\sigma_{1,k}+\kappa\sigma_{2,k}(u(t))\right)\right|^2dt. \end{align} (3.43)

    Substituting (3.6) into (3.43), we infer

    d(u(t)2p)=2pu(t)2(p1)(Δ)α2u(t)2dt2pu(t)2(p1)u(t)2β+2L2β+2dt2pλu(t)2pdt     +2pu(t)2(p1)Re(u(t),G(x,u(tρ)))dt     +pu(t)2(p1)k=1σ1,k(x)+κ(x)σ2,k(u(t))2dt     +2pu(t)2(p1)Re(u(t),k=1σ1,k(x)+κ(x)σ2,k(u(t)))dWk(t)     +2p(p1)u(t)2(p2)k=1|(u(t),σ1,k+κσ2,k(u(t)))|2dt. (3.44)

    We also get the formula

    \begin{align} d\left(e^{\mu t}\|u(t)\|^{2p}\right) = \mu e^{\mu t}\|u(t)\|^{2p}dt+e^{\mu t}d\left(\|u(t)\|^{2p}\right). \end{align} (3.45)

    Substituting (3.44) into (3.45) and integrating over (0,t\wedge\tau_n) with t\geq 0, we deduce

        eμ(tτn)u(tτn)2p+2ptτn0eμsu(s)2(p1)(Δ)α2u(s)2ds=2ptτn0eμsu(s)2(p1)u(s)2β+2L2β+2ds+φ(0)2p+(μ2pλ)tτn0eμsu(s)2pds      +2ptτn0eμsu(s)2(p1)Re(u(s),G(x,u(sρ)))ds      +pk=1tτn0eμsu(s)2(p1)σ1,k+κσ2,k(u(s))2ds      +2pk=1tτn0eμsu(t)2(p1)Re(u(s),σ1,k+κσ2,k(u(s)))dWk(s)      +2p(p1)k=1tτn0eμsu(s)2(p2)|(u(s),σ1,k+κσ2,k(u(s)))|2ds. (3.46)

    Taking the expectation, we obtain for t0,

        E[eμ(tτn)u(tτn)2p]+2pE[tτn0eμsu(s)2(p1)(Δ)α2u(s)2ds]=2pE[tτn0eμsu(s)2(p1)u(s)2β+2L2β+2ds]+E[φ(0)2p]+(μ2pλ)E[tτn0eμsu(s)2pds]      +2pE[tτn0eμsu(s)2(p1)Re(u(s),G(x,u(sρ)))ds]      +pk=1E[tτn0eμsu(s)2(p1)σ1,k+κσ2,k(u(s))2ds]      +2p(p1)k=1E[tτn0eμsu(s)2(p2)|(u(s),σ1,k+κσ2,k(u(s)))|2ds]E[φ(0)2p]+(μ2pλ)E[tτn0eμsu(s)2pds]      +2pE[tτn0eμsu(s)2(p1)Re(u(s),G(x,u(sρ)))ds]      +pk=1E[tτn0eμsu(s)2(p1)σ1,k+κσ2,k(u(s))2ds]      +2p(p1)k=1E[tτn0eμsu(s)2(p2)|(u(s),σ1,k+κσ2,k(u(s)))|2ds]. (3.47)

    Next, we estimate the terms on the right-hand side of (3.47).

    For the third term on the right-hand side of (3.47), by Young's inequality and (2.1), we infer

        2θE[tτn0eμsu(s)2(p1)Re(u(s),G(x,u(sρ)))ds]2θE[tτn0eμsu(s)2p1G(x,u(sρ))2ds]aeμρ2p2112p(2p1)2p12pE[tτn0eμsu(s)2pds]        +(2p122p1a2peμρ)2p12pE[tτn0eμsG(x,u(sρ))2ds]aeμρ2p2112p(2p1)2p12pE[tτn0eμsu(s)2pds]        +22p1(2p122p1a2peμρ)2p12pE[tτn0eμs(h2p+a2pu(sρ)2p)ds]aeμρ2p2112p(2p1)2p12pE[tτn0eμsu(s)2pds]        +1μ(4p2a2peμρ)2p12ph2peμt+aρeμρ2p(4p2)2p12pE[φ2pCH]. (3.48)

    For the fourth term on the right-hand side of (3.47), we infer

        pk=1E[tτn0eμsu(s)2(p1)σ1,k+κσ2,k(u(s))2ds]2pk=1E[tτn0eμsu(s)2(p1)σ1,k2ds]      +2pk=1E[tτn0eμsu(s)2(p1)κσ2,k(u(s))2ds]. (3.49)

    For the first term on the right-hand side of (3.49), we have

        2pk=1E[tτn0eμsu(s)2(p1)σ1,k2ds]2(p1)ϵ2p2p21k=1σ1,k2E[tτn0eμsu(s)2pds]+2μϵp1k=1σ1,k2eμt. (3.50)

    For the second term on the right-hand side of (3.49), we have

        2pk=1E[tτn0eμsu(s)2(p1)κσ2,k(u(s))2ds]4pκ2k=1β2kE[tτn0eμsu(s)2(p1)ds]+4pκ2Lk=1γ2kE[tτn0eμsu(s)2pds]4(p1)ϵ2p2p21κ2k=1β2kE[tτn0eμsu(s)2pds]    +4μϵp1κ2k=1β2keμt+4pκ2Lk=1γ2kE[tτn0eμsu(s)2pds]. (3.51)

    By (3.49)–(3.51), we obtain

        pk=1E[tτn0eμsu(s)2(p1)σ1,k+κσ2,k(u(s))2ds][4(p1)ϵ2p2p21k=1(σ1,k2+κ2β2k)+4pκ2Lk=1γ2k]E[tτn0eμsu(s)2pds]    +2μϵp1k=1(σ1,k2+2κ2β2k))eμt. (3.52)

    For the fifth term on the right-hand side of (3.47), applying (3.52), we have

        2p(p1)k=1E[tτn0eμsu(s)2(p2)|(u(s),σ1,k+κσ2,k(u(s)))|2ds]2p(p1)k=1E[tτn0eμsu(s)2p2σ1,k+κσ2,k(u(s))2ds][8(p1)2ϵ2p2p21k=1(σ1,k2+κ2β2k)+8p(p1)κ2Lk=1γ2k]E[tτn0eμsu(s)2pads]    +4(p1)μϵp1k=1(σ1,k2+2κ2β2k))eμt. (3.53)

    From (3.47), (3.48), (3.52) and (3.53), we obtain that for t0,

        E[eμ(tτn)u(tτn)2p]+2pE[tτn0eμsu(s)2(p1)(Δ)α2u(s)2ds](1+aρeμρ2p(4p2)2p12p)E[φ2pCH]   +(μ2pλ+aeμρ2p2112p(2p1)2p12p+4(p1)(2p1)ϵ2p2p21    ×k=1(σ1,k2+κ2β2k)+4p(2p1)κ2Lk=1γ2k)E[tτn0eμsu(s)2pds]      +1μ(4p2a2peμρ)2p12ph2peμt+4(p1)μϵp1k=1(σ1,k2+2κ2β2k))eμt. (3.54)

    Then by (3.42) and (3.54), we obtain that for t0,

        E[eμ(tτn)u(tτn)2p]+2pE[tτn0eμsu(s)2(p1)(Δ)α2u(s)2ds](1+aρeμρ2p(4p2)2p12p)E[φ2pCH]+1μ(4p2a2paeμρ)2p12ph2peμt   +4(p1)μϵp1k=1(σ1,k2+2κ2β2k))eμt. (3.55)

    Letting n\to\infty, by Fatou's lemma we deduce that for t\geq 0,

        E[eμtu(t)2p]+2pE[t0eμsu(s)2(p1)(Δ)α2u(s)2ds](1+aρeμρ2p(4θ2)2θ12p)E[φ2pCH]+1μ(4p2a2peμρ)2p12ph2peμt   +4(p1)μϵp1k=1(σ1,k2+2κ2β2k))eμt.

    Hence, we have for t\geq 0 ,

    \begin{align*} &\ \ \ \ \mathbb{E}\left[\|u(t)\|^{2p}\right]+2p\mathbb{E}\left[\int_0^{t}e^{\mu (s-t)}\|u(s)\|^{2(p-1)}\|(-\Delta)^{\frac{\alpha}{2}}u(s)\|^{2}ds\right]\\& \leq\left(1+a\rho e^{\frac{\mu\rho}{2p}}(4p-2)^{\frac{2p-1}{2p}} \right)\mathbb{E}\left[\|\varphi\|^{2p}_{C_H}\right]+ \frac 1\mu\left(\frac{4p-2}{a^{2p}e^{\mu\rho}}\right)^{\frac{2p-1}{2p}}\|h\|^{2p}\\&\ \ \ +\frac{4(p-1)}{\mu\epsilon_1^p}\sum\limits^\infty_{k = 1}(\|\sigma_{1,k}\|^2 +2\|\kappa\|^2\beta^2_k)). \end{align*}

    This implies the desired estimate.

    In this section, we establish the uniform estimates of solutions of problems (1.1) and (1.2) with initial data in C([-\rho, 0], V) . To this end, we assume that for each k\in\mathbb{N} , the function \sigma_{1, k}\in V and

    \begin{align} \sum\limits^\infty_{k = 1}\|\sigma_{1,k}\|^2_V < \infty. \end{align} (4.1)

    Furthermore, we assume that the function \kappa\in V and there exists a constant C > 0 such that

    \begin{align} |\nabla\kappa(x)|\leq C. \end{align} (4.2)

    In the sequel, we further assume that the constants \hat{a} and \hat{\gamma}_k in (2.2) and (2.7) are sufficiently small in the sense that there exists a constant p\geq 2 such that

    \begin{align} \hat{a}2^{1-\frac{1}{2p}}(2p-1)^{\frac{2p-1}{2p}}+2p(2p-1)\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}(\beta^2_k+\hat{\beta}^2_k+\gamma_k^2+\hat{\gamma}^2_k) < p\frac\lambda 2. \end{align} (4.3)

    By (4.3), we can find

    \begin{align} \sqrt{2}\hat{a}+2\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k < \frac \lambda 2. \end{align} (4.4)

    Lemma 4.1. Suppose (2.1)–(2.7) and (4.4) hold. If \varphi(s)\in L^2(\Omega; C([-\rho, 0], V)) , then, for all t\geq0 , there exists a positive constant \mu_4 such that the solution u of (1.1) and (1.2) satisfies

    \begin{align} \begin{split} \sup\limits_{t\geq -\rho}\mathbb{E}[\|\nabla u(t)\|^2]+\sup\limits_{t\geq 0}\mathbb{E}\left[\int^t_0e^{\mu_4(s-t)}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\right] \leq M_4\left(\mathbb{E}[\|\varphi\|^2_{C_V}]+1\right), \end{split} \end{align} (4.5)

    where M_4 is a positive constant independent of \varphi .

    Proof. By (4.4), there exists a positive constant \mu_1 such that

    \begin{align} \mu_1-2\lambda+8\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k < 0. \end{align} (4.6)

    By (1.1) and applying Ito's formula to e^{\mu_1 t}\|\nabla u(t)\|^2 , we have for t\geq 0 ,

    \begin{align*} &\ \ \ \ e^{\mu_1t}\|\nabla u(t)\|^2+2\int^t_0e^{\mu_1s}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds+2\int^t_0e^{\mu_1s}\mbox{Re}\left((1+\text{i}\mu)|u(s)|^{2\beta}u(s),-\Delta u(s)\right)ds\\& = (\mu_1-2\lambda)\int^t_0e^{\mu_1s}\|\nabla u(s)\|^2ds+\|\nabla\varphi(0)\|^2+2\mbox{Re}\int^t_0e^{\mu_1s}(G(x,u(s-\rho)),-\Delta u(s))ds\\& \ \ \ \ +\sum\limits^\infty_{k = 1}\int^t_0e^{\mu_1s}\|\nabla(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))\|^2ds\\& \ \ \ \ \ +2\sum\limits^\infty_{k = 1}\mbox{Re}\int^t_0e^{\mu_1s}\left(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(s)),-\Delta u(s)\right)dW_k(s). \end{align*}

    Taking the expectation, we have for all t\geq0 ,

    \begin{align} \begin{split} &\ \ \ \ e^{\mu_1t}\mathbb{E}[\|\nabla u(t)\|^2]\!\!+\!\!2\mathbb{E}\left[\int^t_0e^{\mu_1s}\|(\!\!-\!\!\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\right] \!\!+\!\!2\mathbb{E}\left[\int^t_0e^{\mu_1s}\mbox{Re}\left((1\!\!+\!\!\text{i}\mu)|u(s)|^{2\beta}u(s),\!\!-\!\!\Delta u(s)\right)ds\right]\\& = (\mu_1-2\lambda)\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\nabla u(s)\|^2ds\right]\!\!+\!\!\mathbb{E}\left[\|\nabla\varphi(0)\|^2\right]\!\!+\!\!2\mathbb{E}\left[\mbox{Re}\int^t_0e^{\mu_1s}(G(x,u(s-\rho)),-\Delta u(s))ds\right]\\& \ \ \ \ \!\!+\!\!\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\nabla(\sigma_{1,k}\!\!+\!\!\kappa\sigma_{2,k}(u(s)))\|^2ds\right]. \end{split} \end{align} (4.7)

    First, we estimate the third term on the left-hand side of (4.7). Integrating by parts, we have

    \begin{align} \begin{split} &\ \ \ \ \mbox{Re}\left((1+\text{i}\mu)|u|^{2\beta}u,\Delta u\right)\\& = -\mbox{Re}(1+\text{i}\mu)\int_{\mathbb{R}^n}\left((\beta+1)|u|^{2\beta}|\nabla u|^2+\beta|u|^{2(\beta-1)}(u\nabla \overline{u})^2\right)dx\\& = \int_{\mathbb{R}^n}|u|^{2(\beta-1)}\left(-(\beta+1)|u|^{2}|\nabla u|^2+\frac{\beta(1+\text{i}\mu)}{2}(u\nabla \overline{u})^2+\frac{\beta(1-\text{i}\mu)}{2}(\overline{u}\nabla u)^2\right)dx\\& = \int_{\mathbb{R}^n}|u|^{2(\beta-1)}trace(YMY^H), \end{split} \end{align} (4.8)

    where

    Y = \left( \begin{array}{c} \overline{u}\nabla u \\ u\nabla \overline{u} \\ \end{array} \right)^H, M = \left( \begin{array}{cc} -\frac{\beta+1}{2} & \frac{\beta(1+\text{i}\mu)}{2} \\ \frac{\beta(1-\text{i}\mu)}{2} & -\frac{\beta+1}{2} \\ \end{array} \right),

    and Y^H is the conjugate transpose of the matrix Y . We observe that the condition \beta\leq \frac{1}{\sqrt{1+\mu^2}-1} implies that the matrix M is nonpositive definite. Hence, we obtain

    \begin{align} \begin{split} 2\mathbb{E}\left[\int^t_0e^{\mu_1s}\mbox{Re}\left((1+\text{i}\mu)|u(s)|^{2\beta}u(s),\Delta u(s)\right)ds\right]\leq 0. \end{split} \end{align} (4.9)
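    For completeness, here is a short verification (our own computation, using only the entries of M displayed above) of this sign condition: M is Hermitian with \mathrm{trace}(M) = -(\beta+1)<0, and

    \begin{align*} \det M = \frac{(\beta+1)^2}{4}-\frac{\beta^2(1+\mu^2)}{4}\geq 0 \;\Longleftrightarrow\; \beta+1\geq\beta\sqrt{1+\mu^2} \;\Longleftrightarrow\; \beta\left(\sqrt{1+\mu^2}-1\right)\leq 1, \end{align*}

    which is exactly the condition \beta\leq\frac{1}{\sqrt{1+\mu^2}-1}. A Hermitian 2\times2 matrix with nonpositive trace and nonnegative determinant has only nonpositive eigenvalues, so trace(YMY^H)\leq 0 pointwise and (4.9) follows.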

    Next, we estimate the terms on the right-hand side of (4.7). For the third term on the right-hand side of (4.7), applying (2.2) and Gagliardo-Nirenberg inequality, we have

    \begin{align} \begin{split} &\ \ \ \ 2\mathbb{E}\left[\mbox{Re}\int^t_0e^{\mu_1s}(G(x,u(s-\rho)),-\Delta u(s))ds\right]\leq2\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\nabla u(s)\|\|\nabla G(x,u(s-\rho))\|ds\right]\\& \leq\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\nabla u(s)\|^2ds\right]+\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\nabla G(x,u(s-\rho))\|^2ds\right]\\& \leq\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\nabla u(s)\|^2ds\right]+2\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\hat{h}(x)\|^2ds\right]+2\hat{a}^2\mathbb{E}\left[\int^{t}_0e^{\mu_1s}\|\nabla u(s-\rho)\|^2ds\right]\\& \leq\mathbb{E}\left[\int^t_0e^{\mu_1s}\|(\!-\!\Delta)^{\frac{\alpha+1}{2}} u(s)\|^2ds\right]\!\!+\!\!\frac{2}{\mu_1}\|\hat{h}(x)\|^2e^{\mu_1t}\\&\ \ \ \ \!\!+\!\!\frac{c}{\mu_1}\sup\limits_{s\geq 0}\mathbb{E}[\| u(s)\|^2]e^{\mu_1t}\!\!+\!\!\frac{2\hat{a}^2}{\mu_1}\sup\limits_{-\rho\leq s\leq 0}\mathbb{E}[\|\nabla \varphi(s)\|^2]e^{\mu_1t}, \end{split} \end{align} (4.10)

    where c is a positive constant from the Gagliardo-Nirenberg inequality. For the fourth term on the right-hand side of (4.7), applying (2.6) and (2.7), we have

    \begin{align} \begin{split} &\ \ \ \ \sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\nabla(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))\|^2ds\right] \leq2\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu_1s}(\|\nabla\sigma_{1,k}\|^2\!\!+\!\!\|\nabla(\kappa\sigma_{2,k}(u(s)))\|^2)ds\right]\\& \leq\frac{2}{\mu_1}\sum\limits^\infty_{k = 1}\|\nabla \sigma_{1,k}\|^2e^{\mu_1t} \!\!+\!\!8\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu_1s}\left(\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!\hat{\beta}^2_k\|\kappa\|^2\!\!+\!\!{\gamma}^2_kC^2\|u(s)\|^2\!\!+\!\!\hat{\gamma}^2_k\|\kappa\|^2_{L^\infty}\|\nabla u(s)\|^2\right)ds\right]\\& \leq\frac{2}{\mu_1}\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2+4\beta^2_k\|\nabla\kappa\|^2+4\hat{\beta}^2_k\|\kappa\|^2+4C^2{\gamma}^2_k\sup\limits_{s\geq 0}\mathbb{E}[\|u(s)\|^2]\right)e^{\mu_1t}\\&\ \ \ \ \ +8\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k\|\kappa(x)\|^2_{L^\infty}\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\nabla u(s)\|^2ds\right]. \end{split} \end{align} (4.11)

    By (4.7), (4.10) and (4.11), we obtain

    \begin{align} \begin{split} &\ \ \ \ \mathbb{E}[\|\nabla u(t)\|^2]\!\!+\!\!\mathbb{E}\left[\int^t_0e^{\mu_1(s-t)}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\right]\\&\ \leq \mathbb{E}\left[\|\nabla\varphi(0)\|^2\right]e^{-\mu_1t}+\frac{2}{\mu_1}\|\hat{h}(x)\|^2 +\left(\mu_1-2\lambda+8\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k\right)\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\nabla u(s)\|^2ds\right]\\&\ \ \ \ +\frac{2}{\mu_1}\left(\frac c2+4\left(C^2\sum\limits^\infty_{k = 1}{\gamma}^2_k+c\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k\right)\right)\sup\limits_{s\geq -\rho}\mathbb{E}[\|u(s)\|^2]\\& \ \ \ \ +\frac{2}{\mu_1}\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2+4(\beta^2_k+\hat{\beta}^2_k)\|\kappa\|^2_V\right)+\frac{2\hat{a}^2}{\mu_1}\sup\limits_{-\rho\leq s\leq 0}\mathbb{E}[\|\nabla \varphi(s)\|^2]. \end{split} \end{align} (4.12)

    Then by (4.6) and (4.12), we obtain that for all t\geq 0 ,

    \begin{align} \begin{split} &\ \ \ \ \mathbb{E}[\|\nabla u(t)\|^2]+\mathbb{E}\left[\int^t_0e^{\mu_1(s-t)}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\right]\\&\ \leq \mathbb{E}\left[\|\nabla\varphi(0)\|^2\right]e^{-\mu_1t}\!+\!\frac{2}{\mu_1}\|\hat{h}(x)\|^2\!+\!\frac{2}{\mu_1}\left(\frac c2\!+\!4(C^2\sum\limits^\infty_{k = 1}{\gamma}^2_k\!+\!c\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k)\right)\sup\limits_{s\geq -\rho}\mathbb{E}[\|u(s)\|^2]\\& \ \ \ \ +\frac{2}{\mu_1}\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2+4(\beta^2_k+\hat{\beta}^2_k)\|\kappa\|^2_V\right)+\frac{2\hat{a}^2}{\mu_1}\sup\limits_{-\rho\leq s\leq 0}\mathbb{E}[\|\nabla \varphi(s)\|^2]. \end{split} \end{align} (4.13)

    Then by (4.13) and Lemma 3.1, we obtain the estimates (4.5).

    Lemma 4.2. Suppose (2.1)–(2.7) and (4.4) hold. If \varphi(s)\in L^2(\Omega; C([-\rho, 0], V)) , then the solution u of (1.1) and (1.2) satisfies

    \begin{align} \begin{split} \sup\limits_{t\geq \rho}\left\{\mathbb{E}\left[\sup\limits_{t-\rho\leq r\leq t}\|\nabla u(r)\|^2\right]\right\} \leq M_5\left(\mathbb{E}[\|\varphi\|^2_{C_V}]+1\right), \end{split} \end{align} (4.14)

    where M_5 is a positive constant independent of \varphi .

    Proof. By (1.1) and Ito's formula, we get for all t\geq \rho and t-\rho\leq r\leq t ,

    \begin{align} \begin{split} &\ \ \ \ \|\nabla u(r)\|^2+2\int^r_{t-\rho}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\\&\ \ \ \ +2\int^r_{t-\rho}\mbox{Re}\left((1+\text{i}\mu)|u(s)|^{2\beta}u(s),-\Delta u(s)\right)ds+2\lambda\int^r_{t-\rho}\|\nabla u(s)\|^2ds\\& = \|\nabla u(t-\rho)\|^2+2\mbox{Re}\int^r_{t-\rho}(G(x,u(s-\rho)),-\Delta u(s))ds\\& \ \ \ \ +\sum\limits^\infty_{k = 1}\int^r_{t-\rho}\|\nabla(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))\|^2ds\\& \ \ \ \ \ +2\sum\limits^\infty_{k = 1}\mbox{Re}\int^r_{t-\rho}\left(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(s)),-\Delta u(s)\right)dW_k(s). \end{split} \end{align} (4.15)

    For the third term on the left-hand side of (4.15), applying (4.8), we have

    \begin{align} \begin{split} -2\int^r_{t-\rho}\mbox{Re}\left((1+\text{i}\mu)|u(s)|^{2\beta}u(s),-\Delta u(s)\right)ds\leq 0. \end{split} \end{align} (4.16)

    For the second term on the right-hand side of (4.15), applying (2.2) and Gagliardo-Nirenberg inequality, we have

    \begin{align} \begin{split} &\ \ \ \ 2\mbox{Re}\int^r_{t-\rho}(G(x,u(s-\rho)),-\Delta u(s))ds\leq2\int^r_{t-\rho}\|\nabla u(s)\|\|\nabla G(x,u(s-\rho))\|ds\\& \leq\int^r_{t-\rho}\|\nabla u(s)\|^2ds+\int^r_{t-\rho}\|\nabla G(x,u(s-\rho))\|^2ds\\& \leq\int^r_{t-\rho}\|\nabla u(s)\|^2ds+2\int^r_{t-\rho}\|\hat{h}(x)\|^2ds+2\hat{a}^2\int^r_{t-\rho}\|\nabla u(s-\rho)\|^2ds\\& \leq\int^r_{t-\rho}\|(\!-\!\Delta)^{\frac{\alpha+1}{2}} u(s)\|^2ds\!+\!2\rho\|\hat{h}(x)\|^2+2\hat{a}^2\int^{r-\rho}_{t-2\rho}\|\nabla u(s)\|^2ds\!+c\int^r_{t-\rho}\|u(s)\|^2ds. \end{split} \end{align} (4.17)

    For the third term on the right-hand side of (4.15), applying (2.6) and (2.7), we have

    \begin{align} \begin{split} &\ \ \ \ \sum\limits^\infty_{k = 1}\int^r_{t-\rho}\|\nabla(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))\|^2ds \leq2\sum\limits^\infty_{k = 1}\int^r_{t-\rho}(\|\nabla\sigma_{1,k}\|^2+\|\nabla(\kappa\sigma_{2,k}(u(s)))\|^2)ds\\& \leq 2\rho\sum\limits^\infty_{k = 1}\|\nabla \sigma_{1,k}\|^2+8\rho\left(\|\nabla \kappa\|^2\sum\limits^\infty_{k = 1}\beta^2_k+\|\kappa\|^2\sum\limits^\infty_{k = 1}\hat{\beta}^2_k\right)\\&\ \ \ \ \ +8C^2\sum\limits^\infty_{k = 1}{\gamma}^2_k\int^r_{t-\rho}\|u(s)\|^2ds +8\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k\int^r_{t-\rho}\|\nabla u(s)\|^2ds. \end{split} \end{align} (4.18)

    By (4.4) and (4.15)–(4.18), we infer that for all t\geq \rho and t-\rho\leq r\leq t ,

    \begin{align} \begin{split} &\ \ \ \ \|\nabla u(r)\|^2\leq c_1+\|\nabla u(t-\rho)\|^2+c_2\int^r_{t-2\rho}\|u(s)\|^2ds+2\hat{a}^2\int^{r}_{t-2\rho}\|\nabla u(s)\|^2ds\\& \ \ \ \ \ +2\sum\limits^\infty_{k = 1}\mbox{Re}\int^r_{t-\rho}\left(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(s)),-\Delta u(s)\right)dW_k(s), \end{split} \end{align} (4.19)

    where c_1 and c_2 are positive constants. By (4.19), we deduce that for all t\geq \rho ,

    \begin{align} \begin{split} &\ \ \ \ \mathbb{E}\left[\sup\limits_{t-\rho\leq r\leq t}\|\nabla u(r)\|^2\right]\leq c_1+\mathbb{E}\left[\|\nabla u(t-\rho)\|^2\right]+c_2\int^r_{t-2\rho}\mathbb{E}\left[\|u(s)\|^2+\|\nabla u(s)\|^2\right]ds\\& \ \ \ \ \ +2\mathbb{E}\left[\sup\limits_{t-\rho\leq r\leq t}\left|\sum\limits^\infty_{k = 1}\int^r_{t-\rho}\left(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(s)),-\Delta u(s)\right)dW_k(s)\right|\right]. \end{split} \end{align} (4.20)

    For the second term on the right-hand side of (4.20), by Lemma 4.1 we infer that for all t\geq \rho ,

    \begin{align} \begin{split} &\ \ \ \ \mathbb{E}\left[\|\nabla u(t-\rho)\|^2\right]\leq \sup\limits_{s\geq-\rho}\mathbb{E}\left[\|\nabla u(s)\|^2\right]\leq c_3\mathbb{E}\left[\|\varphi\|^2_{C_V}\right]+c_3. \end{split} \end{align} (4.21)

    For the third term on the right-hand side of (4.20), by Lemmas 3.1 and 4.1 we infer that for all t\geq \rho ,

    \begin{align} \begin{split} &\ \ \ \ c_2\int^r_{t-2\rho}\mathbb{E}\left[\|u(s)\|^2+\|\nabla u(s)\|^2\right]ds\leq 2\rho c_2 \sup\limits_{s\geq-\rho}\mathbb{E}\left[\|u(s)\|^2+\|\nabla u(s)\|^2\right]\leq c_4\mathbb{E}\left[\|\varphi\|^2_{C_V}\right]+c_4. \end{split} \end{align} (4.22)

    For the last term on the right-hand side of (4.20), by BDG inequality, (4.18), Lemmas 3.1 and 4.1, we deduce that for all t\geq \rho ,

    \begin{align} \begin{split} &\ \ \ \ 2\mathbb{E}\left[\sup\limits_{t-\rho\leq r\leq t}\left|\sum\limits^\infty_{k = 1}\int^r_{t-\rho}\left(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(s)),-\Delta u(s)\right)dW_k(s)\right|\right]\\& \leq2c_5\mathbb{E}\left[\left(\sum\limits^\infty_{k = 1}\int^t_{t-\rho}\left|\left(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(s)),-\Delta u(s)\right)\right|^2ds\right)^{\frac 12}\right]\\& \leq2c_5\mathbb{E}\left[\left(\sum\limits^\infty_{k = 1}\int^t_{t-\rho}\|\nabla u(s)\|^2\|\nabla\left(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(s))\right)\|^2ds\right)^{\frac 12}\right]\\&\leq2c_5\mathbb{E}\left[\sup\limits_{t-\rho\leq s\leq t}\|\nabla u(s)\|\left(\sum\limits^\infty_{k = 1}\int^t_{t-\rho}\|\nabla\left(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(s))\right)\|^2ds\right)^{\frac 12}\right]\\&\leq \frac 12\mathbb{E}\left[\sup\limits_{t-\rho\leq s\leq t}\|\nabla u(s)\|^2\right]+2c_5^2\mathbb{E}\left[\sum\limits^\infty_{k = 1}\int^t_{t-\rho}\|\nabla\left(\sigma_{1,k}(x)+\kappa(x)\sigma_{2,k}(u(s))\right)\|^2ds\right]\\&\leq \frac 12\mathbb{E}\left[\sup\limits_{t-\rho\leq s\leq t}\|\nabla u(s)\|^2\right]+c_6+c_6\int^t_{t-\rho}\mathbb{E}\left[\|u(s)\|^2+\|\nabla u(s)\|^2\right]ds\\&\leq \frac 12\mathbb{E}\left[\sup\limits_{t-\rho\leq s\leq t}\|\nabla u(s)\|^2\right]+c_6+\rho c_6\left(\sup\limits_{s\geq 0}\mathbb{E}\|u(s)\|^2+\sup\limits_{s\geq 0}\|\nabla u(s)\|^2\right)\\&\leq \frac 12\mathbb{E}\left[\sup\limits_{t-\rho\leq s\leq t}\|\nabla u(s)\|^2\right]+c_7\mathbb{E}\left[\|\varphi\|^2_{C_V}\right]+c_7. \end{split} \end{align} (4.23)

    By (4.20)–(4.23), we obtain that for all t\geq \rho ,

    \mathbb{E}\left[\sup\limits_{t-\rho\leq r\leq t}\|\nabla u(r)\|^2\right]\leq c_8\mathbb{E}\left[\|\varphi\|^2_{C_V}\right]+c_9,

    which completes the proof.

    Lemma 4.3. Suppose (2.1)–(2.7) and (3.1) hold. If \varphi(s)\in L^{2p}(\Omega, C([-\rho, 0], V)) , then there exists a positive constant \mu_5 such that the solution u of (1.1) and (1.2) satisfies

    \begin{align} \begin{split} &\ \sup\limits_{t\geq -\rho}\mathbb{E}[\|\nabla u(t)\|^{2p}]+\sup\limits_{t\geq 0}\mathbb{E}\left[\int^t_0e^{\mu_5(s-t)}\|\nabla u(s)\|^{2(p-1)}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\right]\\&\ \ \ \ \ \ \ \ \leq M_5\left(\mathbb{E}\left[\|\varphi\|^{2p}_{C_V}\right]+1\right), \end{split} \end{align} (4.24)

    where M_5 is a positive constant independent of \varphi .

    Proof. By (3.1), there exist positive constants \mu and \epsilon_1 such that

    \begin{align} \begin{split} &\ \mu+4(p-1)\epsilon_1^{\frac {p}{p-1}}+\frac{2p\hat{a}^{2}}{\mu\epsilon_1^p} +8C^2(p-1)(2p-1)\epsilon_1^{\frac p{p-1}}\sum\limits^\infty_{k = 1}{\gamma}^2_k +8p(2p-1)\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k\\&\ \ \ \ \ \ \ \ \ +2(p-1)(2p-1)\epsilon_1^{\frac p{p-1}}\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2+4\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!4\hat{\beta}^2_k\|\kappa\|^2\right)\leq 2p\lambda. \end{split} \end{align} (4.25)

    By (1.1) and applying Ito's formula to e^{\mu t}\|\nabla u(t)\|^{2p} , we get for t\geq 0 ,

    \begin{align*} &\ \ \ \ e^{\mu t}\|\nabla u(t)\|^{2p}+2p\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\\&\ \ \ \ \ \ +2p\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\mbox{Re}\left((1+\text{i}\mu)|u(s)|^{2\beta}u(s),-\Delta u(s)\right)ds\\& = \|\nabla\varphi(0)\|^{2p}+(\mu-2p\lambda)\int^t_0e^{\mu s}\|\nabla u(s)\|^{2p}ds\\&\ \ \ \ \ +2p\mbox{Re}\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}(G(x,u(s-\rho)),-\Delta u(s))ds\\& \ \ \ \ \ +p\sum\limits^\infty_{k = 1}\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\nabla(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))\|^2ds\\& \ \ \ \ \ +2p\sum\limits^\infty_{k = 1}\mbox{Re}\int^t_0e^{\mu_1s}\|\nabla u(s)\|^{2(p-1)}\left(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)),-\Delta u(s)\right)dW_k(s)\\& \ \ \ \ \ +2p(p-1)\sum\limits^\infty_{k = 1}\mbox{Re}\int^t_0e^{\mu_1s}\|\nabla u(s)\|^{2(p-2)}\left|(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)),-\Delta u(s))\right|^2ds. \end{align*}

    Taking the expectation, we have for t\geq 0 ,

    \begin{align} \begin{split} &\ \ \ \ e^{\mu t}\mathbb{E}\left[\|\nabla u(t)\|^{2p}\right]+2p\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\right]\\&\ \ \ \ \ \ +2p\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\mbox{Re}\left((1+\text{i}\mu)|u(s)|^{2\beta}u(s),-\Delta u(s)\right)ds\right]\\& = \mathbb{E}\left[\|\nabla\varphi(0)\|^{2p}\right]+(\mu-2p\lambda)\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2p}ds\right]\\&\ \ \ \ \ +2p\mathbb{E}\left[\mbox{Re}\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}(G(x,u(s-\rho)),-\Delta u(s))ds\right]\\& \ \ \ \ \ +p\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\nabla(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))\|^2ds\right]\\& \ \ \ \ \ +2p(p-1)\sum\limits^\infty_{k = 1}\mathbb{E}\left[\mbox{Re}\int^t_0e^{\mu_1s}\|\nabla u(s)\|^{2(p-2)}\left|(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)),-\Delta u(s))\right|^2ds\right]. \end{split} \end{align} (4.26)

    By (4.8), the third term on the left-hand side of (4.26) is nonnegative. Next, we estimate each term on the right-hand side of (4.26). For the third term on the right-hand side of (4.26), applying (2.2), the Gagliardo-Nirenberg inequality and Young's inequality, we deduce

    \begin{align} \begin{split} &\ \ \ \ 2p\mathbb{E}\left[\mbox{Re}\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}(G(x,u(s-\rho)),-\Delta u(s))ds\right]\\&\leq2p\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\nabla u(s)\|\|\nabla G(x,u(s-\rho))\|ds\right]\\&\leq p\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\nabla u(s)\|^2ds\right]+p\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\nabla G(x,u(s-\rho))\|^2ds\right]\\&\leq p\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\nabla u(s)\|^2ds\right]\\&\ \ \ \ +2p\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\hat{h}\|^2ds\right]+2p\hat{a}^2\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\nabla u(s-\rho)\|^2ds\right]\\&\leq p\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}(\|(\!-\!\Delta)^{\frac{\alpha+1}{2}} u(s)\|^2+c\|u(s)\|^2)ds\right]+\frac 2{\epsilon_1^p}\|\hat{h}(x)\|^{2p}\int^t_0e^{\mu s}ds\\&\ \ \ \ +4(p-1)\epsilon_1^{\frac {p}{p-1}}\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2p}ds\right] +\frac{2p\hat{a}^{2}}{\epsilon_1^p}\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2p}ds\right]\\&\leq p\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|(\!-\!\Delta)^{\frac{\alpha+1}{2}} u(s)\|^2ds\right]+\left(4(p-1)\epsilon_1^{\frac {p}{p-1}}+\frac{2p\hat{a}^{2}}{\mu\epsilon_1^p}\right)\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2p}ds\right]\\&\ \ \ \ +\frac 1{\mu\epsilon_1^p}\|\hat{h}(x)\|^{2p}e^{\mu t} +\frac{2p\hat{a}^{2}}{\mu\epsilon_1^p}\sup\limits_{-\rho\leq s\leq 0}\mathbb{E}\left[\|\nabla \varphi(s)\|^{2p}\right]e^{\mu t}+c\sup\limits_{s\geq 0}\mathbb{E}\left[\| u(s)\|^{2p}\right]e^{\mu t}. \end{split} \end{align} (4.27)

    For the fourth term on the right-hand side of (4.26), applying (2.7), we infer

    \begin{align} \begin{split} &\ \ \ \ p\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\nabla(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))\|^2ds\right]\\& \leq2p\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p\!-\!1)}\|\nabla\sigma_{1,k}\|^2ds\right] \!+\!2p\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p\!-\!1)}\|\nabla(\kappa\sigma_{2,k}(u(s)))\|^2ds\right]\\&\leq 2p\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\nabla\sigma_{1,k}\|^2ds\right]\\&\ \ \ \ +8p\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\left(\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!\hat{\beta}^2_k\|\kappa\|^2\!\!+\!\!{\gamma}^2_kC^2\|u(s)\|^2\!\!+\!\!\hat{\gamma}^2_k\|\kappa\|^2_{L^\infty}\|\nabla u(s)\|^2\right)ds\right]\\&\leq2p\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\left(\|\nabla\sigma_{1,k}\|^2+4\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!4\hat{\beta}^2_k\|\kappa\|^2\right)ds\right]\\&\ \ \ \ +8C^2p\sum\limits^\infty_{k = 1}{\gamma}^2_k\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|u(s)\|^2ds\right] +8p\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2p}ds\right]. \end{split} \end{align} (4.28)

    Then applying Young's inequality, (4.28) can be estimated by

    \begin{align} \begin{split} &\ \ \ \ p\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2(p-1)}\|\nabla(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))\|^2ds\right]\\& \leq\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2+4\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!4\hat{\beta}^2_k\|\kappa\|^2\right)\times\mathbb{E}\left[\int^t_0e^{\mu s}\left((2p-2)\epsilon_1^{\frac p{p-1}}\|\nabla u(s)\|^{2p}+\frac2{\epsilon_1^p}\right)ds\right]\\&\ \ \ \ +2C^2\sum\limits^\infty_{k = 1}{\gamma}^2_k\mathbb{E}\left[\int^t_0e^{\mu s}\left((4p-4)\epsilon_1^{\frac p{p-1}}\|\nabla u(s)\|^{2p}+\frac4{\epsilon_1^p}\|u(s)\|^{2p}\right)ds\right]\\&\ \ \ \ +8p\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k\mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2p}ds\right]\\& = \left[(2p-2)\epsilon_1^{\frac p{p-1}}\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2+4\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!4\hat{\beta}^2_k\|\kappa\|^2\right)\right.\\&\ \ \ \ \left.+2C^2(4p-4)\epsilon_1^{\frac p{p-1}}\sum\limits^\infty_{k = 1}{\gamma}^2_k+8p\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k\right] \mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2p}ds\right]\\&\ \ \ \ \!+\!\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2\!+\!4\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!4\hat{\beta}^2_k\|\kappa\|^2\right)\frac2{\epsilon_1^p}e^{\mu t} \!+\!\frac{8C^2}{\mu\epsilon_1^p}\sum\limits^\infty_{k = 1}{\gamma}^2_k\sup\limits_{s\geq 0}\mathbb{E}\left[\|u(s)\|^{2p}\right]e^{\mu t}. \end{split} \end{align} (4.29)

    For the fifth term on the right-hand side of (4.26), applying integrating by parts and (4.29), we get

    \begin{align} \begin{split} &\ \ \ \ 2p(p-1)\sum\limits^\infty_{k = 1}\mathbb{E}\left[\mbox{Re}\int^t_0e^{\mu_1s}\|\nabla u(s)\|^{2(p-2)}\left|(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)),-\Delta u(s))\right|^2ds\right]\\&\leq2p(p-1)\sum\limits^\infty_{k = 1}\mathbb{E}\left[\int^t_0e^{\mu_1s}\|\nabla u(s)\|^{2p-2}\|\nabla(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))\|^2ds\right]\\&\leq2(p-1)\left[(2p-2)\epsilon_1^{\frac p{p-1}}\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2+4\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!4\hat{\beta}^2_k\|\kappa\|^2\right)\right.\\&\ \ \ \ \left.+2C^2(4p-4)\epsilon_1^{\frac p{p-1}}\sum\limits^\infty_{k = 1}{\gamma}^2_k+8p\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k\right] \mathbb{E}\left[\int^t_0e^{\mu s}\|\nabla u(s)\|^{2p}ds\right]\\&\ \ \ \ \ \!+\!\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2\!+\!4\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!4\hat{\beta}^2_k\|\kappa\|^2\right)\frac{4(p\!-\!1)}{\epsilon_1^p}e^{\mu t} \!+\!\frac{16C^2(p\!-\!1)}{\mu\epsilon_1^p}\sum\limits^\infty_{k = 1}{\gamma}^2_k\sup\limits_{s\geq 0}\mathbb{E}\left[\|u(s)\|^{2p}\right]e^{\mu t}. \end{split} \end{align} (4.30)

    By (4.26), (4.27), (4.29) and (4.30), we obtain

    \begin{align} \begin{split} &\ \ \ \ \mathbb{E}\left[\|\nabla u(t)\|^{2p}\right]+p\mathbb{E}\left[\int^t_0e^{\mu (s-t)}\|\nabla u(s)\|^{2(p-1)}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\right]\\& \leq\mathbb{E}\left[\|\nabla\varphi(0)\|^{2p}\right]e^{-\mu t}+\left[\mu-2p\lambda+4(\varrho-1)\epsilon_1^{\frac {p}{p-1}}+\frac{2p\hat{a}^{2}}{\mu\epsilon_1^p} +8C^2(p-1)(2p-1)\epsilon_1^{\frac p{p-1}}\sum\limits^\infty_{k = 1}{\gamma}^2_k\right.\\&\ \ \ \ \left.+8p(2p-1)\|\kappa\|^2_{L^\infty}\sum\limits^\infty_{k = 1}\hat{\gamma}^2_k+2(p-1)(2p-1)\epsilon_1^{\frac p{p-1}}\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2+4\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!4\hat{\beta}^2_k\|\kappa\|^2\right)\right]\\&\ \ \ \ \times\mathbb{E}\left[\int^t_0e^{\mu (s-t)}\|\nabla u(s)\|^{2p}ds\right]+\frac 1{\mu\epsilon_1^p}\|\hat{h}(x)\|^{2p} +\frac{2p\hat{a}^{2}}{\mu\epsilon_1^p}\sup\limits_{-\rho\leq s\leq 0}\mathbb{E}\left[\|\nabla \varphi(s)\|^{2p}\right]\\&\ \ \ \ +c\sup\limits_{s\geq 0}\mathbb{E}\left[\| u(s)\|^{2p}\right] +\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2+4\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!4\hat{\beta}^2_k\|\kappa\|^2\right)\frac{4p-2}{\epsilon_1^p}\\&\ \ \ \ +\frac{8C^2(2p-1)}{\mu\epsilon_1^p}\sum\limits^\infty_{k = 1}{\gamma}^2_k\sup\limits_{s\geq 0}\mathbb{E}\left[\|u(s)\|^{2p}\right]. \end{split} \end{align} (4.31)

    Then by (4.25) and (4.31), we deduce that for all t\geq 0 ,

    \begin{align} \begin{split} &\ \ \ \ \mathbb{E}\left[\|\nabla u(t)\|^{2p}\right]+p\mathbb{E}\left[\int^t_0e^{\mu (s-t)}\|\nabla u(s)\|^{2(p-1)}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\right]\\& \leq\mathbb{E}\left[\|\nabla\varphi(0)\|^{2p}\right]e^{-\mu t}+\frac 1{\mu\epsilon_1^p}\|\hat{h}(x)\|^{2p} +\frac{2p\hat{a}^{2}}{\mu\epsilon_1^p}\sup\limits_{-\rho\leq s\leq 0}\mathbb{E}\left[\|\nabla \varphi(s)\|^{2p}\right]\\&\ \ \ \ +c\sup\limits_{s\geq 0}\mathbb{E}\left[\| u(s)\|^{2p}\right] +\sum\limits^\infty_{k = 1}\left(\|\nabla\sigma_{1,k}\|^2+4\beta^2_k\|\nabla \kappa\|^2\!\!+\!\!4\hat{\beta}^2_k\|\kappa\|^2\right)\frac{4p-2}{\epsilon_1^p}\\&\ \ \ \ +\frac{8C^2(2p-1)}{\mu\epsilon_1^p}\sum\limits^\infty_{k = 1}{\gamma}^2_k\sup\limits_{s\geq 0}\mathbb{E}\left[\|u(s)\|^{2p}\right]. \end{split} \end{align} (4.32)

    Therefore, by (4.32) and Lemma 3.5, there exists a constant M_5 independent of \varphi such that

    \begin{align} \begin{split} &\ \ \ \ \sup\limits_{t\geq -\rho}\mathbb{E}\left[\|\nabla u(t)\|^{2p}\right]+\sup\limits_{t\geq 0}\mathbb{E}\left[\int^t_0e^{\mu (s-t)}\|\nabla u(s)\|^{2(p-1)}\|(-\Delta)^{\frac{\alpha+1}{2}}u(s)\|^2ds\right]\\& \leq M_5(\mathbb{E}\left[\|\varphi\|^{2p}_{C_V}\right]+1). \end{split} \end{align} (4.33)

    For convenience, we write A = (1+\text{i}\nu)(-\triangle)^\alpha+\lambda I . Then, similar to Theorem 6.5 in [31], the solution of (1.1) and (1.2) can be expressed as

    \begin{align} \begin{split} &\ u(t) = e^{-At}u(0)-\int_0^te^{-A(t-s)}(1+\text{i}\mu)|u(s)|^{2\beta}u(s)ds\\&\ \ \ +\int_0^te^{-A(t-s)}G(\cdot,u(s-\rho))ds+\sum\limits^\infty_{k = 1}\int_0^te^{-A(t-s)}(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))dW_k(s). \end{split} \end{align} (4.34)

    The next lemma is concerned with the H \ddot{o} lder continuity ofsolutions in time which is needed to prove the tightness of distributions of solutions.

    Lemma 4.4. Suppose (2.1)–(2.7) and (3.1) hold. If \varphi(s)\in L^{2p}(\Omega, C([-\rho, 0], V)) , then the solution u of (1.1) and (1.2) satisfies, for any t > r\geq 0 ,

    \begin{align} \begin{split} \mathbb{E}[\|u(t)-u(r)\|^{2p}]\leq M_6(|t-r|^{p}+|t-r|^{2p}), \end{split} \end{align} (4.35)

    where M_6 is a positive constant depending on \varphi , but independent of t and r .

    Proof. By (4.34), we get for t > r\geq 0 ,

    \begin{align} \begin{split} &\ u(t) = e^{-A(t-r)}u(r)-\int_r^te^{-A(t-s)}(1+\text{i}\mu)|u(s)|^{2\beta}u(s)ds\\&\ \ \ +\int_r^te^{-A(t-s)}G(\cdot,u(s-\rho))ds+\sum\limits^\infty_{k = 1}\int_r^te^{-A(t-s)}(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))dW_k(s). \end{split} \end{align} (4.36)

    Then we infer

    \begin{align} \begin{split} &\ \|u(t)-u(r)\|^{2p}\leq \frac{5^{2p}}{4}\left[\|(e^{-A(t-r)}-I)u(r)\|^{2p}+\|\int_r^te^{-A(t-s)}(1+\text{i}\mu)|u(s)|^{2\beta}u(s)ds\|^{2p}\right.\\&\ \ \ \left.+\|\int_r^te^{-A(t-s)}G(\cdot,u(s-\rho))ds\|^{2p}+\|\sum\limits^\infty_{k = 1}\int_r^te^{-A(t-s)}(\sigma_{1,k}+\kappa\sigma_{2,k}(u(s)))dW_k(s)\|^{2p}\right]. \end{split} \end{align} (4.37)

    Taking the expectation of (4.36), we have for all t > r\geq 0 ,

    \begin{align} \begin{split} &\ \mathbb{E}[\|u(t)-u(r)\|^{2p}]\leq \frac{5^{2p}}{4}\mathbb{E}[\|(e^{-A(t-r)}\!-\!I)u(r)\|^{2p}]\!+\!\frac{5^{2p}}{4}\mathbb{E}\left[\|\int_r^te^{-A(t-s)}(1\!+\!\text{i}\mu)|u(s)|^{2\beta}u(s)ds\|^{2p}\right]\\&\ \ \ \!+\!\frac{5^{2p}}{4}\mathbb{E}\left[\|\int_r^te^{-A(t-s)}G(\cdot,u(s\!-\!\rho))ds\|^{2p}\right] \!+\!\frac{5^{2p}}{4}\mathbb{E}\left[\|\sum\limits^\infty_{k = 1}\int_r^te^{-A(t-s)}(\sigma_{1,k}\!+\!\kappa\sigma_{2,k}(u(s)))dW_k(s)\|^{2p}\right]. \end{split} \end{align} (4.38)

    For the first term on the right-hand side of (4.38), by Theorem 1.4.3 in [32], we find that there exists a positive number C_0 depending on \varrho such that for all t > r\geq 0 ,

    \begin{align*} \frac{5^{2p}}{4}\mathbb{E}[\|(e^{-A(t-r)}\!-\!I)u(r)\|^{2p}]\leq C_0(t-r)^p\mathbb{E}[\|u(r)\|^{2p}_{C_V}]. \end{align*}

    Applying Lemmas 3.5 and 4.3, we obtain for all t > r\geq 0 ,

    \begin{align} \begin{split} \frac{5^{2p}}{4}\mathbb{E}[\|(e^{-A(t-r)}\!-\!I)u(r)\|^{2p}]\leq C_1(t-r)^p. \end{split} \end{align} (4.39)

    For the second term on the right-hand side of (4.38), by the contraction property of e^{-At} , we infer that for all t > r\geq 0 ,

    \begin{align*} &\ \ \ \mathbb{E}[\|\int_r^te^{-A(t-s)}(1\!+\!\text{i}\mu)|u(s)|^{2\beta}u(s)ds\|^{2p}] \leq(1\!+\!\mu^2)^p\mathbb{E}\left[\left(\int_r^t\||u(s)|^{2\beta+1}\|ds\right)^{2p}\right]\\& \leq (1\!+\!\mu^2)^p\mathbb{E}\left[\left(\int_r^t\||u(s)|^{2\beta+1}\|^{2p}ds\right)\right](t-r)^{2p-1}\\& \leq(1\!+\!\mu^2)^p\sup\limits_{s\geq 0}\mathbb{E}\left[\left(\|u(s)\|^{2(2\beta+1)}_{L^{2(2\beta+1)}}\right)^{p}\right](t-r)^{2p}. \end{align*}

    We deduce the estimate \sup\limits_{s\geq 0}\mathbb{E}\left[\left(\|u(s)\|^{2(2\beta+1)}_{L^{2(2\beta+1)}}\right)^{p}\right]\leq M'\left(\mathbb{E}[\|\varphi\|^2_{C_V}]+1\right) similarly to Lemma 3.5 together with Lemma 3.3 in [1]. Hence, the second term on the right-hand side of (4.38) can be estimated by

    \begin{align} \begin{split} \mathbb{E}\left[\|\int_r^te^{-A(t-s)}(1\!+\!\text{i}\mu)|u(s)|^{2\beta}u(s)ds\|^{2p}\right] \leq C_2(t-r)^{2p}. \end{split} \end{align} (4.40)

    For the third term on the right-hand side of (4.38), by the contraction property of e^{-At} and (2.1) and Lemma 3.5, we deduce that for all t > r\geq 0 ,

    \begin{align} \begin{split} &\ \ \ \ \frac{5^{2p}}{4}\mathbb{E}\left[\|\int_r^te^{-A(t-s)}G(\cdot,u(s\!-\!\rho))ds\|^{2p}\right] \leq \frac{5^{2p}}{4}\mathbb{E}\left[\left(\int_r^t\|G(\cdot,u(s\!-\!\rho))\|ds\right)^{2p}\right]\\& \leq\frac{5^{2p}}{4}\mathbb{E}\left[\left(\int_r^t(\|h\|+a\|u(s\!-\!\rho)\|)ds\right)^{2p}\right]\\& \leq\frac{5^{2p}}{4}\mathbb{E}\left[\left(\int_r^t(\|h\|+a\|u(s\!-\!\rho)\|)^{2p}ds\right)\right](t-r)^{2p-1}\\& \leq\frac{10^{2p}}{8}(t-r)^{2p-1}\int_r^t\left(\|h\|^{2p}+a^{2p}\mathbb{E}\left[\|u(s\!-\!\rho)\|^{2p}\right]\right)ds\\& \leq\frac{10^{2p}}{8}\left(\|h\|^{2p}+a^{2p}\sup\limits_{t\geq -\rho}\mathbb{E}\left[\|u(s)\|^{2p}\right]\right)(t-r)^{2p}\leq C_3(t-r)^{2p}. \end{split} \end{align} (4.41)

    For the forth term the right-hand side of (4.38), from the BDG inequality, the contraction property of e^{-At} , (2.6) H \ddot{o} lder's inequality and Lemma 3.5, we deduce

    \begin{align*} &\frac{5^{2p}}{4}\mathbb{E}\left[\|\sum\limits^\infty_{k = 1}\int_r^te^{-A(t-s)}(\sigma_{1,k}\!+\!\kappa\sigma_{2,k}(u(s)))dW_k(s)\|^{2p}\right]\\ &\leq\frac{5^{2p}}{4}C_4\mathbb{E}\left[\left(\int_r^t\sum\limits^\infty_{k = 1}\|e^{-A(t-s)}(\sigma_{1,k}\!+\!\kappa\sigma_{2,k}(u(s)))\|^2ds\right)^{p}\right]\\ &\leq\frac{5^{2p}}{4}C_4\mathbb{E}\left[\left(\int_r^t\sum\limits^\infty_{k = 1}2(\|\sigma_{1,k}\|^2+\|\kappa\sigma_{2,k}(u(s))\|^2)ds\right)^{p}\right]\\ &\leq\frac{5^{2p}}{4}C_4\mathbb{E}\left[\left(\int_r^t\sum\limits^\infty_{k = 1}2 (\|\sigma_{1,k}\|^2+2\|\kappa\|^2\beta^2_k+2\|\kappa\|^2_{L^\infty}\gamma^2_k\|(u(s))\|^2)ds\right)^{p}\right] \\ &\leq\frac{5^{2p}}{2}C_4\mathbb{E}\left[\left(2\sum\limits^\infty_{k = 1}(\|\sigma_{1,k}\|^2+2\|\kappa\|^2\beta^2_k)(t-r) +4\sum\limits^\infty_{k = 1}\|\kappa\|^2_{L^\infty}\gamma^2_k\int_r^t\|u(s)\|^2ds\right)^{p}\right]\\& \leq\frac{10^{2p}}{8}C_4\left(\sum\limits^\infty_{k = 1}(\|\sigma_{1,k}\|^2+2\|\kappa\|^2\beta^2_k\right)^p(t-r)^p +\frac{10^{2p}}{8}C_4\left(2\sum\limits^\infty_{k = 1}\|\kappa\|^2_{L^\infty}\gamma^2_k\right)^p\mathbb{E}\left[\left(\int_r^t\|u(s)\|^2ds\right)^{p}\right]\\& \leq\frac{10^{2p}}{8}C_4\left(\sum\limits^\infty_{k = 1}(\|\sigma_{1,k}\|^2+2\|\kappa\|^2\beta^2_k\right)^p(t-r)^p\\&\ \ \ \ +\frac{10^{2p}}{8}C_4\left(2\sum\limits^\infty_{k = 1}\|\kappa\|^2_{L^\infty}\gamma^2_k\right)^p(t-r)^{p-1} \int_r^t\mathbb{E}\left[\|u(s)\|^{2p}\right]ds\\& \leq\frac{10^{2p}}{8}C_4\left(\sum\limits^\infty_{k = 1}(\|\sigma_{1,k}\|^2+2\|\kappa\|^2\beta^2_k\right)^p(t-r)^p\\&\ \ \ \ +\frac{10^{2p}}{8}C_4\left(2\sum\limits^\infty_{k = 1}\|\kappa\|^2_{L^\infty}\gamma^2_k\right)^p(t-r)^{p} \sup\limits_{s\geq 0}\mathbb{E}\left[\|u(s)\|^{2p}\right]\\&\leq C_5(t-r)^p. \end{align*} (4.42)

    Therefore, from (4.38)–(4.42), we obtain there exists C_6 > 0 independent of t and r , such that for all t > r\geq 0 ,

    \mathbb{E}[\|u(t)-u(r)\|^{2p}]\leq C_6(|t-r|^{p}+|t-r|^{2p}).

    The proof is complete.

    In this section, we first recall the definition of invariant measure and transition operator. Then we construct a compact subset of C([-\rho, 0];H) in order to prove the tightness of the sequence of invariant measure m_k on C([-\rho, 0];H) .

    Recall that for any initial time t_0 and every \mathcal {F}_{t_0} -measurable function \varphi(s)\in L^2(\Omega, C([-\rho, 0], H)) , problems (1.1) and (1.2) has a unique solution u(t; t_0, \varphi) for t\in[t_0-\rho, \infty) . For convenience, given t\geq t_0 and \mathcal {F}_{t_0} -measurable function \varphi(s)\in L^2(\Omega, C([-\rho, 0], H)) , the segment of u(t; t_0, \varphi) on [t-\rho, t] is written as

    u_t(t_0,\varphi)(s) = u(t+s;t_0,\varphi)\ for\ every\ s\in[-\rho,0].

    Then u_t(t_0, \varphi)\in L^2(\Omega, C([-\rho, 0], H)) for all t\geq t_0 . We introduce the transition operator for (1.1). If \phi(s):C([-\rho, 0], H)\rightarrow \mathbb{R} is a bounded Borel function, then for initial time r with 0\leq r\leq t and \varphi(s)\in C([-\rho, 0], H) , we write

    (p_{r,t}\phi)(\varphi) = \mathbb{E}[\phi(u_t(r,\varphi))].

    Particularly, for \Gamma\in \mathcal {B}(C([-\rho, 0], H)) , 0\leq r\leq t and \varphi\in C([-\rho, 0], H) , we have

    p(r,\varphi;t,\Gamma) = (p_{r,t}1_{\Gamma})(\varphi) = P\{\omega\in\Omega|u_t(r,\varphi)\in\Gamma\},

    where 1_{\Gamma} is the characteristic function of \Gamma . Then p(r, \varphi; t, \cdot) is the distribution of u_t(0, \varphi) in C([-\rho, 0], H) . In the following context, we will write p_{0, t} as p_t .

    Recall that a probability measure \mathscr{M} on C([-\rho, 0], H) is called an invariant measure, if for all t\geq0 and every bounded and continuous function \phi:C([-\rho, 0];H)\rightarrow \mathbb{R},

    \int_{C([-\rho,0];H)}(p_t\phi)(\varphi)d\mathscr{M}(\varphi) = \int_{C([-\rho,0];H)}\phi(\varphi)d\mathscr{M}(\varphi),\ \ for\ all\ t\geq0.

    According to [33], we infer that the transition operator \{p_{r, t}\}_{0\leq r\leq t} has the following properties.

    Lemma 5.1. Suppose (2.1)–(2.7) and (4.1)–(4.3) hold. One has

    (a) The family \{p_{r, t}\}_{0\leq r\leq t} is Feller; that is, if \phi:C([-\rho, 0], H)\rightarrow \mathbb{R} is bounded and continuous, then for any 0\leq r\leq t , the function p_{r, t}\phi:C([-\rho, 0], H)\rightarrow \mathbb{R} is also bounded and continuous.

    (b) The family \{p_{r, t}\}_{0\leq r\leq t} is homogeneous (in time); that is, for any 0\leq r\leq t ,

    p(r,\varphi;t,\cdot) = p(0,\varphi;t-r,\cdot), \forall\varphi\in C([-\rho,0],H).

    (c) Given r\geq0 and \varphi\in C([-\rho, 0], H) , the process \{u_t(r, \varphi)\}_{t\geq r} is a C([-\rho, 0], H) -valued Markov process. Consequently, if \phi:C([-\rho, 0], H)\rightarrow \mathbb{R} is a bounded Borel function, then for any 0\leq s\leq r\leq t , P -almost surely,

    (p_{s,t}\phi)(\varphi) = (p_{s,r}(p_{r,t}\phi))(\varphi), \forall\varphi\in C([-\rho,0],H),

    and the Chapman-Kolmogorov equation is valid:

    p(s,\varphi;t,\Gamma) = \int_{C([-\rho,0],H)}p(s,\varphi;r,dy)p(r,y;t,\Gamma),

    for any \varphi\in C([-\rho, 0], H) and \Gamma\in\mathcal {B}(C([-\rho, 0], H)).

    Now, we establish the existence of invariant measures of problems (1.1) and (1.2).

    Theorem 5.2. Suppose (2.1)–(2.7) and (4.1)–(4.3) hold. Then (1.1) and (1.2) processes an invariant measure on C([-\rho, 0], H) .

    Proof. We employ Krylov-Bogolyubov's method to the solution u(t, 0, 0) of problems (1.1) and (1.2), where the initial condition \varphi\equiv0 at the initial time 0. Because of this particular \varphi\in C([-\rho, 0], V)\subseteq C([-\rho, 0], H) , we know that all results obtained in the previous Sections 3 and 4 are valid. For simplicity, the solution u(t, 0, 0) is written as u(t) and the segment u_t(0, 0) as u_t . For k\in\mathbb{N}^+ , we set

    \begin{align} \mathscr{M}_k = \frac1k\int^{k+\rho}_{\rho} p(0,0;t,\cdot)dt. \end{align} (5.1)

    Step 1. We prove the tightness of \{\mathscr{M}_k\}_{k = 1}^\infty in C([-\rho, 0], H) . Applying Lemmas 3.2 and 4.2, we get that there exists C_1 > 0 such that for all t\geq \rho ,

    \begin{align} \mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\|u_t(s)\|^{2}_{V}\right]\leq C_1. \end{align} (5.2)

    By (5.2) and Chebyshev's inequality, we have that for all t\geq \rho ,

    \begin{align*} P\left(\left\{\sup\limits_{-\rho\leq s\leq 0}\|u_t(s)\|_{V}\geq R\right\}\right)\leq\frac 1{R^2}\mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\|u_t(s)\|^2_{V}\right]\leq\frac{C_1}{R^2}\rightarrow 0\ \ as\ \ R\rightarrow \infty, \end{align*}

    and hence for every \varepsilon > 0 , there exists R_1 = R_1(\varepsilon) > 0 such that for all t\geq \rho ,

    \begin{align} P\left(\left\{\sup\limits_{-\rho\leq s\leq 0}\|u_t(s)\|_{V}\geq R_1\right\}\right)\leq \frac 13\varepsilon. \end{align} (5.3)

    By Lemma 4.4, we get that there exists C_2 > 0 such that for all t\geq \rho and r, s\in[-\rho, 0] ,

    \mathbb{E}[\|u_t(r)-u_t(s)\|^{2p}]\leq C_2(1+|r-s|^{p})|r-s|^{p},

    and hence for all t\geq \rho and r, s\in[-\rho, 0] ,

    \begin{align} \mathbb{E}[\|u_t(r)-u_t(s)\|^{2p}]\leq C_2(1+\rho^{p})|r-s|^{p}. \end{align} (5.4)

    Since p\geq 2 , applying (5.4) and the usual technique of dyadic division, we obtain that there exists R_2 = R_2(\varepsilon) > 0 such that for all t\geq \rho ,

    \begin{align} P\left(\left\{\sup\limits_{-\rho\leq s\leq r\leq 0}\frac{\|u_t(r)-u_t(s)\|}{|r-s|^{\frac{p-1}{4p}}} \leq R_2\right\}\right)\geq1-\frac 13\varepsilon. \end{align} (5.5)

    By Lemma 3.4, we get that for given \varepsilon > 0 and m\in\mathbb{N} , there exists an integer n_m = n_m(\varepsilon, m)\geq 1 such that for all t\geq \rho ,

    \mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\int_{|x|\geq n_m}|u(t+s,x)|^{2}dx\right]\leq \frac{\varepsilon}{2^{2m+2}},

    which implies that for all t\geq \rho and m\in\mathbb{N} ,

    \begin{align} P\left(\left\{\sup\limits_{-\rho\leq s\leq 0}\int_{|x|\geq n_m}|u(t+s,x)|^{2}dx \geq \frac 1{2^m}\right\}\right)\leq 2^m\mathbb{E}\left[\sup\limits_{-\rho\leq s\leq 0}\int_{|x|\geq n_m}|u(t+s,x)|^{2}dx\right]\leq \frac{\varepsilon}{2^{m+2}}. \end{align} (5.6)

    By (5.6), we infer that for all t\geq \rho ,

    P\left(\bigcup\limits_{m = 1}^{\infty}\left\{\sup\limits_{-\rho\leq s\leq 0}\int_{|x|\geq n_m}|u(t+s,x)|^{2}dx \geq \frac 1{2^m}\right\}\right)\leq \sum\limits^\infty_{k = 1}\frac{\varepsilon}{2^{m+2}}\leq \frac14\varepsilon,

    and hence for all t\geq \rho ,

    \begin{align} P\left(\left\{\sup\limits_{-\rho\leq s\leq 0}\int_{|x|\geq n_m}|u(t+s,x)|^{2}dx \leq \frac 1{2^m}\ for \ all\ m\in\mathbb{N}\right\}\right)\geq 1-\frac14\varepsilon. \end{align} (5.7)

    Let

    \begin{align} \mathcal {M}_{1,\varepsilon} = \left\{\zeta:[-\rho,0]\rightarrow V,\sup\limits_{-\rho\leq s\leq 0}\|\zeta(s)\|_{V}\leq R_1(\varepsilon)\right\}, \end{align} (5.8)
    \begin{align} \mathcal {M}_{2,\varepsilon} = \left\{\zeta\in C([-\rho,0],H):\sup\limits_{-\rho\leq s\leq r\leq 0}\frac{\|\zeta(r)-\zeta(s)\|}{|r-s|^{\frac{\varrho-1}{4\varrho}}} \leq R_2(\varepsilon)\right\}, \end{align} (5.9)
    \begin{align} \mathcal {M}_{3,\varepsilon} = \left\{\zeta\in C([-\rho,0],H):\sup\limits_{-\rho\leq s\leq 0}\int_{|x|\geq n_m}|\zeta(s,x)|^{2}dx \leq \frac 1{2^m}\ for \ all\ m\in\mathbb{N}\right\}, \end{align} (5.10)

    and

    \begin{align} \mathcal {M}_{\varepsilon} = \mathcal {M}_{1,\varepsilon}\bigcap\mathcal {M}_{2,\varepsilon}\bigcap\mathcal {M}_{3,\varepsilon}. \end{align} (5.11)

    From (5.3), (5.5) and (5.7)–(5.11), we obtain that for all t\geq \rho ,

    \begin{align} P\left(u_t\in\mathcal {M}_{\varepsilon}\right) > 1-\varepsilon. \end{align} (5.12)

    By (5.1) and (5.12), we deduce that for all k\in\mathbb{N} ,

    \begin{align} \mathscr{M}_k\left(\mathcal {M}_{\varepsilon}\right) > 1-\varepsilon. \end{align} (5.13)

    Next, we prove the set \mathcal {M}_{\varepsilon} is precompact in C([-\rho, 0], H) . First, we prove for every s\in[-\rho, 0] the set \{\zeta(s):\zeta\in\mathcal {M}_{\varepsilon}\} is a precompact subset of H . By (5.8) and (5.11), we obtain that for every s\in[-\rho, 0] , the set \{\zeta(s):\zeta\in\mathcal {M}_{\varepsilon}\} is bounded in V . Let \mathcal {Q}_{m_0} = \left\{x\in\mathbb{R}^n:|x| < n_{m_0}\right\} . Then we get that the set \{\zeta(s)|_{\mathcal {Q}_{m_0}}:\zeta\in\mathcal {M}_{\varepsilon}\} is bounded in H^1(\mathcal {Q}_{m_0}) and hence precompact in L^2(\mathcal {Q}_{m_0}) due to compactness of the embedding H^1(\mathcal {Q}_{m_0})\hookrightarrow L^2(\mathcal {Q}_{m_0}) . This implies that the set \{\zeta(s)|_{\mathcal {Q}_{m_0}}:\zeta\in\mathcal {M}_{\varepsilon}\} has a finite open cover of balls with radius \frac 12\delta in L^2(\mathcal {Q}_{m_0}) . Note that for every \delta > 0 , there exists m_0 = m_0(\delta)\in\mathbb{N} such that for all \zeta\in\mathcal {M}_{\varepsilon} ,

    \begin{align} \int_{|x|\geq n_{m_0}}|\zeta(s,x)|^{2}dx \leq \frac 1{2^{m_0}} < \frac{\delta^2}8. \end{align} (5.14)

    Hence, by (5.14), the set \{\zeta(s):\zeta\in\mathcal {M}_{\varepsilon}\} has a finite open cover of balls with radius \frac 12\delta in L^2(\mathbb{R}^n) . Since \delta > 0 is arbitrary, we obtain that the set \{\zeta(s):\zeta\in\mathcal {M}_{\varepsilon}\} is percompact in H . Then from (5.9) and (5.11), we obtain that \mathcal{M}_{\varepsilon} is equicontinuous in C([-\rho, 0], H) . Therefore, by the Ascoli-Arzel \grave{a} theorem we deduce that \mathcal {M}_{\varepsilon} is precompact in C([-\rho, 0], H) , which along with (5.13) shows that \{m_k\}_{k = 1}^\infty is tight on C([-\rho, 0], H) .

    Step 2. We prove the existence of invariant measures of problems (1.1) and (1.2). Since the sequence \{\mathscr{M}_k\}_{k = 1}^\infty is tight on C([-\rho, 0];H) , there exists a probability measure m on C([-\rho, 0];H) , we take a subsequence of \{\mathscr{M}_k\} (not rebel) such that \mathscr{M}_k\rightarrow m, \ \ as\ \ k\rightarrow \infty. In the following, we prove \mathscr{M} is an invariant measure of (1.1) and (1.2). Applying (5.1) and the Chapman-Kolmogorov equation, we obtain that for every t\geq0 and every \phi:C([-\rho, 0];H)\rightarrow \mathbb{R} ,

    \begin{align*} &\ \ \ \ \int_{C([-\rho,0];H)}\phi(v)d\mathscr{M}(v) = \lim\limits_{k\rightarrow \infty}\frac1k\int^{k+\rho}_\rho\left(\int_{C([-\rho,0];H)}\phi(v)p(0,0;s,dv)\right)ds\\& = \lim\limits_{k\rightarrow \infty}\frac1k\int^{k+\rho-t}_{\rho-t}\left(\int_{C([-\rho,0];H)}\phi(v)p(0,0;s+t,dv)\right)ds\\& = \lim\limits_{k\rightarrow \infty}\frac1k\int^{k+\rho}_\rho\left(\int_{C([-\rho,0];H)}\phi(v)p(0,0;s+t,dv)\right)ds\\& = \lim\limits_{k\rightarrow \infty}\frac1k\int^{k+\rho}_\rho\left(\int_{C([-\rho,0];H)}\left(\int_{C([-\rho,0];H)}\phi(v)p(s,\varphi;s+t,dv)\right)p(0,0;s,d\varphi)\right)ds\\& = \lim\limits_{k\rightarrow \infty}\frac1k\int^{k+\rho}_\rho\left(\int_{C([-\rho,0];H)}\left(\int_{C([-\rho,0];H)}\phi(v)p(0,\varphi;t,dv)\right)p(0,0;s,d\varphi)\right)ds\\& = \int_{C([-\rho,0];H)}\left(\int_{C([-\rho,0];H)}\phi(v)p(0,\varphi;t,dv)\right)d\mathscr{M}(\varphi)\\& = \int_{C([-\rho,0];H)}(p_{0,t}\phi)(\varphi)d\mathscr{M}(\varphi), \end{align*}

    which completes the proof.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors are grateful to the anonymous referees whose suggestions have in our opinion, greatly improved the paper. This work is partially supported by the NSF of Shandong Province (No. ZR 2021MA055) and USA Simons Foundation (No. 628308).

    The authors declare there is no conflict of interest.



    [1] M. L. Craig, C. A. Jackel, P. B. Gerrits, Selection of medical students and the maldistribution of the medical workforce in Queensland, Australia, Aust. J. Rural Health, 1 (1993), 17–21. https://doi.org/10.1111/j.1440-1584.1993.tb00075.x doi: 10.1111/j.1440-1584.1993.tb00075.x
    [2] J. A. Osheroff, J. M. Teich, B. Middleton, E. B Steen, A. Wright, D. E. Detmer, A roadmap for national action on clinical decision support, J. Am. Med. Inf. Assoc., 14 (2007), 141–145. https://doi.org/10.1197/jamia.M2334 doi: 10.1197/jamia.M2334
    [3] D. Demner-Fushman, W. W. Chapman, C. J. McDonald, What can natural language processing do for clinical decision support? J. Biomed. Inf., 42 (2009), 760–772. https://doi.org/10.1016/j.jbi.2009.08.007 doi: 10.1016/j.jbi.2009.08.007
    [4] A. N. Kho, J. A. Pacheco, P. L. Peissig, L. Rasmussen, K. M. Newton, N. Weston, et al., Electronic medical records for genetic research: results of the emerge consortium, Sci. Transl. Med., 3 (2011) 79re1. https://doi.org/10.1126/scitranslmed.3001807 doi: 10.1126/scitranslmed.3001807
    [5] R. C. Wasserman, Electronic medical recor (EMRs), epidemiology, and epistemology: reflections on EMRs and future pediatric clinical research, Acad. Pediatr., 11 (2011), 280–287. https://doi.org/10.1016/j.acap.2011.02.007 doi: 10.1016/j.acap.2011.02.007
    [6] A. Rajkomar, J. Dean, I. Kohane, Machine learning in medicine, N. Engl. J. Med., 2019. https://doi.org/10.1056/NEJMra1814259 doi: 10.1056/NEJMra1814259
    [7] T. Ma, A. Zhang, AffinityNet: semi-supervised few-shot learning for disease type prediction, in Proceedings of the AAAI Conference on Artificial Intelligence, 33 (2019), 1069–1076. https://doi.org/10.1609/aaai.v33i01.33011069
    [8] Y. Wang, Q. Yao, J. T. Kwok, L. M. Ni, Generalizing from a few examples: A survey on few-shot learning, preprint, arXiv: 1904.05046.
    [9] M. E. J. Newman, The structure and function of complex networks, SIAM Rev., 45 (2003), 167–256. https://doi.org/10.1137/S003614450342480 doi: 10.1137/S003614450342480
    [10] A. L. Barabási, N. Gulbahce, J. Loscalzo, Network medicine: A network-based approach to human disease, Nat. Rev. Genet., 12 (2011), 56–68. https://doi.org/10.1038/nrg2918 doi: 10.1038/nrg2918
    [11] K. I. Goh, M. E. Cusick, D. Valle, B. Childs, M. Vidal, A. L. Barabási, The human disease network, Proc. Natl. Acad. Sci., 104 (2007), 8685–8690. https://doi.org/10.1073/pnas.0701361104 doi: 10.1073/pnas.0701361104
    [12] C. A. Hidalgo, N. Blumm, A. L. Barabási, N. A. Christakis, A dynamic network approach for the study of human phenotypes, PLoS Comput. Biol., 5 (2009), e1000353. https://doi.org/10.1371/journal.pcbi.1000353 doi: 10.1371/journal.pcbi.1000353
    [13] X. Z. Zhou, J. Menche, A. L. Barabási, A. Sharma, Human symptoms–disease network, Nat. Commun., 5 (2014), 4212. https://doi.org/10.1038/ncomms5212 doi: 10.1038/ncomms5212
    [14] C. Zhao, J. Jiang, Z. Xu, Y. Guan, A study of EMR-based medical knowledge network and its applications, Comput. Methods Programs Biomed., 143 (2017), 13–23. https://doi.org/10.1016/j.cmpb.2017.02.016 doi: 10.1016/j.cmpb.2017.02.016
    [15] R. Alizadehsani, J. Habibi, M. J. Hosseini, H. Mashayekhi, R. Boghrati, A. Ghandeharioun, et al., A data mining approach for diagnosis of coronary artery disease, Comput. Methods Programs Biomed., 111 (2013), 52–61. https://doi.org/10.1016/j.cmpb.2013.03.004 doi: 10.1016/j.cmpb.2013.03.004
    [16] H. H. Rau, C. Y. Hsu, Y. A. Lin, S. Atique, A. Fuad, L. M. Wei, et al., Development of a web-based liver cancer prediction model for type II diabetes patients by using an artificial neural network, Comput. Methods Programs Biomed., 125 (2016), 58–65. https://doi.org/10.1016/j.cmpb.2015.11.009 doi: 10.1016/j.cmpb.2015.11.009
    [17] E. Choi, M. T. Bahadori, L. Song, W. F. Stewart, J. Sun, Gram: Graph-based attention model for healthcare representation learning, preprint, arXiv: 1611.07012.
    [18] E. Choi, M. T. Bahadori, J. Sun, J. Kulas, A. Schuetz, W. F. Stewart, Retain: An interpretable predictive model for healthcare using reverse time attention mechanism, in Proceedings of the 30th International Conference on Neural Information Processing Systems, (2016), 3512–3520. Available from: https://dl.acm.org/doi/10.5555/3157382.3157490.
    [19] Z. C. Lipton, D. C. Kale, C. Elkan, R. Wetzell, Learning to diagnose with LSTM recurrent neural networks, preprint, arXiv: 1511.03677.
    [20] F. Ma, R. Chitta, J. Zhou, Q. You, T. Sun, J. Gao, Dipole: Diagnosis prediction in healthcare via attention-based bidirectional recurrent neural networks, in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2017), 1903–1911. https://doi.org/10.1145/3097983.3098088
    [21] E. Choi, C. Xiao, W. F. Stewart, J. Sun, Mime: Multilevel medical embedding of electronic health records for predictive healthcare, in Proceedings of the 32nd International Conference on Neural Information Processing Systems, (2018), 4547–4557. Available from: https://dl.acm.org/doi/abs/10.5555/3327345.3327366.
    [22] J. Jiang, X. Li, C. Zhao, Y. Guan, Q. Yu, Learning and inference in knowledge-based probabilistic model for medical diagnosis, Knowledge-Based Syst., 138 (2017), 58–68. https://doi.org/10.1016/j.knosys.2017.09.030 doi: 10.1016/j.knosys.2017.09.030
    [23] D. E. Heckerman, E. J. Horvitz, B. N. Nathwani, Toward normative expert systems: Part I the pathfinder project, Methods Inf. Med., 31 (1991), 90–105. https://doi.org/10.1055/s-0038-1634867 doi: 10.1055/s-0038-1634867
    [24] J. G. Klann, P. Szolovits, S. M. Downs, G. Schadow, Decision support from local data: creating adaptive order menus from past clinician behavior, J. Biomed. Inf., 48 (2014), 84–93. https://doi.org/10.1016/j.jbi.2013.12.005 doi: 10.1016/j.jbi.2013.12.005
    [25] M. J. Flores, A. E. Nicholson, A. Brunskill, K. B. Korb, S. Mascaro, Incorporating expert knowledge when learning bayesian network structure: a medical case study, Artif. Intell. Med., 53 (2011), 181–204. https://doi.org/10.1016/j.artmed.2011.08.004 doi: 10.1016/j.artmed.2011.08.004
    [26] D. M. Chickering, D. Heckerman, C. Meek, Large-sample learning of bayesian networks is np-hard, J. Mach. Learn. Res., 5 (2004), 1287–1330.
    [27] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, preprint, arXiv: 1609.02907.
    [28] C. Y. Wee, C. Liu, A. Lee, J. S. Poh, H. Ji, A. Qiu, et al., Cortical graph neural network for ad and mci diagnosis and transfer learning across populations, NeuroImage: Clin., 23 (2019), 101929. https://doi.org/10.1016/j.nicl.2019.101929 doi: 10.1016/j.nicl.2019.101929
    [29] R. C. Petersen, P. Aisen, L. A. Beckett, M. Donohue, A. Gamst, D. J. Harvey, et al., Alzheimer's disease neuroimaging initiative (adni): clinical characterization, Neurology, 74 (2010), 201–209. https://doi.org/10.1212/WNL.0b013e3181cb3e25 doi: 10.1212/WNL.0b013e3181cb3e25
    [30] D. Ahmedt-Aristizabal, M. A. Armin, S. Denman, C. Fookes, L. Perersson, Graph-based deep learning for medical diagnosis and analysis: past, present and future, Sensors, 21 (2021), 4758. https://doi.org/10.3390/s21144758 doi: 10.3390/s21144758
    [31] M. Bastian, S. Heymann, M. Jacomy, Gephi: An open source software for exploring and manipulating networks, in Proceedings of the International AAAI Conference on Web and Social Media, 3 (2009), 361–362. Available from: https: //ojs.aaai.org/index.php/ICWSM/article/view/13937.
    [32] S. R. Broadbentand J. M. Hammersley, Percolation processes: I. Crystals and mazes, Math. Proc. Cambridge Philos. Soc., 53 (1957), 629–641. https://doi.org/10.1017/S0305004100032680 doi: 10.1017/S0305004100032680
    [33] J. M. Hammersley, Percolation processes: II. The connective constant, Math. Proc. Cambridge Philos. Soc., 53 (1957), 642–645. https://doi.org/10.1017/S0305004100032692 doi: 10.1017/S0305004100032692
    [34] G. Grimmett, Percolation, Springer, New York, 1989. https://doi.org/10.1007/978-1-4757-4208-4
    [35] E. Choi, M. T. Bahadori, A. Schuetz, W. F. Stewart, J. Sun, Doctor AI: predicting clinical events via recurrent neural networks, in Proceedings of the 1st machine learning for healthcare conference, 56 (2016), 301–318. Available from: http://proceedings.mlr.press/v56/Choi16.pdf.
    [36] 2010 i2b2/va challenge evaluation assertion annotation guidelines. Available from: https://www.i2b2.org/NLP/Relations/assets/Assertion%20Annotation%20Guideline.pdf.
    [37] 2010 i2b2/va challenge evaluation concept annotation guidelines. Available from: https://www.i2b2.org/NLP/Relations/assets/Concept%20Annotation%20Guideline.pdf.
    [38] J. Yang, Y. Guan, B. He, C. Qu, Q. Yu, Y. Liu, et al., Annotation scheme and corpus construction for named entities and entity relations on Chinese electronic medical records, J. Software, 27 (2016), 2725–2746. https://doi.org/10.13328/j.cnki.jos.004880 doi: 10.13328/j.cnki.jos.004880
    [39] B. He, B. Dong, Y. Guan, J. Yang, Z. Jiang, Q. Yu, et al., Building a comprehensive syntactic and semantic corpus of Chinese clinical texts, J. Biomed. Inf., 69 (2017), 203–217. https://doi.org/10.1016/j.jbi.2017.04.006 doi: 10.1016/j.jbi.2017.04.006
    [40] E. Choi, A. Schuetz, W. F. Stewart, J. Sun, Using recurrent neural network models for early detection of heart failure onset, J. Am. Med. Inf. Assoc., 24 (2017), 361–370. https://doi.org/10.1093/jamia/ocw112 doi: 10.1093/jamia/ocw112
    [41] P. Nguyen, T. Tran, N. Wickramasinghe, S. Venkatesh, Deepr: a convolutional net for medical records, IEEE J. Biomed. Health Inf., 21 (2017), 22–30. https://doi.org/10.1109/JBHI.2016.2633963 doi: 10.1109/JBHI.2016.2633963
    [42] C. Zhao, J. Jiang, Y. Guan, X. Guo, B. He, EMR-based medical knowledge representation and inference via Markov random fields and distributed representation learning, Artif. Intell. Med., 87 (2018), 49–59. https://doi.org/10.1016/j.artmed.2018.03.005 doi: 10.1016/j.artmed.2018.03.005
    [43] R. Miotto, L. Li, B. A. Kidd, J. T. Dudley, Deep Patient: An unsupervised representation to predict the future of patients from the electronic health records, Sci. Rep., 6 (2016), 26094. https://doi.org/10.1038/srep26094 doi: 10.1038/srep26094
    [44] C. Buckley, E. M. Voorhees, Retrieval evaluation with incomplete information, in Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, (2004), 25–32. https://doi.org/10.1145/1008992.1009000
    [45] C. Buckley, E. M. Voorhees, Evaluating evaluation measure stability, ACM SIGIR Forum, 51 (2017), 235–242. https://doi.org/10.1145/3130348.3130373 doi: 10.1145/3130348.3130373
    [46] M. D. Smucker, J. Allan, B. Carterette, A comparison of statistical significance tests for information retrieval evaluation, in Proceedings of the 16th ACM Conference on Conference on Information and Knowledge Management, (2007), 623–632. https://doi.org/10.1145/1321440.1321528
  • This article has been cited by:

    1. Fabio Di Pietrantonio, Domenico Cannatà, Massimiliano Benetti, 2019, 9780128144015, 181, 10.1016/B978-0-12-814401-5.00008-6
    2. Christina G. Siontorou, 2020, Chapter 25-2, 978-3-319-47405-2, 1, 10.1007/978-3-319-47405-2_25-2
    3. Christina G. Siontorou, Georgia-Paraskevi Nikoleli, Marianna-Thalia Nikolelis, Dimitrios P. Nikolelis, 2019, 9780128157435, 375, 10.1016/B978-0-12-815743-5.00015-9
    4. Christina G. Siontorou, 2019, Chapter 25-1, 978-3-319-47405-2, 1, 10.1007/978-3-319-47405-2_25-1
    5. Christina G. Siontorou, 2022, Chapter 25, 978-3-030-23216-0, 707, 10.1007/978-3-030-23217-7_25
  • Reader Comments
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
通讯作者: 陈斌, bchen63@163.com
  • 1. 

    沈阳化工大学材料科学与工程学院 沈阳 110142

  1. 本站搜索
  2. 百度学术搜索
  3. 万方数据库搜索
  4. CNKI搜索

Metrics

Article views(3161) PDF downloads(85) Cited by(3)

Other Articles By Authors

/

DownLoad:  Full-Size Img  PowerPoint
Return
Return

Catalog