

    Homogenization theory seeks to establish the macroscopic behavior of a system that is microscopically heterogeneous, in order to describe some characteristics of the heterogeneous medium [1]. In recent years many papers have addressed the problem of obtaining an effective behavior as the scale parameter $\varepsilon\to 0^{+}$. Nguetseng [2] and Allaire [3] first proposed two-scale convergence, and in 1997 Holmbom [4] used two-scale convergence to prove a homogenization result for parabolic equations whose main operator depends on the time $t$.
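    As a brief reminder (stated here only for orientation, not as part of the authors' argument), a bounded sequence $u_{\varepsilon}\subset L^{2}(\Omega)$ is said to two-scale converge to $u_{0}(x,y)\in L^{2}(\Omega\times Y)$ if, for every smooth test function $\varphi(x,y)$ that is $Y$-periodic in $y$,
    $$\int_{\Omega}u_{\varepsilon}(x)\,\varphi\Big(x,\frac{x}{\varepsilon}\Big)dx\;\longrightarrow\;\int_{\Omega}\int_{Y}u_{0}(x,y)\,\varphi(x,y)\,dy\,dx\quad\text{as }\varepsilon\to 0^{+}.$$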

    Recently, Akagi and Oka [5] considered a space-time homogenization problem for nonlinear diffusion equations with periodically oscillating space and time coefficients:

    \begin{eqnarray} \left\{\begin{array}{ll} \partial_{t}u_{\varepsilon}(x,t)-\mathrm{div}\Big(A\big(\frac{x}{\varepsilon},\frac{t}{\varepsilon^{r}}\big)\nabla(|u_{\varepsilon}|^{m-2}u_{\varepsilon})\Big)(x,t) = f(x,t), & (x,t)\in\Omega\times I,\\ u_{\varepsilon}(x,0) = u_{0}(x), & x\in\Omega,\\ |u_{\varepsilon}|^{m-2}u_{\varepsilon}(x,t) = 0, & (x,t)\in\partial\Omega\times I, \end{array}\right. \end{eqnarray} (1.1)

    Their main results are based on the two-scale convergence theory for space-time homogenization.

    Akagi and Oka [6] also considered space-time homogenization problems for porous medium equations with nonnegative initial data. These are important developments of the homogenization of local second-order parabolic equations where the operator depends on the time t. Geng and Shen [7] and Niu and Xu [8] discussed the convergence rates in periodic homogenization of a second-order parabolic system depending on time t. There are many qualitative and quantitative studies on the homogenization theory of parabolic equations with periodic and stationary coefficients [9,10,11,12].

    The homogenization theory for nonlocal operators is based on regular convolution kernels and on the singular kernels corresponding to the fractional Laplace equation. Piatnitski and Zhizhina [13] introduced the scaling operator:

    \begin{eqnarray} L_{\varepsilon}u(x) = \int_{\mathbb{R}^{d}}\frac{1}{\varepsilon^{d+2}}J\Big(\frac{x-y}{\varepsilon}\Big)\lambda\Big(\frac{x}{\varepsilon}\Big)\mu\Big(\frac{y}{\varepsilon}\Big)\big(u(y)-u(x)\big)dy, \end{eqnarray} (1.2)

    where there are two natural length scales, one being the macroscopic scale of order 1 and the other being the microscopic pore scale of order $\varepsilon>0$; the scale parameter $\varepsilon$ measures the oscillation. The bounded $1$-periodic functions $\lambda(\xi),\mu(\eta)$ describe the periodic structure. As $\varepsilon\to 0^{+}$, the limit of the operators $\{L_{\varepsilon}\}_{\varepsilon>0}$ is a second-order elliptic differential operator $L$ corresponding to the macroscopic scale. Piatnitski and Zhizhina [14] dealt with the homogenization of parabolic problems for integral operators of convolution type with a non-symmetric jump kernel in a periodic elliptic medium

    \begin{eqnarray} L_{\varepsilon}u(x) = \int_{\mathbb{R}^{d}}J\Big(\frac{x-y}{\varepsilon}\Big)\mu\Big(\frac{x}{\varepsilon},\frac{y}{\varepsilon}\Big)\big(u(y)-u(x)\big)dy, \end{eqnarray} (1.3)

    where μ(ξ,η) is a positive periodic function in ξ and η. Kassmann, Piatnitski and Zhizhina [15] considered the homogenization of a Lévy-type operator.

    Karch, Kassmann and Krupski [16] discussed the existence of solutions to the Cauchy problem

    \begin{eqnarray} \left\{\begin{array}{l} \partial_{t}u(x,t) = \int_{\mathbb{R}^{d}}\rho\big(u(x,t),u(y,t);x,y\big)\big(u(y,t)-u(x,t)\big)dy,\\ u(x,0) = u_{0}(x), \end{array}\right. \end{eqnarray} (1.4)

    for $(x,t)\in\mathbb{R}^{d}\times[0,\infty)$ with a given homogeneous jump kernel $\rho$. Their models contain both integrable and non-integrable kernels.

    Next, we introduce some examples concerning nonlocal evolution versions of porous medium equations and fast diffusion equations. Cortazar et al. [17] considered the rescaled problem

    \begin{eqnarray} \partial_{t}u_{\varepsilon}(x,t) = \frac{1}{\varepsilon^{2}}\left(\int_{\mathbb{R}}J\Big(\frac{x-y}{\varepsilon u_{\varepsilon}(y,t)}\Big)dy-\varepsilon u_{\varepsilon}(x,t)\right),\quad (x,t)\in\mathbb{R}\times[0,\infty) \end{eqnarray} (1.5)

    with a fixed initial condition $u_{0}(x)$, and they proved that the limit $\lim_{\varepsilon\to 0}u_{\varepsilon}(x,t) = u(x,t)$ is a solution to the porous medium equation $u_{t} = D(u^{3})_{xx}$ for a suitable constant $D$, where $J$ is a smooth non-negative even function supported in $[-1,1]$.

    Andreu et al. [18, Chapter 5] discussed a class of nonlinear nonlocal evolution equations with the Neumann boundary condition

    \begin{eqnarray} \left\{\begin{array}{ll} \partial_{t}z(t,x) = \int_{\Omega}J(x-y)\big(u(t,y)-u(t,x)\big)dy, & x\in\Omega,\;t>0,\\ z(t,x)\in\theta(u(t,x)), & x\in\Omega,\;t>0,\\ z(0,x) = z_{0}(x), & x\in\Omega, \end{array}\right. \end{eqnarray} (1.6)

    where $\Omega$ is a bounded domain. If the maximal monotone function is $\theta(r) = |r|^{p-1}r$, then, under a suitable rescaling, the model problem (1.6) corresponds to a nonlocal version of the porous medium equation if $0<p<1$, or of the fast diffusion equation if $p>1$.
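    To see this correspondence at a formal level (a sketch added for orientation, not part of the original argument): with $\theta(r) = |r|^{p-1}r$ one has $u = \theta^{-1}(z) = |z|^{\frac{1}{p}-1}z$, so the first line of (1.6) becomes
    $$\partial_{t}z(t,x) = \int_{\Omega}J(x-y)\Big(|z|^{\frac{1}{p}-1}z(t,y)-|z|^{\frac{1}{p}-1}z(t,x)\Big)dy,$$
    a nonlocal diffusion for $z$ with exponent $\frac{1}{p}>1$ (porous-medium type) when $0<p<1$ and with exponent $\frac{1}{p}<1$ (fast-diffusion type) when $p>1$.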

    Nonlocal porous medium equations with a non-integrable kernel are an important example of nonlinear and nonlocal diffusion equations; the various properties of solutions to the fractional porous medium equation

    \begin{eqnarray} \partial_{t}u(x,t)+(-\Delta)^{\frac{\alpha}{2}}\big(|u|^{p-1}u\big) = 0 \end{eqnarray} (1.7)

    have been studied from various viewpoints [19,20,21,22,23,24].

    The two types of equations studied in this paper have recently been widely researched in the following form:

    \begin{eqnarray} \partial_{t}u(x,t) = -L\big(|u|^{p-1}u\big), \end{eqnarray} (1.8)

    where $p>0$, $L$ is a linear, symmetric, and nonnegative operator ($p\geq 1$) and a sub-Markovian operator ($0<p<1$). More details can be seen in [25,26].

    Inspired by the local nonlinear homogenization results of Akagi and Oka [5,6], the goal of this paper is to investigate the homogenization theory of nonlocal nonlinear parabolic equations in a periodic environment with the following nonlocal scaling operator:

    \begin{eqnarray} \mathscr{L}_{\varepsilon}u(x,t) = \int_{\mathbb{R}^{d}}\frac{J\big(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^{r}}\big)}{\varepsilon^{d+2}}\nu\Big(\frac{x}{\varepsilon},\frac{y}{\varepsilon}\Big)\Big(|u(y,t)|^{p-1}u(y,t)-|u(x,t)|^{p-1}u(x,t)\Big)dy, \end{eqnarray} (1.9)

    which means that we take the jump kernel in Eq (1.4) as follows

    \begin{eqnarray} \rho\big(u(x),u(y);x,y\big) = \frac{|u(y)|^{p-1}u(y)-|u(x)|^{p-1}u(x)}{u(y)-u(x)}\cdot\frac{J\big(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^{r}}\big)}{\varepsilon^{d+2}}\nu\Big(\frac{x}{\varepsilon},\frac{y}{\varepsilon}\Big). \end{eqnarray} (1.10)

    The difference between the kernels in Eq (1.10) and in the equation

    \begin{eqnarray} \partial_{t}u(x,t) = \int_{\mathbb{R}^{d}}\frac{C_{\alpha,d}}{|x-y|^{d+\alpha}}\Big(|u(y)|^{p-1}u(y)-|u(x)|^{p-1}u(x)\Big)dy, \end{eqnarray} (1.11)

    where $C_{\alpha,d}$ is a constant and $\alpha\in(0,1)$, can be seen in the work of Karch, Kassmann and Krupski [16]. For more literature on time-dependent regular kernels (integrable) and Lévy kernels (non-integrable), see, for instance, [27,28].
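    As a quick consistency check (our observation, not a claim from [16]): in the linear case $p = 1$ the difference quotient in Eq (1.10) equals $1$, so the jump kernel reduces to
    $$\rho\big(u(x),u(y);x,y\big) = \frac{1}{\varepsilon^{d+2}}J\Big(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^{r}}\Big)\nu\Big(\frac{x}{\varepsilon},\frac{y}{\varepsilon}\Big),$$
    which is a time-dependent analogue of the kernels in Eqs (1.2) and (1.3).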

    This paper is mainly divided into two parts.

    The first part is the homogenization problem under the periodic framework. Our goal is to characterize the limit operator obtained by homogenizing the nonlocal operators $\{\mathscr{L}_{\varepsilon}\}_{\varepsilon>0}$ as the scale parameter $\varepsilon\to 0^{+}$. The paper is organized as follows. In the first step, in the case $0<p\leq 2$, we transfer the spatial nonlinearity to the time derivative term through a Kirchhoff-type transform, which simplifies the difficulty caused by the nonlinearity in the nonlocal operator. In the second step, we construct auxiliary functions on which the operator acts and then split the operator in Eq (1.9) into three parts, which we treat separately according to the parameters $r$ and $p$. We prove that the first part is zero. For the second part, we obtain that the limit is a nonlinear diffusion operator. Finally, from the third part we get an error function $\phi_{\varepsilon}$ and we prove that it tends to zero in $L^{2}((0,T),L^{2}(\mathbb{R}^{d}))$ as $\varepsilon\to 0^{+}$. We also consider the homogenization of the nonlocal porous medium equation ($1\leq p<+\infty$) with nonnegative initial values and obtain similar homogenization results.
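    A minimal sketch of the first step (anticipating the substitution carried out around Eq (3.25); the presentation here is ours): writing $v_{\varepsilon} = |u_{\varepsilon}|^{p-1}u_{\varepsilon}$, so that $u_{\varepsilon} = |v_{\varepsilon}|^{\frac{1}{p}-1}v_{\varepsilon} = v_{\varepsilon}^{1/p}$ in the sign-preserving sense, the problem $\partial_{t}u_{\varepsilon}-\mathscr{L}_{\varepsilon}u_{\varepsilon} = 0$ becomes
    $$\partial_{t}\big(v_{\varepsilon}^{1/p}\big)(x,t)-\frac{1}{\varepsilon^{d+2}}\int_{\mathbb{R}^{d}}J\Big(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^{r}}\Big)\nu\Big(\frac{x}{\varepsilon},\frac{y}{\varepsilon}\Big)\big(v_{\varepsilon}(y,t)-v_{\varepsilon}(x,t)\big)dy = 0,$$
    so the integral operator acting on $v_{\varepsilon}$ is linear and the nonlinearity sits only in the time derivative.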

    The second part is the homogenization problem under the stationary framework. The idea of the proof is divided into the following steps. The first step is to construct an approximating sequence of equations when $p = 1$; by approximation we obtain a random corrector function. The second step is to prove the existence and uniqueness of the corrector functions, as well as the properties of sub-linear growth and stationarity. The third step is to derive the limit equation. Some additional stationary matrix field $\digamma(\frac{x}{\varepsilon},\frac{t}{\varepsilon^{2}},\omega)$ with zero average and a non-stationary term $\Upsilon_{\varepsilon}$ appear in the process of computing the coefficients of the limit equation. It is necessary to prove that there are functions $v_{2}$ and $v_{3}$ that cancel the additional part, and also to prove the positive definiteness of the matrix $\Theta$ and the existence of the limit equation. The fourth step focuses on the effects of the nonlinearity and gives some key proofs of our results.

    The novelty of this paper is twofold. First, for the deterministic equation with a periodic structure, our study complements the results in the literature for $r = 2$ and $p = 1$. Second, we consider the corresponding equation with a stationary structure.

    It is worth noting that we need to require $0<p\leq 2$ so that $|u_{\varepsilon}|^{p-1}u_{\varepsilon}\in L^{2}((0,T);L^{2}_{loc}(\mathbb{R}^{d}))$. So far it is not actually clear how to handle the case $p>2$. For the local equation, the estimate $u_{\varepsilon}\in L^{\infty}((0,T);L^{3-p}(\Omega))\cap L^{2}((0,T);H^{1}_{0}(\Omega))$, valid for $p\in(0,2)$, does not hold when $p = 2$; the specific proof is in [5, Lemma 4.1].

    In order to deal with the homogenization of nonlinear nonlocal operators, we first introduce some results on nonlinear functional analysis, semigroups and some background on nonlocal diffusion; the main references are [29,30].

    Notation. $X = L^{2}(\mathbb{R}^{d},\varrho)$, $I = (0,T)$, $E = (0,1)$ and $Y = \mathbb{T}^{d} = [0,1]^{d}$. For $x_{0}\in\mathbb{R}^{d}$, $Q_{R}(x_{0}) = x_{0}+(-\frac{R}{2},\frac{R}{2})^{d}$ and $B_{r}(x_{0})$ is the open ball in $\mathbb{R}^{d}$ centered at $x_{0}$ with radius $r$. Moreover, $Q_{R} = Q_{R}(0)$, $B_{r} = B_{r}(0)$, and $\tilde{Q}_{R} = Q_{R}\times I_{R} = (-\frac{R}{2},\frac{R}{2})^{d+1}\subset\mathbb{R}^{d+1}$, while $\tilde{Q}$ and $\hat{Q}$ are used for any cube in $\mathbb{R}^{d+1}$. Additionally, $a\lesssim_{\alpha}b$ means that there exists a constant $C = C(\alpha)>0$ such that $a\leq Cb$. We write $a\simeq b$ if $a\lesssim_{\alpha}b$ and $b\lesssim_{\alpha}a$.

    Assume that the kernel $J$ is a nonnegative symmetric function that satisfies the time-periodicity condition

    \begin{eqnarray} J(z,s+1) = J(z,s),\quad s\in I, \end{eqnarray} (2.1)

    and that $J(\cdot,\cdot)$ is compactly supported in the set $\{(x,t)\in\mathbb{R}^{d+1}:t\geq 0\}$. In addition,

    \begin{eqnarray} \left\{\begin{array}{l} J(z,s)\in L^{\infty}\big((0,T),C_{b}(\mathbb{R}^{d})\cap L^{1}(\mathbb{R}^{d})\big),\\ J(x,t)\geq J(y,t)\;\mathrm{if}\;|x|\leq|y|,\;t>0,\\ \exists\,j_{1}>0\;\mathrm{and}\;J_{1}(z)\leq j_{1}\;\mathrm{such\;that}\;\int_{\mathbb{R}^{d}}J_{1}(z)|z|dz = j_{0},\quad \|J(z,\cdot)\|_{L^{\infty}(0,T)}\leq J_{1}(z),\;z\in U, \end{array}\right. \end{eqnarray} (2.2)

    where U is any tube in Rd.

    We also assume that the bounded periodic function ν(x,y) satisfies

    \begin{eqnarray} 0<\alpha_{1}\leq\nu(x,y)\leq\alpha_{2}<+\infty, \end{eqnarray} (2.3)

    where α1 and α2 are positive constants. Here ν contains the case that ν(x,y)=λ(x)μ(y).

    Definition 2.1. (Monotone operator) Let $X$ and $X^{*}$ be a Banach space and its dual space. A set-valued operator $A:X\to 2^{X^{*}}$ is said to be monotone if it holds that

    \begin{eqnarray} \langle u-v,\xi-\eta\rangle\geq 0\quad\mathrm{for\;all}\;[u,\xi],[v,\eta]\in G(A), \end{eqnarray} (2.4)

    where $G(A)$ denotes the graph of $A$, i.e., $G(A) = \{[u,\xi]\in X\times X^{*}:\xi\in Au\}$.

    Finally, let us recall the notion of subdifferentials for convex functionals.

    Definition 2.2. (Subdifferential operator) Let $X$ and $X^{*}$ be a Banach space and its dual space, respectively. Let $\phi:X\to(-\infty,+\infty]$ be a proper (i.e., $D(\phi)\neq\emptyset$) lower semicontinuous and convex functional with the effective domain $D(\phi): = \{u\in X:\phi(u)<+\infty\}$. The subdifferential operator $\partial\phi:X\to 2^{X^{*}}$ of $\phi$ is defined by

    \begin{eqnarray*} \partial\phi(u) = \{\xi\in X^{*}:\phi(v)-\phi(u)\geq\langle\xi,v-u\rangle_{X}\;\mathrm{for\;all}\;v\in D(\phi)\} \end{eqnarray*}

    with domain $D(\partial\phi): = \{u\in D(\phi):\partial\phi(u)\neq\emptyset\}$. Subdifferential operators form a subclass of maximal monotone operators.

    Theorem 2.3. (Minty) Every subdifferential operator is maximal monotone.

    Lemma 2.1. [29, Prop. 6.19, Poincaré-type inequality] For $q\geq 1$, assume that $J(x)\geq J(y)$ if $|x|\leq|y|$ and that $\Omega$ is a bounded domain in $\mathbb{R}^{d}$; then the quantity

    \begin{eqnarray} \beta_{q-1}: = \beta_{q-1}(J,\Omega,q) = \inf\limits_{u\in L^{q}(\Omega),\,\int_{\Omega}u\,dx = 0}\frac{\int_{\Omega}\int_{\Omega}J(x-y)|u(y)-u(x)|^{q}dydx}{2\int_{\Omega}|u(x)|^{q}dx} \end{eqnarray} (2.5)

    is strictly positive. Consequently, for every $u\in L^{q}(\Omega)$,

    \begin{eqnarray} \beta_{q-1}\int_{\Omega}\Big|u(x)-\frac{1}{|\Omega|}\int_{\Omega}u(x)dx\Big|^{q}dx\leq\frac{1}{2}\int_{\Omega}\int_{\Omega}J(x-y)|u(y)-u(x)|^{q}dydx. \end{eqnarray} (2.6)
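    For orientation (our remark): in the case $q = 2$, which is the one used repeatedly below, the inequality (2.6) reads
    $$\beta_{1}\int_{\Omega}\Big|u(x)-\frac{1}{|\Omega|}\int_{\Omega}u\,dx\Big|^{2}dx\leq\frac{1}{2}\int_{\Omega}\int_{\Omega}J(x-y)|u(y)-u(x)|^{2}dydx,$$
    so for mean-zero $u$ the nonlocal Dirichlet form controls $\beta_{1}\|u\|_{L^{2}(\Omega)}^{2}$; this is the form in which the constant $\beta_{1}$ appears in the later coercivity estimates.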

    Theorem 2.4. [30, Prop. 32D] Let $V\subseteq H\subseteq V^{*}$ be an evolution triple and $X = L^{p}(0,T;V)$, where $1<p<\infty$ and $0<T<\infty$. Suppose that the operator $A:X\to X^{*}$ is pseudomonotone, coercive and bounded. Then, for each $b\in X^{*}$ and the operators

    \begin{eqnarray} \left\{\begin{array}{ll} L_{1}u = u', & D(L_{1}) = \{u\in W^{1,p}(0,T;V,H):u(0) = 0\},\\ L_{2}u = u', & D(L_{2}) = \{u\in W^{1,p}(0,T;V,H):u(0) = u(T)\}, \end{array}\right. \end{eqnarray} (2.7)

    the equations

    \begin{eqnarray} \left\{\begin{array}{ll} L_{1}u+Au = b, & u\in D(L_{1}),\\ L_{2}u+Au = b, & u\in D(L_{2}) \end{array}\right. \end{eqnarray} (2.8)

    have respective solutions. In addition, if A is strictly monotone, then the corresponding solutions are unique.

    We consider the following nonlocal scaling operator

    \begin{eqnarray} \mathscr{L}_{\varepsilon}u(x,t) = \int_{\mathbb{R}^{d}}\frac{J\big(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^{r}}\big)}{\varepsilon^{d+2}}\nu\Big(\frac{x}{\varepsilon},\frac{y}{\varepsilon}\Big)\Big(|u(y,t)|^{p-1}u(y,t)-|u(x,t)|^{p-1}u(x,t)\Big)dy, \end{eqnarray} (3.1)

    and the corresponding Cauchy problem

    \begin{eqnarray} \left\{\begin{array}{ll} \partial_{t}u_{\varepsilon}(x,t)-\mathscr{L}_{\varepsilon}u_{\varepsilon}(x,t) = 0, & (x,t)\in\mathbb{R}^{d}\times(0,T),\\ u_{\varepsilon}(x,0) = \varphi(x), & x\in\mathbb{R}^{d}, \end{array}\right. \end{eqnarray} (3.2)

    with

    \begin{eqnarray} \varphi(x)\in L^{[1,\infty]}(\mathbb{R}^{d}): = L^{1}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}). \end{eqnarray} (3.3)

    As the scale parameter $\varepsilon\to 0^{+}$, we will prove that the effective Cauchy problem for Eq (3.2) is

    \begin{eqnarray} \left\{\begin{array}{ll} \partial_{t}u^{0}(x,t)-\mathscr{L}^{0}u^{0}(x,t) = 0, & (x,t)\in\mathbb{R}^{d}\times(0,T),\\ u^{0}(x,0) = \varphi(x), & x\in\mathbb{R}^{d}, \end{array}\right. \end{eqnarray} (3.4)

    where

    \begin{eqnarray} \mathscr{L}^{0}u^{0}(x,t) = \Theta\cdot\nabla\nabla\big(|u^{0}|^{p-1}u^{0}\big) = \sum\limits_{i,j = 1}^{d}\Theta^{ij}\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}\big(|u^{0}|^{p-1}u^{0}\big), \end{eqnarray} (3.5)

    and the positive definite constant matrix $\Theta = (\Theta^{ij})$ will be given below. For writing convenience, we omit the summation sign $\Sigma$ in Eq (3.5).
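    As an illustration (our remark, not an additional result): in the linear case $p = 1$, Eqs (3.4) and (3.5) reduce to the constant-coefficient parabolic problem
    $$\partial_{t}u^{0}(x,t) = \sum\limits_{i,j = 1}^{d}\Theta^{ij}\frac{\partial^{2}u^{0}}{\partial x_{i}\partial x_{j}}(x,t),\qquad u^{0}(x,0) = \varphi(x),$$
    which is exactly the type of effective equation obtained for linear convolution-type operators in [13,14].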

    Remark 1. According to [31,32], for $1<p<\infty$ the Cauchy problem for porous medium equations admits a solution when $\varphi(x)\in L^{1}_{loc}(\mathbb{R}^{d})$, but the corresponding result for the Cauchy problem for fast diffusion equations was only established for $d\geq 3$, $\frac{d-2}{d}<p<1$, and for $d\in\{1,2\}$, $0<p<1$, when $\varphi(x)\in L^{1}_{loc}(\mathbb{R}^{d})$. Therefore, the index conditions in the critical situation are also satisfied here. For $p = 1$, the operator $\mathscr{L}_{\varepsilon}$ in Eq (3.1) is linear, and the Cauchy problems for the parabolic Eqs (3.2) and (3.4) have solutions $u_{\varepsilon}$ and $u^{0}$ in $L^{\infty}((0,T),L^{2}(\mathbb{R}^{d}))$, respectively. However, the existence of solutions is not obvious for $0<p<1$ and $p>1$, so we need to prove it before investigating the limit behavior.

    We use the space of functions of bounded variation, following [16,33]. Suppose that for $u\in L^{1}(\mathbb{R}^{d})$ there exist finite signed Radon measures $\lambda_{i}$ $(i = 1,2,\dots,d)$ such that

    \begin{eqnarray*} \int_{\mathbb{R}^{d}}u\,\partial_{x_{i}}\phi\,dx = -\int_{\mathbb{R}^{d}}\phi\,d\lambda_{i},\quad\forall\phi\in C_{c}^{\infty}(\mathbb{R}^{d}),\qquad |Du|(\mathbb{R}^{d}) = \sum\limits_{i = 1}^{d}\sup\Big\{\int_{\mathbb{R}^{d}}\Phi_{i}\,d\lambda_{i}:\Phi\in C_{0}(\mathbb{R}^{d},\mathbb{R}^{d}),\;\|\Phi\|_{C_{0}(\mathbb{R}^{d},\mathbb{R}^{d})}<1\Big\}. \end{eqnarray*}

    Then we say that $u\in BV(\mathbb{R}^{d})$ if the norm $\|u\|_{BV} = 2\|u\|_{1}+|Du|(\mathbb{R}^{d})<\infty$.

    For every $\delta\in(0,1]$, we consider a nondecreasing function $h_{\delta}\in C([0,\infty))$, $0\leq h_{\delta}(x)\leq 1$, which satisfies $h_{\delta}(x) = 0$ for $x\leq\frac{\delta}{2}$ and $h_{\delta}(x) = 1$ for $x\geq\delta$. Denote

    \begin{eqnarray*} &&\Gamma_{0}(u(x),u(y),x,y,t) = \frac{|u(y)|^{p-1}u(y)-|u(x)|^{p-1}u(x)}{u(y)-u(x)},\\ &&\Gamma(u(x),u(y),x,y,t) = J(x-y,t)\,\nu(x,y)\,\Gamma_{0}(u(x),u(y),x,y,t),\\ &&\Gamma_{\delta}(a,b;x,y) = h_{\delta}(|a-b|)\,\mathbf{1}_{|x-y|\geq\delta}(x,y)\,\Gamma(a,b;x,y),\\ &&L^{t,\delta}_{v}u(x) = \int_{\mathbb{R}^{d}}\Gamma_{\delta}(v(x),v(y),x,y,t)\big(u(y)-u(x)\big)dy. \end{eqnarray*}

    Lemma 3.1. For $1\leq p<+\infty$, the operator $B_{\delta}(u): = L^{t,\delta}_{u}u$ is locally Lipschitz as a mapping $B_{\delta}:L^{[1,\infty]}(\mathbb{R}^{d})\to L^{[1,\infty]}(\mathbb{R}^{d})$ for a.e. $t$.

    Proof. See Appendix A for a detailed proof.

    Lemma 3.2. For every initial datum $u_{0}^{\delta}(x)\in L^{[1,\infty]}(\mathbb{R}^{d})$, $1<p<+\infty$ and $T>0$, the problem (3.2) admits a unique global classical solution

    \begin{eqnarray*} u^{\delta}(x,t)\in C^{1}([0,T],L^{[1,\infty]}(\mathbb{R}^{d})). \end{eqnarray*}

    Proof. For $v\in C^{1}([0,T],L^{[1,\infty]}(\mathbb{R}^{d}))$, consider the integral operator

    \begin{eqnarray} \left\{\begin{array}{l} V_{\delta}:X = C([0,T],L^{[1,\infty]}(\mathbb{R}^{d}))\to C^{1}([0,T],L^{[1,\infty]}(\mathbb{R}^{d})),\\ (V_{\delta}v)(t) = \int_{0}^{t}B_{\delta}(v(s))ds,\quad v\in X. \end{array}\right. \end{eqnarray} (3.6)

    From Lemma 3.1, the operator $B_{\delta}(u) = L^{t,\delta}_{u}u$ is locally Lipschitz. Fix $T\in(0,\infty)$; for $v_{1},v_{2}\in X$, we have

    \begin{eqnarray} |||V_{\delta}v_{1}-V_{\delta}v_{2}|||_{X}\leq\int_{0}^{T}\|B_{\delta}(v_{1}(s))-B_{\delta}(v_{2}(s))\|_{[1,\infty]}ds\leq T\max\limits_{0\leq t\leq T}\|B_{\delta}v_{1}-B_{\delta}v_{2}\|_{[1,\infty]}\leq M(p,\alpha_{2},\Lambda)\,T\,|||v_{1}-v_{2}|||_{X}, \end{eqnarray} (3.7)

    and

    \begin{eqnarray*} |||\partial_{t}V_{\delta}v_{1}-\partial_{t}V_{\delta}v_{2}|||_{X}\leq\max\limits_{0\leq t\leq T}\|B_{\delta}v_{1}-B_{\delta}v_{2}\|_{[1,\infty]}\leq M(p,\alpha_{2},\Lambda)\,|||v_{1}-v_{2}|||_{X}, \end{eqnarray*}

    where $M(p,\alpha_{2},\Lambda)$ is introduced in Appendix A. For a small enough $T$ such that $CT<1$, the Banach contraction mapping principle implies that the problem (3.2) admits a unique local classical solution $u^{\delta}\in C^{1}([0,T],L^{[1,\infty]}(\mathbb{R}^{d}))$.

    This local classical solution $u^{\delta}$ is actually global. According to [16, Lemma 3.5], we have

    \begin{eqnarray} \|u(t)\|_{[1,\infty]}\leq C\|\varphi\|_{[1,\infty]},\quad t\in[0,T]. \end{eqnarray} (3.8)

    Taking a ball $B(\varphi,C\|\varphi\|_{[1,\infty]})\subset X$, the constant $M(p,\alpha_{2},\Lambda)$ then only depends on $\|\varphi\|_{[1,\infty]}$. Therefore, the problem (3.2) admits a global solution.

    Theorem 3.1. (Existence of strong solutions) For $1<p<+\infty$ and an initial condition $\varphi\in BV(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d})$, the problem (3.2) has a strong solution (still denoted by $u$)

    \begin{eqnarray} u\in L^{\infty}([0,\infty),BV(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}))\cap C([0,\infty),L^{1}_{loc}(\mathbb{R}^{d})). \end{eqnarray} (3.9)

    Proof. For an arbitrarily fixed $T>0$, by applying the Aubin-Lions-Simon lemma [34, Theorem 1] in the space $L^{\infty}([0,T],L^{1}(\Omega))$, we can extract a convergent subsequence in the usual way. Thus there exist a subsequence $\{u^{\delta_{j}}\}$ and a function $u$ such that $u^{\delta_{j}}\to u$ in $C([0,T],L^{1}_{loc}(\mathbb{R}^{d}))$. The specific proof can be found in [16,25].

    The case $0<p<1$ can be obtained as in [35], where the existence theory for a doubly nonlinear equation is consistent with our equation. Note that many studies focus on fractional nonlocal fast diffusion equations, e.g., [36,37]. The general framework was recently studied in [16]. We now describe our main results on $u_{\varepsilon}(x,t)$ and $u^{0}(x,t)$ corresponding to the Cauchy problems (3.2) and (3.4), respectively.

    Theorem 3.2. Assume that the functions $J(z,s)$ and $\nu(x,y)$ satisfy the conditions (2.1)–(2.3). Let $u_{\varepsilon}(x,t)$ be the solution of the Cauchy problem (3.2) and $u^{0}(x,t)$ be the solution of the effective Cauchy problem (3.4). Then there exist a vector $\varpi\in\mathbb{R}^{d}$ ($\varpi = 0$ for $p\neq 1$) and a positive definite matrix $\Theta$ such that for any $T>0$ we have

    \begin{eqnarray} \Big\|u_{\varepsilon}\Big(x+\frac{\varpi t}{\varepsilon},t\Big)-u^{0}(x,t)\Big\|_{L^{1}((0,T),L^{1}_{loc}(\mathbb{R}^{d}))}\to 0\quad\mathrm{as}\;\varepsilon\to 0^{+}. \end{eqnarray} (3.10)

    Theorem 3.2 implies that the homogenized limit of the nonlocal operator in Eq (3.1) is a local porous-medium-type operator. The Cauchy problem for porous medium equations has been extensively studied in [31,32,38,39].

    The homogenized flux Θ(x,t) can be characterized as follows.

    Case I. For $0<r<2$ and $0<p\leq 2$, $\Theta$ is a constant $d\times d$ matrix given by

    \begin{eqnarray} \Theta = \int_{0}^{1}\int_{\mathbb{T}^{d}}\int_{\mathbb{R}^{d}}\frac{1}{2}(\xi-q)\otimes(\xi-q)J(\xi-q,s)\nu(\xi,q)m(\xi)\,dq\,d\xi\,ds-\int_{0}^{1}\int_{\mathbb{T}^{d}}\int_{\mathbb{R}^{d}}J(\xi-q,s)\nu(\xi,q)(\xi-q)\otimes\chi_{1}(q,s)\,dq\,d\xi\,ds+\varpi\otimes\int_{0}^{1}\int_{\mathbb{T}^{d}}\frac{1}{p}|u^{0}|^{1-p}\chi_{1}(\xi,s)\mu(\xi,s)\,d\xi\,ds, \end{eqnarray} (3.11)

    where the periodic function $\chi_{1}(\xi,s)$, $(\xi,s)\in\mathbb{T}^{d}\times\mathbb{T}$, solves the cell problem

    \begin{eqnarray} \left\{\begin{array}{l} \int_{\mathbb{R}^{d}}J(\xi-q,s)\nu(\xi,q)\big(q-\xi+\chi_{1}(q,s)-\chi_{1}(\xi,s)\big)dq = \frac{1}{p}|u^{0}|^{1-p}\varpi,\\ \chi_{1}(y,0) = \chi_{1}(y,1),\quad y\in\mathbb{T}^{d}, \end{array}\right. \end{eqnarray} (3.12)

    $u^{0}(x,t)$ is the solution of Eq (3.4) and $m$ will be defined in Eq (4.40).

    Remark 2. For $p\neq 1$, $\chi_{1}$ does not depend on $x$ and $t$, so $\varpi = 0$, $w_{\varepsilon}(x,t) = u(x,t)+\varepsilon u_{1}(x,t)+\varepsilon^{2}u_{2}(x,t)$, and the pair $(u,u_{1})$ is uniquely determined. Moreover, the function $u_{1}(x,t,y,s)$ can be written as

    \begin{eqnarray} u_{1}(x,t,y,s) = \sum\limits_{k = 1}^{d}\partial_{x_{k}}\big(|u^{0}|^{p-1}u^{0}\big)(x,t)\,\chi_{1}^{k}(y,s). \end{eqnarray} (3.13)

    Case II. For $r = 2$ and $p\in(0,1]$, the homogenized matrix function $\Theta(x,t)$ is characterized by

    \begin{eqnarray} \Theta(x,t) = \int_{0}^{1}\int_{\mathbb{T}^{d}}\int_{\mathbb{R}^{d}}\frac{1}{2}(\xi-q)\otimes(\xi-q)J(\xi-q,s)\nu(\xi,q)m(\xi)\,dq\,d\xi\,ds-\int_{0}^{1}\int_{\mathbb{T}^{d}}\int_{\mathbb{R}^{d}}J(\xi-q,s)\nu(\xi,q)m(\xi)(\xi-q)\otimes\chi_{1}(x,t,q,s)\,dq\,d\xi\,ds+\varpi\otimes\int_{0}^{1}\int_{\mathbb{T}^{d}}\frac{1}{p}|u^{0}|^{1-p}\chi_{1}(x,t,\xi,s)\nu(\xi,q)m(\xi)\,d\xi\,ds, \end{eqnarray} (3.14)

    where $\chi_{1}^{k} = \chi_{1}^{k}(x,t,y,s)\in L^{\infty}(\mathbb{R}^{d}\times(0,T);L^{2}(E;L^{2}_{per}(Y)/\mathbb{R}))$ solves the cell problem

    \begin{eqnarray} \left\{\begin{array}{l} \int_{\mathbb{R}^{d}}J(\xi-q,s)\nu(\xi,q)\big(q-\xi+\chi_{1}(x,t,q,s)-\chi_{1}(x,t,\xi,s)\big)dq = \frac{1}{p}|u^{0}|^{1-p}\big(\partial_{s}\chi_{1}(x,t,\xi,s)-\varpi\big),\quad(\xi,s)\in\mathbb{T}^{d}\times\mathbb{T},\\ \chi_{1}(x,t,y,0) = \chi_{1}(x,t,y,1),\quad y\in\mathbb{T}^{d}, \end{array}\right. \end{eqnarray} (3.15)

    such that, for each $(x,t)\in\mathbb{R}^{d}\times(0,T)$,

    \begin{eqnarray} |u^{0}|^{1-p}\chi_{1}^{k}\in L^{\infty}(\mathbb{R}^{d}\times(0,T);L^{2}(E;[L^{2}_{per}(Y)/\mathbb{R}])), \end{eqnarray} (3.16)
    \begin{eqnarray} |u^{0}|^{\frac{1-p}{2}}\chi_{1}^{k}\in L^{\infty}(\mathbb{R}^{d}\times(0,T);C(\bar{E};L^{2}(Y)/\mathbb{R})). \end{eqnarray} (3.17)

    Case III. For $r = 2$ and $p\in(1,2]$, the homogenized matrix function $\Theta(x,t)$ is characterized by Eq (3.14), where

    χk1(x,t,y,s)={p|u0|p1k1(x,t,y,s)ifu0(x,t)0,0ifu0(x,t)=0, (3.18)

    and k1=k1(x,t,y,s)L([u00];H1(E;[L2per(Y)/R])) solves the cell problem for each (x,t)[u00]:

    {s1(x,t,ξ,s)=RdJ(ξq,s)ν(ξ,q)(qξ+p|u0|p11(x,t,q,s)p|u0|p11(x,t,ξ,s))dq,1(x,t,y,0)=1(x,t,y,1),yTd, (3.19)

    such that

    |u0|p1k1L([u00];L2(E;L2per(Y)/R)), (3.20)
    |u0|p12k1L([u00];C(ˉE;L2(Y)/R)), (3.21)

    and the measurable set $[u^{0}\neq 0]: = \{(x,t)\in\mathbb{R}^{d}\times(0,T):u^{0}(x,t)\neq 0\}$.

    Case IV. For $2<r<+\infty$ and $0<p\leq 1$, $\Theta$ is a constant $d\times d$ matrix given by

    Θ=TdRd12(ξq)(ξq)[10J(ξq,s)ds]ν(ξ,q)m(ξ)dqdξTdRd[10J(ξq,s)ds]ν(ξ,q)m(ξ)(ξq)χ1(x,t,q,s)dqdξ+ϖ10TdRd1p|u0|1pχ1(x,t,ξ,s)ν(ξ,q)m(ξ)dqdξds, (3.22)

    where χ1 satisfies the following problem with (ξ,s)Td×T:

    Td10J(ξq,s)dsν(ξ,q)(qξ+χ1(q)χ1(ξ))dq=1p|u0|1pϖ, (3.23)

    $\chi_{1}$ does not depend on $s$ because $\varpi = 0$; we also find that $\chi_{1}$ does not depend on $x$ and $t$.

    Case V. For $2<r<+\infty$ and $1<p\leq 2$, $\varpi = 0$ and $\Theta$ is a constant $d\times d$ matrix given by

    Θ=TdRd12(ξq)(ξq)[10J(ξq,s)ds]ν(ξ,q)m(ξ)dqdξTdRd[10J(ξq,s)ds]ν(ξ,q)m(ξ)(ξq)χ1(q)dqdξ, (3.24)

    where χ1 also satisfies Eq (3.23).

    Now, for the operator given by Eq (3.1), we consider the following (linear) nonlocal scaling operator

    \begin{eqnarray} L_{\varepsilon}v(x,t) = \frac{1}{\varepsilon^{d+2}}\int_{\mathbb{R}^{d}}J\Big(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^{r}}\Big)\nu\Big(\frac{x}{\varepsilon},\frac{y}{\varepsilon}\Big)\big(v(y,t)-v(x,t)\big)dy; \end{eqnarray} (3.25)

    thus, for $v(x,t) = |u(x,t)|^{p-1}u(x,t)$, we have that

    \begin{eqnarray*} L_{\varepsilon}v(x,t) = L_{\varepsilon}\big(|u|^{p-1}u\big) = \mathscr{L}_{\varepsilon}u(x,t). \end{eqnarray*}

    Therefore we transform the problems (3.2) and (3.4) into the following Cauchy problems

    \begin{eqnarray} \left\{\begin{array}{ll} \partial_{t}v_{\varepsilon}^{1/p}(x,t)-L_{\varepsilon}v_{\varepsilon}(x,t) = 0, & (x,t)\in\mathbb{R}^{d}\times(0,T),\\ v_{\varepsilon}(x,0) = \varphi(x), & x\in\mathbb{R}^{d}, \end{array}\right. \end{eqnarray} (3.26)

    and

    \begin{eqnarray} \left\{\begin{array}{ll} \partial_{t}v^{1/p}(x,t)-\Theta\cdot\nabla\nabla v(x,t) = 0, & (x,t)\in\mathbb{R}^{d}\times(0,T),\\ v(x,0) = \varphi(x), & x\in\mathbb{R}^{d}, \end{array}\right. \end{eqnarray} (3.27)

    respectively. We first study the existence and uniqueness of solutions to the Cauchy problems (3.26) and (3.27) with the nonlocal operator (3.25), where $L_{\varepsilon}$ is a non-positive and self-adjoint operator in the space $L^{2}(\mathbb{R}^{d},m)$ for $\nu(x,y)$. In fact, for any $u,v\in L^{2}(\mathbb{R}^{d},m)$,

    \begin{eqnarray} \big(L_{\varepsilon}u(x),u(x)\big)_{L^{2}(\mathbb{R}^{d},m)} = -\frac{1}{2\varepsilon^{d+2}}\int_{\mathbb{R}^{2d}}J\Big(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^{r}}\Big)\nu\Big(\frac{x}{\varepsilon},\frac{y}{\varepsilon}\Big)m\Big(\frac{x}{\varepsilon}\Big)|u(y)-u(x)|^{2}dydx\leq 0. \end{eqnarray} (3.28)

    We directly give the following theorem.

    Theorem 3.3. Suppose that the hypotheses given above are satisfied and that there is a homogenized solution, denoted by $v$, of the problem (3.26). Then we have

    \begin{eqnarray} v\in L^{\infty}((0,T);BV(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}))\cap H^{1}((0,T);L^{2}_{loc}(\mathbb{R}^{d})). \end{eqnarray} (3.29)

    Proof. The proof of existence is similar to that of Theorem 3.1, so we do not repeat it here. What needs to be emphasized is that, in the three cases $0<p<1$, $p = 1$ and $p>1$, the method of proving existence may differ. This does not affect our subsequent homogenization proof.

    Following the classical method of asymptotic expansion, we first construct some auxiliary functions in order to prove Theorem 3.2, i.e., our main result on the homogenization of nonlinear nonlocal equations with a periodic structure. Denote $x_{\varepsilon} = x-\frac{\varpi t}{\varepsilon}$ and $y = x-\varepsilon z$; we first give a chain-rule formula.

    Lemma 4.1. (Chain-rule formula) If $v_{\varepsilon}(x,t)^{1/p}$ is bounded in $H^{1}(I;L^{2}(\mathbb{R}^{d}))$ $(0<p\leq 2)$, then for a.e. $(x,t)\in\mathbb{R}^{d}\times I$ we have

    \begin{eqnarray} \frac{\partial v_{\varepsilon}(x,t)^{1/p}}{\partial t} = \frac{1}{p}|v_{\varepsilon}(x,t)|^{\frac{1-p}{p}}\frac{\partial v_{\varepsilon}(x,t)}{\partial t},\quad p\in(0,1), \end{eqnarray} (4.1)
    \begin{eqnarray} \frac{\partial v_{\varepsilon}(x,t)}{\partial t} = p|v_{\varepsilon}(x,t)|^{\frac{p-1}{p}}\frac{\partial v_{\varepsilon}(x,t)^{1/p}}{\partial t},\quad p\in(1,2]. \end{eqnarray} (4.2)

    For a given vC((0,T),S(Rd)), we introduce some auxiliary functions:

    \begin{eqnarray} w_{\varepsilon}(x,t) = v(x,t)+\varepsilon u_{1}(x,t)+\varepsilon^{2}u_{2}(x,t). \end{eqnarray} (4.3)

    For the different cases of $r$ and $p$, we construct the corresponding auxiliary functions $w_{\varepsilon}^{i}$, $i = 0,1,2$.

    (i) $r = 2$, $0<p<1$:

    \begin{eqnarray} w_{\varepsilon}^{0}(x,t) = v(x,t)+\varepsilon\chi_{1}\Big(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^{r}}\Big)\cdot\nabla v(x,t)+\varepsilon^{2}\chi_{2}\Big(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^{r}}\Big)\cdot\nabla\nabla v(x,t). \end{eqnarray} (4.4)

    (ii) $r = 2$, $1<p\leq 2$:

    wε1(x,t)=v(x,t)+εp|v(x,t)|p1p1(x,t,xε,tεr)v(x,t)+ε2p|v(x,t)|p1p2(x,t,xε,tεr)v(x,t). (4.5)

    (iii) $r = 2$, $p = 1$:

    \begin{eqnarray} w_{\varepsilon}^{2}(x,t) = v\Big(x-\frac{\varpi t}{\varepsilon},t\Big)+\varepsilon\chi_{1}\Big(\frac{x}{\varepsilon},\frac{t}{\varepsilon^{r}}\Big)\cdot\nabla v\Big(x-\frac{\varpi t}{\varepsilon},t\Big)+\varepsilon^{2}\chi_{2}\Big(\frac{x}{\varepsilon},\frac{t}{\varepsilon^{r}}\Big)\cdot\nabla\nabla v\Big(x-\frac{\varpi t}{\varepsilon},t\Big). \end{eqnarray} (4.6)

    (iv) For $r\neq 2$, the correctors $\chi_{i}$ (and their analogues for $p>1$) do not depend on $x$ and $t$.

    Lemma 4.2. For a given $v\in C((0,T),\mathcal{S}(\mathbb{R}^{d}))$, let $w_{\varepsilon}^{i}$ $(i = 0,1,2)$ be defined by Eqs (4.4)–(4.6). Then there exist two functions

    \begin{eqnarray} \chi_{1}\in\big(L^{\infty}((0,T),L^{2}(\mathbb{T}^{d}\times\mathbb{T}))\big)^{d},\quad \chi_{2}\in\big(L^{\infty}((0,T),L^{2}(\mathbb{T}^{d}\times\mathbb{T}))\big)^{d\times d}, \end{eqnarray} (4.7)

    a vector $\varpi\in\mathbb{R}^{d}$ ($p = 1$) and a positive definite matrix $\Theta$ such that

    \begin{eqnarray} H^{\varepsilon}w_{\varepsilon}^{i}(x,t): = \frac{\partial w_{\varepsilon}^{i}(x,t)^{1/p}}{\partial t}-L_{\varepsilon}w_{\varepsilon}^{i} = \Big(\frac{1}{p}|w_{\varepsilon}^{i}|^{\frac{1-p}{p}}\frac{\partial v}{\partial t}(x_{\varepsilon},t)-\Theta\cdot\nabla\nabla v(x_{\varepsilon},t)+\phi_{\varepsilon}(x,t)\Big)\Big|_{x_{\varepsilon} = x-\frac{\varpi t}{\varepsilon}}, \end{eqnarray} (4.8)

    where $\phi_{\varepsilon}\to 0$ in $L^{2}((0,T),L^{2}(\mathbb{R}^{d}))$ as $\varepsilon\to 0$.

    Remark 3. For $p = 1$, the homogenization takes place in the moving coordinates $X_{t} = x-\frac{\varpi t}{\varepsilon}$ with an appropriate constant vector $\varpi$. However, this does not work in the nonlinear situation ($p\neq 1$) when $\varpi\in\mathbb{R}^{d}\setminus\{0\}$.

    Proof. Substituting the expressions on the right-hand sides of Eqs (4.4)–(4.6) into Eq (4.8) and using the notation $x_{\varepsilon} = x-\frac{\varpi t}{\varepsilon}$ and $\partial_{t}\chi\big(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^{r}}\big) = \partial_{t}\chi+\frac{1}{\varepsilon^{r}}\partial_{s}\chi$, where the symbol $\otimes$ stands for the tensor product:

    zz=(zizj)d×d,zzv=zizj2vxixj,zzzv=zizjzk3vxixjxk,χ2(xε,tεr)(ϖε)v=χij2(xε,tεr)(xkε)xixjxkv.

    Case 1. For p(0,1),

    wε0(x,t)t=1p|wε0|1pp[vt(x,t)+(ε1rχ1s)v(x,t)+ε2rχ2sv(x,t)+ϕ(time)ε(x,t)]

    with

    ϕ(time)ε(x,t)=εχ1tv(x,t)+ε2χ2tv(x,t)+εχ1(x,t,xε,tεr)vt(x,t)+ε2χ2(x,t,xε,tεr)vt(x,t). (4.9)

    Set z=xyε; we get

    (Lεwε0)(x,t)=1ε2RdJ(z,s)ν(xε,xεz){v(xεz,t)+εχ1(y,t,xεz,tεr)v(xεz,t)+ε2χ2(y,t,xεz,tεr)v(xεz,t)v(x,t)εχ1(y,t,xε,tεr)v(x,t)ε2χ2(y,t,xε,tεr)v(x,t)}dz. (4.10)

    Using the Taylor expansions

    v(y)=v(x)+10ddθv(x+(yx)θ)dθ=v(x)+10v(x+(yx)θ)(yx)dθ, (4.11)
    v(y)=v(x)+v(x)(yx)+10v(x+(yx)θ)(yx)2(1θ)dθ, (4.12)

    we have

    (Lεwε0)(x,t)=Rd1ε2J(z,tεr)ν(xε,xεz)[v(x,t)εzv(x,t)+ε210v(xεzθ,t)z2(1θ)dθ+εχ1(xεz,tεr)v(x,t)ε2χ1(xεz,tεr)zv(x)+ε3χ1(xεz,tεr)10v(xεzθ,t)z2(1θ)dθ+ε2χ2(xεz,tεr)v(xεz)v(x)εχ1(xε,tεr)v(x,t)ε2χ2(xε,tεr)v(x,t)]dz, (4.13)
    Hεwε0(x,t)=wε0(x,t)t1εd+2RdJ(xyε,tεr)ν(xε,yε)(wε0(y,t)wε0(x,t))dy=1p|wε0|1ppvt(x,t)+1εM0(x,t)+Mε(x,t)+ϕε(x,t)asε0+, (4.14)

    where

    ϕε(x,t)=1p|wε0(x,t)|1ppϕ(time)ε(x,t)ϕ(space)ε(x,t), (4.15)

    and

    ϕ(space)ε=1ε2RddzJ(z,tεr)ν(xε,xεz){ε210v(xεzq,t)zz(1q)dqε22v(x,t)zz+ε3ϰ1(xεz,tεr)10v(xεzq,t)zz(1q)dqε3ϰ2(xεz,tεr)10v(xεzq,t)zdq}, (4.16)
    M0(x,t)=ε2r1p|wε|1ppχ1sv(x,t)v(x,t)[RdJ(z,tεr)ν(xε,xεz)(z+χ1(y,t,xεz,tεr)χ1(x,t,xε,tεr))dz], (4.17)
    Mε(x,t)=ε2r1p|wε|1ppχ2sv(x,t)v(x,t)[RdJ(z,tεr)ν(xε,xεz)(zχ1(y,t,xεz,tεr)+χ2(y,t,xεz,tεr)χ2(x,t,xε,tεr)+12z2)dz]. (4.18)

    Case 2. For p(1,2],

    wε1(x,t)t=1p|wε1|1pp[vt(x,t)+ε1rp|v|p1p1sv(x,t)+ε2rp|v|p1p2sv(x,t)+ϕ(time)ε(x,t)],

    with

    ϕ(time)ε=ε(p1)|v|(1+1p)vvt1(x,t,xε,tεr)v(x,t)+ε2(p1)|v|(1+1p)vvt2(x,t,xε,tεr)v(x,t)+εp|v|p1p1tv(x,t)+ε2p|v|p1p2tv(x,t)+εp|v|p1p1(x,t,xε,tεr)vt(x,t)+ε2p|v|p1p2(x,t,xε,tεr)vt(x,t). (4.19)

    Set z=xyε; we get

    (Lεwε1)(x,t)=1ε2RdJ(z,s)ν(xε,xεz){v(xεz,t)+εp|v(y,t)|p1p1(y,t,xεz,tεr)v(xεz,t)+ε2p|v(y,t)|p1p2(y,t,xεz,tεr)v(xεz,t)v(x,t)εp|v(x,t)|p1p1(x,t,xε,tεr)v(x,t)ε2p|v(x,t)|p1p2(x,t,xε,tεr)v(x,t)}dz. (4.20)

    Using the Taylor expansions again,

    Hεwε1(x,t)=1p|wε1|1ppvt+1εM0(x,t)+Mε(x,t)+1p|wε1|1ppϕ(time)ε+ϕ(space)εasε0+, (4.21)

    where

    M0(x,t)=ε2r|vwε1|p1p1sv(x,t)v(x,t)[RdJ(z,tεr)ν(xε,xεz)(z+p|v(y,t)|p1p1(y,t,xεz,tεr)p|v(x,t)|p1p1(x,t,xε,tεr))dz], (4.22)
    Mε(x,t)=ε2r|vwε1|p1p2sv(x,t)v(x,t)[RdJ(z,tεr)ν(xε,xεz)(zp|v(y,t)|p1p1(y,t,xεz,tεr)+p|v(y,t)|p1p2(y,t,xεz,tεr)p|v(x,t)|p1p2(x,t,xε,tεr)+12z2)dz], (4.23)

    and ϕ(space)ε is similar to that in the case that 0<p<1.

    Case 3. For p=1, similar to the derivations in [13,14], we only need to notice that the correctors are the functions χi(xε,tεr)(i=1,2). Therefore, substituting the expression on the right-hand side of Hε for wε2 in Eq (4.6) and using the notation xε=xϖεt we get

    Hεwε2(x,t)=vt(xε,t)+1εM0(x,t)+Mε(x,t)+ϕε(x,t)asε0+, (4.24)

    where

    ϕε(x,t)=ϕ(time)ε(x,t)ϕ(space)ε(x,t), (4.25)

    and

    M0(x,t)=ε2rχ1sv(xε,t)v(xε,t)[RdJ(z,tεr)ν(xε,xεz)(z+χ1(xεz,tεr)χ1(xε,tεr))dz+ϖ], (4.26)
    Mε(x,t)=ε2rχ2sv(xε,t)[RdJ(z,tεr)ν(xε,xεz)(z+χ1(xεz,tεr)+χ2(xεz,tεr)χ2(xε,tεr)+12z2)dz+ϖχ1]. (4.27)

    Owing to the order in $\varepsilon$, we put the terms of order $O(\varepsilon)$ and the higher-order terms $o(\varepsilon)$ into the remainder as the fourth part. For the given functions $\chi_{1}$ and $\chi_{2}$, it is easy to show that the fourth part is an infinitesimal $O(\varepsilon)$ as $\varepsilon\to 0^{+}$.

    This completes the proof of Lemma 4.2.

    We now consider the asymptotic decomposition of $(L_{\varepsilon}w_{\varepsilon})(x,t)$ in $\varepsilon$, deal with the last three parts $\mathcal{M}_{0}(x,t)$, $\mathcal{M}_{\varepsilon}(x,t)$ and $\phi_{\varepsilon}(x,t)$, and obtain a more precise asymptotic behavior.

    1. Construct auxiliary functions to guarantee that the first part $\mathcal{M}_{0}(x,t)$ of $L_{\varepsilon}$ satisfies $\mathcal{M}_{0} = 0$ for $0<r\leq 2$ and $\varepsilon^{r-2}\mathcal{M}_{0} = 0$ for $r>2$.

    2. From the second part $\mathcal{M}_{\varepsilon}(x,t)$ of $L_{\varepsilon}$ we obtain a second-order differential operator $L^{0}$ such that $L^{0}v(x,t) = \Theta\cdot\nabla\nabla v$ as $\varepsilon\to 0^{+}$.

    3. The third part $\phi_{\varepsilon}$ satisfies

    \begin{eqnarray} \lim\limits_{\varepsilon\to 0^{+}}\|\phi_{\varepsilon}\|_{L^{2}((0,T),L^{2}(\mathbb{R}^{d}))} = 0. \end{eqnarray} (4.28)

    After finishing the above three steps, for $w_{\varepsilon}(x,t) = v(x,t)+\varepsilon u_{1}(x,t)+\varepsilon^{2}u_{2}(x,t)$ we can prove that the operator $L_{\varepsilon}$ has the following asymptotic representation:

    \begin{eqnarray} (L_{\varepsilon}w_{\varepsilon})(x) = \Theta\cdot\nabla\nabla v+\phi_{\varepsilon}(x,t)\quad\mathrm{as}\;\varepsilon\to 0^{+}. \end{eqnarray} (4.29)

    We now construct an auxiliary function in order to prove that $\mathcal{M}_{0} = 0$, where $\mathcal{M}_{0}(x,t)$ is defined by Eq (4.17). Because $v(x,t)$ and its derivatives are in $C((0,T)\times\mathcal{S}(\mathbb{R}^{d}))$, we need not deal with this part and only need to establish the following theorem.

    Theorem 4.1. There exist a function $\chi_{1}(x,t,\xi,s)\in L^{\infty}(\mathbb{R}^{d}\times I;L^{2}_{per}(\mathbb{T}\times Y))$ and $\varpi\in\mathbb{R}^{d}$ such that $\mathcal{M}_{0} = 0$.

    Proof. We need to consider the solvability of the following equations, according to the time scale $r$ and the given $p\in(0,2]$. For $r = 2$ and $0<p\leq 1$, it is straightforward to see that $\chi_{1}(x,t,\xi,s) = \chi_{1}(\xi,s)$ when $p = 1$. In the case $p\neq 1$, we sometimes omit $x$ and $t$ for simplicity and write $\chi_{1}(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^{r}})$ as $\chi_{1}(\frac{x}{\varepsilon},\frac{t}{\varepsilon^{r}})$. For any $\varepsilon>0$, we have

    ε2r1p|wε|1ppχ1s1p|wε|1ppϖ[RdJ(z,tεr)ν(xε,xεz)(z+χ1(xεz,t,xεz,tεr)χ1(x,t,xε,tεr)dz]=0. (4.30)

    Denote $\xi = \frac{x}{\varepsilon}$ and $s = \frac{t}{\varepsilon^{r}}$; $\xi$ is a periodic variable, $\xi\in\mathbb{T}^{d} = [0,1]^{d}$, and $\nu$, $\chi_{1}(\xi,s)$ and $\chi_{2}(\xi,s)$ are functions on $\mathbb{T}^{d}$. We solve Eq (4.30) for the functions $\chi_{1}(\xi,s)$ and $\chi_{2}(\xi,s)$ on the torus. Let

    \begin{eqnarray*} \psi_{\varepsilon}(x,t) = \varepsilon^{2-r}\frac{1}{p}|w_{\varepsilon}|^{\frac{1-p}{p}}. \end{eqnarray*}

    For (x,t)Rd×I, ψε(x,t)0 as ε0, and from

    ψεχ1sεr2ψεϖ=RdJ(z,s)μ(ξ,ξz)(z+χ1(εξεz,t,ξz,s)χ1(εξ,t,ξ,s))dz, (4.31)

    we have

    Ξ(y,s)=RdJ(ηy,s)ν(y,q)(yq)dq+εr2ψεϖ=(y,s)+εr2ψεϖ. (4.32)

    We consider that X=L2((0,1)×Td). Let L:XD(L)X be defined by Lu=u, where u is understood in the sense of distributions, i.e.,

    10u(t)ψ(t)dt=10u(t)ψ(t)dt,ψC0(0,1)

    with domain Dom(L)={uX:uX,u(0)=u(1)}. We can see that

    Lu,ρ=10u(t),ρ(t)Vdt,uDom(L),ρX.

    It is easy to obtain that L:XD(L)X is densely defined maximal monotone. For details about the operator L, the reader is referred to Zeidler [30, Prop. 32.10]. Let ˜N:XX be defined by

    ˜NΠ=RdJ(ηζ,s)ν(η,ζ)(Π(εζ,ζ)Π(εη,η))dζ=(GK)Π,
    ˜NΠ,Πm=10TdRdJ(ηζ,s)ν(η,ζ)m(η)(Π(εζ,ζ)Π(εη,η))Π(εη,η)dζdηds=1210Tdμ(η,s)RdJ(ηζ,s)ν(η,ζ)m(η)(Π(εζ,ζ)Π(εη,η))2dζdηds0;

    then, ˜N is a monotone operator in X.

    1Π=RdJ(ηζ,s)ν(η,ζ)Π(ζ)dζ, (4.33)
    2Π=RdJ(ηζ,s)ν(η,ζ)dζΠ(η); (4.34)

    we know that 1 is bounded in XX and 2 is a positive and invertible operator. Denote

    κπ(ξ)=RdJ(ξq,s)ν(ξ,q)Π(q)dq,πL2(Td). (4.35)

    We first introduce a proposition.

    Proposition 1. [6] For ˘J(η)=kZdJ(η+k),ηTd, the operator

    κφ(ξ)=RdJ(ξq)ν(ξ,q)φ(q)dq=Td˘J(ξη)ν(ξ,η)φ(η)dη,φL2(Td) (4.36)

    is a compact operator in L2(Td).

    From Proposition 1 and Lemma 2.1, we have

    (˜Nχ,χ)12α2110TdTdJ(xy,s)|χ(y,s)χ(x,s)|2dydxdscβ110Td|χ1|Td|Tdχ|2dyds=cβ1||χ||2L2(T×Td); (4.37)

    we know the ˜N is coercive.

    Lemma 4.3. There exists a function χε1(x,t,ξ,s) on Rd×R×[0,1]d×[0,1] such that Eq (4.30) holds true.

    Proof. We first rewrite Eq (4.31) as follows

    ψε(x,t)Lχε1(x,t,y,s)˜Nχε1(x,t,y,s)=Ξ, (4.38)

    where $\psi_{\varepsilon}(x,t)$, $L$ and $\tilde{N}$ are defined above; $L:X\supseteq D(L)\to X$ is a densely defined maximal monotone operator in $X$, and $\tilde{N}$ is bounded, pseudomonotone and coercive from $X$ to $X$ by the inequality (4.37). In view of Eq (4.30), we fix an arbitrary $(x,t)\in\mathbb{R}^{d}\times I$. By applying Theorem 2.4, Eq (4.38) has a solution; that is, there exists a function $\chi_{1}^{\varepsilon}(x,t,\xi,s)$ on the torus $\mathbb{T}^{d}\times\mathbb{T}$ such that Eq (4.38) holds true. The proof is completed.

    For p>1 and ˜ψε=ε2r|vwε1|p1p, we have

    M0(x,t)=ψεε1sv(x,t)v(x,t)[RdJ(z,tεr)ν(xε,xεz)(z+p|v(y,t)|p1pε1(y,t,xεz,tεr)p|v(x,t)|p1pε1(x,t,xε,tεr))dz]=0. (4.39)

    Then we have the following lemma.

    Lemma 4.4. Fix ε>0; there exists a function ε1 on Rd×R×[0,1]d×[0,1] such that Eq (4.39) holds true.

    The proof is similar to the case of 0<p<1, so we omit the details.

    For $p = 1$, $0<r\leq 2$ and $\psi_{\varepsilon} = \varepsilon^{2-r}$, the corrector $\chi_{1}$ does not depend on $x$ and $t$ because the time derivative term tends to zero as $\varepsilon\to 0$. We obtain the existence of $\chi_{1}$. Next, we also need to determine $\varpi$. The solvability condition for Eq (4.31) uses the fact that $\tilde{N}$ is the sum of a positive invertible operator $K$ and a compact operator $G$. In [14] it is shown that the dimension of the space $\mathrm{Ker}(K-G)$ is one and that

    Ker(KG)=K1(ξ)π0(ξ):=m(ξ), (4.40)

    where π0(ξ) and m(ξ) are positive and bounded.

    According to the Fredholm theory, $\dim\mathrm{Ker}(G-K) = \dim\mathrm{Ker}(G^{*}-K^{*})$; thus there exists $m(\xi)\in\mathrm{Ker}(G^{*}-K^{*})$ satisfying $(G^{*}-K^{*})m(\xi) = 0$ such that

    \begin{eqnarray*} \int_{0}^{1}\int_{\mathbb{T}^{d}}\int_{\mathbb{R}^{d}}J(\xi-q,s)\nu(\xi,q)(\xi-q)\,dq\,m(\xi)\,d\xi\,ds-\varpi\int_{0}^{1}\int_{\mathbb{T}^{d}}m(\xi)\,d\xi\,ds = 0. \end{eqnarray*}

    Taking the normalized $m(\xi)$ with $\int_{\mathbb{T}^{d}}m(\xi)d\xi = 1$, we choose $\varpi$ as

    \begin{eqnarray*} \varpi = \int_{0}^{1}\int_{\mathbb{T}^{d}}\int_{\mathbb{R}^{d}}J(\xi-q,s)\nu(\xi,q)(\xi-q)\,dq\,m(\xi)\,d\xi\,ds. \end{eqnarray*}
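    As a simple sanity check (our example, not part of the original argument): if $\nu\equiv 1$, then $m\equiv 1$ satisfies the normalization $\int_{\mathbb{T}^{d}}m(\xi)d\xi = 1$, and since $J(\cdot,s)$ is even the change of variables $q\mapsto 2\xi-q$ gives
    $$\int_{\mathbb{R}^{d}}J(\xi-q,s)(\xi-q)\,dq = 0\quad\text{for every }\xi\text{ and }s,$$
    so the effective drift vanishes, $\varpi = 0$, as expected for a spatially symmetric medium.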

    We also need the following lemma in order to use the symmetry of the integral; it obviously holds when the nonlocal structure is symmetric.

    Lemma 4.5. The compact operator (K1G) has a simple eigenvalue at λ=1. The corresponding eigenfunction η0 satisfies the equation

    (K1G)η0=η0,

    and there exists a unique (up to an additive constant) function mL2(Td) satisfying

    (K1G)m=m,RdJ(qξ)ν(q,ξ)m(q)dq=m(ξ)RdJ(ξq)ν(ξ,q)dq,

    i.e., Span(m) = Ker(KG). This function obeys the following lower and upper bounds:

    0<κ1η0(ξ)κ2<,0<˜κ1m(ξ)˜κ2<,ξTd,

    where κ1,κ2,˜κ1,˜κ2 are positive constants.

    We can obtain from the Krein-Rutman theorem [40] that the operator (K1G) has the maximal eigenvalue equal to 1.

    Since $\chi_{1}^{\varepsilon}$ depends on $\varepsilon$, we also need to discuss strong measurability in $(x,t)$. This section is devoted to the existence, uniqueness and regularity of solutions to the cell problems at the critical ratio $r = 2$. The cases $r<2$ and $r>2$ are similar to the case $r = 2$; here $E$ and $\mathbb{T}^{d}$ denote the time and space cells, respectively. We simply write $w(y,s)$ for the functions $w = w(x,t,y,s)$, omitting the variables $x$ and $t$ unless confusion may arise. We first set $V = L^{2}_{per}(Y)$; the $\{\chi_{1}^{k}\}_{k = 1}^{d}$ in this section refer to the components of $\chi_{1}$.

    Case I. For r=2 and 0<p<1. For each (x,t)Rd×(0,T), the cell problem reads that

    {1p|v|(1p)/psχ1(ξ,s)=TdJ(ξq,s)ν(ξ,q)(qξ+χ1(q,s)χ1(ξ,s))dqdξ,χk(y,0)=χk(y,1),yTd, (4.41)

    such that MY(χk1(,s)y)=0 for sT. It can be regarded as a constant to discuss the existence, uniqueness and regularity of solutions to Eq (4.41) in view of v=v(x,t) depending only on (x,t) for each (x,t) that is fixed. In case that v(x,t)0, assuming

    Ξ+1p|v|(1p)/pϖ[L2(J;L2per(Y))]d,

    one can construct a unique weak solution χk1(x,t,,)L2(E;V)H1(E;L2(V)), where V=L2per(Y)/R,kN.

    Lemma 4.6. (Strong measurability in (x,t)) Assume that r=2 and p(0,1). For kd, the function:

    (x,t)χk1(x,t,,)(resp.,|v(x,t)|(1p)/pχk1(x,t,,))

    is strongly measurable in Rd×(0,T) with values in L2(E;V). Moreover,

    χk1L(Rd×I;L2(E;V)),|v|(1p)/pχk1L(Rd×I;H1(E;L2(V))).

    Proof. Since |v|(1p)/p lies in L(p+1)/(1p)(Rd×I), one can take a sequence (ψε) of step functions from Rd×I into R such that ψε(x,t)(1/p)|v(x,t)|(1p)/p for (x,t)M0, where M0 is a measurable set in Rd×I satisfying |(Rd×I)M0|=0, as ε+. Fix (x,t)M0 and let χε1(x,t,,)L2(E;V) be the unique solution to

    {1p|wε|(1p)/psχε1(ξ,s)=RdJ(ξq,s)ν(ξ,q)(qξ+χε1(q,s)χε1(ξ,s))dq,χε1(y,0)=χε1(y,1),yTd (4.42)

    such that MY(χε1(,s)y)=0. Moreover, we note that the vector-valued function (x,t)χε(x,t,,) is defined over Rd×I. Test Eq (4.42) by using χε1 and with respect to the ξ integral in Y. We observe by using the nonlocal Poincaré inequality that

    ψε2ddsχε1(y,s)2L2(Y)+β1χε1(y,s)2L2(Y)Y|Ξ|χε1(y,s)dyβ12χε1(y,s)2L2(Y)+CΞ2L2(Y).

    Integrate both sides over (0,1) and employ the periodicity χε(,0)=χε(,1) in Td. It then follows that

    β1210χε1(y,s)2L2(Y)dsC10Ξ2L2(Y)ds;

    then, one can also get

    ψε210sχε1(y,s)2L2(Y)dsC10Ξ2L2(Y)ds.

    Therefore we can select a subsequence and still note χε1 as a limit χ1(x,t,,)L2(E;V) such that

    |v(x,t)|(1p)/pχ(x,t,,)H1(E;L2(V))

    and

    χε1(x,t,,)χ1(x,t,,)weaklyinL2(E;V).ψε(x,t)χε1(x,t,,)1p|v(x,t)|1p1χ1(x,t,,)weaklyinH1(E;L2(V)).

    Hence, $(x,t)\mapsto\chi_{1}(x,t,\cdot,\cdot)$ is weakly measurable in $\mathbb{R}^{d}\times I$ with values in $L^{2}(E;V)$; therefore, due to Pettis' theorem, it is also strongly measurable. Moreover, using the fact that $\psi_{\varepsilon}(x,t)\to\frac{1}{p}|v(x,t)|^{(1-p)/p}$ a.e. in $\mathbb{R}^{d}\times I$ as $\varepsilon\to 0$, it can be verified that the unique solution $\chi_{1}$ solves Eq (4.41) for a.e. $(x,t)\in\mathbb{R}^{d}\times I$.

    Finally, it is easy to check that

    χ1,|v(x,t)|(1p)/pχ1L(Rd×(0,T);H1(E;L2(V))).

    In case that 0<p<1, for a.e. (x,t)Rd×I and all lV and l1Cper(E), we observe that

    10Td[1p|v(x,t)|(1p)/pχ1(ξ,s)l(ξ)sl1(s)RdJ(ξq,s)ν(ξ,q)(qξ+χ1(q,s)χ1(ξ,s))dql(ξ)l1(s)]dξds=0. (4.43)

    Next we will show that

    1p|v(x,t)|(1p)/pχ1L2(Rd×I;H1(E;V)). (4.44)

    Actually let us define ξ(x,t,,)L2(E;V) by

    10ξ(x,t,,s),ς(,s)Vds=10TdRdJ(ξq,s)ν(ξ,q)(qξ+χ1(q,s)χ1(ξ,s))dqς(ξ,s)dξds (4.45)

    for ςL2(J;V). Then ξ:Rd×IL2(E;V) is weakly measurable, and actually it is strongly measurable by Pettis' theorem.

    Since χ1L2(Rd×I×Td×E), one can verify that ξL2(Rd×I;L2(E;V)). Furthermore, we deduce by Eq (4.45) that

    101p|v|(1p)/pχ1(x,t,,s)st1(s)ds=10ξ(x,t,,s)t1(s)dsinV,

    which along with the arbitrariness of t1Cper(E) in the distributional sense for a.e. (x,t,s)Rd×I×E implies that

    1p|v|(1p)/psχ1(x,t,ξ,s)=ξ(x,t,,s)inV

    This yields Eq (4.44). It is easy to check that

    1p|v(x,t)|(1p)/pχ(x,t,,1)=1p|v(x,t)|(1p)/pχ(x,t,,0)inV

    for a.e. (x,t)Rd×I. Case I is proved.

    Case II. r=2 and 1<p2. It is enough to consider the case that v(x,t)0 only. For each (x,t)[v0]:={(x,t)Rd×(0,T):v(x,t)0}, the existence and uniqueness of a weak solution k1(x,t,,)L[v0]L2(E;V) to the cell problem can be verified

    {s1(ξ,s)=TdJ(ξq,s)ν(ξ,q)(qξ+p|v|(p1)/p1(ξ,s)p|v|(p1)/p1(q,s))dq,k1(y,0)=k1(y,1),yTd, (4.46)

    such that MY(k(,s)y)=0 for sT.

    Moreover, we claim that

    |u|(p1)/pk1L([v0];L2(E;V)),k1L([v0];L2(E;V)),

    which implies that kL([v0];L2(E;V)).

    The proof is similar to the case for 0<p<1.

    Case III. 0<r<2 and 0<p1. Let ψε(x,t)=ε2rψε0 for (x,t)Rd×I as ε0,χ1 satisfies the following equation

    {RdJ(ξq,s)ν(ξ,q)(qξ+χ1(q,s)χ1(ξ,s))dq=0,χ1(y,0)=χ1(y,1),yTd (4.47)

    as ϖ=0; we found that χ1 does not include x and t, and that χ1L2(E×V); then, we have

    u1(x,t,y,s)=dk=1xk(|u|p1u(x,t))χk1(y,s). (4.48)

    Case IV. 0<r<2 and 1<p2. It is enough to just consider the case that v(x,t)0. For each (x,t)[v0]:={(x,t)Rd×(0,T):v(x,t)0}, we can verify the existence and uniqueness of the solution k1(,)L2(E;V)H1(E;L2(V)) to the cell problem

    {TdJ(ξq,s)ν(ξ,q)(qξ+p|v|(p1)/p1(ξ,s)p|v|(p1)/p1(q,s))dq=0,k1(y,0)=k1(y,1),yTd, (4.49)

    where, actually, 1=1p|v|(1p)/pχ1, the situation is similar to Case III.

    For r>2 and 0<p2, we consider two cases.

    Case V. r>2 and 0<p1. For any ε>0, we have

    1p|wε|1ppχ1sεr21p|wε|1ppϖ=εr2[RdJ(z,tεr)μ(xε,xεz)(z+χ1(xεz,tεr)χ1(xε,tεr))dz]; (4.50)

    let ε0; we get

    1p|v|1ppχ1s=0, (4.51)

    which implies that χ1 is in fact independent of s and satisfies

    1p|v|1ppϖ=Rd10J(z,s)dsμ(ξ,ξz)(z+χ1(ξz)χ1(ξ))dzinTd. (4.52)

    Case VI. For r>2 and 1<p2. For any ε>0, we have

    |vwε1|p1pΦ1sεr2[RdJ(z,tεr)μ(xε,xεz)(z+p|v|p1pΦ1(xεz,tεr)p|v|p1pΦ1(xε,tεr))dz]=0, (4.53)

    let ε0; we get

    Φ1s=0, (4.54)

    which means that Φ1(y,s) does not depend on s and satisfies Eq (4.53).

    Actually, the existence of $\chi_{2}$ is proved similarly to that of $\chi_{1}$; the steps are the same as before. Next, we prove that the symmetric part of the matrix $\Theta$ defined in Theorem 3.2 is positive definite.

    Case I. r=2 and 0<p1:

    From Eq (4.18), we have

    Mε(x,t)=ε2r1p|wε|1ppχ2s1p|wε|1ppϖχ1[RdJ(z,tεr)ν(xε,xεz)(zχ1(y,t,xεz,tεr)+χ2(y,t,xεz,tεr)χ2(x,t,xε,tεr)+12z2)dz]. (5.1)

    For r=2,

    ε2r1p|wε|1ppχ2s1p|v|1ppχ2sasε0. (5.2)

    Next, using the time periodicity of J, we consider

    Θ(x,t)=10Y[RdJ(z,s)μ(y,yz)(zχ1(x,t,yz,s)+χ2(x,t,yz,s)χ2(x,t,y,s)+12z2)dz1p|v|1pp(χ2sϖχ1)]dyds, (5.3)
    Θij(x,t)=Θ(x,t)10Tdm(ξ)dξds=10TdRd12(ξq)i(ξq)jJ(ξq,s)ν(ξ,q)m(ξ)dqdξds10TdRdJ(ξq,s)ν(ξ,q)m(ξ)(ξq)iχj1(x,t,q,s)dqdξds+ϖi10TdRd1p|v|1ppχj1(x,t,ξ,s)ν(ξ,q)m(ξ)dqdξds10TdRd1p|v|1ppχ2sν(ξ,q)m(ξ)dqdξds, (5.4)

    where the last term on the right-hand side of Eq (5.4) is zero by the periodicity of $\chi_{2}$.

    Our aim is to show that the symmetric part of the right-hand side of Eq (5.4) is equal to B such that

    Bij=˜Θij+˜Θji=10TdRd(ξq)i(ξq)jJ(ξq,s)ν(ξ,q)m(ξ)dqdξds10TdRdJ(ξq,s)ν(ξ,q)m(ξ)((ξq)iχj1+(ξq)jχi1)dqdξds+ϖi10Td1p|v|1ppχj1(x,t,ξ,s)ν(ξ,q)m(ξ)dξds+ϖj10Td1p|v|1ppχi1(x,t,ξ,s)ν(ξ,q)m(ξ)dξds. (5.5)

    We also want to prove Bij is positive definite. For brevity, we write χ1(x,t,ξ,s)=χ1(ξ,s). From Eq (5.5), we have

    Bij=Bij1+Bij2+Bij3, (5.6)

    where

    Bij1=10TdRd(ξq)i(ξq)jJ(ξq,s)ν(ξ,q)m(ξ)dqdξds, (5.7)
    Bij2=10TdRdJ(ξq,s)ν(ξ,q)m(ξ)(χ1(ξ,s)χ1(q,s))i(χ1(ξ,s)χ1(q,s))jdqdξds, (5.8)
    Bij3=10TdRdJ(ξq,s)ν(ξ,q)m(ξ)((ξq)i(χ1(ξ,s)χ1(q,s))j+(ξq)j(χ1(ξ,s)χ1(q,s))i)dqdξds. (5.9)

    Obviously, we find that Bij1 is the first integral of Eq (5.5). Let us rearrange the integral in Bij3 as follows:

    Bij3=10TdRdJ(ξq,s)ν(ξ,q)m(ξ)((ξq)iχj1(ξ,s)+(ξq)jχi1(ξ,s))dqdξds10TdRdJ(ξq,s)ν(ξ,q)m(ξ)((ξq)iχj1(q,s)+(ξq)jχi1(q,s))dqdξds=˘ιij2+ιij2. (5.10)

    Then, ιij2 coincides with the second integral in Eq (5.5). Further, we rearrange the integral ˘ιij2 and recall the definition of the function in Eq (4.32):

    ˘ιij2=10Tdi(ξ,s)χj1(ξ,s)ν(ξ,q)m(ξ)dξds+10Tdj(ξ,s)χi1(ξ,s)ν(ξ,q)m(ξ)dξds=10Tdχj1(ξ,s)ν(ξ,q)m(ξ)(1p|v|1ppϖi+Aχi1(ξ,s)1p|v|1ppχi1s)dξds+10Tdχi1(ξ,s)ν(ξ,q)m(ξ)(1p|v|1ppϖj+Aχj1(ξ,s)1p|v|1ppχj1s)dξds=10Td1p|v|1pp(ϖiχj1(ξ,s)+ϖjχi1(ξ,s))ν(ξ,q)m(ξ)dξds+10Tdν(ξ,q)m(ξ)(χj1(ξ,s)Aχi1(ξ,s)+χi1(ξ,s)Aχj1(ξ,s))dξds10Tdν(ξ,q)m(ξ)1p|v|1pp(χj1(ξ,s)χj1s+χi1(ξ,s)χi1s)dξds.

    The last two formulas on the right side of the above equation are zero by using the periodicity of χ1. Denote

    ij2=ϖi10Td1p|v|1ppχj1(ξ,s)ν(ξ,q)m(ξ)dξds+ϖj10Td1p|v|1ppχi1(ξ,s)ν(ξ,q)m(ξ)dξds,
    ˜ij2=10Tdχj1(ξ,s)ν(ξ,q)m(ξ)(Aχi1(ξ,s))dξds+10Tdχi1(ξ,s)ν(ξ,q)m(ξ)(Aχj1(ξ,s))dξds.

    Then, ij2 coincides with the third integral in Eq (5.5). We only need to prove that Bij2=˜ij2. We have

    Bij2=10TdRdJ(ξq,s)ν(ξ,q)m(ξ)(χ1(ξ,s)χ1(q,s))iχj1(ξ,s)dqdξds10TdRdJ(ξq,s)ν(ξ,q)m(ξ)(χ1(ξ,s)χ1(q,s))iχj1(q,s)dqdξds=10TdAχi1(ξ,s)χj1(ξ,s)ν(ξ,q)m(ξ)dξds+ιij3.

    We rearrange ιij3:

    ιij3=10TdRdJ(ξq,s)ν(ξ,q)m(ξ)(χ1(q,s)χ1(ξ,s))iχj1(q,s)dqdξds=10TdRdJ(ξq,s)ν(ξ,q)m(ξ)(χ1(ξ,s)χ1(q,s))iχj1(ξ,s)dqdξds=10TdRdJ(ξq,s)ν(ξ,q)m(ξ)χi1(ξ,s)χj1(ξ,s)dqdξds10TdRdJ(ξq,s)ν(ξ,q)m(ξ)χj1(q,s)χi1(ξ,s)dqdξds=10TdAχj1(ξ,s)χi1(ξ,s)ν(ξ,q)m(ξ)dξds. (5.11)

    Thus, Bij2=˜ij2 and the proof of Eq (5.4) is done by this relation. The structure of Eq (5.4) means that (Ir,r)0, for any qRd; moreover, (Ir,r)>0 since m>0 and χ1(q,s) is the periodic function while q is a linear function. Consequently, ((ξq)+(χ1(ξ,s)χ1(q,s)))r2 will not be identical to 0 if r0.

    Case II. r>2 and 0<p1:

    For r>2, χ1 does not include s, and

    Θ=TdRd12(ξq)(ξq)[10J(ξq,s)ds]ν(ξ,q)m(ξ)dqdξTdRd[10J(ξq,s)ds]ν(ξ,q)m(ξ)(ξq)χ1(q)dqdξ+ϖ10TdRd1p|v|1ppχ1(ξ)ν(ξ,q)m(ξ)dqdξds. (5.12)

    Case III. r<2 and 0<p1:

    ε2r1p|wε|1ppχ2s0asε0; (5.13)

    there is one term fewer here than in Case I, and $\chi_{1}$ does not depend on $x$ and $t$.

    Case IV. 0<r< and 1<p2:

    The proof is similar to that in Case I, so it will not be described here.

    The error estimate is similar to that for the linear equation, so we do not provide a detailed description here; we state the result for $p = 1$, and the other situations can be obtained by an analogous argument.

    Proposition 2. Let vC((0,T),S(Rd)). For the functions ϕ(time)ε and ϕ(space)ε:

    ϕ(time)ε(x,t)=εχ1tv(xε,t)+ε2χ2tv(xε,t)+εχ1(xε,tεr)vt(xε,t)+ε2χ2(xε,tεr)(ϖε)v(xε,t)+ε2χ2(xε,tεr)vt(xε,t), (6.1)
    ϕ(space)ε(x,t)=1ε2RdJ(z,tεr)ν(xε,xεz){ε22v(xε,t)zz+ε20v(xεεzq,t)zz(1q)dq+ε3χ1(xεz,tεr)10v(xεεzq,t)zz(1q)dqε3χ2(xεz,tεr)10v(xεεzq,t)zdq}dz; (6.2)

    we have

    ϕ(space)ε20andϕ(time)ε20asε0, (6.3)

    where 2 is the norm in L2((0,T),L2(Rd)) and xε=xϖεt.

    Proof. The convergence for ϕ(time)ε immediately follows from the representation (6.1) for this function. For the function ϕ(space)ε, the proof is completely analogous to the proof of [13, Proposition 5]. The proof of Proposition 2 is done.

    Together with Proposition 2, we get Eq (4.28), that is, limε0+||ϕε||L2(I,L2(Rd))=0.

    Let $u^{0}(x,t)$ be a solution of Eq (3.4) with $u^{0}(x,0) = \varphi\in\mathcal{S}(\mathbb{R}^{d})$. For any $T>0$, $v(x,t)\in C((0,T),\mathcal{S}(\mathbb{R}^{d}))$, and we insert it into the equation satisfied by $u_{\varepsilon}$ by constructing the approximate auxiliary functions (4.4)–(4.6). It follows from Lemma 4.2 that $w_{\varepsilon}$ satisfies the following equation

    wε(x,t)1ptLεwε=1p|wε(x,t)|1ppu0t(xε,t)Θu0(xε,t)+ϕε(x,t)=Φε(x,t),wε(x,0)=φ(x)+ψε(x),

    where xε=xϖεt and

    ψε(x)=εχ1(xε,0)φ(x)+ε2χ2(xε,0)φ(x)L2(Rd).

    Consequently, the difference ˆvε(x,t)=wε(x,t)vε(x,t), where vε is the solution of Eq (3.26), which satisfies the following problem:

    ((wε)1p(vε)1p)tLεˆvε(x,t)=Φε(x,t),vε(x,0)=ψε(x). (6.4)

    Notice that, by Proposition 2, we have $\|\psi_{\varepsilon}\|_{L^{2}(\mathbb{R}^{d})} = O(\varepsilon)$ and $\|\Phi_{\varepsilon}\|_{2} = o(1)$ when $u(x,t)\neq 0$.

    We will show that $(v_{\varepsilon})^{1/p}-(w_{\varepsilon})^{1/p}$ tends to zero in $L^{\infty}((0,T);L^{2}_{loc}(\mathbb{R}^{d}))$ as $\varepsilon\to 0$. Denote

    Z=Lloc((0,T),BV(Rd)))C((0,T),L1loc(Rd)),Z=L1((0,T),L1loc(Rd)).

    Proposition 3. Let vεZ be the solution of Eq (6.4) with a small ψε and ϕε:

    ϕεL2((0,T),L2(Rd))=o(1),ψεL2(Rd)=O(ε)asε0.

    Then, we have

    (wε)1/p(vε)1/pL((0,T),L2loc(Rd))0asε0.

    Proof. For $p = 1$, please refer to [14]; we mainly discuss the case $p\neq 1$. Let $(w_{\varepsilon})^{1/p} = v_{1}$, and let $(v_{\varepsilon})^{1/p} = u_{\varepsilon} = v_{2}$; they satisfy the following equations

    {v1/p1tLεv1=Φε,v1(x,0)=φ(x)+ψε(x),v1/p2tLεv2=0,v2(x,0)=φ(x). (6.5)

    By subtraction, we have

    t((δILt))Lt(v1v2)=Φε,tI, (6.6)

    =(δILt)1(v1/p1v1/p2) and X=L2(U). By multiplying both sides of Eq (6.6) with the test function and integrate it, we obtain

    (Φε,X=t((δILt)),X+(δILt)(v1v2),Xδ(v1v2),X=t((δILt)),X+v1/p1v1/p2,v1v2Xδ(v1v2),X.

    Applying the Cauchy-Schwartz inequality and using (Φε,Xo(ε), the monotonicity of vv1/p yields

    o(ε)t((δILt)),Xδ(v1v2),X=ddt(δILt),X(δILt),tXδ(v1v2),X. (6.7)

    We denote ˆJ(xε,yε,tεr)=J(xyε,tεr)ν(xε,yε)m(xε) for any U⊂⊂Rd; the second term on the right-hand side of the inequality (6.7) is rewritten as

    (δILt),tX=δ2ddt||||L2URdˆJ(xε,yε,tεr)((y,t)(x,t))(t(x,t))dydx=δ2ddt||||L2+12URdˆJ(xε,yε,tεr)((y,t)(x,t))(t(y,t)t(x,t))dydx=δ2ddt||||L2+12URdˆJ(xε,yε,tεr)((y,t)(x,t))(t(y,t)t(x,t))dydx=δ2ddt||||L2+14URdt[ˆJ(xε,yε,tεr)((y,t)(x,t))2]dydx14URdt[J(xyε,tεr)]ν(xε,yε)m(xε)((y,t)(x,t))2dydx=12ddt(δILt),X14εrURds[J(xyε,tεr)]ν(xε,yε)m(xε)((y,t)(x,t))2dydx. (6.8)

    Combining the inequality (6.7) with Eq (6.8) and using the symmetry of ν(xε,yε)m(xε), we can derive

    12ddt((δILt)),XC2ϱεrsJ(xyε,tεr)LLt,X+δ(v1v2),X+o(ε)C12sJ(xyε,tεr)L(δILt),X+o(δ)+o(ε), (6.9)

    where C1=1/(ϱεr). Applying Gronwall's inequality to (6.9), for all tˉI, we have

    (β1+δ)URd|(v1p1v1p2)(x,t)|2dx(2.6)(δILt),X((δIL0)(0),(0)X+o(δ,ε))exp(CT0sJ(xε,tεr)L(Rd)dt)C1(URd|v1p0,1v1p0,2|2dx+o(δ,ε))exp(CT0sJ(xε,tεr)L(Rd)dt). (6.10)

    When ε,δ0, then v0,1v0,20; it follows that v1/p1(t)v1/p2(t)0 in L2loc(Rd) for all tI. The proof is done.

    We now give the proof of Theorem 3.2. We have

    \begin{eqnarray*} u_{\varepsilon} = (v_{\varepsilon})^{1/p},\qquad u^{0} = (v^{0})^{1/p},\qquad \Big\|w_{\varepsilon}(x,t)-u^{0}\Big(x-\frac{\varpi t}{\varepsilon},t\Big)\Big\|_{Z}\to 0\quad\mathrm{as}\;\varepsilon\to 0. \end{eqnarray*}

    Then, by Proposition 3, the inequality (6.10) immediately yields

    \begin{eqnarray*} \Big\|u_{\varepsilon}(x,t)-u^{0}\Big(x-\frac{\varpi t}{\varepsilon},t\Big)\Big\|_{Z}\to 0\quad\mathrm{or}\quad \Big\|u_{\varepsilon}\Big(x+\frac{\varpi t}{\varepsilon},t\Big)-u^{0}(x,t)\Big\|_{Z}\to 0\quad\mathrm{as}\;\varepsilon\to 0. \end{eqnarray*}

    Thus far, we have proved Theorem 3.2 only for a dense set of initial data in $L^{1}(\mathbb{R}^{d})$, namely for $\varphi\in\mathcal{S}(\mathbb{R}^{d})$. For any $\varphi\in L^{1}(\mathbb{R}^{d})$ and $\delta>0$ there exists $\varphi_{\delta}\in\mathcal{S}(\mathbb{R}^{d})$ such that $\|\varphi-\varphi_{\delta}\|_{L^{1}(\mathbb{R}^{d})}<\delta$. We denote by $u_{\varepsilon,\delta}$ and $u^{0}_{\delta}$ the solutions of Eqs (3.2) and (3.4) with initial data $\varphi_{\delta}$. Because Eq (3.4) is the standard Cauchy problem for a parabolic operator with constant coefficients, the classical upper bound of its solution is given in [31, Theorem E] for any $T>0$:

    \begin{eqnarray} \|u^{0}_{\delta}(x,t)-u^{0}(x,t)\|_{Z}\leq\|\varphi_{\delta}-\varphi\|_{L^{1}(\mathbb{R}^{d})}<\delta,\quad t\in[0,T]. \end{eqnarray} (7.1)

    By the estimate in Proposition 3 we obtain

    \begin{eqnarray} \|u_{\varepsilon,\delta}(x,t)-u_{\varepsilon}(x,t)\|_{Z}\leq C_{1}\delta. \end{eqnarray} (7.2)

    For an arbitrarily small $\delta>0$, the upper bounds in the inequalities (7.1) and (7.2) are valid, and these imply that

    \begin{eqnarray*} \Big\|u_{\varepsilon}\Big(x+\frac{\varpi t}{\varepsilon},t\Big)-u^{0}(x,t)\Big\|_{Z}\leq\Big\|u_{\varepsilon}\Big(x+\frac{\varpi t}{\varepsilon},t\Big)-u_{\varepsilon,\delta}\Big(x+\frac{\varpi t}{\varepsilon},t\Big)\Big\|_{Z}+\Big\|u_{\varepsilon,\delta}\Big(x+\frac{\varpi t}{\varepsilon},t\Big)-u^{0}_{\delta}(x,t)\Big\|_{Z}+\|u^{0}_{\delta}(x,t)-u^{0}(x,t)\|_{Z}\to 0\quad\mathrm{as}\;\varepsilon\to 0. \end{eqnarray*}

    This completes the proof of Theorem 3.2.

    We first give a framework for nonlocal nonlinear diffusion problems.

    Lemma 8.1. [16, Corollary 1.6] For a given homogeneous jump kernel ρ, the operator Lv is defined by

    \begin{eqnarray} (L_{v}u)(x) = \int_{\mathbb{R}^{d}}[u(x)-u(y)]\,\rho\big(v(x),v(y);x,y\big)dy. \end{eqnarray} (8.1)

    For every initial condition $u_{0}\in L^{1}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d})$, the nonlinear nonlocal initial value problem

    \begin{eqnarray} \left\{\begin{array}{ll} \partial_{t}u(x,t)+L_{u}u(x,t) = 0, & (x,t)\in\mathbb{R}^{d}\times(0,\infty),\\ u(x,0) = u_{0}(x), & x\in\mathbb{R}^{d}, \end{array}\right. \end{eqnarray} (8.2)

    has a very weak solution u(x,t) such that

    \begin{eqnarray*} u\in L^{\infty}([0,\infty),L^{1}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}))\cap C([0,\infty),L^{1}(\mathbb{R}^{d})), \end{eqnarray*}

    and the solution has the following properties:

    (1) Mass is conserved: \int_{\mathbb{R}^{d}}u(t, x)dx = \int_{\mathbb{R}^{d}}u_{0}(x)dx for all t\geq 0 ;

    (2) L^{p}- norms are nonincreasing: \|u(t)\|_{p}\leq\left\|u_{0}\right\|_{p} for all p\in[1, \infty] and t\geq 0 ;

    (3) If u_{0}(x)\geq 0 for a.e. x\in \mathbb{R}^{d} , then u(t, x)\geq0 for a.e. x\in\mathbb{R}^{d} and t\geq 0 .
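    As a formal illustration of property (1) (our sketch; the rigorous statement is in [16]): when the jump kernel is symmetric in the sense that $\rho(v(x),v(y);x,y) = \rho(v(y),v(x);y,x)$, integrating Eq (8.2) in $x$ and exchanging the names of $x$ and $y$ gives
    $$\frac{d}{dt}\int_{\mathbb{R}^{d}}u(x,t)\,dx = -\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}[u(x,t)-u(y,t)]\,\rho\big(u(x,t),u(y,t);x,y\big)\,dy\,dx = 0,$$
    since the integrand changes sign under the exchange $x\leftrightarrow y$.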

    The homogenization of the local porous medium equation for nonnegative initial values can be seen in [6]. Here we consider the following nonlocal scaling operator with a time-dependent kernel:

    \begin{eqnarray} \mathscr{L}^{\varepsilon}_{t}u(x,t) = \frac{1}{\varepsilon^{d+2}}\int_{\mathbb{R}^d}J(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^r}) \nu(\frac{x}{\varepsilon},\frac{y}{\varepsilon})\Big(u^{p}(y,t)-u^{p}(x,t)\Big)dy, \end{eqnarray} (8.3)

    where p > 1 and the Cauchy problem and its corresponding effective Cauchy problem

    \begin{eqnarray} &&\left\{\begin{array}{ll} \partial_{t}u^{\varepsilon}(x,t)-\mathscr{L}^{\varepsilon}_{t}u^\varepsilon(x,t) = 0,\; & (x,t)\in\mathbb{R}^d\times(0,T),\\ u^\varepsilon(x,0) = \varphi(x), & x\in\mathbb{R}^d, \end{array}\right. \end{eqnarray} (8.4)
    \begin{eqnarray} &&\left\{\begin{array}{ll} \partial_{t}u^{0}(x,t)-\mathscr{L}^{0}_{t}u^{0}(x,t) = 0,\; &(x,t)\in \mathbb{R}^d\times(0,T),\\ u^{0}(x,0) = \varphi(x), &x\in\mathbb{R}^d, \end{array}\right. \end{eqnarray} (8.5)

    respectively, where \varphi(x)\in L^{[1, \infty]}(\mathbb{R}^d): = L^{1}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}) ,

    \begin{eqnarray} \mathscr{L}^{0}_{t}u(x,t) = \Theta(x,t)\cdot\nabla\nabla u^p, \end{eqnarray} (8.6)

    and the matrix \Theta(x, t) will be given below.
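    Note (our remark): for nonnegative solutions one has $u^{p} = |u|^{p-1}u$, so the operator $\mathscr{L}^{\varepsilon}_{t}$ in Eq (8.3) is exactly the operator of Eq (3.1) acting on nonnegative data,
    $$\mathscr{L}^{\varepsilon}_{t}u(x,t) = \frac{1}{\varepsilon^{d+2}}\int_{\mathbb{R}^{d}}J\Big(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^{r}}\Big)\nu\Big(\frac{x}{\varepsilon},\frac{y}{\varepsilon}\Big)\Big(|u(y,t)|^{p-1}u(y,t)-|u(x,t)|^{p-1}u(x,t)\Big)dy\quad\text{whenever }u\geq 0,$$
    so Theorem 8.1 below can be read as the counterpart of Theorem 3.2 for nonnegative initial data and $p>1$.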

    We now describe the main result and give a simple proof.

    Theorem 8.1. Assume that the functions J(z, s) and \nu(x, y) satisfy the conditions (2.1)–(2.3). Let u^{\varepsilon}(x, t) be a solution of the nonlocal evolutional Cauchy problem (8.4) and u^{0}(x, t) be a solution of the local Cauchy problem (8.5). Then there exists a positive definite matrix-valued function \Theta(x, t) such that for any T > 0 ,

    \begin{eqnarray} \|u^{\varepsilon}(x,t)-u^{0}(x,t)\|_{L^{1}\left((0,T), L_{loc}^{2}\left(\mathbb{R}^{d}\right)\right)}\rightarrow 0\;\mathrm{as}\; \varepsilon\rightarrow 0. \end{eqnarray} (8.7)

    Proof. Set

    \begin{eqnarray} w^\varepsilon(x,t) = u(x,t)+\varepsilon\chi_1(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^r})\nabla u(x,t)+\varepsilon^2\chi_2(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^r})\nabla \nabla u(x,t), \end{eqnarray} (8.8)
    \begin{eqnarray} \frac{\partial w^{\varepsilon}(x,t)}{\partial t}& = &pu^{p-1}\frac{\partial u}{\partial t}(x,t)+\Big(\frac{1}{\varepsilon}\cdot\varepsilon^{2-r}\frac{\partial \chi_1}{\partial s}\Big)\cdot\nabla u(x,t)\\ &+&\varepsilon^{2-r}\Big(\frac{\partial \chi_2}{\partial s}\Big)\cdot\nabla\nabla u(x,t)+\phi_{\varepsilon}^{(time)}(x,t) \end{eqnarray} (8.9)

    with

    \begin{eqnarray} &&\phi_{\varepsilon}^{(time)}(x,t) = \varepsilon\frac{\partial\chi_1}{\partial t}\cdot\nabla u(x,t)+\varepsilon^2\frac{\partial\chi_2}{\partial t}\cdot\nabla\nabla u(x,t)\\ &&+\varepsilon \chi_{1}\Big(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^r}\Big)\cdot\nabla\frac{\partial u}{\partial t}(x,t) +\varepsilon^{2}\chi_{2}\Big(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^r}\Big)\cdot\nabla\nabla\frac{\partial u}{\partial t}(x,t). \end{eqnarray} (8.10)

    Denote

    \begin{eqnarray*} &&H^{\varepsilon}w^{\varepsilon}(x,t) = \frac{\partial w^{\varepsilon}(x,t)}{\partial t}\nonumber\\ &&-\frac{1}{\varepsilon^{d+2}}\int_{\mathbb{R}^{d}}J\left(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^r}\right)\nu\left(\frac{x}{\varepsilon}, \frac{y}{\varepsilon}\right)\left((w^{\varepsilon}(y,t))^p-(w^{\varepsilon}(x,t))^p\right)dy; \end{eqnarray*}

    we have

    \begin{eqnarray} &&H^{\varepsilon}w^{\varepsilon}(x,t)\\ && = \frac{\partial u}{\partial t}(x,t)+\left(\frac{1}{\varepsilon}\cdot\varepsilon^{2-r}\frac{\partial\chi_1}{\partial s}\right)\cdot\nabla u(x,t) +\varepsilon^{2-r}\left(\frac{\partial\chi_2}{\partial s}\right)\cdot\nabla\nabla u(x,t)\\ &&+\varepsilon \frac{\partial\chi_1}{\partial t}\cdot\nabla u(x,t)+\varepsilon^2\frac{\partial\chi_2}{\partial t}\cdot\nabla\nabla u(x,t)+\varepsilon \chi_{1}\left(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^r}\right)\cdot\nabla\frac{\partial u}{\partial t}(x,t)\\ &&+\varepsilon^{2}\chi_{2}\left(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^r}\right)\cdot\nabla\nabla\frac{\partial u}{\partial t}(x,t) \\ &&-\frac{1}{\varepsilon^{d+2}}\int_{\mathbb{R}^{d}}J(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^r})\nu(\frac{x}{\varepsilon},\frac{y}{\varepsilon})\Big\{u(y,t)+\varepsilon \chi_{1}\left(y,t,\frac{y}{\varepsilon},\frac{t}{\varepsilon^r}\right)\cdot\nabla u(y,t) \\ &&+\varepsilon^{2}\chi_{2}\Big(y,t,\frac{y}{\varepsilon},\frac{t}{\varepsilon^r}\Big)\nabla\nabla u(y,t)-u(x,t)-\varepsilon\chi_{1}\nabla u\Big(x, t\Big)-\varepsilon^{2}\chi_{2}\nabla\nabla u(x,t)\Big\}\\ &&\cdot\underbrace{\Big(w^{\varepsilon}(y,t)^{p-1}+w^{\varepsilon}(y,t)^{p-2}w^{\varepsilon}(x,t)+\cdots+w^{\varepsilon}(x,t)^{p-1}\Big)}_{(a^p-b^p) = (a-b)(a^{p-1}+a^{p-2}b\cdots+ ab^{p-2}+b^{p-1})}dy\\ &&: = \frac{\partial u}{\partial t}(x,t)+\frac{1}{\varepsilon}\mathcal{M}_{0}(x,t)+\mathcal{M}_{\varepsilon}(x,t)+\phi_{\varepsilon}(x,t). \end{eqnarray} (8.11)

    Using the Taylor formula and symmetry of the integral, we directly give

    \begin{eqnarray} &&\mathcal{M}^{\varepsilon}_{0}(x,t) = \varepsilon^{2-r}\frac{\partial\chi_1}{\partial s}\nabla u(x,t)-\nabla u(x,t)\Big[\int_{\mathbb{R}^d}J(z,\frac{t}{\varepsilon^r})\nu(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z)\Big(-z\\ &&+\chi_1(y,t,\frac{x}{\varepsilon}-z,\frac{t}{\varepsilon^r})-\chi_1(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^r})\Big)dz\Big]\Big(pu^{p-1}(x,t)+o(\varepsilon)\Big), \end{eqnarray} (8.12)
    \begin{eqnarray} &&\mathcal{M}_{\varepsilon}(x,t) = \varepsilon^{2-r}\frac{\partial \chi_2}{\partial s}\nabla\nabla u(x,t)\\ &&-\int_{\mathbb{R}^d}J(z,\frac{t}{\varepsilon^r})\nu(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z)\Big[\Big(-z\chi_1(y,t,\frac{x}{\varepsilon}-z,\frac{t}{\varepsilon^r})\\ &&+\chi_2(y,t,\frac{x}{\varepsilon}-z,\frac{t}{\varepsilon^r})-\chi_2(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^r})+{\frac{1}{2}}z^{2}\Big)pu^{p-1}(x,t))\nabla\nabla u(x,t)\\ &&+\Big(-z\chi_1(y,t,\frac{x}{\varepsilon}-z,\frac{t}{\varepsilon^r})+{\frac{1}{2}}z^{2}\Big)p(p-1)u^{p-2}\nabla u \otimes \nabla u\Big]dz. \end{eqnarray} (8.13)

    Similar to the proofs in the previous sections, we can obtain the corresponding conclusion. That is, due to Eq (8.11), there exist two functions \chi_1 and \chi_2 such that \mathcal{M}_{0} = 0.

    For 0 < r\leq 2, we have that \mathcal{M}_{\varepsilon}\to \Theta\cdot\nabla\nabla u^p(x, t) as \varepsilon\rightarrow 0, where

    \begin{eqnarray} \Theta^{ij}& = &\int_0^1\int_{\mathbb{T}^{d}}\int_{\mathbb{R}^{d}}\frac{1}{2}(\xi-q)^{i}(\xi-q)^{j}J(\xi-q,s)\nu(\xi,q)m(\xi)dqd\xi ds\\ &-&\int_0^1\int_{\mathbb{T}^{d}}\int_{\mathbb{R}^{d}}J(\xi-q,s)\nu(\xi,q)m(\xi)(\xi-q)^{i}\chi_{1}^{j}(x,t,q,s)dqd\xi ds. \end{eqnarray} (8.14)

    For r > 2, we can get a result that is similar to Eq (8.14), so we do not repeat the proof here.

    The literature on the stochastic homogenization of local parabolic equations can be found in [9,10]. In this section, we introduce how to deal with the nonlocal parabolic equation model with random, statistically homogeneous coefficients, where the ideas and methods mainly come from [41]; for this we need some additional ergodic theory for measure-preserving dynamical systems.

    Let T be a d- dimensional dynamical system on a standard probability space (\Omega, \mathcal{F}, P) ; we assume that \mu(x, \omega) = \mu(T_{x}\omega), where the maps T_x:\Omega\to\Omega satisfy the following properties.

    (1) T_{y_1}T_{y_2} = T_{y_1+y_2} for all y_1, y_2\in\mathbb{R}^d , and T_0 = Id .

    (2) P(T_{y}A) = P(A) for all A\in\mathcal{F} and all y\in\mathbb{R}^d .

    (3) The map (x,\omega)\mapsto T_{x}\omega is measurable from \mathbb{R}^d\times\Omega to \Omega , where \mathbb{R}^d is equipped with the Borel \sigma- algebra.
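
    For later use we also recall, without proof, the form of the Birkhoff ergodic theorem that underlies the averaging over the random environment: if g\in L^{1}(\Omega) and the group (T_{x}) is ergodic, then, \mathbb{P} -a.s.,

    \begin{eqnarray*} \lim\limits_{R\rightarrow\infty}\frac{1}{|Q_{R}|}\int_{Q_{R}}g(T_{x}\omega)dx = \mathbb{E}[g]; \end{eqnarray*}

    equivalently, after the change of variables x\mapsto x/\varepsilon , averages of g(T_{x/\varepsilon}\omega) over fixed bounded sets converge to \mathbb{E}[g] as \varepsilon\rightarrow 0 .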

    We consider the operator

    \begin{eqnarray} L^\varepsilon v(x,t) = \frac{1}{\varepsilon^{d+2}}\int_{\mathbb{R}^d}J(\frac{x-y}{\varepsilon},\frac{t}{\varepsilon^2},\omega_t)\nu(\frac{x}{\varepsilon},\frac{y}{\varepsilon},\omega_s)\Big(v(y,t)-v(x,t)\Big)dy, \end{eqnarray} (9.1)

    where for a.e.\; \omega_t , we have

    \begin{eqnarray} &&J(z,t)\in L^{\infty}((0,T),L^1(\mathbb{R}^d)),\; \mathrm{supp}\,J(\cdot,t) \subset \subset \mathbb{R}^d,\\ && J(z,t) = J(-z,t),\; J_1(z)\geq J(z,t)\geq 0; \\ &&\nu(\frac{x}{\varepsilon},\frac{y}{\varepsilon},\omega_s) = \nu(\frac{x}{\varepsilon},\omega_s)\nu(\frac{y}{\varepsilon},\omega_s), \end{eqnarray} (9.2)

    where \omega_t and \omega_s are random fields on \mathbb{R}\times\mathbb{R}^d that are stationary with respect to time t and space x , respectively. We fix an ergodic environment probability space; that is, we assume that

    \begin{eqnarray} \left\{\begin{array}{l} (\Omega,\mathcal{F},\mathbb{P}){\; is\; a\; probability\; space\; endowed\; with\; an\; ergodic\; semigroup,} \\ \tau:\mathbb{R}^{d}\times\mathbb{R}\times\Omega\rightarrow \Omega{\; of\; measure\; preserving\; maps,} \end{array}\right. \end{eqnarray} (9.3)

    and we denote by \mathbf{L}^{2} the set of stationary maps u = u(x, t, \omega) , meaning that

    \begin{eqnarray} u(x+k,t+s,\omega) = u(x,t,T_{(k,s)}\omega), \forall (k,s,\omega)\in\mathbb{R}^{d}\times\mathbb{R}\times\Omega \end{eqnarray} (9.4)

    Notice that, for local equations, the spatial variables may be only \mathbb{Z}^d- stationary; that setting is not used here, as we will see later. Define the \mathbf{L}^{2} -norm

    \begin{eqnarray} \|u\|_{\mathbf{L}^{2}} = \mathbb{E}[\int_{\tilde{Q}_{1}}u^{2}] < +\infty. \end{eqnarray} (9.5)

    Note that, if u\in\mathbf{L}^{2} and U is a bounded measurable subset of \mathbb{R}^{d} , the stationarity in time implies that the limit

    \begin{eqnarray*} \mathbb{E}\left[\int_{U}u(x,t)dx\right] = \lim\limits_{h\rightarrow 0^{+}}\mathbb{E}\left[\frac{1}{2h}\int_{U}\int_{t-h}^{t}u(x,s)dxds\right] \end{eqnarray*}

    exists for any t\in\mathbb{R} and is independent of t . Let \mathcal{C} be the subset of \mathbf{L}^{2} of maps that are smooth and whose space and time derivatives of all orders belong to \mathbf{L}^{\mathbf{2}} . \mathcal{C} is dense in \mathbf{L}^{2} with respect to the norm in the inequality (9.5).

    We denote by \mathbf{H}^{1} and \mathbf{H}_{\mathbf{x}}^{\mathbf{1}} the closures of \mathcal{C} with respect to the norms

    \|u\|_{\mathbf{H}^{1}} = \left(\|u\|_{\mathbf{L}^{2}}^{2}+\left\|\partial_{t} u\right\|_{\mathbf{L}^{2}}^{2}+\|D u\|_{\mathbf{L}^{2}}^{2}\right)^{1/2},\; \|u\|_{\mathbf{H}_{\mathbf{x}}^{1}} = \left(\|u\|_{\mathbf{L}^{2}}^{2}+\|D u\|_{\mathbf{L}^{2}}^{2}\right)^{1/2}

    and \mathbf{H}_{\mathbf{x}}^{-\mathbf{1}} is the dual space of \mathbf{H}_{\mathbf{x}}^{1} . Moreover, \mathbf{L}_{{pot}}^{2} is the closure with respect to the \mathbf{L}^{2} -norm of \{Du: u\in \mathcal{C}\} in \left(\mathbf{L}^{2}(\Omega)\right)^{d} .

    Set v(x, t) = |u(x, t)|^{p-1}u(x, t) , denote L^\varepsilon v(x, t) = L^\varepsilon(|u|^{p-1}u): = \mathscr{L}_t^\varepsilon u(x, t) . We now consider the Cauchy problem and its corresponding effective Cauchy problem

    \begin{eqnarray} &&\left\{\begin{array}{ll} \partial_{t}u^{\varepsilon}(x,t)-\mathscr{L}^{\varepsilon}_{t}u^\varepsilon(x,t) = 0,\; & (x,t)\in \mathbb{R}^d\times(0,T),\\ u^\varepsilon(x,0) = \varphi(x), & x\in\mathbb{R}^d, \end{array}\right. \end{eqnarray} (9.6)
    \begin{eqnarray} &&\left\{\begin{array}{ll} \partial_{t}u^{0}(x,t)-\mathscr{L}^{0}_{t}u^{0}(x,t) = 0,\; & (x,t)\in \mathbb{R}^d\times(0,T),\\ u^{0}(x,0) = \varphi(x), &x\in\mathbb{R}^d, \end{array}\right. \end{eqnarray} (9.7)

    where \varphi(x)\in L^{[1, \infty]}(\mathbb{R}^d): = L^{1}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}) ,

    \begin{eqnarray} \mathscr{L}^{0}_{t}u(x,t) = \Theta(x,t)\nabla\nabla |u|^{p-1}u, \end{eqnarray} (9.8)

    and the matrix \Theta(x, t) will be given below.

    We also transform the problems (9.6) and (9.7) into the following Cauchy problems

    \begin{eqnarray} &&\left\{\begin{array}{ll} \partial_{t}v^{\varepsilon}(x,t)^{1/p}-L^\varepsilon v^\varepsilon(x,t) = 0, & (x,t)\in \mathbb{R}^d\times(0,T), \\ v^\varepsilon(x,0) = \varphi(x), & x\in\mathbb{R}^d, \end{array}\right. \end{eqnarray} (9.9)
    \begin{eqnarray} &&\left\{\begin{array}{ll} \partial_{t}v(x,t)^{1/p}-\Theta\cdot\nabla\nabla v(x,t) = 0, & (x,t)\in \mathbb{R}^d\times(0,T), \\ v(x,0) = \varphi(x), & x\in\mathbb{R}^d. \end{array}\right. \end{eqnarray} (9.10)
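
    The transformation from the problem (9.6) to the problem (9.9) rests on the elementary chain-rule identity (a formal sketch, valid wherever v\neq 0 ):

    \begin{eqnarray*} u = |v|^{\frac{1}{p}-1}v \quad\Longrightarrow\quad \partial_{t}u = \frac{1}{p}|v|^{\frac{1-p}{p}}\partial_{t}v, \end{eqnarray*}

    which is also the source of the factor \frac{1}{p}|w^{\varepsilon}|^{\frac{1-p}{p}} appearing in the expansions of Cases 2 and 3 below.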

    Theorem 9.1. Assume that the functions J(z, s) and \nu(x, y) satisfy the condition (9.2). Let u^{\varepsilon}(x, t) be the solution of the evolution Cauchy problem (9.6) and u^{0}(x, t) be the solution of the effective Cauchy problem (9.7). Then, there exists a positive definite constant matrix \Theta such that for any T > 0 , we have

    \begin{eqnarray} \Big\|u^{\varepsilon}(x,t)-u^{0}(x,t)\Big\|_{L^{1}((0,T),L_{loc}^{p+1}(\mathbb{R}^{d}))}\rightarrow 0\;\mathit{\mbox{as}}\; \varepsilon\rightarrow 0^{+} a.s. \end{eqnarray} (9.11)

    The homogenized flux \Theta(x, t) can be characterized as follows.

    Case I. For r = 2 and p = 1 , the homogenized constant matrix \Theta is characterized by

    \begin{eqnarray} \Theta = \frac{1}{2}\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{\mathbb{R}^{d}}\int_{\Omega}\Big(z\otimes z-z\otimes\zeta_{-z}(0,s,\omega)\Big)J(z,s)\mu(0,\omega)\mu(-z,\omega)dzdsd\mathbf{P}(\omega), \end{eqnarray} (9.12)

    where \chi = \chi(y, s) solves, for (\xi, s)\in\mathbb{T}^{d}\times\mathbb{T} , the cell problem

    \begin{eqnarray} \left\{\begin{array}{l} \int_{\mathbb{R}^{d}}J(\xi-q,s)\nu(\xi,q)\Big(q-\xi+\chi(q,s)-\chi(\xi,s)\Big)dq = \partial_{s}\chi(\xi,s), \\ \chi(y,0) = \chi(y,1),\; y\in\mathbb{T}^{d}. \end{array}\right. \end{eqnarray} (9.13)

    Case II. For r = 2 and p\in(0, 1] , the homogenized matrix \Theta(x, t) is characterized by

    \begin{eqnarray*} \Theta(x,t)& = &\frac{1}{2}\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{\mathbb{R}^{d}}\int_{\Omega}\Big(z\otimes z-z\otimes\zeta_{-z}(x,t,0,s,\omega)\Big)\\ &&\qquad\times J(z,s)\mu(0,\omega)\mu(-z,\omega)dzdsd\mathbf{P}(\omega), \end{eqnarray*}

    where \chi^{k} = \chi^{k}(y, s) solves the cell problem

    \begin{eqnarray} \left\{\begin{array}{l} \int_{\mathbb{R}^{d}}J(\xi-q,s)\nu(\xi,q)\Big(q-\xi+\chi(x,t,q,s)-\chi(x,t,\xi,s)\Big)dq\\ \qquad\qquad = \frac{1}{p}|u^{0}|^{1-p}(\partial_{s}\chi(x,t,\xi,s)),\; (\xi,s)\in\mathbb{T}^{d}\times\mathbb{T},\\ \chi(x,t,y,0) = \chi(x,t,y,1),\; y\in\mathbb{T}^{d}. \end{array}\right. \end{eqnarray} (9.14)
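
    As a consistency check, setting p = 1 in Eq (9.14) removes the nonlinear weight, since

    \begin{eqnarray*} \frac{1}{p}|u^{0}|^{1-p}\partial_{s}\chi\Big|_{p = 1} = \partial_{s}\chi, \end{eqnarray*}

    so the cell problem (9.14) formally reduces to the cell problem (9.13) of Case I.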

    Case III. For r = 2 and p\in(1, 2] , the homogenized matrix \Theta is characterized by

    \begin{eqnarray*} \chi^{k}(x,t,y,s) = \left\{\begin{array}{ll} p|u^{0}|^{{p-1}}\natural^{k}(x,t,y,s) & \mbox{if}\; u^{0}(x,t) \neq 0, \\ 0 & \mbox{if}\; u^{0}(x,t) = 0, \end{array}\right. \end{eqnarray*}

    where \natural^{k} = \natural^{k}(x, t, y, s) solves the cell problem for each (x, t)\in[u^{0}\neq 0] ,

    \begin{eqnarray} \left\{\begin{array}{l} \partial_{s}\natural(x,t,\xi,s) = \int_{\mathbb{R}^{d}}J(\xi-q,s)\nu(\xi,q)\Big(q-\xi\\ \qquad\qquad+p|u^{0}|^{p-1}\natural(x,t,q,s)-p|u^{0}|^{p-1}\natural(x,t,\xi,s)\Big)dq,\\ \natural(x,t,y,0) = \natural(x,t,y,1),\; y\in\mathbb{T}^{d}, \end{array}\right. \end{eqnarray} (9.15)

    and the measurable set \left[u^{0}\neq 0\right]: = \left\{(x, t)\in \mathbb{R}^d\times(0, T): u^{0}(x, t)\neq 0\right\} .

    We need to construct special auxiliary functions with the following structures.

    (i) For p = 1 ,

    \begin{eqnarray} w_0^\varepsilon(x,t) = v(x,t)+\underbrace{{\varepsilon\chi(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega)}}_{First\; corrector}\nabla v(x,t)+\underbrace{{v_2^\varepsilon(x,t)}+{v_3^\varepsilon(x,t).}}_{Additional\; terms \; for \; compensation} \end{eqnarray} (9.16)

    (ii) For 0 < p < 1 ,

    \begin{eqnarray} w_1^\varepsilon(x,t) = v(x,t)+\varepsilon\chi(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega)\nabla \hat{v}(x,t)+\hat{v}^\varepsilon_2(x,t)+\hat{v}^\varepsilon_3(x,t). \end{eqnarray} (9.17)

    (iii) For 1 < p\leq2 ,

    \begin{eqnarray} w_2^\varepsilon(x,t) = v(x,t)+\varepsilon p|v(x,t)|^{\frac{p-1}{p}}\natural(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega)\nabla v(x,t)+\check{v}^\varepsilon_2(x,t)+\breve{v}_3^\varepsilon(x,t). \end{eqnarray} (9.18)

    The functions v_i^\varepsilon , \hat{v}_i^\varepsilon and \breve{v}_i^\varepsilon are used to eliminate certain extra terms that appear later when we study the convergence.

    Next, we insert the auxiliary functions w_i^\varepsilon(x, t) and decompose the result according to the order of \varepsilon.

    Case 1. For p = 1, similar to the derivations in [13,14], we only need a new corrector \chi(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^r}) . Substituting w_0^\varepsilon into the right-hand side of H^\varepsilon in Eq (4.3), we get

    \begin{eqnarray} H^{\varepsilon}w_0^{\varepsilon}(x,t) = \frac{\partial v}{\partial t}(x,t)+\frac{1}{\varepsilon}\underbrace{{\mathcal{M}_{0}(x,t)}}_{ = 0}+\underbrace{{\mathcal{M}_{\varepsilon}(x,t)}}_{Zero-order\; expansion}+\underbrace{{\phi_{\varepsilon}(x,t)}}_{Remainder} \end{eqnarray} (9.19)

    as \varepsilon \to 0^{+} , where

    \begin{eqnarray} \mathcal{M}_{0}(x,t)& = &\frac{\partial\chi}{\partial s}\nabla v(x,t)-\nabla v(x,t)\Big[\int_{\mathbb{R}^d}J(z,\frac{t}{\varepsilon^2})\nu(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z)\\ &&\cdot\Big(-z+\chi(\frac{x}{\varepsilon}-z,\frac{t}{\varepsilon^2})-\chi(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2})\Big)dz\Big], \end{eqnarray} (9.20)
    \begin{eqnarray} \mathcal{M}_{\varepsilon}(x,t)& = &-\nabla\nabla v(x,t)\Big[\int_{\mathbb{R}^d}J(z,\frac{t}{\varepsilon^2})\nu(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z)\\ &&\cdot\Big(-z\otimes\chi(\frac{x}{\varepsilon}-z,\frac{t}{\varepsilon^2})+{\frac{1}{2}}z\otimes z\Big)dz\Big], \end{eqnarray} (9.21)
    \begin{eqnarray} \phi_{\varepsilon}(x,t)& = &\phi_{\varepsilon}^{(time)}-\phi_{\varepsilon}^{(space)},\; \phi_{\varepsilon}^{(time)}(x,t) = \varepsilon\chi(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2})\cdot\nabla\frac{\partial v}{\partial t}(x,t), \end{eqnarray} (9.22)
    \begin{eqnarray} &&\phi_{\varepsilon}^{(space)} = \int_{\mathbb{R}^{d}}J(z,\frac{t}{\varepsilon^2})\nu\Big(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z\Big)\\ &&\cdot\Big(\int_{0}^{1}\nabla\nabla v(x-\varepsilon z\eta,t)\cdot z \otimes z(1-\eta)d\eta-\frac{1}{2}\nabla\nabla v(x,t)\cdot z\otimes z\Big)dz \\ &&+\frac{1}{\varepsilon} \int_{\mathbb{R}^{d}} J(z,\frac{t}{\varepsilon^2})\nu\left(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z\right) \chi\left(\frac{x}{\varepsilon}-z,\frac{t}{\varepsilon^2}\right)\Big(\nabla v(x-\varepsilon z,t)-\nabla v(x,t)\Big)dz \\ &&+\nabla \nabla v(x,t)\int_{\mathbb{R}^{d}}J(z,\frac{t}{\varepsilon^2})\nu\left(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z\right)z\otimes \chi\left(\frac{x}{\varepsilon}-z,\frac{t}{\varepsilon^2}\right)dz. \end{eqnarray} (9.23)

    Case 2. For p\in(0, 1),

    \begin{eqnarray*} \frac{\partial w_1^{\varepsilon}(x,t)}{\partial t} = \frac{1}{p}|w_1^\varepsilon|^{\frac{1-p}{p}}\Big[\frac{\partial v}{\partial t}(x,t)+\frac{1}{\varepsilon}\frac{\partial \chi}{\partial s}\cdot\nabla v(x, t)+\phi_{\varepsilon}^{(time)}(x,t)\Big]. \end{eqnarray*}

    Using the Taylor expansions we have

    \begin{eqnarray*} H^{\varepsilon}w_1^{\varepsilon}(x,t) = \frac{1}{p}|w_{1}^\varepsilon|^{\frac{1-p}{p}}\frac{\partial v}{\partial t}(x,t)+\frac{1}{\varepsilon}\mathcal{M}_{0}(x,t)+\mathcal{M}_{\varepsilon}(x,t)+\phi_{\varepsilon}(x,t)\;\mbox{as}\; \varepsilon \to 0^{+}, \end{eqnarray*}

    where

    \begin{eqnarray} \mathcal{M}_{0}(x,t)& = &\frac{1}{p}|w^\varepsilon|^{\frac{1-p}{p}}\frac{\partial\chi}{\partial s}\nabla v(x,t)-\nabla v(x,t)\Big[ \int_{\mathbb{R}^d}J(z,\frac{t}{\varepsilon^2})\nu(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z)\\ &&\cdot\Big(-z+\chi(y,t,\frac{x}{\varepsilon}-z,\frac{ t}{\varepsilon^2})-\chi(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^2})\Big)dz\Big], \end{eqnarray} (9.24)
    \begin{eqnarray} \mathcal{M}_{\varepsilon}(x,t)& = &-\nabla\nabla v(x,t)\Big[\int_{\mathbb{R}^d}J(z,\frac{t}{\varepsilon^2})\nu(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z)\\ &&\cdot\Big(-z\chi(y,t,\frac{x}{\varepsilon}-z,\frac{t}{\varepsilon^2})+{\frac{1}{2}}z^{2}\Big)dz\Big], \end{eqnarray} (9.25)
    \begin{eqnarray} \phi_{\varepsilon}(x,t)& = &\frac{1}{p}|w_{1}^\varepsilon|^{\frac{1-p}{p}}\phi_{\varepsilon}^{(time)}+\phi_{\varepsilon}^{(space)}, \\ \phi_{\varepsilon}^{(time)}(x,t)& = &\varepsilon\frac{\partial\chi}{\partial t}\cdot\nabla v(x,t) +\varepsilon \chi(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^2})\cdot\nabla\frac{\partial v}{\partial t}(x,t). \end{eqnarray} (9.26)

    Case 3. For p\in(1, 2], we have

    \begin{eqnarray} \frac{\partial w_2^{\varepsilon}(x,t)}{\partial t} = \frac{1}{p}|w_2^\varepsilon|^{\frac{1-p}{p}}\Big[\frac{\partial v}{\partial t}(x,t)+(\frac{1}{\varepsilon}p|v|^{\frac{p-1}{p}}\frac{\partial \natural_1}{\partial s})\cdot\nabla v(x,t)+\phi_{\varepsilon}^{(time)}(x,t)\Big], \end{eqnarray} (9.27)
    \begin{eqnarray*} \phi_{\varepsilon}^{(time)}(x,t)& = &\varepsilon(p-1)|v(x,t)|^{-{(1+\frac{1}{p})}}v(x,t)\frac{\partial v}{\partial t}\natural(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^2})\nabla v(x,t)\nonumber\\ &+&\varepsilon p|v(x,t)|^{\frac{p-1}{p}}\frac{\partial\natural}{\partial t}\cdot \nabla v(x,t) +\varepsilon p|v(x,t)|^{\frac{p-1}{p}}\natural(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^2})\cdot\nabla\frac{\partial v}{\partial t}(x,t). \end{eqnarray*}

    and

    \begin{eqnarray} H^{\varepsilon} w_2^{\varepsilon}(x,t) = \frac{1}{p}|w_2^\varepsilon|^{\frac{1-p}{p}}\frac{\partial v}{\partial t}+\frac{1}{\varepsilon}\mathcal{M}_{0}(x,t)+\mathcal{M}_{\varepsilon}(x,t) +\phi_{\varepsilon}^{(time)}+\phi_{\varepsilon}^{(space)}\;\mbox{as}\; \varepsilon \to 0^{+}, \end{eqnarray} (9.28)

    where

    \begin{eqnarray} &&\mathcal{M}_{0}(x,t) = \varepsilon^{2-r}\Big|\frac{v}{w^\varepsilon_2}\Big|^{\frac{p-1}{p}}\frac{\partial \natural_1}{\partial s}\nabla v(x,t)-\nabla v\Big[\int_{\mathbb{R}^d}J(z,\frac{t}{\varepsilon^2}) \nu(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z)\\ &&\qquad\cdot\Big(-z+p|v(y,t)|^{\frac{p-1}{p}}\natural(y,t,\frac{x}{\varepsilon}-z,\frac{ t}{\varepsilon^2})-p|v(x,t)|^{\frac{p-1}{p}}\natural(x,t,\frac{x}{\varepsilon},\frac{ t}{\varepsilon^2})\Big)dz \Big], \end{eqnarray} (9.29)
    \begin{eqnarray} &&\mathcal{M}_{\varepsilon}(x,t) = -\nabla\nabla v(x,t)\Big[\int_{\mathbb{R}^d}J(z,\frac{t}{\varepsilon^2})\nu(\frac{x}{\varepsilon},\frac{x}{\varepsilon}-z)\\ &&\cdot\Big(-zp|v(y,t)|^{\frac{p-1}{p}}\natural(y,t,\frac{x}{\varepsilon}-z,\frac{t}{\varepsilon^2})+{\frac{1}{2}}z^{2}\Big)dz\Big]. \end{eqnarray} (9.30)

    Due to the order of \varepsilon , we put the terms with O(\varepsilon) and the higher-order terms with o(\varepsilon) into the remainder as the fourth part.

    Next, we will prove the main conclusion. For convenience, we first prove the linear case and then point out where the nonlinear case differs from the linear one.

    The proof of the linear equation is divided into three parts, where the first part is about the first-order random corrector, the second part is about the zero-order term and remainder, and the last part is the proof of Theorem 9.1.

    For any \delta > 0 , let us consider the equation

    \begin{eqnarray} \delta\chi^\delta-\delta\partial_{tt}\chi^\delta+\partial_{t}\chi^\delta-\delta\Delta\chi^\delta-\mathfrak{A}_\omega\chi^\delta = 0, \end{eqnarray} (9.31)

    where

    \begin{eqnarray} \mathfrak{A}_\omega\chi = \int_{\mathbb{R}^d}J(z,t)\nu(x,x-z)(-z+\chi(x-z,t)-\chi(x,t))dz. \end{eqnarray} (9.32)

    We set

    \begin{eqnarray} \zeta_{z}(\xi,t,\omega) = \chi\left(\xi-z,t,\omega\right)-\chi(\xi,t,\omega). \end{eqnarray} (9.33)

    Throughout the proof, to justify repeated integration by parts and to deal with the unbounded domain, we use the exponential weight \widehat{\Gamma}_{\theta} , which, for \theta > 0 , is given by

    \widehat{\Gamma}_{\theta}(x,t) = \exp\left\{-\theta\left(1+|x|^2+t^{2}\right)^{1/2}\right\}.
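
    For later reference (this is the bound on the weight used in the proof of Lemma 9.1 below), a direct differentiation gives

    \begin{eqnarray*} D\widehat{\Gamma}_{\theta}(x,t) = -\theta\frac{x}{\left(1+|x|^{2}+t^{2}\right)^{1/2}}\widehat{\Gamma}_{\theta}(x,t),\qquad \partial_{t}\widehat{\Gamma}_{\theta}(x,t) = -\theta\frac{t}{\left(1+|x|^{2}+t^{2}\right)^{1/2}}\widehat{\Gamma}_{\theta}(x,t), \end{eqnarray*}

    so that \left|D\widehat{\Gamma}_{\theta}\right|+\left|\partial_{t}\widehat{\Gamma}_{\theta}\right|\leq 2\theta\widehat{\Gamma}_{\theta} pointwise.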

    The first lemma is about the existence of and some a priori bounds for the approximate corrector in a bounded domain.

    Theorem 9.2. Assume that Eqs (9.2)–(9.4) are satisfied. Then there exists a unique map \chi : \mathbb{R}^{d+1}\times\Omega \rightarrow \mathbb{R} such that

    \begin{eqnarray} \partial_{t}\chi^{k}-\mathfrak{A}_\omega\chi^{k} = 0, k = 1,2,\cdots,d, \; in \; \mathbf{L}^{\mathbf{2}} \end{eqnarray} (9.34)

    and for all z\in\mathbb{R}^d, \; \zeta_{z}(\xi, t, \omega)\in\mathbf{L}^{\mathbf{2}} is a stationary field that satisfies

    \int_{\widetilde{Q}_{1}}\zeta_{z}(x,t,\omega)dxdt = 0\; \mathbb{P}-a.s.

    The positive definite constant matrix \Theta is defined by

    \begin{eqnarray} \Theta(x,t)& = &\frac{1}{2}\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{\mathbb{R}^{d}}\int_{\Omega}\left(z\otimes z-z\otimes\zeta_{-z}(0,s,\omega)\right)\\ &&\qquad \times J(z,s,\omega)\mu(0,\omega)\mu(-z,\omega)dzdsd\mathbf{P}(\omega). \end{eqnarray} (9.35)

    Moreover, \mathbb{P} -a.s. \chi^{\varepsilon}(x, t, \omega) = \varepsilon\chi\Big(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^{2}}, \omega\Big) is a function satisfying sub-linear growth, that is

    \chi^{\varepsilon}(x,t,\omega)\stackrel{\varepsilon\rightarrow 0}{\longrightarrow}0\;\mathit{\mbox{in}}\; L_{loc}^{2}(\mathbb{R}^{d+1}).

    The proof is long and technical, so we will proceed step by step, following the table below.

     

    Step Task
    1 Existence of the approximate corrector in a bounded domain
    2 Existence of the approximate corrector in an unbounded domain
    3 Convergence of the approximating sequence
    4 Existence of a corrector
    5 Stationarity of the corrector
    6 Uniqueness of the corrector
    7 Sublinearity of the corrector

    Denote

    \begin{eqnarray} \mathfrak{A}_\omega u& = &\int_{\mathbb{R}^d}J(z,t)\nu(x,x-z)\Big(u(x-z,t)-u(x,t)\Big)dz\\ &-&\int_{\mathbb{R}^d}J(z,t)\nu(x,x-z)zdz: = A_\omega u-f. \end{eqnarray} (9.36)

    For L large enough, and since u and J have bounded supports, we get

    \begin{eqnarray*} (A_\omega u,u)_{L^2(\widetilde{Q}_{L})}\leq 0. \end{eqnarray*}

    Step 1. Approximation sequence for constructing the solution.

    Lemma 9.1. Assume Eqs (9.2)–(9.4). For any \omega\in\Omega, \delta > 0 and sufficiently large L > 0 , let \mathfrak{u}^\delta_{L} \in H_{0}^{1}\left(\widetilde{Q}_{L}\right) be the solution of

    \begin{eqnarray} \delta\mathfrak{u}^\delta_{L}-\delta\partial_{tt}\mathfrak{u}^\delta_{L}+\partial_{t}\mathfrak{u}^\delta_{L}-\delta\Delta\mathfrak{u}^\delta_{L}-\mathfrak{A}_\omega\mathfrak{u}^\delta_{L} = 0\; \mathit{\mbox{in}} \; \widetilde{Q}_{L}, \quad \mathfrak{u}^\delta_{L} = 0\; \mathit{\mbox{on}}\; \partial\widetilde{Q}_{L}. \end{eqnarray} (9.37)

    Then, there exists \theta_{m} > 0 , depending on \delta but not on L or \omega , such that for any \theta\in\left(0, \theta_{m}\right] and \mathbb{P} -a.s.,

    \begin{eqnarray} \int_{\tilde{Q}_{L}}\left(\delta(\mathfrak{u}^\delta_{L})^{2}+\delta\left(\partial_{t}\mathfrak{u}^\delta_{L}\right)^{2}+\left(D\mathfrak{u}^\delta_{L}\right)^{2}\right)\widehat{\Gamma}_{\theta}\lesssim C_{\Gamma,\delta,\theta}. \end{eqnarray} (9.38)

    Proof. Use \widehat{\Gamma}_{\theta}\mathfrak{u}^\delta_{L} as a test function in Eq (9.37); since \widehat{\Gamma}_{\theta} satisfies \left|D\widehat{\Gamma}_{\theta}\right|+\left|\partial_{t} \widehat{\Gamma}_{\theta}\right| \lesssim \theta \widehat{\Gamma}_{\theta} , we find that

    \begin{eqnarray} &&\int_{\widetilde{Q}_{L}}\left(\delta ({\mathfrak{u}^\delta_{L}})^{2}(x,t,\omega)+\delta\left(\partial_{t} \mathfrak{u}^\delta_{L}\right)^{2}(x,t,\omega)-\delta \left(D\mathfrak{u}^\delta_{L}\right)^{2}(x,t,\omega)\right)\widehat{\Gamma}_{\theta} \\ &&\leq -\int_{\widetilde{Q}_{L}}\left(\delta\partial_{t}\mathfrak{u}^\delta_{L}\mathfrak{u}^\delta_{L}\frac{\partial_{t}\widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}-\frac{\left( \mathfrak{u}^\delta_{L}\right)^{2}\partial_{t}\widehat{\Gamma}_{\theta}}{2\widehat{\Gamma}_{\theta}}+\mathfrak{u}^\delta_{L}D\mathfrak{u}^\delta_{L}\cdot\frac{D\widehat{\Gamma}_{\theta}}{ \widehat{\Gamma}_{\theta}}+f\mathfrak{u}^\delta_{L}\right)\widehat{\Gamma}_{\theta} \\ &&\lesssim\int_{\widetilde{Q}_{L}}\left(\delta\theta\left|\partial_{t}\mathfrak{u}^\delta_{L}\right|\left|\mathfrak{u}^\delta_{L}\right|+\theta\mathfrak{u}^\delta_{L}D\mathfrak{u}^\delta_{L}+\theta( \mathfrak{u}^\delta_{L})^{2}+|f||\mathfrak{u}^\delta_{L}|\right)\widehat{\Gamma}_{\theta}; \end{eqnarray} (9.39)

    by the Cauchy-Schwarz inequality, we can finish the proof.
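
    To make the last step explicit, one possible splitting of the cross terms via Young's inequality is the following sketch (the precise constants are irrelevant):

    \begin{eqnarray*} \delta\theta\left|\partial_{t}\mathfrak{u}^\delta_{L}\right|\left|\mathfrak{u}^\delta_{L}\right|\leq\frac{\delta\theta}{2}\left|\partial_{t}\mathfrak{u}^\delta_{L}\right|^{2}+\frac{\delta\theta}{2}\left|\mathfrak{u}^\delta_{L}\right|^{2},\quad \theta\left|\mathfrak{u}^\delta_{L}\right|\left|D\mathfrak{u}^\delta_{L}\right|\leq\frac{1}{2}\left|D\mathfrak{u}^\delta_{L}\right|^{2}+\frac{\theta^{2}}{2}\left|\mathfrak{u}^\delta_{L}\right|^{2},\quad |f|\left|\mathfrak{u}^\delta_{L}\right|\leq\frac{1}{2\delta}|f|^{2}+\frac{\delta}{2}\left|\mathfrak{u}^\delta_{L}\right|^{2}; \end{eqnarray*}

    for \theta\leq\theta_{m} small enough (depending on \delta ), the terms carrying \theta can be absorbed, while the f -term is harmless because f is bounded (by the assumptions on J and \mu ) and \widehat{\Gamma}_{\theta} is integrable.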

    Step 2. Next we prove the existence of \chi^{\delta}.

    Lemma 9.2. Assume that Eqs (9.2)–(9.4) are satisfied. For any \delta > 0 and \theta\in(0, \theta_{m}) , in the sense of distributions, there exists a unique stationary solution \chi^{\delta}\in \mathbf{H}_{\hat{\Gamma}_{\theta}}^{1} \subset \mathbf{H}^{\mathbf{1}} of

    \begin{eqnarray} \delta\chi^{\delta}-\delta\partial_{tt}\chi^{\delta}+\partial_{t}\chi^{\delta}-\delta\Delta\chi^{\delta}-\mathfrak{A}_\omega\chi^{\delta} = 0\;\mathit{\mbox{in}}\;\mathbb{R}^{d+1}, \mathbb{P}-a.s. \end{eqnarray} (9.40)

    The solution is independent of \theta\in\left(0, \theta_{m}\right) . We also have the following estimate:

    \begin{eqnarray} \mathbb{E}\int_{\tilde{Q}_{1}}\Big[\delta\left(\chi^{\delta}\right)^{2}+\delta\left(\partial_{t} \chi^{\delta}\right)^{2}+\left(D\chi^\delta\right)^{2}\Big] \leq C. \end{eqnarray} (9.41)

    Proof. This equation contains both local and nonlocal terms; our aim is to obtain an approximate equation in an unbounded domain, so we first show that \mathfrak{u}_{L} \to \mathfrak{u} . Fix \omega arbitrarily; then \mathfrak{u}_{L}\in\mathbf{H}_{\widehat{\Gamma}_{\theta}}^{\mathbf{1}} for any \theta\in\left(0, \theta_{m}\right], L\in(0, \infty) from Lemma 9.1. By a diagonal argument, we extract a subsequence, still denoted by the original notation, such that, for some {\mathfrak{u}\in\bigcap\limits_{\theta_1\in\left(0, \theta_{m}\right]} \mathbf{H}_{\widehat{\Gamma}_{\theta_1}}^{1}} and any \theta\in\left(0, \theta_{m}\right] , we have \mathfrak{u}_{L}\stackrel{L\rightarrow \infty}{\longrightarrow}\mathfrak{u} in \mathbf{H}_{\widehat{\Gamma}_{\theta}}^{1} . Here,

    \begin{eqnarray*} \mathfrak{A}_\omega\mathfrak{u}_{L}-\mathfrak{A}_\omega\mathfrak{u} = \int_{\mathbb{R}^d}J(z,t)\nu(x,x-z)\Big((\mathfrak{u}_{L}-\mathfrak{u})(x-z,t)-(\mathfrak{u}_{L}-\mathfrak{u})(x,t)\Big)dz; \end{eqnarray*}

    thus, for any L > 0 and \theta\in(0, \theta_{m}) , \mathfrak{u}_{L}\rightarrow \mathfrak{u} in L^{2}(\widetilde{Q}_{L}) ; taking the norm in L_{\widehat{\Gamma}_{\theta}}^{2} and splitting the domain of integration into two parts, we have

    \begin{eqnarray} &&\|\mathfrak{A}_\omega\mathfrak{u}_{L}-\mathfrak{A}_\omega\mathfrak{u} \|_{L_{\Gamma_{\theta}}^{2}(\mathbb{R}^{d+1})} \\ &&\leq\|\mathfrak{A}_\omega\mathfrak{u}_{L}-\mathfrak{A}_\omega\mathfrak{u}\|_{L_{\Gamma_{\theta}}^{2}(\tilde{Q}_{L})}+(\sup\limits_{\mathbb{R}^{d+1}\backslash\widetilde{Q}_{L}} \frac{\Gamma_{\theta}}{\Gamma_{\theta_{m}}})\|\mathfrak{A}_\omega\mathfrak{u}_{L}-\mathfrak{A}_\omega\mathfrak{u} \|_{L^2_{\Gamma_{\theta_{m}}}(\mathbb{R}^{d+1}\backslash\tilde{Q}_{L})} \\ &&\leq\|\mathfrak{A}_\omega\mathfrak{u}_{L}-\mathfrak{A}_\omega\mathfrak{u} \|_{L^{2}(\tilde{Q}_{L})} +(\sup\limits_{\mathbb{R}^{d+1}\backslash\widetilde{Q}_{L}} \frac{\Gamma_{\theta}}{\Gamma_{\theta_{m}}})(\|\mathfrak{A}_\omega \mathfrak{u}_{L}\|_{L_{\Gamma_{\theta_{m}}}^{2}(\mathbb{R}^{d+1})}+\|\mathfrak{A}_\omega \mathfrak{u} \|_{L^2_{\Gamma_{\theta_{m}}}(\mathbb{R}^{d+1})}) \\ &&: = V_1+V_2. \end{eqnarray} (9.42)

    We know from [13, Proposition 6] that \mathfrak{A}_\omega is a bounded operator in L^{2}(\tilde{Q}_{L}) ; hence,

    \|\mathfrak{A}_\omega\mathfrak{u}_{L}-\mathfrak{A}_\omega\mathfrak{u}\|_{L_{\Gamma_{\theta}}^{2}(\mathbb{R}^{d+1})}\to 0

    can be obtained directly through an argument similar to that used for the inequality (9.42).

    Note that V_1, V_2\rightarrow0 uniformly as L\rightarrow +\infty by the definition of \Gamma_{\theta} . Therefore, we can assume that \Delta \mathfrak{u}_{L}(x, t, \omega)\rightarrow \Delta \mathfrak{u} = \zeta and \mathfrak{A}_\omega \mathfrak{u}_{L}\to \mathfrak{A}_\omega\mathfrak{u} = \zeta_1 as L\rightarrow \infty , where \zeta, \zeta_1 \in\bigcap_{\theta^{\prime}\in\left(0, \theta_{m}\right]}L^{2}_{\widehat{\Gamma}_{\theta^{\prime}}} .

    It can be seen that, in the sense of distributions, we have

    \begin{eqnarray} \delta\mathfrak{u}-\delta\partial_{tt}\mathfrak{u}+\partial_{t}\mathfrak{u}-\delta\Delta\mathfrak{u}-\mathfrak{A}_\omega\mathfrak{u} = 0\; \mbox{in}\; \mathbb{R}^{d+1}, \end{eqnarray} (9.43)

    and for all \theta\in\left(0, \theta_{m}\right] ,

    \begin{eqnarray} \int_{\tilde{Q}}\delta\Big(\mathfrak{u}^{2}+(\partial_{t}\mathfrak{u})^{2}+|D\mathfrak{u}|^{2}\Big)\widehat{\Gamma}_{\theta}\lesssim C_{\Gamma,\delta,\theta}, \end{eqnarray} (9.44)

    where \partial_{t}\mathfrak{u}\in\mathbf{H}_{\mathbf{x}}^{-\mathbf{1}}, D\mathfrak{u}\in\left(\mathbf{L}^{2}(\Omega)\right)^{d}.

    Next, we check that \mathfrak{u}\in\bigcap \limits_{\theta^{\prime}\in\left(0, \theta_{m}\right]}H_{\widehat{\Gamma}_{\theta^{\prime}}}^{1} is a solution of Eq (9.43) in the sense of distributions. Let \phi\in C_{c}^{\infty}\left(\mathbb{R}^{d+1}\right) . For a large enough L , we have

    \begin{eqnarray*} \int_{\tilde{Q}_{L}}[\delta((\mathfrak{u}_{L}-\phi)^{2}+(\partial_{t}\mathfrak{u}_{L}-\partial_{t}\phi)^2+|\nabla\mathfrak{u}_{L}-\nabla \phi|^{2} )-\mathfrak{A}_\omega(\mathfrak{u}^\delta_{L}-\phi)(\mathfrak{u}^\delta_{L}-\phi)]\widehat{\Gamma}_{\theta} = 0; \end{eqnarray*}

    using \mathfrak{u}_{L}\widehat{\Gamma}_{\theta} as a test function in the equation for \mathfrak{u}_{L} , we find that

    \begin{eqnarray*} &&\int_{\widetilde{Q}_{L}}(\delta\mathfrak{u}_{L}^{2}+\delta(\partial_{t}\mathfrak{u}_{L})^{2} +\delta\partial_{t}\mathfrak{u}_{L}\mathfrak{u}_{L}\frac{\partial_{t}\widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}} +\partial_{t}\mathfrak{u}_{L}\mathfrak{u}_{L} +\delta\mathfrak{u}_{L}\nabla\mathfrak{u}_{L}\cdot\frac{D\widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}+\delta|\nabla \mathfrak{u}_{L}|^2)\widehat{\Gamma}_{\theta} \nonumber\\ &&-\int_{\widetilde{Q}_{L}}\int_{\mathbb{R}^{d}}J{\mu}(x-z,\omega){\mu}(x,\omega)(-z+\mathfrak{u}_{L}(x-z,t,\omega)- \mathfrak{u}_{L}(x,t,\omega))\mathfrak{u}_{L}(x,t,\omega)dz\widehat{\Gamma}_{\theta} = 0; \end{eqnarray*}

    thus, subtracting the above two identities yields

    \begin{eqnarray*} &&\int_{\tilde{Q}_{L}}\Big(-2\delta\mathfrak{u}_{L}\phi+\delta\phi^{2}-2\delta\partial_{t}\mathfrak{u}_{L}\partial_{t}\phi+\delta(\partial_{t}\phi)^{2}+\delta(\nabla \phi)^2-2\delta\nabla \mathfrak{u}_{L}\cdot \nabla\phi \\ &&+\mathfrak{A}_\omega\mathfrak{u}_{L}\cdot\phi+\mathfrak{A}_\omega\phi\cdot(\mathfrak{u}_{L}-\phi)-\partial_{t}\mathfrak{u}_{L}\mathfrak{u}_{L} -\delta\partial_{t}\mathfrak{u}_{L}\mathfrak{u}_{L}\frac{\partial_{t} \widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}-\delta\mathfrak{u}_{L}\nabla\mathfrak{u}_{L} \cdot\frac{D\widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}\Big)\widehat{\Gamma}_{\theta} = 0. \end{eqnarray*}

    As L\rightarrow \infty, \mathfrak{u}_{L}\rightarrow \mathfrak{u}, \nabla\mathfrak{u}_{L}\to\nabla\mathfrak{u}, \partial_{t}\mathfrak{u}_{L}\rightarrow \partial_{t}\mathfrak{u} and (\zeta^\delta_{z})_L \rightarrow \zeta^\delta_{z} in L_{\widehat{\Gamma}_{\theta}}^{2} and in the sense of L_{l o c}^{2} , we get

    \begin{eqnarray*} &&\int_{\mathbb{R}^{d+1}}\Big(-2\delta\mathfrak{u}\phi+\delta\phi^{2}-2\delta\partial_{t}\mathfrak{u}\partial_{t}\phi+\delta(\partial_{t}\phi)^{2}+\delta(\nabla\phi)^2-2\delta\nabla\mathfrak{u}\cdot \nabla\phi \\ &&+\mathfrak{A}_\omega\mathfrak{u}\cdot\phi+\mathfrak{A}_\omega\phi\cdot(\mathfrak{u}-\phi) -\partial_{t}\mathfrak{u}\mathfrak{u}-\delta\partial_{t}\mathfrak{u}\mathfrak{u}\frac{\partial_{t} \widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}} -\delta\mathfrak{u}\nabla\mathfrak{u}\cdot\frac{D\widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}\Big)\widehat{\Gamma}_{\theta} = 0. \end{eqnarray*}

    By integrating Eq (9.43) against \phi\widehat{\Gamma}_{\theta} , we obtain

    \begin{eqnarray*} \int_{\mathbb{R}^{d+1}}\Big(\delta\mathfrak{u}\phi+\delta\partial_{t}\mathfrak{u}\partial_{t}\phi+\delta\partial_{t}\mathfrak{u}\phi\frac{\partial_{t} \widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}+\partial_{t}\mathfrak{u}\phi +\delta \nabla \mathfrak{u} \nabla \phi +\delta \nabla \mathfrak{u} \phi \frac{D \widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}-\mathfrak{A}_\omega\mathfrak{u}\cdot\phi\Big)\widehat{\Gamma}_{\theta} = 0. \end{eqnarray*}

    Adding the above two equations gives

    \begin{eqnarray} &&\int_{\mathbb{R}^{d+1}}\Big(-\delta\phi(\mathfrak{u}-\phi)-\delta\partial_{t}\phi(\partial_{t}\mathfrak{u}-\partial_{t}\phi)-\delta \nabla(\mathfrak{u}-\phi)\nabla\phi+\mathfrak{A}_\omega\phi\cdot(\mathfrak{u}-\phi) \\ &&\qquad-\partial_{t}\mathfrak{u}(\mathfrak{u}-\phi)-\delta\partial_{t}\mathfrak{u}(\mathfrak{u}-\phi)\frac{\partial_{t}\widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}-\delta(\mathfrak{u}-\phi )\nabla\mathfrak{u}\frac{D \widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}\Big)\widehat{\Gamma}_{\theta} = 0. \end{eqnarray} (9.45)

    For every \psi\in C_{c}^{\infty}(\mathbb{R}^{d+1}) , set \phi = \mathfrak{u}+h\psi ; letting h\rightarrow 0^{+} , we have

    \begin{eqnarray} \int_{\mathbb{R}^{d+1}}\Big(\delta\mathfrak{u}\psi+\delta\partial_{t}\mathfrak{u}\partial_{t}\psi+\delta D\mathfrak{u}D\psi-\mathfrak{A}_\omega\mathfrak{u}\cdot\psi+\partial_{t}\mathfrak{u}\psi +\delta \partial_{t}\mathfrak{u}\psi\frac{\partial_{t}\widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}+\delta\psi \nabla \mathfrak{u}\frac{D\widehat{\Gamma}_{\theta}}{\widehat{\Gamma}_{\theta}}\Big)\widehat{\Gamma}_{\theta} = 0, \end{eqnarray} (9.46)

    Here \mathfrak{u} and D \mathfrak{u} are locally integrable, \psi has compact support, and D\widehat{\Gamma}_{\theta}, \partial_t\widehat{\Gamma}_{\theta}\stackrel{\theta\rightarrow 0}{\longrightarrow}0 locally uniformly; thus, we have

    \begin{eqnarray} \int_{\mathbb{R}^{d+1}}\delta\mathfrak{u}\psi+\delta\partial_{t}\mathfrak{u}\partial_{t}\psi+\delta D\mathfrak{u}D\psi-\mathfrak{A}_\omega\mathfrak{u}\cdot\psi+\partial_{t}\mathfrak{u}\psi = 0\;\mbox{as}\;\theta\rightarrow0. \end{eqnarray} (9.47)

    For any \psi\in C_{c}^{\infty}(\mathbb{R}^{d+1}) , Eq (9.47) implies that \mathfrak{u} is a solution of Eq (9.43) in the sense of distributions. Finally, \mathfrak{u} is stationary by the uniqueness of solutions of Eq (9.43).

    Step 3. Convergence of the approximating sequence \chi^{\delta} in \mathbf{L}^{\mathbf{2}} .

    We know from the inequality (9.41) that

    \delta\int_{\mathbb{R}^{d+1}}(\partial_t\chi^{\delta})^{2} d xdt\leq C,\; \int_{\mathbb{R}^{d+1}}(\chi^{\delta})^{2} d xdt\leq C, \; \delta\int_{\mathbb{R}^{d+1}}\left|\nabla \chi^{\delta}\right|^{2}dxdt\leq C,

    where the constant C is independent of \delta . Letting \delta \rightarrow 0 , for an arbitrarily fixed \omega , we have

    \begin{eqnarray*} \delta\partial_t \chi^{\delta}\rightharpoonup \partial_t\chi\;&&\mbox{weakly}\;\mbox{in}\; L^{2}(\mathbb{R}^{d+1}),\\ \chi^{\delta}\rightharpoonup\chi\;&&\mbox{weakly}\;\mbox{in}\; L^{2}(\mathbb{R}^{d+1}),\\ \delta\nabla\chi^{\delta}\rightharpoonup0\;&&\mbox{weakly}\;\mbox{in}\; L^{2}(\mathbb{R}^{d+1}). \end{eqnarray*}

    Moreover, we have

    \frac{1}{2}\int_{\mathbb{R}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}J(x,y)\nu(x,y)\left|\chi^{\delta}(y,t)-\chi^{\delta}(x,t)\right|^2dydxdt\leq C.

    Hence, for any measurable subset E\subset Q\times Q\times Q_1\subset\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathbb{R} , using the Cauchy-Schwarz inequality we find that

    \begin{eqnarray} &&\left|\int_{E}J(x,y)\nu(x,y)\Big(\chi^{\delta}(y,t)-\chi^{\delta}(x,t)\Big)dydxdt\right|^{2} \\ &&\leq\int_{E}J(x,y)\nu(x,y)dydxdt\int_{E}J(x,y)\nu(x,y)\Big|\chi^{\delta}(y,t)-\chi^{\delta}(x,t)\Big|^{2}dydxdt \\ &&\lesssim_{J,\nu} C. \end{eqnarray} (9.48)
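
    We recall the form of the Dunford-Pettis theorem used in the next step: on the bounded set Q\times Q\times Q_{1} , a family that is bounded in L^{1} and uniformly integrable is relatively compact in the weak L^{1} topology. The bound (9.48), combined with the absolute continuity of the integral of the fixed L^{1} function J\nu , gives the required uniform integrability, namely (schematically)

    \begin{eqnarray*} \sup\limits_{\delta > 0}\int_{E}J(x,y)\nu(x,y)\Big|\chi^{\delta}(y,t)-\chi^{\delta}(x,t)\Big|dydxdt\rightarrow 0\quad\mbox{as}\;|E|\rightarrow 0. \end{eqnarray*}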

    Now, applying the Dunford-Pettis theorem, there exists \vartheta(x, z, t) such that

    J(z,t)\nu(x,x-z)\left(-z+\chi^{\delta}(x-z,t)-\chi^{\delta}(x,t)\right)\rightharpoonup\vartheta(x,z,t)

    weakly in L^{1}(Q\times Q\times Q_1). Taking \mathfrak{u} = \chi^\delta and \psi = \chi in Eq (9.47), as \delta \to 0 we have

    \int_{Q_1}\int_{Q}\partial_{t}\chi \chi dxdt = \int_{Q_1}\int_{Q}\int_{Q}\vartheta\chi dxdzdt.

    Therefore, to finish the proof we have to show that over Q* = Q_1\times Q\times Q

    \begin{eqnarray} \int_{Q*}J(z,t)\nu(x,x-z)(-z+\chi(x-z,t)-\chi(x,t))\chi dxdzdt = \int_{Q*}\vartheta\chi dxdzdt. \end{eqnarray} (9.49)

    In fact, taking \mathfrak{u} = \chi^{\delta} and \psi = \chi^{\delta} in Eq (9.47), we have

    \begin{eqnarray} &&\int_{Q_1}\int_{Q}\left(\delta(\chi^\delta)^{2}+\delta\left(\partial_{t}\chi^\delta\right)^{2}+\partial_{t}\chi^\delta\chi^\delta+\delta(\nabla\chi^\delta)^2\right)-\int_{Q_1}\int_{Q}\partial_{t}\chi \chi\\ & = &\int_{Q*}J(z,t){\nu}\left(x,x-z\right)\left(-z+\chi^\delta\left(x-z,t\right)-\chi^\delta(x,t)\right)\chi^\delta dxdzdt-\int_{Q*}\vartheta\chi dxdzdt; \end{eqnarray} (9.50)

    thus,

    \begin{eqnarray} \lim\limits_{\delta\rightarrow 0}[\int_{Q*}J(z,t)\nu(x,x-z)\left(-z+\chi^{\delta}(x-z,t)-\chi^{\delta}(x,t)\right)\chi^{\delta}(x,t)-\int_{Q*}\vartheta\chi] = 0. \end{eqnarray} (9.51)

    Using the monotonicity of the nonlocal operator, for all \Psi\in L^{\infty}(Q\times Q_1) ,

    \begin{eqnarray} &&\int_{Q*}J(z,t)\left(-z+\chi^{\delta}(x-z,t)-\chi^{\delta}(x,t)\right)\left(\chi^{\delta}(x,t)-\Psi(x,t)\right)dzdxdt \\ &&\leq\int_{Q*}\left(J(z,t)(-z+\Psi(x-z,t)-\Psi(x,t))\right)\left(\chi^{\delta}(x,t)-\Psi(x,t)\right)dzdxdt. \end{eqnarray} (9.52)

    Passing to the limit as \delta\rightarrow 0 , by using Eq (9.51), we have

    \begin{eqnarray*} \int_{Q*}\vartheta(\chi-\Psi)dzdxdt\leq\int_{Q*}J(z,t)(-z+\Psi(x-z,t)-\Psi(x,t))(\chi(x,t)-\Psi(x,t))dzdxdt. \end{eqnarray*}

    Choosing \Psi = \chi\pm\gamma \chi, \gamma > 0 , and letting \gamma\rightarrow 0 , we get Eq (9.49), and the proof is finished.
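
    To spell out this last step (a Minty-type monotonicity argument; we suppress \nu and the measure dzdxdt for brevity): with \Psi = (1\pm\gamma)\chi we have \chi-\Psi = \mp\gamma\chi , so dividing the previous inequality by \gamma and letting \gamma\rightarrow 0^{+} gives

    \begin{eqnarray*} \mp\int_{Q*}\vartheta\chi\leq\mp\int_{Q*}J(z,t)\Big(-z+\chi(x-z,t)-\chi(x,t)\Big)\chi(x,t), \end{eqnarray*}

    and the two choices of sign together yield the equality in Eq (9.49).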

    From Eq (9.43) and the Cauchy-Schwarz inequality, for any \psi\in L^{2}(Q\times Q_1) we get

    \begin{eqnarray} \int_{Q\times Q_1}\delta\chi^{\delta}\psi+\delta\partial_{t}\chi^{\delta}\partial_{t}\psi+\delta D\chi^{\delta}D\psi \rightarrow 0 \; as \; \delta \to 0. \end{eqnarray} (9.53)

    Passing to the limit as \delta\rightarrow 0 in Eq (9.43), for a.e. \omega the function \chi\left(z, t, \omega\right) satisfies the equation

    \begin{eqnarray*} \partial_{t}\chi&-&\int_{\mathbb{R}^{d}}J(z,t){\mu}\left({x-z},t,\omega\right){\mu}\left({x},\omega\right)(\chi(x-z,t,\omega)-\chi(x,t,\omega))dz\\ &+&\int_{\mathbb{R}^{d}}zJ(z,t){\mu}\left({x},\omega\right){\mu}\left({x-z},\omega\right)dz = 0; \end{eqnarray*}

    thus, we prove that \chi(x, t, \omega) is a solution of Eq (9.34).

    Denote \zeta_{z}(\xi, t, \omega) = {\chi}(z+\xi, t, \omega)-{\chi}(\xi, t, \omega) ; then for z\in\mathbb{R}^{d} , we have

    \begin{eqnarray*} \zeta_{z}(\xi,t,\omega) = \zeta_{z}\left(0,t,T_{\xi}\omega\right). \end{eqnarray*}

    Step 4. Stationarity of \zeta_{z}.

    For all z\in\mathbb{R}^{d} and t\in(0, T) , the field \zeta_{z}(x, t, \omega) is statistically homogeneous in x and t , and

    \zeta_{z}(0,t,\omega) = \chi(z,t,\omega).

    Thus, the random function \chi(x, t, \omega) is not stationary, but its increments

    \zeta_{z}(\xi,t,\omega) = \chi(\xi+z,t,\omega)-\chi(\xi,t,\omega)

    form a stationary field for any given z .

    We first prove the uniqueness of \chi^\delta . Let u_{1} and u_{2} be two solutions and set \tilde{u} = u_{1}-u_{2} . Using \tilde{u}\widehat{\Gamma}_{\theta} as a test function in Eq (9.43) for \tilde{u} , we find that

    \begin{eqnarray} &&\int_{\mathbb{R}^{d+1}}(\delta\tilde{u}^{2}+\delta(\partial_{t}\tilde{u})^{2}+(D\widetilde{u})^2-(A_\omega\tilde{u},\tilde{u}))\widehat{\Gamma}_{\theta} \\ && = -\int_{\mathbb{R}^{d+1}}(\delta\tilde{u}\partial_{t}\tilde{u}\partial_{t}\widehat{\Gamma}_{\theta}+\tilde{u}D\tilde{u}+f|\tilde{u}|)\cdot D\widehat{\Gamma}_{\theta} \\ &&\leq\theta\int_{\mathbb{R}^{d+1}}(\delta|\tilde{u}||\partial_{t}\tilde{u}|+C_{0}|D\tilde{u}||\tilde{u}|)\widehat{\Gamma}_{\theta}. \end{eqnarray} (9.54)

    Then a standard argument based on the Cauchy-Schwarz inequality implies that, for \theta small enough, \tilde{u}\equiv 0 .

    Next, we will prove that \zeta_{z}(x, t, \omega) is stationary in x, t.

    Proposition 4. The function \chi(z, t, \omega) can be extended to \mathbb{R}^{d}\times\mathbb{R}\times\Omega in such a way that \chi(z, t, \omega) satisfies the relation (9.34); in particular, \chi(x, t, \omega) has stationary increments:

    \begin{eqnarray} \left\{\begin{array}{l} \chi(z+\xi,t,\omega)-\chi(\xi,t,\omega) = \chi\left(z,t,T_{\xi}\omega\right) = \chi\left(z,t,T_{\xi}\omega\right)-\chi\left(0,t,T_{\xi}\omega\right),\\ \chi(z,t+s,\omega)-\chi(z,s,\omega) = \chi\left(z,t,T_{s} \omega\right)-\chi\left(z,0,T_{s} \omega\right). \end{array}\right. \end{eqnarray} (9.55)

    Proof. The strong convergence of \left\{\chi^{\delta}\right\} implies that there exists a subsequence \left\{\chi^{\delta_{n_k}}\right\} that converges a.s. to the same limit \chi(z, t, \omega) :

    \lim\limits_{k\rightarrow \infty}\chi^{\delta_{n_k}}(z,t,\omega) = \chi(z,t,\omega)\;\mbox{for}\; a.e. (z,t,\omega).

    Since J and \nu are stationary, according to the uniqueness of the stationary solution \chi^{\delta} , we get

    {\chi^{\delta_{n_k}}}(z+\xi,t,\omega)-{\chi^{\delta_{n_k}}}(\xi,t,\omega) = {\chi^{\delta_{n_k}}}\left(z,t,T_{\xi}\omega\right)-{\chi^{\delta_{n_k}}}\left(0,t,T_{\xi}\omega\right) = \chi^{\delta_{n_k}}\left(z,t, T_{\xi}\omega\right).

    Thus, inserting \left\{\chi^{\delta_{n_k}}\right\} in Eq (9.43) and passing to the limit as k\rightarrow \infty , we obtain Eq (9.34), first only for z_{1} and z_{2} such that z_{1}, z_{2} and z_{1}+z_{2} belong to supp J(\cdot, t) . Then, we extend the function \chi(z, t, \omega) to a.e. z\in\mathbb{R}^{d} due to Eq (9.55): \chi(z_{1}+z_{2}, t, \omega) = \chi(z_{2}, t, \omega)+\chi(z_{1}, t, T_{z_{2}}\omega). The proof of the second formula is similar, so we omit the details. Therefore, we get the stationarity of \zeta_{z}.

    Step 5. Uniqueness of \chi.

    We first establish an important lemma.

    Let \varphi = \varphi(x, R(t)) = \varphi(\frac{|x|}{R(t)})\in C^{1}\left(\mathbb{R}^{d}\times[0, +\infty)\right) be such that

    \begin{eqnarray*} \varphi(x,R) = 0\;\mbox{in}\;\mathbb{R}^{d}\backslash Q_{2R},\; \varphi(x,R) = 1\;\mbox{in}\;Q_{R},\;\varphi(x,R) = 2-\frac{|x|}{R(t)}\;\mbox{in}\;Q_{2R}\backslash Q_{R}. \end{eqnarray*}

    Denote \bar{\sigma}_{z} = \bar{\sigma}(x+z, t, \omega)-\bar{\sigma}(x, t, \omega) and

    \begin{eqnarray*} \mathbb{A}_1 = \int_{\mathbb{R}^{d}}\int_{|\xi| > 3 R}J(z,t)\mu(\xi+z,\omega)\mu(\xi,\omega)\bar{\sigma}_{z}\bar{\sigma}(\xi+z,\omega) \left(\varphi\left(\frac{|\xi+z|}{R}\right)-\varphi\left(\frac{|\xi|}{R}\right)\right)dzd\xi, \end{eqnarray*}
    \begin{eqnarray*} \mathbb{A}_2& = &\int_{\mathbb{R}^{d}}\int_{|\xi|\leq 3R}J(z,t)\mu(\xi+z,\omega)\mu(\xi,\omega)\Big(\bar{\sigma}(\xi+z,\omega)-\bar{\sigma}(\xi,\omega)\Big)^2\\ &&\times\left(\varphi\left(\frac{|\xi+z|}{R}\right)-\varphi\left(\frac{|\xi|}{R}\right)\right)d\xi dz\\ & = &\int_{|z|\leq\sqrt{R}}\int_{|\xi|\leq 3R}J(z,t)\mu(\xi+z,\omega)\mu(\xi,\omega)\Big(\bar{\sigma}(\xi+z,\omega)-\bar{\sigma}(\xi,\omega)\Big)^2\\ &&\times\left(\varphi\left(\frac{|\xi+z|}{R}\right)-\varphi\left(\frac{|\xi|}{R}\right)\right)d\xi dz\\ &+&\int_{|z|\geq\sqrt{R}}\int_{|\xi|\leq 3R}J(z,t)\mu(\xi+z,\omega)\mu(\xi,\omega)\Big(\bar{\sigma}(\xi+z,\omega)-\bar{\sigma}(\xi,\omega)\Big)^2\\ &&\times\left(\varphi\left(\frac{|\xi+z|}{R}\right)-\varphi\left(\frac{|\xi|}{R}\right)\right)d\xi dz = \mathbb{A}_{2 < }+\mathbb{A}_{2 > }, \end{eqnarray*}
    \begin{eqnarray*} \mathbb{A}_3& = &\int_{\mathbb{R}^{d}}\int_{|\xi|\leq 3R}J(z,t)\mu(\xi+z,\omega)\mu(\xi,\omega)\bar{\sigma}_{z}\bar{\sigma}(\xi,\omega)\nonumber\\ &&\cdot\left(\varphi\left(\frac{|\xi+z|}{R}\right)-\varphi\left(\frac{|\xi|}{R}\right)\right)d\xi dz\\ &\leq&\int_{\mathbb{R}^{d}}\int_{\substack{2R\leq|\xi|\leq 3R,\\|\xi|\leq R}}J(z,t)\mu(\xi+z,\omega)\mu(\xi,\omega)\bar{\sigma}_{z}\bar{\sigma}(\xi,\omega)\nonumber\\ &&\cdot\left(\varphi\left(\frac{|\xi+z|}{R}\right)-\varphi\left(\frac{|\xi|}{R}\right)\right) d\xi dz\\ &+&\int_{\mathbb{R}^{d}}\int_{R\leq|\xi|\leq 2R}J(z,t)\mu(\xi+z,\omega)\mu(\xi,\omega)\bar{\sigma}_{z}\bar{\sigma}(\xi,\omega)\nonumber\\ &&\cdot\left(\varphi\left(\frac{|\xi+z|}{R}\right)-\varphi\left(\frac{|\xi|}{R}\right)\right)d\xi dz. \end{eqnarray*}

    Lemma 9.3. We have the following estimates

    \begin{eqnarray*} \mathbb{A}_1&\leq&\phi_1(R)+\frac{C}{\alpha|R^{\prime}(t)|}R^{d-1}+\alpha|R^{\prime}(t)|\int_{R\leq|\eta|\leq 2R}\frac{|\bar{\sigma}(\eta,\omega)|^{2}}{R}\varphi\left(\frac{|\eta|}{R}\right)d\eta,\\ \mathbb{A}_{2 < }&\leq& \frac{C}{\sqrt{R(t)}}R(t)^d,\quad \mathbb{A}_{2 > }\leq c_2R(t)^d, \\ \mathbb{A}_3&\leq&c_s R^d+\frac{C}{\alpha|R^{\prime}(t)|}R^{d-1}+\alpha|R^{\prime}(t)|\int_{R\leq|\eta|\leq 2R}|\bar{\sigma}(\eta,\omega)|^{2}\frac{|\eta|}{R^2}d\eta, \end{eqnarray*}

    and

    \begin{eqnarray*} \phi\left(T_{\eta}\omega,t\right) = \int_{\mathbb{R}^d}|z|J(z,t)|\bar{\sigma}_z(T_{\eta-z}\omega,t)|dz,\quad\phi_1(R) = {\alpha_{2}^{2}}\int_{|\eta|\leq R}\phi\left(T_{\eta}\omega\right)\frac{|\bar{\sigma}(\eta,\omega)|}{R}d\eta \lesssim c_d R^{d}, \end{eqnarray*}

    where c_2, c_d and \alpha are small enough.

    The proof is given in Appendix C.

    Lemma 9.4. For a.e. \omega and all \xi\in\mathbb{R}^d , we have

    \begin{eqnarray} \mathbf{E}\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{\mathbb{R}^{d}}J(z,t)\mu\left(z,\omega\right)\mu(0,\omega)\bar{\sigma}_{z}^{2}dzdt = 0. \end{eqnarray} (9.56)

    Proof. The map {t\rightarrow \mathbf{E}\int_{\mathbb{R}^{d}}J(z, t)\mu\left(z, \omega\right)\mu(0, \omega)\bar{\sigma}_{z}^{2}dz} is well-defined; to prove Eq (9.56) we argue by contradiction, noting first that, for all t\in \mathbb{R} ,

    \begin{eqnarray} \mathbf{E}\int_{\mathbb{R}^{d}}J(z,t)\mu\left(z,\omega\right)\mu(0,\omega)\bar{\sigma}_{z}^{2}dz\geq0. \end{eqnarray} (9.57)

    Fix R > 0 ; in view of the stationarity of \bar{\sigma}_{z} , there exist \varepsilon_{0} > 0 and 0 < \kappa < \widehat{\kappa} such that, for all t\in\mathbb{R}, \varepsilon\in\left(0, \varepsilon_{0}\right) and R > 0 ,

    \begin{eqnarray} \frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}J(z,t)\mu(x+z,\omega)\mu(x,\omega)(\bar{\sigma}(x+z,t,\omega)-\bar{\sigma}(x,t,\omega))^2\varphi\left(\frac{|x|}{R}\right)dzdx\geq\kappa R^{d}. \end{eqnarray} (9.58)

    Let T be large enough and let R(t) = \left(T-\gamma_{1}t\right)^{1/4} , which satisfies R^{'}(t)\leq 0 for some \gamma_{1} > 0 ; then

    \begin{eqnarray*} &&\frac{d}{dt}\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\varphi(x,R(t))dx \nonumber\\ & = &R^{\prime}(t)\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\partial_{R}\varphi(x,R(t))dx+\int_{\mathbb{R}^{d}}\bar{\sigma}(x,t)\partial_{t}\bar{\sigma}(x,t)\varphi(x,R(t))dx \nonumber\\ & = &R^{\prime}(t)\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\partial_{R}\varphi(x,R(t))dx\nonumber\\ &+&\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}J(z,t)\mu(x+z,\omega)\mu(x,\omega)(\bar{\sigma}(x+z,\omega)-\bar{\sigma}(x,\omega))\bar{\sigma}(x,\omega)\varphi\left(\frac{|x|}{R}\right)dzdx\nonumber\\ & = &R^{\prime}(t)\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\partial_{R}\varphi(x,R(t))dx\nonumber\\ &-&\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}J(z,t)\mu(x+z,\omega)\mu(x,\omega)\Big(\bar{\sigma}(x+z,\omega)-\bar{\sigma}(x,\omega)\Big) \nonumber\\ &&\cdot\left(\bar{\sigma}(x+z,\omega)\varphi\left(\frac{|x+z|}{R}\right)-\bar{\sigma}(x,\omega)\varphi\left(\frac{|x|}{R}\right)\right)dzdx, \nonumber \end{eqnarray*}

    thus,

    \begin{eqnarray} &&\frac{d}{dt}\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\varphi(x,R(t))dx = R^{\prime}(t)\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\partial_{R}\varphi(x,R(t))dx\\ &&-\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}J(z,t)\mu(x+z,\omega)\mu(x,\omega)\Big(\bar{\sigma}(x+z,\omega)-\bar{\sigma}(x,\omega)\Big)^{2}\varphi\left(\frac{|x|}{R}\right)dzdx \\ &&-\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}J(z,t)\mu(x+z,\omega)\mu(x,\omega)\Big(\bar{\sigma}(x+z,\omega)-\bar{\sigma}(x,\omega)\Big)\bar{\sigma}(x+z,\omega) \\ &&\cdot\left(\varphi\left(\frac{|x+z|}{R}\right)-\varphi\left(\frac{|x|}{R}\right)\right)dzdx. \end{eqnarray} (9.59)

    According to Lemma 9.3, we have

    \begin{eqnarray} &&\frac{d}{dt}\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\varphi(x,R(t))dx \\ &&\leq R^{\prime}(t)\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\partial_{R}\varphi(x,R(t))dx-\kappa R(t)^{d}+|\mathbb{A}_1|+|\mathbb{A}_2|+|\mathbb{A}_3|, \end{eqnarray} (9.60)

    Using the facts that \partial_{R}\varphi(x, R(t)) = \frac{|x|}{R(t)^2} and \varphi(\frac{|x|}{R})\leq\frac{|x|}{R} when R\leq|x|\leq 2R , and taking \alpha small enough, we obtain the following from the estimates in Lemma 9.3:

    \begin{eqnarray} &&\frac{d}{dt}\mathbb{E}\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\varphi(x,R(t))dx+(\kappa-c_2-c_s-c_d)R(t)^{d}\\ &&\leq \frac{C}{\sqrt{R(t)}}R(t)^d+\frac{C}{|R^{\prime}(t)|}R(t)^{d-1}; \end{eqnarray} (9.61)

    due to the facts that \left|Q_{2R(t)}\backslash Q_{R(t)}\right|\lesssim C(R(t))^{d} and R^{\prime}(t) = \gamma_{1}(R(t))^{-1} , for some C > 0 , we get

    \begin{eqnarray} &&\frac{d}{dt}\mathbb{E}\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\varphi(x,R(t))dx \\ &&\leq -R(t)^{d}\Big(\kappa-c_2-c_s-c_d-C \gamma_{1}^{-1}-\frac{C}{\sqrt{R(t)}}\Big). \end{eqnarray} (9.62)

    Choosing \gamma_{1} > 1 large enough and c_2, c_s and c_d small enough such that \kappa-c_2-c_s-c_d-C\gamma_{1}^{-1}\geq\hat{\tau}/2 and t\leq t_{T} = (T- (4C\hat{\tau}^{-1})^2)\gamma_{1}^{-1} , in order to have \frac{C}{\sqrt{R(t)}}\leq\hat{\tau}/4 on [0, t_{T}] , we have

    \begin{eqnarray*} \frac{d}{dt}\mathbb{E}\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t)\varphi(x,R(t))d x\leq-R(t)^{d}\frac{\hat{\tau}}{4}. \quad for\; any \; t\in[0,t_{T}]. \end{eqnarray*}

    Integrating in time over t\in[t_1, \gamma_{1}^{-1}T] for t_1\in[0, \sqrt{T}] , supposing that \gamma_{1}^{-1} and T are sufficiently large and satisfy \gamma_{1}^{-1}T < t_{T} , and using the fact that \varphi\geq 0 , we have

    \begin{eqnarray*} \mathbb{E}\int_{\mathbb{R}^{d}}\frac{1}{2}\bar{\sigma}^{2}(x,t_{1})\varphi(x,R(t_{1}))dx\geq\frac{\hat{\tau}}{4}\int_{t_1}^{\gamma_{1}^{-1} T}R(t)^{d}dt. \end{eqnarray*}

    Integrating in time over t_1\in[0, \sqrt{T}] , since R(t_1)\leq T^{1/4} and \varphi(x, R(t_1))\leq1_{Q_{2T^{1/4}}} , we get

    \begin{eqnarray*} \mathbb{E}\int_{0}^{T^{1/4}}\int_{Q_{2T^{1/4}}}\frac{1}{2}\bar{\sigma}^{2}(x,t_1)dxdt_1\geq C^{-1}\hat{\tau}T^{(d+3)/4}. \end{eqnarray*}

    Hence, we can apply Lemma B.1 in Appendix B which implies that, for any \delta > 0 , there exists R_{\delta} such that, for all R\geq R_{\delta} ,

    \begin{eqnarray*} \mathbb{E}\int_{0}^{R}\int_{Q_{R}}\bar{\sigma}^{2}(x,t_1)dxdt_1\leq\delta R^{d+3}. \end{eqnarray*}

    Here choosing R = 2T^{1/4} and T large enough, we obtain

    \begin{eqnarray*} C^{-1}\hat{\tau}T^{\frac{d+3}{4}}\leq\mathbb{E}\int_{0}^{T^{1/4}}\int_{Q_{2T^{1/4}}}\frac{1}{2}\bar{\sigma}^{2}(x,t_1)dxdt_1\leq \delta 2^{d+2} T^{\frac{d+3}{4}}, \end{eqnarray*}

    Choosing \delta small enough then yields a contradiction. Hence we get

    \begin{eqnarray} \mathbf{E}\int_{-1/2}^{1/2}\int_{\mathbb{R}^{d}}J(z,t)\mu\left(z,\omega\right)\mu(0,\omega)\bar{\sigma}_{z}^{2}dzdt\leq 0. \end{eqnarray} (9.63)

    By combining the inequality (9.63) with the inequality (9.57), we get Eq (9.56); the proof is done.

    Suppose that u and u^1 are the solutions of the equation and set \mathfrak{T}^{1} = \partial_t u^1 and \xi^{1} = A_\omega u^1 ; then \mathfrak{T}-\mathfrak{T}^{1}-(\xi-\xi^{1}) = 0; by applying Lemma 9.4 to the pair (\mathfrak{T}-\mathfrak{T}^{1}, u-u^{1}) , we find that

    \mathbb{E}\int_{-1/2}^{1/2}(\xi-\xi^{1})(u-u^{1})dt = 0,

    which implies the uniqueness of \chi .

    Step 6. Sublinear growth.

    Lemma 9.5. The family of functions \{\chi^{\varepsilon}(x, t, \omega) = \varepsilon\chi\left(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^2}, \omega\right)\}_{\varepsilon > 0} is bounded and compact in L_{loc}^{2}(\mathbb{R}^{d+1}) .

    Proof. Assume that \chi^{\varepsilon}(x, t, \omega) = \varepsilon\chi(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^2}, \omega) satisfies the equation

    \begin{eqnarray} \partial_{t}\chi^{\varepsilon}-\mathfrak{A}_\omega\chi^{\varepsilon} = 0\;\mbox{in}\;\mathbb{R}^d\times\mathbb{R}. \end{eqnarray} (9.64)

    Denote \chi_z^{\varepsilon} (x, t) = z+\chi^{\varepsilon}(x+z, t, \omega)-\chi^{\varepsilon}(x, t, \omega) , \diamondsuit = J(z, s)\mu(x+z, \omega)\mu(x, \omega) and j_\ell = 4\alpha_2^2 j_0 . Without loss of generality, we assume that 0 < j_\ell\leq 1 ; we will show that there exists a universal constant C_{0} such that, \mathbb{P} -a.s. and for any R, T > 0 ,

    \begin{eqnarray*} &&\limsup\limits_{\varepsilon\rightarrow 0}\int_{0}^{T}\int_{Q_{R}}\left(\chi^{\varepsilon}(x,t)\right)^{2}dxdt\nonumber\\ &&\leq C_{0}T^{3}R^{d-2}\mathbb{E}\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{\mathbb{R}^d}|z|J(z,s)|\chi^{\varepsilon}_z (T_{-z}\omega,s)|^2dzds. \end{eqnarray*}

    Fix \xi\in C^{\infty}(\mathbb{R}; [0, 1]) such that

    \begin{eqnarray} \xi\equiv 0\;\mbox{in}\;(-\infty,\frac{3j_\ell}{2}-2),\quad \xi\equiv 1\;\mbox{in}\;[j_\ell-1,+\infty), \quad |\xi^{\prime}|\leq 2. \end{eqnarray} (9.65)

    Denote

    \varphi(x,s,t) = \xi\Big((\frac{3j_\ell}{2}-\frac{sj_\ell}{2t})-||x||_{\infty}R^{-1}\Big),

    where ||x||_{\infty} = \max\{|x_i|: i = 1, \cdots, d\} . Since 1\leq\frac{3}{2}-\frac{s}{2t}\leq\frac{3}{2} for s\in[0, t], \; \varphi(x, s, t) = 1 in Q_{R} , while \varphi(x, s, t) = 0 in \mathbb{R}^{d}\backslash Q_{2 R}.
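
    Both claims follow directly from the definition of \xi in Eq (9.65); a quick sketch of the check:

    \begin{eqnarray*} ||x||_{\infty}\leq R&\Rightarrow&\Big(\frac{3j_\ell}{2}-\frac{sj_\ell}{2t}\Big)-||x||_{\infty}R^{-1}\geq j_\ell-1\;\Rightarrow\;\varphi(x,s,t) = 1,\\ ||x||_{\infty} > 2R&\Rightarrow&\Big(\frac{3j_\ell}{2}-\frac{sj_\ell}{2t}\Big)-||x||_{\infty}R^{-1} < \frac{3j_\ell}{2}-2\;\Rightarrow\;\varphi(x,s,t) = 0. \end{eqnarray*}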

    Fix t > 0 . From Eq (9.64) and Young's inequality, for any s\in(0, t) , we have

    \begin{eqnarray} &&\frac{d}{ds}\int_{\mathbb{R}^{d}}(\chi^{\varepsilon})^{2}\varphi(x,s,t)dx = \int_{\mathbb{R}^{d}}(\chi^{\varepsilon})^{2}\partial_{s}\varphi dx-L_\omega \chi^{\varepsilon}\varphi \\ && = \int_{\mathbb{R}^{d}}(\chi^{\varepsilon})^{2}\partial_{s}\varphi-\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\diamondsuit\chi_z^{\varepsilon}\chi^{\varepsilon}(x+z,s,\omega)(\varphi(x+z,s,t)-\varphi(x,s,t))dzdx \\ &&-\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\diamondsuit\chi_z^{\varepsilon}(\chi^{\varepsilon}(x+z,s,\omega)-\chi^{\varepsilon}(x,s,\omega))\varphi(x,s,t)dzdx \\ && = \int_{\mathbb{R}^{d}}(\chi^{\varepsilon})^{2}\partial_{s}\varphi-\mathbb{B}_{1}-\mathbb{B}_{2}, \end{eqnarray} (9.66)

    where

    \begin{eqnarray*} \mathbb{B}_1& = &\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\diamondsuit\chi^{\varepsilon}_{z}\chi^{\varepsilon}(x+z,s,\omega)\Big(\varphi(x+z,s,t)-\varphi(x,s,t)\Big)dzdx, \nonumber\\ \mathbb{B}_2& = &\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\diamondsuit\chi_z^{\varepsilon}\Big(\chi^{\varepsilon}(x+z,\omega)-\chi^{\varepsilon}(x,\omega)\Big)\varphi(x,s,t)dzdx = \mathbb{B}_{21}+\mathbb{B}_{22}, \nonumber\\ \mathbb{B}_{21}& = &R^{-1}t\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\diamondsuit(\chi^{\varepsilon}_{z}(T_{x}\omega))^2\Big(\varphi(x+z,s)-\varphi(x,s)\Big)dxdz \nonumber \\ & = &R^{-1}t\int_{\mathbb{R}^{d}}\int_{|x|\leq 3R}\diamondsuit(\chi^{\varepsilon}_{z}(T_{x}\omega))^2\Big(\varphi(x+z,s)-\varphi(x,s)\Big)dxdz \nonumber \\ &+&R^{-1}t\int_{\mathbb{R}^{d}}\int_{|x|\geq 3R}\diamondsuit(\chi^{\varepsilon}_{z}(T_{x}\omega))^2\Big(\varphi(x+z,s)-\varphi(x,s)\Big)dxdz, \nonumber \\ \mathbb{B}_{22}& = &Rt^{-1}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\diamondsuit(\chi^{\varepsilon}(x+z,s,\omega))^2\Big(\varphi(x+z,s)-\varphi(x,s)\Big)dxdz. \end{eqnarray*}

    Denote {\tilde{\phi}({\eta}, s, \omega) = \int_{\mathbb{R}^d}|z|J(z, s)|\chi^{\varepsilon}_z(T_{\eta-z}\omega, s)|^2dz; } then we have

    \begin{eqnarray} \mathbb{B}_{21}&\leq&R^{\frac{d}{2}-2}t\alpha_{2}^{2}\Big(\int_{|\eta|\leq 2R}\tilde{\phi}^{2}(T_{\eta}\omega)d\eta\Big)^{1/2}\\ &+&R^{\frac{d}{2}-2}t\alpha_{2}^{2}\Big(\int_{|\xi|\leq 3R}\tilde{\phi}^{2}(T_{\xi}\omega)d\xi\Big)^{1/2}\\ &\leq&R^{\frac{d}{2}-2}t\alpha_{2}^{2}\yen^\varepsilon(s). \end{eqnarray} (9.67)

    Here, the penultimate inequality uses Appendix C, and we have set

    \begin{eqnarray} \yen^\varepsilon(s) = \Big(\int_{|\eta|\leq 2R}\tilde{\phi}^{2}(T_{{\eta}}\omega,s)d\eta\Big)^{1/2}+\Big(\int_{|\xi|\leq 3R}\tilde{\phi}^{2}(T_{{\xi}}\omega,s)d\xi\Big)^{1/2}, \end{eqnarray} (9.68)

    thus,

    \begin{eqnarray} \mathbb{B}_{22}&\leq&\xi^{\prime}t^{-1}\alpha_2^2\int_{\mathbb{R}^{d}}J_1(z)|z|dz\int_{\mathbb{R}^{d}}(\chi^{\varepsilon}(x,s,\omega))^2dx \\ & = &\xi^{\prime}t^{-1}\alpha_2^2j_0\int_{\mathbb{R}^{d}}(\chi^{\varepsilon})^2dx. \end{eqnarray} (9.69)

    Since \partial_{s}\varphi = -2\alpha_2^2 j_0 t^{-1}\xi^{\prime} while |\varphi\left(x\right)-\varphi\left(y\right)|\leq R^{-1}\xi^{\prime}|x-y| , we can absorb the last term on the right-hand side of Eq (9.66) into the first one to obtain

    \begin{eqnarray} \frac{d}{ds}\int_{\mathbb{R}^{d}}(\chi^{\varepsilon}(s))^{2}\varphi(x,s,t)dx& = &\int_{\mathbb{R}^{d}}(\chi^{\varepsilon})^{2}\partial_{s}\varphi dx-\mathbb{B}_{1}-\mathbb{B}_{2} \leq -\mathbb{B}_{1}-\mathbb{B}_{21}. \end{eqnarray} (9.70)

    Integrating in time over s\in[0, t] and using the definition of \phi we get

    \begin{eqnarray} \int_{Q_{R}}(\chi^{\varepsilon}(t))^{2}dx\leq-\int_0^t\mathbb{B}_{1}(s)ds+R^{\frac{d}{2}-2}t\int_0^t\yen^\varepsilon(s)ds+\int_{Q_{2 R}}(\chi^{\varepsilon}(0))^{2}dx. \end{eqnarray} (9.71)

    Integrating in time over t\in[0, T] ,

    \begin{eqnarray} \int_{0}^{T}\int_{Q_{R}}(\chi^{\varepsilon}(t))^{2}dxdt&\leq&-\int_0^T\int_0^t\mathbb{B}_{1}(s)dsdt+R^{\frac{d}{2}-2}\int_{0}^{T}t\int_{0}^{t}\yen^\varepsilon(s)dsdt\\ &+&T\int_{Q_{2R}}\left(\chi^{\varepsilon}(0)\right)^{2}dx. \end{eqnarray} (9.72)

    Let \varepsilon\rightarrow 0 ; from Appendix B and the ergodic theorem, we have \mathbb{P} -a.s.,

    \begin{eqnarray} &&\limsup\limits_{\varepsilon\rightarrow 0}\int_{0}^{T}\int_{Q_{R}}(\chi^{\varepsilon}(x,t))^{2}dxdt\\ &&\leq-\int_{0}^{T}\int_{0}^{t}\int_{\mathbb{R}^{d}}\mathbb{E}\Big[\int_{\tilde{Q}_{1}}\mathfrak{A}_\omega\chi\cdot\chi\Big]\varphi(x,s,t)dxdsdt +R^{\frac{d}{2}-2}T\int_{0}^{T}\int_{0}^{t}\yen(s)dsdt, \end{eqnarray} (9.73)

    where

    \begin{eqnarray} \yen& = &\int_{-1/2}^{1/2}\Big(\int_{|\eta|\leq 2R}\tilde{\phi}^{2}(0,s,\omega)d\eta\Big)^{1/2}ds+\int_{-1/2}^{1/2}\Big(\int_{|\xi|\leq 3R}\tilde{\phi}^{2}(0,s,\omega)d\xi\Big)^{1/2}ds\\ &\leq& CR^{\frac{d}{2}}\int_{-1/2}^{1/2}\tilde{\phi}(0,s,\omega)ds. \end{eqnarray} (9.74)

    Lemma 9.4 gives that the first term on the right-hand side of the inequality (9.73) vanishes. Thus, using the inequality (9.74),

    \begin{eqnarray*} &&\limsup\limits_{\varepsilon\rightarrow 0}\int_{0}^{T}\int_{Q_{R}}(\chi^{\varepsilon}(x,t))^{2}dxdt\nonumber\\ &&\lesssim R^{d-2}T^{3}\mathbb{E}\int_{-1/2}^{1/2}\int_{\mathbb{R}^d}|z|J(z,s)|\chi^{\varepsilon}_z (T_{-z}\omega,s)|^2dzds. \end{eqnarray*}

    Due to the symmetric property we have, \mathbb{P} -a.s.,

    \begin{eqnarray} &&\limsup\limits_{\varepsilon\rightarrow0}\int_{-T}^{T}\int_{Q_{R}}(\chi^{\varepsilon}(x,t))^{2}dxdt\\ &&\leq C_{0}T^{3}R^{d-2}\mathbb{E}\int_{-1/2}^{1/2}\int_{\mathbb{R}^d}|z|J(z,s)|\chi^{\varepsilon}_z(T_{-z} \omega,s)|^2dzds. \end{eqnarray} (9.75)

    Next we show that \chi^{\varepsilon}(x, t) converges to 0 as \varepsilon\rightarrow 0^{+} .

    Let \omega\in\Omega be such that the inequality (9.75) holds for any T, R > 0 . The inequality (9.75) implies that the family (\chi^{\varepsilon}(x, t))_{\varepsilon > 0} is bounded in L_{loc}^{2}(\mathbb{R}^{d}\times\mathbb{R}) .

    Note that (\partial_{t}\chi^{\varepsilon}(x, t))_{\varepsilon > 0} is bounded in L_{loc}^{2}(\mathbb{R}, L^{2}) , and (\chi^{\varepsilon}(\cdot, t))_{\varepsilon > 0} is compact in L^2_{loc}(\mathbb{R}^d) according to [41, Lemma 4.1]. Hence, with the help of the classical Lions-Aubin lemma, the family (\chi^{\varepsilon}(x, t))_{\varepsilon > 0} is relatively compact in L_{loc}^{2}(\mathbb{R}^{d+1}) .

    Let (\chi^{\varepsilon_{n}}(x, t)) be any convergent subsequence with limit \chi in L_{ {loc }}^{2}(\mathbb{R}^{d}\times\mathbb{R}) . Since \zeta_{z} is a stationary field in an ergodic environment, its rescaling converges weakly to its (constant) mean. Thus, owing to Eq (9.64), \chi solves \partial_{t}\chi = 0 in \mathbb{R}^{d}\times\mathbb{R} , so \chi is constant in time. Letting T\rightarrow 0 in the inequality (9.75) then yields \chi(\cdot, 0) = 0. Therefore, \chi\equiv 0 , so \chi^{\varepsilon_{n}}\rightarrow 0 in L_{loc}^{2}(\mathbb{R}^{d}\times\mathbb{R}) , and since the limit is the same along every convergent subsequence, the whole family converges. Lemma 9.5 has been proved.
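    For completeness, we record the quantitative step behind \chi(\cdot, 0) = 0 (a sketch): since \chi is constant in time, passing to the limit along the subsequence in the inequality (9.75) gives

    \begin{eqnarray*} 2T\int_{Q_{R}}\chi(x,0)^{2}dx\leq C_{0}T^{3}R^{d-2}K, \end{eqnarray*}

    where K denotes the expectation factor on the right-hand side of the inequality (9.75); dividing by 2T and letting T\rightarrow 0 forces \int_{Q_{R}}\chi(x,0)^{2}dx = 0 for every R > 0 .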

    Proof of Theorem 9.2. Combining Steps 1–6 above, we can now finish the proof of the main Theorem 9.2 of this section.

    Recalling Eq (9.19), as \varepsilon \to 0^{+} , we have an asymptotic expansion for w^\varepsilon(x, t) = v^{0}(x, t)+\varepsilon u_1(x, t)+\varepsilon^2u_2(x, t) :

    \begin{eqnarray*} H^{\varepsilon}w_0^{\varepsilon}(x,t) = \frac{\partial}{\partial t}v^{0}(x,t)+\frac{1}{\varepsilon}\underbrace{\mathcal{M}_{0}(x,t)}_{ = 0} +\underbrace{\mathcal{M}_{\varepsilon}(x,t)}_{\mbox{zero-order expansion}}+\underbrace{\phi_{\varepsilon}(x,t)}_{\mbox{remainder}}. \end{eqnarray*}

    We now give a decomposition of the zero-order \varepsilon^{0} term in the asymptotic expansion of H^{\varepsilon}w_0^{\varepsilon}(x, t) .

    Lemma 10.1. For the zero-order expansion term \mathcal{M}_{\varepsilon}(x, t) we have

    \begin{eqnarray} \mathcal{M}_{\varepsilon}(x,t) = (D_{1}-D_{2})\cdot\nabla\nabla v^{0}+\Upsilon^{\varepsilon}+\digamma(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega)\cdot\nabla\nabla v^{0}, \end{eqnarray} (10.1)

    where \Upsilon^{\varepsilon} = (\beta_\varepsilon, v_2^\varepsilon), \; \digamma(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^2}, \omega) = \aleph_{1}(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^2}, \omega)-\aleph_{2}(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^2}, \omega), and the matrices D_{1} and D_{2} are

    \begin{eqnarray*} D_{1}& = &\int_{-1/2}^{1/2}\int_{\mathbb{R}^{d}}\frac{1}{2}z\otimes z\mathbf{E}\Big\{J(z,s)\mu(0,\omega)\mu(-z,\omega)\Big\}dzds, \\ D_{2}& = &\int_{-1/2}^{1/2}\int_{\mathbb{R}^{d}}\frac{1}{2}z\otimes\mathbf{E}\Big\{J(z,t)\zeta_{-z}(0,t,\omega)\mu(0,\omega)\mu(-z,\omega)\Big\}dzdt; \end{eqnarray*}

    \digamma(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^2}, \omega) , \aleph_{1}(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^2}, \omega) and \aleph_{2}(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^2}, \omega) are stationary fields with a zero mean which are given by

    \begin{eqnarray} \aleph_{1}(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega)& = &\frac{1}{2}\int_{\mathbb{R}^{d}}z^{2}\Big[J(z,\frac{t}{\varepsilon^2})\mu(\frac{x}{\varepsilon},\omega)\mu(\frac{x}{\varepsilon}-z, \omega) \\ &&\qquad-\mathbf{E}\Big\{J(z,s,\omega)\mu(0,\omega)\mu(-z,\omega)\Big\}\Big]dz, \end{eqnarray} (10.2)
    \begin{eqnarray} \aleph_{2}(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega)& = &\frac{1}{2}\int_{\mathbb{R}^{d}}z\Big[J(z,\frac{t}{\varepsilon^2})\zeta_{-z}(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega) \mu(\frac{x}{\varepsilon},\omega)\mu(\frac{x}{\varepsilon}-z,\omega) \\ &&\qquad-\mathbf{E}\Big\{J(z,s,\omega)\zeta_{-z}(0,\frac{t}{\varepsilon^2},\omega)\mu(0,\omega)\mu(-z,\omega)\Big\}\Big]dz. \end{eqnarray} (10.3)

    For the problems

    \begin{eqnarray} &&\left\{\begin{array}{l} (\partial_t-L^{\varepsilon})v_{2}^{\varepsilon}(x,t,\omega) = -\Upsilon^{\varepsilon},\\ v_{2}^{\varepsilon}(x,0) = 0, \end{array}\right. \end{eqnarray} (10.4)
    \begin{eqnarray} &&\left\{\begin{array}{l} (\partial_t-L^{\varepsilon})v_{3}^{\varepsilon}(x,t,\omega) = -\digamma^{\varepsilon}\cdot\nabla\nabla v^{0},\\ v_{3}^{\varepsilon}(x,0) = 0, \end{array}\right. \end{eqnarray} (10.5)

    we have that ||v^{\varepsilon}_i||_{L^{\infty}((0, T)\times\mathbb{R}^d)}\to 0\; (i = 2, 3)\; \mathbb{P}-a.s. as \varepsilon\rightarrow 0^{+}.

    We first prove the boundedness of the sequence v^{\varepsilon}_i , then prove its compactness using the Lions-Aubin lemma, and finally show that ||v^{\varepsilon}_i||_{L^{\infty}((0, T)\times \mathbb{R}^d)} converges to 0 . The proof requires some technical estimates from [41, Section 5]; we provide it in Appendix C.

    Proposition 5. The matrix \Theta = D_{1}-D_{2} is positive definite:

    \Theta = \frac{1}{2}\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{\mathbb{R}^{d}}\int_{\Omega}(z\otimes z-z\otimes\zeta_{-z}(0,s,\omega)) J(z,s)\mu(0,\omega)\mu(-z,\omega)dzdsd\mathbf{P}(\omega) > 0.

    Proof. The proof of the positive definiteness of \Theta is essentially the same as that of [41, Proposition 5.1].

    We now consider the estimate of the remainder

    \phi_{\varepsilon}(x,t) = \phi_{\varepsilon}^{(time)}(x,t)+\phi_{\varepsilon}^{(space)}(x,t),

    in the asymptotic expansion Eq (9.19), where \phi_{\varepsilon}^{(time)} and \phi_{\varepsilon}^{(space)} are defined in Eqs (9.22) and (9.23) respectively.

    Proposition 6. Let v\in C^{\infty}\left((0, T), \mathcal{S}\left(\mathbb{R}^{d}\right)\right) ; then, for the functions \phi_{\varepsilon}^{(time)} and \phi_{\varepsilon}^{(space)} we have

    \begin{eqnarray} \left\|\phi_{\varepsilon}^{(space)}\right\|_{2}\rightarrow 0\;\mathit{\mbox{and}}\;\left\|\phi_{\varepsilon}^{(time)}\right\|_{2}\rightarrow 0\;\mathit{\mbox{as}}\;\varepsilon \rightarrow 0, \end{eqnarray} (10.6)

    where \|\cdot\|_{2} is the norm in L^{2}((0, T), L^{2}(\mathbb{R}^{d})).

    Proof. The convergence for \phi_{\varepsilon}^{(time)} immediately follows from the representation Eq (9.22). For the function \phi_{\varepsilon}^{(space)} given in Eq (9.23), the proof is completely analogous to that of [13, Proposition 5]. This completes the proof of Proposition 6.

    We now give an asymptotic representation of the second term \mathcal{M}_{0}(x, t) in Eq (9.19), that is,

    \begin{eqnarray} \mathcal{M}_{0}(x,t) = \hat{L}v^{0}+\digamma\left(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^2},\omega\right)\nabla \nabla v^{0}+\Upsilon^{\varepsilon}, \end{eqnarray} (10.7)

    where \digamma(\frac{x}{\varepsilon}, \frac{t}{\varepsilon^2}, \omega) is a stationary matrix-field with a zero average and \Upsilon^{\varepsilon} is a non-stationary term; they are defined in Lemma 10.1. Additionally, u_{2}^{\varepsilon} and u_{3}^{\varepsilon} satisfy

    \begin{eqnarray} (\partial_t-L^{\varepsilon})u_{2}^{\varepsilon} = -\digamma(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega)\nabla\nabla v^{0}, \; (\partial_t-L^{\varepsilon})u_{3}^{\varepsilon} = -\Upsilon^{\varepsilon}, \end{eqnarray} (10.8)

    respectively, and

    \begin{eqnarray*} \|u_{2}^{\varepsilon}\|_{\infty}\rightarrow 0,\quad \|u_{3}^{\varepsilon}\|_{\infty} \rightarrow 0\; \mbox{as}\; \varepsilon \rightarrow 0. \end{eqnarray*}

    For the corrector \chi , from the sublinearity of \chi^\varepsilon , we have

    \begin{eqnarray} \left\|\varepsilon\chi\left(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2}\right)\nabla v^{0}(x,t)\right\|_{L^{2}\left(\mathbb{R}^{d+1}\right)}\rightarrow 0\;\mbox{as}\; \varepsilon \rightarrow 0. \end{eqnarray} (10.9)

    This yields

    \begin{eqnarray*} \|w^{\varepsilon}-v^{0}\|\rightarrow 0\; \mbox{as}\; \varepsilon \rightarrow 0. \end{eqnarray*}
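    For completeness, a sketch of why the convergence (10.9) holds, assuming (as in Appendix C) that \mathrm{supp}\,v^{0} is bounded and that \nabla v^{0} is bounded: writing \chi^{\varepsilon}(x,t) = \varepsilon\chi(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2}) and using Lemma 9.5,

    \begin{eqnarray*} \left\|\varepsilon\chi\left(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2}\right)\nabla v^{0}\right\|_{L^{2}}^{2}\leq\|\nabla v^{0}\|_{\infty}^{2}\int_{\mathrm{supp}\,\nabla v^{0}}|\chi^{\varepsilon}(x,t)|^{2}dxdt\rightarrow 0\;\mbox{as}\;\varepsilon\rightarrow 0, \end{eqnarray*}

    since \chi^{\varepsilon}\rightarrow 0 in L_{loc}^{2}(\mathbb{R}^{d+1}) and \mathrm{supp}\,\nabla v^{0} is contained in a bounded subset of \mathbb{R}^{d}\times[0,T] .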

    With this choice of \chi, u_{2}^{\varepsilon} and u_{3}^{\varepsilon} , the expression (\partial_t-L^{\varepsilon})w^{\varepsilon} can be rearranged as follows:

    \begin{eqnarray} (\partial_t-L^{\varepsilon})w^{\varepsilon}& = &(\partial_t-L^{\varepsilon})(v^0+\varepsilon\chi(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega)\nabla v^0)+(\partial_t-L^{\varepsilon})(u_{2}^{\varepsilon}+u_{3}^{\varepsilon})\\ & = &\left(\partial_t-L^{\varepsilon}\right) v^{\varepsilon}+\phi_{\varepsilon}. \end{eqnarray} (10.10)

    From Proposition 6, \|\phi_{\varepsilon}\|_{2} vanishes as \varepsilon \rightarrow 0 . This implies that

    \begin{eqnarray} \|w^{\varepsilon}-v^{\varepsilon}\|_{L^{2}((0,T), L^2(\mathbb{R}^{d}))}\rightarrow 0\;\mbox{as}\; \varepsilon \rightarrow 0,\;\mathbb{P}-a.s. \end{eqnarray} (10.11)

    Proof of Theorem 9.1. The proof is almost the same as that of Theorem 3.2 on the homogenization of nonlinear nonlocal parabolic equations with time-dependent coefficients under a periodic structure, discussed in Sections 3–7; the difference is that the periodic structure in the nonlocal operator is now replaced with the stationary one. Combining Eq (10.11) with the triangle inequality and the density of the Schwartz space in L^2 , Theorem 9.1 is proved.

    In this section we only state some results for the corresponding equations with a nonlinear nonlocal operator for p\neq1 . The proofs are rather long and tedious, so we omit the details.

    Case I. 0 < p < 1 .

    We need to show that the nonlinear term will make the corrector \chi depend on v(x, t) and \varepsilon , and that \chi^\varepsilon\to \chi as \varepsilon \to 0.

    Theorem 11.1. Assume that the linear condition is satisfied; then there exists a unique map \chi : \mathbb{R}^{d+1}\times\Omega\rightarrow \mathbb{R} such that

    \begin{eqnarray} \frac{1}{p}\left|v\right|^{(1-p)/p}\partial_{s}\chi-\mathfrak{A}_\omega\chi = 0\; in \; \mathbf{L}^{\mathbf{2}}, \end{eqnarray} (11.1)

    and \forall z\in\mathbb{R}^d , \zeta_{z}(x, t, y, s, \omega) = \chi(x, t, y+z, s, \omega)-\chi(x, t, y, s, \omega)\in L_{loc}^2(\mathbb{R}^{d+1}, \mathbf{L}^{\mathbf{2}}) , satisfying

    \begin{eqnarray*} \int_{\widetilde{Q}_{1}}\zeta_{z}(x,t,y,s,\omega)dyds = 0,\;\mathbb{P}-a.s. \end{eqnarray*}

    We have the convergence

    \begin{eqnarray*} \chi^{\varepsilon}(x,t,\omega) = \varepsilon\chi\Big(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^{2}},\omega\Big)\stackrel{\varepsilon\rightarrow 0}{\longrightarrow} 0\;\mathit{\mbox{in}}\; L_{loc}^{2}(\mathbb{R}^{d+1}\times\mathbb{R}^{d+1}),\;\mathbb{P}-a.s. \end{eqnarray*}

    In addition, the positive definite matrix \Theta is defined by

    \Theta = \frac{1}{2}\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{\mathbb{R}^{d}}\int_{\Omega}\Big(z\otimes z-z\otimes\zeta_{-z}(x,t,0,s,\omega)\Big)J(z,s)\mu(0,\omega)\mu(-z,\omega)dzdsd\mathbf{P}(\omega).
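    For orientation: when p = 1 (and v\neq 0 ), the weight \frac{1}{p}|v|^{(1-p)/p} in Eq (11.1) equals 1 , so the cell problem reduces to the corrector equation of the linear case,

    \begin{eqnarray*} \partial_{s}\chi-\mathfrak{A}_\omega\chi = 0\; \mathit{\mbox{in}} \; \mathbf{L}^{\mathbf{2}}. \end{eqnarray*}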

    Case II. 1 < p\leq2 .

    Theorem 11.2. Assume that the linear condition is satisfied; then there exists a unique map \chi : \mathbb{R}^{d+1}\times\Omega\rightarrow \mathbb{R} such that

    \begin{eqnarray} \partial_{s}\natural-\mathfrak{A}^1_\omega\natural = 0\; in \; \mathbf{L}^{\mathbf{2}}, \end{eqnarray} (11.2)

    where

    \begin{eqnarray*} &&\mathfrak{A}^1_\omega\natural = \int_{\mathbb{T}^{d}}J(\xi-q,t)\nu(\xi,q)\Big(q-\xi+p|v|^{(p-1)/p}\natural(x,t,\xi,s)-p|v|^{(p-1)/p}\natural(x,t,q,s)\Big)dq,\\ &&\chi^{k}(x,t,y,s) = \left\{\begin{array}{ll} p|v^{0}|^{(p-1)/p}\natural^{k}(x,t,y,s) & \mathit{\mbox{if}}\quad v^{0}(x,t)\neq 0, \\ 0 & \mathit{\mbox{if}}\quad v^{0}(x,t) = 0. \end{array}\right. \end{eqnarray*}

    We have the convergence

    \begin{eqnarray*} \natural^{\varepsilon}(x,t,\omega) = \varepsilon\natural(x,t,\frac{x}{\varepsilon},\frac{t}{\varepsilon^{2}},\omega)\stackrel{\varepsilon\rightarrow 0}{\longrightarrow} 0\;\mathit{\mbox{in}}\; L_{loc}^{2}(\mathbb{R}^{d+1}\times\mathbb{R}^{d+1}),\;\mathbb{P}-a.s. \end{eqnarray*}

    In addition, the positive definite matrix \Theta is the same as in the case 0 < p < 1.

    Results for non-self-similar scales are similar to those obtained above.

    Remark 4. Two parts of the proof here differ from the case of the linear operator. The first is that the random corrector depends on the macroscopic and microscopic variables and on the solution u of the equation, which requires more approximations, as in the periodic case, to obtain the existence of the corrector. The second is that the heterogeneous solution u^\varepsilon converges to a homogeneous solution u(x, t) . Usually one can find a corrector \chi depending on u(x, t) , but this creates the problem of not having enough information about the regularity of the map u \mapsto \chi(\cdot, \cdot, u, \omega) , and it requires us to develop some useful tools to overcome the difficulty. In 2022, Cardaliaguet, Dirr and Souganidis [42] dealt with the homogenization of a class of nonlinear parabolic equations and the corresponding random corrector

    \begin{eqnarray*} \chi^\varepsilon (x,t,\omega) = \chi(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\nabla u(x,t),\omega). \end{eqnarray*}

    In order to circumvent this difficulty, they introduced a localization argument for the gradient of the corrector, based on a piecewise constant approximation of \nabla u . Whether these techniques extend to more general equations (such as doubly nonlinear equations or fractional diffusion equations) is a direction we will pursue in the future.

    The research was supported by the National Natural Science Foundation of China (No.12171442). The research of J. Chen is partially supported by the CSC under grant No. 202206160033.

    Data sharing is not applicable to this article as no new data were created or analyzed in this study.

    The authors have no conflicts to disclose.

    Junlong Chen carried out the homogenization theory of partial differential equations, and Yanbin Tang carried out the reaction-diffusion equations and the perturbation theory of partial differential equations. All authors carried out the proofs and conceived the study. All authors read and approved the final manuscript.

    Proof of Lemma 3.1. Let u, v\in L^{1}(\mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}) be such that

    \max\{||u||_{1},||v||_{1},||u||_{\infty},||v||_{\infty}\}\leq\Lambda

    and let \Lambda > 0 be a given constant. We use the integrability condition and the local Lipschitz continuity, with Lipschitz constant L_{\Lambda} , of \Gamma^{\delta}(u(x), u(y), x, y, t) . If we take -\Lambda < a, b, c < \Lambda such that |c-b|, |a-b|\geq \delta , then for a given function f we have

    \begin{eqnarray} \left|\frac{f(a)-f(b)}{a-b}-\frac{f(c)-f(b)}{c-b}\right|\leq\frac{2}{\delta^{2}}\left(\max\limits_{|\xi| < \Lambda}|f(\xi)|+\Lambda\max\limits_{|\xi| < \Lambda}\left|f^{\prime}(\xi)\right|\right)|a-c|, \end{eqnarray} (A.1)
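    The elementary bound (A.1) can be obtained from the algebraic identity (a sketch):

    \begin{eqnarray*} \frac{f(a)-f(b)}{a-b}-\frac{f(c)-f(b)}{c-b} = \frac{\big(f(a)-f(c)\big)(c-b)+\big(f(c)-f(b)\big)(c-a)}{(a-b)(c-b)}, \end{eqnarray*}

    together with |a-b|, |c-b|\geq\delta , |a-b| < 2\Lambda and the mean value theorem applied to f on (-\Lambda, \Lambda) .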

    From Eq (A.1) we easily get the local Lipschitz continuity of \Gamma^{\delta} , and we have

    \begin{eqnarray} &&||\mathcal{L}^{t,\delta}_{u}u-\mathcal{L}^{t,\delta}_{v}v||_{\infty}\leq||\mathcal{L}^{t,\delta}_{u}(u-v)||_{\infty}+||(\mathcal{L}^{t,\delta}_{u}-\mathcal{L}^{t,\delta}_{v})v||_{\infty}\\ &\leq&\sup\limits_{x\in\mathbb{R}^{d}}|\int_{\mathbb{R}^{d}}(u(x)-u(y)-v(x)+v(y))\Gamma^{\delta}(u(x),u(y),x,y,t)dy|\\ &+&\sup\limits_{x\in\mathbb{R}^{d}}|\int_{\mathbb{R}^{d}}(v(x)-v(y))(\Gamma^{\delta}\Big(u(x),u(y),x,y,t)-\Gamma^{\delta}(v(x),v(y),x,y,t))dy|\\ &\leq&2\alpha_2\|u-v\|_{\infty}\sup\limits_{x\in\mathbb{R}^{d}}\Big|\int_{\mathbb{R}^{d}}J(x-y,t)\frac{|u(y)|^{p-1}u(y)-|u(x)|^{p-1}u(x)}{u(y)-u(x)}dy\Big|\\ &+&2\alpha_2\|v\|_{\infty}L_{\Lambda}\int_{\mathbb{R}^{d}}J(x-y,t)\Big(|u(y)-v(y)|+|u(x)-v(x)|\Big)dy\\ &\leq&(2\alpha_2\|u-v\|_{\infty}C_p(\|u\|_{\infty}+\|v\|_{\infty})^{p-1}+2\alpha_2\|v\|_{\infty}L_{\Lambda}\|u-v\|_{\infty})\int_{\mathbb{R}^{d}}J(x,t)dx\\ &\leq&M(p,\alpha_2,\Lambda)\|u-v\|_{\infty}, \end{eqnarray} (A.2)

    where C_p and M(p, \alpha_2, \Lambda) are constants and L_{\Lambda} = \frac{2(p+1)}{\delta^2}\Lambda^p .
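    For instance, the stated constant L_{\Lambda} follows from Eq (A.1) applied to the nonlinearity f(\xi) = |\xi|^{p-1}\xi appearing in Eq (A.2), assuming p\geq 1 so that f^{\prime}(\xi) = p|\xi|^{p-1} is bounded on |\xi| < \Lambda :

    \begin{eqnarray*} \frac{2}{\delta^{2}}\Big(\max\limits_{|\xi| < \Lambda}|f(\xi)|+\Lambda\max\limits_{|\xi| < \Lambda}|f^{\prime}(\xi)|\Big) = \frac{2}{\delta^{2}}\big(\Lambda^{p}+\Lambda\cdot p\Lambda^{p-1}\big) = \frac{2(p+1)}{\delta^{2}}\Lambda^{p} = L_{\Lambda}. \end{eqnarray*}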

    Similarly, we have

    \begin{eqnarray*} &&||\mathcal{L}^{t,\delta}_{u} u-\mathcal{L}^{t,\delta}_{v} v||_{1}\leq ||\mathcal{L}^{t,\delta}_{u}(u-v)||_{1}+||(\mathcal{L}^{t,\delta}_{u}-\mathcal{L}^{t,\delta}_{v})v||_{1}\nonumber\\ &\leq&\int_{\mathbb{R}^{d}}\Big|\int_{\mathbb{R}^{d}}\Big(u(x)-u(y)-v(x)+v(y)\Big)\Gamma^{\delta}\Big(u(x),u(y),x,y,t\Big)dy\Big|dx\nonumber\\ &+&\int_{\mathbb{R}^{d}}\Big|\int_{\mathbb{R}^{d}}\Big(v(x)-v(y)\Big)\Big(\Gamma^{\delta}(u(x),u(y),x,y,t)-\Gamma^{\delta}(v(x),v(y),x,y,t)\Big)dy\Big|dx\\ &\leq& 2\alpha_2C_p(\|u\|_{\infty}+\|v\|_{\infty})^{p-1}\int_{\mathbb{R}^{d}}|u(x)-v(x)|dx\sup\limits_{x\in\mathbb{R}^{d}}\Big|\int_{\mathbb{R}^{d}}J(x-y,t)dy\Big|\nonumber\\ &+&2\alpha_2\int_{\mathbb{R}^{d}}|v(x)|\int_{\mathbb{R}^{d}}L_{\Lambda}J(x-y,t)\Big(|u(y)-v(y)|+|u(x)-v(x)|\Big)dydx;\nonumber \end{eqnarray*}

    thus,

    \begin{eqnarray} ||\mathcal{L}^{t,\delta}_{u} u-\mathcal{L}^{t,\delta}_{v} v||_{1}&\leq& 2\alpha_2\|u-v\|_{1}C_p(\|u\|_{\infty}+\|v\|_{\infty})^{p-1}\int_{\mathbb{R}^{d}}J(x,t)dx\\ &+&2\alpha_2L_{\Lambda}\Big(\|v\|_{\infty}\int_{\mathbb{R}^{d}}J(x,t)dx+\|v\|_{1}\Big)\|u-v\|_{1}\\ &\leq& M(p,\alpha_2,\Lambda)\|u-v\|_{1}. \end{eqnarray} (A.3)

    This completes the proof of Lemma 3.1.

    Lemma B.1. Let (\Omega, \mathcal{F}, \mathbb{P}) be a probability space and let \mathfrak{U}: \mathbb{R}^{d+1}\times\Omega\rightarrow \mathbb{R} have the time derivative \partial_t \mathfrak{U} and increments \zeta_{z}(\xi, t, \omega) = \mathfrak{U}(z+\xi, t, \omega)-\mathfrak{U}(\xi, t, \omega) . Then, \mathbb{P} -a.s.,

    \lim\limits_{R\rightarrow +\infty}R^{-(d+2)}\int_{Q_{R}}| \mathfrak{U}(x,0)|^{2}dx = 0\;\mathit{\mbox{and}}\;\lim\limits_{R\rightarrow +\infty}R^{-(d+3)}\int_{\widetilde{Q}_{R}}| \mathfrak{U}(x,t)|^{2}dxdt = 0.

    That is, given \mathfrak{U}^{\varepsilon}(x, t, \omega) = \varepsilon\mathfrak{U}(x/\varepsilon, t/\varepsilon, \omega) , for any fixed R > 0 , we have

    \lim\limits_{\varepsilon\rightarrow 0}\int_{Q_{R}}| \mathfrak{U}^{\varepsilon}(x,0)|^{2}dx = 0\;\mathit{\mbox{and}}\;\lim\limits_{\varepsilon\rightarrow 0}\int_{\widetilde{Q}_{R}}| \mathfrak{U}^{\varepsilon}(x,t)|^{2}dxdt = 0, \; \mathbb{P}-a.s.

    Proof. The result is not surprising, as it reflects the sublinear growth property of the corrector; this is a property of oscillating functions and it can be seen in [41, Lemma 4.1]. We can apply [42, Lemma A.2] and [43, Theorem 5.3] and use the classical nonlocal Poincaré inequality for any \omega \in \Omega_{0} \subset \Omega ; the family \left(\mathfrak{U}^{\varepsilon}\right)_{\varepsilon > 0} is relatively compact in L_{loc}^{2}\left(\mathbb{R}^{d+1}\right) , thus \mathfrak{U}^{\varepsilon_{k}}\rightarrow 0 in L_{loc}^{2}\left(\mathbb{R}^{d+1}\right) as k\rightarrow \infty , \mathbb{P} -a.s. Taking expectations, we have \mathfrak{U}^{\varepsilon}\rightarrow 0 as \varepsilon\rightarrow 0 . The function \mathfrak{U}^{\varepsilon}(\cdot, 0) also satisfies the assertion of [41, Lemma 4.1], so we omit the details.
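    For completeness, the equivalence between the two formulations in Lemma B.1 follows from a change of variables; a sketch for the spatial statement, with y = x/\varepsilon and R^{\prime} = R/\varepsilon :

    \begin{eqnarray*} \int_{Q_{R}}|\mathfrak{U}^{\varepsilon}(x,0)|^{2}dx = \varepsilon^{d+2}\int_{Q_{R^{\prime}}}|\mathfrak{U}(y,0)|^{2}dy = R^{d+2}(R^{\prime})^{-(d+2)}\int_{Q_{R^{\prime}}}|\mathfrak{U}(y,0)|^{2}dy\rightarrow 0\;\mbox{as}\;\varepsilon\rightarrow 0, \end{eqnarray*}

    since R^{\prime}\rightarrow+\infty for fixed R ; the space-time statement is analogous.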

    Proof of Lemma 9.3. The idea of the proof comes from [41, Proposition 4.5]; here we only sketch the main ideas.

    If |\xi| > 3R , then \varphi\left(\frac{|\xi|}{R}\right) = 0 . Also, \varphi\left(\frac{|\xi+z|}{R}\right) = 0 if |\xi| > 3R and |z|\leq R . We obtain

    \begin{eqnarray} &&\int_{\mathbb{R}^{d}}\int_{|\xi| > 3R}J(z,t)\mu(\xi+z,\omega)\mu(\xi,\omega)\left|\bar{\sigma}_{z}\left(T_{\xi}\omega\right)\right||\bar{\sigma}(\xi+z,\omega)|\varphi\left(\frac{|\xi+z|}{R}\right)d\xi d z \\ &&\leq{\alpha_{2}^{2}}\int_{|\eta|\leq 2R}\left(\int_{|z| > R}|z|J(z,t)\left|\bar{\sigma}_{z}\left(T_{\eta-z}\omega\right)\right|dz\right)\frac{1}{R}|\bar{\sigma}(\eta,\omega)| \varphi\left(\frac{|\eta|}{R}\right)d\eta \\ &&\leq{\alpha_{2}^{2}}\int_{|\eta|\leq 2R}\phi\left(T_{\eta}\omega\right)\frac{1}{R}|\bar{\sigma}(\eta,\omega)|\varphi\left(\frac{|\eta|}{R}\right)d\eta, \end{eqnarray} (C.1)

    where \eta = \xi+z and

    \begin{eqnarray} \phi\left(T_{\eta}\omega,t\right) = \int_{\mathbb{R}^{d}}|z|J(z,t)\left|\bar{\sigma}_{z}\left(T_{\eta-z}\omega\right)\right|dz. \end{eqnarray} (C.2)

    Due to the integrability in probability of \bar{\sigma}_{z}(\omega) against the weighted kernel function, \phi(\omega)\in L^{2}(\Omega) , and we have

    \begin{eqnarray} &&\mathbb{A}_1\leq{\alpha_{2}^{2}}\int_{|\eta|\leq R}\phi\left(T_{\eta}\omega\right)\frac{|\bar{\sigma}(\eta,\omega)|}{R}d\eta+{\alpha_{2}^{2}}\int_{R\leq|\eta|\leq 2R}\phi\left(T_{\eta}\omega\right) \frac{|\bar{\sigma}(\eta,\omega)|}{R}\varphi\left(\frac{|\eta|}{R}\right)d\eta \\ &&\leq\phi_1(R)+\frac{C}{\alpha R|R^{\prime}(t)|}\int_{|\eta|\leq 2R}\phi^{2}\left(T_{\eta}\omega\right)d\eta+\alpha|R^{\prime}(t)|\int_{R\leq|\eta|\leq 2R}\frac{|\bar{\sigma}(\eta, \omega)|^{2}}{R}\varphi\left(\frac{|\eta|}{R}\right)d\eta \\ &&\leq\phi_1(R)+\frac{CR^{d-1}}{\alpha|R^{\prime}(t)|}+\alpha|R^{\prime}(t)|\int_{R\leq|\eta|\leq 2R}\frac{|\bar{\sigma}(\eta, \omega)|^{2}}{R}\varphi\left(\frac{|\eta|}{R}\right)d\eta. \end{eqnarray} (C.3)

    Applying the Cauchy-Schwarz inequality, the boundedness of \phi and the sublinear property of \bar{\sigma} (proved in Lemma 9.5) gives

    \begin{eqnarray*} \phi_1(R)\lesssim R^d\Big(\frac{1}{R^d}\int_{|\eta|\leq R}\phi^2\left(T_{\eta}\omega\right)d\eta\Big)^{\frac{1}{2}}\Big(\frac{1}{R^d}\int_{|\eta|\leq R}(\frac{|\bar{\sigma}(\eta,\omega)|}{R})^2 d\eta\Big)^{\frac{1}{2}} \lesssim_R c_d R^{d}, \end{eqnarray*}

    where c_d is sufficiently small.

    The first term on the right-hand side of \mathbb{A}_3 satisfies

    \begin{eqnarray*} \int_{\mathbb{R}^{d}}\int_{\substack{2R\leq|\xi|\leq 3R,\\ |z|\leq R}} J(z,t)\mu(\xi+z,\omega)\mu(\xi,\omega)\bar{\sigma}_{z}\bar{\sigma}(\xi,\omega)(\varphi(\frac{|\xi+z|}{R})-\varphi(\frac{|\xi|}{R}))d\xi dz \lesssim_R c_s R^d, \end{eqnarray*}

    where we use the estimate of I_2 in [41, Proposition 4.5] and the sublinear property of \bar{\sigma} , and c_s is sufficiently small. Applying the inequality \Big|\varphi\Big(\frac{|x|}{R}\Big)-\varphi\Big(\frac{|y|}{R}\Big)\Big|\leq\frac{|x-y|}{R} , for |\xi|\geq R, we get

    \begin{eqnarray} &&\int_{\mathbb{R}^{d}}\int_{R\leq|\xi|\leq 2R}J(z,t)\mu(\xi+z,\omega)\mu(\xi,\omega)\bar{\sigma}_{z}\bar{\sigma}(\xi,\omega)\Big(\varphi\Big(\frac{|\xi+z|}{R}\Big)-\varphi\Big(\frac{|\xi|}{R}\Big)\Big)d\xi dz\\ &&\leq\alpha_{2}^{2}\int_{\mathbb{R}^{d}}\int_{R\leq|\xi|\leq 2R}J(z,t)\Big|\bar{\sigma}_{z}\Big(\xi,\omega\Big)\Big|\Big|\bar{\sigma}(\xi,\omega)\Big|\frac{|z|}{R}d\xi dz \\ &&\leq\frac{C}{\alpha R|R^{\prime}(t)|}\int_{|\eta|\leq 2R}\phi^{2}(T_{\eta}\omega)d\eta+\alpha|R^{\prime}(t)|\int_{R\leq|\eta|\leq 2R} \frac{|\bar{\sigma}(\eta,\omega)|^{2}}{R}d\eta\\ &&\leq\frac{CR^{d-1}}{\alpha|R^{\prime}(t)|}+\alpha |R^{\prime}(t)|\int_{R\leq|\eta|\leq 2R}\frac{|\bar{\sigma}(\eta,\omega)|^{2}|\eta|}{R}d\eta. \end{eqnarray} (C.4)

    \mathbb{A}_{2 < }\leq\frac{C}{\sqrt{R(t)}}R(t)^d and \mathbb{A}_{2 > }\leq c_2R(t)^d can be obtained directly as in [41, Proposition 4.5]. This ends the proof of Lemma 9.3.

    Proof of Lemma 10.1. For any \varphi\in L^{2}((0, T)\times\mathbb{R}^{d}) , we can find that

    \begin{eqnarray} &&J_{2}^{\varepsilon}(\varphi) = \frac{1}{2}\int_{\mathbb{R}}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}J(z,\frac{t}{\varepsilon^2})z\mu\Big(\frac{x}{\varepsilon},\omega\Big)\mu\Big(\frac{x}{\varepsilon}-z, \frac{t}{\varepsilon^2},\omega\Big)\chi\Big(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega\Big) \\ &&\qquad\cdot\Big(\nabla\nabla v^{0}(x,t)\varphi(x,t)-\nabla\nabla v^{0}(x-\varepsilon z,t)\varphi(x-\varepsilon z,t)\Big)dxdz \end{eqnarray} (C.5)

    is a bounded linear functional on L^2(\mathbb{R}\times\mathbb{R}^d) . Then, by the Riesz theorem, for a.e. \omega there exists a function \beta_\varepsilon\in L^2(\mathbb{R}\times\mathbb{R}^d) such that J_{2}^{\varepsilon}(\varphi) = (\beta_\varepsilon, \varphi) , which is the pairing defining \Upsilon^{\varepsilon} = (\beta_\varepsilon, v_2^{\varepsilon}) . Recalling Eq (10.5),

    \begin{eqnarray} (\partial_t-L^{\varepsilon})v_{3}^{\varepsilon}(x,\omega) = -\digamma^{\varepsilon}(x,t,\omega) = -\digamma\Big(\frac{x}{\varepsilon},\frac{t}{\varepsilon^2},\omega\Big)\cdot\nabla\nabla v^{0}(x,t). \end{eqnarray} (C.6)

    Since supp v^{0}\subset B is a bounded subset of \mathbb{R}^{d} and \int_{\mathbb{R}^{d}}J_1(z)|z|\left|\zeta_{-z}(\omega)\right|dz\in L^{2}(\Omega), by the Birkhoff theorem v_{3}^{\varepsilon}\in L^{2}((0, T)\times\mathbb{R}^{d}) .

    Our goal is to prove that \|v_{3}^{\varepsilon}\|_{L^{2}((0, T)\times\mathbb{R}^{d})}\rightarrow 0 as \varepsilon\rightarrow 0 . We first treat v_{2}^{\varepsilon} and show that it is bounded in L^{2}((0, T)\times\mathbb{R}^{d}) . Denote

    \begin{eqnarray*} &&\mathfrak{G}_{1}^{2} = \frac{1}{2\varepsilon^{2}}\int_{0}^T{}\int_{\mathbb{R}^{2d}}J(z,\frac{t}{\varepsilon^2})\mu (\frac{x}{\varepsilon},\omega)\mu(\frac{x}{\varepsilon}-z,\omega)(v_{2}^{\varepsilon}(x-\varepsilon z,t)-v_{2}^{\varepsilon}(x,t))^{2}dzdxdt, \\ &&\mathfrak{G}_{2}^{2} = \int_{0}^T\int_{\mathbb{R}^{d}}\Big(v_{2}^{\varepsilon}(x,t)\Big)^{2}dxdt. \end{eqnarray*}

    We first give an important lemma.

    Lemma C.1. [41, Lemma 5.1] For v_2^\varepsilon in Eq (10.4) and J_{2}^{\varepsilon}(\varphi) in Eq (C.5), we have

    \begin{eqnarray*} J_{2}^{\varepsilon}(v_2^\varepsilon)\leq(\mathfrak{G}_1+\mathfrak{G}_2)\cdot o(1)\;\mathit{\mbox{as}}\; \varepsilon\rightarrow 0. \end{eqnarray*}

    Lemma C.2. For v_2^\varepsilon in Eq (10.4) and a.e. \omega , we have that ||v^{\varepsilon}_2||_{L^{2}((0, T)\times\mathbb{R}^d)}\to 0\; \mathit{\mbox{as}}\; \varepsilon\rightarrow 0.

    Proof. Multiplying both sides of Eq (10.4) by v_{2}^{\varepsilon} , we have that ((\partial_t-L^{\varepsilon})v_{2}^{\varepsilon}, v_{2}^{\varepsilon}) = (-\Upsilon^{\varepsilon}, v_{2}^{\varepsilon}). Considering the second term on the left-hand side, we have

    \begin{eqnarray*} &&-\int_0^s\iint_{\mathbb{R}^{2d}}J(z,\frac{t}{\varepsilon^2})\mu(\frac{x}{\varepsilon},\omega)\mu(\frac{x}{\varepsilon}-z,\omega) (v_{2}^{\varepsilon}(x-\varepsilon z,t)-v_{2}^{\varepsilon}(x,t))dz v_{2}^{\varepsilon}(x,t)dxdt \nonumber\\ && = \frac{1}{2}\int_0^s\iint_{\mathbb{R}^{2d}}J(z,\frac{t}{\varepsilon^2})\mu(\frac{x}{\varepsilon},\omega)\mu(\frac{x}{\varepsilon}-z,\omega) (v_{2}^{\varepsilon}(x-\varepsilon z,t)-v_{2}^{\varepsilon}(x,t))^{2}dzdxdt. \end{eqnarray*}

    For any s\in(0, T) , integrating over [0, s] gives

    \begin{eqnarray*} &&\frac{1}{2}\int_{\mathbb{R}^{d}}v_{2}^{2}(x,s)dx-\int_{0}^{s}\int_{\mathbb{R}^{d}}\Upsilon^{\varepsilon}(x,t)v^{\varepsilon}_{2}(x,t)dxdt = \int_{0}^{s}(L^{\varepsilon}v^{\varepsilon}_{2},v^{\varepsilon}_{2})dt\leqslant 0,\\ &&\mathfrak{G}_1^2+\mathfrak{G}_2^2\lesssim||u_{2}^{\varepsilon}||_{\infty}^2+\mathfrak{G}_1^2\lesssim(\mathfrak{G}_1+\mathfrak{G}_2)\cdot o(1); \end{eqnarray*}

    using the Gronwall inequality, we get that ||v^{\varepsilon}_2||_{L^{2}((0, T)\times\mathbb{R}^d)}\to 0 as \varepsilon \to 0.

    We focus on \{v_3^{\varepsilon}\} defined in Eq (10.5). Our goal is to prove that ||v^{\varepsilon}_3||_{L^{2}((0, T)\times\mathbb{R}^d)}\to 0. We first prove its compactness in L^2 .

    Lemma C.3. \{v_3^{\varepsilon}\} is compact and v_{3}^{\varepsilon}\to 0 as \varepsilon\to 0 in {L^{2}((0, T)\times\mathbb{R}^d)}.

    Proof. See the details in [41, Lemmas 5.3 and 5.4].
