Research article

Homogenization of a pseudo-parabolic system via a spatial-temporal decoupling: Upscaling and corrector estimates for perforated domains

  • We determine corrector estimates quantifying the convergence speed of the upscaling of a pseudo-parabolic system containing drift terms incorporating the separation of length scales with relative size $\epsilon\ll1$. To achieve this goal, we exploit a natural spatial-temporal decomposition, which splits the pseudo-parabolic system into an elliptic partial differential equation and an ordinary differential equation coupled together. We obtain upscaled model equations, explicit formulas for effective transport coefficients, as well as corrector estimates delimiting the quality of the upscaling. Finally, for special cases we show convergence speeds for global times, i.e., $t\in\mathbf{R}_+$, by using time intervals expanding to the whole $\mathbf{R}_+$ simultaneously with passing to the homogenization limit $\epsilon\rightarrow0$.

    Citation: Arthur J. Vromans, Fons van de Ven, Adrian Muntean. Homogenization of a pseudo-parabolic system via a spatial-temporal decoupling: Upscaling and corrector estimates for perforated domains[J]. Mathematics in Engineering, 2019, 1(3): 548-582. doi: 10.3934/mine.2019.3.548



    Corrosion of concrete by acidic compounds is a problem for construction, as corrosion can lead to erosion and degradation of the structural integrity of concrete structures; see e.g., [1,2]. Structural failures and collapse as a result of concrete corrosion [3,4,5] are detrimental to society as they often impact crucial infrastructure, typically leading to high costs [6,7]. From a more positive side, these failures can be avoided with sufficiently smart monitoring and timely repairs based on a priori calculations of the maximal lifespan of the concrete. These calculations have to take into account the heterogeneous nature of the concrete [8], the physical properties of the concrete [9], the corrosion reaction [10], and the expansion/contraction behaviour of corroded concrete mixtures, see [11,12,13]. For example, the typical length scale of the concrete heterogeneities is much smaller than the typical length scale used in concrete construction [8]. Moreover, concrete corrosion has a characteristic time that is many orders of magnitude smaller than the expected lifespan of concrete structures [10]. Hence, it is computationally expensive to use the heterogeneity length scale for detailed simulations of concrete constructions such as bridges. However, by using averaging techniques to obtain effective properties on the typical length scale of concrete constructions, one can significantly decrease computational costs without necessarily losing accuracy.

    Tractable real-life problems usually involve a hierarchy of separated scales: from a microscale via intermediate scales to a macroscale. With averaging techniques one can obtain effective behaviours at a higher scale from the underlying lower scale. For example, Ern and Giovangigli used averaging techniques on statistical distributions in kinetic chemical equilibrium regimes to obtain continuous macroscopic equations for mixtures, see [14] or see Chapter 4 of [15] for a variety of effective macroscopic equations obtained with this averaging technique.

    Of course, the use of averaging techniques to obtain effective macroscopic equations in mixture theory is by itself not new, see Figure 7.2 in [16] for an early application from 1934. The main problem with averaging techniques is choosing the right averaging methodology for the problem at hand. In this respect, periodic homogenization can be regarded as a successful method, since it expresses conditions under which macroscale behaviour can be obtained in a natural way from microscale behaviour. Furthermore, the homogenization method has been successfully used to derive not only equations for capturing macroscale behaviours but also convergence/corrector speeds depending on the scale separation between the macroscale and the microscale.

    To obtain the macroscopic behaviour, we perform the homogenization by employing the concept of two-scale convergence. Moreover, we use formal asymptotic expansions to determine the speed of convergence via so-called corrector estimates. These estimates follow a procedure similar to those used by Cioranescu and Saint Jean-Paulin in Chapter 2 of [17]. Derivation via homogenization of constitutive laws, such as those arising from mixture theory, is a classical subject in homogenization, see [18]. Homogenization methods, upscaling, and corrector estimates are active research subjects due to the interdisciplinary nature of applying these mathematical techniques to real world problems and the complexities arising from the problem-specific constraints.

    The microscopic equations of our concrete corrosion model are conservation laws for mass and momentum for an incompressible mixture, see [19] and [20] for details. The existence of weak solutions of this model was shown in [21] and Chapter 2 of [20]. The parameter space dependence of the existence region for this model was explored numerically in [19]. The two-scale convergence for a subsystem of these microscopic equations, a pseudo-parabolic system, was shown in [22]. This paper handles the same pseudo-parabolic system as in [22] but posed on a perforated microscale domain.

    In [23], Peszyńska, Showalter and Yi investigated the upscaling of a pseudo-parabolic system via two-scale convergence using a natural decomposition that splits the spatial and temporal behaviour. They looked at several different scale separation cases: classical case, highly heterogeneous case (also known as high-contrast case), vanishing time-delay case and Richards equation of porous media. These cases were chosen to showcase the ease with which upscaling could be done via this natural decomposition.

    In this paper, we point out that this natural decomposition from [23] can also be applied to a pseudo-parabolic system with suitably scaled drift terms. Moreover, for such a pseudo-parabolic system with drift we determine the convergence speed via corrector estimates. This is in contrast with [23], where no convergence speed was derived for any pseudo-parabolic system they presented. Using this natural decomposition, the corrector estimates for the pseudo-parabolic equation follow straightforwardly from those of the spatially elliptic system with corrections due to the temporal first-order ordinary differential equation. Corrector estimates with convergence speeds have been obtained for the standard elliptic system, see [17], but also for coupled systems related to pseudo-parabolic equations such as the coupled elliptic-parabolic system with a mixed third order term describing thermoelasticity in [24]. The convergence speed we obtain in this paper coincides for bounded spatial domains with known results for both elliptic systems and pseudo-parabolic systems on bounded temporal domains, see [25]. Finally, we apply our results to a concrete corrosion model, which describes the mechanics of concrete corrosion at a microscopic level with a perforated periodic domain geometry. Even though this model is linear, the main difficulty lies in determining effective macroscopic models for the mechanics of concrete corrosion based on the known microscopic mechanics model with such a complicated domain geometry. Obtaining these effective macroscopic models is difficult as the microscopic behavior is highly oscillatory due to the complicated domain geometry, while the macroscopic models need to encapsulate this behavior with a much less volatile effective behavior on a simple domain geometry without perforations or periodicity.
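    To fix ideas, the following schematic scalar example (it is not the system studied in this paper, whose precise form is given in Section 2) illustrates the kind of spatial-temporal decoupling meant above. For a pseudo-parabolic equation with relaxation parameter $\tau>0$,

    \begin{equation} \partial_t u-\nabla\cdot\left(a\nabla\left(u+\tau\partial_t u\right)\right) = f, \end{equation}

    setting $v := u+\tau\partial_t u$ turns the single equation into the coupled pair

    \begin{equation} v-\tau\nabla\cdot\left(a\nabla v\right) = u+\tau f\quad\text{(elliptic in $v$, with $t$ as a parameter)},\qquad \partial_t u = \frac{v-u}{\tau}\quad\text{(ODE in $t$, with ${\boldsymbol{x}}$ as a parameter)}. \end{equation}

    The Neumann problem (2.8a)–(2.8b) below has exactly this elliptic-plus-ODE structure, with tensor-valued coefficients and drift terms replacing the scalar coefficient $a$.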

    The remainder of this paper is divided into seven parts:

    Section 2: Notation and problem statement,

    Section 3: Main results,

    Section 4: Upscaling procedure,

    Section 5: Corrector estimates,

    Section 6: Application to a concrete corrosion model,

    Appendix A: Exact forms of coefficients in corrector estimates,

    Appendix B: Introduction to two-scale convergence.

    We introduce the description of the geometry of the medium in question with a variant of the construction found in [26]. Let $(0,T)$, with $T>0$, be a time interval and let $\Omega\subset\mathbf{R}^d$ for $d\in\{2,3\}$ be a simply connected bounded domain with a $C^2$-boundary $\partial\Omega$. Take $Y\subset\Omega$ a simply connected bounded domain; more precisely, there exists a diffeomorphism $\gamma:\mathbf{R}^d\rightarrow\mathbf{R}^d$ such that $\mathrm{Int}(\gamma([0,1]^d)) = Y$.

    We perforate $Y$ with a smooth open set $\mathcal{T} = \gamma(\mathcal{T}_0)$ for a smooth open set $\mathcal{T}_0\subset(0,1)^d$ such that $\overline{\mathcal{T}}\subset\overline{Y}$ with a $C^2$-boundary $\partial\mathcal{T}$ that does not intersect the boundary of $Y$, $\partial\mathcal{T}\cap\partial Y = \emptyset$, and introduce $Y^* = Y\setminus\overline{\mathcal{T}}$. Remark that $\partial\mathcal{T}$ is assumed to be $C^2$-regular.

    Let $G_0$ be a lattice* of the translation group $T^d$ on $\mathbf{R}^d$ such that $[0,1]^d = T^d/G_0$. Hence, we have the following properties: $\bigcup_{g\in G_0}g([0,1]^d) = \mathbf{R}^d$ and $(0,1)^d\cap g((0,1)^d) = \emptyset$ for all $g\in G_0$ other than the identity mapping. Moreover, we demand that the diffeomorphism $\gamma$ allows $G_\gamma := \gamma\circ G_0\circ\gamma^{-1}$ to be a discrete subgroup of $T^d$ with $\overline{Y} = T^d/G_\gamma$.

    * A lattice of a locally compact group $G$ is a discrete subgroup $H$ with the property that the quotient space $G/H$ has a finite invariant (under $G$) measure. A discrete subgroup $H$ of $G$ is a group $H\subset G$ under the group operations of $G$ such that there is (an open cover) a collection $\mathcal{C}$ of open sets $C\subset G$ satisfying $H\subset\bigcup_{C\in\mathcal{C}}C$ and for all $C\in\mathcal{C}$ there is a unique element $h\in H$ such that $h\in C$.

    Assume that there exists a sequence $(\epsilon_h)_h\subset(0,\epsilon_0)$ such that $\epsilon_h\rightarrow0$ as $h\rightarrow\infty$ (we omit the subscript $h$ when it is obvious from context that this sequence is meant). Moreover, we assume that for all $\epsilon_h\in(0,\epsilon_0)$ there is a set $G^{\epsilon_h}_\gamma = \{\epsilon_h g\ \text{for }g\in G_\gamma\}$ with which we introduce $\mathcal{T}^{\epsilon_h} = \Omega\cap G^{\epsilon_h}_\gamma(\mathcal{T})$, the set of all holes and parts of holes inside $\Omega$. Hence, we can define the domain $\Omega^{\epsilon_h} = \Omega\setminus\mathcal{T}^{\epsilon_h}$ and we demand that $\Omega^{\epsilon_h}$ is connected for all $\epsilon_h\in(0,\epsilon_0)$. We introduce for all $\epsilon_h\in(0,\epsilon_0)$ the boundaries $\partial_{int}\Omega^{\epsilon_h}$ and $\partial_{ext}\Omega^{\epsilon_h}$ as $\partial_{int}\Omega^{\epsilon_h} = \bigcup_{g\in G^{\epsilon_h}_\gamma}\{g(\partial\mathcal{T})\mid g(\overline{\mathcal{T}})\subset\Omega\}$ and $\partial_{ext}\Omega^{\epsilon_h} = \partial\Omega^{\epsilon_h}\setminus\partial_{int}\Omega^{\epsilon_h}$. The first boundary contains all the boundaries of the holes fully contained in $\Omega$, while the second contains the remaining boundaries of the perforated region $\Omega^{\epsilon_h}$. In Figure 1 a schematic representation of the domain components is shown.

    Figure 1.  A domain $\Omega$ with the thick black boundary $\partial\Omega$ on an $\epsilon$-sized periodic grid with grid cells $Y$, each of which contains a white circular perforation $\mathcal{T}$ and the blue bulk $Y^*$, yields the light-red coloured domain $\Omega^\epsilon$ with the thick red and black boundary $\partial_{ext}\Omega^\epsilon$ and the green internal perforation boundaries $\partial_{int}\Omega^\epsilon$. The thick red boundary parts of the perforations are locations where a choice has to be made between the boundary condition of the perforation edges and the boundary condition of $\partial\Omega$.

    Note that $\mathcal{T}$ does not depend on $\epsilon$, since such a dependence could give rise to unwanted complicating effects such as those treated in [27].

    Having the domains specified, we focus on defining the needed function spaces. We start by introducing $C_\#(Y)$, the space of continuous functions defined on $Y$ and periodic with respect to $Y$ under $G_\gamma$. To be precise:

    \begin{equation} C_\#(Y) = \left\{f\in C(\mathbf{R}^d)\;\middle|\;f\circ g = f\ \text{for all }g\in G_\gamma\right\}. \end{equation} (2.1)

    Hence, the property "$Y$-periodic" means "invariant under $G_\gamma$" for functions defined on $Y$. Similarly, the property "$Y$-periodic" means "invariant under $G_\gamma$" for functions defined on $Y^*$.

    With $C_\#(Y)$ at hand, we construct Bochner spaces like $L^p(\Omega;C_\#(Y))$ for $p\geq1$ integer. For a detailed explanation of Bochner spaces, see Section 2.19 of [28]. These types of Bochner spaces exhibit properties that hint at two-scale convergence, as defined in Section B. Similar function spaces are constructed for $Y^*$ in an analogous way.

    Introduce the space

    \begin{equation} \mathbb{V}_\epsilon = \left\{v\in H^1(\Omega^\epsilon)\;\middle|\;v = 0\ \text{on }\partial_{ext}\Omega^\epsilon\right\} \end{equation} (2.2)

    equipped with the seminorm

    \begin{equation} \|v\|_{\mathbb{V}_\epsilon} = \|\nabla v\|_{L^2(\Omega^\epsilon)^d}. \end{equation} (2.3)

    Remark 1. The seminorm in (2.3) is equivalent to the usual $H^1$-norm by the Poincaré inequality, see Lemma 2.1 on page 14 of [17]. Moreover, this equivalence of norms is uniform in $\epsilon$.

    For the correct use of function spaces over $Y$ and $Y^*$, we need an embedding result, which is based on an extension operator. The following theorem and corollary are Theorem 2.10 and Corollary 2.11 in Chapter 2 of [17].

    Theorem 1. Suppose that the domain $\Omega^\epsilon$ is such that $\mathcal{T}\subset Y$ is a smooth open set with a $C^2$-boundary that does not intersect the boundary of $Y$ and such that the boundary of $\mathcal{T}^\epsilon$ does not intersect the boundary of $\Omega$. Then there exists an extension operator $P_\epsilon$ and a constant $C$ independent of $\epsilon$ such that

    \begin{equation} P_\epsilon\in\mathcal{L}\left(L^2(\Omega^\epsilon);L^2(\Omega)\right)\cap\mathcal{L}\left(\mathbb{V}_\epsilon;H^1_0(\Omega)\right), \end{equation} (2.4)

    and for any $v\in\mathbb{V}_\epsilon$, we have the bounds

    \begin{equation} \|P_\epsilon v\|_{L^2(\Omega)}\leq C\|v\|_{L^2(\Omega^\epsilon)},\qquad \|\nabla P_\epsilon v\|_{L^2(\Omega)^d}\leq C\|\nabla v\|_{L^2(\Omega^\epsilon)^d}. \end{equation} (2.5)

    Corollary 1. There exists a constant $C$ independent of $\epsilon$ such that for all $v\in\mathbb{V}_\epsilon$

    \begin{equation} \|P_\epsilon v\|_{H^1_0(\Omega)}\leq C\|v\|_{\mathbb{V}_\epsilon}. \end{equation} (2.6)

    Introduce the notation $\hat{\cdot}$, a hat symbol, to denote extension via the extension operator $P_\epsilon$.

    The notation $\nabla = \left(\frac{\mathrm{d}}{\mathrm{d}x_1},\ldots,\frac{\mathrm{d}}{\mathrm{d}x_d}\right)$ denotes the vectorial total derivative with respect to the components of ${\boldsymbol{x}} = (x_1,\ldots,x_d)$ for functions depending on both ${\boldsymbol{x}}$ and ${\boldsymbol{x}}/\epsilon$. Spatial vectors have $d$ components, while variable vectors have $N$ components. Tensors have $d^iN^j$ components for $i$, $j$ nonnegative integers. Furthermore, the notation

    \begin{equation} c^\epsilon(t,{\boldsymbol{x}}) = c(t,{\boldsymbol{x}},{\boldsymbol{x}}/\epsilon) \end{equation} (2.7)

    is used for the $\epsilon$-independent functions $c(t,{\boldsymbol{x}},{\boldsymbol{y}})$ in assumption (A1) further on. Moreover, the spatial inner product is denoted with $\cdot$, while the variable inner product is just seen as a product or operator acting on a variable vector or tensor.

    Let $T>0$. We consider the following Neumann problem for unknown functions $V^\epsilon_\alpha$, $U^\epsilon_\alpha$ with $\alpha\in\{1,\ldots,N\}$ posed on $(0,T)\times\Omega^\epsilon$:

    \begin{align} (\mathcal{A}^\epsilon{\boldsymbol{V}}^\epsilon)_\alpha &:= \sum\limits_{\beta = 1}^N M^\epsilon_{\alpha\beta}V^\epsilon_\beta-\sum\limits_{i,j = 1}^d\frac{\mathrm{d}}{\mathrm{d}x_i}\left(E^\epsilon_{ij}\frac{\mathrm{d}V^\epsilon_\alpha}{\mathrm{d}x_j}+\sum\limits_{\beta = 1}^N D^\epsilon_{i\alpha\beta}V^\epsilon_\beta\right) = H^\epsilon_\alpha+\sum\limits_{\beta = 1}^N\left(K^\epsilon_{\alpha\beta}U^\epsilon_\beta+\sum\limits_{i = 1}^d\tilde{J}^\epsilon_{i\alpha\beta}\frac{\mathrm{d}U^\epsilon_\beta}{\mathrm{d}x_i}\right) = : (\mathcal{H}^\epsilon{\boldsymbol{U}}^\epsilon)_\alpha, \end{align} (2.8a)
    \begin{align} (\mathcal{L}{\boldsymbol{U}}^\epsilon)_\alpha &:= \frac{\partial U^\epsilon_\alpha}{\partial t}+\sum\limits_{\beta = 1}^N L_{\alpha\beta}U^\epsilon_\beta = \sum\limits_{\beta = 1}^N G_{\alpha\beta}V^\epsilon_\beta, \end{align} (2.8b)

    with the boundary conditions

    \begin{align} U^\epsilon_\alpha & = U^*_\alpha&&\text{ in }\{0\}\times\Omega^\epsilon, \end{align} (2.9a)
    \begin{align} V^\epsilon_\alpha & = 0&&\text{ on }(0,T)\times\partial_{ext}\Omega^\epsilon, \end{align} (2.9b)
    \begin{align} \frac{\partial V^\epsilon_\alpha}{\partial\nu_{\mathit{D}^\epsilon}} &:= \sum\limits_{i = 1}^d\left(\sum\limits_{j = 1}^d E^\epsilon_{ij}\frac{\mathrm{d}V^\epsilon_\alpha}{\mathrm{d}x_j}+\sum\limits_{\beta = 1}^N D^\epsilon_{i\alpha\beta}V^\epsilon_\beta\right)n^\epsilon_i = 0&&\text{ on }(0,T)\times\partial_{int}\Omega^\epsilon, \end{align} (2.9c)

    for $\alpha\in\{1,\ldots,N\}$ or, in short-hand notation, this reads:

    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^\epsilon{\boldsymbol{V}}^\epsilon := \mathit{M}^\epsilon{\boldsymbol{V}}^\epsilon-\nabla\cdot\left(\mathit{E}^\epsilon\cdot\nabla{\boldsymbol{V}}^\epsilon+\mathit{D}^\epsilon{\boldsymbol{V}}^\epsilon\right) = {\boldsymbol{H}}^\epsilon+\mathit{K}^\epsilon{\boldsymbol{U}}^\epsilon+\tilde{\mathit{J}}^\epsilon\cdot\nabla{\boldsymbol{U}}^\epsilon = : \mathcal{H}^\epsilon{\boldsymbol{U}}^\epsilon&\text{ in }(0,T)\times\Omega^\epsilon,\cr \mathcal{L}{\boldsymbol{U}}^\epsilon := \frac{\partial{\boldsymbol{U}}^\epsilon}{\partial t}+\mathit{L}{\boldsymbol{U}}^\epsilon = \mathit{G}{\boldsymbol{V}}^\epsilon&\text{ in }(0,T)\times\Omega^\epsilon,\cr {\boldsymbol{U}}^\epsilon = {\boldsymbol{U}}^*&\text{ in }\{0\}\times\Omega^\epsilon,\cr {\boldsymbol{V}}^\epsilon = {\boldsymbol{0}}&\text{ on }(0,T)\times\partial_{ext}\Omega^\epsilon,\cr \frac{\partial{\boldsymbol{V}}^\epsilon}{\partial\nu_{\mathit{D}^\epsilon}} = \left(\mathit{E}^\epsilon\cdot\nabla{\boldsymbol{V}}^\epsilon+\mathit{D}^\epsilon{\boldsymbol{V}}^\epsilon\right)\cdot{\boldsymbol{n}}^\epsilon = {\boldsymbol{0}}&\text{ on }(0,T)\times\partial_{int}\Omega^\epsilon. \end{array}\right. \end{equation} (2.10)

    Consider the following technical requirements for the coefficients arising in the Neumann problem (2.8a)–(2.9c).

    (A1) For all $\alpha,\beta\in\{1,\ldots,N\}$ and for all $i,j\in\{1,\ldots,d\}$, we assume:

    \begin{equation} \begin{array}{l} M_{\alpha\beta},H_\alpha,K_{\alpha\beta},J_{i\alpha\beta}\in L^\infty(\mathbf{R}_+;W^{2,\infty}(\Omega;C^2_\#(Y))),\cr E_{ij},D_{i\alpha\beta}\in L^\infty(\mathbf{R}_+;W^{3,\infty}(\Omega;C^3_\#(Y))),\cr L_{\alpha\beta},G_{\alpha\beta}\in L^\infty(\mathbf{R}_+;W^{4,\infty}(\Omega)),\qquad U^*_\alpha\in W^{4,\infty}(\Omega), \end{array} \end{equation} (2.11)

    with $\tilde{\mathit{J}}^\epsilon = \epsilon\mathit{J}^\epsilon$; see Remark 2 further on.

    (A2) The tensors $\mathit{M}$ and $\mathit{E}$ have a linear sum decomposition into a skew-symmetric matrix and a diagonal matrix, with the diagonal elements of $\mathit{M}$ and $\mathit{E}$ denoted by $M_\alpha,E_i\in L^\infty(\mathbf{R}_+\times\Omega;C_\#(Y))$, respectively, satisfying $M_\alpha>0$, $E_i>0$ and $1/M_\alpha,1/E_i\in L^\infty(\mathbf{R}_+\times\Omega\times Y)$.

    For real symmetric matrices $\mathit{M}$ and $\mathit{E}$, the finite-dimensional version of the spectral theorem states that they are diagonalizable by orthogonal matrices. Since $\mathit{M}$ acts on the variable space $\mathbf{R}^N$, while $\mathit{E}$ acts on the spatial space $\mathbf{R}^d$, one can simultaneously diagonalize both real symmetric matrices. For general real matrices $\mathit{M}$ and $\mathit{E}$, the linear sum decomposition into symmetric and skew-symmetric matrices allows for a diagonalization of the symmetric part. The orthogonal matrix transformations necessary to diagonalize the symmetric part do not modify the regularity of the domain $\Omega$, of the perforated periodic cell $Y^*$, or of the coefficients of $\mathit{D}$, ${\boldsymbol{H}}$, $\mathit{K}$, $\mathit{J}$, $\mathit{L}$, or $\mathit{G}$. Hence, we are allowed to assume a linear sum decomposition of $\mathit{M}$ and $\mathit{E}$ into a diagonal and a skew-symmetric matrix.
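    As a small numerical illustration of the algebra behind (A2) (the matrix below is a hypothetical example, not a coefficient of the model), the symmetric/skew-symmetric splitting and the diagonalization of the symmetric part can be checked as follows:

```python
import numpy as np

# Hypothetical real matrix standing in for M (or E); not a model coefficient.
rng = np.random.default_rng(0)
M_example = rng.normal(size=(4, 4))

M_sym = 0.5 * (M_example + M_example.T)    # symmetric part
M_skew = 0.5 * (M_example - M_example.T)   # skew-symmetric part
assert np.allclose(M_sym + M_skew, M_example)

# Spectral theorem: the symmetric part is diagonalized by an orthogonal matrix Q.
w, Q = np.linalg.eigh(M_sym)
assert np.allclose(Q @ np.diag(w) @ Q.T, M_sym)
assert np.allclose(Q.T @ Q, np.eye(4))

print("eigenvalues of the symmetric part:", w)
```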

    (A3) The inequality

    \begin{equation} \|D_{i\beta\alpha}\|^2_{L^\infty(\mathbf{R}_+\times\Omega^\epsilon;C_\#(Y))}<\frac{4m_\alpha e_i}{dN^2} \end{equation} (2.12)

    holds with

    \begin{equation} \frac{1}{m_\alpha} = \left\|\frac{1}{M_\alpha}\right\|_{L^\infty(\mathbf{R}_+\times\Omega\times Y)}\quad\text{ and }\quad\frac{1}{e_i} = \left\|\frac{1}{E_i}\right\|_{L^\infty(\mathbf{R}_+\times\Omega\times Y)} \end{equation} (2.13)

    for all $\alpha,\beta\in\{1,\ldots,N\}$, for all $i\in\{1,\ldots,d\}$, and for all $\epsilon\in(0,\epsilon_0)$.

    (A4) The perforation holes do not intersect the boundary of $\Omega$:

    $\mathcal{T}^\epsilon\cap\partial\Omega = \emptyset$ for a given sequence $\epsilon\in(0,\epsilon_0)$.

    Remark 2. The dependence $\tilde{\mathit{J}}^\epsilon = \epsilon\mathit{J}^\epsilon$ was chosen to simplify both existence and uniqueness results and arguments for bounding certain terms. The case $\tilde{\mathit{J}}^\epsilon = \mathit{J}^\epsilon$ can be treated with the proofs outlined in this paper if additional cell functions are introduced and special inequalities similar to the Poincaré-Wirtinger inequality are used. See (4.32) onward in Section 4 for the introduction of cell functions.

    Remark 3. Satisfying inequality (2.12) implies that the same inequality is satisfied for the $Y^*$-averaged functions $\overline{D^\epsilon_{i\beta\alpha}}$, $\overline{M^\epsilon_{\beta\alpha}}$, and $\overline{E^\epsilon_{ij}}$ in $L^\infty(\mathbf{R}_+\times\Omega)$, where we used the following notion of $Y^*$-averaged functions

    \begin{equation} \overline{f}(t,{\boldsymbol{x}}) = \frac{1}{|Y|}\int_{Y^*}f(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}. \end{equation} (2.14)

    Remark 4. Assumption (A4) implies the following identities for the given sequence $\epsilon\in(0,\epsilon_0)$:

    \begin{equation} \partial_{int}\Omega^\epsilon = \partial\mathcal{T}^\epsilon\cap\Omega,\qquad\partial_{ext}\Omega^\epsilon = \partial\Omega. \end{equation} (2.15)

    Without (A4), perforations would intersect $\partial\Omega$. One must then decide which parts of the boundary of the intersected cell $Y$ satisfy which boundary condition: (2.9b) or (2.9c). This leads to non-trivial situations that ultimately affect the corrector estimates in non-trivial ways.

    Theorem 2. Under assumptions (A1)–(A4), there exists a solution pair $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)\in H^1((0,T)\times\Omega^\epsilon)^N\times L^\infty((0,T);\mathbb{V}_\epsilon\cap H^2(\Omega^\epsilon))^N$ satisfying the Neumann problem (2.8a)–(2.9c).

    Proof. For $\mathit{K}^\epsilon = \mathit{M}^\epsilon\mathit{G}^{-1}\mathit{L}$, $\mathit{J}^\epsilon = \mathit{0}$ and $d = 1$ the result follows from Theorem 1 in [21]. For non-perforated domains the result follows from either Theorem 1 in [22] or Theorem 7 in Chapter 4 of [20].

    For perforated domains, the result follows similarly. An outline of the proof is as follows. First, a time discretization with fixed spacing $\Delta t$ is applied to the Neumann problem (2.8a)–(2.9c) such that $\mathcal{A}^\epsilon{\boldsymbol{V}}^\epsilon$ at $t = k\Delta t$ equals $\mathcal{H}^\epsilon{\boldsymbol{U}}^\epsilon$ at $t = (k-1)\Delta t$ and $\mathcal{L}{\boldsymbol{U}}^\epsilon$ at $t = k\Delta t$ equals $\mathit{G}{\boldsymbol{V}}^\epsilon$ at $t = (k-1)\Delta t$. This is an application of the Rothe method. Under assumptions (A1)–(A4), testing $\mathcal{A}^\epsilon{\boldsymbol{V}}^\epsilon$ with a function $\boldsymbol{\phi}$ yields a continuous and coercive bilinear form on $H^1(\Omega^\epsilon)^N$, while testing $\mathcal{L}{\boldsymbol{U}}^\epsilon$ with a function $\boldsymbol{\psi}$ yields a continuous and coercive bilinear form on $L^2(\Omega^\epsilon)^N$. Hence, Lax-Milgram leads to the existence of a solution at each time slice $t = k\Delta t$.

    Choosing the right functions for $\boldsymbol{\phi}$ and $\boldsymbol{\psi}$ and using a discrete version of Gronwall's inequality, we obtain upper bounds for ${\boldsymbol{U}}^\epsilon$ and ${\boldsymbol{V}}^\epsilon$ independent of $\Delta t$. Linearly interpolating the time slices, we find that these $\Delta t$-independent bounds guarantee the existence of continuous weak limits. Due to sufficient regularity, we even obtain strong convergence and existence of boundary traces. Then the continuous weak limits are actually weak solutions of our Neumann problem (2.8a)–(2.9c). The uniqueness follows by the linearity of our Neumann problem (2.8a)–(2.9c).
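    A minimal numerical sketch of this Rothe scheme, for a toy scalar problem in one space dimension without perforations and with constant, hypothetical coefficients (so it only illustrates the lagged elliptic-then-ODE update, not the actual model), could look as follows:

```python
import numpy as np

# Rothe scheme sketch for a toy version of (2.8): at each time slice, solve the elliptic
# problem M*V_k - (E*V_k')' = H + K*U_{k-1} with V = 0 at both ends, then update
# (U_k - U_{k-1})/dt + L*U_k = G*V_{k-1} implicitly. All coefficients are hypothetical.
nx, nt, dt = 101, 200, 1e-2
dx = 1.0 / (nx - 1)
M, E, H, K, L, G = 1.0, 0.5, 1.0, 0.8, 0.3, 0.6

x = np.linspace(0.0, 1.0, nx)
U = np.sin(np.pi * x)                 # initial datum U*
V = np.zeros(nx)

# dense matrix for M*I - E*d^2/dx^2 with Dirichlet rows at the ends
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1] = A[i, i + 1] = -E / dx**2
    A[i, i] = M + 2.0 * E / dx**2
A[0, 0] = A[-1, -1] = 1.0

for k in range(nt):
    rhs = H + K * U                   # data lag one slice behind, as described above
    rhs[0] = rhs[-1] = 0.0
    V_new = np.linalg.solve(A, rhs)
    U = (U + dt * G * V) / (1.0 + dt * L)   # implicit Euler for the ODE part
    V = V_new

print("max |V| =", float(np.abs(V).max()), " max |U| =", float(np.abs(U).max()))
```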

    Two special length scales are involved in the Neumann problem (2.8a)–(2.9c): the variable ${\boldsymbol{x}}$ is the "macroscopic" scale, while ${\boldsymbol{x}}/\epsilon$ represents the "microscopic" scale. This leads to a double dependence of parameter functions (and, hence, of the solutions to the model equations) on both the macroscale and the microscale. For example, if ${\boldsymbol{x}}\in\Omega^\epsilon$, by the definition of $\Omega^\epsilon$, there exists $g\in G_\gamma$ such that ${\boldsymbol{x}}/\epsilon = g({\boldsymbol{y}})$ with ${\boldsymbol{y}}\in Y^*$. This suggests that we look for a formal asymptotic expansion of the form

    \begin{align} {\boldsymbol{V}}^\epsilon(t,{\boldsymbol{x}}) & = {\boldsymbol{V}}^0\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon{\boldsymbol{V}}^1\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon^2{\boldsymbol{V}}^2\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\cdots, \end{align} (3.1a)
    \begin{align} {\boldsymbol{U}}^\epsilon(t,{\boldsymbol{x}}) & = {\boldsymbol{U}}^0\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon{\boldsymbol{U}}^1\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon^2{\boldsymbol{U}}^2\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\cdots \end{align} (3.1b)

    with ${\boldsymbol{V}}^j(t,{\boldsymbol{x}},{\boldsymbol{y}})$, ${\boldsymbol{U}}^j(t,{\boldsymbol{x}},{\boldsymbol{y}})$ defined for $t\in\mathbf{R}_+$, ${\boldsymbol{x}}\in\Omega^\epsilon$ and ${\boldsymbol{y}}\in Y^*$ and $Y$-periodic (i.e., ${\boldsymbol{V}}^j$, ${\boldsymbol{U}}^j$ are periodic with respect to $G^\epsilon_\gamma$).

    Theorem 3. Let assumptions (A1)–(A4) hold. For all $T\in\mathbf{R}_+$ there exists a unique pair $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)\in H^1((0,T)\times\Omega^\epsilon)^N\times L^\infty((0,T);\mathbb{V}_\epsilon)^N$ satisfying the Neumann problem (2.8a)–(2.9c). Moreover, for $\epsilon\rightarrow0$

    \begin{align} \hat{{\boldsymbol{U}}}^\epsilon&\overset{2}{\rightharpoonup}{\boldsymbol{U}}^0\quad\text{in } H^1(0,T;L^2(\Omega\times Y))^N\quad\text{and} \end{align} (3.2a)
    \begin{align} \hat{{\boldsymbol{V}}}^\epsilon&\overset{2}{\rightharpoonup}{\boldsymbol{V}}^0\quad\text{in } L^\infty(0,T;L^2(\Omega\times Y))^N. \end{align} (3.2b)

    This implies

    \begin{align} \hat{{\boldsymbol{U}}}^\epsilon&\rightharpoonup\frac{1}{|Y|}\int_Y{\boldsymbol{U}}^0(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\quad\text{in } H^1((0,T)\times\Omega)^N\quad\text{and} \end{align} (3.3a)
    \begin{align} \hat{{\boldsymbol{V}}}^\epsilon&\rightharpoonup\frac{1}{|Y|}\int_Y{\boldsymbol{V}}^0(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\quad\text{in } L^\infty((0,T);H^1_0(\Omega))^N \end{align} (3.3b)

    for $\epsilon\rightarrow0$.

    Proof. See Section 4 for the full details and [22] for a short proof of the two-scale convergence for a non-perforated setting.

    Additionally, we are interested in deriving the speed of convergence of the formal asymptotic expansion. Boundary effects are expected to occur due to intersection of the external boundary with the perforated periodic cells. Hence, a cut-off function is introduced to remove this part from the analysis.

    Let $M^\epsilon$ be the cut-off function defined by

    \begin{equation} \left\{\begin{array}{ll} M^\epsilon\in\mathcal{D}(\Omega),&\cr M^\epsilon = 0&\text{ if }\mathrm{dist}({\boldsymbol{x}},\partial\Omega)\leq\epsilon\,\mathrm{diam}(Y),\cr M^\epsilon = 1&\text{ if }\mathrm{dist}({\boldsymbol{x}},\partial\Omega)\geq2\epsilon\,\mathrm{diam}(Y),\cr \epsilon\left|\frac{\mathrm{d}M^\epsilon}{\mathrm{d}x_i}\right|\leq C&\quad i\in\{1,\ldots,d\}. \end{array}\right. \end{equation} (3.4)
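    A short sketch of such a cut-off profile, written as a function of the distance to $\partial\Omega$ (the ramp below is only $C^1$, whereas $M^\epsilon\in\mathcal{D}(\Omega)$ is smooth, so this is purely illustrative):

```python
import numpy as np

def cutoff(dist_to_boundary, eps, diam_Y):
    """0 within eps*diam(Y) of the boundary, 1 beyond 2*eps*diam(Y),
    with a transition layer of width eps*diam(Y), so the gradient is O(1/eps)."""
    s = (dist_to_boundary - eps * diam_Y) / (eps * diam_Y)
    s = np.clip(s, 0.0, 1.0)
    return s * s * (3.0 - 2.0 * s)     # C^1 smoothstep ramp (illustrative only)

eps, diam_Y = 0.1, 1.0
d = np.linspace(0.0, 4 * eps * diam_Y, 9)
print(np.round(cutoff(d, eps, diam_Y), 3))
```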

    We refer to

    \begin{align} {\boldsymbol{\Phi}}^\epsilon & = {\boldsymbol{V}}^\epsilon-{\boldsymbol{V}}^0-M^\epsilon\left(\epsilon{\boldsymbol{V}}^1+\epsilon^2{\boldsymbol{V}}^2\right), \end{align} (3.5a)
    \begin{align} {\boldsymbol{\Psi}}^\epsilon & = {\boldsymbol{U}}^\epsilon-{\boldsymbol{U}}^0-M^\epsilon\left(\epsilon{\boldsymbol{U}}^1+\epsilon^2{\boldsymbol{U}}^2\right) \end{align} (3.5b)

    as error functions. Now, we are able to state our convergence speed result.

    Theorem 4. Let assumptions (A1)–(A4) hold. There exist constants $l\geq0$, $\kappa\geq0$, $\tilde{\kappa}\geq0$, $\lambda\geq0$ and $\mu\geq0$ such that

    \begin{align} \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)&\leq C(\epsilon,t), \end{align} (3.6a)
    \begin{align} \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)&\leq C(\epsilon,t)\,t_le^{lt} \end{align} (3.6b)

    with

    \begin{equation} C(\epsilon,t) = C\left(\epsilon^{\frac{1}{2}}+\epsilon^{\frac{3}{2}}\right)\left[1+\epsilon^{\frac{1}{2}}\left(1+\tilde{\kappa}e^{\lambda t}\right)\left(1+\kappa\left(1+t_le^{lt}\right)\right)\right]\exp\left(\mu t_le^{lt}\right) \end{equation} (3.7)

    where $C$ is a constant independent of $\epsilon$ and $t$, and $t_l = \min\{1/l,t\}$.

    Remark 5. The upper bounds in (3.6a) and (3.6b) are $\mathcal{O}(\epsilon^{\frac{1}{2}})$ for $\epsilon$-independent finite time intervals. We call this type of bounds corrector estimates.
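    A quick numerical check of the scaling claimed in Remark 5 can be done by evaluating (3.7) directly; the constants below are hypothetical placeholders:

```python
import numpy as np

# Evaluate the bound C(eps, t) of (3.7) for a fixed time and decreasing eps; on a fixed
# time interval the ratio C(eps, t)/eps^(1/2) should approach a constant.
C0, kappa, kappa_tilde, lam, mu, l = 1.0, 0.5, 0.5, 0.2, 0.1, 0.3

def C_bound(eps, t):
    t_l = min(1.0 / l, t) if l > 0 else t
    return (C0 * (eps**0.5 + eps**1.5)
            * (1.0 + eps**0.5 * (1.0 + kappa_tilde * np.exp(lam * t))
                              * (1.0 + kappa * (1.0 + t_l * np.exp(l * t))))
            * np.exp(mu * t_l * np.exp(l * t)))

t_fixed = 1.0
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"eps = {eps:.0e}   C(eps,t)/sqrt(eps) = {C_bound(eps, t_fixed)/eps**0.5:.4f}")
```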

    The corrector estimate of ${\boldsymbol{\Phi}}^\epsilon$ in Theorem 4 becomes that of the classical linear elliptic system for $\mathit{K} = \mathit{0}$ and $\mathit{J} = \mathit{0}$. This is because $\mathit{K} = \mathit{0}$ and $\mathit{J} = \mathit{0}$ imply $\tilde{\kappa} = \kappa = \mu = 0$, see Appendix A. See [17] for the classical approach to corrector estimates of elliptic systems in perforated domains and [29] for a spectral approach in non-perforated domains.

    Corollary 2. Under the assumptions of Theorem 4,

    \begin{align} \|\hat{{\boldsymbol{V}}}^\epsilon-{\boldsymbol{V}}^0\|_{H^1_0(\Omega)^N}(t)&\leq C(\epsilon,t), \end{align} (3.8a)
    \begin{align} \|\hat{{\boldsymbol{U}}}^\epsilon-{\boldsymbol{U}}^0\|_{H^1(\Omega)^N}(t)&\leq C(\epsilon,t)\,t_le^{lt} \end{align} (3.8b)

    hold, where $C$ is a constant independent of $\epsilon$ and $t$.

    According to Remark 5, $\epsilon$-independent finite time intervals yield $\mathcal{O}(\epsilon^{\frac{1}{2}})$ corrector estimates. Is it, then, possible to have a converging corrector estimate for diverging time intervals in the limit $\epsilon\rightarrow0$? The next theorem answers this question positively.

    Theorem 5. If $l>0$, we introduce the rescaled time $\tau\ln\left(\frac{1}{\epsilon}\right) = \exp(lt)-1$ and $q\in\left(0,\frac{1}{2}\right)$ independent of both $\epsilon$ and $t$ satisfying $0<\mu\tau/l<\frac{1}{2}-q$. Then, for $0<\epsilon<\exp\left(-\frac{2\mu}{(1-2q)l}\right)$, we have the corrector bounds

    \begin{align} \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)& = \mathcal{O}\left(\epsilon^{\frac{1}{2}-\frac{\mu}{l}\tau}\right) = o(1) = \omega\left(\epsilon^{\frac{1}{2}}\right), \end{align} (3.9a)
    \begin{align} \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)& = \mathcal{O}\left(\epsilon^{\frac{1}{2}-\frac{\mu}{l}\tau}\right)\mathcal{O}\left(\frac{\epsilon^{-q}}{q}\right) = o(1) = \omega\left(\epsilon^{\frac{1}{2}}\right) \end{align} (3.9b)

    as $\epsilon\rightarrow0$.

    If $l = 0$, we introduce the rescaled time $\tau\ln\left(\frac{1}{\epsilon}\right) = t\geq0$ and $p,q\in\left(0,\frac{1}{2}\right)$ independent of both $\epsilon$ and $t$ satisfying $0<\max\left\{\mu\tau,(\lambda+\mu)\tau+p-\frac{1}{2}\right\}<\frac{1}{2}-q$. Then, for $0<\epsilon<1$, we have the corrector bounds

    \begin{align} \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)& = \mathcal{O}\left(\epsilon^{\frac{1}{2}-\mu\tau}\right)+\mathcal{O}\left(\epsilon^{1-(\lambda+\mu)\tau}\right)\mathcal{O}\left(\frac{\epsilon^{-p}}{p}\right), \end{align} (3.10a)
    \begin{align} \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)& = \left[\mathcal{O}\left(\epsilon^{\frac{1}{2}-\mu\tau}\right)+\mathcal{O}\left(\epsilon^{1-(\lambda+\mu)\tau}\right)\mathcal{O}\left(\frac{\epsilon^{-p}}{p}\right)\right]\mathcal{O}\left(\frac{\epsilon^{-q}}{q}\right) \end{align} (3.10b)

    as $\epsilon\rightarrow0$. If, additionally, $\kappa = 0$ holds, then the bounds change to

    \begin{align} \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)& = \mathcal{O}\left(\epsilon^{\min\{\frac{1}{2},1-\lambda\tau\}}\right), \end{align} (3.11a)
    \begin{align} \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)& = \mathcal{O}\left(\epsilon^{\min\{\frac{1}{2},1-\lambda\tau\}}\right)\mathcal{O}\left(\frac{\epsilon^{-q}}{q}\right). \end{align} (3.11b)

    Proof. Insert the definition of the rescaled time into (3.6a) and (3.6b), and use $t_l = \min\{1/l,t\} = t$ for $l = 0$ and $t_l\leq1/l$ for $l>0$. Now one obtains the product $\epsilon^\delta\ln(1/\epsilon)$ for some positive number $\delta>0$ at several locations, which has a single maximal value of $\frac{1}{\delta e}$ at $\ln\left(\frac{1}{\epsilon}\right) = \frac{1}{\delta}$. The minimum function is needed since $\mathcal{O}(\epsilon^r)+\mathcal{O}(\epsilon^s) = \mathcal{O}(\epsilon^{\min\{r,s\}})$. The small $o$ and small $\omega$ orders are upper and lower asymptotic convergence speeds, respectively, for $\epsilon\rightarrow0$. The upper bound for $\epsilon$ is needed to guarantee that the interval for $\tau$ corresponds to $t\geq0$.
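    Spelled out, the maximization used in the proof reads: with $s = \ln(1/\epsilon)$,

    \begin{equation} \epsilon^\delta\ln\left(\frac{1}{\epsilon}\right) = se^{-\delta s},\qquad\frac{\mathrm{d}}{\mathrm{d}s}\left(se^{-\delta s}\right) = (1-\delta s)e^{-\delta s} = 0\iff s = \frac{1}{\delta},\qquad\max\limits_{s\geq0}se^{-\delta s} = \frac{1}{\delta e}. \end{equation}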

    Theorem 5 indicates that convergence can be retained for certain diverging sequences of time-intervals. Consequently, appropriate rescalings of the time variable yield upscaled systems and convergence rates for systems with regularity conditions different from those in assumptions (A1)–(A3).

    Remark 6. The tensors $\mathit{L}$ and $\mathit{G}$ neither depend on $\epsilon$ nor are unbounded functions of $t$. If such a dependence or unbounded behaviour does exist, then bounds similar to those stated in Theorem 4 are still valid in a new time variable $s\in\mathbf{R}_+$ if an invertible $C^1$-map $f_\epsilon$ from $t\in\mathbf{R}_+$ to $s$ exists such that the tensors $(\mathit{L}^\epsilon/f'_\epsilon)\circ f^{-1}_\epsilon$, $(\mathit{G}^\epsilon/f'_\epsilon)\circ f^{-1}_\epsilon$, $\mathit{M}^\epsilon\circ f^{-1}_\epsilon$, $\mathit{E}^\epsilon\circ f^{-1}_\epsilon$, $\mathit{D}^\epsilon\circ f^{-1}_\epsilon$, ${\boldsymbol{H}}^\epsilon\circ f^{-1}_\epsilon$, $\mathit{K}^\epsilon\circ f^{-1}_\epsilon$, and $\mathit{J}^\epsilon\circ f^{-1}_\epsilon$ satisfy (A1)–(A3).

    Moreover, if $f_\epsilon(\mathbf{R}_+) = \mathbf{R}_+$ for $\epsilon>0$ small enough, then the bounds of Theorem 5 are valid as well with $\tau$ defined in terms of $s$.

    Upscaling of the Neumann problem (2.8a)–(2.9c) can be done by many methods, e.g., via asymptotic expansions or two-scale convergence in suitable function spaces. We proceed in four steps:

    1. Existence and uniqueness of $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)$.

    We rely on Theorem 2.

    2. Obtain $\epsilon$-independent bounds for $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)$.

    See Section 4.1.

    a. Obtain a priori estimates for $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)$. See Lemma 1.

    b. Obtain $\epsilon$-independent bounds for $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)$. See Theorem 6.

    3. Upscaling via two-scale convergence.

    See Section 4.2.

    a. Two-scale limit of $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)$ for $\epsilon\rightarrow0$. See Lemma 2.

    b. Two-scale limit of problem (2.8a)–(2.9c) for $\epsilon\rightarrow0$. See Theorem 7.

    4. Upscaling via asymptotic expansions and relating to two-scale convergence.

    See Section 4.3.

    a. Expand (2.8a) and $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)$. See equations (4.18)–(4.30).

    b. Obtain existence & uniqueness of $({\boldsymbol{U}}^0,{\boldsymbol{V}}^0)$. See Lemma 3 and Lemma 4.

    c. Obtain the defining system of $({\boldsymbol{U}}^0,{\boldsymbol{V}}^0)$. See equations (4.32)–(4.39) and Lemma 5.

    d. Statement of the upscaled system. See Theorem 8.

    In this section, we show $\epsilon$-independent bounds for a weak solution $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)$ to the Neumann problem (2.8a)–(2.9c). We define a weak solution to the Neumann problem (2.8a)–(2.9c) as a pair $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)\in H^1((0,T)\times\Omega^\epsilon)^N\times L^\infty((0,T),\mathbb{V}_\epsilon)^N$ satisfying

    \begin{equation} (\text{P}^\epsilon_w)\left\{\begin{array}{l} \int_{\Omega^\epsilon}\boldsymbol{\phi}^\top\left[\mathit{M}^\epsilon{\boldsymbol{V}}^\epsilon-{\boldsymbol{H}}^\epsilon-\mathit{K}^\epsilon{\boldsymbol{U}}^\epsilon-\mathit{J}^\epsilon\cdot\nabla{\boldsymbol{U}}^\epsilon\right]+(\nabla\boldsymbol{\phi})^\top\cdot\left(\mathit{E}^\epsilon\cdot\nabla{\boldsymbol{V}}^\epsilon+\mathit{D}^\epsilon{\boldsymbol{V}}^\epsilon\right)\mathrm{d}{\boldsymbol{x}} = 0,\cr \int_{\Omega^\epsilon}\boldsymbol{\psi}^\top\left[\frac{\partial{\boldsymbol{U}}^\epsilon}{\partial t}+\mathit{L}{\boldsymbol{U}}^\epsilon-\mathit{G}{\boldsymbol{V}}^\epsilon\right]\mathrm{d}{\boldsymbol{x}} = 0,\cr {\boldsymbol{U}}^\epsilon(0,{\boldsymbol{x}}) = {\boldsymbol{U}}^*({\boldsymbol{x}})\ \text{ for all }{\boldsymbol{x}}\in\overline{\Omega^\epsilon}, \end{array}\right. \end{equation}

    for a.e. $t\in(0,T)$ and for all test functions $\boldsymbol{\phi}\in\mathbb{V}_\epsilon^N$ and $\boldsymbol{\psi}\in L^2(\Omega^\epsilon)^N$.

    The existence and uniqueness of solutions to system (P$^\epsilon_w$) can only hold when the parameters are well-balanced. The next lemma provides a set of constants for which this balance holds.

    Lemma 1. Assume assumptions (A1)–(A3) hold and $\epsilon\in(0,\epsilon_0)$ for $\epsilon_0>0$. Then there exist positive constants $\tilde{m}_\alpha$, $\tilde{e}_i$, $\tilde{H}$, $\tilde{K}_\alpha$, $\tilde{J}_{i\alpha}$, for $\alpha\in\{1,\ldots,N\}$ and $i\in\{1,\ldots,d\}$, such that the a priori estimate

    \begin{equation} \sum\limits_{\alpha = 1}^N\tilde{m}_\alpha\|V^\epsilon_\alpha\|^2_{L^2(\Omega^\epsilon)}+\sum\limits_{i = 1}^d\sum\limits_{\alpha = 1}^N\tilde{e}_i\left\|\frac{\mathrm{d}V^\epsilon_\alpha}{\mathrm{d}x_i}\right\|^2_{L^2(\Omega^\epsilon)}\leq\tilde{H}+\sum\limits_{\alpha = 1}^N\tilde{K}_\alpha\|U^\epsilon_\alpha\|^2_{L^2(\Omega^\epsilon)}+\sum\limits_{i = 1}^d\sum\limits_{\alpha = 1}^N\tilde{J}_{i\alpha}\left\|\frac{\mathrm{d}U^\epsilon_\alpha}{\mathrm{d}x_i}\right\|^2_{L^2(\Omega^\epsilon)} \end{equation} (4.1)

    holds for a.e. $t\in(0,T)$.

    Proof. We test the first equation of (P$^\epsilon_w$) with $\boldsymbol{\phi} = {\boldsymbol{V}}^\epsilon$ and apply Young's inequality wherever a product is not a square. A non-square product containing both ${\boldsymbol{U}}^\epsilon$ and ${\boldsymbol{V}}^\epsilon$ can only be found in the $\mathit{D}$-term. Hence, Young's inequality allows all other non-square product terms to have a negligible effect on the coercivity constants $m_\alpha$ and $e_i$, while affecting $\tilde{H}$, $\tilde{K}_\alpha$, $\tilde{J}_{i\alpha}$. Therefore, we only need to enforce two inequalities to prove the lemma by guaranteeing coercivity, i.e.,

    \begin{align} e_i-\sum\limits_{\alpha = 1}^N\frac{\eta_{i\beta\alpha}}{2}\tilde{D}_{i\beta\alpha}&\geq\tilde{e}_i>0\quad\text{for }\beta\in\{1,\ldots,N\},\ i\in\{1,\ldots,d\}, \end{align} (4.2a)
    \begin{align} m_\alpha-\sum\limits_{i = 1}^d\sum\limits_{\beta = 1}^N\frac{\tilde{D}_{i\beta\alpha}}{2\eta_{i\beta\alpha}}&\geq\tilde{m}_\alpha>0\quad\text{for }\alpha\in\{1,\ldots,N\}, \end{align} (4.2b)

    where $\tilde{D}_{i\beta\alpha} = \|D_{i\beta\alpha}\|_{L^\infty(\mathbf{R}_+\times\Omega;C_\#(Y))}$. We can choose $\eta_{i\beta\alpha}>0$ satisfying

    \begin{equation} \frac{dN\tilde{D}_{i\beta\alpha}}{2m_\alpha}<\eta_{i\beta\alpha}<\frac{2e_i}{N\tilde{D}_{i\beta\alpha}}, \end{equation} (4.3)

    if inequality (2.12) in assumption (A3) is satisfied. For the exact definitions of the constants $\tilde{m}_\alpha$, $\tilde{e}_i$, $\tilde{H}$, $\tilde{K}_\alpha$, $\tilde{J}_{i\alpha}$, see equations (A.4a)–(A.4e) in Appendix A.
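    For completeness, the link to (2.12) spelled out: the interval in (4.3) is nonempty exactly when

    \begin{equation} \frac{dN\tilde{D}_{i\beta\alpha}}{2m_\alpha}<\frac{2e_i}{N\tilde{D}_{i\beta\alpha}}\iff\tilde{D}_{i\beta\alpha}^2<\frac{4m_\alpha e_i}{dN^2}, \end{equation}

    which is precisely inequality (2.12).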

    Theorem 6. Assume (A1)–(A3) to hold. Then there exist positive constants $C$, $\tilde{\kappa}$ and $\lambda$ independent of $\epsilon$ such that

    \begin{equation} \|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)\leq Ce^{\lambda t},\qquad\|{\boldsymbol{V}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)\leq C\left(1+\tilde{\kappa}e^{\lambda t}\right) \end{equation} (4.4)

    hold for $t\geq0$.

    Proof. By (A1)–(A3) there exist positive numbers $\tilde{m}_\alpha$, $\tilde{e}_i$, $\tilde{H}$, $\tilde{K}_\alpha$, $\tilde{J}_{i\alpha}$ for $\alpha\in\{1,\ldots,N\}$ and $i\in\{1,\ldots,d\}$ such that the a priori estimate (4.1) stated in Lemma 1 holds. Moreover, concerning system (P$^\epsilon_w$), there exist constants $L_G$, $L_N$, $G_G$, and $G_N$, see equations (A.3a)–(A.3d) in Appendix A, such that

    \begin{align} \frac{\partial}{\partial t}\|{\boldsymbol{U}}^\epsilon\|^2_{L^2(\Omega^\epsilon)^N}&\leq L_N\|{\boldsymbol{U}}^\epsilon\|^2_{L^2(\Omega^\epsilon)^N}+G_N\|{\boldsymbol{V}}^\epsilon\|^2_{L^2(\Omega^\epsilon)^N}, \end{align} (4.5a)
    \begin{align} \frac{\partial}{\partial t}\|\nabla{\boldsymbol{U}}^\epsilon\|^2_{L^2(\Omega^\epsilon)^{d\times N}}&\leq L_G\|{\boldsymbol{U}}^\epsilon\|^2_{L^2(\Omega^\epsilon)^N}+L_N\|\nabla{\boldsymbol{U}}^\epsilon\|^2_{L^2(\Omega^\epsilon)^{d\times N}}+G_G\|{\boldsymbol{V}}^\epsilon\|^2_{L^2(\Omega^\epsilon)^N}+G_N\|\nabla{\boldsymbol{V}}^\epsilon\|^2_{L^2(\Omega^\epsilon)^{d\times N}} \end{align} (4.5b)

    hold. Adding (4.5a) and (4.5b), and using (4.1), we obtain a positive constant $I$ and a constant $J\in\mathbf{R}_+$ such that

    \begin{equation} \frac{\partial}{\partial t}\|{\boldsymbol{U}}^\epsilon\|^2_{H^1(\Omega^\epsilon)^N}\leq J+I\|{\boldsymbol{U}}^\epsilon\|^2_{H^1(\Omega^\epsilon)^N} \end{equation} (4.6)

    with

    \begin{align} I & = \max\left\{0,L_N+\max\left\{L_G+G_M\max\limits_{1\leq\alpha\leq N}\{\tilde{K}_\alpha\},\ G_M\max\limits_{1\leq\alpha\leq N,1\leq i\leq d}\{\tilde{J}_{i\alpha}\}\right\}\right\}, \end{align} (4.7a)
    \begin{align} G_M & = \max\limits_{1\leq\alpha\leq N,1\leq i\leq d}\left\{\frac{G_N+G_G}{\tilde{m}_\alpha},\frac{G_N}{\tilde{e}_i}\right\}. \end{align} (4.7b)

    Applying Gronwall's inequality, see [30, Thm. 1], to (4.6) yields the existence of a constant $\lambda$, defined as $\lambda = I/2$, such that

    \begin{equation} \|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)\leq Ce^{\lambda t},\qquad\|{\boldsymbol{V}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)\leq C\left(1+\tilde{\kappa}e^{\lambda t}\right) \end{equation} (4.8)

    with $\tilde{\kappa} = \max\limits_{1\leq\alpha\leq N,1\leq i\leq d}\{\tilde{K}_\alpha,\tilde{J}_{i\alpha}\}$.
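    For the reader's convenience, the Gronwall step spelled out (assuming $I>0$): from (4.6),

    \begin{equation} \|{\boldsymbol{U}}^\epsilon\|^2_{H^1(\Omega^\epsilon)^N}(t)\leq e^{It}\|{\boldsymbol{U}}^*\|^2_{H^1(\Omega^\epsilon)^N}+\frac{J}{I}\left(e^{It}-1\right)\leq\left(\|{\boldsymbol{U}}^*\|^2_{H^1(\Omega^\epsilon)^N}+\frac{J}{I}\right)e^{It}, \end{equation}

    so taking square roots gives $\|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)\leq Ce^{It/2} = Ce^{\lambda t}$ with $\lambda = I/2$; the bound on ${\boldsymbol{V}}^\epsilon$ then follows by inserting this into the a priori estimate (4.1).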

    Remark 7. It is difficult to obtain exact expressions for optimal values of $L_N$, $L_G$, $G_N$ and $G_G$ such that a minimal positive value of $\lambda$ is obtained. See Appendix A for the exact dependence of $\lambda$ on the parameters involved in the Neumann problem (2.8a)–(2.9c).

    Remark 8. The $(0,T)\times\Omega^\epsilon$-measurability of ${\boldsymbol{U}}^\epsilon$ and ${\boldsymbol{V}}^\epsilon$ can be proven based on the Rothe method (discretization in time) in combination with the convergence of piecewise linear functions to any function in the spaces $H^1((0,T)\times\Omega^\epsilon)$ or $L^\infty((0,T);\mathbb{V}_\epsilon)$. One can prove that both ${\boldsymbol{U}}^\epsilon$ and ${\boldsymbol{V}}^\epsilon$ are measurable and are weak solutions to (P$^\epsilon_w$). See Chapter 2 in [20] for a pseudo-parabolic system for which the Rothe method is used to show existence (and hence also measurability).

    Remark 9. Since we have $\mathit{G}\in L^\infty(\mathbf{R}_+;W^{1,\infty}(\Omega))^{N\times N}$ and ${\boldsymbol{V}}^\epsilon\in L^\infty((0,T);\mathbb{V}_\epsilon)^N$, we are allowed to differentiate equation (2.8b) with respect to ${\boldsymbol{x}}$ and test the resulting identity with both ${\boldsymbol{U}}^\epsilon$ and $\partial_t{\boldsymbol{U}}^\epsilon$. However, conversely, we are not allowed to differentiate equation (2.8a) with respect to $t$, as all tensors have insufficient regularity: they are in $L^\infty(\mathbf{R}_+\times\Omega^\epsilon)^{N\times N}$.

    Remark 10. We cannot differentiate equation (2.8b) with respect to ${\boldsymbol{x}}$ when $\mathit{L}$ or $\mathit{G}$ has decreased spatial regularity, for example $L^\infty((0,T)\times\Omega)^{N\times N}$. One can still obtain unique solutions of (P$^\epsilon_w$) if and only if $\mathit{J}^\epsilon = \mathit{0}$ holds, since this removes the $\nabla{\boldsymbol{U}}^\epsilon$ term from equation (2.8a). Consequently, Theorem 6 holds with ${\boldsymbol{U}}^\epsilon\in H^1((0,T);L^2(\Omega^\epsilon))$ and $\mathit{J}^\epsilon = \mathit{0}$ under the additional relaxed regularity assumption $\mathit{L},\mathit{G}\in L^\infty((0,T)\times\Omega)^{N\times N}$ and with $\lambda$ modified by taking $L_G = \tilde{J}_{i\alpha} = 0$ and by replacing $G_M$ with $G_N/\min\limits_{1\leq\alpha\leq N}\tilde{m}_\alpha$.

    We recall the notation $\hat{f}^\epsilon$ to denote the extension to $\Omega$ via the operator $P_\epsilon$ of $f^\epsilon$ defined on $\Omega^\epsilon$. This extension operator $P_\epsilon$, as defined in Theorem 1, is well-defined if both $\partial\mathcal{T}$ and $\partial\Omega$ are $C^2$-regular, assumption (A4) holds, and $\partial\mathcal{T}\cap\partial Y = \emptyset$. Hence, the extension operator is well-defined in our setting.

    Lemma 2. Assume (A1)–(A4) to hold. For each $\epsilon\in(0,\epsilon_0)$, let the pair $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)\in H^1((0,T)\times\Omega^\epsilon)^N\times L^\infty((0,T);\mathbb{V}_\epsilon)^N$ be the unique weak solution to (P$^\epsilon_w$). Then this sequence of weak solutions satisfies the estimate

    \begin{equation} \|{\boldsymbol{U}}^\epsilon\|_{H^1((0,T)\times\Omega^\epsilon)^N}+\|{\boldsymbol{V}}^\epsilon\|_{L^\infty((0,T);\mathbb{V}_\epsilon)^N}\leq C, \end{equation} (4.9)

    for all $\epsilon\in(0,\epsilon_0)$ and there exist vector functions

    \begin{align} {\boldsymbol{u}}&\ \text{ in } H^1((0,T)\times\Omega)^N, \end{align} (4.10a)
    \begin{align} {\boldsymbol{U}}&\ \text{ in } H^1((0,T);L^2(\Omega;H^1_\#(Y)/\mathbf{R}))^N, \end{align} (4.10b)
    \begin{align} {\boldsymbol{v}}&\ \text{ in } L^\infty((0,T);H^1_0(\Omega))^N, \end{align} (4.10c)
    \begin{align} {\boldsymbol{V}}&\ \text{ in } L^\infty((0,T)\times\Omega;H^1_\#(Y)/\mathbf{R})^N, \end{align} (4.10d)

    and a subsequence $\epsilon'\subset\epsilon$, for which the following two-scale convergences

    \begin{align} \hat{{\boldsymbol{U}}}^{\epsilon'}&\overset{2}{\rightharpoonup}{\boldsymbol{u}}, \end{align} (4.11a)
    \begin{align} \partial_t\hat{{\boldsymbol{U}}}^{\epsilon'}&\overset{2}{\rightharpoonup}\partial_t{\boldsymbol{u}}, \end{align} (4.11b)
    \begin{align} \nabla\hat{{\boldsymbol{U}}}^{\epsilon'}&\overset{2}{\rightharpoonup}\nabla{\boldsymbol{u}}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{U}}, \end{align} (4.11c)
    \begin{align} \partial_t\nabla\hat{{\boldsymbol{U}}}^{\epsilon'}&\overset{2}{\rightharpoonup}\partial_t\nabla{\boldsymbol{u}}+\partial_t\nabla_{{\boldsymbol{y}}}{\boldsymbol{U}}, \end{align} (4.11d)
    \begin{align} \hat{{\boldsymbol{V}}}^{\epsilon'}&\overset{2}{\rightharpoonup}{\boldsymbol{v}}, \end{align} (4.11e)
    \begin{align} \nabla\hat{{\boldsymbol{V}}}^{\epsilon'}&\overset{2}{\rightharpoonup}\nabla{\boldsymbol{v}}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{V}} \end{align} (4.11f)

    hold.

    Proof. For all $\epsilon>0$, Theorem 6 gives the bounds (4.9) independent of the choice of $\epsilon$. Hence, $\hat{{\boldsymbol{U}}}^\epsilon\rightharpoonup{\boldsymbol{u}}$ in $H^1((0,T)\times\Omega)^N$ and $\hat{{\boldsymbol{V}}}^\epsilon\rightharpoonup{\boldsymbol{v}}$ in $L^\infty((0,T);H^1_0(\Omega))^N$ as $\epsilon\rightarrow0$. By Proposition 1 in Appendix B, we obtain a subsequence $\epsilon'\subset\epsilon$ and functions ${\boldsymbol{u}}\in H^1((0,T)\times\Omega)^N$, ${\boldsymbol{v}}\in L^2((0,T);H^1_0(\Omega))^N$, ${\boldsymbol{U}},{\boldsymbol{V}}\in L^2((0,T)\times\Omega;H^1_\#(Y)/\mathbf{R})^N$ such that (4.11a), (4.11b), (4.11c), (4.11e), and (4.11f) hold for a.e. $t\in(0,T)$. Moreover, there exists a vector function $\tilde{{\boldsymbol{U}}}\in L^2((0,T)\times\Omega;H^1_\#(Y)/\mathbf{R})^N$ such that the two-scale convergence

    \begin{equation} \partial_t\nabla\hat{{\boldsymbol{U}}}^{\epsilon'}\overset{2}{\rightharpoonup}\partial_t\nabla{\boldsymbol{u}}+\nabla_{{\boldsymbol{y}}}\tilde{{\boldsymbol{U}}} \end{equation} (4.12)

    holds for the same subsequence $\epsilon'$. Using two-scale convergence, Fubini's Theorem and partial integration in time, we obtain an increased regularity for ${\boldsymbol{U}}$, i.e., ${\boldsymbol{U}}\in H^1((0,T);L^2(\Omega;H^1_\#(Y)/\mathbf{R}))^N$, with $\partial_t\nabla_{{\boldsymbol{y}}}{\boldsymbol{U}} = \nabla_{{\boldsymbol{y}}}\tilde{{\boldsymbol{U}}}$.

    By Lemma 2, we can determine the macroscopic version of (P$^\epsilon_w$), which we denote by (P$^0_w$). This is stated in Theorem 7.

    Theorem 7. Assume the hypotheses of Lemma 2 to be satisfied. Then the two-scale limits ${\boldsymbol{u}}\in H^1((0,T)\times\Omega)^N$ and ${\boldsymbol{v}}\in L^\infty((0,T);H^1_0(\Omega))^N$ introduced in Lemma 2 form a weak solution to

    \begin{equation} (\text{P}^0_w)\left\{\begin{array}{l} \int_{\Omega}\boldsymbol{\phi}^\top\left[\overline{\mathit{M}}{\boldsymbol{v}}-\overline{{\boldsymbol{H}}}-\overline{\mathit{K}}{\boldsymbol{u}}\right]+(\nabla\boldsymbol{\phi})^\top\cdot\left(\mathit{E}^*\cdot\nabla{\boldsymbol{v}}+\mathit{D}^*{\boldsymbol{v}}\right)\mathrm{d}{\boldsymbol{x}} = 0,\cr \int_{\Omega}\boldsymbol{\psi}^\top\left[\frac{\partial{\boldsymbol{u}}}{\partial t}+\mathit{L}{\boldsymbol{u}}-\mathit{G}{\boldsymbol{v}}\right]\mathrm{d}{\boldsymbol{x}} = 0,\cr {\boldsymbol{u}}(0,{\boldsymbol{x}}) = {\boldsymbol{U}}^*({\boldsymbol{x}})\quad\text{for }{\boldsymbol{x}}\in\Omega, \end{array}\right. \end{equation}

    for a.e. $t\in(0,T)$ and for all test functions $\boldsymbol{\phi}\in H^1_0(\Omega)^N$ and $\boldsymbol{\psi}\in L^2(\Omega)^N$, where the barred tensors and vectors are the $Y^*$-averaged functions introduced in (2.14). Furthermore,

    \begin{equation} \mathit{E}^* = \frac{1}{|Y|}\int_{Y^*}\mathit{E}\cdot\left(\mathit{1}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}}\right)\mathrm{d}{\boldsymbol{y}},\qquad \mathit{D}^* = \frac{1}{|Y|}\int_{Y^*}\mathit{D}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\,\mathrm{d}{\boldsymbol{y}} \end{equation} (4.13)

    are the wanted effective coefficients. The auxiliary tensors $\mathit{Z}_{\alpha\beta},W_i\in L^\infty(0,T;W^{2,\infty}(\Omega;H^1_\#(Y^*)/\mathbf{R}))$ satisfy the cell problems

    \begin{align} 0& = \int_{Y^*}\boldsymbol{\Phi}\cdot\left(\nabla_{{\boldsymbol{y}}}\cdot\left[\mathit{E}\cdot\left(\mathit{1}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}}\right)\right]\right)\mathrm{d}{\boldsymbol{y}} = \int_{Y^*}\boldsymbol{\Phi}\cdot\left(\nabla_{{\boldsymbol{y}}}\cdot\hat{\mathit{E}}\right)\mathrm{d}{\boldsymbol{y}}, \end{align} (4.14a)
    \begin{align} 0& = \int_{Y^*}\boldsymbol{\Psi}\cdot\left(\nabla_{{\boldsymbol{y}}}\cdot\left[\mathit{D}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\right]\right)\mathrm{d}{\boldsymbol{y}} = \int_{Y^*}\boldsymbol{\Psi}\cdot\left(\nabla_{{\boldsymbol{y}}}\cdot\hat{\mathit{D}}\right)\mathrm{d}{\boldsymbol{y}} \end{align} (4.14b)

    for all $\boldsymbol{\Phi}\in C_\#(Y)^d$, $\boldsymbol{\Psi}\in C_\#(Y)^N$.

    Proof. The solution to system (P$^\epsilon_w$) is extended to $\Omega$ by taking $\hat{{\boldsymbol{H}}}^\epsilon$, $\hat{{\boldsymbol{V}}}^\epsilon$, $\hat{{\boldsymbol{U}}}^\epsilon$ for ${\boldsymbol{H}}^\epsilon$, ${\boldsymbol{V}}^\epsilon$, ${\boldsymbol{U}}^\epsilon$, respectively. The extended system is satisfied on $\mathcal{T}^\epsilon\cap\Omega$ and it satisfies the boundary conditions on $\partial_{int}\Omega^\epsilon$ of system (P$^\epsilon_w$). Hence, it is sufficient to look at (P$^\epsilon_w$) only. In (P$^\epsilon_w$), we choose $\boldsymbol{\psi} = \boldsymbol{\psi}^\epsilon = \boldsymbol{\Psi}(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon})$ for the test function $\boldsymbol{\Psi}\in L^2((0,T);\mathcal{D}(\Omega^\epsilon;C^\infty_\#(Y)))^N$, and $\boldsymbol{\phi} = \boldsymbol{\phi}^\epsilon = \boldsymbol{\Phi}(t,{\boldsymbol{x}})+\epsilon\boldsymbol{\varphi}(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon})$ for the test functions $\boldsymbol{\Phi}\in L^2((0,T);C^\infty_0(\Omega^\epsilon))^N$, $\boldsymbol{\varphi}\in L^2((0,T);\mathcal{D}(\Omega^\epsilon;C^\infty_\#(Y)))^N$. Corollary 5 and Theorem 9 in combination with (B.4) lead to $\mathit{T}^\epsilon\overset{2}{\rightharpoonup}\mathit{T}$, where $\mathit{T}^\epsilon$ is an arbitrary tensor or vector in (P$^\epsilon_w$) other than $\mathit{L}$ and $\mathit{G}$. Moreover, by Corollary 5 and Propositions 1 and 2 we have $\boldsymbol{\psi}^\epsilon\overset{2}{\rightharpoonup}\boldsymbol{\Psi}(t,{\boldsymbol{x}},{\boldsymbol{y}})$, $\boldsymbol{\phi}^\epsilon\overset{2}{\rightharpoonup}\boldsymbol{\Phi}(t,{\boldsymbol{x}})$, and $\nabla\boldsymbol{\phi}^\epsilon\overset{2}{\rightharpoonup}\nabla\boldsymbol{\Phi}(t,{\boldsymbol{x}})+\nabla_{{\boldsymbol{y}}}\boldsymbol{\varphi}(t,{\boldsymbol{x}},{\boldsymbol{y}})$. By Corollary 5 and Theorem 9, there is a two-scale limit of (P$^\epsilon_w$), reading

    \begin{equation} \int_\Omega\frac{1}{|Y|}\int_{Y^*}\boldsymbol{\Phi}^\top\left[\mathit{M}{\boldsymbol{v}}-{\boldsymbol{H}}-\mathit{K}{\boldsymbol{u}}\right]+\left(\nabla\boldsymbol{\Phi}+\nabla_{{\boldsymbol{y}}}\boldsymbol{\varphi}\right)^\top\cdot\left[\mathit{E}\cdot\left(\nabla{\boldsymbol{v}}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{V}}\right)+\mathit{D}{\boldsymbol{v}}\right]+\boldsymbol{\Psi}^\top\left[\frac{\partial{\boldsymbol{u}}}{\partial t}+\mathit{L}{\boldsymbol{u}}-\mathit{G}{\boldsymbol{v}}\right]\mathrm{d}{\boldsymbol{y}}\,\mathrm{d}{\boldsymbol{x}} = 0. \end{equation} (4.15)

    Similarly, the initial condition

    \begin{equation} {\boldsymbol{u}}(0,{\boldsymbol{x}}) = {\boldsymbol{U}}^*({\boldsymbol{x}}),\quad{\boldsymbol{x}}\in\overline{\Omega}, \end{equation} (4.16)

    is satisfied by ${\boldsymbol{u}}$ as $\nabla_{{\boldsymbol{y}}}{\boldsymbol{u}} = {\boldsymbol{0}}$ holds.

    For $\boldsymbol{\Phi} = \boldsymbol{\Psi} = {\boldsymbol{0}}$, we can take ${\boldsymbol{V}} = {\boldsymbol{W}}\cdot\nabla{\boldsymbol{v}}+\mathit{Z}{\boldsymbol{v}}+\tilde{{\boldsymbol{V}}}$, where ${\boldsymbol{W}}$ and $\mathit{Z}$ satisfy the cell problems (4.14a) and (4.14b), respectively, and $\nabla_{{\boldsymbol{y}}}\tilde{{\boldsymbol{V}}} = {\boldsymbol{0}}$. The existence and uniqueness of ${\boldsymbol{W}}$ and $\mathit{Z}$ follow from Lax-Milgram, as the cell problems (4.14a) and (4.14b) are linear elliptic systems by (A2) for the Hilbert space $H^1_\#(Y^*)$. The regularity of ${\boldsymbol{W}}$ and $\mathit{Z}$ in $t\in(0,T)$ and ${\boldsymbol{x}}\in\Omega$ follows from the regularity of $\mathit{E}$ and $\mathit{D}$ as stated in (A1), (A2) and (A3). Moreover, we obtain ${\boldsymbol{v}}\in L^\infty((0,T);H^2(\Omega))$ due to (A1). Then Proposition 1, Theorem 9 and the embedding $H^{1/2}(\partial Y^*)\hookrightarrow L^2(\partial\mathcal{T})$ yield $0 = \frac{\partial{\boldsymbol{V}}^\epsilon}{\partial\nu_{\mathit{D}^\epsilon}}\overset{2}{\rightharpoonup}\left(\hat{\mathit{E}}\cdot\nabla{\boldsymbol{v}}+\hat{\mathit{D}}{\boldsymbol{v}}\right)\cdot{\boldsymbol{n}} = 0$ on $\partial\mathcal{T}$, which is automatically guaranteed by (4.14a) and (4.14b).

    Hence, (P$^0_w$) yields the strong form system

    \begin{equation} (\text{P}^0_s)\left\{\begin{array}{ll} \overline{\mathit{M}}{\boldsymbol{v}}-\nabla\cdot\left(\mathit{E}^*\cdot\nabla{\boldsymbol{v}}+\mathit{D}^*{\boldsymbol{v}}\right) = \overline{{\boldsymbol{H}}}+\overline{\mathit{K}}{\boldsymbol{u}}&\text{ in }(0,T)\times\Omega,\cr \frac{\partial{\boldsymbol{u}}}{\partial t}+\mathit{L}{\boldsymbol{u}} = \mathit{G}{\boldsymbol{v}}&\text{ in }(0,T)\times\Omega,\cr {\boldsymbol{v}} = {\boldsymbol{0}}&\text{ on }(0,T)\times\partial\Omega,\cr {\boldsymbol{u}} = {\boldsymbol{U}}^*&\text{ on }\{0\}\times\overline{\Omega}, \end{array}\right. \end{equation}

    when, next to the regularity of (A1), the following regularity holds:

    \begin{align} M_{\alpha\beta},H_\alpha,K_{\alpha\beta}&\in C(0,T;C^1(\Omega;C^1_\#(Y))), \end{align} (4.17a)
    \begin{align} E_{ij},D_{i\alpha\beta}&\in C(0,T;C^2(\Omega;C^2_\#(Y))), \end{align} (4.17b)
    \begin{align} L_{\alpha\beta},G_{\alpha\beta}&\in C(0,T;C^1(\Omega)), \end{align} (4.17c)
    \begin{align} U^*_\alpha&\in C(\overline{\Omega}), \end{align} (4.17d)

    for all $T\in\mathbf{R}_+$, when both $\partial\Omega$ and $\partial\mathcal{T}$ are $C^3$-boundaries.
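    As a small sanity check of the effective coefficients (4.13), consider the classical one-dimensional situation without perforations and with $\mathit{D} = \mathit{0}$ (a textbook special case, not the perforated setting of this paper): there the cell problem can be integrated by hand and $\mathit{E}^*$ reduces to the harmonic mean of $E$ over the cell, which the following sketch verifies numerically (the coefficient is a hypothetical 1-periodic example):

```python
import numpy as np

def E_of_y(y):
    return 2.0 + np.sin(2.0 * np.pi * y)          # oscillatory microscale coefficient

y = np.linspace(0.0, 1.0, 200001)
E_vals = E_of_y(y)

arithmetic_mean = np.trapz(E_vals, y)             # naive average (not the correct upscaling)
harmonic_mean = 1.0 / np.trapz(1.0 / E_vals, y)   # E* predicted by the 1D cell problem

print("arithmetic mean of E :", round(arithmetic_mean, 4))   # ~2.0
print("effective E*         :", round(harmonic_mean, 4))     # ~sqrt(3) ~ 1.7321
```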

    Even though the previous section showed that there is a two-scale limit $({\boldsymbol{u}},{\boldsymbol{v}})$, it is necessary to show the relation between $({\boldsymbol{u}},{\boldsymbol{v}})$ and $({\boldsymbol{U}}^\epsilon,{\boldsymbol{V}}^\epsilon)$. To this end, we first rewrite the Neumann problem (2.8a)–(2.9c) and then use asymptotic expansions such that we are led to the two-scale limit, including the cell functions, in a natural way. The Neumann problem (2.8a)–(2.9c) can be written in operator form as

    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^\epsilon{\boldsymbol{V}}^\epsilon = \mathcal{H}^\epsilon{\boldsymbol{U}}^\epsilon&\text{ on }(0,T)\times\Omega^\epsilon,\cr \mathcal{L}{\boldsymbol{U}}^\epsilon = \mathit{G}{\boldsymbol{V}}^\epsilon&\text{ on }(0,T)\times\Omega^\epsilon,\cr {\boldsymbol{U}}^\epsilon = {\boldsymbol{U}}^*&\text{ in }\{0\}\times\Omega^\epsilon,\cr {\boldsymbol{V}}^\epsilon = {\boldsymbol{0}}&\text{ on }(0,T)\times\partial_{ext}\Omega^\epsilon,\cr \frac{\partial{\boldsymbol{V}}^\epsilon}{\partial\nu_{\mathit{D}^\epsilon}} = {\boldsymbol{0}}&\text{ on }(0,T)\times\partial_{int}\Omega^\epsilon, \end{array}\right. \end{equation} (4.18)

    as indicated in Section 2.

    We postulate the following asymptotic expansions in $\epsilon$ of ${\boldsymbol{U}}^\epsilon$ and ${\boldsymbol{V}}^\epsilon$:

    \begin{align} {\boldsymbol{V}}^\epsilon(t,{\boldsymbol{x}}) & = {\boldsymbol{V}}^0\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon{\boldsymbol{V}}^1\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon^2{\boldsymbol{V}}^2\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\cdots, \end{align} (4.19a)
    \begin{align} {\boldsymbol{U}}^\epsilon(t,{\boldsymbol{x}}) & = {\boldsymbol{U}}^0\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon{\boldsymbol{U}}^1\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon^2{\boldsymbol{U}}^2\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\cdots. \end{align} (4.19b)

    Let $\boldsymbol{\Phi} = \boldsymbol{\Phi}(t,{\boldsymbol{x}},{\boldsymbol{y}})\in L^\infty(0,T;C^2(\Omega;C^2_\#(Y)))^N$ be a vector function depending on two spatial variables ${\boldsymbol{x}}$ and ${\boldsymbol{y}}$, and introduce $\boldsymbol{\Phi}^\epsilon(t,{\boldsymbol{x}}) = \boldsymbol{\Phi}(t,{\boldsymbol{x}},{\boldsymbol{x}}/\epsilon)$. Then the total spatial derivatives in ${\boldsymbol{x}}$ become two partial derivatives, one in ${\boldsymbol{x}}$ and one in ${\boldsymbol{y}}$:

    \begin{align} \nabla{\boldsymbol{\Phi}}^\epsilon(t,{\boldsymbol{x}}) & = \frac{1}{\epsilon}(\nabla_{{\boldsymbol{y}}}{\boldsymbol{\Phi}})\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+ (\nabla_{{\boldsymbol{x}}}{\boldsymbol{\Phi}})\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right), \end{align} (4.20a)
    \begin{align} \nabla\cdot{\boldsymbol{\Phi}}^\epsilon(t,{\boldsymbol{x}}) & = \frac{1}{\epsilon}(\nabla_{{\boldsymbol{y}}}\cdot{\boldsymbol{\Phi}})\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+ (\nabla_{{\boldsymbol{x}}}\cdot{\boldsymbol{\Phi}})\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right). \end{align} (4.20b)

    Do note, the evaluation {\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon is suspended as is common in formal asymptotic expansions, leading to the use of {\boldsymbol{y}}\in Y^* and {\boldsymbol{x}}\in\Omega .
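    The two-scale chain rule (4.20a) can be verified symbolically on any concrete sample function; a minimal check (with a hypothetical $\boldsymbol{\Phi}$) reads:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)
y = sp.Symbol('y')

# a concrete two-scale sample function Phi(x, y), 1-periodic in y (hypothetical choice)
Phi = sp.sin(x) * sp.cos(2 * sp.pi * y)

# left-hand side: total x-derivative of Phi(x, x/eps)
lhs = sp.diff(Phi.subs(y, x / eps), x)

# right-hand side of (4.20a): (1/eps) * d_y Phi + d_x Phi, evaluated at y = x/eps
rhs = (sp.diff(Phi, y) / eps + sp.diff(Phi, x)).subs(y, x / eps)

assert sp.simplify(lhs - rhs) == 0
print("chain rule (4.20a) verified for the sample Phi")
```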

    Hence, \mathcal{A}^\epsilon{\boldsymbol{\Phi}}^\epsilon can be formally expanded:

    \begin{equation} \mathcal{A}^\epsilon{\boldsymbol{\Phi}}^\epsilon = \left[\left(\frac{1}{\epsilon^2}\mathcal{A}^0+\frac{1}{\epsilon}\mathcal{A}^1+\mathcal{A}^2\right){\boldsymbol{\Phi}}\right]\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right), \end{equation} (4.21)

    where

    \begin{align} \mathcal{A}^0{\boldsymbol{\Phi}}& = -\nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{\Phi}}\right), \end{align} (4.22a)
    \begin{align} \mathcal{A}^1{\boldsymbol{\Phi}}& = -\nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{\Phi}}\right)-\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{\Phi}}\right)-\nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{D}{\boldsymbol{\Phi}}\right), \end{align} (4.22b)
    \begin{align} \mathcal{A}^2{\boldsymbol{\Phi}}& = \!\qquad\!\qquad\quad\mathit{M}{\boldsymbol{\Phi}}-\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{\Phi}}\right)-\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{D}{\boldsymbol{\Phi}}\right). \end{align} (4.22c)

    Moreover, \mathcal{H}^\epsilon{\boldsymbol{\Phi}}^\epsilon can be written as {\boldsymbol{H}}+(\mathcal{H}^0+\epsilon \mathcal{H}^1){\boldsymbol{\Phi}} , where

    \begin{align} \mathcal{H}^0& = \mathit{K}+\mathit{J}\cdot\nabla_{{\boldsymbol{y}}}, \end{align} (4.23a)
    \begin{align} \mathcal{H}^1& = \qquad\mathit{J}\cdot\nabla_{{\boldsymbol{x}}}. \end{align} (4.23b)

    Since the outward normal {\boldsymbol{n}} on \partial\mathcal{T} depends only on {\boldsymbol{y}} and the outward normal {\boldsymbol{n}}^\epsilon on \partial_{int}\Omega^\epsilon = \partial\mathcal{T}^\epsilon\cap\Omega is defined as the Y -periodic function \left.{\boldsymbol{n}}\right|_{{\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon} , one has

    \begin{multline} \frac{\partial{\boldsymbol{\Phi}}^\epsilon}{\partial\nu_{\mathit{D}^\epsilon}} = \left(\mathit{E}^\epsilon\cdot\frac{\mathrm{d}{\boldsymbol{\Phi}}^\epsilon}{\mathrm{d}{\boldsymbol{x}}}+\mathit{D}^\epsilon{\boldsymbol{\Phi}}^\epsilon\right)\cdot{\boldsymbol{n}}^\epsilon\\ = \left(\frac{1}{\epsilon}\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{\Phi}}+\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{\Phi}}+\mathit{D}{\boldsymbol{\Phi}}\right)\cdot{\boldsymbol{n}}^\epsilon\qquad\qquad\qquad\qquad\qquad\quad\\ = : \frac{1}{\epsilon} \frac{\partial{\boldsymbol{\Phi}}^\epsilon}{\partial\nu_{\mathit{E}}}+\frac{\partial{\boldsymbol{\Phi}}^\epsilon}{\partial\nu_{\mathit{D}}}.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\! \end{multline} (4.24)

    Inserting (4.19a), (4.19b), (4.21)–(4.24) into the Neumann problem (4.18) and expanding the full problem into powers of \epsilon , we obtain the following auxiliary systems:

    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{V}}^0 = 0&\text{ in }(0,T)\times\Omega\times Y^*,\cr \frac{\partial {\boldsymbol{V}}^0}{\partial\nu_{\mathit{E}}} = 0&\text{ on }(0,T)\times\Omega\times\partial\mathcal{T},\cr {\boldsymbol{V}}^0 = 0&\text{ on }(0,T)\times\partial\Omega\times Y^*,\cr {\boldsymbol{V}}^0\quad\text{ $Y$-periodic,} \end{array}\right.\end{equation} (4.25)
    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{V}}^1 = -\mathcal{A}^1 {\boldsymbol{V}}^0&\text{ in }(0,T)\times\Omega\times Y^*,\cr \frac{\partial {\boldsymbol{V}}^1}{\partial\nu_{\mathit{E}}} = -\frac{\partial {\boldsymbol{V}}^0}{\partial\nu_{\mathit{D}}}&\text{ on }(0,T)\times\Omega\times\partial\mathcal{T},\cr {\boldsymbol{V}}^1 = 0&\text{ on }(0,T)\times\partial\Omega\times Y^*,\cr {\boldsymbol{V}}^1\quad\text{ $Y$-periodic,} \end{array}\right.\end{equation} (4.26)
    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{V}}^2 = -\mathcal{A}^1 {\boldsymbol{V}}^1-\mathcal{A}^2{\boldsymbol{V}}^0+{\boldsymbol{H}}+\mathcal{H}^0{\boldsymbol{U}}^0&\text{ in }(0,T)\times\Omega\times Y^*,\cr \frac{\partial {\boldsymbol{V}}^2}{\partial\nu_{\mathit{E}}} = -\frac{\partial {\boldsymbol{V}}^1}{\partial\nu_{\mathit{D}}}&\text{ on }(0,T)\times\Omega\times\partial\mathcal{T},\cr {\boldsymbol{V}}^2 = 0&\text{ on }(0,T)\times\partial\Omega\times Y^*,\cr {\boldsymbol{V}}^2\quad\text{ $Y$-periodic.} \end{array}\right. \end{equation} (4.27)

    For i\geq3 , we have

    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{V}}^i = -\mathcal{A}^1{\boldsymbol{V}}^{i-1}-\mathcal{A}^2{\boldsymbol{V}}^{i-2}+\mathcal{H}^0{\boldsymbol{U}}^{i-2}+\mathcal{H}^1{\boldsymbol{U}}^{i-3}&\text{ in }(0,T)\times\Omega\times Y^*,\cr \frac{\partial {\boldsymbol{V}}^i}{\partial\nu_{\mathit{E}}} = -\frac{\partial {\boldsymbol{V}}^{i-1}}{\partial\nu_{\mathit{D}}}&\text{ on }(0,T)\times\Omega\times\partial\mathcal{T},\cr {\boldsymbol{V}}^i = 0&\text{ on }(0,T)\times\partial\Omega\times Y^*,\cr {\boldsymbol{V}}^i\quad\text{ $Y$-periodic.} \end{array}\right. \end{equation} (4.28)

    Furthermore, we have

    \begin{equation} \left\{\begin{array}{ll} \mathcal{L}{\boldsymbol{U}}^0 = \mathit{G}{\boldsymbol{V}}^0&\text{ in }(0,T)\times\Omega\times Y^*,\cr {\boldsymbol{U}}^0 = {\boldsymbol{U}}^*&\text{ in } \{0\}\times\Omega\times Y^*,\cr {\boldsymbol{U}}^0\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} (4.29)

    and, for j\geq1 ,

    \begin{equation} \left\{\begin{array}{ll} \mathcal{L}{\boldsymbol{U}}^j = \mathit{G}{\boldsymbol{V}}^j&\text{ in }(0,T)\times\Omega\times Y^*,\cr {\boldsymbol{U}}^j = 0&\text{ in } \{0\}\times\Omega\times Y^*,\cr {\boldsymbol{U}}^j\quad\text{ $Y$-periodic.} \end{array}\right. \end{equation} (4.30)

    The existence and uniqueness of weak solutions of the systems (4.25)–(4.28) are addressed in the following lemma:

    Lemma 3. Let F\in L^2(Y^*) and g\in L^2(\partial \mathcal{T}) be Y -periodic. Let \mathit{A}(y)\in L^\infty_\#(Y^*)^{N\times N} satisfy \sum\limits_{i, j = 1}^n\mathit{A}_{ij}(y)\xi_i\xi_j\geq a\sum\limits_{i = 1}^n\xi_i^2 for all {\boldsymbol{\xi}}\in{\bf{R}}^n for some a > 0 .

    Consider the following boundary value problem for \omega({\boldsymbol{y}}) :

    \begin{equation} \left\{\begin{array}{ll}-\nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{A}({\boldsymbol{y}})\cdot\nabla_{{\boldsymbol{y}}}\omega\right) = F({\boldsymbol{y}})&\text{ on }Y^*,\cr -\left[\mathit{A}({\boldsymbol{y}})\nabla_{{\boldsymbol{y}}}\omega\right]\cdot{\boldsymbol{n}} = g({\boldsymbol{y}})&\text{ on }\partial\mathcal{T},\cr \omega\text{ is }Y\;{-periodic}.\cr \end{array}\right. \end{equation} (4.31)

    Then the following statements hold:

    (i) There exists a weak Y -periodic solution \omega\in H^1_\#(Y^*)/{\bf{R}} to (4.31) if and only if \int_{Y^*}F({\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}} = \int_{\partial \mathcal{T}}g({\boldsymbol{y}})\mathrm{d}\sigma_y .

    (ii) If (i) holds, then the uniqueness of weak solutions is ensured up to an additive constant.

    See Lemma 2.1 in [26].

    Existence and uniqueness of the solutions of the systems (4.29) and (4.30) can be handled via the application of Rothe's method, see [31] for details on Rothe's method, and Gronwall's inequality, and see [30] for various different versions of useful discrete Gronwall's inequalities.

    Lemma 4. The function {\boldsymbol{V}}^0 depends only on (t, {\boldsymbol{x}})\in(0, T)\times\Omega .

    Proof. Applying Lemma 3 to system (4.25) yields the weak solution {\boldsymbol{V}}^0(t, x, y)\in H^1_\#(Y^*)/{\bf{R}} pointwise in (t, {\boldsymbol{x}})\in(0, T)\times\Omega with uniqueness ensured up to an additive function depending only on (t, {\boldsymbol{x}})\in(0, T)\times\Omega . Direct testing of (4.25) with {\boldsymbol{V}}^0 yields \|\nabla_{{\boldsymbol{y}}}{\boldsymbol{V}}^0\|_{L^2_\#(Y^*)} = 0 . Hence, \nabla_{{\boldsymbol{y}}}{\boldsymbol{V}}^0 = 0 a.e. in Y^* .

    Corollary 3. The function {\boldsymbol{U}}^0 depends only on (t, {\boldsymbol{x}})\in(0, T)\times\Omega .

    Proof. Apply the gradient \nabla_{{\boldsymbol{y}}} to system (4.29). The independence of {\boldsymbol{y}} follows directly from (A1) and Lemma 4.

    The application of Lemma 3 to system (4.26) yields, due to the divergence theorem, again a weak solution {\boldsymbol{V}}^1(t, {\boldsymbol{x}}, {\boldsymbol{y}})\in H^1_\#(Y^*)/{\bf{R}} pointwise in (t, {\boldsymbol{x}})\in(0, T)\times\Omega with uniqueness ensured up to an additive function depending only on (t, {\boldsymbol{x}})\in(0, T)\times\Omega . One can determine {\boldsymbol{V}}^1 from {\boldsymbol{V}}^0 with the use of a decomposition of {\boldsymbol{V}}^1 into products of {\boldsymbol{V}}^0 derivatives and so-called cell functions:

    \begin{equation} {\boldsymbol{V}}^1 = {\boldsymbol{W}}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{Z}{\boldsymbol{V}}^0+\tilde{{\boldsymbol{V}}}^1 \\ \end{equation} (4.32)

    with \tilde{{\boldsymbol{V}}}^1 the Y^* -average of {\boldsymbol{V}}^1 satisfying \nabla_{{\boldsymbol{y}}}\tilde{{\boldsymbol{V}}}^1 = \mathit{0} , and for \alpha, \beta\in\{1, \ldots, N\} and i\in\{1, \ldots, d\} with cell functions

    \begin{equation} \mathit{Z}_{\alpha\beta},W_{i}\in L^\infty({\bf{R}}_+;W^{2,\infty}(\Omega;C^2_\#(Y^*)/{\bf{R}})) \end{equation} (4.33)

    with vanishing Y^* -average. Insertion of (4.32) into system (4.26) leads to systems for the cell-functions {\boldsymbol{W}} and \mathit{Z} :

    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{W}} = -\nabla_{{\boldsymbol{y}}}\cdot\mathit{E}&\text{ in }Y^*,\cr \frac{\partial {\boldsymbol{W}}}{\partial\nu_{\mathit{E}}} = -{\boldsymbol{n}}\cdot\mathit{E}&\text{ on }\partial \mathcal{T},\cr {\boldsymbol{W}}\quad\text{ $Y$-periodic,}\cr \frac{1}{|Y|}\int_{Y^*}{\boldsymbol{W}}\mathrm{d}{\boldsymbol{y}} = {\boldsymbol{0}}. \end{array}\right.\end{equation} (4.34)

    and

    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{Z} = -\nabla_{{\boldsymbol{y}}}\cdot\mathit{D}&\text{ in }Y^*,\cr \frac{\partial \mathit{Z}}{\partial\nu_{\mathit{E}}} = -{\boldsymbol{n}}\cdot\mathit{D}&\text{ on }\partial \mathcal{T},\cr \mathit{Z}_{\alpha\beta}\quad\text{ $Y$-periodic,}\cr \frac{1}{|Y|}\int_{Y^*}\mathit{Z}\mathrm{d}{\boldsymbol{y}} = \mathit{0}. \end{array}\right.\end{equation} (4.35)

    Again the existence and uniqueness up to an additive constant of the cell functions in systems (4.34) and (4.35) follow from Lemma 3 and convenient applications of the divergence theorem. The regularity of solutions follows from Theorem 9.25 and Theorem 9.26 in [32].

    The existence and uniqueness for {\boldsymbol{V}}^2 follows from applying Lemma 3 to system (4.27), which states that a solvability condition has to be satisfied. Using the divergence theorem, this solvability condition becomes

    \begin{multline} \int_{Y^*}\mathcal{A}^2{\boldsymbol{V}}^0+\mathcal{A}^1\left[({\boldsymbol{W}}\cdot\nabla_{{\boldsymbol{x}}}+\mathit{Z}){\boldsymbol{V}}^0\right]+\nabla_{{\boldsymbol{y}}}\cdot\left[(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}+\mathit{D})({\boldsymbol{W}}\cdot\nabla_{{\boldsymbol{x}}}+\mathit{Z}){\boldsymbol{V}}^0\right]\mathrm{d}{\boldsymbol{y}}\\ = \int_{Y^*}{\boldsymbol{H}}\mathrm{d}{\boldsymbol{y}}+\int_{Y^*}\mathcal{H}^0\mathrm{d}{\boldsymbol{y}}\;{\boldsymbol{U}}^0. \end{multline} (4.36)

    Inserting (4.22b), (4.22c), and (4.23a) and using both \nabla_{{\boldsymbol{y}}}{\boldsymbol{V}}^0 = \mathit{0} and \nabla_{{\boldsymbol{y}}}{\boldsymbol{U}}^0 = \mathit{0} , we find

    \begin{multline} \int_{Y^*}\mathit{M}{\boldsymbol{V}}^0\mathrm{d}{\boldsymbol{y}}-\int_{Y^*}\nabla_{{\boldsymbol{x}}}\cdot(\mathit{D}{\boldsymbol{V}}^0)\mathrm{d}{\boldsymbol{y}}-\int_{Y^*}\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0\right)\mathrm{d}{\boldsymbol{y}}\\ -\int_{Y^*}\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}\cdot\left[\nabla_{{\boldsymbol{y}}}({\boldsymbol{W}}\cdot\nabla_{{\boldsymbol{x}}}+\mathit{Z})\right]{\boldsymbol{V}}^0\right)\mathrm{d}{\boldsymbol{y}} = \int_{Y^*}{\boldsymbol{H}}\mathrm{d}{\boldsymbol{y}}+\int_{Y^*}\mathit{K}\mathrm{d}{\boldsymbol{y}}\;{\boldsymbol{U}}^0, \end{multline} (4.37)

    which after rearrangement looks like

    \begin{multline} \int_{Y^*}\mathit{M}\mathrm{d}{\boldsymbol{y}}{\boldsymbol{V}}^0-\nabla_{{\boldsymbol{x}}}\cdot\left(\int_{Y^*}\mathit{E}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}}\mathrm{d}{\boldsymbol{y}}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0\right)\\ -\nabla_{{\boldsymbol{x}}}\cdot\left(\int_{Y^*}\mathit{D}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\mathrm{d}{\boldsymbol{y}}{\boldsymbol{V}}^0\right) = \int_{Y^*}{\boldsymbol{H}}\mathrm{d}{\boldsymbol{y}}+\int_{Y^*}\mathit{K}\mathrm{d}{\boldsymbol{y}}\;{\boldsymbol{U}}^0. \end{multline} (4.38)

    Dividing all terms by |Y| , we realize that this solvability condition is an upscaled version of (2.8a):

    \begin{equation} \overline{\mathit{M}}{\boldsymbol{V}}^0-\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}^*\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{D}^*{\boldsymbol{V}}^0\right) = \overline{{\boldsymbol{H}}}+\overline{\mathit{K}}{\boldsymbol{U}}^0, \end{equation} (4.39)

    where we have used (4.32), the cell function decomposition, and the new short-hand notation

    \begin{align} \mathit{E}^* & = \frac{1}{|Y|}\int_{Y^*}\mathit{E}\cdot\left(\mathit{1}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}}\right)\mathrm{d}{\boldsymbol{y}}, \end{align} (4.40a)
    \begin{align} \mathit{D}^* & = \frac{1}{|Y|}\int_{Y^*}\mathit{D}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\mathrm{d}{\boldsymbol{y}}. \end{align} (4.40b)

    Lemma 5. The pair ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0)\in H^1((0, T)\times\Omega)\times L^\infty((0, T); H^1_0(\Omega)) are weak solutions to the following system

    \begin{equation} \label{eq: upscaledUV} \left\{ \begin{array}{l} \overline{\mathsf{M}}{\boldsymbol{V}}^0-\nabla_{\boldsymbol{x}}\cdot\left(\mathsf{E}^*\cdot\nabla_{\boldsymbol{x}}{\boldsymbol{V}}^0+\mathsf{D}^*{\boldsymbol{V}}^0\right) = \overline{\boldsymbol{H}}+\overline{\mathsf{K}}{\boldsymbol{U}}^0&{in }\;(0,T)\times\Omega,\cr \frac{\partial {\boldsymbol{U}}^0}{\partial t}+\mathsf{L}{\boldsymbol{U}}^0 = \mathsf{G}{\boldsymbol{V}}^0&{in }\;(0,T)\times\Omega,\cr \boldsymbol{V}^0 = \boldsymbol{0}&{on }\;(0,T)\times\partial\Omega,\cr \boldsymbol{U}^0 = {\boldsymbol{U}}^*&{on }\;\{0\}\times\Omega. \end{array} \right. \end{equation} (4.41)

    Proof. From system (4.25), equation (4.39), \nabla_{{\boldsymbol{y}}}{\boldsymbol{V}}^0 = \mathit{0} , assumption (A3) and system (4.29), we see that \nabla_{{\boldsymbol{y}}}{\boldsymbol{U}}^0 = \mathit{0} . This leads automatically to system (4.41), since there is no {\boldsymbol{y}} -dependence and \Omega^\epsilon\subset\Omega , \Omega^\epsilon\rightarrow\Omega , \partial_{ext}\Omega^\epsilon = \partial\Omega . Analogous to the proof of Theorem 6 we obtain the required spatial regularity. Moreover, by testing the second line with \frac{\partial}{\partial t}{\boldsymbol{U}}^0 , applying a gradient to the second line and testing it with \frac{\partial}{\partial t}\nabla{\boldsymbol{U}}^0 , we obtain the required temporal regularity as well.

    Theorem 8. Let (A1)–(A3) be valid, then ({\boldsymbol{u}}, {\boldsymbol{v}}) = ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) .

    Proof. From (P ^0_s ) and Lemma 5, we see that ({\boldsymbol{u}}, {\boldsymbol{v}}) and ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) satisfy the same linear boundary value problem. We only have to prove the uniqueness for this boundary value problem.

    From testing (4.35) with {\boldsymbol{W}} and (4.34) with \mathit{Z} , we obtain the identity

    \begin{equation} \int_{Y^*}(\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})^\top\cdot\mathit{D}\mathrm{d}{\boldsymbol{y}} = \int_{Y^*}\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\mathrm{d}{\boldsymbol{y}}. \end{equation} (4.42)

    Hence, from (4.40b) we get

    \begin{equation} \mathit{D}^* = \frac{1}{|Y|}\int_{Y^*}\left(\mathit{1}+(\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\right)^\top\cdot\mathit{D}\mathrm{d}{\boldsymbol{y}}. \end{equation} (4.43)

    Moreover, testing system (4.34) with {\boldsymbol{W}} yields the identity

    \begin{equation} \mathit{E}^* = \frac{1}{|Y|}\int_{Y^*}\left(\mathit{1}+(\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\right)^\top\cdot\mathit{E}\cdot\left(\mathit{1}+(\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\right)\mathrm{d}{\boldsymbol{y}}. \end{equation} (4.44)

    We subtract (P ^0_s ) from (4.41) and introduce \tilde{{\boldsymbol{U}}} , \tilde{{\boldsymbol{V}}} as

    \begin{equation} \tilde{{\boldsymbol{U}}} = {\boldsymbol{U}}^0-{\boldsymbol{u}}\quad\text{ and }\quad\tilde{{\boldsymbol{V}}} = {\boldsymbol{V}}^0-{\boldsymbol{v}}. \end{equation} (4.45)

    Testing with \tilde{{\boldsymbol{V}}} , we obtain the equation

    \begin{equation} 0 = \int_\Omega\frac{1}{|Y|}\int_{Y^*}\left[\tilde{{\boldsymbol{V}}}^\top\mathit{M}\tilde{{\boldsymbol{V}}}+{\boldsymbol{\zeta}}^\top\cdot\left(\mathit{E}\cdot{\boldsymbol{\zeta}}+\mathit{D}\tilde{{\boldsymbol{V}}}\right) - \tilde{{\boldsymbol{V}}}^\top\mathit{K}\tilde{{\boldsymbol{U}}}\right]\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}, \end{equation} (4.46)

    where

    \begin{equation} {\boldsymbol{\zeta}} = \left(\mathit{1}+(\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\right)\cdot\nabla_{{\boldsymbol{x}}}\tilde{{\boldsymbol{V}}}. \end{equation} (4.47)

    This equation is identical to the Neumann problem (2.8a)–(2.9c) with {\boldsymbol{H}} = {\boldsymbol{0}} , \mathit{J} = \mathit{0} , and replacements \nabla_{{\boldsymbol{x}}}{\boldsymbol{V}} \rightarrow {\boldsymbol{\zeta}} , {\boldsymbol{U}}\rightarrow\tilde{{\boldsymbol{U}}} and {\boldsymbol{V}}\rightarrow\tilde{{\boldsymbol{V}}} in (2.8a). Moreover, (2.8a) is coercive due to assumption (A3). Therefore, we can follow the argument of the proof of Theorem 6, but we only use equations (4.1) and (4.5a) with constants \tilde{H} and \tilde{J}_{i\alpha} set to 0 . For some R > 0 , this leads to

    \begin{equation} \frac{\partial}{\partial t}\|\tilde{{\boldsymbol{U}}}\|_{L^2(\Omega;L^2_\#(Y^*))^N}^2\leq R\|\tilde{{\boldsymbol{U}}}\|_{L^2(\Omega;L^2_\#(Y^*))^N}^2. \end{equation} (4.48)

    Applying Gronwall inequality and using the initial value \tilde{{\boldsymbol{U}}} = {\boldsymbol{U}}^*-{\boldsymbol{U}}^* = {\boldsymbol{0}} , we obtain \|\tilde{{\boldsymbol{U}}}\|_{L^2(\Omega; L^2_\#(Y^*))^N} = 0 a.e. in (0, T) . By the coercivity, we obtain \|\tilde{{\boldsymbol{V}}}\|_{L^2(\Omega; L^2_\#(Y^*))^N} = 0 and \|{\boldsymbol{\zeta}}\|_{L^2(\Omega; L^2_\#(Y^*))^N} = 0 .

    From the proof of Proposition 6.12 in [33], we see that \mathit{1}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}} does not have a kernel that contains non-zero Y -periodic solutions. Therefore, {\boldsymbol{\zeta}} = {\boldsymbol{0}} yields \nabla_{{\boldsymbol{x}}}\tilde{{\boldsymbol{V}}} = \mathit{0} . Thus, we have \tilde{{\boldsymbol{U}}} = {\boldsymbol{0}} in L^\infty((0, T); L^2(\Omega))^N and \tilde{{\boldsymbol{V}}} = {\boldsymbol{0}} in L^\infty((0, T); H^1_0(\Omega))^N . Hence, ({\boldsymbol{u}}, {\boldsymbol{v}}) = ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) .

    Corollary 4. Let \lambda\geq0 and \tilde{\kappa}\geq0 be as in Theorem 6. Then there exists a positive constant C independent of \epsilon such that

    \begin{equation} \|{\boldsymbol{U}}^0\|_{H^1(\Omega^\epsilon)^N}(t)\leq Ce^{\lambda t},\qquad \|{\boldsymbol{V}}^0\|_{\mathbb{V}_\epsilon^N}(t)\leq C(1+\tilde{\kappa}e^{\lambda t}) \end{equation} (4.49)

    holds for t\geq0 .

    Proof. Bochner's Theorem, see Chapter 2 in [28], states that \|\hat{{\boldsymbol{U}}}^\epsilon\|_{H^1(\Omega)^N}(t) , \|\hat{{\boldsymbol{V}}}^\epsilon\|_{H^1_0(\Omega)^N}(t) , \|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t) , and \|{\boldsymbol{V}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t) are Lebesgue integrable and, therefore, elements of L^1(0, T) . Since \Omega does not depend on t , Theorem 6 is applicable for a.e. t\in(0, T) . From Theorem 6 we obtain that both \|\hat{{\boldsymbol{U}}}^\epsilon\|_{H^1(\Omega)^N}(t) and \|\hat{{\boldsymbol{V}}}^\epsilon\|_{H^1_0(\Omega)^N}(t) have \epsilon -independent upper bounds for a.e. t\in(0, T) . Hence, the Eberlein-Šmuljan Theorem states that there is a subsequence \epsilon'\subset\epsilon such that \hat{{\boldsymbol{U}}}^{\epsilon'}(t) and \hat{{\boldsymbol{V}}}^{\epsilon'}(t) converge weakly in H^1(\Omega)^N and H^1_0(\Omega)^N , respectively, for a.e. t\in(0, T) . Moreover, Lemma 2 states that \hat{{\boldsymbol{U}}}^{\epsilon'} and \hat{{\boldsymbol{V}}}^{\epsilon'} two-scale converge (and, therefore, converge weakly) to {\boldsymbol{u}}\in H^1(0, T; L^2(\Omega))^N and {\boldsymbol{v}}\in L^\infty(0, T; L^2(\Omega))^N , respectively. Weak limits are unique. Hence, \hat{{\boldsymbol{U}}}^{\epsilon'}(t)\rightharpoonup{\boldsymbol{u}}(t) in H^1(\Omega)^N and \hat{{\boldsymbol{V}}}^{\epsilon'}(t)\rightharpoonup{\boldsymbol{v}}(t) in H^1_0(\Omega)^N for a.e. t\in (0, T) as \epsilon'\downarrow0 .

    Using these weak convergences, inequalities (2.5) and (2.6), Theorem 6, and the limit inferior property of weakly convergent sequences, we obtain

    \begin{align} \|{\boldsymbol{U}}^0\|_{H^1(\Omega^\epsilon)^N}(t)&\leq \|{\boldsymbol{U}}^0\|_{H^1(\Omega)^N}(t) = \liminf\limits_{\epsilon\rightarrow 0}\|\hat{{\boldsymbol{U}}}^\epsilon\|_{H^1(\Omega)^N}(t)\cr &\leq \liminf\limits_{\epsilon\rightarrow 0}C\|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)\leq C e^{\lambda t}, \end{align} (4.50a)
    \begin{align} \|{\boldsymbol{V}}^0\|_{\mathbb{V}_\epsilon^N}(t)&\leq \|{\boldsymbol{V}}^0\|_{H^1_0(\Omega)^N}(t) = \liminf\limits_{\epsilon\rightarrow 0}\|\hat{{\boldsymbol{V}}}^\epsilon\|_{H^1_0(\Omega)^N}(t)\cr &\leq \liminf\limits_{\epsilon\rightarrow 0}C\|{\boldsymbol{V}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)\leq C\left(1+\tilde{\kappa}e^{\lambda t}\right). \end{align} (4.50b)

    Hence, the bounds of Theorem 6 hold for {\boldsymbol{U}}^0 and {\boldsymbol{V}}^0 as well.

    This concludes the proof of Theorem 3.

    It is natural to determine the speed of convergence of the weak solutions ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) to ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) . However, certain boundary effects are expected due to the intersection of the external boundary with the perforated periodic cells. It is clear that \Omega^\epsilon\rightarrow\Omega for \epsilon\downarrow0 , but the boundary effects disturb the periodic behavior, which can lead to {\boldsymbol{V}}^j\neq{\boldsymbol{0}} at \partial_{ext}\Omega^\epsilon for j > 0 . Hence, a cut-off function is introduced to remove this potentially problematic part of the domain; this is a standard technique in corrector estimates for periodic homogenization, see [17] for a similar construction.

    Let us again introduce the cut-off function M_\epsilon defined by

    \begin{equation} \left\{\begin{array}{ll} M_\epsilon\in\mathcal{D}(\Omega),\cr M_\epsilon = 0 &\text{ if }\mathrm{dist}({\boldsymbol{x}},\partial\Omega)\leq\epsilon,\cr M_\epsilon = 1 &\text{ if }\mathrm{dist}({\boldsymbol{x}},\partial\Omega)\geq2\epsilon,\cr \epsilon\left|\frac{\mathrm{d} M_\epsilon}{\mathrm{d} x_i}\right|\leq C&i\in\{1,\ldots,d\}. \end{array}\right. \end{equation} (5.1)
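    For concreteness, a cut-off function with the properties listed in (5.1) can be sketched as follows (a sketch of ours on \Omega = (0,1)^2 , using a Lipschitz ramp instead of a smooth \mathcal{D}(\Omega) profile; mollifying the ramp gives a smooth version).

```python
# Sketch (ours): a ramp-type cut-off resembling M_eps in (5.1) on Omega = (0,1)^2.
# It is 0 within distance eps of the boundary, 1 beyond distance 2*eps, and its
# gradient is O(1/eps) in between; a mollification would make it smooth.
import numpy as np

def M_eps(x, eps):
    """x: (..., 2) array of points in (0,1)^2; returns values in [0,1]."""
    dist = np.minimum(np.min(x, axis=-1), np.min(1.0 - x, axis=-1))
    return np.clip((dist - eps) / eps, 0.0, 1.0)

# Example: M_eps vanishes near the boundary and equals 1 in the interior.
pts = np.array([[0.005, 0.5], [0.015, 0.5], [0.5, 0.5]])
print(M_eps(pts, eps=0.01))    # -> [0.  0.5 1. ]
```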

    With this cut-off function defined, we introduce again the error functions

    \begin{align} {\boldsymbol{\Phi}}^\epsilon& = {\boldsymbol{V}}^\epsilon-{\boldsymbol{V}}^0-M_\epsilon(\epsilon {\boldsymbol{V}}^1+\epsilon^2 {\boldsymbol{V}}^2), \end{align} (5.2a)
    \begin{align} {\boldsymbol{\Psi}}^\epsilon& = {\boldsymbol{U}}^\epsilon-{\boldsymbol{U}}^0-M_\epsilon(\epsilon {\boldsymbol{U}}^1+\epsilon^2 {\boldsymbol{U}}^2), \end{align} (5.2b)

    where the M_\epsilon terms are the so-called corrector terms.

    The solvability condition for system (4.27) naturally leads to the fact that ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) has to satisfy system (4.41). Similar to solving system (4.26) for {\boldsymbol{V}}^1 , we handle system (4.27) for {\boldsymbol{V}}^2 with a decomposition into cell-functions:

    \begin{equation} {\boldsymbol{V}}^2 = {\boldsymbol{P}} + \mathit{Q}^0{\boldsymbol{V}}^0+\mathit{R}^0{\boldsymbol{U}}^0+\mathit{Q}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{R}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{U}}^0+\mathit{Q}^2:\mathrm{D}^2_{{\boldsymbol{x}}}{\boldsymbol{V}}^0 \end{equation} (5.3)

    where we have the cell-functions

    \begin{equation} \begin{array}{rcl} P_{\alpha},R_{\alpha\beta}^0,R_{i\alpha\beta}^1&\in& L^\infty({\bf{R}}_+;W^{2,\infty}(\Omega^\epsilon;C^3_\#(Y^*))),\cr Q_{\alpha\beta}^0,Q_{i\alpha\beta}^1&\in& L^\infty({\bf{R}}_+;W^{2,\infty}(\Omega^\epsilon;C^2_\#(Y^*))),\cr Q^2_{ij}&\in& L^\infty({\bf{R}}_+;W^{2,\infty}(\Omega^\epsilon;C^2_\#(Y^*))) \end{array} \end{equation} (5.4)

    for \alpha, \beta\in\{1, \ldots, N\} and for i, j\in\{1, \ldots, d\} , and where

    \begin{equation} (\mathit{Q}^2:\mathrm{D}^2_{{\boldsymbol{x}}}{\boldsymbol{V}}^0)_\alpha : = \sum\limits_{i,j = 1}^dQ^2_{ij}\frac{\partial^2 V^0_\alpha}{\partial x_i\partial x_j}. \end{equation} (5.5)

    The cell-functions {\boldsymbol{P}} , \mathit{Q}^0 , \mathit{R}^0 , \mathit{Q}^1 , \mathit{R}^1 , \mathit{Q}^2 satisfy the following systems of partial differential equations, obtained from subtracting (4.39) from (4.27) and inserting the decomposition (5.3):

    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{P}} = {\boldsymbol{H}}-\overline{{\boldsymbol{H}}}&\text{ in }Y^*,\cr \frac{\partial {\boldsymbol{P}}}{\partial\nu_{\mathit{E}}} = {\boldsymbol{0}}&\text{ on }\partial \mathcal{T},\cr {\boldsymbol{P}}\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} (5.6)
    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{Q}^0 = \nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}\mathit{Z}\right)+\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\right)+\nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{D}\mathit{Z}\right)\cr \qquad\qquad+\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{D}-\mathit{D}^*\right)+\overline{\mathit{M}}-\mathit{M}&\text{ in }Y^*,\cr \frac{\partial \mathit{Q}^0}{\partial\nu_{\mathit{E}}} = -\left(\mathit{D}\mathit{Z}+\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}\mathit{Z}\right)\cdot{\boldsymbol{n}}&\text{ on }\partial \mathcal{T},\cr \mathit{Q}^0\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} (5.7)
    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{R}^0 = \mathit{K}-\overline{\mathit{K}}&\text{ in }Y^*,\cr \frac{\partial R^0_{\alpha\beta}}{\partial\nu_{\mathit{E}}} = \mathit{0}&\text{ on }\partial \mathcal{T},\cr \mathit{R}^0\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} (5.8)
    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{Q}^1 = \nabla_{{\boldsymbol{y}}}\cdot(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{W}})\otimes\mathit{1}+\nabla_{{\boldsymbol{y}}}\cdot(\mathit{E}\otimes\mathit{Z})\cr \qquad\qquad+\nabla_{{\boldsymbol{x}}}\cdot(\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\otimes\mathit{1}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\cr \qquad\qquad+\nabla_{{\boldsymbol{y}}}\cdot(\mathit{D}\otimes{\boldsymbol{W}})+\nabla_{{\boldsymbol{x}}}\cdot(\mathit{E}-\mathit{E}^*)\otimes\mathit{1}+\mathit{D}-\mathit{D}^*&\text{ in }Y^*,\cr \frac{\partial \mathit{Q}^1}{\partial\nu_{\mathit{E}}} = {\boldsymbol{W}}\otimes(\mathit{D}\cdot{\boldsymbol{n}})+{\boldsymbol{n}}\cdot\left(\mathit{E}\otimes\mathit{Z}+\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{W}}\otimes\mathit{1}\right)&\text{ on }\partial \mathcal{T},\cr \mathit{Q}^1\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} (5.9)
    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{R}^1 = \mathit{0}&\text{ in }Y^*,\cr \frac{\partial \mathit{R}^1}{\partial\nu_{\mathit{E}}} = \mathit{0}&\text{ on }\partial \mathcal{T},\cr \mathit{R}^1\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} (5.10)
    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{Q}^2\! = \!\nabla_{{\boldsymbol{y}}}\cdot(\mathit{E}\otimes{\boldsymbol{W}})+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}}+\mathit{E}-\mathit{E}^*&\text{ in }Y^*,\cr \frac{\partial\mathit{Q}^2}{\partial\nu_{\mathit{E}}} = -{\boldsymbol{n}}\cdot\mathit{E}\otimes {\boldsymbol{W}}&\text{ on }\partial \mathcal{T},\cr \mathit{Q}^2\quad\text{ $Y$-periodic.} \end{array}\right. \end{equation} (5.11)

    The well-posedness of the cell-problems (4.34)–(5.11) is given by Lemma 3, while the regularity follows from Theorem 9.25 and Theorem 9.26 in [32]. Note that cell-problem (5.10) yields \mathit{R}^1 = \mathit{0} .

    Let C denote a constant independent of \epsilon , {\boldsymbol{x}} , {\boldsymbol{y}} and t .

    We rewrite the error-function {\boldsymbol{\Phi}}^\epsilon as

    \begin{equation} {\boldsymbol{\Phi}}^\epsilon = {\boldsymbol{V}}^\epsilon-{\boldsymbol{V}}^0-M_\epsilon(\epsilon {\boldsymbol{V}}^1+\epsilon^2 {\boldsymbol{V}}^2) = {\boldsymbol{\phi}}^\epsilon+(1-M_\epsilon)(\epsilon {\boldsymbol{V}}^1+\epsilon^2{\boldsymbol{V}}^2), \end{equation} (5.12)

    where

    \begin{equation} {\boldsymbol{\phi}}^\epsilon = {\boldsymbol{V}}^\epsilon-({\boldsymbol{V}}^0+\epsilon {\boldsymbol{V}}^1+\epsilon^2{\boldsymbol{V}}^2). \end{equation} (5.13)

    Similarly, we make use of the error-function {\boldsymbol{\Psi}}^\epsilon

    \begin{equation} {\boldsymbol{\Psi}}^\epsilon = {\boldsymbol{U}}^\epsilon-{\boldsymbol{U}}^0-M_\epsilon(\epsilon {\boldsymbol{U}}^1+\epsilon^2{\boldsymbol{U}}^2). \end{equation} (5.14)

    The goal is to estimate both {\boldsymbol{\Phi}}^\epsilon and {\boldsymbol{\Psi}}^\epsilon uniformly in \epsilon .

    Our problem for ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) is defined on \Omega^\epsilon , whereas the asymptotic expansion terms ({\boldsymbol{U}}^i, {\boldsymbol{V}}^i) are defined on \Omega\times Y^* ; nevertheless, we are still able to use spaces defined on \Omega^\epsilon , such as \mathbb{V}_\epsilon^N , since the evaluation {\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon transfers the zero-extension on \mathcal{T} to \mathcal{T}^\epsilon .

    Introduce the coercive bilinear form a_\epsilon: \mathbb{V}_\epsilon^N\times\mathbb{V}_\epsilon^N\rightarrow{\bf{R}} defined as

    \begin{equation} a_\epsilon({\boldsymbol{\psi}},{\boldsymbol{\phi}}) = \int_{\Omega^\epsilon}{\boldsymbol{\phi}}^\top \mathcal{A}^\epsilon{\boldsymbol{\psi}}\mathrm{d}{\boldsymbol{x}} \end{equation} (5.15)

    pointwise in t\in{\bf{R}}_+ , on which it depends implicitly.

    By construction, {\boldsymbol{\Phi}}^\epsilon vanishes on \partial_{ext}\Omega^\epsilon , which allows for the estimation of \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N} . This estimation follows the standard approach, see [17] for the details.

    First, the inequality |a_\epsilon({\boldsymbol{\Phi}}^\epsilon, {\boldsymbol{\phi}})|\leq \mathcal{C}(\epsilon, t)\|{\boldsymbol{\phi}}\|_{\mathbb{V}_\epsilon^N} , where \mathcal{C}(\epsilon, t) is a constant depending on \epsilon and t\in\mathbb{R}_+ , is obtained for all {\boldsymbol{\phi}}\in \mathbb{V}_\epsilon^N . Second, taking {\boldsymbol{\phi}} = {\boldsymbol{\Phi}}^\epsilon and using the coercivity, one immediately obtains a bound for \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N} .

    Our pseudo-parabolic system complicates this approach. Instead of a constant \mathcal{C}(\epsilon, t) alone, one obtains an additional term C\|{\boldsymbol{\Psi}}^\epsilon\|_{H^1_0(\Omega^\epsilon)^N} . Via an ordinary differential equation for {\boldsymbol{\Psi}}^\epsilon , we obtain a temporal inequality for \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1_0(\Omega^\epsilon)^N} that contains \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N} . The upper bound for \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N} then follows from applying Gronwall's inequality, which in turn leads to an upper bound for \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1_0(\Omega^\epsilon)^N} .

    From equation (5.13), we have

    \begin{equation} a_\epsilon({\boldsymbol{\Phi}}^\epsilon,{\boldsymbol{\phi}}) = a_\epsilon({\boldsymbol{\phi}}^\epsilon,{\boldsymbol{\phi}})+a_\epsilon((1-M_\epsilon)(\epsilon {\boldsymbol{V}}^1+\epsilon^2{\boldsymbol{V}}^2),{\boldsymbol{\phi}}) \end{equation} (5.16)

    for {\boldsymbol{\phi}}\in\mathbb{V}_\epsilon^N .

    Note that M_\epsilon vanishes in a neighbourhood of the boundary \partial_{ext}\Omega^\epsilon , see (5.1); consequently, the second term in (5.16) vanishes outside this neighbourhood.

    We start by estimating the first term of (5.16), a_\epsilon({\boldsymbol{\phi}}^\epsilon, {\boldsymbol{\phi}}) . From the asymptotic expansion of \mathcal{A}^\epsilon , we obtain

    \begin{multline} \mathcal{A}^\epsilon{\boldsymbol{\phi}}^\epsilon = (\epsilon^{-2}\mathcal{A}^0+\epsilon^{-1}\mathcal{A}^1+\mathcal{A}^2){\boldsymbol{\phi}}^\epsilon\\ = \mathcal{A}^\epsilon {\boldsymbol{V}}^\epsilon - \epsilon^{-2}\mathcal{A}^0{\boldsymbol{V}}^0-\epsilon^{-1}(\mathcal{A}^0{\boldsymbol{V}}^1+\mathcal{A}^1{\boldsymbol{V}}^0)-(\mathcal{A}^0{\boldsymbol{V}}^2+\mathcal{A}^1{\boldsymbol{V}}^1+\mathcal{A}^2{\boldsymbol{V}}^0)\qquad\qquad\qquad\quad\\ -\epsilon(\mathcal{A}^1{\boldsymbol{V}}^2+\mathcal{A}^2{\boldsymbol{V}}^1)-\epsilon^2\mathcal{A}^2{\boldsymbol{V}}^2. \end{multline} (5.17)

    Using the definitions of \mathcal{A}^0 , \mathcal{A}^1 , \mathcal{A}^2 , {\boldsymbol{V}}^0 , {\boldsymbol{V}}^1 , {\boldsymbol{V}}^2 , we have

    \begin{equation} \mathcal{A}^\epsilon {\boldsymbol{\phi}}^\epsilon = \mathit{K}^\epsilon {\boldsymbol{U}}^\epsilon-\left.\mathit{K}\right|_{{\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon}{\boldsymbol{U}}^0+\epsilon\mathit{J}^\epsilon\nabla {\boldsymbol{U}}^\epsilon-\epsilon(\mathcal{A}^2{\boldsymbol{V}}^1+\mathcal{A}^1{\boldsymbol{V}}^2)-\epsilon^2\mathcal{A}^2{\boldsymbol{V}}^2. \end{equation} (5.18)

    The function {\boldsymbol{\phi}}^\epsilon satisfies the following boundary condition on \partial\mathcal{T}^\epsilon

    \begin{equation} \frac{\partial{\boldsymbol{\phi}}^\epsilon}{\partial \nu_{\mathit{D}^\epsilon}} = -\epsilon^2\frac{\partial {\boldsymbol{V}}^2}{\partial \nu_\mathit{D}}, \end{equation} (5.19)

    as a consequence of the boundary conditions for the {\boldsymbol{V}}^i -terms. Hence, {\boldsymbol{\phi}}^\epsilon satisfies the following system:

    \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^\epsilon {\boldsymbol{\phi}}^\epsilon = {\boldsymbol{f}}^\epsilon-\epsilon {\boldsymbol{g}}^\epsilon &\text{ in }\Omega^\epsilon,\cr \frac{\partial{\boldsymbol{\phi}}^\epsilon}{\partial \nu_{\mathit{D}^\epsilon}} = \epsilon^2\mathit{h}^\epsilon\cdot{\boldsymbol{n}}^\epsilon&\text{ on }\partial \mathcal{T}^\epsilon,\cr {\boldsymbol{\phi}}^\epsilon = -\epsilon{\boldsymbol{V}}^1-\epsilon^2{\boldsymbol{V}}^2&\text{ on }\partial\Omega. \end{array}\right. \end{equation} (5.20)

    Testing with {\boldsymbol{\phi}}^\top\in \mathbb{V}_\epsilon^N and performing a partial integration, we obtain

    \begin{equation} a_\epsilon({\boldsymbol{\phi}}^\epsilon,{\boldsymbol{\phi}}) = \int_{\Omega^\epsilon} {\boldsymbol{\phi}}^\top{\boldsymbol{f}}^\epsilon\mathrm{d}{\boldsymbol{x}}-\int_{\Omega^\epsilon}\epsilon {\boldsymbol{\phi}}^\top{\boldsymbol{g}}^\epsilon\mathrm{d}{\boldsymbol{x}}+\int_{\partial \mathcal{T}^\epsilon}\epsilon^2{\boldsymbol{\phi}}^\top\mathit{h}^\epsilon\cdot{\boldsymbol{n}}^\epsilon\mathrm{d}{\boldsymbol{s}}, \end{equation} (5.21)

    where {\boldsymbol{f}}^\epsilon , {\boldsymbol{g}}^\epsilon and \mathit{h}^\epsilon are given by

    \begin{equation} {\boldsymbol{f}}^\epsilon = \mathit{K}^\epsilon {\boldsymbol{U}}^\epsilon-\left.\mathit{K}\right|_{{\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon}{\boldsymbol{U}}^0 \end{equation} (5.22)
    \begin{multline} {\boldsymbol{g}}^\epsilon = \mathcal{A}^1\left[{\boldsymbol{P}}+\mathit{Q}^0{\boldsymbol{V}}^0+\mathit{R}^0{\boldsymbol{U}}^0+\mathit{Q}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{R}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{U}}^0+\mathit{Q}^2:\mathrm{D}_{{\boldsymbol{x}}}^2{\boldsymbol{V}}^0\right]\\ +\mathcal{A}^2\left[{\boldsymbol{W}}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{Z} {\boldsymbol{V}}^0\right]-\mathit{J}^\epsilon\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{U}}^\epsilon\\ +\epsilon \mathcal{A}^2\left[{\boldsymbol{P}}+\mathit{Q}^0{\boldsymbol{V}}^0+\mathit{R}^0{\boldsymbol{U}}^0+\mathit{Q}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{R}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{U}}^0+\mathit{Q}^2:\mathrm{D}_{{\boldsymbol{x}}}^2{\boldsymbol{V}}^0\right], \end{multline} (5.23)
    \begin{equation} \mathit{h}^\epsilon = -\frac{\partial}{\partial \nu_{\mathit{D}}}\left[{\boldsymbol{P}}+\mathit{Q}^0{\boldsymbol{V}}^0+\mathit{R}^0{\boldsymbol{U}}^0+\mathit{Q}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{R}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{U}}^0+\mathit{Q}^2:\mathrm{D}_{{\boldsymbol{x}}}^2{\boldsymbol{V}}^0\right]. \end{equation} (5.24)

    Estimates for {\boldsymbol{f}}^\epsilon , {\boldsymbol{g}}^\epsilon and \mathit{h}^\epsilon follow from estimates on {\boldsymbol{V}}^0 , {\boldsymbol{U}}^0 , {\boldsymbol{P}} , \mathit{Q}^0 , \mathit{R}^0 , \mathit{Q}^1 , \mathit{R}^1 , \mathit{Q}^2 , and {\boldsymbol{W}} . Due to the regularity of \overline{{\boldsymbol{H}}} , \overline{\mathit{K}} , \mathit{J} , \mathit{G} , classical regularity results for elliptic systems, see Theorem 8.12 and Theorem 8.13 in [34], guarantee that all spatial derivatives up to the fourth order of ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) are in L^\infty({\bf{R}}_+\times\Omega) . Similarly, from Theorem 9.25 and Theorem 9.26 in [32], the cell-functions {\boldsymbol{W}} , {\boldsymbol{P}} , \mathit{Q}^0 , \mathit{R}^0 , \mathit{Q}^1 , \mathit{R}^1 and \mathit{Q}^2 have higher regularity than that given by Lemma 3: W_i, P_\alpha, Q^0_{\alpha\beta}, R^0_{\alpha\beta}, Q^1_{i\alpha\beta}, R^1_{\alpha\beta}, \mathit{Q}^2_{ij} are in L^\infty({\bf{R}}_+; W^{2, \infty}(\Omega; H^3_\#(Y^*)/{\bf{R}})) . We denote by \kappa the time-independent bound

    \begin{equation*} \kappa = \sup\limits_{1\leq\alpha,\beta\leq N}\|K_{\alpha\beta}\|_{L^\infty({\bf{R}}_+;W^{1,\infty}(\Omega;C^1_\#(Y^*)))}. \end{equation*}

    Note that \|\mathit{R}^0\|_{L^\infty({\bf{R}}_+\times\Omega; C^1_\#(Y^*))^{N\times N}}\leq C\kappa by the Poincaré-Wirtinger inequality.

    Bounding {\boldsymbol{g}}^\epsilon follows now directly from equation (5.23) and Corollary 4:

    \begin{equation} \|{\boldsymbol{g}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}\leq C(1+\epsilon)(1+(\kappa+\tilde{\kappa})e^{\lambda t}), \end{equation} (5.25)

    where C is independent of \epsilon.

    Bounding \mathit{h}^\epsilon is more difficult as it is defined on the boundary \partial\mathcal{T}^\epsilon . The following result, see Lemma 2.31 on page 47 in [17], gives a trace inequality, which shows that \mathit{h}^\epsilon is properly defined.

    Lemma 6. Let \psi\in H^1(\Omega^\epsilon) . Then

    \begin{equation} \|\psi\|_{L^2(\partial \mathcal{T}^\epsilon)}\leq C\epsilon^{-1/2}\|\psi\|_{\mathbb{V}_\epsilon}, \end{equation} (5.26)

    where C is independent of \epsilon .

    By (5.24), the regularity of the cell-functions, the regularity of the normal at the boundary, Corollary 4 and using Lemma 6 twice, we have

    \begin{equation} \left|\int_{\partial \mathcal{T}^\epsilon}\epsilon^2{\boldsymbol{\phi}}^\top\mathit{h}^\epsilon\cdot{\boldsymbol{n}}^\epsilon\mathrm{d}s({\boldsymbol{x}})\right|\leq C\epsilon(1+(\kappa+\tilde{\kappa})e^{\lambda t})\|{\boldsymbol{\phi}}\|_{\mathbb{V}_\epsilon^N}. \end{equation} (5.27)

    We estimate {\boldsymbol{f}}^\epsilon in L^2(\Omega^\epsilon)^N from the standard inequality |a_1b_1-a_2b_2|\leq|a_1-a_2||b_2|+|a_1||b_1-b_2| for all a_1, a_2, b_1, b_2\in{\bf{R}} . This leads to

    \begin{equation} \|{\boldsymbol{f}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}\leq \|\mathit{K}^\epsilon-\mathit{K}\|_{L^2(\Omega^\epsilon)^{N\times N}}\|{\boldsymbol{U}}^\epsilon\|_{L^\infty(\Omega^\epsilon)^N} +\|\mathit{K}\|_{L^\infty(\Omega^\epsilon)^{N\times N}}\|{\boldsymbol{U}}^\epsilon-{\boldsymbol{U}}^0\|_{L^2(\Omega^\epsilon)^N}. \end{equation} (5.28)

    With this inequality, the estimate depends on the convergence of \mathit{K}^\epsilon and {\boldsymbol{U}}^\epsilon to \mathit{K} and {\boldsymbol{U}}^0 , respectively; however, with the notation according to (2.7), we have \mathit{K}^\epsilon-\left.\mathit{K}\right|_{{\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon} = \mathit{0} a.e.

    From the definition of {\boldsymbol{\Psi}}^\epsilon , we obtain

    \begin{equation} \|{\boldsymbol{U}}^\epsilon-{\boldsymbol{U}}^0\|_{L^2(\Omega^\epsilon)^N} = \|{\boldsymbol{\Psi}}^\epsilon+M_\epsilon(\epsilon {\boldsymbol{U}}^1+\epsilon^2 {\boldsymbol{U}}^2)\|_{L^2(\Omega^\epsilon)^N} \leq \|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}+\epsilon\|{\boldsymbol{U}}^1\|_{L^2(\Omega^\epsilon)^N}+\epsilon^2\|{\boldsymbol{U}}^2\|_{L^2(\Omega^\epsilon)^N}. \end{equation} (5.29)

    Introduce the notations l = L_N and t_l = \min\{1/l, t\} . Using system (4.30), the bounds C(1+(\kappa+\tilde{\kappa})e^{\lambda t}) for \|{\boldsymbol{V}}^1\|_{H^1(\Omega^\epsilon)^N} and \|{\boldsymbol{V}}^2\|_{H^1(\Omega^\epsilon)^N} obtained via the cell-function decompositions (4.32) and (5.3), respectively, the inequalities (4.5a) and (4.5b), and by employing Gronwall's inequality, we obtain

    \begin{align} \|{\boldsymbol{U}}^1\|_{H^1(\Omega^\epsilon)^N}&\leq C(1+(\kappa+\tilde{\kappa})e^{\lambda t})\sqrt{t_le^{lt}+t_l^2e^{2lt}}, \end{align} (5.30a)
    \begin{align} \|{\boldsymbol{U}}^2\|_{H^1(\Omega^\epsilon)^N}&\leq C(1+(\kappa+\tilde{\kappa})e^{\lambda t})\sqrt{t_le^{lt}+t_l^2e^{2lt}}, \end{align} (5.30b)
    \begin{align} \|{\boldsymbol{U}}^\epsilon\!-\!{\boldsymbol{U}}^0\|_{L^2(\Omega^\epsilon)^N}&\leq \|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}\cr &\quad+C(\epsilon+\epsilon^2)(1+(\kappa+\tilde{\kappa})e^{\lambda t})\sqrt{t_le^{lt}+t^2_le^{2lt}}. \end{align} (5.30c)

    Thus, from inequality (5.28), we obtain

    \begin{equation} \|{\boldsymbol{f}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}\leq \kappa\|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N} +C(\epsilon+\epsilon^2)\kappa(1+(\kappa+\tilde{\kappa})e^{\lambda t})\sqrt{t_le^{lt}+t_l^2e^{2lt}}. \end{equation} (5.31)

    We now have all the ingredients to estimate a_\epsilon({\boldsymbol{\phi}}^\epsilon, {\boldsymbol{\phi}}) . Inserting estimates (5.25), (5.27) and (5.31) into (5.21), we find

    \begin{equation} |a_\epsilon({\boldsymbol{\phi}}^\epsilon,{\boldsymbol{\phi}})|\!\leq\! \left[\kappa\|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}+C(\epsilon\!+\!\epsilon^2)(1\!+\!(\kappa+\tilde{\kappa})e^{\lambda t})(1+\kappa (1+ t_le^{lt}))\right]\!\|{\boldsymbol{\phi}}\|_{\mathbb{V}_\epsilon^N}. \end{equation} (5.32)

    Next, we need to estimate the second right-hand term of (5.16), a_\epsilon((1-M_\epsilon)(\epsilon {\boldsymbol{V}}^1+\epsilon^2 {\boldsymbol{V}}^2), {\boldsymbol{\phi}}) . Following [17] (see pages 48 and 49 therein) and using the bounds C(1+(\kappa+\tilde{\kappa})e^{\lambda t}) for \|{\boldsymbol{V}}^1\|_{H^1(\Omega^\epsilon)^N} and \|{\boldsymbol{V}}^2\|_{H^1(\Omega^\epsilon)^N} , we obtain

    \begin{equation} |a_\epsilon((1-M_\epsilon)(\epsilon {\boldsymbol{V}}^1+\epsilon^2 {\boldsymbol{V}}^2),{\boldsymbol{\phi}})| \leq \left[C(\epsilon^{\frac{1}{2}}+\epsilon^{\frac{3}{2}})+C(\epsilon+\epsilon^2)\left(1+(\kappa+\tilde{\kappa})e^{\lambda t}\right)\right]\|{\boldsymbol{\phi}}\|_{\mathbb{V}_\epsilon^N}. \end{equation} (5.33)

    The combination of (5.32) and (5.33) yields

    \begin{equation} |a_\epsilon({\boldsymbol{\Phi}}^\epsilon,{\boldsymbol{\phi}})|\!\leq\! \left[\kappa\|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}\!+C(\epsilon^{\frac{1}{2}}\!+\!\epsilon^{\frac{3}{2}})\left(1+\epsilon^{\frac{1}{2}}(1\!+\!(\kappa+\tilde{\kappa})e^{\lambda t})(1\!+\!\kappa (1+ t_le^{lt}))\right)\right]\!\|{\boldsymbol{\phi}}\|_{\mathbb{V}_\epsilon^N}. \end{equation} (5.34)

    Since \mathcal{L}{\boldsymbol{\Psi}}^\epsilon = \mathit{G}{\boldsymbol{\Phi}}^\epsilon , we obtain an identity similar to (4.5a) to which we apply Gronwall's inequality, leading to

    \begin{equation} \|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2(t)\leq \int_0^te^{l(t-s)}G_N\|{\boldsymbol{\Phi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2(s)\mathrm{d}s. \end{equation} (5.35)
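    Schematically (this intermediate step is our sketch, with l and G_N as in Appendix A and assuming {\boldsymbol{\Psi}}^\epsilon(0) = {\boldsymbol{0}} ), the inequality referred to above reads

    \begin{equation*} \frac{\partial}{\partial t}\|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2\leq l\|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2+G_N\|{\boldsymbol{\Phi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2, \end{equation*}

    so that Gronwall's inequality with vanishing initial datum yields precisely the right-hand side of (5.35).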

    Choosing {\boldsymbol{\phi}} = {\boldsymbol{\Phi}}^\epsilon and with m denoting the coercivity constant \min\limits_{1\leq i\leq d, 1\leq\alpha\leq N}\{\tilde{m}_\alpha, \tilde{e}_i\} , we obtain

    \begin{equation} m\|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}^2\!\leq\! \left[\kappa\sqrt{\int_0^te^{l(t-s)}G_N\|{\boldsymbol{\Phi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2(s)\mathrm{d}s}\!+\!\mathcal{B}(\epsilon,t)\right]\!\|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}, \end{equation} (5.36)

    where

    \begin{equation} \mathcal{B}(\epsilon,t) = C(\epsilon^{\frac{1}{2}}\!+\!\epsilon^{\frac{3}{2}})\left(1+\epsilon^{\frac{1}{2}}(1\!+\!(\kappa+\tilde{\kappa})e^{\lambda t})(1\!+\!\kappa(1+ t_le^{lt}))\right). \end{equation} (5.37)

    Applying Young's inequality twice to (5.36), once with \eta > 0 and once with \eta_1 > 0 , followed by the Poincaré inequality (see Remark 1) and Gronwall's inequality, we arrive at

    \begin{equation} \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}^2\!\leq\!\frac{\mathcal{B}(\epsilon,t)^2}{\eta_1(2m-\eta_1-\eta)}+\int_0^t\frac{\kappa^2 G_N e^{l(t-s)}\mathcal{B}(\epsilon,s)^2}{\eta(2m-\eta_1-\eta)^2\eta_1}\exp\left(\int_s^t\frac{\kappa G_N}{\eta(2m-\eta_1-\eta)}e^{l(t-u)}\mathrm{d}u\right)\mathrm{d}s. \end{equation} (5.38)

    Since 0 < \mathcal{B}(\epsilon, s)\leq \mathcal{B}(\epsilon, t) for s\leq t , we can use the Leibniz rule to obtain

    \begin{equation} \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}^2\!\leq\!\frac{\mathcal{B}(\epsilon,t)^2}{\eta_1(2m-\eta_1-\eta)}\exp\left(\frac{\kappa^2 G_N}{\eta(2m-\eta_1-\eta)}t_le^{lt}\right). \end{equation} (5.39)

    Minimizing the two fractions separately leads us to \eta_1 = m-\frac{\eta}{2} and \eta = m-\frac{\eta_1}{2} , whence \eta = \eta_1 = \frac{2}{3}m . Hence, we obtain

    \begin{equation} \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}\leq C(\epsilon^{\frac{1}{2}}+\epsilon^{\frac{3}{2}})\left(1+\epsilon^{\frac{1}{2}}(1+(\kappa+\tilde{\kappa})e^{\lambda t})(1+\kappa (1+ t_le^{lt}))\right)\exp\left(\mu t_le^{lt}\right) = :\mathcal{C}(\epsilon,t), \end{equation} (5.40)

    and from (5.35), we arrive at

    \begin{equation} \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}\leq\!\mathcal{C}(\epsilon,t)\sqrt{t_le^{lt}} \end{equation} (5.41)

    with

    \begin{equation} \mu = \frac{9\kappa^2G_N}{8m^2}. \end{equation} (5.42)

    This completes the proof of Theorem 4.
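    As a rough numerical illustration (ours, with hypothetical constants), the bound \mathcal{C}(\epsilon, t) of (5.40) can be tabulated to exhibit the \epsilon^{1/2} decay for fixed t :

```python
# Illustration (ours): evaluating the corrector bound C(eps, t) of (5.40) for
# hypothetical constants; it decays like eps**0.5 for fixed t and grows
# (doubly exponentially) in t. All numbers below are placeholders.
import numpy as np

C, kappa, kappa_t, lam, l, mu = 1.0, 0.5, 0.5, 0.2, 0.2, 0.3   # hypothetical

def corrector_bound(eps, t):
    t_l = min(1.0 / l, t)                  # t_l = min(1/l, t), assuming l > 0
    growth = 1.0 + eps**0.5 * (1.0 + (kappa + kappa_t) * np.exp(lam * t)) \
                           * (1.0 + kappa * (1.0 + t_l * np.exp(l * t)))
    return C * (eps**0.5 + eps**1.5) * growth * np.exp(mu * t_l * np.exp(l * t))

for eps in [1e-1, 1e-2, 1e-3]:
    print(f"eps = {eps:.0e}:  bound at t = 1 is {corrector_bound(eps, 1.0):.4f}")
```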

    In [19] a concrete corrosion model has been derived from first principles. This model combines mixture theory with balance laws, while incorporating chemical reaction effects, mechanical deformations, incompressible flow, diffusion, and moving boundary effects. The model describes the onset of concrete corrosion by representing the corroded part as a layer of cement (the mixture) on top of a concrete bed and below an acidic fluid. The mixture contains three components \phi = (\phi_1, \phi_2, \phi_3) , which react chemically via 3+2\rightarrow 1 . For simplification, we work in volume fractions; hence, the identity \phi_1+\phi_2+\phi_3 = 1 holds. For \alpha\in\{1, 2, 3\} , the model equations on a domain \Omega become

    \begin{align} \frac{\partial\phi_\alpha}{\partial t}+\epsilon\nabla\cdot(\phi_\alpha\bf{v}_\alpha)-\epsilon\delta_\alpha\Delta \phi_\alpha& = \quad\epsilon\kappa_\alpha \mathcal{F}(\phi_1,\phi_3), \end{align} (6.1a)
    \begin{align} \nabla\cdot\left(\sum\limits_{\alpha = 1}^3\phi_\alpha\bf{v}_\alpha\right)-\sum\limits_{\alpha = 1}^3\delta_\alpha\Delta \phi_\alpha& = \quad\sum\limits_{\alpha = 1}^3\kappa_\alpha \mathcal{F}(\phi_1,\phi_3), \end{align} (6.1b)
    \begin{align} \nabla(-\phi_\alpha p+\![\lambda_\alpha\!+\!\mu_\alpha]\nabla\!\!\cdot\!\bf{u}_\alpha)\!+\!\mu_\alpha\Delta \bf{u}_\alpha& = \chi_\alpha(\bf{v}_\alpha\!-\!\bf{v}_3)\!-\!\!\sum\limits_{\beta = 1}^3\!\gamma_{\alpha\beta}\Delta \bf{v}_\beta, \end{align} (6.1c)
    \begin{align} \nabla\left(-p+\sum\limits_{\alpha = 1}^2(\lambda_\alpha+\mu_\alpha)\nabla\cdot\bf{u}_\alpha\right)+\sum\limits_{\alpha = 1}^2\mu_\alpha\Delta \bf{u}_\alpha&+\sum\limits_{\alpha = 1}^3\sum\limits_{\beta = 1}^3\gamma_{\alpha\beta}\Delta \bf{v}_\beta = 0, \end{align} (6.1d)

    where {\boldsymbol{u}}_\alpha and {\boldsymbol{v}}_\alpha = \partial {\boldsymbol{u}}_\alpha/\partial t are the displacement and velocity of component \alpha , respectively, and \epsilon is a small positive number independent of any spatial scale. Equation (6.1a) denotes a mass balance law, (6.1b) the incompressibility condition, (6.1c) the partial (for component \alpha ) momentum balance law, and (6.1d) the total momentum balance.

    For t = \mathcal{O}(\epsilon^0) , we can treat \phi as constant, which removes some nonlinearities from the model. Moreover, with equation (6.1b) we can eliminate {\boldsymbol{v}}_3 in favor of {\boldsymbol{v}}_1 and {\boldsymbol{v}}_2 , while with equation (6.1d) we can eliminate p . This leads to a final expression for {\boldsymbol{u}} = ({\boldsymbol{u}}_1, {\boldsymbol{u}}_2) :

    \begin{equation} \tilde{\mathit{M}}\partial_t{\boldsymbol{u}} -\tilde{\mathit{A}}{\boldsymbol{u}}-\mathrm{div}\left(\tilde{\mathit{B}}{\boldsymbol{u}}+\tilde{\mathit{D}}\partial_t{\boldsymbol{u}}+ \mathit{E}\cdot\nabla\left(\mathit{F}{\boldsymbol{u}}+\tilde{\mathit{G}}\partial_t{\boldsymbol{u}}\right)\right) = {\boldsymbol{H}}, \end{equation} (6.2)

    with

    \begin{align} \tilde{\mathit{M}} & = \left(\begin{array}{cc} \chi_1\frac{\phi_1+\phi_3}{\phi_3}&\chi_1\frac{\phi_2}{\phi_3}\\ \chi_2\frac{\phi_1}{\phi_3}&\chi_2\frac{\phi_2+\phi_3}{\phi_3} \end{array}\right),&\quad\tilde{\mathit{A}}& = \tilde{\mathit{B}} = \tilde{\mathit{D}} = \mathit{0}, \end{align} (6.3a)
    \begin{align} \mathit{F} & = \left(\begin{array}{cc} \mu_1(\phi_2+\phi_3)&-\mu_2\phi_1\\ -\mu_1\phi_2&\mu_2(\phi_1+\phi_3) \end{array}\right),&\quad\mathit{E} & = \bf{I}, \end{align} (6.3b)
    \begin{align} \tilde{G}_{\alpha\beta} & = -\gamma_{\alpha\beta}+\phi_\alpha\sum\limits_{\lambda = 1}^3\gamma_{\lambda\beta},&\quad H_\alpha & = \frac{\chi_\alpha}{\phi_3}\mathcal{F}(\phi_1,\phi_3)\sum\limits_{\lambda = 1}^3\kappa_\lambda. \end{align} (6.3c)

    According to [19], there are several options for \gamma_{\alpha\beta} , but all these options lead to non-invertible \tilde{\mathit{G}} . Suppose we take \gamma_{11} = \gamma_{22} = \gamma_1 < 0 and \gamma_{12} = \gamma_{21} = \gamma_2 < 0 with \gamma_1 > \gamma_2 . Then \tilde{\mathit{G}} is invertible and positive definite for \phi_3 > 0 , since the determinant of \tilde{\mathit{G}} equals (\gamma_1^2-\gamma_2^2)\phi_3 .

    According to Section 4.3 of [20], we obtain the Neumann problem (2.8a), (2.8b) with

    \begin{equation} \mathit{M} = \tilde{\mathit{M}}\tilde{\mathit{G}}^{-1},\; \mathit{D} = \mathit{0},\;\mathit{L} = \tilde{\mathit{G}}^{-1}\mathit{F},\; \mathit{G} = \tilde{\mathit{G}}^{-1},\; \mathit{K} = -\tilde{\mathit{M}}\tilde{\mathit{G}}^{-1}\mathit{F},\; \mathit{J} = \mathit{0}. \end{equation} (6.4)

    Note that neither \mathit{E} nor {\boldsymbol{H}} changes in this transformation. Moreover, \mathit{M} is positive definite, since both \tilde{\mathit{M}} and \tilde{\mathit{G}} are positive definite.
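    As a small symbolic check (ours; variable names are hypothetical), the determinant of \tilde{\mathit{M}} in (6.3a) reduces, with \phi_1+\phi_2+\phi_3 = 1 , to \chi_1\chi_2/\phi_3 , so \tilde{\mathit{M}} is indeed invertible whenever \chi_1, \chi_2, \phi_3 > 0 :

```python
# Symbolic check (ours) of the matrix M-tilde from (6.3a): its determinant
# equals chi1*chi2/phi3 once the volume-fraction identity phi1+phi2+phi3 = 1
# is used, so M-tilde is invertible for chi1, chi2, phi3 > 0.
import sympy as sp

chi1, chi2, phi1, phi2, phi3 = sp.symbols('chi1 chi2 phi1 phi2 phi3', positive=True)

M_tilde = sp.Matrix([[chi1 * (phi1 + phi3) / phi3, chi1 * phi2 / phi3],
                     [chi2 * phi1 / phi3,          chi2 * (phi2 + phi3) / phi3]])

det = sp.simplify(M_tilde.det().subs(phi1, 1 - phi2 - phi3))
print(det)    # expected output: chi1*chi2/phi3
```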

    Suppose the cement mixture has a periodic microstructure satisfying assumption (A4), inherited from the corroded concrete microstructure. Assume the constants \chi_\alpha , \mu_\alpha , \kappa_\alpha , and \gamma_{\alpha\beta} are actually functions of both the macroscopic scale {\boldsymbol{x}} and the microscopic scale {\boldsymbol{y}} , such that Assumptions (A1)–(A3) are satisfied. Note that (A3) is trivially satisfied.

    From the main results we see that a macroscale limit ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) of this microscale corrosion problem exists, which satisfies system (P ^0_w ), and that the convergence speed is given by Theorem 4 with constants l , \kappa , \lambda and \mu given in Appendix A.

    We acknowledge the Netherlands Organisation for Scientific Research (NWO) for the MPE grant 657.000.004, the NWO Cluster Nonlinear Dynamics in Natural Systems (NDNS+) for funding a research stay of AJV at Karlstads Universitet, and the Royal Swedish Academy of Sciences (KVA) for the Stiftelsen GS Magnusons fund grant MG2018-0020.

    The authors declare no conflict of interest.

    In Theorem 4, the three constants l , \lambda and \mu are introduced as exponents indicating the exponential growth in time of the corrector bounds. Moreover, there is also a constant \kappa that indicates whether additional exponential growth occurs or not. For brevity it was not stated there how these constants depend on the given matrices and tensors. Here we give an exact determination procedure for these constants.

    The constant \kappa denotes the maximal operator norm of the tensor \mathit{K} .

    \begin{equation} \kappa = \sup\limits_{1\leq\alpha,\beta\leq N}\|K_{\alpha\beta}\|_{L^\infty({\bf{R}}_+;W^{1,\infty}(\Omega;C^1_\#(Y^*)))}. \end{equation} (A.1)

    The constants l , \lambda , \tilde{\kappa} and \mu were obtained via Young's inequality, which makes them a coupled system via several additional positive constants: \eta , \eta_1 , \eta_2 , \eta_3 . The obtained expressions are

    \begin{align} l& = \max\{0,L_N\}, \end{align} (A.2a)
    \begin{align} \lambda & = \frac{1}{2}\max\left\{0,L_N+\max\left\{L_G+G_M\max\limits_{1\leq\alpha\leq N}\tilde{K}_\alpha,G_M\max\limits_{1\leq\alpha\leq N}\max\limits_{1\leq i\leq d}\tilde{J}_{i\alpha}\right\}\right\}, \end{align} (A.2b)
    \begin{align} \mu& = \frac{9\kappa^2}{8m^2}G_N, \end{align} (A.2c)
    \begin{align} \tilde{\kappa} & = \max\limits_{1\leq\alpha\leq N,1\leq i\leq d}\{\tilde{K}_\alpha,\tilde{J}_{i\alpha}\} \end{align} (A.2d)

    with the values

    \begin{align} L_N& = 2\mathcal{L}_{\min}+\eta G_{\max}+\eta_1dN\mathcal{L}_G, \end{align} (A.3a)
    \begin{align} L_G& = 2\mathcal{L}_{\min}+\frac{dN}{\eta_1}\mathcal{L}_G+\eta_2G_{\max}+\eta_3dNG_G, \end{align} (A.3b)
    \begin{align} G_N& = \frac{1}{\eta}G_{\max}+\frac{dN}{\eta_3}G_G, \end{align} (A.3c)
    \begin{align} G_G& = \frac{1}{\eta_2}G_{\max} , \end{align} (A.3d)
    \begin{align} G_M& = \max\limits_{1\leq\alpha\leq N}\max\limits_{1\leq i\leq d}\left\{\frac{G_N+G_G}{\tilde{m}_\alpha},\frac{G_N}{\tilde{e}_i}\right\}, \end{align} (A.3e)
    \begin{align} m& = \min\limits_{1\leq\alpha\leq N}\min\limits_{1\leq i\leq d}\{\tilde{m}_\alpha,\tilde{e}_i\}, \end{align} (A.3f)

    where we have the positive values

    \begin{align} \tilde{m}_\alpha& = m_\alpha-\sum\limits_{i = 1}^d\sum\limits_{\beta = 1}^N\frac{\|D_{i\beta\alpha}\|_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))}}{2\eta_{i\beta\alpha}}-\eta_\alpha-\sum\limits_{\beta = 1}^N\eta_{\alpha\beta}-\sum\limits_{i = 1}^d\sum\limits_{\beta = 1}^N\tilde{\eta}_{i\alpha\beta}, \end{align} (A.4a)
    \begin{align} \tilde{e}_i& = e_i-\sum\limits_{\alpha,\beta = 1}^N\frac{\eta_{i\beta\alpha}}{2}\|D_{i\beta\alpha}\|_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))}, \end{align} (A.4b)
    \begin{align} \tilde{H}& = \sum\limits_{\alpha = 1}^N\frac{1}{4\eta_{\alpha}}\|\mathit{H}_{\alpha}\|^2_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))}, \end{align} (A.4c)
    \begin{align} \tilde{K}_\alpha& = \sum\limits_{\beta = 1}^N\frac{1}{4\eta_{\beta\alpha}}\|K_{\beta\alpha}\|^2_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))}, \end{align} (A.4d)
    \begin{align} \tilde{J}_{i\alpha}& = \sum\limits_{\beta = 1}^N\frac{\epsilon_{0}^2}{4\tilde{\eta}_{i\beta\alpha}}\|J_{i\beta\alpha}\|^2_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))} \end{align} (A.4e)

    for \eta_{i\beta\alpha} > 0 , \eta_{\beta} > 0 , \eta_{\alpha\beta} > 0 , \tilde{\eta}_{i\alpha\beta} > 0 and \epsilon_{0} the supremum of allowed \epsilon values (which is 1 for Theorem 5). Moreover, we have

    \mathcal{L}_{\min} as the L^\infty({\bf{R}}_+\times\Omega) -norm of the absolute value of the largest negative eigenvalue of \mathit{L} , or as -1 times the smallest positive eigenvalue of \mathit{L} if no negative or zero eigenvalues exist,

    \mathcal{L}_G as the L^\infty({\bf{R}}_+\times\Omega) -norm of the largest absolute value of the \nabla\mathit{L} components,

    G_{\max} as the L^\infty({\bf{R}}_+\times\Omega) -norm of the largest eigenvalue of \mathit{G} ,

    G_G as the L^\infty({\bf{R}}_+\times\Omega) -norm of the largest absolute value of the \nabla\mathit{G} components.
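    Once the Young parameters \eta , \eta_1 , \eta_2 , \eta_3 and the quantities above are fixed, the constants (A.2)–(A.3) can be evaluated directly. The following sketch (ours; all numerical inputs in the example call are hypothetical placeholders) spells out this evaluation.

```python
# Sketch (ours): evaluating the coupled constants (A.2)-(A.3) once the Young
# parameters eta, eta1, eta2, eta3 and the remaining quantities are fixed.

def corrector_constants(calL_min, calL_G, G_max, d, N,
                        m_tilde, e_tilde, K_tilde, J_tilde, kappa,
                        eta, eta1, eta2, eta3):
    G_G = G_max / eta2                                               # (A.3d)
    L_N = 2*calL_min + eta*G_max + eta1*d*N*calL_G                   # (A.3a)
    L_G = 2*calL_min + d*N*calL_G/eta1 + eta2*G_max + eta3*d*N*G_G   # (A.3b)
    G_N = G_max/eta + d*N*G_G/eta3                                   # (A.3c)
    G_M = max(max((G_N + G_G)/ma for ma in m_tilde),
              max(G_N/ei for ei in e_tilde))                         # (A.3e)
    m = min(min(m_tilde), min(e_tilde))                              # (A.3f)

    l = max(0.0, L_N)                                                # (A.2a)
    lam = 0.5*max(0.0, L_N + max(L_G + G_M*max(K_tilde),
                                 G_M*max(J_tilde)))                  # (A.2b)
    mu = 9*kappa**2*G_N/(8*m**2)                                     # (A.2c)
    kappa_tilde = max(max(K_tilde), max(J_tilde))                    # (A.2d)
    return l, lam, mu, kappa_tilde

# Hypothetical example with d = 2, N = 2:
print(corrector_constants(calL_min=-0.5, calL_G=0.1, G_max=1.0, d=2, N=2,
                          m_tilde=[1.0, 1.2], e_tilde=[0.8, 0.9],
                          K_tilde=[0.3, 0.4], J_tilde=[0.1, 0.2, 0.1, 0.2],
                          kappa=0.5, eta=0.5, eta1=0.5, eta2=0.5, eta3=0.5))
```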

    Remark 11. Note that smaller l and \mu yield longer times \tau in Theorem 5 and faster convergence rates in \epsilon . However, l and \mu are only coupled via \lambda . Hence, l and \mu can be made as small as needed as long as \lambda remains finite and independent of \epsilon .

    Remark 12. Note that \mathcal{L}_{\min} < 0 allows for a hyperplane of positive values of \eta and \eta_1 in (\eta, \eta_1, \eta_2, \eta_3) -space such that l = L_N = 0 . In this case, neither \lambda nor \mu should be minimized. Instead, \tau_{end} should be maximized, i.e., the time \tau for which the bounds of Theorem 5 equal \mathcal{O}(1) for p = q = 0 . For \mu\geq\lambda this yields a minimization of \mu , while for \mu < \lambda a minimization of \mu+\lambda . Due to the use of maxima in the definitions of \lambda and \tau_{end} , we refrain from maximizing \tau_{end} , as any attempt leads to a large tree of cases for which an optimization problem has to be solved.

    Two-scale convergence is a method invented in 1989 by Nguetseng, see [35]. This method removes many technicalities by basing the convergence itself on functional-analytic grounds, as a property of functions in certain spaces. In some sense, the function spaces natural to periodic boundary conditions have favourable convergence properties for their oscillating continuous functions. This is made precise in the First Oscillation Lemma:

    Lemma 7 ('First Oscillation Lemma'). Let B_p(\Omega, Y) , 1\leq p < \infty , denote any of the spaces L^p(\Omega; C_\#(Y)) , L^p_\#(Y; C(\overline{\Omega})) , C(\overline{\Omega}; C_\#(Y)) . Then B_p(\Omega, Y) has the following properties:

    1. B_p(\Omega, Y) is a separable Banach space.

    2. B_p(\Omega, Y) is dense in L^p(\Omega\times Y) .

    3. If f({\boldsymbol{x}}, {\boldsymbol{y}})\in B_p(\Omega, Y) , then f({\boldsymbol{x}}, {\boldsymbol{x}}/\epsilon) is a measurable function on \Omega such that

    \begin{equation} \left\|f\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\right\|_{L^p(\Omega)}\leq \left\|f\left({\boldsymbol{x}},{\boldsymbol{y}}\right)\right\|_{B_p(\Omega,Y)}. \end{equation} (B.1)

    4. For every f({\boldsymbol{x}}, {\boldsymbol{y}})\in B_p(\Omega, Y) , one has

    \begin{equation} \lim\limits_{\epsilon\rightarrow0}\int_\Omega f\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}} = \frac{1}{|Y|}\int_\Omega\int_Yf({\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}. \end{equation} (B.2)

    5. For every f({\boldsymbol{x}}, {\boldsymbol{y}})\in B_p(\Omega, Y) , one has

    \begin{equation} \lim\limits_{\epsilon\rightarrow0}\int_\Omega \left|f\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\right|^p\mathrm{d}{\boldsymbol{x}} = \frac{1}{|Y|}\int_\Omega\int_Y |f({\boldsymbol{x}},{\boldsymbol{y}})|^p\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}. \end{equation} (B.3)

    See Theorems 2 and 4 in [36].
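    Property 4 can be illustrated numerically; the following sketch (ours) uses the sample function f({\boldsymbol{x}}, {\boldsymbol{y}}) = \sin(\pi x)(1+\cos(2\pi y)) on \Omega = (0, 1) with unit cell Y = (0, 1) , for which the limit in (B.2) equals 2/\pi .

```python
# Numerical illustration (ours) of property 4 of the First Oscillation Lemma:
# for f(x, y) = sin(pi*x)*(1 + cos(2*pi*y)) on Omega = Y = (0, 1), the integral
# of f(x, x/eps) over Omega tends to (1/|Y|) * int int f dy dx = 2/pi.
import numpy as np

def f(x, y):
    return np.sin(np.pi * x) * (1.0 + np.cos(2.0 * np.pi * y))

x = np.linspace(0.0, 1.0, 200001)      # fine quadrature grid on Omega
target = 2.0 / np.pi

for eps in [0.1, 0.01, 0.001]:
    val = np.trapz(f(x, x / eps), x)   # int_Omega f(x, x/eps) dx
    print(f"eps = {eps:5.3f}:  {val:.6f}   (limit {target:.6f})")
```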

    However, the First Oscillation Lemma alone is not sufficient, as it cannot be applied to weak solutions or to gradients. Essentially, two-scale convergence overcomes these problems by extending the First Oscillation Lemma in a weak sense.

    Two-scale convergence: Definition and results

    For each function c(t, {\boldsymbol{x}}, {\boldsymbol{y}}) on (0, T)\times\Omega\times Y , we introduce a corresponding sequence of functions c^\epsilon(t, {\boldsymbol{x}}) on (0, T)\times\Omega by

    \begin{equation} c^\epsilon(t,{\boldsymbol{x}}) = c\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right) \end{equation} (B.4)

    for all \epsilon\in(0, \epsilon_0) , although two-scale convergence is valid for more general bounded sequences of functions c^\epsilon(t, {\boldsymbol{x}}) .

    Introduce the notation \nabla_{{\boldsymbol{y}}} for the gradient in the {\boldsymbol{y}} -variable. Moreover, we introduce the notations \rightarrow , \rightharpoonup , and \overset{2}{\longrightarrow} to point out strong convergence, weak convergence, and two-scale convergence, respectively.

    Two-scale convergence was first introduced in [35] and popularized by the seminal paper [37], in which the term two-scale convergence was actually coined. For our explanation we use both the seminal paper [37] and the modern exposition of two-scale convergence in [36]. From now on, p and q are real numbers such that 1 < p < \infty and 1/p+1/q = 1 .

    Definition 1. Let (\epsilon_h)_h be a fixed sequence of positive real numbers converging to 0. A sequence (u_\epsilon) of functions in L^p(\Omega) is said to two-scale converge to a limit u_0\in L^p(\Omega\times Y) if

    \begin{equation} \int_\Omega u_\epsilon({\boldsymbol{x}})\phi\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}}\rightarrow\frac{1}{|Y|}\int_\Omega\int_Yu_0({\boldsymbol{x}},{\boldsymbol{y}})\phi({\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}, \end{equation} (B.5)

    for every \phi\in L^q(\Omega; C_\#(Y)) . When it is clear from the context, we will omit the subscript h .

    See Definition 6 on page 41 of [36].

    Remark 13. Definition 1 allows for an extension of two-scale convergence to Bochner spaces L^r(I; L^p(\Omega\times Y)) of the additional variable t\in I for r = p\in[1, \infty) by having the regularity u_\epsilon\in L^p(I\times\Omega) , u_0\in L^p(I\times\Omega\times Y) and \phi\in L^q(I\times\Omega; C_\#(Y)) with q = \frac{p}{p-1} . Moreover (B.5) changes into

    \begin{equation} \int_I\int_\Omega u_\epsilon(t,{\boldsymbol{x}})\phi\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}}\mathrm{d}t\rightarrow\frac{1}{|Y|}\int_I\int_\Omega\int_Yu_0(t,{\boldsymbol{x}},{\boldsymbol{y}})\phi(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}\mathrm{d}t. \end{equation} (B.6)

    This Bochner-like extension is well-defined because, for {\boldsymbol{y}} -independent limits u_0 , two-scale convergence is identical to weak convergence.

    Note that, for r \neq p , convergence (B.6) is valid for the regularity u_\epsilon\in L^r(I; L^p(\Omega)) , u_0\in L^r(I; L^p(\Omega\times Y)) and \phi\in L^s(I; L^1(\Omega; C_\#(Y))) for s = \frac{r}{r-1} .

    For r = \infty we need \phi\in \text{ba}_{ac}(I; L^1(\Omega; C_\#(Y))) , where \text{ba}_{ac}(I) denotes L^\infty(I)^* as the dual of L^\infty(I) can be identified with the set of all finitely additive signed measures that are absolutely continuous with respect to \mathrm{d}t on I .

    With the Bochner version of Definition 1 introduced in Remark 13, we can give the Sobolev space version of Definition 1.

    Definition 2. Let r, p\in[1, \infty) , s = \frac{r}{r-1} , and q = \frac{p}{p-1} . A sequence (u_\epsilon) of functions in W^{1, r}(I; L^p(\Omega)) is said to two-scale converge to a limit u_0\in W^{1, r}(I; L^p(\Omega\times Y)) if both

    \begin{align} \int_I\int_\Omega u_\epsilon(t,{\boldsymbol{x}})\phi\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}}\mathrm{d}t&\rightarrow\frac{1}{|Y|}\int_I\int_\Omega\int_Yu_0(t,{\boldsymbol{x}},{\boldsymbol{y}})\phi(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}\mathrm{d}t, \end{align} (B.7a)
    \begin{align} \int_I\int_\Omega \frac{\partial u_\epsilon}{\partial t}(t,{\boldsymbol{x}})\phi\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}}\mathrm{d}t&\rightarrow\frac{1}{|Y|}\int_I\int_\Omega\int_Y\frac{\partial u_0}{\partial t}(t,{\boldsymbol{x}},{\boldsymbol{y}})\phi(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}\mathrm{d}t \end{align} (B.7b)

    hold for every \phi\in L^s(I; L^q(\Omega; C_\#(Y))) . Or in short notation

    \begin{equation} u_\epsilon \overset{2}{\rightarrow}u_0\text{ in }L^r(I;L^p(\Omega))\text{ and } \frac{\partial u_\epsilon}{\partial t} \overset{2}{\rightarrow}\frac{\partial u_0}{\partial t}\text{ in }L^r(I;L^p(\Omega)). \end{equation} (B.8)

    We now list several important results concerning the two-scale convergence, which can all be extended in a natural way for Bochner spaces, see Section 2.5.2 in [38].

    Proposition 1. Let (u_\epsilon) be a bounded sequence in W^{1, p}(\Omega) for 1 < p\leq\infty such that

    \begin{equation} u_\epsilon\rightharpoonup u_0\quad{in}\quad W^{1,p}(\Omega). \end{equation} (B.9)

    Then u_\epsilon\overset{2}{\longrightarrow}u_0 and there exist a subsequence \epsilon' and a u_1\in L^p(\Omega; W^{1, p}_\#(Y)/{\bf{R}}) such that

    \begin{equation} \nabla u_{\epsilon'}\overset{2}{\longrightarrow}\nabla u_0+\nabla_{{\boldsymbol{y}}}u_1. \end{equation} (B.10)

    Proposition 1 for 1 < p < \infty is Theorem 20 in [36], while for p = 2 it is identity (i) in Proposition 1.14 in [37]. On page 1492 of [37] it is mentioned that the p = \infty case holds as well. The case of interest for us here is p = 2 .

    Proposition 2. Let (u_\epsilon) and (\epsilon\nabla u_\epsilon) be two bounded sequences in L^2(\Omega) . Then there exists a function u_0({\boldsymbol{x}}, {\boldsymbol{y}}) in L^2(\Omega; H^1_\#(Y)) such that, up to a subsequence, u_\epsilon\overset{2}{\longrightarrow} u_0({\boldsymbol{x}}, {\boldsymbol{y}}) and \epsilon\nabla u_\epsilon\overset{2}{\longrightarrow} \nabla_{{\boldsymbol{y}}}u_0({\boldsymbol{x}}, {\boldsymbol{y}}) . See identity (ii) in Proposition 1.14 in [37].

    Corollary 5. Let (u_\epsilon) be a bounded sequence in L^p(\Omega) , with 1 < p\leq\infty . There exists a function u_0({\boldsymbol{x}}, {\boldsymbol{y}}) in L^p(\Omega\times Y) such that, up to a subsequence, u_\epsilon\overset{2}{\longrightarrow} u_0({\boldsymbol{x}}, {\boldsymbol{y}}) , i.e., for any function \psi({\boldsymbol{x}}, {\boldsymbol{y}})\in\mathcal{D}(\Omega; C^\infty_\#(Y)) , we have

    \begin{equation} \lim\limits_{\epsilon\rightarrow0}\int_\Omega u_\epsilon({\boldsymbol{x}})\psi\!\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}} = \frac{1}{|Y|}\int_\Omega\int_Yu_0({\boldsymbol{x}},{\boldsymbol{y}})\psi({\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}. \end{equation} (B.11)

    See Corollary 1.15 in [37].
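    To illustrate Definition 1 and Corollary 5 numerically (a sketch of ours, with hypothetical choices of u_\epsilon and \phi ): the sequence u_\epsilon(x) = \cos(2\pi x/\epsilon) converges weakly to 0 in L^2(0, 1) , yet it two-scale converges to u_0(x, y) = \cos(2\pi y) , which an oscillating test function detects.

```python
# Numerical illustration (ours): u_eps(x) = cos(2*pi*x/eps) converges weakly
# to 0, but two-scale converges to u0(x, y) = cos(2*pi*y). Tested against
# phi(x, y) = x*(1 + cos(2*pi*y)); the two-scale prediction of the pairing is
# (1/|Y|) * int int cos(2*pi*y)*x*(1 + cos(2*pi*y)) dy dx = 1/4, whereas the
# weak limit 0 would predict 0.
import numpy as np

x = np.linspace(0.0, 1.0, 400001)
phi = lambda x, y: x * (1.0 + np.cos(2.0 * np.pi * y))

for eps in [0.1, 0.01, 0.001]:
    u_eps = np.cos(2.0 * np.pi * x / eps)
    val = np.trapz(u_eps * phi(x, x / eps), x)
    print(f"eps = {eps:5.3f}:  pairing = {val:.6f}   (two-scale prediction 0.25)")
```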

    Note that Propositions 1 and 2 are straightforwardly extended to Bochner spaces by applying the two-scale convergence notions of Remark 13 and Definition 2 instead of the notion from Definition 1.

    Theorem 9. Let (u_\epsilon) be a sequence in L^p(\Omega) for 1 < p < \infty , which two-scale converges to u_0\in L^p(\Omega\times Y) and assume that

    \begin{equation} \lim\limits_{\epsilon\rightarrow0}\|u_\epsilon\|_{L^p(\Omega)} = \|u_0\|_{L^p(\Omega\times Y)}. \end{equation} (B.12)

    Then, for any sequence (v_\epsilon) in L^q(\Omega) with \frac{1}{p}+\frac{1}{q} = 1 , which two-scale converges to v_0\in L^q(\Omega\times Y) , we have that

    \begin{equation} \int_\Omega u_\epsilon({\boldsymbol{x}})v_\epsilon({\boldsymbol{x}})\tau\!\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}}\rightarrow\int_\Omega\frac{1}{|Y|}\int_Y u_0({\boldsymbol{x}},{\boldsymbol{y}})v_0({\boldsymbol{x}},{\boldsymbol{y}})\tau({\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}, \end{equation} (B.13)

    for every \tau in \mathcal{D}(\Omega, C^\infty_\#(Y)) . Moreover, if the Y -periodic extension of u_0 belongs to L^p(\Omega; C_\#(Y)) , then

    \begin{equation} \lim\limits_{\epsilon\rightarrow0}\left\|u_\epsilon({\boldsymbol{x}})-u_0\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\right\|_{L^p(\Omega)} = 0. \end{equation} (B.14)

    See Theorem 18 in [36].

    These results generalize properties 3, 4 and 5 of the First Oscillation Lemma in such a way that the convergence applies to weak solutions, products and gradients, and they even guarantee that the convergence is strong for oscillating continuous functions.

    Hence, two-scale convergence is suitable for upscaling problems.



    [1] Rendell F, Jauberthie R, Grantham M (2002) Deteriorated Concrete, London: Thomas Telford Publishing, UK.
    [2] Sand W (2000) Microbial corrosion, In: Cahn, R.W., Haasen, P., Kramer, E.J. Editors, Materials Science and Technology: A Comprehensive Treatment, Chichester: Wiley-Vch, Vol. 1, Chap. 4.
    [3] During EDD (1997) Corrosion Atlas: A Collection of Illustrated Case Histories, 3rd Edition., Amsterdam: Elsevier.
    [4] Gu JD, Ford TE, Mitchell R (2011) Microbial corrosion of concrete, In: Revie, R.W. Editor, Uhlig's Corrosion Handbook, 3rd Edition., Hoboken, New Jersey: John Wiley & Sons, 451–460.
    [5] Trethewey KR, Chamberlain J (1995) Corrosion for Science & Engineering, 2nd Edition., Harlow: Longman Group, UK.
    [6] Elsener B (2000) Corrosion of steel in concrete, In: Cahn, R.W., Haasen, P., Kramer, E.J., Materials Science and Technology : A Comprehensive Treatment, Chichester: Wiley-Vch, Vol. 2, Chap. 8.
    [7] Verink Jr. ED (2011) Economics of corrosion, In: Revie, R.W. Editor, Uhlig's Corrosion Handbook, 3rd Edition., Hoboken, New Jersey: John Wiley & Sons, Chap. 3.
    [8] Ortiz M, Popov EP (1982) Plain concrete as a composite material, Mech Mater 1: 139–150.
    [9] Monteiro PJM (1996) Mechanical modelling of the transition zone, In: Maso, J.C. Editor, Interfacial Transition Zone in Concrete, 1st Edition., London: E & FN SPON, UK, Chap. 4. No. 11 in RILEM Report. State-of-the-Art Report prepared by RILEM Technical Committee 108-1CC, Interfaces in Cementitious Composites.
    [10] Taylor HFW (1997) Cement Chemistry, 2nd Edition., London: Thomas Telford Publishing, UK.
    [11] Böhm M, Devinny J, Jahani F, et al. (1998) On a moving-boundary system modeling corrosion in sewer pipes. Appl Math Comput 92: 247–269.
    [12] Clarelli F, Fasano A, Natalini R (2008) Mathematics and monument conservation: Free boundary models of marble sulfation. SIAM J Appl Math 69: 149–168. doi: 10.1137/070695125
    [13] Fusi L, Farina A, Primicerio M, et al. (2014) A free boundary problem for CaCO3 neutralization of acid waters. Nonlinear Anal Real World Appl 15: 42–50. doi: 10.1016/j.nonrwa.2013.05.004
    [14] Ern A, Giovangigli V (1998) The kinetic chemical equilibrium regime. Phys A 260: 49–72. doi: 10.1016/S0378-4371(98)00303-3
    [15] Giovangigli V (1999) Multicomponent Flow Modeling, Series of Modeling and Simulation in Science, Engineering and Technology, Springer Science+Business Media.
    [16] Cowin S (2013) Continuum Mechanics of Anisotropic Materials, Berlin: Springer.
    [17] Ciorănescu D, Saint Jean Paulin J (1998) Homogenization of Reticulated Structures, Series of Applied Mathematical Sciences, Springer-Verlag, Vol. 136.
    [18] Sanchez-Palencia E (1980) Non-Homogeneous Media and Vibration Theory, Series of Lecture Notes in Physics, Berlin: Springer, Vol. 127.
    [19] Vromans AJ, Muntean A, van de Ven AAF (2018) A mixture theory-based concrete corrosion model coupling chemical reactions, diffusion and mechanics. Pac J Math Ind 10: 1–21. doi: 10.1186/s40736-017-0035-2
    [20] Vromans AJ (2018) A pseudoparabolic reaction-diffusion-mechanics system: Modeling, analysis and simulation, Licentiate thesis, Karlstad University.
    [21] Vromans AJ, van de Ven AAF, Muntean A (2019) Parameter delimitation of the weak solvability for a pseudo-parabolic system coupling chemical reactions, diffusion and momentum equations. Adv Math Sci Appl 28: 273–311.
    [22] Vromans AJ, van de Ven AAF, Muntean A (2019) Periodic homogenization of a pseudo-parabolic equation via a spatial-temporal decomposition, In: Faragó, I., Izsák, F., Simon, P. Editors, Progress in Industrial Mathematics at ECMI 2018, ECMI 2018, Mathematics in Industry, Springer, In press.
    [23] Peszyńska M, Showalter R, Yi SY (2009) Homogenization of a pseudoparabolic system. Appl Anal 88: 1265–1282. doi: 10.1080/00036810903277077
    [24] Eden M (2018) Homogenization of thermoelasticity systems describing phase transformations, PhD thesis, Universität Bremen.
    [25] Reichelt S (2016) Error estimates for elliptic equations with not-exactly periodic coefficients. Adv Math Sci Appl 25: 117–131.
    [26] Muntean A, Chalupecký V (2011) Homogenization Method and Multiscale Modeling, No. 34 in COE Lecture Note, Institute of Mathematics for Industry, Kyushu University, Japan.
    [27] Marchenko VA, Khruslov EYa (2006) Homogenization of Partial Differential Equations, No. 46 in Progress in Mathematical Physics, Boston, Basel, Berlin: Birkhäuser. Translated from the original Russian by M. Goncharenko and D. Shepelsky.
    [28] Kufner A, John O, Fučik S (1977) Function Spaces, Leyden: Noordhoff International Publishing.
    [29] Meshkova YM, Suslina TA (2016) Homogenization of initial boundary value problems for parabolic systems with periodic coefficients. Appl Anal 95: 1736–1775. doi: 10.1080/00036811.2015.1068300
    [30] Dragomir SS (2003) Some Gronwall Type Inequalities and Applications, RGMIA Monographs, New York: Nova Science.
    [31] Rothe F (1984) Global Solutions of Reaction-Diffusion Systems, Berlin Heidelberg: Springer-Verlag.
    [32] Brezis H (2010) Functional Analysis, Sobolev Spaces and Partial Differential Equations, New York: Springer.
    [33] Ciorănescu D, Donato P (1999) An Introduction to Homogenization, No. 17 in Oxford Lecture Series in Mathematics and its Applications, Oxford University Press.
    [34] Gilbarg D, Trudinger N (1977) Elliptic Partial Differential Equations of Second Order, 1998 Edition., Berlin: Springer.
    [35] Nguetseng G (1989) A general convergence result for a functional related to the theory of homogenization. SIAM J Math Anal 20: 608–623. doi: 10.1137/0520043
    [36] Lukkassen D, Nguetseng G, Wall P (2002) Two-scale convergence. Int J Pure Appl Math 2: 35–86.
    [37] Allaire G (1992) Homogenization and two-scale convergence. SIAM J Math Anal 23: 1482–1518. doi: 10.1137/0523084
    [38] Pavliotis GA, Stuart AM (2008) Multiscale Methods-Averaging and Homogenization, No. 53 in Texts in Applied Mathematics, Berlin: Springer.