
Opinion formation in voting processes under bounded confidence

  • Received: 01 October 2018 Revised: 01 March 2019
  • 90B10, 91D30, 91B12, 34D20, 15B51

  • In recent years, opinion dynamics has received increasing attention, and various models have been introduced and evaluated mainly by simulation. In this study, we introduce a model inspired by the so-called "bounded confidence" approach, where voters engaged in an electoral decision with two options are influenced by individuals sharing an opinion similar to their own. This model allows one to capture salient features of the evolution of opinions and results in final clusters of voters. We provide a detailed study of the model, including a complete taxonomy of the equilibrium points and an analysis of their stability. The model highlights that the final electoral outcome depends on the level of interaction in the society, besides the initial opinion of each individual, so that a strongly interconnected society can reverse the electoral outcome as compared to a society with looser exchange.

    Citation: Sergei Yu. Pilyugin, M. C. Campi. Opinion formation in voting processes under bounded confidence[J]. Networks and Heterogeneous Media, 2019, 14(3): 617-632. doi: 10.3934/nhm.2019024




    Corrosion of concrete by acidic compounds is a problem for construction, as corrosion can lead to erosion and degradation of the structural integrity of concrete structures; see e.g., [1,2]. Structural failures and collapse as a result of concrete corrosion [3,4,5] are detrimental to society as they often impact crucial infrastructure, typically leading to high costs [6,7]. On a more positive note, these failures can be avoided with sufficiently smart monitoring and timely repairs based on a priori calculations of the maximal lifespan of the concrete. These calculations have to take into account the heterogeneous nature of the concrete [8], the physical properties of the concrete [9], the corrosion reaction [10], and the expansion/contraction behaviour of corroded concrete mixtures, see [11,12,13]. For example, the typical length scale of the concrete heterogeneities is much smaller than the typical length scale used in concrete construction [8]. Moreover, concrete corrosion has a characteristic time that is many orders of magnitude smaller than the expected lifespan of concrete structures [10]. Hence, it is computationally expensive to use the heterogeneity length scale for detailed simulations of concrete constructions such as bridges. However, by using averaging techniques to obtain effective properties on the typical length scale of concrete constructions, one can significantly decrease computational costs, potentially without losing accuracy.

    Tractable real-life problems usually involve a hierarchy of separated scales: from a microscale via intermediate scales to a macroscale. With averaging techniques one can obtain effective behaviours at a higher scale from the underlying lower scale. For example, Ern and Giovangigli used averaging techniques on statistical distributions in kinetic chemical equilibrium regimes to obtain continuous macroscopic equations for mixtures, see [14] or see Chapter 4 of [15] for a variety of effective macroscopic equations obtained with this averaging technique.

    Of course, the use of averaging techniques to obtain effective macroscopic equations in mixture theory is by itself not new, see Figure 7.2 in [16] for an early application from 1934. The main problem with averaging techniques is choosing the right averaging methodology for the problem at hand. In this respect, periodic homogenization can be regarded as a successful method, since it expresses conditions under which macroscale behaviour can be obtained in a natural way from microscale behaviour. Furthermore, the homogenization method has been successfully used to derive not only equations for capturing macroscale behaviours but also convergence/corrector speeds depending on the scale separation between the macroscale and the microscale.

    To obtain the macroscopic behaviour, we perform the homogenization by employing the concept of two-scale convergence. Moreover, we use formal asymptotic expansions to determine the speed of convergence via so-called corrector estimates. These estimates follow a procedure similar to those used by Cioranescu and Saint Jean-Paulin in Chapter 2 of [17]. Derivation via homogenization of constitutive laws, such as those arising from mixture theory, is a classical subject in homogenization, see [18]. Homogenization methods, upscaling, and corrector estimates are active research subjects due to the interdisciplinary nature of applying these mathematical techniques to real world problems and the complexities arising from the problem-specific constraints.

    The microscopic equations of our concrete corrosion model are conservation laws for mass and momentum for an incompressible mixture, see [19] and [20] for details. The existence of weak solutions of this model was shown in [21] and Chapter 2 of [20]. The parameter space dependence of the existence region for this model was explored numerically in [19]. The two-scale convergence for a subsystem of these microscopic equations, a pseudo-parabolic system, was shown in [22]. This paper handles the same pseudo-parabolic system as in [22] but posed on a perforated microscale domain.

    In [23], Peszyńska, Showalter and Yi investigated the upscaling of a pseudo-parabolic system via two-scale convergence using a natural decomposition that splits the spatial and temporal behaviour. They looked at several different scale separation cases: classical case, highly heterogeneous case (also known as high-contrast case), vanishing time-delay case and Richards equation of porous media. These cases were chosen to showcase the ease with which upscaling could be done via this natural decomposition.

    In this paper, we point out that this natural decomposition from [23] can also be applied to a pseudo-parabolic system with suitably scaled drift terms. Moreover, for such a pseudo-parabolic system with drift we determine the convergence speed via corrector estimates. This is in contrast with [23], where no convergence speed was derived for any pseudo-parabolic system they presented. Using this natural decomposition, the corrector estimates for the pseudo-parabolic equation follow straightforwardly from those of the spatially elliptic system with corrections due to the temporal first-order ordinary differential equation. Corrector estimates with convergence speeds have been obtained for the standard elliptic system, see [17], but also for coupled systems related to pseudo-parabolic equations such as the coupled elliptic-parabolic system with a mixed third order term describing thermoelasticity in [24]. The convergence speed we obtain in this paper coincides for bounded spatial domains with known results for both elliptic systems and pseudo-parabolic systems on bounded temporal domains, see [25]. Finally, we apply our results to a concrete corrosion model, which describes the mechanics of concrete corrosion at a microscopic level with a perforated periodic domain geometry. Even though this model is linear, the main difficulty lies in determining effective macroscopic models for the mechanics of concrete corrosion based on the known microscopic mechanics model with such a complicated domain geometry. Obtaining these effective macroscopic models is difficult as the microscopic behavior is highly oscillatory due to the complicated domain geometry, while the macroscopic models need to encapsulate this behavior with a much less volatile effective behavior on a simple domain geometry without perforations or periodicity.

    The remainder of this paper is divided into seven parts:

    Section 2: Notation and problem statement,

    Section 3: Main results,

    Section 4: Upscaling procedure,

    Section 5: Corrector estimates,

    Section 6: Application to a concrete corrosion model,

    Appendix A: Exact forms of coefficients in corrector estimates,

    Appendix B: Introduction to two-scale convergence.

    We introduce the description of the geometry of the medium in question with a variant of the construction found in [26]. Let $ (0, T) $, with $ T > 0 $, be a time-interval and $ \Omega\subset{\bf{R}}^d $ for $ d\in\{2, 3\} $ be a simply connected bounded domain with a $ C^2 $-boundary $ \partial\Omega $. Take $ Y\subset\Omega $ a simply connected bounded domain, or more precisely there exists a diffeomorphism $ \gamma:{\bf{R}}^d\rightarrow{\bf{R}}^d $ such that $ \text{Int}(\gamma([0, 1]^d)) = Y $.

    We perforate $ Y $ with a smooth open set $ \mathcal{T} = \gamma(\mathcal{T}_0) $ for a smooth open set $ \mathcal{T}_0\subset(0, 1)^d $ such that $ \overline{\mathcal{T}}\subset\overline{Y} $ with a $ C^2 $-boundary $ \partial \mathcal{T} $ that does not intersect the boundary of $ Y $, $ \partial\mathcal{T}\cap\partial Y = \emptyset $, and introduce $ Y^* = Y\backslash \overline{\mathcal{T}} $. Remark that $ \partial \mathcal{T} $ is assumed to be $ C^2 $-regular.

    Let $ G_0 $ be a lattice* of the translation group $ \mathcal{T}_d $ on $ {\bf{R}}^d $ such that $ [0, 1]^d = \mathcal{T}_d/G_0 $. Hence, we have the following properties: $ \bigcup_{g\in G_0}g([0, 1]^d) = {\bf{R}}^d $ and $ (0, 1)^d\cap g((0, 1)^d) = \emptyset $ for all $ g\in G_0 $ not the identity-mapping. Moreover, we demand that the diffeomorphism $ \gamma $ allows $ G_\gamma: = \gamma\circ G_0\circ\gamma^{-1} $ to be a discrete subgroup of $ \mathcal{T}_d $ with $ \overline{Y} = \mathcal{T}_d/G_\gamma $.

    * A lattice of a locally compact group $ \mathbb{G} $ is a discrete subgroup $ \mathbb{H} $ with the property that the quotient space $ \mathbb{G}/\mathbb{H} $ has a finite invariant (under $ \mathbb{G} $) measure. A discrete subgroup $ \mathbb{H} $ of $ \mathbb{G} $ is a group $ \mathbb{H}\subsetneq\mathbb{G} $ under group operations of $ \mathbb{G} $ such that there is (an open cover) a collection $ \mathbb{C} $ of open sets $ C\subsetneq\mathbb{G} $ satisfying $ \mathbb{H}\subset \cup_{C\in\mathbb{C}}C $ and for all $ C\in\mathbb{C} $ there is a unique element $ h\in\mathbb{H} $ such that $ h\in C $.

    Assume that there exists a sequence $ (\epsilon_h)_h\subset(0, \epsilon_0) $ such that $ \epsilon_h\rightarrow0 $ as $ h\rightarrow\infty $ (we omit the subscript $ h $ when it is obvious from context that this sequence is mentioned). Moreover, we assume that for all $ \epsilon_h\in(0, \epsilon_0) $ there is a set $ G_\gamma^{\epsilon_h} = \{\epsilon_h g \text{ for }g\in G_\gamma\} $ with which we introduce $ \mathcal{T}^{\epsilon_h} = \Omega\cap G_\gamma^{\epsilon_h}(\mathcal{T}) $, the set of all holes and parts of holes inside $ \Omega $. Hence, we can define the domain $ \Omega^{\epsilon_h} = \Omega\backslash\mathcal{T}^{\epsilon_h} $ and we demand that $ \Omega^{\epsilon_h} $ is connected for all $ \epsilon_h\in(0, \epsilon_0) $. We introduce for all $ \epsilon_h\in(0, \epsilon_0) $ the boundaries $ \partial_{int}\Omega^{\epsilon_h} $ and $ \partial_{ext}\Omega^{\epsilon_h} $ as $ \partial_{int}\Omega^{\epsilon_h} = \bigcup_{g\in G_\gamma^{\epsilon_h}}\{\partial g(\overline{\mathcal{T}})\mid g(\overline{\mathcal{T}})\subset\Omega\} $ and $ \partial_{ext}\Omega^{\epsilon_h} = \partial\Omega^{\epsilon_h}\backslash \partial_{int}\Omega^{\epsilon_h} $. The first boundary contains all the boundaries of the holes fully contained in $ \Omega $, while the second contains the remaining boundaries of the perforated region $ \Omega $. In Figure 1 a schematic representation of the domain components is shown.

    Figure 1.  A domain $ \Omega $ with the thick black boundary $ \partial\Omega $ on an $ \epsilon $-sized periodic grid with grid cells $ Y $, each containing a white circular perforation $ \mathcal{T} $ and the blue bulk $ Y^* $; this yields the light-red coloured domain $ \Omega^\epsilon $ with the thick red and black boundary $ \partial\Omega^\epsilon_{ext} $ and the green internal perforation boundaries $ \partial\Omega^\epsilon_{int} $. The thick red boundary parts of the perforations are locations where a choice has to be made between the boundary condition of the perforation edges and the boundary condition of $ \partial\Omega $.

    Note that $ \mathcal{T} $ does not depend on $ \epsilon $, since an $ \epsilon $-dependent perforation could give rise to unwanted complicating effects such as those treated in [27].
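    To make the construction concrete, here is a minimal instance (an illustration under the assumption $ \gamma = \mathrm{id} $, i.e., the standard cubic periodic setting, which is not a restriction made in this paper): then $ Y = (0, 1)^d $, $ G_\gamma = G_0 = \{{\boldsymbol{x}}\mapsto{\boldsymbol{x}}+{\boldsymbol{k}}\mid{\boldsymbol{k}}\in{\bf{Z}}^d\} $, and

    $ \mathcal{T}^{\epsilon_h} = \Omega\cap\bigcup\limits_{{\boldsymbol{k}}\in{\bf{Z}}^d}\epsilon_h({\boldsymbol{k}}+\mathcal{T}),\qquad \Omega^{\epsilon_h} = \Omega\backslash\mathcal{T}^{\epsilon_h}, $

    so $ \Omega^{\epsilon_h} $ is $ \Omega $ perforated by $ \epsilon_h $-scaled copies of $ \mathcal{T} $ placed on the lattice $ \epsilon_h{\bf{Z}}^d $.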

    Having the domains specified, we focus on defining the needed function spaces. We start by introducing $ C_\#(Y) $, the space of continuous functions defined on $ Y $ and periodic with respect to $ Y $ under $ G_\gamma $. To be precise:

    $ C_\#(Y) = \{f\in C({\bf{R}}^d)\;|\;f\circ g = f\text{ for all }g\in G_\gamma\}. $ (2.1)

    Hence, the property "$ Y $-periodic" means "invariant under $ G_\gamma $" for functions defined on $ Y $. Similarly the property "$ Y^* $-periodic" means "invariant under $ G_\gamma $" for functions defined on $ Y^* $.

    With $ C_\#(Y) $ at hand, we construct Bochner spaces like $ L^p(\Omega; C_\#(Y)) $ for $ p\geq1 $ integer. For a detailed explanation of Bochner spaces, see Section 2.19 of [28]. These types of Bochner spaces exhibit properties that hint at two-scale convergence, as is defined in Section B. Similar function spaces are constructed for $ Y^* $ in an analogous way.

    Introduce the space

    $ \mathbb{V}_\epsilon = \{v\in H^1(\Omega^\epsilon)\mid v = 0\text{ on }\partial_{ext}\Omega^\epsilon\} $ (2.2)

    equipped with the seminorm

    $ \|v\|_{\mathbb{V}_\epsilon} = \|\nabla v\|_{L^2(\Omega^\epsilon)^d}. $ (2.3)

    Remark 1. The seminorm in (2.3) is equivalent to the usual $ \mathcal{H}^1 $-norm by the Poincaré inequality, see Lemma 2.1 on page 14 of [17]. Moreover, this equivalence of norms is uniform in $ \epsilon $.
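    In a sketch, the argument behind Remark 1 is the following: the Poincaré inequality on $ \Omega^\epsilon $ provides a constant $ C_\Omega $, which by Lemma 2.1 of [17] can be chosen independently of $ \epsilon $, such that for all $ v\in\mathbb{V}_\epsilon $

    $ \|v\|_{L^2(\Omega^\epsilon)}\leq C_\Omega\|\nabla v\|_{L^2(\Omega^\epsilon)^d},\qquad\text{hence}\qquad \|v\|_{\mathbb{V}_\epsilon}\leq\|v\|_{H^1(\Omega^\epsilon)}\leq\left(1+C_\Omega^2\right)^{\frac{1}{2}}\|v\|_{\mathbb{V}_\epsilon}. $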

    For correct use of function spaces over $ Y $ and $ Y^* $, we need an embedding result, which is based on an extension operator. The following theorem and corollary are Theorem 2.10 and Corollary 2.11 in Chapter 2 of [17].

    Theorem 1. Suppose that the domain $ \Omega^\epsilon $ is such that $ \mathcal{T}\subset Y $ is a smooth open set with a $ C^2 $-boundary that does not intersect the boundary of $ Y $ and such that the boundary of $ \mathcal{T}^\epsilon $ does not intersect the boundary of $ \Omega $. Then there exists an extension operator $ \mathcal{P}^\epsilon $ and a constant $ C $ independent of $ \epsilon $ such that

    $ \mathcal{P}^\epsilon\in\mathcal{L}(L^2(\Omega^\epsilon);L^2(\Omega))\cap\mathcal{L}(\mathbb{V}_\epsilon;H^1_0(\Omega)), $ (2.4)

    and for any $ v\in\mathbb{V}_\epsilon $, we have the bounds

    $ \|\mathcal{P}^\epsilon v\|_{L^2(\Omega)}\leq C\|v\|_{L^2(\Omega^\epsilon)},\qquad \|\nabla\mathcal{P}^\epsilon v\|_{L^2(\Omega)^d}\leq C\|\nabla v\|_{L^2(\Omega^\epsilon)^d}. $ (2.5)

    Corollary 1. There exists a constant $ C $ independent of $ \epsilon $ such that for all $ v\in\mathbb{V}_\epsilon $

    $ \|\mathcal{P}^\epsilon v\|_{H^1_0(\Omega)}\leq C\|v\|_{\mathbb{V}_\epsilon}. $ (2.6)

    Introduce the notation $ \hat{\cdot} $, a hat symbol, to denote extension via the extension operator $ \mathcal{P}^\epsilon $.

    The notation $ \nabla = (\frac{\mathrm{d}}{\mathrm{d}x_1}, \ldots, \frac{\mathrm{d}}{\mathrm{d}x_d}) $ denotes the vectorial total derivative with respect to the components of $ {\boldsymbol{x}} = (x_1, \ldots, x_d)^\top $ for functions depending on both $ {\boldsymbol{x}} $ and $ {\boldsymbol{x}}/\epsilon $. Spatial vectors have $ d $ components, while variable vectors have $ N $ components. Tensors have $ d^iN^j $ components for $ i $, $ j $ nonnegative integers. Furthermore, the notation

    $ c^\epsilon(t,{\boldsymbol{x}}) = c(t,{\boldsymbol{x}},{\boldsymbol{x}}/\epsilon) $ (2.7)

    is used for the $ \epsilon $-independent functions $ c(t, {\boldsymbol{x}}, {\boldsymbol{y}}) $ in assumption (A1) further on. Moreover, the spatial inner product is denoted with $ \cdot $, while the variable inner product is just seen as a product or operator acting on a variable vector or tensor.

    Let $ T > 0 $. We consider the following Neumann problem for unknown functions $ V^\epsilon_\alpha $, $ U_\alpha^\epsilon $ with $ \alpha\in\{1, \ldots, N\} $ posed on $ (0, T)\times\Omega^\epsilon: $

    $ (\mathcal{A}^\epsilon{\boldsymbol{V}}^\epsilon)_\alpha := \sum\limits_{\beta = 1}^N M^\epsilon_{\alpha\beta}V^\epsilon_\beta-\sum\limits_{i,j = 1}^d\frac{\mathrm{d}}{\mathrm{d}x_i}\left(E^\epsilon_{ij}\frac{\mathrm{d}V^\epsilon_\alpha}{\mathrm{d}x_j}+\sum\limits_{\beta = 1}^N D^\epsilon_{i\alpha\beta}V^\epsilon_\beta\right) = H^\epsilon_\alpha+\sum\limits_{\beta = 1}^N\left(K^\epsilon_{\alpha\beta}U^\epsilon_\beta+\sum\limits_{i = 1}^d\tilde{J}^\epsilon_{i\alpha\beta}\frac{\mathrm{d}U^\epsilon_\beta}{\mathrm{d}x_i}\right) = :(\mathcal{H}^\epsilon{\boldsymbol{U}}^\epsilon)_\alpha, $ (2.8a)
    $ (\mathcal{L}{\boldsymbol{U}}^\epsilon)_\alpha := \frac{\partial U^\epsilon_\alpha}{\partial t}+\sum\limits_{\beta = 1}^N L_{\alpha\beta}U^\epsilon_\beta = \sum\limits_{\beta = 1}^N G_{\alpha\beta}V^\epsilon_\beta, $ (2.8b)

    with the boundary conditions

    $ U^\epsilon_\alpha = U^*_\alpha\text{ in }\{0\}\times\Omega^\epsilon, $ (2.9a)
    $ V^\epsilon_\alpha = 0\text{ on }(0,T)\times\partial_{ext}\Omega^\epsilon, $ (2.9b)
    $ \frac{\mathrm{d}V^\epsilon_\alpha}{\mathrm{d}\nu_{\mathit{D}^\epsilon}} := \sum\limits_{i = 1}^d\left(\sum\limits_{j = 1}^d E^\epsilon_{ij}\frac{\mathrm{d}V^\epsilon_\alpha}{\mathrm{d}x_j}+\sum\limits_{\beta = 1}^N D^\epsilon_{i\alpha\beta}V^\epsilon_\beta\right)n^\epsilon_i = 0\text{ on }(0,T)\times\partial_{int}\Omega^\epsilon, $ (2.9c)

    for $ \alpha\in\{1, \ldots, N\} $ or, in short-hand notation, this reads:

    $ \left\{\begin{array}{ll} \mathcal{A}^\epsilon{\boldsymbol{V}}^\epsilon := \mathit{M}^\epsilon{\boldsymbol{V}}^\epsilon-\nabla\cdot\left(\mathit{E}^\epsilon\cdot\nabla{\boldsymbol{V}}^\epsilon+\mathit{D}^\epsilon{\boldsymbol{V}}^\epsilon\right) = {\boldsymbol{H}}^\epsilon+\mathit{K}^\epsilon{\boldsymbol{U}}^\epsilon+\tilde{\mathit{J}}^\epsilon\cdot\nabla{\boldsymbol{U}}^\epsilon = :\mathcal{H}^\epsilon{\boldsymbol{U}}^\epsilon&\text{ in }(0,T)\times\Omega^\epsilon,\\ \mathcal{L}{\boldsymbol{U}}^\epsilon := \frac{\partial{\boldsymbol{U}}^\epsilon}{\partial t}+\mathit{L}{\boldsymbol{U}}^\epsilon = \mathit{G}{\boldsymbol{V}}^\epsilon&\text{ in }(0,T)\times\Omega^\epsilon,\\ {\boldsymbol{U}}^\epsilon = {\boldsymbol{U}}^*&\text{ in }\{0\}\times\Omega^\epsilon,\\ {\boldsymbol{V}}^\epsilon = {\boldsymbol{0}}&\text{ on }(0,T)\times\partial_{ext}\Omega^\epsilon,\\ \frac{\mathrm{d}{\boldsymbol{V}}^\epsilon}{\mathrm{d}\nu_{\mathit{D}^\epsilon}} = \left(\mathit{E}^\epsilon\cdot\nabla{\boldsymbol{V}}^\epsilon+\mathit{D}^\epsilon{\boldsymbol{V}}^\epsilon\right)\cdot{\boldsymbol{n}}^\epsilon = {\boldsymbol{0}}&\text{ on }(0,T)\times\partial_{int}\Omega^\epsilon. \end{array}\right. $ (2.10)
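    The pseudo-parabolic structure of (2.10) can be made visible in a simplified special case (an illustration, not a case singled out in this paper): take $ N = 1 $, $ \mathit{D}^\epsilon = \mathit{0} $, $ \tilde{\mathit{J}}^\epsilon = \mathit{0} $ and constants $ M > 0 $, $ K $, $ L $, $ G $. Solving the elliptic equation for $ V^\epsilon $ and inserting the result into the evolution equation gives

    $ \frac{\partial U^\epsilon}{\partial t}+L\,U^\epsilon = G\left(M-\nabla\cdot\mathit{E}^\epsilon\cdot\nabla\right)^{-1}\left(H^\epsilon+K\,U^\epsilon\right), $

    where the inverse is taken subject to the boundary conditions in (2.10); the time derivative of $ U^\epsilon $ is thus coupled to $ U^\epsilon $ through the inverse of an elliptic operator, which is the hallmark of pseudo-parabolic equations.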

    Consider the following technical requirements for the coefficients arising in the Neumann problem (2.8a)–(2.9c).

    (A1) For all $ \alpha, \beta\in\{1, \ldots, N\} $ and for all $ i, j\in\{1, \ldots, d\} $, we assume:

    $ \begin{array}{l} M_{\alpha\beta},\,H_\alpha,\,K_{\alpha\beta},\,J_{i\alpha\beta}\in L^\infty({\bf{R}}_+;W^{2,\infty}(\Omega;C^2_\#(Y))),\\ E_{ij},\,D_{i\alpha\beta}\in L^\infty({\bf{R}}_+;W^{3,\infty}(\Omega;C^3_\#(Y))),\\ L_{\alpha\beta},\,G_{\alpha\beta}\in L^\infty({\bf{R}}_+;W^{4,\infty}(\Omega)),\\ U^*_\alpha\in W^{4,\infty}(\Omega), \end{array} $ (2.11)

    with $ \tilde{\mathit{J}}^\epsilon = \epsilon \mathit{J}^\epsilon $; see Remark 2 further on.

    (A2) The tensors $ \mathit{M} $ and $ \mathit{E} $ have a linear sum decomposition with a skew-symmetric matrix and a diagonal matrix with the diagonal elements of $ \mathit{M} $ and $ \mathit{E} $ denoted by $ M_\alpha, E_i\in L^\infty({\bf{R}}_+\times\Omega; C_\#(Y^*)) $, respectively, satisfying $ M_\alpha > 0 $, $ E_i > 0 $ and $ 1/M_\alpha, 1/E_i\in L^\infty({\bf{R}}_+\times\Omega\times Y^*) $.

    For real symmetric matrices $ \mathit{M} $ and $ \mathit{E} $, the finite dimensional version of the spectral theorem states that they are diagonalizable by orthogonal matrices. Since $ \mathit{M} $ acts on the variable space $ {\bf{R}}^N $, while $ \mathit{E} $ acts on the spatial space $ {\bf{R}}^d $, one can simultaneously diagonalize both real symmetric matrices. For general real matrices $ \mathit{M} $ and $ \mathit{E} $ the linear sum decomposition in symmetric and skew-symmetric matrices allows for a diagonalization of the symmetric part. The orthogonal matrix transformations necessary to diagonalize the symmetric part do not modify the regularity of the domain $ \Omega $, of the perforated periodic cell $ Y^* $ or of the coefficients of $ \mathit{D} $, $ {\boldsymbol{H}} $, $ \mathit{K} $, $ \mathit{J} $, $ \mathit{L} $, or $ \mathit{G} $. Hence, we are allowed to assume a linear sum decomposition of $ \mathit{M} $ and $ \mathit{E} $ into a diagonal and a skew-symmetric matrix.

    (A3) The inequality

    $ \|D_{i\beta\alpha}\|^2_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))} < \frac{4m_\alpha e_i}{dN^2} $ (2.12)

    holds with

    $ \frac{1}{m_\alpha} = \left\|\frac{1}{M_\alpha}\right\|_{L^\infty({\bf{R}}_+\times\Omega\times Y^*)}\quad\text{ and }\quad\frac{1}{e_i} = \left\|\frac{1}{E_i}\right\|_{L^\infty({\bf{R}}_+\times\Omega\times Y^*)} $ (2.13)

    for all $ \alpha, \beta\in\{1, \ldots, N\} $, for all $ i\in\{1, \ldots, d\} $, and for all $ \epsilon\in(0, \epsilon_0) $.

    (A4) The perforation holes do not intersect the boundary of $ \Omega $:

    $ \mathcal{T}^\epsilon\cap\partial\Omega = \emptyset\quad\text{ for a given sequence }\epsilon\in(0,\epsilon_0). $

    Remark 2. The dependence $ \tilde{\mathit{J}}^\epsilon = \epsilon\mathit{J}^\epsilon $ was chosen to simplify both existence and uniqueness results and arguments for bounding certain terms. The case $ \tilde{\mathit{J}}^\epsilon = \mathit{J}^\epsilon $ can be treated with the proofs outlined in this paper if additional cell functions are introduced and special inequalities similar to the Poincaré-Wirtinger inequality are used. See (4.32) onward in Section 4 for the introduction of cell functions.

    Remark 3. Satisfying inequality (2.12) implies that the same inequality is satisfied for the $ Y^* $-averaged functions $ \overline{D_{i\beta\alpha}^\epsilon} $, $ \overline{M_{\beta\alpha}^\epsilon} $, and $ \overline{\mathit{E}^\epsilon_{ij}} $ in $ L^\infty({\bf{R}}_+\times\Omega) $, where we used the following notion of $ Y^* $-averaged functions

    $ \overline{f}(t,{\boldsymbol{x}}) = \frac{1}{|Y|}\int_{Y^*}f(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}. $ (2.14)

    Remark 4. Assumption (A4) implies the following identities for the given sequence $ \epsilon\in(0, \epsilon_0) $:

    $ \partial_{int}\Omega^\epsilon = \partial\mathcal{T}^\epsilon\cap\Omega,\qquad \partial_{ext}\Omega^\epsilon = \partial\Omega. $ (2.15)

    Without (A4), perforations would intersect $ \partial\Omega $. One must then decide which parts of the boundary of the intersected cell $ Y^* $ satisfy which boundary condition: (2.9b) or (2.9c). This leads to non-trivial situations that ultimately affect the corrector estimates in non-trivial ways.

    Theorem 2. Under assumptions (A1)–(A4), there exists a solution pair $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) \in H^1((0, T)\times\Omega^\epsilon)^N\times L^\infty((0, T); \mathbb{V}_\epsilon\cap H^2(\Omega^\epsilon))^N $ satisfying the Neumann problem (2.8a)–(2.9c).

    Proof. For $ \mathit{K}^\epsilon = \mathit{M}^\epsilon\mathit{G}^{-1}\mathit{L} $, $ \mathit{J}^\epsilon = \mathit{0} $ and $ d = 1 $ the result follows by Theorem 1 in [21]. For non-perforated domains the result follows by either Theorem 1 in [22] or Theorem 7 in Chapter 4 of [20].

    For perforated domains, the result follows similarly. An outline of the proof is as follows. First, time-discretization with fixed spacing $ \Delta t $ is applied to the Neumann problem (2.8a)–(2.9c) such that $ \mathcal{A}^\epsilon{\boldsymbol{V}}^\epsilon $ at $ t = k\Delta t $ equals $ \mathcal{H}^\epsilon {\boldsymbol{U}}^\epsilon $ at $ t = (k-1)\Delta t $ and $ \mathcal{L}{\boldsymbol{U}}^\epsilon $ at $ t = k\Delta t $ equals $ \mathit{G}{\boldsymbol{V}}^\epsilon $ at $ t = (k-1)\Delta t $. This is an application of the Rothe method. Under assumptions (A1)–(A4), testing $ \mathcal{A}^\epsilon{\boldsymbol{V}}^\epsilon $ with a function $ \phi $ yields a continuous and coercive bilinear form on $ H^1(\Omega^\epsilon)^N $, while testing $ \mathcal{L}{\boldsymbol{U}}^\epsilon $ with a function $ \psi $ yields a continuous and coercive bilinear form on $ L^2(\Omega^\epsilon)^N $. Hence, Lax-Milgram leads to the existence of a solution at each time slice $ t = k\Delta t $.

    Choosing the right functions for $ \phi $ and $ \psi $ and using a discrete version of Gronwall's inequality, we obtain upper bounds on $ {\boldsymbol{U}}^\epsilon $ and $ {\boldsymbol{V}}^\epsilon $ independent of $ \Delta t $. Linearly interpolating between the time slices, we find that these $ \Delta t $-independent bounds guarantee the existence of continuous weak limits. Due to sufficient regularity, we even obtain strong convergence and existence of boundary traces. Then the continuous weak limits are actually weak solutions of our Neumann problem (2.8a)–(2.9c). The uniqueness follows by the linearity of our Neumann problem (2.8a)–(2.9c).
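    The time-discretization underlying this proof can be illustrated with a minimal numerical sketch (not the authors' code): one species ($ N = 1 $), one spatial dimension, no drift ($ \mathit{J} = \mathit{0} $), constant coefficients and homogeneous Dirichlet conditions for $ V $; all coefficient values, the grid and the time step below are illustrative assumptions. At each time slice the elliptic problem is solved with data from the previous slice, after which the ordinary differential equation for $ U $ is advanced.

    import numpy as np

    nx, dx = 101, 1.0 / 100      # spatial grid on (0, 1)
    nt, dt = 200, 0.01           # number of time slices and step size (Rothe spacing)
    E, M, H, K, L, G = 1.0, 1.0, 0.5, 1.0, 0.2, 1.0   # illustrative constant coefficients

    x = np.linspace(0.0, 1.0, nx)
    U = np.sin(np.pi * x)        # illustrative initial datum U*

    # Assemble the operator M*v - d/dx(E dv/dx) with finite differences and
    # Dirichlet rows, mimicking V = 0 on the exterior boundary.
    A = np.zeros((nx, nx))
    for i in range(1, nx - 1):
        A[i, i - 1] = -E / dx**2
        A[i, i] = M + 2.0 * E / dx**2
        A[i, i + 1] = -E / dx**2
    A[0, 0] = A[-1, -1] = 1.0

    for k in range(nt):
        rhs = H + K * U          # right-hand side H + K*U taken from the previous slice
        rhs[0] = rhs[-1] = 0.0   # enforce the Dirichlet boundary values
        V = np.linalg.solve(A, rhs)              # elliptic solve at the current slice
        U = (U + dt * G * V) / (1.0 + dt * L)    # implicit Euler for dU/dt + L*U = G*V

    print("max |V| =", np.abs(V).max(), " max |U| =", np.abs(U).max())

    In the full proof the elliptic solve is replaced by the Lax-Milgram argument on $ H^1(\Omega^\epsilon)^N $ and the uniform-in-$ \Delta t $ bounds come from the discrete Gronwall inequality; the sketch only mirrors the structure of the iteration.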

    Two special length scales are involved in the Neumann problem (2.8a)–(2.9c): The variable $ {\boldsymbol{x}} $ is the "macroscopic" scale, while $ {\boldsymbol{x}}/\epsilon $ represents the "microscopic" scale. This leads to a double dependence of parameter functions (and, hence, of the solutions to the model equations), on both the macroscale and the microscale. For example, if $ {\boldsymbol{x}}\in\Omega^\epsilon $, by the definition of $ \Omega^\epsilon $, there exists $ g\in G_\gamma $ such that $ {\boldsymbol{x}}/\epsilon = g({\boldsymbol{y}}) $ with $ {\boldsymbol{y}}\in Y^* $. This suggests that we look for a formal asymptotic expansion of the form

    $ {\boldsymbol{V}}^\epsilon(t,{\boldsymbol{x}}) = {\boldsymbol{V}}^0\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon{\boldsymbol{V}}^1\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon^2{\boldsymbol{V}}^2\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\cdots, $ (3.1a)
    $ {\boldsymbol{U}}^\epsilon(t,{\boldsymbol{x}}) = {\boldsymbol{U}}^0\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon{\boldsymbol{U}}^1\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon^2{\boldsymbol{U}}^2\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\cdots $ (3.1b)

    with $ {\boldsymbol{V}}^j(t, {\boldsymbol{x}}, {\boldsymbol{y}}) $, $ {\boldsymbol{U}}^j(t, {\boldsymbol{x}}, {\boldsymbol{y}}) $ defined for $ t\in{\bf{R}}_+ $, $ {\boldsymbol{x}}\in\Omega^\epsilon $ and $ {\boldsymbol{y}}\in Y^* $ and $ Y^* $-periodic (i.e., $ {\boldsymbol{V}}^j $, $ {\boldsymbol{U}}^j $ are periodic with respect to $ G_\gamma^{\epsilon} $).

    Theorem 3. Let assumptions (A1)–(A4) hold. For all $ T\in{\bf{R}}_+ $ there exists a unique pair $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon)\in H^1((0, T)\times\Omega^\epsilon)^N\times L^\infty((0, T); \mathbb{V}_\epsilon)^N $ satisfying the Neumann problem (2.8a)–(2.9c). Moreover, for $ \epsilon\downarrow0 $

    $ \hat{{\boldsymbol{U}}}^\epsilon\overset{2}{\longrightarrow}{\boldsymbol{U}}^0\quad\text{in}\;H^1(0,T;L^2(\Omega\times Y))^N\quad\text{and} $ (3.2a)
    $ \hat{{\boldsymbol{V}}}^\epsilon\overset{2}{\longrightarrow}{\boldsymbol{V}}^0\quad\text{in}\;L^\infty(0,T;L^2(\Omega\times Y))^N. $ (3.2b)

    This implies

    $ \hat{{\boldsymbol{U}}}^\epsilon\rightharpoonup\frac{1}{|Y|}\int_{Y^*}{\boldsymbol{U}}^0(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\quad\text{in}\;H^1((0,T)\times\Omega)^N\quad\text{and} $ (3.3a)
    $ \hat{{\boldsymbol{V}}}^\epsilon\rightharpoonup\frac{1}{|Y|}\int_{Y^*}{\boldsymbol{V}}^0(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\quad\text{in}\;L^\infty((0,T);H^1_0(\Omega))^N $ (3.3b)

    for $ \epsilon\downarrow0 $.

    Proof. See Section 4 for the full details and [22] for a short proof of the two-scale convergence for a non-perforated setting.

    Additionally, we are interested in deriving the speed of convergence of the formal asymptotic expansion. Boundary effects are expected to occur due to intersection of the external boundary with the perforated periodic cells. Hence, a cut-off function is introduced to remove this part from the analysis.

    Let $ M_\epsilon $ be the cut-off function defined by

    $ \left\{\begin{array}{l} M_\epsilon\in\mathcal{D}(\Omega)\\ M_\epsilon = 0\;\text{ if }\mathrm{dist}({\boldsymbol{x}},\partial\Omega)\leq\epsilon\,\mathrm{diam}(Y)\\ M_\epsilon = 1\;\text{ if }\mathrm{dist}({\boldsymbol{x}},\partial\Omega)\geq2\epsilon\,\mathrm{diam}(Y)\\ \epsilon\left|\frac{\mathrm{d}M_\epsilon}{\mathrm{d}x_i}\right|\leq C\quad i\in\{1,\ldots,d\} \end{array}\right. $ (3.4)

    We refer to

    $ {\boldsymbol{\Phi}}^\epsilon = {\boldsymbol{V}}^\epsilon-{\boldsymbol{V}}^0-M_\epsilon\left(\epsilon{\boldsymbol{V}}^1+\epsilon^2{\boldsymbol{V}}^2\right), $ (3.5a)
    $ {\boldsymbol{\Psi}}^\epsilon = {\boldsymbol{U}}^\epsilon-{\boldsymbol{U}}^0-M_\epsilon\left(\epsilon{\boldsymbol{U}}^1+\epsilon^2{\boldsymbol{U}}^2\right) $ (3.5b)

    as error functions. Now, we are able to state our convergence speed result.

    Theorem 4. Let assumptions (A1)–(A4) hold. There exist constants $ l\geq0 $, $ \kappa\geq0 $, $ \tilde{\kappa}\geq0 $, $ \lambda\geq0 $ and $ \mu\geq0 $ such that

    $ \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)\leq C(\epsilon,t), $ (3.6a)
    $ \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)\leq C(\epsilon,t)\,t_le^{lt} $ (3.6b)

    with

    $ C(\epsilon,t) = C\left(\epsilon^{\frac{1}{2}}+\epsilon^{\frac{3}{2}}\right)\left[1+\epsilon^{\frac{1}{2}}\left(1+\tilde{\kappa}e^{\lambda t}\right)\left(1+\kappa\left(1+t_le^{lt}\right)\right)\right]\exp\left(\mu t_le^{lt}\right) $ (3.7)

    where $ C $ is a constant independent of $ \epsilon $ and $ t $, and $ t_l = \min\{1/l, t\} $.

    Remark 5. The upper bounds in (3.6a) and (3.6b) are $ \mathcal{O}(\epsilon^{\frac{1}{2}}) $ for $ \epsilon $-independent finite time intervals. We call this type of bounds corrector estimates.
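    As a sketch of this observation: on a fixed time interval $ [0, T] $ with $ T $ independent of $ \epsilon $, every $ t $-dependent factor in (3.7) is bounded by a constant $ C_T $ (since $ t_le^{lt}\leq Te^{lT} $), so

    $ C(\epsilon,t)\leq CC_T\left(\epsilon^{\frac{1}{2}}+\epsilon^{\frac{3}{2}}\right)\leq\tilde{C}_T\,\epsilon^{\frac{1}{2}}\quad\text{for all }t\in[0,T]\text{ and }\epsilon\in(0,\epsilon_0), $

    which is exactly the $ \mathcal{O}(\epsilon^{\frac{1}{2}}) $ behaviour of the corrector estimates.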

    The corrector estimate of $ {\boldsymbol{\Phi}}^\epsilon $ in Theorem 4 becomes that of the classic linear elliptic system for $ \mathit{K} = \mathit{0} $ and $ \mathit{J} = 0 $. This is because $ \mathit{K} = \mathit{0} $ and $ \mathit{J} = 0 $ imply $ \tilde{\kappa} = \kappa = \mu = 0 $, see A. See [17] for the classical approach to corrector estimates of elliptic systems in perforated domains and [29] for a spectral approach in non-perforated domains.

    Corollary 2. Under the assumptions of Theorem 4,

    $ \|\hat{{\boldsymbol{V}}}^\epsilon-{\boldsymbol{V}}^0\|_{H^1_0(\Omega)^N}(t)\leq C(\epsilon,t), $ (3.8a)
    $ \|\hat{{\boldsymbol{U}}}^\epsilon-{\boldsymbol{U}}^0\|_{H^1(\Omega)^N}(t)\leq C(\epsilon,t)\,t_le^{lt} $ (3.8b)

    hold, where $ C $ is a constant independent of $ \epsilon $ and $ t $.

    According to Remark 5, $ \epsilon $-independent finite time intervals yield $ \mathcal{O}(\epsilon^{\frac{1}{2}}) $ corrector estimates. Is it, then, possible to have a converging corrector estimate for diverging time intervals in the limit $ \epsilon\downarrow0 $? The next theorem answers this question positively.

    Theorem 5. If $ l > 0 $, we introduce the rescaled time $ \tau\ln\left(\frac{1}{\epsilon}\right) = \exp(lt)\geq1 $ and $ q\in(0, \frac{1}{2}) $ independent of both $ \epsilon $ and $ t $ satisfying $ 0 < \mu\tau/l < \frac{1}{2}-q $. Then, for $ 0 < \epsilon < \exp(-\frac{2\mu}{(1-2q)l}) $, we have the corrector bounds

    $ \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t) = \mathcal{O}\left(\epsilon^{\frac{1}{2}-\frac{\mu\tau}{l}}\right) = o(1) = \omega\left(\epsilon^{\frac{1}{2}}\right), $ (3.9a)
    $ \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t) = \mathcal{O}\left(\epsilon^{\frac{1}{2}-\frac{\mu\tau}{l}}\right)\mathcal{O}\left(\frac{\epsilon^{-q}}{q}\right) = o(1) = \omega\left(\epsilon^{\frac{1}{2}}\right) $ (3.9b)

    as $ \epsilon\downarrow0 $.

    If $ l = 0 $, we introduce the rescaled time $ \tau\ln\left(\frac{1}{\epsilon}\right) = t\geq0 $ and $ p, q\in(0, \frac{1}{2}) $ independent of both $ \epsilon $ and $ t $ satisfying $ 0 < \max\{\mu\tau, (\lambda+\mu)\tau+p-\frac{1}{2}\} < \frac{1}{2}-q $. Then, for $ 0 < \epsilon < 1 $, we have the corrector bounds

    $ \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t) = \mathcal{O}\left(\epsilon^{\frac{1}{2}-\mu\tau}\right)+\mathcal{O}\left(\epsilon^{1-(\lambda+\mu)\tau}\right)\mathcal{O}\left(\frac{\epsilon^{-p}}{p}\right), $ (3.10a)
    $ \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t) = \left[\mathcal{O}\left(\epsilon^{\frac{1}{2}-\mu\tau}\right)+\mathcal{O}\left(\epsilon^{1-(\lambda+\mu)\tau}\right)\mathcal{O}\left(\frac{\epsilon^{-p}}{p}\right)\right]\mathcal{O}\left(\frac{\epsilon^{-q}}{q}\right) $ (3.10b)

    as $ \epsilon\downarrow0 $. If, additionally, $ \kappa = 0 $ holds, then the bounds change to

    $ \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t) = \mathcal{O}\left(\epsilon^{\min\{\frac{1}{2},1-\lambda\tau\}}\right), $ (3.11a)
    $ \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t) = \mathcal{O}\left(\epsilon^{\min\{\frac{1}{2},1-\lambda\tau\}}\right)\mathcal{O}\left(\frac{\epsilon^{-q}}{q}\right). $ (3.11b)

    Proof. Insert the definition of the rescaled time into (3.6a) and (3.6b), use $ t_l = \min\{1/l, t\} = t $ for $ l = 0 $ and $ t_l\leq 1/l $ for $ l > 0 $. Now one obtains the product $ \epsilon^\delta\ln(1/\epsilon) $ for some positive number $ \delta > 0 $ at several locations, which has a single maximal value of $ \frac{1}{\delta e} $ at $ \ln\left(\frac{1}{\epsilon}\right) = \frac{1}{\delta} $. The minimum function is needed since $ \mathcal{O}(\epsilon^{r})+\mathcal{O}(\epsilon^{s}) = \mathcal{O}(\epsilon^{\min\{r, s\}}) $. The small $ o $ and small $ \omega $ orders are upper and lower asymptotic convergence speeds, respectively, for $ \epsilon\downarrow0 $. The upper bound for $ \epsilon $ is needed to guarantee that the interval for $ \tau $ corresponds to $ t\geq0 $.
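    The elementary maximisation invoked here reads, explicitly: for $ f(\epsilon) = \epsilon^\delta\ln\left(\frac{1}{\epsilon}\right) $ with $ \delta > 0 $,

    $ f'(\epsilon) = \epsilon^{\delta-1}\left(\delta\ln\left(\frac{1}{\epsilon}\right)-1\right) = 0\iff\ln\left(\frac{1}{\epsilon}\right) = \frac{1}{\delta},\qquad\max\limits_{0<\epsilon<1}f(\epsilon) = e^{-1}\cdot\frac{1}{\delta} = \frac{1}{\delta e}, $

    in agreement with the value quoted in the proof.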

    Theorem 5 indicates that convergence can be retained for certain diverging sequences of time-intervals. Consequently, appropriate rescalings of the time variable yield upscaled systems and convergence rates for systems with regularity conditions different from those in assumptions (A1)–(A3).

    Remark 6. The tensors $ \mathit{L} $ and $ \mathit{G} $ do not depend on $ \epsilon $, nor are they unbounded functions of $ t $. If such a dependence or unbounded behaviour does exist, then bounds similar to those stated in Theorem 4 are still valid in a new time-variable $ s\in I\subset{\bf{R}}_+ $ if an invertible $ C^1 $-map $ f_\epsilon $ from $ t\in{\bf{R}}_+ $ to $ s $ exists such that the tensors $ (\mathit{L}^\epsilon/f'_\epsilon)\circ f_\epsilon^{-1} $, $ (\mathit{G}^\epsilon/f'_\epsilon)\circ f_\epsilon^{-1} $, $ \mathit{M}^\epsilon\circ f_\epsilon^{-1} $, $ \mathit{E}^\epsilon\circ f_\epsilon^{-1} $, $ \mathit{D}^\epsilon\circ f_\epsilon^{-1} $, $ \mathit{H}^\epsilon\circ f_\epsilon^{-1} $, $ \mathit{K}^\epsilon\circ f_\epsilon^{-1} $, and $ \mathit{J}^\epsilon\circ f_\epsilon^{-1} $ satisfy (A1)–(A3).

    Moreover, if $ f_\epsilon({\bf{R}}_+) = {\bf{R}}_+ $ for $ \epsilon > 0 $ small enough, then the bounds of Theorem 5 are valid as well with $ \tau $ defined in terms of $ s $.

    Upscaling of the Neumann problem (2.8a)–(2.9c) can be done by many methods, e.g., via asymptotic expansions or two-scale convergence in suitable function spaces. We proceed in four steps:

    1. Existence and uniqueness of $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) $.

    We rely on Theorem 2.

    2. Obtain $ \epsilon $-independent bounds for $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) $.

    See Section 4.1.

    a. Obtain a priori estimates for $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) $. See Lemma 1.

    b. Obtain $ \epsilon $-independent bounds for $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) $. See Theorem 6.

    3. Upscaling via two-scale convergence.

    See Section 4.2.

    a. Two-scale limit of $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) $ for $ \epsilon\downarrow0 $. See Lemma 2.

    b. Two-scale limit of problem (2.8a)–(2.9c) for $ \epsilon\downarrow0 $. See Theorem 7.

    4. Upscaling via asymptotic expansions and relating to two-scale convergence.

    See Section 4.3.

    a. Expand (2.8a) and $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) $. See equations (4.18)–(4.30).

    b. Obtain existence & uniqueness of $ \!({\boldsymbol{U}}^0, \!\!{\boldsymbol{V}}^0) $. See Lemma 3 and Lemma 4

    c. Obtain the defining system of $ ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) $. See equations (4.32)–(4.39) and Lemma 5.

    d. Statement of the upscaled system. See Theorem 8.

    In this section, we show $ \epsilon $-independent bounds for a weak solution $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) $ to the Neumann problem (2.8a)–(2.9c). We define a weak solution to the Neumann problem (2.8a)–(2.9c) as a pair $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon)\in H^1((0, T)\times\Omega^\epsilon)^N\times L^\infty((0, T), \mathbb{V}_\epsilon)^N $ satisfying

    $ \begin{equation} {{({\bf{P}}_w^\epsilon)}}\qquad\qquad\left\{\begin{array}{l} \int_{\Omega^\epsilon}{\boldsymbol{\phi}}^\top\left[\mathit{M}^\epsilon{\boldsymbol{V}}^\epsilon-{\boldsymbol{H}}^\epsilon - \mathit{K}^\epsilon{\boldsymbol{U}}^\epsilon-\mathit{J}^\epsilon\cdot\nabla{\boldsymbol{U}}^\epsilon\right]+(\nabla{\boldsymbol{\phi}})^\top\cdot\left(\mathit{E}^\epsilon\cdot\nabla{\boldsymbol{V}}^\epsilon+\mathit{D}^\epsilon{\boldsymbol{V}}^\epsilon\right)\mathrm{d}{\boldsymbol{x}} = 0,\\ \int_{\Omega^\epsilon}{\boldsymbol{\psi}}^\top\left[\frac{\partial{\boldsymbol{U}}^\epsilon}{\partial t}+\mathit{L}{\boldsymbol{U}}^\epsilon- \mathit{G}{\boldsymbol{V}}^\epsilon\right]\mathrm{d}{\boldsymbol{x}} = 0,\\ {\boldsymbol{U}}^\epsilon(0,{\boldsymbol{x}}) = {\boldsymbol{U}}^*({\boldsymbol{x}}) \text{ for all }{\boldsymbol{x}}\in\overline{\Omega^\epsilon},\end{array}\right. \end{equation} $

    for a.e. $ t\in(0, T) $ and for all test-functions $ {\boldsymbol{\phi}}\in \mathbb{V}_\epsilon^N $ and $ {\boldsymbol{\psi}}\in L^2(\Omega^\epsilon)^N $.

    The existence and uniqueness of solutions to system (P$ _w^\epsilon $) can only hold when the parameters are well-balanced. The next lemma provides conditions under which the parameters are well-balanced.

    Lemma 1. Assume (A1)–(A3) hold and let $ \epsilon\in(0, \epsilon_0) $ for $ \epsilon_0 > 0 $. Then there exist positive constants $ \tilde{m}_\alpha $, $ \tilde{e}_i $, $ \tilde{H} $, $ \tilde{K}_\alpha $, $ \tilde{J}_{i\alpha} $, for $ \alpha\in\{1, \ldots, N\} $ and $ i\in\{1, \ldots, d\} $, such that the a priori estimate

    $ \begin{equation} \sum\limits_{\alpha = 1}^N\tilde{m}_\alpha\|V^\epsilon_\alpha\|^2_{L^2(\Omega^\epsilon)}+\sum\limits_{i = 1}^d\sum\limits_{\alpha = 1}^N\tilde{e}_i\left\|\frac{\mathrm{d}V^\epsilon_\alpha}{\mathrm{d}x_i}\right\|^2_{L^2(\Omega^\epsilon)} \leq \tilde{H}+\sum\limits_{\alpha = 1}^N\tilde{K}_\alpha\|U^\epsilon_\alpha\|^2_{L^2(\Omega^\epsilon)}+\sum\limits_{i = 1}^d\sum\limits_{\alpha = 1}^N\tilde{J}_{i\alpha}\left\|\frac{\mathrm{d}U^\epsilon_\alpha}{\mathrm{d}x_i}\right\|^2_{L^2(\Omega^\epsilon)} \end{equation} $ (4.1)

    holds for a.e. $ t\in(0, T) $.

    Proof. We test the first equation of (P$ _w^\epsilon $) with $ {\boldsymbol{\phi}} = {\boldsymbol{V}}^\epsilon $ and apply Young's inequality wherever a product is not a square. A non-square product containing both $ {\boldsymbol{U}}^\epsilon $ and $ \nabla{\boldsymbol{V}}^\epsilon $ can only be found in the $ \mathit{D} $-term. Hence, Young's inequality allows all other non-square product terms to have a negligible effect on the coercivity constants $ m_\alpha $ and $ e_i $, while affecting $ \tilde{H} $, $ \tilde{K}_\alpha $, $ \tilde{J}_{i\alpha} $. Therefore, we only need to enforce two inequalities to prove the lemma by guaranteeing coercivity, i.e.,

    $ \begin{align} e_i-\sum\limits_{\alpha = 1}^N\frac{\eta_{i\beta\alpha}}{2}\tilde{D}_{i\beta\alpha}\geq\tilde{e}_i \gt 0&\;\text{for }\beta\in\{1,\ldots,N\},\,i\in\{1,\ldots,d\}, \end{align} $ (4.2a)
    $ \begin{align} m_\alpha-\sum\limits_{i = 1}^d\sum\limits_{\beta = 1}^N\frac{\tilde{D}_{i\beta\alpha}}{2\eta_{i\beta\alpha}}\geq\tilde{m}_\alpha \gt 0&\;\text{for }\alpha\in\{1,\ldots,N\}, \end{align} $ (4.2b)

    where $ \tilde{D}_{i\beta\alpha} = \|D_{i\beta\alpha}\|_{L^\infty({\bf{R}}_+\times\Omega; C_\#(Y^*))} $. We can choose $ \eta_{i\beta\alpha} > 0 $ satisfying

    $ \begin{equation} \frac{dN\tilde{D}_{i\beta\alpha}}{2m_\alpha} \lt \eta_{i\beta\alpha} \lt \frac{2e_i}{N\tilde{D}_{i\beta\alpha}}, \end{equation} $ (4.3)

    if inequality (2.12) in assumption (A3) is satisfied. For the exact definition of the constants $ \tilde{m}_\alpha $, $ \tilde{e}_i $, $ \tilde{H} $, $ \tilde{K}_\alpha $, $ \tilde{J}_{i\alpha} $, see equations (A.4a)–(A.4e) in A.
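    As a sketch of the key step (with generic indices; the exact bookkeeping is in A): for the mixed $ \mathit{D} $-term, Young's inequality with weight $ \eta_{i\beta\alpha} $ gives

    $ \left|\int_{\Omega^\epsilon}D^\epsilon_{i\beta\alpha}V^\epsilon_\alpha\frac{\mathrm{d}V^\epsilon_\beta}{\mathrm{d}x_i}\mathrm{d}{\boldsymbol{x}}\right|\leq\frac{\tilde{D}_{i\beta\alpha}}{2\eta_{i\beta\alpha}}\|V^\epsilon_\alpha\|^2_{L^2(\Omega^\epsilon)}+\frac{\eta_{i\beta\alpha}\tilde{D}_{i\beta\alpha}}{2}\left\|\frac{\mathrm{d}V^\epsilon_\beta}{\mathrm{d}x_i}\right\|^2_{L^2(\Omega^\epsilon)}, $

    and summing over the indices produces precisely the losses that appear on the left-hand sides of (4.2a) and (4.2b).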

    Theorem 6. Assume (A1)–(A3) to hold, then there exist positive constants $ C $, $ \tilde{\kappa} $ and $ \lambda $ independent of $ \epsilon $ such that

    $ \begin{equation} \|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)\leq Ce^{\lambda t},\qquad \|{\boldsymbol{V}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)\leq C(1+\tilde{\kappa}e^{\lambda t}) \end{equation} $ (4.4)

    hold for $ t\geq0 $.

    Proof. By (A1)–(A3) there exist positive numbers $ \tilde{m}_\alpha $, $ \tilde{e}_i $, $ \tilde{H} $, $ \tilde{K}_\alpha $, $ \tilde{J}_{i\alpha} $ for $ \alpha\in\{1, \ldots, N\} $ and $ i\in\{1, \ldots, d\} $ such that the a priori estimate (4.1) stated in Lemma 1 holds. Moreover, concerning system (P$ _w^\epsilon $), there exist $ L_G $, $ L_N $, $ G_G $, and $ G_N $, see equations (A.3a)–(A.3d) in A, such that

    $ \begin{align} \frac{\partial}{\partial t}\|{\boldsymbol{U}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2&\leq L_N\|{\boldsymbol{U}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2+G_N\|{\boldsymbol{V}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2, \end{align} $ (4.5a)
    $ \begin{align} \frac{\partial}{\partial t}\|\nabla {\boldsymbol{U}}^\epsilon\|_{L^2(\Omega^\epsilon)^{d\times N}}^2&\leq L_G\|{\boldsymbol{U}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2+L_N\|\nabla {\boldsymbol{U}}^\epsilon\|_{L^2(\Omega^\epsilon)^{d\times N}}^2\cr &\quad+G_G\|{\boldsymbol{V}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2+G_N\|\nabla {\boldsymbol{V}}^\epsilon\|_{L^2(\Omega^\epsilon)^{d\times N}}^2 \end{align} $ (4.5b)

    hold. Adding (4.5a) and (4.5b), and using (4.1), we obtain a positive constant $ I $ and a vector $ {\boldsymbol{J}}\in{\bf{R}}_+^N $ such that

    $ \begin{equation} \frac{\partial}{\partial t}\|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}^2\leq {\boldsymbol{J}}+I\|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}^2 \end{equation} $ (4.6)

    with

    $ \begin{align} I & = \max\!\left\{\!0,\! L_N\!+\!\max\!\left\{\!L_G\!+\!G_M\!\!\!\!\max\limits_{1\leq\alpha\leq N}\!\{\tilde{K}_\alpha\},G_M\!\!\!\!\!\!\!\!\!\!\!\max\limits_{1\leq\alpha\leq N,1\leq i\leq d}\{\tilde{J}_{i\alpha}\}\!\right\}\!\right\}, \end{align} $ (4.7a)
    $ \begin{align} G_M & = \max\limits_{1\leq\alpha \lt N,1\leq i\leq d}\left\{\frac{G_N+G_G}{\tilde{m}_\alpha},\frac{G_N}{\tilde{e}_i}\right\}. \end{align} $ (4.7b)

    Applying Gronwall's inequality, see [30,Thm. 1], to (4.6) yields the existence of a constant $ \lambda $ defined as $ \lambda = I/2 $, such that

    $ \begin{equation} \|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)\leq Ce^{\lambda t},\qquad \|{\boldsymbol{V}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)\leq C(1+\tilde{\kappa}e^{\lambda t}) \end{equation} $ (4.8)

    with $ \tilde{\kappa} = \max_{1\leq\alpha\leq N, 1\leq i\leq d}\{\tilde{K}_\alpha, \tilde{J}_{i\alpha}\} $.
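    In a sketch (writing $ J $ for a scalar bound on the constant vector in (4.6) and assuming $ I > 0 $), the Gronwall step reads

    $ \|{\boldsymbol{U}}^\epsilon\|^2_{H^1(\Omega^\epsilon)^N}(t)\leq\left(\|{\boldsymbol{U}}^*\|^2_{H^1(\Omega^\epsilon)^N}+\frac{J}{I}\right)e^{It},\qquad\text{hence}\qquad\|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)\leq Ce^{It/2} = Ce^{\lambda t}, $

    and the bound on $ \|{\boldsymbol{V}}^\epsilon\|_{\mathbb{V}_\epsilon^N} $ then follows by inserting this estimate back into the a priori estimate (4.1).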

    Remark 7. It is difficult to obtain exact expressions for optimal values of $ L_N $, $ L_G $, $ G_N $ and $ G_G $ such that a minimal positive value of $ \lambda $ is obtained. See A for the exact dependence of $ \lambda $ on the parameters involved in the Neumann problem (2.8a)–(2.9c).

    Remark 8. The $ (0, T)\times\Omega^\epsilon $-measurability of $ {\boldsymbol{U}}^\epsilon $ and $ {\boldsymbol{V}}^\epsilon $ can be proven based on the Rothe-method (discretization in time) in combination with the convergence of piecewise linear functions to any function in the spaces $ H^1((0, T)\times\Omega^\epsilon) $ or $ L^\infty((0, T); \mathbb{V}_\epsilon) $. One can prove that both $ {\boldsymbol{U}}^\epsilon $ and $ {\boldsymbol{V}}^\epsilon $ are measurable and are weak solutions to ($ {\bf{P}}_w^\epsilon $). See Chapter 2 in [20] for a pseudo-parabolic system for which the Rothe-method is used to show existence (and hence also measurability).

    Remark 9. Since we have $ \mathit{G}\in L^\infty({\bf{R}}_+; W^{1, \infty}(\Omega))^{N\times N} $ and $ {\boldsymbol{V}}^\epsilon\in L^\infty((0, T); \mathbb{V}_\epsilon)^N $, we are allowed to differentiate equation (2.8b) with respect to $ {\boldsymbol{x}} $ and test the resulting identity with both $ \nabla{\boldsymbol{U}}^\epsilon $ and $ \frac{\partial}{\partial t}\nabla{\boldsymbol{U}}^\epsilon $. Conversely, we are not allowed to differentiate equation (2.8a) with respect to $ t $, as all tensors have insufficient regularity in time: they are only in $ L^\infty({\bf{R}}_+\times\Omega^\epsilon)^{N\times N} $.

    Remark 10. We cannot differentiate equation (2.8b) with respect to $ {\boldsymbol{x}} $ when $ \mathit{L} $ or $ \mathit{G} $ has decreased spatial regularity, for example $ L^\infty((0, T)\times\Omega)^{N\times N} $. One can still obtain unique solutions of ($ {\bf{P}}_w^\epsilon $) if and only if $ \mathit{J}^\epsilon = \mathit{0} $ holds, since it removes the $ \nabla{\boldsymbol{U}}^\epsilon $ term from equation (2.8a). Consequently, Theorem 6 holds with $ {\boldsymbol{U}}^\epsilon\in H^1((0, T); L^2(\Omega^\epsilon)) $ and $ \mathit{J}^\epsilon = \mathit{0} $ under the additional relaxed regularity assumption $ \mathit{L}, \mathit{G}\in L^\infty((0, T)\times\Omega)^{N\times N} $ and with $ \lambda $ modified by taking $ L_G = \tilde{J}_{i\alpha} = 0 $ and by replacing $ G_M $ with $ G_N/\min_{1\leq\alpha\leq N}\tilde{m}_\alpha $.

    We recall the notation $ \hat{f}^\epsilon $ to denote the extension to $ \Omega $ via the operator $ \mathcal{P}^\epsilon $ for $ f^\epsilon $ defined on $ \Omega^\epsilon $. This extension operator $ \mathcal{P}^\epsilon $, as defined in Theorem 1, is well-defined if both $ \partial\mathcal{T} $ and $ \partial \Omega $ are $ C^2 $-regular, assumption (A4) holds, and $ \partial\mathcal{T}\cap\partial Y = \emptyset $. Hence, the extension operator is well-defined in our setting.

    Lemma 2. Assume (A1)–(A4) to hold. For each $ \epsilon\in(0, \epsilon_0) $, let the pair of sequences $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon)\in H^1((0, T)\times\Omega^\epsilon)^N\times L^\infty((0, T); \mathbb{V}_\epsilon)^N $ be the unique weak solution to ($ {\bf{P}}^{\, \epsilon}_w $). Then this sequence of weak solutions satisfies the estimates

    $ \begin{equation} \|{\boldsymbol{U}}^\epsilon\|_{H^1((0,T)\times\Omega^\epsilon)^N}+\|{\boldsymbol{V}}^\epsilon\|_{L^\infty((0,T);\mathbb{V}_\epsilon)^N}\leq C, \end{equation} $ (4.9)

    for all $ \epsilon\in(0, \epsilon_0) $ and there exist vector functions

    $ \begin{align} {\boldsymbol{u}}&\text{ in }H^1((0,T)\times\Omega)^N, \end{align} $ (4.10a)
    $ \begin{align} \mathcal{U}&\text{ in }H^1((0,T);L^2(\Omega;H^1_\#(Y^*)/{\bf{R}}))^N, \end{align} $ (4.10b)
    $ \begin{align} {\boldsymbol{v}}&\text{ in }L^\infty((0,T);H^1_0(\Omega))^N, \end{align} $ (4.10c)
    $ \begin{align} \mathcal{V}&\text{ in }L^\infty((0,T)\times\Omega;H^1_\#(Y^*)/{\bf{R}})^N, \end{align} $ (4.10d)

    and a subsequence $ \epsilon'\subset\epsilon $, for which the following two-scale convergences

    $ \begin{align} \hat{{\boldsymbol{U}}}^{\epsilon'}&\overset{2}{\longrightarrow}{\boldsymbol{u}} \end{align} $ (4.11a)
    $ \begin{align} \frac{\partial}{\partial t}\hat{{\boldsymbol{U}}}^{\epsilon'}&\overset{2}{\longrightarrow}\frac{\partial}{\partial t}{\boldsymbol{u}} \end{align} $ (4.11b)
    $ \begin{align} \nabla\hat{{\boldsymbol{U}}}^{\epsilon'}&\overset{2}{\longrightarrow}\nabla{\boldsymbol{u}}+\nabla_{{\boldsymbol{y}}}\mathcal{U} \end{align} $ (4.11c)
    $ \begin{align} \frac{\partial}{\partial t}\nabla\hat{{\boldsymbol{U}}}^{\epsilon'}&\overset{2}{\longrightarrow}\frac{\partial}{\partial t}\nabla{\boldsymbol{u}}+\frac{\partial}{\partial t}\nabla_{{\boldsymbol{y}}}\mathcal{U} \end{align} $ (4.11d)
    $ \begin{align} \hat{{\boldsymbol{V}}}^{\epsilon'}&\overset{2}{\longrightarrow}{\boldsymbol{v}} \end{align} $ (4.11e)
    $ \begin{align} \nabla\hat{{\boldsymbol{V}}}^{\epsilon'}&\overset{2}{\longrightarrow}\nabla{\boldsymbol{v}}+\nabla_{{\boldsymbol{y}}}\mathcal{V} \end{align} $ (4.11f)

    hold.

    Proof. For all $ \epsilon > 0 $, Theorem 6 gives the bounds (4.9) independent of the choice of $ \epsilon $. Hence, $ \hat{{\boldsymbol{U}}}^\epsilon\rightharpoonup {\boldsymbol{u}} $ in $ H^1((0, T)\times\Omega)^N $ and $ \hat{{\boldsymbol{V}}}^\epsilon\rightharpoonup {\boldsymbol{v}} $ in $ L^\infty((0, T); H^1_0(\Omega))^N $ as $ \epsilon\rightarrow0 $. By Proposition 1 in B, we obtain a subsequence $ \epsilon'\subset\epsilon $ and functions $ {\boldsymbol{u}}\in H^1((0, T)\times\Omega)^N $, $ {\boldsymbol{v}}\in L^2((0, T); H^1_0(\Omega))^N $, $ \mathcal{U}, \mathcal{V}\in L^2((0, T)\times\Omega; H^1_\#(Y^*)/{\bf{R}})^N $ such that (4.11a), (4.11b), (4.11c), (4.11e), and (4.11f) hold for a.e. $ t\in(0, T) $. Moreover, there exists a vector function $ \tilde{\mathcal{U}}\in L^2((0, T)\times\Omega; H^1_\#(Y^*)/{\bf{R}})^N $ such that the following two-scale convergence

    $ \begin{equation} \frac{\partial}{\partial t}\nabla\hat{{\boldsymbol{U}}}^{\epsilon'}\overset{2}{\longrightarrow}\frac{\partial}{\partial t}\nabla{\boldsymbol{u}}+\nabla_{{\boldsymbol{y}}}\tilde{\mathcal{U}} \end{equation} $ (4.12)

    holds for the same subsequence $ \epsilon' $. Using two-scale convergence, Fubini's Theorem and partial integration in time, we obtain an increased regularity for $ \mathcal{U} $, i.e., $ \mathcal{U}\in H^1((0, T); L^2(\Omega; H^1_\#(Y^*)/{\bf{R}}))^N $, with $ \frac{\partial}{\partial t}\nabla_{{\boldsymbol{y}}}\mathcal{U} = \nabla_{{\boldsymbol{y}}}\tilde{\mathcal{U}} $.

    By Lemma 2, we can determine the macroscopic version of (P$ ^\epsilon_w $), which we denote by (P$ ^0_w $). It is stated in Theorem 7.

    Theorem 7. Assume the hypotheses of Lemma 2 to be satisfied. Then the two-scale limits $ {\boldsymbol{u}}\in H^1((0, T)\times\Omega)^N $ and $ {\boldsymbol{v}}\in L^\infty((0, T); H^1_0(\Omega))^N $ introduced in Lemma 2 form a weak solution to

    $ \begin{equation*} ({\bf{P}}^0_w)\qquad\quad\left\{\begin{array}{l}\int_\Omega{\boldsymbol{\phi}}^\top\left[\overline{\mathit{M}}{\boldsymbol{v}}-\overline{{\boldsymbol{H}}} - \overline{\mathit{K}}{\boldsymbol{u}}\right]+(\nabla{\boldsymbol{\phi}})^\top\cdot\left(\mathit{E}^*\cdot\nabla{\boldsymbol{v}}+\mathit{D}^*{\boldsymbol{v}}\right)\mathrm{d}{\boldsymbol{x}} = 0,\\ \int_\Omega{\boldsymbol{\psi}}^\top\left[\frac{\partial {\boldsymbol{u}}}{\partial t}+\mathit{L}{\boldsymbol{u}}- \mathit{G}{\boldsymbol{v}}\right]\mathrm{d}{\boldsymbol{x}} = 0,\\ {\boldsymbol{u}}(0,{\boldsymbol{x}}) = {\boldsymbol{U}}^*({\boldsymbol{x}})\quad\text{for }{\boldsymbol{x}}\in\Omega, \end{array}\right. \end{equation*} $

    for a.e. $ t\in(0, T) $ and for all test functions $ {\boldsymbol{\phi}}\in H^1_0(\Omega)^N $ and $ {\boldsymbol{\psi}}\in L^2(\Omega)^N $, where the barred tensors and vectors are the $ Y^* $-averaged functions introduced in (2.14). Furthermore,

    $ \begin{equation} \mathit{E}^* = \frac{1}{|Y|}\int_{Y^*}\mathit{E}\cdot(\mathit{1}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\mathrm{d}{\boldsymbol{y}},\quad\mathit{D}^* = \frac{1}{|Y|}\int_{Y^*}\mathit{D}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\mathrm{d}{\boldsymbol{y}} \end{equation} $ (4.13)

    are the wanted effective coefficients. The auxiliary tensors

    $ Z_{\alpha\beta}, W_i\in L^\infty(0, T; W^{2, \infty}(\Omega; H^1_\#(Y^*)/{\bf{R}})) $ satisfy the cell problems

    $ \begin{align} {\boldsymbol{0}}& = \int_{Y^*}{\boldsymbol{\Phi}}^\top\cdot(\nabla_{{\boldsymbol{y}}}\cdot\left[\mathit{E}\cdot(\mathit{1}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\right])\mathrm{d}{\boldsymbol{y}} = \int_{Y^*}{\boldsymbol{\Phi}}^\top\cdot(\nabla_{{\boldsymbol{y}}}\cdot\hat{\mathit{E}})\mathrm{d}{\boldsymbol{y}}, \end{align} $ (4.14a)
    $ \begin{align} {\boldsymbol{0}}& = \int_{Y^*}{\boldsymbol{\Psi}}^\top(\nabla_{{\boldsymbol{y}}}\cdot\left[\mathit{D}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\right])\mathrm{d}{\boldsymbol{y}} = \int_{Y^*}{\boldsymbol{\Psi}}^\top(\nabla_{{\boldsymbol{y}}}\cdot\hat{\mathit{D}})\mathrm{d}{\boldsymbol{y}} \end{align} $ (4.14b)

    for all $ {\boldsymbol{\Phi}}\in C_\#(Y^*)^d $, $ {\boldsymbol{\Psi}}\in C_\#(Y^*)^N $.

    Proof. The solution to system (P$ ^\epsilon_w $) is extended to $ \Omega $ by taking $ \hat{{\boldsymbol{H}}}^\epsilon $, $ \hat{{\boldsymbol{V}}}^\epsilon $, $ \hat{{\boldsymbol{U}}}^\epsilon $ for $ {\boldsymbol{H}}^\epsilon $, $ {\boldsymbol{V}}^\epsilon $, $ {\boldsymbol{U}}^\epsilon $, respectively. The extended system is satisfied on $ \mathcal{T}^\epsilon\cap\Omega $ and it satisfies the boundary conditions on $ \partial_{int}\Omega^\epsilon $ of system (P$ ^\epsilon_w $). Hence, it is sufficient to look at (P$ ^\epsilon_w $) only. In (P$ ^\epsilon_w $), we choose $ {\boldsymbol{\psi}} = {\boldsymbol{\psi}}^\epsilon = {\boldsymbol{\Psi}}\left(t, {\boldsymbol{x}}, \frac{{\boldsymbol{x}}}{\epsilon}\right) $ for the test function $ {\boldsymbol{\Psi}}\in L^2((0, T); \mathcal{D}(\Omega^\epsilon; C^\infty_\#(Y^*)))^N $, $ {\boldsymbol{\phi}} = {\boldsymbol{\phi}}^\epsilon = {\boldsymbol{\Phi}}(t, {\boldsymbol{x}})+\epsilon{\boldsymbol{\varphi}}\left(t, {\boldsymbol{x}}, \frac{{\boldsymbol{x}}}{\epsilon}\right) $ for the test functions $ {\boldsymbol{\Phi}}\in L^2((0, T); C^\infty_0(\Omega^\epsilon))^N $, $ {\boldsymbol{\varphi}}\in L^2((0, T); \mathcal{D}(\Omega^\epsilon; C^\infty_\#(Y^*)))^N $. Corollary 5 and Theorem 9 in combination with (B.4) lead to $ \mathit{T}^\epsilon\overset{2}{\longrightarrow}\mathit{T} $, where $ \mathit{T}^\epsilon $ is an arbitrary tensor or vector in (P$ ^\epsilon_w $) other than $ \mathit{L} $ and $ \mathit{G} $. Moreover, by Corollary 5 and Propositions 1 and 2 we have $ {\boldsymbol{\psi}}^\epsilon\overset{2}{\longrightarrow}{\boldsymbol{\Psi}}(t, {\boldsymbol{x}}, {\boldsymbol{y}}) $, $ {\boldsymbol{\phi}}^\epsilon\overset{2}{\longrightarrow}{\boldsymbol{\Phi}}(t, {\boldsymbol{x}}) $, and $ \nabla{\boldsymbol{\phi}}^\epsilon\overset{2}{\longrightarrow}\nabla{\boldsymbol{\Phi}}(t, {\boldsymbol{x}})+\nabla_{{\boldsymbol{y}}}{\boldsymbol{\varphi}}(t, {\boldsymbol{x}}, {\boldsymbol{y}}) $. By Corollary 5 and Theorem 9, there is a two-scale limit of (P$ ^\epsilon_w $), reading

    $ \begin{multline} \int_\Omega\frac{1}{|Y|}\int_{Y^*} {\boldsymbol{\Phi}}^\top\left[\mathit{M}{\boldsymbol{v}}-{\boldsymbol{H}}-\mathit{K}{\boldsymbol{u}}\right]\\ +(\nabla{\boldsymbol{\Phi}}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{\varphi}})^\top\cdot\left[\mathit{E}\cdot(\nabla{\boldsymbol{v}}+\nabla_{{\boldsymbol{y}}}\mathcal{V})+\mathit{D}{\boldsymbol{v}}\right]\\ +{\boldsymbol{\Psi}}^\top\left[\frac{\partial{\boldsymbol{u}}}{\partial t}+\mathit{L}{\boldsymbol{u}}-\mathit{G}{\boldsymbol{v}}\right]\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}} = 0. \end{multline} $ (4.15)

    Similarly, the initial condition

    $ \begin{equation} {\boldsymbol{u}}(0,{\boldsymbol{x}}) = {\boldsymbol{U}}^*({\boldsymbol{x}}), \quad {\boldsymbol{x}}\in\overline{\Omega}, \end{equation} $ (4.16)

    is satisfied by $ {\boldsymbol{u}} $ as $ \nabla_{{\boldsymbol{y}}}{\boldsymbol{u}} = \mathit{0} $ holds.

    For $ {\boldsymbol{\Phi}} = {\boldsymbol{\Psi}} = {\boldsymbol{0}} $, we can take $ \mathcal{V} = {\boldsymbol{W}}\cdot\nabla{\boldsymbol{v}}+\mathit{Z}{\boldsymbol{v}}+\tilde{\mathcal{V}} $, where $ {\boldsymbol{W}} $ and $ \mathit{Z} $ satisfy the cell problems (4.14a) and (4.14b), respectively, and $ \nabla_{{\boldsymbol{y}}}\tilde{\mathcal{V}} = \mathit{0} $. The existence and uniqueness of $ {\boldsymbol{W}} $ and $ \mathit{Z} $ follows from Lax-Milgram as cell problems (4.14a) and (4.14b) are linear elliptic systems by (A2) for the Hilbert space $ H^1_\#(Y^*) $. The regularity of $ {\boldsymbol{W}} $ and $ \mathit{Z} $ in $ t\in(0, T) $ and $ {\boldsymbol{x}}\in\Omega $ follows from the regularity of $ \mathit{E} $ and $ \mathit{D} $ as stated in (A1), (A2) and (A3). Moreover, we obtain $ {\boldsymbol{v}}\in L^\infty((0, T); H^2(\Omega)) $ due to (A1). Then Proposition 1, Theorem 9 and the embedding $ H^{1/2}(Y^*)\hookrightarrow L^2(\partial\mathcal{T}) $ yields $ 0 = \frac{\partial {\boldsymbol{V}}^\epsilon}{\partial\nu_{\mathit{D}^\epsilon}}\overset{2}{\longrightarrow}(\hat{\mathit{E}}\cdot\nabla{\boldsymbol{v}}+\hat{\mathit{D}}{\boldsymbol{v}})\cdot{\boldsymbol{n}} = 0 $ on $ \partial Y^* $, which is automatically guaranteed by (4.14a) and (4.14b).

    Hence, (P$ ^0_w $) yields the strong form system

    $ ({\bf{P}}_s^0)\;\;\;\;\left\{\begin{array}{ll} \overline{\mathit{M}}{\boldsymbol{v}}-\nabla\cdot\left(\mathit{E}^*\cdot\nabla{\boldsymbol{v}}+\mathit{D}^*{\boldsymbol{v}}\right) = \overline{{\boldsymbol{H}}}+\overline{\mathit{K}}{\boldsymbol{u}}&\text{ in }(0,T)\times\Omega,\\ \frac{\partial{\boldsymbol{u}}}{\partial t}+\mathit{L}{\boldsymbol{u}} = \mathit{G}{\boldsymbol{v}}&\text{ in }(0,T)\times\Omega,\\ {\boldsymbol{v}} = {\boldsymbol{0}}&\text{ on }(0,T)\times\partial\Omega,\\ {\boldsymbol{u}} = {\boldsymbol{U}}^*&\text{ on }\{0\}\times\overline{\Omega}, \end{array}\right. $

    when, in addition to the regularity of (A1), the following regularity holds:

    $ \begin{align} M_{\alpha\beta},H_\alpha,K_{\alpha\beta}&\in C(0,T;C^{1}(\Omega;C^1_\#(Y^*))), \end{align} $ (4.17a)
    $ \begin{align} E_{ij},D_{i\alpha\beta}&\in C(0,T;C^{2}(\Omega;C^2_\#(Y^*))), \end{align} $ (4.17b)
    $ \begin{align} L_{\alpha\beta},G_{\alpha\beta}&\in C(0,T;C^{1}(\Omega)), \end{align} $ (4.17c)
    $ \begin{align} {\boldsymbol{U}}^*&\in C(\overline{\Omega}), \end{align} $ (4.17d)

    for all $ T\in{\bf{R}}_+ $, when both $ \partial\Omega $ and $ \partial\mathcal{T} $ are $ C^3 $-boundaries.
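    As an illustration of the structure of the effective coefficients (a one-dimensional, non-perforated example, so outside the perforated setting of this paper): for $ d = N = 1 $, $ Y^* = Y = (0, 1) $ and $ \mathit{D} = 0 $, the cell problem (4.14a) forces $ E(y)\left(1+W'(y)\right) $ to be constant in $ y $, and the periodicity of $ W $ then yields the classical harmonic mean

    $ E^* = \left(\int_0^1\frac{\mathrm{d}y}{E(y)}\right)^{-1}, $

    which in general differs from the plain average $ \overline{E} $.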

    Even though the previous section showed that there is a two-scale limit $ ({\boldsymbol{u}}, {\boldsymbol{v}}) $, it is necessary to show the relation between $ ({\boldsymbol{u}}, {\boldsymbol{v}}) $ and $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) $. To this end, we first rewrite the Neumann problem (2.8a)–(2.9c) and then use asymptotic expansions such that we are led to the two-scale limit, including the cell-functions, in a natural way. The Neumann problem (2.8a)–(2.9c) can be written in operator form as

    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^\epsilon {\boldsymbol{V}}^\epsilon = \mathcal{H}^\epsilon{\boldsymbol{U}}^\epsilon&\text{ on }(0,T)\times\Omega^\epsilon,\cr \mathcal{L}{\boldsymbol{U}}^\epsilon = \mathit{G}{\boldsymbol{V}}^\epsilon&\text{ on }(0,T)\times\Omega^\epsilon,\cr {\boldsymbol{U}}^\epsilon = {\boldsymbol{U}}^*&\text{ in }\{0\}\times\Omega^\epsilon,\cr {\boldsymbol{V}}^\epsilon = 0&\text{ on }(0,T)\times\partial_{ext}\Omega^\epsilon,\cr \frac{\mathrm{d} {\boldsymbol{V}}^\epsilon}{\mathrm{d}\nu_{\mathit{D}^\epsilon}} = 0&\text{ on }(0,T)\times\partial_{int}\Omega^\epsilon. \end{array}\right. \end{equation} $ (4.18)

    as indicated in Section 2.

    We postulate the following asymptotic expansions in $ \epsilon $ of $ {\boldsymbol{U}}^\epsilon $ and $ {\boldsymbol{V}}^\epsilon $:

    $ \begin{align} {\boldsymbol{V}}^\epsilon(t,{\boldsymbol{x}}) & = {\boldsymbol{V}}^0\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon {\boldsymbol{V}}^1\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon^2 {\boldsymbol{V}}^2\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\cdots, \end{align} $ (4.19a)
    $ \begin{align} {\boldsymbol{U}}^\epsilon(t,{\boldsymbol{x}}) & = {\boldsymbol{U}}^0\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon {\boldsymbol{U}}^1\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\epsilon^2 {\boldsymbol{U}}^2\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+\cdots. \end{align} $ (4.19b)

    Let $ {\boldsymbol{\Phi}} = {\boldsymbol{\Phi}}(t, {\boldsymbol{x}}, {\boldsymbol{y}})\in L^\infty(0, T; C^2(\Omega; C^2_\#(Y^*)))^N $ be a vector function depending on two spatial variables $ {\boldsymbol{x}} $ and $ {\boldsymbol{y}} $, and introduce $ {\boldsymbol{\Phi}}^\epsilon(t, {\boldsymbol{x}}) = {\boldsymbol{\Phi}}(t, {\boldsymbol{x}}, {\boldsymbol{x}}/\epsilon) $. Then the total spatial derivatives in $ {\boldsymbol{x}} $ become two partial derivatives, one in $ {\boldsymbol{x}} $ and one in $ {\boldsymbol{y}} $:

    $ \begin{align} \nabla{\boldsymbol{\Phi}}^\epsilon(t,{\boldsymbol{x}}) & = \frac{1}{\epsilon}(\nabla_{{\boldsymbol{y}}}{\boldsymbol{\Phi}})\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+ (\nabla_{{\boldsymbol{x}}}{\boldsymbol{\Phi}})\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right), \end{align} $ (4.20a)
    $ \begin{align} \nabla\cdot{\boldsymbol{\Phi}}^\epsilon(t,{\boldsymbol{x}}) & = \frac{1}{\epsilon}(\nabla_{{\boldsymbol{y}}}\cdot{\boldsymbol{\Phi}})\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)+ (\nabla_{{\boldsymbol{x}}}\cdot{\boldsymbol{\Phi}})\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right). \end{align} $ (4.20b)

    Note that the evaluation $ {\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon $ is suspended, as is common in formal asymptotic expansions; consequently, $ {\boldsymbol{y}}\in Y^* $ and $ {\boldsymbol{x}}\in\Omega $ are treated as independent variables.
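    As a quick consistency check of the differentiation rule (4.20a), the following sympy sketch compares, in one space dimension, the total derivative of $ {\boldsymbol{\Phi}}^\epsilon $ with the right-hand side of (4.20a) for a hypothetical smooth, $ Y $-periodic test function; the chosen function is illustrative only.

```python
# Minimal 1-D sympy check of the chain rule (4.20a); Phi is a hypothetical test function.
import sympy as sp

x, y, eps = sp.symbols('x y epsilon', positive=True)
Phi = sp.sin(x) * sp.cos(2 * sp.pi * y)        # smooth and 1-periodic in y

Phi_eps = Phi.subs(y, x / eps)                 # evaluation y = x/eps
lhs = sp.diff(Phi_eps, x)                      # total spatial derivative of Phi^eps
rhs = (sp.diff(Phi, y) / eps + sp.diff(Phi, x)).subs(y, x / eps)

print(sp.simplify(lhs - rhs))                  # prints 0
```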

    Hence, $ \mathcal{A}^\epsilon{\boldsymbol{\Phi}}^\epsilon $ can be formally expanded:

    $ \begin{equation} \mathcal{A}^\epsilon{\boldsymbol{\Phi}}^\epsilon = \left[\left(\frac{1}{\epsilon^2}\mathcal{A}^0+\frac{1}{\epsilon}\mathcal{A}^1+\mathcal{A}^2\right){\boldsymbol{\Phi}}\right]\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right), \end{equation} $ (4.21)

    where

    $ \begin{align} \mathcal{A}^0{\boldsymbol{\Phi}}& = -\nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{\Phi}}\right), \end{align} $ (4.22a)
    $ \begin{align} \mathcal{A}^1{\boldsymbol{\Phi}}& = -\nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{\Phi}}\right)-\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{\Phi}}\right)-\nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{D}{\boldsymbol{\Phi}}\right), \end{align} $ (4.22b)
    $ \begin{align} \mathcal{A}^2{\boldsymbol{\Phi}}& = \!\qquad\!\qquad\quad\mathit{M}{\boldsymbol{\Phi}}-\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{\Phi}}\right)-\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{D}{\boldsymbol{\Phi}}\right). \end{align} $ (4.22c)

    Moreover, $ \mathcal{H}^\epsilon{\boldsymbol{\Phi}}^\epsilon $ can be written as $ {\boldsymbol{H}}+(\mathcal{H}^0+\epsilon \mathcal{H}^1){\boldsymbol{\Phi}} $, where

    $ \begin{align} \mathcal{H}^0& = \mathit{K}+\mathit{J}\cdot\nabla_{{\boldsymbol{y}}}, \end{align} $ (4.23a)
    $ \begin{align} \mathcal{H}^1& = \qquad\mathit{J}\cdot\nabla_{{\boldsymbol{x}}}. \end{align} $ (4.23b)
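    The splitting (4.21)–(4.22c) can be verified symbolically in the scalar, one-dimensional case. The sketch below uses hypothetical smooth coefficients $ E $, $ D $, $ M $ and a hypothetical test function; it substitutes $ {\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon $ into $ \mathcal{A}^\epsilon{\boldsymbol{\Phi}}^\epsilon $ and checks that the result coincides with $ \epsilon^{-2}\mathcal{A}^0{\boldsymbol{\Phi}}+\epsilon^{-1}\mathcal{A}^1{\boldsymbol{\Phi}}+\mathcal{A}^2{\boldsymbol{\Phi}} $ evaluated at $ {\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon $.

```python
# 1-D symbolic verification of the expansion (4.21)-(4.22c); all data are hypothetical.
import sympy as sp

x, y, eps = sp.symbols('x y epsilon', positive=True)
Phi = sp.sin(x) * sp.cos(2 * sp.pi * y)
E = 2 + sp.cos(2 * sp.pi * y) * sp.exp(-x) / 2
D = x * sp.sin(2 * sp.pi * y)
M = 1 + x**2

sub = lambda f: f.subs(y, x / eps)             # evaluation y = x/eps
# A^eps Phi^eps = M Phi^eps - d/dx( E^eps d/dx Phi^eps + D^eps Phi^eps ) in d = 1
full = M * sub(Phi) - sp.diff(sub(E) * sp.diff(sub(Phi), x) + sub(D) * sub(Phi), x)

A0 = -sp.diff(E * sp.diff(Phi, y), y)                                        # (4.22a)
A1 = (-sp.diff(E * sp.diff(Phi, x), y) - sp.diff(E * sp.diff(Phi, y), x)
      - sp.diff(D * Phi, y))                                                 # (4.22b)
A2 = M * Phi - sp.diff(E * sp.diff(Phi, x), x) - sp.diff(D * Phi, x)         # (4.22c)

print(sp.simplify(full - sub(A0 / eps**2 + A1 / eps + A2)))                  # prints 0
```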

    Since the outward normal $ {\boldsymbol{n}} $ on $ \partial\mathcal{T} $ depends only on $ {\boldsymbol{y}} $ and the outward normal $ {\boldsymbol{n}}^\epsilon $ on $ \partial_{int}\Omega^\epsilon = \partial\mathcal{T}^\epsilon\cap\Omega $ is defined as the $ Y $-periodic function $ \left.{\boldsymbol{n}}\right|_{{\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon} $, one has

    $ \begin{multline} \frac{\partial{\boldsymbol{\Phi}}^\epsilon}{\partial\nu_{\mathit{D}^\epsilon}} = \left(\mathit{E}^\epsilon\cdot\frac{\mathrm{d}{\boldsymbol{\Phi}}^\epsilon}{\mathrm{d}{\boldsymbol{x}}}+\mathit{D}^\epsilon{\boldsymbol{\Phi}}^\epsilon\right)\cdot{\boldsymbol{n}}^\epsilon\\ = \left(\frac{1}{\epsilon}\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{\Phi}}+\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{\Phi}}+\mathit{D}{\boldsymbol{\Phi}}\right)\cdot{\boldsymbol{n}}^\epsilon\qquad\qquad\qquad\qquad\qquad\quad\\ = : \frac{1}{\epsilon} \frac{\partial{\boldsymbol{\Phi}}^\epsilon}{\partial\nu_{\mathit{E}}}+\frac{\partial{\boldsymbol{\Phi}}^\epsilon}{\partial\nu_{\mathit{D}}}.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\! \end{multline} $ (4.24)

    Inserting (4.19a), (4.19b), (4.21)–(4.24) into the Neumann problem (4.18) and expanding the full problem into powers of $ \epsilon $, we obtain the following auxiliary systems:

    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{V}}^0 = 0&\text{ in }(0,T)\times\Omega\times Y^*,\cr \frac{\partial {\boldsymbol{V}}^0}{\partial\nu_{\mathit{E}}} = 0&\text{ on }(0,T)\times\Omega\times\partial\mathcal{T},\cr {\boldsymbol{V}}^0 = 0&\text{ on }(0,T)\times\partial\Omega\times Y^*,\cr {\boldsymbol{V}}^0\quad\text{ $Y$-periodic,} \end{array}\right.\end{equation} $ (4.25)
    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{V}}^1 = -\mathcal{A}^1 {\boldsymbol{V}}^0&\text{ in }(0,T)\times\Omega\times Y^*,\cr \frac{\partial {\boldsymbol{V}}^1}{\partial\nu_{\mathit{E}}} = -\frac{\partial {\boldsymbol{V}}^0}{\partial\nu_{\mathit{D}}}&\text{ on }(0,T)\times\Omega\times\partial\mathcal{T},\cr {\boldsymbol{V}}^1 = 0&\text{ on }(0,T)\times\partial\Omega\times Y^*,\cr {\boldsymbol{V}}^1\quad\text{ $Y$-periodic,} \end{array}\right.\end{equation} $ (4.26)
    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{V}}^2 = -\mathcal{A}^1 {\boldsymbol{V}}^1-\mathcal{A}^2{\boldsymbol{V}}^0+{\boldsymbol{H}}+\mathcal{H}^0{\boldsymbol{U}}^0&\text{ in }(0,T)\times\Omega\times Y^*,\cr \frac{\partial {\boldsymbol{V}}^2}{\partial\nu_{\mathit{E}}} = -\frac{\partial {\boldsymbol{V}}^1}{\partial\nu_{\mathit{D}}}&\text{ on }(0,T)\times\Omega\times\partial\mathcal{T},\cr {\boldsymbol{V}}^2 = 0&\text{ on }(0,T)\times\partial\Omega\times Y^*,\cr {\boldsymbol{V}}^2\quad\text{ $Y$-periodic.} \end{array}\right. \end{equation} $ (4.27)

    For $ i\geq3 $, we have

    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{V}}^i = -\mathcal{A}^1{\boldsymbol{V}}^{i-1}-\mathcal{A}^2{\boldsymbol{V}}^{i-2}+\mathcal{H}^0{\boldsymbol{U}}^{i-2}+\mathcal{H}^1{\boldsymbol{U}}^{i-3}&\text{ in }(0,T)\times\Omega\times Y^*,\cr \frac{\partial {\boldsymbol{V}}^i}{\partial\nu_{\mathit{E}}} = -\frac{\partial {\boldsymbol{V}}^{i-1}}{\partial\nu_{\mathit{D}}}&\text{ on }(0,T)\times\Omega\times\partial\mathcal{T},\cr {\boldsymbol{V}}^i = 0&\text{ on }(0,T)\times\partial\Omega\times Y^*,\cr {\boldsymbol{V}}^i\quad\text{ $Y$-periodic.} \end{array}\right. \end{equation} $ (4.28)

    Furthermore, we have

    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{L}{\boldsymbol{U}}^0 = \mathit{G}{\boldsymbol{V}}^0&\text{ in }(0,T)\times\Omega\times Y^*,\cr {\boldsymbol{U}}^0 = {\boldsymbol{U}}^*&\text{ in } \{0\}\times\Omega\times Y^*,\cr {\boldsymbol{U}}^0\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} $ (4.29)

    and, for $ j\geq1 $,

    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{L}{\boldsymbol{U}}^j = \mathit{G}{\boldsymbol{V}}^j&\text{ in }(0,T)\times\Omega\times Y^*,\cr {\boldsymbol{U}}^j = 0&\text{ in } \{0\}\times\Omega\times Y^*,\cr {\boldsymbol{U}}^j\quad\text{ $Y$-periodic.} \end{array}\right. \end{equation} $ (4.30)

    The existence and uniqueness of weak solutions of the systems (4.25)–(4.28) is stated in the following Lemma:

    Lemma 3. Let $ F\in L^2(Y^*) $ and $ g\in L^2(\partial \mathcal{T}) $ be $ Y $-periodic. Let $ \mathit{A}({\boldsymbol{y}})\in L^\infty_\#(Y^*)^{n\times n} $ satisfy $ \sum\limits_{i, j = 1}^n\mathit{A}_{ij}({\boldsymbol{y}})\xi_i\xi_j\geq a\sum\limits_{i = 1}^n\xi_i^2 $ for all $ {\boldsymbol{\xi}}\in{\bf{R}}^n $ and some $ a > 0 $.

    Consider the following boundary value problem for $ \omega({\boldsymbol{y}}) $:

    $ \begin{equation} \left\{\begin{array}{ll}-\nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{A}({\boldsymbol{y}})\cdot\nabla_{{\boldsymbol{y}}}\omega\right) = F({\boldsymbol{y}})&\text{ on }Y^*,\cr -\left[\mathit{A}({\boldsymbol{y}})\nabla_{{\boldsymbol{y}}}\omega\right]\cdot{\boldsymbol{n}} = g({\boldsymbol{y}})&\text{ on }\partial\mathcal{T},\cr \omega\quad\text{ $Y$-periodic.}\cr \end{array}\right. \end{equation} $ (4.31)

    Then the following statements hold:

    (i) There exists a weak $ Y $-periodic solution $ \omega\in H^1_\#(Y^*)/{\bf{R}} $ to (4.31) if and only if $ \int_{Y^*}F({\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}} = \int_{\partial \mathcal{T}}g({\boldsymbol{y}})\mathrm{d}\sigma_y $.

    (ii) If the compatibility condition in (i) holds, then the weak solution is unique up to an additive constant.

    Proof. See Lemma 2.1 in [26].
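    To make the compatibility condition in Lemma 3 concrete, the following numerical sketch treats a one-dimensional periodic analogue of (4.31) (no perforation, hypothetical coefficient and data): a zero-mean right-hand side admits a periodic solution, whereas a right-hand side with non-zero mean does not.

```python
# 1-D illustration of Lemma 3: -(A(y) w')' = F(y), Y-periodic, solvable iff mean(F) = 0.
import numpy as np

n = 256
y = np.linspace(0.0, 1.0, n, endpoint=False)
h = 1.0 / n
A = 2.0 + np.cos(2 * np.pi * y)                 # uniformly elliptic, Y-periodic coefficient
F = np.sin(2 * np.pi * y)                       # zero-mean right-hand side

A_half = 0.5 * (A + np.roll(A, -1))             # coefficient at the interfaces y_{i+1/2}
def apply_L(w):
    flux = A_half * (np.roll(w, -1) - w) / h    # A(y_{i+1/2}) (w_{i+1} - w_i)/h
    return -(flux - np.roll(flux, 1)) / h       # -(d/dy)(A dw/dy), periodic stencil

L = np.column_stack([apply_L(e) for e in np.eye(n)])
L_aug = np.vstack([L, np.ones((1, n))])         # fix the additive constant: zero mean
omega, *_ = np.linalg.lstsq(L_aug, np.concatenate([F, [0.0]]), rcond=None)
print('mean(F) = 0, residual:', np.linalg.norm(apply_L(omega) - F))          # ~ 0

bad, *_ = np.linalg.lstsq(L_aug, np.concatenate([F + 1.0, [0.0]]), rcond=None)
print('mean(F) = 1, residual:', np.linalg.norm(apply_L(bad) - (F + 1.0)))    # stays large
```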

    Existence and uniqueness of the solutions of the systems (4.29) and (4.30) can be handled via Rothe's method (see [31] for details) combined with Gronwall's inequality; see [30] for various useful discrete versions of Gronwall's inequality.

    Lemma 4. The function $ {\boldsymbol{V}}^0 $ depends only on $ (t, {\boldsymbol{x}})\in(0, T)\times\Omega $.

    Proof. Applying Lemma 3 to system (4.25) yields the weak solution $ {\boldsymbol{V}}^0(t, {\boldsymbol{x}}, {\boldsymbol{y}})\in H^1_\#(Y^*)/{\bf{R}} $ pointwise in $ (t, {\boldsymbol{x}})\in(0, T)\times\Omega $, with uniqueness ensured up to an additive function depending only on $ (t, {\boldsymbol{x}})\in(0, T)\times\Omega $. Testing (4.25) directly with $ {\boldsymbol{V}}^0 $ yields $ \|\nabla_{{\boldsymbol{y}}}{\boldsymbol{V}}^0\|_{L^2_\#(Y^*)} = 0 $. Hence, $ \nabla_{{\boldsymbol{y}}}{\boldsymbol{V}}^0 = 0 $ a.e. in $ Y^* $, so $ {\boldsymbol{V}}^0 $ depends only on $ (t, {\boldsymbol{x}}) $.

    Corollary 3. The function $ {\boldsymbol{U}}^0 $ depends only on $ (t, {\boldsymbol{x}})\in(0, T)\times\Omega $.

    Proof. Apply the gradient $ \nabla_{{\boldsymbol{y}}} $ to system (4.29). The independence of $ {\boldsymbol{y}} $ follows directly from (A1) and Lemma 4.

    The application of Lemma 3 to system (4.26) yields, due to the divergence theorem, again a weak solution $ {\boldsymbol{V}}^1(t, {\boldsymbol{x}}, {\boldsymbol{y}})\in H^1_\#(Y^*)/{\bf{R}} $ pointwise in $ (t, {\boldsymbol{x}})\in(0, T)\times\Omega $ with uniqueness ensured up to an additive function depending only on $ (t, {\boldsymbol{x}})\in(0, T)\times\Omega $. One can determine $ {\boldsymbol{V}}^1 $ from $ {\boldsymbol{V}}^0 $ with the use of a decomposition of $ {\boldsymbol{V}}^1 $ into products of $ {\boldsymbol{V}}^0 $ derivatives and so-called cell functions:

    $ \begin{equation} {\boldsymbol{V}}^1 = {\boldsymbol{W}}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{Z}{\boldsymbol{V}}^0+\tilde{{\boldsymbol{V}}}^1 \\ \end{equation} $ (4.32)

    with $ \tilde{{\boldsymbol{V}}}^1 $ the $ Y^* $-average of $ {\boldsymbol{V}}^1 $, which satisfies $ \nabla_{{\boldsymbol{y}}}\tilde{{\boldsymbol{V}}}^1 = \mathit{0} $, and where, for $ \alpha, \beta\in\{1, \ldots, N\} $ and $ i\in\{1, \ldots, d\} $, the cell functions satisfy

    $ \begin{equation} \mathit{Z}_{\alpha\beta},W_{i}\in L^\infty({\bf{R}}_+;W^{2,\infty}(\Omega;C^2_\#(Y^*)/{\bf{R}})) \end{equation} $ (4.33)

    with vanishing $ Y^* $-average. Insertion of (4.32) into system (4.26) leads to systems for the cell-functions $ {\boldsymbol{W}} $ and $ \mathit{Z} $:

    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{W}} = -\nabla_{{\boldsymbol{y}}}\cdot\mathit{E}&\text{ in }Y^*,\cr \frac{\partial {\boldsymbol{W}}}{\partial\nu_{\mathit{E}}} = -{\boldsymbol{n}}\cdot\mathit{E}&\text{ on }\partial \mathcal{T},\cr {\boldsymbol{W}}\quad\text{ $Y$-periodic,}\cr \frac{1}{|Y|}\int_{Y^*}{\boldsymbol{W}}\mathrm{d}{\boldsymbol{y}} = {\boldsymbol{0}}. \end{array}\right.\end{equation} $ (4.34)

    and

    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{Z} = -\nabla_{{\boldsymbol{y}}}\cdot\mathit{D}&\text{ in }Y^*,\cr \frac{\partial \mathit{Z}}{\partial\nu_{\mathit{E}}} = -{\boldsymbol{n}}\cdot\mathit{D}&\text{ on }\partial \mathcal{T},\cr \mathit{Z}_{\alpha\beta}\quad\text{ $Y$-periodic,}\cr \frac{1}{|Y|}\int_{Y^*}\mathit{Z}\mathrm{d}{\boldsymbol{y}} = \mathit{0}. \end{array}\right.\end{equation} $ (4.35)

    Again the existence and uniqueness up to an additive constant of the cell functions in systems (4.34) and (4.35) follow from Lemma 3 and convenient applications of the divergence theorem. The regularity of solutions follows from Theorem 9.25 and Theorem 9.26 in [32].

    Existence and uniqueness for $ {\boldsymbol{V}}^2 $ follow from applying Lemma 3 to system (4.27), which requires a solvability condition to be satisfied. Using the divergence theorem, this solvability condition becomes

    $ \begin{multline} \int_{Y^*}\mathcal{A}^2{\boldsymbol{V}}^0+\mathcal{A}^1\left[({\boldsymbol{W}}\cdot\nabla_{{\boldsymbol{x}}}+\mathit{Z}){\boldsymbol{V}}^0\right]+\nabla_{{\boldsymbol{y}}}\cdot\left[(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}+\mathit{D})({\boldsymbol{W}}\cdot\nabla_{{\boldsymbol{x}}}+\mathit{Z}){\boldsymbol{V}}^0\right]\mathrm{d}{\boldsymbol{y}}\\ = \int_{Y^*}{\boldsymbol{H}}\mathrm{d}{\boldsymbol{y}}+\int_{Y^*}\mathcal{H}^0\mathrm{d}{\boldsymbol{y}}\;{\boldsymbol{U}}^0. \end{multline} $ (4.36)

    Inserting (4.22b), (4.22c), and (4.23a) and using both $ \nabla_{{\boldsymbol{y}}}{\boldsymbol{V}}^0 = \mathit{0} $ and $ \nabla_{{\boldsymbol{y}}}{\boldsymbol{U}}^0 = \mathit{0} $, we find

    $ \begin{multline} \int_{Y^*}\mathit{M}{\boldsymbol{V}}^0\mathrm{d}{\boldsymbol{y}}-\int_{Y^*}\nabla_{{\boldsymbol{x}}}\cdot(\mathit{D}{\boldsymbol{V}}^0)\mathrm{d}{\boldsymbol{y}}-\int_{Y^*}\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0\right)\mathrm{d}{\boldsymbol{y}}\\ -\int_{Y^*}\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}\cdot\left[\nabla_{{\boldsymbol{y}}}({\boldsymbol{W}}\cdot\nabla_{{\boldsymbol{x}}}+\mathit{Z})\right]{\boldsymbol{V}}^0\right)\mathrm{d}{\boldsymbol{y}} = \int_{Y^*}{\boldsymbol{H}}\mathrm{d}{\boldsymbol{y}}+\int_{Y^*}\mathit{K}\mathrm{d}{\boldsymbol{y}}\;{\boldsymbol{U}}^0, \end{multline} $ (4.37)

    which, after rearrangement, becomes

    $ \begin{multline} \int_{Y^*}\mathit{M}\mathrm{d}{\boldsymbol{y}}{\boldsymbol{V}}^0-\nabla_{{\boldsymbol{x}}}\cdot\left(\int_{Y^*}\mathit{E}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}}\mathrm{d}{\boldsymbol{y}}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0\right)\\ -\nabla_{{\boldsymbol{x}}}\cdot\left(\int_{Y^*}\mathit{D}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\mathrm{d}{\boldsymbol{y}}{\boldsymbol{V}}^0\right) = \int_{Y^*}{\boldsymbol{H}}\mathrm{d}{\boldsymbol{y}}+\int_{Y^*}\mathit{K}\mathrm{d}{\boldsymbol{y}}\;{\boldsymbol{U}}^0. \end{multline} $ (4.38)

    Dividing all terms by $ |Y| $, we realize that this solvability condition is an upscaled version of (2.8a):

    $ \begin{equation} \overline{\mathit{M}}{\boldsymbol{V}}^0-\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}^*\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{D}^*{\boldsymbol{V}}^0\right) = \overline{{\boldsymbol{H}}}+\overline{\mathit{K}}{\boldsymbol{U}}^0, \end{equation} $ (4.39)

    where we have used the cell-function decomposition (4.32) and introduced the short-hand notation

    $ \begin{align} \mathit{E}^* & = \frac{1}{|Y|}\int_{Y^*}\mathit{E}\cdot\left(\mathit{1}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}}\right)\mathrm{d}{\boldsymbol{y}}, \end{align} $ (4.40a)
    $ \begin{align} \mathit{D}^* & = \frac{1}{|Y|}\int_{Y^*}\mathit{D}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\mathrm{d}{\boldsymbol{y}}. \end{align} $ (4.40b)
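    In one space dimension (and without perforation) these formulas become fully explicit: the cell problem (4.34) forces $ E(1+\partial_y W) $ to be constant in $ {\boldsymbol{y}} $, and periodicity fixes this constant to the harmonic mean of $ E $, so $ E^* $ is the harmonic mean. The sketch below, with a hypothetical scalar coefficient, checks this discretely.

```python
# 1-D sanity check of the effective coefficient (4.40a); the coefficient is hypothetical.
import numpy as np

n = 1000
y = np.linspace(0.0, 1.0, n, endpoint=False)
E = 1.5 + np.sin(2 * np.pi * y) ** 2            # Y-periodic, uniformly elliptic

# The 1-D cell problem -(E (1 + W'))' = 0 with W Y-periodic gives E (1 + W') = const,
# and the constant equals the harmonic mean of E.
c = 1.0 / np.mean(1.0 / E)
W_prime = c / E - 1.0

E_star = np.mean(E * (1.0 + W_prime))           # discrete version of (4.40a)
print(E_star, 1.0 / np.mean(1.0 / E))           # both equal the harmonic mean
print('arithmetic mean (upper bound):', np.mean(E))
```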

    Lemma 5. The pair $ ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0)\in H^1((0, T)\times\Omega)\times L^\infty((0, T); H^1_0(\Omega)) $ is a weak solution of the following system:

    $ \begin{equation} \left\{\begin{array}{ll} \overline{\mathit{M}}{\boldsymbol{V}}^0-\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}^*\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{D}^*{\boldsymbol{V}}^0\right) = \overline{{\boldsymbol{H}}}+\overline{\mathit{K}}{\boldsymbol{U}}^0&\text{ in }(0,T)\times\Omega,\cr \frac{\partial {\boldsymbol{U}}^0}{\partial t}+\mathit{L}{\boldsymbol{U}}^0 = \mathit{G}{\boldsymbol{V}}^0&\text{ in }(0,T)\times\Omega,\cr {\boldsymbol{V}}^0 = {\boldsymbol{0}}&\text{ on }(0,T)\times\partial\Omega,\cr {\boldsymbol{U}}^0 = {\boldsymbol{U}}^*&\text{ on }\{0\}\times\Omega. \end{array}\right. \end{equation} $ (4.41)

    Proof. From system (4.25), equation (4.39), $ \nabla_{{\boldsymbol{y}}}{\boldsymbol{V}}^0 = \mathit{0} $, assumption (A3) and system (4.29), we see that $ \nabla_{{\boldsymbol{y}}}{\boldsymbol{U}}^0 = \mathit{0} $. This leads automatically to system (4.41), since there is no $ {\boldsymbol{y}} $-dependence and $ \Omega^\epsilon\subset\Omega $, $ \Omega^\epsilon\rightarrow\Omega $, $ \partial_{ext}\Omega^\epsilon = \partial\Omega $. Analogous to the proof of Theorem 6 we obtain the required spatial regularity. Moreover, by testing the second line with $ \frac{\partial}{\partial t}{\boldsymbol{U}}^0 $, applying a gradient to the second line and testing it with $ \frac{\partial}{\partial t}\nabla{\boldsymbol{U}}^0 $, we obtain the required temporal regularity as well.

    Theorem 8. Let (A1)–(A3) hold. Then $ ({\boldsymbol{u}}, {\boldsymbol{v}}) = ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) $.

    Proof. From (P$ ^0_s $) and Lemma 5, we see that $ ({\boldsymbol{u}}, {\boldsymbol{v}}) $ and $ ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) $ satisfy the same linear boundary value problem. We only have to prove the uniqueness for this boundary value problem.

    From testing (4.35) with $ {\boldsymbol{W}} $ and (4.34) with $ \mathit{Z} $, we obtain the identity

    $ \begin{equation} \int_{Y^*}(\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})^\top\cdot\mathit{D}\mathrm{d}{\boldsymbol{y}} = \int_{Y^*}\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\mathrm{d}{\boldsymbol{y}}. \end{equation} $ (4.42)

    Hence, from (4.40b) we get

    $ \begin{equation} \mathit{D}^* = \frac{1}{|Y|}\int_{Y^*}\left(\mathit{1}+(\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\right)^\top\cdot\mathit{D}\mathrm{d}{\boldsymbol{y}}. \end{equation} $ (4.43)

    Moreover, testing system (4.34) with $ {\boldsymbol{W}} $ yields the identity

    $ \begin{equation} \mathit{E}^* = \frac{1}{|Y|}\int_{Y^*}\left(\mathit{1}+(\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\right)^\top\cdot\mathit{E}\cdot\left(\mathit{1}+(\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\right)\mathrm{d}{\boldsymbol{y}}. \end{equation} $ (4.44)

    We subtract (P$ ^0_s $) from (4.41) and introduce $ \tilde{{\boldsymbol{U}}} $, $ \tilde{{\boldsymbol{V}}} $ as

    $ \begin{equation} \tilde{{\boldsymbol{U}}} = {\boldsymbol{U}}^0-{\boldsymbol{u}}\quad\text{ and }\quad\tilde{{\boldsymbol{V}}} = {\boldsymbol{V}}^0-{\boldsymbol{v}}. \end{equation} $ (4.45)

    Testing with $ \tilde{{\boldsymbol{V}}} $, we obtain the equation

    $ \begin{equation} 0 = \int_\Omega\frac{1}{|Y|}\int_{Y^*}\left[\tilde{{\boldsymbol{V}}}^\top\mathit{M}\tilde{{\boldsymbol{V}}}+{\boldsymbol{\zeta}}^\top\cdot\left(\mathit{E}\cdot{\boldsymbol{\zeta}}+\mathit{D}\tilde{{\boldsymbol{V}}}\right) - \tilde{{\boldsymbol{V}}}^\top\mathit{K}\tilde{{\boldsymbol{U}}}\right]\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}, \end{equation} $ (4.46)

    where

    $ \begin{equation} {\boldsymbol{\zeta}} = \left(\mathit{1}+(\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\right)\cdot\nabla_{{\boldsymbol{x}}}\tilde{{\boldsymbol{V}}}. \end{equation} $ (4.47)

    This equation is identical to the Neumann problem (2.8a)–(2.9c) with $ {\boldsymbol{H}} = {\boldsymbol{0}} $, $ \mathit{J} = \mathit{0} $, and replacements $ \nabla_{{\boldsymbol{x}}}{\boldsymbol{V}} \rightarrow {\boldsymbol{\zeta}} $, $ {\boldsymbol{U}}\rightarrow\tilde{{\boldsymbol{U}}} $ and $ {\boldsymbol{V}}\rightarrow\tilde{{\boldsymbol{V}}} $ in (2.8a). Moreover, (2.8a) is coercive due to assumption (A3). Therefore, we can follow the argument of the proof of Theorem 6, but we only use equations (4.1) and (4.5a) with constants $ \tilde{H} $ and $ \tilde{J}_{i\alpha} $ set to $ 0 $. For some $ R > 0 $, this leads to

    $ \begin{equation} \frac{\partial}{\partial t}\|\tilde{{\boldsymbol{U}}}\|_{L^2(\Omega;L^2_\#(Y^*))^N}^2\leq R\|\tilde{{\boldsymbol{U}}}\|_{L^2(\Omega;L^2_\#(Y^*))^N}^2. \end{equation} $ (4.48)

    Applying Gronwall's inequality and using the initial value $ \tilde{{\boldsymbol{U}}} = {\boldsymbol{U}}^*-{\boldsymbol{U}}^* = {\boldsymbol{0}} $, we obtain $ \|\tilde{{\boldsymbol{U}}}\|_{L^2(\Omega; L^2_\#(Y^*))^N} = 0 $ a.e. in $ (0, T) $. By the coercivity, we obtain $ \|\tilde{{\boldsymbol{V}}}\|_{L^2(\Omega; L^2_\#(Y^*))^N} = 0 $ and $ \|{\boldsymbol{\zeta}}\|_{L^2(\Omega; L^2_\#(Y^*))^N} = 0 $.

    From the proof of Proposition 6.12 in [33], we see that the kernel of $ \mathit{1}+\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}} $ contains no non-zero $ Y $-periodic functions. Therefore, $ {\boldsymbol{\zeta}} = {\boldsymbol{0}} $ yields $ \nabla_{{\boldsymbol{x}}}\tilde{{\boldsymbol{V}}} = \mathit{0} $. Thus, we have $ \tilde{{\boldsymbol{U}}} = {\boldsymbol{0}} $ in $ L^\infty((0, T); L^2(\Omega))^N $ and $ \tilde{{\boldsymbol{V}}} = {\boldsymbol{0}} $ in $ L^\infty((0, T); H^1_0(\Omega))^N $. Hence, $ ({\boldsymbol{u}}, {\boldsymbol{v}}) = ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) $.

    Corollary 4. Let $ \lambda\geq0 $ and $ \tilde{\kappa}\geq0 $ be as in Theorem 6. Then there exists a positive constant $ C $ independent of $ \epsilon $ such that

    $ \begin{equation} \|{\boldsymbol{U}}^0\|_{H^1(\Omega^\epsilon)^N}(t)\leq Ce^{\lambda t},\qquad \|{\boldsymbol{V}}^0\|_{\mathbb{V}_\epsilon^N}(t)\leq C(1+\tilde{\kappa}e^{\lambda t}) \end{equation} $ (4.49)

    holds for $ t\geq0 $.

    Proof. Bochner's Theorem, see Chapter 2 in [28], states that $ \|\hat{{\boldsymbol{U}}}^\epsilon\|_{H^1(\Omega)^N}(t) $, $ \|\hat{{\boldsymbol{V}}}^\epsilon\|_{H^1_0(\Omega)^N}(t) $, $ \|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t) $, and $ \|{\boldsymbol{V}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t) $ are Lebesgue integrable, and, therefore, elements of $ L^1(0, T) $. Since $ \Omega $ does not depend on $ t $, Theorem 6 is applicable for a.e. $ t\in(0, T) $. From Theorem 6 we obtain that both $ \|\hat{{\boldsymbol{U}}}^\epsilon\|_{H^1(\Omega)^N}(t) $ and $ \|\hat{{\boldsymbol{V}}}^\epsilon\|_{H^1_0(\Omega)^N}(t) $ have $ \epsilon $-independent upper bounds for a.e. $ t\in(0, T) $. Hence, the Eberlein-Šmuljan Theorem states that there is a subsequence $ \epsilon'\subset\epsilon $ such that $ \hat{{\boldsymbol{U}}}^{\epsilon'}(t) $ and $ \hat{{\boldsymbol{V}}}^{\epsilon'}(t) $ converge weakly in $ H^1(\Omega)^N $ and $ H^1_0(\Omega)^N $, respectively, for a.e. $ t\in(0, T) $. Moreover, Lemma 2 states that $ \hat{{\boldsymbol{U}}}^{\epsilon'} $ and $ \hat{{\boldsymbol{V}}}^{\epsilon'} $ two scale converge (and therefore weakly) to $ {\boldsymbol{u}}\in H^1(0, T; L^2(\Omega))^N $ and $ {\boldsymbol{v}}\in L^\infty(0, T; L^2(\Omega))^N $, respectively. Limits of weak convergences are unique. Hence, $ \hat{{\boldsymbol{U}}}^{\epsilon'}(t)\rightharpoonup{\boldsymbol{u}}(t) $ in $ H^1(\Omega)^N $ and $ \hat{{\boldsymbol{V}}}^{\epsilon'}(t)\rightharpoonup{\boldsymbol{v}}(t) $ in $ H^1_0(\Omega)^N $ for a.e. $ t\in (0, T) $ as $ \epsilon'\downarrow0 $.

    Using these weak convergences, the inequalities (2.5) and (2.6), Theorem 6, and the limit-inferior property of weakly convergent sequences, we obtain

    $ \begin{align} \|{\boldsymbol{U}}^0\|_{H^1(\Omega^\epsilon)^N}(t)&\leq \|{\boldsymbol{U}}^0\|_{H^1(\Omega)^N}(t) = \liminf\limits_{\epsilon\rightarrow 0}\|\hat{{\boldsymbol{U}}}^\epsilon\|_{H^1(\Omega)^N}(t)\cr &\leq \liminf\limits_{\epsilon\rightarrow 0}C\|{\boldsymbol{U}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}(t)\leq C e^{\lambda t}, \end{align} $ (4.50a)
    $ \begin{align} \|{\boldsymbol{V}}^0\|_{\mathbb{V}_\epsilon^N}(t)&\leq \|{\boldsymbol{V}}^0\|_{H^1_0(\Omega)^N}(t) = \liminf\limits_{\epsilon\rightarrow 0}\|\hat{{\boldsymbol{V}}}^\epsilon\|_{H^1_0(\Omega)^N}(t)\cr &\leq \liminf\limits_{\epsilon\rightarrow 0}C\|{\boldsymbol{V}}^\epsilon\|_{\mathbb{V}_\epsilon^N}(t)\leq C\left(1+\tilde{\kappa}e^{\lambda t}\right). \end{align} $ (4.50b)

    Hence, the bounds of Theorem 6 hold for $ {\boldsymbol{U}}^0 $ and $ {\boldsymbol{V}}^0 $ as well.

    This concludes the proof of Theorem 3.

    It is natural to determine the speed of convergence of the weak solutions $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) $ to $ ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) $. However, certain boundary effects are expected due to the intersection of the external boundary with the perforated periodic cells. It is clear that $ \Omega^\epsilon\rightarrow\Omega $ for $ \epsilon\downarrow0 $, but the boundary effects impact the periodic behavior, which can lead to $ {\boldsymbol{V}}^j\neq{\boldsymbol{0}} $ at $ \partial_{ext}\Omega^\epsilon $ for $ j > 0 $. Hence, a cut-off function is introduced to remove this potentially problematic part of the domain. The use of a cut-off function is a standard technique in corrector estimates for periodic homogenization. See [17] for a similar introduction of a cut-off function.

    Let us again introduce the cut-off function $ M_\epsilon $ defined by

    $ \begin{equation} \left\{\begin{array}{ll} M_\epsilon\in\mathcal{D}(\Omega),\cr M_\epsilon = 0 &\text{ if }\mathrm{dist}({\boldsymbol{x}},\partial\Omega)\leq\epsilon,\cr M_\epsilon = 1 &\text{ if }\mathrm{dist}({\boldsymbol{x}},\partial\Omega)\geq2\epsilon,\cr \epsilon\left|\frac{\mathrm{d} M_\epsilon}{\mathrm{d} x_i}\right|\leq C&i\in\{1,\ldots,d\}. \end{array}\right. \end{equation} $ (5.1)
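    A one-dimensional surrogate of such a cut-off on $ \Omega = (0, 1) $ can be built from a smoothstep ramp; the sketch below is only $ C^1 $ rather than an element of $ \mathcal{D}(\Omega) $, but it reproduces the properties listed in (5.1), in particular the bound $ \epsilon|\mathrm{d}M_\epsilon/\mathrm{d}x|\leq C $.

```python
# Hypothetical 1-D realization of the cut-off M_eps in (5.1) on Omega = (0, 1).
import numpy as np

def cutoff(x, eps):
    d = np.minimum(x, 1.0 - x)                  # distance to the boundary of (0, 1)
    s = np.clip((d - eps) / eps, 0.0, 1.0)      # ramps from 0 at d = eps to 1 at d = 2*eps
    return s * s * (3.0 - 2.0 * s)              # C^1 smoothstep, slope at most 1.5/eps

eps = 0.05
x = np.linspace(0.0, 1.0, 2001)
M = cutoff(x, eps)
d = np.minimum(x, 1.0 - x)
print(np.all(M[d <= eps] == 0.0),               # M_eps = 0 within distance eps
      np.all(M[d >= 2 * eps] == 1.0),           # M_eps = 1 beyond distance 2*eps
      eps * np.max(np.abs(np.gradient(M, x))))  # eps * |M_eps'| stays O(1)
```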

    With this cut-off function defined, we introduce again the error functions

    $ \begin{align} {\boldsymbol{\Phi}}^\epsilon& = {\boldsymbol{V}}^\epsilon-{\boldsymbol{V}}^0-M_\epsilon(\epsilon {\boldsymbol{V}}^1+\epsilon^2 {\boldsymbol{V}}^2), \end{align} $ (5.2a)
    $ \begin{align} {\boldsymbol{\Psi}}^\epsilon& = {\boldsymbol{U}}^\epsilon-{\boldsymbol{U}}^0-M_\epsilon(\epsilon {\boldsymbol{U}}^1+\epsilon^2 {\boldsymbol{U}}^2), \end{align} $ (5.2b)

    where the $ M_\epsilon $ terms are the so-called corrector terms.
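    The effect of such corrector terms can already be seen on a hypothetical one-dimensional scalar model problem (not the pseudo-parabolic system studied here): without an $ \epsilon{\boldsymbol{V}}^1 $-type corrector the gradient of the error does not become small as $ \epsilon\downarrow0 $, while the corrected approximation does. The following numerical sketch illustrates this.

```python
# Hypothetical 1-D model problem illustrating why corrector terms are needed.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def solve_eps(eps, n=20000):
    # -d/dx( a(x/eps) du/dx ) = 1 on (0,1), u(0) = u(1) = 0, with a(y) = 2 + cos(2*pi*y)
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    am = 2.0 + np.cos(np.pi * (x[:-1] + x[1:]) / eps)      # a at the cell midpoints
    A = diags([am[:-1] + am[1:], -am[1:-1], -am[1:-1]], [0, 1, -1], format='csc') / h**2
    u = np.zeros(n + 1)
    u[1:-1] = spsolve(A, np.ones(n - 1))
    return x, u

eps = 0.02                                                 # exactly 50 cells fit in (0, 1)
x, u_eps = solve_eps(eps)
h = x[1] - x[0]
a_star = np.sqrt(3.0)                                      # harmonic mean of 2 + cos(2*pi*y)
u0 = x * (1.0 - x) / (2.0 * a_star)                        # homogenized solution
du0 = (1.0 - 2.0 * x) / (2.0 * a_star)

# corrector eps*W(x/eps)*du0 with W'(y) = a*/a(y) - 1, integrated numerically in x
Wp = a_star / (2.0 + np.cos(2 * np.pi * x / eps)) - 1.0
epsW = np.concatenate([[0.0], np.cumsum(0.5 * (Wp[1:] + Wp[:-1]) * h)])
epsW -= epsW.mean()                                        # zero average, cf. (4.34)

for name, v in [('u0 alone      ', u0), ('u0 + corrector', u0 + epsW * du0)]:
    err = u_eps - v
    print(name, ' L2 error:', np.sqrt(h * np.sum(err**2)),
          ' max gradient error:', np.abs(np.gradient(err, x)).max())
```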

    The solvability condition for system (4.27) naturally leads to the fact that $ ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) $ has to satisfy system (4.41). Similar to solving system (4.26) for $ {\boldsymbol{V}}^1 $, we handle system (4.27) for $ {\boldsymbol{V}}^2 $ with a decomposition into cell-functions:

    $ \begin{equation} {\boldsymbol{V}}^2 = {\boldsymbol{P}} + \mathit{Q}^0{\boldsymbol{V}}^0+\mathit{R}^0{\boldsymbol{U}}^0+\mathit{Q}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{R}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{U}}^0+\mathit{Q}^2:\mathrm{D}^2_{{\boldsymbol{x}}}{\boldsymbol{V}}^0 \end{equation} $ (5.3)

    where we have the cell-functions

    $ \begin{equation} \begin{array}{rcl} P_{\alpha},R_{\alpha\beta}^0,R_{i\alpha\beta}^1&\in& L^\infty({\bf{R}}_+;W^{2,\infty}(\Omega^\epsilon;C^3_\#(Y^*))),\cr Q_{\alpha\beta}^0,Q_{i\alpha\beta}^1&\in& L^\infty({\bf{R}}_+;W^{2,\infty}(\Omega^\epsilon;C^2_\#(Y^*))),\cr Q^2_{ij}&\in& L^\infty({\bf{R}}_+;W^{2,\infty}(\Omega^\epsilon;C^2_\#(Y^*))) \end{array} \end{equation} $ (5.4)

    for $ \alpha, \beta\in\{1, \ldots, N\} $ and for $ i, j\in\{1, \ldots, d\} $, and where

    $ \begin{equation} (\mathit{Q}^2:\mathrm{D}^2_{{\boldsymbol{x}}}{\boldsymbol{V}}^0)_\alpha : = \sum\limits_{i,j = 1}^dQ^2_{ij}\frac{\partial^2 V^0_\alpha}{\partial x_i\partial x_j}. \end{equation} $ (5.5)
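    In array terms, (5.5) contracts the two spatial indices of $ \mathit{Q}^2 $ against the Hessian of each component of $ {\boldsymbol{V}}^0 $; a minimal numpy illustration with hypothetical array shapes reads:

```python
# Hypothetical shapes: Q2 is (d, d), the Hessian D2V of V^0 is (N, d, d).
import numpy as np

d, N = 3, 2
Q2 = np.random.rand(d, d)
D2V = np.random.rand(N, d, d)                   # D2V[a, i, j] = d^2 V^0_a / (dx_i dx_j)
contracted = np.einsum('ij,aij->a', Q2, D2V)    # (Q^2 : D^2_x V^0)_a as in (5.5)
print(contracted.shape, np.allclose(contracted[0], np.sum(Q2 * D2V[0])))
```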

    The cell-functions $ {\boldsymbol{P}} $, $ \mathit{Q}^0 $, $ \mathit{R}^0 $, $ \mathit{Q}^1 $, $ \mathit{R}^1 $, $ \mathit{Q}^2 $ satisfy the following systems of partial differential equations, obtained from subtracting (4.39) from (4.27) and inserting (5.3):

    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0{\boldsymbol{P}} = {\boldsymbol{H}}-\overline{{\boldsymbol{H}}}&\text{ in }Y^*,\cr \frac{\partial {\boldsymbol{P}}}{\partial\nu_{\mathit{E}}} = {\boldsymbol{0}}&\text{ on }\partial \mathcal{T},\cr {\boldsymbol{P}}\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} $ (5.6)
    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{Q}^0 = \nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}\mathit{Z}\right)+\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\right)+\nabla_{{\boldsymbol{y}}}\cdot\left(\mathit{D}\mathit{Z}\right)\cr \qquad\qquad+\nabla_{{\boldsymbol{x}}}\cdot\left(\mathit{D}-\mathit{D}^*\right)+\overline{\mathit{M}}-\mathit{M}&\text{ in }Y^*,\cr \frac{\partial \mathit{Q}^0}{\partial\nu_{\mathit{E}}} = -\left(\mathit{D}\mathit{Z}+\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}\mathit{Z}\right)\cdot{\boldsymbol{n}}&\text{ on }\partial \mathcal{T},\cr \mathit{Q}^0\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} $ (5.7)
    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{R}^0 = \mathit{K}-\overline{\mathit{K}}&\text{ in }Y^*,\cr \frac{\partial R^0_{\alpha\beta}}{\partial\nu_{\mathit{E}}} = \mathit{0}&\text{ on }\partial \mathcal{T},\cr \mathit{R}^0\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} $ (5.8)
    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{Q}^1 = \nabla_{{\boldsymbol{y}}}\cdot(\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{W}})\otimes\mathit{1}+\nabla_{{\boldsymbol{y}}}\cdot(\mathit{E}\otimes\mathit{Z})\cr \qquad\qquad+\nabla_{{\boldsymbol{x}}}\cdot(\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}})\otimes\mathit{1}+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}\mathit{Z}\cr \qquad\qquad+\nabla_{{\boldsymbol{y}}}\cdot(\mathit{D}\otimes{\boldsymbol{W}})+\nabla_{{\boldsymbol{x}}}\cdot(\mathit{E}-\mathit{E}^*)\otimes\mathit{1}+\mathit{D}-\mathit{D}^*&\text{ in }Y^*,\cr \frac{\partial \mathit{Q}^1}{\partial\nu_{\mathit{E}}} = {\boldsymbol{W}}\otimes(\mathit{D}\cdot{\boldsymbol{n}})+{\boldsymbol{n}}\cdot\left(\mathit{E}\otimes\mathit{Z}+\mathit{E}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{W}}\otimes\mathit{1}\right)&\text{ on }\partial \mathcal{T},\cr \mathit{Q}^1\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} $ (5.9)
    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{R}^1 = \mathit{0}&\text{ in }Y^*,\cr \frac{\partial \mathit{R}^1}{\partial\nu_{\mathit{E}}} = \mathit{0}&\text{ on }\partial \mathcal{T},\cr \mathit{R}^1\quad\text{ $Y$-periodic,} \end{array}\right. \end{equation} $ (5.10)
    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^0\mathit{Q}^2\! = \!\nabla_{{\boldsymbol{y}}}\cdot(\mathit{E}\otimes{\boldsymbol{W}})+\mathit{E}\cdot\nabla_{{\boldsymbol{y}}}{\boldsymbol{W}}+\mathit{E}-\mathit{E}^*&\text{ in }Y^*,\cr \frac{\partial\mathit{Q}^2}{\partial\nu_{\mathit{E}}} = -{\boldsymbol{n}}\cdot\mathit{E}\otimes {\boldsymbol{W}}&\text{ on }\partial \mathcal{T},\cr \mathit{Q}^2\quad\text{ $Y$-periodic.} \end{array}\right. \end{equation} $ (5.11)

    The well-posedness of the cell-problems (4.34)–(5.11) is given by Lemma 3, while the regularity follows from Theorem 9.25 and Theorem 9.26 in [32]. Note that cell-problem (5.10) yields $ \mathit{R}^1 = \mathit{0} $.

    Let $ C $ denote a constant independent of $ \epsilon $, $ {\boldsymbol{x}} $, $ {\boldsymbol{y}} $ and $ t $.

    We rewrite the error-function $ {\boldsymbol{\Phi}}^\epsilon $ as

    $ \begin{equation} {\boldsymbol{\Phi}}^\epsilon = {\boldsymbol{V}}^\epsilon-{\boldsymbol{V}}^0-M_\epsilon(\epsilon {\boldsymbol{V}}^1+\epsilon^2 {\boldsymbol{V}}^2) = {\boldsymbol{\phi}}^\epsilon+(1-M_\epsilon)(\epsilon {\boldsymbol{V}}^1+\epsilon^2{\boldsymbol{V}}^2), \end{equation} $ (5.12)

    where

    $ \begin{equation} {\boldsymbol{\phi}}^\epsilon = {\boldsymbol{V}}^\epsilon-({\boldsymbol{V}}^0+\epsilon {\boldsymbol{V}}^1+\epsilon^2{\boldsymbol{V}}^2). \end{equation} $ (5.13)

    Similarly, we make use of the error-function $ {\boldsymbol{\Psi}}^\epsilon $

    $ \begin{equation} {\boldsymbol{\Psi}}^\epsilon = {\boldsymbol{U}}^\epsilon-{\boldsymbol{U}}^0-M_\epsilon(\epsilon {\boldsymbol{U}}^1+\epsilon^2{\boldsymbol{U}}^2). \end{equation} $ (5.14)

    The goal is to estimate both $ {\boldsymbol{\Phi}}^\epsilon $ and $ {\boldsymbol{\Psi}}^\epsilon $ uniformly in $ \epsilon $.

    Although our problem for $ ({\boldsymbol{U}}^\epsilon, {\boldsymbol{V}}^\epsilon) $ is defined on $ \Omega^\epsilon $ while the asymptotic expansion terms $ ({\boldsymbol{U}}^i, {\boldsymbol{V}}^i) $ are defined on $ \Omega\times Y^* $, we are still able to use spaces defined on $ \Omega^\epsilon $, such as $ \mathbb{V}_\epsilon^N $, since the evaluation $ {\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon $ transfers the zero-extension on $ \mathcal{T} $ to $ \mathcal{T}^\epsilon $.

    Introduce the coercive bilinear form $ a_\epsilon: \mathbb{V}_\epsilon^N\times\mathbb{V}_\epsilon^N\rightarrow{\bf{R}} $ defined as

    $ \begin{equation} a_\epsilon({\boldsymbol{\psi}},{\boldsymbol{\phi}}) = \int_{\Omega^\epsilon}{\boldsymbol{\phi}}^\top \mathcal{A}^\epsilon{\boldsymbol{\psi}}\mathrm{d}{\boldsymbol{x}} \end{equation} $ (5.15)

    pointwise in $ t\in{\bf{R}}_+ $, on which it depends implicitly.

    By construction, $ {\boldsymbol{\Phi}}^\epsilon $ vanishes on $ \partial_{ext}\Omega^\epsilon $, which allows for the estimation of $ \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N} $. This estimation follows the standard approach, see [17] for the details.

    First, the inequality $ |a_\epsilon({\boldsymbol{\Phi}}^\epsilon, {\boldsymbol{\phi}})|\leq \mathcal{C}(\epsilon, t)\|{\boldsymbol{\phi}}\|_{\mathbb{V}_\epsilon^N} $, where $ \mathcal{C}(\epsilon, t) $ is a constant depending on $ \epsilon $ and $ t\in\mathbb{R}_+ $, is obtained for any $ {\boldsymbol{\phi}}\in \mathbb{V}_\epsilon^N $. Second, taking $ {\boldsymbol{\phi}} = {\boldsymbol{\Phi}}^\epsilon $ and using the coercivity, one immediately obtains a bound for $ \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N} $.

    Our pseudo-parabolic system complicates this approach. Instead of $ \mathcal{C}(\epsilon, t) $, one gets $ C\|{\boldsymbol{\Psi}}^\epsilon\|_{H^1_0(\Omega^\epsilon)^N} $. Via an ordinary differential equation for $ {\boldsymbol{\Psi}}^\epsilon $, we obtain a temporal inequality for $ \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1_0(\Omega^\epsilon)^N} $ that contains $ \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N} $. The upper bound for $ \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N} $ now follows from applying Gronwall's inequality, leading to an upper bound for $ \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1_0(\Omega^\epsilon)^N} $.

    From equation (5.13), we have

    $ \begin{equation} a_\epsilon({\boldsymbol{\Phi}}^\epsilon,{\boldsymbol{\phi}}) = a_\epsilon({\boldsymbol{\phi}}^\epsilon,{\boldsymbol{\phi}})+a_\epsilon((1-M_\epsilon)(\epsilon {\boldsymbol{V}}^1+\epsilon^2{\boldsymbol{V}}^2),{\boldsymbol{\phi}}) \end{equation} $ (5.16)

    for $ {\boldsymbol{\phi}}\in\mathbb{V}_\epsilon^N $.

    Note that $ 1-M_\epsilon $ vanishes outside a $ 2\epsilon $-neighbourhood of the boundary $ \partial_{ext}\Omega^\epsilon $, see (5.1), so the second term in (5.16) is supported only in this neighbourhood.

    We start by estimating the first term of (5.16), $ a_\epsilon({\boldsymbol{\phi}}^\epsilon, {\boldsymbol{\phi}}) $. From the asymptotic expansion of $ \mathcal{A}^\epsilon $, we obtain

    $ \begin{multline} \mathcal{A}^\epsilon{\boldsymbol{\phi}}^\epsilon = (\epsilon^{-2}\mathcal{A}^0+\epsilon^{-1}\mathcal{A}^1+\mathcal{A}^2){\boldsymbol{\phi}}^\epsilon\\ = \mathcal{A}^\epsilon {\boldsymbol{V}}^\epsilon - \epsilon^{-2}\mathcal{A}^0{\boldsymbol{V}}^0-\epsilon^{-1}(\mathcal{A}^0{\boldsymbol{V}}^1+\mathcal{A}^1{\boldsymbol{V}}^0)-(\mathcal{A}^0{\boldsymbol{V}}^2+\mathcal{A}^1{\boldsymbol{V}}^1+\mathcal{A}^2{\boldsymbol{V}}^0)\qquad\qquad\qquad\quad\\ -\epsilon(\mathcal{A}^1{\boldsymbol{V}}^2+\mathcal{A}^2{\boldsymbol{V}}^1)-\epsilon^2\mathcal{A}^2{\boldsymbol{V}}^2. \end{multline} $ (5.17)

    Using the definitions of $ \mathcal{A}^0 $, $ \mathcal{A}^1 $, $ \mathcal{A}^2 $, $ {\boldsymbol{V}}^0 $, $ {\boldsymbol{V}}^1 $, $ {\boldsymbol{V}}^2 $, we have

    $ \begin{equation} \mathcal{A}^\epsilon {\boldsymbol{\phi}}^\epsilon = \mathit{K}^\epsilon {\boldsymbol{U}}^\epsilon-\left.\mathit{K}\right|_{{\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon}{\boldsymbol{U}}^0+\epsilon\mathit{J}^\epsilon\nabla {\boldsymbol{U}}^\epsilon-\epsilon(\mathcal{A}^2{\boldsymbol{V}}^1+\mathcal{A}^1{\boldsymbol{V}}^2)-\epsilon^2\mathcal{A}^2{\boldsymbol{V}}^2. \end{equation} $ (5.18)

    The function $ {\boldsymbol{\phi}}^\epsilon $ satisfies the following boundary condition on $ \partial\mathcal{T}^\epsilon $

    $ \begin{equation} \frac{\partial{\boldsymbol{\phi}}^\epsilon}{\partial \nu_{\mathit{D}^\epsilon}} = -\epsilon^2\frac{\partial {\boldsymbol{V}}^2}{\partial \nu_\mathit{D}}, \end{equation} $ (5.19)

    as a consequence of the boundary conditions for the $ {\boldsymbol{V}}^i $-terms. Hence, $ {\boldsymbol{\phi}}^\epsilon $ satisfies the following system:

    $ \begin{equation} \left\{\begin{array}{ll} \mathcal{A}^\epsilon {\boldsymbol{\phi}}^\epsilon = {\boldsymbol{f}}^\epsilon-\epsilon {\boldsymbol{g}}^\epsilon &\text{ in }\Omega^\epsilon,\cr \frac{\partial{\boldsymbol{\phi}}^\epsilon}{\partial \nu_{\mathit{D}^\epsilon}} = \epsilon^2\mathit{h}^\epsilon\cdot{\boldsymbol{n}}^\epsilon&\text{ on }\partial \mathcal{T}^\epsilon,\cr {\boldsymbol{\phi}}^\epsilon = -\epsilon{\boldsymbol{V}}^1-\epsilon^2{\boldsymbol{V}}^2&\text{ on }\partial\Omega. \end{array}\right. \end{equation} $ (5.20)

    Testing with $ {\boldsymbol{\phi}}^\top\in \mathbb{V}_\epsilon^N $ and performing a partial integration, we obtain

    $ \begin{equation} a_\epsilon({\boldsymbol{\phi}}^\epsilon,{\boldsymbol{\phi}}) = \int_{\Omega^\epsilon} {\boldsymbol{\phi}}^\top{\boldsymbol{f}}^\epsilon\mathrm{d}{\boldsymbol{x}}-\int_{\Omega^\epsilon}\epsilon {\boldsymbol{\phi}}^\top{\boldsymbol{g}}^\epsilon\mathrm{d}{\boldsymbol{x}}+\int_{\partial \mathcal{T}^\epsilon}\epsilon^2{\boldsymbol{\phi}}^\top\mathit{h}^\epsilon\cdot{\boldsymbol{n}}^\epsilon\mathrm{d}{\boldsymbol{s}}, \end{equation} $ (5.21)

    where $ {\boldsymbol{f}}^\epsilon $, $ {\boldsymbol{g}}^\epsilon $ and $ \mathit{h}^\epsilon $ are given by

    $ \begin{equation} {\boldsymbol{f}}^\epsilon = \mathit{K}^\epsilon {\boldsymbol{U}}^\epsilon-\left.\mathit{K}\right|_{{\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon}{\boldsymbol{U}}^0 \end{equation} $ (5.22)
    $ \begin{multline} {\boldsymbol{g}}^\epsilon = \mathcal{A}^1\left[{\boldsymbol{P}}+\mathit{Q}^0{\boldsymbol{V}}^0+\mathit{R}^0{\boldsymbol{U}}^0+\mathit{Q}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{R}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{U}}^0+\mathit{Q}^2:\mathrm{D}_{{\boldsymbol{x}}}^2{\boldsymbol{V}}^0\right]\\ +\mathcal{A}^2\left[{\boldsymbol{W}}\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{Z} {\boldsymbol{V}}^0\right]-\mathit{J}^\epsilon\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{U}}^\epsilon\\ +\epsilon \mathcal{A}^2\left[{\boldsymbol{P}}+\mathit{Q}^0{\boldsymbol{V}}^0+\mathit{R}^0{\boldsymbol{U}}^0+\mathit{Q}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{R}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{U}}^0+\mathit{Q}^2:\mathrm{D}_{{\boldsymbol{x}}}^2{\boldsymbol{V}}^0\right], \end{multline} $ (5.23)
    $ \begin{equation} \mathit{h}^\epsilon = -\frac{\partial}{\partial \nu_{\mathit{D}}}\left[{\boldsymbol{P}}+\mathit{Q}^0{\boldsymbol{V}}^0+\mathit{R}^0{\boldsymbol{U}}^0+\mathit{Q}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{V}}^0+\mathit{R}^1\cdot\nabla_{{\boldsymbol{x}}}{\boldsymbol{U}}^0+\mathit{Q}^2:\mathrm{D}_{{\boldsymbol{x}}}^2{\boldsymbol{V}}^0\right]. \end{equation} $ (5.24)

    Estimates for $ {\boldsymbol{f}}^\epsilon $, $ {\boldsymbol{g}}^\epsilon $ and $ \mathit{h}^\epsilon $ follow from estimates on $ {\boldsymbol{V}}^0 $, $ {\boldsymbol{U}}^0 $, $ {\boldsymbol{P}} $, $ \mathit{Q}^0 $, $ \mathit{R}^0 $, $ \mathit{Q}^1 $, $ \mathit{R}^1 $, $ \mathit{Q}^2 $, and $ {\boldsymbol{W}} $. Due to the regularity of $ \overline{{\boldsymbol{H}}} $, $ \overline{\mathit{K}} $, $ \mathit{J} $, $ \mathit{G} $, classical regularity results for elliptic systems, see Theorem 8.12 and Theorem 8.13 in [34], guarantee that all spatial derivatives up to the fourth order of $ ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) $ are in $ L^\infty({\bf{R}}_+\times\Omega) $. Similarly, from Theorem 9.25 and Theorem 9.26 in [32], the cell-functions $ {\boldsymbol{W}} $, $ {\boldsymbol{P}} $, $ \mathit{Q}^0 $, $ \mathit{R}^0 $, $ \mathit{Q}^1 $, $ \mathit{R}^1 $ and $ \mathit{Q}^2 $ have higher regularity than given by Lemma 3: $ W_i, P_\alpha, Q^0_{\alpha\beta}, R^0_{\alpha\beta}, Q^1_{i\alpha\beta}, R^1_{\alpha\beta}, \mathit{Q}^2_{ij} $ are in $ L^\infty({\bf{R}}_+; W^{2, \infty}(\Omega; H^3_\#(Y^*)/{\bf{R}})) $. We denote by $ \kappa $ the time-independent bound

    $ \begin{equation*} \kappa = \sup\limits_{1\leq\alpha,\beta\leq N}\|K_{\alpha\beta}\|_{L^\infty({\bf{R}}_+;W^{1,\infty}(\Omega;C^1_\#(Y^*)))}. \end{equation*} $

    Note that $ \|\mathit{R}^0\|_{L^\infty({\bf{R}}_+\times\Omega; C^1_\#(Y^*)))^{N\times N}}\leq C\kappa $ by the Poincaré-Wirtinger inequality.

    Bounding $ {\boldsymbol{g}}^\epsilon $ now follows directly from equation (5.23) and Corollary 4:

    $ \begin{equation} \|{\boldsymbol{g}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}\leq C(1+\epsilon)(1+(\kappa+\tilde{\kappa})e^{\lambda t}), \end{equation} $ (5.25)

    where $ C $ is independent of $ \epsilon. $

    Bounding $ \mathit{h}^\epsilon $ is more difficult as it is defined on the boundary $ \partial\mathcal{T}^\epsilon $. The following result, see Lemma 2.31 on page 47 in [17], gives a trace inequality, which shows that $ \mathit{h}^\epsilon $ is properly defined.

    Lemma 6. Let $ \psi\in H^1(\Omega^\epsilon) $. Then

    $ \begin{equation} \|\psi\|_{L^2(\partial \mathcal{T}^\epsilon)}\leq C\epsilon^{-1/2}\|\psi\|_{\mathbb{V}_\epsilon}, \end{equation} $ (5.26)

    where $ C $ is independent of $ \epsilon $.

    By (5.24), the regularity of the cell-functions, the regularity of the normal at the boundary, Corollary 4 and using Lemma 6 twice, we have

    $ \begin{equation} \left|\int_{\partial \mathcal{T}^\epsilon}\epsilon^2{\boldsymbol{\phi}}^\top\mathit{h}^\epsilon\cdot{\boldsymbol{n}}^\epsilon\mathrm{d}s({\boldsymbol{x}})\right|\leq C\epsilon(1+(\kappa+\tilde{\kappa})e^{\lambda t})\|{\boldsymbol{\phi}}\|_{\mathbb{V}_\epsilon^N}. \end{equation} $ (5.27)

    We estimate $ {\boldsymbol{f}}^\epsilon $ in $ L^2(\Omega^\epsilon)^N $ from the standard inequality $ |a_1b_1-a_2b_2|\leq|a_1-a_2||b_2|+|a_1||b_1-b_2| $ for all $ a_1, a_2, b_1, b_2\in{\bf{R}} $. This leads to

    $ \begin{equation} \|{\boldsymbol{f}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}\leq \|\mathit{K}^\epsilon-\mathit{K}\|_{L^2(\Omega^\epsilon)^{N\times N}}\|{\boldsymbol{U}}^\epsilon\|_{L^\infty(\Omega^\epsilon)^N} +\|\mathit{K}\|_{L^\infty(\Omega^\epsilon)^{N\times N}}\|{\boldsymbol{U}}^\epsilon-{\boldsymbol{U}}^0\|_{L^2(\Omega^\epsilon)^N}. \end{equation} $ (5.28)

    With this inequality, the estimation depends on the convergence of $ \mathit{K}^\epsilon $ and $ {\boldsymbol{U}}^\epsilon $ to $ \mathit{K} $ and $ {\boldsymbol{U}}^0 $, respectively, but with the notation according to (2.7) we have $ \mathit{K}^\epsilon-\left.\mathit{K}\right|_{{\boldsymbol{y}} = {\boldsymbol{x}}/\epsilon} = \mathit{0} $ a.e.

    From the definition of $ {\boldsymbol{\Psi}}^\epsilon $, we obtain

    $ \begin{equation} \|{\boldsymbol{U}}^\epsilon-{\boldsymbol{U}}^0\|_{L^2(\Omega^\epsilon)^N} = \|{\boldsymbol{\Psi}}^\epsilon+M_\epsilon(\epsilon {\boldsymbol{U}}^1+\epsilon^2 {\boldsymbol{U}}^2)\|_{L^2(\Omega^\epsilon)^N} \leq \|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}+\epsilon\|{\boldsymbol{U}}^1\|_{L^2(\Omega^\epsilon)^N}+\epsilon^2\|{\boldsymbol{U}}^2\|_{L^2(\Omega^\epsilon)^N}. \end{equation} $ (5.29)

    Introduce the notations $ l = L_N $ and $ t_l = \min\{1/l, t\} $. Using system (4.30), the bounds $ C(1+(\kappa+\tilde{\kappa})e^{\lambda t}) $ for $ \|{\boldsymbol{V}}^1\|_{H^1(\Omega^\epsilon)^N} $ and $ \|{\boldsymbol{V}}^2\|_{H^1(\Omega^\epsilon)^N} $ obtained via the cell-function decompositions (4.32) and (5.3), respectively, the inequalities (4.5a) and (4.5b), and by employing Gronwall's inequality, we obtain

    $ \begin{align} \|{\boldsymbol{U}}^1\|_{H^1(\Omega^\epsilon)^N}&\leq C(1+(\kappa+\tilde{\kappa})e^{\lambda t})\sqrt{t_le^{lt}+t_l^2e^{2lt}}, \end{align} $ (5.30a)
    $ \begin{align} \|{\boldsymbol{U}}^2\|_{H^1(\Omega^\epsilon)^N}&\leq C(1+(\kappa+\tilde{\kappa})e^{\lambda t})\sqrt{t_le^{lt}+t_l^2e^{2lt}}, \end{align} $ (5.30b)
    $ \begin{align} \|{\boldsymbol{U}}^\epsilon\!-\!{\boldsymbol{U}}^0\|_{L^2(\Omega^\epsilon)^N}&\leq \|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}\cr &\quad+C(\epsilon+\epsilon^2)(1+(\kappa+\tilde{\kappa})e^{\lambda t})\sqrt{t_le^{lt}+t^2_le^{2lt}}. \end{align} $ (5.30c)

    Thus from identity (5.28) we obtain

    $ \begin{equation} \|{\boldsymbol{f}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}\leq \kappa\|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N} +C(\epsilon+\epsilon^2)\kappa(1+(\kappa+\tilde{\kappa})e^{\lambda t})\sqrt{t_le^{lt}+t_l^2e^{2lt}}. \end{equation} $ (5.31)

    We now have all the ingredients to estimate $ a_\epsilon({\boldsymbol{\phi}}^\epsilon, {\boldsymbol{\phi}}) $. Inserting estimates (5.25), (5.27) and (5.31) into (5.21), we find

    $ \begin{equation} |a_\epsilon({\boldsymbol{\phi}}^\epsilon,{\boldsymbol{\phi}})|\!\leq\! \left[\kappa\|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}+C(\epsilon\!+\!\epsilon^2)(1\!+\!(\kappa+\tilde{\kappa})e^{\lambda t})(1+\kappa (1+ t_le^{lt}))\right]\!\|{\boldsymbol{\phi}}\|_{\mathbb{V}_\epsilon^N}. \end{equation} $ (5.32)

    Next, we need to estimate the second right-hand term of (5.16), $ a_\epsilon((1-M_\epsilon)(\epsilon {\boldsymbol{V}}^1+\epsilon^2 {\boldsymbol{V}}^2), {\boldsymbol{\phi}}) $. Following [17] (see pages 48 and 49 therein) and using the bounds $ C(1+(\kappa+\tilde{\kappa})e^{\lambda t}) $ for $ \|{\boldsymbol{V}}^1\|_{H^1(\Omega^\epsilon)^N} $ and $ \|{\boldsymbol{V}}^2\|_{H^1(\Omega^\epsilon)^N} $, we obtain

    $ \begin{equation} |a_\epsilon((1-M_\epsilon)(\epsilon {\boldsymbol{V}}^1+\epsilon^2 {\boldsymbol{V}}^2),{\boldsymbol{\phi}})| \leq \left[C(\epsilon^{\frac{1}{2}}+\epsilon^{\frac{3}{2}})+C(\epsilon+\epsilon^2)\left(1+(\kappa+\tilde{\kappa})e^{\lambda t}\right)\right]\|{\boldsymbol{\phi}}\|_{\mathbb{V}_\epsilon^N}. \end{equation} $ (5.33)

    The combination of (5.32) and (5.33) yields

    $ \begin{equation} |a_\epsilon({\boldsymbol{\Phi}}^\epsilon,{\boldsymbol{\phi}})|\!\leq\! \left[\kappa\|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}\!+C(\epsilon^{\frac{1}{2}}\!+\!\epsilon^{\frac{3}{2}})\left(1+\epsilon^{\frac{1}{2}}(1\!+\!(\kappa+\tilde{\kappa})e^{\lambda t})(1\!+\!\kappa (1+ t_le^{lt}))\right)\right]\!\|{\boldsymbol{\phi}}\|_{\mathbb{V}_\epsilon^N}. \end{equation} $ (5.34)

    Since $ \mathcal{L}{\boldsymbol{\Psi}}^\epsilon = \mathit{G}{\boldsymbol{\Phi}}^\epsilon $, we obtain an identity similar to (4.5a) to which we apply Gronwall's inequality, leading to

    $ \begin{equation} \|{\boldsymbol{\Psi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2(t)\leq \int_0^te^{l(t-s)}G_N\|{\boldsymbol{\Phi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2(s)\mathrm{d}s. \end{equation} $ (5.35)

    Choosing $ {\boldsymbol{\phi}} = {\boldsymbol{\Phi}}^\epsilon $ and with $ m $ denoting the coercivity constant $ \min\limits_{1\leq i\leq d, 1\leq\alpha\leq N}\{\tilde{m}_\alpha, \tilde{e}_i\} $, we obtain

    $ \begin{equation} m\|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}^2\!\leq\! \left[\kappa\sqrt{\int_0^te^{l(t-s)}G_N\|{\boldsymbol{\Phi}}^\epsilon\|_{L^2(\Omega^\epsilon)^N}^2(s)\mathrm{d}s}\!+\!\mathcal{B}(\epsilon,t)\right]\!\|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}, \end{equation} $ (5.36)

    where

    $ \begin{equation} \mathcal{B}(\epsilon,t) = C(\epsilon^{\frac{1}{2}}\!+\!\epsilon^{\frac{3}{2}})\left(1+\epsilon^{\frac{1}{2}}(1\!+\!(\kappa+\tilde{\kappa})e^{\lambda t})(1\!+\!\kappa(1+ t_le^{lt}))\right). \end{equation} $ (5.37)

    Applying Young's inequality twice to (5.36), once with $ \eta > 0 $ and once with $ \eta_1 > 0 $, and then using the Poincaré inequality (see Remark 1) and Gronwall's inequality, we arrive at

    $ \begin{equation} \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}^2\!\leq\!\frac{\mathcal{B}(\epsilon,t)^2}{\eta_1(2m-\eta_1-\eta)}+\int_0^t\frac{\kappa^2 G_N e^{l(t-s)}\mathcal{B}(\epsilon,s)^2}{\eta(2m-\eta_1-\eta)^2\eta_1}\exp\left(\int_s^t\frac{\kappa G_N}{\eta(2m-\eta_1-\eta)}e^{l(t-u)}\mathrm{d}u\right)\mathrm{d}s. \end{equation} $ (5.38)

    Since $ 0 < \mathcal{B}(\epsilon, s)\leq \mathcal{B}(\epsilon, t) $ for $ s\leq t $, we can use the Leibniz rule to obtain

    $ \begin{equation} \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}^2\!\leq\!\frac{\mathcal{B}(\epsilon,t)^2}{\eta_1(2m-\eta_1-\eta)}\exp\left(\frac{\kappa^2 G_N}{\eta(2m-\eta_1-\eta)}t_le^{lt}\right). \end{equation} $ (5.39)

    Minimizing the two fractions separately leads us to $ \eta_1 = m-\frac{\eta}{2} $ and $ \eta = m-\frac{\eta_1}{2} $, whence $ \eta = \eta_1 = \frac{2}{3}m $. Hence, we obtain

    $ \begin{equation} \begin{array}{rcl} \|{\boldsymbol{\Phi}}^\epsilon\|_{\mathbb{V}_\epsilon^N}\!&\leq&\! C(\epsilon^{\frac{1}{2}}\!\!+\!\epsilon^{\frac{3}{2}}\!)\left(1\!+\!\epsilon^{\frac{1}{2}}\!(1\!+\!(\kappa\!+\!\tilde{\kappa})e^{\lambda t})(1\!+\!\kappa (1\!+\! t_le^{lt}))\right)\!\exp\!\left(\mu t_le^{lt}\right)\!,\\ & = :&\mathcal{C}(\epsilon,t) \end{array} \end{equation} $ (5.40)

    and from (5.35), we arrive at

    $ \begin{equation} \|{\boldsymbol{\Psi}}^\epsilon\|_{H^1(\Omega^\epsilon)^N}\leq\!\mathcal{C}(\epsilon,t)\sqrt{t_le^{lt}} \end{equation} $ (5.41)

    with

    $ \begin{equation} \mu = \frac{9\kappa^2G_N}{8m^2}. \end{equation} $ (5.42)

    This completes the proof of Theorem 4.
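    As a small symbolic cross-check of the choice $ \eta = \eta_1 = \frac{2}{3}m $ made before (5.40), the sympy sketch below maximizes the two denominators $ \eta_1(2m-\eta_1-\eta) $ and $ \eta(2m-\eta_1-\eta) $ separately (equivalently, minimizes the two fractions) and solves the resulting stationarity conditions; the symbol names are illustrative.

```python
# Symbolic check of the parameter choice eta = eta_1 = 2m/3 used in (5.39)-(5.40).
import sympy as sp

m, eta, eta1 = sp.symbols('m eta eta1', positive=True)
denom = 2 * m - eta1 - eta
c1 = sp.Eq(sp.diff(eta1 * denom, eta1), 0)      # stationarity in eta1: eta1 = m - eta/2
c2 = sp.Eq(sp.diff(eta * denom, eta), 0)        # stationarity in eta:  eta  = m - eta1/2
print(sp.solve([c1, c2], [eta, eta1]))          # {eta: 2*m/3, eta1: 2*m/3}
```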

    In [19] a concrete corrosion model has been derived from first principles. This model combines mixture theory with balance laws, while incorporating chemical reaction effects, mechanical deformations, incompressible flow, diffusion, and moving boundary effects. The model describes the onset of concrete corrosion by treating the corroded part as a layer of cement (the mixture) on top of a concrete bed and below an acidic fluid. The mixture contains three components $ \phi = (\phi_1, \phi_2, \phi_3) $, which react chemically via $ 3+2\rightarrow 1 $. For simplicity, we work with volume fractions. Hence, the identity $ \phi_1+\phi_2+\phi_3 = 1 $ holds. For $ \alpha\in\{1, 2, 3\} $, the model equations on a domain $ \Omega $ become

    $ \begin{align} \frac{\partial\phi_\alpha}{\partial t}+\epsilon\nabla\cdot(\phi_\alpha\bf{v}_\alpha)-\epsilon\delta_\alpha\Delta \phi_\alpha& = \quad\epsilon\kappa_\alpha \mathcal{F}(\phi_1,\phi_3), \end{align} $ (6.1a)
    $ \begin{align} \nabla\cdot\left(\sum\limits_{\alpha = 1}^3\phi_\alpha\bf{v}_\alpha\right)-\sum\limits_{\alpha = 1}^3\delta_\alpha\Delta \phi_\alpha& = \quad\sum\limits_{\alpha = 1}^3\kappa_\alpha \mathcal{F}(\phi_1,\phi_3), \end{align} $ (6.1b)
    $ \begin{align} \nabla(-\phi_\alpha p+\![\lambda_\alpha\!+\!\mu_\alpha]\nabla\!\!\cdot\!\bf{u}_\alpha)\!+\!\mu_\alpha\Delta \bf{u}_\alpha& = \chi_\alpha(\bf{v}_\alpha\!-\!\bf{v}_3)\!-\!\!\sum\limits_{\beta = 1}^3\!\gamma_{\alpha\beta}\Delta \bf{v}_\beta, \end{align} $ (6.1c)
    $ \begin{align} \nabla\left(-p+\sum\limits_{\alpha = 1}^2(\lambda_\alpha+\mu_\alpha)\nabla\cdot\bf{u}_\alpha\right)+\sum\limits_{\alpha = 1}^2\mu_\alpha\Delta \bf{u}_\alpha&+\sum\limits_{\alpha = 1}^3\sum\limits_{\beta = 1}^3\gamma_{\alpha\beta}\Delta \bf{v}_\beta = 0, \end{align} $ (6.1d)

    where $ U_\alpha $ and $ {\boldsymbol{v}}_\alpha = \partial U_\alpha/\partial t $ are the displacement and velocity of component $ \alpha $, respectively, and $ \epsilon $ is a small positive number independent of any spatial scale. Equation (6.1a) is a mass balance law, (6.1b) the incompressibility condition, (6.1c) the partial momentum balance law (for component $ \alpha $), and (6.1d) the total momentum balance.

    For $ t = \mathcal{O}(\epsilon^0) $, we can treat $ \phi $ as constant, which removes some nonlinearities from the model. Moreover, with equation (6.1b) we can eliminate $ {\boldsymbol{v}}_3 $ in favor of $ {\boldsymbol{v}}_1 $ and $ {\boldsymbol{v}}_2 $, while with equation (6.1d) we can eliminate $ p $. This leads to a final expression for $ {\boldsymbol{u}} = (U_1, U_2) $:

    $ \begin{equation} \tilde{\mathit{M}}\partial_t{\boldsymbol{u}} -\tilde{\mathit{A}}{\boldsymbol{u}}-\mathrm{div}\left(\tilde{\mathit{B}}{\boldsymbol{u}}+\tilde{\mathit{D}}\partial_t{\boldsymbol{u}}+ \mathit{E}\cdot\nabla\left(\mathit{F}{\boldsymbol{u}}+\tilde{\mathit{G}}\partial_t{\boldsymbol{u}}\right)\right) = {\boldsymbol{H}}, \end{equation} $ (6.2)

    with

    $ \begin{align} \tilde{\mathit{M}} & = \left(\begin{array}{cc} \chi_1\frac{\phi_1+\phi_3}{\phi_3}&\chi_1\frac{\phi_2}{\phi_3}\\ \chi_2\frac{\phi_1}{\phi_3}&\chi_2\frac{\phi_2+\phi_3}{\phi_3} \end{array}\right),&\quad\tilde{\mathit{A}}& = \tilde{\mathit{B}} = \tilde{\mathit{D}} = \mathit{0}, \end{align} $ (6.3a)
    $ \begin{align} \mathit{F} & = \left(\begin{array}{cc} \mu_1(\phi_2+\phi_3)&-\mu_2\phi_1\\ -\mu_1\phi_2&\mu_2(\phi_1+\phi_3) \end{array}\right),&\quad\mathit{E} & = \bf{I}, \end{align} $ (6.3b)
    $ \begin{align} \tilde{G}_{\alpha\beta} & = -\gamma_{\alpha\beta}+\phi_\alpha\sum\limits_{\lambda = 1}^3\gamma_{\lambda\beta},&\quad H_\alpha & = \frac{\chi_\alpha}{\phi_3}\mathcal{F}(\phi_1,\phi_3)\sum\limits_{\lambda = 1}^3\kappa_\lambda. \end{align} $ (6.3c)

    According to [19], there are several options for $ \gamma_{\alpha\beta} $, but all these options lead to non-invertible $ \tilde{\mathit{G}} $. Suppose we take $ \gamma_{11} = \gamma_{22} = \gamma_1 < 0 $ and $ \gamma_{12} = \gamma_{21} = \gamma_2 < 0 $ with $ \gamma_1 > \gamma_2 $. Then $ \tilde{\mathit{G}} $ is invertible and positive definite for $ \phi_3 > 0 $, since the determinant of $ \tilde{\mathit{G}} $ equals $ (\gamma_1^2-\gamma_2^2)\phi_3 $.

    According to Section 4.3 of [20], we obtain the Neumann problem (2.8a), (2.8b) with

    $ \begin{equation} \mathit{M} = \tilde{\mathit{M}}\tilde{\mathit{G}}^{-1},\; \mathit{D} = \mathit{0},\;\mathit{L} = \tilde{\mathit{G}}^{-1}\mathit{F},\; \mathit{G} = \tilde{\mathit{G}}^{-1},\; \mathit{K} = -\tilde{\mathit{M}}\tilde{\mathit{G}}^{-1}\mathit{F},\; \mathit{J} = \mathit{0}. \end{equation} $ (6.4)

    Note that neither $ \mathit{E} $ nor $ {\boldsymbol{H}} $ changes under this transformation. Moreover, $ \mathit{M} $ is positive definite, since both $ \tilde{\mathit{M}} $ and $ \tilde{\mathit{G}} $ are positive definite.

    Suppose the cement mixture has a periodic microstructure satisfying assumption (A4), inherited from the microstructure of the corroded concrete. Assume the constants $ \chi_\alpha $, $ \mu_\alpha $, $ \kappa_\alpha $, and $ \gamma_{\alpha\beta} $ are actually functions of both the macroscopic scale $ {\boldsymbol{x}} $ and the microscopic scale $ {\boldsymbol{y}} $, such that assumptions (A1)–(A3) are satisfied. Note that (A3) is trivially satisfied.

    From the main results we see that a macroscale limit $ ({\boldsymbol{U}}^0, {\boldsymbol{V}}^0) $ of this microscale corrosion problem exists, that it satisfies system (P$ ^0_w $), and that the convergence speed is given by Theorem 4 with the constants $ l $, $ \kappa $, $ \lambda $ and $ \mu $ determined in Appendix A.

    We acknowledge the Netherlands Organisation for Scientific Research (NWO) for the MPE grant 657.000.004, the NWO Cluster Nonlinear Dynamics in Natural Systems (NDNS+) for funding a research stay of AJV at Karlstads Universitet, and the Royal Swedish Academy of Sciences (KVA) for the Stiftelsen GS Magnusons fund grant MG2018-0020.

    The authors declare no conflict of interest.

    In Theorem 4, the three constants $ l $, $ \lambda $ and $ \mu $ are introduced as exponents indicating the exponential growth in time of the corrector bounds. Moreover, there is a constant $ \kappa $ that indicates whether additional exponential growth occurs. For brevity it was not stated how these constants depend on the given matrices and tensors. Here we give an exact procedure for determining these constants.

    The constant $ \kappa $ denotes the maximal operator norm of the tensor $ \mathit{K} $.

    $ \begin{equation} \kappa = \sup\limits_{1\leq\alpha,\beta\leq N}\|K_{\alpha\beta}\|_{L^\infty({\bf{R}}_+;W^{1,\infty}(\Omega;C^1_\#(Y^*)))}. \end{equation} $ (A.1)

    The constants $ l $, $ \lambda $, $ \tilde{\kappa} $ and $ \mu $ were obtained via Young's inequality, which makes them a coupled system involving several additional positive constants $ \eta $, $ \eta_1 $, $ \eta_2 $, $ \eta_3 $; a small computational sketch evaluating the resulting expressions is given after the list of norms below. The obtained expressions are

    $ \begin{align} l& = \max\{0,L_N\}, \end{align} $ (A.2a)
    $ \begin{align} \lambda & = \frac{1}{2}\max\left\{0,L_N+\max\left\{L_G+G_M\max\limits_{1\leq\alpha\leq N}\tilde{K}_\alpha,G_M\max\limits_{1\leq\alpha\leq N}\max\limits_{1\leq i\leq d}\tilde{J}_{i\alpha}\right\}\right\}, \end{align} $ (A.2b)
    $ \begin{align} \mu& = \frac{9\kappa^2}{8m^2}G_N, \end{align} $ (A.2c)
    $ \begin{align} \tilde{\kappa} & = \max\limits_{1\leq\alpha\leq N,1\leq i\leq d}\{\tilde{K}_\alpha,\tilde{J}_{i\alpha}\} \end{align} $ (A.2d)

    with the values

    $ \begin{align} L_N& = 2\mathcal{L}_{\min}+\eta G_{\max}+\eta_1dN\mathcal{L}_G, \end{align} $ (A.3a)
    $ \begin{align} L_G& = 2\mathcal{L}_{\min}+\frac{dN}{\eta_1}\mathcal{L}_G+\eta_2G_{\max}+\eta_3dNG_G, \end{align} $ (A.3b)
    $ \begin{align} G_N& = \frac{1}{\eta}G_{\max}+\frac{dN}{\eta_3}G_G, \end{align} $ (A.3c)
    $ \begin{align} G_G& = \frac{1}{\eta_2}G_{\max} , \end{align} $ (A.3d)
    $ \begin{align} G_M& = \max\limits_{1\leq\alpha\leq N}\max\limits_{1\leq i\leq d}\left\{\frac{G_N+G_G}{\tilde{m}_\alpha},\frac{G_N}{\tilde{e}_i}\right\}, \end{align} $ (A.3e)
    $ \begin{align} m& = \min\limits_{1\leq\alpha\leq N}\min\limits_{1\leq i\leq d}\{\tilde{m}_\alpha,\tilde{e}_i\}, \end{align} $ (A.3f)

    where we have the positive values

    $ \begin{align} \tilde{m}_\alpha& = m_\alpha-\sum\limits_{i = 1}^d\sum\limits_{\beta = 1}^N\frac{\|D_{i\beta\alpha}\|_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))}}{2\eta_{i\beta\alpha}}-\eta_\alpha-\sum\limits_{\beta = 1}^N\eta_{\alpha\beta}-\sum\limits_{i = 1}^d\sum\limits_{\beta = 1}^N\tilde{\eta}_{i\alpha\beta}, \end{align} $ (A.4a)
    $ \begin{align} \tilde{e}_i& = e_i-\sum\limits_{\alpha,\beta = 1}^N\frac{\eta_{i\beta\alpha}}{2}\|D_{i\beta\alpha}\|_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))}, \end{align} $ (A.4b)
    $ \begin{align} \tilde{H}& = \sum\limits_{\alpha = 1}^N\frac{1}{4\eta_{\alpha}}\|\mathit{H}_{\alpha}\|^2_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))}, \end{align} $ (A.4c)
    $ \begin{align} \tilde{K}_\alpha& = \sum\limits_{\beta = 1}^N\frac{1}{4\eta_{\beta\alpha}}\|K_{\beta\alpha}\|^2_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))}, \end{align} $ (A.4d)
    $ \begin{align} \tilde{J}_{i\alpha}& = \sum\limits_{\beta = 1}^N\frac{\epsilon_{0}^2}{4\tilde{\eta}_{i\beta\alpha}}\|J_{i\beta\alpha}\|^2_{L^\infty({\bf{R}}_+\times\Omega;C_\#(Y^*))} \end{align} $ (A.4e)

    for $ \eta_{i\beta\alpha} > 0 $, $ \eta_{\beta} > 0 $, $ \eta_{\alpha\beta} > 0 $, $ \tilde{\eta}_{i\alpha\beta} > 0 $ and $ \epsilon_{0} $ the supremum of allowed $ \epsilon $ values (which is 1 for Theorem 5). Moreover, we have

    ● $ \mathcal{L}_{\min} $ as the $ L^\infty({\bf{R}}_+\times\Omega) $-norm of the absolute value of the largest negative eigenvalue of $ \mathit{L} $, or as $ -1 $ times the smallest positive eigenvalue of $ \mathit{L} $ if no negative or zero eigenvalues exist,

    ● $ \mathcal{L}_G $ as the $ L^\infty({\bf{R}}_+\times\Omega) $-norm of the largest absolute value of the $ \nabla\mathit{L} $ components,

    ● $ G_{\max} $ as the $ L^\infty({\bf{R}}_+\times\Omega) $-norm of the largest eigenvalue of $ \mathit{G} $,

    ● $ G_G $ as the $ L^\infty({\bf{R}}_+\times\Omega) $-norm of the largest absolute value of the $ \nabla\mathit{G} $ components.
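
    For convenience, here is a small helper (an illustrative sketch, not part of the original text) that evaluates the top-level expressions (A.2a)–(A.2d) once the intermediate quantities from (A.3)–(A.4) and the norms listed above have been computed and are supplied as plain numbers and arrays.

    ```python
    # Evaluate the constants l, lambda, mu and kappa~ of (A.2a)-(A.2d)
    # from precomputed intermediate quantities (illustrative helper only).
    import numpy as np

    def corrector_constants(L_N, L_G, G_N, G_M, m, kappa, K_tilde, J_tilde):
        """K_tilde: the values K~_alpha (length N); J_tilde: the values J~_{i,alpha} (d x N)."""
        l = max(0.0, L_N)                                            # (A.2a)
        lam = 0.5*max(0.0, L_N + max(L_G + G_M*np.max(K_tilde),      # (A.2b)
                                     G_M*np.max(J_tilde)))
        mu = 9.0*kappa**2/(8.0*m**2)*G_N                             # (A.2c)
        kappa_tilde = max(np.max(K_tilde), np.max(J_tilde))          # (A.2d)
        return l, lam, mu, kappa_tilde

    # example call with made-up values
    print(corrector_constants(L_N=0.5, L_G=1.0, G_N=2.0, G_M=3.0, m=0.1, kappa=1.0,
                              K_tilde=[0.2, 0.3], J_tilde=[[0.1, 0.2], [0.05, 0.1]]))
    ```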

    Remark 11. Note that smaller $ l $ and $ \mu $ yield longer times $ \tau $ in Theorem 5 and faster convergence rates in $ \epsilon $. However, $ l $ and $ \mu $ are only coupled via $ \lambda $. Hence, $ l $ and $ \mu $ can be made as small as needed as long as $ \lambda $ remains finite and independent of $ \epsilon $.

    Remark 12. Note that $ \mathcal{L}_{\min} < 0 $ allows for a hyperplane of positive values of $ \eta $ and $ \eta_1 $ in $ (\eta, \eta_1, \eta_2, \eta_3) $-space such that $ l = L_N = 0 $. In this case neither $ \lambda $ nor $ \mu $ should be minimized; instead $ \tau_{end} $ should be maximized, i.e., the time $ \tau $ for which the bounds of Theorem 5 become $ \mathcal{O}(1) $ for $ p = q = 0 $. For $ \mu\geq\lambda $ this amounts to minimizing $ \mu $, while for $ \mu < \lambda $ it amounts to minimizing $ \mu+\lambda $. Due to the use of maxima in the definitions of $ \lambda $ and $ \tau_{end} $, we refrain from maximizing $ \tau_{end} $, as any attempt leads to a large tree of cases for each of which an optimization problem has to be solved.

    Two-scale convergence is a method invented in 1989 by Nguetseng, see [35]. This method removes many technicalities by placing the convergence itself on functional-analytic grounds, as a property of functions in certain spaces. In some sense, the function spaces natural to periodic boundary conditions have favorable convergence properties for their oscillating continuous functions. This is made precise in the First Oscillation Lemma:

    Lemma 7 ('First Oscillation Lemma'). Let $ B_p(\Omega, Y) $, $ 1\leq p < \infty $, denote any of the spaces $ L^p(\Omega; C_\#(Y)) $, $ L^p_\#(Y; C(\overline{\Omega})) $, $ C(\overline{\Omega}; C_\#(Y)) $. Then $ B_p(\Omega, Y) $ has the following properties:

    1. $ B_p(\Omega, Y) $ is a separable Banach space.

    2. $ B_p(\Omega, Y) $ is dense in $ L^p(\Omega\times Y) $.

    3. If $ f({\boldsymbol{x}}, {\boldsymbol{y}})\in B_p(\Omega, Y) $, then $ f({\boldsymbol{x}}, {\boldsymbol{x}}/\epsilon) $ is a measurable function on $ \Omega $ such that

    $ \begin{equation} \left\|f\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\right\|_{L^p(\Omega)}\leq \left\|f\left({\boldsymbol{x}},{\boldsymbol{y}}\right)\right\|_{B_p(\Omega,Y)}. \end{equation} $ (B.1)

    4. For every $ f({\boldsymbol{x}}, {\boldsymbol{y}})\in B_p(\Omega, Y) $, one has

    $ \begin{equation} \lim\limits_{\epsilon\rightarrow0}\int_\Omega f\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}} = \frac{1}{|Y|}\int_\Omega\int_Yf({\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}. \end{equation} $ (B.2)

    5. For every $ f({\boldsymbol{x}}, {\boldsymbol{y}})\in B_p(\Omega, Y) $, one has

    $ \begin{equation} \lim\limits_{\epsilon\rightarrow0}\int_\Omega \left|f\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\right|^p\mathrm{d}{\boldsymbol{x}} = \frac{1}{|Y|}\int_\Omega\int_Y |f({\boldsymbol{x}},{\boldsymbol{y}})|^p\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}. \end{equation} $ (B.3)

    See Theorems 2 and 4 in [36].
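
    As a quick numerical illustration of property 4 (and, implicitly, property 5), consider the hand-picked integrand $ f(x, y) = (1+x^2)\sin^2(2\pi y) $ on $ \Omega = Y = (0, 1) $; the choice of $ f $ is an assumption made for this demonstration and is not taken from [36].

    ```python
    # Numerical illustration of (B.2): the integral of f(x, x/eps) tends to
    # the cell-averaged integral as eps -> 0 (one-dimensional toy example).
    import numpy as np

    def f(x, y):
        # assumed integrand, continuous in x and Y-periodic in y with Y = (0, 1)
        return (1.0 + x**2)*np.sin(2*np.pi*y)**2

    x = np.linspace(0.0, 1.0, 200001)   # fine quadrature grid on Omega = (0, 1)

    for eps in [1e-1, 1e-2, 1e-3]:
        print(eps, np.trapz(f(x, x/eps), x))        # left-hand side of (B.2)

    # right-hand side of (B.2): the y-average of sin^2 is 1/2
    print("limit:", np.trapz(0.5*(1.0 + x**2), x))  # = 2/3
    ```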

    However, the First Oscillation Lemma by itself is not sufficient, as it can be applied neither to weak solutions nor to gradients. Essentially, two-scale convergence overcomes these problems by extending the First Oscillation Lemma in a weak sense.

    Two-scale convergence: Definition and results

    For each function $ c(t, {\boldsymbol{x}}, {\boldsymbol{y}}) $ on $ (0, T)\times\Omega\times Y $, we introduce a corresponding sequence of functions $ c^\epsilon(t, {\boldsymbol{x}}) $ on $ (0, T)\times\Omega $ by

    $ \begin{equation} c^\epsilon(t,{\boldsymbol{x}}) = c\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right) \end{equation} $ (B.4)

    for all $ \epsilon\in(0, \epsilon_0) $, although two-scale convergence is valid for more general bounded sequences of functions $ c^\epsilon(t, {\boldsymbol{x}}) $.

    Introduce the notation $ \nabla_{{\boldsymbol{y}}} $ for the gradient in the $ {\boldsymbol{y}} $-variable. Moreover, we introduce the notations $ \rightarrow $, $ \rightharpoonup $, and $ \overset{2}{\longrightarrow} $ to denote strong convergence, weak convergence, and two-scale convergence, respectively.

    Two-scale convergence was first introduced in [35] and popularized by the seminal paper [37], in which the term two-scale convergence was actually coined. For our exposition we use both the seminal paper [37] and the modern treatment of two-scale convergence in [36]. From now on, $ p $ and $ q $ are real numbers such that $ 1 < p < \infty $ and $ 1/p+1/q = 1 $.

    Definition 1. Let $ (\epsilon_h)_h $ be a fixed sequence of positive real numbers converging to 0; when it is clear from the context, we omit the subscript $ h $. A sequence $ (u_\epsilon) $ of functions in $ L^p(\Omega) $ is said to two-scale converge to a limit $ u_0\in L^p(\Omega\times Y) $ if

    $ \begin{equation} \int_\Omega u_\epsilon({\boldsymbol{x}})\phi\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}}\rightarrow\frac{1}{|Y|}\int_\Omega\int_Yu_0({\boldsymbol{x}},{\boldsymbol{y}})\phi({\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}, \end{equation} $ (B.5)

    for every $ \phi\in L^q(\Omega; C_\#(Y)) $.

    See Definition 6 on page 41 of [36].
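
    The following one-dimensional sketch (with an assumed sequence $ u_\epsilon(x) = \sin(2\pi x/\epsilon) $ and an assumed test function $ \phi $) illustrates what Definition 1 captures: $ u_\epsilon $ converges weakly to $ 0 $, yet the oscillating test function recovers the non-trivial two-scale limit $ u_0(x, y) = \sin(2\pi y) $.

    ```python
    # Numerical illustration of (B.5) for u_eps(x) = sin(2*pi*x/eps).
    import numpy as np

    x = np.linspace(0.0, 1.0, 400001)
    phi = lambda x, y: (1.0 + x)*np.sin(2*np.pi*y)   # admissible test function

    for eps in [1e-1, 1e-2, 1e-3]:
        u_eps = np.sin(2*np.pi*x/eps)
        print(eps, np.trapz(u_eps*phi(x, x/eps), x))      # left-hand side of (B.5)

    # right-hand side of (B.5) with u0(x, y) = sin(2*pi*y) and int_Y sin^2 dy = 1/2
    print("two-scale pairing:", np.trapz(0.5*(1.0 + x), x))   # = 0.75
    # pairing u_eps with a y-independent test function instead gives 0 (the weak limit)
    ```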

    Remark 13. Definition 1 allows for an extension of two-scale convergence to Bochner spaces $ L^r(I; L^p(\Omega\times Y)) $ in the additional variable $ t\in I $ for $ r = p\in[1, \infty) $ by requiring the regularity $ u_\epsilon\in L^p(I\times\Omega) $, $ u_0\in L^p(I\times\Omega\times Y) $ and $ \phi\in L^q(I\times\Omega; C_\#(Y)) $ with $ q = \frac{p}{p-1} $. Moreover, (B.5) changes into

    $ \begin{equation} \int_I\int_\Omega u_\epsilon(t,{\boldsymbol{x}})\phi\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}}\mathrm{d}t\rightarrow\frac{1}{|Y|}\int_I\int_\Omega\int_Yu_0(t,{\boldsymbol{x}},{\boldsymbol{y}})\phi(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}\mathrm{d}t. \end{equation} $ (B.6)

    This Bochner-like extension is well-defined because, for $ {\boldsymbol{y}} $-independent limits $ u_0 $, two-scale convergence coincides with weak convergence.

    Note that, for $ r \neq p $, convergence (B.6) is valid for the regularity $ u_\epsilon\in L^r(I; L^p(\Omega)) $, $ u_0\in L^r(I; L^p(\Omega\times Y)) $ and $ \phi\in L^s(I; L^1(\Omega; C_\#(Y))) $ with $ s = \frac{r}{r-1} $.

    For $ r = \infty $ we need $ \phi\in \text{ba}_{ac}(I; L^1(\Omega; C_\#(Y))) $, where $ \text{ba}_{ac}(I) $ denotes $ L^\infty(I)^* $, since the dual of $ L^\infty(I) $ can be identified with the set of all finitely additive signed measures on $ I $ that are absolutely continuous with respect to $ \mathrm{d}t $.

    With the Bochner version of Definition 1 introduced in Remark 13, we can give the Sobolev space version of Definition 1.

    Definition 2. Let $ r, p\in[1, \infty) $, $ s = \frac{r}{r-1} $, and $ q = \frac{p}{p-1} $. A sequence $ (u_\epsilon) $ of functions in $ W^{1, r}(I; L^p(\Omega)) $ is said to two-scale converge to a limit $ u_0\in W^{1, r}(I; L^p(\Omega\times Y)) $ if both

    $ \begin{align} \int_I\int_\Omega u_\epsilon(t,{\boldsymbol{x}})\phi\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}}\mathrm{d}t&\rightarrow\frac{1}{|Y|}\int_I\int_\Omega\int_Yu_0(t,{\boldsymbol{x}},{\boldsymbol{y}})\phi(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}\mathrm{d}t, \end{align} $ (B.7a)
    $ \begin{align} \int_I\int_\Omega \frac{\partial u_\epsilon}{\partial t}(t,{\boldsymbol{x}})\phi\left(t,{\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}}\mathrm{d}t&\rightarrow\frac{1}{|Y|}\int_I\int_\Omega\int_Y\frac{\partial u_0}{\partial t}(t,{\boldsymbol{x}},{\boldsymbol{y}})\phi(t,{\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}\mathrm{d}t \end{align} $ (B.7b)

    hold for every $ \phi\in L^s(I; L^q(\Omega; C_\#(Y))) $, or, in short notation,

    $ \begin{equation} u_\epsilon \overset{2}{\longrightarrow}u_0\text{ in }L^r(I;L^p(\Omega))\text{ and } \frac{\partial u_\epsilon}{\partial t} \overset{2}{\longrightarrow}\frac{\partial u_0}{\partial t}\text{ in }L^r(I;L^p(\Omega)). \end{equation} $ (B.8)

    We now list several important results concerning the two-scale convergence, which can all be extended in a natural way for Bochner spaces, see Section 2.5.2 in [38].

    Proposition 1. Let $ (u_\epsilon) $ be a bounded sequence in $ W^{1, p}(\Omega) $ for $ 1 < p\leq\infty $ such that

    $ \begin{equation} u_\epsilon\rightharpoonup u_0\quad\text{in}\quad W^{1,p}(\Omega). \end{equation} $ (B.9)

    Then $ u_\epsilon\overset{2}{\longrightarrow}u_0 $ and there exist a subsequence $ \epsilon' $ and a $ u_1\in L^p(\Omega; W^{1, p}_\#(Y)/{\bf{R}}) $ such that

    $ \begin{equation} \nabla u_{\epsilon'}\overset{2}{\longrightarrow}\nabla u_0+\nabla_{{\boldsymbol{y}}}u_1. \end{equation} $ (B.10)

    Proposition 1 for $ 1 < p < \infty $ is Theorem 20 in [36], while for $ p = 2 $ it is identity (i) in Proposition 1.14 in [37]. On page 1492 of [37] it is mentioned that the $ p = \infty $ case holds as well. The case of interest for us here is $ p = 2 $.
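
    To see the corrector term $ \nabla_{{\boldsymbol{y}}}u_1 $ appear in practice, here is a one-dimensional sketch built on the assumed ansatz $ u_\epsilon(x) = u_0(x) + \epsilon u_1(x, x/\epsilon) $; the specific $ u_0 $, $ u_1 $ and test function are illustrative choices, not taken from [36] or [37].

    ```python
    # Numerical illustration of (B.10): the gradient of u_eps two-scale
    # converges to u0'(x) + d_y u1(x, y) (one-dimensional toy example).
    import numpy as np

    x = np.linspace(0.0, 1.0, 400001)
    phi = lambda x, y: (1.0 - x)*np.cos(2*np.pi*y)   # oscillating test function

    # u0(x) = x^2 and Y-periodic corrector u1(x, y) = x*sin(2*pi*y)/(2*pi)
    for eps in [1e-1, 1e-2, 1e-3]:
        # exact derivative: u0'(x) + d_y u1(x, x/eps) + eps*d_x u1(x, x/eps)
        grad_u_eps = 2*x + x*np.cos(2*np.pi*x/eps) + eps*np.sin(2*np.pi*x/eps)/(2*np.pi)
        print(eps, np.trapz(grad_u_eps*phi(x, x/eps), x))

    # two-scale pairing of u0' + d_y u1 with phi: int_Y cos^2 dy = 1/2,
    # the remaining cross terms average out, leaving int_0^1 x*(1-x)/2 dx = 1/12
    print("limit:", np.trapz(x*(1.0 - x)/2.0, x))
    ```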

    Proposition 2. Let $ (u_\epsilon) $ and $ (\epsilon\nabla u_\epsilon) $ be two bounded sequences in $ L^2(\Omega) $. Then there exists a function $ u_0({\boldsymbol{x}}, {\boldsymbol{y}}) $ in $ L^2(\Omega; H^1_\#(Y)) $ such that, up to a subsequence, $ u_\epsilon\overset{2}{\longrightarrow} u_0({\boldsymbol{x}}, {\boldsymbol{y}}) $ and $ \epsilon\nabla u_\epsilon\overset{2}{\longrightarrow} \nabla_{{\boldsymbol{y}}}u_0({\boldsymbol{x}}, {\boldsymbol{y}}) $. See identity (ii) in Proposition 1.14 in [37].

    Corollary 5. Let $ (u_\epsilon) $ be a bounded sequence in $ L^p(\Omega) $, with $ 1 < p\leq\infty $. There exists a function $ u_0({\boldsymbol{x}}, {\boldsymbol{y}}) $ in $ L^p(\Omega\times Y) $ such that, up to a subsequence, $ u_\epsilon\overset{2}{\longrightarrow} u_0({\boldsymbol{x}}, {\boldsymbol{y}}) $, i.e., for any function $ \psi({\boldsymbol{x}}, {\boldsymbol{y}})\in\mathcal{D}(\Omega; C^\infty_\#(Y)) $, we have

    $ \begin{equation} \lim\limits_{\epsilon\rightarrow0}\int_\Omega u_\epsilon({\boldsymbol{x}})\psi\!\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}} = \frac{1}{|Y|}\int_\Omega\int_Yu_0({\boldsymbol{x}},{\boldsymbol{y}})\psi({\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}. \end{equation} $ (B.11)

    See Corollary 1.15 in [37].

    Note that Propositions 1 and 2 extend straightforwardly to Bochner spaces by applying the two-scale convergence notions of Remark 13 and Definition 2 instead of the notion from Definition 1.

    Theorem 9. Let $ (u_\epsilon) $ be a sequence in $ L^p(\Omega) $ for $ 1 < p < \infty $, which two-scale converges to $ u_0\in L^p(\Omega\times Y) $ and assume that

    $ \begin{equation} \lim\limits_{\epsilon\rightarrow0}\|u_\epsilon\|_{L^p(\Omega)} = \|u_0\|_{L^p(\Omega\times Y)}. \end{equation} $ (B.12)

    Then, for any sequence $ (v_\epsilon) $ in $ L^q(\Omega) $ with $ \frac{1}{p}+\frac{1}{q} = 1 $, which two-scale converges to $ v_0\in L^q(\Omega\times Y) $, we have that

    $ \begin{equation} \int_\Omega u_\epsilon({\boldsymbol{x}})v_\epsilon({\boldsymbol{x}})\tau\!\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\mathrm{d}{\boldsymbol{x}}\rightarrow\int_\Omega\frac{1}{|Y|}\int_Y u_0({\boldsymbol{x}},{\boldsymbol{y}})v_0({\boldsymbol{x}},{\boldsymbol{y}})\tau({\boldsymbol{x}},{\boldsymbol{y}})\mathrm{d}{\boldsymbol{y}}\mathrm{d}{\boldsymbol{x}}, \end{equation} $ (B.13)

    for every $ \tau $ in $ \mathcal{D}(\Omega, C^\infty_\#(Y)) $. Moreover, if the $ Y $-periodic extension of $ u_0 $ belongs to $ L^p(\Omega; C_\#(Y)) $, then

    $ \begin{equation} \lim\limits_{\epsilon\rightarrow0}\left\|u_\epsilon({\boldsymbol{x}})-u_0\left({\boldsymbol{x}},\frac{{\boldsymbol{x}}}{\epsilon}\right)\right\|_{L^p(\Omega)} = 0. \end{equation} $ (B.14)

    See Theorem 18 in [36].

    These results generalize properties 3, 4 and 5 of the First Oscillation Lemma in such a way that the convergence applies to weak solutions, products and gradients, and they even guarantee that the convergence is strong for oscillating continuous functions.

    Hence, two-scale convergence is suitable for upscaling problems.


