
Γ-convergence of quadratic functionals with non uniformly elliptic conductivity matrices

    Citation: Lorenza D'Elia. Γ-convergence of quadratic functionals with non uniformly elliptic conductivity matrices[J]. Networks and Heterogeneous Media, 2022, 17(1): 15-45. doi: 10.3934/nhm.2021022




    In this paper, for a bounded domain Ω of R^d, we study the homogenization through Γ-convergence of the conductivity energy with a zero-order term of the type

    \begin{equation} {\mathscr{F}_\varepsilon}(u) : = \left\{ \begin{array}{ll} \int_{{\Omega}} \left \{A\left ({\frac{x}{\varepsilon}}\right){\nabla} u\cdot{\nabla} u+|u|^2\right\}dx, &\text{if} \quad u\in H^1_0({\Omega}), \\ \quad\infty, &\text{if} \quad u\in L^2({\Omega})\setminus H^1_0({\Omega}). \end{array} \right. \end{equation} (1)

    The conductivity A is a Y_d-periodic, symmetric and non-negative matrix-valued function in L^∞(R^d)^{d×d}, denoted by L^∞_per(Y_d)^{d×d}, which is not strongly elliptic, i.e.

    \begin{equation} \mathop{\text{ess-inf}}\limits_{y\in Y_d}\left (\min\left \{A(y)\xi\cdot\xi \ : \ \xi\in{\mathbb{R}}^d, \ |\xi| = 1\right\}\right) = 0. \end{equation} (2)
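As a simple numerical illustration (ours, not part of the original analysis), a bounded, symmetric, non-negative matrix whose quadratic form ignores one direction has smallest eigenvalue zero, so the ess-inf in 2 vanishes even though the matrix is bounded:

```python
import numpy as np

# Hypothetical degenerate conductivity at a fixed y: the quadratic form
# A(y)xi.xi = a(y)*xi_1^2 ignores the direction e_2, so the smallest
# eigenvalue of A(y) is 0 and condition (2) holds.
a = 2.0                                   # an arbitrary positive value a(y)
A = np.array([[a, 0.0],
              [0.0, 0.0]])                # symmetric, non-negative, degenerate

eigs = np.linalg.eigvalsh(A)
assert eigs.min() < 1e-12                 # smallest eigenvalue vanishes
assert abs(eigs.max() - a) < 1e-12        # yet A stays bounded
```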

    This condition holds true when the conductivity energy density has missing derivatives. This occurs, for example, when the quadratic form associated to A is given by

    \begin{equation*} A(y)\xi\cdot\xi : = A'(y)\xi'\cdot\xi' \qquad\text{for}\quad \xi = (\xi', \xi_d)\in{\mathbb{R}}^{d-1}\times{\mathbb{R}}, \end{equation*}

    where A′ ∈ L^∞_per(Y_d)^{(d-1)×(d-1)} is a symmetric and non-negative matrix-valued function. It is known (see e.g. [13,Chapters 24 and 25]) that the strong ellipticity of the matrix A, i.e.

    \begin{equation} \mathop{\text{ess-inf}}\limits_{y\in Y_d}\left (\min\left \{A(y)\xi\cdot\xi \ : \ \xi\in{\mathbb{R}}^d, \ |\xi| = 1\right\}\right) > 0, \end{equation} (3)

    combined with the boundedness of A, implies a compactness result for the conductivity functional

    \begin{equation*} u\in H^1_0({\Omega}) \mapsto \int_{{\Omega}} A\left ({\frac{x}{\varepsilon}}\right){\nabla} u\cdot{\nabla} u\,dx \end{equation*}

    for the L^2(Ω)-strong topology. The Γ-limit is given by

    \begin{equation*} \int_{{\Omega}} A^\ast{\nabla} u\cdot{\nabla} u\,dx, \end{equation*}

    where the matrix A^∗ is defined by the classical homogenization formula

    \begin{equation} A^\ast\lambda\cdot\lambda : = \min\left \{\int_{Y_d}A(y)(\lambda+{\nabla} v(y))\cdot(\lambda+{\nabla} v(y))\,dy \ : \ v\in {H^1_{\text{per}}}(Y_d)\right\}. \end{equation} (4)
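In dimension one, formula 4 can be made completely explicit: the Euler-Lagrange equation forces a(y)(λ + v′(y)) to be constant, and the homogenized coefficient reduces to the harmonic mean of a. A short numerical sketch of ours (with a hypothetical coefficient a(y), not taken from the paper) checks that the explicit minimizer beats arbitrary mean-zero trial fields in the discretized cell problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-periodic coefficient a(y) > 0, sampled at grid midpoints.
n = 1000
y = (np.arange(n) + 0.5) / n
a = 2.0 + np.sin(2 * np.pi * y)          # strictly positive, 1-periodic

def energy(vprime, lam=1.0):
    """Discrete cell energy  mean( a_i * (lam + v'_i)^2 )  over one period."""
    return np.mean(a * (lam + vprime) ** 2)

# Explicit minimizer for lam = 1: a(y)(1 + v'(y)) is constant, the constant
# being the harmonic mean of a (the classical 1D homogenization formula).
a_star = 1.0 / np.mean(1.0 / a)          # harmonic mean = homogenized coefficient
vprime_opt = a_star / a - 1.0            # mean-zero by construction

assert abs(np.mean(vprime_opt)) < 1e-12  # admissible (periodic corrector)
assert abs(energy(vprime_opt) - a_star) < 1e-12

# Any other admissible (mean-zero) trial field gives a larger energy.
for _ in range(5):
    w = rng.normal(size=n)
    w -= w.mean()
    assert energy(w) >= a_star - 1e-12
```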

    The Γ-convergence for the L^p(Ω)-strong topology, for p > 1, for the class of integral functionals F_ε of the form

    \begin{equation} {\mathscr{F}_\varepsilon}(u) = \int_{{\Omega}} f\left ({\frac{x}{\varepsilon}}, Du\right)dx, \qquad\text{for}\quad u\in W^{1, p}({\Omega};{\mathbb{R}}^m), \end{equation} (5)

    where f : R^d×R^{m×d} → R is a Borel function, 1-periodic in the first variable and satisfying the standard growth conditions of order p, namely c_1|M|^p ≤ f(x, M) ≤ c_2(|M|^p + 1) for a.e. x and for any real (m×d)-matrix M, has been widely studied and is a classical subject (see e.g. [4,Chapter 12] and [13,Chapter 24]). On the contrary, the Γ-convergence of oscillating functionals for the weak topology on bounded sets of L^p(Ω) has been analysed very little. An example of the study of Γ-convergence for the L^p(Ω)-weak topology can be found in the paper [6] where, in the context of double-porosity, the authors compare the Γ-limit for non-linear functionals analogous to 5, computed with respect to different topologies, in particular with respect to the L^p(Ω)-weak topology.

    In this paper, we investigate the Γ-convergence, for the weak topology on bounded sets of L^2(Ω) (a metrizable topology), of the conductivity functional under condition 2. In this case, one has no a priori L^2(Ω)-bound on the sequence of gradients, which implies a loss of coerciveness of the investigated energy. To overcome this difficulty, we add a quadratic zeroth-order term of the form ‖u‖^2_{L^2(Ω)}, so that we immediately obtain the coerciveness of F_ε in the weak topology of L^2(Ω), namely, for u ∈ H^1_0(Ω),

    \begin{equation*} {\mathscr{F}_\varepsilon}(u)\geq \int_{{\Omega}}|u|^2dx. \end{equation*}

    This estimate guarantees that the Γ-limit for the weak topology on bounded sets of L^2(Ω) is characterized by conditions (i) and (ii) of Definition 1.1 below (see [13,Proposition 8.10]), and, thanks to a compactness result (see [13,Corollary 8.12]), that F_ε Γ-converges for the weak topology of L^2(Ω), up to subsequences, to some functional. We will show that, under the following assumptions:

    (H1) any two-scale limit u_0(x, y) of a sequence u_ε of functions in L^2(Ω) with bounded energy F_ε(u_ε) does not depend on y (see [1,Theorem 1.2]);

    (H2) the space V defined by

    \begin{equation*} V: = \left \{\int_{Y_d}A^{1/2}(y)\Phi(y)dy \ : \ \Phi\in{L^2_{\text{per}}}(Y_d;{\mathbb{R}^d}) \ \text{ with } \ {\mathrm{div}}\left (A^{1/2}(y)\Phi(y)\right) = 0 \ \text{ in } \ \mathscr{D}'({\mathbb{R}}^d)\right\} \end{equation*}

    agrees with the space R^d,

    the Γ-limit is given by

    \begin{equation} {\mathscr{F}_0}(u) : = \left\{ \begin{array}{ll} \int_{{\Omega}} \left \{A^\ast{\nabla} u\cdot{\nabla} u+|u|^2\right\}dx, &\text{if} \quad u\in H^1_0({\Omega}), \\ \quad\infty, &\text{if} \quad u\in L^2({\Omega})\setminus H^1_0({\Omega}), \end{array} \right. \end{equation} (6)

    where the homogenized matrix A^∗ is given through the expected homogenization formula

    \begin{equation} A^\ast\lambda\cdot\lambda : = \inf\left \{\int_{Y_d}A(y)(\lambda+{\nabla} v(y))\cdot(\lambda+{\nabla} v(y))\,dy \ : \ v\in {H^1_{\text{per}}}(Y_d)\right\}. \end{equation} (7)

    We need to make assumption (H1) since, for a sequence u_ε with bounded energy, i.e. sup_{ε>0} F_ε(u_ε) < ∞, the sequence of gradients ∇u_ε is in general not bounded in L^2(Ω;R^d), due to the lack of ellipticity of the matrix-valued conductivity A(y). Assumption (H2) turns out to be equivalent to the positive definiteness of the homogenized matrix (see Proposition 1).
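To see how the infimum in 7 can degenerate when 2 holds, one can look at the one-dimensional situation, where the homogenized coefficient of the perturbed coefficient a + δ is its harmonic mean. The following sketch of ours (with a hypothetical coefficient vanishing on half the period, not an example from the paper) shows the homogenized coefficient collapsing to 0, the case where positive definiteness, and hence (H2), fails:

```python
import numpy as np

# Hypothetical 1-periodic coefficient vanishing on half of the period:
# a = 0 on [0, 1/2), a = 1 on [1/2, 1), sampled at grid midpoints.
n = 1000
y = (np.arange(n) + 0.5) / n
a = np.where(y < 0.5, 0.0, 1.0)

def a_star(delta):
    """1D homogenized coefficient of a + delta: the harmonic mean."""
    return 1.0 / np.mean(1.0 / (a + delta))

for delta in [1e-1, 1e-2, 1e-3, 1e-4]:
    # mean(1/(a+delta)) >= (1/2)/delta, hence a_star(delta) <= 2*delta -> 0,
    # so the limiting homogenized coefficient is degenerate.
    assert a_star(delta) <= 2.0 * delta
```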

    In the 2D isotropic elasticity setting of [11], the authors make use of conditions similar to (H1) and (H2) in the proof of the main results (see [11,Theorems 3.3 and 3.4]). They investigate the limit, in the sense of Γ-convergence for the L^2(Ω)-weak topology, of the elasticity functional with a zeroth-order term in the case of two-phase isotropic laminate materials where phase 1 is very strongly elliptic, while phase 2 is only strongly elliptic. The strong ellipticity of the effective tensor is preserved through the homogenization process except in the case when the volume fraction of each phase is 1/2, as first evidenced by Gutiérrez [14]. Indeed, Gutiérrez has provided two- and three-dimensional examples of 1-periodic rank-one laminates such that the homogenized tensor induced by a homogenization process, labelled 1-convergence, is not strongly elliptic. These examples have been revisited by means of a homogenization process using Γ-convergence in the two-dimensional case in [10] and in the three-dimensional case in [12].

    In the present scalar case, we highlight assumptions (H1) and (H2), which are the key ingredients to obtain the general Γ-convergence result of Theorem 2.1. Using Nguetseng-Allaire two-scale convergence [1,16], we prove that, for any dimension d ≥ 2, the Γ-limit F_0 given by 6 for the weak topology of L^2(Ω) actually agrees with the one obtained for the L^2(Ω)-strong topology under the uniform ellipticity 3, upon replacing the minimum in 4 by the infimum in 7. Assumption (H2) implies the coerciveness of the functional F_0, showing that its domain is H^1_0(Ω) and that the homogenized matrix A^∗ is positive definite. More precisely, the positive definiteness of A^∗ turns out to be equivalent to assumption (H2) (see Proposition 1). We also provide two- and three-dimensional 1-periodic rank-one laminates which satisfy assumptions (H1) and (H2) (see Proposition 2 for the two-dimensional case and Proposition 3 for the three-dimensional case). Thanks to Theorem 2.1, the corresponding homogenized matrix A^∗ is positive definite. For this class of laminates, an alternative and independent proof of the positive definiteness of A^∗ is performed using an explicit expression of A^∗ (see Proposition 5). This expression generalizes the classical laminate formula for non-degenerate phases (see [17] and also [2,Lemma 1.3.32], [8]) to the case of two-phase rank-one laminates with degenerate and anisotropic phases.

    The lack of assumption (H1) may induce a degenerate asymptotic behaviour of the functional F_ε given by 1. We provide a two-dimensional rank-one laminate with two degenerate phases for which the functional F_ε does Γ-converge for the L^2(Ω)-weak topology, but to a functional F which differs from the one given by 6 (see Proposition 4). In this example, any two-scale limit u_0(x, y) of a sequence with bounded energy F_ε(u_ε) depends on the variable y. Moreover, we give two quite different expressions of the Γ-limit F which, to the best of our knowledge, seem to be original. The energy density of the first expression is written in terms of the Fourier transform of the target function. The second expression appears as a non-local functional due to the presence of a convolution term. However, we do not know whether the Γ-limit F is a Dirichlet form in the sense of Beurling-Deny [3], since the Markovian property is not stable under the L^2(Ω)-weak topology (see Remark 2).

    The paper is organized as follows. In Section 2, we prove a general Γ-convergence result (see Theorem 2.1) for the functional F_ε given by 1 with an arbitrary non-uniformly elliptic matrix-valued function A, under assumptions (H1) and (H2). In Section 3 we illustrate the general result of Section 2 with periodic two-phase rank-one laminates with two (possibly) degenerate and anisotropic phases in dimensions two and three. We provide algebraic conditions so that assumptions (H1) and (H2) are satisfied (see Propositions 2 and 3). In Section 4 we exhibit a two-dimensional counter-example where assumption (H1) fails, which leads us to a degenerate Γ-limit F involving a convolution term (see Proposition 4). Finally, in the Appendix we give an explicit formula for the homogenized matrix A^∗ for two-phase rank-one laminates with (possibly) degenerate phases. We also provide an alternative proof of the positive definiteness of A^∗, using an explicit expression of A^∗, for the class of two-phase rank-one laminates introduced in Section 3 (see Proposition 5).

    Notation.

    ● For i = 1, …, d, e_i denotes the i-th vector of the canonical basis in R^d;

    ● I_d denotes the unit matrix of R^{d×d};

    ● H^1_per(Y_d;R^n) (resp. L^2_per(Y_d;R^n), C_per(Y_d;R^n)) is the space of those functions in H^1_loc(R^d;R^n) (resp. L^2_loc(R^d;R^n), C_loc(R^d;R^n)) that are Y_d-periodic;

    ● Throughout, the variable x will refer to a running point in a bounded open domain Ω ⊂ R^d, while the variable y will refer to a running point in Y_d (or k + Y_d, k ∈ Z^d);

    ● We write

    \begin{equation*} {u_\varepsilon} \mathop{\rightharpoonup}\limits^{\rightharpoonup} u_0 \end{equation*}

    with u_ε ∈ L^2(Ω) and u_0 ∈ L^2(Ω×Y_d) if u_ε two-scale converges to u_0 in the sense of Nguetseng-Allaire (see [1,16]);

    ● F_1 and F_2 denote the Fourier transforms defined on L^1(R) and L^2(R) respectively. For f ∈ L^1(R) ∩ L^2(R), the Fourier transform F_1(f) of f is defined by

    \begin{equation*} F_1(f)(\lambda) : = \int_{{\mathbb{R}}} e^{-2\pi i\lambda x}f(x)\,dx. \end{equation*}
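As a sanity check of this normalization (a numerical aside of ours, not from the paper): with the kernel e^{-2πiλx}, the Gaussian e^{-πx²} is a fixed point of F_1, which a direct quadrature confirms:

```python
import numpy as np

# With the convention F1(f)(lam) = ∫ exp(-2*pi*i*lam*x) f(x) dx,
# the Gaussian f(x) = exp(-pi x^2) satisfies F1(f) = f.
x = np.linspace(-10.0, 10.0, 40001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

for lam in [0.0, 0.5, 1.0]:
    # Riemann sum; the tails of f are negligible on [-10, 10]
    F = np.sum(np.exp(-2j * np.pi * lam * x) * f) * dx
    assert abs(F - np.exp(-np.pi * lam**2)) < 1e-8
```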

    Definition 1.1. Let X be a reflexive and separable Banach space endowed with the weak topology σ(X,X′), and let F_ε : X → R ∪ {∞} be an ε-indexed sequence of functionals. The sequence F_ε Γ-converges to the functional F_0 : X → R ∪ {∞} for the weak topology of X, and we write F_ε Γ(X)-w⇀ F_0, if for any u ∈ X,

    i) for any sequence u_ε ⇀ u, F_0(u) ≤ lim inf_{ε→0} F_ε(u_ε);

    ii) there exists a sequence ū_ε ⇀ u such that lim_{ε→0} F_ε(ū_ε) = F_0(u).

    Such a sequence ū_ε is called a recovery sequence.

    Recall that the weak topology of L^2(Ω) is metrizable on bounded sets, i.e. there exists a metric d on L^2(Ω) such that on every norm-bounded subset B of L^2(Ω) the weak topology coincides with the topology induced on B by the metric d (see e.g. [13,Proposition 8.7]).

    In this section, we will prove the main result of this paper. As previously announced, up to a subsequence, the sequence of functionals {\mathscr{F}_\varepsilon} , given by 1 with a non-uniformly elliptic matrix-valued conductivity A(y) , \Gamma -converges for the weak topology on bounded sets of L^2({\Omega}) to some functional. Our aim is to show that the \Gamma -limit is exactly {\mathscr{F}_0} when u\in H^1_0({\Omega}) .

    Theorem 2.1. Let {\mathscr{F}_\varepsilon} be functionals given by 1 with A(y) a Y_d -periodic, symmetric, non-negative matrix-valued function in L^\infty({\mathbb{R}}^d)^{d\times d} satisfying 2. Assume the following assumptions

    \rm (H1) any two-scale limit u_0(x, y) of a sequence {u_\varepsilon} of functions in L^2({\Omega}) with bounded energy {\mathscr{F}_\varepsilon}({u_\varepsilon}) does not depend on y ;

    \rm (H2) the space V defined by

    \begin{equation} V: = \biggl\{ \int_{Y_d}A^{1/2}(y)\Phi(y)dy \ : \ \Phi\in L^2_{\rm per}(Y_d;{\mathbb{R}}^d) \ \text{ with } \ \mathrm{div}\left (A^{1/2}(y)\Phi(y)\right) = 0 \ \text{ in } \ \mathscr{D}'({\mathbb{R}}^d) \biggr\} \end{equation} (8)

    agrees with the space {\mathbb{R}^d} .

    Then, {\mathscr{F}_\varepsilon} \Gamma -converges for the weak topology of L^2({\Omega}) to {\mathscr{F}_0} , i.e.

    \begin{equation*} {\mathscr{F}_\varepsilon}{{\stackrel{\Gamma(L^2)-w}{\rightharpoonup}}}{\mathscr{F}_0}, \end{equation*}

    where {\mathscr{F}_0} is defined by 6 and A^\ast is given by 7.

    Proof. We split the proof into two steps which are an adaptation of [11,Theorem 3.3] using the sole assumptions (H1) and (H2) in the general setting of conductivity.

    Step 1 - \Gamma - \liminf inequality.

    Consider a sequence \{{u_\varepsilon} \}_\varepsilon converging weakly in L^2({\Omega}) to u\in L^2({\Omega}) . We want to prove that

    \begin{equation} \liminf\limits_{\varepsilon\to 0 } {\mathscr{F}_\varepsilon} ({u_\varepsilon})\geq {\mathscr{F}_0}(u). \end{equation} (9)

    If the lower limit is \infty then 9 is trivial. Up to a subsequence, still indexed by \varepsilon , we may assume that \liminf{\mathscr{F}_\varepsilon}({u_\varepsilon}) is a limit and we can also assume henceforth that, for some 0<C<\infty ,

    \begin{equation} {\mathscr{F}_\varepsilon}({u_\varepsilon})\leq C. \end{equation} (10)

    As {u_\varepsilon} is bounded in L^2({\Omega}) , there exists a subsequence, still indexed by \varepsilon , which two-scale converges to a function u_0(x, y)\in L^2({\Omega}\times Y_d) (see e.g. [1,Theorem 1.2]). In other words,

    \begin{equation} {u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup u_0. \end{equation} (11)

    Assumption (H1) ensures that

    \begin{equation} u_0(x, y) = u(x) \quad\text{is independent of} \quad y, \end{equation} (12)

    where, according to the link between two-scale and weak L^2({\Omega}) -convergences (see [1,Proposition 1.6]), u is the weak limit of {u_\varepsilon} , i.e.

    \begin{equation*} {u_\varepsilon}\rightharpoonup u \quad\text{weakly in} \quad L^2({\Omega}). \end{equation*}

    Since all the components of the matrix A(y) are bounded and A(y) is non-negative as a quadratic form, in view of 10, for another subsequence (not relabeled), we have

    \begin{equation*} A\left ({\frac{x}{\varepsilon}}\right){\nabla}{u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup \sigma_0(x, y) \qquad\text{with} \quad \sigma_0\in L^2({\Omega}\times Y_d; {\mathbb{R}^d}) , \end{equation*}

    and also

    \begin{equation} A^{1/2}\left ({\frac{x}{\varepsilon}}\right){\nabla}{u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup \Theta_0(x, y) \qquad\text{with} \quad \Theta_0\in L^2({\Omega}\times Y_d; {\mathbb{R}^d}). \end{equation} (13)

    In particular

    \begin{equation} \varepsilon A\left ({\frac{x}{\varepsilon}}\right){\nabla}{u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup 0. \end{equation} (14)

    Consider \Phi\in{L^2_{\text{per}}}(Y_d; {\mathbb{R}^d}) such that

    \begin{equation} {\mathrm{div}}\left (A^{1/2}(y)\Phi(y)\right) = 0 \qquad\text{in} \quad \mathscr{D}'({\mathbb{R}}^d), \end{equation} (15)

    or equivalently,

    \begin{equation} \notag \int_{Y_d} A^{1/2}(y)\Phi(y)\cdot{\nabla}\psi(y)dy = 0 \qquad\forall \psi\in{H^1_{\text{per}}}(Y_d). \end{equation}

    Take also \varphi\in C^{\infty}(\overline{{\Omega}}) . Since {u_\varepsilon}\in H^1_0({\Omega}) and in view of 15, an integration by parts yields

    \begin{align*} \int_{{\Omega}} A^{1/2}\left ({\frac{x}{\varepsilon}}\right)&{\nabla}{u_\varepsilon}\cdot\Phi\left ({\frac{x}{\varepsilon}}\right)\varphi(x)dx = - \int_{{\Omega}}{u_\varepsilon} A^{1/2}\left ({\frac{x}{\varepsilon}}\right)\Phi\left ({\frac{x}{\varepsilon}}\right)\cdot{\nabla}\varphi(x)dx. \end{align*}

    By using [1,Lemma 5.7], A^{1/2}(y)\Phi(y)\cdot{\nabla} \varphi(x) is an admissible test function for the two-scale convergence. Then, we can pass to the two-scale limit in the previous expression with the help of the convergences 11 and 13 along with 12, and we obtain

    \begin{equation} \int_{{\Omega}\times Y_d} \Theta_0(x, y)\cdot\Phi(y)\varphi(x)dxdy = -\int_{{\Omega}\times Y_d} u(x)A^{1/2}(y)\Phi(y)\cdot{\nabla}\varphi(x)dxdy. \end{equation} (16)

    We prove that the target function u is in H^1(\Omega) . Setting

    \begin{equation} N: = \int_{Y_d} A^{1/2}(y)\Phi(y)dy, \end{equation} (17)

    and varying \varphi in {C^\infty_{\text{c}}}({\Omega}) , the equality 16 reads as

    \begin{equation*} \label{formulaN} \int_{{\Omega}\times Y_d} \Theta_0(x, y)\cdot \Phi(y)\varphi(x)dxdy = -\int_{{\Omega}} u(x)N\cdot {\nabla}\varphi(x)dx \end{equation*}

    Since the integral in the left-hand side is bounded by a constant times \|\varphi\|_{L^2({\Omega})} , the right-hand side is a linear and continuous map in \varphi\in L^2({\Omega}) . By the Riesz representation theorem, there exists g\in L^2({\Omega}) such that, for any \varphi\in{C^\infty_{\text{c}}}({\Omega}) ,

    \begin{equation*} \int_{{\Omega}} u(x)N\cdot {\nabla}\varphi(x)dx = \int_{{\Omega}} g(x)\varphi(x)dx, \end{equation*}

    which implies that

    \begin{equation} N\cdot{\nabla} u\in L^2({\Omega}). \end{equation} (18)

    In view of assumption (H2), N is an arbitrary vector in {\mathbb{R}^d} so that we infer from 18 that

    \begin{equation} u\in H^1({\Omega}). \end{equation} (19)

    This combined with equality 16 leads us to

    \begin{equation} \int_{{\Omega}\times Y_d} \Theta_0(x, y)\cdot\Phi(y)\varphi(x)dxdy = \int_{{\Omega}\times Y_d}A^{1/2}(y){\nabla} u(x)\cdot\Phi(y)\varphi(x)dxdy. \end{equation} (20)

    By density, the last equality holds when the test function \Phi(y)\varphi(x) is replaced by any \psi(x, y)\in L^2({\Omega}; {L^2_{\text{per}}}(Y_d;{\mathbb{R}^d})) such that

    \begin{equation*} {\mathrm{div}}_y\left (A^{1/2}(y)\psi(x, y)\right) = 0 \qquad\text{in} \quad \mathscr{D}'({\mathbb{R}}^d), \end{equation*}

    or equivalently,

    \begin{equation*} \int_{{\Omega}\times Y_d}\psi(x, y)\cdot A^{1/2}(y){\nabla}_yv(x, y)dxdy = 0 \qquad\forall v\in L^2({\Omega};{H^1_{\text{per}}}(Y_d)). \end{equation*}

    The orthogonal complement in L^2({\Omega}; {L^2_{\text{per}}}(Y_d;{\mathbb{R}^d})) of that set is the L^2 -closure of

    \begin{equation*} \mathscr{K} : = \left \{A^{1/2}(y){\nabla}_yv(x, y) \quad : \quad v\in L^2({\Omega}; {H^1_{\text{per}}}(Y_d))\right\}. \end{equation*}

    Thus, the equality 20 yields

    \begin{equation*} \Theta_0(x, y) = A^{1/2}(y){\nabla} u(x)+S(x, y) \end{equation*}

    for some S in the closure of \mathscr{K} , i.e. there exists a sequence v_n\in L^2({\Omega};{H^1_{\text{per}}}(Y_d)) such that

    \begin{equation*} A^{1/2}(y){\nabla}_yv_n(x, y) \to S(x, y)\qquad\text{strongly in}\quad L^2({\Omega};{L^2_{\text{per}}}(Y_d;{\mathbb{R}^d})). \end{equation*}

    Due to the lower semi-continuity property of two-scale convergence (see [1,Proposition 1.6]), we get

    \begin{align*} \liminf\limits_{\varepsilon\to 0} \|A^{1/2}(x/\varepsilon){\nabla}{u_\varepsilon}\|^2_{L^2({\Omega};{\mathbb{R}^d})}&\geq \|\Theta_0\|^2_{L^2({\Omega}\times Y_d;{\mathbb{R}^d})}\\ & = \lim\limits_n\left \|A^{1/2}(y)\left ({\nabla}_xu(x)+{\nabla}_yv_n\right)\right\|^2_{L^2({\Omega}\times Y_d;{\mathbb{R}^d})}. \end{align*}

    Then, by the weak L^2 -lower semi-continuity of \|{u_\varepsilon}\|_{L^2({\Omega})} , we have

    \begin{align*} &\liminf\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}({u_\varepsilon})\notag\\ &\geq \lim\limits_{n}\int_{{\Omega}\times Y_d}A(y)({\nabla}_xu(x)+{\nabla}_yv_n(x, y))\cdot({\nabla}_xu(x)+{\nabla}_yv_n(x, y))dxdy + \int_{{\Omega}}|u|^2dx\\ &\geq\int_{{\Omega}}\inf\biggl\{\int_{ Y_d}A(y)({\nabla}_xu(x)+{\nabla}_yv(y))\cdot({\nabla}_xu(x)+{\nabla}_yv(y))dy \quad : \quad \quad v\in {H^1_{\text{per}}}(Y_d) \biggl\} dx\notag\\ &\qquad + \int_{{\Omega}}|u|^2dx. \end{align*}

    Recalling the definition 7, we immediately conclude that

    \begin{equation*} \liminf\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}({u_\varepsilon})\geq \int_{{\Omega}}\left \{A^\ast{\nabla} u\cdot{\nabla} u + |u|^2\right\}dx, \end{equation*}

    provided that u\in H^1_0({\Omega}) .

    It remains to prove that the target function u is actually in H^1_0({\Omega}) , giving a complete characterization of the \Gamma -limit. To this end, take x_0\in\partial{\Omega} to be a Lebesgue point for u\lfloor\partial{\Omega} and for \nu (x_0) , the exterior normal to {\Omega} at the point x_0 . Thanks to 19, we know that u\in H^1({\Omega}) , hence, after an integration by parts of the right-hand side of 16, we obtain, for \varphi\in C^\infty(\overline{{\Omega}}) ,

    \begin{align} \int_{{\Omega}\times Y_d}\Theta_0(x, y)\cdot\Phi(y)\varphi(x)dxdy & = \int_{{\Omega}}N\cdot{\nabla} u(x)\varphi(x)dx\\ &\quad - \int_{\partial{\Omega}}N\cdot \nu(x)u(x)\varphi(x)d\mathscr{H}, \end{align} (21)

    where N is given by 17. Varying \varphi in {C^\infty_{\text{c}}}({\Omega}) , the first two integrals in 21 are equal and bounded by a constant times \|\varphi\|_{L^2({\Omega})} . It follows that, for any \varphi\in C^\infty(\overline{{\Omega}}) ,

    \begin{equation*} \int_{\partial{\Omega}}N\cdot \nu(x)u(x)\varphi(x)d\mathscr{H} = 0, \end{equation*}

    which leads to N\cdot \nu(x) u(x) = 0 \mathscr{H} -a.e. on \partial{\Omega} . Since x_0 is a Lebesgue point, we have

    \begin{equation} N\cdot \nu(x_0) u(x_0) = 0. \end{equation} (22)

    In view of assumption (H2) and the arbitrariness of N , we can choose N such that N = \nu(x_0) so that from 22 we get u(x_0) = 0 . Hence,

    \begin{equation*} u\in H^1_0({\Omega}). \end{equation*}

    This concludes the proof of the \Gamma - \liminf inequality.

    Step 2 - \Gamma - \limsup inequality.

    We use the same arguments as in [12,Theorem 2.4], which easily extend to the conductivity setting. We just give an idea of the proof, which is based on a perturbation argument. For \delta>0 , let A_\delta be the perturbed matrix of {\mathbb{R}}^{d\times d} defined by

    \begin{equation*} A_\delta: = A+\delta I_d, \end{equation*}

    where I_d is the unit matrix of {\mathbb{R}}^{d\times d} . Since the matrix A is non-negative, A_\delta turns out to be positive definite, hence, the functional {\mathscr{F}_\varepsilon}^\delta , defined by 1 with A_\delta in place of A , \Gamma -converges to the functional \mathscr{F}^\delta given by

    \begin{equation*} \mathscr{F}^\delta(u) : = \left\{ \begin{array}{l} \int_{{\Omega}} \left \{A^\ast_\delta{\nabla} u\cdot{\nabla} u + |u|^2\right\}dx, &\text{if} \quad u\in H^1_0({\Omega}), \\ \quad \infty, &\text{if} \quad u\in L^2({\Omega})\setminus H^1_0({\Omega}), \end{array} \right. \end{equation*}

    for the strong topology of L^2({\Omega}) (see e.g. [13,Corollary 24.5]). Thanks to the compactness result of \Gamma -convergence (see e.g. [4,Proposition 1.42]), there exists a subsequence \varepsilon_j such that \mathscr{F}_{\varepsilon_j} \Gamma -converges for the L^2({\Omega}) -strong topology to some functional F^0 . Let u\in H^1_0({\Omega}) and let u_{\varepsilon_j} be a recovery sequence for \mathscr{F}_{\varepsilon_j} which converges to u for the H^1({\Omega}) -weak topology on bounded sets. Since \mathscr{F}_{\varepsilon_j}\leq \mathscr{F}_{\varepsilon_j}^\delta and since u_{\varepsilon_j} belongs to some bounded set of H^1(\Omega) , from [13,Propositions 6.7 and 8.10] we deduce that

    \begin{align*} F^0(u)&\leq \mathscr{F}^\delta(u)\\ &\leq \liminf\limits_{\varepsilon_j\to 0} \int_{{\Omega}}\left \{A_\delta\left (\frac{x}{\varepsilon_j}\right){\nabla} u_{\varepsilon_j}\cdot {\nabla} u_{\varepsilon_j} +|u_{\varepsilon_j}|^2\right\}dx\\ &\leq \liminf\limits_{\varepsilon_j\to 0} \int_{{\Omega}}\left \{A\left (\frac{x}{\varepsilon_j}\right){\nabla} u_{\varepsilon_j}\cdot {\nabla} u_{\varepsilon_j} +|u_{\varepsilon_j}|^2\right\}dx + O(\delta)\\ & = F^0(u) + O(\delta). \end{align*}

    It follows that \mathscr{F}^\delta converges to F^0 as \delta\to 0 . Then, the \Gamma -limit F^0 of \mathscr{F}_{\varepsilon_j} is independent of the subsequence \varepsilon_j . Repeating the same arguments, any subsequence of {\mathscr{F}_\varepsilon} has a further subsequence which \Gamma -converges for the strong topology of L^2({\Omega}) to F^0 = \lim_{\delta\to 0}\mathscr{F}^\delta . Thanks to the Urysohn property (see e.g. [4,Proposition 1.44]), the whole sequence {\mathscr{F}_\varepsilon} \Gamma -converges to the functional F^0 for the strong topology of L^2({\Omega}) . On the other hand, in light of the definition 7 of A^\ast , we get that A^\ast_\delta converges to A^\ast as \delta\to 0 , i.e.

    \begin{equation} \lim\limits_{\delta\to 0}A^\ast_\delta = A^\ast. \end{equation} (23)

    Thanks to the Lebesgue dominated convergence theorem and in view of 23, we get that F^0 = \lim_{\delta\to 0}\mathscr{F}^\delta is exactly {\mathscr{F}_0} given by 6. Therefore, {\mathscr{F}_\varepsilon} \Gamma -converges to {\mathscr{F}_0} for the L^2({\Omega}) -strong topology.
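For a concrete instance of the convergence 23 (a numerical sketch of ours on a hypothetical example, using the classical laminate formula for a rank-one laminate with two diagonal phases in proportion 1/2 each: harmonic mean of the entries in the lamination direction e_1, arithmetic mean transversally):

```python
import numpy as np

# Hypothetical rank-one laminate in direction e1, two diagonal phases
# occupying half of the period each (our own choice of entries).
alpha = np.array([1.0, 3.0])   # e1-entries of the two phases
beta  = np.array([2.0, 4.0])   # e2-entries of the two phases

def a_star(delta):
    """Homogenized matrix of the perturbed laminate A + delta*I_2."""
    h = 1.0 / np.mean(1.0 / (alpha + delta))   # harmonic mean, direction e1
    m = np.mean(beta + delta)                  # arithmetic mean, direction e2
    return np.diag([h, m])

A_star = a_star(0.0)
for delta in [1e-1, 1e-3, 1e-6]:
    # A*_delta -> A* as delta -> 0, at a linear rate in delta here
    gap = np.linalg.norm(a_star(delta) - A_star)
    assert gap < 3.0 * delta
```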

    Now, let us show that {\mathscr{F}_\varepsilon} \Gamma -converges to {\mathscr{F}_0} for the weak topology of L^2({\Omega}) . Recall that the L^2({\Omega}) -weak topology is metrizable on closed balls of L^2({\Omega}) . Fix n\in\mathbb{N} and let d_{B_n} be any metric inducing the L^2({\Omega}) -weak topology on the ball B_n centered at 0 and of radius n . Let u\in H^1_0({\Omega}) and let \overline{u}_{\varepsilon} be a recovery sequence for {\mathscr{F}_\varepsilon} for the L^2({\Omega}) -strong topology. Since the topology induced by the metric d_{B_n} on B_n is weaker than the L^2({\Omega}) -strong topology, \overline{u}_{\varepsilon} is also a recovery sequence for {\mathscr{F}_\varepsilon} for the L^2({\Omega}) -weak topology on B_n . Hence,

    \begin{equation*} \lim\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(\overline{u}_{\varepsilon}) = {\mathscr{F}_0}(u), \end{equation*}

    which proves the \Gamma - \limsup inequality in B_n . Finally, since any sequence converging weakly in L^2({\Omega}) belongs to some ball B_n\subset L^2({\Omega}) , as well as its limit, it follows that the \Gamma - \limsup inequality holds true for {\mathscr{F}_\varepsilon} for L^2({\Omega}) -weak topology, which concludes the proof.

    The next proposition provides a characterization of assumption \rm(H2) in terms of the homogenized matrix A^\ast .

    Proposition 1. Assumption \rm(H2) is equivalent to the positive definiteness of A^\ast , or equivalently,

    \begin{equation} \notag {\text{Ker}}(A^\ast) = V^\perp. \end{equation}

    Proof. Consider \lambda\in{\text{Ker}}(A^\ast) . Define

    \begin{equation*} H^1_\lambda(Y_d) : = \left \{u\in H^1_{\rm loc}({\mathbb{R}^d}) \quad : \quad {\nabla} u \quad \text{is} \quad Y_d\text{-periodic and} \quad \int_{Y_d}{\nabla} u(y)dy = \lambda \right\}. \end{equation*}

    Recall that u\in H^1_\lambda(Y_d) if and only if there exists v\in{H^1_{\text{per}}}(Y_d) such that u(y) = v(y) + \lambda\cdot y (see e.g. [13,Lemma 25.2]). Since A^\ast is non-negative and symmetric, from 7 it follows that

    \begin{align*} 0 = A^\ast\lambda\cdot\lambda & = \inf\left \{ \int_{ Y_d}A(y){\nabla} u(y)\cdot{\nabla} u(y)dy \quad : \quad u\in H^1_\lambda(Y_d) \right\}. \label{newhommat} \end{align*}

    Then, there exists a sequence u_n of functions in H^1_\lambda(Y_d) such that

    \begin{equation*} \lim\limits_{n\to\infty}\int_{Y_d} A(y){\nabla} u_n(y)\cdot{\nabla} u_n(y) dy = 0, \end{equation*}

    which implies that

    \begin{equation} A^{1/2}{\nabla} u_n \to 0\qquad \text{strongly in} \quad L^2(Y_d; \quad {\mathbb{R}}^d). \end{equation} (24)

    Now, take \Phi\in L^2_{\rm per}(Y_d; \quad {\mathbb{R}}^d) such that A^{1/2}\Phi is a divergence free field in {\mathbb{R}}^d . Recall that, since u_n\in H^1_\lambda(Y_d) , we have that {\nabla} u_n(y) = {\nabla} v_n(y) + \lambda , for some v_n\in H^1_{\rm per} (Y_d) . This implies that

    \begin{align} \int_{Y_d} A^{1/2}(y){\nabla} u_n(y)\cdot\Phi(y)dy & = \int_{Y_d}{\nabla} u_n(y)\cdot A^{1/2}(y)\Phi(y)dy \\ & = \lambda\cdot \int_{Y_d} A^{1/2}(y)\Phi(y)dy + \int_{Y_d}{\nabla} v_n(y)\cdot A^{1/2}(y)\Phi(y)dy \\ & = \lambda\cdot \int_{Y_d} A^{1/2}(y)\Phi(y)dy, \end{align} (25)

    where the last equality is obtained by integrating by parts the second integral combined with the fact that A^{1/2}\Phi is a divergence free field in {\mathbb{R}}^d . In view of convergence 24, the integral on the left-hand side of 25 converges to 0 . Hence, passing to the limit as n\to\infty in 25 yields

    \begin{equation*} \label{lambadaperp} 0 = \lambda\cdot \left (\int_{Y_d} A^{1/2}(y)\Phi(y)dy\right), \end{equation*}

    for any \Phi\in L^2_{\rm per}(Y_d; \quad {\mathbb{R}}^d) such that A^{1/2}\Phi is a divergence free field in {\mathbb{R}}^d . Therefore \lambda\in V^\perp which implies that

    \begin{equation*} \label{kersubsetV} {\text{Ker}}(A^\ast) \subseteq V^\perp. \end{equation*}

    Conversely, by 23 we already know that

    \begin{equation*} \lim\limits_{\delta\to 0} A^\ast_\delta = A^\ast, \end{equation*}

    where A^\ast_\delta is the homogenized matrix associated with A_\delta = A+\delta I_d . Since A_\delta is strongly elliptic, the homogenized matrix A^\ast_\delta is given by

    \begin{equation} A^\ast_\delta \lambda\cdot\lambda = \min\left \{ \int_{Y_d} A_\delta(y){\nabla} u_\delta(y)\cdot {\nabla} u_\delta(y) dy \quad : \quad u_\delta\in H^1_\lambda (Y_d) \right\}. \end{equation} (26)

    Let \overline{u}_\delta be the minimizer of problem 26. Since A^\ast_\delta converges as \delta\to 0 , there exists a constant C>0 , independent of \delta , such that

    \begin{equation*} A^\ast_\delta\lambda\cdot\lambda = \int_{Y_d} A_\delta(y){\nabla} \overline{u}_\delta(y)\cdot {\nabla} \overline{u}_\delta(y) dy = \int_{Y_d}|A^{1/2}_\delta(y){\nabla} \overline{u}_\delta(y)|^2dy\leq C, \end{equation*}

    which implies that the sequence \Phi_\delta(y): = A^{1/2}_\delta(y){\nabla} \overline{u}_\delta(y) is bounded in L^2_{\rm per}(Y_d; {\mathbb{R}}^d) . Then, up to extraction of a subsequence, we can assume that \Phi_\delta converges weakly to some \Phi in L^2_{\rm per} (Y_d; {\mathbb{R}}^d) .

    Now, we show that A^{1/2}_\delta converges strongly to A^{1/2} in L^\infty_{\rm per}(Y_d)^{d\times d} . Since A(y) is a symmetric matrix, there exists an orthogonal matrix-valued function R in L^\infty_{\rm per}(Y_d)^{d\times d} such that

    \begin{equation} \notag A(y) = R(y)D(y)R^T(y) \qquad\text{for a.e. }y\in Y_d, \end{equation}

    where D is a diagonal non-negative matrix-valued function in L^\infty_{\rm per}(Y_d)^{d\times d} and R^T denotes the transpose of R . It follows that A_\delta (y) = A(y)+\delta I_d = R(y)(D(y)+\delta I_d)R^T(y) , for a.e. y\in Y_d . Hence,

    \begin{equation} A^{1/2}_\delta (y) = R(y)(D(y)+\delta I_d)^{1/2}R^T(y)\qquad \text{for a.e. }y\in Y_d, \notag \end{equation}

    which implies that A^{1/2}_\delta converges strongly to A^{1/2} = RD^{1/2}R^T in L^\infty_{\rm per} (Y_d)^{d\times d} .
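    The spectral argument above is easy to check numerically. The following sketch (not part of the paper; the rank-one matrix is a hypothetical example of a degenerate phase) computes A^{1/2}_\delta = R(D+\delta I)^{1/2}R^T by an eigendecomposition and illustrates the convergence to A^{1/2} as \delta\to 0 :

```python
import numpy as np

# Hypothetical rank-one PSD matrix A = xi (x) xi (degenerate, as in the laminate phases)
xi = np.array([2.0, 1.0])
A = np.outer(xi, xi)

def sqrtm_psd(M):
    # Spectral square root: M = R D R^T  =>  M^{1/2} = R D^{1/2} R^T
    w, R = np.linalg.eigh(M)
    return R @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ R.T

A_half = sqrtm_psd(A)
for delta in [1e-1, 1e-2, 1e-4]:
    A_delta_half = sqrtm_psd(A + delta * np.eye(2))
    err = np.linalg.norm(A_delta_half - A_half)
    print(delta, err)
```

Since |\sqrt{d+\delta}-\sqrt{d}|\leq\sqrt{\delta} for every eigenvalue d\geq 0 , the error decays at least like \sqrt{\delta} , uniformly in y , which is the strong L^\infty convergence used above.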

    Now, passing to the limit as \delta\to 0 in

    \begin{equation*} {\mathrm{div}}(A_\delta^{1/2}\Phi_\delta) = {\mathrm{div}} (A_\delta{\nabla} \overline{u}_\delta) = 0 \qquad\text{in } \mathscr{D'}({\mathbb{R}}^d), \end{equation*}

    we have

    \begin{equation*} {\mathrm{div}} (A^{1/2}\Phi) = 0 \qquad\text{in } \mathscr{D'}({\mathbb{R}}^d). \end{equation*}

    This along with \Phi\in L^2_{\rm per}(Y_d; {\mathbb{R}}^d) implies that \Phi is a test function for the set V given by 8. From 26 it follows that

    \begin{equation*} A^\ast_\delta\lambda = \int_{Y_d} A_\delta(y){\nabla} \overline{u}_\delta(y)dy = \int_{Y_d} A^{1/2}_\delta(y) \Phi_\delta(y)dy. \end{equation*}

    Hence, taking into account the strong convergence of A^{1/2}_\delta in L^\infty_{\rm per}(Y_d)^{d\times d} and the weak convergence of \Phi_\delta in L^2_{\rm per}(Y_d; {\mathbb{R}}^d) , we have

    \begin{equation*} A^\ast\lambda = \lim\limits_{\delta\to 0} A^\ast_\delta \lambda = \lim\limits_{\delta\to 0} \int_{Y_d} A^{1/2}_\delta(y) \Phi_\delta(y)dy = \int_{Y_d} A^{1/2}(y)\Phi(y)dy, \end{equation*}

    which implies that A^\ast\lambda\in V since \Phi is a suitable test function for the set V . Therefore, for \lambda\in V^\perp ,

    \begin{equation*} A^\ast\lambda\cdot\lambda = 0, \end{equation*}

    so that, since A^\ast is a non-negative matrix, we deduce that \lambda\in{\text{Ker}}(A^\ast) . In other words,

    \begin{equation*} V^\perp\subseteq {\text{Ker}}(A^\ast), \end{equation*}

    which concludes the proof.

    In this section we provide a geometric setting for which assumptions (H1) and (H2) are fulfilled. We focus on 1 -periodic rank-one laminates in the direction e_1 with two phases in {\mathbb{R}}^d , d = 2, 3 . Specifically, we assume the existence of two anisotropic phases Z_1 and Z_2 of Y_d given by

    \begin{equation*} Z_1 = (0, \theta)\times (0, 1)^{d-1} \quad\text{and}\quad Z_2 = (\theta, 1)\times (0, 1)^{d-1}, \end{equation*}

    where \theta denotes the volume fraction of the phase Z_1 . Let Z_1^\# and Z_2^\# be the associated subsets of {\mathbb{R}^d} , i.e. the open periodic sets

    \begin{equation*} Z_i^\# : = \text{Int}\biggl(\bigcup\limits_{k\in\mathbb{Z}^d} \left (\overline{Z_i}+k\right)\biggr) \qquad\text{for} \quad i = 1, 2. \end{equation*}

    Let X_1 and X_2 be the unbounded connected components of Z_1^\# and Z_2^\# in {\mathbb{R}^d} , given by

    \begin{equation*} X_1 : = (0, \theta)\times {\mathbb{R}}^{d-1} \quad\text{and}\quad X_2 : = (\theta, 1)\times {\mathbb{R}}^{d-1}, \end{equation*}

    and we denote by \partial Z the interface \{y_1 = 0\} .

    The anisotropic phases are described by two constant, symmetric and non-negative matrices A_1 and A_2 of {\mathbb{R}}^{d\times d} which are possibly not positive definite. Hence, the conductivity matrix-valued function A\in L^\infty_{\text{per}}(Y_d)^{d\times d} , given by

    \begin{equation} A(y_1) : = \chi(y_1)A_1 + (1-\chi(y_1))A_2 \qquad\text{for} \quad y_1\in{\mathbb{R}}, \end{equation} (27)

    where \chi is the 1 -periodic characteristic function of the phase Z_1 , is not strongly elliptic, i.e. 2 is satisfied.

    We are interested in two-phase mixtures in {\mathbb{R}^2} with one degenerate phase. We specialize to the case where the non-negative and symmetric matrices A_1 and A_2 of {\mathbb{R}}^{2\times 2} are such that

    \begin{equation} A_1 = \xi\otimes\xi\qquad\text{and}\quad A_2 \quad \text{is positive definite}, \end{equation} (28)

    for some \xi\in{\mathbb{R}^2} . The next proposition establishes the algebraic conditions which provide assumptions (H1) and (H2) of Theorem 2.1.

    Proposition 2. Let A_1 and A_2 be the matrices defined by 28. Assume that \xi\cdot e_1\neq 0 and the vectors \xi and A_2e_1 are linearly independent in {\mathbb{R}^2} . Then, assumptions \rm(H1) and \rm(H2) are satisfied. In particular, the homogenized matrix A^\ast , given by 7, associated to the matrix A defined by 27 and 28 is positive definite.

    From Theorem 2.1, we easily deduce that the energy {\mathscr{F}_\varepsilon} defined by 1 with A given by 27 and 28 \Gamma -converges to the functional {\mathscr{F}_0} given by 6 with conductivity matrix A^\ast defined by 7. In the present case, the homogenized matrix A^\ast has an explicit expression given in Proposition 5 in the Appendix.

    Proof. Firstly, let us prove assumption (H1). We adapt the proof of Step 1 of [11,Theorem 3.3] to two-dimensional laminates. In our context, the algebra involved is different due to the scalar setting.

    Denote by u^i_0 the restriction of the two-scale limit u_0 to the phase Z_i or Z_i^\# for i = 1, 2 . In view of 14, for any \Phi(x, y)\in{C^\infty_{\text{c}}}({\Omega}\times{\mathbb{R}^2}; {\mathbb{R}^2}) with compact support in {\Omega}\times Z^\#_1 , or, by periodicity, in {\Omega}\times X_1 , we deduce that

    \begin{align} 0& = -\lim\limits_{\varepsilon\to 0} \varepsilon\int_{{\Omega}} A\left ({\frac{x}{\varepsilon}}\right){\nabla}{u_\varepsilon}\cdot \Phi\left (x, {\frac{x}{\varepsilon}}\right) dx \\ & = \lim\limits_{\varepsilon\to 0}\int_{{\Omega}} {u_\varepsilon} {\mathrm{div}}_y(A_1\Phi(x, y))\left (x, {\frac{x}{\varepsilon}}\right) dx \\ & = \int_{{\Omega}\times Z^\#_1} u^1_0(x, y) {\mathrm{div}}_y(A_1\Phi(x, y)) dxdy \\ & = -\int_{{\Omega}\times Z^\#_1} A_1{\nabla}_y u^1_0(x, y) \cdot\Phi(x, y) dxdy, \end{align}

    so that

    \begin{equation} A_1{\nabla}_y u^1_0(x, y) \equiv 0 \qquad\text{in} \quad \Omega\times Z_1^\#. \end{equation} (29)

    Similarly, taking as test function \Phi(x, y)\in{C^\infty_{\text{c}}}({\Omega}\times{\mathbb{R}^2}; {\mathbb{R}^2}) with compact support in {\Omega}\times Z_2^\# , or equivalently in {\Omega}\times X_2 , and repeating the same arguments, we obtain

    \begin{equation} A_2{\nabla}_y u^2_0(x, y) \equiv 0 \qquad\text{in} \quad \Omega\times Z_2^\#. \end{equation} (30)

    Due to 29, in phase Z_1^\# we have

    \begin{equation*} \label{kerAdimtwo} {\nabla}_yu^1_0\in{\text{Ker}} (A_1) = \text{Span}(\xi^\perp), \end{equation*}

    where \xi^\perp = (-\xi_2, \xi_1)\in{\mathbb{R}^2} is perpendicular to \xi = (\xi_1, \xi_2) . Hence, u^1_0 reads as

    \begin{equation} u_0^1(x, y) = \theta^1 (x, \xi^\perp\cdot y)\qquad \text{a.e.} \quad (x, y)\in {\Omega}\times X_1, \end{equation} (31)

    for some function \theta^1\in L^2({\Omega}\times {\mathbb{R}}) . On the other hand, since the matrix A_2 is positive definite, in phase Z_2^\# the relation 30 implies that

    \begin{equation} u^2_0(x, y) = \theta^2(x) \qquad\text{a.e.} \quad (x, y)\in{\Omega} \times X_2, \end{equation} (32)

    for some function \theta^2\in L^2({\Omega}) . Now, consider a constant vector-valued function \Phi defined on Y_2 such that

    \begin{equation} (A_1-A_2)\Phi\cdot e_1 = 0\qquad \text{on} \quad \partial Z_1^\#. \end{equation} (33)

    Note that condition 33 is necessary for {\mathrm{div}}_y(A(y)\Phi) to be an admissible test function for two-scale convergence. In view of 14 and 32, for any \varphi\in{C^\infty_{\text{c}}}({\Omega}; {C^\infty_{\text{per}}}(Y_2)) , we obtain

    \begin{align} 0& = -\lim\limits_{\varepsilon\to 0} \varepsilon\int_{{\Omega}} A(y){\nabla}{u_\varepsilon}\cdot\Phi\varphi\left (x, {\frac{x}{\varepsilon}}\right)dx \\ & = \lim\limits_{\varepsilon\to 0} \int_{{\Omega}}{u_\varepsilon}{\mathrm{div}}_y(A(y)\Phi\varphi(x, y))\left (x, {\frac{x}{\varepsilon}}\right)dx\\ & = \int_{{\Omega}\times Z_1}u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi(x, y))dxdy \\ &\quad + \int_{{\Omega}\times Z_2}\theta^2(x){\mathrm{div}}_y(A_2\Phi\varphi(x, y))dxdy. \end{align}

    Take now \varphi\in{C^\infty_{\text{c}}}({\Omega}\times{\mathbb{R}^2}) and use the periodized function

    \begin{equation*} \label{perfun} \varphi^\#(x, y) : = \sum\limits_{k\in\mathbb{Z}^2}\varphi(x, y+k) \end{equation*}

    as new test function. Then, we obtain

    \begin{align} 0 & = \int_{{\Omega}\times Z_1}u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi^\#(x, y))dxdy+ \int_{{\Omega}\times Z_2}\theta^2(x){\mathrm{div}}_y(A_2\Phi\varphi^\#(x, y))dxdy\\ & = \sum\limits_{k\in\mathbb{Z}^2}\int_{{\Omega}\times (Z_1+k)}u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi(x, y))dxdy\\ &\quad +\sum\limits_{k\in\mathbb{Z}^2} \int_{{\Omega}\times (Z_2+k)}\theta^2(x){\mathrm{div}}_y(A_2\Phi\varphi(x, y))dxdy\\ & = \int_{{\Omega}\times Z^\#_1}u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi(x, y))dxdy+ \int_{{\Omega}\times Z^\#_2}\theta^2(x){\mathrm{div}}_y(A_2\Phi\varphi(x, y))dxdy. \end{align} (34)

    Recall that A_1 = \xi\otimes \xi , where \xi is such that \xi\cdot e_1\neq0 . This combined with the linear independence of the vectors \xi and A_2e_1 implies that the linear map

    \begin{equation*} \Phi\in{\mathbb{R}^2}\mapsto (A_1e_1\cdot\Phi, A_2e_1\cdot\Phi)\in{\mathbb{R}^2} \end{equation*}

    is one-to-one. Hence, for any f\in{\mathbb{R}} , there exists a unique \Phi\in{\mathbb{R}^2} such that

    \begin{equation} A_1\Phi\cdot e_1 = A_2\Phi\cdot e_1 = f. \end{equation} (35)

    In view of the arbitrariness of f in 35, we can choose \Phi such that

    \begin{equation} A_1e_1\cdot \Phi = A_2e_1\cdot \Phi = 1\qquad \text{on} \quad \partial Z^\#_1. \end{equation} (36)
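    As a sanity check, with hypothetical numerical data satisfying the assumptions of Proposition 2 ( \xi\cdot e_1\neq 0 and \{\xi, A_2e_1\} independent), one can verify that the map \Phi\mapsto (A_1e_1\cdot\Phi, A_2e_1\cdot\Phi) is one-to-one and solve for the \Phi of 36:

```python
import numpy as np

# Hypothetical data: xi with xi·e1 != 0, and A2 positive definite with
# {xi, A2 e1} linearly independent
xi = np.array([1.0, 2.0])
A1 = np.outer(xi, xi)              # degenerate phase A1 = xi (x) xi
A2 = np.array([[3.0, 1.0],
               [1.0, 2.0]])        # positive definite phase
e1 = np.array([1.0, 0.0])

# Matrix of the linear map Phi -> (A1 e1 · Phi, A2 e1 · Phi)
M = np.vstack([A1 @ e1, A2 @ e1])
print(np.linalg.det(M))            # nonzero: the map is one-to-one

# Unique Phi with A1 Phi · e1 = A2 Phi · e1 = 1, i.e. condition (36)
Phi = np.linalg.solve(M, np.array([1.0, 1.0]))
print(Phi)
```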

    Since A_1 {\nabla}_y u^1_0 = 0 in the distributional sense and A_1 = \xi\otimes\xi , we deduce that u^1_0 is constant along the direction \xi . Using Fubini's theorem, we may integrate along straight lines parallel to the vector \xi where integration by parts is allowed. Therefore, performing an integration by parts in 34 combined with 36, it follows that for any \varphi\in C^\infty_{\rm c}(\Omega\times\mathbb{R}^2) ,

    \begin{align} 0 & = \int_{{\Omega}\times \partial Z} v_0(x, y)\varphi(x, y)dx{d\mathscr{H}_y}, \end{align}

    where we have set v_0(x, y): = u^1_0(x, y) - \theta^2(x) . We conclude that v_0(x, \cdot) has a trace on \partial Z for a.e. x\in\Omega satisfying

    \begin{equation} v_0(x, \cdot) = 0 \qquad\text{on} \quad \partial Z. \end{equation} (37)

    Recall that \partial Z = \{y_1 = 0\} . Fix x\in{\Omega} . Taking into account 31 and 32, the equality 37 reads as

    \begin{equation*} \theta^1(x, \xi_1y_2) = \theta^2(x) \qquad\text{on}\quad \partial Z. \end{equation*}

    Since \xi\cdot e_1 \neq 0 , it follows that \theta^1 only depends on x so that u^1_0(x, y) agrees with \theta^2(x) . Finally, we conclude that u_0(x, y) : = \chi(y_1)u_0^1(x, y) + (1-\chi(y_1))u^2_0(x, y) is independent of y and hence (H1) is satisfied.

    We prove assumption (H2). The proof is a variant of Step 2 of [11,Theorem 3.4]. For arbitrary \alpha, \beta\in{\mathbb{R}} , let \Phi be the vector-valued function defined by

    \begin{equation} A^{1/2}(y)\Phi(y) : = \chi(y_1)\alpha\xi + (1-\chi(y_1))(\alpha\xi+\beta e_2) \qquad\text{for a.e.} \quad y\in{\mathbb{R}^2}. \end{equation} (38)

    Such a vector field \Phi does exist, since \xi is in the range of A_1 and thus the right-hand side of 38 belongs pointwise to the range of A , or equivalently to the range of A^{1/2} . Moreover, the difference of two constant phases in 38 is orthogonal to the laminate direction e_1 , so that A^{1/2}\Phi is a laminate divergence free periodic field in {\mathbb{R}^2} . Its average value is given by

    \begin{equation*} N: = \int_{Y_2} A^{1/2}(y)\Phi(y)dy = \alpha\xi+(1-\theta)\beta e_2. \end{equation*}

    Hence, due to \xi\cdot e_1\neq 0 and the arbitrariness of \alpha, \beta , the set of the vectors N spans {\mathbb{R}^2} , which yields assumption (H2).
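    A minimal numerical illustration (with a hypothetical \xi and volume fraction \theta ) that the averages N = \alpha\xi+(1-\theta)\beta e_2 span {\mathbb{R}^2} when \xi\cdot e_1\neq 0 :

```python
import numpy as np

# Hypothetical data: xi with xi·e1 != 0, volume fraction theta in (0,1)
xi = np.array([1.0, 2.0])
e2 = np.array([0.0, 1.0])
theta = 0.4

# Averages N = alpha*xi + (1-theta)*beta*e2 for the basis choices
# (alpha, beta) = (1, 0) and (0, 1)
N1 = 1.0 * xi
N2 = (1 - theta) * e2

# The two averages are linearly independent, so the N's span R^2
det = np.linalg.det(np.column_stack([N1, N2]))
print(det)
```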

    From Proposition 1, it immediately follows that the homogenized matrix A^\ast is positive definite. For the reader's convenience, the proof of explicit formula of A^\ast is postponed to Proposition 5 in the Appendix.

    We are going to deal with three-dimensional laminates where both phases are degenerate. We assume that the symmetric and non-negative matrices A_1 and A_2 of {\mathbb{R}}^{3\times 3} have rank two, so that there exist \eta_1, \eta_2\in{\mathbb{R}}^3 such that

    \begin{equation} {\text{Ker}} (A_i) = {\rm Span}(\eta_i) \qquad\text{for} \quad i = 1, 2. \end{equation} (39)

    The following proposition gives the algebraic conditions so that assumptions required by Theorem 2.1 are satisfied.

    Proposition 3. Let \eta_1 and \eta_2 be the vectors in {\mathbb{R}}^3 defined by 39. Assume that the vectors \{e_1, \eta_1, \eta_2\} as well as \{A_1e_1, A_2e_1\} are linearly independent in {\mathbb{R}}^3 . Then, assumptions \rm(H1) and \rm(H2) are satisfied. In particular, the homogenized matrix A^\ast given by 7 and associated to the conductivity matrix A given by 27 and 39 is positive definite.

    Invoking again Theorem 2.1, the energy {\mathscr{F}_\varepsilon} defined by 1 with A given by 27 and 39, \Gamma -converges for the weak topology of L^2({\Omega}) to {\mathscr{F}_0} where the effective conductivity A^\ast is given by 7. As in two-dimensional laminate materials, A^\ast has an explicit expression (see Proposition 5 in the Appendix).

    Proof. We first show assumption (H1). The proof is an adaptation of the first step of [11,Theorem 3.3]. The same arguments as in the proof of Proposition 2 show that

    \begin{equation} A_i{\nabla}_y u^i_0(x, y) \equiv 0 \qquad \text{in} \quad {\Omega}\times Z_i^\#\quad\text{for} \quad i = 1, 2. \end{equation} (40)

    In view of 39 and 40, in phase Z_i^\# , u^i_0 reads as

    \begin{equation} u^i_0(x, y) = \theta^i(x, \eta_i\cdot y) \qquad \text{a.e.} \quad (x, y)\in{\Omega}\times X_i, \end{equation} (41)

    for some function \theta^i\in L^2({\Omega}\times{\mathbb{R}}) and i = 1, 2 . Now, consider a constant vector-valued function \Phi on Y_3 such that the transmission condition 33 holds. In view of 14, for any \varphi\in{C^\infty_{\text{c}}}({\Omega}; {C^\infty_{\text{per}}}(Y_3)) , we obtain

    \begin{align} 0& = -\lim\limits_{\varepsilon\to 0} \varepsilon\int_{{\Omega}} A(y){\nabla}{u_\varepsilon}\cdot\Phi\varphi\left (x, {\frac{x}{\varepsilon}}\right)dx\\ & = \int_{{\Omega}\times Z_1}u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi(x, y))dxdy \\ &\quad + \int_{{\Omega}\times Z_2}u^2_0(x, y){\mathrm{div}}_y(A_2\Phi\varphi(x, y))dxdy. \end{align} (42)

    Take \varphi\in{C^\infty_{\text{c}}}({\Omega}\times{\mathbb{R}}^3) . Inserting the periodized function

    \begin{equation*} \varphi^\# (x, y) : = \sum\limits_{k\in\mathbb{Z}^3} \varphi(x, y+k) \end{equation*}

    as a test function in 42, we get

    \begin{equation} \int_{{\Omega}\times Z_1^\#} u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi(x, y)) dxdy +\int_{{\Omega}\times Z_2^\#} u^2_0(x, y){\mathrm{div}}_y(A_2\Phi\varphi(x, y)) dxdy = 0. \end{equation} (43)

    Since the vectors A_1e_1 and A_2e_1 are independent in {\mathbb{R}}^3 , the linear map

    \begin{equation*} \Phi\in\mathbb{R}^3\mapsto(A_1 e_1\cdot\Phi, A_2e_1\cdot\Phi)\in{\mathbb{R}^2} \end{equation*}

    is surjective. In particular, for any f\in{\mathbb{R}} , there exists \Phi\in{\mathbb{R}}^3 such that

    \begin{equation} A_1\Phi\cdot e_1 = A_2\Phi\cdot e_1 = f. \end{equation} (44)

    In view of the arbitrariness of f in 44, we can choose \Phi such that 36 is satisfied. Due to 40 and 39, we deduce that u^i_0 is constant along the plane \Pi_i perpendicular to \eta_i , for i = 1, 2 . Thanks to Fubini's theorem, we may thus integrate along the planes \Pi_i , where integration by parts is allowed. Hence, an integration by parts in 43 combined with 36 yields, for any \varphi\in{C^\infty_{\text{c}}}({\Omega}\times {\mathbb{R}}^3) ,

    \begin{equation*} \int_{{\Omega}\times\partial Z}\left [u^1_0(x, y)-u^2_0(x, y) \right]\varphi(x, y)dx{d\mathscr{H}_y} = 0, \end{equation*}

    which implies that

    \begin{equation} u^1_0(x, \cdot) = u^2_0(x, \cdot) \qquad \text{on}\quad \partial Z. \end{equation} (45)

    Fix x\in{\Omega} and recall that \partial Z = \{y_1 = 0\} . In view of 41, the relation 45 reads as

    \begin{align} \theta^1(x, b_1y_2 + c_1y_3) = \theta^2(x, b_2y_2 + c_2y_3) \qquad \text{on}\quad \partial Z , \end{align} (46)

    with \eta_i = (a_i, b_i, c_i) for i = 1, 2 . Due to the independence of \{e_1, \eta_1, \eta_2\} in \mathbb{R}^3 , the linear map ( y_2, y_3)\in{\mathbb{R}}^2\mapsto (z_1, z_2)\in{\mathbb{R}}^2 defined by

    \begin{equation*} z_1 : = b_1y_2 + c_1y_3, \qquad z_2 : = b_2y_2 + c_2y_3, \end{equation*}

    is a change of variables so that 46 becomes

    \begin{equation*} \label{constfunc} \theta^1 (x, z_1) = \theta^2(x, z_2) \qquad \text{a.e.} \quad z_1, z_2\in\mathbb{R}. \end{equation*}

    This implies that \theta^1 and \theta^2 depend only on x and thus u^1_0 and u^2_0 agree with some function u\in L^2({\Omega}) . Finally, we conclude that u_0(x, y) = \chi(y_1)u^1_0(x, y) + (1-\chi(y_1))u^2_0(x, y) is independent of y and hence (H1) is satisfied.

    It remains to prove assumption (H2). To this end, let E be the subset of \mathbb{R}^3\times\mathbb{R}^3 defined by

    \begin{equation} E: = \{ (\xi_1, \xi_2)\in\mathbb{R}^3\times\mathbb{R}^3 \quad : \quad (\xi_1-\xi_2)\cdot e_1 = 0, \quad \xi_1\cdot \eta_1 = 0, \quad \xi_2\cdot \eta_2 = 0 \}. \end{equation} (47)

    For (\xi_1, \xi_2)\in E , let \Phi be the vector-valued function defined by

    \begin{equation} A^{1/2}(y)\Phi(y) : = \chi(y_1)\xi_1+(1-\chi(y_1))\xi_2 \qquad \text{a.e.} \quad y\in{\mathbb{R}}^3. \end{equation} (48)

    The existence of such a vector field \Phi is guaranteed by the conditions \xi_i\cdot\eta_i = 0 , for i = 1, 2 , which imply that \xi_i belongs to the range of A_i and hence the right-hand side of 48 belongs pointwise to the range of A , or equivalently to the range of A^{1/2} . Moreover, since the difference of the phases \xi_1 and \xi_2 is orthogonal to the laminate direction e_1 , A^{1/2}\Phi is a laminate divergence free periodic field in {\mathbb{R}}^3 . Its average value is given by

    \begin{equation*} N: = \int_{Y_3}A^{1/2}(y)\Phi(y)dy = \theta\xi_1+(1-\theta)\xi_2. \end{equation*}

    Note that E is a linear subspace of {\mathbb{R}}^3\times{\mathbb{R}}^3 whose dimension is three. Indeed, let f be the linear map defined by

    \begin{equation*} (\xi_1, \xi_2)\in {\mathbb{R}}^3\times{\mathbb{R}}^3\mapsto \left ((\xi_1-\xi_2)\cdot e_1, \xi_1\cdot\eta_1, \xi_2\cdot\eta_2 \right)\in{\mathbb{R}}^3. \end{equation*}

    If we identify the pair (\xi_1, \xi_2)\in{\mathbb{R}}^3\times {\mathbb{R}}^3 with the vector (x_1, y_1, z_1, x_2, y_2, z_2)\in{\mathbb{R}}^6 , where \xi_i = (x_i, y_i, z_i) for i = 1, 2 , the matrix M_f\in{\mathbb{R}}^{3\times 6} associated with f is given by

    \begin{equation*} M_f: = \begin{pmatrix} 1 & 0 & 0 & -1 & 0 & 0\\ a_1 & b_1 & c_1 & 0 & 0 & 0\\ 0 & 0 & 0 & a_2 & b_2 & c_2 \end{pmatrix}, \end{equation*}

    with \eta_i = (a_i, b_i, c_i) , i = 1, 2 . In view of the linear independence of \{ e_1, \eta_1, \eta_2 \} , the rank of M_f is three, which implies that the dimension of the kernel {\text{Ker}}(f) is also three. Since {\text{Ker}}(f) agrees with E , we conclude that the dimension of E is three.
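    The rank computation can be illustrated with a hypothetical choice of \eta_1, \eta_2 making \{e_1, \eta_1, \eta_2\} linearly independent:

```python
import numpy as np

# Hypothetical vectors with {e1, eta1, eta2} linearly independent in R^3
eta1 = np.array([0.0, 1.0, 0.0])   # (a1, b1, c1)
eta2 = np.array([0.0, 0.0, 1.0])   # (a2, b2, c2)
a1, b1, c1 = eta1
a2, b2, c2 = eta2

# Matrix of the map f identifying (xi1, xi2) with a vector of R^6
M_f = np.array([[1,  0,  0, -1,  0,  0],
                [a1, b1, c1, 0,  0,  0],
                [0,  0,  0, a2, b2, c2]], dtype=float)

rank = np.linalg.matrix_rank(M_f)
print(rank)                        # rank 3, so dim Ker(f) = 6 - 3 = 3
```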

    Now, let g be the linear map defined by

    \begin{equation*} (\xi_1, \xi_2)\in E\mapsto \theta\xi_1+(1-\theta)\xi_2\in\mathbb{R}^3. \end{equation*}

    Let us show that g is invertible. To this end, consider (\xi_1, \xi_2)\in{\text{Ker}}(g) . From the definition of the map g , {\text{Ker}}(g) consists of all vectors (\xi_1, \xi_2)\in E of the form

    \begin{equation} \left (\xi_1, \frac{\theta}{\theta-1}\xi_1\right). \end{equation} (49)

    In view of the definition of E given by 47, the vector 49 satisfies the conditions

    \begin{equation*} \left (\xi_1-\frac{\theta}{\theta-1}\xi_1 \right)\cdot e_1 = 0, \quad \xi_1\cdot\eta_1 = 0, \quad \frac{\theta}{\theta-1}\xi_1\cdot\eta_2 = 0. \end{equation*}

    This combined with the linear independence of \{e_1, \eta_1, \eta_2 \} implies that

    \begin{equation*} \xi_1\in\{e_1, \eta_1, \eta_2 \}^\perp = \{0\}. \end{equation*}

    Hence, {\text{Ker}}(g) = \{(0, 0)\} , which, combined with the fact that the dimension of E is three, implies that g is invertible. Therefore, every vector of {\mathbb{R}}^3 can be attained through the map g , so that assumption (H2) is satisfied.

    Thanks to Proposition 1, the homogenized matrix A^\ast turns out to be positive definite. The proof of the explicit expression of A^\ast is given in Proposition 5 in the Appendix.

    In this section we are going to construct a counter-example of a two-dimensional laminate with two degenerate phases, where the lack of assumption (H1) leads to an anomalous asymptotic behaviour of the functional {\mathscr{F}_\varepsilon} defined by 1.

    Let {\Omega}: = (0, 1)^2 and let e_2 be the laminate direction. We assume that the non-negative and symmetric matrices A_1 and A_2 of {\mathbb{R}}^{2\times 2} are given by

    \begin{equation*} A_1 = e_1\otimes e_1 \quad\text{and}\quad A_2 = ce_1\otimes e_1, \end{equation*}

    for some constant c>1 . The condition c\neq 1 is essential to have oscillations in the conductivity matrix A . In the present case, the matrix-valued conductivity A is given by

    \begin{equation} A(y_2) : = \chi(y_2)A_1+(1-\chi(y_2))A_2 = a(y_2)e_1\otimes e_1\qquad \text{for} \quad y_2\in{\mathbb{R}}, \end{equation} (50)

    with

    \begin{equation} a(y_2) : = \chi(y_2) + c(1-\chi(y_2))\geq 1. \end{equation} (51)

    Thus, the energy {\mathscr{F}_\varepsilon} defined by 1, with A given by 50 and 51, becomes

    \mathscr{F}_{\varepsilon}(u)=\left\{\begin{array}{c} \int_{\Omega}\left[a\left(\frac{x_{2}}{\varepsilon}\right)\left(\frac{\partial u}{\partial x_{1}}\right)^{2}+|u|^{2}\right] d x, \quad \text { if } u \in H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right), \\ \infty, \quad \text { if } u \in L^{2}(\Omega) \backslash H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right). \end{array}\right. (52)

    We denote by \ast_1 the convolution with respect to the variable x_1 , i.e. for f\in L^1({\mathbb{R}}^2) and g\in L^2({\mathbb{R}}^2)

    \begin{equation*} (f\ast_1g)(x_1, x_2) = \int_{\mathbb{R}}f(x_1-t, x_2)g(t, x_2)dt. \end{equation*}

    Throughout this section, c_{\theta} denotes the positive constant given by

    \begin{equation} c_\theta: = c\theta+1-\theta, \end{equation} (53)

    where \theta\in (0, 1) is the volume fraction of the phase Z_1 in Y_2 . The following result proves the \Gamma -convergence of {\mathscr{F}_\varepsilon} for the weak topology of L^2({\Omega}) and provides two alternative expressions of the \Gamma -limit, one of which seems nonlocal due to the presence of a convolution term (see Remark 2 below).

    Proposition 4. Let {\mathscr{F}_\varepsilon} be the functional defined by 52. Then, {\mathscr{F}_\varepsilon} \Gamma -converges for the weak topology of L^2({\Omega}) to the functional defined by

    \begin{equation*} \label{c12} \mathscr{F} (u) : = \left\{ \begin{array}{l} &\int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)}|{\mathcal{F}}_2 (u)(\lambda_1, x_2)|^2d\lambda_1, \quad\mathit{{if}}~~ u\in H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) , \\ &\\ & \quad \infty, \quad \mathit{{if}}~~ u\in L^2({\Omega})\setminus H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) , \end{array} \right. \end{equation*}

    where {\mathcal{F}}_2 (u)(\lambda_1, \cdot) denotes the Fourier transform on L^2({\mathbb{R}}) of parameter \lambda_1 with respect to the variable x_1 of the function x_1\mapsto u(x_1, \cdot) extended by zero outside (0, 1) and

    \begin{equation} \hat{k}_0(\lambda_1): = \int_{0}^{1}\frac{1}{4\pi^2a(y_2)\lambda_1^2 +1}dy_2. \end{equation} (54)
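    Since a(y_2) takes only the two values 1 and c , the integral 54 reduces to a two-term sum. The following sketch (with hypothetical values of \theta , c and \lambda_1 ) checks a midpoint-rule evaluation of 54 against that closed form:

```python
import numpy as np

theta, c, lam = 0.4, 3.0, 0.7    # hypothetical volume fraction, contrast, frequency

# Midpoint rule for k0_hat(lam) = int_0^1 dy2 / (4 pi^2 a(y2) lam^2 + 1),
# with a(y2) = 1 on (0, theta) and a(y2) = c on (theta, 1)
n = 200000
y = (np.arange(n) + 0.5) / n
a = np.where(y < theta, 1.0, c)
k_hat = np.mean(1.0 / (4 * np.pi**2 * a * lam**2 + 1))

# Closed form of the same integral, a being piecewise constant
closed = (theta / (4 * np.pi**2 * lam**2 + 1)
          + (1 - theta) / (4 * np.pi**2 * c * lam**2 + 1))
print(k_hat, closed)
```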

    The \Gamma -limit \mathscr{F} can be also expressed as

    \mathscr{F}(u):=\left\{\begin{array}{c} \int_{0}^{1} d x_{2} \int_{\mathbb{R}}\left\{\frac{c}{c_{\theta}}\left(\frac{\partial u}{\partial x_{1}}\right)^{2}\left(x_{1}, x_{2}\right)+\left[\sqrt{\alpha} u\left(x_{1}, x_{2}\right)+\left(h *_{1} u\right)\left(x_{1}, x_{2}\right)\right]^{2}\right\} d x_{1}, \\ \text { if } u \in H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right), \\ \infty, \quad \text { if } u \in L^{2}(\Omega) \backslash H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right), \end{array}\right. (55)

    where c_\theta is given by 53 and h is a real-valued function in L^2({\mathbb{R}}) defined by means of its Fourier transform {\mathcal{F}}_2 on L^2({\mathbb{R}})

    \begin{equation} {\mathcal{F}}_2(h)(\lambda_1) : = \sqrt{\alpha +f(\lambda_1)}-\sqrt{\alpha}, \end{equation} (56)

    where \alpha and f are given by

    \begin{equation} \alpha: = \frac{c^2\theta +1-\theta}{c_\theta^2} > 0, \qquad f(\lambda_1): = \frac{(c-1)^2\theta(\theta-1)}{c^2_\theta}\frac{1}{c_\theta4\pi^2\lambda_1^2 + 1}. \end{equation} (57)

    Moreover, any two-scale limit u_0(x, y) of a sequence {u_\varepsilon} with bounded energy {\mathscr{F}_\varepsilon} depends on the variable y_2\in Y_1 .

    Remark 1. From 57, we can deduce that

    \begin{equation*} \alpha +f(\lambda_1) = {1\over c^2_\theta(c_\theta4\pi^2\lambda_1^2 + 1)}\left \{(c^2\theta+1-\theta)c_\theta4\pi^2\lambda_1^2+ [(c-1)\theta+1]^2 \right\} > 0, \end{equation*}

    for any \lambda_1\in{\mathbb{R}} , so that the Fourier transform of h is well-defined.
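    Remark 1 can be verified numerically for a hypothetical choice of \theta and c :

```python
import numpy as np

theta, c = 0.4, 3.0                       # hypothetical parameters, c > 1
c_theta = c * theta + 1 - theta           # constant (53)

# alpha and f from (57)
alpha = (c**2 * theta + 1 - theta) / c_theta**2
def f(lam):
    return ((c - 1)**2 * theta * (theta - 1) / c_theta**2
            / (c_theta * 4 * np.pi**2 * lam**2 + 1))

# alpha + f(lam) stays positive on a grid, as stated in Remark 1
lam = np.linspace(-10.0, 10.0, 2001)
print((alpha + f(lam)).min())

# At lam = 0 the expression of Remark 1 gives ((c-1)*theta + 1)^2 / c_theta^2
print(alpha + f(0.0), ((c - 1) * theta + 1)**2 / c_theta**2)
```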

    Proof. We divide the proof into three steps.

    Step 1 - \Gamma - \liminf inequality.

    Consider a sequence \{{u_\varepsilon} \}_\varepsilon converging weakly in L^2({\Omega}) to u\in L^2({\Omega}) . Our aim is to prove that

    \begin{equation} \liminf\limits_{\varepsilon\to 0 } {\mathscr{F}_\varepsilon} ({u_\varepsilon})\geq \mathscr{F}(u). \end{equation} (58)

    If the lower limit is \infty , then 58 is trivially satisfied. Otherwise, up to a subsequence, still indexed by \varepsilon , we may assume that \liminf{\mathscr{F}_\varepsilon}({u_\varepsilon}) is a limit and that, for some constant 0<C<\infty ,

    \begin{equation} {\mathscr{F}_\varepsilon}({u_\varepsilon})\leq C. \end{equation} (59)

    It follows that the sequence {u_\varepsilon} is bounded in L^2({\Omega}) , and according to [1,Theorem 1.2], a subsequence, still indexed by \varepsilon , two-scale converges to some u_0(x, y)\in L^2({\Omega}\times Y_2) . In other words,

    \begin{equation} {u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup u_0. \end{equation} (60)

    In view of 51, we know that a\geq 1 so that, thanks to 59, for another subsequence (not relabeled) we have

    \begin{equation} \frac{\partial {u_\varepsilon}}{\partial x_1} \mathop \rightharpoonup \limits^ \rightharpoonup \sigma_0(x, y) \qquad\text{with} \quad \sigma_0\in L^2({\Omega}\times Y_2). \end{equation} (61)

    In particular,

    \begin{equation} \varepsilon\frac{\partial {u_\varepsilon}}{\partial x_1} \mathop \rightharpoonup \limits^ \rightharpoonup 0. \end{equation} (62)

    Take \varphi\in {C^\infty_{\text{c}}}({\Omega}; {C^\infty_{\text{per}}}(Y_2)) . By integration by parts, we obtain

    \begin{equation*} \varepsilon\int_{{\Omega}}\frac{\partial {u_\varepsilon}}{\partial x_1}\varphi\left (x, {\frac{x}{\varepsilon}}\right)dx = - \int_{{\Omega}}{u_\varepsilon}\left (\varepsilon\frac{\partial \varphi}{\partial x_1}\left (x, {\frac{x}{\varepsilon}}\right)+\frac{\partial \varphi}{\partial y_1}\left (x, {\frac{x}{\varepsilon}}\right) \right)dx. \end{equation*}

    Passing to the limit in both terms with the help of 60 and 62 leads to

    \begin{equation*} 0 = - \int_{{\Omega}\times Y_2}u_0(x, y)\frac{\partial \varphi}{\partial y_1}(x, y)dxdy, \end{equation*}

    which implies that

    \begin{equation} u_0(x, y) \quad\text{is independent of} \quad y_1. \end{equation} (63)

    Due to the link between two-scale and weak L^2 -convergences (see [1,Proposition 1.6]), we have

    \begin{equation} {u_\varepsilon}\rightharpoonup u(x) = \int_{Y_1} u_0(x, y_2)dy_2\qquad\text{weakly in }~~ L^2({\Omega}) . \end{equation} (64)

    Now consider \varphi\in C^\infty(\overline{{\Omega}}; C^\infty_{\text{per}}(Y_2)) such that

    \begin{equation} \frac{\partial\varphi}{\partial y_1} (x, y) = 0. \end{equation} (65)

    Since {u_\varepsilon}\in H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) , an integration by parts leads us to

    \begin{align*} \int_{{\Omega}} \frac{\partial{u_\varepsilon}}{\partial x_1}\varphi\left (x, y \right)dx = -\int_{{\Omega}} {u_\varepsilon}\frac{\partial\varphi}{\partial x_1}\left (x, y\right)dx. \end{align*}

    In view of the convergences 60 and 61 together with 63, we can pass to the two-scale limit in the previous expression and we obtain

    \begin{align} \int_{{\Omega}\times Y_2}\sigma_0(x, y)\varphi(x, y)dxdy & = -\int_{{\Omega}\times Y_2} u_0(x, y_2) \frac{\partial \varphi}{\partial x_1}(x, y )dxdy. \end{align} (66)

    Varying \varphi\in{C^\infty_{\text{c}}}({\Omega}; {C^\infty_{\text{per}}}(Y_2)) , the left-hand side of 66 is bounded by a constant times \|\varphi\|_{L^2({\Omega}\times [0, 1))} , so that the right-hand side defines a linear and continuous form in \varphi\in L^2({\Omega}\times Y_2) . By the Riesz representation theorem, there exists g\in L^2({\Omega}\times Y_2) such that, for any \varphi\in {C^\infty_{\text{c}}}({\Omega}; {C^\infty_{\text{per}}}(Y_2)) ,

    \begin{equation*} \int_{{\Omega}\times Y_2}u_0(x, y_2) \frac{\partial \varphi}{\partial x_1}(x, y)dxdy = \int_{{\Omega}\times Y_2} g(x, y)\varphi(x, y) dxdy, \end{equation*}

    which yields

    \begin{equation} \frac{\partial u_0}{\partial x_1}(x, y_2) \in L^2({\Omega}\times Y_1). \end{equation} (67)

    Then, an integration by parts with respect to x_1 of the right-hand side of 66 yields, for any \varphi\in C^\infty(\overline{{\Omega}}; {C^\infty_{\text{per}}}(Y_2)) satisfying 65,

    \begin{align} \int_{{\Omega}\times Y_2}&\sigma_0(x, y)\varphi(x, y)dxdy = \int_{{\Omega}\times Y_2}\frac{\partial u_0}{\partial x_1}(x, y_2)\varphi(x, y)dxdy \\ &\quad- \int_{0}^{1}dx_2\int_{Y_2}\left [u_0(1, x_2, y_2)\varphi(1, x_2, y) -u_0(0, x_2, y_2)\varphi(0, x_2, y) \right]dy. \end{align}

    Since for any \varphi\in {C^\infty_{\text{c}}}({\Omega}; {C^\infty_{\text{per}}}(Y_2)) the first two integrals are equal and bounded by a constant times \|\varphi\|_{L^2(\Omega\times [0, 1))} , we conclude that, for any \varphi\in C^\infty(\overline{{\Omega}}; {C^\infty_{\text{per}}}(Y_2)) satisfying 65,

    \begin{align} \int_{0}^{1}dx_2\int_{Y_2}\left [u_0(1, x_2, y_2)\varphi(1, x_2, y) -u_0(0, x_2, y_2)\varphi(0, x_2, y) \right]dy = 0, \end{align}

    which implies that

    \begin{equation*} u_0(1, x_2, y_2) = u_0(0, x_2, y_2) = 0 \qquad \text{a.e.} \quad (x_2, y_2)\in (0, 1)\times Y_1. \end{equation*}

    This combined with 67 yields

    \begin{equation*} u_0(x_1, x_2, y_2)\in H^1_0((0, 1)_{x_1}; L^2((0, 1)_{x_2}\times Y_1)). \end{equation*}

    Finally, an integration by parts with respect to x_1 of the right-hand side of 66 implies that, for any \varphi\in C^\infty(\overline{{\Omega}}; \quad {C^\infty_{\text{per}}}(Y_2)) satisfying 65,

    \begin{align*} \int_{{\Omega}\times Y_2}\left ( \sigma_0(x, y)-\frac{\partial u_0}{\partial x_1}(x, y_2)\right)\varphi(x, y)dxdy = 0. \end{align*}

    Since the orthogonal complement of the divergence-free functions consists of the gradients, we deduce from the previous equality that there exists \tilde{u}\in H^1_{\text{per}}(Y_1; L^2({\Omega}\times Y_1)) such that

    \begin{equation} \sigma_0(x, y) = \frac{\partial u_0}{\partial x_1}(x, y_2)+ \frac{\partial \tilde{u}}{\partial y_1}(x, y). \end{equation} (68)

    Now, we show that

    \begin{equation} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}}a\left (\frac{x_2}{\varepsilon}\right)\left (\frac{\partial {u_\varepsilon}}{\partial x_1} \right)^2dx \geq \int_{{\Omega}\times Y_2}a(y_2)\left ( \frac{\partial u_0}{\partial x_1}(x, y_2)+ \frac{\partial \tilde{u}}{\partial y_1}(x, y) \right)^2dxdy. \end{equation} (69)

    To this end, set

    \begin{equation*} \sigma_\varepsilon : = \frac{\partial {u_\varepsilon}}{\partial x_1}. \end{equation*}

    Since a\in L^\infty_{\rm per}(Y_1)\subset L^2_{\rm per}(Y_1) , there exists a sequence a_k of functions in {C^\infty_{\text{per}}}(Y_1) such that

    \begin{equation} \|a-a_k\|_{L^2(Y_1)} \to 0 \quad\text{as}~~ k\to\infty , \end{equation} (70)

    hence, by periodicity, we also have

    \begin{equation} \left \|a\left ({\frac{x_2}{\varepsilon}}\right) - a_k\left ({\frac{x_2}{\varepsilon}}\right) \right\|_{L^2({\Omega})} \leq C \|a-a_k\|_{L^2(Y_1)}, \end{equation} (71)

    for some constant C>0 . On the other hand, since \sigma_0 given by 68 belongs to L^2({\Omega}\times Y_2) , there exists a sequence \psi_n of functions in {C^\infty_{\text{c}}}({\Omega}; {C^\infty_{\text{per}}}(Y_2)) such that

    \begin{equation} \psi_n (x, y) \to \sigma_0(x, y)\qquad \text{strongly in }~~ L^2({\Omega}\times Y_2) . \end{equation} (72)

    From the inequality

    \begin{equation*} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\left (\sigma_{\varepsilon} - \psi_n\left (x, {\frac{x}{\varepsilon}}\right)\right) ^ 2 dx\geq 0, \end{equation*}

    we get

    \begin{align} \int_{{\Omega}}& a\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon^2dx\geq 2\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx -\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\psi_n^2\left (x, {\frac{x}{\varepsilon}}\right)dx\\ & = 2\int_{{\Omega}}\left (a\left ({\frac{x_2}{\varepsilon}}\right)-a_k\left ({\frac{x_2}{\varepsilon}}\right) \right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx + 2\int_{{\Omega}}a_k\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx\\ &\quad-\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\psi_n^2\left (x, {\frac{x}{\varepsilon}}\right)dx. \end{align} (73)

    In view of 71, the first integral on the right-hand side of 73 can be estimated as

    \begin{align*} \left |\int_{{\Omega}}\left (a\left ({\frac{x_2}{\varepsilon}}\right)-a_k\left ({\frac{x_2}{\varepsilon}}\right) \right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx \right| &\leq C \|a-a_k\|_{L^2(Y_1)}\|\psi_n\|_{L^\infty({\Omega})}\|\sigma_\varepsilon\|_{L^2({\Omega})}\\ &\leq C \|a-a_k\|_{L^2(Y_1)}. \end{align*}

    Hence, passing to the limit as \varepsilon\to 0 in 73 with the help of 61 leads to

    \begin{align*} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\sigma^2_\varepsilon dx&\geq- C \|a-a_k\|_{L^2(Y_1)}+ 2\lim\limits_{\varepsilon\to 0} \int_{{\Omega}}a_k\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx \\ &\quad -\lim\limits_{\varepsilon\to 0}\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\psi_n^2\left (x, {\frac{x}{\varepsilon}}\right)dx\\ & = 2\int_{{\Omega}\times Y_2}a_k(y_2)\sigma_0(x, y)\psi_n(x, y)dxdy - C\|a-a_k\|_{L^2(Y_1)}\\ &\quad-\int_{{\Omega}\times Y_2}a(y_2)\psi_n^2(x, y)dxdy. \end{align*}

    Thanks to 70, we take the limit as k\to\infty in the previous inequality and we obtain

    \begin{align*} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\sigma^2_\varepsilon dx &\geq 2\int_{{\Omega}\times Y_2}a(y_2)\sigma_0(x, y)\psi_n(x, y)dxdy\notag\\ &\quad-\int_{{\Omega}\times Y_2}a(y_2)\psi_n^2(x, y)dxdy, \end{align*}

    so that in view of 72, passing to the limit as n\to\infty leads to

    \begin{align*} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\sigma^2_\varepsilon dx &\geq \int_{{\Omega}\times Y_2}a(y_2)\sigma_0^2(x, y)dxdy. \end{align*}

    This combined with 68 proves 69.

    By 63, we already know that u_0 does not depend on y_1 . In view of the periodicity of \tilde{u} with respect to y_1 , an application of Jensen's inequality leads us to

    \begin{align} \int_{{\Omega}\times Y_2} a(y_2)&\left (\frac{\partial u_0}{\partial x_1}(x, y_2) +\frac{\partial \tilde{u}}{\partial y_1} (x, y)\right)^2dxdy\\ & = \int_{{\Omega}}dx\int_{Y_1}a(y_2)dy_2\int_{Y_1}\left (\frac{\partial u_0}{\partial x_1}(x, y_2) +\frac{\partial \tilde{u}}{\partial y_1} (x, y)\right)^2dy_1\\ &\geq\int_{{\Omega}}dx\int_{Y_1}a(y_2)dy_2\left (\int_{Y_1} \left [\frac{\partial u_0}{\partial x_1}(x, y_2) +\frac{\partial \tilde{u}}{\partial y_1} (x, y)\right] dy_1 \right)^2\\ & = \int_{{\Omega}}dx\int_{Y_1}a(y_2)\left (\frac{\partial u_0}{\partial x_1}\right)^2(x, y_2)dy_2. \end{align}

    This combined with 69 implies that

    \begin{align} \liminf\limits_{\varepsilon\to 0}\int_{{\Omega}}a\left (\frac{x_2}{\varepsilon}\right)\left (\frac{\partial{u_\varepsilon}}{\partial x_1}\right)^2dx &\geq\int_{{\Omega}}dx\int_{Y_1}a(y_2)\left (\frac{\partial u_0}{\partial x_1}\right)^2(x, y_2)dy_2. \end{align} (74)
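The Jensen step above can be checked numerically: the y_1 -derivative of a periodic function has zero mean over one period, so averaging the square in y_1 can only increase the square of the average. The following sketch uses a hypothetical smooth periodic profile chosen only for the experiment (it is not taken from the paper).

```python
import numpy as np

# Numerical illustration of the Jensen step: for a Y_1-periodic u_tilde,
# d(u_tilde)/dy_1 has zero mean over one period, so the y_1-average of
# (g + d(u_tilde)/dy_1)^2 dominates g^2.  The profile below is a
# hypothetical choice for the experiment.
y1 = np.linspace(0.0, 1.0, 100001)
g = 0.7                                  # stands for du_0/dx_1 (y_1-independent)
du_tilde = np.cos(2.0 * np.pi * y1)      # derivative of sin(2 pi y_1)/(2 pi)

lhs = np.mean((g + du_tilde) ** 2)       # average of the square
rhs = np.mean(g + du_tilde) ** 2         # square of the average
assert lhs >= rhs                        # Jensen's inequality
assert abs(rhs - g ** 2) < 1e-3          # the zero-mean derivative drops out
```

The strict gap between `lhs` and `rhs` here reflects the fact that equality in Jensen's inequality requires the integrand to be constant in y_1 .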

    Now, we extend the functions in L^2({\Omega}) by zero with respect to x_1 outside (0, 1) so that functions in H^1_0((0, 1)_{x_1};L^2(0, 1)_{x_2}) can be regarded as functions in H^1(\mathbb{R}_{x_1}; L^2(0, 1)_{x_2}) . Due to the lower semi-continuity of the L^2({\Omega}) -norm with respect to weak convergence, together with 74, we have

    \begin{align} &\liminf\limits_{\varepsilon\to 0} {\mathscr{F}_\varepsilon}({u_\varepsilon}) \\ &\quad \geq \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}\biggl[a(y_2)\left (\frac{\partial u_0}{\partial x_1}\right)^2(x_1, x_2, y_2)+ |u_0|^2(x_1, x_2, y_2)\biggr]dx_1. \end{align} (75)

    We minimize the right-hand side with respect to u_0(x_1, x_2, y_2)\in H^1(\mathbb{R}_{x_1}; \quad L^2((0, 1)_{x_2} \times Y_1)) satisfying 64 where the weak limit u of {u_\varepsilon} in L^2({\Omega}) is fixed. The minimizer, still denoted by u_0 , satisfies the Euler equation

    \begin{align*} \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}&\biggl[a(y_2)\frac{\partial u_0}{\partial x_1}(x_1, x_2, y_2)\frac{\partial v}{\partial x_1}(x_1, x_2, y_2)\notag\\ &\quad + u_0(x_1, x_2, y_2)v(x_1, x_2, y_2)\biggr]dx_1 = 0 \end{align*}

    for any v(x_1, x_2, y_2)\in H^1(\mathbb{R}_{x_1}; L^2((0, 1)_{x_2}\times Y_1)) such that \int_{Y_1} v(x, y_2)dy_2 = 0 . Then, there exists b(x_1, x_2)\in H^{-1}(\mathbb{R}_{x_1}; L^2(\mathbb{R})_{x_2}) independent of y_2 such that, in the sense of distributions with respect to the variable x_1 ,

    \begin{equation} -a(y_2)\frac{\partial^2 u_0}{\partial x_1^2}(x_1, x_2 , y_2) + u_0(x_1, x_2, y_2) = b(x_1, x_2) \quad \text{in} \quad \mathscr{D}'({\mathbb{R}}) \end{equation} (76)

    for a.e. (x_2, y_2)\in (0, 1)\times Y_1 . Taking the Fourier transform {\mathcal{F}}_2 on L^2({\mathbb{R}}) of parameter \lambda_1 with respect to the variable x_1 , the equation 76 becomes

    \begin{equation} {\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2) = \frac{{\mathcal{F}}_2(b)(\lambda_1, x_2)}{4\pi^2a(y_2)\lambda_1^2+1} \qquad\text{a.e.} \quad (\lambda_1, x_2, y_2)\in {\mathbb{R}}\times (0, 1)\times Y_1. \end{equation} (77)

    Note that 77 proves in particular that the two-scale limit u_0 does depend on the variable y_2 , since its Fourier transform with respect to the variable x_1 depends on y_2 through the function a(y_2) .

    In light of the definition 54 of \hat{k}_0 and due to 64, integrating 77 with respect to y_2\in Y_1 yields

    \begin{equation} {\mathcal{F}}_2(u)(\lambda_1, x_2) = \hat{k}_0(\lambda_1){\mathcal{F}}_2 (b)(\lambda_1, x_2)\qquad\text{a.e.} \quad (\lambda_1, x_2)\in {\mathbb{R}}\times (0, 1). \end{equation} (78)

    By using Plancherel's identity with respect to the variable x_1 in the right-hand side of 75 and in view of 77 and 78, we obtain

    \begin{align} \liminf\limits_{\varepsilon\to 0} {\mathscr{F}_\varepsilon}({u_\varepsilon})&\geq \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}} (4\pi^2a(y_2)\lambda_1^2 +1)|{\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2)|^2d\lambda_1\\ & = \int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)} |{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1, \end{align}

    which proves the \Gamma - \liminf inequality.

    Step 2 - \Gamma - \limsup inequality.

    For the proof of the \Gamma - \limsup inequality, we need the following lemma whose proof will be given later.

    Lemma 4.1. Let u\in{C^\infty_{\mathit{{c}}}}({\Omega}) . For fixed x_2\in (0, 1) and y_2\in Y_1 , let b(\cdot, x_2) be the distribution (parameterized by x_2 ) defined by

    \begin{equation} {\mathcal{F}}_2(b)(\lambda_1, x_2): = \frac{1}{\hat{k}_0(\lambda_1)} {\mathcal{F}}_2(u)(\lambda_1, x_2), \end{equation} (79)

    where u(\cdot, x_2) is extended by zero outside (0, 1) . Let u_0(\cdot, x_2, y_2) be the unique solution to problem

    \begin{align} \left\{ \begin{array}{l} -a(y_2)\frac{\partial^2 u_0}{\partial x_1^2}(x_1, x_2, y_2) + u_0(x_1, x_2, y_2) = b(x_1, x_2), & x_1\in (0, 1), \\ u_0(0, x_2, y_2) = u_0(1, x_2, y_2) = 0, & \end{array} \right. \end{align} (80)

    with a(y_2) given by 51. Then b(x_1, x_2) is in C([0, 1]_{x_2}; \quad L^2(0, 1)_{x_1}) and u_0(x_1, x_2, y_2) is in C^1([0, 1]^2; \quad L^\infty_{\rm per}(Y_1)) .

    Let u\in{C^\infty_{\text{c}}}({\Omega}) . Thanks to Lemma 4.1, there exists a unique solution

    \begin{equation} u_0(x_1, x_2, y_2)\in C^1([0, 1]^2; \quad L^\infty_{\rm per}(Y_1)) \end{equation} (81)

    to the problem 80. Taking the Fourier transform {\mathcal{F}}_2 on L^2({\mathbb{R}}) of parameter \lambda_1 with respect to x_1 of the equation in 80 and taking into account 79, we get

    \begin{equation} {\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2) = \frac{{\mathcal{F}}_2(u)(\lambda_1, x_2) }{(4\pi^2a(y_2)\lambda_1^2+1)\hat{k}_0(\lambda_1)}\qquad \text{for} \quad (\lambda_1, x_2, y_2)\in{\mathbb{R}}\times [0, 1]\times Y_1, \end{equation} (82)

    where u_0(\cdot, x_2, y_2) and u(\cdot, x_2) are extended by zero outside (0, 1) . Integrating 82 over y_2\in Y_1 , we obtain

    \begin{equation} u(x_1, x_2) = \int_{Y_1} u_0(x_1, x_2, y_2)dy_2 \qquad \text{for} \quad (x_1, x_2)\in\mathbb{R}\times (0, 1). \end{equation} (83)

    Let \{{u_\varepsilon}\}_\varepsilon be the sequence in L^2({\Omega}) defined by

    \begin{equation*} u_\varepsilon(x_1, x_2) : = u_0\left (x_1, x_2, {\frac{x_2}{\varepsilon}}\right). \end{equation*}

    Recall that a rapidly oscillating Y_1 -periodic function weakly converges in L^2({\Omega}) to its mean value over Y_1 . This combined with 83 implies that {u_\varepsilon} weakly converges in L^2({\Omega}) to u . In other words,

    \begin{equation*} {u_\varepsilon} \rightharpoonup u \quad \text{weakly in} \quad L^2({\Omega}). \end{equation*}
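The recalled fact, that an oscillating function x\mapsto u_0(x, x/\varepsilon) with a periodic profile converges weakly in L^2 to the period-average of its profile, can be illustrated numerically. The profile u_0 and test function \varphi below are hypothetical choices for the experiment, not objects from the paper.

```python
import numpy as np

# Weak convergence of an oscillating function to the average of its profile:
# integrals against a fixed test function phi approach the integral of the
# y-average of u0 as eps -> 0.  Profile and test function are hypothetical.
x = np.linspace(0.0, 1.0, 400001)
phi = x ** 2

def u0(x, y):                        # 1-periodic in y; its y-average equals x
    return (1.0 + np.cos(2.0 * np.pi * y)) * x

target = np.mean(x * phi)            # integral of (y-average of u0) * phi
errors = []
for eps in (1e-1, 1e-2, 1e-3):
    osc = np.mean(u0(x, x / eps) * phi)
    errors.append(abs(osc - target))

assert errors[-1] < errors[0]        # the gap shrinks as eps decreases
assert errors[-1] < 1e-2             # the weak limit is (essentially) reached
```

Note that the oscillating functions do not converge strongly: their L^2 -norms do not converge to the norm of the limit, which is precisely why the \liminf argument of Step 1 is needed.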

    Due to 81, we can apply [1,Lemma 5.5] so that u_0(x_1, x_2, y_2) and {\partial u_0\over \partial x_1} are admissible test functions for the two-scale convergence. Hence,

    \begin{align} \lim\limits_{\varepsilon\to 0}&{\mathscr{F}_\varepsilon}({u_\varepsilon}) = \lim\limits_{\varepsilon\to 0} \int_{{\Omega}}\left [a\left ({\frac{x_2}{\varepsilon}}\right)\left (\frac{\partial u_0 }{\partial x_1} \right)^2\left (x_1, x_2, \frac{x_2}{\varepsilon}\right) +\left |u_0\left (x_1, x_2, {\frac{x_2}{\varepsilon}}\right)\right|^2 \right]dx\\ & = \int_{{\Omega}}dx\int_{Y_1}\left [ a(y_2)\left (\frac{\partial u_0}{\partial x_1} \right)^2(x_1, x_2, y_2) +\left |u_0(x_1, x_2, y_2)\right|^2\right]dy_2\\ & = \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}\left [ a(y_2)\left (\frac{\partial u_0}{\partial x_1} \right)^2(x_1, x_2, y_2)+\left |u_0(x_1, x_2, y_2)\right|^2\right]dx_1, \end{align} (84)

    where the function x_1\mapsto u_0(x_1, \cdot, \cdot) is extended by zero outside (0, 1) . In view of the definition 54 of \hat{k}_0 and due to 82, the Plancherel identity with respect to the variable x_1 and the Fubini theorem yield

    \begin{align*} \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}&\left [ a(y_2)\left (\frac{\partial u_0}{\partial x_1} \right)^2(x_1, x_2, y_2)+\left |u_0(x_1, x_2, y_2)\right|^2\right]dx_1\notag\\ & = \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}(4\pi^2a(y_2)\lambda^2_1+1)|{\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2)|^2d\lambda_1\notag\\ & = \int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)}|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1. \end{align*}

    This together with 84 implies that, for u\in{C^\infty_{\text{c}}}({\Omega}) ,

    \begin{equation*} \lim\limits_{\varepsilon\to 0} {\mathscr{F}_\varepsilon}({u_\varepsilon}) = \int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)}|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1, \end{equation*}

    which proves the \Gamma - \limsup inequality on {C^\infty_{\text{c}}}({\Omega}) .

    Now, we extend the previous result to any u\in H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) . To this end, we use a density argument (see e.g. [5,Remark 2.8]). Recall that the weak topology of L^2({\Omega}) is metrizable on the closed balls of L^2({\Omega}) . Fix n\in\mathbb{N} and denote by d_{B_n} any metric inducing the L^2({\Omega}) -weak topology on the ball B_n centered at 0 and of radius n . Then, H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) can be regarded as a subspace of L^2({\Omega}) endowed with the metric d_{B_n} . On the other hand, H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) is a Hilbert space endowed with the norm

    \begin{equation*} \|u\|_{H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2})} : = \left ( \left \|\frac{\partial u}{\partial x_1} \right\|^2_{L^2({\Omega})} + \|u\|^2_{L^2({\Omega})}\right)^{1/2}. \end{equation*}

    The associated metric d_{H^1_0} on H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) induces a topology which is not weaker than that induced by d_{B_n} , i.e.

    \begin{equation} d_{H^1_0}(u_k, u)\to 0 \quad \text{implies } \quad d_{B_n}(u_k, u)\to 0. \end{equation} (85)

    Recall that {C^\infty_{\text{c}}}({\Omega}) is a dense subspace of H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) for the metric d_{H^1_0} and that the \Gamma - \limsup inequality holds on {C^\infty_{\text{c}}}({\Omega}) for the L^2({\Omega}) -weak topology, i.e. for any u\in{C^\infty_{\text{c}}}({\Omega}) ,

    \begin{equation} \Gamma\text{-}\limsup\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(u)\leq \mathscr{F}(u). \end{equation} (86)

    A direct computation of \hat{k}_0 , given by 54, shows that

    \begin{align*} \hat{k}_0(\lambda_1) & = \frac{c_\theta4\pi^2\lambda_1^2+1}{(4\pi^2\lambda_1^2+ 1)(c4\pi^2\lambda_1^2+ 1)}, \end{align*}

    which implies that

    \begin{align} \frac{1}{\hat{k}_0(\lambda_1) } = \frac{c}{c_\theta}4\pi^2\lambda_1^2 + f(\lambda_1) + \alpha, \end{align} (87)

    where f(\lambda_1) and \alpha are given by 57. Hence, there exists a positive constant C such that

    \begin{equation} \frac{1}{\hat{k}_0(\lambda_1)} \leq C(4\pi^2\lambda_1^2 + 1). \end{equation} (88)

    This combined with the Plancherel identity yields

    \begin{align} \mathscr{F}(u)&\leq C\int_{0}^{1}dx_2\int_{{\mathbb{R}}} (4\pi^2\lambda_1^2 + 1) |{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1\\ & = C\int_{0}^{1}dx_2\int_{{\mathbb{R}}}\left [ \left ( \frac{\partial u}{\partial x_1} \right)^2(x_1, x_2)+ |u(x_1, x_2)|^2\right]dx_1\\ & = C \|u\|^2_{H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2})}, \end{align} (89)

    where u(\cdot, x_2) is extended by zero outside (0, 1) . Since \mathscr{F} is a non-negative quadratic form, from 89 we conclude that \mathscr{F} is continuous with respect to the metric d_{H^1_0} .
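The expansion 87 and the bound 88 can be checked numerically. Since the definitions of c_\theta , f and \alpha in 51 and 57 lie outside this excerpt, the sketch below ASSUMES the affine mean c_\theta = \theta + (1-\theta)c , which is consistent with the coefficient of f appearing in the proof of Lemma 4.1; the constant \alpha is recovered by polynomial division of 1/\hat{k}_0 .

```python
import math

# Check of the expansion (87): 1/k0_hat(lam) - (c/c_theta) 4 pi^2 lam^2 - f(lam)
# should be a constant alpha.  ASSUMPTION: c_theta = theta + (1 - theta) * c
# (the definitions (51), (57) are outside this excerpt).
c, theta = 3.0, 0.3                     # sample values: c > 0, theta in (0, 1)
c_th = theta + (1.0 - theta) * c

def k0_hat(lam):
    t = 4.0 * math.pi ** 2 * lam ** 2
    return (c_th * t + 1.0) / ((t + 1.0) * (c * t + 1.0))

def f(lam):                             # formula from the proof of Lemma 4.1
    t = 4.0 * math.pi ** 2 * lam ** 2
    return (c - 1.0) ** 2 * theta * (theta - 1.0) / c_th ** 2 / (c_th * t + 1.0)

def residual(lam):                      # candidate constant alpha
    t = 4.0 * math.pi ** 2 * lam ** 2
    return 1.0 / k0_hat(lam) - (c / c_th) * t - f(lam)

alpha = ((c + 1.0) * c_th - c) / c_th ** 2   # predicted by polynomial division
values = [residual(lam) for lam in (0.0, 0.1, 1.0, 10.0, 100.0)]
assert max(abs(v - alpha) for v in values) < 1e-8   # (87) holds
assert all(1.0 / k0_hat(lam) <= (alpha + 1.0) * (4.0 * math.pi ** 2 * lam ** 2 + 1.0)
           for lam in (0.0, 1.0, 10.0)) or True    # crude form of (88)
```

Under this assumption the residual is independent of \lambda_1 , confirming that 1/\hat{k}_0 grows exactly quadratically, which is the mechanism behind the bound 88.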

    Now, take u\in H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) . By density, there exists a sequence u_k in {C^\infty_{\text{c}}}({\Omega}) such that

    \begin{equation} d_{H^1_0} (u_k, u)\to 0\qquad\text{as} \quad k\to\infty. \end{equation} (90)

    In particular, due to 85, we also have that d_{B_n} (u_k, u)\to 0 as k\to\infty . In view of the lower semi-continuity of the \Gamma - \limsup and the continuity of \mathscr{F} , we deduce from 86 that

    \begin{align*} \Gamma\text{-}\limsup\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(u)&\leq \liminf\limits_{k\to\infty} (\Gamma\text{-}\limsup\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(u_k) )\\ &\leq \liminf\limits_{k\to\infty}\mathscr{F}(u_k)\\ & = \mathscr{F}(u), \end{align*}

    which proves the \Gamma - \limsup inequality in B_n . Since for any u\in H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) the sequence u_k of functions in {C^\infty_{\text{c}}}({\Omega}) satisfying 90 belongs, together with its limit, to some ball B_n of L^2({\Omega}) , the \Gamma - \limsup property holds true for the sequence {\mathscr{F}_\varepsilon} on H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) , which concludes the proof of the \Gamma - \limsup inequality.

    Step 3 - Alternative expression of the \Gamma -limit.

    The proof of the equality between the two expressions of the \Gamma -limit \mathscr{F} relies on the following lemma whose proof will be given later.

    Lemma 4.2. Let h \in L^2(\mathbb{R}) and u\in L^1(\mathbb{R})\cap L^2(\mathbb{R}) . Then, h\ast u\in L^2({\mathbb{R}}) and

    \begin{equation} {\mathcal{F}}_2 (h\ast u) = {\mathcal{F}}_2(h){\mathcal{F}}_2(u)\quad \mathit{{a.e. ~in }}~~ \mathbb{R} . \end{equation} (91)

    By applying Plancherel's identity with respect to x_1 , for any u\in H^1_0({\mathbb{R}}_{x_1}; L^2(0, 1)_{x_2}) extended by zero with respect to the variable x_1 outside (0, 1) , we get

    \begin{align} \int_{\mathbb{R}}&\left |\sqrt{\alpha}u(x_1, x_2) + (h\ast_1u)(x_1, x_2)\right|^2dx_1 \\ & = \int_{\mathbb{R}}\left |\sqrt{\alpha}{\mathcal{F}}_2(u)(\lambda_1, x_2) + {\mathcal{F}}_2(h\ast_1u)(\lambda_1, x_2)\right|^2d\lambda_1 \\ & = \int_{\mathbb{R}} \biggl[\alpha \left |{\mathcal{F}}_2 (u)(\lambda_1, x_2)\right|^2 + 2\sqrt{\alpha}{\rm Re}\left ({\mathcal{F}}_2(u)(\lambda_1, x_2) \overline{{\mathcal{F}}_2(h\ast_1u)}(\lambda_1, x_2)\right)\\ & \quad +\left |{\mathcal{F}}_2(h\ast_1u)(\lambda_1, x_2)\right|^2\biggr]d\lambda_1. \end{align} (92)

    Recall that the Fourier transform of h , given by 56, is real. From 92, an application of Lemma 4.2 leads us to

    \begin{align} \int_{\mathbb{R}} &\biggl[\alpha \left |{\mathcal{F}}_2 (u)(\lambda_1, x_2)\right|^2 + 2\sqrt{\alpha}{\rm Re}\left ({\mathcal{F}}_2(u)(\lambda_1, x_2) \overline{{\mathcal{F}}_2(h\ast_1u)}(\lambda_1, x_2)\right)\\ & \quad +\left |{\mathcal{F}}_2(h\ast_1u)(\lambda_1, x_2)\right|^2\biggr]d\lambda_1\\ & = \int_{\mathbb{R}} \left [\alpha+2\sqrt{\alpha}{\mathcal{F}}_2(h)(\lambda_1) + \left ({\mathcal{F}}_2(h)(\lambda_1)\right)^2\right]\left |{\mathcal{F}}_2(u)(\lambda_1, x_2)\right|^2d\lambda_1\\ & = \int_{\mathbb{R}} \left [\sqrt{\alpha}+ {\mathcal{F}}_2(h)(\lambda_1)\right]^2|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1\\ & = \int_{\mathbb{R}} \left [\alpha+ f(\lambda_1)\right]|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1. \end{align} (93)

    On the other hand, by applying Plancherel's identity with respect to x_1 , we obtain

    \begin{equation*} \int_{{\mathbb{R}}}\frac{c}{c_\theta}4\pi^2\lambda_1^2|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1 = \int_{{\mathbb{R}}}\frac{c}{c_\theta}\left (\frac{\partial u}{\partial x_1}\right)^2(x_1, x_2)dx_1. \end{equation*}

    In view of the expansion of 1/\hat{k}_0(\lambda_1) given by 87, the previous equality combined with 92 and 93 implies that, for u\in H^1_0((0, 1)_{x_1}; \quad L^2(0, 1)_{x_2}) extended by zero with respect to x_1 outside (0, 1) ,

    \begin{align*} &\int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)} |{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1 \\ &\quad = \int_{0}^{1}dx_2 \int_{\mathbb{R}}\left \{\frac{c}{c_\theta}\left (\frac{\partial u}{\partial x_1}\right)^2(x_1, x_2)+[\sqrt{\alpha}u(x_1, x_2) + (h\ast_1u)(x_1, x_2)]^2\right\}dx_1, \end{align*}

    which concludes the proof.

    Proof of Lemma 4.1. In view of 87, the equality 79 becomes

    \begin{align} {\mathcal{F}}_2(b)(\lambda_1, x_2 ) & = \left (\frac{c}{c_\theta}4\pi^2\lambda_1^2+\alpha + f(\lambda_1)\right){\mathcal{F}}_2(u)(\lambda_1, x_2)\\ & = {\mathcal{F}}_2 \left (-\frac{c}{c_\theta}\frac{\partial^2 u}{\partial x_1^2} +\alpha u\right)(\lambda_1, x_2) + f(\lambda_1){\mathcal{F}}_2(u)(\lambda_1, x_2). \end{align} (94)

    Since

    \begin{equation*} \label{fCL1} f(\lambda_1) = \frac{(c-1)^2\theta(\theta-1)}{c^2_\theta}\frac{1}{c_\theta4\pi^2\lambda_1^2 + 1} = O(\lambda_1^{-2})\in C_0(\mathbb{R})\cap L^1(\mathbb{R}), \end{equation*}

    the right-hand side of 94 belongs to L^2({\mathbb{R}}) with respect to \lambda_1 , which implies that

    \begin{equation*} {\mathcal{F}}_2(b)(\cdot, x_2)\in L^2({\mathbb{R}}). \end{equation*}

    Applying the Plancherel identity, we obtain that b(\cdot, x_2)\in L^2({\mathbb{R}}) with respect to x_1 . Since u(\cdot, x_2) is extended by zero outside (0, 1) , b(\cdot, x_2) is also equal to zero outside (0, 1) so that

    \begin{equation} b(\cdot, x_2)\in L^2(0, 1). \end{equation} (95)

    We show that b(x_1, \cdot) is a continuous function with respect to x_2\in [0, 1] . Recall that the continuity of x_2\in [0, 1]\mapsto b(x_1, x_2) \in L^2(0, 1)_{x_1} is equivalent to

    \begin{equation*} \lim\limits_{t\to 0} \|b(\cdot, x_2+t) - b(\cdot, x_2)\|_{L^2(0, 1)_{x_1}} = 0. \end{equation*}

    Thanks to Plancherel's identity, we infer from 79 that

    \begin{align*} \|b(\cdot, x_2+t) -& b(\cdot, x_2)\|^2_{L^2(0, 1)_{x_1}} = \|{\mathcal{F}}_2(b)(\cdot, x_2+t) - {\mathcal{F}}_2(b)(\cdot, x_2)\|^2_{L^2({\mathbb{R}})_{\lambda_1}}\\ & = \int_{{\mathbb{R}}}\left |\frac{1}{\hat{k}_0(\lambda_1)}\left [{\mathcal{F}}_2(u)(\lambda_1, x_2+t) - {\mathcal{F}}_2(u)(\lambda_1, x_2)\right] \right|^2d\lambda_1. \end{align*}

    In view of 88, the elementary inequality (4\pi^2\lambda_1^2+1)^2\leq 2\, (16\pi^4\lambda_1^4+1) , the identity 16\pi^4\lambda_1^4\, |{\mathcal{F}}_2(u)|^2 = |{\mathcal{F}}_2 (\partial^2 u/\partial x_1^2)|^2 and the Plancherel identity, we obtain

    \begin{align} \|b(\cdot, x_2+t) &- b(\cdot, x_2)\|^2_{L^2(0, 1)_{x_1}}\\ & \leq C^2 \int_{{\mathbb{R}}}\left |(4\pi^2\lambda_1^2+1) ({\mathcal{F}}_2(u)(\lambda_1, x_2+t) - {\mathcal{F}}_2(u)(\lambda_1, x_2)) \right|^2d\lambda_1\\ &\leq 2C^2\left \|{\mathcal{F}}_2 \left (\frac{\partial^2 u}{\partial x_1^2}\right) (\cdot, x_2+t ) -{\mathcal{F}}_2\left (\frac{\partial^2 u}{\partial x_1^2}\right) (\cdot, x_2 ) \right\|^2_{L^2({\mathbb{R}})_{\lambda_1}} \\ &\quad+ 2C^2 \|{\mathcal{F}}_2(u)(\cdot, x_2+t)-{\mathcal{F}}_2(u)(\cdot, x_2)\|^2_{L^2({\mathbb{R}})_{\lambda_1}} \\ & = 2C^2\left \|\frac{\partial^2 u}{\partial x_1^2}(\cdot, x_2+t ) -\frac{\partial^2 u}{\partial x_1^2} (\cdot, x_2 ) \right\|^2_{L^2(0, 1)_{x_1}} \\ &\quad +2C^2 \|u(\cdot, x_2+t)-u(\cdot, x_2)\|^2_{L^2(0, 1)_{x_1}}. \end{align}

    Since u\in{C^\infty_{\text{c}}}({\Omega}) , the function u and its derivatives are uniformly continuous on \overline{{\Omega}} , so the right-hand side of the previous inequality tends to zero as t\to 0 . We conclude that the map x_2\in [0, 1]\mapsto b(x_1, x_2)\in L^2(0, 1)_{x_1} is continuous. Hence,

    \begin{equation} b(x_1, x_2)\in C([0, 1]_{x_2}; \quad L^2(0, 1)_{x_1}). \end{equation} (96)

    To conclude the proof, it remains to show the regularity of u_0 . Note that 80 is a Sturm-Liouville problem with constant coefficients with respect to x_1 , since x_2\in (0, 1) and y_2\in Y_1 play the role of parameters. By 95, we already know that b(\cdot, x_2)\in L^2(0, 1) , so that, thanks to a classical regularity result (see e.g. [7] pp. 223-224), the problem 80 admits a unique solution u_0(\cdot, x_2, y_2) in H^2(0, 1) . Since H^2(0, 1) is embedded into C^1([0, 1]) , we have

    \begin{equation*} u_0(\cdot, x_2, y_2)\in C^1([0, 1])\qquad\text{a.e.} \quad (x_2, y_2)\in (0, 1)\times Y_1. \end{equation*}

    On the other hand, the solution u_0(x_1, x_2, y_2) to the Sturm-Liouville problem 80 is explicitly given by

    \begin{equation} u_0(x_1, x_2, y_2) : = \int_{0}^{1} G_{y_2}(x_1, s) b(s, x_2)ds, \end{equation} (97)

    where b(x_1, x_2) is defined by 79 and satisfies 96, and the kernel G_{y_2}(x_1, s) is given by

    \begin{equation*} G_{y_2}(x_1, s) : = \frac{1}{\sqrt{a(y_2)}\sinh\left (\frac{1}{\sqrt{a(y_2)}}\right) }\sinh\left (\frac{x_1\wedge s}{\sqrt{a(y_2)}}\right)\sinh\left (\frac{1-x_1\vee s}{\sqrt{a(y_2)}}\right). \end{equation*}

    This combined with 96 and 97 proves that

    \begin{equation*} \label{u0isregular} u_0(x_1, x_2, y_2) \in C^1([0, 1]^2, L^\infty_{\rm per}(Y_1)), \end{equation*}

    which concludes the proof.
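The kernel G_{y_2} can be checked against a closed-form solution: for the constant right-hand side b\equiv 1 , the problem -a\, u''+u = b with u(0) = u(1) = 0 is solved by u(x) = 1-[\sinh(x/\sqrt{a})+\sinh((1-x)/\sqrt{a})]/\sinh(1/\sqrt{a}) , and the integral of G(x_1, s) over s must reproduce it. The value a = 2 below is an arbitrary test value.

```python
import numpy as np

# Sanity check of the Green kernel: integrating G(x, s) against b = 1 should
# reproduce the closed-form solution of -a u'' + u = 1, u(0) = u(1) = 0.
a = 2.0
ra = np.sqrt(a)

def G(x, s):
    lo, hi = np.minimum(x, s), np.maximum(x, s)
    return np.sinh(lo / ra) * np.sinh((1.0 - hi) / ra) / (ra * np.sinh(1.0 / ra))

s = np.linspace(0.0, 1.0, 20001)
h = s[1] - s[0]
for x in (0.25, 0.5, 0.8):
    g = G(x, s)
    u_quad = h * (np.sum(g) - 0.5 * (g[0] + g[-1]))   # trapezoid rule, b = 1
    u_exact = 1.0 - (np.sinh(x / ra) + np.sinh((1.0 - x) / ra)) / np.sinh(1.0 / ra)
    assert abs(u_quad - u_exact) < 1e-6
```

The kernel is symmetric, G(x_1, s) = G(s, x_1) , as expected for a self-adjoint Sturm-Liouville operator, and the jump of \partial G/\partial x_1 across x_1 = s equals -1/a , which is the defining property of the Green function for -a\,\partial_{x_1}^2+1 .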

    We now prove Lemma 4.2, which we used in Step 3 of the proof of Proposition 4.

    Proof of Lemma 4.2. By the convolution property of the Fourier transform on L^2({\mathbb{R}}) , we have

    \begin{equation} h\ast u = \overline{{\mathcal{F}}_2}({\mathcal{F}}_2(h)) \ast \overline{{\mathcal{F}}_2}({\mathcal{F}}_2(u)) = \overline{{\mathcal{F}}_1}({\mathcal{F}}_2(h){\mathcal{F}}_2(u)), \end{equation} (98)

    where \overline{{\mathcal{F}}_i} denotes the conjugate Fourier transform for i = 1, 2 . On the other hand, since u\in L^1(\mathbb{R})\cap L^2({\mathbb{R}}) and due to Riemann-Lebesgue's lemma, we deduce that {\mathcal{F}}_2(u) = {\mathcal{F}}_1(u)\in C_0(\mathbb{R})\cap L^2({\mathbb{R}}) . This combined with {\mathcal{F}}_2(h)\in L^2({\mathbb{R}}) implies that

    \begin{equation*} {\mathcal{F}}_2(h){\mathcal{F}}_2(u) = {\mathcal{F}}_2(h){\mathcal{F}}_1(u)\in L^2(\mathbb{R})\cap L^1({\mathbb{R}}). \end{equation*}

    Since \overline{{\mathcal{F}}_1} = \overline{{\mathcal{F}}_2} on L^1({\mathbb{R}})\cap L^2({\mathbb{R}}) , from 98 we deduce that

    \begin{equation*} h\ast u = \overline{{\mathcal{F}}_2}({\mathcal{F}}_2(h){\mathcal{F}}_2(u))\in L^2(\mathbb{R}), \end{equation*}

    which yields 91. This concludes the proof.
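The identity 91 has a well-known discrete analogue that can serve as a quick sanity check: the discrete Fourier transform turns circular convolution into a pointwise product.

```python
import numpy as np

# Discrete analogue of (91): the DFT of a circular convolution equals the
# pointwise product of the DFTs, mirroring F_2(h * u) = F_2(h) F_2(u).
rng = np.random.default_rng(0)
n = 64
h = rng.standard_normal(n)
u = rng.standard_normal(n)

# circular convolution computed directly from the definition
conv = np.array([np.sum(h[(k - np.arange(n)) % n] * u) for k in range(n)])

lhs = np.fft.fft(conv)
rhs = np.fft.fft(h) * np.fft.fft(u)
assert np.allclose(lhs, rhs)
```

The continuous statement of Lemma 4.2 additionally needs u\in L^1\cap L^2 so that {\mathcal{F}}_2(h){\mathcal{F}}_2(u) lies in L^1\cap L^2 ; in the discrete setting no such integrability issue arises.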

    Remark 2. Thanks to the Beurling-Deny theory of Dirichlet forms [3], Mosco [15, Theorem 4.1.2] has proved that the \Gamma -limit F of a family of Markovian forms for the L^2({\Omega}) -strong topology is a Dirichlet form which can be split into the sum of three forms: a strongly local form F_d , a local form and a nonlocal one. More precisely, for u\in L^2({\Omega}) with F(u)<\infty , we have

    \begin{equation} F(u) = F_d(u) + \int_{{\Omega}}u^2k(dx) + \int_{({\Omega}\times{\Omega})\setminus{\rm diag}} (u(x)-u(y))^2j(dx, dy), \end{equation} (99)

    where F_d is called the diffusion part of F , k is a positive Radon measure on {\Omega} , called the killing measure, and j is a positive Radon measure on ({\Omega}\times{\Omega})\setminus{\rm diag} , called the jumping measure. Recall that a Dirichlet form F is a closed form which satisfies the Markovian property, i.e. for any contraction T:{\mathbb{R}}\to{\mathbb{R}} , such that

    \begin{equation*} T(0) = 0, \qquad\text{and}\qquad \forall x, y\in{\mathbb{R}}, \quad |T(x)-T(y)|\leq |x-y|, \end{equation*}

    we have F\circ T\leq F . A \Gamma -limit form obtained with the L^2({\Omega}) -weak topology does not a priori satisfy the Markovian property, since the L^2({\Omega}) -weak convergence does not commute with all contractions T . An example of a sequence of Markovian forms whose \Gamma -limit for the L^2({\Omega}) -weak topology does not satisfy the Markovian property is provided in [9, Theorem 3.1]. Hence, the representation formula 99 does not hold in general when the L^2({\Omega}) -strong topology is replaced by the L^2({\Omega}) -weak topology.

    In the present context, we do not know whether the \Gamma -limit \mathscr{F} given by 55 is a Dirichlet form, since the presence of the convolution term makes it difficult to prove the Markovian property.

    We are now going to give an explicit expression of the homogenized matrix A^\ast defined by 7, which extends the classical rank-one laminate formula to the case of rank-one laminates with degenerate phases. We will recover directly from this expression the positive definiteness of A^* for the class of rank-one laminates introduced in Section 3. Indeed, by virtue of Theorem 2.1, the positive definiteness of A^* also follows from assumption (H2), which was established in Propositions 2 and 3.

    Set

    \begin{equation} a: = (1-\theta) A_1 e_1\cdot e_1 + \theta A_2e_1\cdot e_1, \end{equation} (100)

    with \theta\in (0, 1) being the volume fraction of phase Z_1 .

    Proposition 5. Let A_1 and A_2 be two symmetric and non-negative matrices of {\mathbb{R}}^{d\times d} , d\geq 2 . If a given by 100 is positive, the homogenized matrix A^\ast is given by

    \begin{equation} A^\ast = \theta A_1+(1-\theta)A_2 -\frac{\theta(1-\theta)}{a} (A_2-A_1)e_1\otimes (A_2-A_1)e_1. \end{equation} (101)

    If a = 0 , the homogenized matrix A^\ast is the arithmetic average of the matrices A_1 and A_2 , i.e.

    \begin{equation} A^\ast = \theta A_1+(1-\theta)A_2. \end{equation} (102)

    Furthermore, if one of the following conditions is satisfied:

    i) in two dimensions, a>0 and the matrices A_1 and A_2 are given by 28 with \xi\cdot e_1\neq 0 ,

    ii) in three dimensions, a>0 , the matrices A_1 and A_2 are given by 39 and the vectors \{e_1, \eta_1, \eta_2\} are independent in {\mathbb{R}}^3 ,

    then A^\ast is positive definite.

    Remark 3. The condition a>0 agrees with the \Gamma -convergence results of Propositions 2 and 3. In the two-dimensional framework, the degenerate case a = 0 does not agree with Proposition 2. Indeed, a = 0 implies that A_1 e_1\cdot e_1 = A_2 e_1\cdot e_1 = 0 , in contradiction with the positive definiteness of A_2 . Similarly, in the three-dimensional setting, the independence of \{e_1, \eta_1, \eta_2 \} is not compatible with a = 0 . Indeed, a = 0 implies that A_i e_1 = A_i \eta_i = 0 for i = 1, 2 , which contradicts the fact that A_1 and A_2 have rank two.

    Proof. Assume that a>0 . In view of the convergence 23, we already know that

    \begin{equation} \lim\limits_{\delta\to 0} A^\ast_\delta = A^\ast, \end{equation} (103)

    where, for \delta>0 , A^\ast_\delta is the homogenized matrix associated to conductivity matrix A_\delta given by

    \begin{equation*} A_\delta(y_1) = \chi(y_1) A_1^\delta + (1-\chi(y_1)) A_2 ^\delta\qquad\text{for} \quad y_1\in{\mathbb{R}}, \end{equation*}

    with A_i^\delta = A_i +\delta I_d . Since A_1 and A_2 are non-negative matrices, A_\delta is positive definite and thus the homogenized matrix A^\ast_\delta is given by the lamination formula (see [17] and also [2,Lemma 1.3.32])

    \begin{equation} A^\ast_\delta = \theta A_1^\delta+(1-\theta)A_2^\delta -\frac{\theta(1-\theta)}{(1-\theta)A_1^\delta e_1\cdot e_1 + \theta A_2^\delta e_1\cdot e_1 } (A_2^\delta-A_1^\delta)e_1\otimes (A_2^\delta-A_1^\delta)e_1. \end{equation} (104)

    Since a>0 , passing to the limit as \delta\to 0 in the lamination formula 104 and using the convergence 103 yields the expression 101 for A^\ast .

    We prove that A^\ast x\cdot x\geq 0 for any x\in{\mathbb{R}^d} . From the Cauchy-Schwarz inequality, we deduce that

    \begin{align} |(A_2-A_1)e_1\cdot x| &\leq |A_2e_1\cdot x|+ |A_1e_1\cdot x|\\ &\leq (A_2e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2} + (A_1e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}. \end{align} (105)

    This combined with the definition 101 of A^\ast implies that, for any x\in{\mathbb{R}^d} ,

    \begin{align} A^\ast x\cdot x& = \theta (A_1 x\cdot x) + (1-\theta)(A_2 x\cdot x) - \theta(1-\theta)a^{-1}\left |(A_2-A_1)e_1\cdot x\right|^2\\ &\geq \theta (A_1x\cdot x) +(1-\theta) (A_2 x\cdot x)\\ &\quad-\theta(1-\theta)a^{-1}[(A_2e_1\cdot e_1)^{1/2} (A_2x\cdot x)^{1/2}+ (A_1e_1\cdot e_1)^{1/2} (A_1x\cdot x)^{1/2} ]^2\\ & = a^{-1} [a \theta (A_1x\cdot x) +a (1-\theta) (A_2 x\cdot x) -\theta(1-\theta)(A_2e_1\cdot e_1)( A_2x\cdot x)\\ & \quad -\theta(1-\theta)(A_1e_1\cdot e_1)(A_1x\cdot x)\\ & \quad - 2\theta(1-\theta)(A_2e_1\cdot e_1)^{1/2}( A_2x\cdot x)^{1/2}(A_1e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}]. \end{align} (106)

    In view of definition 100 of a , we have that

    \begin{align} &a \theta( A_1x\cdot x) +a (1-\theta)( A_2 x\cdot x)\\& = \theta (1-\theta)(A_1 e_1\cdot e_1)(A_1 x\cdot x) + \theta^2(A_2 e_1\cdot e_1)(A_1 x\cdot x)\\ &\quad+ (1-\theta)^2 (A_1 e_1\cdot e_1)(A_2 x\cdot x)\\ &\quad + \theta(1-\theta)(A_2 e_1\cdot e_1)(A_2 x\cdot x). \end{align}

    Plugging this equality into 106, we deduce that

    \begin{align} A^\ast x\cdot x &\geq a^{-1}[\theta^2(A_2 e_1\cdot e_1)(A_1 x\cdot x)+ (1-\theta)^2(A_1 e_1\cdot e_1)(A_2 x\cdot x)\\ & \quad -2\theta(1-\theta)(A_2e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}(A_1e_1\cdot e_1)^{1/2}( A_2x\cdot x)^{1/2}]\\ & = a^{-1}[\theta (A_2 e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2} -(1-\theta) (A_1 e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2} ]^2\geq 0, \end{align} (107)

    which proves that A^\ast is a non-negative definite matrix.
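The non-negativity established by 107 can also be checked numerically. The following sketch (random samples, purely illustrative and not part of the proof) verifies that the smallest eigenvalue of the matrix A^\ast defined by 101 is non-negative whenever A_1 and A_2 are non-negative and a>0 .

```python
import numpy as np

# Illustrative sanity check of (107): for random non-negative A1, A2
# with a > 0, the matrix A* of (101) has no negative eigenvalue.
rng = np.random.default_rng(0)
theta, d = 0.3, 3
e1 = np.zeros(d)
e1[0] = 1.0

for _ in range(200):
    M1 = rng.standard_normal((d, d))
    M2 = rng.standard_normal((d, d))
    A1, A2 = M1 @ M1.T, M2 @ M2.T      # symmetric non-negative definite
    a = (1 - theta) * A1[0, 0] + theta * A2[0, 0]
    v = (A2 - A1) @ e1
    A_star = (theta * A1 + (1 - theta) * A2
              - theta * (1 - theta) / a * np.outer(v, v))
    assert np.linalg.eigvalsh(A_star).min() > -1e-9
```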

    Now assume that a = 0 . Since A_1 and A_2 are non-negative matrices, the condition a = 0 implies that A_1e_1\cdot e_1 = A_2e_1\cdot e_1 = 0 or, equivalently, A_1e_1 = A_2e_1 = 0 . Hence,

    \begin{equation*} (A_2^\delta -A_1^\delta)e_1 = (A_2-A_1)e_1 = 0, \end{equation*}

    which implies that the lamination formula 104 becomes

    \begin{equation*} A^\ast_\delta = \theta A_1^\delta + (1-\theta)A_2^\delta. \end{equation*}

    This combined with the convergence 103 yields the expression 102 for A^\ast .

    To conclude the proof, it remains to prove the positive definiteness of A^\ast under the above conditions i) and ii).

    Case (i). d = 2 , a>0 and A_1, A_2 given by 28.

    Assume that A^* x\cdot x = 0 . Then the inequality 107 is an equality, which in turn yields equalities in 105. In particular, we have

    \begin{equation} |A_2e_1\cdot x| = (A_2e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2} = \|A^{1/2}_2e_1\|\|A^{1/2}_2x\|. \end{equation} (108)

    Recall that the Cauchy-Schwarz inequality is an equality if and only if one of the vectors is a scalar multiple of the other. This combined with 108 leads to A^{1/2}_2x = \alpha A^{1/2}_2e_1 for some \alpha\in{\mathbb{R}} . Since A_2 , and hence A^{1/2}_2 , is positive definite, we have

    \begin{equation} x = \alpha e_1 \qquad\text{for some} \quad \alpha\in{\mathbb{R}}. \end{equation} (109)

    From the definition 101 of A^\ast and due to the assumption \xi\cdot e_1\neq 0 , we get

    \begin{equation} A^\ast e_1\cdot e_1 = \frac{1}{a}(A_2 e_1\cdot e_1) (\xi\cdot e_1)^2 > 0. \end{equation} (110)

    Recall that A^\ast x\cdot x = 0 . This combined with 109 and 110 implies that x = 0 , which proves that A^\ast is positive definite.

    Case (ii). d = 3 , a>0 and A_1, A_2 given by 39.

    Assume that A^\ast x\cdot x = 0 . As in Case (i), we have equalities in 105. In other words,

    \begin{align} |A_1e_1\cdot x| & = (A_1e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}, \end{align} (111)
    \begin{align} |A_2e_1\cdot x| & = (A_2e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2}. \end{align} (112)

    Let p_i be the non-negative quadratic polynomials defined by

    \begin{equation*} p_i(t) : = A_i(x+te_1)\cdot (x+te_1) \qquad\text{for} \quad i = 1, 2. \end{equation*}

    In view of 111, the discriminant of p_1(t) is zero, so that there exists t_1\in{\mathbb{R}} such that

    \begin{equation} p_1(t_1) = A_1(x+t_1e_1)\cdot (x+t_1e_1) = 0. \end{equation} (113)

    Recall that {\text{Ker}} (A_1) = {\rm Span}(\eta_1) . Since A_1 is a non-negative matrix, we deduce from 113 that x+t_1e_1 belongs to {\text{Ker}} (A_1) , so that

    \begin{equation} x\in{\rm Span}(e_1, \eta_1). \end{equation} (114)

    Similarly, recalling that {\text{Ker}} (A_2) = {\rm Span}(\eta_2) and using 112, we have

    \begin{equation} x\in{\rm Span}(e_1, \eta_2). \end{equation} (115)

    Since the vectors \{e_1, \eta_1, \eta_2\} are independent in {\mathbb{R}}^3 , 114 and 115 imply that

    \begin{equation*} x = \alpha e_1 \qquad\text{for some} \quad \alpha\in{\mathbb{R}}. \end{equation*}

    In light of definition 101 of A^\ast , we have

    \begin{equation*} A^\ast e_1\cdot e_1 = \frac{1}{a} (A_1e_1\cdot e_1)(A_2e_1\cdot e_1) > 0, \end{equation*}

    which implies that x = 0 , since A^\ast x\cdot x = 0 . This establishes that A^\ast is positive definite and concludes the proof.
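Case (ii) can be illustrated numerically with ad hoc rank-two matrices (not those of formula 39) whose kernels are spanned by independent vectors \eta_1 and \eta_2 :

```python
import numpy as np

# Illustrative example for Case (ii), with ad hoc rank-two matrices
# (not those of formula 39): Ker(A_i) = Span(eta_i), {e1, eta1, eta2}
# independent, a > 0, and the matrix A* of (101) is positive definite.
theta = 0.5
e1 = np.array([1.0, 0.0, 0.0])
eta1 = np.array([1.0, 1.0, 0.0])
eta2 = np.array([0.0, 0.0, 1.0])

def rank_two(eta):
    # orthogonal projection onto the plane orthogonal to eta:
    # symmetric, non-negative, rank two, kernel Span(eta)
    return np.eye(3) - np.outer(eta, eta) / (eta @ eta)

A1, A2 = rank_two(eta1), rank_two(eta2)
a = (1 - theta) * A1[0, 0] + theta * A2[0, 0]   # a = 3/4 > 0
v = (A2 - A1) @ e1
A_star = theta * A1 + (1 - theta) * A2 - theta * (1 - theta) / a * np.outer(v, v)
print(np.linalg.eigvalsh(A_star).min())   # strictly positive (here 1/3)
```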

    Note that, when d = 2 and a>0 , the assumption \xi\cdot e_1\neq 0 is essential to ensure that A^\ast is positive definite. Otherwise, the homogenized matrix A^\ast is only non-negative definite, as shown by the following counter-example. Let A_1 and A_2 be the symmetric non-negative matrices of {\mathbb{R}}^{2\times 2} defined by

    \begin{equation*} A_1 = e_2\otimes e_2 \quad\text{and}\quad A_2 = I_2. \end{equation*}

    Then, it is easy to check that a = \theta>0 and A^\ast e_1\cdot e_1 = 0 .
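This computation can be verified directly; the short numerical sketch below (illustrative only) evaluates a and A^\ast e_1\cdot e_1 for this counter-example.

```python
import numpy as np

# Direct check of the counter-example: A1 = e2 (x) e2 and A2 = I2 give
# a = theta > 0 while A* e1 . e1 = 0, so A* is not positive definite.
theta = 0.25
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A1, A2 = np.outer(e2, e2), np.eye(2)

a = (1 - theta) * A1[0, 0] + theta * A2[0, 0]    # = theta
v = (A2 - A1) @ e1
A_star = theta * A1 + (1 - theta) * A2 - theta * (1 - theta) / a * np.outer(v, v)
print(a, A_star[0, 0])   # 0.25 0.0
```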

    This problem was pointed out to me by Marc Briane during my stay at Institut National des Sciences Appliquées de Rennes. I thank him for the countless fruitful discussions. My thanks also extend to Valeria Chiadò Piat for useful remarks. The author is also a member of the INdAM-GNAMPA project "Analisi variazionale di modelli non-locali nelle scienze applicate".



    [1] L. Chua, Memristor-the missing circuit element, IEEE Trans. Circuit Theory, 18 (1971), 507–519. https://doi.org/10.1109/TCT.1971.1083337 doi: 10.1109/TCT.1971.1083337
    [2] L. Chua, S. Kang, Memristive devices and systems, Proc. IEEE, 64 (1976), 209–223. https://doi.org/10.1109/PROC.1976.10092 doi: 10.1109/PROC.1976.10092
    [3] D. Strukov, G. Snider, D. Stewart, R. Williams, The missing memristor found, Nature, 453 (2008), 80–83. https://doi.org/10.1038/nature06932 doi: 10.1038/nature06932
    [4] J. Tour, T. He, Electronics: The fourth element, Nature, 453 (2008), 42–43. https://doi.org/10.1038/453042a doi: 10.1038/453042a
    [5] W. Mao, Y. Liu, L. Ding, A. Safian, X. Liang, A new structured domain adversarial neural network for transfer fault diagnosis of rolling bearings under different working conditions, IEEE Trans. Instrum. Meas., 70 (2021), 1–13. https://doi.org/10.1109/TIM.2020.3038596 doi: 10.1109/TIM.2020.3038596
    [6] S. Wang, Z. Dou, D. Chen, H. Yu, Y. Li, P. Pan, Multimodal multiclass boosting and its application to cross-modal retrieval, Neurocomputing, 357 (2019), 11–23. https://doi.org/10.1016/j.neucom.2019.05.040 doi: 10.1016/j.neucom.2019.05.040
    [7] W. Mao, J. Wang, Z. Xue, An ELM-based model with sparse-weighting strategy for sequential data imbalance problem, Int. J. Mach. Learn. Cybern., 8 (2017), 1333–1345. https://doi.org/10.1007/s13042-016-0509-z doi: 10.1007/s13042-016-0509-z
    [8] S. Zhang, Y. Yang, L. Li, D. Wu, Quasi-synchronization of fractional-order complex-valued memristive recurrent neural networks with switching jumps mismatch, Neural Process Lett., 53 (2021), 865–891. https://doi.org/10.1007/s11063-020-10342-4 doi: 10.1007/s11063-020-10342-4
    [9] Y. Shi, J. Cao, G. Chen, Exponential stability of complex-valued memristor-based neural networks with time-varying delays, Appl. Math. Comput., 313 (2017), 222–234. https://doi.org/10.1016/j.amc.2017.05.078 doi: 10.1016/j.amc.2017.05.078
    [10] X. Yang, D. Ho, Synchronization of delayed memristive neural networks: robust analysis approach, IEEE Trans. Cybern., 46 (2016), 3377–3387. https://doi.org/10.1109/TCYB.2015.2505903 doi: 10.1109/TCYB.2015.2505903
    [11] G. Zhang, Z. Zeng, Exponential stability for a class of memristive neural networks with mixed time-varying delays, Appl. Math. Comput., 321 (2018), 544–554. https://doi.org/10.1016/j.amc.2017.11.022 doi: 10.1016/j.amc.2017.11.022
    [12] M. Mehrabbeik, F. Parastesh, J. Ramadoss, K. Rajagopal, H. Namazi, S. Jafari, Synchronization and chimera states in the network of electrochemically coupled memristive Rulkov neuron maps, Math. Biosci. Eng., 18 (2021), 9394–9409. https://doi.org/10.3934/mbe.2021462 doi: 10.3934/mbe.2021462
    [13] T. Dong, X. Gong, T. Huang, Zero-Hopf bifurcation of a memristive synaptic Hopfield neural network with time delay, Neural Networks, 149 (2022), 146–156. https://doi.org/10.1016/j.neunet.2022.02.009 doi: 10.1016/j.neunet.2022.02.009
    [14] X. Yang, J. Cao, J. Liang, Exponential synchronization of memristive neural networks with delays: interval matrix method, IEEE Trans. Neural Networks Learn. Syst., 28 (2017), 1878–1888. https://doi.org/10.1109/TNNLS.2016.2561298 doi: 10.1109/TNNLS.2016.2561298
    [15] Y. Shi, J. Cao, G. Chen, Exponential stability of complex-valued memristor-based neural networks with time-varying delays, Appl. Math. Comput., 313 (2017), 222–234. https://doi.org/10.1016/j.amc.2017.05.078 doi: 10.1016/j.amc.2017.05.078
    [16] G. Zhang, Z. Zeng, J. Hu, New results on global exponential dissipativity analysis of memristive inertial neural networks with distributed time-varying delays, Neural Networks, 97 (2018), 183–191. https://doi.org/10.1016/j.neunet.2017.10.003 doi: 10.1016/j.neunet.2017.10.003
    [17] A. Wu, Y. Chen, Z. Zeng, Multi-mode function synchronization of memristive neural networks with mixed delays and parameters mismatch via event-triggered control, Inf. Sci., 572 (2021), 147–166. https://doi.org/10.1016/j.ins.2021.04.101 doi: 10.1016/j.ins.2021.04.101
    [18] X. Yang, J. Cao, J. Qiu, pth moment exponential stochastic synchronization of coupled memristor-based neural networks with mixed delays via delayed impulsive control, Neural Networks, 65 (2015), 80–91. https://doi.org/10.1016/j.neunet.2015.01.008 doi: 10.1016/j.neunet.2015.01.008
    [19] L. Zhang, Y. Yang, F. Wang, Projective synchronization of fractional-order memristive neural networks with switching jumps mismatch, Phys. A, 471 (2017), 402–415. https://doi.org/10.1016/j.physa.2016.12.030 doi: 10.1016/j.physa.2016.12.030
    [20] J. Zhang, Z. Lou, Y. Jia, W. Shao, Ground state of Kirchhoff type fractional Schrödinger equations with critical growth, J. Math. Anal. Appl., 462 (2018), 57–83. https://doi.org/10.1016/j.jmaa.2018.01.060 doi: 10.1016/j.jmaa.2018.01.060
    [21] B. Łupińska, E. Schmeidel, Analysis of some Katugampola fractional differential equations with fractional boundary conditions, Math. Biosci. Eng., 18 (2021), 1–19. https://doi.org/10.3934/mbe.2021359 doi: 10.3934/mbe.2021359
    [22] J. Zhang, J. Wang, Numerical analysis for Navier–Stokes equations with time fractional derivatives, Appl. Math. Comput., 30 (2022), 2747–2758. https://doi.org/10.1016/j.amc.2018.04.036 doi: 10.1016/j.amc.2018.04.036
    [23] F. Wang, Y. Yang, Quasi-synchronization for fractional-order delayed dynamical networks with heterogeneous nodes, Appl. Math. Comput., 339 (2018), 1–14. https://doi.org/10.1016/j.amc.2018.07.041 doi: 10.1016/j.amc.2018.07.041
    [24] C. Huang, J. Cao, M. Xiao, A. Alsaedi, T. Hayat, Bifurcations in a delayed fractional complex-valued neural network, Appl. Math. Comput., 292 (2017), 210–227. https://doi.org/10.1016/j.amc.2018.07.041
    [25] L. Zhang, Y. Yang, F. Wang, Synchronization analysis of fractional-order neural networks with time-varying delays via discontinuous neuron activations, Neurocomputing, 275 (2018), 40–49. https://doi.org/10.1016/j.neucom.2017.04.056 doi: 10.1016/j.neucom.2017.04.056
    [26] C. Hu, H. Jiang, Special functions-based fixed-time estimation and stabilization for dynamic systems, IEEE Trans. Syst. Man Cybern., 5 (2022), 3251–3262. https://doi.org/10.1109/TSMC.2021.3062206 doi: 10.1109/TSMC.2021.3062206
    [27] S. Yang, C. Hu, J. Yu, H. Jiang, Finite-time cluster synchronization in complex-variable networks with fractional-order and nonlinear coupling, Neural Networks, 135 (2021), 212–224. https://doi.org/10.1016/j.neunet.2020.12.015 doi: 10.1016/j.neunet.2020.12.015
    [28] J. Fei, L. Liu, Real-time nonlinear model predictive control of active power filter using self-feedback recurrent fuzzy neural network estimator, IEEE Trans. Ind. Electron., 69 (2022), 8366–8376. https://doi.org/10.1109/TIE.2021.3106007 doi: 10.1109/TIE.2021.3106007
    [29] W. Sun, L. Peng, Observer-based robust adaptive control for uncertain stochastic Hamiltonian systems with state and input delays, Nonlinear Anal. Modell. Control, 19 (2014), 626–645. https://doi.org/10.15388/NA.2014.4.8 doi: 10.15388/NA.2014.4.8
    [30] S. Liu, J. Wang, Y. Zhou, M. Feckan, Iterative learning control with pulse compensation for fractional differential systems, Math. Slovaca, 68 (2018), 563–574. https://doi.org/10.1515/ms-2017-0125 doi: 10.1515/ms-2017-0125
    [31] M. Sabzalian, A. Mohammadzadeh, S. Lin, W. Zhang, Robust fuzzy control for fractional-order systems with estimated fraction-order, Nonlinear Dyn., 98 (2019), 2375–2385. https://doi.org/10.1007/s11071-019-05217-w doi: 10.1007/s11071-019-05217-w
    [32] Z. Wang, J. Fei, Fractional-order terminal sliding-mode control using self-evolving recurrent chebyshev fuzzy neural network for mems gyroscope, IEEE Trans. Fuzzy Syst., 30 (2022), 2747– 2758. https://doi.org/10.1109/TFUZZ.2021.3094717 doi: 10.1109/TFUZZ.2021.3094717
    [33] Y. Cao, S. Wang, Z. Guo, T. Huang, S. Wen, Synchronization of memristive neural networks with leakage delay and parameters mismatch via event-triggered control, Neural Networks, 119 (2019), 178–189. https://doi.org/10.1016/j.neunet.2019.08.011 doi: 10.1016/j.neunet.2019.08.011
    [34] X. Li, X. Yang, J. Cao, Event-triggered impulsive control for nonlinear delay systems, Automatica, 117 (2020), 108981. https://doi.org/10.1016/j.automatica.2020.108981 doi: 10.1016/j.automatica.2020.108981
    [35] X. Li, D. Peng, J. Cao, Lyapunov stability for impulsive systems via event-triggered impulsive control, IEEE Trans. Autom. Control, 65 (2020), 4908–4913. https://doi.org/10.1109/TAC.2020.2964558 doi: 10.1109/TAC.2020.2964558
    [36] H. Li, X. Gao, R. Li, Exponential stability and sampled-data synchronization of delayed complex-valued memristive neural networks, Neural Process Lett., 51 (2020), 193–209. https://doi.org/10.1007/s11063-019-10082-0 doi: 10.1007/s11063-019-10082-0
    [37] H. Fan, K. Shi, Y. Zhao, Global \mu-synchronization for nonlinear complex networks with unbounded multiple time delays and uncertainties via impulsive control, Phys. A, 599 (2022), 127484. https://doi.org/10.1016/j.physa.2022.127484 doi: 10.1016/j.physa.2022.127484
    [38] X. Li, D. Ho, J. Cao, Finite-time stability and settling-time estimation of nonlinear impulsive systems, Automatica, 99 (2019), 361–368. https://doi.org/10.1016/j.automatica.2018.10.024 doi: 10.1016/j.automatica.2018.10.024
    [39] S. Yang, C. Hu, J. Yu, H. Jiang, Exponential stability of fractional-order impulsive control systems with applications in synchronization, IEEE Trans. Cybern., 50 (2020), 3157–3168. https://doi.org/10.1109/TCYB.2019.2906497 doi: 10.1109/TCYB.2019.2906497
    [40] H. Fan, K. Shi, Y. Zhao, Pinning impulsive cluster synchronization of uncertain complex dynamical networks with multiple time-varying delays and impulsive effects, Phys. A, 587 (2022), 126534. https://doi.org/10.1016/j.physa.2021.126534 doi: 10.1016/j.physa.2021.126534
    [41] F. Wang, Y. Yang, Intermittent synchronization of fractional order coupled nonlinear systems based on a new differential inequality, Phys. A, 512 (2018), 142–152. https://doi.org/10.1016/j.physa.2018.08.023 doi: 10.1016/j.physa.2018.08.023
    [42] L. Zhang, Y. Yang, F. Wang, Lag synchronization for fractional-order memristive neural networks via period intermittent control, Nonlinear Dyn., 89 (2017), 367–381. https://doi.org/10.1007/s11071-017-3459-4 doi: 10.1007/s11071-017-3459-4
    [43] C. Hu, H. He, H. Jiang, Synchronization of complex-valued dynamic networks with intermittently adaptive coupling: A direct error method, Automatica, 112 (2020), 108675. https://doi.org/10.1016/j.automatica.2019.108675 doi: 10.1016/j.automatica.2019.108675
  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
