Citation: Lorenza D'Elia. Γ-convergence of quadratic functionals with non uniformly elliptic conductivity matrices. Networks and Heterogeneous Media, 2022, 17(1): 15-45. doi: 10.3934/nhm.2021022
In this paper, for a bounded open set Ω of ℝ^d, we study the Γ-convergence of the conductivity energy functional with a zero-order term, defined by
\begin{equation} {\mathscr{F}_\varepsilon}(u) := \begin{cases} \displaystyle\int_{\Omega}\left\{A\left(\frac{x}{\varepsilon}\right)\nabla u\cdot\nabla u + |u|^2\right\}dx, & \text{if } u\in H^1_0(\Omega),\\ \infty, & \text{if } u\in L^2(\Omega)\setminus H^1_0(\Omega). \end{cases} \end{equation} | (1)
The conductivity A is a Y_d-periodic, symmetric and non-negative matrix-valued function, which is not assumed to be uniformly elliptic: it only satisfies
\begin{equation} \operatorname*{ess\,inf}_{y\in Y_d}\left(\min\left\{A(y)\xi\cdot\xi \,:\, \xi\in\mathbb{R}^d,\ |\xi| = 1\right\}\right)\geq 0. \end{equation} | (2)
This condition holds true when the conductivity energy density has missing derivatives, i.e., it degenerates along some direction. This occurs, for example, when the quadratic form associated to A does not depend on the last variable, namely
\begin{equation*} A\xi\cdot\xi := A'\xi'\cdot\xi' \qquad\text{for } \xi = (\xi', \xi_d)\in\mathbb{R}^{d-1}\times\mathbb{R}, \end{equation*}
where A' is a symmetric and positive definite matrix of ℝ^{(d−1)×(d−1)}.
It is well known that the strict ellipticity condition
\begin{equation} \operatorname*{ess\,inf}_{y\in Y_d}\left(\min\left\{A(y)\xi\cdot\xi \,:\, \xi\in\mathbb{R}^d,\ |\xi| = 1\right\}\right)> 0, \end{equation} | (3)
combined with the boundedness of A, implies a compactness result for the conductivity functional
\begin{equation*} u\in H^1_0(\Omega) \mapsto \int_{\Omega}A\left(\frac{x}{\varepsilon}\right)\nabla u\cdot\nabla u\,dx, \end{equation*}
which then Γ-converges, for the strong topology of L²(Ω), to
\begin{equation*} \int_{\Omega}A^\ast\nabla u\cdot\nabla u\,dx, \end{equation*}
where the matrix-valued function A^\ast is, in the periodic setting under consideration, the constant homogenized matrix given by
\begin{equation} A^\ast\lambda\cdot\lambda := \min\left\{\int_{Y_d}A(y)(\lambda+\nabla v(y))\cdot(\lambda+\nabla v(y))\,dy \,:\, v\in H^1_{\rm per}(Y_d)\right\}. \end{equation} | (4)
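It may help to recall what formula (4) gives in the simplest situation: in dimension one, with a scalar 1-periodic coefficient a(y) > 0, the cell formula reduces to the harmonic mean of a. The following minimal numerical sketch (in Python, with a hypothetical two-phase coefficient) illustrates this fact by solving the discrete cell problem in closed form.

```python
import numpy as np

# One-dimensional version of the cell formula (4): for a 1-periodic scalar
# coefficient a(y) > 0, the homogenized coefficient is the harmonic mean
# of a. The two-phase profile below is hypothetical.
theta, a1, a2 = 0.3, 1.0, 5.0
n = 10_000
y = (np.arange(n) + 0.5) / n                    # midpoints of the unit cell
a = np.where(y < theta, a1, a2)

# Discrete cell problem: minimize mean(a * (1 + dv)**2) over increments dv
# with mean(dv) = 0 (periodicity). Optimality gives a * (1 + dv) = h,
# where h is the harmonic mean of a.
h = 1.0 / np.mean(1.0 / a)
dv = h / a - 1.0                                # optimal increments, zero mean
energy = np.mean(a * (1.0 + dv) ** 2)           # discrete analogue of (4)
print(energy, h)                                # both ≈ 1/(0.3/1 + 0.7/5) ≈ 2.2727
```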
The Γ-convergence of general integral functionals of the form
\begin{equation} {\mathscr{F}_\varepsilon}(u) = \int_{\Omega}f\left(\frac{x}{\varepsilon}, Du\right)dx, \qquad\text{for } u\in W^{1,p}(\Omega;\mathbb{R}^m), \end{equation} | (5)
where f(·, ξ) is periodic and f(y, ·) satisfies suitable growth conditions, has also been widely investigated.
In this paper, we investigate the Γ-convergence of the functionals F_ε defined by (1) without assuming the strict ellipticity (3). Note that the presence of the zero-order term |u|² in (1) provides the lower bound
\begin{equation*} {\mathscr{F}_\varepsilon}(u)\geq \int_{\Omega}|u|^2dx. \end{equation*}
This estimate guarantees that any sequence u_ε with uniformly bounded energy F_ε(u_ε) is bounded in L²(Ω), so that it is natural to study the Γ-convergence of F_ε with respect to the weak topology of L²(Ω). Our analysis relies on two assumptions: (H1) any two-scale limit u₀(x, y) of a sequence u_ε with bounded energy does not depend on y; (H2) the space
\begin{equation*} V := \left\{\int_{Y_d}A^{1/2}(y)\Phi(y)\,dy \,:\, \Phi\in L^2_{\rm per}(Y_d;\mathbb{R}^d)\ \text{with}\ {\rm div}\left(A^{1/2}(y)\Phi(y)\right) = 0\ \text{in}\ \mathscr{D}'(\mathbb{R}^d)\right\} \end{equation*}
agrees with the whole space ℝ^d.
Under assumptions (H1) and (H2), we prove (see Theorem 2.1) that F_ε Γ-converges for the weak topology of L²(Ω) to the functional
\begin{equation} {\mathscr{F}_0}(u) := \begin{cases} \displaystyle\int_{\Omega}\left\{A^\ast\nabla u\cdot\nabla u + |u|^2\right\}dx, & \text{if } u\in H^1_0(\Omega),\\ \infty, & \text{if } u\in L^2(\Omega)\setminus H^1_0(\Omega), \end{cases} \end{equation} | (6)
where the homogenized matrix A^\ast is defined by
\begin{equation} A^\ast\lambda\cdot\lambda := \inf\left\{\int_{Y_d}A(y)(\lambda+\nabla v(y))\cdot(\lambda+\nabla v(y))\,dy \,:\, v\in H^1_{\rm per}(Y_d)\right\}. \end{equation} | (7)
We need to make assumption (H1) since, for any sequence u_ε with bounded energy F_ε(u_ε), the estimate above only provides L²(Ω)-bounds, so that the two-scale limit of u_ε may genuinely depend on the oscillating variable y; the counter-example of Section 4 shows that this situation does occur.
In the vectorial case of linear elasticity, an analogous Γ-convergence analysis for functionals with degenerate coefficients was performed in [11,12].
In the present scalar case, we enlighten assumptions (H1) and (H2), which are the key ingredients to obtain the general Γ-convergence result of Theorem 2.1.
The lack of assumption (H1) may induce a degenerate asymptotic behaviour of the functional F_ε, with the appearance of a nonlocal term in the limit.
The paper is organized as follows. In Section 2, we prove the general Γ-convergence result (Theorem 2.1) under assumptions (H1) and (H2), together with a characterization of (H2) in terms of the homogenized matrix A*. In Section 3, we provide two- and three-dimensional laminate geometries for which both assumptions are fulfilled. In Section 4, we construct a two-dimensional counter-example in which assumption (H1) fails and the Γ-limit involves a nonlocal term. Finally, we give an explicit expression of A* for rank-one laminates.
Notation.
● For
●
●
● Throughout, the variable
● We write
\begin{equation*} u_\varepsilon \stackrel{\rightharpoonup}{\rightharpoonup} u_0 \end{equation*}
with u_ε ∈ L²(Ω) and u₀ ∈ L²(Ω×Y_d) when u_ε two-scale converges to u₀ in the sense of Definition 1.1 below.
● F₁ denotes the Fourier transform on L¹(ℝ), defined for f ∈ L¹(ℝ) by
\begin{equation*} {\mathcal{F}}_1(f)(\lambda) := \int_{\mathbb{R}}e^{-2\pi i\lambda x}f(x)\,dx; \end{equation*}
F₂ denotes its extension to L²(ℝ) provided by the Fourier-Plancherel theorem, and \overline{F₁}, \overline{F₂} the corresponding inverse transforms.
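With this normalization, the Gaussian e^{−πx²} is a fixed point of the transform and Plancherel's identity holds without extra 2π factors. A small numerical check (a sketch, not part of the original argument) confirms the convention:

```python
import numpy as np

# Sanity check of the convention F_1(f)(λ) = ∫ e^{-2πiλx} f(x) dx:
# the Gaussian exp(-πx²) is its own Fourier transform under this
# normalization (midpoint-rule quadrature).
dx = 1e-4
x = np.arange(-20.0, 20.0, dx) + dx / 2
f = np.exp(-np.pi * x ** 2)
for lam in (0.0, 0.5, 1.3):
    F = np.sum(f * np.exp(-2j * np.pi * lam * x)) * dx
    print(lam, F.real, np.exp(-np.pi * lam ** 2))   # last two columns agree
```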
Definition 1.1. Let u_ε be a sequence in L²(Ω) and let u₀ ∈ L²(Ω×Y_d). We say that u_ε two-scale converges to u₀ if:
i) u_ε is bounded in L²(Ω);
ii) for any φ ∈ C^∞_c(Ω; C^∞_per(Y_d)),
\begin{equation*} \lim\limits_{\varepsilon\to 0}\int_{\Omega}u_\varepsilon(x)\,\varphi\left(x, \frac{x}{\varepsilon}\right)dx = \int_{\Omega\times Y_d}u_0(x, y)\,\varphi(x, y)\,dx\,dy. \end{equation*}
Such a sequence is denoted by u_ε ⇀⇀ u₀.
Recall that the weak topology of L²(Ω) is metrizable on the closed balls of L²(Ω), which allows us to define the Γ-convergence of F_ε with respect to it.
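The following minimal numerical sketch (with a hypothetical oscillating sequence) illustrates Definition 1.1: u_ε(x) = sin(2πx/ε) converges weakly to 0 in L²(0, 1), while it two-scale converges to u₀(x, y) = sin(2πy), as detected by an oscillating test function.

```python
import numpy as np

# Illustration of Definition 1.1: u_eps(x) = sin(2πx/ε) converges weakly
# to 0 in L²(0,1), but two-scale converges to u0(x, y) = sin(2πy).
# Tested against φ(x, y) = x sin(2πy): the two-scale limit integral is
# ∫₀¹ x dx · ∫₀¹ sin²(2πy) dy = 1/2 · 1/2 = 1/4.
eps = 1e-3
dx = 5e-7
x = np.arange(0.0, 1.0, dx) + dx / 2
u_eps = np.sin(2 * np.pi * x / eps)
lhs = np.sum(u_eps * x * np.sin(2 * np.pi * x / eps)) * dx
print(lhs)   # ≈ 0.25, although the weak L² limit of u_eps is 0
```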
In this section, we will prove the main result of this paper. As previously announced, under assumptions (H1) and (H2), the sequence of functionals (1) Γ-converges, for the weak topology of L²(Ω), to the functional F₀ defined by (6).
Theorem 2.1. Let F_ε be the functional defined by (1). Assume that:
(H1) any two-scale limit u₀(x, y) of a sequence u_ε with bounded energy F_ε(u_ε) does not depend on y;
(H2) the space
\begin{equation} \begin{split} V: = \biggl\{ \int_{Y_d}A^{1/2}(y)\Phi(y)dy \ :\ & \Phi\in L^2_{\mathit{{per}}}(Y_d; {\mathbb{R}}^d) \\ &\text{with} \ \text{div}\left (A^{1/2}(y)\Phi(y)\right) = 0 \ \mathit{{in}} \ \mathscr{D}'({\mathbb{R}}^d) \biggr\} \end{split} \end{equation} | (8)
agrees with the space ℝ^d.
Then, F_ε Γ-converges for the weak topology of L²(Ω) to the functional F₀ defined by (6), i.e.,
\begin{equation*} {\mathscr{F}_\varepsilon}{{\stackrel{\Gamma(L^2)-w}{\rightharpoonup}}}{\mathscr{F}_0}, \end{equation*}
where the homogenized matrix A^\ast is given by (7).
Proof. We split the proof into two steps which are an adaptation of [11,Theorem 3.3] using the sole assumptions (H1) and (H2) in the general setting of conductivity.
Step 1 - Γ-liminf inequality.
Consider a sequence u_ε converging weakly in L²(Ω) to u ∈ L²(Ω). We aim to prove that
\begin{equation} \liminf\limits_{\varepsilon\to 0 } {\mathscr{F}_\varepsilon} ({u_\varepsilon})\geq {\mathscr{F}_0}(u). \end{equation} | (9) |
If the lower limit is ∞, then (9) is trivial. Hence, up to a subsequence, we may assume that the lower limit is a limit and that there exists a positive constant C such that
\begin{equation} {\mathscr{F}_\varepsilon}({u_\varepsilon})\leq C. \end{equation} | (10) |
As the energies (10) are uniformly bounded, the sequence u_ε is bounded in L²(Ω), so that, up to extracting a subsequence, u_ε two-scale converges to some u₀ ∈ L²(Ω×Y_d):
\begin{equation} {u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup u_0. \end{equation} | (11) |
Assumption (H1) ensures that
\begin{equation} u_0(x, y) = u(x) \quad\text{is independent of} \quad y, \end{equation} | (12) |
where, according to the link between two-scale and weak L²(Ω)-convergence, u is the weak limit of u_ε, i.e.,
\begin{equation*} {u_\varepsilon}\rightharpoonup u \quad\text{weakly in} \quad L^2({\Omega}). \end{equation*} |
Since all the components of the matrix A are bounded, the estimate (10) implies that the sequences A(x/ε)∇u_ε and A^{1/2}(x/ε)∇u_ε are bounded in L²(Ω;ℝ^d). Hence, up to a further subsequence,
\begin{equation*} A\left ({\frac{x}{\varepsilon}}\right){\nabla}{u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup \sigma_0(x, y) \qquad\text{with} \quad \sigma_0\in L^2({\Omega}\times Y_d; {\mathbb{R}^d}) , \end{equation*} |
and also
\begin{equation} A^{1/2}\left ({\frac{x}{\varepsilon}}\right){\nabla}{u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup \Theta_0(x, y) \qquad\text{with} \quad \Theta_0\in L^2({\Omega}\times Y_d; {\mathbb{R}^d}). \end{equation} | (13) |
In particular
\begin{equation} \varepsilon A\left ({\frac{x}{\varepsilon}}\right){\nabla}{u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup 0. \end{equation} | (14) |
Consider now Φ ∈ L²_per(Y_d;ℝ^d) such that
\begin{equation} {\mathrm{div}}\left (A^{1/2}(y)\Phi(y)\right) = 0 \qquad\text{in} \quad \mathscr{D}'({\mathbb{R}}^d), \end{equation} | (15) |
or equivalently,
\begin{equation} \notag \int_{Y_d} A^{1/2}(y)\Phi(y)\cdot{\nabla}\psi(y)dy = 0 \qquad\forall \psi\in{H^1_{\text{per}}}(Y_d). \end{equation} |
Take also φ ∈ C^∞_c(Ω). An integration by parts yields
\begin{align*} \int_{{\Omega}} A^{1/2}\left ({\frac{x}{\varepsilon}}\right)&{\nabla}{u_\varepsilon}\cdot\Phi\left ({\frac{x}{\varepsilon}}\right)\varphi(x)dx = - \int_{{\Omega}}{u_\varepsilon} A^{1/2}\left ({\frac{x}{\varepsilon}}\right)\Phi\left ({\frac{x}{\varepsilon}}\right)\cdot{\nabla}\varphi(x)dx. \end{align*} |
By using [1,Lemma 5.7], we can pass to the two-scale limit in the previous equality, which gives
\begin{equation} \int_{{\Omega}\times Y_d} \Theta_0(x, y)\cdot\Phi(y)\varphi(x)dxdy = -\int_{{\Omega}\times Y_d} u(x)A^{1/2}(y)\Phi(y)\cdot{\nabla}\varphi(x)dxdy. \end{equation} | (16) |
We prove that the target function u belongs to H¹(Ω). Setting
\begin{equation} N: = \int_{Y_d} A^{1/2}(y)\Phi(y)dy, \end{equation} | (17) |
and varying φ in C^∞_c(Ω), the equality (16) reads as
\begin{equation*} \label{formulaN} \int_{{\Omega}\times Y_d} \Theta_0(x, y)\cdot \Phi(y)\varphi(x)dxdy = -\int_{{\Omega}} u(x)N\cdot {\nabla}\varphi(x)dx \end{equation*} |
Since the integral in the left-hand side is bounded by a constant times the L²(Ω)-norm of φ, the right-hand side defines a continuous linear functional on L²(Ω); hence, there exists g ∈ L²(Ω) such that
\begin{equation*} \int_{{\Omega}} u(x)N\cdot {\nabla}\varphi(x)dx = \int_{{\Omega}} g(x)\varphi(x)dx, \end{equation*} |
which implies that
\begin{equation} N\cdot{\nabla} u\in L^2({\Omega}). \end{equation} | (18) |
In view of assumption (H2), the vectors N of the form (17) span the whole space ℝ^d, so that
\begin{equation} u\in H^1({\Omega}). \end{equation} | (19) |
This combined with equality 16 leads us to
\begin{equation} \int_{{\Omega}\times Y_d} \Theta_0(x, y)\cdot\Phi(y)\varphi(x)dxdy = \int_{{\Omega}\times Y_d}A^{1/2}(y){\nabla} u(x)\cdot\Phi(y)\varphi(x)dxdy. \end{equation} | (20) |
By density, the last equality holds if the test functions of the form Φ(y)φ(x) are replaced by any ψ ∈ L²(Ω; L²_per(Y_d;ℝ^d)) satisfying
\begin{equation*} {\mathrm{div}}_y\left (A^{1/2}(y)\psi(x, y)\right) = 0 \qquad\text{in} \quad \mathscr{D}'({\mathbb{R}}^d), \end{equation*} |
or equivalently,
\begin{equation*} \int_{{\Omega}\times Y_d}\psi(x, y)\cdot A^{1/2}(y){\nabla}_yv(x, y)dxdy = 0 \qquad\forall v\in L^2({\Omega};{H^1_{\text{per}}}(Y_d)). \end{equation*} |
The set of such functions ψ is the orthogonal complement in L²(Ω×Y_d;ℝ^d) of the closure of the set
\begin{equation*} \mathscr{K} : = \left \{A^{1/2}(y){\nabla}_yv(x, y) \quad : \quad v\in L^2({\Omega}; {H^1_{\text{per}}}(Y_d))\right\}. \end{equation*} |
Thus, the equality 20 yields
\begin{equation*} \Theta_0(x, y) = A^{1/2}(y){\nabla} u(x)+S(x, y) \end{equation*} |
for some S in the closure of K, i.e., there exists a sequence v_n ∈ L²(Ω; H¹_per(Y_d)) such that
\begin{equation*} A^{1/2}(y){\nabla}_yv_n(x, y) \to S(x, y)\qquad\text{strongly in}\quad L^2({\Omega};{L^2_{\text{per}}}(Y_d;{\mathbb{R}^d})). \end{equation*} |
Due to the lower semi-continuity property of two-scale convergence (see [1,Proposition 1.6]), we get
\begin{align*} \liminf\limits_{\varepsilon\to 0} \|A^{1/2}(x/\varepsilon){\nabla}{u_\varepsilon}\|^2_{L^2({\Omega};{\mathbb{R}^d})}&\geq \|\Theta_0\|^2_{L^2({\Omega}\times Y_d;{\mathbb{R}^d})}\\ & = \lim\limits_n\left \|A^{1/2}(y)\left ({\nabla}_xu(x)+{\nabla}_yv_n\right)\right\|^2_{L^2({\Omega}\times Y_d;{\mathbb{R}^d})}. \end{align*} |
Then, by the weak L²(Ω) lower semicontinuity of the norm applied to the zero-order term, we deduce that
\begin{align*} &\liminf\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}({u_\varepsilon})\notag\\ &\geq \lim\limits_{n}\int_{{\Omega}\times Y_d}A(y)({\nabla}_xu(x)+{\nabla}_yv_n(x, y))\cdot({\nabla}_xu(x)+{\nabla}_yv_n(x, y))dxdy + \int_{{\Omega}}|u|^2dx\\ &\geq\int_{{\Omega}}\inf\biggl\{\int_{ Y_d}A(y)({\nabla}_xu(x)+{\nabla}_yv(y))\cdot({\nabla}_xu(x)+{\nabla}_yv(y))dy \quad : \quad \quad v\in {H^1_{\text{per}}}(Y_d) \biggl\} dx\notag\\ &\qquad + \int_{{\Omega}}|u|^2dx. \end{align*} |
Recalling the definition 7, we immediately conclude that
\begin{equation*} \liminf\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}({u_\varepsilon})\geq \int_{{\Omega}}\left \{A^\ast{\nabla} u\cdot{\nabla} u + |u|^2\right\}dx, \end{equation*} |
provided that u ∈ H¹₀(Ω).
It remains to prove that the target function u actually belongs to H¹₀(Ω). To this end, take φ ∈ C^∞(Ω̄) not necessarily vanishing on the boundary ∂Ω. Arguing as in the derivation of (16), an integration by parts now produces an extra boundary term:
\begin{align} \int_{{\Omega}\times Y_d}\Theta_0(x, y)\cdot\Phi(y)\varphi(x)dxdy & = \int_{{\Omega}}N\cdot{\nabla} u(x)\varphi(x)dx\\ &\quad - \int_{\partial{\Omega}}N\cdot \nu(x)u(x)\varphi(x)d\mathscr{H}, \end{align} | (21) |
where ν denotes the unit outward normal to ∂Ω and 𝓗 the surface measure on ∂Ω. In view of (20), the volume terms cancel, so that
\begin{equation*} \int_{\partial{\Omega}}N\cdot \nu(x)u(x)\varphi(x)d\mathscr{H} = 0, \end{equation*} |
which leads to
\begin{equation} N\cdot \nu(x_0) u(x_0) = 0. \end{equation} | (22) |
In view of assumption (H2) and the arbitrariness of N ∈ ℝ^d, we deduce that u vanishes on ∂Ω, i.e.,
\begin{equation*} u\in H^1_0({\Omega}). \end{equation*} |
This concludes the proof of the Γ-liminf inequality.
Step 2 - Γ-limsup inequality.
We use the same arguments of [12,Theorem 2.4], which easily extend to the conductivity setting. We just give an idea of the proof, which is based on a perturbation argument. For δ > 0, set
\begin{equation*} A_\delta: = A+\delta I_d, \end{equation*} |
where I_d denotes the unit matrix of ℝ^{d×d}. The perturbed matrix A_δ is strictly elliptic and bounded, so that, by the classical periodic homogenization result recalled in the introduction, the functional defined by (1) with A_δ in place of A Γ-converges to
\begin{equation*} \mathscr{F}^\delta(u) : = \left\{ \begin{array}{l} \int_{{\Omega}} \left \{A^\ast_\delta{\nabla} u\cdot{\nabla} u + |u|^2\right\}dx, &\text{if} \quad u\in H^1_0({\Omega}), \\ \quad \infty, &\text{if} \quad u\in L^2({\Omega})\setminus H^1_0({\Omega}), \end{array} \right. \end{equation*} |
for the strong topology of L²(Ω), where A*_δ is given by (4) with A_δ in place of A. Denote by F^0 the Γ-limit, for the weak topology of L²(Ω), of a Γ-converging subsequence F_{ε_j}, and let u_{ε_j} be a recovery sequence for u ∈ H¹₀(Ω). Then we have
\begin{align*} F^0(u)&\leq \mathscr{F}^\delta(u)\\ &\leq \liminf\limits_{\varepsilon_j\to 0} \int_{{\Omega}}\left \{A_\delta\left (\frac{x}{\varepsilon_j}\right){\nabla} u_{\varepsilon_j}\cdot {\nabla} u_{\varepsilon_j} +|u_{\varepsilon_j}|^2\right\}dx\\ &\leq \liminf\limits_{\varepsilon_j\to 0} \int_{{\Omega}}\left \{A\left (\frac{x}{\varepsilon_j}\right){\nabla} u_{\varepsilon_j}\cdot {\nabla} u_{\varepsilon_j} +|u_{\varepsilon_j}|^2\right\}dx + O(\delta)\\ & = F^0(u) + O(\delta). \end{align*} |
It follows that F^0(u) = lim_{δ→0} F^δ(u) for any u ∈ H¹₀(Ω). Moreover, the perturbed homogenized matrices satisfy
\begin{equation} \lim\limits_{\delta\to 0}A^\ast_\delta = A^\ast. \end{equation} | (23) |
Thanks to the Lebesgue dominated convergence theorem and in view of (23), we get that F^δ(u) converges to F₀(u) as δ → 0, so that F^0 and F₀ agree on H¹₀(Ω).
Now, let us show that for any u ∈ H¹₀(Ω) there exists a sequence ū_ε converging weakly to u in L²(Ω) such that
\begin{equation*} \lim\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(\overline{u}_{\varepsilon}) = {\mathscr{F}_0}(u), \end{equation*} |
which proves the Γ-limsup inequality and concludes the proof of Theorem 2.1.
The next proposition provides a characterization of assumption (H2) in terms of the kernel of the homogenized matrix A*.
Proposition 1. The kernel of A* is the orthogonal complement of the space V defined by (8), i.e.,
\begin{equation} \notag {\text{Ker}}(A^\ast) = V^\perp. \end{equation}
In particular, assumption (H2) holds if and only if A* is positive definite.
Proof. Consider λ ∈ Ker(A*) and define
\begin{equation*} H^1_\lambda(Y_d) : = \left \{u\in H^1_{\rm loc}({\mathbb{R}^d}) \quad : \quad {\nabla} u \quad \text{is} \quad Y_d\text{-periodic and} \quad \int_{Y_d}{\nabla} u(y)dy = \lambda \right\}. \end{equation*} |
Recall that the minimization (7) defining A* can be equivalently performed over H¹_λ(Y_d), so that
\begin{align*} 0 = A^\ast\lambda\cdot\lambda & = \inf\left \{ \int_{ Y_d}A(y){\nabla} u(y)\cdot{\nabla} u(y)dy \quad : \quad u\in H^1_\lambda(Y_d) \right\}. \label{newhommat} \end{align*} |
Then, there exists a sequence u_n in H¹_λ(Y_d) such that
\begin{equation*} \lim\limits_{n\to\infty}\int_{Y_d} A(y){\nabla} u_n(y)\cdot{\nabla} u_n(y) dy = 0, \end{equation*} |
which implies that
\begin{equation} A^{1/2}{\nabla} u_n \to 0\qquad \text{strongly in} \quad L^2(Y_d; \quad {\mathbb{R}}^d). \end{equation} | (24) |
Now, take Φ ∈ L²_per(Y_d;ℝ^d) with div(A^{1/2}Φ) = 0 in 𝒟′(ℝ^d), and write u_n = λ·y + v_n with v_n ∈ H¹_per(Y_d). We have
\begin{align} \int_{Y_d} A^{1/2}(y){\nabla} u_n(y)\cdot\Phi(y)dy & = \int_{Y_d}{\nabla} u_n(y)\cdot A^{1/2}(y)\Phi(y)dy \\ & = \lambda\cdot \int_{Y_d} A^{1/2}(y)\Phi(y)dy + \int_{Y_d}{\nabla} v_n(y)\cdot A^{1/2}(y)\Phi(y)dy \\ & = \lambda\cdot \int_{Y_d} A^{1/2}(y)\Phi(y)dy, \end{align} | (25) |
where the last equality is obtained by integrating by parts the second integral, combined with the fact that div(A^{1/2}Φ) = 0. In view of (24), passing to the limit as n → ∞ in (25) yields
\begin{equation*} \label{lambadaperp} 0 = \lambda\cdot \left (\int_{Y_d} A^{1/2}(y)\Phi(y)dy\right), \end{equation*} |
for any admissible Φ, that is, λ ⊥ V. This proves the inclusion
\begin{equation*} \label{kersubsetV} {\text{Ker}}(A^\ast) \subseteq V^\perp. \end{equation*} |
Conversely, by 23 we already know that
\begin{equation*} \lim\limits_{\delta\to 0} A^\ast_\delta = A^\ast, \end{equation*} |
where, for δ > 0, A*_δ denotes the homogenized matrix associated with A_δ := A + δI_d, i.e.,
\begin{equation} A^\ast_\delta \lambda\cdot\lambda = \min\left \{ \int_{Y_d} A_\delta(y){\nabla} u_\delta(y)\cdot {\nabla} u_\delta(y) dy \quad : \quad u_\delta\in H^1_\lambda (Y_d) \right\}. \end{equation} | (26) |
Let λ ∈ V^⊥ and, for δ > 0, let ū_δ ∈ H¹_λ(Y_d) be the minimizer of (26). Since A*_δλ·λ is bounded with respect to δ by (23), we have
\begin{equation*} A^\ast_\delta\lambda\cdot\lambda = \int_{Y_d} A_\delta(y){\nabla} \overline{u}_\delta(y)\cdot {\nabla} \overline{u}_\delta(y) dy = \int_{Y_d}|A^{1/2}_\delta(y){\nabla} \overline{u}_\delta(y)|^2dy\leq C, \end{equation*} |
which implies that the sequence Φ_δ := A_δ^{1/2}∇ū_δ is bounded in L²(Y_d;ℝ^d). Hence, up to a subsequence, Φ_δ converges weakly in L²(Y_d;ℝ^d) to some Φ.
Now, we show that A_δ^{1/2} converges strongly to A^{1/2}. Consider the spectral decomposition
\begin{equation} \notag A(y) = R(y)D(y)R^T(y) \qquad\text{for a.e. }y\in Y_d, \end{equation} |
where R(y) is an orthogonal matrix and D(y) is a non-negative diagonal matrix. Then we have
\begin{equation} A^{1/2}_\delta (y) = R(y)(D(y)+\delta I_d)^{1/2}R^T(y)\qquad \text{for a.e. }y\in Y_d, \notag \end{equation} |
which implies that A_δ^{1/2}(y) converges to A^{1/2}(y) as δ → 0, uniformly with respect to y.
Now, passing to the limit as δ → 0 in the equation
\begin{equation*} {\mathrm{div}}(A_\delta^{1/2}\Phi_\delta) = {\mathrm{div}} (A_\delta{\nabla} \overline{u}_\delta) = 0 \qquad\text{in } \mathscr{D'}({\mathbb{R}}^d), \end{equation*} |
we have
\begin{equation*} {\mathrm{div}} (A^{1/2}\Phi) = 0 \qquad\text{in } \mathscr{D'}({\mathbb{R}}^d). \end{equation*} |
This shows that Φ is an admissible vector field in the definition (8) of V. On the other hand, we have
\begin{equation*} A^\ast_\delta\lambda = \int_{Y_d} A_\delta(y){\nabla} \overline{u}_\delta(y)dy = \int_{Y_d} A^{1/2}_\delta(y) \Phi_\delta(y)dy. \end{equation*} |
Hence, taking into account the strong convergence of A_δ^{1/2} to A^{1/2} and the weak convergence of Φ_δ to Φ, we obtain
\begin{equation*} A^\ast\lambda = \lim\limits_{\delta\to 0} A^\ast_\delta \lambda = \lim\limits_{\delta\to 0} \int_{Y_d} A^{1/2}_\delta(y) \Phi_\delta(y)dy = \int_{Y_d} A^{1/2}(y)\Phi(y)dy, \end{equation*} |
which implies that A*λ belongs to V. Since λ ∈ V^⊥, it follows that
\begin{equation*} A^\ast\lambda\cdot\lambda = 0, \end{equation*} |
so that, since A* is symmetric and non-negative, λ ∈ Ker(A*). This proves the inclusion
\begin{equation*} V^\perp\subseteq {\text{Ker}}(A^\ast), \end{equation*} |
which concludes the proof.
In this section we provide a geometric setting for which assumptions (H1) and (H2) are fulfilled. We focus on rank-one laminates: the unit cell Y_d is divided into the two layers
\begin{equation*} Z_1 = (0, \theta)\times (0, 1)^{d-1} \quad\text{and}\quad Z_2 = (\theta, 1)\times (0, 1)^{d-1}, \end{equation*} |
where θ ∈ (0,1) denotes the volume fraction of the first layer. The corresponding periodic phases are
\begin{equation*} Z_i^\# : = \text{Int}\biggl(\bigcup\limits_{k\in\mathbb{Z}^d} \left (\overline{Z_i}+k\right)\biggr) \qquad\text{for} \quad i = 1, 2. \end{equation*} |
We also introduce the open slabs
\begin{equation*} X_1 : = (0, \theta)\times {\mathbb{R}}^{d-1} \quad\text{and}\quad X_2 : = (\theta, 1)\times {\mathbb{R}}^{d-1}, \end{equation*} |
and we denote by χ the 1-periodic characteristic function of the first phase, namely χ(y₁) = 1 for y₁ ∈ (0, θ) and χ(y₁) = 0 for y₁ ∈ (θ, 1).
The anisotropic phases are described by two constant, symmetric and non-negative matrices A₁ and A₂, so that the conductivity matrix of the laminate is given by
\begin{equation} A(y_1) : = \chi(y_1)A_1 + (1-\chi(y_1))A_2 \qquad\text{for} \quad y_1\in{\mathbb{R}}, \end{equation} | (27) |
where χ is the 1-periodic characteristic function defined above.
We are interested in two-phase mixtures in which at least one of the phases is degenerate. In the two-dimensional case d = 2, we assume that
\begin{equation} A_1 = \xi\otimes\xi\qquad\text{and}\qquad A_2 \ \text{is positive definite}, \end{equation} | (28)
for some vector ξ ∈ ℝ² with ξ·e₁ ≠ 0.
Proposition 2. Let A₁ and A₂ be the matrices defined by (28), with ξ·e₁ ≠ 0. Then, assumptions (H1) and (H2) are satisfied.
From Theorem 2.1, we easily deduce that the energy F_ε defined by (1), with the laminate conductivity (27), Γ-converges for the weak topology of L²(Ω) to the functional F₀ defined by (6).
Proof. Firstly, let us prove assumption (H1). We adapt the proof of Step 1 of [11,Theorem 3.3] to two-dimensional laminates. In our context, the algebra involved is different due to the scalar setting.
Denote by u₀^i the restriction to Ω×Z_i^# of the two-scale limit u₀ of a sequence u_ε with bounded energy, for i = 1, 2. Taking test functions Φ(x, y) with Φ(x, ·) compactly supported in Z₁^#, and using (14), we have
\begin{align} 0& = -\lim\limits_{\varepsilon\to 0} \varepsilon\int_{{\Omega}} A\left ({\frac{x}{\varepsilon}}\right){\nabla}{u_\varepsilon}\cdot \Phi\left (x, {\frac{x}{\varepsilon}}\right) dx \\ & = \lim\limits_{\varepsilon\to 0}\int_{{\Omega}} {u_\varepsilon} {\mathrm{div}}_y(A_1\Phi(x, y))\left (x, {\frac{x}{\varepsilon}}\right) dx \\ & = \int_{{\Omega}\times Z^\#_1} u^1_0(x, y) {\mathrm{div}}_y(A_1\Phi(x, y)) dxdy \\ & = -\int_{{\Omega}\times Z^\#_1} A_1{\nabla}_y u^1_0(x, y) \cdot\Phi(x, y) dxdy, \end{align} |
so that
\begin{equation} A_1{\nabla}_y u^1_0(x, y) \equiv 0 \qquad\text{in} \quad \Omega\times Z_1^\#. \end{equation} | (29) |
Similarly, taking test functions supported in the second phase, we obtain
\begin{equation} A_2{\nabla}_y u^2_0(x, y) \equiv 0 \qquad\text{in} \quad \Omega\times Z_2^\#. \end{equation} | (30) |
Due to 29, in the phase Z₁^# we have
\begin{equation*} \label{kerAdimtwo} {\nabla}_yu^1_0\in{\text{Ker}} (A_1) = \text{Span}(\xi^\perp), \end{equation*} |
where ξ^⊥ denotes a unit vector orthogonal to ξ. Hence, u₀¹(x, ·) depends on y only through the scalar variable ξ^⊥·y, so that
\begin{equation} u_0^1(x, y) = \theta^1 (x, \xi^\perp\cdot y)\qquad \text{a.e.} \quad (x, y)\in {\Omega}\times X_1, \end{equation} | (31)
for some function θ¹ : Ω×ℝ → ℝ. Similarly, since A₂ is positive definite, (30) implies that ∇_y u₀² = 0 in Ω×Z₂^#, so that
\begin{equation} u^2_0(x, y) = \theta^2(x) \qquad\text{a.e.} \quad (x, y)\in{\Omega} \times X_2, \end{equation} | (32) |
for some function θ² defined on Ω. In order to connect θ¹ and θ², we use vectors Φ ∈ ℝ² satisfying the transmission condition
\begin{equation} (A_1-A_2)\Phi\cdot e_1 = 0\qquad \text{on} \quad \partial Z_1^\#. \end{equation} | (33) |
Note that condition 33 is necessary for the field y ↦ A(y₁)Φ to be divergence free in ℝ², since its normal component must be continuous across the interfaces ∂Z₁^#. For φ ∈ C^∞_c(Ω×ℝ²) and such Φ, using (14) we have
\begin{align} 0& = -\lim\limits_{\varepsilon\to 0} \varepsilon\int_{{\Omega}} A(y){\nabla}{u_\varepsilon}\cdot\Phi\varphi\left (x, {\frac{x}{\varepsilon}}\right)dx \\ & = \lim\limits_{\varepsilon\to 0} \int_{{\Omega}}{u_\varepsilon}{\mathrm{div}}_y(A(y)\Phi\varphi(x, y))\left (x, {\frac{x}{\varepsilon}}\right)dx\\ & = \int_{{\Omega}\times Z_1}u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi(x, y))dxdy \\ &\quad + \int_{{\Omega}\times Z_2}\theta^2(x){\mathrm{div}}_y(A_2\Phi\varphi(x, y))dxdy. \end{align} |
Take now φ ∈ C^∞_c(Ω×ℝ²) and define the Y₂-periodic function
\begin{equation*} \label{perfun} \varphi^\#(x, y) : = \sum\limits_{k\in\mathbb{Z}^2}\varphi(x, y+k) \end{equation*} |
as new test function. Then, we obtain
\begin{align} 0 & = \int_{{\Omega}\times Z_1}u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi^\#(x, y))dxdy+ \int_{{\Omega}\times Z_2}\theta^2(x){\mathrm{div}}_y(A_2\Phi\varphi^\#(x, y))dxdy\\ & = \sum\limits_{k\in\mathbb{Z}^2}\int_{{\Omega}\times (Z_1+k)}u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi(x, y))dxdy\\ &\quad +\sum\limits_{k\in\mathbb{Z}^2} \int_{{\Omega}\times (Z_2+k)}\theta^2(x){\mathrm{div}}_y(A_2\Phi\varphi(x, y))dxdy\\ & = \int_{{\Omega}\times Z^\#_1}u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi(x, y))dxdy+ \int_{{\Omega}\times Z^\#_2}\theta^2(x){\mathrm{div}}_y(A_2\Phi\varphi(x, y))dxdy. \end{align} | (34) |
Recall that A₁ = ξ⊗ξ and A₂ is positive definite. Under the assumptions of Proposition 2, the linear map
\begin{equation*} \Phi\in{\mathbb{R}^2}\mapsto (A_1e_1\cdot\Phi,\ A_2e_1\cdot\Phi)\in{\mathbb{R}^2} \end{equation*}
is one-to-one. Hence, for any f ∈ ℝ, there exists Φ ∈ ℝ² such that
\begin{equation} A_1\Phi\cdot e_1 = A_2\Phi\cdot e_1 = f. \end{equation} | (35) |
In view of the arbitrariness of f, we may choose Φ satisfying (35) with f = 1, so that
\begin{equation} A_1e_1\cdot \Phi = A_2e_1\cdot \Phi = 1\qquad \text{on} \quad \partial Z^\#_1. \end{equation} | (36) |
Since the normal components of A₁Φ and A₂Φ agree on the interfaces, the volume integrals in (34) reduce, after integration by parts, to the jump terms across ∂Z, namely
\begin{align} 0 & = \int_{{\Omega}\times \partial Z} v_0(x, y)\varphi(x, y)dx{d\mathscr{H}_y}, \end{align} |
where we have set v₀(x, y) := u₀¹(x, y) − θ²(x). By the arbitrariness of φ, we deduce that
\begin{equation} v_0(x, \cdot) = 0 \qquad\text{on} \quad \partial Z. \end{equation} | (37) |
Recall that u₀¹(x, y) = θ¹(x, ξ^⊥·y) with ξ^⊥ = (−ξ₂, ξ₁). On ∂Z, condition (37) thus reads, up to an additive constant in the second argument,
\begin{equation*} \theta^1(x, \xi_1y_2) = \theta^2(x) \qquad\text{on}\quad \partial Z. \end{equation*}
Since ξ₁ = ξ·e₁ ≠ 0, the function θ¹(x, ·) is constant, so that the two-scale limit u₀(x, y) = θ²(x) does not depend on y. Assumption (H1) is thus satisfied.
We prove assumption (H2). The proof is a variant of the Step 2 of [11,Theorem 3.4]. For arbitrary α, β ∈ ℝ, define the vector field Φ through
\begin{equation} A^{1/2}(y)\Phi(y) : = \chi(y_1)\alpha\xi + (1-\chi(y_1))(\alpha\xi+\beta e_2) \qquad\text{for a.e.} \quad y\in{\mathbb{R}^2}. \end{equation} | (38) |
Such a vector field Φ exists, since αξ belongs to the range of A₁^{1/2} and A₂^{1/2} is invertible. Moreover, A^{1/2}Φ is divergence free in ℝ², since its jump across the interfaces, βe₂, is tangential. Its mean value is
\begin{equation*} N: = \int_{Y_2} A^{1/2}(y)\Phi(y)dy = \alpha\xi+(1-\theta)\beta e_2. \end{equation*} |
Hence, due to ξ·e₁ ≠ 0, the vectors ξ and e₂ span ℝ², so that the set of all such N is the whole of ℝ², i.e., V = ℝ² and assumption (H2) is satisfied.
From Proposition 1, it immediately follows that the homogenized matrix A* is positive definite.
We are now going to deal with three-dimensional laminates where both phases are degenerate. In the case d = 3, we assume that the symmetric and non-negative matrices A₁ and A₂ have one-dimensional kernels, namely
\begin{equation} {\text{Ker}} (A_i) = {\rm Span}(\eta_i) \qquad\text{for} \quad i = 1, 2. \end{equation} | (39) |
The following proposition gives algebraic conditions under which the assumptions required by Theorem 2.1 are satisfied.
Proposition 3. For i = 1, 2, let A_i be a symmetric and non-negative matrix of ℝ^{3×3} satisfying (39) with η_i = (a_i, b_i, c_i), and assume that the vectors e₁, η₁, η₂ are linearly independent. Then, assumptions (H1) and (H2) are satisfied.
Invoking again Theorem 2.1, the energy F_ε defined by (1), with the laminate conductivity (27), Γ-converges for the weak topology of L²(Ω) to the functional F₀ defined by (6).
Proof. We first show assumption (H1). The proof is an adaptation of the first step of [11,Theorem 3.3]. The same arguments as in the proof of Proposition 2 show that
\begin{equation} A_i{\nabla}_y u^i_0(x, y) \equiv 0 \qquad \text{in} \quad {\Omega}\times Z_i^\#\quad\text{for} \quad i = 1, 2. \end{equation} | (40) |
In view of (39) and (40), in each phase the gradient ∇_y u₀^i lies in Span(η_i), so that u₀^i depends on y only through η_i·y:
\begin{equation} u^i_0(x, y) = \theta^i(x, \eta_i\cdot y) \qquad \text{a.e.} \quad (x, y)\in{\Omega}\times X_i, \end{equation} | (41) |
for some function θ^i : Ω×ℝ → ℝ. Arguing as in the proof of Proposition 2, for Φ ∈ ℝ³ and φ ∈ C^∞_c(Ω×ℝ³) we obtain
\begin{align} 0& = -\lim\limits_{\varepsilon\to 0} \varepsilon\int_{{\Omega}} A(y){\nabla}{u_\varepsilon}\cdot\Phi\varphi\left (x, {\frac{x}{\varepsilon}}\right)dx\\ & = \int_{{\Omega}\times Z_1}u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi(x, y))dxdy \\ &\quad + \int_{{\Omega}\times Z_2}u^2_0(x, y){\mathrm{div}}_y(A_2\Phi\varphi(x, y))dxdy. \end{align} | (42) |
Take φ ∈ C^∞_c(Ω×ℝ³) and define the Y₃-periodic function
\begin{equation*} \varphi^\# (x, y) : = \sum\limits_{k\in\mathbb{Z}^3} \varphi(x, y+k) \end{equation*} |
Using φ^# as test function in (42), we get
\begin{equation} \int_{{\Omega}\times Z_1^\#} u^1_0(x, y){\mathrm{div}}_y(A_1\Phi\varphi(x, y)) dxdy +\int_{{\Omega}\times Z_2^\#} u^2_0(x, y){\mathrm{div}}_y(A_2\Phi\varphi(x, y)) dxdy = 0. \end{equation} | (43) |
Since the vectors A₁e₁ and A₂e₁ are linearly independent under the assumptions of Proposition 3, the linear map
\begin{equation*} \Phi\in\mathbb{R}^3\mapsto(A_1 e_1\cdot\Phi,\ A_2e_1\cdot\Phi)\in{\mathbb{R}^2} \end{equation*}
is surjective. In particular, for any f ∈ ℝ, there exists Φ ∈ ℝ³ such that
\begin{equation} A_1\Phi\cdot e_1 = A_2\Phi\cdot e_1 = f. \end{equation} | (44) |
In view of the arbitrariness of f and φ, and arguing as in the proof of Proposition 2, we deduce from (43) that
\begin{equation*} \int_{{\Omega}\times\partial Z}\left [u^1_0(x, y)-u^2_0(x, y) \right]\varphi(x, y)dx{d\mathscr{H}_y} = 0, \end{equation*} |
which implies that
\begin{equation} u^1_0(x, \cdot) = u^2_0(x, \cdot) \qquad \text{on}\quad \partial Z. \end{equation} | (45) |
Fix x ∈ Ω. In view of (41), the interface condition (45) reads as
\begin{align} \theta^1(x, b_1y_2 + c_1y_3) = \theta^2(x, b_2y_2 + c_2y_3) \qquad \text{on}\quad \partial Z , \end{align} | (46)
with η₁ = (a₁, b₁, c₁) and η₂ = (a₂, b₂, c₂). Since e₁, η₁, η₂ are linearly independent, the vectors (b₁, c₁) and (b₂, c₂) are linearly independent, so that the map
\begin{equation*} z_1 : = b_1y_2 + c_1y_3, \qquad z_2 : = b_2y_2 + c_2y_3, \end{equation*} |
is a change of variables so that 46 becomes
\begin{equation*} \label{constfunc} \theta^1 (x, z_1) = \theta^2(x, z_2) \qquad \text{a.e.} \quad z_1, z_2\in\mathbb{R}. \end{equation*} |
This implies that θ¹(x, ·) and θ²(x, ·) are constant, so that the two-scale limit u₀ does not depend on y. Assumption (H1) is thus satisfied.
It remains to prove assumption (H2). To this end, consider the set
\begin{equation} E: = \{ (\xi_1, \xi_2)\in\mathbb{R}^3\times\mathbb{R}^3 \quad : \quad (\xi_1-\xi_2)\cdot e_1 = 0, \quad \xi_1\cdot \eta_1 = 0, \quad \xi_2\cdot \eta_2 = 0 \}. \end{equation} | (47) |
For (ξ₁, ξ₂) ∈ E, define the vector field Φ through
\begin{equation} A^{1/2}(y)\Phi(y) : = \chi(y_1)\xi_1+(1-\chi(y_1))\xi_2 \qquad \text{a.e.} \quad y\in{\mathbb{R}}^3. \end{equation} | (48) |
The existence of such a vector field Φ follows from the conditions ξ_i·η_i = 0, which ensure that ξ_i belongs to the range of A_i^{1/2}; moreover, A^{1/2}Φ is divergence free, since its jump ξ₁ − ξ₂ across the interfaces satisfies (ξ₁ − ξ₂)·e₁ = 0. Its mean value is
\begin{equation*} N: = \int_{Y_3}A^{1/2}(y)\Phi(y)dy = \theta\xi_1+(1-\theta)\xi_2. \end{equation*} |
Note that E is the kernel of the linear map
\begin{equation*} (\xi_1, \xi_2)\in {\mathbb{R}}^3\times{\mathbb{R}}^3\mapsto \left ((\xi_1-\xi_2)\cdot e_1, \quad \xi_1\cdot\eta_1, \quad \xi_2\cdot\eta_2 \right)\in{\mathbb{R}}^3. \end{equation*} |
If we identify the pair (ξ₁, ξ₂) with a vector of ℝ⁶, the matrix associated with this map is
\begin{equation*} M_f: = \begin{pmatrix} 1 & 0 & 0 & -1 & 0 & 0\\ a_1 & b_1 & c_1 & 0 & 0 & 0\\ 0 & 0 & 0 & a_2 & b_2 & c_2 \end{pmatrix}, \end{equation*} |
with η₁ = (a₁, b₁, c₁) and η₂ = (a₂, b₂, c₂). Since η₁ and η₂ are not parallel to e₁, i.e., (b₁, c₁) ≠ (0, 0) and (b₂, c₂) ≠ (0, 0), the matrix M_f has maximal rank three, so that dim E = 6 − 3 = 3.
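A quick numerical rank check, with hypothetical values of η₁ = (a₁, b₁, c₁) and η₂ = (a₂, b₂, c₂) not parallel to e₁, supports this claim:

```python
import numpy as np

# Rank of M_f for sample (hypothetical) values of η1 = (a1, b1, c1) and
# η2 = (a2, b2, c2). As soon as (b1, c1) ≠ (0, 0) and (b2, c2) ≠ (0, 0),
# the three rows are linearly independent, the map R^6 -> R^3 is
# surjective, and dim E = 6 - 3 = 3.
a1, b1, c1 = 2.0, 1.0, -1.0
a2, b2, c2 = -3.0, 0.5, 2.0
M_f = np.array([
    [1.0, 0.0, 0.0, -1.0, 0.0, 0.0],
    [a1,  b1,  c1,  0.0,  0.0, 0.0],
    [0.0, 0.0, 0.0, a2,   b2,  c2],
])
print(np.linalg.matrix_rank(M_f))   # 3
```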
Now, consider the linear map
\begin{equation*} (\xi_1, \xi_2)\in E\mapsto \theta\xi_1+(1-\theta)\xi_2\in\mathbb{R}^3. \end{equation*} |
Let us show that this map is injective on E. If θξ₁ + (1−θ)ξ₂ = 0, then the pair (ξ₁, ξ₂) has the form
\begin{equation} \left (\xi_1, \quad \frac{\theta}{\theta-1}\xi_1\right). \end{equation} | (49) |
In view of the definition (47) of E, this implies
\begin{equation*} \left (\xi_1-\frac{\theta}{\theta-1}\xi_1 \right)\cdot e_1 = 0, \quad \xi_1\cdot\eta_1 = 0, \quad \frac{\theta}{\theta-1}\xi_1\cdot\eta_2 = 0. \end{equation*} |
This combined with the linear independence of e₁, η₁, η₂ gives
\begin{equation*} \xi_1\in\{e_1, \eta_1, \eta_2 \}^\perp = \{0\}. \end{equation*} |
Hence, the map (ξ₁, ξ₂) ∈ E ↦ θξ₁ + (1−θ)ξ₂ is injective, and thus surjective onto ℝ³ since dim E = 3. As N ranges over the whole of ℝ³, we conclude that V = ℝ³, so that assumption (H2) is satisfied.
Thanks to Proposition 1, the homogenized matrix A* is positive definite.
In this section we are going to construct a counter-example of two-dimensional laminates with two degenerate phases, where the lack of assumption (H1) provides an anomalous asymptotic behaviour of the functional (1).
Let Ω := (0,1)² and let A₁ and A₂ be the degenerate matrices
\begin{equation*} A_1 = e_1\otimes e_1 \quad\text{and}\quad A_2 = ce_1\otimes e_1, \end{equation*} |
for some positive constant c.
\begin{equation} A(y_2) : = \chi(y_2)A_1+(1-\chi(y_2))A_2 = a(y_2)e_1\otimes e_1\qquad \text{for} \quad y_2\in{\mathbb{R}}, \end{equation} | (50) |
where χ is the 1-periodic characteristic function of the first phase, of volume fraction θ, and
\begin{equation} a(y_2) : = \chi(y_2) + c(1-\chi(y_2))\geq 1. \end{equation} | (51) |
Thus, the energy F_ε defined by (1) only involves the partial derivative with respect to x₁ and reads as
\mathscr{F}_{\varepsilon}(u)=\left\{\begin{array}{c} \int_{\Omega}\left[a\left(\frac{x_{2}}{\varepsilon}\right)\left(\frac{\partial u}{\partial x_{1}}\right)^{2}+|u|^{2}\right] d x, \quad \text { if } u \in H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right), \\ \infty, \quad \text { if } u \in L^{2}(\Omega) \backslash H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right). \end{array}\right. | (52) |
We denote by ∗₁ the convolution with respect to the variable x₁, namely
\begin{equation*} (f\ast_1g)(x_1, x_2) = \int_{\mathbb{R}}f(x_1-t, x_2)g(t, x_2)dt. \end{equation*} |
Throughout this section, we denote by c_θ the positive constant
\begin{equation} c_\theta: = c\theta+1-\theta, \end{equation} | (53) |
where θ ∈ (0,1) is the volume fraction of the phase with conductivity matrix A₁.
Proposition 4. Let F_ε be the functional defined by (52). Then F_ε Γ-converges for the weak topology of L²(Ω) to the functional
\begin{equation*} \label{c12} \mathscr{F} (u) : = \left\{ \begin{array}{l} &\int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)}|{\mathcal{F}}_2 (u)(\lambda_1, x_2)|^2d\lambda_1, \quad\mathit{{if}}~~ u\in H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) , \\ &\\ & \quad \infty, \quad \mathit{{if}}~~ u\in L^2({\Omega})\setminus H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) , \end{array} \right. \end{equation*} |
where F₂ denotes the Fourier transform with respect to the variable x₁ and
\begin{equation} \hat{k}_0(\lambda_1): = \int_{0}^{1}\frac{1}{4\pi^2a(y_2)\lambda_1^2 +1}dy_2. \end{equation} | (54) |
The Γ-limit F can alternatively be written as
\mathscr{F}(u):=\left\{\begin{array}{c} \int_{0}^{1} d x_{2} \int_{\mathbb{R}}\left\{\frac{c}{c_{\theta}}\left(\frac{\partial u}{\partial x_{1}}\right)^{2}\left(x_{1}, x_{2}\right)+\left[\sqrt{\alpha} u\left(x_{1}, x_{2}\right)+\left(h *_{1} u\right)\left(x_{1}, x_{2}\right)\right]^{2}\right\} d x_{1}, \\ \text { if } u \in H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right), \\ \infty, \quad \text { if } u \in L^{2}(\Omega) \backslash H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right), \end{array}\right. | (55) |
where the kernel h is defined by means of its Fourier transform
\begin{equation} {\mathcal{F}}_2(h)(\lambda_1) : = \sqrt{\alpha +f(\lambda_1)}-\sqrt{\alpha}, \end{equation} | (56) |
where
\begin{equation} \alpha: = \frac{c^2\theta +1-\theta}{c_\theta^2} > 0, \qquad f(\lambda_1): = \frac{(c-1)^2\theta(\theta-1)}{c^2_\theta}\frac{1}{c_\theta4\pi^2\lambda_1^2 + 1}. \end{equation} | (57) |
Moreover, any two-scale limit u₀(x, y₂) of a sequence u_ε with bounded energy genuinely depends on the oscillating variable y₂ (see (77) below), so that assumption (H1) fails.
Remark 1. From (57), we can deduce that α + f(λ₁) > 0 for any λ₁ ∈ ℝ. Indeed,
\begin{equation*} \alpha +f(\lambda_1) = {1\over c^2_\theta(c_\theta4\pi^2\lambda_1^2 + 1)}\left \{(c^2\theta+1-\theta)c_\theta4\pi^2\lambda_1^2+ [(c-1)\theta+1]^2 \right\} > 0, \end{equation*} |
for any λ₁ ∈ ℝ, since c_θ > 0. Hence, the square root in the definition (56) of F₂(h) is well defined.
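The algebra behind (54)-(57) can also be verified symbolically. The following sketch (using sympy, with X standing for 4π²λ₁²) checks the closed form of k̂₀ computed in Step 2 below, the expansion of 1/k̂₀ (see (87)), and the positivity asserted in Remark 1:

```python
import sympy as sp

# With X = 4π²λ₁² and a ∈ {1, c} on phases of lengths θ and 1-θ, the
# integral (54) is k0 = θ/(X+1) + (1-θ)/(cX+1). We check its closed form,
# the expansion 1/k0 = (c/c_θ)X + α + f used below (see (87)), and the
# positivity of α + f asserted in Remark 1.
X, c, th = sp.symbols('X c theta', positive=True)
c_th = c * th + 1 - th
k0 = th / (X + 1) + (1 - th) / (c * X + 1)

closed = (c_th * X + 1) / ((X + 1) * (c * X + 1))
print(sp.simplify(k0 - closed))                           # 0

alpha = (c ** 2 * th + 1 - th) / c_th ** 2
f = (c - 1) ** 2 * th * (th - 1) / (c_th ** 2 * (c_th * X + 1))
print(sp.simplify(1 / k0 - (c / c_th * X + alpha + f)))   # 0
print(sp.simplify(alpha + f))   # ratio of positive quantities for X, c, θ > 0
```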
Proof. We divide the proof into three steps.
Step 1 - Γ-liminf inequality.
Consider a sequence u_ε converging weakly in L²(Ω) to u ∈ L²(Ω). We aim to prove that
\begin{equation} \liminf\limits_{\varepsilon\to 0 } {\mathscr{F}_\varepsilon} ({u_\varepsilon})\geq \mathscr{F}(u). \end{equation} | (58) |
If the lower limit is ∞, then (58) is trivial. Hence, up to a subsequence, we may assume that the lower limit is a limit and that there exists a positive constant C such that
\begin{equation} {\mathscr{F}_\varepsilon}({u_\varepsilon})\leq C. \end{equation} | (59) |
It follows that the sequence u_ε is bounded in L²(Ω). Hence, up to a subsequence, u_ε two-scale converges to some u₀ ∈ L²(Ω×Y₂):
\begin{equation} {u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup u_0. \end{equation} | (60) |
In view of (51), we know that a ≥ 1, so that, by (59), the sequence ∂u_ε/∂x₁ is also bounded in L²(Ω). Hence, up to a further subsequence,
\begin{equation} \frac{\partial {u_\varepsilon}}{\partial x_1} \mathop \rightharpoonup \limits^ \rightharpoonup \sigma_0(x, y) \qquad\text{with} \quad \sigma_0\in L^2({\Omega}\times Y_2). \end{equation} | (61) |
In particular,
\begin{equation} \varepsilon\frac{\partial {u_\varepsilon}}{\partial x_1} \mathop \rightharpoonup \limits^ \rightharpoonup 0. \end{equation} | (62) |
Take φ ∈ C^∞_c(Ω; C^∞_per(Y₂)). An integration by parts yields
\begin{equation*} \varepsilon\int_{{\Omega}}\frac{\partial {u_\varepsilon}}{\partial x_1}\varphi\left (x, {\frac{x}{\varepsilon}}\right)dx = - \int_{{\Omega}}{u_\varepsilon}\left (\varepsilon\frac{\partial \varphi}{\partial x_1}\left (x, {\frac{x}{\varepsilon}}\right)+\frac{\partial \varphi}{\partial y_1}\left (x, {\frac{x}{\varepsilon}}\right) \right)dx. \end{equation*} |
Passing to the limit in both terms with the help of 60 and 62 leads to
\begin{equation*} 0 = - \int_{{\Omega}\times Y_2}u_0(x, y)\frac{\partial \varphi}{\partial y_1}(x, y)dxdy, \end{equation*} |
which implies that
\begin{equation} u_0(x, y) \quad\text{is independent of} \quad y_1. \end{equation} | (63) |
Due to the link between two-scale and weak L²(Ω)-convergence, we also have
\begin{equation} {u_\varepsilon}\rightharpoonup u(x) = \int_{Y_1} u_0(x, y_2)dy_2\qquad\text{weakly in }~~ L^2({\Omega}) . \end{equation} | (64) |
Now consider φ ∈ C^∞_c(Ω; C^∞_per(Y₂)) independent of y₁, i.e., satisfying
\begin{equation} \frac{\partial\varphi}{\partial y_1} (x, y) = 0. \end{equation} | (65) |
Since φ does not depend on y₁, no term of order ε⁻¹ appears when integrating by parts with respect to x₁, and we simply have
\begin{align*} \int_{{\Omega}} \frac{\partial{u_\varepsilon}}{\partial x_1}\varphi\left (x, {\frac{x}{\varepsilon}} \right)dx = -\int_{{\Omega}} {u_\varepsilon}\frac{\partial\varphi}{\partial x_1}\left (x, {\frac{x}{\varepsilon}}\right)dx. \end{align*}
In view of the convergences 60 and 61 together with 63, we can pass to the two-scale limit in the previous expression and we obtain
\begin{align} \int_{{\Omega}\times Y_2}\sigma_0(x, y)\varphi(x, y)dxdy & = -\int_{{\Omega}\times Y_2} u_0(x, y_2) \frac{\partial \varphi}{\partial x_1}(x, y )dxdy. \end{align} | (66) |
Varying φ, the left-hand side of (66) is bounded by a constant times the L²-norm of φ. Hence, there exists g ∈ L²(Ω×Y₂) such that
\begin{equation*} \int_{{\Omega}\times Y_2}u_0(x, y_2) \frac{\partial \varphi}{\partial x_1}(x, y)dxdy = \int_{{\Omega}\times Y_2} g(x, y)\varphi(x, y) dxdy, \end{equation*} |
which yields
\begin{equation} \frac{\partial u_0}{\partial x_1}(x, y_2) \in L^2({\Omega}\times Y_1). \end{equation} | (67) |
Then, an integration by parts with respect to x₁ in (66), now with test functions φ which do not necessarily vanish at x₁ = 0, 1, yields
\begin{align} \int_{{\Omega}\times Y_2}&\sigma_0(x, y)\varphi(x, y)dxdy = \int_{{\Omega}\times Y_2}\frac{\partial u_0}{\partial x_1}(x, y_2)\varphi(x, y)dxdy \\ &\quad- \int_{0}^{1}dx_2\int_{Y_2}\left [u_0(1, x_2, y_2)\varphi(1, x_2, y) -u_0(0, x_2, y_2)\varphi(0, x_2, y) \right]dy. \end{align} |
Since, for any such φ, the left-hand side coincides with (66), comparing the two identities gives
\begin{align} \int_{0}^{1}dx_2\int_{Y_2}\left [u_0(1, x_2, y_2)\varphi(1, x_2, y) -u_0(0, x_2, y_2)\varphi(0, x_2, y) \right]dy = 0, \end{align} |
which implies that
\begin{equation*} u_0(1, x_2, y_2) = u_0(0, x_2, y_2) = 0 \qquad \text{a.e.} \quad (x_2, y_2)\in (0, 1)\times Y_1. \end{equation*} |
This combined with 67 yields
\begin{equation*} u_0(x_1, x_2, y_2)\in H^1_0((0, 1)_{x_1}; L^2((0, 1)_{x_2}\times Y_1)). \end{equation*} |
Finally, undoing the integration by parts (the boundary terms now vanish), we infer from (66) that
\begin{align*} \int_{{\Omega}\times Y_2}\left ( \sigma_0(x, y)-\frac{\partial u_0}{\partial x_1}(x, y_2)\right)\varphi(x, y)dxdy = 0. \end{align*} |
Since the orthogonal of divergence-free functions is the gradients, from the previous equality we deduce that there exists ũ ∈ L²(Ω; H¹_per(Y₂)) such that
\begin{equation} \sigma_0(x, y) = \frac{\partial u_0}{\partial x_1}(x, y_2)+ \frac{\partial \tilde{u}}{\partial y_1}(x, y). \end{equation} | (68) |
Now, we show that
\begin{equation} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}}a\left (\frac{x_2}{\varepsilon}\right)\left (\frac{\partial {u_\varepsilon}}{\partial x_1} \right)^2dx \geq \int_{{\Omega}\times Y_2}a(y_2)\left ( \frac{\partial u_0}{\partial x_1}(x, y_2)+ \frac{\partial \tilde{u}}{\partial y_1}(x, y) \right)^2dxdy. \end{equation} | (69) |
To this end, set
\begin{equation*} \sigma_\varepsilon : = \frac{\partial {u_\varepsilon}}{\partial x_1}. \end{equation*} |
Since a belongs to L²(Y₁), there exists a sequence a_k of smooth 1-periodic functions such that
\begin{equation} \|a-a_k\|_{L^2(Y_1)} \to 0 \quad\text{as}~~ k\to\infty , \end{equation} | (70) |
hence, by periodicity, we also have
\begin{equation} \left \|a\left ({\frac{x_2}{\varepsilon}}\right) - a_k\left ({\frac{x_2}{\varepsilon}}\right) \right\|_{L^2({\Omega})} \leq C \|a-a_k\|_{L^2(Y_1)}, \end{equation} | (71) |
for some positive constant C independent of ε and k. Let also ψ_n be a sequence in C^∞_c(Ω; C^∞_per(Y₂)) such that
\begin{equation} \psi_n (x, y) \to \sigma_0(x, y)\qquad \text{strongly in }~~ L^2({\Omega}\times Y_2) . \end{equation} | (72) |
From the inequality
\begin{equation*} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\left (\sigma_{\varepsilon} - \psi_n\left (x, {\frac{x}{\varepsilon}}\right)\right) ^ 2 dx\geq 0, \end{equation*} |
we get
\begin{align} \int_{{\Omega}}& a\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon^2dx\geq 2\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx -\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\psi_n^2\left (x, {\frac{x}{\varepsilon}}\right)dx\\ & = 2\int_{{\Omega}}\left (a\left ({\frac{x_2}{\varepsilon}}\right)-a_k\left ({\frac{x_2}{\varepsilon}}\right) \right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx + 2\int_{{\Omega}}a_k\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx\\ &\quad-\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\psi_n^2\left (x, {\frac{x}{\varepsilon}}\right)dx. \end{align} | (73) |
In view of 71, the first integral on the right-hand side of 73 can be estimated as
\begin{align*} \left |\int_{{\Omega}}\left (a\left ({\frac{x_2}{\varepsilon}}\right)-a_k\left ({\frac{x_2}{\varepsilon}}\right) \right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx \right| &\leq C \|a-a_k\|_{L^2(Y_1)}\|\psi_n\|_{L^\infty({\Omega})}\|\sigma_\varepsilon\|_{L^2({\Omega})}\\ &\leq C \|a-a_k\|_{L^2(Y_1)}. \end{align*} |
Hence, passing to the limit as ε → 0 in (73), with the help of the two-scale convergence (61) applied to the smooth test functions a_kψ_n, we obtain
\begin{align*} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\sigma^2_\varepsilon dx&\geq- C \|a-a_k\|_{L^2(Y_1)}+ 2\lim\limits_{\varepsilon\to 0} \int_{{\Omega}}a_k\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx \\ &\quad -\lim\limits_{\varepsilon\to 0}\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\psi_n^2\left (x, {\frac{x}{\varepsilon}}\right)dxdy\\ & = 2\int_{{\Omega}\times Y_2}a_k(y_2)\sigma_0(x, y)\psi_n(x, y)dxdy - C\|a-a_k\|_{L^2(Y_1)}\\ &\quad-\int_{{\Omega}\times Y_2}a(y_2)\psi_n^2(x, y)dxdy. \end{align*} |
Thanks to (70), we take the limit as k → ∞ in the previous inequality and obtain
\begin{align*} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\sigma^2_\varepsilon dx &\geq 2\int_{{\Omega}\times Y_2}a(y_2)\sigma_0(x, y)\psi_n(x, y)dxdy\notag\\ &\quad-\int_{{\Omega}\times Y_2}a(y_2)\psi_n^2(x, y)dxdy, \end{align*} |
so that, in view of (72), passing to the limit as n → ∞ we conclude that
\begin{align*} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\sigma^2_\varepsilon dx &\geq \int_{{\Omega}\times Y_2}a(y_2)\sigma_0^2(x, y)dxdy. \end{align*} |
This combined with 68 proves 69.
By (63), we already know that u₀ does not depend on y₁. Hence, by Jensen's inequality,
\begin{align} \int_{{\Omega}\times Y_2} a(y_2)&\left (\frac{\partial u_0}{\partial x_1}(x, y_2) +\frac{\partial \tilde{u}}{\partial y_1} (x, y)\right)^2dxdy\\ & = \int_{{\Omega}}dx\int_{Y_1}a(y_2)dy_2\int_{Y_1}\left (\frac{\partial u_0}{\partial x_1}(x, y_2) +\frac{\partial \tilde{u}}{\partial y_1} (x, y)\right)^2dy_1\\ &\geq\int_{{\Omega}}dx\int_{Y_1}a(y_2)dy_2\left (\int_{Y_1} \left [\frac{\partial u_0}{\partial x_1}(x, y_2) +\frac{\partial \tilde{u}}{\partial y_1} (x, y)\right] dy_1 \right)^2\\ & = \int_{{\Omega}}dx\int_{Y_1}a(y_2)\left (\frac{\partial u_0}{\partial x_1}\right)^2(x, y_2)dy_2. \end{align} |
This combined with 69 implies that
\begin{align} \liminf\limits_{\varepsilon\to 0}\int_{{\Omega}}a\left (\frac{x_2}{\varepsilon}\right)\left (\frac{\partial{u_\varepsilon}}{\partial x_1}\right)^2dx &\geq\int_{{\Omega}}dx\int_{Y_1}a(y_2)\left (\frac{\partial u_0}{\partial x_1}\right)^2(x, y_2)dy_2. \end{align} | (74) |
Now, we extend the functions in the variable x₁ by zero outside (0,1), so that the integrals below may be taken over x₁ ∈ ℝ. From (74) and the lower semicontinuity of the L²-norm with respect to two-scale convergence, we get
\begin{align} &\liminf\limits_{\varepsilon\to 0} {\mathscr{F}_\varepsilon}({u_\varepsilon}) \\ &\quad \geq \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}\biggl[a(y_2)\left (\frac{\partial u_0}{\partial x_1}\right)^2(x_1, x_2, y_2)+ |u_0|^2(x_1, x_2, y_2)\biggr]dx_1. \end{align} | (75) |
We minimize the right-hand side with respect to u₀, under the constraint (64) that the mean value of u₀ over Y₁ is the prescribed weak limit u. The minimizer satisfies
\begin{align*} \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}&\biggl[a(y_2)\frac{\partial u_0}{\partial x_1}(x_1, x_2, y_2)\frac{\partial v}{\partial x_1}(x_1, x_2, y_2)\notag\\ &\quad + u_0(x_1, x_2, y_2)v(x_1, x_2, y_2)\biggr]dx_1 = 0 \end{align*} |
for any admissible variation v with zero average with respect to y₂. This implies the existence of a multiplier b = b(x₁, x₂), independent of y₂, such that
\begin{equation} -a(y_2)\frac{\partial^2 u_0}{\partial x_1^2}(x_1, x_2 , y_2) + u_0(x_1, x_2, y_2) = b(x_1, x_2) \quad \text{in} \quad \mathscr{D}'({\mathbb{R}}) \end{equation} | (76) |
for a.e. (x₂, y₂) ∈ (0,1)×Y₁. Applying the Fourier transform F₂ with respect to x₁, we obtain
\begin{equation} {\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2) = \frac{{\mathcal{F}}_2(b)(\lambda_1, x_2)}{4\pi^2a(y_2)\lambda_1^2+1} \qquad\text{a.e.} \quad (\lambda_1, x_2, y_2)\in {\mathbb{R}}\times (0, 1)\times Y_1. \end{equation} | (77) |
Note that (77) proves in particular that the two-scale limit u₀ genuinely depends on the variable y₂ whenever b ≠ 0.
In light of the definition (54) of k̂₀, integrating (77) with respect to y₂ and using (64) yields
\begin{equation} {\mathcal{F}}_2(u)(\lambda_1, x_2) = \hat{k}_0(\lambda_1){\mathcal{F}}_2 (b)(\lambda_1, x_2)\qquad\text{a.e.} \quad (\lambda_1, x_2)\in {\mathbb{R}}\times (0, 1). \end{equation} | (78) |
By using Plancherel's identity with respect to the variable x₁, together with (77) and (78), we deduce from (75) that
\begin{align} \liminf\limits_{\varepsilon\to 0} {\mathscr{F}_\varepsilon}({u_\varepsilon})&\geq \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}} (4\pi^2a(y_2)\lambda_1^2 +1)|{\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2)|^2d\lambda_1\\ & = \int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)} |{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1, \end{align} |
which proves the Γ-liminf inequality (58).
Step 2 - Γ-limsup inequality.
For the proof of the Γ-limsup inequality we need the following lemma, whose proof is postponed to the end of this section.
Lemma 4.1. Let u ∈ C^∞_c(Ω) and define b = b(x₁, x₂) through its Fourier transform with respect to x₁:
\begin{equation} {\mathcal{F}}_2(b)(\lambda_1, x_2): = \frac{1}{\hat{k}_0(\lambda_1)} {\mathcal{F}}_2(u)(\lambda_1, x_2), \end{equation} | (79) |
where k̂₀ is the function defined by (54). Then b ∈ C([0,1]_{x₂}; L²(0,1)_{x₁}), and the problem
\begin{align} \label{stuliopb} \left\{ \begin{array}{l} -a(y_2)\frac{\partial^2 u_0}{\partial x_1^2}(x_1, x_2, y_2) + u_0(x_1, x_2, y_2) = b(x_1, x_2), & x_1\in (0, 1), \\ u_0(0, x_2, y_2) = u_0(1, x_2, y_2) = 0, & \end{array} \right. \end{align} | (80) |
with b given by (79).
Let u ∈ C^∞_c(Ω). In view of Lemma 4.1, there exists a unique solution
\begin{equation} u_0(x_1, x_2, y_2)\in C^1([0, 1]^2; L^\infty_{\rm per}(Y_1)) \end{equation} | (81)
to the problem (80). Taking the Fourier transform F₂ with respect to x₁ in (80), we get
\begin{equation} {\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2) = \frac{{\mathcal{F}}_2(u)(\lambda_1, x_2) }{(4\pi^2a(y_2)\lambda_1^2+1)\hat{k}_0(\lambda_1)}\qquad \text{for} \quad (\lambda_1, x_2, y_2)\in{\mathbb{R}}\times [0, 1]\times Y_1, \end{equation} | (82) |
where, integrating (82) with respect to y₂ and using (79), we recover
\begin{equation} u(x_1, x_2) = \int_{Y_1} u_0(x_1, x_2, y_2)dy_2 \qquad \text{for} \quad (x_1, x_2)\in\mathbb{R}\times (0, 1). \end{equation} | (83) |
Let u_ε be the sequence defined by
\begin{equation*} \label{defuesp} u_\varepsilon(x_1, x_2) : = u_0\left (x_1, x_2, {\frac{x_2}{\varepsilon}}\right). \end{equation*} |
Recall that rapidly oscillating periodic functions of the form v(x, x₂/ε) converge weakly in L²(Ω) to their mean value with respect to the periodic variable. In view of (83), we thus have
\begin{equation*} {u_\varepsilon} \rightharpoonup u \quad \text{weakly in} \quad L^2({\Omega}). \end{equation*} |
Due to 81, we can apply [1,Lemma 5.5] so that
\begin{align} \lim\limits_{\varepsilon\to 0}&{\mathscr{F}_\varepsilon}({u_\varepsilon}) = \lim\limits_{\varepsilon\to 0} \int_{{\Omega}}\left [a\left ({\frac{x_2}{\varepsilon}}\right)\left (\frac{\partial u_0 }{\partial x_1} \right)^2\left (x_1, x_2, \frac{x_2}{\varepsilon}\right) +\left |u_0\left (x_1, x_2, {\frac{x_2}{\varepsilon}}\right)\right|^2 \right]dx\\ & = \int_{{\Omega}}dx\int_{Y_1}\left [ a(y_2)\left (\frac{\partial u_0}{\partial x_1} \right)^2(x_1, x_2, y_2) +\left |u_0(x_1, x_2, y_2)\right|^2\right]dy_2\\\ & = \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}\left [ a(y_2)\left (\frac{\partial u_0}{\partial x_1} \right)^2(x_1, x_2, y_2)+\left |u_0(x_1, x_2, y_2)\right|^2\right]dx_1, \end{align} | (84) |
where the function u₀ has been extended by zero with respect to x₁ outside (0,1). By Plancherel's identity with respect to x₁, together with (82) and (79), the last integral can be rewritten as
\begin{align*} \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}&\left [ a(y_2)\left (\frac{\partial u_0}{\partial x_1} \right)^2(x_1, x_2, y_2)+\left |u_0(x_1, x_2, y_2)\right|^2\right]dx_1\notag\\ & = \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}(4\pi^2a(y_2)\lambda^2_1+1)|{\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2)|^2d\lambda_1\notag\\ & = \int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)}|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1. \label{limsup2} \end{align*} |
This together with (84) implies that, for any u ∈ C^∞_c(Ω),
\begin{equation*} \lim\limits_{\varepsilon\to 0} {\mathscr{F}_\varepsilon}({u_\varepsilon}) = \int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)}|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1, \end{equation*} |
which proves the Γ-limsup inequality when u ∈ C^∞_c(Ω).
Now, we extend the previous result to any u in the space H¹₀((0,1)_{x₁}; L²(0,1)_{x₂}) endowed with the norm
\begin{equation*} \|u\|_{H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2})} : = \left ( \left \|\frac{\partial u}{\partial x_1} \right\|^2_{L^2({\Omega})} + \|u\|^2_{L^2({\Omega})}\right)^{1/2}. \end{equation*} |
The associated metric d_{H¹₀} induces a convergence stronger than the weak convergence of L²(Ω): denoting by d_{B_n} a metric inducing the weak topology on a closed ball B_n of L²(Ω), we have
\begin{equation} d_{H^1_0}(u_k, u)\to 0 \quad \text{implies } \quad d_{B_n}(u_k, u)\to 0. \end{equation} | (85) |
Recall that the Γ-limsup is sequentially lower semicontinuous with respect to the weak topology of L²(Ω), and that, for u ∈ C^∞_c(Ω), we have already established
\begin{equation} \Gamma\text{-}\limsup\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(u)\leq \mathscr{F}(u). \end{equation} | (86) |
A direct computation of the integral (54) defining k̂₀ shows that
\begin{align*} \hat{k}_0(\lambda_1) & = \frac{c_\theta4\pi^2\lambda_1^2+1}{(4\pi^2\lambda_1^2+ 1)(c4\pi^2\lambda_1^2+ 1)}, \end{align*} |
which implies that
\begin{align} \frac{1}{\hat{k}_0(\lambda_1) } = \frac{c}{c_\theta}4\pi^2\lambda_1^2 + f(\lambda_1) + \alpha, \end{align} | (87) |
where α and f are defined by (57). In particular, there exists C > 0 such that
\begin{equation} \frac{1}{\hat{k}_0(\lambda_1)} \leq C(4\pi^2\lambda_1^2 + 1). \end{equation} | (88) |
This combined with the Plancherel identity yields
\begin{align} \mathscr{F}(u)&\leq C\int_{0}^{1}dx_2\int_{{\mathbb{R}}} (4\pi^2\lambda_1^2 + 1) |{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1\\ & = C\int_{0}^{1}dx_2\int_{{\mathbb{R}}}\left [ \left ( \frac{\partial u}{\partial x_1} \right)^2(x_1, x_2)+ |u(x_1, x_2)|^2\right]dx_1\\ & = C \|u\|^2_{H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2})}, \end{align} | (89) |
where C is a positive constant independent of u. Consequently, F is continuous with respect to the norm of H¹₀((0,1)_{x₁}; L²(0,1)_{x₂}).
Now, take u ∈ H¹₀((0,1)_{x₁}; L²(0,1)_{x₂}). By density, there exists a sequence u_k ∈ C^∞_c(Ω) such that
\begin{equation} d_{H^1_0} (u_k, u)\to 0\qquad\text{as} \quad k\to\infty. \end{equation} | (90) |
In particular, due to (85), we also have that u_k converges to u for the metrics d_{B_n} of the weak L²(Ω) topology. Hence, by the lower semicontinuity of the Γ-limsup, the inequality (86) for smooth functions, and the continuity of F provided by (89), we obtain
\begin{align*} \Gamma\text{-}\limsup\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(u)&\leq \liminf\limits_{k\to\infty} (\Gamma\text{-}\limsup\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(u_k) )\\ &\leq \liminf\limits_{k\to\infty}\mathscr{F}(u_k)\\ & = \mathscr{F}(u), \end{align*} |
which proves the Γ-limsup inequality (86) for any u ∈ H¹₀((0,1)_{x₁}; L²(0,1)_{x₂}).
Step 3 - Alternative expression of the Γ-limit.
The proof of the equality between the two expressions of the Γ-limit in (55) relies on the following lemma, which is proved at the end of this section.
Lemma 4.2. Let h ∈ L²(ℝ) and let u ∈ L¹(ℝ) ∩ L²(ℝ). Then h ∗ u ∈ L²(ℝ) and
\begin{equation} {\mathcal{F}}_2 (h\ast u) = {\mathcal{F}}_2(h){\mathcal{F}}_2(u)\quad \mathit{{a.e. ~in }}~~ \mathbb{R} . \end{equation} | (91) |
By applying Plancherel's identity with respect to x₁ and Lemma 4.2, we obtain
\begin{align} \int_{\mathbb{R}}&\left |\sqrt{\alpha}u(x_1, x_2) + (h\ast_1u)(x_1, x_2)\right|^2dx_1 \\ & = \int_{\mathbb{R}}\left |\sqrt{\alpha}{\mathcal{F}}_2(u)(\lambda_1, x_2) + {\mathcal{F}}_2(h\ast_1u)(\lambda_1, x_2)\right|^2d\lambda_1 \\ & = \int_{\mathbb{R}} \biggl[\alpha \left |{\mathcal{F}}_2 (u)(\lambda_1, x_2)\right|^2 + 2\sqrt{\alpha}{\rm Re}\left ({\mathcal{F}}_2(u)(\lambda_1, x_2) \overline{{\mathcal{F}}_2(h\ast_1u)}(\lambda_1, x_2)\right)\\ & \quad +\left |{\mathcal{F}}_2(h\ast_1u)(\lambda_1, x_2)\right|^2\biggr]d\lambda_1. \end{align} | (92) |
Recall that the Fourier transform of h, given by (56), is real. Hence, the previous expression becomes
\begin{align} \int_{\mathbb{R}} &\left [\alpha \biggl|{\mathcal{F}}_2 (u)(\lambda_1, x_2)\right|^2 + 2\sqrt{\alpha}{\rm Re}\left ({\mathcal{F}}_2(u)(\lambda_1, x_2) \overline{{\mathcal{F}}_2(h\ast_1u)}(\lambda_1, x_2)\right)\\ & \quad +\left |{\mathcal{F}}_2(h\ast_1u)(\lambda_1, x_2)\right|^2\biggr]d\lambda_1\\ & = \int_{\mathbb{R}} \left [\alpha+2\sqrt{\alpha}{\mathcal{F}}_2(h)(\lambda_1) + \left ({\mathcal{F}}_2(h)(\lambda_1)\right)^2\right]\left |{\mathcal{F}}_2(u)(\lambda_1, x_2)\right|^2d\lambda_1\\ & = \int_{\mathbb{R}} \left [\sqrt{\alpha}+ {\mathcal{F}}_2(h)(\lambda_1)\right]^2|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1\\ & = \int_{\mathbb{R}} \left [\alpha+ f(\lambda_1)\right]|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1. \end{align} | (93) |
On the other hand, by applying Plancherel's identity with respect to x₁ to the first term of (55), we have
\begin{equation*} \int_{{\mathbb{R}}}\frac{c}{c_\theta}4\pi^2\lambda_1^2|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1 = \int_{{\mathbb{R}}}\frac{c}{c_\theta}\left (\frac{\partial u}{\partial x_1}\right)^2(x_1, x_2)dx_1. \end{equation*} |
In view of the expansion of 1/k̂₀ given by (87), adding the two previous identities yields
\begin{align*} &\int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)} |{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1 \\ &\quad = \int_{0}^{1}dx_2 \int_{\mathbb{R}}\left \{\frac{c}{c_\theta}\left (\frac{\partial u}{\partial x_1}\right)^2(x_1, x_2)+[\sqrt{\alpha}u(x_1, x_2) + (h\ast_1u)(x_1, x_2)]^2\right\}dx_1, \end{align*} |
which concludes the proof.
Proof of Lemma 4.1. In view of 87, the equality 79 becomes
\begin{align} {\mathcal{F}}_2(b)(\lambda_1, x_2 ) & = \left (\frac{c}{c_\theta}4\pi^2\lambda_1^2+\alpha + f(\lambda_1)\right){\mathcal{F}}_2(u)(\lambda_1, x_2)\\ & = {\mathcal{F}}_2 \left (-\frac{c}{c_\theta}\frac{\partial^2 u}{\partial x_1^2} +\alpha u\right)(\lambda_1, x_2) + f(\lambda_1){\mathcal{F}}_2(u)(\lambda_1, x_2). \end{align} | (94) |
Since u(·, x₂) belongs to C^∞_c(0, 1) and since, by (57),
\begin{equation*} \label{fCL1} f(\lambda_1) = \frac{(c-1)^2\theta(\theta-1)}{c^2_\theta}\frac{1}{c_\theta4\pi^2\lambda_1^2 + 1} = O(\lambda_1^{-2})\in C_0(\mathbb{R})\cap L^1(\mathbb{R}), \end{equation*} |
the right-hand side of (94) belongs to L²(ℝ) with respect to λ₁, so that
\begin{equation*} {\mathcal{F}}_2(b)(\cdot, x_2)\in L^2({\mathbb{R}}). \end{equation*} |
Applying the Plancherel identity, we obtain that
\begin{equation} b(\cdot, x_2)\in L^2(0, 1). \end{equation} | (95) |
We show that b is continuous in x₂ with values in L²(0,1)_{x₁}, i.e.,
\begin{equation*} \lim\limits_{t\to 0} \|b(\cdot, x_2+t) - b(\cdot, x_2)\|_{L^2(0, 1)_{x_1}} = 0. \end{equation*} |
Thanks to Plancherel's identity, we infer from 79 that
\begin{align*} \|b(\cdot, x_2+t) -& b(\cdot, x_2)\|^2_{L^2(0, 1)_{x_1}} = \|{\mathcal{F}}_2(b)(\cdot, x_2+t) - {\mathcal{F}}_2(b)(\cdot, x_2)\|^2_{L^2({\mathbb{R}})_{\lambda_1}}\\ & = \int_{{\mathbb{R}}}\left |\frac{1}{\hat{k}_0(\lambda_1)}\left [{\mathcal{F}}_2(u)(\lambda_1, x_2+t) - {\mathcal{F}}_2(u)(\lambda_1, x_2)\right] \right|^2d\lambda_1. \end{align*} |
In view of 88 and thanks to the Plancherel identity, we obtain
\begin{align} \|b(\cdot, x_2+t) &- b(\cdot, x_2)\|^2_{L^2(0, 1)_{x_1}}\\ & \leq C^2 \int_{{\mathbb{R}}}\left |(4\pi^2\lambda_1^2+1) ({\mathcal{F}}_2(u)(\lambda_1, x_2+t) - {\mathcal{F}}_2(u)(\lambda_1, x_2)) \right|^2d\lambda_1\\ &\leq C^2\left \|{\mathcal{F}}_2 \left (\frac{\partial u}{\partial x_1}\right) (\cdot, x_2+t ) -{\mathcal{F}}_2\left (\frac{\partial u}{\partial x_1}\right) (\cdot, x_2 ) \right\|^2_{L^2(0, 1)_{\lambda_1}} \\ &\quad+ C^2 \|{\mathcal{F}}_2(u)(\cdot, x_2+t)-{\mathcal{F}}_2(u)(\cdot, x_2)\|^2_{L^2(0, 1)_{\lambda_1}} \\ & = C^2\left \|\frac{\partial u}{\partial x_1}(\cdot, x_2+t ) -\frac{\partial u}{\partial x_1} (\cdot, x_2 ) \right\|^2_{L^2(0, 1)_{x_1}} \\ &\quad +C^2 \|u(\cdot, x_2+t)-u(\cdot, x_2)\|^2_{L^2(0, 1)_{x_1}}. \end{align} |
By the Lebesgue dominated convergence theorem and since u is smooth with compact support, the right-hand side tends to zero as t → 0. Hence,
\begin{equation} b(x_1, x_2)\in C([0, 1]_{x_2}; \quad L^2(0, 1)_{x_1}). \end{equation} | (96) |
To conclude the proof, it remains to show the regularity (81) of u₀. On the one hand, for fixed (x₂, y₂), classical regularity theory for the two-point boundary value problem (80) yields
\begin{equation*} u_0(\cdot, x_2, y_2)\in C^1([0, 1])\qquad\text{a.e.} \quad (x_2, y_2)\in (0, 1)\times Y_1. \end{equation*} |
On the other hand, the solution u₀ of problem (80) can be represented by means of the associated Green function, namely
\begin{equation} u_0(x_1, x_2, y_2) : = \int_{0}^{1} G_{y_2}(x_1, s) b(s, x_2)ds, \end{equation} | (97) |
where the Green function G_{y₂} is defined by
\begin{equation*} G_{y_2}(x_1, s) : = \frac{1}{\sqrt{a(y_2)}\sinh\left (\frac{1}{\sqrt{a(y_2)}}\right) }\sinh\left (\frac{x_1\wedge s}{\sqrt{a(y_2)}}\right)\sinh\left (\frac{1-x_1\vee s}{\sqrt{a(y_2)}}\right). \end{equation*} |
This combined with 96 and 97 proves that
\begin{equation*} \label{u0isregular} u_0(x_1, x_2, y_2) \in C^1([0, 1]^2, L^\infty_{\rm per}(Y_1)), \end{equation*} |
which concludes the proof.
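As a side check of the representation (97), one can verify numerically that the quadrature of the Green function G_{y₂} against a constant right-hand side reproduces the explicit solution of the two-point problem (80). The values of a and b below are hypothetical:

```python
import numpy as np

# Check of the Green function representation (97): for constant a = a(y2)
# and right-hand side b ≡ 1, the solution of -a u'' + u = 1 on (0, 1) with
# u(0) = u(1) = 0 is u(x) = 1 - cosh((x - 1/2)/√a) / cosh(1/(2√a)).
a = 2.0                                   # hypothetical constant value of a(y2)
sa = np.sqrt(a)
ds = 1.0 / 20_000
s = np.arange(0.0, 1.0, ds) + ds / 2      # quadrature midpoints

def G(x1, s):
    lo, hi = np.minimum(x1, s), np.maximum(x1, s)
    return np.sinh(lo / sa) * np.sinh((1 - hi) / sa) / (sa * np.sinh(1 / sa))

for x1 in (0.25, 0.5, 0.8):
    u_green = np.sum(G(x1, s)) * ds       # ∫₀¹ G(x1, s) · 1 ds
    u_exact = 1 - np.cosh((x1 - 0.5) / sa) / np.cosh(0.5 / sa)
    print(x1, u_green, u_exact)           # the two values agree
```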
We now prove Lemma 4.2, which we used in Step 3 above.
Proof of Lemma 4.2. By the convolution property of the Fourier transform on L¹(ℝ), we have
\begin{equation} h\ast u = \overline{{\mathcal{F}}_2}({\mathcal{F}}_2(h)) \ast \overline{{\mathcal{F}}_2}({\mathcal{F}}_2(u)) = \overline{{\mathcal{F}}_1}({\mathcal{F}}_2(h){\mathcal{F}}_2(u)), \end{equation} | (98)
where \overline{F₁} denotes the inverse Fourier transform on L¹(ℝ). Indeed, since u ∈ L¹(ℝ) ∩ L²(ℝ),
\begin{equation*} {\mathcal{F}}_2(h){\mathcal{F}}_2(u) = {\mathcal{F}}_2(h){\mathcal{F}}_1(u)\in L^2(\mathbb{R})\cap L^1({\mathbb{R}}). \end{equation*} |
Since the inverse transforms \overline{F₁} and \overline{F₂} coincide on L¹(ℝ) ∩ L²(ℝ), we deduce that
\begin{equation*} h\ast u = \overline{{\mathcal{F}}_2}({\mathcal{F}}_2(h){\mathcal{F}}_2(u))\in L^2(\mathbb{R}), \end{equation*} |
which yields 91. This concludes the proof.
Remark 2. Thanks to the Beurling-Deny theory of Dirichlet forms [3], Mosco [15, Theorem 4.1.2] has proved that the Γ-limit F, for the strong topology of L²(Ω), of a sequence of Markovian forms is a Dirichlet form, which can be represented as
\begin{equation} F(u) = F_d(u) + \int_{{\Omega}}u^2k(dx) + \int_{({\Omega}\times{\Omega})\setminus{\rm diag}} (u(x)-u(y))^2j(dx, dy), \end{equation} | (99) |
where F_d is the diffusion part of F, k is the killing measure and j is the jumping measure. Recall that a form F is Markovian if, for any contraction T : ℝ → ℝ, i.e., satisfying
\begin{equation*} T(0) = 0, \qquad\text{and}\qquad \forall x, y\in{\mathbb{R}}, \quad |T(x)-T(y)|\leq |x-y|, \end{equation*} |
we have F(T∘u) ≤ F(u) for any u in the domain of F.
In the present context, we do not know if the Γ-limit F given by (55) is a Dirichlet form, since our Γ-convergence holds for the weak (rather than the strong) topology of L²(Ω). Note, however, that the convolution term in (55) is reminiscent of the nonlocal jumping term in (99).
We are going to give an explicit expression of the homogenized matrix A* defined by (7) for the rank-one laminates of Section 3.
Set
\begin{equation} a: = (1-\theta) A_1 e_1\cdot e_1 + \theta A_2e_1\cdot e_1, \end{equation} | (100) |
with A₁, A₂ the constant matrices of (27) and θ ∈ (0,1) the volume fraction of the first phase.
Proposition 5. Let A₁ and A₂ be two symmetric and non-negative matrices of ℝ^{d×d}, and let A* be the homogenized matrix defined by (7) for the laminate (27). If a > 0, then A* is given by the lamination formula
\begin{equation} A^\ast = \theta A_1+(1-\theta)A_2 -\frac{\theta(1-\theta)}{a} (A_2-A_1)e_1\otimes (A_2-A_1)e_1. \end{equation} | (101) |
If a = 0, then A* is the arithmetic mean of the two phases, namely
\begin{equation} A^\ast = \theta A_1+(1-\theta)A_2. \end{equation} | (102) |
Furthermore, if one of the following conditions is satisfied:
i) in two dimensions, A₁ = ξ⊗ξ with ξ·e₁ ≠ 0 and A₂ is positive definite;
ii) in three dimensions, A₁ and A₂ satisfy (39) and the vectors e₁, η₁, η₂ are linearly independent;
then A* is positive definite.
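As a sanity check, the lamination formula (101) can be evaluated numerically. The sketch below (with hypothetical data satisfying condition (i)) confirms that A* is positive definite in that case:

```python
import numpy as np

# Lamination formula (101) for a rank-one laminate in direction e1,
# evaluated on hypothetical data satisfying condition (i).
def homogenized(A1, A2, theta):
    e1 = np.eye(len(A1))[0]
    a = (1 - theta) * A1[0, 0] + theta * A2[0, 0]   # formula (100)
    d = (A2 - A1) @ e1
    return (theta * A1 + (1 - theta) * A2
            - theta * (1 - theta) / a * np.outer(d, d))

xi = np.array([1.0, 1.0])        # ξ·e1 = 1 ≠ 0, so condition (i) holds
A1 = np.outer(xi, xi)            # degenerate phase: Ker(A1) = Span(ξ^⊥)
A2 = np.eye(2)                   # positive definite phase
print(np.linalg.eigvalsh(homogenized(A1, A2, theta=0.4)))
# two strictly positive eigenvalues: A* is positive definite
```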
Remark 3. The condition a > 0 means exactly that e₁ does not belong to Ker(A₁) ∩ Ker(A₂); it holds in particular in the settings of Propositions 2 and 3.
Proof. Assume first that a > 0. Recall from (23) that
\begin{equation} \lim\limits_{\delta\to 0} A^\ast_\delta = A^\ast, \end{equation} | (103) |
where, for δ > 0, A*_δ is the homogenized matrix associated with the strictly elliptic laminate
\begin{equation*} A_\delta(y_1) = \chi(y_1) A_1^\delta + (1-\chi(y_1)) A_2 ^\delta\qquad\text{for} \quad y_1\in{\mathbb{R}}, \end{equation*} |
with A_i^δ := A_i + δI_d for i = 1, 2. For strictly elliptic rank-one laminates in the direction e₁, the homogenized matrix is given by the classical lamination formula
\begin{equation} A^\ast_\delta = \theta A_1^\delta+(1-\theta)A_2^\delta -\frac{\theta(1-\theta)}{(1-\theta)A_1^\delta e_1\cdot e_1 + \theta A_2^\delta e_1\cdot e_1 } (A_2^\delta-A_1^\delta)e_1\otimes (A_2^\delta-A_1^\delta)e_1. \end{equation} | (104) |
Since a > 0, we can pass to the limit as δ → 0 in (104), which yields formula (101).
We prove that the matrix A* defined by (101) is non-negative. By the Cauchy-Schwarz inequality, for any x ∈ ℝ^d we have
\begin{align} |(A_2-A_1)e_1\cdot x| &\leq |A_2e_1\cdot x|+ |A_1e_1\cdot x|\\ &\leq (A_2e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2} + (A_1e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}. \end{align} | (105) |
This combined with the definition (101) of A* yields
\begin{align} A^\ast x\cdot x& = \theta (A_1 x\cdot x) + (1-\theta)(A_2 x\cdot x) - \theta(1-\theta)a^{-1}\left |(A_2-A_1)e_1\cdot x\right|^2\\ &\geq \theta (A_1x\cdot x) +(1-\theta) (A_2 x\cdot x)\\ &\quad-\theta(1-\theta)a^{-1}[(A_2e_1\cdot e_1)^{1/2} (A_2x\cdot x)^{1/2}+ (A_1e_1\cdot e_1)^{1/2} (A_1x\cdot x)^{1/2} ]^2\\ & = a^{-1} [a \theta (A_1x\cdot x) +a (1-\theta) (A_2 x\cdot x) -\theta(1-\theta)(A_2e_1\cdot e_1)( A_2x\cdot x)\\ & \quad -\theta(1-\theta)(A_1e_1\cdot e_1)(A_1x\cdot x)\\ & \quad - 2\theta(1-\theta)(A_2e_1\cdot e_1)^{1/2}( A_2x\cdot x)^{1/2}(A_1e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}]. \end{align} | (106) |
In view of the definition (100) of a, we have
\begin{align} &a \theta( A_1x\cdot x) +a (1-\theta)( A_2 x\cdot x)\\& = \theta (1-\theta)(A_1 e_1\cdot e_1)(A_1 x\cdot x) + \theta^2(A_2 e_1\cdot e_1)(A_1 x\cdot x)\\ &\quad+ (1-\theta)^2 (A_1 e_1\cdot e_1)(A_2 x\cdot x)\\ &\quad + \theta(1-\theta)(A_2 e_1\cdot e_1)(A_2 x\cdot x). \end{align} |
Plugging this equality in 106, we deduce that
\begin{align} A^\ast x\cdot x &\geq a^{-1}[\theta^2(A_2 e_1\cdot e_1)(A_1 x\cdot x)+ (1-\theta)^2(A_1 e_1\cdot e_1)(A_2 x\cdot x)\\ & \quad -2\theta(1-\theta)(A_2e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}(A_1e_1\cdot e_1)^{1/2}( A_2x\cdot x)^{1/2}]\\ & = a^{-1}[\theta (A_2 e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2} -(1-\theta) (A_1 e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2} ]^2\geq 0, \end{align} | (107) |
which proves that A* is non-negative.
Now, assume that a = 0. Then e₁ ∈ Ker(A₁) ∩ Ker(A₂), so that
\begin{equation*} (A_2^\delta -A_1^\delta)e_1 = (A_2-A_1)e_1 = 0, \end{equation*} |
which implies that the lamination formula 104 becomes
\begin{equation*} A^\ast_\delta = \theta A_1^\delta + (1-\theta)A_2^\delta. \end{equation*} |
This combined with the convergence (103) yields the expression (102) for A*.
To conclude the proof, it remains to establish the positive definiteness of A* under condition (i) or (ii).
Case (i).
Assume condition (i) and let x ∈ ℝ² be such that A*x·x = 0. Then the inequalities in (105)-(107) below are equalities; in particular, the Cauchy-Schwarz inequality for A₂ becomes the equality
\begin{equation} |A_2e_1\cdot x| = (A_2e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2} = \|A^{1/2}_2e_1\|\|A^{1/2}_2x\|. \end{equation} | (108) |
Recall that the Cauchy-Schwarz inequality is an equality if and only if one of the vectors is a scalar multiple of the other. Since A₂ is positive definite, this combined with (108) leads to
\begin{equation} x = \alpha e_1 \qquad\text{for some} \quad \alpha\in{\mathbb{R}}. \end{equation} | (109) |
From the definition (101) of A*, a direct computation gives
\begin{equation} A^\ast e_1\cdot e_1 = \frac{1}{a}(A_2 e_1\cdot e_1) (\xi\cdot e_1)^2 > 0. \end{equation} | (110) |
Recall that ξ·e₁ ≠ 0 and a > 0, so that (110) holds. In view of (109), the equality A*x·x = α²A*e₁·e₁ = 0 then forces α = 0, i.e., x = 0. Hence, A* is positive definite in case (i).
Case (ii).
Assume condition (ii) and let x ∈ ℝ³ be such that A*x·x = 0. As in case (i), the Cauchy-Schwarz inequalities become the equalities
\begin{align} |A_1e_1\cdot x| & = (A_1e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}, \end{align} | (111) |
\begin{align} |A_2e_1\cdot x| & = (A_2e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2}. \end{align} | (112) |
Let p₁ and p₂ be the non-negative quadratic polynomials defined by
\begin{equation*} p_i(t) : = A_i(x+te_1)\cdot (x+te_1) \qquad\text{for} \quad i = 1, 2. \end{equation*} |
In view of (111), the discriminant of p₁ is zero, so that p₁ admits a real (double) root t₁, i.e.,
\begin{equation} p_1(t_1) = A_1(x+t_1e_1)\cdot (x+t_1e_1) = 0. \end{equation} | (113) |
Recall that A₁ is non-negative with Ker(A₁) = Span(η₁). Hence, (113) implies that x + t₁e₁ ∈ Span(η₁), so that
\begin{equation} x\in{\rm Span}(e_1, \eta_1). \end{equation} | (114) |
Similarly, recalling that Ker(A₂) = Span(η₂), we infer from (112) that
\begin{equation} x\in{\rm Span}(e_1, \eta_2). \end{equation} | (115) |
Since the vectors e₁, η₁, η₂ are linearly independent, the intersection of Span(e₁, η₁) and Span(e₁, η₂) reduces to Span(e₁), so that (114) and (115) yield
\begin{equation*} x = \alpha e_1 \qquad\text{for some} \quad \alpha\in{\mathbb{R}}. \end{equation*} |
In light of the definition (101) of A*, the same computation as in case (i) gives
\begin{equation*} A^\ast e_1\cdot e_1 = \frac{1}{a} (A_1e_1\cdot e_1)(A_2e_1\cdot e_1) > 0, \end{equation*} |
which implies that α = 0, hence x = 0. Therefore, A* is positive definite also in case (ii).
Note that when ξ·e₁ = 0, condition (i) fails and the positive definiteness of A* may be lost. Consider, for example, the case where
\begin{equation*} A_1 = e_2\otimes e_2 \quad\text{and}\quad A_2 = I_2. \end{equation*} |
Then ξ = e₂ satisfies ξ·e₁ = 0, and it is easy to check from (101) that A*e₁·e₁ = 0, so that A* is not positive definite.
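The degeneracy of this example can also be confirmed numerically with the lamination formula (101):

```python
import numpy as np

# Degenerate example: ξ = e2 gives ξ·e1 = 0, so condition (i) fails.
def homogenized(A1, A2, theta):             # lamination formula (101)
    e1 = np.eye(len(A1))[0]
    a = (1 - theta) * A1[0, 0] + theta * A2[0, 0]
    d = (A2 - A1) @ e1
    return (theta * A1 + (1 - theta) * A2
            - theta * (1 - theta) / a * np.outer(d, d))

A1 = np.array([[0.0, 0.0], [0.0, 1.0]])     # A1 = e2 ⊗ e2
A2 = np.eye(2)
print(np.linalg.eigvalsh(homogenized(A1, A2, theta=0.4)))
# eigenvalues [0., 1.] for every θ in (0,1): A* e1·e1 = 0, so A* is
# not positive definite
```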
This problem was pointed out to me by Marc Briane during my stay at Institut National des Sciences Appliquées de Rennes. I thank him for the countless fruitful discussions. My thanks also extend to Valeria Chiadò Piat for useful remarks. The author is also a member of the INdAM-GNAMPA project "Analisi variazionale di modelli non-locali nelle scienze applicate".
![]() |
[19] |
L. Zhang, Y. Yang, F. Wang, Projective synchronization of fractional-order memristive neural networks with switching jumps mismatch, Phys. A, 471 (2017), 402–415. https://doi.org/10.1016/j.physa.2016.12.030 doi: 10.1016/j.physa.2016.12.030
![]() |
[20] |
J. Zhang, Z. Lou, Y. Jia, W. Shao, Ground state of Kirchhoff type fractional Schrödinger equations with critical growth, J. Math. Anal. Appl., 462 (2018), 57–83. https://doi.org/10.1016/j.jmaa.2018.01.060 doi: 10.1016/j.jmaa.2018.01.060
![]() |
[21] |
B. Łupińska, E. Schmeidel, Analysis of some Katugampola fractional differential equations with fractional boundary conditions, Math. Biosci. Eng., 18 (2021), 1–19. https://doi.org/10.3934/mbe.2021359 doi: 10.3934/mbe.2021359
![]() |
[22] |
J. Zhang, J. Wang, Numerical analysis for Navier–Stokes equations with time fractional derivatives, Appl. Math. Comput., 30 (2022), 2747–2758. https://doi.org/10.1016/j.amc.2018.04.036 doi: 10.1016/j.amc.2018.04.036
![]() |
[23] |
F. Wang, Y. Yang, Quasi-synchronization for fractional-order delayed dynamical networks with heterogeneous nodes, Appl. Math. Comput., 339 (2018), 1–14. https://doi.org/10.1016/j.amc.2018.07.041 doi: 10.1016/j.amc.2018.07.041
![]() |
[24] | C. Huang, J. Cao, M. Xiao, A. Alsaedi, T. Hayat, Bifurcations in a delayed fractional complex-valued neural network, Appl. Math. Comput., 292 (2017), 210–227. https://doi.org/0.1016/j.amc.2018.07.041 |
[25] |
L. Zhang, Y. Yang, F. Wang, Synchronization analysis of fractional-order neural networks with time-varying delays via discontinuous neuron activations, Neurocomputing, 275 (2018), 40–49. https://doi.org/10.1016/j.neucom.2017.04.056 doi: 10.1016/j.neucom.2017.04.056
![]() |
[26] |
C. Hu, H. Jiang, Special functions-based fixed-time estimation and stabilization for dynamic systems, IEEE Trans. Syst. Man Cybern., 5 (2022), 3251–3262. https://doi.org/10.1109/TSMC.2021.3062206 doi: 10.1109/TSMC.2021.3062206
![]() |
[27] |
S. Yang, C. Hu, J. Yu, H. Jiang, Finite-time cluster synchronization in complex-variable networks with fractional-order and nonlinear coupling, Neural Networks, 135 (2021), 212–224. https://doi.org/10.1016/j.neunet.2020.12.015 doi: 10.1016/j.neunet.2020.12.015
![]() |
[28] |
J. Fei, L. Liu, Real-time nonlinear model predictive control of active power filter using self-feedback recurrent fuzzy neural network estimator, IEEE Trans. Ind. Electron., 69 (2022), 8366–8376. https://doi.org/10.1109/TIE.2021.3106007 doi: 10.1109/TIE.2021.3106007
![]() |
[29] |
W. Sun, L. Peng, Observer-based robust adaptive control for uncertain stochastic Hamiltonian systems with state and input delays, Nonlinear Anal. Modell. Control, 19 (2014), 626–645. https://doi.org/10.15388/NA.2014.4.8 doi: 10.15388/NA.2014.4.8
![]() |
[30] |
S. Liu, J. Wang, Y. Zhou, M. Feckan, Iterative learning control with pulse compensation for fractional differential systems, Math. Slovaca, 68 (2018), 563–574. https://doi.org/10.1515/ms-2017-0125 doi: 10.1515/ms-2017-0125
![]() |
[31] |
M. Sabzalian, A. Mohammadzadeh, S. Lin, W. Zhang, Robust fuzzy control for fractional-order systems with estimated fraction-order, Nonlinear Dyn., 98 (2019), 2375–2385. https://doi.org/10.1007/s11071-019-05217-w doi: 10.1007/s11071-019-05217-w
![]() |
[32] |
Z. Wang, J. Fei, Fractional-order terminal sliding-mode control using self-evolving recurrent chebyshev fuzzy neural network for mems gyroscope, IEEE Trans. Fuzzy Syst., 30 (2022), 2747– 2758. https://doi.org/10.1109/TFUZZ.2021.3094717 doi: 10.1109/TFUZZ.2021.3094717
![]() |
[33] |
Y. Cao, S. Wang, Z. Guo, T. Huang, S. Wen, Synchronization of memristive neural networks with leakage delay and parameters mismatch via event-triggered control, Neural Networks, 119 (2019), 178–189. https://doi.org/10.1016/j.neunet.2019.08.011 doi: 10.1016/j.neunet.2019.08.011
![]() |
[34] |
X. Li, X. Yang, J. Cao, Event-triggered impulsive control for nonlinear delay systems, Automatica, 117 (2020), 108981. https://doi.org/10.1016/j.automatica.2020.108981 doi: 10.1016/j.automatica.2020.108981
![]() |
[35] |
X. Li, D. Peng, J. Cao, Lyapunov stability for impulsive systems via event-triggered impulsive control, IEEE Trans. Autom. Control, 65 (2020), 4908–4913. https://doi.org/10.1109/TAC.2020.2964558 doi: 10.1109/TAC.2020.2964558
![]() |
[36] |
H. Li, X. Gao, R. Li, Exponential stability and sampled-data synchronization of delayed complex-valued memristive neural networks, Neural Process Lett., 51 (2020), 193–209. https://doi.org/10.1007/s11063-019-10082-0 doi: 10.1007/s11063-019-10082-0
![]() |
[37] |
H. Fan, K. Shi, Y. Zhao, Global \mu-synchronization for nonlinear complex networks with unbounded multiple time delays and uncertainties via impulsive control, Phys. A, 599 (2022), 127484. https://doi.org/10.1016/j.physa.2022.127484 doi: 10.1016/j.physa.2022.127484
![]() |
[38] |
X. Li, D. Ho, J. Cao, Finite-time stability and settling-time estimation of nonlinear impulsive systems, Automatica, 99 (2019), 361–368. https://doi.org/10.1016/j.automatica.2018.10.024 doi: 10.1016/j.automatica.2018.10.024
![]() |
[39] |
S. Yang, C. Hu, J. Yu, H. Jiang, Exponential stability of fractional-order impulsive control systems with applications in synchronization, IEEE Trans. Cybern., 50 (2020), 3157–3168. https://doi.org/10.1109/TCYB.2019.2906497 doi: 10.1109/TCYB.2019.2906497
![]() |
[40] |
H. Fan, K. Shi, Y. Zhao, Pinning impulsive cluster synchronization of uncertain complex dynamical networks with multiple time-varying delays and impulsive effects, Phys. A, 587 (2022), 126534. https://doi.org/10.1016/j.physa.2021.126534 doi: 10.1016/j.physa.2021.126534
![]() |
[41] |
F. Wang, Y. Yang, Intermittent synchronization of fractional order coupled nonlinear systems based on a new differential inequality, Phys. A, 512 (2018), 142–152. https://doi.org/10.1016/j.physa.2018.08.023 doi: 10.1016/j.physa.2018.08.023
![]() |
[42] |
L. Zhang, Y. Yang, F. Wang, Lag synchronization for fractional-order memristive neural networks via period intermittent control, Nonlinear Dyn., 89 (2017), 367–381. https://doi.org/10.1007/s11071-017-3459-4 doi: 10.1007/s11071-017-3459-4
![]() |
[43] |
C. Hu, H. He, H. Jiang, Synchronization of complex-valued dynamic networks with intermittently adaptive coupling: A direct error method, Automatica, 112 (2020), 108675. https://doi.org/10.1016/j.automatica.2019.108675 doi: 10.1016/j.automatica.2019.108675
![]() |