In this paper, for a bounded domain
\begin{equation}
F_\varepsilon(u) : = \left\{
\begin{array}{ll}
\int_{\Omega}\left\{ A\left(\frac{x}{\varepsilon}\right)\nabla u\cdot\nabla u+|u|^2\right\}dx, & \text{if } u\in H^1_0(\Omega),\\
\infty, & \text{if } u\in L^2(\Omega)\setminus H^1_0(\Omega).
\end{array}
\right.
\end{equation} | (1) |
The conductivity
\begin{equation}
\mathop{\rm ess\text{-}inf}\limits_{y\in Y_d}\left(\min\left\{A(y)\xi\cdot\xi \,:\, \xi\in\mathbb{R}^d,\ |\xi| = 1\right\}\right)\geq 0.
\end{equation} | (2) |
This condition holds true when the conductivity energy density has missing derivatives. This occurs, for example, when the quadratic form associated to
\begin{equation*}
A\xi\cdot\xi : = A'\xi'\cdot\xi'\qquad\text{for}\quad \xi = (\xi', \xi_d)\in\mathbb{R}^{d-1}\times\mathbb{R},
\end{equation*}
where
\begin{equation}
\mathop{\rm ess\text{-}inf}\limits_{y\in Y_d}\left(\min\left\{A(y)\xi\cdot\xi \,:\, \xi\in\mathbb{R}^d,\ |\xi| = 1\right\}\right) > 0,
\end{equation} | (3) |
combined with the boundedness implies a compactness result of the conductivity functional
\begin{equation*}
u\in H^1_0(\Omega)\mapsto \int_{\Omega} A\left(\frac{x}{\varepsilon}\right)\nabla u\cdot\nabla u\, dx
\end{equation*}
for the
\begin{equation*}
\int_{\Omega} A^\ast\nabla u\cdot\nabla u\, dx,
\end{equation*}
where the matrix-valued function
\begin{equation}
A^\ast\lambda\cdot\lambda : = \min\left\{\int_{Y_d} A(y)(\lambda+\nabla v(y))\cdot(\lambda+\nabla v(y))\, dy \,:\, v\in H^1_{\rm per}(Y_d)\right\}.
\end{equation} | (4) |
The
\begin{equation}
F_\varepsilon(u) = \int_{\Omega} f\left(\frac{x}{\varepsilon}, Du\right)dx, \qquad\text{for}\quad u\in W^{1,p}(\Omega;\mathbb{R}^m),
\end{equation} | (5) |
where
In this paper, we investigate the
\begin{equation*}
F_\varepsilon(u)\geq \int_{\Omega}|u|^2\, dx.
\end{equation*}
This estimate guarantees that
\begin{equation*}
V : = \left\{\int_{Y_d} A^{1/2}(y)\Phi(y)\, dy \,:\, \Phi\in L^2_{\rm per}(Y_d;\mathbb{R}^d)\ \text{with}\ {\rm div}\left(A^{1/2}(y)\Phi(y)\right) = 0\ \text{in}\ \mathscr{D}'(\mathbb{R}^d)\right\}
\end{equation*}
agrees with the space
the
\begin{equation}
F_0(u) : = \left\{
\begin{array}{ll}
\int_{\Omega}\left\{ A^\ast\nabla u\cdot\nabla u+|u|^2\right\}dx, & \text{if } u\in H^1_0(\Omega),\\
\infty, & \text{if } u\in L^2(\Omega)\setminus H^1_0(\Omega),
\end{array}
\right.
\end{equation} | (6) |
where the homogenized matrix
\begin{equation}
A^\ast\lambda\cdot\lambda : = \inf\left\{\int_{Y_d} A(y)(\lambda+\nabla v(y))\cdot(\lambda+\nabla v(y))\, dy \,:\, v\in H^1_{\rm per}(Y_d)\right\}.
\end{equation} | (7) |
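As a quick illustration of the cell formula 7 (not part of the original argument), the following minimal numerical sketch treats the classical one-dimensional scalar case A(y) = a(y) > 0, for which the infimum is known to coincide with the harmonic mean of a; the two-phase profile, volume fraction and grid size below are arbitrary choices.

```python
import numpy as np

# Illustrative 1D version of the cell formula (7): for a scalar 1-periodic a(y) > 0,
#   a* = min over periodic v of  int_0^1 a(y) (1 + v'(y))^2 dy,
# and the minimum is known to equal the harmonic mean of a.  All numbers below
# (volume fraction, contrast, grid size) are arbitrary choices for the illustration.
n = 400
h = 1.0 / n
y = (np.arange(n) + 0.5) * h
theta, c = 0.3, 5.0
a = np.where(y < theta, 1.0, c)            # hypothetical two-phase laminate profile

# Discrete periodic derivative: (Dv)_i = (v_{i+1} - v_i)/h, with v_n = v_0.
D = (np.roll(np.eye(n), -1, axis=0) - np.eye(n)) / h

# Minimizing sum_i a_i (1 + (Dv)_i)^2 h gives the normal equations D^T diag(a) D v = -D^T a
# (a singular but consistent system: constants lie in the kernel, so use lstsq).
K = D.T @ (a[:, None] * D)
v = np.linalg.lstsq(K, -D.T @ a, rcond=None)[0]
a_star_num = np.sum(a * (1.0 + D @ v) ** 2) * h

a_star_harm = 1.0 / np.mean(1.0 / a)       # harmonic mean of a
print(a_star_num, a_star_harm)             # the two values should agree closely
```

The discrete Euler-Lagrange equation forces the flux a(y)(1+v'(y)) to be constant on the cell, which is precisely why the harmonic mean appears in this one-dimensional case.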
We need to make assumption (H1) since for any sequence
In the
In the present scalar case, we enlighten assumptions (H1) and (H2) which are the key ingredients to obtain the general
The lack of assumption (H1) may induce a degenerate asymptotic behaviour of the functional
The paper is organized as follows. In Section 2, we prove a general
Notation.
● For
●
●
● Throughout, the variable
● We write
\begin{equation*}
u_\varepsilon \mathop{\rightharpoonup}\limits^{\rightharpoonup} u_0
\end{equation*}
with
●
\begin{equation*}
\mathcal{F}_1(f)(\lambda) : = \int_{\mathbb{R}} e^{-2\pi i\lambda x} f(x)\, dx.
\end{equation*}
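As a side note (an illustration, not from the original text), the sketch below checks numerically two standard consequences of this normalization: the Gaussian e^{-\pi x^2} is a fixed point of \mathcal{F}_1, and Plancherel's identity holds with constant 1; the grids and truncation ranges are arbitrary.

```python
import numpy as np

# With the convention F_1(f)(l) = int e^{-2 pi i l x} f(x) dx, the Gaussian
# f(x) = exp(-pi x^2) is its own transform and Plancherel holds with constant 1.
# (Both are standard facts; grids and truncation below are arbitrary choices.)
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

lam = np.linspace(-4.0, 4.0, 801)
dlam = lam[1] - lam[0]
Ff = np.array([np.sum(np.exp(-2j * np.pi * l * x) * f) * dx for l in lam])

print(np.max(np.abs(Ff - np.exp(-np.pi * lam**2))))       # transform error: tiny
print(np.sum(f**2) * dx, np.sum(np.abs(Ff)**2) * dlam)    # both ~ 2**(-1/2) (Plancherel)
```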
Definition 1.1. Let
i)
ii)
Such a sequence
Recall that the weak topology of
In this section, we will prove the main result of this paper. As previously announced, up to a subsequence, the sequence of functionals
Theorem 2.1. Let
\begin{equation}
V : = \left\{\int_{Y_d} A^{1/2}(y)\Phi(y)\, dy \,:\, \Phi\in L^2_{\rm per}(Y_d;\mathbb{R}^d)\ \text{with}\ {\rm div}\left(A^{1/2}(y)\Phi(y)\right) = 0\ \text{in}\ \mathscr{D}'(\mathbb{R}^d)\right\}
\end{equation} | (8) |
agrees with the space
Then,
\begin{equation*}
F_\varepsilon \ \stackrel{\Gamma(L^2)\text{-}w}{\rightharpoonup}\ F_0,
\end{equation*}
where
Proof. We split the proof into two steps which are an adaptation of [11,Theorem 3.3] using the sole assumptions (H1) and (H2) in the general setting of conductivity.
Step 1 -
Consider a sequence
lim infε→0Fε(uε)≥F0(u). | (9) |
If the lower limit is
Fε(uε)≤C. | (10) |
As
uε⇀⇀u0. | (11) |
Assumption (H1) ensures that
u0(x,y)=u(x)is independent ofy, | (12) |
where, according to the link between two-scale and weak
uε⇀uweakly inL2(Ω). |
Since all the components of the matrix
A(xε)∇uε⇀⇀σ0(x,y)withσ0∈L2(Ω×Yd;Rd), |
and also
A1/2(xε)∇uε⇀⇀Θ0(x,y)withΘ0∈L2(Ω×Yd;Rd). | (13) |
In particular
εA(xε)∇uε⇀⇀0. | (14) |
Consider
div(A1/2(y)Φ(y))=0inD′(Rd), | (15) |
or equivalently,
∫YdA1/2(y)Φ(y)⋅∇ψ(y)dy=0∀ψ∈H1per(Yd). |
Take also
∫ΩA1/2(xε)∇uε⋅Φ(xε)φ(x)dx=−∫ΩuεA1/2(xε)Φ(xε)⋅∇φ(x)dx. |
By using [1,Lemma 5.7],
∫Ω×YdΘ0(x,y)⋅Φ(y)φ(x)dxdy=−∫Ω×Ydu(x)A1/2(y)Φ(y)⋅∇φ(x)dxdy. | (16) |
We prove that the target function
N:=∫YdA1/2(y)Φ(y)dy, | (17) |
and varying
∫Ω×YdΘ0(x,y)⋅Φ(y)φ(x)dxdy=−∫Ωu(x)N⋅∇φ(x)dx |
Since the integral in the left-hand side is bounded by a constant times
∫Ωu(x)N⋅∇φ(x)dx=∫Ωg(x)φ(x)dx, |
which implies that
N⋅∇u∈L2(Ω). | (18) |
In view of assumption (H2),
u∈H1(Ω). | (19) |
This combined with equality 16 leads us to
∫Ω×YdΘ0(x,y)⋅Φ(y)φ(x)dxdy=∫Ω×YdA1/2(y)∇u(x)⋅Φ(y)φ(x)dxdy. | (20) |
By density, the last equality holds if the test functions
divy(A1/2(y)ψ(x,y))=0inD′(Rd), |
or equivalently,
∫Ω×Ydψ(x,y)⋅A1/2(y)∇yv(x,y)dxdy=0∀v∈L2(Ω;H1per(Yd)). |
The
K:={A1/2(y)∇yv(x,y):v∈L2(Ω;H1per(Yd))}. |
Thus, the equality 20 yields
Θ0(x,y)=A1/2(y)∇u(x)+S(x,y) |
for some
A1/2(y)∇yvn(x,y)→S(x,y)strongly inL2(Ω;L2per(Yd;Rd)). |
Due to the lower semi-continuity property of two-scale convergence (see [1,Proposition 1.6]), we get
lim infε→0‖A1/2(x/ε)∇uε‖2L2(Ω;Rd)≥‖Θ0‖2L2(Ω×Yd;Rd)=limn‖A1/2(y)(∇xu(x)+∇yvn)‖2L2(Ω×Yd;Rd). |
Then, by the weak
lim infε→0Fε(uε)≥limn∫Ω×YdA(y)(∇xu(x)+∇yvn(x,y))⋅(∇xu(x)+∇yvn(x,y))dxdy+∫Ω|u|2dx≥∫Ωinf{∫YdA(y)(∇xu(x)+∇yv(y))⋅(∇xu(x)+∇yv(y))dy:v∈H1per(Yd)}dx+∫Ω|u|2dx. |
Recalling the definition 7, we immediately conclude that
lim infε→0Fε(uε)≥∫Ω{A∗∇u⋅∇u+|u|2}dx, |
provided that
It remains to prove that the target function
∫Ω×YdΘ0(x,y)⋅Φ(y)φ(x)dxdy=∫ΩN⋅∇u(x)φ(x)dx−∫∂ΩN⋅ν(x)u(x)φ(x)dH, | (21) |
where
∫∂ΩN⋅ν(x)u(x)φ(x)dH=0, |
which leads to
N⋅ν(x0)u(x0)=0. | (22) |
In view of assumption (H2) and the arbitrariness of
u∈H10(Ω). |
This concludes the proof of the
Step 2 -
We use the same arguments as in [12,Theorem 2.4], which easily extend to the conductivity setting. We just give an idea of the proof, which is based on a perturbation argument. For
Aδ:=A+δId, |
where
Fδ(u):={∫Ω{A∗δ∇u⋅∇u+|u|2}dx,ifu∈H10(Ω),∞,ifu∈L2(Ω)∖H10(Ω), |
for the strong topology of
F0(u)≤Fδ(u)≤lim infεj→0∫Ω{Aδ(xεj)∇uεj⋅∇uεj+|uεj|2}dx≤lim infεj→0∫Ω{A(xεj)∇uεj⋅∇uεj+|uεj|2}dx+O(δ)=F0(u)+O(δ). |
It follows that
limδ→0A∗δ=A∗. | (23) |
Thanks to the Lebesgue dominated convergence theorem and in view of 23, we get that
Now, let us show that
limε→0Fε(¯uε)=F0(u), |
which proves the
The next proposition provides a characterization of Assumption
Proposition 1. Assumption
Ker(A∗)=V⊥. |
Proof. Consider
H1λ(Yd):={u∈H1loc(Rd):∇uisYd-periodic and∫Yd∇u(y)dy=λ}. |
Recall that
0=A∗λ⋅λ=inf{∫YdA(y)∇u(y)⋅∇u(y)dy:u∈H1λ(Yd)}. |
Then, there exists a sequence
limn→∞∫YdA(y)∇un(y)⋅∇un(y)dy=0, |
which implies that
A1/2∇un→0strongly inL2(Yd;Rd). | (24) |
Now, take
∫YdA1/2(y)∇un(y)⋅Φ(y)dy=∫Yd∇un(y)⋅A1/2(y)Φ(y)dy=λ⋅∫YdA1/2(y)Φ(y)dy+∫Yd∇vn(y)⋅A1/2(y)Φ(y)dy=λ⋅∫YdA1/2(y)Φ(y)dy, | (25) |
where the last equality is obtained by integrating by parts the second integral combined with the fact that
0=λ⋅(∫YdA1/2(y)Φ(y)dy), |
for any
Ker(A∗)⊆V⊥. |
Conversely, by 23 we already know that
limδ→0A∗δ=A∗, |
where
A∗δλ⋅λ=min{∫YdAδ(y)∇uδ(y)⋅∇uδ(y)dy:uδ∈H1λ(Yd)}. | (26) |
Let
A∗δλ⋅λ=∫YdAδ(y)∇¯uδ(y)⋅∇¯uδ(y)dy=∫Yd|A1/2δ(y)∇¯uδ(y)|2dy≤C, |
which implies that the sequence
Now, we show that
A(y)=R(y)D(y)RT(y)for a.e. y∈Yd, |
where
A1/2δ(y)=R(y)(D(y)+δId)1/2RT(y)for a.e. y∈Yd, |
which implies that
Now, passing to the limit as
div(A1/2δΦδ)=div(Aδ∇¯uδ)=0in D′(Rd), |
we have
div(A1/2Φ)=0in D′(Rd). |
This along with
A∗δλ=∫YdAδ(y)∇¯uδ(y)dy=∫YdA1/2δ(y)Φδ(y)dy. |
Hence, taking into account the strong convergence of
A∗λ=limδ→0A∗δλ=limδ→0∫YdA1/2δ(y)Φδ(y)dy=∫YdA1/2(y)Φ(y)dy, |
which implies that
A∗λ⋅λ=0, |
so that, since
V⊥⊆Ker(A∗), |
which concludes the proof.
In this section we provide a geometric setting for which assumptions (H1) and (H2) are fulfilled. We focus on a
Z1=(0,θ)×(0,1)d−1andZ2=(θ,1)×(0,1)d−1, |
where
Z#i:=Int(⋃k∈Zd(¯Zi+k))fori=1,2. |
Let
X1:=(0,θ)×Rd−1andX2:=(θ,1)×Rd−1, |
and we denote by
The anisotropic phases are described by two constant, symmetric and non-negative matrices
A(y1):=χ(y1)A1+(1−χ(y1))A2fory1∈R, | (27) |
where
We are interested in two-phase mixtures in
A1=ξ⊗ξandA2is positive definite, | (28) |
for some
Proposition 2. Let
From Theorem 2.1, we easily deduce that the energy
Proof. Firstly, let us prove assumption (H1). We adapt the proof of Step 1 of [11,Theorem 3.3] to two-dimensional laminates. In our context, the algebra involved is different due to the scalar setting.
Denote by
0=−limε→0ε∫ΩA(xε)∇uε⋅Φ(x,xε)dx=limε→0∫Ωuεdivy(A1Φ(x,y))(x,xε)dx=∫Ω×Z#1u10(x,y)divy(A1Φ(x,y))dxdy=−∫Ω×Z#1A1∇yu10(x,y)⋅Φ(x,y)dxdy, |
so that
A1∇yu10(x,y)≡0inΩ×Z#1. | (29) |
Similarly, taking
A2∇yu20(x,y)≡0inΩ×Z#2. | (30) |
Due to 29, in phase
∇yu10∈Ker(A1)=Span(ξ⊥), |
where
u10(x,y)=θ1(x,ξ⊥⋅y)a.e.(x,y)∈Ω×X1, | (31) |
for some function
u20(x,y)=θ2(x)a.e.(x,y)∈Ω×X2, | (32) |
for some function
(A1−A2)Φ⋅e1=0on∂Z#1. | (33) |
Note that condition 33 is necessary for
0=−limε→0ε∫ΩA(y)∇uε⋅Φφ(x,xε)dx=limε→0∫Ωuεdivy(A(y)Φφ(x,y))(x,xε)dx=∫Ω×Z1u10(x,y)divy(A1Φφ(x,y))dxdy+∫Ω×Z2θ2(x)divy(A2Φφ(x,y))dxdy. |
Take now
φ#(x,y):=∑k∈Z2φ(x,y+k) |
as new test function. Then, we obtain
0=∫Ω×Z1u10(x,y)divy(A1Φφ#(x,y))dxdy+∫Ω×Z2θ2(x)divy(A2Φφ#(x,y))dxdy=∑k∈Z2∫Ω×(Z1+k)u10(x,y)divy(A1Φφ(x,y))dxdy+∑k∈Z2∫Ω×(Z2+k)θ2(x)divy(A2Φφ(x,y))dxdy=∫Ω×Z#1u10(x,y)divy(A1Φφ(x,y))dxdy+∫Ω×Z#2θ2(x)divy(A2Φφ(x,y))dxdy. | (34) |
Recall that
Φ∈R2↦(A1e1⋅Φ,A2e1⋅Φ)∈R2 |
is one-to-one. Hence, for any
A1Φ⋅e1=A2Φ⋅e1=f. | (35) |
In view of the arbitrariness of
A1e1⋅Φ=A2e1⋅Φ=1on∂Z#1. | (36) |
Since
0=∫Ω×∂Zv0(x,y)φ(x,y)dxdHy, |
where we have set
v0(x,⋅)=0on∂Z. | (37) |
Recall that
θ1(x,ξ1y2)=θ2(x)on∂Z. |
Since
We prove assumption (H2). The proof is a variant of the Step 2 of [11,Theorem 3.4]. For arbitrary
A1/2(y)Φ(y):=χ(y1)αξ+(1−χ(y1))(αξ+βe2)for a.e.y∈R2. | (38) |
Such a vector field
N:=∫Y2A1/2(y)Φ(y)dy=αξ+(1−θ)βe2. |
Hence, due to
From Proposition 1, it immediately follows that the homogenized matrix
We are going to deal with three-dimensional laminates where both phases are degenerate. We assume that the symmetric and non-negative matrices
Ker(Ai)=Span(ηi)fori=1,2. | (39) |
The following proposition gives the algebraic conditions so that assumptions required by Theorem 2.1 are satisfied.
Proposition 3. Let
Invoking again Theorem 2.1, the energy
Proof. We first show assumption (H1). The proof is an adaptation of the first step of [11,Theorem 3.3]. Same arguments as in the proof of Proposition 2 show that
Ai∇yui0(x,y)≡0inΩ×Z#ifori=1,2. | (40) |
In view of 39 and 40, in phase
ui0(x,y)=θi(x,ηi⋅y)a.e.(x,y)∈Ω×Xi, | (41) |
for some function
0=−limε→0ε∫ΩA(y)∇uε⋅Φφ(x,xε)dx=∫Ω×Z1u10(x,y)divy(A1Φφ(x,y))dxdy+∫Ω×Z2u20(x,y)divy(A2Φφ(x,y))dxdy. | (42) |
Take
φ#(x,y):=∑k∈Z3φ(x,y+k) |
as test function in 42, we get
∫Ω×Z#1u10(x,y)divy(A1Φφ(x,y))dxdy+∫Ω×Z#2u20(x,y)divy(A2Φφ(x,y))dxdy=0. | (43) |
Since the vectors
Φ∈R3↦(A1e1⋅Φ,A2e1⋅Φ)∈R2 |
is surjective. In particular, for any
A1Φ⋅e1=A2Φ⋅e1=f. | (44) |
In view of the arbitrariness of
∫Ω×∂Z[u10(x,y)−u20(x,y)]φ(x,y)dxdHy=0, |
which implies that
u10(x,⋅)=u20(x,⋅)on∂Z. | (45) |
Fix
θ1(x,b1y2+c1y3)=θ2(x,b2y2+c2y3)on∂Z, | (46) |
with
z1:=b1y2+c1y3,z2:=b2y2+c2y3, |
is a change of variables so that 46 becomes
θ1(x,z1)=θ2(x,z2)a.e.z1,z2∈R. |
This implies that
It remains to prove assumption (H2). To this end, let
E:={(ξ1,ξ2)∈R3×R3:(ξ1−ξ2)⋅e1=0,ξ1⋅η1=0,ξ2⋅η2=0}. | (47) |
For
A1/2(y)Φ(y):=χ(y1)ξ1+(1−χ(y1))ξ2a.e.y∈R3. | (48) |
The existence of such a vector field
N:=∫Y3A1/2(y)Φ(y)dy=θξ1+(1−θ)ξ2. |
Note that
(ξ1,ξ2)∈R3×R3↦((ξ1−ξ2)⋅e1,ξ1⋅η1,ξ2⋅η2)∈R3. |
If we identify the pair
Mf:=(100−100a1b1c1000000a2b2c2), |
with
Now, let
(ξ1,ξ2)∈E↦θξ1+(1−θ)ξ2∈R3. |
Let us show that
(ξ1,θθ−1ξ1). | (49) |
In view of the definition of
(ξ1−θθ−1ξ1)⋅e1=0,ξ1⋅η1=0,θθ−1ξ1⋅η2=0. |
This combined with the linear independence of
ξ1∈{e1,η1,η2}⊥={0}. |
Hence,
Thanks to Proposition 1, the homogenized matrix
In this section we are going to construct a counter-example of two-dimensional laminates with two degenerate phases, where the lack of assumption (H1) provides an anomalous asymptotic behaviour of the functional
Let
\begin{equation*} A_1 = e_1\otimes e_1 \quad\text{and}\quad A_2 = ce_1\otimes e_1, \end{equation*} |
for some positive constant
\begin{equation} A(y_2) : = \chi(y_2)A_1+(1-\chi(y_2))A_2 = a(y_2)e_1\otimes e_1\qquad \text{for} \quad y_2\in{\mathbb{R}}, \end{equation} | (50) |
with
\begin{equation} a(y_2) : = \chi(y_2) + c(1-\chi(y_2))\geq 1. \end{equation} | (51) |
Thus, the energy
\mathscr{F}_{\varepsilon}(u)=\left\{\begin{array}{c} \int_{\Omega}\left[a\left(\frac{x_{2}}{\varepsilon}\right)\left(\frac{\partial u}{\partial x_{1}}\right)^{2}+|u|^{2}\right] d x, \quad \text { if } u \in H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right), \\ \infty, \quad \text { if } u \in L^{2}(\Omega) \backslash H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right). \end{array}\right. | (52) |
We denote by
\begin{equation*} (f\ast_1g)(x_1, x_2) = \int_{\mathbb{R}}f(x_1-t, x_2)g(t, x_2)dt. \end{equation*} |
Throughout this section,
\begin{equation} c_\theta: = c\theta+1-\theta, \end{equation} | (53) |
where
Proposition 4. Let
\begin{equation*} \label{c12} \mathscr{F} (u) : = \left\{ \begin{array}{l} &\int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)}|{\mathcal{F}}_2 (u)(\lambda_1, x_2)|^2d\lambda_1, \quad\mathit{{if}}~~ u\in H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) , \\ &\\ & \quad \infty, \quad \mathit{{if}}~~ u\in L^2({\Omega})\setminus H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2}) , \end{array} \right. \end{equation*} |
where
\begin{equation} \hat{k}_0(\lambda_1): = \int_{0}^{1}\frac{1}{4\pi^2a(y_2)\lambda_1^2 +1}dy_2. \end{equation} | (54) |
The
\mathscr{F}(u):=\left\{\begin{array}{c} \int_{0}^{1} d x_{2} \int_{\mathbb{R}}\left\{\frac{c}{c_{\theta}}\left(\frac{\partial u}{\partial x_{1}}\right)^{2}\left(x_{1}, x_{2}\right)+\left[\sqrt{\alpha} u\left(x_{1}, x_{2}\right)+\left(h *_{1} u\right)\left(x_{1}, x_{2}\right)\right]^{2}\right\} d x_{1}, \\ \text { if } u \in H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right), \\ \infty, \quad \text { if } u \in L^{2}(\Omega) \backslash H_{0}^{1}\left((0,1)_{x_{1}} ; L^{2}(0,1)_{x_{2}}\right), \end{array}\right. | (55) |
where
\begin{equation} {\mathcal{F}}_2(h)(\lambda_1) : = \sqrt{\alpha +f(\lambda_1)}-\sqrt{\alpha}, \end{equation} | (56) |
where
\begin{equation} \alpha: = \frac{c^2\theta +1-\theta}{c_\theta^2} > 0, \qquad f(\lambda_1): = \frac{(c-1)^2\theta(\theta-1)}{c^2_\theta}\frac{1}{c_\theta4\pi^2\lambda_1^2 + 1}. \end{equation} | (57) |
Moreover, any two-scale limit
Remark 1. From 57, we can deduce that
\begin{equation*} \alpha +f(\lambda_1) = {1\over c^2_\theta(c_\theta4\pi^2\lambda_1^2 + 1)}\left \{(c^2\theta+1-\theta)c_\theta4\pi^2\lambda_1^2+ [(c-1)\theta+1]^2 \right\} > 0, \end{equation*} |
for any
Proof. We divide the proof into three steps.
Step 1 -
Consider a sequence
\begin{equation} \liminf\limits_{\varepsilon\to 0 } {\mathscr{F}_\varepsilon} ({u_\varepsilon})\geq \mathscr{F}(u). \end{equation} | (58) |
If the lower limit is
\begin{equation} {\mathscr{F}_\varepsilon}({u_\varepsilon})\leq C. \end{equation} | (59) |
It follows that the sequence
\begin{equation} {u_\varepsilon} \mathop \rightharpoonup \limits^ \rightharpoonup u_0. \end{equation} | (60) |
In view of 51, we know that
\begin{equation} \frac{\partial {u_\varepsilon}}{\partial x_1} \mathop \rightharpoonup \limits^ \rightharpoonup \sigma_0(x, y) \qquad\text{with} \quad \sigma_0\in L^2({\Omega}\times Y_2). \end{equation} | (61) |
In particular,
\begin{equation} \varepsilon\frac{\partial {u_\varepsilon}}{\partial x_1} \mathop \rightharpoonup \limits^ \rightharpoonup 0. \end{equation} | (62) |
Take
\begin{equation*} \varepsilon\int_{{\Omega}}\frac{\partial {u_\varepsilon}}{\partial x_1}\varphi\left (x, {\frac{x}{\varepsilon}}\right)dx = - \int_{{\Omega}}{u_\varepsilon}\left (\varepsilon\frac{\partial \varphi}{\partial x_1}\left (x, {\frac{x}{\varepsilon}}\right)+\frac{\partial \varphi}{\partial y_1}\left (x, {\frac{x}{\varepsilon}}\right) \right)dx. \end{equation*} |
Passing to the limit in both terms with the help of 60 and 62 leads to
\begin{equation*} 0 = - \int_{{\Omega}\times Y_2}u_0(x, y)\frac{\partial \varphi}{\partial y_1}(x, y)dxdy, \end{equation*} |
which implies that
\begin{equation} u_0(x, y) \quad\text{is independent of} \quad y_1. \end{equation} | (63) |
Due to the link between two-scale and weak
\begin{equation} {u_\varepsilon}\rightharpoonup u(x) = \int_{Y_1} u_0(x, y_2)dy_2\qquad\text{weakly in }~~ L^2({\Omega}) . \end{equation} | (64) |
Now consider
\begin{equation} \frac{\partial\varphi}{\partial y_1} (x, y) = 0. \end{equation} | (65) |
Since
\begin{align*} \int_{{\Omega}} \frac{\partial{u_\varepsilon}}{\partial x_1}\varphi\left (x, y \right)dx = -\int_{{\Omega}} {u_\varepsilon}\frac{\partial\varphi}{\partial x_1}\left (x, y\right)dx. \end{align*} |
In view of the convergences 60 and 61 together with 63, we can pass to the two-scale limit in the previous expression and we obtain
\begin{align} \int_{{\Omega}\times Y_2}\sigma_0(x, y)\varphi(x, y)dxdy & = -\int_{{\Omega}\times Y_2} u_0(x, y_2) \frac{\partial \varphi}{\partial x_1}(x, y )dxdy. \end{align} | (66) |
Varying
\begin{equation*} \int_{{\Omega}\times Y_2}u_0(x, y_2) \frac{\partial \varphi}{\partial x_1}(x, y)dxdy = \int_{{\Omega}\times Y_2} g(x, y)\varphi(x, y) dxdy, \end{equation*} |
which yields
\begin{equation} \frac{\partial u_0}{\partial x_1}(x, y_2) \in L^2({\Omega}\times Y_1). \end{equation} | (67) |
Then, an integration by parts with respect to
\begin{align} \int_{{\Omega}\times Y_2}&\sigma_0(x, y)\varphi(x, y)dxdy = \int_{{\Omega}\times Y_2}\frac{\partial u_0}{\partial x_1}(x, y_2)\varphi(x, y)dxdy \\ &\quad- \int_{0}^{1}dx_2\int_{Y_2}\left [u_0(1, x_2, y_2)\varphi(1, x_2, y) -u_0(0, x_2, y_2)\varphi(0, x_2, y) \right]dy. \end{align} |
Since for any
\begin{align} \int_{0}^{1}dx_2\int_{Y_2}\left [u_0(1, x_2, y_2)\varphi(1, x_2, y) -u_0(0, x_2, y_2)\varphi(0, x_2, y) \right]dy = 0, \end{align} |
which implies that
\begin{equation*} u_0(1, x_2, y_2) = u_0(0, x_2, y_2) = 0 \qquad \text{a.e.} \quad (x_2, y_2)\in (0, 1)\times Y_1. \end{equation*} |
This combined with 67 yields
\begin{equation*} u_0(x_1, x_2, y_2)\in H^1_0((0, 1)_{x_1}; L^2((0, 1)_{x_2}\times Y_1)). \end{equation*} |
Finally, an integration by parts with respect to
\begin{align*} \int_{{\Omega}\times Y_2}\left ( \sigma_0(x, y)-\frac{\partial u_0}{\partial x_1}(x, y_2)\right)\varphi(x, y)dxdy = 0. \end{align*} |
Since the orthogonal complement of divergence-free functions consists of gradients, we deduce from the previous equality that there exists
\begin{equation} \sigma_0(x, y) = \frac{\partial u_0}{\partial x_1}(x, y_2)+ \frac{\partial \tilde{u}}{\partial y_1}(x, y). \end{equation} | (68) |
Now, we show that
\begin{equation} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}}a\left (\frac{x_2}{\varepsilon}\right)\left (\frac{\partial {u_\varepsilon}}{\partial x_1} \right)^2dx \geq \int_{{\Omega}\times Y_2}a(y_2)\left ( \frac{\partial u_0}{\partial x_1}(x, y_2)+ \frac{\partial \tilde{u}}{\partial y_1}(x, y) \right)^2dxdy. \end{equation} | (69) |
To this end, set
\begin{equation*} \sigma_\varepsilon : = \frac{\partial {u_\varepsilon}}{\partial x_1}. \end{equation*} |
Since
\begin{equation} \|a-a_k\|_{L^2(Y_1)} \to 0 \quad\text{as}~~ k\to\infty , \end{equation} | (70) |
hence, by periodicity, we also have
\begin{equation} \left \|a\left ({\frac{x_2}{\varepsilon}}\right) - a_k\left ({\frac{x_2}{\varepsilon}}\right) \right\|_{L^2({\Omega})} \leq C \|a-a_k\|_{L^2(Y_1)}, \end{equation} | (71) |
for some positive constant
\begin{equation} \psi_n (x, y) \to \sigma_0(x, y)\qquad \text{strongly in }~~ L^2({\Omega}\times Y_2) . \end{equation} | (72) |
From the inequality
\begin{equation*} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\left (\sigma_{\varepsilon} - \psi_n\left (x, {\frac{x}{\varepsilon}}\right)\right) ^ 2 dx\geq 0, \end{equation*} |
we get
\begin{align} \int_{{\Omega}}& a\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon^2dx\geq 2\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx -\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\psi_n^2\left (x, {\frac{x}{\varepsilon}}\right)dx\\ & = 2\int_{{\Omega}}\left (a\left ({\frac{x_2}{\varepsilon}}\right)-a_k\left ({\frac{x_2}{\varepsilon}}\right) \right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx + 2\int_{{\Omega}}a_k\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx\\ &\quad-\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\psi_n^2\left (x, {\frac{x}{\varepsilon}}\right)dx. \end{align} | (73) |
In view of 71, the first integral on the right-hand side of 73 can be estimated as
\begin{align*} \left |\int_{{\Omega}}\left (a\left ({\frac{x_2}{\varepsilon}}\right)-a_k\left ({\frac{x_2}{\varepsilon}}\right) \right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx \right| &\leq C \|a-a_k\|_{L^2(Y_1)}\|\psi_n\|_{L^\infty({\Omega})}\|\sigma_\varepsilon\|_{L^2({\Omega})}\\ &\leq C \|a-a_k\|_{L^2(Y_1)}. \end{align*} |
Hence, passing to the limit as
\begin{align*} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\sigma^2_\varepsilon dx&\geq- C \|a-a_k\|_{L^2(Y_1)}+ 2\lim\limits_{\varepsilon\to 0} \int_{{\Omega}}a_k\left ({\frac{x_2}{\varepsilon}}\right)\sigma_\varepsilon\psi_n\left (x, {\frac{x}{\varepsilon}}\right)dx \\ &\quad -\lim\limits_{\varepsilon\to 0}\int_{{\Omega}}a\left ({\frac{x_2}{\varepsilon}}\right)\psi_n^2\left (x, {\frac{x}{\varepsilon}}\right)dxdy\\ & = 2\int_{{\Omega}\times Y_2}a_k(y_2)\sigma_0(x, y)\psi_n(x, y)dxdy - C\|a-a_k\|_{L^2(Y_1)}\\ &\quad-\int_{{\Omega}\times Y_2}a(y_2)\psi_n^2(x, y)dxdy. \end{align*} |
Thanks to 70, we take the limit as
\begin{align*} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\sigma^2_\varepsilon dx &\geq 2\int_{{\Omega}\times Y_2}a(y_2)\sigma_0(x, y)\psi_n(x, y)dxdy\notag\\ &\quad-\int_{{\Omega}\times Y_2}a(y_2)\psi_n^2(x, y)dxdy, \end{align*} |
so that in view of 72, passing to the limit as
\begin{align*} \liminf\limits_{\varepsilon\to 0} \int_{{\Omega}} a\left ({\frac{x_2}{\varepsilon}}\right)\sigma^2_\varepsilon dx &\geq \int_{{\Omega}\times Y_2}a(y_2)\sigma_0^2(x, y)dxdy. \end{align*} |
This combined with 68 proves 69.
By 63, we already know that
\begin{align} \int_{{\Omega}\times Y_2} a(y_2)&\left (\frac{\partial u_0}{\partial x_1}(x, y_2) +\frac{\partial \tilde{u}}{\partial y_1} (x, y)\right)^2dxdy\\ & = \int_{{\Omega}}dx\int_{Y_1}a(y_2)dy_2\int_{Y_1}\left (\frac{\partial u_0}{\partial x_1}(x, y_2) +\frac{\partial \tilde{u}}{\partial y_1} (x, y)\right)^2dy_1\\ &\geq\int_{{\Omega}}dx\int_{Y_1}a(y_2)dy_2\left (\int_{Y_1} \left [\frac{\partial u_0}{\partial x_1}(x, y_2) +\frac{\partial \tilde{u}}{\partial y_1} (x, y)\right] dy_1 \right)^2\\ & = \int_{{\Omega}}dx\int_{Y_1}a(y_2)\left (\frac{\partial u_0}{\partial x_1}\right)^2(x, y_2)dy_2. \end{align} |
This combined with 69 implies that
\begin{align} \liminf\limits_{\varepsilon\to 0}\int_{{\Omega}}a\left (\frac{x_2}{\varepsilon}\right)\left (\frac{\partial{u_\varepsilon}}{\partial x_1}\right)^2dx &\geq\int_{{\Omega}}dx\int_{Y_1}a(y_2)\left (\frac{\partial u_0}{\partial x_1}\right)^2(x, y_2)dy_2. \end{align} | (74) |
Now, we extend the functions in
\begin{align} &\liminf\limits_{\varepsilon\to 0} {\mathscr{F}_\varepsilon}({u_\varepsilon}) \\ &\quad \geq \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}\biggl[a(y_2)\left (\frac{\partial u_0}{\partial x_1}\right)^2(x_1, x_2, y_2)+ |u_0|^2(x_1, x_2, y_2)\biggr]dx_1. \end{align} | (75) |
We minimize the right-hand side with respect to
\begin{align*} \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}&\biggl[a(y_2)\frac{\partial u_0}{\partial x_1}(x_1, x_2, y_2)\frac{\partial v}{\partial x_1}(x_1, x_2, y_2)\notag\\ &\quad + u_0(x_1, x_2, y_2)v(x_1, x_2, y_2)\biggr]dx_1 = 0 \end{align*} |
for any
\begin{equation} -a(y_2)\frac{\partial^2 u_0}{\partial x_1^2}(x_1, x_2 , y_2) + u_0(x_1, x_2, y_2) = b(x_1, x_2) \quad \text{in} \quad \mathscr{D}'({\mathbb{R}}) \end{equation} | (76) |
for a.e.
\begin{equation} {\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2) = \frac{{\mathcal{F}}_2(b)(\lambda_1, x_2)}{4\pi^2a(y_2)\lambda_1^2+1} \qquad\text{a.e.} \quad (\lambda_1, x_2, y_2)\in {\mathbb{R}}\times (0, 1)\times Y_1. \end{equation} | (77) |
Note that 77 proves in particular that the two-scale limit
In light of the definition 54 of
\begin{equation} {\mathcal{F}}_2(u)(\lambda_1, x_2) = \hat{k}_0(\lambda_1){\mathcal{F}}_2 (b)(\lambda_1, x_2)\qquad\text{a.e.} \quad (\lambda_1, x_2)\in {\mathbb{R}}\times (0, 1). \end{equation} | (78) |
By using Plancherel's identity with respect to the variable
\begin{align} \liminf\limits_{\varepsilon\to 0} {\mathscr{F}_\varepsilon}({u_\varepsilon})&\geq \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}} (4\pi^2a(y_2)\lambda_1^2 +1)|{\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2)|^2d\lambda_1\\ & = \int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)} |{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1, \end{align} |
which proves the
Step 2-
For the proof of the
Lemma 4.1. Let
\begin{equation} {\mathcal{F}}_2(b)(\lambda_1, x_2): = \frac{1}{\hat{k}_0(\lambda_1)} {\mathcal{F}}_2(u)(\lambda_1, x_2), \end{equation} | (79) |
where
\begin{align} \label{stuliopb} \left\{ \begin{array}{l} -a(y_2)\frac{\partial^2 u_0}{\partial x_1^2}(x_1, x_2, y_2) + u_0(x_1, x_2, y_2) = b(x_1, x_2), & x_1\in (0, 1), \\ u_0(0, x_2, y_2) = u_0(1, x_2, y_2) = 0, & \end{array} \right. \end{align} | (80) |
with
Let
\begin{equation} u_0(x_1, x_2, y_2)\in C^1([0, 1]^2; \quad L^\infty_{\rm per}(Y_1)) \end{equation} | (81) |
to the problem 80. Taking the Fourier transform
\begin{equation} {\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2) = \frac{{\mathcal{F}}_2(u)(\lambda_1, x_2) }{(4\pi^2a(y_2)\lambda_1^2+1)\hat{k}_0(\lambda_1)}\qquad \text{for} \quad (\lambda_1, x_2, y_2)\in{\mathbb{R}}\times [0, 1]\times Y_1, \end{equation} | (82) |
where
\begin{equation} u(x_1, x_2) = \int_{Y_1} u_0(x_1, x_2, y_2)dy_2 \qquad \text{for} \quad (x_1, x_2)\in\mathbb{R}\times (0, 1). \end{equation} | (83) |
Let
\begin{equation*} \label{defuesp} u_\varepsilon(x_1, x_2) : = u_0\left (x_1, x_2, {\frac{x_2}{\varepsilon}}\right). \end{equation*} |
Recall that rapidly oscillating
\begin{equation*} {u_\varepsilon} \rightharpoonup u \quad \text{weakly in} \quad L^2({\Omega}). \end{equation*} |
Due to 81, we can apply [1,Lemma 5.5] so that
\begin{align} \lim\limits_{\varepsilon\to 0}&{\mathscr{F}_\varepsilon}({u_\varepsilon}) = \lim\limits_{\varepsilon\to 0} \int_{{\Omega}}\left [a\left ({\frac{x_2}{\varepsilon}}\right)\left (\frac{\partial u_0 }{\partial x_1} \right)^2\left (x_1, x_2, \frac{x_2}{\varepsilon}\right) +\left |u_0\left (x_1, x_2, {\frac{x_2}{\varepsilon}}\right)\right|^2 \right]dx\\ & = \int_{{\Omega}}dx\int_{Y_1}\left [ a(y_2)\left (\frac{\partial u_0}{\partial x_1} \right)^2(x_1, x_2, y_2) +\left |u_0(x_1, x_2, y_2)\right|^2\right]dy_2\\\ & = \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}\left [ a(y_2)\left (\frac{\partial u_0}{\partial x_1} \right)^2(x_1, x_2, y_2)+\left |u_0(x_1, x_2, y_2)\right|^2\right]dx_1, \end{align} | (84) |
where the function
\begin{align*} \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}&\left [ a(y_2)\left (\frac{\partial u_0}{\partial x_1} \right)^2(x_1, x_2, y_2)+\left |u_0(x_1, x_2, y_2)\right|^2\right]dx_1\notag\\ & = \int_{0}^{1}dx_2\int_{Y_1}dy_2\int_{\mathbb{R}}(4\pi^2a(y_2)\lambda^2_1+1)|{\mathcal{F}}_2(u_0)(\lambda_1, x_2, y_2)|^2d\lambda_1\notag\\ & = \int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)}|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1. \label{limsup2} \end{align*} |
This together with 84 implies that, for
\begin{equation*} \lim\limits_{\varepsilon\to 0} {\mathscr{F}_\varepsilon}({u_\varepsilon}) = \int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)}|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1, \end{equation*} |
which proves the
Now, we extend the previous result to any
\begin{equation*} \|u\|_{H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2})} : = \left ( \left \|\frac{\partial u}{\partial x_1} \right\|^2_{L^2({\Omega})} + \|u\|^2_{L^2({\Omega})}\right)^{1/2}. \end{equation*} |
The associated metric
\begin{equation} d_{H^1_0}(u_k, u)\to 0 \quad \text{implies } \quad d_{B_n}(u_k, u)\to 0. \end{equation} | (85) |
Recall that
\begin{equation} \Gamma\text{-}\limsup\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(u)\leq \mathscr{F}(u). \end{equation} | (86) |
A direct computation of
\begin{align*} \hat{k}_0(\lambda_1) & = \frac{c_\theta4\pi^2\lambda_1^2+1}{(4\pi^2\lambda_1^2+ 1)(c4\pi^2\lambda_1^2+ 1)}, \end{align*} |
which implies that
\begin{align} \frac{1}{\hat{k}_0(\lambda_1) } = \frac{c}{c_\theta}4\pi^2\lambda_1^2 + f(\lambda_1) + \alpha, \end{align} | (87) |
where
\begin{equation} \frac{1}{\hat{k}_0(\lambda_1)} \leq C(4\pi^2\lambda_1^2 + 1). \end{equation} | (88) |
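Before using this bound, here is a hedged numerical sanity check (not part of the original argument) of the decomposition 87, assuming the two-phase profile where a equals 1 on a set of measure \theta and c elsewhere, so that 54 reduces to an explicit sum of two fractions; the values of \theta and c below are arbitrary.

```python
import numpy as np

# Sanity check of the decomposition (87), assuming a(y2) = 1 on a set of measure theta
# and a(y2) = c elsewhere, so that (54) reads
#   k0(l) = theta/(4 pi^2 l^2 + 1) + (1 - theta)/(4 pi^2 c l^2 + 1).
# theta and c are arbitrary illustrative values.
theta, c = 0.3, 5.0
c_theta = c * theta + 1.0 - theta                              # (53)
alpha = (c**2 * theta + 1.0 - theta) / c_theta**2              # (57)

lam = np.linspace(-3.0, 3.0, 1001)
X = 4.0 * np.pi**2 * lam**2
k0 = theta / (X + 1.0) + (1.0 - theta) / (c * X + 1.0)         # (54)
f = (c - 1.0)**2 * theta * (theta - 1.0) / c_theta**2 / (c_theta * X + 1.0)   # (57)

lhs = 1.0 / k0
rhs = (c / c_theta) * X + f + alpha                            # right-hand side of (87)
print(np.max(np.abs(lhs - rhs)))                               # close to machine precision
```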
The bound 88 combined with the Plancherel identity yields
\begin{align} \mathscr{F}(u)&\leq C\int_{0}^{1}dx_2\int_{{\mathbb{R}}} (4\pi^2\lambda_1^2 + 1) |{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1\\ & = C\int_{0}^{1}dx_2\int_{{\mathbb{R}}}\left [ \left ( \frac{\partial u}{\partial x_1} \right)^2(x_1, x_2)+ |u(x_1, x_2)|^2\right]dx_1\\ & = C \|u\|^2_{H^1_0((0, 1)_{x_1}; L^2(0, 1)_{x_2})}, \end{align} | (89) |
where
Now, take
\begin{equation} d_{H^1_0} (u_k, u)\to 0\qquad\text{as} \quad k\to\infty. \end{equation} | (90) |
In particular, due to 85, we also have that
\begin{align*} \Gamma\text{-}\limsup\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(u)&\leq \liminf\limits_{k\to\infty} (\Gamma\text{-}\limsup\limits_{\varepsilon\to 0}{\mathscr{F}_\varepsilon}(u_k) )\\ &\leq \liminf\limits_{k\to\infty}\mathscr{F}(u_k)\\ & = \mathscr{F}(u), \end{align*} |
which proves the
Step 3 - Alternative expression of
The proof of the equality between the two expressions of the
Lemma 4.2. Let
\begin{equation} {\mathcal{F}}_2 (h\ast u) = {\mathcal{F}}_2(h){\mathcal{F}}_2(u)\quad \mathit{{a.e. ~in }}~~ \mathbb{R} . \end{equation} | (91) |
By applying Plancherel's identity with respect to
\begin{align} \int_{\mathbb{R}}&\left |\sqrt{\alpha}u(x_1, x_2) + (h\ast_1u)(x_1, x_2)\right|^2dx_1 \\ & = \int_{\mathbb{R}}\left |\sqrt{\alpha}{\mathcal{F}}_2(u)(\lambda_1, x_2) + {\mathcal{F}}_2(h\ast_1u)(\lambda_1, x_2)\right|^2d\lambda_1 \\ & = \int_{\mathbb{R}} \biggl[\alpha \left |{\mathcal{F}}_2 (u)(\lambda_1, x_2)\right|^2 + 2\sqrt{\alpha}{\rm Re}\left ({\mathcal{F}}_2(u)(\lambda_1, x_2) \overline{{\mathcal{F}}_2(h\ast_1u)}(\lambda_1, x_2)\right)\\ & \quad +\left |{\mathcal{F}}_2(h\ast_1u)(\lambda_1, x_2)\right|^2\biggr]d\lambda_1. \end{align} | (92) |
Recall that the Fourier transform of
\begin{align} \int_{\mathbb{R}} &\left [\alpha \biggl|{\mathcal{F}}_2 (u)(\lambda_1, x_2)\right|^2 + 2\sqrt{\alpha}{\rm Re}\left ({\mathcal{F}}_2(u)(\lambda_1, x_2) \overline{{\mathcal{F}}_2(h\ast_1u)}(\lambda_1, x_2)\right)\\ & \quad +\left |{\mathcal{F}}_2(h\ast_1u)(\lambda_1, x_2)\right|^2\biggr]d\lambda_1\\ & = \int_{\mathbb{R}} \left [\alpha+2\sqrt{\alpha}{\mathcal{F}}_2(h)(\lambda_1) + \left ({\mathcal{F}}_2(h)(\lambda_1)\right)^2\right]\left |{\mathcal{F}}_2(u)(\lambda_1, x_2)\right|^2d\lambda_1\\ & = \int_{\mathbb{R}} \left [\sqrt{\alpha}+ {\mathcal{F}}_2(h)(\lambda_1)\right]^2|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1\\ & = \int_{\mathbb{R}} \left [\alpha+ f(\lambda_1)\right]|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1. \end{align} | (93) |
On the other hand, by applying Plancherel's identity with respect to
\begin{equation*} \int_{{\mathbb{R}}}\frac{c}{c_\theta}4\pi^2\lambda_1^2|{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1 = \int_{{\mathbb{R}}}\frac{c}{c_\theta}\left (\frac{\partial u}{\partial x_1}\right)^2(x_1, x_2)dx_1. \end{equation*} |
In view of the expansion of
\begin{align*} &\int_{0}^{1}dx_2\int_{\mathbb{R}} \frac{1}{\hat{k}_0(\lambda_1)} |{\mathcal{F}}_2(u)(\lambda_1, x_2)|^2d\lambda_1 \\ &\quad = \int_{0}^{1}dx_2 \int_{\mathbb{R}}\left \{\frac{c}{c_\theta}\left (\frac{\partial u}{\partial x_1}\right)^2(x_1, x_2)+[\sqrt{\alpha}u(x_1, x_2) + (h\ast_1u)(x_1, x_2)]^2\right\}dx_1, \end{align*} |
which concludes the proof.
Proof of Lemma 4.1. In view of 87, the equality 79 becomes
\begin{align} {\mathcal{F}}_2(b)(\lambda_1, x_2 ) & = \left (\frac{c}{c_\theta}4\pi^2\lambda_1^2+\alpha + f(\lambda_1)\right){\mathcal{F}}_2(u)(\lambda_1, x_2)\\ & = {\mathcal{F}}_2 \left (-\frac{c}{c_\theta}\frac{\partial^2 u}{\partial x_1^2} +\alpha u\right)(\lambda_1, x_2) + f(\lambda_1){\mathcal{F}}_2(u)(\lambda_1, x_2). \end{align} | (94) |
Since
\begin{equation*} \label{fCL1} f(\lambda_1) = \frac{(c-1)^2\theta(\theta-1)}{c^2_\theta}\frac{1}{c_\theta4\pi^2\lambda_1^2 + 1} = O(\lambda_1^{-2})\in C_0(\mathbb{R})\cap L^1(\mathbb{R}), \end{equation*} |
the right-hand side of 94 belongs to
\begin{equation*} {\mathcal{F}}_2(b)(\cdot, x_2)\in L^2({\mathbb{R}}). \end{equation*} |
Applying the Plancherel identity, we obtain that
\begin{equation} b(\cdot, x_2)\in L^2(0, 1). \end{equation} | (95) |
We show that
\begin{equation*} \lim\limits_{t\to 0} \|b(\cdot, x_2+t) - b(\cdot, x_2)\|_{L^2(0, 1)_{x_1}} = 0. \end{equation*} |
Thanks to Plancherel's identity, we infer from 79 that
\begin{align*} \|b(\cdot, x_2+t) -& b(\cdot, x_2)\|^2_{L^2(0, 1)_{x_1}} = \|{\mathcal{F}}_2(b)(\cdot, x_2+t) - {\mathcal{F}}_2(b)(\cdot, x_2)\|^2_{L^2({\mathbb{R}})_{\lambda_1}}\\ & = \int_{{\mathbb{R}}}\left |\frac{1}{\hat{k}_0(\lambda_1)}\left [{\mathcal{F}}_2(u)(\lambda_1, x_2+t) - {\mathcal{F}}_2(u)(\lambda_1, x_2)\right] \right|^2d\lambda_1. \end{align*} |
In view of 88 and thanks to the Plancherel identity, we obtain
\begin{align} \|b(\cdot, x_2+t) &- b(\cdot, x_2)\|^2_{L^2(0, 1)_{x_1}}\\ & \leq C^2 \int_{{\mathbb{R}}}\left |(4\pi^2\lambda_1^2+1) ({\mathcal{F}}_2(u)(\lambda_1, x_2+t) - {\mathcal{F}}_2(u)(\lambda_1, x_2)) \right|^2d\lambda_1\\ &\leq C^2\left \|{\mathcal{F}}_2 \left (\frac{\partial u}{\partial x_1}\right) (\cdot, x_2+t ) -{\mathcal{F}}_2\left (\frac{\partial u}{\partial x_1}\right) (\cdot, x_2 ) \right\|^2_{L^2(0, 1)_{\lambda_1}} \\ &\quad+ C^2 \|{\mathcal{F}}_2(u)(\cdot, x_2+t)-{\mathcal{F}}_2(u)(\cdot, x_2)\|^2_{L^2(0, 1)_{\lambda_1}} \\ & = C^2\left \|\frac{\partial u}{\partial x_1}(\cdot, x_2+t ) -\frac{\partial u}{\partial x_1} (\cdot, x_2 ) \right\|^2_{L^2(0, 1)_{x_1}} \\ &\quad +C^2 \|u(\cdot, x_2+t)-u(\cdot, x_2)\|^2_{L^2(0, 1)_{x_1}}. \end{align} |
By the Lebesgue dominated convergence theorem and since
\begin{equation} b(x_1, x_2)\in C([0, 1]_{x_2}; \quad L^2(0, 1)_{x_1}). \end{equation} | (96) |
To conclude the proof, it remains to show the regularity of
\begin{equation*} u_0(\cdot, x_2, y_2)\in C^1([0, 1])\qquad\text{a.e.} \quad (x_2, y_2)\in (0, 1)\times Y_1. \end{equation*} |
On the other hand, the solution
\begin{equation} u_0(x_1, x_2, y_2) : = \int_{0}^{1} G_{y_2}(x_1, s) b(s, x_2)ds, \end{equation} | (97) |
where
\begin{equation*} G_{y_2}(x_1, s) : = \frac{1}{\sqrt{a(y_2)}\sinh\left (\frac{1}{\sqrt{a(y_2)}}\right) }\sinh\left (\frac{x_1\wedge s}{\sqrt{a(y_2)}}\right)\sinh\left (\frac{1-x_1\vee s}{\sqrt{a(y_2)}}\right). \end{equation*} |
This combined with 96 and 97 proves that
\begin{equation*} \label{u0isregular} u_0(x_1, x_2, y_2) \in C^1([0, 1]^2, L^\infty_{\rm per}(Y_1)), \end{equation*} |
which concludes the proof.
We now prove Lemma 4.2, which we used in Step
Proof of Lemma 4.2. By the convolution property of the Fourier transform on
\begin{equation} h\ast u = \overline{{\mathcal{F}}_2}({\mathcal{F}}_2(h)) \ast \overline{{\mathcal{F}}_2}({\mathcal{F}}_2(u)) = \overline{{\mathcal{F}}_1}({\mathcal{F}}_2(h){\mathcal{F}}_2(u)), \end{equation} | (98) |
where
\begin{equation*} {\mathcal{F}}_2(h){\mathcal{F}}_2(u) = {\mathcal{F}}_2(h){\mathcal{F}}_1(u)\in L^2(\mathbb{R})\cap L^1({\mathbb{R}}). \end{equation*} |
Since
\begin{equation*} h\ast u = \overline{{\mathcal{F}}_2}({\mathcal{F}}_2(h){\mathcal{F}}_2(u))\in L^2(\mathbb{R}), \end{equation*} |
which yields 91. This concludes the proof.
Remark 2. Thanks to the Beurling-Deny theory of Dirichlet forms [3], Mosco [15, Theorem 4.1.2] has proved that the
\begin{equation} F(u) = F_d(u) + \int_{{\Omega}}u^2k(dx) + \int_{({\Omega}\times{\Omega})\setminus{\rm diag}} (u(x)-u(y))^2j(dx, dy), \end{equation} | (99) |
where
\begin{equation*} T(0) = 0, \qquad\text{and}\qquad \forall x, y\in{\mathbb{R}}, \quad |T(x)-T(y)|\leq |x-y|, \end{equation*} |
we have
In the present context, we do not know if the
We are going to give an explicit expression of the homogenized matrix
Set
\begin{equation} a: = (1-\theta) A_1 e_1\cdot e_1 + \theta A_2e_1\cdot e_1, \end{equation} | (100) |
with
Proposition 5. Let
\begin{equation} A^\ast = \theta A_1+(1-\theta)A_2 -\frac{\theta(1-\theta)}{a} (A_2-A_1)e_1\otimes (A_2-A_1)e_1. \end{equation} | (101) |
If
\begin{equation} A^\ast = \theta A_1+(1-\theta)A_2. \end{equation} | (102) |
Furthermore, if one of the following conditions is satisfied:
i) in two dimensions,
ii) in three dimensions,
then
Remark 3. The condition
Proof. Assume that
\begin{equation} \lim\limits_{\delta\to 0} A^\ast_\delta = A^\ast, \end{equation} | (103) |
where, for
\begin{equation*} A_\delta(y_1) = \chi(y_1) A_1^\delta + (1-\chi(y_1)) A_2 ^\delta\qquad\text{for} \quad y_1\in{\mathbb{R}}, \end{equation*} |
with
\begin{equation} A^\ast_\delta = \theta A_1^\delta+(1-\theta)A_2^\delta -\frac{\theta(1-\theta)}{(1-\theta)A_1^\delta e_1\cdot e_1 + \theta A_2^\delta e_1\cdot e_1 } (A_2^\delta-A_1^\delta)e_1\otimes (A_2^\delta-A_1^\delta)e_1. \end{equation} | (104) |
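As a hedged numerical illustration of the convergence 103 (not part of the original argument), the sketch below evaluates the perturbed lamination formula 104 for A_i^\delta = A_i + \delta I_2 and compares it with the degenerate formula 101 as \delta tends to 0, for a hypothetical pair satisfying 28; the values of \theta, \xi and A_2 are arbitrary choices.

```python
import numpy as np

# Perturbed lamination formula (104) versus the degenerate formula (101) as delta -> 0,
# for a hypothetical pair satisfying (28): A1 = xi (x) xi rank-one, A2 positive definite.
# theta, xi and A2 are arbitrary choices.
theta = 0.4
e1 = np.array([1.0, 0.0])
xi = np.array([1.0, 2.0]) / np.sqrt(5.0)
A1 = np.outer(xi, xi)
A2 = np.array([[3.0, 1.0], [1.0, 2.0]])

def laminate(B1, B2):
    a = (1 - theta) * (B1 @ e1) @ e1 + theta * (B2 @ e1) @ e1      # as in (100)
    d = (B2 - B1) @ e1
    return theta * B1 + (1 - theta) * B2 - theta * (1 - theta) / a * np.outer(d, d)

A_star = laminate(A1, A2)                                           # formula (101)
for delta in [1e-1, 1e-3, 1e-6]:
    A_delta = laminate(A1 + delta * np.eye(2), A2 + delta * np.eye(2))  # formula (104)
    print(delta, np.max(np.abs(A_delta - A_star)))                  # difference tends to 0
```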
If
We prove that
\begin{align} |(A_2-A_1)e_1\cdot x| &\leq |A_2e_1\cdot x|+ |A_1e_1\cdot x|\\ &\leq (A_2e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2} + (A_1e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}. \end{align} | (105) |
This combined with the definition 101 of
\begin{align} A^\ast x\cdot x& = \theta (A_1 x\cdot x) + (1-\theta)(A_2 x\cdot x) - \theta(1-\theta)a^{-1}\left |(A_2-A_1)e_1\cdot x\right|^2\\ &\geq \theta (A_1x\cdot x) +(1-\theta) (A_2 x\cdot x)\\ &\quad-\theta(1-\theta)a^{-1}[(A_2e_1\cdot e_1)^{1/2} (A_2x\cdot x)^{1/2}+ (A_1e_1\cdot e_1)^{1/2} (A_1x\cdot x)^{1/2} ]^2\\ & = a^{-1} [a \theta (A_1x\cdot x) +a (1-\theta) (A_2 x\cdot x) -\theta(1-\theta)(A_2e_1\cdot e_1)( A_2x\cdot x)\\ & \quad -\theta(1-\theta)(A_1e_1\cdot e_1)(A_1x\cdot x)\\ & \quad - 2\theta(1-\theta)(A_2e_1\cdot e_1)^{1/2}( A_2x\cdot x)^{1/2}(A_1e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}]. \end{align} | (106) |
In view of definition 100 of
\begin{align} &a \theta( A_1x\cdot x) +a (1-\theta)( A_2 x\cdot x)\\& = \theta (1-\theta)(A_1 e_1\cdot e_1)(A_1 x\cdot x) + \theta^2(A_2 e_1\cdot e_1)(A_1 x\cdot x)\\ &\quad+ (1-\theta)^2 (A_1 e_1\cdot e_1)(A_2 x\cdot x)\\ &\quad + \theta(1-\theta)(A_2 e_1\cdot e_1)(A_2 x\cdot x). \end{align} |
Plugging this equality in 106, we deduce that
\begin{align} A^\ast x\cdot x &\geq a^{-1}[\theta^2(A_2 e_1\cdot e_1)(A_1 x\cdot x)+ (1-\theta)^2(A_1 e_1\cdot e_1)(A_2 x\cdot x)\\ & \quad -2\theta(1-\theta)(A_2e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}(A_1e_1\cdot e_1)^{1/2}( A_2x\cdot x)^{1/2}]\\ & = a^{-1}[\theta (A_2 e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2} -(1-\theta) (A_1 e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2} ]^2\geq 0, \end{align} | (107) |
which proves that
Now, assume
\begin{equation*} (A_2^\delta -A_1^\delta)e_1 = (A_2-A_1)e_1 = 0, \end{equation*} |
which implies that the lamination formula 104 becomes
\begin{equation*} A^\ast_\delta = \theta A_1^\delta + (1-\theta)A_2^\delta. \end{equation*} |
This combined with the convergence 103 yields to the expression 102 for
To conclude the proof, it remains to prove the positive definiteness of
Case (i).
Assume
\begin{equation} |A_2e_1\cdot x| = (A_2e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2} = \|A^{1/2}_2e_1\|\|A^{1/2}_2x\|. \end{equation} | (108) |
Recall that the Cauchy-Schwarz inequality is an equality if and only if one of the vectors is a scalar multiple of the other. This combined with 108 leads to
\begin{equation} x = \alpha e_1 \qquad\text{for some} \quad \alpha\in{\mathbb{R}}. \end{equation} | (109) |
From the definition 101 of
\begin{equation} A^\ast e_1\cdot e_1 = \frac{1}{a}(A_2 e_1\cdot e_1) (\xi\cdot e_1)^2 > 0. \end{equation} | (110) |
Recall that
Case (ii).
Assume that
\begin{align} |A_1e_1\cdot x| & = (A_1e_1\cdot e_1)^{1/2}(A_1x\cdot x)^{1/2}, \end{align} | (111) |
\begin{align} |A_2e_1\cdot x| & = (A_2e_1\cdot e_1)^{1/2}(A_2x\cdot x)^{1/2}. \end{align} | (112) |
Let
\begin{equation*} p_i(t) : = A_i(x+te_1)\cdot (x+te_1) \qquad\text{for} \quad i = 1, 2. \end{equation*} |
In view of 111, the discriminant of
\begin{equation} p_1(t_1) = A_1(x+t_1e_1)\cdot (x+t_1e_1) = 0. \end{equation} | (113) |
Recall that
\begin{equation} x\in{\rm Span}(e_1, \eta_1). \end{equation} | (114) |
Similarly, recalling that
\begin{equation} x\in{\rm Span}(e_1, \eta_2). \end{equation} | (115) |
Since the vectors
\begin{equation*} x = \alpha e_1 \qquad\text{for some} \quad \alpha\in{\mathbb{R}}. \end{equation*} |
In light of definition 101 of
\begin{equation*} A^\ast e_1\cdot e_1 = \frac{1}{a} (A_1e_1\cdot e_1)(A_2e_1\cdot e_1) > 0, \end{equation*} |
which implies that
Note that when
\begin{equation*} A_1 = e_2\otimes e_2 \quad\text{and}\quad A_2 = I_2. \end{equation*} |
Then, it is easy to check that
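A minimal numerical sketch of this example (an illustration; \theta = 1/2 is an arbitrary choice): plugging A_1 = e_2\otimes e_2 and A_2 = I_2 into the lamination formula 101 gives A^\ast = e_2\otimes e_2, so that A^\ast e_1\cdot e_1 = 0 and A^\ast is only non-negative, not positive definite.

```python
import numpy as np

# The final example: A1 = e2 (x) e2, A2 = I2, plugged into the lamination formula (101).
# theta = 0.5 is an arbitrary choice; any theta in (0,1) gives the same conclusion.
theta = 0.5
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A1, A2 = np.outer(e2, e2), np.eye(2)

a = (1 - theta) * (A1 @ e1) @ e1 + theta * (A2 @ e1) @ e1           # (100)
d = (A2 - A1) @ e1
A_star = theta * A1 + (1 - theta) * A2 - theta * (1 - theta) / a * np.outer(d, d)  # (101)

print(A_star)                        # equals e2 (x) e2
print(np.linalg.eigvalsh(A_star))    # eigenvalues 0 and 1: A* is degenerate here
```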
This problem was pointed out to me by Marc Briane during my stay at Institut National des Sciences Appliquées de Rennes. I thank him for the countless fruitful discussions. My thanks also extend to Valeria Chiadò Piat for useful remarks. The author is also a member of the INdAM-GNAMPA project "Analisi variazionale di modelli non-locali nelle scienze applicate".