Research article

Electroencephalogram based face emotion recognition using multimodal fusion and 1-D convolutional neural network (1D-CNN) classifier

  • Received: 27 April 2023 Revised: 28 June 2023 Accepted: 03 July 2023 Published: 19 July 2023
  • Recently, there has been increased interest in emotion recognition, which is widely used in many fields, including healthcare, education and human-computer interaction (HCI). Different emotions are frequently recognised using characteristics of human emotion. Multimodal emotion identification based on the fusion of several features is currently the subject of a growing body of research. In order to obtain superior classification performance, this work offers a deep learning model for multimodal emotion identification based on the fusion of electroencephalogram (EEG) signals and facial expressions. First, the face features are extracted from the facial expressions using a pre-trained convolutional neural network (CNN). We then employ CNNs to acquire spatial features from the original EEG signals; these CNNs use both regional and global convolution kernels to learn the characteristics of the left and right hemisphere channels as well as of all EEG channels. Exponential canonical correlation analysis (ECCA) is used to combine the highly correlated features extracted from the facial video frames and the EEG. The 1D-CNN classifier uses these combined features to identify emotions. To assess the effectiveness of the suggested model, this research ran tests on the DEAP dataset. It is found that Multi_Modal_1D-CNN achieves an accuracy of 98.9%, a precision of 93.2%, a recall of 89.3%, an F1-score of 94.23% and a processing time of 7 s.

    Citation: Youseef Alotaibi, Veera Ankalu Vuyyuru. Electroencephalogram based face emotion recognition using multimodal fusion and 1-D convolutional neural network (1D-CNN) classifier[J]. AIMS Mathematics, 2023, 8(10): 22984-23002. doi: 10.3934/math.20231169




In 2005, Rodríguez [1] used the Lyapunov-Schmidt method and the Brouwer fixed-point theorem to discuss the following discrete Sturm-Liouville boundary value problem:

$$\begin{cases}\Delta[p(t-1)\Delta y(t-1)]+q(t)y(t)+\lambda y(t)=f(y(t)),\quad t\in[a+1,b+1]_{\mathbb Z},\\ a_{11}y(a)+a_{12}\Delta y(a)=0,\qquad a_{21}y(b+1)+a_{22}\Delta y(b+1)=0,\end{cases}$$

where $\lambda$ is an eigenvalue of the corresponding linear problem and the nonlinearity $f$ is bounded.

    Furthermore, in 2007, Ma [2] studied the following discrete boundary value problem

$$\begin{cases}\Delta[p(t-1)\Delta y(t-1)]+q(t)y(t)+\lambda y(t)=f(t,y(t))+h(t),\quad t\in[a+1,b+1]_{\mathbb Z},\\ a_{11}y(a)+a_{12}\Delta y(a)=0,\qquad a_{21}y(b+1)+a_{22}\Delta y(b+1)=0,\end{cases}$$

    where f is subject to the sublinear growth condition

$$|f(t,s)|\le A|s|^{\alpha}+B,\quad s\in\mathbb R,$$

for some $0\le\alpha<1$ and $A,B\in(0,\infty)$. Additional results on the existence of solutions to the related continuous and discrete problems, both at nonresonance and at resonance, can be found in [3,4,5,6,7,8,9,10,11,12,13] and the references therein. For example, Li and Shu [14] considered the existence of solutions to the continuous Sturm-Liouville problem with random impulses and boundary value problems using Dhage's fixed-point theorem, and in [15] obtained upper and lower solutions to a second-order random impulsive differential equation using the monotone iterative method.

Inspired by the above literature, we use the connectivity theory of the solution set of compact vector fields [16] to consider the existence of solutions to the discrete resonance problem

$$\begin{cases}-\Delta[p(t-1)\Delta y(t-1)]+q(t)y(t)=\lambda_k r(t)y(t)+f(t,y(t))+\gamma\psi_k(t)+\bar g(t),\quad t\in[1,T]_{\mathbb Z},\\ (a_0\lambda_k+b_0)y(0)=(c_0\lambda_k+d_0)\Delta y(0),\\ (a_1\lambda_k+b_1)y(T+1)=(c_1\lambda_k+d_1)\nabla y(T+1),\end{cases}\tag{1.1}$$

where $p:[0,T]_{\mathbb Z}\to(0,\infty)$, $q:[1,T]_{\mathbb Z}\to\mathbb R$, $\bar g:[1,T]_{\mathbb Z}\to\mathbb R$, $r(t)>0$ for $t\in[1,T]_{\mathbb Z}$, and $(\lambda_k,\psi_k)$ is an eigenpair of the corresponding linear problem

$$\begin{cases}-\Delta[p(t-1)\Delta y(t-1)]+q(t)y(t)=\lambda r(t)y(t),\quad t\in[1,T]_{\mathbb Z},\\ (a_0\lambda+b_0)y(0)=(c_0\lambda+d_0)\Delta y(0),\\ (a_1\lambda+b_1)y(T+1)=(c_1\lambda+d_1)\nabla y(T+1).\end{cases}\tag{1.2}$$

It is worth noting that the difference between the problem (1.1) and the problems above is that the eigenvalue appears not only in the equation but also in the boundary conditions, which causes considerable difficulties. Furthermore, it should be noted that such problems arise in a number of physical settings, including heat conduction and vibrating strings. For instance, Fulton and Pruess [17] discussed a kind of heat conduction problem with eigenparameter-dependent boundary conditions. However, to discuss this kind of problem, we need to know the spectrum of the problem (1.2). Fortunately, in 2016, Gao and Ma [18] obtained the eigenvalue theory of problem (1.2) under the conditions listed as follows:

(A1) $\delta_0:=a_0d_0-b_0c_0<0$, $c_0\ge0$, $d_1b_1\ge0$,

(A2) $\delta_1:=a_1d_1-b_1c_1>0$, $c_1\ge0$, $b_0+d_0\ge0$,

    which laid a theoretical foundation for this paper.

    Under the conditions (A1) and (A2), we assume the following conditions hold:

(H1) (Sublinear growth condition) $f:[1,T]_{\mathbb Z}\times\mathbb R\to\mathbb R$ is continuous and there exist $\alpha\in[0,1)$ and $A,B\in(0,\infty)$, such that

$$|f(t,y)|\le A|y|^{\alpha}+B,$$

(H2) (Sign condition) There exists $\omega>0$, such that

$$yf(t,y)>0,\quad t\in[1,T]_{\mathbb Z}\ \text{for}\ |y|>\omega,\tag{1.3}$$

    or

$$yf(t,y)<0,\quad t\in[1,T]_{\mathbb Z}\ \text{for}\ |y|>\omega,\tag{1.4}$$

(H3) $\bar g:[1,T]_{\mathbb Z}\to\mathbb R$ satisfies

$$\sum_{s=1}^{T}\bar g(s)\psi_k(s)=0,\tag{1.5}$$

(H4) $f:[1,T]_{\mathbb Z}\times\mathbb R\to\mathbb R$ is continuous and

$$\lim_{|y|\to\infty}f(t,y)=0$$

uniformly for $t\in[1,T]_{\mathbb Z}$.

The organization of this paper is as follows. In the second section, we construct a new inner product space, in which we discuss the self-adjointness of the corresponding linear operator and the properties of the eigenpairs of (1.2). Then the Lyapunov-Schmidt method is used to decompose the inner product space and transform our problem into an equivalent system; that is to say, finding the solutions of (1.1) is equivalent to finding the solutions of this system. Under the sublinear growth and sign conditions on the nonlinear term, an existence result for solutions of the problem (1.1) is obtained using Schauder's fixed-point theorem and the connectivity theory of the solution set of compact vector fields. Based on the first result, the existence of two solutions of the problem (1.1) is also obtained.

Definition 2.1. ([19]) A linear operator $P$ from a linear space $X$ to itself is called a projection operator if $P^2=P$.

Lemma 2.2. ([16]) Let $C$ be a bounded closed convex set in a Banach space $E$ and let $T:[\alpha,\beta]\times C\to C$ $(\alpha<\beta)$ be a continuous compact mapping. Then the set

$$S_{\alpha,\beta}=\{(\rho,x)\in[\alpha,\beta]\times C\mid T(\rho,x)=x\}$$

contains a connected branch connecting $\{\alpha\}\times C$ and $\{\beta\}\times C$.

Lemma 2.3. ([20]) (Schauder) Let $D$ be a bounded closed convex set in $E$ and let $A:D\to D$ be completely continuous. Then $A$ has a fixed point in $D$.

    First, we construct the inner product space needed in this paper.

Let

$$Y:=\{u\mid u:[1,T]_{\mathbb Z}\to\mathbb R\},$$

then $Y$ is a Hilbert space under the inner product

$$\langle y,z\rangle_Y=\sum_{t=1}^{T}y(t)z(t)$$

and its norm is $\|y\|_Y:=\sqrt{\langle y,y\rangle_Y}$.

Furthermore, consider the space $H:=Y\times\mathbb R^2$. Define the inner product as follows:

$$\langle[y,\alpha,\beta]^{\top},[z,\zeta,\rho]^{\top}\rangle=\langle y,z\rangle_Y+\frac{p(0)}{|\delta_0|}\alpha\zeta+\frac{p(T)}{|\delta_1|}\beta\rho,$$

whose norm is defined as

$$\|y\|=\langle[y,\alpha,\beta]^{\top},[y,\alpha,\beta]^{\top}\rangle^{\frac12},$$

where $\top$ denotes the transpose of a matrix.
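As a concrete numerical sketch of this inner product (with hypothetical values $T=3$, $p(0)=1$, $p(T)=2$, $\delta_0=-1$, $\delta_1=1$, none of which are fixed by the text):

```python
import numpy as np

# Hypothetical problem data (illustrative only, not fixed by the paper):
# T = 3, p(0) = 1, p(T) = 2, delta_0 = -1, delta_1 = 1.
T, p0, pT, delta0, delta1 = 3, 1.0, 2.0, -1.0, 1.0

def inner_H(yab, zzr):
    """Inner product on H = Y x R^2 for elements [y, alpha, beta]."""
    y, alpha, beta = yab
    z, zeta, rho = zzr
    return (np.dot(y, z)
            + p0 / abs(delta0) * alpha * zeta
            + pT / abs(delta1) * beta * rho)

u = (np.array([1.0, 2.0, 3.0]), 0.5, -1.0)
v = (np.array([0.0, 1.0, 1.0]), 2.0, 1.0)

print(inner_H(u, v))       # a symmetric bilinear form
print(inner_H(u, u) > 0)   # positive on a sample element
```

The weights $p(0)/|\delta_0|$ and $p(T)/|\delta_1|$ on the two scalar components are exactly what makes the operator below self-adjoint.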

Let

$$y_{0,0}=b_0y(0)-d_0\Delta y(0),\qquad y_{0,1}=a_0y(0)-c_0\Delta y(0)$$

and

$$y_{T+1,0}=b_1y(T+1)-d_1\nabla y(T+1),\qquad y_{T+1,1}=a_1y(T+1)-c_1\nabla y(T+1).$$

For $y=[y,\alpha,\beta]^{\top}$, define an operator $\mathcal L:D\to H$ as follows:

$$\mathcal Ly=\begin{bmatrix}-\Delta[p(t-1)\Delta y(t-1)]+q(t)y(t)\\-y_{0,0}\\-y_{T+1,0}\end{bmatrix}:=\begin{bmatrix}Ly\\-y_{0,0}\\-y_{T+1,0}\end{bmatrix},$$

where $D=\{[y,\alpha,\beta]^{\top}:y\in Y,\ y_{0,1}=\alpha,\ y_{T+1,1}=\beta\}$. Define $S:D\to H$ as follows:

$$Sy=S\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}=\begin{bmatrix}ry\\\alpha\\\beta\end{bmatrix}.$$

Then, the problem (1.2) is equivalent to the eigenvalue problem

$$\mathcal Ly=\lambda Sy,\tag{2.1}$$

that is, if $(\lambda_k,y)$ is an eigenpair of the problem (1.2), then $(\lambda_k,y)$ is an eigenpair of the operator $\mathcal L$. Conversely, if $(\lambda_k,y)$ is an eigenpair of the operator $\mathcal L$, then $(\lambda_k,y)$ is an eigenpair of the problem (1.2).

Eventually, we define $A:D\to H$ as follows:

$$Ay=F(t,y)+[\gamma\psi_k+\bar g,0,0]^{\top},$$

where $F(t,y)=F(t,[y,\alpha,\beta]^{\top})=[f(t,y),0,0]^{\top}$. Obviously, the solution of the problem (1.1) is equivalent to the fixed point of the following operator equation:

$$\mathcal Ly=\lambda_kSy+Ay.\tag{2.2}$$

It can be seen that there is a one-to-one correspondence between the solutions of the problem (1.1) and the solutions of the operator Eq (2.2).

    Next, we are committed to obtaining the orthogonality of the eigenfunction.

Lemma 2.4. Assume that $(\lambda,y)$ and $(\mu,z)$ are eigenpairs of $\mathcal L$, then

$$\langle y,\mathcal Lz\rangle-\langle\mathcal Ly,z\rangle=(\mu-\lambda)\langle y,Sz\rangle.$$

Proof. Let $y=[y,\alpha,\beta]^{\top}\in D$, $z=[z,\zeta,\rho]^{\top}\in D$, then

$$\begin{aligned}\langle y,\mathcal Lz\rangle&=\langle[y,\alpha,\beta]^{\top},[Lz,-z_{0,0},-z_{T+1,0}]^{\top}\rangle\\&=\langle y,Lz\rangle_Y+\frac{p(0)}{|\delta_0|}\alpha(-z_{0,0})+\frac{p(T)}{|\delta_1|}\beta(-z_{T+1,0})\\&=\mu\langle y,rz\rangle_Y+\frac{p(0)}{|\delta_0|}\alpha(\mu\zeta)+\frac{p(T)}{|\delta_1|}\beta(\mu\rho)\\&=\mu\langle y,Sz\rangle.\end{aligned}\tag{2.3}$$

Similarly, we have

$$\begin{aligned}\langle\mathcal Ly,z\rangle&=\langle[Ly,-y_{0,0},-y_{T+1,0}]^{\top},[z,\zeta,\rho]^{\top}\rangle\\&=\langle Ly,z\rangle_Y+\frac{p(0)}{|\delta_0|}(-y_{0,0})\zeta+\frac{p(T)}{|\delta_1|}(-y_{T+1,0})\rho\\&=\lambda\langle ry,z\rangle_Y+\frac{p(0)}{|\delta_0|}\lambda\alpha\zeta+\frac{p(T)}{|\delta_1|}\lambda\beta\rho\\&=\lambda\langle y,Sz\rangle.\end{aligned}\tag{2.4}$$

It can be seen from (2.3) and (2.4) that

$$\langle y,\mathcal Lz\rangle-\langle\mathcal Ly,z\rangle=(\mu-\lambda)\langle y,Sz\rangle.$$

Lemma 2.5. The operator $\mathcal L$ is self-adjoint in $H$.

Proof. For $y=[y,\alpha,\beta]^{\top}\in D$, $z=[z,\zeta,\rho]^{\top}\in D$, we just need to prove that $\langle y,\mathcal Lz\rangle=\langle\mathcal Ly,z\rangle$. By the definition of the inner product in $H$, we obtain

$$\langle y,\mathcal Lz\rangle=\langle y,Lz\rangle_Y+\frac{p(0)}{|\delta_0|}\alpha(-z_{0,0})+\frac{p(T)}{|\delta_1|}\beta(-z_{T+1,0})$$

and

$$\langle\mathcal Ly,z\rangle=\langle Ly,z\rangle_Y+\frac{p(0)}{|\delta_0|}(-y_{0,0})\zeta+\frac{p(T)}{|\delta_1|}(-y_{T+1,0})\rho.$$

Therefore,

$$\langle y,\mathcal Lz\rangle-\langle\mathcal Ly,z\rangle=\langle y,Lz\rangle_Y-\langle Ly,z\rangle_Y+\frac{p(0)}{|\delta_0|}\big[\alpha(-z_{0,0})-(-y_{0,0})\zeta\big]+\frac{p(T)}{|\delta_1|}\big[\beta(-z_{T+1,0})-(-y_{T+1,0})\rho\big],$$

where

$$\begin{aligned}\langle y,Lz\rangle_Y&=\sum_{t=1}^{T}y(t)\big(-\Delta[p(t-1)\Delta z(t-1)]+q(t)z(t)\big)\\&=\sum_{t=1}^{T}y(t)p(t-1)\Delta z(t-1)-\sum_{t=1}^{T}y(t)p(t)\Delta z(t)+\sum_{t=1}^{T}q(t)y(t)z(t)\\&=\sum_{t=0}^{T-1}y(t+1)p(t)\Delta z(t)-\sum_{t=1}^{T}y(t)p(t)\Delta z(t)+\sum_{t=1}^{T}q(t)y(t)z(t)\\&=\sum_{t=0}^{T-1}p(t)\Delta y(t)\Delta z(t)+p(0)y(0)\Delta z(0)-p(T)y(T)\Delta z(T)+\sum_{t=1}^{T}q(t)y(t)z(t)\end{aligned}$$

and

$$\langle Ly,z\rangle_Y=\sum_{t=0}^{T-1}p(t)\Delta y(t)\Delta z(t)+p(0)\Delta y(0)z(0)-p(T)\Delta y(T)z(T)+\sum_{t=1}^{T}q(t)y(t)z(t).$$

Moreover, from

$$\begin{aligned}\alpha(-z_{0,0})-(-y_{0,0})\zeta&=[a_0y(0)-c_0\Delta y(0)][d_0\Delta z(0)-b_0z(0)]-[d_0\Delta y(0)-b_0y(0)][a_0z(0)-c_0\Delta z(0)]\\&=(a_0d_0-b_0c_0)[y(0)\Delta z(0)-\Delta y(0)z(0)]\end{aligned}$$

and

$$\begin{aligned}\beta(-z_{T+1,0})-(-y_{T+1,0})\rho&=[a_1y(T+1)-c_1\nabla y(T+1)][d_1\nabla z(T+1)-b_1z(T+1)]\\&\quad-[d_1\nabla y(T+1)-b_1y(T+1)][a_1z(T+1)-c_1\nabla z(T+1)]\\&=(a_1d_1-b_1c_1)[y(T+1)\nabla z(T+1)-\nabla y(T+1)z(T+1)],\end{aligned}$$

we have

$$\begin{aligned}\langle y,\mathcal Lz\rangle-\langle\mathcal Ly,z\rangle&=p(0)\begin{vmatrix}y(0)&\Delta y(0)\\z(0)&\Delta z(0)\end{vmatrix}-p(T)\begin{vmatrix}y(T)&\Delta y(T)\\z(T)&\Delta z(T)\end{vmatrix}\\&\quad-p(0)\begin{vmatrix}y(0)&\Delta y(0)\\z(0)&\Delta z(0)\end{vmatrix}+p(T)\begin{vmatrix}y(T+1)&\nabla y(T+1)\\z(T+1)&\nabla z(T+1)\end{vmatrix}=0.\end{aligned}$$

In order to obtain the orthogonality of the eigenfunctions, we define a weighted inner product related to the weight function $r(t)$ in $H$. First, we define the inner product in $Y$ as $\langle y,z\rangle_r=\sum_{t=1}^{T}r(t)y(t)z(t)$.

Similarly, the inner product associated with the weight function $r(t)$ in the space $H$ is defined as follows:

$$\langle[y,\alpha,\beta]^{\top},[z,\zeta,\rho]^{\top}\rangle_r=\langle y,z\rangle_r+\frac{p(0)}{|\delta_0|}\alpha\zeta+\frac{p(T)}{|\delta_1|}\beta\rho.$$

Lemma 2.6. (Orthogonality theorem) Assume that (A1) and (A2) hold. If $(\lambda,y)$ and $(\mu,z)$ are two different eigenpairs of $\mathcal L$, then $y$ and $z$ are orthogonal under the weighted inner product related to the weight function $r(t)$.

Proof. Assume that $(\lambda,y)$ and $(\mu,z)$ are eigenpairs of $\mathcal L$; then it can be obtained from Lemmas 2.4 and 2.5 that

$$0=(\mu-\lambda)\langle y,Sz\rangle=(\mu-\lambda)\langle y,z\rangle_r.$$

Therefore, if $\lambda\neq\mu$, then $\langle y,z\rangle_r=0$, which implies that $y$ and $z$ are orthogonal with respect to the inner product defined by the weight function $r(t)$.

Lemma 2.7. ([18]) Suppose that (A1) and (A2) hold. Then (1.2) has at least $T$ and at most $T+2$ simple eigenvalues.

In this paper, we assume that $\lambda_k$ is a simple eigenvalue, that is, the corresponding eigenspace is one-dimensional. Let $\psi_k=[\psi_k,\alpha,\beta]^{\top}\in D$ be the eigenfunction corresponding to $\lambda_k$, normalized so that

$$\langle\psi_k,\psi_k\rangle=1.\tag{2.5}$$

Denote $\mathbb L:=\mathcal L-\lambda_kS$; then the operator Eq (2.2) is transformed into

$$\mathbb Ly=Ay.\tag{2.6}$$

Define $P:D\to D$ by

$$(Px)(t)=\langle x,\psi_k\rangle\,\psi_k(t).$$

Lemma 2.8. $P$ is a projection operator and $\mathrm{Im}(P)=\mathrm{Ker}(\mathbb L)$.

Proof. Obviously, $P$ is a linear operator; next, we prove that $P^2=P$:

$$(P^2x)(t)=P(Px)(t)=\langle Px,\psi_k\rangle\psi_k(t)=\langle x,\psi_k\rangle\langle\psi_k,\psi_k\rangle\psi_k(t)=\langle x,\psi_k\rangle\psi_k(t)=(Px)(t).$$

It can be obtained from Definition 2.1 that $P$ is a projection operator. In addition, $\mathrm{Im}(P)=\mathrm{span}\{\psi_k\}=\mathrm{Ker}(\mathbb L)$.

Define $H:H\to H$ by

$$H\left(\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}\right)=\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}-\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle\psi_k.$$

Lemma 2.9. $H$ is a projection operator and $\mathrm{Im}(H)=\mathrm{Im}(\mathbb L)$.

Proof. Obviously, $H$ is a linear operator; next, we prove that $H^2=H$:

$$\begin{aligned}H^2\left(\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}\right)&=H\left(H\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}\right)=H\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}-\left\langle H\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle\psi_k\\&=\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}-\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle\psi_k-\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}-\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle\psi_k,\psi_k\right\rangle\psi_k\\&=\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}-2\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle\psi_k+\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle\langle\psi_k,\psi_k\rangle\psi_k\\&=\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}-\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle\psi_k=H\left(\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}\right).\end{aligned}$$

It can be obtained from Definition 2.1 that $H$ is a projection operator. On the one hand, for any $[y,\alpha,\beta]^{\top}\in H$, we have

$$\left\langle H\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle=\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix}-\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle\psi_k,\psi_k\right\rangle=\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle-\left\langle\begin{bmatrix}y\\\alpha\\\beta\end{bmatrix},\psi_k\right\rangle\langle\psi_k,\psi_k\rangle=0,$$

thus, $\mathrm{Im}(H)\subseteq\mathrm{Im}(\mathbb L)$. On the other hand, for any $y\in\mathrm{Im}(\mathbb L)$, we have

$$\langle y,\psi_k\rangle=0.$$

In summary, $\mathrm{Im}(H)=\mathrm{Im}(\mathbb L)$.

Denote by $I$ the identity operator; then

$$D=\mathrm{Im}(P)\oplus\mathrm{Im}(I-P),\qquad H=\mathrm{Im}(H)\oplus\mathrm{Im}(I-H).$$

The restriction $\mathbb L|_{\mathrm{Im}(I-P)}$ of the operator $\mathbb L$ is a bijection from $\mathrm{Im}(I-P)$ to $\mathrm{Im}(H)$. Define $M:\mathrm{Im}(H)\to\mathrm{Im}(I-P)$ by

$$M:=(\mathbb L|_{\mathrm{Im}(I-P)})^{-1}.$$

It can be seen from $\mathrm{Ker}(\mathbb L)=\mathrm{span}\{\psi_k\}$ that every $y=[y,\alpha,\beta]^{\top}\in D$ has a unique decomposition

$$y=\rho\psi_k+x,$$

where $\rho\in\mathbb R$ and $x=[x,\alpha,\beta]^{\top}\in\mathrm{Im}(I-P)$.

Lemma 2.10. The operator Eq (2.6) is equivalent to the following system:

$$x=MHA(\rho\psi_k+x),\tag{2.7}$$
$$\sum_{t=1}^{T}\psi_k(t)f(t,\rho\psi_k(t)+x(t))=\gamma\Big(\frac{p(0)}{|\delta_0|}\alpha^2+\frac{p(T)}{|\delta_1|}\beta^2-1\Big):=\theta,\tag{2.8}$$

where $\alpha=a_0\psi_k(0)-c_0\Delta\psi_k(0)$, $\beta=a_1\psi_k(T+1)-c_1\nabla\psi_k(T+1)$.

Proof. (i) For any $y=\rho\psi_k+x$, we have

$$\mathbb Ly=Ay\iff H\big(\mathbb L(\rho\psi_k+x)-A(\rho\psi_k+x)\big)=0\iff\mathbb Lx-HA(\rho\psi_k+x)=0\iff x=MHA(\rho\psi_k+x).$$

(ii) Since $\langle\mathbb Ly,\psi_k\rangle=0$, we have $\langle Ay,\psi_k\rangle=0$. Therefore,

$$\langle f(t,y)+\gamma\psi_k+\bar g,\psi_k\rangle_Y=\sum_{t=1}^{T}f(t,\rho\psi_k(t)+x(t))\psi_k(t)+\sum_{t=1}^{T}\gamma\psi_k(t)\psi_k(t)+\sum_{t=1}^{T}\bar g(t)\psi_k(t)=0.$$

Combining (H3) with (2.5), we have

$$\sum_{t=1}^{T}\psi_k(t)f(t,\rho\psi_k(t)+x(t))=\gamma\Big(\frac{p(0)}{|\delta_0|}\alpha^2+\frac{p(T)}{|\delta_1|}\beta^2-1\Big)=\theta,$$

where $\alpha=a_0\psi_k(0)-c_0\Delta\psi_k(0)$, $\beta=a_1\psi_k(T+1)-c_1\nabla\psi_k(T+1)$.

    Let

$$A^+=\{t\in\{1,2,\dots,T\}\mid\psi_k(t)>0\},$$
$$A^-=\{t\in\{1,2,\dots,T\}\mid\psi_k(t)<0\}.$$

Obviously,

$$A^+\cup A^-\neq\emptyset,\qquad\min\{|\psi_k(t)|\mid t\in A^+\cup A^-\}>0.$$

Lemma 3.1. Suppose that (H1) holds; then there exist constants $M_0$ and $M_1$ such that

$$\|x\|\le M_1(|\rho|\|\psi_k\|_Y)^{\alpha},$$

whenever $(\rho,x)$ is a solution of (2.7) with $|\rho|\ge M_0$.

Proof. Since

$$A(\rho\psi_k+x)=F(t,\rho\psi_k+x)+[\gamma\psi_k+\bar g,0,0]^{\top}=[f(t,\rho\psi_k+x)+\gamma\psi_k+\bar g,0,0]^{\top},$$

we have

$$\begin{aligned}\|x\|&\le\|M\|_{\mathrm{Im}(H)\to\mathrm{Im}(I-P)}\|H\|_{H\to\mathrm{Im}(H)}\big[\|\bar g\|_Y+\|\gamma\psi_k\|_Y+A(|\rho|\|\psi_k\|_Y+\|x\|_Y)^{\alpha}+B\big]\\&\le\|M\|_{\mathrm{Im}(H)\to\mathrm{Im}(I-P)}\|H\|_{H\to\mathrm{Im}(H)}\Big[\|\bar g\|_Y+\|\gamma\psi_k\|_Y+A(|\rho|\|\psi_k\|_Y)^{\alpha}\Big(1+\frac{\|x\|_Y}{|\rho|\|\psi_k\|_Y}\Big)^{\alpha}+B\Big]\\&\le\|M\|_{\mathrm{Im}(H)\to\mathrm{Im}(I-P)}\|H\|_{H\to\mathrm{Im}(H)}\Big[\|\bar g\|_Y+\|\gamma\psi_k\|_Y+A(|\rho|\|\psi_k\|_Y)^{\alpha}\Big(1+\frac{\alpha\|x\|_Y}{|\rho|\|\psi_k\|_Y}\Big)+B\Big]\\&=\|M\|_{\mathrm{Im}(H)\to\mathrm{Im}(I-P)}\|H\|_{H\to\mathrm{Im}(H)}\Big[\|\bar g\|_Y+\|\gamma\psi_k\|_Y+A(|\rho|\|\psi_k\|_Y)^{\alpha}\Big(1+\frac{\alpha}{(|\rho|\|\psi_k\|_Y)^{1-\alpha}}\cdot\frac{\|x\|_Y}{(|\rho|\|\psi_k\|_Y)^{\alpha}}\Big)+B\Big].\end{aligned}$$

Denote

$$D_0=\|M\|_{\mathrm{Im}(H)\to\mathrm{Im}(I-P)}\|H\|_{H\to\mathrm{Im}(H)}\big(\|\bar g\|_Y+\|\gamma\psi_k\|_Y+B\big),\qquad D_1=A\|M\|_{\mathrm{Im}(H)\to\mathrm{Im}(I-P)}\|H\|_{H\to\mathrm{Im}(H)}.$$

Furthermore, we have

$$\frac{\|x\|}{(|\rho|\|\psi_k\|_Y)^{\alpha}}\le\frac{D_0}{(|\rho|\|\psi_k\|_Y)^{\alpha}}+D_1+\frac{\alpha D_1}{(|\rho|\|\psi_k\|_Y)^{1-\alpha}}\cdot\frac{\|x\|_Y}{(|\rho|\|\psi_k\|_Y)^{\alpha}}\le\frac{D_0}{(|\rho|\|\psi_k\|_Y)^{\alpha}}+D_1+\frac{\alpha D_1}{(|\rho|\|\psi_k\|_Y)^{1-\alpha}}\cdot\frac{\|x\|}{(|\rho|\|\psi_k\|_Y)^{\alpha}}.$$

So, if we let

$$\frac{\alpha D_1}{(|\rho|\|\psi_k\|_Y)^{1-\alpha}}\le\frac12,$$

that is,

$$|\rho|\ge\frac{(2\alpha D_1)^{\frac{1}{1-\alpha}}}{\|\psi_k\|_Y}:=M_0,$$

    Thus,

    x(|ρ|ψkY)α2D0(M0ψkY)α+2D1:=M1.

    This implies that

    xM1(|ρ|ψkY)α.
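The passage from the first to the second line of the estimate above rests on the elementary inequality $(1+s)^{\alpha}\le1+\alpha s$ for $s\ge0$ and $\alpha\in[0,1)$, which follows from the concavity of $s\mapsto(1+s)^{\alpha}$; a quick numerical spot-check:

```python
# Spot-check of (1 + s)**alpha <= 1 + alpha * s for s >= 0, 0 <= alpha < 1,
# the elementary inequality used in the proof of Lemma 3.1.
ok = all((1 + s) ** a <= 1 + a * s + 1e-12
         for a in (0.0, 0.3, 0.5, 0.99)
         for s in (0.0, 0.1, 1.0, 10.0, 1e3))
print(ok)
```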

Lemma 3.2. Suppose that (H1) holds; then there exist constants $M_0$ and $\Gamma$ such that

$$\|x\|\le\Gamma\big(|\rho|\min\{|\psi_k(t)|\mid t\in A^+\cup A^-\}\big)^{\alpha},$$

whenever $(\rho,x)$ is a solution of (2.7) with $|\rho|\ge M_0$.

According to Lemma 3.2, choose a constant $\rho_0$ such that

$$\rho_0>\max\Big\{M_0,\ \Gamma\big(\rho_0\min\{|\psi_k(t)|\mid t\in A^+\cup A^-\}\big)^{\alpha}\Big\}.\tag{3.1}$$

Let

$$K:=\{x\in\mathrm{Im}(I-P)\mid x=MHA(\rho\psi_k+x),\ |\rho|\le\rho_0\}.$$

Then, for $\rho_0$ sufficiently large and $\rho=\rho_0$, we have

$$\rho\psi_k(t)+x(t)\ge\omega,\quad t\in A^+,\ x\in K,\tag{3.2}$$
$$\rho\psi_k(t)+x(t)\le-\omega,\quad t\in A^-,\ x\in K,\tag{3.3}$$

and for $\rho=-\rho_0$,

$$\rho\psi_k(t)+x(t)\le-\omega,\quad t\in A^+,\ x\in K,\tag{3.4}$$
$$\rho\psi_k(t)+x(t)\ge\omega,\quad t\in A^-,\ x\in K.\tag{3.5}$$

Theorem 3.3. Suppose that (A1), (A2) and (H1)–(H3) hold; then there exists a non-empty bounded set $\Omega_{\bar g}\subset\mathbb R$ such that the problem (1.1) has a solution if and only if $\theta\in\Omega_{\bar g}$. Furthermore, $\Omega_{\bar g}$ contains $\theta=0$ and has a non-empty interior.

Proof. We prove only the case (1.3) of (H2); the case (1.4) can be proved similarly.

From (1.3) and (3.2)–(3.5), it is not difficult to see that for $\rho=\rho_0$ with $\rho_0$ sufficiently large,

$$f(t,\rho\psi_k(t)+x(t))>0,\quad t\in A^+,\ x\in K,$$
$$f(t,\rho\psi_k(t)+x(t))<0,\quad t\in A^-,\ x\in K,$$

and for $\rho=-\rho_0$,

$$f(t,\rho\psi_k(t)+x(t))<0,\quad t\in A^+,\ x\in K,$$
$$f(t,\rho\psi_k(t)+x(t))>0,\quad t\in A^-,\ x\in K.$$

Therefore, for $\rho=\rho_0$ sufficiently large,

$$\psi_k(t)f(t,\rho\psi_k(t)+x(t))>0,\quad t\in A^+\cup A^-,\ x\in K,\tag{3.6}$$

and for $\rho=-\rho_0$,

$$\psi_k(t)f(t,\rho\psi_k(t)+x(t))<0,\quad t\in A^+\cup A^-,\ x\in K.\tag{3.7}$$

Let

$$C:=\{x\in\mathrm{Im}(I-P)\mid\|x\|\le\rho_0\}.$$

Define $T_\rho:\mathrm{Im}(I-P)\to\mathrm{Im}(I-P)$ by

$$T_\rho x:=MHA(\rho\psi_k+x).$$

Obviously, $T_\rho$ is completely continuous. By (3.1), for $x\in C$ and $\rho\in[-\rho_0,\rho_0]$,

$$\|T_\rho x\|\le\Gamma\big(|\rho|\min\{|\psi_k(t)|\mid t\in A^+\cup A^-\}\big)^{\alpha}\le\Gamma\big(\rho_0\min\{|\psi_k(t)|\mid t\in A^+\cup A^-\}\big)^{\alpha}\le\rho_0,$$

i.e.,

$$T_\rho(C)\subseteq C.$$

According to Schauder's fixed-point theorem, $T_\rho$ has a fixed point in $C$, i.e., $T_\rho x=x$. It can be seen from Lemma 2.10 that the problem (1.1) is equivalent to the following system:

$$\Psi(\rho,x)=\theta,\quad(\rho,x)\in S_{\bar g},$$

where

$$S_{\bar g}:=\{(\rho,x)\in\mathbb R\times\mathrm{Im}(I-P)\mid x=MHA(\rho\psi_k+x)\},$$
$$\Psi(\rho,x):=\sum_{s=1}^{T}\psi_k(s)f(s,\rho\psi_k(s)+x(s)).$$

At this time, the $\Omega_{\bar g}$ in Theorem 3.3 can be given by $\Omega_{\bar g}=\Psi(S_{\bar g})$: there exists a solution to the problem (1.1) exactly for $\theta\in\Omega_{\bar g}$.

From (3.6), (3.7) and $A^+\cup A^-\neq\emptyset$, we can deduce that for any $x\in K$,

$$\sum_{s=1}^{T}\psi_k(s)f(s,-\rho_0\psi_k(s)+x(s))<0,\qquad\sum_{s=1}^{T}\psi_k(s)f(s,\rho_0\psi_k(s)+x(s))>0.$$

Thus,

$$\Psi(-\rho_0,x)<0<\Psi(\rho_0,x),\quad x\in K.\tag{3.8}$$

According to Lemma 2.2, $S_{\bar g}\cap(\mathbb R\times\overline B_{\rho_0})$ contains a connected branch $\xi_{-\rho_0,\rho_0}$ connecting $\{-\rho_0\}\times C$ and $\{\rho_0\}\times C$. Combined with (3.8), $\Omega_{\bar g}$ contains $\theta=0$ and has a non-empty interior.

Theorem 3.4. Suppose that (A1), (A2) and (H2)–(H4) hold, and let $\Omega_{\bar g}$ be as in Theorem 3.3. Then there exists a non-empty set $\Omega'_{\bar g}\subseteq\Omega_{\bar g}\setminus\{0\}$ such that the problem (1.1) has at least two solutions for $\theta\in\Omega'_{\bar g}$.

Proof. We prove only the case of (1.3); the case of (1.4) can be proved similarly. Since the condition (H4) implies (H1), using Theorem 3.3 we know that there exists $\rho_0>0$ such that

$$\Psi(\rho_0,x)>0,\quad x\in K.$$

Let

$$\delta:=\min\{\Psi(\rho_0,x)\mid x\in K\},$$

then $\delta>0$.

    Next, we prove that problem (1.1) has at least two solutions for any θ(0,δ).

Let

$$S_{\bar g}:=\{(\rho,x)\in\mathbb R\times\mathrm{Im}(I-P)\mid x=MHA(\rho\psi_k+x)\},$$
$$\overline K:=\{x\in\mathrm{Im}(I-P)\mid(\rho,x)\in S_{\bar g}\}.$$

By (H4), there exists a constant $A_0$ such that

$$\|x\|\le A_0,\quad x\in\overline K.$$

Similar to the derivation of Theorem 3.3, there exists $\rho^*>\rho_0$ such that the following results hold:

(i) For $\rho\ge\rho^*$,

$$\psi_k(t)f(t,\rho\psi_k(t)+x(t))>0,\quad t\in A^+\cup A^-,\ x\in\overline K,\tag{3.9}$$

(ii) For $\rho\le-\rho^*$,

$$\psi_k(t)f(t,\rho\psi_k(t)+x(t))<0,\quad t\in A^+\cup A^-,\ x\in\overline K.\tag{3.10}$$

Let

$$C:=\{x\in\mathrm{Im}(I-P)\mid\|x\|\le A_0\}.$$

According to (H4), (3.9) and (3.10), we have

$$\lim_{|\rho|\to\infty}\sum_{s=1}^{T}\psi_k(s)f(s,\rho\psi_k(s)+x(s))=0$$

uniformly for $x\in\overline K$, i.e.,

$$\lim_{|\rho|\to\infty}\Psi(\rho,x)=0,\quad x\in\overline K.$$

Therefore, there exists a constant $l$ with $l>\rho^*>\rho_0>0$ such that $S_{\bar g}$ contains a connected branch $\xi_{-l,l}$ between $\{-l\}\times C$ and $\{l\}\times C$, and

$$\max\{|\Psi(\rho,x)|\mid\rho=\pm l,\ (\rho,x)\in\xi_{-l,l}\}\le\max\{|\Psi(\rho,x)|\mid(\rho,x)\in\{-l,l\}\times\overline K\}\le\frac{\theta}{3}.$$

It can be seen from the connectivity of $\xi_{-l,l}$ that there exist $(\rho_1,x_1)$ and $(\rho_2,x_2)$ in $\xi_{-l,l}\ (\subseteq S_{\bar g})$ such that

$$\Psi(\rho_1,x_1)=\theta,\qquad\Psi(\rho_2,x_2)=\theta,$$

where $\rho_1\in(-l,\rho_0)$ and $\rho_2\in(\rho_0,l)$. It can be proved that $\rho_1\psi_k+x_1$ and $\rho_2\psi_k+x_2$ are two different solutions of the problem (1.1).

In this section, we give a concrete example of the application of our main results, Theorems 3.3 and 3.4. We choose $T=3$, $a_0=d_0=b_1=c_1=0$ and $a_1=d_1=b_0=c_0=1$, which implies that the interval becomes $[1,3]_{\mathbb Z}$ and the conditions (A1), (A2) hold.

First, we consider the eigenpairs of the corresponding linear problem

$$\begin{cases}-\Delta^2y(t-1)=\lambda y(t),\quad t\in[1,3]_{\mathbb Z},\\ y(0)=\lambda\Delta y(0),\qquad\lambda y(4)=\nabla y(4).\end{cases}\tag{4.1}$$

Writing (4.1) as a linear system in $(y(0),y(1),y(2),y(3),y(4))^{\top}$, define the equivalent matrix of (4.1) as follows:

$$A_\lambda=\begin{pmatrix}1+\lambda&-\lambda&0&0&0\\-1&2-\lambda&-1&0&0\\0&-1&2-\lambda&-1&0\\0&0&-1&2-\lambda&-1\\0&0&0&1&\lambda-1\end{pmatrix}.$$

Consequently, $A_\lambda y=0$ is equivalent to (4.1). Letting $|A_\lambda|=0$, we have

$$\lambda_1=-1.4657,\quad\lambda_2=0.1149,\quad\lambda_3=0.8274,\quad\lambda_4=2.0911,\quad\lambda_5=3.4324,$$

which are the eigenvalues of (4.1). Next, we choose $\lambda=\lambda_1=-1.4657$; then we obtain the corresponding eigenfunction

$$\psi_1(t)=\begin{cases}1,&t=1,\\3.4657,&t=2,\\3.4657^2-1,&t=3.\end{cases}$$
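These eigenvalues can be spot-checked numerically. The sketch below (assuming the boundary conditions as written in (4.1)) builds, for each listed $\lambda_i$, a solution of $-\Delta^2y(t-1)=\lambda y(t)$ satisfying the left boundary condition $y(0)=\lambda\Delta y(0)$, and evaluates the residual of the right boundary condition $\lambda y(4)=\nabla y(4)$; up to the rounding of the $\lambda_i$ to four decimals, each residual is zero:

```python
def bc_residual(lam):
    """Shooting residual for (4.1): seed the left boundary condition
    y(0) = lam * (y(1) - y(0)) via y(0) = lam, y(1) = 1 + lam, iterate
    -Delta^2 y(t-1) = lam * y(t), i.e. y(t+1) = (2 - lam)*y(t) - y(t-1),
    and return the defect of lam * y(4) = y(4) - y(3)."""
    y = [lam, 1 + lam]
    for _ in range(3):                        # interior equations, t = 1, 2, 3
        y.append((2 - lam) * y[-1] - y[-2])
    return lam * y[4] - (y[4] - y[3])

eigenvalues = [-1.4657, 0.1149, 0.8274, 2.0911, 3.4324]
print(max(abs(bc_residual(lam)) for lam in eigenvalues))  # small (rounding only)
print(abs(bc_residual(1.0)))  # a non-eigenvalue leaves a visible defect
```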

Example 4.1. Consider the following problem:

$$\begin{cases}-\Delta^2y(t-1)=-1.4657y(t)+f(t,y(t))+\psi_1(t)+\bar g(t),\quad t\in[1,3]_{\mathbb Z},\\ y(0)=-1.4657\Delta y(0),\qquad-1.4657y(4)=\nabla y(4),\end{cases}\tag{4.2}$$

where

$$f(t,s)=\begin{cases}ts^3,&s\in[-1,1],\\t\sqrt[5]{s},&s\in(-\infty,-1)\cup(1,+\infty),\end{cases}$$

and

$$\bar g(t)=\begin{cases}0,&t=1,\\3.4657^2-1,&t=2,\\-3.4657,&t=3.\end{cases}$$

Then, for $f(t,y(t))$, we have $|f(t,y(t))|\le3|y(t)|^{\frac13}$, so (H1) holds with $\alpha=\frac13$. If we choose $\omega=1$, then $yf(t,y)>0$ for $|y(t)|>1$. For $\bar g(t)$, we have $\sum_{s=1}^{3}\bar g(s)\psi_1(s)=0$.
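These three hypotheses can be checked numerically as well; the sketch below evaluates (H1)–(H3) for the data of Example 4.1 on a sample grid:

```python
import math

psi1 = {1: 1.0, 2: 3.4657, 3: 3.4657**2 - 1}
g_bar = {1: 0.0, 2: 3.4657**2 - 1, 3: -3.4657}

def f(t, s):
    """Nonlinearity of Example 4.1: t*s^3 on [-1,1], t*s^(1/5) outside."""
    if -1 <= s <= 1:
        return t * s**3
    return t * math.copysign(abs(s) ** 0.2, s)  # real fifth root

# (H1): |f(t,y)| <= 3|y|^(1/3) on a grid, so alpha = 1/3 works.
ys = [k / 10 for k in range(-50, 51)]
print(all(abs(f(t, y)) <= 3 * abs(y) ** (1/3) + 1e-12
          for t in (1, 2, 3) for y in ys))
# (H2): y*f(t,y) > 0 for |y| > omega = 1.
print(all(y * f(t, y) > 0 for t in (1, 2, 3) for y in (-4.0, -1.5, 1.5, 4.0)))
# (H3): the sum of g_bar * psi1 vanishes.
print(abs(sum(g_bar[t] * psi1[t] for t in (1, 2, 3))) < 1e-9)
```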

Therefore, the problem (4.2) satisfies the conditions (A1), (A2) and (H1)–(H3), which implies that the problem (4.2) has at least one solution by Theorem 3.3.

Example 4.2. Consider the following problem:

$$\begin{cases}-\Delta^2y(t-1)=-1.4657y(t)+f(t,y(t))+\psi_1(t)+\bar g(t),\quad t\in[1,3]_{\mathbb Z},\\ y(0)=-1.4657\Delta y(0),\qquad-1.4657y(4)=\nabla y(4),\end{cases}\tag{4.3}$$

where

$$f(t,s)=tse^{-|s|},\quad t\in[1,3]_{\mathbb Z},$$

and

$$\bar g(t)=\begin{cases}0,&t=1,\\1-3.4657^2,&t=2,\\3.4657,&t=3.\end{cases}$$

Then, for $f(t,y(t))$, we always have $yf(t,y)=ty^2e^{-|y|}>0$ for $y\neq0$; moreover, $f$ is continuous and satisfies

$$\lim_{|y|\to\infty}f(t,y)=0.$$

For $\bar g(t)$, we have $\sum_{s=1}^{3}\bar g(s)\psi_1(s)=0$.
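As with Example 4.1, the hypotheses (H2)–(H4) for this example admit a quick numerical spot-check:

```python
import math

psi1 = {1: 1.0, 2: 3.4657, 3: 3.4657**2 - 1}
g_bar = {1: 0.0, 2: 1 - 3.4657**2, 3: 3.4657}

def f(t, s):
    """Nonlinearity of Example 4.2: f(t, s) = t * s * exp(-|s|)."""
    return t * s * math.exp(-abs(s))

# (H2): y * f(t, y) = t * y^2 * e^{-|y|} > 0 for y != 0.
print(all(y * f(t, y) > 0 for t in (1, 2, 3) for y in (-5.0, -0.1, 0.1, 5.0)))
# (H4): f(t, y) -> 0 as |y| -> infinity (sampled at large |y|).
print(all(abs(f(t, y)) < 1e-6 for t in (1, 2, 3) for y in (-50.0, 50.0)))
# (H3): the sum of g_bar * psi1 vanishes.
print(abs(sum(g_bar[t] * psi1[t] for t in (1, 2, 3))) < 1e-9)
```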

Therefore, the problem (4.3) satisfies the conditions (A1), (A2) and (H2)–(H4), which implies that the problem (4.3) has at least two solutions by Theorem 3.4.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

Supported by the National Natural Science Foundation of China (Grant No. 11961060) and the Natural Science Foundation of Qinghai Province (No. 2024-ZJ-931).

    The authors declare that there are no conflicts of interest.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
