Research article

A multi-band centroid contrastive reconstruction fusion network for motor imagery electroencephalogram signal decoding

  • Received: 11 October 2023 Revised: 01 November 2023 Accepted: 07 November 2023 Published: 15 November 2023
  • Motor imagery (MI) brain-computer interfaces (BCIs) assist users in establishing direct communication between their brain and external devices by decoding the movement intention from human electroencephalogram (EEG) signals. However, cerebral cortical potentials are highly rhythmic with distinct sub-band features, and different experimental situations and subjects carry different categories of semantic information in specific sample target spaces. Feature fusion can yield more discriminative features, but simply fusing features from different embedding spaces makes the model's global loss difficult to converge and ignores the complementarity of the features. Considering the similarity and category contribution of different sub-band features, we propose a multi-band centroid contrastive reconstruction fusion network (MB-CCRF). We obtain multi-band spatio-temporal features by frequency division, preserving the task-related rhythmic features of different EEG signals; we use a multi-stream cross-layer connected convolutional network to compute a deep feature representation for each sub-band separately; and we propose a centroid contrastive reconstruction fusion module, which maps features of different sub-bands and categories into the same shared embedding space by comparing them with category prototypes, reconstructing the feature semantic structure so that the global loss over the fused features converges more easily. Finally, we use a learning mechanism to model the similarity between channel features and use it as the weight for fusing sub-band features, thus enhancing the more discriminative features and suppressing the useless ones. The experimental accuracy is 79.96% on the BCI competition IV-2a dataset. Moreover, the classification performance of the sub-band features of different subjects is verified by comparison tests, the category propensity of different sub-band features is verified by confusion-matrix tests and the distribution of each sub-band feature and the fused feature across classes is shown by visual analysis, revealing the importance of different sub-band features for the EEG-based MI classification task.

    Citation: Jiacan Xu, Donglin Li, Peng Zhou, Chunsheng Li, Zinan Wang, Shenghao Tong. A multi-band centroid contrastive reconstruction fusion network for motor imagery electroencephalogram signal decoding[J]. Mathematical Biosciences and Engineering, 2023, 20(12): 20624-20647. doi: 10.3934/mbe.2023912




    Digital topology, with its interesting applications, has been a popular topic in computer science and mathematics for several decades. Many researchers, such as Rosenfeld [21,22], Kong [17,18], Kopperman [19], Boxer, Herman [14], Kovalevsky [20], Bertrand and Malgouyres, have sought to obtain information about digital objects using topology and algebraic topology.

    The first study in this area was done by Rosenfeld [21] at the end of the 1970s. He introduced the concept of continuity of a function from one digital image to another. Later, Boxer [1] presents continuous functions, retractions and homotopies from the digital viewpoint. Boxer et al. [7] calculate the simplicial homology groups of some special digital surfaces and compute their Euler characteristics.

    Ege and Karaca [9] introduce the universal coefficient theorem and the Eilenberg-Steenrod axioms for digital simplicial homology groups. They also obtain some results on the Künneth formula and the Hurewicz theorem in digital images. Ege and Karaca [10] investigate the digital simplicial cohomology groups and especially define the cup product. For other significant studies, see [13,12,16].

    Karaca and Cinar [15] construct the digital singular cohomology groups of digital images equipped with the Khalimsky topology. Then they examine the Eilenberg-Steenrod axioms, the universal coefficient theorem and the Künneth formula for this cohomology theory. They also introduce a cup product and give general properties of this new operation. Cinar and Karaca [8] calculate the digital homology groups of various digital surfaces and give some results related to Euler characteristics for some digitally connected surfaces.

    This paper is organized as follows: First, some information about the digital topology is given in the section of preliminaries. In the next section, we define the smash product for digital images. Then, we show that this product has some properties such as associativity, distributivity, and commutativity. Finally, we investigate a suspension and a cone for any digital image and give some examples.

    Let $\mathbb{Z}^n$ be the set of lattice points in the $n$-dimensional Euclidean space. We say that $(X,\kappa)$ is a digital image, where $X$ is a finite subset of $\mathbb{Z}^n$ and $\kappa$ is an adjacency relation for the members of $X$. Adjacency relations on $\mathbb{Z}^n$ are defined as follows: Two points $p=(p_1,p_2,\ldots,p_n)$ and $q=(q_1,q_2,\ldots,q_n)$ in $\mathbb{Z}^n$ are called $c_l$-adjacent [2] for $1\le l\le n$ if there are at most $l$ indices $i$ such that $|p_i-q_i|=1$, and for all other indices $i$ such that $|p_i-q_i|\ne 1$, $p_i=q_i$. It is easy to see that $c_1=2$ (see Figure 1) in $\mathbb{Z}$,

    Figure 1.  2-adjacency in $\mathbb{Z}$.

    $c_1=4$ and $c_2=8$ (see Figure 2) in $\mathbb{Z}^2$,

    Figure 2.  4- and 8-adjacencies in $\mathbb{Z}^2$.

    and $c_1=6$, $c_2=18$ and $c_3=26$ (see Figure 3) in $\mathbb{Z}^3$.

    Figure 3.  6-, 18- and 26-adjacencies in $\mathbb{Z}^3$.
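These adjacency counts are easy to verify computationally. Below is a minimal sketch (the helper names `cl_adjacent` and `count_cl_neighbors` are ours, not from the paper) that counts the $c_l$-neighbors of the origin in $\mathbb{Z}^n$:

```python
from itertools import product

def cl_adjacent(p, q, l):
    """True if p and q are c_l-adjacent: at most l coordinates differ,
    each by exactly 1, and all remaining coordinates are equal."""
    diffs = [abs(a - b) for a, b in zip(p, q)]
    if any(d > 1 for d in diffs):          # some coordinate differs by more than 1
        return False
    changed = sum(d == 1 for d in diffs)   # number of coordinates that differ
    return 1 <= changed <= l               # distinct points, at most l changed coordinates

def count_cl_neighbors(n, l):
    """Number of c_l-neighbors of the origin in Z^n."""
    origin = (0,) * n
    return sum(cl_adjacent(origin, q, l) for q in product((-1, 0, 1), repeat=n))

print(count_cl_neighbors(1, 1))                                            # 2 in Z
print(count_cl_neighbors(2, 1), count_cl_neighbors(2, 2))                  # 4 and 8 in Z^2
print(count_cl_neighbors(3, 1), count_cl_neighbors(3, 2), count_cl_neighbors(3, 3))  # 6, 18, 26 in Z^3
```

The printed counts reproduce the figures: 2; 4 and 8; 6, 18 and 26.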

    A $\kappa$-neighbor of $p$ in $\mathbb{Z}^n$ is a point of $\mathbb{Z}^n$ which is $\kappa$-adjacent to $p$. A digital image $X$ is $\kappa$-connected [14] if and only if for each pair of distinct points $x,y\in X$, there exists a set $\{a_0,a_1,\ldots,a_r\}$ of points of $X$ such that $x=a_0$, $y=a_r$, and $a_i$ and $a_{i+1}$ are $\kappa$-adjacent for $i\in\{0,1,\ldots,r-1\}$. A $\kappa$-component of a digital image $X$ is a maximal $\kappa$-connected subset of $X$. Let $a,b\in\mathbb{Z}$ with $a<b$. A digital interval [1] is defined as follows:

    $[a,b]_{\mathbb{Z}}=\{z\in\mathbb{Z}\mid a\le z\le b\},$

    where the 2-adjacency relation is assumed.
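$\kappa$-connectivity and $\kappa$-components are graph-theoretic notions, so on a finite digital image they can be computed by a breadth-first search over the adjacency relation. A minimal sketch (helper names are ours), illustrated with 2-adjacency on $\mathbb{Z}$:

```python
from collections import deque

def kappa_components(X, adjacent):
    """Partition a finite digital image X into its kappa-components,
    where `adjacent` is the adjacency predicate playing the role of kappa."""
    X = list(X)
    seen, comps = set(), []
    for start in X:
        if start in seen:
            continue
        comp, queue = [start], deque([start])
        seen.add(start)
        while queue:                      # breadth-first search from `start`
            p = queue.popleft()
            for q in X:
                if q not in seen and adjacent(p, q):
                    seen.add(q)
                    comp.append(q)
                    queue.append(q)
        comps.append(comp)
    return comps

def is_kappa_connected(X, adjacent):
    return len(kappa_components(X, adjacent)) == 1

two_adjacent = lambda a, b: abs(a - b) == 1   # 2-adjacency on Z

print(is_kappa_connected(range(0, 4), two_adjacent))  # the interval [0,3]_Z is 2-connected
print(is_kappa_connected({0, 2}, two_adjacent))       # {0,2} is not
```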

    In a digital image $(X,\kappa)$, a digital $\kappa$-path [3] from $x$ to $y$ is a $(2,\kappa)$-continuous function $f:[0,m]_{\mathbb{Z}}\to X$ such that $f(0)=x$ and $f(m)=y$, where $x,y\in X$. Let $f:(X,\kappa)\to(Y,\lambda)$ be a function. If the image under $f$ of every $\kappa$-connected subset of $X$ is $\lambda$-connected, then $f$ is called $(\kappa,\lambda)$-continuous [2].

    A function $f:(X,\kappa)\to(Y,\lambda)$ is $(\kappa,\lambda)$-continuous [22,2] if and only if for any $\kappa$-adjacent points $a,b\in X$, the points $f(a)$ and $f(b)$ are equal or $\lambda$-adjacent. A function $f:(X,\kappa)\to(Y,\lambda)$ is an isomorphism [4] if $f$ is a $(\kappa,\lambda)$-continuous bijection and $f^{-1}$ is $(\lambda,\kappa)$-continuous.
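The adjacency characterization makes $(\kappa,\lambda)$-continuity directly checkable on finite images: test every $\kappa$-adjacent pair. A sketch with illustrative names, using 2-adjacency on digital intervals:

```python
def is_continuous(f, X, kappa_adj, lam_adj):
    """(kappa,lambda)-continuity: kappa-adjacent points must map to
    equal or lambda-adjacent points."""
    for a in X:
        for b in X:
            if kappa_adj(a, b):
                fa, fb = f(a), f(b)
                if fa != fb and not lam_adj(fa, fb):
                    return False
    return True

two_adj = lambda a, b: abs(a - b) == 1        # 2-adjacency on Z
interval = list(range(0, 6))                  # the digital interval [0,5]_Z

print(is_continuous(lambda x: x // 2, interval, two_adj, two_adj))  # True
print(is_continuous(lambda x: 2 * x, interval, two_adj, two_adj))   # False: 0 and 1 map to 0 and 2
```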

    Definition 2.1. [2] Suppose that $f,g:(X,\kappa)\to(Y,\lambda)$ are $(\kappa,\lambda)$-continuous maps. If there exist a positive integer $m$ and a function

    $F:X\times[0,m]_{\mathbb{Z}}\to Y$

    with the following conditions, then $F$ is called a digital $(\kappa,\lambda)$-homotopy between $f$ and $g$, and we say that $f$ and $g$ are digitally $(\kappa,\lambda)$-homotopic in $Y$, denoted by $f\simeq_{(\kappa,\lambda)}g$.

    (i) For all $x\in X$, $F(x,0)=f(x)$ and $F(x,m)=g(x)$.

    (ii) For all $x\in X$, the map $F_x:[0,m]_{\mathbb{Z}}\to Y$ defined by $F_x(t)=F(x,t)$ is $(2,\lambda)$-continuous.

    (iii) For all $t\in[0,m]_{\mathbb{Z}}$, the map $F_t:X\to Y$ defined by $F_t(x)=F(x,t)$ is $(\kappa,\lambda)$-continuous.

    A digital image (X,κ) is κ-contractible [1] if the identity map on X is (κ,κ)-homotopic to a constant map on X.
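Definition 2.1 can likewise be verified mechanically on finite images: check the endpoint condition and the continuity of every section $F_x$ and $F_t$. The sketch below (names ours) confirms that $F(x,t)=\max(x-t,0)$ is a digital $(2,2)$-homotopy from the identity to the constant map $0$ on $[0,2]_{\mathbb{Z}}$, so the digital interval is 2-contractible:

```python
def two_adj(a, b):
    return abs(a - b) == 1  # 2-adjacency on Z

def pairwise_continuous(f, dom, adj_dom, adj_cod):
    """Adjacent points of dom must map to equal or adjacent points."""
    return all(f(a) == f(b) or adj_cod(f(a), f(b))
               for a in dom for b in dom if adj_dom(a, b))

def is_homotopy(F, f, g, X, m, kappa_adj, lam_adj):
    """Check the three conditions of a digital (kappa,lambda)-homotopy."""
    T = range(m + 1)                                                  # [0,m]_Z
    endpoints = all(F(x, 0) == f(x) and F(x, m) == g(x) for x in X)   # condition (i)
    sections_t = all(pairwise_continuous(lambda t, x=x: F(x, t), T, two_adj, lam_adj)
                     for x in X)                                      # condition (ii)
    sections_x = all(pairwise_continuous(lambda x, t=t: F(x, t), X, kappa_adj, lam_adj)
                     for t in T)                                      # condition (iii)
    return endpoints and sections_t and sections_x

X = list(range(0, 3))                  # [0,2]_Z
F = lambda x, t: max(x - t, 0)         # slide every point toward 0
print(is_homotopy(F, lambda x: x, lambda x: 0, X, 2, two_adj, two_adj))  # True
```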

    A $(\kappa,\lambda)$-continuous map $f:X\to Y$ is a $(\kappa,\lambda)$-homotopy equivalence [3] if there exists a $(\lambda,\kappa)$-continuous map $g:Y\to X$ such that

    $g\circ f\simeq_{(\kappa,\kappa)}1_X$   and   $f\circ g\simeq_{(\lambda,\lambda)}1_Y,$

    where $1_X$ and $1_Y$ are the identity maps on $X$ and $Y$, respectively. Moreover, we say that $X$ and $Y$ have the same $(\kappa,\lambda)$-homotopy type.

    For the cartesian product of two digital images $(X_1,\kappa_1)$ and $(X_2,\kappa_2)$, the adjacency relation [6] is defined as follows: Two points $(x_0,y_0)$ and $(x_1,y_1)$, where $x_0,x_1\in X_1$ and $y_0,y_1\in X_2$, are $k(\kappa_1,\kappa_2)$-adjacent in $X_1\times X_2$ if and only if one of the following is satisfied:

    $x_0=x_1$ and $y_0=y_1$; or

    $x_0=x_1$ and $y_0$ and $y_1$ are $\kappa_2$-adjacent; or

    $x_0$ and $x_1$ are $\kappa_1$-adjacent and $y_0=y_1$; or

    $x_0$ and $x_1$ are $\kappa_1$-adjacent and $y_0$ and $y_1$ are $\kappa_2$-adjacent.
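The product adjacency is also straightforward to implement. In the sketch below (names ours; for counting neighbors we exclude the trivial case of equal points), taking $c_1$-adjacency on each factor of $\mathbb{Z}\times\mathbb{Z}$ reproduces 8-adjacency: every point of the product has exactly 8 neighbors.

```python
def product_adjacent(p, q, adj1, adj2):
    """k(kappa1,kappa2)-adjacency on X1 x X2: each coordinate pair is
    equal or adjacent in its own image (distinct points only)."""
    (x0, y0), (x1, y1) = p, q
    first_ok = x0 == x1 or adj1(x0, x1)
    second_ok = y0 == y1 or adj2(y0, y1)
    return p != q and first_ok and second_ok

two_adj = lambda a, b: abs(a - b) == 1  # c_1-adjacency on Z

neighbors = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)
             if product_adjacent((0, 0), (i, j), two_adj, two_adj)]
print(len(neighbors))  # 8: k(2,2) on Z x Z coincides with 8-adjacency
```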

    Definition 2.2. [3] A $(\kappa,\lambda)$-continuous surjection $f:X\to Y$ is $(\kappa,\lambda)$-shy if

    for each $y\in Y$, $f^{-1}(\{y\})$ is $\kappa$-connected, and

    for each $y_0,y_1\in Y$, if $y_0$ and $y_1$ are $\lambda$-adjacent, then $f^{-1}(\{y_0,y_1\})$ is $\kappa$-connected.
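Both shyness conditions are finite checks: preimages of points and of $\lambda$-adjacent pairs must be $\kappa$-connected. In the sketch below (names ours), the map $x\mapsto\lfloor x/2\rfloor$ from $[0,3]_{\mathbb{Z}}$ onto $[0,1]_{\mathbb{Z}}$ is $(2,2)$-shy, while the parity map $x\mapsto x\bmod 2$ is a continuous surjection that is not shy:

```python
from collections import deque

def is_connected(S, adj):
    """Connectivity of a finite point set via breadth-first search."""
    S = list(S)
    if not S:
        return False
    seen, queue = {S[0]}, deque([S[0]])
    while queue:
        p = queue.popleft()
        for q in S:
            if q not in seen and adj(p, q):
                seen.add(q)
                queue.append(q)
    return len(seen) == len(S)

def is_shy(f, X, Y, kappa_adj, lam_adj):
    """Check both shyness conditions for a continuous surjection f: X -> Y."""
    preimage = lambda ys: [x for x in X if f(x) in ys]
    points_ok = all(is_connected(preimage({y}), kappa_adj) for y in Y)
    pairs_ok = all(is_connected(preimage({y0, y1}), kappa_adj)
                   for y0 in Y for y1 in Y if lam_adj(y0, y1))
    return points_ok and pairs_ok

two_adj = lambda a, b: abs(a - b) == 1
X, Y = list(range(0, 4)), [0, 1]                            # [0,3]_Z onto [0,1]_Z
print(is_shy(lambda x: x // 2, X, Y, two_adj, two_adj))     # True
print(is_shy(lambda x: x % 2, X, Y, two_adj, two_adj))      # False: preimages {0,2}, {1,3}
```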

    Theorem 2.3. [5] For a continuous surjection $f:(X,\kappa)\to(Y,\lambda)$, if $f$ is an isomorphism, then $f$ is shy. On the other hand, if $f$ is shy and injective, then $f$ is an isomorphism.

    The wedge of two digital images $(X,\kappa)$ and $(Y,\lambda)$, denoted by $X\vee Y$, is the union of the digital images $(X,\mu)$ and $(Y,\mu)$, where [4]

    $X$ and $Y$ have a single common point $p$;

    if $x\in X$ and $y\in Y$ are $\mu$-adjacent, then either $x=p$ or $y=p$;

    $(X,\mu)$ and $(X,\kappa)$ are isomorphic; and

    $(Y,\mu)$ and $(Y,\lambda)$ are isomorphic.

    Theorem 2.4. [5] Two continuous surjections

    $f:(A,\alpha)\to(C,\gamma)$   and   $g:(B,\beta)\to(D,\delta)$

    are shy maps if and only if $f\times g:(A\times B,k(\alpha,\beta))\to(C\times D,k(\gamma,\delta))$ is a shy map.

    Sphere-like digital images are defined as follows [4]:

    $S_n=[-1,1]_{\mathbb{Z}}^{n+1}\setminus\{0_{n+1}\}\subset\mathbb{Z}^{n+1},$

    where $0_n$ is the origin point of $\mathbb{Z}^n$. For $n=0$ and $n=1$, the sphere-like digital images are shown in Figure 4:

    $S_0=\{c_0=(1,0),\ c_1=(-1,0)\},$
    $S_1=\{c_0=(1,0),\ c_1=(1,1),\ c_2=(0,1),\ c_3=(-1,1),\ c_4=(-1,0),\ c_5=(-1,-1),\ c_6=(0,-1),\ c_7=(1,-1)\}.$

    Figure 4.  Digital 0-sphere $S_0$ and digital 1-sphere $S_1$.
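The definition of $S_n$ translates directly into code. The sketch below (the helper name `digital_sphere` is ours) generates the point sets and reproduces the cardinalities $|S_0|=2$ and $|S_1|=8$:

```python
from itertools import product

def digital_sphere(n):
    """S_n = [-1,1]_Z^{n+1} minus the origin of Z^{n+1}."""
    return [p for p in product((-1, 0, 1), repeat=n + 1) if any(c != 0 for c in p)]

print(len(digital_sphere(0)))  # 2 points
print(len(digital_sphere(1)))  # 8 points around the origin in Z^2
print(len(digital_sphere(2)))  # 26 points in Z^3
```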

    In this section, we define the digital smash product, which has some important relations with digital homotopy theory.

    Definition 3.1. Let $(X,\kappa)$ and $(Y,\lambda)$ be two digital images. The digital smash product $X\wedge Y$ is defined to be the quotient digital image $(X\times Y)/(X\vee Y)$ with the adjacency relation $k(\kappa,\lambda)$, where $X\vee Y$ is regarded as a subset of $X\times Y$.
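Counting points of the smash product needs no geometry: the whole subset $X\vee Y\subset X\times Y$ (every pair containing a basepoint) collapses to a single class. A sketch with illustrative names; the explicit basepoints are our assumption, since the paper fixes them implicitly:

```python
def smash_points(X, Y, x0, y0):
    """Point classes of (X x Y)/(X v Y): one collapsed basepoint class
    plus all pairs avoiding both basepoints x0 and y0."""
    classes = {"*"}          # the class of the collapsed wedge X v Y
    for x in X:
        for y in Y:
            if x != x0 and y != y0:
                classes.add((x, y))
    return classes

S0 = [-1, 1]
print(len(smash_points(S0, S0, 1, 1)))  # 2: S_0 ^ S_0 has the same size as S_0
```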

    Before giving some properties of the digital smash product, we prove some theorems which will be used later.

    Theorem 3.2. Let $X_a$ and $Y_a$ be digital images for each element $a$ of an index set $A$. For each $a\in A$, if $f_a\simeq_{(\kappa,\lambda)}g_a:X_a\to Y_a$, then

    $\prod_{a\in A}f_a\simeq_{(\kappa^n,\lambda^n)}\prod_{a\in A}g_a,$

    where $n$ is the cardinality of the set $A$.

    Proof. Let $F_a:X_a\times[0,m]_{\mathbb{Z}}\to Y_a$ be a digital $(\kappa,\lambda)$-homotopy between $f_a$ and $g_a$, where $[0,m]_{\mathbb{Z}}$ is a digital interval. Then

    $F:\left(\prod_{a\in A}X_a\right)\times[0,m]_{\mathbb{Z}}\to\prod_{a\in A}Y_a$

    defined by

    $F((x_a),t)=(F_a(x_a,t))$

    is a digital continuous function, where $t\in[0,m]_{\mathbb{Z}}$, since the functions $F_a$ are digitally continuous for each $a\in A$. Therefore $F$ is a digital $(\kappa^n,\lambda^n)$-homotopy between $\prod_{a\in A}f_a$ and $\prod_{a\in A}g_a$.

    Theorem 3.3. If each $f_a:X_a\to Y_a$ is a digital $(\kappa,\lambda)$-homotopy equivalence for all $a\in A$, then $\prod_{a\in A}f_a$ is a digital $(\kappa^n,\lambda^n)$-homotopy equivalence, where $n$ is the cardinality of the set $A$.

    Proof. Let $g_a:Y_a\to X_a$ be a $(\lambda,\kappa)$-homotopy inverse of $f_a$ for each $a\in A$. Then we obtain the following relations:

    $\left(\prod_{a\in A}g_a\right)\circ\left(\prod_{a\in A}f_a\right)=\prod_{a\in A}(g_a\circ f_a)\simeq_{(\kappa^n,\kappa^n)}\prod_{a\in A}1_{X_a}=1_{\prod_{a\in A}X_a},$
    $\left(\prod_{a\in A}f_a\right)\circ\left(\prod_{a\in A}g_a\right)=\prod_{a\in A}(f_a\circ g_a)\simeq_{(\lambda^n,\lambda^n)}\prod_{a\in A}1_{Y_a}=1_{\prod_{a\in A}Y_a}.$

    So we conclude that $\prod_{a\in A}f_a$ is a digital $(\kappa^n,\lambda^n)$-homotopy equivalence.

    Theorem 3.4. Let $(X,\kappa)$, $(Y,\lambda)$ and $(Z,\sigma)$ be digital images. If $p:(X,\kappa)\to(Y,\lambda)$ is a $(\kappa,\lambda)$-shy map and $(Z,\sigma)$ is a $\sigma$-connected digital image, then

    $p\times 1_Z:(X\times Z,k(\kappa,\sigma))\to(Y\times Z,k(\lambda,\sigma))$

    is a $(k(\kappa,\sigma),k(\lambda,\sigma))$-shy map, where $1_Z:(Z,\sigma)\to(Z,\sigma)$ is the identity function.

    Proof. Since $(Z,\sigma)$ is a $\sigma$-connected digital image, for $y\in Y$ and $z\in Z$ we have

    $(p\times 1_Z)^{-1}(y,z)=(p^{-1}(y),1_Z^{-1}(z))=(p^{-1}(y),z).$

    Thus, for each $y\in Y$ and $z\in Z$, $(p\times 1_Z)^{-1}(y,z)$ is $\kappa$-connected by the definition of the adjacency of the cartesian product of digital images. Moreover, the map $1_Z$ preserves the connectivity; that is, for every $z_0,z_1\in Z$ such that $z_0$ and $z_1$ are $\sigma$-adjacent, $1_Z(\{z_0,z_1\})=\{z_0,z_1\}$ is $\sigma$-connected. It is easy to see that

    $(p\times 1_Z)^{-1}(\{y_0,y_1\}\times\{z_0,z_1\})=(p^{-1}(\{y_0,y_1\}),1_Z^{-1}(\{z_0,z_1\}))=(p^{-1}(\{y_0,y_1\}),\{z_0,z_1\}).$

    Hence, for each $y_0,y_1\in Y$ and $z_0,z_1\in Z$, $(p\times 1_Z)^{-1}(\{y_0,y_1\}\times\{z_0,z_1\})$ is $k(\kappa,\sigma)$-connected by the definition of the adjacency of the cartesian product of digital images.

    Theorem 3.5. Let $A$ and $B$ be digital subsets of $(X,\kappa)$ and $(Y,\lambda)$, respectively. If $f,g:(X,A)\to(Y,B)$ are $(\kappa,\lambda)$-continuous functions such that $f\simeq_{(\kappa,\lambda)}g$, then the induced maps $\bar{f},\bar{g}:(X/A,\kappa)\to(Y/B,\lambda)$ are digitally $(\kappa,\lambda)$-homotopic.

    Proof. Let $F:(X\times I,A\times I)\to(Y,B)$ be a digital $(\kappa,\lambda)$-homotopy between $f$ and $g$, where $I=[0,m]_{\mathbb{Z}}$. It is clear that $F$ induces a digital function $\bar{F}:(X/A)\times I\to Y/B$ such that the square diagram formed by $F$, $\bar{F}$ and the quotient maps is commutative, where $p$ and $q$ are shy maps.

    Since $q\circ F$ is digitally continuous, $p\times 1$ is a shy map and $\bar{F}\circ(p\times 1)=q\circ F$, $\bar{F}$ is a digitally continuous map. Hence $\bar{F}$ is a digital $(\kappa,\lambda)$-homotopy between $\bar{f}$ and $\bar{g}$.

    We are ready to present some properties of the digital smash product. The following theorem gives a relation between the digital smash product and the digital homotopy.

    Theorem 3.6. Given digital images $(X,\kappa)$, $(Y,\lambda)$, $(A,\sigma)$, $(B,\alpha)$ and two digital functions $f:X\to A$ and $g:Y\to B$, there exists a function $f\wedge g:X\wedge Y\to A\wedge B$ with the following properties:

    (i) If $h:A\to C$ and $k:B\to D$ are digital functions, then

    $(h\wedge k)\circ(f\wedge g)=(h\circ f)\wedge(k\circ g).$

    (ii) If $f\simeq_{(\kappa,\sigma)}f':X\to A$ and $g\simeq_{(\lambda,\alpha)}g':Y\to B$, then

    $f\wedge g\simeq_{(k(\kappa,\lambda),k(\sigma,\alpha))}f'\wedge g'.$

    Proof. The digital function $f\times g:X\times Y\to A\times B$ has the property that

    $(f\times g)(X\vee Y)\subset A\vee B.$

    Hence $f\times g$ induces a digital function $f\wedge g:X\wedge Y\to A\wedge B$, and property (i) is obvious. As for (ii), the digital homotopy $F$ between $f\times g$ and $f'\times g'$ can be constructed as follows: We know that

    $f\simeq_{(\kappa,\sigma)}f'$   and   $g\simeq_{(\lambda,\alpha)}g'.$

    By Theorem 3.2, we have

    $f\times g\simeq_{(k(\kappa,\lambda),k(\sigma,\alpha))}f'\times g'.$

    $F$ is a digital homotopy of functions of pairs from $(X\times Y,X\vee Y)$ to $(A\times B,A\vee B)$. Consequently, a digital homotopy between $f\wedge g$ and $f'\wedge g'$ is induced by Theorem 3.5.

    Theorem 3.7. If $f$ and $g$ are digital homotopy equivalences, then $f\wedge g$ is a digital homotopy equivalence.

    Proof. Let $f:(X,\kappa)\to(Y,\lambda)$ be a $(\kappa,\lambda)$-homotopy equivalence. Then there exists a $(\lambda,\kappa)$-continuous function $f':(Y,\lambda)\to(X,\kappa)$ such that

    $f\circ f'\simeq_{(\lambda,\lambda)}1_Y$   and   $f'\circ f\simeq_{(\kappa,\kappa)}1_X.$

    Moreover, let $g:(A,\sigma)\to(B,\alpha)$ be a $(\sigma,\alpha)$-homotopy equivalence. Then there is an $(\alpha,\sigma)$-continuous function $g':(B,\alpha)\to(A,\sigma)$ such that

    $g\circ g'\simeq_{(\alpha,\alpha)}1_B$   and   $g'\circ g\simeq_{(\sigma,\sigma)}1_A.$

    By Theorem 3.6, there exist digital functions

    $f\wedge g:X\wedge A\to Y\wedge B$   and   $f'\wedge g':Y\wedge B\to X\wedge A$

    such that

    $(f\wedge g)\circ(f'\wedge g')=(f\circ f')\wedge(g\circ g')\simeq 1_Y\wedge 1_B=1_{Y\wedge B}$

    and

    $(f'\wedge g')\circ(f\wedge g)=(f'\circ f)\wedge(g'\circ g)\simeq 1_X\wedge 1_A=1_{X\wedge A}.$

    So $f\wedge g$ is a digital homotopy equivalence.

    The following theorem shows that the digital smash product is associative.

    Theorem 3.8. Let $(X,\kappa)$, $(Y,\lambda)$ and $(Z,\sigma)$ be digital images. Then $(X\wedge Y)\wedge Z$ is digitally isomorphic to $X\wedge(Y\wedge Z)$.

    Proof. Consider the diagram formed by the quotient maps, where $p$ stands for the digital shy maps of the form $X\times Y\to X\wedge Y$. By Theorem 3.4, $p\times 1$ and $1\times p$ are digital shy maps. The identity $1:X\times Y\times Z\to X\times Y\times Z$ induces functions

    $f:(X\wedge Y)\wedge Z\to X\wedge(Y\wedge Z)$   and   $g:X\wedge(Y\wedge Z)\to(X\wedge Y)\wedge Z.$

    These functions are clearly injections. By Theorem 2.3, $f$ is a digital isomorphism.

    The next theorem gives the distributivity property for the digital smash product.

    Theorem 3.9. Let $(X,\kappa)$, $(Y,\lambda)$ and $(Z,\sigma)$ be digital images. Then $(X\vee Y)\wedge Z$ is digitally isomorphic to $(X\wedge Z)\vee(Y\wedge Z)$.

    Proof. Suppose that $p$ stands for the digital shy maps of the form $X\times Y\to X\wedge Y$ and $q$ stands for the digital shy maps of the form $X\times Y\to X\vee Y$. We may obtain the corresponding diagram of quotient maps. From Theorem 2.4, $p\times p$ is a digital shy map, and by Theorem 3.4, $q\times 1$ is also a digital shy map. The function $m:(X\times Y)\times Z\to(X\times Z)\times(Y\times Z)$ induces a digital function

    $f:(X\vee Y)\wedge Z\to(X\wedge Z)\vee(Y\wedge Z).$

    Obviously $f$ is a one-to-one function. By Theorem 2.3, $f$ is a digital isomorphism.

    Theorem 3.10. Let $(X,\kappa)$ and $(Y,\lambda)$ be digital images. Then $X\wedge Y$ is digitally isomorphic to $Y\wedge X$.

    Proof. If we suppose that $g$ stands for the digital shy maps of the form $Y\times X\to Y\wedge X$ and $p$ represents the digital shy maps of the form $X\times Y\to X\wedge Y$, we get the corresponding diagram of quotient maps. The switching map $u:X\times Y\to Y\times X$ induces a digital shy map $f:X\wedge Y\to Y\wedge X$. Additionally, $f$ is one-to-one. Hence, $f$ is a digital isomorphism by Theorem 2.3.

    Definition 3.11. The digital suspension of a digital image $X$, denoted by $sX$, is defined to be $X\wedge S_1$.

    Example 1. Choose the digital image $X=S_0$. Then we get the digital images shown in Figure 5.

    Figure 5.  $S_1\times S_0$ and $S_1\wedge S_0$.

    Theorem 3.12. Let $x_0$ be the base point of a digital image $X$. Then $sX$ is digitally isomorphic to the quotient digital image

    $(X\times[a,b]_{\mathbb{Z}})/(X\times\{a\}\cup\{x_0\}\times[a,b]_{\mathbb{Z}}\cup X\times\{b\}),$

    where the cardinality of $[a,b]_{\mathbb{Z}}$ is equal to 8.

    Proof. The function

    $\theta:[a,b]_{\mathbb{Z}}\to S_1$

    defined by $\theta(t_i)=c_{i\bmod 8}$, where $c_i\in S_1$ and $i\in\{0,1,\ldots,7\}$, is a digital shy map. Hence if $p:X\times S_1\to X\wedge S_1$ is a digital shy map, then the digital function

    $X\times[a,b]_{\mathbb{Z}}\xrightarrow{\,1\times\theta\,}X\times S_1\xrightarrow{\,p\,}X\wedge S_1$

    is also a digital shy map, and its effect is to identify together the points of

    $X\times\{a\}\cup\{x_0\}\times[a,b]_{\mathbb{Z}}\cup X\times\{b\}.$

    The digital composite function $p\circ(1\times\theta)$ induces a digital isomorphism

    $(X\times[a,b]_{\mathbb{Z}})/(X\times\{a\}\cup\{x_0\}\times[a,b]_{\mathbb{Z}}\cup X\times\{b\})\to X\wedge S_1=sX.$
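The map $\theta$ wraps the eight points of the interval once around the digital circle: consecutive parameters land on equal or 8-adjacent points of $S_1$, which is exactly what makes $\theta$ $(2,8)$-continuous. A sketch (with $[a,b]_{\mathbb{Z}}=[0,7]_{\mathbb{Z}}$ chosen for concreteness; names ours):

```python
# the points c_0, ..., c_7 of the digital 1-sphere S_1
S1 = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def eight_adj(p, q):
    """8-adjacency (c_2) in Z^2."""
    return p != q and max(abs(p[0] - q[0]), abs(p[1] - q[1])) == 1

theta = lambda t: S1[t % 8]   # theta(t_i) = c_{i mod 8} on [0,7]_Z

# every 2-adjacent pair t, t+1 maps to equal or 8-adjacent points of S_1
ok = all(theta(t) == theta(t + 1) or eight_adj(theta(t), theta(t + 1))
         for t in range(7))
print(ok)  # True
```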

    Definition 3.13. The digital cone of a digital image $X$, denoted by $cX$, is defined to be $X\wedge I$, where $I=[0,1]_{\mathbb{Z}}$.

    Example 2. Take the digital image $X=S_0$. Then we have the digital images shown in Figure 6.

    Figure 6.  $S_0\times I$ and $S_0\wedge I$.

    Theorem 3.14. For any digital image $(X,\kappa)$, the digital cone $cX$ is a contractible digital image.

    Proof. Since $I=[0,1]_{\mathbb{Z}}$ is digitally contractible to the point $\{0\}$,

    $cX=X\wedge I\simeq X\wedge\{0\},$

    and $X\wedge\{0\}$ is obviously a single point.

    Corollary 1. For $m\in\mathbb{N}$, $S_m\wedge I$ is equal to $S_m\wedge S_0$, where $I=[0,1]_{\mathbb{Z}}$ is the digital interval and $S_0$ is a digital 0-sphere.

    Proof. Since $S_0$ and $I$ both consist of two points, we get the required result.

    For each $m,n\ge 0$, can we prove that the digital $(m+n)$-sphere $S_{m+n}$ is isomorphic to $S_m\wedge S_n$?

    This paper introduces some notions such as the smash product, the suspension, and the cone for digital images. Since they are significant topics related to homotopy, homology, and cohomology groups in algebraic topology, we believe that the results in the paper can be useful for future studies in digital topology.

    We would like to express our gratitude to the anonymous referees for their helpful suggestions and corrections.



    [45] X. Zhao, H. Zhang, G. Zhu, F. You, S. Kuang, L. Sun, A multi-branch 3D convolutional neural network for EEG-based motor imagery classification, IEEE Trans. Neural Syst. Rehabil. Eng., 27 (2019), 2164–2177. https://doi.org/10.1109/TNSRE.2019.2938295 doi: 10.1109/TNSRE.2019.2938295
    [46] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, Grad-CAM: Visual explanations from deep networks via gradient-based localization, in 2017 IEEE International Conference on Computer Vision (ICCV), (2017), 618–626. https://doi.org/10.1109/ICCV.2017.74
    [47] D. Hong, N. Yokoya, J. Chanussot, X. Zhu, An augmented linear mixing model to address spectral variability for hyperspectral unmixing, IEEE Trans. Image Process., 28 (2019), 1923–1938. https://doi.org/10.1109/TIP.2018.2878958 doi: 10.1109/TIP.2018.2878958
    [48] R. K. Meleppat, C. R. Fortenbach, Y. Jian, E. S. Martinez, K. Wagner, B. S. Modjtahedi, et al., In vivo imaging of retinal and choroidal morphology and vascular plexuses of vertebrates using swept-source optical coherence tomography, Transl. Vision Sci. Technol., 11 (2022), 11. https://doi.org/10.1167/tvst.11.8.11 doi: 10.1167/tvst.11.8.11
    [49] K. M. Ratheesh, L. K. Seah, V. M. Murukeshan, Spectral phase-based automatic calibration scheme for swept source-based optical coherence tomography systems, Phys. Med. Biol., 61 (2016), 7652–7663. https://doi.org/10.1088/0031-9155/61/21/7652 doi: 10.1088/0031-9155/61/21/7652
    [50] R. K. Meleppat, E. B. Miller, S. K. Manna, P. Zhang, E. N. Pugh, R. J. Zawadzki, Multiscale hessian filtering for enhancement of OCT angiography images, in Ophthalmic Technologies XXIX, (2019), 64–70. https://doi.org/10.1117/12.2511044
    [51] R. K. Meleppat, P. Prabhathan, S. L. Keey, M. V. Matham, Plasmon resonant silica-coated silver nanoplates as contrast agents for optical coherence tomography, J. Biomed. Nanotechnol., 12 (2016), 1929–1937. https://doi.org/10.1166/jbn.2016.2297 doi: 10.1166/jbn.2016.2297
© 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).