
Global dynamics and density function in a class of stochastic SVI epidemic models with Lévy jumps and nonlinear incidence

  • The paper studies the global dynamics and probability density function for a class of stochastic SVI epidemic models with white noise, Lévy jumps and nonlinear incidence. The stability of disease-free and endemic equilibria for the corresponding deterministic model is first obtained. The threshold criteria on the stochastic extinction, persistence and stationary distribution are established. That is, the disease is extinct with probability one if the threshold value Rs0<1, and the disease is persistent in the mean and any positive solution is ergodic and has a unique stationary distribution if Rs0>1. Furthermore, the approximate expression of the log-normal probability density function around the quasi-endemic equilibrium of the stochastic model is calculated. A new technique for the calculation of the probability density function is proposed. Lastly, the numerical examples and simulations are presented to verify the main results.

    Citation: Xiaodong Wang, Kai Wang, Zhidong Teng. Global dynamics and density function in a class of stochastic SVI epidemic models with Lévy jumps and nonlinear incidence[J]. AIMS Mathematics, 2023, 8(2): 2829-2855. doi: 10.3934/math.2023148




    Epilepsy is a chronic brain disorder with diverse causes, typically characterized by repeated and sudden excessive discharge of local neurons in the brain, resulting in central nervous system dysfunction [1]. Patients with epilepsy clinically exhibit symptoms such as muscle convulsions and loss of consciousness. Repeated seizures result in brain cell death, impairment of brain function, and life-threatening situations in severe cases. Furthermore, epilepsy imposes a significant burden on patients' families and society. EEG plays an irreplaceable role in the diagnosis and treatment of epilepsy [2]. It has served as a crucial tool for clinical monitoring and diagnosis of epilepsy, offering a swift, reliable, cost-effective, and non-invasive technique to observe cerebral cortex activity [3]. Therefore, studying the prevention and treatment of epilepsy with EEG is of great significance.

    Traditional feature extraction for EEG signals was inefficient owing to differences in the subjective experience of experts. Consequently, the automatic detection of epileptic EEG signals is one of the hot issues in biomedical research [4]. Boashash et al. [5] classified and recognized neonatal epileptic EEG signals by comprehensively analyzing statistical, image, and signal features in the time-frequency domain. Nonlinear dynamic analysis methods, in contrast, mostly include signal sample entropy (SEn), approximate entropy, information entropy, and the Lempel-Ziv complexity metric [6], which are often combined with time-frequency domain features in classification. Sabeti et al. [7] classified epileptic EEG signals from patients and normal subjects using feature vectors comprising SEn, approximate entropy, and Lempel-Ziv complexity. Automatic detection of epilepsy not only helps doctors improve the accuracy of epilepsy diagnosis but also greatly reduces diagnosis time, which is of great significance for the prevention, diagnosis, and treatment of epilepsy.

    In recent years, machine learning has become popular with advances in computing and has been widely used in epilepsy recognition [8]. Researchers have tried to diagnose EEG signal features by machine learning. Extracting relevant features from the signals is the key to successfully diagnosing epilepsy. Whether it is manual extraction of a single-modal feature, manual extraction of multiple features, or automatic extraction of features by deep learning, the common purpose is to obtain more information about EEG signals [9], and the richness of such information is very important for in-depth understanding and analysis of EEG signals. Sharmila et al. [10] converted EEG signals into spectral images, but the experimental results were not satisfactory. The main reason was that a single convolutional neural network model extracts only the frequency domain and spatial domain features of EEG signals, ignoring the time domain features, resulting in poor classification performance. Brain activity is a time-dynamic process, and learning temporal evolution from EEG time series is important. Seal et al. [11] presented a deep hybrid model combining a convolutional neural network with a bidirectional long short-term memory network to extract frequency information and sequence relationships from signals. However, the model feeds only a one-dimensional chain time series into the classifier; its final classification accuracy (ACC) is unsatisfactory, and it may face challenges such as high computational complexity and overfitting.

    The self-attention mechanism weighs parts of the input data, emphasizing those most influential on the output. This approach assigns higher weights to key inputs, amplifying their impact within the network [12]. Today, self-attention is a pivotal concept in deep learning and is widely applied in EEG signal classification [13]. Zheng et al. [14] introduced an attention-based bidirectional long short-term memory network for visual classification, which utilizes self-attention to identify critical EEG segments and thereby enhances visual processing accuracy. However, it may require extensive training data and resources and faces risks of overfitting and longer training time. Zhang et al. [15] simplified the temporal scale by one-dimensional convolution. They combined a bidirectional long short-term memory network with self-attention for arousal-based binary classification, reducing complexity and improving performance. However, the method relies on sufficient labeled data for effective training. Kim et al. [16] combined a long short-term memory network with self-attention to analyze EEG signals and conduct binary classification based on valence and arousal, achieving a good ACC. However, the method may demand substantial labeled data and has a relatively high computational cost. Chang et al. [17] employed 3D convolutional neural networks to extract deep EEG features and leveraged dual attention to enhance them, improving performance at the cost of higher computation and data requirements. Overall, traditional methods extract single-modal features without preserving the structure of EEG signals across the time, frequency, and nonlinear dynamics dimensions, and they do not reach an ideal state in multi-class classification tasks.

    To address the aforementioned challenges, we proposed a multi-modal feature fusion (MMFF) method leveraging a multi-head self-attention mechanism to extract time domain, frequency domain, and nonlinear dynamic features from epileptic EEG signals. Specifically, the obtained resting EEG signals were first preprocessed to extract time series. Second, the time domain features of EEG were extracted by Gaussian kernel principal component analysis (GPCA), while the frequency domain features were extracted by short-time Fourier transform (STFT). Additionally, the nonlinear dynamic features were extracted by SEn. Then, the features of the three modalities were interactively learned by the multi-head self-attention mechanism, and the attention weights were trained. The fused features were derived by amalgamating the value vectors of the feature representations; the fusion was then cast as an optimization model with an L1 norm regularization term.

    The major contributions of this study are summarized as follows:

    (a) The multi-head self-attention mechanism fused the time domain, frequency domain, and nonlinear dynamic features derived from epileptic EEG signals. Additionally, the introduction of the L1 norm regularization term served to decrease model complexity, bolster robustness, and mitigate the risk of overfitting.

    (b) We explored the variation trends of several parameters, including the bandwidth of the Gaussian kernel function in GPCA, the STFT window length and step size, the SEn window length, overlap rate, and threshold, and the number of heads in the multi-head attention mechanism.

    (c) We examined how these parameters, along with L1 norm regularization, affected the classification performance of epilepsy patients. Through this comprehensive investigation, an optimal parameter combination was identified, achieving an ACC of 92.76 ± 1.64%.

    Figure 1 shows the framework diagram. The specific steps are as follows: (a) Preprocess resting epileptic EEG signals to extract time series data; (b) Extract the time domain features of EEG by GPCA and then calculate their self-attention scores and generate the corresponding output; (c) Extract the frequency domain features of EEG by STFT, calculate their self-attention scores, and produce the associated output; (d) Extract the nonlinear dynamic features of EEG by SEn and then calculate their self-attention scores and generate the corresponding output; (e) Derive the feature representation by fusing the self-attention outputs from the three modal features; (f) Obtain the query vector, key vector, and value vector through a linear transformation of the feature representation; (g) Determine the normalized attention weights by scaling the dot product of the query and key vectors, followed by the Softmax function; (h) Generate fused features by combining the value vectors of the feature representations and then transform them into an optimization model with an L1 norm regularization term; and (i) Diagnose epilepsy by the fused features, enabling assessment of classification performance.

    Figure 1.  Research framework.

    The experimental data were sourced from a widely used epileptic EEG dataset provided by the University of Bonn, Germany (http://epileptologie-bonn.de/cms/upload/workgroup/lehnertz/eegdata.html). The dataset is divided into five subsets, Set A to Set E, with each subset containing 100 samples of the same type and each sample containing an EEG time series of 4096 points. The sampling frequency was 173.61 Hz, and the duration was 23.6 s. Artifacts had been removed by manual filtering. Sets A and B were collected from 5 healthy subjects with eyes open and eyes closed, respectively. Sets C and D were collected outside the focal area and in the focal area of 5 patients, respectively. Set E was collected from focal areas during seizures. Sets C, D, and E were recorded with strip electrodes implanted in the skull and affixed to the surface of the hippocampus: Sets D and E were recorded by attaching a strip electrode to the lesion area of the hippocampal structure, and Set C was recorded by attaching a strip electrode to the hippocampal structure of the other hemisphere, outside the lesion area. Table 1 shows the details.

    Table 1.  Details of the Bonn dataset.
    Set A: five healthy volunteers; eyes open; normal recording period; non-invasive surface electrodes (international 10-20 standard system)
    Set B: five healthy volunteers; eyes closed; normal recording period; non-invasive surface electrodes (international 10-20 standard system)
    Set C: five epilepsy patients; interictal; intracranial electrodes on the hippocampal structure of the hemisphere opposite the lesion
    Set D: five epilepsy patients; interictal; intracranial electrodes within the lesion area
    Set E: five epilepsy patients; ictal (seizure period); intracranial electrodes within the lesion area


    The features of EEG signals are mainly derived from time domain, frequency domain, time-frequency domain, and nonlinear dynamics [18,19,20].

    The time domain features were extracted by GPCA [21]. GPCA first maps EEG signals to a high-dimensional feature space and then applies principal component analysis for dimensionality reduction. For M samples x_1, x_2, ..., x_M in the input space, each a d-dimensional vector, the similarity matrix K between samples is calculated by the Gaussian kernel function. The centering matrix H is then calculated as:

    H = I − (1/n)EEᵀ (1)

    where I is the identity matrix and E is an n-dimensional all-ones vector.

    Calculate the centralized similarity matrix K', and its expression is as follows:

    K′ = HKH (2)

    The eigenvalue decomposition of the centralized similarity matrix K′ is carried out to obtain the eigenvalues and eigenvectors. The eigenvectors corresponding to the k largest eigenvalues are selected as the principal components. The projection of all samples onto the k principal components forms a new feature matrix Z = [z_1, z_2, ..., z_M]ᵀ, used as the time domain features.
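    As a concrete illustration, the GPCA pipeline described above (Gaussian similarity matrix, centering and centralization as in Eqs (1) and (2), eigendecomposition, projection) can be sketched in Python with NumPy. This is a minimal sketch, not the authors' implementation; the function name and the default bandwidth and component count are illustrative:

```python
import numpy as np

def gaussian_kernel_pca(X, sigma=5.0, k=2):
    # Pairwise squared Euclidean distances between the M samples
    sq = np.sum(X ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    # Gaussian (RBF) similarity matrix K
    K = np.exp(-D2 / (2.0 * sigma ** 2))
    # Centering matrix H = I - (1/M) E E^T, with E the all-ones vector
    M = X.shape[0]
    E = np.ones((M, 1))
    H = np.eye(M) - (E @ E.T) / M
    Kc = H @ K @ H  # centralized similarity matrix K'
    # Eigendecomposition; keep eigenvectors of the k largest eigenvalues
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:k]
    alphas = vecs[:, order]
    lambdas = np.maximum(vals[order], 1e-12)  # guard tiny eigenvalues
    # Project all samples onto the k principal components
    return Kc @ (alphas / np.sqrt(lambdas))
```

The returned M × k matrix plays the role of the time domain feature matrix Z.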

    The frequency domain features were extracted by STFT [22]. STFT decomposes EEG signals into components of different frequencies to obtain the amplitude spectrum and phase spectrum within each time window. These features facilitated the description of EEG signals' energy distribution and frequency features across various frequencies.

    Set the original EEG signal as x(t), the sampling frequency as fs, the time window length as Ls, and the time window step as s. For each time window, the amplitude spectrum and phase spectrum after STFT are calculated, giving the frequency domain feature X ∈ ℝ^{N_t×F}, where N_t is the number of time windows and F is the frequency resolution of the amplitude spectrum. The specific STFT expression is as follows:

    X(m, ω) = Σ_{n=0}^{N_t−1} x(n) w(n − m) e^{−jωn} (3)

    where x(n) is the original EEG signal, w(n − m) is the window function, ω is the frequency, m is the time shift factor, and N_t is the number of time windows.

    For each time window, set its starting time as ti, and calculate its amplitude spectrum A(k, i) and phase spectrum P(k, i) at k as follows:

    A(k, i) = |STFT(x_{t_i}[n])|_k,  P(k, i) = arg(STFT(x_{t_i}[n]))_k (4)

    where |·|_k represents the modulus of the kth frequency component of the STFT result, arg(·)_k represents the phase angle of the kth frequency component, and x_{t_i}[n] represents a time window of length Ls starting at time t_i.

    Finally, the amplitude spectra of all time windows are spliced together to form the frequency domain feature X, whose ith row is the amplitude spectrum of the ith time window, namely X(i, k) = A(k, i).
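    The windowed amplitude/phase extraction of Eqs (3) and (4) can be sketched as follows (a minimal NumPy sketch; the Hamming window matches the implementation choice stated in the experiments, while the function name and the defaults Ls = 64, s = 8 are illustrative):

```python
import numpy as np

def stft_features(x, Ls=64, s=8):
    win = np.hamming(Ls)  # Hamming window applied to each segment
    amp, phase = [], []
    for t0 in range(0, len(x) - Ls + 1, s):
        seg = x[t0:t0 + Ls] * win
        spec = np.fft.rfft(seg)       # one-sided spectrum of this window
        amp.append(np.abs(spec))      # amplitude spectrum A(k, i)
        phase.append(np.angle(spec))  # phase spectrum P(k, i)
    # Row i is the amplitude spectrum of the i-th window: X(i, k) = A(k, i)
    return np.vstack(amp), np.vstack(phase)
```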

    The nonlinear dynamic features were extracted by SEn [23]. The algorithm requires a small amount of data and takes little time [24]. The time series T is divided into m subsequences of length Le, with the overlap rate set as R.

    The frequency of occurrence is calculated for each subsequence, and the similarity between two subsequences is measured by a distance metric. The occurrence frequency p_i is calculated as:

    p_i = (1 / C_m^2) Σ_{j=1, j≠i}^{m} [d(T_i, T_j) < r] (5)

    where T_i represents the ith subsequence, C_m^2 = m(m − 1)/2 represents the number of 2-combinations from a set of m distinct subsequences, and r is a threshold for deciding whether two subsequences are similar; the indicator [d(T_i, T_j) < r] equals 1 if d(T_i, T_j) < r and 0 otherwise.

    Then, the nonlinear dynamic feature S is concretely expressed as:

    A = Σ_{i=1}^{m−1} Σ_{j=i+1}^{m} p_i [d(T_i, T_j) < r],  B = Σ_{i=1}^{m} Σ_{j=i+1}^{m} p_i [d(T_i, T_j) < r],  S(L_e, r) = log(A/B) (6)

    The nonlinear dynamic feature S ∈ ℝ^{M×N} varies with the window length Le and the threshold r. In a specific research setting the parameter values are fixed, where M represents the number of samples and N represents the dimension of the nonlinear dynamic features. In general, r ∈ [0.1·std, 0.25·std], where std represents the standard deviation of the given time series.
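    For illustration, a standard sample-entropy computation can be sketched as follows. This is a minimal NumPy sketch of the usual template-matching formulation with a Chebyshev distance; it may differ in detail from the windowed variant of Eqs (5) and (6), and the function name and defaults are illustrative:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    # Standard sample entropy: compare templates of length m and m + 1
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)  # threshold as a fraction of the signal's std

    def count_matches(mm):
        # Count template pairs whose Chebyshev distance is below r
        n = len(x) - mm + 1
        templates = np.array([x[i:i + mm] for i in range(n)])
        count = 0
        for i in range(n - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d < r))
        return count

    B = count_matches(m)      # matches at length m
    A = count_matches(m + 1)  # matches at length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")
```

A regular signal yields a low value, while an irregular one yields a higher value, which is what makes the statistic useful as a nonlinear dynamic feature.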

    The self-attention mechanism [25] is applicable to situations where there are complex and nonlinear dependencies between different modalities and can improve the performance and expressiveness of the fused features, which will contain cross-modal information [26].

    (a) For the time domain features, calculate the query vector Q_T = Z W_T^Q, key vector K_T = Z W_T^K, and value vector V_T = Z W_T^V, where W_T^Q, W_T^K, W_T^V ∈ ℝ^{T×d} are weight matrices and d is the hidden layer dimension. Similarly, for the frequency domain features calculate Q_F = X W_F^Q, K_F = X W_F^K, and V_F = X W_F^V, where W_F^Q, W_F^K, W_F^V ∈ ℝ^{F×d}; and for the nonlinear dynamic features calculate Q_N = S W_N^Q, K_N = S W_N^K, and V_N = S W_N^V, where W_N^Q, W_N^K, W_N^V ∈ ℝ^{N×d}.

    (b) Calculate the attention score AT and output OT of time domain features. Calculate attention score AF and the output OF for frequency domain features. Calculate attention score AN and output ON for nonlinear dynamic features. The attention scores A and the output O are respectively expressed as:

    A = softmax(QKᵀ / √d),  O = AV (7)

    where Q is the query vector, K is the key vector, and V is the value vector.
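    For a single modality, the scaled dot-product attention of Eq (7) can be sketched as follows (a minimal NumPy sketch with illustrative names; in practice the weight matrices are learned):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - np.max(z, axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)

def self_attention(Z, W_Q, W_K, W_V):
    # Q, K, V as linear maps of the modality's feature matrix Z
    Q, K, V = Z @ W_Q, Z @ W_K, Z @ W_V
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # attention score matrix
    return A, A @ V                             # scores and output O = AV
```

Each row of A is a probability distribution over the inputs, so it sums to one.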

    (c) Integrate the self-attention outputs into O ∈ ℝ^{M×(T+F+N)} to get the feature representation:

    O = [O_T, O_F, O_N] (8)

    Calculate the query vector Q = OW^Q, the key vector K = OW^K, and the value vector V = OW^V, where W^Q, W^K, W^V ∈ ℝ^{(T+F+N)×d} are the weight matrices and d is the hidden layer dimension.

    (d) Enable the model to simultaneously attend to and process diverse information from various subspaces, enhancing its expressiveness and ability to handle complex tasks. Q, K, and V are split into h heads, each of dimension d/h, and an independent self-attention calculation is performed for each head. Specifically, for each head i, calculate its attention score A_i and the corresponding attention output O_i:

    Q_i = head_i(Q),  K_i = head_i(K),  V_i = head_i(V),  A_i = softmax(Q_i K_iᵀ / √(d/h)),  O_i = A_i V_i (9)

    Splice together the attention outputs O_i of all heads to obtain the final feature fusion matrix Y through a layer of linear transformation:

    O = [O_1, O_2, ..., O_h],  Y = O W^Y (10)

    where W^Y ∈ ℝ^{d×d_Y} is the weight matrix and d_Y is the dimension of the final fused feature.
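    The head-splitting and concatenation of Eqs (9) and (10) can be sketched as follows (a minimal NumPy sketch; the weight matrices would be learned in practice, and d must be divisible by h):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - np.max(z, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)

def multi_head_attention(O_cat, W_Q, W_K, W_V, W_Y, h=6):
    Q, K, V = O_cat @ W_Q, O_cat @ W_K, O_cat @ W_V
    d = Q.shape[-1]
    dh = d // h  # per-head dimension d/h
    heads = []
    for i in range(h):
        sl = slice(i * dh, (i + 1) * dh)
        Ai = softmax(Q[:, sl] @ K[:, sl].T / np.sqrt(dh), axis=-1)
        heads.append(Ai @ V[:, sl])  # O_i = A_i V_i
    # Concatenate head outputs and apply the final linear map W^Y
    return np.hstack(heads) @ W_Y
```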

    (e) Introduce the L1 norm regular term to reduce model complexity, increase robustness, and avoid overfitting. The objective function with a squared loss is expressed as:

    min_{W^Q, W^K, W^V, W^Y} (1/M) Σ_{i=1}^{M} ‖Y_i − Ŷ_i‖_F² + λ‖Y‖_1 (11)

    where Y_i is the real feature, Ŷ_i is the feature predicted by the model, M is the sample size, and λ is the weight of the L1 norm regularization term.

    Update the weight matrices W^Q, W^K, W^V, W^Y by the backpropagation algorithm to minimize the objective function during training [27]. The objective function enables the model to learn the feature mapping Y and controls the sparsity of the features through the regularization term. The squared loss measures the squared Euclidean distance between the predicted and real features.
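    The objective of Eq (11), together with one common way of handling the non-smooth L1 term inside a gradient-descent loop (soft-thresholding, i.e., a proximal step), can be sketched as follows. This is an illustrative sketch, not necessarily the authors' exact optimization scheme:

```python
import numpy as np

def objective(Y_pred, Y_true, lam):
    # Mean squared Frobenius error plus L1 sparsity penalty, as in Eq (11)
    M = Y_true.shape[0]
    sq = np.sum((Y_true - Y_pred) ** 2) / M
    return sq + lam * np.sum(np.abs(Y_pred))

def soft_threshold(W, t):
    # Proximal operator of the L1 norm: shrinks entries toward zero,
    # zeroing those with magnitude below t (encourages sparsity)
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)
```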

    We extracted the time domain, frequency domain, and nonlinear dynamic features of EEG by GPCA, STFT, and SEn, respectively. The features of these three modalities were fused by the multi-head self-attention mechanism, and an L1 norm regular term was introduced. In the implementation, the STFT used a Hamming window function [28], and the attention weights were normalized by the Softmax function [29]. Additionally, the L1 norm term was handled by the gradient descent algorithm [30]. The proposed method was verified by leave-one-out cross-validation due to the limited number of EEG samples. Ten repeated experiments were conducted to obtain the average value. The classification performance was assessed according to the ACC index [31]. The influence of different parameters on the classification performance was first discussed to verify the classification performance of the fused features; the optimal parameters were determined, a one-to-many strategy was used for multi-category classification, and the results were then compared with other feature extraction methods [32].
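    Leave-one-out cross-validation can be sketched with a deliberately simple 1-nearest-neighbour classifier standing in for the actual model (an illustrative sketch; the classifier here is a placeholder, not the paper's):

```python
import numpy as np

def loo_accuracy(X, y):
    # Leave-one-out: each sample is classified by its nearest neighbour
    # among the remaining samples, then results are averaged
    n = len(y)
    correct = 0
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf  # hold out sample i
        correct += int(y[int(np.argmin(d))] == y[i])
    return correct / n
```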

    The proposed method comprises multiple parameters, so a grid search cannot find the optimal parameters directly. The optimal parameter values were instead determined through a sequential process: (a) determine the bandwidth σ of the Gaussian kernel function; (b) set the window length Ls and step size s; (c) establish the window length Le, overlap rate R, and threshold r; (d) finalize the number of heads h and the L1 norm regularization parameter λ. The optimal classification model was formed on the EEG training set, and the model was assessed using the test set. The average of the test results was calculated to assess the model's performance. Multiple iterations of model training were conducted to ascertain the optimal hyperparameters. Subsequently, the model employing the optimal hyperparameters underwent testing with the original samples.
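    The stage-by-stage tuning procedure (a)-(d) can be sketched generically as follows (an illustrative sketch; `evaluate` stands for training and validating the model under a given parameter setting):

```python
def sequential_search(stages, evaluate, base):
    # Tune one parameter at a time, fixing earlier stages at their best values
    params = dict(base)
    for name, candidates in stages:
        best_val, best_acc = params[name], float("-inf")
        for v in candidates:
            acc = evaluate({**params, name: v})
            if acc > best_acc:
                best_val, best_acc = v, acc
        params[name] = best_val
    return params
```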

    GPCA has demonstrated superior performance in EEG analysis, thus justifying its selection for this task [33]. It differs from traditional principal component analysis in that it does not directly obtain a fixed number of principal components. The input data is mapped to a high-dimensional feature space by the Gaussian kernel function. Then, the eigenvector of the covariance matrix in the feature space is calculated as the kernel principal component. Since there were only 100 samples per class, we considered choosing a smaller Gaussian kernel bandwidth to avoid overfitting [34]. Accordingly, the σ was set from 1 to 10, and ACCs of different σ were compared, as shown in Figure 2.

    Figure 2.  ACCs of GPCA with different σ.

    It is observed from Figure 2 that σ has a great impact on model performance and determines the similarity of the data in the feature space. As σ increased from 1 to 10, the accuracy of the five-category classification first increased, reached a peak, and then gradually declined. Specifically, when σ increased from 1 to 5, the ACC increased from 65.4% to 80.2%, indicating that increasing σ helped improve the ACC in this interval. When σ increased to 6 and above, the ACC gradually declined from 79.4% to 76.2%. In GPCA, σ controls the "smoothness" of the points mapped into the high-dimensional feature space. A smaller σ means that the points in the high-dimensional space are more dispersed, leading the model to focus too much on details and noise in the training data, i.e., to overfit.

    Conversely, a larger σ caused the mapped points to be more concentrated, leading the model to fail to capture important features from epileptic EEG signals, that is, to underfit. ACCs increased first and then decreased with increasing σ, indicating an optimal σ range. In this range, the model balanced bias and variance well and achieved a higher ACC. When σ was 5, the model reached the highest accuracy of 80.2%. This may indicate that the model complexity at this σ setting is moderate enough to capture key features from epileptic EEG signals without being overly affected by noise.

    Ls is usually chosen as a power of 2 to obtain better computational efficiency when performing a fast Fourier transform, and s is usually set to half or a quarter of Ls [35]. Since there were only 100 samples per category, we considered choosing a smaller Ls and s to retain more time and frequency detail. On that basis, Ls was set to 32, 64, 128, 256, and s was set to 4, 8, 16, and the ACCs for the different combinations of Ls and s were compared, as shown in Figure 3.

    Figure 3.  ACCs of STFT with different LS and s.

    It is observed from Figure 3 that when Ls was short (such as 32), ACCs were greatly affected by s to some extent, and ACCs ranged from 56.6% to 73.8%. This suggests that shorter Ls may be sensitive to time resolution, but have lower frequency resolution. When Ls was medium (e.g., 64), a significant increase was observed, especially when s was 8, and the ACCs reached up to 82.5%. This suggests that a moderate Ls may provide a better time-frequency resolution balance. For longer Ls (such as 128 and 256), the ACC changes were more complex, and the overall trend showed that the maximum ACC decreased as Ls increased. This could be attributed to the fact that although a longer Ls enhances frequency resolution, it compromises time resolution, impeding the ability to capture dynamic shifts from epileptic EEG signals accurately. For a given Ls, a shorter s (e.g., 4) yielded better or relatively stable ACCs. This could be attributed to the increased overlap in time, which allows for a more detailed capture of the signal. Conversely, a longer s (e.g., 16) often leads to a lower ACC in most cases. This decline in performance might be due to the reduced time coverage, potentially resulting in the omission of crucial time-frequency information.

    It is commonly recommended that Le be set to a power of 2 when extracting nonlinear dynamic features of EEG by SEn, which improves computational efficiency in fast calculations. The advantage of SEn in capturing local features was reflected by setting the overlap rate R, thereby preserving a richer array of time and frequency details. R is chosen between 0.5 and 0.9, and r is the threshold SEn uses to extract nonlinear dynamic features. Generally, r ranges from 0.1·std to 0.25·std [36], where std represents the standard deviation of a given time series. It is advisable to opt for a shorter Le and a higher overlap rate when working with a limited dataset of 100 samples per category, as this helps retain a greater amount of temporal and frequency detail. Then, Le was set to 32, 64, 128, 256, R was set to 0.6, 0.7, 0.8, 0.9, and r was set to 0.1·std, 0.15·std, 0.2·std, 0.25·std, and the ACCs of the different combinations of Le, R, and r were compared, as shown in Figure 4.

    Figure 4.  ACCs of SEn with different Le, R, and r.

    It is observed from Figure 4 that when Le was 64, the ACC was generally higher with the change of R and r, especially when R was 0.8 and r was 0.15·std, reaching the highest ACC of 82.6%. ACCs decreased slightly when Le increased to 128, suggesting that a shorter Le may be better suited to capturing dynamic changes in epileptic EEG signals. As Le was further increased, ACCs decreased even more, particularly when Le reached 256, leading to a significant reduction in ACCs. This decline could be explained by the fact that a longer Le decreased the temporal resolution, thereby impeding the effective capture of rapid changes in epileptic EEG signals. Under each Le setting, ACCs generally increased first and then stabilized or slightly decreased with the increase of R. This suggests that appropriately increasing R can aid in enhancing ACCs. The reason for this improvement lies in the fact that a higher R offers a more extensive data sample, enabling a more detailed capture of signal changes. When R was high (e.g., 0.8), the ACC reached its highest at shorter Le (e.g., 64). This emphasizes improving the efficiency of feature extraction by increasing R while maintaining a high temporal resolution. The influence of r variations on ACCs exhibits complexity, lacking a discernible universal trend. This complexity can be attributed to the fact that selecting an appropriate r is intricately tied to the inherent dynamic features of epileptic EEG signals. Consequently, an apt r is crucial for distinguishing signals corresponding to different states. The ACC reached its peak when r was set to 0.15·std in certain scenarios, such as when Le was set to 64, and R was 0.8. This finding suggests that, under these settings, r is optimally suited to the characteristics of the current dataset. This ensures accurate SEn calculation results by limiting the similarity between subsequences and mitigating the influence of noise or other interfering factors.

    The multi-head attention mechanism can extract feature representations with different attention weights to enhance the expressive ability of the model; thus, the number of heads h is usually set to a value between 2 and 8 [37]. The introduction of the L1 norm regular term makes the model learn sparse feature representations, improving generalization performance. The L1 norm regularization parameter λ controls how much the regularization term affects the overall loss function and is usually set between 0.0001 and 0.1 [38]. Since there were only 100 samples per class, h was set to 4, 5, 6, 7, 8, and the value range of λ was set to [2^{-5}, 2^{-3}]. These settings provided sufficient model complexity while avoiding excessive computational burden. We compared the ACCs obtained from various combinations of h and λ, as shown in Figure 5.

    Figure 5.  ACCs of multi-head self-attention mechanism with different h and λ.

    When h was 4, the ACC increased (from 71.4% to 85.0%) as λ increased from 2⁻⁵ to 2⁻⁴, but decreased (to 72.4%) when λ was further increased to 2⁻³. Smaller regularization parameters may help the model retain crucial features, whereas large regularization parameters can cause the model to discard an excessive amount of information. When h was 5, ACCs fluctuated as λ varied from 2⁻⁵ to 2⁻³; the highest ACC (90.8%) occurred for the combination of 5 heads and λ = 2⁻⁴, which may be a more desirable combination. The ACC was higher overall when h was 6, with good results under all regularization parameters. The highest ACC (92.7%) occurred for the combination of 6 heads and λ = 2⁻⁴, indicating that this combination positively impacts the fusion of the time domain, frequency domain, and nonlinear dynamics features. ACCs were generally low when h was 7 or 8, decreasing significantly when the regularization parameter was 2⁻⁴. An excessive number of heads may increase model complexity and cause overfitting, explaining the observed performance decline.

    To sum up, σ was set to 5, Ls to 64, s to 8, Le to 64, R to 0.8, r to 0.15·std, h to 6, and λ to 2⁻⁴.

    The proposed MMFF method was compared with state-of-the-art feature extraction methods for epileptic EEG signals. Ten repeated experiments were conducted, and the average values were obtained from leave-one-out cross-validation. The compared methods included fast Fourier transform (FFT) [39], wavelet transform (WT) [40], mutual correlation power spectral density (RPSD) [41], the genetic process-based feature extraction system (GPF) [42], power spectral density (PSD) [43], time-frequency analysis (TFA) [44], time-frequency analysis and approximate entropy (TFAE) [45], and time-frequency domain and spatial feature fusion (TFSF) [46]. Table 2 presents the mean ACCs, specificity (SPE), and sensitivity (SEN), with the corresponding standard deviations, for the five-category classification of epileptic EEG signals.

    Table 2.  ACCs of different feature extraction methods (dataset: Set A, Set B, Set C, Set D, Set E).

    | Feature extraction method | Classification method | ACC (%) | SPE (%) | SEN (%) |
    |---|---|---|---|---|
    | FFT | Decision tree | 81.32 ± 3.46 | 80.02 ± 2.51 | 82.55 ± 3.04 |
    | WT | Artificial neural network | 82.74 ± 2.85 | 82.61 ± 2.01 | 83.69 ± 2.53 |
    | RPSD | SVM | 83.17 ± 1.55 | 83.52 ± 1.50 | 83.44 ± 2.08 |
    | GPF | K-nearest neighbor classifier | 78.44 ± 4.28 | 77.83 ± 3.58 | 79.48 ± 4.02 |
    | PSD | Gaussian mixture model | 86.53 ± 2.62 | 86.07 ± 2.05 | 87.09 ± 2.51 |
    | TFA | Artificial neural network | 88.21 ± 2.06 | 88.67 ± 1.53 | 88.57 ± 1.84 |
    | TFAE | SVM | 85.64 ± 1.47 | 85.32 ± 1.01 | 86.06 ± 1.53 |
    | TFSF | SVM | 89.59 ± 2.13 | 89.66 ± 1.53 | 90.83 ± 2.09 |
    | MMFF | SVM | 92.76 ± 1.64 | 92.51 ± 1.73 | 93.28 ± 1.57 |


    It is found in Table 2 that the choice of feature extraction method has a significant impact on ACCs. An ACC of 89.59 ± 2.13% was achieved by TFSF with a support vector machine (SVM) classifier, and 88.21 ± 2.06% by TFA with an artificial neural network classifier. This indicates that time-frequency domain features, and their fusion, positively affect epileptic EEG signal classification. TFAE achieved an ACC of 85.64 ± 1.47%, suggesting that its feature-classifier pairing may not be optimal. The SVM classifier achieved high ACCs with multiple feature extraction methods, showing good robustness and generalization ability for epileptic EEG signal classification. MMFF achieved the highest ACC (92.76 ± 1.64%), indicating that the method has good classification performance for the extracted features. There are complex interactions between feature extraction methods and classification methods: TFSF attained a high ACC with SVM, while other classifiers may not exhibit optimal performance with the same features. This underscores the importance of considering the compatibility between a feature extraction method and the characteristics of the specific task, and of identifying the most suitable combination to ensure optimal performance.
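    For reference, the SPE and SEN values reported in Table 2 for a five-category task can be computed per class from a confusion matrix and then averaged; the sketch below assumes macro-averaging, which the text does not state explicitly:

```python
import numpy as np

def multiclass_metrics(y_true, y_pred, n_classes=5):
    """Accuracy plus macro-averaged specificity and sensitivity over n_classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # rows: true class, cols: predicted class
    acc = np.trace(cm) / cm.sum()
    sens, spec = [], []
    for c in range(n_classes):
        tp = cm[c, c]
        fn = cm[c].sum() - tp              # class-c samples predicted elsewhere
        fp = cm[:, c].sum() - tp           # other samples predicted as class c
        tn = cm.sum() - tp - fn - fp
        sens.append(tp / (tp + fn) if tp + fn else 0.0)
        spec.append(tn / (tn + fp) if tn + fp else 0.0)
    return acc, float(np.mean(spec)), float(np.mean(sens))
```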

    Epileptic EEG signals contain a wealth of information, but this information is often complex and multi-modal. The time domain features were extracted by GPCA, converting the original epileptic EEG signals into a smaller, more discriminative feature subspace described by statistics such as the mean value and standard deviation. The frequency domain features were extracted by STFT, converting the epileptic EEG signals into energy distributions at different frequencies, including the power spectral density and the phase of the EEG signals. The nonlinear dynamic features were extracted by SEn to assess the complexity and irregularity of the epileptic EEG signals, such as self-similarity and complexity.
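    The three-modality extraction step can be illustrated with the simplified stand-in below: basic statistics replace the GPCA projection, scipy's STFT provides the spectral energies, and a variance-based irregularity proxy stands in for the full SEn computation; the sampling rate fs = 173.61 Hz corresponds to the commonly used Bonn Set A-E recordings but is an assumption here:

```python
import numpy as np
from scipy.signal import stft

def extract_modalities(sig, fs=173.61, nperseg=64):
    """Return (time, frequency, nonlinear) feature vectors for one EEG segment."""
    sig = np.asarray(sig, dtype=float)
    # Time domain: basic statistics (a stand-in for the GPCA projection).
    time_feats = np.array([sig.mean(), sig.std(), np.abs(np.diff(sig)).mean()])
    # Frequency domain: mean power per frequency bin of the STFT.
    f, t, Z = stft(sig, fs=fs, nperseg=nperseg)
    freq_feats = (np.abs(Z) ** 2).mean(axis=1)
    # Nonlinear dynamics: a crude irregularity proxy (a stand-in for SEn).
    nonlin_feats = np.array([np.log1p(np.var(np.diff(sig)))])
    return time_feats, freq_feats, nonlin_feats
```

    In the full method, the three vectors would be projected to a common dimension and fed as tokens to the multi-head self-attention fusion stage.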

    The MMFF method significantly increased the ACC in epileptic EEG signal classification tasks, reaching 92.76 ± 1.64%. This verifies that the multi-head self-attention mechanism can effectively learn the time, frequency, and nonlinear dynamic features interactively to better capture the multi-modal characteristics of epileptic EEG signals. In this method, the features of the three modalities were fused to better integrate the information of the different modalities while retaining the time, frequency, and nonlinear dynamic features of the epileptic EEG signals. This strategy improved the robustness and accuracy of feature extraction for epileptic EEG signals.

    The proposed method obtained satisfactory results, but some limitations remain. For example, the limited size and origin of the experimental datasets may affect the model's ability to generalize; future work includes expanding the dataset and further validating the model's applicability across a broader spectrum of scenarios. The calculation time of the model was too long, with each experiment lasting five days; we intend to refine the model structure, enhance the feature extraction techniques, and investigate more efficient epileptic EEG signal processing methods. In addition, our model lacks interpretability and requires the integration of clinical data and expert insights from real-world medical settings. In the future, we will advance the feature extraction of epileptic EEG signals in clinical practice to improve acceptability for medical professionals and patients. The results with the multi-head self-attention mechanism are not yet perfect, so new deep learning methods will be explored to improve classification performance [47,48]. Finally, the available public data were preprocessed and contained no significant noise; we will explore the model's sensitivity to noise in EEG data and its performance under different data qualities.

    We developed an MMFF method to improve epileptic EEG signal feature classification. Features of different modalities were extracted from the time domain, frequency domain, and nonlinear dynamics, respectively. These features were learned interactively through the multi-head self-attention mechanism, thereby acquiring the attention weights among them. The fused features preserve the time, frequency, and nonlinear dynamics information of epileptic EEG signals, screening out more representative epileptic features. Experimental results show that the proposed method, introducing MMFF with a multi-head self-attention mechanism, achieves superior performance in the five-category classification task, which proves the feasibility and advantage of the model in feature extraction. This study provides a compelling new method for feature extraction from epileptic EEG signals and a reference for the diagnosis and treatment of epilepsy.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by the National Natural Science Foundation of China (grant No. 51877013) and the Jiangsu Provincial Key Research and Development Program (grant No. BE2021636). This work was also sponsored by the Qing Lan Project of Jiangsu Province.

    The authors declare there is no conflict of interest.



    [1] M. E. Alexander, C. Bowman, S. M. Moghadas, R. Summers, A. B. Gumel, B. M. Sahai, A vaccination model for transmission dynamics of influenza, SIAM J. Appl. Dyn. Syst., 3 (2004), 503–524. https://doi.org/10.1137/030600370 doi: 10.1137/030600370
    [2] H. Whittle, S. Jaffar, M. Wansbrough, M. Mendy, U. Dumpis, A. Collinson, et al., Observational study of vaccine efficacy 14 years after trial of hepatitis B vaccination in Gambian children, BMJ, 325 (2002), 569. https://doi.org/10.1136/bmj.325.7364.569 doi: 10.1136/bmj.325.7364.569
    [3] M. Haber, I. M. Longini, M. E. Halloran, Measures of the effects of vaccination in a randomly mixing population, Int. J. Epidemiology, 20 (1991), 300–319. https://doi.org/10.1093/ije/20.1.300 doi: 10.1093/ije/20.1.300
    [4] X. N. Liu, Y. Takeuchi, S. Iwami, SVIR epidemic models with vaccination strategies, J. Theor. Biol., 253 (2008), 1–11. https://doi.org/10.1016/j.jtbi.2007.10.014 doi: 10.1016/j.jtbi.2007.10.014
    [5] J. M. Okwo-Bele, T. Cherian, The expanded programme on immunization: a lasting legacy of smallpox eradication, Vaccine, 29 (2011), D74–D79. https://doi.org/10.1016/j.vaccine.2012.01.080 doi: 10.1016/j.vaccine.2012.01.080
    [6] A. B. Sabin, Measles, killer of millions in developing countries: strategy for rapid elimination and continuing control, Eur. J. Epidemiology, 7 (1991), 1–22. https://doi.org/10.1007/BF00221337 doi: 10.1007/BF00221337
    [7] C. A. De Quadros, J. K. Andrus, J. M. Olive, C. M. Da Silveira, R. M. Eikhof, P. Carrasco, et al., Eradication of poliomyelitis: progress in the Americas, Pediatr. Inf. Dis. J., 10 (1991), 222–229. https://doi.org/10.1097/00006454-199103000-00011 doi: 10.1097/00006454-199103000-00011
    [8] M. Ramsay, N. Gay, E. Miller, M. Rush, J. White, P. Morgan-Capner, et al., The epidemiology of measles in England and Wales: rationale for 1994 national vaccination campaign, Commun. Dis. Rep., 4 (1994), R141-6.
    [9] G. Zaman, Y. H. Kang, I. H. Jung, Stability analysis and optimal vaccination of an SIR epidemic model, Biosystems, 93 (2008), 240–249. https://doi.org/10.1016/j.biosystems.2008.05.004 doi: 10.1016/j.biosystems.2008.05.004
    [10] S. J. Gao, H. S. Ouyang, J. J. Nieto, Mixed vaccination strategy in SIRS epidemic model with seasonal variability on infection, Int. J. Biomath., 4 (2011), 473–491. https://doi.org/10.1142/S1793524511001337 doi: 10.1142/S1793524511001337
    [11] J. Q. Li, Z. E. Ma, Qualitative analyses of SIS epidemic model with vaccination and varying total population size, Math. Comput. Model., 35 (2002), 1235–1243. https://doi.org/10.1016/S0895-7177(02)00082-1 doi: 10.1016/S0895-7177(02)00082-1
    [12] X. Z. Li, J. Wang, M. Ghosh, Stability and bifurcation of an SIVS epidemic model with treatment and age of vaccination, Appl. Math. Model., 34 (2010), 437–450. https://doi.org/10.1016/j.apm.2009.06.002 doi: 10.1016/j.apm.2009.06.002
    [13] Q. Liu, D. Q. Jiang, Stationary distribution of a stochastic staged progression HIV model with imperfect vaccination, Phys. A, 527 (2019), 121271. https://doi.org/10.1016/j.physa.2019.121271 doi: 10.1016/j.physa.2019.121271
    [14] Q. Liu, D. Q. Jiang, Global dynamical behavior of a multigroup SVIR epidemic model with Markovian switching, Int. J. Biomath., 15 (2022), 2150080. https://doi.org/10.1142/S1793524521500807 doi: 10.1142/S1793524521500807
    [15] A. Lahrouz, L. Omari, D. Kiouach, A. Belmaati, Complete global stability for an SIRS epidemic model with generalized non-linear incidence and vaccination, Appl. Math. Comput., 218 (2012), 6519–6525. https://doi.org/10.1016/j.amc.2011.12.024 doi: 10.1016/j.amc.2011.12.024
    [16] S. G. Ruan, W. D. Wang, Dynamical behavior of an epidemic model with a nonlinear incidence rate, J. Differ. Equ., 188 (2003), 135–163. https://doi.org/10.1016/S0022-0396(02)00089-X doi: 10.1016/S0022-0396(02)00089-X
    [17] W. M. Liu, S. A. Levin, Y. Iwasa, Influence of nonlinear incidence rates upon the behavior of SIRS epidemiological models, J. Math. Biology, 23 (1986), 187–204. https://doi.org/10.1007/BF00276956 doi: 10.1007/BF00276956
    [18] Y. F. Li, J. G. Cui, The effect of constant and pulse vaccination on SIS epidemic models incorporating media coverage, Commun. Nonlinear Sci. Numer. Simul., 14 (2009), 2353–2365. https://doi.org/10.1016/j.cnsns.2008.06.024 doi: 10.1016/j.cnsns.2008.06.024
    [19] M. B. Ghori, P. A. Naik, J. Zu, Z. Eskandari, M. Naik, Global dynamics and bifurcation analysis of a fractional-order SEIR epidemic model with saturation incidence rate, Math. Methods Appl. Sci., 45 (2022), 3665–3688. https://doi.org/10.1002/mma.8010 doi: 10.1002/mma.8010
    [20] P. A. Naik, J. Zu, M. Ghoreishi, Stability analysis and approximate solution of SIR epidemic model with crowley-martin type functional response and Holling type-II treatment rate by using homotopy analysis method, J. Appl. Anal. Comput., 10 (2020), 1482–1515. https://doi.org/10.11948/20190239 doi: 10.11948/20190239
    [21] Y. Sabbar, A. Zeb, D. Kiouach, N. Gul, T. Sitthiwirattham, D. Baleanu, et al., Dynamical bifurcation of a sewage treatment model with general higher-order perturbation, Results Phys., 39 (2022), 105799. https://doi.org/10.1016/j.rinp.2022.105799 doi: 10.1016/j.rinp.2022.105799
    [22] R. Rifhat, L. Wang, Z. D. Teng, Dynamics for a class of stochastic SIS epidemic models with nonlinear incidence and periodic coefficients, Phys. A, 481 (2017), 176–190. https://doi.org/10.1016/j.physa.2017.04.016 doi: 10.1016/j.physa.2017.04.016
    [23] Y. Sabbar, A. Khan, A. Din, D. Kiouach, S. P. Rajasekar, Determining the global threshold of an epidemic model with general interference function and high-order perturbation, AIMS Math., 7 (2022), 19865–19890. https://doi.org/10.3934/math.20221088 doi: 10.3934/math.20221088
    [24] P. Zhu, Y. C. Wei, The dynamics of a stochastic SEI model with standard incidence and infectivity in incubation period, AIMS Math., 7 (2022), 18218–18238. https://doi.org/10.3934/math.20221002 doi: 10.3934/math.20221002
    [25] B. Q. Zhou, D. Q. Jiang, B. T. Han, T. Hayat, Threshold dynamics and density function of a stochastic epidemic model with media coverage and mean-reverting Ornstein-Uhlenbeck process, Math. Comput. Simul., 196 (2022), 15–44. https://doi.org/10.1016/j.matcom.2022.01.014 doi: 10.1016/j.matcom.2022.01.014
    [26] Y. Alnafisah, M. El-Shahed, Deterministic and stochastic model for the hepatitis C with different types of virus genome, AIMS Math., 7 (2022), 11905–11918. https://doi.org/10.3934/math.2022664 doi: 10.3934/math.2022664
    [27] L. Wang, Z. D. Teng, C. Y. Ji, X. M. Feng, K. Wang, Dynamical behaviors of a stochastic malaria model: a case study for Yunnan, China, Phys. A, 521 (2019), 435–454. https://doi.org/10.1016/j.physa.2018.12.030 doi: 10.1016/j.physa.2018.12.030
    [28] Y. P. Tan, Y. L. Cai, X. Q. Wang, Z. H. Peng, K. Wang, R. X. Yao, et al., Stochastic dynamics of an SIS epidemiological model with media coverage, Math. Comput. Simul., 204 (2023), 1–27. https://doi.org/10.1016/j.matcom.2022.08.001 doi: 10.1016/j.matcom.2022.08.001
    [29] Y. Liu, Extinction, persistence and density function analysis of a stochastic two-strain disease model with drug resistance mutation, Appl. Math. Comput., 433 (2022), 127393. https://doi.org/10.1016/j.amc.2022.127393 doi: 10.1016/j.amc.2022.127393
    [30] B. Q. Zhou, B. T. Han, D. Q. Jiang, T. Hayat, A. Alsaedi, Stationary distribution, extinction and probability density function of a stochastic vegetation-water model in arid ecosystems, J. Nonlinear Sci., 32 (2022), 30. https://doi.org/10.1007/s00332-022-09789-7 doi: 10.1007/s00332-022-09789-7
    [31] B. Q. Zhou, X. H. Zhang, D. Q. Jiang, Dynamics and density function analysis of a stochastic SVI epidemic model with half saturated incidence rate, Chaos Solitons Fract., 137 (2020), 109865. https://doi.org/10.1016/j.chaos.2020.109865 doi: 10.1016/j.chaos.2020.109865
    [32] Y. B. Liu, D. P. Kuang, J. L. Li, Threshold behaviour of a triple-delay SIQR stochastic epidemic model with Lévy noise perturbation, AIMS Math., 7 (2022), 16498–16518. https://doi.org/10.3934/math.2022903 doi: 10.3934/math.2022903
    [33] X. B. Zhang, Q. H. Shi, S. H. Ma, H. F. Huo, D. G. Li, Dynamic behavior of a stochastic SIQS epidemic model with Lévy jumps, Nonlinear Dyn., 93 (2018), 1481–1493. https://doi.org/10.1007/s11071-018-4272-4 doi: 10.1007/s11071-018-4272-4
    [34] J. N. Hu, B. Y. Wen, T. Zeng, Z. D. Teng, Dynamics of a stochastic susceptible-infective-recovered (SIRS) epidemic model with vaccination and nonlinear incidence under regime switching and Lévy jumps, Int. J. Nonlinear Sci. Numer. Simul., 22 (2021), 391–407. https://doi.org/10.1515/ijnsns-2018-0324 doi: 10.1515/ijnsns-2018-0324
    [35] Q. Liu, D. Q. Jiang, T. Hayat, B. Ahmad, Analysis of a delayed vaccinated SIR epidemic model with temporary immunity and Lévy jumps, Nonlinear Anal. Hybrid Syst., 27 (2018), 29–43. https://doi.org/10.1016/j.nahs.2017.08.002 doi: 10.1016/j.nahs.2017.08.002
    [36] L. Lv, X. J. Yao, Qualitative analysis of a nonautonomous stochastic SIS epidemic model with Lévy jumps, Math. Biosci. Eng., 18 (2021), 1352–1369. https://doi.org/10.3934/mbe.2021071 doi: 10.3934/mbe.2021071
    [37] Y. M. Ding, Y. X. Fu, Y. M. Kang, Stochastic analysis of COVID-19 by a SEIR model with Lévy noise, Chaos, 31 (2021), 043132. https://doi.org/10.1063/5.0021108 doi: 10.1063/5.0021108
    [38] J. Danane, K. Allali, Z. Hammouch, K. S. Nisar, Mathematical analysis and simulation of a stochastic COVID-19 Lévy jump model with isolation strategy, Results Phys., 23 (2021), 103994. https://doi.org/10.1016/j.rinp.2021.103994 doi: 10.1016/j.rinp.2021.103994
    [39] D. Kiouach, Y. Sabbar, The long-time behavior of a stochastic SIR epidemic model with distributed delay and multidimensional Lévy jumps, Int. J. Biomath., 15 (2022), 2250004. https://doi.org/10.1142/S1793524522500048 doi: 10.1142/S1793524522500048
    [40] Y. Sabbar, D. Kiouach, S. P. Rajasekar, S. E. A. El-idrissi, The influence of quadratic Lévy noise on the dynamic of an SIC contagious illness model: new framework, critical comparison and an application to COVID-19 (SARS-CoV-2) case, Chaos Solitons Fract., 159 (2022), 112110. https://doi.org/10.1016/j.chaos.2022.112110 doi: 10.1016/j.chaos.2022.112110
    [41] X. P. Li, A. Din, A. Zeb, S. Kumar, T. Saeed, The impact of Lévy noise on a stochastic and fractal-fractional Atangana-Baleanu order hepatitis B model under real statistical data, Chaos Solitons Fract., 154 (2022), 111623. https://doi.org/10.1016/j.chaos.2021.111623 doi: 10.1016/j.chaos.2021.111623
    [42] X. R. Mao, Stochastic differential equations and applications, Horwood Publishing Limited, 1997.
    [43] G. Strang, Linear algebra and its applications, Singapore: Thomson Learning, 1988.
    [44] C. Zhu, G. Yin, Asymptotic properties of hybrid diffusion systems, SIAM J. Control Optim., 46 (2007), 1155–1179. https://doi.org/10.1137/060649343 doi: 10.1137/060649343
    [45] Y. L. Cai, Y. Kang, M. Banerjee, W. M. Wang, A stochastic epidemic model incorporating media coverage, Commun. Math. Sci., 14 (2016), 893–910. https://doi.org/10.4310/CMS.2016.v14.n4.a1 doi: 10.4310/CMS.2016.v14.n4.a1
    [46] H. Roozen, An asymptotic solution to two-dimensional exit problem arising in population dynamics, SIAM J. Appl. Math., 49 (1989), 1793–1810. https://doi.org/10.1137/0149110 doi: 10.1137/0149110
    [47] T. C. Gard, Introduction to stochastic differential equations, New York: Dekker, 1988.
    [48] D. J. Higham, An algorithmic introduction to numerical simulation of stochastic differential equations, SIAM Review, 43 (2001), 525–546. https://doi.org/10.1137/S0036144500378302 doi: 10.1137/S0036144500378302
    © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)