Research article

EMD-based analysis of complexity with dissociated EEG amplitude and frequency information: a data-driven robust tool -for Autism diagnosis- compared to multi-scale entropy approach

  • Objective: Autism spectrum disorder (ASD) is usually characterised by altered social skills, repetitive behaviours, and difficulties in verbal/nonverbal communication. It has been reported that electroencephalograms (EEGs) in ASD are characterised by atypical complexity. The most commonly applied method in studies of ASD EEG complexity is multiscale entropy (MSE), where the sample entropy is evaluated across several scales. However, the accuracy of MSE-based classifications between ASD and neurotypical EEG activities is poor owing to several shortcomings in scale extraction and length, the overlap between amplitude and frequency information, and sensitivity to frequency. The present study proposes a novel, nonlinear, non-stationary, adaptive, data-driven, and accurate method for the classification of ASD and neurotypical groups based on EEG complexity and entropy without the shortcomings of MSE. Approach: The proposed method is as follows: (a) each ASD and neurotypical EEG (122 subjects × 64 channels) is decomposed using empirical mode decomposition (EMD) to obtain the intrinsic components (intrinsic mode functions). (b) The extracted components are normalised through the direct quadrature procedure. (c) The Hilbert transforms of the components are computed. (d) The analytic counterparts of components (and normalised components) are found. (e) The instantaneous frequency function of each analytic normalised component is calculated. (f) The instantaneous amplitude function of each analytic component is calculated. (g) The Shannon entropy values of the instantaneous frequency and amplitude vectors are computed. (h) The entropy values are classified using a neural network (NN). (i) The achieved accuracy is compared to that obtained with MSE-based classification. (j) The consistency of the results of entropy 3D mapping with clinical data is assessed. Main results: The results demonstrate that the proposed method outperforms MSE (accuracy: 66.4%), with an accuracy of 93.5%. Moreover, the entropy 3D mapping results are more consistent with the available clinical data regarding brain topography in ASD. Significance: This study presents a more robust alternative to MSE, which can be used for accurate classification of ASD/neurotypical as well as for the examination of EEG entropy across brain zones in ASD.
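    The feature-extraction chain in steps (a) through (g) can be illustrated with a minimal Python sketch. This is not the authors' implementation: PyEMD's EMD and SciPy's hilbert stand in for the decomposition and analytic-signal steps, the direct-quadrature normalisation of step (b) is omitted, and the 64-bin histogram used before the Shannon entropy is an assumption.

```python
# Minimal sketch (not the authors' code): EMD -> Hilbert -> instantaneous
# amplitude/frequency -> Shannon entropy features for one EEG channel.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import entropy
from PyEMD import EMD  # assumed dependency: pip install EMD-signal

def entropy_features(eeg, fs):
    """Return (amplitude entropy, frequency entropy) per IMF of one channel."""
    imfs = EMD().emd(eeg)                      # step (a): intrinsic mode functions
    features = []
    for imf in imfs:
        analytic = hilbert(imf)                # steps (c)-(d): analytic signal
        inst_amp = np.abs(analytic)            # step (f): instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)   # step (e), without DQ normalisation
        # step (g): Shannon entropy of histogram-binned amplitude/frequency vectors
        h_amp, _ = np.histogram(inst_amp, bins=64, density=True)
        h_freq, _ = np.histogram(inst_freq, bins=64, density=True)
        features.append((entropy(h_amp + 1e-12), entropy(h_freq + 1e-12)))
    return features

# Example: 10 s of synthetic data at 256 Hz standing in for a real EEG channel
fs = 256
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(entropy_features(eeg, fs)[:2])
```

    In the full method described above, such per-component entropy values would be collected over all 64 channels and passed to the neural network classifier of step (h).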

    Citation: Enas Abdulhay, Maha Alafeef, Hikmat Hadoush, V. Venkataraman, N. Arunkumar. EMD-based analysis of complexity with dissociated EEG amplitude and frequency information: a data-driven robust tool -for Autism diagnosis- compared to multi-scale entropy approach[J]. Mathematical Biosciences and Engineering, 2022, 19(5): 5031-5054. doi: 10.3934/mbe.2022235




    Health information literacy (HIL) generally refers to individuals' ability to access, understand and use health information and services [1]. It covers a range of skills and knowledge, including the acquisition, evaluation and application of health information, as well as the use of this information to make sound decisions [2]. It also includes understanding the limitations and potential risks of health information and knowing when to seek professional medical advice [3]. The HIL status of different groups, and the relationships among its influencing factors, form an important field of exploration [4]. In practice, different populations may differ in how they access, understand, evaluate and use health information, and these differences may be shaped by a variety of factors. Individuals of different age groups may differ in how they obtain and use health information. The level of education may affect an individual's ability to understand, evaluate and utilize health information: people with higher levels of education may find it easier to understand complex health information, while those with lower levels of education may face more difficulties. Differences in health status may also affect individuals' ability to access and use health information. With the integration of information technology into various fields, some scholars have proposed the concept of HIL [5], which refers to the ability of users to identify health information needs, obtain health information from reliable sources and make reasonable health decisions. In this high-tech era, in which the amount of information is growing rapidly and the value of information keeps rising, few people have mastered the methods and skills of information acquisition, and those who obtain high value-added scientific and technological information gain the initiative and play a leading role in social competition [6,7]. Almost all universities in China have carried out various forms of information literacy education and have conducted sustained research and discussion on it [8]. Accordingly, evaluation research on HIL is also on the rise. European and American countries have developed many HIL assessment tools suited to their own citizens according to their language, culture, socio-economic conditions and medical systems [9]. In particular, HIL evaluation tools developed and applied from a clinical perspective are already quite mature [10].

    The cognitive strategies of health information literacy refer to how learners process information in the current task [11]. These strategies must be combined with the activities and steps taken in the process of completing a learning task [12]. To improve learners' awareness of learning strategies for health information literacy, a variety of authentic learning materials should be provided [13]. In addition, improvement of ability does not lie in the number of strategies used, but in their flexible and rational use [14]. Because a strategy is designed in advance, it is difficult to ensure that it fully matches the recipient and remains novel during application [15]. The designer takes into account the background of the target, the relevant environment and the possible results, but health activities are, after all, mainly interactions between people [16], and it is difficult for designers to fully anticipate the problems that arise when a strategy is applied [17].

    For example, problems that could be solved at the original cognitive level may become difficult because of interference [18], or may become too simple because of some inspiration. Teaching is a process of guiding and encouraging students to grow and develop continuously [19]. The level of cognition and the advantages and disadvantages of cognitive strategies are judged against a relative frame of reference [20]. Students who are weak in some respects may have advantages in others [21,22,23], and students with low enthusiasm for learning at one stage may show higher enthusiasm at another. Therefore, teaching strategies must be adjusted in a timely and flexible manner in addition to being set accurately [24]. As society moves into the information age, the network, as an important carrier of big data, provides people with rich, novel, rapidly updated and diversified information, making it easier for the public to obtain information [25]. However, compared with research abroad in the field of semi-physical teaching, research in China has only just started, and there is little research on this group of college students [26]. Therefore, exploring the relationship between the HIL status quo of this group and its influencing factors will help to deeply understand the characteristics and laws of this group, and has far-reaching significance for improving the HIL level of college students. Research on HIL evaluation will also help China grasp the correct research direction, align with international standards as soon as possible and provide an important reference for the development of localized HIL evaluation tools suitable for China.

    The research on HIL evaluation in this paper will help to improve understanding of the essence and influencing factors of HIL and to further explore the relationship between HIL and health status. By evaluating HIL, we can better understand individuals' ability to acquire, understand and apply health information, thereby providing them with more precise and personalized health services and interventions. The contributions of this research are as follows:

    ● This paper proposes an intelligent prediction method for health literacy based on deep information fusion. Specifically, latent Dirichlet allocation (LDA) and convolutional neural network (CNN) structures are used as the basic framework for understanding the semantic features of text content.

    ● This study will fill the gap in the field of HIL evaluation by systematically sorting and performing in-depth analysis of the evaluation methods of HIL, providing a reference for subsequent research.

    ● This study will also propose HIL evaluation schemes targeting different populations and scenarios through the evaluation and comparison of existing evaluation tools, providing guidance for practical applications.

    In the information age, people's life and study are filled with all kinds of information, and information literacy is an essential skill for living and learning in the era of big data. With the rapid changes in science and the social environment, people's living standards and ideology have changed, public awareness of and requirements for health have been constantly improving, and attention to health has also increased. However, food safety and medical problems caused by low-quality health information have gradually increased. The authors argue that HIL is a subset of information literacy and, at the same time, one of the important influencing factors of health literacy. Improving HIL is of great benefit to improving public information literacy and health literacy.

    With the gradual deepening of HIL research, scholars began to realize that HIL evaluation tools designed for all residents may not account for population specificity, and that HIL evaluation should therefore be conducted for different groups of people. A topic model is a class of machine learning models that tries to find latent topic structures in massive document collections [27,28]. Before topic modeling, in order to save storage space and improve retrieval efficiency when the model runs, some meaningless words must be filtered out in advance to shorten the text, for example, search terms that appear in every article, or verbs and nouns that carry no actual meaning but appear many times in the results. By collecting such words, a stoplist for a specific knowledge field can be obtained.

    After text preprocessing, the TF-IDF (term frequency-inverse document frequency) weight of every word must be calculated. For a word $w_i$ in document $d_j$, its TF can be expressed as:

    $TF_{i,j} = \frac{n_{i,j}}{\sum_{k} n_{k,j}}.$ (2.1)

    In the above formula, the numerator $n_{i,j}$ represents the number of times that the word $w_i$ appears in the document $d_j$, and the denominator $\sum_{k} n_{k,j}$ represents the total number of occurrences of all words in the document $d_j$. The LDA model is a typical Bayesian network structure, which assumes that every document is a random mixture of hidden topics, and that hidden topics are randomly composed of feature words with certain probabilities. Figure 1 shows the LDA probability model, which is divided into the document collection layer, the document layer and the feature word layer, and each layer is controlled by random variables. Here $z$ represents potential topics and $w$ represents feature words.

    Figure 1.  LDA probability model diagram.

    Vector $a$ and matrix $b$ define the document-set level. The vector $a$ defines the relative strength of the potential hidden topics in the document set, the matrix $b$ represents the probability distribution of the potential hidden topics, and the element $b_{i,j}$ represents the probability that the $j$-th feature word belongs to the $i$-th hidden topic. According to the above process, the generation probability of the $i$-th feature word $w_i$ in document $d$ is:

    $P(w_i) = \sum_{j=1}^{T} P(w_i \mid z_i = j)\, P(z_i = j).$ (2.2)

    In the formula, $P(w_i \mid z_i = j)$ represents the probability that the feature word $w_i$ comes from the potential topic $z_i$, and $P(z_i = j)$ represents the probability that the document contains the topic $z_i$.
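    As a concrete illustration of this preprocessing and topic-modelling stage, the following is a minimal sketch using gensim; the toy token lists, the number of topics and the specific gensim calls are assumptions, not the paper's actual pipeline.

```python
# Minimal sketch (assumed setup, not the paper's pipeline): TF-IDF weighting
# in the spirit of Eq (2.1) and LDA topic inference in the spirit of Eq (2.2).
from gensim import corpora, models

docs = [
    ["health", "information", "literacy", "evaluation"],
    ["health", "education", "information", "service"],
    ["topic", "model", "text", "information"],
]  # hypothetical, already-tokenised documents with stop words removed

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]          # raw term counts n_{i,j}

tfidf = models.TfidfModel(bow)                        # TF-IDF re-weighting
weighted = [tfidf[d] for d in bow]

lda = models.LdaModel(bow, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)      # hidden topics z
print(lda.print_topics())                             # P(w_i | z_i = j) per topic
print(lda.get_document_topics(bow[0]))                # P(z_i = j) for one document
```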

    Before the advent of neural network language models, the most commonly used model was the n-gram language model trained on natural language corpora. The basic idea of this model is that the probability of a word appearing in a text depends only on the $n-1$ words preceding it, so the probability of each word is conditioned on the words before it [29]. The complexity of parameter selection may increase as the dimension of the input word vector increases, and higher-dimensional input word vectors may also lead to overfitting, which requires more elaborate regularization techniques. At the same time, higher-dimensional input word vectors make the mapped features richer and more complex, because they contain more semantic information and can better capture the subtle differences and semantic relationships between words. This may allow the model to map inputs to outputs more accurately and produce richer and more accurate predictions. A sentence $T$ can be expressed as $T = (w_1, \ldots, w_n)$, where $w_i$ represents the $i$-th word in the sentence $T$; the probability of the sentence $T$ appearing in the text can then be calculated as:

    $P(T) = \prod_{i=1}^{N} P(w_i \mid w_{i-n+1}, \ldots, w_{i-1}).$ (2.3)
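    A minimal bigram instance of Eq (2.3), estimated from raw counts on a toy corpus without smoothing, is sketched below; the corpus and the choice n = 2 are assumptions for illustration only.

```python
# Minimal bigram sketch of Eq (2.3): P(T) = prod_i P(w_i | w_{i-1}),
# estimated from raw counts on a toy corpus (no smoothing).
from collections import Counter

corpus = "health information literacy helps people use health information".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def sentence_prob(words):
    p = 1.0
    for prev, cur in zip(words, words[1:]):
        p *= bigrams[(prev, cur)] / unigrams[prev]   # P(w_i | w_{i-1})
    return p

print(sentence_prob("health information literacy".split()))
```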

    This not only reduces the number of network parameters but also reduces the complexity of parameter selection, thus making the feature mapping unique and stable. Converting a one-to-many mapping into one-to-two mappings usually involves classification, that is, assigning an object to one of two categories based on multiple features or attributes. A threshold can be set according to the distribution of a feature or attribute, and objects greater or smaller than this threshold are assigned to different categories; this method is simple and easy to implement. When converting a one-to-many mapping into one-to-two mappings, a class-imbalance problem may arise, in which the number of samples in some categories is much larger than in others. In this case, special processing methods such as oversampling, undersampling and synthetic minority oversampling are needed to improve the classification performance. These advantages become especially obvious when input word vectors of higher dimension are used.

    $x_j^l = f\Big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\Big),$ (2.4)

    where $x_i^{l-1}$ is an input feature map, $x_j^l$ is the output feature map, $k_{ij}^l$ is the convolution kernel weight, $b_j^l$ is the bias, $M_j$ is the set of selected input maps and $f(\cdot)$ is the activation function. From a mathematical point of view, text classification is a process of mapping texts of unknown category to predefined categories. This mapping can be one-to-one, one-to-two or one-to-many; one-to-many mappings are usually converted into one-to-two mapping problems for analysis.
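    A minimal NumPy sketch of Eq (2.4) is given below; the 'valid' convolution, the tanh activation and the toy map sizes are assumptions used only to make the formula concrete.

```python
# Minimal NumPy sketch of Eq (2.4): one output feature map x_j^l obtained by
# convolving the selected input maps with kernels, adding a bias and applying f.
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D cross-correlation of one input map with one kernel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * k)
    return out

def feature_map(inputs, kernels, bias, f=np.tanh):
    """x_j^l = f( sum_{i in M_j} x_i^{l-1} * k_ij^l + b_j^l )."""
    acc = sum(conv2d_valid(x, k) for x, k in zip(inputs, kernels))
    return f(acc + bias)

rng = np.random.default_rng(0)
inputs = [rng.standard_normal((8, 8)) for _ in range(2)]   # two input maps
kernels = [rng.standard_normal((3, 3)) for _ in range(2)]  # one kernel per map
print(feature_map(inputs, kernels, bias=0.1).shape)        # (6, 6)
```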

    Then there is a relationship $W$ between the new text set $I$ and the set of predefined categories $J$, which is expressed as:

    $W: I \rightarrow J.$ (2.5)

    For a document $i$ in $I$, $W(i)$ is known information; the text set can be processed by the supervised training of a text classification algorithm, and a text classification model $R$ approximating $W$ can be obtained. $R$ is expressed as:

    $R: I \rightarrow J.$ (2.6)

    This can give full play both to the advantages of automatic disambiguation and word segmentation based on string-frequency statistics and to the high efficiency and fast segmentation speed of string matching [30,31].

    Expected cross entropy (ECE) measures the amount of information obtained when a certain feature word appears in the text. Given a feature $t$ and a text category $c_i$, the ECE value can be expressed through the distance between $P(c_i \mid t)$ and $P(c_i)$. The calculation formula is:

    $ECE(t) = P(t) \sum_{i=1}^{n} P(c_i \mid t) \log \frac{P(c_i \mid t)}{P(c_i)}.$ (2.7)

    The greater $ECE(t)$ is, the greater the influence of feature $t$ on classification. If the feature $t$ is strongly correlated with the category $c_i$, then $P(c_i \mid t)$ is large, and when $P(c_i)$ is small, the feature $t$ has a greater influence on classification.
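    The following short sketch evaluates Eq (2.7) directly from probabilities; the numbers are hypothetical and only illustrate how a concentrated class distribution raises the score.

```python
# Minimal sketch of Eq (2.7): expected cross entropy of a feature t, computed
# from hypothetical probabilities (not the paper's data).
import numpy as np

def expected_cross_entropy(p_t, p_c_given_t, p_c):
    """ECE(t) = P(t) * sum_i P(c_i|t) * log( P(c_i|t) / P(c_i) )."""
    p_c_given_t = np.asarray(p_c_given_t, dtype=float)
    p_c = np.asarray(p_c, dtype=float)
    return p_t * np.sum(p_c_given_t * np.log(p_c_given_t / p_c))

# Feature appears in 30% of documents and shifts the class distribution
# from the prior (0.5, 0.5) towards class 1 -> positive ECE value.
print(expected_cross_entropy(0.3, [0.8, 0.2], [0.5, 0.5]))
```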

    This paper selects noun morphemes and specialized noun morphemes for subsequent word segmentation. After other part-of-speech words are screened out, two main steps remain: synonym merging and name recognition. Features that appear in too few documents are not general and cannot discriminate articles of a category well, so words in this case are treated as noise words. According to current research, there is no universally applicable method for determining this threshold; it is generally determined through a large number of experiments, and the resulting effects are used to judge it. In this paper, an acronym-definition method is used in the initial stage of feature dimensionality reduction in order to shrink the large feature set and remove noise.

    Chinese itself lacks word boundaries and strict rules, and the existing rules of morphology, syntax and combination in the linguistic field are still very general and complicated. Moreover, the words appearing in a page description are more closely related to the article category than the words in the body text [32]. In this paper, the LDA topic model is applied to text processing, mainly for text similarity calculation, and the text clustering and text recommendation algorithms are then improved on this basis. The contribution of each keyword in the text is calculated, mostly using the TF-IDF method. This approach takes into account the semantics contained in the text and the semantics of each keyword, and avoids the ambiguity described above.

    The text vector of potential topics based on LDA is $d = (z_1, \ldots, z_T)$, where $T$ is the number of potential topics. The text similarity based on LDA potential topic vectors is calculated as:

    $Sim(d_i, d_j) = \frac{d_i \cdot d_j}{|d_i|\,|d_j|}.$ (2.8)
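    Eq (2.8) is the standard cosine similarity applied to topic-weight vectors; a minimal sketch with hypothetical vectors follows.

```python
# Minimal sketch of Eq (2.8): cosine similarity between two LDA topic vectors
# (hypothetical topic-weight vectors of equal length T).
import numpy as np

def lda_similarity(d_i, d_j):
    d_i, d_j = np.asarray(d_i, float), np.asarray(d_j, float)
    return float(d_i @ d_j / (np.linalg.norm(d_i) * np.linalg.norm(d_j)))

print(lda_similarity([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))  # close to 1.0
```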

    When choosing the index feature vector to describe a sample, in order to leave nothing out, the same property is often described several times under different names, which results in overlapping information. According to domain knowledge or a feature-variable clustering method, an appropriate set of feature variables is chosen. Alternatively, the following Mahalanobis distance can be used:

    $D(X, Y) = \sqrt{(X - Y)^{T} S^{-1} (X - Y)},$ (2.9)

    where $S$ is the covariance matrix of the sample matrix $A$, and $X$ and $Y$ are feature vectors drawn from the population distribution.
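    A minimal sketch of Eq (2.9) is shown below, with the covariance matrix S estimated from a toy sample matrix A; the data and dimensions are assumptions.

```python
# Minimal sketch of Eq (2.9): Mahalanobis distance between two feature vectors,
# with the covariance matrix S estimated from a sample matrix A (toy data).
import numpy as np

def mahalanobis(x, y, S):
    diff = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(diff @ np.linalg.inv(S) @ diff))

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))   # sample matrix: 100 samples x 3 features
S = np.cov(A, rowvar=False)         # covariance matrix of A
print(mahalanobis(A[0], A[1], S))
```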

    The Mahalanobis distance is an improvement of the Minkowski distance: it is invariant to linear transformations and overcomes the disadvantage that the Minkowski distance is influenced by the dimensions (units) of the variables, and it also partially overcomes the effect of correlations among variables. The significance of comparing the rough subtraction method proposed in this article with the traditional unsupervised disambiguation algorithm lies in providing a comparison and reference for the subsequent experiments. Rough subtraction is an algorithm based on rough set theory that can be used to process imprecise or uncertain data, while traditional unsupervised disambiguation algorithms are based on clustering or classification methods aimed at eliminating ambiguity and uncertainty in the data. Comparing these two algorithms with the CNN shows the performance and effect of different algorithms on the health information literacy evaluation problem, and helps in understanding the advantages, disadvantages and applicable scenarios of each algorithm.

    The core idea of rough subtraction is to reduce the dimension of the system, as defined in rough set theory, without reducing its classification ability; the introduction of the approximation concept brings many advantages. It can handle large-scale data, and such data may be inaccurate or ambiguous. Through the derivation of upper-approximation and lower-approximation theory, a rough set can obtain the minimal expression of knowledge, which is the theoretical basis of knowledge reduction by rough sets. The process is shown in Figure 2.

    Figure 2.  Rough set simplification process.

    To use rough set reduction, a text model must be established first. This paper chooses the Boolean model, which is easy to build and adapts very well to the application fields targeted by this program, although its data requirements are strict and its scope of application is limited. This simple discretization method works very well when the data distribution is concentrated and there is little noise. Then, according to the set number of intervals, equidistant division is carried out; this method is widely used and gives a very good discretization effect when the data are evenly distributed. Traditional unsupervised disambiguation algorithms usually use the co-occurrence rate to achieve rapid and relatively accurate disambiguation. The co-occurrence rate of words refers to the frequency with which two words appear together in an article, and it is used as the basis of disambiguation. For example, if neither the correct sense nor any of the wrong senses of a polysemous word appears in the full text of the article, the co-occurrence rates of all senses will be almost equal, and it is then difficult to resolve the ambiguity of the polysemous word using the co-occurrence rate alone. The co-occurrence rate is calculated as:

    $T(w_1, w_2) = \log_2 \frac{p(w_1, w_2)}{p(w_1)\, p(w_2)},$ (2.10)

    where $w_1$ is a polysemous word, $w_2$ is a word representing one sense of the polysemous word, $s$ is the scope over which the co-occurrence rate is computed (usually the whole statement in which $w_1$ is located) and $p(w)$ denotes the frequency of the word $w$ in the whole text.
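    A minimal sketch of the co-occurrence rate in Eq (2.10) follows; the probabilities are hypothetical and the score is computed directly from them rather than from a real corpus.

```python
# Minimal sketch of Eq (2.10): co-occurrence rate (a PMI-style score) of a
# polysemous word w1 and a candidate sense word w2, from toy frequencies.
import math

def co_occurrence_rate(p_w1w2, p_w1, p_w2):
    """T(w1, w2) = log2( p(w1, w2) / (p(w1) * p(w2)) )."""
    return math.log2(p_w1w2 / (p_w1 * p_w2))

# w1 and w2 co-occur more often than independence would predict -> positive score
print(co_occurrence_rate(p_w1w2=0.02, p_w1=0.05, p_w2=0.10))
```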

    In the word forest, word coding is used to calculate similarity and merge. The similarity calculation formula is shown as:

    $S(k_1, k_2) = \frac{\alpha}{Dis(k_1, k_2)}.$ (2.11)

    The numerator in the formula is a constant, and the denominator represents the distance between the words $k_1$ and $k_2$: the closer the semantic distance, the smaller the denominator and the greater the calculated similarity. Literature is generally understood as the sum of books, periodicals, papers and other texts that record knowledge. When clustering papers and documents, the biggest difference from clustering data in traditional databases is that data in traditional databases are structured, while text is unstructured. Regardless of the purpose and means of text mining, text preprocessing includes two basic steps: word segmentation and stop-word removal. Figure 3 shows the specific framework of literature analysis based on CNN.

    Figure 3.  CNN-based document analysis framework.

    The CNN model is a multi-layer neural network. Each layer of the model is made up of multiple two-dimensional planes, and each plane is made up of multiple independent neurons. The extracted features are fed into the fully connected layer to obtain the final output; the top layer is usually a fully connected layer used as a classifier. In this way, each convolutional layer of the network can extract the most salient features of the data through digital filters. During back propagation, the weights of the network are adjusted according to the error between the actual output and the expected output, and the errors of the other layers are then adjusted by propagating the error back layer by layer. The output of each unit of the hidden layer is:

    $h_j = f\Big(\sum_{i=0}^{L-1} V_{ij} x_i + \varphi_j\Big).$ (2.12)

    The output of each unit of the output layer is:

    $y_k = f\Big(\sum_{j=0}^{L-1} W_{jk} h_j + \theta_k\Big),$ (2.13)

    where $x_i$ is the $i$-th input, $h_j$ is the output of hidden unit $j$, $V_{ij}$ is the weight from input unit $i$ to hidden unit $j$, $W_{jk}$ is the weight from hidden unit $j$ to output unit $k$, $\varphi_j$ and $\theta_k$ are the thresholds of the hidden and output units, respectively, and $f(\cdot)$ is the activation function.
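    A minimal NumPy sketch of the forward pass in Eqs (2.12) and (2.13) follows; the sigmoid activation, the layer sizes and the random weights are assumptions chosen only to make the equations executable.

```python
# Minimal NumPy sketch of Eqs (2.12)-(2.13): forward pass through one hidden
# layer and one output layer with sigmoid activation (toy dimensions).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, V, phi, W, theta, f=sigmoid):
    h = f(V.T @ x + phi)        # Eq (2.12): hidden unit outputs h_j
    y = f(W.T @ h + theta)      # Eq (2.13): output unit values y_k
    return h, y

rng = np.random.default_rng(0)
x = rng.standard_normal(4)                  # input features
V = rng.standard_normal((4, 5))             # input -> hidden weights V_ij
W = rng.standard_normal((5, 3))             # hidden -> output weights W_jk
h, y = forward(x, V, rng.standard_normal(5), W, rng.standard_normal(3))
print(y)
```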

    In feature selection, the degree of independence between a feature $t$ and a topic class $c$ can be tested with the $\chi^2$ statistic. The calculation formula is:

    $\chi^2(t, c) = \frac{N\,(AD - BC)^2}{(A+C)(B+D)(A+B)(C+D)},$ (2.14)

    where $A$ is the number of documents that belong to category $c$ and contain feature $t$, $B$ is the number of documents that do not belong to category $c$ but contain feature $t$, $C$ is the number of documents that belong to category $c$ but do not contain feature $t$, $D$ is the number of documents that neither belong to category $c$ nor contain feature $t$ and $N$ is the total number of documents in the training text set.
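    Eq (2.14) can be evaluated directly from the four counts; the sketch below uses hypothetical document counts.

```python
# Minimal sketch of Eq (2.14): chi-square feature/category association from
# the four document counts A, B, C, D (hypothetical counts).
def chi_square(A, B, C, D):
    N = A + B + C + D
    return N * (A * D - B * C) ** 2 / ((A + C) * (B + D) * (A + B) * (C + D))

# A feature concentrated in one category gives a large chi-square value.
print(chi_square(A=40, B=10, C=5, D=45))
```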

    The correlation (coherence score) is calculated as:

    $\text{CoherenceScore} = \frac{\sum_{i=1}^{|s|} \sum_{j < i} \text{score}(s_i, s_j)}{|s| - 1}.$ (2.15)

    During iteration, when the change of the cluster centre is less than $\beta_1$, the whole cluster is added to the selected data set and deleted from the sample set, so that only the samples that have not yet been correctly identified remain in the original sample data set. The change of the centre point is calculated as:

    $\beta_r = \left| \frac{1}{|T_{r,i}|} \sum_{a_i \in T_{r,i}} a_i - \frac{1}{|T_{r-1,i}|} \sum_{a_j \in T_{r-1,i}} a_j \right|,$ (2.16)

    where $r$ is the number of iterations of the algorithm and $T_{r,i}$ represents the $i$-th category at the $r$-th iteration. When $\beta_r \leq \beta_1$, the condition is met, and the remaining samples are screened until all sample data are correctly identified.
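    A minimal sketch of the centre-shift test of Eq (2.16) follows; the cluster members are toy two-dimensional vectors, and using the Euclidean norm as the magnitude of the difference between cluster means is an assumption for the multivariate case.

```python
# Minimal sketch of Eq (2.16): shift of a cluster centre between two iterations,
# used as the stopping criterion beta_r <= beta_1 (toy vectors).
import numpy as np

def centre_shift(cluster_prev, cluster_curr):
    """Magnitude of mean(current cluster) - mean(previous cluster)."""
    c_prev = np.mean(cluster_prev, axis=0)
    c_curr = np.mean(cluster_curr, axis=0)
    return float(np.linalg.norm(c_curr - c_prev))

prev_members = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]])
curr_members = np.array([[1.0, 2.0], [1.1, 1.9], [1.0, 2.0], [1.2, 2.1]])
beta_r = centre_shift(prev_members, curr_members)
print(beta_r < 0.2)   # compare against the threshold beta_1
```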

    Compared with early cognitive radio technology, the biggest difference of a cognitive network is that the objects of perception and management have changed. As the evolution of the cognitive radio network, a cognitive network is not limited to spectrum resources. In order to meet end-to-end decision objectives, a cognitive network needs to manage and reconfigure the entire network, which means that management covers the whole network. Therefore, for a cognitive network, all the links and factors in the whole network that can affect the communication target are objects that it perceives, analyzes, manages and configures. The concept of multi-dimensionality needs to be introduced here. From a macro perspective, the multi-dimensional nature of resources refers to the diversity of cognitive network resources; from a micro perspective, it means that the performance of each resource can be described from multiple perspectives, that is, through a diversity of feature parameters. This chapter first introduces the resource analysis process of cognitive networks and then constructs multi-dimensional representations for different resources.

    Multidimensional cognitive information analysis is a quantitative method based on mathematical statistics that studies the external characteristics of the literature, whereas content analysis is a qualitative method that studies the content of the literature. In this paper, the evolution of information literacy education research is quantitatively analyzed on the basis of multidimensional cognitive information, and content analysis is carried out by combining qualitative analysis with the distribution of information literacy education topics. Using multidimensional cognitive information analysis software, this paper analyzes the HIL literature in the Web of Science database from 2010 to 2020 along six dimensions: year, country, author, research institution, keywords and citations. On this basis, the academic level of the HIL field is studied, and the present situation and laws of HIL development are explored.

    The common functions of computer-aided multidimensional cognitive information tools are bibliographic statistics, generation of co-occurrence matrices, cluster analysis and network analysis; some tools can directly visualize the metric results. CiteSpace is selected as the research tool in this paper. CiteSpace has a built-in data converter that can process Chinese and English data, integrates multidimensional cognitive information metrics with visual analysis, and supports network analysis of authors, institutions or countries, co-occurrence analysis of topics, keywords or disciplines, co-citation analysis of documents, authors or journals, and literature coupling analysis. The fit with the research purpose, the degree of resource acquisition and utilization, and the validity of the research method are the main reasons why CiteSpace was chosen as an auxiliary tool for multidimensional cognitive information analysis.

    This study not only uses various multidimensional cognitive information indicators to reveal the characteristics of the information ecology, but also combines information visualization techniques such as social network analysis and scientific knowledge mapping to vividly outline its development trend. The most important premise of drawing a subject knowledge map is therefore to construct the co-occurrence matrix of the relevant data.

    Using Bicomb 2.0 to count the literature by year, the research papers on information literacy education showed a year-by-year increase from 2010 to 2020; the results are shown in Figure 4. Although the growth was not obvious before 2011, the subsequent in-depth research and practice of Web 2.0 promoted the rapid development of Library 2.0 and kept the literature on information literacy education in university libraries growing steadily. According to the law of literature growth, the curve in Figure 4 shows that research on information literacy education is gradually maturing. Table 1 and Figure 5 show the distribution of research hotspots in HIL research papers from 2010 to 2020. The results indicate that the distribution of hot topics is relatively scattered; the number of papers in 2020 is relatively small, with a proportion of roughly 70% to 75%.

    Figure 4.  Statistical chart of information literacy education publications from 2010 to 2020.
    Table 1.  Distribution of research hotspots of HIL research papers in each year.
    Year Research hotspot Thesis number Proportion%
    2010 Overseas 30 80.541
    2011 Domestic 19 61.892
    2012 Domestic 19 80.267
    2013 Overseas 26 58.448
    2014 Overseas 35 78.426
    2015 Overseas 9 63.793
    2016 Overseas 16 75.266
    2017 Overseas 13 60.416
    2018 Overseas 32 81.144
    2019 Domestic 7 68.155
    2020 Domestic 21 75.279

    Figure 5.  Research hotspot road map.

    For cognitive network resources, there is no literature that explains them, and they extend far beyond the scope of a cognitive radio system, so their analysis is more complex and tedious. In order to describe cognitive network resources in a clear and orderly way, this paper proposes a layer-by-layer decomposition analysis based on similar methods and gradually analyzes and describes cognitive network resources. The resources of a cognitive network are the sum of all perceptible, manageable and operable network components and factors that can affect the end-to-end communication objectives in a cognitive network. Based on this positioning of cognitive network resources, and taking the network as a whole as the source of resources, the following two-layer decomposition analysis can be carried out: eight years of research hotspots are selected to establish the corresponding information literacy evaluation index system. The research hotspots in seven of these years all serve as evaluation criteria, while the research hotspots in the other years are designated as evaluation studies because the number of evaluation criteria and of related papers in evaluation practice in those years is the same.

    From 2010 to 2020, the research literature on HIL was published across many disciplines; the top 5 journals are shown in Table 2. On the whole, HIL research is published in many library and information science journals, which shows that libraries are widely involved in HIL research. One document in the Chinese Journal of Medical Library and Information Science has been cited more than 50 times, which may explain its high total citation frequency. In addition, although library and information work journals publish relatively few articles, the total citation frequency of library journals is high, which demonstrates that library and information science has a certain influence in the field of HIL research. The country with the largest number of publications is the United States (1729 of the 5047 publications listed), with a centrality of 0.501, as shown in Table 3 and Figure 6.

    Table 2.  Top 5 journals.
    Title Quantity of documents issued Total cited frequency
    Chinese Journal of Health Education 41 333
    Journal of Medical Informatics 35 225
    Modern Information 15 193
    Chinese Journal of Medical Library and Information Science 11 193
    Chinese Journal of School Health 8 132

    Table 3.  Ranking of articles issued by each country.
    Country Number of articles Centrality
    United States of America 1729 0.501
    Australia 572 0.405
    Britain 559 0.469
    Canada 557 0.428
    Germany 499 0.417
    The Netherlands 469 0.397
    China 327 0.493
    Spain 174 0.509
    Sweden 161 0.442

    Figure 6.  Discrete map of the number of articles issued by each country.

    With the rapid development of American science and technology, the research level of HIL in the United States is in the leading position, and it has become the object of competing cooperation among countries, forming a self-centred cooperation network. In contrast, although China ranks high in the number of published articles in the field of HIL, that number is only 18.9% of the United States' output, with a centrality of 0.493, which shows that the research achievements and cooperation in the field of HIL in China are still limited. If a certain keyword appears repeatedly in its research field within a certain period of time, or the number of publications on the topic it represents suddenly increases, that topic may become a research hotspot within that period. Figure 7 is the keyword knowledge map produced by the software analysis.

    Figure 7.  Keywords centrality of knowledge map.

    An important link in information literacy education is the cultivation of information acquisition ability, which is mainly reflected in the understanding of information sources and in the induction, analysis and utilization of information retrieval tools, retrieval techniques, retrieval strategies and retrieval results. Therefore, the curriculum system built around information literacy education, such as the formation of the main branches of the curriculum, the construction of the theoretical framework, the construction of teaching materials, the effectiveness of curriculum implementation and the reform of teaching methods, has become one of the research hotspots of information literacy education in recent years. This article refers to the health care big data standard of the Chinese Society of Health Information and Health Care. By organizing specialized training on medical information standards, we hope to help improve the management of health and medical informatization through systematic teaching. International standardization organizations such as ISO and HL7 are market-oriented, profit-seeking organizations; they pursue ever deeper standardization services and are therefore at the forefront of the world in standardized services. The differences in standardization organizations among countries are mainly reflected in differences in their behavioral roles and their numbers.

    According to the statistics, there are 952 international health information standard research institutions, mainly universities, research institutes, medical research centres and well-known hospitals. Thirteen of the top 15 research institutions are from the United States. Among them, Vanderbilt University ranks first with 69 articles, accounting for 5.545% of the total literature, followed by Harvard University, which published 68 articles in the field of health information standards, accounting for 5.39% of the total; see Figure 8 for details. In addition, according to the statistics of publication languages, there are 1062 articles in English, accounting for 97.25% of the total literature, and the other languages are Spanish, German, Portuguese, French, Italian, etc. The historical publication records of the top three high-yield countries are plotted together to analyse the development trends and mutual relations of the frontier countries in HIL, as shown in Figure 9. The figure clearly shows that research in the field of information ecology in China is on the rise, the related concepts of HIL are gradually being refined, vocabulary research and language evaluation are gradually receiving attention, and research on language proficiency and HIL in the context of globalization remains a hotspot to this day. The focus of HIL research has returned from solving social problems to the insights that bilingual phenomena offer for HIL. This echoes the research frontier identified in the literature co-citation analysis.

    Figure 8.  Distribution of major research institutions with the highest number of published articles.
    Figure 9.  Comparison of historical publication centers in high-yield countries.

    Based on multi-dimensional cognitive information technology, this paper analyses the current situation and hotspots of health information literacy at home and abroad. The results show that most of the relevant research in China builds on foreign achievements, and no authoritative national standard has yet been formed. With the rapid development of American science and technology, the HIL research level of the United States is in the leading position, making it the object of competition and cooperation among countries and forming a self-centred cooperation network. HIL cognitive strategies refer to how learners process information in the current task. This research has improved people's awareness of HIL vocabulary learning strategies, and it is necessary to provide students with rich, diverse and authentic vocabulary learning materials. In addition, the improvement of HIL capability does not lie in the number of strategies used, but in their flexible and rational use.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This work was supported by Key Research Project of Humanities and Social Sciences of Bengbu Medical College under grant 2020byzx262sk.

    The authors declare there is no conflict of interest.



    [1] T. Hirota, R. So, Y. S. Kim, B. Leventhal, R. A. Epstein, A systematic review of screening tools in non-young children and adults for autism spectrum disorder, Res. Dev. Disabil., 80 (2018), 1–12. https://doi.org/10.1016/j.ridd.2018.05.017 doi: 10.1016/j.ridd.2018.05.017
    [2] C. Lord, S. Risi, P. S. Dilavore, C. Shulman, A. Thurm, A. Pickles, Autism from 2 to 9 years of age, Arch. Gen. Psychiatry, 63 (2006), 694–701. https://doi.org/10.1001/archpsyc.63.6.694 doi: 10.1001/archpsyc.63.6.694
    [3] B. B. Sizoo, E. H. Horwitz, J. P. Teunisse, C. C. Kan, C. Vissers, E. Forceville, et al., Predictive validity of self-report questionnaires in the assessment of autism spectrum disorders in adults, Autism, 19 (2015), 842–849. https://doi.org/10.1177/1362361315589869 doi: 10.1177/1362361315589869
    [4] P. O. Towle, P. A. Patrick, Autism Spectrum Disorder Screening Instruments for Very Young Children: A Systematic Review, Autism. Res. Treat., 2016 (2016), 4624829. https://doi.org/10.1155/2016/4624829 doi: 10.1155/2016/4624829
    [5] D. Bone, S. Bishop, M. P. Black, M. S. Goodwin, C. Lord, S. S. Narayanan, Use of machine learning to improve autism screening and diagnostic instruments: effectiveness, efficiency, and multi-Instrument Fusion, J. Child. Psychol. Psychiatry, 57 (2017), 927–937. https://doi.org/10.1111/jcpp.12559 doi: 10.1111/jcpp.12559
    [6] J. A. Kosmicki, V. Sochat, M. Duda, D. P. Wall, Searching for a minimal set of behaviors for autism detection through feature selection-based machine learning, Transl. Psychiatry, 5 (2015), 514–517. https://doi.org/10.1038/tp.2015.7 doi: 10.1038/tp.2015.7
    [7] F. Thabtah, Machine learning in autistic spectrum disorder behavioral research: A review and ways forward, Informatics Heal. Soc. Care, 44 (2019), 278–297. https://doi.org/10.1080/17538157.2017.1399132 doi: 10.1080/17538157.2017.1399132
    [8] D. H. Oh, I. B. Kim, S. H. Kim, D. H. Ahn, Predicting autism spectrum disorder using blood-based gene expression signatures and machine learning, Clin. Psychopharmacol. Neurosci., 15 (2017), 47–52. https://doi.org/10.9758/cpn.2017.15.1.47 doi: 10.9758/cpn.2017.15.1.47
    [9] M. Duda, R. Ma, N. Haber, D. P. Wall, Use of machine learning for behavioral distinction of autism and ADHD, Transl. Psychiatry, 6 (2016), 732. https://doi.org/10.1038/tp.2015.221 doi: 10.1038/tp.2015.221
    [10] G. Li, O. Lee, H. Rabitz, High-efficiency classification of children with autism spectrum disorder, PLoS One, 13 (2018), 1–23. https://doi.org/10.1371/journal.pone.0192867 doi: 10.1371/journal.pone.0192867
    [11] Q. Tariq, S. L. Fleming, J. N. Schwartz, K. Dunlap, C. Corbin, P. Washington, et al., Detecting Developmental Delay and Autism Through Machine Learning Models Using Home Videos of Bangladeshi Children: Development and Validation Study, J. Med. Internet Res., 21 (2019), 13822. https://doi.org/10.2196/13822 doi: 10.2196/13822
    [12] D. Eman, W. R. Emanuel, Machine Learning Classifiers for Autism Spectrum Disorder: A Review, 2019 4th Int. Conf. Inform. Technol. Inform. Syst. Electr. Eng. (ICITISEE), Yogyakarta, Indonesia, 2019. https://doi.org/10.1109/ICITISEE48480.2019.9003807
    [13] X. Bi, Y. Wang, Q. Shu, Q. Sun, Q. Xu, Classification of autism spectrum disorder using random support vector machine cluster, Frontiers in Genetics, 6 (2018), 9–18. https://doi.org/10.3389/fgene.2018.00018 doi: 10.3389/fgene.2018.00018
    [14] E. Grossi, C. Olivieri, M. Buscema, Diagnosis of autism through EEG processed by advanced computational algorithms: a pilot study, Comput. Methods Programs Biomed., 142 (2017), 73–79. https://doi.org/10.1016/j.cmpb.2017.02.002 doi: 10.1016/j.cmpb.2017.02.002
    [15] M. L. Raja, M. Priya, Neural network based classification of EEG signals for diagnosis of autism spectrum disorder, Int. J. Pharm. Bio. Sci., 8 (2017), 1020–1026.
    [16] L. Raja, M. M. Priyab, EEG based ASD diagnosis for children using auto-regressive features and FFNN, Int. J. Control Theo. App., 10 (2017), 27–32.
    [17] L. Raja, M. M. Priya, EEG based diagnosis of autism spectrum disorder using static and dynamic neural networks, ARPN J. Eng. Appl. Sci., 12 (2017), 4653787.
    [18] R. Djemal, K. AlSharabi, S. Ibrahim, A. Alsuwailem, EEG-based computer aided diagnosis of autism spectrum disorder using wavelet, entropy, and ANN, BioMed. Res. Int., 2017 (2017), 1–9. https://doi.org/10.1155/2017/9816591 doi: 10.1155/2017/9816591
    [19] T. M. Heunis, C. Aldrich, P. J. Vries, Recent Advances in Resting-State Electroencephalography Biomarkers for Autism Spectrum Disorder-A Review of Methodological and Clinical Challenges, Rev. Pediatr. Neurol., 61 (2016), 28–37. https://doi.org/10.1016/j.pediatrneurol.2016.03.010 doi: 10.1016/j.pediatrneurol.2016.03.010
    [20] N. P. Jordanova, J. P. Jordanov, Spectrum-weighted EEG frequency ("brain-rate") as a quantitative indicator of mental arousal, Prilozi, 26 (2005), 35–42.
    [21] E. Abdulhay, M. Alafeef, A. Abdelhay, A. Al-Bashir, Classification of Normal, Ictal and Inter-ictal EEG via Direct Quadrature and Random Forest Tree, J. Med. Biol. Eng., 37 (2017), 843–857. https://doi.org/10.1007/s40846-017-0239-z doi: 10.1007/s40846-017-0239-z
    [22] Z. Dandan, D. Haiyan, H. Xinlin, L. Yunfeng, Z. Congle, Y. Datian, The Combination of Amplitude and Sample Entropy in EEG and its Application to Assessment of Cerebral Injuries in Piglets, 2008 Int. Conf. BioMed. Eng. Informatics, Sanya, China, 2008. https://doi.org/10.1109/BMEI.2008.12
    [23] E. Abdulhay, M. Alafeef, L. Alzghoul, M. Al Momani, R. Al Abdi, N. Arunkumar, et al., Computer-aided autism diagnosis via second-order difference plot area applied to EEG empirical mode decomposition, Neural Comput. Appl., 32 (2020), 10947–10956. https://doi.org/10.1007/s00521-018-3738-0 doi: 10.1007/s00521-018-3738-0
    [24] R. J. Oweis, E. W. Abdulhay, Seizure classification in EEG signals utilizing Hilbert-Huang transform, Biomed. Eng. Online, 10 (2011), 38. https://doi.org/10.1186/1475-925X-10-38 doi: 10.1186/1475-925X-10-38
    [25] E. Abdulhay, M. Alafeef, H. Hadoush, N. Alomari, M. Bashayreh, Frequency 3D Mapping and Inter-Channel Stability of EEG Intrinsic Function Pulsation: Indicators Towards Autism Spectrum Diagnosis, 2017 10th Jordanian Int. Electric. Electron. Eng. Conf. (JIEEEC), Amman, Jordan, 2017. https://doi.org/10.1109/JIEEEC.2017.8051416
    [26] H. Hadoush, M. Alafeef, E. Abdulhay, Automated identification for autism severity level: EEG analysis using empirical mode decomposition and second order difference plot, Behavioural Brain Res., 362 (2019), 240–248. https://doi.org/10.1016/j.bbr.2019.01.018 doi: 10.1016/j.bbr.2019.01.018
    [27] E. Abdulhay, V. Elamaran, M. Chandrasekar, V. S. Balaji, K. Narasimhan, Automated diagnosis of epilepsy from EEG signals using ensemble learning approach, Pattern Recognition Letters, 139 (2020), 174–181. https://doi.org/10.1016/j.patrec.2017.05.021 doi: 10.1016/j.patrec.2017.05.021
    [28] T. H. Pham, J. Vicnesh, J. K. Wei, S. J. Oh, N. Arunkumar, E. Abdulhay, et al., Autism spectrum disorder diagnostic system using HOS bispectrum with EEG signals, Int. J. Environ. Res. Public Health, 17 (2020), 1–14. https://doi.org/10.3390/ijerph17030971 doi: 10.3390/ijerph17030971
    [29] W. Bosl, A. Tierney, H. T. Flusberg, C. Nelson, EEG complexity as a biomarker for autism spectrum disorder risk, BMC Med., 9 (2011), 18. https://doi.org/10.1186/1741-7015-9-18 doi: 10.1186/1741-7015-9-18
    [30] F. H. Duffy, A. Heidelise, Autism, spectrum or clusters? An EEG coherence study, BMC Neurol., 19 (2019), 27. https://doi.org/10.1186/s12883-019-1254-1 doi: 10.1186/s12883-019-1254-1
    [31] A. Sheikhani, H. Behnam, M. R. Mohammadi, M. Noroozian, Analysis of EEG background activity in Autism disease patients with bispectrum and STFT measure, Proceedings of the 11th WSEAS Int. Conf. Commun., Agios Nikolaos, Greece, 2007.
    [32] J. Kang, H. Chen, X. Li, X. Li, EEG entropy analysis in autistic children, J. Clin. Neurosci., 62 (2019), 199–206. https://doi.org/10.1016/j.jocn.2018.11.027 doi: 10.1016/j.jocn.2018.11.027
    [33] L. Billeci, F. Sicca, K. Maharatna, F. Apicella, A. Narzisi, G. Campatelli, et al., On the application of quantitative EEG for characterizing autistic brain: a systematic review, Front. Hum. Neurosci., 7 (2013), 442. https://doi.org/10.3389/fnhum.2013.00442 doi: 10.3389/fnhum.2013.00442
    [34] M. Ahmadlou, H. Adeli, A. Adeli, Fractality and a wavelet-chaos-neural network methodology for EEG-based diagnosis of autistic spectrum disorder, J. Clin. Neurophysiol., 27 (2010), 328–333. https://doi.org/10.1097/WNP.0b013e3181f40dc8 doi: 10.1097/WNP.0b013e3181f40dc8
    [35] B. B. Mandelbrot, The Fractal Geometry of Nature. New York: Freeman and Company (1977), 1–468.
    [36] M. Costa, A. L. Goldberger, C. K. Peng, Multiscale entropy analysis of biological signals. Phys. Rev. E., 71 (2005), 021906. https://doi.org/10.1103/PhysRevE.71.021906 doi: 10.1103/PhysRevE.71.021906
    [37] A. Namdari, Z. Li, A review of entropy measures for uncertainty quantification of stochastic processes, Adv. Mech. Eng., 11 (2019), 1–14. https://doi.org/10.1177/1687814019857350
    [38] H. Hadoush, M. Alafeef, E. Abdulhay, Brain complexity in children with mild and severe autism spectrum disorders: analysis of multiscale entropy in EEG, Brain Topogr., 32 (2019), 914–921. https://doi.org/10.1007/s10548-019-00711-1
    [39] Y. Ghanbari, L. Bloy, J. C. Edgar, L. Blaskey, R. Verma, T. P. Roberts, Joint analysis of band-specific functional connectivity and signal complexity in autism, J. Autism Dev. Disord., 45 (2015), 444–460. https://doi.org/10.1007/s10803-013-1915-7
    [40] T. Liu, Y. Chen, D. Chen, C. Li, Y. Qiu, J. Wang, Altered electroencephalogram complexity in autistic children shown by the multiscale entropy approach, Neuroreport, 28 (2017), 169–173. https://doi.org/10.1097/WNR.0000000000000724
    [41] J. O. Maximo, D. L. Murdaugh, R. K. Kana, Alterations in Brain Entropy in Autism Spectrum Disorders, 2017 Int. Meet. Autism Res., Birmingham, USA, 2017.
    [42] J. Q. Kosciessa, N. A. Kloosterman, D. D. Garrett, Standard multiscale entropy reflects neural dynamics at mismatched temporal scales: What's signal irregularity got to do with it?, PLOS Comput. Biol., 16 (2020), e1007885. https://doi.org/10.1371/journal.pcbi.1007885
    [43] A. Catarino, O. Churches, S. Baron-Cohen, A. Andrade, H. Ring, Atypical EEG complexity in autism spectrum conditions: a multiscale entropy analysis, Clin. Neurophysiol., 122 (2011), 2375–2383. https://doi.org/10.1016/j.clinph.2011.05.004
    [44] J. S. Richman, J. R. Moorman, Physiological time-series analysis using approximate entropy and sample entropy, Am. J. Physiol. Heart Circ. Physiol., 278 (2000), H2039–H2049. https://doi.org/10.1152/ajpheart.2000.278.6.H2039
    [45] R. Ferenets, T. Lipping, A. Anier, V. Jantti, S. Melto, S. Hovilehto, Comparison of entropy and complexity measures for the assessment of depth of sedation, IEEE Trans. Biomed. Eng., 53 (2006), 1067–1077. https://doi.org/10.1109/TBME.2006.873543
    [46] A. Humeau-Heurtier, The Multiscale Entropy Algorithm and Its Variants: A Review, Entropy, 17 (2015), 3110–3123. https://doi.org/10.3390/e17053110
    [47] H. Azami, J. Escudero, Amplitude- and Fluctuation-Based Dispersion Entropy, Entropy, 20 (2018), 210. https://doi.org/10.3390/e20030210
    [48] J. F. Valencia, A. Porta, M. Vallverdu, F. Claria, R. Baranowski, E. O. Baranowska, et al., Refined multiscale entropy: Application to 24-h Holter recordings of heart period variability in healthy and aortic stenosis subjects, IEEE Trans. Biomed. Eng., 56 (2009), 2202–2213. https://doi.org/10.1109/TBME.2009.2021986
    [49] J. F. Valencia, M. Vallverdu, R. Schroeder, L. Cygankiewicz, R. Vazquez, A. B. Luna, et al., Heart rate variability characterized by refined multiscale entropy applied to cardiac death in ischemic cardiomyopathy patients, Comput. Cardiol., 37 (2010), 65–68.
    [50] W. J. Bosl, T. Loddenkemper, C. A. Nelson, Nonlinear EEG biomarker profiles for autism and absence epilepsy, Neuropsychiatr. Electrophysiol., 3 (2017), 1. https://doi.org/10.1186/s40810-017-0023-x
    [51] W. J. Bosl, H. Tager-Flusberg, C. A. Nelson, EEG Analytics for Early Detection of Autism Spectrum Disorder: A data-driven approach, Sci. Rep., 8 (2018), 6828. https://doi.org/10.1038/s41598-018-24318-x
    [52] S. D. Wu, C. W. Wu, K. Y. Lee, S. G. Lin, Modified multiscale entropy for short-term time series analysis, Physica A, 392 (2013), 5865–5873. https://doi.org/10.1016/j.physa.2013.07.075
    [53] S. D. Wu, C. W. Wu, S. G. Lin, C. C. Wang, K. Y. Lee, Time series analysis using composite multiscale entropy, Entropy, 15 (2013), 1069–1084. https://doi.org/10.3390/e15031069
    [54] S. D. Wu, C. W. Wu, S. G. Lin, K. Y. Lee, C. K. Peng, Analysis of complex time series using refined composite multiscale entropy, Phys. Lett. A, 378 (2014), 1369–1374. https://doi.org/10.1016/j.physleta.2014.03.034
    [55] S. D. Wu, C. W. Wu, K. Y. Lee, S. G. Lin, Modified multiscale entropy for short-term time series analysis, Physica A, 392 (2013), 5865–5873. https://doi.org/10.1016/j.physa.2013.07.075
    [56] Y. C. Chang, H. T. Wu, H. R. Chen, A. B. Liu, J. J. Yeh, M. T. Lo, et al., Application of a modified entropy computational method in assessing the complexity of pulse wave velocity signals in healthy and diabetic subjects, Entropy, 16 (2014), 4032–4043. https://doi.org/10.3390/e16074032
    [57] Y. Jiang, C. K. Peng, Y. Xu, Hierarchical entropy analysis for biological signals, J. Comput. Appl. Math., 236 (2011), 728–742. https://doi.org/10.1016/j.cam.2011.06.007
    [58] H. B. Xie, W. X. He, H. Liu, Measuring time series regularity using nonlinear similarity-based sample entropy, Phys. Lett. A, 372 (2008), 7140–7146. https://doi.org/10.1016/j.physleta.2008.10.049
    [59] M. U. Ahmed, D. P. Mandic, Multivariate multiscale entropy analysis, IEEE Signal Process. Lett., 19 (2012), 91–94. https://doi.org/10.1109/LSP.2011.2180713
    [60] M. D. Costa, A. L. Goldberger, Generalized multiscale entropy analysis: Application to quantifying the complex volatility of human heartbeat time series, Entropy, 17 (2015), 1197–1203. https://doi.org/10.3390/e17031197
    [61] L. Faes, A. Porta, M. Javorka, G. Nollo, Efficient Computation of Multiscale Entropy over Short Biomedical Time Series Based on Linear State-Space Models, Complexity, 2017 (2017), 1768264. https://doi.org/10.1155/2017/1768264
    [62] T. Takahashi, Complexity of spontaneous brain activity in mental disorders, Prog. Neuropsychopharmacol. Biol. Psychiatry, 45 (2013), 258–266. https://doi.org/10.1016/j.pnpbp.2012.05.001
    [63] N. Huang, Z. Shen, S. Long, M. Wu, H. H. Shih, Q. Zheng, et al., The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis, Proc. Math. Phys. Eng. Sci., 454 (1998), 903–995. https://doi.org/10.1098/rspa.1998.0193
    [64] N. E. Huang, Z. Wu, A review on Hilbert-Huang transform: Method and its applications to geophysical studies, Rev. Geophys., 46 (2008), 228–251. https://doi.org/10.1029/2007RG000228
    [65] F. R. Kschischang, The Hilbert Transform, University of Toronto, Toronto, 2006.
    [66] E. Abdulhay, P. Y. Guméry, J. Fontecave, P. Baconnier, Cardiogenic oscillations extraction in inductive plethysmography: Ensemble empirical mode decomposition, Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Minnesota, USA, 2009, 2240–2243. https://doi.org/10.1109/IEMBS.2009.5335004
    [67] X. Han, J. Peng, A. Cui, F. Zhao, Sparse Principal Component Analysis via Fractional Function Regularity, Math. Probl. Eng., 2020 (2020), 7874140. https://doi.org/10.1155/2020/7874140
    [68] C. K. Arthur, V. A. Temeng, Y. Y. Ziggah, Performance Evaluation of Training Algorithms in Backpropagation Neural Network Approach to Blast-Induced Ground Vibration Prediction, Ghana Mining J., 20 (2020), 20–33. https://doi.org/10.4314/gm.v20i1.3
    [69] K. Kovarski, J. Malvy, R. K. Khanna, S. Arsène, M. Batty, M. Latinus, Reduced visual evoked potential amplitude in autism spectrum disorder, a variability effect?, Transl. Psychiatry, 9 (2019), 341. https://doi.org/10.1038/s41398-019-0672-6
    [70] S. A. Nastase, V. Iacovella, B. Davis, U. Hasson, Connectivity in the human brain dissociates entropy and complexity of auditory inputs, NeuroImage, 31 (2015), 292–300. https://doi.org/10.1016/j.neuroimage.2014.12.048
    [71] P. Barttfeld, B. Wicker, S. Cukier, S. Navarta, S. Lew, M. Sigman, A big-world network in ASD: dynamical connectivity analysis reflects a deficit in long-range connections and an excess of short-range connections, Neuropsychologia, 49 (2011), 254–263. https://doi.org/10.1016/j.neuropsychologia.2010.11.024
    [72] H. Zhang, R. Li, X. Wen, Q. Li, X. Wu, Altered Time-Frequency Feature in Default Mode Network of Autism Based on Improved Hilbert-Huang Transform, IEEE J. Biomed. Health Inform., 25 (2021), 485–492. https://doi.org/10.1109/JBHI.2020.2993109
    [73] T. Wadhera, D. Kakkar, Conditional entropy approach to analyze cognitive dynamics in autism spectrum disorder, Neurol. Res., 42 (2020), 869–878. https://doi.org/10.1080/01616412.2020.1788844
    [74] E. Gani, N. Handayani, S. H. Pratama, N. Afif, F. Aziezah, A. C. Keintjem, et al., Brainwaves Analysis Using Spectral Entropy in Children with Autism Spectrum Disorders (ASD), J. Phys. Conf. Ser., 1505 (2020), 012070. https://doi.org/10.1088/1742-6596/1505/1/012070
    [75] E. Amiot, Entropy of Fourier coefficients of periodic musical objects, J. Math. Music, 15 (2021), 235–246. https://doi.org/10.1080/17459737.2020.1777592
    [76] D. Abásolo, R. Hornero, P. Espino, D. Alvarez, J. Poza, Entropy analysis of the EEG background activity in Alzheimer's disease patients, Physiol. Meas., 27 (2006), 241–253. https://doi.org/10.1088/0967-3334/27/3/003
    [77] J. Han, Y. Li, J. Kang, E. Cai, Z. Tong, G. Ouyang, et al., Global Synchronization of Multichannel EEG Based on Rényi Entropy in Children with Autism Spectrum Disorder, Appl. Sci., 7 (2017), 257. https://doi.org/10.3390/app7030257
    [78] E. Abdulhay, M. Alafeef, H. Hadoush, N. Arunkumar, Resting State EEG-based Diagnosis of Autism via Elliptic Area of Continuous Wavelet Transform Complex Plot, J. Intell. Fuzzy Syst., 39 (2020), 8599–8607. https://doi.org/10.3233/JIFS-189176
    [79] R. Okazaki, T. Takahashi, K. Ueno, K. Takahashi, M. Ishitobi, M. Kikuchi, et al., Changes in EEG complexity with electroconvulsive therapy in a patient with autism spectrum disorders: a multiscale entropy approach, Front. Hum. Neurosci., 9 (2015), 106. https://doi.org/10.3389/fnhum.2015.00106
    [80] S. Thapaliya, S. Jayarathna, M. Jaime, Evaluating the EEG and eye movements for autism spectrum disorder, 2018 IEEE Int. Conf. Big Data, Seattle, WA, USA, 2018. https://doi.org/10.1109/BigData.2018.8622501
    [81] J. Eldridge, A. E. Lane, M. Belkin, S. Dennis, Robust features for the automatic identification of autism spectrum disorder in children, J. Neurodev. Disord., 6 (2014), 1–12. https://doi.org/10.1186/1866-1955-6-12
    [82] H. Amoud, H. Snoussi, D. Hewson, M. Doussot, J. Duchêne, Intrinsic mode entropy for nonlinear discriminant analysis, IEEE Signal Process. Lett., 14 (2007), 297–300. https://doi.org/10.1109/LSP.2006.888089
    [83] M. Hu, H. Liang, Adaptive multiscale entropy analysis of multivariate neural data, IEEE Trans. Biomed. Eng., 59 (2012), 12–15. https://doi.org/10.1109/TBME.2011.2162511
    [84] O. Dekhil, M. Ali, Y. E. Nakeib, A. Shalaby, A. Soliman, A. Switala, et al., A Personalized Autism Diagnosis CAD System Using a Fusion of Structural MRI and Resting-State Functional MRI Data, Front. Psychiatry, 10 (2019), 1–16. https://doi.org/10.3389/fpsyt.2019.00392
    [85] O. Dekhil, M. Ali, R. Haweel, Y. Elnakeib, M. Ghazal, H. Hajjdiab, et al., A Comprehensive Framework for Differentiating Autism Spectrum Disorder From Neurotypicals by Fusing Structural MRI and Resting State Functional MRI, Semin. Pediatr. Neurol., 34 (2020), 100805. https://doi.org/10.1016/j.spen.2020.100805
    [86] K. Barik, K. Watanabe, J. Bhattacharya, G. Saha, Classification of Autism in Young Children by Phase Angle Clustering in Magnetoencephalogram Signals, 2020 National Conf. Commun. (NCC), Kharagpur, India, 2020, 1–6. https://doi.org/10.1109/NCC48643.2020.9056022
© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).