
A new coronavirus (COVID-19) emerged in Wuhan in December 2019 and quickly swept worldwide [1]. The COVID-19 epidemic was declared a Public Health Emergency of International Concern by the World Health Organization in January 2020 [2]. To counteract, control, and confine the effects and consequences of the COVID-19 virus, studies are still being conducted in a variety of fields, and a number of artificial-intelligence-based models have been developed to diagnose COVID-19 [3]. However, machine-learning-based models for diagnosing infectious epidemics remain few.
This study focuses on clinical text mining related to COVID-19 and applies machine learning algorithms to categorize COVID-19 patients. Clinical texts are narrative documents providing a great deal of information about afflicted patients: individual symptoms, demographic information, diagnoses, laboratory test results, chest X-ray reports, treatments, and so on. However, the data in clinical texts are often high dimensional and include uninformative features that significantly degrade classifier accuracy, so the dimensionality of the data must be reduced [4]. Given the size of clinical document collections, feature selection (FS) is an essential step before classification [5]. Its main advantage is finding a subset of relevant features that is useful for categorization, in addition to delivering high recognition rates, easing data comprehension, shortening training time, and alleviating the curse of dimensionality [6,7]. FS is a challenging and complex problem because it requires striking a balance between reducing the number of features and maintaining high classifier accuracy, so it demands an effective search strategy, especially when dealing with clinical text. Complicated problems such as feature selection are often tackled with nature-inspired algorithms. In recent years, numerous novel swarm intelligence optimization algorithms have been proposed, such as the binary horse herd optimization algorithm [8], moth flame optimization [9], binary particle swarm optimization [10,11], the binary grey wolf optimizer [12], the binary Aquila optimizer [13], and artificial gorilla troop optimization [14].
For the first time, the flamingo search algorithm (FSA) is presented in this work for handling FS tasks in the healthcare sector. FSA is an efficient new swarm intelligence optimization method inspired by the migratory and foraging behavior of flamingos. Figure 1 depicts flamingo communities and individuals in their natural habitat. To the best of our knowledge, FSA has not been applied to feature selection problems; consequently, the proposed IBFSA is developed in this research to minimize the number of features chosen from clinical text related to COVID-19 while maximizing classification accuracy. The proposed method is a wrapper-based approach, so a learning algorithm must be part of the evaluation process; in this investigation, SVMs are used [15,16]. The most important contributions of this study are:
● An improved binary version of FSA, called IBFSA, is developed to handle the feature selection process.
● A novel modified initialization approach (MIA) is proposed to enhance diversity and convergence during the search process.
● Levy flight is incorporated into FSA to increase the diversity of solutions and provide a high level of randomization.
● A local search algorithm is invoked before and after each iteration of FSA to prevent getting stuck in local optima.
● The term weighting scheme RTF-C-IEF is combined with IBFSA.
● A new clinical text categorizer is proposed by combining IBFSA and SVM.
● Two datasets are used to compare state-of-the-art techniques with the proposed method.
The remaining parts of the paper are structured as follows. Section 2 reviews related work on COVID-19 clinical text and the FS procedure. Section 3 gives an overview of the FSA. The proposed methodology is outlined in Section 4. The experiments and findings are presented and discussed in Section 5. Finally, Section 6 concludes the paper.
Comparatively few attempts have been made to create intelligent classifiers, including feature selection, for the clinical text categorization of COVID-19 patients than for other topics. To correctly identify COVID-19 patients, the authors of [17] employed binary particle swarm optimization (BPSO) as a wrapper approach for critical feature selection; according to their experiments, it not only beats other methods but also attains the highest accuracy with the lowest time overhead. In [18], a COVID-19 dataset was used for disease diagnosis based on the grasshopper optimization algorithm (GOA), and the experimental findings demonstrate that the suggested method provides high classification accuracy. The paper [19] presents an intelligent strategy for predicting SARS-CoV-2 (COVID-19) using genetic feature selection techniques; the proposed model exhibits substantially lower prediction errors than conventional techniques. In [20], the authors propose a hybrid strategy based on BOA and particle swarm optimization (PSO), tested on a COVID-19 dataset. The experimental results show that the proposed BOAPSO model outperforms PSO, BOA and GWO in terms of precision, reaching 91.07% against 87.2, 87.8 and 87.3%, respectively, while reducing the number of chosen features. The paper [14] introduces a discrete artificial gorilla troop optimization (DAGTO) approach for dealing with FS challenges in the healthcare sector; a case study on COVID-19 samples and ten medical datasets demonstrates the method's influence in practice, and statistical evidence shows that it performs best. In [13], the Aquila optimizer (AO) is suggested as a search technique to find the optimal feature subset and is evaluated on a real-world COVID-19 dataset; results showed that AO is superior to competing algorithms in terms of accuracy attained with the fewest features. The New Caledonian crow learning algorithm is used in [21] to propose a strategy for selecting features relevant to COVID-19; experimental findings on a COVID-19 dataset from a Brazilian hospital demonstrate that the approach detects COVID-19 patients more accurately than a competing method. A combination of the brainstorm optimization algorithm and the firefly algorithm for choosing the best feature subset is described in [22] and applied to a coronavirus disease dataset; the experimental findings demonstrated superior classification accuracy compared to previous approaches. Table 1 provides a brief comparison of earlier works on COVID-19 detection methods.
Method | Advantages | Disadvantages |
Aquila Optimizer (AO) and ML [13] | AO significantly outperforms other comparison algorithms and has been shown to be more effective in terms of predictive accuracy and reducing the number of selected features. | The COVID-19 patient dataset used is small and not of high enough dimensionality for the method to be explored effectively |
AGTO and ML [14] | Efficient in reducing the number of features used while achieving better accuracy; the approach has also been demonstrated to be successful in practice on real-world COVID-19 datasets. | AGTO usually takes longer to run, and the database is not very high dimensional. However, the algorithm's efficiency can be enhanced by applying advanced initialization procedures. |
PSO and DBNB classification [17] | The suggested method attempts to accurately identify infected patients with the least time penalty, based on the most effective features selected by APSO. | Although effective at diagnosing COVID-19 patients, the suggested method relies only on numerical data. Additionally, the dataset used is insufficient for diagnosing COVID-19 and is limited to clinical laboratory data; analyzing CT scan reports may help confirm infection. |
GOA and CNN [18] | Easy to implement and fast, thanks to optimizing the CNN with GOA. | The proposed method could be further enhanced by using a more detailed dataset with more images from all three classes. |
BOA, PSO and ML [20] | Compared to conventional classification methods, the proposed hybrid model is more effective at classifying COVID-19 patients. | The COVID-19 patient dataset used is small and not of very high dimensionality. |
CA and ANN [21] | ANN is a powerful classification technique. | Patient selection has potential bias because the database is so unbalanced that infected people make up only 10% of the total. |
BSO, FA and ML [22] | Compared to conventional classification methods, the proposed hybrid model is more effective at classifying COVID-19 patients. | The COVID-19 dataset is small, limited only to symptoms, and contains a lot of missing data, so it requires additional pre-processing methods. |
In conclusion, when comparing machine learning and globally intelligent algorithms to conventional methodologies, most of the experiments on COVID-19 classification showed good classification results. In addition, swarm intelligence algorithms have been used effectively for feature selection in various domains, but they have rarely been applied to COVID-19-related clinical text categorization. As a result, there is substantial motivation to present a new approach, which includes a weighting scheme, an intelligent feature selection method based on IBFSA, and an SVM classifier for classifying COVID-19 patients from clinical texts.
The FSA is a biologically inspired evolutionary algorithm modeled after how flamingos in nature find food. Each candidate solution to the optimization problem is represented by a flamingo, and each flamingo exhibits two primary behaviors: foraging and migrating. Flamingos do not know where the most plentiful food lies in the current search region (the global optimum). Therefore, they look for a food site richer than the known ones by sharing information with each other, updating the location of each flamingo, and influencing changes in the locations of other flamingos in the group. Identifying the globally best solution inside a specified search area is the central aim of a swarm intelligence algorithm, and the flamingos' behavior is a fitting metaphor for this purpose [23].
The fundamental steps of this algorithm are described below:
Step 1. The population is initialized, set as $ P $, the maximum number of iterations is $ {Iter}_{Max} $, and the proportion of migrating flamingos in the first part is $ {MP}_{b} $.
Step 2. The number of foraging flamingos in the $ ith $ iteration of flamingo population renewal is $ {MP}_{r} = rand\left[\mathrm{0, 1}\right]\times P\times \left(1-{MP}_{b}\right) $. The number of migrating flamingos in the first part of this iteration is $ {MP}_{o} = {MP}_{b}\times P $. The number of migratory flamingos in the second part of this iteration is $ {MP}_{t} = {P-MP}_{o}{-MP}_{r} $. Individual flamingo fitness levels are calculated, and the entire flamingo population is then ranked by fitness. The $ {MP}_{o} $ flamingos with the lowest fitness and the $ {MP}_{t} $ flamingos with the highest fitness are classified as migrants, while the others are classified as foraging flamingos.
Step 3. Migrating flamingos are modified based on Eq (2), and foraging flamingos are modified based on Eq (1).
$ {\boldsymbol{x}}_{\boldsymbol{i}\boldsymbol{j}}^{\boldsymbol{t}+1} = \left({x}_{ij}^{t}+{ \varepsilon }_{1}\times {xb}_{j}^{t}+{G}_{2}\times \left|{G}_{1}\times {xb}_{j}^{t}+{ \varepsilon }_{2}\times {x}_{ij}^{t}\right|\right)/K $ | (1) |
In Eq (1), $ {x}_{ij}^{t+1} $ represents the location of the $ ith $ flamingo in the $ jth $ dimension of the population at the $ \left(t+1\right) $th iteration, and $ {x}_{ij}^{t} $ represents the location of the $ ith $ flamingo in the $ jth $ dimension at the $ t $th iteration, namely, the location of the flamingo's feet. $ {xb}_{j}^{t} $ represents the $ jth $-dimension location of the flamingo with the best fitness in the population at the $ t $th iteration. $ K = K\left(n\right) $ is a diffusion factor: a random number following the chi-square distribution with $ n $ degrees of freedom. It is utilized to widen the foraging range of the flamingo group and simulate the chance events of individual selection in nature, enhancing the global search ability. The random numbers $ {G}_{1} = N\left(\mathrm{0, 1}\right) $ and $ {G}_{2} = N\left(\mathrm{0, 1}\right) $ follow a standard normal distribution, and $ { \varepsilon }_{1} $ and $ { \varepsilon }_{2} $ are set to −1 or 1 at random.
$ {\boldsymbol{x}}_{\boldsymbol{i}\boldsymbol{j}}^{\boldsymbol{t}+1} = {x}_{ij}^{t}+\beta \times \left({xb}_{j}^{t}-{x}_{ij}^{t}\right) $ | (2) |
In Eq (2), $ {x}_{ij}^{t+1} $ and $ {xb}_{j}^{t} $ have the same meanings as in Eq (1). $ \beta = N\left(\mathrm{0, 1}\right) $ is a random number drawn from the standard normal distribution; it is employed to broaden the search area during flamingo migration and simulate the randomness of individual flamingo behavior during the migration process.
Step 4. Ensure that no flamingo has moved out of bounds.
Step 5. Move to Step 6 if the allotted number of iterations has been exhausted; otherwise, go to Step 2.
Step 6. Output the optimal solution and the optimal value.
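As a concrete illustration, the following minimal NumPy sketch implements the two update rules of Eqs (1) and (2); the vector shapes, the choice of chi-square degrees of freedom $ n $, and the per-dimension random draws are illustrative assumptions rather than details fixed by the description above.

```python
import numpy as np

def forage_update(x, xb, n=4):
    """Foraging update, Eq (1); x and xb are (dim,) position vectors."""
    dim = x.shape[0]
    eps1 = np.random.choice([-1, 1], size=dim)   # epsilon_1 in {-1, 1}
    eps2 = np.random.choice([-1, 1], size=dim)   # epsilon_2 in {-1, 1}
    g1 = np.random.standard_normal(dim)          # G1 ~ N(0, 1)
    g2 = np.random.standard_normal(dim)          # G2 ~ N(0, 1)
    k = np.random.chisquare(df=n)                # diffusion factor K (n assumed)
    return (x + eps1 * xb + g2 * np.abs(g1 * xb + eps2 * x)) / k

def migrate_update(x, xb):
    """Migration update, Eq (2): a random step towards the best-known position."""
    beta = np.random.standard_normal(x.shape[0])  # beta ~ N(0, 1)
    return x + beta * (xb - x)
```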
The FSA pseudo code is displayed in Algorithm 1.
Algorithm 1: Standard Flamingo Search Algorithm | |||||
Input: $ M-maximum \; number\; of\; iterations $ $ N-total\; number\; of\; flamingo $ $ {MP}_{b}-number\; of\; migrating\; flamingo $ Output: $ {X}_{best}-Global\; optimal\; position $ $ {f}_{best}-Fitness\; of\; global\; optimal\; position $ |
|||||
1 | Start | ||||
2 | Initialize a swarm of $ N $ $ flamingo $ s and its relevant parameters; | ||||
3 | $ t\leftarrow 1; $ | ||||
4 | While $ t < M $ do | ||||
5-29 | (loop body of the standard FSA, i.e. the foraging and migration updates of Eqs (1) and (2); rendered as an image in the original)
|||||
30 | end while | ||||
31 | Return $ {X}_{best} $, $ {f}_{g} $ /* Xbest is the best solution obtained by the algorithm */
In order to predict a COVID-19 diagnosis from clinical texts, the strategy described in this work includes six processing stages: dataset collection and description, text pre-processing, feature extraction, feature selection, application of machine learning methods, and performance evaluation. The suggested model's block diagram is shown in Figure 2.
Two sets of clinical data related to coronavirus (COVID-19) were collected to validate the effectiveness of the suggested method. The first dataset (DS1) was collected from patients with SARS-CoV-2 at several hospitals in Iraq. The second dataset (DS2) was assembled from clinical text reports drawn from various sources, including GitHub (https://github.com/Akibkhanday/Meta-data-of-Coronavirus.), the Italian Society of Medical and Interventional Radiology (SIRM) (https://www.sirm.org/category/senza-categoria/covid-19/), and other case reports collected from medical publications related to COVID-19 on websites such as Hindawi (https://www.hindawi.com/), Infection and Chemotherapy (https://www.jiac-j.com/), NIH (https://www.ncbi.nlm.nih.gov/pmc/), and ScienceDirect (https://www.sciencedirect.com/science/article/pii/S1477893921002106).
Both datasets contain demographic information, such as age, sex, and comorbidities, in addition to other needed diagnostic information and related tests, including symptoms, vital signs, lab results, values from routine blood tests, chest CT imaging results, disposition, admission to an ICU, and survival to hospital discharge. The two datasets consist of 3053 and 1446 patients, respectively. Table 2 summarizes the datasets, comprising varying samples and attributes.
No | Type | No. of records | Label | Rate of Occurrences
DS1 | Clinical Text | 3053 | Severe / Non-Severe | 55% / 45%
DS2 | Clinical Text | 1446 | COVID-19 Positive / COVID-19 Negative | 62% / 38%
Clinical texts present a difficult challenge for extracting hidden features, since they are always in an unstructured format. Thus, to train a classifier, data must be presented in a readable manner and undergo pre-processing. Since some symbols and words may not be beneficial for categorization, the pre-processing stage aims to improve the quality of the data and clean it up. Several pre-processing steps were used to convert unstructured clinical texts into a word vector: removing punctuation, numbers, stop words, and other characters; lowercasing letters; short-word removal; tokenization; part-of-speech tagging; stemming; and lemmatization.
In order to complete NLP tasks, it is crucial to identify an effective text representation [24]. Different features are extracted from the pre-processed clinical texts. The feature engineering described here relies on two steps. In the first step, spaCy and ScispaCy were employed to extract medical entities from clinical text; symptoms consisting of more than one word (e.g., "shortness of breath") were then converted into a single expression in some reports. ScispaCy provides a robust rule-matching engine and fast models for biomedical natural language processing [25].
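As a rough illustration of this first step, the sketch below uses ScispaCy's small biomedical pipeline to pull entities out of a report and joins multi-word symptoms into single expressions. The model name en_core_sci_sm is the standard ScispaCy small model (it must be installed separately), and the underscore-joining convention is an assumption for illustration.

```python
import spacy  # pip install spacy scispacy, plus the en_core_sci_sm model wheel

nlp = spacy.load("en_core_sci_sm")  # ScispaCy's small biomedical model

def extract_medical_terms(report: str):
    """Extract biomedical entities; join multi-word ones into one token."""
    doc = nlp(report)
    return [ent.text.lower().replace(" ", "_") for ent in doc.ents]

print(extract_medical_terms("Patient reports fever and shortness of breath."))
# e.g. ['fever', 'shortness_of_breath'] (output depends on the model version)
```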
In the second stage, the RTF-C-IEF weighting method [26] is used to transform the extracted concepts, which are features, into probability values ready for the feature selection model. This procedure drastically decreases the number of features while preserving the informative ones. RTF-C-IEF is a statistical weighting method that retrieves a term's significance within a document as the first stage of a feature selection strategy for text mining. It was used for feature extraction instead of bag-of-words (BoW) and classical TF-IDF, since RTF-C-IEF provides more accurate results [26].
A higher RTF-C-IEF feature score indicates more significance for that feature within the text's clinical context. The RTF-C-IEF formula is written as follows:
$ \boldsymbol{R}\boldsymbol{T}\boldsymbol{F}-\boldsymbol{C}-\boldsymbol{I}\boldsymbol{E}\boldsymbol{F} = {\left({tf}_{ij}\right)}^{{r}_{tf}}\times \left(1+\frac{{t}_{x}}{N}\right)\times {e}^{-\frac{dt\left({t}_{j}\right)}{N}} $ | (3) |
Where $ {tf}_{ij} $ is the term frequency, $ {t}_{x} $ represents the frequency count of word $ x $ in the core corpus, $ N $ is the total number of documents in the dataset, and $ dt\left({t}_{j}\right) $ is the number of documents in the collection in which term $ {t}_{j} $ appears.
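For concreteness, a minimal sketch of Eq (3) follows. The root exponent $ {r}_{tf} $ is not fixed above, so the value 0.5 used here is an assumption (see [26] for the scheme's details), and the example numbers are purely illustrative.

```python
import math

def rtf_c_ief(tf_ij, t_x, df_tj, n_docs, r_tf=0.5):
    """RTF-C-IEF weight, Eq (3).

    tf_ij : frequency of term j in document i
    t_x   : frequency count of the word in the core corpus
    df_tj : number of documents containing term j
    n_docs: total number of documents (N)
    r_tf  : root applied to the raw term frequency (assumed 0.5 here)
    """
    return (tf_ij ** r_tf) * (1 + t_x / n_docs) * math.exp(-df_tj / n_docs)

# e.g. a term appearing twice in a report, 30 times in a 1446-document corpus,
# and present in 200 documents:
print(rtf_c_ief(tf_ij=2, t_x=30, df_tj=200, n_docs=1446))
```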
Prior to performing classification, feature selection is a crucial step for choosing the important features, eliminating irrelevant ones, reducing feature dimensionality, and shortening the computing time required for classification [10,27,28]. To realize this, FSA [29] is implemented. FSA is a new algorithm that simulates the behavior of flamingos searching for the best possible solution within a given search region (where food is most plentiful). Since FS is a binary problem, the native optimizer must be adapted so that FSA can optimize in a high-dimensional binary search space, thereby improving the algorithm's efficiency. Several significant modifications of the FSA algorithm are detailed in this study; introducing a new operator into the algorithm's structure is the most common method for enhancing FSA exploration and correcting the typical roaming behavior of swarm members. In the first step, transfer functions from the S-shaped family are used to convert FSA to a binary algorithm. Second, a novel modified initialization approach (MIA) is incorporated into the standard FSA to obtain high-quality individuals at the beginning, increasing the likelihood of discovering the best solution and thus the optimizer's performance. In the third step, the Levy flight operator is applied to each flamingo to boost its variability and the optimizer's capacity to probe further into underexplored portions of the search space. Finally, exploitation is enhanced by a local search algorithm (LSA). These promising improvements are discussed in this sub-section. The architecture of the suggested feature selection approach is depicted in Figure 4, and the pseudocode of IBFSA is presented in Algorithm 4.
In the feature-subset selection problem, FS is modeled as a binary problem in which each variable can only take the value 0 or 1. FSA therefore cannot be used directly to resolve a feature selection problem, because the solutions it produces with Eqs (1) and (2) consist of continuous values (in the real-number domain). As a result, a transfer function (TF) must be used to convert the values from continuous to binary (0 or 1). A TF specifies the probability with which the decision variables change from 1 to 0 and back; that is, a TF chosen to map continuous values into binary ones should produce output in the range [0, 1]. The S-shaped family of logistic transformation functions is well suited for this mapping, since it produces output in the [0, 1] range. The goal is to decide which features are discarded or selected: the flamingo stands for the feature set, and its binary values indicate whether a feature is chosen for inclusion in the final model, where 1 represents a selected feature and 0 means discard. An individual's value range is mapped to [0, 1] by the following function [10]:
$ \boldsymbol{T}\boldsymbol{F}\left({\boldsymbol{x}}_{\boldsymbol{i}}^{\boldsymbol{d}}\left(\boldsymbol{t}\right)\right) = \frac{1}{1+{e}^{-2{x}_{i}^{d}\left(t\right)}} $ | (4) |
Where $ {x}_{i}^{d} $ denotes the $ {i}^{th} $ flamingo's location in the $ {d}^{th} $ dimension at the $ {t}^{th} $ iteration, computed by Eqs (1) and (2). The output of the S-shaped function in Eq (4) is still continuous, as illustrated in Figure 3. Thus, to obtain a binary value, the $ {i}^{th} $ position is modified as follows:
$ {\boldsymbol{x}}_{\boldsymbol{i}}^{\boldsymbol{d}}\left(\boldsymbol{t}+1\right) = \left\{\begin{array}{ll} 1, & rand < TF\left({x}_{i}^{d}\left(t\right)\right) \\ 0, & \mathrm{otherwise} \end{array}\right. $ | (5)
Where $ {x}_{i}^{d}\left(t+1\right) $ represents the $ ith $ element in the $ X $ solution at dimension $ d $ in iteration $ t+1 $, and $ rand\in \left[\mathrm{0, 1}\right] $.
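The following short sketch, under the assumption of a NumPy-based workflow, implements the S-shaped mapping of Eq (4) and the stochastic binarization of Eq (5):

```python
import numpy as np

def s_shaped_tf(x):
    """Eq (4): S-shaped transfer function mapping reals into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-2.0 * x))

def binarize(x_continuous):
    """Eq (5): keep feature d (bit 1) when rand < TF(x^d), else drop it (bit 0)."""
    probs = s_shaped_tf(x_continuous)
    return (np.random.rand(*x_continuous.shape) < probs).astype(int)

x = np.array([-2.0, -0.1, 0.4, 3.0])
print(binarize(x))  # e.g. [0 0 1 1]; large positive values are kept more often
```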
Figure 4 depicts Levy flight, a mathematical representation of a random motion whose step lengths follow a heavy-tailed probability distribution [30]. Levy flight was recently introduced as a tool for optimization problems and has since been incorporated into the design of many optimization algorithms to improve their performance, including speed of convergence, prevention of premature convergence, escape from local minima, and balance between exploration and exploitation [8,9,30]. This research aims to improve the FS process used in COVID-19 diagnosis from clinical texts by proposing, for the first time, that Levy flight be included in the FSA structure to enhance the performance of the FSA optimizer. Eq (6) represents the flamingo location update based on the Levy flight improvement. To increase the variety of search spaces, each upgraded flamingo employs Levy flight once, resulting in a higher level of exploration.
$ {\boldsymbol{x}}_{\boldsymbol{i}\boldsymbol{j}}^{\boldsymbol{t}+1} = \left({x}_{ij}^{t}+{ \varepsilon }_{1}\times {xb}_{j}^{t}+levy\left(\beta \right) \oplus \left|{G}_{1}\times {xb}_{j}^{t}+{ \varepsilon }_{2}\times {x}_{ij}^{t}\right|\right)/K $ | (6) |
$ \boldsymbol{L}\boldsymbol{e}\boldsymbol{v}\boldsymbol{y}\left(\boldsymbol{\beta }\right) \sim \boldsymbol{\mu } = {t}^{-1-\beta }, \quad 0 < \beta \le 2 $ | (7)
$ \boldsymbol{l}\boldsymbol{e}\boldsymbol{v}\boldsymbol{y}\left(\boldsymbol{\beta }\right) \sim \frac{\phi \times \mu}{\left|{\rm{V}}^{1 / \beta}\right|} $ | (8) |
$ \boldsymbol{\phi } = {\left[\frac{\mathrm{\Gamma }\left(1+\beta \right)\times \mathrm{sin}\left(\pi \beta /2\right)}{\mathrm{\Gamma }\left(\frac{1+\beta }{2}\right)\times \beta \times {2}^{\frac{\beta -1}{2}}}\right]}^{\frac{1}{\beta }} $ | (9)
Where $ {X}_{i}^{t} $ indicates the $ {i}^{th} $ flamingo at iteration $ t $ and $\oplus $ represents the entry-wise (dot) product. Levy flight, as previously mentioned, is a random walk whose jump sizes follow the Levy distribution given in Eq (7). Using Eq (8), the Levy step is computed from random numbers; $ \mu $ and $ \nu $ are drawn from standard normal distributions. Eq (9) shows how to calculate $ \phi $, where $ \mathrm{\Gamma } $ is the standard Gamma function and $ \beta $ = 1.5, as in [31].
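A minimal sketch of the Levy step of Eqs (7)-(9), computed via Mantegna's algorithm with β = 1.5 as stated above, is shown below; the dimensionality argument is an illustrative assumption.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm, Eqs (7)-(9)."""
    phi = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
           / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)  # Eq (9)
    mu = np.random.standard_normal(dim) * phi   # mu ~ N(0, phi^2)
    nu = np.random.standard_normal(dim)         # nu ~ N(0, 1)
    return mu / np.abs(nu) ** (1 / beta)        # Eq (8)

print(levy_step(5))  # mostly small steps with occasional large jumps
```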
Evolutionary algorithms rely heavily on the diversity and convergence of their populations, and population initialization is a crucial part of this. The purpose of this step is to offer an initial guess at potential solutions; these initial solutions are then iteratively improved throughout the optimization process until a stopping criterion is fulfilled. In most cases, a high-quality initial population helps an algorithm converge more quickly and find the optimal solution; conversely, an algorithm that starts from poor guesses may fail to locate the optimum [32,33]. In recent years, research has shown that proper initialization approaches can improve the likelihood of locating globally optimal solutions and decrease the variance of the final search outcomes [34]. In this paper, the performance of FSA is extended to suit the optimization problem by introducing a new initialization algorithm named MIA. Its basic idea is to create candidate populations from the initial population in a simple way, without any complex equations or major changes to the original FSA algorithm and its structure; the better individuals are then selected out of the initial population, resulting in a new initial population made up of outstanding individuals. Thus, MIA covers the possible search space correctly from the start. Additionally, the suggested initialization technique significantly impacts solution quality, helps find the optimal solution with high precision, and boosts the likelihood of starting near a global optimum. The whole pseudocode of MIA is displayed as Algorithm 2, and a sketch of one possible reading follows it below.
Algorithm 2: The proposed MIA algorithm | ||
$ {\boldsymbol{X}}_{\boldsymbol{i}\boldsymbol{j}} $ = positions of flamingos; /* randomly generate the positions of N flamingos; $ {\boldsymbol{X}_{\boldsymbol{bin}}} = $ binary_map($ {\boldsymbol{X}}_{\boldsymbol{i}\boldsymbol{j}} $); /* convert positions to binary; $\boldsymbol{Fi}{\boldsymbol{t}_{\boldsymbol{old}}} = $ fitness of each of the N flamingos;
||
$ {D}_{max} $ = maximum number of local iterations; $ {M}_{max} $ = maximum number of local iterations; N = population size.
||
1 | for $ d=1To $ $ {\boldsymbol{D}}_{\boldsymbol{m}\boldsymbol{a}\boldsymbol{x}} $ do | |
2-20 | (loop body of MIA, rendered as an image in the original)
||
21 | end for | |
22 | Return $ {X}_{bin} $, $ {X}_{ij} $, $ {Fit}_{old} $ |
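Since MIA is described above only at a high level, the following is a hedged interpretation rather than the authors' exact procedure: several random candidate populations are drawn and the fittest individuals are kept as the starting swarm. The oversampling factor and the function signature are assumptions.

```python
import numpy as np

def mia_init(pop_size, dim, fitness, oversample=3):
    """Sketch of MIA: draw several random binary populations and keep the
    pop_size fittest individuals as the starting swarm (lower is better)."""
    candidates = (np.random.rand(pop_size * oversample, dim) < 0.5).astype(int)
    scores = np.array([fitness(ind) for ind in candidates])
    best = np.argsort(scores)[:pop_size]   # minimization, as in Eq (10)
    return candidates[best], scores[best]
```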
The LSA algorithm was created and presented in Algorithm 3 by [35]. In each iteration of the original FSA, after the migratory flamingos $ {MP}_{b} $ are updated, LSA is called to enhance the local positions obtained by Eq (2). After the migratory flamingos have moved to their best positions, LSA is called again to improve the best solution $ {X}_{ij}^{t+1} $ currently obtained, by removing any further potentially pointless features. At first, LSA stores in a variable $ Temp $ the value of $ {X}_{best}^{t+1} $ produced at the end of each IBFSA iteration. To improve $ Temp $, LSA runs iteratively $ LT $ times. At each iteration $ {L}_{t} $ of LSA, four features $ rand-feat $ are randomly selected from $ Temp $, and every variable in $ rand-feat $ is flipped. Then, the fitness $ f\left(Temp\right) $ of the new solution (the new $ Temp $) is evaluated; if it is better than that of $ {X}_{best}^{t+1} $, then $ {X}_{best}^{t+1} $ is set to $ Temp $; otherwise, $ {X}_{best}^{t+1} $ and $ {f}_{g} $ are kept unaltered. A sketch of this procedure follows Algorithm 3 below.
Algorithm 3: The proposed LSA algorithm | |
$ \boldsymbol{L}\boldsymbol{T}- $ maximum of number of local iterations; $ {X}_{best}^{t+1} $ /* the best position so far at the end of IBFSA current iteration $ t+1 $; $ Temp\leftarrow {X}_{best}^{t+1} $ $ Lt\leftarrow 0; $ |
|
1 | While $ Lt < LT $ do |
2-12 | (loop body of LSA, rendered as an image in the original)
|
13 | end while |
14 | Return $ {\boldsymbol{X}}_{\boldsymbol{b}\boldsymbol{e}\boldsymbol{s}\boldsymbol{t}} $, $ {f}_{g} $ |
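A compact sketch of this local search, following the description above, might look as follows; the number of iterations $ LT $ and the four flipped features come from the text, while the function signature and the minimization convention of Eq (10) are assumptions.

```python
import numpy as np

def lsa(x_best, f_best, fitness, lt_max=20, n_flip=4):
    """Local search (Algorithm 3): repeatedly flip four random features of a
    copy of the best solution, accepting a change only if fitness improves."""
    temp = x_best.copy()
    for _ in range(lt_max):
        idx = np.random.choice(len(temp), size=n_flip, replace=False)
        temp[idx] = 1 - temp[idx]            # reverse the selected features
        f_temp = fitness(temp)
        if f_temp < f_best:                   # minimization of Eq (10)
            x_best, f_best = temp.copy(), f_temp
    return x_best, f_best
```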
In addition, in order not to lose the distinctive positions the flamingo passes through during its search for the global optimum, we added a parameter that lets each flamingo retain the position with the best fitness value it has reached so far; this prevents the flamingo from moving away from an optimal position to a worse one.
After each flamingo is converted by the TF into a binary vector whose length equals the number of features in the dataset, the fitness function of IBFSA is used to quantify each flamingo's quality by combining two seemingly opposing goals: the number of chosen features and the classification accuracy. The FS problem seeks to maximize classification accuracy (minimize the error rate) with a minimum of selected features. The model performance was then optimized with the SVM technique, and the optimal set of features for detecting COVID-19 was determined by identifying the best flamingo. IBFSA uses the following fitness function to evaluate the solutions and achieve an equilibrium between the two main goals:
$ {\boldsymbol{F}\boldsymbol{i}\boldsymbol{t}}_{\boldsymbol{F}\boldsymbol{S}} = \alpha \times E+\beta \times \frac{d}{D} $ | (10) |
Where $ E $ is the classifier's error rate, $ d $ is the number of features used to make a decision, and $ D $ is the total number of features. In addition, the values of $ \alpha $ and $ \beta $ are the weights employed to strike a balance between these two goals.
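A wrapper fitness of this form can be sketched as below, assuming scikit-learn. The weights α = 0.99 and β = 0.01 match Table 3, the 80/20 split and the linear-kernel SVM match the evaluation setup described later, and the handling of empty subsets is an added safeguard, not part of the original specification.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def fitness(bits, X, y, alpha=0.99, beta=0.01):
    """Wrapper fitness, Eq (10): alpha * error_rate + beta * d / D."""
    if bits.sum() == 0:                      # empty subsets are invalid
        return 1.0
    X_sub = X[:, bits.astype(bool)]          # keep only the selected columns
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_sub, y, test_size=0.2, random_state=0)
    clf = SVC(kernel="linear").fit(X_tr, y_tr)
    error = 1.0 - clf.score(X_te, y_te)      # E: classification error rate
    return alpha * error + beta * bits.sum() / len(bits)
```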
Algorithm 4: The proposed IBFSA based on MIA, TF, Levy flight and LSA |
Input: $ M-maximum \; number\; of\; iterations $ $ N-total\; number\; of\; flamingo $ $ {MP}_{b}-number\; of\; migrating\; flamingo $ Output: $ {X}_{best}-Global\; optimal\; position $ $ {f}_{best}-Fitness\; of\; global\; optimal\; position $ |
|
1 | Start |
2 | Initialize a swarm of $ N $ $ flamingo $ s and its relevant parameters; |
3 | Apply MIA to $ {X}_{ij} $ using Algorithm (2); |
4 | $ t\leftarrow 1; $ |
5 | While $ t < M $ do |
6-47 | (main loop body of IBFSA, rendered as an image in the original)
|
48 | end while |
49 | end |
50 | Return $ {X}_{best} $, $ {f}_{g} $ /*Xbest is the best solution obtained by the algorithm*/ |
The proposed method is a wrapper-based approach, so a learning algorithm must be part of the assessment process. In this research, SVMs are used as classifiers in the fitness evaluation [36,37,38] because they are highly efficient, particularly on datasets with only two classes; the other classifiers are used in all remaining evaluations. Each dataset was split at random into 80% for training and 20% for testing. Multiple metrics, including precision, sensitivity, F-measure, Macro-F1, and Macro-Recall, are used to assess the results of our tests and verify the efficacy of the suggested method. They are defined as follows:
$ \mathrm{P}\mathrm{r}\mathrm{e}\mathrm{c}\mathrm{i}\mathrm{s}\mathrm{i}\mathrm{o}\mathrm{n} = \frac{TP}{TP+FP}, \mathrm{R}\mathrm{e}\mathrm{c}\mathrm{a}\mathrm{l}\mathrm{l} = \frac{TP}{TP+FN} $ | (11) |
$ \mathrm{F}1\_\mathrm{s}\mathrm{c}\mathrm{o}\mathrm{r}\mathrm{e} = 2\times \frac{\mathrm{P}\mathrm{r}\mathrm{e}\mathrm{c}\mathrm{i}\mathrm{s}\mathrm{i}\mathrm{o}\mathrm{n}\times \mathrm{R}\mathrm{e}\mathrm{c}\mathrm{a}\mathrm{l}\mathrm{l}}{\mathrm{P}\mathrm{r}\mathrm{e}\mathrm{c}\mathrm{i}\mathrm{s}\mathrm{i}\mathrm{o}\mathrm{n}+\mathrm{R}\mathrm{e}\mathrm{c}\mathrm{a}\mathrm{l}\mathrm{l}} $ | (12) |
$ MacroF = \frac{1}{T}\sum _{j = 1}^{T}{F}_{j} $ | (13) |
$ MacroR = \frac{1}{T}\sum _{j = 1}^{T}{R}_{j} $ | (14) |
Where $ T $ denotes the total number of classes and $ {F}_{j} $, $ {R}_{j} $ are the F and R values in the $ {j}^{th} $ class. In order to increase the statistical significance of the empirical results, we independently run each optimization technique 20 times on all datasets. For each assessment, the following metrics are calculated: average classification accuracy, feature selection ratio, average fitness, and standard deviation (SD), adopted as follows:
$ {\mathbf{ \pmb{\mathsf{ μ}} }}_{\mathbf{f}\mathbf{e}\mathbf{a}\mathbf{t}} = \frac{1}{20}\sum _{\mathrm{k} = 1}^{20}\frac{{\mathrm{d}}_{\mathrm{*}}^{\mathrm{k}}}{\mathrm{D}} $ | (15) |
$ {\mathbf{ \pmb{\mathsf{ μ}} }}_{\mathbf{f}\mathbf{i}\mathbf{t}} = \frac{1}{20}\sum _{\mathrm{k} = 1}^{20}{\mathrm{f}}_{\mathrm{*}}^{\mathrm{k}} $ | (16) |
$ \mathbf{S}\mathbf{D} = \sqrt{\frac{1}{19}{\sum }_{\mathrm{k} = 1}^{20}{\left({\mathrm{Y}}_{\mathrm{*}}^{\mathrm{k}}-{\mathrm{\mu }}_{\mathrm{Y}}\right)}^{2}} $ | (17)
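These run-level summaries can be computed directly from the 20 recorded runs, as in the sketch below (a straightforward reading of Eqs (15)-(17); the dictionary layout is an illustrative choice):

```python
import numpy as np

def run_statistics(selected_counts, fitness_values, n_features):
    """Summaries over K = 20 independent runs, Eqs (15)-(17)."""
    ratios = np.asarray(selected_counts) / n_features
    fits = np.asarray(fitness_values)
    return {
        "mean_selection_ratio": ratios.mean(),   # Eq (15)
        "mean_fitness": fits.mean(),             # Eq (16)
        "std_fitness": fits.std(ddof=1),         # Eq (17), sample SD
    }
```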
This section offers a comprehensive empirical examination of the IBFSA optimization algorithm's behavior with its several improvements. Two datasets of COVID-19 patient medical records are used for the experiments; Table 2 details these data collections.
It is well known that it is challenging for a metaheuristic method to achieve optimal performance across all possible optimization scenarios, especially when employing the same parameter settings. Therefore, to obtain optimal performance, it is preferable to fine-tune the critical parameters for each optimization problem independently. Once IBFSA has been defined and its procedure explained, its parameters must be established (the number of flamingos, the number of iterations, and the number of runs). The iterations give the flamingos the chance to achieve the best intensity within one generation; repeating the search over multiple runs yields the best overall intensity. Although the runs take more time, they help ensure that the produced solution is optimal. Note that only a subset (80%) of the COVID-19 datasets is used in the parameter-setup experiments, while the remaining data is held out for final assessment and validation (testing data). To prevent random bias, each combination is run 20 times independently, and the average results are reported. In addition, the suggested method was compared to state-of-the-art wrapper approaches, namely BPSO, BGWO, BWOA, BMFO and BFFA. All algorithms were run on the same computing platform with the same parameter settings to ensure fair comparisons. Table 3 displays the tuned parameters.
IBFSA Parameters | Description | Setting |
Runs | Number of independent runs | 20
Pop. Size(N) | Number of flamingo search agents | 50 |
Itermax | Maximum number of iterations | 500 |
Dim | Dimension | Number of features |
β | Significance of the feature subset | 0.01 |
α | Importance of classification accuracy | 0.99 |
$ {\boldsymbol{M}\boldsymbol{P}}_{\boldsymbol{b}} $ | Proportion of migrating flamingo | 0.1 |
Here, we show the results obtained by applying our method to the test datasets associated with COVID-19, measuring how well the system classifies the data. Experiments are conducted in two stages. In stage one, the impact of the term weighting scheme on categorizing COVID-19 patients is investigated, as we look for the best performance by including it in the suggested strategy. In the second stage, the proposed IBFSA is compared to numerous alternative wrapper FS methods to demonstrate its efficacy. The IBFSA result, consisting of clinical texts with reduced feature sizes, is used as input for classifiers to assign patients to the appropriate classes. Note that the feature selection phase was separated from the categorization phase. An SVM with a linear kernel serves as the baseline classifier, and random forest, logistic regression, the naive Bayes classifier, and the multi-layer perceptron are also used to assess the quality of the feature subsets. These experiments are based on two key metrics: 1) the total number of features chosen, and 2) the classification accuracy. Measures such as the best, worst, and mean fitness values, the STD of the average fitness values, the average number of selected features, the average accuracy, and the maximum accuracy are used to evaluate IBFSA's performance on the FS problem in this section. For ease of understanding, the optimal results of a particular method are presented in bold.
Table 4 displays the total number of features extracted during pre-processing, before the feature selection procedure. Tables 7 and 8 display the total number of features chosen from the datasets by the various techniques. The tables show that, on average over 20 runs, IBFSA picks fewer features than any other technique tested (for both DS1 and DS2). Keep in mind that accuracy and the number of selected features trade off against each other, so it may be challenging to obtain the best results on both objectives for any dataset. In light of this, we can conclude that the proposed IBFSA outperforms the other algorithms in terms of feature selection on the chosen datasets, as shown in Figures 6 and 7.
Dataset | Number of features |
DS1 of Covid-19 | 377 |
DS2 of Covid-19 | 2367 |
Algorithm | Best | Worst | SD | Mean |
PSO | 11.9508 | 13.3517 | 3.6424 | 12.9468 |
WOA | 13.1452 | 14.6777 | 3.7754 | 13.7407 |
MFO | 12.8370 | 13.7504 | 2.1992 | 13.2715 |
GWO | 15.1563 | 16.8318 | 4.8170 | 16.1638 |
FFA | 13.8441 | 14.8428 | 2.7810 | 14.3461 |
IBFSA | 13.2032 | 18.6477 | 13.1204 | 15.2640 |
Algorithm | Best | Worst | SD | Mean |
PSO | 4.6866 | 5.4455 | 2.067 | 5.0539 |
WOA | 4.8834 | 5.9688 | 2.5784 | 5.6351 |
MFO | 4.7724 | 5.5376 | 2.6126 | 5.3095 |
GWO | 6.8914 | 9.0156 | 5.8036 | 8.0924 |
FFA | 4.9955 | 6.1279 | 3.5989 | 5.7708 |
IBFSA | 2.3806 | 5.3688 | 8.9300 | 3.9802 |
Algorithm | Best | Worst | SD | Selection Ratio | Removal Ratio |
PSO | 267 | 302 | 8.5006 | 73.5941 | 26.4058 |
WOA | 181 | 324 | 29.0923 | 79.1909 | 20.809 |
MFO | 270 | 304 | 10.8204 | 75.557 | 24.4429 |
GWO | 175 | 208 | 8.8317 | 50.1326 | 49.8673 |
FFA | 197 | 225 | 8.5230 | 56.3129 | 43.6870 |
IBFSA | 54 | 86 | 7.6461 | 17.9310 | 82.0689 |
Algorithm | Best | Worst | SD | Selection Ratio | Removal Ratio |
PSO | 1681 | 1773 | 2.5576 | 72.858 | 27.1419 |
WOA | 1156 | 1951 | 28.1438 | 72.3595 | 27.6404 |
MFO | 1669 | 1830 | 3.8661 | 74.4592 | 25.5407 |
GWO | 1128 | 1245 | 2.7217 | 49.8183 | 50.1816 |
FFA | 1299 | 1377 | 1.9534 | 56.2251 | 43.7748 |
IBFSA | 225 | 312 | 2.2832 | 11.2568 | 88.7431 |
The boxplots in Figures 8 and 9 summarize, for both datasets, the number of features selected and the algorithms' performance. Note that the boxplots reflect the classification outcomes and the number of selected features, displayed after each method was executed 20 times. These figures allow us to visually inspect the minimum, median, and maximum values of the data. As they show, IBFSA's boxplots are higher than those of the other approaches on both datasets.
Tables 9 and 10 show that, when comparing LR and RF performance, IBFSA performs best in terms of accuracy, precision, and F-measure, while there is no significant difference in average recall between IBFSA and the others. For the MLP classifier, Table 11 shows that IBFSA has the best mean performance as measured by the F-measure. On the other hand, Table 12 shows that, compared to other models, the combination of naive Bayes and IBFSA can categorize the texts with higher sensitivity. Moreover, Table 13 shows that the SVM with IBFSA has superior efficacy and outperforms all other algorithms in classifier performance; see Figure 10.
Algorithm | Accuracy | Precision | Recall | F-measure | ||||
Best | Mean | Best | Mean | Best | Mean | Best | Mean | |
PSO | 79.5247 | 76.5082 | 78.9809 | 75.9353 | 85.3741 | 82.4489 | 81.5780 | 79.0423 |
WOA | 78.7934 | 77.0658 | 77.9874 | 76.0964 | 85.034 | 83.6054 | 81.0450 | 79.6699 |
MFO | 77.3309 | 76.4533 | 76.3975 | 75.4018 | 84.6939 | 83.4183 | 79.8700 | 79.2028 |
GWO | 77.6965 | 75.6307 | 77.3885 | 73.7182 | 88.7755 | 85.0850 | 80.5030 | 78.9576 |
FFA | 79.7075 | 76.1791 | 79.4212 | 74.6162 | 87.4150 | 84.4387 | 81.6520 | 79.2132 |
IBFSA | 83.6996 | 80.1190 | 87.3494 | 80.8904 | 88.9262 | 83.6409 | 85.3377 | 81.8920 |
Algorithm | Accuracy | Precision | Recall | F-measure | ||||
Best | Mean | Best | Mean | Best | Mean | Best | Mean | |
PSO | 78.9762 | 77.1755 | 77.2871 | 75.3667 | 85.3741 | 83.1632 | 80.3226 | 79.0642 |
WOA | 79.5247 | 77.989 | 77.7429 | 75.7595 | 87.0748 | 83.8775 | 80.9135 | 79.6005 |
MFO | 79.159 | 77.5502 | 76.8519 | 75.0723 | 86.3946 | 84.3027 | 80.5825 | 79.4137 |
GWO | 77.5137 | 76.1152 | 75.3943 | 73.6087 | 89.1156 | 83.4524 | 80.4992 | 78.1753 |
FFA | 79.8903 | 77.0292 | 79.0323 | 74.6154 | 86.7347 | 83.7925 | 81.1258 | 78.9211 |
IBFSA | 80.9524 | 78.7912 | 79.6774 | 76.0975 | 94.2953 | 89.2953 | 84.2579 | 82.1315 |
Algorithm | Accuracy | Precision | Recall | F-measure | ||||
Best | Mean | Best | Mean | Best | Mean | Best | Mean | |
PSO | 79.8903 | 75.6307 | 80.0654 | 76.9761 | 88.7755 | 78.1292 | 81.6667 | 77.3871 |
WOA | 79.159 | 76.8098 | 79.4118 | 75.5138 | 90.1361 | 84.4217 | 81.2102 | 79.6115 |
MFO | 77.5137 | 76.0603 | 78.5156 | 75.1362 | 90.8163 | 83.1632 | 80.3709 | 78.8295 |
GWO | 77.8793 | 75.4936 | 78.5441 | 73.9967 | 89.1156 | 84.3367 | 80.4314 | 78.6784 |
FFA | 79.3419 | 75.5393 | 78.6184 | 74.8233 | 90.4762 | 82.6700 | 81.5057 | 78.3134 |
IBFSA | 79.7075 | 77.5686 | 81.2287 | 76.6963 | 92.8571 | 83.8946 | 82.0350 | 80.0531 |
Algorithm | Accuracy | Precision | Recall | F-measure | ||||
Best | Mean | Best | Mean | Best | Mean | Best | Mean | |
PSO | 76.782 | 73.7842 | 72.8814 | 70.4028 | 90.8163 | 88.5204 | 80.7284 | 78.4110 |
WOA | 76.416 | 74.6618 | 73.5043 | 71.6316 | 89.4558 | 87.5680 | 80.0000 | 78.7926 |
MFO | 76.051 | 74.6343 | 72.5762 | 71.4047 | 89.7959 | 88.0952 | 80.0000 | 78.8734 |
GWO | 76.416 | 73.6380 | 71.6535 | 69.5449 | 93.5374 | 90.7993 | 80.8889 | 78.7455 |
FFA | 76.234 | 74.4698 | 71.5847 | 70.5987 | 92.5170 | 90.0000 | 80.7122 | 79.1192 |
IBFSA | 76.5996 | 74.0859 | 72.3118 | 69.6321 | 93.8776 | 92.3129 | 80.7808 | 79.3395 |
Algorithm | Accuracy | Precision | Recall | F-measure | ||||
Best | Mean | Best | Mean | Best | Mean | Best | Mean | |
PSO | 79.1590 | 76.5082 | 80.2768 | 75.8037 | 88.4354 | 82.9251 | 81.0631 | 79.1439 |
WOA | 78.2450 | 76.9652 | 77.7070 | 76.0817 | 85.7143 | 83.3843 | 80.5873 | 79.5568 |
MFO | 77.8793 | 76.5996 | 76.8750 | 75.2367 | 87.0748 | 84.1836 | 80.8847 | 79.4516 |
GWO | 77.1481 | 75.5210 | 76.0125 | 72.7785 | 89.7959 | 87.1088 | 80.6107 | 79.2690 |
FFA | 79.5247 | 76.0146 | 79.3548 | 73.6135 | 88.4354 | 86.4285 | 81.4570 | 79.4856 |
IBFSA | 84.9817 | 82.0330 | 83.1288 | 79.0333 | 96.3087 | 91.4933 | 86.8590 | 84.7629 |
The machine learning classifiers' results on the second dataset are displayed in Tables 14–18. As can be seen from Tables 14–16, the classifiers achieved promising performance across all methods, with only marginal differences in accuracy among them. Notably, Table 17 shows that an NB classifier trained with IBFSA proves superior to its peers, achieving an average classification sensitivity of 98.25% and a maximum sensitivity of 100% among the 20 runs. Meanwhile, Table 18 shows that IBFSA has the most accurate performance of all rivals with the SVM classifier; see Figure 11.
Algorithm | Accuracy | Precision | Recall | F-measure | ||||
Best | Mean | Best | Mean | Best | Mean | Best | Mean | |
PSO | 93.8849 | 91.6187 | 94.9721 | 92.7515 | 96.0674 | 94.2977 | 95.2646 | 93.5134 |
WOA | 93.1655 | 91.8345 | 94.3820 | 92.8152 | 96.6292 | 94.5786 | 94.7075 | 93.6820 |
MFO | 93.5252 | 92.3201 | 94.9153 | 93.2023 | 96.6292 | 94.9438 | 95.0276 | 94.0601 |
GWO | 94.2446 | 90.9172 | 96.0227 | 92.9304 | 96.0674 | 92.8932 | 95.4802 | 92.9027 |
FFA | 93.5252 | 91.4388 | 93.8889 | 92.7945 | 96.6292 | 93.9325 | 95.0276 | 93.3521 |
IBFSA | 93.1655 | 90.4676 | 95.4286 | 92.5492 | 94.9438 | 92.5842 | 94.6176 | 92.5574 |
Algorithm | Accuracy | Precision | Recall | F-measure | ||||
Best | Mean | Best | Mean | Best | Mean | Best | Mean | |
PSO | 94.2446 | 92.2302 | 93.5484 | 90.621 | 97.7528 | 95.9269 | 95.6044 | 93.1861 |
WOA | 93.5252 | 92.5000 | 93.7500 | 91.1323 | 97.7528 | 96.0393 | 94.7368 | 93.5073 |
MFO | 93.8849 | 92.6978 | 93.4066 | 91.1925 | 97.191 | 96.3202 | 95.0549 | 93.6793 |
GWO | 93.1655 | 91.5287 | 94.7977 | 91.9839 | 97.7528 | 93.3988 | 94.7658 | 92.6582 |
FFA | 92.8058 | 91.7985 | 94.3503 | 91.8734 | 96.6292 | 94.3258 | 94.1176 | 93.0739 |
IBFSA | 92.8058 | 91.5468 | 94.8276 | 93.3384 | 93.2584 | 91.9663 | 93.7853 | 92.6419 |
Algorithm | Accuracy | Precision | Recall | F-measure | ||||
Best | Mean | Best | Mean | Best | Mean | Best | Mean | |
PSO | 92.8058 | 90.4496 | 94.7977 | 92.8646 | 96.0674 | 92.1909 | 94.3820 | 92.5100 |
WOA | 92.4460 | 90.7014 | 94.3503 | 92.4501 | 97.1910 | 93.1460 | 94.0845 | 92.7617 |
MFO | 92.4460 | 90.7554 | 95.8824 | 93.2032 | 95.5056 | 92.3595 | 94.1828 | 92.7467 |
GWO | 92.8058 | 89.4964 | 95.4023 | 92.5473 | 96.6292 | 90.9550 | 94.5055 | 91.7095 |
FFA | 92.8058 | 90.5755 | 94.6429 | 92.4290 | 97.1910 | 92.9213 | 94.5355 | 92.6571 |
IBFSA | 92.4460 | 90.1798 | 95.3757 | 92.9065 | 94.3820 | 91.7135 | 94.0171 | 92.2804 |
Algorithm | Accuracy | Precision | Recall | F-measure | ||||
Best | Mean | Best | Mean | Best | Mean | Best | Mean | |
PSO | 91.3669 | 88.4172 | 88.8889 | 86.6196 | 98.8764 | 96.9101 | 93.617 | 91.4698 |
WOA | 89.5683 | 87.8057 | 88.601 | 85.9397 | 98.8764 | 96.882 | 92.2667 | 91.0656 |
MFO | 92.446 | 88.7589 | 91.9786 | 87.2246 | 98.8764 | 96.6572 | 94.2466 | 91.6853 |
GWO | 88.1295 | 85.1978 | 86.4322 | 82.5656 | 98.8764 | 97.528 | 91.2467 | 89.4133 |
FFA | 90.2878 | 85.9712 | 88.3249 | 83.5081 | 98.8764 | 97.4157 | 92.8 | 89.9095 |
IBFSA | 83.0935 | 77.7338 | 79.638 | 74.7827 | 100 | 98.5674 | 88.2206 | 85.0241 |
Algorithm | Accuracy | Precision | Recall | F-measure | ||||
Best | Mean | Best | Mean | Best | Mean | Best | Mean | |
PSO | 94.6043 | 92.7338 | 95.4802 | 93.4127 | 97.191 | 95.3932 | 95.8449 | 94.3868 |
WOA | 93.8849 | 92.7877 | 94.4444 | 93.2464 | 97.191 | 95.6741 | 95.2909 | 94.4405 |
MFO | 94.2446 | 93.0395 | 95 | 93.7031 | 97.191 | 95.5617 | 95.5801 | 94.6191 |
GWO | 93.5252 | 91.5287 | 95.9064 | 93.3714 | 96.6292 | 93.4269 | 94.9438 | 93.3876 |
FFA | 93.8849 | 91.6906 | 94.9721 | 93.0589 | 96.0674 | 94.0449 | 95.2646 | 93.5463 |
IBFSA | 97.1119 | 95.0541 | 97.1098 | 94.1285 | 99.4186 | 98.1613 | 97.7011 | 96.0932 |
As per the results in Tables 13 and 18, the IBFSA optimizer with the SVM classifier demonstrates greater classification accuracy than the variations using the LR, RF, MLP and NB classifiers on all selected datasets. One reason is that the SVM classifier has built-in protection against over-fitting and does not depend primarily on the total number of processed features, so it handles large text feature spaces better than the other classifiers studied. As seen in the results, when dealing with sparse samples, the SVM demonstrates steadier efficacy than the other models. On these datasets, the IBFSA algorithm achieves better feature selection accuracy than any competing approach. The inclusion of new, more efficient components that improve the algorithm's balance between exploration and exploitation is one possible explanation for this improved performance.
A new diagnostic model for COVID-19 has been developed that effectively increases the final prediction accuracy. The suggested approach includes two primary stages: the first uses RTF-C-IEF to determine feature importance; in the second, the modified flamingo search algorithm chooses a collection of pertinent, non-redundant features. Finally, an SVM-based classifier predicts COVID-19 from the selected clinical text features. Our experiments were conducted on two datasets: the first was collected from hospitals in southern Iraq, and the second from several online sources. In IBFSA, we presented four ways to boost both the global and local search capabilities of the algorithm; in addition, the continuous approach was adapted to the binary feature selection problem through a binary transformation method. We compared the suggested technique to state-of-the-art swarm feature selection methods such as PSO, MFO, GWO and FFA. Experiments reveal that the suggested technique reduces the feature subset by more than 88% with accuracy superior to the other methods. As a result, the suggested approach is a powerful feature selection method for classifying COVID-19 patients. Moreover, feature selection with IBFSA decreases the number of diagnostic mistakes for COVID-19 patients: it helps machine learning focus on the most relevant information, lessening the likelihood of an incorrect diagnosis when distinguishing infected from uninfected individuals. In future work, we will expand and diversify the test datasets to better assess the suggested methodology.
The original data (first dataset) used and/or processed during the current study is part of the health records of a group of hospitals in southern Iraq. Therefore, the data (DS1) are not available to the general public, but may be made available from the corresponding author upon reasonable request.
The authors express their deep acknowledgment and great thanks to the staff of the hospitals and healthcare providers who supplied the clinical data for this study, especially the hospitals in Iraq.
The authors declare no conflict of interest.
Method | Advantages | Disadvantages
Aquila Optimizer (AO) and ML [13] | AO significantly outperforms the comparison algorithms in predictive accuracy and in reducing the number of selected features. | The COVID-19 patient dataset used is small and not of high enough dimensionality for the method to be explored effectively.
AGTO and ML [14] | Efficient at reducing the number of features used while improving accuracy; the approach has also been demonstrated in practical applications on real-world COVID-19 datasets. | AGTO usually takes longer to run. In addition, the dataset is not very high-dimensional. The algorithm's efficiency could be improved with advanced initialization procedures.
PSO and DBNB classification [17] | Attempts to identify infected patients accurately with minimal time penalty, based on the most effective features selected by APSO. | Although effective at diagnosing COVID-19 patients, the method relies solely on numerical data, and the dataset is insufficient for diagnosing COVID-19 because it is limited to clinical laboratory data. Analyzing CT scan reports could help confirm infection.
GOA and CNN [18] | Easy to implement and fast, since the CNN is optimized by GOA. | Could be further improved with a more detailed dataset containing more images from all three classes.
BOA, PSO and ML [20] | More effective at classifying COVID-19 patients than conventional classification methods. | The COVID-19 patient dataset used is small and not of high dimensionality.
CA and ANN [21] | ANN is a powerful classification technique. | Patient selection is potentially biased because the dataset is highly imbalanced: infected patients make up only 10% of the total.
BSO, FA and ML [22] | More effective at classifying COVID-19 patients than conventional classification methods. | The COVID-19 dataset is small, limited to symptom data, and contains many missing values, so it requires additional pre-processing methods.
Algorithm 1: Standard Flamingo Search Algorithm
Input: $M$ – maximum number of iterations; $N$ – total number of flamingos; $MP_b$ – proportion of migrating flamingos
Output: $X_{best}$ – global optimal position; $f_{best}$ – fitness of the global optimal position
1 | Start
2 | Initialize a swarm of $N$ flamingos and its relevant parameters;
3 | $t \leftarrow 1$;
4 | While $t < M$ do
| (Steps 5–29, the loop body, appear only as an image in the source and could not be recovered as text.)
30 | end while
31 | Return $X_{best}$, $f_g$ /* $X_{best}$ is the best solution obtained by the algorithm */
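Because steps 5–29 of Algorithm 1 survive only as an image, the Python skeleton below is a hedged reconstruction of the standard FSA control flow: each iteration splits the swarm into migrating and foraging flamingos, updates their positions, and tracks the global best. The position-update formulas here are simplified stand-ins, not the published FSA equations.

```python
import numpy as np

def fsa(fitness, dim, n=50, m_iter=500, mp_b=0.1, lb=-1.0, ub=1.0, seed=0):
    """Skeleton of the standard Flamingo Search Algorithm control flow.
    The update rules below are simplified stand-ins for the published ones."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))           # step 2: initialize the swarm
    fit = np.apply_along_axis(fitness, 1, X)
    best = X[fit.argmin()].copy()
    for _ in range(m_iter):                     # steps 4-30: main loop
        n_mig = max(1, int(mp_b * n))           # MP_b fraction migrates
        order = fit.argsort()
        for i in order[-n_mig:]:                # migration: worst flamingos
            X[i] = X[i] + rng.normal() * (best - X[i])   # drift toward best
        for i in order[:-n_mig]:                # foraging: stochastic move
            X[i] = (X[i] + rng.standard_normal(dim) * best) / np.sqrt(2.0)
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(fitness, 1, X)
        if fit.min() < fitness(best):           # track the global best
            best = X[fit.argmin()].copy()
    return best, fitness(best)

# Example: minimize the sphere function in 5 dimensions
best, f = fsa(lambda x: float(np.sum(x ** 2)), dim=5, m_iter=50)
print(f)
```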
No | Type | No. of records | Label | Rate of occurrence
DS1 | Clinical text | 3053 | Severe / Non-severe | 55% / 45%
DS2 | Clinical text | 1446 | COVID-19 positive / COVID-19 negative | 62% / 38%
Algorithm 2: The proposed MIA algorithm
$X_{ij}$ = positions of the flamingos; /* randomly generate the positions of $N$ flamingos */
$X_{bin}$ = binary_map($X_{ij}$); /* continuous positions converted to binary */
$Fit_{old}$ = fitness of every flamingo in the population;
$D_{max}$ = maximum number of local (initialization) iterations; $M_{max}$ = maximum number of iterations; $N$ = population size.
1 | for $d = 1$ to $D_{max}$ do
| (Steps 2–20, the loop body, appear only as an image in the source and could not be recovered as text.)
21 | end for
22 | Return $X_{bin}$, $X_{ij}$, $Fit_{old}$
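Algorithm 2 relies on a `binary_map` step to turn continuous flamingo positions into 0/1 feature masks. The paper's binary transformation method (TF) appears only in the imaged pseudocode, so the sketch below uses the common S-shaped (sigmoid) transfer function as an illustrative assumption.

```python
import numpy as np

def binary_map(X, rng=None):
    """S-shaped transfer function: map continuous positions in X to 0/1.
    Each position becomes 1 (feature kept) with probability sigmoid(x).
    The sigmoid variant is an assumption; the source shows TF only as an
    image."""
    rng = rng or np.random.default_rng()
    prob = 1.0 / (1.0 + np.exp(-X))         # squash each position to (0, 1)
    return (rng.random(X.shape) < prob).astype(int)

# Example: a swarm of 3 flamingos over 6 candidate features
X = np.random.default_rng(1).normal(size=(3, 6))
print(binary_map(X))                         # each row is a 0/1 feature mask
```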
Algorithm 3: The proposed LSA algorithm
$LT$ = maximum number of local iterations;
$X_{best}^{t+1}$ = the best position found so far at the end of IBFSA iteration $t+1$;
$Temp \leftarrow X_{best}^{t+1}$; $Lt \leftarrow 0$;
1 | While $Lt < LT$ do
| (Steps 2–12, the loop body, appear only as an image in the source and could not be recovered as text.)
13 | end while
14 | Return $X_{best}$, $f_g$
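Algorithm 3 performs a local search around the incumbent best solution, and Algorithm 4's title indicates that Levy flight is involved. Since the loop body survives only as an image, the following is a hedged sketch of a standard Mantegna-style Levy perturbation applied around $X_{best}$, not the authors' exact procedure; the step scale `0.01` and iteration count are illustrative.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna's algorithm: draw one Levy-distributed step of size `dim`."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def local_search(x_best, fitness, lt_max=20, rng=None):
    """Hedged sketch of the LSA loop: perturb the incumbent best with Levy
    flights for LT iterations, keeping only improvements."""
    rng = rng or np.random.default_rng()
    temp, f_best = x_best.copy(), fitness(x_best)
    for _ in range(lt_max):
        cand = temp + 0.01 * levy_step(temp.size, rng=rng)  # small Levy move
        f_cand = fitness(cand)
        if f_cand < f_best:                                 # greedy accept
            temp, f_best = cand, f_cand
    return temp, f_best

# Example: refine a point for the sphere function
x, f = local_search(np.ones(5), lambda v: float(np.sum(v ** 2)))
print(f)
```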
Algorithm 4: The proposed IBFSA based on MIA, TF, Levy flight and RSA
Input: $M$ – maximum number of iterations; $N$ – total number of flamingos; $MP_b$ – proportion of migrating flamingos
Output: $X_{best}$ – global optimal position; $f_{best}$ – fitness of the global optimal position
1 | Start
2 | Initialize a swarm of $N$ flamingos and its relevant parameters;
3 | Apply MIA to $X_{ij}$ using Algorithm 2;
4 | $t \leftarrow 1$;
5 | While $t < M$ do
| (Steps 6–47, the loop body, appear only as an image in the source and could not be recovered as text.)
48 | end while
49 | end
50 | Return $X_{best}$, $f_g$ /* $X_{best}$ is the best solution obtained by the algorithm */
IBFSA Parameters | Description | Setting |
Runs | Number of independent runs | 20
Pop. Size(N) | Number of flamingo search agents | 50 |
Itermax | Maximum number of iterations | 500 |
Dim | Dimension | Number of features |
β | Significance of the feature subset | 0.01 |
α | Importance of classification accuracy | 0.99 |
$MP_b$ | Proportion of migrating flamingos | 0.1
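The α = 0.99 and β = 0.01 settings above are the weights of the usual wrapper-style feature-selection fitness, which trades classification error against subset size. A minimal sketch under that assumption (the source gives the weights, but the exact formula appears only in the imaged pseudocode):

```python
def fitness(error_rate, n_selected, n_total, alpha=0.99, beta=0.01):
    """Wrapper FS fitness: alpha weighs the classification error, beta weighs
    the fraction of features kept; both terms are minimized."""
    return alpha * error_rate + beta * (n_selected / n_total)

# Example: 20% error with 54 of the 377 DS1 features selected
print(fitness(0.20, 54, 377))   # -> ~0.1994
```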
Dataset | Number of features |
DS1 of COVID-19 | 377
DS2 of COVID-19 | 2367
Algorithm | Best | Worst | SD | Mean |
PSO | 11.9508 | 13.3517 | 3.6424 | 12.9468 |
WOA | 13.1452 | 14.6777 | 3.7754 | 13.7407 |
MFO | 12.8370 | 13.7504 | 2.1992 | 13.2715 |
GWO | 15.1563 | 16.8318 | 4.8170 | 16.1638 |
FFA | 13.8441 | 14.8428 | 2.7810 | 14.3461 |
IBFSA | 13.2032 | 18.6477 | 13.1204 | 15.2640 |
Algorithm | Best | Worst | SD | Mean |
PSO | 4.6866 | 5.4455 | 2.067 | 5.0539 |
WOA | 4.8834 | 5.9688 | 2.5784 | 5.6351 |
MFO | 4.7724 | 5.5376 | 2.6126 | 5.3095 |
GWO | 6.8914 | 9.0156 | 5.8036 | 8.0924 |
FFA | 4.9955 | 6.1279 | 3.5989 | 5.7708 |
IBFSA | 2.3806 | 5.3688 | 8.9300 | 3.9802 |
Algorithm | Best | Worst | SD | Selection Ratio | Removal Ratio |
PSO | 267 | 302 | 8.5006 | 73.5941 | 26.4058 |
WOA | 181 | 324 | 29.0923 | 79.1909 | 20.809 |
MFO | 270 | 304 | 10.8204 | 75.557 | 24.4429 |
GWO | 175 | 208 | 8.8317 | 50.1326 | 49.8673 |
FFA | 197 | 225 | 8.5230 | 56.3129 | 43.6870 |
IBFSA | 54 | 86 | 7.6461 | 17.9310 | 82.0689 |
Algorithm | Best | Worst | SD | Selection Ratio | Removal Ratio |
PSO | 1681 | 1773 | 2.5576 | 72.858 | 27.1419 |
WOA | 1156 | 1951 | 28.1438 | 72.3595 | 27.6404 |
MFO | 1669 | 1830 | 3.8661 | 74.4592 | 25.5407 |
GWO | 1128 | 1245 | 2.7217 | 49.8183 | 50.1816 |
FFA | 1299 | 1377 | 1.9534 | 56.2251 | 43.7748 |
IBFSA | 225 | 312 | 2.2832 | 11.2568 | 88.7431 |
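For reference, the Selection and Removal Ratio columns appear to follow directly from the average number of selected features over the total; a hypothetical check against the DS2 totals (2367 features), assuming the ratios are computed from the mean selected count:

```python
n_total = 2367                       # DS2 feature count from the table above
sel_ratio = 11.2568                  # IBFSA's reported selection ratio (%)
print(round(sel_ratio / 100 * n_total))  # ~266 features kept on average
print(round(100 - sel_ratio, 4))         # removal ratio: 88.7432%
```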
Algorithm | Accuracy (Best) | Accuracy (Mean) | Precision (Best) | Precision (Mean) | Recall (Best) | Recall (Mean) | F-measure (Best) | F-measure (Mean)
PSO | 79.5247 | 76.5082 | 78.9809 | 75.9353 | 85.3741 | 82.4489 | 81.5780 | 79.0423 |
WOA | 78.7934 | 77.0658 | 77.9874 | 76.0964 | 85.034 | 83.6054 | 81.0450 | 79.6699 |
MFO | 77.3309 | 76.4533 | 76.3975 | 75.4018 | 84.6939 | 83.4183 | 79.8700 | 79.2028 |
GWO | 77.6965 | 75.6307 | 77.3885 | 73.7182 | 88.7755 | 85.0850 | 80.5030 | 78.9576 |
FFA | 79.7075 | 76.1791 | 79.4212 | 74.6162 | 87.4150 | 84.4387 | 81.6520 | 79.2132 |
IBFSA | 83.6996 | 80.1190 | 87.3494 | 80.8904 | 88.9262 | 83.6409 | 85.3377 | 81.8920 |
Algorithm | Accuracy (Best) | Accuracy (Mean) | Precision (Best) | Precision (Mean) | Recall (Best) | Recall (Mean) | F-measure (Best) | F-measure (Mean)
PSO | 78.9762 | 77.1755 | 77.2871 | 75.3667 | 85.3741 | 83.1632 | 80.3226 | 79.0642 |
WOA | 79.5247 | 77.989 | 77.7429 | 75.7595 | 87.0748 | 83.8775 | 80.9135 | 79.6005 |
MFO | 79.159 | 77.5502 | 76.8519 | 75.0723 | 86.3946 | 84.3027 | 80.5825 | 79.4137 |
GWO | 77.5137 | 76.1152 | 75.3943 | 73.6087 | 89.1156 | 83.4524 | 80.4992 | 78.1753 |
FFA | 79.8903 | 77.0292 | 79.0323 | 74.6154 | 86.7347 | 83.7925 | 81.1258 | 78.9211 |
IBFSA | 80.9524 | 78.7912 | 79.6774 | 76.0975 | 94.2953 | 89.2953 | 84.2579 | 82.1315 |
Algorithm | Accuracy (Best) | Accuracy (Mean) | Precision (Best) | Precision (Mean) | Recall (Best) | Recall (Mean) | F-measure (Best) | F-measure (Mean)
PSO | 79.8903 | 75.6307 | 80.0654 | 76.9761 | 88.7755 | 78.1292 | 81.6667 | 77.3871 |
WOA | 79.159 | 76.8098 | 79.4118 | 75.5138 | 90.1361 | 84.4217 | 81.2102 | 79.6115 |
MFO | 77.5137 | 76.0603 | 78.5156 | 75.1362 | 90.8163 | 83.1632 | 80.3709 | 78.8295 |
GWO | 77.8793 | 75.4936 | 78.5441 | 73.9967 | 89.1156 | 84.3367 | 80.4314 | 78.6784 |
FFA | 79.3419 | 75.5393 | 78.6184 | 74.8233 | 90.4762 | 82.6700 | 81.5057 | 78.3134 |
IBFSA | 79.7075 | 77.5686 | 81.2287 | 76.6963 | 92.8571 | 83.8946 | 82.0350 | 80.0531 |
Algorithm | Accuracy (Best) | Accuracy (Mean) | Precision (Best) | Precision (Mean) | Recall (Best) | Recall (Mean) | F-measure (Best) | F-measure (Mean)
PSO | 76.782 | 73.7842 | 72.8814 | 70.4028 | 90.8163 | 88.5204 | 80.7284 | 78.4110 |
WOA | 76.416 | 74.6618 | 73.5043 | 71.6316 | 89.4558 | 87.5680 | 80.0000 | 78.7926 |
MFO | 76.051 | 74.6343 | 72.5762 | 71.4047 | 89.7959 | 88.0952 | 80.0000 | 78.8734 |
GWO | 76.416 | 73.6380 | 71.6535 | 69.5449 | 93.5374 | 90.7993 | 80.8889 | 78.7455 |
FFA | 76.234 | 74.4698 | 71.5847 | 70.5987 | 92.5170 | 90.0000 | 80.7122 | 79.1192 |
IBFSA | 76.5996 | 74.0859 | 72.3118 | 69.6321 | 93.8776 | 92.3129 | 80.7808 | 79.3395 |
Algorithm | Accuracy (Best) | Accuracy (Mean) | Precision (Best) | Precision (Mean) | Recall (Best) | Recall (Mean) | F-measure (Best) | F-measure (Mean)
PSO | 79.1590 | 76.5082 | 80.2768 | 75.8037 | 88.4354 | 82.9251 | 81.0631 | 79.1439 |
WOA | 78.2450 | 76.9652 | 77.7070 | 76.0817 | 85.7143 | 83.3843 | 80.5873 | 79.5568 |
MFO | 77.8793 | 76.5996 | 76.8750 | 75.2367 | 87.0748 | 84.1836 | 80.8847 | 79.4516 |
GWO | 77.1481 | 75.5210 | 76.0125 | 72.7785 | 89.7959 | 87.1088 | 80.6107 | 79.2690 |
FFA | 79.5247 | 76.0146 | 79.3548 | 73.6135 | 88.4354 | 86.4285 | 81.4570 | 79.4856 |
IBFSA | 84.9817 | 82.0330 | 83.1288 | 79.0333 | 96.3087 | 91.4933 | 86.8590 | 84.7629 |
Algorithm | Accuracy (Best) | Accuracy (Mean) | Precision (Best) | Precision (Mean) | Recall (Best) | Recall (Mean) | F-measure (Best) | F-measure (Mean)
PSO | 93.8849 | 91.6187 | 94.9721 | 92.7515 | 96.0674 | 94.2977 | 95.2646 | 93.5134 |
WOA | 93.1655 | 91.8345 | 94.3820 | 92.8152 | 96.6292 | 94.5786 | 94.7075 | 93.6820 |
MFO | 93.5252 | 92.3201 | 94.9153 | 93.2023 | 96.6292 | 94.9438 | 95.0276 | 94.0601 |
GWO | 94.2446 | 90.9172 | 96.0227 | 92.9304 | 96.0674 | 92.8932 | 95.4802 | 92.9027 |
FFA | 93.5252 | 91.4388 | 93.8889 | 92.7945 | 96.6292 | 93.9325 | 95.0276 | 93.3521 |
IBFSA | 93.1655 | 90.4676 | 95.4286 | 92.5492 | 94.9438 | 92.5842 | 94.6176 | 92.5574 |
Algorithm | Accuracy (Best) | Accuracy (Mean) | Precision (Best) | Precision (Mean) | Recall (Best) | Recall (Mean) | F-measure (Best) | F-measure (Mean)
PSO | 94.2446 | 92.2302 | 93.5484 | 90.621 | 97.7528 | 95.9269 | 95.6044 | 93.1861 |
WOA | 93.5252 | 92.5000 | 93.7500 | 91.1323 | 97.7528 | 96.0393 | 94.7368 | 93.5073 |
MFO | 93.8849 | 92.6978 | 93.4066 | 91.1925 | 97.191 | 96.3202 | 95.0549 | 93.6793 |
GWO | 93.1655 | 91.5287 | 94.7977 | 91.9839 | 97.7528 | 93.3988 | 94.7658 | 92.6582 |
FFA | 92.8058 | 91.7985 | 94.3503 | 91.8734 | 96.6292 | 94.3258 | 94.1176 | 93.0739 |
IBFSA | 92.8058 | 91.5468 | 94.8276 | 93.3384 | 93.2584 | 91.9663 | 93.7853 | 92.6419 |
Algorithm | Accuracy (Best) | Accuracy (Mean) | Precision (Best) | Precision (Mean) | Recall (Best) | Recall (Mean) | F-measure (Best) | F-measure (Mean)
PSO | 92.8058 | 90.4496 | 94.7977 | 92.8646 | 96.0674 | 92.1909 | 94.3820 | 92.5100 |
WOA | 92.4460 | 90.7014 | 94.3503 | 92.4501 | 97.1910 | 93.1460 | 94.0845 | 92.7617 |
MFO | 92.4460 | 90.7554 | 95.8824 | 93.2032 | 95.5056 | 92.3595 | 94.1828 | 92.7467 |
GWO | 92.8058 | 89.4964 | 95.4023 | 92.5473 | 96.6292 | 90.9550 | 94.5055 | 91.7095 |
FFA | 92.8058 | 90.5755 | 94.6429 | 92.4290 | 97.1910 | 92.9213 | 94.5355 | 92.6571 |
IBFSA | 92.4460 | 90.1798 | 95.3757 | 92.9065 | 94.3820 | 91.7135 | 94.0171 | 92.2804 |
Algorithm | Accuracy (Best) | Accuracy (Mean) | Precision (Best) | Precision (Mean) | Recall (Best) | Recall (Mean) | F-measure (Best) | F-measure (Mean)
PSO | 91.3669 | 88.4172 | 88.8889 | 86.6196 | 98.8764 | 96.9101 | 93.617 | 91.4698 |
WOA | 89.5683 | 87.8057 | 88.601 | 85.9397 | 98.8764 | 96.882 | 92.2667 | 91.0656 |
MFO | 92.446 | 88.7589 | 91.9786 | 87.2246 | 98.8764 | 96.6572 | 94.2466 | 91.6853 |
GWO | 88.1295 | 85.1978 | 86.4322 | 82.5656 | 98.8764 | 97.528 | 91.2467 | 89.4133 |
FFA | 90.2878 | 85.9712 | 88.3249 | 83.5081 | 98.8764 | 97.4157 | 92.8 | 89.9095 |
IBFSA | 83.0935 | 77.7338 | 79.638 | 74.7827 | 100 | 98.5674 | 88.2206 | 85.0241 |
Algorithm | Accuracy (Best) | Accuracy (Mean) | Precision (Best) | Precision (Mean) | Recall (Best) | Recall (Mean) | F-measure (Best) | F-measure (Mean)
PSO | 94.6043 | 92.7338 | 95.4802 | 93.4127 | 97.191 | 95.3932 | 95.8449 | 94.3868 |
WOA | 93.8849 | 92.7877 | 94.4444 | 93.2464 | 97.191 | 95.6741 | 95.2909 | 94.4405 |
MFO | 94.2446 | 93.0395 | 95 | 93.7031 | 97.191 | 95.5617 | 95.5801 | 94.6191 |
GWO | 93.5252 | 91.5287 | 95.9064 | 93.3714 | 96.6292 | 93.4269 | 94.9438 | 93.3876 |
FFA | 93.8849 | 91.6906 | 94.9721 | 93.0589 | 96.0674 | 94.0449 | 95.2646 | 93.5463 |
IBFSA | 97.1119 | 95.0541 | 97.1098 | 94.1285 | 99.4186 | 98.1613 | 97.7011 | 96.0932 |