
The novel coronavirus was first detected in Wuhan, China in December 2019, and the epidemic has been spreading ever since. As of 11 March 2021, 119,107,115 confirmed cases of COVID-19 had been reported worldwide, of which 21,040,712 were active infections. COVID-19 is transmitted through respiratory droplets produced when a person with the virus coughs, sneezes or talks. Given the severity of the pandemic, quick and cost-effective detection of COVID-19 is urgently needed.
It has recently been reported that artificial intelligence and machine learning have tremendous potential in the health care sector [1]. The SARS-CoV-2 virus in the respiratory tract is usually diagnosed using reverse transcription-polymerase chain reaction (RT-PCR) on naso-pharyngeal samples taken from the patient [2].
RT-PCR has a lower potential for contamination, and the severity of infection can also be estimated from it. RT-PCR has a specificity of 46% and a recall of 87% [3]. Although the test itself usually takes only a few hours, collecting and processing samples in bulk can take days before the result of a single sample is reported.
Although RT-PCR is widely used, it is less sensitive than computed tomography [4]. Chest X-ray is an imaging method for detecting COVID-19 that can be performed in wards reserved for patients with suspected infections, and portable radiology equipment is available for such patients. Chest imaging plays an important role in the early diagnosis and treatment of patients with COVID-19.
Chest radiography is a faster and comparatively inexpensive imaging method, and previous studies have demonstrated its role in the assessment of patients infected with COVID-19 [5]–[7]. The sudden and massive increase in workloads caused by the spread of the COVID-19 pandemic has required the development of additional tools to help manage patients. The algorithm established here is also beneficial where medical teams face a shortage of expertise.
Mask R-CNN is an important AI-based scheme which has previously been used in automatic nucleus segmentation [8], lung nodule detection and segmentation [9],[10], liver segmentation [11], automated blood cell counting [12] and multiorgan segmentation [13]. Face detection and segmentation [14], detection of oral disease [15], hand segmentation [16], segmentation of the optic nerve [17], segmentation of early gastric cancer [18],[19] and detection and classification of breast tumors [20] have also been performed using Mask R-CNN. However, to the best of our knowledge, the Mask R-CNN method has not previously been explored for the detection of COVID-19 from chest X-ray images.
Imaging in COVID-19 is performed by expert radiologists, whose role is to screen the images through visual observation and report the findings. Chest X-ray is one of the most commonly and widely applied imaging modalities. Healthcare centers and hospitals can benefit greatly from applying our proposed method in practice: it would ease the workflow, and because test results are obtained faster than with RT-PCR, the spread of the disease can be controlled more readily.
Detection methods also include the rapid antigen test, which provides immediate results but has lower sensitivity than RT-PCR. Various medical imaging techniques such as X-rays, computed tomography (CT) scans and magnetic resonance imaging (MRI) are used all over the world for the diagnosis of diseases, and deep learning-based AI techniques are used to ensure that the diagnosis is accurate [21],[22]. In this study, we assess the performance of an artificial intelligence (AI) system for the detection of COVID-19.
The dataset of images used in this study was collected from the COVID-19 image data compiled by Cohen et al. [23],[24]. The entire dataset of chest X-ray images is publicly available and can be obtained from their GitHub repository for further use. The dataset was first made public in February 2020 and has been growing continuously ever since. At the time of writing, it contained a total of 542 frontal chest X-ray images from 262 people worldwide, of which 408 are standard frontal PA/AP images and 134 are AP supine images [24]. The dataset also contains numerous clinical use cases and tools. The deep learning model is evaluated on these chest X-ray images.
Mask R-CNN is a deep neural network designed to solve the instance segmentation problem in machine learning and computer vision, and it is of enormous importance in medical image analysis. The model is commonly used for object detection and segmentation: not only does it put a bounding box around each target, but it also creates a mask and classifies the boxes based on the pixels inside them. It is an extension of the Faster R-CNN model.
The Mask R-CNN model consists of three primary components: the backbone, the Region Proposal Network (RPN) and RoIAlign. The architecture of the proposed deep learning model is shown in Figure 1. The backbone is a Feature Pyramid Network-style deep neural network that extracts multi-level image features; a ResNet forms the backbone of the Mask R-CNN model. The CNN backbones evaluated here are ResNet41, ResNet50, ResNet65 and ResNet101. ResNet50 consists of 48 convolution layers along with 1 MaxPool and 1 Average Pool layer, and requires 3.8 × 10⁹ floating-point operations. In this study, the RPN scans the input image with a sliding window and detects the infected regions. RoIAlign then examines the RoIs proposed by the RPN and extracts the corresponding feature maps from the backbone at the various locations; it is responsible for forming the precise segmentation masks on the images. The RoIPooling of Faster R-CNN is replaced by the more precise and accurate RoIAlign.
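The key difference between RoIAlign and RoIPooling is that RoIAlign reads the feature map at fractional coordinates via bilinear interpolation instead of snapping to the nearest integer cell. A minimal NumPy sketch of that sampling step (illustrative only, not the paper's implementation):

```python
import numpy as np

def bilinear_sample(fmap, y, x):
    """Sample a 2-D feature map at a fractional (y, x) location.

    Bilinear interpolation is the core operation that lets RoIAlign
    avoid the coordinate quantization used by RoIPooling.
    """
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, fmap.shape[0] - 1)
    x1 = min(x0 + 1, fmap.shape[1] - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * fmap[y0, x0] +
            (1 - wy) * wx       * fmap[y0, x1] +
            wy       * (1 - wx) * fmap[y1, x0] +
            wy       * wx       * fmap[y1, x1])

fmap = np.arange(16, dtype=float).reshape(4, 4)
# RoIPooling would snap (1.5, 1.5) to an integer cell; RoIAlign interpolates:
print(bilinear_sample(fmap, 1.5, 1.5))  # midpoint of cells 5, 6, 9, 10 -> 7.5
```

In the full layer this sampling is repeated at several points per output bin and the results are pooled, but the interpolation above is what removes the quantization error.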
The entire model was built using the deep learning mechanism described above; its main features are as follows. The model is capable of classifying objects into different classes, surrounding them with bounding boxes and creating a mask for each detected object. The multi-task loss function for each case is given by:

$$ L = L_{cls} + L_{bbx} + L_{mask} $$

where $L_{cls}$, $L_{bbx}$ and $L_{mask}$ are the classification loss, bounding-box regression loss and mask prediction loss respectively; the three components are combined so that minimizing $L$ jointly optimizes classification, localization and segmentation. The classification loss $L_{cls}$ of each anchor is the log loss of whether the area is a lung, calculated from:

$$ L_{cls} = -\frac{1}{N_{cls}} \sum_{i} \left[ p_i^{*} \log p_i + (1 - p_i^{*}) \log (1 - p_i) \right] $$

The bounding box regression loss is given by:

$$ L_{bbx} = \frac{1}{N_{reg}} \sum_{i} p_i^{*} \, \mathrm{smooth}_{L1}(t_i - t_i^{*}) $$

In the above expressions, $i$ is the index of an anchor, $p_i$ indicates the predicted probability that the anchor is one of the lungs, and the ground-truth label is denoted by $p_i^{*}$, which is 1 for a positive anchor and 0 otherwise; $t_i$ and $t_i^{*}$ are the predicted and ground-truth bounding-box offsets. The mask prediction loss is the average binary cross-entropy over the mask cells:

$$ L_{mask} = -\frac{1}{m^2} \sum_{1 \le i,j \le m} \left[ y_{ij} \log \hat{y}_{ij} + (1 - y_{ij}) \log (1 - \hat{y}_{ij}) \right] $$

where the label value of a cell (i, j) of a region of m × m size is given by $y_{ij}$ and $\hat{y}_{ij}$ is the predicted value of the same cell.
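As an illustration, the three loss terms (classification, bounding-box regression and mask prediction) can be evaluated on toy values as follows; the numbers here are invented for demonstration and are not from the training run:

```python
import numpy as np

def cls_loss(p, p_star):
    """Binary log loss of the anchor classification (lung / not lung)."""
    p, p_star = np.asarray(p, float), np.asarray(p_star, float)
    return -np.mean(p_star * np.log(p) + (1 - p_star) * np.log(1 - p))

def smooth_l1(d):
    """Smooth-L1 penalty used for bounding-box regression offsets."""
    d = np.abs(d)
    return np.where(d < 1, 0.5 * d ** 2, d - 0.5)

def bbox_loss(t, t_star, p_star):
    """Regression loss, counted only for positive anchors (p* = 1)."""
    per_anchor = smooth_l1(np.asarray(t) - np.asarray(t_star)).sum(axis=1)
    return np.mean(np.asarray(p_star) * per_anchor)

def mask_loss(y, y_hat):
    """Average binary cross-entropy over the m x m mask cells."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Toy example: two anchors, one positive box offset, a 2x2 mask.
L = (cls_loss([0.9, 0.2], [1, 0])
     + bbox_loss([[0.1, 0.0, 0.0, 0.0]], [[0.0, 0.0, 0.0, 0.0]], [1])
     + mask_loss([[1, 0], [0, 1]], [[0.8, 0.1], [0.2, 0.9]]))
print(L)  # total multi-task loss on the toy values
```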
First, the AI-based model had to be trained on data with known results. The chest X-ray images with the best quality were selected for training, and the images used were in JPG format. Images of different ages, genders and orientations were included in the training, testing and validation datasets.
The system was trained on a pneumonia dataset as well as a COVID-19 dataset [23]. This dataset includes 931 images, of which 534 were COVID-19 positive according to the RT-PCR test, 134 showed pneumonia and the rest were unspecified. The validation set consisted of 150 images (75 per label, equally divided between PA and AP views) that were used to compute performance during the training process [23],[24]. All of these images were obtained from an open-source database, as mentioned earlier, and patient anonymity was maintained.
The annotations were made by skilled pathologists using the VGG Image Annotator, an open-source image annotation tool used to mark regions in an image and create textual descriptions of those regions.
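The VGG Image Annotator exports its regions as JSON. The sketch below extracts polygon annotations from a VIA-style export; the key names follow VIA's export format, while the file contents and the "infected" label are invented for illustration:

```python
import json

def load_via_polygons(via_json_text):
    """Extract (filename, [(x, y), ...]) polygon pairs from a VIA export."""
    annotations = json.loads(via_json_text)
    polygons = []
    for entry in annotations.values():
        for region in entry.get("regions", []):
            shape = region["shape_attributes"]
            if shape.get("name") == "polygon":
                points = list(zip(shape["all_points_x"], shape["all_points_y"]))
                polygons.append((entry["filename"], points))
    return polygons

# Invented minimal export: one triangular region on one image.
sample = json.dumps({
    "xray1.jpg123": {
        "filename": "xray1.jpg",
        "regions": [{
            "shape_attributes": {
                "name": "polygon",
                "all_points_x": [10, 60, 35],
                "all_points_y": [20, 20, 70],
            },
            "region_attributes": {"label": "infected"},
        }],
    }
})
print(load_via_polygons(sample))
```

The polygon vertex lists obtained this way can then be rasterized into the per-instance binary masks that Mask R-CNN trains against.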
The whole setup was implemented in a Windows environment using an NVIDIA GTX 1080 4 GB GPU on a system with 8 GB RAM and an Intel Core i5 (7th generation) processor @ 2.20 GHz. The entire algorithm was implemented in the Python programming language.
The experiments on the deep learning model were performed to detect COVID-19 in chest X-ray images. The Mask R-CNN was trained for 100 epochs, and the trained neural network was used to classify X-ray images. The performance assessment of the methods is tabulated in Table 1, where Accuracy, Specificity, Precision and Recall are compared across the different methods as well as for the method used in this study [25]. The parameters used to evaluate the performance are:

Accuracy (AC) = (TP + TN) / (TP + TN + FP + FN)
Specificity (SP) = TN / (TN + FP)
Precision (PR) = TP / (TP + FP)
Recall (RC) = TP / (TP + FN)
F1-score (F1) = 2 × (PR × RC) / (PR + RC)

where TP, TN, FP and FN are the numbers of True Positives, True Negatives, False Positives and False Negatives respectively. The accuracy, specificity, recall, precision and F1-score for the proposed models are shown in Table 1. These metrics evaluate the performance of the deep learning model on the chest X-ray image dataset.
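These definitions translate directly into code. A small sketch computing all five metrics from confusion-matrix counts (the counts below are toy values, not the study's data):

```python
def metrics(tp, tn, fp, fn):
    """Compute the five evaluation metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)       # true-negative rate
    precision = tp / (tp + fp)         # positive predictive value
    recall = tp / (tp + fn)            # sensitivity / true-positive rate
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, specificity, precision, recall, f1

# Toy confusion matrix: 90 TP, 85 TN, 5 FP, 10 FN.
print(metrics(90, 85, 5, 10))
```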
Method | AC | SP | PR | RC | F1 |
ResNet41 | 93.16% | 89.03% | 94.63% | 87.26% | 94.54% |
ResNet50 | 96.98% | 97.36% | 96.60% | 97.32% | 96.93% |
ResNet65 | 94.35% | 91.79% | 96.48% | 89.35% | 95.74% |
ResNet101 | 95.23% | 92.64% | 94.35% | 87.62% | 97.63% |
To evaluate the Mask R-CNN model, 5-fold cross-validation was performed: in each iteration, one fold was held out to test the model while the remaining four folds were used for training. The performance was then assessed from the perspective of a binary classification problem.
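The fold rotation of 5-fold cross-validation can be sketched as index bookkeeping over the dataset (a generic sketch, assuming the images are already loaded into an indexable list):

```python
def k_fold_indices(n_samples, k=5):
    """Yield (test_indices, train_indices) for each of the k folds."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for fold in range(k):
        start = fold * fold_size
        # The last fold absorbs the remainder when n_samples % k != 0.
        stop = n_samples if fold == k - 1 else start + fold_size
        test = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield test, train

# Each sample is tested exactly once and trained on in the other folds.
for fold, (test, train) in enumerate(k_fold_indices(10, k=5), start=1):
    print(f"Fold-{fold}: test={test}, train size={len(train)}")
```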
The 5-fold cross-validation was performed on all the models, of which ResNet50 proved superior to the others. The Confusion Matrix (CM) was computed for the task of binary classification between COVID-19 and Not COVID-19. The CMs for all 5 folds are shown in Figure 2, and the metrics for all 5 folds together with their averages are given in Table 2. From Table 2, it is observed that the ResNet50 model obtained an average accuracy, specificity, precision, recall and F1-score of 96.98%, 97.36%, 96.60%, 97.32% and 96.93% respectively.
Fold Number | AC | SP | PR | RC | F1 |
Fold-1 | 97.55% | 97.10% | 98.0% | 97.12% | 97.55% |
Fold-2 | 98.0% | 99.0% | 97.0% | 98.97% | 97.97% |
Fold-3 | 98.70% | 97.40% | 100% | 97.46% | 98.71% |
Fold-4 | 93.75% | 96.50% | 91.0% | 96.92% | 93.57% |
Fold-5 | 96.90% | 96.80% | 97.0% | 96.80% | 96.89% |
Average | 96.98% | 97.36% | 96.60% | 97.32% | 96.93% |
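The averages in Table 2 can be reproduced directly from the per-fold rows; for example, for accuracy and specificity:

```python
# Per-fold accuracy and specificity, taken from Table 2.
accuracy    = [97.55, 98.0, 98.70, 93.75, 96.90]
specificity = [97.10, 99.0, 97.40, 96.50, 96.80]

mean_ac = sum(accuracy) / len(accuracy)
mean_sp = sum(specificity) / len(specificity)
print(f"AC = {mean_ac:.2f}%, SP = {mean_sp:.2f}%")  # AC = 96.98%, SP = 97.36%
```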
Among the proposed models, ResNet50 demonstrates promising accuracy in COVID-19 detection on chest X-ray images, and the evaluations indicate that the model is highly capable of accurately detecting the disease. Figure 3(a) shows the chest X-rays of patients, and Figure 3(b) shows the deep learning model's predictions with masks.
The biomedical images used for training the model are 640 × 480 pixels. In our proposed method, the infected lungs are returned with red masks and confined within bounding boxes; the lungs are correctly detected with no false alarms. To obtain better accuracy, experiments were performed extensively on the superior-quality images of the dataset.
The comparison of the different AI-based COVID-19 detection models is presented in Table 3. From Table 3, it can be seen that our method delivers an accuracy of 96.98% and a specificity of 97.36% in COVID-19 detection, which is superior to the accuracy of 94.92% and specificity of 92% of the SSD proposed by Saiz et al. [21]. Our model also outperforms DeTraC-ResNet18 proposed by Abbas et al. [22], COVIDX-Net proposed by Hemdan et al. [26], COVID-Net proposed by Wang and Wong [27] and VGG-19 proposed by Ioannis et al. [28]. Note that VGG-19 has a slightly better specificity than our Mask R-CNN model. In addition, our model is also found to be better when compared with the ResNet50 + SVM model proposed by Sethy and Behera [29] and the shallow Convolutional Neural Network (which contains four layers and fewer parameters for more efficient computation) proposed by Mukherjee et al. [30].
The proposed model can also be applied to distinguish COVID-19 pneumonia from other pulmonary diseases such as asthma and bronchitis. The proposed method can also be modified for detecting other abnormalities in the lungs. The advantage of this model is that it provides both bounding box and instance segmentation while the limitation is that larger datasets are required for accurate detection.
Model used | Number of cases taken to train model | AC | SP | Reference |
Mask R-CNN | 534 COVID-19 positive and 134 COVID-19 negative | 96.98% | 97.36% | This work |
DeTraC-ResNet18 | 116 COVID-19 positive and 80 COVID-19 negative | 95.12% | 91.87% | Abbas et al. [22] |
SSD | 100 COVID-19 positive and 887 COVID-19 negative | 94.92% | 92% | Saiz et al. [21] |
COVIDX-Net | 25 COVID-19 positive | 90% | - | Hemdan et al. [26] |
VGG-19 | 224 COVID-19 positive | 93.48% | 98.75% | Ioannis et al. [28] |
COVID-Net | 53 COVID-19 positive and 5526 COVID-19 negative | 92.4% | - | Wang and Wong [27] |
ResNet50 + SVM | 25 COVID-19 positive and 25 COVID-19 negative | 95.38% | 93.47% | Sethy and Behera [29] |
Shallow CNN | 130 COVID-19 positive and 51 COVID-19 negative | 96.92% | - | Mukherjee et al. [30] |
The AI-based method Mask R-CNN for the detection of COVID-19 using chest X-ray images as the primary dataset has been presented in detail. Mask R-CNN is found to be superior to other AI-based methods for detecting COVID-19. The deep learning model proposed in this study is capable not only of detecting but also of classifying COVID-19 infections from chest X-ray studies. The Mask R-CNN method presented here delivers a specificity of 97.36% and an accuracy of 96.98% and would therefore be a very effective tool in healthcare. Moreover, the Mask R-CNN method has potential applications in detecting other chest-related diseases.