
Abbreviations: AUC: Area under the curve; BCDR: Breast Cancer Digital Repository; CAD: Computer-aided detection/diagnosis; CADe: Computer-aided detection; CADx: Computer-aided diagnosis; CNN(s): Convolutional neural network(s); CT: Computed tomography; DCNN(s): Deep convolutional neural network(s); DDSM: Digital Database for Screening Mammography; ILD: Interstitial lung disease; LIDC: Lung Image Database Consortium; LIDC-IDRI: Lung Image Database Consortium image collection; LUNA16: Lung Nodule Analysis 2016; MRI: Magnetic resonance imaging; MIAS: Mammography Image Analysis Society; NELSON: Nederlands-Leuvens Longkanker Screenings Onderzoek; ODNN: Optimal deep neural network; PROMISE12: Prostate MRI Image Segmentation 2012; PFMP: Prostate Fused-MRI-Pathology; ROC: Receiver operating characteristic; T2W: T2-weighted; WBCD: Wisconsin Breast Cancer Dataset
Medical imaging has become indispensable for the detection and diagnosis of diseases, especially for the diagnosis of cancers in combination with biopsy, and has gradually become an important basis for precision medicine [1,2]. Currently, imaging techniques for medical applications are mainly based on X-rays, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET) and ultrasound [3].
However, with the development of science and technology and the growing use of medical imaging, manual interpretation and analysis of imaging data have gradually become challenging tasks [4,5]. Radiologists may misinterpret images because of inexperience or fatigue: lesions may be missed, producing false negative results, or non-lesions may be interpreted as lesions and benign lesions misinterpreted as malignant, producing false positive results [6,7,8,9,10,11,12,13]. According to statistics, the misdiagnosis rate caused by human error in medical image analysis can reach 10–30% [14]. Against this background, CAD systems can be very helpful tools for radiologists in medical image analysis.
The CAD system was originally developed for breast cancer screening from mammograms in the 1960s [15,16]. Nowadays, it is one of the most important areas of research in the fields of medical image analysis and radiomics. There are two important aspects in current CAD research: Computer-aided detection (CADe) and computer-aided diagnosis (CADx) [17]. CADe uses the computer output to determine the location of suspicious lesions, whereas CADx produces an output that characterizes the detected lesions. The workflow of a typical CAD system (shown in Figure 1) in medical image analysis can be divided into four steps: Image pre-processing, segmentation, feature extraction and selection, and lesion classification.
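At a glance, this four-step workflow can be wired together as in the following Python sketch; the function bodies are deliberately naive placeholders (global thresholding, hand-crafted features, a fixed decision rule), not an implementation of any cited system.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    # Step 1: normalize intensities to [0, 1].
    return (image - image.min()) / (image.max() - image.min() + 1e-8)

def segment(image: np.ndarray) -> np.ndarray:
    # Step 2: crude candidate segmentation by global thresholding
    # (real CAD systems use far more elaborate algorithms).
    return (image > image.mean()).astype(np.uint8)

def extract_features(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Step 3: simple hand-crafted features of the segmented region:
    # area, mean intensity, and maximum intensity inside the mask.
    region = image[mask == 1]
    return np.array([mask.sum(), region.mean(), region.max()])

def classify(features: np.ndarray) -> bool:
    # Step 4: placeholder decision rule; a trained classifier goes here.
    return features[1] > 0.5

img = preprocess(np.random.rand(128, 128))  # stand-in for a medical image
print(classify(extract_features(img, segment(img))))
```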
CAD systems are widely used for the detection and diagnosis of diseases in medical image analysis, such as breast cancer, lung cancer, prostate cancer, skin lesions, and Alzheimer's disease, as well as for image processing tasks such as bone suppression. The application of CAD systems can improve diagnostic accuracy, reduce time consumption, and ease radiologists' workloads [18,19,20,21,22,23,24].
Deep learning is a new technique that is overtaking traditional machine learning techniques and is increasingly being used in CAD systems [25]. Generally, features are extracted manually in machine learning, whereas in deep learning, feature extraction is a completely automatic process. In addition, machine learning typically captures simple features such as colors, edges, and textures, while deep learning learns hierarchical, compositional features during the training process.
Typically, deep learning methods can be divided into four categories: CNN-based methods, restricted Boltzmann machines (RBMs), autoencoders, and sparse coding. Recently, CNN-based methods have attracted increasing attention worldwide and have achieved promising results in the literature. A typical CNN framework (shown in Figure 2) is composed of one or more convolution layers and (optional) pooling layers, followed by one or more fully connected layers [26].
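As a minimal sketch of this layout, the following PyTorch model stacks one convolution layer, one pooling layer, and one fully connected layer; the channel counts, the two-class output, and the 64×64 grayscale input size are illustrative assumptions, not taken from any cited architecture.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """One convolution layer + one pooling layer + one fully connected layer."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale input: 1 channel
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),                 # 64x64 -> 32x32
        )
        self.classifier = nn.Linear(16 * 32 * 32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

logits = TinyCNN()(torch.randn(4, 1, 64, 64))  # a batch of 4 grayscale images
print(logits.shape)                            # torch.Size([4, 2])
```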
Many CNN-based models have been proposed since LeNet-5 [27], such as AlexNet [28], VGG-Net (VGG-16 and VGG-19) [29], GoogLeNet [30], ResNet [31], and SPP-Net [32], which focus on increasing the network depths and designing more flexible structures. Deep convolutional neural network (DCNN), as a newly emerging form of medical image analysis, allows the automatic extraction of features and supervised learning of large scale datasets, leading to quantitative clinical decisions.
The application of CNN-based methods to medical images differs considerably from their application to natural images [33]. On the one hand, a large-scale labeled dataset, such as ImageNet, is required for training and testing CNNs. On the other hand, medical images are usually grayscale rather than three-channel RGB. Moreover, large-scale medical image datasets are not always available, owing to the intensive labeling work and expert experience required.
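One common workaround for the channel mismatch when reusing networks pre-trained on RGB natural images is to replicate the single grayscale channel three times (the alternative is to replace the network's first convolution layer); a one-line PyTorch sketch:

```python
import torch

gray = torch.randn(8, 1, 224, 224)  # a batch of grayscale images
rgb_like = gray.repeat(1, 3, 1, 1)  # replicate the channel -> (8, 3, 224, 224)
```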
In this paper, the current state-of-the-art deep learning techniques used in CAD research, with a focus on CNN-based methods, are presented in section 2. A summary of openly available medical image databases and the most commonly used evaluation metrics in the literature is given in section 3. Challenges and future perspectives of CAD systems using CNN-based methods are summarized in section 4, followed by a brief conclusion.
Briefly speaking, conventional CAD systems consist of two different parts: Lesion detection and false-positive reduction. Lesion detection is primarily based on algorithms specific to the detection task, resulting in many candidate lesions. False-positive reduction is commonly based on traditional machine learning methods to reduce the false positive rate. However, even with these complicated and sophisticated programs, the general performance of conventional CAD systems is not good, thus hampering their widespread usage in clinical practice.
In contrast, deep learning techniques, particularly CNN-based methods, may provide a single-step solution for CAD. Additionally, the unique nature of transfer learning may accelerate the development of CAD systems for various diseases and different imaging modalities.
Early reports of CNN-based CAD systems for breast cancer [34], lung cancer [35] and Alzheimer's disease [36,37,38] have shown promising results regarding the performance in detecting and diagnosing diseases [39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60].
The primary goal of CADe is to increase the detection rate of diseases while reducing the false negative rate caused by observers' mistakes or fatigue. In this overview, medical image analysis tasks such as segmentation, identification, localization and detection are considered as CADe [39,40,41,42,43,49,50,54,55].
In 2019, Fujita et al. designed a novel deep neural network architecture for the detection of fibrillations and flutters [39]. As the most common clinical arrhythmias, fibrillations and flutters increase the risk of heart failure, dementia, and stroke. In their work, the proposed CNN could effectively detect arrhythmias without any preprocessing of the raw data.
With the purpose of identifying Parkinson's disease (PD) automatically, Luis et al. applied recurrence plots to map motor signals onto the image domain, which were then used to feed a CNN [40]. Experimental results showed a significant improvement over their previous works, with an average accuracy of over 87%.
Li et al. proposed an effective knowledge transfer method, that is, transfer learning, based on a small dataset from a local hospital and a large shared dataset from the Alzheimer's Disease Neuroimaging Initiative [41]. The detection accuracy of Alzheimer's disease increased by approximately 20% compared with a model trained only on the original small dataset. Since limited training data is a common challenge in medical image analysis, the authors provided a practical solution to this problem.
In 2018, Martin et al. developed a CADe system for ureteral stone identification in CT slice volumes [42]. Using a CNN that worked directly on the high-resolution CT volumes, the proposed method was evaluated on a large dataset from 465 patients with annotations performed by an expert radiologist. They achieved a sensitivity of 100% with an average of 2.68 false positives per patient.
Sajid et al. presented a DCNN architecture for brain tumor segmentation in MRI images [43]. The proposed network consisted of multiple neural networks connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data showed the effectiveness and superiority of their method.
In 2016, Shin et al. conducted experiments on thoraco-abdominal lymph node detection to explore how CNN performance varied with CNN architecture, dataset characteristics, and transfer learning [49]. They considered five different CNN architectures that had achieved state-of-the-art performance in various computer vision applications.
In 2015, Ronneberger et al. presented a network and training strategy for the segmentation of neuronal structures in electron microscopic stacks, which relied strongly on data augmentation [50]. The proposed network was trained with very few images and was superior to previously developed methods in terms of segmentation performance.
CADx not only involves the detection of suspicious lesions, but also the characterization and classification of the detected lesions. In this overview, medical image analysis tasks such as classification, characterization, recognition and diagnosis are considered as CADx [44,47,48,51,52,53,56,57,58,59].
In 2019, Ahmad et al. summarized the evidence for clinical applications of CADx and artificial intelligence in colonoscopy [51]. In the future, artificial intelligence-based software could analyze video colonoscopy not only to support lesion detection and characterization, but also to assess technical quality.
González-Díaz et al. presented a CAD system called DermaKNet for skin lesion diagnosis [52]. By incorporating the expert knowledge provided by dermatologists into the decision process, the authors aimed to overcome the traditional limitation of deep learning regarding the interpretability of the results. This work indicated that multi-task losses allow segmentation and diagnosis networks to be fused into end-to-end trainable architectures.
Jeyaraj et al. developed an automated, computer-aided oral cancer diagnosis system by investigating patients' hyperspectral images [53]. With the application of a regression-based partitioned algorithm, an accuracy of 94.5%, a sensitivity of 94.0%, and a specificity of 98.0% were obtained in their work. Since no expert knowledge was required, the proposed system could easily be deployed on a simple workbench in practice.
In 2018, Raghavendra et al. trained an eighteen-layer CNN to extract robust features from digital fundus images for the accurate diagnosis of glaucoma [54]. With a relatively small dataset, they obtained an accuracy of 98.13%, which demonstrated the robustness of the proposed CAD system. Similar to Fujita's work in [39], the authors also presented a novel CAD system for the automatic characterization of heart diseases [55]. The main difference between these works lay in the specific approaches used for feature extraction and classification.
Hosseini-Asl et al. proposed a three-dimensional CNN (3D-CNN) to improve the prediction of Alzheimer's disease, which could extract generic features from brain images, adapt to different domain datasets, and accurately classify subjects with an improved fine-tuning method [56]. Experimental results on the ADNI dataset demonstrated superior performance compared to other CNN-based methods and conventional classifiers.
Similarly, Farooq et al. used a DCNN-based pipeline for the diagnosis of Alzheimer's disease from MRI scans [57]. Since Alzheimer's disease is difficult to diagnose in elderly people and requires a highly discriminative feature representation for classification, deep learning techniques were well suited to this task. Experiments were performed on the ADNI dataset, reaching an accuracy of 98.8%.
In 2017, two optimized massive-training artificial neural network (MTANN) architectures and four distinct CNN architectures with different depths were used in [60] for lung nodule detection and identification. The results, with a sensitivity of 100% and 22.7 false positives per patient, showed the superior performance of the MTANN architectures compared to the CNN architectures.
In 2016, Anthimopoulos et al. adopted and evaluated a DCNN designed for the classification of interstitial lung disease (ILD) patterns [59]. To train and evaluate the scheme, they used a dataset of 14,696 image patches derived from 120 CT scans from different scanners and hospitals. In addition, their proposed method was the first DCNN designed specifically for this medical problem. The classification performance, with an accuracy of 85.5%, indicated that CNNs can be effectively used for analyzing lung patterns.
Some recent applications of CNN-based methods for CAD research are summarized in Table 1.
Reference | Year | Application | Method | Dataset | Result |
Fujita et al. [39] | 2019 | Fibrillations and flutters detection | DCNN | PhysioBank (PTB) dataset | Accuracy: 98.45% Sensitivity: 99.87% Specificity: 99.27% |
Luis et al. [40] | 2019 | Parkinson's disease identification | CNN | HandPD dataset | Accuracy: 87% |
González-Díaz et al. [52] | 2019 | Skin lesion diagnosis | DermaKNet | 2017 ISBI Challenge | Specificity: 95% AUC: 0.917 |
Jeyaraj et al. [53] | 2019 | Oral cancer CAD system | DCNN | TCIA, GDC | Accuracy: 94.5% Sensitivity: 94.0% Specificity: 98.0% |
Martin et al. [42] | 2018 | Ureteral stone identification | CNN | Clinical | Sensitivity: 100% FP/scan: 2.68 |
Sajid et al. [43] | 2018 | Brain tumors segmentation | DCNN | BRATS 2015 | — |
Hosseini-Asl et al. [56] | 2018 | Alzheimer's disease diagnosis | 3D-CNN | ADNI dataset | Sensitivity: 76% F1-score: 0.75 |
Farooq et al. [57] | 2017 | Multi-class classification of Alzheimer's disease | CNN | ADNI dataset | Accuracy: 98.88% |
Liao et al. [44] | 2017 | Lung nodules diagnosis | A modified U-Net | Kaggle Data Science Bowl of 2017 | Accuracy: 81.42% Recall: 85.62% |
Tulder et al. [58] | 2016 | Lung CT images classification | Convolutional RBM | ILD CT scans | Accuracy: 89.0% |
Anthimopoulos et al. [59] | 2016 | Lung pattern classification | DCNN | ILD CT scans | Accuracy: 85.5% |
Pratt et al. [47] | 2016 | Diabetic retinopathy diagnosis | CNN | Kaggle Data Science Bowl of 2016 | Accuracy: 75.0% Sensitivity: 95.0% |
Gao et al. [48] | 2016 | Lung CT attenuation patterns classification | CNN | ILD CT scans | Accuracy: 87.9% |
Shin et al. [49] | 2016 | Thoraco-abdominal lymph node (LN) detection | DCNN | CT scans | AUC: 0.93–0.95 |
Ronneberger et al. [50] | 2015 | Biomedical image segmentation | CNN | ISBI challenge | Accuracy: 92.03% |
For researchers, CNN-based methods are actively used for tasks such as classification, localization, segmentation and registration in medical image analysis. For clinicians and radiologists, it is not these tasks in isolation or simple combination, but their incorporation into a unified system, known as a CAD system, that plays a significant role in clinical applications.
According to a recent survey with bibliometric analysis, current CAD research covers a wide range of diseases [16]. In this section, the latest clinical applications of CNN-based methods for CAD research are introduced, including breast cancer diagnosis, lung nodule detection and prostate cancer localization.
Breast cancer is one of the most common cancers among women, and thousands of women suffer from it around the world every year [61]. It has been predicted that there will be 19.3 million new cancer cases worldwide by 2025 [62]. Early detection and diagnosis can significantly decrease the death rate of breast cancer.
There have been numerous studies investigating the application of CAD systems for breast cancer detection and diagnosis, which used various medical imaging modalities and CNN-based methods [63,64,65,66,67,68,69,70,71,72,73,74,75,76].
In 2019, Chiang et al. proposed a fast and effective breast cancer CAD system based on a 3D-CNN [63]. Evaluated on a test set of 171 tumors, the system achieved sensitivities of 95%, 90%, 85% and 80% at 14.03, 6.92, 4.91 and 3.62 false positives per patient (with six passes), respectively. The results indicated the feasibility of their method; however, the number of false positives at 100% sensitivity should be further reduced.
Samala et al. developed a DCNN for the classification of malignant and benign masses in digital breast tomosynthesis (DBT) [64]. This work demonstrated that multi-stage transfer learning could take advantage of the knowledge gained through source tasks from unrelated and related domains. Also, when the training sample size was limited, an additional stage of transfer learning was advantageous.
In 2018, Zhou et al. presented a segmentation-free method to classify benign and malignant breast tumors using CNNs [72]. With the proposed model trained on 540 images, an accuracy of 95.8%, a sensitivity of 96.2%, and a specificity of 95.7% were obtained, which is a promising result. Moreover, it was the first attempt to use CNN-based radiomics to automatically extract high-throughput features from shear-wave elastography (SWE) data for breast tumor classification.
Gao et al. compared one hand-crafted feature extractor and five transfer learning feature extractors based on deep learning for breast cancer histology images classification [73]. The average accuracy was improved to 82.90% when using the five transfer learning feature groups.
In 2017, Li et al. established a 3D-CNN to discriminate between benign and malignant breast tumors [74]. The results with an accuracy of 78.1%, a sensitivity of 74.4% and a specificity of 82.3% demonstrated that 3D-CNN methods could be a promising technology for breast cancer classification without manual feature extraction.
Kooi et al. provided a head-to-head comparison between the state-of-the-art algorithms in mammography CAD systems [75]. A reader study was also performed, indicating that there was no significant difference between the proposed network and radiologists in terms of the detection and diagnosis accuracy.
In 2016, Samala et al. designed a DCNN to differentiate microcalcification candidates detected during the prescreening stage of a CAD system for clustered microcalcifications [76]. As validation, the selected DCNN was compared with their previously designed CNN architectures. The AUCs of the CNN and DCNN were 0.89 and 0.93, respectively, a statistically significant difference (p < 0.05).
Some recent applications of CNN-based methods for breast cancer diagnosis are summarized in Table 2.
Reference | Year | Application | Method | Dataset | Result |
Chiang et al. [63] | 2019 | Breast ultrasound CAD system | 3D-CNN | Clinical | Sensitivity: 95.0% FP/scan: 14.03 |
Samala et al. [64] | 2019 | Breast masses classification | DCNN, transfer learning | DDSM | AUC: 0.85 ± 0.05 for single-stage transfer learning AUC: 0.91 ± 0.03 for multi-stage transfer learning |
Gao et al. [73] | 2018 | Breast cancer diagnosis | Shallow-Deep CNN (SD-CNN) | BCDR | Accuracy: 82.9% |
Zhou et al. [72] | 2018 | Breast tumors classification | CNN | Clinical | Accuracy: 95.8% Sensitivity: 96.2% Specificity: 95.7% |
Kooi et al. [75] | 2017 | Breast mammography CAD system | CNN | Clinical | AUC: 0.875–0.941 |
Becker et al. [65] | 2017 | Breast cancer detection | ANN | Clinical | AUC: 0.82 |
Li et al. [74] | 2017 | Breast tumors classification | 3D-CNN | Clinical | Accuracy: 78.1% Sensitivity: 74.4% Specificity: 82.3% AUC: 0.801 |
Zhou et al. [66] | 2017 | Breast tissue density classification | AlexNet | Clinical | Accuracy: 76% |
Kooi et al. [67] | 2017 | Masses discrimination | DCNNs | WBCD | AUC: 0.80 |
Ayelet et al. [68] | 2016 | Breast tumors detection and classification | Faster R-CNN | Clinical | Accuracy: 77% AUC: 0.72 |
Posada et al. [69] | 2016 | Breast cancer detection and diagnosis | AlexNet, VGGNet | MIAS | Accuracy: 60.01% and 64.52%, respectively |
Samala et al. [70] | 2016 | Digital breast tomosynthesis (DBT) CAD system | DCNN | Clinical | AUC: 0.90 |
Dhungel et al. [71] | 2016 | Masses classification | CNN | DDSM | Accuracy: 84.00% ± 4.00% |
Samala et al. [76] | 2016 | Breast CAD system | CNN, DCNN | Clinical | AUC: 0.89 and 0.93, respectively |
Lung cancer is one of the most frequent cancers and a leading cause of death all over the world. It was reported that there were approximately 1.8 million new cases of lung cancer globally in 2012 [77,78]. Early detection of lung cancer, which typically presents in the form of lung nodules, is an efficient way to improve the survival rate.
The objectives of the literature on lung CAD systems can be divided into two categories: Lung nodule detection and lung nodule classification [79,80,81,82,83,84,85,86,87,88,89,90,91]. The Lung Image Database Consortium (LIDC) database and the Lung Image Database Consortium image collection (LIDC-IDRI) are the most commonly used databases for the validation of experimental results.
In 2019, Shi et al. proposed a DCNN-based transfer learning method for false-positive reduction in lung nodule detection [80]. VGG-16 was adopted to extract discriminative features, and an SVM was used to classify lung nodules. A sensitivity of 87.2% with 0.39 false positives per scan was reached, which was higher than other methods.
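Shi et al.'s exact pipeline is not reproduced here, but the general pattern described above (a pre-trained VGG-16 as a frozen feature extractor feeding an SVM) can be sketched as follows; the random patches and labels are placeholders for real nodule candidates, and candidate extraction is omitted.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Pre-trained VGG-16 used as a frozen feature extractor (weights are downloaded).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

@torch.no_grad()
def vgg_features(batch: torch.Tensor) -> torch.Tensor:
    x = vgg.features(batch)      # convolutional feature maps
    x = vgg.avgpool(x)
    return torch.flatten(x, 1)   # (N, 25088) feature vectors

# Placeholder nodule-candidate patches, resized to 224x224 with 3 channels.
patches = torch.randn(16, 3, 224, 224)
labels = [0, 1] * 8              # 0 = false positive, 1 = true nodule

svm = SVC(kernel="rbf")
svm.fit(vgg_features(patches).numpy(), labels)
```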
Savitha et al. analyzed lung CT scan images using an optimal deep neural network (ODNN) and linear discriminant analysis (LDA) [83]. To detect and identify lung cancer, the authors used a combination of the ODNN and a modified gravitational search algorithm (MGSA). An accuracy of 94.56%, a sensitivity of 96.2% and a specificity of 94.2% were reported in the comparative results.
In 2018, Nishio et al. developed a CADx system to classify lung nodules into benign nodules, primary lung cancers, and metastatic lung cancers [84]. The proposed system was validated using different combinations of methods: Conventional machine learning classifiers, a DCNN-based method with transfer learning, and a DCNN-based method without transfer learning. In addition, they found that a larger input image size for training the DCNN improved the performance of lung nodule classification.
Dey et al. introduced a high-performance lung nodule classification method that combined a three-dimensional DCNN (3D-DCNN) with an ensemble method [89]. Compared to the shallow 3D-CNN architectures used in previous studies, the proposed 3D-DCNN could capture the features of spherical-shaped nodules more effectively.
In 2017, Anton et al. evaluated the effectiveness of a novel DCNN architecture for lung nodule malignancy classification [90]. The evaluation was based on the state-of-the-art ResNet architecture in their work. Further, the authors explored how the curriculum learning, transfer learning and varying network depth influenced the accuracy of malignancy classification.
In 2016, Li et al. designed a DCNN-based method for lung nodule classification, which had the advantages of automatic representation learning and strong generalization ability [91]. The DCNNs were trained on 62,492 region-of-interest (ROI) samples, including 40,772 nodules and 21,720 non-nodules, from the LIDC database.
Some recent applications of CNN-based methods for lung nodules detection are summarized in Table 3.
Reference | Year | Application | Method | Dataset | Result |
Shi et al. [80] | 2019 | Lung nodules detection | VGG-16 | CT scans | Sensitivity: 87.2% FP/scan: 0.39 |
Savitha et al. [83] | 2019 | Lung cancers classification | Optimal deep neural network (ODNN) | Clinical | Accuracy: 94.56% Sensitivity: 96.20% Specificity: 94.20% |
Zhao et al. [84] | 2018 | Lung nodules classification | LeNet, AlexNet | LIDC | Accuracy: 82.20% AUC: 0.877 |
Dey et al. [89] | 2018 | Lung nodules classification | 3D-DCNN | LUNA16 | Competition performance metric (CPM): 0.910 |
Nishio et al. [82] | 2018 | Lung cancer CAD system | DCNN | Clinical | Accuracy: 68% |
Anton et al. [90] | 2017 | Lung CT CAD system | ResNet | LIDC-IDRI | Sensitivity: 91.07% Accuracy: 89.90% |
Ding et al. [85] | 2017 | Lung CAD system | DCNNs | LUNA16 | Sensitivity: 94.60% FROC: 0.893 |
Dou et al. [86] | 2017 | Lung nodules detection | 3D-CNN | LUNA16 | Sensitivity: 90.50% FP/scan: 1.0 |
Cheng et al. [87] | 2016 | Lung lesions classification | OverFeat | LIDC | Sensitivity: 90.80% ± 5.30 |
Li et al. [91] | 2016 | Lung nodules classification | DCNN | LIDC | Sensitivity: 87.10% FP/scan: 4.62 |
Liu et al. [88] | 2016 | Lung nodules classification | Multi-view CNN (MV-CNN) | LIDC-IDRI | Error rate: 5.41% Sensitivity: 90.49% Specificity: 99.91% |
Hua et al. [79] | 2015 | Lung nodules classification | CNN | LIDC | Sensitivity: 73.30% Specificity: 78.7% |
Prostate cancer is one of the most common malignancies among men and remains the second leading cause of cancer death in men globally [92,93]. It has been predicted that there will be 1.7 million new cases by 2030. With early detection and diagnosis, roughly nine out of 10 men survive for at least five years.
As a newly emerging area of research, prostate cancer detection, localization and diagnosis using CNN-based methods in CAD systems is attracting more and more researchers [94,95,96,97,98,99,100,101,102,103,104,105,106,107].
In 2019, Li et al. presented a new region-based CNN (R-CNN) framework for the multi-task prediction of prostate cancer using an Epithelial Network Head and a Grading Network Head [95]. They achieved an accuracy of 99.07% and an average AUC of 0.998, state-of-the-art performance on the epithelial cell detection and Gleason grading tasks simultaneously. This work could help pathologists make diagnoses more efficiently in the near future.
Leng et al. designed a framework for the automatic identification of prostate cancer from colorimetric analysis of H&E- and IHC-stained histopathological specimens [96]. The methods introduced in their work could be modularly integrated into digital pathology frameworks for detecting prostate cancer on whole-slide histopathology images. In addition, the proposed methods could be extended naturally to other related cancers.
In 2018, Chen et al. demonstrated that state-of-the-art deep neural networks could be retrained quickly with the limited data provided by the PROSTATEx challenge [97]. They used Inception V3 and VGG-16, pre-trained on ImageNet, and obtained AUCs of 0.81 and 0.83, respectively. Also, combining the results of models trained with different image combinations could improve the classification performance.
Song et al. proposed an automatic approach based on a DCNN, inspired by VGG-Net, to differentiate prostate cancer from noncancerous tissue in multi-parametric MRI images using the PROSTATEx database [98]. Further, Wang et al. improved this network by modifying a term in the loss function during the back-propagation process [99].
In 2017, Rampun et al. proposed a prostate cancer CAD system and suggested a set of discriminative texture descriptors extracted from T2-weighted (T2W) MRI images [94]. To test and evaluate their method, the authors collected 418 samples from 45 patients and used a 9-fold cross-validation approach. Experimental results indicated that it was comparable to existing CAD systems using multimodality MRI.
Le et al. presented an automated method based on multimodal CNNs for two prostate cancer diagnostic tasks [106]. In the first phase, the proposed network classified cancerous and noncancerous tissues, while in the second phase, it differentiated clinically significant prostate cancer from indolent prostate cancer. The authors obtained promising results: a sensitivity of 89.85% and a specificity of 95.83% for prostate tissue classification, and a sensitivity of 100% and a specificity of 76.92% for prostate cancer characterization.
Yang et al. introduced an automated method for prostate cancer localization in multi-parametric MRI images and assessed the aggressiveness of detected lesions using multimodal multi-label CNNs [107]. Comprehensive evaluation demonstrated that the proposed method was superior to other networks in terms of extracting representative features.
Some recent applications of CNN-based methods for prostate cancer localization are summarized in Table 4.
Reference | Year | Application | Method | Dataset | Result |
Li et al. [95] | 2019 | Prostate cancer grading | R-CNN | Clinical | Accuracy: 89.4% |
Wang et al. [99] | 2018 | Clinically significant prostate cancer CAD system | Dual-path CNN | Clinical | Sensitivity: 89.78% FP/scan: 1 |
Ishioka et al. [100] | 2018 | Prostate cancer CAD system | DCNN | Clinical (two training datasets: n = 301, n = 34) | AUC: 0.645 and 0.636, respectively |
Song et al. [98] | 2018 | Prostate cancer CADx | DCNN | PROSTATEx | Sensitivity: 87% Specificity: 90.6% AUC: 0.944 |
Chen et al. [97] | 2018 | Clinically significant prostate cancer classification | Inception V3, VGG-16 | PROSTATEx | AUC: 0.81 and 0.83 |
Kohl et al. [101] | 2017 | Prostate cancer detection | Fully convolutional networks (FCNs) | Clinical | Sensitivity: 55% Specificity: 98% |
Yang et al. [102] | 2017 | Prostate cancer detection | Co-trained CNNs | Clinical | Sensitivity: 46.00%, 92.00% and 97.00% FP/scan: 0.1, 1 and 10 |
Yang et al. [107] | 2017 | Prostate cancer localization and characterization | Multimodal multi-label CNNs | Clinical | Sensitivity: 98.6% Specificity: 98.3% Accuracy: 98.5% AUC: 0.998 |
Jin et al. [103] | 2017 | Prostate cancer detection | CNN | PROMISE12 | AUC: 0.974 |
Wang et al. [104] | 2017 | Prostate cancer detection | DCNN | PFMP | AUC: 0.84 |
Le et al. [106] | 2017 | Prostate cancer diagnosis | Multimodal CNNs | Clinical | Sensitivity: 89.85% Specificity: 95.83% |
Liu et al. [105] | 2017 | Prostate cancer lesions classification | XmasNet | PROSTATEx | AUC: 0.84 |
A well-characterized repository plays an important role in the performance evaluation of a CAD system [108]. Most of the researchers collect clinical data from different hospitals, which is a time-consuming and intensive process. Besides, extra work is required to normalize these images. Therefore, it is necessary to develop a standard database for the effective and objective performance evaluation among CAD systems [109].
In this section, some open available medical image databases are introduced, which are commonly used for breast cancer diagnosis, lung nodule detection and prostate cancer localization in literature, respectively.
The Mammography Image Analysis Society (MIAS) database contains left and right breast images from 161 patients, with a total of 322 images including 208 normal, 63 benign and 51 malignant (abnormal) cases [110,111]. Each X-ray film is associated with medical information such as lesion location, image scale, and malignancy, annotated by experienced radiologists.
The Digital Database for Screening Mammography (DDSM) is the largest public breast image database, consisting of 2,620 cases with a total of 10,480 images, including two images of each breast [112,113]. Each case is associated with patient information such as age at the time of examination and subtlety ratings for abnormalities. Researchers have obtained satisfactory results using this database [114,115].
The Wisconsin Breast Cancer Dataset (WBCD) is publicly available from the UCI Machine Learning Repository and is used for the validation of various classification algorithms [116]. There are 569 instances with 32 attributes in this database, collected from fine needle aspirates (FNA) of human breast tissue.
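The diagnostic version of this dataset also ships with scikit-learn, which makes it convenient for quickly validating a classifier; a minimal sketch (the logistic regression baseline is an arbitrary choice for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 569 instances; scikit-learn exposes the 30 real-valued FNA features
# (the UCI file's 32 attributes include an ID column and the class label).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```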
The Breast Cancer Digital Repository (BCDR) is the first Portuguese digital mammogram database [117]. It currently contains a total of 1,010 cases, including digital content (3,703 digitized film mammography images) and associated metadata. Precise segmentations of identified lesions (manual contours made by medical specialists) are also provided. Two repositories are currently available in the public domain: One containing digitized film mammography, known as the BCDR-FM, and the other containing full-field digital mammography, known as the BCDR-DM. In addition, four benchmarking datasets representative of benign and malignant lesions are available for free download to registered users.
A brief summary of these databases is shown in Table 5.
Database | Image modality | No. of patients | No. of benign samples | No. of malignant samples | No. of normal samples | Link |
MIAS | X-rays | 161 | 63 | 51 | 208 | http://www.wiau.man.ac.uk/services/MIAS/MIASweb.html |
WBCD | FNA features | 569 | 357 | 212 | - | https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
DDSM | Mammography | 2,620 | 4,044 | 3,656 | 2,780 | http://marathon.csee.usf.edu/Mammography/Database.html
BCDR | Mammography | 1,010 | - | - | 3,703 | http://bcdr.inegi.up.pt/
The LIDC-IDRI database contains 1,018 CT scans from 1,010 patients, with a total of 244,527 images across several imaging modalities, including CT, digital radiography (DX), and computed radiography (CR) [118,119]. Each scan is associated with an XML file that records annotations such as nodule ID, non-nodule ID, and reading sessions, performed by four expert radiologists.
The Lung Nodule Analysis 2016 (LUNA16) dataset is one of the most commonly used databases for lung cancer detection and diagnosis [120]. There are 888 CT scans with a total of 272 lung images in this database, and each scan is associated with the location of lesions as well as the image size. It is worth mentioning that the CT scans in this database are taken from the LIDC-IDRI database, with nodules smaller than 3 mm excluded.
The Nederlands-Leuvens Longkanker Screenings Onderzoek (NELSON) database is usually employed to investigate lung nodule measurement, automatic detection and segmentation [121,122]. The ANODE09 database, which originates from the NELSON database, contains 55 anonymized thoracic CT scans [123].
The Japanese Society of Radiological Technology (JSRT) database has been used for various medical applications such as image pre-processing, image compression, CAD systems, and picture archiving and communication systems (PACS). Each sample in this database is associated with clinical information such as patient age, gender, benign or malignant status, and the degree of subtlety in the visual detection of nodules.
A brief summary of these databases is shown in Table 6.
Database | Image modality | No. of scans | No. of slices | No. of images | Link |
LIDC-IDRI | CT, DX, CR | 1,018 | ─ | 244,527 | https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI |
LUNA16 | CT | 888 | 1,084 | 272 | https://luna16.grand-challenge.org/download/ |
ANODE09 | CT | 55 | 451.5 | ─ | https://www.rug.nl/research/portal/datasets/nederlandsleuvens-longkanker-screenings-onderzoek-nelson.html |
JSRT | X-rays | ─ | ─ | 247 | http://db.jsrt.or.jp/eng.php |
The PROSTATEx Challenge focuses on the quantitative diagnostic classification of clinically significant prostate cancer [124]. This collection is a retrospective set of prostate MRI studies, including T2W, dynamic contrast-enhanced (DCE), and diffusion-weighted (DW) imaging. It contains 349 studies from 346 patients, with a total of 309,251 images acquired without an endorectal coil.
The Prostate MRI Image Segmentation 2012 (PROMISE12) challenge aims to compare interactive and (semi)-automatic segmentation algorithms of the prostate in MRI images. Patients with benign diseases, for example, benign prostatic hyperplasia, and prostate cancer are both covered in this database. Additionally, data is collected from multiple medical centers with multimodality MRI images for the purpose of robustness and generalization testing.
The Cancer Imaging Archive Prostate Fused-MRI-Pathology (PFMP) dataset comprises 28 prostate MRI studies with T1-weighted (T1W), T2W, DW, and DCE sequences, along with digitized histopathology images of the corresponding radical prostatectomy specimens, acquired on a 3.0T Siemens TrioTim [125]. The MRI scans also carry a mapping of the extent of prostate cancer.
The Prostate-3T dataset provides prostate transversal T2W MRI images acquired on a 3.0T Siemens TrioTim using only a pelvic phased-array coil, which is commonly used for prostate cancer detection [126]. It was released via TCIA as part of an ISBI challenge competition in 2013. There are 64 cases with a total of 1,258 images in this dataset.
A brief summary of these databases is shown in Table 7.
Database | Image modality | No. of cases | No. of images | Link |
PROSTATEx | MRI | 346 | 309,251 | https://wiki.cancerimagingarchive.net/display/Public/SPIE-AAPM-NCI+PROSTATEx+Challenges |
PFMP | MRI | 28 | 32,508 | https://pathology.cancerimagingarchive.net/pathdata/ |
PROMISE12 | MRI | 50 | ─ | https://promise12.grand-challenge.org/Download/ |
Prostate-3T | MRI | 64 | 1,258 | https://wiki.cancerimagingarchive.net/display/Public/Prostate-3T |
CAD system performance is evaluated by various metrics, such as accuracy, precision, sensitivity, specificity, F1-score, recall, true positive rate (TPR), false positive rate (FPR), dice coefficient, the receiver operating characteristic (ROC) curve and the area under the curve (AUC). The calculation formulas of the most commonly used evaluation metrics in the literature are summarized in Table 8, where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively.
Metric | Calculation Formula | |
Accuracy | (TP + TN) / (TP + TN + FP + FN) | Eq 1 |
Precision | TP / (TP + FP) | Eq 2 |
Sensitivity | TP / (TP + FN) | Eq 3 |
Specificity | TN / (TN + FP) | Eq 4 |
F1-score | 2 × Precision × Recall / (Precision + Recall) | Eq 5 |
Recall | TP / (TP + FN) | Eq 6 |
TPR | TP / (TP + FN) | Eq 7 |
FPR | FP / (FP + TN) | Eq 8 |
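As a concrete reference, these confusion-matrix metrics can be computed in a few lines of Python (the counts in the example call are made up):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard CAD evaluation metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)              # also called recall / TPR
    precision = tp / (tp + fp)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp),
        "f1_score": 2 * precision * sensitivity / (precision + sensitivity),
        "fpr": fp / (fp + tn),
        "dice": 2 * tp / (2 * tp + fp + fn),  # equals F1 for binary classification
    }

print(classification_metrics(tp=90, fp=10, tn=85, fn=15))
```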
Recently, more and more CAD systems have been developed for various diseases in medical image analysis. However, due to the complex structure of medical images and difficulty in establishing a standard library of biomedical signs, there are challenges in CAD research.
Firstly, the sample size with valid annotation is too small. On the one hand, it is costly and time-consuming for radiologists or clinicians to label medical images. On the other hand, the coverage of existing image libraries is not comprehensive. Secondly, the standardization of datasets and evaluation metrics is needed. Currently, most CAD systems are built on various openly available medical image databases, and there is no standard for performance evaluation, which makes correct and reliable comparisons among these systems very difficult. Thirdly, there are many difficulties in applying CAD systems in clinical practice. Since the daily workloads of radiologists and clinicians are heavy, and the interfaces of medical imaging systems are not open to the public, it seems impractical for the developed CAD systems to integrate seamlessly with the other systems used in hospitals.
Many different CNN architectures have been proposed or adopted for medical image analysis, typically for image segmentation and classification tasks [127,128].
In [129], Chen et al. proposed a 2D bridged U-Net for prostate segmentation. As a modified U-Net, it adopted the exponential ReLU as an alternative to the ReLU, and the Dice loss as one of its loss functions. In [130], Milletari et al. developed a V-Net for volumetric medical image segmentation. Despite the popularity of CNN-based methods, most of them could only process 2D images, while most medical image data used in clinical practice consist of 3D volumes.
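Mechanically, the step from slice-wise 2D processing to volumetric 3D processing is a one-line change in modern frameworks, though it multiplies the parameter and memory cost; a minimal PyTorch sketch (the shapes are arbitrary):

```python
import torch
import torch.nn as nn

conv2d = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # per-slice processing
conv3d = nn.Conv3d(1, 8, kernel_size=3, padding=1)  # whole-volume processing

slices = torch.randn(2, 1, 128, 128)       # (N, C, H, W)
volume = torch.randn(2, 1, 32, 128, 128)   # (N, C, D, H, W)
print(conv2d(slices).shape, conv3d(volume).shape)
```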
In [131], Hussain et al. implemented an automated brain tumor segmentation algorithm using a DCNN. State-of-the-art neural network optimization strategies, such as dropout and batch normalization, were used in their work. Besides, the authors adopted non-linear activations and inception modules to build a new ILinear nexus architecture.
Generally, these architectures mainly focus on reducing the parameter space, saving computational time and handling 3D modalities. How to choose a network architecture better suited to a given medical image analysis task requires further research.
As is well known, deep learning architectures require a large amount of training data. Moreover, most deep learning techniques, for example, CNN-based methods, require labeled data for supervised learning, which is difficult and time-consuming to obtain clinically. How to make the best use of limited training data and how to train deeper networks effectively remain open problems.
There are two widely used solutions in the literature that can partially address this problem. The first is data augmentation, which applies affine transformations such as translation, rotation, and scaling to generate more data from the existing data. The other is transfer learning, which has achieved promising results in medical image analysis [132,133]. The workflow of transfer learning is composed of two parts: Pre-training on a large labeled dataset (such as ImageNet) and fine-tuning on the target dataset.
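Both remedies amount to a few lines in practice; the sketch below shows typical affine augmentation with torchvision transforms and the fine-tuning step of transfer learning (freeze the pre-trained backbone, replace the final layer). The ResNet-18 backbone and two-class head are illustrative assumptions.

```python
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T

# Data augmentation: affine transformations applied to the existing images.
augment = T.Compose([
    T.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

# Transfer learning: start from ImageNet weights, fine-tune on the target task.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                 # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable head (2 classes)
```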
Despite the challenges associated with introducing deep learning methods and CAD systems into clinical settings, the promising results are too valuable to discard.
Deep learning techniques extract knowledge from big data and produce outputs that can be used for personalized treatment, which promotes the development of precision medicine. Unlike conventional medical treatment, precision medicine examines the finest molecular and genomic information, and medical staff make diagnostic decisions according to the subtle differences among patients.
With the development of big data and medical imaging, radiomics has come into being [134]. Using large numbers of medical images and feature-related algorithms, it aims to transform regions of interest into high-resolution feature maps. Standardized image acquisition, automated image analysis, radiomics of molecular images, and prognostic response evaluation are the key points of radiomics. At present, radiomics has been applied to the diagnosis, treatment, and prognosis of cancers such as breast cancer and lung cancer, and has achieved promising results in the literature.
In the future, medical image data will be linked more readily to non-imaging data in electronic medical records, such as gender, age, medical history and so on, known as imaging grouping. Deep learning techniques, when applied to electronic medical records, can help derive patient representations that may lead to predictions and augmentations of clinical decision support systems [135].
With the current rapid development of deep learning techniques, and CNN-based methods in particular, there are prospects for a more widespread application of CNN-based CAD systems in clinical practice. These techniques are not expected to replace radiologists in the foreseeable future, but they can facilitate the routine workflow, improve detection and diagnosis accuracy, reduce the probability of mistakes and errors, and enhance patient satisfaction.
In this paper, an overview of CNN-based methods and their applications in the field of CAD research is presented. It is concluded that CNN-based methods are increasingly being used in all sub-fields of medical image analysis, such as lesion segmentation, detection and classification. Despite their restrictions, data augmentation and transfer learning can be used effectively to deal with limited training data. Recent studies demonstrate that CNN-based methods in CAD research will greatly benefit the development of medical image analysis, and future directions may point towards radiomics, precision medicine and imaging grouping. This paper provides researchers in medical image analysis with a systematic picture of the CNN-based methods used in CAD research.
This work was supported in part by grants from National Natural Science Foundation of China (Grant No. 61303099).
All authors declare no conflict of interest in this paper.
Reference | Year | Application | Method | Dataset | Result |
Fujita et al. [39] | 2019 | Fibrillations and flutters detection | DCNN | Physiobank(PTB) dataset | Accuracy: 98.45% Sensitvity: 99.87% Specificity: 99.27% |
Luis et al. [40] | 2019 | Parkinson's disease identification | CNN | HandPD dataset | Accuracy: 87% |
Gonzxalez-Dxıaz et al. [52] | 2019 | Skin lesion diagnosis | DermaKNet | 2017 ISBI Challenge | Specificity: 95% AUC: 0.917 |
Jeyaraj et al. [53] | 2019 | Oral cancer CAD system | DCNN | TCIA, GDC | Accuracy: 94.5% Sensitivity: 94.0% Specificity: 98.0% |
Martin et al. [42] | 2018 | Ureteral stone identification | CNN | Clinical | Sensitivity: 100% FP/scan: 2.68 |
Sajid et al. [43] | 2018 | Brain tumors segmentation | DCNN | BRATS 2015 | — |
Hosseini-Asl et al. [56] | 2018 | Alzheimer's disease diagnosis | 3D-CNN | ADNI dataset | Sensitivity: 76% F1-score: 0.75 |
Farooq et al. [57] | 2017 | Multi-class classification of Alzheimer's disease | CNN | ADNI dataset | Accuracy: 98.88% |
Liao et al. [44] | 2017 | Lung nodules diagnosis | A modified U-Net | Kaggle Data Science Bowl of 2017 | Accuracy: 81.42% Recall: 85.62% |
Tulder et al. [58] | 2016 | Lung CT images classification | Convolutional RBM | ILD CT scans | Accuracy: 89.0% |
Anthimopoulos et al. [59] | 2016 | Lung pattern classification | DCNN | ILD CT scans | Accuracy: 85.5% |
Pratt et al. [47] | 2016 | Diabetic retinopathy diagnosis | CNN | Kaggle Data Science Bowl of 2016 | Accuracy: 75.0% Sensitivity: 95.0% |
Gao et al. [48] | 2016 | Lung CT attenuation patterns classification | CNN | ILD CT scans | Accuracy: 87.9% |
Shin et al. [49] | 2016 | Thoraco-abdominal lymph node(LN) detection | DCNN | CT scans | AUC: 0.93–0.95 |
Ronneberger et al. [50] | 2015 | Biomedical image segmentation | CNN | ISBI challenge | Accuracy: 92.03% |
Reference | Year | Application | Method | Dataset | Result |
Chiang et al. [63] | 2019 | Breast ultrasound CAD system | 3D-CNN | Clinical | Sensitivity: 95.0% FP/scan: 14.03 |
Samala et al. [64] | 2019 | Breast masses classification | DCNN, transfer learning | DDSM | AUC: 0.85 ± 0.05 for single-stage transfer learning AUC: 0.91 ± 0.03 for multi-stage transfer learning |
Gao et al. [73] | 2018 | Breast cancer diagnosis | Shallow-Deep CNN(SD-CNN) | BCDR | Accuracy: 82.9% |
Zhou et al. [72] | 2018 | Breast tumors classification | CNN | Clinical | Accuracy: 95.8% Sensitivity: 96.2% Specificity: 95.7% |
Kooi et al. [75] | 2017 | Breast mammography CAD system | CNN | Clinical | AUC: 0.875–0.941 |
Becker et al. [65] | 2017 | Breast cancer detection | ANN | Clinical | AUC: 0.82 |
Li et al. [74] | 2017 | Breast tumors classification | 3D-CNN | Clinical | Accuracy: 78.1% Sensitivity: 74.4% Specificity: 82.3% AUC: 0.801 |
Zhou et al. [66] | 2017 | Breast tissue density classification | AlexNet | Clinical | Accuracy: 76% |
Kooi et al. [67] | 2017 | Masses discrimination | DCNNs | WBCD | AUC: 0.80 |
Ayelet et al. [68] | 2016 | Breast tumors detection and classification | Faster R-CNN | Clinical | Accuracy: 77% AUC: 0.72 |
Posada et al. [69] | 2016 | Breast cancer detection and diagnosis | AlexNet, VGGNet | MIAS | Accuracy: 60.01% and 64.52%, respectively |
Samala et al. [70] | 2016 | Digital breast tomosynthesis(DBT) CAD system | DCNN | Clinical | AUC: 0.90 |
Dhungel et al. [71] | 2016 | Masses classification | CNN | DDSM | Accuracy: 84.00% ± 4.00% |
Samala et al. [76] | 2016 | Breast CAD system | CNN, DCNN | Clinical | AUC: 0.89 and 0.93, respectively |
Reference | Year | Application | Method | Dataset | Result |
Shi et al. [80] | 2019 | Lung nodules detection | VGG-16 | CT scans | Sensitivity: 87.2% FP/scan: 0.39 |
Savitha et al. [83] | 2019 | Lung cancers classification | Optimal deep neural network(ODNN) | Clinical | Accuracy: 94.56% Sensitivity: 96.20% Specificity: 94.20% |
Zhao et al. [84] | 2018 | Lung nodules classification | LeNet, AlexNet | LIDC | Accuracy: 82.20% AUC: 0.877 |
Dey et al. [89] | 2018 | Lung nodules classification | 3D-DCNN | LUNA16 | Competition performance metric(CPM): 0.910 |
Nishio et al. [82] | 2018 | Lung cancer CAD system | DCNN | Clinical | Accuracy: 68% |
Anton et al. [90] | 2017 | Lung CT CAD system | ResNet | LIDC-IDRI | Sensitivity: 91.07% Accuracy: 89.90% |
Ding et al. [85] | 2017 | Lung CAD system | DCNNs | LUNA16 | Sensitivity: 94.60% FROC: 0.893 |
Dou et al. [86] | 2017 | Lung nodules detection | 3D-CNN | LUNA16 | Sensitivity: 90.50% FP/scan: 1.0 |
Cheng et al. [87] | 2016 | Lung lesions classification | OverFeat | LIDC | Sensitivity: 90.80% ± 5.30 |
Li et al. [91] | 2016 | Lung nodules classification | DCNN | LIDC | Sensitivity: 87.10% FP/scan: 4.62 |
Liu et al. [88] | 2016 | Lung nodules classification | Multi-view CNN(MV-CNN) | LIDC-IDRI | Error rate: 5.41% Sensitivity: 90.49% Specificity: 99.91% |
Hua et al. [79] | 2015 | Lung nodules classification | CNN | LIDC | Sensitivity: 73.30% Specificity: 78.7% |
Reference | Year | Application | Method | Dataset | Result |
Li et al. [95] | 2019 | Prostate cancer grading | R-CNN | Clinical | Accuracy: 89.4% |
Wang et al. [99] | 2018 | Clinically significant prostate cancer CAD system | Dual-path CNN | Clinical | Sensitivity: 89.78% FP/scan: 1 |
Ishioka et al. [100] | 2018 | Prostate cancer CAD system | DCNN | Clinical (two training datasets: = 301, = 34) | AUC: 0.645 and 0.636, respectively |
Song et al. [98] | 2018 | Prostate cancer CADx | DCNN | PROSTATEx | Sensitivity: 87% Specificity: 90.6% AUC: 0.944 |
Chen et al. [97] | 2018 | Clinically significant prostate cancer classification | Inception V3, VGG-16 | PROSTATEx | AUC: 0.81 and 0.83 |
Kohl et al. [101] | 2017 | Prostate cancer detection | Fully convolutional networks (FCNs) | Clinical | Sensitivity: 55% Specificity: 98% |
Yang et al. [102] | 2017 | Prostate cancer detection | Co-trained CNNs | Clinical | Sensitivity: 46.00%, 92.00% and 97.00% FP/scan: 0.1, 1 and 10 |
Yang et al. [107] | 2017 | Prostate cancer localization and characterization | Multimodal multi-label CNNs | Clinical | Sensitivity: 98.6% Specificity: 98.3% Accuracy: 98.5% AUC: 0.998 |
Jin et al. [103] | 2017 | Prostate cancer detection | CNN | PROMISE12 | AUC: 0.974 |
Wang et al. [104] | 2017 | Prostate cancer detection | DCNN | PFMP | AUC: 0.84 |
Le et al. [106] | 2017 | Prostate cancer diagnosis | Multimodal CNNs | Clinical | Sensitivity: 89.85% Specificity: 95.83% |
Liu et al. [105] | 2017 | Prostate cancer lesions classification | XmasNet | PROSTATEx | AUC: 0.84 |
Database | Image modality | No. of patients | No. of benign samples | No. of malignant samples | No. of normal samples | Link |
MIAS | X-rays | 161 | 63 | 51 | 208 | http://www.wiau.man.ac.uk/services/MIAS/MIASweb.html |
WBCD | Mammography | 569 | 357 | 212 | - | http://marathon.csee.usf.edu/ Mammography/Database.html |
DDSM | Mammography | 2, 620 | 4, 044 | 3, 656 | 2, 780 | https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29 |
BCDR | Ultrasound | 1, 010 | - | - | 3, 703 | http://bcdr.inegi.up.pt/ |
Database | Image modality | No. of scans | No. of slices | No. of images | Link |
LIDC-IDRI | CT, DX, CR | 1, 018 | ─ | 244, 527 | https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI |
LUNA16 | CT | 888 | 1, 084 | 272 | https://luna16.grand-challenge.org/download/ |
ANODE09 | CT | 55 | 451.5 | ─ | https://www.rug.nl/research/portal/datasets/nederlandsleuvens-longkanker-screenings-onderzoek-nelson.html |
JSRT | X-rays | ─ | ─ | 247 | http://db.jsrt.or.jp/eng.php |
Database | Image modality | No. of cases | No. of images | Link |
PROSTATEx | MRI | 346 | 309, 251 | https://wiki.cancerimagingarchive.net/display/Public/SPIE-AAPM-NCI+PROSTATEx+Challenges |
PFMP | MRI | 28 | 32, 508 | https://pathology.cancerimagingarchive.net/pathdata/ |
PROMISE12 | MRI | 50 | ─ | https://promise12.grand-challenge.org/Download/ |
Prostate-3T | MRI | 64 | 1, 258 | https://wiki.cancerimagingarchive.net/display/Public/Prostate-3T |
Metric | Calculation Formula | |
Eq 1 | ||
Eq 2 | ||
Eq 3 | ||
Eq 4 | ||
Eq 5 | ||
Eq 6 | ||
Eq 7 | ||
Eq 8 |
Reference | Year | Application | Method | Dataset | Result |
Fujita et al. [39] | 2019 | Fibrillations and flutters detection | DCNN | Physiobank(PTB) dataset | Accuracy: 98.45% Sensitvity: 99.87% Specificity: 99.27% |
Luis et al. [40] | 2019 | Parkinson's disease identification | CNN | HandPD dataset | Accuracy: 87% |
González-Díaz et al. [52] | 2019 | Skin lesion diagnosis | DermaKNet | 2017 ISBI Challenge | Specificity: 95% AUC: 0.917 |
Jeyaraj et al. [53] | 2019 | Oral cancer CAD system | DCNN | TCIA, GDC | Accuracy: 94.5% Sensitivity: 94.0% Specificity: 98.0% |
Martin et al. [42] | 2018 | Ureteral stone identification | CNN | Clinical | Sensitivity: 100% FP/scan: 2.68 |
Sajid et al. [43] | 2018 | Brain tumors segmentation | DCNN | BRATS 2015 | — |
Hosseini-Asl et al. [56] | 2018 | Alzheimer's disease diagnosis | 3D-CNN | ADNI dataset | Sensitivity: 76% F1-score: 0.75 |
Farooq et al. [57] | 2017 | Multi-class classification of Alzheimer's disease | CNN | ADNI dataset | Accuracy: 98.88% |
Liao et al. [44] | 2017 | Lung nodules diagnosis | A modified U-Net | Kaggle Data Science Bowl of 2017 | Accuracy: 81.42% Recall: 85.62% |
Tulder et al. [58] | 2016 | Lung CT images classification | Convolutional RBM | ILD CT scans | Accuracy: 89.0% |
Anthimopoulos et al. [59] | 2016 | Lung pattern classification | DCNN | ILD CT scans | Accuracy: 85.5% |
Pratt et al. [47] | 2016 | Diabetic retinopathy diagnosis | CNN | Kaggle Data Science Bowl of 2016 | Accuracy: 75.0% Sensitivity: 95.0% |
Gao et al. [48] | 2016 | Lung CT attenuation patterns classification | CNN | ILD CT scans | Accuracy: 87.9% |
Shin et al. [49] | 2016 | Thoraco-abdominal lymph node (LN) detection | DCNN | CT scans | AUC: 0.93–0.95 |
Ronneberger et al. [50] | 2015 | Biomedical image segmentation | CNN | ISBI challenge | Accuracy: 92.03% |
Reference | Year | Application | Method | Dataset | Result |
Chiang et al. [63] | 2019 | Breast ultrasound CAD system | 3D-CNN | Clinical | Sensitivity: 95.0% FP/scan: 14.03 |
Samala et al. [64] | 2019 | Breast masses classification | DCNN, transfer learning | DDSM | AUC: 0.85 ± 0.05 for single-stage transfer learning AUC: 0.91 ± 0.03 for multi-stage transfer learning |
Gao et al. [73] | 2018 | Breast cancer diagnosis | Shallow-Deep CNN (SD-CNN) | BCDR | Accuracy: 82.9% |
Zhou et al. [72] | 2018 | Breast tumors classification | CNN | Clinical | Accuracy: 95.8% Sensitivity: 96.2% Specificity: 95.7% |
Kooi et al. [75] | 2017 | Breast mammography CAD system | CNN | Clinical | AUC: 0.875–0.941 |
Becker et al. [65] | 2017 | Breast cancer detection | ANN | Clinical | AUC: 0.82 |
Li et al. [74] | 2017 | Breast tumors classification | 3D-CNN | Clinical | Accuracy: 78.1% Sensitivity: 74.4% Specificity: 82.3% AUC: 0.801 |
Zhou et al. [66] | 2017 | Breast tissue density classification | AlexNet | Clinical | Accuracy: 76% |
Kooi et al. [67] | 2017 | Masses discrimination | DCNNs | WBCD | AUC: 0.80 |
Ayelet et al. [68] | 2016 | Breast tumors detection and classification | Faster R-CNN | Clinical | Accuracy: 77% AUC: 0.72 |
Posada et al. [69] | 2016 | Breast cancer detection and diagnosis | AlexNet, VGGNet | MIAS | Accuracy: 60.01% and 64.52%, respectively |
Samala et al. [70] | 2016 | Digital breast tomosynthesis (DBT) CAD system | DCNN | Clinical | AUC: 0.90 |
Dhungel et al. [71] | 2016 | Masses classification | CNN | DDSM | Accuracy: 84.00% ± 4.00% |
Samala et al. [76] | 2016 | Breast CAD system | CNN, DCNN | Clinical | AUC: 0.89 and 0.93, respectively |
Reference | Year | Application | Method | Dataset | Result |
Shi et al. [80] | 2019 | Lung nodules detection | VGG-16 | CT scans | Sensitivity: 87.2% FP/scan: 0.39 |
Savitha et al. [83] | 2019 | Lung cancer classification | Optimal deep neural network (ODNN) | Clinical | Accuracy: 94.56% Sensitivity: 96.20% Specificity: 94.20% |
Zhao et al. [84] | 2018 | Lung nodules classification | LeNet, AlexNet | LIDC | Accuracy: 82.20% AUC: 0.877 |
Dey et al. [89] | 2018 | Lung nodules classification | 3D-DCNN | LUNA16 | Competition performance metric (CPM): 0.910 |
Nishio et al. [82] | 2018 | Lung cancer CAD system | DCNN | Clinical | Accuracy: 68% |
Anton et al. [90] | 2017 | Lung CT CAD system | ResNet | LIDC-IDRI | Sensitivity: 91.07% Accuracy: 89.90% |
Ding et al. [85] | 2017 | Lung CAD system | DCNNs | LUNA16 | Sensitivity: 94.60% FROC: 0.893 |
Dou et al. [86] | 2017 | Lung nodules detection | 3D-CNN | LUNA16 | Sensitivity: 90.50% FP/scan: 1.0 |
Cheng et al. [87] | 2016 | Lung lesions classification | OverFeat | LIDC | Sensitivity: 90.80% ± 5.30% |
Li et al. [91] | 2016 | Lung nodules classification | DCNN | LIDC | Sensitivity: 87.10% FP/scan: 4.62 |
Liu et al. [88] | 2016 | Lung nodules classification | Multi-view CNN (MV-CNN) | LIDC-IDRI | Error rate: 5.41% Sensitivity: 90.49% Specificity: 99.91% |
Hua et al. [79] | 2015 | Lung nodules classification | CNN | LIDC | Sensitivity: 73.30% Specificity: 78.7% |
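Several of the lung studies above (e.g., Shi et al. [80] with VGG-16 and Anton et al. [90] with ResNet) start from an ImageNet-pretrained backbone and fine-tune it on nodule patches rather than training from scratch. The PyTorch sketch below illustrates that general transfer-learning recipe only; the choice of backbone, the frozen layers, and all hyperparameters are illustrative assumptions, not any cited paper's configuration.

```python
# Minimal transfer-learning sketch (PyTorch), assuming two-class nodule
# patches (benign vs. malignant). Architecture, frozen layers, and
# hyperparameters are illustrative only, not any cited paper's setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")    # ImageNet-pretrained backbone
for p in model.features.parameters():
    p.requires_grad = False                      # freeze convolutional layers
model.classifier[6] = nn.Linear(4096, 2)         # replace the 1000-way head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

# One illustrative optimization step on a random batch standing in for
# preprocessed 224x224 RGB nodule patches.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"training loss after one step: {loss.item():.4f}")
```

Freezing the convolutional layers protects the pretrained filters from being overwritten by the comparatively small medical training sets; multi-stage schemes such as the one reported by Samala et al. [64] extend this idea by fine-tuning in more than one stage.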
Reference | Year | Application | Method | Dataset | Result |
Li et al. [95] | 2019 | Prostate cancer grading | R-CNN | Clinical | Accuracy: 89.4% |
Wang et al. [99] | 2018 | Clinically significant prostate cancer CAD system | Dual-path CNN | Clinical | Sensitivity: 89.78% FP/scan: 1 |
Ishioka et al. [100] | 2018 | Prostate cancer CAD system | DCNN | Clinical (two training datasets: n = 301 and n = 34) | AUC: 0.645 and 0.636, respectively |
Song et al. [98] | 2018 | Prostate cancer CADx | DCNN | PROSTATEx | Sensitivity: 87% Specificity: 90.6% AUC: 0.944 |
Chen et al. [97] | 2018 | Clinically significant prostate cancer classification | Inception V3, VGG-16 | PROSTATEx | AUC: 0.81 and 0.83, respectively |
Kohl et al. [101] | 2017 | Prostate cancer detection | Fully convolutional networks (FCNs) | Clinical | Sensitivity: 55% Specificity: 98% |
Yang et al. [102] | 2017 | Prostate cancer detection | Co-trained CNNs | Clinical | Sensitivity: 46.00%, 92.00%, and 97.00% at 0.1, 1, and 10 FP/scan, respectively |
Yang et al. [107] | 2017 | Prostate cancer localization and characterization | Multimodal multi-label CNNs | Clinical | Sensitivity: 98.6% Specificity: 98.3% Accuracy: 98.5% AUC: 0.998 |
Jin et al. [103] | 2017 | Prostate cancer detection | CNN | PROMISE12 | AUC: 0.974 |
Wang et al. [104] | 2017 | Prostate cancer detection | DCNN | PFMP | AUC: 0.84 |
Le et al. [106] | 2017 | Prostate cancer diagnosis | Multimodal CNNs | Clinical | Sensitivity: 89.85% Specificity: 95.83% |
Liu et al. [105] | 2017 | Prostate cancer lesions classification | XmasNet | PROSTATEx | AUC: 0.84 |
Database | Image modality | No. of patients | No. of benign samples | No. of malignant samples | No. of normal samples | Link |
MIAS | X-rays | 161 | 63 | 51 | 208 | http://www.wiau.man.ac.uk/services/MIAS/MIASweb.html |
WBCD | Fine-needle aspirate (FNA) images | 569 | 357 | 212 | - | https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29 |
DDSM | Mammography | 2,620 | 4,044 | 3,656 | 2,780 | http://marathon.csee.usf.edu/Mammography/Database.html |
BCDR | Mammography, ultrasound | 1,010 | - | - | 3,703 | http://bcdr.inegi.up.pt/ |
Database | Image modality | No. of scans | No. of slices | No. of images | Link |
LIDC-IDRI | CT, DX, CR | 1,018 | ─ | 244,527 | https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI |
LUNA16 | CT | 888 | 1,084 | 272 | https://luna16.grand-challenge.org/download/ |
ANODE09 | CT | 55 | 451.5 | ─ | https://www.rug.nl/research/portal/datasets/nederlandsleuvens-longkanker-screenings-onderzoek-nelson.html |
JSRT | X-rays | ─ | ─ | 247 | http://db.jsrt.or.jp/eng.php |
Database | Image modality | No. of cases | No. of images | Link |
PROSTATEx | MRI | 346 | 309,251 | https://wiki.cancerimagingarchive.net/display/Public/SPIE-AAPM-NCI+PROSTATEx+Challenges |
PFMP | MRI | 28 | 32,508 | https://pathology.cancerimagingarchive.net/pathdata/ |
PROMISE12 | MRI | 50 | ─ | https://promise12.grand-challenge.org/Download/ |
Prostate-3T | MRI | 64 | 1,258 | https://wiki.cancerimagingarchive.net/display/Public/Prostate-3T |
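Most of the collections listed above distribute CT and MRI studies as DICOM series, so a typical first preprocessing step is reading a slice and converting its stored values to physical units. The sketch below is a minimal example using the pydicom package; the local file path is hypothetical, standing in for a slice downloaded from an archive such as LIDC-IDRI.

```python
# Minimal sketch: reading one downloaded CT slice with pydicom and converting
# its stored values to Hounsfield units. The path below is hypothetical,
# standing in for a file fetched from an archive such as LIDC-IDRI.
import numpy as np
import pydicom

ds = pydicom.dcmread("LIDC-IDRI-0001/slice_080.dcm")  # hypothetical local file

# CT pixels are stored as raw integers; the DICOM rescale tags map them
# to Hounsfield units (HU).
hu = ds.pixel_array.astype(np.float32) * float(ds.RescaleSlope) \
     + float(ds.RescaleIntercept)

# A common preprocessing step before feeding a CNN: clip to a lung window.
lung_window = np.clip(hu, -1000.0, 400.0)
print(ds.Modality, lung_window.shape, lung_window.min(), lung_window.max())
```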
Metric | Calculation Formula | Equation |
Accuracy | (TP + TN) / (TP + TN + FP + FN) | Eq 1 |
Sensitivity | TP / (TP + FN) | Eq 2 |
Specificity | TN / (TN + FP) | Eq 3 |
Precision | TP / (TP + FP) | Eq 4 |
Recall | TP / (TP + FN) | Eq 5 |
F1-score | 2 × (Precision × Recall) / (Precision + Recall) | Eq 6 |
Error rate | (FP + FN) / (TP + TN + FP + FN) | Eq 7 |
False positive rate | FP / (FP + TN) | Eq 8 |

Here TP, TN, FP, and FN denote the numbers of true positive, true negative, false positive, and false negative predictions, respectively.
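Because the surveyed studies report overlapping subsets of these metrics, it is convenient to derive all of them from a single confusion matrix. The Python sketch below assumes binary ground-truth labels and continuous prediction scores; the function name and the 0.5 threshold are illustrative choices, and scikit-learn's `roc_auc_score` supplies the AUC, which has no confusion-matrix formula.

```python
# Minimal sketch: deriving the metrics of Eqs 1-8 from one confusion matrix.
# Assumes binary ground-truth labels and continuous scores; the function name
# and the 0.5 threshold are illustrative choices.
import numpy as np
from sklearn.metrics import roc_auc_score

def cad_metrics(y_true, y_score, threshold=0.5):
    y_true = np.asarray(y_true, dtype=int)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)

    # Confusion-matrix counts
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    total = tp + tn + fp + fn

    sensitivity = tp / (tp + fn)   # Eq 2; identical to recall (Eq 5)
    precision = tp / (tp + fp)     # Eq 4
    return {
        "accuracy": (tp + tn) / total,                                        # Eq 1
        "sensitivity": sensitivity,                                           # Eq 2
        "specificity": tn / (tn + fp),                                        # Eq 3
        "precision": precision,                                               # Eq 4
        "recall": sensitivity,                                                # Eq 5
        "f1_score": 2 * precision * sensitivity / (precision + sensitivity),  # Eq 6
        "error_rate": (fp + fn) / total,                                      # Eq 7
        "fpr": fp / (fp + tn),                                                # Eq 8
        "auc": roc_auc_score(y_true, y_score),  # threshold-free, from the ROC curve
    }

# Toy example standing in for a nodule classifier's outputs
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3]
print(cad_metrics(labels, scores))
```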