Research article

The role of time delays in P53 gene regulatory network stimulated by growth factor

  • Received: 20 February 2020 Accepted: 21 May 2020 Published: 25 May 2020
  • In this paper, a delayed mathematical model for the P53-Mdm2 network is developed. The P53-Mdm2 network we study is triggered by growth factor rather than DNA damage, and the amount of DNA damage is taken to be zero. We study the influence of time delays, growth factor and other important chemical reaction rates on the dynamic behavior of the system. It is shown that the time delay is a critical factor whose length determines the period, amplitude and stability of the P53 oscillation. Furthermore, for several important chemical reaction rates, we obtain interesting results through numerical simulation. In particular, S (growth factor), k3 (rate constant for Mdm2p dephosphorylation), k10 (basal expression of PTEN) and k14 (rate constant for PTEN-induced Akt dephosphorylation) can undermine the dynamic behavior of the system to different degrees. These findings are expected to help elucidate the mechanisms of action of several carcinogenic and tumor-suppressor factors in humans under normal conditions.

    Citation: Changyong Dai, Haihong Liu, Fang Yan. The role of time delays in P53 gene regulatory network stimulated by growth factor[J]. Mathematical Biosciences and Engineering, 2020, 17(4): 3794-3835. doi: 10.3934/mbe.2020213




    Abbreviations: AUC: Area under the curve; BCDR: Breast Cancer Digital Repository; CAD: Computer-aided detection/diagnosis; CADe: Computer-aided detection; CADx: Computer-aided diagnosis; CNN(s): Convolutional neural network(s); CT: Computed tomography; DCNN(s): Deep convolutional neural network(s); DDSM: Digital Database for Screening Mammography; ILD: Interstitial lung disease; LIDC: Lung Image Database Consortium; LIDC-IDRI: Lung Image Database Consortium image collection; LUNA16: Lung Nodule Analysis 2016; MRI: Magnetic resonance imaging; MIAS: Mammography Image Analysis Society; NELSON: Nederlands-LeuvensLongkanker Screenings Onderzoek; ODNN: Optimal deep neural network; PROMISE12: Prostate MRI Image Segmentation 2012; PFMP: Prostate Fused-MRI-Pathology; ROC: Receiver operating characteristic; T2W: T2-weighted; WBCD: Wisconsin Breast Cancer Dataset

    Medical imaging has become indispensable for the detection and diagnosis of diseases, especially for the diagnosis of cancers in combination with a biopsy, and has gradually become an important basis for precision medicine [1,2]. Currently, imaging techniques for medical applications are mainly based on X-rays, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET) and ultrasound [3].

    However, with the development of science and technology and the expansion of medical imaging applications, manual interpretation and analysis of the data has gradually become a challenging task [4,5]. Radiologists may misinterpret diseases because of inexperience or fatigue, leading to missed diagnoses, that is, false negative results, or to non-lesions being interpreted as lesions and benign lesions being misinterpreted as malignant, that is, false positive results [6,7,8,9,10,11,12,13]. According to statistics, the misdiagnosis rate caused by human error in medical image analysis can reach 10–30% [14]. Against this background, CAD systems can be a very helpful tool for radiologists in medical image analysis.

    The CAD system was originally developed for breast cancer screening from mammograms in the 1960s [15,16]. Nowadays, it is one of the most important areas of research in the field of medical image analysis and radiomics. There are two important branches of current CAD research: computer-aided detection (CADe) and computer-aided diagnosis (CADx) [17]. CADe uses the computer output to determine the location of suspicious lesions. CADx, on the other hand, gives an output that characterizes the detected lesions. The workflow of a typical CAD system (shown in Figure 1) in medical image analysis can be divided into four steps: image pre-processing, segmentation, feature extraction and selection, and lesion classification.

    Figure 1.  Workflow of a typical CAD system.
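    As a rough illustration of this four-step workflow, the sketch below strings the stages together as placeholder Python functions. The function names, the thresholding-based segmentation and the NumPy image representation are illustrative assumptions for exposition only, not part of any specific CAD system described here.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Step 1: normalize intensities (placeholder for denoising, resampling, etc.)."""
    image = image.astype(np.float32)
    return (image - image.mean()) / (image.std() + 1e-8)

def segment(image: np.ndarray) -> np.ndarray:
    """Step 2: isolate candidate lesion regions (placeholder global threshold)."""
    return (image > image.mean()).astype(np.uint8)

def extract_features(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Step 3: compute simple descriptors of the candidate region (placeholder)."""
    region = image[mask > 0]
    if region.size == 0:
        return np.zeros(3, dtype=np.float32)
    return np.array([region.mean(), region.std(), float(mask.sum())])

def classify(features: np.ndarray) -> float:
    """Step 4: score the candidate as lesion vs. non-lesion (placeholder rule)."""
    return float(features[0] > 0.5)

def cad_pipeline(image: np.ndarray) -> float:
    processed = preprocess(image)
    mask = segment(processed)
    features = extract_features(processed, mask)
    return classify(features)

score = cad_pipeline(np.random.rand(128, 128))  # toy 128x128 image patch
```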

    CAD systems are widely used for the detection and diagnosis of diseases in medical image analysis, such as breast cancer, lung cancer, prostate cancer, bone suppression, skin lesions, and Alzheimer's disease. The application of CAD systems can improve the accuracy of diagnosis, reduce time consumption, and optimize the radiologists' workloads [18,19,20,21,22,23,24].

    Deep learning is a new technique that is overtaking traditional machine learning techniques and is increasingly being used in CAD systems [25]. Generally, features are extracted manually in traditional machine learning, while in deep learning, feature extraction is a completely automatic process. In addition, traditional machine learning typically relies on simple features such as colors, edges, and textures, while deep learning learns hierarchical or compositional features during the training process.

    Typically, deep learning methods can be divided into four categories: CNN-based methods, restricted Boltzmann machines (RBMs), autoencoders, and sparse coding. Recently, CNN-based methods have attracted increasing attention worldwide and have achieved promising results in the literature. A typical CNN framework (shown in Figure 2) is composed of one or more convolution layers and pooling layers (optional), followed by one or more fully connected layers [26].

    Figure 2.  A typical CNN framework for image classification.
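    A minimal PyTorch sketch of such a framework is given below. The layer sizes, the single-channel grayscale input and the two output classes are illustrative assumptions rather than a prescribed architecture.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Convolution + pooling blocks followed by fully connected layers."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # grayscale input patch
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                               # optional pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),  # assumes 64x64 input patches
            nn.ReLU(inplace=True),
            nn.Linear(128, num_classes),   # e.g., benign vs. malignant
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = SimpleCNN()(torch.randn(4, 1, 64, 64))  # a batch of 4 grayscale patches
```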

    Many CNN-based models have been proposed since LeNet-5 [27], such as AlexNet [28], VGG-Net (VGG-16 and VGG-19) [29], GoogLeNet [30], ResNet [31], and SPP-Net [32], which focus on increasing network depth and designing more flexible structures. The deep convolutional neural network (DCNN), as a newly emerging tool for medical image analysis, allows the automatic extraction of features and supervised learning on large-scale datasets, supporting quantitative clinical decisions.

    The application of CNN-based methods to medical images is quite different from their application to natural images [33]. On the one hand, a large-scale labeled dataset, for example ImageNet, is required for the training and testing of CNNs. On the other hand, medical images are usually grayscale rather than three-channel RGB. However, large-scale medical image datasets are not always available because of the intensive labeling work and expert experience required.
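    One common workaround for the channel mismatch, sketched below with PyTorch/torchvision, is either to replicate the single grayscale channel three times or to replace the first convolution of an ImageNet-pretrained network. The weight initialization shown (averaging the pretrained RGB filters) is one reasonable choice among several, and the snippet assumes torchvision >= 0.13 for the `weights` argument.

```python
import torch
import torch.nn as nn
from torchvision import models

# Option 1: replicate the grayscale channel so a pretrained network sees 3 channels.
gray_batch = torch.randn(8, 1, 224, 224)
rgb_like = gray_batch.repeat(1, 3, 1, 1)

# Option 2: swap in a 1-channel first convolution and reuse averaged RGB weights.
model = models.vgg16(weights="IMAGENET1K_V1")
old_conv = model.features[0]                       # original 3-channel convolution
new_conv = nn.Conv2d(1, old_conv.out_channels,
                     kernel_size=old_conv.kernel_size,
                     stride=old_conv.stride,
                     padding=old_conv.padding)
with torch.no_grad():
    new_conv.weight.copy_(old_conv.weight.mean(dim=1, keepdim=True))
    new_conv.bias.copy_(old_conv.bias)
model.features[0] = new_conv                       # model now accepts grayscale input
```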

    In this paper, the current state-of-the-art deep learning techniques used in CAD research, with a focus on CNN-based methods, are presented in Section 2. A summary of openly available medical image databases and the most commonly used evaluation metrics in the literature is given in Section 3. Challenges and future perspectives of CAD systems using CNN-based methods are summarized in Section 4, followed by a brief conclusion.

    Briefly, conventional CAD systems consist of two different parts: lesion detection and false-positive reduction. Lesion detection is primarily based on algorithms specific to the detection task, resulting in many candidate lesions. False-positive reduction is commonly based on traditional machine learning methods to reduce the false positive rate. However, even with these complicated and sophisticated programs, the overall performance of conventional CAD systems is modest, which hampers their widespread use in clinical practice.

    In contrast, deep learning techniques, particularly CNN-based methods, may provide a single-step solution for CAD. Additionally, the unique nature of transfer learning may accelerate the development of CAD systems for various diseases and different imaging modalities.

    Early reports of CNN-based CAD systems for breast cancer [34], lung cancer [35] and Alzheimer's disease [36,37,38] have shown promising results regarding the performance in detecting and diagnosing diseases [39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60].

    The primary goal of CADe is to increase the detection rate of diseases while reducing the false negative rate that may result from observers' mistakes or fatigue. In this overview, medical image analysis tasks such as segmentation, identification, localization and detection are considered as CADe [39,40,41,42,43,49,50,54,55].

    In 2019, Fujita et al. designed a novel deep neural network architecture for the detection of fibrillations and flutters [39]. As the most common arrhythmias in clinical practice, fibrillations and flutters increase the risk of heart failure, dementia, and stroke. In their work, the proposed CNN could effectively detect arrhythmias without preprocessing of the raw data.

    With the purpose of identifying Parkinson's disease (PD) automatically, Luis et al. applied the recurrence plots to map the motor signals onto the image domain, which were further used to feed a CNN [40]. Experimental results showed significant improvement compared to their previous works with an average accuracy of over 87%.

    Li et al. proposed an effective knowledge transfer method, that is, transfer learning, based on a small dataset from a local hospital and a large shared dataset from the Alzheimer's Disease Neuroimaging Initiative [41]. The detection accuracy of Alzheimer's disease increased by approximately 20% compared with that of a model trained only on the original small dataset. Since limited training data is a common challenge in medical image analysis, the authors thereby provided a practical solution to it.

    In 2018, Martin et al. developed a CADe system for the identification of ureteral stones in CT slice volumes [42]. Using a CNN that worked directly on the high-resolution CT volumes, the proposed method was evaluated on a large dataset from 465 patients with annotations performed by an expert radiologist. Finally, they achieved a sensitivity of 100% and an average of 2.68 false positives per patient.

    Sajid et al. presented a DCNN architecture for brain tumor segmentation in MRI images [43]. The proposed network consisted of multiple neural networks connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data showed the effectiveness and superiority of their method.

    In 2016, Shin et al. conducted experiments on thoraco-abdominal lymph node detection to explore how CNN performance changes with CNN architecture, dataset characteristics, and transfer learning [49]. They considered five different CNN architectures that had achieved state-of-the-art performance in various computer vision applications.

    In 2015, Ronneberger et al. presented a network and training strategy for the segmentation of neuronal structures in electron microscopy stacks, which relied strongly on data augmentation [50]. The proposed network was trained with very few images and was superior to previously developed methods in terms of segmentation performance.

    CADx not only involves the detection of suspicious lesions, but also the characterization and classification of the detected lesions. In this overview, medical image analysis tasks such as classification, characterization, recognition and diagnosis are considered as CADx [44,47,48,51,52,53,56,57,58,59].

    In 2019, Ahmad et al. summarized the evidence for clinical applications of CADx and artificial intelligence in colonoscopy [51]. In the future, artificial intelligence-based software could analyze colonoscopy video not only to support lesion detection and characterization, but also to assess technical quality.

    González-Díaz et al. presented a CAD system called DermaKNet for skin lesion diagnosis [52]. By incorporating the expert knowledge provided by dermatologists into the decision process, the authors aimed to overcome the traditional limitation of deep learning regarding the interpretability of the results. This work indicated that multi-task losses would allow segmentation and diagnosis networks to be fused into end-to-end trainable architectures.

    Jeyaraj et al. developed an automated, computer-aided oral cancer diagnosis system based on patients' hyperspectral images [53]. With the application of a regression-based partitioned algorithm, an accuracy of 94.5%, a sensitivity of 94.0%, and a specificity of 98.0% were obtained in their work. Since no expert knowledge was required, the proposed system could easily be deployed on a simple workbench in practice.

    In 2018, Raghavendra et al. trained an eighteen-layer CNN to extract robust features from digital fundus images for the accurate diagnosis of glaucoma [54]. With a relatively small dataset, they obtained an accuracy of 98.13%, which demonstrated the robustness of the proposed CAD system. Similar to Fujita's work in [39], the authors also presented a novel CAD system for the automatic characterization of heart diseases [55]. The main difference between their works lay in the specific approaches used for feature extraction or classification.

    Hosseini-Asl et al. proposed a three-dimensional CNN (3D-CNN) to improve the prediction of Alzheimer's disease, which could capture generic features from brain images, adapt to datasets from different domains, and accurately classify subjects with an improved fine-tuning method [56]. Experimental results on the ADNI dataset demonstrated superior performance compared to other CNN-based methods and conventional classifiers.

    Similarly, Farooq et al. used a DCNN-based pipeline for the diagnosis of Alzheimer's disease from MRI scans [57]. Since diagnosing Alzheimer's disease in elderly people is quite difficult and requires a highly discriminative feature representation for classification, deep learning techniques are well suited to this task. Experiments were performed on the ADNI dataset, yielding an accuracy of 98.8%.

    In 2017, two optimized massive-training artificial neural network (MTANN) architectures and four distinct CNN architectures with different depths were used in [60] for lung nodule detection and identification. The results, with a sensitivity of 100% and 22.7 false positives per patient, showed the superior performance of the MTANN architectures compared to the CNN architectures.

    In 2016, Anthimopoulos et al. adopted and evaluated a DCNN designed for the classification of interstitial lung disease (ILD) patterns [59]. To train and evaluate the scheme, they used a dataset of 14,696 image patches derived from 120 CT scans acquired with different scanners at different hospitals. In addition, their proposed method was the first DCNN designed for this specific medical problem. The classification performance, with an accuracy of 85.5%, indicated that CNNs can be effectively used for analyzing lung patterns.

    Some recent applications of CNN-based methods for CAD research are summarized in Table 1.

    Table 1.  Recent applications of CNN-based methods for CAD research.
    Reference Year Application Method Dataset Result
    Fujita et al. [39] 2019 Fibrillations and flutters detection DCNN Physiobank(PTB) dataset Accuracy: 98.45%
    Sensitivity: 99.87%
    Specificity: 99.27%
    Luis et al. [40] 2019 Parkinson's disease identification CNN HandPD dataset Accuracy: 87%
    González-Díaz et al. [52] 2019 Skin lesion diagnosis DermaKNet 2017 ISBI Challenge Specificity: 95%
    AUC: 0.917
    Jeyaraj et al. [53] 2019 Oral cancer CAD system DCNN TCIA, GDC Accuracy: 94.5%
    Sensitivity: 94.0%
    Specificity: 98.0%
    Martin et al. [42] 2018 Ureteral stone identification CNN Clinical Sensitivity: 100%
    FP/scan: 2.68
    Sajid et al. [43] 2018 Brain tumors segmentation DCNN BRATS 2015
    Hosseini-Asl et al. [56] 2018 Alzheimer's disease diagnosis 3D-CNN ADNI dataset Sensitivity: 76%
    F1-score: 0.75
    Farooq et al. [57] 2017 Multi-class classification of Alzheimer's disease CNN ADNI dataset Accuracy: 98.88%
    Liao et al. [44] 2017 Lung nodules diagnosis A modified U-Net Kaggle Data Science Bowl of 2017 Accuracy: 81.42%
    Recall: 85.62%
    Tulder et al. [58] 2016 Lung CT images classification Convolutional RBM ILD CT scans Accuracy: 89.0%
    Anthimopoulos et al. [59] 2016 Lung pattern classification DCNN ILD CT scans Accuracy: 85.5%
    Pratt et al. [47] 2016 Diabetic retinopathy diagnosis CNN Kaggle Data Science Bowl of 2016 Accuracy: 75.0%
    Sensitivity: 95.0%
    Gao et al. [48] 2016 Lung CT attenuation patterns classification CNN ILD CT scans Accuracy: 87.9%
    Shin et al. [49] 2016 Thoraco-abdominal lymph node(LN) detection DCNN CT scans AUC: 0.93–0.95
    Ronneberger et al. [50] 2015 Biomedical image segmentation CNN ISBI challenge Accuracy: 92.03%


    For researchers, CNN-based methods are actively used for tasks such as classification, localization, segmentation and registration in medical image analysis. For clinicians and radiologists, however, it is not these tasks in isolation or in combination, but their incorporation into a unified system, known as the CAD system, that plays a significant role in clinical applications.

    According to a recent survey with bibliometric analysis, current CAD research covers a wide range of diseases [16]. In this section, the latest clinical applications of CNN-based methods for CAD research are introduced, including breast cancer diagnosis, lung nodule detection and prostate cancer localization.

    Breast cancer is one of the most common cancers among women, and a great many women around the world suffer from it every year [61]. It has also been predicted that there will be 19.3 million new cancer cases worldwide by 2025 [62]. Early detection and diagnosis can significantly decrease the death rate of breast cancer.

    There have been numerous studies investigating the application of CAD systems for breast cancer detection and diagnosis, which used various medical imaging modalities and CNN-based methods [63,64,65,66,67,68,69,70,71,72,73,74,75,76].

    In 2019, Chiang et al. proposed a fast and effective breast cancer CAD system based on 3D-CNN [63]. On evaluation with a test set of 171 tumors, the authors achieved sensitivities of 95%, 90%, 85% and 80% at 14.03, 6.92, 4.91 and 3.62 false positives per patient (with six passes), respectively. The results indicated the feasibility of their methods, however, the number of false positives at 100% sensitivity should be further reduced.

    Samala et al. developed a DCNN for the classification of malignant and benign masses in digital breast tomosynthesis (DBT) [64]. This work demonstrated that multi-stage transfer learning could take advantage of the knowledge gained through source tasks from unrelated and related domains. Also, when the training sample size was limited, an additional stage of transfer learning was advantageous.

    In 2018, Zhou et al. presented a segmentation-free method to classify benign and malignant breast tumors using CNNs [72]. With the proposed model trained on 540 images, an accuracy of 95.8%, a sensitivity of 96.2%, and a specificity of 95.7% were obtained, which was a promising result. Besides, it was the first attempt to use CNN-based radiomics to automatically extract high-throughput features from shear-wave elastography (SWE) data for breast tumor classification.

    Gao et al. compared one hand-crafted feature extractor and five transfer learning feature extractors based on deep learning for breast cancer histology images classification [73]. The average accuracy was improved to 82.90% when using the five transfer learning feature groups.

    In 2017, Li et al. established a 3D-CNN to discriminate between benign and malignant breast tumors [74]. The results with an accuracy of 78.1%, a sensitivity of 74.4% and a specificity of 82.3% demonstrated that 3D-CNN methods could be a promising technology for breast cancer classification without manual feature extraction.

    Kooi et al. provided a head-to-head comparison between the state-of-the-art algorithms in mammography CAD systems [75]. A reader study was also performed, indicating that there was no significant difference between the proposed network and radiologists in terms of the detection and diagnosis accuracy.

    In 2016, Samala et al. designed a DCNN to differentiate microcalcification candidates detected during the prescreening stage of a CAD system for clustered microcalcifications [76]. For validation, the selected DCNN was compared with their previously designed CNN architectures; the AUCs of the CNN and DCNN were 0.89 and 0.93, respectively, a statistically significant difference (p < 0.05).

    Some recent applications of CNN-based methods for breast cancer diagnosis are summarized in Table 2.

    Table 2.  Recent applications of CNN-based methods for breast CAD systems.
    Reference Year Application Method Dataset Result
    Chiang et al. [63] 2019 Breast ultrasound CAD system 3D-CNN Clinical Sensitivity: 95.0%
    FP/scan: 14.03
    Samala et al. [64] 2019 Breast masses classification DCNN, transfer learning DDSM AUC: 0.85 ± 0.05 for
    single-stage transfer learning
    AUC: 0.91 ± 0.03 for multi-stage
    transfer learning
    Gao et al. [73] 2018 Breast cancer diagnosis Shallow-Deep CNN(SD-CNN) BCDR Accuracy: 82.9%
    Zhou et al. [72] 2018 Breast tumors classification CNN Clinical Accuracy: 95.8%
    Sensitivity: 96.2%
    Specificity: 95.7%
    Kooi et al. [75] 2017 Breast mammography CAD system CNN Clinical AUC: 0.875–0.941
    Becker et al. [65] 2017 Breast cancer detection ANN Clinical AUC: 0.82
    Li et al. [74] 2017 Breast tumors classification 3D-CNN Clinical Accuracy: 78.1%
    Sensitivity: 74.4%
    Specificity: 82.3%
    AUC: 0.801
    Zhou et al. [66] 2017 Breast tissue density classification AlexNet Clinical Accuracy: 76%
    Kooi et al. [67] 2017 Masses discrimination DCNNs WBCD AUC: 0.80
    Ayelet et al. [68] 2016 Breast tumors detection and classification Faster R-CNN Clinical Accuracy: 77%
    AUC: 0.72
    Posada et al. [69] 2016 Breast cancer detection and diagnosis AlexNet, VGGNet MIAS Accuracy: 60.01% and 64.52%,
    respectively
    Samala et al. [70] 2016 Digital breast tomosynthesis(DBT) CAD system DCNN Clinical AUC: 0.90
    Dhungel et al. [71] 2016 Masses classification CNN DDSM Accuracy: 84.00% ± 4.00%
    Samala et al. [76] 2016 Breast CAD system CNN, DCNN Clinical AUC: 0.89 and 0.93,
    respectively


    Lung cancer is one of the most frequent cancers and a leading cause of death all over the world. It was reported that there were approximately 1.8 million new cases of lung cancer globally in 2012 [77,78]. Early detection of lung cancer, which typically presents in the form of lung nodules, is an efficient way to improve the survival rate.

    The literature on lung CAD systems can be divided into two categories of objectives: lung nodule detection and classification [79,80,81,82,83,84,85,86,87,88,89,90,91]. The Lung Image Database Consortium (LIDC) and the Lung Image Database Consortium image collection (LIDC-IDRI) are the most commonly used databases for the validation of experimental results.

    In 2019, Shi et al. proposed a DCNN-based transfer learning method for false-positive reduction in lung nodule detection [80]. The VGG-16 was adopted to extract discriminative features and an SVM was used to classify lung nodules. A sensitivity of 87.2% with 0.39 false positives per scan was reached, which was higher than that of other methods.
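    The general idea of coupling a pretrained CNN feature extractor with an SVM classifier can be sketched as follows using torchvision and scikit-learn. This is a generic illustration, not a reproduction of the authors' pipeline; the 3-channel 224x224 candidate patches, the synthetic labels and the RBF kernel are assumptions made for the example.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

# Pretrained VGG-16 used as a frozen feature extractor (torchvision >= 0.13).
vgg = models.vgg16(weights="IMAGENET1K_V1").eval()
extractor = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten())

def cnn_features(patches: torch.Tensor) -> np.ndarray:
    """Map candidate patches of shape (N, 3, 224, 224) to deep feature vectors."""
    with torch.no_grad():
        return extractor(patches).numpy()

# Hypothetical candidate-nodule patches and labels (1 = true nodule, 0 = false positive).
train_patches = torch.randn(20, 3, 224, 224)
train_labels = np.array([0, 1] * 10)
test_patches = torch.randn(5, 3, 224, 224)

svm = SVC(kernel="rbf").fit(cnn_features(train_patches), train_labels)
predictions = svm.predict(cnn_features(test_patches))  # false-positive reduction step
```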

    Savitha et al. analyzed lung CT scan images using an optimal deep neural network (ODNN) and linear discriminant analysis (LDA) [83]. To detect and identify lung cancer, the authors used a combination of the ODNN and a modified gravitational search algorithm (MGSA). An accuracy of 94.56%, a sensitivity of 96.2% and a specificity of 94.2% were reported in the comparative results.

    In 2018, Nishio et al. developed a CADx system to classify lung nodules as benign nodule, primary lung cancer, or metastatic lung cancer [84]. The proposed system was validated using different combinations of methods: conventional machine learning classifiers, a DCNN-based method with transfer learning, and a DCNN-based method without transfer learning. In addition, they found that a larger input image size for training the DCNN improved the performance of lung nodule classification.

    Dey et al. introduced a lung nodules classification method with high performance, which combined a three dimensional DCNN (3D-DCNN) and an ensemble method [89]. Compared to the shallow 3D-CNN architectures used in previous studies, the proposed 3D-DCNN could capture the features of spherical-shaped nodules more effectively.

    In 2017, Anton et al. evaluated the effectiveness of a novel DCNN architecture for lung nodule malignancy classification [90]. Their evaluation was based on the state-of-the-art ResNet architecture. Further, the authors explored how curriculum learning, transfer learning and varying network depth influenced the accuracy of malignancy classification.

    In 2016, Li et al. designed a DCNN-based method for lung nodule classification, which had the advantages of automatic representation learning and strong generalization ability [91]. The DCNNs were trained on 62,492 region-of-interest (ROI) samples, including 40,772 nodules and 21,720 non-nodules, from the LIDC database.

    Some recent applications of CNN-based methods for lung nodules detection are summarized in Table 3.

    Table 3.  Recent applications of CNN-based methods for lung CAD systems.
    Reference Year Application Method Dataset Result
    Shi et al. [80] 2019 Lung nodules detection VGG-16 CT scans Sensitivity: 87.2%
    FP/scan: 0.39
    Savitha et al. [83] 2019 Lung cancers classification Optimal deep neural network(ODNN) Clinical Accuracy: 94.56%
    Sensitivity: 96.20%
    Specificity: 94.20%
    Zhao et al. [84] 2018 Lung nodules classification LeNet, AlexNet LIDC Accuracy: 82.20%
    AUC: 0.877
    Dey et al. [89] 2018 Lung nodules classification 3D-DCNN LUNA16 Competition performance
    metric(CPM): 0.910
    Nishio et al. [82] 2018 Lung cancer CAD system DCNN Clinical Accuracy: 68%
    Anton et al. [90] 2017 Lung CT CAD system ResNet LIDC-IDRI Sensitivity: 91.07%
    Accuracy: 89.90%
    Ding et al. [85] 2017 Lung CAD system DCNNs LUNA16 Sensitivity: 94.60%
    FROC: 0.893
    Dou et al. [86] 2017 Lung nodules detection 3D-CNN LUNA16 Sensitivity: 90.50%
    FP/scan: 1.0
    Cheng et al. [87] 2016 Lung lesions classification OverFeat LIDC Sensitivity: 90.80% ± 5.30
    Li et al. [91] 2016 Lung nodules classification DCNN LIDC Sensitivity: 87.10%
    FP/scan: 4.62
    Liu et al. [88] 2016 Lung nodules classification Multi-view CNN(MV-CNN) LIDC-IDRI Error rate: 5.41%
    Sensitivity: 90.49%
    Specificity: 99.91%
    Hua et al. [79] 2015 Lung nodules classification CNN LIDC Sensitivity: 73.30%
    Specificity: 78.7%


    Prostate cancer is one of the most common malignancies among men and remains the second leading cause of cancer death in men globally [92,93]. It has been predicted that there will be 1.7 million new cases by 2030. Early detection and diagnosis of prostate cancer can help nine out of ten men survive for at least five years.

    As a newly emerging area of research, prostate cancer detection, localization and diagnosis using CNN-based methods in CAD systems is attracting more and more attention [94,95,96,97,98,99,100,101,102,103,104,105,106,107].

    In 2019, Li et al. showed a new region-based CNN (R-CNN) framework for multi-task prediction of prostate cancer using an Epithelial Network Head and a Grading Network Head [95]. As a result, they achieved an accuracy of 99.07% and an average AUC of 0.998, which was the state-of-the-art performance in epithelial cells detection and Gleason grading tasks simultaneously. This work would help the pathologists to make the diagnosis more efficiently in the near future.

    Leng et al. designed a framework for automatic identification of prostate cancer from colorimetric analysis of H & E and IHC-stained histopathological specimens [96]. The methods introduced in their work could be modularly integrated into digital pathology frameworks for detection of prostate cancer on whole slide images of histopathology slides. In addition, the proposed methods could be extended naturally to other related cancers as well.

    In 2018, Chen et al. demonstrated that state-of-the-art deep neural networks could be retrained quickly with the limited data provided by the PROSTATEx challenge [97]. They used Inception V3 and VGG-16 pre-trained on ImageNet, and obtained AUCs of 0.81 and 0.83, respectively. Also, combining results from models trained with different image combinations could improve the classification performance.

    Song et al. proposed an automatic approach based on a DCNN, inspired by VGG-Net, to differentiate prostate cancer from noncancerous tissue in multi-parametric MRI images using the PROSTATEx database [98]. Further, Wang et al. improved this network by modifying a term in the loss function during the back-propagation process [99].

    In 2017, Rampun et al. proposed a prostate cancer CAD system and suggested a set of discriminative texture descriptors extracted from T2-weighted (T2W) MRI images [94]. To test and evaluate their method, the authors collected 418 samples from 45 patients and used a 9-fold cross-validation approach. Experimental results indicated that it was comparable with existing CAD systems based on multimodality MRI.

    Le et al. presented an automated method based on multimodal CNNs for two prostate cancer diagnostic tasks [106]. In the first phase, the proposed network aimed to classify cancerous and noncancerous tissues, while in the second phase, it was used to differentiate clinically significant prostate cancer from indolent prostate cancer. Finally, the authors obtained promising results, with a sensitivity of 89.85% and a specificity of 95.83% for prostate tissue classification, and a sensitivity of 100% and a specificity of 76.92% for prostate cancer characterization, respectively.

    Yang et al. introduced an automated method for prostate cancer localization in multi-parametric MRI images and assessed the aggressiveness of detected lesions using multimodal multi-label CNNs [107]. Comprehensive evaluation demonstrated that the proposed method was superior to other networks in terms of representative feature extraction.

    Some recent applications of CNN-based methods for prostate cancer localization are summarized in Table 4.

    Table 4.  Recent applications of CNN-based methods for prostate CAD systems.
    Reference Year Application Method Dataset Result
    Li et al. [95] 2019 Prostate cancer grading R-CNN Clinical Accuracy: 89.4%
    Wang et al. [99] 2018 Clinically significant prostate cancer CAD system Dual-path CNN Clinical Sensitivity: 89.78%
    FP/scan: 1
    Ishioka et al. [100] 2018 Prostate cancer CAD system DCNN Clinical (two training datasets: n = 301 and n = 34) AUC: 0.645 and 0.636, respectively
    Song et al. [98] 2018 Prostate cancer CADx DCNN PROSTATEx Sensitivity: 87%
    Specificity: 90.6%
    AUC: 0.944
    Chen et al. [97] 2018 Clinically significant prostate cancer classification Inception V3, VGG-16 PROSTATEx AUC: 0.81 and 0.83
    Kohl et al. [101] 2017 Prostate cancer detection Fully convolutional networks (FCNs) Clinical Sensitivity: 55%
    Specificity: 98%
    Yang et al. [102] 2017 Prostate cancer detection Co-trained CNNs Clinical Sensitivity: 46.00%,
    92.00% and 97.00%
    FP/scan: 0.1, 1 and 10
    Yang et al. [107] 2017 Prostate cancer localization and characterization Multimodal multi-label CNNs Clinical Sensitivity: 98.6%
    Specificity: 98.3%
    Accuracy: 98.5%
    AUC: 0.998
    Jin et al. [103] 2017 Prostate cancer detection CNN PROMISE12 AUC: 0.974
    Wang et al. [104] 2017 Prostate cancer detection DCNN PFMP AUC: 0.84
    Le et al. [106] 2017 Prostate cancer diagnosis Multimodal CNNs Clinical Sensitivity: 89.85%
    Specificity: 95.83%
    Liu et al. [105] 2017 Prostate cancer lesions classification XmasNet PROSTATEx AUC: 0.84


    A well-characterized repository plays an important role in the performance evaluation of a CAD system [108]. Most of the researchers collect clinical data from different hospitals, which is a time-consuming and intensive process. Besides, extra work is required to normalize these images. Therefore, it is necessary to develop a standard database for the effective and objective performance evaluation among CAD systems [109].

    In this section, some open available medical image databases are introduced, which are commonly used for breast cancer diagnosis, lung nodule detection and prostate cancer localization in literature, respectively.

    The Mammography Image Analysis Society (MIAS) database contains left and right breast images from 161 patients, a total of 322 images, including 208 normal, 63 benign and 51 malignant (abnormal) cases [110,111]. Each X-ray film is associated with medical information such as lesion location, image scale, and malignancy, annotated by experienced radiologists.

    The Digital Database for Screening Mammography (DDSM) is the largest public breast image database; it consists of 2,620 cases with a total of 10,480 images, including two images of each breast [112,113]. Each case is associated with patient information such as age at the time of examination, subtlety rating for abnormalities and so on. Researchers have obtained satisfactory results using this database [114,115].

    The Wisconsin Breast Cancer Dataset (WBCD) is publicly available from the UCI Machine Learning Repository and is used for the validation of various classification algorithms [116]. There are 569 instances associated with 32 attributes in this database, which are derived from fine needle aspirates (FNA) of human breast tissue.

    The Breast Cancer Digital Repository (BCDR) is the first Portuguese digital mammogram database [117]. At present it contains a total of 1,010 cases, including digital content (3,703 digitized film mammography images) and associated metadata. Precise segmentations of identified lesions (manual contours made by medical specialists) are also provided. Currently, two repositories are available in the public domain: one containing digitized film mammography, known as the BCDR-FM, and the other containing full-field digital mammography, known as the BCDR-DM. Also, four benchmarking datasets representative of benign and malignant lesions are available for free download to registered users.

    A brief summary of these databases is shown in Table 5.

    Table 5.  A summary of open available breast image databases.
    Database Image modality No. of patients No. of benign samples No. of malignant samples No. of normal samples Link
    MIAS X-rays 161 63 51 208 http://www.wiau.man.ac.uk/services/MIAS/MIASweb.html
    WBCD Digitized FNA 569 357 212 - https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
    DDSM Mammography 2,620 4,044 3,656 2,780 http://marathon.csee.usf.edu/Mammography/Database.html
    BCDR Mammography 1,010 - - 3,703 http://bcdr.inegi.up.pt/


    The LIDC-IDRI database contains 1,018 CT scans from 1,010 patients with a total of 244,527 images, covering imaging modalities such as CT, digital radiography (DX), and computed radiography (CR) [118,119]. Each scan is associated with an XML file that records the annotations, such as nodule ID, non-nodule ID, and reading sessions, performed by four expert radiologists.

    The Lung Nodule Analysis 2016 (LUNA16) dataset is one of the most commonly used databases for lung cancer detection and diagnosis [120]. There are 888 CT scans with a total of 272 lung images in this database, and each scan is associated with the location of the lesions as well as the image size. It is worth mentioning that the CT scans in this database are taken from the LIDC-IDRI database, with nodules smaller than 3 mm removed.

    The Nederlands-LeuvensLongkanker Screenings Onderzoek (NELSON) database is usually employed to investigate lung nodule measurement, automatic detection and segmentation [121,122]. The ANODE09 database, which originates from the NELSON database, contains 55 anonymized thoracic CT scans [123].

    The Japanese Society of Radiological Technology (JSRT) database has been used for various medical applications such as image pre-processing, image compression, CAD systems, and picture archiving and communication systems (PACS). Each sample in this database is associated with clinical information such as patient age, gender, benign or malignant status, and degree of subtlety in the visual detection of nodules.

    A brief summary of these databases is shown in Table 6.

    Table 6.  A summary of open available lung image databases.
    Database Image modality No. of scans No. of slices No. of images Link
    LIDC-IDRI CT, DX, CR 1,018 244,527 https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI
    LUNA16 CT 888 1,084 272 https://luna16.grand-challenge.org/download/
    ANODE09 CT 55 451.5 https://www.rug.nl/research/portal/datasets/nederlandsleuvens-longkanker-screenings-onderzoek-nelson.html
    JSRT X-rays 247 http://db.jsrt.or.jp/eng.php


    The PROSTATEx Challenge focuses on the quantitative diagnostic classification of clinically significant prostate cancer [124]. This collection is a retrospective set of prostate MRI studies that includes T2W, dynamic contrast-enhanced (DCE), and diffusion-weighted (DW) imaging. It contains 349 studies from 346 patients with a total of 309,251 images acquired without an endorectal coil.

    The Prostate MRI Image Segmentation 2012 (PROMISE12) challenge aims to compare interactive and (semi)-automatic segmentation algorithms of the prostate in MRI images. Patients with benign diseases, for example, benign prostatic hyperplasia, and prostate cancer are both covered in this database. Additionally, data is collected from multiple medical centers with multimodality MRI images for the purpose of robustness and generalization testing.

    The Cancer Imaging Archive Prostate Fused-MRI-Pathology (PFMP) dataset comprises 28 prostate MRI studies with T1-weighted (T1W), T2W, DW, and DCE sequences, along with digitized histopathology images of the corresponding radical prostatectomy specimens, acquired on a 3.0T Siemens TrioTim [125]. The MRI scans also include a mapping of the extent of prostate cancer.

    The Prostate-3T dataset provides prostate transversal T2W MRI images acquired on a 3.0T Siemens TrioTim using only a pelvic phased-array coil, which is commonly used for prostate cancer detection [126]. It was released through TCIA as part of an ISBI challenge competition in 2013. There are 64 cases with a total of 1,258 images in this dataset.

    A brief summary of these databases is shown in Table 7.

    Table 7.  A summary of open available prostate image databases.
    Database Image modality No. of cases No. of images Link
    PROSTATEx MRI 346 309,251 https://wiki.cancerimagingarchive.net/display/Public/SPIE-AAPM-NCI+PROSTATEx+Challenges
    PFMP MRI 28 32,508 https://pathology.cancerimagingarchive.net/pathdata/
    PROMISE12 MRI 50 https://promise12.grand-challenge.org/Download/
    Prostate-3T MRI 64 1,258 https://wiki.cancerimagingarchive.net/display/Public/Prostate-3T


    CAD system performance is evaluated with various metrics, such as accuracy, precision, sensitivity, specificity, F1-score, recall, true positive rate (TPR), false positive rate (FPR), Dice coefficient, receiver operating characteristic (ROC) curve and area under the curve (AUC). The calculation formulas of the most commonly used evaluation metrics in the literature are summarized in Table 8; a short sketch showing how they are computed from confusion-matrix counts is given after the table.

    Table 8.  Commonly used evaluation metrics in CAD systems.
    Metric Calculation Formula
    accuracy accuracy = (TP + TN) / (TP + TN + FP + FN) Eq 1
    precision precision = TP / (TP + FP) Eq 2
    sensitivity sensitivity = TP / (TP + FN) Eq 3
    specificity specificity = TN / (TN + FP) Eq 4
    F1-score F1-score = 2 × (precision × recall) / (precision + recall) Eq 5
    TPR TPR = TP / (TP + FN) Eq 6
    FPR FPR = FP / (TN + FP) Eq 7
    Dice coefficient Dice = 2 × |P ∩ GT| / (|P| + |GT|) Eq 8

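    The sketch below computes these metrics directly from confusion-matrix counts in Python; the Dice coefficient is shown for binary segmentation masks. The function names and example counts are illustrative only.

```python
import numpy as np

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Confusion-matrix based metrics corresponding to Eqs 1-7 in Table 8."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # identical to sensitivity and TPR
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "sensitivity": recall,
        "specificity": tn / (tn + fp),
        "F1-score": 2 * precision * recall / (precision + recall),
        "FPR": fp / (tn + fp),
    }

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|P ∩ GT| / (|P| + |GT|) for binary masks (Eq 8)."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

print(classification_metrics(tp=90, tn=80, fp=10, fn=20))  # toy example counts
```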

    Recently, more and more CAD systems have been developed for various diseases in medical image analysis. However, due to the complex structure of medical images and difficulty in establishing a standard library of biomedical signs, there are challenges in CAD research.

    Firstly, the sample size with valid annotation is too small. On the one hand, labeling medical images is costly and time-consuming for radiologists or clinicians. On the other hand, the coverage of existing image libraries is not comprehensive. Secondly, standardization of datasets and evaluation metrics is needed. Currently, most CAD systems are proposed based on various openly available medical image databases, and there is no standard for performance evaluation. Obviously, it is very difficult to make a correct and reliable comparison among these systems. Thirdly, there are many difficulties in applying CAD systems to clinical use. Since radiologists and clinicians already have heavy daily workloads, and the interfaces of medical imaging systems are not open to the public, it is often impractical for developed CAD systems to integrate seamlessly with the other systems used in hospitals.

    Many different CNN architectures have been proposed or adopted for medical image analysis, typically for image segmentation and classification tasks [127,128].

    In [129], Chen et al. proposed a 2D bridged U-Net for prostate segmentation. In this modified U-Net, the exponential ReLU, as an alternative to ReLU, and the Dice loss, one of the most pervasive segmentation loss functions, were adopted. In [130], Milletari et al. developed a V-Net for volumetric medical image segmentation; despite the popularity of CNN-based methods, most operate only on 2D images, whereas most medical image data used in clinical practice consist of 3D volumes.
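    A soft Dice loss of the kind used in such segmentation networks can be written compactly. The sketch below assumes sigmoid probabilities and binary ground-truth volumes and is a generic formulation, not the exact loss of either cited network.

```python
import torch

def soft_dice_loss(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """1 - Dice, averaged over a batch of predicted probability volumes."""
    dims = tuple(range(1, probs.dim()))                 # sum over all but the batch axis
    intersection = (probs * target).sum(dim=dims)
    union = probs.sum(dim=dims) + target.sum(dim=dims)
    dice = (2 * intersection + eps) / (union + eps)
    return (1 - dice).mean()

# Example with 3D volumes: batch of 2, one channel, 16 x 32 x 32 voxels.
pred = torch.rand(2, 1, 16, 32, 32)
mask = (torch.rand(2, 1, 16, 32, 32) > 0.5).float()
loss = soft_dice_loss(pred, mask)
```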

    In [131], Hussain et al. implemented an automated brain tumor segmentation algorithm using a DCNN. State-of-the-art neural network optimization strategies, such as dropout and batch normalization, were used in their work. Besides, the authors adopted non-linear activations and inception modules to build a new ILinear nexus architecture.

    Generally, these architectures mainly focus on reducing the parameter space, saving computational time and dealing with 3D modalities. How to choose a more suited network architecture according to various medical image analysis tasks needs further research.

    As is well known, deep learning architectures require a large amount of training data. Besides, most deep learning techniques, for example CNN-based methods, require labeled data for supervised learning, which is difficult and time-consuming to obtain clinically. How to take the best advantage of limited data for training and how to train deeper networks effectively remain open questions.

    There are two widely used solutions in the literature that can partially deal with this problem. The first is data augmentation, which uses affine transformations such as translation, rotation, and scaling to generate more data from the existing data. The other is transfer learning, which has achieved promising results in medical image analysis [132,133]. The workflow of transfer learning is composed of two parts: pre-training on a large labeled dataset (such as ImageNet) and fine-tuning on the target dataset.
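    The two workarounds can be combined in a few lines, as in the PyTorch/torchvision sketch below. The specific augmentation parameters, the ResNet-18 backbone, the two-class head and the torchvision >= 0.13 `weights` argument are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Data augmentation: affine transformations generate extra training samples.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Transfer learning: start from ImageNet weights, then fine-tune on the target data.
model = models.resnet18(weights="IMAGENET1K_V1")     # pre-training stage (reused)
for param in model.parameters():
    param.requires_grad = False                      # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)        # new head for the target task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tuning stage
```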

    Despite the challenges associated with introducing deep learning methods and CAD systems into clinical settings, the promising results achieved so far are too valuable to discard.

    Deep learning techniques extract knowledge from big data and produce outputs that can be used for personalized treatment, which promotes the development of precision medicine. Unlike conventional medical treatment, precision medicine examines the finest molecular and genomic information, and medical staff make diagnostic decisions according to subtle differences among patients.

    With the development of big data and medical imaging, radiomics has come into being [134]. Using a large number of medical images and feature-related algorithms, it aims to transform regions of interest into high-resolution feature maps. Standardized image acquisition, automated image analysis, radiomics of molecular images, and prognostic response evaluation are the key points in radiomics. At present, radiomics has been applied to the diagnosis, treatment, and prognosis of cancers such as breast cancer and lung cancer, and has achieved promising results in the literature.

    In the future, medical image data will be linked more readily to non-imaging data in electronic medical records, such as gender, age, medical history and so on, known as imaging grouping. Deep learning techniques, when applied to electronic medical records, can help derive patient representations that may lead to predictions and augmentations of clinical decision support systems [135].

    With the current rapid development of deep learning techniques, and CNN-based methods in particular, there are prospects for a more widespread application of CNN-based CAD systems in clinical practice. These techniques are not expected to replace the radiologists in the foreseeable future, but potentially facilitate the routine workflow, improve detection and diagnosis accuracy, reduce the probability of mistake and error, and enhance patients' satisfaction.

    In this paper, an overview of CNN-based methods and their application in the field of CAD research is presented. It is concluded that CNN-based methods are increasingly being used in all sub-fields of medical image analysis, such as lesion segmentation, detection and classification. Despite their restrictions, data augmentation and transfer learning can be effectively used to deal with limited training data. Recent studies demonstrate that CNN-based methods in CAD research will greatly benefit the development of medical image analysis, and future directions may be towards radiomics, precision medicine and imaging grouping. This paper provides researchers in medical image analysis with a systematic picture of the CNN-based methods used in CAD research.

    This work was supported in part by grants from National Natural Science Foundation of China (Grant No. 61303099).

    All authors declare no conflict of interest in this paper.



    [1] D. W. Meek, Tumour suppression by P53: A role for the DNA damage response, Nat. Rev. Cancer, 9 (2009), 714-723. doi: 10.1038/nrc2716
    [2] V. Rotter, p53, a transformation-related cellular-encoded protein, can be used as a biochemical marker for the detection of primary mouse tumor cells, P Natl. Acad. Sci. USA, 80 (1983), 2613-2617. doi: 10.1073/pnas.80.9.2613
    [3] F. Mantovani, L. Collavin, G. Del Sal, Mutant p53 as a guardian of the cancer cell, Cell Death Differ., 26 (2019), 199-212. doi: 10.1038/s41418-018-0246-9
    [4] J. Bartek, J. Bartkova, B. Vojtesek, Z. Staskova, J. Lukas, A. Rejthar, et al., Aberrant expression of thep53 oncoprotein is a common feature of a wide spectrum of human malignancies, Oncogene, 6 (1991), 1699-1703.
    [5] R. Iggo, J. Bartek, D. Lane, K. Gatter, A. L. Harris, J. Bartek, Increased expression of mutant forms of p53 oncogene in primary lung cancer, Lancet, 335 (1990), 675-679. doi: 10.1016/0140-6736(90)90801-B
    [6] A. J. Levine, P53, the cellular gatekeeper for growth and division, Cell, 88 (1997), 323-331. doi: 10.1016/S0092-8674(00)81871-1
    [7] J. E. Purvis, K. W. Karhohs, C. Mock, E. Batchelor, A. Loewer, G. Lahav, P53 dynamics control cell fate, Science, 336 (2012), 1440-1444. doi: 10.1126/science.1218351
    [8] K. H. Vousden, X. Lu, Live or let die: The cell's response to P53, Nat. Rev. Cancer, 2 (2002), 594-604. doi: 10.1038/nrc864
    [9] B. Vogelstein, D. Lane, A. Levine, Surfing the P53 network, Nature, 408 (2000), 307-310. doi: 10.1038/35042675
    [10] J. D. Oliner, K. W. Kinzler, P. S. Meltzer, D. L. George, B. Vogelstein, Amplification of a gene encoding a P53-associated protein in human sarcomas, Nature, 358 (1992), 80-83. doi: 10.1038/358080a0
    [11] M. H. G. Kubbutat, S. N. Jones, K. H. Vousden, Regulation of P53 stability by Mdm2, Nature, 387 (1997), 299-303. doi: 10.1038/42342-c1
    [12] J. H. Park, S. W. Yang, J. M. Park, S. H. Ka, J. Kim, Y. Kong, et al., Positive feedback regulation of P53 transactivity by DNA damage-induced ISG15 modification, Nat. Commun., 7 (2016), 12513. doi: 10.1038/ncomms12513
    [13] K. H. Vousden, D. P. Lane, P53 in health and disease, Nat. Rev. Mol. Cell Biol., 8 (2007), 275-283. doi: 10.1038/nrm2147
    [14] N. D. Lakin, S. P. Jackson, Regulation of P53 in response to DNA damage, Oncogene, 18 (1999), 7644-7655. doi: 10.1038/sj.onc.1203015
    [15] U. M. Moll, O. Petrenko, The MDM2-P53 interaction, Mol. Cancer Res., 1 (2004), 1001-1008.
    [16] Y. Haupt, R. Maya, A. Kazaz, M. Oren, Mdm2 promotes the rapid degradation of P53, Nature, 387 (1997), 296-299. doi: 10.1038/387296a0
    [17] G. Liao, D. Yang, L. Ma, W. Li, L. Hu, L. Zeng, et al., The development of piperidinones as potent MDM2-P53 protein-protein interaction inhibitors for cancer therapy, Eur. J. Med. Chem., 159 (2018), 1-9. doi: 10.1016/j.ejmech.2018.09.044
    [18] D. Cao, T. K. Ng, Y. W. Y. Yip, A. L. Young, C. P. Pang, W. K. Chu, et al., P53 inhibition by MDM2 in human pterygium, Exp. Eye Res., 175 (2018), 142-147. doi: 10.1016/j.exer.2018.06.021
    [19] R. Li, P. D. Sutphin, D. Schwartz, D. Matas, N. Almog, R. Wolkowicz, et al., Mutant p53 protein expression interferes with p53-independent apoptotic pathways, Oncogene, 16 (1998), 3269-3277. doi: 10.1038/sj.onc.1201867
    [20] G. Blandino, A. J Levine, M. Oren, Mutant p53 gain of function: Differential effects of different p53 mutants on resistance of cultured cells to chemotherapy, Oncogene, 18 (1999), 477-485. doi: 10.1038/sj.onc.1202314
    [21] G. Bossi, E. Lapi, S. Strano, C. Rinaldo, G. Blandino, A. Sacchi, Mutant p53 gain of function: Reduction of tumor malignancy of human cancer cell lines through abrogation of mutant p53 expression, Oncogene, 25 (2006), 304-309. doi: 10.1038/sj.onc.1209026
    [22] M. S. Irwin, K. Kondo, M. C. Marin, L. S. Cheng, W. C. Hahn, W. G. Kaelin, Chemosensitivity linked to p73 function, Cancer Cell, 3 (2003), 403-410. doi: 10.1016/S1535-6108(03)00078-3
    [23] R. Maya, R. Segel, U. Alon, A. J. Levine, Generation of oscillations by the P53-Mdm2 feedback loop: A theoretical and experimental study, P Natl. Acad. Sci. USA, 97 (2000), 11250-11255. doi: 10.1073/pnas.210171597
    [24] L. Ma, J. Wagner, J. Rice, W. Hu, A. Levine, G. Stolovitzky, A plausible model for the digital response of P53 to DNA damage, P Natl. Acad. Sci. USA, 102 (2005), 14266-14271. doi: 10.1073/pnas.0501352102
    [25] T. Zhang, P. Brazhnik, J. J. Tyson, Exploring mechanisms of the DNA-damage response: P53 pulses and their possible relevance to apoptosis, Cell Cycle, 6 (2007), 85-94. doi: 10.4161/cc.6.1.3705
    [26] X. P. Zhang, F. Liu, Z. Cheng, W. Wang, Cell fate decision mediated by P53 pulses, P Natl. Acad. Sci. USA., 106 (2009), 12245-12250. doi: 10.1073/pnas.0813088106
    [27] X. P. Zhang, F. Liu, W. Wang, Two-phase dynamics of P53 in the DNA damage response, P Natl. Acad. Sci. USA, 108 (2011), 8990-8995. doi: 10.1073/pnas.1100600108
    [28] T. Sun, W. Yang, J. Liu, P. Shen, Modeling the basal dynamics of P53 system, PLoS ONE, 6 (2011), e27882. doi: 10.1371/journal.pone.0027882
    [29] B. C. Torrico, M. P. d. A. Filho, T. A. Lima, M. D. D. N. Forte, R. C. Sa, F. G. Nogueira, Tuning of a dead-time compensator focusing on industrial processes, Isa Transact., 83 (2018), 189-198. doi: 10.1016/j.isatra.2018.09.003
    [30] T. Zhang, P. Brazhnik, J. J. Tyson, Computational Analysis of Dynamical Responses to the Intrinsic Pathway of Programmed Cell Death, Biophys J., 97 (2009), 415-434. doi: 10.1016/j.bpj.2009.04.053
    [31] K. H. Chong, S. Samarasinghe, D. Kulasiri, J. Zheng, Mathematical modelling of core regulatory mechanism in P53 protein that activates apoptotic switch, J. Theor. Biol., 462 (2019), 134-147. doi: 10.1016/j.jtbi.2018.11.008
    [32] Y. Zhang, Y. Xiong, W. G. Yarbrough, ARF promotes MDM2 degradation and stabilizes P53: ARF-INK4a locus deletion impairs both the Rb and P53 tumor suppression pathways, Cell, 92 (1998), 725-735. doi: 10.1016/S0092-8674(00)81401-4
    [33] E. Shaulian, D. Resnitzky, O. Shifman, G. Blandino, A. Amsterdam, A. Yayon, et al., Induction of Mdm2 and enhancement of cell survival by bFGF, Oncogene, 15 (1997), 2717-2725. doi: 10.1038/sj.onc.1201453
    [34] Y. Ogawara, S. Kishishita, T. Obata, Y. Isazawa, T. Suzuki, K. Tanaka, et al., Akt enhances Mdm2-mediated ubiquitination and degradation of P53, J. Biol. Chem., 277 (2002), 21843-21850. doi: 10.1074/jbc.M109745200
    [35] A. Carnero, C. Blanco-Aparicio, O. Renner, W. Link, The PTEN/PI3K/Akt signalling pathway in cancer, therapeutic implications, Curr. Cancer Drug Tar., 8 (2008), 187-198. doi: 10.2174/156800908784293659
    [36] X. Tian, B. Huang, X. P. Zhang, M. Lu, W. Wang, Modeling the response of a tumor-suppressive network to mitogenic and oncogenic signals, P Natl. Acad. Sci. USA, 114 (2017), 5337-5342. doi: 10.1073/pnas.1702412114
    [37] B. Novak, J. J. Tyson, Design principles of biochemical oscillators, Nat. Rev. Mol. Cell Biol., 9 (2008), 981-991. doi: 10.1038/nrm2530
    [38] J. R. Pomerening, S. Y. Kim, J. E. Ferrell, Systems-level dissection of the cell-cycle oscillator: Bypassing positive feedback produces damped oscillations, Cell, 122 (2005), 565-578. doi: 10.1016/j.cell.2005.06.016
    [39] K. Jonak, M. Kurpas, K. Szoltysek, J. Patryk, A. Abramowicz, K. Puszynski, A novel mathematical model of ATM/P53/NF-kB pathways points to the importance of the DDR switch-off mechanisms, BMC Syst. Biol., 10 (2016), 75. doi: 10.1186/s12918-016-0293-0
    [40] A. Honkela, J. Peltonen, H. Topa, I. Charapitsa, F. Matarese, Genome-wide modeling of transcription kinetics reveals patterns of RNA production delays, P Natl. Acad. Sci. USA, 112 (2015), 13115-13120. doi: 10.1073/pnas.1420404112
    [41] A. Prindle, J. Selimkhanov, H. Li, I. Razinkov, L. S. Tsimring, J. Hasty, Rapid and tunable post-translational coupling of genetic circuits, Nature, 508 (2014), 387-391. doi: 10.1038/nature13238
    [42] H. K. Yalamanchili, B. Yan, M. J. Li, J. Qin, Z. Zhao, F. Y. L. Chin, et al., DDGni: Dynamic delay gene-network inference from high-temporal data using gapped local alignment, Bioinformatics, 30 (2014), 377-383. doi: 10.1093/bioinformatics/btt692
    [43] V. Stambolic, D. Macpherson, D. Sas, Y. Lin, B. Snow, Regulation of PTEN transcription by P53, Mol. Cell, 8 (2001), 317-325. doi: 10.1016/S1097-2765(01)00323-9
    [44] Y. Barak, E. Gottlieb, T. Juven-Gershon, M. Oren, Regulation of mdm2 expression by P53: Alternative promoters produce transcripts with nonidentical translation potential, Gene Dev., 8 (1994), 1739-1749. doi: 10.1101/gad.8.15.1739
    [45] K. B. Wee, B. D. Aguda, Akt versus P53 in a network of oncogenes and tumor suppressor genes regulating cell survival and death, Biophys J., 91 (2006), 857-865. doi: 10.1529/biophysj.105.077693
    [46] D. Qiu, L. Mao, S. Kikuchi, M. Tomita, Sustained MAPK activation is dependent on continual NGF receptor regeneration, Dev. Growth Differ., 46 (2004), 393-403. doi: 10.1111/j.1440-169x.2004.00756.x
    [47] B. N. Kholodenko, Negative feedback and ultrasensitivity can bring about oscillations in the mitogen-activated protein kinase cascades, Eur. J. Biochem., 267 (2000), 1583-1588. doi: 10.1046/j.1432-1327.2000.01197.x
    [48] Y. Zhang, H. Liu, F. Yan, J. Zhou, Oscillatory dynamics of p38 activity with transcriptional and translational time delays, Sci. Rep. 7, (2017), 11495. doi: 10.1038/s41598-017-11149-5
    [49] Y. Harima, Y. Takashima, Y. Ueda, T. Ohtsuka, R. Kageyama, Accelerating the tempo of the segmentation clock by reducing the number of introns in the Hes7 gene, Cell Rep., 3, (2013), 1-7. doi: 10.1016/j.celrep.2012.11.012
    [50] Y. Takashima, T. Ohtsuka, A. Gonzalez, H. Miyachi, R. Kageyama, Intronic delay is essential for oscillatory expression in the segmentation clock, P Natl. Acad. Sci. USA, 108, (2011), 3300-3305.
    [51] J. Lewis, Autoinhibition with transcriptional delay: A simple mechanism for the zebrafish somitogenesis oscillator, Curr. Biol., 13, (2003), 1398-1408. doi: 10.1016/S0960-9822(03)00534-7
    [52] A. Audibert, D. Weil, F. Dautry, In vivo kinetics of mrna splicing and transport in mammalian cells, Mol. Cell Biol., 22, (2002), 6706-6718. doi: 10.1128/MCB.22.19.6706-6718.2002
    [53] J. Fruth, New methods for the sensitivity analysis of black-box functions with an application to sheet metal forming, TU Dortmund University, 2015.
    [54] B. P. Zhou, Y. Liao, W. Xia, Y. Zou, B. Spohn, M.C. Hung, HER-2/neu induces P53 ubiquitination via Akt-mediated MDM2 phosphorylation, Nat. Cell Biol., 3 (2001), 973-982. doi: 10.1038/ncb1101-973
    [55] S. Ruan, J. Wei, On the zeros of transcendental functions with applications to stability of delay differential equations with two delays, Dynam. Cont. Dis. Ser. A, 10 (2003), 863-874.
    [56] B. D. Hassard, N. D. Kazarinoff, Y. H. Wan, Theory and applications of Hopf bifurcation, Cambridge University Press, 1981.
  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
