Research article





    High-resolution (HR) magnetic resonance imaging (MRI) reveals enhanced structural details and textures, which are essential for accurate diagnosis and pathological analysis of bodily organs. However, the resolution of medical images is often constrained by factors such as imaging hardware limitations, prolonged scanning durations, and low signal-to-noise ratios (SNR) [1]. Improving spatial resolution usually comes at the cost of decreased SNR and increased scanning time [2].

    Recently, super-resolution (SR) has emerged as a post-processing technique for upscaling the resolution of MRI images [2,3,4]. Existing SR methods include interpolation-based, regularization-based, and learning-based methods [5,6]. Interpolation methods usually blur sharp edges and can hardly recover fine details or handle complex textures [7]. Deep convolutional neural networks (CNNs) have shown notable success in high-quality SR reconstruction [8]. After the pioneering work of SRCNN [9], a multitude of CNN-based SR models have been proposed, such as EDSR [10], RCAN [11], and SwinIR [12], significantly improving SR performance. The superior reconstruction performance of CNN-based methods, such as SAN [13] and HAN [14], primarily stems from their deep architectures, residual learning, and diverse attention mechanisms [7,15]. Deepening the network enlarges its receptive field and helps it learn the intricate mapping between low-resolution (LR) inputs and their HR counterparts. Residual learning enables deeper SR networks, as it effectively mitigates gradient vanishing and explosion. As CNN-based SR methods developed rapidly, transformer-based SR methods emerged to further improve performance [12,16,17]. As an alternative to CNNs, transformer-based methods exploit long-range dependencies rather than only local features, greatly improving SR performance. However, transformer-based SR models usually have many parameters and are difficult to train.

    Although previous work has made significant progress, deep SR models remain challenging to train because of their expensive GPU computation and time costs, which degrade the practical performance of state-of-the-art methods [18]. Previously proposed SR methods are therefore ill-suited to the limited computational resources and diagnosis time of medical applications.

    To tackle the aforementioned issues and challenges, we propose the multi-distillation residual network (MDRN), which has a superior trade-off between reconstruction quality and computation consumption. Specifically, we propose a feature multi-distillation residual block (FMDRB), used in MDRN, which selectively retains certain features and sends others to the subsequent steps. To maximize the feature distillation capability, we incorporate a contrast-aware channel attention layer (CCA) to enhance the aggregation of diverse refined information. Our approach focuses on leveraging more informative features such as edges, textures, and small vessels for MRI image reconstruction.

    In general, our main contributions can be summarized as follows:

    1) We propose a multi-distillation residual network (MDRN) for efficient and fast MRI super-resolution that learns more discriminative feature representations while remaining lightweight enough for limited computation budgets. Our MDRN is well suited to super-resolution MRI in clinical applications.

    2) We introduce a CCA block into our FMDRB that guides the model to focus on recovering high-frequency information, maximizing the power of the MDRN network. The CCA block is suited to low-level vision and performs better than a plain channel attention block.

    3) Thanks to its unique design, MDRN outperforms previous CNN-based SR models even on GPUs with small memory. The proposed method obtains the best trade-off between inference time and reconstruction quality, demonstrating the competitive advantage of our MDRN over state-of-the-art (SOTA) methods with both quantitative and qualitative evidence.

    We propose a multi-distillation residual network (MDRN) for efficient and fast super-resolution MRI, whose architecture is shown in Figure 1. In Section 2.1, we provide an overview of the MDRN structure. In Section 2.2, we introduce the core module: feature multi-distillation residual block (FMDRB). Drawing inspiration from the common residual block (RB) [10] and information multi-distillation block (IMDB) [19], our network comprises a series of stacked FMDRBs forming the main chain, as demonstrated in Figure 1.

    Figure 1.  The architecture of MDRN.

    Given ILR as the LR input of MDRN, the network reconstructs the SR output ISR from the LR input. As in previous works, we adopt a shallow feature extraction, deep feature extraction, and post-upsample structure. The process of shallow feature F0 extracted from the input ILR is as follows:

    $$F_0 = D_{SF}(I_{LR}), \qquad (1)$$

    where $D_{SF}(\cdot)$ denotes the shallow feature extractor, implemented as a single convolution operation.

    The subsequent part of MDRN involves the integration of multiple FMDRBs, which are put in a chain manner with feature distillation connections. This design facilitates the gradual refinement of the initial extracted features, culminating in the generation of deep features. The deep feature extraction part can be described as follows:

    $$F_k = D_{DF}^{k}(F_{k-1}), \quad k = 1, \ldots, n, \qquad (2)$$

    where $D_{DF}^{k}(\cdot)$ stands for the function of the k-th FMDRB, and $F_{k-1}$ and $F_k$ represent the input and output features of the k-th FMDRB, respectively. After the iterative refinement process by the FMDRBs, one 1×1 convolution layer is placed at the end of the feature extraction part to assemble the fused distilled features. Following the fusion operation, a 3×3 convolution layer smooths the inductive bias of the aggregated features as follows:

    $$F_{fusion} = D_{aggregated}(\mathrm{Concat}(F_1, \ldots, F_n)), \qquad (3)$$

    where Concat denotes the fusion operation through channel concatenation of all the distilled features, $D_{aggregated}$ denotes a 1×1 convolution followed by a 3×3 convolution, and $F_{fusion}$ is the fused and aggregated feature. Finally, the SR output $I_{SR}$ is generated by the reconstruction module as follows:

    $$I_{SR} = D_{REC}(F_{fusion} + F_0), \qquad (4)$$

    where $D_{REC}(\cdot)$ denotes the upscale reconstruction part. The initial extracted feature $F_0$ is added to the assembled feature $F_{fusion}$ through a skip connection, and $I_{SR}$ is the output of the network. The upsampling reconstruction consists of a 3×3 convolution layer, whose number of output channels grows quadratically with the upscale factor, followed by a non-parametric sub-pixel shuffle operation.
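The non-parametric sub-pixel shuffle at the end of the reconstruction head can be sketched in NumPy; the function name and toy shapes below are illustrative, not taken from the paper's code:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r^2, H, W) feature map into (C, H*r, W*r).

    Sketch of the non-parametric sub-pixel shuffle; the preceding
    convolution is assumed to produce r^2 output channels per image channel.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # Split the channel axis into (c, r, r), then interleave into space.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# A 4-channel 2x2 map with upscale factor r=2 becomes a 1-channel 4x4 map.
feat = np.arange(16, dtype=np.float64).reshape(4, 2, 2)
sr = pixel_shuffle(feat, 2)
```

Because the shuffle is a pure rearrangement, it adds no parameters; all learning happens in the convolution that precedes it.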

    The shallow extracted features predominantly contain low-frequency information, whereas deep extracted features focus more on restoring fading high-frequency information. The skip connection path enables MDRN to directly transmit low frequencies to the reconstruction process, which can help combine information and achieve more stable training.

    Inspired by the concepts of feature distillation and residual learning, we designed the core module, the feature multi-distillation residual block (FMDRB), which is more efficient and lightweight than traditional residual modules. Different from the common residual block (two convolutions and one activation with an identity connection), the FMDRB adds a convolutional distillation path at each level and stacks improved residual blocks in the main chain as refinement layers that process coarse features gradually. We describe the complete structure as follows:

    $$\begin{aligned} F_{distilled\_1} &= D_1(F_{in}), & F_{remain\_1} &= R_1(F_{in}),\\ F_{distilled\_2} &= D_2(F_{remain\_1}), & F_{remain\_2} &= R_2(F_{remain\_1}),\\ F_{distilled\_3} &= D_3(F_{remain\_2}), & F_{remain\_3} &= R_3(F_{remain\_2}),\\ F_{remain\_4} &= R_4(F_{remain\_3}), & &\\ F_{out} &= \mathrm{Concat}(F_{distilled\_1}, F_{distilled\_2}, F_{distilled\_3}, F_{remain\_4}), & & \end{aligned} \qquad (5)$$

    where $D_k$ denotes the distillation operation, $R_k$ denotes the layer for the remaining features, and the subscript indexes the level. The output feature $F_{out}$ fuses the right-most features processed in the main chain with the distilled features from the distillation paths. As described in the above equations, the distillation operation works concurrently with the residual learning; this structure is more efficient and flexible than the commonly used original residual block. As such, this block is called the feature multi-distillation residual block.
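The data flow of Eq. (5) can be sketched as follows; the channel slices and identity maps below are executable stand-ins for the paper's 1×1 distillation convolutions and refinement layers, chosen only to make the wiring concrete:

```python
import numpy as np

def fmdrb_flow(f_in: np.ndarray, ratio: int = 2) -> np.ndarray:
    """Data-flow sketch of Eq. (5): at each level a slim 'distilled' part is
    split off and kept, while the 'remaining' part is refined further.

    The real D_k are 1x1 convolutions and R_k are residual refinement units;
    here they are stand-in channel slices / identity maps, so only the
    wiring of the block is shown.
    """
    distill = lambda f: f[: f.shape[0] // ratio]  # stand-in for D_k (1x1 conv)
    refine = lambda f: f                           # stand-in for R_k (BSRB)

    d1, r1 = distill(f_in), refine(f_in)
    d2, r2 = distill(r1), refine(r1)
    d3, r3 = distill(r2), refine(r2)
    r4 = refine(r3)
    # Concatenate the three distilled slices with the fully refined remainder.
    return np.concatenate([d1, d2, d3, r4], axis=0)

feat = np.random.rand(64, 8, 8)
out = fmdrb_flow(feat)  # 32 + 32 + 32 + 64 = 160 channels before fusion
```

The concatenated output is wider than the input, which is why the fusion stage after the block applies a 1×1 convolution to compress the channels back down.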

    As shown in Figure 1, the feature distillation path at each level is performed by one 1×1 convolution layer that compresses the feature channels at a fixed ratio; for example, we use the input channel count divided by 2. Although most convolutions in SR models use a 3×3 kernel, employing a 1×1 convolution for channel reduction, as done in numerous other CNN models, is more efficient: replacing the 3×3 convolution in the distillation path significantly reduces the parameter count. The convolutions in the main body of MDRN still use a 3×3 kernel, which better refines the features in the main path and more effectively exploits spatial context.
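The saving follows directly from the parameter count of a convolution layer (kernel² × input channels × output channels, plus bias); the 64→32 channel widths below are a hypothetical example consistent with the fixed ratio-2 distillation:

```python
def conv_params(k: int, c_in: int, c_out: int, bias: bool = True) -> int:
    """Parameter count of a standard 2-D convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

# Hypothetical widths: distilling 64 channels down to 32.
p3 = conv_params(3, 64, 32)  # 3x3 distillation conv: 18,464 params
p1 = conv_params(1, 64, 32)  # 1x1 distillation conv:  2,080 params
```

Switching the distillation path from 3×3 to 1×1 kernels thus cuts its parameters by roughly a factor of nine, while the main path keeps 3×3 kernels for spatial refinement.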

    As shown in Figure 1, in addition to the improvements mentioned above, we adopt BSRB [20] as the base unit of the FMDRB, which allows more flexible residual learning than a common residual block. Specifically, it uses a 3×3 blueprint separable convolution (BSConv) [21], an identity connection, and a ReLU activation layer. BSConv is a 1×1 pointwise convolution followed by a 3×3 depthwise convolution, which differs from the standard convolution.
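The efficiency of BSConv over a standard convolution can be seen from a simple parameter count; the 64-channel width below is illustrative, and bias terms are omitted for brevity:

```python
def standard_conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Parameters of a standard kxk convolution (bias omitted)."""
    return k * k * c_in * c_out

def bsconv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """BSConv: 1x1 pointwise (c_in -> c_out) followed by a kxk depthwise
    convolution applied per channel on the c_out channels (bias omitted)."""
    return c_in * c_out + k * k * c_out

std = standard_conv_params(64, 64)  # 36,864 parameters
bs = bsconv_params(64, 64)          # 4,096 + 576 = 4,672 parameters
```

For a 64→64 layer, BSConv uses roughly one eighth of the parameters of a standard 3×3 convolution, which is what makes the BSRB-based refinement layers lightweight.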

    The initial concept of channel attention, widely recognized as the squeeze-and-excitation (SE) module, has been extensively used in image processing tasks. In high- and mid-level vision tasks such as classification and detection, the significance of a feature map is predominantly determined by the activation of high-value regions, so global average and maximum pooling are commonly used to capture global information. While average pooling can indeed enhance the PSNR value, it lacks the capability to retain the structural, textural, and edge information that is crucial for image detail (as reflected in SSIM) [19]. As illustrated in Figure 1, the contrast-aware channel attention module is tailored to low-level vision. Specifically, we replace global average pooling with the sum of the standard deviation and the mean (evaluating the contrast degree of a feature map). Let $X = [x_1, x_2, \ldots, x_c, \ldots, x_C]$ denote the input, which has C feature maps with spatial size H×W. The contrast information value can then be calculated by

    $$z_c = H_{GC}(x_c) = \sqrt{\frac{1}{HW}\sum_{(i,j)\in x_c}\left(x_{i,j}^{c} - \frac{1}{HW}\sum_{(i,j)\in x_c} x_{i,j}^{c}\right)^{2}} + \frac{1}{HW}\sum_{(i,j)\in x_c} x_{i,j}^{c}, \qquad (6)$$

    where $z_c$ is the c-th element of the output and $H_{GC}$ denotes the global contrast information evaluation function. With the assistance of the CCA module, our network steadily improves super-resolution accuracy.
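The contrast statistic (per-channel standard deviation plus mean, as in Eq. (6)) can be sketched in NumPy:

```python
import numpy as np

def contrast_pool(x: np.ndarray) -> np.ndarray:
    """Contrast-aware pooling of Eq. (6): per-channel standard deviation
    plus mean, replacing the global average pooling of plain channel
    attention. x has shape (C, H, W); returns one scalar z_c per channel.
    """
    mean = x.mean(axis=(1, 2))
    std = np.sqrt(((x - mean[:, None, None]) ** 2).mean(axis=(1, 2)))
    return std + mean

# A flat channel contributes only its mean; a textured one adds its std.
flat = np.full((1, 4, 4), 0.5)
assert np.isclose(contrast_pool(flat)[0], 0.5)
```

Unlike plain average pooling, this statistic distinguishes a flat region from a textured one with the same mean, which is why it better preserves edge and texture cues in the attention weights.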

    We used a public clinical dataset from The Cancer Imaging Archive [22] (available at https://www.cancerimagingarchive.net/collection/vestibular-schwannoma-seg/), referred to as MRI-brain below. The dataset contains labeled MRI images of 242 patients diagnosed with vestibular schwannoma who received Gamma Knife radiation treatment. The images were acquired on a 32-channel Siemens Avanto 1.5T scanner. We used 5000 slices of the MRI-brain dataset as the training set and another 1000 slices as the testing set. The dataset is sufficient for training and testing, since one patient has approximately 140-160 slices.

    In data preprocessing, we first converted the raw DICOM files to NumPy voxel arrays. Second, the image pixel data was clipped to the range [0, 2000] and normalized to [0, 1]. Third, we used bicubic interpolation as the degradation function mapping the original HR images to LR images. The preprocessing workflow is shown in Figure 2.

    Figure 2.  Preprocessing workflow of our data.
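A minimal NumPy sketch of this preprocessing, using box downsampling as a stand-in for the bicubic degradation (which in practice would use a library such as PIL or scipy):

```python
import numpy as np

def preprocess(vol: np.ndarray, clip_max: float = 2000.0) -> np.ndarray:
    """Clip raw intensities to [0, clip_max] and normalize to [0, 1],
    mirroring the DICOM preprocessing described above."""
    vol = np.clip(vol.astype(np.float64), 0.0, clip_max)
    return vol / clip_max

def degrade(hr: np.ndarray, scale: int = 2) -> np.ndarray:
    """Stand-in HR -> LR degradation: box downsampling by `scale`.
    The paper uses bicubic interpolation as the degradation function."""
    h, w = hr.shape
    return hr[: h - h % scale, : w - w % scale].reshape(
        h // scale, scale, w // scale, scale).mean(axis=(1, 3))

slice_raw = np.array([[0.0, 4000.0], [1000.0, 2000.0]])
norm = preprocess(slice_raw)  # values clipped to 2000, then scaled to [0, 1]
lr = degrade(norm, 2)         # 2x2 -> 1x1
```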

    We trained our model with an initial learning rate of 5×10⁻⁴, updated by a StepLR scheduler, minimizing the L1 loss function. To reduce the training burden, we cropped 192×192 patches from the whole HR images as the input to the network. We used the ADAM optimizer with β1 = 0.9 and β2 = 0.99. The entire MDRN procedure took approximately 48 h (20,000 iterations per epoch, 200 epochs) for training and evaluation on the MRI dataset on a single GeForce RTX 3090 GPU with 24 GB of memory.

    Following previous works, peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were used to assess the model's performance. The calculation of these evaluation metrics is written below:

    $$\mathrm{PSNR} = 10\log_{10}\left(\frac{MAX^2}{\mathrm{MSE}}\right), \quad \mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[I_x(i,j) - I_y(i,j)\right]^2, \qquad (7)$$
    $$\mathrm{SSIM} = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}. \qquad (8)$$
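Both metrics can be implemented directly from their definitions. The sketch below computes PSNR as in Eq. (7) and a single-window (global) SSIM statistic; practical SSIM implementations average this statistic over local windows:

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, per Eq. (7)."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Standard SSIM statistic computed over the whole image; the constants
    assume a dynamic range of 1.0."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

gt = np.linspace(0.0, 1.0, 64).reshape(8, 8)
pred = np.clip(gt + 0.01, 0.0, 1.0)  # small uniform error
```

A uniform error of 0.01 on a [0, 1] image yields a PSNR of roughly 40 dB, a useful sanity check for the implementation.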

    We verified the effectiveness of each proposed component of MDRN in detail on the same dataset under the same experimental settings. As shown in Table 1, we itemize the performance of the specific variants.

    Table 1.  Ablation study of the different components. The best PSNR values on the 4× dataset are listed below.
    Base R1 R2 R3 R4 R5 R6 R7 Ours
    Multi-distillation (inside block)
    BSRB
    Using CCA
    Multi-distillation (outside block)
    PSNR 31.07 31.08 31.26 31.54 31.89 31.89 31.97 31.53 32.46


    The Base refers to EDSR, which stacks common residual blocks in a single path with one long skip connection, keeping the basic style of most SR SOTA models. The result of R1 shows the effectiveness of the distillation path outside the FMDRB. The result of R2 verifies the effectiveness of the basic unit (BSRB): used alone, it already enhances performance, overtaking the model constructed from common residual blocks. The result of R3 shows the contribution of CCA to the proposed method. The results of R4 to R7, with and without the feature distillation operations outside/inside the proposed FMDRB, BSRB, and CCA, all outperform the baseline, which further verifies the effectiveness of each proposed component. When basic residual units are simply stacked in a chain, which is the common structure of popular SR models, the model achieves lower performance; adding the feature distillation connections to the main chain of residual blocks, yielding the FMDRB, improves it.

    The distillation structure is useful not only inside the enhanced distillation block but also outside it. Comparing R6 (without the CCA layer) against the variant with CCA, the model using CCA performs better, which verifies that the CCA layer maximizes the performance of the FMDRB.

    We place the contrast-aware channel attention block at the tail of the proposed FMDRB, which maximizes the capability of the proposed module. To prove the effectiveness of this attention module, we compared it with other attention blocks, such as CA and IIA. As shown in Table 2, the results of the attention-block ablation study show that CCA is effective and handles intermediate features best.

    Table 2.  Effects of different attention blocks.
    Attention block w/o CA IIA CCA
    PSNR 31.97 31.98 32.12 32.46
    SSIM 0.8767 0.8771 0.8778 0.8761


    The proposed MDRN inherits the advantages of the residual network and combines them with those of the feature distillation network. To demonstrate the performance of MDRN, we compared our model with popular state-of-the-art SR models, including the NTIRE2017 winner EDSR [10], RCAN [11], the large-scale SAN [13], HAN [14], the novel IGAN [15], RFDN [23], and the recent DIPNet [24]. Since most SR SOTA models are evaluated on DIV2K, which consists of 3-channel natural images, performance numbers cannot be taken directly from the cited papers; all methods were therefore re-tested on the MRI-brain dataset, which consists of single-channel clinical images.

    Table 3 presents the comparison of quantitative results for 2×, 4×, and 8× SR. Our MDRN outperforms existing methods on the MRI-brain test sets at all scales. Without tricks such as self-ensemble, the proposed MDRN still achieves significant improvements over recent advanced methods. It is worth noting that our model is much better than EDSR, which shares a similar basic architecture with MDRN, and shows some superiority over RFDN, which uses the same feature distillation strategy as MDRN. MDRN also outperforms methods such as SAN and IGAN, which have more computationally intensive attention modules. Specifically, MDRN obtains a 1.82 dB improvement in PSNR over the base EDSR at 4× scale, and its SSIM surpasses previous methods. MDRN gains up to 0.44 dB in PSNR over DIPNet.

    Table 3.  Comparison of quantitative results with state-of-the-art SR methods on the Brain Vestibular-Schwannoma dataset at 2×, 4×, and 8× scales. The best and second-best performances are in red and blue, respectively.
    Method  Memory [M]  Time (ms)  Scale 2 PSNR/SSIM  Scale 4 PSNR/SSIM  Scale 8 PSNR/SSIM
    Bicubic -- -- 33.66/0.9299 28.44/0.8159 24.40/0.6580
    EDSR [10] 2192.74 72.36 34.98*/0.9025* 30.64*/0.8697* 26.17*/0.7513*
    RCAN [11] 2355.20 498.26 38.27*/0.9614* 31.65**/0.9019* 26.21*/0.7778*
    SAN [13] 5017.60 805.23 34.85*/0.9318* 31.09*/0.8432* 25.39*/0.7359*
    IGAN [15] 2099.20 335.77 33.91*/0.9173* 31.73*/0.8744* 26.32*/0.7804*
    HAN [14] 5038.98 719.07 34.97*/0.9576* 31.03*/0.8424* 25.66*/0.7612*
    RFDN [23] 813.06 49.51 38.31**/0.9620* 31.98*/0.8795* 26.28*/0.7794*
    DIPNet [24] 521.02 28.79 38.27**/0.9614* 32.02**/0.8712* 26.33*/0.7884*
    Ours 325.21 27.88 39.19/0.9686 32.46/0.8761 26.47/0.8696
    *p < 0.05, **p < 0.001


    The efficiency of an SR model can be assessed through various metrics, such as the number of parameters, runtime, computational complexity (FLOPs), and GPU memory consumption, each of which matters for a different deployment scenario. Among these metrics, runtime is the most direct indicator of a network's efficiency and is used as the primary evaluation metric. Memory consumption is also important because it determines whether the model can be deployed on an edge device: in a clinical setting, the SR MRI model must run on a small GPU, and models requiring large-memory GPUs will not work as intended. As shown in Table 3, our MDRN achieves the best PSNR (over 32 dB at 4×) while using only 325.21 M of GPU memory and 27.88 ms of runtime, showing a competitive advantage over other methods. To assess the statistical significance of the experimental results, we calculated the P values reported in Table 3, treating the per-epoch results as collections of random variables.

    Table 4.  Comparison of quantitative results on other datasets.

    Method  BraTS-Gli PSNR/SSIM  BraTS-Meni PSNR/SSIM
    Bicubic 32.94/0.9099 30.25/0.8689
    EDSR [10] 36.35*/0.9610* 33.33*/0.9196*
    RCAN [11] 36.94**/0.9513* 33.86*/0.9160*
    SAN [13] 37.06*/0.9514* 34.02*/0.9191*
    IGAN [15] 37.09*/0.9620* 34.13*/0.9217*
    HAN [14] 37.33*/0.9521* 33.83*/0.9197*
    RFDN [23] 38.17**/0.9600** 34.08**/0.9214*
    DIPNet [24] 38.38**/0.9623* 34.17*/0.9218*
    Ours 38.92/0.9635 34.25/0.9225
    *p < 0.05, **p < 0.001


    For a more intuitive demonstration of the gap between these methods, we show zoomed comparisons of their results. As shown in Figure 3, we randomly selected some results from the test set for evaluation. Taking "img_050112" as an example, most SR methods can reconstruct the general composition, but only IGAN and MDRN recover the finer textures and sharper edges. In the zoomed details of "img_05011", IGAN, SAN, and RFDN do not clearly restore the small vessels, while our MDRN obviously does (red arrows). Additionally, in "img_05024", MDRN is closer to the ground truth, recovering the cerebrospinal fluid without generating blurring artifacts (yellow arrows). Our MDRN outputs more high-frequency information, such as enhanced contrast edges, than other methods. These visual results verify that MDRN surpasses previous works in representing complex features and recovering detail.

    Figure 3.  Visual comparison of SR methods in 4× scale on the MRI-brain dataset. Zoomed details for observation. Colored visualization below for better comparison.

    Deep learning-based methods have been proven to work effectively in the domain of medical image processing, including SR reconstruction for MR images. Based on the bottleneck of the SR task, we propose a novel lightweight and fast SR model named MDRN using multi-distillation residual learning.

    Figure 4 provides an overview of the performance and computational efficiency of the proposed method compared with other methods. It is evident that MDRN achieves the best execution time. SAN and HAN, which use self-attention structures, have O(n²) computational complexity, while the other models are O(n). The quadratic complexity in the query/key/value sequence length n leads to high computational costs when using self-attention with a global receptive field. For a precise assessment of the computational complexity of our method, we compare it quantitatively with several representative open-source models, as shown in Table 3. The quantitative results show that our MDRN consumes fewer computational resources while maintaining a PSNR above 32 dB; MDRN thus achieves a better trade-off between performance and cost.

    Figure 4.  Comparison of computation efficiency and performance between our method and other methods.

    We conducted generalization experiments by applying the super-resolution model trained on head and neck magnetic resonance imaging (MRI) images to pelvic CT images, aiming to validate the model's generalization performance on different datasets (Table 5). The results demonstrate that our model achieves a PSNR of 31.4 dB on the pelvic dataset at a 4× magnification factor. This outcome indicates that our MDRN exhibits favorable generalization performance and is capable of completing super-resolution tasks on new datasets. Visual quality is shown in Figure 5.

    Table 5.  Generalization analysis on pelvic CT images.
    Scale
    PSNR 36.55 32.35 27.79
    SSIM 0.8882 0.8938 0.8928

     | Show Table
    DownLoad: CSV
    Figure 5.  Visual quality of SR results on pelvic CT images for generalization study.

    In this paper, we propose the MDRN, a lightweight CNN model, for efficient and fast super-resolution MRI tasks using the innovative multi-distillation strategy. Our findings show remarkable superiority of MDRN over current SR methods, supported by both quantitative metrics and visual evidence. Notably, MDRN excels at learning discriminative features and striking a better balance between computational efficiency and reconstruction performance by integrating the feature distillation mechanism into the network architecture. Extensive evaluations conducted on an MRI-brain dataset underline the favorable performance of MDRN over existing methods in both computational cost and accuracy for medical scenarios.

    We declare that we have not used generative AI tools to generate the scientific writing of this paper.

    We declare that we have no known financial interests or personal relationships that could have appeared to influence the work reported in this paper. There is no professional or other personal interest of any kind in any product, service or company that could influence the work reported in this paper.



    [20] X. Jin, J. Zhang, J. Kong, T. Su, Y. Bai, A reversible automatic selection normalization (RASN) deep network for predicting in the smart agriculture system, Agronomy, 12 (2022), 591. https://doi.org/10.3390/agronomy12030591 doi: 10.3390/agronomy12030591
    [21] M. Caputo, Linear models of dissipation whose Q is almost frequency independent—II, Geophys. J. Int., 13 (1967), 529–539. https://doi.org/10.1111/j.1365-246X.1967.tb02303.x doi: 10.1111/j.1365-246X.1967.tb02303.x
    [22] D. C. López C, G. Wozny, A. Flores-Tlacuahuac, R. Vasquez-Medrano, V. M. Zavala, A computational framework for identifiability and ill-conditioning analysis of lithium-ion battery models, Ind. Eng. Chem. Res., 55 (2016), 3026–3042. https://doi.org/10.1021/acs.iecr.5b03910 doi: 10.1021/acs.iecr.5b03910
    [23] S. R. Pope, L. M. Ellwein, Ch. L. Zapata, V. Novak, C. T. Kelley, M. S. Olufsen, Estimation and identification of parameters in a lumped cerebrovascular model, Math. Biosci. Eng., 6 (2009), 93–115. https://doi.org/10.3934/mbe.2009.6.93 doi: 10.3934/mbe.2009.6.93
    [24] M. S. Olufsen, J. T. Ottesen, A practical approach to parameter estimation applied to model predicting heart rate regulation, J. Math. Biol., 67 (2013), 39–68. https://doi.org/10.1007/s00285-012-0535-8 doi: 10.1007/s00285-012-0535-8
    [25] M. Yavuz, F.Ö. Coșar, F. Günay, F. N. Özdemir, A new mathematical modeling of the COVID-19 pandemic including the vaccination campaign, OJMSi, 9 (2021), 299–321. https://doi.org/10.4236/ojmsi.2021.93020 doi: 10.4236/ojmsi.2021.93020
    [26] B. Eastman, C. Meaney, M. Przedborski, M. Kohandel, Modeling the impact of a public response on the COVID-19 pandemic in Ontario, PLoS One, 15 (2020), e0249456. https://doi.org/10.1371/journal.pone.0249456 doi: 10.1371/journal.pone.0249456
    [27] I. Podlubny, Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications, Elsevier, 198 (1998).
    [28] O. P. Agrawal, Fractional variational calculus in terms of Riesz fractional derivatives, J. Phys. A Math. Theor., 40 (2007), 6287. https://doi.org/10.1088/1751-8113/40/24/003 doi: 10.1088/1751-8113/40/24/003
    [29] M. Ahmadinia, Z. Safari, S. Fouladi, Analysis of local discontinuous Galerkin method for time–space fractional convection–diffusion equations, BIT Numer. Math, 58 (2018), 533–554. https://doi.org/10.1007/s10543-018-0697-x doi: 10.1007/s10543-018-0697-x
    [30] S. Fouladi, M. S. Dahaghin, Numerical investigation of the variable-order fractional Sobolev equation with non-singular Mittag–Leffler kernel by finite difference and local discontinuous Galerkin methods, Chaos Soliton. Fract., 157 (2022), 111915. https://doi.org/10.1016/j.chaos.2022.111915 doi: 10.1016/j.chaos.2022.111915
    [31] P. A. Naik, K. M. Owolabi, M. Yavuz, J. Zu, Chaotic dynamics of a fractional order HIV-1 model involving AIDS-related cancer cells, Chaos Soliton. Fract., 140 (2020), 110272. https://doi.org/10.1016/j.chaos.2020.110272 doi: 10.1016/j.chaos.2020.110272
    [32] M. A. Khan, S. Ullah, S. Ullah, M. Farhan, Fractional order SEIR model with generalized incidence rate, AIMS Math., 5 (2020), 2843–2857. https://doi.org/10.3934/math.2020182 doi: 10.3934/math.2020182
    [33] K. N. Nabi, H. Abboubakar, P. Kumar, Forecasting of COVID-19 pandemic: From integer derivatives to fractional derivatives, Chaos Soliton. Fract., 141 (2020), 110283. https://doi.org/10.1016/j.chaos.2020.110283 doi: 10.1016/j.chaos.2020.110283
    [34] A. Zeb, P. Kumar, V. S. Erturk, T. Sitthiwirattham, A new study on two different vaccinated fractional-order COVID-19 models via numerical algorithms, J. King Saud Univ. Sci., 34 (2022), 101914. https://doi.org/10.1016/j.jksus.2022.101914 doi: 10.1016/j.jksus.2022.101914
    [35] B. M. Yambiyo, F. Norouzi, G. M. N'Guérékata, A study of an epidemic SIR model via homotopy analysis method in the sense of Caputo-fractional system, in Studies in evolution equations and related topics, (eds. G. M. N'Guérékata and B. Toni), (2021), 51–67. https://doi.org/10.1007/978-3-030-77704-3_4
    [36] P. Kumar, V. S. Erturk, M. Vellappandi, H. Trinh, V. Govindaraj, A study on the maize streak virus epidemic model by using optimized linearization-based predictor-corrector method in Caputo sense, Chaos Soliton. Fract., 158 (2022), 112067. https://doi.org/10.1016/j.chaos.2022.112067 doi: 10.1016/j.chaos.2022.112067
    [37] P. Kumar, V. S. Erturk, H. Almusawa, Mathematical structure of mosaic disease using microbial biostimulants via Caputo and Atangana–Baleanu derivatives, Results Phys., 24 (2021), 104186. https://doi.org/10.1016/j.rinp.2021.104186 doi: 10.1016/j.rinp.2021.104186
    [38] S. Abbas, S. Tyagi, P. Kumar, V. S. Ertürk, S. Momani, Stability and bifurcation analysis of a fractional-order model of cell-to-cell spread of HIV-1 with a discrete time delay, Math. Methods Appl. Sci., 45 (2022), 7081–7095. https://doi.org/10.1002/mma.8226 doi: 10.1002/mma.8226
    [39] Y. Lin, Ch. Xu, Finite difference/spectral approximations for the time-fractional diffusion equation, J. Comput. Phys., 225 (2007), 1533–1552. https://doi.org/10.1016/j.jcp.2007.02.001 doi: 10.1016/j.jcp.2007.02.001
    [40] G. H. Gao, Z. Z. Sun, H. W. Zhang, A new fractional numerical differentiation formula to approximate the Caputo fractional derivative and its applications, J. Comput. Phys., 259 (2014), 33–50. https://doi.org/10.1016/j.jcp.2013.11.017 doi: 10.1016/j.jcp.2013.11.017
    [41] S. A. Lauer, K. H. Grantz, Q. Bi, F. K. Jones, Q. Zheng, H. R. Meredith, et al., The incubation period of coronavirus disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application, Ann. Intern. Med., 172 (2020), 577–582. https://doi.org/10.7326/M20-0504 doi: 10.7326/M20-0504
    [42] X. Bai, H. Rui, An efficient FDTD algorithm for 2D/3D time fractional Maxwell's system, Appl. Math. Lett., 116 (2021), 106992. https://doi.org/10.1016/j.aml.2020.106992 doi: 10.1016/j.aml.2020.106992
    [43] X. Bai, S. Wang, H. Rui, Numerical analysis of finite-difference time-domain method for 2D/3D Maxwell's equations in a Cole-Cole dispersive medium, Comput. Math. with Appl., 93 (2021), 230–252. https://doi.org/10.1016/j.camwa.2021.04.015 doi: 10.1016/j.camwa.2021.04.015
    [44] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley Longman, 1989.
    [45] H. Miao, X. Xia, A. S. Perelson, H. Wu, On identifiability of nonlinear ODE models and applications in viral dynamics, SIREV, 53 (2011), 3–39. https://doi.org/10.1137/090757009 doi: 10.1137/090757009
    [46] R. Brady, Mathematical modeling of the acute inflammatory response & cardiovascular dynamics in young men, Ph.D. Thesis, (2017). http://www.lib.ncsu.edu/resolver/1840.20/34823
    [47] C. Piazzola, L. Tamellini, R. Tempone, A note on tools for prediction under uncertainty and identifiability of SIR-like dynamical systems for epidemiology, Math. Biosci., 332 (2021), 108514. https://doi.org/10.1016/j.mbs.2020.108514 doi: 10.1016/j.mbs.2020.108514
    [48] K. Rajagopal, N. Hasanzadeh, F. Parastesh, I. I. Hamarash, S. Jafari, I. Hussain, A fractional-order model for the novel coronavirus (COVID-19) outbreak, Nonlinear Dyn., 101 (2020), 711–718. https://doi.org/10.1007/s11071-020-05757-6 doi: 10.1007/s11071-020-05757-6
    [49] M. A. Khan, M. Ismail, S. Ullah, M. Farhan, Fractional order SIR model with generalized incidence rate, AIMS Math., 5 (2020), 1856–1880. https://doi.org/10.3934/math.2020124 doi: 10.3934/math.2020124
    [50] L. M. A. Bettencourt, R. M. Ribeiro, Real time bayesian estimation of the epidemic potential of emerging infectious diseases, PLoS One, 3 (2008), e2185. https://doi.org/10.1371/journal.pone.0002185 doi: 10.1371/journal.pone.0002185
    [51] H. Nishiura, N. M. Linton, A. R. Akhmetzhanov, Serial interval of novel coronavirus (COVID-19) infections, Int. J. Infect. Dis., 93 (2020), 284–286. https://doi.org/10.1016/j.ijid.2020.02.060 doi: 10.1016/j.ijid.2020.02.060
© 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)